I'm taking a math course on coding theory, and we have to do a term project.
My project is a client paired with an echo server: the client shows which errors were introduced on the round trip from the client to the server and back, and I test different error-correction schemes for efficiency and for how well they suit this task.
The coding itself isn't really the problem; I already have something that detects errors, requests a retransmission when it can't correct them, and so on.
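For context, the detect-and-retransmit part is roughly this shape (simplified sketch; CRC-32 here just stands in for whatever code my scheme actually uses):

```python
import zlib

def make_frame(payload: bytes) -> bytes:
    # Append a CRC-32 so the receiver can detect corruption.
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def check_frame(frame: bytes):
    # Return the payload if the CRC matches, or None to signal
    # that the caller should request a retransmission.
    payload, received_crc = frame[:-4], int.from_bytes(frame[-4:], "big")
    if zlib.crc32(payload) == received_crc:
        return payload
    return None
```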
The problem is that so far, to get any bit errors at all, I have to introduce them artificially, since the lower data-transfer layers have their own error-detection and correction protocols.
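Concretely, "artificially" means the server flips bits in the payload itself before echoing it back, something like the following (the flip probability is just a placeholder, effectively a binary symmetric channel applied at the application layer):

```python
import random

def flip_bits(data: bytes, p: float = 0.001) -> bytes:
    # Independently flip each bit with probability p.
    out = bytearray(data)
    for i in range(len(out)):
        for bit in range(8):
            if random.random() < p:
                out[i] ^= 1 << bit
    return bytes(out)
```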
My question is: is there a way around this?
I have no idea how I will do this or even where to start.
Also, I know there are protocols I can't mess with, so errors will always be fixed in the background at those levels. But I would like to be able to pretend that one of those layers isn't doing its checking, so that my application gets the chance to play that role itself.
If this isn't possible, what are some good methods for modeling the errors that occur during transmission? I couldn't find statistics on the error distribution even for a simple example channel. Given those, I could stick with my current approach of having the server introduce the errors into the message.
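For example, if burst errors are a reasonable assumption, I imagine replacing the uniform flips above with a simplified two-state Gilbert-Elliott-style model, something like the sketch below. All the transition probabilities are made up, since I don't have real figures, which is exactly the kind of data I'm looking for:

```python
import random

def burst_flip_bits(data: bytes, p_gb: float = 0.01, p_bg: float = 0.3,
                    p_err_bad: float = 0.2) -> bytes:
    # Two-state burst model: a "good" state that is error-free and a
    # "bad" state that flips bits with probability p_err_bad. Per bit,
    # the chain moves good->bad with p_gb and bad->good with p_bg.
    out = bytearray(data)
    bad = False
    for i in range(len(out)):
        for bit in range(8):
            bad = (random.random() < p_gb) if not bad else (random.random() >= p_bg)
            if bad and random.random() < p_err_bad:
                out[i] ^= 1 << bit
    return bytes(out)
```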