After failing to complete my own LSTM implementation (for generating music), I decided to study TensorFlow and build an RNN similar to the one Aran Nayebi and Matt Vitelli used for music generation (https://cs224d.stanford.edu/reports/NayebiAran.pdf). For now I'm just trying to get it to predict the sine function; I'll switch to real musical audio once that works.
My network has a small problem: I can't understand why it doesn't give me sensible results when I sample from it during training.
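For reference, this is roughly the kind of setup I mean (a simplified sketch rather than my actual code; it assumes the TensorFlow 1.x graph API, and the window length, LSTM size, and training loop are placeholder values):

```python
import numpy as np
import tensorflow as tf

# Toy data: predict the next sample of a sine wave from a window of past samples.
window = 50
wave = np.sin(np.linspace(0, 100, 10000)).astype(np.float32)
X = np.array([wave[i:i + window] for i in range(len(wave) - window)])[..., None]
y = wave[window:, None]

inputs = tf.placeholder(tf.float32, [None, window, 1])
targets = tf.placeholder(tf.float32, [None, 1])

# One LSTM layer, then a linear readout of the last time step.
cell = tf.nn.rnn_cell.BasicLSTMCell(64)
outputs, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
prediction = tf.layers.dense(outputs[:, -1, :], 1)

loss = tf.losses.mean_squared_error(targets, prediction)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(1000):
        batch = np.random.randint(0, len(X), 64)
        sess.run(train_op, {inputs: X[batch], targets: y[batch]})

    # "Sampling" from the network: feed its own predictions back in.
    seed = X[:1].copy()                  # one starting window, shape (1, window, 1)
    generated = []
    for _ in range(200):
        nxt = sess.run(prediction, {inputs: seed})[0, 0]
        generated.append(nxt)
        seed = np.concatenate([seed[:, 1:, :], [[[nxt]]]], axis=1)
```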
Here is what my network's output looks like:

[sampled network output]