Answer
Yes, you can use tf.nn.dropout for DropConnect. Instead of applying dropout to the layer's outputs, wrap the weight matrix with tf.nn.dropout. Because tf.nn.dropout also scales the surviving entries by 1/keep_prob, multiply by keep_prob afterwards to cancel that scaling, leaving a plain binary mask on the weights:
dropConnect = tf.nn.dropout( m1, keep_prob ) * keep_prob
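For contrast, here is a minimal sketch (same TF 1.x tf.nn.dropout signature as below, with made-up W, x, and keep_prob values) showing regular dropout masking a layer's activations versus DropConnect masking the weight matrix, and why the extra * keep_prob is needed:

import tensorflow as tf

W = tf.Variable( tf.random_uniform( [2,4] ) )   # hypothetical weight matrix
x = tf.constant( [[0.,0.],[1.,1.]] )            # hypothetical inputs
keep_prob = 0.5

# Regular dropout: mask the layer's activations (its outputs).
h_dropout = tf.nn.dropout( tf.sigmoid( tf.matmul( x, W ) ), keep_prob )

# DropConnect: mask individual weights instead. tf.nn.dropout rescales the
# surviving entries by 1/keep_prob, so multiplying by keep_prob afterwards
# restores the original weight values and leaves a pure binary mask on W.
w_masked = tf.nn.dropout( W, keep_prob ) * keep_prob
h_dropconnect = tf.sigmoid( tf.matmul( x, w_masked ) )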
Code example
Here is sample code that learns the XOR function using DropConnect. The equivalent line for regular dropout is included as a comment, so you can plug it in and compare the results.
import tensorflow as tf
x = [[0.,0.],[1.,1.],[1.,0.],[0.,1.]]
y_ = [[1.,0.],[1.,0.],[0.,1.],[0.,1.]]
x0 = tf.constant( x , dtype=tf.float32 )
y0 = tf.constant( y_ , dtype=tf.float32 )
keep_prob = tf.placeholder( dtype=tf.float32 )
m1 = tf.Variable( tf.random_uniform( [2,12] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
b1 = tf.Variable( tf.random_uniform( [12] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
dropConnect = tf.nn.dropout( m1, keep_prob ) * keep_prob
h1 = tf.sigmoid( tf.matmul( x0, dropConnect ) + b1 )
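# To compare with regular dropout (the alternative mentioned above),
# replace the two lines above with something like:
# h1 = tf.nn.dropout( tf.sigmoid( tf.matmul( x0, m1 ) + b1 ) , keep_prob )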
m2 = tf.Variable( tf.random_uniform( [12,2] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
b2 = tf.Variable( tf.random_uniform( [2] , minval=0.1 , maxval=0.9 , dtype=tf.float32 ))
y_out = tf.nn.softmax( tf.matmul( h1,m2 ) + b2 )
loss = tf.reduce_sum( tf.square( y0 - y_out ) )
train = tf.train.AdamOptimizer(1e-2).minimize(loss)
with tf.Session() as sess:
    sess.run( tf.global_variables_initializer() )

    print( "\nloss" )
    for step in range(5000) :
        # train with DropConnect active (half of the weights masked each step)
        sess.run( train, feed_dict={keep_prob: 0.5} )
        if (step + 1) % 100 == 0 :
            # evaluate with keep_prob = 1.0 so no weights are dropped
            print( sess.run( loss, feed_dict={keep_prob: 1.0} ) )

    results = sess.run( [m1, b1, m2, b2, y_out, loss], feed_dict={keep_prob: 1.0} )
    labels = "m1,b1,m2,b2,y_out,loss".split(",")
    for label, result in zip( labels, results ) :
        print( "" )
        print( label )
        print( result )
    print( "" )
Output
Both versions (DropConnect and the regular dropout alternative) learn to map the inputs to the correct outputs.
y_out
[[ 7.05891490e-01 2.94108540e-01]
[ 9.99605477e-01 3.94574134e-04]
[ 4.99370173e-02 9.50062990e-01]
[ 4.39682379e-02 9.56031740e-01]]
Here you can see that the network trained with DropConnect correctly classifies the four inputs as true, true, false, false.