Applying logistic regression to the MNIST database using Python

Let D be a 60000x784 array, where D[i] represents one image of a handwritten digit and D[i][j] is the intensity of the corresponding pixel. Each intensity is a real number between 0 and 1.

Let T be a 60000x1 array, where for simplicity T[i] is either 0 or 1. The real database stores digits from 0 to 9, but let's not go that far and assume the entire database contains only zeros and ones.

So, we have a training set consisting of the inputs given by D and the corresponding targets given by T.
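To experiment without the real dataset, a toy stand-in for D and T with the same conventions can be generated (100 rows instead of 60000; the random data is purely illustrative, not the MNIST loader):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the arrays described above: each row of D is one
# "image" of 784 pixel intensities in [0, 1]; T holds binary targets.
D = rng.random((100, 784))
T = rng.integers(0, 2, size=(100, 1)).astype(float)

print(D.shape, T.shape)  # -> (100, 784) (100, 1)
```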

We would like to implement a logistic regression model whose error function is the negative log-likelihood.

The error function is the negative log-likelihood (the Ein in the code below):

J(θ) = -Σ_n [ T_n ln(y_n) + (1 - T_n) ln(1 - y_n) ],  where y_n = σ(D[n] · θ) and σ(s) = 1 / (1 + exp(-s)) is the sigmoid.

Here θ is the weight vector (of size 1x784); we want to find the θ that minimizes J(θ).

The gradient of J with respect to θ is ∇J(θ) = -D^T (T - σ(Dθ)).
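The analytic gradient used here, ∇J(θ) = -D^T (T - σ(Dθ)), can be sanity-checked against a central finite-difference estimate on tiny random data (all shapes and values below are made up for illustration):

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid

rng = np.random.default_rng(1)
D = rng.random((5, 3))                         # 5 tiny "images", 3 "pixels"
T = rng.integers(0, 2, size=(5, 1)).astype(float)
theta = rng.normal(size=(3, 1))

def J(th):
    """Negative log-likelihood, as in the question."""
    y = expit(D @ th)
    return -np.sum(T * np.log(y) + (1 - T) * np.log(1 - y))

grad = -D.T @ (T - expit(D @ theta))           # analytic gradient

# Central finite differences, one coordinate at a time.
eps = 1e-6
num = np.zeros_like(theta)
for j in range(theta.size):
    e = np.zeros_like(theta)
    e[j, 0] = eps
    num[j, 0] = (J(theta + e) - J(theta - e)) / (2 * eps)

print(np.allclose(grad, num, atol=1e-4))       # -> True
```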

I initialize θ with zeros and then run gradient descent.

Here is my code.

The sigmoid, using scipy's expit:

import numpy as np
from scipy.special import expit  # expit is used below but was never imported

def sigmoid(self, s):
    return expit(s)
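A note on expit: it is scipy's numerically stable sigmoid, which is why it is preferable to writing 1 / (1 + np.exp(-s)) by hand; the extreme inputs below are chosen only for illustration.

```python
import numpy as np
from scipy.special import expit

s = np.array([-1000.0, 0.0, 1000.0])
y = expit(s)   # no overflow warning, unlike a naive 1 / (1 + np.exp(-s))
print(y)       # -> [0.  0.5 1. ]
```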

The function that computes J(θ) and its gradient with respect to θ:

def getInfo(self, D, T, theta):
    afterSigmoid = self.sigmoid(D.dot(theta))
    # negative log-likelihood (uses the parameter T, not an undefined `target`)
    Ein = -np.sum(T*np.log(afterSigmoid) +
                  (1-T)*np.log(1-afterSigmoid))
    gradient = -D.T.dot(T - afterSigmoid)
    return (Ein, gradient)

def log_grad(self, data, target, theta):
    iterations = 1000
    it = 0
    step = 0.01
    while it <= iterations:
        (Ein, vt) = self.getInfo(data, target, theta)
        print(Ein, it)
        theta = theta - step*vt
        it = it + 1
    return theta

The output:

nan 1
nan 2
nan 3
nan 4
nan 5
nan 6
nan 7
nan 8
nan 9
nan 10
nan 11
nan 12
nan 13
nan 14
nan 15
nan 16
...

Please tell me what I am doing wrong; I am new to Python.
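One common way such nans arise (whether it is the cause here is an assumption, not a diagnosis from the source): once the activations D·θ grow large, the float64 sigmoid saturates to exactly 0.0 or 1.0, and the cross-entropy then evaluates 0 * log(0):

```python
import numpy as np
from scipy.special import expit

s = np.array([50.0])        # a large activation, chosen to force saturation
y = expit(s)
print(y)                    # -> [1.]  (saturated exactly to 1.0 in float64)
logval = np.log(1 - y)      # log(0): warns and yields -inf
print(logval)               # -> [-inf]
print(0 * logval)           # -> [nan]  (0 * -inf, as in the loss term for T=1)
```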


Source: https://habr.com/ru/post/1607445/

