Here is an annotated code snippet for simple batch gradient descent for logistic regression. To introduce regularization, you'll want to update the cost and gradient equations (see the sketch after the code). In this code, theta is the parameter (weight) vector, X is the matrix of features (one row per example), y is the vector of class labels, and alpha is the learning rate.
Hope this helps :)
function [theta, J_store] = logistic_gradientDescent(theta, X, y, alpha, numIterations)
    % Initialize some useful values
    m = length(y);                     % number of training examples
    n = size(X, 2);                    % number of features
    J_store = zeros(numIterations, 1); % cost history, one entry per iteration

    for iter = 1:numIterations
        % Predict the class probabilities using the current weights (theta)
        Z = X * theta;
        h = sigmoid(Z);

        % This is the normal (unregularised) cost function
        J = (1/m) .* sum(-y .* log(h) - (1 - y) .* log(1 - h));
        J_store(iter) = J;

        % Gradient of the cost with respect to theta, without regularisation
        grad = [(1/m) .* sum(repmat((h - y), 1, n) .* X)]';
        theta = theta - alpha .* grad;
    end
end
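If you want L2 (ridge) regularisation, here is a minimal sketch of the modified cost and gradient. It assumes an extra regularisation-strength parameter lambda (not in the original snippet) and follows the usual convention of not penalising the bias term theta(1); the sigmoid is written out inline so the sketch stands on its own.

function [theta, J_store] = logistic_gradientDescent_reg(theta, X, y, alpha, lambda, numIterations)
    % Sketch of the same loop with L2 regularisation added.
    % lambda is the regularisation strength (an assumed extra parameter);
    % theta(1) is treated as the bias/intercept and is not penalised.
    m = length(y);
    J_store = zeros(numIterations, 1);

    for iter = 1:numIterations
        h = 1 ./ (1 + exp(-(X * theta)));   % sigmoid, written inline

        % Regularised cost: normal cost plus (lambda/(2*m)) * sum(theta(2:end).^2)
        J = (1/m) .* sum(-y .* log(h) - (1 - y) .* log(1 - h)) ...
            + (lambda / (2*m)) .* sum(theta(2:end).^2);
        J_store(iter) = J;

        % Regularised gradient: add (lambda/m)*theta to every entry except the bias
        grad = (1/m) .* (X' * (h - y));
        grad(2:end) = grad(2:end) + (lambda / m) .* theta(2:end);

        theta = theta - alpha .* grad;
    end
end

With lambda = 0 this reduces to the unregularised version above; it assumes the first column of X is a column of ones so that theta(1) acts as the intercept.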