I am working on a neural network system to perform SED fitting as part of a student project at the University of Western Australia.
I generated a set of 20,000 runs through the SED fitting program known as MAGPHYS. Each run has 42 input values and 32 output values that interest us (the program produces more outputs, but we do not need them).
I experimented with the Keras neural network package to create a network to learn this mapping.
My current network design uses 4 fully connected hidden layers with 30 neurons each. Every layer uses the tanh activation function. I also have a 42-dimensional input layer and a 32-dimensional output layer, both also using tanh activation, for a total of 6 layers.
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

model = Sequential()

# First hidden layer: 42 inputs -> 30 units.
model.add(Dense(30, input_dim=42, kernel_initializer='glorot_uniform',
                activation='tanh'))
# Three more 30-unit hidden layers (four hidden layers in total).
for _ in range(3):
    model.add(Dense(30, kernel_initializer='glorot_uniform',
                    activation='tanh'))
# Output layer: the 32 values we want to predict.
model.add(Dense(32, kernel_initializer='glorot_uniform', activation='tanh'))

model.compile(loss='mse',
              optimizer=SGD(lr=0.01, momentum=0.0, nesterov=True))
I use min/max normalisation on my input and output data to squash all values between 0 and 1. I use the stochastic gradient descent optimiser, and I have experimented with various loss functions such as mean squared error, mean absolute error, mean absolute percentage error, etc.
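For reference, the min/max normalisation here is just the standard per-column (x - min) / (max - min) rescaling; a minimal NumPy sketch (the helper names are illustrative, not from my actual code):

```python
import numpy as np

def min_max_normalise(data, lo=None, hi=None):
    """Squash each column of `data` into [0, 1] using per-column min/max.

    Returns the scaled array plus the (lo, hi) bounds so the same
    transform can be applied to new data and inverted later.
    """
    lo = data.min(axis=0) if lo is None else lo
    hi = data.max(axis=0) if hi is None else hi
    return (data - lo) / (hi - lo), lo, hi

def min_max_denormalise(scaled, lo, hi):
    """Invert the transform to recover values in the original units."""
    return scaled * (hi - lo) + lo

# Toy example: 5 samples, 3 features on very different scales.
X = np.array([[1.0, 10.0, 100.0],
              [2.0, 20.0, 200.0],
              [3.0, 30.0, 300.0],
              [4.0, 40.0, 400.0],
              [5.0, 50.0, 500.0]])
X_scaled, lo, hi = min_max_normalise(X)
```

The saved `lo`/`hi` bounds matter: the network's outputs have to be passed back through the inverse transform before comparing them against the raw MAGPHYS values.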
Whatever combination of loss function, optimiser settings, and architecture I try, the result is the same: the loss goes down during training, but the network produces almost the same output for every input.
Here is a sample for one of the 32 outputs (network output vs. correct value):

Output = Correct
9.42609868658 = 9.647
9.26345946681 = 9.487
9.43403506231 = 9.522
9.35685760748 = 9.792
9.20564885211 = 9.287
9.39240577382 = 8.002
As you can see, the network's outputs all fall in the narrow range 9.2-9.4, regardless of what the correct value is.
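The collapse is easy to quantify by comparing the spread of the predictions with the spread of the true values; a sketch using just the six rows shown above:

```python
import numpy as np

# The six (predicted, correct) pairs from the sample above.
predicted = np.array([9.42609868658, 9.26345946681, 9.43403506231,
                      9.35685760748, 9.20564885211, 9.39240577382])
correct = np.array([9.647, 9.487, 9.522, 9.792, 9.287, 8.002])

# The predictions vary far less than the targets do, which is the
# signature of a network that has regressed to (roughly) the mean.
print("std of predictions:", np.std(predicted))
print("std of targets:    ", np.std(correct))
```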
Am I doing something wrong in my network design, training procedure, or data preparation?
Or is a neural network simply the wrong tool for this problem?