Author Topic: Neuralnet-model not working correctly ?  (Read 1814 times)
VD
« on: March 13, 2009, 12:42:33 PM »

Hello,

I'm getting strange results with the RapidMiner NeuralNet model, while the
Weka model (W-MultilayerPerceptron) predicts the label correctly. I suspect
that the NeuralNet model uses no threshold (bias).

Here are the data for testing (and the models below):

Data
in A   in B   out C (label)
0.142   0.127   1.5
0.118   0.101   1.5
0.145   0.144   0
0.127   0.111   1.5
0.195   0.189   0.5
0.172   0.172   0
0.119   0.111   0.7
0.119   0.111   0.7
0.147   0.132   1.5
0.148   0.147   0
0.194   0.189   0.5
0.192   0.189   0.5
0.142   0.13    1.5
0.146   0.131   1.5
0.233   0.23    0.5
0.165   0.165   0
0.165   0.165   0
0.255   0.252   0.5
0.265   0.26    0.5
0.238   0.234   0.5
0.250   0.244   0.5
0.254   0.251   0.5
0.238   0.234   0.5
0.249   0.245   0.5
0.254   0.25    0.5
0.238   0.234   0.5
0.252   0.246   0.5
0.238   0.233   0.5
0.252   0.246   0.5
0.239   0.233   0.5
0.252   0.246   0.5



ANN with RapidMiner-Model (Neuralnet)

<operator name="Root" class="Process" expanded="yes">
  <operator name="ExampleSource" class="ExampleSource">
    <parameter key="attributes" value="C:\Documents and Settings\drv2fe\Desktop\ANN_testdata.aml"/>
  </operator>
  <operator name="NeuralNet" class="NeuralNet">
    <parameter key="default_hidden_layer_size" value="4"/>
    <parameter key="keep_example_set" value="true"/>
    <parameter key="training_cycles" value="500"/>
  </operator>
  <operator name="ModelApplier" class="ModelApplier">
    <parameter key="keep_model" value="true"/>
  </operator>
</operator>


ANN with Weka-Model (W-MultilayerPerceptron)

<operator name="Root" class="Process" expanded="yes">
  <operator name="ExampleSource" class="ExampleSource">
    <parameter key="attributes" value="C:\Documents and Settings\drv2fe\Desktop\ANN_testdata.aml"/>
  </operator>
  <operator name="W-MultilayerPerceptron" class="W-MultilayerPerceptron">
    <parameter key="H" value="4"/>
    <parameter key="keep_example_set" value="true"/>
  </operator>
  <operator name="ModelApplier" class="ModelApplier">
    <parameter key="keep_model" value="true"/>
  </operator>
</operator>

Best Regards

Volker
VD
« Reply #1 on: March 13, 2009, 12:44:33 PM »

... sorry - I wanted to send my previous mail to "Problems and Support"

Volker
Ingo Mierswa
« Reply #2 on: March 13, 2009, 03:30:42 PM »

Hi,

Did you change the output layer to linear? A sigmoid output layer does not make much sense here, since it cannot produce values outside (0, 1), while your label goes up to 1.5. Furthermore, a larger learning rate and momentum as well as more training cycles definitely help. I am not a neural network expert myself, so maybe there are other tuning options. However, the bias seems to be there, so I do not think that is the problem here.
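Applied to the original process, that tuning might look like the sketch below; note that the parameter keys learning_rate and momentum are my assumption, so check the NeuralNet operator's parameter list in your RapidMiner version:

```xml
<operator name="NeuralNet" class="NeuralNet">
  <parameter key="default_hidden_layer_size" value="4"/>
  <parameter key="keep_example_set" value="true"/>
  <!-- assumed parameter keys; verify against the operator's parameter list -->
  <parameter key="learning_rate" value="0.7"/>
  <parameter key="momentum" value="0.7"/>
  <parameter key="training_cycles" value="2000"/>
</operator>
```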

By the way, we have just updated to a new version of Joone, the library we use for the internal neural network calculations, and we also adapted some default settings of the neural network learner. As a result, on some data sets the results are much better than before; on others nothing changed. The same applies to the comparison with the Weka multilayer perceptron. And that is the great thing, of course: just try all the options and choose whichever fits your data best.

Cheers,
Ingo

Did you try our new Marketplace? Upload or download new Extensions, add comments, and organize your operators. Have a look at  http://marketplace.rapid-i.com
earmijo


« Reply #3 on: March 21, 2009, 09:53:43 PM »

I'm also having problems with the NeuralNet operator. I have actually gone back to RapidMiner 4.3, where this problem was not present. The results I'm getting are completely off in version 4.4 (perhaps earlier, too; any version > 4.3.00). Even when comparing NeuralNet and NeuralNetSimple with exactly the same settings, the results for NeuralNet are always much worse.
michael hecht
« Reply #4 on: March 23, 2009, 10:53:09 AM »

This was also the first thing I noticed with the new RM 4.4.

On my computer the memory fills up to the limit and I get no result. A few clicks later RM crashed.
With version 4.3 there was no problem, and the RM NN was much faster than the Weka version.
Since a coloured visualisation of the net weights had been announced, I was a little disappointed.

Meanwhile I have also switched back to RM 4.3, since switching to Results mode after running a
workflow also seems to be extremely slow now, much slower than before. Possibly there are more bugs than the one in Neural-Nets.

Unfortunately the 4.3 version has the really serious bug in polynomial regression, so neither
RM version works with the models I need most frequently, i.e. NN and polynomial regression.
Ingo Mierswa
« Reply #5 on: March 23, 2009, 01:22:56 PM »

Hello,

I never thought that so many people actually still use neural networks today, so I am sorry for the inconvenience those changes have caused. Please be assured that we checked all changes on eight different data sets and did not notice the described problems, but rather an increase in predictive power. By the way, I always had the feeling that neural networks are a bit outdated by now, but anyway ... if they are the best solution for your problems, we should of course support them.

Here are the changes within the neural net part of RapidMiner for those of you who consider a (partial) change back to the neural net of version 4.3:

* For the NeuralNet operator:
   - the underlying library Joone was updated to the latest version. This could have caused memory and runtime issues, although we did not notice anything like this ourselves.
   - the default parameters for learning rate and momentum were changed to the defaults stated by Joone (now 0.7 and 0.7; previously 0.3 and 0.2, as far as I remember)
   - the number of training cycles was increased to 1000 (was 500; this definitely increased the runtime)
   - the output layer type is now determined automatically (sigmoid for classification, linear for regression).

* The new NeuralNetSimple operator (did you try it?) is a simpler and, for the same number of training cycles, also faster version of the NeuralNet operator. It also uses a threshold for each hidden layer. Our experience with it has been brief but good when the number of layers and the layer sizes are appropriate, so you could give it a try.
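In the original process, trying NeuralNetSimple would be a drop-in replacement along these lines; whether it accepts the same default_hidden_layer_size key as NeuralNet is my assumption:

```xml
<operator name="NeuralNetSimple" class="NeuralNetSimple">
  <!-- assumed to accept the same hidden-layer parameter as NeuralNet -->
  <parameter key="default_hidden_layer_size" value="4"/>
  <parameter key="keep_example_set" value="true"/>
</operator>
```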


So I would recommend changing the parameters of the NeuralNet operator back and seeing what happens. If this does not deliver the expected results, you could replace the new Joone library (joone.jar) in the lib directory with the old one from RM 4.3. If this is still not enough, then you actually depend on a different setting for your output layer (which I would consider a bit weird, but anyway...) and you can go back in the CVS history and recompile and integrate the old NeuralNet operator yourself. The first two changes are pretty easy; the last one is not, and I would only recommend it if it is absolutely necessary.
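Changing the parameters back to the pre-4.4 defaults mentioned earlier would look roughly like this (the parameter keys learning_rate and momentum are assumed, not taken from the docs):

```xml
<operator name="NeuralNet" class="NeuralNet">
  <!-- old 4.3 defaults as stated above; parameter keys assumed -->
  <parameter key="learning_rate" value="0.3"/>
  <parameter key="momentum" value="0.2"/>
  <parameter key="training_cycles" value="500"/>
</operator>
```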

Quote
Possibly there are more bugs than the one in Neural-Nets.

There is definitely more than one bug in RM! Software of this complexity is never free of bugs; even where we are not aware of them, I am positive that plenty are included in RM. We fixed more than 30 bugs for version 4.4 and included about 100 new features and optimizations. Of course, it is very likely that new bugs were introduced together with those changes. The only question is: do those bugs affect your work or not? If yes, I can only recommend our Enterprise Editions, where we guarantee to fix them as soon as possible. I am sure you understand that we cannot give away this kind of guarantee and service work for free, as we already do with the software itself.

If the EE is not an option for you, please send as much information as possible ("this no longer works" is not too helpful): a description of the task, the processes, excerpts of the data, etc., so that we can try to find a solution.

Quote
Unfortunately the 4.3 version has the really serious bug in polynomial regression, so neither
RM version works with the models I need most frequently, i.e. NN and polynomial regression.

In that case, consider yourself lucky: just install both versions. Of course it is not nice to switch between versions, but at least you know a possible workaround. And as happened with the PolynomialRegression bug, the same holds for neural networks: if there is a bug, it will be fixed as soon as time allows. If there is no bug but simply a question of finding appropriate settings, fine: this forum is here so that users can help each other find them.

I can only ask all of you to help us fix your neural network problems by sending in as much information as possible, and I am sure we will arrive at a good solution, either in the form of a bugfix or a guide on how best to adapt the settings.

Cheers,
Ingo



