Author Topic: How to avoid out of memory when running FSSs?  (Read 2313 times)
Ingo Mierswa
Administrator
Hero Member
« on: May 23, 2008, 11:32:26 PM »

Original message from SourceForge forum at http://sourceforge.net/forum/forum.php?thread_id=2031544&forum_id=390413

Hi: I have a microarray dataset with 7,079 attributes (a lot of them). When I try to execute any FSS operator to reduce the dimensionality, RapidMiner is not able to run the algorithm; an out-of-memory error message appears. I have 2 MB of RAM and set the virtual memory to 4 MB. Is it possible to execute RapidMiner's search algorithms on such a high-dimensional problem? Any technical suggestions to solve this situation are welcome. Gladys


Edit by Gladys:

Sorry, I meant 2 GB of RAM, not MB. I use Windows XP.


Answer by Ingo Mierswa:

Hi Gladys,
 
A first step for memory problems in feature selection is to reduce the number of individuals, e.g. by using a (1+1) evolutionary algorithm, which keeps only a single individual. This should always work as long as the data itself fits into memory. Other approaches like Forward Selection or Backward Elimination will hardly work on data sets with this many features. You could also apply a feature weighting first and (moderately) filter out features by means of the AttributeWeightSelection operator. Search the forum for this; there have been several discussions on selecting features for high-dimensional data sets.
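To illustrate why the (1+1) scheme is so memory-friendly: only one candidate feature mask (plus one mutated copy) is ever held at a time, so memory grows with the number of attributes, not with a population size or the number of generations. Below is a minimal, hypothetical sketch in plain Python, not RapidMiner's actual implementation; the function names and the toy fitness function are made up for illustration, and in practice the fitness would be a cross-validated model accuracy on the selected features.

```python
import random

def one_plus_one_fs(n_features, fitness, n_iter=200, p_flip=None, seed=0):
    """(1+1) evolutionary feature selection sketch.

    Keeps a single parent bit-mask; each iteration flips each bit with
    probability p_flip and replaces the parent if the child is not worse.
    Memory use is O(n_features), independent of n_iter.
    """
    rng = random.Random(seed)
    if p_flip is None:
        p_flip = 1.0 / n_features  # standard mutation rate of 1/n

    # Single individual: a boolean mask over the attributes.
    parent = [rng.random() < 0.5 for _ in range(n_features)]
    best = fitness(parent)

    for _ in range(n_iter):
        # Mutate: flip each bit independently with probability p_flip.
        child = [bit != (rng.random() < p_flip) for bit in parent]
        f = fitness(child)
        if f >= best:  # elitist acceptance: never get worse
            parent, best = child, f
    return parent, best

# Toy fitness (stand-in for cross-validated accuracy): reward masks that
# select exactly the first 5 of 20 features.
target = [True] * 5 + [False] * 15
toy_fitness = lambda mask: sum(m == t for m, t in zip(mask, target))

mask, score = one_plus_one_fs(20, toy_fitness, n_iter=500)
```

Because the acceptance step is elitist, the best fitness found can only improve over the iterations, which is what makes the single-individual strategy a safe first try when larger populations exhaust the heap.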
 
Cheers,
Ingo
