Topic: Kernel density estimator in W-NaiveBayes
fosgene
« on: February 20, 2009, 09:57:42 AM »

Hi,

Does anybody know why W-NaiveBayes does not include the kernel density estimator option, unlike its counterpart in Weka? This option is only available in W-NaiveBayesUpdateable. The same issue applies to RapidMiner's built-in NaiveBayes.

Thank you!

Matteo
Sebastian Land
« Reply #1 on: February 20, 2009, 08:28:10 PM »

Hi Matteo,
it might be that the version of Weka delivered with RapidMiner is not the newest one. You could exchange the weka.jar in the lib directory manually.
Our own implementation of NaiveBayes does not support kernel density estimation at this point in time. Perhaps we will re-add the feature in the future, but there were some unresolved problems with the performance/accuracy trade-off for huge data sets in the old implementation.
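
For illustration only, here is a minimal sketch of what kernel density estimation for a numeric attribute means in a Naive Bayes context: instead of fitting a single Gaussian per class, a small Gaussian kernel is placed on every training value. The class name and the bandwidth heuristic below are assumptions for the sketch, not Weka's or RapidMiner's actual code:

// Minimal sketch (not Weka's or RapidMiner's code): estimating the class-conditional
// density p(x | class) of a numeric attribute with a Gaussian kernel centred on every
// training value, instead of fitting a single Gaussian. The bandwidth heuristic is
// an assumption for illustration.
public class KernelDensity {
    private final double[] values;   // attribute values observed for one class
    private final double bandwidth;  // kernel width (standard deviation of each kernel)

    public KernelDensity(double[] values) {
        this.values = values.clone();
        double min = Double.POSITIVE_INFINITY, max = Double.NEGATIVE_INFINITY;
        for (double v : values) { min = Math.min(min, v); max = Math.max(max, v); }
        // simple bandwidth heuristic (assumed): shrink the kernel width as data grows
        this.bandwidth = Math.max((max - min) / Math.sqrt(values.length), 1e-6);
    }

    /** Density estimate: average of the Gaussian kernels centred on the training values. */
    public double density(double x) {
        double sum = 0.0;
        for (double v : values) {
            double z = (x - v) / bandwidth;
            sum += Math.exp(-0.5 * z * z) / (bandwidth * Math.sqrt(2 * Math.PI));
        }
        return sum / values.length;
    }
}

Because every density query sums over all stored training values, prediction cost grows with the size of the training set, which is where the performance/accuracy trade-off for huge data sets comes from.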

Greetings,
  Sebastian
fosgene
« Reply #2 on: February 24, 2009, 04:01:18 PM »

Quote from Sebastian Land:
"You could exchange the weka.jar in the lib directory manually."

Do you mean copying and pasting the weka.jar from the zip file I can download from the Weka website? I have already copied weka.jar (version 3.5.8 - its NaiveBayes already has the kernel estimator option) into the lib directory, but RapidMiner still does not show me the new option.
Sebastian Land
« Reply #3 on: February 24, 2009, 06:55:43 PM »

Hmm,
that seems strange. I will check it and get back to you.

Greetings,
  Sebastian
fosgene
« Reply #4 on: February 26, 2009, 12:59:47 PM »

Leaving aside the kernel density estimator issue for a while, I tried W-NaiveBayesUpdateable, but I can't understand how it works compared to the ordinary NaiveBayes.
First of all, is the term "Updateable" a designation for an on-line algorithm? If so, does it mean that NaiveBayesUpdateable processes one instance at a time, evaluates it, and adds its result to those of all past instances?
Second, the attribute precision: how does it work? The help only says that "This classifier will use a default precision of 0.1 for numeric attributes when buildClassifier is called with zero training instances", but there is no explanation of its use.

Thanks for your help!

Matteo
Sebastian Land
« Reply #5 on: March 08, 2009, 03:09:22 PM »

Hi Matteo,
I don't know this learner's algorithm in detail, since it is one of the Weka learners (as indicated by the "W-" prefix). But I assume that it recalculates the mean and variance of all attributes incrementally whenever another example is added.
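
For illustration only, a minimal sketch of what such an incremental update could look like (Welford's online algorithm for a running mean and variance; this is an assumption about the general idea, not Weka's actual NaiveBayesUpdateable code):

// Minimal sketch (an assumption, not Weka's code): updating the running mean and
// variance of one numeric attribute for one class each time a new example arrives,
// without revisiting the earlier examples.
public class RunningGaussian {
    private long n = 0;       // number of examples seen so far
    private double mean = 0;  // running mean
    private double m2 = 0;    // sum of squared deviations from the current mean

    /** Incorporate one new attribute value. */
    public void update(double x) {
        n++;
        double delta = x - mean;
        mean += delta / n;
        m2 += delta * (x - mean);
    }

    public double mean()     { return mean; }
    public double variance() { return n > 1 ? m2 / (n - 1) : 0.0; }
}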

Greetings,
  Sebastian