Brainmaker

Nanos gigantium humeris insidentes!
You are currently browsing the Methodology category

Kernel Support Vector Machines for Classification and Regression in C#

  • May 13, 2012 1:49 pm

http://crsouza.blogspot.com/2010/04/kernel-support-vector-machines-for.html

Classifier

  • July 31, 2010 12:27 am

Learning and Data Mining

Both data mining and machine learning are techniques for processing large amounts of data.

  • Data mining tries to obtain patterns or models from the collected data.
  • Machine learning is the common foundation shared by the different types of existing classifiers. The basic idea of learning is to use perceptions not only to act, but also to improve the agent's ability to act in the future.

Learning techniques usually fall into the following categories:

Supervised learning

Supervised learning involves learning a function from tagged examples, establishing a correspondence between inputs and outputs: the learning system tries to tag (classify) a set of vectors by choosing one of several categories (classes).
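As a minimal illustration of supervised classification (a sketch that is not in the original post, using the Nearest Neighbor Algorithm listed later in this archive; the data points are made up):

```python
import math

def nearest_neighbor(train, point):
    """Tag `point` with the class of the closest tagged example.
    `train` is a list of (vector, tag) pairs; distance is Euclidean."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    vec, tag = min(train, key=lambda vt: dist(vt[0], point))
    return tag

# Tagged examples: two classes in the plane (illustrative data).
train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"),
         ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(nearest_neighbor(train, (0.2, 0.1)))  # -> A
print(nearest_neighbor(train, (0.8, 0.9)))  # -> B
```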

Unsupervised learning

Unsupervised learning consists of learning from input patterns with no output values specified. The main problem with this technique is how to choose among all the patterns provided. The system treats the input objects as a set of random variables and builds a density model for the data set.
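A minimal sketch of the density-model idea (the 1-D data and the Gaussian model here are illustrative assumptions, not stated in the original):

```python
import math

def fit_gaussian(samples):
    """Estimate mean and variance of unlabeled 1-D samples (no output values)."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return mean, var

def density(x, mean, var):
    """Gaussian density of the fitted model at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

samples = [1.8, 2.0, 2.2, 1.9, 2.1]
mean, var = fit_gaussian(samples)
# Points near the bulk of the data get higher density than points far away.
print(density(2.0, mean, var) > density(3.0, mean, var))  # -> True
```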

Semi-supervised learning

Semi-supervised learning combines the previous two approaches, because in some cases it can be very difficult to tag or classify all the data. The aim is to combine tagged and untagged data to improve modeling, although this is not always helpful; several methods exist for doing so.

Reinforcement Learning

Reinforcement learning is a way of learning by observing the world: the agent acts, observes the consequences (rewards or penalties) of its actions, and uses them to improve its future behavior.

The idea of learning consists of building a function that maps the observed behaviour's inputs to its outputs. Learning methods can be understood as a search through a space of hypotheses to find the appropriate function.

The Methodology

  • July 30, 2010 11:35 pm

Hidden Markov Model
Decision tree learning
Nearest Neighbor Algorithm
Conditional Random Field : a good website
Naive Bayes : a theoretical introduction
Kernel Method
Support Vector Machine
Maximum Entropy : a link

Semi-supervised learning


Inductive Methods

  • July 30, 2010 11:35 pm


To do: collect the summary below into brainmaker

  1. Find the original paper by each method's proposer
  2. Find one or two mature applications
    1. in any direction
    2. restricted to the NLP direction
  3. Find mature implementations
    1. SVM: SVMlight

Ranking:


Support Vector Machine
Naive Bayes : a theoretical introduction
Kernel Method
———————————————–
Hidden Markov Model
Conditional Random Field : a good website
Maximum Entropy : a link
Decision tree learning
Nearest Neighbor Algorithm

Classifier Model References

  • July 19, 2010 11:09 pm

Naive Bayes classifier

http://www.statsoft.com/textbook/naive-bayes-classifier/

http://en.wikipedia.org/wiki/Naive_Bayes_classifier#The_naive_Bayes_probabilistic_model
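Drawing on the probabilistic model described at the links above, here is a toy sketch (the categorical features, the add-one smoothing over observed values, and the weather data are illustrative assumptions, not taken from the references):

```python
from collections import Counter, defaultdict

def train_nb(examples):
    """examples: list of (feature_tuple, label) pairs.
    Returns class priors and per-(label, position) feature counts."""
    priors = Counter(label for _, label in examples)
    counts = defaultdict(Counter)  # (label, position) -> Counter of feature values
    for feats, label in examples:
        for i, f in enumerate(feats):
            counts[(label, i)][f] += 1
    return priors, counts

def predict_nb(priors, counts, feats):
    """Pick the label maximizing P(label) * prod_i P(feat_i | label),
    the naive Bayes decision rule under the independence assumption."""
    total = sum(priors.values())
    best, best_score = None, -1.0
    for label, prior_count in priors.items():
        score = prior_count / total
        for i, f in enumerate(feats):
            c = counts[(label, i)]
            # add-one smoothing over observed values plus one unseen value
            score *= (c[f] + 1) / (sum(c.values()) + len(c) + 1)
        if score > best_score:
            best, best_score = label, score
    return best

examples = [(("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
            (("rainy", "mild"), "yes"), (("rainy", "hot"), "yes")]
priors, counts = train_nb(examples)
print(predict_nb(priors, counts, ("rainy", "mild")))  # -> yes
```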

support vector machine

http://www.statsoft.com/textbook/support-vector-machines/

Vladimir N. Vapnik, The Nature of Statistical Learning Theory. Springer, 1995.

Support Vector Machines — Different Methods

  • May 31, 2010 11:22 pm

Digested from: http://www.dtreg.com/svm.htm

A Support Vector Machine (SVM) performs classification by constructing an N-dimensional hyperplane that optimally separates the data into two categories. Before considering N-dimensional hyperplanes, let’s look at a simple 2-dimensional example. Assume we wish to perform a classification, and our data has a categorical target variable with two categories.

The simplest way to divide two groups is with a straight line, a flat plane, or an N-dimensional hyperplane. But what if the points are separated by a nonlinear region? Rather than fitting nonlinear curves to the data, SVM handles this by using a kernel function to map the data into a different space where a hyperplane can be used to do the separation. The concept of a kernel mapping function is very powerful: it allows SVM models to perform separations even with very complex boundaries.
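The effect of such a mapping can be shown in a toy sketch (not from the original article): points that no single threshold can separate in one dimension become separable after mapping each x to x².

```python
def separable_by_threshold(values, labels):
    """True if some threshold puts all of one class on one side
    and all of the other class on the other side."""
    for t in sorted(values):
        below = {lab for v, lab in zip(values, labels) if v <= t}
        above = {lab for v, lab in zip(values, labels) if v > t}
        if len(below) <= 1 and len(above) <= 1 and below != above:
            return True
    return False

xs = [-2.0, -1.0, 1.0, 2.0]
labels = ["B", "A", "A", "B"]
print(separable_by_threshold(xs, labels))      # -> False (no linear split in 1-D)
mapped = [x * x for x in xs]                   # feature map phi(x) = x^2
print(separable_by_threshold(mapped, labels))  # -> True (separable after mapping)
```

A kernel function lets SVM use such a mapping implicitly, without ever computing the mapped coordinates.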


The Kernel Trick


Many kernel mapping functions can be used – probably an infinite number – but a few have been found to work well for a wide variety of applications. The default and recommended kernel function is the Radial Basis Function (RBF).


Kernel functions supported by DTREG:

  • Linear: u'*v
  • Polynomial: (gamma*u'*v + coef0)^degree
  • Radial basis function: exp(-gamma*|u-v|^2)
  • Sigmoid (feed-forward neural network): tanh(gamma*u'*v + coef0)
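These four formulas translate directly into code (a sketch; the parameter names gamma, coef0, and degree follow the DTREG formulas above, while the default values are illustrative):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def linear(u, v):
    return dot(u, v)                              # u'*v

def polynomial(u, v, gamma=1.0, coef0=0.0, degree=3):
    return (gamma * dot(u, v) + coef0) ** degree  # (gamma*u'*v + coef0)^degree

def rbf(u, v, gamma=1.0):
    sq = sum((a - b) ** 2 for a, b in zip(u, v))  # squared distance |u-v|^2
    return math.exp(-gamma * sq)                  # exp(-gamma*|u-v|^2)

def sigmoid(u, v, gamma=1.0, coef0=0.0):
    return math.tanh(gamma * dot(u, v) + coef0)   # tanh(gamma*u'*v + coef0)

u, v = (1.0, 2.0), (3.0, 4.0)
print(linear(u, v))  # -> 11.0
print(rbf(u, u))     # identical vectors -> 1.0
```

Each kernel returns a similarity score between two vectors; an SVM only ever needs these scores, never the mapped coordinates themselves.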