Naive Abst.docx

  • Uploaded by: Pooja Racha
  • May 2020



ABSTRACT

All machine learning algorithms need to be trained for supervised learning tasks such as classification and prediction. Training means fitting them on particular inputs so that they can later be tested on unknown inputs, which they classify or predict based on what they have learned. Most machine learning techniques, such as neural networks, SVMs, and Bayesian methods, are built on this idea. So in a typical machine learning project you divide your input set into a training set and a test set (or evaluation set).

Naive Bayes is based on the idea of conditional probability and Bayes' rule. In conditional probability we find the probability of an event given that some other event has already occurred; with Bayes' theorem we find just the opposite, the probable cause of an event that has already occurred. In reality, we have to predict an outcome given multiple pieces of evidence, and in that case the math gets very complicated. So we 'uncouple' the pieces of evidence and treat each piece as independent. This approach is called Naive Bayes. When classifying, each possible outcome is called a class and has a class label. We consider how likely each class is and assign a label to each entity: the class with the highest probability is declared the "winner", and its label gets assigned to that combination of evidence.
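The "train, then uncouple the evidence and pick the highest-scoring class" procedure described above can be sketched from scratch. This is a minimal illustration, not a production implementation; the toy dataset, feature names, and function names are all made up for the example, and the Laplace smoothing assumes two possible values per feature:

```python
from collections import Counter, defaultdict

# Hypothetical toy training set (made up for illustration): each sample
# is a tuple of categorical features (outlook, windy) plus a class label.
train_set = [
    (("sunny", "no"),  "play"),
    (("sunny", "no"),  "play"),
    (("rainy", "yes"), "stay"),
    (("rainy", "no"),  "play"),
    (("sunny", "yes"), "stay"),
    (("rainy", "yes"), "stay"),
]

def fit(samples):
    """Training: count class priors and per-class feature frequencies."""
    priors = Counter(label for _, label in samples)
    feature_counts = defaultdict(Counter)  # (class, position) -> value counts
    for features, label in samples:
        for i, value in enumerate(features):
            feature_counts[(label, i)][value] += 1
    return priors, feature_counts

def predict(features, priors, feature_counts, alpha=1.0):
    """Score each class as P(class) * prod_i P(feature_i | class),
    treating the features as independent (the 'naive' assumption).
    Laplace smoothing with alpha avoids zero probabilities; the `+ alpha * 2`
    assumes each feature takes 2 possible values, which holds in this toy."""
    total = sum(priors.values())
    best_label, best_score = None, -1.0
    for label, count in priors.items():
        score = count / total  # the class prior P(class)
        for i, value in enumerate(features):
            counts = feature_counts[(label, i)]
            score *= (counts[value] + alpha) / (sum(counts.values()) + alpha * 2)
        if score > best_score:  # the highest-scoring class "wins"
            best_label, best_score = label, score
    return best_label

priors, feature_counts = fit(train_set)
print(predict(("sunny", "no"), priors, feature_counts))   # prints "play"
print(predict(("rainy", "yes"), priors, feature_counts))  # prints "stay"
```

Note that the scores are unnormalized: because only the ranking of classes matters for picking a label, there is no need to divide by the probability of the evidence itself.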
