Map tasks and Reduce tasks

Mark Hall explains how Map tasks produce models and a Reduce task aggregates them. Reduce strategies differ for Naive Bayes and other model types.

We saw in the last lesson that Naive Bayes and JRip are treated differently in the Reduce phase. The reason is that Naive Bayes is easily parallelized: the frequency counts from the individual partitions can simply be added up, producing a single model. For JRip (and most other classifiers), a separate classifier is learned for each partition (4 in this case), and a “vote” ensemble learner is produced that combines them. Also, for some classifiers (like JRip) it is beneficial to randomize the dataset before splitting it into partitions. Finally, we look at the “Spark: cross-validate two classifiers” template and examine how Distributed Weka performs cross-validation.
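To make the two Reduce strategies concrete, here is a minimal sketch using the standard Weka Java API; it is not the actual Distributed Weka/Spark Reduce code, and the partitions array of Instances objects (each with its class attribute already set) is a hypothetical placeholder. For Naive Bayes, streaming every partition through one updateable model is equivalent to adding up the per-partition frequency counts, giving a single final model. For JRip, one model is trained per partition (the Map side) and the built models are combined in a Vote ensemble (the Reduce side).

import weka.classifiers.Classifier;
import weka.classifiers.bayes.NaiveBayesUpdateable;
import weka.classifiers.meta.Vote;
import weka.classifiers.rules.JRip;
import weka.core.Instance;
import weka.core.Instances;

public class ReduceStrategiesSketch {

  // Strategy 1 (Naive Bayes): the model is just class-conditional frequency counts,
  // so updating one model with every partition's instances is equivalent to summing
  // the counts computed separately on each partition.
  public static NaiveBayesUpdateable reduceNaiveBayes(Instances[] partitions) throws Exception {
    NaiveBayesUpdateable nb = new NaiveBayesUpdateable();
    nb.buildClassifier(partitions[0]);              // counts from the first partition
    for (int i = 1; i < partitions.length; i++) {
      for (Instance inst : partitions[i]) {
        nb.updateClassifier(inst);                  // add the remaining partitions' counts
      }
    }
    return nb;                                      // a single aggregated model
  }

  // Strategy 2 (JRip and most other learners): train one model per partition,
  // then combine the already-built models in a Vote ensemble.
  public static Vote reduceByVoting(Instances[] partitions) throws Exception {
    Classifier[] perPartition = new Classifier[partitions.length];
    for (int i = 0; i < partitions.length; i++) {
      JRip ripper = new JRip();
      ripper.buildClassifier(partitions[i]);        // "Map" step: train on one chunk
      perPartition[i] = ripper;
    }
    Vote ensemble = new Vote();
    ensemble.setClassifiers(perPartition);          // "Reduce" step: combine the models
    return ensemble;
  }
}

In Distributed Weka the per-partition models are of course trained by the Map tasks and only assembled in the Reduce step; at prediction time the Vote ensemble combines its base models, by default averaging their predicted class probabilities.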

This article is from the free online course Advanced Data Mining with Weka.
