Ensemble methods, which train multiple learners and then combine them for prediction, with \textit{Boosting} and \textit{Bagging} as representatives, are well-known machine learning approaches. An ensemble is often significantly more accurate than any single learner, and ensemble methods have already achieved great success in many real-world tasks.
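The Bagging idea mentioned above can be illustrated with a minimal sketch: train several base learners on bootstrap samples and combine them by majority vote. The toy dataset, the decision-stump base learner, and the parameter `n_learners` are illustrative assumptions, not examples from the book.

```python
import random
from collections import Counter

# Hypothetical toy one-dimensional dataset: label is 1 when x >= 5, else 0.
data = [(x, int(x >= 5)) for x in range(10)]

def train_stump(sample):
    """Fit a decision stump that predicts 1 when x > threshold,
    choosing the threshold with the fewest errors on the sample."""
    best_t, best_err = None, len(sample) + 1
    for t, _ in sample:
        err = sum(int(x > t) != y for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagging(dataset, n_learners=11, seed=0):
    """Bagging: train each stump on a bootstrap sample
    (drawn with replacement) of the original dataset."""
    rng = random.Random(seed)
    return [
        train_stump([rng.choice(dataset) for _ in range(len(dataset))])
        for _ in range(n_learners)
    ]

def predict(stumps, x):
    """Combine the stumps' predictions by majority vote."""
    votes = Counter(int(x > t) for t in stumps)
    return votes.most_common(1)[0][0]

ensemble = bagging(data)
```

Because each stump sees a different bootstrap sample, their thresholds vary; the majority vote averages out this variance, which is the intuition behind Bagging's accuracy gain over a single learner.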
Part 3 (Chapters 11-16) introduces some advanced topics, covering feature selection and sparse learning, computational learning theory, semi-supervised learning, probabilistic graphical models, rule learning, and reinforcement learning.
In closing, Part IV addresses the development of evolutionary learning algorithms with provable theoretical guarantees for several representative tasks, for which evolutionary learning offers excellent performance.
This self-contained introduction shows how ensemble methods are used in real-world tasks. It first presents background and terminology for readers unfamiliar with machine learning and pattern recognition. The book then covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, and diversity measures. Moving on to more advanced topics, the author explains details behind ensemble pruning and clustering ensembles. He also describes developments in semi-supervised learning, active learning, cost-sensitive learning, class-imbalance learning, and comprehensibility enhancement.