Norway's cheapest books

Books in the Foundations and Trends® in Machine Learning series

  • Dynamical Variational Autoencoders - A Comprehensive Review
    by Laurent Girin
    1 296,-

    Variational autoencoders (VAEs) are powerful deep generative models widely used to represent high-dimensional complex data through a low-dimensional latent space learned in an unsupervised manner. In this volume, the authors introduce and discuss a general class of models, called dynamical variational autoencoders.

  • by Jiani Liu
    1 212,-

    Tensor Regression is the first thorough overview of the fundamentals, motivations, popular algorithms, strategies for efficient implementation, related applications, available datasets, and software resources for tensor-based regression analysis.

  • by Akshay Agrawal
    1 196,-

    Minimum-Distortion Embedding describes the theory behind and practical use of a cutting-edge artificial intelligence technique. Accompanied by an open-source software package, PyMDE, it illustrates how to apply these techniques in areas such as images, co-author networks, demographics, genetics, and biology.

  • by Peter Kairouz
    1 196,-

    The term Federated Learning was coined as recently as 2016 to describe a machine learning setting where multiple entities collaborate in solving a machine learning problem, under the coordination of a central server or service provider. This book describes the latest state of the art.
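The coordination step at the heart of this setting can be sketched in a few lines: the server aggregates client model parameters by a size-weighted average (a minimal illustration of federated averaging, not the full algorithm, which also involves local training rounds, client sampling, and secure aggregation):

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One coordination round: the server returns the size-weighted
    average of the model parameters uploaded by each client."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# two clients holding 1 and 3 local examples respectively
avg = fed_avg([np.array([1.0, 1.0]), np.array([3.0, 3.0])], [1, 3])
print(avg)  # → [2.5 2.5]
```

The weighting by dataset size makes the aggregate equivalent to one step of training on the pooled data, without the data ever leaving the clients.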

  • by Ljubisa Stankovic
    1 820,-

    Provides a comprehensive introduction to advanced data analytics on graphs, allowing us to move beyond standard regular sampling in time and space and facilitating modelling in many important areas.

  • Graph Kernels - State-of-the-Art and Future Challenges
    by Karsten Borgwardt
    1 308,-

    Provides a review of existing graph kernels, their applications, software and data resources, and an empirical comparison of state-of-the-art graph kernels. The book focuses on the theoretical description of common graph kernels and on a large-scale empirical evaluation of graph kernels.

  • by Majid Janzamin
    1 296,-

    Surveys recent progress in using spectral methods, including matrix and tensor decomposition techniques, to learn many popular latent variable models. The focus is on a special type of tensor decomposition called CP decomposition, and the authors cover a wide range of algorithms for finding the components of such a decomposition.

  • by Diederik P. Kingma
    970,-

    Presents an introduction to the framework of variational autoencoders (VAEs) that provides a principled method for jointly learning deep latent-variable models and corresponding inference models using stochastic gradient descent.
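Two ingredients of that framework, the reparameterisation trick that makes the sampling step differentiable and the closed-form KL term of the evidence lower bound, can be sketched in plain NumPy (a toy illustration of the math for a diagonal-Gaussian encoder, not a full VAE):

```python
import numpy as np

def reparam_sample(mu, log_var, rng):
    """Reparameterisation trick: z = mu + sigma * eps with eps ~ N(0, I),
    so gradients w.r.t. (mu, log_var) flow through a deterministic map."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """Closed-form KL(N(mu, diag(exp(log_var))) || N(0, I))."""
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

print(kl_to_standard_normal(np.zeros(3), np.zeros(3)))  # → 0.0
z = reparam_sample(np.full(50_000, 2.0), np.zeros(50_000), np.random.default_rng(0))
print(round(z.mean(), 1))  # sample mean close to mu → 2.0
```

In an actual VAE both functions would act on encoder outputs and the reconstruction term would be added, with everything optimised jointly by stochastic gradient descent.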

  • by Christian A. Naesseth
    1 180,-

    Sequential Monte Carlo is a technique for solving statistical inference problems recursively. This book shows how this powerful technique can be applied to machine learning problems such as probabilistic programming, variational inference and inference evaluation.
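The recursion can be illustrated with the bootstrap particle filter, the canonical SMC algorithm, on a toy linear-Gaussian state-space model where a Kalman filter gives the exact answer for comparison (a minimal sketch with multinomial resampling; the model and parameters are invented for the example):

```python
import numpy as np

def bootstrap_filter(ys, n_particles=100_000, q=0.5, r=0.5, seed=0):
    """Bootstrap filter for x_t = x_{t-1} + N(0,q), y_t = x_t + N(0,r), x_0 = 0."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    means = []
    for y in ys:
        x = x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate through dynamics
        logw = -0.5 * (y - x) ** 2 / r                    # Gaussian log-likelihood (up to a constant)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(float(w @ x))                        # filtered mean E[x_t | y_1..y_t]
        x = rng.choice(x, size=n_particles, p=w)          # multinomial resampling
    return means

def kalman_means(ys, q=0.5, r=0.5):
    """Exact filtered means for the same linear-Gaussian model."""
    m, p, out = 0.0, 0.0, []
    for y in ys:
        p += q
        k = p / (p + r)
        m, p = m + k * (y - m), p * (1 - k)
        out.append(m)
    return out

ys = [0.5, 1.0, 1.5]
pf, kf = bootstrap_filter(ys), kalman_means(ys)
```

With enough particles the Monte Carlo estimates track the exact filtered means closely; the same propagate/weight/resample loop carries over to the nonlinear, non-Gaussian models where no closed form exists.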

  • by Aleksandrs Slivkins
    1 205,-

    Provides a textbook-like treatment of multi-armed bandits. The work on multi-armed bandits can be partitioned into a dozen or so directions. Each chapter tackles one line of work, providing a self-contained introduction and pointers for further reading.
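One of those directions, optimism-based index policies, can be illustrated with the classic UCB1 rule on a synthetic Bernoulli bandit (a hedged sketch; the arm means and horizon are invented for the example):

```python
import math
import random

def ucb1(means, horizon=20_000, seed=0):
    """UCB1: play the arm maximising empirical mean + sqrt(2 ln t / n_a)."""
    rng = random.Random(seed)
    k = len(means)
    counts, sums = [0] * k, [0.0] * k
    total = 0.0
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1  # initialise by playing each arm once
        else:
            arm = max(range(k), key=lambda a: sums[a] / counts[a]
                      + math.sqrt(2 * math.log(t) / counts[a]))
        reward = 1.0 if rng.random() < means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total += reward
    return counts, total

counts, total = ucb1([0.3, 0.5, 0.7])
print(counts.index(max(counts)))  # the best arm (index 2) gets the bulk of the pulls → 2
```

The exploration bonus shrinks as an arm accumulates pulls, so suboptimal arms are played only O(log T) times while the best arm dominates.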

  • Computational Optimal Transport - With Applications to Data Science
    by Gabriel Peyre
    1 276,-

    Presents an overview of the main theoretical insights that support the practical effectiveness of OT before explaining how to turn these insights into fast computational schemes. This book will be a valuable reference for researchers and students wishing to get a thorough understanding of computational optimal transport.
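The fast computational schemes referred to here include entropy-regularised OT solved by Sinkhorn iterations; a minimal NumPy sketch on a made-up two-point problem (the regularisation strength and iteration count are arbitrary illustrative choices):

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.1, iters=500):
    """Entropic OT: alternately rescale the rows and columns of K = exp(-C/reg)
    until the transport plan has marginals a and b."""
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]   # transport plan P = diag(u) K diag(v)

a = np.array([0.5, 0.5])                 # source distribution
b = np.array([0.25, 0.75])               # target distribution
C = np.array([[0.0, 1.0], [1.0, 0.0]])   # ground cost matrix
P = sinkhorn(a, b, C)
print(P.sum(axis=1), P.sum(axis=0))      # row/column marginals match a and b
```

Each iteration is just a pair of matrix-vector products, which is what makes the entropic relaxation so much cheaper than solving the exact linear program.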

  • by Vincent Francois-Lavet
    1 296,-

    Provides a starting point for understanding deep reinforcement learning. Although written at a research level, it provides a comprehensive and accessible introduction to deep reinforcement learning models, algorithms and techniques.

  • by Adrian N. Bishop
    1 163,-

    Reviews and extends some important results in random matrix theory in the specific context of real random Wishart matrices. To overcome the complexity of the subject matter, the authors use a lecture note style to make the material accessible to a wide audience. This results in a comprehensive and self-contained introduction.

  • - Part 1: Low-Rank Tensor Decompositions
    by Andrzej Cichocki
    1 296,-

    Provides a systematic and example-rich guide to the basic properties and applications of tensor network methodologies, and demonstrates their promise as a tool for the analysis of extreme-scale multidimensional data. The book demonstrates the ability of tensor networks to provide linearly, or even super-linearly, scalable solutions.

  • Property Testing - A Learning Theory Perspective
    by Dana Ron
    1 063,-

    Takes the learning-theory point of view of property testing and focuses on results for testing properties of functions that are of interest to the learning theory community. In particular, the book covers results for testing algebraic properties of functions, such as linearity.
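The linearity test mentioned here is, in its classic form, the Blum-Luby-Rubinfeld (BLR) test: query f on random pairs and check f(x) + f(y) = f(x + y). A toy sketch over GF(2), where addition is XOR (the example functions and trial count are invented for illustration):

```python
import random

def blr_test(f, n_bits, trials=200, seed=0):
    """BLR linearity test for f: {0,1}^n -> {0,1}: accept if
    f(x) XOR f(y) == f(x XOR y) holds on `trials` random pairs."""
    rng = random.Random(seed)
    for _ in range(trials):
        x, y = rng.getrandbits(n_bits), rng.getrandbits(n_bits)
        if f(x) ^ f(y) != f(x ^ y):
            return False  # a violated triple witnesses non-linearity
    return True

parity = lambda x: bin(x).count("1") % 2     # linear over GF(2)
threshold = lambda x: 1 if x > 100 else 0    # far from every linear function
print(blr_test(parity, 8), blr_test(threshold, 8))  # → True False
```

The point of property testing is the query count: the test reads f at only O(trials) points, independent of the domain size, yet rejects any function that is far from linear with high probability.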

  • Convex Optimization - Algorithms and Complexity
    by Sebastien Bubeck
    1 238,-

    Presents the main complexity theorems in convex optimization and their corresponding algorithms. The book begins with the fundamental theory of black-box optimization and proceeds to guide the reader through recent advances in structural optimization and stochastic optimization.
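The black-box model the book starts from can be illustrated with plain gradient descent on a smooth, strongly convex quadratic, using the standard 1/L step size (a toy problem; the matrix and vector are invented):

```python
import numpy as np

def gradient_descent(grad, x0, step, iters=200):
    """Plain gradient descent: x_{k+1} = x_k - step * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# minimise f(x) = 0.5 x^T Q x - b^T x; the minimiser solves Q x = b
Q = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([1.0, 1.0])
x = gradient_descent(lambda v: Q @ v - b, [0.0, 0.0], step=1 / 4)  # step = 1/L, L = lambda_max(Q)
print(x, np.linalg.solve(Q, b))
```

The algorithm only ever queries the gradient oracle, which is exactly the access model under which the book's complexity theorems are stated.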

  • by Joel A. Tropp
    1 089,-

    Offers an invitation to the field of matrix concentration inequalities. The book begins with some history of random matrix theory; describes a flexible model for random matrices that is suitable for many problems; and discusses the most important matrix concentration results.

  • by Pierre Del Moral
    1 314,-

    Presents some new concentration inequalities for Feynman-Kac particle processes. The book analyses different types of stochastic particle models, including particle profile occupation measures, genealogical tree-based evolution models, particle free energies, as well as backward Markov chain particle models.

  • by Francis Bach
    1 063,-

    Presents optimization tools and techniques dedicated to sparsity-inducing penalties from a general perspective. The book covers proximal methods, block-coordinate descent, working-set and homotopy methods, and non-convex formulations and extensions, and provides a set of experiments to compare algorithms from a computational point of view.
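For the l1 penalty, the proximal methods discussed here reduce to iterative soft-thresholding (ISTA); a minimal sketch for the lasso objective (the step size rule and test problem are illustrative choices):

```python
import numpy as np

def ista_lasso(X, y, lam, iters=500):
    """ISTA for min_w 0.5*||Xw - y||^2 + lam*||w||_1: a gradient step on the
    smooth part followed by the soft-thresholding proximal operator."""
    step = 1.0 / np.linalg.norm(X, 2) ** 2   # 1/L with L = ||X||_2^2
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        z = w - step * (X.T @ (X @ w - y))                      # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0)  # prox of lam*||.||_1
    return w

# with an orthonormal design the lasso solution is just soft-thresholding of y
w = ista_lasso(np.eye(5), np.array([3.0, 0.5, -2.0, 0.0, 1.0]), lam=1.0)
print(w)  # → [ 2.  0. -1.  0.  0.]
```

The soft-thresholding step is what zeroes out small coefficients exactly, which is why the penalty is called sparsity-inducing.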

  • by Alex Kulesza
    1 296,-

    Provides a comprehensible introduction to determinantal point processes (DPPs), focusing on the intuitions, algorithms, and extensions that are most relevant to the machine learning community, and shows how DPPs can be applied to real-world applications.

  • by Anna Goldenberg
    1 324,-

    Provides an overview of the historical development of statistical network modelling and then introduces a number of examples that have been studied in the network literature. Subsequent discussions focus on a number of prominent static and dynamic network models and their interconnections.

  • - New Frontiers
    by Sridhar Mahadevan
    1 387,-

    Describes methods for automatically compressing Markov decision processes (MDPs) by learning a low-dimensional linear approximation defined by an orthogonal set of basis functions. A unique feature of the text is the use of Laplacian operators, whose matrix representations have non-positive off-diagonal elements and zero row sums.
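The two defining properties quoted above are easy to check numerically on a small graph; a sketch with the combinatorial Laplacian L = D - A of a 4-node path graph, whose eigenvectors form the kind of orthogonal basis the text refers to:

```python
import numpy as np

# adjacency matrix of the path graph 0-1-2-3
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A               # combinatorial Laplacian L = D - A
print(L.sum(axis=1))                          # zero row sums → [0. 0. 0. 0.]
print((L - np.diag(np.diag(L)) <= 0).all())   # non-positive off-diagonals → True
# an orthogonal set of basis functions: the eigenvectors of L
eigvals, eigvecs = np.linalg.eigh(L)
```

Projecting a value function onto the leading (smoothest) eigenvectors gives the low-dimensional linear approximation of the MDP that the book describes.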

  • by Martin J. Wainwright
    1 618,-

    Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, this book develops general variational representations of the problems of computing likelihoods, marginal probabilities and most probable configurations.
