This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today's smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life.

Key to the success of the NTCIR endeavor was early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. They show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants.

This book is suitable for researchers, practitioners, and students: anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.
This open access book provides an introduction and an overview of learning to quantify (a.k.a. "quantification"), i.e. the task of training estimators of class proportions in unlabeled data by means of supervised learning. In data science, learning to quantify is a task of its own, related to classification yet different from it, since estimating class proportions by simply classifying all data and counting the labels assigned by the classifier is known to often return inaccurate ("biased") class proportion estimates.

The book introduces learning to quantify by looking at the supervised learning methods that can be used to perform it, at the evaluation measures and evaluation protocols that should be used for evaluating the quality of the returned predictions, at the numerous fields of human activity in which the use of quantification techniques may provide improved results with respect to the naive use of classification techniques, and at advanced topics in quantification research.

The book is suitable for researchers, data scientists, and PhD students who want to come up to speed with the state of the art in learning to quantify, but also for researchers wishing to apply data science technologies to fields of human activity (e.g., the social sciences, political science, epidemiology, market research) which focus on aggregate ("macro") data rather than on individual ("micro") data.
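To make the distinction concrete, the following is a minimal sketch (not taken from the book; the dataset, classifier, and split are illustrative assumptions) contrasting the naive classify-and-count (CC) estimator with adjusted classify-and-count (ACC), one standard way of correcting CC's bias under prior-probability shift:

```python
# Minimal sketch (illustrative, not from the book): classify-and-count (CC)
# vs. adjusted classify-and-count (ACC) for binary prevalence estimation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic labelled data; the "unlabeled" half plays the role of deployment data.
X, y = make_classification(n_samples=4000, weights=[0.7, 0.3], random_state=0)
X_lab, X_unlab, y_lab, y_unlab = train_test_split(X, y, test_size=0.5, random_state=0)

# Simulate prior-probability shift: drop most positives from the unlabeled data
# so its prevalence differs from the training prevalence (where CC gets biased).
pos, neg = np.where(y_unlab == 1)[0], np.where(y_unlab == 0)[0]
keep = np.concatenate([neg, pos[: len(pos) // 3]])
X_unlab, y_unlab = X_unlab[keep], y_unlab[keep]

# Hold out part of the labelled data to estimate the classifier's error rates.
X_tr, X_val, y_tr, y_val = train_test_split(X_lab, y_lab, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Naive CC: prevalence = fraction of unlabeled items the classifier labels positive.
cc = clf.predict(X_unlab).mean()

# ACC: correct CC using true/false positive rates estimated on the validation split.
val_pred = clf.predict(X_val)
tpr = val_pred[y_val == 1].mean()
fpr = val_pred[y_val == 0].mean()
acc = float(np.clip((cc - fpr) / (tpr - fpr), 0.0, 1.0)) if tpr > fpr else cc

print(f"true prevalence={y_unlab.mean():.3f}  CC={cc:.3f}  ACC={acc:.3f}")
```

Under the simulated shift, CC tends to track the training prevalence rather than the deployment prevalence, while the adjusted estimate is typically much closer to the true proportion; the book covers this family of methods and many more advanced ones.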
This book surveys recent advances in Conversational Information Retrieval (CIR), focusing on neural approaches that have been developed in the last few years. Progress in deep learning has brought tremendous improvements in natural language processing (NLP) and conversational AI, leading to a plethora of commercial conversational services that allow naturally spoken and typed interaction, increasing the need for more human-centric interactions in IR.

The book contains nine chapters. Chapter 1 motivates the research of CIR by reviewing the studies on how people search and subsequently defines a CIR system and a reference architecture which is described in detail in the rest of the book. Chapter 2 provides a detailed discussion of techniques for evaluating a CIR system, a goal-oriented conversational AI system with a human in the loop. Then Chapters 3 to 7 describe the algorithms and methods for developing the main CIR modules (or sub-systems). In Chapter 3, conversational document search is discussed, which can be viewed as a sub-system of the CIR system. Chapter 4 is about algorithms and methods for query-focused multi-document summarization. Chapter 5 describes various neural models for conversational machine comprehension, which generate a direct answer to a user query based on retrieved query-relevant documents, while Chapter 6 details neural approaches to conversational question answering over knowledge bases, which is fundamental to the knowledge base search module of a CIR system. Chapter 7 elaborates various techniques and models that aim to equip a CIR system with the capability of proactively leading a human-machine conversation. Chapter 8 reviews a variety of commercial systems for CIR and related tasks. It first presents an overview of research platforms and toolkits which enable scientists and practitioners to build conversational experiences, and continues with historical highlights and recent trends in a range of application areas. Chapter 9 concludes the book with a brief discussion of research trends and areas for future work.

The primary target audience of the book is the IR and NLP research communities. However, audiences with other backgrounds, such as machine learning or human-computer interaction, will also find it an accessible introduction to CIR.
This book brings together the insights from three different areas, Information Seeking and Retrieval, Cognitive Psychology, and Behavioral Economics, and shows how this new interdisciplinary approach can advance our knowledge about users interacting with diverse search systems, especially their seemingly irrational decisions and anomalies that could not be predicted by most normative models.

The first part of this book, "Foundation", introduces the general notions and fundamentals of this new approach, as well as the main concepts, terminology and theories. The second part, "Beyond Rational Agents", describes the systematic biases and cognitive limits confirmed by behavioral experiments of varying types and explains in detail how they contradict the assumptions and predictions of formal models in information retrieval (IR). The third part, "Toward A Behavioral Economics Approach", first synthesizes the findings from existing preliminary research on bounded rationality and behavioral economics modeling in the information seeking, retrieval, and recommender system communities. Then, it discusses the implications, open questions and methodological challenges of applying the behavioral economics framework to different sub-areas of IR research and practice, such as modeling users and search sessions, developing unbiased learning-to-rank and adaptive recommendation algorithms, implementing bias-aware intelligent task support, as well as extending the conceptualization and evaluation of IR fairness, accountability, transparency and ethics (FATE) with knowledge regarding both human biases and algorithmic biases.

This book introduces a behavioral economics framework to IR scientists seeking a new perspective on both fundamental and newly emerging problems of IR, as well as on the development and evaluation of bias-aware intelligent information systems. It is especially intended for researchers working on IR and human-information interaction who want to learn about the potential offered by behavioral economics in their own research areas.
Automatic Indexing and Abstracting of Document Texts summarizes the latest techniques of automatic indexing and abstracting, and the results of their application. It also places the techniques in the context of the study of text, manual indexing and abstracting, and the use of the indexing descriptions and abstracts in systems that select documents or information from large collections. Important sections of the book consider the development of new techniques for indexing and abstracting. The techniques involve the following: using text grammars, learning of the themes of the texts including the identification of representative sentences or paragraphs by means of adequate cluster algorithms, and learning of classification patterns of texts. In addition, the book is an attempt to illuminate new avenues for future research. Automatic Indexing and Abstracting of Document Texts is an excellent reference for researchers and professionals working in the field of content management and information retrieval.
The second part on "Evaluating Patent Retrieval" then begins with two chapters dedicated to patent evaluation campaigns, followed by two chapters discussing complementary issues from the perspective of patent searchers and from the perspective of related domains, notably legal search.
This volume celebrates the twentieth anniversary of CLEF - the Cross-Language Evaluation Forum for the first ten years, and the Conference and Labs of the Evaluation Forum since - and traces its evolution over these first two decades.
Part III explores how entities can enable search engines to understand the concepts, meaning, and intent behind the query that the user enters into the search box, and how they can provide rich and focused responses (as opposed to merely a list of documents), a process known as semantic search.
Covering aspects from principles and limitations of statistical significance tests to topic set size design and power analysis, this book guides readers to statistically well-designed experiments.
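As a small taste of the kind of analysis this guidance applies to, here is a minimal sketch (the per-topic scores and system names are made-up assumptions, not examples from the book) of a paired significance test over per-topic scores of two retrieval systems, followed by a power-analysis estimate of the topic set size needed to detect the observed effect:

```python
# Minimal sketch (illustrative data): paired t-test over per-topic scores of
# two retrieval systems, plus a power-analysis estimate of topic set size.
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

rng = np.random.default_rng(7)
n_topics = 50
system_a = rng.uniform(0.2, 0.8, n_topics)                       # e.g. per-topic nDCG@10
system_b = np.clip(system_a + rng.normal(0.03, 0.06, n_topics), 0.0, 1.0)

diff = system_b - system_a
t_stat, p_value = stats.ttest_rel(system_b, system_a)             # paired, two-sided
effect_size = diff.mean() / diff.std(ddof=1)                      # standardized mean diff.

# How many topics would be needed to detect this effect with alpha=0.05, power=0.8?
needed_topics = TTestPower().solve_power(effect_size=effect_size, alpha=0.05, power=0.8)

print(f"mean delta={diff.mean():.3f}  t={t_stat:.2f}  p={p_value:.4f}")
print(f"effect size d={effect_size:.2f}  topics needed for 80% power={needed_topics:.0f}")
```

The book goes well beyond this kind of off-the-shelf test, covering the assumptions behind such tests, their limitations, and principled ways to choose topic set sizes before running an experiment.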
This volume summarizes the author's work on social information seeking (SIS), and at the same time serves as an introduction to the topic. Sometimes also referred to as social search or social information retrieval, this is a relatively new area of study concerned with the seeking and acquiring of information from social spaces on the Internet.
This book introduces the quantum mechanical framework to information retrieval scientists seeking a new perspective on foundational problems.