This book explains how to build Natural Language Generation systems - computer software systems which automatically generate understandable texts in English or other human languages. The book covers the algorithms and representations needed to perform the core tasks of document planning, microplanning, and surface realization.
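The document planning / microplanning / surface realization split described above is a pipeline architecture. As a rough, illustrative sketch only (the toy weather domain, function names, and data structures below are invented for this example, not taken from the book), the three stages might be chained like this:

```python
# Minimal sketch of a three-stage NLG pipeline (document planning,
# microplanning, surface realization). The toy weather domain and all
# names are illustrative assumptions, not the book's implementation.

def document_planner(data):
    # Decide which messages to convey and in what order.
    messages = []
    if data["rain_mm"] > 0:
        messages.append(("rain", data["rain_mm"]))
    messages.append(("temperature", data["max_temp_c"]))
    return messages

def microplanner(messages):
    # Choose words and an abstract sentence structure for each message.
    specs = []
    for kind, value in messages:
        if kind == "rain":
            specs.append({"subject": "rain", "verb": "is expected",
                          "modifier": f"({value} mm)"})
        elif kind == "temperature":
            specs.append({"subject": "the temperature", "verb": "will reach",
                          "modifier": f"{value} degrees"})
    return specs

def surface_realizer(specs):
    # Turn abstract sentence specifications into grammatical text.
    sentences = [f"{s['subject'].capitalize()} {s['verb']} {s['modifier']}."
                 for s in specs]
    return " ".join(sentences)

if __name__ == "__main__":
    weather = {"rain_mm": 4, "max_temp_c": 21}
    print(surface_realizer(microplanner(document_planner(weather))))
    # Rain is expected (4 mm). The temperature will reach 21 degrees.
```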
The relation between ontologies and language is currently at the forefront of natural language processing (NLP). Ontologies, as widely used models in semantic technologies, have much in common with the lexicon. A lexicon organizes words as a conventional inventory of concepts, while an ontology formalizes concepts and their logical relations. A shared lexicon is the prerequisite for knowledge-sharing through language, and a shared ontology is the prerequisite for knowledge-sharing through information technology. In building models of language, computational linguists must be able to accurately map the relations between words and the concepts that they can be linked to. This book focuses on the technology involved in enabling integration between lexical resources and semantic technologies. It will be of interest to researchers and graduate students in NLP, computational linguistics, and knowledge engineering, as well as in semantics, psycholinguistics, lexicology and morphology/syntax.
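To make the lexicon/ontology contrast above concrete, here is a small illustrative sketch (the words, concept names, and relations are invented toy examples, not drawn from any particular resource): the lexicon maps word forms to the concepts they can be linked to, while the ontology records logical relations between the concepts themselves.

```python
# Illustrative sketch only: a toy lexicon (word -> concepts) alongside a toy
# ontology (concept -> logical relations). All names are invented examples.

lexicon = {
    "bank": ["FinancialInstitution", "RiverBank"],   # one word, two concepts
    "dog": ["Dog"],
    "animal": ["Animal"],
}

ontology = {
    # subclass (is-a) relations between concepts
    "Dog": {"is_a": "Animal"},
    "FinancialInstitution": {"is_a": "Organization"},
    "RiverBank": {"is_a": "GeographicFeature"},
}

def concepts_for(word):
    """Map a word to the concepts it can be linked to (lexical lookup)."""
    return lexicon.get(word, [])

def is_a_chain(concept):
    """Follow is-a links upward through the ontology (logical structure)."""
    chain = [concept]
    while concept in ontology:
        concept = ontology[concept]["is_a"]
        chain.append(concept)
    return chain

print(concepts_for("bank"))   # ['FinancialInstitution', 'RiverBank']
print(is_a_chain("Dog"))      # ['Dog', 'Animal']
```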
This book presents a collection of papers on the issue of focus in its broadest sense. While commonly considered to be related to phenomena such as presupposition and anaphora, focusing is much more widespread, and it is this pervasiveness that the collection addresses.
Of great interest to those working in the fields of computational linguistics, logic, semantics, artificial intelligence and linguistics generally, this 1992 collection includes both tutorial and advanced material in order to orient the uninitiated to the concepts and problems that are at issue.
This collection of contributions addresses the problem of words and their meaning, which remains a difficult and controversial area within linguistics, philosophy and artificial intelligence. The book aims to provide answers based on empirical linguistic methods that are relevant across disciplines and accessible to researchers from different backgrounds.
Relational Models of the Lexicon not only provides an invaluable survey of research in relational semantics, but offers a stimulus for potential research advances in semantics, natural language processing and knowledge representation.
This book describes the Spoken Language Translator (SLT), one of the first major projects in the area of automatic speech translation.
Memory-based language processing - a machine learning and problem solving method for language technology - is based on the idea that the direct reuse of examples using analogical reasoning is more suited for solving language processing problems than the application of rules extracted from those examples. This book discusses the theory and practice of memory-based language processing, showing its comparative strengths over alternative methods of language modelling. Language is complex, with few generalizations, many sub-regularities and exceptions, and the advantage of memory-based language processing is that it does not abstract away from this valuable low-frequency information. By applying the model to a range of benchmark problems, the authors show that for linguistic areas ranging from phonology to semantics, it produces excellent results. They also describe TiMBL, a software package for memory-based language processing. The first comprehensive overview of the approach, this book will be invaluable for computational linguists, psycholinguists and language engineers.
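As a rough sketch of the memory-based idea described above (this is not TiMBL or its interface; the toy task, features, and data are invented for illustration), classification amounts to storing training examples verbatim and labelling new items by analogy with their most similar stored neighbours:

```python
# Sketch of memory-based classification: store all examples, classify new
# items by a vote over their most similar stored neighbours. The toy task
# (guessing a part-of-speech tag from word endings) and all data are
# invented examples; this is not TiMBL's actual interface.
from collections import Counter

def features(word):
    # Simple features: the last three characters (padded for short words).
    padded = ("###" + word)[-3:]
    return [padded[0], padded[1], padded[2]]

def overlap(f1, f2):
    # Similarity = number of matching feature values (overlap metric).
    return sum(1 for a, b in zip(f1, f2) if a == b)

def classify(word, memory, k=3):
    # Rank stored examples by similarity; majority vote over the top k.
    scored = sorted(memory, key=lambda ex: overlap(features(word), ex[0]),
                    reverse=True)
    votes = Counter(label for _, label in scored[:k])
    return votes.most_common(1)[0][0]

# "Training" is just memorization: no rules are extracted, so low-frequency
# exceptions are kept alongside the regular cases.
memory = [(features(w), tag) for w, tag in [
    ("walked", "VBD"), ("talked", "VBD"), ("jumped", "VBD"),
    ("running", "VBG"), ("singing", "VBG"),
    ("quickly", "RB"), ("softly", "RB"),
]]

print(classify("painted", memory))   # VBD
print(classify("loudly", memory))    # RB
```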
Editors Madeleine Bates and Ralph Weischedel have invited capable researchers in the field of natural language processing to address the theoretical and applied work that the field has achieved to date.
This book offers a comprehensive overview of the human language technology field.
The lexicon is now a major focus of research in computational linguistics and natural language processing (NLP). This collection describes techniques of lexical representation within a unification-based framework and their linguistic application, concentrating on the issue of structuring the lexicon using inheritance and defaults.
Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons.
A collection of new papers by leading researchers on natural language parsing.
Studying language variation requires comprehensive interdisciplinary knowledge and new computational tools. This essential reference introduces researchers and graduate students in computer science, linguistics, and NLP to the core topics in language variation and the computational methods applied to similar languages, varieties, and dialects.
Semantic analysis has long been a primary problem in natural language processing. Semantic Processing for Finite Domains presents an approach to the computational processing of English text that combines current theories of knowledge representation and reasoning in artificial intelligence with the latest linguistic views of lexical semantics.
This study explores an approach to text generation that interprets systemic grammar as a computational representation. Terry Patten demonstrates that systemic grammar can be easily and automatically translated into current AI knowledge representations and efficiently processed by the same knowledge-based techniques currently exploited by expert systems.
Ralph Grishman provides an integrated introduction to and valuable survey of the field of computer analysis of language. The book tackles syntax analysis, semantic analysis, text analysis and natural language generation through clear exposition and exercises, and is written for readers with some background in computer science and finite mathematics.
Drawing on case studies from around the world, this book develops a formal computational theory of writing systems and relates it to psycholinguistic results. Sproat then proposes a taxonomy of writing systems. The book will be of interest to students and researchers in theoretical and computational linguistics, psycholinguistics and speech technology.
Logics of Conversation presents a dynamic semantic framework called Segmented Discourse Representation Theory, or SDRT, where the interaction between discourse coherence and discourse interpretation is explored in a logically precise manner.
This comprehensive introduction to all the core areas and many emerging themes of sentiment analysis approaches the problem from a natural-language-processing angle. The author explains the underlying structure and the language constructs that are commonly used to express opinions and sentiments and presents computational methods to analyze and summarize opinions.
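One construct commonly used in this area is an opinion lexicon. As a hedged, generic illustration (the word lists, the simple negation rule, and the scoring scheme below are toy assumptions, not the book's method), a minimal lexicon-based scorer might look like this:

```python
# Toy lexicon-based sentiment scorer, for illustration only; the word lists
# and the one-token negation rule are invented assumptions.
POSITIVE = {"good", "great", "excellent", "love", "enjoyable"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "boring"}
NEGATORS = {"not", "never", "no"}

def sentiment_score(text):
    score = 0
    tokens = text.lower().split()
    for i, token in enumerate(tokens):
        polarity = 1 if token in POSITIVE else -1 if token in NEGATIVE else 0
        # Flip polarity if the previous token is a negator ("not good").
        if polarity and i > 0 and tokens[i - 1] in NEGATORS:
            polarity = -polarity
        score += polarity
    return score

print(sentiment_score("the plot was great and the acting excellent"))  # 2
print(sentiment_score("not good and frankly boring"))                  # -2
```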
When we scrutinise how we refer to things in conversation, we find that we rarely state explicitly which object we mean, although we expect an interlocutor to discern it. Dr Kronfeld provides an answer to two questions: how do we successfully refer, and how can a computer be programmed to achieve this?
This book presents computational mechanisms for solving common language interpretation problems.
A theoretically motivated foundation for semantic interpretation by computer, showing how such a framework helps resolve lexical and syntactic ambiguities. The approach is interdisciplinary, drawing on research in computational linguistics, AI, Montague semantics, and cognitive psychology.
An investigation into the problems of generating natural language utterances to satisfy specific goals the speaker has in mind.
This book provides a precise and thorough description of the meaning and use of spatial expressions, using both a linguistics and an artificial intelligence perspective, and also an enlightening discussion of computer models of comprehension and production in the spatial domain.