An argument that what makes science distinctive is its emphasis on evidence and scientists' willingness to change theories on the basis of new evidence.
Who benefits from smart technology? Whose interests are served when we trade our personal data for convenience and connectivity? Smart technology is everywhere: smart umbrellas that light up when rain is in the forecast; smart cars that relieve drivers of the drudgery of driving; smart toothbrushes that send your dental hygiene details to the cloud. Nothing is safe from smartification. In Too Smart, Jathan Sadowski looks at the proliferation of smart stuff in our lives and asks whether the tradeoff—exchanging our personal data for convenience and connectivity—is worth it. Who benefits from smart technology? Sadowski explains how data, once the purview of researchers and policy wonks, has become a form of capital. Smart technology, he argues, is driven by the dual imperatives of digital capitalism: extracting data from, and expanding control over, everything and everybody. He looks at three domains colonized by smart technologies' collection and control systems: the smart self, the smart home, and the smart city. The smart self involves more than self-tracking of steps walked and calories burned; it raises questions about what others do with our data and how they direct our behavior—whether or not we want them to. The smart home collects data about our habits that offer businesses a window into our domestic spaces. And the smart city, where these systems have space to grow, offers military-grade surveillance capabilities to local authorities. Technology gets smart from our data. We may enjoy the conveniences we get in return (the refrigerator says we're out of milk!), but, Sadowski argues, smart technology advances the interests of corporate technocratic power—and will continue to do so unless we demand oversight and ownership of our data.
How the concept of critical thinking emerged, how it has been defined, and how critical thinking skills can be taught. Critical thinking is regularly cited as an essential twenty-first century skill, the key to success in school and work. Given our propensity to believe fake news, draw incorrect conclusions, and make decisions based on emotion rather than reason, it might even be said that critical thinking is vital to the survival of a democratic society. But what, exactly, is critical thinking? In this volume in the MIT Press Essential Knowledge series, Jonathan Haber explains how the concept of critical thinking emerged, how it has been defined, and how critical thinking skills can be taught and assessed. Haber describes the term's origins in such disciplines as philosophy, psychology, and science. He examines the components of critical thinking, including structured thinking, language skills, background knowledge, and information literacy, along with such necessary intellectual traits as intellectual humility, empathy, and open-mindedness. He discusses how research has defined critical thinking, how elements of critical thinking have been taught for centuries, and how educators can teach critical thinking skills now. Haber argues that the most important critical thinking issue today is that not enough people are doing enough of it. Fortunately, critical thinking can be taught, practiced, and evaluated. This book offers a guide for teachers, students, and aspiring critical thinkers everywhere, including advice for educational leaders and policy makers on how to make the teaching and learning of critical thinking an educational priority and practical reality.
A provocative and probing argument showing how human beings can for the first time in history take charge of their moral fate. Is tribalism—the political and cultural divisions between Us and Them—an inherent part of our basic moral psychology? Many scientists link tribalism and morality, arguing that the evolved "moral mind" is tribalistic. Any escape from tribalism, according to this thinking, would be partial and fragile, because it goes against the grain of our nature. In this book, Allen Buchanan offers a counterargument: the moral mind is highly flexible, capable of both tribalism and deeply inclusive moralities, depending on the social environment in which the moral mind operates. We can't be morally tribalistic by nature, Buchanan explains, because quite recently there has been a remarkable shift away from tribalism and toward inclusiveness, as growing numbers of people acknowledge that all human beings have equal moral status, and that at least some nonhumans also have moral standing. These are what Buchanan terms the Two Great Expansions of moral regard. And yet, he argues, moral progress is not inevitable but depends partly on whether we have the good fortune to develop as moral agents in a society that provides the right conditions for realizing our moral potential. But morality need not depend on luck. We can take charge of our moral fate by deliberately shaping our social environment—by engaging in scientifically informed "moral institutional design." For the first time in human history, human beings can determine what sort of morality is predominant in their societies and what kinds of moral agents they are.
The sixth edition of the foundational reference on cognitive neuroscience, with entirely new material that covers the latest research, experimental approaches, and measurement methodologies. Each edition of this classic reference has proved to be a benchmark in the developing field of cognitive neuroscience. The sixth edition of The Cognitive Neurosciences continues to chart new directions in the study of the biological underpinnings of complex cognition—the relationship between the structural and physiological mechanisms of the nervous system and the psychological reality of the mind. It offers entirely new material, reflecting recent advances in the field, covering the latest research, experimental approaches, and measurement methodologies. This sixth edition treats such foundational topics as memory, attention, and language, as well as other areas, including computational models of cognition, reward and decision making, social neuroscience, scientific ethics, and methods advances. Over the last twenty-five years, the cognitive neurosciences have seen the development of sophisticated tools and methods, including computational approaches that generate enormous data sets. This volume deploys these exciting new instruments but also emphasizes the value of theory, behavior, observation, and other time-tested scientific habits.
Section editors: Sarah-Jayne Blakemore and Ulman Lindenberger, Kalanit Grill-Spector and Maria Chait, Tomás Ryan and Charan Ranganath, Sabine Kastner and Steven Luck, Stanislas Dehaene and Josh McDermott, Rich Ivry and John Krakauer, Daphna Shohamy and Wolfram Schultz, Danielle Bassett and Nikolaus Kriegeskorte, Marina Bedny and Alfonso Caramazza, Liina Pylkkänen and Karen Emmorey, Mauricio Delgado and Elizabeth Phelps, Anjan Chatterjee and Adina Roskies
An account of the significant though gradual, uneven, disconnected, ad hoc, and pragmatic innovations in global financial governance and developmental finance induced by the global financial crisis.
The definitive presentation of Soar, one of AI's most enduring architectures, offering comprehensive descriptions of fundamental aspects and new components.
An examination of how technical choices, social hierarchies, economic structures, and political dynamics shaped the Soviet nuclear industry leading up to Chernobyl.
Interdisciplinary perspectives on the capacity to perceive, appreciate, and make music. Research shows that all humans have a predisposition for music, just as they do for language. All of us can perceive and enjoy music, even if we can't carry a tune and consider ourselves "unmusical." This volume offers interdisciplinary perspectives on the capacity to perceive, appreciate, and make music. Scholars from biology, musicology, neurology, genetics, computer science, anthropology, psychology, and other fields consider what music is for and why every human culture has it; whether musicality is a uniquely human capacity; and what biological and cognitive mechanisms underlie it. Contributors outline a research program in musicality, and discuss issues in studying the evolution of music; consider principles, constraints, and theories of origins; review musicality from cross-cultural, cross-species, and cross-domain perspectives; discuss the computational modeling of animal song and creativity; and offer a historical context for the study of musicality. The volume aims to identify the basic neurocognitive mechanisms that constitute musicality (and effective ways to study these in human and nonhuman animals) and to develop a method for analyzing musical phenotypes that point to the biological basis of musicality.
Contributors: Jorge L. Armony, Judith Becker, Simon E. Fisher, W. Tecumseh Fitch, Bruno Gingras, Jessica Grahn, Yuko Hattori, Marisa Hoeschele, Henkjan Honing, David Huron, Dieuwke Hupkes, Yukiko Kikuchi, Julia Kursell, Marie-Élaine Lagrois, Hugo Merchant, Björn Merker, Iain Morley, Aniruddh D. Patel, Isabelle Peretz, Martin Rohrmeier, Constance Scharff, Carel ten Cate, Laurel J. Trainor, Sandra E. Trehub, Peter Tyack, Dominique Vuvan, Geraint Wiggins, Willem Zuidema
Why embodied approaches to cognition are better able to address the performative dimensions of art than the dualistic conceptions fundamental to theories of digital computing.
If machine learning transforms the nature of knowledge, does it also transform the practice of critical thought?
Experts offer strategies for managing people in technocentric times.
An accessible introduction to the artificial intelligence technology that enables computer vision, speech recognition, machine translation, and driverless cars. Deep learning is an artificial intelligence technology that enables computer vision, speech recognition in mobile phones, machine translation, AI games, driverless cars, and other applications. When we use consumer products from Google, Microsoft, Facebook, Apple, or Baidu, we are often interacting with a deep learning system. In this volume in the MIT Press Essential Knowledge series, computer scientist John Kelleher offers an accessible and concise but comprehensive introduction to the fundamental technology at the heart of the artificial intelligence revolution. Kelleher explains that deep learning enables data-driven decisions by identifying and extracting patterns from large datasets; its ability to learn from complex data makes deep learning ideally suited to take advantage of the rapid growth in big data and computational power. Kelleher also explains some of the basic concepts in deep learning, presents a history of advances in the field, and discusses the current state of the art. He describes the most important deep learning architectures, including autoencoders, recurrent neural networks, and long short-term memory networks, as well as such recent developments as Generative Adversarial Networks and capsule networks. He also provides a comprehensive (and comprehensible) introduction to the two fundamental algorithms in deep learning: gradient descent and backpropagation. Finally, Kelleher considers the future of deep learning—major trends, possible developments, and significant challenges.
In this new edition of his classic work on artificial intelligence, Herbert Simon continues his exploration of the organization of complexity and the science of design, adding a chapter that sorts out the current themes and tools—chaos, adaptive systems, genetic algorithms—for analyzing complexity and complex systems.
A comprehensive overview of developments in augmented reality, virtual reality, and mixed reality—and how they could affect every part of our lives. After years of hype, extended reality—augmented reality (AR), virtual reality (VR), and mixed reality (MR)—has entered the mainstream. Commercially available, relatively inexpensive VR headsets transport wearers to other realities—fantasy worlds, faraway countries, sporting events—in ways that even the most ultra-high-definition screen cannot. AR glasses receive data in visual and auditory forms that are more useful than any laptop or smartphone can deliver. Immersive MR environments blend physical and virtual reality to create a new reality. In this volume in the MIT Press Essential Knowledge series, technology writer Samuel Greengard offers an accessible overview of developments in extended reality, explaining the technology, considering the social and psychological ramifications, and discussing possible future directions. Greengard describes the history and technological development of augmented and virtual realities, including the latest research in the field, and surveys the various shapes and forms of VR, AR, and MR, including head-mounted displays, mobile systems, and goggles. He examines the way these technologies are shaping and reshaping some professions and industries, and explores how extended reality affects psychology, morality, law, and social constructs. It's not a question of whether extended reality will become a standard part of our world, he argues, but how, when, and where these technologies will take hold. Will extended reality help create a better world? Will it benefit society as a whole? Or will it merely provide financial windfalls for a select few? Greengard's account equips us to ask the right questions about a transformative technology.
A philosopher considers entertainment, in all its totalizing variety—infotainment, edutainment, servotainment—and traces the notion through Kant, Zen Buddhism, Heidegger, Kafka, and Rauschenberg.
How productivity culture and technology became emblematic of the American economic system in pre- and postwar Germany. The concept of productivity originated in a statistical measure of output per worker or per work-hour, calculated by the US Bureau of Labor Statistics. A broader productivity culture emerged in 1920s America, as Henry Ford and others linked methods of mass production and consumption to high wages and low prices. These ideas were studied eagerly by a Germany in search of economic recovery after World War I, and, decades later, the Marshall Plan promoted productivity in its efforts to help post-World War II Europe rebuild. In Productivity Machines, Corinna Schlombs examines the transatlantic history of productivity technology and culture in the two decades before and after World War II. She argues for the interpretive flexibility of productivity: different groups viewed productivity differently at different times. Although it began as an objective measure, productivity came to be emblematic of the American economic system; post-World War II West Germany, however, adapted these ideas to its own political and economic values. Schlombs explains that West German unionists cast a doubtful eye on productivity's embrace of plant-level collective bargaining; unions fought for codetermination—the right to participate in corporate decisions. After describing German responses to US productivity, Schlombs offers an in-depth look at labor relations in one American company in Germany—that icon of corporate America, IBM. Finally, Schlombs considers the emergence of computer technology—seen by some as a new symbol of productivity but by others as the means to automate workers out of their jobs.
An examination of the meaning of meaninglessness: why it matters that nothing matters. When someone is labeled a nihilist, it's not usually meant as a compliment. Most of us associate nihilism with destructiveness and violence. Nihilism means, literally, "an ideology of nothing." Is nihilism, then, believing in nothing? Or is it the belief that life is nothing? Or the belief that the beliefs we have amount to nothing? If we can learn to recognize the many varieties of nihilism, Nolen Gertz writes, then we can learn to distinguish what is meaningful from what is meaningless. In this addition to the MIT Press Essential Knowledge series, Gertz traces the history of nihilism in Western philosophy from Socrates through Hannah Arendt and Jean-Paul Sartre. Although the term "nihilism" was first used by Friedrich Jacobi to criticize the philosophy of Immanuel Kant, Gertz shows that the concept can illuminate the thinking of Socrates, Descartes, and others. It is Nietzsche, however, who is most associated with nihilism, and Gertz focuses on Nietzsche's thought. Gertz goes on to consider what is not nihilism—pessimism, cynicism, and apathy—and why; he explores theories of nihilism, including those associated with Existentialism and Postmodernism; he considers nihilism as a way of understanding aspects of everyday life, calling on Adorno, Arendt, Marx, and prestige television, among other sources; and he reflects on the future of nihilism. We need to understand nihilism not only from an individual perspective, Gertz tells us, but also from a political one.
An elaborately illustrated A to Z of the face, from historical mugshots to Instagram posts. By turns alarming and awe-inspiring, Face offers up an elaborately illustrated A to Z—from the didactic anthropometry of the late nineteenth century to the selfie-obsessed zeitgeist of the twenty-first. Jessica Helfand looks at the cultural significance of the face through a critical lens, both as social currency and as palimpsest of history. Investigating everything from historical mugshots to Instagram posts, she examines how the face has been perceived and represented over time; how it has been instrumentalized by others; and how we have reclaimed it for our own purposes. From vintage advertisements for a "nose adjuster" to contemporary artists who reconsider the visual construction of race, Face delivers an intimate yet kaleidoscopic adventure while posing universal questions about identity.
Investigating the entanglement of industry, politics, culture, and economics at the frontier of ocean excavation through an innovative union of art and science. The oceans are crucial to the planet's well-being. They help regulate the global carbon cycle, support the resilience of ecosystems, and provide livelihoods for communities. As guardians of planetary health, the oceans are threatened by many forces, including growing extractivist practices. Through the innovative lens of artistic research, Prospecting Ocean investigates the entanglement of industry, politics, culture, and economics at the frontier of ocean excavation. The result is a richly illustrated study that unites science and art to examine the ecological, cultural, philosophical, and aesthetic reverberations of this current threat to the oceans. Prospecting Ocean takes as its starting point an exhibition by the photographer and filmmaker Armin Linke, which was commissioned by TBA21-Academy, London, and first shown at the Institute of Marine Science (CNR-ISMAR) in Venice. Linke is concerned with making the invisible visible, and here he unmasks the technologies that enable extractions from the ocean, including future seabed mining for minerals and sampling of genetic data. But the book extends far beyond Linke's research, presenting the latest research from a variety of fields and employing art as the place where disciplines can converge. Integrating the work of artists with scientific, theoretical, and philosophical analysis, Prospecting Ocean demonstrates that visual culture offers new and urgent perspectives on ecological crises.
Experts review the latest research on the neocortex and consider potential directions for future research.
A milestone work that examines the democratic idea of photography and its expansion in common culture, particularly in the United States; generously illustrated.
A proposal for using cost-benefit analysis to evaluate the socioeconomic impact of public investment in large scientific projects.
An introduction to the use of probability models for analyzing risk and economic decisions, using spreadsheets to represent and simulate uncertainty. This textbook offers an introduction to the use of probability models for analyzing risks and economic decisions. It takes a learn-by-doing approach, teaching the student to use spreadsheets to represent and simulate uncertainty and to analyze the effect of such uncertainty on an economic decision. Students in applied business and economics can more easily grasp difficult analytical methods with Excel spreadsheets. The book covers the basic ideas of probability, how to simulate random variables, and how to compute conditional probabilities via Monte Carlo simulation. The first four chapters use a large collection of probability distributions to simulate a range of problems involving worker efficiency, market entry, oil exploration, repeated investment, and subjective belief elicitation. The book then covers correlation and multivariate normal random variables; conditional expectation; optimization of decision variables, with discussions of the strategic value of information, decision trees, game theory, and adverse selection; risk sharing and finance; dynamic models of growth; dynamic models of arrivals; and model risk. New material in this second edition includes two new chapters on additional dynamic models and model risk; new sections in every chapter; many new end-of-chapter exercises; and coverage of such topics as simulation model workflow, models of probabilistic electoral forecasting, and real options. The book comes equipped with Simtools, free, open-source software used throughout the book, which allows students to conduct Monte Carlo simulations seamlessly in Excel.
Economists offer rigorous quantitative analyses of how the institutional design and purpose of the WTO (and its progenitor, the GATT) affect economic development.
An argument that—despite dramatic advances in the field—artificial intelligence is nowhere near developing systems that are genuinely intelligent. In this provocative book, Brian Cantwell Smith argues that artificial intelligence is nowhere near developing systems that are genuinely intelligent. Second wave AI, machine learning, even visions of third-wave AI: none will lead to human-level intelligence and judgment, which have been honed over millennia. Recent advances in AI may be of epochal significance, but human intelligence is of a different order than even the most powerful calculative ability enabled by new computational capacities. Smith calls this AI ability "reckoning," and argues that it does not lead to full human judgment—dispassionate, deliberative thought grounded in ethical commitment and responsible action. Taking judgment as the ultimate goal of intelligence, Smith examines the history of AI from its first-wave origins ("good old-fashioned AI," or GOFAI) to such celebrated second-wave approaches as machine learning, paying particular attention to recent advances that have led to excitement, anxiety, and debate. He considers each AI technology's underlying assumptions, the conceptions of intelligence targeted at each stage, and the successes achieved so far. Smith unpacks the notion of intelligence itself—what sort humans have, and what sort AI aims at. Smith worries that, impressed by AI's reckoning prowess, we will shift our expectations of human intelligence. What we should do, he argues, is learn to use AI for the reckoning tasks at which it excels while we strengthen our commitment to judgment, ethics, and the world.
An argument that information exists at different levels of analysis—syntactic, semantic, and pragmatic—and an exploration of the implications. Although this is the Information Age, there is no universal agreement about what information really is. Different disciplines view information differently; engineers, computer scientists, economists, linguists, and philosophers all take varying and apparently disconnected approaches. In this book, Antonio Badia distinguishes four levels of analysis brought to bear on information: syntactic, semantic, pragmatic, and network-based. Badia explains each of these theoretical approaches in turn, discussing, among other topics, theories of Claude Shannon and Andrey Kolmogorov, Fred Dretske's description of information flow, and ideas on receiver impact and informational interactions. Badia argues that all these theories describe the same phenomena from different perspectives, each one narrower than the previous one. The syntactic approach is the most general one, but it fails to specify when information is meaningful to an agent, which is the focus of the semantic and pragmatic approaches. The network-based approach, meanwhile, provides a framework to understand information use among agents. Badia then explores the consequences of understanding information as existing at several levels. Humans live at the semantic and pragmatic level (and at the network level as a society), computers at the syntactic level. This sheds light on some recent issues, including "fake news" (computers cannot tell whether a statement is true or not, because truth is a semantic notion) and "algorithmic bias" (a pragmatic, not syntactic, concern). Humans, not computers, the book argues, have the ability to solve these issues.
An advanced treatment of modern macroeconomics, presented through a sequence of dynamic equilibrium models, with discussion of the implications for monetary and fiscal policy. This textbook offers an advanced treatment of modern macroeconomics, presented through a sequence of dynamic general equilibrium models based on intertemporal optimization on the part of economic agents. The book treats macroeconomics as applied and policy-oriented general equilibrium analysis, examining a number of models, each of which is suitable for investigating specific issues but may be unsuitable for others. After presenting a brief survey of the evolution of macroeconomics and the key facts about long-run economic growth and aggregate fluctuations, the book introduces the main elements of the intertemporal approach through a series of two-period competitive general equilibrium models—the simplest possible intertemporal models. This sets the stage for the remainder of the book, which presents models of economic growth, aggregate fluctuations, and monetary and fiscal policy. The text focuses on a full analysis of a limited number of key intertemporal models, which are stripped down to essentials so that students can focus on the dynamic properties of the models. Exercises encourage students to try their hands at solving versions of the dynamic models that define modern macroeconomics. Appendixes review the main mathematical techniques needed to analyze optimizing dynamic macroeconomic models. The book is suitable for advanced undergraduate and graduate students who have some knowledge of economic theory and mathematics for economists.
An argument that theoretical works can signify through their materiality—their "noise," or such nonsemantic elements as typography—as well as their semantic content. In Material Noise, Anne Royston argues that theoretical works signify through their materiality—such nonsemantic elements as typography or color—as well as their semantic content. Examining works by Jacques Derrida, Avital Ronell, Georges Bataille, and other well-known theorists, Royston considers their materiality and design—which she terms "noise"—as integral to their meaning. In other words, she reads these theoretical works as complex assemblages, just as she would read an artist's book in all its idiosyncratic tangibility. Royston explores the formlessness and heterogeneity of the Encyclopedia Da Costa, which published works by Bataille, André Breton, and others; the use of layout and white space in Derrida's Glas; the typographic illegibility—"static and interference"—in Ronell's The Telephone Book; and the enticing surfaces of Mark C. Taylor's Hiding, its digital counterpart The Réal: Las Vegas, NV, and Shelley Jackson's Skin. Royston then extends her analysis to other genres, examining two recent artists' books that express explicit theoretical concerns: Johanna Drucker's Stochastic Poetics and Susan Howe's Tom Tit Tot. Throughout, Royston develops the concept of artistic arguments, which employ signification that exceeds the semantics of a printed text and are not reducible to a series of linear logical propositions. Artistic arguments foreground their materiality and reflect on the media that create them. Moreover, Royston argues, each artistic argument anticipates some aspect of digital thinking, speaking directly to such contemporary concerns as hypertext, communication theory, networks, and digital distribution.