The Alignment Problem: Machine Learning and Human Values



Customers say

Customers find the book informative, insightful, and thoughtful. They describe it as a great, engrossing, and solid read. Readers also find the information interesting and eye-opening. Opinions are mixed on the writing quality, with some finding it well-written and others saying it’s not easy to understand.


This Post Has 9 Comments

  1. Superb overview of A.I. progress and issues both technical and social
    Put as broadly as I can manage, the problem with artificial intelligence (A.I.) systems is that, one way or another, they take their training too literally. Human beings grow up in some socio-cultural context. When we are taught to do something potentially dangerous (for example, driving a car) and then face a road situation we did not encounter in our training, our cultural context informs our ability to improvise appropriately in most (though not all) cases. This is precisely what computers cannot do, and that is why society faces often unanticipated dangers when it gives machines autonomous control over potentially harmful activities.
    Mr. Christian, a seasoned author on the subject, provides a comprehensive review of the history of A.I. development in this book. The review illustrates not only how a particular effect was achieved but also what went wrong in early attempts and how those issues were corrected, often but not always with complete success. The author carries us forward historically through determinate (playing chess, go, or any game in which there is a specifiable outcome), partially determinate (safely operating a car, boat, or aircraft), and indeterminate (morality) domains.
    In my career, I first encountered A.I. in the form of “expert systems” developed in the late 1970s and early 1980s to help energy companies find oil. I lost track of the field after that until the development of facial recognition systems and ChatGPT. Christian’s book filled in the gaps between the early 1980s and today. He has given me a sense of how much has been accomplished and in how many different ways, how much there is left to do, and the intractable issues about which we cannot instruct machines because we do not know how to resolve them ourselves. Well written, great read!

  2. Excellent overview of alignment research
    This is an introduction to the alignment problem, and then an overview of the problems and solutions that have developed over time. It’s highly useful for anyone working in the AI/ML space, because it also has a lot of tips and tricks for solving common problems with models, and overviews of a lot of the major techniques that are used. (The chapters on reinforcement learning were worth it by themselves — a detailed description of the process and variables with examples). Overall, terrific read.

  3. Great intro for knowledgeable beginners!
    I work in data science, but had no formal training in computer science or data in college. I found the examples and history interesting and loved how technical concepts were explained without overly technical language. To me, this is the equivalent of “Thinking, Fast and Slow” but for machine learning (with less repetition of concepts and more tangible examples!)
    I gifted a copy to my dad as well, who is fascinated with ChatGPT (I wanted to temper his expectations of this tool, haha), and he also enjoyed it, though to a lesser degree given that he had less experience with this technology. TL;DR: a delightful introduction to AI that is best for non-experts who have some experience and familiarity with statistics or data analytics.

  4. When I started this I flashed (if only for a second): this is the book of the century (so far)
    When I started reading this I flashed (if only for a second): this is the book of the century (so far), because it hits the exact center of humanity’s concerns: being able to leverage our technology to survive, to the benefit of all of us.
    A bit over-enthusiastic, OK, but there are so many high-level, yet not pompous or condescending, ideas in this book, and the style itself is a lesson in logical structure, something that is almost totally lacking in written language in America anymore. There is profit in friction and obfuscation these days, especially in teaching and journalism, and in so many books that all say the same things.
    Within 5 pages, or maybe 5 minutes of the audiobook, I was struck with an idea that alone was worth the price and time of the book, about neurons and how McCulloch & Pitts came to think of how they worked. I’ve read 100 descriptions of this, but Christian’s narrative made sense of it succinctly and intuitively.
    As I continued (I am not quite finished with it, though close enough that I realize I will miss it when it is done), I came to feel that I can review this as a very educational, inspirational, and informative book about what is going on in this century. A book worth reading by everyone.
    That’s kind of why I have been thinking of this as the book of the century. AI, but not just the electronic AI, because, like driverless cars, humans are going along for the ride, and that really means that unless we want our society to become lumbering neanderthal brutes, we have to raise the intelligence of the average citizen from about a 4th-grade level to what we would consider the Ph.D. level, and not for profit over others, but for our own survival as involved citizens. Like we build intelligence into our computers, only evolved.
    This book made me think of a lot of things, and was a rich reading experience in multiple dimensions. It’s good to know someone is capable of working and writing like Brian Christian in “The Alignment Problem”. 5/5 stars.
    Humanity will survive AI, but, like a wreck involving an autonomous vehicle, there may be megacrashes on the way if we are not all on the ball.

  5. One of the best books around on AI alignment, especially for the non-technical reader. But Christian did such thorough research, explains things so authoritatively and clearly, and brings in the voices of the experts from whom he learned, that even experts will enjoy and benefit from this review of the field. I learned a lot. I may well assign it as a reading in my teaching, and I only do that for books that I think students will really get a lot out of.

  6. I got progressively more interested the further I advanced in the book. Also, the hardcover is light blue, which has an appeal to me.

  7. This book deals with the alignment problem, analyzed from different perspectives over time while scaling out its abstraction and complexity. Being an IT person, I am fascinated by what machine learning has achieved so far and what yet needs to be done for AI to be integrated into our society. The book does not require technical knowledge, and I recommend it to anyone interested in machine learning or data engineering, but also in policy making around AI. I like the style of the book, in part historical with a lot of references, and it has, for me, the right reading pace, without dwelling too long on any one topic.

  8. Trying to understand alignment of human values and machine learning
    It’s a great eye-opener and easy to understand; it covers a lot of ground.

  9. Admittedly, I am anything but an expert in the fields of artificial intelligence (AI) and machine learning (ML). But precisely for that reason I was grateful for this book, which is explicitly aimed (also) at non-experts and therefore largely dispenses with technical terminology and (mathematically) formalized presentations. People far better versed than I am in these topics and research fields have confirmed to me that the book’s author, Brian Christian, has great expertise, and that the accessibility of his presentation does not come at the expense of factual correctness. All in all, then, one of those profound non-fiction books for which one can still envy the English-speaking world.
    The alignment or orientation problem named in the title is largely explained by the subtitle: it concerns the extent to which ML-based AI systems are in accord with human values, can come into conflict with them, or already have. Concrete problems range from image recognition and classification systems with gender or racial bias to the use of AI-based systems to estimate the recidivism risk of offenders whose parole is being decided. The book contains numerous further examples of ML deployed in different areas of society.
    Christian takes a middle position between pessimistic techno-dystopia and optimistic AI euphoria. He meticulously names the concrete problems and examines their causes. Much, he says, gives cause for skepticism and concern, but at the same time he stresses that the problems are, in principle, technically manageable. To demonstrate the latter, he dives deep into the current research landscape and traces astonishing lines of development, some of which reach quite far back into the history of science. It is remarkable, among other things, how different disciplines influence one another, and which disciplines are involved: that computer scientists and engineers working on ML development might take an intense interest in the findings of primate research, developmental psychology, or educational science would hardly have been predicted a few decades ago.
    I read the book with very great profit. One can criticize Christian’s occasional reach into the journalistic bag of tricks to make his narratives ‘exciting,’ but that seems forgivable to me, because it does not come at the expense of truthfulness. More problematic, it seems to me, is that the author largely ignores the fact that a considerable part of AI/ML research takes place within the institutions of the big tech corporations, and one should certainly ask to what extent this fact (AI/ML research dependent on corporations with profit-maximization interests) brings with it an alignment problem all its own.
