Prediction Machines: The Simple Economics of Artificial Intelligence



This Post Has 10 Comments

  1. Easy must read book on AI
    Excellent book on AI in clear, simple language. A perfect intro for anyone trying to understand this new technology that is affecting all aspects of our lives.

  2. High-level view of less-than-obvious consequences of prediction technology
    This review echoes the main point of the review by “sweiss” in that there is not a great deal new here for data scientists or machine learning practitioners. However, it is a well-written book, and a more general audience will indeed find much of interest. Furthermore, as economists, the authors are adept at uncovering potential unintended consequences of prediction technology, and this makes for interesting reading.

  3. Fascinating, easy read on AI
    I found the book an easy-to-read introduction to how AI will affect business. The authors use stories to explain how AI affects decisions. How soon should you use an AI in the field? The book compares fast food workers with airline pilots. Fast food workers receive little training before they are thrown into the job. They make mistakes at first, but the consequences are small. So it is worth having them go right to work. Airline pilots need thousands of hours of experience before they are allowed to fly commercial jets. The consequences of a mistake are too big. US Airways pilot Sully Sullenberger safely landed his plane on the Hudson River, saving many lives and drawing on decades of experience. The book says AIs are the same. When the consequences are small, AIs can be put to work immediately. But when the consequences of a mistake are big, it is important to wait until the AI is almost perfect.

  4. Three Economists Demystify Artificial Intelligence
    The authors, three economists from the University of Toronto, do a great job of demystifying artificial intelligence by examining it through the lens of standard economic theory. They are clear that they are not examining AGI (artificial general intelligence), but rather the artificial intelligence produced by algorithms in common, and ever-increasing, use today. When you go online and get a recommendation for a product, or you ask a question of Alexa, Siri, or Google, the recommendation or answer that you get is produced by algorithms (prediction machines, in the authors’ definition).
The authors’ basic premise is that these prediction machines have become, and are becoming, so cheap that their use has expanded, and will continue to expand, dramatically across a range of businesses. They analogize this expansion to the spread of electricity and cars during the early decades of the last century. Those technologies dramatically changed how work was done, the skills needed to do it, and the number and type of jobs required by the economy. Jobs were both created and destroyed, and it took time for this to occur. The authors expect the same effect from the prediction machines.
The book looks at the possible effect on the types of jobs at which humans will excel. Judgment will become more valuable as a complement to the input of artificial intelligence. Jobs will have to be redesigned and work flows altered. Strategy in the C-suite will also be affected by artificial intelligence, and the occupants of top management positions will have to adjust; the book suggests how.
After reading this book, I read the July/August edition of MIT Technology Review, which states on the cover “AI and robots are wreaking economic havoc. We need more of them.” A number of articles in the magazine paint a cautionary picture of the prediction machines (“Confessions of an accidental job destroyer”). The authors of Prediction Machines recognize the potential adverse consequences and social risks that the current edition of MIT Technology Review addresses, so the book and the magazine are not in conflict.
If you’re interested in artificial intelligence and want to read a book that examines the topic dispassionately, then I recommend it highly. The authors did a fine job of making the topic highly accessible.

  5. Covers basic concepts well, but lacks depth
    Agrawal, Gans, and Goldfarb are professors at the University of Toronto’s Rotman School of Management. Agrawal and Goldfarb also work in the school’s Creative Destruction Lab, a program for science-based start-ups. The authors effectively explain the basic economics of artificial intelligence (AI) and its evident consequences for business. They write in simple and clear language that readers will understand, and they avoid convoluted technical digressions. They discuss some of the lessons from cognitive psychology for the machine-learning approach to AI. One theme is to understand human intelligence itself as a prediction mechanism, since our brains fill in much beyond what we know through our senses. The discussion of the consequences of AI for corporate strategy is useful, although the language is general. I would have appreciated a couple of in-depth case studies.
The overall weakness is that the book stops at the elementary level. The subtitle promises economic analysis, but the authors barely deliver. Their main theme is that machine learning has lowered the cost of prediction. That implies an outward shift in the supply curve for goods and services that rely on prediction, which increases their consumption and lowers their price. The most prominent services that rely on prediction are facial recognition and interpretation of radiological scans. The main goods are robots that rely on images to avoid obstacles and locate work sites, and, therefore, self-driving vehicles.
The authors predict that AI will displace some workers and sustain the employment of others by increasing their productivity. They compare productivity-augmenting AI to the introduction of spreadsheet programs (VisiCalc, Lotus 1-2-3, and Excel). Clerks became more valuable because they could produce calculations more quickly and could run alternative scenarios, for example, to determine how changes in assumptions would change the profitability of a project. In a further example, AI might increase the demand for software engineers by increasing their productivity.
There’s little argument that AI will cause, at a minimum, a short-term surge of unemployment. What’s unclear is whether there will be a net decline in employment once all the consequences of the technology have worked through the economic system. There has been much published research on the likely impact of robots on employment. The leading researcher is David Autor, of the MIT economics department. He wrote that “…journalists and even expert commentators tend to overstate the extent of machine substitution for human labor and ignore the strong complementarities between automation and labor that increase productivity, raise earnings, and augment demand for labor.” Moreover, the increased income to owners of this new technology should raise spending (demand), inducing a compensating boost to employment. Most researchers are unsure whether the net, long-term effect of these economic processes will be to decrease employment. A more realistic concern is that AI will lead to greater wage inequality between those with STEM education, creativity, and judgment on one hand and those with a high-school or lower education on the other. Unfortunately, the authors have nothing to add to this central debate.
The authors do write about the way AI should influence corporate strategy. If AI is central to the enterprise, as with Google’s Waymo, which develops self-driving cars, then the strategy could be AI-first. This means developing AI even at the expense of other objectives, such as customer satisfaction. Securing enough data is a priority for the development of AI. Tesla’s strategy has been to introduce AI software early so that it can learn from data collected in real-world use by customers. It has been releasing its self-driving software in parts, rather than waiting until it can produce a car approved to drive without human supervision.

  6. Books generally focus heavily on the long-term risks and dangers of AI. Prediction Machines focuses instead on how society will change over the medium to long term, offering a very interesting perspective on these impacts.

  7. Ajay Agrawal, Joshua Gans and Avi Goldfarb are professors at the University of Toronto’s Rotman School of Management. Prediction Machines is a very interesting, well-written book that frames artificial intelligence in economic terms as delivering “cheap predictions.” This may sound trivial, but as the authors point out, when a strategic commodity becomes cheap, it can change everything. They use the example of light:
“Chances are you are reading this book under some kind of artificial light. Moreover, you probably never thought about whether using artificial light for reading was worth it. Light is so cheap that you use it with abandon. But, as the economist William Nordhaus meticulously explored, in the early 1800s it would have cost you four hundred times what you are paying now for the same amount of light. At that price, you would notice the cost and would think twice before using artificial light to read this book. The subsequent drop in the price of light lit up the world. Not only did it turn night into day, but it allowed us to live and work in big buildings that natural light could not penetrate. Virtually nothing we have today would be possible had the cost of artificial light not collapsed to almost nothing.”
Some prediction machines have already been proving their worth for more than a decade:
“The biggest science project of the iPhone was the soft keyboard. But as late as 2006 (the iPhone was launched in 2007), the keyboard was terrible. Not only could it not compete with the BlackBerry, but it was so frustrating that no one would use it to type a text message, let alone an e-mail. The problem was that to fit it on the 4.7 inch LCD screen, the keys were very small. That meant it was easy to hit the wrong one. Many Apple engineers came up with designs that moved away from the QWERTY keyboard. With just three weeks to find a solution – a solution that, if not found, might have killed the whole project – every single iPhone software developer had free rein to explore other options. By the end of three weeks, they had a keyboard that looked like a small QWERTY keyboard with a substantial tweak. While the image the user saw did not change, the surface area around a particular set of keys expanded when typing. When you type a “t,” it is highly probable that the next letter will be an “h” and so the area around that key expanded. Following that, “e” and “i” expanded, and so on. This was the result of an AI tool at work. Ahead of virtually anyone else, Apple engineers used 2006-era machine learning to build predictive algorithms so that key size changed depending on what a person was typing.”
“Today, AI tools predict the intention of speech (Amazon’s Echo), predict command context (Apple’s Siri), predict what you want to buy (Amazon’s recommendations), predict which links will connect you to the information you want to find (Google search), predict when to apply the brakes to avoid danger (Tesla’s Autopilot), and predict the news you will want to read (Facebook’s newsfeed).”
So, with the cost of predictions ultimately falling to almost nothing, what are the implications? Agrawal, Gans and Goldfarb make several interesting and provocative points:
- Prediction is the process of filling in missing information. It takes information you have, often called “data,” and uses it to generate information you don’t have. In addition to generating information about the future, prediction can generate information about the present and the past.
This happens when prediction classifies credit card transactions as fraudulent, a tumor in an image as malignant, or whether the person holding an iPhone is its owner.
- The drop in the cost of prediction will impact the value of other things, increasing the value of complements (data, judgment, and action) and diminishing the value of substitutes (human prediction).
- Data is the new oil. Prediction machines rely on data. More and better data leads to better predictions. In economic terms, data is a key complement to prediction. It becomes more valuable as prediction becomes cheaper.
- Prediction uses three types of data: 1) training data for training the AI, 2) input data for predicting, and 3) feedback data for improving prediction accuracy.
- From a statistical perspective, data has diminishing returns. Each additional unit of data improves your prediction less than the prior data; the 10th observation improves prediction by more than the 1,000th.
- Humans, including professional experts, make poor predictions under certain conditions. Humans often overweight salient information and do not account for statistical properties (cf. Daniel Kahneman’s Thinking, Fast and Slow).
- Prediction machines are better than humans at factoring in complex interactions among different indicators, especially in settings with rich data. As the number of dimensions for such interactions grows, the ability of humans to form accurate predictions diminishes, especially relative to machines.
- Prediction machines scale. The unit cost per prediction falls as the frequency increases. Human prediction does not scale the same way. However, humans have cognitive models of how the world works and thus can make predictions based on small amounts of data.
Thus, we anticipate a rise in human “prediction by exception,” whereby machines generate most predictions because they are predicated on routine, regular data, but when rare events occur the machine recognizes that it cannot produce a prediction with confidence and calls for human assistance.
- Prediction machines are so valuable because 1) they can often produce better, faster, and cheaper predictions than humans can; 2) prediction is a key ingredient in decision making under uncertainty; and 3) decision making is ubiquitous throughout our economic and social lives. However, a prediction is not a decision; it is only a component of a decision. The other components are judgment, action, and outcome.
- By breaking down a decision into its components, we can understand the impact of prediction machines on the value of human and other assets. The value of substitutes for prediction machines, namely human prediction, will decline. However, the value of complements, such as the human skills associated with data collection, judgment, and action, will increase.
- Judgment involves determining the relative payoff associated with each possible outcome of a decision, including those associated with “correct” decisions as well as those associated with mistakes. As prediction machines make predictions increasingly better, faster, and cheaper, the value of human judgment will increase because we’ll need more of it.
- Prediction machines increase the returns to judgment because, by lowering the cost of prediction, they increase the value of understanding the rewards associated with actions. However, judgment is costly.
Figuring out the relative payoffs for different actions in different situations takes time, effort, and experimentation.
- If there is a manageable number of action-situation combinations associated with a decision, then we can transfer the judgment from ourselves to the prediction machine (this is “reward function engineering”) so that the machine can make the decision itself once it generates the prediction. This enables automating the decision. Often, however, there are too many action-situation combinations, such that it is too costly to code up in advance all the payoffs associated with each combination, especially the very rare ones. In these cases, it is more efficient for a human to apply judgment after the prediction machine predicts.
- Machines are bad at predictions for rare events. Managers make decisions on mergers, innovation, and partnerships without data on similar past events for their firms. Humans use analogies and models to make decisions in such unusual situations. Machines cannot predict judgment when a situation has not occurred many times in the past.
- Enhanced prediction enables decision makers, whether human or machine, to handle more “ifs” and more “thens.” That leads to better outcomes. For example, in the case of navigation, prediction machines liberate autonomous vehicles from their previous limitation of operating only in controlled environments, settings characterized by a limited number of “ifs” (or states). Prediction machines allow autonomous vehicles to operate in uncontrolled environments, like a city street, because rather than having to code all the potential “ifs” in advance, the machine can instead learn to predict what a human controller would do in any particular situation.
- The introduction of AI to a task does not necessarily imply full automation of that task. Prediction is only one component. In many cases, humans are still required to apply judgment and take an action.
However, sometimes judgment can be hard-coded or, if enough examples are available, machines can learn to predict judgment. In addition, machines may perform the action. When machines perform all elements of the task, the task is fully automated and humans are completely removed from the loop.
- The tasks most likely to be fully automated first are the ones for which full automation delivers the highest returns. These include tasks where: 1) the other elements are already automated except for prediction (e.g., mining); 2) the returns to speed of action in response to prediction are high (e.g., driverless cars); and 3) the returns to reduced waiting time for predictions are high (e.g., space exploration).
- An important distinction between autonomous vehicles operating on a city street and those on a mine site is that the former generate significant externalities while the latter do not. Autonomous vehicles operating on a city street may cause an accident whose costs are borne by individuals external to the decision maker. In contrast, accidents caused by autonomous vehicles on a mine site only incur costs affecting assets or people associated with the mine. Governments regulate activities that generate externalities, so regulation is a potential barrier to full automation for applications that generate significant externalities.
- AI tools are point solutions. Each generates a specific prediction, and most are designed to perform a specific task. Large corporations are composed of work flows that turn inputs into outputs. Work flows are made up of tasks (e.g., a Goldman Sachs IPO is a work flow comprising 146 distinct tasks).
In deciding how to implement AI, companies will break their work flows down into tasks, estimate the ROI for building or buying an AI to perform each task, rank-order the AIs in terms of ROI, and then start from the top of the list and work downward.
- The authors provide an “AI canvas” to help with the decomposition of tasks in order to see where prediction machines can be inserted. Prediction requires a specificity not found in mission statements. For example, for a business school focused on recruiting the best students, the meaning of the term “best” has to be specified.
- C-suite leadership should not fully delegate AI strategy to the IT department, because powerful AI tools may go beyond enhancing the productivity of tasks performed in the service of the organization’s strategy and instead lead to changing the strategy itself.
Overall, Prediction Machines is a very worthwhile book, and the authors do an admirable job of simplifying some difficult concepts. However, AI is clearly a double-edged sword, and Agrawal, Gans and Goldfarb, like Mark Zuckerberg but unlike Elon Musk, choose to focus almost exclusively on its positive aspects (for the other side of the coin, see Musk in the documentary “Do You Trust This Computer”, or read James Bridle’s “New Dark Age”). While Agrawal, Gans and Goldfarb’s examples are useful and simplify explanations, they are also contestable.
To take an example close to their hearts, they use their AI canvas to illustrate an MBA recruiting decision, with the objective to “predict whether an applicant would be among the 50 most influential alumni 10 years after graduation,” and the inputs of “application forms, resumes, GMAT scores, social media and outcome impact measure.” It does not take a leap of imagination to predict that a huge proportion of the 50 most influential alumni 10 years after graduation would consist of students who were born into the top 1 percent of wealthy families, or even the top 0.1 percent; Rupert Murdoch’s sons are going to be influential regardless of what they do or don’t do. So, should they be automatically admitted?
As well, the unique strength of AIs is that, through unsupervised learning, they can detect patterns in thousands of dimensions, whereas Homo sapiens, even domain experts, are only able to process a few dimensions of information. Therefore, it may be that the most relevant dimensions for identifying influential alumni 10 years after graduation have less to do with the factors identified by Agrawal, Gans and Goldfarb, and more to do with factors that are simply too complicated for Homo sapiens to understand. Are we then to just accept the black-box recommendations?
The authors point out that AI may augment jobs, contract jobs, lead to the reconstitution of jobs, shift the emphasis on the specific skills required for a particular job, or shift the relative return to certain skills. However, the overall pattern is already evident, with the benefits of AI going overwhelmingly to the top 1%, as Ronald Inglehart shows in his book “Cultural Evolution.”
For his part, in his book on AI, “Life 3.0”, Max Tegmark illustrates the rising tide of occupations that AIs can accomplish better than humans, which could soon be almost all occupations. Tegmark contends that this could be a good-news story, presaging an AI utopia where everyone is served by AIs.
But this future is not ours to decide, since the AIs, having evolved into AGIs (artificial general intelligences) much smarter than we are, may not be keen to be slaves to an inferior species. And since they learn through experience, even if they initially serve us, there is no reason to believe they will continue to do so. Tegmark makes a pointed analogy:
“Suppose a bunch of ants create you to be a recursively self-improving robot, much smarter than them, who shares their goals and helps build bigger and better anthills, and that you eventually attain the human-level intelligence and understanding that you have now. Do you think you’ll spend the rest of your days just optimizing anthills, or do you think you might develop a taste for more sophisticated questions and pursuits that the ants have no ability to comprehend? If so, do you think you’ll find a way to override the ant-protection urge that your formicine creators endowed you with, in much the same way that the real you overrides some of the urges your genes have given you? And in that case, might a superintelligent friendly AI find our current human goals as uninspiring and vapid as you find those of the ants, and evolve new goals different from those it learned and adopted from us?
Perhaps there’s a way of designing a self-improving AI that’s guaranteed to retain human-friendly goals forever, but I think it’s fair to say that we don’t yet know how to build one – or even whether it’s possible.”
AIs are defined as “prediction machines,” but as Agrawal, Gans and Goldfarb acknowledge, there is no reason that “judgment” and “action,” currently the domain of Homo sapiens, cannot also be performed by AIs.
The “prediction machines” on which this book focuses are what Oxford professor Nick Bostrom, in his book Superintelligence, would call “oracle AIs.” However, this is just one step away from “genie AIs” that can also judge and take action based on their predictions. Bostrom suggests that we will likely be confronted suddenly with a superintelligent AI of our own creation, when the last piece of the puzzle falls unexpectedly into place. If this AI has an IQ in the thousands or tens of thousands, we will have no idea what it can actually do, any more than a worm has an idea of human capabilities. This intellectual superpower could have at least a half dozen skill sets, any one of which could be parlayed into its singular domination of the world. These include intelligence amplification, strategizing, social manipulation, hacking, technology research, and economic productivity.
Intelligence amplification would allow it to bootstrap its intelligence, becoming exponentially smarter. Strategizing would let it achieve distant goals and overcome any opposition from far less prescient Homo sapiens. Social manipulation would allow it to leverage external resources by recruiting human support; enable a “boxed” AI to persuade its gatekeepers to let it out; and persuade states and organizations to adopt courses of action that suit its purposes.
Nothing prevents AIs from lying to us; indeed, it would be far easier for AIs to lie to Homo sapiens than the reverse, since their understanding of our workings will be virtually complete, whereas our understanding of how AIs come to their representations is already severely stretched with Google DeepMind, Google Brain, and IBM’s Watson. Hacking would let an AI expropriate computational resources over the Internet; a boxed AI could exploit security holes left by all-too-human programmers to escape cybernetic confinement; it could steal financial resources; and it could hack infrastructure, including military robots, drones, nanotech projects, etc. Technology research would allow it to create an unequaled military force, build a surveillance system that would make the NSA look like children, and greatly accelerate space colonization through von Neumann probes. And finally, economic productivity would generate wealth that could be used to buy influence, services, and resources, including hardware. All it takes is any one of these superpowers, and the world could be under its control.
Agrawal, Gans and Goldfarb are right that AIs are prediction machines, but they are also much more. In search algorithms, autonomous vehicles, and other applications, AIs are able to predict “What would a human do?” But this is just the tip of the iceberg. AIs are able not only to predict, but also to judge and to act. So the question is not “What would a human do?” but rather “What would a being with an IQ of 6,000 choose, decide, and do?” As Elon Musk observes, we might not always be happy about the answer.
Our species’ remaining time may be limited, a momentous event predicted by the philosopher Nietzsche in Thus Spoke Zarathustra:
“I teach you the Overman. Man is something that shall be overcome: what have you done to overcome him? All beings so far have created something beyond themselves. Do you want to be the ebb of this great flood? What is the ape to man?
A laughingstock or a painful embarrassment. And man shall be just that for the Overman… The Overman is the meaning of the Earth. Let your will say: the Overman shall be the meaning of the Earth…”
And if artificial intelligence were the Overman?
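The decision decomposition this review summarizes (a prediction machine supplies a probability, human judgment supplies the payoffs for each action and outcome, and the decision rule picks the action with the highest expected payoff) can be sketched in a few lines of Python. This is a minimal illustration of what the authors call “reward function engineering,” not code from the book; the fraud-screening scenario and all payoff numbers are hypothetical.

```python
def decide(p_fraud: float, payoffs: dict) -> str:
    """Choose the action with the highest expected payoff.

    p_fraud  -- the prediction machine's output: probability the
                transaction is fraudulent
    payoffs  -- the judgment: payoffs[action] = (payoff if fraudulent,
                payoff if legitimate)
    """
    def expected(action):
        pay_fraud, pay_legit = payoffs[action]
        return p_fraud * pay_fraud + (1 - p_fraud) * pay_legit
    return max(payoffs, key=expected)

# Hypothetical judgment: blocking a fraud saves 100; blocking a good
# customer costs 20; approving a fraud costs 100; approving a
# legitimate sale earns 5.
PAYOFFS = {
    "block":   (100, -20),
    "approve": (-100, 5),
}

print(decide(0.9, PAYOFFS))   # high fraud probability -> block
print(decide(0.01, PAYOFFS))  # low fraud probability  -> approve
```

Hard-coding the payoff table is exactly the transfer of judgment the review describes: once the payoffs are enumerated, the machine can act on its own prediction. When the action-situation combinations are too numerous or too rare to enumerate, a human applies judgment after the prediction instead.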

  8. It seems that everyone here agrees on giving this book 3 stars, and in my opinion, although it offers some interesting facts, it is very long and contains what in colloquial Spanish we call “paja” (filler). Too much verbosity for too little content. In fact, the authors seem to have realized this, because each chapter, which is very long, ends with a brief recap of what was said before, as in certain school textbooks.
