Thus, the most likely AI scenario appears to be legal capitalism, with mostly gradual (albeit rapid) change overall. Many organizations supply a lot of AI and are forced by law and competition to make their AI behave in civil and legal ways that give customers more of what they want than the alternatives do. Yes, sometimes competition causes companies to mislead customers in ways they can’t see, or to hurt us all a little through things like pollution, but those cases are rare. The best AIs in each area have many competitors with similar abilities. Over time, AIs will become very capable and valuable. (I won’t speculate here on when AIs might transition from powerful tools to sentient agents, as that won’t affect my analysis much.)
Doomers worry that AIs will develop “misaligned” values. But in this scenario, the “values” implicit in AI actions are roughly chosen by the organizations that deploy them and by the customers that use them. Such value options are constantly revealed in typical AI behaviors and probed by trials in unusual situations. When misalignments occur, it is these organizations and their customers who mostly pay the price. Both are therefore well incentivized to frequently monitor and test for any substantial risk of misbehavior in their systems.
And more generally:
As an economics professor, I naturally build my analyses on economics, treating AIs as comparable to both workers and machines, depending on the context. You may think this is a mistake, as AIs are unprecedentedly different, but the economics here is pretty solid. Although it offers great insights into familiar human behaviors, most economic theory is actually based on the abstract agents of game theory, who always make exactly the best possible move. Most AI fears seem understandable in economic terms; we fear losing to them at familiar games of economic and political power.
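The game-theory idealization mentioned above can be made concrete with a toy sketch (my own illustration, not from the post): in a Prisoner’s Dilemma, the abstract agent of economic theory simply computes its best response to whatever the other player does, and such best responses pin down the equilibrium.

```python
# Illustrative only: a game-theoretic agent that "always makes exactly the
# best possible move" is one that plays a best response to the other
# player's action. Payoffs here are the standard Prisoner's Dilemma values.

# Row player's payoff for (my_action, their_action)
PAYOFF = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,
    ("defect", "defect"): 1,
}

def best_response(their_action: str) -> str:
    """Return the action that maximizes my payoff against a fixed opponent action."""
    return max(("cooperate", "defect"), key=lambda a: PAYOFF[(a, their_action)])

# Defecting is the best response to either opponent action, so mutual
# defection is the unique Nash equilibrium of this one-shot game.
print(best_response("cooperate"))  # defect
print(best_response("defect"))     # defect
```

The point is only that the benchmark agent in most economic models is this kind of relentless optimizer, which is part of why economic reasoning transfers naturally to very capable AIs.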
There’s a lot more at the link, common sense everywhere!
Robin Hanson’s post on AI and existential risk appeared first on Marginal REVOLUTION.