Robots in Our Midst: A Conversation with Jerry Kaplan

Jerry Kaplan, author of Humans Need Not Apply, says the robots are coming, but whether they will be working on behalf of society or a small cadre of the super-rich is very much in doubt. We sat down with him to discuss what the future may look like and how humans might fit into this new reality.


Yale University Press: So is the future going to be more like Star Trek or Terminator?

Jerry Kaplan: That depends on who you are. Advances percolating in research labs around the world are poised to have a far greater impact on our lives and livelihoods than most people realize. While these new technologies will be dazzling—self-driving cars, robotic farmers, computerized drug discovery—our social and economic systems may not be up to the task of distributing the benefits broadly across society. As people lose their jobs while the profits accrue to the already wealthy, we may experience a protracted period of social turmoil and extreme income inequality. The future is bright, but the path to it may be very rocky.

YUP: Economists and pundits have been talking about income inequality for some time. What are your thoughts on this issue?

JK: While I’m not the first one to ring the alarm bell, as a Silicon Valley entrepreneur I have a front-row seat to this new industrial revolution and a unique perspective on what to do about it. We need to apply the kind of creative thinking and novel perspectives that the Valley is famous for to solve these societal problems, as opposed to merely enriching ourselves. We need bold ideas to ease our transition to the prosperous times that lie ahead.

YUP: What sorts of bold ideas do you have in mind?

JK: For example, to get a mortgage, your house has to be appraised at a sufficient value and you must plausibly have the ability to make the payments. But if things don't work out, you can walk away and lose only your down payment, so the lender has an incentive to make sure it is risking its money prudently. By contrast, right now just about anyone can get a student loan to study just about anything, regardless of the prospects for a return on this investment, because students are on the hook to pay whether or not what they learn is worth anything. We need new types of financial instruments, which I call "job mortgages," where the payments are conditioned in part on the value of the borrower's future labor. The need to satisfy these lenders would quickly align the training offered by schools with the skills required by employers.
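As a rough illustration of how such an instrument might work, here is a minimal Python sketch of an income-contingent repayment rule. The income share, subsistence floor, and payment cap are hypothetical numbers chosen for the example, not figures proposed in the book.

```python
# Hypothetical sketch of an income-contingent "job mortgage" payment.
# The repayment rule, rates, and thresholds below are illustrative
# assumptions, not figures from the book.

def job_mortgage_payment(monthly_income: float,
                         income_share: float = 0.10,
                         income_floor: float = 2_000.0,
                         payment_cap: float = 600.0) -> float:
    """Payment owed this month, conditioned on the borrower's earnings.

    - Nothing is owed on income below `income_floor` (a subsistence allowance).
    - Above the floor, the lender receives `income_share` of earnings.
    - The payment is capped so successful borrowers are not charged without limit.
    """
    owed = max(0.0, monthly_income - income_floor) * income_share
    return min(owed, payment_cap)


if __name__ == "__main__":
    for income in (1_500, 3_000, 9_000):
        print(f"income ${income:>5}: payment ${job_mortgage_payment(income):.2f}")
```

Under this toy rule, the lender's return depends on whether the education actually raises the borrower's earnings, which is the incentive alignment the proposal is after.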

Another idea is that for larger corporations, tax rates should be progressive, much as personal income taxes are, so that companies that are widely held pay lower rates than ones that are closely held. This puts companies whose profits benefit more people (such as those held by pension funds) at a competitive advantage over those that pay dividends only to a smaller group, such as a wealthy family.
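To make the idea concrete, here is a minimal sketch of a tax schedule keyed to how broadly a company's profits are distributed. The brackets, rates, and the "number of beneficial owners" measure are illustrative assumptions, not a schedule proposed in the interview.

```python
# Hypothetical sketch of an ownership-progressive corporate tax:
# the rate falls as profits are spread across more beneficial owners.
# Brackets and rates are made up for illustration.

def corporate_tax_rate(beneficial_owners: int) -> float:
    """Return a tax rate that decreases as ownership becomes more widely held."""
    if beneficial_owners >= 1_000_000:   # e.g. held broadly via pension funds
        return 0.15
    if beneficial_owners >= 10_000:
        return 0.25
    return 0.35                          # closely held, e.g. a single family


if __name__ == "__main__":
    profit = 100_000_000.0
    for owners in (50, 50_000, 5_000_000):
        rate = corporate_tax_rate(owners)
        print(f"{owners:>9} owners: rate {rate:.0%}, tax ${profit * rate:,.0f}")
```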

YUP: What are some recent technological developments that indicate we are turning more of our lives over to computers, perhaps in ways we don’t even think about?

JK: To start with one obvious but by no means isolated example, our cars will soon be driving themselves. Great for passengers, bad for professional drivers. But along with the added convenience, these remarkable devices will usurp many ethical decisions, such as whether to save your life or a school bus full of children. Are we really ready to let machines decide who should live or die, and if so, what moral principles do we want them to follow?

YUP: Why is this so dangerous? Aren’t computers more efficient at doing things than we are?

JK: When it comes to social and economic issues, efficiency is not the only goal: fairness and equity are also important. In the future, computers may be hiring, firing, and directing the work of much of the labor force on behalf of distant, detached owners and investors. I believe our economy should work for us, not the other way around.

YUP: How have your experiences as an entrepreneur shaped your views on robots, our culture, and our economy?

JK: In my experience, Karl Marx was right: the struggle between labor and capital is a losing proposition for the workers. But the answer isn't to expand our social safety net with more welfare and handouts to the disenfranchised; that simply stirs the pot in the hope of keeping it from boiling over. Instead, we must ensure that more people can ride the gravy train. This means investing in new ways to help people acquire the skills and the tools they will need to prosper in an age of extreme automation. We need to train future entrepreneurs and capitalists, not laborers and clerks.

YUP: We’ve all seen the movies where the robots rise up and take over. Is this really going to happen?

JK: The short answer is no. The popular image of anthropomorphic robots “coming alive” is great for Hollywood blockbusters, but it’s not going to become reality. Machines are not people, and they do not have independent desires, instincts, and aspirations. A robot designed to wash and fold laundry isn’t going to wake up one day and decide it really wants to become a concert violinist. The future battleground with machines will be economic, not military.

YUP: So we don’t have to worry about robotic soldiers?

JK: We do, but the robots will be working for people, not for themselves. The use of intelligent machines in war is already a significant issue. A new arms race is under way, and we need to be prepared not only to defeat systems developed by others, but also to safely and humanely control those we deploy to defend us.

YUP: You write a lot in the book about the ethics of using robots in different aspects of our lives. Are there areas where this concerns you most?

JK: We will need to carefully manage where, when, and how a robot or computer program can act on someone's behalf. The problem is obvious when a robot stands in front of you in line at the movie theater, but less apparent, even today, when ticket scalpers' computers scarf up all the good seats to a hot concert. Robots don't naturally abide by human social conventions, so we're going to have to put regulations in place to govern their use and moderate their behavior.

YUP: You have been affiliated with the Stanford Artificial Intelligence Laboratory for some years. What are some of the developments there that have struck you as the most important?

JK: The two most important breakthroughs in artificial intelligence are in machine learning and sensory perception. In our increasingly electronic world, teaching a machine some new skill may be as simple as connecting it to the Internet or hooking up a camera, then letting it watch what you do. The combination of these two new technologies will permit machines to perform a wide variety of tasks that today require human skills or intelligence. The old adage that “computers can only do what they are programmed to do” is no longer true.

YUP: If you could choose one thing for readers to think about, what would it be?

JK: Today, we tolerate a certain degree of economic and social inequity in the interest of raising our average standard of living. But the coming wave of automation raises the specter of a world where the one-percenters have it all, while everyone else struggles to survive. Just how far are you willing to let this go? We certainly want to reward those whose ideas and hard work improve our lives, but how much is enough?


Jerry Kaplan is widely known in the computer industry as a serial entrepreneur, technical innovator, and best-selling author. He is currently a Fellow at the Center for Legal Informatics at Stanford University and teaches the ethics and impact of artificial intelligence in the Computer Science Department.


Further Reading:

Humans Need Not Apply

