Even accountants' jobs are at risk from artificial intelligence, but there is a potential upside, according to this special report.
Cloud services, big data and mobile devices are just some of the recent disruptive technologies that have presented both great threats and opportunities for business and careers – but artificial intelligence promises to dwarf them all in scale.
AI-led automation can potentially add up to $2.2 trillion in value to Australia’s economy by 2030, according to The Automation Advantage, a new study by economics and strategy consulting firm AlphaBeta.
However, the study also acknowledges that up to 3.5 million of Australia’s 12 million workers are at “high risk of being displaced by automation in coming years”.
Highly manual and/or repetitive jobs, such as labouring and cleaning, are identified in most studies as those most likely to be replaced by AI and robots, but it’s not just blue-collar workers facing an uncertain future. High-routine white-collar jobs such as clerks and even accountants are also at risk, according to Artificial Intelligence and Robotics and Their Impact on the Workplace, a study by the International Bar Association.
Musk vs Zuckerberg
Not surprisingly, AI is one of the most divisive technology-related issues of our time. The most recent high-profile evidence of its divisiveness came last month when Elon Musk, CEO and founder of Tesla, slammed Mark Zuckerberg for his “limited” understanding of AI, following comments made by the Facebook boss.
The argument began following an interview last month in which Musk reiterated his bleak view of the emerging technology, describing it as an “existential risk to human civilisation”.
“I have exposure to the very cutting-edge AI, and I think people should be really concerned about it,” Musk told a US National Governors Association meeting. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal.
“AI is a rare case where we need to be proactive about regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it's too late.”
Musk believes the current approach to regulation, which often reacts when “a whole bunch of bad things happen”, such as self-driving car crashes, is not quick enough to deal with the problems AI poses to society.
“AI is a fundamental existential risk to human civilisation, and I don't think people appreciate that," he added. “There will certainly be a lot of job disruption, as robots will be able to do everything better than us – that includes all of us. It really is the scariest problem for me.”
A few days later, during a Facebook livestream with fans, Zuckerberg labelled Musk’s comments as “pretty irresponsible”, saying that he remained “optimistic” and couldn't understand why naysayers would “try and drum up these doomsday scenarios”.
“In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives,” added Zuckerberg.
A Twitter user then posted a link to Zuckerberg’s comments, which spurred Musk to reply: “I’ve talked to Mark about this. His understanding of the subject is limited.”
Gates and Hawking weigh in
Musk isn’t the only high-profile expert cautious about an AI-fuelled future. Stephen Hawking believes the creation of advanced AI will be either “the best or the worst thing ever to happen to humanity”.
The scientist admitted that “the potential benefits of creating intelligence are huge”, and he particularly mentioned the positive effects AI could have on disease, poverty and the environment.
“Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilisation,” Hawking said last year.
However, he also voiced his concern that we cannot predict the consequences of having “our own minds amplified by AI”.
Bill Gates argues that if robots are to take jobs away from humans, it stands to reason they should start paying their fair share in tax. Governments should be able to tax companies that make use of robots, as a way of slowing down the growth of automation and to provide funds for creating employment elsewhere, the former Microsoft boss said earlier this year.
“Certainly, there will be taxes that relate to automation,” said Gates, adding that unchecked automation may lead to massively reduced income tax revenue.
“Right now, the human worker who does, say, $50,000 worth of work in a factory, that income is taxed and you get income tax, social security tax, all those things. If a robot comes in to do the same thing, you'd think that we'd tax the robot at a similar level.”
“There are many ways to take that extra productivity and generate more taxes,” added Gates. “Some of it can come on the profits that are generated by the labour-saving efficiency there, some of it can come directly in some type of robot tax. I don't think the robot companies are going to be outraged that there might be a tax.”
Gates has warned several times about the prospect of intelligent machines, saying that some decades after they become capable of doing human jobs, they may become a threat.
Gates maintained that given many manual jobs such as warehouse work and cleaning roles will become automated over the next few decades, it is important to have policies that deal with the changing nature of the workforce.
“It is really bad if people overall have more fear about what innovation is going to do than they have enthusiasm. That means they won't shape it for the positive things it can do. Taxation is certainly a better way to handle it than just banning some elements of it, but innovation appears in many forms,” he said.
The onus is on government to figure this out, according to Gates; nations cannot rely on businesses to come up with plans for automation. With growing displacement in the workforce, excess labour could be redirected to bolster social services and education, but it requires government oversight to make that happen.
Next: the economic case for AI