Bigoted AI systems rightly deserve the universal flak they get. But there’s a problem: we might still be screwed even if AI is neutral.

Unless artificial intelligence has the “right” biases, AI and people won’t coexist very well. Unless purposely wired to be ethical and people-centric, AI can be commandeered for malicious purposes or inadvertently cause serious socio-economic problems, not least of which is the precipitous loss of jobs. With machines already beating humans at manual tasks, games, back-office work, and even lawyering, who knows whether there will be any jobs left for the vast majority of humans to build careers on.

A new book, Human + Machine: Reimagining Work in the Age of AI, explores how this future is already panning out today and how business organizations and policymakers should approach AI design to find the sweet spot where both humans and machines thrive.

Incidentally, one such spot is Accenture, where co-authors Paul Daugherty and James Wilson lead the development of AI-assisted technology solutions for both internal applications and top-notch clients around the globe. Daugherty serves as Accenture’s Chief Technology and Innovation Officer, while Jim Wilson leads the company’s Information Technology and Business Research unit as Managing Director. Both shared their timely insights with me during a freewheeling interview. We covered a universe of exciting subjects but focused on controversial AI-related issues such as workforce disruption, unconscious bias in machine learning, misuse of private data, and whether the purported promise of AI — to free us from drudgery for more creative and fulfilling work — is actually being kept.

Fortunately, there’s much room for hope on every front, if the authors’ upbeat sentiment and Accenture’s extensive AI transformation studies are any indication. To set the tenor of how our conversation went, here’s a quote from Paul Daugherty at the beginning of the interview:

“The right approach to AI will create a lot more jobs. There are millions of open jobs in the United States, with 400,000 new local jobs created this month alone. This is not unusual compared to other countries. The issue is not jobs, the issue is skills. We have to prepare people in inclusive ways so that everyone will benefit proportionally from the big shift we’re going through as we move towards an AI economy.”


Business Leaders Need To Retrain Workforces For A World With AI

Technology has always upended labor, making some jobs obsolete and introducing new ones in a cycle of disruption. Whether we like it or not, the use of automation and AI in business will intensify over time and will inevitably rock the workplace.

A 2017 study by Accenture found that AI can hike corporate profitability in 16 industries by a whopping average of 38% by 2035. In some sectors, such as education, food services, and construction, the profit boost can reach 70% to more than 80%. Clearly, the business case for AI is compelling.

But in the same report, Paul Daugherty said, “To realize this significant opportunity, it’s critical that businesses act now to develop strategies around AI that put people at the center, and commit to develop responsible AI systems that are aligned to moral and ethical values that will drive positive outcomes and empower people to do what they do best – imagine, create and innovate.”

I asked Paul to clarify what responsibilities business leaders have in cushioning their workforce from the impact of AI and whether he has seen this responsibility actually being carried out. He pointed to a worldwide survey of business executives that found 74% of them planned to use AI to automate work, but only 3% were planning to invest in training. “I think that’s a big problem that we do not have enough focus on learning platforms,” he remarked.

On a positive note, Jim Wilson described AT&T’s proactive efforts to brace their workers against the disruptive effect of AI. According to Jim, AT&T’s main challenge emerged from workforce complexity, being an organization with over 2000 different job titles, many of which were written during the landline era.

AT&T simplified their workforce structure and established an extensive skills training program that enabled landline crews to build the skills they need to transition from fixed-line communications to cellular mobile networks. The same skill-based enablement extends across the organization to include workers in sales, marketing, and customer care. And the same mindset pushes AT&T’s VP of Advanced Technologies Mazin Gilbert to train everyone in the company to be comfortable using AI, even those without a computer science degree. The result? A user-friendly platform that empowers workers to create AI apps that help them become better at their jobs.

At Accenture, the authors also described their own experiences preparing employees for the disruption of AI through company-wide skills assessment and reallocation programs. For example, the global consulting firm developed a chatbot that uses machine learning to assess employees’ resumes and job experience, then outputs a risk factor and the time frame in which their skills will become irrelevant. At the same time, the chatbot recommends which specific skills employees should begin learning today. Linked with internal and external training systems, the ML platform enables workers to future-proof their careers.

Investing in worker re-education enables companies to invest in AI yet still grow their human workforce. Paul described how Accenture deployed several thousand “nanobots” (i.e., automated programs) to automate 25,000 roles in its business process outsourcing (BPO) unit over the last two years without laying off a single person. Workers impacted by automation were instead transitioned to analysis and other less tedious roles.


But Are Workers Willing To Learn?

As the onrush of artificial intelligence reconfigures the workplace, employee attitudes toward AI also take center stage. Do workers fear or welcome AI? How willing are they to learn new skills if a particular application of AI takes over their core functions? Is the collective worker experience at AT&T replicated at other organizations in different sectors?

Paul shared a surprising picture based on prior research that grouped workers into two categories: high-skilled and low-skilled. Roughly 7 out of 10 (68%) high-skilled employees felt positive about AI, and nearly half (48%) of low-skilled workers professed the same optimism, belying the dystopian narrative that AI is out to make everyone jobless.

In the same research, millennials — the generation that already comprises the largest workplace demographic — felt more positive about AI than baby boomers. Meanwhile, around two-thirds of enterprise executives are mulling over how to redesign job descriptions amid the AI revolution, with about a third of them already actively going through the redesign process.

Paul and Jim recommend the easier route of mapping redesigned roles to six key functional points that emerge from the paradigm shift, where humans and machines closely interface:

  1. Trainers: workers who teach/train AI systems how to perform
  2. Explainers: workers who bridge the gap between AI technologies and business leaders
  3. Sustainers: workers tasked to ensure AI systems behave as intended and react accordingly and promptly when these systems go beyond acceptable behavioral parameters
  4. Amplifiers: AI systems that amplify human insight
  5. Interacters: AI systems that orchestrate different experiences and interactions with humans
  6. Embodiers: AI systems that enhance the physical abilities of humans


Has AI Really Made Work Less Boring?

Freeing workers from abject boredom and drudgery is a pitch that tech evangelists have dangled for years. But have workers actually assumed functions that are more creative, fulfilling, analytical, or strategic?

Paul cited the case of anti-money laundering, where banks employ thousands of people in fraud prevention roles. The field has a notoriously high false positive rate (more than 80%), with workers repeatedly looking into situations that ultimately don’t pan out. These situations neither require real investigative skills nor challenge human investigators.

With AI, false positives get reduced by 30%, freeing up human investigators to practice their core skills and dig deeper into real fraud cases. The verdict: being able to practice your craft without slogging through tedious tasks can be very satisfying.

Walmart is achieving similar milestones. According to Jim, sales associates at the giant brick-and-mortar retailer increasingly work alongside shelf-scanning robots that took over the much-hated task of going up and down the aisles looking for low-stock, misplaced, or out-of-stock items. Retrained employees reported they were happy not to be doing this work anymore. Jim added that Walmart is expanding its training academies to equip employees with newer and more sophisticated skills such as customer engagement and stock management.

Meanwhile, iconic German carmaker BMW flexes its manufacturing muscle at its South Carolina plant by buying thousands of collaborative robots while aggressively re-skilling its people on the floor. Called the BMW Scholars Program, the work-study initiative enables the company’s blue-collar workers on this side of the Atlantic to work half the time and study the rest.

Coursework includes digital manufacturing and robotics at community colleges across the state. Workers learn not only how to control and manage robots but also how to program and reprogram them on the back end. Participants earn academic credit but must maintain a minimum GPA. Upon completion, their career track becomes a roadmap toward a more strategic or management role.


What About Fairness In AI And Customer Data Protection?

Business leaders also have the responsibility to integrate fairness and eliminate negative bias in AI. As explained by Paul, two of the key human roles emanating from the radical shift to an AI economy are Explainers and Sustainers. Explainers not only articulate the different aspects of AI technologies but also help ensure that these technologies are behaving properly. Meanwhile, Sustainers are tasked to directly monitor and manage AI.  

Explainers and Sustainers are not the only roles that need to master ethics, however. Enterprises need to educate all levels of their workforce on fairness, safety, and defense against malicious uses of technology. AI can even be used to advance AI ethics education. At Accenture, one of the first chatbots developed was designed to answer employees’ questions about business ethics. The chatbot has an apt name: COBE (which stands for Code Of Business Ethics).

When it comes to data protection, Paul cited cybersecurity — the imperative to keep customer data safe — as the foundational component. Business leaders should set all the necessary guardrails to prevent data breaches or misuse. Transparency and trust come next, and achieving them results in positive differentiation for your brand. For companies in the AI sector, those that gain more trust are the ones that will be given stewardship of better and more data. Daugherty sees embracing blockchain technology and fully committing to GDPR (General Data Protection Regulation) as key steps in the right direction.


With Great Power Comes Great Responsibility

Many of the world’s leading scientists and tech pioneers believe that no other technology poses as large an existential threat to the human species as artificial intelligence. Whether you agree with them or not, one thing is clear: as business leaders, we can and should mitigate the negative impact of AI as new technologies disrupt how we do business, perform our work, and live our lives.

Jim closes with a fitting conclusion: “We need a Hippocratic Oath for AI. First, do no harm. This is a core responsibility of business executives. When re-designing and re-imagining roles in the workforce, the process must be guided by ethical, human-centered, and responsible design principles. You don’t just simply start with the opportunity to automate a process but you start with how to do it responsibly.”