When Algorithms Make Managers Worse – Harvard Business Review

One of the newest dilemmas for leaders in an age of AI is when and how to use algorithms to manage people and teams. AI, algorithms, and automation might allow you to manage more people at scale, but that doesn’t mean they will make you a better leader. In fact, quite the opposite may be true: technology has the potential to bring out the very worst in us.

Without careful consideration, the algorithmic workplace of the future may end up as a data-driven dystopia. There are a million ways that algorithms in the hands of bad managers could do more harm than good: How about using an algorithm to set your work rosters so that the number of hours is just below the legal threshold for full-time employment? Or automatically sending emails to people when they are more than five minutes late to work? Or nudging people to work during the time they normally spend with their families by offering incentives? Or using sensors to monitor warehouse workers and then warning them when they take too long to stack a shelf? Or constantly adjusting the color temperature of your office lighting so that your employees' circadian systems think that late afternoon is still morning?

Don’t think these kinds of things would happen? They already are. Amazon, for example, has received two patents for a wristband designed to guide warehouse workers’ movements with vibrations that nudge them into being more efficient. IBM has applied for a patent on a system that monitors its workforce with sensors that can track pupil dilation and facial expressions, and then uses data on an employee’s sleep quality and meeting schedule to deploy drones that deliver a jolt of caffeinated liquid, so that employees’ workdays are undisturbed by a coffee break.

We have been here before. About a hundred years ago, the world experienced the Scientific Management revolution, or more popularly, Taylorism. U.S. industrial engineer Frederick Winslow Taylor had a lot of ideas about how companies might integrate machines and workers for maximum efficiency, and he wrote them all down in his 1911 book, The Principles of Scientific Management.

Many principles of Taylorism are being revived today with a digital or AI-based twist. Consider this list of ideas taken straight from Taylorism: empirical data collection; process analysis; efficiency; elimination of waste; standardization of best practices; disdain for tradition; mass production and scale; and knowledge transfer between workers and from workers into tools, processes, and documentation. That might sound like a twenty-first-century digital transformation plan, but these are all ideas Taylor set out more than a century ago. When taken to extremes, or put into practice with little regard for the humans carrying them out, the result is alienation and disengagement.

Just as with Taylorism, reliance on algorithmic management may end up creating unease in the workplace and broader social unrest. Industrial action may grow, in which case regulators will have to consider intervention. Automation itself will present serious challenges to the nature of work, our identity, and how people consider their purpose. Your organization will need to confront a challenging tradeoff: Is it better to reduce the agency of human beings by directing their actions entirely by AI, or is it better to use AI to coordinate distributed, autonomous teams?

The answer to that may depend on the nature of the problems that your company is trying to solve. Algorithms are not inherently bad. Automating transactional and repetitive tasks should free people to do more interesting and meaningful work. And for subtle and complex decisions that require human context and delicacy, there is the opportunity to harness algorithms to increase the effectiveness of workers by optimizing their combined talents, rather than maximizing their individual contributions. In other words, rather than using algorithms as a weapon of workplace surveillance, you can instead use them as a catalyst to hack your work culture and organizational structure.

A great example of this idea in practice is the Dutch bank ING, which took inspiration from companies like Google, Netflix, and Spotify, and reorganized its traditional departments, such as marketing, product management, channel management, and IT development, into agile teams and squads united by a common purpose. When I interviewed Peter Jacobs, CIO of ING Bank and one of the original architects of its transformation program, he explained that people in large companies can lose their sense of purpose if complex projects are broken down into smaller components and the process is essentially turned into a virtual assembly line. That prevents employees from gaining a sense of accountability or ownership over the ultimate objective.

We will see more algorithmic matching of talent, not just for Uber drivers and delivery people, but for professionals and experts as well. Publicis, a multinational marketing company, has already started using algorithms to organize and assign its 80,000 employees, including account managers, coders, graphic designers, and copywriters. Whenever there is a new project or client pitch, the algorithm recommends the right combination of talent for the best possible result. Even beyond marketing, technologies are emerging that assist with the automated matching of skills and projects.
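The kind of matching described above can be sketched simply. The following is a hypothetical illustration, not Publicis's actual system: a greedy routine that assembles a team by repeatedly picking whichever candidate covers the most project skills the team still lacks. All names and data are invented.

```python
# Hypothetical sketch of algorithmic talent matching: given candidates and
# the skills a new project needs, greedily build a team that covers as many
# required skills as possible. Invented data, for illustration only.

def recommend_team(candidates, required_skills, team_size):
    """Pick up to team_size people, each chosen for covering the most
    required skills the team does not yet have."""
    team, covered = [], set()
    pool = dict(candidates)  # name -> set of skills
    for _ in range(team_size):
        if not pool:
            break
        # Score each remaining candidate by newly covered required skills.
        best = max(pool, key=lambda name: len((pool[name] & required_skills) - covered))
        team.append(best)
        covered |= pool.pop(best) & required_skills
    return team, covered

candidates = {
    "amira": {"copywriting", "seo"},
    "ben": {"graphic design", "video"},
    "chen": {"seo", "analytics"},
}
team, covered = recommend_team(candidates, {"copywriting", "seo", "video"}, 2)
# team covers copywriting, seo, and video with two people
```

A real system would weight availability, past performance, and client fit rather than raw skill overlap, but the core mechanic, ranking combinations of people against a project's requirements, is the same.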

Talent platforms, combined with automation, are increasingly seen as a strategy for companies to gain scale without losing agility. Walmart, in an effort to compete more effectively with Amazon, is examining ways to expand its use of gig economy workers. IKEA, in a move that could hopefully spell the end of crooked home-assembled bookcases, bought TaskRabbit, a freelancer talent platform. In fact, many of the biggest employers in the world today don’t even sell things at all; they rent workers.

Of course, the design of talent platforms is also open to manipulation and abuse. Some retailers have attracted criticism for erratic and unfair work schedules created by automated software systems. Automated work scheduling can be a powerful tool to help companies manage their costs, whether that means sending workers home when sales are slowing, or rapidly staffing up when the weather changes or when there is a seasonal promotion. But it can also be structured to help an organization avoid certain obligations. In August 2013, for example, less than two weeks after the teen-fashion chain Forever 21 began using Kronos, a workforce optimization platform, hundreds of full-time workers were notified that they’d be switched to part-time and that their health benefits would be terminated as part of a move to cut costs and reduce liabilities. Not surprisingly, lawsuits ensued.

The fairest way to design a talent platform that encompasses your company’s entire hierarchy, from junior positions all the way up to your top leaders, is to imagine that everyone, top to bottom, will be governed by the same principles. This is a version of the Veil of Ignorance, a thought experiment proposed in 1971 by U.S. philosopher John Rawls. Rawls argued that the best way to make political or social decisions with far-reaching impact is to imagine how you would feel about them if you woke up the next morning and found you were one of the people directly affected, with no input into the decision. Algorithmic leaders should take the same approach when building systems that manage their own teams and employees.

AI and algorithms offer a wealth of opportunities to design more flexible, fulfilling ways to work. But before deploying them, first be sure that you would be willing to be managed and matched by the same algorithms you expect other people to follow.
