(In Italian on LinkedIn)
I recently read a post by Bill Gates on the advent of artificial intelligence. Although I am normally not inclined to give much weight to the opinions of someone over sixty on the subject of technological innovation, I think we can make an exception for a person who has, in the past, correctly identified epochal shifts such as the graphical interface of the PC, the Internet, and the advent of the cloud. I recommend it to everyone.
I was
particularly struck by the part about the advent of AI within companies. Before commenting on that point specifically, a few notes on Bill's position on AI in general.
First, take note of these prophetic words:
"The
development of AI is as fundamental as the creation of the
microprocessor, the personal computer, the Internet, and the mobile phone.
It will change the way people work, learn, travel, get health care, and
communicate with each other."
Second, I find it hard to share Bill's optimism about things like "how AI can reduce some of the world's worst inequities": for instance, "Climate change is another issue where I'm convinced AI can make the world more equitable" or "The world needs to make sure that everyone—and not just people who are well-off—benefits from artificial intelligence." It also strikes me that Bill uses the auxiliary will rather than can in sentences like this: "the ways in which it [AI] will help empower people at work, save lives, and improve education." Hence my position of moderate skepticism, which I chose to express with the words of Jeff Lebowski.
That said, and having understood the difference between AI and AGI described in the post, Gates' predictions about the future of the workplace are particularly important for all of us engaged in office work. "For example, many of the tasks done
by a person in sales, service, or document handling require decision-making but
not the ability to learn continuously. Corporations have training programs for
these activities and in most cases, they have a lot of examples of good and bad
work. Humans are trained using these data sets, and soon these data
sets will also be used to train the AIs that will empower people to do
this work more efficiently." Now I think it is the right time to ask ourselves whether – given that AI will learn to manage these tasks – the effect of its advent will be to make humans more efficient, to replace them, or something else entirely: for example, the robotization of the human being himself, turned into someone able to churn out, very efficiently and very reassuringly, a stream of thoroughly mediocre results. "As computing power gets cheaper, GPT's
ability to express ideas will increasingly be like having a white-collar worker
available to help you with various tasks. Microsoft describes this as having a
co-pilot. Fully incorporated into products like Office, AI will
enhance your work—for example by helping with writing emails and
managing your inbox. ... In addition, advances in AI will enable the
creation of a personal agent. Think of it as a digital personal assistant: It
will see your latest emails, know about the meetings you attend, read what you
read, and read the things you don't want to bother with. This will both improve your work on the tasks
you want to do and free you from the ones you don't want to do." Here too the choice of verbs is revealing: the AI will read the things that you do not want to read, not those that you do not need to read or do not have time to read. Put this way, AI is at the service of our laziness, not our intelligence. That this will improve the work we want to do and free us from what we don't want to do, well… you know… that's just like your opinion, man. I recently read about how the economists of the early
twentieth century, faced with the industrialization and nascent automation of
the world of work, imagined a world that in a few decades would free up
enormous amounts of time previously engaged in work, to the point where man
would be faced with the problem of what to do with it. As we can see, this has not been the case, and I remain extremely skeptical when I hear such predictions a hundred years later.
But here we come to the point where the post raises disturbing questions about the nature of our office work. I can only agree with Gates that "Because of
the cost of training the models and running the computations, creating a
personal agent is not feasible yet, but thanks to the recent advances in AI, it
is now a realistic goal" and that "An
agent that understands a particular company will be available for its employees
to consult directly and should be part of every meeting so it
can answer questions. It can be told to be passive or encouraged to speak up if
it has some insight. It will need access to the sales, support, finance,
product schedules, and text related to the company. It should read news related
to the industry the company is in". Now my concern is that this kind of "agent" (which Gates presents as a friendly advisor that will support us in difficult decisions – but, again, opinions...) will expose the very little (or non-existent) added value provided by humans in many roles that are handsomely paid today.
What will happen, then, when companies realize that they are paying salaries for roles in which the human factor is irrelevant, or nearly so?
Let's look in
the mirror – or, more appropriately, in the webcam of a Zoom call – and ask
ourselves how much of our work today could be done by a sufficiently powerful artificial intelligence. An AI that knows the company's resources, technology, and processes as well as we do, or better. In a hypothetical typical working day, what is the unique and irreplaceable value provided by our being
human? 90%? 50%? 10%? Whatever percentage you have in mind, consider
the Dunning-Kruger effect and reduce it further, say by a third. Now imagine
that the result becomes, tomorrow, your new salary, or the probability of not
losing your job.
Do we have a
way to increase the value of the human factor in our work?
Will our
company exploit the potential of AI or not?
If it does,
what will it decide to do with us?