The COVID-19 pandemic is causing distress in all aspects of our lives and changing people’s behavioural patterns. In an unexpected side effect, artificial intelligence (AI) systems are affected too – a reminder that human intervention is still key.
The changes in people’s behaviour have affected the AI algorithms involved in inventory management, marketing, investment advice, and many other areas. These algorithms are trained on normal human behaviour and can adapt to changes by design, but the pandemic-related behavioural changes are too significant. Sometimes machine-learning (ML) models cannot perform as they should, so manual correction is needed.
One of the most prominent examples comes from the world’s biggest online retailer, Amazon, which has seen its usual top sellers (eg phone cases and chargers) give way to newcomers such as face masks and sanitisers, often bought in bulk, while the burden on its infrastructure has grown. The company had to modify its logistics algorithms to ease demand on its own warehouses, which contradicts its usual practice. This, in turn, affected the sensitive algorithms used for online advertising on the platform. And because the situation has been so unstable, such adjustments would be difficult without manual intervention.
The retail industry is not the only one where such changes have been observed.
A firm that provides daily investment advice based on news articles found that current sentiment, being generally more negative than usual, was skewing its recommendations. Another company uses AI to generate appropriately toned marketing copy on behalf of its clients. With the pandemic in the background, it had to increase manual filtering, eg banning specific phrases such as “going viral” and eliminating emojis and terms that provoke anxiety or fear, such as “OMG” or “stock up.”
ML models’ performance weakens when input data differs significantly from the data they were trained on, and this is when humans must step in. Many businesses, however, buy AI-driven systems but do not provide the necessary in-house support to maintain and retrain them when needed, assuming the systems can run independently.
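As a minimal illustration of how such a mismatch between live and training data might be caught, the sketch below compares a live feature distribution against a training baseline using a two-sample Kolmogorov-Smirnov statistic. All names, data, and the alert threshold here are hypothetical assumptions for illustration, not a description of any system mentioned in the article.

```python
import random

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two samples (0 = identical
    distributions, 1 = completely disjoint)."""
    a, b = sorted(sample_a), sorted(sample_b)
    i = j = 0
    d = 0.0
    while i < len(a) and j < len(b):
        x = min(a[i], b[j])
        # Advance past all values equal to x in both samples
        # so ties are handled symmetrically.
        while i < len(a) and a[i] == x:
            i += 1
        while j < len(b) and b[j] == x:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

# Hypothetical feature: order sizes. The baseline mimics normal
# shopping behaviour; the live sample mimics pandemic-era bulk buying.
random.seed(0)
baseline = [random.gauss(10, 2) for _ in range(1000)]
live = [random.gauss(25, 8) for _ in range(1000)]

drift = ks_statistic(baseline, live)
if drift > 0.1:  # illustrative threshold, not a standard value
    print("Distribution drift detected: flag the model for human review")
```

In practice a monitoring job would run a check like this on each incoming batch and route alerts to the humans who maintain and retrain the model.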
At the same time, it may be useful to train machine-learning systems not only on the deviations of recent years but also on ‘black swan’ events of the past, such as the Great Depression of the 1930s or the 2007-2008 financial crisis. The COVID-19 crisis may be a perfect opportunity to introduce this approach to AI development.
“This crisis has raised a lot of questions about algorithms, models, AI. So far, prediction remains a difficult art (Holmdahl et al. 2020). The crisis has changed how people communicate on social networks, thus perturbing ML-based approaches. Most important advances – in clinical symptoms, such as loss of smell; in risk factors, such as age; in cause of complications, such as pulmonary embolism or Kawasaki-like disease in children – have been spotted by really good-working human brains and careful curiosity, with no more than incomplete and wrong data and robust science available to them,” says HealthManagement.org’s IT Editor-in-Chief Prof. Christian Lovis, Professor and Chairman of the Division of Medical Information Sciences at University Hospitals of Geneva and Director of the Academic Department of Radiology and Medical Informatics at the University of Geneva.
The pandemic has therefore exposed numerous weak spots and inefficiencies in AI and ML systems. If these lessons are taken on board, they may lead to the development of more resilient and better-designed systems in the future. But until then, AI needs to be supervised.
Source: MIT Technology Review
Image credit: metamorworks via iStock