4 things to remember when adapting AI/ML learning models during a pandemic

The COVID-19 crisis brings a unique opportunity for updates and innovation

The machine learning and AI-powered tools being deployed in response to COVID-19 arguably improve certain human activities and provide essential insights for personal and professional decisions; however, they also highlight a few pervasive challenges faced by both machines and the humans who create them.

Nevertheless, the progress seen in AI/machine learning leading up to and during the COVID-19 pandemic cannot be ignored. This global economic and public health crisis brings with it a unique opportunity for updates and innovation in modeling, so long as certain underlying principles are followed.

Here are four industry truths (not an exhaustive list) that my colleagues and I have found matter in any design climate, but especially during a global pandemic.

Some success can be attributed to chance, rather than reasoning

When a large group of people works on a problem collectively, success becomes more likely. Consider a historical example: several analysts were credited with predicting the 2008 Global Financial Crisis. This may seem miraculous until you consider that more than 200,000 people were working on Wall Street, each making their own predictions. It then becomes less a miracle and more a statistically probable outcome: with that many individuals simultaneously building models and making predictions, it was highly likely someone would get it right by chance.
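A back-of-the-envelope calculation shows why. The sketch below uses a made-up per-analyst success probability purely for illustration; the point is only how quickly "someone got it right" becomes near-certain as the number of forecasters grows.

```python
def prob_at_least_one_correct(p_individual: float, n_analysts: int) -> float:
    """Chance that at least one of n independent analysts is right by luck:
    P(at least one success) = 1 - P(everyone fails)."""
    return 1 - (1 - p_individual) ** n_analysts

# Even if each analyst has only a 1-in-10,000 chance of calling the crisis
# correctly by pure luck, among 200,000 independent analysts a "successful
# prediction" is all but guaranteed.
print(prob_at_least_one_correct(1e-4, 200_000))  # > 0.999
```

The same logic underlies the multiple-comparisons problem in statistics: the more predictions made in parallel, the more "hits" appear by chance alone.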

Similarly, with COVID-19 there are a lot of people involved, from statistical modelers and data scientists to vaccine specialists, and there is an overwhelming eagerness to find solutions and concrete, data-based answers. Appropriate statistical rigor, coupled with machine learning and AI, can improve these models and reduce the chance of false predictions that arise simply because so many predictions are being made.

Automation can help in maintaining productivity if used wisely

During a crisis, time management is essential. Automation technology can be used not only as part of the crisis solution, but also as a tool for monitoring the productivity and contributions of team members working on that solution. For modeling, automation can also greatly improve the speed of results: every second that software spends automating part of a model's workflow frees a data scientist (or even a medical scientist) to focus on more important tasks. User-friendly platforms now on the market give more people, such as business analysts, access to predictions from custom machine learning models.

Platforms that can reduce the time, cost and any friction occurring on a project can be enticing not only for the IT teams, but also for business leaders and investors looking for a clear return on their investment. When searching for potential automation solutions, decision-makers should consider how well the product integrates into a team’s workflow and how it may help launch or monitor a project.

Continuous human involvement is key on any AI project

Regardless of circumstance, teams using AI solutions need to understand that the solution remains a work in progress long after the design and early deployment stages. Machine learning deployments, while programmed to respond to change, perform poorly when the incoming data differs too much from their training data. Volatility and novel or extraordinary circumstances can also throw off AI solutions, at least initially. According to Gary Marcus, cognitive scientist and NYU professor, “Top algorithms are left flat-footed when data they’ve trained on no longer represents the world we live in.”

By continually adjusting training data and algorithms to account for these unexpected changes as they occur, we can begin to see improvement in the accuracy and overall performance of AI solutions. We are still in the thick of the COVID-19 pandemic, with more uncertainties and unpredictable circumstances likely to unfold. Making note of these changes now and incorporating them into AI and machine learning platforms early could bring successful returns on a global scale later on. We could build better, more effective machine learning models to handle future COVID-19 outbreaks, as well as other public health and economic crises.
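One common way to flag the kind of distribution shift described above is the population stability index (PSI), which compares the data a model was trained on against the data it now receives. Below is a minimal sketch in plain Python; the bin count, thresholds and synthetic data are illustrative choices, not any particular platform's method.

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (training) sample and a
    new (production) sample of one numeric feature.
    Common rule of thumb: < 0.1 stable, 0.1-0.25 moderate shift, > 0.25 major shift."""
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            i = int((x - lo) / width)
            i = max(0, min(i, bins - 1))  # clamp values outside the baseline range
            counts[i] += 1
        # small floor keeps empty bins from producing log(0)
        return [max(c / len(sample), 1e-6) for c in counts]

    e = bin_fractions(expected)
    a = bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Usage: data drawn from the training distribution scores low; shifted data
# scores high, signalling that retraining may be needed.
random.seed(0)
baseline = [random.gauss(0, 1) for _ in range(5000)]
similar = [random.gauss(0, 1) for _ in range(5000)]
shifted = [random.gauss(2, 1) for _ in range(5000)]
print(psi(baseline, similar))  # low: distribution unchanged
print(psi(baseline, shifted))  # high: major shift detected
```

Monitoring a score like this per feature is one simple, automatable trigger for the "continual adjustment" described above: when drift crosses a threshold, the model is flagged for retraining on fresher data.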

There is a need for competence in novel (“fluid”) reasoning in both humans and machines

When discussing human intelligence and its progression during a person’s lifetime, experts may cite a theory introduced by psychologist Raymond Cattell in the 1960s, of two major subtypes — fluid intelligence and crystallized intelligence. Fluid intelligence represents our ability to solve novel problems, while crystallized intelligence represents our ability to apply existing knowledge and skills in problem-solving.

Both forms of intelligence develop beginning in childhood, but with age, fluid intelligence tends to weaken, while crystallized intelligence tends to stay preserved (“crystallized”) and even mature further. Differences also exist based on personal predisposition: some people naturally gravitate toward novelty and change, while others avoid it and favor routine. Some, despite considerable effort, will fall short of their peers in devising novel solutions to problems.

Comparing this phenomenon of fluid human intelligence with fluid “artificial” intelligence, we see a shared challenge: achieving reliable competence in both machines and the humans who create them. Today’s machines may outperform humans at routine, repetitive computational tasks, but they still struggle to reason under novel, unprecedented or volatile circumstances, giving humans the intellectual edge in this area.

As humans, some of us may enjoy greater observable success under these circumstances, but individual differences and situational factors may threaten success. The prospect of future change is always looming, and with it the potential to render our existing machines and findings invalid.