By Evan Schuman
When senior executives in two of the world's most highly regulated verticals, healthcare and finance, explore ways to improve operations, boost margins and deliver a strong ROI, their go-to plan is to push technology.
But mountains of global compliance requirements prove daunting, especially when rules from various regulators conflict. Even so, artificial intelligence (AI) and its offshoots, such as machine learning (ML), are among the best means of improving operations, especially when humans are strategically kept in the loop.
Regulators' concerns around AI often stem from misunderstandings. Consider the European Union's GDPR requirement that "the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her." This applies only to AI systems with no human involvement or oversight; when AI serves as a tool that helps humans make better decisions, the EU's concern becomes moot.
In U.S. healthcare, HIPAA (the Health Insurance Portability and Accountability Act of 1996) is mostly concerned with protecting patient privacy and securing personally identifiable information (PII). Not only does AI not undermine that goal, it makes compliance easier by automating searches for such data in places humans might not think to look.
This means that in healthcare, AI can detect anomalies or patterns in test results by drawing on vast data warehouses of results from countless other healthcare institutions. The process would never override a physician's judgment; instead, it would flag diagnostic possibilities the doctor may not initially have had reason to suspect. The COVID-19 pandemic has, sadly, given healthcare facilities across the planet a sense of how rapidly diseases can spread and mutate. AI is far more likely to detect such an aberration than even the most experienced doctor looking only at patients in Chicago, Tokyo, Mumbai or elsewhere.
In the financial world, decisions range from analyses of known elements that could be crunched in a spreadsheet to projections built on multiple layers of unknown variables, such as guessing how an investment or stock will perform in six months. Just as medical decisions must be left in the hands of doctors, financial decisions must be left to traders and other financial specialists. That said, the number of factors to be considered is massive, presenting the perfect opportunity to let artificial intelligence shine, whether by automating data aggregation or making sense of sentiment analysis.
Healthcare and financial executives and specialists share one critical need: human decision-makers must understand, quickly and efficiently, how an AI algorithm arrived at its recommendation without wading through 90 pages of explanation. An explainability feature that rapidly surfaces the most salient factors lets decision-makers act more confidently and at a faster pace. AI that operates in a black box, making recommendations without context, is hardly helpful.
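To make the idea of surfacing "the most salient factors" concrete, here is a minimal, hypothetical sketch: for a simple linear scoring model, rank each input's contribution to the score and report only the top few. The feature names and weights are illustrative assumptions, not any vendor's actual model.

```python
# Hypothetical sketch: rank the factors behind a model's recommendation
# so a decision-maker sees the top drivers, not a 90-page report.
# Feature names and weights below are illustrative assumptions.

def top_factors(weights, features, k=3):
    """Return the k features contributing most to a linear model's score."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    # Sort by absolute contribution, largest first.
    ranked = sorted(contributions.items(),
                    key=lambda item: abs(item[1]), reverse=True)
    return ranked[:k]

weights = {"age": 0.8, "resting_heart_rate": 1.5, "cholesterol": 0.4}
patient = {"age": 0.2, "resting_heart_rate": 0.9, "cholesterol": 0.1}

for name, contribution in top_factors(weights, patient):
    print(f"{name}: {contribution:+.2f}")
```

Real explainability systems use far richer attribution methods, but the principle is the same: a short, ranked list of drivers that a specialist can scan in seconds.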
Much of this explainability comes down to what many call a no-code platform. As a practical matter, no-code does not mean the absence of code so much as the invisibility of code; in fact, hidden-code might be the more accurate term. Either way, it means plain-English explanations that make as much intuitive sense to programmers and subject-matter specialists as they do to CFOs and COOs. Some programming skill may be necessary for training AI, but it absolutely shouldn't be required for understanding the AI's recommendations and the reasoning behind them.
Even better, at Beyond Limits, the detailed nature of those explanations is fully customizable, meaning the system can speak at a different technical level to various decision-makers based on the settings each decision-maker chooses. For example, a cardiologist's readout would look very different from what a hospital lawyer or finance professional reviews.
Another means of examining patients, one that expanded rapidly across the healthcare sector once COVID-19 materialized, is the telehealth session. The initial idea was to find COVID-safe ways for doctors, physician assistants or nurse practitioners to interact with patients, but the technology's limitations made it problematic. A major issue in the early days, and one that persists today, was the confidentiality of the communications. In other words, how secure were the makeshift channels many physicians used? Was ultra-sensitive HIPAA-protected data at risk of being intercepted by identity thieves?
As for medical diagnostics, most patients lacked the tools needed to give physicians basic information. The routine ways doctors obtain physical data, via stethoscope, blood pressure cuff, scale or otherwise, became far more difficult. That meant medical specialists had to gather more information through questioning, which is challenging as such interactions keep getting shorter while authorities push doctors to see as many patients as possible each day. AI can help by analyzing telehealth video transmissions and delivering more information to healthcare professionals than the naked eye could capture.
Equipment is now emerging that allows systems in a patient's home to monitor everything from oxygen saturation and blood pressure to heart rate, heart patterns and even brain wave patterns. With AI, such systems can examine this data 24×7, with the software quickly detecting and reporting any irregularities, whether defined by physician-dictated criteria ("Message me if the blood pressure hits XXX") or flagged as machine learning-detected anomalies.
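The two alert paths described above can be sketched in a few lines: a physician-dictated threshold rule, plus a simple statistical check against the patient's own baseline. This is a minimal illustration, not a medical device; the readings, thresholds and z-score cutoff are illustrative assumptions.

```python
# Hypothetical sketch of two alert paths for home vitals monitoring:
# (1) a physician-dictated threshold rule, and
# (2) a simple statistical anomaly check against the patient's baseline.
# All numbers here are illustrative assumptions.
from statistics import mean, stdev

def check_threshold(reading, limit):
    """Physician-dictated rule, e.g. 'message me if systolic BP hits limit'."""
    return reading >= limit

def check_anomaly(history, reading, z_cutoff=3.0):
    """Flag a reading that deviates sharply from this patient's own baseline."""
    if len(history) < 2:
        return False
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return reading != mu
    return abs(reading - mu) / sigma > z_cutoff

systolic_history = [118, 121, 119, 122, 120, 117, 121]
new_reading = 165  # illustrative high reading

alerts = []
if check_threshold(new_reading, limit=160):  # illustrative threshold
    alerts.append("threshold")
if check_anomaly(systolic_history, new_reading):
    alerts.append("anomaly")
print(alerts)
```

A production system would, of course, use clinically validated models and far richer patient baselines; the point is that the two kinds of rules, human-dictated and machine-learned, can run side by side around the clock.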
Medical tests have historically been interpreted in different ways, sometimes depending on the varying experience levels of the specialists reading the results. From a malpractice perspective, those differences can later prove nightmarish in court. AI delivers a consistency upon which specialists, and their attorneys, can rely.
There are also global capabilities that particularly lend themselves to AI, such as matching communities in need of a particular medication with areas holding an oversupply of that resource. This is something Beyond Limits achieved with its COVID-19 model research, working closely with one of the top medical facilities in the U.S.
The 24×7 capability is also vital for the financial sector. Consider the complexities of stock selection. It's not a new idea that Wall Street analysts must track developments around the clock, but as a practical matter, they rely on systems to get them up to speed when they start their day. An AI system can not only review those feeds and announcements overnight, it can complete the analysis and recommend actions, which can be shared with overnight crews empowered to act within very specific analyst-set criteria.
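The notion of "acting within very specific analyst-set criteria" can be sketched as a simple gate: the overnight crew may act on an AI recommendation only if it satisfies limits the analyst set before leaving. The field names, limits and confidence values below are illustrative assumptions, not a real trading API.

```python
# Hypothetical sketch: gate an overnight AI recommendation against
# analyst-set criteria. Names and limits are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Recommendation:
    ticker: str
    action: str        # "buy" or "sell"
    confidence: float  # model's confidence, 0..1
    notional: float    # dollar size of the proposed trade

# Criteria the analyst set before leaving for the day (illustrative).
CRITERIA = {
    "min_confidence": 0.9,
    "max_notional": 250_000,
    "allowed_actions": {"sell"},
}

def actionable_overnight(rec, criteria):
    """True only if the night crew may act without waking the analyst."""
    return (rec.confidence >= criteria["min_confidence"]
            and rec.notional <= criteria["max_notional"]
            and rec.action in criteria["allowed_actions"])

rec = Recommendation("XYZ", "sell", confidence=0.94, notional=180_000)
print(actionable_overnight(rec, CRITERIA))
```

Anything that fails the gate simply waits in the analyst's morning queue, which is exactly the division of labor the article describes: AI does the overnight analysis, humans keep control of the decision.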
For both of these critical verticals, more sophisticated and frequent use of AI can make a massive difference in meeting the intense challenges they face today. Getting recommendations and answers quickly, in ways that are instantly understandable and therefore actionable, matters most in environments where answers are rarely easy to come by. Whether that's a doctor who can't use her stethoscope during a telehealth session or financial analysts who need to crunch 19,000 data sources before they get to work, AI can help carry the load.