Do We Need Doctors Or Algorithms?

Editor’s note: This is Part II of a guest series written by legendary Silicon Valley investor Vinod Khosla, the founder of Khosla Ventures. In Part I, he laid the groundwork by describing how artificial intelligence is a combination of human and computer capabilities. In Part III, he will talk about how technology will sweep through education.

I was asked about a year ago at a talk about energy what I was doing about the other large social problems, namely health care and education. Surprised, I flippantly responded that the best solution was to get rid of doctors and teachers and let your computers do the work, 24/7 and with consistent quality.

Later, I got to cogitating about what I had said and why, and how embarrassingly wrong that might be. But the more I think about it, the more I feel my gut reaction was probably right. The beginnings of “Doctor Algorithm,” or Dr. A for short, will most likely (and that does not mean “certainly” or “maybe”) be much criticized. We’ll see all sorts of press wisdom decrying “they don’t work” or “look at all the silly things they come up with.” But Dr. A will get better and better, going from providing “bionic assistance,” to second opinions, to assisting doctors, to providing first opinions and acting as a referral computer (with complete and accurate synopses and all possible hypotheses for the hardest cases) for the best 20% of human doctors. And who knows what will happen beyond that?

Assessing Current Healthcare

Let’s start with healthcare (or sickcare, as many knowledgeable people call it). Think about what happens when you visit a doctor. You have to physically go to the hospital or some office, where you wait (with no real predictability for how long), and then the nurse probably takes you in and checks your vitals. Only after all this does the doctor show up and, after some friendly banter, asks you to describe your own symptoms. The doctor assesses them and hunts around (probably in your throat or lungs) for clues as to their source, provides the diagnosis, writes a prescription, and sends you off.

The entire encounter should take no more than 15 minutes, and usually takes less than that. Sometimes a test or two may be ordered, if you can afford it. And, as we all know, most of the time it turns out to be some routine diagnosis with a standard treatment . . . something a computer algorithm could do if the treatment involved no harm, or at least do as well as the median doctor (I am not talking about the top 20% of doctors here—80% of doctors are below the “top 20%,” but that is hard for people to intuit!).

So what’s wrong with this situation? This is by no means an exhaustive list, but it sets up a nice springboard:

  • Physically having to go to your doctor’s office makes sense for the most part, except that a lot of the basic tests are either visual (tongue and throat check) or auditory (listening to the breath and vibrations in the abdomen). Time plus cost will often discourage people from taking that first step to visit a doctor. Most of the time a Dr. A could at least advise you when it is worth visiting based on your normal body functions, your current indications, and your locality’s current infections and other symptom trends.
  • A lot of the vitals being tested for (e.g., blood pressure, pulse) can now be routinely checked at home or even with the help of an iPhone, and an explosion of additional possibilities will emerge in the next decade.
  • You are the one telling the doctor your symptoms.
  • The doctor has to inquire (probably every time) into any possible history of each symptom, test results, and illnesses, except when he does not have time for you in that village in India.
  • The prescriptions are still done on paper, requiring you to, again, physically go to a pharmacy and pick up what you need there. So compliance is an issue.

Looking at this, I cannot help but think that this is a completely antiquated system (regardless of whether it is healthcare or not)!

Going down the list, we find a pretty negative assessment. The vital signs could all be determined with the help of mobile devices, the operation of which does not require years of training and a certification. You will be able to do this yourself—Philips is already using the iPhone camera to try to measure vital indicators, others will be even more innovative, and for an insurance company it would be cost-effective to give such devices to every insured person for free. Skin Scan is measuring your risk of skin cancer from a photograph of a skin lesion. Telemedicine is accelerating, and a Qualcomm company is measuring heart rates using an iPhone. Cell phones that display your vital signs and take ultrasound images of your heart or abdomen are in the offing, as are genetic scans of malignant cells that match your cancer to the most effective treatment. Ear infection and skin rash pictures and more will all be mobile phone based, often supplemented by the kind of (fractal) analysis that Skin Scan does, seeing more than the doctor’s naked eye usually could.

The history of symptoms, illnesses, and test results could be accessed, processed, and assessed by a computer to find any correlations or trends with the patient’s past. After all, you are the one providing the doctor with the symptoms anyway!

Any follow-up hunts for clues could again be done with mobile devices. The prescriptions—along with the medical records—could relocate to electronic and digital methods, saving paper, reducing bureaucracy, and easing the healing process. If 90% of the time the doctor knows exactly the right kind of diagnosis from these very few and superficial inputs (we haven’t even considered genetics yet!), does it really require 10+ years of intense education for every diagnostician?

The fault is not entirely with the doctors, though. Most of us don’t know what set of symptoms warrants the full-scale attention of medical personnel, so we either go all the time or we do not go at all (save for emergencies). We also cannot realistically expect any doctor (even our family doctor) to remember every single symptom and test result over the years, definitely not in a government hospital in China. Similarly, we cannot expect our doctor to remember everything from medical school twenty years ago, to memorize the whole Physicians’ Desk Reference (PDR), to know everything from the latest research, and so on and so forth. This is why, every time I visit the doctor, I like to get a second opinion. I do my Internet research and feel much better.

Identifying Emerging Trends In Healthcare

But I always wonder why I cannot input my specific test numbers and have a system offer me a “second opinion” on the diagnosis, since it has all the data that the doctor has and can use all my current and historical data effectively. In fact, it is not hard to imagine it having more data than the doctor has, since my full patient record would be at the tip of its digital brain, unlike the average doctor, who probably doesn’t remember my blood glucose levels or my ferritin from two years ago. He does not remember all the complex correlations from med school in which ferritin matters—there are three thousand or more metabolic pathways in the human body, I was once told, and they impact each other in very complex ways. These tasks are perfect for a computer to model, as “systems biology” researchers are trying to do.

Add to that my baseline numbers from when I was not sick, which most doctors don’t have, and which, if they did, 80% of physicians would be too lazy to use or would not know how to use. Applied Proteomics can extract tens of gigabytes of proteomic baseline data (what my genes are actually doing, instead of what they can do) from one drop of blood. Oh, and by the way, I have my 23andMe data to add my genetic propensities (however imprecise today, but improving rapidly with time and more data). The doctor uses a lot of imprecise judgments too, as most good doctors will readily admit. My very good doctor did not check that I have a relative genetic insensitivity to Metformin, a diabetes drug. It is easy to input the PDR (the Physicians’ Desk Reference), the massively thick, small-font book that all physicians are supposed to know backwards and forwards. They often don’t remember everything they read in med school, but it is a piece of cake for computers. The book on your typical doctor’s desk is probably not current on the leading-edge science either. Confirmed science and emerging science are different things, and each has a role. Doctors mostly use confirmed science, with the average doctor not understanding the pros and cons of each or the expected value of a treatment (benefit and harm). And our 18th-century tradition of “first do no harm” dictates that if a treatment hurts ten patients a year but saves a thousand lives, we reject it.

With enough examples, today’s techniques for language translation (or newer techniques) can translate human lingo for symptoms (“I feel itchy,” “buzzy,” “reddish bubbly rash with pimples,” “less energy in the morning,” “sort of a stretch in my tendon,” and the myriad other imprecise ways symptoms are described and results interpreted — these are highly amenable to big data analysis) into medical lingo matching the PDR. With easy input of real medical results into a computer, long-standing historical data per patient and per population (which a human cannot possibly handle), and patient and population genetics, I suspect getting a second opinion on my diagnosis from Dr. A is a reasonable expectation, and it should certainly be better than a middling physician’s (especially in less developed countries like India, where there is a dire shortage of trained physicians).
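The translation step described above can be sketched in miniature. This is purely an illustrative toy, not a medical system: the lexicon, terms, and matching rule below are all hypothetical stand-ins for what would, as the text suggests, really be large statistical models trained on millions of symptom descriptions.

```python
# Toy sketch: mapping lay symptom descriptions to clinical terms by simple
# keyword overlap. A real "Dr. A" would use statistical translation / machine
# learning over huge corpora; everything here is a hypothetical illustration.

# Hypothetical mini-lexicon: clinical term -> lay words often used for it.
LEXICON = {
    "pruritus": {"itchy", "itching", "scratchy"},
    "urticaria": {"reddish", "bubbly", "rash", "hives", "pimples"},
    "fatigue": {"tired", "exhausted", "energy"},
    "tendinopathy": {"stretch", "tendon", "pull"},
}

def normalize(text):
    """Lowercase a lay description and split it into cleaned words."""
    return {w.strip(".,!?\"'") for w in text.lower().split()}

def translate(description, lexicon=LEXICON):
    """Rank clinical terms by how many words they share with the description."""
    words = normalize(description)
    scored = [(len(words & lay), term) for term, lay in lexicon.items()]
    return [term for score, term in sorted(scored, reverse=True) if score > 0]

print(translate("I feel itchy"))                      # ['pruritus']
print(translate("reddish bubbly rash with pimples"))  # ['urticaria']
```

The point of the sketch is only that imprecise lay phrasing maps onto a controlled vocabulary; scale the lexicon to something like the PDR and the scoring to a learned model, and you have the shape of the translation problem.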

I may still need a surgeon (though robotic surgeons like those from Intuitive Surgical are on the way too) or other specialists for some tasks for a little while, and the software may move from “second opinion” (in three years? Or seven?) to “bionic software” for physicians (in five or ten years, with enough patient data?). Bionic software, again, is defined here as software that augments and amplifies human understanding.

But I doubt very much that within 10-15 years (given continued investment and innovation, and keeping the AMA from quashing such efforts politically) I won’t be able to ask Siri’s great-great-grandchild (Version 9.0?) for an opinion far more accurate than the one I get today from the average physician. Instead of asking Siri 9.0 “I feel like sushi” or “where can I dispose a body” (try it…it’s fairly accurate!), you will be asking for a diagnosis from your iPhone X or Android Y, carrying all the power of IBM’s current Watson computer in the mobile phone and backed by an even more powerful “Nvidia times 10-100” server that will cost far less than med school, with terabytes or petabytes of data on hundreds of millions (billions?) of patients, including their complete genomics and proteomics (each sample costing about the same as a typical blood test).

IBM’s Watson computer, I understand, is now being applied to medical diagnosis after handling imprecise and vague tasks like winning at Jeopardy, which experts a few years ago would have said could not be done. “Computers cannot match the judgment of humans on these kinds of tasks!” And with enough data, medical diagnosis, or 90% of it, is an easier task than Jeopardy.

Kaiser Permanente already has 10 million real-time medical records, with details of 30 million e-visits with caregivers last year and computer modeling of key diseases per individual, that data scientists would love to get their hands on. According to IDC, 14% of the US population is already using their phones for medical help, and 200 million health- and fitness-related mobile applications have been downloaded, according to Pyramid Research. Fun stuff, though early. They are probably two generations away from systems that are actually useful.

A more elaborate vision, one that is not very useful today because of the lack of enough data and enough science, is described in Experimental Man and on websites like Quantified Self. Though they feel like toys today, they are much further along than the mobile phone was pre-iPhone, in January of 2007. And data, the key ingredient for useful analysis and diagnosis, is starting to explode exponentially—be it genetic data, proteomic data, or physical data about my steps, my exercises, my stress levels, or my normal heart and respiration rates.

My UP wristband, or something like it (disclosure: I am an investor in Jawbone), will know all my sleep patterns when I am healthy and how many steps I take each day, and may have more data on my mobility, if I ever get depressed, than any psychiatrist will ever know what to do with. Within a few years, my band will know my heart rate at all times, my respiration rate, my galvanic skin resistance (one parameter among several used to measure my stress level), and my metabolic rate (which should cost about $10 to add to the band, by measuring the CO2 in my breath, and which may also detect changes in my body chemistry, such as when I get a certain type of cancer and traces of it show up in my breath).

All my “health data” as well as my “sick data” and my “activity data” will be accessible to Dr. A (and location when I was stressed or breathing hard or getting the allergic reaction and what chemicals were nearby or in the air—did toluene exposure cause me to break out in a rash from that new carpet or trigger a systemic reaction from my body?). I doubt I will be prescribed an arthritis medicine without Dr. A knowing my genetics and the genetics of my autoimmune disease. Or a cancer medicine without the genetics of my cancer when the genetic sequence (once per life) costs far less than a single dose of medicine. In fact all my infectious disease treatments may be based on analysis of my full genome and my history of exposure to viruses, bacteria and toxic chemicals.

Constant everyday health data from non-medical devices will swamp the “sickness tests” used in most medical diagnosis, supplemented by detailed genetic, proteomic, and sick data, with bionic software and machine learning systems making sense of it all. Siri might even remind me one day that my heart rate while sleeping has gone up abnormally over the last year, so I should go get some cardiograms or imaging tests done. Obviously, Siri’s children and their server friends will be able to keep up with the latest research and decide on optimal strategies based on patient preference (“I prefer to live longer even if it means all the fancy treatments” or “I want to live a normal life and die. I prefer to spend more of my time with my children than at the hospital” or “I like taking risky treatments”). They will take into account known research, early pioneering approaches, very complex interrelationships, and much more.
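The kind of reminder imagined above is, at its core, a baseline-comparison check. Here is a minimal sketch, with made-up numbers and a hypothetical two-standard-deviation threshold, of flagging an abnormal rise in resting heart rate against a personal baseline:

```python
# Illustrative sketch only: flag an abnormal rise in nightly resting heart
# rate against a personal baseline. The data and the 2-sigma threshold are
# hypothetical; a real system would use far richer models and more signals.
from statistics import mean, stdev

def rising_abnormally(baseline, recent, sigmas=2.0):
    """True if the recent average exceeds baseline mean + sigmas * stdev."""
    mu, sd = mean(baseline), stdev(baseline)
    return mean(recent) > mu + sigmas * sd

# A year of nightly resting heart rates (bpm), then two recent stretches:
baseline = [52, 54, 53, 55, 51, 53, 54, 52, 53, 55, 54, 53]
recent_ok = [54, 53, 55, 54]   # within normal variation
recent_up = [63, 65, 64, 66]   # well above the personal baseline

print(rising_abnormally(baseline, recent_ok))  # False
print(rising_abnormally(baseline, recent_up))  # True
```

The value is in the personal baseline: the same 64 bpm reading that is unremarkable for one person is a flag worth investigating for someone whose sleeping rate has hovered around 53 for a year.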

My best guess is that today a physician’s bias makes all these personal decisions for patients in a majority of cases, without the patient (or sometimes even the physician) realizing what “preferences” are being incorporated into the recommendations. The situation gets worse the less educated or economically well-off the patient is, such as in developing countries, in my estimation.

Envisioning Future Healthcare

Eventually, we won’t need the average doctor and will have much better and cheaper care for 90-99% of our medical needs. We will still need to leverage the top 10 or 20% of doctors (at least for the next two decades) to help that bionic software get better at diagnosis. So a world mostly without doctors (at least average ones) is not only reasonable, but also more likely than not. There will be exceptions, and plenty of stories around these exceptions, but what I am talking about will most likely be the rule, and doctors may be the exception rather than the other way around.

However fictionalized, we will be aiming to produce doctors like Gregory House, who solve biomedical puzzles beyond our best input ability. And India, China, and other countries may not have to worry about the investment in massive healthcare or massive inequalities in the type of physicians people have access to. And hopefully our bionic software (or independent software someday) will be free of the influence of heavily marketed but only minimally effective drugs or treatment regimes, or branding campaigns against generics and other lower-cost, equally effective drugs and treatments. Dr. A will be able to do cost optimization too, both at the patient level and at the policy level (but we may choose, at least for a decade or two, to reject its recommendations—we will still be free to be stupid or political).

What is important to realize is how medical education and the medical profession will change for the better as a result of these trends. The vision I am proposing here, though, is one in which those decades of learning and experience are used where they actually matter. We consider doctors some of the most learned people in our society. We should aim to use their time and knowledge in the most efficient manner possible. And everybody should have access to the skills of the very best ones, instead of only having access to the average doctor. And the not-so-“Dr. House” doctors will help us with better patient skills, bedside manner, empathy, advice, and caring, and they will have more time for that too. If computers can drive cars and deal with all the knowledge in Jeopardy, surely their next-to-next-to-next…generation can do diagnosis, treatment, and teaching in these far less uncertain domains, and with a lot more data. Further, the equalizing impact of both electronic doctors and teaching environments has hugely positive social implications. Besides, who wants to be treated by an “average” doctor? And who does not want to be an empowered patient?

The best way to predict this future is not to extrapolate the past and what has or has not worked, but to invent the future we want, the one we believe possible!

Image credit: Shutterstock/koya979
