Futurology is a pseudoscience in which today's cutting-edge technology is extrapolated into the future. Some of us are afraid of the future and cling desperately to the present, not realizing that we have already become the past! Healthcare in India is growing exponentially, with radical transformations. As one who started training in the late sixties, I often feel that I am in Jurassic Park. This will be nothing compared to how the millennials of the last decade will feel in 2030. The use of AI, robots, AR (augmented reality), AVR (augmented virtual reality), tele-mentored surgery, and augmentative surgery is only the tip of the iceberg. Healthcare and decision making in the next decade could be totally different. Let us spend a moment gazing into the crystal ball to see what healthcare will look like in the coming decade.
The purchase officer in a healthcare institution (not a hospital!) will have to be extremely tech savvy, perhaps dealing with only one or two super specialties. Most brick-and-mortar centers will be standalone specialty centers concentrating only on individualized procedures; ICUs may form as much as 40 percent of the 100 specialized beds, and day care procedures will be done in remote satellite units. Admissions will be restricted to clinical situations that cannot be managed at home or in the office. Courts have already pointed out that a mandatory 24-hour stay in a hospital to obtain insurance coverage is an anachronism. Miniaturization and POCD (point-of-care diagnostics) will enable most blood biochemistry to be done in apartment complexes, which may also have miniature ultrasound machines. The regulations may change by then! Laboratories with huge automated analyzers will eventually become history! X-ray machines, as we know them today, could gradually dwindle in number as indigenous mobile and portable CT scanners become increasingly available, and stethoscopes will be replaced by pocket ECHO machines, giving far more information than interpreting heart and lung sounds ever could. Automated breath analyzers will measure surrogate biomarkers in exhaled breath, pointing to a possible diagnosis, as will the analysis of speech! Urinalysis could even suggest an arteriovenous malformation in the brain! Sophisticated liquid biopsy would help track the responsiveness of a brain tumor to radiotherapy and complement imaging studies. A lab-in-your-pocket and a doctor-in-your-pocket will be a reality. Technology-enabled remote healthcare will be incorporated into the core of the healthcare delivery system. Face-to-face consultations will progressively decline.
Non-digital payment would be almost unknown. The distinction between public and private would progressively blur. Personalized medicine and genomic testing would be available in Tier-II and Tier-III cities. The digital health divide between the haves and the have-nots will significantly narrow. Universal health coverage may take two more decades, but it is on the cards. Equitable redistribution of healthcare facilities will end the monopoly now enjoyed by many institutions.
Patients' expectations will reach an all-time high. Easily accessible information, verging on overload, will raise public awareness of healthcare issues; together with patient advocacy groups, the media, and organizational and financial changes, this will create hitherto unknown ethical problems. The availability or non-availability of international licensure to practice or opine on healthcare issues will create medico-legal and regulatory concerns. Enforcing global standards, and ensuring adherence and compliance to achieve uniform competency in a heterogeneous healthcare workforce, would pose hitherto unknown challenges.
A well-trained algorithm will recommend the best clinical solution, but not necessarily give adequate weightage to patient and family preferences. Could failing to deploy AI be considered malpractice in 2030? Machines, like humans, can also commit errors. When AI-driven machines and robots start making critical treatment decisions or operating autonomously, who is responsible for errors, complications, or patient death? The mindset would need to change from shifting responsibility to sharing responsibility. Biased or skewed data used to train AI algorithms could have major repercussions. If one believes that a solution is not a solution unless it is universally available to anyone, anytime, anywhere, the digital health divide between the haves and the have-nots would create major ethical issues.
Decisions made by an AI platform could override patient autonomy if patients cannot choose for themselves. The need to regulate automated behavior and ensure that no harm is done to human beings is now a reality. Indications for DBS (deep brain stimulation) are increasing, and with chips being implanted into the brain, augmentative neurosurgery could become a reality. There would certainly be ethical issues in proposing invasive procedures for the management of diabetes, hypertension, obsessive-compulsive disorder, depression, impotence, and a myriad of similar disorders or aberrations, including alcohol habituation and even smoking. Hackers could even reset implanted chips.
Obsolescence would be a nightmare for purchase officers. Being future-ready would be extremely difficult: the shelf life of purchased equipment would be measured in months, not years, and upgrades would be needed frequently. ChatGPT (Chat Generative Pre-trained Transformer), an AI chatbot developed by OpenAI and launched on November 30, 2022, and similar products such as Med-PaLM 2, would be a major integral component of a doctor's training and armamentarium. Understanding the complex relationships between different co-existing clinical conditions and different management strategies alone is not enough. ChatGPT would no doubt critically analyze, compare and contrast, factor in and give exact weightage to every conceivable variable, and recommend to a purchase officer what equipment to buy. But does ChatGPT have a 15-year human relationship with the same vendor, and does it know who is reliable in after-sales service and who is not? Of course, an emotional letter generated by ChatGPT did make an insurance provider reconsider its original decision to deny benefits!
Ethically, even the predominantly white ethnic composition of the OpenAI team has been questioned! ChatGPT adheres to the European Union's AI ethics guidelines, which emphasize human oversight, technical robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, societal and environmental well-being, and accountability. Healthcare in the next decade could be totally different. New codes of conduct will need to evolve. An AI-influenced Hippocratic Oath may well be called The Robocratic Oath. Machines, like humans, can also commit errors. "To err is ChatGPT, to forgive is human" would be the adage of the future!
One wonders how Sir William Osler, who in 1890 opined that medicine is a science of uncertainty and an art of probability, would have reacted to the introduction of AI in healthcare. For centuries, practicing medicine involved acquiring as much data about the patient's health or disease as possible and making decisions. Wisdom presupposed experience, judgment, and problem-solving skills, using rudimentary tools and limited resources. AI will not, and should never, replace a compassionate healthcare professional.
Enforcing and implementing culture-sensitive ethical values is what distinguishes the proficient healthcare provider; the patient's interest should be the prime concern. The patient's capacity for self-determination, and the ability to make independent decisions based on personal values and beliefs, should be acknowledged. Though a healthcare provider-patient relationship is legally contractual, the contract is based on trust, confidentiality, a clear understanding of consent, the avoidance of conflicts of interest, and empathy. Occasionally, medical institutions have financial problems that create a conflict of interest between the clinician and the patient. Prioritizing patient interest is not easy with limited financial resources. A clinician has multiple professional roles (doctor, educator, investigator, organizational leader, and consultant) and non-professional roles, such as spouse, parent, and community member. The complexity of these roles, with their potential for conflict, requires clinicians to be particularly skilled in unravelling ethical challenges.
Since the dawn of modern healthcare, never has there been such a radical transformation in every single subspecialty. With 620 individuals already having gone into space in the last 57 years, and space tourism having begun, extra-terrestrial healthcare will eventually be a reality. 3D printing of living tissue in outer space could address the shortage of cadaveric organs for transplant. Bio-print facilities producing functional, complex human tissues in a microgravity milieu during low-earth-orbit flights are in the offing. The ventricular assist device, a life-saving heart pump for patients awaiting heart transplants, was designed using supercomputer simulations of fluid flow through rocket engines. Programmable pacemakers, micro-transmitters used in fetal monitoring, laser angioplasty, and light-emitting diodes (LEDs) used in neurosurgery are all adaptations of space technology. The sky will no longer be the limit for terrestrial problems. As early as 2012, articles had appeared asking "Surgery in Space: Where are we now?" The author expects that, during his lifetime, the first extra-terrestrial surgery in microgravity settings will take place. The ethical implications are mind-boggling.
We need to understand that the future is always ahead of schedule. As Yogi Berra once remarked, "The future ain't what it used to be." As one trained in the BC era in the 20th century, I often feel that, though familiar with ChatGPT and Google Bard, I belong to the Paleolithic Age. We, as a generation, are becoming an endangered species and will soon be extinct. It is my fervent wish and prayer that healthcare providers in the coming decades will not lose sight of ethical practice, though new standards will have to be set for what is ethical. The raison d'être of our existence is to provide TLC (tender loving care) to all those who have entrusted their lives to us. In 1967, Lars Leksell, the inventor of the gamma knife (at that time the most sophisticated equipment in healthcare), remarked, "A fool with a tool is still a fool." Let us always remember that technology is a means to an end and not an end in itself. From an ethical perspective, a solution is not a solution unless it is universally available to anyone, anytime, anywhere. Alas, this will be so in Utopia, not in the real world.
It would be interesting to review this contribution in 2030!!
The author is also Distinguished Visiting Professor, IIT Kanpur; Distinguished Professor, The Tamilnadu Dr MGR Medical University; and Emeritus Professor, National Academy of Medical Sciences.