How technology is changing the future of healthcare

Thursday, 12th November 2020

Two distinguished surgeons discuss technology’s role in unlocking the future of healthcare.

To “intervene earlier and prevent folks from getting sick before they do” is the grand hope of Dr Alan Karthikesalingam, a surgeon scientist and research lead at Google Health UK. His aspirational vision “may take a long time”, he conceded.

As part of the RTS Digital Convention 2020, he was in conversation with Professor Lord (Ara) Darzi, President of the British Science Association. The two surgeons quickly set the tone as they discussed the difficulties confronting them during the pandemic.

Darzi shared his recent experience of working in an intensive care unit and how it was “very, very painful” to see a ward usually occupied by patients suffering from a variety of conditions suddenly dominated by those ill with Covid-19.

This was where Karthikesalingam offered a potentially impactful tool, not only in the fight against the virus but for healthcare as a whole: artificial intelligence (AI). Typically, AI refers to machine intelligence that learns from data in order to maximise its chances of achieving a goal. By using machine-learning algorithms to spot patterns, such systems can even predict outcomes.

“A few years ago, it was being used to do things such as play chess. The same systems are now being used to predict the protein structure of this virus,” explained Karthikesalingam. He has led studies into the use of AI in healthcare and knows how important it can be.

But, as the race to develop a Covid-19 vaccine intensifies, one key technological cog in the AI machine is needed more than ever – data. Google has used information collected from individuals and agencies to assist in several ways. Karthikesalingam told the RTS how Google Maps, for example, “is now showing accurate and live information about some 14,000 Covid test sites in more than 20 different countries”.

Darzi complimented Google on this work before asking whether it “might play a role in discovering some new therapeutics” as we endure a second coronavirus wave.

Karthikesalingam pointed out that pharmaceutical companies were using “machine learning to try and make their selection of promising drug candidates more efficient”. But he admitted that, “because it’s such a new virus, the kind of clinical data about which treatments work – and which treatments don’t work – is so new that it is probably too [soon] for artificial intelligence to help at this end of the scale.”

In light of this, Darzi took a step back and widened the discussion to other areas of research that Google and its AI subsidiary, DeepMind, have ventured into. Karthikesalingam highlighted work on breast cancer screening and how the data gathered by big national breast screening programmes had been used to train the algorithm to make maximum use of X-rays.

“I think an average radiologist will look at more than 10 million images in their career, and most of them tend to get better over time. In the same way, we can train these machine-learning systems to interpret X-rays,” said Karthikesalingam. He went on to claim that Google’s system had the “same level of accuracy as expert radiologists”.
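For illustration only, and not a description of Google's actual system, the following minimal Python sketch shows the kind of workflow Karthikesalingam describes: a model is trained on labelled scans to recognise a pattern, then makes predictions on scans it has never seen. Every dataset, number and "abnormality" here is a synthetic assumption made for the example.

```python
# Toy sketch of pattern recognition on labelled images; all data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Each synthetic "scan" is a flattened 32x32 image with a binary label
# (1 = abnormality present, 0 = normal); both are invented for this sketch.
n_scans = 2000
images = rng.normal(size=(n_scans, 32 * 32))
labels = rng.integers(0, 2, size=n_scans)
# Give abnormal scans a faint pixel pattern so there is something to learn.
images[labels == 1, :50] += 0.5

X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, random_state=0)

# Training: the model learns which pixel patterns separate the two classes.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Prediction on scans the model has never seen, scored with AUC,
# a measure commonly reported in screening studies.
scores = model.predict_proba(X_test)[:, 1]
print("AUC on held-out scans:", round(roc_auc_score(y_test, scores), 3))
```

In real screening research the labelled images run into the millions and the models are far more sophisticated, but the train-then-evaluate loop is the same.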

This led Darzi to ask if, “at the end of the day, a radiologist would sign reports [jointly with the company providing the AI]”. Karthikesalingam looked on AI as a tool that could complement doctors rather than replace them – at least for now: “Most people are looking at machine learning in healthcare as a tool that assists experts to be more efficient and to be able to do their job.”

At what point might AI become so good that it could theoretically replace a human, wondered Darzi: if a machine processed more than the rough average of 10 million images that a radiologist saw in a lifetime, would the machine become “better” than a human?

Karthikesalingam said it was not as simple as that. For one thing, the quality of the image was crucial: “Imagine taking a photo of a number of cats from a distance. If you’re standing very, very far away, then it doesn’t matter how good you are at counting cats, because the photograph might be too blurry [for you] to see.”

However, there was potentially more to see in an image than many humans could currently detect, Karthikesalingam added. Referring to his collaboration with London’s Moorfields Eye Hospital on identifying diseases that can lead to blindness, he described research that Google has conducted. By “looking at these images of the back of the eye, machine-learning systems can [reveal] predictive information about the rest of the body’s health. For example, about how likely people are to have heart attacks or strokes in the five or 10 years following [these images].”

He added: “That’s very early-stage research. But… we might uncover new things in these images that we couldn’t see before.”

Ultimately, both surgeons agreed on one thing that was of the utmost importance – patient care. Darzi recalled an incident some 20 years previously, when he “commissioned a robotic system in the operating theatre”. Subsequently, a patient was very upset “to hear that it was a robot that was going to operate on them”.

Although Darzi assured the patient that it was “just a tool”, it was a seminal moment because it raised the question of how the public would react to advanced technology playing a part in their healthcare.

Karthikesalingam pointed out that, in his research, he had found that including patients in projects from the beginning “guarantees that you build the systems in ways that are more acceptable to patients and the public”.

But who those patients were was equally important, noted Karthikesalingam, because machine learning could be undermined by biased data. Some AI medical systems had been inconsistent in processing people from ethnic minorities in the US and Europe because they “learnt” from datasets drawn largely from one – white – ethnic group.

When bias existed in the way that data was collected, the process of “repeatedly training pattern-recognition systems using this data [could] build in these biases in a way that makes them permanent – or even amplify them – when we turn them into tools and technologies”.
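A toy sketch can make the point concrete. In the Python example below, a classifier is trained on synthetic data in which one group supplies 95 per cent of the examples; evaluated on balanced test sets, it performs far worse on the underrepresented group. Every group, feature and figure here is an illustrative assumption, not data from any system discussed above.

```python
# Toy demonstration of how a skewed training set bakes bias into a model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_group(n, signal_col):
    # Synthetic patients: the condition is signalled by a different feature in
    # each group, a deliberately stark stand-in for real-world variation.
    X = rng.normal(size=(n, 5))
    y = (X[:, signal_col] > 0).astype(int)
    return X, y

# Training data dominated by group A (95 per cent), with group B underrepresented.
Xa, ya = make_group(1900, signal_col=0)
Xb, yb = make_group(100, signal_col=1)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Balanced held-out evaluation: the model has mostly learnt group A's pattern,
# so it does far worse on group B, and retraining on its own skewed outputs
# would only entrench the gap.
Xa_test, ya_test = make_group(1000, signal_col=0)
Xb_test, yb_test = make_group(1000, signal_col=1)
print("Group A accuracy:", round(accuracy_score(ya_test, model.predict(Xa_test)), 3))
print("Group B accuracy:", round(accuracy_score(yb_test, model.predict(Xb_test)), 3))
```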

Throughout the discussion, data and how it drove technology was the central theme. Advanced technologies such as AI might be exciting but simpler things, such as video calls, could sometimes be more effective.

The extent to which virtual medical consultations had become commonplace had taken Darzi by surprise. He made the point that “if you’d asked me about a year ago how many patients would you be doing a remote consultation with, I would have said zero.… We’ve moved from nothing to millions of remote consultations – in other words, sitting down in front of a television and having a discussion with the patient.”

He believed that “a lot of that will remain with us, because it’s good, it’s efficient”.

Karthikesalingam’s hope was that, “over the next 10 or 20 years, we will start to see much more digitisation of our wellbeing and our health” to process and unlock new insights.

It is this data-driven dream that may help realise his hope that we can, one day, “intervene earlier and prevent folks from getting sick before they do”.

 

Report by Omar Mehtab, who is a journalist on the BBC technology show Click. ‘In conversation with Professor Lord Darzi and Dr Alan Karthikesalingam’ on 20 October was part of the RTS Digital Convention 2020, sponsored by YouTube. The producer was Jon Brennan, manager, EMEA broadcast, entertainment and media partnerships at Google.
