Are big data and technological innovations in healthcare an unqualified convenience, or do they bring complications as well? Do they lead to better healthcare, or to healthcare that is even more expensive? Wouldn’t it be wiser to spend money on education or safety rather than on surgical robots? “What we do is less spectacular than developing robots, for example, but it is important.”

Professor Antoinette de Bont was trained as a health scientist and a technology researcher, because “technology is too important to be left to technologists.” She is now Professor of Sociology of Innovation in Healthcare at the Erasmus School of Health Policy & Management (ESHPM). She focuses on the relationships between technological innovations and the social practices in which they are embedded. Together with healthcare providers, policy makers, engineers and businesses, she conducts research into the development of big data in healthcare in Europe. From a social perspective, among other things, she looks at digitisation and the use of big data: how are technological innovations used to the benefit of healthcare, and what does technology mean for the people – professionals and patients alike – who work with it?

The Data Does Not Work was the provocative title of your inaugural address two years ago. The message was that we expect a lot from digital innovations assisted by big data in healthcare, but that this digitisation takes longer, costs more and requires more effort than we think. Is that still the message?
“One of the bigger projects financed by the European Union that I’m currently working on is about the use of big data in healthcare. We work together with Philips, IBM, other universities, other faculties within Erasmus University and with the pharmaceutical industry in a public-private partnership. What can artificial intelligence (AI) mean in healthcare?
The Dutch healthcare system is currently based on commercialisation with a social slant. The introduction of AI requires adjustments to our healthcare system, because the organisation of healthcare services and the relationship between doctor and patient are changing. It also raises questions about paying for the algorithms that predict disease, for instance, because there is no budget for prevention in the current healthcare system. And how do you make sure that storing and reusing this data is secure? How do you determine whether an algorithm is reliable? It also raises all kinds of legal, ethical and moral questions, particularly if commercial companies get hold of the AI algorithms.”

How do commercial companies get hold of AI algorithms?
“These companies provide a lot more than equipment: they provide treatment plans, including trainers and nurses. We look at the socio-scientific side: what are the (unwritten) rules and guidelines in this context? How will it be embedded in organisations? What is the business model behind it? What is the result of using algorithms – does it make things cheaper, for instance?”

Could you give an example of a non-socio-scientific side?
“For example: the images from the new MRI scanners are stored. The hospital has them, but a company like Philips has them too. What is the company allowed to do with this data? Additionally, big data can help combine fragmented knowledge. The computer sees that there is something strange about a lung. The database then yields comparable images of something similar. The computer also shows a guideline for treatment. In this way, a doctor would have everything together at a single button click. From other digital technology, we know that in practice this process doesn’t always go smoothly.”

It sounds easy: treatment advice at a single button click.
“A good doctor or radiologist can’t – and won’t – do it like this. Doctors find it very hard to transfer diagnostics to algorithms and to rely on the knowledge of the computer. You need to think very carefully about the social embedding of this type of technology to ensure it has added value.
In fact, we do three types of research. Firstly, we want to ensure that such technological developments really result in better healthcare. That raises questions like: how do you make it affordable? Which business models can be used? Secondly, we think further: what is the actual added value of big data for a healthcare professional, a patient, a healthcare organisation or society? In this respect, we are very critical. Finally, we conduct action research: how do we persuade doctors, nurses or patients to accept the technology?”

What are the challenges you face in this kind of research?
“In the cooperation with companies, we’re sometimes told: ‘You can’t carry out research into this, because it’s a business secret.’ Of course, that’s unacceptable: these are big EU projects, and we’ve agreed to conduct research. Another interesting issue is: may private parties hide behind business secrets if they also want to have a social task?
Another challenge is payment. I also work for Medical Delta, where technologies from TU Delft are linked to medical problems faced by Erasmus MC. We can detect only half of all cases of cardiac arrhythmia, for example. A technician then determines what’s needed to improve this and builds a machine, which is used and further developed at Erasmus MC. They then come to us with a request for payment. Sometimes we wonder whether this is what we really want. We already spend a lot of money on healthcare as it is. Is diagnosing even more cases of cardiac arrhythmia the biggest problem, or would we rather focus on preventing it? Or should we spend more money on education and safety, for example? As a university, we opt for the social perspective: what problems in society do we want to solve, for whom, for what, and at what cost? These are not always the technical problems.”

You can’t stop technological innovation, can you?
“That’s not our task either. But we do need to show that the technological push alone doesn’t work well enough. Our perspective is less spectacular than the new robot developed by a university of technology. That puts us at a bit of a disadvantage. They create new technology, and we wonder whether we’re prepared to spend our money on it.
This calls for good research into implementation and embedding. The technologies already exist, like the MRI scanners from the research I mentioned. These machines have already been purchased and commissioned. What’s the use of maintaining that their social value is zero? Or should we instead find ways to ensure that this type of innovation does have added value?”

Source: https://www.eur.nl/en/news/you-have-think-carefully-about-social-embedding-artificial-intelligence-ensure-it-has-added-value