Nada Abedin | Internal and Emergency Medicine, University Hospital Frankfurt, Germany; Founder, AIxMedical, Aachen, Germany
Citation: EMJ Innov. 2025; https://doi.org/10.33590/emjinnov/VHJP5465
With your background in gastroenterology, hepatology, intensive care, and emergency medicine, what gaps or unmet needs did you observe in clinical practice that inspired you to explore the potential of AI in healthcare?
There are two sides to this. On one hand, I’ve been working in high-stakes specialties with fast-paced environments, such as emergency medicine, where you really do have to make decisions quickly. Patients are critically ill, and we cannot afford to lose time to inefficient workflows or to waiting on diagnostic decisions. So, I kept thinking that AI must be able to provide support here. When working in gastroenterology, hepatology, and the ICU, I’ve seen how overwhelming the workload can be: documentation overload, manual data entry, and, again, inefficient workflows. We have tons of data, yet we are unable to use it effectively. Our current systems are not designed to support clinicians, and AI is something that can help with that. Overall, it’s about supporting clinicians, augmenting their work, and having AI double-check decisions made by clinicians, because humans are humans. Having a tool to check your decisions would enhance patient satisfaction and patient safety.
The other side is seeing what is happening in the world around us. The way we think and the way we work are changing because of AI applications that are already part of our daily lives. Just thinking about how different these approaches are, and how different the medical world is from everything happening outside of medicine, has been bothering me. It triggered the idea to build a bridge between these worlds: to make AI accessible to medicine, to increase patient safety, patient satisfaction, and physician satisfaction, and to improve the way we practise medicine.
As the founder of AIxMedical (Aachen, Germany), how does AIxMedical’s approach to clinician-led AI development differ from more traditional models, and what are some of the most impactful projects or initiatives your team has led recently?
The world of medicine is moving in a completely different direction from everything outside of medicine, and I’m seeing a lot of applications being built for the medical world, but clinicians are not really a part of this. That’s a problem. The company was born out of the simple belief and drive that doctors must be involved. If we want AI to truly support clinical care, to be safe, sustainable, and useful, we need clinician engagement from the start, not just as end-users, but as co-creators.
Thus, our first approach is to educate clinicians. To participate in this technical development, we need to demystify AI: understand what AI in the clinic means, what AI in healthcare means, what the possible applications are, what the limitations of AI are, and what it means these days when patients approach us saying that they have discussed possible diagnoses or treatment plans with ChatGPT (OpenAI, San Francisco, California, USA). Essentially, it is about how we can make AI our ally. Therefore, we built a tailored training programme for clinicians, by clinicians, on AI in healthcare. It covers the fundamentals, real-world use cases, risks, and limitations.
The second part is matchmaking. There are several great companies building applications for the clinical field, but many of them do so without clinician input. This leads to two problems: products that don’t really target the pain points, and products that are never tested under real clinical conditions. We connect startups with relevant clinicians to provide early and ongoing feedback, and we help initiate collaborations with hospitals and clinics for clinical studies, so that their products can be tested in real clinical settings.
The third pillar is consulting: offering strategic advice and guidance to companies entering the medical space. That said, it is a minor part compared to our main goal.
Do you find that a lot of hospitals are open to collaborating with startups?
It’s quite mixed. On one hand, there is growing curiosity and interest, but on the other hand, there are still major barriers. Our clinical environments are not yet designed to support AI applications. This means that, in terms of interoperability, many applications don’t have access to our current systems and data. The question of who is going to pay for these applications always comes up as well. These are critical points.
Then there is something I would summarise under change management and cultural barriers. I’ve talked about AI literacy among clinicians, but a lot of cultural change has to happen in hospitals for clinicians to understand the importance of these applications. Even when the technological potential is there, the readiness on the human side is often missing. While collaboration is possible, and many hospitals are open in principle, there is still a lot of foundational work to be done.
At HLTH, you won the debate session titled “This House Believes There Will Be No Doctors in the Future.” What arguments or moments do you think most resonated with the audience during this session?
It was a close run; in the end, it was 52 points against 48. It was a fascinating debate, and emotions ran high, which connected to one of our main points: patients are human beings. They need emotion, human connection, and interaction, and there are things that you only see when you work in a clinic dealing with real-life scenarios. I brought up an example of parents bringing their 2-year-old child to the emergency department. They were worried, they were afraid. They just needed someone, a human, to talk to, to comfort them, and to tell them they would be there for them. They needed someone to turn to, someone who was responsible, and someone who could be held accountable for what was happening. These are the legal and ethical aspects of medicine that cannot simply be built into, or provided by, an AI tool.
Another important message was that medicine is more than the clinical vignettes you see in studies where AI outperforms clinicians. Yes, AI may outperform clinicians on certain tasks in controlled environments, but real-life medicine isn’t clean-cut. Medicine is viewing, sensing (with all your senses), and assessing the current situation, then reacting accordingly. It mostly comes down to these important factors: compassion, human interaction, human touch, the legal and ethical aspects, and the complexity of medicine in real-life situations, because we are not dealing with robots, where you can simply change something in the mechanics. We are dealing with situations that you can only assess through real interactions. So, while acknowledging AI’s incredible potential, we also made a clear statement: the future of medicine is doctor and machine, working together.
During the debate, what were the most compelling points raised by the opposing side?
One of the strongest arguments was that AI doesn’t get tired. It offers consistent, reproducible, objective assessments, and it has access to vast databases and studies that human beings cannot carry around with them. This is why I believe that AI could be great at double-checking, as I mentioned before. Having something check your decisions, flag things that are going wrong, and flag important results would be very useful. This is one important point.
Another key point was standardisation. This is why we have standard operating procedures. If we look at the aviation industry, which was also brought up by the opposing team, they have autopilot systems but still have human pilots to be there for emergency cases. The same applies to medicine: standardisation through AI could help reduce variability in care, but human oversight is irreplaceable when things go off-script. We also discussed bias. Humans are biased in clinical encounters and decisions. However, AI also has biases, because we are the ones who train the models. The real question is: how can we make sure that the models we train help us overcome these biases?
Both sides of the debate agreed that AI will definitely change the way we practise medicine. It will augment a lot of our work, and there will be specialties that see a radical shift in the way they work.
At the end of the day, we all agreed that there will still be doctors in one way or another. No matter where you are, there will be a need for medical care, even where there is no electricity or access to technological advances. We live in a world where there are wars. We live in a world with natural disasters, and we cannot afford to rely solely on tech. How do we make sure that, with all the advancements in AI and medicine, we still train doctors to deliver care responsibly without these tools? I think this hasn’t been addressed enough yet, but it is a critical topic.
Reflecting on HLTH Europe 2025, which sessions or discussions did you find most impactful?
What I really loved about HLTH Europe 2025 was its truly interdisciplinary nature. There were great exchanges across all the specialties, with decision-makers, administrators, clinicians, and patients in attendance. Only through this kind of dialogue will we be able to shape a future-proof healthcare system. I loved the specific tracks, for example, the track on women’s health. I think it’s very important to highlight these areas, which have been, and still are, underrepresented, and to push the conversation forward. Singling out one or two sessions is really tough for me, but just looking at the programme and the topics covered gives you an idea of how innovative the conference was, and of how much went into bringing all these people together and hosting multi-stakeholder panels to talk about what might be possible in the future.
Implementation also stood out at the conference: what has already been implemented, what the good parts of it are, what needs to be improved, and how we move forward from here to make sure that we are using all the opportunities we have in a good way. From women’s health to wearables, the perceptions of Gen Z, and the future of medical care, the timeliest questions were addressed, and I’m already excited for next year.
What is your one key takeaway from HLTH Europe 2025?
If I had to sum it up: AI in medicine is not just a future concept. It is already here, and it is reshaping the way we think about medical and clinical care. There are two important points related to that. The first is implementation: How do we ensure that we can implement everything that is being developed? What can we learn from each other? What about collaboration? We need to work together, across all fields. It’s not a tech problem, a business problem, or a clinical problem alone. We must work together to ensure that we are using all the resources we have in a meaningful way and in the best interest of the patient.
The second is moving towards prevention. It was a big topic, and I think prevention is the future of medicine, especially when you think about all the opportunities AI offers us, and how we can move from reactive medicine towards something that focuses on healthier lives. From predictive analytics to early detection, we have the tools, but we still need a shift in mindset and funding. This is something I definitely took away from the conference: the future of medicine is now, and the question is no longer if, but how we shape it, together.