Dr Alberto-Giovanni Busetto is a Swiss-Italian AI Executive & Innovator and the Chief AI Officer of HealthAI. He is a Member of the World Economic Forum’s Global Future Councils and previously held pioneering roles as the inaugural Global Head of Data Science & AI at Merck Healthcare and the first Group SVP Head of Data & AI at The Adecco Group.
Throughout his career, Alberto-Giovanni has been recognized as a UN Global Compact Mentor, Merck Digital Champion, and IBM Distinguished Speaker. He has contributed to global AI governance as a Task-Force Member of the World Employment Confederation and was honored by the US National Academy of Engineering’s Frontiers of Engineering (FoE) as one of the nation’s outstanding early-career engineers. Additionally, he served as the US Big Data Chair of the Japan-America Frontiers of Engineering.
What inspired your transition from AI leadership roles in major companies like Merck to leading HealthAI?
Hello, I’m Dr. Alberto-Giovanni Busetto, Chief AI Officer at HealthAI – The Global Agency for Responsible AI in Health. I have more than 20 years of experience in the design, development, deployment, and management of AI solutions. My career has been marked by a commitment to harnessing AI for meaningful impact. At Merck Healthcare, I served as the inaugural Global Head of Data Science & AI, where I led the development of AI-driven solutions in healthcare and biotechnology. This role underscored the transformative potential of AI in health sectors.
Transitioning to HealthAI allowed me to focus more intensively on responsible AI deployment in health, aiming to bridge the gap between technological innovation and human-centered care. Data governance is also one of my core focus areas in this work. Something that really excites me at HealthAI is our work on regulatory and innovation sandboxes, where we’re building blueprints to accelerate the development and adoption of responsible AI at scale and enable innovators.
I’m driven by the question: How can we make AI not just smarter, but truly useful where it matters most—improving people’s lives? At HealthAI, I have the opportunity to shape how we think about AI in health, offering guidance to governments and health institutions on deploying AI solutions that are not only cutting-edge but also ethical, transparent, and deeply embedded in real-world health needs. For me, it’s not just about algorithms—it’s about impact.
What excites you most about the intersection of AI and health?
The convergence of AI and health presents opportunities to enhance outcomes through improved diagnostics, personalized treatments, and optimized health solutions. AI’s ability to analyze complex datasets can, for example, lead to earlier disease detection and more accurate prognoses.
What excites me most is that these advancements are not just reserved for high-income countries—they have the potential to transform health sectors in low- and middle-income countries as well. AI-powered diagnostics can bring specialist-level insights to regions with limited medical expertise, predictive analytics can help allocate resources where they are needed most, and digital health tools can bridge gaps in access to care. By deploying AI responsibly, we can create more equitable health solutions that serve people everywhere, regardless of geography or income level.
How can AI help close the health gap between high-income and low- and middle-income countries? What challenges exist in ensuring equitable access?
AI has the potential to democratize health by making advanced medical insights accessible globally. In regions with limited medical expertise, AI-powered diagnostic tools can assist in identifying diseases accurately. For example, telemedicine platforms, enhanced by AI, can connect patients in remote areas with specialists worldwide, facilitating knowledge transfer and improving care quality.
Ensuring equitable access to AI-driven health solutions involves addressing several challenges. Some of the most prominent relate to infrastructure limitations, which can hinder the adoption of AI technologies, as many regions lack the digital frameworks needed to support advanced solutions.
Data privacy concerns also remain critical—protecting personal information requires robust governance structures to ensure confidentiality and security. Additionally, AI systems must be tailored to local languages and cultural contexts to be truly effective, overcoming barriers that could otherwise limit accessibility.
Regulatory hurdles further complicate the landscape, as striking the right balance between fostering innovation and ensuring privacy and safety requires thoughtful policy development. If these challenges are addressed proactively, AI can become a powerful tool for improving global health equity.
How crucial are collaborations between governments, tech companies, and health providers in ensuring responsible AI development and deployment?
Collaboration among governments, technology firms, and healthcare providers is not just beneficial—it’s essential for the responsible development and deployment of AI in health. These partnerships enable the creation of comprehensive frameworks that address ethical considerations, safeguard data privacy, and establish operational standards that ensure AI is both effective and trustworthy.
By working together, stakeholders can move beyond fragmented, one-size-fits-all solutions and instead develop AI-driven approaches that are tailored to real-world health needs. This means leveraging AI to enhance diagnostics, streamline clinical workflows, and expand access to quality care—particularly in underserved regions. Moreover, collaboration fosters transparency and accountability, ensuring that AI remains a tool for empowerment rather than exclusion.
When innovation is driven by shared responsibility and aligned with public health priorities, AI has the potential to reshape our approach to health in a way that is both transformative and equitable.
What ethical considerations should be at the forefront of AI-driven health solutions?
When we think about the ethical considerations in AI-driven health solutions, there are a few key areas we need to focus on. First, we have to address bias mitigation to ensure AI models don’t unintentionally reinforce existing health disparities. Transparency is also crucial, as the decision-making process behind AI must be clear and understandable to all parties involved. Accountability comes next, with clear lines of responsibility for any decisions made by AI systems, so that someone can be held accountable if things go wrong.
Autonomy is at the very core of ethical AI in health, which means it’s essential that we not only respect but actively support people’s rights to make well-informed decisions about their care. This goes beyond simply providing information—it’s about ensuring that people fully understand their options, the potential risks and benefits, and the role AI plays where it is employed.
We must ensure that users and those who benefit from AI feel empowered and confident in the choices they make, knowing they have a say in the technologies that affect their health and well-being.
AI models in health have sometimes demonstrated biases. How can regulators and AI developers mitigate this risk?
To mitigate biases in AI models, regulators and developers must focus on building systems that represent diverse populations. This starts with collecting data from a wide range of demographic groups, so the AI is trained on information that reflects the real-world diversity of patients. But data alone isn’t enough: continuous monitoring is also key, as AI systems should be regularly assessed to identify and correct any biased patterns that may emerge over time.
Involving a variety of stakeholders in the development process, including ethicists, patient representatives, clinicians, and other medical experts, brings in valuable perspectives that help ensure AI models serve everyone equitably and fairly.
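To make the idea of continuous monitoring concrete, the sketch below shows one possible form a subgroup audit could take: it compares a model’s sensitivity across demographic groups on labeled audit data and flags large gaps. The column names, metric, and threshold are illustrative assumptions rather than a prescribed standard.

```python
# Minimal sketch of a subgroup fairness audit: compare model sensitivity
# across demographic groups and flag gaps above a chosen threshold.
# Column names ("group", "label", "prediction") and the 0.05 gap threshold
# are illustrative assumptions, not a prescribed standard.
import pandas as pd
from sklearn.metrics import recall_score

def audit_subgroups(df: pd.DataFrame, gap_threshold: float = 0.05) -> pd.DataFrame:
    """Report sensitivity (recall) per subgroup and flag large disparities."""
    rows = []
    for group, sub in df.groupby("group"):
        sensitivity = recall_score(sub["label"], sub["prediction"])
        rows.append({"group": group, "n": len(sub), "sensitivity": sensitivity})
    report = pd.DataFrame(rows)
    report["flagged"] = (report["sensitivity"].max() - report["sensitivity"]) > gap_threshold
    return report

# Example with synthetic audit data: group B is under-detected and gets flagged.
audit_data = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "label": [1, 0, 1, 1, 1, 0],
    "prediction": [1, 0, 1, 0, 1, 0],
})
print(audit_subgroups(audit_data))
```

In practice such an audit would run on much larger, properly governed datasets and cover multiple metrics, but the principle of routinely comparing performance across groups is the same.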
How can governments and organizations ensure that AI-driven health solutions responsibly use patient data?
To ensure that AI-driven health solutions responsibly use patient data, governments and organizations need to implement strong data governance policies that clearly outline how patient information is collected, stored, and shared.
Anonymization techniques also play a core role in protecting people’s identities, ensuring that data can be used without compromising privacy. On top of that, adhering to both international and local data protection laws is essential not only for maintaining trust but also for ensuring that AI systems are operating within legal boundaries. This approach helps create a foundation of security and transparency that benefits both people and the health sector as a whole.
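As one illustration of what anonymization can look like in practice, the hypothetical sketch below pseudonymizes a direct identifier with a salted hash and coarsens age into bands before data is shared. The field names and specific techniques are assumptions for the example; real deployments would layer on stronger protections such as k-anonymity or differential privacy, along with secure key management.

```python
# Minimal sketch of pseudonymization before patient data is shared for AI work:
# direct identifiers become salted hashes and quasi-identifiers are coarsened.
# Illustrative only; production systems add stronger protections (k-anonymity,
# differential privacy) and secure management of the salt/keys.
import hashlib

SALT = "replace-with-a-secret-salt"  # assumption: kept secret and never shared

def pseudonymize_id(patient_id: str) -> str:
    """Replace a direct identifier with an irreversible salted hash."""
    return hashlib.sha256((SALT + patient_id).encode("utf-8")).hexdigest()[:16]

def coarsen_age(age: int, bucket: int = 10) -> str:
    """Generalize an exact age into a band to reduce re-identification risk."""
    low = (age // bucket) * bucket
    return f"{low}-{low + bucket - 1}"

record = {"patient_id": "PT-12345", "age": 47, "diagnosis": "type 2 diabetes"}
shared = {
    "pseudo_id": pseudonymize_id(record["patient_id"]),
    "age_band": coarsen_age(record["age"]),
    "diagnosis": record["diagnosis"],
}
print(shared)  # no direct identifier leaves the source system
```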
What are the biggest obstacles in regulating AI for health, and how can countries overcome them?
The rapid pace at which technology advances comes to mind first. Regulations often struggle to keep up with how quickly AI innovations are emerging, leaving gaps in oversight. Additionally, policymakers need a deeper understanding of both AI technologies and the unique complexities of health to craft effective, informed regulations.
Another hurdle is the lack of global standardization—without consistent regulations across countries, it becomes difficult to foster international collaboration and ensure that AI solutions can be safely and ethically deployed worldwide.
To overcome these obstacles, countries will need to invest in continuous education for policymakers, work towards harmonizing regulations internationally, and stay agile in adapting to new technological developments.

Equally important is fostering continuous dialogue between technologists, health professionals, and regulators, supported by training programs that bridge knowledge gaps. HealthAI, for example, is tackling this through its Global Regulatory Network (GRN), which strengthens local capacity and capabilities in AI regulation for health, ensuring that regulators worldwide are well-equipped to manage the evolving landscape of AI in health.
How does HealthAI help countries build and certify responsible AI validation mechanisms?
As an implementation partner, HealthAI works with governments, ministries of health, and other health organizations to navigate not only the safety and efficacy of AI-driven health tools but also their ethical compliance, ensuring that the technology aligns with both regulatory requirements and societal values.
HealthAI supports the development of rigorous certification processes that help establish trust and accountability in AI health solutions. By doing so, we ensure that AI systems meet the highest standards before being deployed, which is essential for protecting welfare, enhancing beneficial outcomes, and fostering global confidence in the responsible use of AI in health.
How can AI be used to predict, track, and manage future health crises?
AI can play a pivotal role in managing health crises by providing tools to predict, track, and manage outbreaks more effectively. Through predictive analytics, AI can analyze vast amounts of epidemiological data, identifying patterns and trends that could signal the potential for an outbreak before it happens, giving authorities time to prepare. In addition, real-time surveillance powered by AI can continuously monitor health data from hospitals, clinics, and other sources, allowing for the rapid detection of emerging threats and the ability to respond quickly to contain them.
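As a simple illustration of the surveillance side, the sketch below flags days whose reported case counts spike well above a rolling baseline. Real systems fuse many richer data streams and models; the window size, threshold, and synthetic counts here are assumptions for the example.

```python
# Minimal sketch of outbreak surveillance: flag days whose case counts deviate
# sharply from a rolling baseline. The 7-day window and z-score threshold are
# illustrative assumptions; operational systems use far richer data and models.
import pandas as pd

def flag_anomalies(cases: pd.Series, window: int = 7, z_threshold: float = 3.0) -> pd.DataFrame:
    """Flag days where cases exceed the prior rolling mean by z_threshold sigmas."""
    baseline = cases.rolling(window).mean().shift(1)  # baseline from previous days only
    spread = cases.rolling(window).std().shift(1)
    z = (cases - baseline) / spread
    return pd.DataFrame({"cases": cases, "z_score": z, "alert": z > z_threshold})

# Example with synthetic daily counts: the spike on the final day raises an alert.
daily_cases = pd.Series(
    [12, 15, 11, 14, 13, 12, 16, 14, 13, 60],
    index=pd.date_range("2024-01-01", periods=10),
)
print(flag_anomalies(daily_cases))
```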
AI can also assist in resource optimization during a crisis, helping authorities allocate medical supplies, personnel, and other critical resources more efficiently, ensuring that they are used where they are most needed. By integrating AI into public health strategies, countries can improve their ability to anticipate and respond to future health emergencies, enhancing both preparedness and resilience in the face of evolving threats. This proactive approach saves lives and helps to minimize the overall impact of health crises on society and the economy.
Thank you for the great interview; readers who wish to learn more should visit HealthAI.