Tomorrow’s medicine is today’s research. That is why the question of how we allocate resources to research is at least as important as the question of how we allocate resources to health care itself. -- Tony Hope, Medical Ethics

Privacy and Confidentiality

  1. Historical Context

    The Hippocratic Oath (Edelstein, 1943) emphasizes confidentiality as a sacred trust between physician and patient. Contemporary reiterations, such as the World Medical Association’s Declaration of Geneva, echo these sentiments, framing patient privacy as non-negotiable.

    The system must also implement Ann Cavoukian’s Privacy by Design framework, which emphasizes positive-sum rather than zero-sum outcomes (Cavoukian, 2009).

  2. Philosophical Considerations

    Enlightenment thinkers such as Kant upheld human autonomy as a fundamental moral imperative. Kantian ethics suggests that using patient data solely as a means to an end, without informed consent, is ethically problematic (Kant, 1785). John Stuart Mill’s harm principle further supports protecting private information to prevent harm and maintain trust.

  3. Contemporary Implications

    In the age of surveillance capitalism (Zuboff, 2019), health data is a valuable commodity. The AI system must therefore strictly follow Ontario’s Personal Health Information Protection Act (PHIPA) and the federal Personal Information Protection and Electronic Documents Act (PIPEDA) (Information and Privacy Commissioner of Ontario, 2004; Office of the Privacy Commissioner of Canada, 2024). Measures such as data minimization, differential privacy, encryption, and strict access controls must be in place. The framework should also include ongoing compliance checks and audits to ensure data-handling practices remain in line with evolving legal standards and community expectations.
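
    As a minimal illustration of one such measure, the sketch below shows how a Laplace mechanism could add calibrated noise to an aggregate count before release, a standard building block of differential privacy. The function name, epsilon value, and example count are assumptions made for this sketch, not a prescribed implementation.

      import numpy as np

      def laplace_count(true_count: int, epsilon: float = 1.0, sensitivity: float = 1.0) -> float:
          """Release a differentially private count via the Laplace mechanism.

          Adding or removing one patient changes the count by at most `sensitivity`,
          so noise drawn from Laplace(sensitivity / epsilon) yields epsilon-DP.
          """
          noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
          return true_count + noise

      # Example: report how many patients in a cohort were readmitted,
      # without exposing exact counts for small communities.
      exact = 42  # hypothetical aggregate from the health data store
      print(round(laplace_count(exact, epsilon=0.5), 1))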

  4. Guidelines

    • Minimum necessary data collection principle
    • End-to-end encryption for all health data
    • Strict access controls and audit trails (see the sketch following this list)
    • Data localization within Canada to comply with PHIPA
    • Regular privacy impact assessments
    • Clear data retention and disposal policies
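
    A minimal sketch of the access-control and audit-trail items above follows. The roles, record identifiers, and hash-chaining scheme are illustrative assumptions, not a reference implementation of PHIPA requirements.

      import hashlib
      import json
      import time
      from dataclasses import dataclass, asdict

      # Toy role-based access control with an append-only, hash-chained audit log.
      ALLOWED_ROLES = {"attending_physician", "care_coordinator"}

      @dataclass
      class AuditEntry:
          timestamp: float
          user: str
          role: str
          record_id: str
          granted: bool
          prev_hash: str

          def digest(self) -> str:
              return hashlib.sha256(json.dumps(asdict(self), sort_keys=True).encode()).hexdigest()

      audit_log: list[AuditEntry] = []

      def access_record(user: str, role: str, record_id: str) -> bool:
          """Grant access only to permitted roles, and log every attempt."""
          granted = role in ALLOWED_ROLES
          prev_hash = audit_log[-1].digest() if audit_log else "genesis"
          audit_log.append(AuditEntry(time.time(), user, role, record_id, granted, prev_hash))
          return granted

      # A denied attempt still leaves an audit entry.
      access_record("j.smith", "attending_physician", "patient-0042")
      access_record("b.jones", "billing_clerk", "patient-0042")
      for entry in audit_log:
          print(entry.granted, entry.user, entry.record_id)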

Algorithmic Fairness and Bias Mitigation

  1. Historical Context

    In the mid-2000s, as computational power and large Medicare claims datasets became more accessible, predictive analytics in healthcare began shifting from purely statistical methods to more complex machine learning models. ML models were developed to predict patient frailty, identify fraud and abuse, and forecast patient outcomes such as hospital readmission or mortality (Obermeyer & Emanuel, 2016; Raghupathi & Raghupathi, 2014).

    However, researchers found that certain Medicare data, reflecting decades of social inequality, could lead to predictive models that inadvertently disadvantaged some patients. For example, models predicting healthcare utilization might assign lower risk scores to communities with historically reduced access to care, not because they were healthier, but because they had fewer recorded encounters with the health system (Obermeyer et al., 2019).

    Early mitigation attempts focused primarily on “fairness through awareness”—identifying and documenting biases. Health services researchers and policymakers began calling for the inclusion of demographic and social determinants of health data to correct for skewed historical patterns (Rajkomar et al., 2018). Some efforts were made to reweight training samples or stratify predictions by race, ethnicity, or income to detect differential performance.
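
    To make the last point concrete, the sketch below shows one way differential performance could be checked across groups, together with simple inverse-frequency weights for reweighting training samples. The column names and grouping variable are assumptions for the example, not a description of any deployed pipeline.

      import pandas as pd

      def stratified_error_rates(df: pd.DataFrame, group_col: str,
                                 label_col: str, pred_col: str) -> pd.DataFrame:
          """Report false-negative and false-positive rates per demographic group."""
          rows = []
          for group, g in df.groupby(group_col):
              pos, neg = g[g[label_col] == 1], g[g[label_col] == 0]
              rows.append({
                  "group": group,
                  "n": len(g),
                  "fnr": float((pos[pred_col] == 0).mean()) if len(pos) else float("nan"),
                  "fpr": float((neg[pred_col] == 1).mean()) if len(neg) else float("nan"),
              })
          return pd.DataFrame(rows)

      def inverse_frequency_weights(df: pd.DataFrame, group_col: str) -> pd.Series:
          """Give each group the same total weight during model training."""
          counts = df[group_col].value_counts()
          return df[group_col].map(lambda g: len(df) / (len(counts) * counts[g]))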

  2. Philosophical Considerations

    John Rawls’ veil of ignorance, and his principles of justice more generally, encourage designing systems that benefit all segments of society fairly, without bias toward any particular group (Rawls, 1999). Additionally, Nussbaum and Sen’s capabilities approach suggests that technologies should expand human capabilities and agency (health, longevity, quality of life), especially for marginalized communities (Robeyns, 2020).[1]

    Notably, the AI system should also consider Kimberlé Crenshaw’s theory of intersectionality when addressing fairness in healthcare disparities (Crenshaw, 1991).

  3. Contemporary Implications

    Modern scholarship in data ethics (Noble, 2018) and public health frameworks stress the importance of addressing algorithmic bias. Bias in training data, including the under-representation of smaller rural communities, Indigenous populations, and minority groups (Crawford, 2021), can disproportionately harm the very people the data fails to capture.

  4. Guidelines

    • Rigorous bias audits of training datasets (a small sketch follows this list).
    • Engaging local communities (e.g., Northern Ontario Indigenous communities, diverse communities in Hamilton) in the development and testing phases.
    • Regularly updating and retraining models on more representative datasets.
    • Incorporating Kimberlé Crenshaw’s intersectionality framework to ensure that multiple axes of identity (e.g., Indigenous identity, rural location, age, disability) are considered.
    • Continual monitoring and transparent reporting on equity metrics over time.
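
    As a small sketch of what a dataset-representation audit might look like, the snippet below compares each group’s share of the training data with its share of the served population. The group labels and population shares are placeholders, not actual Ontario figures.

      import pandas as pd

      def representation_gap(train: pd.DataFrame, group_col: str,
                             population_share: dict[str, float]) -> pd.DataFrame:
          """Flag groups whose share of the training data falls short of their
          share of the population the system is meant to serve."""
          observed = train[group_col].value_counts(normalize=True)
          rows = []
          for group, expected in population_share.items():
              got = float(observed.get(group, 0.0))
              rows.append({"group": group, "train_share": got,
                           "population_share": expected,
                           "ratio": got / expected if expected else float("nan")})
          return pd.DataFrame(rows).sort_values("ratio")

      # Example with made-up data: rural-north and remote patients are
      # under-represented relative to their assumed population share.
      train = pd.DataFrame({"region": ["urban"] * 90 + ["rural_north"] * 8 + ["remote"] * 2})
      print(representation_gap(train, "region",
                               {"urban": 0.70, "rural_north": 0.20, "remote": 0.10}))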

Interpretability and Transparency

  1. Historical Context

    During the 1970s and 1980s, some of the earliest applications of AI were expert systems designed to replicate the decision-making abilities of human specialists—most notably in the medical domain (Haugeland, 1997). One of the pioneering systems, MYCIN, developed at Stanford University in the 1970s, diagnosed and recommended treatments for blood infections (Shortliffe, 1974). MYCIN’s developers recognized the importance of justifying recommendations, implementing what were known as “rule traces” to explain the system’s reasoning in human-understandable terms. Although these explanations were rudimentary, they established the principle that AI systems, especially those used in high-stakes domains like healthcare, should provide comprehensible justifications.
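
    As a toy illustration of the idea behind rule traces (not MYCIN’s actual rule syntax or knowledge base), the sketch below forward-chains a few invented rules and records, for each rule that fires, why it fired.

      # Invented rules of the form (name, conditions, conclusion).
      rules = [
          ("R1", {"gram_negative", "rod_shaped"}, "organism_may_be_e_coli"),
          ("R2", {"organism_may_be_e_coli", "patient_febrile"}, "suspect_bacteremia"),
          ("R3", {"suspect_bacteremia"}, "recommend_broad_spectrum_antibiotic"),
      ]

      def forward_chain(facts: set[str]) -> tuple[set[str], list[str]]:
          """Apply rules until no new facts appear, keeping a readable trace."""
          trace = []
          changed = True
          while changed:
              changed = False
              for name, conditions, conclusion in rules:
                  if conditions <= facts and conclusion not in facts:
                      facts.add(conclusion)
                      trace.append(f"{name}: because {sorted(conditions)}, conclude {conclusion}")
                      changed = True
          return facts, trace

      facts, trace = forward_chain({"gram_negative", "rod_shaped", "patient_febrile"})
      print("\n".join(trace))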

  2. Philosophical Considerations

    Hans-Georg Gadamer’s work on hermeneutics highlights the importance of interpretation and understanding in human communication, including the relationship between patient, physician, and medical knowledge (Gadamer, 1977). Minimizing the opacity of AI models aligns with respecting patient autonomy and informed consent, as patients should understand how their health data influences recommendations.

  3. Contemporary Implications

    The AI system must provide user-friendly explanations of AI-driven recommendations. Rudin argued that for high-stakes decisions it is not merely desirable but often morally imperative to use interpretable models rather than post-hoc explanations of black boxes (Rudin, 2019). The system must therefore be built on transparent algorithms. Additionally, Floridi suggests a unified principle whereby we must “[incorporate] both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and in the ethical sense of accountability (as an answer to the question: ‘who is responsible for the way it works?’)” in building the AI system (Floridi, 2019).
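
    A minimal sketch of what an inherently interpretable model can look like follows: a sparse logistic regression whose coefficients can be read directly as the explanation, rather than a black box explained after the fact. The feature names, data, and scikit-learn usage are assumptions for illustration only.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      # Hypothetical features and readmission labels, invented for the sketch.
      features = ["age_over_65", "prior_readmission", "hba1c_elevated", "lives_rural"]
      X = np.array([[1, 0, 1, 0],
                    [0, 1, 1, 1],
                    [1, 1, 0, 0],
                    [0, 0, 0, 1],
                    [1, 1, 1, 0],
                    [0, 0, 1, 1]])
      y = np.array([1, 1, 1, 0, 1, 0])

      # L1 regularization keeps the model sparse and therefore easier to read.
      model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)

      # The "explanation" is the model itself: each coefficient states how much
      # a factor pushes the predicted risk up or down.
      for name, coef in zip(features, model.coef_[0]):
          print(f"{name:>18}: {coef:+.2f}")
      print(f"{'intercept':>18}: {model.intercept_[0]:+.2f}")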

    In Ontario and across Canada, healthcare data falls under stringent privacy and confidentiality laws. PHIPA in Ontario and PIPEDA at the federal level mandate careful stewardship of personal health information. While these laws do not explicitly require explainable AI, their emphasis on accountability and trust indirectly encourages the use of interpretable models in AI systems.

    The emerging Artificial Intelligence and Data Act (AIDA), proposed under Bill C-27 at the federal level, signals Canada’s intention to regulate high-impact AI systems. This trajectory suggests that future regulatory frameworks may explicitly require automated decision-making tools such as our AI system, particularly in healthcare, to provide understandable rationales for their outputs (Innovation, Science and Economic Development Canada, 2024).

  4. Guidelines

    • Prefer inherently interpretable models over post-hoc explanations of black boxes for high-stakes decisions (Rudin, 2019).
    • Provide user-friendly, plain-language explanations of AI-driven recommendations to patients and clinicians.
    • Document both how the system works and who is responsible for the way it works (Floridi, 2019).
    • Align explanation and record-keeping practices with PHIPA and PIPEDA, and monitor the evolving requirements of AIDA (Bill C-27).

Bibliography

  • Cammarata, N., Olah, C., Schubert, L., Goh, G., Petrov, M., & Carter, S. (2020). Thread: Circuits. Distill.
  • Cavoukian, A. (2009). Privacy by Design: The 7 Foundational Principles.
  • Crawford, K. (2021). The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. Yale University Press. http://www.jstor.org/stable/j.ctv1ghv45t
  • Crenshaw, K. (1991). Mapping the Margins: Intersectionality, Identity Politics, and Violence against Women of Color. Stanford Law Review, 43(6), 1241–1299.
  • Edelstein, L. (1943). The Hippocratic Oath. The Johns Hopkins press.
  • Floridi, L. (2019). A Unified Framework of Five Principles for AI in Society. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3831321
  • Gadamer, H.-G. (1977). Philosophical Hermeneutics (D. E. Linge, Ed.; p. 243). University of California Press.
  • Haugeland, J. (1997). Mind Design II: Philosophy, Psychology, and Artificial Intelligence. The MIT Press. https://doi.org/10.7551/mitpress/4626.001.0001
  • Information and Privacy Commissioner of Ontario. (2004). A Guide to the Personal Health Information Protection Act [Technical Report].
  • Innovation, Science and Economic Development Canada. (2024). The Artificial Intelligence and Data Act (AIDA) - Companion Document [Policy Document]. Government of Canada.
  • Kant, I. (1785). Groundwork for the Metaphysics of Morals (T. E. Hill & A. Zweig, Eds.). Oxford University Press.
  • Lundberg, S. M., & Lee, S.-I. (2017). A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems, 30, 4765–4774.
  • Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model Cards for Model Reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229.
  • Nanda, N. (2023). Concrete Steps to Get Started in Transformer Mechanistic Interpretability. https://www.neelnanda.io/mechanistic-interpretability/getting-started
  • Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism (p. 229). New York University Press.
  • Obermeyer, Z., & Emanuel, E. J. (2016). Predicting the Future — Big Data, Machine Learning, and Clinical Medicine. New England Journal of Medicine, 375(13), 1216–1219. https://doi.org/10.1056/NEJMp1606181
  • Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342
  • Office of the Privacy Commissioner of Canada. (2024). PIPEDA Requirements in Brief [Guidance Document]. Office of the Privacy Commissioner of Canada.
  • Pozdniakov, S., Brazil, J., Abdi, S., Bakharia, A., Sadiq, S., Gasevic, D., Denny, P., & Khosravi, H. (2024). Large Language Models Meet User Interfaces: The Case of Provisioning Feedback. arXiv preprint arXiv:2404.11072.
  • Raghupathi, W., & Raghupathi, V. (2014). Creating value in health care through big data: opportunities and policy implications. Health Information Science and Systems, 2(1), 3. https://doi.org/10.1186/2047-2501-2-3
  • Rajkomar, A., Hardt, M., Howell, M. D., Corrado, G. S., & Chin, M. H. (2018). Ensuring Fairness in Machine Learning to Advance Health Equity. Annals of Internal Medicine, 169(12), 866–872. https://doi.org/10.7326/M18-1990
  • Rawls, J. (1999). A Theory of Justice (Revised). Belknap Press of Harvard University Press.
  • Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. https://doi.org/10.1145/2939672.2939778
  • Robeyns, I. (2020). The Capability Approach. Stanford Encyclopedia of Philosophy.
  • Rudin, C. (2019). Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead. Nature Machine Intelligence, 1(5), 206–215.
  • Shortliffe, E. H. (1974). MYCIN: A Rule-Based Computer Program for Advising Physicians Regarding Antimicrobial Therapy Selection (Technical Report STAN-CS-74-465). Stanford University.
  • Zafar, M. R., & Khan, N. M. (2019). DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems. arXiv preprint arXiv:1906.10263.
  • Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.

Notes

  1. Large language model systems are poised to revolutionize ethnography by fundamentally altering how researchers conduct their work. In a sense, these systems should amplify our work rather than act as a replacement for it. Even though these systems exhibit emergent behaviour that resembles intelligence, we do not consider them artificial general intelligence (AGI); the apparent intelligence may partly reflect the observer-expectancy effect.