For a company moving as quickly as possible to build artificial intelligence into everything — including health care — Microsoft spends a lot of time talking about how to regulate it.
The tech giant has helped organize four separate coalitions to devise guidelines and technical standards for AI in health care. It supplies these groups — composed of health systems, government regulators, and other health businesses — with top executives to serve on steering committees, technical assistance, money for membership dues, and press releases trumpeting the need to root out bias and ramp up oversight.
The company’s full-throated advocacy of responsible AI, in turn, gives it something even more precious: the opportunity to influence testing standards and regulations that will determine how thoroughly its own technology is vetted in health care, as well as how difficult and costly it will be for competitors to measure up.