Q&A: Datavant on AI hype and stock market volatility

Dan Walsh, chief information security officer of Datavant, sat down with MobiHealthNews in person to discuss AI and how its evolution compares to other tech.
By Jessica Hagen, Executive Editor
Dan Walsh, chief information security officer of Datavant

Photo courtesy of Datavant

LOS ANGELES – Dan Walsh, chief information security officer at Datavant, sat down with MobiHealthNews for an in-person interview today to discuss the potential of an AI bubble and the stock market's reaction to the evolution of AI.

MobiHealthNews: Datavant is a big player in the data sharing space, using AI – specifically machine learning and natural language processing. Do you think we're in an AI bubble?

Dan Walsh: I don't know that we're in an AI bubble. I think the amount of AI that ends up being normalized, when that time comes, will probably shift and settle in certain places. But I do think AI is going to fundamentally change the way we think about healthcare connectivity, and even beyond healthcare, society in general.

MHN: Do you think that AI is overhyped right now?

Walsh: Probably. There is certainly more hype, and right now we're calling everything AI. So, is it LLMs? Is it agentic AI? Are we enhancing regular SaaS products with AI summarization capabilities? What specifically is the AI we're talking about here? I think we're lumping everything into the same bucket, which is causing a lot of the confusion and the feeling that there's a lot of hype out there.

MHN: There was a pretty large dip in the stock market recently when Anthropic released its newest version of Claude. Do you think investors are anticipating that AI will replace a lot of software companies?

Walsh: So, I'll give you a story. In 2004, Bill Gates said that phishing would be eliminated in two years. And here we are, 22 years later, and now we're worried about AI making phishing impossible for tools and people to detect.

So, I think, again, like with everything else, it is going to shift. I know Anthropic has this Claude code scanner that caused a dip in the stock market, and I think it's very good at scanning code. So I think application security platforms and security platforms are going to shift, and it will remove a lot of superficial capabilities from the market. But again, what Anthropic released on Friday cannot replace CrowdStrike. It cannot replace these platforms. It's a scanning capability. So, hopefully it shifts, and I hate to use the term "shift left," but hopefully it creates fewer vulnerabilities from the start, when we're generating that code, as opposed to catching them downstream.

MHN: So in your view, investors need to start seeing it more as a tool that's going to help these companies, not necessarily eliminate them.

Walsh: 100%.

MHN: How is Datavant ensuring longevity within healthcare with the AI hype?

Walsh: At the end of the day, the fundamentals of healthcare really matter. And so when we're introducing things like AI, whether it's something that we would consider developing in-house, or whether it would be something we were considering partnering with from a third-party or vendor point of view, we need to make sure that we can answer a couple of fundamental questions. One is, what data will this AI have access to? Training on customer data is a no-go for us when it comes to AI.

MHN: Why?

Walsh: Because it really violates the privacy mechanisms that we have in place.

MHN: Even de-identified data?

Walsh: De-identified data would be different. For AI to summarize something administratively, fine. For AI to make a clinical determination, hard no-go.

And the really simple way to break it down is we want to think about our use of AI as a patient safety issue. It's not really a cybersecurity issue; it's a patient safety issue. So, if we can ask the simple questions about whether it's going to make patient care delivery safer and enable that, then it's something we would want to consider. And if it can't, then that would be something where we say, not at this time. Maybe it's not mature enough, or maybe it will never get there.

I mean, we saw this with cloud computing 20 years ago. Why did those big cloud breaches occur? They occurred because identity was over-sprawled, there were misconfigurations, and people forgot there was access to data somewhere. Those same questions, those same gaps are emerging now as problematic for AI. So, we need to make sure we have the same governance mechanisms in place to ensure we don't repeat those mistakes and have those same breaches.