
Tag: AI

The Times They Are A-Changing…Fast

By KIM BELLARD

If you have been following my Twitter – oops, I mean “X” – feed lately, you may have noticed that I’ve been emphasizing The Coming Wave, the new book from Mustafa Suleyman (with Michael Bhaskar). If you have not yet read it, or at least ordered it, I urge you to do so, because, frankly, our lives are not going to be the same, at all.  And we’re woefully unprepared.

One thing I especially appreciated is that, although he made his reputation in artificial intelligence, Mr. Suleyman doesn’t only focus on AI. He also discusses synthetic biology, quantum computing, robotics, and new energy technologies as ones that stand to radically change our lives.  What they have in common is that they have hugely asymmetric impacts, they display hyper-evolution, they are often omni-use, and they increasingly demonstrate autonomy. 

In other words, these technologies can do things we didn’t know they could do, have impacts we didn’t expect (and may not want), and may decide what to do on their own.  

To build an AI, for the near future one needs a significant amount of computing power, using specialized chips and a large amount of data, but with synthetic biology, the technology is getting to the point where someone can set up a lab in their garage and experiment away.  AI can spread rapidly, but it needs a connected device; engineered organisms can get anywhere there is air or water.

“A pandemic virus synthesized anywhere will spread everywhere,” MIT’s Kevin Esvelt told Axios.

I’ve been fascinated with synthetic biology for some time now, and yet I still think we’re not paying enough attention. “For me, the most exciting thing about synthetic biology is finding or seeing unique ways that living organisms can solve a problem,” David Riglar, Sir Henry Dale research fellow at Imperial College London, told The Scientist. “This offers us opportunities to do things that would otherwise be impossible with non-living alternatives.”

Jim Collins, Termeer professor of medical engineering and science at Massachusetts Institute of Technology (MIT), added: “By approaching biology as an engineering discipline, we are now beginning to create programmable medicines and diagnostic tools with the ability to sense and dynamically respond to information in our bodies.”

For example, researchers just reported on a smart pill — the size of a blueberry! — that can be used to automatically detect key biological molecules in the gut that suggest problems, and wirelessly transmit the information in real time. 

Continue reading…

Shiv Rao, CEO demos Abridge

Abridge has been trying to document the clinical encounter automatically since 2018. There’s been quite a lot of fuss about them in recent weeks. They announced becoming the first “Pal” in the Epic “Partners & Pals” program, and also that their AI-based encounter capture technology was now being used at several hospitals. And they showed up in a NY Times article about tech being used for clinical documentation. But of course they’re not the only company trying to turn the messy speech in a clinician/patient encounter into a buttoned-up clinical note. Suki, Augmedix & Robin all come to mind, while the elephant is Nuance, which has itself been swallowed by the whale that is Microsoft.

But having used their consumer version a few years back and been a little disappointed, I wanted to see what all the fuss was about. CEO Shiv Rao was a real sport and took me through a clinical example with him as the doc and me as a (slightly) fictionalized patient. He also patiently explained where the company was coming from and what their road map was. But they are all in on AI: no offshore typists trying to correct in close to real time here.

And you’ll for sure want to see the demo. (If you want to skip the chat it’s about 8.00 to 16.50). And I think you’ll be very impressed indeed. I know I was. I can’t imagine a doctor not wanting this, and I suspect those armies of scribes will soon be able to go back to real work! — Matthew Holt

Smells like AI Spirit

By KIM BELLARD

There are so many exciting developments in artificial intelligence (AI) these days that one almost becomes numb to them. Then along comes something that makes me think, hmm, I didn’t see that coming.

For example, AI can now smell.

Strictly speaking, that’s not quite true, at least not in the way humans and other creatures smell.  There’s no olfactory organ, like our nose or a snake’s tongue. What AI has been trained to do is to look at a molecular structure and predict what it would smell like.

If you’re wondering (as I certainly did when I heard AI could smell), AI has also started to crack taste as well, with food and beverage companies already using AI to help develop new flavors, among other things. AI can even reportedly “taste wine” with 95% accuracy. It seems human senses really aren’t as human-only as we’d thought.

The new research comes from the Monell Chemical Senses Center and Osmo, a Google spin-off. It’s a logical pairing since Monell’s mission is “to improve health and well-being by advancing the scientific understanding of taste, smell, and related senses,” and Osmo seeks to give “computers a sense of smell.” More importantly, Osmo’s goal in doing that is: “Digitizing smell to give everyone a goal at a better life.”

Osmo CEO Alex Wiltschko, PhD, says: “Computers have been able to digitize vision and hearing, but not smell – our deepest and oldest sense.” It’s easy to understand how vision and hearing can be translated into electrical and, ultimately, digital signals; we’ve been doing that for some time. Smell (and taste) seem somehow different; they seem chemical, not electrical, much less digital. But the Osmo team believes: “In this new era, computers will generate smells like we generate images and sounds today.”

I’m not sure I can yet imagine what that would be like.

The research team used an industry dataset of 5,000 known odorants and matched molecular structures to perceived scents, creating what Osmo calls the Principal Odor Map (POM). This model was then used to train the AI. Once trained, the AI outperformed humans in identifying new odors.

The model depends on the correlation between the molecules and the smells perceived by the study’s panelists, who were trained to recognize 55 odors. “Our confidence in this model can only be as good as our confidence in the data we used to test it,” said co-first author Emily Mayhew, PhD. Senior co-author Joel Mainland, PhD, admitted: “The tricky thing about talking about how the model is doing is we have no objective truth.”
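To make the structure-to-scent idea concrete, here is a minimal, purely illustrative sketch. Every molecule, structural feature, and odor label below is invented, and a simple nearest-neighbor lookup stands in for the far more sophisticated model the researchers actually trained on their 5,000-odorant dataset; the point is only the shape of the task — map structural features to human-assigned odor descriptors, then predict descriptors for a novel structure.

```python
# Toy sketch: predicting odor labels from molecular "fingerprints".
# All fingerprints and labels are hypothetical, for illustration only.

def jaccard(a, b):
    """Similarity between two sets of structural features."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Invented training data: structural features -> perceived odor descriptors.
TRAINING = [
    ({"ester", "short_chain"}, {"fruity", "sweet"}),
    ({"aromatic_ring", "aldehyde"}, {"almond"}),
    ({"thiol"}, {"sulfurous", "pungent"}),
    ({"ester", "aromatic_ring"}, {"fruity", "floral"}),
]

def predict_odor(features, k=2):
    """Predict odor descriptors for a novel molecule by pooling the labels
    of its k most structurally similar training molecules."""
    ranked = sorted(TRAINING, key=lambda t: jaccard(features, t[0]), reverse=True)
    labels = set()
    for feats, odors in ranked[:k]:
        labels |= odors
    return labels

# A novel molecule sharing features with the fruity examples
# comes back with fruity-adjacent descriptors.
print(predict_odor({"ester", "short_chain", "aromatic_ring"}))
```

The real achievement, of course, is that the trained model generalizes to molecules whose smell no panelist has rated, whereas a lookup like this can only interpolate between known examples.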

The study resulted in a different way to think about smell. The Monell Center says:

The team surmises that the model map may be organized based on metabolism, which would be a fundamental shift in how scientists think about odors. In other words, odors that are close to each other on the map, or perceptually similar, are also more likely to be metabolically related. Sensory scientists currently organize molecules the way a chemist would, for example, asking does it have an ester or an aromatic ring?

“Our brains don’t organize odors in this way,” said Dr. Mainland. “Instead, this map suggests that our brains may organize odors according to the nutrients from which they derive.”

“This paper is a milestone in predicting scent from chemical structure of odorants,” Michael Schmuker, a professor of neural computation at the University of Hertfordshire who was not involved in the study, told IEEE Spectrum.  It might, he says, lead to possibilities like sharing smells over the Internet. 

Think about that. 

“We hope this map will be useful to researchers in chemistry, olfactory neuroscience, and psychophysics as a new tool for investigating the nature of olfactory sensation,” said Dr. Mainland. He further noted: “The most surprising result, however, is that the model succeeded at olfactory tasks it was not trained to do. The eye-opener was that we never trained it to learn odor strength, but it could nonetheless make accurate predictions.”

Next up on the team’s agenda is to see if the AI can learn to recognize mixtures of odors, which exponentially increases the number of resulting smells. Osmo also wants to see if AI can predict smells from chemical sensor readings, rather than from molecular structures that have already been digitized. And, “can we digitize a scent in one place and time, and then faithfully replicate it in another?”

That’s a very ambitious agenda.

Dr. Wiltschko claims: “Our model performs over 3x better than the standard scent ingredient discovery process used by major fragrance houses, and is fully automated.” One can imagine how this would be useful to those houses. Osmo wants to work with the fragrance industry to create safer products: “If we can make the fragrances we use every day safer and more potent (so we use less of them), we’ll help the health of everyone, and also the environment.”

When I first read about the study, I immediately thought of how dogs can detect cancers by smell, and how exciting it might be if AI could improve on that. Frankly, I’m not much interested in designing better fragrances; if we’re going to spend money on training AI to recognize molecules, I’d rather it be spent on designing new drugs than new fragrances.

Fortunately, Osmo has much the same idea. Dr. Wiltschko writes:

If we can build on our insights to develop systems capable of replicating what our nose, or what a dog’s nose can do (smell diseases!), we can spot disease early, prevent food waste, capture powerful memories, and more. If computers could do these kinds of things, people would live longer lives – full stop. Digitizing scent could catalyze the transformation of scent from something people see as ephemeral to enduring.   

Now, that’s the kind of innovation that I’m hoping for.

Skeptics will say, well, AI isn’t really smelling anything, it’s just acting as though it does. That is, there’s no perception, just prediction. One could make the same argument about AI taste, or vision, or hearing, not to mention thinking itself. But at some point, as the saying goes, if it looks like a duck, swims like a duck, and quacks like a duck, it’s probably a duck. At some point in the not-so-distant future, AI is going to have senses similar to, and perhaps much better than, our own.

As Dr. Wiltschko hopes: “If computers could do these kinds of things, people would live longer lives – full stop.”

Kim is a former emarketing exec at a major Blues plan, editor of the late & lamented Tincture.io, and now regular THCB contributor.

The Next Pandemic May Be an AI One

By KIM BELLARD

Since the early days of the pandemic, conspiracy theorists have charged that COVID was a manufactured bioweapon, either deliberately leaked or the result of an inadvertent lab leak. There’s been no evidence to support these speculations, but, alas, that is not to say that such bioweapons aren’t truly an existential threat.  And artificial intelligence (AI) may make the threat even worse.

Last week the Department of Defense issued its first ever Biodefense Posture Review. It “recognizes that expanding biological threats, enabled by advances in life sciences and biotechnology, are among the many growing threats to national security that the U.S. military must address.” It goes on to note: “it is a vital interest of the United States to manage the risk of biological incidents, whether naturally occurring, accidental, or deliberate.”

“We face an unprecedented number of complex biological threats,” said Deborah Rosenblum, Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs. “This review outlines significant reforms and lays the foundation for a resilient total force that deters the use of bioweapons, rapidly responds to natural outbreaks, and minimizes the global risk of laboratory accidents.”

And you were worried we had to depend on the CDC and the NIH, especially now that Dr. Fauci is gone.  Never fear: the DoD is on the case.  

A key recommendation is establishment of – big surprise – a new coordinating body, the Biodefense Council. “The Biodefense Posture Review and the Biodefense Council will further enable the Department to deter biological weapons threats and, if needed, to operate in contaminated environments,” said John Plumb, Assistant Secretary of Defense for Space Policy. He adds, “As biological threats become more common and more consequential, the BPR’s reforms will advance our efforts not only to support the Joint Force, but also to strengthen collaboration with allies and partners.”

Which is scarier: that DoD is planning to operate in “contaminated environments,” or that it expects these threats will become “more common and more consequential”? Welcome to the 21st century.

Continue reading…

Can AI Part The Red Sea?

BY MIKE MAGEE

A few weeks ago New York Times columnist Tom Friedman wrote, “We Are Opening The Lid On Two Giant Pandora’s Boxes.” He was referring to 1) artificial intelligence (AI), which most agree has the potential to go horribly wrong unless carefully regulated, and 2) global warming, leading to water-mediated flooding, drought, and vast human and planetary destruction.

Friedman argues that we must accept the risk of pursuing one (rapid fire progress in AI) to potentially uncover a solution to the other. But positioning science as savior quite misses the point that it is human behavior (a combination of greed and willful ignorance), rather than lack of scientific acumen, that has placed our planet and her inhabitants at risk.

The short and long term effects of fossil fuels and carbonization of our environment were well understood before Al Gore took “An Inconvenient Truth” on the road in 2006. So were the confounding factors including population growth, urbanization, and surface water degradation. 

When I first published “Healthy Waters,” the global population was 6.5 billion with 49% urban, mostly situated on coastal plains. It is now 8 billion with 57% urban and slated to reach 8.5 billion by 2030 with 63% urban. 552 cities around the globe now contain populations exceeding 1 million citizens.

Under ideal circumstances, this urban migration could serve our human populations with jobs, clean air and water, transportation, housing and education, health care, safety and security. Without investment, however, it could be a death trap.

Continue reading…

Would You Picket Over AI?

By KIM BELLARD

I’m paying close attention to the strike by the Writers Guild of America (WGA), which represents “Hollywood” writers. Oh, sure, I’m worried about the impact on my viewing habits, and I know the strike is really, as usual, about money, but what got my attention is that it’s the first strike I’m aware of where the impact of AI on jobs is one of the key issues.

It may or may not be the first time, but it’s certainly not going to be the last.

The WGA included this in their demands: “Regulate use of artificial intelligence on MBA-covered projects: AI can’t write or rewrite literary material; can’t be used as source material; and MBA-covered material can’t be used to train AI.” I.e., if something – a script, treatment, outline, or even story idea – warrants a writing credit, it must come from a writer.  A human writer, that is.

John August, a screenwriter who is on the WGA negotiating committee, explained to The New York Times: “A terrible case of like, ‘Oh, I read through your scripts, I didn’t like the scene, so I had ChatGPT rewrite the scene’ — that’s the nightmare scenario.”

The studios, as represented by the Alliance of Motion Picture and Television Producers (AMPTP), agree there is an issue: “AI raises hard, important creative and legal questions for everyone.” It wants both sides to continue to study the issue, but noted that under the current agreement only a human could be considered a writer.

Still, though, we’ve all seen examples of AI generating remarkably plausible content.  “If you have a connection to the internet, you have consumed AI-generated content,” Jonathan Greenglass, a tech investor, told The Washington Post. “It’s already here.”  It’s easy to imagine some producer feeding an AI a bunch of scripts from prior installments to come up with the next Star Wars, Marvel universe, or Fast and Furious release.  Would you really know the difference?

Sure, maybe AI won’t produce a Citizen Kane or The Godfather, but, as Alissa Wilkinson wrote in Vox: “But here is the thing: Cheap imitations of good things are what power the entertainment industry. Audiences have shown themselves more than happy to gobble up the same dreck over and over.” 

Continue reading…

Can we trust ChatGPT to get the basics right?

by MATTHEW HOLT

Eric Topol has a piece in his excellent newsletter Ground Truths today about AI in medicine. He refers to the paper he and colleagues wrote in Nature about Generalist Medical Artificial Intelligence (the medical version of GAI). It’s more on the latest in LLMs (Large Language Models). They differ from previous AI, which was essentially focused on one problem; in medicine that mostly meant radiology. Now, you can feed different types of information in and get lots of different answers.

Eric & colleagues concluded their paper with this statement: “Ultimately, GMAI promises unprecedented possibilities for healthcare, supporting clinicians amid a range of essential tasks, overcoming communication barriers, making high-quality care more widely accessible, and reducing the administrative burden on clinicians to allow them to spend more time with patients.” But he does note that “there are striking liabilities and challenges that have to be dealt with. The “hallucinations” (aka fabrications or BS) are a major issue, along with bias, misinformation, lack of validation in prospective clinical trials, privacy and security and deep concerns about regulatory issues.”

What he’s saying is that there are unexplained errors in LLMs, and therefore we need a human in the loop to make sure the AI isn’t getting stuff wrong. I myself had a striking example of this on a topic that involved nothing more than simple calculations on a well-published set of facts. I asked ChatGPT (3, not 4) about the historical performance of the stock market. Apparently ChatGPT can pass the medical exams to become a doctor, but had it responded with the same level of accuracy about a clinical issue, I would be extremely concerned!

The brief video of my use of ChatGPT for stock market “research” is below:

THCB Spotlight: Glen Tullman, Transcarent & Aneesh Chopra, CareJourney

No THCB Gang today because my kid is in the hospital (minor planned surgery), so instead I am reposting this great interview from last week.

I just got to interview Glen Tullman, CEO of Transcarent (and formerly CEO of Livongo & Allscripts), & Aneesh Chopra, CEO of CareJourney (and formerly CTO of the US). The trigger for the interview is a new partnership between the two companies, but the conversation was really about what’s happening with health care in the US, including how the customer experience needs to change, what level of data and information is available about providers and how that is changing, how AI is going to change data analytics, and what is actually happening with Medicare Advantage. This is a fascinating discussion with two real leaders in health and health tech. — Matthew Holt

AI: Not Ready, Not Set – Go!

By KIM BELLARD

I feel like I’ve written about AI a lot lately, but there’s so much happening in the field. I can’t keep up with the various leading entrants or their impressive successes, but three essays on the implications of what we’re seeing struck me: Bill Gates’ The Age of AI Has Begun, Thomas Friedman’s Our New Promethean Moment, and You Can Have the Blue Pill or the Red Pill, and We’re Out of Blue Pills by Yuval Harari, Tristan Harris, and Aza Raskin.  All three essays speculate that we’re at one of the big technological turning points in human history.

We’re not ready.

The subtitle of Mr. Gates’ piece states: “Artificial intelligence is as revolutionary as mobile phones and the Internet.” Similarly, Mr. Friedman recounts what former Microsoft executive Craig Mundie recently told him: “You need to understand, this is going to change everything about how we do everything. I think that it represents mankind’s greatest invention to date. It is qualitatively different — and it will be transformational.”    

Mr. Gates elaborates:

The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.

Mr. Friedman is similarly awed:

This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.

Professor Harari and colleagues are more worried than awed, warning: “A.I. could rapidly eat the whole of human culture — everything we have produced over thousands of years — digest it and begin to gush out a flood of new cultural artifacts.”  Transformational isn’t always beneficial.

Continue reading…

Searching For The Next Search

By KIM BELLARD

I didn’t write about ChatGPT when it was first introduced a month ago because, well, it seemed like everyone else was. I didn’t play with it to see what it could do.  I didn’t want it to write any poems. I didn’t have any AP tests I wanted it to pass. And, for all you know, I’m not using it to write this. But when The New York Times reports that Google sees ChatGPT as a “Code Red” for its search business, that got my attention.

A few months ago I wrote about how Google saw TikTok as an existential threat to its business, estimating that 40% of young people used it for searches. It was a different kind of search, mind you, with video results instead of links, but that’s what made it scary – because it didn’t just incrementally improve “traditional” search, as Google had done to Lycos or AltaVista, it potentially changed what “search” was.

TikTok may well still do that (although it is facing existential issues of its own), but ChatGPT could pose an even greater threat. Why get a bunch of search results that you still have to investigate when you could just ask ChatGPT to tell you exactly what you want to know?

Look, I like Google as much as anyone, but the prospect that its massive dominance of the search engine market could, in the near future, suddenly come to an end gives me hope for healthcare.  If Google isn’t safe in search, no company is safe in any industry, healthcare included.

Continue reading…