
Tag: ChatGPT

AI Cognition – The Next Nut To Crack

By MIKE MAGEE

OpenAI says its new GPT-4o is “a step towards much more natural human-computer interaction,” and is capable of responding to your inquiry “with an average 320 millisecond (delay) which is similar to a human response time.” So it can speak human, but can it think human?

The “concept of cognition” has been a scholarly football for the past two decades, centered primarily on “Darwin’s claim that other species share the same ‘mental powers’ as humans, but to different degrees.” But how about genAI powered machines? Do they think?

The first academician to attempt to define the word “cognition” was Ulric Neisser, in the first-ever textbook of cognitive psychology, in 1967. He wrote that “the term ‘cognition’ refers to all the processes by which the sensory input is transformed, reduced, elaborated, stored, recovered, and used. It is concerned with these processes even when they operate in the absence of relevant stimulation…”

The word cognition is derived from “Latin cognoscere ‘to get to know, recognize,’ from assimilated form of com ‘together’ + gnoscere ‘to know’ …”

Knowledge and recognition would not seem to be highly charged terms. And yet, in the years following Neisser’s publication there has been a progressively intense, and sometimes heated debate between psychologists and neuroscientists over the definition of cognition.

The focal point of the disagreement has (until recently) revolved around whether the behaviors observed in non-human species are “cognitive” in the human sense of the word. The discourse in recent years had bled over into the fringes to include the belief by some that plants “think” even though they are not in possession of a nervous system, or the belief that ants communicating with each other in a colony are an example of “distributed cognition.”

What scholars in the field do seem to agree on is that no suitable definition for cognition exists that will satisfy all. But most agree that the term encompasses “thinking, reasoning, perceiving, imagining, and remembering.” Tim Bayne PhD, a Melbourne-based professor of Philosophy, adds that these various qualities must be able to be “systematically recombined with each other,” and not simply triggered by some provocative stimulus.

Allen Newell PhD, a professor of computer science at Carnegie Mellon, sought to bridge the cognition gap between human and machine when he published a 1958 paper that proposed “a description of a theory of problem-solving in terms of information processes amenable for use in a digital computer.”

Machines have a leg up in the company of some evolutionary biologists who believe that true cognition involves acquiring new information from various sources and combining it in new and unique ways.

Developmental psychologists carry their own unique insights from observing and studying the evolution of cognition in young children. What exactly is evolving in their young minds, and how does it differ, but eventually lead to adult cognition? And what about the explosion of screen time?

Pediatric researchers, confronted with AI-obsessed youngsters and worried parents, are coming at it from the opposite direction. With 95% of 13- to 17-year-olds now using social media platforms, machines are a developmental force, according to the American Academy of Child and Adolescent Psychiatry. The machine has risen in status and influence from a sideline assistant coach to an on-field teammate.

Scholars admit “It is unclear at what point a child may be developmentally ready to engage with these machines.” At the same time, they are forced to admit that the technological tidal waves leave few alternatives. “Conversely, it is likely that completely shielding children from these technologies may stunt their readiness for a technological world.”

Bence P. Ölveczky, an evolutionary biologist at Harvard, is pretty certain about what cognition is and is not. He says it “requires learning; isn’t a reflex; depends on internally generated brain dynamics; needs access to stored models and relationships; and relies on spatial maps.”

Thomas Suddendorf PhD, a research psychologist from New Zealand, who specializes in early childhood and animal cognition, takes a more fluid and nuanced approach. He says, “Cognitive psychology distinguishes intentional and unintentional, conscious and unconscious, effortful and automatic, slow and fast processes (for example), and humans deploy these in diverse domains from foresight to communication, and from theory-of-mind to morality.”

Perhaps the last word on this should go to Descartes. He believed that humans’ mastery of thoughts and feelings separated them from animals, which he considered to be “mere machines.”

Were he with us today, witnessing generative AI’s insatiable appetite for data, its hidden recesses of learning, the speed and power of its insurgency, and human uncertainty about how to turn the thing off, perhaps his judgment of these machines would be less disparaging; more akin to that of Mira Murati, OpenAI’s chief technology officer, who announced with some degree of understatement this month, “We are looking at the future of the interaction between ourselves and machines.”

Mike Magee MD is a Medical Historian and regular contributor to THCB. He is the author of CODE BLUE: Inside the Medical Industrial Complex (Grove/2020)

The 7 Decade History of ChatGPT

By MIKE MAGEE

Over the past year, the general popularization of AI, or Artificial Intelligence, has captured the world’s imagination. Of course, academicians often emphasize historical context. But entrepreneurs tend to agree with Thomas Jefferson, who said, “I like dreams of the future better than the history of the past.”

This particular dream however is all about language, its standing and significance in human society. Throughout history, language has been a species accelerant, a secret power that has allowed us to dominate and rise quickly (for better or worse) to the position of “masters of the universe.”

Well before ChatGPT became a household phrase, there was LDT, or the laryngeal descent theory. It held that humans’ unique capacity for speech was the result of a voice box, or larynx, that sits lower in the throat than in other primates. This permitted the “throat shape, and motor control” to produce the vowels that are the cornerstone of human speech. Speech – and therefore the arrival of language – was pegged to anatomical evolutionary changes dated at between 200,000 and 300,000 years ago.

That theory, as it turns out, had very little scientific evidence. And in 2019, a landmark study set about pushing the date of primate vocalization back to at least 3 to 5 million years ago. As scientists summarized it in three points: “First, even among primates, laryngeal descent is not uniquely human. Second, laryngeal descent is not required to produce contrasting formant patterns in vocalizations. Third, living nonhuman primates produce vocalizations with contrasting formant patterns.”

Language and speech in the academic world are complex fields that go beyond paleoanthropology and primatology. If you want to study speech science, you had better have a working knowledge of “phonetics, anatomy, acoustics and human development,” say the experts. You could add to this “syntax, lexicon, gesture, phonological representations, syllabic organization, speech perception, and neuromuscular control.”

Professor Paul Pettitt, who makes a living at the University of Oxford interpreting ancient rock paintings in Africa and beyond, sees the birth of civilization in multimodal language terms. He says, “There is now a great deal of support for the notion that symbolic creativity was part of our cognitive repertoire as we began dispersing from Africa.” Google CEO Sundar Pichai maintains a similarly expansive view when it comes to language. In his December 6, 2023, introduction of Google’s groundbreaking LLM (large language model), Gemini (a competitor of ChatGPT), he described the new product as “our largest and most capable AI model with natural image, audio and video understanding and mathematical reasoning.”

Continue reading…

Altman, Ive, and AI

BY KIM BELLARD

Earlier this year I urged that we Throw Away That Phone, arguing that the era of the smartphone should be over and that we should get on to the next big thing. Now, I don’t have any reason to think that either Sam Altman, CEO of OpenAI, or Jony Ive, formerly and famously of Apple and now head of design firm LoveFrom, read my article, but apparently they have the same idea.

Last week The Information and then the Financial Times reported that OpenAI and LoveFrom are “in advanced talks” to form a venture to build the “iPhone of artificial intelligence.” SoftBank may fund the venture with as much as $1 billion. There have been brainstorming sessions, and discussions are said to be “serious,” but a final deal may still be months away. The new venture would draw on talent from all three firms.

Details are scarce, as are comments from any of the three firms, but the FT cites sources who suggest Mr. Altman sees “an opportunity to create a way of interacting with computers that is less reliant on screens,” which is a sentiment I heartily agree with. The Verge similarly had three sources who agreed that the goal is a “more natural and intuitive user experience.”

Continue reading…

The Next Pandemic May Be an AI one

By KIM BELLARD

Since the early days of the pandemic, conspiracy theorists have charged that COVID was a manufactured bioweapon, either deliberately released or the result of an inadvertent lab leak. There’s been no evidence to support these speculations, but, alas, that is not to say that such bioweapons aren’t truly an existential threat. And artificial intelligence (AI) may make the threat even worse.

Last week the Department of Defense issued its first-ever Biodefense Posture Review. It “recognizes that expanding biological threats, enabled by advances in life sciences and biotechnology, are among the many growing threats to national security that the U.S. military must address.” It goes on to note: “it is a vital interest of the United States to manage the risk of biological incidents, whether naturally occurring, accidental, or deliberate.”

“We face an unprecedented number of complex biological threats,” said Deborah Rosenblum, Assistant Secretary of Defense for Nuclear, Chemical, and Biological Defense Programs. “This review outlines significant reforms and lays the foundation for a resilient total force that deters the use of bioweapons, rapidly responds to natural outbreaks, and minimizes the global risk of laboratory accidents.”

And you were worried we had to depend on the CDC and the NIH, especially now that Dr. Fauci is gone.  Never fear: the DoD is on the case.  

A key recommendation is establishment of – big surprise – a new coordinating body, the Biodefense Council. “The Biodefense Posture Review and the Biodefense Council will further enable the Department to deter biological weapons threats and, if needed, to operate in contaminated environments,” said John Plumb, Assistant Secretary of Defense for Space Policy. He adds, “As biological threats become more common and more consequential, the BPR’s reforms will advance our efforts not only to support the Joint Force, but also to strengthen collaboration with allies and partners.”

Which is scarier: that DoD is planning to operate in “contaminated environments,” or that it expects these threats will become “more common and more consequential”? Welcome to the 21st century.

Continue reading…

Asking Bard And ChatGPT To Find The Best Medical Care, I Got Truth And Truthiness

BY MICHAEL MILLENSON

If you ask ChatGPT how many procedures a certain surgeon does or a specific hospital’s infection rate, the OpenAI and Microsoft chatbot inevitably replies with some version of, “I don’t do that.”

But depending upon how you ask, Google’s Bard provides a very different response, even recommending a “consultation” with particular clinicians.

Bard told me how many knee replacement surgeries were performed by major Chicago hospitals in 2021, their infection rates and the national average. It even told me which Chicago surgeon does the most knee surgeries and his infection rate. When I asked about heart bypass surgery, Bard provided both the mortality rate for some local hospitals and the national average for comparison. While sometimes Bard cited itself as the information source, beginning its response with, “According to my knowledge,” other times it referenced well-known and respected organizations.

There was just one problem. As Google itself warns, “Bard is experimental…so double-check information in Bard’s responses.” When I followed that advice, truth began to blend indistinguishably with “truthiness” – comedian Stephen Colbert’s memorable term to describe information that’s seen as true not because of supporting facts, but because it “feels” true.

Continue reading…

Can we trust ChatGPT to get the basics right?

by MATTHEW HOLT

Eric Topol has a piece today in his excellent newsletter Ground Truths about AI in medicine. He refers to the paper he and colleagues wrote in Nature about Generalist Medical Artificial Intelligence (the medical version of GAI). It’s more on the latest in LLMs (Large Language Models). They differ from previous AI, which was essentially focused on one problem; in medicine that mostly meant radiology. Now, you can feed different types of information in and get lots of different answers.

Eric and colleagues concluded their paper with this statement: “Ultimately, GMAI promises unprecedented possibilities for healthcare, supporting clinicians amid a range of essential tasks, overcoming communication barriers, making high-quality care more widely accessible, and reducing the administrative burden on clinicians to allow them to spend more time with patients.” But he does note that “there are striking liabilities and challenges that have to be dealt with. The ‘hallucinations’ (aka fabrications or BS) are a major issue, along with bias, misinformation, lack of validation in prospective clinical trials, privacy and security and deep concerns about regulatory issues.”

What he’s saying is that there are unexplained errors in LLMs, and therefore we need a human in the loop to make sure the AI isn’t getting stuff wrong. I myself had a striking example of this on a topic that involved simple calculation over a well-published set of facts. I asked ChatGPT (3, not 4) about the historical performance of the stock market. Apparently ChatGPT can pass the medical exams required to become a doctor; but had it responded to a clinical question with the same level of accuracy it showed me, I would be extremely concerned!

The brief video of my use of ChatGPT for stock market “research” is below:

Ultrasound is Ultra-Cool

BY KIM BELLARD

AI continues to amaze – ChatGPT is now passing Wharton Business School exams, Microsoft and Google are doubling down on their AI efforts – and I’m as big a fan as anyone, but I want to talk about a technology that has been more under the radar, so to speak: ultrasound.

Yes, ultrasound.  Most of us have probably had an ultrasound at some point (especially if you’ve been pregnant) and Dr. Eric Topol continues his years-long quest to replace the ancient stethoscope technology with ultrasound, but if you think ultrasound is just another nifty tool in the imaging toolbox, you’ve missed a lot. 

Let’s start with the coolest use I’ve seen: ultrasound can be used for 3D printing.  Inside the body.  

The news on this dates back to last April, when researchers from Concordia University published their findings in Nature (I found out about it last week). Instead of the more common “Additive Manufacturing” (AM) approach to 3D printing, these researchers use Direct Sound Printing (DSP).

The paper summarizes their results: “To show unique future potentials of DSP, applications such as RDP [Remote Distance Printing] for inside body bioprinting and direct nanoparticle synthesizing and pattering by DSP for integrating localized surface plasmon resonance with microfluidics chip are experimentally demonstrated.”

Continue reading…

Searching For The Next Search

By KIM BELLARD

I didn’t write about ChatGPT when it was first introduced a month ago because, well, it seemed like everyone else was. I didn’t play with it to see what it could do.  I didn’t want it to write any poems. I didn’t have any AP tests I wanted it to pass. And, for all you know, I’m not using it to write this. But when The New York Times reports that Google sees ChatGPT as a “Code Red” for its search business, that got my attention.

A few months ago I wrote about how Google saw TikTok as an existential threat to its business, estimating that 40% of young people used it for searches. It was a different kind of search, mind you, with video results instead of links, but that’s what made it scary – because it didn’t just incrementally improve “traditional” search, as Google had done to Lycos or AltaVista, it potentially changed what “search” was.

TikTok may well still do that (although it is facing existential issues of its own), but ChatGPT could pose an even greater threat. Why get a bunch of search results that you still have to investigate when you could just ask ChatGPT to tell you exactly what you want to know?

Look, I like Google as much as anyone, but the prospect that its massive dominance of the search engine market could, in the near future, suddenly come to an end gives me hope for healthcare.  If Google isn’t safe in search, no company is safe in any industry, healthcare included.

Continue reading…

OK, trying more AI to write this

by MATTHEW HOLT

In the far-off halcyon days of last month, I used an AI system from Lex.Page to write something about digital health. In the past few days the world has gone nuts about a new AI author called ChatGPT. So I thought I would compare and contrast.

Last month I asked Lex.page about Digital Health and it said:

Digital Health is one of the most rapidly growing and important sectors in health care today. It is a broad term that encompasses the use of technology to improve health and health care.

Today I asked ChatGPT the same thing. In fact I just wrote “Digital Health” as the Q, and this was the A:

Digital health, also known as telehealth, is the use of digital technologies to deliver healthcare services remotely. This includes the use of video conferencing, mobile health apps, remote patient monitoring, and other digital tools to provide healthcare services to patients. Digital health can be used to improve access to primary care, reduce costs, and improve patient outcomes.

OK, pretty much a longer version of the same thing but with more details. So then I went BACK to Lex.page to ask it, but I had to ask it “What is Digital Health”:

Digital health is a term used to describe the use of technology to improve healthcare. It covers everything from diagnostic tools and monitoring devices to electronic medical records and healthcare analytics. It’s an exciting and rapidly evolving area that has the potential to revolutionize the delivery of healthcare.

So in a month Lex.page has gotten a lot better!

Last month Lex.page said in response to my question/statement, “The biggest challenges for primary care are”

-Access to care

-Cost of care

-Quality of care

So it seems that digital health can help with all of those things. By providing tools that help patients better self-manage their health, digital health can help to improve access to care. By providing tools that help to lower the cost of care, digital health can help to make care more affordable. And by providing tools that help to improve the quality of care, digital health can help to make

I put the same sentence into ChatGPT, and at the bottom it told me: “Warning: Your text ends in a trailing space, which causes worse performance due to how the API splits text into tokens.” But I asked it anyway and it gave me

A: 1. Accessibility: Ensuring that everyone has access to quality primary care services.

I surmised that it had more answers, so I kept hitting submit! It got all the way to 12 answers with no new question!

Q: A: 2. Cost: Finding ways to make primary care services more affordable and accessible to all.

Q: A: 3. Workforce: Recruiting and retaining a sufficient number of primary care providers to meet the needs of the population.

Q: A: 4. Technology: Incorporating new technologies into primary care to improve efficiency and patient outcomes.

Q: A: 5. Coordination: Improving the coordination of care between primary care providers and other healthcare professionals.

Q: A: 6. Quality: Ensuring that primary care services are of the highest quality and meet the needs of patients.

Q: A: 7. Education: Educating patients about the importance of preventive care and the value of primary care services.

Q: A: 8. Mental Health: Integrating mental health services into primary care to better address the needs of patients.

Q: A: 9. Population Health: Developing strategies to improve the health of the population as a whole.

A: 10. Data: Utilizing data to inform decision-making and improve patient outcomes.

A: 11. Payment: Developing new payment models that incentivize quality care and reduce costs.

Q: A: 12. Social Determinants of Health: Addressing the social and environmental factors that influence health outcomes.

OK, so those are all separate statements, but it’s pretty damn impressive. How is Lex.page doing? Well, I asked its chatbot the same statement and it now answered:

The biggest challenges for primary care are likely to be related to access and cost. In many areas of the world, access to quality primary care is limited by geography, poverty, or both. Cost is also an issue, especially in countries with expensive healthcare systems. Additionally, many primary care providers struggle to effectively manage chronic conditions, such as diabetes and hypertension, due to a lack of resources, funding, and staff. Finally, the rapid advancements in technology have presented a challenge in terms of staying up to date with the latest medical developments.

That’s not only pretty smart but it’s quite an advance on the answer it gave me to the same vague question/statement just 31 days ago.
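One practical footnote: the trailing-space warning ChatGPT flagged earlier is easy to avoid. A minimal sketch in Python (a hypothetical helper of my own; no particular API client assumed) of trimming a prompt before submitting it:

```python
def sanitize_prompt(prompt: str) -> str:
    """Strip trailing whitespace before sending a prompt to a
    language-model API. A dangling space can merge awkwardly with
    how the API splits text into tokens and degrade completions,
    which is what the warning was about."""
    return prompt.rstrip()


# The prompt I typed ended in a space, which triggered the warning.
prompt = "The biggest challenges for primary care are "
clean = sanitize_prompt(prompt)
print(repr(clean))  # the trailing space is gone
```

Nothing sophisticated, just a reminder that these interfaces are still sensitive to details as small as a stray space at the end of your question.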

We are seeing a technology emerge right before our eyes.

Presumably the next step is to set it to read some of my favorite pieces on THCB and just have it write for me! Then I will be happily replaced by a robot!