Commentary

AI-Powered Neurotech Could Revolutionize Dementia Care and Unlock Communication

Author: Neal K. Shah, CEO of CareYaya Health Technologies

Neal K. Shah, CEO of CareYaya Health Technologies, discussed how emerging neurotechnology, such as mobile EEG devices paired with customized AI and machine learning models, is poised to transform the fight against dementia.

Neal K. Shah, CEO of CareYaya Health Technologies

Credit: CareYaya Health Technologies

Across the United States, millions of patients with dementia face each day trapped in a silence of their own minds, unable to articulate their thoughts, express feelings, or make their basic needs and desires known to their loved ones. This represents not just a medical challenge, but a profound human crisis.

Traditional pharmaceuticals and medical procedures have made some strides in managing symptoms, yet they fall short in addressing the core communication barriers imposed by neurological disorders such as Alzheimer disease, Parkinson disease, and the aftermath of stroke.

But there's hope on the horizon in the burgeoning field of neurotechnology, or "neurotech."

When Words Fail: The Impact of Neurological Disorders

Dementia, stroke, Parkinson disease – these conditions can rob patients of their ability to communicate through speech or writing, a common and devastating outcome as the brain's functional pathways erode. The toll falls not just on the patient but on their entire support system. Families watch helplessly as their loved ones retreat into an unreachable world, where understanding even simple desires becomes a labyrinth of frustration. This loss extends beyond words to the very essence of personal agency and connection.

However, we are on the cusp of a technological renaissance, driven by recent advances in artificial intelligence and machine learning combined with innovations in neurotech devices.

The Innovative Potential of Neurotech

Neurotech is at the forefront of bridging critical care gaps. Pioneers are blending AI with novel neurotech, crafting tools that may redefine neurological care. Innovations now on the horizon could translate thoughts to communicative intent, promising to restore the voices of those silenced by illness.

Technologies that leverage advanced brain imaging to interpret the neural activity of non-verbal patients are beginning to take shape. These new technologies, a combination of hardware and software innovations, are learning to translate neural activity into communicative intent, with breakthrough potential to give a voice to those who have been silenced. From decoding visual perceptions to interpreting the very words and topics on a person's mind, neurotech is pushing the boundaries of what we thought was possible in neurological care.

Decoding the Mind through AI and Machine Learning

Three groundbreaking research projects are paving the way.

In June 2023, a seminal paper in this field was published entitled "DreamDiffusion: Generating High-Quality Images from Brain EEG Signals," a collaboration between researchers at Tsinghua University and Tencent AI Lab – one of China's leading universities and one of its largest technology companies.

The research demonstrated an ability to generate high-quality images representing a subject's thoughts directly from their electroencephalography (EEG) signals. DreamDiffusion leveraged pretrained text-to-image models and other novel machine learning techniques to overcome the challenges of using EEG signals for image generation, such as noise, limited information, and individual differences.

This breakthrough showed that it is possible to interpret visual cortex activity and reconstruct what a person is seeing or imagining using non-invasive brain scans. Additionally, the team made the source code available in an open GitHub repository, allowing future AI innovators to build on the breakthrough.
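For technically inclined readers, the sketch below (in Python with PyTorch) illustrates the general shape of such a pipeline: a small encoder maps a window of multichannel EEG into conditioning tokens of the same shape a pretrained text-to-image diffusion model expects from its text encoder. The architecture, channel counts, and dimensions here are illustrative assumptions for exposition only, not the DreamDiffusion implementation; the authors' actual code is in their GitHub repository.

```python
# Illustrative sketch only -- not the DreamDiffusion implementation.
# General idea: map multichannel EEG into an embedding whose shape matches
# the conditioning input of a pretrained text-to-image diffusion model.
import torch
import torch.nn as nn

class ToyEEGEncoder(nn.Module):
    """Maps EEG (channels x time) to a sequence of conditioning tokens.

    All sizes below are assumptions chosen for illustration.
    """
    def __init__(self, n_channels=128, n_times=512, n_tokens=77, embed_dim=768):
        super().__init__()
        # Temporal convolution compresses the time axis.
        self.temporal = nn.Conv1d(n_channels, 256, kernel_size=15, stride=8, padding=7)
        self.act = nn.GELU()
        # Project pooled features into n_tokens x embed_dim conditioning tokens,
        # the shape a text-to-image model typically expects from its text encoder.
        self.proj = nn.Linear(256, n_tokens * embed_dim)
        self.n_tokens, self.embed_dim = n_tokens, embed_dim

    def forward(self, eeg):                      # eeg: (batch, channels, time)
        h = self.act(self.temporal(eeg))         # (batch, 256, time')
        h = h.mean(dim=-1)                       # global average pool over time
        h = self.proj(h)                         # (batch, n_tokens * embed_dim)
        return h.view(-1, self.n_tokens, self.embed_dim)

if __name__ == "__main__":
    encoder = ToyEEGEncoder()
    fake_eeg = torch.randn(2, 128, 512)          # 2 synthetic EEG windows
    cond = encoder(fake_eeg)
    print(cond.shape)                            # torch.Size([2, 77, 768])
    # In a DreamDiffusion-style pipeline, 'cond' would stand in for the text
    # embedding fed to a pretrained diffusion model's cross-attention layers.
```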

Building upon this, AI researchers at Meta (the American technology company formerly known as Facebook) published a white paper in October 2023 entitled "Toward a real-time decoding of images from brain activity." This work utilized a different neuroimaging technique, magnetoencephalography (MEG), in which thousands of brain activity measurements are taken per second. This technique provides a faster and more precise measurement of brain activity, facilitating more immediate decoding and interpretation of the data. In addition, the AI models were built on a "self-supervised" learning architecture, meaning they learned visual representations without requiring any human annotations. According to Meta, this confirmed that "self-supervised learning leads AI systems to learn brain-like representations: The artificial neurons in the algorithm tend to be activated similarly to the physical neurons of the brain in response to the same image."

Meta AI Research white paper, “Toward a real-time decoding of images from brain activity”

Credit: Meta

Meta showcased an AI system capable of real-time reconstruction of visual perception from brain activity, bringing us one step closer to real-time decoding of thought. (For an interesting demonstration of what real-time AI image decoding and visual reconstruction looks like solely from MEG data, readers can watch this video.)

The difference here is that Meta's approach, with its focus on MEG's rapid data acquisition, has the potential for real-time application. In contrast, DreamDiffusion's use of EEG signals does not emphasize the speed of decoding but rather the ability to generate detailed visual outputs from brain activity.

So, while DreamDiffusion is an essential step toward visualizing thoughts and dreams after the fact, Meta's work builds upon this by accelerating the process, moving closer to real-time interpretation of mental imagery. Both contributions are pivotal, each advancing brain-computer interfacing (BCI) in unique ways.
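Under the hood, a common ingredient in this line of work is aligning brain-signal embeddings with embeddings from a pretrained image model. The minimal sketch below shows a CLIP-style contrastive (InfoNCE) alignment loss of the kind such systems can use; the batch size, embedding dimension, and the assumption of a frozen, self-supervised image encoder are illustrative choices, not a reproduction of Meta's training code.

```python
# Minimal sketch of CLIP-style contrastive alignment between brain-signal
# embeddings and pretrained image embeddings. Illustrative only.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(brain_emb, image_emb, temperature=0.07):
    """InfoNCE loss pulling each brain embedding toward the embedding of the
    image the subject was viewing, and away from other images in the batch."""
    brain_emb = F.normalize(brain_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = brain_emb @ image_emb.t() / temperature   # (batch, batch) similarities
    targets = torch.arange(brain_emb.size(0))          # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

if __name__ == "__main__":
    batch, dim = 32, 768                                            # assumed sizes
    meg_embeddings = torch.randn(batch, dim, requires_grad=True)    # from a trainable MEG encoder
    img_embeddings = torch.randn(batch, dim)                        # from a frozen image model
    loss = contrastive_alignment_loss(meg_embeddings, img_embeddings)
    print(float(loss))
```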

CareYaya: Democratizing Neurotech

At CareYaya Health Technologies, we build upon these innovations. While EEG and MEG are powerful, their cost and complexity limit access. Our mission is to democratize neurotech so that it can widely aid those affected by neurological disorders.

In April 2024, a social impact project led by our team, "BrainYaya," took these concepts a step further by applying AI models to real-time data captured from mobile EEG technology.

Illustration of consumer-accessible Muse 2 headband, with electrodes on the forehead and above the ear, measuring frontal and temporal lobe activity.

Credit: CareYaya

The beauty of this project is its accessibility. Neurotech advancements are becoming increasingly affordable for at-home use. Technologies like mobile EEG, once confined to research labs, are now being consumerized and made affordable for the general population. Companies like Muse have developed clinically validated mobile EEG devices for consumer applications that cost only $300 to $400, a fraction of the price of traditional EEG machines (which can cost $30,000 to more than $100,000) or MEG machines (more than $2 million).

With portable, user-friendly devices that can be used in the comfort of one's own home, neurotech is becoming more accessible. These life-changing innovations could be within reach not just for the wealthy but for lower- and moderate-income families as well, a crucial step toward health equity in neurological care.

Because mobile EEG has data output limitations compared with traditional EEG or MEG methods, our initial focus is on decoding thoughts into language, with the aim of a future where complete thought-to-image conversion is feasible.

For now, the project enables at-home, real-time decoding of the words and topics that a patient with a neurological disorder is thinking about, even when they can no longer verbalize those thoughts. By training AI models on patterns of brain activity associated with certain words and topics, this technology could allow caregivers to understand the needs and desires of their loved ones, even in the later stages of dementia. It has an immediate, practical application for millions of patients with neurological disorders and their caregivers.
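To make the idea concrete, the hypothetical sketch below shows one simple way such a decoder could be prototyped: extract standard band-power features from short windows of 4-channel mobile EEG and fit an off-the-shelf classifier over a small set of caregiver-relevant topic labels. The sampling rate, channel count, labels, and synthetic data are all assumptions for illustration; this is not the BrainYaya implementation.

```python
# Hypothetical sketch: band-power features from 4-channel mobile EEG windows,
# fed to a simple classifier over a small vocabulary of "topic" labels.
# Synthetic data throughout; not the BrainYaya implementation.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

FS = 256                                   # Muse-style sampling rate (Hz), assumed
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(window):
    """window: (n_channels, n_samples) -> flat vector of band powers per channel."""
    freqs, psd = welch(window, fs=FS, nperseg=FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=-1))
    return np.concatenate(feats)

# Synthetic stand-in for labeled EEG: 200 two-second, 4-channel windows, each
# tagged with one of a few caregiver-relevant topics (hypothetical labels).
rng = np.random.default_rng(0)
labels = ["water", "pain", "family", "rest"]
X = np.stack([band_powers(rng.standard_normal((4, 2 * FS))) for _ in range(200)])
y = rng.choice(labels, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy (chance-level here, since the data are random):", clf.score(X_test, y_test))
```

In a real deployment, the random windows would be replaced by labeled recordings streamed from the headband, and the simple classifier by whatever model the training data support.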

A Call to Action: Innovators and Clinicians Unite

But to truly capitalize on this potential, we need all hands on deck. This is a call to innovators everywhere – put on your thinking caps and dedicate yourselves to pushing the boundaries of what's possible in this space. We need the brightest minds in AI, neuroscience, and hardware technology to come together and tackle these challenges head-on.

And to neurology clinicians, I urge you to reach out and collaborate with these projects. Your expertise and firsthand experience with patients are invaluable in guiding the development of neurotech solutions that are not only effective, but also practical and user-friendly. Together, we have the power to reshape the landscape of neurological care.

A Brighter Future for Neurological Care

In the face of conditions like dementia that can feel hopeless, neurotech offers the potential for hope. By harnessing the power of neurotech innovation, we can give a voice back to the voiceless. The tide is turning, and with each passing day, we're one step closer to a world where no one is left trapped within their own thoughts.

The future of neurological care is here, and it's brighter than ever before. With innovators and clinicians working hand in hand, and with technologies becoming more accessible by the day, we stand on the threshold of a neurotech revolution. So let us charge forward with hope and determination, knowing that every step brings us closer to a world where no mind is left behind.

REFERENCES
1. Bai Y, Wang X, Cao Y, et al. DreamDiffusion: Generating High-Quality Images from Brain EEG Signals. Research Paper. arXiv. Published June 30, 2023. Accessed April 29, 2024. https://arxiv.org/pdf/2306.16934.
2. Bai Y, Wang X, Cao Y, et al. DreamDiffusion: Generating High-Quality Images from Brain EEG Signals. Source Code GitHub. Accessed April 29, 2024. https://github.com/bbaaii/DreamDiffusion.
3. Benchetrit Y, Banville HJ, King JR. Toward a real-time decoding of images from brain activity. Research Blog. Meta. Published October 18, 2023. Accessed April 29, 2024. https://ai.meta.com/blog/brain-ai-image-decoding-meg-magnetoencephalography/.
4. Shah NK. Guest column: A daughter’s AI exploration and the device that could restore language for those with dementia. News. WRAL. Published April 20, 2024. Accessed April 29, 2024. https://wraltechwire.com/2024/04/20/guest-column-a-daughters-ai-exploration-and-the-device-that-could-restore-language-for-those-with-dementia/.
5. Muse Research. Muse. Accessed April 29, 2024. https://choosemuse.com/pages/muse-research.
6. Hodge R. As mother battles dementia, daughter works with AI to improve communication. News. ABC 13 News. Published April 23, 2024. Accessed April 29, 2024. https://wlos.com/news/local/ai-artificial-intelligence-dementia-lake-junaluska-mother-daughter-developer-improve-communication-north-carolina.

Neal K. Shah is the CEO of CareYaya Health Technologies, one of the fastest-growing health tech startups in America. He runs a social enterprise and applied research lab utilizing AI to advance health equity, with a focus on neurological care for elders with dementia, Alzheimer disease, and Parkinson disease. Shah has advanced AI projects to improve neurological care with support from the National Institutes of Health, Johns Hopkins AITC, and Harvard Innovation Labs. He is a "Top Healthcare Voice" on LinkedIn with a following of more than 35,000.
