jazzyjackson 22 minutes ago

Have they only demo'd it with patients who can't speak? Seems like if they'd cracked mind reading, it would work just as well on someone with full faculties who could confirm it's accurate.

labelra 2 hours ago

Are there any details on how this works? Based on what's available in the linked article, it looks like they have an LLM+RAG pipeline and are trying to pass off its responses as speech from the user. Done with full transparency and the right protections, this could be useful, but calling it a BCI and overselling it as the user's own voice (especially given that voice cloning is also being done) risks misrepresenting it.

  • Sanzig an hour ago

    Agreed - I don't want to come across as negative, but this is certainly in the "extraordinary claims" category for me right now. If this works, it's obviously huge, but I really want to see third-party validation before I can let go of my skepticism.

    I would be very curious to hear about interviews with the patients (conducted through their current means of communication, e.g. eye-gaze interfaces). Are they finding that the speech generated by the system accurately reflects their intentions?

    EDIT: the EEG peripheral they are using has 4 channels at a 250 Hz sample rate. I freely admit I have little knowledge of neuroscience and happily defer to the experts, but that really doesn't seem like a lot of data from which to infer speech intentions.
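    A back-of-the-envelope calculation makes the point. A minimal sketch, assuming 16-bit samples (a common resolution for consumer EEG; the device's actual bit depth isn't stated):

        # Raw information budget of a 4-channel, 250 Hz headset.
        channels = 4
        sample_rate_hz = 250
        bits_per_sample = 16  # assumption, not from the article

        raw_bits_per_s = channels * sample_rate_hz * bits_per_sample
        print(f"{raw_bits_per_s} bits/s ({raw_bits_per_s / 8000:.1f} kB/s)")
        # ~16 kbit/s of raw, noisy signal, from which word-level speech
        # intentions would have to be decoded in real time.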

    • throwup238 an hour ago

      Even if the LLM hallucinates every word, just knowing when to say something versus stay quiet based on EEG data would be a huge breakthrough.
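      Though even that is a much simpler problem than decoding words: a binary detector over band power. A toy sketch of what I mean (numpy/scipy; the band, window, and threshold are all invented for illustration, not anything this product is known to do):

          import numpy as np
          from scipy.signal import welch
          from scipy.integrate import trapezoid

          FS = 250               # headset sample rate, per this thread
          BAND = (13.0, 30.0)    # beta band, loosely tied to motor planning
          WINDOW = 2 * FS        # 2-second analysis windows

          def band_power(x, fs=FS, band=BAND):
              # Integrate the power spectral density over the chosen band.
              freqs, psd = welch(x, fs=fs, nperseg=min(len(x), fs))
              mask = (freqs >= band[0]) & (freqs <= band[1])
              return trapezoid(psd[mask], freqs[mask])

          def wants_to_speak(window, baseline, ratio=1.5):
              # Fire when band power rises well above the resting baseline.
              return band_power(window) > ratio * baseline

          # Fake data standing in for a single EEG channel.
          rng = np.random.default_rng(0)
          rest = rng.normal(size=WINDOW)
          t = np.arange(WINDOW) / FS
          active = rest + 0.8 * np.sin(2 * np.pi * 20 * t)
          baseline = band_power(rest)
          print(wants_to_speak(rest, baseline))    # False
          print(wants_to_speak(active, baseline))  # True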

      • Sanzig 40 minutes ago

        If that's all they were doing - showing when the patient wanted to speak - that would be fine. Presenting generated speech as attributable to that patient, though? That feels irresponsible without solid evidence, or at least without informing the families of the risk that the interface may be hallucinating outright. Imagine someone talking to an LLM they think is their loved one, all while that person has to watch.

        • throwup238 24 minutes ago

          You'll get no argument from me there. The whole LLM part seems like a gimmick unless it's doing error correction on a messy data stream - like a drunk person fat-fingering a question into ChatGPT, except with an EEG. It might be a really fancy autocorrect.

          I'm just saying that EEG data is so unreliable, and requires so much calibration/training per person, that reliably isolating speech intent in a paralyzed patient would be a significant development on its own.
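          That "fancy autocorrect" reading is essentially noisy-channel decoding: pick the output that best trades off what the unreliable decoder emitted against what a language model expects. A toy sketch, with an invented vocabulary and a unigram prior standing in for the LLM:

              import math

              # Toy noisy-channel autocorrect. Vocabulary, prior, and error
              # model are all made up; a real system would score whole
              # sentences with an LLM rather than single words.
              PRIOR = {"water": 0.4, "later": 0.3, "waiter": 0.2, "wader": 0.1}

              def edit_distance(a, b):
                  # Standard Levenshtein distance, single-row DP.
                  dp = list(range(len(b) + 1))
                  for i, ca in enumerate(a, 1):
                      prev, dp[0] = dp[0], i
                      for j, cb in enumerate(b, 1):
                          prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                                   prev + (ca != cb))
                  return dp[-1]

              def decode(observed, error_rate=0.3):
                  # Maximize log P(word) + d(observed, word) * log(error_rate).
                  def score(w):
                      return (math.log(PRIOR[w])
                              + edit_distance(observed, w) * math.log(error_rate))
                  return max(PRIOR, key=score)

              print(decode("wazer"))  # -> "water"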

  • 4b11b4 an hour ago

    Seems like they've built on top of HALO using generative AI now (in partnership with Unbabel?)

vasco_ 4 days ago

Halo, developed by Unbabel, combines a non-invasive BCI with an LLM to enable ALS patients to regain the ability to talk with loved ones. The search for a CEO is on.

xk_id an hour ago

Those non-invasive headbands (which work very differently from implanted electrodes) are notoriously inaccurate at recording brain signals. Even scientific studies, which use advanced setups like the 10-20 system for scalp EEG, face unsolved challenges in removing noise from the data and in reconstructing the underlying brain activity from it [0], let alone in making meaningful inferences about that activity.
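For a sense of scale: the easy part of cleaning scalp EEG is a band-pass filter, and that is nowhere near sufficient. A minimal sketch (scipy; the cutoffs are conventional EEG choices, not anything specific to this product):

    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    FS = 250  # Hz

    def bandpass(x, lo=1.0, hi=40.0, fs=FS, order=4):
        # Keep the range where most cortical activity lives.
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, x)

    # Synthetic channel: a 10 Hz "alpha" rhythm buried in drift and hum.
    t = np.arange(5 * FS) / FS
    raw = (np.sin(2 * np.pi * 10 * t)         # signal of interest
           + 3 * np.sin(2 * np.pi * 0.2 * t)  # slow electrode drift
           + 2 * np.sin(2 * np.pi * 50 * t))  # 50 Hz mains interference
    print(f"std before: {raw.std():.2f}, after: {bandpass(raw).std():.2f}")

The catch is that eye blinks, jaw EMG, and electrode movement sit inside that pass band and survive the filter - that is the unsolved part [0] describes.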

Patients with locked-in syndrome (one of the use cases mentioned in the article; also called pseudocoma), or with other disorders of consciousness, are unable to protest, or to confirm the accuracy of, the generated messages being attributed to them. Communicating on your own terms and in your own words is fundamental to human dignity.

Meanwhile, this coincides with a lukewarm reception of generative AI among consumers; perhaps it is precisely the lack of autonomy of locked-in patients that makes them an attractive segment for this new generation of ventures scrambling for ROI on the enormous over-investment in the sector.

The conference venues look lush tho.

[0] https://en.wikipedia.org/wiki/Electroencephalography#Artifac...

  • y-curious an hour ago

    I've spoken to a lot of smart people on the topic of EEG (I'm in a very related field). I agree with you.

    It's an extremely powerful tool for diagnosing a limited range of conditions, but it is not magic. Electrical signals are heavily attenuated when their sources are not near the surface of the brain. Even then, a headband like this is susceptible to noise from movement and other factors. You either need to correct for this with AI, which introduces a second source of error, or you need a very still user. I'm not convinced by the ability to "read minds" with this technology; I would need the man in the video to answer some specific questions to be convinced.

    Is this better than not being able to communicate at all? Yes.

    • Sanzig an hour ago

      What they need to provide is surveys of the patients without the device (even locked-in patients can often communicate, slowly, via eye-scan interfaces). How well do the patients rate the system at aligning with what they want to say?

      If it turns out not to align at all, then honestly that is worse than nothing. Imagine being locked in and your family communicating with an LLM pretending to be you - all while you have to watch and can't do anything about it.

      • anonzzzies 36 minutes ago

        It might still be beneficial to the family, though; just not to you.