Elon Musk’s quest to wirelessly connect human brains with machines has run into a seemingly impossible obstacle, experts say. The company is now asking the public for help finding a solution.
Musk’s startup Neuralink, which is in the early stages of testing in human subjects, is pitched as a brain implant that will let people control computers and other devices using their thoughts. Some of Musk’s predictions for the technology include letting paralyzed people “walk again and use their arms normally.”
Turning brain signals into computer inputs means transmitting a lot of data very quickly. A problem for Neuralink is that the implant generates about 200 times more brain data per second than it can currently wirelessly transmit. Now, the company is seeking a new algorithm that can transmit this data in a smaller package — a process called compression — through a public challenge.
As a barebones web page announcing the Neuralink Compression Challenge posted on Thursday explains, “[greater than] 200x compression is needed.” The winning solution must also run in real time, and at low power.
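For a rough sense of the gap being described — the only number given above is the roughly 200x ratio, so every figure in this sketch is an illustrative assumption, not something from the challenge page:

```python
# Back-of-the-envelope estimate of the bandwidth gap.
# All parameter values here are assumptions for illustration only;
# the article itself only states a ~200x shortfall.
channels = 1024          # assumed electrode count
sample_rate_hz = 20_000  # assumed samples per second per channel
bits_per_sample = 10     # assumed ADC resolution
radio_bps = 1_000_000    # assumed wireless budget (~1 Mbps)

raw_bps = channels * sample_rate_hz * bits_per_sample
print(f"raw data rate: {raw_bps / 1e6:.1f} Mbps")
print(f"radio budget:  {radio_bps / 1e6:.1f} Mbps")
print(f"ratio needed:  {raw_bps / radio_bps:.0f}x")
```

With those assumed numbers the raw stream comes out around 200 Mbps against a ~1 Mbps radio, which is the kind of mismatch that produces a "200x compression is needed" requirement.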
Submit your algorithms under GPL
AGPL just in case they try to put your brain waves into the cloud
GPLv3, make it really radioactive to them.
Did they try Stack overflow?
Why would you ever want to do that?! Marked as duplicate. Shove a cactus up your ass.
Why not skip the middle man and ask ChatGPT directly?
*GrokAI
You know, Xitter’s shittier AI.
I’m not an Information Theory guy, but I am aware that, regardless of how clever one might hope to be, there is a theoretical limit on how much any given set of information can be compressed; and that limit is particularly unforgiving for the lossless compression this challenge demands. A rough sketch of that floor follows the quote below.
Quote from the article:
The skepticism is well-founded, said Karl Martin, chief technology officer of data science company Integrate.ai. Martin’s PhD thesis at the University of Toronto focused on data compression and security.
Neuralink’s brainwave signals are compressible at ratios of around 2 to 1 and up to 7 to 1, he said in an email. But 200 to 1 “is far beyond what we expect to be the fundamental limit of possibility.”
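As a back-of-the-envelope illustration of that kind of floor (purely synthetic data, nothing to do with Neuralink’s actual recordings): the Shannon entropy of whatever you are transmitting is a hard lower bound on the average bits per sample any lossless coder can achieve, and additive noise pushes that bound up quickly.

```python
import numpy as np

# Sketch: zero-order entropy of a made-up "neural-ish" signal.
# Signal shape, bit depth, and noise level are all invented for illustration.
rng = np.random.default_rng(0)
n = 200_000
clean = (20 * np.sin(np.linspace(0, 400 * np.pi, n))).astype(int)
noisy = clean + rng.normal(0, 5, n).astype(int)   # additive sensor noise

values, counts = np.unique(noisy, return_counts=True)
p = counts / counts.sum()
entropy_bits = float(-(p * np.log2(p)).sum())     # bits/sample lower bound

raw_bits = 10                                     # assumed raw bit depth
print(f"entropy ≈ {entropy_bits:.2f} bits/sample")
print(f"zero-order lossless ceiling ≈ {raw_bits / entropy_bits:.1f} : 1")
```

This is only a zero-order estimate — it ignores correlations between samples, which real coders do exploit — but the point stands: once noise dominates the per-sample entropy, lossless ratios land in the low single digits, nowhere near 200 to 1.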
The implication of a 200 to 1 algorithm would be that the data they’re collecting is almost entirely noise; specifically, that 99.5% of all the data is noise. In theory, if they had sufficient processing in the implant, they could filter the data down before transmission, reducing the bandwidth usage by 99.5%. It seems like it would be fairly trivial to prove that any such 200 to 1 compression algorithm would be indistinguishable in function from a noise filter on the raw data.
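The arithmetic behind that 99.5% figure, plus an equivalent way to state it in bits (the 10-bit sample size is an assumption, not a figure from the article):

```python
ratio = 200                     # compression ratio the challenge asks for
kept = 1 / ratio
print(f"fraction of raw data that survives:     {kept:.3%}")      # 0.500%
print(f"fraction that must be discardable:      {1 - kept:.3%}")  # 99.500%

# Equivalently, assuming 10-bit samples: lossless 200:1 would require the
# stream to carry, on average, no more than 10 / 200 bits of information
# per sample.
bits_per_sample = 10
print(f"max average entropy per sample: {bits_per_sample / ratio:.2f} bits")
```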
It’s not quite the same situation, but this should illustrate some of the underlying issues: https://matt.might.net/articles/why-infinite-or-guaranteed-file-compression-is-impossible/
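The counting argument in that link boils down to the pigeonhole principle: there are more distinct n-bit inputs than there are outputs shorter than n bits, so no lossless scheme can shrink every input. A toy check:

```python
# Pigeonhole check: no lossless compressor can shrink *every* input.
n = 16
inputs = 2 ** n                                   # distinct n-bit strings
shorter_outputs = sum(2 ** k for k in range(n))   # strings of length 0..n-1
print(inputs, ">", shorter_outputs)               # 65536 > 65535
```

Since 2^n inputs can’t map injectively onto 2^n - 1 shorter outputs, any compressor that shrinks some inputs must expand (or fail on) others — which is why guaranteed fixed-ratio lossless compression of arbitrary data is impossible.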