Sharok-allinthemined

We are approaching the cyborg age!


See the title



Scientists had someone watch a video while they reconstructed the images from his brain using scans.
Now, let's just wait for the quality to get better, and for a way to induce images, and then we'll be able to watch movies in boring classes without being noticed! Very Happy (one day hopefully)

That's the coolest thing. Only thing is, the way they recreated the video has a major flaw.

Build a random library of ~18,000,000 seconds of video downloaded at random from YouTube (that have no overlap with the movies subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average those clips together. This is the reconstruction.

That reconstruction is made up of the YouTube videos that look most similar to what the person saw. But what if none of the YouTube clips match what the person saw? That would explain the random building and person at the 7th second. All of the source images were clear YouTube clips, but averaging them together distorts them into something that only roughly resembles what the person saw.
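The selection-and-averaging step quoted above can be sketched in a few lines. This is only an illustration with made-up names and toy array shapes; the real pipeline works on full fMRI time series and motion-energy features, not single frames like this:

```python
import numpy as np

def reconstruct(observed, library_predicted, library_clips, k=100):
    """Pick the k library clips whose predicted brain activity best
    matches the observed activity, then average their frames.

    observed:          (n_voxels,) observed brain activity
    library_predicted: (n_clips, n_voxels) predicted activity per clip
    library_clips:     (n_clips, H, W) one representative frame per clip
    """
    # Pearson correlation between the observed activity and each
    # clip's predicted activity (normalise, then dot product).
    obs = (observed - observed.mean()) / observed.std()
    pred = library_predicted - library_predicted.mean(axis=1, keepdims=True)
    pred /= pred.std(axis=1, keepdims=True)
    similarity = pred @ obs / len(obs)

    # Average the k most similar clips -> the blurry reconstruction.
    top_k = np.argsort(similarity)[-k:]
    return library_clips[top_k].mean(axis=0)
```

The blur in the video falls straight out of that last line: averaging 100 unrelated clips smears every sharp edge that any single clip had.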

It would seem that the scientists are able to record the rough shapes we see, and maybe even their distance, but they can't figure out much more than that. Until we can record the geometry and rough colour that the brain sees (and I believe that they will one day), this technology will never be an accurate portrayal of one's memory.

Oh, and PS. sharok, the Project in BETA is down for me. Can you fix that?

The scientists still don't have the tech to retrieve the exact image seen, so they just had people view images, recorded the brain activity, and repeated the process.

By analysing the data, they probably found the regions whose activity changes when the subject sees a different image. Since they still don't know exactly where to scan, they associate images with an activity pattern. In this clip, the right-hand image is just a bunch of overlaid images that fit the brain's activity the most, so you get a blurry preview.

They just demonstrated that, in theory, they can make their tech better and get more accurate images if they can locate which part of the brain is responsible for which part of the image. The big problem imo is the difference between patients... Determining the right positions to scan takes a lot of time, effort and money, and the result might work on one person only, since it requires very high accuracy.
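That per-subject calibration is usually done by fitting a separate model for every voxel of every person's brain, which is exactly why it doesn't transfer between people. A rough sketch of the idea, with hypothetical names and plain ridge regression standing in for whatever model the researchers actually used:

```python
import numpy as np

def fit_encoding_model(stimulus_features, voxel_activity, alpha=1.0):
    """Fit one ridge-regression weight vector per voxel, mapping
    stimulus features (e.g. shapes, motion) to that voxel's activity.

    stimulus_features: (n_timepoints, n_features)
    voxel_activity:    (n_timepoints, n_voxels)
    Returns weights:   (n_features, n_voxels)
    """
    X, Y = stimulus_features, voxel_activity
    n_feat = X.shape[1]
    # Closed-form ridge solution: (X^T X + alpha*I)^-1 X^T Y.
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

def predict_activity(weights, new_features):
    """Predict brain activity for unseen stimuli (e.g. library clips)."""
    return new_features @ weights
```

The weights only make sense for the one brain they were fitted on, so every new subject means hours of scanning to collect fresh training data.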

And maybe colors could be a problem, since they are "created" by the brain itself... Your green might be different than my green.

But still, they can roughly see what the person is seeing! :3
