The biggest achievement of a real brain-computer interface could be that it strips away the layers of abstraction we have built in order to communicate our intentions and wishes. Take the example of describing the feeling you get from looking at a beautiful flower. The visual input is converted into a wide spectrum of brain responses that propagate through neuronal and glial networks, as chemical and electrical signals, to all kinds of brain areas; when we try to describe them, they are retrieved and cast into an abstraction we call language. If we turn to written language, we add another layer of abstraction in the form of writing. Depending on whether it is longhand or typing, there is yet another layer, and each one takes us further from the original intent.
But this is not all that happens. What the brain sees is very different from the pictures formed on the retina. For example, your eyes are seldom at rest. If you stand outdoors looking at the scene around you, your eyes pause for barely a second at the grass, a treetop, a cloud, a bird, or a squirrel.
The brain does not see a series of quick snapshots. The seeing part of the brain records each picture and remembers it. It adds them together and gives them meaning, so that the whole picture is seen, not the parts. In a second, it draws upon the brain's store of memories. A tree, a cloud, a squirrel: these have been seen before. It takes only a glance to recognize them.
So seeing involves many parts of the eye, the optic nerve, and the parts of the brain that interpret the eye's messages. That's why a baby must learn to use his vision. Before long his visual mechanics work well, but his vision is still poor. Why? Because he understands little of what he sees. The brain is not yet playing its full part in seeing.