Tag Archives: Mind-Brain-Control

Microsoft Patents Body-As-Network

A human conduit could distribute power across wearable devices, developer says.

By Stephen Lawson, IDG News | Jun 24, 2004 12:00 am

Microsoft has a patent on a new kind of network: Your body.

The software giant has received a U.S. patent for a “method and apparatus for transmitting power and data using the human body.” An application for the patent, No. 6,754,472, was filed in 2000 and awarded this week.

Microsoft proposes linking portable devices such as watches, keyboards, displays, and speakers using the conductivity of “a body of a living creature.”

Powering Devices

A variety of devices could be powered selectively from a single power source carried on the body, via multiple power supply signals at different frequencies, according to the patent abstract. In addition, data and audio signals could be transmitted over that same power signal. The power source and devices would be connected to the body via electrodes.
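
The abstract stops short of an implementation, but the scheme it sketches resembles frequency-division multiplexing: power rides on one carrier frequency, data on another, and both share a single conductor. The following toy simulation is only a sketch of that general idea; every frequency, amplitude, and the crude envelope detector in it are assumptions for illustration, not details taken from the patent.

# Illustrative sketch only: power and data share one conductor (the "body")
# at different frequencies, in the spirit of the patent abstract. Every
# number below is a hypothetical choice, not taken from the patent.
import numpy as np

fs = 100_000                                  # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)                # 10 ms, 1,000 samples

power = np.sin(2 * np.pi * 1_000 * t)         # 1 kHz power carrier
bits = np.repeat([1, 0, 1, 1, 0], 200)        # toy bit stream, 200 samples per bit
data = bits * np.sin(2 * np.pi * 10_000 * t)  # 10 kHz on-off-keyed data signal

body_channel = power + 0.2 * data             # composite signal on the shared conductor

# A device tuned to 10 kHz recovers the bits: mix down to DC, then low-pass
# filter with a moving average (a crude envelope detector).
mixed = body_channel * np.sin(2 * np.pi * 10_000 * t)
envelope = np.convolve(mixed, np.ones(200) / 200, mode="same")
recovered = (envelope[100::200] > 0.05).astype(int)  # sample at bit centers
print(recovered)                              # [1 0 1 1 0]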

In the patent application, Microsoft says the company set out to address the proliferation of small handheld or wearable devices with redundant parts for input and output of data, such as separate speakers in a watch, a radio, and a personal digital assistant. If all those devices were networked, they could all share one speaker, the company suggests. Personal wireless networks have potential problems involving power consumption, interference and security, and batteries add weight and are inconvenient to replace or recharge, according to Microsoft.

A Microsoft spokesperson on Wednesday confirmed the company has been awarded the patent. Microsoft did not immediately provide any details of product plans for the technology.

Practical Uses

The idea of using the body to transmit power among devices is not new, according to Gartner analyst Ken Dulaney. However, small batteries and wireless personal-area network technologies such as Bluetooth may be a more practical approach, he says.

“Think about the problems of always having to have things touching your body,” Dulaney says. “I think this could be one of those technologies that’s interesting but not practical in the long run.”

One area in which the body might prove useful as a network conduit is medical devices, he adds.

Ethical Assessment of Implantable Brain Chips

Bioethics and Medical Ethics

Ellen M. McGee and G. Q. Maguire, Jr.

ABSTRACT: My purpose is to initiate a discussion of the ethics of implanting computer chips in the brain and to raise some initial ethical and social questions. Computer scientists predict that within the next twenty years neural interfaces will be designed that will not only increase the dynamic range of senses, but will also enhance memory and enable “cyberthink” — invisible communication with others. This technology will facilitate consistent and constant access to information when and where it is needed. The ethical evaluation in this paper focuses on issues of safety and informed consent, issues of manufacturing and scientific responsibility, anxieties about the psychological impacts of enhancing human nature, worries about possible usage in children, and most troubling, issues of privacy and autonomy. Inasmuch as this technology is fraught with perilous implications for radically changing human nature, for invasions of privacy and for governmental control of individuals, public discussion of its benefits and burdens should be initiated, and policy decisions should be made as to whether its development should be proscribed or regulated, rather than left to happenstance, experts and the vagaries of the commercial market.

The future may well involve the reality of science fiction’s cyborg, persons who have developed some intimate and occasionally necessary relationship with a machine. It is likely that implantable computer chips acting as sensors or actuators may soon assist not only failing memory, but even bestow fluency in a new language, or enable “recognition” of previously unmet individuals. The progress already made in therapeutic devices, in prosthetics and in computer science indicates that it may well be feasible to develop direct interfaces between the brain and computers.

Worldwide there are at least three million people living with artificial implants. In particular, research on the cochlear implant and on retinal vision has furthered the development of interfaces between neural tissues and silicon-substrate microprobes. The cochlear implant, which directly stimulates the auditory nerve, enables over 10,000 totally deaf people to hear sound; the retinal implantable chip for prosthetic vision may restore vision to the blind. Research on prosthetic vision has proceeded along two paths: 1) retinal implants, which avoid brain surgery and link a camera in eyeglass frames via laser diodes to a healthy optic nerve and nerves to the retina, and 2) cortical implants, which require brain surgery and the pneumatic insertion of electrodes into the brain to penetrate the visual cortex and produce highly localized stimulation.

The latest stage in the evolution towards the implantable brain chip involves combining these advances in prostheses technology with developments in computer science. The linkage of smaller, lighter, and more powerful computer systems with radio technologies will enable users to access information and communicate anywhere or anytime. Through miniaturization of components, systems have been generated that are wearable and nearly invisible, so that individuals, supported by a personal information structure, can move about and interact freely, as well as, through networking, share experiences with others. The wearable computer project envisions users accessing the Remembrance Agent of a large communally based data source.

Wearables and body-nets are intermediate technologies; the logical next step in this development is the implantable brain chip, direct neural interfacing. As early as 1968, Nicholas Negroponte, presently director of MIT’s Media Lab, first prophesied this symbiosis between mankind and machine. His colleague, Professor Gershenfeld, asserts that “in 10 years, computers will be everywhere; in 20 years, embedded by bioengineers in our bodies…” Neither visionary professes any qualms about this project, which they expect to alter human nature itself. “Suddenly technology has given us powers with which we can manipulate not only external reality — the physical world — but also, and much more portentously, ourselves.” Once networked, the result will be a “collective consciousness”, “the hive mind.” “The hive mind…is about taking all these trillions of cells in our skulls that make individual consciousness and putting them together and arriving at a new kind of consciousness that transcends all the individuals.”

The technology for implantable devices is becoming available, and at prices that make such systems very cost effective. Three stages of introduction of such devices can be delineated. The earliest adopters will be those with a disability, who will use this as a more powerful prosthetic device. The next stage represents the movement from therapy to enhancement, and it is at this point that ethical evaluation becomes imperative. One of the first groups of non-disabled “volunteers” will probably be the professional military, where the use of an implanted computing and communication device with new interfaces to weapons, information, and communications could be lifesaving. The third group of users will probably be those involved in very information-intensive businesses, who will use these devices to develop an expanded information transfer capability.

As intelligence or sensory “amplifiers”, the implantable chip will generate at least four benefits: 1) it will increase the dynamic range of senses, enabling, for example, seeing IR, UV, and chemical spectra; 2) it will enhance memory; 3) it will enable “cyberthink” — invisible communication with others when making decisions, and 4) it will enable consistent and constant access to information where and when it is needed. For many these enhancements will produce major improvements in the quality of life, or their survivability, or their performance in a job. The first prototype devices for these improvements in human functioning should be available in five years, with the military prototypes starting within ten years, and information workers using prototypes within fifteen years; general adoption will take roughly twenty to thirty years. The brain chip will probably function as a prosthetic cortical implant. The user’s visual cortex will receive stimulation from a computer based either on what a camera sees or based on an artificial “window” interface.

Not every computer scientist views such prospects with equanimity. Michael Dertouzos writes, “even if it would someday be possible to convey such higher-level information to the brain — and that is a huge technical “If” — we should not do it. Bringing light impulses to the visual cortex of a blind person would justify such an intrusion, but unnecessarily tapping into the brain is a violation of our bodies, of nature, and for many, of God’s design.”

This succinctly formulates the essentialist and creationist argument against the implantable chip. Fears of tampering with human nature are widespread; the theme that nature is good and technology evil, that the power to recreate oneself is overreaching hubris, and that reengineering humanity can only result in disaster, is a familiar response to each new control that man exercises. The mystique of the natural is fueled by the romantic world view of a benign period when humans lived in harmony with nature. However attractive, it is probable that this vision is faulty inasmuch as man has always used technology to survive, and to enhance life; the use of technology is natural to man. Thus this negative response to the prospect of implantable chips is certainly inadequate, although it points to a need to evaluate the technology in terms of the good or evil possibilities for its use by men, or governments.

The call not to “play God” is also familiar, and suffers from the same difficulties articulated by David Hume. This critique relies on a religious sense that improving on the design of creation insults the Creator. In particular, it proposes that attempts to alter the functioning of the brain for purposes of creating a superior human being can be decried as usurping God’s power. To be persuasive this argument must depend on a restrictive, even for religionists, view of creation, one that sees no role for human creativity.

Rejection of wiring brains directly to a computer also stems from a desire for bodily integrity, and intuitions about the sanctity of the body. Thus, many accept the invasion of the organic by the mechanical for curative purposes, but feel that such uses for enhancement are wrong. This conviction, that respect for humans requires the physical integrity of the body, is a version of “the inviolability-of-persons view”, a deontological position. Using this standard, a distinction is drawn between therapeutic and enhancement procedures; “An intervention that is life-saving, rehabilitative, or otherwise therapeutic can be consistent with the principle that the physical integrity of the body should be preserved even if it involves a bodily ‘mutilation’ or intrusion, provided that it promotes the integrity of the whole.” Implantable chips that amplify the senses, or enhance memory or networking capacities would, thus, be suspect. For others, however, there is no bright line between therapy and enhancement — how deficient does my memory have to be before it would be ethical to wire my brain to a computer? — and the argument is too weak to preclude the use of this technology, any more than it is possible to proscribe cosmetic surgery, or the use of mood-improving drugs, if the benefits seem to outweigh the medical risks. However, even if we discount the force of these three arguments, there are a myriad of other technical, ethical and social concerns to consider before proceeding with implantable chips. The areas of concern for technology assessment are extensive, including risks, appropriateness, societal impact, costs and equity issues, and need evaluation by a multidisciplinary team. Study of this device would seem to need participants from at least the fields of computer science, biophysics, medicine, law, philosophy, public policy and international economy. Unlike the scientific community at the advent of genetic technologies, the computer industry has not, as yet, engaged in a public dialogue about these promising, but risky, technologies. This avoidance of discussion, and simple reliance upon principles of free scientific inquiry and the market economy, is itself a moral stance requiring justification.

Ethical appraisal of implantable computer chips should assess at least the following areas of concern: issues of safety and informed consent, issues of manufacturing and scientific responsibility, anxieties about the psychological impacts of enhancing human nature, worries about possible usage in children, and most troublesome, issues of privacy and autonomy. As is the case in evaluation of any future technology, it is unlikely that we can reliably predict all effects. Nevertheless, the potential for harm must be considered.

The most obvious and basic problems involve safety. Evaluation of the costs and benefits of these implants requires a consideration of the surgical and long-term risks. One question — whether the difficulties with development of non-toxic materials will allow long-term usage — should be answered in studies on therapeutic options, and thus need not be a concern for enhancement usages. However, it is conceivable that there should be a higher standard for safety when technologies are used for enhancement rather than therapy, and this issue needs public debate. Whether the informed consent of recipients should be sufficient reason for permitting implementation is questionable in view of the potential societal impact. Other issues, such as the kinds of warranties users should receive, and the liability responsibilities if quality control of hard/soft/firmware is not up to standard, could be addressed by manufacturing regulation. Provisions should be made to facilitate upgrades, since users presumably would not want multiple operations, or to be possessors of obsolete systems. Manufacturers must understand and devise programs for teaching users how to implement the new systems. There will be a need to generate data on usefulness for individual implant recipients, and on whether all users benefit equally. Additional practical problems with ethical ramifications include whether there will be a competitive market in such systems and whether there will be any industry-wide standards for design of the technology.

One of the least controversial uses of this enhancement technology will be its implementation as therapy. It is possible that the technology could be used to enable those who are naturally less cognitively endowed to achieve on a more equitable basis. Certainly, uses of the technology to remediate retardation or to replace lost memory faculties in cases of progressive neurological disease could become a covered item in health care plans. Enabling humans to maintain species-typical functioning would probably be viewed as a desirable, even required, intervention, although this may become a constantly changing standard. The costs of implementing this technology need to be weighed against the costs of impairment, although it may be that decisions should be made on the basis of rights rather than usefulness.

Consideration also needs to be given to the psychological impact of enhancing human nature. Will the use of computer-brain interfaces change our conception of man and our sense of identity? If people are actually connected via their brains the boundaries between self and community will be considerably diminished. The pressures to act as a part of the whole rather than as a single isolated individual would be increased; the amount and diversity of information might overwhelm, and the sense of self as a unique and isolated individual would be changed.

Since usage may also engender a human being with augmented sensory capacities, the implications, even if positive, need consideration. Supersensory sight will see radar, infrared and ultraviolet images, augmented hearing will detect softer and higher and lower pitched sounds, enhanced smell will intensify our ability to discern scents, and an amplified sense of touch will enable discernment of environmental stimuli like changes in barometric pressure. These capacities would change the “normal” for humans, and would be of exceptional application in situations of danger, especially in battle. As the numbers of enhanced humans increase, today’s normal range might be seen as subnormal, leading to the medicalization of another area of life. Thus, substantial questions revolve around whether there should be any limits placed upon modifications of essential aspects of the human species. Although defining human nature is notoriously difficult, man’s rational powers have traditionally been viewed as his claim to superiority and the center of personal identity. Changing human thoughts and feelings might render the continued existence of the person problematical. If one accepts, as most cognitive scientists do, “the materialist assertion that mind is an emergent phenomenon from complex matter, … cybernetics may one day provide the same requisite level of complexity as a brain.” On the other hand, not all philosophers espouse the materialist contention, and use of these technologies will certainly impact discussions about the nature of personal identity, and the traditional mind-body problem. Modifying the brain and its powers could change our psychic states, altering both the self-concept of the user, and our understanding of what it means to be human. The boundary between me “the physical self” and me “the perceptory/intellectual self” could change as the ability to perceive and interact expands far beyond what can be done with video conferencing. The boundaries of the real and virtual worlds may blur, and a consciousness wired to the collective and to the accumulated knowledge of mankind would surely impact the individual’s sense of self. Whether this would lead to bestowing greater weight to collective responsibilities and whether this would be beneficial are unknown.

Changes in human nature would become more pervasive if the altered consciousness were that of children. In an intensely competitive society, knowledge is often power. Parents are driven to provide the very best for their children. Will they be able to secure implants for their children, and if so, how will that change the already unequal lottery of life? Standards for entrance into schools, gifted programs and spelling bees – all would be affected. The inequalities produced might create a demand for universal coverage of these devices in health care plans, further increasing costs to society. However, in a culture such as ours, with different levels of care available on the basis of ability to pay, it is plausible to suppose that implanted brain chips will be available only to those who can afford a substantial investment, and that this will further widen the gap between the haves and the have-nots. A major anxiety should be the social impact of implementing a technology that widens the divisions not only between individuals and genders, but also between rich and poor nations. As enhancements become more widespread, enhancement becomes the norm, and there is increasing social pressure to avail oneself of the “benefit.” Thus, even those who initially shrink from the surgery may find it becomes a necessity, and the consent part of “informed consent” would become subject to manipulation.

Beyond these more imminent prospects is the possibility that in thirty years, “it will be possible to capture data presenting all of a human being’s sensory experiences on a single tiny chip implanted in the brain.” This data would be collected by biological probes receiving electrical impulses, and would enable a user to recreate experiences, or even to transplant memory chips from one brain to another. In this eventuality, psychological continuity of personal identity would be disrupted, with indisputable ramifications. Would the resulting person have the identities of other persons?

The most frightening implication of this technology is the grave possibility that it would facilitate totalitarian control of humans. In a prescient projection of experimental protocols, George Annas writes of the “project to implant removable monitoring devices at the base of the brain of neonates in three major teaching hospitals…. The devices would not only permit us to locate all the implantees at any time, but could be programmed in the future to monitor the sound around them and to play subliminal messages directly to their brains.” Using such technology, governments could control and monitor citizens. In a free society this possibility may seem remote, although it is not implausible to project usage for children as an early step. Moreover, in the military environment the advantages of augmenting capacities to create soldiers with faster reflexes, or greater accuracy, would exert strong pressures for requiring enhancement. When implanted computing and communication devices with interfaces to weapons, information, and communication systems become possible, the military of the democratic societies might require usage to maintain a competitive advantage. Mandated implants for criminals are a foreseeable possibility even in democratic societies. Policy decisions will arise about this usage, and also about permitting usage, if and when it becomes possible, to affect specific behaviors. A paramount worry involves who will control the technology and what will be programmed; this issue overlaps with uneasiness about privacy issues, and the need for control and security of communication links. Not all the countries of the world prioritize autonomy, and the potential for sinister invasions of liberty and privacy is alarming.

In view of the potentially devastating implications of the implantable brain chip should its development and implementation be prohibited? This is, of course, the question that open dialogue needs to address, and it raises the disputed topic of whether technological development can be resisted, or whether the empirical slippery slope will necessarily result in usage, in which case regulation might still be feasible. Issues raised by the prospect of implantable brain chips are hard ones, because the possibilities for both good and evil are so great. The issues are too significant to leave to happenstance, computer scientists, or the commercial market. It is vital that world societies assess this technology and reach some conclusions about what course they wish to take.


Nanotechnology coming to a brain near you

(Nanowerk Spotlight) If you have seen the movie The Matrix then you are familiar with ‘jacking in’ – a brain-machine neural interface that connects a human brain to a computer network. For the time being, this is still a sci-fi scenario, but don’t think that researchers aren’t working hard on it. What is already reality today is something called neuroprosthetics, an area of neuroscience that uses artificial microdevices to replace the function of impaired nervous systems or sensory organs. Various biomedical devices implanted in the central nervous system, so-called neural interfaces, have already been developed to control motor disorders or to translate willful brain processes into specific actions by the control of external devices. These implants could help increase the independence of people with disabilities by allowing them to control various devices with their thoughts (not surprisingly, the other candidate for early adoption of this technology is the military).

The potential of nanotechnology applications in neuroscience is widely accepted. Single-walled carbon nanotubes (SWCNTs) in particular have received great attention because of their unique physical and chemical features, which allow the development of devices with outstanding electrical properties. In a crucial step towards a new generation of future neuroprosthetic devices, a group of European scientists developed a SWCNT/neuron hybrid system and demonstrated that carbon nanotubes can directly stimulate brain circuit activity.
Examples of existing brain implants include brain pacemakers, to ease the symptoms of such diseases as epilepsy, Parkinson’s disease, dystonia and, recently, depression; retinal implants that consist of an array of electrodes implanted on the back of the retina, a digital camera worn on the user’s body, and a transmitter/image processor that converts the image to electrical signals sent to the brain; and most recently, Cyberkinetics’ BrainGate™ Neural Interface System, which has been used successfully by quadriplegic patients to control a computer with thoughts alone.
Thanks to the application of recent advances in nanotechnology to the nervous system, a novel generation of neuro-implantable devices is on the horizon, capable of restoring functions lost as a result of neuronal damage or altered circuit function. The field will very soon be mature enough to explore in vivo neural implants in animal models.
“We developed an integrated system coupling SWCNTs to an ex vivo reduced nervous system, where a mesh of SWCNTs deposited on glass acts as a growing substrate for rat cultured neurons,” Dr. Maurizio Prato and Dr. Laura Ballerini explain to Nanowerk. “We demonstrated that neurons form functional healthy networks in vitro over a period of several days and developed a dense array of connection fibers, unexpectedly intermingled with the SWCNT meshwork with tight contacts with the cellular membranes.”
Ballerini, an associate professor in physiology, and Prato, a professor in the Department of Pharmaceutical Science, both at the University of Trieste, Italy, are also involved in the European Neuronano project, an advanced multi-disciplinary scientific project to develop neuronal nano-engineering by integrating neuroscience with materials science and micro- and nanotechnology. The Neuronano network’s major aim is to integrate carbon nanotubes with multi-electrode array technology to develop a new generation of biochips to help repair damaged central nervous system tissues.
“For the first time, we show how electrical stimulation delivered through carbon nanotubes activates neuronal electrical signaling and network synaptic interactions,” says Dr. Michele Giugliano, a researcher at the Brain Mind Institute at the Ecole Polytechnique Federale de Lausanne in Switzerland. He is one of Ballerini’s co-authors on their recent paper “Interfacing Neurons with Carbon Nanotubes: Electrical Signal Transfer and Synaptic Stimulation in Cultured Brain Circuits”. “We developed a mathematical model of the neuron/SWCNT electrochemical interface. This model provides for the first time the basis for understanding the electrical coupling between neurons and SWCNTs.”
Over the past few years, there has been tremendous interest in exploiting nanotechnology materials and devices either in clinical or in basic neurosciences research. However, so far the interactions between carbon nanotubes and cellular physiology have been studied and characterized as an issue of biochemical mechanisms involving molecular transport, cellular adhesion, biocompatibility, etc. These new findings boost scientists’ understanding of interfacing the nervous system with conductive nanoparticles, at the very fast time scale of electrical neuronal activity, which in mammals determines behavior, cognition and learning.
“Recently, the Neuronano research group pioneered the exploration of carbon nanotubes as artificial means to interact with the collective electrical activity emerging in networks of vertebrate neurons,” says Giugliano. “Biocompatibility of carbon nanotubes has been shown in the literature, and several groups recently have attempted coupling neurons to carbon nanotubes to probe or elicit electrical impulses. However, specific considerations of the electrophysiological techniques that are crucial for understanding signal transduction and electrical coupling were underestimated.”
The researchers achieved direct SWCNT–neuron interactions by culturing rat hippocampal cells on a film of purified SWCNTs for 8–14 days, to allow for neuronal growth. This growth was accompanied by a variable degree of neurite extension on the SWCNT mat. A detailed scanning electron microscopy analysis suggested the presence of tight interactions between cell membranes and SWCNTs at the level of neuronal processes and cell surfaces.
“With regard to the technological processes involved in the SWCNT deposition on glass, the chemical process we previously developed and used in this work is the only one effectively employing no intermediate functional group to anchor the carbon nanotubes to the glass substrate, thus allowing a unique perspective on the properties and interactions of the nanotubes alone,” says Prato.
The scientists point out that their results as a whole represent a crucial step towards future neuroprosthetic devices, exploiting the surprising mechanical and (semi)conductive properties of carbon nanotubes. This field is now closer to a quantitative understanding of how precise electrical stimulation may be delivered in deep structures by ‘brain pacemakers’ in the treatment of brain diseases.
“From current and previous results of our group, it seems that carbon nanotubes could functionally interact with electrical nervous activity even in the absence of signal-conditioning integrated electronics and explicit external control,” says Ballerini. “In fact, at least to some extent, the (semi)conductive properties of the nanotubes might facilitate the emergence of synaptic activity. These achievements offer a promising strategy to further develop next-generation materials to be used in neurobiology.”
By Michael Berger, Copyright 2007 Nanowerk LLC

Could Soldiers Be Prosecuted for Thought Crime?


The Pentagon’s Defense Advanced Research Projects Agency is funding a number of technologies that tap into the brain’s ability to detect threats before the conscious mind is able to process the information. Already, there is Pentagon-sponsored work on using the brain’s pattern detection capabilities for enhanced goggles and super-fast satellite imagery analysis. What happens, however, when the Pentagon ultimately uses this enhanced capability for targeting weapons?

This question has led Stephen White to write a fascinating article exploring the implications of a soldier’s legal culpability for weapons that may someday tap into this “pre-conscious” brain activity. Like the Minority Report notion of “pre-crime,” where someone is convicted for contemplating a criminal act they haven’t yet acted upon, this article raises the intriguing question of whether a soldier could be convicted for a mistake made by a pre-conscious brain wave.

One of the justifications for employing a brain-machine interface is that the human brain can perform image calculations in parallel and can thus recognize items, such as targets, and classify them in 200 milliseconds, a rate orders of magnitude faster than computers can perform such operations. In fact, the image processing occurs faster than the subject can become conscious of what he or she sees. Studies show that patients with damage to the striate cortex possess what neuropsychologists term “blindsight,” an ability to predict accurately where objects are positioned, even when they are placed outside these patients’ field of vision. The existence of this ability suggests the operation of an unconscious visual perception system in the human brain. These blindsight patients often exhibit “levels of accuracy well beyond the performance of normal observers making judgments close to the threshold of awareness,” particularly with regard to locating ‘unseen’ objects. The speed of visual recognition varies depending on the degree of the perceived object’s multivalence; ambiguous objects take more time to process. If neural-interfaced weapons were designed to fire at the time of recognition rather than after the disambiguation process, a process that would likely need to occur for the pilot to differentiate between combatants and protected persons, the pilot firing them presumably would lack criminal accountability for the act implicit in willful killing. Because of the way brain-interfaced weapons may interrupt the biology of consciousness, reasonable doubt may exist as to whether an actor performed a conscious act in the event of a contested incident.

It’s not just legal analysts who recognize this issue. After reading my article on “Luke’s Binoculars” — DARPA’s brain-tapping binos program — one neuroscientist raised an obvious concern: “Psychopathy is linked to bypassing the inhibitory control mechanisms of the prefrontal cortex, and do we really want psychopathic soldiers?”

Ethics of ‘neuro-weaponry’ hard to wrap your brain around

Winnipeg Free Press

By: Robert Alison

Mind control will be a primary focus of neuro-weaponry, which is expected to reshape warfare, neuroscientists confirm.

Emerging technologies will give birth to highly sophisticated adversarial applications centred on brain science; conventional battlefield methodology could soon fade into history.

“We are approaching a time when brain science will be critical to our national security,” confirmed James Forsythe of Sandia National Laboratories.

According to James Giordano of Georgetown University and the Potomac Institute for Policy Studies, and colleague Rachel Warzman at Georgetown University, the battlefields of the future will be shaped by advances in neuroscience focused for military purposes.

“Major breakthroughs (in brain science) relevant to national security are both viable and imminently achievable,” Giordano suggested at a recent neuroscience conference.

The result would be an “arsenal of neuro-weapons,” concluded Jonathan Marks at Penn State University.

Such an arsenal could include “drugs, microbiological agents and toxins from nature,” explained Jonathan Moreno at the University of Pennsylvania.

In addition to the use of “brain-machine interfaces,” the hormone oxytocin could be used to make prisoners more co-operative in divulging sensitive military information. Other substances would make soldiers forget atrocities they might have committed.

According to Forsythe and Giordano, adversarial elements could include: “nanoparticles engineered to affect specific brain processes,” “super soldiers created through pharmaceuticals and/or brain stimulation” and “brain imaging for interrogation-lie detection” as well as the use of “intelligent machines.”

Other possibilities being considered by military strategists include an aerosolized shellfish neurotoxin fatal to humans in a few minutes, hallucination-causing bacteria and organisms that access and destroy human brains by crawling up the olfactory nerves.

Such technologies would have been unimaginable not so long ago, but the U.S. Defence Advanced Research Projects Agency has been focusing on the military applications of brain science, Moreno confirmed.

Some of its projects, posted on its website, include “neuroscience for intelligence analysts” and “accelerated learning.”

For the past several years, DARPA, the military research and development agency tasked with maintaining U.S. military technology superiority, “has engaged in research on direct neurological control,” confirmed Stephen White at Cornell Law School.

But such a dramatic alteration in the way warfare is waged has legal implications, analysts suggest.

According to White, there are concerns with regard to “criminal responsibility for war crimes.”

“Science and technology should never be used to do bad things,” Giordano pointed out, cautioning that history shows scientists often generate information misused for unintended military purposes.

White noted that international law has no “per se prohibition” with regard to the direction that neuro-weaponry appears to be taking.

Robert Alison is a zoologist and freelance writer based in Victoria, B.C.

Ed Boyden: The brain is like a computer, and we can fix it with nanorobots

Synthetic biology has the potential to replace or improve therapies for a wide range of brain disorders

Ed Boyden’s background is in electrical engineering and physics. Photograph: Quinn Norton

Ed Boyden heads the Synthetic Neurobiology Group at MIT Media Lab. He is working on developing technologies and tools for “analysing and engineering brain circuits” – to reveal which brain neurons are involved in different cognitive processes and using this knowledge to treat brain disorders.

What is synthetic neurobiology?

The synthetic biology part is about taking molecules from the natural world and figuring out how to make them into little machines that we can use to address complex brain problems.

Moreover, if we can synthesise the computation of the brain and write information to it, that allows us to test our understanding of the brain and fix disorders by controlling the processes within – running a piece of software on the brain as if it is a computer.

The brain as computer… we probably shouldn’t be surprised that your initial training was in electrical engineering and physics?

Training as a physicist was very helpful because you are trained to think about things both at a logical and intuitive level. Electrical engineering was great too because neurons are electrical devices and we have to think about circuits and networks. I was interested in big unknowns and the brain is one of the biggest, so building tools that allow us to regard the brain as a big electrical circuit appealed to me.

So do you have a “circuit board” of the brain?

It’s not even known how many kinds of cells there are in the brain. If you were looking for a periodic table of the brain, there is no such thing. I really like to think of the brain as a computer. Let’s take an iPhone – there are millions around the world, they all have the same map, but at this moment they are all doing different computations – from firing birds at walls to reading an email. You need more than just a map to understand a computation.

So how do you find out about the functions of the different neurons?

We have a collaboration with a team at Georgia Institute of Technology to build robots to help us analyse the brain at single-cell resolution. We hope to use these robots to harvest the contents of cells to figure out what their properties are. The tip of this robot is a millionth of a metre wide.

And what would you do with the data?

One strategy we are working on is what you might call high throughput screening (HTS) for the living brain. HTS has been used for decades to, for example, screen for genes important for a biological process. But how do you do it in the living brain? We are working on technologies like those robotics or three-dimensional interfaces which would allow you to target information to thousands of points of the brain, so you could determine which circuits are important to a given cognitive process or fixing a disorder.

Robots and interfaces – sounds invasive.

Some degree of invasiveness might not be the end of the world – 250,000 people have some kind of neural implant already, such as deep brain stimulators or cochlear implants. Some people perceive that an invasive treatment done subtly could be more desirable than something that you have to wear all the time, like a helmet.

Have your techniques been used in live experiments?

In a collaboration led by Alan Horsager from the University of Southern California, we tried to restore vision to a blind eye. There are lots of examples of blind eyes where the photoreceptors have gone: in such a case, there are no drugs you can give because there’s nothing for them to bind to. So we thought, why don’t we build an entire suite of tools that would deliver the gene for a light-activated protein into a targeted set of cells and try to restore visual behaviour. Neurons are electrical devices. Normally, photosensory cells in the retina capture light and transform it into electrical signals, which can then be processed by the retina and relayed to the brain. But what if the photosensory cells are gone? What we did was take a light-sensitive protein from a species of green algae, which converts light into electrical signals, and install it in spared cells in the retina of a blind mouse. Then, the newly photosensitive cells in the retina could capture light. Basically, the previously blind retina became a camera. We found we could take a blind mouse that couldn’t solve a maze problem and, by making its retina light sensitive, enable it to navigate a fairly complex maze and go right to the target. Does this show the mouse has conscious vision? I don’t know if we can really say that, but it does show these mice can make cognitive use of visual information.

How far are we from using these techniques on humans?

My lab is focused on inventing the tools. But of the people who are pursuing blindness treatments there are at least five groups who have stated plans or started ventures to take these technologies and move to humans.

What are the advantages of these technologies over drugs?

They can help solve problems where drugs can’t. And maybe they can help people find better drugs. There are many disorders where a specific kind of cell in the brain is atrophied or degenerates. If we can get information to that cell, then we might more accurately be able to correct a brain disorder while minimising side-effects. A drug might affect cells that are normal as well as cells that need to be fixed, causing side effects.

And these tools could also be used to aid drug discovery?

Drugs have a lot of good things about them – they are portable, non-invasive, they don’t need a specialist to administer them. Suppose we could go through the brain with an array of light sources and track down which specific molecules on specific cells are most impactful for treating a disorder. If we can find a drug that can bind to that molecule, (although only 1 in 10 molecules are bindable) maybe we could develop drugs that affect specific classes of cell in the brain and not others.

Reconstructing visual experiences from brain activity evoked by natural movies.

Shinji Nishimoto, An T. Vu, Thomas Naselaris, Yuval Benjamini, Bin Yu & Jack L. Gallant.
Current Biology, published online September 22, 2011.
Quantitative modeling of human brain activity can provide crucial insights about cortical representations and can form the basis for brain decoding devices. Recent functional magnetic resonance imaging (fMRI) studies have modeled brain activity elicited by static visual patterns and have reconstructed these patterns from brain activity. However, blood oxygen level-dependent (BOLD) signals measured via fMRI are very slow, so it has been difficult to model brain activity elicited by dynamic stimuli such as natural movies. Here we present a new motion-energy encoding model that largely overcomes this limitation. The model describes fast visual information and slow hemodynamics by separate components. We recorded BOLD signals in occipitotemporal visual cortex of human subjects who watched natural movies and fit the model separately to individual voxels. Visualization of the fit models reveals how early visual areas represent the information in movies. To demonstrate the power of our approach, we also constructed a Bayesian decoder by combining estimated encoding models with a sampled natural movie prior. The decoder provides remarkable reconstructions of the viewed movies. These results demonstrate that dynamic brain activity measured under naturalistic conditions can be decoded using current fMRI technology.

Simple example of reconstruction

The left clip is a segment of a Hollywood movie trailer that the subject viewed while in the magnet. The right clip shows the reconstruction of this segment from brain activity measured using fMRI. The procedure is as follows:
[1] Record brain activity while the subject watches several hours of movie trailers.
[2] Build dictionaries (regression model; see below) to translate between the shapes, edges and motion in the movies and measured brain activity. A separate dictionary is constructed for each of several thousand points in the brain at which brain activity was measured.
(For experts: our success here in building a movie-to-brain-activity encoding model that can predict brain activity for arbitrary novel movie inputs was one of the keys of this study.)
[3] Record brain activity to a new set of movie trailers that will be used to test the quality of the dictionaries and reconstructions.
[4] Build a random library of ~18,000,000 seconds of video downloaded at random from YouTube (with no overlap with the movies subjects saw in the magnet). Put each of these clips through the dictionaries to generate predictions of brain activity. Select the 100 clips whose predicted activity is most similar to the observed brain activity. Average those clips together. This is the reconstruction; a schematic sketch of this selection-and-averaging step follows.
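
To make steps [2]–[4] concrete, here is a toy sketch of the selection-and-averaging decoder in Python. Everything in it is a stand-in: the sizes are tiny (the real library held roughly 18 million seconds of video), the per-voxel weights are assumed to be already fit, and plain correlation replaces the paper’s posterior computation under a natural-movie prior.

# Toy sketch of steps [2]-[4]: rank library clips by how well each clip's
# predicted brain activity matches the observed activity, then average the
# top 100 clips. All shapes and the random "data" are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features, n_clips = 200, 500, 5000    # toy sizes only

W = rng.standard_normal((n_voxels, n_features)) * 0.05   # fitted voxel weights (the "dictionaries")
library = rng.standard_normal((n_clips, n_features))     # motion-energy features of each library clip
observed = library[42] @ W.T + rng.standard_normal(n_voxels)  # measured activity for one test clip

predicted = library @ W.T                  # predicted activity pattern for every library clip

# Correlate each predicted pattern with the observed pattern across voxels.
pz = (predicted - predicted.mean(1, keepdims=True)) / predicted.std(1, keepdims=True)
oz = (observed - observed.mean()) / observed.std()
similarity = pz @ oz / n_voxels

top100 = np.argsort(similarity)[-100:]     # the 100 best-matching library clips
# The reconstruction would be the frame-by-frame average of these 100 clips.
print(top100[-1])                          # 42 here: the clip used to simulate `observed` ranks first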

Reconstruction for different subjects

This video is organized as follows: the movie that each subject viewed while in the magnet is shown at upper left. Reconstructions for three subjects are shown in the three rows at bottom. All these reconstructions were obtained using only each subject’s brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. The reconstruction at far left is the Average High Posterior (AHP). The reconstruction in the second column is the Maximum a Posteriori (MAP). The other columns represent less likely reconstructions. The AHP is obtained by simply averaging over the 100 most likely movies in the reconstruction library. These reconstructions show that the process is very consistent, though the quality of the reconstructions does depend somewhat on the quality of brain activity data recorded from each subject.


Frequently Asked Questions About This Work

Could you give a simple outline of the experiment?

The goal of the experiment was to design a process for decoding dynamic natural visual experiences from human visual cortex. More specifically, we sought to use brain activity measurements to reconstruct natural movies seen by an observer. First, we used functional magnetic resonance imaging (fMRI) to measure brain activity in visual cortex as a person looked at several hours of movies. We then used these data to develop computational models that could predict the pattern of brain activity that would be elicited by any arbitrary movies (i.e., movies that were not in the initial set used to build the model). Next, we used fMRI to measure brain activity elicited by a second set of movies that were completely distinct from the first set. Finally, we used the computational models to process the elicited brain activity, in order to reconstruct the movies in the second set of movies. This is the first demonstration that dynamic natural visual experiences can be recovered from very slow brain activity recorded by fMRI.

Can you give an intuitive explanation of movie reconstruction?

As you move through the world or you watch a movie, a dynamic, ever-changing pattern of activity is evoked in the brain. The goal of movie reconstruction is to use the evoked activity to recreate the movie you observed. To do this, we create encoding models that describe how movies are transformed into brain activity, and then we use those models to decode brain activity and reconstruct the stimulus. 

Can you explain the encoding model and how it was fit to the data?

 

To understand our encoding model, it is most useful to think of the process of perception as one of filtering the visual input in order to extract useful information. The human visual cortex consists of billions of neurons. Each neuron can be viewed as a filter that takes a visual stimulus as input and produces a spiking response as output. In early visual cortex these neural filters are selective for simple features such as spatial position, motion direction and speed. Our motion-energy encoding model describes this filtering process.

Currently the best method for measuring human brain activity is fMRI. However, fMRI does not measure neural activity directly, but rather measures hemodynamic changes (i.e. changes in blood flow, blood volume and blood oxygenation) that are caused by neural activity. These hemodynamic changes take place over seconds, so they are much slower than the changes that can occur in natural movies (or in the individual neurons that filter those movies). Thus, it has previously been thought impossible to decode dynamic information from brain activity recorded by fMRI.

To overcome this fundamental limitation we use a two-stage encoding model. The first stage consists of a large collection of motion-energy filters that span a range of positions, motion directions and speeds, as the underlying neurons do. This stage models the fast responses in the early visual system. The output from the first stage of the model is fed into a second stage that describes how neural activity affects hemodynamic activity in turn. The two-stage processing allows us to model the relationship between the fine temporal information in the movies and the slow brain activity signals measured using fMRI.

Functional MRI records brain activity from small volumes of brain tissue called voxels (here each voxel was 2.0 x 2.0 x 2.5 mm). Each voxel represents the pooled activity of hundreds of thousands of neurons. Therefore, we do not model each voxel as a single motion-energy filter, but rather as a bank of thousands of such filters. In practice, fitting the encoding model to each voxel is a straightforward regression problem: first, each movie is processed by a bank of nonlinear motion-energy filters; next, a set of weights is found that optimally maps the filtered movie (now represented as a vector of about 6,000 filter outputs) into measured brain activity. (Linear summation is assumed in order to simplify fitting.)
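
As a rough illustration of that final regression step, the sketch below fits weights for a single voxel with ridge regression. It is only a sketch under stated assumptions: random features stand in for the motion-energy filter outputs, a single temporal lag crudely stands in for the hemodynamic stage, and the sizes and ridge penalty are arbitrary toy choices, not the authors’ actual settings.

# Toy sketch of the per-voxel regression: X stands in for the motion-energy
# filter outputs (stage one); one temporal lag crudely stands in for the slow
# hemodynamic response (stage two). Ridge regularization keeps the fit stable.
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_features = 3600, 1000      # toys; ~6,000 filter outputs in the paper

X = rng.standard_normal((n_timepoints, n_features))   # filter outputs over time
X_lagged = np.roll(X, 4, axis=0)           # 4-sample delay (wraps at the edges; fine for a toy)
w_true = rng.standard_normal(n_features) * 0.05
y = X_lagged @ w_true + rng.standard_normal(n_timepoints)  # one voxel's noisy response

# Closed-form ridge solution: w = (X'X + lam*I)^(-1) X'y
lam = 10.0                                 # penalty; chosen by cross-validation in practice
w = np.linalg.solve(X_lagged.T @ X_lagged + lam * np.eye(n_features),
                    X_lagged.T @ y)

predicted = X_lagged @ w                   # the fitted voxel's predicted time course
print(f"training correlation: {np.corrcoef(predicted, y)[0, 1]:.2f}")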

How accurate is the decoder?

A good decoder should produce a reconstruction that a neutral observer judges to be visually similar to the viewed movie. However, it is difficult to quantify human judgments of visual similarity. In this paper we use similarity in the motion-energy domain. That is, we quantify how much of the spatially localized motion information in the viewed movie was reconstructed. The accuracy of our reconstructions is far above chance.
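
As a toy illustration of scoring in a motion-feature domain rather than pixel space, the snippet below correlates the motion features of a viewed clip with those of a noisy reconstruction. The motion_energy function is a hypothetical stand-in (simple frame differencing), not the actual motion-energy filter bank used in the paper.

# Sketch of judging a reconstruction by its motion content rather than its
# pixels. motion_energy() is a hypothetical stand-in for the paper's filter
# bank: frame-to-frame differences serve as a crude proxy for local motion.
import numpy as np

def motion_energy(frames: np.ndarray) -> np.ndarray:
    """Crude proxy for spatially localized motion features."""
    return np.diff(frames, axis=0).ravel()

rng = np.random.default_rng(1)
viewed = rng.standard_normal((30, 16, 16))            # 30 frames of 16x16 pixels
reconstructed = viewed + 0.5 * rng.standard_normal(viewed.shape)  # imperfect copy

a, b = motion_energy(viewed), motion_energy(reconstructed)
score = np.corrcoef(a, b)[0, 1]                       # 1.0 would be a perfect reconstruction
print(f"motion-energy similarity: {score:.2f}")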

Other studies have attempted reconstruction before. How is your study different?

Previous studies showed that it is possible to reconstruct static visual patterns (Thirion et al., 2006 Neuroimage; Miyawaki et al., 2008 Neuron), static natural images (Naselaris et al., 2009 Neuron) or handwritten digits (van Gerven et al. 2010 Neural Computation). However, no previous study has produced reconstructions of dynamic natural movies. This is a critical step toward obtaining reconstructions of internal states such as imagery, dreams and so on.

Why is this finding important?

From a basic science perspective, our paper provides the first quantitative description of dynamic human brain activity during conditions simulating natural vision. This information will be important to vision scientists and other neuroscientists. Our study also represents another important step in the development of brain-reading technologies that could someday be useful to society. Previous brain-reading approaches could only decode static information. But most of our visual experience is dynamic, and these dynamics are often the most compelling aspect of visual experience. Our results will be crucial for developing brain-reading technologies that can decode dynamic experiences.

How many subjects did you run? Is there any chance that they could have cheated?

We ran three subjects for the experiments in this paper, all co-authors. There are several technical considerations that made it advantageous to use authors as subjects. It takes several hours to acquire sufficient data to build an accurate motion-energy encoding model for each subject, and naive subjects find it difficult to stay still and alert for this long. Authors are motivated to be good subjects, so their data are of high quality. These high-quality data enabled us to build detailed and accurate models for each individual subject. There is no reason to think that the use of authors as subjects weakens the validity of the study. The experiment focuses solely on the early part of the visual system, and this part of the brain is not heavily modulated by intention or prior knowledge. The movies used to develop encoding models for each subject and those used for decoding were completely separate, and there is no plausible way that a subject could have changed their own brain activity in order to improve decoding. Many fMRI studies use much larger groups of subjects, but they collect much less data on each subject. Such studies tend to average over a lot of the individual variability in the data, and the results provide a poor description of brain activity in any individual subject.

What are the limits on brain decoding?

Decoding performance depends on the quality of brain activity measurements. In this study we used functional MRI (fMRI) to measure brain activity. (Note that fMRI does not actually measure the activity of neurons. Instead, it measures blood flow consequent to neural activity. However, many studies have shown that the blood flow signals measured using fMRI are generally correlated with neural activity.) fMRI has relatively modest spatial and temporal resolution, so much of the information contained in the underlying neural activity is lost when using this technique. fMRI measurements are also quite variable from trial-to-trial. Both of these factors limit the amount of information that can be decoded from fMRI measurements. Decoding also depends critically on our understanding of how the brain represents information, because this will determine the quality of the computational model. If the encoding model is poor (i.e., if it does a poor job of prediction) then the decoder will be inaccurate. While our computational models of some cortical visual areas perform well, they do not perform well when used to decode activity in other parts of the brain. A better understanding of the processing that occurs in parts of the brain beyond visual cortex (e.g. parietal cortex, frontal cortex) will be required before it will be possible to decode other aspects of human experience.

What are the future applications of this technology?

This study was not motivated by a specific application, but was aimed at developing a computational model of brain activity evoked by dynamic natural movies. That said, there are many potential applications of devices that can decode brain activity. In addition to their value as a basic research tool, brain-reading devices could be used to aid in diagnosis of diseases (e.g., stroke, dementia); to assess the effects of therapeutic interventions (drug therapy, stem cell therapy); or as the computational heart of a neural prosthesis. They could also be used to build a brain-machine interface.

Could this be used to build a brain-machine interface (BMI)?

Decoding visual content is conceptually related to the work on neural-motor prostheses being undertaken in many laboratories. The main goal in the prosthetics work is to build a decoder that can be used to drive a prosthetic arm or other device from brain activity. Of course there are some significant differences between sensory and motor systems that impact the way that a BMI system would be implemented in the two systems. But ultimately, the statistical frameworks used for decoding in the sensory and motor domains are very similar. This suggests that a visual BMI might be feasible.

At some later date when the technology is developed further, will it be possible to decode dreams, memory, and visual imagery?

Neuroscientists generally assume that all mental processes have a concrete neurobiological basis. Under this assumption, as long as we have good measurements of brain activity and good computational models of the brain, it should be possible in principle to decode the visual content of mental processes like dreams, memory, and imagery. The computational encoding models in our study provide a functional account of brain activity evoked by natural movies. It is currently unknown whether processes like dreaming and imagination are realized in the brain in a way that is functionally similar to perception. If they are, then it should be possible to use the techniques developed in this paper to decode brain activity during dreaming or imagination.

At some later date when the technology is developed further, will it be possible to use this technology in detective work, court cases, trials, etc?

The potential use of this technology in the legal system is questionable. Many psychology studies have now demonstrated that eyewitness testimony is notoriously unreliable. Witnesses often have poor memory, but are usually unaware of this. Memory tends to be biased by intervening events, inadvertent coaching, and rehearsal (prior recall). Eyewitnesses often confabulate stories to make logical sense of events that they cannot recall well. These errors are thought to stem from several factors: poor initial storage of information in memory; changes to stored memories over time; and faulty recall. Any brain-reading device that aims to decode stored memories will inevitably be limited not only by the technology itself, but also by the quality of the stored information. After all, an accurate read-out of a faulty memory only provides misleading information. Therefore, any future application of this technology in the legal system will have to be approached with extreme caution.

Will we be able to use this technology to insert images (or movies) directly into the brain?

Not in the foreseeable future. There is no known technology that could remotely send signals to the brain in a way that would be organized enough to elicit a meaningful visual image or thought.

Does this work fit into a larger program of research?

One of the central goals of our research program is to build computational models of the visual system that accurately predict brain activity measured during natural vision. Predictive models are the gold standard of computational neuroscience and are critical for the long-term advancement of brain science and medicine. To build a computational model of some part of the visual system, we treat it as a “black box” that takes visual stimuli as input and generates brain activity as output. A model of the black box can be estimated using statistical tools drawn from classical and Bayesian statistics, and from machine learning. Note that this reverse-engineering approach is agnostic about the specific way that brain activity is measured.
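As a hedged illustration of this black-box estimation, the sketch below fits a linear map from stimulus features to voxel responses using ridge regression, one standard tool of the kind mentioned above. The study's actual models are more elaborate (motion-energy features rather than the random features used here), and every name and size in the sketch is an assumption.

```python
# Sketch of "black box" estimation (illustrative sizes and data): learn a
# linear stimulus-to-response map with ridge regression, then check how well
# it predicts held-out responses.
import numpy as np

rng = np.random.default_rng(1)
n_train, n_test, n_features, n_voxels = 500, 100, 80, 30

X = rng.standard_normal((n_train + n_test, n_features))  # stimulus features per time point
W = rng.standard_normal((n_features, n_voxels))          # unknown "true" mapping
Y = X @ W + 2.0 * rng.standard_normal((n_train + n_test, n_voxels))  # noisy responses

Xtr, Xte, Ytr, Yte = X[:n_train], X[n_train:], Y[:n_train], Y[n_train:]

lam = 10.0  # ridge penalty; in practice chosen by cross-validation
W_hat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_features), Xtr.T @ Ytr)

# The key test: prediction accuracy on data the model has never seen
pred = Xte @ W_hat
r = [np.corrcoef(pred[:, v], Yte[:, v])[0, 1] for v in range(n_voxels)]
print("median held-out prediction correlation:", round(float(np.median(r)), 3))
```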

One good way to evaluate these encoding models is to construct a corresponding decoding model and then assess its performance on a specific task such as movie reconstruction, as in the sketch below.
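Under the same illustrative assumptions, a minimal version of that evaluation looks like this: use the fitted encoding model to predict the activity every clip in a large candidate library would evoke, rank the candidates by how well those predictions match the measured activity, and average the best matches into a crude reconstruction. The study's actual reconstruction procedure is considerably more sophisticated; this only shows the shape of the computation.

```python
# Sketch of evaluating an encoding model by decoding (illustrative throughout):
# rank candidate clips by the match between predicted and measured activity,
# then average the top matches into a rough reconstruction in feature space.
import numpy as np

rng = np.random.default_rng(2)
n_prior, n_features, n_voxels, top_k = 500, 80, 100, 5

prior = rng.standard_normal((n_prior, n_features))   # large library of candidate clips
W_hat = rng.standard_normal((n_features, n_voxels))  # stands in for a fitted encoding model

viewed = rng.standard_normal(n_features)             # features of the clip actually viewed
measured = viewed @ W_hat + 3.0 * rng.standard_normal(n_voxels)

predicted = prior @ W_hat                            # activity each candidate would evoke
scores = np.array([np.corrcoef(measured, p)[0, 1] for p in predicted])
best = np.argsort(scores)[::-1][:top_k]              # indices of the best-matching clips

reconstruction = prior[best].mean(axis=0)            # crude feature-space reconstruction
print("reconstruction vs viewed clip, feature correlation:",
      round(float(np.corrcoef(reconstruction, viewed)[0, 1]), 3))
```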

Why is it important to construct computational models of the brain?

The brain is an extremely complex organ and many convergent approaches are required to obtain a full understanding of its structure and function. One way to think about the problem is to consider three different general goals of research in systems/computational neuroscience. (1) The first goal is to understand how the brain is divided into functionally distinct modules (e.g., for vision, memory, etc.). (2) The second goal, contingent on the first, is to determine the function of each module. One classical approach for investigating the function of a brain circuit is to characterize neural responses at a quantitative computational level that is abstracted away from many of the specific anatomical and biophysical details of the system. This helps make tractable a problem that would otherwise seem overwhelmingly complex. (3) The third goal, contingent on the first two, is to understand how these specific computations are implemented in neural circuitry. A byproduct of this model-based approach is that it has many specific applications, as described above.

Can you briefly explain the function of the parts of the brain examined here?

The human visual system consists of several dozen distinct cortical visual areas and sub-cortical nuclei, arranged in a network that is both hierarchical and parallel. Visual information comes into the eye and is there transduced into nerve impulses. These are sent on to the lateral geniculate nucleus and then to primary visual cortex (area V1). Area V1 is the largest single processing module in the human brain. Its function is to represent visual information in a very general form by decomposing visual stimuli into spatially localized elements. Signals leaving V1 are distributed to other visual areas, such as V2 and V3. Although the function of these higher visual areas is not fully understood, it is believed that they extract relatively more complicated information about a scene. For example, area V2 is thought to represent moderately complex features such as angles and curvature, while high-level areas are thought to represent very complex patterns such as faces. The encoding model used in our experiment was designed to describe the function of early visual areas such as V1 and V2, but was not meant to describe higher visual areas. As one might expect, the model does a good job of decoding information in early visual areas but it does not perform as well in higher areas.
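For a concrete, heavily simplified picture of the "spatially localized elements" just described, the sketch below builds a Gabor filter: an oriented grating windowed by a Gaussian, the classical model of a V1 receptive field. The motion-energy model used in the study extends this idea into space-time; this static version, with illustrative parameter values, shows only the spatial part.

```python
# A toy V1-style filter (illustrative parameters): a Gabor is a sinusoidal
# grating at orientation theta, windowed by a Gaussian envelope.
import numpy as np

def gabor(size=32, wavelength=8.0, theta=0.0, sigma=5.0, phase=0.0):
    """Return a size x size Gabor filter at orientation theta (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half]
    xr = x * np.cos(theta) + y * np.sin(theta)          # coordinate along the grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # spatially localized window
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

# A V1-like "response": the inner product of an image patch with the filter.
rng = np.random.default_rng(3)
patch = rng.standard_normal((32, 32))
for theta in (0.0, np.pi / 4, np.pi / 2):
    print(f"orientation {theta:.2f} rad -> response {np.sum(patch * gabor(theta=theta)):+.2f}")
```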

Are there any ethical concerns with this type of research?

The current technology for decoding brain activity is relatively primitive. The computational models are immature, and in order to construct a model of someone’s visual system, that person must spend many hours in a large, stationary magnetic resonance scanner. For this reason it is unlikely that this technology could be used in practical applications any time soon. That said, both the technology for measuring brain activity and the computational models are improving continuously. It is possible that decoding brain activity could have serious ethical and privacy implications downstream in, say, a 30-year time frame. As an analogy, consider the current debates regarding the availability of genetic information. Genetic sequencing is becoming cheaper by the year, and it will soon be possible for everyone to have their own genome sequenced. This raises many issues regarding privacy and the accessibility of individual genetic information. The authors believe strongly that no one should be subjected to any form of brain-reading process involuntarily, covertly, or without complete informed consent.

Professor Calls For “Google Type” Brain Chip Implants

Professor Calls For “Google Type” Brain Chip Implants
Touts exact mirror of DARPA control project in New York Times’ “Idea Lab”

Steve Watson
Infowars.net


“However difficult the practicalities, there’s no reason in principle why a future generation of neural prostheticists couldn’t pick up where nature left off, incorporating Google-like master maps into neural implants,” writes New York University professor of psychology Gary Marcus.

A New York professor has advocated the idea of Google-type brain implant chips that would “improve human memory”, an idea which mirrors already active projects funded by the Pentagon’s Defense Advanced Research Projects Agency (DARPA).

“This in turn would allow us to search our own memories — not just those on the Web — with something like the efficiency and reliability of a computer search engine,” he postulates.

“How much would you pay to have a small memory chip implanted in your brain if that chip would double the capacity of your short-term memory? Or guarantee that you would never again forget a face or a name?”

Clearly DARPA would pay quite a lot, given that the research arm of the US military continues to fund scientific development of that exact technology.

The justification for the continued funding of such research is to develop a substitute for damaged or diseased brain regions, holding promise for victims of Alzheimer’s disease, stroke and other brain traumas.

Yet even the scientists currently at work on such projects know that the real application for the implant devices would be in the commercial and military sectors. After all, why would the Pentagon have such a keen interest in curing Alzheimer’s?

In 2003 Popular Science reported:

Medicine aside, biomedical engineer Theodore Berger sees potential commercial and military applications for the brain chip, which is partially funded by the Defense Advanced Research Projects Agency. Learning how to build sophisticated electronics and integrate them into human brains could one day lead to cyborg soldiers and robotic servants, he says.

In his Times piece, New York University professor Gary Marcus concludes:

“Would this turn us into computers? Not at all. A neural implant equipped with a master memory map wouldn’t impair our capacity to think, or to feel, to love or to laugh; it wouldn’t change the nature of what we chose to remember.”

Clearly Mr Marcus has not considered that there is a very good reason why the human brain blocks out certain memories or feelings and why it correlates information in the way that it does.

Furthermore, cataloguing a person’s memories on an external source means that an entity external to that person, be it a company, corporation or government, could conceivably gain access to those memories.

The more that entity knows about the population, the more it can, and inevitably will, use that information to control it for its own benefit and profit.

This concept may seem completely outlandish to many, yet it has been the central focus of DARPA activities for some time with projects such as LifeLog, which seeks to gain a multimedia, digital record of everywhere a person goes and everything they see, hear, read, say and touch.

Wired Magazine has reported:

On the surface, the project seems like the latest in a long line of DARPA’s “blue sky” research efforts, most of which never make it out of the lab. But DARPA is currently asking businesses and universities for research proposals to begin moving LifeLog forward.

“What national security experts and civil libertarians want to know is, why would the Defense Department want to do such a thing?” the article asks. The answer lies in the stated goal of the US military – “Total Spectrum Dominance”.

Furthermore, Mr Marcus’ assertion that the neurotechnology would not be in any way dominant over a person’s capacity to think does not tally with DARPA’s Brain Machine Interfaces enterprise, a $24 million project reported on in the August 5, 2003 Boston Globe.

The project is developing technology that “promises to directly read thoughts from a living brain – and even instill thoughts as well… It does not take much imagination to see in this the makings of a Matrix-like cyberpunk dystopia: chips that impose false memories, machines that scan for wayward thoughts, cognitively augmented government security forces that impose a ruthless order on a recalcitrant population,” the Globe reported.

Government-funded advances in neurotechnology that focus on developing the ability to essentially read people’s minds should also set alarm bells ringing.

It is also well documented that the military and the federal government have been dabbling in mind control and manipulation experimentation for decades.

Mr Marcus may be a well-meaning scientist and may very well see such technology as progressive for humanity, but when it is being developed by military commanders under governments that have killed and oppressed billions across the globe in the last century alone, the prospect becomes somewhat sullied, to say the least.

While neural implants represent the second phase of implantable chips, the technology has been in existence for over a decade, and discussion of simple ID chipping of humans is now in the news regularly.

Tommy Thompson, the former Health and Human Services Secretary in the Bush administration, promised to have a chip implanted and subsequently toured the country lauding the virtues of ID chips. During the confirmation hearings for John Roberts Jr., George W. Bush’s nominee for Supreme Court chief justice, Roberts was questioned by Senator Joseph R. Biden on whether he would rule against a mandatory implantable microchip to track American citizens.

Last year there was a congressional debate on whether airport workers could be mandated to have microchip implants.

Other workers have already been forced to take the chip.

Government workers in Mexico are being forced to take the chip or lose their jobs. Staff of Mexico’s attorney general had to take the chip in order to access secure areas.

In February, a Cincinnati surveillance equipment company became the first U.S. business to use this application when a handful of employees voluntarily got implants to allow them to enter secure rooms.

In a trailblazing act in 2006, however, Governor Jim Doyle of Wisconsin signed a law declaring it a crime to require an individual to be implanted with a microchip. The people of Wisconsin welcomed the RFID law, which imposes fines of up to $10,000 per day on violators. The bill was introduced by Rep. Marlin Schneider, D-Wisconsin.

A spotlight has recently been placed on chip implants by the London Times, which ran a piece asking whether children should be implanted in the wake of the kidnapping of British toddler Madeleine McCann.

Debate also exists on the chipping of inmates, sex offenders and other vilified groups.

Another area in which the debate over chips rages is the medical profession. Last year, leaked British policy review documents revealed that the government had considered implanting anyone deemed mentally unstable with a microchip.

We have also previously highlighted how implantable chips are being used for recreational purposes, such as paying for drinks in bars. The Baja Beach club in Barcelona has championed the technology for years. Leading sports figures now carry chips in their clothing to track their performance; implantation has already been debated as the next step.

All manner of things from commerce to transport could one day forge the way towards a microchipped society.

Last year, award-winning director Aaron Russo, appearing on the Alex Jones show, stressed that the true intention of the global elite, in particular the Rockefeller family, is a microchipped society: a society where you have no privacy, nowhere to run and nowhere to hide, whether you’re innocent, guilty, indifferent or impaired.

Consider that the first use of the technology was in tracking and tracing cattle and other animals.

A microchipped society sounds like something from a horrific science fiction movie, but as ever, fiction is being mirrored by reality, as we now see it being debated in Congress.

The Age in Australia reported that within ten years the chip will be as common as cell phones are today. If the scheme became commonplace, it is estimated that around 75% of the population would be mandated to take the chips.

By pure coincidence (ahem), IBM, the company behind Verichip, the major retailer of implantable chips, also ran the cataloging system used by the Nazis to store information on Jews in Hitler’s Germany.

To understand just how easily a microchipped society could quickly become reality, read this excellent piece from 2006 by Kevin Haggerty of the Toronto Star.