From Human Consciousness to Artificial Intelligence

“Creating a synthetic brain was possible as of 2009, but scientists had yet to understand how to emulate adaptivity as well as basic human emotion.”

As humans, we fear death yet have never been able to cheat it. Life is meant to be lived in the moment, seizing every opportunity, but most people settle into mediocre routines. What would happen if we could digitize our consciousness into artificial intelligence and indefinitely extend the human race’s life expectancy, never having to live in fear again? As Mitch Moffit of AsapScience explains, to digitize a brain you must first understand how a memory is formed: “Your brain is a three-pound lump of fatty tissue that contains about 86 billion brain cells called neurons. By passing electricity or chemicals between them, neurons can send signals to each other. Most neuroscientists believe memory is stored as a network of neurons that form links with each other and all fire at the same time. Each time a memory is recalled, the same network of neurons fires together” (Moffit 2018). In theory, memories can be duplicated and uploaded to a hard drive, and scientists have already demonstrated the principle by using computers to match firing neurons.

Although we are still years away from being able to digitize our consciousness, that has not stopped scientists funded by the National Science Foundation from leading groundbreaking research into creating the first synthetic brain: “The challenges to creating a synthetic brain are staggering. Unlike computer software that simulates brain function, a synthetic brain will include hardware that emulates brain cells, their amazingly complex connectivity and a concept Parker calls “plasticity,” which allows the artificial neurons to learn through experience and adapt to changes in their environment the way real neurons do” (Parker and Zhou 2009). Parker and Zhou explain that replicating brain function would require a technological leap: a structure that allows connectivity in every direction.
Carbon nanotubes are the preferred structure because of their electrical conductivity and their ability to be chemically altered. Creating a synthetic brain was possible as of 2009, but scientists had yet to understand how to emulate adaptivity as well as basic human emotion. In 2018 the Massachusetts Institute of Technology began work on machine-learning models that furthered the research of Alice Parker and Chongwu Zhou at the University of Southern California: “In the growing field of “affective computing,” robots and computers are being developed to analyze facial expressions, interpret our emotions, and respond accordingly. Applications include, for instance, monitoring an individual’s health and well-being, gauging student interest in classrooms, helping diagnose signs of certain diseases, and developing helpful robot companions” (Matheson 2018). An algorithm grants an artificial intelligence the ability to accurately recognize and respond to human emotion; however, Rob Matheson notes that making the technology adapt to different cultures and ethnicities proved a challenge for MIT. Although MIT ran into this major problem, solving it was only a matter of time and data: “Currently available data for such affective-computing research isn’t very diverse in skin colors, so the researchers’ training data were limited. But when such data become available, the model can be trained for use on more diverse populations. The next step, Feffer says, is to train the model on “a much bigger dataset with more diverse cultures”” (Matheson 2018). The University of Southern California and the Massachusetts Institute of Technology created a foundation for MinD in a Device CEO Tsubasa Nakamura, who began funding his business on March 25th, 2019, promising to make digitizing your consciousness into artificial intelligence a reality within the next twenty years.
MinD in a Device was also responsible for using artificial intelligence to turn brain waves into audible speech: “The team says “robust performance” was possible when training the device on just 25 minutes of speech, but the decoder improved with more data. For this study, they trained the decoder on each participant’s spoken language to produce audio from their brain signals” (Whyte 2019).