From Human Consciousness to Artificial Intelligence

“By 2009, researchers believed a synthetic brain was within reach, but they had yet to work out how to emulate adaptability, let alone basic human emotion.”

As humans, we fear death, yet we can never truly cheat it. Life is meant to be lived in the moment, seizing every opportunity, but most people remain stuck in their own mediocre worlds. What would happen if we could digitize our consciousness into artificial intelligence and extend the human race’s life expectancy indefinitely, never having to live in fear again?

As Mitch Moffit of AsapScience explains, to digitize a brain you must first understand how a memory forms: “Your brain is a three-pound lump of fatty tissue that contains about 86 billion brain cells called neurons. By passing electricity or chemicals between them, neurons can send signals to each other. Most neuroscientists believe memory is stored as a network of neurons that form links with each other and all fire at the same time. Each time a memory is recalled, the same network of neurons fires together” (Moffit 2018). In theory, memories could be duplicated and uploaded to a hard drive, and scientists have already taken early steps in that direction by using computers to match patterns of firing neurons.

Although we are still years away from being able to digitize our consciousness, that has not stopped scientists funded by the National Science Foundation from leading groundbreaking research into creating the first synthetic brain: “The challenges to creating a synthetic brain are staggering. Unlike computer software that simulates brain function, a synthetic brain will include hardware that emulates brain cells, their amazingly complex connectivity and a concept Parker calls “plasticity,” which allows the artificial neurons to learn through experience and adapt to changes in their environment the way real neurons do” (Parker and Zhou 2009). Parker and Zhou explain that replicating brain function will require a leap in technology: a structure that allows connectivity in every direction. Carbon nanotubes are the preferred building block because of their electrical conductivity and their ability to be chemically altered. By 2009, researchers believed a synthetic brain was within reach, but they had yet to work out how to emulate adaptability, let alone basic human emotion.

In 2018, the Massachusetts Institute of Technology began work on machine-learning models that built on the research of Alice Parker and Chongwu Zhou at the University of Southern California: “In the growing field of “affective computing,” robots and computers are being developed to analyze facial expressions, interpret our emotions, and respond accordingly. Applications include, for instance, monitoring an individual’s health and well-being, gauging student interest in classrooms, helping diagnose signs of certain diseases, and developing helpful robot companions” (Matheson 2018). An algorithm is what gives an artificial intelligence the ability to recognize and respond to human emotion, but Rob Matheson notes that making the technology adapt to different cultures and ethnicities was a challenge for MIT. Even so, the researchers see this as a problem of data rather than a dead end: “Currently available data for such affective-computing research isn’t very diverse in skin colors, so the researchers’ training data were limited. But when such data become available, the model can be trained for use on more diverse populations. The next step, Feffer says, is to train the model on “a much bigger dataset with more diverse cultures”” (Matheson 2018).
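To make the affective-computing idea more concrete, the sketch below shows the general shape of such a system as a supervised-learning problem: features extracted from a face go in, an emotion label comes out, and “training for more diverse populations” simply means refitting the same model on broader data. This is not the MIT model or its code; it is a minimal Python illustration in which randomly generated arrays stand in for real facial-expression features.

    # Illustrative emotion-classification pipeline in the spirit of affective computing.
    # This is NOT the MIT model: the feature vectors below are random stand-ins for
    # facial-expression features (e.g., distances between facial landmarks).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    EMOTIONS = ["happy", "sad", "angry", "neutral"]

    # Hypothetical dataset: 400 faces, 20 features each, one emotion label per face.
    X = rng.normal(size=(400, 20))
    y = rng.integers(0, len(EMOTIONS), size=400)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # A simple classifier stands in for the much larger models used in practice.
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

    # "Training for more diverse populations" amounts to refitting the same
    # pipeline once a broader dataset becomes available.
    X_broader = rng.normal(size=(1000, 20))
    y_broader = rng.integers(0, len(EMOTIONS), size=1000)
    clf = LogisticRegression(max_iter=1000).fit(X_broader, y_broader)

In a real system the random arrays would be replaced by features computed from face images and the simple classifier by a far larger neural network, but the train-and-retrain loop is the same.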
The work at the University of Southern California and the Massachusetts Institute of Technology laid a foundation for MinD in a Device and its CEO, Tsubasa Nakamura. Nakamura began raising funding for the company on March 25, 2019, promising to make digitizing your consciousness into artificial intelligence a reality within the next twenty years. MinD in a Device was also responsible for work that used artificial intelligence to turn brain waves into audible speech: “The team says “robust performance” was possible when training the device on just 25 minutes of speech, but the decoder improved with more data. For this study, they trained the decoder on each participant’s spoken language to produce audio from their brain signals” (Whyte 2019).
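The decoder described in that quote is, at its core, a model trained on paired examples of brain activity and the audio a participant produced while speaking; once trained, it is asked to generate audio features from brain signals alone. The sketch below is not the published decoder; it is a bare-bones Python illustration, with random arrays standing in for neural recordings and spectrogram frames, meant only to show why more paired speech data improves the mapping.

    # Toy brain-signals-to-speech decoding sketch; NOT the decoder from the study.
    # neural_features: stand-in for band-power features recorded from the brain.
    # audio_frames: stand-in for spectrogram frames of the participant's speech.
    import numpy as np
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(1)
    n_frames, n_channels, n_freq_bins = 5000, 64, 32

    # Paired training data: each frame of brain activity is aligned with the
    # audio frame produced at the same moment (the "25 minutes of speech").
    neural_features = rng.normal(size=(n_frames, n_channels))
    hidden_mapping = rng.normal(size=(n_channels, n_freq_bins))
    audio_frames = neural_features @ hidden_mapping + 0.1 * rng.normal(size=(n_frames, n_freq_bins))

    # Fit the decoder on the paired data; more frames generally means a better fit.
    decoder = Ridge(alpha=1.0).fit(neural_features, audio_frames)

    # At test time, brain activity alone is decoded into audio features, which a
    # vocoder (not shown) would then turn into an audible waveform.
    new_neural = rng.normal(size=(10, n_channels))
    predicted_frames = decoder.predict(new_neural)
    print(predicted_frames.shape)  # (10, 32)

The real decoder is far more sophisticated, but the core learning problem is the one the quote describes: given paired examples, map recorded brain activity to the acoustics of speech.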

 

Sources:

Matheson (MIT News, 2018): http://news.mit.edu/2018/helping-computers-perceive-human-emotions-0724
Parker and Zhou (National Science Foundation, 2009): https://www.nsf.gov/discoveries/disc_summ.jsp?cntn_id=112947&org=NSF
Marr (Forbes, 2019): https://www.forbes.com/sites/bernardmarr/2019/09/30/the-7-biggest-technology-trends-in-2020-everyone-must-get-ready-for-now/#7e39c25a2261
Moffit (AsapScience, 2018): http://www.asapscience.com/blog/2018/5/22/could-you-transfer-your-consciousness-to-another-body
Nanowerk, Introduction to Nanotechnology: https://www.nanowerk.com/nanotechnology/introduction/introduction_to_nanotechnology_22.php
MinD in a Device press release (2019): http://mindinadevice.main.jp/mindinadevice.com/wp-content/uploads/2019/03/20190325PR_Eng/MinD_Press_Release_190325(final).pdf
Whyte (New Scientist, 2019): https://www.newscientist.com/article/2200683-mind-reading-device-uses-ai-to-turn-brainwaves-into-audible-speech/