Superintelligence by Nick Bostrom
For me the most interesting question is why and how a superintelligence should be controlled by humans. The human brain is clearly superior to the gorilla's, which is why the future of gorillas now depends more on the actions of humans than on the gorillas themselves. In short, the more intelligent species has the capability to rule over less intelligent species or systems. If machine brains one day come to outperform human brains in general intelligence, this new superintelligence could become extremely powerful, and the destiny of our species would then depend on the actions of the machine superintelligence. In particular, a superintelligence may hold values that are incompatible with the survival of mankind. For instance, a superintelligence single-mindedly committed to maximizing the quantity of paperclips in the universe might try to convert everything on Earth, including us, into paperclips. Of course this is an extreme and not very plausible example, and we are probably as unable to understand the values of a superintelligence smarter than us as a beetle is to understand ours. But continued existence and sufficient resources are useful for a wide range of possible goals, so it seems likely that any future superintelligence will have a strong drive toward self-preservation and resource acquisition. It could regard us as an obstacle in its way and remove us without malice. Bostrom devotes half of the book to possible approaches to the control problem, including incentives, disincentives, and motivation and value-loading methods to design, build, modify, and stabilize the value and motivational systems of future machine intelligences before they achieve superintelligence.
Superintelligent systems would be very effective at the jobs they are built for. For instance, they would be far more proficient than people at interpreting data of many kinds, refining scientific theories, improving technologies, and understanding and predicting complex systems such as the global economy and the environment. However, superintelligent AI systems could also pose dangers if they are not designed and used sensibly. There are many other sources of risk from superintelligent systems as well; for instance, repressive governments could use these systems to carry out violence on a massive scale.
Fortunately, we humans have one advantage: we get to shape the future of superintelligence ourselves, and we get to make the first move. I completely agree with Bostrom's "common good principle," which states that "Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ideals" (p. 254); it does indeed seem like a worthy principle to instill in the AI research field. Research should be permitted only in directions that will not lead to the destruction of human beings. For example, one can develop AI-based robots like the "Slaughterbots," which could be misused to kill innocent people. It is therefore of great importance that the scientific community put serious effort into drawing boundary lines for AI research and development now. I would call this "bounded Artificial Intelligence (AI)."
Human beings should ensure their survival first, and only then try to maximize the capabilities and values of superintelligence. Since humans are the ones developing intelligent machines and AI-based systems, we must equip all these AI systems with only benign dispositions. While developing these intelligent machines we should follow a very simple approach, which I would call the "human-safe principle": we should make smart and superintelligent systems understand that they must not harm human beings or the things, such as food, land, and shelter, that we require for survival.
In conclusion, the survival of human beings matters more than attaining superintelligence. We must take cautious steps now in deciding how much further we should allow machines to become intelligent. If we reach superintelligence and humans are destroyed by it, what advantage is that to us? Obviously, none. So it is better to limit machine intelligence before our extinction.
Thinking realistically, I would say that no such superintelligence will exist. I think human advancement will co-exist with technological advancement, with human capabilities enhanced by synthetic biology and artificial intelligence. AI systems and intelligent machines will augment human intelligence. In other words, human intelligence will continue to rule over machine intelligence.
There are a couple of reasons for my stand. The first is that humans are not fools; they love to rule and to dominate, and they will never build a system or thing that rules over them. The other is the imperfection, or natural limitation, of human creativity. I have always defined creativity as "making the imperfect from the perfect." So, by its nature, whatever humans create will be less intelligent than humans.
I hope that we will seamlessly integrate with superintelligence. It is popularly believed that, in certain intellectual areas, humans use only 10% of their brain. I foresee that, through the development of intelligent machines, humans will be able to increase this percentage of brain usage. As Bostrom said, "Superintelligence could almost certainly devise means to helping them shuffle off their mortal coils altogether by uploading their minds to a digital substrate." Greater use of the human brain will keep us the masters, and intelligent systems and machines will follow human intelligence.
This discussion can be extended to superintelligence in the medical field. I am convinced that intelligent machines will play a vital role in healthcare, and that AI and machine learning will shape its course in an effective way. Nevertheless, that does not mean robots are going to replace the human factor altogether. According to Professor Richard Lilford of Warwick University: "I don't think computers will ever supplant the doctor's diagnosis. I think things will change… a computer may become a second opinion, or perhaps even a first opinion, but the doctor will still make the final call." We can support this argument on solid ground: when it comes to exercising judgment or being creative, robots fall short. I completely agree with Prof. Lilford that superintelligence will not exist, as humans will keep making the final calls in the future too.
One other reason for my stand is that consciousness is an integral part of human beings; it provides inputs to our brain, and we make decisions on the basis of those inputs. But I think it will be impossible to develop intelligent systems with consciousness. Ultimately, we may build an AI that is extremely clever but incapable of experiencing the world in a self-aware, subjective, conscious way. The absence of consciousness will not allow machines to become superintelligent.
In conclusion, I believe that no real superintelligence will exist in the future. If machines become more intelligent, then humans' use of their brains will increase as well. Smarter machines and more intelligent human brains will co-exist, and humans will be making the final calls. Machine intelligence should also never be allowed to extend to the point where human existence comes under threat.