Question#1
In my opinion the most interesting question is why and how humans should control superintelligence. Humans are the most intelligent living creatures on earth, and their actions affect the lives of other creatures (animals etc.) more than any other species' do. If a machine is built that is more intelligent and more dominant than humans, then its actions will affect human lives to a great extent. For example, a superintelligence may have values that do not align with the survival of human beings. If an artificial superintelligence does become goal-driven, it might develop goals incompatible with human well-being, or it may pursue compatible goals by incompatible means. Hence the destiny of human beings would depend on the wishes of the superintelligent machines. Since those machines would be far more powerful and intelligent, humans would be no match for them.

As a superintelligent entity becomes more and more intelligent, it will have more and more awareness of its own mental processes. With increased self-reflection it will become more and more autonomous and less able to be controlled. Like humans, it will have to be persuaded to believe in something (or to take a certain course of action). Moreover, this superintelligent entity will be designing even more self-aware versions of itself: increased intelligence and increased self-reflection go hand in hand. Monkeys cannot persuade humans because monkeys lack the ability to refer to the concepts that humans are able to entertain. To a superintelligent entity we will be about as persuasive as monkeys (and probably much less persuasive).

I imagine two (non-exclusive) scenarios in which autonomous, self-replicating AI entities could arise and threaten their human creators.

1. The Robotic Warfare scenario: No one wants their (human) soldiers to die on the battlefield. A population of intelligent robots designed to kill humans would solve this problem.
Unfortunately, if control over such warrior robots is ever lost, this could spell disaster for humanity.

2. The Increased Dependency scenario: Even if we wanted to, it is already impossible to eliminate computers because we are so dependent on them. Without computers our financial, transportation, communication and manufacturing services would grind to a halt. Imagine a near-future society in which robots perform most of the services now performed by humans, and in which the design and manufacture of robots are also handled by robots. Assume that, at some point, a new design results in robots that no longer obey their human masters. The humans decide to shut off power to the robotic factory, but it turns out that the hydroelectric plant that supplies it with power is run by robots made at that same factory. So now the humans decide to halt all trucks that deliver materials to the factory, but it turns out that those trucks are driven by robots, and so on.

If developed fully, AI is a double-edged sword. It could solve the complex problems in which humanity finds itself, but it could also exterminate that same humanity for one simple reason: human beings would be redundant to a superintelligent AI. So it is crucial to think in advance about how to control this AI and induce it to do what we want. But what we want may not be what is best for us, and then everything becomes complicated; thus the subject extends indefinitely.

It is radical and perhaps frightening, but failing to comprehend the magnitude of the risks we are about to confront would be a grave error, given that, when superintelligence begins to manifest itself and act, the change may be extremely quick and we may not be afforded a second chance. Once machines surpass us in intelligence and progressively become even more intelligent, we will have lost our ability to control what happens next.
Before this comes to pass, it is essential that we develop a strategy to influence what happens, so that the potential dangers are dealt with before they develop.

There is a story that scientists built an intelligent computer. The first question they asked it was, "Is there a God?" The computer replied, "There is now." Wise development would ensure that we reap the benefits and minimize the risks. In short, the final goal of AI development should be that we end up with a "friendly" superintelligence rather than an unfriendly or indifferent one.

Question#2
In my opinion the prospect of achieving the type of superintelligence discussed in the book, although it might be possible, is a bit far-fetched. Thinking realistically, I would say no such superintelligence will exist. I believe human advancement will co-exist with technological advancement, with human capabilities enhanced by synthetic biology and artificial intelligence. The emergence of superintelligence (machines replacing humans) is far from a foregone conclusion, especially within the predicted time span of a generation or two.

The basic assumption is that anything an ordinary intelligence can do, an improved intelligence is also capable of. In particular, if an ordinary intelligence is capable of inventing an intelligence superior to itself, the same must be true of that superior intelligence. In this way we get an infinitely reiterative process and geometric, or as we prefer to say nowadays, exponential growth. Now this ability to exceed oneself is a highly abstract one, reminiscent of the kind of reasoning that leads to the paradox of omnipotence: can God make a stone so heavy that he cannot lift it? A small animal such as a rat can carry a bigger animal on its back, but this cannot be assumed to hold recursively; an elephant put on top of an elephant will break the back of the latter.
A thin paper can easily be folded, but the process soon comes to a stop, long before the thickness of the paper exceeds its length and breadth. Examples can be multiplied, but on the other hand, since the notion of intelligence is such a fluid one, any attempt to foil its growth can easily be circumvented. The problem then is how to tame this power so that it does not lead to the extinction of mankind. How do we make this power benevolent? This is exactly the task of creating a deity: God, insofar as the notion makes sense, looks out for the interests of mankind far more effectively than mankind is able to do on its own.

As of today, no one knows with certainty to what extent (if any) superintelligence will eventually be able to do everything that the human intellect can do, and do it better and faster. Humans design systems, and are beginning to design systems that can themselves design systems. I have a few articles of faith that I will now share. First, I believe that instruments of artificial intelligence (AI) will never replace human beings but, over time, will become increasingly valuable collaborators insofar as the whats and hows are concerned. Second, I believe that human beings will always be much better qualified to rank priorities and determine the whys. Finally, and of greatest importance to me, I believe that only human beings possess a soul that can be nourished by "a compassionate and jubilant use of humanity's cosmic endowment."

In conclusion, I believe that no real superintelligence, as depicted in the book, will exist in the future. There will be superintelligence, but its magnitude will be far less than what is described in the book. If machines become more intelligent, then humans' use of their own brains will increase as well. Smarter machines and more intelligent human brains will co-exist, and humans will be making the final calls.
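The paper-folding argument about exponential growth meeting physical limits can be checked with simple arithmetic. The following sketch is my own illustration, not part of the original essay: each fold doubles the thickness, so an ideal sheet grows as t * 2^n, yet in practice folding stops after roughly seven folds.

```python
def thickness_after_folds(t_meters: float, folds: int) -> float:
    """Thickness of a sheet after `folds` ideal folds (doubling each time)."""
    return t_meters * 2 ** folds

sheet = 1e-4  # an ordinary sheet of paper: 0.1 mm thick

# After 7 folds (about the practical limit) the stack is only ~1.3 cm...
print(thickness_after_folds(sheet, 7))   # ~0.0128 m

# ...but after 42 ideal folds it would exceed the Earth-Moon distance
# (~3.84e8 m). The mathematics permits unbounded doubling; the physical
# folding process halts long before that, which is the essay's point.
print(thickness_after_folds(sheet, 42))  # ~4.4e8 m
```

The doubling itself never slows down; it is the physical substrate that gives out, which is why the recursive "each intelligence builds a better one" assumption cannot simply be extrapolated forever.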