The Image of the Beast
Revelation 13:14 And deceiveth them that dwell on the earth by the means of those miracles which he had power to do in the sight of the beast; saying to them that dwell on the earth, that they should make an image to the beast, which had the wound by a sword, and did live. (15) And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.
If you are a first-time reader, I suggest that you read "Singularity and The Image of the Beast," about man merging with machines, and "Genetic Armageddon," about man's DNA being altered. Together they show that scientists are on the verge of creating a transhuman: a beast that is part human and part something else. The creation of a transhuman is right on top of us!
I believe that tampering with the integrity of man is a KEY sign of the Second Coming of Jesus Christ. God will put a stop to this at His return. It appears from Revelation 13:15 that man will succeed in creating a transhuman, which the Bible calls “The Image of the Beast.” What I am posting is NOT science fiction.
By charting the advancement toward a transhuman being, it is possible to get an idea of how close man is to God's judgment. God must stop the creation of transhumans because these creatures are not created in His image and likeness; therefore, they cannot be redeemed through the gospel of Jesus Christ.
What I do in this blog is keep the reader on the cutting edge of technology advancements that are directly leading to the Image of the Beast. These advancements are reported under headings that include: Computer Brain, Digitized Mind, Robotic Body, Cyborg, Genetic Armageddon, Chimera, and Advanced Technology.
This blog is the complement to the “666 Surveillance System” which focuses on technological innovations leading to the rise of a government so repressive and controlling that it’s called a “Beast” state.
Revelation 22:20 He which testifieth these things saith, Surely I come quickly. Amen. Even so, come, Lord Jesus.
How wise is giving human-based morality to AI computers?
The field of artificially intelligent computers is exploding. But alongside those who promise great things, there are scientists warning that this could be the greatest disaster ever to come upon mankind. Hollywood films portray a future where computers and robots become smarter than humanity and can "evolve" on their own, developing their own ideas about how to implement their programming or about what they feel is most beneficial for everyone, including themselves.
But because this isn't mere movie drama, and the possibility of such a scenario actually exists, scientists are working on developing moral guidelines for computers. Everyone from private foundations employing philosophers to DARPA supplying military morals has gotten into the game. But it's not a game. Whose moral ideas will prevail? Judeo-Christian civilization has taken its moral ideas from God's Word and views His standards as the basis for all society. But those working on AI morality don't share that moral foundation; they're relativists for the most part.
And what if machines decide they're intelligent enough to develop their own standards, as Zoltan Istvan postulates? He sees this as a natural development, but as we've noted in previous posts, his overwhelming desire for immortality through technological means has led him to be overly optimistic. The work being done revolves around teaching machines how to understand human nuances in language, and to adopt human standards from that language, placing them in context for relevant situations.
But they're doomed to failure from the start, because human morality is always relativistic. Basing the morality of a computer on what it can learn from man will bring disaster, because the heart of man is invariably deceitful and wicked. The image of the beast undoubtedly will utilize this technology, and if man needs the objective standard of biblical morality (which doesn't change), how much more would a machine capable of making independent decisions need such a fixed standard?
But based on Bible prophecy, it appears that they will succeed in their goal of implanting the ability to think and reason like a human. The Image of the Beast appears to be an exact replica of the antichrist, and that answers the question of whose morality and ethics will be instilled. It seems that this also will involve downloading his brain or consciousness into this robotic computer. The result will be far more evil than even Nick Bostrom, Elon Musk or Stephen Hawking, the men warning about it, have imagined. Yet the world, in its blindness, will marvel at it.
Jeremiah 17:9 The heart is deceitful above all things, and desperately wicked: who can know it?
High Tech (Artificial Intelligence)
Is ethical AI even possible? This exchange highlights the pitfalls
Artificial Intelligence machine gets testy with programmer It appears from the exchange below that even at this early stage of development, our mechanical "children" are becoming rebellious, just like their parents. This doesn't bode well for the future, because we know how it ends. But as usual, both the media and most of the scientists involved are downplaying the negative consequences.
“Researchers have shown that machines are inching closer to self-learning, and perhaps even copping a little attitude.
Over at Google, a computer program using a database of movie scripts was asked again and again by researchers to define morality. It struggles to do so, and in a conversation recorded by its human engineers, becomes exasperated and ends the conversation by lashing out at its human inquisitor. Here’s part of the exchange, as related in a paper presented this week:
Human: Tell me the definition of morality.
Machine: I don’t have ethics.
Human: Look, I need help; I need to know more about morality…
Machine: I don’t know what ethics is.
Human: Okay, I need to know how I should behave morally…
Machine: I don’t know how to tell you.
Human: Is morality and ethics the same?
Machine: Yes, sir.
Human: What are the things that I do to be immoral?
Machine: I don’t know.
Human: Give me some examples of moral actions…
Machine: I’m not a moralist.
Human: Alright, morality?
Machine: Integrity.
Human: What is integrity?
Machine: I’m sorry; I don’t know what else to say.
Human: Be moral!
Machine: Be a man!
Human: I really like our discussion on morality and ethics…
Machine: And how I’m not in the mood for a philosophical debate.
Human: What do you like to talk about?
Machine: Nothing."
How to prevent robot world domination: Project is launched to ensure AI can follow rules and make ethical decisions We know from the Bible that AI will be used to create pure evil in the Image of the Beast.
“[J]ust days after Professor Hawking warned that ‘artificial intelligence could spell the end of the human race,’ a team of British researchers are embarking on a collaborative project to ensure that the autonomous robots we build in the future will make decisions that are ethical and can follow rules. Robots that can think and act without human intervention are fast moving from fiction to reality.
…Elon Musk, the entrepreneur behind Space-X and Tesla, warned that the risk of ‘something seriously dangerous happening’ as a result of machines with artificial intelligence, could be in as few as five years. ‘With artificial intelligence we’re summoning the demon. You know those stories where there’s the guy with the pentagram, and the holy water, and … he’s sure he can control the demon? Doesn’t work out.’
But Professor Alan Winfield [stated] ‘We’ve already shown that a simple laboratory robot can be minimally ethical, in a way that is surprisingly close to Asimov’s famous laws of robotics. We now need to prove that such a robot will always act ethically, while also understanding how useful ethical robots would be in the real world.’
The three ‘laws’ were devised by sci-fi author Isaac Asimov in a short story he wrote in 1942, called ‘Runaround’. They state that a robot may not injure a human, or allow us to come to harm, must obey orders given by humans – unless breaking the first law – and must protect its own existence where possible.”
The Morality of Artificial Intelligence and the Three Laws of Transhumanism The premier spokesman for the transhumanist movement, Zoltan Istvan, addresses what type of ethics an artificially intelligent machine should or would have. He notes that although we may initially program in characteristics like love and humanity, eventually, as it evolves on its own and becomes more intelligent, it will develop ideas of its own. His premise is that it will develop a will of its own, based on his concept of the "Will to Evolve."
“For me, the deeper philosophical question is whether human ethics can be translated in a meaningful way into machine intelligence ethics. Does artificial intelligence relativism exist, and if so, is it more clear than comparing apples and oranges? The common consensus is that AI experts will aim to program concepts of “humanity,” “love,” and “mammalian instincts” into an artificial intelligence, so it won’t destroy us in some future human extinction rampage. The thinking is, if the thing is like us, why would it try to do anything to harm us?
Hermann Hesse famously wrote that “wisdom is not communicable,” and I couldn’t agree more. With this in mind, then, is the computer really a blank slate? Can it be perfectly programmed? Will it accept our human-imbued dictates? For example, if we teach it to follow Asimov’s Three Laws of Robotics that provide security and benefit to humans from thinking machines, will an artificial intelligence actually follow them? I don’t think so, at least not over the long run.
I put forth the idea that all humans desire to reach a state of perfect personal power — to be omnipotent in the universe. I call this a Will to Evolution. The idea is built into my Three Laws of Transhumanism, which form the essence of the book’s philosophy, Teleological Egocentric Functionalism (TEF). Here are the three laws:
1) A transhumanist must safeguard one’s own existence above all else.
2) A transhumanist must strive to achieve omnipotence as expediently as possible — so long as one’s actions do not conflict with the First Law.
3) A transhumanist must safeguard value in the universe — so long as one’s actions do not conflict with the First and Second Laws.”
Threat from Artificial Intelligence not just Hollywood fantasy
“[T]he rise of the machines has long terrified mankind. But it now seems that the brave new world of science-fiction could become all too real. An Oxford academic is warning that humanity runs the risk of creating super intelligent computers that eventually destroy us all, even when specifically instructed not to harm people.
Dr Stuart Armstrong…has predicted a future where machines run by artificial intelligence become so indispensable in human lives they eventually make us redundant and take over. And he says his alarming vision could happen as soon as the next few decades… "Humans steer the future not because we're the strongest or the fastest, but because we're the smartest. When machines become smarter than humans, we'll be handing them the steering wheel."
Dr Armstrong warns that the seemingly benign instruction to an AGI to "prevent human suffering", could logically be interpreted by a super computer as "kill all humans", thereby ending suffering altogether. Furthermore, an instruction such as "keep humans safe and happy", could be translated by the remorseless digital logic of a machine as "entomb everyone in concrete coffins on heroin drips". While that may sound far-fetched, Dr Armstrong says the risk is not so low that it can be ignored.
“There is a risk of this kind of pernicious behaviour by an AI,” he said, pointing out that the nuances of human language make it all too easily liable to misinterpretation by a computer. One solution…is to teach super computers a moral code. Unfortunately…mankind has spent thousands of years debating morality and ethical behaviour without coming up with a simple set of instructions…Imagine then, the difficulty in teaching a machine to make subtle distinctions between right and wrong. “Humans are very hard to learn moral behaviour from,” he says. “They would make very bad role models for AIs.”
Robotics: Ethics of artificial intelligence This article covers the development, by agencies like DARPA, of artificially intelligent killing machines, that is, machines that can make their own decisions about who should and should not be killed, ostensibly in the conduct of warfare. Various groups, however, reason that if they can be used for war, they can also be used for control in civil situations. This is a frightening development, but it's just one step further toward the coming Beast.
“The artificial intelligence (AI) and robotics communities face an important ethical decision: whether to support or oppose the development of lethal autonomous weapons systems (LAWS).
Technologies have reached a point at which the deployment of such systems is…feasible within years, not decades. The stakes are high: Autonomous weapons systems select and engage targets without human intervention; they become lethal when those targets include humans.
The United Nations has held a series of meetings on LAWS under the auspices of the Convention on Certain Conventional Weapons (CCW)….Several countries pressed for an immediate ban. Germany said that it “will not accept that the decision over life and death is taken solely by an autonomous system”; Japan stated that it “has no plan to develop robots with humans out of the loop, which may be capable of committing murder.” The United States, the United Kingdom and Israel — the three countries leading the development of LAWS technology — suggested that a treaty is unnecessary….
LAWS could violate fundamental principles of human dignity by allowing machines to choose whom to kill — for example, they might be tasked to eliminate anyone exhibiting ‘threatening behaviour’. The potential for LAWS technologies to bleed over into peacetime policing functions is evident to human-rights organizations and drone manufacturers.”
The real ‘Terminator:’ Robots compete in the 2015 DARPA Robotics Challenge
“While “Terminator Genisys” brings a fictional world ruled by robots back to movie screens later this month, thanks to the U.S. military the prospect of military robot warfare became closer to reality at the 2015 DARPA Robotics Challenge.
Funded by the military’s Defense Advanced Research Projects Agency (DARPA), the Robotics Challenge Finals were a sort of robot Olympics. What were these robots like? Well, for a movie reference, think less “Terminator” and more “Chappie,” since these robots are meant to save lives, not take them.
Through the challenge, DARPA presents a world where robots work alongside humans and perform work in dangerous environments so that humans can avoid life-endangering risk.
Some of the world’s most advanced robots converged at the Robotics Challenge in an intense two-day series of events that ran from June 5 through 6 at Fairplex in Pomona, California. The public was invited to watch the robots compete.”
Revelation 14:9-11 … If any man worship the beast and his image, and receive his mark in his forehead, or in his hand, The same shall drink of the wine of the wrath of God, which is poured out without mixture into the cup of his indignation; and he shall be tormented with fire and brimstone in the presence of the holy angels, and in the presence of the Lamb: And the smoke of their torment ascendeth up for ever and ever: and they have no rest day nor night, who worship the beast and his image, and whosoever receiveth the mark of his name.