SOME ARE TINY, SOME ARE VERY REALISTIC IN THEIR MIMICRY OF BIRDS AND INSECTS, BUT ALL ARE CAPABLE OF CARRYING SOME TYPE OF WEAPONRY, WHETHER A DROP OF POISONOUS SUBSTANCE OR AN EXPLOSIVE.
NOW, THEY'RE BEING TAUGHT TO THINK AND REACT ALL ON THEIR OWN.
ARE WE READY FOR AUTONOMOUS DRONES?
CAN WE TRUST SUCH THINGS NOT TO GO WRONG?
DARPA'S HUMMINGBIRD DRONE
UNDERWATER ATTACK DRONES
"BUG" DRONES
Swarm robots cooperate with a flying drone
THE NEW WEAPONS RACE
DARPA (Defense Advanced Research Projects Agency) HAS BEEN HASTILY DEVELOPING TWO NEW TYPES OF DRONES, AND BOTH WILL BE AUTONOMOUS KILLERS, CAPABLE OF HUNTING IN PACKS AND MAKING DECISIONS "ON THEIR OWN".
THE SMALLEST IS ABLE TO DELIVER A LETHAL LOAD OF EXPLOSIVES STRONG ENOUGH TO "BLOW A PERSON'S HEAD OFF", AND THESE ROBOT DRONES CAN HUNT IN PACKS QUICKLY AND EFFECTIVELY IN URBAN AND RURAL SETTINGS. AS DARPA DEVELOPS THESE, THEY WILL BE ALLOWED TO "MAKE THEIR OWN DECISIONS", AND THAT IS WHERE A LOT CAN GO TERRIBLY WRONG, VERY QUICKLY.
THIS "TREND" IN WEAPONRY HAS ALREADY BEGUN, AND ANYONE WHO DOES NOT BELIEVE WE ARE IN A NEW 'ARMS RACE' TO CREATE THE MOST DANGEROUS SUCH WEAPONS, AND TO DO SO FASTER THAN ANYONE ELSE, SIMPLY DOES NOT KNOW OUR MILITARY, NOR OUR FEDERAL GOVERNMENT.
GIVEN A 'PANDORA'S BOX', OUR MILITARY WILL OPEN IT EVERY SINGLE TIME.
FROM AN ARTICLE IN A UK NEWSPAPER:
(There are several videos in the article.)
"The U.S. Air Force is developing tiny unmanned drones that will fly in swarms, hover like bees, crawl like spiders and even sneak up on unsuspecting targets and execute them with lethal precision.
The Air Vehicles Directorate, a research arm of the Air Force, has released a computer-animated video outlining the future capabilities of Micro Air Vehicles (MAVs). The project promises to revolutionize war by down-sizing the combatants.
The project, which is based at Wright-Patterson Air Force Base in Dayton, Ohio, was revealed in the March issue of the National Geographic magazine.
The promotional video begins with a swarm of tiny drones being dropped on a city from a passing plane.
The drones will work in concert to patch together a wide, detailed view of the battlefield - singling out individual targets without losing sight of the broader scene.
The video demonstrates how MAVs could be used to sneak up behind unsuspecting targets and kill them with a single, lethal shot."
IN THIS NEW RACE FOR DOMINANCE AND ONE-UPMANSHIP, "THERE CAN BE ONLY ONE" WINNER, AND ALL OTHERS WILL BE LOSERS. NO ONE WANTS TO LOSE THIS RACE!
WE'VE BEEN HEADED IN THIS DIRECTION FOR QUITE SOME TIME NOW, THIS "ROBOT AUTONOMY", THIS "INDEPENDENT DECISION MAKING" BY DRONES AND OTHER ROBOTICS.
BUT AS STUART RUSSELL, COMPUTER SCIENTIST AND ARTIFICIAL INTELLIGENCE RESEARCHER AT BERKELEY RECENTLY WROTE IN THE JOURNAL 'NATURE':
"Technologies have reached a point at which the deployment of such systems is -- practically, if not legally -- feasible within years, not decades."
These weapons "have been described as the third revolution in warfare, after gunpowder and nuclear arms," Russell wrote.
Lethal autonomous weapons systems could find and attack their targets without human intervention.
For example, such systems could include armed drones that are sent to kill enemies in a city, or swarms of autonomous boats sent to attack ships.
Some people argue that robots may not be able to distinguish between enemy soldiers and civilians, and so may accidentally kill or injure innocent people.
Should we support or oppose the development of deadly, autonomous robots?
There are already artificial intelligence systems and robots in existence that are capable of doing one of the following: sensing their environments, moving and navigating, planning ahead, or making decisions. "They just need to be combined," Russell said.
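Russell's point that the capabilities already exist separately and "just need to be combined" is, in essence, the classic sense-plan-act loop from robotics. The sketch below is purely illustrative Python, not any real system's code; the sensor readings and action names are invented for the example:

```python
import random

class AutonomousAgent:
    """Illustrative sense-plan-act loop. Each capability Russell lists
    (sensing, planning, decision-making, acting) is its own method;
    'autonomy' is simply the result of wiring them together."""

    def sense(self):
        # Stand-in for cameras/lidar: report one randomly observed object.
        return {"object": random.choice(["obstacle", "target", "nothing"])}

    def plan(self, observation):
        # Map each observation to an action; real planners are far richer.
        policy = {"obstacle": "avoid", "target": "track", "nothing": "patrol"}
        return policy[observation["object"]]

    def act(self, action):
        # Stand-in for actuators: just report what would be executed.
        return f"executing: {action}"

    def step(self):
        # One pass through the loop: sense, then plan, then act.
        return self.act(self.plan(self.sense()))

agent = AutonomousAgent()
print(agent.step())
```

No single method here is alarming on its own; the concern in the article is precisely what happens once the loop runs with no human in it.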
Already, the Defense Advanced Research Projects Agency (DARPA), the branch of the U.S. Department of Defense charged with advancing military technologies, has two programs that could cause concern, Russell said. The agency's Fast Lightweight Autonomy (FLA) project aims to develop tiny, unmanned aerial vehicles designed to travel quickly in urban areas.
And the Collaborative Operations in Denied Environment (CODE) project involves the development of drones that could work together to find and destroy targets, "just as wolves hunt in coordinated packs," Jean-Charles Ledé, DARPA's program manager, said in a statement.
Current international humanitarian laws do not address the development of lethal robotic weapons, Russell pointed out.
But without a treaty, there's the potential for a robotic arms race to develop, Russell warned. Such a race would only stop "when you run up against the limits of physics," such as the range, speed and payload of autonomous systems.
Developing tiny robots that are capable of killing people isn't easy, but it's doable.
"With 1 gram [0.03 ounces] of high-explosive charge, you can blow a hole in someone's head with an insect-size robot," Russell said.
"Is this the world we want to create?" If so, "I don't want to live in that world," he said.
Other experts agree that humanity needs to be careful in developing autonomous weapons.
"CODE intends to focus in particular on developing and demonstrating improvements in collaborative autonomy: the capability for groups of UAS to work together under a single human commander’s supervision.
The unmanned vehicles would continuously evaluate themselves and their environment and present recommendations for UAV team actions to the mission supervisor who would approve, disapprove or direct the team to collect more data.
Using collaborative autonomy, CODE-enabled unmanned aircraft would find targets and engage them as appropriate under established rules of engagement, leverage nearby CODE-equipped systems with minimal supervision, and adapt to dynamic situations such as attrition of friendly forces or the emergence of unanticipated threats.
CODE’s envisioned improvements to collaborative autonomy would help transform UAS operations from requiring multiple people to operate each UAS to having one person who is able to command and control six or more unmanned vehicles simultaneously.
Commanders could mix and match different systems with specific capabilities that suit individual missions instead of depending on a single UAS that integrates all needed capabilities but whose loss would be potentially catastrophic.
This flexibility could significantly increase the mission- and cost-effectiveness of legacy assets as well as reduce the development times and costs of future systems.
“Just as wolves hunt in coordinated packs with minimal communication, multiple CODE-enabled unmanned aircraft would collaborate to find, track, identify and engage targets..."
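The workflow DARPA describes above, in which drones present recommendations and a single human supervisor approves, disapproves, or requests more data, is what is usually called a "human-on-the-loop" gate. A minimal sketch, assuming invented names and decision labels (nothing here is CODE's actual interface):

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical team recommendation surfaced to the supervisor."""
    drone_id: int
    proposed_action: str  # e.g. "track" or "engage"

def supervise(recommendations, decide):
    """Human-on-the-loop gate as described in the DARPA quote:
    every recommendation passes through one human decision function,
    which returns "approve", "reject", or "more_data"."""
    orders = []
    for rec in recommendations:
        decision = decide(rec)
        if decision == "approve":
            orders.append((rec.drone_id, rec.proposed_action))
        elif decision == "more_data":
            # Supervisor directs the drone to collect more data instead.
            orders.append((rec.drone_id, "collect_more_data"))
        # Rejected recommendations produce no order at all.
    return orders

# Usage: a cautious supervisor who approves tracking but never engagement.
recs = [Recommendation(1, "track"), Recommendation(2, "engage")]
cautious = lambda r: "approve" if r.proposed_action == "track" else "reject"
print(supervise(recs, cautious))  # [(1, 'track')]
```

Note that everything hinges on the `decide` step staying human: replace that one callable with software and the same architecture becomes fully autonomous, which is exactly the worry raised throughout this post.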
WHAT COULD GO WRONG, EH?
WHAT...COULD...GO VERY, VERY WRONG?
APPLIED TECHNOLOGY...TRUST IT OR FEAR IT?
"Japan and Korea plan to deploy humanoid robots to care for the elderly, while the United States already fields thousands of robot warriors on the modern battlefield.
Artificial limbs, organs and bionic eyes? Check.
Man and machine increasingly look alike, and at some point the difference may not exist.
But on a brighter note, humans won't worry so much about robots once they've merged with them.
ROBOTS have brought their tireless efficiency to everything from assembly-line work to humdrum gene sequencing in labs, and have appeared in growing numbers on real-life battlefields, although the latter raises a different problem if robots stage a rebellion, or even just suffer a weapons malfunction.
For now, robots complement rather than replace elements of the human workforce and armed forces due to limits on their intelligence. But they're evolving quickly, and a few have even begun tinkering with science themselves. A science-savvy robot called Adam has successfully developed and tested its first scientific hypothesis, all without human intervention.
Thousands of drones and ground robots have been deployed by many nations, and particularly the United States in Iraq and Afghanistan.
An automatic antiaircraft gun killed human soldiers on its own when it malfunctioned during a South African training exercise.
Military researchers refer to "Terminator" scenarios, and seriously discuss how armed robots are changing the rules and ways of modern war.
Great Britain has established a network of satellites for the purpose of coordinating all those drones and other military assets. It shares the same name as a certain villainous artificial intelligence that dominates the "Terminator" movies — Skynet.
TELEPRESENCE
WE, ALL OF US USING THE INTERNET, HAVE ATTAINED "TELEPRESENCE"...WE ARE 'VIRTUALLY' WHERE WE ARE NOT.
FOR EXAMPLE, WE ARE HERE IN THE TEA ROOM, BUT WE AREN'T.
WITH THE CLICK OF A MOUSE, WE CAN BE HIGH ABOVE EARTH, LOOKING DOWN WITH THE CREW OF THE INTERNATIONAL SPACE STATION...BUT WE AREN'T REALLY THERE, EITHER.
TELEPRESENCE.... WE CAN BE WHERE WE AREN'T, BUT WE AREN'T ACTUALLY WHERE WE ARE.
DON'T THINK ABOUT THAT TOO LONG...IT GETS MORE THAN A LITTLE CREEPY!
"Developing a virtual human is the greatest challenge of this century," said John Parmentola, U.S. Army director for research and laboratory management.
Marsella and other researchers working with Parmentola have even floated the idea of someday testing their AI (ARTIFICIAL INTELLIGENCE) in online video games, where thousands of human-controlled characters already run around.
That would essentially turn games such as "World of Warcraft" into a huge so-called Turing Test that would determine whether human players could tell that they were chatting with AI."
COULD WE TELL?
SHOULD WE ALLOW THIS TYPE OF TECHNOLOGY TO PROCEED?
IS THERE ANY WAY TO STOP IT NOW, IF WE WANTED TO?
I thought it was going to be some boring old post, but it really compensated for my time. I will post a link to this page on my blog. I am sure my visitors will find that very useful.
Lol elo
Hiya, John, and thanks for reading. Thanks to your comment I saw how screwy the format, etc, had gone on this post and was, I hope, able to repair that. Look, I even allowed your link to remain in the message! The Tea Room is almost devoid of comments as MOST have at least a subtle hyperlink somewhere and about 30% of those lead to some very bad places. The remainder lead to businesses..."Lol elo"...... "stet"...maybe someone needs that?
Good place to post the 100th-plus reminder that ALL comments are moderated by the Tea Room and ANY form of advertisement is not welcome here, nor are hyperlinks to questionable or harmful websites allowed in comments. This is why there are so few comments posted to the Tea Room, those hyperlinks in comments. I mark about 20 to 100 comments each day as SPAM because of those hyperlinks.
The Tea Room doesn't even allow counts or other info about this blog to be made public. We're here for facts, truth, the end. Popularity or getting paid to have ads is NEVER considered here. I really wish everyone would get the message...