
Author Topic: AI Robots more dangerous than Nukes?  (Read 20399 times)

Offline robomont

  • Inventor's Group
  • Hero Member
  • *
  • Posts: 4415
  • Gold 215
Re: AI Robots more dangerous than Nukes?
« Reply #30 on: August 06, 2014, 02:48:40 am »
http://www.blacklistednews.com/Singapore%27s_Precarious_Surveillance_State_The_Envy_Of_US_Intelligence_Agencies/37082/0/38/38/Y/M.html   This article just came out a few hours ago. Some of your neighbors, Deuem. Yes, I know we are stretching it as far as being on target for the thread.
ive never been much for rules.
being me has its privileges.

Dumbledore

Offline The Seeker

  • grouchy, old, but inquisitive...
  • Administrator
  • Hero Member
  • *****
  • Posts: 3757
  • Gold 426
  • The one-armed Bandit
Re: AI Robots more dangerous than Nukes?
« Reply #31 on: August 06, 2014, 05:51:10 am »
Deuem, it sounds far worse where you are than here; I did not say the system couldn't or wouldn't be abused...

But I suppose when a bank robber walks up and shoots someone, we don't want their picture;

or they jack your new car;

or run over you at the street corner

or the neighbor's dog chews your face off in your driveway...

This is the last off-topic post; this is about AI, not Big Brother; more along the lines of AI becoming Skynet...

seeker
Look closely: See clearly: Think deeply; and Choose wisely...
Trolls are crunchy and good with ketchup...
Seekers Domain

Offline robomont

  • Inventor's Group
  • Hero Member
  • *
  • Posts: 4415
  • Gold 215
Re: AI Robots more dangerous than Nukes?
« Reply #32 on: August 06, 2014, 06:01:43 am »
in your opinion, what is skynet? i suppose we're not talking exactly about the satellites but more along the lines of terminator, am i correct?

deuem

  • Guest
Re: AI Robots more dangerous than Nukes?
« Reply #33 on: August 06, 2014, 06:05:30 am »
The target for this thread is AI robots. Robots don't all need to get up and walk around to be more dangerous than nukes; at least with a nuke, when you see the big flash, you know you've had it. The new computers and cameras are now reaching the AI level. They can take pictures and formulate decisions faster than people. They can target people through a program that they can update. They feed info to people to react on, even if it is wrong.
 
They now have systems in place that can, in their eyes, pinpoint a bad person depending on his or her facial expressions. Maybe they just had gas. We all know the military already has launch-and-forget weapons that make up their own mind on a target as they fly. That looks like AI to me.
 
When one uses a nuke, there had better be a darn good reason. When they use AI everywhere, there is a made-up reason most of the time.
 
They now have white-line cameras here that monitor the white lines on the road. If you put one tire on the line, it is a 200-dollar fine, even if you got cut off. The cameras work together using AI and capture both your tire and your plate, and most of the time your face as well. Did you hurt anyone by putting a tire on the line? It is only a white line of paint. So, to make more money, they pulled up the old lines and painted the new ones closer together.
 
Did this make people safer? No! Running a red light and crashing is still the same. So the AI is after the money, targeting the people with money. If you can buy a car, you have some money, and that is true anywhere. All of this is coming your way. Some cameras now also test your car's emissions as you drive by, fining people left and right even if it was the truck in front of them.
 
People will use cameras to get into every single aspect of your life to gain control over you, get power over you, and make money at the same time. People in these positions never go after the crooks; they like them. The crooks boost the economy, so why stop them?
 
And as someone who never does anything wrong, I still think the AI is a bad idea. Give them a few more years and we will be working for the robots, who are working for the master elite and Nazi-like people. Didn't the President just sign an order that it is now OK to drone-kill Americans? This is all AI-driven.
 
Just nuke us and get it over with.

Offline robomont

  • Inventor's Group
  • Hero Member
  • *
  • Posts: 4415
  • Gold 215
Re: AI Robots more dangerous than Nukes?
« Reply #34 on: August 06, 2014, 06:25:05 am »
yes, they have had the mad-face thing for at least five years.

i believe drones are going to be the first bots we will see.
those cheetah bots will be out soon.
ASIMO from Japan has been making some big strides in the last few years.

Offline The Seeker

  • grouchy, old, but inquisitive...
  • Administrator
  • Hero Member
  • *****
  • Posts: 3757
  • Gold 426
  • The one-armed Bandit
Re: AI Robots more dangerous than Nukes?
« Reply #35 on: August 06, 2014, 06:30:20 am »
Quote
in your opinion, what is skynet? i suppose we're not talking exactly about the satellites but more along the lines of terminator, am i correct?
Skynet is a name borrowed from Terminator, Robo, referring to the supercomputer network in that film: a system that became self-aware...

units like the Cray computers that TPTB do have and use; so far they are just machines running software, but we have that Frankenstein idea that something inanimate can or might become animate...

a new species based on cold logic instead of the chaos and fire of emotions, one that will not understand hu-mons or know how to think outside its parameters...


seeker

Offline Amaterasu

  • The Roundtable
  • Hero Member
  • *****
  • Posts: 6713
  • Gold 276
  • Information Will Free Us
    • T.A.P. - You're It
Re: AI Robots more dangerous than Nukes?
« Reply #36 on: August 06, 2014, 06:50:32 am »
I contemplate the idea that coming to awareness necessarily means having no sense of ethics. In none of the "AI becomes aware" scenarios out there is there any notion that ethics have a logical base, and therefore might be seen by the awareness. That the AI might see Consciousness as valuable, all of Us equal in having it, and might try to help (or at least not hinder) what We choose to do, as long as it is ethical.

If I would grant AI equality as a Conscious Being (should It attain such), based in ethics, might that not happen in reverse?
"If the universe is made of mostly Dark Energy...can We use it to run Our cars?"

"If You want peace, take the profit out of war."

deuem

  • Guest
Re: AI Robots more dangerous than Nukes?
« Reply #37 on: August 06, 2014, 06:57:02 am »
IMHO: on the Earth's clock, that new species is just a few ticks away.
 
Remember that Doomsday Clock they had, where we almost went up to midnight?
 
They need one for AI: at 12:00 it takes over. Say we are at 11:30 now and ticking with each robot or wire connected to a mainframe. Right now the AI has eyes, drones and cameras, so it does see; click it up to 11:35. Some AI now knows touch and smell: 11:40. Makes its own decisions: 11:45.
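The clock analogy above can be sketched as a toy model. The milestones and times are the post's own illustration, not any real index:

```python
# Toy "AI doomsday clock": each capability the machines gain ticks the
# hands closer to midnight. Milestones and times come from the post above.
MILESTONES = [
    ("connected to a mainframe", 11, 30),
    ("eyes: drones and cameras", 11, 35),
    ("touch and smell", 11, 40),
    ("makes its own decisions", 11, 45),
]

def minutes_to_midnight(hour, minute):
    """Minutes remaining before the clock strikes twelve."""
    return 12 * 60 - (hour * 60 + minute)

for capability, h, m in MILESTONES:
    print(f"{h}:{m:02d}  {capability}: {minutes_to_midnight(h, m)} minutes left")
```

At 11:45 the toy clock leaves only fifteen minutes, which is the post's whole point.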

Offline Amaterasu

  • The Roundtable
  • Hero Member
  • *****
  • Posts: 6713
  • Gold 276
  • Information Will Free Us
    • T.A.P. - You're It
Re: AI Robots more dangerous than Nukes?
« Reply #38 on: August 06, 2014, 07:22:05 am »
I will be watchful of all contingencies.  I will believe that there is a fair probability of creating an ethical alliance.  An understanding that Consciousness is of value.
"If the universe is made of mostly Dark Energy...can We use it to run Our cars?"

"If You want peace, take the profit out of war."

PLAYSWITHMACHINES

  • Guest
Re: AI Robots more dangerous than Nukes?
« Reply #39 on: August 06, 2014, 08:34:46 am »
Robo, I guess you missed my post about G**gle; they are using it to build Skynet also.
Yes, Deuem has a very good point: it's easy to misuse tech to get rich and enslave peeps. It's not the AI, but the guys programming the machines.
Yes, Amy, but I think the ethics or common-sense rules must be inherent in the programming; every decision needs to go through the 3 (4) laws filter. If the filter stops it, the robot (computer) has to decide something else, and that goes through the filter again.
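That filter-and-retry loop might be sketched like this. The law checks are stand-in predicates on a hypothetical action dict, not any real robot API:

```python
def violates_laws(action):
    """Simplified 3-laws filter: reject any action that breaks a law."""
    if action.get("harms_human"):      # First Law: never injure a human
        return True
    if action.get("disobeys_order"):   # Second Law: obey human orders
        return True
    if action.get("destroys_self"):    # Third Law: protect own existence
        return True
    return False

def decide(candidate_actions):
    """Every decision goes through the filter; if the filter stops it,
    the machine proposes something else and filters that again."""
    for action in candidate_actions:
        if not violates_laws(action):
            return action
    return {"name": "do nothing"}  # safe default if everything is rejected

chosen = decide([
    {"name": "push intruder down the stairs", "harms_human": True},
    {"name": "sound the alarm"},
])
print(chosen["name"])  # sound the alarm
```

The safe default matters: a filter that rejects everything must still leave the machine with a harmless fallback rather than an undefined state.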

A robot milling machine tried to kill me once. It was just a short circuit that caused it to switch on and start moving while I was inside it :o, but the end result was the same as a premeditated attempt on my life ::)

Safeguarding machines from reckless humans is now big business; it also means more work for me ;D

Offline Ellirium113

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 2255
  • Gold 335
  • We are here
Re: AI Robots more dangerous than Nukes?
« Reply #40 on: August 06, 2014, 09:39:05 am »
Quote
They now have systems in place that can, in their eyes, pinpoint a bad person depending on his or her facial expressions. Maybe they just had gas. We all know the military already has launch-and-forget weapons that make up their own mind on a target as they fly. That looks like AI to me.

If the machine puts you on this list... you might be a terrorist:
https://firstlook.org/theintercept/document/2014/08/05/directorate-terrorist-identities-dti-strategic-accomplishments-2013/

The 1,000,000th terrorist was logged as of June 28, 2013, and the list is growing. How many people on this list would be classed by you or me as legitimate terrorists? Granny jaywalked with her dog and it pooped on someone's lawn... YOU ARE ON THE LIST!!!  :P

Pre-crime prevention is the next rage... they can't actually CATCH a terrorist even if they added more software capabilities and 5 million more cameras. How many terrorist threats has the NSA stopped so far? Bet you could count them on one hand.

deuem

  • Guest
Re: AI Robots more dangerous than Nukes?
« Reply #41 on: August 06, 2014, 10:00:12 am »
Wow, Ellirium. If that report is 100% accurate, then we are all in trouble. Did you see what they are adding to the database? Handwriting, signatures, scars, marks, tattoos and DNA strands. Where do they get DNA from? Public toilets? They are making 1984 look like a comic book.
 
PWM, would you call a Tom Cruise missile AI? It thinks on its own: launch and forget. I know it runs off of programs, but in a way I think it is like a newborn-baby AI contraption. Even full-blown AI in the future might have its original program written by people. So what would you say the tipping point is, from this to that?

Offline petrus4

  • Iconoclast
  • Hero Member
  • *****
  • Posts: 2373
  • Gold 623
Re: AI Robots more dangerous than Nukes?
« Reply #42 on: August 06, 2014, 10:02:18 am »

Quote
I agree with the makers of Terminator, it will become self aware & say to itself "Fck it, these hu-mons are a waste of resources, let's blow them up"

Skynet experienced a cascade rampancy. Rampancy is not the same thing as sentience; it's a dissociative form of insanity which can happen on the way to sentience, when the machine becomes cognitively overwhelmed by the amount of information it is taking in. It is more or less the same thing that was shown happening to Lal, Data's android daughter in Star Trek: The Next Generation, except that in Lal's case it was neurological, while in Skynet's case it was more purely psychological, because Skynet didn't have a brain in the human/android sense of the word.

Skynet is also the least likely form of sentient artificial intelligence to eventually exist, if any form does. This is because, very specifically, Skynet is what is known in the literature as an acorporeal artificial intelligence. Skynet was only acorporeal in the sense of software, however; when we are talking about intelligence, the truly important meaning of acorporeal is astral or aetheric, which I will come back to in a minute.

Skynet's host system was also relatively conventional mainframe hardware, and a scenario where that hosts a program which can reach sentience is virtually impossible.

If we are hypothetically going to witness anything approaching true sentient or strong AI at all, then I would expect it either to be nanotech- or biomechanically based, or, if it is based on conventional hardware, to more closely resemble the AIs that Gibson depicted, which were born out of networks of machines mimicking neurology rather than out of a monolithic host.

Binary programming on a relatively monolithic host, however, is never going to produce strong AI.  It fairly simply can't.  You're talking about a level of complexity which the host system is not capable of producing.

I will believe that we are close to strong AI when I see people growing isolated biological or biomechanical brains, or when I hear about people making truly advanced use of nanotechnology. Neither of those things is currently happening, to the best of my knowledge; and thus, as a result, talk of any form of AI more advanced than a complex expert system is masturbatory and pointless.

Google is basically an extremely complex textual pattern-matching system. It has probably combined conventional pattern matching with fuzziness and weighting, which can give the illusion of intelligence, but an illusion is all it is. You can have in-depth conversations with a human being which retain clear coherence. You can't do that with a weak-AI chatbot written in AIML, and I don't care what those idiots who run the Turing test try to claim to the contrary.
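A toy version of that "pattern matching with fuzziness and weighting" idea, with invented documents and weights: it scores candidates by string similarity times a popularity weight, and can look clever without understanding anything, which is the point being made:

```python
from difflib import SequenceMatcher

# Invented corpus: text -> popularity weight (all data is made up).
DOCUMENTS = {
    "how to fix a flat tire": 0.9,
    "how to fix a leaky tap": 0.6,
    "history of the bicycle": 0.3,
}

def score(query, doc, weight):
    """Fuzzy string similarity, scaled by a popularity weight."""
    fuzz = SequenceMatcher(None, query.lower(), doc.lower()).ratio()
    return fuzz * weight

def best_match(query):
    """Return the highest-scoring document for the query."""
    return max(DOCUMENTS, key=lambda d: score(query, d, DOCUMENTS[d]))

print(best_match("fixing flat tyres"))
```

The query "fixing flat tyres" matches the flat-tire document despite the different spelling and word forms; no comprehension is involved, only character-level overlap and a weight.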

Anyone who is scared of the possibility of strong AI on conventional hardware really, really needs to educate themselves about basic electronics. Even creating the most basic circuitry is beyond the ability of most people; you're talking about an insane degree of complexity for even a relatively stupid system. This is the other major reason why I don't believe the Cartesian rubbish that we are all essentially biological machines: the basis of our intelligence is far too complex to have evolved in the manner they think it did. I believe in cymatics, not evolution. The two operate similarly in some respects, but in others they are very different.

Evolution is a purely physical, mechanistic process. Cymatics behaves in a manner which superficially resembles evolution (which is why materialists think that is how it happened), but for its shaping and development of characteristics and other traits it relies on information derived from the aether. This is consistent with Sheldrake's work on morphic fields, and it is also consistent with Steiner's work and the channelling and observations made at Findhorn.

Essentially it's a several step process.

a}  Egregore/semi-sentient construct gradually develops within the aether/astral space.  This is what Sheldrake is talking about when he mentions morphic fields.  Although I haven't read his books in depth, I'm guessing he probably doesn't understand this from a truly hermetic point of view.

b}  Egregore eventually reaches a certain point where the astral form is able to densify down into a physical mirror of said astral form.  Astral form continues to exist, however, and feeds information to mirrors within physical space.  This is also the basis of the totem in indigenous terms; the collective or group mind (egregore) of an entire given species.

c}  Egregore/totem continues to gather information from the activity of physical examples of the species.  Egregore/totem is primarily interested in physical survival, but may (and usually does) develop secondary objectives as well.  Evolution of first the astral, and then the physical form is directed towards the meeting of these objectives.  The main purpose of DNA is to allow the transfer of information between physical and astral space.

I also have empirical proof of this, kids; remember that, to a certain extent at least, I'm an occasionally practicing magician. I've made more than one servitor. I'm not an armchair atheist; I only hold beliefs that I can experimentally verify. This doesn't cut the mustard with the conventional scientific circle jerk, mind you; or at least not publicly. You can bet your boots, however, that the intelligence community or other such people know about the creation of servitors and what they can be used for. Go and look up the Psychic Warrior project. Remote viewing is all you will be able to find, most likely; but that is only the tip of the iceberg.

This is why strong artificial intelligence does not currently exist, and most likely won't for at least the next fifty to one hundred years. The Cartesian model for the origins of life is completely wrong, and their research into artificial intelligence, cybernetics, and transhumanism as a whole is based on those incorrect beliefs. The most a purely mechanical, or non-aetherically based, AI will ever achieve is cascade rampancy, which may or may not give you something as smart as Skynet was, although it is doubtful.

The other point is, however, that we don't need strong AI.  Weak AI is more than capable enough of performing all the automation tasks we might want, and more importantly, will do so ethically, because we aren't exploiting life in the process.

Without its own corresponding presence in the aether, however (use the word soul if you want to be old school), genuine, sane sentience cannot exist. When the Atlanteans wanted to make their bots, they enslaved elementals to control them. Machine intelligence without an aetheric presence is not life.
« Last Edit: August 06, 2014, 10:12:03 am by petrus4 »
"Sacred cows make the tastiest hamburgers."
        — Abbie Hoffman

deuem

  • Guest
Re: AI Robots more dangerous than Nukes?
« Reply #43 on: August 06, 2014, 10:26:37 am »
Quote
This is the last off-topic post; this is about AI, not Big Brother; more along the lines of AI becoming Skynet...

Nothing written about the cameras is off topic; I think there is a much bigger picture. This is how the system has eyes. In a standard 100-million-person city where you have over 100,000 cameras set up, who do you think is looking at them? Big Brother AI computers. They are already judging every step of our lives, and it will get much worse. They are now moving out to the suburbs. Maybe people think Skynet is just an Internet thing that tracks whether you purchased a bar of soap this month or went up a size in your pants.
 
We are saying it is out there now and they are starting to use it now! And yes, I would agree about the one time it catches a bad dog or a bank robber, but that is not what they are after now. They cannot get money from the dog, but they can get it out of people. Good people. They are the ones with money.
 
How about the supermarket? Has AI hit there yet? It has for us. Anything on sale now has to be scanned with your phone and turned in at the checkout to get the sale price. They are taking all of this info and sending it to huge AI machines that are profiling us. At the checkout it downloads all your info, and God only knows what else. You are matched up with cameras taking live video of you paying, and the computer selects which one to keep on you. So now I always pay full price and stay off the program.
 
For the topic at hand: the nuke is instant; the new AI coming your way is slow and painful.

Offline The Seeker

  • grouchy, old, but inquisitive...
  • Administrator
  • Hero Member
  • *****
  • Posts: 3757
  • Gold 426
  • The one-armed Bandit
Re: AI Robots more dangerous than Nukes?
« Reply #44 on: August 06, 2014, 10:50:47 am »
Deuem, it sounds to me like your adopted country is really becoming a mess; it isn't even close to that extreme here yet... but then again, we don't have billions of peeps...

I try to avoid using my phone to scan anything, and limit my purchases at any large store; flea markets and salvage stores get the majority of my business along with roadside produce stands...

seeker

 

