34 Replies to “I, For One, Welcome Our Self Driving Overlords”

  1. so, self driving cars are like about half the a$$holes out there. Most people are piss-poor drivers, so self driving cars should blend right in!

    1. The article mentions that the self driving cars cannot do what human drivers do with ease … like “merging”. O.M.G. Whoever penned this article must NOT actually drive an automobile. Here on the gridlocked CA highways … EVERY Google RED COLORED MAP … is due to the incompetence and inability of 93% of drivers who CANNOT MERGE!!! Drivers who literally STOP while merging, merge too early, jamming themselves into the lane, timid mergers who hesitate, serial mergers who pile into the slightest opening and tailgate the driver ahead, or the dominant mergehole who insists that it is HIS turn … when it isn’t.

      People … what’s so hard about taking turns when merging? Left lane goes, right lane goes, left lane goes, right lane goes … what’s so HARD about that?

      Yet … I would still rather suffer the frustration of mergehole drivers … than an automated “self-driving” car. The computer-programmed car will still be MUCH more dangerous than the most annoying machismo-fueled, multiple-DWI, illegal alien driver with no auto insurance on his way home from the bar. I can SEE what’s going on with drivers like that … the machine? I have no idea what its internal algorithm might do next.

    1. Fully deployed: 2028

      The only major component missing is a communications protocol between vehicles.
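
      For a sense of what such a protocol might carry, here is a minimal sketch of a periodic vehicle-to-vehicle safety broadcast, loosely modeled on the SAE J2735 Basic Safety Message; every field name below is an invented placeholder, not a real standard’s schema:

      ```python
      import json
      import time
      from dataclasses import dataclass, asdict

      @dataclass
      class SafetyBroadcast:
          """Hypothetical V2V heartbeat, loosely modeled on the SAE J2735
          Basic Safety Message. All field names here are invented."""
          vehicle_id: str     # anonymized, periodically rotated identifier
          timestamp: float    # seconds since the epoch
          latitude: float     # WGS84 degrees
          longitude: float    # WGS84 degrees
          speed_mps: float    # meters per second
          heading_deg: float  # 0-360, clockwise from true north
          braking: bool       # hard-braking flag for vehicles behind

          def encode(self) -> bytes:
              # Real protocols use compact binary encodings (ASN.1 UPER);
              # JSON keeps the sketch readable.
              return json.dumps(asdict(self)).encode("utf-8")

      # Real deployments broadcast roughly ten times per second.
      packet = SafetyBroadcast("veh-1f3a", time.time(), 37.4219, -122.0841,
                               26.8, 91.5, braking=False).encode()
      ```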

  2. People who think we can have self driving cars are at the same level of understanding as the doctors in 1350 or so who thought the way to relieve a migraine was to drill a hole in the skull.

    Maybe someday. But how many lives will be sacrificed to save one life?

    I hate the idea of self driving cars.

    1. Hell, I like the idea, but the proponents are dreaming in Technicolour.
      I am beginning to sense who is pushing this attempt to control us.
      And they deserve every wrongful death lawsuit they are going to reap.

      1. This technology is being “pushed” simply because it’s potentially worth trillions of dollars a year. It’s called capitalism.

        1. Capitalism usually involves products and services that are … useful … that the marketplace demands. Who exactly has … demanded … computer-driven vehicles?

          1. Every company that employs drivers. Workers with an hour+ commute. People who like to go out and drink. Elderly or disabled people who have difficulty driving. The list goes on.

          2. 1. Replacing people with machines is fine for … some … activities, not others. Rushing to replace drivers may prove to be … unwise, and counterproductive.
            2. Move.
            3. Call an Uber. PS: Getting too drunk to drive shortens your lifespan and kills off brain cells. If you need a self-driving automobile after binge drinking, you have mucho problemas … you need a 12-step program – pronto.
            4. Call an Uber, take the bus, call the elder-shuttle. Use a grocery delivery website; the list goes on.

          3. Why call an Uber when you can call a self-driving car for 1/4 of the price? You know how capitalism works, right? Whoever corners this technology is going to make an ungodly huge amount of money. That’s all that matters.

        2. read up on capitalism and automation. don’t stop until you get to the point where you realize how far off your statement is.

  3. Oh thank God. I almost went an entire three days without reading an inconsequential story about minor difficulties with a technology in its infancy. Thank you for your service, Kate, truly a boon to society.

    1. If it is so terrible having to read the article, perhaps you should be reading elsewhere. Maybe Pravda has a website that you would enjoy reading.

    2. Infancy, hey? You gonna get into a car driven by an infant? I’m not.

      AI is not intelligent. Full stop. It is software. I design software for a living, a suite of integrated applications that allow users to collate, analyse, and interpret varied sets of related data. For what it does, it’s a very complicated piece of software, far more so than your average office product. On a continuum, you could place it between an office suite and an operating system in its overall complexity. I work with developers and testers every workday, and when I overlay my knowledge and experience of software development onto what’s going on (and intended to go on) with autonomous vehicles, I become very afraid. Look at the kids on their Macbooks in Starbucks and ask yourself if that’s who you want programming your car’s software.

      Computers are great at computation. That’s it. It’s even in their names. They can’t inherently analyse anything. They can take input and generate output, according to how they are programmed, which is done by flawed humans. If you “ask” computers a question, they can provide an answer. The devil is in the details of asking the question. As a simple example using English, if you text a question to a friend thusly: “bud, wanna do lunch”, you can parse that as a question to meet up for the midday meal today. A computer, in its language, cannot. It NEEDS to see, “Bud, would you like to meet up at 11:55AM today to eat lunch together?” Otherwise, it cannot parse the request.
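
      To make the claimed rigidity concrete, here is a minimal sketch of the kind of hand-written, exact-phrasing parser the comment describes; the grammar, names, and canonical sentence are invented for illustration:

      ```python
      import re

      # A deliberately rigid parser: it accepts exactly one canonical
      # phrasing and nothing else, illustrating the "no room for
      # variation" claim. (Pattern and names invented for illustration.)
      CANONICAL = re.compile(
          r"^(?P<name>\w+), would you like to meet up at "
          r"(?P<time>\d{1,2}:\d{2}(AM|PM)) today to eat lunch together\?$"
      )

      def parse_lunch_request(text):
          match = CANONICAL.match(text)
          if match is None:
              return None  # anything off-script is simply not understood
          return {"name": match.group("name"), "time": match.group("time")}

      print(parse_lunch_request(
          "Bud, would you like to meet up at 11:55AM today to eat lunch together?"))
      # -> {'name': 'Bud', 'time': '11:55AM'}
      print(parse_lunch_request("bud, wanna do lunch"))
      # -> None: the casual phrasing falls outside the hardcoded grammar
      ```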

      You may claim that this is a hyperbolic example, but it’s not. Every line of code has to be correctly constructed for the output to be correct, and that presumes that you’ve even properly defined what the correct output is. “Turn the corner” is a lot simpler than “Turn right at the corner of Hollywood and Vine”, which is still simpler than “Turn right at the corner of Hollywood and Vine while staying correctly in the street, not hitting any cyclists, pedestrians, or other cars, while not running a red light.” And each of the conditions in that statement has to be properly defined. And so on. And when programming a car, every. single. line. of code has to be correct.
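
      As a sketch of that expansion, here is how the single instruction might decompose into explicit, separately-programmed checks; every helper and class named below is hypothetical:

      ```python
      class ManeuverAborted(Exception):
          pass

      def turn_right_at(intersection, car, world):
          # One human instruction, expanded into explicit conditions. Every
          # helper called here is hypothetical and would itself need the
          # same exhaustive, line-by-line definition.
          if not car.lane_position_valid():              # "staying correctly in the street"
              raise ManeuverAborted("bad lane position")
          if world.cyclists_in_path(car):                # each obstacle class needs its
              raise ManeuverAborted("cyclist in path")   # own hand-written detector
          if world.pedestrians_in_path(car):
              raise ManeuverAborted("pedestrian in path")
          if world.signal_state(intersection) == "red":  # "not running a red light"
              raise ManeuverAborted("red light")
          car.steer_right()                              # and "turn" itself needs defining
      ```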

      Defining conditions. “Don’t hit a pedestrian” seems pretty simple. Define for me what a pedestrian is. Tell me how to detect one. Tell me how to detect more than one, or a group of them. Define for me the correct way not to hit it/them. What do I do if I successfully avoid the pedestrian, and what do I do if I do not? How do I tell which of these two conditions obtains? Every single little thing that human drivers do more or less automatically (after training and experience) has to be taught exactly to computers. Exactly. There is zero room for error, because they cannot learn from experience or mistakes. The world is a chaotic place, and we cannot–cannot–define for computers all the cases which they must consider.
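
      A minimal sketch of why “define a pedestrian” resists hand-written rules; the thresholds and attribute names below are invented, and the enumeration can never be complete:

      ```python
      def looks_like_pedestrian(obj):
          # Hand-enumerated rules with invented thresholds. Each rule has
          # real-world exceptions, and the list can never be complete.
          if obj.height_m < 0.5 or obj.height_m > 2.5:
              return False  # ...but toddlers? someone lying in the road?
          if obj.speed_mps > 7.0:
              return False  # ...but sprinters exist
          if not obj.upright:
              return False  # ...but a fallen person still matters
          return True       # everything unanticipated lands in one bucket
      ```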

      It’s one thing to be doing research and development into the concepts and technology. It is quite another to rush such products to a market that has very little demand for them, without sufficient testing that doesn’t involve killing random people or destroying property that doesn’t belong to the companies. One or maybe two deaths could have been attributed to bad luck. But there’s enough of a pattern of behaviour going on here that no municipality in the world should be allowing any “autonomous” vehicle on any of its roads. Believing AI is mature enough to be anywhere near public roadways is delusional hubris of the worst sort. Believing it’ll become that way within the next 100 years is…a bit less so. But I’m not prepared to say it’s impossible…just that it is within my lifetime.

      On a lighter note, I was reminded of this joke from my youth:
      http://sturges.public.iastate.edu/Humor/carcomp.html

      This page has it copyrighted 1998, but I’m pretty sure it was a fair bit older than that in its original incarnation. It seems appropriate in this context, however. Oddly, the captcha was asking me to identify cars prior to posting….

      1. Just because you don’t know how to do it doesn’t make it impossible. Believe it or not, there do exist people in the world who are smarter than you.

        1. After reading what A-Calgary typed … I would put him at the top of the tech class. True stupidity is not recognizing intelligence when you see it, read it, hear it, or when you simply blow by the perfectly logical, enumerated difficulties in getting a machine to safely turn left, something that is second nature to me. Even when 3 other douchebag-drivers, a rogue biker, a blind man with a white cane, and a kid with a ball in his fumbling hands are at the intersection, my internal computer, with 45 years of driving experience, can sort it all out in a nanosecond … and STILL have time and space to correct if that kid chases his loose ball into my path.

          1. I’m sorry, but you’re the very epitome of the Dunning–Kruger effect. Do you really think you know more on this subject than the leadership of every one of the dozens of successful tech companies pouring billions of dollars into this technology? The level of hubris is terrifying.

        2. Andrew, my friend (because I am “affable”):

          There are none so blind as those who will not see

          You … and your machine-driven auto … cannot SEE. Your vision is chock-full of blind spots.

        3. Sure there are, but you’re not one of them.

          Your ability to completely miss the point is stunning, as is your repertoire of logical fallacies. But at least I could teach you a new word, so I guess that’s something.

          1. I never claimed to be smarter than anyone. I’m pretty sure I’m an idiot, actually. But I’m smart enough to see that you’re a bit behind the times when it comes to machine learning and AI. For example:

            “The devil is in the details of asking the question. As a simple example using English, if you text a question to a friend thusly: “bud, wanna do lunch”, you can parse that as a question to meet up for the midday meal today. A computer, in its language, cannot. It NEEDS to see, “Bud, would you like to meet up at 11:55AM today to eat lunch together?” Otherwise, it cannot parse the request.”

            Google already has a system called Duplex that easily does exactly this: https://www.youtube.com/watch?v=bd1mEm2Fy08

            Your entire argument is predicated on the assumption that these systems use the traditional logic-based programming you’re familiar with, when in fact they use an entirely different technology that you (and I) are completely unqualified to comment on. You could argue that the sensor side of the system has a long way to go (low light, snow/rain), or that self-driving cars as a whole are just a bad thing for society. But to so confidently claim that the software is 100 years away is simply ignorant.

          2. I am personally obsessed with machine tools, esp. computerized robots. But if there is one thing I’ve learned from watching every episode of How It’s Made, it is that some tasks simply don’t lend themselves to machine tooling. Driving an automobile in the REAL WORLD (not on Disney’s Autopia) is a uniquely HUMAN activity, not suitable for computer simulation. Of course we can design all manner of “sensors” and can create endless strings of computer code programming “every” possible eventuality (well … not really) … but we will never create a “safe” computer (software) driven automobile.

            Anyone who doesn’t MARVEL at the ingenious ways machine tools fabricate products is just a dullard. I expect Henry Ford’s mouth would be agape at the current state of machine tools. They can be designed to do ALMOST anything. Except driving.

          3. Notwithstanding that having Google Duplex make a hair appointment for you is not even a little similar to autonomous vehicles moving about public roadways without inducing carnage, it’s clear (again) that you’ve missed the point. My example was to show the limitations of any computational language in getting any computer to follow simple instructions, not whether it could parse English words correctly. And that leaves aside the issues of this video not being a live demonstration (two cherry-picked examples) and offering no reference to failure cases or limitations. But at least they aren’t likely to kill anyone with a misunderstood phone call.

            Your example shows that you have conflated natural-language verbal command processing with what it actually takes to get to that point and beyond. Regardless of your claims to the contrary, “traditional” logic-based programming must still apply, so long as computers are built on binary logic circuits. That hasn’t changed yet and isn’t likely to without a quantum shift in the underlying basic computing technology.

            Duplex is “just” a user interface. The CPU executing the Duplex instructions is not significantly different from the one you’re using to access this website: it is not a completely different technology. It’s still a processor running code. It’s still written by humans and debugged by humans. The code still runs in an executable after being compiled from source by a compiler written by humans. The code still has instruction sets and loops and conditional statements and functions. The code still cannot do anything without user input to start it off, and it must still interact with third-party input to continue. All of it needs to be hardcoded before the user ever gets the chance to ask it a question or make a request, because it can’t recompile itself, even if it can keep track of exceptions, aberrations, assertion failures, crashes, bugs, and so on.
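
            To illustrate the “still a processor running code” point: even a trained neural network’s inference step bottoms out in ordinary, human-written loops and arithmetic, as in this tiny forward-pass sketch (the weight values are placeholders):

            ```python
            import math

            # A one-layer neural network's inference step, written out as
            # the plain loops and arithmetic it compiles down to. The
            # "learning" lives in the weight values (placeholders here);
            # the executing code is ordinary, human-written control flow.
            weights = [[0.2, -0.5, 0.1],
                       [0.7,  0.3, -0.2]]
            biases = [0.1, -0.3]

            def forward(inputs):
                outputs = []
                for row, bias in zip(weights, biases):
                    total = bias
                    for w, x in zip(row, inputs):
                        total += w * x  # multiply-accumulate
                    outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid
                return outputs

            print(forward([1.0, 0.5, -1.0]))
            ```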

            And to do as much as was shown, Google has had to be able to account for all manner of possible outputs, given a single input/command, and all sorts of potential responses or reactions or idioms or accents or speech impediments (ignoring language localisations and the crossovers thereto). But for all that, the real-world subset of that for making a hair appointment (or other phone calls) is infinitesimally small compared to what a human experiences every day, and what an autonomous vehicle would encounter on a random Sunday drive, leaving aside rush hour. It’s still just a publicity gimmick that can do some interesting things, not artificial “intelligence”.

            The primary problem of “regular people” in discussing computers is that they have an extremely over-inflated sense of what computers can and cannot do. It’s a common problem where I work, even among my co-workers. It’s far worse outside our walls, even among our clients who are advanced users of our products.

            You have no idea what I am or am not qualified to comment on. I am prepared to accept your assertions that you are completely unqualified to comment on this, and that you’re an idiot.

      2. This, sir, is one of the most articulate and correct treatments of the issue of what computahs can and CANNOT do.
        Ever since I heard the term ‘AI’ back in the 80s, I was skeptical. Skeptical because I had 10 years of working with them, and went on from senior mainframe operator to programming and teaching. I would intro the class by describing computers as a petulant brat constantly playing word games and gotcha, because of the hyper-precision and clarity necessary to ‘communicate’ with them.
        This has not gone away.

        Has any piece of software recoded itself after a learning exercise?

        1. Thanks.

          I learned it as “computers are very stupid: they do exactly what you tell them and nothing more.” Your sentiment is quite similar and mine hasn’t changed in the 35 years since I learned it.

      3. Well put. I work in resource extraction in Calgary and was talking with a couple of our programmers, one of whom likes the idea of autonomous autos. He hadn’t thought about it very much, but thought they’d be great. I asked him what happened with his current code if it hit a circumstance that wasn’t pre-programmed. He said results would vary, but the system would probably crash and need restarting.

        I then posed a hypothetical to him – you’re driving at highway speed and a squirrel runs out in front of the car. And if the car is programmed to ignore a squirrel, is it programmed to identify the mess that is made when the car ahead runs over a squirrel and splatters it in one of many ways? Does he want a car that reboots, or one that crashes? The term “crash” was an intended pun, and he turned a nice shade of gray thinking of all the ways the sensors could pick up new shapes that confound the pre-loaded set. Then we started talking about winter, and frosty sensors.
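
        The “results would vary, but the system would probably crash” answer corresponds to the classic unhandled-case pattern, sketched here with invented category names:

        ```python
        class UnhandledPerception(Exception):
            pass

        # Behavior exists only for the categories someone anticipated
        # ahead of time. (Object categories and car methods invented.)
        def react(detected_object, car):
            if detected_object == "vehicle":
                car.maintain_following_distance()
            elif detected_object == "pedestrian":
                car.brake_hard()
            elif detected_object == "squirrel":
                pass  # explicitly programmed to be ignored
            else:
                # A splattered squirrel, frost noise on a sensor, any shape
                # nobody anticipated: there is no branch for it.
                raise UnhandledPerception(detected_object)
        ```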

        He’s not a proponent of self-driving cars anymore.

  4. anybody gung-ho about self-driving needs to watch a couple episodes of ‘Mayday’ and find out how yer precious automation can cause a 100-ton plane carrying 200 passengers to auger in real hard.
