
Robots Will Create 'Permanently Unemployable Underclass'


  • Re: Robots Will Create 'Permanently Unemployable Underclass'

    Originally posted by dcarrigg View Post
    Hey, Santafe, I just realized how hostile that last post sounded when I re-read it just now.

    Sorry about that. Lordy, lordy what was I bent out of shape about that night?
    Ha! I'm not sure which post that was dc but even if I knew, you'd get a pass. You've got too many thoughtful posts here and elsewhere on iTulip for me to be concerned about tone. After all, a few of us did get riled up and drive this thread into the abyss fighting the good fight against the evil forces of iLibertarianism....



    • Re: Robots Will Create 'Permanently Unemployable Underclass'



      dinner is served . . . .



      • Re: Robots Will Create 'Permanently Unemployable Underclass'

        What to learn to stay ahead of computers:

        http://www.nytimes.com/2015/05/24/up...abt=0002&abg=1



        • Re: Robots Will Create 'Permanently Unemployable Underclass'

          “I don’t understand why some people are not concerned.”
          Bill Gates

          “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that. With artificial intelligence we are summoning the demon.”
          Elon Musk

          “The development of full artificial intelligence could spell the end of the human race.”
          Stephen Hawking




          Relax, the Terminator Is Far Away

          By JOHN MARKOFF

          Not so fast. Next month, the Defense Advanced Research Projects Agency, a Pentagon research arm, will hold the final competition in its Robotics Challenge in Pomona, Calif. With $2 million in prize money for the robot that performs best in a series of rescue-oriented tasks in under an hour, the event will offer what engineers refer to as the “ground truth” — a reality check on the state of the art in the field of mobile robotics.

          A preview of their work suggests that nobody needs to worry about a Terminator creating havoc anytime soon. Given a year and a half to improve their machines, the roboticists, who shared details about their work in interviews before the contest in June, appear to have made limited progress.

          In the previous contest in Florida in December 2013, the robots, which were protected from falling by tethers, were glacially slow in accomplishing tasks such as opening doors and entering rooms, clearing debris, climbing ladders and driving through an obstacle course. (The robots had to be placed in the vehicles by human minders.)

          Reporters who covered the event resorted to such analogies as “watching paint dry” and “watching grass grow.”

          This year, the robots will have an hour to complete a set of eight tasks that would probably take a human less than 10 minutes. And the robots are likely to fail at many. This time they will compete without belays, so some falls may be inevitable. And they will still need help climbing into the driver’s seat of a rescue vehicle.

          Twenty-five teams are expected to enter the competition. Most of their robots will be two-legged, but many will have four legs, several will have wheels, and one “transformer” is designed to roll on four legs or two. That robot, named Chimp by its designers at Carnegie Mellon University, will weigh 443 pounds.

          None of the robots will be autonomous. Human operators will guide the machines via wireless networks that will occasionally slow to just a trickle of data, to simulate intermittent communications during a crisis. This will give an edge to machines that can act semi-autonomously, for example, automatically walking on uneven terrain or grabbing and turning a door handle to open a door. But the machines will remain largely helpless without human supervisors.

          “The extraordinary thing that has happened in the last five years is that we have seemed to make extraordinary progress in machine perception,” said Gill Pratt, the Darpa program manager in charge of the Robotics Challenge.

          Pattern recognition hardware and software has made it possible for computers to make dramatic progress in computer vision and speech understanding. In contrast, Dr. Pratt said, little headway has been made in “cognition,” the higher-level humanlike processes required for robot planning and true autonomy. As a result, both in the Darpa contest and in the field of robotics more broadly, there has been a re-emphasis on the idea of human-machine partnerships.

          “It is extremely important to remember that the Darpa Robotics Challenge is about a team of humans and machines working together,” he said. “Without the person, these machines could hardly do anything at all.”

          In fact, the steep challenge in making progress toward mobile robots that can mimic human capabilities is causing robotics researchers worldwide to rethink their goals. Now, instead of trying to build completely autonomous robots, many researchers have begun to think instead of creating ensembles of humans and robots, an approach they describe as co-robots or (inevitably) “cloud robotics.”

          Ken Goldberg, a University of California, Berkeley, roboticist, has called on the computing world to drop its obsession with singularity, the much-ballyhooed time when computers are predicted to surpass their human designers. Rather, he has proposed a concept he calls “multiplicity,” with diverse groups of humans and machines solving problems through collaboration.

          For decades, artificial-intelligence researchers have noted that the simplest tasks for humans, such as reaching into a pocket to retrieve a quarter, are the most challenging for machines.

          “The intuitive idea is that the more money you spend on a robot, the more autonomy you will be able to design into it,” said Rodney Brooks, an M.I.T. roboticist and co-founder of two early companies, iRobot and Rethink Robotics. “The fact is actually the opposite is true: The cheaper the robot, the more autonomy it has.”

          For example, iRobot’s Roomba robot is autonomous, but the vacuuming task it performs by wandering around rooms is extremely simple. By contrast, the company’s Packbot is more expensive, designed for defusing bombs, and must be teleoperated or controlled wirelessly by people.

          The first Darpa challenge more than a decade ago had a big effect on the perception of robots. It also helped spark greater interest in the artificial intelligence and robotics industries.

          During the initial Darpa challenge in 2004, none of the robotic vehicles was able to complete more than seven of the 150 miles that the course covered. However, during the 2005 challenge, a $2 million prize was claimed by a group of artificial-intelligence researchers from Stanford University whose vehicle defeated a Carnegie Mellon entrant in a tight race.

          The contest led to Google’s decision to begin a self-driving-car project, which in turn spurred the automotive industry to invest heavily in autonomous vehicle technology.

          Developing a car to drive on an unobstructed road was a far simpler task than the current Darpa Robotics Challenge, which requires robots to drive, walk while navigating around obstacles, remove debris, use vision, grasp with dexterity and perform tasks with tools.

          “We had a relatively easy task,” said Sebastian Thrun, a roboticist who led the Stanford team in 2005 and later started the Google self-driving-car project. “Today they’re doing the hard stuff.”

          His view about the relationship between humans and robots has been shaped by the two contests. “I’m a big believer that technology progresses by complementing people rather than replacing them,” he said.

          Most of the Robotics Challenge teams receive university and corporate financing, and in some cases use a Darpa-funded, 6-foot-2 Atlas robot that weighs 380 pounds. (All of the competitors must design their own software and controls.)

          But one team of hobbyists will bring a homegrown robot financed with credit cards and the help of family members.

          “We’re not a big company,” said Karl Castleton, an assistant professor of computer science at Colorado Mesa University and the leader of Grit Robotics, which has constructed a robot that rolls slowly on four wheels. “We’re just some guys who have a lot of love for what we’re doing.”



          • Re: Robots Will Create 'Permanently Unemployable Underclass'

            Uber poaches 40 researchers from Carnegie Mellon:

            PITTSBURGH—Carnegie Mellon University is scrambling to recover after Uber Technologies Inc. poached at least 40 of its researchers and scientists earlier this year, a raid that has left one of the world’s top robotics research institutions in a crisis.

            In February, Carnegie Mellon and Uber trumpeted a strategic partnership in which the school would “work closely” with the taxi-hailing service to develop driverless car technology.

            But behind the scenes, the tie-up was more combative than collaborative.

            Uber envisions autonomous cars that could someday replace its tens of thousands of contract drivers. With virtually no in-house capability, the San Francisco company went to the one place in the world with enough talent to build a team instantly: Carnegie Mellon’s National Robotics Engineering Center.

            Flush with cash after raising more than $5 billion from investors, Uber offered some scientists bonuses of hundreds of thousands of dollars and a doubling of salaries to staff the company’s new tech center in Pittsburgh, according to one researcher at NREC.

            The hiring spree in January and February set off alarm bells. Facing a massive drain of talent and cash, Herman Herman, the newly elevated director of the NREC, made a presentation May 6 to staff to explain the situation and seek ideas on how to stabilize the center, according to documents reviewed by The Wall Street Journal.

            The short presentation at the school here laid out the issues. In all, Uber took six principal investigators and 34 engineers. The talent included NREC’s director, Tony Stentz, and most of the key program directors. Before Uber’s recruiting, NREC had more than 100 engineers and scientists developing technology for companies and the U.S. military.

            “If you want to do autonomous vehicles—we have a lot of people here doing that,” said Jeff Legault, the head of business development for NREC, in an interview. “I would have preferred [Uber] just come to us” to develop the vehicle rather than hire away scientists.



            • Robots: Where the Rubber Meets the Road






              This is the fifth episode in a Bits video series, called Robotica, examining how robots are poised to change the way we do business and conduct our daily lives.

              Matt McMullen has proved that some people are willing to spend thousands on sex dolls.

              Mr. McMullen, the creator of the RealDoll, says he has sold over 5,000 customizable, life-size dolls since 1996, with prices from $5,000 to $10,000. Not only can his customers decide on body type and skin, hair and eye color, but on a recent day in the company’s factory in San Marcos, Calif., a craftsman was even furnishing one doll with custom-ordered toes.

              Mr. McMullen’s new project, which he is calling Realbotix, is an attempt to animate the doll. He has assembled a small team that includes engineers who have worked for Hanson Robotics, a robotics lab that produces shockingly lifelike humanoid robots.

              Mr. McMullen is first focusing on developing convincing artificial intelligence, and a robotic head that can blink and open and close its mouth. He’s also working to integrate other emerging technologies, like a mobile app that acts like a virtual assistant and companion, and virtual reality headsets that can be used separately or in tandem with the physical doll.

              One of the challenges that Mr. McMullen will have to contend with is the so-called uncanny valley, a concept first described in 1970 by a Japanese researcher, which holds that people’s responses to robots will shift sharply from empathy to revulsion the more closely the robots resemble humans. In other words, something robotic that looks alive, but is not completely convincing, will creep people out.

              In a paper, the researcher, Masahiro Mori, then a robotics professor at the Tokyo Institute of Technology, likened the experience to encountering a prosthetic limb.

              “We could be startled during a handshake by its limp boneless grip together with its texture and coldness,” he wrote. When that happens, according to Mr. Mori, “we lose our sense of affinity, and the hand becomes uncanny.”

              With Realbotix, Mr. McMullen is trying to avoid that sense of the uncanny by creating products that still look like dolls and not, as he says, copies of people.

              Mr. McMullen says the Realbotix head, which can be attached to the existing RealDoll body, will cost around $10,000, and be commercially available in two years. The full body, which he will begin developing next, will most likely range from $30,000 to $60,000. — Emma Cott

              http://www.nytimes.com/2015/06/12/te...-realdoll.html



              • Re: Robots: Where the Rubber Meets the Road

                Read Dilbert this morning and it made me think of this thread...



                • Re: Robots: Where the Rubber Meets the Road

                  Good one!

                  I was reading this article, and had a similar but dissenting thought. It seems that different countries may have different responses to the robot transition:

                  Hitchhiking Robot, Safe in Several Countries, Meets Its End in Philadelphia

                  The creators, David Harris Smith and Frauke Zeller, two Canadian professors, said they had built the robot as “an artwork and social robotics experiment” and had successfully sent it across Canada, Germany and the Netherlands. Short and stocky, with a bucket for a body and the red LED lights of its face enclosed in plastic, the brightly colored bot would be difficult to miss.

                  Over its two weeks in the United States, hitchBOT made its way from Boston to New York — stopping to take photos in Times Square — and to Philadelphia. It made light, automated conversation and took photos of its surroundings about every 20 minutes, documenting its travels on its popular Twitter, Facebook and Instagram accounts.

                  It had brief instructions written on its back to help the travelers who would guide it through its American bucket list, which included listening to jazz in New Orleans and being the fifth face in a photo of Mount Rushmore. But its creators said the robot’s journey was cut short by vandals — it was found early Saturday beaten and dismembered in Philadelphia’s historic Old City neighborhood.
                  Apparently not all societies are equally ready to embrace their new robot overlords.



                  • Re: Robots: Where the Rubber Meets the Road

                    Originally posted by astonas View Post
                    Good one!

                    I was reading this article, and had a similar but dissenting thought. It seems that different countries may have different responses to the robot transition:



                    Apparently not all societies are equally ready to embrace their new robot overlords.
                    In Philadelphia of all places? Aren’t robots just mechanical brothers? These people have never seen The Twilight Zone!



                    • Fine Tuning the Robots We Have

                      The idea that you can augment yourself with technologies will become absolutely commonplace and a natural progression.

                      by Olivia Solon (Bloomberg)

                      The future of the high-performance workplace is taking shape behind closed doors and kept quiet by non-disclosure agreements.

                      Across the U.K., hedge funds, banks, call centers and consultancies are installing tracking systems to link biosensing wearable devices with analytics tools once the preserve of elite sports.

                      “There isn’t a competitive sports team in the world that doesn’t adopt high-end analytics tracking the athletes on the field, off the field, at home, when they’re sleeping, when and what they're eating,” says Chris Brauer, Director of Innovation at Goldsmiths, University of London. “The workplace is heading towards that model.”

                      The new tools help link human behavior and physiological data to business performance. It’s a departure from typical wearable technology strategies, which tend to focus on operational efficiency or safety.

                      The power of the new tools is being evaluated privately, partly to avoid accusations of intrusive behavior, partly because those running the tests believe it gives them a competitive edge.

                      “Yes, it’s already happening, starting off with some of the big hedge funds,” says John Coates, a Cambridge neuroscientist and former Goldman Sachs trader, who is actively working with companies to link biological signals to trading success.

                      His academic research focuses on understanding the physiological drivers of risk preferences. “It used to be assumed that most things that you learned at business school were pure cognitive activity, and if you’re not doing well you need better information or psychological training,” he says.

                      However, science is starting to show that some hormones – including naturally produced steroids such as testosterone – increase confidence and make us take more risks. Stress hormones like cortisol produce the opposite effect.

                      Wearable technology – be it heart-rate monitors or skin response sensors – can give this underlying influence more visibility, says Coates. “You need to figure out whether you should be trading or whether you should go home. If you are trading, should you double up your position because you’re in the zone?”

                      Coates says he is working with three or four hedge funds to implement such an early-warning system: “A lot of smart managers think their algos have gone as far as they can go. The next step is human optimization.”

                      While much is happening in secret, some companies across a range of sectors are open about their experiments with biosensing wearables.

                      The Military Tech Maker

                      Equivital wireless human monitoring equipment comes from the battlefield. The firm’s Black Ghost chest-mounted wearable sensor measures heart rate, stress levels, breathing, skin temperature and body position. The company is currently working on a predictive system to flag when someone is 20 minutes away from heat stress.

                      Equivital’s systems are now creeping into industry, starting with oil and gas, mining and construction – still mostly for health and safety purposes. Chief Executive Officer Anmol Sood says that’s changing: “Companies are always looking at data to ensure their own business models are as effective as possible – it makes sense to bring that data from individuals working at the company.”

                      The Smart Badge Firm

                      Humanyze’s smart work badges contain microphones and precision positioning technology.

                      “We’re doing voice analysis in real time,” says CEO Ben Waber. The system looks at how much individuals talk, how loudly they speak, whether they interrupt or sound stressed. “We also look at how much you move around and interact with other teams.”

                      Bank of America used Humanyze’s technology within its call centers to find out what made employees most productive in terms of numbers of completed calls. Yet it found that the biggest predictor of productivity was how staff spoke to their colleagues. Those with the closest ties to others in their group were more productive and less likely to quit than those who worked alone. The bank added a 15-minute shared coffee break to daily routines: productivity increased by 10 percent and staff turnover dropped by 70 percent.

                      The Formula One Team

                      McLaren Applied Technologies works on evidence-based systems to maximize human efficiency, and counts KPMG and GlaxoSmithKline among its clients. Internally the company has been exploring how to get its Formula One teams to recover from jet lag most effectively.

                      “Our teams are travelling around the world a lot. Some of them have to change wheels in the pit stop and be alert and fit, while others have to look at reams of data and need to be cognitively alert,” explains Duncan Bradley, Head of High-Performance Design.

                      By plotting data relating to travel schedules with heart rate variability monitors and tests to monitor cognitive alertness, the company was able to reorganize journey times and teams to suit the way different individuals coped with jet lag and stress.

                      The Lifestyle Monitors

                      Behavior outside the workplace impacts performance as well – including exercise, sleep, food, alcohol consumption and caffeine intake.

                      Peak Health works with high-potential leadership within financial organizations on joining the dots between health and performance. Founder Dan Zelezinski has worked with people at Goldman Sachs, Bank of America and various hedge funds. He gives clients wearables to track their stress physiology while they keep a journal of their activities.

                      “High net-worth individuals, portfolio managers and owners of organizations on the buy side are looking for any kind of edge,” he explains. The “easiest wins” tend to focus on recovery – eating well and getting good quality sleep.

                      “A lot of it is intuitive but you don’t know it until you see the data,” says Jason Rabinowitz, a consultant who used Zelezinski’s services at Goldman Sachs. However, as Zelezinski points out, there’s still a big leap between physical performance and profit or loss. “We might be able to find some correlations, but it’s difficult to track.”

                      John Coates agrees: “The real problem is signal processing: having a deep insight into how your physiology is affecting your performance. Not a lot of people are doing that science. How do you link the data you are collecting to questions like, ‘Should I be trading today?’ ”

                      Even without those issues, there are glaring privacy and morale implications to consider.

                      “You can’t all of a sudden tell people to wear a load of sensors. That’s creepy,” says Humanyze’s Waber.

                      Chris Brauer believes the privacy debate will fade once people realize the potential of this sort of human performance analytics.

                      “High achievers are very competitive with themselves as well as others. So allowing them to track when they are most productive, focused and satisfied will help them understand the conditions when they perform best,” he says.



                      • Barbie Will See You Now

                        It looked like a child’s playroom: toys in cubbies, a little desk for doing homework, a whimsical painting of a tree on the wall. A woman and a girl entered and sat down in plump papasan chairs, facing a low table that was partly covered by a pink tarp. The wall opposite them was mirrored from floor to ceiling, and behind it, unseen in a darkened room, a half-dozen employees of the toy company Mattel sat watching through one-way glass. The girl, who looked about 7, wore a turquoise sweatshirt and had her dark hair pulled back in a ponytail. The woman, a Mattel child-testing specialist named Lindsey Lawson, had sleek dark hair and the singsong voice of a kindergarten teacher. Microphones hidden in the room transmitted what Lawson said next. ‘‘You are going to have a chance to play with a brand-new toy,’’ she told the girl, who leaned forward with her hands on her knees. Removing the pink tarp, Lawson revealed Hello Barbie.

                        ‘‘Yay, you’re here!’’ Barbie said eagerly. ‘‘This is so exciting. What’s your name?’’

                        ‘‘Ariana,’’ the girl said.

                        ‘‘Fantastic,’’ Barbie said. ‘‘I just know we’re going to be great friends.’’

                        Their exchange was the fulfillment of an ancient dream: Since there have been toys, we have wanted them to speak to us. Inventors in the mid-1800s, deploying bellows in place of human lungs and reeds to simulate vocal cords, managed to get dolls to say short words like ‘‘papa.’’ Thomas Edison’s first idea for commercializing his new phonograph invention was ‘‘to make Dolls speak sing cry,’’ as he wrote in a notebook entry in 1877. In the 20th century, toy makers scored with products like Dolly Rekord, who spoke nursery rhymes in the 1920s; Chatty Cathy, a 1959 release from Mattel whose 11 phrases included ‘‘I love you’’; and Teddy Ruxpin, a mid-1980s stuffed bear whose mouth and eyes moved as he told stories. Even Barbie gained her voice in 1968 with a pull string that activated eight short phrases.

                        All that doll talk has always been a kind of party trick, executed with hidden record players, cassette tapes or digital chips. But in the past five years, breakthroughs in artificial intelligence and speech recognition have given the devices around us — smartphones, computers, cars — the ability to engage in something approaching conversation, by listening to users and generating intelligent responses to their queries. Apple’s Siri and Microsoft’s Cortana are still far from the science-fiction promise of Samantha from the movie ‘‘Her.’’ But as conversational technology improves, it may one day rival keyboards and touch screens as our primary means of communicating with computers — according to Apple, Siri already handles more than a billion spoken requests per week. With such technology widely available, it was inevitable that artificial intelligence for children would arrive, too, and it is doing so most prominently in the pink, perky form of Mattel’s Hello Barbie. Produced in collaboration with ToyTalk, a San Francisco-based company specializing in artificial intelligence, the doll is scheduled to be released in November with the intention of hitting the lucrative $6 billion holiday toy market.

                        For adults, this new wave of everyday A.I. is nowhere near sophisticated enough to fool us into seeing machines as fully alive. That is, they do not come close to passing the ‘‘Turing test,’’ the threshold proposed in 1950 by the British computer scientist Alan Turing, who pointed out that imitating human intelligence well enough to fool a human interlocutor was as good a definition of ‘‘intelligence’’ as any. But things are different with children, because children are different. Especially with the very young, ‘‘it is very hard for them to distinguish what is real from what is not real,’’ says Doris Bergen, a professor of educational psychology at Miami University in Ohio who studies play. The penchant to anthropomorphize — to believe that inanimate objects are to some degree humanlike and alive — is in no way restricted to the young, but children, who often favor magical thinking over the mundane rules of reality, have an especially rich capacity to believe in the unreal.

                        Hello Barbie is by far the most advanced to date in a new generation of A.I. toys whose makers share the aspiration of Geppetto: to persuade children that their toys are alive — or, at any rate, are something more than inanimate. At Ariana’s product-testing session, which took place in May at Mattel’s Imagination Center in El Segundo, Calif., near Los Angeles, Barbie asked her whether she would like to do randomly selected jobs, like being a scuba instructor or a hot-air-balloon pilot. Then they played a goofy chef game, in which Ariana told a mixed-up Barbie which ingredients went with which recipes — pepperoni with the pizza, marshmallows with the s’mores. ‘‘It’s really fun to cook with you,’’ Ariana said.

                        At one point, Barbie’s voice got serious. ‘‘I was wondering if I could get your advice on something,’’ Barbie asked. The doll explained that she and her friend Teresa had argued and weren’t speaking. ‘‘I really miss her, but I don’t know what to say to her now,’’ Barbie said. ‘‘What should I do?’’

                        ‘‘Say ‘I’m sorry,’ ’’ Ariana replied.

                        ‘‘You’re right. I should apologize,’’ Barbie said. ‘‘I’m not mad anymore. I just want to be friends again.’’

                        This summer, when I visited Mattel’s sprawling campus in El Segundo, a prototype of Hello Barbie stood in the middle of a glass-topped conference table, her blond tresses parted on the right and cascading down to her left shoulder. She looked like your basic Barbie, but Aslan Appleman, a lead product designer, explained that her thighs had been thickened slightly to fit a rechargeable battery in each one; a mini-USB charging port was tucked into the small of her back.

                        A microphone, concealed inside Barbie’s necklace, could be activated only when a user pushed and held down her belt buckle. Each time, whatever someone said to Barbie would be recorded and transmitted via Wi-Fi to the computer servers of ToyTalk. Speech-recognition software would then convert the audio signal into a text file, which would be analyzed. The correct response would be chosen from thousands of lines scripted by ToyTalk and Mattel writers and pushed to Hello Barbie for playback — all in less than a second.

                        ‘‘Barbie, what is your full name?’’ Appleman asked the doll as I watched.

                        ‘‘Oh, I thought you knew,’’ Barbie replied. ‘‘My full name is Barbara Millicent Roberts.’’
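
                        As a very rough sketch of the round trip described above (every name below is a hypothetical illustration; ToyTalk’s actual system is proprietary and not public), the push-to-talk loop might look like this:

                        # A minimal, self-contained sketch of the described pipeline:
                        # record while the button is held, transcribe, match against
                        # scripted lines, play back the reply. All names hypothetical.

                        SCRIPTED_LINES = {
                            "full name": "Oh, I thought you knew. My full name is Barbara Millicent Roberts.",
                            "how are you": "Great! Me, too.",
                        }
                        FALLBACK = "Really? No way!"  # generic reply when nothing matches

                        def transcribe(audio: str) -> str:
                            """Stand-in for the server-side speech-recognition step; text plays the role of audio."""
                            return audio.lower()

                        def choose_reply(transcript: str) -> str:
                            """Pick the scripted line whose keyword appears in the transcript."""
                            for keyword, reply in SCRIPTED_LINES.items():
                                if keyword in transcript:
                                    return reply
                            return FALLBACK

                        def on_belt_buckle_press(audio: str) -> str:
                            """Doll side: runs only while the button is held; returns the line to play back."""
                            return choose_reply(transcribe(audio))

                        print(on_belt_buckle_press("Barbie, what is your full name?"))
                        # -> Oh, I thought you knew. My full name is Barbara Millicent Roberts.

                        The real system does the transcription and matching on ToyTalk’s servers over Wi-Fi; the point of the sketch is only the shape of the loop, not the transport.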

                        Ever since Barbie introduced herself to the world, she has stood at the uneasy center of questions about the influence of dolls on children. Unveiled at the New York Toy Fair in 1959, she quickly became both a cultural flash point — attacked by the pioneering feminist Betty Friedan and depicted by Andy Warhol — and one of the top-selling toys of all time, with more than a billion dolls purchased. Her stiltlike legs, tiny waist and enormous breasts set her apart from the childish dolls that had reigned until that time; in the 1950s, before Barbie was even released, a mother complained to Mattel that the doll had ‘‘too much of a figure.’’ Her appearance has remained controversial.

                        Protesters at the 1972 Toy Fair complained that Barbie and other dolls encouraged girls ‘‘to see themselves solely as mannequins, sex objects or housekeepers,’’ according to an account in The New York Times.

                        When children reach preschool, they begin to avidly collect information about gender roles — what distinguishes girls from boys, and what each gender is supposed to say and do, says May Ling Halim, an assistant professor of psychology at California State University, Long Beach, who studies gender identity. Barbie and other dolls are hardly the only influences on this process, but they may be a significant source of gender information. A 2006 study in the journal Developmental Psychology bluntly concluded that ‘‘girls exposed to Barbie reported lower body esteem and greater desire for a thinner body shape.’’

                        Giving Barbie a voice only increases her potential impact. ‘‘The messages that she says could influence how kids define being a girl,’’ Halim says. An earlier version of the doll with a much more limited ability to speak — Teen Talk Barbie, released in 1992 — enraged critics with the utterance, ‘‘Math class is tough.’’ The American Association of University Women called on Mattel to recall the doll, and the company, apologizing, deleted the offending line from the computer chip.

                        The technology behind Barbie’s latest campaign to speak was inspired by an incident four years ago, when a 7-year-old girl named Toby sat on the floor of her family’s playroom in Piedmont, Calif. She and her father were chatting with her grandmother, using the Skype app on an iPhone. After the call, Toby gazed across the room at her favorite stuffed animal, a fuzzy rabbit she called Tutu, and then back at the phone. ‘‘Daddy, can I use this to talk to Tutu?’’ she asked.

                        Toby’s father was Oren Jacob, who until recently had worked at Pixar, and he says he just laughed at his daughter’s remark at the time. Jacob started at the company in 1990, while he was still an undergraduate at the University of California, Berkeley. As a technical director, he helped create Buzz Lightyear’s rocket exhaust in ‘‘Toy Story’’ and the watery world of ‘‘Finding Nemo.’’ By 2008, he was a chief technical officer, reporting directly to John Lasseter and Steve Jobs.

                        Jacob resigned in 2011, wanting to try something new. Soon after, he and Martin Reddy, who had been Pixar’s lead software engineer, decided to start a company. But the two struggled to find a compelling idea. So Jacob mentioned his daughter’s comment to Reddy, and the more they discussed the notion of talking to toys, the more the idea seemed promising — or even revolutionary, on par with the once-heretical notion of using a computer to create cartoons. ‘‘If you could put an incredible, believable character in conversation, what would it do to the world?’’ Jacob says he and Reddy wondered. ‘‘What kind of characters could you create, stories could you tell and entertainment could you offer?’’

                        Jacob isn’t young by the standards of Silicon Valley — he’s 44, with close-cropped graying hair. But he is impish, favoring shorts and brightly colored T-shirts, and he is manic, the sentences cascading from his mouth at an auctioneer’s pace. He and Reddy, who is also 44 and has a Ph.D. in computer science, started ToyTalk in May 2011 and, with the help of $30 million so far in investment, have hired nearly 30 employees, including coders, artificial-intelligence experts, natural-language-processing specialists and a creative team. The company’s first commercial offerings were smartphone and tablet apps featuring characters that talk back. But early this year, ToyTalk and Mattel joined forces to create a talking Barbie that could actually listen.

                        Mattel committed to a November 2015 release date, but as of February, none of Barbie’s lines had been written, reviewed or recorded. Almost none of the technology inside the doll was available off the shelf; Mattel needed specific features and components that fit into Barbie’s notoriously svelte figure. ‘‘For the Wi-Fi transmitter alone, we had five different vendors working on a solution in parallel,’’ Appleman said.

                        But Mattel executives felt they needed to push forward, not least because the Barbie brand is ailing. The company sold $1.3 billion worth of Barbie products in 2011, but by last year, the figure had dropped to $1 billion. A typical product team at Mattel might have 15 people handling 40 to 75 new offerings; the Hello Barbie team is twice as large, with some members devoted exclusively to the new doll. Usually the product-development timeline is 18 months; Hello Barbie needed to be finished in half that time.

                        In May, three ToyTalk employees in their 30s — Sarah Wulfeck, Nick Pelczar and Dan Clegg — filed into a conference room in the company’s San Francisco office. Pelczar and Clegg were Shakespearean actors who still performed regularly onstage; Wulfeck studied dramatic writing and did voice-over work in Hollywood. All three supplied the voices for prior ToyTalk characters, but for Hello Barbie, their job was to write the content that would fill Barbie’s vacant brain. (Danielle Frimer, another actor, would join them later.) ‘‘We are trying to build her personality from scratch into the perfect friend,’’ Wulfeck said.

                        Now two months into their writing process, the team had finished about 3,000 lines of dialogue — mostly isolated modules of content on fashion, careers, animals and more. They had 5,000 more lines to write until the project was finished. Wulfeck plugged in a computer and started a program called PullString, named in homage to the mechanism that triggered the utterances of mid-20th-century toys. The software, which was created by ToyTalk’s engineers, allowed nonprogrammers to script the conversations that kids might have with a character like Barbie.

                        The writers were working on a relatively simple game in which Barbie, casting herself as a game-show host, would ask children to give awards to family members. Wulfeck had written the module and now wanted feedback from the other writers. They started playing the game, with Pelczar providing the child’s responses. Wulfeck typed what he said into the system and read the replies that PullString generated for Barbie.

                        ‘‘For the person who’s always gonna grab the last French fry, carrot stick or cookie, it’s the Always Eats the Last One Award!’’ Wulfeck-as-Barbie said. ‘‘And the award goes to?’’

                        ‘‘My brother, Andrew,’’ Pelczar said.

                        ‘‘Your brother,’’ Wulfeck replied, reading from the PullString screen. ‘‘He’s the best at getting the last one, huh? How does he do it?’’

                        ‘‘He’s fast and hungry,’’ Pelczar said.

                        On another visit, Wulfeck showed me how Barbie’s artificial intelligence worked. She tapped on the keyboard to bring up a simple example. ‘‘Hey, how are you?’’ read a line of Barbie’s at the top of the screen. The next step had been for the writers to list dozens of words that the speech-recognition software should listen for in a child’s answer: for instance, ‘‘good,’’ ‘‘fine,’’ ‘‘fantastic’’ or ‘‘not bad.’’ The system extracted keywords, and in this case, ‘‘good’’ or any of its positive brethren would cue Barbie to reply, ‘‘Great! Me, too.’’ ‘‘Bad’’ or other negative words would direct Barbie to say, ‘‘I’m sorry to hear that.’’

                        In this way, every one of Barbie’s potential conversations was mapped out like the branches of a tree, with questions leading to long lists of predicted answers, which would trigger Barbie’s next response, and so on. In case the speech recognition failed or a kid’s response was not predicted, the writers always supplied Barbie with a ‘‘fallback’’: the kind of enthusiastic and generic conversational trick — ‘‘Really? No way!’’ — that a person might use in, say, a loud bar. The writing process, Wulfeck said, was like doing improv with an unpredictable partner. ‘‘You are playing off of somebody who could be anybody,’’ she said. ‘‘It could be the shy kid, the really snarky kid or the insecure kid, and you have to think about what that child is going to say back.’’
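
                        A minimal sketch of one branch of such a tree, assuming a simple keyword-list format (PullString’s real scripting format is not public, so the structure below is illustrative only):

                        # One hypothetical node of the dialogue tree: a prompt, lists of
                        # words to listen for, and a fallback for unpredicted answers.

                        HOW_ARE_YOU = {
                            "prompt": "Hey, how are you?",
                            "branches": [
                                ({"good", "fine", "fantastic", "not bad"}, "Great! Me, too."),
                                ({"bad", "terrible", "sad"}, "I'm sorry to hear that."),
                            ],
                            "fallback": "Really? No way!",  # used when recognition fails
                        }

                        def respond(node, transcript):
                            """Return the reply of the first branch whose keyword appears in the transcript."""
                            text = transcript.lower()
                            for keywords, reply in node["branches"]:
                                if any(k in text for k in keywords):
                                    return reply
                            return node["fallback"]

                        print(respond(HOW_ARE_YOU, "Pretty good, thanks!"))  # -> Great! Me, too.
                        print(respond(HOW_ARE_YOU, "Blergh."))               # -> Really? No way!

                        Chaining nodes so that each reply selects the next node is what gives the ‘‘branches of a tree’’ structure the writers describe.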

                        Barbie would be able to ask kids what music they liked, for instance, and was ready for nearly 200 possible responses. Taylor Swift? ‘‘She is one of my super favorites right now!’’ Barbie would reply. My Bloody Valentine? ‘‘They are so emo.’’

                        The writers marked important questions with ‘‘flags,’’ and this enabled Hello Barbie’s most unnerving power: She could remember the answers and use them for conversation starters days or weeks later. ‘‘She should always know that you have two moms and that your grandma died, so don’t bring that up, and that your favorite color is blue, and that you want to be a veterinarian when you grow up,’’ Wulfeck said.
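
                        A sketch of how those ‘‘flags’’ might store and reuse answers (again hypothetical names; the real persistence format is not public):

                        # Answers to flagged questions are stored and surface later as
                        # conversation starters. Illustrative only.

                        memory = {}  # persisted per child across sessions in the real system

                        def remember(flag, answer):
                            """Store the answer to a flagged question under its flag name."""
                            memory[flag] = answer

                        def conversation_starter():
                            """Days or weeks later, reuse a stored fact to open a conversation."""
                            if "career_goal" in memory:
                                return f"Last time you said you want to be a {memory['career_goal']}. Is that still your dream job?"
                            return "What do you want to be when you grow up?"

                        remember("career_goal", "veterinarian")
                        print(conversation_starter())
                        # -> Last time you said you want to be a veterinarian. Is that still your dream job?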

                        In developing the technology to make such feats possible, ToyTalk is chasing one of the most prized goals in Silicon Valley today, which is to create artificial-intelligence-powered companions that are personality-rich and conversationally capable. ToyTalk’s approach, however, focuses on quality of conversation instead of quantity. Smartphones often regurgitate online content using automated voices, while every single word that a ToyTalk character says is scripted by writers and recorded by actors to sound genuinely human. Smartphones roam the web for unlimited information, while a ToyTalk character like Hello Barbie is limited to 8,000 lines of content. Smartphones rarely extend beyond one-question, one-answer exchanges; PullString conversations can go 10 to 200 interchanges deep. Over all, ToyTalk favors creative control over smartphones’ superior utility.

                        To craft Hello Barbie’s character, the ToyTalk writers worked from a background brief and verbal instructions from Mattel. As a toy, Hello Barbie needs to be both fun, leading girls through imaginative games, and funny, telling jokes and being goofy. But Mattel also wanted Barbie to have an empathetic, affirming sensibility aimed at young girls, says Julia Pistor, a Mattel vice president. ‘‘The subtext that is there that we would not do for boys is: ‘You don’t have to be perfect. It is O.K. to be messy and flawed and silly.’ ’’

                        Armed with her ToyTalk-scripted lines, Hello Barbie comes across as chipper and positive, verging on cloying. But she is also fun-loving, with just a hint of conspiratorial mischief. ‘‘I like to think of her as the world’s best babysitter,’’ Wulfeck said.

                        She told me she imagined a girl taking the new doll into her bedroom and closing the door. ‘‘I have no doubt she will ask Barbie all manner of those intimate questions that she wouldn’t ask an adult,’’ Wulfeck said.

                        For those situations, the team was working on getting Barbie to say the right things — or at the very least, to not say the obviously wrong ones.

                        ‘‘Do you believe in God?’’ a kid might ask.

                        ‘‘I think a person’s beliefs are very personal to them,’’ Barbie might reply, Wulfeck said.

                        ‘‘I’m getting bullied in school.’’

                        ‘‘That sounds like something you should talk to a grown-up about.’’

                        ‘‘Do you think I’m pretty?’’

                        This was dicey territory, and Wulfeck was trying to steer Barbie to safe ground. ‘‘Of course you’re pretty, but you know what else you are?’’ Barbie would reply. ‘‘You’re smart, talented and funny.’’

                        ‘‘I feel shy trying to make new friends.’’

                        ‘‘Feeling shy is nothing to feel bad about,’’ Barbie would say. ‘‘Just remember this — you made friends with me right away.’’

                        Anyone who has watched a child have an animated conversation with a doll — or a stuffed animal, a toy car or a Lego brick for that matter — has probably wondered what that child is really thinking. As the pioneering developmental psychologist Jean Piaget wrote in his book ‘‘The Child’s Conception of the World,’’ published in 1929, ‘‘Does the child attribute consciousness to the objects which surround him, and in what measure?’’

                        This question has only grown more intriguing with the advent of toys that, rather than waiting for a child’s imagination to animate them, use technology to seemingly attain consciousness all on their own. In the late 1990s, Noel Sharkey, a professor at the University of Sheffield in England who studies the ethics of robotics, saw how this could play out when one of his daughters, who was around 8 at the time, started interacting with one of the first-ever artificial-intelligence-powered toys — a virtual pet called Tamagotchi. An egg-shaped computer that fit in the palm of her hand, the Tamagotchi had a tiny screen to express what it needed and wanted. Sharkey’s daughter periodically pressed a button to give the Tamagotchi food; she played simple games to boost her pet’s happiness levels; she took the pet to the toilet when the screen indicated that it needed to relieve itself. Tamagotchi’s creators had programmed it to demand an ever-increasing amount of attention, and a failure to deliver this caused the pet to become sick. ‘‘We had to break it away from my daughter in the end, because she was obsessed with it,’’ Sharkey says. ‘‘It was like, ‘Oh, my God, my Tamagotchi is going to die.’ ’’

                        The ability of even simple gadgets like the Tamagotchi to seduce users into the belief that they have lifelike qualities has been obvious since the earliest days of artificial intelligence. In the 1960s, a computer scientist, Joseph Weizenbaum, created a computer program called Eliza, which could pretend to be a psychotherapist via a simple text interface. As Weizenbaum later wrote, ‘‘I was startled to see how quickly and how very deeply people … became emotionally involved with the computer and how unequivocally they anthropomorphized it.’’ Five decades of research have supported the same finding in increasingly creative ways. Studies have documented that people are embarrassed to undress in front of humanlike robots. We cheat less in the presence of robots, keep a robot’s secrets from other people when asked by the machine to do so and hesitate to ‘‘kill’’ (via an on-off dial) a nice-seeming robot.

                        With children, this phenomenon can be even more pronounced. To see how they reacted to lifelike technologies, the roboticists Cynthia Breazeal and Brian Scassellati and the psychologist Sherry Turkle introduced children to the robots Cog and Kismet at the Massachusetts Institute of Technology. In the experiment, which took place in 2001, the two robots couldn’t converse with kids but engaged them through eye contact, gestures and facial expressions. Surveyed after these encounters, most children said they believed that Kismet and Cog could listen, feel, care about them and make friends — despite researchers’ showing the children how the robots worked and giving them a chance to control them. ‘‘Children continued to imbue the robots with life even when being shown — as in the famous scene from ‘The Wizard of Oz’ — the man behind the curtain,’’ the researchers later wrote.

                        For psychologists who study the imaginative play of children, the primary concern with A.I. toys is not that they encourage kids to fantasize too wildly. Instead, researchers worry that a conversational doll might prevent children, who have long personified toys without technology, from imagining wildly enough. ‘‘Imaginary companions aren’t constrained,’’ says Tracy Gleason, a professor of psychology at Wellesley College who studies children’s imaginative play. ‘‘They often do all kinds of things like switching age, gender, priorities and interests.’’ With a toy like Hello Barbie, the personality is limited by programming — and public-relations concerns. Mattel, rather than kids, ultimately controls what she can say. ‘‘She is who she is,’’ Gleason says. ‘‘That might be a lot of fun, but it is definitely less imaginative, child-generated and truly interactive than someone with whom you can imagine whatever you want.’’

                        A toy that can befriend a child is likely to be a commercially successful one, so toy makers will presumably push to make their A.I. technologies ever more likable. ‘‘The first thing you’re going to do is to try and create stronger and stronger emotional bonds,’’ Sharkey says. For some children, synthetic friendship could begin to supplant the real kind: ‘‘If you’ve got someone who you can talk to all the time, why bother making friends?’’ Family members and friends can annoy, challenge or disappoint, ‘‘whereas this lovely Barbie will be beautiful to you all the time.’’ Parents, who already turn to televisions and tablets to occupy their children, might embrace an even more capable-seeming e-babysitter. Is Hello Barbie ‘‘a step toward leaving your children in the hands of robots?’’ Sharkey asks. ‘‘I don’t know.’’

                        Peter Kahn, a professor of psychology at the University of Washington who studies human-robot interaction, worries about a ‘‘domination model’’ relationship in which the child makes all the demands and receives all the rewards but feels no responsibility to the robot. This, he says, is unhealthy for moral and emotional development. At worst, the human can begin to abuse his power. In a study conducted at a Japanese shopping mall a couple of years ago, for instance, researchers videotaped numerous children who kicked and punched a humanoid robot when it got in their way. Anticipating that some kids might say mean things to Hello Barbie, ToyTalk has programmed her to simply ignore verbal jabs. The thinking behind this approach, which is common in A.I., is that acknowledging bad behavior often has the perverse effect of encouraging it.

                        The alternative is for the robot to stand up for itself. To test this strategy, Kahn and his colleagues ran an experiment in which 90 kids and teenagers played a game of I Spy with a robot named Robovie. Before the game could finish, an adult experimenter would always interrupt, saying, ‘‘Robovie, you’ll have to go into the closet now.’’ Robovie would protest that this wasn’t fair, but the experimenter would nonetheless lead the robot away. ‘‘I’m scared of being in the closet,’’ Robovie would say. After witnessing this, nearly 90 percent of the subjects said they agreed with Robovie’s protests; more than half thought that it was ‘‘not all right’’ to put the robot in the closet. ‘‘The surprising finding,’’ Kahn says, ‘‘is that children will engage not just socially but morally with these robots.’’ Kahn, though, says he has mixed feelings about programming machines to demand morally just treatment. Some roboticists feel that in so doing, they create an even more persuasive charade of life.

                        All the roboticists I spoke with said that getting people to suspend disbelief and emotionally invest in imaginary characters is what practitioners of the arts have been doing for eons, in works from ‘‘Oedipus Rex’’ to ‘‘Inside Out.’’ But it’s hard not to wonder what our increasingly potent ability to conjure the illusion of life — especially with products for children, who eagerly anthropomorphize even without the help of technology — will mean for the future. ‘‘In small doses, it just doesn’t matter, of course,’’ Kahn says. ‘‘It is just a trivial toy. But the way the world is going, these are not just going to become small, isolated technologies in a child’s life. They are becoming a pervasive form of interaction.’’

                        At the beginning of June, I returned to El Segundo to check on Mattel’s progress with Hello Barbie. Michelle Chidoni, the company’s marketing chief, led me past racks of candy-colored clothes for girls tagged with the Barbie brand, bins of glitter and dozens of Barbie heads perched atop small sticks.

                        Inside a conference room, several employees had gathered to review the latest batch of dialogue from the ToyTalk writers, with Wulfeck on speakerphone. Roughly 5,000 of the 8,000 lines of dialogue were now written, and one question concerned Barbie’s tone. If her sense of humor was too sharp, it might alienate the 3-year-olds who were at the young end of the doll’s target market. But if Barbie were too treacly, she might turn off the 8- and 9-year-olds. The group started discussing the family award show, and Amy Braun, a Mattel marketing manager, made it clear that she thought the game’s tone had veered too far toward snarky. ‘‘It feels kind of negative,’’ Braun said.

                        Everyone in the room remembered the ‘‘Math class is tough’’ debacle, and nobody wanted to repeat it. Wulfeck had scripted Barbie to say ‘‘You’re beautiful’’ in a playfully smarmy tone as she greeted girls to the game show. But Braun objected. ‘‘I don’t love that the first thing you say to her is, ‘You’re beautiful. … ’ I want to hear: ‘You are smart. You are intelligent. You are awesome.’ Something other than her physical attribute.’’

                        Later in the game, Barbie commended the family’s top book reader for being ‘‘a lover of the literary.’’ Carrie Buse, an interactive-content specialist at Mattel, said the line ‘‘implies something slightly inappropriate.’’

                        ‘‘I don’t understand,’’ Wulfeck said.

                        ‘‘As in, ‘I am the lover of those who are literary,’ ’’ Buse replied. ‘‘We don’t want Barbie to be the lover of anybody.’’

                        The game show also had an award for cleaning up gross stuff, including bugs, and a Mattel lawyer fretted that it sounded as though Barbie was encouraging kids to cruelly squash insects. Chidoni agreed. ‘‘PETA will come after you,’’ she said.

                        That wasn’t the only problem, Braun said. ‘‘Line 176, the word ‘cockroach’ … ’’

                        ‘‘Nope! Not a Barbie word,’’ Buse interjected. Mattel wanted to avoid the embarrassment of someone, with an easy bit of editing, posting a YouTube video of Hello Barbie saying a profanity. Barbie has long provoked angry reactions, and Mattel had to anticipate them. ‘‘Barbie has a target on her back,’’ Chidoni told me.

                        Mattel executives know that they will never win over all critics, but they are nonetheless using the introduction of Hello Barbie to promote a different view of the doll. Unlike baby dolls, which encourage a mothering role, Barbie has showcased 170 different careers as an unmarried adult woman, making her an unlikely sort of feminist. ‘‘She went to the moon before Neil Armstrong,’’ Chidoni said. ‘‘She was president of the United States before any female president.’’ Touting Barbie’s independence was one component of the rollout, and Evelyn Mazzocco, a senior vice president, told me the physical doll also needed to offer ‘‘a better reflection of what young girls look like today.’’ Hello Barbie and many other new Barbie models coming out this year will wear less makeup and tamer clothes than in the past. The doll’s feet will be flat, allowing her to fit into comfortable shoes.

                        Changing the way the doll speaks is another part of the makeover, Mazzocco said. The first-ever talking Barbie, the one released in 1968, was voiced by Gwen Florea, who was hired to work on talking toys at Mattel after an employee spotted her dancing to a song called ‘‘The Stripper’’ in a bar whose sound system she engineered. Her voice, which played from a quarter-size record hidden in Barbie’s torso, was lilting and vapid. Barbie’s current voice is that of Erica Lindbeck, a 23-year-old voice actress. Mazzocco said she was chosen because her delivery was lower, less breathy and more down to earth than those of past Barbies.

                        A few weeks after the dialogue-review meeting, I went to one of Lindbeck’s recording sessions. Inside a darkened studio booth, the session’s director, Collette Sunderman, stared through a window into an adjacent room, where Lindbeck perched atop a stool with a microphone in front of her mouth.

                        Lindbeck, who had previously recorded content for many of the stand-alone content modules, was now starting to work on the lines that would enable one of the doll’s most advanced capabilities — being able to reference previous conversations with girls, aided by Barbie’s digitally stored memories. ‘‘Oh, you told me you liked your science class,’’ Lindbeck said into the microphone with gusto. ‘‘Is there something else you like from school?’’

                        ‘‘Perfect,’’ Sunderman said. ‘‘Moving into the biology. Same feel, O.K.?’’

                        During a break, Lindbeck came into the sound booth and explained how the work required a new kind of acting. Much as action stars envision fantasy worlds when they perform in front of green screens, she had to imagine the responses of a girl who wasn’t there. (In ‘‘The Diamond Age,’’ Neal Stephenson’s prescient science-fiction classic about artificial intelligence, this particular job was called ‘‘racting.’’) Sunderman said she frequently used a catchphrase to coax Lindbeck into trying to create intimacy between doll and girl. ‘‘I’m sure you’ve heard me say this a thousand times, ‘knee to knee,’ ’’ Sunderman told Lindbeck. Then Sunderman turned to me. ‘‘I came up with that little phrase for us to feel like we were two little girls in a slumber party sitting on the bed, knee to knee, talking.’’
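For the technically curious: how might a callback like ‘‘you told me you liked your science class’’ be wired up? The Python sketch below is purely illustrative. Mattel’s actual system is proprietary, so every file name, rule and phrasing here is a hypothetical stand-in rather than the real implementation; the sketch simply pairs keyword rules with a small persistent memory store, which is the general shape such a feature could take.

import json
import re
from pathlib import Path

MEMORY_FILE = Path("barbie_memory.json")  # stand-in for a real server-side store

def load_memory() -> dict:
    """Return facts remembered from earlier sessions, if any."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else {}

def respond(utterance: str, memory: dict) -> str:
    """Match the child's utterance against simple keyword rules,
    saving any fact worth bringing up in a later session."""
    text = utterance.lower()
    liked = re.search(r"i (?:like|love) (?:my )?(\w+) class", text)
    if liked:
        memory["favorite_class"] = liked.group(1)  # remember for next time
        return "That sounds fun! Tell me more."
    if "school" in text and "favorite_class" in memory:
        # The callback recorded in the session above: reference a stored memory.
        return ("Oh, you told me you liked your %s class. Is there "
                "something else you like from school?" % memory["favorite_class"])
    return "Totally! What else is going on with you?"  # scripted fallback

if __name__ == "__main__":
    memory = load_memory()
    print(respond("I like my science class", memory))  # stores the fact
    print(respond("School was okay today", memory))    # recalls it
    MEMORY_FILE.write_text(json.dumps(memory))         # persist across sessions

The fallback line is also, roughly, why a scripted agent can seem to get stuck: when no rule matches, it can only repeat canned material, much as in the test sessions described below.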

                        In August, only three months before Hello Barbie was scheduled to ship to toy stores, a group of Mattel employees assembled again in the Imagination Center. Wulfeck and Pelczar had flown in, and they took notes, their computer screens glowing as the lighting dimmed in the observation room. With seven new girls coming in, and 7,000 of Barbie’s 8,000 lines complete, the goal now was to test some of the most complex games and discussions.

                        One by one, the girls came into the mock playroom for 20-minute sessions. Ranging in age from 6 to 8, they helped Barbie solve a mystery involving zoo animals and missing bananas. They played a guessing game and rode imaginary horses into the forest. They talked about what they wanted to be when they grew up — veterinarian topped the list — and told Barbie about their families.

                        Barbie was pretty — ‘‘Why are you so beautiful?’’ one girl asked her, unprompted. But the doll, which was still under development, was also streaky. She would give a long string of intelligent responses, conveying a remarkable illusion of sentience, and then bungle a few easy exchanges in a row. The kids usually didn’t seem to notice. Even when they did, they just kept plowing along, a roll-with-it attitude that makes children an ideal test market for the nascent artificial-intelligence industry.

                        In a session with a 7-year-old named Tiara, Barbie demonstrated her potential to push beyond games — but also revealed the considerable challenge of doing so. Barbie explained that her sister Stacy was feeling jealous because the other players on her new soccer team were so good. ‘‘What do you think I should say to her?’’ Barbie asked.

                        ‘‘Just say, ‘It’s O.K. if you lose,’ ’’ Tiara replied.

                        ‘‘That’s a good suggestion,’’ Barbie said. A bit later, she asked: ‘‘What about you? Have you ever felt jealous about something?’’
                        ‘‘Other people talking about me,’’ the girl said.

                        ‘‘Jealous feelings can be really hard,’’ Barbie said. So far, the conversation was proceeding just as the writers had envisioned — an emotional disclosure leading to a teachable moment, just like an after-school special. Then Barbie’s brain seemed to get stuck, and it took Tiara several attempts to steer the conversation away from the jealousy topic.

                        ‘‘Talk about something else,’’ Tiara said, and Barbie finally listened. She told some jokes, then switched to a more personal topic.

                        ‘‘Hey, new question,’’ Barbie said. ‘‘Do you have any sisters?’’

                        ‘‘Yeah,’’ Tiara said. ‘‘I only have one.’’

                        ‘‘What’s something nice that your sister does for you?’’ Barbie asked.

                        ‘‘She does nothing nice to me,’’ Tiara said tensely.

                        Barbie forged ahead. ‘‘Well, what is the last nice thing your sister did?’’

                        ‘‘She helped me with my project — and then she destroyed it.’’

                        ‘‘Oh, yeah, tell me more!’’ Barbie said, oblivious to Tiara’s unhappiness.

                        ‘‘That’s it, Barbie,’’ Tiara said.

                        ‘‘Have you told your sister lately how cool she is?’’

                        ‘‘No. She is not cool,’’ Tiara said, gritting her teeth.

                        ‘‘You never know, she might appreciate hearing it,’’ Barbie said.

                        Another awkward moment came during the zoo game, when one girl seemed to be having a good time but became unnerved when Barbie mentioned seeing orange fur, which was a clue; the girl thought she was supposed to see the fur in the testing room. When she couldn’t find it, she stood up and walked away, saying, ‘‘It’s freaking me out.’’

                        But when each play session finished and Lawson, the child-testing specialist, came back into the room to debrief, the girls all said more or less the same thing. They liked talking to Barbie. She was a good listener. Conversation was easy and fun.

What exactly girls will make of Hello Barbie isn’t clear. Research shows that children don’t fully believe that artificial-intelligence toys are alive in a biological sense. But they also don’t treat them simply as devices. Instead they are increasingly comfortable with a third ontological category — beings that are less than human but more than machines. Turkle, whose upcoming book is titled ‘‘Reclaiming Conversation: The Power of Talk in a Digital Age,’’ says that we have arrived at a ‘‘robotic moment’’ — a milestone that is as much about cultural acceptance as it is about technological achievement. ‘‘It’s not that we have really invented machines that love us or care about us in any way, shape or form,’’ Turkle says, ‘‘but that we are ready to believe that they do. We are ready to play their game.’’

                        At the Imagination Center, as one of the sessions ended, Lawson told a little girl named Emma that it was time to leave the testing room.

                        ‘‘Is Barbie going to come?’’ the girl asked hopefully.

                        ‘‘Barbie is going to hang out here,’’ Lawson replied.

                        Emma got up from the table. Reaching the door, she stole a quick glance back at Barbie, who stood alone on the table, the smile frozen on her pink plastic lips.



                        http://www.nytimes.com/2015/09/20/magazine/barbie-wants-to-get-to-know-your-child.html?hp&action=click&pgtype=Homepage&module=photo-spot-region&region=top-news&WT.nav=top-news

                        Comment


• Re: Barbie Will See You Now

                          Originally posted by nyt
                          ....a Mattel child-testing specialist named Lindsey Lawson, had sleek dark hair and the singsong voice of a kindergarten teacher. Microphones hidden in the room transmitted what Lawson said next. ‘‘You are going to have a chance to play with a brand-new toy,’’ she told the girl, who leaned forward with her hands on her knees. Removing the pink tarp, Lawson revealed Hello Barbie.

                          ‘‘Yay, you’re here!’’ Barbie said eagerly. ‘‘This is so exciting. What’s your name?’’

                          ‘‘Ariana,’’ the girl said.

                          ‘‘Fantastic,’’ Barbie said. ‘‘I just know we’re going to be great friends.’’....
the child-testing specialist sounds somewhat familiar... like... ummm... that other lindsey that starts with an L(ohan)

wonderin' how long it'll be before the under-represented lifestyle crowd gets their very own barbies...
as in... uhhhh... strap-on barbie... and/or the barbie, bobbie and ken threesome set (incl. the strap-on accy kit, so ken don't feel left out of the party)
or HEY! why not the spoiled-rotten celebrity bitch barbie (with lohan-inspired accessories, like a rehab playhouse and stolen jewelry to match...)

                          Comment
