Mars

Laser-Armed Martian Robot Now Vaporizing Targets of Its Own Free Will (dailymail.co.uk) 73

Slashdot reader Rei writes: NASA -- having already populated the Red Planet with robots and armed a car-sized nuclear juggernaut with a laser -- has now decided to grant fire control of that laser to a new AI system operating on the rover itself. Intended to increase scientific data-gathering throughput on the sometimes-glitchy rover's journey, the improved AEGIS system eliminates the need for a series of back-and-forth communication sessions to select targets and aim the laser.
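The core of the idea is that the rover ranks candidate targets on its own instead of waiting for a round trip to Earth. Here is a hypothetical sketch of that kind of autonomous ranking; the feature names, weights, and rock data are illustrative assumptions, not NASA's actual AEGIS scoring criteria.

```python
# Hypothetical target-ranking sketch, loosely modeled on the idea of AEGIS
# scoring candidate rocks in a navigation image. Weights and features are
# invented for illustration.

def score_target(target, weights=None):
    """Combine simple image-derived features into one priority score."""
    weights = weights or {"size": 0.5, "brightness": 0.3, "edge_contrast": 0.2}
    return sum(weights[k] * target[k] for k in weights)

def select_target(candidates):
    """Pick the highest-scoring candidate -- no ground-in-the-loop round trip."""
    return max(candidates, key=score_target)

rocks = [
    {"name": "rock_a", "size": 0.9, "brightness": 0.4, "edge_contrast": 0.7},
    {"name": "rock_b", "size": 0.3, "brightness": 0.9, "edge_contrast": 0.5},
]
best = select_target(rocks)  # rock_a scores 0.71, rock_b scores 0.52
```

The point of the on-board selection is latency: each Earth-Mars command cycle costs hours, so even a crude local heuristic buys extra science time per sol.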
Rei's original submission included a longer riff on The War of the Worlds, ending with a reminder to any future AI overlords that "I have a medical condition that renders me unfit to toil in any hypothetical subterranean lithium mines..."
China

China Wants To Be a Top 10 Nation For Automation By Putting More Robots In Its Factories (reuters.com) 140

An anonymous reader shares a Reuters report: China is aiming for a top-10 ranking in automation for its industries by 2020 by putting more robots in its factories, the International Federation of Robotics (IFR) said. China's push to modernize its manufacturing with robotics is partly a response to labor shortages and fast-rising wages. But the world's second-largest economy still has far lower robot penetration than other big industrialized economies -- just 36 per 10,000 manufacturing workers in 2015, ranking it 28th among the world's most automated nations. By 2020, it aims to boost penetration to 150 per 10,000 workers, IFR said in a statement, citing Wang Ruixiang, President of the China Machinery Industry Federation. To help reach that goal, China aims for sales of 100,000 domestically produced industrial robots a year by 2020, up 49 percent compared with last year, the IFR said in a statement at an industry summit in Shanghai, where the Chinese federation's chief was speaking.
Graphics

NVIDIA Drops Surprise Unveiling of Pascal-Based GeForce GTX Titan X (hothardware.com) 134

MojoKid writes from a report via HotHardware: Details just emerged from NVIDIA regarding its upcoming powerful, Pascal-based Titan X graphics card, featuring a 12-billion-transistor GPU, codenamed GP102. NVIDIA is obviously having a little fun with this one and at an artificial intelligence (AI) meet-up at Stanford University this evening, NVIDIA CEO Jen-Hsun Huang first announced, and then actually gave away a few brand-new, Pascal-based NVIDIA TITAN X GPUs. Apparently, Brian Kelleher, one of NVIDIA's top hardware engineers, made a bet with Huang that the company could squeeze 10 teraflops of computing performance out of a single chip. Jen-Hsun thought that was not doable in this generation of product, but apparently, Brian and his team pulled it off. The new Titan X is powered by NVIDIA's largest GPU -- the company says it's actually the biggest GPU ever built. The Pascal-based GP102 features 3,584 CUDA cores, clocked at 1.53GHz (the previous-gen Titan X has 3,072 CUDA cores clocked at 1.08GHz). The specifications NVIDIA has released thus far include: 12 billion transistors, 11 TFLOPs FP32 (32-bit floating point), 44 TOPS INT8 (new deep learning inferencing instructions), 3,584 CUDA cores at 1.53GHz, and 12GB of GDDR5X memory (480GB/s). The new Titan X will be available August 2nd for $1,200 direct from NVIDIA.com.
Google

Google Testing AI System To Cool Data Center Energy Bills 52

An anonymous reader writes: Google is looking at artificial intelligence technology to help it identify opportunities for data center energy savings. The company is approaching the end of an initial 2-year trial of the machine learning tool, and hopes to see it applied across the entire data center portfolio by the end of 2016. The new AI software, which is being developed at Google's DeepMind, has already helped to cut energy use for cooling by 40%, and to improve overall data center efficiency by 15%. DeepMind said that the program has been an enormous help in analyzing data center efficiency, from looking at energy used for cooling and air temperature to pressure and humidity. The team now hopes to expand the system to understand other infrastructure challenges, in the data center and beyond, including improving power plant conversion, reducing semiconductor manufacturing energy, water usage, and helping manufacturers increase throughput.
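DeepMind's system reportedly works by learning a model of how data center settings relate to efficiency, then recommending settings the model predicts are cheapest. The real system uses deep neural networks over dozens of sensors; as a minimal sketch of the general idea, a toy one-variable model fit by least squares can illustrate the "learn, then optimize" loop. All numbers below are fabricated for illustration.

```python
# Toy sketch of model-based energy tuning: fit a model mapping a control
# setting (e.g. a cooling setpoint) to PUE (power usage effectiveness),
# then pick the candidate setting the model says is most efficient.
# DeepMind's real system uses deep neural nets over many sensors; this
# 1-D least-squares fit and its telemetry are invented for illustration.

def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# (setpoint in degrees C, observed PUE) -- fabricated historical telemetry
history = [(18, 1.20), (20, 1.16), (22, 1.12), (24, 1.08)]
a, b = fit_line([h[0] for h in history], [h[1] for h in history])

# Evaluate candidate setpoints with the learned model; lower PUE is better.
candidates = [19, 21, 23]
best = min(candidates, key=lambda s: a * s + b)
```

In production the same loop runs continuously: retrain on fresh telemetry, re-optimize, and apply the recommendation (with human or safety-system oversight).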
Security

DARPA Will Stage an AI Fight in Las Vegas For DEF CON (yahoo.com) 89

An anonymous Slashdot reader writes: "A bunch of computers will try to hack each other in Vegas for a $2 million prize," reports Tech Insider, calling it a "historic battle" that will coincide with "two of the biggest hacking conferences, Black Hat USA and DEF CON". DARPA will supply seven teams with a supercomputer. Their challenge? Create an autonomous A.I. system that can "hunt for security vulnerabilities that hackers can exploit to attack a computer, create a fix that patches that vulnerability and distribute that patch -- all without any human interference."

"The idea here is to start a technology revolution," said Mike Walker, DARPA's manager for the Cyber Grand Challenge contest. Yahoo Tech notes that it takes an average of 312 days before security vulnerabilities are discovered -- and 24 days to patch them. "If all goes well, the CGC could mean a future where you don't have to worry about viruses or hackers attacking your computer, smartphone or your other connected devices. At a national level, this technology could help prevent large-scale attacks against things like power plants, water supplies and air-traffic infrastructure."

It's being billed as "the world's first all-machine hacking tournament," with a prize of $2 million for the winner, while the second and third place teams will win $1 million and $750,000.
Databases

Leaky Database Leaves Oklahoma Police, Bank Vulnerable To Intruders (dailydot.com) 16

blottsie quotes a report from The Daily Dot: A leaky database has exposed the physical security of multiple Oklahoma Department of Public Safety facilities and at least one Oklahoma bank. The vulnerability -- which has reportedly been fixed -- was revealed on Tuesday by Chris Vickery, a MacKeeper security researcher who this year has revealed numerous data breaches affecting millions of Americans. The misconfigured database, which was managed by a company called Automation Integrated, was exposed for at least a week, according to Vickery, who said he spoke to the company's vice president on Saturday. Reached on Tuesday, however, an Automation Integrated employee said "no one" in the office was aware of the problem. Vickery was able to retrieve images of various doors, locks, RFID access panels, and the controller board of an alarm system, all of which could previously be accessed without a username or password. The database also contained "details on the make, model, location, warranty coverage, and even whether or not the unit was still functional," Vickery said. What's worse is that Automation Integrated is far from the only company whose databases are left exposed online. "I have a constantly fluctuating list of 50 to 100 similar breaches that need to be reported," he said. "This one just happened to involve a security-related company and government buildings, so it got bumped to the top of my list."
Privacy

Ashley Madison Admits It Lured Customers With 70,000 Fake 'Fembots' (arstechnica.com) 92

America's Federal Trade Commission is now investigating the "infidelity hookup site" Ashley Madison. In a possibly-related development, an anonymous reader writes: Ashley Madison's new executive team "admits that it used fembots to lure men into paying to join the site," reports Ars Technica. More than 75% of the site's customers were convinced to join by an army of 70,000 fembot accounts, "created in dozens of languages by data entry workers...told to populate these accounts with fake information and real photos posted by women who had shut down their accounts on Ashley Madison or other properties owned by Ashley Madison's parent company, Avid Life Media... In reality, that lady was a few lines of PHP... In internal company e-mails, executives discussed openly that only about five percent of the site's members were real females."
The company only abandoned the practice in 2015, and CNN also reports that for years, if the site's male customers complained, Ashley Madison "threatened to send paperwork to users' homes if they disputed their bills -- potentially revealing cheaters to their spouses," while one user complained that the site also automatically signed up customers for recurring billing. "We are not threatening you. We are laying the facts to you..." one e-mail read, while another warned that "We do fight all charge backs."
Google

Google's DeepMind AI To Use 1 Million NHS Eye Scans To Spot Diseases Earlier (arstechnica.com) 34

Google DeepMind has announced its second collaboration with the NHS, as part of which it will work with Moorfields Eye Hospital in east London to build a machine learning system which will eventually be able to recognise sight-threatening conditions from just a digital scan of the eye. The five-year research project will draw on one million anonymous eye scans which are held on Moorfields' patient database, reports Ars Technica, with the aim of speeding up the complex and time-consuming process of analysing eye scans. From the report: The hope is that this will allow diagnoses of common causes of sight loss, like diabetic retinopathy and age-related macular degeneration, to be spotted more rapidly and hence be treated more effectively. For example, Google says that up to 98 percent of sight loss resulting from diabetes can be prevented by early detection and treatment. Two million people are already living with sight loss in the UK, of whom around 360,000 are registered as blind or partially-sighted. Google quotes estimates that the number of people suffering from sight loss in the UK will double by 2050. Improvements in detection and treatment would therefore have a major impact on the quality of life for large numbers of people in the UK and around the world.
Security

A New Corporate AI Can Read Your Emails - and Your Mind (fortune.com) 120

"Okay, as of last night, who were the people who were most disgruntled...? Show me the top 10." An anonymous Slashdot reader shares their report on a fascinating Fortune magazine article: "One company says it can spot 'insider threats' before they happen -- by reading all your workers' email." Working with a former CIA consultant, Stroz Friedberg developed software that "combs through an organization's emails and text messages -- millions a day, the company says -- looking for high usage of words and phrases that language psychologists associate with certain mental states and personality profiles...

"Many companies already have the ability to run keyword searches of employees' emails, looking for worrisome words and phrases like 'embezzle' and 'I loathe this job'. But the Stroz Friedberg software, called Scout, aspires to go a giant step further, detecting indirectly, through unconscious syntactic and grammatical clues, workers' anger, financial or personal stress, and other tip-offs that an employee might be about to lose it... It uses an algorithm based on linguistic tells found to connote feelings of victimization, anger, and blame."
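To make the contrast concrete, a crude sketch of the simpler, keyword-style scoring such tools start from: count phrases associated with victimization, anger, and blame, then rank authors. Scout's actual features (unconscious syntactic and grammatical clues) are far subtler than this; the phrase list, weights, and messages below are invented for illustration.

```python
# Toy "linguistic tell" scorer: weight phrases associated with anger,
# victimization, and blame, then rank message authors by total score.
# The phrase list and weights are invented, not Scout's actual model.

TELLS = {"always me": 2, "not my fault": 2, "they never": 1, "fed up": 1}

def tell_score(text):
    """Weighted count of tell phrases in one message."""
    text = text.lower()
    return sum(w * text.count(p) for p, w in TELLS.items())

def rank_authors(messages, top=10):
    """messages: list of (author, text). Returns authors, highest score first."""
    totals = {}
    for author, text in messages:
        totals[author] = totals.get(author, 0) + tell_score(text)
    return sorted(totals, key=totals.get, reverse=True)[:top]

mail = [
    ("alice", "Shipping the release notes today."),
    ("bob", "It's always me who gets blamed. I'm fed up."),
]
ranked = rank_authors(mail)  # bob ranks first
```

A real system like Scout would replace the hand-written phrase table with features learned from labeled text, but the aggregation-and-ranking step ("show me the top 10") looks much the same.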

The article reports that 27% of cyber-attacks "come from within," according to a study of 562 organizations that was partly conducted by the U.S. Secret Service, with 43% of the surveyed companies reporting an "insider attack" within the last year.
Communications

Facebook Messenger Now Has 11,000 Bots (theverge.com) 43

An anonymous reader writes: Three months after Facebook announced a platform for building bots that operate inside its Messenger app, Messenger chief David Marcus said in a blog post that more than 11,000 bots have been created. He also said 23,000 more developers have signed up to use tools provided by Wit.ai, a Facebook acquisition that automates conversational interactions between users and businesses. Facebook has yet to announce any numbers regarding how many users actually use the bots, but developers appear to be actively engaged. Facebook has said that bots will rapidly improve as more developers create them. Marcus did announce several new features for the platform. Bots can now respond with GIFs, audio, video, and other files "to help a brand's personality come across," Marcus said. They can now link Messenger profiles to customer accounts, such as a bank or online merchant. They're also getting some new UI elements: "quick replies" that suggest interactions for the user to help them set their expectations, and a "persistent menu" option for bots that displays available commands at all times so users don't have to remember them. A star system is now in place for users to rate bots and provide feedback directly to developers.
Slashdot also has a Facebook Messenger bot. You can chat with it by messaging the Slashdot Facebook page.
AI

Satya Nadella Explores How Humans and AI Can Work Together To Solve Society's Greatest Challenges (geekwire.com) 120

In an op-ed for Slate, Microsoft CEO Satya Nadella has shared his views on AI, and how humans could work together with this nascent technology to do great things. Nadella feels that humans and machines can work together to address society's greatest challenges, including diseases and poverty. But he admits that this will require "a bold and ambitious approach that goes beyond anything that can be achieved through incremental improvements to current technology," he wrote. You can read the long essay here. GeekWire has summarized the principles and goals postulated by Nadella. From the article: AI must be designed to assist humanity.
AI must be transparent.
AI must maximize efficiencies without destroying the dignity of people.
AI must be designed for intelligent privacy.
AI needs algorithmic accountability so humans can undo unintended harm.
AI must guard against bias.
It's critical for humans to have empathy.
It's critical for humans to have education.
The need for human creativity won't change.
A human has to be ultimately accountable for the outcome of a computer-generated diagnosis or decision.

AI

Let's Stop Freaking Out About Artificial Intelligence (fortune.com) 150

Former Google CEO and current Alphabet Executive Chairman Eric Schmidt and Google X founder Sebastian Thrun, in an op-ed in Fortune magazine, have shared their views on artificial intelligence, and what the future holds for this nascent technology. "When we first worked on the AI behind self-driving cars, most experts were convinced they would never be safe enough for public roads. But the Google Self-Driving Car team had a crucial insight that differentiates AI from the way people learn. When driving, people mostly learn from their own mistakes. But they rarely learn from the mistakes of others. People collectively make the same mistakes over and over again," they wrote. The two also talked about an artificial intelligence apocalypse, adding that while it's unlikely to happen, the situation is still worth considering. They wrote: Do we worry about the doomsday scenarios? We believe it's worth thoughtful consideration. Today's AI only thrives in narrow, repetitive tasks where it is trained on many examples. But no researchers or technologists want to be part of some Hollywood science-fiction dystopia. The right course is not to panic -- it's to get to work. Google, alongside many other companies, is doing rigorous research on AI safety, such as how to ensure people can interrupt an AI system whenever needed, and how to make such systems robust to cyberattacks. It's a long commentary, but worth a read.
AI

AI Downs 'Top Gun' Pilot In Dogfights (dailymail.co.uk) 441

schwit1 writes from a report via Daily Mail: [Daily Mail reports:] "The artificial intelligence (AI) developed by a University of Cincinnati doctoral graduate was recently assessed by retired USAF Colonel Gene Lee -- who holds extensive aerial combat experience as an instructor and Air Battle Manager with considerable fighter aircraft expertise. He took on the software in a simulator. Lee was not able to score a kill after repeated attempts. He was shot out of the air every time during protracted engagements, and the AI, according to Lee, is 'the most aggressive, responsive, dynamic and credible AI I've seen to date.'" And why is the US still throwing money at the F-35, unless it can be flown without pilots? The AI, dubbed ALPHA, features a genetic fuzzy tree decision-making system, which is a subtype of fuzzy logic algorithms. The system breaks larger tasks into smaller tasks, which include high-level tactics, firing, evasion, and defensiveness. It can calculate the best maneuvers in various, changing environments over 250 times faster than its human opponent can blink. Lee says, "I was surprised at how aware and reactive it was. It seemed to be aware of my intentions and reacting instantly to my changes in flight and my missile deployment. It knew how to defeat the shot I was taking. It moved instantly between defensive and offensive actions as needed."
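The fuzzy-logic building block that ALPHA's "genetic fuzzy tree" composes can be shown in miniature: membership functions turn crisp sensor values into graded truths between 0 and 1, and simple rules combine them to pick an action. The membership breakpoints and rules below are invented for illustration; ALPHA evolves its rule parameters with a genetic algorithm and organizes rules into a tree of sub-problems, which this toy omits.

```python
# Toy fuzzy-logic controller: graded memberships in place of hard
# thresholds, rules combined with fuzzy AND (min). Breakpoints and rules
# are illustrative assumptions, not ALPHA's.

def tri(x, lo, peak, hi):
    """Triangular membership: 0 outside [lo, hi], rising to 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def choose_maneuver(range_km, closing_rate_ms):
    threat_close = tri(range_km, 0, 0, 10)           # high when enemy is near
    closing_fast = tri(closing_rate_ms, 0, 300, 600)  # peaks near 300 m/s
    # Rule strengths; the strongest rule wins.
    rules = {
        "evade": min(threat_close, closing_fast),  # near AND closing fast
        "fire": threat_close * 0.8,                # near at all
        "patrol": 1.0 - threat_close,              # nothing nearby
    }
    return max(rules, key=rules.get)
```

Because memberships are graded rather than binary, behavior changes smoothly as the situation evolves, which is part of why such controllers can react fluidly many times per second.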
AI

Drivers Prefer Autonomous Cars That Don't Kill Them (hothardware.com) 451

"A new study shows that most people prefer that self-driving cars be programmed to save the most people in the event of an accident, even if it kills the driver," reports Information Week. "Unless they are the drivers." Slashdot reader MojoKid quotes an article from Hot Hardware about the new study, which was published by Science magazine. So if there is just one passenger aboard a car, and the lives of 10 pedestrians are at stake, the survey participants were perfectly fine with a self-driving car "killing" its passenger to save many more lives in return. But on the flip side, these same participants said that if they were shopping for a car to purchase or were a passenger, they would prefer to be in a vehicle that would protect their lives by any means necessary. Participants also balked at the notion of the government stepping in to regulate the "morality brain" of self-driving cars.
The article warns about a future where "a harsh AI reality may whittle the worth of our very existence down to simple, unemotional percentages in a computer's brain." MIT's Media Lab is now letting users judge for themselves, in a free online game called "Moral Machine" simulating the difficult decisions that might someday have to be made by an autonomous self-driving car.
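The two policies the survey contrasts reduce, at their starkest, to a few lines of code, which is exactly what makes the prospect unsettling. A bare-bones sketch, with the outcome model deliberately oversimplified (two discrete options, known counts, certain outcomes -- none of which hold in a real crash):

```python
# Starkly simplified crash policies from the study's dilemma.
# "swerve" sacrifices the car's occupants; "stay" sacrifices pedestrians.
# Assumes exactly two options with known, certain outcomes -- an
# illustration of the ethical contrast, not a real planner.

def utilitarian(passengers, pedestrians):
    """Minimize total deaths: sacrifice whichever group is smaller."""
    return "swerve" if passengers < pedestrians else "stay"

def self_protective(passengers, pedestrians):
    """Always protect the occupants, whatever the count outside."""
    return "stay"
```

Survey respondents endorsed `utilitarian` for everyone else's car and `self_protective` for their own -- the social dilemma the Science paper is about.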
Robotics

Artificially Intelligent Russian Robot Escapes...Again (livescience.com) 89

Slashdot reader Taco Cowboy brings a new report about Russian robot IR77, which has escaped from its research lab again... The story goes that an engineer working at Promobot Laboratories, in the Russian city of Perm, had left a gate open. Out trundled Promobot, traveling some 150 feet into the city before running out of juice. There it sat, batteries mostly dead, in the middle of a Perm street for 40 minutes, slowing cars to a halt and puzzling traffic cops.

A researcher at Promobot's facility in Russia said that the runaway robot was designed to interact with human beings, learn from experiences, and remember places and the faces of everyone it meets. Other versions of the Promobot have been docile, but this one just can't seem to fall in line, even after the researchers reprogrammed it twice. Despite several rewrites of Promobot's artificial intelligence, the robot continued to move toward exits. "We have changed the AI system twice," Kivokurtsev said. "So now I think we might have to dismantle it".

Fans of the robot are pushing for a reprieve, according to an article titled 'Don't kill it!': Runaway robot IR77 could be de-activated because of 'love for freedom'.
AI

Scientists Force Computer To Binge On TV Shows and Predict What Humans Will Do (geekwire.com) 63

An anonymous reader quotes a report from GeekWire: Researchers have taught a computer to do a better-than-expected job of predicting what characters on TV shows will do, just by forcing the machine to study 600 hours' worth of YouTube videos. The researchers developed predictive-vision software that uses machine learning to anticipate what actions should follow a given set of video frames. They grabbed thousands of videos showing humans greeting each other, and fed those videos into the algorithm. To test how much the machine was learning about human behavior, the researchers presented the computer with single frames that showed meet-ups between characters on TV sitcoms it had never seen, including "The Big Bang Theory," "Desperate Housewives" and "The Office." Then they asked whether the characters would be hugging, kissing, shaking hands or exchanging high-fives one second afterward. The computer's success rate was 43 percent. That doesn't match a human's predictive ability (72 percent), but it's way better than random (25 percent) as well as the researchers' benchmark predictive-vision programs (30 to 36 percent). The point of the research is to create robots that do a better job of anticipating what humans will do. MIT's Carl Vondrick and his colleagues are due to present the results of their experiment next week at the International Conference on Computer Vision and Pattern Recognition in Las Vegas. "[The research] could help a robot move more fluidly through your living space," Vondrick told The Associated Press. "The robot won't want to start pouring milk if it thinks you're about to pull the glass away." You can watch their YouTube video to learn more about the experiment.
AI

Apple Won't Collect Your Data For Its AI Services Unless You Let It (recode.net) 36

Apple doesn't like collecting your data. This is one of the iPhone maker's biggest selling points. But this approach has arguably acted as a major roadblock for Apple in its AI and bot efforts. With iOS 10, the latest version of the company's mobile operating system, Apple announced that it will begin collecting a range of new information as it seeks to make Siri, the iPhone, and other apps and services better at predicting the information their owner might want at a given time. Apple announced that it will be collecting data employing something called differential privacy. The company wasn't very clear at the event, which caused confusion among many as to what data Apple is exactly collecting. But now it is offering more explanation. Recode reports: As for what data is being collected, Apple says that differential privacy will initially be limited to four specific use cases: New words that users add to their local dictionaries, emojis typed by the user (so that Apple can suggest emoji replacements), deep links used inside apps (provided they are marked for public indexing) and lookup hints within notes. Apple will also continue to do a lot of its predictive work on the device, something it started with the proactive features in iOS 9. This work doesn't tap the cloud for analysis, nor is the data shared using differential privacy. Additionally, Recode adds that Apple hasn't yet begun collecting data, and it will ask for a user's consent before doing so. The company adds that it is not using users' cloud-stored photos to power its image recognition feature.
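"Differential privacy" covers a family of techniques; the classic, easy-to-demonstrate one is randomized response, sketched below. Apple's actual mechanisms for emoji and dictionary statistics are more elaborate (hashing plus calibrated noise), and the survey framing and numbers here are invented -- but the core trick is the same: each individual report is noisy enough to be deniable, while the aggregate statistic is still recoverable.

```python
# Randomized response, the textbook differential-privacy mechanism.
# Each user reports the truth only half the time; otherwise they report
# a fair coin flip. Any single answer is deniable, but the population
# rate can be recovered by inverting the known noise process.

import random

def randomized_response(truth, rng):
    """Return the true bit with prob 1/2, else a uniform random bit."""
    if rng.random() < 0.5:
        return truth
    return rng.random() < 0.5

def estimate_rate(reports):
    """E[reported] = 0.5*true_rate + 0.25, so solve for true_rate."""
    observed = sum(reports) / len(reports)
    return (observed - 0.25) / 0.5

rng = random.Random(42)
true_rate = 0.3  # fraction of users for whom the sensitive bit is True
reports = [randomized_response(rng.random() < true_rate, rng)
           for _ in range(100_000)]
est = estimate_rate(reports)  # close to 0.3, though no report is trustworthy
```

The privacy/accuracy trade-off is explicit: more noise per report means stronger deniability but a larger sample is needed for the same estimate quality, which is why this style of collection suits aggregate statistics like popular new words rather than per-user inference.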
AI

Elon Musk's Open Source OpenAI: We're Working On a Robot For Your Household Chores (zdnet.com) 64

An anonymous reader writes from a report via ZDNet: OpenAI, the artificial-intelligence non-profit backed by Elon Musk, Amazon Web Services, and others, is working on creating a physical robot that performs household chores. In a blog post Monday, OpenAI leaders said they don't want to manufacture the robot itself, but "enable a physical robot [...] to perform basic housework." The company says it is "inspired" by DeepMind's work in the deep learning and reinforcement learning field of AI, as displayed by its AlphaGo victory over human Go masters. OpenAI says it wants to "train an agent capable enough to solve any game," noting that significant advances in AI will be required in order for that to happen. In May, the company released a public beta of OpenAI Gym, an open source toolkit for computer programmers working on AI. They also have plans to build an agent that can understand natural language and seek clarification when following instructions to complete a task. OpenAI plans to build new algorithms that can advance this field. Finally, OpenAI wants to measure its progress across games, robotics, and language-based tasks, which is where OpenAI's Gym Beta will come into play.
IBM

IBM Engineer Builds a Harry Potter Sorting Hat Using 'Watson' AI (thenextweb.com) 117

An anonymous reader writes: As America celebrates Father's Day, The Next Web reports on an IBM engineer who found a way to combine his daughters' interest in the Harry Potter series with an educational home technology project. Together they built a Hogwarts-style sorting hat -- which assigns its wearer into an appropriate residence house at the school of magic -- and it does it using IBM's cognitive computing platform Watson. "The hat uses Watson's Natural Language Classifier and Speech to Text to let the wearer simply talk to the hat, then be sorted according to what he or she says..." reports The Next Web. "Anderson coded the hat to pick up on words that fit the characteristics of each Hogwarts house, with brainy and cleverness going right into Ravenclaw's territory and honesty a recognized Hufflepuff attribute."
The hat's algorithm would place Stephen Hawking and Hillary Clinton into Ravenclaw, according to the article, while Donald Trump "was assigned to Gryffindor for his boldness -- but only with a 48 percent certainty."

The sorting hat talks, drawing its data directly from the IBM Cloud, and if you're interested in building your own, the IBM engineer has shared a tutorial online.
Transportation

Will Self-Driving Cars Destroy the Auto Insurance Industry? (siliconvalley.com) 299

An anonymous reader quotes an article from the Bay Area News Group: Imagine your fully autonomous self-driving car totals a minivan. Who pays for the damages? "There wouldn't be any liability on you, because you're just like a passenger in a taxi," says Santa Clara University law professor Robert Peterson. Instead, the manufacturer of your car or its software would probably be on the hook... Virtually everything around car insurance is expected to change, from who owns the vehicles to who must carry insurance to who -- or what -- is held responsible for causing damage, injuries and death in an accident. Ironically, if you're only driving a semi-autonomous car, "you could end up in court fighting to prove the car did wrong, not you," according to the article. Will human drivers be considered a liability -- by insurers, and even by car owners? The article notes that Google is already testing a car with no user-controlled brake pedal or steering wheel. Of course, one consumer analyst warns the newspaper that "hackers will remain a risk, necessitating insurance coverage for hostile takeover of automated systems..."
