
Google Won't Allow Use Of Their AI In Developing Weapons

Jun 09, 2018
Google announced on Thursday that it will not allow its AI software to be used in the development of weapons, setting new standards for its business decisions in this contentious area.
Google's management is trying to defuse tension between the company's employees and the government work it is involved in, following a significant level of protest from its staff. The outcry was prompted by a US military project to identify objects in drone video footage; the company has now said that its government contracts will instead target fields such as cybersecurity and military recruitment.

“We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” said Google CEO Sundar Pichai.



Improvements in computer performance and new processor architectures have made AI practical over the last few years. Google is one of the biggest sellers of AI-powered tools, which help computers review large data sets and learn from them faster than humans could.

An anonymous Google employee said that had the new principles been in place at the time, the drone project would not have been taken on. Google plans to honour its commitment to the project until next March; however, a petition signed by more than 4,600 Google employees calls for the work to stop sooner. Microsoft and other companies published their general AI guidelines before Google did, but Google's have received more attention because of its current involvement with the drone project.

“The clear statement that they won’t facilitate violence or totalitarian surveillance is meaningful,” University of Washington technology law professor Ryan Calo tweeted on Thursday.

The company has also recommended that, given concerns over current security systems, developers avoid launching AI programs capable of ‘significant damage’, presumably until those systems catch up with rapidly advancing AI technology.

Feel free to comment on this article.

