How would you feel about an AI that can learn on its own? Simply amazing! And what would you call an artificial intelligence capable of teaching itself? EI (Emotional Intelligence), SAI (Super Artificial Intelligence) or something else; but to be exact, DeepMind it is!
DeepMind is an AI built by a British company, DeepMind Technologies, founded in September 2010. The co-founders, Demis Hassabis, Shane Legg and Mustafa Suleyman, were backed by tech entrepreneurs and investors when the company was established. The company focused on creating a 'neural network' able to learn to play video games the way a human does. Alongside that network, it also worked on the 'Neural Turing Machine', which functions much like the short-term memory of the human brain. Google, under co-founder Larry Page, acquired the company in 2014 for a reported $500 million and renamed it Google DeepMind.
Alternatives such as IBM's Deep Blue, IBM Watson, Rolls-Royce's AI, or robots like Zenbo, Cozmo and Sony's AI all operate within the systems they were programmed with. DeepMind, on the contrary, is not bound to pre-determined programs: it is able to learn from experience. As the name suggests, it uses a deep learning mechanism that attends to even the smallest detail, steadily refining its performance.
The AI has been tested on video games: retro arcade titles such as Space Invaders, Doom and Ms. Pac-Man, as well as board games like chess and Go. It is capable of understanding the game scene and discovering how to win without altering the original game code.
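DeepMind's published game-playing approach paired this kind of trial-and-error learning (reinforcement learning) with deep neural networks. The sketch below shows the core idea on a toy scale: tabular Q-learning, where an agent improves purely from the rewards it experiences. The tiny "corridor" environment, its rewards, and all parameters here are illustrative assumptions for the sketch, not DeepMind's actual setup.

```python
import random

# A toy "corridor" game: the agent starts at cell 0 and wins by
# reaching cell 4. This stands in for a real game screen; the
# environment and parameters below are illustrative assumptions.
N_STATES = 5          # cells 0..4; cell 4 is the goal
ACTIONS = [-1, +1]    # step left or right

def step(state, action):
    """Apply an action and return (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    if next_state == N_STATES - 1:
        return next_state, 1.0, True   # reached the goal
    return next_state, 0.0, False

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning: estimate action values from experience."""
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Explore occasionally; otherwise exploit the best known action.
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            nxt, reward, done = step(state, action)
            # Q-learning update: nudge the estimate toward the reward
            # plus the discounted value of the best next action.
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
    return q

q = train()
# The learned policy: the best action in each non-goal cell.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

After training, the agent prefers stepping right in every cell, having never been told the rules of the game. DeepMind's systems replace this small lookup table with a deep network that reads raw pixels, which is what lets the same method scale to Atari games.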
DeepMind Technologies started this project to create a general intelligence system through the combination of machine learning and systems neuroscience. The combination of these fields led to powerful learning algorithms.
The company now focuses on systems that can learn to play games on their own, from the board game Go to the arcade game Breakout, breaking records in strategy and arcade games alike. It was after the AI learned to play a range of published Atari games that Google acquired the project.
In 2015, the company developed a program named AlphaGo to compete with a professional human player. The program, powered by DeepMind, beat the European Go champion Fan Hui. Go is a traditional Chinese board game that computers had previously only played at a beginner level; its vast number of possible positions gives humans a far greater edge than in games like chess. The news was withheld until January 27, 2016, to coincide with the publication of the algorithm in the journal Nature. Following that announcement, AlphaGo beat the 9-dan player Lee Sedol 4-1 in a five-game match.
DeepMind is not just a game player; it is also capable of learning to tackle real-world situations. The system has been applied to one of the most pressing problems on earth: energy usage.
To that end, the company applied its algorithms to Google's vast server fleet, devising new ways to cool data centres, while Google invested in renewable sources toward its goal of 100 percent clean energy. Google's servers now deliver roughly 3.5 times the computing power for the same energy as they did five years ago.
Source: DeepMind Official