Artificial Intelligence: Armageddon and Digital Dreamland

Artificial Intelligence is usually defined as the science and development of computer systems that are able to perform tasks and generate knowledge through their software, thereby replicating human behaviour and the way we use our brains.
However, since computer scientist John McCarthy – known as the father of Artificial Intelligence – first spoke out about this revolution, it seems that plenty of others have jumped on the bandwagon to give their views on the impact Artificial Intelligence will have on the world as we know it.
So, with that in mind, we have put together this post to take a look back at the Artificial Intelligence (AI) timeline and at what the future might hold for the human race and technology alike.
Think you might be interested in this controversial phenomenon? Read on to find out more.

The beginnings of Artificial Intelligence

Since the development of the electronic computer in 1941 and the stored-programme computer in 1949, it’s safe to say that technology has come a long way in not only popularity but practicality, too. However, the link between human intelligence and machines wasn’t widely researched until the late 1950s.
A discovery that assisted much of the early development of AI was made by Norbert Wiener – an American mathematician and philosopher. His theory suggested that intelligent behaviour was in fact the product of feedback mechanisms, and that such mechanisms could conceivably be replicated by machines.
As previously mentioned, the person who finally coined the term Artificial Intelligence was John McCarthy. The conference he organised – the Dartmouth Summer Research Project on Artificial Intelligence, held in 1956 – was designed to bring together those who were interested in machine intelligence. As time went on, AI research centres adopted this collaborative approach and new challenges emerged.
As more programmes were introduced, a major breakthrough came in 1958 with McCarthy’s creation of the LISP (list processing) language. It was soon adopted by many researchers and is still in use today.
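If you’re wondering what ‘list processing’ actually involves, here is a minimal sketch of the idea – written in Python for readability rather than in Lisp itself – showing the head-and-tail recursion over lists that Lisp made famous. The function name total is purely illustrative.

    # A minimal sketch of list processing, in Python rather than Lisp.
    # The recursion mirrors Lisp's car/cdr idiom: peel off the first
    # element of the list, then recurse on the rest.

    def total(items):
        """Sum a list recursively, Lisp-style."""
        if not items:              # empty list: the base case
            return 0
        head, *tail = items        # split into first element and the rest
        return head + total(tail)

    print(total([1, 2, 3, 4]))     # prints 10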

Common views on Artificial Intelligence

Billionaire, philanthropist and all-round dreamboat Bill Gates revealed back in January the looming threat he believes Artificial Intelligence poses to humanity. He spoke of how he couldn’t fathom why the human race wasn’t in turmoil about how such technology could advance to the point where it was beyond human control.
The Microsoft co-founder went on to state that he was ‘in the camp that is concerned about super intelligence.’ An elite camp indeed, given that he is joined by famed physicist Stephen Hawking and SpaceX CEO Elon Musk (the latter of whom described toying with AI as similar to ‘summoning a demon’).
With such famous forward thinkers recoiling at the prospect of a real-life re-enactment of the Terminator or Matrix franchises, it’s difficult to know exactly where to position one’s own opinion on the subject. However, it’s safe to say that these views certainly get us thinking about where Artificial Intelligence may be heading.

The future of Artificial Intelligence

Computers can now access information on the Internet far faster and more efficiently than any trip to the local library, and there is no denying that Artificial Intelligence is getting smarter. Although Elon Musk and Bill Gates have named Artificial Intelligence as one of humanity’s biggest existential risks, could we really see a decline in the human race to the point where human intelligence no longer matters? In short: will computers eventually be able to make better decisions than humans?
It can be argued that machines are good at complex tasks – especially in today’s society, where technology evolves at breakneck speed – while humans excel at simpler activities and at understanding emotions and feelings. If that is the case, though, surely humans and computers are stronger when they work together?
Businesses in particular have much to gain from emerging technologies, but will those gains come at the expense of the workforce? Will education and teaching be further affected, as they have been by the rise of the digital economy? And will human communication be hindered as a result?
With this in mind, it’s interesting to note that some people believe up to 60% of the work currently done by humans will be handled by machines within twenty years. With advanced technology such as smartphones and tablets being used in the workplace now more than ever, it’s easy to see how this could come about.
In the past five years, advances in AI have put Artificial Intelligence products at the forefront of our lives. Google, Facebook and Microsoft (to name a few) are hiring AI researchers at an alarming rate, all in the race to create better algorithms and smarter computers.
Furthermore, AI solutions that seemed out of reach just a few years ago are now a reality. Deep learning has boosted Android’s speech recognition, Google are building self-driving cars, and robot dogs can now walk very much like their living counterparts – a far cry from the technological ghosts of recent memory, such as the Xybernaut wearable.
The Xybernaut Poma: a technological leapfrog into the abyss.
However, don’t set a date for the ‘End of the Human Race’ party just yet!
Although Stephen Hawking worries that Artificial Intelligence could spell doom for the human race, Gates believes it will be decades before anything resembling Skynet becomes self-aware. True, one can never really know what the future will hold, but in the meantime we have plenty of opportunity to develop fail-safes and to decide just where the bipedal robot with perfect human features will stay in our sky-castle. So the question remains: will Artificial Intelligence mark our immortality or our extinction?
Billionaires are famed for both their eccentricity and their clairvoyance, which unfortunately means we don’t often take them seriously. Artificial Intelligence may always remain in the library of complex thoughts, but it is interesting to note – and even predict – just how much of an impact it could have on our world as a whole.
If this post has got you thinking about AI and its potential future, why not get in touch? If you have any interesting theories, the team at Neil Walker Digital Group would love to hear from you. You can email us on [email protected] or alternatively, you can fill out our contact form.