When you think of Artificial Intelligence, the image of evil killer robots (Terminator) probably comes to mind.
Fortunately, science has a long way to go before we get there, but it is slowly making progress. The most prominent application of artificial intelligence at the moment is in fact large-scale advertising and data management: so-called “Big Data” is processed using the technique of “machine learning”, allowing Facebook to show you slightly less boring adverts. As thinking machines become more and more important to modern society, and the benefits and uses of the technology more apparent, funds are beginning to pour into further research. In other words, don’t be too surprised if robot chefs and butlers become the norm.
Aristotle laid the foundation for formal research into computer science by describing the syllogism: a methodical analysis of reasoning and argument. By breaking the process of arguing down into small mechanical steps, the skills of arguing and thinking can, in principle, be transferred to computers.
George Boole carried on in the same vein as Aristotle; in 1854 he set out to "investigate the fundamental laws of those operations of the mind by which reasoning is performed", going so far as to translate reasoning into a calculus. From his work emerged Boolean algebra, the “if… then” logic at the heart of modern computing.
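Boole’s idea can be sketched in a few lines of code. The following is purely illustrative (not from the article): it defines the “if… then” connective as a function and checks Boole’s basic operations over every combination of truth values.

```python
# A minimal illustration of Boolean algebra: AND, OR and NOT, plus the
# "if... then" connective (material implication), evaluated over every
# combination of truth values.

def implies(p, q):
    """'If p then q' is false only when p is true and q is false."""
    return (not p) or q

# Print a truth table for the four basic operations.
print("p     q     AND   OR    IF-THEN")
for p in (False, True):
    for q in (False, True):
        print(p, q, p and q, p or q, implies(p, q))
```

The striking point, which Boole saw, is that reasoning of the form “if it rains, then the ground is wet” reduces to the same mechanical table-lookup a machine can perform.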
Although these are just two of many scientists who made serious contributions to the study of artificial intelligence, the most important figure in the subject is undoubtedly Alan Turing. In his 1950 paper “Computing Machinery and Intelligence”, Turing described the “Turing Test”: a theoretical way of deciding whether a machine should be called “intelligent”. A machine is said to pass the Turing Test if it can win an ‘Imitation Game’ (sound familiar?). To set up the imitation game, we need three participants in separate rooms: an interrogator, a human and a machine. The interrogator asks questions and, from the answers of the other two participants, must determine which is the human and which is the machine. If the machine tricks the interrogator often enough, it is said to have won the imitation game, and so, according to the Turing Test, must be considered intelligent.
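The structure of the game can be made concrete with a toy simulation. Everything below is illustrative, not from Turing’s paper: both “participants” answer at random, and the interrogator guesses at random, so the machine fools the interrogator about half the time. The point is the protocol (hidden identities, questions, a forced guess), not the quality of the AI.

```python
import random

random.seed(1)

def machine_answer(question):
    # A hopeless "machine": answers at random.
    return random.choice(["yes", "no"])

def human_answer(question):
    # For this sketch the human also answers at random.
    return random.choice(["yes", "no"])

def play_round(questions):
    """One round of the imitation game; returns True if the
    interrogator correctly identifies the machine."""
    participants = [("machine", machine_answer), ("human", human_answer)]
    random.shuffle(participants)  # hide who is in which room
    for name, answer in participants:
        for q in questions:
            answer(q)  # the interrogator sees only these answers
    guess = random.choice([0, 1])  # a naive interrogator guesses
    return participants[guess][0] == "machine"

fooled = sum(not play_round(["Do you dream?"] * 5) for _ in range(1000))
print(f"Interrogator fooled in {fooled} of 1000 rounds")
```

A machine “passes” only if it is fooling the interrogator roughly as often as chance would allow, which is exactly what this indistinguishable-by-construction machine achieves.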
Modern intelligent machines are able to hold simple conversations, for example ‘Cleverbot’, ‘Siri’ or Windows’ new ‘Cortana’. Claiming these are true artificial intelligence, however, is like comparing a puppet to a real human: beyond the surface of simple conversation, the machine has no understanding of the deeper meanings and complexities. It would be impossible, using conventional methods, to give a machine ‘declarative knowledge’, because programmers would have to hand-code every nuance of speech. An alternative method, which many experts believe will eventually lead to true artificial intelligence, is machine learning: a machine is programmed with only the simplest understanding of the tasks it will be given, and the rest is ‘taught’ to it by a human instructor. This is made possible by mimicking the way neurons in organic brains work. This pseudo-natural approach allows not only for learning, but also for artificial evolution.
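The “taught, not programmed” idea can be shown with the simplest artificial neuron, the perceptron. The sketch below is illustrative (all names and numbers are chosen for the example, not taken from the article): instead of being told the rules of logical AND, the neuron is shown examples and nudges its internal weights toward the right answers.

```python
import random

# Training examples for logical AND: inputs and the desired output.
training_data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = random.uniform(-1, 1)
learning_rate = 0.1

def predict(inputs):
    """The neuron 'fires' (outputs 1) if its weighted sum is positive."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else 0

# Repeatedly show the neuron the examples and adjust its weights in
# the direction of the error -- the perceptron learning rule.
for _ in range(100):
    for inputs, target in training_data:
        error = target - predict(inputs)
        for i in range(2):
            weights[i] += learning_rate * error * inputs[i]
        bias += learning_rate * error

print([predict(inputs) for inputs, _ in training_data])
```

Nobody wrote an “AND rule” into the program; the behaviour emerged from examples and corrections, which is the essential difference between machine learning and conventional programming.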
The obvious application for artificial intelligence in society is to employ intelligent robots in the menial, boring jobs that humans don’t want to do, such as assembly-line work and mining. However, this almost seems a waste of the potential of more intelligent machines. As previously mentioned, artificial intelligence can be put to great use organising huge amounts of data, for example in shopping suggestions or company logistics. On the more interesting side of things, artificial intelligence is used in games to create engaging interactions: look up a game called ‘Facade’, in which you talk to two intelligent characters and try to stop them from falling out with each other.
More worryingly, super-intelligent robot overlords (as in “The Matrix”) could actually take over civilisation if we aren’t careful. It sounds like a joke, but many respected scientists have warned about the dangers of artificial intelligence: Stephen Hawking, along with Elon Musk, Steve Wozniak and thousands of other prominent experts, signed an open letter urging a ban on autonomous weapons and military AI. If the thought of killer robots made you laugh before, perhaps that will make you think twice. This doesn’t mean we should abandon artificial intelligence and the advantages we could gain from it, but going forward, caution is definitely advised.
Image sourced under the Creative Commons license.