Machine Learning

By David Higgins

What is Machine Learning?

Machine learning, a subfield of artificial intelligence rather than a synonym for it, describes the automated, iterative process of creating programs with the ability to learn and improve themselves with as little human input as possible. Developing a model using machine learning involves several key steps, as follows:

  1. First, decide what the algorithm is going to do. This is the end goal that the machine is building towards.

  2. The next step is to gather data relevant to the end goal. This could include numbers, colours, pictures, sounds and so on.

  3. After that, the data is prepared. This involves ensuring the gathered data is evenly distributed over the relevant fields, unbiased and in a random order.

  4. A model is then made. A model is like a function which takes numerous variables of different data types and uses them to compute a result, which it then returns. The hope is that the returned value will answer the question.

  5. The next, and probably the most significant, stage is training the model. Here the gathered data is fed into the model in the hope that it will return the correct value. At first the model will perform very poorly, but after each iteration it updates itself. Through repeated testing and updating, the model gradually becomes more accurate, until it can perform the task to a much higher degree of accuracy.

  6. After this, an evaluation is performed. Here the model is run on data it has never seen before – data held out from the training set. This gives the developer an indication of whether or not the model can be used in the real world.

  7. Once the evaluation is completed, the developer can see what is missing from the model and potentially add or withdraw the data inputs it uses. These adjustments can further improve the model’s accuracy. The evaluation can also help determine how much the model should update itself during each iteration of training – the “learning rate.”

  8. Finally, after repeated training and evaluation, the model may be used in real life to perform tasks for a user.
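The workflow above can be sketched as a minimal training loop. The example below fits a toy linear model by gradient descent in Python; the data, parameter names and `learning_rate` value are all illustrative, chosen only to show how the steps fit together, not taken from any particular library.

```python
import random

# Steps 1-3: decide the goal (predict y from x), gather and prepare data.
random.seed(0)
data = [(x, 2 * x + 1) for x in range(10)]  # toy data following y = 2x + 1
random.shuffle(data)                         # random order, as in step 3
train, test = data[:8], data[8:]             # hold out unseen data for step 6

# Step 4: the model is a function with parameters (weight w, bias b).
w, b = 0.0, 0.0
learning_rate = 0.01  # how much the model updates itself each iteration

# Step 5: training - repeatedly test the model and nudge its parameters.
for epoch in range(1000):
    for x, y in train:
        error = (w * x + b) - y
        w -= learning_rate * error * x
        b -= learning_rate * error

# Step 6: evaluation - measure error on data the model has never seen.
mse = sum((w * x + b - y) ** 2 for x, y in test) / len(test)
```

After training, `w` and `b` should be close to the true values 2 and 1, and the mean squared error on the held-out data should be near zero; steps 7 and 8 would then correspond to tuning `learning_rate` or the inputs and deploying the model.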

The goal of machine learning is to carry out as many of these stages as possible without human input. The developer can give the algorithm instructions on how to modify itself, but from there it is on its own to learn and develop. This saves the developer countless hours of fine-tuning and testing models by hand.



The history of Machine Learning

The concept of machine learning and A.I. is said to have originated in 1950, when Alan Turing published a paper called “Computing Machinery and Intelligence.” In it he considers the question “Can machines think?” Because of the ambiguity surrounding “machines” and “thinking,” both then and still today, Turing proposed a thought experiment called the “Imitation Game.” The game consists of three players, in which one player tries to guess the gender of the other two through written notes, being unable to see them. Turing suggested that a computer could take the role of one of the players and trick the guesser into believing it is human. The game consequently led to what is now known as the “Turing Test”: if a computer can trick a human into thinking it too is human, it is said to have passed the Turing Test and to display a certain level of computational intelligence.

Seven years later, the computer scientist Frank Rosenblatt designed the first trainable neural network, the perceptron. A neural network, inspired by biological networks of neurons, uses example data to ‘learn’ how to do a specific task and can then be applied to real data. Over the following 60 years, as computers have improved drastically, so has machine learning, and it is now applied in many areas of everyday technological life.
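Rosenblatt’s idea can be illustrated with a minimal perceptron: a single artificial neuron that learns a rule from labelled examples. The sketch below (a simplification, not Rosenblatt’s original design) learns the logical AND function from its four input/output examples.

```python
# Example data: the logical AND function as (inputs, target) pairs.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0

def predict(x1, x2):
    # Fire (output 1) if the weighted sum of inputs crosses the threshold.
    return 1 if w1 * x1 + w2 * x2 + bias > 0 else 0

# Perceptron learning rule: nudge the weights towards each mistaken example.
for _ in range(10):
    for (x1, x2), target in examples:
        error = target - predict(x1, x2)
        w1 += error * x1
        w2 += error * x2
        bias += error

print([predict(x1, x2) for (x1, x2), _ in examples])  # → [0, 0, 0, 1]
```

After a few passes over the examples the weights settle and the neuron reproduces AND exactly, which is the sense in which it has ‘learned’ the task from example data.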



Where is Machine Learning currently used?

Machine learning works best when supported by as much data as possible, which helps it to be as accurate as possible. In today’s day and age, with the magnitude of data available, machine learning is successfully used in many real-life applications. Strong examples are map-based technologies such as Google Maps and Uber Eats, where the software learns from the people currently using it: it retrieves data from them, compares it with data held in databases and shares the result with other users in real time, all in an automated process. This is used, for example, to estimate the quickest route given the current traffic situation. Other examples of where machine learning is used include facial recognition technology, advertisement targeting, image inspection, driverless cars, language translation software and even robotic hardware.



The future of Machine Learning

The future of machine learning is exciting and promising. In an interview with WIRED, President Barack Obama described AI and machine learning as “seeping into our lives in all sorts of ways that we just don’t notice.” This is true of many, and a growing number of, aspects of our lives, as automated technology is simply becoming more effective and more efficient than any human ability. Experts predict that there are still many ways in which machine learning can and will be used. Some companies intend to use machine learning so that clients can design and create their own software without needing to write code. Machine learning could also lead to smarter search engines, more efficient program development and better personalisation in consumer software. Computer scientists also believe that the rise of quantum computing will accelerate machine learning, allowing it to solve problems and perform tasks which would otherwise have seemed impossible. However, truth be told, the future of machine learning is very difficult to predict. If you had told experts twenty years ago about machine learning’s capabilities today, they would almost certainly have been sceptical.



Bibliography