From Jeopardy winners and Go masters to infamous advertising-related racial profiling, it would seem we have entered an era in which artificial intelligence developments are rapidly accelerating. But a fully sentient being whose electronic “brain” can fully engage in complex cognitive tasks using fair moral judgement remains, for now, beyond our capabilities.
Unfortunately, current developments are generating a general fear of what artificial intelligence could become in the future. Its representation in recent pop culture shows how cautious – and pessimistic – we are about the technology. The problem with fear is that it can be crippling and, at times, promote ignorance.
Learning the inner workings of artificial intelligence is an antidote to these worries. And this knowledge can facilitate both responsible and carefree engagement.
The core foundation of artificial intelligence is machine learning, an elegant and widely accessible tool. But to see why the pros of its potential outweigh its cons, we first need to understand what machine learning actually is.
Data are the key
Simply put, machine learning means teaching computers how to analyse data and solve particular tasks using algorithms. In handwriting recognition, for example, classification algorithms learn to tell letters apart based on examples of someone’s handwriting. With housing data sets, on the other hand, regression algorithms estimate, in a quantifiable way, the selling price of a given property.
What would a machine say to this? Jonathan Khoo/Flickr, CC BY-NC-ND
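To make those two task types concrete, here is a minimal sketch in Python using the scikit-learn library. The handwriting part uses the small digits dataset that ships with the library; the housing part uses invented floor areas, room counts and prices, purely for illustration.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.model_selection import train_test_split

# Classification: recognise handwritten digits from pixel features.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)
print("digit accuracy:", clf.score(X_test, y_test))

# Regression: estimate a selling price from numeric features.
# Floor area (m²) and room count are made-up example features.
homes = np.array([[50, 2], [80, 3], [120, 4], [65, 2], [100, 3]])
prices = np.array([300_000, 450_000, 620_000, 350_000, 530_000])
reg = LinearRegression().fit(homes, prices)
print("predicted price for 90 m², 3 rooms:", reg.predict([[90, 3]])[0])
```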
Machine learning, then, comes down to data. Almost every enterprise generates data in one way or another: think market research, social media, school surveys, automated systems. Machine learning applications try to find hidden patterns and correlations in the chaos of large data sets to develop models that can predict behaviour.
Data have two key elements – samples and features. Samples are the individual elements in a group; features are the characteristics that describe them.
Look at social media as an example: users are samples and their usage can be translated as features. Facebook, for instance, employs different aspects of “liking” activity, which change from user to user, as important features for user-targeted advertising.
Facebook friends can also be used as samples, while their connections to other people act as features, establishing a network where information propagation can be studied.
My Facebook friends network: each node is a friend who might or might not be connected to other friends. The larger the node, the more connections one has. Similar colours indicate similar social circles. https://lostcircles.com/
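In code, such a data set is usually just a table: one row per sample, one column per feature. A minimal sketch, with the users and their “liking” counts entirely made up:

```python
import numpy as np

# Rows are samples (users); columns are features
# (hypothetical counts of likes per topic).
users = ["ana", "ben", "chen"]
features = ["sport_likes", "music_likes", "news_likes"]
X = np.array([
    [12,  3, 0],   # ana
    [ 1, 20, 4],   # ben
    [ 0,  2, 9],   # chen
])
print(X.shape)  # (3 samples, 3 features)
```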
Outside of social media, automated monitoring systems in industrial processes treat time snapshots of the entire process as samples, and sensor measurements at a particular time as features. This allows the system to detect anomalies in the process in real time.
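A minimal sketch of that idea, with invented sensor readings: treat each snapshot as a row, learn what “normal” looks like from history, and flag readings that drift too far from it.

```python
import numpy as np

# Each row is a time snapshot of the process (a sample);
# each column is one sensor's reading (a feature).
history = np.array([
    [70.1, 1.02, 340],
    [69.8, 1.00, 338],
    [70.3, 1.01, 342],
    [70.0, 0.99, 339],
])
mean, std = history.mean(axis=0), history.std(axis=0)

def is_anomalous(snapshot, threshold=3.0):
    """Flag a snapshot if any sensor sits more than
    `threshold` standard deviations from its historical mean."""
    z = np.abs((snapshot - mean) / std)
    return bool((z > threshold).any())

print(is_anomalous(np.array([70.2, 1.01, 341])))  # False: normal
print(is_anomalous(np.array([75.0, 1.30, 400])))  # True: anomaly
```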
All these different solutions rely on feeding data to machines and teaching them to reach their own predictions once they have strategically assessed the given information. And this is machine learning.
Human intelligence as a starting point
Any data can be translated into these simple concepts and any machine-learning application, including artificial intelligence, uses these concepts as its building blocks.
Once data are understood, it’s time to decide what to do with this information. One of the most common and intuitive applications of machine learning is classification: the system learns how to put data into different groups based on a reference data set.
This is directly associated with the kinds of decisions we make every day, whether it’s grouping similar products (kitchen goods versus beauty products, for instance) or choosing good films to watch based on previous experiences. While these two examples might seem completely disconnected, they rely on an essential assumption of classification: the predictions fall into well-established categories.
When picking up a bottle of moisturiser, for example, we use a particular list of features (the shape of the container, for instance, or the smell of the product) to predict – accurately – that it’s a beauty product. A similar strategy is used for picking films by assessing a list of features (the director, for instance, or the actor) to predict whether a film is in one of two categories: good or bad.
By grasping the different relationships between features associated with a group of samples, we can predict whether a film may be worth watching or, better yet, we can create a program to do this for us.
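Here is roughly what such a program could look like. This is a toy sketch: the yes/no features and the verdicts on past films are invented, and a decision tree is just one of many classifiers we could pick.

```python
from sklearn.tree import DecisionTreeClassifier

# Each past film is a sample; the features are invented yes/no
# flags: [favourite director?, favourite actor?, liked genre?].
past_films = [
    [1, 1, 1],
    [1, 0, 1],
    [0, 1, 0],
    [0, 0, 0],
    [1, 0, 0],
    [0, 1, 1],
]
verdicts = ["good", "good", "bad", "bad", "bad", "good"]

model = DecisionTreeClassifier().fit(past_films, verdicts)

# Predict for a new film: favourite director, unfamiliar
# actors, in a genre we usually enjoy.
print(model.predict([[1, 0, 1]])[0])
```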
But to be able to manipulate this information, we need to be a data science expert, a master of maths and statistics, with enough programming skills to make Alan Turing and Margaret Hamilton proud, right? Not quite.
You don’t have to be Alan Turing to have a go at machine learning. CyberHades/Flickr, CC BY-NC
We all know enough of our native language to get by in our daily lives, even if only a few of us can venture into linguistics and literature. Maths is similar; it’s around us all the time, so calculating change from buying something or measuring ingredients to follow a recipe is not a burden. In the same way, machine-learning mastery is not a requirement for its conscious and effective use.
Yes, there are extremely well-qualified and expert data scientists out there but, with a little effort, anyone can learn the basics of machine learning and improve the way they see and take advantage of information.
Algorithm your way through it
Going back to our classification algorithm, let’s think of one that mimics the way we make decisions. We are social beings, so how about social interactions? First impressions are important and we all have an internal model that evaluates in the first few minutes of meeting someone whether we like them or not.
Two outcomes are possible: a good or a bad impression. For every person, different characteristics (features) are taken into account (even if unconsciously) based on several encounters in the past (samples). These could be anything from tone of voice to extroversion and overall attitude to politeness.
For every new person we encounter, a model in our heads registers these inputs and establishes a prediction. We can break this modelling down to a set of inputs, weighted by their relevance to the final outcome.
For some people, attractiveness might be very important, whereas for others a good sense of humour or being a dog person says way more. Each person will develop her own model, which depends entirely on her experiences, or her data.
Different data result in different models being trained, with different outcomes. Our brain develops mechanisms that, while not entirely clear to us, establish how these factors are weighed against each other.
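A minimal sketch of such a weighted-inputs model, with the feature scores and weights set by hand purely for illustration; the whole point of machine learning, as we’ll see, is that a machine can learn these weights from data instead.

```python
# Hypothetical first-impression features, scored 0-1, and one
# person's (hand-set) weights reflecting how much each matters.
weights = {"tone_of_voice": 0.2, "extroversion": 0.1,
           "politeness": 0.4, "humour": 0.3}

def impression(person, threshold=0.5):
    """Weighted sum of feature scores -> good/bad impression."""
    score = sum(weights[f] * person[f] for f in weights)
    return "good" if score >= threshold else "bad"

stranger = {"tone_of_voice": 0.9, "extroversion": 0.3,
            "politeness": 0.8, "humour": 0.6}
print(impression(stranger))  # "good" (score = 0.71)
```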
What machine learning does is develop rigorous, mathematical ways for machines to calculate those outcomes, particularly in cases where we cannot easily handle the volume of data ourselves. Now more than ever, data are vast and ever-growing. Having access to a tool that actively uses these data for practical problem solving, such as artificial intelligence, means everyone can – and should – explore and exploit them. We should do this not only to create useful applications, but also to put machine learning and artificial intelligence in a brighter, less worrisome perspective.
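Continuing the first-impressions sketch, this is roughly how a machine could learn those weights from a handful of invented past encounters. Logistic regression is one standard choice here, not a claim about how our brains actually do it.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented past encounters: feature scores (tone of voice,
# extroversion, politeness, humour) and whether the impression
# turned out good (1) or bad (0).
X = np.array([[0.9, 0.3, 0.8, 0.6],
              [0.2, 0.1, 0.3, 0.2],
              [0.7, 0.8, 0.9, 0.9],
              [0.1, 0.4, 0.2, 0.3],
              [0.8, 0.6, 0.7, 0.4],
              [0.3, 0.2, 0.1, 0.1]])
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)
print(model.coef_)                            # the learned weights
print(model.predict([[0.9, 0.3, 0.8, 0.6]]))  # predicted impression
```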
There are several machine-learning resources out there, although most do require some programming ability. Material for many popular languages is available, from basic tutorials to full courses. It takes nothing more than an afternoon to start venturing into it with palpable results.
All this is not to say that the concept of machines with human-like minds should not concern us. But knowing more about how these minds might work will give us the power to be agents of positive change in a way that allows us to maintain control over artificial intelligence – and not the other way around.
Matt Escobar receives funding from the Core Research for Evolutionary Science and Technology (CREST) project 'Development of a knowledge-generating platform driven by big data in drug discovery through production processes' of the Japan Science and Technology Agency (JST).