Is machine learning a threat?

Materials World magazine, 1 Feb 2018

Machine learning could improve efficiency in a range of sectors, but has been labelled a possible threat to jobs. Ellis Davies investigates the pros and cons.

One of the abilities that has helped intelligent life, such as humans, to excel is learning. Giving artificial intelligence this ability could make it far more applicable to many sectors, including research, manufacturing and engineering. The name given to this ability is machine learning, a term first coined by Arthur Samuel in 1959 while working at IBM. The field grew out of work on pattern recognition, and explores the construction of algorithms that can learn from and make predictions on data.

In research, the response has been largely positive, as systems have been created to aid researchers in discovery by crunching data that would take a person far longer to sift through. One such system, developed at MIT, USA, can search scientific papers and extract recipes for producing particular types of materials. In microscopy, ZEISS, Germany, has introduced machine learning to its instruments.

Machine learning is exciting but, for some, worrying. With increased automation in manufacturing already making some wary of the potential implications for the workforce, the addition of machinery and technology that has a human characteristic – the ability to learn – could intensify those fears.

Since machinery was first introduced to industry, people have regarded it with fear and suspicion. The attacks by the Luddites – textile workers in Nottinghamshire, Yorkshire and Lancashire, UK – on knitting frames in 1811 were among the earliest expressions of this sense of threat. Although far more advanced and sophisticated, machine learning could foster a similar mood in today’s industry.

Are the machines taking over?

In a recent study, What can machine learning do? Workforce implications, Erik Brynjolfsson, Director of the MIT Initiative on the Digital Economy, and Tom Mitchell, Professor in the Machine Learning Department of the School of Computer Science, Carnegie Mellon University, USA, wrote that machine learning systems could do a superior job in some, but not all, situations compared with people.

Brynjolfsson and Mitchell argue that we are not facing the imminent end of work as is sometimes proclaimed. They note that machine learning is best suited to jobs that require the sifting of large amounts of data, and in this capacity could compete with humans. But, ‘although parts of many jobs may be suitable for machine learning, other tasks within these same jobs do not fit the criteria well – hence effects on employment are more complex than the simple replacement and substitution story emphasised by some,’ they write. 

It’s all been said before

Additionally, Deputy Director of the Centre for Labour Market Studies, Rostislav Kapeliushnikov, believes that predictions of a labour market apocalypse with mass loss of jobs caused by technological progress are unfounded – they have been made numerous times throughout modern history, and have never come true. 

Technological unemployment – loss of jobs caused by the introduction of new technology – is real, and has long been part of the modern marketplace. It is caused by the market’s need to adapt to innovation, and is therefore usually a short-term phenomenon. Kapeliushnikov says that data for 21 industrially developed countries between 1985 and 2009 suggest that technology-driven falls in employment last three years on average, after which employment rebounds to, or even exceeds, pre-decline levels.

Kapeliushnikov said, ‘In terms of the impact of new technologies, the greatest challenge for economic and social policy today is not so much the demand for labour as its supply.’ This refers to the number of hours people want to work. He theorises that computers and the internet have led to a decline in the amount of time people are willing to spend on the job, as social networks and online entertainment offer plenty of opportunities for leisure activities and socialising.

Supporting research and analysis

Whereas data-driven and manufacturing industries are wary of the implementation of machine learning, researchers are taking advantage of it. ‘The new technology allows resources to be used more efficiently and does not take away the primary job of the researcher to provide learning and understanding,’ said Dr Matthew Andrew of ZEISS. ‘In our application of machine learning [ZEN Intellesis], the user and the algorithm work in harmony, with the researcher providing the analytical insight and the algorithm applying that insight consistently across very large datasets.’

Modern research is drowning in data, leaving researchers with the time-consuming job of filtering out the elements important to their work.

Microscopy is commonly used in materials science for a range of applications, such as sample analysis. ZEISS has recently announced its intention to introduce machine-learning capabilities to its range of instruments. The technology will enable image segmentation that addresses previously insoluble segmentation and classification challenges – classifying correlative, multi-modal data (combined light, X-ray, electron and ion microscopy datasets), handling low signal-to-noise data and data that suffers from sample-preparation artefacts, and segmenting mineralogy information from multi-angled polarised light data.

Andrew explained its relevance to materials science. ‘Machine learning has broad applications across the range of microscopy challenges faced by materials scientists. This technology allows for analysis to be driven by multiple spatially correlated data sources, each giving different information or subject to different contrast mechanisms within the sample,’ he said. Once a model has been trained locally, it can be applied across large microscopy datasets, allowing heterogeneous data sets – varied or dissimilar types of data – to be properly characterised. The machine takes on the tedious analysis so that researchers can get back to their work.
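
The article does not describe the internals of ZEN Intellesis, so the following is only a minimal sketch of trainable pixel classification – the general technique behind tools of this kind, in which a researcher hand-labels a few regions, a classifier is trained on per-pixel features, and the model is then applied consistently across much larger images. The feature bank and the choice of a random forest here are illustrative assumptions, not the product’s actual design.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.ensemble import RandomForestClassifier

def feature_stack(image):
    """Per-pixel feature bank: raw intensity plus smoothed and
    edge-enhanced versions of the image at two scales."""
    features = [image]
    for sigma in (1, 4):
        features.append(ndi.gaussian_filter(image, sigma))              # local average
        features.append(ndi.gaussian_gradient_magnitude(image, sigma))  # edge strength
    return np.stack([f.ravel() for f in features], axis=1)

def train_segmenter(image, labels):
    """Fit a classifier on the pixels a researcher has hand-labelled
    (label 0 means unlabelled and is ignored)."""
    X, y = feature_stack(image), labels.ravel()
    clf = RandomForestClassifier(n_estimators=50, n_jobs=-1)
    clf.fit(X[y > 0], y[y > 0])
    return clf

def apply_segmenter(clf, image):
    """Apply the trained model consistently across a new, possibly
    much larger, image."""
    return clf.predict(feature_stack(image)).reshape(image.shape)
```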

‘One area that shows particularly exciting initial results,’ said Andrew, ‘is the automation of mineralogical analysis using both multi-angled polarised light microscopy and multiple spatially registered light and electron microscopy datasets.’ Once trained, this classifier can be applied across an area that represents subsurface geological heterogeneity, enabling large-area mineralogical analysis using quantitative correlative microscopy.

Machine learning in microscopy is also applicable outside of research, chiefly in industry. The attraction is noise tolerance – noisy data carries a large amount of additional meaningless information, and an algorithm that can cope with it permits faster data acquisition – thereby reducing cost-per-analysis and increasing efficiency. ‘In the field of digital rock physics, where 3D-images of porous rocks are used as the input for computational models to predict fluid flow behavior, high-quality pore network segmentations can be performed even on extremely noisy input data,’ Andrew said.

Finding patterns

Where microscopy gives a detailed view of a small picture, a development from scientists at MIT takes a broader view by poring through scientific papers to extract recipes for producing particular types of materials. The system can recognise consistent higher-level patterns, identifying correlations between the precursor chemicals used in materials recipes and the crystal structures of the resulting products. It relies on statistical methods that provide a natural mechanism for generating original recipes, which it uses to suggest alternative recipes for known materials.

Huge sets of training data are analysed by the system so that the neural network can begin to learn. The use of such networks has hit problems in the past, namely sparsity and scarcity. 

Scarcity refers mainly to newer materials, for which only a few recipes may exist.

Ideally, the system would be trained on a large number of varied parameters, such as chemical concentrations and temperatures.

Sparsity, meanwhile, arises because a typical recipe uses only a few chemicals and solvents. A recipe can be represented as a vector – a long string of numbers, each of which is a feature. If most of the numbers are zero, the recipe is sparse.
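
To make sparsity concrete, here is a toy sketch of a recipe as a mostly-zero vector. The feature names are hypothetical – a real system would index thousands of possible precursors, solvents and processing conditions.

```python
import numpy as np

# Hypothetical feature list for illustration only
FEATURES = ["TiO2", "BaCO3", "SrCO3", "ethanol", "water",
            "NaOH", "HCl", "temp_C", "time_h", "pH"]

recipe = np.zeros(len(FEATURES))          # start with every feature at zero
recipe[FEATURES.index("BaCO3")] = 1.0     # molar ratio of precursor
recipe[FEATURES.index("TiO2")] = 1.0
recipe[FEATURES.index("temp_C")] = 1100   # firing temperature
recipe[FEATURES.index("time_h")] = 4      # hold time

# Most entries are zero, so the vector - and the recipe - is sparse
print(f"{np.mean(recipe == 0):.0%} of entries are zero")   # -> 60%
```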

The researchers’ network distils input vectors into a smaller form with meaningful numbers in every entry – the middle layer of the network holds just a few nodes. Training teaches the network to produce an output as close as possible to the input, which forces the middle layer to represent most of the information from the input vector in a compressed form. A system like this, in which the output tries to match the input, is called an autoencoder, and it compensates for sparsity. To overcome scarcity, the network was trained not only on recipes for producing particular materials, but also on recipes for similar materials.
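
As a minimal sketch of that idea, the network below squeezes a long recipe vector through a narrow middle layer and is trained to reproduce its input. The dimensions and layer sizes are made up for illustration; the MIT team’s actual architecture is not described in this article.

```python
import torch
import torch.nn as nn

class RecipeAutoencoder(nn.Module):
    def __init__(self, n_features=1000, n_latent=16):
        super().__init__()
        # Encoder squeezes the long, sparse recipe vector down to a
        # narrow middle layer of just a few nodes
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_latent))
        # Decoder tries to rebuild the original vector from that layer
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 128), nn.ReLU(),
            nn.Linear(128, n_features))

    def forward(self, x):
        z = self.encoder(x)        # compressed representation
        return self.decoder(z)     # reconstruction

# Training pushes the output to match the input, forcing the middle
# layer to hold the input's information in compressed form
model = RecipeAutoencoder()
x = torch.rand(32, 1000)           # a batch of made-up recipe vectors
loss = nn.MSELoss()(model(x), x)
loss.backward()
```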

Specifically, the network is a variational autoencoder. This means that in training, the network is evaluated not only on how closely its output matches its input, but also on how well the values in its middle layer conform to a statistical model, such as a bell curve or normal distribution – clustering around a central value and tapering off at a regular rate in all directions. This is what allows the network to generate new recipes. Because the middle layer conforms to a probability distribution, a value drawn from that distribution at random is likely to decode into a plausible recipe.
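
The variational twist can be sketched as follows: the encoder outputs a mean and a variance rather than a single point, a penalty term keeps the latent values close to a standard normal distribution, and sampling that distribution generates new recipes. Again, the architecture and dimensions are illustrative assumptions, not the published model.

```python
import torch
import torch.nn as nn

class RecipeVAE(nn.Module):
    def __init__(self, n_features=1000, n_latent=16):
        super().__init__()
        self.n_latent = n_latent
        self.encode = nn.Linear(n_features, 2 * n_latent)  # mean + log-variance
        self.decode = nn.Linear(n_latent, n_features)

    def forward(self, x):
        mu, logvar = self.encode(x).chunk(2, dim=-1)
        # Reparameterised sample from the latent distribution
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        # KL penalty pulls the latent values toward a standard normal
        # (the 'bell curve' described above)
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return self.decode(z), kl

    def generate(self):
        # Because the latent space matches a normal distribution, a random
        # draw from it decodes into a plausible (if unverified) recipe
        z = torch.randn(1, self.n_latent)
        return self.decode(z)

vae = RecipeVAE()
reconstruction, kl = vae(torch.rand(8, 1000))
new_recipe = vae.generate()
```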

Machine learning is on its way. Although some fear remains in a number of sectors, its introduction should be largely positive – though it may require retraining to keep the loss of jobs to an absolute minimum.