Research into artificial intelligence, or AI, began in the 1950s following work by British mathematician Alan Turing during the Second World War. But it is only in the past 10 years that there have been rapid advances in AI, brought about by the confluence of three critical factors – ubiquitous cloud computing, vast amounts of data and breakthroughs in machine learning algorithms.
So, what is AI? Broadly speaking, it’s when machines or computer systems behave in a way that simulates human intelligence. In computer science, AI comprises several fields of study, most notably machine learning. So, first some basics:
What is machine learning?
Machine learning enables computers to learn without being explicitly programmed. It is advances in this field, particularly in deep learning, that have led to the recent explosion in AI. Machine learning works by training computer systems to use algorithms – step-by-step computational procedures – to spot patterns in data and then make predictions about new data.
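The core idea – learning a pattern from examples rather than hand-coding a rule – can be sketched in a few lines of Python. This is a toy illustration with made-up numbers, not a real machine learning system: the program is never told the rule "y = 2x + 1"; it recovers it from the data and then uses it to predict a value it has never seen.

```python
# Toy "machine learning": learn a linear pattern from example pairs,
# then predict an output the program was never explicitly given.
xs = [1.0, 2.0, 3.0, 4.0]   # inputs
ys = [3.0, 5.0, 7.0, 9.0]   # observed outputs (hidden pattern: y = 2x + 1)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares estimates of slope (w) and intercept (b)
w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - w * mean_x

def predict(x):
    """Apply the learned rule to a new, unseen input."""
    return w * x + b

print(predict(10.0))  # the learned rule generalises: 21.0
```

Real systems fit far more complex patterns to far more data, but the shape of the process – examples in, learned rule out – is the same.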
Speech recognition, natural language processing, computer vision, search recommendations and email filtering are all examples of AI that use machine learning. Your focused inbox in Outlook sorts the most important emails from the rest using AI. When you search or shop online, the suggested results or recommendations are the work of AI. Microsoft Translator uses machine learning algorithms to translate what you say into one of many languages as you speak.
Training machines using labelled data is known as “supervised learning”. For example, the data might include photos that have been labelled to indicate what they depict. The algorithm learns the association between those labels and the features of the data, and can then apply it to new, unlabelled data. So, if a set of pictures is labelled as showing dogs, the machine can learn what dogs look like and identify similar pictures of dogs it has never seen before.
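A minimal sketch of supervised learning is a nearest-neighbour classifier: label a new example after the labelled training example it most resembles. The animals, features (height and weight) and numbers below are entirely made up for illustration.

```python
import math

# Supervised learning sketch: each training example pairs hypothetical
# features (height_cm, weight_kg) with a human-supplied label.
training_data = [
    ((30.0, 8.0), "cat"),
    ((35.0, 10.0), "cat"),
    ((60.0, 25.0), "dog"),
    ((70.0, 30.0), "dog"),
]

def classify(features):
    """1-nearest-neighbour: give a new example the label of its
    closest labelled training example."""
    _, label = min(training_data,
                   key=lambda item: math.dist(item[0], features))
    return label

print(classify((65.0, 28.0)))  # closest labelled examples are dogs -> "dog"
```

The labels do all the work here: without the "cat"/"dog" annotations supplied by a human, the machine would have nothing to learn from.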
By contrast, in “unsupervised learning”, machines identify patterns in datasets with no labels by looking for similarities. Here, the algorithms are not written to spot specific types of data (such as pictures of dogs) but to look for examples that look similar and can be grouped together.
In “reinforcement learning”, a machine learns through trial and error, eventually deciding on the best way to complete a given task. Microsoft uses this technique in gaming environments such as Minecraft to explore improvements in how “software agents” work – for example, enabling an AI character to navigate a path through a lava field without falling in.
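The lava-field idea can be sketched with tabular Q-learning, a standard reinforcement learning technique (this is a generic illustration, not Microsoft's Minecraft setup; the grid, rewards and parameters are all invented). The agent repeatedly tries moves, is penalised for falling in the lava, rewarded for reaching the goal, and gradually learns which first move is best.

```python
import random

# Reinforcement-learning sketch: Q-learning on a tiny 2x3 grid.
#   S . G     The agent starts at S (0,0) and wants G (0,2);
#   . L .     L (1,1) is "lava" and ends the episode with a penalty.
random.seed(0)
ROWS, COLS = 2, 3
START, GOAL, LAVA = (0, 0), (0, 2), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

Q = {((r, c), a): 0.0 for r in range(ROWS) for c in range(COLS)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply a move (clamped to the grid) and return (next, reward, done)."""
    dr, dc = ACTIONS[action]
    nxt = (min(max(state[0] + dr, 0), ROWS - 1),
           min(max(state[1] + dc, 0), COLS - 1))
    if nxt == GOAL:
        return nxt, 10.0, True
    if nxt == LAVA:
        return nxt, -10.0, True   # fell in: episode over
    return nxt, -1.0, False       # small cost per move

alpha, gamma, epsilon = 0.5, 0.9, 0.2
for _ in range(500):               # many trial-and-error episodes
    state, done = START, False
    while not done:
        if random.random() < epsilon:
            action = random.randrange(4)                         # explore
        else:
            action = max(range(4), key=lambda a: Q[(state, a)])  # exploit
        nxt, reward, done = step(state, action)
        best_next = 0.0 if done else max(Q[(nxt, a)] for a in range(4))
        Q[(state, action)] += alpha * (reward + gamma * best_next
                                       - Q[(state, action)])
        state = nxt

# After training, the greedy first move heads right along the top row,
# skirting the lava cell below.
best_first = max(range(4), key=lambda a: Q[(START, a)])
print(ACTIONS[best_first])
```

No one ever tells the agent where the lava is; the route emerges from the rewards and penalties accumulated over hundreds of failed and successful attempts.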
What is deep learning?
Deep learning is a type of machine learning that is inspired by how neural networks in the human brain process information. In these systems, each layer in the neural network transforms the data it receives into a progressively more abstract representation of that information.
In this way, the system builds up a highly detailed understanding of the data. So, in “seeing” a picture of a dog, the machine will first detect edges in the matrix of pixels, then combine those edges into contours and shapes, then parts of the object, and so on, until it identifies the image as a whole.
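The layer-by-layer idea can be sketched with a tiny fully connected network. Everything here is invented for illustration – the weights are fixed and made up, whereas in a real network they are learned from data – but it shows the structure: each layer takes the previous layer's output and produces a new representation.

```python
import math

# Deep-learning sketch: data flows through successive layers, each
# transforming its input into a new representation.
def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid non-linearity."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

pixels = [0.0, 1.0, 1.0, 0.0]             # raw input: a tiny 4-pixel "image"

# Layer 1: combines raw pixels into low-level features ("edges")
edges = layer(pixels, [[1.0, -1.0, 0.5, 0.0],
                       [0.0, 0.5, -1.0, 1.0]], [0.1, -0.1])

# Layer 2: combines the edge features into one higher-level score
score = layer(edges, [[1.5, -0.5]], [0.0])

print(f"edge features: {edges}")
print(f"final score: {score[0]:.3f}")
```

Real networks stack dozens or hundreds of such layers with millions of learned weights, but each layer does the same kind of transformation shown here.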
It is these artificial neural networks that have fuelled the recent advancement in machine learning and the ability of computers to carry out tasks such as speech recognition, natural language processing and image recognition.
Will AI take over the world?
The public perception of AI is largely shaped by its negative portrayal in science fiction films. However, today’s AI systems are only capable of carrying out single, narrowly defined tasks. These systems are good at logical tasks, but they’re not capable of intuition, empathy or emotional intelligence. In other words, the fears held by some people are far removed from what is actually happening in AI development.
Building public trust in AI technology needs to begin at the design stage and products should be created within a strong ethical framework. Data privacy, the malicious misuse of AI, the moral status of AI systems and where responsibility lies when things go wrong are the issues we should be focused on.
Microsoft takes a partnership approach to developing AI, placing human values at the centre. We believe in responsible design and that the companies developing these technologies should take ownership of the ethical considerations and work together to solve the toughest challenges. AI should not be the province of any one company or nation. It should belong to everyone.
To that end, Microsoft has helped to establish the Partnership on AI, a non-profit organisation that aims to ensure AI technologies benefit people and society through best practices and open discussion. Last year, we set up our own advisory committee, called AI and Ethics in Engineering and Research, to make sure all AI systems embody our ethical design principles. We insist they must guard against bias, have algorithmic accountability, be transparent and explainable, and assist humanity while respecting privacy.
AI and the future of work
There has been much debate about AI replacing jobs, particularly manual work that can be automated. Microsoft believes that this view is too simplistic. Most jobs have routine and mundane elements that affect the productivity of an individual. AI could perform these tasks instead, allowing workers to focus on more important issues.
It’s fair to say that AI is likely to have a transformative effect on the workplace that will displace some jobs. But it will also create new ones, some of which don’t even exist yet. This has been the case with every industrial revolution, beginning with the invention of the weaving loom and the steam engine. The advantage we have today with the Fourth Industrial Revolution is that we are able to plan for change with considerably more insight. It is through policymaking and reskilling that job creation can outpace job replacement.
Microsoft has already done a significant amount in this area in terms of impact studies and policy recommendations. In the 2018 update of our policy guidance for creating a legal framework that will extend the benefits of cloud computing to everyone, we draw attention to the disruption that all technology brings. While we don’t believe AI will replace all jobs, we do think it will change the nature of work and that we have a responsibility to ensure people are equipped to navigate this change. We look at this in more detail in The Future Computed, our eBook on the role of AI in society.
Still, we don’t underestimate the extent of this challenge and the public trepidation it causes. A survey by Oxford University’s Future of Humanity Institute, which featured the views of machine learning experts, found that AI could have a significant impact on the role of truck drivers by 2027, retail workers by 2031 and surgeons by 2053.
However, we believe it will be many decades before AI is advanced enough to replace humans at many tasks and, when that happens, it will raise ethical questions of whether it’s the right thing to do. Ultimately, where AI is concerned, we think it’s better to focus on the long term and take a responsible approach to tackling the challenges we face today.