Machine learning sheds its sci-fi image to become reality

Artificial intelligence has attracted a lot of media attention recently thanks to shows like Black Mirror and Westworld, which owe their popularity to a careful balance of reality and science fiction. If you’ve seen them, or listened to influential doomsayers such as Elon Musk, you may feel you have reason to worry about what’s to come.

But do not fear: we are not actually on the brink of a robot apocalypse. What we are seeing is business functions fundamentally changing due to the leaps being made in the field of machine learning.

Software development will not be immune from these sweeping changes. The roles that exist today will look radically different as they continue to evolve under what is now being dubbed “Software 2.0”, and I am excited, because the result is going to be undoubtedly superior.

Before I dive into why I am so excited for the future of software and why it might be coming a lot sooner than you think, let me clarify some terms that you may hear from time to time. Artificial Intelligence, Machine Learning, and Neural Networks are often thrown around interchangeably but there are distinctions between them.

Artificial Intelligence (AI)

AI can be thought of as any system that can make its own decisions. AI is an umbrella term that encompasses technologies that achieve this, such as machine learning.

Machine Learning 

Machine learning is a subset of AI whereby a system trains on data and learns for itself. There are many ways systems can do this, whether with supervised classification or regression algorithms or the more magical unsupervised and reinforcement learning methods.
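
To make the idea of "training on data" concrete, here is a minimal sketch of supervised classification using a 1-nearest-neighbour rule: nobody writes an explicit rule for what makes a point "small" or "large"; the labelled examples themselves carry that knowledge. The data points and labels are invented purely for illustration.

```python
import math

# Labelled training examples: (feature vector, class label).
# These values are made up for the sake of the example.
training_data = [
    ((1.0, 1.0), "small"),
    ((1.5, 2.0), "small"),
    ((8.0, 8.0), "large"),
    ((9.0, 7.5), "large"),
]

def predict(point):
    """1-nearest-neighbour: label a point with the class of its closest training example."""
    _, label = min(training_data, key=lambda ex: math.dist(ex[0], point))
    return label

print(predict((1.2, 1.4)))  # "small"
print(predict((8.5, 8.0)))  # "large"
```

The point is the shift in responsibility: improving this classifier means curating better `training_data`, not rewriting the `predict` logic.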

Neural Networks

Neural networks are a special subset of machine learning which is inspired by the human brain. When we walk into a room, we identify objects based on their appearance related to objects we’ve previously seen. Neural networks allow computers to do the same thing. 
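
The brain-inspired building block here is the artificial neuron: a unit that weights its inputs, sums them, and "fires" past a threshold. As a toy sketch (real networks stack millions of these units), a single perceptron can learn the logical AND function from examples; the learning rate and epoch count below are arbitrary illustrative choices.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Train a single neuron: two input weights plus a bias term."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            output = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            error = target - output
            # Nudge the weights toward the correct answer: the "learning" step.
            w0 += lr * error * x0
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

# Truth table for logical AND, used as training data.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, bias = train_perceptron(AND)

# After training, the neuron reproduces AND for every input pair.
for (x0, x1), target in AND:
    assert (1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0) == target
```

Networks like AlexNet apply this same weight-nudging idea, at vastly larger scale, to recognise objects in images.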

This is a fascinating subject and I’ve been especially gripped by it ever since AlexNet blew away the rest of the field at the 2012 ImageNet competition by training a network on ImageNet data, which contained over 15 million annotated images across more than 22,000 categories.

However, if I just want to make a website and not a fleet of intelligent robots why does any of this matter? 

Traditionally, developers need business processes boiled down to an explicit set of rules before they can implement a solution. As time goes on, without proper planning and maintenance, the ability to innovate shrinks as technical debt grows.

With machine learning, however, instead of an engineer writing code to implement these business rules, they would be responsible for curating the data fed into learning algorithms that can be iteratively trained and improved.

In this new paradigm, the data itself becomes of the utmost importance, as machine learning algorithms can infer which patterns matter without a human ever explicitly encoding them. If changes this drastic to the way we develop software seem far-fetched, consider that Pete Warden, former CTO at Jetpac, suggests that in ten years most software jobs will not involve programming.

I do not expect every area of software to be taken over by machine learning; however, in the not-too-distant future I do believe it will be sprinkled into many of our practices. Bug fixing and automated testing are leading candidates for where machine learning will first become mainstream.

“Bugs”, or defective pieces of code, are often the difference between delivering an innovative, cost-effective solution and an expensive nightmare. Even though many precautions are taken to ensure only bug-free software is deployed to production, some will always slip through the cracks, for many reasons, which I plan to cover in a later post.

By leveraging machine learning, software could learn from past experience, identifying common errors and flagging them on the fly. Google’s “bugspots” could be considered one of the first incarnations of this, but it is conceivable that in the future software could detect and resolve these defects without any human intervention at all.
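
As a rough sketch of the idea, the hot-spot heuristic popularised by Google's bug prediction work (and the open-source bugspots tool) ranks files by how often and how recently they appeared in bug-fix commits: each fix contributes 1 / (1 + e^(−12t + 12)), where t is the commit's age normalised to [0, 1] (0 = start of history, 1 = now). The commit log below is entirely made up for illustration.

```python
import math

def hotspot_scores(bugfix_commits, now, history_start):
    """Rank files by bug-fix frequency weighted toward recent fixes.

    bugfix_commits: iterable of (filename, timestamp) for bug-fix commits.
    """
    span = now - history_start
    scores = {}
    for filename, ts in bugfix_commits:
        t = (ts - history_start) / span  # normalised age in [0, 1]
        scores[filename] = scores.get(filename, 0.0) + 1 / (1 + math.exp(-12 * t + 12))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical history: billing.py fixed often and recently, legacy.py long ago.
commits = [
    ("billing.py", 95), ("billing.py", 98), ("billing.py", 99),
    ("legacy.py", 5), ("legacy.py", 10),
]
for filename, score in hotspot_scores(commits, now=100, history_start=0):
    print(f"{filename}: {score:.3f}")
```

Ancient fixes decay toward a score of zero, so a file that was buggy years ago but stable since will not keep being flagged.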

Functionise is a leader in intelligent test automation; they understand that crucial decisions about when your software can be deployed often pivot on your ability to test the solution thoroughly. Their platform learns and improves itself through machine learning, under the guidance of a QA engineer, reducing the overall time required to test while delivering better results.

We are only scratching the surface of what is possible with this technology, and there have been many other suggested use cases. Intelligent programming assistants (think Microsoft’s Clippy on steroids) could offer suggestions on best practice and create portions of code in real-time. 

The time required to produce accurate estimates for a solution could be significantly reduced: machine learning algorithms could train on data from past projects, such as feature definitions and estimated versus actual time (including time spent on bug-fixing), to predict the required budget for a solution.
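
As a toy sketch of estimation-by-regression: fit a line through past projects' estimated versus actual effort, then use it to correct a new estimate. The project data here is invented for illustration; a real model would also draw on feature definitions, team composition, bug-fix time, and so on.

```python
# Hypothetical history of (estimated hours, actual hours) per project.
past_projects = [
    (40, 55), (80, 110), (120, 160), (200, 270),
]

def fit_line(points):
    """Ordinary least squares for y = slope * x + intercept."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    return slope, mean_y - slope * mean_x

slope, intercept = fit_line(past_projects)

estimate = 100  # hours the team quoted for a new feature
predicted_actual = slope * estimate + intercept
print(f"quoted {estimate}h, history suggests ~{predicted_actual:.0f}h")
```

A slope above 1 quantifies the team's systematic optimism, turning gut feel about overruns into a number the budget can absorb up front.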

By using a collection of data from past projects, it is also conceivable that you could leverage these learnings to make strategic decisions about what type of features have the most impact for end-users whilst considering the cost it takes for them to be developed.

As interesting as it would be to say that we could program human consciousness, this isn’t Ex Machina, and if an army of intelligent robots does attempt world domination, I can be cited here as a non-believer. Essentially, that kind of sci-fi will stay put, much to the dismay of Hollywood, but that’s not to say some of it isn’t becoming a reality.

We are starting to see a new generation of software techniques, once thought suitable only for complex problems such as image and voice recognition, become a pivotal component of every company’s software development life cycle. These techniques are not the kind that jeopardise the human race; instead, they incrementally push us forward.

Yes, our responsibilities as developers may change as these processes become more commonplace, but just as software is evolving, so must we adapt to these new technological realities, as the end result will be cleaner, more efficient and more cost-effective software solutions.