The term ‘hacker’ might carry negative connotations, but ‘ethical hacking’ is on the rise.
Pen testing, short for penetration testing, is the practice of probing software, systems, or applications for potential security vulnerabilities.
In this post, our Chief Technology Officer, Simon Cole, discusses the importance of “white hat hacking” to Automated Intelligence…
“In the last few weeks alone our AI.DATALIFT platform has been pen tested by no fewer than three independent testing agencies, including through the UK Government’s IT Health Check programme.
Many of our customers are requiring that we undertake a pen test specifically for them as they seek assurance that their data is managed safely within our solutions.
As security assurance lies with the customer, past successes in pen testing for other customers are simply not enough.
We’ve been working with a wide range of pen testing organisations who formally take our products, entirely independently of AI, and test them for weaknesses.
The fact that the testing is independently verified is vital; this is not us saying we are safe. These are GCHQ-certified organisations examining our solutions from every angle: what attack surface do we expose? What attack vectors could be used to gain access to the system? Just how could criminals exploit the technology?
This ethical hacking will try many different tools and techniques to attempt to break into AI.DATALIFT; it’s our job to ensure we prevent that at all costs.
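To give a flavour of what mapping an attack surface involves, one of the simplest techniques a tester might start with is a TCP connect scan, which checks which network ports on a host accept connections. The sketch below is purely illustrative and assumed for this post; it does not represent how any particular testing agency, or AI, actually works.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` on `host` that accept a TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds,
            # an error code (e.g. ECONNREFUSED) otherwise.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Real penetration tests go far beyond this, of course, but every open port found this way is a potential attack vector to be examined further.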
Security at the heart of the design
At Automated Intelligence, our development methodology is to approach security at the design phase.
We don’t write software and then get it tested; quality flows through everything we do. When we go for penetration testing we know exactly what will happen, and we understand the attack surface and how it will react to attack.
All our pen tests, including our very first, have come back with no major issues. Quite often software companies use a pen test to find the gaps to plug; our methodology and security-centric design turn this approach on its head.
Security is not a new consideration for AI – our software has been used in secure platforms since 2010, right from our very first customer, the UK Home Office.
What’s changed recently, however, is that our solutions used to run solely in secure private environments, whereas we are now using the Microsoft Azure public Cloud. As a result, the security requirements and the need to prove appropriate security design have increased.
AI.DATALIFT can be delivered in two ways: via an AI-owned Azure tenant or via the customer’s private Azure tenant. Both environments are pen tested equally, so we are confident both are equally secure.
Microsoft’s Public Cloud
Aside from the formal penetration testing, we do a lot of internal security work to make sure our system is completely secure. To that end, we have built our solutions on Microsoft Azure due to the nature of the platform and the world-leading level of security investment Microsoft makes annually.
We work closely with the Microsoft Azure teams, getting their guidance in terms of how we design our software to maximise the protection that Azure offers, and to mitigate any threats to our customers.
Trust in the platform is evidenced by the large enterprises, major government departments, and many other customers relying on Azure for their public Cloud needs.
Secure public Cloud is nothing to be scared of; it’s usually a lot more secure than on-premises platforms that see minimal security investment after installation.
White hat hacking on the rise
Overall, pen testing has become common practice: every major customer we deal with now demands it as a base security requirement.
The fact we have had three this month alone is both an indication of the uptake of the product and the demand for security.
AI will continue to invest in security-first design across all of our products, and we expect our pen testing results to continue to give our customers the confidence they need to solve their data challenges with our products and services.”