Cutting out the cancer in the code.
Customers are used to immediate and regular updates on their personal devices, so it can be really tough for them to understand how a massive, multi-billion-dollar organisation like a bank, an energy company or a supermarket chain isn’t as flexible and agile as the phones in their pockets.
But anyone looking under the hood will understand: the legacy tech inside a big organisation is usually a kind of Frankenstein’s monster of add-ons and retro-fits, and it would be impossibly expensive, not to mention impractical, to replace it all wholesale. Big company usually means decades of big, costly, cumbersome, fragmented, often incompatible implementations. And those legacy systems weren’t designed with modern agility needs in mind.
You know you need your systems to function in the cloud. And you know you need to ensure things are secure and that all the old bits and pieces not only work together, but will also be able to work with current and future implementations. So what do you do?
This is where containerisation comes in.
The container essentially wraps your legacy systems in a cloud-native “coating”, so they now function in the cloud as if they were cloud-native systems.
The containers can all speak to each other, acting as translators that help your fragmented legacy systems talk both to one another and to any new tech implementations and apps. This hybrid approach is gaining momentum and is known in DevOps as “lift and shift”, since a containerised on-premises, monolithic app can be “lifted and shifted” somewhere else (e.g., a modern public or private cloud).
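In practice, the cloud-native “coating” is simply a container image built around the unchanged application. As a minimal sketch of what lift and shift can look like (the base image, file name and port below are hypothetical, not a prescription), an image definition for a legacy Java service might be as short as this:

```dockerfile
# Hypothetical image definition for an existing legacy Java service.
# Base image, paths and port are illustrative assumptions.
FROM eclipse-temurin:17-jre
# Copy the unmodified legacy application into the image.
COPY legacy-billing.jar /opt/app/legacy-billing.jar
# Expose the port the service already listens on.
EXPOSE 8080
# Run the app exactly as it ran on the old server.
CMD ["java", "-jar", "/opt/app/legacy-billing.jar"]
```

The key point is that the application itself is untouched; only its packaging changes, which is what lets it run anywhere a container runtime does.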
The containers also act like a shield, insulating against malicious attacks.
The result is a cloud-native, flexible, dynamic, protected environment that supports your legacy and future systems.
But what about cybersecurity flaws already inside your systems? Sometimes the threat really is coming from inside the house.
A strong ethos predicated on the free exchange of knowledge has historically underpinned the software engineering community and its professions. It’s often referred to as the open source ethos, and we can largely thank the many fathers of the internet for it: they had a utopian vision of what software development should look like, right at the cusp of the internet age.
It’s got a lot of benefits. Rather than reinventing the wheel over and over again, engineers can stand on the shoulders of their peers and forebears. It’s one of the reasons growth in this space has been so rapid over the past couple of decades.
But it has its drawbacks and (particularly for enterprise) dangers too.
If you have a company that uses software of any kind, you probably have some open source code in the guts of your system somewhere. Your IT people have been using open source code in all your tech products since the very beginning. That’s fine. That’s normal – great even! But how do you know if there are errors – or worse, malware – that have been introduced into the billions of lines of code already in use in your systems?
Frankly, you don’t. We can refer to these anomalies as a kind of “cancer in the code”. They might be small mistakes that could cause problems as you develop and build on top of them, or they could be malicious lines smuggled in amongst the good stuff. Basically, your risk is low so long as you don’t come up against a) bad actors or b) bad developers. We are now capable of scanning code for these cancers, cutting them out and providing a clean resource for devs to draw from.
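At its simplest, this kind of scanning means checking the components you depend on against a list of known-bad versions. The sketch below shows the idea in Python; the package names, versions and advisories are all invented for illustration:

```python
# Minimal sketch of dependency scanning: check pinned packages against a
# local advisory list. All names, versions and advisories are hypothetical.
ADVISORIES = {
    ("legacyparse", "1.2.0"): "remote code execution in parser",
    ("oldcrypto", "0.9.1"): "weak default cipher",
}

def parse_requirements(text):
    """Parse 'name==version' lines into (name, version) pairs."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, _, version = line.partition("==")
        pins.append((name.strip().lower(), version.strip()))
    return pins

def scan(text, advisories=ADVISORIES):
    """Return (package, version, advisory) for every known-bad pin."""
    return [(name, version, advisories[(name, version)])
            for name, version in parse_requirements(text)
            if (name, version) in advisories]

reqs = """
# legacy service dependencies
legacyparse==1.2.0
requests==2.31.0
oldcrypto==0.9.1
"""
for name, version, advisory in scan(reqs):
    print(f"{name}=={version}: {advisory}")
```

Real-world scanners work against continuously updated vulnerability databases and inspect the code itself as well as the dependency list, but the principle is the same: find the known cancers before they make it into a build.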
And if any strands still get through, we have technology that shuts them down and isolates them the moment they try to activate. While the container protects you from the outside, containerised security acts like an immune system, protecting you from within.
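One way to picture that immune system is as an allowlist enforced at runtime: anything a component is not explicitly permitted to do triggers its isolation, broadly the way a seccomp-style syscall filter works in container runtimes. Here is a toy Python sketch of the idea; the component name, the allowlist and the `Sandbox` class are all illustrative inventions, not a real API:

```python
# Toy sketch of runtime containment: a component is quarantined the moment
# it attempts an action outside its allowlist. Names are hypothetical.
ALLOWED_ACTIONS = {"read", "write", "stat"}

class Quarantine(Exception):
    """Raised when a component steps outside its allowed behaviour."""

class Sandbox:
    def __init__(self, name):
        self.name = name
        self.isolated = False

    def call(self, action):
        """Permit allowlisted actions; isolate the component otherwise."""
        if action not in ALLOWED_ACTIONS:
            self.isolated = True
            raise Quarantine(f"{self.name}: blocked '{action}', component isolated")
        return f"{self.name}: {action} ok"

box = Sandbox("legacy-billing")
print(box.call("read"))           # normal behaviour passes through
try:
    box.call("connect")           # an unexpected outbound call is stopped
except Quarantine as err:
    print(err)
```

The design choice here is deny-by-default: the malicious strand doesn’t need to be recognised in advance, it only needs to do something the component was never allowed to do.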