Containers are all the rage, and their adoption rates among enterprises are on the rise. But there are considerations when working with containers — for example, how do you successfully incorporate Docker into enterprise pipelines alongside legacy and traditional apps?
On Tuesday, November 22, I participated in an online panel on the subject of DevOps and Docker at Scale, as part of Continuous Discussions (#c9d9), a series of community panels about Agile, Continuous Delivery and DevOps. Watch a recording of the panel:
Continuous Discussions is a community initiative by Electric Cloud, which powers Continuous Delivery at businesses like SpaceX, Cisco, GE and E*TRADE by automating their build, test and deployment processes.
Below are a few insights from my contribution to the panel:
How Does Docker Enable DevOps?
"I think Docker and DevOps both really benefit from each other along your DevOps journey. When you're looking to enable Continuous Delivery and to move through your pipeline, Docker fits very nicely – it creates that immutable artifact that you produce at the end of your CI. Containers naturally fit into microservices and some of the other things that you're already trying to use for your DevOps solution."
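That "immutable artifact" idea can be sketched as a CI job that builds and tags an image exactly once, then promotes that same tag through later stages. This is a hypothetical GitLab-CI-style fragment, not from the panel; the registry name, image name, and job names are placeholders:

```yaml
# Hypothetical CI fragment: build the image once at the end of CI,
# tag it with the commit SHA, and reuse that exact immutable tag in
# every later stage instead of rebuilding.
build-image:
  stage: build
  script:
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

deploy-staging:
  stage: deploy
  script:
    # Deploy the same artifact that passed CI -- no rebuild, so what
    # was tested is exactly what runs.
    - docker pull registry.example.com/myapp:$CI_COMMIT_SHA
    - docker run -d registry.example.com/myapp:$CI_COMMIT_SHA
```

Tagging by commit SHA (rather than `latest`) is what makes the artifact immutable: every stage of the pipeline refers to one specific, unchanging image.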
Best Practices for Docker in Production: Security
"In terms of the attack surface, I think the solution is something like CoreOS – an OS that ships only the minimum needed to run the container – combined with static analysis. For example, CoreOS has Clair, which you can inject into your CI to analyze your images. It can stop the containers from going on if, during the analysis, any of the dependencies that are brought in turn out to have known exploits."
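Injecting Clair into CI typically means adding a scan job that fails the pipeline when known vulnerabilities are found, which blocks the image from being promoted. A hypothetical GitLab-CI-style sketch, assuming a Clair server reachable at `http://clair:6060` and the community `clair-scanner` client (exact flags vary by scanner version):

```yaml
# Hypothetical CI fragment: scan the freshly built image with Clair
# before it can move to later pipeline stages.
scan-image:
  stage: test
  script:
    - docker pull registry.example.com/myapp:$CI_COMMIT_SHA
    # clair-scanner exits non-zero when vulnerabilities at or above
    # the threshold are found, failing this job and stopping the
    # pipeline -- the "stop the containers from going on" step.
    - clair-scanner --clair=http://clair:6060 --ip="$LOCAL_IP" --threshold=High registry.example.com/myapp:$CI_COMMIT_SHA
```

Because the scan job sits between build and deploy, a vulnerable dependency is caught before the image ever reaches an environment.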