Refactoring development

Ramblings from the trenches...

12 February 2016

Deployment

Deployment is a subject dear to the heart of anyone whose current deployment system is sub-optimal. Safe, rapid deployment is at the heart of the DevOps movement.

Spec

What do we want out of a good deployment system?

In many ways we want to consider managing environments as a whole, rather than concerning ourselves with the implementation of the individual machines.

Windows Implementations

I focus on Windows here as it seems the harder problem to solve. Linux has package managers such as apt-get which, coupled with Puppet or one of the newer deployment systems, seem to cover most of the above.

For Windows the landscape is changing rapidly. It started with NuGet, a DLL dependency fetcher for .NET, which has since been adopted as a distribution mechanism by Chocolatey. That gives us a package manager for Windows, with dependencies thrown in.
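In day-to-day use it looks much like apt-get. A quick sketch (the package name is just an example):

```powershell
# Install a package, and whatever it depends on, non-interactively
choco install 7zip -y

# Later, upgrade everything Chocolatey knows about in one go
choco upgrade all -y
```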

It’s good, but installing MSIs is not enough: we also have to make sure the other bits, like setting up shares and configuring firewalls, are done if we’re going to truly have one-click deployment.
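Those other bits are scriptable too. A sketch using the built-in Windows Server cmdlets (the share and rule names here are made up for illustration):

```powershell
# Create a file share for the deployed artefacts
New-SmbShare -Name 'Releases' -Path 'C:\Releases' -ReadAccess 'Everyone'

# Open the firewall for the application's port
New-NetFirewallRule -DisplayName 'App HTTP' -Direction Inbound `
    -Protocol TCP -LocalPort 8080 -Action Allow
```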

Enter stage left, PowerShell DSC.

PowerShell DSC is a Puppet-esque declarative configuration system for specifying how a machine should be set up (think /etc for Windows). It can install Chocolatey packages and configure the other bits and pieces.
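A minimal sketch of what that looks like, assuming the community cChoco resource module is installed (the configuration and package names are illustrative):

```powershell
Configuration WebServer {
    # cChoco is a community module that wraps Chocolatey as DSC resources
    Import-DscResource -ModuleName cChoco

    Node 'localhost' {
        # Make sure Chocolatey itself is present
        cChocoInstaller InstallChoco {
            InstallDir = 'C:\choco'
        }

        # Install an application as a Chocolatey package
        cChocoPackageInstaller Install7Zip {
            Name      = '7zip'
            DependsOn = '[cChocoInstaller]InstallChoco'
        }

        # And configure the other bits - e.g. turn on IIS
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Compile the configuration to a MOF and apply it
WebServer -OutputPath .\WebServer
Start-DscConfiguration -Path .\WebServer -Wait -Verbose
```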

The downside: it’s lacking a UI / website, which means the barrier to entry is still a bit high. On the plus side, it looks like this will be the de facto way of configuring Windows, with hundreds of DSC modules coming out (wave after wave of them).

There are other alternatives, CA’s Nolio and Octopus Deploy being two that seem to be gaining a reasonable amount of traction (and Puppet can use DSC modules if you’re in a hybrid environment). Indeed, Octopus Deploy has recently open-sourced its configuration modules (which someone will no doubt make callable from Puppet and DSC).

And just recently Microsoft has announced its first steps towards Linux support for PowerShell DSC. Puppet really will have competition!

(Interestingly, Windows getting built-in support for SSH will make things like Puppet on Windows easier to set up - this can only be a good thing!)


Why am I not talking about Windows Nano Server and Docker in this article? It’s because you need to be able to automate building an image before you deploy it across lots of servers.

You wouldn’t take a build from a developer’s machine and put it into production; you’d take it from the CI server. It’s the same here - don’t put an image you can’t recreate into production - it will bite back. (Let’s call this the George’s Marvellous Medicine principle - reproducibility is key!)

 

Once you can do this, it’s time to move up the stack and have fun with containers.  
See also: Google Container Engine is The Product Version of Kubernetes and It’s Now Live http://thenewstack.io/google-container-engine-is-the-product-version-of-kubernetes-and-its-now-live/

See also: Deployment Management Tools: Chef vs. Puppet vs. Ansible vs. SaltStack vs. Fabric http://www.javacodegeeks.com/2015/08/deployment-management-tools-chef-vs-puppet-vs-ansible-vs-saltstack-vs-fabric.html

Docker

There’s something in the air this year: first it was Node.js and io.js (reunited as Node.js 4), and now Docker has made friends with CoreOS. The announcement of the Open Container Initiative presents a way for everyone to play together and avoid the vendor lock-in that was the instigator of all this in the first place.

Read more about the Open Container Initiative.

Tell me again why any of this matters?

Why is it important to me? Far more than just an image format, the speed of it - thanks to the caching of layers and the fast start-up times - means that for once we can do micro end-to-end testing. I’ve always argued for building a test base that is grounded in the business problem rather than in implementation details. What we’re aiming for is to test as much of the hooked-up, end-to-end system as possible, quickly.

There are naysayers (Google among them) who say end-to-end testing is broken, and if it’s slow and brittle then I’d agree. But by being able to spin up our ecosystem quickly as a set of interacting Docker containers, I think we may just be able to have our cake and eat it.

And by keeping the tests grounded in the business space, a broken test really means something to someone. It’s not an arbitrary broken test that may or may not be worth investigating; it’s a bona fide business use case that is broken - and that’s one you can have a discussion about with everyone, not just the developers.
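As a sketch of what that micro end-to-end spin-up can look like (the image and container names here are purely illustrative):

```powershell
# Spin up the system under test as a set of throwaway containers
docker run -d --name test-db postgres
docker run -d --name test-api --link test-db:db -p 8080:80 myorg/api

# ... run the business-facing tests against http://localhost:8080 ...

# Then tear the whole ecosystem down again in seconds
docker rm -f test-api test-db
```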

If it’s not possible to specify the business functionality that you’d like to see in code, then you’ve got bigger problems.

In many ways I’m arguing for the BDD given/when/then style (as this keeps the implementation out of the test), but really focusing on the top layer of the onion. By focusing on that layer you’re testing a cohesive system which should have few dependencies.
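In PowerShell-land, Pester is one way to phrase such a top-of-the-onion test. A hedged sketch (the endpoint and the expected behaviour are invented for illustration):

```powershell
# A business-facing test: no implementation details, just behaviour
Describe 'Placing an order' {
    It 'confirms the order when stock is available' {
        # Given the system is running (e.g. the containers spun up above)
        # When the customer places an order
        $response = Invoke-RestMethod -Uri 'http://localhost:8080/orders' `
            -Method Post -Body '{"item": "widget"}' -ContentType 'application/json'

        # Then the order is confirmed
        $response.status | Should Be 'confirmed'
    }
}
```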

Getting started

Find out more at Docker.

See also: Convert Any Server to a Docker Container https://zwischenzugs.wordpress.com/2015/05/24/convert-any-server-to-a-docker-container/

Security

With Docker, as with everything else, being more secure means having less attack surface area. Docker is now moving to a Linux distro that fits into 5MB. As you can imagine, there’s a lot less that an attacker can use in an OS image that small! A company called Iron.io is leading this with images for most languages.
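You can get a feel for the difference in surface area with a local Docker install (the images below are the public library ones; exact sizes will vary):

```powershell
# Pull a minimal base image and a full-distribution one, then compare sizes
docker pull alpine
docker pull ubuntu
docker images alpine
docker images ubuntu
```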

If that’s not enough security / simplicity, then the final leap is to merge kernel mode and user mode - i.e. run the operating system merged with the program. This is called a ‘unikernel’.  
