Jenkins CI: Pains and Gains

Update 09/24/18: Jenkins sucks for our needs. Authentication is cumbersome, and the setup has become a mountain of hacks to handle multi-container Docker configurations. We're better off creating our own custom CI solution.

---

Original post:

This year has been packed with some genuine full-stack goodness.

Setting up continuous integration is frustrating at first, but incredibly rewarding once a working pipeline is in place. A year ago, I set up a pipeline to automate Android builds with pretty version numbers, signed for release, and easily downloadable by colleagues for distribution. All it took was a push to the master branch.

For this current project, we use Jenkins. We run it in Docker, so there's a bit of dockerception going on. The Jenkins container runs with the host's docker socket mounted as a volume, so the docker client inside the container can send commands to the host's docker daemon.
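
For reference, a minimal sketch of that setup (the image tag, ports, and volume names here are the common defaults, not necessarily what we run):

    # run Jenkins in a container with the host's docker socket mounted,
    # so the docker CLI inside the container talks to the host daemon
    docker run -d --name jenkins \
      -p 8080:8080 \
      -v jenkins_home:/var/jenkins_home \
      -v /var/run/docker.sock:/var/run/docker.sock \
      jenkins/jenkins:lts
    # note: the image itself still needs a docker client installed inside it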

Now my task was to get some tests running for a Node server and a containerized MongoDB process. The Node part was easy. But adding an extra container meant that 1) the ports would need to be configured for inter-container communication, 2) the lifetime of the container(s) would need to be managed (read: cleaned up) between jobs, and 3) node_modules and test datasets would need to be cached.

The port issue was rather simple: I just needed to pass the --network host flag to the Node container so that it would have access to all ports on the host, notably the MongoDB port. Containers can also reach each other by name on a user-defined Docker network, but that's an optimization for later.
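
Roughly, the idea looks like this (container names, image tags, and the test command are illustrative, not our exact job config):

    # MongoDB publishes its default port on the host
    docker run -d --name mongo-test -p 27017:27017 mongo
    # the Node container shares the host's network namespace, so the tests
    # can reach MongoDB at localhost:27017
    docker run --rm --network host -v "$PWD":/app -w /app node:10 \
      sh -c "yarn install && yarn test"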

The next issue was container lifetime. The Node container was spawned by Jenkins and thus destroyed by Jenkins whenever the job stopped. That's good. But the MongoDB container was spawned manually, so I'd need to take ownership of it and remove it at some point. To do so, I listed all containers with docker ps -a, grep'd for those containing the substring "mongo", used awk to extract the ID column, and removed them.
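
The cleanup step boils down to a one-liner along these lines (a sketch of the pipeline described above):

    # find leftover mongo containers, extract their IDs, and force-remove them
    # (-r tells xargs to do nothing if the list is empty)
    docker ps -a | grep mongo | awk '{print $1}' | xargs -r docker rm -f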

Things get really funky when you’re trying to run certain commands in the Jenkins config because quotes and some special characters need to be escaped, double escaped, and probably escaped some more. But if you somehow miraculously get all your commands running, it’s a sweet situation.
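
A hypothetical example of the kind of escaping involved: if the cleanup one-liner above is embedded in a Jenkinsfile sh "..." step, Groovy treats $ specially inside double-quoted strings, so the awk column reference has to be escaped on top of the shell's own quoting.

    # as typed in a terminal:
    docker ps -a | grep mongo | awk '{print $1}' | xargs -r docker rm -f
    # the same command inside a Jenkinsfile sh "..." step; the \$ keeps Groovy
    # from choking on the dollar sign:
    #   sh "docker ps -a | grep mongo | awk '{print \$1}' | xargs -r docker rm -f"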

The last issue was caching. At first I thought I'd need to cache by a key derived from a checksum (an md5 hash of package.json or the like). Fortunately there was an even simpler solution: docker volumes. I just needed to mount a host cache directory onto certain project directories. I did this for our test data as well as for the yarn cache directory. I tried doing the same for the node_modules directories, but that produced unexpected behavior; caching just the yarn cache folder worked fine. This solution is preferable to Jenkins' Job Cacher plugin.
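
A sketch of the cache mounts (the host paths are illustrative; yarn's --cache-folder flag pins where the cache lives so the host-mounted directory actually gets reused between jobs):

    # persist the yarn cache and test data across jobs via host-mounted volumes
    docker run --rm --network host \
      -v /var/cache/ci/yarn:/yarn-cache \
      -v /var/cache/ci/test-data:/app/test-data \
      -v "$PWD":/app -w /app \
      node:10 sh -c "yarn install --cache-folder /yarn-cache && yarn test"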

My overall impression of Jenkins on an in-house machine is that it's considerably more flexible than a third-party service like CircleCI. With Digital Ocean, we can scale up the memory or CPU power easily. Jenkins is less abstract than other solutions and behaves exactly how it says it will. And lastly, the new Blue Ocean UI is sick.
