Tyler Fitch

How/When to use Community Cookbooks

• Tyler Fitch • Chef

A question my customers commonly ask is “How/when do I trust Community Cookbooks?”

TL;DR

Remote Copying of Generated Keys

• Tyler Fitch • Chef and Artifactory

A customer asked me this question:

I feel like I’m trying to accomplish something fairly common, but I can’t find any good documentation around it and I’m not sure of the correct design pattern to use here. On each machine during a chef-client run, I need to generate an SSL keypair, upload the public key to another server, and the run commands on that remote machine necessary to add the key to a TLS truststore. That other machine is managed by Chef as well, so it’s possible to deal with importing any new keys into the truststore during that machine’s chef-client run, but getting the key onto the machine is proving to be a little more complicated.

Is the best way to deal with this just to issue an scp command through the execute resource that uploads the key to the TLS server or is there some more idiomatic way to upload a file to a specific location on a remote machine?

So I thought about the question for a while and came up with a possible solution to this.

TL;DR

Share the Public Key

Any server can generate its SSL key pair when it is being created. Then, once you have the public key, add a bit of logic to the Chef cookbook (or configuration management tool of choice, but you know which way I lean) to upload the public key to your artifact server of choice. Since I recommend Artifactory, consider using the Artifactory gem in your Chef cookbook to handle this piece of code.

When uploading the public key to Artifactory, use tactics to allow for systematic scripted uploading and subsequent downloading of the public keys. I prefer to use a consistent naming convention like /ssl_public_keys/<hostname>.
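That naming convention is worth centralizing in one small helper so the uploading and downloading cookbooks always agree on the path; a minimal sketch (the helper name is mine, not from any gem):

```ruby
# Build the Artifactory path for a host's public key, following the
# /ssl_public_keys/<hostname> convention described above.
def public_key_path(hostname)
  "/ssl_public_keys/#{hostname}"
end
```

Both the uploading recipe (via the Artifactory gem) and any consuming recipe can then call the same helper instead of hand-building paths.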

Then on any remote machine that needs to add the public key for <hostname_zyx> to its TLS truststore, the remote machine will go find the public key on the Artifactory server in the /ssl_public_keys/<hostname_zyx> location and pull it down over HTTP(S).
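On the consuming side, the pull-down over HTTP(S) can be a plain remote_file resource; a sketch, where the Artifactory URL and local staging path are placeholder assumptions:

```ruby
# Hypothetical values - substitute your Artifactory host and truststore staging dir
peer = 'hostname_zyx'

remote_file "/etc/pki/incoming/#{peer}.pem" do
  source "https://artifactory.example.com/artifactory/ssl_public_keys/#{peer}"
  mode '0644'
  # a later resource can import the downloaded key into the TLS truststore
end
```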

Long story short (but not the TL;DR), a Chef recipe can generate the key pair on the server and a Chef recipe will be used to install the key on remote nodes, but Chef does not have to manage all the pieces of the puzzle between those two events.

Alternate solutions

Lots of choices - find one that works for you

The list of alternate solutions is about as long as my description of how to setup one of the solutions (and I’m certain I missed some options). So like most technology choices, it comes down to finding the solution that works for your application. If you’re not sure which one works, and that is totally okay, don’t spend all your time just thinking about the options while trying to find why it won’t work. Just try one for two weeks. It might work - it might not. If you find the edge case where it is not going to work, then try the next option. Eventually you will have a working solution for your needs and you’ll know a lot more about the tool(s) because you have used them instead of just thought about them.

Generic Chef CI Instructions

• Tyler Fitch • chef, Continuous Integration, CI, and testing

I’ve been in discussion with a few of my Chef customers about how they are progressing on their DevOps Journeys. A frequent point of discussion is their Continuous Integration (CI) servers, or lack thereof as the discussions highlight.

TL;DR

I will not talk about a specific CI server platform here. The goal of CI is always the same: check out source code, build, validate and deploy. How a tool gets there might have slight variances, but the steps to follow are the same.

What’s missing and what problems will a CI server solve?

Let’s look at it from a high level.

It is not just you that needs the CI server. It is your team, your organization and your company. I can honestly say setting up the CI server is absolutely the best first step any team can take with automation. Even more important than starting to use Chef - yep, I said it.

Your CI server is the foundation of automation

I consider setting up your CI server to be a Sprint Zero task in Agile terms. You want to start building your Minimum Viable Product on Sprint One? Build the app how? Build the app where?

Exactly

Feel free to finish reading this article, but then you must go deploy your CI server immediately!

Here are some planning steps for your CI server.

The standard build jobs

Your build pipelines for Java apps, Ruby apps, Chef cookbooks, etc will all look the same and have these types of jobs, in this order.

Linting

Is your code well formatted? Do you have unused variables? Catch these deficiencies early on with quick linting tests. Ideally your engineers will be running these tests locally before committing their code, but we can leave nothing to chance.

Unit tests

Unit Tests will be longer running than the linting tests, but shorter running than Integration Test suites. Unit tests will validate that your code says what it does (the code will install Nginx).

Integration tests

Finally we will run our Integration Tests. For Java, Ruby, Node, etc. applications, Integration Tests will validate that your compiled/converged code does what it says. In Chef this means applying a cookbook to a node and running a suite of tests against it; in other words, running the kitchen test command and finding out that Nginx is running and listening on ports 80 and 443.

Deploy your build artifact

Once your linting, unit tests and integration tests have all passed you have a good check in and build artifact. Automatically deploy the code to the proper location (this could be uploading a Chef cookbook to your Chef Server, a WAR file to an Artifactory instance or a Ruby Gem to rubygems.org).
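Expressed as a generic, platform-neutral pipeline definition, the four jobs above might look like the following; the tool names in the comments are my assumptions for a Chef cookbook pipeline, not prescriptions:

```yaml
# Generic pipeline sketch - translate into your CI server's own job syntax
stages:
  - lint         # fast checks first, e.g. rubocop/foodcritic for cookbooks
  - unit         # e.g. ChefSpec run via rspec
  - integration  # slowest suite last, e.g. kitchen test
  - deploy       # runs only when everything above passed, e.g. knife cookbook upload
```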

Your CI server doesn’t get bored or take shortcuts

And why do you want to take the time to set up all those build jobs I just described? Because having humans do these tasks is problematic at best. Humans get bored. Humans get tired. Humans make mistakes. Computers, on the other hand, do the same thing you tell them to do, over and over and over again.

Even worse, humans will try to circumvent security where it gets in their way. “Let me just upload this one line change.” Yeah - those are famous last words. Your CI server, on the other hand, won’t skip a step of your pipeline. That “one line change” will be tested and only deployed once those tests pass. All your rules will be followed exactly how your security team wants them to be. They are going to be really happy campers.

Here is your example

I know I said this would be platform agnostic, and it was until now. But below is a repo that will build you a CI pipeline to play with, even copy from. All the words above are great, but having a toy to play with will definitely better meet some of your learning styles.

Your big payoff for reading this far? You can kick start this whole process in a Chef specific way! Check out the https://github.com/chef-solutions/pipeline repository on GitHub. With it you can literally kitchen converge centos-7 to get a functional pipeline and see an example of everything wired together.

Congrats! You can now further

automate all the things

Using Chef without a native chef-client

• Tyler Fitch • chef

What can you do when you have a piece of IT infrastructure that doesn’t have native support for the chef-client?

TL;DR

Put a vanilla linux machine in the middle to manage the device using the device’s native APIs.

Options

A) If the device has Ruby available on it, go tough guy mode and add support for the device to the chef-client. This can be both highly lucrative and highly painful.

B) If the device has APIs exposed for managing it, then use that API!

C) ¯\_(ツ)_/¯ There are multiple ways to tackle these types of problems!

In this post we’re going to focus on Option B, which from a Chef perspective looks a little like this.

Network Diagram

We’re going to need one machine to sit in the middle of this process and be the registered Chef client(s), and then interact with one or more network devices via their APIs. In this example we’ll have five network switches we want managed by Chef. To do this we will stick a vanilla Linux machine in between the Chef Server and the network switches as the “controlling node”.

We’re going to end up bootstrapping the controlling node five times, generating different client.rb files each time. Each client.rb represents a network switch that is going to be controlled by our cookbook via the controlling node.

Then we can use the -c flag of the chef-client executable to identify which client the controlling node will be running as. It would look like this chef-client -c /data/chef/client-switch-one.rb or chef-client -c /data/chef/client-switch-two.rb and so on. Remember this because if you’re going to setup the chef-client as a scheduled task, you’ll need to set it up five separate times and use this -c flag as the key differentiator.
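With cron as the scheduler, that works out to five separate entries on the controlling node, differing only in the -c flag; a sketch (the staggered minutes are just an assumption to keep runs from overlapping):

```shell
# /etc/crontab on the controlling node - one scheduled chef-client run per switch
0  * * * * root chef-client -c /data/chef/client-switch-one.rb
10 * * * * root chef-client -c /data/chef/client-switch-two.rb
20 * * * * root chef-client -c /data/chef/client-switch-three.rb
# ...and two more entries for switches four and five
```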

Now that the clients are all set up, we can begin to write our cookbook in Ruby to make API calls to the individual switches and adjust their configurations as needed. This isn’t necessarily easy, and it means you’ll be working outside of native Chef resources, but you’ll be bringing more devices in line with being managed by Chef as a consistent interface.

The end result is, if you did a knife node list you’d get back five results for the switches you’re now managing as Chef nodes via your custom cookbook. And you can adjust those five switches independently, even if there is just that single controlling node doing all the work.

Wherein one learns they're doing Chef cookbook dependency management manually

• Tyler Fitch • chef and berkshelf

TL;DR

The Chef DK has Berkshelf bundled in it to handle cookbook dependency management, so take advantage of that being available to you.

Question:

How do I ensure all my cookbook’s dependencies are on the Chef server?

Using cookbook upload:

knife deps  cookbooks/alpha  | xargs knife upload

Updated cookbooks/alpha

As an aside to this, we have looked to place community cookbook source into a separate directory, and set the cookbook path in the knife config. But the knife deps command does not seem to use that configuration. For example chef-client cookbook and related are in community_cookbooks dir rather than cookbooks folder.

knife deps cookbooks/mongo --tree

cookbooks/mongo
--cookbooks/chef-client
----cookbooks/cron
----cookbooks/logrotate
----cookbooks/windows
------cookbooks/chef_handler

Does knife deps make use of the knife config(cookbook_path) and repo config or just use the metadata found in each of the cookbooks?

Answer:

The command above comes basically straight out of our Chef docs: https://docs.chef.io/knife_deps.html

It is great that the instructions “from the source” are being used, BUT

When you’re using knife deps it is definitely using the knife.rb to determine the chef_repo_path value. I have not actually used knife deps myself to see how it’d handle two different directories as cookbook sources, but what I have read says knife will not handle that. This brings up a point I want to highlight, which is the dependencies on the Chef Server vs. the local development machine. deps is thinking like the server (or the client), and on the server the cookbooks are all in a single Org. But on your local development machine this correlation is not expected, or even required. Local cookbook directories can be anywhere. Dependencies, community cookbooks especially, could be anywhere (or nowhere). For example, I store mine in ~/source/gh/opscode-cookbooks/cron, and my personal cookbook in ~/source/gh/tfitch/why-run-alerting depends on cron, but the cron cookbook on my machine is nowhere “near it”.

The knife deps cookbooks/mongo | xargs knife upload command looks like you’re trying to find all dependencies for the cookbook and upload them, right? Then I’d like you to meet Berkshelf! Conveniently bundled in the Chef DK.

Berkshelf can pull your cookbook’s dependent cookbooks down locally and upload them to your server without you actually having to see them. Think Maven pulling down Java libraries or npm pulling down JS packages as an analogy. Community cookbooks will get stored by default in a .berkshelf directory in your user’s home directory, and shared as needed. So if you have 5 cookbooks depending on the community cron cookbook, berkshelf just manages that for you, stores the one copy locally and when depends 'cron' comes up in any cookbook’s metadata.rb, berks upload will make sure that requirement is met automatically on the Chef Server.
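A minimal Berksfile for that workflow might look like the following sketch; the commented path and internal Supermarket URL are placeholders, not real locations:

```ruby
# Berksfile
source 'https://supermarket.chef.io'
# source 'https://supermarket.example.com'  # an internal Supermarket can be checked first

metadata  # resolve dependencies (e.g. depends 'cron') from this cookbook's metadata.rb

# A locally modified community cookbook can be pinned to a path instead:
# cookbook 'cron', path: '/where/you/store/community_cookbooks/cron'
```

Then berks install resolves everything into ~/.berkshelf and berks upload pushes the whole dependency set to the Chef Server.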

Potential caveat here, if you’re modifying the community cookbooks after downloading from the Internet, then make sure you’re using the path: '/where/you/store/community_cookbooks' setting for cookbooks in the Berksfile. It’ll prevent you from automatically using the community cookbook with the same name. And under the hood, if you have a custom cron cookbook and also use the community cron cookbook, Berkshelf handles this in the .berkshelf directory for you so you can still use both.

Finally, Berkshelf supports custom installs of the Chef Supermarket as a “lookup location” for community cookbooks. Meaning, if you’re running an install of the Supermarket behind your firewall and it has a couple cookbooks, maybe ‘ntp’ and ‘hostsfile’ that have the standard settings for all corporate servers. Well, you can configure the Berksfile to know that when you say you need the ntp cookbook, it’ll look at your Supermarket first, find it and use it. This can be very helpful to grow usage of shared things in your company as you all grow with Chef, and save people the trouble of reinventing the wheel to solve typical configuration settings people encounter at your company.

Freezer burn of your Chef cookbook

• Tyler Fitch • chef

TL;DR

Using the --freeze flag when uploading a cookbook is great! But as soon as you do, you must change the version of the cookbook in metadata.rb to the next release number or you’ll get upload errors like the ones in the question here.

Freezer Burn

Question:

Imagine a Chef cookbook, or role cookbook, with dependencies defined in metadata.rb and that each version of the cookbooks are to be frozen on the Chef server, e.g. alpha > delta.

Using cookbook upload:

knife cookbook upload alpha --freeze --include-dependencies
Uploading alpha [0.2.2]
Uploading delta [0.2.0]

ERROR: Version 0.2.0 of cookbook delta is frozen. Use --force to override.
WARNING: Not updating version constraints for delta in the environment as
the cookbook is frozen.
WARNING: Uploaded 1 cookbook ok but 1 cookbook upload failed.

Here you see an error, but also the knife command completed successfully, which makes it hard to identify the error if used in a tool/pipeline.

Answer:

Getting the upload error isn’t really a terrible thing. Freezing your cookbooks means that version has been released and you’re ready to start working on the next one. Perfectly inline with software development processes. Freezing and releasing versions of your cookbooks is a good thing! I like that you’re doing it.

The key to good freezing and releasing of a cookbook comes down to the following steps.

  1. Tagging the release in source control. In your example you would have tagged the delta cookbook as “v0.2.0”.
  2. Upload the version of the tagged repo to the Chef server with the --freeze flag
  3. Now, immediately bump the version to 0.3.0 (or 0.2.1 depending on your needs) and check it in to source control.
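The bump in step 3 is mechanical enough to script before reaching for a full tool; a minimal sketch of the logic (the helper is mine, not from Knife Spork or thor-scmversion):

```ruby
# Bump a "major.minor.patch" version string for the next development cycle.
def next_version(version, level = :minor)
  major, minor, patch = version.split('.').map(&:to_i)
  case level
  when :major then "#{major + 1}.0.0"
  when :minor then "#{major}.#{minor + 1}.0"
  else "#{major}.#{minor}.#{patch + 1}"
  end
end
```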

Three steps to repeat often sounds ripe for automation, eh? And there are a couple of tools in the Chef ecosystem to automate this task. First up is Knife Spork (http://jonlives.github.io/knife-spork/), which is installed as part of the Chef DK (https://downloads.chef.io/chef-dk/). Second is thor-scmversion from RiotGames (https://github.com/RiotGamesMinions/thor-scmversion); a description of what thor-scmversion can do is #3 in this article: https://www.rallydev.com/community/engineering/6-things-every-chef-user-should-do

The #1 rule here is to always bump up the version number as the first change after any cookbook release, even if you’re not freezing your cookbooks when you release them (make the final upload to the Chef Server), because not everyone does and it’s not the default behavior of an upload. Always, and forever, bump the version number first! What do we do first after releasing a cookbook? Bump the version number and check it in.

One way to alleviate the upload errors during local dev if you’re able to work with VMs locally is by using Chef Zero (https://www.chef.io/blog/2014/06/24/from-solo-to-zero-migrating-to-chef-client-local-mode/). Then you won’t actually be hitting the live Chef Server and needing to worry about uploading conflicts. This will also technically mask any errors around editing a frozen cookbook, but it’ll also speed up your local development until you’re ready to upload to the server. Then you’ll get the error about the frozen cookbook and then make your tweak to the version value in metadata.rb.

Automate the automated release task

Finally, on releasing cookbooks: if your CI machine can have write access to the cookbook repos, you can have CI jobs do the Spork/thor-scmversion tasks for you. You create the job once, and when it’s time to release a cookbook you just push a button and all the tasks are completed for you.

This is totally a “down the road” goal, but something to think about as you do your current work and make plans to go faster. If you can’t have automated machines editing repos, don’t worry about it. I did not have this luxury at my previous job, so it wouldn’t surprise me if you don’t either. Still, manually running knife spork will be better than doing the three steps by hand, as they can be error-prone commands to type “just right” with every release. I’ll look to expand this into its own post/tutorial in the future.

Searching by a Node's cookbook's attributes

• Tyler Fitch • chef and attributes

We’re working on a method for self discovery of a group of servers being set up as a MongoDB replica set.  And a specific replica set among an environment with multiple MongoDB servers and replica sets, at that.

TL;DR

A cookbook’s node attributes are uploaded back to the server and available to be searched against.  Here’s the secret to searching on the Node’s attributes defined in the cookbook.  In the cookbook we had node['mongodb']['replica_set'], but when we wanted to search for it we did mongodb_replica_set:ReplSet1 - chaining the full name of the attributes with underscores.  Doesn’t entirely seem intuitive, but it is the way Search works with the underlying Solr engine in the Chef Server.
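That flattening rule is easy to capture in a tiny helper so recipes and knife searches stay in sync; a sketch (these helpers are mine, not part of Chef):

```ruby
# Flatten a nested attribute path into the underscore-chained Solr search key.
def search_key(*attribute_path)
  attribute_path.join('_')
end

# Build a full search term for a given value.
def search_term(value, *attribute_path)
  "#{search_key(*attribute_path)}:#{value}"
end
```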

Out of all the nodes in a Chef server Organization we’re going to use three keys to identify the nodes/machines of a single replica set.

1) A Role - http://www.getchef.com/blog/2013/11/19/chef-roles-arent-evil/ For this I like having two roles, mongodb-primary and mongodb-secondary.  The difference will be the run_list.  Both will have recipe['mongodb'], which will install, configure and start up MongoDB on the servers.  But the mongodb-primary role will have the run_list of recipe['mongodb'],recipe['mongodb::primary'] to install MongoDB and configure the replica set.

2) The Environment - https://docs.getchef.com/essentials_environments.html

3) A replica set name, set via a cookbook attribute default['mongodb']['replica_set'] = 'ReplSet1'

Not everyone in the Chef community may love Roles and Environments, but this is a great use case for them.  Treating the combination of the three like a compound key in a database table, we’re able to identify like machines to be joined together into the replica set.

Searching for the Nodes

Using the above three values, what do the search commands look like?  First up from knife we’d have

knife search node "role:mongodb-secondary AND chef_environment:dev AND mongodb_replica_set:ReplSet1"

This is nice to verify you’ll get the desired search results without having to run a cookbook and debug the search syntax by doing a node converge.

Now that we’ve got the search syntax worked out, we’ll want to actually use it in a cookbook to create a replica set configuration file.

# some stuff was probably above here

# chef_environment might look redundant, but it will find machines in the same environment this machine is configured for
secondaries = search(:node, "role:mongodb-secondary AND chef_environment:#{node.chef_environment} AND mongodb_replica_set:#{node['mongodb']['replica_set']}")

# pass them in to the template below
template '/apps/mongodb/conf/replicaset.js' do
  owner 'mongodb'
  group 'users'
  mode '0755'
  action :create
  variables(
    :replicaset => node['mongodb']['replica_set'],
    :secondaries => secondaries
  )
end

execute 'mongodb-config' do
  command '/apps/mongodb/bin/mongo localhost:27017/test /apps/mongodb/conf/replicaset.js'
  action :run
  retries 6
  retry_delay 10
end

# and something is probably below here, but maybe not for the primary
// replicaset.js.erb file
rsconf = {
  "_id" : "<%= @replicaset %>",
  "version" : 1,
  "members" : [
    <% id_counter = 1 %>
    <% @secondaries.each do |secondary| %>
    {
      "_id" : <%= id_counter %>,
      "host" : "<%= secondary['fqdn'] %>:27017"
    },
    <% id_counter += 1 %>
    <% end %>
    // in order to gracefully handle trailing commas, put the known Primary last
    {
      "_id" : 0,
      "host" : "<%= node['fqdn'] %>:27017"
    }
  ]
}

rs.initiate( rsconf )

There we have it.  We’ve found all the machines we want to be in the same replicaset and configured the replicaset by running the JS file on the Primary machine of the MongoDB replica set.

Remember, only one machine in the replica set needs to run this script: the primary.  AND it will need to run it last, after all the nodes in the replica set are running (you can’t add a machine to a replica set if it isn’t running and configured as a MongoDB server yet).

Sending name/value pairs from Chef cookbook attributes to be dynamic Resource attributes

• Tyler Fitch • chef

One of my customers had a question about using name/value pairs from his cookbook’s attributes as settings for a Resource.

TL;DR

The name/value pairs in a cookbook Resource object are more than just a combination of attribute + value. More precisely, they’re a method_name + value combination. So making the method_name come from a dynamic source requires you to use the send() method of a Ruby Object to invoke the method, not just output a String matching what the cookbook would look like if everything was hard coded.

Long form:

Let’s start with the hardcoded cookbook recipe and see what that would look like.

# if nothing was dynamic
iis_pool 'MyAppPool' do
  runtime_version '12'
  max_proc 4
  thirty_two_bit false
  action :config
end

Now, to make those Resource attributes dynamic, let’s convert them into Attributes of the cookbook.

# attributes for the iis_pool
default['config']['setting']['runtime_version'] = '12'
default['config']['setting']['thirty_two_bit'] = false
default['config']['setting']['max_proc'] = 4

A first attempt at setting the attributes looked like this, but it did not work.

# outputs the same as hardcoded recipe but does *not* work
iis_pool 'MyAppPool' do
  node['config']['setting'].each do |setting, value|
    "#{setting}" "#{value}"
  end
  action :config
end

At first pass this all looked correct to me too. I was stumped until my brilliant co-worker Steve Danna (@SteveDanna) shared some great Ruby/Chef knowledge with me. The correct way to achieve the goal would be this

# same result as hardcoded recipe but now driven by Attributes
iis_pool 'MyAppPool' do
  node['config']['setting'].each do |setting, value|
    send(setting, value)
  end
  action :config
end

The key here is the send() method. http://ruby-doc.org/core-2.1.2/Object.html#method-i-send

When you write "#{setting}" "#{value}", setting is just returned as a string rather than calling the relevant setter method of the Resource object (iis_pool in this case).

I guess this surprised me, because when the cookbook is hardcoded you see the line thirty_two_bit false and it reads like two strings and not that thirty_two_bit is actually a method name being called.
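The same behavior is easy to demonstrate in plain Ruby, outside of Chef entirely; a sketch with a stand-in class of my own invention:

```ruby
# Stand-in for a Chef resource: each "attribute" is really a setter-style method.
class FakePool
  attr_reader :settings

  def initialize
    @settings = {}
  end

  def runtime_version(value)
    @settings['runtime_version'] = value
  end

  def max_proc(value)
    @settings['max_proc'] = value
  end
end

pool = FakePool.new
{ 'runtime_version' => '12', 'max_proc' => 4 }.each do |setting, value|
  pool.send(setting, value) # invokes the method *named by* the string
end
```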

Alas, now I know and you do too.

#ChefConf 2014 BoF - Creating Deploy Jobs in Jenkins

• Tyler Fitch • ChefConf

At #ChefConf 2014 I had the honor of leading a Birds of a Feather session on “Creating Deploy Jobs in Jenkins”.

ChefConf 2014 BoF

My interest here is in deploying a web app to my Chef managed servers.  The problem I need to solve is for when you don’t know what servers you have because in the Cloud the servers come and go.  Servers are cattle - not pets, right?

So what are our options?

knife status "role:webserver AND chef_environment:stage"

will give me the following output.

12 mins ago, ec2-57.186.102.29-1396555868.17, ec2-57-186-102-29.us-west-2.compute.amazonaws.com, 192.31.9.43, centos 6.4.
8 mins ago, ec2-57.82.206.136-1396555883.36, ec2-57-82-206-136.compute-1.amazonaws.com, 57.82.206.136, centos 6.4.

So I want to cherry pick the 3rd item of the comma-separated list, the hostname.  I could use awk, but found a simple example using cut and went with it. I loop over the output of the knife status search like so

for h in $(knife status "role:webserver AND chef_environment:load" | cut -d',' -f3); do ssh -oStrictHostKeyChecking=no myuser@$h "uptime"; done

The -oStrictHostKeyChecking=no is there because I’ve maybe never deployed to this cow before.  Change the ssh “uptime” command to the necessary scp command and you’re deploying.

But it isn’t bulletproof.  If a server crashes and doesn’t unregister itself as an active node with the Chef server you’ll try to deploy to a place that is no longer valid.  One way to mitigate this is use the timestamp column in the knife status result that shows the last time the server phoned home.  If all servers are checking in every 15-30 mins and it’s been an hour since the Chef client last checked with the Chef server then skip trying to deploy to it.
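That staleness filter can be sketched in a few lines of Ruby over node data; ohai_time (the epoch timestamp of a node's last check-in) is the field I'd key on, and the sample node data here is made up:

```ruby
# Keep only nodes that have checked in within max_age seconds of "now".
def fresh_nodes(nodes, now, max_age = 3600)
  nodes.select { |node| now - node['ohai_time'] <= max_age }
end

nodes = [
  { 'name' => 'web1', 'ohai_time' => 1_000_000 },  # checked in 12 minutes ago
  { 'name' => 'web2', 'ohai_time' => 990_000 }     # silent for about 3 hours
]
live = fresh_nodes(nodes, 1_000_720)  # deploy targets: just web1
```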

Installing Sun Java 1.5 with Chef

• Tyler Fitch • chef and java

In trying to use the Chef community cookbook for Java I found it to only work with Java 6 and Java 7.

I was trying to create a cookbook to recreate an existing environment at work, and I needed a scripted install of the Sun 1.5 JDK.

So I had to bail entirely on the community cookbook and go my own route.  Here’s how it played out.

remote_file "/install/location/path/jdk-1_5_0_XYZ-linux-i586.bin" do
  source "http://your.artifactory.example.com/artifactory/yourRepo/jdk-1_5_0_XYZ-linux-i586.bin"
  mode "0700"
  not_if { ::File.exists?('/install/location/path/jdk-1_5_0_XYZ-linux-i586.bin') }
end

bash "install-jdk15" do
  cwd "/install/location/path"
  code <<-EOH
    ./jdk-1_5_0_XYZ-linux-i586.bin >/dev/null < <(echo yes)
    rm -rf /install/location/path/jdk1.5.0_XYZ/sample
    rm -rf /install/location/path/jdk1.5.0_XYZ/demo
    ln -s /install/location/path/jdk1.5.0_XYZ /install/location/path/java
  EOH
  not_if { ::File.exists?('/install/location/path/jdk1.5.0_XYZ/README.html') }
end

magic_shell_environment 'JAVA_HOME' do
  value '/install/location/path/java'
end

One thing this does not do is put the java and/or javac executables into your $PATH.  I’d do this like the Java cookbook does and create symlinks in /usr/local/bin if you need them.
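If you do want them on the $PATH, a couple of link resources in the same recipe would cover it; a sketch following the symlink approach mentioned above (paths match the hypothetical install location used in this post):

```ruby
# Symlink the JDK executables into /usr/local/bin, like the Java cookbook does
%w(java javac).each do |bin|
  link "/usr/local/bin/#{bin}" do
    to "/install/location/path/java/bin/#{bin}"
  end
end
```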