User data is a feature that allows customization of an Amazon Elastic Compute Cloud (EC2) instance (a virtual machine) when it is created and, if desired, with each restart after it is provisioned.
Amazon EC2 instances (virtual machines) are the legacy approach to hosting applications. Last year, I attended a webinar sponsored by AWS where the presenter, Mike Pfeiffer, talked about the four strategies for migrating from on-premises to the cloud: (i) lift and shift, (ii) modernization (refactoring the application), (iii) re-architecting to benefit from the flexibility of the cloud, and finally, (iv) rebuilding from scratch to go cloud-native.
Note: If you are interested, here is the link to the webinar: AWS migration services.
As the name suggests, lift-and-shift is a direct move of virtual servers and workloads from on-premises to the cloud and, hence, is the fastest way to migrate. It is, however, the most expensive of the four strategies for hosting an application in the cloud. Most organizations realize that and gradually shift towards more efficient cloud-native products (starting from infrastructure-as-a-service and moving to platform-as-a-service). An infrastructure-as-code approach to provisioning Amazon EC2 is a step towards the cloud-native strategy compared to lift-and-shift, but the cost of hosting remains high.
However, Amazon Web Services provides an exciting feature to help with a quicker turnaround. Just because an application is hosted on a virtual machine does not mean that installing software, enabling features, and turning it into a functional component of an environment has to happen manually and, in the process, take longer. Over the last few years, I have learned of tools like Terraform that can manage a cloud resource (like Amazon EC2) with a few lines of HCL code. But what happens after that? Merely provisioning a virtual machine is not enough: the provisioned machine won't be usable without installing specific software and/or enabling certain features (I'm talking about the Windows OS here).
That is when user data comes into the picture. I found this AWS Docs page very informative while learning about user data and would highly recommend going through it: ec2-windows-user-data.
User data is the answer to automating all the manual steps that follow once an Amazon EC2 instance is provisioned to host an application.
Learning by practice works best for me, and hence, I created a use case to provision an Amazon EC2 instance and ran user data as part of the provisioning process. I used Terraform to achieve this objective. As part of provisioning a Windows Amazon EC2 instance, I had the following steps planned:
1. Create a folder to store the user data log file.
2. Rename the machine, which is followed by a restart.
3. Install a Windows feature after the restart.
Previously, I had worked on provisioning an Amazon EC2 instance, so in this note, I built on the work done there. I stored the code at my GitHub repository: ec2-userdata-terraform.
Step 1: Add `user_data` to the `aws_instance` Terraform block
The path to the user data script, along with any input variables, is passed to the `user_data` argument in the Terraform configuration file.
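The original snippet is in the repository; a minimal sketch of the idea, assuming a `templatefile()`-based rendering and placeholder values for the AMI, instance type, and variable names, could look like this:

```hcl
resource "aws_instance" "windows_server" {
  ami           = var.ami_id  # placeholder: a Windows Server AMI
  instance_type = "t3.medium" # placeholder instance type

  # Render the PowerShell template and pass input variables to it;
  # every key in this map becomes a variable inside the template.
  user_data = templatefile("${path.module}/user_data/user_data.tpl", {
    ServerName = var.server_name
  })

  tags = {
    Name = var.server_name
  }
}
```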
As you can see in the above code snippet, I passed the name of the server as input to the user data script, and inside the user data script, I read it using the `$ServerName = "${ServerName}"` assignment.
With this approach, you can pass multiple such variables to the user data script.
Step 2: Add the user data template file with detailed instructions.
I chose PowerShell as the language for my user data script. The file is stored at `user_data\user_data.tpl`.
There were two important considerations while coming up with a user data-based approach: (i) whether the script should run multiple times (with each machine restart) or just once when Terraform provisions the machine, and (ii) how to track the status of the user data script execution.
There is no right or wrong answer on whether the user data script needs to run once or more than once; it depends on the use case. For this particular use case, it had to run multiple times since I had inserted a machine restart. In such a case, it is necessary to make the user data script idempotent: run it as many times as you want, and the end state is always the same. If the user data is required to run with each restart, it must be idempotent. If you are new to this concept, I have a short article at idempotency-in-infrastructure-as-code. [TL;DR: you check for the desired state and apply a change only if the state does not match.]
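In code terms, that TL;DR pattern looks roughly like this, where `Test-DesiredState` and `Set-DesiredState` are hypothetical placeholders for whatever check and change a given step needs:

```powershell
# Idempotency pattern: test for the desired state first,
# and make a change only when the state does not match.
if (-not (Test-DesiredState)) {   # hypothetical check, e.g., "is the folder there?"
    Set-DesiredState              # hypothetical change, e.g., "create the folder"
}
```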
The second important consideration was whether the user data script ran at all. When I started using user data, that was the #1 question in my mind. Also, up to what point did the script run? Instead of manually checking each desired state (has the folder been created, the machine renamed, the Windows feature installed?), I could write meaningful log messages from the user data script and store them somewhere.
I addressed both of these concerns with the `user_data.tpl` file.
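The actual file is in the repository; below is a trimmed-down sketch of what such a template can look like. The log path (`C:\UserDataLogs`) and the Windows feature (`Web-Server`) are placeholders I use for illustration, and the `<persist>true</persist>` tag, described on the AWS docs page linked earlier, is what makes the launch agent re-run the script on every boot:

```powershell
<powershell>
# Read the input variable passed in from Terraform's templatefile().
$ServerName = "${ServerName}"

# Step 1: create a folder for the user data log file (idempotent).
$LogDir  = "C:\UserDataLogs"
$LogFile = Join-Path $LogDir "user_data.log"
if (!(Test-Path $LogDir)) {
    New-Item -ItemType Directory -Path $LogDir | Out-Null
}
Add-Content -Path $LogFile -Value "$(Get-Date) - user data started"

# Step 2: rename the machine and restart, but only if it is not already renamed.
if ($env:COMPUTERNAME -ne $ServerName) {
    Add-Content -Path $LogFile -Value "$(Get-Date) - renaming to $ServerName and restarting"
    Rename-Computer -NewName $ServerName -Force -Restart
}

# Step 3: after the restart, install a Windows feature if it is missing.
if (-not (Get-WindowsFeature -Name Web-Server).Installed) {
    Add-Content -Path $LogFile -Value "$(Get-Date) - installing Web-Server"
    Install-WindowsFeature -Name Web-Server
}
Add-Content -Path $LogFile -Value "$(Get-Date) - user data finished"
</powershell>
<persist>true</persist>
```

Each step checks the desired state before acting, so the re-run after the rename-triggered restart skips straight to the feature installation.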
Step 3: Run `terraform plan` and `terraform apply`
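The standard Terraform workflow applies here; nothing about user data changes how the configuration is planned and applied:

```powershell
# Initialize providers, preview the changes, then apply them.
terraform init
terraform plan -out tfplan
terraform apply tfplan
```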
Remember that even though `terraform apply` ran just fine without any errors, Terraform does not know anything about the status of the user data [did it run at all? Did it run successfully till the end?]. Terraform was instructed to provision an Amazon EC2 instance, which it did correctly, and then it handed control over to the user data that runs inside the instance. So do not be mad at yourself if you log into the Amazon EC2 instance and find that the user data did not work.
Step 4: Verify user data
Although the verification does not need to be done manually once the process is well established, one of my first questions when I was setting up the process (as I mentioned earlier) was: did the user data run without any errors?
Hence, the first task was finding the log file the user data created. That is why I wrote the user data script so that the log file is easy to access and review to check the status.
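Once the instance is up, checking the status can be as simple as reading that file on the machine (the path below assumes the placeholder location from the sketch above):

```powershell
# Show the most recent entries of the user data log file.
Get-Content -Path "C:\UserDataLogs\user_data.log" -Tail 20
```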
In this use case, I created a folder in the user data script to store the log file.
Remember that this step, too, has to be idempotent; hence the `if()` block.
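Here is what that guard can look like, reusing the placeholder path from the template sketch above:

```powershell
# Create the log folder only if it does not already exist,
# so the script stays safe to re-run on every restart.
if (!(Test-Path "C:\UserDataLogs")) {
    New-Item -ItemType Directory -Path "C:\UserDataLogs" | Out-Null
}
```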
Sometimes, it is also necessary to know where the user data file (in my case, a PowerShell file) is stored so that, while debugging, I can check the form in which AWS transferred it to the Amazon EC2 instance. So, I identified that path and wrote it out to the log file with the below logic:
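The exact line is in the repository; one way to capture this, assuming the launch agent materializes the rendered user data as a `.ps1` file before executing it, is PowerShell's built-in `$PSCommandPath` automatic variable:

```powershell
# Log the path of the script file that the launch agent is executing.
Add-Content -Path "C:\UserDataLogs\user_data.log" -Value "Script path: $PSCommandPath"
```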
AWS also writes its own log of the user data execution; this information is available in the AWS Docs link I shared above.
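If I read the AWS docs correctly, on Windows Server 2016 and 2019 instances running the EC2Launch agent, that log is `C:\ProgramData\Amazon\EC2-Windows\Launch\Log\UserdataExecution.log`; EC2Launch v2 and the older EC2Config agent use different locations, so check the docs page for your agent version.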
Here is an image of the log file created with each user data run: after machine provisioning and after each machine restart. Note that the user data execution log file and the PowerShell script are stored at different locations.
And that brings us to the end of this note. I hope you found it helpful. I would be happy to answer any questions you may have; please share them in the comments section. Also, please do not hesitate to correct me if I have misstated anything.
It wouldn't have been possible to gain so much information about user data without the help of my colleague Steve Torrey, who showed me how to work with user data and patiently answered all my questions.