🚀 Getting Started
On June 7, 2025, I took my first real step into the world of DevOps. I had just wrapped up a crash course in Linux, and I was curious: "What if I could automate the boring stuff I was doing manually?"
That question led me to Shell Scripting. Instead of watching endless tutorials, I gave myself a challenge: 1 week. 2 real projects. Let's see what I can build.
This blog captures that journey: the good, the broken, and the lessons.
🛠️ Project 1: Deploying a Django App Using Shell Script
🔗 Link: View on
📌 What I Tried to Build
I wanted to automate the deployment process of a Django app using a single Shell script.
The goal was to:
- Clone a repo
- Set up Docker and Docker Compose
- Build and run the app containers with a single command
💡 What I Learned
- The importance of breaking a task down into smaller steps using functions: cloning the repo, checking dependencies, building containers, and running them.
- How to use shell commands like `cd`, `git clone`, `if` conditions, and `echo` for logging inside scripts.
- How to add guard logic, for example:

```shell
if [ -d "myapp" ]; then
    echo "Directory already exists. Skipping clone."
else
    git clone <repo-url>
fi
```

- The most important thing I learned while deploying the Django app was error handling.
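Put together, the deploy script's skeleton looked roughly like this. This is a simplified sketch: the function names, the `REPO_URL` variable, and the image name are illustrative, not the exact ones from my script.

```shell
#!/bin/bash
# Simplified skeleton: each step lives in its own function so
# failures are easy to isolate.
REPO_URL="<repo-url>"   # placeholder: set this to the real repository

clone_repo() {
    # Skip the clone if the directory already exists.
    if [ -d "myapp" ]; then
        echo "Directory already exists. Skipping clone."
    else
        git clone "$REPO_URL" myapp
    fi
}

check_dependencies() {
    # Make sure Docker is available before doing anything else.
    command -v docker > /dev/null || { echo "Docker not found"; exit 1; }
}

build_and_run() {
    docker build -t myapp .
    docker compose up -d
}

main() {
    check_dependencies
    clone_repo
    build_and_run
}

# main   # uncomment to run the full flow
```

Keeping each step in a function also makes it easy to rerun just one step while debugging.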
❌ Issues I Faced
1. The script didn't execute because I forgot to give it execute permission with `chmod +x deploy.sh`.
2. The directory already existed when I ran the script a second time, which caused `git clone` to fail. I had to learn how to add checks to prevent duplicate actions.
3. The Docker daemon wasn't running, so the `docker build` command failed with errors. I didn't realize Docker needs to be started manually or run as a service.
4. Docker image naming mistakes: I initially wrote `docker build -t . myapp` instead of the correct `docker build -t myapp .`.
5. No error handling: at first, my script would continue even if a step failed. For example, if `git clone` or `docker build` failed, the next steps still ran, which made debugging harder. I didn't realize how important it is to stop the script when something breaks, or at least handle the failure properly. I also forgot to check whether commands actually succeeded before moving on, like:

```shell
git clone <repo-url>
# no check here: even if it fails, the script moves forward
```
✅ How I Solved Them
1. Missing Execute Permission
I realized that shell scripts need execute permission, so I ran:

```shell
chmod +x deploy.sh
```

This allowed the script to run directly with `./deploy.sh`.
2. Directory Already Exists Error
- I added a check to see if the folder was already present before cloning:

```shell
if [ -d "django-notes-app" ]; then
    echo "Directory already exists. Skipping clone."
else
    git clone <repo-url>
fi
```
3. Docker Daemon Not Running
- I learned to start the Docker daemon manually using:

```shell
sudo systemctl start docker
```
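To avoid hitting that failure again, a pre-flight check can verify the daemon is reachable before any build runs. This snippet is my own suggestion, not part of the original script; `docker info` fails fast when the daemon is down:

```shell
# Ask the daemon for its status; this fails fast if it is not running.
check_docker_daemon() {
    if ! docker info > /dev/null 2>&1; then
        echo "[ERROR] Docker daemon is not running. Try: sudo systemctl start docker" >&2
        return 1
    fi
    echo "[INFO] Docker daemon is running."
}
```

Calling `check_docker_daemon || exit 1` right before `docker build` turns a confusing build error into a clear, actionable message.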
4. Incorrect Docker Build Command
- Initially, I mixed up the syntax. Corrected it to:

```shell
docker build -t notes-app .
```
5. No Error Handling
- Started using this format to catch and report errors:

```shell
git clone <repo-url> || { echo "Git clone failed"; exit 1; }
```
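An alternative to adding `|| { ...; exit 1; }` after every command is to let Bash abort on the first failure. Here is a minimal sketch (the trap message wording is my own):

```shell
#!/bin/bash
# -e: exit on any failing command, -u: error on unset variables,
# -o pipefail: a pipeline fails if any stage in it fails.
set -euo pipefail

# Report the failing line number before the script exits.
trap 'echo "[ERROR] Script failed at line $LINENO" >&2' ERR

echo "[INFO] step 1 done"
# false                  # uncommented, the script would abort right here
echo "[INFO] step 2 done"
```

With these options on, a failed `git clone` stops the script immediately instead of letting later steps run against a half-finished setup.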
🛠️ Project 2: Automating AWS EC2 Instance Creation Using Shell Script
🔗 Link: View on
🧠 What I Tried to Build
In this project, I wanted to automate the process of launching an EC2 instance on AWS using a simple Bash script, with no manual clicks on the AWS Console.
The goal was to:
- Install AWS CLI (if not already installed)
- Create an EC2 instance using required parameters like AMI ID, instance type, key pair, subnet ID, and security group
- Wait until the instance is in a running state
- Handle basic error checking and output helpful logs throughout the process
I treated this as a real-world mini DevOps task: automating infrastructure provisioning with shell scripting and the AWS CLI. This helped me understand how cloud resources can be managed programmatically and how to handle dynamic situations in automation.
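The flow above can be sketched with the AWS CLI roughly like this. All parameter values are placeholders and the function name is my own, so treat this as the shape of the script rather than a drop-in implementation:

```shell
#!/bin/bash
# All values below are placeholders; replace them with real IDs
# from your AWS account before running.
AMI_ID="ami-0abcd1234example"
INSTANCE_TYPE="t2.micro"
KEY_NAME="my-key-pair"
SUBNET_ID="subnet-0123example"
SECURITY_GROUP_IDS="sg-0123example"

launch_instance() {
    # Launch the instance and capture its ID from the CLI output.
    INSTANCE_ID=$(aws ec2 run-instances \
        --image-id "$AMI_ID" \
        --instance-type "$INSTANCE_TYPE" \
        --key-name "$KEY_NAME" \
        --subnet-id "$SUBNET_ID" \
        --security-group-ids "$SECURITY_GROUP_IDS" \
        --query 'Instances[0].InstanceId' \
        --output text)
    echo "[INFO] Launched instance: $INSTANCE_ID"

    # Block until the instance reaches the running state.
    aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
    echo "[INFO] Instance $INSTANCE_ID is running."
}

# launch_instance   # uncomment once the values above are filled in
```

The `--query`/`--output text` pair extracts just the instance ID from the JSON response, and `aws ec2 wait instance-running` replaces a hand-rolled polling loop.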
📖 What I Learned
Validate Before You Automate
Skipping checks for things like empty AMI IDs or missing subnets led to broken scripts. I learned to always validate user inputs before running any cloud commands.

IAM Permissions Can Make or Break Automation
Even if the script is correct, missing permissions can silently block EC2 creation. Understanding IAM policies helped me debug smarter and faster.

Clarity in Output Makes Debugging Easier
Adding clear and structured log messages (`[INFO]`, `[ERROR]`, etc.) helped me understand what the script was doing at each step, and where it was going wrong.
🐞 Issues I Faced
1. Running Script Without Setting Required AWS Parameters
In the initial version of the script, I left variables like `AMI_ID`, `SUBNET_ID`, and `SECURITY_GROUP_IDS` as empty strings. This caused AWS CLI commands to fail with confusing or unhelpful error messages:

```
An error occurred (InvalidAMIID.Malformed) when calling the RunInstances operation: Invalid id: ""
```

2. Missing IAM Permissions for EC2 Actions
Even though the script was syntactically correct, it failed to create an instance because the AWS user didn't have the necessary EC2 permissions:

```
You are not authorized to perform this operation.
```
3. No Status Feedback During Execution
The script initially lacked logs or progress messages, which made it hard to trace which part was executing or where exactly it failed.
4. Script Continued Execution After Failure
When the AWS CLI wasn't installed or a command failed, the script still continued, which caused additional errors and wasted time.
✅ How I Solved Them
1. Empty or Incorrect Parameters
I added clear variable assignments in the main function of the script and made sure never to leave them empty:

```shell
AMI_ID="ami-0abcd1234example"
SUBNET_ID="subnet-0123example"
SECURITY_GROUP_IDS="sg-0123example"
```
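To catch an empty value even when I forget to fill one in, a small guard can run before any AWS call. The `require_var` helper name is my own invention, not from the original script:

```shell
# Fail fast when a required variable is empty.
require_var() {
    local name="$1" value="$2"
    if [ -z "$value" ]; then
        echo "[ERROR] $name is not set. Aborting." >&2
        return 1
    fi
}

AMI_ID="ami-0abcd1234example"
require_var "AMI_ID" "$AMI_ID" || exit 1       # passes: the value is set
# require_var "SUBNET_ID" "" || exit 1         # would print an error and exit
```

This turns the cryptic `InvalidAMIID.Malformed` failure into an immediate, readable message naming the missing variable.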
2. Missing IAM Permissions
I updated the IAM user by attaching the `AmazonEC2FullAccess` managed policy in the AWS Console. Then I tested access with a simple command:

```shell
aws ec2 describe-instances
```
3. Lack of Logging or Debug Output
I introduced `[INFO]`, `[ERROR]`, and progress messages before and after major steps:

```shell
echo "[INFO] AWS CLI is installed."
echo "[INFO] Creating EC2 instance..."
```
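One way to keep those prefixes consistent across a growing script is a pair of tiny helper functions. The names here are my own, not from the original script:

```shell
# Tiny logging helpers so every message carries a consistent prefix,
# with errors going to stderr where they belong.
log_info()  { echo "[INFO] $*"; }
log_error() { echo "[ERROR] $*" >&2; }

log_info "AWS CLI is installed."
log_info "Creating EC2 instance..."
log_error "Something went wrong."
```

Sending errors to stderr also means `./script.sh > run.log` keeps failures visible on the terminal while progress goes to the log.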
4. Script Continued Even After Failure
I used proper condition checks and `exit 1` to safely stop the script if something wasn't right:

```shell
check_aws_cli() {
    if ! which aws &> /dev/null; then
        echo "[ERROR] AWS CLI is not installed."
        return 1
    else
        echo "[INFO] AWS CLI is installed."
    fi
}

main() {
    if ! check_aws_cli; then
        install_aws_cli
        if ! check_aws_cli; then
            echo "[ERROR] AWS CLI install failed. Exiting..."
            exit 1
        fi
    fi
}
```
📚 Resources That Helped Me Along the Way
I didn't figure everything out on my own; here are some awesome resources that guided me through these projects:
- AWS EC2 Documentation: The official AWS documentation was my go-to source for understanding how EC2 instances work and the parameters needed to launch one.
- AWS CLI User Guide: Helped me structure commands properly and understand how the CLI interacts with various AWS services.
- Shell Scripting in One Shot (YouTube): Gave me a clear foundation on how to use conditionals, functions, and error handling in shell scripts.
Feel free to explore these if you're starting out or building something similar.
🏁 Wrapping Up
This was a fun and challenging week exploring how powerful Bash can be when combined with the AWS CLI. I learned a lot by building, breaking, and fixing things, and I'm just getting started on this DevOps journey.
❓ Over to You
Have you ever tried automating cloud tasks with Bash or any other scripting language? What was the biggest challenge you faced, or what tools made it easier for you?
Let me know in the comments or connect with me on X/Twitter or LinkedIn; I'd love to hear your experience!
Thanks for reading, and see you in the next blog! 👋