Why Automating Server Configuration Matters
Imagine you just got hired at a company that manages 50 Linux servers. Your manager asks you to set up 10 new ones — install packages, configure firewalls, create users, harden SSH, set up monitoring agents. Doing this manually on each server would take hours, and you’d inevitably forget a step on server number 7.
This is where Ansible comes in. Ansible lets you describe the desired state of your servers in simple YAML files and apply that configuration to one server or a thousand servers with a single command. No agents to install on your servers, no complex architecture — just SSH and a clear description of what you want.
In this guide, you’ll learn how to take a fresh Ubuntu server from zero to production-ready using Ansible. By the end, you’ll have a reusable playbook that handles:
- System updates and essential package installation
- User creation with SSH key access
- SSH hardening (disabling root login and password authentication)
- Firewall configuration with UFW
- Automatic security updates
- Timezone and hostname configuration
Prerequisites
Before we start, here’s what you need:
- A control machine — your laptop or a dedicated admin server (Linux or macOS). This is where Ansible runs.
- One or more target servers — fresh Ubuntu 22.04 or 24.04 installations with SSH access. A cheap cloud VPS from any provider works perfectly for practice.
- SSH key-based access to the target servers from your control machine.
- Python 3 installed on both your control machine and target servers (Ubuntu includes this by default).
Step 1: Install Ansible on Your Control Machine
Ansible only needs to be installed on your control machine — the machine you’re running commands from. Nothing gets installed on the target servers. That’s one of Ansible’s biggest advantages.
On Ubuntu/Debian
```bash
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible
```
On macOS
```bash
brew install ansible
```
Using pip (any platform)
```bash
python3 -m pip install --user ansible
```
Verify the installation:
```bash
ansible --version
```
You should see output starting with something like:
```
ansible [core 2.17.x]
  config file = None
  configured module search path = ['/home/youruser/.ansible/plugins/modules']
  python version = 3.x.x
```
If you see a version number, you’re good to go.
Step 2: Set Up Your Project Structure
A well-organized project makes everything easier to maintain. Let’s create a clean structure:
```bash
mkdir -p ~/server-setup
cd ~/server-setup
```
Here’s the structure we’ll build:
```
server-setup/
├── inventory.ini
├── ansible.cfg
└── playbook.yml
```
Step 3: Create the Inventory File
The inventory tells Ansible which servers to manage. Create a file called inventory.ini:
```ini
[webservers]
server1 ansible_host=203.0.113.10
server2 ansible_host=203.0.113.11

[all:vars]
ansible_user=root
ansible_python_interpreter=/usr/bin/python3
```
Replace 203.0.113.10 and 203.0.113.11 with your actual server IP addresses. If you only have one server, just list one entry. The `ansible_user=root` tells Ansible to connect as root initially — we’ll create a dedicated admin user during the playbook run, and once root login is disabled you’ll switch this value to that user.
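If you prefer YAML over INI, Ansible accepts the same inventory in YAML form. Here is an equivalent sketch (a hypothetical `inventory.yml`; either format works):

```yaml
# inventory.yml — same inventory as inventory.ini, in YAML format
webservers:
  hosts:
    server1:
      ansible_host: 203.0.113.10
    server2:
      ansible_host: 203.0.113.11
all:
  vars:
    ansible_user: root
    ansible_python_interpreter: /usr/bin/python3
```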
Step 4: Create the Ansible Configuration File
Create ansible.cfg in your project directory. This file configures Ansible’s behavior for this project:
```ini
[defaults]
inventory = inventory.ini
host_key_checking = False
retry_files_enabled = False
timeout = 30
```
A note on host_key_checking: We’re disabling this for initial setup convenience. In a production environment with long-lived servers, you’d want to set this to True and manage known hosts properly. For fresh servers that you’re setting up for the first time, this is a reasonable tradeoff.
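A middle ground worth knowing about: OpenSSH’s `accept-new` policy records a fresh server’s host key on first connect but still rejects a key that later changes. One way to use it with Ansible is to keep host key checking on and pass the option through SSH arguments (a sketch; requires OpenSSH 7.6 or newer):

```ini
[defaults]
inventory = inventory.ini
host_key_checking = True
retry_files_enabled = False
timeout = 30

[ssh_connection]
ssh_args = -o StrictHostKeyChecking=accept-new
```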
Step 5: Test Connectivity
Before writing any playbook, let’s make sure Ansible can reach your servers:
```bash
ansible all -m ping
```
Expected output:
```
server1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
server2 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
```
If you see "pong", your connection is working. If you get errors, double-check your SSH keys, IP addresses, and that the servers are running. A common beginner mistake is forgetting to add your SSH public key to the server’s ~/.ssh/authorized_keys file.
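If your private key lives somewhere non-standard, you can point Ansible at it in the inventory instead of passing `--private-key` on every command (the path below is an example, not from the original setup):

```ini
[all:vars]
ansible_ssh_private_key_file=~/.ssh/id_ed25519
```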
Step 6: Write the Production-Ready Playbook
Now for the main event. Create playbook.yml:
```yaml
---
- name: Configure fresh Linux server to production-ready state
  hosts: all
  become: true

  vars:
    admin_user: deploy
    server_timezone: UTC
    ssh_port: 22
    allowed_ssh_networks:
      - "0.0.0.0/0"
    essential_packages:
      - ufw
      - curl
      - wget
      - git
      - vim
      - htop
      - unzip
      - fail2ban
      - unattended-upgrades
      - apt-transport-https
      - ca-certificates

  tasks:
    # -------------------------------------------------------
    # 1. System Updates
    # -------------------------------------------------------
    - name: Update apt cache
      ansible.builtin.apt:
        update_cache: true
        cache_valid_time: 3600

    - name: Upgrade all packages to latest version
      ansible.builtin.apt:
        upgrade: dist
        autoremove: true

    # -------------------------------------------------------
    # 2. Install Essential Packages
    # -------------------------------------------------------
    - name: Install essential packages
      ansible.builtin.apt:
        name: "{{ essential_packages }}"
        state: present

    # -------------------------------------------------------
    # 3. Set Timezone and Hostname
    # -------------------------------------------------------
    - name: Set timezone
      community.general.timezone:
        name: "{{ server_timezone }}"

    - name: Set hostname
      ansible.builtin.hostname:
        name: "{{ inventory_hostname }}"

    # -------------------------------------------------------
    # 4. Create Admin User
    # -------------------------------------------------------
    - name: Create admin user
      ansible.builtin.user:
        name: "{{ admin_user }}"
        groups: sudo
        shell: /bin/bash
        create_home: true
        state: present

    - name: Set up authorized keys for admin user
      ansible.posix.authorized_key:
        user: "{{ admin_user }}"
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

    - name: Allow admin user to sudo without password
      ansible.builtin.lineinfile:
        path: /etc/sudoers.d/{{ admin_user }}
        line: "{{ admin_user }} ALL=(ALL) NOPASSWD:ALL"
        create: true
        mode: "0440"
        validate: "visudo -cf %s"

    # -------------------------------------------------------
    # 5. Harden SSH
    # -------------------------------------------------------
    - name: Disable root login via SSH
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^#?PermitRootLogin"
        line: "PermitRootLogin no"
        validate: "sshd -t -f %s"
      notify: Restart SSH

    - name: Disable password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^#?PasswordAuthentication"
        line: "PasswordAuthentication no"
        validate: "sshd -t -f %s"
      notify: Restart SSH

    - name: Disable empty passwords
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^#?PermitEmptyPasswords"
        line: "PermitEmptyPasswords no"
        validate: "sshd -t -f %s"
      notify: Restart SSH

    # -------------------------------------------------------
    # 6. Configure UFW Firewall
    # -------------------------------------------------------
    - name: Allow SSH through UFW
      community.general.ufw:
        rule: allow
        port: "{{ ssh_port }}"
        proto: tcp

    - name: Allow HTTP through UFW
      community.general.ufw:
        rule: allow
        port: "80"
        proto: tcp

    - name: Allow HTTPS through UFW
      community.general.ufw:
        rule: allow
        port: "443"
        proto: tcp

    - name: Set UFW default policy to deny incoming
      community.general.ufw:
        default: deny
        direction: incoming

    - name: Set UFW default policy to allow outgoing
      community.general.ufw:
        default: allow
        direction: outgoing

    - name: Enable UFW
      community.general.ufw:
        state: enabled

    # -------------------------------------------------------
    # 7. Configure Automatic Security Updates
    # -------------------------------------------------------
    - name: Configure unattended-upgrades for security updates
      ansible.builtin.copy:
        dest: /etc/apt/apt.conf.d/20auto-upgrades
        content: |
          APT::Periodic::Update-Package-Lists "1";
          APT::Periodic::Unattended-Upgrade "1";
          APT::Periodic::AutocleanInterval "7";
        mode: "0644"

    # -------------------------------------------------------
    # 8. Enable and Start fail2ban
    # -------------------------------------------------------
    - name: Enable and start fail2ban
      ansible.builtin.systemd:
        name: fail2ban
        enabled: true
        state: started

    # -------------------------------------------------------
    # 9. Kernel Hardening via sysctl
    # -------------------------------------------------------
    - name: Apply sysctl security settings
      ansible.posix.sysctl:
        name: "{{ item.key }}"
        value: "{{ item.value }}"
        sysctl_set: true
        reload: true
      loop:
        - { key: "net.ipv4.conf.all.rp_filter", value: "1" }
        - { key: "net.ipv4.conf.default.rp_filter", value: "1" }
        - { key: "net.ipv4.icmp_echo_ignore_broadcasts", value: "1" }
        - { key: "net.ipv4.conf.all.accept_redirects", value: "0" }
        - { key: "net.ipv4.conf.default.accept_redirects", value: "0" }

  # ---------------------------------------------------------
  # Handlers
  # ---------------------------------------------------------
  handlers:
    - name: Restart SSH
      ansible.builtin.systemd:
        name: ssh
        state: restarted
```
Let’s break down the key concepts in this playbook for anyone seeing Ansible for the first time:
- `hosts: all` — Run this playbook against every server in our inventory.
- `become: true` — Use sudo to execute tasks (we need root privileges to install packages and modify system config).
- `vars:` — Variables we can easily change without modifying task logic.
- `tasks:` — The ordered list of actions Ansible will perform.
- `handlers:` — Special tasks that only run when notified by another task. SSH restarts only if the config actually changed.
- `notify:` — Triggers a handler. If the SSH config didn’t change (say, on a second run), the handler won’t fire.
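Stripped to its essentials, the notify/handler pairing looks like this (a minimal sketch using nginx as a stand-in service, not part of our playbook):

```yaml
- name: Handler demo
  hosts: all
  become: true
  tasks:
    - name: Deploy nginx config          # if this reports "changed", the handler is queued
      ansible.builtin.copy:
        src: files/nginx.conf
        dest: /etc/nginx/nginx.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx                # runs once, at the end of the play, only if notified
      ansible.builtin.service:
        name: nginx
        state: restarted
```

Note that a handler runs at most once per play even if several tasks notify it — ten config changes still mean one service restart.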
Step 7: Install Required Ansible Collections
Our playbook uses a couple of modules from Ansible collections that may not be installed by default. Let’s install them:
```bash
ansible-galaxy collection install ansible.posix community.general
```
Expected output:
```
Starting galaxy collection install process
Process install dependency map
...
ansible.posix (1.6.x) was installed successfully
community.general (9.x.x) was installed successfully
```
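For reproducibility, you can pin these collections in a `requirements.yml` so teammates install the exact same versions with one command (the file name is the Galaxy convention; the version constraints below are illustrative):

```yaml
# requirements.yml
collections:
  - name: ansible.posix
    version: ">=1.5.0"
  - name: community.general
    version: ">=9.0.0"
```

Then install with `ansible-galaxy collection install -r requirements.yml`.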
Step 8: Run the Playbook
This is the moment — one command to go from a fresh server to production-ready:
```bash
ansible-playbook playbook.yml
```
You’ll see Ansible work through each task. A successful run looks something like this:
```
PLAY [Configure fresh Linux server to production-ready state] ******************

TASK [Gathering Facts] *********************************************************
ok: [server1]

TASK [Update apt cache] ********************************************************
changed: [server1]

TASK [Upgrade all packages to latest version] **********************************
changed: [server1]

...

PLAY RECAP *********************************************************************
server1    : ok=18   changed=16   unreachable=0   failed=0   skipped=0
```
Key things to look at in the recap:
- ok — Tasks that ran successfully (includes tasks where nothing needed to change).
- changed — Tasks that actually modified something on the server.
- failed=0 — This is what you want. Zero failures.
Step 9: Verify and Run Again (Idempotency)
One of Ansible’s superpowers is idempotency — you can run the same playbook multiple times and it won’t break anything. Each task checks the current state and only acts when it differs from the desired state. One catch before re-running: the first run disabled root SSH login, so update inventory.ini to connect as the new admin user (change `ansible_user=root` to `ansible_user=deploy`). Then run the playbook again:

```bash
ansible-playbook playbook.yml
```

```
PLAY RECAP *********************************************************************
server1    : ok=18   changed=0    unreachable=0   failed=0   skipped=0
```
See changed=0? That means your server is already in the desired state. This is incredibly powerful — you can run this playbook on a schedule or after any manual change to bring servers back into compliance.
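One way to run it on a schedule is to let Ansible manage a cron entry on the control machine itself (a sketch; the paths, timing, and log file below are assumptions, not part of the main playbook):

```yaml
# Hypothetical: nightly compliance run, applied to the control machine
- name: Schedule compliance runs
  hosts: localhost
  connection: local
  tasks:
    - name: Cron entry that re-applies the playbook at 03:00
      ansible.builtin.cron:
        name: "server-setup compliance run"
        minute: "0"
        hour: "3"
        job: "cd ~/server-setup && ansible-playbook playbook.yml >> ~/compliance.log 2>&1"
```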
Step 10: Verify the Server Configuration
Let’s verify that everything was applied correctly. SSH into your server using the new admin user:
```bash
ssh deploy@203.0.113.10
```
Then run these checks:
```bash
# Check that the admin user exists and has sudo
sudo whoami
# Expected: root

# Check UFW status
sudo ufw status
# Expected: Status: active, with rules for 22, 80, 443

# Check fail2ban is running
sudo systemctl status fail2ban
# Expected: active (running)

# Check SSH config
grep PermitRootLogin /etc/ssh/sshd_config
# Expected: PermitRootLogin no

# Check timezone
timedatectl
# Expected: Time zone: UTC
```
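You can also automate these spot checks with a small verification playbook using `ansible.builtin.assert` (a sketch; `sshd -T` prints the effective configuration in lowercase, which is why the assertions are lowercase):

```yaml
- name: Verify SSH hardening
  hosts: all
  become: true
  tasks:
    - name: Read effective sshd configuration
      ansible.builtin.command: sshd -T
      register: sshd_cfg
      changed_when: false            # read-only command, never report "changed"

    - name: Root login and password auth must be off
      ansible.builtin.assert:
        that:
          - "'permitrootlogin no' in sshd_cfg.stdout"
          - "'passwordauthentication no' in sshd_cfg.stdout"
        fail_msg: "SSH hardening settings are not applied"
```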
Common Beginner Mistakes to Avoid
After mentoring many junior engineers through their first Ansible projects, here are the mistakes I see most often:
| Mistake | Why It’s a Problem | What to Do Instead |
|---|---|---|
| Hardcoding IP addresses in playbooks | Playbooks become impossible to reuse | Use inventory files and variables |
| Running playbook before testing connectivity | You waste time debugging Ansible when the problem is SSH | Always run `ansible all -m ping` first |
| Disabling root SSH before creating an admin user | You lock yourself out of the server entirely | Create the admin user with sudo before hardening SSH (our playbook does this correctly) |
| Not using `validate` on config files | A typo in `sshd_config` can lock you out on restart | Use `validate: "sshd -t -f %s"` to test config before applying |
| Ignoring the `--check` flag | You apply changes without previewing them | Run `ansible-playbook playbook.yml --check` for a dry run first |
The Dry Run Flag
This is especially important to internalize. Before running any playbook against production servers, do a dry run:
```bash
ansible-playbook playbook.yml --check --diff
```
The `--check` flag simulates the run without making changes. The `--diff` flag shows you exactly what would change in files. Together, they give you confidence before you commit.
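Some tasks can’t be simulated meaningfully — for example, a command whose registered output later tasks depend on. Ansible lets you control this per task with `check_mode` (a sketch, not from our playbook):

```yaml
- name: Gather host info even during --check
  ansible.builtin.command: hostnamectl status
  register: host_info
  check_mode: false      # run for real even in check mode, so later tasks have data
  changed_when: false    # read-only, so never report "changed"
```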
Taking It Further: Using Variables for Multiple Environments
As your infrastructure grows, you’ll want different settings for staging and production. You can override variables at runtime:
```bash
ansible-playbook playbook.yml -e "admin_user=staging-deploy server_timezone=America/New_York"
```
Or create separate variable files and include them. The playbook stays the same — only the data changes.
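The conventional place for per-environment data is a `group_vars/` directory next to your inventory; Ansible loads these files automatically by group name (the layout is the standard convention, the values are examples):

```yaml
# group_vars/webservers.yml — loaded automatically for hosts in [webservers]
admin_user: deploy
server_timezone: UTC
```

A staging inventory with a `[staging]` group would get its own `group_vars/staging.yml`, and the playbook itself never changes.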
What’s Next
Now that you have a working server configuration playbook, here are the logical next steps in your Ansible journey:
- Ansible Roles — Break your playbook into reusable components. Instead of one big file, you’d have separate roles for “base server,” “web server,” “database server,” etc.
- Ansible Vault — Learn to encrypt sensitive variables like passwords and API keys with
ansible-vault. - Dynamic Inventory — Instead of manually listing servers in