Locking Down Linux: My "Day 0" Server Hardening Checklist

The 15-Minute Nightmare That Changed My Workflow

I still remember the sinking feeling I had back in 2017. I had just spun up a cheap $5 VPS to test a Python script for a client. I figured, "I'll secure it later, I just need to check if this dependency installs." I left the default root password active—something incredibly weak—and went to grab a coffee. I was gone for maybe twenty minutes.

When I got back, the terminal was sluggish. htop showed the CPU pinned at 100%. Someone had brute-forced the root account, installed a crypto miner, and was already blasting outgoing traffic, which got my IP flagged by the hosting provider. That was a wake-up call. It doesn't take days for bots to find a new server; it takes minutes. Since then, I don’t touch a single line of code until I’ve run through my "Day 0" hardening checklist. It’s not paranoia if the internet is actually out to get you.

This isn't just about following best practices because some textbook said so. This is about sleeping at night knowing your server isn't mining Monero for a stranger or serving as a command-and-control node for a botnet. Here is exactly how I set up every Linux machine I touch, primarily focused on Debian/Ubuntu systems, though the logic applies everywhere.

SSH: The Front Door Needs a Better Lock

Most attacks happen here. SSH (Secure Shell) is the industry standard for remote administration, but out of the box, it’s often too permissive. I’ve seen defaults on some provider images that honestly scare me.

Step 1: Keys or Bust
I haven't used a password to log into a server in years. Passwords are leakable, guessable, and annoying to type. If you're still using RSA 2048 keys, it's time to upgrade. I switched to Ed25519 keys about four years ago. They are smaller, faster, and generally considered more secure against certain types of attacks.

On your local machine, generate one:

ssh-keygen -t ed25519 -C "your-email@example.com"

Once you copy that to your server (ssh-copy-id), disable password authentication immediately. This is the single biggest security upgrade you can make.
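
For reference, the copy-and-test step looks roughly like this (the username and hostname are placeholders for your own):

ssh-copy-id -i ~/.ssh/id_ed25519.pub deploy@your-server
ssh deploy@your-server   # confirm key login works BEFORE you disable passwords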

Step 2: The Config File Overhaul
Open up /etc/ssh/sshd_config. Here are the specific lines I change; a consolidated snippet follows the list. Don't just copy-paste; understand what they do.

  • PermitRootLogin no: Never log in as root. Log in as a regular user and use sudo. It adds a layer of accountability and stops bots that are hardcoded to attack the 'root' username.
  • PasswordAuthentication no: This forces key-based auth.
  • PubkeyAuthentication yes: Ensures keys actually work.
  • Port 2222 (or similar): Okay, this is controversial. Changing the default port (22) is "security by obscurity," and seasoned pros will tell you it doesn't stop a determined hacker. They're right. However, I do it anyway because it stops 99% of the automated script kiddie noise in my logs. It keeps the log files clean so I can see real threats.
My Mistake: One time I changed the SSH port to 4422 but forgot to update my local SSH config alias. I spent an hour debugging network issues before realizing I was just knocking on the wrong door. Always verify your connection in a new terminal window before closing your current session.
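
Putting those changes together, the relevant block of /etc/ssh/sshd_config ends up looking roughly like this (assuming you went with port 2222):

Port 2222
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes

Validate the syntax and restart the daemon, then test from that new terminal before closing your current session:

sudo sshd -t                  # checks the config for syntax errors
sudo systemctl restart ssh    # Debian/Ubuntu service name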

Firewalls: UFW is Your Best Friend

I used to mess around with raw iptables commands. Honestly? It's miserable. Unless you are a network engineer tuning high-frequency trading servers, just use UFW (Uncomplicated Firewall). It’s a wrapper for iptables/nftables that makes sense to human beings.

The philosophy here is "Deny All, Allow Necessary."

Here is the exact sequence I run. Warning: Do this in order, or you will lock yourself out (ask me how I know).

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 2222/tcp  # OR whatever port you picked for SSH
sudo ufw enable

If you are running a web server, obviously add port 80 and 443. But don't open them until you actually have a service listening there. Leaving ports open for "future use" is a bad habit I had to break early in my career.
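
When the web server actually exists, the rules are just as simple; the status check afterward is a sanity habit worth building:

sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw status verbose   # confirm exactly what is exposed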

If you prefer a GUI or a more visual approach, you might look at cloud-provider firewalls (like AWS Security Groups), but I'm a firm believer in host-based firewalls as a second line of defense. If the cloud config fails or is misconfigured by a teammate, your local UFW rules still hold.

Fail2Ban: The Bouncer

Even with a custom port and keys, you might see connection attempts. Fail2Ban is a piece of software that scans log files (like /var/log/auth.log) and bans IPs that show malicious signs—like too many password failures.

I install this on every machine. The default configuration is usually in /etc/fail2ban/jail.conf, but never edit that file. Updates will overwrite it. Create a jail.local file instead.

Here is a snippet of my standard jail.local for SSH:

[sshd]
enabled = true
port = 2222
filter = sshd
logpath = /var/log/auth.log
maxretry = 3
bantime = 3600

A bantime of 3600 seconds (1 hour) is usually enough to make a bot move on. Some people set it to permanent bans, but I've found that eventually, you'll ban a legitimate IP (like your office VPN) and cause a headache for yourself. A one-hour timeout is a good balance.
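
For completeness, this is roughly my install-and-check routine; the IP in the unban command is just a placeholder for the day you inevitably ban yourself:

sudo apt install fail2ban
sudo systemctl enable --now fail2ban
sudo fail2ban-client status sshd                     # shows currently banned IPs
sudo fail2ban-client set sshd unbanip 203.0.113.5    # release a specific address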

I've been hearing a lot of buzz about CrowdSec lately. It’s a newer tool that shares threat intelligence across a community. I've tested it on a few non-critical nodes, and the concept is brilliant—if an IP attacks me, my server tells the network, and your server blocks it preemptively. However, for pure stability on production rigs, I'm still sticking with Fail2Ban v0.11+ for now. It’s boring, and boring is good for security.

Automatic Updates: The "Unattended" Dilemma

This is where I argue with other sysadmins. Some say, "Never auto-update! You might break the app!"

My take? I would rather deal with 10 minutes of downtime because a patch broke a config file than deal with a complete data breach because I forgot to patch a critical CVE for three months. For the OS level (kernel, openssl, bash), I enable unattended-upgrades on Debian/Ubuntu systems.

You can configure it to only install security updates, ignoring the rest. This minimizes the risk of breaking changes while keeping the nasty stuff patched.

sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades
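
If you want to see (or tighten) what it's allowed to touch, the settings live in /etc/apt/apt.conf.d/50unattended-upgrades. A minimal excerpt that sticks to security updates looks something like this on Debian/Ubuntu:

// Only pull packages from the security origin; leave everything else for manual runs.
Unattended-Upgrade::Allowed-Origins {
    "${distro_id}:${distro_codename}-security";
};
// Never reboot on its own; I schedule kernel reboots myself.
Unattended-Upgrade::Automatic-Reboot "false";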

I check my servers manually about once a month, but this tool covers my back when I'm on vacation or just swamped with other work. In 5 years of running this on about 20 servers, I think an auto-update has caused a service failure maybe twice. The risk/reward ratio is heavily in favor of patching.

User Management and Sudo

We mentioned disabling root login, but user management goes deeper. When you create your admin user, ensure they are in the sudo group (or wheel on RHEL/CentOS).
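
On a fresh box, that's two commands; "deploy" here is just an example username:

sudo adduser deploy
sudo usermod -aG sudo deploy   # use the 'wheel' group instead on RHEL/CentOS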

One specific tweak I love: requiring a password for sudo. Some cloud-init scripts set up the default user with NOPASSWD in the sudoers file. I hate this. If an attacker manages to compromise your user account (maybe via a web shell or a stolen session cookie), they shouldn't just be able to type sudo rm -rf / without a challenge.

Run sudo visudo and check your permissions. Make sure it looks like this:

%sudo   ALL=(ALL:ALL) ALL

NOT this:

%sudo   ALL=(ALL:ALL) NOPASSWD: ALL

It adds three seconds to your workflow to type your password, but it stops automated escalation scripts dead in their tracks.

Frequently Asked Questions

Is changing the SSH port actually useful?

Technically, no, it's not a hard security measure. A port scanner like Nmap will find your open SSH port in seconds regardless of where you hide it. However, practically speaking, it is incredibly useful for reducing log noise. If you leave it on port 22, your logs will be gigabytes larger, filled with bots trying generic passwords. Moving it keeps your logs readable, which helps you spot targeted attacks versus random noise.
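
If you want to prove this to yourself, point a scanner at your own box (the hostname is a placeholder); a moved SSH daemon still shows up, it just takes the scan a bit longer to walk every port:

nmap -p- -sV your-server.example.com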

What if I lock myself out with UFW?

This happens to the best of us. If you are on a VPS (like DigitalOcean, Linode, or AWS), most providers offer a "Web Console" or "VNC Console" feature in their dashboard. This creates a direct KVM connection to the server, bypassing the network interface. You can log in there (usually requiring the root password you hopefully saved somewhere secure) and run sudo ufw disable to regain access.

Should I use SELinux or AppArmor?

If you are on CentOS/RHEL, keep SELinux enabled. If you are on Ubuntu/Debian, use AppArmor. I know, I know—SELinux is notorious for breaking things and making you want to pull your hair out. I've spent nights debugging "permission denied" errors that turned out to be SELinux contexts. But disabling it is like taking the airbags out of your car because they take up space. Learn the basics of audit2allow; it’s worth the learning curve.
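
As a rough sketch of that workflow on a RHEL-family box, you turn the recent denials into a local policy module instead of switching SELinux off ("mywebapp" is just a placeholder module name):

sudo grep denied /var/log/audit/audit.log | audit2allow -M mywebapp
sudo semodule -i mywebapp.pp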

Why not just use a VPN to access the server?

If you have the infrastructure, putting your SSH access behind a VPN (like WireGuard or OpenVPN) is the gold standard. You close port 22 (or 2222) to the public internet entirely and only allow connections from the VPN interface. However, for a standalone server or a small project, that adds a lot of complexity. The guide above is for when you need that server publicly accessible but secure.
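
For what it's worth, once the tunnel exists, the UFW side of that is only two commands; this sketch assumes a WireGuard interface named wg0 and SSH on 2222:

sudo ufw delete allow 2222/tcp                         # drop the public-facing rule
sudo ufw allow in on wg0 to any port 2222 proto tcp    # reachable only over the tunnel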

So, where does that leave us?

Security isn't a product you buy; it's a process you maintain. The steps above—keys, firewalls, Fail2Ban, and updates—are the foundation. I've been running servers this way for years, and while nothing is ever 100% hack-proof, this setup filters out the low-effort attacks that make up the majority of internet traffic.

Don't let the complexity paralyze you. Start with the SSH keys and the firewall. Once you get comfortable with those, the rest falls into place. And seriously, check your backups. But that’s a story for another post.
