DevOps engineers and system administrators spend countless hours on repetitive tasks that could be automated. This guide provides 25 practical bash scripting examples for DevOps that solve real problems you face daily.
These devops automation scripts are designed for DevOps professionals, SREs, and Linux administrators who want to streamline their workflows and reduce manual work. Each script includes real-world scenarios, implementation details, and bash scripting best practices.
You’ll discover automation scripts for Linux that cover system monitoring scripts for tracking server health, log management automation for parsing and analyzing logs, and application deployment scripts that handle complex release processes. We’ll also explore CI/CD pipeline automation techniques and infrastructure automation bash solutions for cloud operations.
The scripts range from simple file system operations to advanced configuration management bash tools. Each example includes error handling, logging, and security considerations to help you build reliable automation that works in production environments.
Essential Bash Scripting Fundamentals for DevOps Teams
Master Variable Management and Environment Configuration
Variable management forms the backbone of effective bash scripting for devops teams. Proper variable handling prevents configuration drift and makes scripts maintainable across different environments. Global variables should follow consistent naming conventions, using uppercase letters with underscores for constants like DB_HOST or API_ENDPOINT.
Environment variables play a crucial role in devops automation scripts. Define environment-specific configurations using .env files or export statements at the script’s beginning. Always provide default values using parameter expansion: ${DATABASE_URL:-"localhost:5432"}. This approach ensures scripts run smoothly even when environment variables aren’t explicitly set.
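Here’s a minimal sketch of that pattern (the .env path and variable names are illustrative): load an optional environment file, then apply defaults with parameter expansion.
# Load an optional environment file, then fall back to safe defaults
ENV_FILE="${ENV_FILE:-.env}"
[[ -f "$ENV_FILE" ]] && source "$ENV_FILE"

DATABASE_URL="${DATABASE_URL:-localhost:5432}"
API_ENDPOINT="${API_ENDPOINT:-https://api.example.com}"
echo "Connecting to database at $DATABASE_URL"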
Local variables within functions prevent namespace pollution and improve script reliability. Use the local keyword to scope variables properly:
deploy_application() {
local app_name="$1"
local version="$2"
local deployment_path="/opt/${app_name}"
# Function logic here
}
Array variables excel at managing multiple configuration items. Store server lists, configuration files, or deployment targets in arrays for easy iteration:
SERVERS=("web01.example.com" "web02.example.com" "web03.example.com")
CONFIG_FILES=("/etc/nginx/nginx.conf" "/etc/php/php.ini" "/etc/mysql/my.cnf")
Implement Error Handling and Exit Codes for Robust Scripts
Robust error handling distinguishes professional devops bash scripts from amateur attempts. The set -euo pipefail combination creates a fail-fast environment that catches errors immediately. The -e flag exits on command failures, -u treats unset variables as errors, and -o pipefail ensures pipeline failures don’t get masked.
Custom error handling functions provide detailed feedback when things go wrong:
handle_error() {
echo "ERROR: $1" >&2
echo "Line $2: Command failed with exit code $3"
cleanup_resources
exit 1
}
trap 'handle_error "Script failed" $LINENO $?' ERR
Exit codes communicate script results to calling processes. Use standard conventions: 0 for success, 1 for general errors, 2 for command-line usage errors, and custom codes for specific failure conditions. CI/CD pipeline automation relies heavily on these exit codes to determine build success or failure.
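As a small illustration, exit codes can be declared once as named constants and reused throughout the script (the constant names below are just one possible convention):
# Declare exit codes once and reuse them for consistent results
readonly EXIT_OK=0
readonly EXIT_GENERAL_ERROR=1
readonly EXIT_USAGE_ERROR=2

if [[ $# -lt 1 ]]; then
    echo "Usage: $0 <app_name> [version]" >&2
    exit "$EXIT_USAGE_ERROR"
fi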
Validate critical inputs before processing. Check file existence, network connectivity, and service availability upfront:
validate_environment() {
[[ -f "$CONFIG_FILE" ]] || { echo "Config file missing"; exit 2; }
[[ -n "$DATABASE_URL" ]] || { echo "Database URL required"; exit 2; }
curl -s "$API_ENDPOINT/health" > /dev/null || { echo "API unreachable"; exit 3; }
}
Leverage Command-Line Arguments and User Input Processing
Command-line argument processing makes bash scripts flexible and reusable across different scenarios. The getopts builtin provides robust parsing of short options, and the external getopt command adds long-option support when you need it. Create self-documenting scripts that display usage information when called incorrectly.
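A short getopts sketch might look like the following; the option letters and usage text are placeholders to adapt:
usage() {
    echo "Usage: $0 -e <environment> [-v <version>] [-d]" >&2
    exit 2
}

VERSION="latest"
DRY_RUN=false
while getopts "e:v:d" opt; do
    case "$opt" in
        e) ENVIRONMENT="$OPTARG" ;;
        v) VERSION="$OPTARG" ;;
        d) DRY_RUN=true ;;
        *) usage ;;
    esac
done
[[ -n "${ENVIRONMENT:-}" ]] || usage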
Process positional parameters systematically. Assign meaningful names to positional arguments and validate their presence:
APP_NAME="${1:?Application name required}"
ENVIRONMENT="${2:?Environment (dev/staging/prod) required}"
VERSION="${3:-latest}"
Interactive prompts enhance script usability during manual operations. Use read with appropriate options for secure input handling:
read -p "Enter deployment environment: " environment
read -s -p "Enter database password: " db_password
echo # Add newline after hidden input
Configuration file parsing expands script capabilities beyond command-line arguments. Support multiple configuration formats like JSON, YAML, or simple key-value pairs. Parse JSON configurations using jq for complex infrastructure automation bash scripts:
parse_config() {
local config_file="$1"
DB_HOST=$(jq -r '.database.host' "$config_file")
API_KEY=$(jq -r '.api.key' "$config_file")
REPLICAS=$(jq -r '.deployment.replicas' "$config_file")
}
Optimize Script Performance with Efficient Looping Techniques
Loop optimization significantly impacts script performance, especially when processing large datasets or managing multiple servers. Choose the loop construct that fits your use case: for loops work well with known lists, whereas while loops excel at processing streaming data or file contents, as in the sketch below.
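For example, a while read loop processes a host list line by line without loading the whole file into memory (hosts.txt is an assumed input file):
# while read suits streaming input; each line is one host to check
while IFS= read -r host; do
    if ping -c 1 -W 2 "$host" > /dev/null 2>&1; then
        echo "$host is reachable"
    else
        echo "$host is unreachable"
    fi
done < hosts.txt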
Minimize subprocess creation within loops. Instead of calling external commands repeatedly, batch operations when possible:
# Inefficient: Creates multiple processes
for server in "${SERVERS[@]}"; do
ssh "$server" "systemctl status nginx"
done
# Efficient: Single command with parallel execution
parallel-ssh -h server_list.txt "systemctl status nginx"
Array processing techniques reduce loop complexity. Use parameter expansion and array slicing for efficient data manipulation:
# Process array elements efficiently
servers=("web01" "web02" "web03" "db01")
web_servers=("${servers[@]:0:3}") # First 3 elements
db_servers=("${servers[@]:3}") # Remaining elements
Background processing and job control enable parallel execution. Launch multiple tasks simultaneously and wait for completion:
deploy_to_servers() {
for server in "${SERVERS[@]}"; do
deploy_to_single_server "$server" &
pids+=($!)
done
# Wait for all deployments to complete
for pid in "${pids[@]}"; do
wait "$pid" || echo "Deployment failed on PID $pid"
done
}
Built-in string processing reduces external command dependencies. Use parameter expansion for common string operations instead of sed or awk:
filename="/path/to/config.yaml"
basename="${filename##*/}" # config.yaml
extension="${filename##*.}" # yaml
path="${filename%/*}" # /path/to
File System Operations and Log Management Automation
Automate Directory Structure Creation and Cleanup Tasks
Creating standardized directory structures across development, staging, and production environments becomes effortless with bash scripting for devops. A well-crafted script can establish consistent folder hierarchies, set proper permissions, and handle cleanup operations automatically.
#!/bin/bash
# Project structure automation script
create_project_structure() {
local project_name=$1
local base_dir="/opt/projects"
mkdir -p "$base_dir/$project_name"/{src,tests,docs,config,logs,tmp}
chmod 755 "$base_dir/$project_name"
chmod 644 "$base_dir/$project_name"/config/*
echo "Project structure created for $project_name"
}
Cleanup operations prevent disk space issues by removing temporary files, old builds, and outdated artifacts. Schedule these scripts with cron jobs to maintain system hygiene automatically.
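A minimal cleanup sketch paired with a cron entry could look like this; the paths and retention windows are assumptions to adapt to your environment:
#!/bin/bash
# Workspace cleanup: remove temporary files and old logs for a project
cleanup_workspace() {
    local project_dir="/opt/projects/$1"
    find "$project_dir/tmp" -type f -mtime +2 -delete
    find "$project_dir/logs" -name "*.log" -mtime +14 -delete
    echo "Cleanup completed for $1"
}
cleanup_workspace "myapp"
# Example crontab entry to run the cleanup nightly at 02:30:
# 30 2 * * * /opt/scripts/cleanup_workspace.sh >> /var/log/cleanup.log 2>&1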
Implement Intelligent Log Rotation and Archival Systems
Log management automation prevents systems from running out of disk space while preserving important historical data. Smart rotation scripts consider file size, age, and system load before making decisions.
#!/bin/bash
# Intelligent log rotation script
rotate_logs() {
local log_dir="/var/log/applications"
local max_size="100M"
local retention_days=30
find "$log_dir" -name "*.log" -size +$max_size -exec gzip {} \;
find "$log_dir" -name "*.gz" -mtime +$retention_days -delete
# Send logs to remote storage before deletion
rsync -av "$log_dir"/*.gz remote-server:/backup/logs/
}
This approach ensures critical logs are archived safely while keeping local storage optimized for current operations.
Build File Backup and Synchronization Scripts
Robust backup scripts form the backbone of disaster recovery strategies. These devops automation scripts handle incremental backups, verify data integrity, and manage multiple backup destinations.
#!/bin/bash
# Automated backup with verification
backup_critical_data() {
local source_dir="/opt/data"
local backup_dir="/backup/$(date +%Y%m%d)"
rsync -av --delete "$source_dir/" "$backup_dir/"
# Verify backup integrity
if diff -r "$source_dir" "$backup_dir" > /dev/null; then
echo "Backup verified successfully"
# Clean up old backups
find /backup -mindepth 1 -maxdepth 1 -type d -mtime +7 -exec rm -rf {} \;
else
echo "Backup verification failed!" >&2
exit 1
fi
}
Create Automated File Permission and Ownership Management
Security compliance requires consistent file permissions across environments. Automated scripts ensure configurations remain secure while allowing proper application access.
#!/bin/bash
# Permission management for web applications
set_web_permissions() {
local web_root="/var/www/html"
# Set directory permissions
find "$web_root" -type d -exec chmod 755 {} \;
# Set file permissions
find "$web_root" -type f -exec chmod 644 {} \;
# Special permissions for executable scripts
find "$web_root" -name "*.sh" -exec chmod 755 {} \;
# Set ownership
chown -R www-data:www-data "$web_root"
}
Develop Log Analysis and Pattern Matching Tools
Log analysis automation helps identify issues before they become critical problems. Pattern matching scripts can detect anomalies, security threats, and performance bottlenecks in real-time.
#!/bin/bash
# Log analysis for error detection
analyze_application_logs() {
local log_file="/var/log/app/application.log"
local alert_threshold=10
# Count ERROR occurrences in the most recent entries (last 1000 lines as a rough proxy for the past hour)
error_count=$(tail -n 1000 "$log_file" | grep -c "ERROR")
if [ "$error_count" -gt "$alert_threshold" ]; then
# Extract recent errors for analysis
grep "ERROR" "$log_file" | tail -n 20 > /tmp/recent_errors.log
# Send alert with error summary
mail -s "High error rate detected" admin@company.com < /tmp/recent_errors.log
fi
}
These bash scripts for system administration create a comprehensive file management ecosystem that handles routine operations automatically, allowing DevOps teams to focus on strategic initiatives rather than manual maintenance tasks.
System Monitoring and Health Check Scripts
Monitor CPU, Memory, and Disk Usage with Alert Mechanisms
Tracking system resources becomes critical when managing production environments. System monitoring scripts help DevOps teams catch resource bottlenecks before they impact users. A comprehensive resource monitoring script can check CPU load, memory consumption, and disk space usage while sending alerts when thresholds are exceeded.
#!/bin/bash
# Resource monitoring with alerts
CPU_THRESHOLD=80
MEMORY_THRESHOLD=85
DISK_THRESHOLD=90
EMAIL_RECIPIENT="admin@company.com"
# Check CPU usage
cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | sed 's/%us,//')
cpu_int=${cpu_usage%.*}
# Check memory usage
memory_usage=$(free | grep Mem | awk '{printf("%.0f", $3/$2 * 100.0)}')
# Check disk usage
disk_usage=$(df -h / | awk 'NR==2{printf("%d", $5)}')
# Alert function
send_alert() {
echo "$1" | mail -s "System Alert: $HOSTNAME" $EMAIL_RECIPIENT
logger "SYSTEM_ALERT: $1"
}
# Check thresholds and send alerts
if [ $cpu_int -gt $CPU_THRESHOLD ]; then
send_alert "CPU usage is at ${cpu_usage}%, exceeding threshold of ${CPU_THRESHOLD}%"
fi
if [ $memory_usage -gt $MEMORY_THRESHOLD ]; then
send_alert "Memory usage is at ${memory_usage}%, exceeding threshold of ${MEMORY_THRESHOLD}%"
fi
if [ $disk_usage -gt $DISK_THRESHOLD ]; then
send_alert "Disk usage is at ${disk_usage}%, exceeding threshold of ${DISK_THRESHOLD}%"
fi
This bash scripting for devops example provides real-time monitoring with automated alerting through email and system logs. The script can be scheduled via cron to run at regular intervals, ensuring continuous oversight of critical system resources.
Automate Service Status Checking and Recovery Processes
Service availability monitoring requires proactive detection and automatic recovery mechanisms. DevOps automation scripts for service management can detect failed services, attempt restarts, and escalate issues when automatic recovery fails.
#!/bin/bash
SERVICES=("nginx" "mysql" "redis-server" "postgresql")
MAX_RESTART_ATTEMPTS=3
LOG_FILE="/var/log/service_monitor.log"
check_and_recover_service() {
local service=$1
local restart_count=0
while [ $restart_count -lt $MAX_RESTART_ATTEMPTS ]; do
if systemctl is-active --quiet $service; then
echo "$(date): $service is running" >> $LOG_FILE
return 0
else
echo "$(date): $service is down, attempting restart #$((restart_count + 1))" >> $LOG_FILE
systemctl start $service
sleep 10
restart_count=$((restart_count + 1))
fi
done
# Service failed to recover
echo "$(date): CRITICAL - $service failed to recover after $MAX_RESTART_ATTEMPTS attempts" >> $LOG_FILE
send_critical_alert "Service $service is down and failed automatic recovery"
return 1
}
send_critical_alert() {
echo "$1" | mail -s "CRITICAL: Service Failure on $HOSTNAME" admin@company.com
curl -X POST -H 'Content-type: application/json' \
--data "{\"text\":\"🚨 $1\"}" \
$SLACK_WEBHOOK_URL
}
# Monitor all services
for service in "${SERVICES[@]}"; do
check_and_recover_service $service
done
This script provides intelligent service recovery with escalation paths. When a service fails, it retries the restart with a short pause between attempts and sends critical alerts through multiple channels once all recovery attempts fail.
Implement Network Connectivity and Port Monitoring
Network connectivity monitoring ensures that critical services remain accessible. Automation scripts for linux environments should verify both internal service connectivity and external dependencies to maintain service reliability.
#!/bin/bash
# Network monitoring configuration
INTERNAL_SERVICES=(
"localhost:3306:MySQL"
"localhost:6379:Redis"
"localhost:9200:Elasticsearch"
)
EXTERNAL_DEPENDENCIES=(
"api.external-service.com:443:External API"
"database.cloud-provider.com:5432:Cloud Database"
)
WEB_ENDPOINTS=(
"https://myapp.com/health"
"https://api.myapp.com/status"
)
# Port connectivity check
check_port() {
local host=$1
local port=$2
local service_name=$3
if timeout 5 bash -c "</dev/tcp/$host/$port"; then
echo "â
$service_name ($host:$port) is reachable"
return 0
else
echo "â $service_name ($host:$port) is unreachable"
send_connectivity_alert "$service_name at $host:$port is unreachable"
return 1
fi
}
# HTTP endpoint check
check_http_endpoint() {
local url=$1
local response=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 $url)
if [ "$response" -eq 200 ]; then
echo "â
$url returned HTTP $response"
else
echo "â $url returned HTTP $response"
send_connectivity_alert "HTTP endpoint $url returned status $response"
fi
}
# Check all internal services
echo "Checking internal services..."
for service in "${INTERNAL_SERVICES[@]}"; do
IFS=':' read -r host port name <<< "$service"
check_port $host $port $name
done
# Check external dependencies
echo "Checking external dependencies..."
for service in "${EXTERNAL_DEPENDENCIES[@]}"; do
IFS=':' read -r host port name <<< "$service"
check_port $host $port $name
done
# Check HTTP endpoints
echo "Checking HTTP endpoints..."
for endpoint in "${WEB_ENDPOINTS[@]}"; do
check_http_endpoint $endpoint
done
This comprehensive network monitoring solution checks multiple connectivity layers, from low-level port accessibility to application-level health endpoints.
Create Database Connection and Performance Verification Scripts
Database monitoring requires specialized checks that verify not only connectivity but also performance characteristics. Bash scripts for system administration should include database health verification to catch performance degradation early.
#!/bin/bash
# Database monitoring script
MYSQL_HOST="localhost"
MYSQL_USER="monitor"
MYSQL_PASS="monitor_password"
MYSQL_DB="production"
POSTGRES_HOST="localhost"
POSTGRES_USER="monitor"
POSTGRES_DB="production"
POSTGRES_PASS="monitor_password"
# MySQL monitoring
check_mysql_health() {
local connection_test=$(mysql -h$MYSQL_HOST -u$MYSQL_USER -p$MYSQL_PASS -e "SELECT 1" 2>/dev/null)
if [ $? -eq 0 ]; then
echo "â
MySQL connection successful"
# Check slow queries
local slow_queries=$(mysql -h$MYSQL_HOST -u$MYSQL_USER -p$MYSQL_PASS -e "SHOW STATUS LIKE 'Slow_queries'" | awk 'NR==2{print $2}')
# Check connection count
local connections=$(mysql -h$MYSQL_HOST -u$MYSQL_USER -p$MYSQL_PASS -e "SHOW STATUS LIKE 'Threads_connected'" | awk 'NR==2{print $2}')
# Performance check: time a simple benchmark query
local start_ms=$(date +%s%3N)
mysql -h$MYSQL_HOST -u$MYSQL_USER -p$MYSQL_PASS -e "SELECT BENCHMARK(1000000, MD5('test'))" > /dev/null 2>&1
local query_time=$(( $(date +%s%3N) - start_ms ))
echo "MySQL Stats: Slow Queries: $slow_queries, Active Connections: $connections, Benchmark: ${query_time}ms"
if [ $connections -gt 100 ]; then
send_db_alert "MySQL has $connections active connections (threshold: 100)"
fi
else
echo "â MySQL connection failed"
send_db_alert "MySQL database connection failed"
fi
}
# PostgreSQL monitoring
check_postgres_health() {
local pg_status=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -U $POSTGRES_USER -d $POSTGRES_DB -c "SELECT version();" 2>/dev/null)
if [ $? -eq 0 ]; then
echo "â
PostgreSQL connection successful"
# Check active connections
local active_connections=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -U $POSTGRES_USER -d $POSTGRES_DB -t -c "SELECT count(*) FROM pg_stat_activity WHERE state = 'active';" 2>/dev/null | tr -d ' ')
# Check database size
local db_size=$(PGPASSWORD=$POSTGRES_PASS psql -h $POSTGRES_HOST -U $POSTGRES_USER -d $POSTGRES_DB -t -c "SELECT pg_size_pretty(pg_database_size('$POSTGRES_DB'));" 2>/dev/null | tr -d ' ')
echo "PostgreSQL Stats: Active Connections: $active_connections, Database Size: $db_size"
else
echo "â PostgreSQL connection failed"
send_db_alert "PostgreSQL database connection failed"
fi
}
send_db_alert() {
echo "$(date): DATABASE_ALERT: $1" >> /var/log/db_monitor.log
echo "$1" | mail -s "Database Alert: $HOSTNAME" admin@company.com
}
# Run all database checks
check_mysql_health
check_postgres_health
This database monitoring script provides comprehensive health checks including connection verification, performance metrics, and automated alerting for critical database issues. The script can be integrated into larger monitoring frameworks or scheduled for regular execution via cron jobs.
Application Deployment and Configuration Management
Streamline Application Installation and Update Processes
Modern DevOps teams need reliable automation scripts for linux that handle application deployments without manual intervention. Bash scripting for devops excels at creating robust installation workflows that check dependencies, validate configurations, and handle rollbacks when things go wrong.
A comprehensive application deployment script starts with pre-deployment validation. Check if required services are running, verify disk space, and confirm network connectivity before attempting any installation. Here’s a practical approach:
#!/bin/bash
validate_environment() {
if ! systemctl is-active --quiet docker; then
echo "Docker service not running"
exit 1
fi
REQUIRED_SPACE=1000000 # 1GB in KB
AVAILABLE_SPACE=$(df /opt --output=avail | tail -1)
if [ "$AVAILABLE_SPACE" -lt "$REQUIRED_SPACE" ]; then
echo "Insufficient disk space"
exit 1
fi
}
Update processes require careful orchestration to maintain service availability. Build your scripts to download new versions to temporary locations first, perform health checks, then swap them into production. Always maintain previous versions for quick rollbacks.
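One possible shape for that staged update, with the release URL, paths, service name, and health endpoint all as placeholders:
update_application() {
    local version="$1"
    local release_dir="/opt/app/releases/$version"
    local current_link="/opt/app/current"
    # Download and unpack the new version outside the live path first
    curl -fsSL "https://releases.example.com/app-${version}.tar.gz" -o /tmp/app.tar.gz
    mkdir -p "$release_dir"
    tar -xzf /tmp/app.tar.gz -C "$release_dir"
    # Remember the previous release, then swap the symlink and restart
    local previous
    previous=$(readlink -f "$current_link" 2>/dev/null || true)
    ln -sfn "$release_dir" "$current_link"
    systemctl restart myapp
    # Roll back to the previous release if the health check fails
    if ! curl -fs http://localhost:8080/health > /dev/null; then
        [ -n "$previous" ] && ln -sfn "$previous" "$current_link" && systemctl restart myapp
        echo "Update failed, rolled back to previous release" >&2
        return 1
    fi
    echo "Version $version is live"
}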
DevOps automation scripts should include logging mechanisms that capture deployment progress and any errors. This makes troubleshooting failed deployments much easier and provides audit trails for compliance requirements.
Automate Environment Configuration and Variable Management
Environment configuration represents one of the most error-prone aspects of application deployment. DevOps bash examples demonstrate how shell scripts can standardize this process across different environments while maintaining security best practices.
Create configuration management scripts that source environment-specific variables from secure locations rather than hardcoding values. This approach supports multiple deployment targets while keeping sensitive data protected:
#!/bin/bash
ENVIRONMENT=${1:-development}
CONFIG_DIR="/secure/configs"
if [[ -f "$CONFIG_DIR/$ENVIRONMENT.env" ]]; then
source "$CONFIG_DIR/$ENVIRONMENT.env"
else
echo "Configuration file not found for $ENVIRONMENT"
exit 1
fi
deploy_with_config() {
# Use sourced variables for deployment
docker run -e DATABASE_URL="$DB_CONNECTION" \
-e API_KEY="$API_SECRET" \
myapp:latest
}
Configuration management bash scripts should validate all required environment variables before proceeding with deployments. Missing or malformed configuration values cause deployment failures that are often difficult to diagnose after the fact.
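A small helper along these lines can fail fast when something is missing (the variable names match the earlier example and are otherwise assumptions):
require_vars() {
    local missing=0
    for var in "$@"; do
        # Indirect expansion checks whether each named variable has a value
        if [[ -z "${!var:-}" ]]; then
            echo "Missing required variable: $var" >&2
            missing=1
        fi
    done
    return $missing
}
require_vars DB_CONNECTION API_SECRET || exit 1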
Template-based configuration works well for applications requiring complex setup files. Scripts can generate configuration files from templates while substituting environment-specific values, ensuring consistency across deployments while maintaining flexibility.
Implement Blue-Green Deployment Automation Scripts
Blue-green deployments eliminate downtime by maintaining two identical production environments and switching traffic between them. Bash scripting best practices for this pattern include comprehensive health checking and automated traffic routing.
Start by creating scripts that manage environment states and track which environment currently serves production traffic. Use simple flag files or configuration entries to maintain this state information:
#!/bin/bash
DEPLOYMENT_STATE="/opt/deployment/current_env"
BLUE_PORT=8080
GREEN_PORT=8081
get_current_env() {
if [[ -f "$DEPLOYMENT_STATE" ]]; then
cat "$DEPLOYMENT_STATE"
else
echo "blue" # Default to blue
fi
}
switch_environment() {
CURRENT=$(get_current_env)
if [[ "$CURRENT" == "blue" ]]; then
TARGET_ENV="green"
TARGET_PORT=$GREEN_PORT
else
TARGET_ENV="blue"
TARGET_PORT=$BLUE_PORT
fi
# Update load balancer configuration
update_nginx_upstream "$TARGET_PORT"
echo "$TARGET_ENV" > "$DEPLOYMENT_STATE"
}
Health check validation becomes critical in blue-green deployments. Deploy to the inactive environment, run comprehensive tests, then switch traffic only after confirming the new version works correctly. Build retry logic and automatic rollback capabilities into your devops shell scripting workflows.
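Building on the functions above, and assuming each environment exposes a /health endpoint on its port, a gated switch might look like this:
verify_then_switch() {
    local current target_port
    current=$(get_current_env)
    if [[ "$current" == "blue" ]]; then
        target_port=$GREEN_PORT
    else
        target_port=$BLUE_PORT
    fi
    local attempts=0
    # Poll the idle environment until it reports healthy, or give up
    until curl -fs "http://localhost:$target_port/health" > /dev/null; do
        attempts=$((attempts + 1))
        if [ "$attempts" -ge 10 ]; then
            echo "Idle environment failed health checks, keeping $current live" >&2
            return 1
        fi
        sleep 5
    done
    switch_environment
}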
Infrastructure automation bash scripts should integrate with your load balancer or reverse proxy to handle traffic switching. Whether using nginx, HAProxy, or cloud-based load balancers, automate the configuration updates that redirect user requests to the newly deployed environment.
Monitor both environments during the switching process. Keep the previous version running briefly after traffic switches to enable instant rollbacks if issues emerge. This approach provides the safety net that makes blue-green deployments so effective for high-availability applications.
CI/CD Pipeline Integration and Build Automation
Create Automated Code Testing and Quality Assurance Scripts
Setting up automated testing through bash scripting for devops teams makes code quality checks happen automatically without manual intervention. These scripts run multiple test types and catch issues before they reach production.
#!/bin/bash
# Comprehensive testing automation script
set -euo pipefail
PROJECT_DIR="/path/to/project"
TEST_RESULTS_DIR="test-results"
# Function to run unit tests
run_unit_tests() {
echo "Running unit tests..."
cd "$PROJECT_DIR"
npm test || exit 1
echo "Unit tests passed!"
}
# Function to run linting
run_code_quality_checks() {
echo "Running code quality checks..."
eslint src/ --format junit --output-file "$TEST_RESULTS_DIR/eslint-results.xml"
# SonarQube analysis
sonar-scanner \
-Dsonar.projectKey=myproject \
-Dsonar.sources=src/ \
-Dsonar.host.url=$SONAR_HOST_URL \
-Dsonar.login=$SONAR_TOKEN
}
# Security vulnerability scanning
run_security_scan() {
echo "Running security vulnerability scan..."
npm audit --audit-level moderate || {
echo "Security vulnerabilities found!"
npm audit fix
}
}
# Create test results directory
mkdir -p "$TEST_RESULTS_DIR"
# Execute all tests
run_unit_tests
run_code_quality_checks
run_security_scan
echo "All quality assurance checks completed successfully!"
This devops automation script handles different testing phases and generates reports that integrate seamlessly with CI/CD tools. The script stops execution when tests fail, preventing broken code from moving forward.
Implement Docker Container Management and Registry Operations
Docker container management becomes streamlined with bash scripts that handle building, tagging, and pushing images to registries. These scripts automate the entire container lifecycle for CI/CD pipeline automation.
#!/bin/bash
# Docker container management automation
set -euo pipefail
REGISTRY_URL="your-registry.com"
IMAGE_NAME="myapp"
VERSION_TAG="${1:-latest}"
DOCKERFILE_PATH="./Dockerfile"
# Build Docker image
build_docker_image() {
echo "Building Docker image: $IMAGE_NAME:$VERSION_TAG"
docker build -t "$IMAGE_NAME:$VERSION_TAG" -f "$DOCKERFILE_PATH" .
# Tag for registry
docker tag "$IMAGE_NAME:$VERSION_TAG" "$REGISTRY_URL/$IMAGE_NAME:$VERSION_TAG"
docker tag "$IMAGE_NAME:$VERSION_TAG" "$REGISTRY_URL/$IMAGE_NAME:latest"
}
# Security scan for containers
scan_container_vulnerabilities() {
echo "Scanning container for vulnerabilities..."
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
aquasec/trivy image "$IMAGE_NAME:$VERSION_TAG"
}
# Push to registry
push_to_registry() {
echo "Logging into registry..."
echo "$REGISTRY_PASSWORD" | docker login "$REGISTRY_URL" -u "$REGISTRY_USER" --password-stdin
echo "Pushing images to registry..."
docker push "$REGISTRY_URL/$IMAGE_NAME:$VERSION_TAG"
docker push "$REGISTRY_URL/$IMAGE_NAME:latest"
}
# Cleanup old images
cleanup_old_images() {
echo "Cleaning up old Docker images..."
docker image prune -f
docker system prune -f
}
# Execute container operations
build_docker_image
scan_container_vulnerabilities
push_to_registry
cleanup_old_images
echo "Docker operations completed successfully!"
Container registry operations need authentication and proper error handling. This script manages login credentials securely and cleans up local storage after successful operations.
Automate Git Repository Management and Branch Operations
Git workflow automation reduces manual repository management tasks and enforces consistent branching strategies. These devops bash examples handle common Git operations that happen during automated deployments.
#!/bin/bash
# Git repository automation script
set -euo pipefail
REPO_URL="https://github.com/yourorg/yourrepo.git"
MAIN_BRANCH="main"
FEATURE_BRANCH="feature/automated-deployment"
# Clone or update repository
setup_repository() {
if [ -d ".git" ]; then
echo "Updating existing repository..."
git fetch origin
git reset --hard origin/$MAIN_BRANCH
else
echo "Cloning repository..."
git clone "$REPO_URL" .
fi
}
# Create and switch to feature branch
create_feature_branch() {
echo "Creating feature branch: $FEATURE_BRANCH"
git checkout -b "$FEATURE_BRANCH" || git checkout "$FEATURE_BRANCH"
git pull origin "$FEATURE_BRANCH" || echo "New branch created"
}
# Commit automated changes
commit_automated_changes() {
local commit_message="$1"
git add .
git diff --staged --quiet || {
echo "Committing changes: $commit_message"
git commit -m "$commit_message"
git push origin "$FEATURE_BRANCH"
}
}
# Create pull request
create_pull_request() {
echo "Creating pull request..."
gh pr create \
--title "Automated deployment updates" \
--body "Automated changes from CI/CD pipeline" \
--base "$MAIN_BRANCH" \
--head "$FEATURE_BRANCH" || echo "PR already exists"
}
# Tag release
tag_release() {
local version="$1"
git checkout "$MAIN_BRANCH"
git pull origin "$MAIN_BRANCH"
git tag -a "v$version" -m "Release version $version"
git push origin "v$version"
}
# Execute Git operations
setup_repository
create_feature_branch
commit_automated_changes "Update deployment configuration"
create_pull_request
Branch protection and automated merging strategies work together with these scripts to maintain code quality while speeding up deployment cycles.
Build Notification Systems for Pipeline Status Updates
Pipeline notifications keep teams informed about build status and deployment progress. These automation scripts for Linux systems integrate with various communication platforms to provide real-time updates.
#!/bin/bash
# Pipeline notification system
set -euo pipefail
SLACK_WEBHOOK_URL="$SLACK_WEBHOOK"
EMAIL_RECIPIENTS="dev-team@company.com"
PROJECT_NAME="MyApplication"
BUILD_ID="${BUILD_NUMBER:-unknown}"
BUILD_STATUS="${1:-unknown}"
# Send Slack notification
send_slack_notification() {
local status="$1"
local message="$2"
local color=""
case "$status" in
"success") color="good" ;;
"failure") color="danger" ;;
"warning") color="warning" ;;
*) color="#439FE0" ;;
esac
curl -X POST -H 'Content-type: application/json' \
--data "{
\"attachments\": [{
\"color\": \"$color\",
\"fields\": [{
\"title\": \"$PROJECT_NAME - Build #$BUILD_ID\",
\"value\": \"$message\",
\"short\": false
}]
}]
}" "$SLACK_WEBHOOK_URL"
}
# Send email notification
send_email_notification() {
local status="$1"
local message="$2"
local subject="[$PROJECT_NAME] Build #$BUILD_ID - $status"
echo "$message" | mail -s "$subject" "$EMAIL_RECIPIENTS"
}
# Generate deployment metrics
generate_metrics() {
local status="$1"
local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
# Send metrics to monitoring system
curl -X POST \
-H "Content-Type: application/json" \
-d "{
\"metric\": \"deployment.status\",
\"value\": \"$status\",
\"timestamp\": \"$timestamp\",
\"tags\": {
\"project\": \"$PROJECT_NAME\",
\"build_id\": \"$BUILD_ID\"
}
}" "$METRICS_ENDPOINT"
}
# Main notification handler
handle_pipeline_notification() {
local status="$BUILD_STATUS"
local message=""
case "$status" in
"success")
message="â
Deployment completed successfully!"
;;
"failure")
message="â Deployment failed. Check logs for details."
;;
"started")
message="đ Deployment started..."
;;
*)
message="âšī¸ Pipeline status: $status"
;;
esac
send_slack_notification "$status" "$message"
send_email_notification "$status" "$message"
generate_metrics "$status"
}
# Execute notification
handle_pipeline_notification
Teams can customize notification triggers and integrate with different communication tools based on their workflow requirements. These scripts provide flexibility for different notification scenarios while maintaining consistent messaging across platforms.
Security and Compliance Automation Scripts
Implement Automated Security Scanning and Vulnerability Assessment
Automated security scanning becomes essential when managing multiple servers and applications across your infrastructure. Bash scripting for devops teams can streamline vulnerability assessments and security checks without manual intervention.
#!/bin/bash
# Comprehensive security scanner script
SCAN_DIR="/var/log/security-scans"
DATE=$(date +%Y%m%d_%H%M%S)
REPORT_FILE="$SCAN_DIR/security_report_$DATE.txt"
# Create scan directory if it doesn't exist
mkdir -p $SCAN_DIR
# Check for outdated packages
echo "=== Outdated Package Check ===" > $REPORT_FILE
apt list --upgradable 2>/dev/null | grep -v "Listing..." >> $REPORT_FILE
# Scan for open ports
echo -e "\n=== Open Port Scan ===" >> $REPORT_FILE
netstat -tuln | grep LISTEN >> $REPORT_FILE
# Check file permissions for sensitive files
echo -e "\n=== Sensitive File Permissions ===" >> $REPORT_FILE
find /etc \( -name "passwd" -o -name "shadow" -o -name "hosts" \) -exec ls -la {} \; >> $REPORT_FILE
# Look for suspicious processes
echo -e "\n=== Process Analysis ===" >> $REPORT_FILE
ps aux | grep -E "(nc|netcat|nmap)" | grep -v grep >> $REPORT_FILE
# Email report if critical issues found
if grep -q "CRITICAL" $REPORT_FILE; then
mail -s "Critical Security Issues Detected" admin@company.com < $REPORT_FILE
fi
This script runs comprehensive security checks including package vulnerabilities, network exposure analysis, and file permission auditing. Schedule it via cron so it runs regularly alongside your other devops automation scripts.
Create User Access Management and Permission Auditing Tools
User access management requires constant monitoring to maintain security compliance. System administration teams need automated tools to track user permissions, detect unauthorized access, and generate audit reports.
#!/bin/bash
# User access audit and management script
AUDIT_LOG="/var/log/user_audit_$(date +%Y%m%d).log"
ALERT_THRESHOLD=90 # Days since last password change
# Function to check user account status
check_user_accounts() {
echo "=== User Account Audit Started ===" >> $AUDIT_LOG
while IFS=: read -r username _ uid gid _ homedir shell; do
# Skip system accounts
if [ $uid -ge 1000 ] && [ $uid -le 60000 ]; then
echo "Checking user: $username" >> $AUDIT_LOG
# Check last login
last_login=$(last -n 1 $username | head -1 | awk '{print $4, $5, $6}')
echo " Last login: $last_login" >> $AUDIT_LOG
# Check password age
pass_age=$(passwd -S $username 2>/dev/null | awk '{print $3}')
echo " Password last changed: $pass_age" >> $AUDIT_LOG
# Check sudo access
if sudo -l -U $username 2>/dev/null | grep -q "may run"; then
echo " ALERT: $username has sudo privileges" >> $AUDIT_LOG
fi
# Check group memberships
groups_list=$(groups $username 2>/dev/null | cut -d: -f2)
echo " Groups: $groups_list" >> $AUDIT_LOG
fi
done < /etc/passwd
}
# Function to detect inactive users
detect_inactive_users() {
echo -e "\n=== Inactive User Detection ===" >> $AUDIT_LOG
for user in $(cut -d: -f1 /etc/passwd); do
uid=$(id -u "$user" 2>/dev/null)
if [ -n "$uid" ] && [ "$uid" -ge 1000 ]; then
last_login_epoch=$(last -n 1 $user --time-format=iso | head -1 | awk '{print $4}')
if [ ! -z "$last_login_epoch" ]; then
days_inactive=$(( ($(date +%s) - $(date -d "$last_login_epoch" +%s)) / 86400 ))
if [ $days_inactive -gt $ALERT_THRESHOLD ]; then
echo "INACTIVE: $user (${days_inactive} days)" >> $AUDIT_LOG
fi
fi
fi
done
}
# Execute audit functions
check_user_accounts
detect_inactive_users
# Generate summary report
total_users=$(grep -c "Checking user:" $AUDIT_LOG)
sudo_users=$(grep -c "ALERT.*sudo" $AUDIT_LOG)
inactive_users=$(grep -c "INACTIVE:" $AUDIT_LOG)
echo -e "\n=== Audit Summary ===" >> $AUDIT_LOG
echo "Total users audited: $total_users" >> $AUDIT_LOG
echo "Users with sudo access: $sudo_users" >> $AUDIT_LOG
echo "Inactive users detected: $inactive_users" >> $AUDIT_LOG
This comprehensive auditing tool tracks user activities, permission changes, and identifies potential security risks. The script integrates seamlessly with existing devops bash workflows and compliance requirements.
Automate SSL Certificate Management and Renewal Processes
SSL certificate management becomes complex when dealing with multiple domains and services. Infrastructure automation bash scripts can handle certificate monitoring, renewal requests, and deployment across your environment.
#!/bin/bash
# SSL Certificate automation and monitoring script
CERT_DIR="/etc/ssl/certificates"
LOG_FILE="/var/log/ssl_management.log"
RENEWAL_THRESHOLD=30 # Days before expiration to renew
# Function to check certificate expiration
check_cert_expiration() {
local cert_file=$1
local domain=$2
if [ -f "$cert_file" ]; then
exp_date=$(openssl x509 -enddate -noout -in "$cert_file" | cut -d= -f2)
exp_epoch=$(date -d "$exp_date" +%s)
current_epoch=$(date +%s)
days_until_exp=$(( ($exp_epoch - $current_epoch) / 86400 ))
echo "$(date): $domain expires in $days_until_exp days" >> $LOG_FILE
if [ $days_until_exp -le $RENEWAL_THRESHOLD ]; then
echo "WARNING: Certificate for $domain expires in $days_until_exp days" >> $LOG_FILE
return 1
fi
else
echo "ERROR: Certificate file $cert_file not found" >> $LOG_FILE
return 2
fi
return 0
}
# Function to renew certificate using Let's Encrypt
renew_letsencrypt_cert() {
local domain=$1
local webroot=$2
echo "$(date): Starting renewal for $domain" >> $LOG_FILE
if certbot renew --cert-name "$domain" --webroot -w "$webroot" --quiet; then
echo "$(date): Successfully renewed certificate for $domain" >> $LOG_FILE
# Restart nginx to load new certificate
systemctl reload nginx
echo "$(date): Nginx reloaded with new certificate" >> $LOG_FILE
# Send success notification
echo "SSL certificate for $domain has been renewed successfully" | \
mail -s "SSL Renewal Success" admin@company.com
return 0
else
echo "$(date): Failed to renew certificate for $domain" >> $LOG_FILE
echo "CRITICAL: SSL renewal failed for $domain" | \
mail -s "SSL Renewal FAILED" admin@company.com
return 1
fi
}
# Function to validate certificate installation
validate_certificate() {
local domain=$1
local port=${2:-443}
echo "$(date): Validating certificate for $domain:$port" >> $LOG_FILE
# Check if certificate is properly installed
cert_info=$(echo | openssl s_client -servername "$domain" -connect "$domain:$port" 2>/dev/null | \
openssl x509 -noout -dates -subject)
if [ $? -eq 0 ]; then
echo "$(date): Certificate validation successful for $domain" >> $LOG_FILE
echo "$cert_info" >> $LOG_FILE
return 0
else
echo "$(date): Certificate validation failed for $domain" >> $LOG_FILE
return 1
fi
}
# Main execution block
echo "$(date): Starting SSL certificate management process" >> $LOG_FILE
# Define domains and their certificate paths
declare -A certificates=(
["example.com"]="/etc/ssl/certificates/example.com.crt"
["api.example.com"]="/etc/ssl/certificates/api.example.com.crt"
["admin.example.com"]="/etc/ssl/certificates/admin.example.com.crt"
)
# Check each certificate
for domain in "${!certificates[@]}"; do
cert_file="${certificates[$domain]}"
if ! check_cert_expiration "$cert_file" "$domain"; then
# Certificate needs renewal
if renew_letsencrypt_cert "$domain" "/var/www/$domain"; then
# Validate the new certificate
sleep 10 # Wait for nginx reload
validate_certificate "$domain"
fi
fi
done
# Generate daily summary
cert_count=${#certificates[@]}
expiring_soon=$(grep -c "WARNING:" $LOG_FILE)
renewal_failures=$(grep -c "FAILED" $LOG_FILE)
echo "$(date): SSL Management Summary - $cert_count certificates monitored, $expiring_soon expiring soon, $renewal_failures renewal failures" >> $LOG_FILE
This Linux automation solution provides complete SSL lifecycle management including monitoring, automatic renewal, and validation. The script integrates with popular certificate authorities and web servers, making it a strong fit for devops shell scripting environments that require robust certificate management.
Infrastructure Management and Cloud Operations
Automate Server Provisioning and Configuration Tasks
Server provisioning through bash scripting for devops transforms manual processes into efficient automation workflows. Your infrastructure automation bash scripts can handle everything from initial OS configuration to application deployment across multiple environments.
#!/bin/bash
# Server provisioning script
provision_server() {
local server_ip=$1
local server_role=$2
# Update system packages
ssh root@$server_ip "apt update && apt upgrade -y"
# Install essential packages based on role
case $server_role in
"web")
ssh root@$server_ip "apt install nginx php-fpm -y"
;;
"db")
ssh root@$server_ip "apt install mysql-server -y"
;;
"cache")
ssh root@$server_ip "apt install redis-server -y"
;;
esac
# Configure SSH keys and security
scp ~/.ssh/authorized_keys root@$server_ip:~/.ssh/
ssh root@$server_ip "ufw enable && ufw allow 22,80,443/tcp"
}
Your provisioning scripts should include configuration management for different server roles, automated security hardening, and environment-specific settings. This approach eliminates human error and ensures consistent server configurations across your infrastructure.
Implement Cloud Resource Management and Cost Optimization
Cloud resource management requires sophisticated bash scripts for cloud operations that monitor usage patterns and optimize costs automatically. These devops automation scripts help maintain budget control while ensuring optimal performance.
#!/bin/bash
# AWS cost optimization script
optimize_ec2_instances() {
# Find underutilized instances
aws ec2 describe-instances --query 'Reservations[].Instances[?State.Name==`running`].[InstanceId,InstanceType]' --output text | \
while read instance_id instance_type; do
# Get CPU utilization for last 7 days
avg_cpu=$(aws cloudwatch get-metric-statistics \
--namespace AWS/EC2 \
--metric-name CPUUtilization \
--dimensions Name=InstanceId,Value=$instance_id \
--start-time $(date -d '7 days ago' -u +%Y-%m-%dT%H:%M:%S) \
--end-time $(date -u +%Y-%m-%dT%H:%M:%S) \
--period 3600 \
--statistics Average \
--query 'Datapoints[].Average' \
--output text | awk '{sum+=$1; count++} END {print sum/count}')
# Recommend downsizing if CPU < 20%
if (( $(echo "$avg_cpu < 20" | bc -l) )); then
echo "Instance $instance_id ($instance_type) - Low utilization: ${avg_cpu}%"
# Automatically stop non-production instances (assumes instances carry an Environment tag; adjust to your tagging scheme)
env_tag=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$instance_id" "Name=key,Values=Environment" --query 'Tags[0].Value' --output text)
if [[ "$env_tag" == "dev" ]]; then
aws ec2 stop-instances --instance-ids $instance_id
echo "Stopped development instance $instance_id"
fi
fi
done
}
# Schedule unused EBS volume cleanup
cleanup_unused_volumes() {
aws ec2 describe-volumes --filters Name=status,Values=available --query 'Volumes[].VolumeId' --output text | \
xargs -n1 -I {} aws ec2 delete-volume --volume-id {}
}
Resource tagging automation helps track costs by department or project, while automated scaling policies adjust resources based on actual demand patterns.
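A small tagging sketch might look like this; the instance ID, tag keys, and values are placeholders:
tag_instance_for_billing() {
    local instance_id=$1
    local team=$2
    local project=$3
    # Attach cost-tracking tags to a single EC2 instance
    aws ec2 create-tags --resources "$instance_id" \
        --tags Key=Team,Value="$team" Key=Project,Value="$project" Key=CostCenter,Value="${team}-${project}"
}
tag_instance_for_billing "i-0123456789abcdef0" "platform" "checkout"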
Create Database Backup and Migration Automation Scripts
Database backup automation through bash scripts ensures data protection and seamless migration processes. Your system administration bash scripts should handle multiple database types and implement retention policies automatically.
#!/bin/bash
# Comprehensive database backup script
DB_BACKUP_DIR="/opt/backups/databases"
RETENTION_DAYS=30
backup_mysql_database() {
local db_name=$1
local backup_file="${DB_BACKUP_DIR}/mysql_${db_name}_$(date +%Y%m%d_%H%M%S).sql.gz"
# Create compressed backup
mysqldump --single-transaction --routines --triggers $db_name | gzip > $backup_file
# Verify backup integrity
if gunzip -t $backup_file; then
echo "MySQL backup successful: $backup_file"
# Upload to S3 for offsite storage
aws s3 cp $backup_file s3://your-backup-bucket/mysql/
else
echo "Backup verification failed for $db_name"
exit 1
fi
}
backup_postgresql_database() {
local db_name=$1
local backup_file="${DB_BACKUP_DIR}/postgres_${db_name}_$(date +%Y%m%d_%H%M%S).dump"
pg_dump -Fc $db_name > $backup_file
# Test restore capability
createdb test_restore_db
pg_restore -d test_restore_db $backup_file
dropdb test_restore_db
echo "PostgreSQL backup completed: $backup_file"
}
# Automated cleanup of old backups
cleanup_old_backups() {
find $DB_BACKUP_DIR -name "*.sql.gz" -mtime +$RETENTION_DAYS -delete
find $DB_BACKUP_DIR -name "*.dump" -mtime +$RETENTION_DAYS -delete
}
# Database migration script
migrate_database() {
local source_host=$1
local target_host=$2
local db_name=$3
# Create migration backup
mysqldump -h $source_host --single-transaction $db_name > /tmp/${db_name}_migration.sql
# Create target database
mysql -h $target_host -e "CREATE DATABASE IF NOT EXISTS $db_name"
# Import data with progress monitoring
pv /tmp/${db_name}_migration.sql | mysql -h $target_host $db_name
# Verify data integrity
source_count=$(mysql -h $source_host -N -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='$db_name'")
target_count=$(mysql -h $target_host -N -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='$db_name'")
if [ "$source_count" -eq "$target_count" ]; then
echo "Database migration successful: $db_name"
else
echo "Migration verification failed for $db_name"
fi
}
These scripts include automated testing of backup integrity and provide rollback capabilities for failed migrations.
Build Load Balancer and Traffic Management Solutions
Load balancer management through devops shell scripting automates traffic distribution and health monitoring across your application infrastructure. These automation scripts for linux handle dynamic backend registration and traffic routing decisions.
#!/bin/bash
# HAProxy configuration management script
HAPROXY_CONFIG="/etc/haproxy/haproxy.cfg"
BACKEND_SERVERS="/etc/haproxy/backends.list"
update_backend_servers() {
local service_name=$1
local new_servers=("${@:2}")
# Backup current configuration
cp $HAPROXY_CONFIG "${HAPROXY_CONFIG}.backup"
# Generate new backend configuration
{
echo "backend $service_name"
echo " balance roundrobin"
echo " option httpchk GET /health"
for server in "${new_servers[@]}"; do
server_name=$(echo $server | cut -d':' -f1)
server_port=$(echo $server | cut -d':' -f2)
echo " server $server_name $server:$server_port check inter 5s fall 3 rise 2"
done
} > /tmp/backend_${service_name}.cfg
# Update main configuration: delete the old backend block, then append the regenerated one
sed -i "/^backend $service_name$/,/^$/d" $HAPROXY_CONFIG
cat /tmp/backend_${service_name}.cfg >> $HAPROXY_CONFIG
# Validate configuration
haproxy -c -f $HAPROXY_CONFIG
if [ $? -eq 0 ]; then
systemctl reload haproxy
echo "Backend servers updated for $service_name"
else
# Restore backup on failure
mv "${HAPROXY_CONFIG}.backup" $HAPROXY_CONFIG
echo "Configuration validation failed, backup restored"
fi
}
# Automated health check and failover
monitor_backend_health() {
while read -r backend server_list; do
for server in $server_list; do
if ! curl -s -o /dev/null -w "%{http_code}" "http://$server/health" | grep -q "200"; then
# Remove unhealthy server
echo "Server $server is unhealthy, removing from rotation"
echo " server $(echo $server | tr ':' '_') $server check disabled" >> /tmp/disabled_servers
# Send alert notification
curl -X POST -H 'Content-type: application/json' \
--data "{\"text\":\"Server $server failed health check\"}" \
$SLACK_WEBHOOK_URL
fi
done
done < $BACKEND_SERVERS
}
# Dynamic scaling based on load (assumes socat is installed and an auto scaling group named "<service>-asg")
auto_scale_backends() {
    local service_name=$1
    local asg_name="${service_name}-asg"
    # Column 5 of "show stat" CSV output is scur (current sessions) for the backend
    local current_sessions=$(echo "show stat" | socat stdio /var/lib/haproxy/stats | \
        awk -F',' -v svc="$service_name" '$1 == svc && $2 == "BACKEND" {print $5}')
    local current_servers=$(aws autoscaling describe-auto-scaling-groups \
        --auto-scaling-group-names "$asg_name" \
        --query 'AutoScalingGroups[0].DesiredCapacity' --output text)
    if [ "$current_sessions" -gt 80 ]; then
        # Scale up: add more backend servers
        aws autoscaling set-desired-capacity --auto-scaling-group-name "$asg_name" --desired-capacity $((current_servers + 2))
        echo "Scaling up $service_name due to high load: $current_sessions active sessions"
    elif [ "$current_sessions" -lt 20 ] && [ "$current_servers" -gt 2 ]; then
        # Scale down: remove excess servers, keeping at least two
        aws autoscaling set-desired-capacity --auto-scaling-group-name "$asg_name" --desired-capacity $((current_servers - 1))
        echo "Scaling down $service_name due to low load: $current_sessions active sessions"
    fi
}
These traffic management solutions include SSL termination automation, geographic load balancing, and integration with cloud auto-scaling groups for dynamic capacity adjustment.
Performance Optimization and Best Practices
Implement Parallel Processing for Faster Script Execution
Modern infrastructure demands speed, and your bash scripts for devops automation can benefit massively from parallel processing. Instead of waiting for each task to complete sequentially, you can run multiple operations simultaneously using background processes and job control.
The ampersand operator (&) runs commands in the background, while the wait command ensures all parallel jobs complete before proceeding:
#!/bin/bash
# Deploy to multiple servers simultaneously
deploy_to_server() {
local server=$1
echo "Deploying to $server..."
ssh $server "sudo systemctl restart application"
}
# Launch parallel deployments
for server in web01 web02 web03; do
deploy_to_server $server &
done
wait
echo "All deployments completed"
GNU parallel offers even more control for complex operations. Install it and transform single-threaded tasks into powerhouses:
# Process log files in parallel
parallel gzip ::: /var/log/app/*.log
# Run health checks across multiple endpoints
parallel -j 10 curl -s {} ::: $(cat endpoints.txt)
Bash’s built-in xargs with the -P flag provides another lightweight parallel option:
# Process files with 4 parallel workers
find /data -name "*.csv" | xargs -P 4 -I {} python process_data.py {}
Monitor system resources when implementing parallel processing – too many concurrent operations can overwhelm CPU or memory. Use tools like nproc to determine optimal worker counts based on available cores.
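For instance, the earlier xargs pattern can size its worker pool from nproc (the cap of 8 here is arbitrary):
# Size the worker pool from the available cores, capped at 8
WORKERS=$(nproc)
(( WORKERS > 8 )) && WORKERS=8
find /data -name "*.csv" -print0 | xargs -0 -P "$WORKERS" -I {} python process_data.py {}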
Create Modular and Reusable Script Components
Breaking down monolithic scripts into modular components transforms your devops automation scripts from maintenance nightmares into elegant, reusable tools. Think of functions as building blocks that you can combine in different ways across projects.
Start by creating a shared library of common functions:
#!/bin/bash
# lib/common.sh - Shared utility functions
log() {
echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" >&2
}
check_service() {
local service=$1
if systemctl is-active --quiet "$service"; then
log "â $service is running"
return 0
else
log "â $service is not running"
return 1
fi
}
backup_file() {
local file=$1
local backup_dir=${2:-/tmp/backups}
mkdir -p "$backup_dir"
cp "$file" "$backup_dir/$(basename $file).$(date +%s)"
}
Source this library in your main scripts:
#!/bin/bash
# Load shared functions
source "$(dirname "$0")/lib/common.sh"
# Now use them anywhere
log "Starting deployment process"
backup_file "/etc/nginx/nginx.conf"
check_service "nginx" || exit 1
Create configuration templates that scripts can customize:
# templates/nginx.conf.template
server {
    listen ${PORT};
    server_name ${DOMAIN};
    root ${DOCUMENT_ROOT};
}
Process templates with environment substitution:
render_template() {
local template=$1
local output=$2
envsubst < "$template" > "$output"
}
export PORT=80 DOMAIN=example.com DOCUMENT_ROOT=/var/www
render_template "templates/nginx.conf.template" "/etc/nginx/sites-available/mysite"
Establish Error Logging and Debugging Frameworks
Robust error handling separates professional devops bash examples from quick-and-dirty scripts. Your automation needs to fail gracefully and provide clear diagnostics when things go wrong.
Set up comprehensive error trapping at the script level:
#!/bin/bash
set -euo pipefail # Exit on errors, undefined variables, pipe failures
# Global error handler
error_handler() {
local line_no=$1
local error_code=$2
echo "Error on line $line_no: Command exited with status $error_code"
cleanup_on_error
exit $error_code
}
trap 'error_handler ${LINENO} $?' ERR
cleanup_on_error() {
# Clean up temporary files, kill background processes, etc.
kill $(jobs -p) 2>/dev/null || true
rm -f /tmp/script_lock_$$
}
Implement structured logging with different severity levels:
# Logging framework
LOG_LEVEL=${LOG_LEVEL:-INFO}
LOG_FILE=${LOG_FILE:-/var/log/automation.log}
log_debug() { [[ $LOG_LEVEL == "DEBUG" ]] && log "DEBUG: $*" || true; }
log_info() { log "INFO: $*"; }
log_warn() { log "WARN: $*" 2>&1 | tee -a "$LOG_FILE" >&2; }
log_error() { log "ERROR: $*" 2>&1 | tee -a "$LOG_FILE" >&2; }
validate_input() {
local input=$1
if [[ -z "$input" ]]; then
log_error "Required parameter missing"
return 1
fi
log_debug "Input validation passed for: $input"
}
Add dry-run capabilities for safer testing:
DRY_RUN=${DRY_RUN:-false}
safe_execute() {
local command="$*"
if [[ "$DRY_RUN" == "true" ]]; then
log_info "[DRY RUN] Would execute: $command"
else
log_debug "Executing: $command"
eval "$command"
fi
}
# Usage
safe_execute "systemctl restart nginx"
safe_execute "rm -rf /tmp/old_deployment"
These bash scripting best practices create maintainable, debuggable automation that scales with your infrastructure needs. Your future self will thank you when troubleshooting complex deployments at 3 AM.
These 25 automation scripts cover the essential building blocks every DevOps team needs to streamline their daily operations. From basic file management and log rotation to complex CI/CD pipeline integration and cloud infrastructure management, bash scripting remains one of the most powerful tools in your automation toolkit. The scripts we’ve explored help eliminate repetitive manual tasks, reduce human error, and create consistent processes across your entire infrastructure.
Start small by implementing the file system and monitoring scripts first, then gradually work your way up to more complex deployment and security automation. Remember that good bash scripts are like good tools: they should be reliable, well-documented, and easy for your team to understand and modify. Take these examples, adapt them to your specific environment, and watch how much time and effort you can save while improving the reliability of your systems.