Running out of disk space on your EC2 instances can bring your applications to a grinding halt, leaving you scrambling for a quick fix. This guide walks you through the complete process of resolving EC2 full disk space issues and preventing them from happening again.
Who this guide is for: AWS administrators, DevOps engineers, and system administrators managing EC2 instances who need immediate solutions for storage problems or want to set up better EC2 storage management practices.
We’ll cover the essential emergency cleanup steps to get your systems running again, walk through the complete process of expanding EBS volumes and resizing Linux filesystems, and share proven AWS storage best practices for long-term disk usage optimization. You’ll learn how to quickly identify what’s eating up your disk space, safely expand your storage capacity, and set up monitoring that catches problems before they crash your applications.
Identify Full Disk Issues on EC2 Instances
Recognize common disk space warning signs and error messages
When your EC2 instance runs out of space, the system throws specific error messages that signal trouble ahead. Watch for “No space left on device” errors, failed application deployments, or database write failures. Your SSH sessions might become sluggish, and system logs often show warnings about low disk space. Application crashes without clear reasons frequently point to EC2 full disk space issues that need immediate attention.
Use command-line tools to check disk usage and availability
The df -h command reveals current disk usage across all mounted filesystems, showing available space in human-readable format. Use du -sh /* to examine directory-level consumption, while lsblk displays block device information including EBS volumes. For real-time monitoring, iostat tracks disk I/O patterns. These tools help diagnose EC2 disk cleanup requirements and identify which partitions need AWS EBS volume expansion to prevent system failures.
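For day-to-day triage, these checks can be combined into a small script; the 80% threshold below is an arbitrary example, not an AWS recommendation:

```shell
# Flag any mounted filesystem above a usage threshold (80% is an example value)
THRESHOLD=80
df -hP | awk -v t="$THRESHOLD" 'NR > 1 {
    gsub("%", "", $5)                      # strip the % sign from the Use% column
    status = ($5 + 0 >= t) ? "WARNING" : "ok"
    printf "%-25s %3s%%  %s\n", $6, $5, status
}'
```

Running this from cron and mailing the output is a cheap stand-in until proper CloudWatch monitoring is in place.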
Locate files and directories consuming excessive storage space
Large log files often consume massive amounts of storage space without warning. Run du -ah /var/log | sort -rh | head -20 to find the biggest log files eating your disk space. Check /tmp directories for forgotten temporary files using find /tmp -type f -size +100M. Docker images and containers can bloat rapidly – use docker system df to assess container storage usage. Package caches in /var/cache and old kernel files in /boot frequently require cleanup for effective EC2 storage management and disk usage optimization.
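These checks can be bundled into one sketch; the TARGET variable defaults to /var/log, but any suspect directory works:

```shell
# Where to hunt for space hogs; override TARGET to inspect another directory
TARGET="${TARGET:-/var/log}"

# Twenty largest files and directories under the target path
du -ah "$TARGET" 2>/dev/null | sort -rh | head -20

# Forgotten temporary files over 100 MB
find /tmp -type f -size +100M 2>/dev/null

# Docker's view of its own storage, when Docker is installed
if command -v docker >/dev/null 2>&1; then
    docker system df
fi
```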
Emergency Cleanup Steps to Free Immediate Space
Remove temporary files and system logs safely
Start your EC2 disk cleanup by targeting the /tmp directory and system logs in /var/log. Use sudo rm -rf /tmp/* to clear temporary files (first confirm no running application is holding files there), then truncate large log files with sudo truncate -s 0 /var/log/syslog instead of deleting them, so the logging daemon keeps a valid file handle. Check Apache/Nginx logs in /var/log/apache2 or /var/log/nginx and compress old entries using gzip. Run sudo journalctl --vacuum-time=7d to remove systemd journal entries older than a week.
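Pulled together, these steps look something like the sketch below; the LOG_DIR variable exists so you can point the script at a scratch directory before trusting it on a production host:

```shell
# Emergency cleanup sketch -- review every step before running it on production.
LOG_DIR="${LOG_DIR:-/var/log}"

# Truncate the main syslog in place so the logging daemon keeps its file handle
if [ -f "$LOG_DIR/syslog" ]; then
    truncate -s 0 "$LOG_DIR/syslog"
fi

# Compress rotated logs that were left uncompressed
find "$LOG_DIR" -name "*.log.[0-9]" -exec gzip -f {} \; 2>/dev/null

# Drop systemd journal entries older than seven days, where journald is present
if command -v journalctl >/dev/null 2>&1; then
    journalctl --vacuum-time=7d >/dev/null 2>&1 || true
fi
```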
Clear package manager caches and old kernels
Package managers store downloaded files that consume significant space on your EC2 instance. Run sudo apt autoremove and sudo apt autoclean on Ubuntu/Debian systems, or sudo yum clean all on Red Hat-based distributions. Remove old kernel versions with sudo apt autoremove --purge to free several gigabytes. Check the /var/cache/apt/archives or /var/cache/yum directories manually if automatic cleanup doesn’t provide enough space relief.
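The right commands depend on the distribution, so a cleanup script usually needs a small dispatch; this sketch assumes it runs as root:

```shell
# Cache cleanup dispatch (sketch; assumes root). Picks commands per distribution.
if command -v apt-get >/dev/null 2>&1; then
    apt-get -y autoremove --purge 2>/dev/null   # orphaned packages, old kernels
    apt-get -y autoclean 2>/dev/null            # stale cached .deb files
elif command -v dnf >/dev/null 2>&1; then
    dnf clean all 2>/dev/null
elif command -v yum >/dev/null 2>&1; then
    yum clean all 2>/dev/null
fi

# See what is still sitting in the cache directories afterwards
du -sh /var/cache/* 2>/dev/null | sort -rh | head -5
```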
Delete unnecessary application logs and backup files
Application logs often grow unchecked, consuming valuable disk space on EC2 instances. Navigate to your application directories and identify log files larger than 100MB using find /var/www -name "*.log" -size +100M. Rotate or delete old application logs, temporary uploads in /tmp, and database dump files. Remove old backup files from /home directories and check Docker containers for accumulated layers using docker system prune -a if applicable.
Compress or archive large files taking up critical space
Identify space-consuming files with du -ah / | sort -hr | head -20 to find the largest directories and files. Compress log files, database dumps, and archived data using gzip or tar -czf. Move non-critical files to S3 storage using AWS CLI commands like aws s3 cp large-file.tar.gz s3://your-bucket/. Create compressed archives of old application data and user uploads that aren’t frequently accessed but must be retained.
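A hedged sketch of the archive-and-offload flow; SRC and "your-bucket" are placeholders you would replace with real values:

```shell
# Archive-and-offload sketch; SRC and "your-bucket" are placeholders
SRC="${SRC:-/var/log}"
ARCHIVE="/tmp/$(basename "$SRC")-$(date +%F).tar.gz"

# Create the compressed archive (errors on unreadable files are suppressed here)
tar -czf "$ARCHIVE" -C "$(dirname "$SRC")" "$(basename "$SRC")" 2>/dev/null
ls -lh "$ARCHIVE"

# Upload, and delete the local copy only after the upload succeeds
if command -v aws >/dev/null 2>&1; then
    aws s3 cp "$ARCHIVE" "s3://your-bucket/archive/" && rm -f "$ARCHIVE" \
        || echo "upload skipped or failed; archive kept at $ARCHIVE"
fi
```

The aws s3 cp step needs the AWS CLI with credentials that can write to the bucket; for data that must stay cheap but retrievable, an S3 lifecycle rule moving the archive/ prefix to a colder storage class is a common follow-up.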
Expand EBS Volume Size Through AWS Console
Navigate to EC2 dashboard and select target volume
Access the AWS Management Console and head to the EC2 service dashboard. Click on “Volumes” in the left navigation panel under “Elastic Block Store.” Find your specific EBS volume by matching the instance ID or volume ID. The volume list displays attachment information, size, and current status. Select the volume attached to your full disk EC2 instance by clicking the checkbox next to it.
Modify volume size with zero-downtime expansion
Right-click the selected volume and choose “Modify Volume” from the context menu. The modification dialog shows current volume specifications including size, type, and IOPS settings. Enter your desired new size – EBS supports increasing a volume’s size without detaching it from a running instance. Review the changes and click “Modify” to start the expansion, which happens live while your EC2 instance keeps running.
Monitor volume modification progress and completion status
The volume modification creates a background task visible in the “Volumes” section. Watch the “Volume State” column, which transitions from “in-use” to “modifying” during expansion. The process typically completes within minutes depending on volume size. Check the “Modification State” column for real-time progress updates. Once it shows “completed,” your EBS volume has been expanded, though the operating system’s filesystem still needs resizing to access the additional space.
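If you prefer the command line, the same expansion can be driven with the AWS CLI; the volume ID and target size below are placeholders:

```shell
# Grow the volume (the volume ID and size are placeholders; substitute your own)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100

# Poll until ModificationState reads "optimizing" or "completed"
aws ec2 describe-volumes-modifications \
    --volume-ids vol-0123456789abcdef0 \
    --query 'VolumesModifications[0].ModificationState' \
    --output text
```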
Resize Filesystem to Utilize New EBS Space
Extend partition tables for increased volume capacity
After expanding your EBS volume through the AWS console, the operating system doesn’t automatically recognize the additional space. You’ll need to extend the partition table to make the extra capacity available to your filesystem. For most EC2 instances running Linux, use the growpart command to resize the partition. First, identify your root partition with lsblk to see the current disk layout. Then run sudo growpart /dev/xvda 1 (replace with your actual device and partition number) to extend the partition to use all available space on the expanded EBS volume.
Grow filesystem using resize2fs or xfs_growfs commands
Once the partition is extended, you need to resize the actual filesystem to use the newly available space. The command depends on your filesystem type – check with df -T to identify whether you’re running ext4 or XFS. For ext4 filesystems, run sudo resize2fs /dev/xvda1 to expand the filesystem to fill the enlarged partition. For XFS filesystems, use sudo xfs_growfs / instead. Both commands work online without requiring a system reboot, making them perfect for production environments where downtime isn’t an option.
Verify successful filesystem expansion and available space
After completing the filesystem resize operation, confirm the expansion worked correctly by checking available disk space with df -h. The output should show your root filesystem now has the additional space from your EBS volume expansion. Cross-reference this with lsblk to ensure the partition and filesystem sizes match your expanded EBS volume capacity. If the numbers don’t align, double-check that you extended the correct partition and used the appropriate filesystem resize command for your setup.
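The whole resize sequence from the three sections above can be sketched as follows; the device names are examples, and Nitro-based instances will show /dev/nvme0n1 with partition /dev/nvme0n1p1 instead:

```shell
# End-to-end resize sketch. /dev/xvda is an example device name; check lsblk
# for the actual names on your instance.
sudo growpart /dev/xvda 1                     # 1. extend the partition

FSTYPE=$(df --output=fstype / | tail -1)      # 2. grow the filesystem to match
case "$FSTYPE" in
    ext4) sudo resize2fs /dev/xvda1 ;;
    xfs)  sudo xfs_growfs / ;;
esac

df -h /                                       # 3. confirm the extra space
```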
Implement Proactive Monitoring and Prevention Strategies
Set up CloudWatch alarms for disk usage thresholds
Configure CloudWatch alarms to monitor EC2 disk usage and trigger alerts when space reaches 70%, 85%, and 95% capacity thresholds. Create custom metrics using the CloudWatch Agent to track filesystem usage across all mounted volumes – the standard EC2 metrics don’t include disk space. Set up SNS notifications to alert your team via email or Slack when a threshold is exceeded, so you can act before disk space becomes critical.
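As one possible starting point, an 85% alarm on the CloudWatch Agent’s disk_used_percent metric might look like this; every ID and ARN below is a placeholder, and the dimensions must match exactly what your agent publishes:

```shell
# One alarm at the 85% threshold; repeat for the 70% and 95% tiers.
aws cloudwatch put-metric-alarm \
    --alarm-name "ec2-root-disk-85pct" \
    --namespace "CWAgent" \
    --metric-name "disk_used_percent" \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 Name=path,Value=/ \
    --statistic Average \
    --period 300 \
    --evaluation-periods 2 \
    --threshold 85 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:disk-alerts
```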
Configure automated log rotation and cleanup schedules
Implement logrotate configurations to automatically compress and remove old log files on a daily or weekly schedule. Set up cron jobs to clean temporary directories, remove old application logs, and purge cached files that consume unnecessary disk space. Configure automated cleanup scripts that target common space-consuming directories like /tmp, /var/log, and application-specific directories, preventing out-of-space scenarios through regular maintenance.
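A minimal logrotate sketch for an application log directory; the path and retention values are examples to adapt:

```
# /etc/logrotate.d/myapp -- path and retention values are examples
/var/www/myapp/logs/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    copytruncate
}
```

Dropped into /etc/logrotate.d/, this runs from the distribution’s daily logrotate job; copytruncate lets the application keep writing to the same file descriptor, at the cost of possibly losing a few lines written during the copy.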
Establish regular capacity planning and volume sizing reviews
Schedule monthly reviews of disk usage patterns and growth trends using CloudWatch metrics and AWS Cost Explorer. Analyze historical storage consumption data to predict future EBS volume requirements and plan timely expansions. Create standardized volume sizing guidelines based on workload types and implement automated tagging strategies to track storage costs and usage patterns across your AWS infrastructure.
Create backup and disaster recovery procedures for critical data
Establish automated EBS snapshot schedules using AWS Backup or custom Lambda functions to protect critical data before performing EC2 disk cleanup operations. Create cross-region backup copies of essential volumes and test restoration procedures regularly. Document step-by-step recovery processes and maintain offsite backups of configuration files, ensuring business continuity during storage emergencies or when implementing EBS volume expansion procedures.
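Before a risky cleanup, a one-off snapshot can also be taken straight from the CLI; the volume ID is a placeholder:

```shell
# One-off snapshot before destructive cleanup (the volume ID is a placeholder)
aws ec2 create-snapshot \
    --volume-id vol-0123456789abcdef0 \
    --description "pre-cleanup snapshot $(date +%F)" \
    --tag-specifications 'ResourceType=snapshot,Tags=[{Key=purpose,Value=pre-cleanup}]'
```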
Running out of disk space on your EC2 instances doesn’t have to turn into a crisis. By spotting the warning signs early and knowing the right cleanup commands, you can buy yourself time to plan a proper fix. Expanding your EBS volumes and resizing filesystems has become straightforward with AWS tools, and the whole process can usually be done without any downtime.
The real win comes from setting up monitoring before you need it. CloudWatch alarms for disk usage, automated cleanup scripts, and regular capacity planning will save you from those 3 AM emergency calls. Take a few minutes to implement these monitoring practices now, and you’ll thank yourself later when your systems scale smoothly instead of hitting unexpected walls.