AWS EC2: Instance reachability check failed
A simple restart may result in the instance reachability check failing and your instance being seemingly broken. But is it?
Written by Anthony Chambers, and read 12,085 times
This morning we restarted one of our main web servers. It should have been a simple process, with the system down for only a couple of minutes, but it didn't come back up properly at all. We looked at the Status Checks and saw that the Instance Reachability Check had failed.
It's possible to view the server's console log from the AWS Console by right-clicking on your instance and selecting Get System Log. We did this, and right at the bottom this is what we saw:
Checking filesystems
Checking all file systems.
[/sbin/fsck.ext4 (1) -- /] fsck.ext4 -a /dev/xvda1
/ has been mounted 28 times without being checked, check forced.
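If you'd rather not click around the Console, the same console output can be pulled with the AWS CLI. This is just a sketch; the instance ID below is a placeholder and the CLI is assumed to be configured with suitable credentials:

# Fetch the instance's console output (the same text shown by Get System Log)
aws ec2 get-console-output --instance-id i-0123456789abcdef0 --output text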
This is the default behaviour of the Amazon Linux distribution that we're using: after a set number of mounts, the root filesystem is forcibly checked at boot. Administrators should be aware of this, because the system may not come back up when you expect it to; you can go into panic mode trying to restore snapshots and so on, only for the instance to come back up on its own if it's left alone for long enough.
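You can see how close a filesystem is to triggering one of these forced checks with tune2fs. A quick sketch, assuming the root volume is /dev/xvda1 as in the log above:

# Show the current mount count and the thresholds that force a check
sudo tune2fs -l /dev/xvda1 | grep -iE 'mount count|check'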
You can disable this check by editing the /etc/fstab file. For example:
#
LABEL=/     /           ext4    defaults,noatime  1 1
tmpfs       /dev/shm    tmpfs   defaults          0 0
devpts      /dev/pts    devpts  gid=5,mode=620    0 0
sysfs       /sys        sysfs   defaults          0 0
proc        /proc       proc    defaults          0 0
Look at the two numbers at the end of each line. The second one (the last value) is the fsck pass number: it determines the order in which filesystems on the same drive are checked at boot (filesystems on different drives with the same number can be checked in parallel). Setting it to 0 (zero) means that filesystem is skipped entirely.
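As an illustration, the root entry above would become the following, with only the final field changed from 1 to 0:

LABEL=/     /           ext4    defaults,noatime  1 0

Alternatively, you can leave /etc/fstab alone and turn off the periodic checks on the filesystem itself. This is a sketch assuming the same /dev/xvda1 root volume:

# Disable the mount-count-based (-c 0) and interval-based (-i 0) forced checks
sudo tune2fs -c 0 -i 0 /dev/xvda1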