Linux file permissions look simple on the surface, but a weak Linux Permissions Audit can turn a routine server into an escalation path. A single writable script, an overexposed SSH key, or a misowned service file can lead to privilege escalation, data exposure, or persistence after compromise. That is why Linux Security teams treat permissions as a first-line control, not a housekeeping task.
This guide focuses on a practical audit workflow using Open Source Security Tools. You will inspect standard mode bits, ownership, special bits, ACLs, SUID and SGID files, world-writable paths, and sensitive configuration files. The goal is not just to identify problems, but to understand which findings matter, how to validate them, and how to fix them without breaking services.
We will use tools you already have on most systems or can install easily: find, stat, ls, getfacl, namei, auditd, and Lynis. We will also mention optional utilities such as chkrootkit and rkhunter where compromise indicators overlap with permission issues. This process is for system administrators, DevOps engineers, security teams, and anyone responsible for Linux servers or workstations. Vision Training Systems recommends building the audit into your regular operations, not using it only after an incident.
Understanding Linux File Permission Basics
Linux file permissions are built on three access classes: owner, group, and others. Each class can have read, write, and execute rights, and that model determines who can view content, change it, or run it. A file with mode 644 is readable by everyone but writable only by the owner, while 755 on a directory allows traversal and listing for others but still limits write access to the owner.
Ownership matters just as much as the mode bits. A root-owned configuration file in /etc is expected, because many services rely on root to protect and manage system state. If that same file is owned by a normal user, even if the permissions look “tight,” you may have a control weakness because the wrong account can change service behavior.
Special permissions deserve extra scrutiny. The SUID bit makes a program run with the owner’s privileges, commonly root. The SGID bit grants group privileges, and on directories it can affect group inheritance. The sticky bit limits who can delete files in shared directories such as /tmp. These features are useful, but they are also classic escalation points if they appear where they should not.
Access Control Lists, or ACLs, extend traditional permissions by allowing more granular rules for specific users and groups. That means a file can look safe in ls -l while still granting unexpected access through an ACL entry. This is why a real Linux Permissions Audit needs both mode checks and ACL inspection.
- Read means view file contents or list directory entries.
- Write means modify or delete content, depending on directory traversal rights.
- Execute means run a binary or enter a directory.
- SUID, SGID, and ACLs can override what the basic mode bits suggest.
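The classes and bits above can be verified quickly on throwaway files. The following sketch uses scratch paths created with mktemp, so nothing on the real system is touched:

```shell
# Scratch demonstration of mode bits and the sticky bit
workdir=$(mktemp -d)

# 644: owner read/write; group and others read-only
touch "$workdir/config.txt"
chmod 644 "$workdir/config.txt"
stat -c '%a %U %G %n' "$workdir/config.txt"   # octal mode, owner, group, path

# 1777: world-writable shared directory with the sticky bit set,
# so users can delete only their own files (the /tmp pattern)
mkdir "$workdir/shared"
chmod 1777 "$workdir/shared"
stat -c '%a' "$workdir/shared"                # prints "1777"

rm -rf "$workdir"
```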
“If permission review stops at ls -l, it is not an audit. It is a glance.”
Preparing For The Audit
Start by identifying the systems and paths that matter most. The usual targets are /etc, /home, /var/www, /opt, /usr/local, and any application-specific directories where code, secrets, or service files live. In container hosts or CI runners, you should also include mounted volumes and automation directories, because those often become overlooked trust boundaries.
Run the audit from a root shell or through sudo with logging enabled. That gives you the access needed to inspect hidden ownership issues and protected files. It also creates an accountability trail, which matters when you need to explain why a configuration changed or why a risky path was flagged.
Decide whether this is a one-time review, a compliance check, or a recurring monitoring task. A one-time review is good for incident response or hardening. A compliance check needs evidence, repeatability, and documented thresholds. A recurring task should be automated enough to detect drift without flooding your team with noise.
Before changing anything, establish a baseline. Capture current permissions, ownership, and ACLs so you can compare future scans against the known-good state. If the host is critical, take a backup or snapshot first. That is not paranoia; it is basic change control.
Note
A baseline is only useful if it reflects approved behavior. Document exceptions such as shared application directories, managed deployment paths, and service accounts that legitimately own files outside /home.
- Scope system paths, application paths, and user-managed locations.
- Capture a snapshot or backup before remediation.
- Record expected ownership for sensitive files.
- Define whether findings will be remediated immediately or reviewed first.
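A minimal baseline capture can be sketched as follows. The scratch directory stands in for real scope paths such as /etc or /var/www, and the report file name is a placeholder:

```shell
# Illustrative scope: substitute real paths like /etc /var/www
scope=$(mktemp -d)
touch "$scope/app.conf"
chmod 640 "$scope/app.conf"

# Record octal mode, owner, group, and path for every file in scope
# (%m/%u/%g/%p are GNU find -printf fields); sort for stable diffs
find "$scope" -xdev -type f -printf '%m %u %g %p\n' | sort -k4 > baseline_perms.txt

# Optionally record ACLs too, since mode bits alone do not show them:
# getfacl -R -p "$scope" > baseline_acls.txt

cat baseline_perms.txt   # e.g. "640 alice alice /tmp/tmp.X/app.conf"
```

Future scans written in the same format can be compared against this file with a plain diff to spot drift.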
Using Find To Locate Risky Permissions
The find command is the fastest way to surface obvious permission risks. A classic audit starts by looking for world-writable files and directories, because those paths can let an unprivileged user alter scripts, configs, or data. On many systems, this is the first place to look for privilege escalation opportunities.
Use search patterns that match your environment. For example, world-writable directories outside temporary locations are usually suspicious, while world-writable files in application folders may indicate a deployment issue. You also want to find files without a valid owner or group, because orphaned assets often survive after account deletion or migrations.
find /etc /home /var/www /opt /usr/local -xdev -type d -perm -0002 -print
find /etc /home /var/www /opt /usr/local -xdev -type f -perm -0002 -print
find / -xdev \( -nouser -o -nogroup \) -print
From there, locate SUID and SGID binaries. Not every SUID binary is dangerous, but each one should be validated against a known-good list. If a custom binary in /usr/local has SUID set, that deserves a close look. The same is true for unexpected executables in writable directories, because attackers frequently place helper scripts where they can later be invoked by a service or admin task.
Save results to a report file so you can track remediation. That report becomes evidence for change management and helps you avoid rescanning the same issues repeatedly. A simple text report is enough if it contains timestamps, paths, and a short risk note for each item.
find / -xdev -type f \( -perm -4000 -o -perm -2000 \) > suid_sgid_report.txt
find /etc /var/www /opt /usr/local -xdev -type f -perm -0002 > world_writable_files.txt
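A report with timestamps and a short risk label per line, as described above, might be produced like this. The scratch directory and report name are illustrative:

```shell
# Stand-in for a real scope such as /opt; plant a risky file to find
scan_dir=$(mktemp -d)
touch "$scan_dir/loose.sh"
chmod 777 "$scan_dir/loose.sh"

# Append a timestamped, labeled section so the report works as
# change-management evidence across repeated scans
report=perm_findings.txt
{
  echo "== scan $(date -u +%Y-%m-%dT%H:%M:%SZ) =="
  find "$scan_dir" -xdev -type f -perm -0002 -printf 'WORLD-WRITABLE %p\n'
} >> "$report"
```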
- World-writable directories outside /tmp are high-risk.
- Orphaned files may signal abandoned or migrated assets.
- SUID and SGID binaries require business justification.
- Writable executables in shared paths are a common persistence mechanism.
Warning
Do not assume every world-writable path is malicious. Package managers, application spool directories, and temporary workspaces may be writable by design. Confirm the intended owner and write model before changing anything.
Inspecting Ownership And Metadata With Stat And Ls
Use stat when you need more detail than ls -l provides. It shows the UID, GID, mode, inode number, and timestamps in exact, unambiguous form. That matters because a file can appear normal at a glance while its timestamps or ownership reveal tampering, replacement, or an unusual deployment event.
ls -l is useful for quick review, but it hides key details. It may not show the inode, it can be affected by aliases or local formatting, and it does not give you the level of precision needed for forensic work. In an audit, compare both outputs on sensitive files and look for mismatches or unusual patterns.
ls -l /etc/sudoers /etc/ssh/sshd_config
stat /etc/sudoers /etc/ssh/sshd_config
For critical files, the expected owner is often root, and the group is usually root or a tightly controlled admin group. If you find a service unit file in /etc/systemd/system owned by a developer account, that may be a sign of risky operational practice. The same applies to cron jobs, SSH configuration, and sudoers-related content.
Timestamp analysis can reveal recent edits to files that should rarely change. A sudden modification in a protected directory is not proof of compromise, but it is a reason to ask why the file changed and who changed it. In a strong Linux Security process, ownership and timestamp review are used together, not separately.
| Tool | Best Use |
|---|---|
| ls -l | Quick human-readable permission review |
| stat | Detailed metadata, inode, and timestamp validation |
- Check /etc/sudoers, SSH configs, and cron entries first.
- Look for non-root owners in root-managed directories.
- Investigate recent timestamp changes on stable system files.
- Confirm that metadata aligns with your baseline.
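Comparing live metadata against expected values can be scripted with stat's machine-readable output. The check() helper and the expectation strings below are hypothetical illustrations, not a standard tool:

```shell
# Compare a file's owner, group, and mode against an expected
# "owner:group mode" string; print a finding only on mismatch
check() {
  path=$1
  expected=$2
  actual=$(stat -c '%U:%G %a' "$path")
  [ "$actual" = "$expected" ] || echo "MISMATCH $path: $actual (want $expected)"
}

f=$(mktemp)
chmod 600 "$f"
check "$f" "$(id -un):$(id -gn) 600"   # silent when metadata matches
check "$f" "root:root 440"             # prints a MISMATCH line
rm -f "$f"
```

In a real audit the calls would target sensitive files, e.g. `check /etc/ssh/sshd_config "root:root 600"`, with the expected values taken from your documented baseline.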
Auditing ACLs And Extended Permissions
getfacl is the tool that closes the gap left by standard permission bits. It shows ACL entries that can grant access to specific users or groups even when ls -l looks clean. This is a common blind spot in a Linux Permissions Audit because ACLs are easy to forget and easy to misuse during application setup.
Look for unexpected read or write access on files that store secrets, configs, or scripts. A developer may have granted access to a test account months ago and forgotten to remove it. If the ACL still exists, that account may still be able to read production data or alter execution paths.
getfacl /etc/ssh/sshd_config
getfacl /var/www/app
getfacl -R /opt/myapp | less
Default ACLs deserve attention too. They control how permissions are inherited by newly created files in a directory. If a shared project folder has a permissive default ACL, every new file may inherit broader access than the creator intended. That can quietly expand the blast radius of one misconfigured folder.
Document ACL findings carefully. Removing them blindly can break application workflows, especially in environments with shared deployment accounts or collaborative content systems. The right approach is to identify the business need, compare it to the actual access path, and then trim only the entries that are no longer justified.
Key Takeaway
ACLs can override apparently safe mode bits. If a file looks locked down in ls -l, verify it with getfacl before you trust the result.
- Check for named users in ACL entries on sensitive files.
- Review default ACLs on shared directories.
- Document intended exceptions before removing them.
- Re-test applications after ACL cleanup.
Checking Path Integrity And Parent Directory Permissions
namei is one of the best tools for path integrity checks because it walks each component in a file path. A file may have secure permissions, but if one of its parent directories is writable by unauthorized users, the path is still vulnerable. That is why parent directories must be audited as part of a real Linux Permissions Audit.
For sensitive paths like SSH keys or service binaries, verify that no parent directory can be modified by the wrong account. If a directory in the chain is writable, an attacker may be able to replace a file, redirect access, or manipulate lookups through a symlink. This is especially important in application trees and scripts that run with elevated privileges.
namei -l /etc/ssh/sshd_config
namei -l /home/appuser/.ssh/authorized_keys
namei -l /usr/local/bin/deploy-script
Symlinks and hard links deserve attention too. A symlink inside a writable directory can point to a sensitive location, and a careless admin may follow it without noticing the destination. Hard links are less common for attack paths on modern systems, but they still matter when you need to understand how a file can be reached or preserved.
Path integrity checks are especially useful when a file is technically root-owned yet lives under a weak directory tree. In that case, the file itself may be secure, but the route to it is not. A secure endpoint inside an insecure chain is still a weak control.
- Audit every parent directory in sensitive file paths.
- Check for writable parents, symlink tricks, and unexpected mount points.
- Validate deployment paths and admin scripts, not just config files.
- Use namei -l when a file-level review seems “too clean.”
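The walk is easy to see on a small synthetic tree; in a real audit the target would be something like /usr/local/bin/deploy-script:

```shell
# Build a scratch path chain (illustrative names)
base=$(mktemp -d)
mkdir -p "$base/app/bin"
touch "$base/app/bin/deploy.sh"
chmod 755 "$base/app/bin/deploy.sh"

# -l prints the mode, owner, and group of every component, so a
# group- or world-writable parent directory stands out immediately
namei -l "$base/app/bin/deploy.sh"

rm -rf "$base"
```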
Finding Dangerous Files And Misconfigurations
Some of the most serious permission problems hide in plain sight. Hidden files, backup files, editor swap files, and temporary artifacts can leak credentials or expose old config states. Files such as .bak, .old, ~, and .swp are common in manual workflows and often left behind in production directories.
Search for writable scripts, cron entries, and service startup files. If an attacker can modify a scheduled job or a startup unit, they can often turn that change into persistence or escalation. This is one of the most dangerous patterns in Linux Security because the malicious change can look like a routine admin update.
find /etc /var/www /opt /usr/local -xdev \( -name "*.bak" -o -name "*.old" -o -name "*~" -o -name "*.swp" \) -print
find /etc/cron* /var/spool/cron -type f -perm -0002 -print
find /etc/systemd/system /usr/lib/systemd/system -type f -perm -0002 -print
Also inspect secrets such as private keys, certificates, and API tokens. These should never be broadly readable unless there is a documented business reason. If a private key is world-readable, rotate it. If a configuration file exposes a token, assume it may already be copied elsewhere.
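A quick way to flag keys exposed beyond their owner is to match any group or other permission bit. The key directory and file names below are illustrative; point the search at real key stores such as users' .ssh directories:

```shell
keydir=$(mktemp -d)
touch "$keydir/id_ed25519"
chmod 644 "$keydir/id_ed25519"   # deliberately loose
touch "$keydir/id_rsa"
chmod 600 "$keydir/id_rsa"       # correct: owner-only

# -perm /077 matches files with ANY group or other bit set;
# every hit here needs tightening and probably key rotation
find "$keydir" -type f -name 'id_*' -perm /077 -print
```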
Do not ignore log and temp directories. Permissions drift over time as troubleshooting, log rotation, and application restarts create new files. A directory that started safe can become messy in a few deployments. The audit should catch that drift before it becomes a breach path.
- Remove leftover backup and swap files from sensitive directories.
- Check for writable cron jobs and service units.
- Treat exposed secrets as compromise candidates.
- Review temp and log locations for permission drift.
Leveraging Open Source Security Audit Tools
Lynis is useful because it automates a broad hardening review and flags permission-related concerns quickly. It will not replace a manual audit, but it helps identify weak defaults, risky ownership patterns, and missing hardening controls. For teams running repeated Open Source Security Tools workflows, Lynis provides a practical starting point and a repeatable baseline.
auditd is the evidence layer. It records events tied to files and directories, which helps you prove when a sensitive file changed and which process made the change. That makes it ideal for permission drift tracking and incident investigations.
lynis audit system
auditctl -w /etc/sudoers -p wa -k sudoers_watch
auditctl -w /etc/ssh/sshd_config -p wa -k ssh_watch
Optional tools such as chkrootkit or rkhunter can be helpful when permission anomalies might be tied to compromise indicators. They are not permission scanners, but they can provide a broader context if you discover unusual executables or altered system files. When used together, automated tools and manual verification reduce false positives and prevent rushed conclusions.
Simple shell scripting can automate recurring checks. A weekly scan that captures world-writable files, SUID binaries, and ACL deltas is often enough for small fleets. Larger environments should aggregate results into a central report so patterns become visible across hosts instead of living in isolated logs.
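A recurring scan of that kind might be sketched as below. SCAN_SCOPE and SCAN_STATE are illustrative knobs; in production they would point at real paths (for example /etc /opt /usr/local) and a persistent state file:

```shell
#!/bin/sh
# Weekly drift scan sketch: capture world-writable and SUID/SGID
# files, then diff against the previous run's snapshot.
scope=${SCAN_SCOPE:-/etc /usr/local}
state=${SCAN_STATE:-$HOME/.permscan-last.txt}
current=$(mktemp)

# Word-splitting on $scope is intentional: it may hold several paths
find $scope -xdev -type f \( -perm -0002 -o -perm -4000 -o -perm -2000 \) \
     -printf '%m %u %g %p\n' 2>/dev/null | sort > "$current"

# Any difference from the last run is permission drift worth review
if [ -f "$state" ] && ! diff -q "$state" "$current" >/dev/null; then
    echo "PERMISSION DRIFT DETECTED"
fi
mv "$current" "$state"
```

Run it from cron or a systemd timer and ship the drift message to wherever your team actually looks.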
Pro Tip
Use tool output as a triage layer, not a verdict. The fastest audits combine automation for breadth with manual review for accuracy.
- Run Lynis for broad hardening recommendations.
- Use auditd for immutable evidence of file changes.
- Consider chkrootkit or rkhunter when compromise is suspected.
- Automate recurring scans with shell scripts or scheduled jobs.
Interpreting Findings And Prioritizing Remediation
Not all findings carry the same risk. A world-writable directory in /tmp is expected if it has the sticky bit, but a world-writable directory in /etc is an immediate escalation concern. A readable log file may be a low-priority hygiene issue, while a writable sudoers fragment can become a direct root path.
Start by classifying issues into severity buckets. Immediate risk means privilege escalation, code execution, or secret exposure. Medium risk means paths that could become dangerous in a future change or deployment. Low risk means cleanup items that do not currently expose critical assets but should still be corrected.
| Severity | Examples |
|---|---|
| Immediate | SUID abuse, writable sudoers, exposed SSH private keys |
| Medium | ACL exceptions, writable application scripts, weak parent directories |
| Low | Unused backup files, orphaned but non-sensitive artifacts |
Distinguish legitimate application needs from unsafe permissions. Some services require write access to a data directory, and some deployment systems need controlled group write permissions. The key is documenting why that access exists and verifying that it is constrained to the smallest practical scope.
Once you know the severity and business purpose, create a remediation plan. Include permission changes, ownership corrections, ACL cleanup, and directory hardening. Test in staging before touching production. Permission changes that break a service at 2 a.m. are worse than the original misconfiguration.
Remediation Best Practices
Use chmod, chown, and setfacl carefully. The goal is to match the intended access model, not just to make the scanner go quiet. For example, removing group write from a shared deployment directory may be correct, but only after you confirm the application does not depend on that group write behavior.
Remove unnecessary SUID and SGID bits wherever possible. Many legacy binaries were granted these bits years ago for convenience, and modern operating practices often have safer alternatives. If a binary does not truly need elevated execution, strip the bit and verify the application still functions.
chmod u-s /usr/local/bin/example
chmod g-s /usr/local/bin/example
chown root:root /etc/ssh/sshd_config
setfacl -b /var/www/app
Secrets should be locked down to least privilege. If you suspect exposure, rotate credentials immediately rather than waiting for proof of misuse. Treat directory hardening seriously too: shared temporary locations often need the sticky bit so one user cannot delete another user’s files.
After remediation, verify the fix. Re-run the same commands you used during the audit and compare the results with your baseline. This final check is what turns a cleanup task into a controlled security change.
- Prefer small, testable permission changes over broad fixes.
- Strip SUID and SGID bits when they are not required.
- Rotate secrets after exposure or suspicious access.
- Confirm the post-change state with a second scan.
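The before-and-after comparison can be as simple as re-running the original find and checking that the second report is empty. Paths and report names here are illustrative scratch values:

```shell
scope=$(mktemp -d)
bad="$scope/deploy.sh"
touch "$bad"
chmod 777 "$bad"                  # the original finding

find "$scope" -xdev -type f -perm -0002 > before.txt
chmod 750 "$bad"                  # the remediation step
find "$scope" -xdev -type f -perm -0002 > after.txt

# An empty "after" report confirms the fix took effect
[ -s after.txt ] && echo "STILL WRITABLE" || echo "CLEAN"   # prints "CLEAN"
```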
Setting Up Ongoing Monitoring And Reporting
Permission audits should not be one-time events. Schedule periodic scans with cron or systemd timers so you can detect drift before it becomes an incident. A weekly baseline scan is a good starting point for many small and mid-sized environments, while sensitive systems may need daily checks.
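One way to schedule the scan with systemd is a service and timer pair. The unit names and the /usr/local/bin/permscan.sh script path below are illustrative placeholders for your own scan script:

```ini
# /etc/systemd/system/permscan.service (illustrative unit name)
[Unit]
Description=Recurring Linux permissions scan

[Service]
Type=oneshot
ExecStart=/usr/local/bin/permscan.sh

# /etc/systemd/system/permscan.timer
[Unit]
Description=Run the permissions scan weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```

Install both units, then activate with `systemctl daemon-reload` followed by `systemctl enable --now permscan.timer`.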
Use auditd rules to alert on changes to sensitive files and directories. That gives you a change history for sudoers, SSH configuration, service units, and key directories. When paired with scheduled scans, it helps you separate expected changes from suspicious ones.
Keep a historical record of findings so you can analyze repeat offenders and trend lines. If the same directory keeps drifting back to risky permissions, the problem is probably process-related, not just technical. This is where operational ownership matters as much as remediation commands.
Integrate audit output into CI/CD, configuration management, or endpoint monitoring workflows. If deployments routinely create unsafe permissions, the pipeline should catch that before production does. Vision Training Systems recommends assigning a named owner for follow-up actions so findings do not die in a shared mailbox.
Key Takeaway
The strongest permission program combines recurring scans, change logging, and clear ownership. Without all three, drift returns quickly.
- Schedule scans with cron or systemd timers.
- Watch high-value files with auditd.
- Track repeats and trends over time.
- Connect findings to change management and deployment workflows.
Conclusion
A structured Linux Permissions Audit reduces risk because it finds problems that simple checks miss: hidden ACL access, unsafe parent directories, overexposed secrets, and SUID or SGID misuse. The best audits are practical. They use Open Source Security Tools to gather evidence quickly, then rely on human review to decide what is actually dangerous.
Secure file permissions are both a technical control and an operational discipline. You need correct mode bits, but you also need ownership discipline, change tracking, and repeatable monitoring. If you only fix what is broken today, the same mistakes will return in the next deployment, patch cycle, or emergency change.
For busy teams, the right approach is straightforward: establish a baseline, scan with find, stat, getfacl, and namei, confirm findings with auditd or Lynis, and then remediate in stages. Keep the process simple enough that it actually gets repeated. That is how Linux Security becomes a habit instead of an after-action report.
Your next step is practical. Run a baseline find scan for world-writable paths and SUID binaries on one critical Linux host, then review /etc, SSH configuration, and cron files for ownership and ACL issues. If your team needs a stronger process for audits, monitoring, and Linux hardening, Vision Training Systems can help you build that capability with focused, job-ready training.