Tenable Nessus in Cloud
After spending some time with the Tenable Nessus agent on ephemeral workloads over the past few weeks, I’ve put together some notes on the challenges of using the tool in that context, namely on AWS EC2 instances in Auto Scaling Groups.
Disclaimer: Any opinions expressed are solely my own and do not express the views or opinions of my employer.
More about Nessus
Tenable Nessus Agent
Nessus is a remote security scanning tool: it scans a computer and raises an alert if it discovers any vulnerabilities that malicious hackers could use to gain access to the computers on your network.
The tool operates as an agent installed on the target operating system and can perform scans of the OS and any installed applications.
Download & Install
Obtaining the correct version of the agent can be a challenge in a CI/CD environment where the process for building your workload is entirely code-based.
Agents are available via the download site but there is no official API for obtaining any particular version of the agent.
It seems the agent list on the website is composed of data pulled from this unofficial API, which has an extensive list of agents but is not well formatted for grouping, sorting, filtering, and so on.
Linking the Agent
Once the agent is installed, the next step is to link it to the SaaS, and this is where the problems begin.
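For reference, linking is done with the nessuscli utility and the linking key from your tenable.io account. A minimal sketch, where the install path, the linking key, and the agent group name are all placeholders to adjust for your environment:

```shell
# Link a freshly installed agent to tenable.io.
# The path below is an assumption (on Linux the agent may also live
# under /opt/nessus_agent); the key and group are placeholders.
NESSUSCLI=/opt/nessus/sbin/nessuscli

if [ -x "$NESSUSCLI" ]; then
  "$NESSUSCLI" agent link \
    --key=YOUR_LINKING_KEY \
    --host=cloud.tenable.com \
    --port=443 \
    --groups="ec2-autoscaling"
fi
```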
How the Agent UUID Breaks Everything
This is the point where everything stops working. Once linked, the agent keeps the same UUID, so if you use this instance to do anything “cloud” like, every replicated agent carries that UUID and will not be able to link with the tenable.io SaaS.
Using Amazon Linux as the example here, there are two ways to unlink the agent, and both have inherent problems.
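Sketched as nessuscli invocations, the two options look like this. The subcommands and the install path are assumptions based on Tenable’s CLI tooling; run one or the other, not both:

```shell
# Install path is an assumption; adjust for your environment.
NESSUSCLI=/opt/nessus/sbin/nessuscli

if [ -x "$NESSUSCLI" ]; then
  # Option 1: erase all settings (including the agent UUID) and unlink.
  "$NESSUSCLI" fix --reset

  # Option 2: unlink the agent only, keeping local settings and the UUID.
  # "$NESSUSCLI" agent unlink
fi
```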
Option 1: Erasing all settings and unlinking the agent
This method is only useful when executed before the server shuts down, and only when you can guarantee that shutdown happens in every case, which is not something Cloud is designed for. We often terminate, destroy, and kill instances, and generally don’t care about them once they are no longer of any use.
Option 2: Unlinking the agent
Also not a great option, for the same reason as Option 1, but additionally, unlinking the agent does not always work reliably. I have experienced cases where the unlink command was issued to the agent but a record still existed in the tenable.io console, which then had to be manually and forcefully removed.
Ideally, the agent should not rely on a fixed UUID; it should work agnostic of the underlying instances and ephemeral workloads.
As a user, I should be able to simply deploy the agent into an auto-scaling environment, an SOE, or a COE, and have all agents automatically link and remain linked when cloned.
The Janky Workaround
You could automate the unlinking as part of the AMI baking process each time you bake your SOE, but you would need to ensure the process was always followed, and that introduces a risk of failure.
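If you do bake the unlink into the image pipeline, it amounts to a final provisioning step before the image is captured. A sketch, where the systemd unit name and the install path are assumptions:

```shell
# Final AMI bake step: stop the agent and unlink it so the image
# ships without a UUID. Service name and path are assumptions.
NESSUSCLI=/opt/nessus/sbin/nessuscli

if [ -x "$NESSUSCLI" ]; then
  systemctl stop nessusagent
  "$NESSUSCLI" agent unlink
fi
```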
Since we cannot trust that the agent will be unlinked before image baking, the nasty workaround is to simply wipe the agent and re-link it on each boot-up. (I called it a janky workaround for a reason!)
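As a sketch, that boot-time wipe-and-relink can live in EC2 user data or a cloud-init script. The subcommand, the service name, and the paths here are assumptions; adjust them for your install:

```shell
#!/bin/sh
# Boot-time wipe-and-relink, e.g. run from EC2 user data or cloud-init.
# Paths, subcommands, and the service name are assumptions.
NESSUSCLI=/opt/nessus/sbin/nessuscli
CONFIG_SRC=/opt/nessus/config.json   # JSON stored outside the agent config dir

if [ -x "$NESSUSCLI" ]; then
  systemctl stop nessusagent
  "$NESSUSCLI" fix --reset                              # wipe settings and the old UUID
  cp "$CONFIG_SRC" /opt/nessus/var/nessus/config.json   # agent consumes (and erases) this on link
  systemctl start nessusagent                           # agent reads the JSON and links fresh
fi
```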
The simplest way to have the agent wiped and relinked is with a JSON file. Here is a sample file, along with detailed instructions on using it and the commands required.
NOTE: Do not store the JSON file at /opt/nessus/var/nessus/config.json, because the agent will erase the config file once linked. Instead, store the file anywhere else (e.g. /opt/nessus/config.json).
The information for forming this JSON file and the command to consume it has been taken from this Tenable help document.
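A sample config.json along the lines of Tenable’s documented format follows; every value here is a placeholder, so check the linked help document for the authoritative schema:

```json
{
  "link": {
    "host": "cloud.tenable.com",
    "port": 443,
    "key": "YOUR_LINKING_KEY",
    "name": "my-autoscaling-agent",
    "groups": ["ec2-autoscaling"]
  }
}
```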