As pyTenable nears its 3rd birthday, I’ve started working on a complete rewrite of the codebase. For a number of reasons, the current v1 code has become a monstrosity of tests, repeated code, and assumptions about the APIs that are no longer correct. That’s not to say that it doesn’t work, or even work well for what folks are using it for, just that code maintainability has been a concern of late.
I often need to spin up a Linux host for some quick testing, and the legwork and time needed to get a simple VM up and running is often as time-consuming as installing the software I want to interact with. I needed something quicker to get me bootstrapped, simple enough that I wouldn’t have to learn a whole ton to get going, and repeatable enough that I could write some quick-and-dirty scripts to automate standing something up.
The more integrations I write, the more apparent it becomes that there is no consistency in User-Agent strings for identifying who is making what calls. Setting one is something folks are supposed to do when making API calls, yet most don’t even bother. That creates nothing but issues, because the people managing the application you’re talking to can’t tell the admins who you are or what you’re doing.
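To make this concrete, here is a minimal sketch of setting an identifying User-Agent on an outbound call with Python’s standard library. The integration name, version, and contact details are hypothetical placeholders; the point is the `Product/version (details)` shape that makes the caller identifiable in server logs:

```python
import urllib.request

# Hypothetical integration name, version, and contact info -- substitute
# your own so the receiving admins know who is calling and why.
user_agent = "ExampleIntegration/1.0.0 (ExampleCorp; integrations@example.com)"

# Attach the identifying User-Agent to the request before sending it.
request = urllib.request.Request(
    "https://api.example.com/status",
    headers={"User-Agent": user_agent},
)
```

Without an explicit header, most HTTP clients send a generic default (e.g. `Python-urllib/3.x`), which tells the people on the other end nothing about your integration.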
With all of the work that’s been done on the pyTenable library, I reached a point where I was using pyTenable’s core APISession, APIEndpoint, and APIIterator classes a lot for external work. It seemed only logical to separate these base classes from pyTenable and wrap them up into their own library to act as a framework for folks looking to build their own API libraries. The end result is the new Python RESTfly library, which is focused on providing basic scaffolding to make writing API libraries similar to pyTenable’s easy and effective.
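The pattern those three base classes embody can be sketched in plain Python. This is a simplified illustration of the concepts only, not RESTfly’s actual API; the class names mirror the roles described above, and the HTTP call is stubbed so the sketch is self-contained:

```python
class APISession:
    """Owns connection state and performs the raw calls."""
    _url = "https://api.example.com"  # hypothetical base URL

    def get(self, path, **params):
        # A real session would issue an HTTP GET here; stubbed so the
        # sketch runs without a network.
        return {"url": f"{self._url}/{path}", "params": params}


class APIEndpoint:
    """Groups the calls for one area of the API around a shared session."""
    def __init__(self, api):
        self._api = api


class APIIterator:
    """Lazily walks a paginated collection, one page per request."""
    def __init__(self, api, path, pages):
        self._api = api
        self._path = path
        self._pages = pages
        self._page = 0

    def __iter__(self):
        return self

    def __next__(self):
        if self._page >= self._pages:
            raise StopIteration
        self._page += 1
        return self._api.get(self._path, page=self._page)


# A library built on these classes just composes them:
class Users(APIEndpoint):
    def list(self):
        return APIIterator(self._api, "users", pages=3)


api = APISession()
pages = list(Users(api).list())
```

The division of labor is the useful part: the session knows *how* to talk to the API, endpoints know *what* calls exist, and iterators hide pagination from the caller.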
With most of the folks I know dropping Sublime like a bad habit, I decided to take a look at Visual Studio Code and see what all the hubbub is about. I have to say that after some tweaking I’ve been pleasantly surprised with how well VSCode works. It’s taken a lot less tweaking to get it to a point where I’m happy with it than Sublime Text ever did, and it even has some really nice features for Python out of the box.
The pyTenable library has been rapidly evolving, seeing a lot of expansion and maturation over the last several weeks. Going from version 0.1.0 at the time of the last post to 0.3.3 now, a lot of work has been done to lay the scaffolding for the SecurityCenter package. SecurityCenter (recently re-branded as Tenable.sc) is as large a project as Tenable.io was, if not larger.
After nearly 8 months of on-and-off development (mostly off; day-job work keeps me busy), I’m proud to announce that pyTenable has hit version 0.1.0. While this may not seem like much, it’s been a lot of work to bring it across this line in the journey so far. All of the Tenable.io Vulnerability Management APIs are now pythonized. Further, everything in the tenable_io module has been tested (519 tests!). Tenable has also seen fit to link to pyTenable as an official module for working with our products.
Considering there weren’t any Nessus Network Monitor docker images that I could find, I decided to create one. Using the Nessus Scanner image as a starting point, this image should have most of the common settings parameterized out already. As for sniffing traffic, I’d highly encourage you to take a look at one of the earlier posts covering Docker & packet sniffing. Deploying the sensor should be a simple matter of setting up a volume for the sensor data (for persistence), linking it to a promiscuous interface, and then instantiating it:
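A hypothetical invocation might look like the following. The image name, environment variable, and volume mount point are placeholders for illustration; check the image’s README for the actual parameters it expects:

```shell
# Create a named volume so the sensor's data survives container rebuilds.
docker volume create nnm-data

# Run the sensor on the host network so it can see the promiscuous
# interface; image name and env vars below are hypothetical.
docker run -d --name nnm \
  --net=host \
  -v nnm-data:/opt/nnm/var/nnm \
  -e LINKING_KEY=<your-linking-key> \
  example/nnm
```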
A lot of the Nessus Scanner docker images on Docker Hub don’t appear to properly parameterize many (or in some cases, any) of the inputs required to get the scanner running and connected in an automated fashion. Further, most of the images I’ve seen out there aren’t cleaning up the identifying information the scanner created as part of the install (such as the UUID, the master encryption key, etc.).
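As a sketch of the cleanup step, an image build could remove the generated identity files before the final layer is committed. The paths below are based on a default Linux Nessus install and may differ on your image, so verify them first:

```shell
# Remove install-time identity so each container generates its own.
# Paths assume a default /opt/nessus layout; confirm on your image.
rm -f /opt/nessus/var/nessus/uuid \
      /opt/nessus/var/nessus/master.key
```

Leaving these files baked into the image means every container spun up from it shares the same scanner identity, which defeats the point of linking each one individually.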
With all of the material out there on the web revolving around docker containers, I thought that getting some sort of docker network that containers could promiscuously sniff would be a relatively easy thing to find. I was shocked to find that not only was this not the case, but the general consensus was that you need to either use Docker’s host networking (which means these containers can’t exist in other network namespaces), use pass-through networking (which, unless you have hardware that supports SR-IOV, leaves you out of luck), or resort to some serious host hacking to get the interface into the container.
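One approach worth sketching here is Docker’s macvlan driver, which binds a container network directly to a physical NIC. The parent interface name and subnet below are assumptions specific to your host, and the parent interface still needs to be in promiscuous mode for the container to see other hosts’ traffic:

```shell
# Put the physical interface into promiscuous mode (interface name is
# host-specific; eth0 is an assumption).
ip link set eth0 promisc on

# Create a macvlan network bound to that interface.
docker network create -d macvlan \
  -o parent=eth0 \
  --subnet=192.168.1.0/24 \
  sniff-net

# Attach a container to it with the capabilities packet capture needs.
docker run --rm -it --net=sniff-net \
  --cap-add=NET_ADMIN --cap-add=NET_RAW \
  alpine sh
```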