Welcome to Deeptracy¶

Welcome to Deeptracy’s documentation. This documentation is divided into two parts. One is the User’s Documentation, which covers installation and usage; the other is the Developer’s Documentation, which covers the source code docs, the local environment, testing, and so on.
Quick start¶
This document is a quick introduction to using the Deeptracy service.
First of all, run the Patton Server¶
1 - Install Docker Compose¶
2 - Create a docker-compose.yml file¶
Download the Docker Compose file from this link, or copy the following code:
version: '3'
services:
  postgres:
    image: postgres:9.6-alpine
    environment:
      - POSTGRES_PASSWORD=postgres
    ports:
      - 5433:5433
    command: -p 5433
  redis:
    image: redis:3-alpine
    ports:
      - 6379:6379
  deeptracy:
    image: bbvalabs/deeptracy
    depends_on:
      - redis
      - postgres
    environment:
      - BROKER_URI=redis://redis:6379
      - DATABASE_URI=postgresql://postgres:postgres@postgres:5433/deeptracy
      - POSTGRES_URI=postgresql://postgres:postgres@postgres:5433
      - SHARED_VOLUME_PATH=/tmp/deeptracy
      - LOCAL_PRIVATE_KEY_FILE=/root/.ssh/id_rsa
      - PATTON_URI=http://0.0.0.0:8000
      # - EMAIL_SMTP_SERVER=xxx.xxx.xxx
      # - EMAIL_SMTP_PORT=xxx
      # - EMAIL_SMTP_USER=xxx@xxx.xxx
      # - EMAIL_SMTP_PASSWORD=xxxxx
      # - EMAIL_FROM=xxx@xxx.xxx
    ports:
      - 8000:8000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp:/tmp
      - ./private_key:/root/.ssh/
    privileged: true
    command: ["./init_patton_db.sh"]
  deeptracy-api:
    image: bbvalabs/deeptracy-api
    depends_on:
      - redis
      - postgres
    ports:
      - 8080:8080
    environment:
      - BROKER_URI=redis://redis:6379
      - DATABASE_URI=postgresql://postgres:postgres@postgres:5433/deeptracy
      - SERVER_ADDRESS=0.0.0.0:8080
      - GUNICORN_WORKERS=1
      - LOG_LEVEL=INFO
    command: ["./wait-for-it.sh", "postgres:5433", "--", "/opt/deeptracy/run.sh"]
3 - Execute the docker-compose file¶
$ docker-compose up
4 - Enjoy Deeptracy with Patton¶
User’s Documentation¶
This documentation is for users who want to use Deeptracy. It covers two parts: Installation and Usage.
Installation¶
Components¶
Deeptracy has four main components:
- Deeptracy Workers: Celery workers that process tasks
- Deeptracy API: the main entry point for actions
- Deeptracy Dashboard: the dashboard that shows visual information about the system
- Plugins: there is a plugin for each scan tool
Each of these components is shipped as a Docker image. You can find them in the public Deeptracy Docker Hub: https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=deeptracy&starCount=0.
Besides the Deeptracy components, the system needs two more things to work:
- Postgres: database used to store projects, scans and so on
- Redis: in-memory data structure store used as the message broker
These two components can be launched as Docker containers, but you can also install them without Docker.
Deeptracy Workers¶
Workers are Celery processes. You can launch any number of workers on the same host. As they are Celery workers connected to a broker (Redis), they will pick up tasks to even out the workload.
One of the tasks performed by the workers is cloning repositories. For this, you need to mount the same volume from the host in each worker; this is where the repositories will be cloned. This volume (SHARED_VOLUME_PATH) is also mounted in the various containers that the worker uses to perform distinct tasks.
Warning
Because the repository to scan is only downloaded once, you can’t have workers on different hosts: the source code for the project is only present on the host that performed the task to download it.
The workers perform almost all of their tasks inside Docker containers. The worker image has Docker installed, but you can mount the Docker socket from the host into the worker containers, so the Docker daemon on the host is used.
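A minimal sketch of running a single worker this way, with image name, volumes and flags taken from the Docker Compose examples in this documentation (adjust the environment variables to your setup; the broker hostname here assumes a reachable redis host):
$ docker run -d \
    --privileged \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /tmp:/tmp \
    -e BROKER_URI=redis://redis:6379 \
    -e SHARED_VOLUME_PATH=/tmp/deeptracy \
    bbvalabs/deeptracy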
Environment Variables¶
These are the environment variables needed by the workers:
- BROKER_URI: URL of the Redis broker (Ex. redis://127.0.0.1:6379)
- DATABASE_URI: URL of the Postgres database (Ex. postgresql://postgres:postgres@127.0.0.1:5433/deeptracy)
- PATTON_URI: URL of the Patton server (Ex. http://localhost:8000)
- SHARED_VOLUME_PATH: path on the host to mount as a volume in Docker images; this folder is used to clone the projects to be scanned (Ex. /tmp/deeptracy)
- LOCAL_PRIVATE_KEY_FILE: if you want to clone private repositories, you can specify a private key file to be used when cloning such repos
- LOG_LEVEL: the log level for the application (Ex. INFO)
Docker Compose Example¶
deeptracy:
  image: bbvalabs/deeptracy
  depends_on:
    - redis
    - postgres
  environment:
    - BROKER_URI=redis://redis:6379
    - DATABASE_URI=postgresql://postgres:postgres@postgres:5433/deeptracy
    - SHARED_VOLUME_PATH=/tmp/deeptracy
    - LOCAL_PRIVATE_KEY_FILE=/tmp/id_rsa
    - PLUGINS_LOCATION=/opt/deeptracy/plugins
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - /tmp:/tmp
  privileged: true
  command: ["./wait-for-it.sh", "postgres:5433", "--", "/opt/deeptracy/run.sh"]
Deeptracy API¶
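The API container is configured through environment variables, as shown in the Docker Compose examples in this documentation. As a minimal sketch, it can also be started on its own with docker run, assuming Redis and Postgres are reachable under the redis and postgres hostnames:
$ docker run -d \
    -p 8080:8080 \
    -e BROKER_URI=redis://redis:6379 \
    -e DATABASE_URI=postgresql://postgres:postgres@postgres:5433/deeptracy \
    -e SERVER_ADDRESS=0.0.0.0:8080 \
    -e GUNICORN_WORKERS=1 \
    -e LOG_LEVEL=INFO \
    bbvalabs/deeptracy-api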
Deeptracy Dashboard¶
Bringing up the environment¶
As all the pieces are shipped as Docker containers, it is easy to bring up an environment. You can find an example with code to launch Deeptracy on a single AWS instance in the deploy folder.
This is an example of a Docker Compose file that launches a complete working environment.
version: '3'
services:
  deeptracy:
    image: bbvalabs/deeptracy
    depends_on:
      - redis
      - postgres
    environment:
      - BROKER_URI=redis://redis:6379
      - DATABASE_URI=postgresql://postgres:postgres@postgres:5433/deeptracy
      - SHARED_VOLUME_PATH=/tmp/deeptracy
      - LOCAL_PRIVATE_KEY_FILE=/tmp/id_rsa
      - PLUGINS_LOCATION=/opt/deeptracy/plugins
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /tmp:/tmp
    privileged: true
    command: ["./wait-for-it.sh", "postgres:5433", "--", "/opt/deeptracy/run.sh"]
  deeptracy-api:
    image: bbvalabs/deeptracy-api
    depends_on:
      - redis
      - postgres
    ports:
      - 80:8080
    environment:
      - BROKER_URI=redis://redis:6379
      - DATABASE_URI=postgresql://postgres:postgres@postgres:5433/deeptracy
      - SERVER_ADDRESS=0.0.0.0:8080
      - GUNICORN_WORKERS=1
    command: ["./wait-for-it.sh", "postgres:5433", "--", "/opt/deeptracy/run.sh"]
  postgres:
    image: postgres:9.6-alpine
    ports:
      - 5433:5433
    environment:
      - POSTGRES_PASSWORD=postgres
    command: -p 5433
  redis:
    image: redis:3-alpine
    ports:
      - 6379:6379
This Docker Compose file will bring up an environment with a single worker and the API listening on port 80 of the host.
Usage¶
This section explains how to use Deeptracy. Once installed, Deeptracy can be used as a service: a public API is exposed and all functionality is available through it.
Create Projects¶
Projects are the main object in the API. A project represents a single repository that you want to scan and monitor for vulnerabilities. You can’t have more than one project with the same repository in the database.
To create projects you need to invoke the Create Projects endpoint before launching any scan.
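With curl this might look as follows. Note that the exact path and payload fields are assumptions for illustration only; check the API reference for the real contract:
# hypothetical endpoint and payload
$ curl -X POST http://localhost:8080/api/1/projects/ \
    -H "Content-Type: application/json" \
    -d '{"repo": "https://github.com/your-org/your-repo.git"}'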
Launch Scans¶
Every time a scan is launched, Deeptracy will check the project’s dependencies. If the dependencies have changed since the last scan performed, the scan will begin.
A scan is performed by cloning the project repository and running different plugins against the source code. You can launch scans manually by calling the Create Scan endpoint or by Configuring a hook for your project.
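A manual launch might look like this sketch (the path and the project_id field are hypothetical):
# hypothetical endpoint and payload
$ curl -X POST http://localhost:8080/api/1/scans/ \
    -H "Content-Type: application/json" \
    -d '{"project_id": "<your-project-id>"}'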
Spot Vulnerabilities¶
Every scan will run N analyzers (one for each plugin available in the system) and save the vulnerabilities found to the database. Once all analyzers are done, all vulnerabilities are merged together and saved as a final vulnerability list.
You can access individual analyzer results with the Get Analyzer Vulnerabilities endpoint, or the final scan list with the Get Scan Vulnerabilities endpoint.
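For instance, with curl (both paths are hypothetical and shown only to illustrate the flow):
# results of a single analyzer (hypothetical path)
$ curl http://localhost:8080/api/1/analyzers/<analyzer-id>/vulnerabilities
# final merged list for a scan (hypothetical path)
$ curl http://localhost:8080/api/1/scans/<scan-id>/vulnerabilities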
Get Notified¶
Every time a scan finishes, if your project has the information needed to receive notifications, you will receive one with the spotted vulnerabilities.
Configuring a hook for your project¶
You can configure a hook in your repository, so every time a push is detected a scan will automatically be launched for your project. The URL for the hook is {host}/api/1/webhook/
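You would normally register this URL in your git provider’s webhook settings, but a push event can also be simulated by hand. A sketch, assuming a JSON payload (the payload shown here is hypothetical; real git providers send their own format):
# hypothetical payload; real git providers send their own format
$ curl -X POST http://localhost:8080/api/1/webhook/ \
    -H "Content-Type: application/json" \
    -d '{"repository": {"url": "https://github.com/your-org/your-repo.git"}}'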
Developer’s Documentation¶
This documentation is for developers who want to contribute to Deeptracy.
Installation¶
Python Version¶
We recommend using the latest version of Python 3. Deeptracy supports Python 3.6 and newer.
Deeptracy Projects¶
Deeptracy has four repositories, one for each of its components:
- Workers: the main repository, with Celery tasks and plugins
- Api: holds the Flask API
- Dashboard: has the front-end web
- Core: shared library between the workers and api projects, with data access components and plugin helpers
For development, it is recommended that you clone each repository under the same work dir:
- deeptracy-project
|- deeptracy
|- deeptracy-api
|- deeptracy-core
|- deeptracy-dashboard
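A sketch of this setup, assuming the repositories live under the BBVA organization on GitHub (adjust the URLs to wherever your forks live):
$ mkdir deeptracy-project && cd deeptracy-project
$ git clone https://github.com/BBVA/deeptracy.git
$ git clone https://github.com/BBVA/deeptracy-api.git
$ git clone https://github.com/BBVA/deeptracy-core.git
$ git clone https://github.com/BBVA/deeptracy-dashboard.git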
Virtual environments¶
It is highly recommended to work with a single virtual environment for all the projects, by creating a single environment at the same level as the rest of the projects:
- deeptracy-project
|- deeptracy
|- deeptracy-api
|- deeptracy-core
|- deeptracy-dashboard
|- .venv
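For example, with the standard venv module:
$ cd deeptracy-project
$ python3 -m venv .venv
$ source .venv/bin/activate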
Deeptracy Core¶
Deeptracy Core is a shared library that has common functionality used in the rest of the projects. When developing, it is recommended to install it in your virtualenv in editable mode:
$ cd deeptracy-core
$ pip install -e .
This will instruct distutils to set up the core project in development mode.
Deeptracy Workers¶
This project is a Celery project. You can install it with:
$ cd deeptracy
$ make install-requirements_dev
Deeptracy API¶
This project is a Flask project. You can install it with:
$ cd deeptracy-api
$ make install-requirements_dev
Dependencies¶
These distributions will be installed automatically when installing Deeptracy.
- Celery: an asynchronous task queue/job queue based on distributed message passing
- Redis: in-memory data structure store, used as the message broker in Celery
- Psycopg: PostgreSQL database adapter for Python
- Pluginbase: for plugin management
- Docker: most tasks are executed inside Docker containers
- PyYAML: to parse yml files
Usage¶
Makefiles & Dotenv¶
To standardize tasks among repositories, each repository has a Makefile that can be used to perform common tasks.
By executing make in the root of each project you can get a detailed list of tasks that can be performed.
When executing tasks with make, we also provide a dotenv mechanism to have local environment variables for each project. So, the first time you perform any make task, you will be prompted for the required environment variables for that project.
Keep in mind that you can always change the local environment for a project by editing the .env file generated in the project root folder.
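As an illustration, a generated .env for the workers project might contain values like these (variable names taken from the worker documentation above; the values are just local-development examples):
BROKER_URI=redis://127.0.0.1:6379
DATABASE_URI=postgresql://postgres:postgres@127.0.0.1:5433/deeptracy
SHARED_VOLUME_PATH=/tmp/deeptracy
LOG_LEVEL=DEBUG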
This is a sample of common tasks that can be performed with make:
$ make
clean remove all build, test, coverage and Python artifacts
test run tests quickly with py.test
test-all run tests on every python version with tox
lint check style with flake8
coverage check code coverage
docs generate and show documentation
run launch the application
at_local run acceptance tests without environment. You need to start your own environment (for dev)
at_only run acceptance tests without environment, and just features marked as @only (for dev)
at run acceptance tests in complete docker environment
Local environment¶
You can have a fully functional local environment to do integration or acceptance tests. In the workers and API projects you can find a docker-compose.yml file that will launch a postgres and a redis container:
$ cd deeptracy
$ docker-compose up
Once the database and the broker are in place, you can launch each project by issuing a make run on each of them.
Development flow¶
You should be writing unit tests to cover the new features. When you are working in deeptracy or in deeptracy-api, it is likely you will also need to work in deeptracy-core. If you installed the core in editable mode as described in Deeptracy Core, you will see the changes to the core from the other projects as soon as they are made.
Once the new feature is covered and tested with unit tests, you can launch a Local environment and run the acceptance tests in it with make at_local.
Testing¶
Unit Tests¶
For development it is recommended to use unit tests to speed up the process (you don’t need a full environment), and to run acceptance and integration tests only when the feature is ready and tested with unit tests.
Warning
Pipelines check that the test coverage stays above a minimum of code covered, so lowering the percentage of lines of code covered by unit tests is not an option. You can check your code coverage with make coverage.
Acceptance Tests¶
Code Coverage¶
Source Code Docs¶
deeptracy package¶
Subpackages¶
deeptracy.notifications package¶
Submodules¶
deeptracy.notifications.slack_webhook_post module¶
Detailed documentation of Slack Incoming Webhooks: https://api.slack.com/incoming-webhooks
Module contents¶
deeptracy.tasks package¶
Submodules¶
deeptracy.tasks.base_task module¶
This module contains the base class for all Celery tasks in Deeptracy, and other common classes used in all tasks.
deeptracy.tasks.notify_results module¶
deeptracy.tasks.prepare_scan module¶
deeptracy.tasks.scan_deps module¶
deeptracy.tasks.notify_patton_deltas module¶
Module contents¶
Submodules¶
deeptracy.celery module¶
deeptracy.config module¶
Module contents¶
Deeptracy Workers Project.
This package contains the Celery workers and tasks that process the Deeptracy flow for scanning projects.