
7 posts tagged with "security"


Patrick DeVivo

In this post we'll show how MergeStat can be used in conjunction with tfsec, a static analysis tool for Terraform, to secure cloud infrastructure 🛡️.

As a reminder, MergeStat accesses git repos, runs an analysis on them, and stores the results in SQL for downstream querying and operationalization:

Banner describing MergeStat

Our integration with tfsec can be used to build a report in Grafana that surfaces Terraform IaC issues and misconfigurations. The end result is a tfsec Grafana dashboard that can be explored and filtered based on what you're looking to target. The dashboard looks like this:

Grafana example screenshot

This dashboard is available as a JSON export here and should be easy to expand on!

The Setup

To begin scanning IaC repos with MergeStat, you'll need a MergeStat instance. You can start locally by following these instructions to get a Docker Compose instance running.

You'll also want to add the ghcr.io/mergestat/sync-scan-tfsec sync to your MergeStat instance:

Add sync screenshot

Once this sync has been scheduled and has run:

Screenshot of the tfsec sync running

You should see a new view available to query: tfsec_repo_issues:

-- returns all issues found by tfsec for a given repo
SELECT * FROM tfsec_repo_issues;

Screenshot of tfsec SQL query

And that's it! You have output from tfsec stored in your PostgreSQL database. Query it from MergeStat, check out the Grafana board, or plug in another BI/SQL tool.
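Once the data is in PostgreSQL, it can also be aggregated directly. For example, a breakdown of issues by severity might look like the following sketch (the severity column name is an assumption; check the tfsec_repo_issues view for the exact schema):

-- count tfsec issues by severity across all repos (column name assumed)
SELECT severity, count(*)
FROM tfsec_repo_issues
GROUP BY severity
ORDER BY count(*) DESC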

Join our Slack

If you found this interesting, hop in our community Slack! We're always happy to chat about MergeStat there 🎉.

Peter Freiberg
Guest Post

This is a guest post by Peter Freiberg, a DevSecOps and application security consultant based in Melbourne, Australia.

I am a consultant who specialises in Application Security and DevSecOps. I've built teams and capabilities for organisations around Penetration Testing (ethical hacking), Secure Code Reviews, DevSecOps, Security Test Automation, Security Development Lifecycle and training.

In this article, we will look at processes and techniques for finding Dockerfiles across git repos and inspecting their image pinning practices. We will use the lens of a DevSecOps persona assessing the current state of our code, using a #knowyourcode approach.

TLDR: Pinning Docker images is good for deterministic builds but needs to be checked regularly for security issues. Query your code for current practices to get a starting point.

We’re going to explore a few key things we can search for using MergeStat:

  • Where are all of our Dockerfiles?
  • Are we using :latest instead of a pinned version?
  • For our Dockerfiles (maybe golden images), when is the last time they were modified?

For the rest of this article, we’re going to refer to “Golden images”, also called gold or base images. By this, we mean an image built for your organisation that handles patching and installing the core libraries and frameworks required by a developer or team, rather than all teams doing it themselves. The idea is that this practice is more time-efficient, reduces redundancy and creates more consistency.

We will also assume that the MergeStat instance we're working with has the relevant git syncs (those populating git_files, git_commits and git_commit_stats) enabled and scheduled for the repos we're looking at.

As a DevSecOps person, we probably want to know where all our Dockerfiles are. Here’s a query to do just that for everything we’ve loaded into MergeStat.

SELECT
repos.repo AS repo, path
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%Dockerfile%'

Here are some results from public repositories that we’ve loaded into our MergeStat instance.

Screenshot of query results showing file paths of anything that looks like a Dockerfile

These results will tell us at least where we have Dockerfiles across our code bases to understand the scope of how many we have and where they are (what repos, what directories). It gives us a starting point for analysis and we can reach out to developers as we find issues or have questions.

From a deterministic build point of view, people tend to use a specific version of an image. So we can query to start understanding what the current practices are around pinning to a specific version. 📌

What are we using? (Are we using our golden images?)

If we know what our golden image naming convention is, we could start by just looking through the FROM statements in Dockerfiles and eyeballing them. The following query shows which images we are currently using across our code bases.

WITH docker_base_images AS (
SELECT
repos.repo AS repo,
path,
(regexp_matches(contents, 'FROM (.*):(.*) AS', 'gm')) AS docker_image
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%Dockerfile%'
)
SELECT
*,
docker_image[1] AS image,
docker_image[2] AS version
FROM docker_base_images

We’re going to get a list that looks like the following:

Screenshot showing query results of Dockerfile image versions

The results tell us that, in this example, images come from Docker Hub and not an organisation-specific registry. The image column would contain something like my.registry.address/repositoryname if a different registry were in use. This may itself be a policy you want to enforce or detect variance from: requiring your company's registry for all production images.
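One caveat: the regex above only matches multi-stage FROM ... AS lines. A variant that also catches single-stage builds might look like the following (an untested sketch; the pattern is an assumption and may need tuning for digests, build args and the like):

-- match FROM lines with or without an "AS stage" suffix
WITH docker_base_images AS (
SELECT
repos.repo AS repo,
path,
(regexp_matches(contents, '^FROM ([^:\s]+):(\S+)', 'gmi')) AS docker_image
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%Dockerfile%'
)
SELECT
*,
docker_image[1] AS image,
docker_image[2] AS version
FROM docker_base_images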

Let’s NOT use ‘latest’ Docker images (or at least be able to find them)

Using the :latest tag can be good from a security patching viewpoint, but may introduce compatibility challenges in libraries and frameworks. The following query searches for images pinned to the :latest tag.

WITH docker_base_images AS (
SELECT
repos.repo AS repo,
path,
(regexp_matches(contents, 'FROM (.*):(.*) AS', 'gm')) AS docker_image
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%Dockerfile%'
)
SELECT
*,
docker_image[1] AS image,
docker_image[2] AS version
FROM docker_base_images
WHERE docker_image[2] = 'latest'

So, we should now have a list that looks like the following.

| repo | path | docker_image | image | version |
| --- | --- | --- | --- | --- |
| https://github.com/Org1/xyz | Dockerfile | ["alpine","latest"] | alpine | latest |
| https://github.com/TeamZ/abc | worker/Dockerfile | ["centos","latest"] | centos | latest |

Keep in mind, “latest” might not be the actual latest; it’s just a tag applied during the build process. It could even be really old if that image is no longer updated. From here, we can reach out to the developer to understand why “latest” is being used.
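To get a sense of scale before those conversations, the same CTE can be aggregated into a per-repo count of :latest usage (a sketch built on the query above):

-- how many "latest"-tagged base images does each repo have?
WITH docker_base_images AS (
SELECT
repos.repo AS repo,
path,
(regexp_matches(contents, 'FROM (.*):(.*) AS', 'gm')) AS docker_image
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%Dockerfile%'
)
SELECT repo, count(*) AS latest_count
FROM docker_base_images
WHERE docker_image[2] = 'latest'
GROUP BY repo
ORDER BY latest_count DESC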

Golden images need to be maintained!

I know this sounds very obvious, but here’s a real-life example of when process and people break down:

My team was an embedded DevSecOps capability in a transformation program. We started noticing that 15+ feature teams were getting vulnerabilities in their images. After a bit of digging, we discovered the issue was the golden image had not been maintained in months. We did get a bit of noise from people saying “can we just suppress the results and release, it’s the golden image and we’re adhering to the pattern”. Turns out the maintainer of the golden image was refocused on other priorities and wasn’t actually maintaining the image.

When was the last time these files were modified or updated?

So, if we are starting a program to analyse our current state, looking at the age (or last modified date) of each Dockerfile is one way to begin.

As of this week, there’s a new helper function in the MergeStat schema, getFilesOlderThan, which does some magic joining of tables to make things easier to query.

SELECT * 
FROM public.getFilesOlderThan('%Dockerfile', 30)
ORDER BY author_when DESC

From here, we can see the author_when column is the last time changes were made to the file, in this scenario, with the path pattern of %Dockerfile.

| repo | file_path | author_when | author_name | author_email | hash |
| --- | --- | --- | --- | --- | --- |
| https://github.com/XYZ/utils | Dockerfile | 2023-03-20T15:12:46.000Z | Bob Smith | bob@XYZ.com | sd8f67z8xvc7 |
| https://github.com/ABC/core | svc/Dockerfile | 2023-03-01T17:29:22.000Z | Alice Sharma | alice@ABC.COM | 98sa7s9fd32 |

From here, we can review the file ourselves, run a checker over the top (or security scan the base image), and if we’re concerned, we’ve at least got a name and email to reach out to.
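If your instance doesn't have the helper yet, a rough equivalent (a sketch, similar in spirit to getFilesOlderThan but returning fewer columns) can be written with explicit joins over the tables used elsewhere in this post:

-- last modification date of each Dockerfile, keeping files untouched for 30+ days
SELECT repo, file_path, max(author_when) AS author_when
FROM git_commit_stats
INNER JOIN git_commits ON git_commit_stats.repo_id = git_commits.repo_id AND git_commit_stats.commit_hash = git_commits.hash
INNER JOIN repos ON git_commit_stats.repo_id = repos.id
WHERE file_path LIKE '%Dockerfile' AND parents < 2
GROUP BY repo, file_path
HAVING max(author_when) < now() - '30 days'::interval
ORDER BY author_when DESC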

Who are you going to call? (when you want to check something)

The above query showed the last author to edit the file. That would be the first attempt at contact if we were concerned. However, if they are not available, who else would you talk to? Here’s a query to get the details of the last five authors (and committers) who’ve modified a given file in a repo:

SELECT
author_email, author_name, author_when, committer_email, committer_name, repo, file_path
FROM git_commits
INNER JOIN repos ON git_commits.repo_id = repos.id
INNER JOIN git_commit_stats ON git_commit_stats.repo_id = git_commits.repo_id AND git_commit_stats.commit_hash = git_commits.hash
WHERE parents < 2
AND repo ILIKE 'https://github.com/OWNER/REPO'
AND file_path ILIKE 'Dockerfile'
ORDER BY author_when DESC
LIMIT 5

The result is a list of the last five people who’ve modified the file in question in the given repository.

We can use code analytics for security and governance

So, depending on your organisation, some of your policies can be queried in a “Governance as Code” fashion, such as:

  • Use pinned versions (e.g. no “latest”)
  • Use the approved registry to build images
  • Make sure our Dockerfiles are checked and updated at least every X days

As we’ve shown above, these can all be queried using MergeStat and SQL. This approach can assist with identifying, remediating and managing risk. The ability to query provides a fast approach to gaining broad visibility.

Parting thoughts

Get started with some analysis and don’t let perfect be the enemy of good. I’ve spoken to a few people and heard comments like “That query will miss edge case XYZ”. Maybe, but you’ve got 80-90% of a good starting point and can iterate on accuracy from there!


Peter Freiberg
Guest Post


Previously, we looked at a few ways we can look at open-source library risk across our code. In this article, we’ll look at how we can search for new known vulnerabilities once details become available.

So, an email, article or colleague disrupts whatever you were doing and proclaims “XYZ library is vulnerable and it’s horrible!”. How do you tell if you’re vulnerable to XYZ vulnerability?

If you can query your code, you can start finding where you might be exploitable. If you’re new to MergeStat, the tldr is that it's a tool to aggregate multiple Git sources, organisations and repos into a literal SQL interface (with some other magic sauce utilities too).

In this case, let’s take a look at Log4Shell. Here’s a rundown of the advisory: https://www.cisa.gov/news-events/cybersecurity-advisories/aa21-356a

Log4J Logo

One of the advisory's first recommended steps is “Identifying assets affected by Log4Shell and other Log4j-related vulnerabilities.”

Now, “assets” will involve a lot of off-the-shelf software, but from a “knowing our code” angle there are a few things we can do.

Our high-level process using MergeStat to identify potential locations is pretty simple:

  1. Find all locations where we might include Log4J. In this example, we’ll look at the Maven package manager (pom.xml)
  2. Search for Log4J (we can get more specific, but let’s start here)
  3. Find the last author or committer who touched the file to assist with assessing and remediation

Find all the maven package manager files:

SELECT repo, path 
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%pom.xml'

We’re going to get something that looks like the following:

| repo | path |
| --- | --- |
| https://github.com/ORG-A/repo1 | service-discovery/blah/pom.xml |
| https://git.internal.xyz/TEAM-B/repoY | pom.xml |
| ... | ... |

That’s pretty straightforward, but what we really want is to find all the Maven files which look like they use Log4J.

SELECT repo, path 
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%pom.xml' AND contents LIKE '%log4j%'

This will produce a similar table of results as above, this time with files that contain Log4J. Now, this is a pretty broad keyword search, but if you take a look at the Log4J documentation it should catch most of the situations where Log4J appears, even if you change the package manager filename (e.g. build.gradle, build.sbt etc.), and even the Clojure build tool if that’s your thing.

Now to find out who can help us determine if we’re actually vulnerable. Hopefully we have a version number in the pom.xml or equivalent build tool. If we have to dive deeper to check deployment status then we probably need to find the last person who touched the package manager file.
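If the broad match above proves too noisy, the search string can be narrowed to the artifact that carries the vulnerable code, log4j-core (a sketch; how your poms declare the dependency is an assumption worth checking):

-- narrow the match to the log4j-core artifact
SELECT repo, path
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%pom.xml' AND contents LIKE '%log4j-core%'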

Here’s a query to find all developers and committers who last modified the maven files which look like they use Log4J:

SELECT DISTINCT ON (repo, path) repo, path, author_name, author_email, author_when, committer_name, committer_email, committer_when
FROM git_files
INNER JOIN git_commit_stats ON git_files.repo_id = git_commit_stats.repo_id AND git_files.path = git_commit_stats.file_path
INNER JOIN git_commits ON git_commit_stats.repo_id = git_commits.repo_id AND git_commit_stats.commit_hash = git_commits.hash
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%pom.xml' AND contents LIKE '%log4j%' AND parents < 2
ORDER BY repo, path, committer_when DESC

Our results should look something like:

| repo | path | author_name | author_email | author_when | committer_name | committer_email | committer_when |
| --- | --- | --- | --- | --- | --- | --- | --- |
| https://github.com/ORG-A/repo1 | service/pom.xml | Bob | bob@xyz.com | 2021-06-01T08:30:36Z | Alice | alice@alice.com | 2021-06-01T08:30:36Z |
| https://int.xyz/TEAM-B/repoY | pom.xml | Josue | josue@aaa.com | 2021-11-01T09:42:36Z | Patrick | pd@there.com | 2021-06-01T08:30:36Z |

The above query has some complexity, but it should be easy(ish) for anyone to modify, changing the path and search strings based on your use case.

If the above query has found people who are no longer in your organisation, then you can follow up by finding whoever was the last to author or commit to the repo:

SELECT *
FROM git_commits
INNER JOIN repos ON git_commits.repo_id = repos.id
WHERE repo = 'https://github.com/YOUR_ORG/YOUR_REPO' -- replace with your repo
ORDER BY committer_when DESC
LIMIT 1

What are the limitations of this process?

This example only covers Maven; it doesn’t include other ecosystems (but could)

The above process and queries could easily be tweaked to query Gradle, the Scala build tool and other JVM-based languages or integrations. You could run your analysis on build.gradle or whatever your JVM package manager file is. As above, take a look at the Log4J and Clojure build tool documentation for some other package managers.

It may be included in another library as a dependency (AKA Transitive Dependency)

This one is a bit trickier. If we don’t have that data extracted and queryable already, we would need to query the broader libraries and check whether they use Log4J. However, we can at least start from a “where are all the maven, gradle, sbt etc. package manager files” inventory and run the analysis from that starting point.
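That starting-point inventory is just a broader version of the path filter used above (the filename list here is illustrative, not exhaustive):

-- find JVM package manager files across all repos
SELECT repo, path
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%pom.xml'
OR path LIKE '%build.gradle'
OR path LIKE '%build.sbt'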

We’re only looking at our code, what about 3rd party and COTS?

The above won’t help with “off the shelf” software that has something like Log4J bundled (e.g. Tomcat, or a commercial application built on top of Tomcat). Most likely, these will be discovered using other vulnerability management tools looking at virtual machines or containers.

This code may not be actually deployed

Is it test code? Proof of Concept? Decommissioned?

Who knows? At least you have a starting point with some names to start asking questions. As above, if you find the last author or committer and ask them, you can rule the issue in or out.

It may be fixed in code, but not released to production

To actually confirm we’re not vulnerable across our organisation and applications, we still need to check the deployed artefact. We have the starting point in code, and can follow up through the developers we found who’ve updated those files.

Next Steps

As you can see, being able to query your code is extremely useful when looking for something which is “known bad”. We can quickly query our code bases to start the initial analysis of our posture. We’ve used Log4J here, but these types of queries could be used for Dockerfiles (e.g. some binary installed through curl that’s compromised, or added packages with vulnerabilities) and other language package managers (e.g. Ruby Gems, Python pip, Go mod etc).


Peter Freiberg
Guest Post


Identifying Open-Source Library Risk Using MergeStat (Part 1)

One topic that arises frequently in my work is understanding what open-source code is in use in an organisation, and where that organisation may have vulnerabilities. Moreover, how do we ensure consistent visibility into our open-source risk profile over time?

Large organisations often have a lot of code, a lot of developers, and a lot of dependencies. This can make it feel like quite a daunting task, and it’s often difficult to know where to begin. The reality, however, is that with a couple of open-source tools, almost every company can easily get accurate information on a great many topics if they know where to look and how to use their data. There’s so much useful data within Git, but not many people are using it today.

MergeStat is one such tool that can help, and here we’ll showcase some queries that can assess open-source library risk and identify the general development landscape across code in an organisation. MergeStat allows you to query all your Git repositories using SQL, to start asking questions like:

  • Where in our organisation do we use maven (pom.xml) or other package managers of interest?
  • I’ve found a security issue in our dependencies: which developer last modified this, so I can collaborate on remediation?
  • What’s our technology and language landscape?

Being able to query code ourselves saves a lot of time: the data is the data; it’s what actually happened, versus surveys, emails and spreadsheets that take weeks to collate and analyse.

I think it’s really useful for AppSec or DevSecOps people to be able to query and analyse code at scale, and then proactively reach out to developers.

We’re going to explore a few use cases for querying an organisation’s code bases, with a little help from automation and some data augmentation, to identify some specific areas of risk.

Show me XYZ open source package manager files, so I can look for known vulnerabilities

In this example, we’re simply going to look for known package manager file types, in this case maven.

-- Find all maven (pom.xml) files across the enterprise 
SELECT repo, path
FROM git_files
INNER JOIN repos ON git_files.repo_id = repos.id
WHERE path LIKE '%pom.xml'

From this generated list of files, we can do our own analysis, or run an SCA tool if your organisation already uses one. Keep in mind that having a security tool for your organisation doesn’t mean everyone is using it (yet).

If we find a potential vulnerability in a file, we can also query who the last author (developer) was and who committed it to the repo, using another query:

SELECT author_email, author_when, committer_email, committer_when
FROM git_commits
JOIN git_commit_stats ON git_commits.hash = git_commit_stats.commit_hash AND git_commits.repo_id = git_commit_stats.repo_id
JOIN repos ON git_commits.repo_id = repos.id
WHERE file_path = 'PATH/TO/THE-FILE.extension' AND
repo = 'https://github.com/YOUR_ORG/REPO'
ORDER BY committer_when DESC
LIMIT 1

While the last developer or committer may not have added the library which has the issue, at least they have looked at the file recently and are in a good position to collaborate with us.

Across all our Git repos, show the most frequently used 3rd party dependencies

One of the first and most fundamental challenges is simply understanding what 3rd party, open-source code a company uses. This is often difficult to aggregate across many code bases and teams.

It can be trivial to access this information in a single repo, but across many (hundreds or even thousands), it can be much more difficult, at least without some automation (or a tool like MergeStat).

MergeStat is able to run open-source SBOM generators on all the Git repositories in an organisation, and store their output in PostgreSQL. In this example, Syft is used to generate and store SBOMs for supported package managers, allowing us to execute a query like this one:

-- Find the most commonly used Syft artifacts across all repos
SELECT COUNT(*) as count, name, version, type
FROM syft_repo_artifacts
GROUP BY name, version, type
ORDER BY 1 DESC

This gives us the most common SBOM artifacts in use across all repositories. This is a great starting point, as it gives us a sense of what languages and tools are in use throughout a company’s code. This is an “out of the box” feature that MergeStat provides.

It also gives us a list of “key” dependencies: the 3rd party libraries that are most commonly relied on throughout an organisation.

Most common SBOM artifacts across all repos
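A useful variant is counting how many distinct repos rely on each artifact, rather than raw occurrences. This sketch assumes the syft_repo_artifacts view carries a repo_id column like the other *_repo_* views shown in this post:

-- number of distinct repos using each artifact (repo_id column assumed)
SELECT count(DISTINCT repo_id) AS repo_count, name, type
FROM syft_repo_artifacts
GROUP BY name, type
ORDER BY repo_count DESC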

Show me the oldest files that contain security vulnerabilities

Similar to how MergeStat can generate an SBOM across many repos, it can run a vulnerability scan using a tool such as Trivy or Grype. Doing so surfaces detected vulnerabilities across the repos in an organisation. Combining this with Git activity lets us filter vulnerabilities by when the related file was last modified, ordering the least recently modified files first (these may be a top priority to address, as they’ve been around for a while).

-- SELECT the top 10 oldest files that contain CRITICAL vulnerabilities
SELECT MAX(committer_when) last_commit_to_file, repo, target, vulnerability_id, vulnerability_pkg_name, vulnerability_installed_version, vulnerability_severity
FROM trivy_repo_vulnerabilities
INNER JOIN git_commit_stats ON trivy_repo_vulnerabilities.repo_id = git_commit_stats.repo_id AND trivy_repo_vulnerabilities.target = git_commit_stats.file_path
INNER JOIN git_commits ON git_commit_stats.repo_id = git_commits.repo_id AND git_commit_stats.commit_hash = git_commits.hash
INNER JOIN repos ON trivy_repo_vulnerabilities.repo_id = repos.id
WHERE vulnerability_severity = 'CRITICAL'
GROUP BY repo, target, vulnerability_id, vulnerability_pkg_name, vulnerability_installed_version, vulnerability_severity
ORDER BY 1 ASC
LIMIT 10

This query uses the Trivy integration to surface the top 10 oldest files (by when they were last modified) that contain a CRITICAL vulnerability. This serves as a starting point to identify high-priority vulnerabilities - most likely the most urgent and longest standing ones. Obviously, this query can be modified to suit your needs but it does illustrate the capability.

The key point is that once you can easily query or find the file types you want, running frequent analysis is easier and more repeatable.

Show me who typically maintains 3rd party dependencies across codebases

It’s one thing to know where vulnerabilities are; it’s another thing entirely to find the right people and process to mitigate them with patches or upgrades. The last person to modify a file is probably a good person to help investigate and fix an issue. An additional query that’s very valuable to me is one that shows the most frequent committers to dependency manifest files in recent history:

-- show authors with most (non-merge) commits that change package.json files in the last year
SELECT count(*), author_name, author_email, file_path, repo FROM git_commits
JOIN git_commit_stats ON git_commits.hash = git_commit_stats.commit_hash
JOIN repos ON git_commits.repo_id = repos.id
WHERE file_path LIKE '%package.json' AND author_when > now() - '1 year'::interval AND parents < 2
GROUP BY author_name, author_email, file_path, repo
ORDER BY count(*) DESC

The query above can be modified easily to:

  • Look at go.mod, requirements.txt, Gemfile, etc instead for different package ecosystems
  • Look at a different time frame - maybe 1 year is too long (lots of engineering churn) and 1 month is a better filter
  • Filter for specific code bases - maybe you’re only concerned with one or two code bases, as that’s where your focus is.
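For instance, the first two tweaks combined might look like this (a sketch; swap in whatever manifest file and window fit your ecosystem):

-- most frequent authors of go.mod changes in the last month
SELECT count(*), author_name, author_email, file_path, repo FROM git_commits
JOIN git_commit_stats ON git_commits.hash = git_commit_stats.commit_hash
JOIN repos ON git_commits.repo_id = repos.id
WHERE file_path LIKE '%go.mod' AND author_when > now() - '1 month'::interval AND parents < 2
GROUP BY author_name, author_email, file_path, repo
ORDER BY count(*) DESC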

What Next?

These queries are a way to produce a high-level survey of the open-source risk profile of code bases across an organisation. Getting data from either direct integration of tools (e.g. Trivy, Grype) or just looking for older package manager files provides a good starting point for a security program.

These queries can be expanded on to answer additional questions:

  • What teams or code bases do a better job of keeping dependencies up to date and vulnerabilities out of main?
  • Are there individuals or teams who typically update dependencies across many code bases in the organisation? If so, are these good people to target for additional questions about security maintenance?

Once you have your git data sources plumbed into MergeStat, you can query as needed, alternating between open-ended discovery and analysing immediate issues.

In Part 2, we’ll cover how to use MergeStat to start identifying your overall exposure to a single vulnerability (such as Log4Shell) that may be thrust upon your day, and answer questions in a similar way. Stay tuned!


Patrick DeVivo

We've been big fans of Renovate, an open-source tool and GitHub app that automates dependency management by opening PRs into your repos. In particular, we've loved:

  • The breadth of supported dependency types, including Dockerfiles, Helm Charts and GitHub Actions
  • PRs are rate limited, so that we're not overwhelmed with code changes to review
  • Respect for conventional commits

Recently, we wanted to know which of our repos had Renovate installed, which is a use case several folks have now shared with us! This post showcases some of the queries we put together around our use of Renovate.

We spun up an instance of MergeStat and began writing some queries 🎉.

Which Codebases Have Renovate Installed?

Get the list of repos that have Renovate installed (by looking for a renovate.json config file in the root of the repo).

-- which repos have a renovate.json file in the root
SELECT repo, path FROM git_files
JOIN repos ON git_files.repo_id = repos.id
WHERE path = 'renovate.json'

Retrieve when the renovate.json file was added to a codebase, by looking at the commit history.

-- how long ago (in days) were renovate.json files added to our repos
SELECT
repo,
max(author_when) AS last_modified,
ROUND(EXTRACT(EPOCH FROM now() - max(author_when))/86400, 2) AS last_modified_days_ago,
min(author_when) AS first_added,
ROUND(EXTRACT(EPOCH FROM now() - min(author_when))/86400, 2) AS first_added_days_ago
FROM git_commit_stats
JOIN git_commits ON git_commit_stats.commit_hash = git_commits.hash
JOIN repos ON git_commit_stats.repo_id = repos.id
WHERE file_path = 'renovate.json'
GROUP BY repo
ORDER BY first_added ASC

Screenshot showing a query of when renovate.json files were added to repos

How Many Renovate PRs Have Been Opened?

-- total count of renovate PRs across repos
SELECT count(*) FROM github_pull_requests
WHERE author_login = 'renovate'
-- count of renovate PRs across repos by state
SELECT state, count(*)
FROM github_pull_requests
WHERE author_login = 'renovate'
GROUP BY state
ORDER BY count(*) DESC

Screenshot of renovate PRs by state

What Files Does Renovate Typically Modify?

When a Renovate PR merges, what files is it typically making changes to?

-- count of modifications made by Renovate to specific files
SELECT file_path, count(*) FROM git_commits
JOIN git_commit_stats ON git_commits.hash = git_commit_stats.commit_hash
JOIN repos ON git_commits.repo_id = repos.id
WHERE author_name = 'renovate[bot]'
GROUP BY file_path
ORDER BY count(*) DESC

Here's example output from our codebases:

| file_path | count |
| --- | --- |
| package-lock.json | 259 |
| package.json | 231 |
| go.mod | 37 |
| go.sum | 36 |
| .nvmrc | 12 |
| .github/workflows/linter.yaml | 6 |
| .github/workflows/lint-test.yaml | 6 |
| renovate.json | 6 |
| .github/workflows/release.yml | 5 |
| .github/workflows/chromatic.yml | 4 |

Show the Cumulative Sum of Merged Renovate PRs

How much has Renovate actually been up to - and how much value are we getting from it over time?

-- calculate the cumulative sum of merged renovate PRs over time
WITH data AS (
SELECT
date_trunc('day', created_at) AS day,
count(1)
FROM github_pull_requests
WHERE author_login = 'renovate' AND state = 'MERGED'
GROUP BY 1
)
SELECT
day,
sum(count) OVER (ORDER BY day ASC rows BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
FROM data

Cumulative sum of merged Renovate PRs over time

How Quickly Are Renovate PRs Merged?

What's the distribution of time-to-merge (measured as the number of days between PR creation and merge) of Renovate PRs?

SELECT
FLOOR((EXTRACT(EPOCH FROM merged_at - created_at)/86400)/5)*5 as bin_floor,
count(*)
FROM github_pull_requests
WHERE author_login = 'renovate' AND state = 'MERGED'
GROUP BY 1
ORDER BY 1

Histogram of renovate time-to-merge
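A summary statistic pairs well with the histogram; this sketch uses the same columns to compute the mean and median time-to-merge:

-- average and median days from PR creation to merge for Renovate PRs
SELECT
avg(EXTRACT(EPOCH FROM merged_at - created_at)/86400) AS avg_days,
percentile_cont(0.5) WITHIN GROUP (ORDER BY EXTRACT(EPOCH FROM merged_at - created_at)/86400) AS median_days
FROM github_pull_requests
WHERE author_login = 'renovate' AND state = 'MERGED'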

Conclusion

We were curious to play around with this data to learn about our use of Renovate and the value we've been getting from it. If you're a Renovate user, hopefully it's interesting to you as well!


Patrick DeVivo

MergeStat’s mission is to make it possible to query anything that’s involved in building and shipping software with SQL. One aspect of building and shipping software is the management of open-source security vulnerabilities in the “supply chain” of dependencies your code uses.

In this post, we’ll explore how MergeStat’s integration with code scanners Grype and Trivy allows us to put together an open-source solution for managing vulnerabilities across many Git repositories.

Setup

To get started, you'll want an instance of MergeStat (check out how to get started locally). Once you've added your Git repos, turn on the Grype and Trivy scanner syncs:

Screenshot showing the Grype and Trivy scanner configs

Runs of these syncs should begin populating the following views:

  • grype_repo_vulnerabilities
  • trivy_repo_vulnerabilities

Each row is a vulnerability detected in a repo by one of the tools.

Now, let's see how we can query these vulnerabilities to get a sense of which repos have the most vulnerabilities, and what the severity makeup looks like.

Total Vulnerability Count by Repo

These queries look at the total counts of vulnerabilities surfaced in each of the codebases in MergeStat.

-- Count of trivy vulnerabilities by repo
SELECT repo, count(*) FROM trivy_repo_vulnerabilities
JOIN repos ON trivy_repo_vulnerabilities.repo_id = repos.id
GROUP BY repo
ORDER BY count(*) DESC
-- Count of grype vulnerabilities by repo
SELECT repo, count(*) FROM grype_repo_vulnerabilities
JOIN repos ON grype_repo_vulnerabilities.repo_id = repos.id
GROUP BY repo
ORDER BY count(*) DESC

Trivy vulnerabilities by repo

Vulnerability Count by Repo & Severity

These queries break out the count of vulnerabilities across codebase and severity.

-- Count of grype vulnerabilities by repo and severity
SELECT repo, severity, count(*) FROM grype_repo_vulnerabilities
JOIN repos ON grype_repo_vulnerabilities.repo_id = repos.id
GROUP BY repo, severity
ORDER BY count(*) DESC
-- Count of trivy vulnerabilities by repo and severity
SELECT repo, vulnerability_severity, count(*) FROM trivy_repo_vulnerabilities
JOIN repos ON trivy_repo_vulnerabilities.repo_id = repos.id
GROUP BY repo, vulnerability_severity
ORDER BY count(*) DESC

Vulnerability count by repo and severity

Vulnerability Count by Type

These queries break down vulnerability counts by "type" (package ecosystem).

note

Interestingly, Grype reports more npm vulnerabilities while Trivy reports more gomod vulnerabilities across our repos.

-- Count of trivy vulnerabilities by type
SELECT type, count(*) FROM trivy_repo_vulnerabilities
GROUP BY type
ORDER BY count(*) DESC
-- Count of grype vulnerabilities by type
SELECT type, count(*) FROM grype_repo_vulnerabilities
GROUP BY type
ORDER BY count(*) DESC

Trivy vulnerabilities by type

Most Common Vulnerabilities

Find which vulnerabilities you're most exposed to by surfacing the most commonly found ones (by CVE ID) in your codebases.

-- Count of trivy vulnerabilities by id
SELECT count(*), vulnerability_id, vulnerability_title, type FROM trivy_repo_vulnerabilities
GROUP BY vulnerability_id, vulnerability_title, type
ORDER BY count(*) DESC
-- Count of grype vulnerabilities by id
SELECT count(*), id, description, type FROM grype_repo_vulnerabilities
GROUP BY id, description, type
ORDER BY count(*) DESC

Additional Exploration

Hopefully these queries and screenshots give a sense of what's possible when it comes to understanding CVEs across your organization using MergeStat SQL. Some additional queries and explorations that could be interesting:

  • Join CVE data with Git activity to order by the recency of commits in code (filter out abandoned codebases where the last commit was over a year ago)
  • Determine if certain teams or projects do a better job at managing CVEs
  • Alert when new CRITICAL vulnerabilities are detected using a direct SQL integration
  • Connect a BI tool directly to the MergeStat PostgreSQL instance to see vulnerability data in a single dashboard
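As a sketch of the alerting idea above, a query like the following could drive a scheduled notification whenever CRITICAL findings appear (column names match the Grype view used earlier; the exact severity label your scanner emits may differ, so treat the `WHERE` value as an assumption to verify against your data):

```sql
-- Sketch: repos with CRITICAL findings, suitable for a scheduled alert
SELECT repo, count(*) AS critical_count
FROM grype_repo_vulnerabilities
JOIN repos ON grype_repo_vulnerabilities.repo_id = repos.id
WHERE severity = 'Critical'
GROUP BY repo
ORDER BY critical_count DESC
```

A cron job or a BI tool's alerting feature could run this on a schedule and notify when `critical_count` is non-zero for any repo.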

Join our Slack

If you find this interesting and want to dig deeper into CVE data with us, hop in our community Slack! We're always happy to chat about MergeStat there 🎉.

note

Today, repo data sources are defined in the MergeStat code. We're working on support for a "bring your own" tool approach that would enable anyone to configure their own scanners or scripts in MergeStat.

Patrick DeVivo

Did you know that GitHub maintains a public database of known CVEs and security advisories for open-source codebases? The database is a public Git repository that holds JSON files in OSV format, partitioned by date. This is the data that's displayed on the github.com/advisories page, which also powers Dependabot alerts!

Since it's just a Git repo, we wanted to take it for a spin with MergeStat to see what we could learn applying some SQL to its contents.

Setup

We followed the getting started instructions to get a new local instance of MergeStat running with docker-compose. Since this is a public GitHub repo (and we're looking at the data brought in by the GIT_FILES syncer), we don't need to configure any authentication. We can bring the repo in directly and sync the file contents:

Screenshot of the MergeStat app having synced the file contents of the GitHub Advisory Database repo

We started by establishing a view over the "raw" file contents of the repo, which extracts some values from the underlying JSON files and parses their file paths:

-- View of all GitHub Advisories in the repo
CREATE OR REPLACE VIEW github_advisories AS (
SELECT
path,
split_part(path, '/', 2) as reviewed,
split_part(path, '/', 3) as year,
split_part(path, '/', 4) as month,
split_part(path, '/', 5) as id,
contents::jsonb->>'summary' as summary,
contents::jsonb->'database_specific'->>'severity' as severity,
a.value->'package'->>'name' as affected_name,
a.value->'package'->>'ecosystem' as affected_ecosystem
FROM git_files LEFT JOIN LATERAL jsonb_array_elements(contents::jsonb->'affected') a ON true
WHERE path LIKE 'advisories/%.json'
)

This allows us to run much cleaner queries, such as:

SELECT * FROM github_advisories

Screenshot of the MergeStat app running a SQL query to list all advisories

without needing to carry over the parsing and JSON logic in every query we run.
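To make the `split_part` parsing concrete, here's how a representative advisory file path (the exact id is a placeholder, assumed for illustration) splits on `/`. Note that PostgreSQL's `split_part` is 1-indexed, while Python's `split` produces a 0-indexed list:

```python
# Representative path layout in the advisory database (id is a placeholder)
path = "advisories/github-reviewed/2023/05/GHSA-xxxx-xxxx-xxxx/GHSA-xxxx-xxxx-xxxx.json"

parts = path.split("/")
# split_part(path, '/', n) in PostgreSQL corresponds to parts[n - 1] here
reviewed, year, month, advisory_id = parts[1], parts[2], parts[3], parts[4]
print(reviewed, year, month, advisory_id)
```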

How many advisories are reviewed vs unreviewed?

SELECT
count(1) as total,
count(1) FILTER (WHERE reviewed = 'github-reviewed') as reviewed,
count(1) FILTER (WHERE reviewed = 'unreviewed') as unreviewed
FROM github_advisories

  • 201,711 total advisories (at time of writing)
  • 19,078 advisories have been reviewed by GitHub (~9% of total)
  • 182,633 remain unreviewed

(Clearly the GitHub team has quite a bit on their plate 😀)

How many advisories exist by ecosystem?

Ecosystem here refers to the package registry the advisory was found in. PyPI has the most and Pub the fewest. Note that we're excluding results where the affected ecosystem is not listed.

SELECT
affected_ecosystem,
count(*)
FROM github_advisories
WHERE affected_ecosystem IS NOT NULL -- where the affected ecosystem is known
GROUP BY affected_ecosystem
ORDER BY count(*) DESC

Chart showing the number of advisories by ecosystem

How many advisories exist by ecosystem and severity?

Severity here is one of CRITICAL, HIGH, MODERATE, and LOW.

SELECT
affected_ecosystem,
severity,
count(*)
FROM github_advisories
WHERE affected_ecosystem IS NOT NULL -- where the affected ecosystem is known
GROUP BY affected_ecosystem, severity
ORDER BY count(*) DESC

Chart showing the number of advisories by ecosystem and severity

Some ecosystems have fewer advisories overall, but a higher percentage of those advisories are CRITICAL.
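That observation can be made precise with a follow-up query (a sketch against the same view, using the `FILTER` clause seen earlier; the severity label matches the values described above):

```sql
-- Sketch: share of CRITICAL advisories per ecosystem
SELECT
affected_ecosystem,
round(100.0 * count(*) FILTER (WHERE severity = 'CRITICAL') / count(*), 1) as pct_critical
FROM github_advisories
WHERE affected_ecosystem IS NOT NULL
GROUP BY affected_ecosystem
ORDER BY pct_critical DESC
```

Sorting by `pct_critical` rather than raw counts surfaces the ecosystems whose advisories skew most severe.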

How many advisories have been published over time?

SELECT
extract(year from published) || '-Q' || extract(quarter from published) as quarter,
count(*)
FROM github_advisories
GROUP by extract(year from published), extract(quarter from published)
ORDER BY extract(year from published) ASC, extract(quarter from published) ASC

Chart showing the number of advisories by publication quarter

Looks like a ton of advisories were published in Q2 2022, making it hard to see overall trends.

What's been the makeup of advisory severity over time?

SELECT
extract(year from published) || '-Q' || extract(quarter from published) as quarter,
severity,
count(*)
FROM github_advisories
GROUP by extract(year from published), extract(quarter from published), severity
ORDER BY extract(year from published) ASC, extract(quarter from published) ASC

Chart showing the number of advisories by publication quarter and severity

Chart showing the number of advisories by publication quarter and severity, 100% stacked With 100% stacking

What's been the makeup of advisory ecosystem over time?

SELECT
extract(year from published) || '-Q' || extract(quarter from published) as quarter,
affected_ecosystem,
count(*)
FROM github_advisories
GROUP by extract(year from published), extract(quarter from published), affected_ecosystem
ORDER BY extract(year from published) ASC, extract(quarter from published) ASC

Chart showing the number of advisories by publication quarter and ecosystem

(Where the ecosystem is known).

What packages have the most advisories?

SELECT affected_name, affected_ecosystem, count(*)
FROM github_advisories
WHERE affected_ecosystem IS NOT NULL
GROUP BY affected_name, affected_ecosystem
ORDER BY count(*) DESC

Chart showing the number of advisories by package

Looks like there's a lot going on in tensorflow!

What packages have the most advisories, by ecosystem?

SELECT affected_name, affected_ecosystem, count(*)
FROM github_advisories
WHERE affected_ecosystem IS NOT NULL AND affected_ecosystem = 'REPLACE_WITH_ECOSYSTEM'
GROUP BY affected_name, affected_ecosystem
ORDER BY count(*) DESC
LIMIT 10

PyPI

Chart showing the number of advisories by package in the PyPI ecosystem

Maven

Chart showing the number of advisories by package in the Maven ecosystem

npm

Chart showing the number of advisories by package in the npm ecosystem

Packagist

Chart showing the number of advisories by package in the Packagist ecosystem

NuGet

Chart showing the number of advisories by package in the NuGet ecosystem

Go

Chart showing the number of advisories by package in Go ecosystem

RubyGems

Chart showing the number of advisories by package in RubyGems ecosystem

crates.io

Chart showing the number of advisories by package in crates.io ecosystem

Next Steps

This has been a high-level view of all the advisories in a specific GitHub-maintained public database. To examine the advisories that impact your organization or team, you'd want to join the data in this repository with information about what packages your code brings in, potentially by using SBOMs or by parsing package manifests.

Tools such as Dependabot already do this for the GitHub ecosystem, and automate dependency maintenance as well. There are, however, other databases and ecosystems emerging that track advisories and vulnerabilities. MergeStat supports some of these tools today, and we're working to make it easy for anyone to bring in the output of their own tools. We believe the easier it is to access data from key aspects of the software supply chain, the more secure and effective engineering organizations can be.

Join our Slack

Our community Slack is a great place to find help and ask questions. We're always happy to chat about MergeStat there 🎉!