SonarSource: Automatic Code Analysis in the Cloud with AWS

In this video, Olivier Schmit, Solutions Architect at SonarSource, connects with Ross McEwan, Senior Solutions Architect at AWS, to discuss SonarSource and how its development team's efforts have transformed automatic code analysis.

At 0:24, Olivier begins with a brief history of SonarSource, which started 13 years ago when a team of three developers founded the company with the vision of providing code analysis solutions.

Today, the company offers three products: SonarLint, SonarCloud, and SonarQube, which together support 27 programming languages.

At 0:43, Olivier turns to the starting point of the architecture: developers, who write code and push it to a GitHub repository. Each push to the repo triggers an event that is captured by a Lambda function. The Lambda then performs triage and creates an analysis request, which goes straight into an SQS queue.
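The triage step could be sketched as follows. This is a hypothetical illustration, not SonarCloud's actual code: the field names and the `build_analysis_request` helper are assumptions, and the SQS client is injected so the triage logic can be exercised without AWS credentials.

```python
import json

def build_analysis_request(push_event):
    """Extract the fields an analysis worker would need from a GitHub push event.

    The output schema here is an illustrative assumption, not SonarCloud's.
    """
    return {
        "repository": push_event["repository"]["full_name"],
        "clone_url": push_event["repository"]["clone_url"],
        "commit": push_event["after"],
        # "refs/heads/feature/x" -> "feature/x"
        "branch": push_event["ref"].split("/", 2)[-1],
    }

def handler(event, context, sqs_client=None, queue_url=None):
    """Lambda entry point: triage the webhook payload and enqueue the request."""
    push_event = json.loads(event["body"])
    request = build_analysis_request(push_event)
    if sqs_client is not None:  # injected dependency; a real Lambda would pass boto3's SQS client
        sqs_client.send_message(QueueUrl=queue_url,
                                MessageBody=json.dumps(request))
    return {"statusCode": 202, "body": json.dumps(request)}
```

Keeping the message-building pure and the SQS client injectable makes the triage step unit-testable offline.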

Ross asks Olivier why SQS is needed at all, and why the system does not talk directly to Fargate, given that Fargate could do the job of picking up the messages itself.

At 1:22, Olivier gives two reasons why the team put SQS in front of Fargate. First, the queue buffers the load; the aim was not to overwhelm the Fargate cluster. Second, SQS drives scaling: the Fargate cluster's autoscaling is plugged directly into the SQS queue. The cluster polls the queue, and the first step of each job is to clone the code.
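The scaling rule Olivier describes (task count driven by queue depth) could look roughly like the sketch below. The messages-per-task target and the min/max bounds are invented for illustration; the video does not give SonarCloud's actual policy.

```python
import math

def desired_task_count(queue_depth, messages_per_task=10,
                       min_tasks=1, max_tasks=50):
    """Map an SQS backlog to a Fargate task count, clamped to service limits.

    messages_per_task, min_tasks, and max_tasks are illustrative assumptions.
    """
    wanted = math.ceil(queue_depth / messages_per_task)
    return max(min_tasks, min(max_tasks, wanted))
```

In practice this kind of rule is usually expressed as an Application Auto Scaling target-tracking policy on a queue-depth metric rather than hand-rolled, but the arithmetic is the same.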

Their biggest code base is nine gigabytes. Olivier also confirms that the workers run with ephemeral storage only, with no external storage attached.

At 2:11, Olivier explains that once the code is analyzed, the results are pushed to EC2. The EC2 instance receives the report through an API and stores it in an Aurora database. Once the data is in the database, another Fargate cluster, referred to as the compute engine, polls the database to process the reports. This part of the workflow effectively works as a second queuing mechanism.
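The "database as a queue" pattern mentioned here can be sketched as a worker claiming the oldest pending row. The snippet below uses sqlite3 as a stand-in for Aurora, and the table and column names are assumptions for illustration; a real multi-worker setup on Aurora PostgreSQL would typically use `SELECT ... FOR UPDATE SKIP LOCKED` instead.

```python
import sqlite3

def claim_next_report(conn):
    """Claim the oldest pending report and mark it as processing.

    Returns (id, payload) or None if the 'queue' table is empty.
    Not safe for concurrent workers; see the SKIP LOCKED note above.
    """
    cur = conn.execute(
        "SELECT id, payload FROM reports "
        "WHERE status = 'pending' ORDER BY id LIMIT 1")
    row = cur.fetchone()
    if row is None:
        return None
    conn.execute(
        "UPDATE reports SET status = 'processing' WHERE id = ?", (row[0],))
    conn.commit()
    return row
```

Each compute-engine worker would loop on this call, processing one report at a time until the table runs dry.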

At 3:07, Olivier covers the extraction of the reports: behind the scenes, the compute engine calculates code quality metrics, and once the computation is done, the database is updated. The finished analysis report is then ready to be consumed by users.
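As a toy illustration of the kind of metric computed at this stage, consider issue density (issues per thousand lines of code). The formula is a generic example, not one of SonarCloud's actual metric definitions.

```python
def issue_density(issue_count, lines_of_code):
    """Issues per 1,000 lines of code; 0.0 for an empty code base.

    A generic example metric, not SonarCloud's actual definition.
    """
    if lines_of_code == 0:
        return 0.0
    return round(issue_count / lines_of_code * 1000, 2)
```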

It is also worth mentioning that Elasticsearch stores additional data for optimization: the team filters the data into a search-optimized format before indexing it.
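That filtering step could be as simple as projecting each full analysis record down to the handful of fields search actually needs. The field names below are illustrative assumptions, not SonarCloud's actual Elasticsearch mapping.

```python
# Fields assumed to matter for search; a hypothetical whitelist.
SEARCH_FIELDS = ("project", "rule", "severity", "file", "message")

def to_search_document(report_row):
    """Project a full analysis record into a smaller, search-optimized document."""
    return {field: report_row[field]
            for field in SEARCH_FIELDS if field in report_row}
```

Indexing only the whitelisted fields keeps the search index small and the queries fast, while the full records stay in the primary database.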

At 3:59, Olivier adds that users go to the UI, which calls the API to display the results. He also explains the use of auto-scan to analyze up to 150K.

The Final Verdict

Since its inception, SonarCloud has grown to support around 300,000 organizations. As its next big move, the company envisions offering a managed-service solution.

The main focus will be on optimization, including removing the queue from Aurora. From a product perspective, the company plans to support non-compiled languages and expand its reach globally.
