DevOps is spreading rapidly throughout the world of programming and development. It is predicted that investment in DevOps strategies is going to nearly triple in just five years. 

Considering its impact on a company’s productivity and output, it’s easy to understand why. Companies that adopt a DevOps strategy increase their output by 63%!

With DevOps being so hot at the moment, we see a massive influx of new tools and technologies. We’re going to take a look at Prometheus, a popular tool for Kubernetes. 

Here’s our Prometheus tutorial to help you get up and running and try it out for yourself! 

Prometheus Tutorial

Container technologies are even more widespread than DevOps at this point. A massive 87% of businesses surveyed in 2019 were already using some form of container technology. Of these, Kubernetes is one of the most popular. 

Prometheus is a popular DevOps monitoring platform for Kubernetes. Let’s begin our Prometheus tutorial by learning what the program is and what it does. Then we’ll show you how to get started monitoring Kubernetes with Prometheus yourself. 

What Is Prometheus?

Prometheus is the most popular monitoring solution for Kubernetes. It’s the second project to be launched by the Cloud Native Computing Foundation, which is the same organization responsible for Kubernetes. 

Prometheus takes full advantage of the strengths offered by a cloud-native setup. It monitors several metrics on Kubernetes and stores them as time series data, so the data can be easily searched, scanned, and retrieved. 

Prometheus collects data at pre-set intervals. It can also send alerts when certain thresholds are crossed. 
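As a sketch of how those threshold alerts are defined, here is a hypothetical alerting rule file. The metric name, threshold, and labels are illustrative, not from this tutorial; real rules live in a file referenced from Prometheus’ configuration.

```yaml
groups:
  - name: example-alerts
    rules:
      - alert: HighRequestLatency
        # Fire when the 99th-percentile latency stays above 500ms for 5 minutes.
        expr: rpc_durations_seconds{quantile="0.99"} > 0.5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "99th-percentile RPC latency above 500ms"
```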

How Does Prometheus Work With Your Development Environment?

Prometheus works by collecting data around a series of metrics. That means it needs to work with most of the popular programming and development languages for it to be useful. 

With that in mind, Prometheus has client libraries for a wide range of programming languages. These include: 

  • Go
  • Ruby
  • Java
  • Python

There are libraries in the works for other popular languages, but they’re not yet official. These include Rust, C#, and Node.js. Prometheus can even be configured to monitor short-lived jobs, such as batch processing, using its Pushgateway. 

The preferred way to monitor applications is to have Prometheus scrape metrics from HTTP endpoints. 

The client libraries expose endpoints for a wide range of metrics, but you’ll need to configure Prometheus to scrape them. Otherwise, you run the risk of losing your data if your program should happen to crash. 

Now let’s find out how to use Prometheus. 

Getting Started With Prometheus

One of the easiest ways to experiment with Prometheus and see how it works for you is by running it in Docker. To follow along with this Prometheus tutorial, you’ll need to have Docker installed. For everything else, you can find examples in this GitHub repository.

1. Implement an Application

To begin, we’re going to use this example application written in Go. It simply generates random metrics while the application is running. We won’t dissect every line of code, but we’ll focus on a few key areas. 

We’ll start by looking at the end of the main.go file. Here, you’ll find the `/metrics` endpoint. This is the endpoint that configures the metrics for the Prometheus format using the `promhttp` library. 

http.Handle("/metrics", promhttp.Handler())

Slightly above that, you’ll see the code that generates random metric values while the application is running. 

go func() {
    for {
        v := (rand.NormFloat64() * *normDomain) + *normMean
        rpcDurations.WithLabelValues("normal").Observe(v)
        time.Sleep(time.Duration(75*oscillationFactor()) * time.Millisecond)
    }
}()

This code uses two variables defined above, normDomain and normMean, to shape the distribution of the random values, while oscillationFactor varies how quickly new samples are generated. 

Now take a look at the following lines of code. This defines the name of the metric and offers some useful information about the metrics you’re tracking. 

rpcDurations = prometheus.NewSummaryVec(
    prometheus.SummaryOpts{
        Name:       "rpc_durations_seconds",
        Help:       "RPC latency distributions.",
        Objectives: map[float64]float64{0.5: 0.05, 0.9: 0.01, 0.99: 0.001},
    },
    []string{"service"},
)

2. Compile and Run

Now you’re ready to run your code and see how Prometheus works for yourself. Run the following commands:

go get -d

go build

./random -listen-address=:8080

If you’re running code in Docker, use the following:

docker run --rm -it -p 8080:8080 christianhxc/gorandom:1.0

Now open a window in your web browser and go to the http://localhost:8080/metrics endpoint. You should be able to see all of the custom metrics. 

3. Install Prometheus

There are a few ways you can install Prometheus. First, you can download the binaries for your operating system. Then you can run the executables to launch the application. 

In Docker, you can simply run the following command:

docker run --rm -it -p 9090:9090 prom/prometheus

Now go back to your web browser and navigate to http://localhost:9090/.

4. Customize Prometheus

Remember, Prometheus’ main strength is its ability to monitor whatever metrics you want. So far, you’re just using the default settings. Now you’re going to configure Prometheus to pull data from the /metrics endpoint of the Go application. 

To do this, you’re going to create a prometheus.yml file to specify the scrape targets. Remember to replace the target placeholder with your application’s IP address. If you’re running Docker, don’t use localhost. 


global:
  scrape_interval:     15s # By default, scrape targets every 15 seconds.
  evaluation_interval: 15s # Evaluate rules every 15 seconds.

  # Attach these extra labels to all timeseries collected by this Prometheus instance.
  external_labels:
    monitor: 'scalyr-blog'

rule_files:

scrape_configs:
  - job_name: 'prometheus'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'golang-random'
    # Override the global default and scrape targets from this job every 5 seconds.
    scrape_interval: 5s
    static_configs:
      - targets: ['<your-application-ip>:8080'] # placeholder: your app's IP and port
        labels:
          group: 'production'

The last section tells Prometheus to scrape the application’s metrics every 5 seconds. The resulting time series are then tagged with a group label set to 'production'. You can add additional endpoints and tags as needed. 

Now terminate the command that you used to launch Prometheus. Relaunch the application with the following command, which mounts the prometheus.yml file:

docker run --rm -it -p 9090:9090 -v $(pwd)/prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus

Now refresh your browser or open a new window to make sure that Prometheus is still running. 

5. Explore Metrics Using Prometheus’ UI

Type the following into the ‘Expression’ textbox to get some data to work with.

avg(rate(rpc_durations_seconds_count[5m])) by (job, service)

Hit the Execute button to run the query. 

Select the ‘Graph’ tab to see a graphic visualization of your data. 
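A quick note on that query: rate(rpc_durations_seconds_count[5m]) computes the per-second rate of RPC calls over a trailing 5-minute window, and the avg … by (job, service) averages those rates per job/service pair. A few more expressions you can try against the example app’s summary metric:

```promql
# Per-second RPC call rate over the last 5 minutes, per time series
rate(rpc_durations_seconds_count[5m])

# The 99th-percentile latency tracked by the summary
rpc_durations_seconds{quantile="0.99"}

# Mean RPC latency per service: total seconds spent divided by total calls
  sum by (service) (rate(rpc_durations_seconds_sum[5m]))
/ sum by (service) (rate(rpc_durations_seconds_count[5m]))
```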

That’s all there is to it! You are now up and running with Prometheus!

Once you’re collecting data, you can set alerts for certain conditions. You can also aggregate data in many ways. You can even include aggregating rules in Prometheus’ configuration. 
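As a sketch, those aggregating (recording) rules live in a rule file referenced from the rule_files section of prometheus.yml. The rule name below follows the conventional level:metric:operations pattern and is illustrative:

```yaml
groups:
  - name: example-recording-rules
    rules:
      # Precompute the averaged RPC rate so dashboards can query it cheaply.
      - record: job_service:rpc_durations_seconds_count:avg_rate5m
        expr: avg(rate(rpc_durations_seconds_count[5m])) by (job, service)
```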

Prometheus Use Cases

We’ve shown you how to set up and get started with Prometheus for yourself. So far, it’s just an abstract concept and some lines of code. Let’s take a look at how some companies are using Prometheus in the real world to give you an idea of how Prometheus monitoring can benefit your development.

The financial sector needs trustworthy technology. Northern Trust, a financial services company, chose Prometheus to handle some of its monitoring needs.

While Northern Trust didn’t use Prometheus to monitor its accounts, it chose Prometheus monitoring for its hardware. Instead of a commercial application, the company preferred Prometheus’ extensive customizability and in-depth, granular analytics. 

The company Cloudflare offers a more robust example of Prometheus in action. When your company runs 116 data centers located worldwide, the ability to customize alerts becomes a necessity. 

Cloudflare has 188 Prometheus servers located throughout the globe. They also have four top-level servers that monitor for critical issues, respond to incidents, and perform analytics on the rest of the data. 

Cloudflare uses Prometheus Alertmanager to prevent unnecessary alerts. It also allows them to group alerts and customize names and titles. This helps them keep all of the alerts and data straight throughout their extensive network.
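Grouping like this is configured in Alertmanager’s own configuration file. Here is a minimal sketch, assuming a hypothetical 'ops-team' receiver and a placeholder webhook URL:

```yaml
# alertmanager.yml
route:
  # Batch alerts that share these labels into a single notification.
  group_by: ['alertname', 'datacenter']
  group_wait: 30s      # wait before sending the first notification for a new group
  group_interval: 5m   # wait before notifying about new alerts added to a group
  repeat_interval: 4h  # re-send still-firing alerts at most this often
  receiver: 'ops-team'

receivers:
  - name: 'ops-team'
    webhook_configs:
      - url: 'http://alerts.example.internal/hook'  # placeholder endpoint
```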

Cloud-based systems and applications like those on Kubernetes tend to grow and become more complex. That’s part of the point of why we use them. We need our monitoring solutions to be flexible and customizable enough to keep up with all of the ways we might use our cloud-based systems over time. 

Prometheus is the perfect solution in that regard. It’s endlessly customizable and as detailed as you need it to be, no matter what you’re monitoring. 

Are You Looking For Kubernetes Monitoring?

Kubernetes, like any cloud container application system, can quickly become complicated. It can be a lot to keep tabs on, yet monitoring is essential if you want your application or tool to be successful. Now that you’re more familiar with Kubernetes monitoring after reading our Prometheus tutorial, try out Apica for free to find out how integrating all of your data can empower your organization!