Exploring InfluxCloud using Chronograf

Chronograf is InfluxData’s open source web application and is included in your InfluxCloud subscription. Chronograf is the primary administrative interface for your InfluxCloud subscription. You can also use Chronograf to visualize your monitoring data and easily create alerting and automation rules.

1. Launch Chronograf

To start, click the Launch Chronograf button on the subscription management page.

The following page appears:

Next, click the Login with Auth0 button and you will be presented with a login page (shown below) which prompts you for a username and password.

Use the credentials you provided to create your account during the sign-up process.
These are the same credentials that you use to sign into InfluxCloud here: https://cloud.influxdata.com/

The login credentials requested here are NOT the admin credentials for your InfluxCloud subscription.
The admin credentials are used to connect Chronograf, the Influx CLI, or your custom application to your InfluxCloud instance.
Within Chronograf, they have already been used to configure an InfluxDB Source, which you will see later.

2. Create your first database

Before you write any data to InfluxDB, you need to create a database. A database stores your data. (It also stores information about users, retention policies, and continuous queries but you don’t need to worry about those for now.)

In the left-hand navigation panel, select the crown icon. This opens the administrative panel in the right-hand side of Chronograf. Next, click the Create Database button in the upper right-hand portion of the screen.

Here, you create a database called state_fair_db.

Click the green check mark and you should receive a pop-up indicating the database has been successfully created.
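The same database can also be created programmatically through InfluxDB’s HTTP API, which accepts InfluxQL statements via the q parameter of the /query endpoint. Here is a minimal Python sketch that only builds the request URL; the host name is a placeholder for your own InfluxCloud instance, and no request is actually sent:

```python
from urllib.parse import urlencode

# Placeholder host; substitute your own InfluxCloud instance URL.
host = "https://your-instance.influxcloud.net:8086"

# InfluxDB's HTTP API accepts InfluxQL statements via the q parameter
# of the /query endpoint.
params = urlencode({"q": 'CREATE DATABASE "state_fair_db"'})
url = f"{host}/query?{params}"

print(url)
```

In practice you would POST this URL with your admin credentials; the sketch stops short of sending anything so it can be run anywhere.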

3. Write data to your database

Now that you have a database, it’s time to write some data to it.

Navigate to the Data Explorer within Chronograf. The Data Explorer is found within the left-hand navigation panel, under the third icon down.

Next let’s get familiar with the data you’ll be writing to the database. The chart below shows the number of funnel cakes at two locations (fair 1 and fair 2) over time.

InfluxDB’s data structure is made up of measurements, tags, fields, and timestamps. Translating the data in the chart into InfluxDB’s data structure looks like this:

  • foods, the title of the chart, is your measurement. Think of a measurement as a logical container for your data, similar in concept to a relational database TABLE.
  • fair_id is your tag. It has two values: 1 and 2. Tags store metadata about the data stored in fields. Tags are an optional part of InfluxDB’s data structure.
  • funnel_cakes is your field. It has six different values. Fields store the actual time series data.
  • The dates and times on the x-axis serve as your timestamps. Every data point that you write to InfluxDB will have a timestamp; InfluxDB is a time series database, after all!

Now that you know how your data fits into the InfluxDB structure, it’s time to format your data so that InfluxDB can understand it. The format for writing points to InfluxDB is called Line Protocol. The Line Protocol for the first point in the chart, where fair_id is 1, looks like this:

foods,fair_id=1 funnel_cakes=23 1500267600000000000

The measurement name (foods) comes first, followed by a comma and the tag key-value pair (fair_id=1). After the tag key-value pair, there’s a whitespace, followed by the field key-value pair (funnel_cakes=23). Finally, there’s another whitespace and the nanosecond timestamp (1500267600000000000). The timestamp must be in Unix Epoch time (the time elapsed since 00:00:00 UTC on January 1, 1970). There are a number of handy converters available on the Internet if you want to transform these timestamps into a more human-readable format. The timestamp is optional; if you don’t specify one, InfluxDB assigns the server’s current time (in nanoseconds) to the point.
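As a quick illustration, here is a small Python sketch that assembles the point above from its parts and converts the nanosecond timestamp to a human-readable UTC datetime. The helper name is made up for this example, and it skips the escaping of special characters that a real client would do:

```python
from datetime import datetime, timezone

def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Assemble measurement,tag=... field=... timestamp (Line Protocol).
    Illustrative only: does not escape commas or spaces in values."""
    tag_str = ",".join(f"{k}={v}" for k, v in tags.items())
    field_str = ",".join(f"{k}={v}" for k, v in fields.items())
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

ts_ns = 1500267600000000000
point = to_line_protocol("foods", {"fair_id": 1}, {"funnel_cakes": 23}, ts_ns)
print(point)  # foods,fair_id=1 funnel_cakes=23 1500267600000000000

# Unix epoch nanoseconds -> human-readable UTC time.
when = datetime.fromtimestamp(ts_ns / 1_000_000_000, tz=timezone.utc)
print(when.isoformat())  # 2017-07-17T05:00:00+00:00
```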

Note: Timestamps don’t have to be in nanoseconds when you write them to the database. See Write Syntax for how to write non-nanosecond timestamps to InfluxDB using the HTTP API.
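For example, the same point can be written with a plain seconds timestamp by truncating the nanosecond value and telling the /write endpoint the precision via its precision=s query parameter. A sketch of the conversion:

```python
ts_ns = 1500267600000000000
ts_s = ts_ns // 1_000_000_000  # nanoseconds -> whole seconds
line = f"foods,fair_id=1 funnel_cakes=23 {ts_s}"
print(line)  # foods,fair_id=1 funnel_cakes=23 1500267600
# Written with precision=s, InfluxDB interprets the trailing
# value as seconds rather than nanoseconds.
```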

To write that first point to InfluxCloud, click on the Write Data button which appears at the top of the Data Explorer panel. You should then see a modal dialog which looks like this:

Notice the drop-down list and the selector next to the Write Data to prompt. The drop-down list shows the database where data will be written. The selector switches the data import mode between File Upload and Manual Entry. For this exercise, select Manual Entry.

Copy and paste the following text into the entry field and then click the Write button on the lower right of the window.

foods,fair_id=1 funnel_cakes=23 1500267600000000000

The Write Data modal dialog should disappear, returning you to the Data Explorer. You have successfully written your first data point into InfluxCloud!

Before querying the data, let’s return to the Write Data dialog and add some additional data points. This allows for more interesting query examples to be explored later.

Click the Write Data button, ensure that state_fair_db is selected, and choose Manual Entry. This time, copy and paste all of the data shown below into the entry box and click the Write button on the lower right of the window.

foods,fair_id=1 funnel_cakes=28 1500267960000000000
foods,fair_id=1 funnel_cakes=27 1500268320000000000
foods,fair_id=2 funnel_cakes=14 1500267600000000000
foods,fair_id=2 funnel_cakes=18 1500267960000000000
foods,fair_id=2 funnel_cakes=36 1500268320000000000

Once the data has been written, the dialog will clear and return you once again to the Data Explorer. Now it is time to move on to the next section to see what your data looks like in InfluxCloud!
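Behind the Write Data dialog, a multi-point write is simply the individual Line Protocol entries joined by newlines, which is also the body format InfluxDB’s HTTP /write endpoint accepts. A small sketch:

```python
points = [
    "foods,fair_id=1 funnel_cakes=28 1500267960000000000",
    "foods,fair_id=1 funnel_cakes=27 1500268320000000000",
    "foods,fair_id=2 funnel_cakes=14 1500267600000000000",
    "foods,fair_id=2 funnel_cakes=18 1500267960000000000",
    "foods,fair_id=2 funnel_cakes=36 1500268320000000000",
]
# One write body, one point per line.
body = "\n".join(points)
print(len(body.splitlines()))  # 5
```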

4. Query data in your database

From within the Data Explorer, click the Add a Query button near the center of the screen. This opens a query builder tab. Multiple tabs can be opened by clicking the blue + button next to the Query 1 tab.

As mentioned earlier, InfluxQL is InfluxDB’s SQL-like query language. It’s designed to feel familiar to those coming from other SQL or SQL-like environments while providing features specific to storing and analyzing time series data.

Let’s start out with a simple InfluxQL query to view all fields and tags (*) in the measurement foods.

Ideally, every query submitted through Chronograf includes a discrete WHERE clause. This limits the time range, and therefore the number of points returned from the database, which could otherwise be an astronomical number of values! The small data set you’ve imported doesn’t have that problem, but it’s good practice to limit the query anyway, so we will restrict it to a single month. The state fair data was collected on July 17th, 2017.

So, we are going to use the time range drop-down in the upper right-hand portion of the screen and select July 1st, 2017 00:00 as the starting date and July 31st, 2017 23:30 as the ending date, as shown below. Click Apply once both the starting and ending date/time are selected. When you develop queries, this time range is automatically injected into the InfluxQL statement, as you will see later.

Now, for your first example, simply copy and paste the query below into the text entry field below the Query 1 tab, then either tab out of the field or hit the Enter key. In the results field, switch the selector to table (instead of graph).

SELECT * FROM "state_fair_db"."autogen"."foods"

The output shows the measurement name (foods), the timestamps in local browser time within the time column, the tag (fair_id) and its values, and the field (funnel_cakes) and its values. Every piece of data that you write to InfluxDB will have the time column.
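If you run the same query through InfluxDB’s HTTP API instead of Chronograf, the results come back as JSON containing a columns list and rows of values. Here is a sketch that flattens a response of that shape into dictionaries; the values shown are an abbreviated sample, not a full result:

```python
# Abbreviated sample of InfluxDB's /query JSON response shape.
response = {
    "results": [{
        "series": [{
            "name": "foods",
            "columns": ["time", "fair_id", "funnel_cakes"],
            "values": [
                ["2017-07-17T05:00:00Z", "1", 23],
                ["2017-07-17T05:00:00Z", "2", 14],
            ],
        }]
    }]
}

series = response["results"][0]["series"][0]
# Pair each row of values with the column names.
rows = [dict(zip(series["columns"], row)) for row in series["values"]]
print(rows[0])  # {'time': '2017-07-17T05:00:00Z', 'fair_id': '1', 'funnel_cakes': 23}
```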

Below, you’ll find several queries to explore that will help you develop some insight into what InfluxQL can do for you. Using the query builder, attempt to form the following queries:

  • Select just the field funnel_cakes from the foods measurement:

    SELECT funnel_cakes FROM "state_fair_db"."autogen"."foods" WHERE time > '2017-07-01T07:00:00.916Z' AND time < '2017-08-01T06:30:00.930Z'

    Remember to hit enter or tab out of the field once you’ve pasted the query into the query text box.

  • Calculate the average number of funnel_cakes in the measurement foods using one of InfluxQL’s functions:

    SELECT mean(funnel_cakes) FROM "state_fair_db"."autogen"."foods" WHERE time > '2017-07-01T07:00:00.916Z' AND time < '2017-08-01T06:30:00.930Z'
  • Calculate the average number of funnel_cakes in the measurement foods for each value of fair_id using InfluxQL’s GROUP BY clause:

    SELECT mean(funnel_cakes) FROM "state_fair_db"."autogen"."foods" WHERE time > '2017-07-01T07:00:00.916Z' AND time < '2017-08-01T06:30:00.930Z' GROUP BY fair_id
  • Calculate the sum of funnel_cakes in the measurement foods for every 7d interval of data using InfluxQL’s GROUP BY clause:

    SELECT sum(funnel_cakes) FROM "state_fair_db"."autogen"."foods" WHERE time > '2017-07-01T07:00:00.916Z' AND time < '2017-08-01T06:30:00.930Z' GROUP BY time(7d)
  • Perform a similar query, but use InfluxQL’s INTO functionality to write the results of the query to a new measurement called downsampled_data (note that this query uses a narrower time range and a 12-minute grouping interval):

    SELECT sum(funnel_cakes) INTO downsampled_data FROM "state_fair_db"."autogen"."foods" WHERE time >= '2017-07-17T00:00:00Z' and time <= '2017-07-17T06:00:00Z' GROUP BY time(12m) fill(none)
    SELECT * FROM downsampled_data

    Note: You can make InfluxDB automatically perform this kind of downsampling (that is, periodically performing an aggregation and writing the results of that aggregation to a new measurement) with InfluxDB’s Continuous Queries.
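To build intuition for what sum() with GROUP BY time(12m) computes, here is a Python sketch that reproduces the bucketing by hand on the six points you wrote: each timestamp is assigned to a 12-minute (720-second) window aligned to the Unix epoch, and because the query groups only by time (not by fair_id), the funnel_cakes values in each window are summed across both fairs.

```python
from collections import defaultdict

# (unix seconds, funnel_cakes) for all six points, both fairs.
points = [
    (1500267600, 23), (1500267960, 28), (1500268320, 27),  # fair 1
    (1500267600, 14), (1500267960, 18), (1500268320, 36),  # fair 2
]

WINDOW = 12 * 60  # 12 minutes, matching GROUP BY time(12m)
buckets = defaultdict(int)
for ts, value in points:
    bucket_start = ts - (ts % WINDOW)  # align window to the Unix epoch
    buckets[bucket_start] += value

sums = [buckets[k] for k in sorted(buckets)]
print(sums)  # [83, 63]
```

Those are the same two values you should see in downsampled_data after running the INTO query above.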

You’ve now been introduced to the fundamentals of InfluxDB, its data structure, and its query language. To learn more about InfluxDB, see the InfluxDB documentation. For more on InfluxQL specifically, see the query language documentation.

Where to from here?

Nice job! In this guide you’ve become acquainted with InfluxCloud – and the underlying capabilities powered by InfluxDB and Chronograf. You’ve gotten a taste of InfluxQL and written data to your InfluxCloud instance via Chronograf.

Click here to explore DevOps monitoring with Telegraf:

  • an integrated co-monitoring capability that allows you to monitor the health of your InfluxCloud instance.
  • the most flexible path for ingesting streams of telemetry data from over 100 sources.

Click here to explore how to set up anomaly detection and alerting via Chronograf.

Click here to explore data visualization with the Grafana Add-On.

We hope you’ve been intrigued by these solutions for storing, managing, collecting, and visualizing time series data. We highly recommend taking a look at our full documentation to see all that InfluxData can do for you. If you have any questions about our products, please don’t hesitate to contact our support team at support@influxdata.com.