The Beyondsoft Data & Analytics Team lives and breathes JIRA—Atlassian’s phenomenal issue and task tracking solution. We particularly love the myriad data points it captures during a ticket’s life cycle. Unfortunately, however, we’ve found its reports to be sadly lacking; this deficiency evolved from a nuisance into a major bottleneck during the team’s adoption of Scrum, when we needed to analyze how we were moving our work forward during each sprint.

During a few spare cycles I reviewed JIRA’s API documentation and found that most of the data was accessible; we just needed to put it all together. Our team brings data together for clients all the time using a variety of techniques and tools — from full scale cloud data warehouses to lightweight tabular models to everything in between — but this project seemed to call less for something bullet-proof and more for a quick, iterative approach. We needed data fast to help us improve our team’s effectiveness and answer some key questions:
• What work are we doing?
• Are we focusing on the right priorities?
• Are we neglecting any of our projects?

Enter Microsoft’s Power BI. Over the past few years this application has become our go-to tool for data visualization. Its underlying engine supports rapid prototyping that allows for mashing up data without the need for extensive supporting infrastructure. Could we quickly build a Power BI dashboard for our management team to use in reviewing and prioritizing our dev team’s progress? Spoiler alert: we could, and did. And it was awesome. Power BI surpassed all my expectations, and we used it to build a quick and dirty tool in our spare time. Here’s the story of how this came together (leaving out some details of a few dead ends).

Step 1: Determine if a Power BI service content pack is available

If you’re interested in integrating an external data source into Power BI, one of the first things you should do is check whether that source offers a Power BI service content pack. Conveniently, Atlassian provides one for JIRA; its goal is to provide an instant dashboard for visualizing workloads and velocity.
Connecting our JIRA instance’s data to Power BI was quick and easy, but the canned dashboard wouldn’t work for the particular workflow our team was focused on. It provided a simple way of viewing all tasks, but it didn’t break the work out by project or provide deeper insight into where our team’s workflow was stalling at the sprint level.

Step 2: Explore your APIs

After determining the Power BI service content pack wasn’t going to provide an adequate solution, we dove into an exploration of the JIRA REST API and what it would take to pull together the view we were interested in. Our first step was to determine what data was available, so we began reviewing the Atlassian JIRA API documentation. For those following along at home, we found the following resources particularly useful:
• Atlassian’s tutorial for beginning to explore the API
• Atlassian’s documentation for the Board-getAllBoards endpoint

We determined that the key data streams for our team would be sprints and issues for several of our JIRA boards, and began creating test API calls to start exploring the data:
• Sprint data: https://[instanceName]/rest/agile/1.0/board/[boardID]/sprint
• Issue data (at board level): https://[instanceName]/rest/agile/1.0/board/[boardID]/issue
• Issue data (at sprint level): https://[instanceName]/rest/agile/1.0/sprint/[sprintID]/issue
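Each endpoint can be sanity-checked from Power BI’s Power Query editor before building anything permanent. The sketch below is a minimal “M” query; the instance name and board ID are placeholders, and authentication (e.g., basic credentials or an API token) is configured through Power BI’s data source settings rather than in the query itself.

```
// Minimal sketch: pull the raw sprint JSON for one board.
// "yourcompany.atlassian.net" and board 42 are placeholders.
let
    Source = Json.Document(
        Web.Contents("https://yourcompany.atlassian.net/rest/agile/1.0/board/42/sprint")
    )
in
    Source
```

If the credentials are right, the query pane shows the parsed response as a record, which makes it easy to eyeball the available fields before committing to a data model.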

Step 3: Consume JSON within Power BI

The JSON returned was pretty much as we expected and contained all of the attributes necessary to build the dashboard we wanted to see. The next step was to consume the API data from within Power BI.
Consuming JSON within Power BI is incredibly easy, but it took a few transformations to get the data into the exact form we were looking for. We applied the following transformation steps (raw "M" code included if you're working with the Advanced Editor):
• Converted to Table: #"Converted to Table" = Record.ToTable(Source)
• Transposed Table: #"Transposed Table" = Table.Transpose(#"Converted to Table")
• Promoted Headers: #"Promoted Headers" = Table.PromoteHeaders(#"Transposed Table")
• Expanded values: #"Expanded values" = Table.ExpandListColumn(#"Promoted Headers", "values")
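Chained together in the Advanced Editor, one of these queries ends up looking roughly like the following (the instance name and board ID are placeholders, and the exact fields in the response vary by endpoint):

```
let
    // Placeholders: substitute your own instance and board ID.
    Source = Json.Document(
        Web.Contents("https://yourcompany.atlassian.net/rest/agile/1.0/board/42/issue")
    ),
    // The response is a record (maxResults, startAt, values, ...);
    // flip it into a one-row table whose "values" column holds the list of issues.
    #"Converted to Table" = Record.ToTable(Source),
    #"Transposed Table" = Table.Transpose(#"Converted to Table"),
    #"Promoted Headers" = Table.PromoteHeaders(#"Transposed Table"),
    // Expand the list so we get one row per issue.
    #"Expanded values" = Table.ExpandListColumn(#"Promoted Headers", "values")
in
    #"Expanded values"
```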

We repeated this for each of our three main data sources, resulting in three queries containing our basic set of data:
• Sprints
• Tickets in sprints
• Tickets

At this point we were able to do cleanup of data within Power BI: set necessary data types, remove unnecessary columns, rename columns, etc.
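In "M", that cleanup amounts to a few more steps appended to each query. The column names below are illustrative, not the exact JIRA field names:

```
// Illustrative cleanup steps (column names are examples only).
#"Changed Type" = Table.TransformColumnTypes(#"Expanded values",
    {{"id", Int64.Type}, {"startDate", type datetime}}),
#"Removed Columns" = Table.RemoveColumns(#"Changed Type", {"self"}),
#"Renamed Columns" = Table.RenameColumns(#"Removed Columns", {{"name", "SprintName"}})
```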

Step 4: Knit your data together into a unified model

The data we had was real but broken across several disparate sources, making it unsuitable for the analysis we hoped to perform.
As a helpful intermediary for our slicing and dicing we implemented a basic date dimension using the following DAX and the “New Table” feature in the Data tab:

Date =
ADDCOLUMNS (
    CALENDAR ( DATE ( 2000, 1, 1 ), DATE ( 2025, 12, 31 ) ),
    "DateAsInteger", FORMAT ( [Date], "YYYYMMDD" ),
    "Year", YEAR ( [Date] ),
    "Monthnumber", FORMAT ( [Date], "MM" ),
    "YearMonthnumber", FORMAT ( [Date], "YYYY/MM" ),
    "YearMonthShort", FORMAT ( [Date], "YYYY/mmm" ),
    "MonthNameShort", FORMAT ( [Date], "mmm" ),
    "MonthNameLong", FORMAT ( [Date], "mmmm" ),
    "DayOfWeekNumber", WEEKDAY ( [Date] ),
    "DayOfWeek", FORMAT ( [Date], "dddd" ),
    "DayOfWeekShort", FORMAT ( [Date], "ddd" ),
    "Quarter", "Q" & FORMAT ( [Date], "Q" ),
    "YearQuarter", FORMAT ( [Date], "YYYY" ) & "/Q" & FORMAT ( [Date], "Q" )
)

Once we had implemented the rudimentary date dimension we used that as the common join point for our three data sets.
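With those relationships in place, simple DAX measures slice cleanly across all three tables by date. A hypothetical example (the table and column names here are assumptions for illustration, not our exact schema):

```
Tickets Completed =
CALCULATE (
    COUNTROWS ( Tickets ),          -- count issue rows...
    Tickets[Status] = "Done"        -- ...restricted to completed work
)
```

Dropped onto a visual alongside fields from the Date table, a measure like this breaks down automatically by sprint week, month, or quarter.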

Figure 1: Date dimension used as common join point for three data sets

Step 5: Visualize within Power BI

With a fully realized data model now available, we had the foundation for building a dashboard suitable for our team to view sprint progress and have data to support our post-sprint retrospective conversations.

Conclusion: Power BI is great for web API consumption

Our team found Microsoft’s Power BI to be a phenomenal tool for rapidly implementing a management dashboard connected directly to Atlassian’s JIRA APIs. We didn’t have to build any databases or ETL pipelines, or write any application code. The main limitation of this approach is that no data is stored outside the upstream API sources, so complex warehousing needs (e.g., retaining historical data) might require kludgy workarounds.