Posted in Uncategorized

Improving #IronViz, part 2: Suggestions, Recommendations, Ideas

First, some ground rules for these discussions:

  1. Don’t blame women and POC for their lack of representation. If there aren’t enough submissions from women and POC, assume the problem is with the competition design and in aspects of the community, not their rate of submission. Look at the competition and community pipeline to see how it’s exclusionary, rather than (or in tandem with) working on submission rates.
  2. Don’t assume data is neutral. Information is socially-constructed, subjective, and fallible – and more information does not necessarily lead to more knowledge.
  3. Don’t assume the judges/rules/best practices are infallible/objective/correct/neutral. Tufte did his research on mostly male, white U.S. college students; the field of visualization and its best practices grew out of that research. Many people are problematizing conventional wisdom (about pie charts, bar charts, and data cleaning, among others). Not that we shouldn’t value clarity, precision, and standardization, but other values can be just as important, even if they are not explicitly encoded into Tableau.

Second, suggestions.

In a recent Twitter thread, someone shared a link to 10 Ways to Increase Girls’ Participation in Computing Competitions. This was largely drafted before reading that list, but I think many of these ideas are in line with that advice. Also, this is not intended to be comprehensive – nor are all of these necessarily good ideas – but I think it is helpful to have something concrete to react to and adapt.

Small scale (working within the system)

  • Provide a single, clear place for people to find out about announcements – done!
  • Announce timing of contests in advance so that people with less flexible jobs or family/personal commitments are able to make arrangements in advance if they like
  • Consider expanding the Iron Viz submission window for similar reasons
  • Provide a description/template for optional blog posts about the contest (or even a round-up of exemplars). Also make it clearer how or if these inform the judging.
  • Provide a longer explication of judging criteria. This could be done as a longer write-up of the winning feeder entries and finalists (why and how the winning submissions excelled in each area; what about the viz was exciting, innovative, and novel. Don’t let the viz speak for itself!) or a more robust description of each category and how the judges evaluate the entries.
  • Announce runners-up!

Medium scale (questioning the system)

  • Provide prizes for runners-up and ways to win beyond achieving the #1 spot. This could be a “best in class” along the various criteria (top scorers in “design,” “storytelling,” and “analysis”) or ad hoc awards, like “most technically impressive,” “most creative,” or “something we’ve never seen before.” With “best newbie” replacing community votes, people who have entered before are in an all-or-nothing competition that takes a lot of time and effort. Additional recognition will also have the benefit of improving community understanding of what that category means and what exemplary work looks like.
  • Host smaller-stakes competitions throughout the year. Makeover Monday gives opportunities for public review and improvement every week, but there is no “winner” (…aside from perfect attendance, maybe!)
  • Offer additional opportunities for community members to reach the Tableau conference without winning the highest-stakes competition or investing over $1,000 in registration fees. This was done in 2016 with the Tableau 10 Olympic Torch competition but I don’t believe anything similar happened in 2017.
  • Provide feedback to participants on their IronViz entries. What were some of the judges’ comments? What were some areas for improvement? This could help strengthen the talent pipeline and ensure people feel seen, even if their work doesn’t make a big splash publicly.
  • Have a community-based discussion of judging criteria. Reach out particularly to participants in underrepresented groups. Would they change anything? Would the community want to add additional criteria, like promoting social equity?
  • Maybe try blind entries for public comment until the results are announced. This might backfire spectacularly, as I think a lot of people do enter for public recognition, but do any entries get more public attention without recognizable names and logos?
  • Build in other best practices (open access, copyright, reproducibility) to the contest rules:
    • No graphics, icons, or fonts for which you don’t have permission
    • Use publicly-accessible datasets or make the full dataset available for download
    • Make workbook available to download
  • Make people who are already great stretch themselves and tap into knowledge and forms of creativity other community members may already have but don’t get to show off. Have a way for community members to suggest a theme (email address, box submission, etc.)
    • Improvement/revision based contest (e.g., re-do an old viz)
    • Partnership-based contest (sense of community) not one person:
      • Build a viz in collaboration with another designer
      • Build a viz in collaboration with the person doing the data collection / someone from the community being studied / an expert in that field
      • Build a viz in collaboration with a partner organization (e.g., Data Science for Social Good, Viz for Social Good, volunteer organizations, Tableau Foundation partners) Give those partners a say in the judging/evaluation criteria!
    • Rather than a specific topic as the theme, consider offering challenges (similar to the Mobile contest):
      • Build a viz using only text data
      • Viz about a single person – not yourself!
      • Use a dataset that challenges you
      • Build a viz with no bar graphs
      • Build a viz only with bar graphs
      • Build a viz with no graphics or images
      • Make a viz that is accessible to someone who is visually impaired. Go beyond a color blind palette! Make it comprehensible to someone using keyboard navigation or someone totally blind. Invite someone with colorblindness or vision impairment into the judging process

Large scale (changing the system)

  • Invite a member of the community to the judging panels. Better still, invite community members of underrepresented groups to all judging panels and pay them for their time 🙂
  • Have a feeder competition helmed and judged by an all-community member panel
  • Consider an IronViz Invitational, or a preliminary/qualifying competition so people know there has been some amount of selection before they are asked to invest a substantial amount of time
  • After the deadline, maybe create a mentorship window where participants are given a week to make changes based on feedback from a matched Tableau employee/Iron Viz champion/Iron Viz contestant/Zen master, etc. Or maybe create a different mentorship pipeline people can opt in or out of
  • Consider funding a position for a diversity fellow: give them money to mentor underrepresented designers, listen to and learn about their challenges and contexts, make recommendations for changes based on those experiences
  • Allow participants to provide their own criteria for evaluation. What do they want to be judged on? Aesthetics? Creativity? Emotional response? This might allow people to set and point out personal goals in line with the contest

Would love to see other ideas – or to hear any thoughts on the ones above!


Improving #IronViz, part 1: Why I, a woman, did not participate in Iron Viz 2017

You know that joke about a man trying to solve a problem while the woman just wants him to listen?

I’m seeing a lot of discussion about the representation of women in #IronViz. Setting aside the socio-cultural context for under-representation, as a woman who has participated – and not participated! – in the competition, part of me just wants someone to…ask…

So, since the hashtag is #WeAreDataPeople, not #WeAreQuantitativeDataPeople, I’ve decided to start us off with a dataset of n=1. Please feel free to steal the questions and add your own answers. Maybe with enough interviews we can cycle back around to quantifying some of this into a Tableau dashboard (insert autoethnoGRAPHy pun) and making some recommendations and changes.

1. Tell me about yourself. What is your professional background? When did you participate in Iron Viz?

I participated in two Iron Viz feeder competitions in 2016 (entry 1 and entry 2). I was working as an Assessment Research Analyst when I participated; I am now a Data Visualization & Analysis Librarian. I have a master’s degree in information and library science and I studied data visualization in grad school. I got an English degree as an undergrad and I tend to see my data work as an extension of that degree – interpreting and making sense of content/information – rather than deviating from it. As a result, I’m pretty interested in data ethics, critical theory, data and visualization literacy, and data humanism.

2. Is Tableau a part of your job/professional identity?

Definitely. I don’t necessarily work with Tableau every day, but I work with data collection and analysis every day. The Tableau work tends to come in waves – I’ll not open the program at all for a few weeks, then I’ll have a project where I’m in Tableau all day every day until the final product is done.

3. How did you find out about Iron Viz?

I remember finding out about the Iron Viz competitions in grad school and was eager to participate, but I think I had just missed the last feeder window that year. I made a mental note to look it up the next year and remember being absolutely furious when I saw the results of the Iron Viz Food competition – not for anything related to the contest, just because I had followed Tableau in like six different ways (through work and through my personal social media accounts) and still managed to miss the announcement. I followed them in a few more places (I remember grumpily adding separate subscriptions for the Tableau Public blog and the Tableau Desktop blog on my RSS feed) and finally managed to see the announcements in advance of the second contest that year. I was really pleased when they finally offered a mailing list announcement just for Iron Viz announcements.

4. Did you have any reservations about participating in Iron Viz?

Not about the competition itself – I wasn’t nervous about submitting my work for feedback or competing against others. I remember thinking that I was more excited about the second place prize (at the time, $500) than the first place prize, but that was probably the broke grad student talking. I think it also gave me a blanket of plausible deniability in the off-chance I didn’t win 😉

After the theme of the first competition was announced (Politics) I was definitely nervous, though. My viz was certainly not coming from a neutral perspective, and if there was a chance it would go viral (as the occasional winning/highly regarded Tableau viz does), I was worried about the possibility of backlash (e.g., getting doxxed or harassed online). I didn’t let it stop me from submitting, but I still don’t have my name associated with Twitter or my blog in part because that content – and anxiety – is still there.

5. Talk me through your favorite submission to Iron Viz. What did you like about it? Why?

My favorite (out of two :P) is my Politics entry. The original one is here, the revision is here, and the blog post (with several preambles and justifications) is here. I enjoy exploring subjectivity and rhetoric through visualization, and for this project I wanted to focus on how we create and interpret meaning and messages in a dataset.

I quantified deeply subjective categories (“consistency”) into a single value, mapped it on a grid, and then hid the value such that the only way to find the right answer was to click around until you found it. The idea was that this would force the user to make meaning out of the slight differences they could see in the shifting central visualization (the blurring/distorting of candidate faces) and, presumably, reflect on the gap between their own understanding of the candidates and “reality” (as interpreted/visualized by the author). Because I am also a librarian, I wanted the supporting information (the “raw data” of the candidate statements and the coding of each organization) to be available below so that they could drill down into individual statements to see how much they agreed/disagreed with my breakdown.

I think Tableau can lead people to think about data and analysis in specific ways, and I was excited to show how it might be useful in other contexts (like digital humanities) and start a conversation about squishier concepts, like how we interpret data and our emotional responses to data.

I’ve thought a lot about changes I could have made to this (admittedly highly conceptual) project. It could definitely be clearer—and cleaner—but it still remains the project I’m proudest of (…so far) and the only one where I felt like values were intentionally and deliberately encoded into the project. So…that’s the one!

6. What else do you remember about participating in Iron Viz?

How much time it took! I took at least a day and a half off work for my first entry, and I know I ditched some weekend plans last minute for the second. I had joined Twitter shortly before I entered, so the other thing that stands out was that I wasn’t particularly well known in the community and it felt like I received proportional attention for an absolute newbie (that is: very little).

At a job interview shortly after my Politics entry, I was asked about “the hardest visualization I’d ever done” and, as a follow up, “what success looked like” for that visualization. I remember admitting that in terms of initial goals for that project – winning the contest (…slash $500…), starting a conversation in the Tableau community – it was spectacularly unsuccessful. However, I found that I was able to use that experience and thought process to start other conversations, and that I learned a lot about the kinds of projects and conversations I want to put energy behind going forward.

I’d still love to win IronViz and meet those initial goals, but I think I’m now waiting for it to more naturally intersect with those interests.

7. Which Iron Viz competitions did you participate in, and why?

Iron Viz Politics and Mobile.

I outlined some of my thought process for the Politics entry in my blog post, but mostly I wanted to do something different than the political analysis I felt like I was drowning in at the time. It felt like a way to create space for nuanced and more personally-meaningful insights than election projections and (presumably) objective analyses.

With Iron Viz Mobile, it was more that I had an idea I didn’t know how to fully execute and solving it continued to be interesting all the way through. Still does, actually – I’ve continued to tweak aspects of this viz as new features are added and as I learn more about Tableau. It’s still not completely where I originally wanted it for the competition (…yes, the deadline was over a year ago), but I like to set ambitious goals 🙂

In both cases, I felt like I was able to find a way into the contest theme that felt unique and worth the time it would take to put together an entry I could feel proud of.

8. What competitions did you not participate in, and why?

Iron Viz Geospatial, Safari, and Silver Screen.

For the Geospatial and the Silver Screen contests, it was all about timing. Both competitions were announced just before week-long conferences, and there was no way I’d find the time to put together a competitive entry on top of preparing my own presentations. I was pretty agonized about missing the Geospatial contest – I’d just started working with our Maps librarian about how to tackle GIS questions with Tableau and it seemed like such a perfect opportunity to work out some of the issues. By the time it got to Silver Screen, though, I’d already started to accept that it wasn’t going to happen again this year.

The Safari contest just felt too referential for me (…I instantly set the bar at Jonni Walker). If I’d known it would be the only one I’d be able to enter, I might have pushed myself to come up with a more original idea. But, since it also changed the format (2nd place was now Best Rookie over Crowd Favorite), I only had one thing to aim at (the #1 spot) and it really didn’t seem like I could top the stuff that already existed—much less what I could imagine would be coming.

9. Do you participate in any other (non Iron Viz) Tableau community events?

More in person events than online ones. I’ve presented at local TUG events several times (on campus and in the area), and have been invited to speak on a few panels and to corporate user groups. I’ve also taught Tableau in libraries-related spheres (at in-class instruction sessions, one on one consultations, and conference workshops) but those are less reflective of the “Tableau Community”.

In the online community, I participated in one Makeover Monday last year. I love the idea, but I’m not exactly hurting for sample datasets at work. I do direct students there all the time, though. I’ve seen Workout Wednesday, but I don’t tend to be invested in figuring something out until I see a need for it in my own work. I learned how to do polygon maps in Tableau before line graphs. I memorized the calculations for Sankey Diagrams and Network Diagrams before I’d even heard about LOD expressions. I follow Twitter discussions, but I tend to tweet very occasionally – and most typically about libraries-related conferences or events.

10. Do you have any suggestions for improving representation in Iron Viz?

So many! See Part 2, here!


Choose Your Own Research Adventure: A Resource Guide for the UBC Information Literacy Tutorial

When I was earning my MLIS from UBC, I took a Teaching and Learning Enhancement Fund (TLEF) position to work with Katherine Miller on revising the Land and Food Systems Information Literacy Tutorial. We developed a Choose Your Own Adventure-style tutorial module to assess students’ familiarity with certain tools and resources and to inform decisions about the ongoing development and sustainability of the tutorial and its integration with LFS course content.

This format was chosen because it was not intrusive, largely enjoyable to students, and flexible – giving students additional help when desired and skipping ahead to more relevant content when it was not. Building the tutorial in a survey platform provided rich insight into the decision-making of students during the research process, which allowed us to revise the format of the tutorial and switch over to a more sustainable tool going forward.

We suggest this kind of gamification as a less invasive and more informative alternative to pre- and post-testing, which merely tests recall of information, puts a significant time and effort burden on students, and offers only limited utility to the librarians responsible for developing and maintaining the content. Embedding Choose Your Own Adventure-style content in a more traditional/linear tutorial can give you a balance of assessment and sustainability, and help serve students with a variety of different learning styles.


Want a template for your own work? Download the Twine file using Save Link As to save it to your computer. To modify or edit the content, open Twine in your browser and import the file. This tutorial is licensed under a Creative Commons Attribution-ShareAlike license.

Thinking about making your own from scratch? Learn from our trials and errors and check out the overview of key tools below:

Tools Overview

Narrative Development Tools


Twine is a free, open source tool built for Choose-Your-Own-Adventure-style branching narratives. It’s one of the easiest tools to manage, track, and revise the various content paths you develop, and the simple [[double bracket]] branching makes it an easy drafting tool, as well.

However, Twine does not have any tracking ability built into the program. There are instructions for using Google Analytics as a tracker here, but even with the ability to track clicks, the data is fairly complex and difficult to interpret.

Twine can also be used as a rudimentary visualization tool, as it allows you to zoom out to a structural-level view of your content as you’re drafting. This creates a node-link visualization that you can manually arrange without having to create each box and link by hand (as with Excel or Tableau).


  • A brief tutorial on Twine is available here.
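Since Twine stories are plain text under the hood, you can also pull that same structure out programmatically. Here is a minimal sketch – the passages below are invented, and it assumes a Twee-style export where each passage starts with “:: Name” – that collects the [[double bracket]] links into an edge list you could hand to Excel or Tableau:

```python
import re

# A toy Twee-style export: each passage starts with ":: Name" and
# links use Twine's [[double bracket]] syntax. Content is invented.
twee_source = """\
:: Start
Welcome to the library. [[Search the catalogue]] or [[Ask a librarian]]

:: Search the catalogue
You find a promising book. [[Ask a librarian]]

:: Ask a librarian
The librarian points you to the right shelf.
"""

edges = []
for passage in re.split(r"^:: ", twee_source, flags=re.MULTILINE):
    if not passage.strip():
        continue
    name, _, body = passage.partition("\n")
    # Links may be [[Target]] or [[Display text|Target]]
    for link in re.findall(r"\[\[(.+?)\]\]", body):
        edges.append((name.strip(), link.split("|")[-1]))

print(edges)
```

Each (source, target) pair in `edges` is one arrow in the node-link view Twine shows you while drafting.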

Survey Tools

FluidSurvey was the survey tool supported by UBC and has robust tracking and embedding capabilities, as well as the ability to easily copy surveys, write collaboratively, and customize URLs. That said, any survey tool with “skip logic” or branching functionality, which makes questions appear and disappear based on user selections, could work for this kind of project. However, the conditional rules quickly become complex to navigate, and it’s very hard to draft within the survey tool itself.



Quest is an interactive fiction tool that is designed for exploratory environments, rather than a directed path. For example, in Quest you build a room with objects in it (a library with a book) and users have a text-based game (“pick up book”, “look at book”, “read book”) for which you have drafted various responses.

Visualization/Analysis/Reporting Tools

As far as I know, there are no “out of the box” tools to work with this kind of data. The analysis and visualization tools we used in this project are described below, but all take a fairly significant degree of effort before you can find insights in the data.


In Tableau, we were able to create an overview of the tutorial structure that would provide the text of each node on hover and show the different paths through the content. The template was labor intensive to create, but allows you to filter the results, orient yourself within the content, and see the overall user response to the module.


The initial dashboard template was designed as part of Tamara Munzner’s Information Visualization Fall 2015 course. A copy of the final paper explaining and rationalizing the visualization choices in the dashboard above is available online here.


The first attempt to visualize the LFS CYOA tutorial was made in Excel. The image below is a screenshot of the flowchart we designed to understand the various paths through the tutorial. This was hand-created and arranged using the shapes and arrows tools.



This was a force-directed algorithm available online that uses a two-character arrow -> to connect the points on a node-link graph. Nodes can be color coded, as below, and manually or automatically arranged (and the amount of “gravity” and attraction/repulsion between nodes can be adjusted dynamically). However, this layout was also not very intuitive, as users experienced the narrative linearly, even if we did not design it that way.


Additional Resources

Interactive Fiction Database: A catalogue of Interactive Fiction/Choose Your Own Adventure-style games available to play online. To narrow the range of possibilities, check out this Top 50 list, past winners from the annual IF Competition, or XYZZY Award winners.

One Book, Many Readings: Beautiful analysis of printed CYOA books that offers some insight into the structure of CYOA narratives and is an enjoyable read in its own right.

Standard Patterns in Choice-Based Games takes a closer look at the structure of interactive narratives.

Choice Of Games offers Choose Your Own Adventure-style narratives in ebook form (Choice Of the Dragon is personally beloved). They have two posts (here and here) about their format that are worth checking out.

While none of these offer a step-by-step guide to the dashboard above, there are two Data+Science posts on Sankey Diagrams (here and here) and a post at Clearly and Simply on Network Graphs (here) that were essential in putting together the Tableau visualizations.


Copy/Paste Text from Tableau to the Clipboard: A Quick Tableau Hack

If you’ve ever tried to copy/paste a line of text from a Tableau visualization, you’ve probably discovered that Tableau renders all text in visualizations as images. Even if you set “text” as the visualization type, it will still render as an image of the text that cannot be copied to the clipboard.

If you want your users to be able to copy a line of text – or even a small handful of text values – there’s a workaround you can build that allows them to do so. Because worksheet titles are rendered as selectable text, we can allow the user to select one comment at a time and copy/paste it from a separate location. It’s not an ideal solution by any means – and you can vote up an improved text feature request here [find link] – but it might work as a stopgap measure.

  1. Create a new worksheet and drag the field you would like to copy/paste (e.g. “Comment”) to Rows. If your users will be selecting a numeric value, you can construct a contextual phrase by concatenating a few values in a calculated field:
    "The value on "+STR([Date])+" was "+STR([Value])
  2. Drag the new worksheet onto the dashboard with the primary visualization and edit the title of your new view:
    • Insert>[Comment]
    • This should now say “All”
  3. Add a new dashboard action by going to Dashboard>Actions. This will allow users to select a single comment and have the text show up in the title of your new worksheet.
    • Add Action>Filter
    • Source sheet: Your original visualization
    • Target Sheets: Your new worksheet
    • Run on: Select
    • Clearing will: Exclude all values
    • The title should now say “None”
  4. Test your dashboard to see that the action works. When you select a value from your source sheet, the title should change to the text value you have selected.
  5. Now hide the data in the underlying worksheet so that only the title is visible.
    • Uncheck “show header”
    • Under Format: remove shading, lines, and turn text white
    • Resize window to fit the space allotted
  6. Add instructions to make this action clear to your users (e.g. “select a value above to copy/paste text from here”)



Library Space Assessment in Tableau: A Step by Step Guide to Custom Polygon Maps and Dashboard Actions



Libraries need a tool for understanding observational space count data that’s as flexible as the spaces they’re trying to assess. Endless rows of numbers are difficult to interrogate or understand with any degree of nuance – whether that’s observing how one area is used compared to the whole, or how popular a particular kind of seating is by time of day.

Interactive visualization provides one solution to this problem. By building a map of the space and connecting it to the data, libraries are able to quickly see patterns and query them on the fly. The ability to subdivide and cross-section your data allows you to answer the questions you’re particularly interested in and supports finding new answers to the questions you develop.

This tutorial presents two methods for building interactive Space Assessment dashboards in Tableau. If you already have observation data, you should be able to select the method that best matches your needs and the format of your data. If you’re considering a space assessment project, thinking about the desired visualization can help identify collection methods and sampling schedules that will better answer the questions you have about your space.

Sample Projects


If you’re fairly comfortable with Tableau, don’t like detailed instructions, or just want to jump right in, here’s a quick reference version of the tutorial below:

  1. Gather the key information for your project: floorplan map, seating capacity for each area, and the observation data
  2. Use the GitHub tool or Drawing Tool to trace polygons over your floorplan. If you are making the map with icons, also place those coordinates at this time
  3. Export the polygon data. Give each shape the same name as the space has in the observation data. If you have icons, create a new column to tag those coordinates with the supplementary information, such as the type of seating the icon represents
  4. Combine all of your data sources into a single Excel workbook with separate tabs for each data source. Ensure that the Location field shows up in each tab, and that the naming is consistent across all sources.
  5. In Tableau, drag the X Coordinate to Columns and the Y Coordinate to Rows. Change the mark type to Polygon, then add Path ID to Path and Location Name to Detail.
  6. For icons, also drag the Seating Type to Detail, and duplicate the X and Y pills to create a dual-axis map. Change this mark type to Shape, and assign each seating type the appropriate icon from the custom shapes window. NULLs along the edge of the polygon should be assigned a transparent pixel.
  7. Color underlying map by location (categorical color scheme) or number of people (sequential color scheme) and add the underlying floorplan under the Background Images menu
  8. Synchronize the axes and uncheck “show header” to remove them

1. Data Prep

The instructions for Tableau assume you are working with a normalized dataset. If you don’t have normalized data, follow these instructions to manipulate your data into the appropriate format. If you’re not sure what normalized data looks like, check out the example below:

Cross-tab Data 

This is the format that you often see when you export observation data from Google Forms or other survey programs. In order to build the maps and visualizations shown above, we need to normalize the data so it looks like the example below.

If you’ve used Tableau’s “Data Interpreter” to normalize data in the past, you will still need to pre-process the data before uploading, as the built-in tool only supports one pivot and we’ll end up needing four.

Normalized Data


If you don’t have normalized data but are comfortable with manipulating data, check out this quick normalization trick in Excel and check back in at the next step. Otherwise, you can follow along with the sample cross-tab data to work through the data processing steps below:

  1. Unmerge all cells
  2. Fill in empty dates with the appropriate value
  3. Insert a column between Time and “Printing Scanning Info Desk”
  4. Concatenate Date and Time using the following formula: =CONCATENATE(A3,"?",B3) and copy the formula to all cells
  5. Using another CONCATENATE formula or with copy/paste, combine the location with the study information into a single header for each column
  6. Use the method here to normalize your data. Select only the concatenated field on the left and the newly combined field names
  7. In your new, normalized workbook, insert new columns between “Row” and “Column” and between “Column” and “Value”
  8. Select the “Row” column and navigate to Data>Text to Columns>Delimited and type ? into Other.  If Excel reformatted your dates into a series of numbers, you can manually format the cell to ensure your dates have copied correctly. Repeat for the “Column” column.
  9. Rename your fields as below:
    • Row → Date
    • New Column 1 → Time
    • Column → Location
    • New Column 2 → Group/Individual Study
    • Value → Count
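If you’d rather script this than click through Excel, the same reshaping can be done in a few lines of pandas. This is only a sketch – the column headers and counts below are made-up stand-ins for a real export, using the same “?” delimiter as the steps above:

```python
import pandas as pd

# Toy cross-tab resembling a survey export: one row per observation
# time, one column per "Location?Study type" pair. Values are invented.
crosstab = pd.DataFrame({
    "Date": ["2017-01-09", "2017-01-09"],
    "Time": ["10:00", "14:00"],
    "Reading Room?Individual": [12, 30],
    "Reading Room?Group": [4, 8],
    "Learning Commons?Group": [22, 41],
})

# Melt the wide table into one row per (Date, Time, column) observation
normalized = crosstab.melt(
    id_vars=["Date", "Time"],
    var_name="combined",
    value_name="Count",
)

# Split the combined header back into its two fields on the "?" delimiter
parts = normalized["combined"].str.partition("?")
normalized["Location"] = parts[0]
normalized["Group/Individual Study"] = parts[2]
normalized = normalized.drop(columns="combined")

print(normalized)
```

The result has the same Date / Time / Location / Group/Individual Study / Count columns as the hand-normalized workbook, ready to drop into Tableau.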

2. Custom Polygon Coordinates

Tableau uses longitude and latitude for its out of the box GIS mapping. When you need a custom map such as a floorplan, it uses an XY coordinate plane to recognize and draw polygon shapes.

Using an external tool to find the edges of your polygon is much easier than trying to find this data within Tableau. When you use these tools, you will generally upload an image of your map to the program and draw (by clicking on the corners of the location) the shapes you want. The program will then give you the coordinates of every point you clicked and the order in which you clicked them, which you can give to Tableau as your XY coordinate plane.
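The coordinate data these tools output is just an ordered list of clicked corners. As a rough sketch (the room names and coordinates here are invented for illustration), this writes a set of clicks into the Location Name / Path ID / X / Y layout used in the Tableau steps above:

```python
import csv
import io

# Hypothetical clicked corners for two rooms, in click order.
# Tableau connects each polygon's points in Path ID order.
clicks = {
    "Reading Room": [(0, 0), (0, 10), (8, 10), (8, 0)],
    "Learning Commons": [(8, 0), (8, 6), (14, 6), (14, 0)],
}

buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Location Name", "Path ID", "X Coordinate", "Y Coordinate"])
for location, points in clicks.items():
    for path_id, (x, y) in enumerate(points, start=1):
        writer.writerow([location, path_id, x, y])

print(buffer.getvalue())
```

Paste the resulting rows into the polygon tab of your Excel workbook, keeping Location Name consistent with the observation data.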

The following instructions walk you through the two most popular tools of this type.

Bryant B. Howell’s Open Source GitHub Tool

This is an open source program that runs off your hard drive to allow you to draw polygons over an image. You must store the code and the desired images in the same folder.

  1. Create a new folder on your computer for storing mapping files and data
  2. Follow this link and save the code provided to the folder you just created. (I right-click the Raw button and “save link as” from there, but you can also copy the code into a program such as Notepad ++ and save as an html file type)
  3. Copy the map(s) of the desired location into your mapping folder.
  4. Open the html file from your folder and follow the instructions from there.
  5. If you are making a normal location map, draw a polygon around each location where you collected data
  6. If you are making a map with icons, draw the polygon around each location, and also select the desired locations for your icons before closing your shape. Don’t worry if your polygon shape starts to look funny – we’ll fix it in Tableau!
  7. After you “Output Polygons,” copy/paste the data back into Excel

Power Tools for Tableau: Drawing Tool

This tool is a free, browser-based polygon tool that works with online images with a direct URL. You have to provide your name, email, phone number, and company in order to access the tool, but it also has features such as grid overlay, snap to existing point, and align with previous point.

  1. Follow this link and provide the requested credentials.
  2. Share the URL to the desired map and click “load image”
  3. For perfect corners, make sure the "Align with previous point" and "show grid line" options have been selected
  4. Follow the provided instructions
  5. If you are making a normal location map, draw a polygon around each location where you collected data
  6. If you are making a map with icons, draw the polygon around each location, and also select the desired locations for your icons before closing your shape. Don’t worry if your polygon shape starts to look funny – we’ll fix it in Tableau!
  7. Click “Output Polygons” and copy/paste the data into Excel


If you have a shapefile map of your library or location, you could also use one of the Shapefile To Tableau Conversion tools to generate the coordinates for your map.

Polygon Processing 

Once you have the XY coordinates for your polygons, create a new tab in your space count data file and paste the data into the worksheet.

  1. Rename the Shape or Identifier field to “Location” (or whatever you call “Location” in your data file)
  2. Check that the names of each shape mirror the name of the location in your data file. Capitalization, spelling, and spaces matter!
  3. If you are making a map with icons, create a new column and tag each icon location (the XY coordinates where the icon will appear) with the type of activity you want the icon to represent. Leave the points marking the edge of the polygon shape blank.
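Step 2’s exact-match requirement is easy to get wrong by hand. A quick sanity check, sketched here in Python with hypothetical location names, surfaces any mismatches before you load the data into Tableau:

```python
import pandas as pd

# Hypothetical extracts: location names from the count data and from
# the polygon coordinate sheet (step 2 warns these must match exactly).
data_locations = pd.Series(["Info Desk", "Quiet Room", "Lounge"])
polygon_locations = pd.Series(["Info Desk", "quiet room", "Lounge"])

# Names present in one sheet but not the other -- capitalization,
# spelling, and stray spaces all count as mismatches, just as in Tableau.
missing_polygons = set(data_locations) - set(polygon_locations)
missing_data = set(polygon_locations) - set(data_locations)

print("In data but not polygons:", missing_polygons)
print("In polygons but not data:", missing_data)
```

Anything printed by either line is a name that will silently fail to join in Tableau.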

Custom Shape Library

If you are creating a polygon map with icons, you will need to add at least one custom shape to your library. If you would like your icons to be more informative than “triangle” or “square,” you can search for custom icons online or use the package below:

  • Carrel
  • Study Room
  • Table
  • Computer Workstation
  • Lounge Chairs
  • Sitting on the Floor
  • Transparent Pixel (for edges)

Once your desired shapes are selected, you need to create a custom shapes menu. Navigate to Documents>My Tableau Repository>Shapes. Create a folder called Library Icons and add image files of the desired icons. Be sure to include a transparent pixel (file provided above), as you will need this to hide points along the edges of your shapes.

3. Custom Mapping in Tableau

Join all worksheets on the Location field. If you have used the same field names across all data sources, Tableau should detect the connection automatically. However, you can also manually edit the connection using the Join menu below:


Polygon Location Map

  1. Drag the X field to Columns and the Y field to Rows
  2. Change the mark type to Polygon
  3. Drag Path ID to Path and drag Location to Detail or Color.

Dual Axis Polygon Map with Icons

  1. Follow the instructions for the Location map. Then:
  2. Drag the seating field to Detail
  3. Copy the X on Columns and Y on Rows by selecting the pill and Ctrl + Drag Right, or by dragging the field onto each shelf a second time
  4. Right click each pill and select “Dual Axis” to combine the shapes into a dual axis chart
  5. Change the secondary mark type to Shape
  6. Edit the shapes so that all NULL values are the transparent icon, and the remaining seating types are assigned an appropriate icon

Background Images

To add the underlying map, go to Map>Background Images and select your data source. Click “Add Image” and select the file from your computer. You will need to know the size of your final image in pixels – you can check this in a program like Microsoft Paint. For the X field, enter 0 for Left and the image width for Right; for the Y field, enter 0 for Bottom and the image height for Top. Hit OK.

You can modify the transparency of your image under the Color menu to see the underlying map.

You can also choose whether you want the map to be colored by the number of students in each area (a Choropleth map) or to use categorical color by location. If you choose to color by location, it will be easier to identify each location in the corresponding charts, but harder to see high and low traffic patterns in the building. The reverse is true for the Choropleth map, as in the example below:


You may want to modify the look of your polygons by adding or removing the border or halo around each polygon. You can find these options under “Effects” in the Color menu.

Depending on how you shape your data, you may need to take the AVG instead of SUM for each vertex, or disaggregate measures.


4. Visualizing Capacity  

Because the questions asked in space assessment projects can be so contextual, I am not going over how to create specific views in Tableau. I assume most of the people interested in custom mapping are already pretty familiar with the basics and can create the supporting visualizations they need. However, because we used a blended data source, I do want to go over how data blending affects working with capacity.

When we blend data sources, duplication of fixed values becomes a slight problem. Tableau automatically uses SUM to aggregate, but a capacity of 25 will be duplicated for every count. After 4 counts, it would show a SUM of 100 (4*25). You can use “Average” to quickly compute the correct capacity for each location, as shown below:


You could also use a FIXED LOD calculation: {FIXED [Location]: AVG(Capacity)}. Note that with LOD calculations, any filters must be Added to Context to work as intended. I have never run into a problem with the standard Average calc, however, so I tend to stick with that.
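The duplication problem can be reproduced in a few lines of pandas. This sketch uses hypothetical counts for a single location with a capacity of 25, showing why the default SUM inflates capacity while AVG recovers it:

```python
import pandas as pd

# Hypothetical blended rows: one row per head count, with the fixed
# capacity of 25 duplicated onto every row for the same location.
df = pd.DataFrame({
    "Location": ["Quiet Room"] * 4,
    "Count": [10, 18, 22, 15],
    "Capacity": [25, 25, 25, 25],
})

by_loc = df.groupby("Location").agg(
    total_count=("Count", "sum"),
    capacity_sum=("Capacity", "sum"),   # what Tableau's default SUM shows: 4 * 25
    capacity_avg=("Capacity", "mean"),  # the AVG fix described above: 25
)
print(by_loc)
```

Summing Count is correct (each row is a distinct observation), but summing Capacity quadruple-counts a value that is fixed per location – exactly the 4*25 = 100 problem described above.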

5. Dashboard Actions in Tableau

Once you’ve created the desired visualizations to correspond with your map, it’s time to make it interactive! Drag your map and your visualizations onto a new dashboard and arrange the windows as you see fit.

In Tableau, adding interactivity is done through Dashboard Actions. This is what tells Tableau that an action taken in one worksheet should affect other worksheets on the same dashboard. Tableau has three kinds of Dashboard Actions:

  • Highlight: A highlight action makes the selected field pop out across all views in your dashboard. Tableau draws your attention by changing the field name to yellow and dimming all other information in the worksheet.
  • Filter: A filter action excludes all values that do not match your selection. It can be used to look at a particular section of data in more detail by limiting the available information to only the values that match your selection.
  • URL: A URL action directs you to an external link when selected. You can use this to direct users to a more detailed explanation of your observation methods, for example, or to tweet about the results they find – but these actions are not particularly relevant to this tutorial.
  • Tableau also allows interactivity with dropdown filters, checkboxes, and radial menus that can apply across all worksheets or selected worksheets on your dashboard. Like URL actions, these can certainly be used in your space assessment dashboard, but are not covered in this tutorial.

In general, remember that highlighting will preserve context while filtering will reduce the amount of information available.

There are two ways to apply dashboard actions in Tableau.

Generated Dashboard Actions

Generated dashboard actions are applied from the dashboard itself, not from the Dashboard Actions menu. These can be generated by adding a color legend or selecting “Use as Filter” on a worksheet.

Highlight Actions are automatically applied from color legends and,  by default, highlight all values on your dashboard with the same color palette.

Filter actions can also be applied easily; when you select a worksheet on your dashboard, there’s a small icon on the upper right that you can toggle on and off to use as a filter:

Generated dashboard actions work well on many dashboards; however, they work less well in this context. Having a large custom map effectively replaces a small color legend as the obvious trigger for a highlight action, and the filter toggle automatically filters all charts by all fields. This combination means you could end up with situations like the one below:

You can see that a selection on the map excludes all bars that show the count of students in other areas and resizes the axis to the maximum space available. Because there is no matching field to highlight in the line chart, the values are greyed out and hard to see.

The solution to this is to use the Dashboard Actions menu.

Dashboard Actions Menu 

To access this menu, go to your dashboard and select Dashboard>Actions. This will pull up the list of current actions on your workbook, where you can see the menu-created and Tableau-generated actions all in one place. Selecting Add Action will allow you to select the kind of action you prefer.


The Source Sheets section allows you to select the worksheet(s) in which your user must make a selection. Highlight actions can have multiple sources; filter actions can only have one source – but you can add multiple filters that will allow every worksheet to affect every other sheet. The Target Sheets section allows you to specify which worksheets the action affects.

The two actions above would lead to a result like this:

With a filter applied to the line chart, you can see how the number of users changes over time in the selected area only; using a highlight action on the bar chart preserves the context of the rest of the building, but helps the selected value pop out.


In general, the steps to create a dashboard action are:

  1. Navigate to Dashboard>Actions and select “Add action”
  2. Select either the filter or highlight action to reduce (filter) or feature (highlight) information
  3. In the Source Sheets menu, select the sheet(s) where the action must be taken by the user
  4. In the Target Sheets menu, select the sheet(s) where the effect of the action will be shown.
  5. Decide whether the action affects all fields in your workbook or selected fields and modify your action accordingly.


If your actions aren’t working as expected, these questions may help you find the problem:

  • Is everything suddenly dim? Is your highlight having no effect at all? Check if the desired field (e.g. Location) is on all sheets. If not, add the field to the view and see if your action works now.
  • Did your data disappear? Check if your filter affects All Fields. If so, there may be fields (like time of day) that aren’t showing on some views (like the map). Use Selected Fields to set the fields that should be affected and ignore those that shouldn’t.
  • Did you filter when you meant to highlight? You’ll have to create a whole new action – there’s no easy way to switch once the action is created.


A list of all tools, links, and downloads mentioned in this tutorial can be found below:

A note on versioning:

In Tableau 10, you can use data blending across all locations. You will be able to drag separate files (data, polygons, capacity) and blend on location.

For Tableau 9 and earlier, adding the data as different tabs in an Excel workbook will still enable you to blend as above. Avoid using relationships to merge data across data sources in this case, as this will give you some difficulty when you try to compute % occupancy across two data sources (capacity and data).

Additional Resources

There are quite a few other resources and tutorials for working with custom polygon data. When I was learning, I consulted the Creating Custom Polygons on a Background Image instructions at Tableau and Behold. The Tableau blog also recently published a post about custom polygons and US Congressional Districts, and it points to several other projects and tutorials at the end of the post.

Posted in Dashboard Spotlight

Tableau Mobile Iron Viz 2016

You are more likely to see a sheep on the cover of a board game box than you are to see a group of women. —Erin Ryan, of the Cardboard Republic

I used this Iron Viz as an excuse to explore some data I’ve been meaning to work with for a long time. A friend of mine shared this article on Gender Representation in Board Game Cover Art‏ a few months ago, and I’ve been looking at my games slightly differently ever since.

Of course, while the gender stat is somewhat shocking – and it only gets worse for people of color – few people are interested in buying a game simply for the fact that it features a woman on the cover. A lot of the follow-up discussions we’ve had have been about representation more broadly – Do I have any games by female designers? How do I find more stuff created by women?

And finding games you might enjoy is enough of a challenge in and of itself! There are thousands of new games published every year and while Board Game Geek can be a helpful reference, the proliferation of titles like “Top Game of [unit of time],” “help me find a game,” “2-players my wife will enjoy?,” and “Masterpost of [game type] games!” speak to the difficulty of using it as a recommendation tool.

Enter RecommEngine!


In this Tableau visualization, I have experimented with using visual analytics to inform and guide board game selection. Selecting games you know and enjoy at the top (or the top left on desktop) will give you an overview of some of the game designers and game mechanics you enjoy, as well as the ability to see how many of the selected games fall into the various bins of game length and number of players. Each of these categories works as a filter, so you can see which of your games are by a particular designer (are they the really good ones or the kinda meh ones?) or quickly narrow down your options for board game night based on the games available, how many people you have, and the time commitment they’re willing to invest.

As you scroll, you pass a few visualizations that range from informative to comical on the gender representation of your game library. I did quite a bit of digging into the depths of the Board Game Geek website to find how and where women were involved in the back end of game design. I wanted to paint a slightly more nuanced picture of the industry, and to give people the tools to explore that nuance for themselves.

Cause for celebration

In the middle of the dashboard – whichever device you use – you find the radar chart overview. Earlier this year, Edward Kung did a Principal Components Analysis of Board Game Geek rankings and identified four “genes” that mark particular kinds of players. He shared the top 10 positive and negative indicators for each category on his blog, which I applied to the list of games I had compiled to build a high-level overview of the selected games. My visual encoding choice was directly informed by Kung’s original, but I also wanted it to have the look and feel of a personality test, which tells you about yourself in a way that you can (theoretically) use to inform your decisions going forward. The graph on the left (“Your Results”) shows the result of the selections and filters that were applied based on your original knowledge and input; the graph on the right (“Selected Game(s)”) gives you an overview of the new recommendations so you can assess the fit (the theory being that it’s a lot easier to compare one shape to another than to remember the intersecting sets and filters that inform a recommendation!).


At the bottom – or the right side of the screen on desktop – you can use the information you’ve learned (or already knew) to find new games. Selecting mechanics on this side applies an intersection filter; a selection of “Co-operative games” and “Bluffing” will show you only games that have both mechanics, rather than all co-operative games and all bluffing games. The same filters for designer; number of players; length of game are available here, to minimize the scrolling.

As this was a mobile contest, I wanted to limit the need for filtering as much as possible. While the initial input on both sides is fairly user/filter-intensive, the rest of the interaction is all done through dashboard actions and calculations (…I had to figure out LOD Expressions for this!). In case the number of games or the length of the filter was still too intimidating, I’ve provided a few pre-selected lists as buttons underneath so you can get a feel for how it works before tediously checking and unchecking a preferences list.

Clicking any particular game (whether in the recommendations side or the preferences side) will give you access to the Board Game Geek page as well as a pre-selected tweet with #ironviz 😉


There are some major limitations to this dashboard as a recommendation tool. I’m still disappointed that I could not figure out a way to turn the selection filter at the top into an exclusion filter at the bottom. Because of that, there’s really no way to prevent games you already know and like from appearing as recommendations. While I imagine that makes the preference analysis seem more accurate, it can be pretty frustrating if you’re really searching for something new.

There is also no weighting of your selection – every game you select factors evenly into the analysis below. You may kinda like one game and LOVE another, but all of the “Top” categories are determined by number of games that fall into the group, not the strength of preference.

Still want to check it out despite the limitations? Click here or on the image below!


Other Board Game Recommenders

Don’t like RecommEngine? That’s fine! We’re sad to see you go, but here are some other tools that might be more to your liking:

Tools and Data

While the final dataset required a lot of hand-compiling and curation, the following tools and sources were essential for inspiration, technique, and methodology:

Finally, special thanks to everyone who tested the dashboard, gave me feature suggestions and game suggestions, links to compelling articles, and generally kept me playing board games all these years: Matt, Ryan, Brie, other Ryan, Chloe, Andy, Noah, Jared, Brandon, Keith, Cole…it takes a village – which incidentally looks like a fun game we should try 🙂

Posted in Excel, Intro to Tableau

Preparing Data for Tableau

Data preparation is one of the hardest and least-discussed Tableau skills, as it’s all done before you ever open the program. Getting your data into the right shape to create the views you want is almost always going to be the bulk of your visualization work. And when you go from a workshop or instruction session to using your own data, or when you try to follow a tutorial step-by-step but your data is in a different shape, you can start running into frustrating barriers. What you thought was a “knowledge of Tableau” problem might just be a data problem.

So, how do you know if your data is headache-free?

Cross-Tab Data

Much of the data we see in libraries comes in this format. It reduces duplication in data entry and allows you to quickly compare high-level aggregated totals. However, Tableau prefers raw/normalized data as its strength is in aggregating, grouping, and performing calculations on the data itself.

If you load a cross-tab dataset in Tableau, the program will try its best to give you something to work with, but it won’t be intuitive, nor will it match any of the instructions you find online. As you see in the image above, Tableau reads each column in the data as its own field—treating July as a different category of information from August. Dragging and dropping these fields into Tableau will give you some basic information – you will be able to see how many visitors came in July – but as soon as you try to add more than one month, you’ll start to get frustrated.

Tableau is looking for data in a machine-readable format, known as:

Normalized data

This is the preferred format for use in Tableau. Reformatting the data in this way will make working with the information in Tableau much more intuitive, and you will be able to build more robust visualizations and dashboards.

This is a normalized version of the data above. As you can see, the information has been reshaped so that each different kind of data is in a new column – year, month, library, and visitor count. There is a lot of duplication in the data entry (year and branch, in particular), but once the data is in Tableau, the program correctly interprets the different types of information in the file.

How do you normalize data?

Quick data normalization in Excel


  1. Create a pivot table with multiple consolidation ranges. This requires use of the PivotTable and PivotChart Wizard (ALT + D + P is the keyboard shortcut), not the default icon in the ribbon.
    • Select “Multiple consolidation ranges”
    • Select “I will create the page fields”
    • Select the entire table for the range and click “add”
  2. Finish the wizard (remaining choices will not matter) and uncheck the Column and Row fields from the resulting PivotTable. You should see a single cell with a total Value
  3. Double click this value. The normalized data will appear in a new workbook with the row IDs on the left, the column headers in the middle, and the values on the right.
    • Rename the default column headers (“Row,” “Column,” “Value”) to something more meaningful.
    • Filter for blank values and delete the empty cells

This normalization trick can be used in combination with other Excel tricks for fairly sophisticated wrangling without the need for specialized tools or programs. Stay tuned for an overview of more advanced data wrangling in Excel, focusing on Concatenate, Text to Columns, and Trim.