Category Archives: Data Visualization

Data Visualization – A Scientific Treatment (Peter James Thomas)

Readers:

Peter James Thomas sent me a link to his blog about the scientific treatment of data visualization. Mr. Thomas (photo, right) has extensive management experience in the insurance, reinsurance, software development, manufacturing and retail sectors, with particular focus on forming a deep appreciation of business and customer needs, developing pragmatic strategies to address them, a passion for high-quality execution, and an understanding of the key role of education in enacting cultural and organisational change. While Mr. Thomas has predominantly been a General IT or IT Development Manager in most of his roles, his specialties include Business Intelligence / Data Warehousing / Analytics (the main subjects covered in his blog), Financial Systems / ERP, IT Strategy Formation, IT / Business Alignment and Customer Relationship Management systems.

Peter is currently Head of Group Business Intelligence for Validus Holdings, a leading insurance and reinsurance organisation with a global presence. For a considerable portion of his time in this role, he was also Head of IT Development at Validus’s Talbot Underwriting subsidiary.

I am including Mr. Thomas’ blog post in its entirety below. I am also including a link to his blog here.

Best regards,

Michael

Data Visualisation – A Scientific Treatment

by Peter James Thomas, November 6, 2014

Introduction

Diagram of the Causes of Mortality in the Army of the East (click to view a larger version in a new tab)

The above diagram was compiled by Florence Nightingale, who was – according to The Font – “a celebrated English social reformer and statistician, and the founder of modern nursing”. It is gratifying to see her less high-profile role as a number-cruncher acknowledged up-front and central; particularly as she died in 1910, eight years before women in the UK were first allowed to vote and eighteen before universal suffrage. This diagram is one of two which are generally cited in any article on Data Visualisation. The other is Charles Minard’s exhibit detailing the advance on, and retreat from, Moscow of Napoleon Bonaparte’s Grande Armée in 1812 (Data Visualisation had a military genesis in common with – amongst many other things – the internet). I’ll leave the reader to look at this second famous diagram if they want to; it’s just a click away.

While there are more elements of numeric information in Minard’s work (what we would now call measures), there is a differentiating point to be made about Nightingale’s diagram. This is that it was specifically produced to aid members of the British parliament in their understanding of conditions during the Crimean War (1853-56); particularly given that such non-specialists had struggled to understand traditional (and technical) statistical reports. Again, rather remarkably, we have here a scenario where the great and the good were listening to the opinions of someone who was barred from voting on the basis of lacking a Y chromosome. Perhaps more pertinently to this blog, this scenario relates to one of the objectives of modern-day Data Visualisation in business; namely explaining complex issues, which don’t leap off of a page of figures, to busy decision makers, some of whom may not be experts in the specific subject area (another is of course allowing the expert to discern less than obvious patterns in large or complex sets of data). Fortunately most business decision makers don’t have to grapple with the progression in number of “deaths from Preventible or Mitigable Zymotic diseases” versus “deaths from wounds” over time, but the point remains.

Data Visualisation in one branch of Science

von Laue, Bragg Senior & Junior, Crowfoot Hodgkin, Kendrew, Perutz, Crick, Franklin, Watson & Wilkins

Coming much more up to date, I wanted to consider a modern example of Data Visualisation. As with Nightingale’s work, this is not business-focused, but contains some elements which should be pertinent to the professional considering the creation of diagrams in a business context. The specific area I will now consider is Structural Biology. For the incognoscenti (no advert for IBM intended!), this area of science is focussed on determining the three-dimensional shape of biologically relevant macro-molecules, most frequently proteins or protein complexes. The history of Structural Biology is intertwined with the development of X-ray crystallography by Max von Laue and father and son team William Henry and William Lawrence Bragg; its subsequent application to organic molecules by a host of pioneers including Dorothy Crowfoot Hodgkin, John Kendrew and Max Perutz; and – of greatest resonance to the general population – Francis Crick, Rosalind Franklin, James Watson and Maurice Wilkins’s joint determination of the structure of DNA in 1953.


X-ray diffraction image of the double helix structure of the DNA molecule, taken in 1952 by Raymond Gosling, commonly referred to as “Photo 51”, during work by Rosalind Franklin on the structure of DNA

While the masses of data gathered in modern X-ray crystallography need computer software to extrapolate them to physical structures, things were more accessible in 1953. Indeed, it could be argued that Gosling and Franklin’s famous image, its characteristic “X” suggestive of two helices and thus driving Crick and Watson’s model building, is another notable example of Data Visualisation; at least in the sense of a picture (rather than numbers) suggesting some underlying truth. In this case, the production of Photo 51 led directly to the creation of the even more iconic image below (which was drawn by Francis Crick’s wife Odile and appeared in his and Watson’s seminal Nature paper[1]):

Odile and Francis Crick - structure of DNA

© Nature (1953)
Posted on this site under the non-commercial clause of the right-holder’s licence

It is probably fair to say that the visualisation of data which is displayed above has had something of an impact on humankind in the sixty years since it was first drawn.

Modern Structural Biology

The X-ray Free Electron Laser at Stanford

Today, X-ray crystallography is one of many tools available to the structural biologist with other approaches including Nuclear Magnetic Resonance Spectroscopy, Electron Microscopy and a range of biophysical techniques which I will not detain the reader by listing. The cutting edge is probably represented by the X-ray Free Electron Laser, a device originally created by repurposing the linear accelerators of the previous generation’s particle physicists. In general Structural Biology has historically sat at an intersection of Physics and Biology.

However, before trips to synchrotrons can be planned, the Structural Biologist often faces the prospect of stabilising their protein of interest, ensuring that they can generate sufficient quantities of it, successfully isolating the protein and finally generating crystals of appropriate quality. This process often consumes years, in some cases decades. As with most forms of human endeavour, there are few short-cuts and the outcome is at least loosely correlated to the amount of time and effort applied (though sadly with no guarantee that hard work will always be rewarded).

From the general to the specific

The Journal of Molecular Biology (October 2014)

At this point I should declare a personal interest: the example of Data Visualisation which I am going to consider is taken from a paper recently accepted by the Journal of Molecular Biology (JMB), of which my wife is the first author[2]. Before looking at this exhibit, it’s worth a brief detour to provide some context.

In recent decades, the exponential growth in the breadth and depth of scientific knowledge (plus of course the velocity with which this can be disseminated), coupled with the increase in the range and complexity of techniques and equipment employed, has led to the emergence of specialists. In turn this means that, in a manner analogous to the early production lines, science has become a very collaborative activity; the expert in stage one hands over the fruits of their labour to the expert in stage two and so on. For this reason the typical scientific paper (and certainly those in Structural Biology) will have several authors, often spread across multiple laboratory groups and frequently in different countries. By way of example, the previous paper my wife worked on had 16 authors (including a Nobel Laureate[3]). In this context, the fact that the paper I will now reference was authored by just my wife and her group leader is noteworthy.

The reader may at this point be relieved to learn that I am not going to endeavour to explain the subject matter of my wife’s paper, nor the general area of biology to which it pertains (the interested are recommended to Google “membrane proteins” or “G Protein Coupled Receptors” as a starting point). Instead let’s take a look at one of the exhibits.

Click to view a larger version in a new tab

© The Journal of Molecular Biology (2014)
Posted on this site under a Creative Commons licence

The above diagram (in common with Nightingale’s much earlier one) attempts to show a connection between sets of data, rather than just the data itself. I’ll elide the scientific specifics here and focus on more general issues.

First, the grey upper section with the darker blots on it – which is labelled (a) – is an image of a biological assay called a Western Blot (for the interested, details can be viewed here); each vertical column (labelled at the top of the diagram) represents a sub-experiment on protein drawn from a specific sample of cells. The vertical position of a blot indicates the size of the molecules found within it (in kilodaltons); the intensity of a given blot indicates how much of the substance is present. Aside from the headings and labels, the upper part of the figure is a photographic image and so essentially analogue data[4]. So, in summary, this upper section represents the findings from one set of experiments.

At the bottom – and labelled (b) – appears an artefact familiar to anyone in business, a bar-graph. This presents results from a parallel experiment on samples of protein from the same cells (for the interested, this set of data relates to the degree to which proteins in the samples bind to a specific radiolabelled ligand). The second set of data is taken from what I might refer to as a “counting machine” and is thus essentially digital. To be 100% clear, the bar chart is not a representation of the data in the upper part of the diagram; it pertains to results from a second experiment on the same samples. As indicated by the labelling, for a given sample, the column in the bar chart (b) is aligned with the column in the Western Blot above (a), connecting the two different sets of results.

Taken together the upper and lower sections[5] establish a relationship between the two sets of data. Again I’ll skip over the specifics, but the general point is that while the Western Blot (a) and the binding assay (b) tell us the same story, the Western Blot is a much more straightforward and speedy procedure. The relationship that the paper establishes means that the Western Blot alone can be used to perform a simple new assay which will save significant time and effort for people engaged in the determination of the structures of membrane proteins; a valuable new insight. Clearly the relationships that have been inferred could equally have been presented in tabular form and be just as relevant. It is however testament to the more atavistic side of humans that – in common with many relationships between data – a picture says it more surely and (to mix a metaphor) more viscerally. This is the essence of Data Visualisation.

What learnings can Scientific Data Visualisation provide to Business?

Scientific presentation (c/o Nature, but looks a lot like PhD Comics IMO)

Using the JMB exhibit above, I wanted to now make some more general observations and consider a few questions which arise out of comparing scientific and business approaches to Data Visualisation. I think that many of these points are pertinent to analysis in general.

Normalisation

Broadly, normalisation[6] consists of defining results in relation to some established yardstick (or set of yardsticks); displaying relative, as opposed to absolute, numbers. In the JMB exhibit above, the amount of protein solubilised in various detergents is shown with reference to the un-solubilised amount found in native membranes; these reference figures appear as 100% columns to the right and left extremes of the diagram.

The most common usage of normalisation in business is growth percentages. Here the fact that London business has grown by 5% can be compared to Copenhagen having grown by 10%, despite total London business being 20 times the volume of Copenhagen’s. A related business example, depending on implementation details, could be comparing foreign currency amounts at a fixed exchange rate to remove the impact of currency fluctuation.

Normalised figures are very typical in science, but, aside from the growth example mentioned above, considerably less prevalent in business. In both avenues of human endeavour, the approach should be used with caution; something that increases 200% from a very small starting point may not be relevant, be that the result of an experiment or weekly sales figures. Bearing this in mind, normalisation is often essential when looking to present data of different orders of magnitude on the same graph[7]; the alternative often being that the smaller data is swamped by the larger, which is not always desirable.
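To make the idea concrete, here is a minimal sketch (in Python, with made-up names and figures that are purely illustrative, not drawn from the JMB paper or any real business) of the two flavours of normalisation mentioned above: expressing values relative to a reference measurement, and expressing period-on-period change as a growth percentage.

```python
def normalise_to_reference(values, reference):
    """Express each value as a percentage of a reference measurement."""
    return {name: 100.0 * v / reference for name, v in values.items()}

def growth_percentage(current, previous):
    """Period-on-period growth, independent of absolute size."""
    return 100.0 * (current - previous) / previous

# Illustrative only: solubilised protein relative to a native-membrane reference
solubilised = {"detergent A": 0.8, "detergent B": 0.45, "detergent C": 0.15}
native_membrane = 1.0  # the 100% yardstick
print(normalise_to_reference(solubilised, native_membrane))
# {'detergent A': 80.0, 'detergent B': 45.0, 'detergent C': 15.0}

# Illustrative only: London is 20 times Copenhagen's volume, but grows more slowly
print(growth_percentage(current=210, previous=200))  # London: +5.0%
print(growth_percentage(current=11, previous=10))    # Copenhagen: +10.0%
```

The point of the second function is exactly the caution flagged above: the Copenhagen figure looks twice as impressive as the London one until you remember the very different starting points.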

Controls

I’ll use an anecdote to illustrate this area from a business perspective. Imagine an organisation which (as you would expect) tracks the volume of sales of a product or service it provides via a number of outlets. Imagine further that it launches some sort of promotion, perhaps valid only for a week, and notices an uptick in these sales. It is extremely tempting to state that the promotion has resulted in increased sales[8].

However this cannot always be stated with certainty. Sales may have increased for some totally unrelated reason such as (depending on what is being sold) good or bad weather, a competitor increasing prices or closing one or more of their comparable outlets and so on. Equally perniciously, the promotion may simply have moved sales in time – people may have been going to buy the organisation’s product or service in the weeks following a promotion, but have brought the expenditure forward to take advantage of it. If this is indeed the case, an uptick in sales may well be due to the impact of a promotion, but will be offset by a subsequent decrease.

In science, it is this type of problem that the concept of control tests is designed to combat. As well as testing a result in the presence of substance or condition X, a well-designed scientific experiment will also be carried out in the absence of substance or condition X, the latter being the control. In the JMB exhibit above, the controls appear in the columns with white labels.

There are ways to make the business “experiment” I refer to above more scientific of course. In retail business, the current focus on loyalty cards can help, assuming that these can be associated with the relevant transactions. If the business is on-line then historical records of purchasing behaviour can be similarly referenced. In the above example, the organisation could decide to offer the promotion at only a subset of its outlets, allowing a comparison to those where no promotion applied. This approach may improve rigour somewhat, but of course it does not cater for purchases transferred from a non-promotion outlet to a promotion one (unless a whole raft of assumptions is made). There are entire industries devoted to helping businesses deal with these rather messy scenarios, but it is probably fair to say that it is normally easier to devise and carry out control tests in science.

The general take away here is that a graph which shows some change in a business output (say sales or profit) correlated to some change in a business input (e.g. a promotion, a new product launch, or a price cut) would carry a lot more weight if it also provided some measure of what would have happened without the change in input (not that this is always easy to measure).
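As a minimal sketch of the “promotion at a subset of outlets” idea described above – assuming you already hold weekly sales per outlet, and with outlet names and figures invented purely for illustration – the comparison is simply the average change at promotion outlets versus the average change at the control outlets, the business analogue of running the experiment with and without condition X.

```python
def average_change(sales_before, sales_after, outlets):
    """Mean week-on-week percentage change across a set of outlets."""
    changes = [
        100.0 * (sales_after[o] - sales_before[o]) / sales_before[o]
        for o in outlets
    ]
    return sum(changes) / len(changes)

# Invented figures: weekly sales per outlet before and during the promotion
before = {"Leeds": 100, "York": 120, "Hull": 90, "Bath": 110}
during = {"Leeds": 115, "York": 132, "Hull": 92, "Bath": 113}

promotion_outlets = ["Leeds", "York"]   # ran the promotion
control_outlets = ["Hull", "Bath"]      # did not

uplift = average_change(before, during, promotion_outlets)
baseline = average_change(before, during, control_outlets)
print(f"Promotion outlets: {uplift:+.1f}%, control outlets: {baseline:+.1f}%")
print(f"Estimated promotion effect: {uplift - baseline:+.1f} percentage points")
```

Even this says nothing about sales simply pulled forward in time; a comparison of the subsequent weeks would be needed to pick that up.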

Rigour and Scrutiny

I mention in the footnotes that the JMB paper in question includes versions of the exhibit presented above for four other membrane proteins, this being in order to firmly establish a connection. Looking at just the figure I have included here, each element of the data presented in the lower bar-graph area is based on duplicated or triplicated tests, with average results (and error bars – see the next section) being shown. When you consider that upwards of three months’ preparatory work could have gone into any of these elements and that a mistake at any stage during this time would have rendered the work useless, some impression of the level of rigour involved emerges. The result of this assiduous work is that the authors can be confident that the exhibits they have developed are accurate and will stand up to external scrutiny. Of course such external scrutiny is a key part of the scientific process and the manuscript of the paper was reviewed extensively by independent experts before being accepted for publication.

In the business world, such external scrutiny tends to apply most frequently to publicly published figures (such as audited Financial Accounts); of course external financial analysts will also look to dig into figures. There may be some internal scrutiny around both the additional numbers used to run the business and the graphical representations of these (and indeed some companies take this area very seriously), but not every internal KPI is vetted the way that the report and accounts are. Particularly in the area of Data Visualisation, there is a tension here. Graphical exhibits can have a lot of impact if they relate to the current situation or present trends; contrariwise, if they are substantially out-of-date, people may question their relevance. There is sometimes the expectation that a dashboard is just like its aeronautical counterpart, showing real-time information about what is going on now[9]. However a lot of the value of Data Visualisation is not about the here and now so much as trends and explanations of the factors behind the here and now. A well-thought-out graph can tell a very powerful story, more powerful for most people than a table of figures. However a striking graph based on poor quality data, data which has been combined in the wrong way, or even – as sometimes happens – the wrong datasets entirely, can tell a very misleading story and lead to the wrong decisions being taken.

I am not for a moment suggesting here that every exhibit produced using Data Visualisation tools must be subject to months of scrutiny. As referenced above, in the hands of an expert such tools have the value of sometimes quickly uncovering hidden themes or factors. However, I would argue that – as in science – if the analyst involved finds something truly striking, an association which he or she feels will really resonate with senior business people, then double- or even triple-checking the data would be advisable. Asking a colleague to run their eye over the findings and to then probe for any obvious mistakes or weaknesses sounds like an appropriate next step. Internal Data Visualisations are never going to be subject to peer-review, however their value in taking sound business decisions will be increased substantially if their production reflects at least some of the rigour and scrutiny which are staples of the scientific method.

Dealing with Uncertainty

In the previous section I referred to the error bars appearing on the JMB figure above. Error bars are acknowledgements that what is being represented is variable and they indicate the extent of such variability. When dealing with a physical system (be that mechanical or – as in the case above – biological), behaviour is subject to many factors, not all of which can be eliminated or adjusted for and not all of which are predictable. This means that repeating an experiment under ostensibly identical conditions can lead to different results[10]. If the experiment is well-designed and if the experimenter is diligent, then such variability is minimised, but never eliminated. Error bars are a recognition of this fundamental aspect of the universe as we understand it.

While de rigueur in science, error bars seldom make an appearance in business, even – in my experience – in estimates of business measures which emerge from statistical analyses[11]. Even outside the realm of statistically generated figures, more business measures are subject to uncertainty than might initially be thought. An example here might be a comparison (perhaps as part of the externally scrutinised report and accounts) of the current quarter’s sales to the previous one (or the same one last year). In companies where sales may be tied to – for example – the number of outlets, care is taken to make these figures like-for-like. This might include only showing numbers for outlets which were in operation in the prior period and remain in operation now (i.e. excluding sales from both closed outlets and newly opened ones). However, outside the area of high-volume low-value sales where the Law of Large Numbers[12] rules, other factors could substantially skew a given quarter’s results for many organisations. Something as simple as a key customer delaying a purchase (so that it fell in Q3 this year instead of Q2 last) could have a large impact on quarterly comparisons. Again companies will sometimes look to include adjustments to cater for such timing or related issues, but this cannot be a precise process.

The main point I am making here is that many aspects of the information produced in companies are uncertain. The cash transactions in a quarter are of course the cash transactions in a quarter, but the above scenario suggests that they may not always 100% reflect actual business conditions (and you cannot adjust for everything). Equally, when you get into figures that would be part of most companies’ financial results, such as outstanding receivables and allowances for bad debts, the spectre of uncertainty arises again without a statistical model in sight. In many industries, regulators are pushing for companies to include more forward-looking estimates of future assets and liabilities in their Financials. While this may be a sensible reaction to recent economic crises, the approach inevitably leads to more figures being produced from models. Even when these models are subject to external review, as is the case with most regulatory-focussed ones, they are still models and there will be uncertainty around the numbers that they generate. While companies will often provide a range of estimates for things like guidance on future earnings per share, providing a range of estimates for historical financial exhibits is not really a mainstream activity.

Which perhaps gets me back to the subject of error bars on graphs. In general I think that their presence in Data Visualisations can only add value, not subtract it. In my article entitled Limitations of Business Intelligence I include the following passage which contains an exhibit showing how the Bank of England approaches communicating the uncertainty inevitably associated with its inflation estimates:

Business Intelligence is not a crystal ball, Predictive Analytics is not a crystal ball either. They are extremely useful tools [...] but they are not universal panaceas.

The Old Lady of Threadneedle Street is clearly not a witch[...] Statistical models will never give you precise answers to what will happen in the future – a range of outcomes, together with probabilities associated with each is the best you can hope for (see above). Predictive Analytics will not make you prescient, instead it can provide you with useful guidance, so long as you remember it is a prediction, not fact.

While I can’t see them figuring in formal financial statements any time soon, perhaps there is a case for more business Data Visualisations to include error bars.
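For completeness, here is a small matplotlib sketch – using invented replicate figures rather than anything from the JMB paper or a real business – of the kind of bar-plus-error-bar exhibit discussed above: the bar height is the mean of the repeated measurements and the error bar shows their standard deviation.

```python
import statistics
import matplotlib.pyplot as plt

# Invented duplicate/triplicate measurements for three conditions
replicates = {
    "Condition A": [96, 102, 99],
    "Condition B": [64, 71, 69],
    "Condition C": [31, 28, 33],
}

labels = list(replicates)
means = [statistics.mean(v) for v in replicates.values()]
stdevs = [statistics.stdev(v) for v in replicates.values()]

fig, ax = plt.subplots()
ax.bar(labels, means, yerr=stdevs, capsize=5, color="steelblue")
ax.set_ylabel("Measured value (arbitrary units)")
ax.set_title("Means of replicate measurements with error bars")
plt.tight_layout()
plt.show()
```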

In Summary

So, as is often the case, I have embarked on a journey. I started with an early example of Data Visualisation, diverted into a particular branch of science with which I have some familiarity and hopefully returned, again as is often the case, to make some points which I think are pertinent to both the Business Intelligence practitioner and the consumers (and indeed commissioners) of Data Visualisations. Back in “All that glisters is not gold” – some thoughts on dashboards, I made some more general comments about the best Data Visualisations having strong informational foundations underpinning them. While this observation remains true, I do see a lot of value in numerically able and intellectually curious people using Data Visualisation tools to quickly make connections which had not been made before and to tease out patterns from large data sets. In addition there can be great value in using Data Visualisation to present more quotidian information in a more easily digestible manner. However I also think that some of the learnings from science which I have presented in this article suggest that – as with all powerful tools – appropriate discretion on the part of the people generating Data Visualisation exhibits and on the part of the people consuming such content would be prudent. In particular the business equivalents of establishing controls, applying suitable rigour to data generation / combination and including information about uncertainty on exhibits where appropriate are all things which can help make Data Visualisation more honest and thus – at least in my opinion – more valuable.


Notes


[1]
Watson, J.D. and Crick, F.H.C. (1953). Molecular structure of nucleic acids: a structure for deoxyribose nucleic acid. Nature 171, 737–738.

[2]
Thomas, J.A., Tate, C.G. (2014). Quality Control in Eukaryotic Membrane Protein Overproduction. J. Mol. Biol. [Epub ahead of print].

[3]
The list of scientists involved in the development of X-ray Crystallography and Structural Biology which was presented earlier in the text encompasses a further nine such laureates (four of whom worked at my wife’s current research institute), though sadly this number does not include Rosalind Franklin. Over 20 Nobel Prizes have been awarded to people working in the field of Structural Biology; you can view an interactive timeline of these here.

[4]
The intensity, size and position of blots are often digitised by specialist software, but this is an aside for our purposes.

[5]
Plus four other analogous exhibits which appear in the paper and relate to different proteins.

[6]
Normalisation has a precise mathematical meaning, actually (somewhat ironically for that most precise of activities) more than one. Here I am using the term more loosely.

[7]
That’s assuming you don’t want to get into log scales, something I have only come across once in over 25 years in business.

[8]
The uptick could be as compared to the week before, or to some other week (e.g. the same one last year or last month maybe) or versus an annual weekly average. The change is what is important here, not what the change is with respect to.

[9]
Of course some element of real-time information is indeed both feasible and desirable; for more analytic work (which encompasses many aspects of Data Visualisation) what is normally more important is sufficient historical data of good enough quality.

[10]
Anyone interested in some of the reasons for this is directed to my earlier article Patterns patterns everywhere.

[11]
See my series of three articles on Using historical data to justify BI investments for just one example of these.

[12]
But then 1=2 for very large values of 1

 

Information Is Beautiful Awards 2014 Announced


 

Last Wednesday, November 12th, 2014, the third annual Information is Beautiful Awards celebrated data visualization at its best. Hundreds of entries were trimmed to an elite set of outstandingly illuminating infographics, over which the judges deliberated long and hard. Now, with thanks to their generous sponsors Kantar, here are the winners.

Data Visualization


Gold - Rappers, Sorted by Size of Vocabulary by Matthew Daniels

Silver - Weather Radials Poster by Timm Kekeritz

Bronze - The Depth of the Problem by The Washington Post

Special mention - The Analytical Tourism Map of Piedmont by Marco Bernardi, Federica Fragapane and Francesco Majno

 

Infographic


Gold - Creative Routines by RJ Andrews

Silver - Game of Thrones Decoded by Heather Jones

Bronze - The Graphic Continuum by Jonathan Schwabish and Severino Ribecca

 

Interactive


Gold - The Refugee Project by Hyperakt and Ekene Ijeoma

Silver - How Americans Die by Bloomberg Visual Data

Joint Bronze - Commonwealth War Dead: First World War Visualised by James Offer

Joint Bronze - World Food Clock by Luke Twyman

 

Motion infographic


Gold - NYC Taxis: A Day in the Life by Chris Whong

Silver - Beyond Bytes by Maral Pourkazemi

Bronze - Everything You Need to Know about Planet Earth by Kurzgesagt

Special mention - Energy by Adam Nieman

 

Website


Gold - Selfiecity by Moritz Stefaner

Silver - OECD Regional Well-Being by Moritz Stefaner

Bronze - After Babylon by Sofia Girelli, Eleonora Grotto, Pietro Lodi, Daniele Lupatini and Emilio Patuzzo

 

Tool


Gold - RAW by Density Design Research Lab

Silver - Kennedy by Brendan Dawes

Bronze - Figure it Out by Friedrich Riha

 

Student

Sam Slover, Wrap Genius


 

Individual

Brendan Dawes, Kennedy


 

Studio


Women in Science and HP What Matters by FFunction

 

Corporate


Biobased Economy by Schwandt Infographics

 

Community

The Rite of Spring by Stephen Malinowski and Jay Bacal


 

Most Beautiful


RAW by Density Design Research Lab

 

Infographic: Gaming Family Tree

Readers:

Chris Lee from Plusnet sent me today’s blog entry.  Chris recently put together a series of graphics called “Gaming Family Trees” – the idea is to visually show the progression of 5 popular series of games (Mario, Zelda, Pokémon, Mario Sports, Sonic).

Thanks Chris for sending me your visualizations!

If you have an interesting data visualization or infographic you would like to share, please send them to me at michael@dataarchaeology.net.

Best Regards,

Michael

Gaming Family Tree

The 5 gaming family tree graphics use a combination of genealogy and creative license to visually represent the evolution of some of the most popular series of games.

It’s worth noting here that all cover images are used under a fair use license to illustrate the history of the games. All graphics except for Mario Sports work on a top-down level; the oldest titles in each linked sub-series are at the top, moving downward toward the newest. To keep them visually appealing (by removing large gaps), titles on the same horizontal level weren’t necessarily released in the same year – see below for an example:

Gaming Family Trees

The links between titles are intended to explain how they fit together, and clarification has been provided in some cases. Some titles aren’t linked to anything – this means they’re part of the overarching series, but not part of a sub-series (i.e. no direct sequels, not a remake). In the example below, Sonic Battle is the only fighting game in the Sonic world, but features all the characters from the series:

Gaming Family Trees 2

The borders of each title show which console it was released on. Some titles, such as Sonic Heroes (below), have various colours on the border – this means the game was released on several consoles:

Gaming Family Trees 3
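As a rough illustration of the structure the graphics encode (this is not Plusnet’s actual data or tooling, and the entries below are just a plausible fragment), each title can be thought of as a node carrying its release year and console list, with an optional link to the earlier title it follows on from; unlinked titles are the standalone entries described above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Title:
    name: str
    year: int                      # release year; vertical position is only roughly chronological
    consoles: List[str]            # rendered as border colours; several entries = multi-console release
    follows: Optional[str] = None  # the earlier title this one links to; None = standalone entry

# Hypothetical fragment of a family tree
tree = [
    Title("Sonic the Hedgehog", 1991, ["Mega Drive"]),
    Title("Sonic the Hedgehog 2", 1992, ["Mega Drive"], follows="Sonic the Hedgehog"),
    Title("Sonic Heroes", 2003, ["GameCube", "PS2", "Xbox"], follows="Sonic the Hedgehog 2"),
    Title("Sonic Battle", 2003, ["Game Boy Advance"]),  # unlinked: part of the series, not a sub-series
]

for t in tree:
    link = f"follows {t.follows}" if t.follows else "standalone"
    print(f"{t.year}: {t.name} [{', '.join(t.consoles)}] ({link})")
```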

Not all titles from each series were included, for various reasons. Some titles drift from canon and are the subject of controversy among fans. Some were licensed spin-offs that didn’t fit the vibe of the graphic too well. Others were just too obscure.

As a tribute to the titles that didn’t make it, below we’ve picked out one title from each series that didn’t make the final graphics:

  • Pokémon: Pokémon Snap, the game that sees you carted through various landscapes taking pictures of the Pokémon who live there.
  • Legend of Zelda: Hyrule Warriors, a hack ‘n’ slash game set in the Zelda universe.
  • Mario: Mario Paint, a ‘game’ that allows users to draw and create custom animations.
  • Mario Sports: the endless and varied selection of Mario board games!
  • Sonic the Hedgehog: Waku Waku Sonic Patrol Car, where Sonic plays the role of a police officer keeping the city safe from criminal mastermind Eggman.

This time Mario, Zelda, Pokémon and Sonic made the cut, but there are hundreds of other candidates – Yu-Gi-Oh? Digimon? Grand Theft Auto? Maybe even the whole FPS genre? It really depends how geeky you want to go.

You can see the family trees as we post them below:

Pokemon

Pokémon Family Tree

Legend of Zelda

Legend of Zelda

Mario

Mario

Mario Sports

Mario Sports

Sonic

Sonic family tree

DataViz: Monumental Budget Busters (Podio.com)

Readers: 

As a followup to the visualization I shared on mankind’s greatest architectural achievements, Britt from Podio.com shared a chart her team has created. It depicts the world’s most over-budget projects, which shockingly consist of some of the world’s most monumental buildings, such as the Montreal Olympic Stadium and the Empire State Building.

To give you more of an overview, it charts large-scale projects in history known for cost overruns. Project timeframes, budgets and costs were taken from reliable news media and academic literature to create a comparison chart that allows you to contrast each project’s percentage over budget, years over deadline and total amount over budget.

Simply click on any of the images to see the more detailed information.

You have the option to view it via the grid or the graph view. I highly recommend checking out the graph view as it will give you a better picture of how the costs to build each building compare to one another.

Thanks to Britt and her team for sharing this with me. 

Here is where you can view the entire interactive visualization: https://podio.com/site/budget-busters

Best regards, Michael

Monumental Budget Busters

When I clicked on the Scottish Parliament Building, the following information displayed.

Scottish Parliament Building

Dataviz: Crayola Crayons – How Color Has Changed

Readers:

When I was a young boy, I loved to color with my big box of Crayola Crayons. I would pull out blank sheets of paper and create multi-colored masterpieces (at least my mother said so).

Crayola’s crayon chronology tracks their standard box, from its humble eight-color beginnings in 1903 to the present day’s 120-count lineup. According to Crayola, of the seventy-two colors from the official 1975 set, sixty-one survive. [1]

A creative dataviz type who goes by the name Velociraptor (referred to from here on as “Velo”) created the chart below to show the historical crayonology (I just made that word up!) of Crayola Crayon colors.

 

Velo gently scraped Wikipedia’s list of Crayola colors, corrected a few hues, and added the standard 16-count School Crayon box available in 1935.

Except for the dayglow-ski-jacket-inspired burst of neon magentas at the end of the ’80s, the official color set has remained remarkably faithful to its roots!

Ever industrious, Velo also calculated the average growth rate: 2.56% annually. For maximum understandability, he reformulated it as “Crayola’s Law,” which states:

The number of colors doubles every 28 years!

If the Law holds true, Crayola’s gonna need a bigger box, because by the year 2050, there’ll be 330 different crayons! [1]
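For the curious, the arithmetic behind “Crayola’s Law” is easy to check. The sketch below assumes the 2.56% annual growth rate quoted above and a 120-colour box around 2010, when the chart was produced; both are taken from the text rather than from any new data.

```python
import math

annual_growth = 0.0256  # average annual growth rate quoted above
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(f"Doubling time: {doubling_time:.1f} years")  # roughly 27.4, i.e. about 28 years

colors_2010 = 120  # assumed size of the standard box when the chart was made
colors_2050 = colors_2010 * (1 + annual_growth) ** (2050 - 2010)
print(f"Projected colours in 2050: {colors_2050:.0f}")  # about 330
```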

A Second Version

Velo was not satisfied with his first version, so he produced the second version below. [2]

Crayola Color Chart

A Third Version (and interactive too!)

Click through to the interactive version for a larger view with mouseover color names!

———————————————————————————–

References:

[1] Stephen Von Worley, Color Me A Dinosaur, The History of Crayola Crayons, Charted, Data Pointed, January 15, 2010, http://www.datapointed.net/2010/01/crayola-crayon-color-chart/.

[2] Stephen Von Worley, Somewhere Over The Crayon-Bow, A Cheerier Crayola Color Chronology, Data Pointed, October 14, 2010, http://www.datapointed.net/2010/10/crayola-color-chart-rainbow-style/.

DataViz: The Most Common Jobs For The Rich, Middle Class And Poor (NPR)

NPR has written a lot about how income has changed (or not) for the rich, middle class and poor in the U.S. In the past, however, they have written much less about what the rich, middle class and poor actually do for work.

To remedy that, NPR made this graph. It shows the 10 most popular jobs in each income bracket.

Job Ladder

Notes
Data from 2012, adjusted for inflation.

If you click on each job, you can see where it appears in different income brackets.

Job Ladder - Truck Drivers

The jobs here look shockingly familiar. It’s like a Richard Scarry model of the labor market, with people working jobs ripped right out of a storybook. This is the kind of work that needs to get done in every city in America. It shows that, at least nationally, the conventional idea of what people do for a living still holds.

Looking across incomes and rankings there are a couple of interesting things to note:

  • It’s good to be the boss: Being a manager is the most common job from the 70th percentile up to the 99th.
  • Doctors and lawyers are only found in the top two brackets. (There’s a reason our grandmothers wanted us to go to med school or law school.)
  • Sales supervisors are well-represented across all groups. It’s a broad job title that applies to people making as little as $12,000 a year all the way up to six figures.

The data come from the American Community Survey, using individual income from wages and salaries. We restricted the sample to adults ages 25 to 65 who worked at least three months in the past year.
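A minimal pandas sketch of that sample restriction, assuming you have ACS person-level microdata loaded into a DataFrame; the file name and the column names used here (age, weeks_worked, wage_income, occupation) are placeholders for illustration, not the survey’s actual variable codes.

```python
import pandas as pd

# Placeholder column names; real ACS microdata uses its own variable codes
acs = pd.read_csv("acs_persons_2012.csv")  # hypothetical extract

sample = acs[
    acs["age"].between(25, 65)          # adults aged 25 to 65
    & (acs["weeks_worked"] >= 13)       # worked at least ~three months in the past year
    & (acs["wage_income"] > 0)          # income from wages and salaries only
]

# Ten most common occupations within each income bracket (deciles here, for illustration)
sample = sample.assign(bracket=pd.qcut(sample["wage_income"], 10, labels=False))
top_jobs = (
    sample.groupby("bracket")["occupation"]
    .value_counts()
    .groupby(level=0)
    .head(10)
)
print(top_jobs)
```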

————————————————————-

References: Quoctrung Bui, The Most Common Jobs For The Rich, Middle Class And Poor, NPR.com, October 16, 2014, 12:50 PM ET, http://www.npr.org/blogs/money/2014/10/16/356176018/the-most-popular-jobs-for-the-rich-middle-class-and-poor.

Dataviz: A True West Moment, Bob Boze Bell and The Stagecoach Era

Bob Boze Bell - The Stagecoach Era

I’ve mentioned Bob Boze Bell (photo, right) and his A True West Moment column that appears in our Sunday edition of The Arizona Republic before. As I said then, one of my favorite things to do in life is read the Sunday newspaper. I have been doing this since I was around 10 years old. I always read the comics first, but that has diminished as most of my favorite comic strips are long since retired. However, in The Arizona Republic, one of the first things I read every Sunday is A True West Moment by the legendary Bob Boze Bell.

True West Magazine, November 2014

In 1999, Boze took over the legendary True West Magazine. Launched in 1953 by Joe “Hosstail” Small in Austin, Texas, True West is a popular history publication with a loyal, core readership, and the oldest continuously published Western Americana publication in the world. Thanks to the proliferation of TV Westerns in the late 1950s and early 1960s, the magazine enjoyed broad circulation (200,000+ newsstand sales). But, as the market and his health started to decline, Joe Small sold out in 1974 and over the next decade, the magazine bounced around the Midwest, finally settling in Stillwater, Oklahoma. Unfortunately, the Oklahoma owners did not have the capital to stay current with the changing times and the magazine began to lose significant market share, as newer, slicker titles such as Cowboys & Indians and American Cowboy came into the marketplace. By mid-1999, the publication, along with three other titles, was for sale, and the current owners came to the rescue. True West Publishing (including assets and trademarked names of True West, Old West, Frontier Times) moved to Cave Creek, Arizona, in October 1999. [SOURCE]

In 2003, the magazine celebrated its 50th anniversary. The year also marked the incorporation of True West Publishing and an increase in the magazine’s frequency to 10 issues. True West now also publishes an annual shopping guide called the Best of the West Source Book.

Stephen Few: Now You See It

Portland

Readers:

I was in Portland, Oregon last week attending three data visualization workshops by industry expert Stephen Few. I was very excited to be sitting at the foot of the master for three days, soaking in all of this great dataviz information.

Last Thursday was the third workshop, Now You See It, which is based on Steve’s best-selling book (see photo below).

So as not to give away too much of what Steve is teaching in the workshops, I have decided to discuss one of our workshop topics: human perceptual and cognitive strengths.

You can find future workshops by Steve on his website, Perceptual Edge.

Best Regards,

Michael

Now You See It

 

Designed for Humans

Good visualizations and good visualization tools are carefully designed to take advantage of human perceptual and cognitive strengths and to augment human abilities that are weak. If the goal is to count the number of circles, the visualization below isn’t well designed: it is difficult to remember what you have and have not counted.

Quickly, tell me how many blue circles you see below.

Design for Humans 1

The visualization below shows the same number of circles but is well designed for the counting task. Because the circles are grouped into small sets of five each, it is easy to remember which groups have and have not been counted, easy to quickly count the number of circles in each group, and easy to discover with little effort that each of the five groups contains the same number of circles (i.e., five), resulting in a total count of 25 circles.

Design for Humans 2

The arrangement below is better yet.

Design for Humans 3

Information visualization makes possible an ideal balance between unconscious perceptual and conscious cognitive processes. With the proper tools, we can shift much of the analytical process from conscious processes in the brain to pre-attentive processes of visual perception, letting our eyes do what they do extremely well.

Stephen Few: Show Me The Numbers

Readers:

I am in Portland, Oregon this week attending three data visualization workshops by industry expert Stephen Few. I am very excited to be sitting at the foot of the master for three days, soaking in all of this great dataviz information.

Yesterday was the first workshop, Show Me the Numbers, which is based on Steve’s best-selling book (see photo below).

So as not to give away too much of what Steve is teaching in the workshops, I have decided for today to show a “before and after” example with Steve’s explanation of why he made the changes he did.

You can find future workshops by Steve on his website, Perceptual Edge.

Best Regards,

Michael

Show Me the Numbers

 

“Before” Example

In the example below, the message contained in the titles is not clearly displayed in the graphs. The message deals with the ratio of indirect to total sales – how it is declining domestically, while holding steady internationally. You’d have to work hard to get this message from the display as it is currently designed.

Before - Show Me the Numbers

 

“After” Example

The revised example below, however, is designed very specifically to display the intended message. Because this graph is skillfully designed to communicate, its message is crystal clear. A key feature that makes this so is the choice of percentage for the quantitative scale, rather than dollars.

After - Show Me the Numbers
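As a rough sketch of the percentage-scale idea (this is not a reproduction of Steve’s figure, and the dollar figures are invented placeholders), the snippet below computes indirect sales as a share of total sales and plots that ratio directly, so the domestic decline and the steady international share are visible without the reader doing the arithmetic.

```python
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]

# Invented dollar figures (millions): indirect and total sales per quarter
domestic_indirect, domestic_total = [40, 38, 33, 28], [100, 105, 110, 112]
intl_indirect, intl_total = [18, 19, 21, 22], [60, 64, 70, 74]

domestic_ratio = [100 * i / t for i, t in zip(domestic_indirect, domestic_total)]
intl_ratio = [100 * i / t for i, t in zip(intl_indirect, intl_total)]

fig, ax = plt.subplots()
ax.plot(quarters, domestic_ratio, marker="o", label="Domestic")
ax.plot(quarters, intl_ratio, marker="o", label="International")
ax.set_ylabel("Indirect sales as % of total sales")
ax.set_ylim(0, 50)
ax.legend()
ax.set_title("Indirect share of sales (illustrative figures)")
plt.tight_layout()
plt.show()
```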

Additional Thoughts From Steve

The type of graph that is selected and the way it’s designed also have great impact on the message that is communicated. By simply switching from a line graph to a bar graph, the decrease in job satisfaction among those without college degrees in their later years is no longer as obvious.

More Thoughts - Show Me the Numbers

Infographic: Mount Hood, Portland, Oregon (Ben VanderVeen)

Readers:

I am in Portland, Oregon this week attending three data visualization classes by industry expert Stephen Few. I am very excited to be sitting at the foot of the master for three days, soaking in all of this great dataviz information.

To follow my theme of highlighting cities I visit, I found an infographic of Mount Hood created by Ben VanderVeen. Ben is a filmmaker and designer living on the west coast. On his website you will find examples of his design work, video projects, documentary film and more. If you are interested in hiring Ben for a project or working collaboratively, visit his contact page here.

Stay tuned for highlights of each of the three classes by Stephen Few over the next few days.

Best Regards,

Michael

 

Mount Hood infographic by Ben VanderVeen
