Yay! Spring Break!
It’s finally upon us, that “week off” you’ve all been wishing and begging for is almost here. Maybe you have something exciting planned–a vacation, an adventure, an extended period of wearing nothing but pajamas–or perhaps you’re using this time to get caught up on work. If you’re in the latter category, consider this blog post an entertaining break for your brain. If you’re in the former camp, firstly, you should know that I’m jealous; secondly, consider saving this post for if/when you find yourself living that paradoxical cliché: “I need a vacation from this vacation,” which, I believe, belongs to the “Grass is Always Greener” school of thought.
Visual literacy is hot right now. Hotter than the beach in Florida you’re sunning yourself on? Maybe not; I have no idea what the future temperature of a hypothetical Floridian beach may be. But, believe me when I say it’s a topic with a lot of currency. First of all, what is visual literacy?
If you’re more of a “chunk of text” person, read more about the ACRL’s (Association of College and Research Libraries) definition of visual literacy below:
Visual literacy is a set of abilities that enables an individual to effectively find, interpret, evaluate, use, and create images and visual media. Visual literacy skills equip a learner to understand and analyze the contextual, cultural, ethical, aesthetic, intellectual, and technical components involved in the production and use of visual materials. A visually literate individual is both a critical consumer of visual media and a competent contributor to a body of shared knowledge and culture.
In an interdisciplinary, higher education environment, a visually literate individual is able to:
• Determine the nature and extent of the visual materials needed
• Find and access needed images and visual media effectively and efficiently
• Interpret and analyze the meanings of images and visual media
• Evaluate images and their sources
• Use images and visual media effectively
• Design and create meaningful images and visual media
• Understand many of the ethical, legal, social, and economic issues surrounding the creation and use of images and visual media, and access and use visual materials ethically
Read more about the ACRL’s visual literacy standards here.
For the purposes of this blog post, I’d like to single out the two of these in bold above, known as Standards Three and Four, and share a couple of their relevant (for our purposes) “performance indicators.”
- identifies information relevant to an image’s meaning.
- evaluates the effectiveness and reliability of images as visual communications.
- evaluates textual information accompanying images.
- makes judgments about the reliability and accuracy of image sources.
Now that we have the academic lingo down, it’s time to bring on the fun!
For the artsy cartoon nerds out there
That image is at least vaguely on-topic, since this post is going to focus on the intersection of image and text, and specifically, the infographic.
First things first, what is an infographic? Simply put, it’s a visual representation of information, usually in the form of a diagram with minimal accompanying text, that seeks to present often complex content in relatively simple terms. You probably already had a pretty good sense of the definition, since you can hardly load a webpage these days without seeing some infographic or other.
But what many people don’t know is just how hard it actually is to find effective infographics. In our image-saturated society, it’s simply not possible to apply your critical thinking skills to every image you’re confronted with. But infographics aren’t selfies; it goes without saying that you’re not going to take seriously the picture your Facebook friend took of her reflection in the bathroom of a dirty bar. And why would you? There’s a toilet in the background.
No, infographics are special because unlike many other types of images, they present themselves as a visually appealing framework for the clear transmission of authoritative, factual information to the masses. In this sense, infographics have a lot in common with photojournalism: news is supposed to be authoritative and factual, available to the masses, clear and succinct, and full of visual interest. Plus, like infographics, a lot of news media is accompanied by text. But somehow the news and journalism are easier to be wary of: we’re more likely to know or question the ideology of the journalist or news outlet, and they’re more likely to tell us. You don’t often get that with infographics. And this is where our bag of visual literacy tricks comes in handy.
Let’s start with an example:
It looks pretty slick and it’s fairly easy to read the data, so it’s doing a pretty good job, right?
If you look even somewhat closely at this chart you’ll see that there are a lot of problems. And since there are so many, we’ll structure our analysis using the performance indicators of Visual Literacy Standards Three and Four.
- Identifies information relevant to an image’s meaning.
Let’s start with the most obvious thing, what exactly it is we’re looking at: it’s a chart, specifically a radar chart. It represents the opinions of different socioeconomic groups about the secrets to success. The little legend shows us which colors correspond to which groups. If we look at the top, we see that this is probably from a publication called “Infografika” and it appeared in the first issue. If we look at the bottom, we see the source of the data is the “Obshestvennoe Mnenie Fund” and the name of the illustrator. To speak very briefly about what the data show, it seems “the poor” think that success is based on who you know, and lying and cheating your way to the top. “Rich people” believe hard work is the single greatest determinant for success, while the middle class are evenly split between connections and a good education.
There’s one thing I haven’t mentioned, and that’s the subheading. The first line provides a little information on the methods of the study: people were asked “what’s the secret to success?” Fair enough. Let’s put a pin in the second line for now.
If you look at the ACRL Visual Literacy Standards, each performance indicator has a bunch of “learning outcomes.” Here’s one associated with our current performance indicator:
“Recognizes when more information about an image is needed, develops questions for further research, and conducts additional research as appropriate”
We could start with figuring out what “Infografika” is. And what the “Obshestvennoe Mnenie Fund” is. I’ll tell you right now, though, most of that info is in Russian. The latter *may* refer to the Russian Public Opinion Research Center, however, my Russian is a little rusty (read: non-existent). Here’s a screencap of the homepage for Infografika, which, it turns out, is a Russian magazine with nothing but images of infographics in it:
…virtually incomprehensible for the non-Russian speaker
There are definitely other areas where more information is needed. The numbers on the chart: are those… percentages? Because they add up to more than 100; but that might be fine if people were allowed to give more than one answer. They might also be raw numbers of respondents. Honestly, we don’t know. And speaking of the study, what kind was it? A survey? An interview? How many people took it? How were respondents solicited? Another unanswered question concerns the metrics for the different groups: what counts as “rich people”? “Middle class”? “Poor”? We just don’t know.
- Evaluates textual information accompanying images.
Most of the text is clear enough, though as we noted earlier, some information is left out. Remember how we put a pin in the second line of the subheading? Let’s return to that now:
Grammatical problems aside, what’s wrong with this statement?
1) It draws a definitive conclusion. There’s no “these findings suggest that…” to soften the statement or make clear any limitations of the study. Without knowing details about the survey, how can we judge if this data is broadly applicable?
2) The statement does not represent a foregone conclusion inherent in the results. If I asked 100 random people which they like more, chocolate or vanilla, and 98% said they liked chocolate best, a statement like “according to this study, more people like chocolate than they do vanilla” is absolutely a foregone conclusion. That 98% is greater than 2% is a fact. But if I said “because chocolate is more delicious than vanilla, it is the flavor of choice,” that would be a sloppy assumption on my part, because I never asked why people like chocolate better: maybe they like the color, or maybe it’s cheaper, or maybe they’re giving away free cars with every chocolate purchase. Furthermore, deliciousness is an opinion, not a fact, so even if everyone did say they thought it was more delicious, it’s my *ethical responsibility* as a researcher to avoid jumping to conclusions and making unfounded assumptions.
Which brings us to: 3) It’s irresponsible and unethical for suggesting that the responses of the “poor” in some way demonstrate a need to change their “life approach.” I’m not totally sure what they even mean, but it certainly seems to be shifting the burden of responsibility for poverty onto the poor themselves and their negative attitudes. This is a chicken-and-egg thing: are the “poor” living in poverty because of their beliefs or are their beliefs caused by living in poverty? Consider this alongside the answers of the “rich”: they believe “hard work” is the secret to success. But really, not all rich people work hard or became rich by working hard. And “hard work” entails what, exactly? Do more rich people work hard than do middle and lower class people? I don’t know, and it doesn’t really matter: this survey is about opinions. You can believe that hard work was the secret to your success all you want, it won’t necessarily make it true. If this survey shows anything, it’s that rich people believe in the power of the individual as the biggest determining factor for success, while poor people are more cynical in their perception of success as something ill-gotten or reserved for an exclusive group who already have social and financial resources.
- Evaluates the effectiveness and reliability of images as visual communications.
- Makes judgments about the reliability and accuracy of image sources.
Is this infographic effective? The data in the chart is pretty easy to read and understand. Overlaying the colors effectively communicates how different the opinions of the three groups are. The layout and design are fairly clean and attractive. So we can at least say that it’s visually appealing and reasonably easy to understand.
But it’s not reliable. Definitely not. We know pretty much nothing about the data and where it came from, the study methods, or the research design. Plus there’s that second line in the subheading. This was featured on a list of bad infographics, whose author perfectly summed up its message: “Ugh, poor people.”
So, basically, when it comes to infographics and critical thinking, you might imagine a grid like this:
Is this an infographic about infographics?
Thus, the visually literate student can figure out where a given infographic falls on this spectrum.
The one we’ve already looked at would go here:
Now let’s look at some examples from the other quadrants.
Source: New York Times
In the days after Supreme Court Justice Scalia died, the New York Times tweeted this image. It’s a more concise version of another visualization that I definitely recommend checking out.
Is it effective? It’s easy to read, despite the fact that the information it contains is quite complicated. The colors are easy to see and understand, and the line that corresponds to Scalia’s voting record is visible without hindering our ability to see the surrounding information. The labels are clear and pretty easy to read. This chart was featured on the website coolinfographics.com, which, with a name like that, hopefully needs very little in the way of introduction. That blog singled out five effective design attributes that this chart possesses: a minimal chart legend, minimal axis labels, use of opacity (emphasizing Justice Scalia’s data), minimal grid lines, and minimal text on the page.
Is it reliable? We know the names and affiliations of the people who made the chart, we know where the data came from (Supreme Court Database), and we know how the data points were established (Martin-Quinn scores). I didn’t know anything about Martin-Quinn scores, so I did a little research. They were developed by two of the professors who made this chart (Martin and Quinn). If you want to know more, read about it here, because I couldn’t reliably summarize it further. Developing good measures isn’t easy, but they do have replication materials available, which makes the work seem pretty scientifically sound. So let’s just say, yes, it’s reliable.
Source: New South Wales Government
The New South Wales Government wants you to know it’s recruiting more nurses! Just look at those stacks of nurses; visually, it’s impressive!
Is it effective? Well, it’s pretty easy to understand and it gets the point across–whoa, that’s a lot of nurses! But if you look at the stacks against the numbers, you may quickly realize that it’s not as effective as it initially appears. Four pink people represent between 43,000 and 43,500 nurses; 32 pink people represent approximately 46,500 nurses; and 40 pink people represent 47,500 (or more) nurses. So… a difference of nearly 300 nurses doesn’t warrant another pink person, but a difference of about 3,100 nurses is represented by 28 people? Each of those 28 new pink people must represent about 110 nurses… but in the first three “bars,” each of the four pink people represents between 10,787 and 10,851 nurses. Wha?
Or, to think about it another way, a 700% increase in pink people between 2010/11 and 2011/12 is used to show a 7% increase in the number of nurses. So, it’s visually misleading and therefore really not very effective.
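If you want to check that arithmetic yourself, here’s a quick sketch. The icon counts and nurse totals are my approximate readings of the graphic, so treat them as ballpark figures rather than official numbers:

```python
# Approximate values read off the NSW nurses graphic
icons_2010 = 4          # pink people in the 2010/11 stack
icons_2011 = 32         # pink people in the 2011/12 stack
nurses_2010 = 43_400    # roughly where the 2010/11 bar sits
nurses_2011 = 46_500    # roughly where the 2011/12 bar sits

# Growth in icons vs. growth in actual nurses
icon_growth = (icons_2011 - icons_2010) / icons_2010 * 100      # 700%
nurse_growth = (nurses_2011 - nurses_2010) / nurses_2010 * 100  # ~7.1%

# Nurses per icon in the early bars vs. per *added* icon later
per_icon_early = nurses_2010 / icons_2010                       # ~10,850
per_added_icon = (nurses_2011 - nurses_2010) / (icons_2011 - icons_2010)  # ~110
```

A pictogram drawn to scale would keep nurses-per-icon constant; here it swings by two orders of magnitude, which is exactly why the stacks mislead.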
Is it reliable? The data are correct; New South Wales did in fact increase the number of nurses over this period of time. The numbers, at least, are reliable.
Source: Fox News; Retrieved via Google
Where to even start?
Is it effective? No. Just no. First of all, it’s really ugly. Look at the wheels and cogs in the background: what does this have to do with welfare and full time jobs? In addition to a lack of visual appeal, it’s making the same mistake the last graph made: The numerical difference between the bars is 6.9 million, an increase of slightly less than 7%, yet the size of the bar basically quadruples. Plus the y axis is unlabeled, making it basically useless.
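To make that distortion concrete, here’s the arithmetic behind the complaint, using the two figures printed on the Fox chart:

```python
# Figures as printed on the Fox chart
full_time_workers = 101.7e6   # "people with a full time job"
on_welfare = 108.6e6          # "people on welfare"

difference = on_welfare - full_time_workers          # 6.9 million
pct_increase = difference / full_time_workers * 100  # ~6.8%

# A bar drawn to scale should be about 1.07x as tall, not ~4x
print(f"difference: {difference / 1e6:.1f}M, increase: {pct_increase:.1f}%")
```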
Is it reliable? Oh definitely no. No, no, no. To begin, while “people on welfare” seems like it would designate a really obvious group, the Census Bureau doesn’t use the word “welfare” to describe government assistance programs; the term is “means-tested programs,” which includes things like public or subsidized housing, “food stamps,” and Medicaid, among others. These programs are separate from things like social security and veterans’ compensation, though someone *could* make the argument that these are “welfare” programs too, since the recipients are getting assistance from the government. So which programs is Fox talking about? We don’t know.
Well, actually, I do. I did some digging and found what they are referring to. The data come from the results of the Survey of Income and Program Participation from 2011, and specifically Table 2: People by Receipt of Benefits from Selected Programs: Monthly Averages, 4th Quarter. Just to make it easier, here it is below. The highlights are my own.
Source: United States Census Bureau
So, there it is. 108,592 (the table counts in thousands, so roughly 108.6 million people) who received benefits from a means-tested program; wow, that’s 35.4% of the population! Oh wait… that also includes anyone living in a household in which one or more people received such benefits. So, not every single person in that group is personally receiving such benefits.
Now let’s look at Table 4: Households by Number of Means-Tested Noncash Programs in Which Members Participate: Monthly Averages from the same period:
Source: United States Census Bureau
You’ll see that when considered by household, 27.2% of households receive benefits, which is a fairly significant decrease from 35.4%.
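As a quick back-of-the-envelope check, here are those two framings side by side. The inputs are the figures quoted above from the Census tables; the arithmetic (including backing out the total population the 35.4% implies) is mine:

```python
# Figures from the Census SIPP tables cited above (person counts in thousands)
people_in_recipient_households = 108_592  # Table 2, means-tested programs
person_share = 0.354                      # 35.4% of the population

# Back out the total population the 35.4% figure implies
implied_population = people_in_recipient_households / person_share
print(f"implied population: {implied_population:,.0f} thousand")  # ~306,757

# Counted by household instead of by person, the rate drops noticeably
household_share = 0.272                   # Table 4: 27.2% of households
drop = (person_share - household_share) * 100
print(f"35.4% of people vs 27.2% of households: a {drop:.1f}-point drop")
```

Same underlying data, two defensible denominators, two quite different headlines: that’s precisely the kind of framing choice a visually literate reader looks for.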
As to the second data point, honestly, I looked through a bunch of stuff on the Census website and simply couldn’t find any data about employment from 2011. So I went to the Bureau of Labor Statistics website, which is frankly the best place to get data about employment, anyway. The site has relevant information about employment in 2011, and we’ll start by looking at data from Table 8: Employed and unemployed full- and part-time workers by age, sex, race and Hispanic or Latino ethnicity (2011):
Source: Bureau of Labor Statistics
So, as you can see, there were 112,556,000 people 16 years of age and older who were employed full time, not 101.7 million, as Fox stated. But why are we limiting it to full-time employees anyway? Part-time workers are workers too. In any case, you get a fuller picture by also looking at the data from Table 3: Employment status of the civilian noninstitutional population by age, sex, and race (2011).
Source: Bureau of Labor Statistics
In 2011, approximately 140 million people had jobs, either full or part time. That’s only 58.4% of the population 16 and over; that’s it?!?! *cue outrage over how few people work* But wait, conversely there were only 13.8 million people who were unemployed, roughly 6% of the civilian non-institutional population. It makes sense that people who are 16-24 and over 65 aren’t employed at exceptionally high rates. I singled out the 25-54 age range and look, 75.1% of that group are employed! It’s not as bleak a picture as Fox would have you believe; and no matter where I looked, I couldn’t come up with their 101.7 million figure.
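To put those percentages next to each other, here’s the arithmetic using the approximate BLS figures cited above. The “official” unemployment rate at the end is my addition: it uses the standard labor-force denominator rather than the whole 16-and-over population:

```python
# Approximate 2011 figures from the BLS tables cited above
employed = 140.0e6    # full- and part-time workers
unemployed = 13.8e6   # unemployed

# Implied civilian noninstitutional population 16+, from the 58.4% figure
pop_16_plus = employed / 0.584                       # ~239.7 million

share_unemployed = unemployed / pop_16_plus * 100    # ~5.8% of everyone 16+
labor_force = employed + unemployed                  # people working or seeking work
u_rate = unemployed / labor_force * 100              # ~9%, the usual headline rate
```

Note how much the denominator matters: 13.8 million unemployed is “roughly 6%” of everyone 16 and over, but about 9% of the labor force, which is the figure usually reported.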
In conclusion: not reliable.
Which leaves us with a distribution of infographics that looks like this:
Phew, that was a lot of work. I was a much younger woman when I started writing this blog post. But that’s the point: being visually literate requires you to not only look at images and visual information critically, but then to employ strong research skills to figure out just how effective and/or reliable what you’re looking at really is. This naturally leads into issues of information literacy, and one’s ability to judge the authority and reliability of a source.
These skills also come in handy when you’re making infographics. Creating effective infographics requires a number of diverse skills; that’s why there are design firms uniquely dedicated to creating infographics for clients. If you’re looking for tips on how to make infographics, I recommend coolinfographics.com blogmaster Randy Krum’s book of the same name, of which the UIUC Library has both a physical and digital copy.
As you enjoy what’s left of your spring break, stay safe out there, folks: don’t let bad infographics happen to good people.