Monday, December 27, 2010

As the Sands of the Hourglass...

This is a nice graph showing where people came into and went out of my Gmail chat life. That gray line is the first day I used the service, which, as you'll note, is NOT in the middle of 2006 (tee hee, previous post. tee hee.)


Personally, I think this is an incredibly telling graph. It's taking, again, my top nine gchat buddies and showing how the volume of our interaction changes. For example, KRS shows a very solid line at the start, indicating frequent interaction early in this period, but tapers off. RPM starts off as a very casual gchat friend, but gains in intensity near the end. CRS is an interesting latecomer; she's my wife. She was given a Gmail account by ASU when she was accepted to the doctoral program, and I switched her personal email over to Gmail as well. We didn't start chatting until we were essentially engaged.

You'll note that she still makes it into my top nine. This is because of the size of those early chats. Unfortunately, this graph is not weighted; you get one dot every day we converse, no matter how long or short. I'm still learning R, and am trying to find out exactly how one goes about changing plotting colors according to the values of a variable. (Rather, I know how to do this for larger symbols, but not for dots).
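For the record, here's the sort of thing I'm trying to do, in case a passing R guru wants to correct me. This is a minimal sketch with a made-up data frame, not my actual chat data:

```r
# Hypothetical data: one row per chat, with a date and a byte count
chats <- data.frame(
  date  = as.Date("2010-01-01") + 0:99,
  bytes = sample(50:5000, 100, replace = TRUE)
)

# Bin each chat's size and map the bins onto a small palette,
# then color the dots (pch = 20) accordingly
palette_colors <- c("grey70", "steelblue", "darkred")
size_bin <- cut(chats$bytes, breaks = 3, labels = FALSE)  # returns 1, 2, or 3

plot(chats$date, chats$bytes,
     pch = 20,
     col = palette_colors[size_bin],
     xlab = "Date", ylab = "Chat size (bytes)")
```

If that's roughly the right idea, a weighted version of the buddy chart should be within reach.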

A note on the three-letter codes. In the interest of privacy, they are generally not the current complete initials of the person listed. However, in order to make it readable for me, they are pretty close. If you're listed here, and you want me to change the initials to protect your privacy on the internet, please let me know. I'm not Mark Zuckerberg, you know.


Saturday, December 18, 2010

The Size of Shakespeare, or, A Comedy of Errors

So.

It's been a little while, and I haven't neglected you, three blog readers. I've been working on a project in order to blow your collective mind, or at least give it a little something to chew on.

Specifically, I have delved deep into the realms of my Gmail chat logs and have begun to discover: data. Oh man, the trip this has been. And it's not over. There will be charts, there will be graphs, AND! there may be PODCASTING.

I intend to give you the tidbits I have learned in chewable form, piece by piece. Today's episode is: why you should examine your data thoroughly before you make any conclusions.

I wrote a Perl script to turn my wad of uncooked data into a delicious patty; it returned the size of each individual chat file along with other important stats. In the statistical scripting language R, I discovered that the sum total of chat content produced was, in a word, ridiculous. I did some calculations and made a graph that looks a little something like this:


Yes, it appeared that even my chattiest friend and I had produced a larger corpus of work, byte-wise, than Bill Shakespeare himself (the Bard wrote about 5 MB worth). Sweet mercy. Note: I have been using Gmail's Chat client, and occasionally Google Talk, since the former launched in the middle of 2006.
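For anyone who wants to reproduce that first pass at home, the R half of it was nothing fancy. Something like this sketch would do it (the CSV and its column names are stand-ins for whatever your own parsing script spits out, not my actual files):

```r
# Hypothetical per-chat table produced by the parsing script:
# one row per chat file, with buddy = three-letter code, bytes = file size
chats <- read.csv("chat_sizes.csv", stringsAsFactors = FALSE)

# Total bytes per buddy, biggest talkers first
totals <- sort(tapply(chats$bytes, chats$buddy, sum), decreasing = TRUE)

# Compare against the Bard's roughly 5 MB corpus
shakespeare_bytes <- 5 * 1024^2
barplot(totals[1:9] / shakespeare_bytes,
        ylab = "Corpus size (in Shakespeares)",
        las = 2)
abline(h = 1, lty = 2)  # the one-full-Shakespeare line
```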

The problem with this graph is that it's not actually accurate. When I examined the files a little further, I realized most of them looked like this:



Uh oh. So, of course, I had to write another Perl script. I found that code accounted for roughly 77 percent of the content created by Gmail chat. Here's the revised graph:


Tada! I think the most interesting development from this graph is quite simply that, even with the code stripped from the chats, a few of my friends and I have produced an entire corpus of text.
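If you'd rather run that code-stripping pass in R than Perl, a crude version of the idea looks like the following. The patterns here are a guess at generic chat-log markup, not a faithful copy of my script:

```r
# Read one saved chat file and strip anything that looks like markup,
# leaving (approximately) just the words people actually typed
strip_markup <- function(path) {
  raw     <- paste(readLines(path, warn = FALSE), collapse = "\n")
  no_tags <- gsub("<[^>]*>", "", raw)         # drop HTML/XML-ish tags
  gsub("&[a-zA-Z#0-9]+;", " ", no_tags)       # drop entities like &nbsp;
}

# Rough share of a file that was markup rather than conversation
markup_share <- function(path) {
  raw_bytes  <- file.info(path)$size
  text_bytes <- nchar(strip_markup(path), type = "bytes")
  1 - text_bytes / raw_bytes
}
```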

In the next few months, I'll look at some of the sociological implications of the date and size data, and then go all the way into the textual aspect of the transcripts. Analyzing the text itself should be tremendously interesting.

[Note: I changed some things as other pursuits have prevented me from diving into the actual text. Someday...someday.]

Thursday, November 18, 2010

Introducing the "Cite Your Source" button

Dear Following Few—

I made a web badge! Take the following HTML and plop it in a blog or other webpage—pass along the fact-checking goodness.






Monday, November 15, 2010

That R-pentomino Is So Hot Right Now

Why virality might be a real-life application of the least competitive game in the world.

Ars Technica's Casey Johnson wrote a stellar article about game theory being a more apt explanation of viral media than actual virology. The article points out that the epidemiological approach "is fitting for some cases, in others it's an oversimplification—a person's exposure to a trend doesn't always guarantee they will adopt it and pass it on." Essentially, this is the beginning of the explanation for why websites and gadgets succeed, while other, similarly featured ones fail.

The researchers from the AT article ran a couple of models based on game theory principles. The first assumed that the likelihood that a new computer application would be adopted by any given person was directly proportional to the number of friends in said person's network who had adopted it, and that knowledge of friends' adoption or non-adoption was 100%. This doesn't explain much—it creates a world with an infinite barrier to entry, but a preternatural tendency to growth. The second model denied absolute knowledge of friends' choices, and added a "try-it-out" rule: 100% adoption for nodes that had 0% knowledge of friends' tastes.
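To make the first model concrete, here's a toy version of it in R: a random friendship network, a few seed adopters, and an adoption probability proportional to the number of adopting friends. This is entirely my own sketch (the network size and the proportionality constant are arbitrary), not the researchers' code:

```r
set.seed(1)
n <- 200

# Random symmetric friendship network, roughly 5 friends per person
adj <- matrix(rbinom(n * n, 1, 5 / n), n, n)
adj <- (adj | t(adj)) * 1
diag(adj) <- 0

adopted <- rep(FALSE, n)
adopted[sample(n, 3)] <- TRUE   # a few seed adopters

for (step in 1:20) {
  adopting_friends <- as.vector(adj %*% adopted)   # adopters among my friends
  p_adopt <- pmin(1, 0.1 * adopting_friends)       # proportional to that count
  adopted <- adopted | (runif(n) < p_adopt)
}

mean(adopted)   # fraction of the network that ended up adopting
```

Notice the "infinite barrier to entry" the article mentions: anyone with zero adopting friends has zero probability of ever adopting, so the product can never reach parts of the network the seeds aren't connected to, no matter how many steps you run.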

This was starting to sound a little like Life. Not the cereal, nor the Zen police procedural, but the game. John Conway's Game of Life is a zero-player game. I seriously won't attempt to beat Wikipedia at explaining it (skim it now, then come back), but suffice it to say that outcomes are both absolutely predictable by machines that know the rules and can compute them on the fly, and terribly unpredictable and surprising to those who don't know the rules or can't, you know, do several hundred computations in a few milliseconds. Patterns that seem small and silly may spread for generations and generations, and intricate designs might collapse in just a few. (Play Life here.)
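For the compulsively hands-on, one generation of Life really is just a few lines. Here's my own toy R version of the standard rules (nothing clever; cells past the edge of the board simply fall off the world):

```r
# One generation of Conway's Life on a 0/1 matrix
life_step <- function(grid) {
  n <- nrow(grid); m <- ncol(grid)
  padded <- matrix(0, n + 2, m + 2)
  padded[2:(n + 1), 2:(m + 1)] <- grid

  # Sum the eight neighbors of every cell by shifting the padded board
  neighbors <- matrix(0, n, m)
  for (dr in -1:1) for (dc in -1:1) {
    if (dr != 0 || dc != 0) {
      neighbors <- neighbors + padded[2:(n + 1) + dr, 2:(m + 1) + dc]
    }
  }

  # Birth on exactly 3 neighbors; survival on 2 or 3
  (neighbors == 3 | (grid == 1 & neighbors == 2)) * 1
}

# Drop an R-pentomino in the middle of a small board and let it run
board <- matrix(0, 40, 40)
board[19:21, 19:21] <- rbind(c(0, 1, 1),
                             c(1, 1, 0),
                             c(0, 1, 0))
for (i in 1:100) board <- life_step(board)
sum(board)   # how many cells are still alive after 100 generations
```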

It's that unpredictable propagation that makes Life interesting. And while the rules of the game are surely different from the much more complex rules of social marketing, it stands to reason that a few things are similar: it's more important who knows about your product/website than how many of them there are to start off with. If people that people trust (read that phrase again) know about your content, so much the better. And if the social networks of your early adopters can serve to propagate your message to other widely-trusted individuals, it sounds like you have a really solid start.

There are HUGE amounts of conjecture in this one little post. We of course have no clue what the rules to the idea-passing mechanism are, how to determine who the starters for your viral marketing plan are, or what "special sauce" makes an idea likely to be passed. Memetics has largely failed in this regard; future research is desperately needed here.

Thursday, November 11, 2010

Conflictinator Alert - Veterans Day 2010

al-Google Veterans Day

So, here's a tempest in a teapot: an Associated Content post about Google's Veterans Day logo that claims the 'e' is actually a crescent of Islam. I'm not sure exactly where this ends, but it's possible the author actually believes the letter 'e' is a secret Muslim. Just to be safe, let's add everyone with an 'e' in their names to the No-Fly List.

Of course, when you bait the conflictinator trolls, they inevitably bite. HuffPo's response, of course, is to run with the AC author and claim that there's a widespread backlash about the logo. In their crazy, polarized view of the universe (perhaps fostered by spending too much time on the internet), the extreme right is one step away from besieging Mountain View with assault rifles, and maybe swastikas.

Way to contribute, guys.

All the President's Tax Cuts

Speaking of HuffPo, here's the title: White House Gives In On Tax Cuts. Here's the article (warning: contains serious hedging and low semantic density). Finding David Axelrod's statement that the president actually favors the extension of the tax cuts is hard, but finding anything that sounds like actually "giving in" is like playing Where's Waldo—with a Jackson Pollock painting.

Of course, the Atlantic and a few other outlets took this and ran with an "Obama gives in" type story.

Wednesday, November 10, 2010

The "Cite Your Source" Project

A little experiment.

As you might have been able to tell, I've been having difficulty finding time to blog this last week or so. (I'm working on other writing projects right now.) I've been thinking about The Problem of Information a lot, and I think I've come up with a short follow-up. It's a little social experiment, and I think it will be interesting to see if it catches on.

We all participate in online communities, whether it be in the comments section of a news website or blog, on Twitter, or just on Facebook. A lot of our arguments work like the discussions on major news outlets: we invoke statistics and other supporting evidence without citing our sources.

As you well know, these stats are not necessarily true, but by and large those who agree (and many who disagree) with the point being made never question the factuality of this data.

I propose that we start. Right now. I know it will definitely make you annoying to people, but I would like to encourage everyone here to respond, at least once, to an online claim made without citing a valid source of evidence, with a polite request for citation.

It would be as simple as: "That's an interesting figure. Would you mind telling me where I can go to verify it?" or "I'm not saying I disagree with your point, but I'd like to know how I can verify that fact." You don't have to be outright contentious about it. In fact, it's probably better if you're not. People don't like having their comment or FB post ripped apart.

If everyone started requesting citation of valid sources even a few times a week, it would go a long way toward a healthier data culture. Thanks.

PS: This page will give you a web badge you can post if you like.

Monday, October 25, 2010

The Chinese Language is the Deep Web

Reading Nicholas Kristof's post "Liu Xiaobo and Chinese Democracy", about Mr. Liu's recent Nobel Peace Prize, I noticed a passage that stood out, not only for its content, but also for the offhand way in which it was presented:

Today, Liu presumably doesn’t know that he has won the prize, and the Chinese government is trying to censor the news. But China is changing and censorship no longer works so effectively. It can ban mobile phone users from texting the characters for his name, but young Chinese are smart enough to use substitute characters.

Assuming this actually is the case, it means that within the Chinese languages (and it's clear that they are separate languages, not dialects of one overarching, crazily heterogeneous Chinese language) lies a hidden world of possible ideogram-meaning combinations, connected by sound. Here's how that would work:

Every Chinese character represents a word. (Linguists: I know there are exceptions. Thanks.) For example, the word for "work" is 工, pronounced "gong" with a high, steady tone. The word for "attack" is 攻, also pronounced "gong" with a high, steady tone. The word for "supply" as in "power supply" is 供, also "gong" with a high, steady tone. So on with the words for "official business", "palace", and "bow" as in "bow and arrow".
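A toy illustration of the substitution trick in R (the lookup table is a tiny, hand-made stand-in; a real version would need a full character-to-pinyin dictionary):

```r
# A tiny, hand-made pinyin table: every one of these characters reads "gong"
# with a high, steady tone (first tone)
pinyin <- c("工" = "gong1", "攻" = "gong1", "供" = "gong1",
            "公" = "gong1", "宫" = "gong1", "弓" = "gong1")

# Swap a character for a randomly chosen homophone: same sound, different glyph,
# so a reader-aloud gets the message while a keyword filter sees nothing familiar
substitute_homophone <- function(ch) {
  if (!(ch %in% names(pinyin))) return(ch)   # no reading on file: leave it alone
  alternatives <- names(pinyin)[pinyin == pinyin[[ch]] & names(pinyin) != ch]
  if (length(alternatives) == 0) return(ch)
  sample(alternatives, 1)
}

substitute_homophone("工")   # might return "攻", "供", "弓", ...
```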

Right now, the censors at Great Firewall HQ, actually called the Propaganda Department—I kid you not—are poring over blogposts and texts and other electronic content, finding subversive messages and stamping them out like bugs. Now, I imagine that a bit of this is done automatically, by keyword, and a great deal more is done by a large government department, full of the average office assortment of flunkies, middle managers, angry bosses, and the ennui that comes along with this setup.

Now imagine an undercurrent of blogs that don't seem to make sense at first glance. They bring up no poisonous keyword hits. They carry no familiar subversive slogans. But for those who would read them aloud, they transfer hopeful messages of democracy, commentary on the Chinese political situation, and perhaps even plans for meetups and other events.

This sound-meaning correspondence is much like what serious internet people call the "deep web". The deep web consists of all the data on the Internet that's not directly accessible to the average end user of a search engine. Deep web data is significantly more voluminous than surface web data. From the wikipedia page:
Deep Web search reports cannot display URLs like traditional search reports. End users expect their search tools to not only find what they are looking for quickly, but to be intuitive and user-friendly. In order to be meaningful, the search reports have to offer some depth to the nature of content that underlie the sources or else the end-user will be lost in the sea of URLs that do not indicate what content lies underneath them.
By moving context outside of the scope of these messages of Chinese democracy, writers would easily circumvent any mechanical attempts at censorship. Certainly, it's not perfect, but even in a worst-case scenario, this practice could burden the Propaganda Department with the need for more human censors.

Monday, October 11, 2010

Groupthink

The new Facebook Groups is a miserable failure.

Quietly, amidst the opening of that movie where Zuck is played by a guy who looks like Michael Cera only more serious, Facebook rolled out a new version of Groups. The new Groups combines the function of the old Groups and the function of Lists, which I (and apparently only I) like and use a great deal. It's been touted and announced on a ton of news outlets and techblogs. Read Farhad Manjoo's syrupy article in Slate. Zuck, I hope you already have a date to prom.

Personally, I think the new Groups is at best a major tactical error after a few quiet months. (As you'll recall, there were a number of concerns over the privacy control changes in May.) At worst, it signals a Microsoft-like disconnect with the user base, which will lead to a Microsoft-like end of relevance. Here are the problems:

First, it was rolled out without any real announcement. I heard about it on the tech blogs, and have still never seen any notifications on the site. That's pretty major. I have been known to miss the forest for the trees, but I haven't even seen a sapling of notice.

Second, the FAQ doesn't really differentiate it from Lists, and doesn't address the question of why it was implemented (other than "We are continually looking for ways to enhance overall user experience"). It does give a basic overview of the new features, but they seem a lot like things that you already have access to if you use any Google products.

Third, any friend can add anyone, and Facebook gives the added user no notification, even when added to public groups. And there's the spam.

Some have compared the new Groups to Google Wave (apologies for the third TC link—they just did a good job this time), and while the functionality is somewhat similar, the comparison doesn't quite hold: Wave was simply a gigantic error in judgment, an aberrant faux pas, while new Groups appears to be the latest in a series of missteps. No, the better comparison here is Microsoft's gradual decline. In no way is new Groups Facebook's Vista, but it does show that to Team Facebook, features are more important than utility, which means irrelevance is inevitable.

Diaspora kids, take your chance soon—you won't get another.

Thursday, September 30, 2010

Intentionalism, Limbic Advertising, and the New Workplace

This post includes: four (4) book recommendations, including one I haven't even read yet, two (2) embedded videos, and three (3) interrelated topics. Set aside something like half an hour if you want to absorb all of this content at once.

I'm not even sure where to start, so I guess I'm going to start with: why. Simon Sinek, last year at a TED conference, gave the following talk, entitled "How great leaders inspire action". You can watch it now, or just read my discussion of it below the embed.



His big premise is this, which he repeats several times: People don't buy what you do, they buy why you do it. The major scientific underpinning of this idea is that people make decisions in their limbic brain, which is not in any way responsible for the production of language. The emotional core is best accessed by appealing to that emotion, by explaining why you are doing what you're doing and selling to your kindred spirits.

For a more in-depth exploration of this idea, go read William Gibson's Bigend novels: Pattern Recognition, Spook Country, and Zero History. I recently described these books as, "like sci-fi, only take out the hyperdrive and put in viral marketing." There's really no other way I can put it. From Pattern Recognition, mischievous bazillionaire genius Hubertus Bigend, on "knowing something in your heart":

You “know” in your limbic brain. The seat of instinct. The mammalian brain. Deeper, wider, beyond logic. That is where advertising works, not in the upstart cortex. What we think of as ‘mind’ is only a sort of jumped-up gland, piggybacking on the reptilian brainstem and the older, mammalian mind, but our culture tricks us into recognizing it as all of consciousness. The mammalian spreads continent-wide beneath it, mute and muscular, attending its ancient agenda. And makes us buy things.

Back to Sinek, whose conclusion is that to have a thriving organization, we must have a purpose, and make that purpose the primary talking point. We can't just produce something that no one else produces; we must be something that no one else is.

Which brings me to Rework. The short text by the founders of 37signals, creators of web-based business efficiency software, turns the traditional model of American business on its head, proclaiming that the customer is not always necessarily right, meetings are a necessary evil at best (just plain old evil at worst), demanding work ASAP is poisonous, and working long hours is actually detrimental to one's output. The phrase "highly recommended" does not cover even a small part of how I feel about this book. I have a copy. I'll mail it to you, if you promise to return it. Jason Fried, one of the authors, about meetings: (link here but click around the site and listen to more of his stuff).

The 37signals crew is buying into intentionalism wholeheartedly, preaching that you don't sell things that you believe in, that you do things you believe in, and then you can sell what results from that. With a big enough world, with varied enough tastes, there's probably a market for whatever you are obsessed with.

Which brings me to this blog. I'm pretty obsessed with new ideas, and the way the world is changing. I believe in making the best new ideas accessible to people who wouldn't normally come in contact with them. I can't do it alone, really, especially with the micro-audience this blog has. Um, who wants to write with me? Seriously, email or comments section.

Monday, September 27, 2010

Another Legacy Post

Editor: Sorry, this is yet another post from my previous blog. I'm stewing up a really good post, about business and culture, and...um...yeah, you'll just have to see it when it goes live.

The Wiki Man

or, "An Augmented Reality Is Now a Necessity to Be Free"

"Conversations aren't contests!"
"OK, a point for you, but I'm still ahead."
-Calvin and Hobbes (not in that order)

We used to proclaim the coming of a great Age of Information, where our world would be shaped by the free dissemination of knowledge, causing unpredictable, miraculous changes to our culture. Now that it's here, we're still trying to figure out what the ramifications are. Socially, it seems that there are new structures being constructed electronically that have no analogs in the flesh-and-blood world. In the realm of pure information, a wide and cacophonous sea has arisen, full of truth, falsehood, trivia, rumor, news, and entertainment. The internet hosts information with a half-life rather than a shelf life, self-destructing in days, hours, or, in the case of some personal news on social networking sites, minutes. The rise of the meme, a small unit of virally-spreading social importance, has produced culture on the same scale. Nigh infinite specialized blogs give otherwise tiny interests worldwide audiences.

We continue to produce music, film, television, literature (and other reading material), and other art, and more people have access to it—and to content produced before the Electronic Era. In this way, the entire universe of human discourse is expanding at an amazing rate. What does not seem to be expanding, however, is our ability to retain it—at least, not fast enough. It appears that as this expansion of content continues, the average human brain will need a peripheral device in order to process it.

I don't have the research to back everything I'm about to say. Truthfully, I don't think the research has been done, and in some cases, the research may not be possible. I have listed here three assumptions about our society and its content that support the idea that the human mind cannot suffice forever as a solitary data processing device.

The Three Assumptions

1. The universe of content is expanding faster than our rate of daily content consumption
2. The rapidly dividing nature of social groups is making content less accessible overall
3. Intertextual depth is increasing the difficulty of making all the connections available in a work

I'll explain them. Number one. There is a lot of information in the world. The internet has made it so easy to produce content that everyone is doing it. (Almost literally.) As the number of content producers increases and the amount of content they produce increases, our ability to consume it all (and our desire, to be honest) can't keep pace. That's fine: no one expects you to read the whole Internet. In fact, I would advise heavily against it. But it does mean that at some point, there will be so much content to refer to that our shared universes of discourse will become more separate.

Think about that. When was the last time you talked to a friend about a basketball game, a movie, a book? Now delve deeper. When was the last time you talked about an issue, in which both parties had read similar material? What about a webcomic or a flash game? As we create a larger universe, we might eventually produce enough content that most of our communication will be in hypertext.

Number two. Culture naturally separates out, like old dairy products. One clump here, another there, divided by preferences in religion, politics, language, genetic background, tastes in music, age, and hairstyles. Modern culture separates more than it used to. In college, for example, I ran with a crowd that came from mostly the middle of the political spectrum, listened to indie music, watched indie film, but all dressed reasonably normally. We did not mingle well with groups that were slightly different politically and had stranger hair, even if they liked the same music and movies. We had enough in common with those groups, but there was clearly a divider.

Today's dividers are many, as there are many things to be divided over, and separate less dissimilar strata. Can it be that these strata produce content that is left unread by the others?

Number three. Back in the day there wasn't much content to refer to. The writers of the classics basically referred to religious texts and classics that were even older than them. Today, you can't get through an episode of Psych without a couple dozen references to other works. Some of those works refer to other works. Intertextual depth is what I'm calling that networking of references, and it makes full comprehension of a work difficult, turning pieces of content into a set of nesting dolls. Perhaps this depth could become too much for the human mind.

Note that I am not asserting that all three of these assumptions are correct, or indeed that any one of them is. But if any one of them stands true, it stands to reason that someone wishing to converse in the future should be supplied with a device running an algorithm to search for the meanings of references in any given conversation, so that information can flow unhindered. At least, I hope so, because I'm kind of getting sick of looking things up on wikipedia while I'm instant messaging people.

Friday, September 17, 2010

Convenience Killing Caprice and Broken By Design

Editor's Note: It's a twofer this week, about usability design.

Convenience Killing Caprice

I lean on Wired a lot. I'm not going to lie. Today I read an article about context-aware devices, which are objects that use sensors to learn about their users and environments and adapt accordingly. The example given in the article is a remote control that uses accelerometers to determine which user out of a predetermined set is holding it, based on the way they have held the remote in previous sessions. It got me thinking about what would happen if this took off.
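(For flavor: strip away the engineering and the "who's holding me?" question is just a small classification problem. Here's a completely invented nearest-profile sketch in R, with made-up feature names and numbers, not anything from the article:)

```r
# Hypothetical stored profiles: average tilt/roll readings per known user
profiles <- rbind(
  me   = c(tilt = 12, roll = -3),
  wife = c(tilt = -8, roll =  5)
)

# Guess who's holding the remote: whichever stored profile is closest
who_is_holding <- function(reading) {
  diffs <- sweep(profiles, 2, reading)          # subtract the new reading
  names(which.min(sqrt(rowSums(diffs^2))))      # nearest profile wins
}

who_is_holding(c(tilt = 10, roll = -2))   # "me", probably
```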

Assuming I had a real television, and one of these stupid-cool remotes, my wife and I would of course hold the device in different natural positions, and each time I grabbed the thing, it would warp me over to the Daily Show and Eureka, and when she had it, it would line her up with AGT (Poppycock was robbed!) and Extreme Makeover: Home Edition. The missus would be totally ok with the setup, probably forever. She is definitely a creature of habit—and there's nothing wrong with that. I, however, am not. Knowing that the remote only works right if I hold it a certain way would become an unscratchable itch. I would constantly have the urge to sidearm it, hold it upside down, give it to the baby, et cetera.

Every time we adopt a new technology, we give up one habit for another. Rarely is this anything other than trading in a laborious ritual for a simpler one. (You think getting the kids into the minivan is a chore? Try the horse and buggy—you had to freaking feed your engine. Every day. Even when you didn't use it.) But sometimes these tradeoffs are still annoying, and I think it's important to our understanding of these modern times to identify the bugs and quote-unquote features of our tech that take a bite out of its beauty.

(If you're looking for examples, cellphones are rife with these annoyances, from public cross-chatter to the fact that you need to remember to silence the device in places where it would be inappropriate to receive a call.)

Broken By Design

Have you ever used a corporate website, and thought, "What were they thinking when they designed this thing?" The answer might actually be, "Evil thoughts." An enterprising user experience guru named Harry Brignull is collecting a list of the most common tricks pulled on us by web designers. They include a whole category for Facebook's notorious privacy settings, the "sign up online, cancel by phone only" trick, and several other traps set by the web's intentionally obtuse designers. (I was directed to this site by a Consumerist article about a particularly dirty RyanAir UI trick.)

Frankly, I'm stumped as to why major above-board corporations would act this way. It's not as though Facebook's just up a creek if they make their privacy settings simple (or have an option for simpler settings) and set the defaults to a moderately safe level. As more of the services we can't do without move to dark patterned user interfaces with impunity, the usability gap may create a savvy subgroup of people who save money and have more control over their account settings, like the DIY crowd does in the physical world, or couponers do at the register. This, however, does add a lot of extra work for those who want to access the convenience offered by these sites and services.

Friday, September 10, 2010

Truth, Or, Why We Should All Be Running Around, Screaming, With Our Pants On Our Heads

OK. This is the last post about Narratives and Narrative Engineering for a while. I really do want to touch on the other aspects, and this is going to be a pretty good note to leave on. Today's topic is: The crisis of believability in the Information Age. The groundwork: In order for information to be useful to an observer, it has to be true.

Humans are naturally driven to ascertain the truth value of propositions for themselves. Small children, when they're told something is hot, will often touch the object to be sure. After a few bad experiences, the child will learn to trust that when that particular speaker (be it a parent, sibling, or so forth) says that an object is hot, it is likely so. The human brain, a pattern recognizer, shifts the burden of proof from the individual to another, trusted, individual. This trust-based shift can extend to other entities—organizations, religious entities, and sources of information such as media outlets.

When two sources deliver conflicting information or opinions, we tend to side with a) the sources we have trusted frequently in the past and b) the sources that agree with our own existing viewpoints. But how do we choose what informational sources to trust in the first place? Here we find a gap worth exploring, but alas, I don't have a full range of studies to glean from to explain it. Suffice it to say that the actual truth value of propositions doesn't mean much in our perception of their truth.

Jonah Lehrer, who might actually be the most interesting blogger on the internet, wrote a feature for Wired's December 2009 issue called "Accept Defeat: the Neuroscience of Screwing Up". It's a tremendous piece about what it takes to be a good scientist, and well worth an entire reading. The part that interests me, however, is the bit where a brain scientist wired undergrads up and made them watch videos of falling objects. There were two videos: one where two differently-sized objects fell at the same velocity, as they do in nature, and one where the larger object fell faster. The undergrads, who were unversed in physics, experienced activity in the anterior cingulate cortex, something of a BS detector in the brain, when viewing the accurate video.

Physics majors, however, had the reverse experience. The mental error alert popped up while watching the inaccurate video. In both cases, the dorsolateral prefrontal cortex, which suppresses memory, was also activated. The brain actively attempts to erase that which it perceives as impossible or incorrect.

As we are presented with information that we cannot confirm or deny ourselves, our biases work in combination with each other to accept and reject information and conclusions. Unfortunately for us, information needs to be true to be useful to the one receiving it, but this is not the case for the one delivering it. Some sources may (and very likely do) deliver up information that is intentionally misleading or untrue in order to further an ulterior purpose. Other sources adopt a stance because they believe it to be the morally correct one, or as a means to capture a segment of the content market.

For the most part, it appears we know this. We're aware that not all institutional media outlets are to be implicitly trusted. And, of course, as human beings, we're going to touch the burner to see if it's hot—and to build a worldview that will work as a shortcut to the discernment of burner temperature in the future. I see five of these shortcuts frequently trusted by content consumers (and have fallen prey to some of them myself). Here they are, along with their shortcomings:

Triangulation. This method entails checking multiple reputable news sources, and believing what's reported by a large number of them. This will definitely cut some of the clutter, but not all, as can be seen from the curious story of "Jenny DryErase". Humor repository site "The Chive" ran 32 images of a young woman telling the story of the ridiculous work troubles that caused her to quit her job via whiteboard. Various newspapers (including some international outlets) ran the story as if it were true, apparently never bothering to fact-check it. I don't blame them—the images appeared to speak for themselves. The whole exercise turned out to be a hoax.

A fake, satirical Congressman, Rep. Jack Kimble, was made up by a blogger, and reported by a Washington Post blogger as if he were real. There are plenty of other examples of why we can't just triangulate our way to truth and happiness.

APCs. Similar to this is the "APCs" concept discussed by Craig Silverman on his media corrections blog, Regret the Error. The idea is that you can judge a source's validity by checking its Authority, Point of View, and Currency. The problem with this is the very same as the triangulation issue—supposedly authoritative, neutral, current outlets occasionally make major mistakes. Besides that, features appended to major outlets (like blogs) trade on the authority of their parent brand, making it difficult to control those things for the expected level of quality. Further, since we identified above that we as people form our opinions based on other preconceived notions, the "authority" and "point-of-view" values are essentially unavailable to us.

Bias Stripping. This method is as it sounds—read the clearly biased news, then try to mentally remove the perceived bias from the story. Along with sharing in a few of the problems already discussed, there is this: While we're pretty good at removing extraneous, editorializing content while we're reading, a bigger problem arises when news sources omit information in order to maintain a narrative. For all his own biases, Jon Stewart illustrates this fairly well. (Note: long clip.)

Third Party Fact-Checkers. Politifact, FactCheck.org, and Snopes are great sites run, I believe, by people who are honestly trying to produce bias-free content. But, two things: no one is bias-free, so being harder on one person and softer on another is liable to occur, at least insofar as one contributor is concerned. Two: what sources do these sites consider trustworthy? We've just gone through the idea that you cannot ever really tell for sure.

Finding the Median Between Extremes. This is simple: you take a clearly biased source and another clearly biased, but opposing, source, and split the difference. As it has been shown that bias is often a symptom of an unfounded belief, for this to work, you would have to assume that the two sources a) represented two points on a spectrum that contained all the possible representations of reality, with the most accurate located at the center, b) were nearly equidistant from that center, and c) were intentionally biased. If they were unintentionally biased, it seems unlikely that both outlets would be systemically and systematically biased in the same direction.

It seems incredibly unlikely that any major dichotomy available in the realm of public discourse contains all possible accurate representations of reality, especially the political Left-Right dichotomy in America. (DO NOT read this as an endorsement of any political viewpoint that eschews the Left vs. Right continuum.)

In short, there's nothing we can do to be sure that any relayed information is accurate. I would highly suggest reading primary sources to obtain accurate information. Primary sources (like the text of bills, speeches, laws, etc.) tend to be long and boring, though, which means our ability to be informed is limited by time and attention, and in many cases that time and attention are just as well spent on other pursuits. My head is starting to hurt a little now, I'll be honest.

Cue the pants.

Saturday, September 4, 2010

The Informavore Diet

A Balanced Plan of Internet Consumption for the Mental Health-Conscious

Editor's Note: This is an old post from my previous blog. I'm on vacation this week, so this is what you're getting. Next week, probably something about "Comical Ali" or maybe Facebook.

I'm taking as a source this article from Edge regarding human behavior in the Information Age. In it, our information consumption is compared with food consumption, and later, the survival of ideas is related to the survival of species via natural selection. Combining these two ideas, we arrive at the concept that information must be consumed in order to propagate, making ideas not like animal meat, but like vegetable matter. Vegetables and fruits that grow quickly, yield a lot of nutritive value, and taste good are more likely than their competitors to thrive in the world of herbivores, as their seeds are more likely to be spread, and they are more likely to be cultivated. Unfortunately, the web is like a field of jellybean plants--magical, yes, but unfulfilling and likely to give you a stomachache.

We need to "eat" better content because it makes us feel better, and is good for our development.

It's very clear that when we surf the web, we use energy. In order to look, read, learn, link, tweet, embed, and digg, we make our little cells do something, which depletes our store in some way or another, and uses our time--eating information, like any other consumer activity, comes at a cost. Massive cognitive load can cause errors in our learning, and there are other, more nebulous results of our purposeful information overload. Particularly relevant is this man's experience, echoed by millions of users of the blogotubes, but there is also research that implicates the very nature of the web in dangerous behavior.

Of course, there are benefits from our newfound infobounty. One study shows that googling may make you smarter, in some ways. Less esoteric is the benefit of simply having access to people and information that otherwise were forever out of reach. We can be more informed voters, wiser parents, better employees and students, and generally more knowledgeable people.

Unfortunately, when given this phenomenal power, we use it to further our own inanity, as we have every other medium. In this there is no great harm--a little pointless timewasting never hurt anyone. But, whether it's its novelty or some heretofore unknown power inherent in information, the internet is as interesting as, or more interesting than, television. Thankfully, it is also more modular, with content in smaller pieces, and so more decision-making is involved in our consumption. We can choose to discern the crap from the useful information, or we can choose to consume without thought, eating information like we eat food--inverting the pyramid and indulging in candy while dabbling in wholesome content.

That being said, it's not wrong to eat candy. It's not wrong to haz your occasional cheezburger. All content--except some particularly repulsive stuff, which happens to inhabit a large percentage of the extant web--has its place, but if cartoons were more occasional, and important information were more frequent, progress' clip would increase.

It is essential to note that I will say nothing of what is or is not Important Content. That is for each of us to decide. It must be intentionally sought, even if the seeds of this interest were randomly sown.

We need to "eat" better content because it spurs the creation of more quality content.

Further, in this world of meta-searching, self-analyzing, and ad-targeting, the viewer is, in fact, king. It would be wise not to underestimate one's power in this world, as the link, the digg, the tweet, etc., are ways to propagate approval quickly and efficiently. The internet has built itself a very efficient feedback system.

Spread links to interesting information. Post interesting content. Eat right, and the whole world will be a better place.

Crappy Umpire Romance Stories

On August 16th, ESPN posted an article that appeared to show a systemic flaw in the umpiring of Major League Baseball games. One in every five close calls, it says, is blown by the league's chosen arbiters. The only problem with the story? It had more spin than a Stephen Strasburg curveball.

At least, Nate Silver, baseball statistics guy turned political statistics guy turned New York Times blogger guy, seemed to think so. By looking at the whole story, Nate proclaims, you'll find not only that the definition of "close call" makes this ostensible mountain a molehill, but also that baseball has some of the best playcalling in American sport.

The intersection of data and narrative here brings up a number of really interesting questions. Of course, the first, which has a pretty easy answer, is why did ESPN sum up its data in the way that it did? It seems clear that if a content producer like that performs a study, they're going to want to post the results in a way that will attract people to reading it. Otherwise, there's really no return on the investment of doing the study in the first place. It's not necessarily wrong, it's just good business.

The better question is, what's Nate's motivation in refuting the validity of ESPN's statement? For a better look, let's examine another recent controversy.

Wired's cover story for September 2010 is entitled "The Web is Dead". The story is primarily about the decreasing internet market share of World Wide Web content. Web content is defined fairly loosely, but it appears that the article is focused on whatever is delivered only via browser. Oceans of data had been gathered that corroborated this claim, which had then been made into a handsome infographic.

Nathan Yau of Flowing Data takes a completely different tack than Mr. Silver's takedown of ESPN: he posts a link and a shot of the graphic essentially as a conversation opener, knowing that people will post their comments, and a discussion will evolve. This discussion will, of course, draw more people in, and Yau's post presumably drove traffic (and more importantly, built community) based on the strength of the topic alone.

TechCrunch was far less forgiving. Of the many rebuttals they posted, this is the best.

The operative comment is this one, from user "Speed":

Wired's job is to deliver eyeballs to advertisers. Chris Anderson has successfully recruited TechCrunch and others -- many many others -- to help them do that job.

Michael Arrington's mission, should he choose to accept it, is to get Chris Anderson to return the favor.

The idea really boils down to this: If someone's brewing a tempest in a teapot, you'd be an idiot not to break out the crumpets. So, yes, while some people are greedier, and some more altruistic, every blogger trying to make a buck is desperately looking for as many hits as possible. Don't get me wrong. I still believe that The Nates are the good guys here, exposing the big guys for their tricky expositions that may not be 100% spin-free, but honestly, if no one built controversy where it didn't have to exist, a lot of the little guys would be out of a job.

Now where are my crumpets?

The Devil and Saul Alinsky

Today's post started out as a much smaller idea. You see, I was simply going to tell the story of how the Wikipedia page on grassroots radical Saul Alinsky is radically different from the page dedicated to his book, Rules For Radicals. As I started to develop this idea, however, it became clear that the Alinsky War was not so much an example of Narrative Engineering as a small part of a very specific and very large political narrative.

Narratives exist in the world around us, and explain that which is not easily explained. Religious narratives are the most identifiable, but there are loads of others. Some explain giant swaths of our existence—conspiracy theories and the like—whereas some focus on much smaller parts, like business management theories, or moms' shared consciousness about getting babies to sleep through the night. Living by a narrative does not mean that one is living by unprovable or unresearched principles, as you can see in economics. There are several different schools of macroeconomic thought, and all of them are well-researched, yet they contradict one another. No one current school of economic thought has a privileged position over the others, as it is currently very difficult to prove beyond a shadow of a doubt that any one is perfect. (And it is almost certain that none will ever be perfect.)

In the realm of American political dialogue, there appear to be two major narratives, produced by followers of the two major political parties. Note the use of the word "followers"—I would like to propose the idea that neither of these narratives is purely the result of party manipulation of news. Narratives only have vitality if people believe them, and those believers are the ones that propagate the story. Just as a language dies when no one speaks it anymore, a narrative dies when no one changes their lives because of what it suggests.

Now, back to Alinsky. As you are almost certainly aware, any non-banned user can edit any unprotected Wikipedia page. Heavily edited or controversial pages may be protected against edits, and issues with frequent editing and reversion (sometimes called "Edit Wars") are resolved by consensus. Alinsky's page itself is fairly standard-looking, focusing on his bio, accomplishments, ideology, and awards. It definitely makes him look good. Clicking forward to the page for Rules for Radicals, however, we get mostly a number of snippets, the first from one of the introductory pages of the book. Recently, that page referred to that quote (the "Lucifer quote") as the dedication of the book (it's clearly not—the dedication is a simple "For Irene"). More interesting than that, however, is the comment made by the Wikipedian that uploaded the quotation and called it a dedication. User "Bestbuilder" states:


I believe it is critical to understanding Saul Alinsky and his motives that his "Rules for Radicals" is dedicated to Lucifer, the first rebel. Many people who may think Alinsky is a noble person would certainly reconsider their admiration of a man who praises Satan.


Why is the book such a magnet for controversy? Well, for one, a college student named Hillary Rodham wrote her senior honors thesis at Wellesley about Alinsky. And for another, he was a community organizer in Chicago, and therefore his work influenced Barack Obama.

Alinsky and his book therefore form part of the major narrative of the right. Similarly, Halliburton's discussion page shows a number of left-leaning edits, as it plays (or has played) a role in the major narrative of the left. The interesting thing about these encyclopedia battles, to me at least, is that they show how people try to change reality to fit their view of it. That is, in The World Anyone Can Edit, the narrative doesn't only explain reality. The narrative replaces reality.

Quantum Soft Science

As our world becomes one with technology, it has also become one with the people running it. Contrary to what we all read in old sci-fi, the Computer Age has been more person-centric than machine-centric. And as our actions affect everyone more and more acutely, humanity has become something of a last frontier for scientific inquiry. Certainly that’s not to say that we understand all the “hard” sciences perfectly, but the “soft” sciences appear to have matured in our connected society.

A large part of the pop-soft-sci phenomenon leads back to economics. A journalist and an economist teamed up to write “Freakonomics” in 2005, which is a perfect example of the Quantum Soft Science trend. They take one ostensibly intractable problem (declining urban crime rates, for example) and explain it with an apparently left-field solution (rising abortion rates). But, of course, the trend doesn’t end with economics.

Of course, sociology is a natural outlet for this kind of work. General interest magazines like Slate run sociologically-oriented articles all the time (see “How Black People Use Twitter”). Even the historical aspect of this field is analyzed—see Jared Diamond’s “Guns, Germs, and Steel”.

The new psychology tells us about why wine taste is more related to cost than composition, and why marshmallow consumption can predict educational and vocational choices. And of course, there is the new linguistics, the new literary theory, and the new history.

All of these odd-fact-producing fields are driving toward one nigh-impossible goal: the prediction of human behavior. And one can see why that would be interesting. The unpredictability of humanity makes for some heavy socioeconomic turbulence, and limiting that unpredictability could have incredibly advantageous (or at least lucrative) effects.

The Sea of Data

The advent of the essential ubiquity of computing devices has seriously changed the amount of data that can be analyzed. My computer, for example, can churn out something on the order of 3 or 4 billion operations in a second, and it's not particularly high-tech. Essentially, we can do some pretty amazing things with numbers these days.

Computing has also helped people obtain data that was previously unattainable, or at least, very difficult. Increased public interest in information has made sideline actuaries of us all.

And of course, to accommodate our nerdly desires, there are more fountains of data than ever before. If you want population, weather, traffic, advertising, or most other normally sought after types of data, they’re often available on the web in one form or another. The US and UK governments, for example, have websites specifically designed to provide raw data to citizens willing to analyze it.

For analyzing that data, there are improved graphical representations, and a subculture of infographics specialists is emerging on the web. Online newspapers and other publications love printing these graphics—they allow a whole lot of data to be viewed at a glance, and they’re interesting, which makes them likely to go viral and earn lots of money.

Of course, the data is useless without an analysis, and that analysis often leads to interesting and counterintuitive results, such as “Teachers don’t actually value creativity in students”. In the modern world, for an idea to really take off, it has to buck some establishment paradigm or other.

In order to make the data tell a story, however, you have to sift through it. In today’s world there is, as mentioned, a galaxy of datapoints being produced, and the task for any real infonaut out there is not finding it, or gathering it, but crunching it.

This presents a problem for your average data landlubber, though, in that we’re all presented with information overload every day. It’s not as though the internet is off-limits to anyone who doesn’t have a master’s in statistics. Whether it be in the form of pieces of narrative, bits from the social web, rambling blogposts, or just plain news, we are now forced to learn to navigate the sea of data ourselves.

Narrative Engineering

Counterintuitively, this data age has both granted information its wish to be free, and made the facts more difficult to obtain and ascertain. The crafting of narratives to convince people of certain propositions is as old as humanity, but now there are more people doing it faster and more profusely.

As much as the Content Economy ad-men (and -women, and -bots) want your eyes, the Narrative Gurus want your heart and soul. The Engineers of Reality strive to convince us what is right, what is wrong, what is healthy and unhealthy, and what is cool, fun, boring, or painful. They need you to buy in, and they exist in all varieties and have varying methods of attack.

In this battle, we are the infantry: your average internet user. The advent of user ratings and comments allows for the war to take place over every blog post and product listing in the world, provided it’s online. The comments, of course, range from the inane to the insane, the trivial to the trenchant. And as users spam, troll, and flamebait, their mini-narratives begin to form a larger one.

From above, broadcasters build and propagate their own narratives. The political entertainment industry becomes the shaper of American thought, whether by furor, candor, or humor. As this situation progresses, people line up as footsoldiers of a particular narrative-creating general.

Sometimes, even those who are aware of the role of narrative-construction in our culture still have to choose to buy in to one or another narrative. There’s not much middle ground between, say, believing that high-fructose corn syrup is benign or malignant. Further, some aspects of our lives require this buy-in as the price of admission. Religions, social clubs, political parties, and employers frequently require some buy-in.

Narrative not only drives the future of culture, it alters the past. A good narrative engineer doesn’t just tell you how things are, he/she/it tells you how things used to be. There was a Golden Age, perhaps, or a Dark one. Events in the past are now malleable in the hands of those who make opinion.

Narrative Awareness

An increase can be seen, of course, in those who identify and decry the attempts made to reshape our past, present, and future. There is a lot of power in changing people’s minds, and this power is sought by so many people with so many motives and allegiances. As our awareness of rhetoric increases, the power of the narrative decreases.

Each of us has a weapon in the fight against the abuse of narrative to alter reality. We all have social networks. As we discover the narratives being used to distort the world our loved ones live in, we can use our intimate, personal trust to defeat the bullhorns trying to confuse them.

The only reliable documents these days are primary sources. That may not last.

The Electronic Middleman

Writing about social technology is something of an exercise in redundancy. Essentially every technology writer in existence has taken a turn discussing the relevance and effects of social media on social networks, culture, and individual psychology. As this post is simply an introduction to the topic, here are a few of the arguments and questions that will be plumbed in later posts.

The biggest and most easily accepted argument in any discussion on social media is that it fundamentally changes the nature of human communication and sociology. But how?

An important question here involves the size of a social network. Previously, it was taken as gospel that the human neocortex limits the size of a viable social network to somewhere around 150 people. This doesn’t appear to correspond, however, to real occurrences on social networking sites. While the average Facebook user only has 130 friends, violations of Dunbar’s number on very high orders are very easily found.

And there’s another thing: a new type of prestige appears to be forming around the friend / follower number. In fact, there’s an equation that determines your Twitter clout.

Social media tend to turn all comment into public comment. As such, new social norms are emerging regarding tact and honesty in these spontaneous forums. Further, such forums fragment the national or world discussion of certain topics into various, generally unrepresentative sample discussions.

Social media and other communication technologies allow for an increasingly developed middle ground between synchronous and asynchronous communication. This changes the nature of conversation by messing with the time between discourse turns, allowing for more or less thought between statements than was previously possible.

This fuzzy synchronicity can lead to the loss of conversational fluidity. That change in fluidity can have a whole range of effects on discourse pragmatics in general.

Previously, human communication was either very high priority (face-to-face or telephonic) or very low (correspondence). Now, there are medium settings to that priority, allowing for reasonably important communication to be done while performing other tasks.

Further, more than one conversation can be happening at the same time, and over multiple media. That simultaneity can lead to an increase in conversational density—and we may not be able to predict what that means for the brain.

For certain, some of the effects of social media, and the internet in general, are perceived under an umbrella category of “loss of focus”. But, is that loss of focus real? And is it such a bad thing?

Social media certainly affects the importance of distance in maintaining a social network. Friends can be kept, on some level, despite vast physical distances. But, what is the importance of “meatspace”, and what role will physical closeness play in the future of social networks?

The Content Economy

One indicator of progress in an economy is the shift from a dependence on industry to a dependence on service. Service economies put a value on work rather than on a product. That work, however, is often packaged and sold just like a hard good. Paying for a landscaping job, for example, is essentially like paying for a computer: it’s sold as a package, you pay either in installments or all up front, and the costs to the supplier are essentially invisible to the consumer. Of course, other services are bought in different ways (legal services, for example).

As Western society has developed its service economy, it has customized it as well. Just as the easiest agricultural tasks were handed off to machines during the transition from agriculture to industry, now the easiest services are either taken over by machines (self-checkout islands at the store) or outsourced (overseas call centers). The service economy is evolving, and a new subsector is emerging, one that many observers are calling the Content Economy.

As the internet has grown, it’s taken a chunk out of television ratings. It’s pretty easy to see that some of us simply prefer an interactive medium and all that comes with it. And whether you believe television is being supplanted or merely supplemented by internet browsing, it’s just as easy to see that the hits keep coming. And behind every hit is a pair of eyes.

Eyes mean ads. Interesting content means hits. Hits mean money. That is essentially the rule. Plenty of people would prefer this weren’t the case, because producing interesting content is hard. But with a completely open content market and a pretty low barrier to entry, the content that wins the hits is generally legitimately superior.
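
To put rough numbers on that rule, here is a back-of-the-envelope sketch in R. The traffic, CPM, click-through, and cost-per-click figures are placeholder assumptions, not real market rates; the point is only how eyes become dollars.

    # Back-of-the-envelope ad revenue for a hypothetical content site.
    # Every figure below is an assumption for illustration, not a real rate.
    pageviews <- 100000      # hits in a month
    cpm       <- 2.00        # assumed dollars per thousand ad impressions
    ctr       <- 0.01        # assumed share of views that click an ad
    cpc       <- 0.25        # assumed dollars earned per click

    impression_revenue <- pageviews / 1000 * cpm     # paid for being seen
    click_revenue      <- pageviews * ctr * cpc      # paid for being clicked
    impression_revenue + click_revenue               # rough monthly take

At those assumed rates, a hundred thousand hits comes out to a few hundred dollars a month, which is why volume, and therefore genuinely interesting content, is everything.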

Professional smart-guy Frank Schirrmacher referred to people who consume and relay content as “informavores”. This tremendous interview on the consumption and dissemination of information identifies the mechanisms by which content advertises itself. Other commentary has called this “viral” media, or “memes”. All in all, content in the market seems to behave like a lifeform.

As in any market, things are not as simple as they appear. Not all content markets itself, and the barriers to true entry into the big leagues of the content market are higher than they seem. Posting content does not guarantee views, even if that content is interesting. Search engine optimization and other forms of content marketing and engineering change how content is viewed, and how content is formed.

Routes of Power

It's sad, but terribly fitting, that my search-fu has not been strong enough to find a statistic showing that the number of distinct jobs available in the world has been increasing dramatically. But it doesn't take a genius to see that what was once "web developer" is now a huge number of distinct titles, and that many of the positions that will come into being within a decade don't even exist in embryo right now.

As an example, computer programmers in the early days of the craft were not Computer Science majors. The opportunity to study programming didn’t always exist; programmers were picked from among other scientists and engineers on an ad-hoc basis, self-selected by their talent and ability to self-train. As the digital world both expands in its own realm, and facilitates corporate ventures that are not necessarily all web-based, more of these positions are being formed, and there aren’t necessarily freshly minted degree-holders with skills specialized for these jobs.

So learning about computers, people, statistics, and content gives us power to prepare for and create new jobs. Those job choices can increase our power by allowing us to make more money, choose our employers, and gain more autonomy.

Of course, full-time employment is not the only factor in monetary empowerment. The trends today include empowerment by pennies—coupons and other savings methods make routes of power past the cash register. Agricultural co-ops offer produce at reduced prices, and in-kind economies make money less essential.

Opportunities for making pennies on the side abound in our modern world as well. The nascent content economy and the internet’s bridging of distance allow for a lot more freelancing, amateur blogging, and other individual endeavors.

Other forms of personal empowerment include fortifying oneself against risk and maintaining control of one’s life. Emergency preparation aficionados, or “preppers”, are on the rise. The greening of home and city has become both hobby and professional pursuit. Home birth and homeschooling keep control in the hands of individuals, and out of institutions.

Of course, individuals aren’t the only ones seeking empowerment in our changing world. The governments and communities we live in are also seeking power and control, innovating by leveraging their sovereignty or testing new rules on a small scale. Politicians innovate too, building internet-based grassroots followings.

This is certainly not an exhaustive list of routes of power phenomena—hopefully humanity's experience with personal empowerment through innovation is just beginning.

Introduction to the Future

This blog is called “The Future”, even though it deals primarily with the present.

What It’s All About
The purpose of this blog is to discuss the ways our society has changed because of the Information Age, both in the way information is propagated and in the way information technology is improved.

The societal changes I’m looking at primarily deal with the way the average person lives and interacts with society at large. I’ve identified six aspects of this change, and I’ve written an in-depth introduction to each of them. Most of the blog, then, will be comprised of news items or personal observations that exemplify how one or more of the aspects of this change affect the world around us.

The Six Aspects

In this world of increasing possibility, we have ever-expanding lifestyle options: jobs and careers that didn’t exist before, new methods of saving and spending, and trends that offer additional convenience or control. These novel options I’m calling Routes of Power, and they include everything from coupons to telecommuting, and quite a lot of real estate in between.

Probably the most common of these Routes is the monetized (or even the professional) blog. With the advent of the internet, and the instant transfer of data that comes with it, a new economy is emerging—one based on the power of billions of ad clicks, freelanced infographics, and amateur analysis. This Content Economy is still nascent, but already robust.

Along with this economy, the internet is changing the way we communicate. People are conversing more on social media sites and via text messages, changing language and social norms. The Electronic Middleman is growing ever more present in even our basic, day-to-day conversations.

Explanations for the way the world operates are as old as humanity itself. In our era, though, these explanations are being seen more and more as tools to shape human opinion. On the other hand, people are increasingly seeing that manipulation for what it really is: Narrative Engineering countered by Narrative Awareness.

Information overload has been foretold by futurists for decades, and has clearly become a reality. The surfeit of demographic, natural, scientific, and other data is confusing some, leading to a desperate clinging to narratives that resolve it. To others, it represents a unique opportunity to explain phenomena and produce useful content. The Sea of Data is difficult to navigate, but it holds incredible promise.

Humanity is becoming more interconnected; social networks have been altered by the technological and economic landscapes, and have altered them in return. This produces fodder for what I’m calling Quantum Soft Science, an understanding of the completely unexpected ways that human-built infrastructures react to the pushes and pulls of the world around them.