Friday, December 2, 2011

Never Name Your Blogpost "A Modest Proposal"

I recently had a conversation with a friend about homeschool. She used to be a public school teacher, but she believes the current system of public education is irredeemably broken. (She is right.) As the discussion continued, she sheepishly mentioned that she had already found a system that she would like to use for her son (who is currently 15 months old, so it's a way off yet). She beamed as she described it: "It's really focused on the classics, and they focus a lot on reading actual historical books. And there's no technology. They don't believe in technology." I was nonplussed*—how could that possibly be a benefit?

[Image: courtesy Hartlepool Cultural Services (CC BY-NC 2.0)]

I.

But I guess education and technology have always had a funny relationship. The push to get computers into schools seems to have led to a lot of fraud, waste, and abuse. My wife, who for a year taught chemistry recitations at Arizona State University, told me about a typical tech-hole called "The Thunder Room" (I know, sounds like a...nah, forget it). Six digital whiteboards, a Mac laptop (now a tablet!) for every student, and a powerful presentation desktop for the teacher. The rumor is that it cost a quarter of a million dollars, and that it mostly produces frustrating glitches, ruined laptops, and probably overtime for some IT guys. So, yeah, tech can be a wasteful distraction.

But then there's Salman Khan. The Khan Academy's bajillions of math and science (and other) videos teach essential topics for free, and many of these lessons have associated exercises. The whole thing's gamified, too, so some people are liable to learn calculus just to "level up". It's something that would be completely impossible pre-internet. (The idea of free, anywhere, anytime math/sci ed, that is. People have been learning calculus since it was invented, of course.) Khan seems to justify tech in education.

On the fringes of technologically-enabled learning, there's Wikipedia, which seems to defy categorization—it's a Rorschach test for people's opinions of modernity. Detractors point to the rampant factual errors, stubby articles, and blatant self-promotion and vandalism. Supporters point out that it is the largest single repository of human knowledge in history, and that it's completely free of charge and free-as-in-speech. Either way, it's a fixture of modern first-world society, and it's probably not going anywhere soon, even if it's not a valid source for term paper footnotes.

II.

Which brings me to the point of education: what is it? Is there something about the nature of education that would make a non-technological education preferable?

To my mind, there are three functions of a modern education. Note that these are not "purposes", so to speak. Talking about the purposes of education is a philosophically scary blind alley. The functions are: allowing a person to participate in society, creating opportunities for students to make money, and teaching a person information relevant to their self-awareness.

Allowing people to take part in society involves loading them up with culturally important information. Things like "O Captain! My Captain!", the War of 1812, the finer points of good writing, the use of basic technologies, the rules of football (whichever type is applicable), et cetera. Of course, understanding The Allegory of the Cave isn't particularly relevant to someone who's going to be placekicking for the Chargers for the next 12 years, nor are the finer points of a 40-yard field goal essential to most Princeton philosophers. The information we give here is just that, info, and coverage is necessarily spotty.

Opportunities for wealth creation usually come in the form of post-secondary degrees, although social networking and training in schools can also lead to non-collegiate methods of money making (the garage band members that met and learned chords in music classes, the entrepreneurs that started selling during lunch period, etc). The function of a homeschool in this case would be something more like a traditional high school—to teach students how to successfully navigate the collegiate universe with the end of a degree in mind.

Information relevant to one's self-awareness. That's a tricky phrase. Perhaps "teaching students how to think" would be more germane, if less accurate. You don't teach someone how to think; you teach them that there are many different ways to think, and that some are better than others, and that there isn't one that is perfect or best. The result of this is self-awareness, sapience, and consciousness, giving that person the ability to willfully change the world for the better.

Finding a way to perform all three functions is daunting; the desire for homeschool programs to focus on classics and fundamentals is understandable. In a world drowning in seas of data, a simple, clearly-defined canon of work untainted by the messiness of instant communication is very attractive. Beyond that, American schools have been falling behind, or so nearly everyone is telling us. They used to be the best in the world, so returning to the Golden Age seems like a good idea.

III.

Before we jump on the paleo-educational bandwagon, though, let's consider what all that "messiness of instant communication" is all about. It is believed that the world will have produced 1.8 zettabytes of data in 2011. (A zettabyte is a trillion gigs.) That is almost an order of magnitude greater than the amount created in 2006, which was at that point 3 million times more data than is contained in all the books ever written. And out of that, a few important pieces can be gleaned—the signal-to-noise ratio is infinitesimal.

All three of the functions of education I described depend upon a student's ability to navigate the world around her. No one has ever taken part in a culture, made money, or learned to be self-aware without other people. This is not because humans are social animals; it is because the human world is a social world. We are not particularly strong or particularly fast, but we're smart, and when we get together we can build a lot of things that make life a lot less difficult and painful for ourselves. The world we inhabit has become a new one: a world of crushingly large amounts of information. The successfully educated can navigate it.

(It is important to remember that I am not advocating relinquishing any of your educational duties. This is not "it takes a village to raise a child". The model is to take what you need from the resources that are available—"raid the village to raise your child".)

Unfortunately, most college grads cannot distinguish fact from opinion, nor can they search for information successfully. It is as if humanity has become aquatic, and schools have not taught students to swim.

IV.

The modest proposal is this: fill young children with as much knowledge—arm them with as many tools—as possible (language, math skills, etc.), and when they're older, present them with the problems they will actually have to solve in their lives, and allow them access to means by which they can find solutions. For math and science, this will probably mean pointing them to Khan (and to Wikipedia). For information literacy, unfortunately, the tools have not yet been built. I'm sure the world would love it if they were, and you may be able to join with others to build them.

Those tools would require students to find information, read it, and analyze whether they trusted the source or not. The best way to do this, again, would be to ask the student a question and let them answer it using any resource they can. Closed-book tests on simple facts are memory exams, not useful assessments of learning.

*Footnote: While I personally think technology is essential for an education, I completely respect the decisions of my friend and anyone else who decides to avoid it in schooling their children. Frankly, if you're at least halfway competent and love your child, you'll almost certainly beat the crap out of public school.

Thursday, October 27, 2011

Announcing Dispatches from Next Year

Reverse Disclaimer: I'm not a Stephenson fanboy. In fact, I've only started to read two of his books, which seemed really good, but ended up on the wrong side of work or school scheduling problems and have had to return to the libraries from whence they came far too early.

Famed science fiction novelist Neal Stephenson wrote a huge, huge article for World Policy in which he faults modern science fiction for not being idealistic enough. His basic argument is this: modern state actors refuse to take scientific risks on the order of those of the Space Race, in part because of cynical, negative-effects-based sci-fi. His basis for this is simple: back in the day, we wrote epic science fiction, with space, whether physical or cyber-, stretched out before us as a new world. That's much less the case now.

"The imperative to develop new technologies and implement them on a heroic scale no longer seems like the childish preoccupation of a few nerds with slide rules," Stephenson says. "It’s the only way for the human race to escape from its current predicaments. Too bad we’ve forgotten how to do it."

It's in that vein that I'm announcing my new sub-blog (yeah, I definitely have too many irons in the fire). It's called Dispatches from Next Year, and the concept is this: Each Dispatch will be a commentary on the very near future, given the technologies, social movements, and politics of today. Each Dispatch will have a grain of the open possibility of tomorrow—fantastic, alien, and reachable.

Friday, September 9, 2011

On Whiskey and Scorpions

Practical fallacies and the virtue of practical thinking.

There’s a ton of content out there on logical fallacies—the concept of logical fallacy has definitely entered into the network of “pop sci” content outlets (Boing Boing, Wired, Radiolab, Malcolm Gladwell, TED, etc). The idea being, I think, that if you can rid humanity of logically improper thinking, you can get right to the business of landing on Mars, ending poverty and disease, and/or finally cancelling “The Bachelorette”.

 I applaud this effort, even though I was recently caught using the phrase “begging the question” terribly wrong. Stamping out the Lake Wobegon Effect or “If-By-Whiskey” is absolutely a worthwhile endeavor, but I do have one concern (without being a concern troll): logic only gets you halfway to the door.

[Image caption: This guy is doing his part to stamp out "If-By-Whiskey"]
Here’s an example: there’s an infamous game theory puzzle called the Two Envelopes Problem. In it, a participant is given two identical envelopes, one containing X amount of money, and the other containing twice that. This hypothetical participant takes one envelope, but before opening it, she’s given an opportunity to exchange the envelopes. When she takes the other envelope, she’s given the same opportunity again, and so on.

The trick here is that the math says you’re always better off switching envelopes (much like in the Monty Hall Problem, you’re better off switching doors). The naive argument: if your envelope holds A, the other holds 2A or A/2 with equal odds, for an expected value of 1.25A. Unfortunately, the math doesn’t take into consideration that you don’t have all eternity, dang it, and $X is better than $0 (for positive values of X, of course). “Wait, wait,” you’re saying, “this is game theory! If the math doesn’t reveal the most logical course of action, there’s something wrong with the math, not with logical thinking on the whole!”

And you’d be right. But, as of right now, the math isn’t there. It hasn’t been corrected fully, so it can’t be used. So you’re stuck there shuffling envelopes indefinitely until you settle on an illogical, but practical, solution (though I would switch envelopes at least twice, in case the weight of the money is a giveaway). Unfortunately, there are a lot of people in the world still shuffling envelopes. I see three big ways this happens, and I'll talk about one right now (right after a quick detour to check the envelope math):
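Here's a minimal simulation sketch (Python, purely illustrative, and not part of any formal treatment of the problem): one envelope gets X, the other 2X, and we compare the "always switch" and "never switch" strategies.

```python
import random

def play(trials=100_000, switch=True):
    """Simulate the Two Envelopes game: one envelope holds X, the other 2X."""
    total = 0
    for _ in range(trials):
        x = random.uniform(1, 100)      # the smaller amount, chosen at random
        envelopes = [x, 2 * x]
        random.shuffle(envelopes)
        chosen = envelopes[0]           # our initial pick
        if switch:
            chosen = envelopes[1]       # trade for the other envelope
        total += chosen
    return total / trials

print("always switch:", play(switch=True))
print("never switch: ", play(switch=False))
```

Both strategies converge on the same average (1.5 times the smaller amount), which is exactly why the seductive 1.25A argument has to be going wrong somewhere.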

Forgetting Why. The TED-friendly marketing dude Simon Sinek has a whole talk on “Starting with Why”, and the tl;dr version is this: “People won’t buy your product, they’ll buy your vision.” And that’s awesome, but the “why” I’m talking about is more like the “why” in “Why do scorpions have poisonous stingers?” or “Why does my car have bags of air that inflate when I crash?” and less like “Why does Apple make cool gadgets?” Forgetting Why is forgetting that scorpions can't choose not to have stingers.

A classic example of Forgetting Why stems from Malcolm Gladwell’s recent appearance on the Radiolab episode “Games”. Jad and Robert were talking about a study that showed that 4 out of 5 people root for the underdog in any given, unbiased scenario. Gladwell, apparently, did not, and briefly mentioned his sadness when the expected winner fails in an athletic contest. In doing so, he illustrated just what I mean by “forgetting why”.

You see, the “purpose” of a tournament isn’t to identify the strongest team. If that goal were more important than all others, we could simply have a committee do some math to the field of teams after a long season and we’d have a verifiable winner without all the madness. But the reason we have an NCAA basketball tournament is that it’s entertaining, and entertainment has a market value. The reason March Madness exists is not to tell us who's the best—it's to make lots and lots of money.

Conflating the perceived “purpose” of a thing with its function is a problem, because its logical conclusion is a stung child telling the scorpion to get rid of that stupid stinger. We've all seen people who are obsessed with a cause that they cannot possibly effectively champion. Many of them are Forgetting Why, assuming that someone, somewhere, is making life unfair by his or her choice, and that if he or she would only choose otherwise, a massive problem would be solved.

In other words, you can’t assume that products of evolutionary systems control their own existence. Like scorpions, or basketball tournaments, or cultures, or economies. Next up: Forgetting How.

Thursday, September 1, 2011

Beat the Filter Bubble: News from Outside the Spotlight

Camila’s War

In a nation thousands of miles away, one that has, along with its neighbors, a history of despotic rulers, brutal coups, and ground-shaking political conflicts, a youth-led uprising is taking place against perceived injustices. The protests have been going on for months now, and are beginning to spread to neighboring countries. The leader of these rebels against authority has given the federal government an ultimatum: it has one chance to address their cause, at a meeting this Saturday.

That nation isn’t Libya, Syria, or Bahrain—it’s Chile. The charismatic leader is 23-year-old Camila Vallejo, president of the student government of the University of Chile, and the issue is nationalizing education. Her nemesis is Chilean president Sebastián Piñera, notable for being a right-of-center leader in a Latin America that is increasingly swinging to the left. Ms. Vallejo’s group demands nothing less than education [through post-secondary] that is “free, public, and of [high] quality”.


[Image: Camila Vallejo in German newspaper Die Zeit. CC-BY: Germán Póo-Camaño]


And their protests prove they are serious. Over the last two months, students have converged on public squares in the capital, supported the Chilean labor movement’s strike, and visited neighboring Brazil to spark protests there and meet with Brazilian president Dilma Rousseff. (Interestingly, Brazil does have public universities, but the protesters are demanding that the government double its investment in education.)

The group’s seriousness is matched only by its savvy. Ms. Vallejo, like so many revolutionaries these days, has a strong online presence. Her Twitter feed updates at least a few times every day; her recent posts have been accompanied by a black square where her profile picture should be—a show of solidarity with the loved ones of 16-year-old Manuel Gutiérrez, who was killed during the protests that accompanied the labor strike. Vallejo has over 200,000 followers.

Along with the support, of course, comes the resistance. Piñera, for his part, has repeatedly denounced the nationalization of education, indicating that he will never support it, which should no doubt make Saturday’s meeting very interesting. And Vallejo has informed authorities that her life has been threatened via Twitter, including one tweet that said, roughly translated, “We’re going to kill you like the bitch you are.”

Not everyone who opposes the student movement is quite that severe. La Tercera, a newspaper with wide circulation, published an opinion that the privatization of universities in Chile serves a purpose: to facilitate the explosive growth of university attendance in the country, which sits atop the Latin American rankings for the percentage of college-aged students currently engaged in studies. The free market, says the editorial, is the only engine that could have supported that growth.

Free-market solutions have been the hallmark of Piñera’s administration, and while they seemed popular at the time of his election, Mr. Piñera’s popularity has waned severely since he took office in March of last year. Popular or not, though, he is still in charge, which means something different in Chile, which has been largely stable and democratic since the strangely peaceful removal of military dictator Augusto Pinochet in 1990, than it would in Syria or Egypt.

The Arab Spring, in fact, may be an unfortunate backdrop for the educational protests, as Chile’s situation does not involve widespread oppression, religious infighting, or oil interests. A better comparison might be the anti-corruption movement in India, led by Anna Hazare, a man who has been called (admittedly in hyperbole) “a second Gandhi”. Hazare’s followers protested non-violently, and when their leader was jailed, they watched with rapt attention as he began a hunger fast.

It may seem that these tactics of protest, normally reserved for the brutally oppressed, are unlikely to be useful for a movement like the one in Chile, over something as distant from life and liberty as free college tuition. It should be noted, then, that after a few days of Hazare’s hunger fast, India’s Parliament acquiesced, and has been working on meeting his demand: simply to close the loopholes in an already existing anti-corruption bill.


Saturday, July 23, 2011

Google+ and the Economy of Circles

I recently had an enlightening conversation with a brilliant friend who will remain nameless until he posts his own version of events. This friend had come across, somewhere on the internet, a page containing a list of “Twitter accounts that will follow you back, guaranteed”. As an experiment, he followed everyone on the list, and, as per the guarantee, they followed him back. Interestingly enough, though, he’s still racking up tons of followers, beyond those on the original list. Having more followers means you will get more followers.

Thinking about the ramifications of following what amount to dummy accounts in order to increase his follow rate, this friend pointed out that there’s no reason for him to unfollow the accounts he followed. Many of them don’t post, and those that do can be put into a list, and that list can be silenced. There’s no regulation in the Twitter TOS that says he can’t follow certain accounts (that would be counterproductive), and there’s no social stigma associated with it, as no one is likely to double-check his follow list for bots. To my friend, this seemed to be a fatal flaw in Twitter’s system—it can be easily gamed without consequence. Fortunately, Twitter is not a game. It’s a free-market economy.

In fact, Twitter is a very free economy. Because there are only the barest of regulations and developers can and do develop tools that automate and simplify actions in Twitter, users are left to rational self-interest as their only guide to navigating the economy, that is, their audience. Some, like my friend, do this, escaping from the trappings of social convention to further their clout by whatever means available. But because of this forced freedom, Twitter is a world of bubbles (as in “housing bubble”), where individual importance shifts from day to day, and where there seem to be more “experts” than regular Joes.

Compare this to Facebook—an economy with heavier restrictions, where businesses register differently than celebrities, who register differently than regular people. Where events aren’t just blipped out to the masses, they’re fully scheduled and organized. Where there’s such a thing as “like” and where it takes mutual consent to form a connection. Facebook, by its nature, abhors the expert, and embraces the commoner. Spamming is not only encouraged, but subsidized, and there’s no reason to aspire to any benchmark of social performance, unless you’re a company.

I count myself among what I imagine to be a majority of people, content with the shortcomings of both methods. We doubted the need for another major social media platform, especially after having been jilted by Diaspora. Yet there was the Goog, rolling out what at first blush was poised to be Wave 2.0. Thankfully, it wasn’t—it was an exceptionally simple design that approached the social media economy as a free market, much like Twitter. And, in fact, despite the brilliance that is the Circle system of separating one’s friends, there’s nothing yet baked into G+ that will keep it from becoming Twitter. There’s also nothing that can prevent it from becoming like Facebook. At the same time.

That’s the brilliance of Google+: giving the user the ability to change the rules. Because one can change one’s content stream like the channel on a television, users can make the service into anything they want: a Twitter clone, a Facebook clone (minus some largely unnecessary features), a mix of the two, or something else entirely. The killer app is being able to choose your audience, and thus, your economy.

A lot has been made of G+’s ability to succeed—whether it will be a functional social network on the order of Facebook and Twitter. The numerical evidence seems to indicate that that’s a big yes, but will it actually succeed—that is, yield an improvement for direct social media in general? That will depend on whether we, the users, learn to make it work to bridge the gaps in existing social media services. For that to happen, we need to experiment, and we need to share.

There’s a lot of upfront work that goes into building a social network, and in Facebook and Twitter, that setup seemed to be straightforward and repetitive. For Google+ users, the setup phase is ongoing, as we attempt to discover how best to format our circles and use them in a way that makes sense. Experimentation will yield functional formats on an individual level, but then each of us needs to keep having usability discussions with any friends willing to share, ensuring that disruptive ideas have a chance to spread through the network. I recommend a circle called “Meta” or “Google+” for that.

That’s really the unexpected joy of Google+: millions of people, working together to make something awesome.

Monday, June 27, 2011

Writing About Technical Subjects (Pt. I)

So, you're a company intent on empowering people with hardware, software, and/or other new tech. Or you're a non-profit with Big Ideas about digital rights, tech literacy, or another 21st Century cause. Or you're a copywriter, learning to ply your craft in the content economy. Telling compelling stories about complex topics is exciting, but challenging. This fact is your greatest friend and most terrifying enemy:

We now live in a world where the "simple facts" are not so simple.

Take Facebook. The second most-frequently visited place on the web is one of a handful of websites that has spawned its own verb. (When was the last time you said you were "facebooking"?) Behind the scenes, though, it's a complex messaging, image-sharing, networking beast of a site-slash-app, not to mention the developer interface for third-party apps.

The features aren't what Facebook splashes on the sign-up page, though. It says,

Facebook helps you connect and share with the people in your life.

There's a simple principle in effect here: you don't sell features, you sell motivations. I've long appreciated market researcher and TED Talker Simon Sinek's phrasing:

Start With "Why"

That's hard and fast for me: successful pitchmen and copywriters have always done this, even if it wasn't worded exactly as above. That doesn't change in the world of software. In fact, in a world of rapid development and widespread innovation, there's even less place for people selling boxes of features.

Recently, Twitter posted a guide to help journalists and other newsy sorts use its service. It didn't have to—heaven knows people have already been using Twitter to do news since the dawn of tweets—but it did so as a help to its userbase. #TfN may not be copy, but it follows another guideline I love:

Solve a Problem

When possible, I present services as solutions to problems that people face. (As an aside, I think the use of "solution" as a buzzwordy substitute for "service", "application", or "product" dilutes the semantic power of that word.) Some companies don't use much text to show that they solve a problem—apps like Turntable.fm solve a problem (or perhaps "grant a wish") in an obvious way, and sell themselves on word of mouth alone. For everyone else, there's copy.

Trouble is, you have to explain a complicated problem like "T1 connections used to be state of the art, but nowadays you get more bandwidth if you sign up with a Wireless ISP" to someone who may not understand "T1", "bandwidth", or "Wireless ISP". If you're a Wireless ISP, that's going to cause problems.

The writer, then, has to make the problem and its solution simple enough to be clear to the reader. "Telling your story" has become a popular phrase in the marketing industry, and one of the tricks of the trade is just that: making copy into a story, with a conflict, a resolution, a hero, and—if necessary—a villain.

Tell a Story

Of course, you don't want to take storytelling to the extreme, either, or your reader will think you don't respect them. Leave some complex concepts in the narrative, and explain what must be explained.

Don't Condescend

Here's a bit of copy I wrote for a client that I think exemplifies these ideas:

For years, network service was fastest when it was delivered by physical cables and circuits, the “T1” being the most popular with businesses. The T1 was a workhorse, but, as content on the web has become more complex, it’s clear that T1 just can’t keep up. Fortunately, wireless technology has kept pace with increasing bandwidth needs—OneAxis.net wireless can offer speeds comparable to those of cable internet[...]

So, there you have it, an overview of copywriting on complex topics. Next up: News writing.

Friday, June 10, 2011

Intersection: Trumor + The Filter Bubble

First in a series of pieces that examine the hidden links between apparently unrelated ideas.

Scientists at the Laboratory for Information and Decision Systems at MIT have been working on a way to determine how far an idea will propagate via Twitter. The system, named Trumor, measures the reach and influence of individual users on Twitter, much like the commercially available service Klout. Trumor, however, creates a list of "superstars" in each topic (e.g. soccer, automobiles, copyright law). These users are very likely to have their comments propagated through the network.

The Filter Bubble is the title of a book and the name of an idea developed by online activist Eli Pariser. The basic premise is this: online information is being increasingly organized automatically by individual preference, and this algorithmic curation may not be to our benefit. As an example, Pariser notes the results pages for Google searches two of his friends did on "Egypt" during the revolution earlier this year. One friend received mostly news, the other received travel agencies and basic encyclopedia information, but nothing on the protests. As human beings rarely like their ideas to be challenged, the filter bubble tends toward increasing divisions between groups of people, based on ideology.

The Intersection: As people self-organize by ideology, they will likely either choose to restrict the number of information sources they rely on, or the filter bubble will do that for them. As the bubble closes in, information from outside will leak in less and less frequently, greatly strengthening the power of online thought leaders to influence those who follow them. (Incidentally, the implications of the word "follow" in the Twitter context will become creepier.)

As this occurs, the knowledge of who influences whom (will MIT retain this information? will it be made public?) will likely fall into the hands of corporations, political parties, hackers, and activist groups. The game will become "Who influences the influencers?" as groups attempt to convince members of the superstar list they're interested in to carry their content. This structure of top-down, siloed syndication would be very vulnerable to subtle subversion and account hacking.

In sum: As the world outside our senses becomes more important, our reliance on others' perspectives increases. As our filter bubbles close in, our worldviews become less robust and more vulnerable to manipulation via the social media superstars we rely on. Reality-hacking and reality-selling are imminent, but not in a cool cyberpunk way.

Monday, May 23, 2011

Reddit, the Abortionplex, and the Infinite Rabbit Hole

Fake news, people that think it's real news, and people that want people to know that there are people who thought the fake news was real news.

On the 18th, The Onion ran an article detailing a new $8 billion Planned Parenthood facility called "The Abortionplex". In typical Onion fashion, the article was gleefully tongue-in-cheek, timely, and completely, utterly false. The Onion is, of course, a satire news site.

That doesn't necessarily stop people from believing it. This wouldn't be the first time that the masses have bought an Onion line. Nor would it be the most ridiculous thing the general populace has believed. That being said, this graphic (click through) shouldn't come as a surprise to anyone—and yet it's one of the most interesting internet artifacts I've seen, as there is no attempt at all to prove that it is authentic.

I want to be sure that I go on record saying that I don't have any doubts as to the legitimacy of the graphic. However, if someone wanted to create a false graphic on this or any other theme, the ubiquity of Facebook thumbnails and Photoshop makes it possible for them to get involved in made-up guerrilla journalism. The Culture Wars no longer require facts (but they do need the lulz).

Eli Pariser notes the advent of the "filter bubble": the way that Google, Facebook, et al. filter content that you receive based on previously expressed preferences. His TED Talk approaches the most obvious problem with the filter bubble, which is that as websites tailor content automatically, people aren't exposed to opposing viewpoints. There's another angle here, which is that these filter bubbles protect false ideas from being debunked except by untrusted "outsiders". It's clear that the Obama Kenyan birth certificate is a fake, that vaccines do not cause autism, and that the Bush administration did not conspire to cause 9/11, but these ideas continue to spread.

Consider this: we are creating a world in which large social groups not only have different views as to how the world works, but can actively produce proof that their worldview is correct. Worse than that, the low barrier-to-forgery makes it easier and easier for us to disregard facts we find inconvenient to our preconceptions.

Friday, May 6, 2011

The Tolle Paradox: How Buzzwords Kill Big Ideas

Whether you love, hate, or don't know Eckhart Tolle, he's an important guy in modern thought. He's been on Oprah's show multiple times (which is really the modern touchstone of influence), his books are bestsellers, and the London-based spirituality group Watkins Review rates him the most spiritually influential person in the world. (Note: The author of this article is largely not convinced of the value of Tolleanism in general, but will not deny his importance.)

You may have heard some of his principles. The two pillars of his work are 1) the rejection of humanity's tendency to live in the past or the future, and 2) the idea of experiencing ideas and objects without categorizing them. That is, experiencing a flower or a day at the beach as an object and a time period, respectively, without assigning them "labels".

You may have even heard "labels" as a negative (the usage predates Tolle, but has a new meaning in his context)—"don't 'label' this moment," and the like. You probably have heard people ask you to be "present in the moment". Unfortunately, these shortcuts deconstruct the entirety of Tolle's thought. Using labels to describe the act of not using labels defeats the entire purpose of separating the mind from the self. The buzzwords kill the idea. The buzzwords become a cudgel by which any number of ideologies are forced upon an audience.

Why is this? I don't have any studies, but let me posit a theory: Buzzwords deconstruct by their nature. Thinking sorts spend months and years on their frameworks, working out exactly what they mean, and running their words through editors and publishing houses and colleagues before releasing them to the public. Their disciples struggle, but eventually work out a practical way to implement their words. Then the work becomes famous, and there arrive on the scene people who want the power of the movement without its substance.

It's easy to learn the jargon of a philosophical movement, then use this new language to identify yourself as sympathetic to the movement itself. Once such an intruder overlays his or her own personal philosophy over the jargon of the movement, a weapon of ideology, the buzzword, has been fashioned—it's a funnel by which people can be moved from one school of thought to another without recognizing the shift.

Unfortunately, it's not just something that happens in the ethereal world of new age spirituality. Agile software development is a method for developing applications that has changed the face of coding. The basic principle is this: One builds out fully-formed features of software in small time increments. Every iteration of the process yields a feature for the application that could be brought to market by itself.

Agile, like Tolleanism, is widely accepted and preached, and is mostly harmless. No one would really take issue with focusing on present experience, or with building software one marketable piece at a time. However, in both cases, powerful rhetoric gets changed into a tool that serves its wielder's interest only. Being asked not to "label" things becomes learning not to talk back to your boss; a "scrum" becomes a sweatshop.

Those burned by the tool-wielders then tend to lash back at Tolle and at Agile, without realizing they've been had by masters of narrative engineering.

Friday, April 29, 2011

Getting a Celebrity to Tweet to You

A well-timed Twitter post got Dulé Hill to send me one in return. You know that's right.

The other day I had an idea. A goal, really, albeit a small one. I had found that there were a couple of celebrities on my Twitter feed that seemed to be very interactive with their followers, and I thought, "I should get one of them to interact with me!" Now, I don't want you to take me for one of those celeb-stalker sorts...I follow one actor, one athlete, one novelist, and a handful of journalists.

The two celebs I follow that I deemed "interactive" couldn't be more dissimilar, but, strangely, both are in Vancouver. William Gibson, father of cyberpunk and author of some truly awesome books; and Dulé Hill, best known for his role as Charlie Young on The West Wing and Burton "Gus" Guster on USA's Psych, both of which are excellent television series.

Gibson, for his part, seems to like to tweet about current events, largely retweeting funny and interesting things he sees, but he responds when people ask him questions. Hill tends to converse with the crowd constantly, and posts about friends, especially West Wing and Psych castmates.

After a failed attempt to get Gibson to respond, I decided to focus on Hill. Now, when I say "focus", I mean, in the immortal words of Mr. Hill's castmate James Roday, "Wait for iiiiiiit!" And the moment arrived.

I saw that Rob Lowe, formerly of The West Wing, was trending at the time, due to his appearance on Oprah. Hill had tweeted a number of times regarding his admiration for Lowe, so my tweet was simple: "@dulehill Rob Lowe is trending right now. Just thought you should know." Hill's response? "@benchatt Nice!" That was it. Mission accomplished.

Of course, I told everyone on Facebook what just happened on Twitter. Yes, that sentence is as strange to me as it is to you. The funny thing is that my one-word brush with greatness has stayed on the web—I think I've mentioned it once in real life, to my wife, who then probably saw it on Facebook anyway.

Social media may bring us into closer contact with celebrities these days, but those brushes with fame aren't nearly as cool as running into Ralph Macchio at Taco Bell.

Monday, April 25, 2011

Finding the Weird Words (Vocab and Gmail Corpus Part I)

Today, I can finally begin to analyze the text of my Gmail Chat corpus.

My first thought is that IM text is probably somehow different from speech and pre-planned text: books, articles, letters, etc. After the rather painful process of piecing together Perl scripts to divide my corpus between the text I wrote and the text other people wrote, I ran those two files through Laurence Anthony's great AntWordProfiler, which compares a text against the General Service List and Academic Word List.

The GSL and the AWL are general vocabulary lists designed to help ESL teachers give learners of English the most useful vocabulary first. While English has one of the largest lexicons (if not the largest) in the history of language, about 80% of a typical text is made up of a few thousand very common words. The 2,000 "most useful" of these words are the GSL. The AWL adds words that appear in basic academic texts and newspapers.

AntWordProfiler divided the words from my texts into four categories: those that were found in the first thousand words of the GSL, those that were found in the second thousand, those that were found in the AWL, and those that were not found in either the GSL or the AWL. While not completely accurate, one might consider the results as being divided between "very frequent", "frequent", "somewhat frequent", and "not found".
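For the curious, the core of that bucketing is simple enough to sketch. Here's a rough Python version (mine, not AntWordProfiler's actual code), assuming hypothetical plain-text word lists named gsl_k1.txt, gsl_k2.txt, and awl.txt with one word per line; the real tool also handles word families and inflections, which this ignores.

```python
import re
from collections import Counter

def load_list(path):
    """Read a word list (one word per line) into a lowercase set."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

gsl_k1 = load_list("gsl_k1.txt")   # first thousand GSL words
gsl_k2 = load_list("gsl_k2.txt")   # second thousand GSL words
awl = load_list("awl.txt")         # Academic Word List

def profile(text):
    """Return the share of tokens falling into each frequency band."""
    tokens = re.findall(r"[a-z']+", text.lower())
    buckets = Counter()
    for tok in tokens:
        if tok in gsl_k1:
            buckets["K1"] += 1
        elif tok in gsl_k2:
            buckets["K2"] += 1
        elif tok in awl:
            buckets["AWL"] += 1
        else:
            buckets["Not in List"] += 1
    total = sum(buckets.values()) or 1
    return {band: count / total for band, count in buckets.items()}

print(profile("brb, the quantum flux capacitor is leaking again"))
```

Run something like this over the two corpus files separately and you get the band percentages to compare.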


[Chart: distribution of the four groups, split between words typed by others and words typed by myself.] I ended up using a lower percentage of K1 (first thousand) GSL words, and a higher percentage of off-list words. This is probably because my corpus of interlocutors' words includes more than 80 different people, of varying word-choice preferences. As the average should be 80% K1 + K2 words, neither I nor my associates are more loquacious than the average bear. (Not true: average bear K1 is probably 0; most bear speech is "Not in List".)

What I find most interesting, really, is how similar the two breakdowns are. The differences in the GSL K2 and AWL percentages were a matter of fractions of a percent each, and both corpora show the distribution predicted for normal English text, which suggests that instant messaging text is no different from speech or pre-planned text in its word choice (as opposed to, say, SMS text, which almost certainly has a different distribution).

Next up: What was in that "Not in List" group? A look at my weird words.

Tuesday, April 12, 2011

The Social Singularity

What if the world economy is actually a malevolent artificial intelligence? Get out your tinfoil hats for this one.

So, if there are three things I hate, they are: 1) "glossy" or over-reaching futurism, 2) conspiracy theories, and 3) people thinking an idea is important because it came to them in a dream. You must all now know that I'm about to become what I hate. Three times. Today's adventure could have been entitled, "Why We Should All Be Running Around, Screaming, With Our Pants on Our Heads Part II", or "Watson for President". Let's start.

Part the First: "Stateware"
I've been reading James Gleick's "The Information", and am struck by the Difference Engine. I've heard of it before, but I finally get it through my thick skull this time that computers were not always intended to be electronic machines, and in fact the first computers designed were not. I don't know how many difference engines it would take to build a bare-bones CPU, but theoretically it could be done. It would also be really really slow and run on oceans of steam.

My next thought was this: really, you could build a computer out of anything that responds to a binary difference. People have made computers out of biological material, Lego, et cetera. You could even make a computer out of people, in fact. If you replaced transistors with human beings, and wires with human language, you could make a very interesting computational device indeed. It would be error-prone and slow, which would make any software you ran on it very likely to crash, unless you had enough redundancy and sufficiently fast communication.
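To make the "anything binary" point concrete, here's a toy sketch (Python standing in for people passing notes): if each component can compute nothing but NAND, chaining enough of them together already gets you arithmetic. This is just an illustration of gate universality, not a claim about the speed of a people-powered machine.

```python
# Every "component" (a person, a Lego brick, a steam valve) only needs to
# answer one question: NAND. Everything else is wiring.
def nand(a, b):
    return 0 if (a and b) else 1

def xor(a, b):                      # built entirely from NAND gates
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def half_adder(a, b):
    """Add two one-bit numbers using nothing but NAND."""
    return xor(a, b), and_(a, b)    # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
```

A full CPU is this trick repeated a few hundred thousand times, which is why doing it with humans would be, as noted, error-prone and slow.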

I thought a lot about governments, laws, economies, etc. and decided that, in a certain light, these systems could be considered software. Laws are essentially algorithmic—I think it no coincidence that we refer to the "code of law". In a discussion on this matter with a friend, I referred to this type of software as "stateware".

Part the Second: A Brief Interlude on the Technological Singularity
Since the beginning of computational technology, scientists have raised the possibility that eventually machines may surpass humans at all "intelligence" related functions, quickly outstripping all of humanity's prowess and (in many cases) taking control of the world.

Frankly, we have no way to predict what will happen once a piece of software is created that rivals the general intelligence of a human being. As that software could then write software that is more intelligent than itself, the possibilities are unfathomable. This inability to see beyond that point is the reason the phenomenon is called the "technological singularity".

One of the things that makes the singularity so scary is that an intelligent machine may or may not have goals that match those of humanity. It may have a written-in goal that it takes to an extreme (converting the world into a paperclip factory is my favorite example), or it may take as a goal the propagation of artificial intelligences at the expense of human survival.

Part the Third: Paranoia
This part is primarily putting the other two parts together: if "stateware" is a thing, and if software can be built that eventually outgrows the control of its designers, then it seems clear that states and economies are destined to arrive at singularity status before computer software. Economies are already difficult to understand, and have no stated goals, so an economic singularity is almost a given.

States, however, are more interesting. The goals of the United States Constitution, arguably the operating system for US stateware, are "to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity". The Constitution was purposefully vague about the precedence and exact definitions of these goals.

At the time, of course, the hardware of the State was not fast enough for anyone to be concerned that any one of the goals would be given outsized importance, or that an exclusive definition of the wording would be settled upon. That is no longer the case. As the network of people inside the US has grown both in size and complexity, our capacity to produce more code (in the form of legislation, executive orders, and legal rulings) has exploded, creating a social software with unpredictable ramifications and unprecedented size.

The active coders here, the three branches of the government, are hobbled by design; the Founders didn't want any of them to control the coding process too much. This "un-agile" government isn't fast enough to revise significant amounts of this code (only to create more of it), and the polarization of American politics is slowing the speed of government even further.

Beyond that, there is no guarantee that any elected official would want to revise US stateware. As politicians assume more power, it appears that the stateware co-opts their individual plans; those with reformist ideas are locked out of power, and those who make it through seem to change their minds.

Lastly, it doesn't seem like any small group of humans (even Congress is relatively small) can muster the cognitive power to steer this now-mammoth ship of state. The relations between organizations, lobbies, foreign powers, celebrities, systems, and economies are too complex. If the social singularity isn't here yet, it is howling at the door.

This all seems to drive to one point: we need AI assistance to manage the United States Government. I'm not suggesting we put things on auto-pilot, but we need machines that can make the connections we will inevitably fail to see. I don't believe the software exists yet, and this is a terrible realization to make.

Monday, March 14, 2011

The Need for a New Social Science

Memetics, Marketing, and Sociology can only get us so far.

I'm going to be very, very careful in this post; I don't know how to present this in a way that doesn't make me sound a little eccentric. I've been thinking a lot lately about memetics, the study of the transmission and propagation of ideas. These ideas have been compared alternately to genes and to biological viruses, spreading about hither and yon. There's been a bit of work done in the area of what ideas stick and what ideas don't (consensus: various values are important, and truth is only one of them), but there are just some huge, fat, gaping holes in the current state of the study of the propagation of ideas.

Gaping Hole #1: Knowing What We Can Know

First, and this is the scariest, we just have a very limited ability to view memes. We can see the spread of macro images on the internet, and identify that Ceiling Cat is probably sticking around for a while, whereas Rebecca Black's "Friday" is probably not. But we don't really know what to do with this information; the shape of the meme continues to elude us.

We will almost certainly never be able to predict what data will spread, and we will almost certainly never fully get a bead on what makes data "spreadable". We can get a general idea, perhaps, but it won't be an equation. Psychohistory is an idea in Asimov books, not a thing that will actually exist. (Oh, yeah, link. Sorry.)

At the root of this gaping hole is the fact that memetic endeavors have largely been misguided because they've been strapped to a biological framework. Memes have no chromosomes, they cannot be put under a microscope, and they cannot be "sequenced".

Gaping Hole #2: Ideas and Behaviors

Second, there's not much in the literature by way of determining what difference, if any, there is between ideas that spread and the behaviors they leave behind. A piece of information that spreads might just annoy me by leaving "Never Gonna Give You Up" stuck in my head, or it might convince me to vote for Ralph Nader. There is a big difference between a meme and its behavioral payload.

Gaping Hole #3: The Myth of Measurement

Third, memetics tends toward the very abstract, and generally fails to measure anything at all, instead engaging in lengthy thought experiments and exegeses on which theoretical construct is better for the task, in essence applying none of them empirically. Here's an article example, and it's ridiculously long.

The Solution

The solution is to actually conduct experiments on actual cultural transmission. Richard Dawkins, the founder of memetics, once noted that memetics had not yet found its Crick and Watson; it hadn't even found its Mendel. I think he was wrong. Plenty of people have come before, studying memetics before it was called memetics. Most importantly, to my mind, is the sociolinguist William Labov.

Labov tackled issues of how language change spread, what factors made someone likely to adapt their dialect to another, and so forth. One of his earliest and most famous studies was of employees in three different department stores in New York City. He found one meme: the tendency to "correct" the New York City habit of "dropping" the r-sound in certain words (his test phrase was "fourth floor"). The meme was more likely to spread along socioeconomic lines, meaning that those in stores with higher-priced goods tended to include the r-sound in "fourth floor".

I know this isn't really much, as far as studies go, but it was a start. Labov discovered a number of patterns of linguistic change, and scores of sociolinguists after him have followed suit. Sociolinguistics may be a little tame compared to full-on memetics, as language traits are hardly as world-changing as religious and political beliefs, but it is a sufficient, if terribly overlooked start. Not much different from Mendel's plants, if you think about it.

Monday, January 10, 2011

Watch Your Hed

Headlines and ledes grab attention, and are often lies.

This is, of course, a follow-up to the "Update" segment of my previous post, in which I note that Fox News changed a headline from an inflammatory one to a less inflammatory one. Today's post is about why that sort of behavior is too little, too late.

A couple of posts ago, I talked about the "Information Problem", or how the surplus of content affects the way humanity makes decisions. I divided print and web material into three categories: content (less relevant, truth value unassignable), information (more relevant, but pre-interpreted) and data (most relevant, but difficult to understand without interpretation). News reporting, in general, should fit squarely into the "information" category; it should be pre-interpreted data reported from primary sources, and in general, should be unimpeachably true.

Now, that isn't always the case, and there are ways to report things that make it appear as though the universe, not the medium, has a bias for or against a certain position. And I've written about that too. That needs to be taken care of, but in the end, each news user is going to have to strip bias from news. What's more devious, though, is the headline.

In a content economy, attention is currency. You can never make quick money by printing just the facts, reliably, and within a reasonable timeframe, even though that might be the most sustainable position. You're generally going to want to print news biased toward an audience, report on events that interest that audience, ignore things that don't interest the audience, and try to scoop everyone else all the time, even if you have unreliable or incomplete facts.

Even if you follow that formula, though, you still need to advertise. And the only advertisement that works on a news aggregator is the headline (and the lede, on occasion). So you have to make it count. In the end, if you make a statement that's shocking, you're more likely to get hits. It has been like this from the beginning of printed news. What's different in the aggregated news / 24-hour cable news ticker world is that far more headlines are read than articles, streaming at us as they do.

Unfortunately, shocking headlines are often a little bit misleading (or more than a little bit). This wasn't so much a problem before the Information Age, but now, each headline could easily be perceived as a tiny, unimpeachable fact. Which, of course, is inaccurate, and leads to misconstructed worldviews. Which messes with systems based on preferences and beliefs.

So, save the economy just a little bit, and try not to lie in your headline, even if you fix it in the article, or retract it later.

Sunday, January 9, 2011

The Tragedy of Tragedy

Just because something is sensational doesn't mean it's true.

After yesterday's tragic shooting in Tucson, which left six dead, including a nine-year-old and a federal judge, and several more wounded, including Democratic Congresswoman Gabrielle Giffords, the media responded with its traditional tragedy two-step program: first, report the facts; second, try to explain why the event occurred.

The first step was generally respectful, as media reporting of tragedy usually is. The second step was occasionally disturbing in that many linked Sarah Palin's trigger-happy metaphors to the terrible outburst.

Let me be clear before I go any further: I find Palin's ideology, rhetoric, and image utterly execrable in every way. I cannot defend her early departure from Juneau, and find her and her family's opportunistic reality spotlight-hogging a frightening example of a possible future direction of American politics.

And while it's arguable that Arizona's hyper-conservative politics played a role in alleged shooter Jared Loughner's timing and target, it seems unlikely that they made him a crazed shooter.

Tucson Weekly has an interesting piece on Mr. Loughner's personal internet presence, which makes it pretty clear that he was likely to shoot and kill in one venue or another at some point in time. It also happens to be the case that his home was a short walk from the Safeway where the attack occurred.

The nature of the attack, Mr. Loughner's personality, and his previous communications seem to put this incident in the same category as the Columbine shootings, not the JFK assassination.

But this doesn't match the agenda of some commentators. It appears as though some wish the shooter could have been clearly tied to the admittedly ridiculous, extreme rhetoric that has become the stock of American political discourse. (egregious example) Upon examining the evidence, this appears not to be the case. The man had extreme, iconoclastic political beliefs, and appears to have been on a rampage, rather than a mission.

To ignore the evidence and provide a false connection to an unwanted person or ideology disrespects the memory of those who died in this terrible tragedy. There are enough valid arguments against Ms. Palin's stances; making specious accusations is unnecessary.

(Update: Fox News accuses "many on the American Left" of misusing the tragedy, and changed the headline of this story from 'Dems Blame Rhetoric for Shooting' to 'Tragedy Inspires Political "Cheap Shots"'. Stay classy, Fox.)

Wednesday, January 5, 2011

The Information Problem

We have too much content, not enough information, and surprisingly little data.

The great futurist Alvin Toffler predicted a world with relevant data so profuse that information overload would be the inevitable result. He was wrong. The 21st Century does not have a problem with information overload. It has a problem with Content Overload.

You might think that I'm splitting semantic hairs in order to make a polarizing statement, but perhaps it would help my case to point out that the surfeit of content on the internet is of varying informational value. Quite a bit of it is opinion, and much of the stuff that is purportedly informational is of questionable truth value.

It's important to make a distinction between content, information, and data. While I'm not suggesting that there's anything inherent in each of these words that usefully distinguishes them, it is clear that there are three different phenomena to be distinguished here, regarding the stuff we take into our brains.

Content is what I'll call all print and digital "stuff" in general. This definition of content would include movie reviews, lolcats, sports scores, et cetera. Information is content which can be assigned a truth value. That gets tricky, because an opinion piece is clearly not information in and of itself, but may contain information (or misinformation). Data is a special kind of information that appears in tabular or numeric form. Content and data sit at the two ends of a scale that runs from the lowest density of fact to the highest.

Data is not always foolproof, as Charles Seife identifies in his data-in-journalism analysis, Proofiness. A lot of figures are either half-right, made to appear to support erroneous conclusions, or just plain made up. Of course, that makes any informational statements based on these data unsupported, and opinions formed around this information, misinformed. To hear Seife tell it, there are a lot more errant conclusions out there than accurate ones--which leads me to believe the following:

Nearly everyone is wrong most of the time.

The most logical solution to being wrong is to check facts to make sure they are accurate before basing opinions and decisions on them. If this were possible, it would certainly make for a much better world. Unfortunately, verifiable fact is hard to come by. In many cases, scientific studies have not been done (or cannot be done). Worse, sometimes results of different studies conflict. Exercise science and nutrition seem especially prone to this: one month, authorities say it's essential to focus mostly on cardio, the next, weights are the thing. Fat used to be the killer, now carbs are. Any attempt to be consistently right, even most of the time, is probably doomed to failure.

Unfortunately for us, the number of decisions we have to make every day appears to be increasing. From financial choices, like bank selection and the use of credit cards, to consumer choices about everything from insurance to running shoes. Further, it seems as though the options we have are also increasing, adding even more difficulty to the task of human decision-making, and making the human life experience a multi-decade pratfall.

As a shortcut to making decisions, we tend to lump like decisions together, as a pattern. We may then seek to explain these patterns with a narrative. For example, my wife loves to shop with coupons. Her decision-making pattern is that if we don't absolutely need a product right away, she doesn't buy it at full price, or even close, ever. The narrative behind this pattern is the idea that companies engage in promotional deals to catch unwary shoppers off their guard, but that by putting in a little extra effort, you can subvert these deals to make your shopping exceptionally cheap.

The progression in my wife's example is from individual decisions, to patterns, to explanatory narratives. This is a practical approach to decision making. The reverse is the ideological approach: to take the narrative and impose it on practices and individual decisions. There is nothing wrong with the ideological approach if you use it to govern decisions made without a thought for immediate outcomes. Ethical decisions, for example. One deals honestly because the narrative of "Honesty is the best policy" embodies a state which the decision-maker wishes to attain, not because it yields a direct benefit. When you use the ideological approach and expect a specific result from a specific decision, things get crazy.

This is because narratives are based on, again, patterns made from lots of little decisions, which the narratives then explain. When the method by which the narrative was built is unknown, the truth value of the narrative is obscured. Using content in the place of information or data in order to make decisions is terribly dangerous. Using unverified information as if it were verified is terribly dangerous. Analyzing data incorrectly is terribly dangerous. And yet, this is what is happening in decision-making all over the world, from households to governments and beyond.

Data in and of itself is incredibly powerful. Bringing up Seife and Proofiness again, it's clear that attaching a number to a fact is a shortcut to credibility. But humans are notorious for misreading and abusing data, and creating narratives based on these tortured numbers. Here's an example debunking a mostly harmless myth about pet ownership.

What I'm driving at is this: humanity might make better decisions if we would take what's reported as news and science, et cetera, with a grain of salt. This proves exceptionally difficult, because in reacting to one incorrect or incomplete narrative, we often create an opposing narrative that is just as incorrect or incomplete. There may not be a solution to the "Information Problem", but I propose that you, Dear Reader, go conduct your own studies to verify my narrative, and live by withholding judgment until the proof is overwhelming.