Fake news, people that think it's real news, and people that want people to know that there are people who thought the fake news was real news.
On the 18th, The Onion ran an article detailing a new $8 billion Planned Parenthood facility called "The Abortionplex". In typical Onion fashion, the article was gleefully tongue-in-cheek, timely, and completely, utterly false. The Onion is, of course, a satire news site.
That doesn't necessarily stop people from believing it. This wouldn't be the first time that the masses have bought an Onion line. Nor would it be the most ridiculous thing the general populace has believed. That being said, this graphic (click through) shouldn't come as a surprise to anyone—and yet it's one of the most interesting internet artifacts I've seen, as there is no attempt at all to prove that it is authentic.
I want to be sure that I go on record saying that I don't have any doubts as to the legitimacy of the graphic. However, if someone wanted to create a false graphic on this or any other theme, the ubiquity of Facebook thumbnails and Photoshop would make it easy to engage in made-up guerrilla journalism. The Culture Wars no longer require facts (but they do need the lulz).
Eli Pariser notes the advent of the "filter bubble": the way that Google, Facebook, et al. filter content that you receive based on previously expressed preferences. His TED Talk approaches the most obvious problem with the filter bubble, which is that as websites tailor content automatically, people aren't exposed to opposing viewpoints. There's another angle here, which is that these filter bubbles protect false ideas from being debunked except by untrusted "outsiders". It's clear that the Obama Kenyan birth certificate is a fake, that vaccines do not cause autism, and that the Bush administration did not conspire to cause 9/11, but these ideas continue to spread.
Consider this: we are creating a world in which large social groups not only have different views as to how the world works, but can actively produce proof that their worldview is correct. Worse than that, the low barrier-to-forgery makes it easier and easier for us to disregard facts we find inconvenient to our preconceptions.
Monday, May 23, 2011
Friday, May 6, 2011
The Tolle Paradox: How Buzzwords Kill Big Ideas
Whether you love, hate, or don't know Eckhart Tolle, he's an important guy in modern thought. He's been on Oprah's show multiple times (which is really the modern touchstone of influence), his books are bestsellers, and the London-based spirituality group Watkins Review rates him the most spiritually influential person in the world. (Note: The author of this article is largely not convinced of the value of Tolleanism in general, but will not deny his importance.)
You may have heard some of his principles. The two pillars of his work are 1) the rejection of humanity's tendency to live in the past or the future, and 2) the idea of experiencing ideas and objects without categorizing them. That is, experiencing a flower or a day at the beach as an object and a time period, respectively, without assigning them "labels".
You may have even heard "labels" used as a negative (a usage that predates Tolle, but takes on new meaning in his context)—"don't 'label' this moment," and the like. You probably have heard people ask you to be "present in the moment". Unfortunately, these shortcuts deconstruct the entirety of Tolle's thought. Using labels to describe the act of not using labels defeats the entire purpose of separating the mind from the self. The buzzwords kill the idea. The buzzwords become a cudgel by which any number of ideologies are forced upon an audience.
Why is this? I don't have any studies, but let me posit a theory: Buzzwords deconstruct by their nature. Thinking sorts spend months and years on their frameworks, working out exactly what they mean, and running their words through editors and publishing houses and colleagues before releasing them to the public. Their disciples struggle, but eventually work out a practical way to implement their words. Then the work becomes famous, and there arrive on the scene people who want the power of the movement without its substance.
It's easy to learn the jargon of a philosophical movement, then use this new language to identify yourself as sympathetic to the movement itself. Once such an intruder overlays his or her own personal philosophy onto the jargon of the movement, a weapon of ideology—the buzzword—has been fashioned: a funnel by which people can be moved from one school of thought to another without recognizing the shift.
Unfortunately, it's not just something that happens in the ethereal world of new age spirituality. Agile software development is a method for developing applications that has changed the face of coding. The basic principle is this: One builds out fully-formed features of software in small time increments. Every iteration of the process yields a feature for the application that could be brought to market by itself.
Agile, like Tolleanism, is widely accepted and preached, and is mostly harmless. No one would really take issue with focusing on present experience, or with building software one marketable piece at a time. However, in both cases, powerful rhetoric gets changed into a tool that serves only its wielder's interest. Being asked not to "label" things becomes learning not to talk back to your boss; a "scrum" becomes a sweatshop.
Those burned by the tool-wielders then tend to backlash against Tolle and against Agile, without realizing they've been had by masters of narrative engineering.
Friday, April 29, 2011
Getting a Celebrity to Tweet to You
A well-timed Twitter post got Dulé Hill to send me one in return. You know that's right.
The other day I had an idea. A goal, really, albeit a small one. I had found that there were a couple of celebrities on my Twitter feed that seemed to be very interactive with their followers, and I thought, "I should get one of them to interact with me!" Now, I don't want you to take me for one of those celeb-stalker sorts...I follow one actor, one athlete, one novelist, and a handful of journalists.
The two celebs I follow that I deemed "interactive" couldn't be more dissimilar, but, strangely, both are in Vancouver: William Gibson, father of cyberpunk and author of some truly awesome books; and Dulé Hill, best known for his roles as Charlie Young on The West Wing and Burton "Gus" Guster on USA's Psych, both of which are excellent television series.
Gibson, for his part, seems to like to tweet about current events, largely retweeting funny and interesting things he sees, but he responds when people ask him questions. Hill tends to converse with the crowd constantly, and posts about friends, especially West Wing and Psych castmates.
After a failed attempt to get Gibson to respond, I decided to focus on Hill. Now, when I say "focus", I mean, in the immortal words of Mr. Hill's castmate James Roday, "Wait for iiiiiiit!" And the moment arrived.
I saw that Rob Lowe, formerly of The West Wing, was trending at the time, due to his appearance on Oprah. Hill had tweeted a number of times about his admiration for Lowe, so my tweet was simple: "@dulehill Rob Lowe is trending right now. Just thought you should know." Hill's response? "@benchatt Nice!" That was it. Mission accomplished.
Of course, I told everyone on Facebook what just happened on Twitter. Yes, that sentence is as strange to me as it is to you. The funny thing is that my one-word brush with greatness has stayed on the web—I think I've mentioned it once in real life, to my wife, who then probably saw it on Facebook anyway.
Social media may bring us into closer contact with celebrities these days, but those brushes with fame aren't nearly as cool as running into Ralph Macchio at Taco Bell.
Monday, April 25, 2011
Finding the Weird Words (Vocab and Gmail Corpus Part I)
Today, I can finally begin to analyze the text of my Gmail Chat corpus.
My first thought is that IM text is probably somehow different from speech and pre-planned text: books, articles, letters, etc. After the rather painful process of piecing together Perl scripts to divide my corpus between the text I wrote and the text other people wrote, I ran those two files through Laurence Anthony's great AntWordProfiler, which compares a text against the General Service List and Academic Word List.
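If you want to try this on your own chat archive, the split itself is simple. Here's a minimal sketch of the idea—in Python rather than the Perl I actually cobbled together, and assuming a plain-text export where every line starts with the sender's name and a colon (the file names and sender label below are placeholders):

```python
# A minimal sketch of the corpus split -- not my actual scripts.
# Assumed line format: "Sender Name: message text".

MY_NAME = "Ben"  # hypothetical sender label; yours will differ

def split_corpus(path, my_name=MY_NAME):
    """Split a chat log into lines I typed and lines everyone else typed."""
    mine, others = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            sender, sep, message = line.partition(":")
            if not sep:
                continue  # no "Sender:" prefix; skip timestamps, blank lines, etc.
            bucket = mine if sender.strip() == my_name else others
            bucket.append(message.strip())
    return mine, others

if __name__ == "__main__":
    mine, others = split_corpus("gmail_chat_log.txt")  # hypothetical file name
    # AntWordProfiler takes plain text files, so write one per sub-corpus.
    with open("my_words.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(mine))
    with open("their_words.txt", "w", encoding="utf-8") as f:
        f.write("\n".join(others))
```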
The GSL and the AWL are general vocabulary lists designed to help ESL teachers give learners of English the most useful vocabulary first. While English has one of the largest lexicons (if not the largest) in the history of language, about 80% of most texts consists of a few thousand very common words. The 2,000 "most useful" of these words make up the GSL. The AWL adds words that appear in basic academic texts and newspapers.
AntWordProfiler divided the words from my texts into four categories: those that were found in the first thousand words of the GSL, those that were found in the second thousand, those that were found in the AWL, and those that were not found in either the GSL or the AWL. While not completely accurate, one might consider the results as being divided between "very frequent", "frequent", "somewhat frequent", and "not found".
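To make the categorization concrete, here's a rough sketch of the kind of bucketing a word profiler does. This is not AntWordProfiler's actual code—the real tool matches word families, not just surface forms—and the word-list file names are placeholders:

```python
# Rough sketch of frequency-band profiling, in the spirit of AntWordProfiler.
import re
from collections import Counter

def load_list(path):
    """Load a word list: one headword per line (file format assumed)."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def profile(text, k1, k2, awl):
    """Return the percentage of tokens falling in each frequency band."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter()
    for tok in tokens:
        if tok in k1:
            counts["K1"] += 1
        elif tok in k2:
            counts["K2"] += 1
        elif tok in awl:
            counts["AWL"] += 1
        else:
            counts["Not in List"] += 1
    total = sum(counts.values()) or 1
    return {band: 100.0 * n / total for band, n in counts.items()}

# Usage, with placeholder file names:
# k1 = load_list("gsl_first_1000.txt")
# k2 = load_list("gsl_second_1000.txt")
# awl = load_list("awl.txt")
# print(profile(open("my_words.txt", encoding="utf-8").read(), k1, k2, awl))
```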
[Chart: distribution of K1, K2, AWL, and off-list words—my words vs. everyone else's]
This is the distribution of the four groups, divided between words typed by others and words typed by myself. I ended up using a lower percentage of K1 (first thousand) GSL words and a higher percentage of off-list words. This is probably because my corpus of interlocutors' words includes more than 80 different people, of varying word-choice preferences. As the average should be about 80% K1 + K2 words, neither I nor my associates are more loquacious than the average bear. (Not true: the average bear's K1 score is probably 0; most bear speech is "Not in List".)
What I find most interesting, really, is how similar the two breakdowns are. The differences between the GSL K2 and AWL percentages were fractions of a percent each, and both corpora show the distributions predicted for normal English text, which suggests that instant messaging text is not much different from speech or pre-planned text in its word choice (as opposed to, say, SMS text, which almost certainly has a different distribution).
Next up: What was in that "Not in List" group? A look at my weird words.
Tuesday, April 12, 2011
The Social Singularity
What if the world economy is actually a malevolent artificial intelligence? Get out your tinfoil hats for this one.
So, if there are three things I hate, they are: 1) "glossy" or over-reaching futurism, 2) conspiracy theories, and 3) people thinking an idea is important because it came to them in a dream. You must all now know that I'm about to become what I hate. Three times. Today's adventure could have been entitled "Why We Should All Be Running Around, Screaming, With Our Pants on Our Heads, Part II", or "Watson for President". Let's start.
Part the First: "Stateware"
I've been reading James Gleick's "The Information", and am struck by Babbage's Difference Engine. I've heard of it before, but this time I finally got it through my thick skull that computers were not always intended to be electronic machines, and in fact the first computers designed were not. I don't know how many difference engines it would take to build a bare-bones CPU, but theoretically it could be done. It would also be really, really slow and run on oceans of steam.
My next thought was this: really, you could build a computer out of anything that responds to a binary difference. People have made computers out of biological material, Lego, et cetera. You could even make a computer out of people, in fact. If you replaced transistors with human beings, and wires with human language, you could make a very interesting computational device indeed. It would be error-prone and slow, which would make any software you ran on it very likely to crash, unless you had enough redundancy and sufficiently fast communication.
I thought a lot about governments, laws, economies, etc. and decided that, in a certain light, these systems could be considered software. Laws are essentially algorithmic—I think it no coincidence that we refer to the "code of law". In a discussion on this matter with a friend, I referred to this type of software as "stateware".
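To illustrate what I mean by "algorithmic", here's a toy example—a deliberately simplified eligibility rule written as code, not a quotation of any real statute:

```python
# Toy illustration of "laws as code": a simplified eligibility rule,
# not the text of any actual statute.
def may_vote(age, is_citizen, is_registered):
    # "A person may vote if they are a citizen, at least 18 years of age,
    # and registered in their state of residence." -- hypothetical wording
    return is_citizen and age >= 18 and is_registered

print(may_vote(age=19, is_citizen=True, is_registered=False))  # False
```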
Part the Second: A Brief Interlude on the Technological Singularity
Since the beginning of computational technology, scientists have raised the possibility that machines may eventually surpass humans at all "intelligence"-related functions, quickly outstripping all of humanity's prowess and (in many cases) taking control of the world.
Frankly, we have no way to predict what will happen once a piece of software is created that rivals the general intelligence of a human being. As that software could then write software more intelligent than itself, the possibilities are unfathomable. This inability to see anything beyond that point is the reason the phenomenon is called the "technological singularity".
One of the things that makes the singularity so scary is that an intelligent machine may or may not have goals that match those of humanity. It may have a written-in goal that it takes to an extreme (converting the world into a paperclip factory is my favorite example), or it may take as a goal the propagation of artificial intelligences at the expense of human survival.
Part the Third: Paranoia
This part is primarily putting the other two parts together: if "stateware" is a thing, and if software can be built that eventually outgrows the control of its designers, then it seems clear that states and economies are destined to arrive at singularity status before computer software. Economies are already difficult to understand, and have no stated goals, so an economic singularity is almost a given.
States, however, are more interesting. The goals of the United States Constitution, arguably the operating system for US stateware, are "to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity". The Constitution was purposefully vague about the precedence and exact definitions of these goals.
At the time, of course, the hardware of the State was not fast enough for anyone to worry that any one of these goals would be given outsized importance, or that an exclusive definition of the wording would be settled upon. That is no longer the case. As the network of people inside the US has grown in both size and complexity, our capacity to produce more code (in the form of legislation, executive orders, and legal rulings) has exploded, creating a body of social software of unprecedented size and with unpredictable ramifications.
The active coders here, the three branches of the government, are hobbled by design; the Founders didn't want any one of them to control the coding process too much. This "un-agile" government lacks the speed to revise significant amounts of this code (it can only create more), and the polarization of American politics is slowing the machinery of government even further.
Beyond that, there is no guarantee that any elected official would want to revise US stateware. As politicians assume more power, it appears that the stateware co-opts their individual plans; those with reformist ideas are locked out of power, and those who make it through seem to change their minds.
Lastly, it doesn't seem like any small group of humans (even Congress is relatively small) can muster the cognitive power to steer this now-mammoth ship of state. The relations between organizations, lobbies, foreign powers, celebrities, systems, and economies are too complex. If the social singularity isn't here yet, it is howling at the door.
This all seems to drive to one point: we need AI assistance to manage the United States Government. I'm not suggesting we put things on auto-pilot, but we need machines that can make the connections we will inevitably fail to see. I don't believe the software exists yet, and this is a terrible realization to make.
Monday, March 14, 2011
The Need for a New Social Science
Memetics, Marketing, and Sociology can only get us so far.
I'm going to be very, very careful in this post; I don't know how to present this in a way that doesn't make me sound a little eccentric. I've been thinking a lot lately about memetics, the study of the transmission and propagation of ideas. These ideas have been compared alternately to genes and to biological viruses, spreading about hither and yon. There's been a bit of work done on what ideas stick and what ideas don't (consensus: various values matter, and truth is only one of them), but there are still some huge, fat, gaping holes in the current state of the study of the propagation of ideas.
Gaping Hole #1: Knowing What We Can Know
First, and this is the scariest, we just have a very limited ability to view memes. We can see the spread of macro images on the internet, and identify that Ceiling Cat is probably sticking around for a while, whereas Rebecca Black's "Friday" is probably not. But we don't really know what to do with this information; the shape of the meme continues to elude us.
We will almost certainly never be able to predict what data will spread, and we will almost certainly never fully get a bead on what makes data "spreadable". We can get a general idea, perhaps, but it won't be an equation. Psychohistory is an idea in Asimov books, not a thing that will actually exist. (Oh, yeah, link. Sorry.)
This gaping hole exists largely because memetic endeavors have been misguided from the start, strapped as they are to a biological framework. Memes have no chromosomes, they cannot be put under a microscope, and they cannot be "sequenced".
Gaping Hole #2: Ideas and Behaviors
Second, there's not much in the literature by way of determining what, if any, is the difference between ideas that spread and the behaviors they leave behind. A piece of information that spreads might just annoy me by leaving "Never Gonna Give You Up" stuck in my head, or it might convince me to vote for Ralph Nader. There is a big difference between a meme and its behavioral payload.
Gaping Hole #3: The Myth of Measurement
Third, memetics tends toward the very abstract, and generally fails to measure anything at all, instead engaging in lengthy thought experiments and exegeses on which theoretical construct is better for the task, in essence applying none of them empirically. Here's an example article, and it's ridiculously long.
The Solution
The solution is to conduct experiments on actual cultural transmission. Richard Dawkins, the founder of memetics, once noted that memetics had not yet found its Crick and Watson; it hadn't even found its Mendel. I think he was wrong. Plenty of people have come before, studying memetics before it was called memetics. Most important, to my mind, is the sociolinguist William Labov.
Labov tackled issues of how language change spreads, what factors make someone likely to adapt their dialect to another's, and so forth. One of his earliest and most famous studies looked at employees in three different department stores in New York City. He tracked one meme: the tendency to "correct" the New York City habit of "dropping" the r-sound in certain words (his test phrase was "fourth floor"). The meme was more likely to spread along socioeconomic lines, meaning that those in stores with higher-priced goods tended to include the r-sounds in "fourth floor".
I know this isn't really much, as far as studies go, but it was a start. Labov discovered a number of patterns of linguistic change, and scores of sociolinguists after him have followed suit. Sociolinguistics may be a little tame compared to full-on memetics, as language traits are hardly as world-changing as religious and political beliefs, but it is a sufficient, if terribly overlooked start. Not much different from Mendel's plants, if you think about it.
Monday, January 10, 2011
Watch Your Hed
Headlines and ledes grab attention, and are often lies.
This is, of course, a follow-up to the "Update" segment of my previous post, in which I note that Fox News changed a headline from an inflammatory one to a less inflammatory one. Today's post is about why that sort of behavior is too little, too late.
A couple of posts ago, I talked about the "Information Problem", or how the surplus of content affects the way humanity makes decisions. I divided print and web material into three categories: content (less relevant, truth value unassignable), information (more relevant, but pre-interpreted), and data (most relevant, but difficult to understand without interpretation). News reporting should fit squarely into the "information" category; it should be pre-interpreted data reported from primary sources, and it should be unimpeachably true.
Now, that isn't always the case, and there are ways to report things that make it appear as though the universe, not the medium, has a bias for or against a certain position. And I've written about that too. That needs to be taken care of, but in the end, each news consumer is going to have to strip the bias from the news themselves. What's more devious, though, is the headline.
In a content economy, attention is currency. You can never make quick money by printing just the facts, reliably, and within a reasonable timeframe, even though that might be the most sustainable position. You're generally going to want to print news biased toward an audience, report on events that interest that audience, ignore things that don't interest the audience, and try to scoop everyone else all the time, even if you have unreliable or incomplete facts.
Even if you follow that formula, though, you still need to advertise. And the only advertisement that works on a news aggregator is the headline (and the lede, on occasion). So you have to make it count. In the end, if you make a statement that's shocking, you're more likely to get hits. It has been like this from the beginning of printed news. What's different in the aggregated news / 24-hour cable news ticker world is that far more headlines are read than articles, streaming at us as they do.
Unfortunately, shocking headlines are often a little bit misleading (or more than a little bit). This wasn't so much a problem before the Information Age, but now, each headline could easily be perceived as a tiny, unimpeachable fact. Which, of course, is inaccurate, and leads to misconstructed worldviews. Which messes with systems based on preferences and beliefs.
So, save the economy just a little bit, and try not to lie in your headline, even if you fix it in the article, or retract it later.