I’m a library guy; have been all my life. Best part about the library? The Dewey Decimal System. Now, I know there are those of you out there saying “whoa, slow down there, slugger – let’s talk about something a little less exciting than good old Dewey Digits or I might just explode!”
So let’s talk about something only mildly less exciting – METADATA!
You don’t find either of those things exciting? Ok, fine, I admit the vast majority of people (including my wife) do not.
Even though you may not find either topic exciting, there is no denying that information classification and management are highly useful these days and a good number of folks would go mental without them.
Ever since the emergence of machine language (which I would argue predates written language, thanks to mnemonic devices in oral cultures – see embed below), we have striven to make the language machines use more and more like the language we use. I would argue that what we are doing these days is finally meeting the machines in the middle. We are altering our written language patterns (using certain technologies) to more closely emulate lessons learned from machine language, including the ability to add metadata to our communications – data about the data we’re transmitting, like categories.
Programmers have been doing this for years, but now the general population has a way to do it too. With the Twitter hashtag, we have a machine-language habit that has worked its way into the everyday speech of non-machines and non-coders – one that for many people has become instrumental in organizing and classifying an increasingly unmanageable flow of information and communication.
Knowing that, I think one of the most functional (though less often discussed) UI features of Twitter is the # (hashtag). Hashtags are like machine language for humans. Machine language naturally wants to be all classification-y when organizing information (hence things like the Dewey Decimal System, binary, object-oriented programming, etc.), and that’s all well and good for machines.
But we’re people and we like to get our classification on in a more literal sense. By that I mean having to do with literature. By that I mean via words that humans understand.
Humans like to read words they can understand and classify things into categories that make legible sense (that should be clear by now – I’ve said it a few times). Case in point: the literal hashtag on Twitter. #caseinpoint
There is so much information coming at us from so many directions today, it makes sense that we would need to develop “smart bombs” of information.
A smart bomb isn’t just fired; it keeps track of things like its current location relative to its destination.
Our information exchange processes have evolved such that not only do libraries carry books tagged with stickers that classify their subject matter for easier retrieval – all our communications need to be tagged that way, from the largest to the smallest.
Again with the #caseinpoint of Twitter.
I teach Twitter classes to folks who don’t use Twitter, and one of the most successful metaphors for understanding hashtags is to see that symbol (#) and let your brain think “file this under” right away upon seeing it.
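The “file this under” mental model can be sketched in a few lines of Python. This is a minimal illustration, not Twitter’s actual implementation – the sample tweets are made up, and the hashtag pattern here is a simplification of the real rules.

```python
import re
from collections import defaultdict

# Made-up sample tweets, purely for illustration.
tweets = [
    "Learning the Dewey Decimal System today #libraries #metadata",
    "Metadata is data about data #metadata",
    "No tags here, just noise",
]

def file_under(tweets):
    """'File this under': group tweets by the hashtags they carry."""
    index = defaultdict(list)
    for tweet in tweets:
        # Simplified hashtag pattern: '#' followed by word characters.
        for tag in re.findall(r"#(\w+)", tweet):
            index[tag.lower()].append(tweet)
    return dict(index)

index = file_under(tweets)
print(sorted(index))  # the categories the tweets were 'filed' under
```

Each hashtag acts as a folder label: the tweet is the document, and the # symbol tells the reader (human or machine) which drawer to file it in.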
That’s what a computer would do. That’s what machines and people who make them do. Now that’s what we need to do.
What surprises me is that this is not what Facebook does. LinkedIn recognized early the value of information classification by integrating well with Twitter’s hashtags. Even Google+ seems to have been able to get this right.
When will Facebook understand the value of adding externally classifiable metadata to our posts?
In my opinion, they will have to if they want to survive the leap to the next level of evolved human communication – an odd combination of the thought patterns of oral cultures and “literate” ones.
What do you think?