Archive for May, 2009

Infants, Rodents and Blind People

Posted in Digital Innovation on May 20th, 2009 by Elijah Meeks – Comments Off

Jurgen Appelo just bodychecked Web 2.0.

I should just repost the whole thing, but I’ll pick and choose the very best parts.  Everyone should simply go over there and read it.

Have you noticed that, with the exception of infants, rodents and blind people, the whole world has picked up the hobby of digital photography?

As an aside, Hajra is on DeviantArt, and a non-scientific but long survey backs up Jurgen’s claims.

It’s very different for software developers. Programmers are like novelists, movie directors, and traditional painters. The barrier to entry is high because the work is hard, complicated, and requires craftsmanship. Nobody among my friends is silly enough to “give programming a try”. It’s much easier for them to take pictures, write poems, or to empty a trash bin on the floor and call it modern art.

The only issue I’d take with this excellent, acerbic posting is that I “give programming a try” all the time.  I did it back with XConq (C++ and whatever loony scripting language it used), Bughunter (PHP), the Song Digital Gazetteer (SQL) and, lately, with Flash (Here’s yet another example that I won’t even explain).  I think that barriers to entry have always defined the peer-produced media process, and while I agree that universal access to digital cameras reduces the novelty of photographs, it also allows for some pretty low-cost profile modeling.  With the growth of beautiful vector graphics packages–And by that I mean Inkscape–which will finally provide low-barrier Turing-complete functionality to the masses, as well as more and more intuitive (Though resource-hungry*) programming languages, I think software development will steadily fall into the Poetry and Modern Art realm, and that should be a generally good thing.  Except for those guys who still know Machine Language and insist we never needed to develop anything after COBOL.  For them, it’ll suck, and they’ll probably take up residence next to the candlemakers and the film developers, complaining about how today’s code doesn’t have that hand-crafted feel to it.

*But who cares?  I’ve got processes to burn!!

Search Visualization

Posted in Digital Innovation on May 19th, 2009 by Elijah Meeks – Comments Off

google-wheel

How interesting…  Infosthetics just pointed out Google’s new toy, an entry in the realm of visualized search.  I just stumbled on FaceSaerch, which promises a different kind of visual search experience.  I wonder how long it will be before we have a search engine that doesn’t require any text whatsoever…

Wherefore art thou, Wikiatlas?

Posted in Academia, Digital Innovation on May 18th, 2009 by Elijah Meeks – 2 Comments

It’s always of concern when a journal article opens with a vague statement.  So when What does Google Earth mean for the Social Sciences? opens with a claim that cave paintings can be equated to maps, one has to wonder about the explanatory power of the word “map”.  Is it a representation of geography, of ecology, of political boundary?  I’ve never seen a cave painting that could be classified as a map, unless we broaden the definition, both symbolically and analytically, so far that we might as well include any image of anything.  I realize it’s a throwaway intro remark, but the lack of specificity is worrisome.  The article points out that hierarchically invested geographies have theoretical consistency in urban, political and environmental fields, and then goes on to describe Google Earth.  But when it comes to Google Earth, all I can focus on is the simplicity of the tool itself, which seems to enforce a methodological simplicity on its users.  Sure, you can build KMLs till the cows come home, but the “lack of analytical functionality in Google Earth” means one can do little more than display data.

The difference between discrete objects and continuous fields underlies Google Earth’s orientation toward an audience interested in a map that is a static spatial reference point for various phenomena to be placed upon.  The map exists, as a classic map, underneath the data, and is considered an external, agreed-upon norm for the purpose of discussing the really interesting subjects, which exist on top of the map.  Whereas in GIS the raster or continuous field data is as easy to manipulate and analyze as the point and polygon data, in Google Earth it is the points and polygons where any manipulation will occur.  Doesn’t sound so bad, really, until we start to think beyond the surface of the Earth being simply topographical.  Land use change, environmental change and all those transportation and urban networks represented by Google Earth have an effect on what is presented as static data.  Deforestation areas, available as polygons, or sites of past flood levels, available as points, in reality overlap one another and the raster data, either periodically or permanently, and Google Earth reinforces a worldview wherein this is not the case.  You’re using a static tool consisting of overlays to represent an acknowledged dynamic socio-political/socio-environmental object whose various (arbitrarily mediated) fields are in constant interaction and do not sit neatly upon one another, without mixing (Are there miscegenation laws against the interaction of different data from different disciplines, and did nobody tell me?).
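For the curious, the static-overlay model being critiqued here is visible in KML itself: a deforestation area is just a named polygon draped on top of the basemap, with nothing in the markup to express how it interacts with terrain, raster layers or anything else.  Here’s a minimal sketch using only Python’s standard library, with coordinates and names invented for illustration:

```python
# Build a bare-bones KML Placemark: one named polygon, no analysis,
# just geometry to be draped over Google Earth's static basemap.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"


def polygon_placemark(name, lonlat_ring):
    """Return a KML document containing a single polygon overlay."""
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    document = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    pm = ET.SubElement(document, f"{{{KML_NS}}}Placemark")
    ET.SubElement(pm, f"{{{KML_NS}}}name").text = name
    poly = ET.SubElement(pm, f"{{{KML_NS}}}Polygon")
    outer = ET.SubElement(poly, f"{{{KML_NS}}}outerBoundaryIs")
    ring = ET.SubElement(outer, f"{{{KML_NS}}}LinearRing")
    coords = ET.SubElement(ring, f"{{{KML_NS}}}coordinates")
    # KML wants lon,lat,alt triples, and the ring must close on itself.
    closed = lonlat_ring + [lonlat_ring[0]]
    coords.text = " ".join(f"{lon},{lat},0" for lon, lat in closed)
    return ET.tostring(kml, encoding="unicode")


doc = polygon_placemark("Hypothetical deforestation area",
                        [(-61.0, -10.0), (-60.5, -10.0),
                         (-60.5, -10.5), (-61.0, -10.5)])
```

Notice what isn’t there: no time dimension, no interaction with the raster surface underneath, no relationship to any other layer.  The polygon just sits on top of the map, which is exactly the worldview described above.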

This doesn’t mean Google Earth is trash, of course.  You can still examine the horizontal and vertical spatial relationships of point and polygon data.  And you can see how census laws get circumvented by certain coastal California cities.  But, ultimately, and as with most consumer products, its assumptions reinforce an ontology that may hinder knowledge generation by unnecessarily limiting the scope of questions asked or relationships examined.  Maybe that’s why the current crop of spatially-informed peer projects is so, well, bad.  Wikimapia seems little more than an application waiting for a meme bomb; there’s still no Wikimaps or Wikiatlas (and why would you even propose a Wikiteer when Wikipedia already contains its wealth of geo-referenced information in gazetteer format?).  I’m sure I’ve mentioned that Hypercities is terrible, but it bears repeating.  OpenStreetMap seems to have the community support, but rather pedestrian (if you’ll pardon the pun) aims.  I realize not everyone has the time or inclination to pick up a copy of ArcGIS and start creating hillshades, but isn’t there a middle ground in the manipulation and collection of spatial data between the superficial and the highly technical?

What we need is a serious, peer-collaborative, spatially-oriented agent-based model.  It’s not as crazy as you think.  First off, people are familiar with modeling, they just don’t realize it.  People just call them games, especially the sandbox games.  Given a space to play in and describe the interactive, dynamic processes that define our existence, I think we’d do wonders for improving general human knowledge (Which can’t be bad).

It should start small, with established principles, by creating a commons-based model of the Earth’s environmental systems.  Scientists have been building these for years; create a scaled-down (But not dumbed-down), visually rich version that anyone can play with (And, more importantly, contribute to, which would require a MediaWiki-style scripting system for designing the logic of your particular actor or process) and then slowly integrate geologic processes (Imagine a world where every child has access to an animated explanation of the formation of the Andes), economic processes, migration, political change, you name it.  Since I already described Web 3.0 as the Hybrid World, I’m going to call the commons-based peer collaborative agent-based modeling Web 4.0, or maybe Web4 or Web4D or Axxiles.  No, if it has to be a Greek name, it should be Antikythera.  The hardest part is getting the name right; after that everything else should simply fall into place.
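To make the agent-based idea concrete, here is a toy sketch in Python: a grid of forest cells, a handful of hypothetical “logger” agents that clear cells as they wander, and probabilistic regrowth each tick.  Every parameter here is invented for illustration; in the commons-based version described above, contributors would script the agents themselves.

```python
import random


def run_model(size=20, n_agents=5, steps=50, regrow_p=0.02, seed=42):
    """Toy spatial agent-based model: loggers clear forest, forest regrows."""
    rng = random.Random(seed)
    forest = [[True] * size for _ in range(size)]  # True = forested cell
    agents = [(rng.randrange(size), rng.randrange(size))
              for _ in range(n_agents)]
    for _ in range(steps):
        # Each agent clears the cell it stands on, then wanders one step
        # (the grid wraps around at the edges).
        moved = []
        for x, y in agents:
            forest[y][x] = False
            dx, dy = rng.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
            moved.append(((x + dx) % size, (y + dy) % size))
        agents = moved
        # Cleared cells regrow with a small probability each tick.
        for y in range(size):
            for x in range(size):
                if not forest[y][x] and rng.random() < regrow_p:
                    forest[y][x] = True
    return sum(cell for row in forest for cell in row)  # forested cells left


remaining = run_model()
```

The point isn’t the ecology, which is laughably crude here; it’s that even thirty lines of scripting produce a dynamic landscape rather than a static overlay, and the agent logic is exactly the kind of thing a wiki-style community could own and refine.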

Just to update, because you should never write anything about spatial matters without first looking at Very Spatial and Vector One, take a look at this procedurally-generated city and tell me we can’t build a beautiful model of world systems and processes with a low barrier to entry.

The Hybrid World

Posted in Academia, Digital Innovation on May 16th, 2009 by Elijah Meeks – Comments Off

Michelle Obama just gave an excellent speech this afternoon, calling upon the latest crop of graduates to remember that they were lucky, and it is the responsibility of the fortunate to look out for the unfortunate.  So, naturally, like all of you, I thought it was apropos to talk about WordPress and “people” in Guy Fawkes masks.  Oh yes, it’s that tangled a web of flimsy connections, so let’s see if I can unravel it.

So it may seem strange that in listening to the speech I thought about folks looking like goofy early 17th century terrorists from a terribly dull graphic novel, but let’s review a recent statement from the WordPress Dev Blog:

WordPress is an open source project, successful because of the community that both develops and uses it. At the same time, some people find it difficult to become involved in the project, and are unsure of how to engage with the core team and community at large. The channels listed above can be overwhelming to someone just joining the community, and/or frustrating to longtime community members who feel like they used to have more influence. We need to fix this. The WordPress project needs to be welcoming, easy to navigate as a contributor, and provide useful feedback to help grow the expertise of its community members.

And compare that to the First Lady’s statements, putatively directed toward the non-digital world (Putative since there is, I would argue, a pure digital world but I doubt you can find a place on Earth anymore that doesn’t have some connectedness to the connected world) but directed toward that open-source, peer-collaborated project known as “life”:

So, whenever you get ready to give up, think about all of these people and remember that you are blessed. Remember that you are blessed. Remember that in exchange for those blessings, you must give something back. You must reach back and pull someone up. You must bend down and let someone else stand on your shoulders so that they can see a brighter future.

As advocate and activist Marian Wright Edelman says, “Service is the rent we pay for living…it is the true measure, the only measure of our success.”

Maybe the 100º heat addled my brain, but it reminds me of a particular identity crisis that a certain cat-and-anime obsessed web-site went through a little while back.  Reports of the Anonymous anti-Scientology protests stressed their real life nature, in contradiction to the pure digital social power that members of sites such as 4chan and Something Awful represent.  It’s an interesting orthodoxy that develops in these communities–whether it’s in the creation of open source software, peer-production of knowledge, or photoshopped sharks about to ironically eat various persons–and it’s the same orthodoxy of which Michelle Obama was reminding the students of UC Merced:  You didn’t just get your degree because of your hard work, you got it because you were lucky enough to get the opportunity and because the system wasn’t closing its doors to you.  WordPress found itself with a bunch of stuck and locked doors, Wikipedia finds itself with similar problems, 4chan found itself with a whole new wing of the house being built, complete with all new doors that led to places they weren’t sure they wanted to go.

So what does it mean?  I think this is Web 3.0–the Hybrid World–where the pure digital begins to do more than reproduce itself physio-socially in the form of T-Shirts but instead rebounds upon sociological and historical knowledge, creating the Community Activist who worries not about access for the poor to a college education but access for the intimidated to the dev channel of WordPress, or for newbie Wikipedia contributors to be treated as well as 10,000-edit veterans.  And the reverse, already more prevalent: the savvy use of the digital not as a gimmick or addendum but as a true asymmetrical ruleshift.  While everyone’s desperately anticipating RDF-whatever and the end of silos (like the web is really siloed to any meaningful degree–isn’t that just a bit too much hyperbole–I mean, I’m sure you could claim the data doesn’t have any way of interacting, but you’d have to posit the nonexistence of people, in which case I’m glad the World Wide Internets are not, for one, ready to welcome our AI overlords) or really-we-swear-the-Semantic-Web-is-almost-here optimism, there’s a growing permeability between the concept of on-line community and traditional community, and if you think it’s old news then I’d say you’re mistaking the extrinsic for the intrinsic.  There are a lot of orthodoxies on the digital side that will need to be thrown down before it’s complete (Ironically typified by a guy who injected himself into the real but insists that there’s a “firewall in between” real life and internet life).

Two states, grown in isolation, merge slowly and with pain, and with orthodoxy fighting from both sides.  But unification (Re-unification, really, it’s just that our life-in-letters was temporarily replaced with our “Internet Life” and we all pretended that it was a real distinction and not compartmentalization grown ossified) or synthesis, if you prefer the dialectic, is where this is all going.  Cross-pollination continues and I expect to see both creepy and beautiful, fertile hybrids as a result–as well as some useful, but stiff-minded mules.

Digital History at UC Merced

Posted in Academia, Digital Innovation, Events on May 15th, 2009 by Elijah Meeks – Comments Off

Tomorrow morning I’ll be giving a presentation on digital history as part of the big commencement celebration coinciding with the visit by first lady Michelle Obama. Along with giving me an opportunity to talk about the importance of using digital tools in the study of the humanities and integrating environmental systems into historical systems, it gives me the chance to show off some majestic cattle.

UC Merced students often complain about the cows, which is a shame, given the respect our ancestors gave to the animals.


The Digital Bantustan – Connectivity Qua Balkanization

Posted in Academia, Digital Innovation, Eschatology on May 14th, 2009 by Elijah Meeks – Comments Off

Just finished reading Immanuel Wallerstein’s article in the excellent Rethinking Environmental History: World System History and Global Environmental Change.  Wallerstein is the founder of World Systems Theory, which focuses on the processual links between societies as an explanation of historical events, though the theory has grown far beyond Wallerstein’s capitalism-oriented, modernity-constrained initial description.  Without getting too arcane, there’s a movement in the academy toward integrating environmental systems into the study of history, and to do so you need to systematize history to make it feasible.  But Wallerstein’s essay–the final one in the book–doesn’t focus on understanding extractivist culture or divining proxies for deforestation, but rather on the collapse of the modern world.

Pretty heady stuff.  According to Wallerstein, the systemic failure of the current system is already a given, and it’s only a question of whether the enlightened aristocracy of Davos ends up controlling the next great system or the Wikipedia-like, distributed (and chaotic) peers typified by the World Social Forum.  For those of you, like me, who are unfamiliar with the WSF, their meetings sound like the equivalent of a real-world Wikipedia:

Other people that were not coming from Latin America were unconsciously excluded from the forum, as there were no interpreters at the forum at all, and it was very difficult for people who were coming from outside Latin America to follow speeches or activities that were taking place in the forum from day one of the forum. It was made clear that it was not the responsibility of the organizers to organize interpreters for people, it was people’s responsibility to organize their own interpreters and it was very difficult for us to get that as there was no prior arrangements made. This was a pity. In our struggles in South Africa we have many different languages but our movements always take responsibility for organizing translation – especially for visitors. Of course the NGOs in South Africa want to do everything in English but not the movements.

Of note is the growing importance of academics and Non-Governmental Organizations in the ranks of the WSF, which runs afoul of an anti-expert bias like that typically associated with Wikipedia (And philosophical Daoism, but that’s way off topic).  Contrast this with the expert-driven and much swankier World Economic Forum and you start to see an almost uncanny resemblance between the state of these two groups and the state of the university and the growing connected-world knowledge bases.  The World Economic Forum is about to be underway, and it’ll even include celebrity Twitter interviews as well as a host of externally accredited experts, thereby limiting the number of participants to a modest two and a half thousand, versus the tens of thousands who show up for the WSF.

Wallerstein posits these two organizations as emblematic of the two paths toward the “Next System” (Some kind of post-capitalist/post-Marxist future means of economic organization) and wonders, as I did in my last post, how the current instability will play out.  It is interesting to note that there is some kind of dichotomous self-organization occurring across various realms, with a peer-collaboration expression on the one side (WSF is criticized, like Wikipedia, as being Communist at its core) and an expert-oriented version acting like a Zoroastrian necessary-opposite.  Strangely enough, these various evil doppelgangers seem to be unaware of their placement within a putative Pantheon of Global Social Conflict:

Wikipedia, Open Source Software, World Social Forum

vs.

Academia, Proprietary Software, World Economic Forum

Since I can’t think of a simple dichotomous relationship to posit Local Community / Global Multinational without expanding on whether I’m talking about services, products or agriculture, I’ll leave it out.  I realize there’s no strict alignment between these forces, and that you have academics supporting Wikipedia and Apache being used by major corporations, but there’s already a growing sense that the local organic farmer (or bookshop owner) should be using Linux and supporting Wikipedia and taking part in the WSF.  Not sure how the social networking sites figure into this; they don’t seem to skew or splinter along ideological lines, but that could just be a sign of my own unfamiliarity with (and extreme disdain for) them.  What’s extremely strange, at least to me, is that there is a definite dualistic nature to our modern world ideological system, and yet it seems that we’ve fractured into more ideological bantustans than ever before–due in part to the remarkable ability of the Web to break down communication and organization costs and therefore allow for Mao’s thousand-flower continuum.  The bantustans are natural conflict-absorbers, because they make disagreements, like modern art, seem so subjective.  To paraphrase a quote that may or may not have come from Kissinger, you can express vicious disagreement precisely because of the very low stakes.  But this masks a very real, distinct dichotomy of ideology that permeates global culture and which seems to be expressing itself in every new endeavor.

Gaming the Systems

Posted in Academia, Digital Innovation, Eschatology on May 11th, 2009 by Elijah Meeks – 5 Comments

Another interesting point raised at Akahele (The cautious site with the beautiful name) highlights the problem of crowds: the concern for peer-produced data in an environment where some of those peers want to insert malicious, propagandistic or otherwise known-flawed results into a system.  But the problem isn’t limited to peer-produced knowledge, and it seems that Elsevier, the publisher everyone loves to hate, is just as happy to game the system in an entirely expert-driven manner.  The cynical part of me has always argued that the criticisms of Wikipedia result from a naive vision of knowledge production in traditional spheres, and this seems to back it up, but I think it’s more problematic than that.  We seem to be moving into a post-post-modern period where it’s not just the critics who think that meaning and truth are malleable but also the consumers and creators of knowledge.  It’s like 1984 but instead of just Big Brother manipulating the facts, imagine if Winston Smith was also manipulating the facts in his own personal life and on common peer resources.  The journal industry in academia has been oft-criticized as of late, concurrent with the criticism of academia itself, and I think if we’re not careful we’re going to see the Elseviers and Wikipedias of the world meet in the middle, where only the most mundane of facts are agreed upon and, like modern news agencies, anyone will be able to point to their favored experts (or non-expert encyclopedias) to support whatever malicious intention they have.  Doubly worrisome is the effect this must be having on the traditional citizen, who according to Enlightenment theory requires a proper understanding of the world (known as education) to make informed choices in directing the course of their political system.

Nowadays, with ready access to expert and non-expert knowledge that supports every side of every debate, we’re faced with an extremely public social surgery, and all indications are that amputation or leeching will come back in vogue.  Whatever your stance on creationism or globalization or climate change or Islam or Tibetan nationalism, you can now find experts, news outlets, journal articles, &c &c to support you.  This does not seem to be a sustainable system.  But what’s the solution?  Will the backlash be a clamoring for a restoration of aristocratic control or a pulsing anarchy of rival ideologies?  History tends to describe the latter as too unstable to last for long.  I don’t know.

Poor Gimlet

Posted in Art, Bughunter, Eschatology on May 9th, 2009 by Elijah Meeks – 2 Comments

Recon by fire


For those of you curious as to how the game is played, it was summed up like this:

ALL OF YOUR CHARACTERS WILL BE SLAUGHTERED LIKE CATTLE.  Don’t expect anyone you roll up in the first week to remain alive for more than twenty four hours.  Remember Wierzbowski in Aliens?  Of course you don’t.  You’re Wierzbowski.  Or that guy who takes off his helmet on Omaha Beach in Saving Private Ryan.  Expect to have the lifespan of an extra in a John Woo film.  Most likely within two seconds of your first contact, you’ll be reduced to bloody chunks.  Learn to love it.  Don’t get too attached – after all, you don’t have to write letters to their mothers/queen.  Learn the ropes, and someday you’ll be able to do the same to the next batch of lambs led to the slaughter.

For those of you trying to figure out the theme of this blog…  Well, I’ve got no answers.

Sour Grapes or, I hope Social Computing in 2020 isn’t anything like this

Posted in Academia, Digital Innovation on May 6th, 2009 by Elijah Meeks – Comments Off

UC Santa Barbara’s Transliteracies Project just released the winners of their Social Computing in 2020 competition.  I’d put together an entry for this, based on the rise of animation as a medium for non-lingual knowledge formation and transmission, but apparently the folks at Santa Barbara have a rather constrained vision of social computing (read Facebook), as well as too much exposure to science fiction and not enough exposure to dystopian literature.  To wit, their winners consist of:

First Place – A hypothetical tool to manage the privacy settings of your on-life.
Now, this is the most technologically feasible of the winning entries, which isn’t saying much once you find out what took second place, but the idea that you can one day prevent your boss from seeing your drunken MySpace pics isn’t exactly groundbreaking, earth-shattering or radical.  It is, however, highly unlikely, given that the growth of social computing and the Web in general have eroded the concept of private life both from a technological perspective (It’s hard to hide who you are once you begin to take part in the connected world) and from a social perspective (More and more people think once incredibly private details are not such a big deal in the connected world).  People already manage the privacy settings in their on-life (Their on-line life, I’m coining that term right now, you all owe me $.05 every time you use it) by using multiple email accounts, signing up with monikers on their various social web sites, &c, &c.  If I were a judge, I’d give this entry a B for feasibility, a D for novelty and a C- for need.

Second Place – Recording Sensations for Fun and Profit!
Okay, so we’re going to assume that someone will create a “sensation suit” in the next ten to twenty years, one that will record “experience” and allow you to replay it.  If you’re thinking of a particularly crummy movie starring Ralph Fiennes, you’re not alone.  Let’s just imagine that such a technology is within our reach–wait, no, feasibility was one of the hallmarks of this competition, you can’t just assume that somebody will perform alchemical sorcery and create a magic SensoSuit.  How the heck do you expect this technology to work?  The entry includes erotic activity on its list of experiences, so apparently the suit can transmit sensations particular to men and women.  How’s that work?  Is it a unisex suit?  At least the sci-fi that’s dealt with this kind of thing acknowledges you’ll need to tap into the brain to do it, in which case even the Johnny Mnemonic possibility is still out of reach, but at least it’s hypothetically feasible.  Alright, alright, let’s say somebody makes these suits (dry-clean only, I hope)–then what?  Is it just a pressure point and pin-prick system (The Full Body Acupuncture Experience, now at Disney World!) and if so, is that really what an “experience” is?  I hate technological optimism–that’s why I think The Truth Machine is an amazing work of fascist fandom–but the amount of technical mastery necessary to create a dubiously structured version of experiential knowledge is, in my mind, a bad mix.  F for feasibility, C for novelty (Again, I feel the need to point out crappy movies and literature from the depths of science fiction) and, I don’t know, a B- for need, just because I acknowledge that porn still rules the Internet (A sad shame and a case for the ultimate failure of a major tenet of the Enlightenment) and that doesn’t show any signs of letting up.

Third Place – RFID in Your Colon
Okay, so one of the other reasons I hate technological optimism is because it completely ignores the misuse of technology.  I call it the Nazi Substitution Hypothetical.  Nazis plus microwave ovens?  Not much difference.  Nazis plus nuclear weapons?  Big mistake.  If the Nazis had your miracle technology, would they be geometrically worse or exponentially worse?  Again I have to point to The Truth Machine, a novel based on the premise of the invention of a machine that could unerringly determine if the subject was lying.  Oh, that’s great if we live in a happy world of unicorns, but think of how much easier it’d be to run a fascist state if you had that.  So this entry proposes putting RFID chips in EVERY ORGAN IN YOUR BODY so that some presumably benevolent controlling power could track the function of your body.  The write-up on the Social Computing site doesn’t include a specific social computing aspect, and seems to assume a classic governmental agency approach–and really, why would all your Twitter fans want to know about your gall bladder efficiency?  And who’s going to willingly subject themselves to three hundred and twenty-eight RFID insertions?  We bristle at the possibility of an authoritarian state looking into our backyard, much less our prostate.  D for feasibility (I have to average the feasibility of creating the technology with the feasibility of a free society subjecting themselves to it), F/A for need (F if you think The State will ever turn bad, A if you live in a hippie utopia where your spleen is everybody’s spleen) and a B- for novelty (Tracking the physiological function of the polity is another staple of sci-fi–heck, even Aliens did it).

Knowing my luck, this will all come to pass, so at least my boss won’t know the function of my kidneys while I record my ride on the most dangerous rollercoaster in the world.

Quasi-Amateurs in the Digital Humanities

Posted in Academia, Digital Innovation on May 4th, 2009 by Elijah Meeks – 1 Comment

The last time I was writing about quasi-amateurs in knowledge production, I primarily focused on the phenomenon of scientists writing history, particularly environmental history, because historians trained in the current university system are unfamiliar with environmental science and uncomfortable making sweeping claims about environmental systems, whereas scientists feel that philosophy and history are easy enough to jump into, based, I assume, on a perceived “Rigor Gap” between the sciences and the humanities.

But that’s not the only way in which academics change course from their initial field of study to produce scholarship outside their discipline.  Laura Mandell wrote an excellent article about the growth of the Digital Humanities, wherein she highlights the attempts by scholars of literature and history to assimilate digital tools and theory into their scholarship.  Mandell echoes Mark Taylor’s call to dismantle the disciplines that define modern scholarship but with more pressing concern based on the currency of university education:

But these efforts to dismantle the disciplines may resemble a wrestling match between Thor and the Evil Demon more than it does any revolution, velvet or otherwise.  There will be no disciplines if there is no one to discipline.  It’s not simply a matter of Phoenix University or universities-without-walls or distance learning.  It’s really that so much learning is taking place online, without us.  Oh sure, kids in the U.S.: they’ll come to college, they’ll do their time, get their credentials: but their thinking will happen elsewhere.

A couple of years back I claimed that the university needs Wikipedia more than Wikipedia needs the university, and I think it’s time I amend that statement.  It’s really not the university–which is simply a construct arising out of the interaction between scholars and students, society and academy–or Wikipedia–which is much the same.  Rather, it is that the humanities need the digital more than the digital needs to be humanistic.  We already wander an Internet where humanities knowledge is readily available, produced and collated by non-scholars in more innovative ways than even well-funded scholarly digital humanities projects.  The Victoria Improvement Project, a user-created mod of Paradox Interactive’s colonial conquest game, has done a better job of creating a vibrant, historically accurate world than History Game Canada, which just won a large Digital Media and Learning Prize.  Another prize-winner was Hypercities, which is little more than a Google Maps mashup that seems to be inspired by the amazing work of David Rumsey.

It’s not that these projects should be the cutting edge of digital innovation (they needn’t be), but the current production of scholarly digital media is sadly lacking in real depth, a lack reflected in the concurrent monographs and journal articles written about it.  More and more, scholars are realizing that there is something to be gained from the development and use of digital tools, but their years developing expertise in literature, history or art have left them unable to write or understand code.  As such, though they may be experts in the colonial history of Kenya, they cannot translate their knowledge into a systems-theoretical expression suitable for presentation in digital media, and so we miss out on truly innovative humanities digital media.  The work done in the Digital Humanities is done by quasi-amateurs, whose vision needs to be translated by technical experts unschooled in historical, artistic or literary dynamics.  It is as if we had brilliant but illiterate scholars of philosophy whose arguments were being written down by scribes unschooled in the philosophical arguments they record.  Heidegger wrote of the damage done by this process when he bemoaned the translation of Greek philosophy by translators not as brilliant as the original writers.  Plotinus argued that philosophers shouldn’t write books, and there are still many humanities scholars who see digital media in the same light, but if we’re going to accept that digital media is the new book, then that requires academics to be software literate.  Lev Manovich makes the same claim in Software Takes Command, pointing out rightly that you don’t need to produce all the code, or even understand it all, but you must understand how software functions if you want to create software.  As Daniel Cohen points out in The Promise of Digital History:

But those who would like to do advanced work in digital history will ultimately have to acquire significant technical skills, not only to execute complex digital projects successfully (or to guide those doing the design and programming in a technically literate way), but also to have a more far-reaching vision of what is possible for historians in this new medium.

This focus on software literacy has to begin in undergraduate humanities education and must be stressed in graduate school.  Already we claim that a quality humanities scholar must be able to write, hence the required writing courses in every university, and that they must have a grasp of foreign languages, hence the similar requirements in graduate programs; now it’s time to place software literacy on that list.  You cannot be an expert if you are illiterate in the subject in which you claim expertise.

Updated to improve my street cred, apparently Nick Montfort at MIT is willing to acknowledge the reality on the ground:

I certainly don’t want to ban anyone from the field for not knowing about computing systems, but I also think it would be a disservice to give out game studies or digital media degrees at this point and not have this sort of essential technical background be part of the curriculum.

Wouldn’t you find it strange if a literature professor said that they didn’t “want to ban anyone from the field for not knowing how to read and write”?