Archive for June, 2004

THE BALLAD OF BILBO BAGGINS

Thursday, June 24th, 2004

This is the awesomest video in the history of mankind. Humanity evolved just so that this video could be made.

It’s Leonard Nimoy singing a song about hobbits.

Enjoy!

CANADIAN INTERNET POLICY (OR LACK THEREOF)

Monday, June 21st, 2004

Here’s an article in the Toronto Star (via Crooked Timber) discussing the technology and intellectual property positions of the parties in the election.

In the only interesting policy pronouncement to be found, the Greens are aggressively chasing the ‘beardy Unix administrator’ demographic with their promise to “only [acquire] computer systems built upon open standards and protocols.” Yeah right. I like Linux as much as the next geek, but in no way is it ready for the desktops of civil servants.

OVER THE EDGE

Sunday, June 20th, 2004

I like Edge.org because I disagree with almost everything they publish. This gives me a warm, satisfied feeling of intellectual superiority without having to do any difficult work.

Edge participants are generally very enthusiastic about scientific culture. This is not a bad thing, of course. But sometimes their enthusiasm causes their speculation about the future of technology to outpace even the goofiest utopian sci-fi pulp.

Case in point: the sunny optimism about science’s potential for improving our lives, without any real understanding of people’s actual lives, makes this article by Danny Hillis sound like something out of that 1950s Popular Science “Life in the Year 2000!” article.

I can’t find a scan online, but it famously predicted that the housewife of the future would clean the house by spraying it down with a garden hose, because everything would be waterproof. Not only did this not come about, but the naivete about which parts of daily life are malleable and which are not is charmingly quaint in retrospect. In fact, culture has been by far the largest area of change in the last 50 years. A dynamic and changing culture is the medium in which technologies either flourish or perish.

The inscrutability of social factors when making predictions for the future is not a problem exclusive to scientists and engineers, but Hillis provides a pretty amusing example with his ideal tutor automaton, Aristotle.

“First, imagine that this tutor program can get to know you over a long period of time. Like a good teacher, it knows what you already understand and what you are ready to learn. It also knows what types of explanations are most meaningful to you. It knows your learning style: whether you prefer pictures or stories, examples or abstractions.”

Ok I’m imagining it… no wait, I was thinking about pie. Ok I think I’ve got it now.

“Imagine that this tutor has access to a database containing all the world’s knowledge.”

No. I cannot imagine an actual database containing all the world’s knowledge. Who is going to type it all in, along with its various stories and illustrations? How many people will it take to maintain it? How long will a search take? Oh right, I won’t need to search because this database will know what I’m ready to learn.

“I will call this database the knowledge web, to distinguish it from the database of linked documents that is the World Wide Web.”

I will call this database the impossible web, to distinguish it from the actual web and anything that could ever possibly exist anywhere.

“Given such a database, it is well within the range of current technology to write a program that acts as a tutor by selecting and presenting the appropriate explanations from the database.”

There are two things wrong with that sentence. The first is the “given such a database” part: I suppose if God came down and handed us a database containing a complete list of facts, it wouldn’t be impossible to extract information from it in a pedagogically useful form, but no such database is on offer. The other problem is that a tutor of the sort we were asked to imagine is not remotely foreseeable. Yes, computers can learn your preferences by crunching the statistics on past decisions and behavior (via neural networks or Bayesian algorithms, for example). This is a far cry from providing the kind of personalized instruction Hillis imagines.
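To be concrete about how unmagical that concession is, here’s a rough sketch of the sort of preference-crunching I mean. The session data is made up, and simple Laplace-smoothed counting stands in for any grander neural or Bayesian machinery:

    from collections import Counter

    # Hypothetical log of past tutoring sessions: (format offered, did the learner finish it?)
    history = [
        ("pictures", True), ("pictures", True), ("prose", False),
        ("pictures", False), ("prose", False), ("pictures", True),
    ]

    def preference_estimate(history):
        """Estimate P(learner finishes | format) for each format, with Laplace smoothing."""
        offered = Counter(fmt for fmt, _ in history)
        finished = Counter(fmt for fmt, done in history if done)
        return {fmt: (finished[fmt] + 1) / (offered[fmt] + 2) for fmt in offered}

    estimates = preference_estimate(history)
    print(estimates)                          # roughly {'pictures': 0.67, 'prose': 0.25}
    print(max(estimates, key=estimates.get))  # pictures

That’s the whole trick: frequency counts with a prior. Useful, but nowhere near a tutor that knows what you’re ready to learn.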

Next, Hillis takes us on a magical mystery tour of what problem the knowledge web might solve.

For example, imagine yourself in the position of an engineer who is designing a critical component and wants to learn something about fault-tolerant design. This is a fairly specialized topic, and most engineers are not familiar with it; a standard engineering education treats the topic superficially, if at all. Fault-tolerant design is an area normally left to specialists. Unless you happen to have taken a specialized course, you are faced with a few unsatisfactory alternatives. You can call in a specialist as a consultant, but if you don’t know much about the field it’s difficult to know what kind of specialist you need, or if the time and expense are worth the trouble. You could try reading a textbook on fault-tolerant design, but such a text would probably assume a knowledge you may have forgotten or may never have known. Besides, a textbook is likely to be out of date, so you will also have to find the relevant journals to read about recent developments. If you find them, they will almost certainly be written for specialists and will be difficult for you to read and understand. Given these unsatisfactory choices, you will probably just give up.

The first option seemed pretty good to me, actually: call a specialist. What’s the problem with that? If you’re not a specialist, you don’t know what sort of specialist to call! Catch-22! (Also: the system won’t be out-of-date like a textbook, yet will not require the professional knowledge and linguistic competence assumed by research publications. I guess even internal consistency is too much to ask.)

The normal human reaction to this sort of scenario is to ask the opinions of those who have had experience with the relevant issues. People love to give advice, and the currently-existing World-Wide Web (or Usenet for that matter) is built on the sort of free advice-giving exactly suited to resolving the dilemma that Hillis poses. Unfortunately, the actual web contains bad advice. The solution being suggested is that we should build a web that only gives good advice.

So let’s explore further what the web of good advice can do for us:

  • “Aristotle would show you a map of what you need to learn … by comparing what you know to what needs to be known to design fault-tolerant modules.”
  • “Aristotle knows what you know because it has worked with you for a long time.” (But what about before it’s worked with you for a long time?)
  • “Aristotle might take your word for what you know, but it is more likely to quiz you about some of the key concepts, just to make sure.” (Well then Aristotle can go fuck himself.)

And that’s only in the first of several paragraphs of breathless flights of fancy about Aristotle‘s amazing possibilities.

Every single paragraph contains claims more absurd than the last. I desperately wanted to find some sort of recognition that Aristotle is really just a fantastical thought-experiment, but I instead found the opposite: “Such a Primer [an AI from a Neal Stephenson novel] is beyond the capabilities of current technology, but even a program as limited as Aristotle would be a step in that direction.” Holy Christ. I can’t continue. It’s all just too stupid.

Artificial Intelligence is dead dead dead if these are the kind of ideas seriously being suggested. You’d think 50 years of abject failure would have engendered some humility.

KIDS SAY THE DARNDEST THINGS (ABOUT NEUROSCIENCE)

Saturday, June 19th, 2004

Here’s an interview with developmental psychologist Paul Bloom, from Edge.org. He claims that children are “natural-born dualists”: that there is an innate intuition of duality between bodies (/brains) and minds.

One story Bloom tells is that, when quizzed about the relationship between his brain and his mind, his six-year-old son claimed that he dreams, imagines, loves his brother, and so on, but that his brain does none of these things. Bloom also brings in data on infant cognition to show that infants come with certain fundamental ontological categories pre-installed (categories like object persistence, person, gravity, etc.). The further claim that the mind/matter distinction is one of these basic categories doesn’t seem well supported, but then I wasn’t sufficiently impressed to read his actual research data.

Bloom is skeptical about the possibility that materialism will ever take root in common sense, because dualism is just too darned common and sensible — it’s built into our neural architecture.

This is wrong. Intuitions and common sense are, more often than not, just the trivialities of language reified. When Bloom says that his son has dualist intuitions, what he’s really observing is that his son recognizes that the sentences “My brain is dreaming” or “My brain loves my brother” are deviant utterances; they don’t make sense to contemporary English-speakers. The non-sense of these statements has nothing at all to do with some essential configuration of our psychology, and even less to do with any basic distinction in nature. The further step of projecting these linguistic preferences back onto pre-linguistic infants is even more absurd.

But the fundamental error, the axis mundi centering the constellations of Bloom’s embarrassing conceptual blunders, is in thinking that a materialist should talk as if he is identical with his brain. When his son claims that the brain doesn’t dream or love his brother, Bloom calls it a “misunderstanding” of the relevant neuroscience data. He makes the bizarre claim that talking about our bodies in possessive form (“my body”, “my brain”) testifies to our common-sense dualism. “These are things that we possess, that we are intimately related to—but not what we are,” he writes. Unfortunately, he fails to notice that we also talk about “my mind”, “my soul” and even “my self”.

This tendency to believe that materialists should identify exclusively with their brains is rooted, I suspect, in Descartes’ solipsistic identification of himself as a “thinking thing” in his Meditations. (Bloom’s most recent book is titled Descartes’ Baby.) The reasoning goes: it turns out that it’s our brains which are responsible for thinking, so we must therefore be our brains. This is an error; we are not thinking things, we are things being thought.

Language is constantly evolving, especially (I predict) given the advent of large-scale written conversation. There is nothing necessary about dualism, except insofar as we may need to talk about ourselves as social creatures when engaging in social practices. It’s a valuable metaphor which I fully embrace in casual conversation, while, as a 21st-century philosophical dilettante, I completely deny that minds are made of mind-stuff.

CATHODE RAY MISSION

Saturday, June 19th, 2004

On the technological inevitability of television’s demise, Jeff Jarvis offers up a few data-points. Beep. Boop.

THE TELEVISION WILL NOT BE REVOLUTIONIZED

Thursday, June 17th, 2004

One of my favorite books ever is Infinite Jest by David Foster Wallace. I know it’s a ‘look-at-me-I-read-books’ cliché to say so, but look at me! I read books!

In this book, DFW describes the decay of television as a popular entertainment medium. The increasing desperation involved in advertising to a desensitized and shrinking viewership leads to commercial messages which are horrifying and disturbing to witness. The networks, facing massive flight among their viewership and at the whim of advertising dollars, have no choice but to air the most excruciatingly unforgettable ads. It seemed plausible to me at the time that competition between marketeers could lead to drastic measures undertaken to make an impression, any impression at all, in the supersaturated brain of the consumer.

This is not the point of the book or “what it’s about” or anything, it’s just a cool idea that stuck with me. I thought of it when I read Hugh Macleod’s (and then Jeff Jarvis’ and Seth Godin’s) post about McDonalds endorsing “brand journalism”, as opposed to “one big execution of one big idea.”

What is ‘brand journalism’? Well, it’s like a ‘brand narrative’, or a ‘brand chronicle’. That sure sounds exciting, doesn’t it? We want collaborative brand storytelling, not your fascist, hegemonic “universal message concept”! Let’s all clap our hands excitedly over a fantastic new era of corporate intercourse! Yay!

“We don’t need one big execution of a big idea,” said Larry Light, McDonald’s chief marketing officer. “We need one big idea that can be used in a multidimensional, multilayered and multifaceted way.” OIC. There’s still one big idea. It’s just being used in a multitude of ways, instead of only one way. That’s a good idea, I guess. A better idea might be to throw out the ‘idea’ idea entirely.

It seems to me that the opposite of what DFW envisions is happening. The modern philosophy of advertising is (or should be) tending against “monolithic” (i.e., televisual) campaigns. As people drift away from advertising-sponsored network tv (which is a technological inevitability, in my opinion), it’s the networks themselves who are becoming increasingly perverse in their attempts to lure audience attention and thus preserve their value as a portal into the public consciousness.

When commercial-supported tv is turned off, marketers had better learn how to have a conversation. The bloggers mentioned above have interesting things to say about how this could work.

PS. I can’t wait to read DFW’s new book, Oblivion.

PPS. Whoever borrowed my copy of Infinite Jest five years ago should return it >:(

EVERYONE LOVES THE PIXIES

Wednesday, June 16th, 2004

Bam thwok!

Do you know why this song rocks so hard? It’s because Kim Deal wrote it and sings it, with Frank Black on backup. As it should be! She is the superior songwriter and rocks harder before breakfast than Frank Black does all day.

If you haven’t heard it, about a billion copies are available on your favorite p2p network. Or you can buy it from iTunes.

DEBATE 2004

Tuesday, June 15th, 2004

Here’s my punchy, shallow summary of the 2004 Canadian election debate! Taking my cue from the debaters, I will refrain from addressing any actual political issues. Instead, I will restrict my comments to the ‘image’ and ‘brand message’ that the candidates are going to so much trouble over. The substance of the debate is entirely irrelevant to the electoral process; it’s merely a dog-and-pony show (mostly a dog show).

Paul Martin made a pretty egregious crack about Jack Layton’s “handlers” telling him to talk nonstop. Chretien could really deliver digs like that. If Chretien had said it, I would have spouted my gin’n juice from my nostrils. Even Duceppe (OR WHATEVER) launched a few Scud missiles of witticism across the bow of Martin’s pomposity. Coming from Martin, the metacomment seemed bizarre and vicious. It’s because he looks (and acts) like a slick, soulless capitalist, no doubt. Layton’s indignation was righteously well-played.

Overall Layton came off the best, I thought. He started out rocky, overselling his optimistic positivity brand. Inevitably, it sounded calculated and pandering. I, for one, do not give a fuck what your image consultants have to say, and I’m certainly not going to vote on a pro-optimism platform. “Hey everybody! Let’s all be optimistic about our glorious future of positive togetherness! Healthcare and handjobs for everyone!” It was grating in the extreme. He came off better when it got down and dirty, because he was really the only person on that stage who could pull off the moral superiority necessary for an effective character-attack. And Martin was right: he did talk non-stop, and that was obviously a considered strategy. He desperately wants to be seen as a serious contender, and the only way to do that is to make sure you’re getting lots of that sweet sweet limelight. I definitely consider him a more serious contender after that debate, although that’s mostly because Stephen Harper is so utterly repellent.

Stephen Harper’s handlers told him the exact opposite of what Layton’s did. “Just attack and then shut the fuck up,” they said. “People want to vote Conservative to punish the Liberals. If you keep quiet, nobody will realize that you’re a centipede-filled robot.” This strategy, while reasonable, made Harper seem like he was moping through the whole thing. Whenever he wasn’t speaking he had a sour, petulant look on his face. Instead of appearing “statesman-like” (as one post-game image consultant said), he just gave the impression of pouting because everybody hates him.

The theme of this debate was Image Management. It’s sad but inevitable that fads of marketing are embraced with gusto by the electoral process. The worst part by far of the whole televised debate (on Global anyways) was the utterly fucking ridiculous focus group insta-poll. See people’s reactions IN REAL TIME as they twist a little knob according to the vague sensations of ‘agreement’ or ‘disagreement’ that flit through their awareness! Wow!! The little bar — which actually took up the bottom half of the screen, another quarter of which was taken up by reaction-shots of the knob-twiddlers — would show how far their knobs were twiddled towards “Strongly Agree” or “Strongly Disagree” during the debate.

Predictably, the measurement seemed to fluctuate mostly at random. At times it seemed to correspond to, if anything, the tone of the speaker more than the points being made. Anytime there was a gracious or generous comment made, agreement edged upward. Anytime there was any hint of defensiveness or hostility, disagreement was rampant. I think it was more a “to what extent are you repulsed by this person’s very existence”-meter than any sort of comment on policy preferences. All candidates started out as moderately disagreeable, and then edged upward towards neutrality as the knob-twiddlers got bored.

What this focus-grope* was supposed to add to the debate is unclear. In the pre-coverage, someone claimed that after past debates, the editorial analysis on TV and in newspapers tends to diverge from the reaction of “the average person” who doesn’t read/watch any of the mainstream recap. Obviously. The average person is not very perceptive. The average person is an idiot. At least journalists and their interview panels are professionals who are expected to think carefully and form reasoned opinions about the debate. If I disagree, I should wonder about my interpretation and maybe rethink my opinions. In fact, I respect the opinion of anyone able to muster some coherent interpretation. I couldn’t care less what Joe-from-Toronto’s visceral phenomenology of agreement was like during the debate.

I think I’ll vote NDP, but only because Layton doesn’t make me want to projectile vomit from every orifice.

*focus-grope: a focus group which is given exceptionally vague goals and directions to ‘intuit’ feelings or reaction about some subject or product. The subjects then grope around in the dark for ways to please the experimenters by reacting in the appropriate way. (Actually I just misspelled ‘focus-group’ but I think it’s apropos.)

HOW TO CHANGE MINDS (PART 1)

Tuesday, June 15th, 2004

The symptoms of viral or bacterial infection are entirely negative. Isn’t this a remarkable fact? Why aren’t there viruses that improve your physical or mental wellbeing?

Why doesn’t the flu make you energetic and courteous and prone to spontaneous orgasms?

Why can’t you catch a buzz like you catch a cold?

Why aren’t you calling into work, laid up with inspiration and under doctor’s orders to remain in bed writing concertos and novellas?

A beneficial infection spreads so rapidly that it quickly becomes incorporated into our bodies. It becomes identical with us; we gain possession of it the way we own our immune systems. (This relationship is really more like a delicate truce, unfortunately.)

The same is true of ideas. Encountering a valuable idea, we absorb it into ourselves. It becomes my belief, my thought, my possession.

108735519451671871

Tuesday, June 15th, 2004

scott says: (11:31:28 AM)
i’m feeling pessimistic about psychology in general lately

scott says: (11:31:43 AM)
i’ve almost stopped believing in minds entirely

misseeee says: (11:31:55 AM)
you are freaking me out

scott says: (11:31:58 AM)
such a baroque cartesian fiction!

misseeee says: (11:32:26 AM)
sounds to me like someone is grumpy and needs a nap