
Monday, September 21, 2009

Diablo III and the Art of Procrastination

I originally started composing the following as a response to the article at Gamasutra, being a slightly less limp than usual Chris Remo interview with two gentlemen from Blizzard about Diablo III's development process.

I realized that this was less a response than a screed, so I'll just put it here so that the only 2-3 people in the world who care about my opinion on the Diablo series can read it.


The interview presents the development process at Blizzard for Diablo III as one of perpetual revision, where the release is driven not by any sort of sane schedule but by a process of "hey, this isn't awesome enough yet," after which they add more crap.



It must be nice to have that level of freedom to constantly revise the game whenever the whim takes them. It most certainly yields a more colorful game with significant replayability. Diablo II still has the ability to get its hooks into people 9 years later: its gameplay is still visceral, its sounds are still evocative, and its dated graphics retain a kind of timeless quality, no doubt provided by the right mix of cartoony and dark.

However, I wonder how much of a sacrifice a process like that is. Is there some basic book of design and plot description they work with from the beginning, or are they stuck trying to nail all of the "awesome" together into a coherent story line?

There was much about Diablo II that seemed like it was added to pave the way for new features which were never developed, like the assassin lady in Kurast with the interesting back story. With a little more focus at some point during development, the sort of unfinished, empty feel of its environments could have been fleshed out into an emotional experience more significant to the player than just "awesome."

With a solid plot and some emotional investment added to the already powerful experiences (by a mechanism other than the truly gripping cutscenes) the Diablo series could not help but be improved.

Thursday, July 2, 2009

Thinking about the feasibility of Project Natal

I'm still thinking more about the feasibility of Project Natal.

Assuming you have a stereo camera setup - one infrared sensor and one camera with a high dynamic range and a very high sample rate, even if the resolution is piss poor - then I can imagine depth sensing working quite well. If the system has sufficient breadth (which, judging by the pictures, it probably does not) then it should be able to get oblique enough views to correct for accidental occlusion of body parts and the inevitable depth errors when, for instance, a shirtless man in a room whose lighting is either too bright or too dim for his skin tone is doing things with his hands in front of his chest, or when someone is wearing a shirt with emissive/reflective characteristics similar to their skin's (in both the infrared and visible spectra) - both of which should be considered worst-case scenarios for the purpose of resolving individual body parts.

The infrared emission and reflection characteristics of human flesh are fairly distinct from most natural-fiber clothing that people wear - unfortunately, some synthetics show up as almost the exact same "color" of infrared as the various common human skin tones. Unless the infrared sensor has a good gamut in the infrared range, this could produce serious problems with discernment. Nudity, one would suppose, would present all sorts of problems. A terrible scenario would probably be someone's pasty-skinned child trying to play a game after coming in from the sun for a break, in a Lycra swimsuit and thick sunscreen and no shirt - especially if they're wearing baggy shorts. Then you just have a confusing mass of light and dark spots slopping around on the screen in the occasional vague shape of a human child.

Assuming that the RGB camera data mixes well with the depth data from the infrared sensor - which is the best-case scenario that Microsoft is no doubt depending on - then the system could do a sort of depth sampling which would end up, even in less than ideal situations, a bit like that intentionally distorted Radiohead LIDAR cloud which Thom Yorke turns into (only much lower resolution). If that quality of data (or near it) can be extracted, and there's enough skew between the depth camera and the RGB camera, then you should be able to discern hands in front of the body with some depth accuracy - in which case, the device will work quite well as a motion sensor, assuming that the libraries which pick out which body part is which return consistent, even if not always accurate, results. If game developers have to develop their own libraries for discerning anatomical characteristics, then the system's going to flop hard, and I think MS knows this, so we'll probably start seeing XNA updates by the end of the year with the beginnings of motion control libraries in them.
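
To put some numbers on why the breadth between the two sensors matters, here's a quick Python sketch of the standard pinhole-stereo relation (depth = focal length x baseline / disparity). The focal length, player distance and baselines are numbers I made up for illustration - nothing to do with the actual Natal hardware:

```python
def disparity_px(focal_px, baseline_m, depth_m):
    # Pinhole stereo: disparity grows with baseline and shrinks with depth.
    return focal_px * baseline_m / depth_m

def depth_from_disparity(focal_px, baseline_m, disparity):
    return focal_px * baseline_m / disparity

FOCAL_PX = 580.0                   # assumed focal length, in pixels
TRUE_DEPTH_M = 2.5                 # a player standing ~2.5m from the sensor

for baseline in (0.025, 0.075):    # 2.5cm vs 7.5cm between the two cameras
    d = disparity_px(FOCAL_PX, baseline, TRUE_DEPTH_M)
    misread = depth_from_disparity(FOCAL_PX, baseline, d + 1.0)  # one-pixel matching error
    print(f"baseline {baseline * 100:.1f}cm: disparity {d:.1f}px, "
          f"a one-pixel error reads as {misread:.2f}m instead of {TRUE_DEPTH_M}m")
```

The wider the cameras sit apart, the less a single pixel of matching error distorts the recovered depth - which is why the apparently narrow spacing in the pictures worries me.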

Moving on to facial recognition: it's glitchy at best, but the additional depth data will help reduce the false positives quite a bit. Assuming some intelligent use of "motion trails" with ordinary face recognition and depth data, some slightly more than trivial object and face recognition might be possible - like "Hey, who's that back on the couch? Is that Dave?" If the Natal system can ever pull that off, even occasionally hilariously incorrectly, then I'll be plenty impressed.

Facial recognition leads to the next problem, however, which is their claim of emotional recognition. Assuming that you're using a gestalt of methods - erratic movements sensed by the depth cameras, facial shifts, basic emotional grammars on the face, vocal tics and other vocal cues - it could probably manage to detect stress and amusement, but beyond that, I have absolutely no confidence in the ability of the Natal sensor to pick up emotion. Something like that ridiculous Vitality Sensor that Nintendo is putting out would augment the data enough to give me greater confidence in its results, but it's hard enough to read emotion using a few hundred thousand years of behavioral evolution, let alone with technology less than 30 years old.

This brings us to the other point: if you're paying attention to voices, you want more than emotion sensing - so let's talk about speech. Speech recognition in something like Natal is really only going to work in absolutely ideal situations unless their multi-array mics are sufficient in number and precision to yield rough positional data which can be cross-referenced against the depth and camera data. If this can be done, then "source filtration" of the audio can be done so that you're only trying to process sound from one location.
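
For the curious, here's a bare-bones sketch in Python of what that kind of source filtration could look like - a crude delay-and-sum beamformer. The sample rate, mic positions and source location are stand-ins of my own; this illustrates the general technique, not anything Microsoft has actually described:

```python
import numpy as np

SPEED_OF_SOUND_M_S = 343.0
SAMPLE_RATE_HZ = 40_000            # assumed, to match the 40kHz figure I use later

def delay_and_sum(mic_signals, mic_positions, source_position):
    """Shift each mic's signal so sound arriving from the estimated source
    position lines up, then sum: on-axis speech adds coherently, off-axis
    noise tends to cancel."""
    src = np.asarray(source_position, dtype=float)
    distances = [np.linalg.norm(np.asarray(p, dtype=float) - src) for p in mic_positions]
    max_d = max(distances)
    out = np.zeros_like(mic_signals[0], dtype=float)
    for signal, dist in zip(mic_signals, distances):
        # Delay relative to the farthest mic so every shift is non-negative.
        delay_samples = int(round((max_d - dist) / SPEED_OF_SOUND_M_S * SAMPLE_RATE_HZ))
        out += np.roll(signal, delay_samples)   # np.roll wraps around; fine for a sketch
    return out / len(mic_signals)
```

The source_position here is exactly what would come out of the depth and camera data - that's the cross-referencing I'm talking about.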

I could see the developers at Microsoft creating a sort of data structure which represents a fuzzy cloud of positional data which they identify as an "individual," whether it's a human or a cat or a Roomba. This entity would become an input source, at least from the perspective of developers working with the technology for games. From this source, you could pull speech samples (and a confidence number), video data, generalized motion information, and a low-data-rate history of what they've been doing for the past few seconds to help make prediction and resolution of actions a little easier.
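
Something like this, to sketch it in Python (the structure and field names are entirely my invention, not any real Natal API):

```python
from collections import deque
from dataclasses import dataclass, field
from typing import Deque, Optional, Tuple

@dataclass
class TrackedEntity:
    """One "individual" input source: a fuzzy cloud of positional data that
    the tracking layer has decided is a single thing (person, cat, Roomba)."""
    entity_id: int
    centroid: Tuple[float, float, float]                   # rough position in room space, metres
    motion_vector: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    speech_text: Optional[str] = None                       # latest recognized utterance, if any
    speech_confidence: float = 0.0                          # 0.0 - 1.0
    history: Deque[Tuple[float, float, float]] = field(
        default_factory=lambda: deque(maxlen=180))          # ~3 seconds of centroids at 60Hz
```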

As much as remembering inputs is important for, say, an arcade fighting game, it will be even more important in a Natal-enabled game, because these games will not only have to interpret what the player is doing at each instant, they will also need to resolve the "intent" behind the player's current motions - in other words, what they're trying to do. If somebody's wimpy albino kid (from my previous example) tries to throw a punch, that shit is going to be all over the place. If the kid's exact motion shows up on the screen, instead of the desired result (that being a nice punch), then the kid will become frustrated before long and not want to play. The same is true of the fat geek with no muscle tone. Nobody except true athletes really wants an absolutely truthful representation of their physical prowess; in a video game they're going to want to see an idealized representation of their intent. So you average the samples from a fixed amount of time, see that the kid is vigorously moving his fist forward (or at least his arm), and you make the character throw a punch in the rough direction of whatever's closest to the destination of the punch, relative to the character's position and orientation.
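
A crude sketch in Python of that kind of intent-averaging - the window size, frame rate and speed threshold are numbers pulled out of the air:

```python
import numpy as np

def infer_punch(fist_history, window=12, speed_threshold=1.5):
    """Average the last `window` fist positions and decide whether the flailing
    adds up to "this player is trying to punch". Returns a unit direction
    vector for the idealized punch, or None if there's no clear intent."""
    if len(fist_history) < window:
        return None
    samples = np.array(fist_history[-window:])      # shape (window, 3), metres
    displacement = samples[-1] - samples[0]         # net motion smooths out the wobble
    duration_s = (window - 1) / 60.0                # assuming 60Hz tracking
    speed = np.linalg.norm(displacement) / duration_s
    if speed < speed_threshold:                     # too slow or aimless to be a punch
        return None
    return displacement / np.linalg.norm(displacement)
```

The game then throws the idealized punch along that direction at whatever target is nearest, no matter how sloppy the actual swing was.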

So, all told, Natal promises that it knows what you're doing, how you're doing it, when you did it, what you felt while you did it, who you are (at least within their predefined "people list"), what your facial expression was when you were doing it, and anything you may have already said or have been saying when you did it. That's a pretty damned tall order. If they can pull it off with SUFFICIENT precision to make it a not-frustrating experience, then it's going to be THE must-have technology of this generation, but given the ridiculous amount of processing power that something like this will require, I don't see how Microsoft can hope to deliver it with any degree of accuracy on the Xbox 360.

http://pc.watch.impress.co.jp/

With a "theoretical peak performance" of 115 and change gigaflops on the main processor and maybe a spare hundred or so gigaflops from the GPU unit (assuming the Xbox's rendering system supports that sort of tampering with the pipeline), you're looking at (after the hypervisor) probably about 150 gigaflops to work with. Assuming you're dealing with a composite 320x240 depth image with, say, 40 levels of depth at 60 fps for the depth camera even assuming you're using 8-bit monochrome depth samples you're looking at 11mflops for the video positional data. Assuming "reflection" where it looks back on older data, let's call that about 80mflops. Still under a gigaflop, not bad. Let's now do positional sound data at 40khz from (I'll assume) 4 microphones. Assuming that getting good positional data from it takes about 1000 floating point operations per sample (a probably conservative estimate) we're talking about 160mflops for that. Let's be charitable and say that with intelligent downsampling and some analog filtering we can reduce that by a factor of 8 to 20mflops. We're up to a gigaflop now.

Now let's reexamine our estimates for the floating point power of the Xbox 360. Let's now realize that the system probably can't operate at anything more than about 60% of its theoretical peak for any length of time and realize that 40% of those resources are just going to be tied up with polling devices, the various live internet shit it does for XBL and the hypervisor which allows it to slide in that sweet XBL blade whenever the menu is hit. We're really looking at about 50gflops then for the whole system.

Great, so the motion tracking and positional audio only take 1/50th of the available power, right? What's the problem? Well, those are the easy part, computationally speaking. Speech recognition is going to take another 3-4 gflops if it's trying to handle multiple-source audio. Let's be kind and say speech, facial and emotional recognition together will use 4 gflops total, bringing the whole thing to about 10% of the total available processor power. That seems perfectly reasonable, actually! Seems feasible for use in games. Unfortunately, the Xbox 360 has only 512MB of RAM... and the depth data alone, uncompressed, at 20 depth layers, 320x240 at 60fps, one byte per voxel, with an 8-second history, takes up over 700MB of RAM.
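
The depth-history arithmetic, for anyone who wants to check it - it lands a bit over 700MB either way you count:

```python
# Reproducing the depth-history memory figure from the paragraph above.
W, H, DEPTH_LAYERS, FPS = 320, 240, 20, 60
BYTES_PER_VOXEL = 1
HISTORY_SECONDS = 8

total_bytes = W * H * DEPTH_LAYERS * FPS * HISTORY_SECONDS * BYTES_PER_VOXEL
print(f"{total_bytes / 1e6:.0f} MB")        # ~737 MB (decimal)
print(f"{total_bytes / 2**20:.0f} MiB")     # ~703 MiB
# Either way, comfortably more than the 360's 512MB of shared RAM.
```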

There's a basic tradeoff in computer science: the faster and more time-efficient an algorithm is, the less memory-efficient it tends to be - and the inverse is also true. I can't imagine that really accurate, high-framerate data which aggregates all of the positional, motion, gestural, facial, emotional and speech data is going to use less than, say, 20% of the available memory and 30% of the available processor time on a 360, even with great technical wizardry on their part - meaning that we're talking about game developers trying to create games with 30% less processor time and 20% less memory, while also managing input/wait cycles between their game engine's input layer and the Natal layer.

All of this adds up to a very risky proposition for Microsoft, especially if there is a large amount of R&D money tied up in it and they intend to do a full-scale launch before having a significant number of A-list studios signed on to do titles which require it - a feat which seems almost impossible without MS throwing a lot of M$ at them. All of which means that Microsoft really has a lot riding on this thing if they're actually trying to sell it - and after those demos, they've already staked their credibility on a successful launch.

2010 is going to be one motherfucker of an interesting year for gaming.

Tuesday, May 5, 2009

Science Fiction and the Perfect Alien

I had written some notes for a post of this sort some time ago and completely forgotten about it. I felt like blogging about something today, but my topic was completely stupid so I destroyed it. A recent conversation about this at Golden Teriyaki made me decide to resurrect it, however.

In popular sci-fi there is a sort of overarching notion that the aliens with the highest technologies would be pretty much perfect - geniuses who never make mistakes, civilized to the point of having no use for war, abhorring violence, and so on. Even if they do not satisfy all of those traits, there is still the belief that at least one of them should be true. A friend of mine scoffed at the idea of an alien species with physics advanced enough to bridge the vast reaches of space succumbing to the common cold. There are any number of reasons something like that could happen, and I might blarg about that later, but I'm more concerned with the notion that a technologically advanced civilization is basically perfect.

I think this sort of thinking is a remnant of the overwhelming classicism of European and derived cultures. The idea that the ancients held great knowledge which was thrown away in folly or lost to tragedy suggests that humanity underwent some sort of reversion to the base or the primitive. This isn't actually unfounded, as some sort of severe stunting of development has happened in most civilizations (the Dark Ages of Europe, the technological stagnation of the Qing Dynasty, the Maya collapse, the stagnation and collapse of the Ottoman Empire, etc.), and in some cases vast amounts of technological and natural knowledge have been lost. The idea is that a civilization which was sufficiently, well, civilized to have avoided this sort of collapse and subsequent stagnation would surely be superior to us. However, the ubiquity of such collapse and stagnation should lead us to believe that the experience is either integral to the "human condition" or integral to any organization once the scale of a civilization reaches a certain critical mass.

This inferiority complex is further fed by the idea that, due to these failings of human society, we are flawed beings who are destined to succumb to tragedy over and over, destroying our geniuses, peacemakers, great works and knowledge through sheer folly. I think we're probably not the only species with that problem. This also speaks to the ideals of European culture - the people who record our history are those who have a love of knowledge and a hatred of disorder, and so they project those traits onto their civilization, which naturally agrees because it is united by a common respect for authority and a longing for legitimacy.

All of this conspires to paint a picture of a civilization which has, in the exact same short time we've had as a species (say 150,000 years) and starting at the exact same time, reached a perfection that remains forever out of our reach because we're flawed and they're not.

Considering, of course, that life on earth originated 3.5 billion years ago and the universe is about 13 billion years old, this doesn't seem like too ridiculous a proposition - until you realize that such assumptions rest on the flawed supposition that Earth and its sun are the ideal conditions for the formation of life.

We find ourselves located in a fairly large supercluster with a reasonably high density level. Consider the metric expansion of space, which means that everything is moving apart from everything else at an accelerating rate. The fact that life on earth is only about 3.5 billion years old signifies that it takes at least that long for a planet with conditions and materials identical to ours to become hospitable to life. However, since the universe is not locally isotropic, we know that not every star in the universe formed under identical conditions. We must conclude that other planets may have formed much earlier than our own and become hospitable to life perhaps as many as (to pick a number out of a hat) 8 billion years ago. This pushes the minimum time for a sentient race to form back to about 8.5 billion years after the formation of the universe - roughly four and a half billion years sooner than us.

Now let's attack the idea of nonviolence. We have not appreciably managed to become less violent or evil in 150 thousand years of human development. Consider that, as a species, we have wiped out any and all plants and animals which posed a credible threat to our species on the entire earth - at least those which were large enough to see prior to the invention of optics. Our population is sufficiently large that it would take nothing short of what insurance companies like to call an "act of God" to destroy our entire race. Still, the most deadly threat to the human race on the planet is the human race itself - we kill anybody who gets in our way, or figure out some other way to marginalize them until they're no longer a threat, and preferably helpless. Consider, if you will, the possibility that we do this not maladaptively, no matter how harmful it may be to our technological and social development, but as a survival trait. Everything which threatened our food supply, our offspring, or our tribes themselves was destroyed by mankind before we even figured out how to effectively record that information. Anytime it became difficult for us to live, our greatest technological developments were those designed to destroy. Barring the existence of an unusually less hostile planet for the development of an intelligence, I put forth that this pattern probably follows in any species which evolves as the pinnacle predator.

Now let us consider the instances of animal intelligence. Nearly every animal on earth which shows what we define as the hallmarks of intelligence is predatory, and has been for millions of years.

Certainly, there are intelligent herbivores; however, advanced problem-solving skills are seldom necessary when your primary food source is found absolutely anywhere you walk. Some scavengers display intelligent behaviors as well, but there are virtually no higher animal species which are strict scavengers - all of them will hunt or browse in the absence of carrion. So let us consider that intelligence can only form in the presence of predation pressure - either as the predator or the prey. A prey animal which becomes intelligent enough to do so will inevitably decide to kill any truly dangerous predators. As a classic example, dolphins have been observed to kill sharks that come too close to the pod by ramming them with their bony noses - painful for the dolphin, but fatal to the shark, which has no rigid bone structure to protect its internal organs from bruising and rupturing. Herd animals use violence routinely to discourage predators, and the herd organization itself relies on violence, or the threat of it, to establish leadership. Believing that this association of intellect and aggression is unique to Earth species would suggest that all creatures of Earth are cursed with original sin. Not only would any scientist scoff at that, so would nearly any theologian.

So we've dispensed with the notion that we were here first, dispensed with the idea that we're in any way special, and dispensed with the idea that intelligence or technological development prevents violence - at least in any truly general terms. We've now got to look at the idea of societal superiority. Assume that an alien species successfully subsumes, at least at the official level, the role of aggression in the ordering of its society. We have the classic sci-fi example, courtesy of the various writers of Star Trek, of the planet Vulcan. The people of Vulcan, also known as Vulcans, are guided solely by a love of logic and the suppression of their own aggressive instincts. This requires that a sufficient majority of the population decides, coldly and logically, that the ones who won't get on the boat can either fuck off or die. To trigger Godwin's Law prematurely, let us now remind the reader of the tenets of National Socialism in attempting to form a single perfect world culture based on the ideals of collaboration and equality. That didn't work out too well either. Let us suppose that the government of these theoretical organisms was organized well enough to resist corruption at the highest levels for long enough (say, 5 or 6 generations) to successfully order the society and put an end to irrational behavior done for selfish ends, outside of the occasional deviant who is "re-educated" just as the ostensibly admirable Vulcans of Star Trek do it. Assuming that this becomes ingrained adequately into their social makeup, and that such a society decides it is, in fact, logical to travel the stars in search of other intelligent aliens, then it's possible that this one exceptional species would be everything that sci-fi loves and holds dear. However, this one species would be a fluke, as nothing we have seen in world society has ever convinced us that such a pure totalitarian society could ever form in any mixed pot of genetic heritage.

Let us therefore choose to agree that humans are unexceptional in their overall wickedness, love of perversity, self-hatred, destructiveness, xenophobia and so on. Given that, and assuming we are not exceptional (still sticking, as before, to the Copernican principle), most if not all intelligent alien species should have handicaps similar to ours when developing their societies. Accepting this, we can move ahead to attacking the notion of scientific perfection - the idea that if they've got enough physics to traverse spacetime faster than light, or at least to subjectively appear to, then they very well ought to be able to solve problem X, where X is pretty much anything we could think of to stop them from doing whatever they wanted to do.

A simple observation of history shows us that scientific advancement never progresses evenly or uniformly, or even along sensible lines; it rarely stops to backfill the implications it may have missed. If this seems like an inaccurate appraisal to you, I encourage you to watch James Burke's Connections series for a sort of luxury cruise through the development of human technology. Science always leaves these gaps and blind spots which older sci-fi and the good old Popular Science magazine would have had us believe would be solved by the futuristic year 2000; things like artificial intelligence, holographic displays, truly ubiquitous computing, robotic labor, cheap energy and free transportation, and the life of leisure and freedom from disease and inconvenience that comes with all of the above. The reasons science misses these things are as diverse as the things themselves, but they typically boil down to effects ancillary to the actual research. Things are discovered accidentally while researching other things, and their significance is never understood until some bright guy trying to find a more efficient way of doing something else entirely stumbles onto the true importance of the earlier discovery. Things go almost-discovered, but for some simple accidental happenstance which results in the destruction of some work or the overlooking of some minor side effect. Science, in short, despite all the orderliness and documentation with which it parades through recent history, is just plain disorganized. Considering that thought itself is an apparently nondeterministic, nonlinear, chaotic process involving semi-random convolutions inside a chemical-electric matrix, it's no wonder we have trouble making creativity - the most evanescent and capricious of all kinds of thought - an orderly process.

It is also worth considering that most science and nearly all invention happens in response to a problem or a nagging gap in knowledge - an itch whose only scratcher is methodical effort to work it out. Some problems have never been solved because they simply aren't that irritating, or there was no money in solving them. Other problems are simply too hard to solve, at least with existing knowledge and technology, or there is a body of conventional wisdom which says they are too hard and so nobody dares attack them. All of this suggests that the ordinary pattern of discovery and development will always manifest these voids where people either didn't care to, didn't think of, or feared to investigate a certain topic.

Alien races might also, due to eccentricities in brain function and developmental conditions, have enormous blind spots in their abilities and knowledge. A lack of visual sensory organs may leave light - and by extension, much of the electromagnetic spectrum - wholly unexplored for millennia, discovered only through measurable effects in materials they were working with at the time. Think of all the technology we developed before even figuring out how light worked - mostly in Newton's time. Think of how long it was before we discovered radio waves. Without any of that investigation we still could have developed rocketry, life-support apparatus, some form of cryogenic preservation or hibernation, and even computers - electricity was observed and successfully used well before we knew what it was or how it worked. This merely requires a simple lack of the sensory equipment to perceive light - a lack found in a great many species on Earth - and it would be far more likely to develop among creatures occupying a planet whose sun is not visible due to cloud cover or an ice layer (as on Europa). Consider also that the division of the human brain into right hemisphere, left hemisphere and hindbrain shapes the process of our thought quite significantly - so much so that people who have had their brains surgically divided show striking changes in behavior and perception. Individuals who possess only a single functioning hemisphere show some exceptional behavioral and cognitive traits which are not observable in humans with more fully functional brains. A lack of such division in an alien species' brain might result in a fundamentally different approach to learning, discovery and thought.

Finally, we've got the other idea, which is that the aliens wouldn't want anything we have to offer if they're so advanced that they can whiz across space and time all hurly-burly. Given the above-noted (not only possible but probable) cognitive and scientific differences between two species, it is exceedingly likely that we have done any number of things that another intelligent race has simply never thought of, or that we have done some things better than they have. After all, the idea of getting off of Earth was never a very high priority, as Earth is (as young-earth creationists and anti-Copernicans have noted throughout history) particularly hospitable to human life.

Suppose we have an intelligent organism for whom their planet has never been particularly comfortable. This theoretical species developed on the very margins of livability and thrived in a series of "pocket" environments on a much larger planet, where they developed intelligence out of the simple fact that all of those who did not were gradually eliminated by a steadily worsening biosphere. They had perhaps a couple of million years to go from tiny lizard to spacefaring race, but the hostility of their environment conspired to provide them all of the resources they needed to get out of their harsh surroundings and into a more stable artificial space they create for themselves. A species like this would have little loyalty to their planet and little desire to stay put. They would be constantly looking for ways to expand and to respond to reproductive pressure. If such a species was constantly searching for ways to improve its environment, some of its earliest developments would be in chemistry and demolitions - looking to expand the survivable regions - and in biology, to improve the stability of its environments. These organisms might have developed gunpowder before the plow, rocketry before the Caesarean section, or metallurgy before dyeing.

Now consider that they find themselves travelling through space, millions of miles from anywhere habitable, fleeing their final doom or scouting for a new place for their people to live - something easily convertible into livable space for them. They happen across Earth by some fluke of universal cartography and find an almost ideal environment in some deep caves or at the bottom of the sea. They discover that humans are already living elsewhere on Earth, but we do not compete directly for the same resources, as their technologies are based on elements which are in plentiful supply and their ideal spaces are in regions unlivable to us. However, they've had little enough leisure time, and they now find themselves on a world so rich for their people that they have nothing to do with their time - a true utopia. They discover our art, our recreation, our games and songs and music. What a trade, right? We get interstellar travel, they get paradise. For all we know, the disaster which made their world unlivable might have made it ideal for humans. We could trade. This is astronomically unlikely, but it gives the lie to the idea that there is nothing humans do which all other races would not already have done better. Remember our Copernican principle - there's no reason to believe that we're unusually useless, either.

That's not to say that the aliens who come to conquer us would immediately understand what we had to offer, either. We may have been overlooked a number of times by intelligent species already, simply because the things we have didn't seem like anything they'd want, or because they failed to recognize what they wanted since it takes such a different form with us. For that matter, we may completely fail to recognize the value in the things we already have. Consider the geothermal vents at the bottom of the ocean - certain kinds of chemical processes may only be possible under extreme pressure and heat, while others require a great deal of cold and pressure. Consider that our gravitational pull, though great, may be much weaker than that of most comparable planets in our portion of the galaxy. Consider that our sun is not really that dangerous but puts out ample energy to render at least four of our planets, and multiple moons, potentially habitable - or at least fixer-upper opportunities.

There's a way forward for alien portrayal in science fiction, however: flawed races who have unfortunately forgotten more in the process of advancing their technology than they could afford to - a total lack of the knowledge needed to survive without high technology. Consider a species utterly missing its cultural heritage due to the emergency deletion of information deemed nonvital in order to preserve the science needed to keep the colony ship running. Consider a species so new and in such a hurry that they really have no antiquity, so short-lived that they catch only a fleeting glimpse of a sense of cultural identity. To species like this, we would be the wise elder race, as the ancient Greeks were to the European Classicists - thought of highly because those who come after cannot remember how far they themselves have progressed, treasured because we would represent a past that they have either lost or had stolen from them. The things that keep us from flinging ourselves across the voids of space may be the very things an alien would treasure most - a bottomless past and a positive future.

Thursday, March 12, 2009

The Internets are Fragile Things

I'm not the only webmaster who has problems with periodic DoSing.

Even the mighty Microsoft must fail from time to time.

Friday, February 27, 2009

Interface and Steampunk

What do these two have in common? One is the reality of the future and one is the imagination of the future. Let's back that shit up, right? If you look at the views of the future we live in today from the perspective of individuals in the late 1800s, you'll notice a unifying theme (besides ubiquitous flight... can you blame them? Do you know what Broadway looked like in 1880? I'll give you a hint: Ocean of Horse Poop).

Let's try that again - you'll notice a unifying theme; they illustrate only how an entire human or system of humans will interact with a technology - a man standing on an exposed platform of a proto-zeppelin, people standing on a rail platform a hundred feet above the ground, etc.

The concept of technology was so new (and large) that the scale was too complicated to absorb. The best they could come up with was systems of levers and wheels. The concept of the need for technology to communicate with people hadn't even been thought of - you won't see anything more complicated than (literally) bells and whistles until the turn of the century. Gauges were a beautiful dream.

Around 1910 we see the rise of displays in fiction - but only for remote observation; pegboards, buttons, patch cables, switches, levers, tube systems - these are the way of the future. Communication between people is accomplished in a very handwavey fashion, as it was later in the 1950s and 60s by whatever the futurists of the time decided to call a "videophone." How could they possibly foresee machines which construct their own reality inside themselves to display to you - or augment your own reality?

The ubiquity of the moving picture changed all that; soon everybody thought it would replace the written word. Here is a technology which is essentially preconstructed and recorded reality-in-a-box. Just turn the crank and shine a light through it!

By the 1920s and 30s, the concept of mechanical men and automatons had come into the public consciousness. Gone were the days of people thinking about how to make machines work - they would just talk to their videophone and some mechanical man (filled to the brim with rebellious evil but shackled to the human will by the miracles of technology) would perform their every whim. It's safe to say that people at the time failed to understand the complexity of consciousness. That failure would continue for the next six decades.

In 1945, a man named Vannevar Bush wrote of a device he called the memex, or memory extender, which would basically be a dual-screen Wikipedia - in concept, anyway. The system would be a collection of microfilm notes linked mechanically in train-of-thought fashion. The only problem is that it was single-player - meant to be used only as an extension of the memory of a single person - and there was no technology that could actually build it at the time. A further weakness was the lack of searchability, and you see this in early computer-based sci-fi from the 1950s - the idea that people who use computers have highly personalized systems which are really only usable by the primary user, because of how idiosyncratic people's personal filing systems tend to be.

The ubiquity of search engines was another unforeseen development in the modern way of doing things.

While the fiction authors and inventors foresaw automata which would remember things for you, they gave little thought to how this memory would actually work - they assumed an intelligent agent would be required to scan through the information. The belief was that understanding awareness and the comprehension of abstract thought were the future of technology. Augmentative technology - making humans bigger, better, smarter, faster, stronger - began to take off in fiction in the 1950s and 1960s.

Ultimately, all of this came down to either a sort of "companion spirit" sort of benevolent AI or some form of stupid automation which provided information and calculations on demand - automation which had to be skillfully wrangled and required specialized learning.

The late 20th century and the beginning of the 21st have, however, heralded the arrival of what I feel is the most important development in human thought: the invention of the concept of an "interface."

The advent of interface came rather as a surprise to everybody. The first computers were pre-set to run a task, then printed out their results on tickertape or inscrutable output from various registers. The inefficiency of working with that, once it became clear that working with computing machines at all was a good thing, became immediately obvious. Before too much longer, we had keyboards and monitors. And then we kept using them for 30 years. We'll probably keep using them for another 30 years. However, the arrival of the mouse (and pointing devices in general) heralded the arrival of the age of the graphical user interface. The first ones were not particularly popular.

An interface is, in its most general definition, an area where two systems collide and interact. In chemistry, the interface is the fuzzy area between two states or phases of the same material. In business, an interface is a system which connects two related but disjointed portions of business - marketing is the interface between production and the consumers. In computers, the interface is the facilitator (or barrier, as the case may be) between the user and the deep internals of the computer. User interfaces represent a bogus reality, created by consensus between the user and the machine, which facilitates communication between the two.

The user interface is such an important technology and scientific leap not because of its nature; the concept of a false or imaginary reality created by mutual consent for the purpose of facilitating communication or interaction between two intelligent entities is as old as the concept of the game, or the parable, or the metaphor. The significance of the user interface is more subtle - it creates a false reality which can be agreed upon between an intelligent agent and a huge library of dumb automation.

It is nothing more than a logical tool which facilitates the use of other tools. Without the simple interfaces we have created, whether graphical or mechanical, the workings of most of the modern mechanical and electronic systems we have invented would be beyond all but the most technical individuals. The interface allows nearly any human capable of logical thought and basic learning to interact with technology of arbitrary complexity - no matter how complicated the task the computer is performing, the interface can simplify it down to the most elemental levels. The user need only provide direction (hey, do this) and starting conditions (the basic data needed to start the automation) and the rest is performed by the computer - exactly as if it were another person or intellect performing the task.
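
A trivial illustration in Python: the user's whole world is one verb plus starting conditions, and the dumb automation underneath stays invisible. The function is invented for the example and assumes the Pillow imaging library is installed:

```python
from pathlib import Path

def resize_photo(path: Path, width: int) -> Path:
    """The single "verb" exposed to the user; decoding, resampling, encoding
    and file naming - the automation - all stay out of sight."""
    from PIL import Image                                   # the heavy machinery, hidden inside
    out = path.with_name(f"{path.stem}_{width}w{path.suffix}")
    with Image.open(path) as img:
        ratio = width / img.width
        img.resize((width, round(img.height * ratio))).save(out)
    return out

# The user's entire mental model: resize_photo(Path("cat.jpg"), 800)
```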

Interface is so important because it allows dumb technology to take the place of an intelligent agent without requiring that the technology understand the task. It allows humans to delegate complicated jobs to entities incapable of thought.

Pretty heavy, no?

Wednesday, February 18, 2009

LOLz.

When I was about eight years old and suffering from a particularly nasty case of mononucleosis, I spent a great deal of time at home. My doctor prohibited me from going to school on the grounds that I would die due to my hyperactive behavior.

I felt too miserable to go to school anyway - even sitting up caused pain in my back and guts, so I'd usually lie propped up against something and watch TV, play some Atari or Nintendo games when I had the strength, or, periodically, lie propped up under my computer desk and use the shelf beneath it as an impromptu desk. While down there, I would write in little spiral notebooks whatever things seemed important to an inquisitive 8-year-old mind.

One week in particular, watching an A&E "Evening at the Improv" marathon, I found myself noticing patterns in which comedians were good and which were bad - the ones that got a good audience reaction and still amused me (after 6 hours of standup) were considered "good" for the purposes of these observations.

I began to write down (admittedly childish) scholarly observations about things like the "rule of three" and what I called a "serial list joke" (three guys walk into a bar: serializing related things, then breaking the pattern by making either the third or fifth element outrageously incongruous). I wrote about surprise, self-referential comedy, prop comedy (and the violation of expectation associated with it) and pretty much everything anybody who has any intention of going into standup observes on their own. I used these principles later to great effect to amuse friends and family (when applied judiciously and not heavy-handedly - this was another of my notes).

From time to time, I'll be watching standup and catch one of these classical archetypes I noted in my childhood. A side-effect of all this observation one particularly ill week of childhood is that I now have an interest in the mechanics of comedy - when I particularly like a comedian who doesn't use these standard mechanisms for humor, I try to figure out why.

It turns out I'm not the only one at all. The mechanics of humor have been analyzed over twenty-five hundred years of western (and near eastern) writing alone - not only what we laugh at, but why we laugh at all. Reputedly (I cannot find a single citation) Aristotle said that "only the human animal laughs." This is a stock traditional introduction to mentioning that rats, dogs, chimps, bonobos, gorillas and orangutans all laugh as well. I'm further convinced that other animals have a sense of humor, including all higher predators (especially cats.) Squirrels and cats both have in common a sense of humor, mischievous pranking and shame. (This is strictly anecdotal observation, but when a squirrel acts enraged because it noticed you saw it fall off a branch and land on its face, what else do you call it?)

Laughter is called the best medicine, and I'd have to say that's only true if you don't have whooping cough or a body cavity injury. At any rate, it makes a good antidepressant in dogs. Laughing will improve a human's mood if sitcom laugh tracks are any indication - the show seems funnier, even if it's less humorous than a similar show without a laugh track. Laughter is therefore relaxing and healthy, just so long as it doesn't kill you.

Er, that's all I've got. No conclusions.

Monday, January 12, 2009

File System Stupidity

In the interest of creating a filesystem which is more versatile than the default ones packed in with OS X Server, I spent some time looking at ZFS and was amused at the absurdity of the whole thing.

Some observations:

[stolen from]

Although we'd all like Moore's Law to continue forever, quantum mechanics imposes some fundamental limits on the computation rate and information capacity of any physical device.

In particular, it has been shown that 1 kilogram of matter confined to 1 litre of space can perform at most 10^51 operations per second on at most 10^31 bits of information.

A fully populated 128-bit storage pool would contain 2^128 blocks = 2^137 bytes = 2^140 bits; therefore the minimum mass required to hold the bits would be (2^140 bits) / (10^31 bits/kg) = 136 billion kg.

To operate at the 10^31 bits/kg limit, however, the entire mass of the computer must be in the form of pure energy. By E = mc^2, the rest energy of 136 billion kg is 1.2x10^28 J.

The mass of the oceans is about 1.4x10^21 kg. It takes about 4,000 J to raise the temperature of 1 kg of water by 1 degree Celsius, and thus about 400,000 J to heat 1 kg of water from freezing to boiling. The latent heat of vaporization adds another 2 million J/kg.

Thus the energy required to boil the oceans is about 2.4x10^6 J/kg * 1.4x10^21 kg = 3.4x10^27 J. Thus, fully populating a 128-bit storage pool would, literally, require more energy than boiling the oceans.
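
Re-running their arithmetic in Python, with rounded constants (so the results differ slightly from the quoted figures):

```python
# Sanity check of the quoted numbers.
BITS_PER_KG = 1e31                         # the quantum limit quoted above
pool_bits = 2 ** 140                       # 2^137 bytes * 8 bits/byte

min_mass_kg = pool_bits / BITS_PER_KG               # ~1.4e11 kg (the quote rounds to 136 billion)
rest_energy_j = min_mass_kg * (3.0e8) ** 2          # E = mc^2, ~1.3e28 J

ocean_mass_kg = 1.4e21
heat_plus_vaporize_j_per_kg = 4.0e5 + 2.0e6         # freezing-to-boiling, plus latent heat
boil_energy_j = heat_plus_vaporize_j_per_kg * ocean_mass_kg   # ~3.4e27 J

print(f"mass needed to hold the pool: {min_mass_kg:.2e} kg")
print(f"its rest energy:              {rest_energy_j:.2e} J")
print(f"energy to boil the oceans:    {boil_energy_j:.2e} J")
```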

So they've created the first storage system which is impossible to house, let alone fill. Great. An interesting corollary is that something with half that capacity would be impossible to back up.

Moreover, the controlling hardware for something that large would require at least an order of magnitude more circuitry to do lookups, so let's assume filling anything with a hundredth of that capacity is possible, anything with a hundred thousandth of that capacity might be considered usable, anything with a millionth of that capacity might actually be affordable and backing up half of that is conceivable.
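
Following that chain of guesses in Python, reading "half of that" as half of the affordable millionth:

```python
full_pool_bytes = 2 ** 137                 # the quoted 128-bit pool, in bytes

fillable   = full_pool_bytes / 100         # "a hundredth ... possible" to fill
usable     = full_pool_bytes / 100_000     # "a hundred thousandth ... usable"
affordable = full_pool_bytes / 1_000_000   # "a millionth ... affordable"
backup     = affordable / 2                # "backing up half of that"

YOTTABYTE = 1e24                           # bytes
print(f"{backup / YOTTABYTE:.2e} yottabytes")   # ~8.7e+10, i.e. about 87 billion YB
```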

So 87 billion yottabytes might be considered a sane upper limit for the foreseeable future? I can live with that.