
Monday, August 22, 2011

Is OCR detecting the wrong thing?

After being totally disheartened with the current state of OCR and layout detection, I started thinking about the problem.

This is the sort of thing I do a lot: notice something that people have been working on for 50+ years still doesn't work, then start thinking maybe I have some (uninformed) ideas on the subject.

As usual, I've come to a potentially spurious conclusion solely backed by a bit of Googling and some logic: People are trying to detect the wrong thing.

In my mind, optical character recognition is only a tiny part of the actual problem involved in scanning a text for content. Arguably more important is layout detection - you can't read an OCR text from a three-column-and-some-tables-plus-a-center-justified-image scan if it thinks everything is text.

However, layout detection always seems to be centered around finding unbroken areas of grey (apparently, based on how it behaves) that are assumed to be entirely orthogonal to the image borders. This is obviously not usually true, and this is why people have been burning their sanity away trying to come up with really solid edge detection and deskewing algorithms - so they can make it true and then their orthogonality-based crap will work.

What if you didn't assume any orientation at all, except that the general top region of the image is upright? Obviously, this makes figuring out layout a bit harder, right?

Here's a good question: Why should it? If you're looking for straight lines, it shouldn't matter in what direction they're going - only that they're all (mostly) going in the same direction. I propose detecting the predominant angles of whitespace on the page. Yeah, whitespace.

Dig: A document is either going to be mostly left-justified or right-justified, whichever is most common for the language or format. Chinese/Japanese ideograms in newspapers let scanners get off even easier in layout detection: It's typically both left and right justified to the column.

So what's a column? Another good question. I propose a number of ways of finding what I'm going to call fuzzy whitespace.

Take half of one GREYSCALE page and flood fill it at a number of increasing tolerances. Use a color which doesn't exist on the page, or maybe an additional channel. Take the flood fill image that sits at the median of page percentage covered. If there are large holes in any one of the flood fills, remember where those are. Look for straightish holes in the flood fill; there are your lines. They're probably not right, but we don't care right now. Unless the page has really annoying sidebars, you've probably got a series of "horizontal" stripes connected to one longer "vertical" stripe. Count each by tracing a line parallel to the outside margin of the fattest stripe at 2cm intervals. Repeat at a 90 degree angle. The majority rule of the stripe count from each line wins. Unless the layout is downright weird, the more numerous broken stripes are whitespace between lines, and the largest unbroken stripes are margins.
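Just to make the hand-waving concrete, here's roughly how I'd prototype that in Python with OpenCV. Every constant in here - the tolerance steps, the seed point, the 60-pixel stand-in for "2cm" - is a guess, not a tested recipe:

```python
import cv2
import numpy as np

def fuzzy_whitespace_mask(gray_half_page, tolerances=range(4, 40, 4)):
    """Flood fill from the page margin at several tolerances and keep the
    fill whose coverage is the median of the bunch.  'gray_half_page' is a
    uint8 greyscale image; the seed point and tolerance steps are guesses."""
    h, w = gray_half_page.shape
    fills = []
    for tol in tolerances:
        mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill wants a 2px-padded mask
        cv2.floodFill(gray_half_page.copy(), mask, seedPoint=(2, 2),
                      newVal=255, loDiff=tol, upDiff=tol,
                      flags=cv2.FLOODFILL_MASK_ONLY | (255 << 8))
        filled = mask[1:-1, 1:-1]
        fills.append((filled.mean() / 255.0, filled))
    fills.sort(key=lambda t: t[0])
    return fills[len(fills) // 2][1]          # median coverage wins

def count_stripes(mask, axis, interval_px=60):
    """Trace lines parallel to one margin every 'interval_px' (my stand-in
    for "2cm") and count how many distinct whitespace runs each crosses."""
    counts = []
    for pos in range(0, mask.shape[axis], interval_px):
        line = mask[pos, :] if axis == 0 else mask[:, pos]
        transitions = np.diff((line > 0).astype(np.int8))
        counts.append(int((transitions == 1).sum()))   # runs of whitespace entered
    return counts
```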

Using your orientation discovery, take the average angle of the "line whitespace" centerlines. They should be mostly parallel anyway. Now remove any whitespace that matches those centerlines. What's left over is your blocking. Determine how "broken" that blocking is, and segment the image into "zones" with that blocking.
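A sketch of what I mean, continuing from the mask above; the Hough parameters and the 3-degree tolerance are pulled out of the air:

```python
import cv2
import numpy as np

def stripe_segments(mask):
    """Probabilistic Hough transform over the whitespace mask's edges.
    All thresholds here are placeholders, not tuned values."""
    edges = cv2.Canny(mask, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=40, maxLineGap=10)
    return [] if lines is None else [tuple(l) for l in lines[:, 0]]

def dominant_angle(segments):
    """Median direction of the detected whitespace segments, in radians."""
    angles = [np.arctan2(y2 - y1, x2 - x1) % np.pi for x1, y1, x2, y2 in segments]
    return float(np.median(angles)) if angles else 0.0

def blocking_zones(mask, segments, line_angle, tol=np.radians(3)):
    """Paint out whitespace segments that run at the line-whitespace angle,
    then label whatever is left over as coarse layout zones."""
    keep = mask.copy()
    for x1, y1, x2, y2 in segments:
        theta = np.arctan2(y2 - y1, x2 - x1) % np.pi
        if abs(theta - line_angle) < tol or abs(theta - line_angle) > np.pi - tol:
            cv2.line(keep, (int(x1), int(y1)), (int(x2), int(y2)), 0, thickness=15)
    n, labels = cv2.connectedComponents(255 - keep)
    return n - 1, labels
```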

Chances are you've got some false positives. That's why you kept the color image as well. The human eye performs OCR using color cues, why shouldn't your algorithm? Figure vs ground separation is a problem I don't claim to be clever enough to figure out, but it seems that applying a color vision model similar to the average human eye will help matters out. The AVERAGE color of stuff that sits inside one of your line chunks for each block is the font color. Everything else should be considered an aid for determining page layout. So remove everything foreground-colored and replace it with the average background color per block.
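Something like this, maybe - with the big caveat that a flat Euclidean RGB distance is a lousy stand-in for an actual color vision model, and the threshold is arbitrary:

```python
import numpy as np

def suppress_foreground(color_page, line_mask, zone_mask, dist=40.0):
    """For one layout zone: call the mean colour inside the line chunks the
    "font colour", everything else background, and paint font-coloured pixels
    over with the zone's mean background colour.  'color_page' is HxWx3 uint8;
    'line_mask' and 'zone_mask' are the masks sketched above."""
    zone = zone_mask.astype(bool)
    ink = zone & line_mask.astype(bool)
    paper = zone & ~ink
    font_color = color_page[ink].reshape(-1, 3).mean(axis=0)
    bg_color = color_page[paper].reshape(-1, 3).mean(axis=0)
    out = color_page.astype(np.float32).copy()
    close = np.linalg.norm(out - font_color, axis=-1) < dist   # crude colour distance
    out[close & zone] = bg_color
    return out.astype(np.uint8)
```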

That's all I've got so far, mostly because it's early and I've hardly had any sleep. I really don't expect anybody in the field of computer vision to read this (or anybody at all), but I'm leaving this final note for my own satisfaction: there is more to text recognition than following the same old "tried-and-true" dogshit which produces pages of text which read "amd then the nan t7`'./\g+0 his m0th24 4/={ said"

If that's EVER your output, your method is not just a little wrong, but completely the opposite of how human textual recognition works. Also, what's wrong with having OCR that trains itself? "Hey, that's clearly not what the page says, I don't see any of that stuff in my dictionary... Try again with a bit of skew, offset, hell, spawn 20 worker threads to try to figure out that block there. If all else fails, I'll drop any characters that are giving me problems into image blocks and then ask the user to define any repeating ones."
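In code, that retry loop doesn't even have to be clever. A skeleton - recognize() and in_dictionary() here are stand-ins for whatever OCR engine and word list you actually have, not any real library's API:

```python
import concurrent.futures as futures

def stubborn_ocr(block_image, recognize, in_dictionary,
                 skews=(-2, -1, 1, 2), offsets=(-3, 3)):
    """Retry loop in the spirit of the rant above.  recognize(img, skew, offset)
    returns text; in_dictionary(word) returns True/False.  Both are hypothetical."""
    def score(text):
        words = text.split()
        return sum(in_dictionary(w) for w in words) / max(len(words), 1)

    best = recognize(block_image, 0, 0)
    if score(best) > 0.8:                       # "looks like language" cut-off, a guess
        return best

    candidates = [(s, o) for s in skews for o in offsets]
    with futures.ThreadPoolExecutor(max_workers=20) as pool:   # the "20 worker threads"
        results = list(pool.map(lambda p: recognize(block_image, *p), candidates))
    # anything still unreadable after this would get cut out as an image block
    return max(results + [best], key=score)
```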

Saturday, August 21, 2010

PS3/PC Releases for the Holidays and New Year

I've composed a release list of the best info available at the moment about release dates for stuff that seems cool to me coming up in the next 5-6 months.

This was the result of about 2 hours of Googling and IGN-combing, so enjoy.

I've linked to GameTrailers where there is no obvious official site or where the official site is useless (I'm looking at you, CAPCOM.)

Shank - 08/24/10 - PS3
Castle Crashers - 08/31/10 - PS3
R.U.S.E. - 09/07/10 - PC/PS3
Spider-Man: Shattered Dimensions - 09/07/10 - PS3
Playstation Move - 09/19/10 - PS3
True Crime: Hong Kong - 09/21/10 - PC/PS3
Dead Rising 2 - 09/28/10 - PC/PS3
Guitar Hero: Warriors of Rock - 09/28/10 - PS3
Blade Kitten Part 1 - 09/29/10 - PC/PS3
Blade Kitten Part 2? - 09/29/10 - PS3
Borderlands: Claptrap's New Robot Revolution - September 2010 - PC
PixelJunk Shooter 2 - Q3 2010 - PS3
Hoard - September 2010 - PS3
Castlevania: Lords of Shadow - 10/05/10 - PS3
Sengoku Basara: Samurai Heroes - 10/12/10 - PS3
Fallout: New Vegas - 10/19/10 - PC/PS3
Star Wars: The Force Unleashed II - 10/26/10 - PC/PS3
Arcania: Gothic 4 - October 2010 - PC/Xbox 360
Kung Fu Live - October 2010 - PS3
NBA Jam - 10/05/10 - WiiWare, or an incomplete PS3 version which requires NBA Elite 11 - this is total bullshit, see below.
Assassin's Creed: Brotherhood - 11/16/10 - PC/PS3
Need for Speed Hot Pursuit - 11/16/10 - PS3
LittleBigPlanet 2 - 11/16/10 - PS3
Max Payne 3 - 2010 (uncertain) - PS3
Grim Dawn - 2010 (uncertain) - PC
Hunted: The Demon's Forge - 2010 - PC
TerRover - 2010 - PS3
Crazy Taxi - 2010 - PS3
L.A. Noire - Q4 2010 - PS3
Agent - 2010 (uncertain) - PS3
Dead Space 2 - 01/25/11 - PC/PS3
Portal 2 - 02/09/11 - PC
Bulletstorm - 02/22/11 - PC/PS3
Dragon Age 2 - 03/08/11 - PC/PS3
Red Faction: Armageddon - March 2011 - PC/PS3
Driver: San Francisco - Q1 2011 - PC/PS3
Bionic Commando Rearmed 2 - Q1 2011 (uncertain) - PS3
Arcania: Gothic 4 - Q1 2011 ("first half 2011") - PS3
F.E.A.R. 3 - forecast April 2011 - PC/PS3
Rock of Ages - Spring 2011 - PS3
From Dust - Q2 2011 - PC (check GameTrailers on this one)
Sorcery - Q2 2011 - PS3


So the deal with NBA Jam is that there's a version coming to WiiWare of the (now) classic arcade title NBA Jam that's been remastered and had some game modes added and is generally spiffy.

The problem here is that they decided to make an HD version for 360 and PS3, but EA decided with one of their classic douchebag marketing moves to require you to purchase NBA Elite 11 in order to play a stripped-down version of NBA Jam HD.

This is no doubt because the general consensus is that NBA Elite * is garbage and NBA Jam is sweet. Thus they figure on boosting sales. In all likelihood this will do more to hurt sales of NBA Jam, as people who might have bought it on the Wii if there were no HD version are now pissed at EA and won't buy it at all.

Good job, EA. Way to continually mystify the world as to how you ever became one of the only successful publishers out there.

Wednesday, March 3, 2010

OK Go

It's unlikely that any of the Internet dwellers I know has managed to miss "OK Go," but just in case:


The famous Treadmill video for "Here It Goes Again"


So now that you know the band, here's why I bring them up. They're fairly well-known for their imaginative do-it-yourself approach to producing their music videos. They (presumably with assistance) contrived this epic Rube Goldberg contraption as the video for one of two versions of their new song "This Too Shall Pass:"


OK go - "This too shall pass" from Renzo on Vimeo.


The second version, also pretty groovy, utilizes the Notre Dame marching band and a somehow non-annoying children's choir to produce a pretty epic sound at the song's crescendo.



OK Go - This Too Shall Pass from OK Go on Vimeo.



Dig it.

Friday, January 22, 2010

AI and Game Observations

It occurred to me today that I've made a number of useful observations on public forums about video games, AI and programming which I should really do something about in the future. I need a dumping ground for those observations, so why not here?



From: amicusNYCL in response to Monkeedude1212

| Because programming -IS- Logic. If you tell the program to do something at Random, it's not a very good AI. If you tell it to do the most strategically sound plan, it doesn't vary much at all.

You tell it to try to learn the rules, and make the best decision that it can.

Consider AI for chess. The best AI can beat any human because it can spend the processing power to look, say, 25 moves into the future. When the computer considers all possible moves and for each one looks at all possible next moves, next moves, etc, for 25 turns, it's going to be able to quantify which move it should make now to have the best chance at winning. When you download a chess game and you can set the difficulty, the main thing they change is how far ahead the AI is allowed to look. An "easy" AI might only look 3 moves ahead. It's been a while since I took any AI courses, but I seem to remember that the human masters like Kasparov are capable of looking ahead around 10-12 turns.

So it's not that you tell the AI to make bad decisions, you simply limit the information it has to work with. This is more equivalent to what most humans do when they make bad decisions ("I didn't think of that").
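The "how far ahead" knob he's describing is literally one integer in a plain minimax/negamax search. A bare-bones sketch, with the game-specific callbacks left as placeholders rather than any particular engine's API:

```python
def negamax(state, depth, color, legal_moves, apply_move, evaluate):
    """Depth-limited negamax.  'depth' is the difficulty knob described above;
    legal_moves/apply_move/evaluate are hypothetical game-specific callbacks."""
    moves = legal_moves(state, color)
    if depth == 0 or not moves:
        return color * evaluate(state), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        child = apply_move(state, move)
        score, _ = negamax(child, depth - 1, -color, legal_moves, apply_move, evaluate)
        score = -score
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

# "easy" AI: negamax(board, 3, +1, ...);  a stronger one just gets a bigger depth
```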



From: me

Indeed.

In fact, feeding bogus data to the AI is one of the realistic ways to limit, say, a racing game's agents - if they don't see the post in front of them because they aren't spending enough time per frame watching the road and are instead eyeballing their opponent, they're going to crash, just like any human. So you simulate that by using player proximity and the "erraticness" of the other opponents to model distraction and modulate the AI's awareness of dynamic obstacles and hazards.
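A toy version of that distraction model - the constants are invented, but the shape of it is the point:

```python
import random

def obstacle_awareness(base_skill, player_distance, rival_erraticness):
    """The closer and more erratic the competition, the less often the agent
    notices a dynamic obstacle this frame.  All constants are made up."""
    distraction = rival_erraticness * max(0.0, 1.0 - player_distance / 50.0)
    return max(0.05, base_skill - 0.6 * distraction)

def notices_obstacle(awareness):
    """Roll once per hazard per frame; a miss means the AI 'didn't see the post'."""
    return random.random() < awareness
```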



From: Monkeedude1212

A computer can mimic the logic of a human being, yes.
But it can't copy our illogical decisions. Because our illogical decisions are just based on poor logic.
You can program a computer to make a mistake - but it's not the same.


From: me

Our illogical behavior is largely deterministic as well.

We tend to behave illogically only in response to specific stimuli (fear, anger, hunger, lust) or when our system is under strain (fatigue, extreme hunger or thirst, neurological stress), nearly all of which can be simulated effectively enough for a game simulation.

So now we examine the character of our illogical behavior - we prioritize actions inappropriately, mistake one input for another of a similar kind, suffer from reduced reflexes or recognition time, respond with an inappropriate reaction to a familiar stimulus, fail to suppress responses which we would ordinarily not allow in ourselves due to social strictures or personal beliefs, or simply fail to notice things in an appropriate timeframe.

What about that couldn't be simulated with an extremely simple system of defaults with a laundry list of pre-programmed failure behaviors? Illogic may be more complicated to simulate in a limited domain of actions than logic - the elevator can go up, down, or stop, but it's hard to make it change its mind when it's tired of going up - but illogic is really easy to simulate when the expected domain of AI activity includes nearly any action. This is the sort of condition found in most sandbox games - you expect the pedestrians and enemies to behave in an almost random fashion because you expect humans "in the wild" to be unpredictable. This means that anything short of obviously programmatic behavior or obvious illogical *group* behaviors will seem fairly realistic to the player - especially if the AI isn't just "instanced," appearing and disappearing with a short library of functions, but instead is programmed with agendas, no matter how simple (go from residential A to commercial B, append grocery bag model to arms, use carrying walk animation, return to residential A).
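To put numbers on that "laundry list" idea, here's the kind of ten-line selector I mean - the agent fields and probabilities are all made up for illustration:

```python
import random

# the "laundry list" of canned failure behaviours, keyed by what gets degraded
FAILURE_BEHAVIOURS = {
    "misprioritize":  lambda agent: agent.do(agent.goals[-1]),          # wrong goal first
    "misidentify":    lambda agent: agent.react_to(agent.similar_stimulus()),
    "slow_reflex":    lambda agent: agent.delay(0.4),
    "overreact":      lambda agent: agent.do("panic"),
    "fail_to_notice": lambda agent: None,
}

def act(agent, stimuli):
    """Default behaviour with a chance of an 'illogical' failure that scales
    with fear/anger/hunger/fatigue.  'agent' is a hypothetical game object."""
    strain = min(1.0, agent.fear + agent.anger + agent.hunger + agent.fatigue)
    if random.random() < 0.15 + 0.5 * strain:           # made-up numbers
        random.choice(list(FAILURE_BEHAVIOURS.values()))(agent)
    else:
        agent.do(agent.best_action(stimuli))             # the boring, logical default
```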

The AI in Ultima 7 was praised as exceedingly lifelike because its characters had agendas and day/night schedules, and would respond to stimuli like violence, the appearance of a monster, etc. in a variety of ways depending on their character role. This sort of realism (if not actually passing whatever Turing-test-like metric you employ when observing it) will serve to satisfy the requirements of suspension of disbelief on the part of the player.

One of the best things about video games is the potential to surprise the player with unexpected behaviors. The first Quake bots, even though following fairly simple nodegraphs, would continually surprise players by behaving in a fashion seen as "unpredictable" simply because the player themselves had not been taking the most efficient routes between "pickups."

The first "learning" Unreal bots would actually remember routes that bots saw players take and append nodes and traversal instructions so that it could follow or use that route in the future. As a result, you think you can evade a bot by leaping out a window they never go near then are alarmed to find that not only does it follow you out, it uses the same route to escape you in the next round.

The emergent behaviors of The Sims have been pleasing and surprising gamers for nearly 10 years now, all based on fairly simple wants/needs systems along with some basic stimulus response. A Sim is less intelligent than the average cockroach, and yet Sims are still capable of behaviors which seem satisfyingly realistic, at least in the short term. If a Sim is too tired to make a full meal, it might just grab a bag of snacks from the refrigerator. They might fall asleep on the couch instead of going to bed. These are all failure states in an ideal AI's daily routine, and yet they give the human touch - with very little computational cost.
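That fallback behavior is about this complicated, computationally speaking. A made-up wants/needs sketch - not how The Sims actually implements it, just the shape of the idea:

```python
def pick_action(sim):
    """If the ideal action costs more energy than the Sim has, degrade to a
    cheaper one.  The fields, actions and costs are invented for illustration."""
    actions = [                      # (need satisfied, energy cost, action)
        ("hunger", 30, "cook full meal"),
        ("hunger", 5,  "grab bag of snacks"),
        ("sleep",  20, "go to bed"),
        ("sleep",  2,  "fall asleep on couch"),
    ]
    worst_need = max(sim.needs, key=sim.needs.get)       # most urgent need wins
    affordable = [a for a in actions if a[0] == worst_need and a[1] <= sim.energy]
    # take the most "complete" action the Sim can still afford, else do nothing
    return max(affordable, key=lambda a: a[1])[2] if affordable else "idle"
```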

The point here is that AI doesn't need to be perfectly human to be humanlike, and it's far from impossible to simulate illogical behavior - you just have to program some chaos into the system by which the AI selects actions.



From: Lumpy

Yes and no. Back in the day when I was writing Quake bots, there were things you could do to always beat the AI. The AI can't pick out patterns that are luring it into a trap. WE are a long LONG way from having AI that can think about the situation and make a decision on its own...

"Player 4 has done this 4 times trying to lead me down that corridor, what the hell is he doing? I'm gonna sit and wait or try and circle around to see what is up."

AI can't make a conscious decision that is not preprogrammed.


From: me

| AI can't make a conscious decision that is not preprogrammed.

Definitely. The job of the AI designer is to come up with a set of default behaviors and reactions which make the AI appear to be doing so.

You may not be able to make an AI figure out intent, but you can train them to recognize erratic motion - players in a pure deathmatch game don't often stop or double back quickly without an obvious reason, so something like that could trigger the bot to go into "cautious mode" and fire, say, a grenade to the entrance of that corridor then try to circle around. About 90% of the time it'd look like the bot was paranoid, but the few times it worked, the victim would be completely convinced.
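Something along these lines - the thresholds and the fields on bot and target are invented, but that's the whole trick:

```python
def update_suspicion(bot, target):
    """Erratic-motion heuristic: sudden stops and quick reversals without an
    obvious pickup nearby bump a suspicion counter; past a threshold the bot
    goes cautious.  All fields and thresholds here are hypothetical."""
    reversed_course = target.velocity.dot(target.prev_velocity) < 0
    stopped = target.speed < 0.1 * target.prev_speed
    if (reversed_course or stopped) and not target.near_pickup:
        bot.suspicion += 1
    else:
        bot.suspicion = max(0, bot.suspicion - 0.25)     # suspicion cools off
    if bot.suspicion >= 3:
        bot.set_mode("cautious")    # e.g. lob a grenade at the corridor mouth, flank
```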

I agree that we're a long way from being able to solve the problem of actual rational AI. I think we first have to figure out how logical frameworks and learning work before we can begin tackling making computers think and reason like people.

Fortunately, it's often a lot easier to make them LOOK like they're thinking and reasoning like people.


I regret that I can't find the old post I made about multithreaded MMO servers and load distribution or the one about dividing work within AI systems. They must be too old to show up on Google anymore, and Slashdot doesn't seem to have a good way of browsing through your own comment history. I'll have to keep that in mind when designing the EG comment system.

Monday, September 21, 2009

Diablo III and the Art of Procrastination

I originally started composing the following as a response to the article at Gamasutra, being a slightly less limp than usual Chris Remo interview with two gentlemen from Blizzard about Diablo III's development process.

I realized that this was less a response than a screed, so I'll just put it here so that the only 2-3 people in the world who care about my opinion on the Diablo series can read it.


The interview highlights the development process at Blizzard for Diablo III as one of perpetual revision, where they decide to release it not on any sort of sane schedule but on a process of "hey, this isn't awesome enough" and then they add more crap.



It must be nice to have that level of freedom to constantly revise the game whenever the whim takes them. It most certainly yields a more colorful game with significant replayability. Diablo II still has the ability to get its hooks into people 9 years later; its gameplay still visceral, its sounds still evocative, its dated graphics still retaining a kind of timeless quality no doubt provided for by the right mix of cartoony and dark.

However, I wonder how much of a sacrifice a process like that is. Is there some basic book of design and plot description they work with from the beginning, or are they stuck trying to nail all of the "awesome" together into a coherent story line?

There was much about Diablo II that seemed like it was added to the game to pave the way for some new features which were never developed, like the assassin lady in Kurast with the interesting back story. With a little more focus at some point during development, the sort of unfinished empty feel yielded by their environments could be fleshed out into an emotional experience more significant to the player than just "awesome."

With a solid plot and some emotional investment added to the already powerful experiences (by a mechanism other than the truly gripping cutscenes), the Diablo series could not help but be improved.

Thursday, July 2, 2009

Thinking about the feasibility of Project Natal

I'm still thinking more about the feasibility of Project Natal.

Assuming you have a stereo camera setup with one infrared sensor and one camera with a high dynamic range and a very high sample rate, even if the resolution is piss poor, then I can imagine depth sensing working quite well. If the system has sufficient breadth (which, by the pictures, it probably does not) then it should be able to get oblique enough views to correct for accidental occlusion of body parts and the inevitable depth errors when, for instance, a man wearing no shirt in a room whose lighting is either too bright or too dim for his skin tone is doing things with his hands in front of his chest - or he's wearing a shirt with emissive/reflective characteristics similar to his skin (in both the infrared and visible spectra) - both of which should be considered worst-case scenarios for the purpose of resolving individual body parts.

The infrared emission and reflection characteristics of human flesh are fairly distinct from most natural fiber clothing that people wear - unfortunately, some synthetics show up as almost the exact same "color" of infrared as the various common human skin tones. Unless the infrared sensor has a good gamut in the infrared range, this could produce serious problems with discernment. Nudity, one would suppose, would present all sorts of problems. A terrible scenario would probably be someone's pasty-skinned child trying to play a game while coming in from the sun for a break in their Lycra swimsuit and thick sunscreen and no shirt - especially if they're wearing baggy shorts. Then you just have a confusing mass of light and dark spots slopping around on the screen of the camera in the occasional vague shape of a human child.

Assuming that the RGB camera data mixes well with the depth data from the infrared - which is the best-case scenario that Microsoft is no doubt depending on - then the system could do a sort of depth sampling which would end up, even in less than ideal situations, a bit like that intentionally distorted Radiohead LIDAR cloud which Thom Yorke turns into (only much lower resolution). If that quality of data (or near that) can be extracted, and there's enough skew between the depth camera and the RGB camera, then you should be able to discern hands in front of the body with some depth accuracy - in which case, the device will work quite well as a motion sensor, assuming that the libraries which pick out which body part is which at least return consistent, even if not always accurate, results. If game developers have to develop their own libraries for discerning anatomical characteristics, then the system's going to flop hard, and I think MS knows this, so we'll probably start seeing XNA updates by the end of the year with the beginnings of motion control libraries in them.

Moving on to facial recognition, it's glitchy at best, but with the additional depth data, it'll help reduce the false positives quite a bit. Assuming some intelligent use of "motion trails" with ordinary face recognition and depth data, some slightly more than trivial object and face recognition might be possible; like "Hey, who's that back on the couch? Is that Dave?" If the Natal system can ever pull that off, even occasionally hilariously incorrectly, then I'll be plenty impressed.

Facial recognition leads to the next problem, however, which is their claim of emotional recognition. Assuming you're using a gestalt of methods - erratic movements sensed by the depth cameras, facial shifts, basic emotional grammars on the face, vocal tics and other vocal cues - it could probably manage to detect stress and amusement, but beyond that, I have absolutely no confidence in the ability of the Natal sensor to pick up emotion. Something like that ridiculous Vitality Sensor that Nintendo is putting out would augment the data adequately to give me greater confidence in its results, but it's hard enough to read emotion using a few hundred thousand years of behavioral evolution, let alone trying to do it with technology less than 30 years old.

This brings us to the other point: if you're paying attention to voices, you want more than emotion sensing - so we talk about speech. Speech recognition in something like Natal is really only going to work in absolutely ideal situations unless their multi-array mics are sufficient in number and precision to yield rough positional data which can be cross-referenced using the depth and camera data. If this can be done, then "source filtration" of the audio can be done so that you're only trying to process sound from one location.

I could see the developers at Microsoft creating a sort of data structure which represents a fuzzy cloud of positional data which they identify as an "individual" whether it's a human or a cat or a roomba. This entity would become an input source, at least from the perspectives of developers working with the technology for games. From this source, you could pull speech samples (and a confidence number), video data, generalized motion information, and a low-data-rate history of what they've been doing for the past few seconds to help make prediction and resolution of actions a little easier.
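Pure speculation about what that input source might look like to a developer - none of this is a published Microsoft API, just the shape of the data structure I'm imagining:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TrackedIndividual:
    """Hypothetical fuzzy "individual" input source as described above."""
    entity_id: int
    centroid: Tuple[float, float, float]               # metres, camera space
    extent: Tuple[float, float, float]                  # rough bounding volume
    speech_text: str = ""
    speech_confidence: float = 0.0                      # 0..1
    motion_vector: Tuple[float, float, float] = (0.0, 0.0, 0.0)
    history: List[Tuple[float, str]] = field(default_factory=list)  # (time, coarse action)

    def recent_actions(self, seconds: float, now: float) -> List[str]:
        """The low-data-rate 'what has it been doing lately' view for games."""
        return [action for t, action in self.history if now - t <= seconds]
```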

As much as remembering inputs is important for, say, an arcade fighting game, it will be even more important in a Natal-enabled game, because these games will not only have to interpret what the player is doing at each instant, they need to resolve the "intent" behind the player's current motions - in other words, what they're trying to do. If somebody's wimpy albino kid (from my previous example) tries to throw a punch, that shit is going to be all over the place. If the kid's exact motion shows up on the screen, instead of the desired result (that being a nice punch), then the kid will become frustrated before long and not want to play. The same is true of the fat geek with no muscle tone. Nobody except true athletes really wants to hear an absolutely truthful representation of their physical prowess, and in a video game they're going to want to see an idealized representation of their intent. So you average the samples from a fixed amount of time, see that the kid is vigorously moving his fist forward (or at least his arm), and you make the character throw a punch in the rough direction of whatever's closest to the destination of the punch, relative to the character's position and orientation.
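The averaging step itself is trivial; something like this sketch, where the "vigorous enough" speed threshold is a number I just made up:

```python
import numpy as np

def resolve_punch(hand_positions, timestamps, min_speed=1.5):
    """Average a short window of wobbly hand samples into a single punch
    direction.  'hand_positions' is an (N, 3) array in metres; 'timestamps'
    are seconds.  The 1.5 m/s threshold is a guess."""
    p = np.asarray(hand_positions, dtype=float)
    t = np.asarray(timestamps, dtype=float)
    displacement = p[-1] - p[0]
    speed = np.linalg.norm(displacement) / max(t[-1] - t[0], 1e-3)
    if speed < min_speed:
        return None                        # not vigorous enough to count as a punch
    return displacement / np.linalg.norm(displacement)   # game snaps this to a target
```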

So, all told, Natal promises that it knows what you're doing, how you're doing it, when you did it, what you felt while you did it who you are (at least within their predefined "people list"), what your facial expression was when you were doing it, and anything you may have already said or have been saying when you did it. That's a pretty damned tall order. If they can pull it off with SUFFICIENT precision to make it a not-frustrating experience, then it's going to be THE must-have technology of this generation, but given the ridiculous amount of processing power that something like this will require, I don't see how Microsoft can hope to deliver it with any degree of accuracy on the Xbox 360.

http://pc.watch.impress.co.jp/

With a "theoretical peak performance" of 115 and change gigaflops on the main processor and maybe a spare hundred or so gigaflops from the GPU unit (assuming the Xbox's rendering system supports that sort of tampering with the pipeline), you're looking at (after the hypervisor) probably about 150 gigaflops to work with. Assuming you're dealing with a composite 320x240 depth image with, say, 40 levels of depth at 60 fps for the depth camera even assuming you're using 8-bit monochrome depth samples you're looking at 11mflops for the video positional data. Assuming "reflection" where it looks back on older data, let's call that about 80mflops. Still under a gigaflop, not bad. Let's now do positional sound data at 40khz from (I'll assume) 4 microphones. Assuming that getting good positional data from it takes about 1000 floating point operations per sample (a probably conservative estimate) we're talking about 160mflops for that. Let's be charitable and say that with intelligent downsampling and some analog filtering we can reduce that by a factor of 8 to 20mflops. We're up to a gigaflop now.

Now let's reexamine our estimates for the floating point power of the Xbox 360. Let's now realize that the system probably can't operate at anything more than about 60% of its theoretical peak for any length of time and realize that 40% of those resources are just going to be tied up with polling devices, the various live internet shit it does for XBL and the hypervisor which allows it to slide in that sweet XBL blade whenever the menu is hit. We're really looking at about 50gflops then for the whole system.

Great, so the motion tracking and positional audio only takes 1/50th of the available power, right? What's the problem? Well, those are the easy part, computationally speaking. Speech recognition is going to take another 3-4 gflops if it's trying to handle multiple source audio. Well, let's be kind and say speech and facial and emotional recognition together will use 4gflops total, bringing up the thing to just under 8% of the total available processor power. That seems perfectly reasonable, actually! Seems feasible for use in games. Unfortunately, the Xbox 360 has only 512mb of RAM... And the depth data alone uncompressed at 20 depth layers 320x240x60fps one byte per voxel with an 8-second history takes up 720mb of ram.
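And the arithmetic behind that depth-history figure, under the same assumptions (and hand-waving MB versus MiB):

```python
# 320x240, 20 depth layers, one byte per voxel, 60 fps, 8 seconds of history
width, height, layers = 320, 240, 20
fps, history_seconds = 60, 8
total_bytes = width * height * layers * fps * history_seconds
print(total_bytes / 2**20)   # ~703 MiB, i.e. roughly the "720mb" quoted above
```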

There's a basic law in computer science which states that the faster and more time-efficient an algorithm is, the less memory-efficient it's going to be - and the inverse is also true. I can't imagine that really accurate, high-framerate data which aggregates all of the positional, motion, gestural, facial, emotional and speech data is going to use less than, say, 20% of the available memory and 30% of the available processor time on a 360, even with great technical wizardry on their part - meaning that we're talking about game developers trying to create games with 30% less processor time and 20% less memory - and they have to manage input/wait cycles between their game engine's input layer and the Natal layer.

All of this adds up to a very risky proposition for Microsoft, especially if there is a large amount of R&D money tied up into it and they intend to do a full-scale launch of it before having a significant number of A-list studios signed on to do titles which require it - a feat which seems almost impossible without MS throwing a lot of M$ at them. All of which means that Microsoft really has a lot riding on this product if they're actually trying to sell the product - and after the demos they've already staked their credibility on it being a successful launch.

2010 is going to be one motherfucker of an interesting year for gaming.

Tuesday, May 5, 2009

Science Fiction and the Perfect Alien

I had written some notes on a post of this sort some time ago and completely forgotten about it. I felt like blogging about something today, but my topic was completely stupid so I destroyed it. A recent conversation about this in Golden Teriyaki made me decide to resurrect this post, however.

In popular sci-fi there is a sort of overarching notion that aliens who have the highest technologies would be pretty much perfect - geniuses who never make mistakes, civilized so that they have no use for war, abhorring violence, etc. Even if they do not satisfy all of those traits, there is still the belief present that at least one of them should be true. A friend of mine scoffed at the idea of an alien species who had crazy advanced physics to bridge the vast reaches of space succumbing to the common cold. There's any number of reasons something like that could happen, and I might blarg about that later, but I'm more concerned with the notion that a technologically advanced civilization is basically perfect.

I think this sort of thinking is a remnant of the overwhelming classicism of European and derived cultures. The idea that the ancients held great knowledge and it was thrown away in folly or lost to tragedy suggests that humanity had some sort of reversion to the base or the primitive. This isn't actually unfounded, as some sort of severe stunting of development has happened in most civilizations (the Dark Ages of Europe, the total technological stagnation of the Qing Dynasty, the Maya collapse, the stagnation and collapse of the Ottoman Empire, etc.) where in some cases vast amounts of technological and natural knowledge have been lost. The idea is that a civilization which was sufficiently, well, civilized to have avoided this sort of collapse and subsequent stagnation would surely be superior to us. However, the ubiquity of such an experience of collapse and stagnation should lead us to believe that it is either integral to the "human condition" or integral to any civilization whose scale reaches a certain critical mass.

This inferiority complex is further fed by the idea that, due to these failings of human society, we are flawed beings who are destined to succumb to tragedy over and over by destroying our geniuses, peacemakers, great works and knowledge through sheer folly. I think we're probably not the only species with that problem. This also speaks to the ideals of European culture - the people who record our history are those people who have a love of knowledge and a hatred for disorder - and so they project those traits onto their civilization, which naturally agrees because they're all united by a common respect for authority and longing for legitimacy.

All of this conspires to paint a picture of a civilization which has, in the exact same short time we've had as a species (say 150,000 years) starting at the exact same time, reached perfection that remains forever out of our reach because we're flawed and they're not.

Considering, of course, that life on Earth originated 3.5 billion years ago and the universe is about 13 billion years old, this doesn't seem like too ridiculous a proposition, until you realize that such assumptions are based on the flawed supposition that Earth and its sun are the ideal conditions for the formation of life.

We find ourselves located in a fairly large supercluster with a reasonably high density level. Consider the metric expansion of space, signifying that everything is moving apart from everything else at a constantly accelerating rate. The fact that life on Earth is only about 3.5 billion years old signifies that it takes at least that long for a planet with identical conditions and materials to ours to become hospitable to life. However, since the universe is not locally isotropic, we know that not every star in the universe had identical conditions. We must conclude that other planets may have formed much earlier than our own and become hospitable to life perhaps as many as (to pick a number out of a hat) 8 billion years ago. This means that we've pushed the earliest possible appearance of a sentient race back to only 8.5 billion years after the formation of the universe, which is about 5 billion years sooner than us.

Now let's attack the idea of nonviolence. We have not appreciably managed to become less violent or evil in 150 thousand years of human development. Consider that, as a species, we have wiped out any and all plants and animals which pose a credible threat to our species on the entire earth - at least those which were large enough to see prior to the invention of optics. Our population is sufficiently large that it would take nothing short of what insurance companies like to call an "act of God" to destroy our entire race. Still, the most deadly threat to the human race on the planet is the human race itself - we still kill anybody who gets in our way or figure out some other way to marginalize them until they're no longer a threat, and preferably helpless. Consider, if you will, the possibility that we do this not maladaptively, no matter how harmful it may be to our technological and social development, but as a survival trait. Everything which threatened our food supply, offspring, or our tribes themselves was destroyed by mankind before we even figured out how to effectively record that information. Anytime it became difficult for us to live, our greatest technological developments were those designed to destroy. Barring the existence of an unusually less hostile planet for the development of an intelligence, I put forth that this pattern probably follows in any species which evolves as the pinnacle predator.

Now let us consider the instances of animal intelligence. Nearly every animal on earth which shows what we define as the hallmarks of intelligence is predatory and has been for millions of years.

Certainly, there are intelligent herbivores; however, advanced problem-solving skills are seldom necessary when your primary food source is found absolutely anywhere you walk. Some scavengers display intelligent behaviors as well, but there are virtually no higher animal species who are strict scavengers - all of them will hunt or browse in the absence of carrion. So let us consider that intelligence can only form in the presence of predation pressure - either as the predator or the prey. A prey animal who becomes intelligent enough to do so will inevitably decide to kill any truly dangerous predators. As a classic example, dolphins have been observed to kill sharks who come too close to the pod by ramming them with their bony noses - painful for the dolphin, but fatal to the shark, which has no rigid bone structure to protect its internal organs from bruising and rupturing. Herd animals use violence routinely to discourage predators, and the act of supporting the herd organization relies on violence or the threat of it in order to establish leadership. Believing that this association of intellect and aggression is only common to Earth species would suggest that all creatures of Earth are cursed with original sin. Not only would any scientist scoff at that, so would nearly any theologian.

So we've dispensed with the notion that we were here first, dispensed with the idea that we're in any way special, and dispensed with the idea that intelligence or technological development prevents violence - at least in any truly general terms. We've now got to look at the idea of societal superiority. Assume that an alien species successfully subsumes, at least at the official level, the role of aggression in the ordering of its society. We have the classic sci-fi example, courtesy of the various writers of Star Trek, of the planet Vulcan. The people of Vulcan, also known as Vulcans, are guided solely by a love of logic and the suppression of their own aggressive instincts. This requires that a sufficient majority of the population decides, coldly and logically, that the ones who won't get on the boat can either fuck off or die. To trigger Godwin's Law prematurely, let us now remind the reader of the tenets of National Socialism in attempting to form a single perfect world culture based on the ideals of collaboration and equality. That didn't work out too well either. Let us consider that the organization of the government of these theoretical organisms was sufficient to resist corruption at the highest levels for long enough (say, 5 or 6 generations) to successfully order the society and put an end to irrational behavior done for selfish means, outside of the occasional deviant who is "re-educated" just like the ostensibly admirable Vulcans of Star Trek did. Assuming that this becomes ingrained adequately into their social makeup and such a society decided that it was, in fact, logical to travel the stars in search of other intelligent aliens, then it's possible that this one exceptional species would be everything that sci-fi loves and holds dear. However, this one species would be a fluke, as nothing we have seen in world society has ever convinced us that such a pure totalitarian society could ever form in any mixed pot of genetic heritage.

Let us therefore choose to agree that humans are unexceptional in their overall wickedness, love of perversity, self-hatred, destructiveness, xenophobia and so on. Given that, and assuming we are not exceptional (still sticking, as before, to the Copernican principle), most if not all intelligent alien species should have handicaps similar to ours when developing their societies. Accepting this, we can move ahead to attacking the notion of scientific perfection - the idea that if they've got enough physics to traverse spacetime faster than light, or at least subjectively appear to, then they very well ought to be able to solve X problem, where X is pretty much anything we could think of to stop them from doing whatever they wanted to do.

A simple observation of history shows us that scientific advancement never progresses evenly or uniformly, or even along sensible lines, regularly stopping to backfill any implications it may have missed. If this seems like an inaccurate appraisal to you, I encourage you to watch James Burke's Connections series for a sort of luxury cruise through the development of human technology. Science always leaves these gaps and blind spots which older sci-fi and the good old Popular Science magazine would have had us believe would be solved by the futuristic year 2000; things like artificial intelligence, holographic displays, truly ubiquitous computing, robotic labor, cheap energy and free transportation, and the life of leisure and freedom from disease and inconvenience that comes with all of the above. The reasons science misses these things are as diverse as the things themselves, but typically boil down to effects ancillary to the actual research. Things are discovered accidentally while researching other things, and the significance of those things is never understood until some bright guy trying to find a more efficient way of doing something else entirely finally works out the true importance of the earlier discovery. Things are almost discovered, but for some simple accidental happenstance which results in the destruction of some work or the overlooking of some minor side effect. Science, in short, despite all the orderliness and documentation with which it parades through recent history, is just plain disorganized. Considering thought itself is an apparently nondeterministic, nonlinear, chaotic process involving semi-random convolutions inside a chemical electric matrix, it's no wonder that we have trouble making creativity - the most evanescent and capricious of all kinds of thought - an orderly process.

It is also worth considering that most science and nearly all invention happens in response to a problem or a nagging gap in knowledge - an itch whose only scratcher is methodical effort to work it out. Some problems have never been solved because they simply aren't that irritating, or there was no money in doing so. Other problems are simply too hard to solve, at least with existing knowledge and technology, or there is a body of conventional wisdom which says they are too hard and so nobody dares attack them. All of this suggests that the ordinary pattern of discovery and development will always manifest these voids where people either didn't care to, didn't think of, or feared to investigate a certain topic.

Alien races might also, due to eccentricities in brain function and developmental conditions, have enormous blind spots in their abilities and knowledge. A lack of visual sensory organs may leave light - and by extension, much of the electromagnetic spectrum - wholly unexplored for millennia, only discovered due to measurable effects in materials they were working with at the time. Think of all the technology we developed before even figuring out how light worked - mostly in Newton's time. Think of how long it was before we discovered radio waves. Without all of this investigation we still could have developed rocketry, life support apparatus, a form of cryogenic preservation or hibernation, and even computers - electricity was observed and successfully used well before we knew what it was and how it worked. This merely requires a lack of sensory equipment to perceive light - a lack present in a great many species on earth - and would be far more likely to develop among creatures who occupy a planet where their sun is not visible due to cloud cover or an ice layer (as on Europa). Consider also that the division of the human brain into right brain, left brain, and hindbrain shapes the process of our thought quite significantly - so much so that people who have had their brains divided surgically undergo radical personality shifts. Individuals who possess only a single functioning hemisphere show some exceptional behavioral and cognitive traits which are not observable in humans with more fully functional brains. A lack of division in an alien species' brain might result in a very fundamentally different approach to learning, discovery and thought.

Finally, we've got the other idea, which is that the aliens wouldn't want anything we have to offer if they're so advanced that they can whiz across space and time all hurly-burly. Given the (not only possible but probable) cognitive and scientific differences between two species noted above, it is exceedingly likely that we have done any number of things that another intelligent race has simply never thought of, or done things better in a number of ways that they have not. After all, the idea of getting off of Earth was never a very high-priority one, as Earth is (as young-earth creationists and anti-Copernicans have noted throughout history) particularly hospitable for human life.

Suppose we have an intelligent organism for whom their planet has never been particularly comfortable. This theoretical species developed on the very margins of livability and thrived in a series of "pocket" environments on a much larger planet, where they developed intelligence out of the simple fact that all of those who did not were gradually eliminated by a steadily worsening biosphere. They had perhaps a couple of million years to go from tiny lizard to spacefaring race, but the hostility of their environment conspired to produce for them all of the resources they needed to get out of their harsh environments and into a more stable artificial space they created for themselves. A species like this would have little loyalty to their planet and little desire to stay put. They would be constantly looking for ways to expand and respond to reproductive pressure. If such a species was constantly searching for a way to improve their environment, some of the earliest developments would be in chemistry and demolitions - looking to expand the survivable regions - and biology to improve the stability of their environments. These organisms might have developed gunpowder before the plow, rocketry before the Caesarean section, or metallurgy before dyeing.

Now consider they find themselves travelling through space, millions of miles from somewhere habitable, fleeing their final doom, or scouting for a new place for their people to live - something easily convertible to their livable space. They happen across Earth by some fluke of universal cartography and find an almost ideal environment in some deep caves or at the bottom of the sea, etc. They discover that humans are already living somewhere else on Earth, but we do not compete directly for the same resources, as their technologies are based on elements which are in plentiful supply, and their ideal spaces are in regions unlivable to us. However, they've had little enough leisure time, and they find themselves on a world so rich for their people that they have nothing to do with their time - a true utopia. They discover our art, our recreation, our games and songs and music. What a trade, right? We get interstellar travel, they get paradise. For all we know, the disaster which made their world unlivable might have made it ideal for humans. We could trade. This is astronomically unlikely, but it gives the lie to the idea that there is nothing that humans do which all other races would not already have done better. Remember our Copernican principle - there's no reason to believe that we're unusually useless, either.

That's not to say that the aliens who come to conquer us would immediately understand what we had to offer, either. We may have been overlooked a number of times by intelligent species already, simply because the things we have didn't seem like anything they'd want, or they failed to recognize what they wanted because it takes such a different form with us. For that matter, we may completely fail to recognize the value in the things we already have. Consider the geothermal vents at the bottom of the ocean - certain kinds of chemical processes may only be possible under extreme pressure and heat, while others require a great deal of cold and pressure. Consider that our gravitational pull, though great, may be much weaker than that of most comparable planets in our portion of the galaxy. Consider that our sun is not really that dangerous but puts out ample energy to render at least four of our planets and multiple moons potentially habitable - or at least fixer-upper opportunities.

There's a way forward for alien portrayal in science fiction, however: flawed races who have unfortunately forgotten more in the process of advancing their technology than they could really afford - a total lack of knowledge about how to survive without high technology. Consider a species utterly missing a cultural heritage due to emergency deletion of information considered nonvital in order to preserve the necessary science to keep their colony ship running. Consider a species so new and in such a hurry that they really have no antiquity, so short-lived that they have only a fleeting glimpse of a sense of cultural identity. To species like this, we would be the wise elder race, as the ancient Greeks were to the European Classicists - thought very highly of because they could not remember how far they had progressed, treasured because they represent a past that has either been lost or stolen from them. The things that keep us from flinging ourselves across the voids of space may be those which an alien may treasure most - a bottomless past and a positive future.