Very early on, while meeting with our lawyer, another entrepreneur was present to give us some advice. He said:

During your development, you’ll come across other opportunities. Ignore them! Stick to what you know; stick to your business plan.

That was pretty obvious, even to us. Of course you stick to what you know.

Mistakes #4, #5, & #6: Not only did we fail to follow that advice, but we failed to follow it on three separate occasions. We had “Ferret”, a mid-size search engine tied to Perl that was basically a cheap replacement for the expensive full-text search engines. We had “InSite”, which was Ferret tied to a web spider so that, for a small monthly fee, customers could have their own web sites searched (remember, this was 1996 — before Google). And we had “Coordinator”, a web-based calendar that was never completed. In the end, we may have made a few hundred dollars in exchange for hundreds of man-hours of effort.

If you believe in where you’re going, then go there. A company like Google can afford to give its employees 20% of their time to work on any interesting project; they can afford to take risks. For a small start-up, unnecessary risk is death. You’re betting the farm on every project you take on. Remember that!

With no day jobs to pay our rent and no income from the product, we needed some way to live. So we went to our friends and family and asked them to invest in this great little idea we had. I will be forever thankful to those who had the confidence in us to put money into our venture. Unfortunately, I’m sure most of them will never be thankful for making that investment, since they all lost most of it in the end.

All told, we raised about $100,000. We rented a small office and set up salary payments.

Mistake #2: This is simply not enough money to build a product, let alone launch it when complete.

Mistake #3: I was determined to do this like a proper business. Duh! When you’re poor, you do things on the cheap. For example: all of us founders put cash in as well, which was eventually paid back out to us as salary, minus what the government took as tax. It would have been much smarter to simply record a “credit” for each contribution and then pay out that much less in salary, tax-free.

After a few months, we started taking on some contract programming jobs. We earned enough to cover the monthly rent and some small salaries, but it also cut into the amount of time we had to work on our game. It wasn’t what we wanted to do, but we didn’t have many options by that time. Our investors’ money was spent and we had almost nothing to show for it. I still think about our “angels” now and then. If you remember that child on the corner of time, I still remember that I owe some nickels and dimes.

It started with a discussion among friends about a play-by-mail game. Some companies were updating to use email as the communication channel, but the style of game play was still turn-based. Remember, the World Wide Web was still pretty new at this point; Mosaic Communications Corp had just released Netscape v1.0 and it was not a viable front-end for interactive game play.

Once a week, the group of us would get together and discuss the project and what we had accomplished during the week. We still had full-time day jobs, so progress was somewhat slow.

Mistake #1: It was too big. We had grand ideas for what a finished product should be able to do and worked towards that goal on whichever piece seemed interesting or important on any given day. Instead, we should have catalogued those features, narrowed them down to a minimum list for v1.0, ranked those in order of importance, and then worked on each in turn. Yes, it had to sing & dance, but it didn’t have to do so in twelve different languages and with thirty-eight styles of music right from day one.

In addition to the game plan, we were also working on a business plan. We were naive, but not clueless. As a service, we would give the front-end software away for free and charge a small monthly amount to play a game. With hundreds or thousands of players per game and the ability to run as many games in parallel as we wanted, there was some very real upside potential. All we needed was to get it working, and to do so before someone else beat us to market, so four of us proceeded to quit our day jobs and work on it full time.

Remember when you were a teenager and you knew everything? You didn’t think you knew everything. You knew you knew everything. Your parents were idiots and anybody over 30 was just too old-world to have a clue. (If you’re a teenager right now, I’m sorry to burst your bubble, but you’re hopelessly naive and everybody older than 30 knows it.) Eventually you got older, looked back, and realized how foolish you were.

The problem is that we don’t seem to realize that this attitude isn’t limited to teenagers! Everybody of all ages thinks they know pretty much all they need to know and everybody 10 years older knows how naive those younger people are. So as I look back 10 years on the mistakes made by those know-it-all computer geniuses who formed Verisim, I’ll try to keep in mind that I still have a lot to learn even now.

Wayback Machine

 

We were fresh out of university: a group of shit-hot electrical & computer engineers who could design and build a computer system all the way from doping the silicon to make transistors up to the database back-end behind a game. There was nothing we couldn’t do! Well… nothing except business. And management. And leadership. And project planning. And finance. And… You get the picture.

Oh, and these minor issues aside, no one had any experience creating a large software system or coordinating a group of five people for over a year. The truth is, we were doomed from the start. I still think the idea was a good one for the time. I think it’s still a good idea today and would love to build it… if I were financially independent and could pay the salaries of five top-notch software designers for a few years. I could handle all those other problems today, because now I know everything!

About 10 years ago, some friends and I decided to start our own business. We were all computer geeks and gamers in some sense, so we decided we would make money doing two of the things we loved most. Verisim was born! (Please note, the Verisim of 10 years ago has nothing to do with the company that currently owns the verisim.com domain name.) Our goal was to create a very large, continuous-time, multi-player network game.

Would You Like To Play A Game?

In short, the idea involved creating a world (“game”) in which hundreds or thousands (“very large”, “multi-player”) of characters would go about their actions whether you were actually connected to the game or not (“continuous time”). Network connectivity and CPU power 10 years ago were not what they are today, and this design solved several problems because it did not rely heavily on either. Like the old play-by-mail games, players would give their characters a series of directions as well as “standing orders” indicating what to do in certain situations. A front-end program would send directions out and display the results whenever the player chose to look. Connectivity could be accomplished via direct socket, HTTP posts, or email attachments.

Verisim never accomplished that goal and eventually closed when the company that bought us started having financial difficulties and informed us that we’d have to go without pay for probably the better part of the next year.

Over the next few months, I’m going to conduct a review of this endeavor based on hindsight and an additional 10 years of accumulated wisdom. I hope you’ll come back and read more.

I decided the other day that the server in my basement was insufficient for the needs of this site and my other site, Background Exposure (my photography gallery and blog). It’s not that it was too much trouble — I do system administration as part of my day job. No, the problem was unreliability. For one, if the machine had a problem, I might not notice until the next day and might not be able to fix it for several more. Also, though my cable modem has sufficient bandwidth, it has only a semi-static IP address, which changes every few months. Lastly, I’m going to be moving to Switzerland and certainly won’t have the same sort of bandwidth there as I have here in Canada. So off I went in search of a web hosting company.

  • 1and1: This company was recommended by a friend of mine for their good service. I registered backgroundexposure.com through them and they were generally quite helpful. Unfortunately, there were two problems with their web hosting, one of them a deal-breaker. First of all, their really inexpensive hosting packages don’t include MySQL database access, which both of my sites require; the $10/mo package is the cheapest one that does. The big problem, though, is that each database can be a maximum of 100MB in size, even though the space I’m allocated is 100GB. I’d like to think my Linux Wiki will eventually grow larger than 100MB.


  • AN Hosting: (also known as “midphase.com”) For $7/mo, I can get an account here that includes all the same stuff as 1and1, and the databases can be any size up to the maximum account size. Sure, the space is limited to only 50GB, but I think that’s plenty for the next little while. They really strive for good support, and achieve it… mostly. Their response time is amazing; I’ll get a reply to an email question within minutes. MINUTES! If only they read my entire email before replying. I get the feeling they’re used to certain questions, and as soon as they see a message resembling something they’ve seen hundreds of times before, they send the standard answer without actually reading my questions carefully. Sometimes it took three emails to get all the answers I was looking for.


  • GlobAt: I originally registered the riverworth.com domain through them because they were the cheapest I could find. Exactly two days after I signed up with AN Hosting, they sent me an offer of pretty much the same package as AN-H but with twice the storage space, the same monthly cost, and a special promotion that would have me paying only half the total amount for the first year! What a deal! The only thing that really irked me about the package was that I was going to be upgraded to it automatically if I didn’t opt out! The nerve! I pay $3/yr for a domain name registration and then they’re going to bump me up to $7/mo if I don’t tell them “no”! STRIKE ONE and STRIKE TWO! Still, the offer was really good and, since I don’t make any money off of these sites, very tempting. So I sent some email inquiries. Of the first two, only one got a reply. I asked some more questions. No reply. I asked again. Still no reply. STRIKE THREE! You’re outta here!

So in the end, I went with AN-Hosting/MidPhase. Other than their support staff not really answering all my questions, I’m quite happy with them. Their responses are near-instantaneous and they seem happy to help me out. For example, when I asked if I could change my userid to my standard “bcwhite”, they replied that they would have to recreate the account, which would lose all the data. I said that was fine (I hadn’t put any data on it) and the change was made. The whole thing took less than an hour!

… time passes …

It’s been a little over a month now and my experience with midphase/an-hosting is absolutely terrible!!! Let’s recap:

  • Three days after I moved my Background Exposure landscape photography website to this account, their server died. I’m not sure they even noticed until I sent them email. A few hours later I was told there was some possibly unrecoverable error and, sorry, there was no backup. Apparently the machine had only been alive three weeks, so they hadn’t bothered to start the backups. It seems they’ve never heard of the Bathtub Curve.
  • About one month after starting with them, and after I had moved the Riverworth Systems pages (including the Linux wiki) over, their server died again, and again I’m not sure they even noticed until I told them via email. I certainly never received any notification.
  • Five days later, after being off-line on a business trip, I found out that they still hadn’t restored the machine. Instead, they had moved my entire account to a new server and a new IP address without telling me. That’s right… not one email. Some things weren’t working because I’d used the IP address in places back when DNS was not fully set up. And guess what… they didn’t bother to restore the databases!
  • A day after that, the database was restored. But wait… it wasn’t up to date. It was over two weeks old! When I asked about their backup policy, they said it was weekly. Yeah, sure.
  • Apparently there is a server notification list that announces these problems and so forth. Nobody ever told me about it, and they didn’t subscribe me to it automatically. I was told I should sign up for the new server’s list, but guess what… there is no list yet for the new server! Their super-responsive techs don’t even bother to check whether the advice they’re dispensing is valid. I think I’m going to cry.

So to recap… they’ve directly caused my websites to be down for 6 days over the course of 1 month, and I still had to restore some things by hand. They claim 99.9% uptime, so I can only assume they’re confident they won’t have any other problems for the next 16 years or so (6 days of downtime is 0.1% of roughly 16.5 years)! I’m still with them, not because I want to be, but because I’ve already paid for a year and don’t want the trouble of moving. I’m also doing my own daily backups!

If you’re considering MidPhase or AN-Hosting for your sites, run! Don’t walk… Run!

I’d like to hear what other people think of the hosting companies they use.

There are three reasons people do things for you:

  1. It helps them.
  2. It helps you.
  3. It’s their job.

The higher up that list you can move the motivation, the better the results. I think this is pretty obvious once you see it. We’re all like that. We’re generally much more motivated by our own hobbies than by helping out friends, and more motivated helping out friends than digging ditches just because we’re paid to.

The question is how to make use of this knowledge. When it comes to hiring people, especially in creative arts like software programming, you’ll always get your best work from someone who loves what they do. I know some of you are saying “Duh! I already know that.” Hey, I said it was obvious.

When I’m interviewing someone for a software position, I always ask what kind of computer setup they have at home. Then I ask what their personal projects are. If they do programming for fun, then they’ll find their job up there under reason #1 and will do their best work. I’m like this. I’ve never had to work for anyone… I just play the way they ask me to. It’s a sweet deal if you can get it.

Once they’re hired, the real challenge is to keep the job interesting and new and, most of all, fun. You want them to look forward to coming into the office every day. To do this, you have to have a pleasant environment, good equipment, opportunities to learn new things, and challenging tasks. If you let a developer get bogged down in constant maintenance with no chance to create something new (remember, software design is an art, not a science), then eventually they’ll fall down to reason #3 and either be much less productive or leave for greener pastures, and that is costly.

Replacing a top-notch software developer is probably the most costly activity a small company will undertake. It’s not the HR costs of advertising the position, or the cost of an interviewer’s time, or the relocation costs to bring someone in from out of town. No, by far the biggest cost is the knowledge that the departing person takes with them. There is no way to transfer that knowledge, so once it’s gone, it has to be learned anew by someone else.

Treat your artists like gold; they’re worth their weight in it.

Here’s a quick little tip I learned…

On my corporate firewall, I have rules that block all outgoing HTTP and SMTP traffic unless it is coming from known servers. Those servers run a Squid web proxy and an Exim mail relay. Blocking these ports for all workstations ensures that all traffic must first go through one of those servers (a rough sketch of such rules follows the list below). Why?

  • Blocking the HTTP port ensures that everybody is using the web cache. This ensures the best performance for the user and the company as a whole.
  • Blocking the SMTP port protects against the spread of mail worms. No, it doesn’t protect us — it protects others against us in case somebody in the company becomes infected. Three times over the past few years, this has stopped the spread of a worm that somebody inadvertently brought inside the corporate LAN. Since the worms all tried to use their own internal SMTP engines instead of relaying through the server, none of them could make any outgoing connections to the Internet.
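
For the curious, here’s roughly what such a policy looks like with iptables on a Linux firewall. This is only a sketch: the interface name (eth0 as the external side) and the addresses are made up, so substitute your own proxy and relay hosts.

iptables -A FORWARD -o eth0 -p tcp --dport 80 -s 10.0.0.10 -j ACCEPT   # the Squid proxy may go out on HTTP
iptables -A FORWARD -o eth0 -p tcp --dport 25 -s 10.0.0.11 -j ACCEPT   # the Exim relay may go out on SMTP
iptables -A FORWARD -o eth0 -p tcp --dport 80 -j REJECT                # everyone else: no direct web...
iptables -A FORWARD -o eth0 -p tcp --dport 25 -j REJECT                # ...and no direct mail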

Just some thoughts. Comments welcome! Also, check out this article on spam and responsible behavior.

The Extreme Programming paradigm is a very interesting concept. I don’t agree with all of it and some of the things I do agree with are not always easy to achieve. One of the ideas I’ve absolutely fallen in love with, though, is the idea of integrated self tests, or “unit tests” as they’re sometimes called.

I have my own personal library of C++ code. It’s posted as free software under “starlib” on SourceForge, if you want to check it out… just don’t expect any real documentation. This software is an interesting study, I think, because it was originally written way back before the STL was available, and the design has evolved as it has been used for different projects and operating systems. But I digress…

Until 2004, STAR had no built-in tests. When the idea was presented to me and I wanted to use that library in a new project, I decided to do yet another design upgrade and add support for self-tests. This was not easy! STAR is primarily an I/O abstraction library and testing I/O is difficult because at some point the operating system gets involved and then you simply can’t interfere. So I went to the base “tIoPath” class (a thin wrapper around open, close, read, write, etc.) and added hooks to all the functions there. Once you attach to the path, all reads and writes instead go to memory buffers from which the test case can eject and inject data. The test case can thus emulate what the operating system needs to do to exercise the code being tested. You can also set a callback function to intercept almost every I/O system call there is (including “read” and “write” if you’d rather interface directly instead of using intermediate memory buffers).
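
To give a flavour of the idea, here is a stripped-down sketch of a hook and a test case built on it. The names below are invented for illustration; they are not the real tIoPath interface.

#include <algorithm>
#include <cassert>
#include <cstddef>
#include <string>

// Stand-in for the hook that the base I/O class exposes: reads and writes
// are redirected to in-memory buffers that the test case controls.
class MemoryIoHook {
public:
    void injectInput(const std::string& data) { input_ += data; }      // test case -> code under test
    const std::string& capturedOutput() const { return output_; }      // code under test -> test case

    // Called in place of the real read()/write() system calls.
    std::size_t read(char* buf, std::size_t max) {
        std::size_t n = std::min(max, input_.size());
        input_.copy(buf, n);
        input_.erase(0, n);
        return n;
    }
    std::size_t write(const char* buf, std::size_t len) { output_.append(buf, len); return len; }

private:
    std::string input_, output_;
};

// A trivial "class under test": read whatever is available and echo it back.
void echoAll(MemoryIoHook& io) {
    char buf[4];                                    // deliberately tiny, to force short reads
    for (std::size_t n; (n = io.read(buf, sizeof buf)) > 0; )
        io.write(buf, n);
}

int main() {
    MemoryIoHook io;
    io.injectInput("hello, ");
    io.injectInput("world");                        // data arrives in pieces, like real network traffic
    echoAll(io);
    assert(io.capturedOutput() == "hello, world");
    return 0;
}

The point is that the test case plays the role of the operating system: it decides exactly how much data arrives, and when, which is what makes the awkward corner cases reproducible.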

All this took a couple of weeks for the initial version, with numerous improvements and extensions over the next two years. It was a large piece of work. The benefits from it, though, are ten-fold! Now, any time I want to write a new I/O class, I can quickly and easily add test cases that will exercise all of its code paths. I can inject bad data, cause data packets to get broken in odd places, and force write calls to only accept odd amounts of data at one time. These are all things that can happen when talking to the OS, but usually don’t, which makes them difficult to test any other way.

I say these test cases are easy, but that’s a relative term. They’re easy compared to what it would be like trying to write them without the underlying hooks in the I/O base class. They’re still work. However, all this needs to be tested anyway, so as a programmer I’d still have to find ways to create these situations by hand in order to verify the code I’ve written. That takes time, too. In the beginning, it would have been faster to hand-test things, but now the support is broad enough that I think writing test-case code is actually faster. And that is just the first time you test! The real benefit of built-in tests is that there is almost zero cost to running them a second, third, fourth, or ninety-ninth time!

The key to self-testing is to make sure that all tests (or, at least, a significant subset) are run every time a build is done. If you change something, there is no need to go through the tedious hand testing again; the test cases written during the initial testing are still active and will validate your changes at no additional cost! That’s free work! You simply cannot afford not to have built-in self tests (apologies for the double negative).

Some people argue that there should be a built-in test for every single method in a class and that all big methods should be broken down into smaller, individually-testable methods. I don’t agree with that. I believe a class should be tested completely at the API level, or “public interface”. If you’ve completely tested the public interface, then you’ve tested all the code of the class. If not, what is the purpose of that extra, untested code? Sure, you can test private parts if you wish; I just don’t think it’s critical. Plus, the API is less likely to change than the internal implementation, so there are fewer changes required in the test code itself as maintenance is done.

If you’re still not sure exactly what I mean, try it. If you’re running a Unix-like system, try this:

cvs -d :pserver:starlib.cvs.sf.net:/cvsroot/starlib co star
cd star
./configure
make

At the end of the build, you’ll see it run its test suite. Better than that, it’s easy to incorporate this into any program built using the library. You build your application, type ./myapp --test, and presto! You see all your tests run and have confidence that your app is ready to go. If you take the app to a new host, just run the tests again to make sure that all required libraries are available; you don’t have to worry that it will fail at some unusual condition after running for an hour.
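
Wiring up that kind of flag is simple. The sketch below is only illustrative: runAllSelfTests() stands in for whatever test registry your own library provides, not an actual starlib function.

#include <cstdio>
#include <cstring>

// Hypothetical stand-in for the library's test registry: run every
// registered test case and report whether they all passed.
bool runAllSelfTests() {
    std::puts("self tests: OK");
    return true;
}

int main(int argc, char** argv) {
    // "--test" switches the binary into self-test mode instead of normal operation.
    if (argc > 1 && std::strcmp(argv[1], "--test") == 0)
        return runAllSelfTests() ? 0 : 1;     // a non-zero exit status fails the build script

    // ...normal application behaviour...
    std::puts("running normally");
    return 0;
}

Hooking “./myapp --test” into the final step of the build is what makes all the later re-runs effectively free.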

When it comes time to ship and you don’t want the bloat of the test cases, you can rebuild it all without the tests. It’s that easy!

I’m not saying all this to advocate my library. Far from it. It’s just an example of how useful this type of system can be. Once you’ve tried it, you’ll never want to go back to hand-testing again.

To summarize… built-in self tests are a great way to validate your code as it’s written. The real benefit, though, comes when you’re doing maintenance! Whether it be bug fixes or improvements, the built-in tests will ensure that your code is always working as it’s supposed to, and they will do so for free.

Security is a chain only as strong as its weakest link.

When placing servers on the internet, there are two obvious ways to secure them:

  1. Place a tightly-secured machine directly on the internet.
  2. Place a possibly-secured machine behind a firewall connected to the internet and forward one or more ports from the firewall to the server.

Which is better?

To begin with, that’s a loaded question. Which is “better” depends on your criteria for “better” and varies from person to person and task to task. My preference is #1 (direct), though many people disagree with me. Let me explain why.

  • Option #1 is more reliable. There is only one piece of hardware and no extra network interconnection. Some argue that the firewall can be a small embedded-system box that is extremely reliable and that the network in between is just a cable, but the fact remains that the two-box setup cannot be more reliable than the single machine on its own.
  • Option #1 is easier to configure. There is no need to deal with NAT, port forwarding, multiple IP addresses, etc., etc.
  • Option #1 is cheaper. There is less hardware and it uses less space. If you’re co-locating the server in an ISP’s rack, this matters even more. Plus, rack-mount firewalls don’t come at the price of the cheap, small home versions.
  • Option #1 is more secure! This is the point I get into arguments about, so let me elaborate…

The biggest argument for the multiple-box solution is that a simple, no-nonsense firewall can completely block all traffic except the traffic you really want going to your server. It means that when new security flaws are found in Windoze, you don’t worry about them because the affected service is already blocked by your firewall. True, but that’s equally true for the single-machine solution. A single, standalone machine should have only those services running that you are actually using. As such, it doesn’t matter that 13 new remote-code-execution exploits were found in Windows XP file sharing during the last week, because you don’t have it enabled anyway! In fact, on any decently administered server, the biggest security hole is in the services you have open on purpose, because those are the services being provided.
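
To make that concrete, here is a rough, hypothetical ruleset for a standalone Linux server that exists only to serve a public web site. The addresses are invented and your own service list will differ.

iptables -P INPUT DROP                                                # default: drop everything inbound
iptables -A INPUT -i lo -j ACCEPT                                     # local loopback traffic is fine
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT      # replies to connections we started
iptables -A INPUT -p tcp --dport 80 -j ACCEPT                         # the one service we intentionally provide
iptables -A INPUT -p tcp --dport 22 -s 198.51.100.7 -j ACCEPT         # admin SSH, from a single trusted address

Everything not explicitly provided is simply closed, which is exactly the posture a separate firewall would have given you anyway.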

And this leads me to why I say that a single stand-alone server is actually more secure than a server behind a proper firewall… Trust.

As soon as Option #2 is in place and a server is secured behind a firewall, people and administrators tend to trust that server because it’s protected. That means that it’s often not as tightly secured in itself and sometimes has access to other hosts that it really doesn’t need just because it’s convenient. If you put two servers behind the same firewall, each has unrestricted (by the network) access to the other. And if you put the server inside the corporate firewall, then there is no protection between that server and every other machine on the network.

Why is that important? Remember, this server is doing something for the outside world. That means some traffic is being intentionally forwarded from the “Internet At Large” to it. What happens when there is a security hole in the service being provided? Yes, that machine is compromised, but it’s worse than that. Suddenly, the exploited machine becomes a gateway for an attacker to access everything behind the firewall. That is a Bad Thing™! You had a nice, big, strong chain of security with one weaker link: the service being provided. That link breaks, and your chain disintegrates. With Option #1, yes, the machine is still compromised, but because it’s a machine on the internet, it’s not trusted any more than any other machine on the internet. It cannot be used as a stepping stone to more sensitive information within your network. That is why it’s more secure.

There is one way I see that Option #2 is “better”, and that is flexibility. The firewall can be programmed to do load balancing between multiple servers and so forth. If you need that type of thing, then you know what you need. If you don’t, but still want to go the server-plus-firewall route, then you really should have one firewall per server (good luck convincing your bean counters of that), and each machine must still be secured as if it were open to unrestricted attack. And never, ever, ever, ever, ever put that server inside the corporate LAN.

Happy Hallowe’en!
