Oshiro Remake for iPad underway

Hey guys!  I’ve been meaning to start the remake of Oshiro in my free time for a long while.  I got inspired to tackle it at the beginning of this year, after seeing a couple of friends having a lot of fun playing the old web version.  But I’d been stuck in a holding pattern because I didn’t know what framework/technology to use.  LoomSDK looked interesting with its live code reload and ease of deployment, and trying out new, bleeding-edge platforms always excites me.  But when I got real about what I’m doing with this project (getting something in the store eventually, not just a fun pet project), I decided I shouldn’t take a risk on an unproven, young framework.  So I went with Cocos2D-JS, which has been enhanced with some new tools (Cocos IDE and Cocos Studio).  To be honest, I don’t know exactly what those tools can do yet, but their existence tells me the framework is mature and still well supported.  And I just need to get a move on :)

To start, I threw a placeholder tileset together just so I have something to work with.  My first goal is to get a single-player puzzle level prototype working, then get to the look and feel after that.  Here’s the placeholder tileset I’m rolling with:

Placeholder tileset for Oshiro for iPad

Posted in Development | Tagged , | Leave a comment

New puzzle game underway!

Don’t have a whole lot of specifics yet, but Rick and I dusted off a puzzle game prototype that had been sitting in his digital drawers for a few years, until a recent chain of events led to its rediscovery.  And now we’re going to make it come alive on an iPad near you.  We’re tentatively calling it: Take Us Home!

Here’s a random sketch from today’s character concepts:


Take us home! Character sketch

 

Posted in Development | 3 Comments

Couchbase vs. DynamoDB for Free-To-Play Games

Perry and I used to joke about which would get released first: FableLabs’ next game or Couchbase 2.0.  And yes, he won :)  But that does mean I get the option to use the new version to power my next game.  Besides key operational improvements, 2.0 also added several features that were missing in a side-by-side comparison with other document-store choices like MongoDB.  Back in 2011, it seemed like a no-brainer that we would upgrade.  But after spending a year and a half with a live deployment of Membase 1.7/1.8, I’m finding good reasons to use Amazon’s new DynamoDB instead.

Reason 1: Growing memory usage.

The Couchbase cluster needs to keep the metadata of every single key in memory, even if those values are not in the working set.  Here’s how the memory usage broke down for our live cluster:

Total users: 5 million
Keys in the cluster: 300 million
Average key size: 20 bytes
Metadata per key: 120 bytes
Total copies: 2 (original + 1 replica)

The metadata alone came out to 78GB for us – larger than the actual data in the working set!  And all of it must remain in cluster memory at all times.  We ran a cluster of 8 m2.xlarge servers, and metadata ate up 60% of the 131GB of cluster memory.

We probably could have collated more user data into fewer keys to get below our average of 30 k/v pairs per player.  But the point is that as the game grew, so did the memory requirement – regardless of the working set – because we couldn’t just delete old, inactive players who hadn’t logged in for a year!  Animal Party had a stabilized player base of about 300k monthly actives, but we still needed enough memory for 5 million players’ metadata.  Yikes.

If you want more detail on calculating Couchbase memory usage, check out: http://www.couchbase.com/docs/couchbase-manual-1.8/couchbase-bestpractice-sizing-ram.html
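That sizing formula boils down to keys × (key size + metadata per key) × copies; plugging in our numbers reproduces the figure above.  A quick sketch:

```python
# Rough Couchbase 1.x metadata RAM estimate for our cluster.
keys = 300_000_000        # total keys in the cluster
key_size = 20             # average key size in bytes
meta_per_key = 120        # metadata overhead per key copy, in bytes
copies = 2                # original + 1 replica

metadata_bytes = keys * (key_size + meta_per_key) * copies
print(round(metadata_bytes / 1024**3))  # -> 78 (GB)
```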

Reason 2: Recovery

The inability to perform online compacting in 1.x was a real issue for us, especially when servers had to be restarted.  Without compacting, databases take increasingly long to warm up after a reboot, and at our size that meant several hours of downtime.  The auto-compacting in 2.0 should reduce this, but the warm-up time will still be a problem the next time AWS goes into a tizzy.  Granted, no one yet knows how the DynamoDB service will hold up during a future AWS outage.  But for a small team like ours, I’d rather put the onus on Amazon’s engineers than on us in a recovery situation.

After doing a bit more research, it turns out these two issues are not exclusive to Couchbase.  Several other NoSQL solutions have similar problems.

What about DynamoDB?  Judging from its spec, it should allow a game to age gracefully past its peak.  You can dial down the access rate as concurrent users drop, and the amount you pay for additional storage as data grows is very small compared to the additional memory needed to hold it all in a NoSQL database.  Increasing or decreasing DynamoDB’s provisioned throughput takes about 10 minutes, so it’s also possible to run a script that ramps capacity up and down to match the daily traffic cycle – something that is challenging to set up with any other solution.
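A sketch of that kind of script – the capacity numbers, peak hours, and table name here are made-up examples, not our real values:

```python
def target_read_capacity(hour_utc, base=200, peak=1000):
    """Pick provisioned read units for the hour: peak capacity during
    an evening traffic spike, base capacity the rest of the day."""
    return peak if 17 <= hour_utc <= 23 else base

# Applying it with boto would look roughly like this (untested sketch;
# 'players' is a hypothetical table name):
#
#   from boto.dynamodb import connect_to_region
#   table = connect_to_region('us-east-1').get_table('players')
#   table.update_throughput(read_units=target_read_capacity(hour),
#                           write_units=100)
```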

There are some ways around the aging issue with Couchbase.  Old data can be identified and pulled out of the cluster into a storage system suited to archiving.  When a user tries to retrieve old data, the system pulls it out of the archive and restores it into Couchbase.  But if we’re going down that route, we might as well consider in-memory databases like VoltDB that provide transactions and SQL support.
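A minimal sketch of that read-through archiving pattern – plain dicts stand in for the Couchbase cluster and the archive store here:

```python
def read_through(key, hot, archive):
    """Look up a key in the hot store; on a miss, fall back to the
    archive and restore the value into the hot store for next time."""
    value = hot.get(key)
    if value is None:
        value = archive.get(key)
        if value is not None:
            hot[key] = value  # restore into the live cluster
    return value

# Example: a lapsed player's data lives only in the archive until touched.
hot, archive = {}, {"player:42": {"level": 9}}
print(read_through("player:42", hot, archive))  # -> {'level': 9}
print("player:42" in hot)                       # -> True
```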

There have been plenty of success stories about scaling up quickly with NoSQL – OMGPOP with Draw Something, for example – but there isn’t as much discussion about managing data in the late stage of a product’s life cycle.  Zynga certainly has a lot of knowledge and proprietary solutions on this issue (they use Couchbase as well – one of the earliest adopters, and a contributor to the technology), but it’s something indie studios will have to tackle as we go.

Posted in Development | Tagged , , | Leave a comment

If you are unable to find attached iOS device on Flash Builder 4.7 Beta 2…

Hey guys!

We at FableLabs have been doing mobile development with Adobe AIR and our AS3 codebase on my Windows machine.  The new FB 4.7 Beta 2 and Project Monocle have been working out really well, exceeding our expectations, and I’ll do a write-up on them when I finally get a breather.  However, I ran into a problem today getting iOS devices recognized by FB.  After burning a few hours on it, I finally figured it out… and I hope none of y’all will have to go through the same problem :)

When you do “Debug over USB” and FB tells you that it doesn’t see the device AND you swear that AIR 3.4 is configured, latest iTunes is running, cable’s connected, iPad’s powered up, and sanity pills have been taken with the proper dosage, execute this command line tool to see what the underlying problem is:

<Flash Builder 4.7 Program Files>\eclipse\plugins\com.adobe.flash.compiler_4.7.0.348297\AIRSDK\lib\aot\bin\iOSBin\idb.exe -devices

(Note: If you are not using the default Flex SDK that comes with FB (for example, if you are using the AIR 3.5 Beta), find the command under your chosen SDK’s directory structure)

If it works, you should get some output like the following:

List of attached devices:
Handle  DeviceClass     DeviceUUID              DeviceName
   2    iPad            1234567890abcdef...     My Rad iPad

However, when I ran it, I got an error saying “The procedure entry point sqlite3_wal_checkpoint could not be located in the dynamic link library SQLite3.dll” and telling me to check my iTunes installation.  I’m guessing most people won’t hit this, but if your iTunes installation is as broken as mine, download SQLite3.dll from here and drop it into the same directory as idb.exe.  Once you can get the command to run correctly, FB should be able to recognize your precious iPad/iPhone.

Enjoy!

Posted in Development | Tagged , , , | Leave a comment

Upgrading Membase 1.7.2 to Couchbase 1.8.1, provisioned IOPS for EBS

I just completed an upgrade of our prod Membase cluster from 1.7.2 to Couchbase 1.8.1 community edition, which was made available recently.  Since I was planning to do the upgrade by taking all the nodes down and updating all the server software – which would incur downtime anyway – I figured I would also try out the newly available “EBS with provisioned IOPS”.  Things went well for the most part; however, there is one key step in the upgrade that was not covered in Couchbase’s documentation.

If you are deploying your Couchbase server on a cloud service like EC2, you have likely changed your server settings so it uses a DNS name rather than a self-reported IP address (see: http://www.couchbase.com/docs/couchbase-manual-1.8/couchbase-bestpractice-cloud-ip.html).  If that is part of your setup, you also have to make the same change to the database upgrade script that converts your data to the 1.8.1 format.  Here are the steps I used on my boxes running the Amazon Linux AMI:

  1. Of course, backup the server.  You never know what’s going to happen in one of these major upgrades.
  2. rpm -e membase-server
  3. INSTALL_DONT_START_SERVER=1 INSTALL_DONT_AUTO_UPGRADE=1 \
    INSTALL_UPGRADE_CONFIG_DIR=/opt/membase/var/lib/membase/config \
    rpm -i couchbase-server-community_x86_64_1.8.1.rpm
  4. vim /opt/couchbase/bin/cbupgrade
  5. Find “127.0.0.1” in the file, and replace it with your custom DNS name
  6. vim /opt/couchbase/bin/couchbase-server
  7. Find the following and add in your server name:
    ...
    -ns_server config_path "\"/opt/couchbase/etc/couchbase/static_config\"" \
    -name ns_1@YOUR_DNS_HERE \
    -ns_server pidfile "\"$PIDFILE\"" \
    ...
  8. Do a dry-run of the upgrade:
    /opt/couchbase/bin/cbupgrade -c /opt/membase/var/lib/membase/config -n
  9. If it all works, run it again without the -n option; it should complete very quickly (took no more than a few seconds for me)
  10. I found myself needing to reboot the box after the upgrade.  Simply starting Couchbase right after the upgrade didn’t bring the server up properly.
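Steps 5 and 7 are both the same kind of substitution.  Here’s a self-contained demo of it on a throwaway file (db1.example.com is a hypothetical DNS name – use your node’s real one, and back up the real files before editing them):

```shell
NODE_DNS="db1.example.com"   # hypothetical; substitute your node's DNS
DEMO=$(mktemp)
# A sample line like the one in couchbase-server / cbupgrade:
printf -- '-name ns_1@127.0.0.1 \\\n' > "$DEMO"
sed -i "s/127\.0\.0\.1/$NODE_DNS/g" "$DEMO"
cat "$DEMO"                  # -> -name ns_1@db1.example.com \
rm -f "$DEMO"
```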

After that, it’s back to waiting for the server to finish warming up.  I’m getting much better warm-up times with the new EBS with provisioned IOPS.  No surprise there.  Knowing how often disk ends up being the bottleneck (as with just about any database under the sun), I wouldn’t set up any future Couchbase nodes without this puppy.

Posted in Development | Tagged , , , | Leave a comment

Tim Schafer, Double Fine to crowd-source a new adventure game

Okay, I admit, I am a Tim Schafer fan.  If you know what we do at FableLabs, it should be no surprise that I love to see good stories in a game.  And Schafer has produced some of the most beautiful and story-rich graphic adventure games of the past.  He now turns to Kickstarter to fund his next “modern age” point-and-click adventure game.  Do yourself a favor and check it out!

http://www.kickstarter.com/projects/66710809/double-fine-adventure

On the KS page, they point out that “even something as ‘simple’ as an Xbox LIVE Arcade title can cost upwards of two or three million dollars.  For disc-based games, it can be over ten times that amount.”  This is something I mentioned in my other post about the rise of game clones in the social/freemium space.  Traditional games are expensive to make, and developers have to finish all the content by release time, because players don’t continue to download updates to content and game mechanics each time they play.

Yes, downloadable content and things like Steam updates are slowly starting to change that situation, but it is not the same as freemium for one major reason.  If I pay $20 up front, I need to know there will be $20 worth of content ready for me.  But if I start a game for free, I don’t mind if it only has three weeks’ worth of content – I just get to see how the game evolves as I continue to play.  So instead of having no revenue stream until the entire game is finished, freemium games can start earning revenue at a much earlier stage.

Crowd-sourcing, however, is giving game developers another viable way to fund-raise through the dev cycle.  A few indie games have been funded and eventually released through KS (e.g. No Time To Explain), but Double Fine just proved (this morning!) that crowd-sourcing can do a lot more.  Their original pledge goal of $400k is rather small for a studio-quality game, but they already hit $700k in just over 9 hours.  Obviously, having Tim Schafer as the lead makes a night-and-day difference (to the point where they didn’t even need to reveal any info or screenshots of the game being made), but this reinforces two of my existing beliefs:

1. Story driven, click-adventure games are viable today

The recent success of Machinarium and Sword & Sworcery EP, and the wild funding success of Double Fine, show that there is a demand for adventure games.  Their audience is somewhat different from the mainstream FPS, RTS, or MMORPG crowd, but developers are finding new ways to reach those players.  We’re also seeing fewer adventure games focused on challenging puzzles, and more focus on making sure puzzles don’t keep players from progressing through the plot.

2. Studios are finding new paths to funding and revenue outside of the old developer-publisher relationship

Whether it’s freemium, crowd-sourcing, or episodic releases, developers are finding new ways to get it done without relying on a publisher.  I think this bodes well for everybody, because it will allow more courageous and out-of-the-box ideas to see the light of day.

Can’t wait to see how much momentum Tim Schafer and Double Fine will generate from this KS project.

Posted in Gaming Bizdev, Theoretical Thoughts | Leave a comment

gevent compatible Memcache client

I have been looking for a memcache client that plays well with gevent, and I stumbled upon one today:

https://github.com/esnme/ultramemcache

It’s written and maintained by the good folks at ESN.me.  They built Battlelog, the social network for Battlefield 3, using gevent and this memcache client.  They also released a gevent-compatible MySQL driver and a few other interesting python projects.

I’ll be doing load/stress testing with various clients and setups for flask/gevent in the coming weeks, and I’ll post my findings here.
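For reference, this is the kind of baseline app I’ll be pounding on – a minimal flask-on-gevent setup, not a tuned configuration (the port and route are arbitrary):

```python
from gevent import monkey
monkey.patch_all()  # patch stdlib for cooperative IO; must run first

from flask import Flask
from gevent.pywsgi import WSGIServer  # gevent's WSGI server

app = Flask(__name__)

@app.route("/ping")
def ping():
    return "pong"

# To actually serve it under gevent:
# WSGIServer(("0.0.0.0", 8000), app).serve_forever()
```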

Posted in Development | Tagged , | 4 Comments

Game Design: Rise of the Clones

There has been a recent flurry of media coverage on big game companies releasing clones of games developed by smaller, indie studios.  Here are a couple of claims made by two studios against Zynga:

http://kotaku.com/5879046/zynga-totally-rips-off-tiny-tower
http://venturebeat.com/2012/01/29/buffalo-studios-blasts-zynga-for-copying-bingo-blitz-social-game/

And here is a more recent and more serious accusation that is actually turning into a lawsuit, against our own publisher 6waves/LOLAPPS:

http://www.edery.org/2012/01/standing-up-for-ourselves/

Having been in contact with several folks involved in the accusation and lawsuit, it’s been interesting to hear people’s takes on the issue.  There is a really fine line between inspiration and copying, and this problem has existed in every creative field for a long time.  It has come up in the game industry before, but it has become more of a focus in the current social/freemium landscape.  Why?  Because the effort involved in cloning a game has gone down while the financial reward has gone up.

Read More »

Posted in Gaming Bizdev, Theoretical Thoughts | Tagged , , | Leave a comment

Key stats to monitor for your Membase cluster

Happy holidays everyone!  Working with Membase in production over the last year, I’ve collected a few key commands in my .bashrc for quickly checking vital stats on my Membase servers, many of which came from the good folks at Couchbase.  Their wiki has improved over the year as well, and you can find a lot of good information there.  Here I’ll list the most common commands I run for monitoring and troubleshooting, along with related links to the Membase wiki:

Read More »

Posted in Development | Tagged | Leave a comment

Migrating Membase Cluster – Part 2

After being up all night babysitting the rebalance process, I’m happy to report that it was a rather uneventful night of maintenance.  The rebalance itself took 8-9 hours to complete, and it took another hour for all the replicas to get saved to disk as well.  Theoretically, I didn’t need to take the site down while the rebalance was happening, but I took the game down just to be safe and not compromise the game experience.

Disk access was definitely the bottleneck throughout the rebalance process once again.  One of the reasons we went for a larger number of smaller nodes rather than a few big ones was to spread our disk activity over more EBS drives during a rebalance, conceptually similar to a RAID 0.  We do increase the risk of hardware failure simply by having more nodes in the cluster, but the disk performance gain is definitely worth it.

Some folks are doing a RAID 0 setup using multiple EBS as described here on alestic, but I haven’t tried it personally.  If anyone has attempted that setup, especially in a production environment, please share your experience in the comments!

Posted in Development | Leave a comment