
A Resource Manager for Game Assets


Preamble


This article is based on my personal experience. If my personal opinions about programming and / or my coding style are likely to offend you, please stop reading now. I am not a guru programmer and I don’t want to impose anything on anyone.

Introduction


During the development of my engines I have always needed better handling of game assets, even if in the past it was sufficient to hard-code the needed resources at startup. With the speed of modern computers it is now possible to create something more advanced, dynamic and general-purpose.

The most important points I needed for a resource manager were:
  • Reference-counted resources.
  • If a resource is already present in the database, expose it as a raw pointer, else load it.
  • Know automatically when to free a resource.
  • Fast access to resources using a string key.
  • Clean everything up on exit without manually deleting the resources.
  • Map resources through a separate referencing structure.
At this point it looks like I am reinventing the wheel, given that smart pointers are now included in the new C++ standard. My personal problem with smart pointers is that it was very difficult to build the kind of data structure I wanted. My first idea was to create an unordered map of shared pointers and then hand out a weak pointer whenever a resource was requested. The problem with this organization is that the weak pointer only ‘observes’ the resource but doesn’t keep it alive. Conversely, using an unordered map of weak pointers and handing out shared pointers created different problems, which could be resolved using a custom deleter and other indirect strategies.

One of these was that when I loaded a resource for the first time its count was set to 1, and when the same resource was requested again its count increased to 2, so I still had the problem of ignoring the first reference, which again was solvable using a custom deleter and by tracking how many references were effectively left. Furthermore, the reference count of a shared pointer is thread safe and, according to the standard, is synchronized even when there is no strict necessity.
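
For illustration, here is a minimal sketch of that second approach; the class and function names are hypothetical and not part of my engine (it needs <memory>, <string> and <unordered_map>). The map only observes the resources through weak pointers, and a custom deleter erases the cache entry when the last shared pointer dies.

template < class T >
class CWeakCache
{
	// observing map: the entries do not keep the resources alive
	std::unordered_map< std::string, std::weak_ptr<T> > Cache;

	public:

	std::shared_ptr<T> Load( const std::string &filename, void *args )
	{
		// hand out the existing resource if it is still alive
		if ( auto existing = Cache[ filename ].lock() )
			return existing;

		// otherwise create it with a custom deleter that removes
		// the cache entry before freeing the memory
		std::shared_ptr<T> resource( new T( filename, args ),
			[ this, filename ]( T *ptr ) { Cache.erase( filename ); delete ptr; } );

		Cache[ filename ] = resource;

		return resource;
	}
};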

I am not saying that smart pointers are useless (they are used quite extensively), but my approach needed something different: total control over the reference counting. Basically what I needed was a wrapper class storing the pointer to the loaded resource together with its reference count. Note that the code copied from my engine uses a few utility functions that are not essential; they handle error messaging and string ‘normalisation’ (trimming whitespace and lowering the case), so you can easily ignore those functions and substitute them with your own. I have also removed all debug printing during execution, to keep things clearer.

Let’s have a look at the base class for resources.

class CResourceBase 
{

	template < class T > friend class  CResourceManager;

	private:

	int References;

	void IncReferences() { References++; }
	void DecReferences() { References--; }

	protected:

	// copy constructor and = operator are kept non-public

	CResourceBase(const CResourceBase& object) { }
	CResourceBase& operator=(const CResourceBase& object) { return *this; }

	// resource filename

	std::string ResourceFileName;

	public:

	const std::string &GetResourceFileName() const 
	{
		return ResourceFileName;
	}

	const int GetReferencesCount() const
	{
		return References; 
	}

	////////////////////////////////////////////////
	// ctor / dtor
					
	CResourceBase( const std::string& resourcefilename ,void *args ) 
	{
		// exit with an error if filename is empty

		if ( resourcefilename.empty() ) 
		CMessage::Error("Empty filename not allowed");

		// init data members

		References = 0;
		ResourceFileName=CStringFormatter::TrimAndLower( resourcefilename );

	}

	virtual ~CResourceBase() 
	{
	}
};

The class is mostly self-explanatory. The interesting point is the constructor, which needs a resource filename (including the full path of your resource) and a void pointer to an argument block in case you want to create the resource with some initial parameters. The args pointer comes in handy when you want to instantiate assets at runtime rather than load them from disk; for every other case the filename alone will serve our purposes well. All of our assets will inherit from this class.

There is, obviously, the reference counter and some functions for accessing it.
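
As an illustration of how the args pointer can be used, here is a hypothetical asset (not taken from the engine) whose constructor casts the void pointer back to an agreed argument block:

// hypothetical argument block for a texture asset
struct CTextureArgs
{
	int  Width;
	int  Height;
	bool GenerateMipMaps;
};

class CTexture : public CResourceBase
{
	int Width;
	int Height;

	public:

	CTexture( const std::string &resourcefilename, void *args ) : CResourceBase( resourcefilename, args ), Width( 0 ), Height( 0 )
	{
		if ( args )
		{
			// creation parameters were supplied: build an empty
			// texture from them instead of loading the file
			CTextureArgs *params = static_cast< CTextureArgs* >( args );

			Width  = params->Width;
			Height = params->Height;
		}
		else
		{
			// ... load the texture from GetResourceFileName() here
		}
	}
};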

The Resource Manager


The resource manager is basically a wrapper around an unordered map that uses a string as the key and maps it to a raw pointer. I decided to use an unordered map because I don’t need a sorted arrangement of assets; what I really care about is retrieving them as fast as possible. Lookup in a map is O(log N), while in an unordered map it is O(1) on average. Of course, just because the unordered map has constant-time lookup doesn't mean it is automatically faster than a map for a given N. Anyway, in my test cases N was never huge and the unordered map was always faster than the map, so I decided to use this particular data structure as the base of my resource manager.

The most important functions in the resource map are Load and Unload.

The Load function tries to retrieve the asset from the unordered map. If the asset is present, its reference count is increased and the wrapped pointer is returned. If it's not found in the database, the function creates a new asset, increases its reference count, stores it in the map and returns its pointer. Note that it's the programmer’s responsibility to give the asset class a proper constructor: the derived class, which inherits from CResourceBase, must accept a string containing the full path from which the asset is loaded and an argument pointer, if any. This will be clearer when the example is provided.

The Unload function does exactly the opposite: it looks for the requested asset given its file name; if the resource is found, its reference counter is decreased, and when it reaches zero the associated memory is released.

Since I think a good programmer understands 1000 lines of code better than 1000 lines of words, here is the entire resource manager:

template < class T >
	class CResourceManager 
	{

		private:

		// data members

		std::unordered_map< std::string, T* > Map;	
		std::string	Name;

		// copy constructor and = operator are kept private

		CResourceManager(const CResourceManager&)  { };
		CResourceManager &operator = (const CResourceManager& ) { return *this; }

		// force removal for each node

		void ReleaseAll()
		{
			typename std::unordered_map< std::string, T* >::iterator it=Map.begin();

			while ( it!=Map.end() )
			{	
				delete (*it).second;

				it=Map.erase( it );
			}

		}

		public:


		///////////////////////////////////////////////////
		// add an asset to the database

		T *Load( const std::string &filename, void *args )
		{

			// check if filename is not empty

			if ( filename.empty() ) 
			CMessage::Error("filename cannot be null");

			// normalize it

			std::string FileName=CStringFormatter::TrimAndLower( filename );

			// looks in the map to see if the
			// resource is already loaded

			typename std::unordered_map< std::string, T* >::iterator it = Map.find( FileName );

			if (it != Map.end())
			{

				(*it).second->IncReferences();

				return (*it).second;
				
			}

			// if we get here the resource must be loaded
			// allocate new resource using the raii paradigm
			// you must supply the class with a proper constructor
			// see header for details

			T *resource= new T( FileName, args );

			// increase references , this sets the references count to 1

			resource->IncReferences();

			// insert into the map

			Map.insert( std::pair< std::string, T* > ( FileName, resource ) );

			return resource;

		}

		///////////////////////////////////////////////////////////
		// deleting an item

		bool Unload ( const std::string &filename )
		{
			// check if filename is not empty

			if ( filename.empty() ) 
			CMessage::Error("filename cannot be null");

			// normalize it

			std::string FileName=CStringFormatter::TrimAndLower( filename );

			// find the item to delete

			typename std::unordered_map< std::string, T* >::iterator it = Map.find( FileName );

			if (it != Map.end())
			{
										
				// decrease references

				(*it).second->DecReferences();

				// if the item has 0 references it is no
				// longer used, so delete it from the
				// main database

				if ( (*it).second->GetReferencesCount()==0 )
				{
					// call the destructor

					delete( (*it).second );
					Map.erase( it );

				}

				return true;

			}

			CMessage::Error("cannot find %s\n",FileName.c_str());

			return false;
		}


		//////////////////////////////////////////////////////////////////////
		// initialise

		void Initialise( const std::string &name )
		{
			// check if name is not empty

			if ( name.empty() )  
			CMessage::Error("Null name is not allowed");

			// normalize it

			Name=CStringFormatter::TrimAndLower( name );
			
		}

		////////////////////////////////////////////////
		// get name for database

		const std::string &GetName() const { return Name; }
		const int Size() const { return Map.size(); }

		///////////////////////////////////////////////
		// ctor / dtor

		CResourceManager()
		{
			
		}

		~CResourceManager()
		{
			ReleaseAll();
		}

};

Mapping Resources


The resource manager presented here is fully functional on its own, but we also want to be able to use assets inside a game object represented by a class.
Think about a 3D object made of different 3D meshes combined in a sort of hierarchical structure; a simple robot arm makes the idea clearer. The object is composed of simple building blocks, like cubes and cylinders. We want to reuse every block as much as possible, and we also want to access the blocks quickly, for example when we want to rotate a single joint.

The engine must fetch the object quickly, without any brute-force search, and we also want a name for each asset so we can address it with human-readable names, which are easier to remember and to organize.

The idea is to write a resource mapper built on another unordered map, using strings as keys and pointers handed out by the resource database as the mapped values. We also need to specify whether the same asset is allowed to be present multiple times. The reason behind this is simple: think again of the 3D robot arm. We may need to use a cube multiple times, but if we use the same resource mapper for shaders, we want to keep each shader only once. Everything will become clearer as the code for the mapper unfolds further ahead.

template < class T >
	class CResourceMap
	{

		private:

		/////////////////////////////////////////////////////////
		// find in all the map the value requested

		bool IsValNonUnique( const std::string &filename )
		{

			// if duplicates are allowed , then always return true

			if ( Duplicates ) return true;

			// else , check if an element with this value is already present:
			// if it is found, return false, else return true

			typename std::unordered_map< std::string, T* >::iterator it= Map.begin(); 

			while( it != Map.end() )
			{
				if ( ( it->second->GetResourceFileName() == filename ) ) return false;

				++it;
			}

			return true;

		}

		//////////////////////////////////////////////////////////////////////////////
		// private data

		std::string Name;											// name for this resource mapper
		int Verbose;												// flag for debugging messages
		int Duplicates;												// allows or disallows duplicated filenames for resources
		CResourceManager<T> *ResourceManager;						// attached resource manager
		std::unordered_map< std::string, T* > Map;					// resource mapper

		// copy constructor and = operator are kept private

		CResourceMap(const CResourceMap&)  { };
		CResourceMap &operator = (const CResourceMap& ) { return *this; }

		public:

		//////////////////////////////////////////////////////////////////////////////////////
		// adds a new element

		T *Add( const std::string &resourcename,const std::string &filename,void *args=0 )
		{

			if ( ResourceManager==NULL ) CMessage::Error("DataBase cannot be NULL (5)" );
			if ( filename.empty() ) CMessage::Error("%s : filename cannot be null",Name.c_str());
			if ( resourcename.empty() ) CMessage::Error("%s : resourcename cannot be null",Name.c_str());

			std::string ResourceName=CStringFormatter::TrimAndLower( resourcename );

			// looks in the hashmap to see if the
			// resource is already loaded

			typename std::unordered_map< std::string, T* >::iterator it = Map.find( ResourceName );

			if ( it==Map.end() )
			{
				std::string FileName=CStringFormatter::TrimAndLower( filename );

				// if the duplicates flag is set to true , duplicated mapped values
				// are allowed; if the duplicates flag is set to false, duplicates won't be allowed

				if ( IsValNonUnique( FileName ) )		
				{

					T *resource=ResourceManager->Load( FileName,args );

					// allocate new resource using the raii paradigm

					Map.insert( std::pair< std::string, T* > ( ResourceName, resource ) );

					return resource;

				}
				else
				{
					// if we get here and the duplicates flag is set to false
					// the filename is duplicated

					CMessage::Error("Filename name %s must be unique\n",FileName.c_str() );

				}

			}

			// if we get here means that resource name is duplicated

			CMessage::Error("Resource name %s must be unique\n",ResourceName.c_str() );

			return nullptr;

		}

		/////////////////////////////////////////////////////////
		// delete element using resourcename

		bool Remove( const std::string &resourcename )
		{

			if ( ResourceManager==NULL ) CMessage::Error("DataBase cannot be NULL (4)");
			if ( resourcename.empty() ) CMessage::Error("%s : resourcename cannot be null",Name.c_str());

			std::string ResourceName=CStringFormatter::TrimAndLower( resourcename );

			if ( Verbose ) 
			CMessage::Trace("%-64s: Removal proposal for : %s\n",Name.c_str(),ResourceName.c_str() );

			// do we have this item ?

			typename std::unordered_map< std::string, T* >::iterator it = Map.find( ResourceName );

			// yes, delete element, since it is a reference counted pointer, 
			// the reference count will be decreased

			if ( it != Map.end() ) 
			{

				// save resource name

				std::string filename=(*it).second->GetResourceFileName();

				// erase from this map

				Map.erase ( it );

				// check if it is unique and erase it eventually

				ResourceManager->Unload( filename );

				return true;
			}

			// if we get here , node couldn't be found
			// so , exit with an error

			CMessage::Error("%s : couldn't delete %s\n",Name.c_str(), ResourceName.c_str() );

			return false;

		}

		//////////////////////////////////////////////////////////
		// clear all elements from map

		void Clear()
		{

			typename std::unordered_map< std::string, T* >::iterator it=Map.begin();

			// walk through all the map

			while ( it!=Map.end() )
			{

				// save resource name 

				std::string filename=(*it).second->GetResourceFileName();

				// clear from this map

				it=Map.erase ( it );

				// check if it is unique and erase it eventually

				ResourceManager->Unload( filename );

			}

		}

		//////////////////////////////////////////////////////////
		// dumps database content to a string

		std::string Dump()
		{

			if ( ResourceManager==NULL )
			CMessage::Error("DataBase cannot be NULL (3)");

			std::string str=CStringFormatter::Format("\nDumping database %s\n\n",Name.c_str() );

			for ( typename std::unordered_map< std::string, T* >::iterator it = Map.begin(); it != Map.end(); ++it )
			{

				str+=CStringFormatter::Format("resourcename : %s , %s\n",
				(*it).first.c_str(),
				(*it).second->GetResourceFileName().c_str() );

			}

			return str;

		}

		/////////////////////////////////////////////////////////
		// getters

		/////////////////////////////////////////////////////////
		// gets arrays name

		const std::string &GetName() const { return Name; }
		const int Size() const { return Map.size(); }

		//////////////////////////////////////////////////////////
		// gets const reference to resource manager

		const CResourceManager<T> *GetResourceManager() { return ResourceManager; }

		/////////////////////////////////////////////////////////
		// gets element using resourcename, you should use this
		// as a debug feature or to get shared pointer and later
		// use it , using it in a section where performance is
		// needed might slow down things a bit

		T *Get( const std::string &resourcename )
		{

			if ( ResourceManager==NULL ) CMessage::Error("DataBase cannot be NULL (2)");
			if ( resourcename.empty() ) CMessage::Error("%s : resourcename cannot be null",Name.c_str());

			std::string ResourceName=CStringFormatter::TrimAndLower( resourcename );

			typename std::unordered_map< std::string, T* >::iterator it;

			if ( Verbose ) 
			{
				CMessage::Trace("%-64s: %s\n",Name.c_str(),CStringFormatter::Format("Looking for %s",ResourceName.c_str() ).c_str());
			}

			// do we have this item ?

			it = Map.find( ResourceName );

			// yes, return pointer to element

			if ( it != Map.end() ) return it->second;

			// if we get here , node couldn't be found thus , exit with a throw

			CMessage::Error("%s : couldn't find %s",Name.c_str(), ResourceName.c_str() );

			// this point is never reached in case of failure

			return nullptr;		

		}

		/////////////////////////////////////////////////////////
		// setters

		void AllowDuplicates() { Duplicates=true; }
		void DisallowDuplicates() { Duplicates=false; }
		void SetVerbose() {	Verbose=true; }
		void SetQuiet() { Verbose=false; }

		////////////////////////////////////////////////////////////
		// initialise resource mapper

		void Initialise( const std::string &name, CResourceManager<T> *resourcemanager,
		bool verbose,bool duplicates )
		{

			if ( resourcemanager==NULL ) CMessage::Error("DataBase cannot be NULL 1");
			if ( name.empty() ) CMessage::Error("Array name cannot be null");

			Name=CStringFormatter::TrimAndLower( name );			// normalized name string

			ResourceManager=resourcemanager;						// copy manager pointer

			// setting up verbose or quiet mode

			Verbose=verbose;

			// setting up allowing or disallowing duplicates 

			Duplicates=duplicates;

			// emit debug info

			if ( Verbose ) 
			{
				if ( Duplicates ) CMessage::Trace("%-64s: Allows duplicates\n",Name.c_str() );
				else if ( !Duplicates )	CMessage::Trace("%-64s: Disallows duplicates\n",Name.c_str() );

			}

		}

		/////////////////////////////////////////////////////////
		// ctor / dtor

		CResourceMap()
		{
			Verbose=-1;				// undetermined state
			Duplicates=-1;			// undetermined state
			ResourceManager=NULL;	// no resource manager assigned
		}

		~CResourceMap()
		{
			if ( Verbose ) CMessage::Trace("%-64s: Releasing\n",Name.c_str() );

			Clear();			// remove elements if unique
		}

	};

Basically, the class is a wrapper around the resource database operations. Among the private data, as you can see, there is an unordered map whose key is a string and whose mapped value is the pointer handed out by the resource database. Let's have a look at the member functions now.

The Add function performs several tasks. First, it checks whether the name for the asset is already present, since duplicated asset names are not allowed. If the name is not present, and if the duplicates flag is set to false, it checks whether the filename is already mapped; here I have used a brute-force search, because a proper bidirectional mapping would have required a more complex data structure and I wanted to keep things simple and stupid. At this point the resource database is asked to load the asset: if the asset is already present it immediately hands back the address of the required resource, and if not it loads it, making the process completely transparent to the resource mapper. The pointer then gets stored in the unordered map. Note again that all the error checks are just wrappers around a throw; you may replace them with your own error handling without compromising the Add function itself.

The Remove function is a little more interesting. The safety checks are the same as those used in Add; the resource is erased from the map and the resource database's removal function is invoked, but the database doesn't destroy the asset if it is still shared in some other places. By ‘some other places’ I mean that the asset may still be present in the same resource mapper or in another resource mapper instantiated somewhere else in your game. This will be clearer with a practical example further ahead.

The Clear function erases the entire resource map, using the same reference-counting mechanism of the resource database.

The Get function retrieves a resource by specifying its resource name and gives back the resource pointer.

The Initialise function attaches the resource mapper to the resource database.

Example of Usage


First of all, we need a class for our asset, which could be a game object, derived from CResourceBase. Let's call it CFoo just for this example:

class CFoo : public vml::CResourceBase
{

	////////////////////////////////////////////////////
	// copy constructor is private
	// no copies allowed since classes
	// are referenced

	CFoo( const CFoo &foo ) : CResourceBase ( foo )
	{
	}

	////////////////////////////////////////////////////
	// overload operator is private, 
	// no copies allowed since classes
	// are referenced

	CFoo &operator =( CFoo &foo ) 
	{
		if ( this==&foo )
		return *this;
		return *this;
	}

	public:

	////////////////////////////////////////////////
	// ctor / dtor

	// this constructor must be present

	CFoo(const std::string &resourcefilename, void *args ) : CResourceBase( resourcefilename,args )
	{
	}

	// note: there is no default constructor, since CResourceBase
	// requires a filename at construction time

	~CFoo()
	{
	}

};

Now we can instantiate our resource database and resource mappers.

CResourceManager<CFoo> rm;
CResourceMap<CFoo> mymap1;
CResourceMap<CFoo> mymap2;

I have created a resource manager and two resource mappers here.

// create a resource database

rm.Initialise("FooDatabase");

// attach this database to the resource mappers, both of them in verbose mode and allowing duplicates

mymap1.Initialise( "foolist1",&rm, true,true );
mymap2.Initialise( "foolist2",&rm, true,true );

// populate the first resource mapper
// the '0' argument means that the resource 'a' , whose filename is foo1.txt
// doesn't take any additional values at construction time

mymap1.Add( "a","foo1.txt",0 );
mymap1.Add( "b","foo1.txt",0 );
mymap1.Add( "c","foo2.txt",0 );
mymap1.Add( "d","foo2.txt",0 );
mymap1.Add( "e","foo1.txt",0 );
mymap1.Add( "f","foo1.txt",0 );
mymap1.Add( "g","foo3.txt",0 );

// populate second resource mapper

mymap2.Add( "a","foo3.txt",0 );
mymap2.Add( "b","foo1.txt",0 );
mymap2.Add( "c","foo3.txt",0 );
mymap2.Add( "d","foo1.txt",0 );
mymap2.Add( "e","foo2.txt",0 );
mymap2.Add( "f","foo1.txt",0 );
mymap2.Add( "g","foo2.txt",0 );

// dump content into a stl string which can be printed as you like

std::string text=rm.Dump();

Running this example and printing the text content gives:

Dumping database foodatabase

Filename : foo1.txt , references : 7
Filename : foo2.txt , references : 4
Filename : foo3.txt , references : 3

This concludes the article. I hope it will be useful for you, thanks for reading.

Crowdfunding as Marketing

I was chatting with the incredible PR guru Emily Claire Afan this week and received insight into the merits of using crowdfunding as a form of marketing. This is perhaps not groundbreaking news for some, but upon evaluating a lot of independent releases this year, few have done this properly.

We should start by outlining what crowdfunding really offers developers. If you're new to crowdfunding, take a look at my intro to game crowdfunding article. With the advent of digital distributors like Steam and Humble Bundle, developers have the ability to produce a game and take it all the way to their customers without a publisher. With the typical publisher financing model now removed, developers require funding from a new source. While I rarely see a studio that can fully finance a game through crowdfunding, it is a great opportunity to supplement production financing. So for this argument, let’s define a crowdfunding initiative as a platform for early monetization and community interaction at the center of an awareness campaign.

After creating a basic media kit and pitch deck to interact with your customers, your crowdfunding portal becomes the base of operations to send potential customers to. The concept I think many developers should consider is treating a game crowdfunding operation as a means to gain awareness and present your game concept in such a way that the effort, time and resources can be offset with monetization. Regardless of whether you make a profit with the operation, you’ve exposed your title to thousands of potential customers, exposure that usually costs independent developers an arm and a leg.

So what are some major actionables to include in your campaign?

Community Engagement


I am a huge advocate of creating dialogue with your customers in a community based fashion. Forming a community around a game is hard work and requires strategy and effort, but has the potential for massive payoff! Many ignore this function of marketing because the cost and return allocation can appear disassociated and impossible to determine, but rarely do I see an excited community who isn’t evangelizing the game to their friends and peers. My biggest recommendation is to encourage crowdfunding backers to participate in your community. Even if it’s just a Facebook page, having a way to dialogue and interact with your users becomes one of your most important assets.

Talk “with” instead of “to”


Turning your presentation into an infomercial about your game feels natural and is easy to do, but it’s likely the biggest mistake a developer can make. Gamers crave an authentic relationship with developers to know what experience they can expect with their game. Invite your audience to a dialogue instead of a lecture. This is as simple as asking specific questions of your audience and responding to answers they give. Ask what your audience is excited about and comment with expansions of how these points play a role in your game.

Show Gameplay


To state this frankly – players are interested in what the game experience will be and not in its concept. I know from experience that customers are far more critical and distrusting of a game that doesn’t have gameplay footage to show. And why shouldn’t they be? Would you buy a home you couldn’t tour first?

I know this is just scratching the surface of the discussion, so let me know what advice you’d give to developers crowdfunding their game. Are you thinking of putting together a crowdfunding campaign? I’d love to brainstorm with you!

The Pros and Cons of Going to College for Game Development/Design


The College Fad


College seems to have become the norm for this generation, as it has gained a reputation for landing graduates better jobs with better pay. On top of that, parents, peers, and schools increasingly expect young adults to attend college. College is deemed the automatic next step into adult life. And if you don't go to college, you're a no-good, low-life bum.

Or are you really?

While the prospect of picking that special college once we near the end of our high school days has been a looming decision over our heads for most of our lives, college actually might not be right for you.

Now, before you run away, let me explain.


There are two types of careers in this world.

1. The careers that require a fancy diploma. Doctors, lawyers, and accountants fall into this category.

2. And the careers that are based on experience and a portfolio. Generally, this includes careers focused on art and design.

Look at it like this. Going into the medical industry as a doctor, you will be expected to have a doctoral degree to prove you are capable of the job. As a lawyer, clients will be expecting credibility; otherwise, how will they know you're the right person for their case? But considering you're reading this article, I don't think you want to go into either of those fields. As a game developer, it's not a degree developing teams will be looking at; it's your work. Do you think a potential customer is going to look at a game and think, "Hey, this guy (or girl) has a degree! That makes me want to buy this game!" If you mentally answered no, you win the prize of self-congratulations.

Your Portfolio


Companies like Bethesda or Valve may not look at the dust-collecting piece of paper that says you earned a degree, but as I briefly mentioned earlier, they will be focusing on one main thing: a portfolio. I cannot stress enough how important this is. Whether it's a primped and polished website or simply a folder containing all of your work, you will not survive the industry without something to showcase your skills. Even if you're the greatest digital artist or the most advanced programmer in the world, a company simply will not even consider you if you have nothing to show them.
A portfolio should include anything created by yourself including concept art, demos, or even an entire game you made yourself.

"So what does a portfolio have to do with my decision to go to college for game development/design?"

Simply put, college uses up most of the time you could be using to build your portfolio. While you're spending time cramming for an exam that quite frankly will not matter for your future career in game development, you could be working on the next Binding of Isaac or Divinity game. Another reason college life usually hinders the building of a portfolio is that generally a student tends to only complete the assignments given to him/her throughout the course. In your entire four years of majoring in Game Development, you will probably only produce one game. If you had just stayed home and focused on game development, you might have a nice, healthy stack of different prototypes, ideas, and demos sitting in your portfolio.

The Pros of College


Now that I've explained why you shouldn't go to college, I'll play devil's advocate and explain why college may be beneficial. While, in theory, you could teach yourself the ins and outs of game development, it can be very difficult to find all the answers and proper tools to use. Game Development courses are specifically designed to give you a step-by-step approach to creating a game. So instead of searching through forums, documentation, and sometimes online resources that may be wrong, you have all the right instructions and resources right there.

While college may be a supreme tool for gaining knowledge in certain areas, you still should not ignore the portfolio that needs to be growing every single day. What you do in your free time using the knowledge you've gained through the course is what is going to help you land a job.

In a simple statement, college is your access to knowledge, but the work you do outside of your assignments using the things you learn is what matters.

A Happy Medium


In my personal opinion, there is an option that combines the knowledge accessed through a structured course on game development and the time that comes with not taking that path.

Online education.

There it is, the answer to the world's problems.

As a student who has recently transferred from a seven-hour day (not including homework that needs to be done later) in public school to an online school that takes me only three hours a day, I have already begun to consider several online colleges to attend next school year. With online college, you'll have all the information you need at hand while having the time to use it, and you can gain a degree in less time than it would take attending a campus school.

Here's a non-promotional example of an online college that offers Game Development as a major: Westwood college's page covers a mouthwatering course description including: advanced programming, artificial intelligence, graphics programming, and 3D game architecture.

And you could learn it all on your own from the comfort of your home and even finish early.

In Conclusion


The following is a list of the main points summarizing the article:

1. Your access into the gaming industry is a heavy portfolio, not a college degree.

2. If you do decide to attend college, it should not be a replacement of a portfolio that needs to be built in your spare time, but used as a tool to access a bank of knowledge in a single place.

3. An alternative that I personally suggest is online college. You can learn the information you need while having time to use it from your own home.

Most of all, follow your gut. If you feel more comfortable learning the traditional way, do it. If you're a self-taught prodigy, get your work out there and start living your life now! You don't need to fit other people's expectations in order to be successful in this industry; you just need to put yourself out there.

"Update" Log


[9.30.2014] Initial release.

Pre-Structure Phrases for Internationalisation

Key-value pairs are a commonly seen format for storing phrases for translation. However, they are not enough for most cases, especially for games that involve a lot of items and characters. Characters and items are common components in all genres of games. It is easy enough to develop the game in English with key-value pairs, since there are only 2 different keys for an item, i.e. singular or plural. Things become complicated when it comes to European or Middle Eastern languages, and the solution is to use structural formats instead of key-value pairs.

Different forms for pluralization


Many languages have plural rules that differ from English. For example:
  • There are 6 plural forms in Arabic.
  • There are 4 plural forms in Russian.
  • There are no plural forms (or, only 1 single form for all items) in Chinese.
If the phrases are stored as simple key-value pairs, you’ll need to have 6 entries for every item in order to make sure it works fine in all languages.

Key-value examples:

English
```
APPLE_ZERO = “%d apples”;
APPLE_ONE = “%d apple”;
APPLE_TWO = “%d apples”;
APPLE_FEW = “%d apples”;
APPLE_MANY = “%d apples”;
APPLE_OTHER = “%d apples”;
```

Russian
```
APPLE_ZERO = “%d яблок”;
APPLE_ONE = “%d яблоко”;
APPLE_TWO = “%d яблока”;
APPLE_FEW = “%d яблока”;
APPLE_MANY = “%d яблок”;
APPLE_OTHER = “%d яблок”;
```

Chinese (Traditional)
```
APPLE_ZERO = “%d個蘋果”;
APPLE_ONE = “%d個蘋果”;
APPLE_TWO = “%d個蘋果”;
APPLE_FEW = “%d個蘋果”;
APPLE_MANY = “%d個蘋果”;
APPLE_OTHER = “%d個蘋果”;
```

You may check out the detailed plural rules of all languages at unicode.org
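
To make the rules concrete, here is a small sketch of my own, following the CLDR cardinal rules for Russian, of how a plural category could be picked for an integer count; other languages would plug in their own rule function:

```
enum class PluralForm { One, Few, Many };

// CLDR cardinal plural rule for Russian, integer counts only
PluralForm RussianPluralForm(unsigned int n)
{
    unsigned int mod10  = n % 10;
    unsigned int mod100 = n % 100;

    if (mod10 == 1 && mod100 != 11)
        return PluralForm::One;                              // 1, 21, 31, ...
    if (mod10 >= 2 && mod10 <= 4 && (mod100 < 12 || mod100 > 14))
        return PluralForm::Few;                              // 2-4, 22-24, ...
    return PluralForm::Many;                                 // 0, 5-20, 25-30, ...
}

// e.g. RussianPluralForm(3) == PluralForm::Few  ->  "3 яблока"
```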

Different forms for genders


There are similar problems with masculine/feminine/neuter forms in most European languages. English-speaking developers can easily miss the handling until the need for translation comes.

In some European languages, a noun has a gender attribute, either masculine or feminine. This grammar rule does not apply only to human beings, but also to objects.

There are no strict rules to determine the gender. They vary in different languages as well. In Spanish, “computer” (La computadora) is feminine. In German, “computer” (der Computer) is masculine.

The adjectives or verbs that relate to the noun must agree with its gender.

For example, in Italian,

English:
```
GOOD_FEMININE = “%s is good.”
GOOD_MASCULINE = “%s is good.”
```

Italian:
```
GOOD_FEMININE = “%s è buona .” (when %s is a female character)
GOOD_MASCULINE = “%s è buono .” (when %s is a male character)
```

A game usually involves a lot of items and characters with different genders. In order to deliver high quality translation, a well-defined structure that supports gender is a must so that the translators can fill in the corresponding translations.

You may also need to have a set of rules to define the gender of the characters and the items in the game for different languages. It will be nice to have the structure available at the very beginning in order to avoid the problems later when the project becomes really big.
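
As a rough sketch of such a structure (the names here are illustrative, not taken from any particular framework), a phrase entry can simply store one form per grammatical gender and fall back to a default:

```
#include <map>
#include <string>

enum class Gender { Masculine, Feminine, Neuter };

struct PhraseEntry
{
    std::map<Gender, std::string> forms;     // one translation per grammatical gender
    std::string fallback;                    // used when no gender-specific form exists

    const std::string& Get(Gender gender) const
    {
        auto it = forms.find(gender);
        return it != forms.end() ? it->second : fallback;
    }
};

// Italian example from above:
//   entry.forms[Gender::Feminine]  = "%s è buona.";
//   entry.forms[Gender::Masculine] = "%s è buono.";
```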

Conclusion


Use Structural Format Instead of Key-value Pairs.

A more structural format converted from gettext for Russian:

<group restype="x-gettext-plurals">
 <trans-unit id="1[0]">
 <source><b>%d apple</b></source>
 <target><b>%d яблоко</b></target>
 </trans-unit>
 <trans-unit id="1[1]">
 <source><b>%d apples</b></source>
 <target><b>%d яблока</b></target>
 </trans-unit>
 <trans-unit id="1[2]">
 <source><b>%d apples</b></source>
 <target><b>%d яблок</b></target>
 </trans-unit>
</group>

Instead of hardcoding the variations in the keys, rich data structures like XML help. There is a well-defined format that standardizes the needs of localization: XLIFF (XML Localization Interchange File Format).

You can learn more in the Wikipedia page of XLIFF.

There are many different frameworks for game development, like Unity, Cocos2dx and Unreal Engine. Most of them support localization well, though possibly with different formats. You may search their documentation to learn more.

  1. Unity Localization Asset
  2. Unreal Engine Localization
  3. Cocos2dx Localization

You can also check out our website for more information on localisation.

How Alias Templates Saved my Sanity


Before we begin...


This article has been reformatted to be more readable on GameDev.net, the original can be found at the following blog.

Are you sitting comfortably?


C++ supports two powerful abstractions, Object Orientation and Generic Programming. Ask any battle-hardened games industry veteran about the two and you’re likely to see an eye twitch at the latter. It’s not that Generic Programming is particularly hard, but the errors you get out of the language can be particularly verbose, without even getting into the private hell of errors relating solely to that usage...

This article provides example issues with template typedefs and the alternatives that modern C++ provides.

Let’s make a game!


Let’s say you have a simple game where multiple wizards lay the smack down, nerdy-spellcast style! We’ll impose some rules:
  • Each battle arena contains several magic pools, each imbued with a different spell.
  • A wizard casts spells using these pools (maybe their robes soak up the juice?)
  • Over time, these pools lose their power. When the power is lost, spells can no longer be cast.


wizard1.png


Sounds… fun? Let's get into it.

Modelling Spells


Let’s briefly look at two approaches to the modelling of spells. Often a key difference between OO and Generic code is that we may have a reliance on dispatch when identifying "IS-A" relationships with the former, and Type Traits or Duck Typing for the latter.

Your OO code may look something like:

class ISpell abstract

With the concrete specification of two spells:

class MagicMissileSpell : public ISpell
class HealSpell : public ISpell

(Actually, if this gets any more complicated it would be a good idea to take a look at prototype and component patterns at http://gameprogrammingpatterns.com and save yourself a headache).

Your Generic approach on the other hand is likely to be more like:

template<typename T>
class Spell
{
T mSpell;
};

With the implementation provided by individual classes satisfying whatever functionality the spell requires:

class MagicMissile
class Heal

There are benefits and pitfalls to both approaches and in all honesty the two aren't even mutually exclusive. Let's not dwell on exactly why you would pick one implementation over the other (I didn't) but instead focus on how to make the code work well (I had to).

Spell Ownership


It seems like we need some consideration over ownership in this game:

“Over time, these pools lose their power. When the power is lost, spells can no longer be cast.”


Ownership semantics in C++ 11 are supported in one way with smart pointers. We can model this scenario by letting each pool hold a shared pointer to the spell type, with each wizard holding a weak pointer to the same asset as required. As long as we lock that weak pointer whilst we cast, the condition should be fine (we do have a small amount of time where the cast could be using magic no longer in the pool, but we'll pretend Wizards are just down with that).


wizardownership.png


OO Spell Ownership


This looks pretty easy, we'll set up:

class Pool
{
// ...
private:
std::shared_ptr<ISpell> mSpell;
};

class Wizard
{
// ...
public:
bool cast(std::weak_ptr<ISpell> spell);
};

We could choose to define a type for these pointers, making them easily alterable and reducing the amount of typing:

typedef std::shared_ptr<ISpell> SharedSpellPtr;
typedef std::weak_ptr<ISpell> WeakSpellPtr;
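
As a quick aside, a cast using that weak pointer might look like the following sketch; the body is my own and ISpell::invoke is an assumed member, since we only declared cast above:

bool Wizard::cast(std::weak_ptr<ISpell> spell)
{
    if (auto locked = spell.lock())   // the pool still holds its power
    {
        locked->invoke();             // assumed ISpell member function
        return true;
    }
    return false;                     // the pool has been drained
}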

Looks OK. We’re actually going to leave the OO approach now, as it doesn't suffer from the same plague affecting the Generic approach, but feel free to check out the source for a more in-depth comparison.

Generic Spell Ownership


Let’s take a quick step back and look at how our spells are modelled again. The pools in this implementation will want to be imbued in a similar way, so how would that look? As we don’t have the common base we will have to bind to a template on the pool:

template<typename T>
class Pool
{
//... will have mSpell variable, related to T
};

An explicit specialisation of T can then be provided when the member is declared. For example, a magic missile spell:

std::shared_ptr<Spell<MagicMissile>> mMagicMissileSpell;

In the same manner as the OO approach, we can probably define this as a custom type:

typedef Spell<MagicMissile> MagicMissileSpell;
std::shared_ptr<MagicMissileSpell> magicMissileSpell

Maybe even go further...

typedef std::shared_ptr<MagicMissileSpell> MagicMissileSpellPtr;
MagicMissileSpellPtr magicMissile;

This is especially useful if we were overriding types with allocators etc as we get to avoid writing an essay every time we use the type (which would also be error prone as hell).

The problem here is that we’re going to have to jump through the same hoops to define the weak pointer, and any other structures we wanted further down the line (unique pointers, vectors, maps…). It doesn’t scale too well and needs a lot of boilerplate for every spell.

Template Typedefs


Wouldn’t it be great if we could define a more abstract template type for the above? We can… eventually. Let’s start with a more general shared spell shared pointer:

template<typename T>
typedef std::shared_ptr<Spell<T>> SpellSharedPtr;

This looks innocuous enough… but try to compile and *gasp*

error C2823: a typedef template is illegal


ILLEGAL??? That’s not ideal... and sure enough, this is a well trodden restriction of olden times C++.

The common workaround is to take advantage of the fact that classes can be templated, and can contain typedefs:

template < typename T >
class SpellType
{
public:
typedef std::weak_ptr< Spell<T> > SpellWeakPtr;
typedef std::shared_ptr< Spell<T> > SpellSharedPtr;
};

typedef SpellType<MagicMissile> MagicMissileSpellType;

Which now means that we can refer to the various pointers like so:

MagicMissileSpellType::SpellSharedPtr magicMissileSharedPtr;
MagicMissileSpellType::SpellWeakPtr magicMissileWeakPtr;

This is the point where a lot of literature leaves the subject. Sadly it can still get a little worse. Disappointment comes whenever we want to use that type definition (e.g. if we set up a magic pool like so):

template<typename T>
class Pool final
{
public:
explicit Pool(SpellType<T>::SpellSharedPtr spellPtr)
: mSpellPtr(std::move(spellPtr))
{
}
private:
SpellType<T>::SpellSharedPtr mSpellPtr;
};

On compilation of the above, we’re again greeted with a nice compilation error:

warning C4346: 'SpellType<t>::SpellSharedPtr' : dependent name is not a type. prefix with 'typename' to indicate a type


This one is pretty obviously fixable, we just need to rephrase that declaration every time we see it:

typename SpellType<T>::SpellSharedPtr

We’ve got a workable solution, there’s one last consideration here though...

What if our spells were referenced in a large number of places? Maybe we’re not so sure whether the pool should be the sole owner anymore; shared ownership might be fine, but the model holds together well… for now. Let’s define an alias (remember that name for later). We reserve the right to change the type later and it’s going to be a single point of change (with some hopefully minor fiddling with locks etc., dependent on functionality):

template<typename T>
class SpellTypePointer
{
public:
typedef typename SpellType<T>::SpellSharedPtr Type;
};
SpellTypePointer<MagicMissile>::Type spellPtr;

Notice the typename again, you'll probably forget to type it every time. There was a point where every code review I ever took for this pattern had someone arguing against that keyword too. The technique works well enough but when you've had to defend your code for the fiftieth time, you really wish there was an alternative...

Type Alias, Alias Template


In the C++11 standard, type aliases and alias templates fill this hole in functionality. For Visual Studio this means upgrading to 2013, but it’s worth the wait. Remember when we couldn’t even define this type:

template<typename T>
typedef std::shared_ptr<Spell<T>> SpellSharedPtr;

The syntax for alias templates makes this all possible by propagating the template binding:

template<typename T>
using SpellSharedPtr = std::shared_ptr<Spell<T>>;

This feels much cleaner, the same technique can be applied to all the above examples as well.
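
For completeness, here is a rough sketch of my own that mirrors the earlier snippets: the weak pointer gets the same treatment, and the Pool can be rewritten without the helper class or the typename keyword.

template<typename T>
using SpellWeakPtr = std::weak_ptr<Spell<T>>;

template<typename T>
class Pool final
{
public:
    explicit Pool(SpellSharedPtr<T> spellPtr)
    : mSpellPtr(std::move(spellPtr))
    {
    }

    SpellWeakPtr<T> lend() const { return mSpellPtr; }  // hand out to a wizard

private:
    SpellSharedPtr<T> mSpellPtr;                        // no 'typename' required
};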

So here are the details:
  • A type alias declaration introduces a name which can be used as a synonym. This is essentially the new typedef.
  • An alias template is a template which allows substitution of the template arguments from the alias template. The new functionality allowing us to define aliases on templates like we never could before.

Standard Gameplay and IAP Metrics for Mobile Games

Over the next few weeks I am publishing some example analytics for optimising gameplay and customer conversion. I will be using a real world example game, "Ancient Blocks", which is actually available on the App Store if you want to see the game in full.

The reports in this article were produced using Calq, but you could use an alternative service or build these metrics in-house. This series is designed to be "What to measure" rather than "How to measure it".

Attached Image: GameStrip.jpg

Common KPIs


The high-level key performance indicators (KPIs) are typically similar across all mobile games, regardless of genre. Most developers will have KPIs that include:
  • D1, D7, D30 retention - how often players are coming back.
  • DAU, WAU, MAU - daily, weekly and monthly active users, a measurement of the active playerbase.
  • User LTVs - what is the lifetime value of a player (typically measured over various cohorts, gender, location, acquiring ad campaign etc).
  • DARPU - daily average revenue per user, i.e. the amount of revenue generated per active player per day.
  • ARPPU - average revenue per paying user, a related measurement to LTV but it only counts the subset of users that are actually paying.
There will also be game specific KPIs. These will give insight on isolated parts of the game so that they can be improved. The ultimate goal is improving the high-level KPIs by improving as many sub-game areas as possible.

Retention


Retention is a measure of how often players are coming back to your game after a period. D1 (day 1) retention is how many players returned to play the next day, D7 means 7 days later etc. Retention is a critical indicator of how sticky your game is.

Arguably it's more important to measure retention than it is to measure revenue. If you have great retention but poor user life-time values (LTV) then you can normally refine and improve the latter. The opposite is not true. It's much harder to monetise an application with low retention rates.
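
As a concrete (if simplified) illustration, assuming you have each player's install day and the set of days they were active, Dn retention for a cohort can be computed like this; the data layout is my own assumption, not something a specific analytics service prescribes:

#include <set>
#include <unordered_map>

using UserId = long long;
using Day    = int;                    // days since some fixed epoch

// fraction of the users who installed on cohortDay that were active n days later
double RetentionDn(const std::unordered_map<UserId, Day>& installDay,
                   const std::unordered_map<UserId, std::set<Day>>& activeDays,
                   Day cohortDay, int n)
{
    int cohortSize = 0;
    int returned   = 0;

    for (const auto& user : installDay)
    {
        if (user.second != cohortDay) continue;      // not part of this cohort
        ++cohortSize;

        auto it = activeDays.find(user.first);
        if (it != activeDays.end() && it->second.count(cohortDay + n))
            ++returned;                              // came back exactly n days later
    }

    return cohortSize > 0 ? static_cast<double>(returned) / cohortSize : 0.0;
}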

A retention grid is a good way to visualize game retention over a period:


Attached Image: GameExample-Retention.png


When the game is iterated upon (either by adding/removing features, or adjusting existing ones) the retention can be checked to see if the changes had a positive impact.

Active user base


You may have already heard of "Daily/Weekly/Monthly" Active Users. These are industry standard measurements showing the size of your active user base. WAU, for example, is a count of the unique players that have played in the last 7 days. Using DAU/WAU/MAU measurements is an easy way to spot whether your audience is growing, shrinking, or flat.


Attached Image: Retention-DAU.png


Active user measurements need to be analysed alongside retention data. Your userbase could look flat if you have lots of new users but are losing existing users (known as "churn") at the same rate.

Game-specific KPIs


In addition to the common KPIs each game will have additional metrics which are specific to the product in question. This could include data on player progression through the game (such as levels), game mechanics and balance metrics, viral and sharing loops etc.

Most user journeys (paths of interaction that a user can take in your application, such as a menu to start a new game) will also be measured so they can be iterated on and optimised.

For Ancient Blocks game specific metrics include:
  • Player progression:
    • Which levels are being completed.
    • Whether players are replaying on a harder difficulty.
  • Level difficulty:
    • How many attempts does it take to finish a level.
    • How much time is spent within a level.
    • How many power ups does a player use before completing a level.
  • In game currency:
    • When does a user spend in game currency?
    • What do they spend it on?
    • What does a player normally do before they make a purchase?

In-game tutorial


When a player starts a game for the first time, it is typical for them to be shown an interactive tutorial that teaches new players how to play. This is often the first impression a user gets of your game, and as a result it needs to be extremely well refined. With a bad tutorial your D1 retention will be poor.

Ancient Blocks has a simple 10 step tutorial that shows the user how to play (by dragging blocks vertically until they are aligned).


Attached Image: GameStrip2.jpg


Goals


The data collected about the tutorial needs to show any areas which could be improved. Typically these are areas where users are getting stuck, or taking too long.
  • Identify any sticking points within the tutorial (points where users get stuck).
  • Iteratively improve these tutorial steps to raise the conversion rate (the percentage of players that get to the end successfully).

Metrics


In order to improve the tutorial a set of tutorial-specific metrics should be defined. For Ancient Blocks the key metrics we need are:
  • The percentages of players that make it through each tutorial step.
  • The percentage of players that actually finish the tutorial.
  • The amount of time spent on each step.
  • The percentage of players that go on to play the level after the tutorial.

Implementation


Tracking tutorial steps is straightforward using an action-based analytics platform - in our case, Calq. Ancient Blocks uses a single action called Tutorial Step. This action includes a custom property called Step to indicate which tutorial step the user is on (0 indicates the first step). We also want to track how long a user spends on each step (in seconds), so we include a second property called Duration.

Action: Tutorial Step
Properties:
  • Step - The current tutorial step (0 for start, 1, 2, 3 ... etc).
  • Duration - The duration (in seconds) the user took to complete the step.
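
As a rough sketch (using a stand-in tracking client rather than the real Calq API), sending that action when a step is completed could look like this:

#include <cstdio>
#include <map>
#include <string>

// stand-in for whatever analytics SDK you actually use; replace with the real client
struct AnalyticsClient
{
    void Track(const std::string& action, const std::map<std::string, double>& properties)
    {
        // forward to the real tracking SDK here; printed for illustration only
        std::printf("track: %s\n", action.c_str());
        for (const auto& property : properties)
            std::printf("  %s = %g\n", property.first.c_str(), property.second);
    }
};

// send one "Tutorial Step" action per completed step
void OnTutorialStepCompleted(AnalyticsClient& analytics, int step, double durationSeconds)
{
    analytics.Track("Tutorial Step", {
        { "Step",     static_cast<double>(step) },
        { "Duration", durationSeconds }
    });
}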

Analysis


Analysing the tutorial data is reasonably easy. Most of the metrics can be found by creating a simple conversion funnel, with one funnel step for each tutorial stage.

The completed funnel query shows the conversion rate of the entire tutorial on a step by step basis. From here it is very easy to see which steps "lose" the most users.


Attached Image: GameExample-Tutorial-Funnel2.png


As you can see from the results: step 4 has a conversion rate of around 97% compared to 99% for the other steps. This step would be a good candidate to improve. Even though it's only a 1 percentage point difference, that still means around $1k in lost revenue just on that step. Per month! For a popular game the difference would be much larger.

Part 2 continues next week, looking at metrics on game balance and player progression.

Procedural Generation of Puzzle Game Levels


If you ever wanted to create a puzzle game you probably found that implementing and coding the game rules is relatively easy, while creating the levels is a hard and time-consuming job. Even worse, maybe you spent a lot of time creating some levels, with the intent to incorporate specific challenges into them, but when you asked your friend to try one, she solved it in a totally different way or with shortcuts you never imagined.


How great would it be if you found a way to employ your computer, saved a lot of time and solved issues like the ones mentioned above... This is where procedural generation comes to the rescue!


It is necessary to say that while there is only one correct way to, for example, sum vectors, and every programmer wanting to do it has to follow the same rules, when it comes to procedural generation you are absolutely free. No way is right or wrong. The result is what counts.


Fruit Dating – its rules and features


A few days ago we released our Fruit Dating game for iOS devices (it is also available for Android and even for the unreleased Tizen). The game is a puzzle game with simple rules: your goal is to match pairs of fruit of the same color simply by swiping with your finger. The swipe corresponds to tilting the board in the given direction, and all movable objects move in that direction at once, so while you are trying to reach your goal various obstacles like stones, cars or other fruit get in the way. To illustrate, in the pictures below you can see the first level, which needs 3 moves to match a pair.


Attached Image: Fruit Dating_html_m88599c8.png

Attached Image: Fruit Dating_html_m44e3317e.png

Attached Image: Fruit Dating_html_121d49c2.png

Attached Image: Fruit Dating_html_m1882db5e.png


Over time new features are introduced:


Attached Image: Fruit Dating_html_m661f5865.png One-ways are placed on the border of a tile and limit the directions in which you can move.
Attached Image: Fruit Dating_html_m297a4d61.png Anteaters can look in any direction, but their direction is fixed and does not change during the level. When a fruit is in the direction the anteater faces and no obstacles are in the way, the anteater shoots its tongue and drags the fruit to it.
Attached Image: Fruit Dating_html_6d318479.png Mud can be passed over by stones, cars or barrels but not by fruit. When a fruit falls into mud it gets dirty and there is no date!
Attached Image: Fruit Dating_html_74377081.png A sleeping hedgehog sits on a tile and wakes up when hit by something. If hit by a barrel, stone or car he falls asleep again, as these items are not edible. But when he is hit by fruit, he eats it.

You probably noticed that the game is tile-based, which simplifies things as each level can be represented with a small grid. The maximum size is 8x8 tiles, but as there is always a solid border, the “usable” area is 6x6 tiles. It may seem too small, but it turned out that some very complex puzzles can be generated for it.


With the basic rules in place (the additional features were added later) I started to build my generator. My first thought was that someone in the world had surely already solved a similar problem, so I started to search the internet for procedural generation of levels for puzzle games. It turned out that this topic is not widely covered; I found only a few articles useful to me. Most of them were about generating / solving Sokoban levels, for example:

http://larc.unt.edu/ian/pubs/GAMEON-NA_METH_03.pdf
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=52E99748D5C48013A84BA983612AB7C4?doi=10.1.1.47.2303&rep=rep1&type=pdf

It was also interesting that most of them were written by academics (professors of Sokoban! :-)). From the papers I learned two things: first, when generating something randomly it is good if it has some symmetry in it, as people will perceive it more positively. Second, the algorithm is up to you, but none is ideal.


Solver


As it was obvious that every generated level would have to be tested (whether it is possible to solve it and how easy or hard it is), I first wanted to code a solver. As at that time I was only considering the basic rules and not the features added later, I came up with these ideas for the solver:


a) from the initial position you can start in any direction (up, left, right, down),
b) from the next position you can continue in any direction again,
c) in any position check for fruit matches, remove matched fruit from the board and continue with b) while some fruit remains on the board.


As you can see, it is a simple brute force approach. The number of possible board situations is 4, 4*4 = 4^2, 4*4*4 = 4^3, … 4^n. By the 10th move it is more than a million board situations and by the 25th move it is 1,125,899,906,842,624 board situations. Okay, you could limit the maximum number of moves to some number, say 10, and not be interested in more difficult levels, but there is another hidden danger. Some puzzles can be designed or generated in such a way that if the player makes some bad moves at the beginning, she cannot finish the level. Or, in some levels, you can get into a loop of board situations. If the algorithm branched early into such a dead end, the level would be marked as unsolvable even if other branches held a simpler solution. Also, if this algorithm found a solution, there would be no guarantee that it is the shortest one – you would have to finish all branches to find the shortest solution. Besides this, there are very often board situations in which a move in a particular direction does not change anything. See the third picture in “Fruit Dating – its rules and features” - there is no change if the board is tilted left.


So, the rules changed:


a) from the current position try to move in each direction,
b) if there is a change in the board situation, check whether the situation is new or you have already been in it,
c) if it is a new situation, store it along with its solution depth (the number of moves needed to reach it),
d) if you have been in this situation before with an equal or lower solution depth, terminate this branch. Otherwise, remove the old situation (as you have just reached it in fewer moves) and continue.


There are also other rules, like checking for matches and terminating the whole process when a solution is found, and later new rules when features were added, but this is the core of the solver. It quickly cuts off whole branches without a solution. Besides the solution depth, each board situation also stores a reference to its parent situation, so it is easy to print the final solution at the end. Let's demonstrate it on the first level of the game:


Attached Image: Fruit Dating_html_m2cc98e5e.png


From the initial position a move in all four directions is possible. These are labeled 1-1, 1-2, 1-3, 1-4. The algorithm always tries to move right, up, left, down, in this order. As it uses a stack to store situations to examine further, the first situation to continue from is the last one pushed onto the stack (1-4 in this case). Again, the first try is a move to the right (2-1), and as this is a new situation it is pushed onto the stack. Next is a move up, which results in situation 2-2. We have already been in this situation, in the first round, so we apply rule d) and terminate this branch – nothing is pushed onto the stack. Next, a move to the left is tried. It results in a new situation (2-3), which is pushed onto the stack. The last move is down, but there is no change between 1-4 and 2-4, so nothing is pushed onto the stack (rule b) … no new situation = do nothing). Now the top of the stack is situation 2-3. From it we move right and get into situation 3-1, which is equal to situation 2-1. But we reached 2-1 already in the second round, with a lower depth, so we terminate this branch. Next we move up; the fruits end up on adjacent tiles and are matched, and as it was the only pair, the level is solved.


The algorithm works; however, it may not find the shortest solution – it simply returns the first solution found. To overcome this, I first run it with the maximum number of moves limited to 30. If no solution is found, I say that the level has no solution. If a solution is found in, let's say, 15 moves, I run the solver again with a maximum depth of 14 (15 – 1). If no solution is found, then 15 was the shortest. If a solution is found in, let's say, 13 moves, I run the solver again with a maximum allowed depth of 12 (13 – 1). I repeat this while a solution is still returned. The last returned solution is the shortest one.
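
To make the idea concrete, here is a minimal sketch of the solver core in C++. It is not the author's actual code: Board, applyMove and isSolved are hypothetical stand-ins for the game-specific parts, and only the visited-situation bookkeeping and the iterative shortening described above are shown.

#include <functional>
#include <stack>
#include <string>
#include <unordered_map>

using Board    = std::string;                               // flattened grid, one char per tile
using MoveFn   = std::function<Board(const Board&, int)>;   // assumed: board after tilting in a direction
using SolvedFn = std::function<bool(const Board&)>;         // assumed: true when no fruit remains

struct Node { Board board; int depth; };

// Returns the depth of the first solution found, or -1 if none exists within maxDepth.
int solve(const Board& start, int maxDepth, const MoveFn& applyMove, const SolvedFn& isSolved)
{
    std::unordered_map<Board, int> bestDepth;               // board situation -> fewest moves seen so far
    std::stack<Node> open;                                  // situations still to examine
    open.push({ start, 0 });
    bestDepth[start] = 0;

    while (!open.empty())
    {
        Node n = open.top(); open.pop();
        if (isSolved(n.board)) return n.depth;              // solution found: stop the whole process
        if (n.depth >= maxDepth) continue;

        for (int dir = 0; dir < 4; ++dir)                   // right, up, left, down
        {
            Board next = applyMove(n.board, dir);
            if (next == n.board) continue;                  // rule b): no change, do nothing

            auto it = bestDepth.find(next);
            if (it != bestDepth.end() && it->second <= n.depth + 1)
                continue;                                   // rule d): seen before at equal or lower depth

            bestDepth[next] = n.depth + 1;                  // rule c): store the new (or improved) situation
            open.push({ next, n.depth + 1 });
        }
    }
    return -1;
}

// The iterative shortening described above: keep re-running with a tighter move limit.
int shortestSolution(const Board& start, const MoveFn& applyMove, const SolvedFn& isSolved)
{
    int best  = -1;
    int limit = 30;                                         // initial cap on the number of moves
    while (true)
    {
        int found = solve(start, limit, applyMove, isSolved);
        if (found < 0) return best;                         // the previous result (if any) was the shortest
        best  = found;
        limit = found - 1;                                  // try to beat it by at least one move
    }
}

Parent references for printing the final solution are omitted here; in practice each stored situation would also remember the move and the situation it came from.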


Generator


Now that the solver works, we can move to the generator and validate every generated puzzle with it.


The generation phase can be split into two parts:
  • generating walls
  • generating on-board objects

The wall generation always starts with drawing a solid board border:


Attached Image: Fruit Dating_html_323042ca.png


Some random parameters are generated that determine whether the walls will be painted one tile at a time or two tiles at a time. If two tiles at a time, a random symmetry is generated. It determines where the second tile will be placed – whether it is mirrored horizontally, mirrored vertically, rotated by 90 degrees, or a combination of these. The first grid in the picture below shows painting one tile at a time. The rest are for two tiles at a time with different random symmetries:


Attached Image: Fruit Dating_html_38e41668.png


The number of walls is random, as are their lengths and directions. Each wall starts from a random point on the border. Every wall is drawn in one or more iterations. After the first iteration, a random number between 0 and the wall length – 1 is chosen. If it equals zero, the iteration loop is terminated. If it is greater than zero, this number becomes the length of the next part of the wall. A random point on the current wall part is chosen, the direction is set to be orthogonal to the current wall part, and the next part of the wall is drawn. The result may look like this (the numbers label the iterations):


Attached Image: Fruit Dating_html_m5fe08493.png


From the picture you can see that every subsequent part of the wall is shorter, so you can be sure the process terminates at some point.
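
A rough sketch of this iteration in C++, under the assumption of an 8x8 grid of ints where 1 marks a wall (the grid layout and helper names are mine, not the author's):

#include <cstdlib>

constexpr int W = 8, H = 8;

// Draws one wall part of the given length, then possibly branches off it with a
// shorter, orthogonal part, exactly as described above.
void drawWall(int grid[H][W], int x, int y, int dx, int dy, int length)
{
    int sx[64], sy[64], count = 0;               // remember the tiles of this part
    for (int i = 0; i < length; ++i)
    {
        if (x < 0 || y < 0 || x >= W || y >= H) break;
        grid[y][x] = 1;                          // 1 = wall
        sx[count] = x; sy[count] = y; ++count;
        x += dx; y += dy;
    }
    if (count == 0 || length <= 1) return;

    int next = std::rand() % length;             // random number between 0 and length - 1
    if (next == 0) return;                       // zero: the iteration loop terminates

    int k = std::rand() % count;                 // random point on the current wall part
    // orthogonal direction, with a random sign
    int ndx = dy ? ((std::rand() % 2) ? 1 : -1) : 0;
    int ndy = dx ? ((std::rand() % 2) ? 1 : -1) : 0;
    drawWall(grid, sx[k], sy[k], ndx, ndy, next);
}

Because each follow-up part is strictly shorter than the previous one, the recursion is guaranteed to stop.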


So far all walls started from the border, so every wall tile was always connected to the border. That looked boring, so I added another step where inner walls are generated. Inner walls are not connected to any existing wall tile. The step starts by selecting a random tile and checking whether it is free, as well as its 3x3 surrounding. If yes, a wall WILL be placed into the grid and the next tile is chosen based on a random direction (this direction is chosen randomly before the first tile is tested). The loop terminates when the condition of a free 3x3 surrounding no longer holds. Notice the stress on the word “will” above. If you placed the wall into the grid immediately and proceeded to the next tile, the 3x3 surrounding would never be free, as you would have just placed a wall there. So I store all wall tiles in a temporary array and place them into the grid at once when the loop terminates, as in the sketch below.
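
A sketch of that inner-wall step, assuming the same kind of 8x8 tile grid (again, the grid type and helper names are assumptions for illustration):

#include <cstdlib>
#include <utility>
#include <vector>

constexpr int W = 8, H = 8;
enum Tile { Free, Wall };

// Assumed helper: true if (x, y) and its full 3x3 neighbourhood contain no wall.
bool has3x3FreeSurround(const Tile grid[H][W], int x, int y)
{
    for (int dy = -1; dy <= 1; ++dy)
        for (int dx = -1; dx <= 1; ++dx)
        {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= W || ny >= H) return false;
            if (grid[ny][nx] != Free) return false;
        }
    return true;
}

// Inner-wall generation: collect candidate tiles first and commit them all at once,
// otherwise the freshly placed wall would immediately break its own 3x3 test.
void addInnerWall(Tile grid[H][W])
{
    static const int dirs[4][2] = { {1,0}, {0,-1}, {-1,0}, {0,1} };
    int x = 1 + std::rand() % (W - 2);
    int y = 1 + std::rand() % (H - 2);
    const int* d = dirs[std::rand() % 4];        // direction chosen before the first test

    std::vector<std::pair<int,int>> pending;     // the temporary array from the text
    while (has3x3FreeSurround(grid, x, y))
    {
        pending.push_back({x, y});
        x += d[0];
        y += d[1];
    }
    for (auto& p : pending)                      // place the whole wall in one go
        grid[p.second][p.first] = Wall;
}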


During wall generation some walls may overlap others, and it is very probable that some small spaces will be created, or even that the initial area will be divided into several disconnected areas. This is something we do not want, and this is why, in the next step, I check which continuous area is the largest and fill all the others with walls.


In this check I iterate through the whole board grid and, if a tile is free, I recursively fill its whole continuous area with an area ID (free tiles are tiles without a wall and with no area ID yet). After that I iterate through the whole board again and count the tiles for each area ID. Finally, I iterate over the board one last time, filling all tiles that have an area ID with walls, except for those with the area ID that has the highest count.
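
This is a plain flood fill; a minimal sketch, assuming a grid of ints where -1 marks a wall and 0 marks a free tile that has not received an area ID yet:

#include <algorithm>
#include <vector>

constexpr int W = 8, H = 8;

// cell values: -1 = wall, 0 = free and not yet assigned, > 0 = area ID
void floodFill(std::vector<std::vector<int>>& g, int x, int y, int id)
{
    if (x < 0 || y < 0 || x >= W || y >= H || g[y][x] != 0) return;
    g[y][x] = id;
    floodFill(g, x + 1, y, id); floodFill(g, x - 1, y, id);
    floodFill(g, x, y + 1, id); floodFill(g, x, y - 1, id);
}

// Keep the largest connected free area; fill every other free tile with walls.
void keepLargestArea(std::vector<std::vector<int>>& g)
{
    int nextId = 1;
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            if (g[y][x] == 0) floodFill(g, x, y, nextId++);

    std::vector<int> count(nextId, 0);                       // tiles per area ID
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            if (g[y][x] > 0) ++count[g[y][x]];

    int best = static_cast<int>(
        std::max_element(count.begin() + 1, count.end()) - count.begin());
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            if (g[y][x] > 0 && g[y][x] != best) g[y][x] = -1; // fill with a wall
}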


The whole process of generating walls can be seen in this animation. There is wall generation, inner wall generation, and in the last frame a hole in the lower right corner is filled during area consolidation:


Attached Image: Fruit Dating_html_12b35c0f.gif


When walls are generated we can generate objects. We need at least one pair of fruit and zero or more obstacles (represented by stones, cars, barrels in the game).


It would be nice if fruit was placed most of the time in corners, at the ends of corridors, and so on. Placing it in the middle of an open area can also be interesting sometimes, but the former is preferable. To achieve this, we add a weight to every free tile expressing how attractive it is for placing fruit there.


For corridor ends, surrounded by wall tiles on 3 sides, I selected a weight of 6 + Random(3). For tiles in horizontal or vertical corridors I selected a weight of 2. For corners I selected 3 + Random(3), and for open areas 1.


From the weights it is obvious that the most preferable placement is at the end of a corridor, followed by placement in corners, corridors and open areas. The random numbers in the weights can also influence this and swap the ranking of corridor ends and corners. The weights are generated only once for each generated level.
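
The article does not spell out exactly how the weights are turned into a placement, so the sketch below assumes a common roulette-wheel pick, where a tile is chosen with probability proportional to its weight (the struct and function names are mine):

#include <cstdlib>
#include <vector>

struct WeightedTile { int x, y, weight; };

// Returns an index into `tiles`, chosen with probability proportional to its weight.
int pickWeighted(const std::vector<WeightedTile>& tiles)
{
    int total = 0;
    for (const auto& t : tiles) total += t.weight;           // assume total > 0

    int r = std::rand() % total;
    for (std::size_t i = 0; i < tiles.size(); ++i)
    {
        r -= tiles[i].weight;
        if (r < 0) return static_cast<int>(i);
    }
    return static_cast<int>(tiles.size()) - 1;               // not reached in practice
}

With such a pick, a corridor end (weight 6-8) is several times more likely to receive a fruit than an open-area tile (weight 1), and a tile with weight 0 is never chosen.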


Obstacles (stones, cars, barrels) are placed in a similar way; the only differences are that their weights are kept separate from the fruit weights, and that a random obstacle density, which determines how many obstacles will be in the level, is also chosen.


By the way, you can do other tricks with the weights. Features added later were the sleeping hedgehog and the anteater (see the feature descriptions at the beginning). Placing them in the middle of a corridor made no sense, so their weight for corridors is 0.


In the animation below you can see the level being populated with fruit and obstacles:


Attached Image: Fruit Dating_html_40aca24b.gif


The final generated level is in the static picture below. It takes 6 moves to solve (right, up, left, down, right, up). Great – after 1-2 minutes of clicking the Generate button we have a level that looks interesting and can be solved in 6 steps (no one will play levels with 30-step solutions!), while it is also not a breeze to find the solution. But… it could still be a little bit better. This is the point where our manual edits come in to make the levels nicer.


Attached Image: Fruit Dating_html_6d9a0a02.png


Editor


The generation ended in the previous part. Our editor supports drag and drop, so it is easy to rearrange the objects to achieve a higher level of symmetry, like this:


Attached Image: Fruit Dating_html_1926162e.png


It is important to re-test the level with the solver after adjustments. Sometimes a small change may lead to an unsolvable level. In this case the adjustments increased the number of solution steps from six to seven.


With this manual step, the approach to procedurally generated levels forks. If you need or want manual adjustment, then procedural generation works for you only as a really big time saver. If this step is not necessary, or you think the generated levels are fine as they are, then the generator can become part of the final game and players can generate new levels themselves.


Final result


Generating levels procedurally saved us an enormous amount of time. Although the generator also produces rubbish – levels too easy or too hard to complete, levels full of obstacles, or ugly-looking levels – it still saved us an enormous amount of time and allowed us to be selective and throw a lot of levels away. If we had made the levels by hand, it would have taken months. This is how the levels generated in this article look in the final game:


Attached Image: Fruit Dating_html_4463e96e.jpg

Dijkstra's Algorithm - Shortest Path

Note - This is not my area of expertise but I am very much interested in it and I welcome any corrections

Outline


This post will cover the basics of Dijkstra's shortest path algorithm and how it can apply to pathfinding for game development. It is my opinion that understanding this algorithm will aid in understanding more complex AI algorithms, such as A*. This post is aimed at developers starting out in game development or those curious about Dijkstra's algorithm; it is a somewhat simplified treatment that discusses mainly the concepts.

Introduction


What’s an algorithm?


An algorithm is basically a system for solving a problem. For us humans, looking at a 2D grid with many objects, we can easily tell which path the character should take to reach his or her goal without thinking much about it. What we want to do is translate those semi-subconscious mental steps into a list of steps that anyone (or a computer) can repeat to get the same answer every time.

Finding the shortest route from one object to another when developing game AI is a very common problem, and many solutions exist. At least in 2D grid / tile based games, perhaps the most common one is A*, with Dijkstra's also being quite good. Depending on the complexity of the game, Dijkstra's algorithm can be nearly as fast as A* with some tweaking. A* is generally a better choice, but can be slightly more complex, so I'm going to discuss the fundamentals of Dijkstra's algorithm and in later posts talk about others, such as A*.

I'll be using the word graph here a lot, and it may not be immediately obvious how this translates to game dev, but you can easily translate this to 2D grid or tile based maps.

Dijkstra’s Algorithm


Let's first define what exactly the problem is. Take this graph, for instance.


Attached Image: shortest_path1.png


For the purposes of this post, the blue circles represent "nodes" or "vertices" and the black lines are "edges" or "node paths". Each edge has a cost associated with it. For this image, the number in each node is simply a label for the node, not the individual node cost.

Our problem is to find the most cost efficient route from Node1 to Node4. The numbers on the node paths represent the "cost" of going between nodes. The shortest path from Node1 to Node4 is to take Node1 to Node3 to Node4, as that is the path where the least cost is incurred.

Specifically, the cost to go from Node1 to Node3 is 2, and adding the cost of Node3 to Node4 (5) gives a total of 7 (2 + 5).

Now, we can see that the alternative (Node1 to Node2 to Node4) is much more costly (it costs 11, versus our 7).

An important note - greedy algorithms aren't really effective here. A greedy algorithm would basically pick the cheapest local cost as it traverses the graph, in the hope that the result is globally optimal when it's done. Meaning, a greedy algorithm would just take the first low value it sees. In this case, the lower value is 1, but the next value is 10. If we simply applied a greedy algorithm, we would end up taking the more costly route from Node1 to Node4.

Figuring out the best path to take in this graph is pretty easy for us to do mentally; if you can add small numbers, you can figure out the best path to take. The goal is to translate the steps we take in our minds into steps a computer can follow.

Dijkstra's algorithm is an algorithm that will determine the best route to take, given a number of vertices (nodes) and edges (node paths). So, if we have a graph, if we follow Dijkstra's algorithm we can efficiently figure out the shortest route no matter how large the graph is.

Dijkstra's algorithm provides for us the shortest path from NodeA to NodeB.

This high level concept (not this algorithm specifically) is essentially how Google maps provides you directions. There are many thousands of vertices and edges, and when you ask for directions you typically want the shortest or least expensive route to and from your destinations.

So, how does this apply to game AI? Well, the correlation is quite strong. In a 2D grid or tile based map, there are many nodes (or tiles) and each tile can have a value associated with it (perhaps it is less expensive to walk across grass than it is to walk across broken bottles or lava).

You can set up your tiles so that each tile has a node path value associated with it, so if you put a non-player character (NPC) in the map you can use Dijkstra's algorithm to compute the shortest path for the NPC to take to any tile in your map.
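
As an illustration only (the article does not prescribe a concrete representation), one way to turn such a tile map into a weighted graph is to make every walkable tile a node and give each edge the cost of the tile it steps onto:

#include <vector>

struct Edge { int to; int cost; };
using Graph = std::vector<std::vector<Edge>>;    // adjacency list, node index = y * width + x

// tileCost[y][x]: terrain cost of stepping onto that tile (grass cheap, lava expensive),
// with 0 or less meaning the tile is not walkable. All names here are assumptions.
Graph buildGraphFromTiles(const std::vector<std::vector<int>>& tileCost)
{
    int h = static_cast<int>(tileCost.size());
    int w = static_cast<int>(tileCost[0].size());
    Graph g(w * h);
    const int dx[4] = { 1, -1, 0, 0 };
    const int dy[4] = { 0, 0, 1, -1 };

    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int d = 0; d < 4; ++d)
            {
                int nx = x + dx[d], ny = y + dy[d];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (tileCost[ny][nx] <= 0) continue;          // blocked tile: no edge
                g[y * w + x].push_back({ ny * w + nx, tileCost[ny][nx] });
            }
    return g;
}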

How it works


First we'll describe Dijkstra's algorithm in a few steps, and then expound on them further:

Step 0


Temporarily assign C(A) = 0 and C(x) = infinity for all other x.
C(A) means the Cost of A
C(x) means the current cost of getting to node x


Step 1


Find the node x with the smallest temporary value of c(x).
If there are no temporary nodes or if c(x) = infinity, then stop.
Node x is now labeled as permanent. Node x is now labeled as the current node. C(x) and parent of x will not change again.


Step 2


For each temporary vertex y adjacent to x, make the following comparison:
if c(x) + Wxy < c(y), then c(y) is changed to c(x) + Wxy
assign y to have parent x


Step 3


Return to step 1.
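
Here is a minimal sketch of these steps in C++, using an adjacency-list graph like the one built earlier. The temporary / permanent labels, the cost array and the parent array map directly onto steps 0-3; this is an illustration, not a tuned implementation:

#include <limits>
#include <vector>

struct Edge { int to; int cost; };
using Graph = std::vector<std::vector<Edge>>;    // graph[x] = edges leaving node x

// Fills, for every node, its cost from `start` and its parent on the shortest path.
void dijkstra(const Graph& g, int start,
              std::vector<int>& cost, std::vector<int>& parent)
{
    const int INF = std::numeric_limits<int>::max();
    int n = static_cast<int>(g.size());
    cost.assign(n, INF);                         // Step 0: C(x) = infinity for all x ...
    parent.assign(n, -1);
    std::vector<bool> permanent(n, false);
    cost[start] = 0;                             // ... and C(A) = 0

    while (true)
    {
        // Step 1: find the temporary node x with the smallest c(x)
        int x = -1;
        for (int i = 0; i < n; ++i)
            if (!permanent[i] && (x == -1 || cost[i] < cost[x])) x = i;
        if (x == -1 || cost[x] == INF) break;    // no usable temporary nodes: stop
        permanent[x] = true;                     // c(x) and the parent of x will not change again

        // Step 2: compare every temporary node y adjacent to x
        for (const Edge& e : g[x])
        {
            int y = e.to;
            if (!permanent[y] && cost[x] + e.cost < cost[y])
            {
                cost[y]   = cost[x] + e.cost;    // c(y) = c(x) + Wxy
                parent[y] = x;                   // assign y to have parent x
            }
        }
        // Step 3: return to step 1 (the loop)
    }
}

Walking back through parent[] from the goal node to the start node then yields the shortest path itself.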


Before diving into a little more tricky graph, we'll stick with the original graph introduced above. Let's get started.

Step 0.


Temporarily assign C(A) = 0 and C(x) = infinity for all other x. C(A) means the cost of A; C(x) means the current cost of getting to node x.

The following graph has changed a little from the one shown above. The nodes no longer have labels, apart from our starting point NodeA and our goal NodeB.


Attached Image: sp_1_1.png


Legend


Orange line – path to parent node
Yellow arrow – points to the node’s parent
Green node cost text – node cost is permanent
White node cost text – node is temporary
Yellow highlight – Current node


We assign a cost of 0 to Node A and infinity to everything else. We're done with this step now.

Step 1


Find the node x with the smallest temporary value of c(x).
If there are no temporary nodes or if c(x) = infinity, then stop.
Node x is now labeled as permanent. Node x is now labeled as the current node. C(x) and parent of x will not change again.
Since 0 is the lowest value, we set A as the current node and make it permanent.

Step 2


For each temporary vertex y adjacent to x, make the following comparison:
if c(x) + Wxy < c(y), then
c(y) is changed to c(x) + Wxy
assign y to have parent x

There are two temporary nodes adjacent to our current node, so calculate their cost values based on the current node's value plus the cost of the edge to each adjacent node. Assign that value to the temporary node only if it's less than the value that's already there. So, to clarify:

The top node is adjacent to the current node and has a cost of infinity. 0 (the current node's value) + 1 (the cost of the edge to it) = 1, which is less than infinity, so we change its value from infinity to 1. This value is not yet permanent.

Now, do the same calculation for the next adjacent node, which is the bottom node. The value is 0 + 2 = 2, which is also less than infinity.

To illustrate:

Attached Image: sp_1_2.png
We have now looked at each temporary node adjacent to the current node, so we're done with this step.

Step 3


Return to step 1.


So, let's go back to step 1. From this point forward, I'll be using the term iteration to describe our progression through the graph via Dijkstra's algorithm. The steps we previously took I'll refer to as iteration 0, so now when we return to step 1 we'll be at iteration 1.

Iteration 1


We’re back at the first step. It says look for the smallest temporary cost value and set it as permanent. We have two nodes to look at, the top node with cost 1 and the bottom node with cost 2.

The top node has a cost of 1, which is less than 2, so we set it as permanent and set it as our current node. We designate this by a yellow shadow in the image. Now, it is important to keep in mind that the bottom node still has a temporary cost assigned to it. This temporary cost is what allows the algorithm to find the actual cheapest route – you’ll see in a second.

Step 1


Find the cheapest node. Done, it’s set as permanent and our current node is this one. This node value will not change.

Attached Image: sp_2_1.png
The yellow highlight indicates the node we are currently on, and the green text means the node cost is permanent. The nodes with white text for their costs are temporary nodes.

Step 2


Assign cost values. There is only one adjacent node to our current node. Its current value is infinity, and 1 + 10 = 11 is less than infinity, so we assign 11 as its temporary cost value.

Attached Image: sp_2_2.png
This is not the shortest path from NodeA to NodeB, but that's fine. The algorithm traverses all nodes in the graph, so you get the shortest path from the start node to any other node. You can see that the shortest path from NodeA to the top node is the line between NodeA and the top node - well, of course, you say, because that's the only possible path from NodeA to the top node. And you are right to say that, because it's true. But let's say we had a node above the top node (we'll call it Top2). The shortest path to it would be from NodeA to the top node to Top2. Even though our goal is to go from A to B, as a side effect we also get the shortest route to every other node. If that's a bit unclear, it should clear up after we go through the next iteration.

Done with step 2, let's continue to step 3.

Step 3


Return to step 1.


Iteration 2


Ok, so now we look again at the temporary nodes to see which has the lowest value. Even though we calculated the temporary value of B as 11, we are not done because that value might change (in this case, it will definitely change).

Step 1


Pick the cheapest node, set it as our current node, make it permanent, and assign its parent. We have two remaining temporary nodes with costs of 2 and 11. 2 is lower, so we pick that node, make it permanent and set it as our current node. Let’s take a look at the graph to elucidate a bit: out of 11 and 2, as we said, 2 is cheaper, so we pick it, set this node’s value to be permanent and assign NodeA as its parent, demonstrated by the arrow.

Attached Image: sp_3_1.png


Step 2


Assign cost values to temporary nodes adjacent to the current node. Again, like in the previous iteration, there is only one node to do a cost calculation on, as there is only one temporary node adjacent to the current node. This adjacent node is NodeB. So, we check to see if 2 + 5 < Node B’s temporary cost of 11. It is, so we change Node B from 11 to 7.

Attached Image: sp_3_2.png


Step 3


Return to step 1


Iteration 3


Almost done.

Step 1


Choose the cheapest temporary node value. There is only one temporary node remaining, so we pick it and set it as permanent, set it as our current node, and set its parent.

Attached Image: sp_4_1.png


Step 2


Assign costs. There are no temporary nodes adjacent to Node B (there are permanent nodes, but we don’t check them).

Step 3


Return to step 1.


Iteration 4


Step 1


Choose the cheapest temporary node. If none exists or c(x) = infinity, then stop. There are no more temporary nodes and no remaining nodes have a value of infinity, so we’re done. The algorithm has finished, and we have the shortest path not only from A to B, but from A to every other node in the graph. With a graph as small as this, it's not immediately obvious how powerful and useful this algorithm is.


Another Example


So, on to a more complicated graph now.


A is our starting point, and B is the ending point. Now, we could just as well apply this to a 2D tile based game where A could represent an NPC and B could represent the NPC's desired destination.

If you take a minute, you can probably find the least expensive route yourself. As mentioned earlier, it's fairly trivial for us to come up with the answer; what we need to do is figure out how to convey the steps we take as more general steps that can be repeated by a computer for any graph. For this graph, I won't be as thorough in explaining every step, but the exact same process is applied. Instead, I'll just provide an example of a slightly more complex graph and what it would look like using Dijkstra's algorithm.


Step 0


Temporarily assign C(A) = 0 and C(x) = infinity for all other x.
C(A) means the Cost of A
C(x) means the current cost of getting to node x
So what does this mean? Well, our start point is A, so c(A) = 0 means we assign A a cost of 0 and set the cost of every other node x to infinity, like the following:

Attached Image: shortest_path2_1.PNG
We assign a cost of 0 to our starting node A and a cost of infinity to every other node. As before, none of these costs are permanent yet.


Step 1


The node with the smallest temporary value is node A with a cost of 0. Therefore, we're going to make it permanent - meaning c(x) and the parent will not change.

Attached Image: shortest_path2_1_a_selected.png
The 0 will not change now. If there are no temporary nodes, or if c(x) is infinity, the algorithm stops. Now, step 2.


Step 2


Basically, we're going to look at all the nodes that are connected to the currently selected node and calculate the cost to get to them. If the cost of y is less than what it previously was, it will change - this will be discussed soon.

So, let's first calculate the cost to get to the adjacent nodes. The cost is based on the value of the current node plus the edge (node path) cost. Right now, since this is our first pass, the cost of our current node is 0, as we haven't done any traversals.

So, let's start to figure out the c(x), the node costs.

Attached Image: shortest_path2_1_a_selected_3.png
Notice the yellow arrows. I'm using them to designate which node each one got its cost from. Here, since there is only one possible parent node, they all point to the same place.

For the three nodes adjacent to A, we add the values of the edge and our current node (value of 0). So, the top node is 0 + 3 = 3, which is less than the current value (which is infinity), so we apply the value of 3 to the node. Then, the middle node 0 + 7 = 7, also less than infinity. Finally the bottom node has a value of 0 + 5 = 5, which is less than infinity. Therefore, the top node has a c(x) of 3, the middle a c(x) of 7, and the bottom a c(x) of 5.

Step 3.


Return to step 1

As before, we just iteratively go through the graph applying the same steps.

So, walking through this - as step 1 says:

We find node x with the smallest temporary value of c(x). So, out of the three temporary nodes with values 3, 5, and 7 that we just worked out, the smallest value is 3. We then make this node permanent.


Attached Image: shortest_path3_2.png


Now, this entire process just repeats itself over and over until there are no more temporary nodes.


Attached Image: shortest_path_final.png


And we're done. We have the shortest path from nodeA to any other node (or vice versa). Pretty convenient.

Conclusion


Hopefully that explains a bit about how Dijkstra's algorithm works. For game development, in particular overhead 2D tile based games, it is usually easier to implement Dijkstra's than A*, and not much worse performance-wise.

Performance


How well does Dijkstra's algorithm perform? Well, in terms of big O notation it is O(n^2), which is reasonably efficient. Specifically, suppose G has n vertices and m edges. Going through the steps, Step 0 takes time n. Step 1 is called at most n times, and finding the cheapest vertex takes at most n steps, so step 1 has an upper bound of n^2 in total. In Step 2, each edge / node path is examined at most twice, so the upper bound there is 2m. Putting it all together, the running time is no worse than n^2 + 2m. Again, in computer science terms, it is O(n^2): on the order of at most n^2 steps times a constant.

Better algorithms for NPC path finding certainly exist, but in general Dijkstra's is pretty good, and fairly easy to implement yourself.

A very good explanation of an implementation in Python (written by the guy who wrote Python) can be found at http://www.python.org/doc/essays/graphs.html

Article Update Log


19 October 2014: Initial release

How to Create a Scoreboard for Lives, Time, and Points in HTML5 with WiMi5

This tutorial gives a step-by-step explanation on how to create a scoreboard that shows the number of lives, the time, or the points obtained in a video game.

To give this tutorial some context, we’re going to use the example project StunPig in which all the applications described in this tutorial can be seen. This project can be cloned from the WiMi5 Dashboard.

image07.png

We require two graphic elements to visualize the scoreboard values: a “Lives” Sprite, which represents the number of lives, and as many Font or Letter Sprites as needed to represent the value of each digit to be shown. The “Lives” Sprite has four animations or image states, linked to each of the four numeric values of the lives level.


image01.png image27.png


The Font or Letter Sprite is a Sprite with 11 animations or image states, which are linked to each of the ten digits 0-9, plus an extra one for the colon (:).


image16.png image10.png


Example 1. How to create a lives scoreboard


To manage the lives, we need a numeric value for them, which in our example is a number between 0 and 3 inclusive, and a graphic representation, which in our case is three orange stars that turn white as lives are lost, until all of them are white when the number of lives is 0.


image12.png


To do this, in the Scene Editor, we must create the instance of the Sprite used for the stars; in our case, we’ll call it “Lives”. To manipulate it, we’ll have a Script (“lifeLevelControl”) with two inputs (“start” and “reduce”) and two outputs (“alive” and “death”).


image13.png


The “start” input initializes the lives by assigning them a numeric value of 3 and displaying the three orange stars. The “reduce” input lowers the numeric value of lives by one and displays the corresponding stars. As a consequence of triggering this input, one of the two outputs is activated. The “alive” output is activated if, after the reduction, the number of lives is greater than 0. The “death” output is activated when, after the reduction, the number of lives equals 0.

Inside the Script, we do everything necessary to change the value of lives, displaying the Sprite according to the number of lives, triggering the correct output depending on the number of lives, and, in our example, also playing a negative fail sound when the number of lives goes down.

In our “lifeLevelControl” Script, we have a “currentLifeLevel” parameter which contains the number of lives, and a parameter which contains the “Lives” Sprite, which is the element on the screen which represents the lives. This Sprite has four animations of states, “0”, “1”, “2”, and “3”.


image14.png


The “start” input connector activates the ActionOnParam “copy” blackbox which assigns the value of 3 to the “currentLifeLevel” parameter and, once that’s done, it activates the “setAnimation” ActionOnParam blackbox which displays the “3” animation Sprite.

The “reduce” input connector activates the “-” ActionOnParam blackbox which subtracts from the “currentLifeLevel” parameter the value of 1. Once that’s done, it first activates the “setAnimation” ActionOnParam blackbox which displays the animation or state corresponding to the value of the “CurrentLifeLevel” parameter and secondly, it activates the “greaterThan” Compare blackbox, which activates the “alive” connector if the value of the “currentLifeLevel” parameter is greater than 0, or the “death” connector should the value be equal to or less than 0.

Example 2. How to create a time scoreboard or chronometer


In order to manage time, we’ll have as a base a numerical time value that will run in thousandths of a second in the round and a graphic element to display it. This graphic element will be 5 instances of a Sprite that will have 10 animations or states, which will be the numbers from 0-9.


image10.png

image20.png


In our case, we’ll display the time in seconds and thousandths of a second, as you can see in the image, counting down: the time starts at the total time and decreases until it reaches zero and finishes.

To do this in the Scenes editor, we must create the 6 instances of the different Sprites used for each segment of the time display: the tens and units of seconds, the tenths, hundredths and thousandths of a second, and the colon. In our case, we’ll call them “second.unit”, “second.ten”, “millisec.unit”, “millisec.ten” and “millisec.hundred”.


screenshot_309.png


In order to manage this time, we’ll have a Script (“RoundTimeControl”) which has 2 inputs (“start” and “stop”) and 1 output (“end”), as well as an exposed parameter called “roundMillisecs” and which contains the value of the starting time.


image31.png


The “start” input activates the countdown from the total time and displays the decreasing value in seconds and milliseconds. The “stop” input stops the countdown, freezing the current time on the screen. When the stipulated time runs out, the “end” output is activated, which determines that the time has run out. Inside the Script, we do everything needed to control the time and display the Sprites in relation to the value of time left, activating the “end” output when it has run out.

In order to use it, all we need to do is set the time value in milliseconds, either by placing it directly in the “roundMillisecs” parameter or by using a blackbox to assign it. Once it has been assigned, we activate the “start” input, which displays the countdown until we activate the “stop” input or the time reaches 0, in which case the “end” output is activated, which we can use, for example, to remove a life or trigger whatever else we’d like.


image04.png


In the “RoundTimeControl” Script, we have a fundamental parameter, “roundMillisecs”, which contains and defines the playing time value in the round. Inside this Script, we also have two other Scripts, “CurrentMsecs-Secs” and “updateScreenTime”, which group together the actions I’ll describe below.

The activation of the “start” connector activates the “start” input of the Timer blackbox, which starts the countdown. As the defined time counts down, this blackbox updates the “elapsedTime” parameter with the time that has passed since the clock began counting, activating its “updated” output. This occurs from the very first moment and is repeated until the last time check, when the “finished” output is triggered, announcing that time has run out. Given that the time to run does not have to be a multiple of the interval between time updates and checks, the final value of the “elapsedTime” parameter will most likely be slightly greater than the requested time, which is something to keep in mind when necessary.

The “updated” output tells us we have a new value in the “elapsedTime” parameter and activates the “CurrentTimeMsecs-Secs” Script, which calculates the total time left in milliseconds and splits it into seconds and milliseconds for display. Once this information is available, the “available” output is triggered, which in turn activates the “update” input of the “updateScreenTime” Script, which sets the corresponding animations on the Sprites displaying the time.

In the “CurrentMsecs-Secs” Script, we have two fundamental parameters to work with: “roundMillisecs”, which contains and defines the playing time of the round, and “elapsedTime”, which contains the amount of time that has passed since the clock began running. In this Script, we calculate the time left and then break that time in milliseconds down into seconds and milliseconds - the latter is done in the “CalculateSecsMillisecs” Script, which I’ll be getting to.


image19-1024x323.png


The activation of the “get” connector starts the calculation of the time remaining, beginning with the activation of the “-” ActionOnParam blackbox, which subtracts the elapsed time contained in the “elapsedTime” parameter from the total run time contained in the “roundMillisecs” parameter. The resulting value, stored in the “currentTime” parameter, is the time left in milliseconds.

Once that has been calculated, the “greaterThanOrEqual” Compare blackbox is activated, which compares the value contained in “currentTime” (the time left) to 0. If it is greater than or equal to 0, it activates the “CalculateSecsMillisecs” Script, which breaks the remaining time down into seconds and milliseconds and, once done, triggers the “available” output connector. If it is less than 0, before activating the “CalculateSecsMillisecs” Script we activate a “copy” ActionOnParam blackbox which sets the remaining time to zero.


image30-1024x294.png


In the “CalculateSecsMillisecs” Script, we have as an input the value of the time left in milliseconds, contained in the “currentTime” parameter. The Script breaks this input value down into the remaining seconds and milliseconds, providing them through the “currentMillisecs” and “currentSecs” parameters. The activation of its “get” input connector activates the “lessThan” Compare blackbox, which checks whether the value contained in the “currentTime” parameter is less than 1000.

If it is less, the “true” output is triggered. This means there are no whole seconds left, so the whole value of “currentTime” is copied into the “currentMillisecs” parameter by a “copy” ActionOnParam blackbox, and the “currentSecs” parameter is given the value zero via another “copy” ActionOnParam blackbox. After this, the Script has the values it needs to provide, so it activates its “done” output.

On the other hand, if the check run by the “lessThan” Compare blackbox determines that “currentTime” is 1000 or more, it activates its “false” output. This activates the “/” ActionOnParam blackbox, which divides the “currentTime” parameter by 1000, storing the result in the “totalSecs” parameter. Once that is done, the “floor” ActionOnParam is activated, which leaves the whole part of “totalSecs” in the “currentSecs” parameter.

After this, the “-” ActionOnParam is activated, which subtracts “currentSecs” from “totalSecs”, giving us the fractional part of “totalSecs”, and stores it in “currentMillisecs”. It then activates the “*” ActionOnParam blackbox, which multiplies the “currentMillisecs” parameter (the fractional seconds left) by 1000 to convert it into milliseconds, storing the result back in “currentMillisecs” (overwriting the previous value). After this, the Script has the values it needs to provide, so it activates its “done” output.
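
For readers more comfortable with code, the same breakdown can be written like this (an illustration only; in WiMi5 it is built from the blackboxes described above, and the function name is mine):

#include <cmath>

struct TimeParts { int secs; int millisecs; };

// Hypothetical helper mirroring the "CalculateSecsMillisecs" Script.
TimeParts calculateSecsMillisecs(int currentTime)            // time left, in milliseconds
{
    TimeParts t;
    if (currentTime < 1000)                                  // "lessThan" against 1000
    {
        t.secs      = 0;                                     // no whole seconds left
        t.millisecs = currentTime;                           // whole value used as milliseconds
    }
    else
    {
        double totalSecs = currentTime / 1000.0;             // "/" blackbox
        t.secs      = static_cast<int>(std::floor(totalSecs));             // "floor"
        t.millisecs = static_cast<int>((totalSecs - t.secs) * 1000.0);     // "-" then "*"
    }
    return t;
}

Integer division and the remainder (currentTime / 1000 and currentTime % 1000) would give the same result more directly.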

When the “CalculateSecsMillisecs” Script finishes and activates its “done” output, this activates the “available” output of the “currentTimeMsecs-Secs” Script, which then activates the “updateScreenTime” Script via its “update” input. This Script handles displaying the data obtained in the previous Script, available in the “currentMillisecs” and “currentSecs” parameters.


image06.png


The “updateScreenTime” Script in turn contains two Scripts, “setMilliSeconds” and “setSeconds”, which are activated when the “update” input is activated, and which set the time value in milliseconds and seconds respectively when their “set” inputs are activated. Both Scripts are practically the same, since they take a time value and set the Sprites representing the digits of that value to the corresponding animations. The difference between the two is that “setMilliseconds” controls 3 digits (tenths, hundredths, and thousandths), while “setSeconds” controls only 2 (units and tens).


image11.png


The first thing the “setMilliseconds” Script does when activated is convert the “currentMillisecs” value to be represented into text via the “toString” ActionOnParam blackbox. This text is kept in the “numberAsString” parameter. Once the text has been obtained, we split it into characters, gathering them into a collection of Strings via the “split” ActionOnParam. It is very important to leave the content of the “separator” parameter of this blackbox empty, even though in the image you can see two quotation marks in the field. This collection of characters is gathered in the “digitsAsStrings” parameter. Later, based on the millisecond value to be presented, one animation or another will be set on the Sprites.

Should the time value to be presented be less than 10, which is checked by a “lessThan” Compare blackbox against the value 10, the “true” output is activated, which in turn activates the “setWith1Digit” Script. Should the time value be 10 or greater, the blackbox’s “false” output is activated, and it proceeds to check whether the time value is less than 100, using another “lessThan” Compare blackbox against the value 100. If this blackbox activates its “true” output, it activates the “setWith2Digits” Script. Finally, if it activates the “false” output, the “setWith3Digits” Script is activated.


image15.png


The “setWith1Digit” Script takes the first of the collection of characters, and uses it to set the animation of the Sprite that corresponds with the units contained in the “millisec.unit” parameter. The remaining Sprites (“millisec.ten” and “millisec.hundred”) are set with the 0 animation.


image22.png


The “setWith2Digits” Script takes the first of the collection of characters, and uses it to set the animation of the Sprite corresponding to the tenths place number contained in the “millisec.ten” parameter, the second character of the collection to set the Sprite animation corresponding to the units contained in the “millisec.unit” parameter and the “millisec.hundred” Sprite is given the animation for 0.


image29.png


The “setWith3Digits” Sprite takes the first of the collection of characters, and uses it to set the animation of the Sprite corresponding to the hundredths contained in the “millisec.hundred” parameter, the second character of the collection to set the animation of the Sprite corresponding to the tenths place value, contained in the “millisec.ten” parameter, and the third character of the collection to set the animation of the Sprite corresponding to the units place value contained in the “millisec.unit” parameter.


image18.png


The “setSeconds” Script, when activated, first converts the “currentSecs” value to be represented into text via the “toString” ActionOnParam blackbox. This text is kept in the “numberAsString” parameter. Once the text is obtained, we split it into characters, gathering them into a collection of Strings via the “split” ActionOnParam blackbox. It is very important to leave the content of the “separator” parameter of this blackbox empty, even though you can see two quotation marks in the field. This collection of characters is gathered in the “digitsAsStrings” parameter. Later, based on the value of the seconds to be shown, one animation or another will be set on the Sprites.

If the time value to be presented is less than 10 (checked by the “lessThan” Compare blackbox against the value 10), the “true” output is activated; the first character of the collection is taken and used to set the animation of the Sprite corresponding to the units place contained in the “second.unit” parameter. The other Sprite, “second.ten”, is given the animation for 0.

If the time value to be presented is 10 or greater, the “false” output of the blackbox is activated; the first character of the collection is used to set the animation of the Sprite corresponding to the tens place contained in the “second.ten” parameter, and the second character of the collection is used to set the animation of the Sprite corresponding to the units place contained in the “second.unit” parameter.

Example 3. How to create a points scoreboard.


In order to manage the number of points, we’ll have as a base the whole number value of these points that we’ll be increasing and a graphic element to display it. This graphic element will be 4 instances of a Sprite that will have 10 animations or states, which will be each of the numbers from 0 to 9.


image10.png


In our case, we’ll display the points up to 4 digits, meaning scores can go up to 9999, as you can see in the image, starting at 0 and then increasing in whole numbers.


image08.png


For this, in the Scene editor, we must create the four instances of the different Sprites used for each of the numerical places used to count points: units, tens, hundreds, and thousands. In our case, we’ll call them “unit point”, “ten point”, “hundred point”, and “thousand point”. To manage the score, we’ll have a Script (“ScorePoints”) which has 2 inputs (“reset” and “increment”), as well as an exposed parameter called “pointsToWin” which contains the number of points to be added on each increment.


image09.png


The “reset” input sets the current score value to zero, and the “increment” input adds the points won in each incrementation contained in the “pointsToWin” parameter to the current score.

In order to use it, we only need to set the value of the points to be won on each increment, either by putting it in the “pointsToWin” parameter or by using a blackbox to assign it. Once we have it, we can activate the “increment” input, which will increase the score and show it on the screen. Whenever we want, we can start over by resetting the counter to zero via the “reset” input.

Inside the Script, we do everything necessary to perform these actions and to represent the current score on the screen, displaying the 4 Sprites (units, tens, hundreds, and thousands) according to that value. When the “reset” input is activated, a “copy” ActionOnParam blackbox sets the “scorePoints” parameter, which contains the value of the current score, to 0. When the “increment” input is activated, a “+” ActionOnParam blackbox adds the “pointsToWin” parameter, which contains the points won on each increment, to the “scorePoints” parameter. After either activation, the “StoreOnScreen” Script is activated via its “update” input.


image03.png


The “StoreOnScreen” Script has a connector to the “update” input and shares the “scorePoints” parameter, which contains the value of the current score.


image00.png

image28-1024x450.png


Once the “ScoreOnScreen” Script is activated via its “update” input, it begins by converting the score value contained in the “scorePoints” parameter into text via the “toString” ActionOnParam blackbox. This text is gathered in the “numberAsString” parameter. Once the text has been obtained, we split it into characters and group them into a collection of Strings via the “split” ActionOnParam.

This collection of characters is gathered in the “digitsAsStrings” parameter. Later, based on the value of the score to be presented, one animation or another will be set for the 4 Sprites. If the value of the score is less than 10, as checked by a “lessThan” Compare blackbox against the value 10, its “true” output is activated, which activates the “setWith1Digit” Script.

If the value is 10 or greater, the blackbox’s “false” output is activated, and it checks whether the value is less than 100. When the “lessThan” Compare blackbox confirms that the value is less than 100, its “true” output is activated, which in turn activates the “setWith2Digits” Script.

If the value is 100 or greater, the “false” output of that blackbox is activated, and it proceeds to check whether the value is less than 1000, using a “lessThan” Compare blackbox against the value 1000. If this blackbox activates its “true” output, it activates the “setWith3Digits” Script. If it activates the “false” output, the “setWith4Digits” Script is activated.
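
The digit-splitting logic that all the “setWithNDigits” Scripts implement can be summarised in a few lines of code (purely as an illustration of what the “toString”, “split” and “lessThan” blackboxes achieve; the function is hypothetical):

#include <string>
#include <vector>

// Returns the animation name ("0".."9") for each digit Sprite, left-padded with "0" -
// e.g. a score of 57 shown on 4 Sprites gives {"0", "0", "5", "7"}.
std::vector<std::string> digitsForValue(int value, int digits)
{
    std::string s = std::to_string(value);                   // "toString"
    if (static_cast<int>(s.size()) > digits)
        s = s.substr(s.size() - digits);                     // clamp to the display width
    std::vector<std::string> out(digits, "0");               // unused leading Sprites show "0"
    int offset = digits - static_cast<int>(s.size());
    for (std::size_t i = 0; i < s.size(); ++i)
        out[offset + i] = std::string(1, s[i]);              // "split": one character per Sprite
    return out;
}

Each returned string then selects the animation of the corresponding digit Sprite, with unused leading positions showing “0”.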


image21.png

image05.png


The “setWith1Digit” Script takes the first character from the collection of characters and uses it to set the animation of the Sprite that corresponds to the units place contained in the “unit.point” parameter. The remaining Sprites (“ten.point”, “hundred.point” and “thousand.point”) are set with the “0” animation.


image24.png

image02.png


The “setWith2Digits” takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the tens place contained in the “ten.point” parameter, and the second character of the collection is set with the animation of the Sprite corresponding to the units place as contained in the “units.point” parameter. The remaining Sprites (“hundred.point”) and (“thousand.point”) are set with the “0” animation.


image25.png

image17.png


The “setWith3Digits” Script takes the first of the collection of characters and uses it to set the animation of the Sprite corresponding to the hundreds place contained in the “hundred.point” parameter; the second character in the collection sets the animation for the Sprite corresponding to the tens place contained in the “ten.point” parameter; and the third character in the collection sets the animation for the Sprite corresponding to the units place contained in the “unit.point” parameter. The remaining Sprite (“thousand.point”) is set with the “0” animation.


image23.png

image26.png


The “setWith4Digits” Script takes the first character of the collection of characters and uses it to set the animation of the Sprite corresponding to the thousands place as contained in the “thousand.point” parameter; the second is set with the animation for the Sprite corresponding to the hundreds place as contained in the “hundred.point” parameter; the third is set with the animation for the Sprite corresponding to the tens place as contained in the “ten.point” parameter; and the fourth is set with the animation for the Sprite corresponding to the units place as contained in the “unit.point” parameter.

As you can see, it is not necessary to write code when you work with WiMi5. The whole logic of these scoreboards has been created by dragging and dropping blackboxes in the LogicChart. You also have to set and configure parameters and Scripts, but all the work is done visually. We hope you have enjoyed this tutorial and have understood how to create scoreboards.

300 Employees On Multiple Continents: How We Work Without An Office

We decided to go office-less at the very start. For a small translation agency focused on working with IT companies via the Internet, this was a logical step. Now, ten years later, Alconost includes more than 300 people worldwide. Our staff is diverse: besides translators, we employ marketing specialists, contextual advertising experts, sales staff, editors, localization managers, and video production pros. But despite our growth, we still think that offices are inefficient, and we feel good about the choice we made. As company co-founder, I, Kirill Kliushkin, would like to share how we make the absence of an office work for us.

Not having an office has had a large and positive effect on our business. Our clients are located all over the world, so they often write to our managers outside of our local working hours. Because of this time difference, an ordinary, office-bound company would take days to communicate with distant clients and resolve issues. But not us. We do not hold our employees to a strict eight-hour regimen, instead asking them to answer messages quickly whenever they have the opportunity. Clients truly appreciate fast answers, even if it is just to say that “I will get the necessary information and write back to you tomorrow.” The client is happy, which means that we are happy too.

We have gone without offices not because we wanted to take a more relaxed pace. If anything, the answer is the opposite: often tasks need to be finished in minutes, not hours. Half of orders on our Nitro rapid online translation service are completed in less than two hours. We promise to reply to all client questions regarding Nitro within one hour. If we were stuck to a fixed office schedule, we could never attain the responsiveness that we have today.

Our formula: remote work + people + freedom - control


Our formula for success consists of remote work plus excellent people and an open schedule, minus overbearing control. Remote work is common enough these days – work wherever you want, as long as you get the job done. The same goes for the schedule too: we do not actually care when and how much you work. What counts is that tasks are resolved, processes launched, projects completed quickly, and the other employees not waiting because of any delays from you. Often I find it easiest to write articles or scripts at 2 or 3 AM, when the day’s problems are finally set aside and I can get more done in two hours than I have during all of the last week.

We do not ask our employees to fill out time sheets or, even worse, install tracking software on their computers to monitor time worked and get screenshots of what they are working on. Our approach is fundamentally different. Standing over an employee’s shoulder with a stopwatch and a calendar is counterproductive both for the employee and for the company. If a person is putting in the proper effort, we can see this by the tasks that get done and the satisfaction of colleagues and clients. If someone is lagging behind, we can see this too. We value the results, not the processes that led to these results. Business is what interests us, not control.

The next component of our formula is “excellent people”. Without them, nothing else works. But “excellent” is the key part. If someone just wants to sit in a desk chair for eight hours and does not care what they are working on, that person would not last long here. If work for someone is exclusively a way to earn money, that person would not fit us either.

How do I identify excellence? My way involves asking a lot of questions at the job interview – some of them personal, some of them uncomfortably so. By the end of the conversation, I have a high-resolution psychological portrait of the candidate. Looking back at all of my interviews with potential employees, I think that our conversations have usually allowed me to figure out right away whether a person is the right one for us.

Mistakes can always happen, of course, and sometimes employees lose their motivation and start to drift. We battle for each employee: we try to figure out the reason for this change in attitude, inspire the employee to get back “into the groove”, and think of interesting work that could excite him or her. If we still lose the battle, we cut our losses and part ways.

Motivation vs. internal crisis


If we are on the topic of motivation, I should add a few words about the importance of motivation for employees at office-less companies. It is not a question of salary. When you are not sitting side by side with your boss, colleagues, or subordinates, it is easy to forget that you are part of a team. After working online for six months or so, an internal crisis sets in – you can forget that you work at a company and fall out of the corporate culture. Even Internet-centric companies like ours have a culture: in our case, one of care for the client, the desire to be a step ahead of the game, and the ability to answer questions that the client has not even thought of yet.

There is no one-size-fits-all technique for fighting off these teleworking blues. One effective method in our toolbox is to ask the employee to write an article for the media or to speak at a conference. While the employee is preparing the text or presentation, he or she dives into the topic and feels like part of something bigger. Another way is to simply meet and socialize informally, maybe drink a little whiskey. One way or another, managers need to think proactively about how to preserve motivation and help employees to feel socially needed, so that they do not suddenly snap one fine day and jump ship for a company with a plush office and after-work drinks on Fridays.

It is absolutely critical to be in contact with every employee and provide them with proper feedback. Don’t forget to praise a job well done, and don’t be afraid to say if a job could have been done better – but critique the work, not the person. The most important thing is to keep the lines of communication open and not be silent. I learned this the hard way, unfortunately. Last spring I traveled together with the other co-founder, Alexander Murauski, to Montenegro (another advantage of remote work, incidentally!) for three months with our families. All of the hassles of the temporary move distracted us from communication with employees. As a result, we lost a pair of workers who could have stayed if we had been “virtually” at their side to help them maintain their motivation.


Work-Motivation.jpg


But leaving the country is not the only way of losing contact with employees. Simply concentrating too much on one aspect of the business can leave other employees feeling lonely and uncared for. Now I know how dangerous this can be.

Trello, Skype and The Cloud


Setting up workflows is much more important for an office-free company than it is for a company with employees housed in a giant cubicle farm. We realized this right away at the beginning of our company’s growth, when we needed to hire a second and later third project manager for handling client requests. We had to design processes and mechanisms to make telework just as efficient and seamless as working with a colleague at a neighboring desk.

Finding task management tools was a long effort. We tried Megaplan and Bitrix24, but later migrated to Trello, which is both very convenient and intuitive. Trello remains our project management tool of choice, although we continue to refine our processes. For localization of large projects, we often work with translators through a cloud-based platform. The rest of our communications go through email, Skype or Google Hangouts, which allow sharing screens in virtual group conferences.

All of our documents and files are stored on Google Drive. We forego Microsoft Office and other offline programs in favor of online documents only. The advantages are that documents are accessible from any device and the group collaboration/revision process is convenient.

We have also created an internal wiki to centralize and systematize our knowledge, rules, references, and procedures. Everything is in there, from step-by-step setup of Alconost email accounts to basic principles for working in Trello. Wiki articles are regularly added and updated, which helps new employees get oriented quickly and helps work get done faster.

Automating routine tasks and simplifying business processes is key. This saves work time, reduces headcount needs, and simply frees up resources for more creative tasks. A monotonous task that eats up five minutes every day will consume almost a week over the course of a year.

And of course, I recommend acquiring the tools you need so that you can work anytime, anywhere. With today’s devices and mobile Internet access, this is eminently doable. I remember spending an entire day writing video scripts, communicating with clients, and managing the company while waiting in line at a customs checkpoint. All I needed was my mobile phone and its five-inch screen!

Three tips for those working without an office


First: create a schedule. Wake up at the same time every day and figure out which times are most productive. People need rhythm.

Second, if you cannot work properly where you are, create the right setting so that you can. You simply cannot be productive in a two-room apartment with screaming kids and hyperactive pets. You need your own clearly marked, private space. For me, this is the study in my apartment. For Alexander, the other Alconost co-founder, the solution to two noisy children is a small room at a nearby business center.

And third: when there is no set schedule, your working day imperceptibly begins to “morph”. You do not have the clear division between personal time and working time that an office gives. Some people become fatigued by this, which is a sign that remote work is probably not right for them. When you like your work – if it is something that you are passionate about – it does not matter which of the day’s 24 hours you choose to spend doing it. Personally, I don’t even like the word “work”. I don’t “work”, I live and simultaneously pursue my business. It makes me happier – and lets me truly live.

Luck in Games: Why RNG isn't the answer

The topic of luck in competitive gaming always ruffles a lot of feathers, leading to never-ending complaints and hostility from many different types of gamers: players whining about losses caused entirely by randomness, fans whining about their favourite pros being knocked out of tournaments due to bad luck, and everyone else whining about all the whiners. The subject arises frequently in discussions surrounding card games like Hearthstone, where the issue has become a hotly debated topic in the wake of serious complaints from professional players concerning the role of randomness in the game.

In developing Prismata—a competitive turn-based strategy game sharing many features with card games—we’ve questioned whether the presence of luck was really worth all the fuss, raging, and drama. Could a game like Hearthstone still be as popular and fun if the element of luck was removed?

Over the years, we’ve talked to many professional gamers and expert game designers, including folks from Hearthstone’s design team, about the role of luck in card games. When asked whether it would be possible to design a card game without luck, they all told us the same thing:

“Bad players will never think they can win, and they will stop playing.”


“Your game can’t thrive if it doesn’t have luck.”


“You'd be crazy to try and make it a commercial success.”


Challenge accepted. I guess we’re crazy.

Of course, it’s no secret that there are many benefits to having elements of luck in competitive games. Randomness can create exciting moments for players, alleviate balance issues, and provide losing players with an excuse to avoid feeling bad about their performance. As a design decision, it has become the de facto standard in card games, copied from one game to another throughout the industry. But are luck-based game mechanics the only method of achieving these goals?

After four years of struggling over this issue, the answer is finally clear: a resounding NO.

The Secret


What took us four years to understand is that luck-based game mechanics are not necessary to achieve excitement, balance, or consolation. All of these objectives are reachable through other means, without the player frustration or toxic community behaviour that inevitably arises in games featuring a high amount of randomness.

In a nutshell, we've concluded that it’s possible to design a compelling competitive game without luck. Possible. Not easy. Not necessarily doable in a manner consistent with the breakneck pace and intolerance of failure that characterizes much of AAA game development. But possible.


games.jpg
Chess, go, shogi, and checkers have been played for centuries and feature no randomness whatsoever. This means that masters of these games will crush you. Every. Time.


In some sense, this doesn’t sound too surprising. Most of the world’s most famous traditional competitive tabletop games—like chess and go—have fascinated people for centuries despite possessing no luck whatsoever. But it would be naive to assume that their success should translate to the modern-day gaming audience, with their short attention spans and insatiable addiction to novelty. Accordingly, throughout the four-year process of designing Prismata, we encountered many roadblocks in trying to meet our goal of having no randomness in the game whatsoever.

Before discussing those roadblocks, or even our reasons for wanting to avoid luck-based game mechanics in the first place, let’s take a look at different types of luck found in video games, and their effects in games like Hearthstone.

Forms of Luck in Gaming


Most types of uncertainty or variance in competitive gaming fall into one of the following four categories:

(1) Absolute Luck

Examples: coin flips, die rolls, waiting for the result after going all in pre-flop in poker.


7-GL-staring-at-Antonio-after-going-all-


In games with absolute luck, there comes a point where no amount of skill or knowledge can prevent you from losing. You’re at the mercy of the dice, the cards, or the random number generator (RNG) used in a video game. You have no way of reacting to what happens; you simply win, or lose.

Absolute luck is seldom a wise choice when designing video games, because it often leaves players feeling frustrated and helpless when they become unlucky.

(2) Execution Luck

Examples: basketball shooting, bowling, headshots in first-person shooters.


CS-Headshot-2.jpg


Execution luck refers to unavoidable variance in performance due to imperfect skill, such as basketball players who sink only 70-90% of free throws.

Execution luck can have huge effects on player psychology. Players feel bad when they lose 3 consecutive all-in bets in poker due to unlucky river cards. But they can feel much worse when they’re playing a competitive shooter and miss 3 consecutive headshots that they are usually able to make. Execution luck feels different from most other types of luck because players blame themselves when they exhibit short bursts of sub-average performance, even if those bursts are statistically inevitable due to random variations in human performance (a person who makes 80% of their shots will miss three in a row about 1% of the time). Players often question their own skill when this happens, leading to them feeling extremely demotivated (“Am I playing like crap today? Did I get worse at the game? Should I just quit?”). Worst of all, it’s statistically unavoidable. Players will inevitably feel demoralized at some point in all games where execution luck is a huge factor.

For these reasons, designers need to be very careful when incorporating game mechanics that result in heavy penalties for poor execution. It’s important that players are given ample opportunities to demonstrate their skills so that random variations in performance are “averaged out” over the course of a full match.

(3) Yomi Luck (i.e. “mind games”)

Examples: rock-paper-scissors, build order selection in real-time strategy games, move selection in fighting games or pokemon.


maxresdefault-1024x576.jpg


StarCraft has many rock-paper-scissors situations due to the fog of war. A greedy economic build may yield an advantage against an opponent playing a standard build, but may lose to a rush. However, a rush can fail against an opponent playing standard, leading to a disastrous economic disadvantage. Hence Greedy > Standard > Rush > Greedy. (Of course, this is an oversimplification.)

Yomi is the Japanese name given to the “mind-reading” skill that allows people to win at games like rock-paper-scissors, in which both players simultaneously select an action. Despite having no true randomness associated with them, these situations exhibit large amounts of variance; no player can win 100% of the time at rock-paper-scissors.

Yomi situations show up often in real-time strategy games. The correct units to build often depend on what the opponent chooses to get, and that information may not be available at all times due to the fog of war and inability to scout the opponent’s base. In fighting games, human reaction time itself creates a natural “fog of war”; you won’t have enough time to counter your opponent’s move if you wait until it animates on the screen; you must predict what your opponent will do in order to counter it.

Games rich in yomi often provide a multitude of options to players: safe plays, risky gambles, all-ins, hard counters, soft counters, and the ability to trade resources for information (for example, by scouting with a worker in StarCraft). The blending of play skill and yomi luck can create a complex web of interaction that rewards experienced players. Many yomi situations allow experts to crush new players by exploiting their natural tendencies or lack of understanding. However, in expert vs expert games where both players have a mastery of the rules and mechanics, yomi situations often devolve into purely arbitrary outcomes that depend highly on luck rather than skill. Nevertheless, this can have some benefits: players feel accomplished when they “outplay” their opponents, even if they simply got lucky.

(4) Soft RNG Luck

Examples: backgammon, most card games.


1017_94i2l4kisg_matchinprogress-1024x650


RNG stands for “random number generator”. In the context of gaming, RNG refers to any situation in which an outcome is random. Games like Hearthstone have RNG effects every turn when card drawing occurs, as well as randomized in-game effects (such as spells that deal random amounts of damage, or minions that automatically attack a random enemy). However, these RNG effects are soft in that players are given an opportunity to react to the different situations that occur. In theory, better players should be better at planning their turns around the randomness that occurs, so the increased amount of in-game luck should theoretically be counterbalanced by an increase in the skill ceiling of the game itself.

As we'll see, this theory can break down in practice.

How Luck can Fail


To see a key example of where the presence of RNG can have a strongly negative effect on some players’ enjoyment of the game, we'll examine Hearthstone.

As the Hearthstone metagame has become more fully explored, many strong players have become frustrated at the lack of opportunities for skill expression. Unlike in chess—where the best player in the world is a 91% favourite when playing a single match against the 100th-best player—in Hearthstone, the best player is often only a marginal favourite when playing any reasonably good player with a good deck. Gosugamers reports that popular player Tidesoftime, who is currently ranked 4th in the global ELO rating, has won only 63% of his matches. With that win rate, a player will lose a best-of-five series over a quarter of the time, meaning that most tournaments (televised ones in particular) don't have nearly enough games to have a high likelihood of rewarding the most skilled players.
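
For the curious, that best-of-five figure checks out with a quick back-of-the-envelope calculation (a standalone C# sketch, not tied to any game code):

using System;

class BestOfFiveOdds
{
    static void Main()
    {
        double p = 0.63;      // per-game win rate of the favourite
        double q = 1 - p;

        // First to three wins: take the series in 3, 4, or 5 games.
        double winSeries = Math.Pow(p, 3) * (1 + 3 * q + 6 * q * q);

        Console.WriteLine($"Win series: {winSeries:P1}, lose series: {1 - winSeries:P1}");
        // Prints roughly 73% vs 27% -- the favourite drops the series over a quarter of the time.
    }
}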

Worst of all, unlike in poker—where a talented player will inevitably see a profit from playing millions of hands over the course of his or her career—skilled Hearthstone players have only a few opportunities each year to do well in a meaningful tournament, where winning requires an enormous amount of luck. Being good isn't enough.

Of course, many players insist (rightfully so) that this is how card games typically are—luck is a part of the game. But it's also abundantly clear that at least some fraction of players are very unhappy with the current state of the game.

This week, popular streamer Reynad announced he was taking a break from Hearthstone, complaining that the game, in its current state, doesn't reward skill enough. Kripparrian—another renowned gaming celebrity who streams Hearthstone regularly—posted a video of his own in the wake of Reynad's announcement, in which he stated the following:

Kripparrian: In Hearthstone, in constructed, at this time, it's pretty much just about draw RNG, and that really dictates who wins the matches.


Gaara, a teammate of Reynad, had similar concerns, which he made clear in a video. Gaara complains that Hearthstone has too many auto-win hands and situations where there is little decision making involved. If the opponent gets a good draw and you don’t, there’s often very little you can do.

For those familiar with Hearthstone, one picture says it all:


Untitled.png


Against any reasonable opponent, your chances of winning here are likely less than two percent, even though the game has just barely begun.

In the scene above, which we shall refer to as the awful zoo hand, the player on the bottom has drawn too many high-cost cards. These cards don’t work well together, and none of them can be played during the first few turns of the game anyway. The player on top is in a much stronger position, and is virtually guaranteed victory. All the skillful decision-making in the world cannot save the player on the bottom from losing. These situations are not fun at all, and fairly common with the current set of decks that are most effective in Hearthstone.

Are We Better Off Without Luck?


Mark Rosewater, head designer of Magic: The Gathering, has written extensively about the different types of RNG effects found in card games, and their effects on player enjoyment. Though he stresses that most players of card games don’t enjoy too much randomness, he also emphasizes several key benefits of RNG: increased surprises and excitement, the ability for losing players to make comebacks, the ability for weaker players to win, and the increase in opportunities for strong players to demonstrate skill by accurately preparing for random events and reacting to them.

We’ll look at several of these points. In each case, the question we’re asking is “can the same effect be obtained without any luck?”

Comebacks

Mark lists a catch-up feature as the fourth entry on his list of ten things every game needs and describes how the random card draw system in Magic and other card games ensures that players who are behind can always draw a key card required to make a dramatic comeback.

However, I think Mark is missing the bigger picture here (he loves to say “every game needs X”, where X is a feature that Magic has.) Having talked to our players, we’ve learned that what they truly want is NOT comebacks. They simply want to avoid being dragged along for many turns in an unwinnable position. Comeback mechanics are one way of achieving this, but not the only way.

Let’s go back to our “awful zoo hand” from above. If you’re in this situation, you’re faced with an uncomfortable decision: do you play on, knowing that the chance of winning is likely under 2 percent? Or do you resign, saving yourself some time, but costing yourself a chance to win? Many players choose to play on, unable to resist the urge to eke out every last percentage point of possible winnings. But players who do so seldom have a good experience during the remainder of the game, often just sitting there cursing the RNG gods for dealing them such an awful hand.

Another common example can be found in League of Legends, where teams often play on for 20 minutes or more in situations where they have an extremely low probability of winning, but are forced to cling on in hopes that their opponents make enough mistakes for them to catch up:


SZuIuxi-1024x576.jpg


The key insight is that lack of a comeback mechanic is NOT the problem. Indeed, these games both have comeback mechanics. The problem is that regardless of the strength of the comeback mechanics, there will always be situations where your chance of winning lies in the single digit percentages, and it shouldn’t take 20 minutes for your opponents to close out the game when they have such a monstrous advantage. Your opponent should be able to swiftly terminate you, and the game rules should be designed so that they are strongly incentivized to do so, rather than to play conservatively and torture you for another 20 minutes.

In many games, luck has the opposite effect of facilitating this. In games like League of Legends, fear of bad Execution Luck or Yomi Luck results in teams playing conservatively when they’re ahead, exacerbating the problem that comeback mechanics are meant to solve.

Balance

In a tabletop game like Magic: the Gathering, in which players must physically meet up to play and don’t have the luxury of choosing from hundreds of online opponents, players aren’t always able to find opponents of exactly equal skill. Consequently, it’s important that weaker players stand a chance against stronger ones so that weaker players don’t quit when they have no balanced opponents to play against and get crushed every game. Thus it makes a lot of sense for games like Magic to be balanced so that weaker players can get lucky and win against stronger players.

With modern online matchmaking and rating systems, any player of any game with a sufficiently large audience should be able to quickly find a match against an opponent that they can beat 50% of the time. There’s absolutely no reason to deliberately increase the role of luck in determining who wins.

Excitement

Random events in card games can be very exciting and have produced some extremely memorable moments, like Craig Jones’s famous topdeck of the century. The internet is filled to the brim with videos displaying amazing instances of RNG completely turning the tide of a game, or yielding funny or ironic results.

However, many of the most popular video clips are not acts of luck. They are acts of skill, like Reynad’s brilliant highmane sacrifice, which won him a key match in a Dreamhack tournament from earlier this year when all hope was lost.

You can find many other examples of truly amazing feats of skill in Hearthstone, often yielding clutch victories. Check out some of Amaz's crazy board clears and lethal combos if you haven’t seen them already. One thing is obvious: highly skilled plays are just as exciting, if not more exciting, than lucky plays.

Skill

Here, I won’t argue with Mark. It’s certainly true that much of the skill expressed in games like Magic: the Gathering is centered around preparing for, and reacting to, random events. Any Shaman player in Hearthstone will tell you the same thing. Playing well under situations where a lot of random events can occur requires a lot of planning, calculation, and ingenuity.

That said, there is no shortage of skill to be displayed in games that have no randomness at all. In games like chess or StarCraft, players can concretely understand every aspect of the game at incredible depth because of the ability to replay deterministic openings or build orders and study the situations that result. In games like Hearthstone, it’s much harder to argue that a person “made the optimal play”, because merely calculating the percent chance of winning is incredibly complicated in all but the simplest of positions. So randomness can make it harder to obtain satisfying answers to what the optimal move in a particular situation is.

Just Imagine…


Given our understanding of the benefits and drawbacks of luck in card games, let’s perform a thought experiment to see what it might be like if we tried to design a card game in which all of the luck was removed.

Imagine a game like Hearthstone or Magic: the Gathering where there was no draw phase at all; you simply drew your whole deck on the first turn, and could play any card from your deck as if it were in your hand. Let’s call this imaginary game:

DeckHand. In DeckHand, there are no mulligans, no bad draws, no RNG, and you can “live the dream” every game. You can always play a perfectly optimized (“on curve”) sequence of cards, and you always have access to whichever cards are necessary to deal with whatever your opponent is plotting.


DeckhandFinal2-1024x706.jpg
DeckHand. Imagine having a 30-card hand on turn one. What would your dream opening be?


What would DeckHand be like?

If your answer is, “broken as hell”, you’re probably right, but let’s think a bit about how the metagame in DeckHand would play out, supposing that the cards were balanced around the new format, and appropriate steps were taken to ensure that the player going second wouldn’t be too disadvantaged (for example, by providing something similar to Hearthstone’s “coin” card).

We can already predict several problems with DeckHand. Many of them were also issues with Prismata during the early days of its development:

Problem 1: Openings

DeckHand players would quickly settle on a few optimal decks, and learn precise opening sequences for each deck. This would lead to an “opening book” like chess, where the best players memorize deep sequences of moves to play in the early stages of the game. Optimal play would depend heavily on huge amounts of study and memorization rather than game knowledge or strategizing. Many players would not find this to be fun.

Problem 2: Repetition

There would likely be a few popular decks in DeckHand, and players would quickly learn all of the basic matchups. Without randomness to naturally create variation in each game’s opening sequence of moves, each matchup would proceed quickly and predictably through the first few turns as players “played out the book”. A player of a specific deck in DeckHand might only really ever find themselves in 3 or 4 different situations over the course of all of the early turns in all of their games. This would cause games to get very repetitive. Not fun at all.

Problem 3: Balance

In most card games, if your opponent is playing a deck that “counters” your deck and is favoured to win, then despite being an underdog, you seldom have a probability of winning below 25%. Your opponent can always get unlucky with their draws, giving you the opportunity to win despite playing a disfavoured deck. In DeckHand, this would no longer be the case. Your opponent would have complete access to all of the cards that counter your strategy, and you could easily find yourself in situations where your odds of winning are effectively zero if your opponent plays correctly. Many games in DeckHand could simply be decided by which deck you get matched up against. The "automatch RNG" would simply take over as the dominant factor in determining the winner. Definitely not fun.

Solution Step 1: Use Randomized Decks

The problems listed above are inherent to the constructed metagame in any card game, but they are mitigated by random opening hands, which add variety to games, discourage deep memorization of openings, and help boost the win rate of underdog decks. Without random opening hands, we need another trick up our sleeves. The solution is simple: don’t randomize the opening hands, randomize the whole deck! Providing each player with a randomized, but balanced decklist would make each game of DeckHand fresh, with new strategies to uncover in every match. A game that does something similar to this is Dominion, in which players build decks over the course of the game using cards from a randomly generated pile.


6117864_orig.jpg
Dominion—where the cards you use are different every game. However, unlike in DeckHand, you can still fall victim to bad draws. Better pray to RNGesus.


You might be thinking, “Wait, I thought we wanted to remove luck from the game; why are we adding random decks?” You’d have a point, but we’ll address that later.

Solution Step 2: Use the Same Deck for Both Players

Of course, randomized decks vary greatly in strength, meaning that many games of DeckHand would be unfair if one player’s randomly generated deck was stronger than the other. So for fairness, let’s give both players the same randomly generated deck. Unfortunately, this leads to a further problem: if every match features the same cards on both sides, won’t both players just play identical cards every game? DeckHand won’t be too interesting if every game is a mirror match, which can be common in games like Dominion when experienced players square off.

Solution Step 3: Build Diversity By Forcing Players to Make Tech Choices


Res.jpg
The three technologies that can be purchased in Prismata. If you want to rush, go red.


This step is a bit harder to explain, but I’ll summarize what we did in Prismata. In Prismata, there are three different technologies that players can invest in. Each unit in the game has different technology requirements, so you can’t buy a unit whenever you want; you have to purchase the prerequisite technologies first. Upon seeing a player invest in a particular technology, the opponent will often react by investing in a different technology to have access to the units that counter those of the first player. This process continues as the two players jockey for position, making their tech investments in response to those of their opponent. This naturally promotes tech diversity, and hence mirror matches are uncommon. We could easily imagine doing something similar for DeckHand, though it might require a non-trivial revamp of the game's economy.

Let’s try DeckHand?

Of course, DeckHand was an imaginary game, and many changes to the cards and abilities would likely be necessary to ensure that everything worked well. We can’t say for sure that DeckHand would be a good game, but it should be conceivable that the key problems induced by removing card draw RNG can be overcome.

So What About Prismata?


Of course, the whole discussion about DeckHand is very much an analogy of some of the struggles we faced in designing Prismata. Prismata is essentially just DeckHand with modified combat rules, and an economy that feels a bit more like a turn-based version of what you’d find in a real-time strategy game. Prismata isn’t completely free of RNG, but the only randomness present lies in the random selection of units available for purchase in each game, and the selection of which player goes first. Once the game begins, there is absolutely no luck involved.

There is one last point that we didn’t address. As I mentioned at the very outset of this article, there was one serious doubt that was much harder to shake:

“Bad players will never think they can win, and they will stop playing.”


Back when Prismata was our pet project and we were still in school, we never intended for it to be a game for “bad players”. We were massively addicted to it and tried quite hard to play well! But before quitting school to work on Prismata full time, we needed to be absolutely sure that players inexperienced with strategy games wouldn’t have a bad time. We did several rounds of user testing, and what we discovered was quite astonishing.

Despite Prismata having no randomness, beginners who lost actually thought they were unlucky.

As it turned out, beginners had not formulated any concrete strategies when deciding which units to buy, and had just chosen some at random. If their units happened to be strong against whatever their opponent chose, they would win. If not, they would lose. And they didn’t blame themselves for losing, because they had just chosen randomly.

The best explanation that I have for this phenomenon is that it exemplifies a fifth type of luck in games:

(5) Outcome Uncertainty

Examples: strategy games, in which players choose a strategy without knowing whether it will work.

Outcome uncertainty is sometimes called opaqueness luck as it refers to situations in which the final outcome of a choice is not visible to players, even though it may be deterministic. A quick example would be a contest in which the goal is to guess the closest date to a chosen person's birthday. Such a contest involves no RNG in any sense, but to the participants, the results are essentially random. Opaqueness luck is not unique to beginners; in fact, much of the variance in performance among chess grandmasters can be attributed to it. Strong chess players may make a move thinking, "this is probably good for white", but they seldom know for sure.

As it turns out, opaqueness is the key source of luck in games like Prismata. With a near-infinite number of possible combinations of initial configurations, games of Prismata present limitless opportunities for players to be placed in unfamiliar situations. While still learning the game, beginners often buy the wrong thing and lose. Frequently, they develop a favourite unit as a result of getting lucky with it, and then continually purchase that unit whenever it's available, regardless of whether a unit countering it can be bought by their opponents. Confronted with a loss, they actually tend to blame the RNG for providing their opponent with a counter to their favourite unit.

In any case, I’m now wholeheartedly convinced that strategy game players will never change. Despite our best efforts to make a luck-free game, there will still be threads in which people claim that going second is OP, or whine that the randomly generated card sets are rigged, or complain that there are too many whiners. In the end, such discussions are a healthy part of most strategy game communities, as excuses help protect players' egos. However, we think players are good enough at coming up with excuses on their own, so we've come down firmly against the idea of adding more randomness for its own sake. Instead, our highest priority (on top of creating an enjoyable game) is to provide a quality matchmaking service guaranteeing that our players genuinely have a 50% chance of winning.

That should be enough to keep them happy.

Or so I hope.


prismata_footer-1024x290.jpg

What Makes Old Games Addictive

As a programmer by trade, I rarely want to write code when I get home... but even 10-12 hours of coding won't stop me from hooking up my USB controller, firing up a keyboard mapper script I wrote, and going to Virtual NES.com. Sure I've been playing these games since I was five, but they're still that fun! I kicked... uh, played well... back in the day, and I still rock those games now (and have a great time doin' it)!

Now you might be wondering, what's up with that? It can't just be that weird Miguel guy, because sites like Virtual NES are everywhere. Why do people buy - or even make - custom USB controllers for old NES games? Why would anyone spend so much time programming them, right down to the last detail, to be exactly like the originals? The graphics and sound from back then were so cheesy compared to now. I mean okay, sure, for games like Super Mario Bros. and the Legend of Zelda, the nostalgia factor is pretty intense. But Kung-Fu? RC Pro-Am? 10-Yard Fight? Who even remembers those (other than me, lol)? And why do people still create (and get hooked on) games with similar quality? I mean, today we have 3D (even 3D audio); we've got super-realistic sports games, amazing adventure games, etc. and they keep getting better. So what's up with that?

Obviously, it's more than just nostalgia. And although I'm far from an expert, I think I know what the secrets are, and I want to share them with you.

Why Old Games Still Rock


1. They're easy to play, but hard to beat!


You know what got me interested in making my own games? The PS2 - but not in the way you think. To me, those games were insanely hard... to play. Three directional controls, 12 buttons (not including start & select), and each one of them does different things in different situations? Forget that! I figured it would be easier to learn to create my own games than to learn how to play theirs.

But NES, Super NES, Sega Genesis etc. were different: they were easy to figure out, but you had to play them like crazy to be able to beat the game. Gannon was hard to kill because he turned invisible and could shoot at you from any direction - not because I forgot how to use my sword! King Koopa's castle was tough to get through because it was a maze loaded with baddies, dead ends and traps - not because I couldn't figure out how to shoot a fireball! They were most definitely hard, but there wasn't a vertical wall of a learning curve just to play the thing!

In other words, the challenge was in the levels (the obstacles, the AI etc.), not the gameplay. This is something that I think we've lost in today's games. We've forgotten how to KISS (Keep It Simple, Stupid! :)). If we combined the amazing graphics and sound available today with the simplicity of the OGs, those old games might actually become a thing of the past. But until then, gamers and game addicts everywhere keep on makin'em like they did in the old days - and that's cool with me! :)

2. They're relatively easy to create


The first point came from me as a gamer; this point comes from me as a programmer. I say 2D games are "relatively" easy because you do need to know a thing or three about programming to write a game either way. But even if you're still fairly new to programming, you might want to check this part out.

Let's take for example collision checking. Just about every game has to have it (the exceptions being stuff like crossword puzzles or Sudoku). In a 2D game, it's as simple as this:

if (Rectangle1.Intersects(Rectangle2)){
	Lives -= 1;
	RestartLevel();
}

Granted, the example above is not actual code, but you get the idea. If one rectangle (say Mario) intersects another (say one of those flying fish things), then Mario loses a life and you restart the level. The only thing that'd make this a bit tricky is checking whether one object intersects another (which I've done in JavaScript and hope to never do again). But AFAIK every language with a Rectangle object has a function for that already (in Java it's intersects, and in C# System.Drawing's Rectangle has IntersectsWith), and other shapes like Oval and Polygon may have them as well. So it's no big deal, not the end of the world (unless that's what happens in the level :) yeah, that was corny).
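
To make that concrete, here's a small, self-contained C# version of the same idea (the Rect struct and the names are my own for illustration, not from any particular framework):

using System;

struct Rect
{
    public float X, Y, Width, Height;

    public Rect(float x, float y, float w, float h)
    {
        X = x; Y = y; Width = w; Height = h;
    }

    // Classic axis-aligned bounding box (AABB) overlap test.
    public bool Intersects(Rect other)
    {
        return X < other.X + other.Width &&
               other.X < X + Width &&
               Y < other.Y + other.Height &&
               other.Y < Y + Height;
    }
}

class CollisionDemo
{
    static void Main()
    {
        var mario = new Rect(10, 10, 16, 16);
        var fish  = new Rect(20, 18, 16, 16);

        if (mario.Intersects(fish))
        {
            Console.WriteLine("Ouch! Lose a life and restart the level.");
        }
    }
}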

But in a modern game, there are a gajillion other variables to check for. We're no longer talking about simple shapes, but complex 3-dimensional figures and physics and camera angles and other stuff I haven't thought of yet. And on the graphics side you've got stuff like texturing, plotting all the 3D coordinates, and other stuff that to me is just annoying. Hopefully, there's some framework or SDK out there now that abstracts a lot of this away, but in my experience it's just not worth the aggravation. Especially for newbies, but even for more experienced programmers like me, if you're not a math guru this can be such a pain. And creating games should be fun, shouldn't it? And that brings me to my last point:

3. They're just plain fun!


Games back in the day weren't trying to tell a story, prove a point, or be as realistic as possible. Back stories were written in the same document as the instructions, which were quickly thrown away (see point #1). So adventure games were simply Good vs. Evil. Heroes explored strange new lands, defeated monsters and rescued princesses. Sports games were about scoring points. And some games were just about getting your initials on a high-score list. So whatever you were doing, it was fun, not work. Maybe I'm just getting old... but to me a lot of the newer games are more work than fun. But my niece, who is growing up with all the latest tech, rocks the new games like I rocked the originals. So "fun" is a subjective term, no doubt about it.

So how do we translate this into something more concrete? To answer that, I would suggest you ask yourself, what makes games fun to you? Back on the topic of the classics, here are some of the things I always enjoyed:
  • Secret places that feel like cheating (like the whistles in Mario 3) - why try to find ways to actually cheat when you can just use what's built into the game?
  • Weapons and other tricks that give you new abilities (like the frog suit in Mario 3 or the hook-shot in Zelda/Link to the Past)
  • Random stuff that's just plain funny (like in Link to the Past when you go to the Dark World and turn into a bunny - that was so hilarious! Or in Mario World how Yoshi can eat... well just about anything, and spit it at the enemies! I could go on all night with this one.)
  • Anything you can play with someone else (sports, Mario, Mario Kart, etc. - the more the merrier! This one is especially important because multiplayer online games are so big nowadays)
  • Anything where the object of the game is stupidly obvious (even as a kid playing Zelda, I often bypassed the frustrating puzzle-like parts by using a book or bugging a friend who already beat it; not that adding some brain-teasing "figure it out" stuff to games is bad, but it's not always fun; for me it went from brain-teasing to mind-grinding way too quick)

Conclusion


As you can see, there are definitely some features of 80s/90s-style video games that still apply to today's world of hi-tech gaming awesomeness. In fact if I ever find a game system that uses even 2 out of the 3, I'll be first in line to get one. So anyway, take it for what it's worth, use what you can and pass it on. :)

Inside the Indie Art Process of Archmage Rises

I'm relatively new to art appreciation. A big turning point for me was a few years back when I read John Milton's Paradise Lost. Since then, my appreciation for art and the artists behind the work has continued to grow. I even have some Peter Max limited edition prints hanging on my walls.

I'm a huge fan of fantasy art and pretty much any game art coming out of TSR in the 1980s. There is something about the oil on canvas, seeing the stroke of the brush—which for me sort of elevates the work. I'm not complaining about the perfection found in digital painting—just that I like the "handmade" quality brush strokes bring.

Archmage Rises is a love letter to tabletop role-playing games. The art style is ’80s TSR for the modern era.


lordsothscharge.jpg
Seriously awesome!


I can't afford to hire Larry Elmore, Todd Lockwood, or Keith Parkinson. So to recruit artists, I went through hundreds of fantasy art portfolios on Deviant Art. Of those, I contacted many. Of those, I worked with three. Rogier van de Beek's style was (by far) my favorite.

So this week, we're going behind the scenes with Rogier on how he does freelance indie game art. Take it away, Rogier!

Rogier van de Beek:
Whether for concepts, game art, or promotional purposes, you're going to need someone to create the art. Visual representation is the easiest way to set a mood or tell a story after all, since human beings are highly visual.

Although Archmage Rises is heavy on storytelling, art is still needed to set the tone or mood. This is where I come in. My name is Rogier van de Beek, a freelance concept artist and illustrator from the Netherlands. I’ve worked with Thomas for the past seven months on Archmage Rises, and I have to say that it’s been a total blast. But enough talk: Let’s take a deep dive into the piece, "The Mage Classroom."

First off, every artist out there has their own unique way of working. As they say, there is more than one road that leads to Rome. So the first lesson is to follow the one you are most comfortable with!

The goal for this image was to create a classroom where a mage teacher imparts his or her knowledge. Someone who is talented, but not overly powerful or special. It is a pivotal point in the character creation process. The room is owned by someone who knows a lot about magic and teaches the children their foundational knowledge.

The first step is always to discuss the image idea with Thomas. Maybe he has something specific in mind like a certain type of chair, or monster skull on the wall. Other times, it’s just a certain emotion the image must evoke. I then ask him to provide me with several reference images. Having the client provide reference material eliminates a lot of guesswork and makes me feel more confident about the task.

Thomas Henshell:
Finding reference images takes forever! As a programmer, designer and writer, I don't really know what I want other than "I want it to be awesome!" Isn't that enough direction? :-) Finding reference images helps me narrow down the ideas into something more concrete—something that both of us can understand, point at, and discuss. So I definitely see the value of spending hours finding the right references.

Rogier:
After the initial discussion, I look through the provided references but also find my own. I'm looking for a way to capture all the important ideas in a single image. Then it’s time to start sketching!

I start by exploring the composition of the image. Camera angle is super important, for example. I don’t create a list of objects or anything, I just imagine what I would see if I was standing (or sitting) in that location. Then I move into what would be the most interesting way to display all the elements in the image. Like the large chair of the teacher. The giant desk. The magical staffs and the bookshelves. It needs to feel like a real classroom.

After doing some sketchbook work and exploring the idea, I finally have something I can use. It’s time to switch to the tablet in order to be able to sketch it up digitally—and then send it off to Thomas and find out what he thinks.


classroom1.jpg


Thomas:
This is the most exciting and scary part for me. It's exciting to see a game scene I've only thought about or written some dialogue for suddenly come to life right before my eyes. At this stage, I pounce on any email coming from Rogier so that I can see what he's done.

It is also scary because if the sketch is completely wrong, it means I haven't done a good job of communicating the vision of what is needed. A wrong sketch means lots of time has to be spent finding better reference images, writing more documentation, and having more meetings. So everyone is happier if I just approve the sketch.

Finally, as a non-artist, I sometimes have a hard time knowing how to evaluate a sketch. The door in the sketch isn't open, but it is in the final. The monster head on top of the bookshelf isn't as large as it is in the final. A sketch is like Acts 1 and 2 of a story. It's up to my imagination what Act 3 will look like, and my Act 3 may be different from Rogier’s. I will usually approve a sketch without changes. If we have totally different visions for the picture, it will show up at the next milestone.

Rogier:
When Thomas approves the sketch, it's time to start painting it up. Sometimes this can be very intimidating! What looks great as a sketch may not look great as a painting… You have the lines, you have a picture in your head, you have a feeling this picture will be really cool when it's done. But until it's done, the first steps of painting it up will be horrible! Especially when you start going over your sketch and finding all the mistakes you made. Yikes!

I use references for many different reasons. Most of the time, I use them for materials or lighting. When I don’t know how something will look in a certain lighting setup in the “real world,” I try to find a picture of it so I can see how it really looks.

The beginning of a painting can be a lot of fun or be absolute hell and make you doubt everything about yourself and the idea you have set up to create. :-) The process varies, but I find it’s always best to start big and go small. I ‘block in’ the shapes, detail a bit of the shapes, and make them more distinguished to get the overall values in place. Then I go smaller and smaller… I try to work out all of the picture at once; this way, you have a higher degree of “control,” understanding where the picture is going to end up, the POV (point of view), and final composition.

In this case, I happen to like the initial illustration. It wasn’t too long before I had some nice values and mood in the picture. I was confident that I could take it all the way through to a satisfying conclusion.


classroom2.jpg


Thomas:
This is a pivotal stage. Now I understand what the picture will look like. Sure . . . a lot of detail is missing—but the mood, lighting, and palette are all there. I may not know how Act 3 ends, but I can see the gist of it!

Rogier:
With the first general colors in and approved, pretty much everything is set up. Now is the time for detail work, which I call "rendering" even though I'm doing it by hand :-)

This is the part where you sort of shut off your brain and just paint away hour after hour. The relaxing part of the job, finally!

I used to not take any breaks while working on a project. That was a (big) mistake. I now understand that taking a break actually speeds things up. Leaving and returning allows me to clearly see what I am actually painting. Surprisingly, spending hours looking at the same image makes it harder to really see it. I take a break to avoid the feeling of ‘Oh no, what have I done?!’ – or worse, getting a bunch of revisions from the client.

Since this is all about indie development, it can get tricky because timelines are tight and budgets are tighter. It is better to spread three days' work over four actual days and do some other little things in between, than to power through. A work of art cannot be rushed!

Here's a helpful tip: Flip your canvas a lot so that you get a fresh view of your image. When you flip (mirror) your image, it sort of refreshes it in your brain—which helps when looking for mistakes.


classroom3.jpg


Also, zoom out a lot to avoid drowning out the mood/feel of the image with too much detail. Sometimes images can be so detailed that they end up looking dead. Zooming out makes it easier to figure out where the focal point should be, which is where the most detail belongs. This greatly enhances the overall composition of the painting.

On the other hand, I want viewers to have something to see wherever they look in the image. As they focus in, they discover details they hadn't noticed at first. This makes the image more engaging to look at. However, preserving the mood and composition is essential. Play with light and atmosphere, and keep detail in check to create a nice balance.

Then it’s time to send the finished work to Thomas again for feedback. It wastes everyone's time to put lots of detail into an element of the painting only to have it moved or removed due to late revisions.


classroom4.jpg


Thomas:
At this stage, it’s clear to me how the picture will end up. Small revisions may be requested at this point, but we're way past any big revisions. I'm generally very excited to see it coming to life and tell him just to keep going. Due to my excitement, I pretty much bug him every day to see if he is done yet :-)

Rogier:
Once I get feedback and approval, I go into the final detail phase.

I continue with the detailing and then add some atmosphere while desaturating some of the darker areas. This will tie the whole piece together and give me more control on where I want the viewer to look.

Then I send it off (hopefully!) for final approval.


classroom5.jpg


Overall, I'm pleased with how the image turned out. The fact that it’s detailed but never gets boring to look at is something I’m really proud of. When you size it down, the values and composition still work really well—which is also a plus. When up close, nothing feels “un-rendered”; you can always tell what you’re looking at!

Thomas:
I could stare at it all day.

Rogier:
I hope this art walkthrough was helpful! Can’t wait to play the finished game.

Thomas:
Thanks, Rogier!
SDG

Check out Rogier's Deviant Art portfolio. Feel free to contact him about your project.

You can follow the game I'm working on, Archmage Rises, by joining the newsletter or Facebook page.

Or if you really want to call me out on something, you can tweet me @LordYabo

Ultimate Input Manager for Unity

Unity Input Manager pain has lasted for years. On the Unity feedback site you can find requests for InputManager programmatic access dating from 2009.

Futile!

Problems:

1) Ingame input controller mapping.
Unity has a user interface for mapping predefined bindings at the start of the game. Changing mappings later requires a game restart.

2) Handling input in code based on states/tags abstracting real input.
Some sort of abstraction is done through InputManager's well-known "Horizontal" and "Vertical", but that abstraction is still bound to the axes and buttons of the developer's test controller, and not based on the actions/final states of the game (for example, Mecanim animation states) mapped to the player's currently plugged-in controller.

3) Saving and restoring user preferences (Input settings)
Unity's built-in PlayerPrefs might do the trick if you do not plan to support Web, Droid... and if the file size is not bigger than 1 megabyte. XML, though, is in my opinion a better solution, as it is human readable and also exchangeable between systems and players.

4) Distinct positive and negative part of the axis.
Unity recognizes a controller's axis as a single range, giving values from -1 to 1 and back. What you usually want is two separate ranges, from 1 to 0 and from 0 to -1, so that, for example, turning left/right on a wheel controller or a joystick's push/pull forward/backward can be distinguished.

5) OS independent driver and easy expansion with drivers supporting other devices and special properties
Unity's internal handler might not recognize a HID device, or it may identify the same button differently on different systems. It also offers no support for extra device features like force feedback, IR cameras, accelerometers, gyros... which are more and more a part of modern input controllers. Instead of plug-and-play OS-dependent drivers, a much better solution seems to be OS-dependent HID interfaces with OS-independent pluggable drivers.

6) Handling input axis and buttons as digital or analog
In Unity, through the Input class, you can handle an axis only as analog, while buttons can be analog or digital. It's handy to be able to treat an axis as digital events too: HOLD, DOWN, UP...

7) Create combination of inputs that would trigger action/state
Unity doesn't offer an out-of-the-box way to define combined input actions, like two keys in a row, or an axis move plus a button push... for example, in a fighting-game scenario: two times joystick left + fire (Mustafa's kick from Capcom's Cadillacs and Dinosaurs).

8) Handling inputs by events
The Unity engine as a whole was not really planned as an event-, signal- or reaction-based system, and encouraging the use of broadcast messaging and handling complex input inside Update is far from a good solution, even if you need every last MIPS of the processor.

9) Plug and play instead of plug and pray.
Attach or remove controllers while game is running and continue playing.

10) Profiles - Layouts

Why not InControl or CInput? 
Both are built on the same shaky foundation, Unity's InputManager, and even though they give you runtime mapping and remapping, they inherit the same sickness, only worse. They generate an InputManager.asset with every possible Joystick#+Button# combination (Joystick1Button1, ... Joystick2Button2, ...) and, by reading the joystick's name, hand you a controller layout (profile), claiming to support a bunch of devices. In reality they don't support anything beyond what Unity's default driver supports (at the button/axis level)... so NO AXIS DIRECTION DISTINCTION, NO PLUG and PLAY, NO FFD or SPECIAL FEATURES (ACCELEROMETERS, GYROS, IR CAM...), NO COMBOS, NO EXPANSION SUPPORT... AND NOT FREE.

An Input Mapper system to address the above issues




1) Ingame input controller mapping.
Input Mapper allows you to easily map game controller input to Animation States from your Animation Controller or custom states

2) Handling input in code based on states/tag abstracting real input.
The InputMapper API is very similar to Unity's, with the big difference that it is an abstraction on two levels. First, you do not program against real inputs like KeyCode.Q or Joystick1Button99, but against states, which also allows the player to map different inputs to the same action state.

if (InputManager.GetInputDown((int)States.Wave)) {
    Debug.Log("Wave Down");
}

3) Saving and restoring user preferences
Saving exports your settings to an .xml file, and a States.cs file is generated containing an enum of your mapped states.

public enum States : int {
    Wave = 1397315813,
    MyCustomState = -1624475888,
    // ...one entry per mapped state
}

Now you can forget about:

//  static int idleState = Animator.StringToHash("Base Layer.Idle"); 
//  static int locoState = Animator.StringToHash("Base Layer.Locomotion");  

as we are taught in Unity tutorials, but you can use States.[some state].

The library contains a simple component so you can test things from the user's perspective right away. Just drag in the saved .xml and hit Play.


Clipboard02.jpg


4) Distinct positive and negative part of the axis.
The system recognizes Left/Right/Forward/Backward on POV axes and the positive/negative halves of regular axes.
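
To illustrate the idea (this is just a sketch of the concept, not the library's internal code), splitting a single -1..1 axis reading into two 0..1 halves can be as simple as:

using UnityEngine;

public static class AxisSplit
{
    // Splits a raw -1..1 axis value into two 0..1 "virtual axes",
    // e.g. wheel-left / wheel-right, so each direction can be mapped on its own.
    public static void Split(float raw, out float negative, out float positive)
    {
        negative = Mathf.Max(0f, -raw); // 0..1 when the axis is pushed to the negative side
        positive = Mathf.Max(0f,  raw); // 0..1 when the axis is pushed to the positive side
    }
}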

5) OS independent driver and easy expansion with drivers supporting other devices and special properties
I understand that Unity can't support every game device, but some system that allows simple binding of a driver would be a good idea (yeah, they have plugins...).

So instead of building plugins for every OS, the InputMapper system provides HID interface systems for Win, Web, Droid and OSX (not tested), which lets you write a single device-specific driver that is OS independent.

A device is handled by the default driver (WinMMDriver for Win, OSXDriver for OSX) or by a custom driver, added like this:

// supporting devices with custom drivers
InputManager.AddDriver(new XInputDriver());

Your custom device driver implementation needs to handle two entry points:

(1) public IJoystickDevice ResolveDevice(IHIDDeviceInfo info)...

(2) public void Update(IJoystickDevice joystick)...

In ResolveDevice, the HIDInterface provides device info from the OS; you can check the VID and PID to decide whether to handle the device (initializing its properties and structures) or not (returning null). In Update, you query the device using the provided Read/Write methods and fill the JoystickDevice structures so they can be accessed by the InputManager. Scared of handling a few bytes? :) Check XInputDriver.cs.

Still want to use the Unity InputManager as a backup?

InputManager.AddDriver(new UnityDriver()); // (-1 to 1 to -1)

Swap your InputManager.asset with the InputManager.asset from the GitHub source code.

6) Handling input axes and buttons as digital or analog
The second abstraction is that you can consume input as digital or analog regardless of whether the real input source is digital or analog. For example, a joystick axis counts as analog and can produce normalized values from 0 to 1, but it can also be read as pushed true/false, just like a key or even a mouse button.

// Using input as digital
bool bHold = InputManager.GetInput((int)States.Walk_Forward, false);

// Using input as an analog value
float analogValue = InputManager.GetInput((int)States.Walk_Forward, false, 0.3f, 0.1f, 0f);

7) Create combinations of inputs that trigger an action/state
Just click keys/mouse/buttons or move the joystick, as SINGLE, DOUBLE or LONG presses, primary and secondary. In the example below I've mapped the Wave state to a combo of a Mouse1 double click + Joystick1AxisYForward (a long push forward of the joystick) + a double click of the letter Y.

You can set the modifier format and the click sensitivity.


Clipboard01.jpg


8) Handling inputs by events
As the Update method gets overcrowded, the library offers a more modern input handling solution in the form of an event-based system.

// Event-based input handling
InputEvent ev = new InputEvent("Click_W+C_State");
// InputEvent ev = new InputEvent((int)States.SomeState);

ev.CONT += new EventHandler(Handle1);
ev.CONT += new EventHandler(Handle2);
ev.UP   += new EventHandler(onUp);   // this wouldn't fire for combo inputs (single only)
ev.DOWN += new EventHandler(onDown); // this wouldn't fire for combo inputs (single only)
 

    void onUp(object o, EventArgs args)
    {
        Debug.Log("Up");
    }

    void onDown(object o, EventArgs args)
    {
        Debug.Log("Down");
    }

    void Handle1(object o, EventArgs args)
    {
        Debug.Log("Handle1");
    }

    void Handle2(object o, EventArgs args)
    {
        Debug.Log("Handle2");
    }

Hardcore developers can manually map inputs to states and even mix them with loaded settings.

InputManager.loadSettings(Path.Combine(Application.streamingAssetsPath,"InputSettings.xml"));
     
//adding input-states pairs manually
InputManager.MapStateToInput("My State1",new InputCombination("Mouse1+Joystick12AxisXPositive(x2)+B"));

InputManager.MapStateToInput("Click_W+C_State", KeyCodeExtension.Alpha0.DOUBLE,KeyCodeExtension.JoystickAxisPovYPositive.SINGLE);

KeyCodeExtension supports everything in KeyCode, plus additional entries for joystick axis mapping.

9) Plug and play instead of plug and pray.
Attach, remove or switch controllers while the game is running, remap inputs and continue playing.

10) Profiles - Layouts
Your homework.

Devices used during testing: XBox360W controller, ThrustMaster Wheel FFD, Wiimote + Nunchuk.
One classic gamepad controller, one wheel, and one complex controller.

19.08.14 Thrustmaster wheel RGT FFD demo WIN+DROID




GITHUB: https://github.com/winalex/Unity3d-InputMapper

The code is free if you contribute by solving a bug or enhancing some version... Joking :) Who can stop you/us?

Feedback and forks are welcome if you aren't too smart, too much of a movie star, or too busy making money.

Gmail me at winxalex.

Knowledge should be free! 

13.07.2014 Added WiimoteDevice and WiimoteDriver Skeleton

22.07.2014 More stability; plug in/out support added.

26.07.2014 Web Player joystick support (Chrome, FireFox)




01.10.2014 WiiDevice and WiiDriver 




05.10.2014 XInput driver pure C# (No DirectX xinput.dll wrappers)

13.10.2014 (OSXDriver default driver pure C#)

17.10.2014 (Thrustmaster Wheel FFD and XBOX360W working on OSX) 




CONCEPT PROVEN!!!

Making a Game with Blend4Web Part 6: Animation and FX

$
0
0
This time we'll talk about the main stages of character modeling and animation, and we'll also create the effect of the deadly falling rocks.

Character model and textures


The character data was placed into two files. The character_model.blend file contains the geometry, the material and the armature, while the character_animation.blend file contains the animation for this character.

The character model mesh is low-poly:


gm06_img02.jpg?v=20141022153048201407181


This model - just like all the others - lacks a normal map. The color texture was entirely painted on the model in Blender using the Texture Painting mode:


gm06_img03.jpg?v=20141022153048201407181


The texture was then supplemented (4) with the baked ambient occlusion map (2). Its color (1) was initially much paler than required, and was enhanced (3) with a Multiply node in the material. This allowed for fine tuning of the final texture's saturation.


gm06_img04.jpg?v=20141022153048201407181


After baking we received the resulting diffuse texture, from which we created the specular map. We brightened up this specular map in the spots corresponding to the blade, the metal clothing elements, the eyes and the hair. As usual, in order to save video memory, this texture was packed into the alpha channel of the diffuse texture.


gm06_img05.jpg?v=20141022153048201407181


Character material


Let's add some nodes to the character material to create the highlighting effect when the character contacts the lava.


gm06_img06.jpg?v=20141022153048201407181


We need two height-dependent procedural masks (2 and 3) to implement this effect. One of these masks (2) will paint the feet in the lava-contacting spots (yellow), while the other (3) will paint the character legs just above the knees (orange). The material specular value is output (4) from the diffuse texture alpha channel (1).


gm06_img07.jpg?v=20141022153048201407181


Character animation


Because the character is seen mainly from afar and from behind, we created a simple armature with a limited number of inverse kinematics controlling bones.


gm06_img08.jpg?v=20141022153048201407181


A group of objects, including the character model and its armature, has been linked to the character_animation.blend file. After that we've created a proxy object for this armature (Object > Make Proxy...) to make its animation possible.

At this game development stage we need just three animation sequences: looping run, idle and death animations.


gm06_img09.jpg?v=20141022153048201407181


Using the specially developed tool - the Blend4Web Anim Baker - all three animations were baked and then linked to the main scene file (game_example.blend). After export from this file the animation becomes available to the programming part of the game.


gm06_img10.jpg?v=20141030124956201407181


Special effects

During the game the red-hot rocks will keep falling on the character. To visualize this a set of 5 elements is created for each rock:

  1. the geometry and the material of the rock itself,
  2. the halo around the rock,
  3. the explosion particle system,
  4. the particle system for the smoke trail of the falling rock,
  5. and the marker under the rock.

The above-listed elements are present in the lava_rock.blend file and are linked to the game_example.blend file. Each element from the rock set has a unique name for convenient access from the programming part of the application.

Falling rocks

For diversity, we made three rock geometry types:


gm06_img12.jpg?v=20141022153048201407181


The texture was created by hand in the Texture Painting mode:


gm06_img13.jpg?v=20141030124956201407181


The material is generic, without the use of nodes, with the Shadeless checkbox enabled:


gm06_img14.jpg?v=20141030124956201407181


For the effect of glowing red-hot rock, we created an egg-shaped object with the narrow part looking down, to imitate rapid movement.


gm06_img15.jpg?v=20141030124956201407181


The material of the shiny areas is entirely procedural, without any textures. First of all we apply a Dot Product node to the geometry normals and vector (0, 0, -1) in order to obtain a view-dependent gradient (similar to the Fresnel effect). Then we squeeze and shift the gradient in two different ways and get two masks (2 and 3). One of them (the widest) we paint to the color gradient (5), while the other is subtracted from the first (4) to use the resulting ring as a transparency map.


gm06_img16.jpg?v=20141030124956201407181


The empty node group named NORMAL_VIEW is used for compatibility: in the Geometry node the normals are in camera space, but in Blend4Web they are in world space.

Explosions


The red-hot rocks will explode upon contact with the rigid surface.


gm06_img17.jpg?v=20141030124956201407181


To create the explosion effect we'll use a particle system with a pyramid-shaped emitter. For the particle system we'll create a texture with an alpha channel - this will imitate fire and smoke puffs:


gm06_img18.jpg?v=20141030124956201407181


Let's create a simple material and attach the texture to it:


gm06_img19.jpg?v=20141030124956201407181


Then we set up a particle system using the newly created material:


gm06_img20.jpg?v=20141030124956201407181


Activate particle fade-out with the additional settings on the Blend4Web panel:


gm06_img21.jpg?v=20141030124956201407181


To increase the size of the particles during their life span we create a ramp for the particle system:


gm06_img22.jpg?v=20141030124956201407181


Now the explosion effect is up and running!


gm06_img23.jpg?v=20141030124956201407181


Smoke trail


When the rock is falling a smoke trail will follow it:


gm06_img11.jpg?v=20141030124956201407181


This effect can be set up quite easily. First of all let's create a smoke material using the same texture as for explosions. In contrast to the previous material this one uses a procedural blend texture for painting the particles during their life span - red in the beginning and gray in the end - to mimic the intense burning:


gm06_img24.jpg?v=20141030124956201407181


Now let's proceed to the particle system. A simple plane with its normal oriented downward will serve as the emitter. This time the emission is looping and lasts longer:


gm06_img25.jpg?v=20141030124956201407181


As before, this particle system has a ramp, this time used to progressively reduce the particle size:


gm06_img26.jpg?v=20141030124956201407181


Marker under the rock


It remains only to add a minor detail - the marker indicating the spot to which the rock is falling, just to make the player's life easier. We need a simple unwrapped plane. Its material is fully procedural, no textures are used.


gm06_img27.jpg?v=20141030124956201407181


The Average node is applied to the UV data to obtain a radial gradient (1) with its center in the middle of the plane. We are already familiar with the further procedures. Two transformations result in two masks (2 and 3) of different sizes. Subtracting one from the other gives the visual ring (4). The transparency mask (6) is tweaked and passed to the material alpha channel. Another mask is derived after squeezing the ring a bit (5). It is painted in two colors (7) and passed to the Color socket.


gm06_img28.jpg?v=20141030124956201407181


Conclusion


At this stage the gameplay content is ready. After merging it with the programming part described in the previous article of this series we may enjoy the rich world packed with adventure!

Link to the standalone application

The source files of the models are part of the free Blend4Web SDK distribution.

Intercepting a Moving Target in 2D

$
0
0
You often have to have an entity controlled by an AI "shoot" at something in a 2D game. By "shoot" here, I mean launch something towards another moving something in the hopes that it will intercept it. This may be a bullet, a grenade, or trying to jump to a moving platform. If the target is moving, shooting at the current position (1) will usually miss except in trivial cases and (2) make your entity look dumb. When you play hide and seek, you don't run to where the person is unless they are stopped. You run to where the (moving) thing you are chasing is going to be.

There are two cases for "shooting" at something:

  1. Non-Rotating Shooter
  2. Rotating Shooter

This article only covers the case of the non-rotating shooter. The second case is a bit more complicated. I'll post a separate article for that.

Your goal is to predict the future position of the target so that you can launch something at it and have them collide. It is given that you know the following:

  1. The position of the shooter when the projectile will be launched: \(\vec{P_s}\)
  2. The position of the target when the shooter will launch the projectile (i.e. now): \(\vec{P_T^0}\)
  3. The speed at which your projectiles travel: \(S_b\)
  4. The velocity of the target, \(\vec{v_T}\)

What you would like to find is \(\vec{P_T^1}\), the final position where the projectile will intercept the target.

We are going to have to assume that the target will travel at a constant velocity while the projectile is moving. Without this, the projectile will have to adapt after it has been launched (because it cannot predict the future position of the target if it is changing velocity in an unpredictable way). It should be pointed out that this is not limiting and in fact, still works out pretty well as long as the "edges" are handled well (target right in front of you or headed straight at you).

The Problem Space


Consider the image below:


Attached Image: post_images_hitting_targets_with_bullets.png


This image shows the situation for the shooter. The target is moving with some constant velocity and the intercept position for the projectile, when launched, is unknown until the calculation is done. When the projectile is launched, it will intercept the target after \(t_B\) seconds. The projectile is launched with a constant speed. We don't know its direction yet, but we know the magnitude of its velocity, its speed will be \(S_b\).

If the target is at position \(\vec{P_T^0}\) now, it will be at position \(\vec{P_T^1}\), given by:

(1) \(\vec{P_T^1} = \vec{P_T^0} + \vec{v_T} * t_B\)


Since the projectile will have traveled for the same amount of time, it will have moved from \(\vec{P_s}\) to \(\vec{P_T^1}\) as well. In that time, it will have moved a distance of \(S_b \cdot t_B\). Since we are talking about vector quantities here, we can write this as:


\(\mid\vec{P_T^1}-\vec{P_s}\mid = S_b * t_B\)


If we square both sides and break it into components to get rid of the absolute value:


(2) \((P_{Tx}^1 - P_{Sx})^2 +(P_{Ty}^1 - P_{Sy})^2 = S_b^2 * t_B^2\)


Breaking (1) into components as well and substituting back into (2) for the value of \(P_{Tx}^1\) and \(P_{Ty}^1\), we get the following:


\((P_{T0x} - P_{Sx} + v_{Tx}t_B)^2 + (P_{T0y} - P_{Sy} + v_{Ty}t_B)^2 = S_b^2 * t_B^2\)


For the sake of simplicity, we are going to redefine:


\(\vec{R} = \vec{P_T^0} - \vec{P_s}\) (this is a constant)
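
To make the algebra explicit, substitute \(\vec{R}\) into the components and expand the squares:


\((R_x + v_{Tx}t_B)^2 + (R_y + v_{Ty}t_B)^2 = S_b^2 * t_B^2\)


\(R_x^2 + 2R_xv_{Tx}t_B + v_{Tx}^2t_B^2 + R_y^2 + 2R_yv_{Ty}t_B + v_{Ty}^2t_B^2 = S_b^2 * t_B^2\)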


Collecting terms by powers of \(t_B\), we get the final equation:


\(t_B^2(v_{Tx}^2 + v_{Ty}^2-S_B^2) + t_B(2*R_x*v_{Tx} + 2*R_y*v_{Ty}) + (R_x^2 + R_y^2) = 0\)


This is a quadratic in \(t_B\):


\(t_b = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}\)


where:

\( a = v_{Tx}^2 + v_{Ty}^2 - S_B^2\)

\( b =2(R_x*v_{Tx} + R_y*v_{Ty})\)

\( c = R_x^2 + R_y^2\)


You can test the discriminant, \(b^2-4ac\):


< 0 \(\Rightarrow\) No Solution.

= 0 \(\Rightarrow\) One solution.

> 0 \(\Rightarrow\) Two solutions, pick the lowest positive value of \(t_B\).


Once you have solved the quadratic for \(t_B\), you can then substitute it back into (1) and calculate the intercept position, \(\vec{P_T^1}\).


The Code


Putting this together and covering some edge cases:
/* Calculate the future position of a moving target so that
 * a projectile launched immediately can intercept (collide)
 * with it.
 *
 * Some situations where this might be useful for an AI to
 * make this calculation.
 *
 * 1. Shooting a projectile at a moving target.
 * 2. Launching a football or soccer ball to a player.
 * 3. Figuring out the best position to jump towards in
 *    a platform game.
 *
 *
 * The output value, solution, is the position that the
 * intercept will occur at and the location that the
 * projectile should be launched towards.
 *
 * The function will return false if a solution cannot
 * be found.  Consider the case of a target moving away
 * from the shooter faster than the speed of the
 * projectile and you will see at least one case where
 * this calculation may fail.
 */
bool CalculateInterceptShotPosition(const Vec2& pShooter,
                                    const Vec2& pTarget0,
                                    const Vec2& vTarget,
                                    float64 sProjectile,
                                    Vec2& solution
                                    )
{
   // This formulation uses the quadratic equation to solve
   // the intercept position.
   Vec2 R = pTarget0 - pShooter;
   float64 a = vTarget.x*vTarget.x + vTarget.y*vTarget.y - sProjectile*sProjectile;
   float64 b = 2*(R.x*vTarget.x + R.y*vTarget.y);
   float64 c = R.x*R.x + R.y*R.y;
   float64 tBullet = 0;
   
   
   // If the target and the shooter have already collided, don't bother.
   if(R.LengthSquared() < 2*DBL_MIN)
   {
      return false;
   }
   
   // If the squared velocity of the target and the bullet are the same, the equation
   // collapses to tBullet*b = -c.  If they are REALLY close to each other (float tol),
   // you could get some weirdness here.  Do some "is it close" checking?
   if(fabs(a) < 2*DBL_MIN)
   {
      // If the b value is 0, we can't get a solution.
      if(fabs(b) < 2*DBL_MIN)
      {
         return false;
      }
      tBullet = -c/b;
   }
   else
   {
      
      // Calculate the discriminant to figure out how many solutions there are.
      float64 discriminant = b*b - 4 * a * c;
      if(discriminant < 0)
      {  // All solutions are complex.
         return false;
      }
      
      if (discriminant > 0)
      {  // Two solutions.  Pick the smaller one.
         // Calculate the quadratic.
         float64 quad = sqrt(discriminant);
         float64 tBullet1 = (-b + quad)/(2*a);
         float64 tBullet2 = (-b - quad)/(2*a);
         if(tBullet1 < tBullet2 && tBullet1 >= 0)
         {
            tBullet = tBullet1;
         }
         else
         {
            tBullet = tBullet2;
         }
      }
      else
      {
         tBullet = -b / (2*a);
      }
   }
   // If the time is negative, we can't get there from here.
   if(tBullet < 0)
   {
      return false;
   }
   // Calculate the intercept position.
   solution = pTarget0 + tBullet*vTarget;
   
   return true;
}
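
For illustration, here is a minimal usage sketch. The positions, velocities and speed are made-up numbers, and it assumes Vec2 provides a two-argument constructor along with the operators already used in the function above:

// Hypothetical example: pick the launch velocity for a projectile
// aimed at a target moving with constant velocity.
Vec2 shooterPos(0.0, 0.0);       // where the shot is fired from
Vec2 targetPos(10.0, 5.0);       // where the target is right now
Vec2 targetVel(-2.0, 1.0);       // the target's (constant) velocity
float64 projectileSpeed = 8.0;   // speed of our projectile

Vec2 aimPoint;
if(CalculateInterceptShotPosition(shooterPos, targetPos, targetVel,
                                  projectileSpeed, aimPoint))
{
   // Launch the projectile from shooterPos towards aimPoint at
   // projectileSpeed; it will arrive when the target does.
   Vec2 toAim = aimPoint - shooterPos;
   float64 scale = projectileSpeed / sqrt(toAim.x*toAim.x + toAim.y*toAim.y);
   Vec2 projectileVelocity = scale*toAim;
   // ... hand projectileVelocity to your projectile/physics code ...
}
else
{
   // No intercept exists (e.g. the target is outrunning the projectile);
   // fall back to aiming at the target's current position.
}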


I have posted a working solution on GitHub, including a simulation that uses the above function, which you can tinker with.

Article Update Log


28 Oct 2014: Initial Release
1 Nov 2014: Update discriminant description.
1 Nov 2014: Added code example.

How to Make Gameplay Trailers Like the Triple A

$
0
0
A basketball coach once told me "if you want to get good, play against someone far better than you". This led me to a bruised ego, but much better performance in the long run. I want to do the same exercise right now breaking down a Triple A studio's gameplay trailer and even if this is far beyond your project's budget my hope is that you will understand the high level concepts and apply them where you can.

For those who don't know, Quantum Break is a new IP by Remedy Entertainment in development for the Xbox One. It's another contestant in the high-risk game of trans-media development, featuring a live action show to accompany the game.

Watch Quantum Break's gameplay trailer below (or as much as you can)




I'll try to break down as many of the concepts as I can.

1. Develop a Story


You will probably notice that for the first three and a half minutes, there are no "gameplay" mechanics shown. Sam, the Creative Director, spends the entire time setting up the story to give players a sense of immersive context. Why? There needs to be an emphasis on the backstory so character development, narrative exploration and plot arc will mean something to the player. In a trans-media property, developing an engaging story to share across your media platforms is a primary goal.

2. Worldbuilding


Books work best when the author tells you about the world you're reading about. In videogames, the audio and visual experience should do all of the talking. Again, notice that while Sam is talking through the first portion of the trailer, all the worldbuilding components have been deployed to show players what is going on in the world:
  • citizens are panicking and feel out of control
  • martial law is in effect and defines the "authority" as the enemy
  • the overall atmosphere of a totalitarian regime in infancy
  • you are an underdog hero
These moods and tones have to be shown and not told. Specific examples? I think the biggest theme in this trailer is the fact that you are an underdog hero. I see this tone set by a variety of methods:
  • You are creeping and hiding through a derelict building while the streets are ruled by the militia. Seeing someone up against the glass with police lights flashing behind them shows the need to find indirect alternate paths because the traditional routes are not safe. Confirmed by the need to crawl out an old window.
  • The character reacts out of necessity and not out of expertise. When moving through the building, he doesn't move confidently but hesitantly. The developers wanted to show this clearly: he holds his hands up to shield his eyes when a bright light pierces a window.
  • Forced to make use of what you're given. When entering a new section, the character's head looks around curiously as if to analyze the new environment he's in and what he can make use of.

3. Don't Sell the Features, but Instead the Benefits


This is a super old marketing practice you'll see in most well-run establishments. Don't discuss what hardware a computer has; explain why it should matter to a user. (Example: instead of "This computer has a 256GB SSD with 16GB of RAM", say "This computer boots in 15 seconds and can run multiple programs so that you can multitask.")

Sam really understands what the benefits of his game are. Watch at 5:00 and listen to what Sam says. "When Jack uses time powers, the enemy loses track of him resulting in exciting cat and mouse gameplay". 

4. Pacing


A most important concept - remember that you are attempting to create a short story encapsulated in this trailer. Plunging a character into action yields apathy and confusion. Spend the time to tell a story which sets the tone for the game. Anytime you want to show a moment of intensity in respect to plot, mechanics or conflict give the player breathing room to process it before exploring something new.

5. Silence


During the fight scene you will notice there are a few minutes without any talking. Sam has described the context and set the stage for what players are about to see - so he stops talking so they can watch. Understand that your goal is not just to have viewers watch your trailer but to have them become engaged with the content. If you find yourself constantly talking, you need to back up, decide which things absolutely must be said, and leave out the rest.

My Process


I urge you to start with your Creative Mandate - what is a one sentence theme or concept you feel defines your game's strengths? Based on this, come up with 3-5 of the ways in which you're going to demonstrate this to your audience. Call these your "means". Finally, develop the technical specifics for how you will expose these means. I'll write an example creative mandate based on Quantum Break.

Create an interactive immersive experience empowering players to overcome enemies and obstacles by bending the very fabric of time

We have our goal defined with several major portions (the means) that we want to carve out.

1. {Create an interactive immersive experience} empowering players to overcome enemies and obstacles by bending the very fabric of time

I can think of no better way to showcase this than to give a rich sequence of audio and visual worldbuilding. If you look back at the first 3 minutes of the footage, the content focuses on the character navigating through an eerie and dark warehouse filled with cold and sterile objects. The character skulking through shadows and flinching at noises aims to mirror the player's internal psyche.

2. Create an interactive immersive experience {empowering players} to overcome enemies and obstacles by {bending the very fabric of time}

This is an obvious component of the game which needs to be displayed. With your audience being introduced to this mechanic for the first time, ease them into the experience with strong visual and audio cues. Have the player camera pause to first witness the time freeze prompted by the visual and audio functions which accompany it. Let the first few times it happens be like art - something to be viewed.

3. Create an interactive immersive experience {empowering players to overcome enemies} and obstacles by bending the very fabric of time

Consider the careful progression one must make whenever teaching someone anything new. In the trailer, the first few seconds of the first combat experience is about demonstrating the very fact that the character is facing an enemy. Before shooting back, the character has hidden behind multiple objects and moved tactically. The character shoots down his enemies with traditional shooting mechanics before introducing the time warp abilities allowing an augmented experience to the classic 3rd person shooter. By the end of the sequence, chained combat animations allow an informed audience to be captivated by visual effects and animations because they understand what they are watching.

4. Create an interactive immersive experience {empowering players to overcome} enemies and {obstacles} by bending the very fabric of time

Call it platforming, action based quick-time, intuitive environmental problem solving or whatever you like - a major feature of the game is applying your time powers to the world around you. Showing unbroken sequences of puzzle solving with increasing visual and audio intensity showcases the player conquering their world.

We now have 4 primary means of communicating our creative mandate to the audience. Take time and create a trailer you're proud of. Show friends and your team before publishing. It's entirely about quality when showcasing your game. I sincerely hope this helps with your promotional efforts.

If you have any more questions about the process, get in touch with me here

Originally posted at videogamemarketing.ca/2014/10/28/make-trailers-like-triple

Making Missiles Hit Targets

$
0
0
This article discusses an approach to making physics bodies rotate smoothly, whether they are stationary or actively moving. I used cocos2d-x and Box2D, but the basic approach will work for any physics body (even a 3D one if you are trying to rotate in a 2D plane).

The approach uses a Proportional-Integral-Derivative (PID) control loop on the angle of the body to apply rotational force to it in a well controlled and predictable manner. Two examples of using the approach are shown in the video, one for a missile that can only move in the direction it is facing and another for an "Entity" like a game character that moves independent of its facing direction.

The video below shows this in action. You don't have to watch the whole video to get the idea, but there is a lot in there to see...




Facing Direction


Whether your game is in 2-D or 3-D, you often have the need to make an object "turn" to face another direction. This could be a character's walking direction as they are moving, the direction they are shooting while sitting crouched, the direction a missile is flying in, the direction a car is racing towards, etc. This is the job of the "character controller", the piece of code in your system responsible for the basic "movement" operations that a character must undergo (seek, turn, arrive, etc.).

Building games that use physics engines is a lot of fun and adds a level of realism to the game that can dramatically improve the gameplay experience. Objects collide, break, spin, bounce, and move in more realistic ways.

Moving in more realistic ways is not what you usually think about, though, when you think about facing direction. Your usual concern is something like "The character needs to turn to face left in 0.5 seconds." From the standpoint of physics, this means you want to apply forces to make it turn 90° left in 0.5 seconds. You want it to stop exactly on the spot. You don't want to worry about things like angular momentum, which will tend to keep it turning unless you apply counter force. You really don't want to think about applying counter force to make it stop "on a dime". Box2D will allow you to manually set the position and angle of a body. However, if you manually set the position and angle of a physics body in every frame, it can interfere (in my experience) with the collision response of the physics engine.

Most important of all, this is a physics engine. You should be using it as such to make bodies move as expected.

Our goal is to create a solution to change the facing direction of the body by applying turning force (torque) to it.

If we decouple the problem of "how it turns" from "how it moves", we can use the same turning solution for other types of moving bodies where the facing direction needs to be controlled. For this article, we are considering a missile that is moving towards its target.


Attached Image: PID-Angle-300x207.jpg


Here, the missile is moving in a direction and has a given velocity. The angle of the velocity is measured relative to the x-axis. The "facing direction" of the missile is directly down the nose and the missile can only move forward. We want to turn it so that it is facing towards the target, which is at a different angle. For the missile to hit the target, it has to be aiming at it. Note that if we are talking about an object that is not moving, we can just as easily use the angle of the body relative to the x-axis as the angle of interest.

Feedback Control Systems 101


The basic idea behind a control system is to take the difference of "what you want the value to be" and "what the value is" and adjust your input to the system so that, over time, the system converges to your desired value.

From this wikipedia article:

A familiar example of a control loop is the action taken when adjusting hot and cold faucets (valves) to maintain the water at a desired temperature. This typically involves the mixing of two process streams, the hot and cold water. The person touches the water to sense or measure its temperature. Based on this feedback they perform a control action to adjust the hot and cold water valves until the process temperature stabilizes at the desired value.

There is a huge body of knowledge in controls system theory. Polynomials, poles, zeros, time domain, frequency domain, state space, etc. It can seem daunting to the uninitiated. It can seem daunting to the initiated as well! That being said, while there are more "modern" solutions to controlling the facing direction, we're going to stick with PID Control. PID control has the distinct advantages of having only three parameters to "tune" and a nice intuitive "feel" to it.

PID Control


Let's start with the basic variable we want to "control", the difference between the angle we want to be facing and the angle of the body/velocity:


\(e(t) = desired - actual\)


Here, \(e(t)\) is the "error". We want to drive the error to 0. We apply forces to the body to make it turn in a direction that moves \(e(t)\) towards 0. To do this, we create a function \(f(.)\), feed \(e(t)\) into it, and apply torque to the body based on it. Torque makes bodies turn:


\(torque(t) = I * f(e(t)), I \equiv Angular Inertia \)


Proportional Feedback


The first and most obvious choice is to apply a torque that is proportional to the \(e(t)\) itself. When the error is large, large force is applied. When the error is small, small force is applied. Something like:


\(f(e(t)) = K_p * e(t)\)


And this would work. Somewhat. The problem is that when the error is small, the corrective force is small as well. So as the body turns and \(e(t)\) gets small (nearing the intended angle), the retarding force is small too, and the body overshoots and goes past the intended angle. Then it starts to swing back and eventually oscillates towards a steady state. If \(K_p\) is not too large, it should settle into a "damped" (exponentially decaying) sinusoid, dropping a large amount each oscillation (stable solution). It may also spiral off towards infinity (unstable solution), or just oscillate forever around the target point (marginally stable solution). If you reduce \(K_p\) so that it is not moving so fast, then when \(e(t)\) is large, you don't have a lot of driving force to get moving.

A pure proportional controller also has a bias (Steady State Error) that keeps the final output different from the input. The error is a function of \(K_p\) of the form:


\(\text{Steady State Error} = \frac{\text{desired}}{1 + \text{constant} \cdot K_p} \)


So increasing the \(K_p\) value makes the bias smaller (good). But this will also make it oscillate more (bad).

Integral Feedback


The next term to add is the integral term, the "I" in PID:


\(f(e(t)) = K_p * e(t) + \int\limits_{-\infty}^{now} K_i * e(t) \, dt \)


For each time step, if \(e(t)\) has a constant value, the integral term will work to counter it:
  • If direction to the target suddenly changes a small amount, then over each time step, this difference will build up and create turning torque.
  • If there is a bias in the direction (e.g. Steady State Error), this will accumulate over the time steps and be countered.
The integral term works to counter any constant offset being applied to the output. At first, it works a little but over time, the value accumulates (integrates) and builds up, pushing more and more as time passes.

We don't have to calculate the actual integral. We probably don't want to anyway since it stretches back to \(-\infty\) and an error back in the far past should have little effect on our near term decisions.

We can estimate the integral over the short term by summing the value of \(e(t)\) over the last several cycles and multiplying by the time step (Euler Integration) or some other numerical technique. In the code base, the Composite Simpson's Rule technique was used.
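
As a rough sketch, writing \(e_k\) for the stored error samples, \(N\) for the history length, and \(\Delta t\) for the time step, the simplest Euler estimate is just a scaled sum of the recent errors:


\(\int e(t) \, dt \approx \Delta t \sum\limits_{k=n-N+1}^{n} e_k\)


Simpson's rule does the same kind of thing but weights the samples unevenly (the endpoints once, the interior samples alternately by 4 and 2), which is what the PIDController class below does.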

Derivative Feedback


Most PID controllers stop at the "PI" version. The proportional part gets the output swinging towards the input and the integral part knocks out the bias or any steady external forces that might be countering the proportional control. However, we still have oscillations in the output response. What we need is a way to slow down as the body is heading towards the target angle. The proportional and integral components work to push towards it. By looking at the derivative of \(e(t)\), we can estimate its value in the near term and apply force to drive it towards not changing. This is a counter-force to the proportional and integral components:


\(f(e(t)) = K_p * e(t) + \int\limits_{-\infty}^{now} K_i * e(t) dt + K_d * \frac{de(t)}{dt}\)


Consider what happens when \(e(t)\) is oscillating. Its behavior is like a sine function. The derivative of this is a cosine function and its maximum occurs when sin(e(t)) = 0. That is to say, the derivative is largest when \(e(t)\) is swinging through the position we want to achieve. Conversely, when the oscillation is at the edge, about to change direction, its rate of change switches from positive to negative (or vice versa), so the derivative is smallest (minimum). So the derivative term will apply counter force hardest when the body is swinging towards the point we want to be at, countering the oscillation, and least when we are at either edge of the "swing".

Just like the integral, the derivative can be estimated numerically. This is done by taking differences over the last several \(e(t)\) values (see the code).
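
In its simplest form (and this is what the code below does), it is a backward difference over the two most recent samples:


\(\frac{de(t)}{dt} \approx \frac{e_n - e_{n-1}}{\Delta t}\)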

Note:  Using derivative control is not usually a good idea in real control systems. Sensor noise can make it appear as if \(e(t)\) is changing rapidly back and forth, causing the derivative to spike back and forth with it. However, in our case, unless we are looking at a numerical issue, we should not have a problem.


Classes and Sequences


Because we are software minded, whatever algorithm we want to use for a PID controller, we want to wrap it into a convenient package, give it a clean interface, and hide everything except what the user needs. This needs to be "owned" by the entity that is doing the turning.


Attached Image: PID-Controller-Components.png


The MovingEntityInterface represents a "Moving Entity". In the case of this demo, it can be an entity like a Missile, which moves forward only, or a "character", which can turn while moving. While they have different methods internally for "applying thrust" they both have nearly identical methods for controlling turning. This allows the implementation of a "seek" behavior tailored more to the entity type.

The interface itself is generic so that the MainScene class can own an instance and manipulate it without worrying about what type it is.

The PIDController class itself has this interface:

/********************************************************************
 * File   : PIDController.h
 * Project: Interpolator
 *
 ********************************************************************
 * Created on 10/13/13 By Nonlinear Ideas Inc.
 * Copyright (c) 2013 Nonlinear Ideas Inc. All rights reserved.
 ********************************************************************
 * This software is provided 'as-is', without any express or implied
 * warranty.  In no event will the authors be held liable for any 
 * damages arising from the use of this software.
 *
 * Permission is granted to anyone to use this software for any 
 * purpose, including commercial applications, and to alter it and 
 * redistribute it freely, subject to the following restrictions:
 *
 * 1. The origin of this software must not be misrepresented; you must 
 *    not claim that you wrote the original software. If you use this 
 *    software in a product, an acknowledgment in the product 
 *    documentation would be appreciated but is not required.
 * 2. Altered source versions must be plainly marked as such, and 
 *    must not be misrepresented as being the original software.
 * 3. This notice may not be removed or altered from any source 
 *    distribution. 
 */

#ifndef __Interpolator__PIDController__
#define __Interpolator__PIDController__

#include "CommonSTL.h"
#include "MathUtilities.h"

/* This class is used to model a Proportional-
 * Integral-Derivative (PID) Controller.  This
 * is a mathematical/control system approach
 * to driving the state of a measured value
 * towards an expected value.
 *
 */

class PIDController
{
private:
   double _dt;
   uint32 _maxHistory;
   double _kIntegral;
   double _kProportional;
   double _kDerivative;
   double _kPlant;
   vector<double> _errors;
   vector<double> _outputs;
   
   enum
   {
      MIN_SAMPLES = 3
   };
   
   
   /* Given two sample outputs and 
    * the corresponding inputs, make 
    * a linear prediction one time step
    * into the future.
    */
   double SingleStepPredictor(
                               double x0, double y0,
                               double x1, double y1,
                               double dt) const
   {
      /* Given y0 = m*x0 + b
       *       y1 = m*x1 + b
       *
       *       Solve for m, b
       *
       *       => m = (y1-y0)/(x1-x0)
       *          b = y1-m*x1
       */
      assert(!MathUtilities::IsNearZero(x1-x0));
      double m = (y1-y0)/(x1-x0);
      double b = y1 - m*x1;
      double result = m*(x1 + dt) + b;
      return result;
   }
   
   /* This function is called whenever
    * a new input record is added.
    */
   void CalculateNextOutput()
   {
      if(_errors.size() < MIN_SAMPLES)
      {  // We need a certain number of samples
         // before we can do ANYTHING at all.
         _outputs.push_back(0.0);
      }
      else
      {  // Estimate each part.
         size_t errorSize = _errors.size();
         // Proportional
         double prop = _kProportional * _errors[errorSize-1];
         
          // Integral - Use Extended Simpson's Rule
          double integral = 0;
          for(uint32 idx = 1; idx < errorSize-1; idx+=2)
          {
          integral += 4*_errors[idx];
          }
          for(uint32 idx = 2; idx < errorSize-1; idx+=2)
          {
          integral += 2*_errors[idx];
          }
          integral += _errors[0];
          integral += _errors[errorSize-1];
          integral /= (3*_dt);
          integral *= _kIntegral;
         
         // Derivative
         double deriv = _kDerivative * (_errors[errorSize-1]-_errors[errorSize-2]) / _dt;
         
         // Total P+I+D
         double result = _kPlant * (prop + integral + deriv);
         
         _outputs.push_back(result);
         
      }
   }
   
public:
   void ResetHistory()
   {
      _errors.clear();
      _outputs.clear();
   }
   
   void ResetConstants()
   {
      _kIntegral = 0.0;
      _kDerivative = 0.0;
      _kProportional = 0.0;
      _kPlant = 1.0;
   }
   
   
	PIDController() :
      _dt(1.0/100),
      _maxHistory(7)
   {
      ResetConstants();
      ResetHistory();
   }
   
   void SetKIntegral(double kIntegral) { _kIntegral = kIntegral; }
   double GetKIntegral() { return _kIntegral; }
   void SetKProportional(double kProportional) { _kProportional = kProportional; }
   double GetKProportional() { return _kProportional; }
   void SetKDerivative(double kDerivative) { _kDerivative = kDerivative; }
   double GetKDerivative() { return _kDerivative; }
   void SetKPlant(double kPlant) { _kPlant = kPlant; }
   double GetKPlant() { return _kPlant; }
   void SetTimeStep(double dt) { _dt = dt; assert(_dt > 100*numeric_limits<double>::epsilon());}
   double GetTimeStep() { return _dt; }
   void SetMaxHistory(uint32 maxHistory) { _maxHistory = maxHistory; assert(_maxHistory >= MIN_SAMPLES); }
   uint32 GetMaxHistory() { return _maxHistory; }
   
   void AddSample(double error)
   {
      _errors.push_back(error);
      while(_errors.size() > _maxHistory)
      {  // If we got too big, remove the history.
         // NOTE:  This is not terribly efficient.  We
         // could keep all this in a fixed size array
         // and then do the math using the offset from
         // the beginning and modulo math.  But this
         // gets complicated fast.  KISS.
         _errors.erase(_errors.begin());
      }
      CalculateNextOutput();
   }
   
   double GetLastError() { size_t es = _errors.size(); if(es == 0) return 0.0; return _errors[es-1]; }
   double GetLastOutput() { size_t os = _outputs.size(); if(os == 0) return 0.0; return _outputs[os-1]; }
   
	virtual ~PIDController()
   {
      
   }
};

This is a very simple class to use. You set it up by calling the SetKXXX functions as needed, set the time step for integration, and call AddSample(...) each update cycle with the error term.
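
For illustration only, a minimal setup might look like the sketch below. The gain values are placeholders rather than tuned numbers (tuning is deliberately left to you, as discussed later):

// Hypothetical setup -- the gains below are placeholders, not tuned values.
PIDController turnController;
turnController.SetTimeStep(1.0/100);    // match your physics update rate
turnController.SetMaxHistory(7);        // number of samples kept for the estimates
turnController.SetKProportional(2.0);   // placeholder
turnController.SetKIntegral(0.05);      // placeholder
turnController.SetKDerivative(0.25);    // placeholder
turnController.SetKPlant(1.0);

// Then, once per update cycle:
//    turnController.AddSample(angleError);
//    double angularAcceleration = -turnController.GetLastOutput();  // negative feedback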

Looking at the Missile class, which owns an instance of this, the step update (called in Update) looks like this:

void ApplyTurnTorque()
   {
      Vec2 toTarget = GetTargetPos() - GetBody()->GetPosition();

      float32 angleBodyRads = MathUtilities::AdjustAngle(GetBody()->GetAngle());
      if(GetBody()->GetLinearVelocity().LengthSquared() > 0)
      {  // Body is moving
         Vec2 vel = GetBody()->GetLinearVelocity();
         angleBodyRads = MathUtilities::AdjustAngle(atan2f(vel.y,vel.x));
      }
      float32 angleTargetRads = MathUtilities::AdjustAngle(atan2f(toTarget.y, toTarget.x));
      float32 angleError = MathUtilities::AdjustAngle(angleBodyRads - angleTargetRads);
      _turnController.AddSample(angleError);

      // Negative Feedback
      float32 angAcc = -_turnController.GetLastOutput();

      // This is as much turn acceleration as this
      // "motor" can generate.
      if(angAcc > GetMaxAngularAcceleration())
         angAcc = GetMaxAngularAcceleration();
      if(angAcc < -GetMaxAngularAcceleration())
         angAcc = -GetMaxAngularAcceleration();

      float32 torque = angAcc * GetBody()->GetInertia();
      GetBody()->ApplyTorque(torque);
   }

Nuances


If you look carefully at the video, there is a distinct difference in the way path following works for the missile vs. the character (called the MovingEntity in the code). The missile can overshoot the path easily, especially when its turn rate is turned down and it is only moving forward.

The MovingEntity always moves more directly towards the points because it is using a "vector feedback" of its position vs. the target position to adjust its velocity. This is more like a traditional "seek" behavior than the missile.

I have also, quite deliberately, left out a bit of key information on how to tune the constants for the PID controller. There are numerous articles on Google for how to tune a PID control loop, and I have to leave something for you to do, after all.

You will also note that the default value for _dt, the time step, is set to 0.01 seconds. You can adjust this value to match the timestep you actually intend to use, but there will be tradeoffs in the numerical simulation (roundoff error, system bandwidth concerns, etc.) that you will encounter. In practice, I use the same controller with the same constants across physical entities of multiple sizes without tweaking, and the behavior seems realistic enough (so far) that I have not had to go hunting for minor parameter tweaks. Your mileage may vary.

The source code for this, written in cocos2d-x/C++, can be found on github here. The PIDController class has no dependencies other than standard libraries and should be portable to any system.

Article Update Log

3 Nov 2014: Text correction in "Integral" section.
29 Oct 2014: Initial release

Opening Up Advertising to the Community

$
0
0

Introduction


Generic bulk banner ads suck and they don't pay all that well. GameDev.net gets sponsorships from time to time that allow us to continue to operate the site throughout the year but sometimes we have to make ad deals not because we want to, but because without them we'd have to shut down. To be truthful, it would be awesome to either run the site without advertising OR run it with ads that come from within the community. This article is largely about an idea brought to us by one of our users named StarMire in a discussion about some crappy bulk ads we were showing.

Our Goals for 2015


We have some pretty big goals for 2015 which includes a major focus on beginner tutorials particularly in the mobile development arena. We also understand how hard it is for indies to advertise their middleware to others. Our "Your Announcements" forum has long been one of our more popular areas for people looking to get some support for their development work. This next pitch is for you guys.

Can We Cut The Cord?


Our little site here gets about 1.7 million page views a month, and for November and December we will be trying an experiment... a pretty huge experiment. You see, we have this little subscription service called GDNet+ that has a few features to improve your site experience. It's only a few bucks a month but every dollar goes a long way in helping us to keep running. And for November and December, we're turning all our advertising over to you guys. Can we cut the cord that ties us to silly bulk ads? We hope so.

Every single GDNet+ subscriber will get a shot at having their product or service in our ad rotation. If this is successful we may be able to do this forever! Now, we will still get bigger sponsorships from time to time, but they'll be the type of sponsorships that interest you rather than generic bulk ads. We'll fit them all in and make our ad space something that lets you discover gem products and services that will make you a better developer.

We don't have all the rules figured out just yet so we're going to allow you guys to help us figure them out as we run this experiment.

So go out and sign up for GDNet+, then head over to our store and post your first ad! GDNet+ subscribers will see the price as $0! Thanks for supporting us!

Note:  We're betting everything on you guys for support. Click here to see plans and pricing

Linguistic Testing: Devil in the Details

$
0
0
If you want a high-quality localization of your product, linguistic testing is an absolute must. To get good results for any kind of project – whether a site, application, game, or mobile app – you have to do more than just translate strings in resource files. At the final stage of localization, linguistic testers must carefully and thoughtfully perform one more task: testing the translation as implemented in the final product.

Linguistic testing accomplishes three (and sometimes even more) tasks:

First, testing allows pinpointing strings that do not fit into their GUI elements, be these menus, buttons, or toolbars. This can happen because the length of words is different in different languages. When translating from Russian to French or German, for example, the length of text increases by 15 to 20%. Things are even more complicated for Asian languages. A handful of Chinese characters when translated into English, for example, turn into a long phrase that simply cannot fit in the relevant GUI window. For character languages it is also a good idea to increase the font size, so that all the small details of characters are legible. GUIs should be beautiful and localization testing is critical for keeping them that way.

The second job of linguistic testing is to make sure that phrases fit their context. Most often this question arises when testing games: does the translation match the in-game situation that the end user encounters? When making the initial translation, the translator was looking at resource files and, although helped by comments and screenshots, still saw only a list of strings. So there are probably places where the translation does not capture 100% of the context. Common errors in games include incorrect gender, repeating units, and incorrect object names.

Non-games can have their complexities too: if we are translating the word “rate” from English, do we mean the price (“hourly rate”) or ratio of currencies (“exchange rate”)? Or maybe “rate” is used as a verb – but then is it in the sense of evaluate (“to rate an app”) or to deserve (“to rate a mention”)?

These aspects are tricky and deserve close attention.

Third, it's important to check how the text in your interface is displayed in different localizations of the target operating system. This can help solve possible issues with text encoding, such as when special characters (for example, diacritics or umlauts) in different languages are displayed incorrectly.


Localization%20mistakes.png
This screenshot shows incorrect display of special characters. Without linguistic testing, this is what French gamers would have seen in the interface.


What's the right way to do linguistic testing?


For almost ten years, we at Alconost have offered professional translation, localization, and linguistic testing in forty languages. Here are a few hard-won tips for linguistic testing based on our experience.

Let's say that all of the interface strings have been translated and integrated into the product. What is the next step? Optimally, the translators now receive the localized product and carefully review each window, checking each and every piece of text. Why do we say “optimally” here? In practice, complications crop up both on the translator side and on the client side. Sometimes a translator may not have a device capable of running the product, or the client cannot provide a custom build or grant access to the product. As a workaround, the client takes as many screenshots as possible for review by the tester.

Testing goes beyond just checking interface elements – it includes system errors, help materials, and other accompanying documentation.

When a tester finds an error, he or she makes corrections in the translation file and also records the error in the bug list. Bugs can include pieces of untranslated text, missing text, incorrectly formatted dates or numbers, incorrect first name/last name order, or incorrect currency. Keeping a bug list gives the client a visual representation of how many bugs have been found and how each of them has been fixed.

The situation is more complicated when, besides translation errors, there are cosmetic errors: the translated strings may be too long and get cut off, or even spill out of their button/window. In these situations, the usual method is to find a shorter way of rephrasing the text. If worst comes to worst and there is no way of rephrasing the text, then we can simply remove a portion of it. Another solution in some situations is to leave a word in the original language (i.e., in English), but this works only when the term is very well known and translation is not truly necessary.


devil_alconost.png


Three secrets for awesome linguistic testing


Secret No. 1: By choosing the right tools during the translation stage, you can significantly simplify and speed up linguistic testing later. Unlock this “magic” by automating as much of translators’ work as possible. At Alconost we do this by using the latest computer-assisted translation tools (SDL Trados, SDL Passolo, OmegaT, Sisulizer, Poedit, and MemoQ) and cloud-based platforms (Webtranslateit, Crowdin, GetLocalization, Google Translator Toolkit). These CAT tools allow multiple translators and editors to work on a project at the same time, as well as utilize translation memory.

Translation memory is powerful: each translated word is memorized, and when a word is found in the text a second time (or third or fourth...), the translation memory will make a suggestion based on the existing translation. This makes the translation consistent, reducing the time required for linguistic testing and preventing issues from occurring.

Secret No. 2: It's critical to write the test plan carefully. Make the work as simple as it can be, while making sure that everything (and we mean everything!) is verified and proofread. The test plan should explain to the translator how to view all texts in full and provide access to hidden areas of the product (error messages, bonus levels in games, paid functionality in software). When testing games, it's best to provide translators with cheat codes for quickly completing all levels.


Computer_games_testing.jpg


Secret No. 3: Linguistic testing needs to be done by professional translators who are native speakers in the language being tested. Ideally, translation should be performed by only natives as well (in our nine years of experience at Alconost Translations, we have seen that excellent translation quality is possible only when native speakers are used). But if for whatever reason the translation was performed by non-natives, it is even more important that linguistic testing be performed by a specialist who was raised and educated in the target language. Only native speakers can pick up all the subtleties of context, as well as carefully and accurately shorten words and phrases.

As you can see, linguistic testing is a key step in the localization process. If you want a high-quality product, ignore it at your peril! Test well and prosper!