
The Latest Evolution in Microtransactions

I'm going to guess that many might not see the significance of this article. Microtransactions are a hot topic in the realm of the hated corporate agenda of video games, but I don't think it's enough to just "hate" microtransactions or the companies pushing them; you should know what they're trying to do. The nature of experiential microtransactions is simple when compared to real-world examples.

Experience as a Product


If you've ever gone to Starbucks you'll understand what an "experiential" product is. The basic idea is that you're not just buying a coffee, but buying an experience - the ethnic/exotic music being played, the decorations on the wall (usually dark and warm colors), or the rough wood-grained table that feels different under your fingers compared to your regular IKEA table. I don't want to get weighed down with this point, but Harvard Business Review has a great article expanding on it. Mainstream marketing has clued into the fact that people aren't so interested in the utilitarian function of a product anymore - how the product is experienced is the focus of modern marketing.

When I chat about MTX design and examples with friends, I often hear examples pulled from WarCraft and GuildWars. It's long been the classic standard where you exchange an in-game commodity for real-world currency and the value is in how great the "product" is. When there is an agenda to increase sales volume, awesome equipment and items might be put on sale as an incentive for players to buy. Having worked with clients to balance their in-game economies, I've seen first hand that this isn't a reliable or sustainable model to work with. You can flood the in-game economy with epic items, but this unbalances other systems and devalues other items, forcing you to find new items to put on sale. This conundrum sparked the desire to shift the emphasis off the actual item subject to the microtransaction and onto the experience.

Examples


In my series on HearthStone and its monetization strategy I mention how the principal MTX mechanic, purchasing packs of cards, is built on the experience of opening the pack. There are flashy visual effects, sounds and even an interactive function to create a "moment" when you're receiving your cards. I need to dispel the thought that this is just a "random" occurrence - video game design resources on IPs this large are calculated and have a purpose.


KBBXA5F5UR6B1375311534875.jpg


An even better example is the Arena mode of gameplay in HearthStone. You pay $2 to make a random deck of cards and see how well it performs. Once you have lost 3 matches you are given prizes in volume based on the number of wins during your Arena run. The picture above shows the screen you're given with your prizes - the experience is so based around earning prizes that the prizes are "wrapped" so you can open them to fully experience the surprise, along with flashy animation and sounds. The reason HearthStone marks a landmark in MTX design is that the Arena was exclusively designed for users to pay money (or hard-to-earn game points) to experience. Players who get bored of the regular HearthStone gameplay and want a unique or more challenging game mode have this experience always ready to purchase. I pretty much see this design as a carnival in a video game. You're not paying for the crappy prize they give you, you're paying for the experience of achieving.

The Battlefield series has just implemented a similar system. From a consumer's view it would seem straightforward to just be able to purchase the specific guns and equipment desired. The regular course of acquiring guns is a long, drawn-out process and many hardcore clan members would likely pay $2.50 for a given item, but instead players are only given the option of purchasing battlepacks, which are filled with random pieces of equipment.

One final example is the new mystery skin gifting in League of Legends. The basic concept is that instead of buying a specific skin for your champion, you can buy a random skin for one of your owned champions. On top of the excitement of randomness in the skin you'll receive, you might even get a rare or legendary skin. You're able to gift the skins to other players, and it becomes a new skin-buying experience less about the actual skin and more about the excitement and surprise.

The Historic Method


If you remember back to your first time playing Zelda: Ocarina of Time, your heart would race when you'd see a big chest in the middle of the dungeon. The chest-opening animation and music were done in such a specific way - to build anticipation and further the excitement of the player in opening the reward for their progress. This new MTX strategy is putting a coin slot on the experience of achievement. Game design is fundamentally about having players achieve progress, and experiential microtransactions devalue the process by creating an MTX model around selling progress. Yes, this is nothing new to the arguments against "pay to play", but what really discourages me here is not that the MTX element exists but that the game systems are adjusted to further incline players toward the monetization strategy.


chests_610-300x179.jpg


Usually, you make a progression system oriented around having a player feel they've earned their reward when they've spent time commensurate with that reward. If the rewards are too low for the time or effort invested, a player will naturally feel unsatisfied. Experiential MTX design relies on this dissatisfaction to push players toward spending.

What's always fascinated me about MTX theory is that it has generally mimicked Western consumer economic theories. If you're familiar with the idea that historic economies first sold goods as products and then shifted to service-based economies, you'll understand how we're experiencing the exact same thing here.

The Ultimate Dangers of This


These should be super obvious:

1. The excitement in videogames becomes a virtual good bought and sold. I personally wouldn't be interested in a game where the design is meant for me to be under-satisfied or underwhelmed by the content unless I was willing to pay more than the price to acquire the game. It's a bait and switch tactic that I find repugnant.

2. Content becomes optimized from a financial perspective, creating redundancy. Once the magic formula of what sells best is found, every developer will just copy it. We're seeing it happen right now with major titles all rushing to implement random MTX purchases.

3. Game content becomes about the experience of progress rather than the acquisition of achievement. This might sound unimportant, but it means that games will be made in such a way as to encourage players to achieve ambiguous goals rather than the traditional PvE, PvP or social goals.

Maybe I'm looking at this really subjectively. Do you like the way this works? Do you think this has a positive impact on game development?

Flexible particle system - The Container

One of the most crucial parts of a particle system is the container for all the particles. It has to hold all the data that describes the particles, it should be easy to extend, and it should be fast enough. In this post I will write about choices, problems and possible solutions for such a container.

The Series


Introduction


What is wrong with this code?

class Particle {
public:
    bool m_alive;
    Vec4d m_pos;
    Vec4d m_col;
    float time;
    // ... other fields
public:
    // ctors...

    void update(float deltaTime);
    void render();
};

and then usage of this class:

std::vector<Particle> particles;

// update function:
for (auto &p : particles)
    p.update(dt);

// rendering code:
for (auto &p : particles)
    p.render();    

Actually one could say that it is OK. And for some simple cases indeed it is.
But let us ask several questions:

  1. Are we OK with the SRP (Single Responsibility Principle) here?
  2. What if we would like to add one field to the particle? Or have one particle system with pos/col and other with pos/col/rotations/size? Is our structure capable of such configuration?
  3. What if we would like to implement a new update method? Should we implement it in some derived class?
  4. Is the code efficient?

My answers:


  1. It looks like SRP is violated here. The Particle class is responsible not only for holding the data but also performs updates, generation and rendering. Maybe it would be better to have one configurable class for storing the data, some other system/module for its update and another for rendering? I think that this option is much better designed.
  2. With the Particle class built that way, we are blocked from adding new properties dynamically. The problem is that we use an AoS (Array of Structs) pattern here rather than SoA (Structure of Arrays). In SoA, when you want one more particle property, you simply add a new array.
  3. As I mentioned in the first point: we are violating SRP so it is better to have a separate system for updates and rendering. For simple particle systems our original solution will work, but when you want some modularity/flexibility/usability then it will not be good.
  4. There are at least three performance issues with the design:
    1. AoS pattern might hurt performance.
    2. In the update code for each particle we have not only the computation code, but also a (virtual) function call. We will see almost no difference for 100 particles, but when we aim for 100k or more it will be visible for sure.
    3. The same problem goes for rendering. We cannot render each particle on its own, we need to batch them in a vertex buffer and make as few draw calls as possible.

All of the above problems must be addressed in the design phase.
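To make point 2 concrete, here is a minimal sketch of an SoA-style container. The class and field names are illustrative only, not the implementation the article builds toward; the point is that each property lives in its own array, and update logic lives outside the container:

```cpp
#include <cstddef>
#include <memory>

struct Vec4d { double x, y, z, w; };

// Hypothetical SoA particle container: one array per property.
// Adding a new property means adding one more array, nothing else.
class ParticleData {
public:
    explicit ParticleData(size_t maxCount)
        : m_count(maxCount),
          m_pos(new Vec4d[maxCount]),
          m_col(new Vec4d[maxCount]),
          m_time(new float[maxCount]) { }

    size_t m_count;
    std::unique_ptr<Vec4d[]> m_pos;
    std::unique_ptr<Vec4d[]> m_col;
    std::unique_ptr<float[]> m_time;
};

// Updates live outside the container (SRP): each module touches only
// the arrays it needs - here, a time updater reads just m_time.
void updateTime(ParticleData &p, float dt) {
    for (size_t i = 0; i < p.m_count; ++i)
        p.m_time[i] -= dt;
}
```

Because the update loop walks one tightly packed array, it is also friendlier to the cache than iterating over an array of full Particle structs.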

Add/Remove Particles


It was not visible in the above code, but another important topic for a particle system is an algorithm for adding and killing particles:

void kill(particleID) { ?? }
void wake(particleID) { ?? }

How to do it efficiently?

First thing: Particle Pool


It looks like particles need a dynamic data structure - we would like to dynamically add and delete particles. Of course we could use a std::list or std::vector and resize it every time, but would that be efficient? Is it good to reallocate memory often (each time we create a particle)? One thing we can assume up front is that we can allocate one huge buffer that will contain the maximum number of particles. That way we do not need memory reallocations all the time.


partDesign1block.png


We solved one problem: numerous buffer reallocations, but on the other hand we now face a problem with fragmentation. Some particles are alive and some of them are not. So how to manage them in one single buffer?

Second thing: Management


We can manage the buffer in at least two ways:
  • Use an alive flag and in the for loop update/render only active particles.
    • This unfortunately causes another problem with rendering, because there we need a contiguous buffer of things to render; we cannot easily skip dead particles there. To solve this we could, for instance, create another buffer and copy the alive particles into it every time before rendering.
  • Dynamically move killed particles to the end so that the front of the buffer contains only alive particles.


partDesign1mng.png


As you can see in the above picture, when we decide that a particle needs to be killed we swap it with the last active one.

This method is faster than the first idea:
  • When we update particles there is no need to check whether each one is alive - we update only the front of the buffer.
  • No need to copy the alive particles to some other buffer.
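The pool plus swap-with-last idea can be sketched like this. This is my own minimal illustration under the assumptions above (a fixed-size buffer, one SoA property); the names are not the article's final code:

```cpp
#include <cstddef>
#include <utility>

// Hypothetical fixed-size pool: indices [0, m_countAlive) are alive,
// everything after that is dead. No per-particle alive flag needed.
class ParticlePool {
public:
    static const size_t MAX_COUNT = 1000; // allocated once, up front
    float m_time[MAX_COUNT];              // one property, SoA style
    size_t m_countAlive = 0;

    void wake(float startTime) {
        if (m_countAlive < MAX_COUNT)
            m_time[m_countAlive++] = startTime;
    }

    void kill(size_t id) {
        // Swap the killed particle with the last alive one and shrink
        // the alive range, keeping the front of the buffer contiguous.
        if (id < m_countAlive) {
            std::swap(m_time[id], m_time[m_countAlive - 1]);
            --m_countAlive;
        }
    }
};
```

Update and render loops then simply iterate `i < m_countAlive`, which keeps the renderable data contiguous for batching into a vertex buffer.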

What's Next


In this article I've introduced several problems we can face when designing a particle container. Next time I will show my implementation of the system and how I solved the described problems.

BTW: do you see any more problems with the design? Please share your opinions in the comments.

14 Jun 2014: Initial version, reposted from Code and Graphics blog

Dynamic Narrative in The Hit

I've been dreaming about a player-driven dynamic narrative system for the last 20 years, and trying to come up with a workable design for the last 5. The Hit will be the first part of that to see a public release. I’m going to need as much user feedback and player metrics as possible, so I’m designing the game to be enjoyable and attractive from the start, and only building in the framework for the dynamic narrative system. Once the game is released, I’ll start to add more of the dynamic systems, and develop systems for creating richer and deeper narrative content.

One thing I should make clear: I’m not trying to create a dynamic narrative system for a traditional FPS or RPG, though if anyone reading this is, I hope they will find the following to be useful. Instead, I’m building the game around the dynamic narrative system, and many of the design and mechanical systems in The Hit stem from that.

Here's an overview of how the dynamic narrative system will work.

Level 0: Pedestrians and The City


At the simplest level, the City is full of pedestrians. Each pedestrian has an NPC style [which just describes, in numbers the game can understand, how the character looks] and a looped path, which they will walk along forever. The pedestrians are very simple, and ridiculously cheap in terms of processor time, so thousands can exist simultaneously in a scene. They’re also synchronised across the network, so that other online players will see exactly the same pedestrians on the same street.

The City itself is composed of sections (each about half a block in size), which each have a set number of assigned pedestrians. Generation rules for each city section will dictate the percentage of different NPC types (suits, casual, etc.) which will be generated for each section.

This is where I am at the moment. I’m concentrating on making The Hit fully playable and polished for an initial release, at which point I’ll start building the level 1 systems into the game.


XU2Y8ge.jpg


Level 1: NPCs and The Cloud


If the player interacts with a pedestrian (initially, if they speak to the pedestrian, photograph them, or follow them for a set amount of time), some procedurally generated information will be attached to that pedestrian, and it will become an NPC. NPCs will have a name, a job, and two or more destinations (usually home and workplace, though they may also have a car, which they will use to drive from one to the other). The simple looping path they follow will also be replaced by a path with a start and an end, so that anyone following will see them behaving realistically.

The data used to create the NPC will be taken from the cloud, which is a persistent and continually changing set of information that covers every aspect of the gameplay. It is essentially a reserve of pre-generated information, so the game always has suitable data on-hand for when it is required. During quiet moments of gameplay, the cloud will be creating new sets of data, including NPC data, but also procedurally generated posters, signs, billboards, graffiti, paint-jobs etc.

If the NPC data is not used (say, if the player begins a conversation with the NPC but doesn’t learn their name, or discover where they live or work), it is either discarded completely or returned to the cloud, and the NPC will revert to being a pedestrian again. This avoids the need to store data about every single pedestrian in the city - memory which can be much better used elsewhere.

This is where I think the system described recently by Ken Levine at GDC falls down. It is not necessary to simulate everything in the game-world, as long as what the player experiences feels real enough. More signal, less noise. Dwarf Fortress, the reigning king of emergent content, doesn’t actually create a narrative through simulation; it just generates enough ‘noise’, with a specific enough context, for the player to be able to create a signal from the noise. However, most players won’t have the patience to filter out all that noise in order to create an interesting narrative.

The pedestrians are essentially a programmed animation of the flow of people through a city. As long as that animation is convincing enough, as long as the NPCs are believable enough as people up close, and as long as the switch between them is not too obvious, it will appear to the player as though everything in the world is fully simulated.


XzXnzAe.jpg


Level 2: Characters and the Director


An NPC isn’t quite a character yet. Characters can be created in one of two ways: Firstly, if the player spends enough time in the vicinity of the NPC, it will request character data from the cloud. The other method is via the Director. Similar in purpose to the Director AI in the Left 4 Dead series, the Director is constantly watching over the player, and can make decisions about the various narrative threads which are in play. It can pull data from the cloud on the fly, and attach it to NPCs in the game.

Characters have traits, which can have exclusion rules, so that conflicting traits are never assigned to the same character. Traits are modular, and can be common, rare or unique. Most unique traits will have a story (see below) attached to them.

Examples of Traits: Hard of hearing, Southern accent, Religious, Unsociable, Gung-ho, Insanely Jealous, Relative is a special character, Deathwish, Serial killer...

A small, but important part of creating abstract systems is coming up with ways to represent that system visually, so that users can create and share content quickly. One of my goals is to open up The Hit’s systems to story designers, who will need an easy way to map traits onto their characters, or to understand how collaborators have set up those characters. It’ll most likely end up looking something like the Chakra system, or the Kabbalah.

Characters can also be created by designers, either in full or in part, in which case the modular system will allow the designers to rapidly bring a new character to life. For The Hit, I'm not planning to use spoken dialogue at all in the near future, which will make prototyping and testing significantly cheaper and faster.


u1k5hAV.jpg


The Story Game


The Director is essentially playing its own game with the player, and has a few rules it operates by. It has a memory of when the most recent plot-beats (events related to other events) occurred, and will try to ensure that beats continue to happen on a regular basis, occasionally punctuated with standalone incidents. It also knows how far the player is along the current major and minor story arcs. That part’s a bit more complex. Probably the best way to explain it is to use a card game as a metaphor.

Stories are made up of discrete events, and can exist as individual events, chains of events, or arcs. There is no limit to how long a chain or an arc can be, and stories can also be nested inside larger arcs.

Each event will have a trigger condition, where a set number of pieces need to be in place before the event can begin. Conditions can be acquaintances (or rather their traits), objects or information, and can be thought of as cards in the player’s hand. The Director will constantly be sorting its list of events into order of desirability. If the player is missing just one card for a high-value event, the Director may decide to 'force' the card into the player's hand by means of a smaller event.
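To make the card metaphor concrete, here is one way the trigger check could be modelled. This is entirely my own sketch, not code from The Hit; every name here (`StoryEvent`, `Hand`, the card strings) is hypothetical:

```cpp
#include <set>
#include <string>
#include <vector>

// Hypothetical story event: it can fire only once the player "holds"
// every required card (an acquaintance's trait, an object, or a piece
// of information).
struct StoryEvent {
    std::string name;
    int value;                               // desirability to the Director
    std::vector<std::string> requiredCards;  // trigger conditions
};

// Cards the Director believes the player currently holds.
using Hand = std::set<std::string>;

bool canTrigger(const StoryEvent &e, const Hand &hand) {
    for (const auto &card : e.requiredCards)
        if (hand.count(card) == 0)
            return false;
    return true;
}

// The missing cards of an almost-complete, high-value event are the
// candidates the Director might "force" into play via a smaller event.
std::vector<std::string> missingCards(const StoryEvent &e, const Hand &hand) {
    std::vector<std::string> missing;
    for (const auto &card : e.requiredCards)
        if (hand.count(card) == 0)
            missing.push_back(card);
    return missing;
}
```

Sorting the event list by `value`, then looking at events with exactly one missing card, would give the Director its "force a card into the player's hand" move described above.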

Example of an Event 1 (Western genre): The player has tracked down the Jacoby gang, who have holed up at the family farm. Through stealth and strategy, the player kills the gang one by one, until only 18 year-old Larry Jacoby remains. Instead of fighting, he throws down his gun, and offers to trade his freedom for a particular piece of information. This event could be used to force a character (not necessarily Larry himself) into the player's hand, or an item, or even just information, regardless of whether or not the player lets Larry Jacoby live, and could also provide an exciting and surprising story beat.

Example of an Event 2 (Fantasy genre): the player has a magical artifact, and is on friendly terms with a powerful special character, who discovers that the artifact is both incredibly dangerous, and also sought after by a powerful evil special character. He/she decides the player should transport the artifact to a location where it may be destroyed, and calls upon some capable friends to assist the player. This could be the start of an epic arc, and would also allow for the introduction of many characters at once.

The advantages of this system are many: rapidity of development, flexibility and game variety, and it will also allow designers to create and alter events as they go, instead of having to design everything at the outset.

One point I should stress here is that events will be, as far as is possible, player-directed. The Director is continually trying to set up events around the player, but they will be left for the player to trigger unless it becomes absolutely necessary to force an event. In order for this to happen, the Director can have triggers for multiple events in play at any one time, and will increase that number, and therefore the likelihood of triggering an event, the longer the player goes without experiencing a plot-beat. When the player does trigger an event, most of the other triggers will be removed from play until they are needed again. Only some one-off events will remain; we don’t want the player to be confused by multiple plots. Again, this could be another big advantage over a traditional RPG Quest system. It will allow designers to pace the story, rather than letting players stack up multiple missions in order to maximise their chances of gaining XP from any location.

Multiplayer is being built into The Hit from the start, and should present fewer problems than in traditional narrative games. Because characters and events are generated per-game, designers will never be faced with the problem of characters existing in different states in each player’s game. Also, characters can be shared between players while they are connected, then withdrawn to their respective games when they leave the session. The main effect multiplayer will have on gameplay is that the Director will have more cards to play with, and therefore more opportunities to create events and advance the story.

Conclusion


As I said at the top, right now my focus is solely on making The Hit an entertaining and engaging experience, getting the game out later this year, and then releasing the user creation tools soon after that. If everything goes according to plan, the first dynamic story content should be appearing in the game before the end of 2015.

Dan Stubbs

http://www.TheHitGame.co.uk

This article was originally published on Gamasutra.com in April 2014

Update: The Kickstarter for The Hit is now live at https://www.kickstarter.com/projects/374958068/the-hit-stealth-action-in-a-dynamic-city

The Art of Feeding Time: Characters

When Feeding Time (out now!) began to move past its prototyping phase, we decided we didn’t want to make just another puzzle game with abstract shapes and symbols; jewels and candies are all fine, but they lack a certain sense of life and personality.

Since we also weren’t making a match-3 title but rather a game about pairing things up, combining animals with their iconic snacks seemed like a perfect fit.

It took a little while to get to this point:

ft_eating_spree.jpg

At the beginning of Feeding Time’s development, Abel Oroz — an artist we had worked with previously — was busy joining Tequila Works to work on future projects like Rime. However, he was still gracious enough to provide some advice and work with us through the early conceptual phase.

To emphasize the game’s pairing mechanic, we sketched out some samples of animals being merged with their archetypal foods, but those came off a bit too surreal. We also realized that showing the whole body of an animal didn’t neatly fit into the grid of the gameboard. We could still do it, but it shrank the real estate available to the animals’ faces and required more complex animations for movement.

In the end we chose to simply focus on the animal heads, which also fixed scaling issues by displaying both the animals and the foods at the exact same size.

Here are some early animal sketches. I still have a soft spot for the absurd mouse with the Swiss cheese holes:


ft_early_animals.jpg


With that much figured out, it was time to seek out an illustrator. A fun and colourful look was a must for Feeding Time, but we also wanted the visuals to stand apart from all the cutesy titles that used a bland, glossy art style. George Bletsis contacted us during our search, and his incredibly varied illustrations and subtle texturing proved to be a great fit.

After putting together a bunch more concepts, we had to address one important issue: should we have multiple facing directions for each animal?

It proved increasingly difficult to showcase the animal’s face while pointing both up and down:


ft_dog_sides.jpg


On the surface it seemed like a good idea to display each animal so its direction would clearly indicate the direction from which it could start eating. Unfortunately there were multiple issues with this approach. Not only would it triple all our art/animation costs for every animal (we’d need to do a version that points up, down, and left — the right side would be a flipped version of the left side), but there’d be some visual oddities for the up/down directions, and we’d lose a consistent silhouette for each animal.

Simply keeping a single direction and flipping it 90 degrees to facilitate facing directions wasn’t an option either as it looked cheap and awkward.

I don’t think anyone ever noticed that all the units in Clash of Heroes pointed down. At the very least, there didn’t seem to be any complaints about the approach:


clash_of_heroes.jpg


Back when we were at Capy working on Heroes of Might & Magic: Clash of Heroes, we encountered a similar problem. The player’s units were located at the bottom of the screen facing up, but this left something to be desired as it only displayed their backs. Eventually it was decided that both the player’s and the enemy’s units would all face down (unless attacking) to create more interesting visuals.

We tried a similar approach in Feeding Time by making each animal point “head-on” at the screen in a neutral pose. It worked but looked a bit too symmetric and boring. In the end we decided to give each animal a singular but unique pose that best displayed its personality.

Here are various early takes on the animal heads. From left to right: animals rotated by 90 degrees, animals pointing straight at the viewer, and animals in non-standardized poses:


ft_animals_evolution.jpg


Once we established the format for each animal, we sketched out a lot more concepts and made sure to give each animal a distinct silhouette in order to make them easier to recognize. Since we were now confined to only a single animal pose, we tried to mold each one into a shape similar to that of the animal’s corresponding food. We had already taken some liberties with the colouring, but when this extra step made sense, it further helped to make the pairings easier to spot.

Below are concepts for animals in the safari and tundra stages. Note that the shapes of the foods closely resemble those of the animals for ease of recognition:


ft_animal_sketches.jpg


And this is how it all turned out!


ft_full_cast.jpg


Next up: backgrounds!

Article Update Log


25 June 2014: Initial release

Investing in Community Management

I'm only making this point because I talk to enough studios who aren't willing to invest in their community management - and that scares me. Sure, the big studios do it, and sure, some small games achieve a good social following, but right now we're trying to answer the generalized question for everyone: should you invest in your community management?

Video Game Community Management


Let's first set the expectations of what professional community management for a video game studio looks like. It's more than creating a Facebook page. It's really about leading and growing a group of intention-based individuals through discussion and action in regard to a specific subject. It balances a role between loyalty to the player-base and to the studio, and helps information flow freely between the two. You shouldn't be the figurehead celebrity of the game, but instead the facilitator of the studio's identity and personality.

If you agree with my understanding above, then consider the outcomes of proper community management that I've noticed over the years.

Advocates


People are always looking for interesting stuff to share with their like-minded friends. The entire concept of viral marketing is centered around the assumption that people will want to share what they find. One of my last articles urged developers to create viral assets that can easily be shared (videos, pictures, art, etc.), and I'd strongly recommend implementing those concepts. There is honestly nothing more rewarding than creating these assets and watching them be shared by the community - and it's really not as complicated as you think. From a financial standpoint, you will never find a cheaper CPA (cost per acquisition) for users than when you push for high social referral.

The entire idea of a k-factor - "how many new players will one player invite" - emerged because a significant number of users were attributed to existing users' referrals.

I prefer practical to theoretical, but there's a concept in marketing theory about how the "fully actualized" customer will have your product (or game) as part of their identity and will naturally represent your game and actively promote it to their peers. You'll always find whales who really take on this role, so consider enabling them with viral intent-based content.

Retention


In recent years the average lifespan of a player has plummeted - in my opinion, due to all the other affordable games they have access to. For some titles that makes sense; I can only play BioShock so many times. But for non-linear or narrative genres like MOBAs, MMOs and casual/puzzle games, there needs to be more than just a game for players to stick around. StarCraft: Brood War wasn't still being played by a large active player base more than 15 years after launch because its content stayed fresh - instead, organic communities sprang up which gave new life to an old title. Community is essential to creating any staying power for your title, and people are masters at finding new goals or ideas for games. If you look at the games with the highest LTV, they commonly pay the greatest attention to their community. Examples?
  • The Battlefield series which even won a social strategy award
  • World of WarCraft
  • League of Legends - which I think has one of the largest social platform communities ever
Basically, if your game relies on continual ARPU, then it appears the best in the business rely on community management to further create value with their players.

Feedback


I can say for certain that this function is the most neglected in all of community management, especially for video games. The concept of "co-creation" has many proponents and still many opponents, but what I'm referring to is a stable user base who want to give you detailed feedback on their user experience. I've worked in QA, and unless you have a team who is really motivated, getting good information on user experience is like pulling teeth - so instead draw answers from your players, who will give you endless feedback and even ideas on how to improve your offering.

In this function a community manager is able to fulfill the role of an intermediary as an advocate of the player community to the development team. Is there a mechanic or gameplay issue that the community in general isn't a fan of? A community manager should be the role who is continually giving feedback to the development team on what players think and want. Look at how the Diablo 3 Auction House was shut down because the community insisted it wasn't giving a positive user experience. I'm a huge fan of feedback and in a stance of humility, developers can get fantastic feedback from their players on what would improve their experience.

So What Does Investment Look Like?


It does not mean you hire someone to get you more Facebook likes. What it does mean is making the shift to consider your social platforms (social media, forums, sub-reddit pages) as a source for communication to your players and committing to building their integrated role. Here's my checklist.
  • What's your offering? (example: share in-game rewards with any users who join your social platforms)
  • Offer unique interesting content for your users - (League of Legends posts cartoons every Sunday. They understand their demographic enough to know comics are a perfect fit)
  • Promise real-time news updates on important game information. This is so important! Your users should be coming to you for updates on your game.
  • Run interesting contests through your social platforms! Be creative - it's not hard (if it is for you, ask me and we'll come up with something)
  • Is your game complex? Consider building and promoting a forum which allows for topic specific discussion.
  • Development blogs help a community know how a game is changing and that it is continually being improved.

The Paradigm Shift


I really want to encourage you to see that community management isn't another "job", it's a function that connects to so many parts of a studio. Doing Alpha or Beta testing? You probably want a medium for users to report bugs. You probably want a medium or voice to even give them early access in the first place! I could go on for pages about why community management is an essential part of a game's marketing mix.

What has been your experience with community management and tangible growth of your game? Do you too see the reward?


Cover image courtesy of iStockphoto, romakoshel

Is the Game Industry a Bad Place to Work?

It appears to be vogue now to renounce the game industry as an evil empire. A recent article posted at http://tinyurl.com/l5gcxo4 claims that "poor work conditions and sexism give games industry a bad rap".

Media Influence


It is true that the reports of sexism and bad working conditions give the industry a bad rap. Note that I am agreeing the reports lead to the bad rap, not that I agree with the reports.

For example, the article cited that only 22% of respondents identified themselves as women. This is exactly in line with other national statistics. For example, a quick Internet search revealed that about 20% of computer science degrees were earned by women in 2012 (http://tinyurl.com/kcxp2g7). I realize that computer science is only one discipline used in the game industry. My point is that if the national average of one of the key skills is at 20% while the game industry is at 22%, then we are not exceptional at all!

Sexism


Now let's talk about sexism in games. This is the part where I will really get in trouble. It always bothers me that people bemoan the blatant use of exaggerated female sexuality in games, but no one ever mentions the same (and probably more pervasive) portrayal of women in almost every other visual media including art, opera, movies, and advertising to name a few. So why single out the game industry?

In fact, I'll go so far as to say that the game industry actually has a split personality on this issue. I constantly read articles about how the game industry needs to "mature" and "grow up". Doesn't maturity imply that we can create game content that is mature? That we can use sexuality (or even over-use it, as is the case in most media) as both content and a means of promotion?

Last year there was a huge uproar because the IGDA had exotic dancers at one of their parties (and exotic does not equal nude). The whole argument sounded very adolescent. It occurs to me that adults were attending that party, which also included an abundant amount of alcohol, and adults should be able to handle adult oriented entertainment. Adults can also leave if they choose to. Maybe they should have had male exotic dancers as well!

I don't see a problem with an adult industry using the same adult-oriented types of entertainment that you would expect to see at other similar types of events. Do you imagine that parties in other types of media (let's say in the movie or music industry) wouldn't use similar entertainment? Part of growing up is being able to handle grown-up modes. And by the way, I have never seen a single article that argues against the abundant use of alcohol at such parties.

Now, I admit: I am not a woman. If there are women who feel victimized by the game industry's portrayal of women, then I only hope that those same people also refrain from attending movies and concerts for the same reason. And I certainly don't condone any kind of sexual discrimination or abuse under any circumstances.

Working Conditions


Let's finally talk about working conditions. I agree that companies take advantage of their employees. It was not uncommon for me to work 15-hour days, 7 days a week, at the studios where I was employed. However, now that I am running my own studio, I am still working the same crazy hours. The difference is that it is my own choice. Apparently, 40% of my colleagues feel the same way and stay in the industry because they are willing to put in the hours.

Again, I don't think it is okay for studios to take advantage of their employees. At least the studios that do it right offer other incentives and perks (flex time, holiday time off, end-of-year bonuses) to try to compensate... something you rarely see in other industries.

Conclusion


The game industry is not exceptional in any of these areas. I worked in corporate America long before I entered the game industry, and there were the same issues: fewer women, sexism, workers being taken advantage of (and without any perks).

I agree that we should work toward being an industry that treats all people in an equitable manner. We should encourage diversity in all ways. However, it bothers me if including any particular group means we have to begin censoring our content or our celebrations.


GameDev.net Soapbox logo design by Mark "Prinz Eugn" Simpson

Design of an Indie Action Game

So you want to be a game designer? Now, more than ever, many people get to realize their dreams of becoming a game designer due to the number of Indie game projects available, catalyzed by the Internet. I started in video games as a programmer, but for my latest project, Bain's Redemption, I had to wear many hats including game design. There are many different ways to design a game. Some make a design document that nobody reads (it's happened to me) and others design as they go. This article describes what worked and what didn't work for us.

What I Learned from Bain's Redemption


Reactive or Proactive?


There are two ways to design a game: reactive or proactive. They are just as they sound: we either think it out as we go, or we plan ahead. It's a good idea to plan ahead, as you don't want your artists working on something that might be tossed out. But at the same time, sometimes you put in elements that you think will work well together on paper, but once they are implemented in the game it just doesn't work. Learning from Bain's Redemption, I realized you can only plan ahead so much. We wanted to create a game that was as fun as Devil May Cry or God of War, all the while doing our best to maintain its authenticity.


dmcgow.png


What I noticed is that a lot of the time the difference between good design and bad design is the implementation of parameters. Give every NPC 200 health and everyone will complain that the game is too hard. Give every NPC 50 health and it will be too easy. Some things are hard to formulate mathematically, especially AI. It can be done with probability constructs (most AI depends on probability), but to me that's kind of a misnomer, as probability lacks structure.

I'm sure probability scholars would beg to differ, but let's just leave it at that. Nobody formulates the AI of their game with a mathematical model such that you know, for example, that a particular NPC has a prescribed chance to die or chance to damage the player. Consider the figure below.


aidiagram.png


Let's say each NPC, once a second, has a chance of one in two (1:2) of either tracking the player or running away. Let's assume they don't care about their own health for a minute. Even with these assumptions, it is quite cumbersome to formulate what the rate of DPS (damage per second) output from the AI would be. Add to this the fact that they would probably only cower when their health or morale is low and you've got yourself a nightmare of a mathematical formulation. Lastly, and probably most importantly, don't forget that the player is a human, and as such any formulation is subject to what I'd liken to the Halting Problem - you simply cannot know in advance how the player will play the game. What is more common is to try values out and see if they work.
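To make the point concrete, here is a tiny Monte Carlo sketch of the coin-flip NPC described above (the damage value and durations are hypothetical, not taken from Bain's Redemption): even for this toy case, the practical way to learn the AI's damage output is to simulate it and measure.

```cpp
#include <random>

// Hypothetical sketch: estimate the average DPS of an NPC that, once per
// second, flips a fair coin to either attack (dealing `damage`) or flee
// (dealing nothing). The numbers are illustrative only.
double estimate_dps(double damage, int seconds, unsigned seed) {
    std::mt19937 rng(seed);
    std::bernoulli_distribution attacks(0.5);  // the 1:2 chase/flee chance
    double total = 0.0;
    for (int s = 0; s < seconds; ++s)
        if (attacks(rng))
            total += damage;
    return total / seconds;
}
```

With a fair coin the long-run average converges to half the per-hit damage, but add the morale gating described above and a closed-form answer quickly becomes intractable - which is exactly why trying values out wins.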

So which route should the up and coming game designer choose? (Reactive or proactive)? I recommend a little bit of both. If you go too much into the reactive route, you may end up throwing away important assets that your artists/designers/programmers might make. If you go too much into the proactive route, you may make a game with elements that just don't go well together.

Indicators, Indicators, Indicators...


If you ever want to reverse engineer the design of any game, the best place to start is the UI. The UI will tell you a whole lot about how the game works. Our game is no different. Observe the UI for our game. There is a lot going on, but everything has a purpose.


uicap_annotated.png


(A) is the portrait. It's necessary in a lot of games, but just as Doom cleverly used the portrait to give a status update on your damage taken, we use this one to give the player an indication of his current mental state (insanity). Insanity is a variable in our game that counts up when the player uses rage (E). Rage is primarily built up by getting damaged and less so by doing damage. Some games have different combat stances that determine if rage (or its equivalent) is gained by doing damage or taking damage so this is a design decision you will have to make for your action game. (B) is the current special move. The four main buttons on the controller were not enough to cover all of Bain's moves so we decided to devote one of them to an ability that can be changed with the four-directional pad. Another decision here is whether the four-directions are each a move, or you cycle through the moves with left/right or up/down (see diagram below). We chose the latter setup.


specialMoveDesign.png


(C) is of course the health bar. (D) is the cooldown bar. Some games reward you for action, while other games give you a quota for action. The cooldown bar gives you a quota for action while the rage bar rewards you for action. How do they not contradict? The cooldown bar limits how many special moves you can do while the rage bar enhances your moves (sometimes including special moves that use the cooldown bar). Devil May Cry went with a reward-for-action special move bar. In other words if you don't do anything, you cannot perform special abilities. In other games, you are given a quota. In God of War, you are given a magic bar that fills up and as such acts as a quota (but is dependent on action still) as well as a rage bar that also acts as a quota. In World of Warcraft, rogues use an energy bar that is a quota for action while warriors use a rage bar that rewards you for action (while druids get both).
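The distinction between the two bar types can be sketched in a few lines of C++ (the type names and numbers here are my own, purely illustrative, not Bain's Redemption code):

```cpp
#include <algorithm>

// A "quota" resource refills on its own and is spent to act,
// like a cooldown or energy bar.
struct QuotaBar {
    float value = 100.0f, max = 100.0f, refill_per_sec = 10.0f;
    void tick(float dt) { value = std::min(max, value + refill_per_sec * dt); }
    bool spend(float cost) {            // act only if the quota allows it
        if (value < cost) return false;
        value -= cost;
        return true;
    }
};

// A "reward" resource starts empty and is earned through action,
// like a rage bar built by taking damage.
struct RewardBar {
    float value = 0.0f, max = 100.0f;
    void on_damage_taken(float dmg) { value = std::min(max, value + dmg); }
};
```

The two don't contradict precisely because they gate different things: the quota limits how often you may act, while the reward scales how powerful the action is once you do.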

Balancing The Game


One gameplay parameter can be the difference between a well-balanced game and a poorly-balanced game. Gamers have a slang term for this: "OP". That move is over-powered, they would say. Consequently, when that move is revised, it is often over-corrected, which calls for another slang term: "nerfed". They nerfed that move, they would say. This happens in every game. It's not due to the formulation of a mathematical model of the game, but due to the trial-and-error nature of game design. When I see a move is over-powered, I will try to fix it, but sometimes it's very off-balance and its parameters need to be doubled or halved; other times it needs a subtle fix. Which one your game will need (a big fix or a subtle fix) is an art form in itself.

This is what separates good designers from bad designers. Good designers will keep tuning gameplay parameters until it feels right, while bad designers might tune it once or twice and assume they are done. I noticed in Bain's Redemption that insanity was OP. Insanity went from a value of 0 to 1, with 0 being sane and 1 being insane. When Bain goes insane, the player loses control of him and the player must wiggle the joystick to regain control. I noticed that between encounters the insanity would not decay sufficiently. So I adjusted the decay rate. Now consider that the object here is not to show off the insanity mechanic, but to penalize a player for over-using rage. In other words, we don't want insanity to kick in every 30 seconds just to see that it exists. Rather, we have a niche for this mechanic as all mechanics need a niche. Bain as a character is conflicted, so it makes sense that rage conflicts with insanity. This is what design is like. It's more of an art than a science I would say. Remember, form follows function.
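As an illustration of the kind of adjustment described above, here is a hedged sketch (my own numbers and names, not the actual game's code) of an insanity value that builds on rage use and decays exponentially between encounters; the decay constant is the single parameter being tuned:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical sketch: insanity lives in [0, 1], rises when rage is
// used, and decays exponentially over time. The decay rate is the
// tuning knob; the values below are made up for illustration.
struct Insanity {
    double value = 0.0;
    double decay_per_sec;   // larger = insanity fades faster between fights
    explicit Insanity(double decay) : decay_per_sec(decay) {}
    void on_rage_used(double amount) { value = std::min(1.0, value + amount); }
    void tick(double dt) { value *= std::exp(-decay_per_sec * dt); }
};
```

With a decay rate of 0.05 per second, roughly 61% of the insanity survives a 10-second lull between encounters; doubling the rate to 0.1 leaves about 37%. That halve-or-double first pass, followed by subtle fixes, is exactly the tuning loop the article describes.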

Conclusion


I have learned a lot from Bain's Redemption, in the context of designing an action game. Specifically, some things you just have to work out and see if they work well together. No amount of planning can guarantee good design. The design doc for a platformer might say the player's jump will reach 10 feet, but if the designers build a heavily organic environment and a cliff ends up being more than 10 feet away from another cliff, you will have to make adjustments. Humans are not machines and we can only predict so much. Why do directors have rudimentary recording equipment (in addition to the main recording equipment) when they film a movie? It's because the director cannot predict whether the movie will look and play well without it actually being done (even after storyboarding the scene beforehand!) Any good dynamical system has a feedback loop, and as such you can think of games as a dynamical system that requires feedback to be designed well. So in short, I don't have a magic-bullet piece of advice for anyone getting into design. You will have to try it and balance it as you go. And even when you're done and you think it's perfect, beta testing will bite you and tell you that something needs to be revised. So keep tuning and tuning and tuning. And when you are done, tune some more.

Still there are lessons to be learned from other games that have seen success. I talked about ability indicators that reward you for action and others that give you a quota for action. You will notice that the most popular action games incorporate different flavors of these two. Which you use will depend on the type of action game. The best designers are also gamers. Ever stopped and thought about why this is so? It is due to the trial-and-error process of designing games. Designers will spend a lot of time playing their game. Whether it's looking for bugs or testing parameters, good designers will know the most about their game. There is nothing extravagant about trial-and-error, but it is the path to a well-balanced game.


Bain's Redemption


Article Update Log


6 Jul 2014: Initial release

The Art of Feeding Time: Backgrounds

With the look of Feeding Time‘s animals nailed down, it was time to move on to the backgrounds.

For our initial backdrop, we went with a living room as it nicely tied together all the typical household pets. It also let us use a carpet to cleanly delineate the numerous gameboard components.

Our original sketch followed the perspective-bending approach of Zelda: A Link to the Past:


ft_background_perspective.jpg


While our first mockup tried to match the four angles at which the animals faced the gameboard, direction ceased to be a concern when we decided to present each animal from just a single side.

This allowed us a bit more freedom, but the lack of a clear and consistent perspective also bred confusion. Were the animals stacked on top of each other, or being viewed from above?

The background lost a certain sense of being a real place, but Abel suggested we roll with it. To prove his point, he showed us how well Hanna-Barbera's skewed and uneven backgrounds worked in various old cartoons.

Top-left and bottom-left: Hanna-Barbera's skewed and uneven backgrounds. Right: Feeding Time's indoor background inspired by the style:


ft_background_hb.jpg


We agreed, and were quite pleased with George’s first crack at the style. However, in the end we abandoned the indoor environment itself.

The reason was a desire to keep the areas consistent, and constraining them to interiors was too limiting and had some negative associations with confinement. Instead, we went with a suburban backyard for the “pet zone” and kept the other biomes to the great outdoors.

The initial rough draft of the tundra zone and its finished version:


ft_tundra_progression.jpg


We also wanted to organically duplicate the carpet’s natural grid for all the areas, but this proved very difficult.

The backyard was a natural fit for a checkered pattern akin to the turf of various sports fields, but the safari and tundra zones were trickier. We experimented with rows and columns of cracks in dry bedrock and an arrangement of sticks and twigs, but neither proved ideal. The extra decorations muddied the gameboard and took up too much space.

The issue of clarity proved substantial even when working with a grid that only had slight variations in surface pattern and lighting. Since easy recognition of the foods and animals was a crucial part of the game, we decided to keep the gameboard as uniform grids and only change their colour scheme to match each biome.

The grid of the original Safari zone consisted of grassy tufts that got progressively larger towards the bottom of the screen. Along with a light gradient, the design helped to create depth but was eventually removed to make the gameboard easier to parse:


ft_safari_grid.jpg


In hindsight this was probably an issue we spent too much time debating by looking at the background illustrations themselves. As it turned out, the gameboard pieces covered too much of the grid to fret over its design, and the uniform shape actually fit the overall art style.

The football and various other background animations add a subtle sense of life and don’t overly distract the player:


ft_backyard_football.jpg


To add some life and personality to the biomes we introduced various interactive Easter Eggs and tied them to in-game achievements. For example, the backyard zone was filled with elements that could be activated with a tap: sprinklers let out bursts of water, the house door could be knocked on and its lights individually turned on and off, a football could be launched over the fence, etc.

While these were fun ideas, they had nothing to do with the core gameplay and actually detracted from it. The player had to sporadically stop to click on random parts of the screen instead of focusing on matching the animals with their corresponding foods. Eventually we simply removed the interactive component and activated them based on a timer. It helped to make the areas feel alive, but the player didn’t miss out on any gameplay by ignoring them.
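The timer-driven approach described above can be sketched in a few lines (a hypothetical structure of my own, not Feeding Time's actual code): each background prop re-arms itself with a random delay and fires on its own, with no player input required.

```cpp
#include <random>

// Hypothetical sketch: an ambient background prop (sprinkler, football,
// house lights) that activates itself on a randomized timer instead of
// waiting for a tap. The delay range is made up for illustration.
struct AmbientProp {
    double next_fire;  // seconds until the next activation
    std::mt19937 rng;
    std::uniform_real_distribution<double> delay{8.0, 20.0};

    explicit AmbientProp(unsigned seed) : rng(seed) { next_fire = delay(rng); }

    // Call once per frame; returns true when the animation should play.
    bool update(double dt) {
        next_fire -= dt;
        if (next_fire > 0.0) return false;
        next_fire = delay(rng);  // re-arm for the next activation
        return true;
    }
};
```

The design win is the same one the article notes: the areas still feel alive, but a player who ignores the props loses nothing, because no gameplay hangs off them.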

The finished tableaus of Feeding Time’s main three zones:


ft_background_tableaus.jpg


One final aesthetic change we made towards the end of development was to turn all the clouds into food shapes. Since each area was outdoors and included parallax-scrolling clouds, it suddenly hit us that we could “standardize” their shapes and velocities while adding a bit of whimsy to the game. This also helped out with the level transitions and other aspects of UI, but more on that next time!


Article Update Log


2 July 2014: Initial release

Abusing the Linker to Minimize Compilation Time


Abstract


In this article, I will describe a technique that reduces coupling between an object that provides access to many types of objects ("the provider") and its users by moving compile-time dependencies to link-time. In doing so, we can reduce the amount of unnecessary compiling of the provider's dependencies whenever the provider is changed. I will then explore the benefits and costs to such a design.

The Problem


Typically in a game, a provider object is needed to expose a variety of services and objects to many parts of the game, much like a heart pumps blood throughout the body. This provider object is a sort of "context object", which is set up with the current state of the game and exposes other useful objects. Such a class could look something like listing 1, and an example of use could look something like listing 2.

// 
// Listing 1:
// ServiceContext.h
#pragma once

// Dependencies
class Game;
class World;
class RenderService;
class ResourceService; 
class PathfindingService;
class PhysicsService;
class AudioService;
class LogService;

// ServiceContext
// Provides access to a variety of objects
class ServiceContext {
public:
	Game* const game;
	World* const world;
	RenderService* const render;
	PathfindingService* const path;
	PhysicsService* const physics;
	AudioService* const audio;
	LogService* const log;
};
Listing 1: The definition of a sample context object

//
// Listing 2:
// Foobar.cpp

#include "Foobar.h"
#include "ServiceContext.h"
#include "PathfindingService.h"
#include "LogService.h"


void Foobar::frobnicate( ServiceContext& ctx ) {
	if ( !condition() )
		return;
	
	current_path = ctx.path->evaluate(position, target->position);
	if ( !current_path )
		ctx.log->log("Warning", "No path found!");
}
Listing 2: An example usage of the sample context object

The ServiceContext is the blood of the program, and many objects depend on it. If a new service is added to ServiceContext, or ServiceContext is changed in any way, then all of its dependents will be recompiled, regardless of whether a given dependent uses the new service. See figure 1.


Attached Image: figure1-final_480px.png
Figure 1: Recompilations needed when adding a service to the provider object


To reduce these unnecessary recompilations, we can use (abuse) the linker to hide the dependencies.

The Solution


We can hide the dependencies by moving compile-time dependencies to link-time dependencies. With templates, we can write a generic get function and supply specialized definitions in its translation unit.

// 
// Listing 3:
// ServiceContext.h
#pragma once

// Dependencies
struct ServiceContextImpl;

// ServiceContext
// Provides access to a variety of objects
class ServiceContext {
public: // Constructors
	ServiceContext( ServiceContextImpl& p );

public: // Methods
	template<typename T>
	T* get() const;

private: // Members
	ServiceContextImpl& impl;
};


//
//  ServiceContextImpl.h
#pragma once

// Dependencies
class Game;
class World;
class RenderService;
class ResourceService; 
class PathfindingService;
class PhysicsService;
class AudioService;
class LogService;

// ServiceContextImpl
// Exposes the objects to ServiceContext
// Be sure to update ServiceContext.cpp whenever this definition changes!
struct ServiceContextImpl {
	Game* const game;
	World* const world;
	RenderService* const render;
	PathfindingService* const path;
	PhysicsService* const physics;
	AudioService* const audio;
	LogService* const log;
};
Listing 3: The declarations of the two new classes

//
// Listing 4:
// ServiceContext.cpp

#include "ServiceContext.h"
#include "ServiceContextImpl.h"

ServiceContext::ServiceContext( ServiceContextImpl& p ) : impl(p) {
}

// Expose impl by providing the specializations for ServiceContext::get
template<> 
Game* ServiceContext::get<Game>() const { 
	return impl.game; 
}

// ... or use a macro
#define SERVICECONTEXT_GET( type, name )	\
	template<> \
	type* ServiceContext::get<type>() const { \
		return impl.name; \
	}

SERVICECONTEXT_GET( World, world );
SERVICECONTEXT_GET( RenderService, render );
SERVICECONTEXT_GET( PathfindingService, path );
SERVICECONTEXT_GET( PhysicsService, physics );
SERVICECONTEXT_GET( AudioService, audio );
SERVICECONTEXT_GET( LogService, log );
Listing 4: The new ServiceContext definition

In listing 3, we have delegated the volatile definition of ServiceContext to a new class, ServiceContextImpl. In addition, we now have a generic get member function which can generate a member function declaration for every type of service we wish to provide. In listing 4, we provide the get definitions for every member of ServiceContextImpl. These definitions are compiled once in ServiceContext's translation unit and then linked, at link-time, into the modules that use ServiceContext.

//
// Listing 5:
// Foobar.cpp

#include "Foobar.h"
#include "ServiceContext.h"
#include "PathfindingService.h"
#include "LogService.h"


void Foobar::frobnicate( ServiceContext& ctx ) {
	if ( !condition() )
		return;
	
	current_path = ctx.get<PathfindingService>()->evaluate(position, target->position);
	if ( !current_path )
		ctx.get<LogService>()->log("Warning","No path found!");
}
Listing 5: The Foobar implementation using the new ServiceContext

With this design, ServiceContext can remain unchanged and all changes to its implementation are only known to those objects that setup the ServiceContext object. See figure 2.


Attached Image: figure2-final_480px.png
Figure 2: Adding new services to ServiceContextImpl now has minimal impact on ServiceContext's dependents


When a new service is added, Game and ServiceContextImpl are recompiled into new modules, and the linker relinks dependents against the new definitions. If all goes well, this relinking should cost less than recompiling each dependency.

The Caveats


There are a few considerations to make before using this solution:

  1. The solution hinges on the linker supporting "whole program optimization" so that it can inline ServiceContext's get definitions at link-time. If this optimization is supported, then there is no additional cost to using this solution over the traditional approach. MSVC and GCC both support it.
  2. It is assumed that ServiceContext is changed often during development, though usually during early development. It can be argued that such a complicated system is not needed after a few iterations of ServiceContext.
  3. It is assumed that the compiling time greatly outweighs linking time. This solution may not be appropriate for larger projects.
  4. The solution favors cleverness over readability. There is an increase in complexity with such a solution, and it could be argued that the complexity is not worth the marginal savings in compiling time. This solution may not be appropriate if the project has multiple developers.
  5. The solution does not offer any advantage in Unity builds.
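Regarding the first caveat, the relevant toolchain switches look like this (a sketch using the example file names from the listings; adapt to your own build):

```shell
# GCC/Clang: build with link-time optimization so the out-of-line
# ServiceContext::get specializations can still be inlined at link-time.
g++ -O2 -flto -c ServiceContext.cpp Foobar.cpp
g++ -O2 -flto ServiceContext.o Foobar.o -o game

# MSVC: /GL enables whole program optimization at compile time,
# and /LTCG applies it during linking.
cl /O2 /GL /c ServiceContext.cpp Foobar.cpp
link /LTCG ServiceContext.obj Foobar.obj
```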

Conclusion


While this solution does reduce unnecessary recompilations, it does add complexity to the project. Depending on the project's size, a small to medium-sized codebase should see a decrease in compilation time from this solution.

The Art of Feeding Time: Interface

0_composite.jpg
The evolution of the Feeding Time interface.

Feeding Time's interface was one of many components of the game that went through continuous iteration throughout its development. The overall theme revolved around traveling the world and delivering food to hungry animals, but the style it was presented in changed several times over.

Through this article we will be taking a look at how the UI evolved from a bunch of scribbles to a fully fleshed out interface that complemented our in-game art.

Two of the original motifs we used for Feeding Time's UI:


1_passport_briefcase.jpg


Among the first interface ideas we had were a passport and some luggage that went together with the traveling theme of the game. While these motifs were eventually scrapped we did take away some lessons from them. For example, the passport brought along with it a very negative bureaucratic association; its drab colours clearly something to avoid in an otherwise joyful game.

In contrast to the cold rigidity of the passport, the briefcase motif was a lot more fun and we took some of the colouring and texturing from it for future use. The object itself however brought with it certain challenges.

Opening up a briefcase-menu might be fun at first, but by the twenty-seventh time it loses its charm and becomes a time-consuming chore. Its overall shape also limited us on the dimensions of the menus and made pop-ups much trickier to implement.

Another minor strike against the briefcase was that we didn't want to limit ourselves only to locales that it made sense to bring luggage to, i.e., if we wanted to create an underwater level, bringing a suitcase might seem a bit odd.

The scrapbook and restaurant menu motifs also brought elements that we refined and used in the final design:


3_scrapbook_and_menu.jpg


Our next mockups involved a scrapbook theme that brought with it a very arts and crafts feel, which suited the overall tone of the game, and a restaurant menu that helped to define how we framed and organized our information.

These were also the first iterations that began to prominently use patterns, and their texturing was a closer match for the graininess of the foods and animals. The scrapbook used too much real estate due to its irregular components, though, and the restaurant menu proved a bit too formal.

The final style began coming together with the clipboard motif:


5_clipboard.jpg


Another idea we tried out revolved around a delivery person with a little clipboard that kept track of all the areas the player visited with their food-packages. We used what we learned from the previous revisions to inject a little fun in the form of patterned borders and colourful textured icons and buttons.

Looking back on this iteration we can still see parts of it that survived into the final cut, however this screen still lacked a bit of personality.

A more lightweight menu with our friend the store clerk mixed in:


6_feeding_time_gamemenu.jpg


To add more character to the UI we decided to include the clerk himself as something of a mascot, peering over the menus and offering encouragement. This required a lot of cuts to the amount of space the menus used, but the end result was well worth it.

Final art for the summary screen:


7_final_summary.png


With the clerk firmly in place, we combined the rough colouring of the briefcase, the symmetrical framing of the restaurant menu, the patterns and borders of the scrapbook, and the checking-in motif of the clipboard. These made for a more abstract, papercraft-like design, but one that fit the overall game while facilitating the appearance of the clerk and exposing the game's colourful backgrounds.

Final art for one of the profile menus:


7_final_tableau.jpg


All of these changes led us to the final style above. We made some small tweaks along the way and polished the graphics and the user flow as best as we could. With Feeding Time now out on the AppStore, we hope that all of our efforts have paid off and you check out the finished product!


Article Update Log


9 July 2014: Initial release

Making a Game with Blend4Web Engine. Part 1: The Character

Today we're going to start creating a fully-functional game app with Blend4Web.

Gameplay


Let's set up the gameplay. The player, a brave warrior, moves around a limited set of platforms. Red-hot stones keep falling on him from the sky and must be avoided; their number increases with time. Different bonuses which give various advantages appear in the location from time to time. The player's goal is to stay alive as long as possible. Later we'll add some other interesting features, but for now we'll stick to these. This small game will use a third-person view.

In the future, the game will support mobile devices and a score system. And now we'll create the app, load the scene and add the keyboard controls for the animated character. Let's begin!

Setting up the scene


Game scenes are created in Blender, then exported and loaded into applications. Let's use the files made by our artist, which are located in the blend/ directory. The creation of these resources will be described in a separate article.

Let's open the character_model.blend file and set up the character. We'll do this as follows: switch to the Blender Game mode and select the character_collider object - the character's physical object.


ex02_img01.jpg


Under the Physics tab we'll specify the settings as pictured above. Note that the physics type must be either Dynamic or Rigid Body, otherwise the character will be motionless.

The character_collider object is the parent of the "graphical" character model, which therefore follows the invisible physical model. Note that the lowest points of the capsule and the avatar differ in height a bit. This compensates for the Step height parameter, which lifts the character above the surface so it can pass over small obstacles.

Now let's open the main game_example.blend file, from which we'll export the scene.


ex02_img02.jpg


The following components are linked to this file:

  1. The character group of objects (from the character_model.blend file).
  2. The environment group of objects (from the main_scene.blend file) - this group contains the static scene models and also their copies with the collision materials.
  3. The baked animations character_idle_01_B4W_BAKED and character_run_B4W_BAKED (from the character_animation.blend file).

NOTE:
To link components from another file go to File -> Link and select the file. Then go to the corresponding datablock and select the components you wish. You can link anything you want - from a single animation to a whole scene.

Make sure that the Enable physics checkbox is turned on in the scene settings.

The scene is ready; let's move on to programming.

Preparing the necessary files


Let's place the following files into the project's root:

  1. The engine b4w.min.js
  2. The addon for the engine app.js
  3. The physics engine uranium.js

The files we'll be working with are: game_example.html and game_example.js.

Let's link all the necessary scripts to the HTML file:

<!DOCTYPE html>
<html>
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
    <script type="text/javascript" src="b4w.min.js"></script>
    <script type="text/javascript" src="app.js"></script>
    <script type="text/javascript" src="game_example.js"></script>

    <style>
        body {
            margin: 0;
            padding: 0;
        }
    </style>

</head>
<body>
<div id="canvas3d"></div>
</body>
</html>

Next we'll open the game_example.js script and add the following code:

"use strict"

if (b4w.module_check("game_example_main"))
    throw "Failed to register module: game_example_main";

b4w.register("game_example_main", function(exports, require) {

var m_anim  = require("animation");
var m_app   = require("app");
var m_main  = require("main");
var m_data  = require("data");
var m_ctl   = require("controls");
var m_phy   = require("physics");
var m_cons  = require("constraints");
var m_scs   = require("scenes");
var m_trans = require("transform");
var m_cfg   = require("config");

var _character;
var _character_body;

var ROT_SPEED = 1.5;
var CAMERA_OFFSET = new Float32Array([0, 1.5, -4]);

exports.init = function() {
    m_app.init({
        canvas_container_id: "canvas3d",
        callback: init_cb,
        physics_enabled: true,
        alpha: false,
        physics_uranium_path: "uranium.js"
    });
}

function init_cb(canvas_elem, success) {

    if (!success) {
        console.log("b4w init failure");
        return;
    }

    m_app.enable_controls(canvas_elem);

    window.onresize = on_resize;
    on_resize();
    load();
}

function on_resize() {
    var w = window.innerWidth;
    var h = window.innerHeight;
    m_main.resize(w, h);
};

function load() {
    m_data.load("game_example.json", load_cb);
}

function load_cb(root) {

}

});

b4w.require("game_example_main").init();

If you have read the Creating an Interactive Web Application tutorial, there won't be much new for you here. At this stage all the necessary modules are linked, and the init function and two callbacks are defined. The on_resize function also makes it possible to resize the app window.

Pay attention to the additional physics_uranium_path initialization parameter which specifies the path to the physics engine file.

The global variable _character is declared for the physics object, while _character_body is for the animated model. Two constants, ROT_SPEED and CAMERA_OFFSET, are also declared; we'll use them later.

At this stage we can run the app and look at the static scene with the character motionless.

Moving the character


Let's add the following code into the loading callback:

function load_cb(root) {
    _character = m_scs.get_first_character();
    _character_body = m_scs.get_object_by_empty_name("character",
                                                     "character_body");

    setup_movement();
    setup_rotation();
    setup_jumping();

    m_anim.apply(_character_body, "character_idle_01");
    m_anim.play(_character_body);
    m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
}

First we save the physical character model to the _character variable. The animated model is saved as _character_body.

The last three lines are responsible for setting up the character's starting animation.
  • animation.apply() - applies the animation with the corresponding name,
  • animation.play() - plays it back,
  • animation.set_behavior() - changes the animation behavior; in our case it makes the animation cyclic.
NOTE:
Please note that skeletal animation should be applied to the character object which has an Armature modifier set up in Blender for it.

Before defining the setup_movement(), setup_rotation() and setup_jumping() functions, it's important to understand how Blend4Web's event-driven model works. We recommend reading the corresponding section of the user manual. Here we will only take a brief look at it.

In order to generate an event when certain conditions are met, a sensor manifold should be created.

NOTE:
You can check out all the possible sensors in the corresponding section of the API documentation.

Next we have to define a logic function describing what state (true or false) the manifold's sensors should be in for the sensor callback to receive a positive result. Then we create the callback containing the actions to perform. Finally, the controls.create_sensor_manifold() function, which is responsible for processing the sensors' values, should be called for the sensor manifold. Let's see how this works in our case.
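The sensors/logic/callback pattern above can be sketched in a few lines of plain JavaScript. This is an engine-free illustration only: Sensor and createManifold are made-up names for this sketch, not part of the Blend4Web API, which provides all of this through the "controls" module.

```javascript
// A sensor simply holds the current state of an input.
function Sensor() {
    this.value = 0; // 1 while the underlying input (e.g. a key) is active
}

// A manifold bundles sensors, a logic function combining their values and
// a callback that receives pulse = 1/0 whenever the logic result changes
// (CT_TRIGGER-like behavior).
function createManifold(sensors, logicFun, callback) {
    var prev = false;
    return function evaluate() {
        var values = sensors.map(function(s) { return s.value; });
        var state = Boolean(logicFun(values));
        if (state !== prev) {
            callback(state ? 1 : 0);
            prev = state;
        }
    };
}

// Two "keys" mirror each other, like W and the up arrow in the tutorial.
var keyW = new Sensor();
var keyUp = new Sensor();
var pulses = [];
var forward = createManifold(
    [keyW, keyUp],
    function(s) { return s[0] || s[1]; },
    function(pulse) { pulses.push(pulse); }
);

forward();                  // nothing pressed: no pulse
keyW.value = 1; forward();  // key goes down: pulse 1
keyW.value = 0; forward();  // key goes up: pulse 0
```

Here pulses ends up as [1, 0]: one positive pulse when the logic first turns true, one negative pulse when it turns false again, which is exactly the behavior the real move_cb() relies on.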

Define the setup_movement() function:

function setup_movement() {
    var key_w     = m_ctl.create_keyboard_sensor(m_ctl.KEY_W);
    var key_s     = m_ctl.create_keyboard_sensor(m_ctl.KEY_S);
    var key_up    = m_ctl.create_keyboard_sensor(m_ctl.KEY_UP);
    var key_down  = m_ctl.create_keyboard_sensor(m_ctl.KEY_DOWN);

    var move_array = [
        key_w, key_up,
        key_s, key_down
    ];

    var forward_logic  = function(s){return (s[0] || s[1])};
    var backward_logic = function(s){return (s[2] || s[3])};

    function move_cb(obj, id, pulse) {
        if (pulse == 1) {
            switch(id) {
            case "FORWARD":
                var move_dir = 1;
                m_anim.apply(_character_body, "character_run");
                break;
            case "BACKWARD":
                var move_dir = -1;
                m_anim.apply(_character_body, "character_run");
                break;
            }
        } else {
            var move_dir = 0;
            m_anim.apply(_character_body, "character_idle_01");
        }

        m_phy.set_character_move_dir(obj, move_dir, 0);

        m_anim.play(_character_body);
        m_anim.set_behavior(_character_body, m_anim.AB_CYCLIC);
    };

    m_ctl.create_sensor_manifold(_character, "FORWARD", m_ctl.CT_TRIGGER,
        move_array, forward_logic, move_cb);
    m_ctl.create_sensor_manifold(_character, "BACKWARD", m_ctl.CT_TRIGGER,
        move_array, backward_logic, move_cb);
}

Let's create four keyboard sensors: for the W, S, up-arrow and down-arrow keys. We could have made do with two, but we want to mirror the controls on the letter keys as well as on the arrow keys. We'll append them all to move_array.

Now to define the logic functions. We want the movement to occur upon pressing one of two keys in move_array.

This behavior is implemented through the following logic function:

function(s) { return (s[0] || s[1]) }

The most important things happen in the move_cb() function.

Here obj is our character. The pulse argument becomes 1 when any of the defined keys is pressed. We decide whether the character moves forward (move_dir = 1) or backward (move_dir = -1) based on id, which corresponds to one of the sensor manifolds defined below. The run and idle animations are also switched inside the same blocks.

Moving the character is done through the following call:

m_phy.set_character_move_dir(obj, move_dir, 0);

Two sensor manifolds, for moving forward and backward, are created at the end of the setup_movement() function. They have the CT_TRIGGER type, i.e. they snap into action every time the sensor values change.

At this stage the character is already able to run forward and backward. Now let's add the ability to turn.

Turning the character


Here is the definition for the setup_rotation() function:

function setup_rotation() {
    var key_a     = m_ctl.create_keyboard_sensor(m_ctl.KEY_A);
    var key_d     = m_ctl.create_keyboard_sensor(m_ctl.KEY_D);
    var key_left  = m_ctl.create_keyboard_sensor(m_ctl.KEY_LEFT);
    var key_right = m_ctl.create_keyboard_sensor(m_ctl.KEY_RIGHT);

    var elapsed_sensor = m_ctl.create_elapsed_sensor();

    var rotate_array = [
        key_a, key_left,
        key_d, key_right,
        elapsed_sensor
    ];

    var left_logic  = function(s){return (s[0] || s[1])};
    var right_logic = function(s){return (s[2] || s[3])};

    function rotate_cb(obj, id, pulse) {

        var elapsed = m_ctl.get_sensor_value(obj, "LEFT", 4);

        if (pulse == 1) {
            switch(id) {
            case "LEFT":
                m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0);
                break;
            case "RIGHT":
                m_phy.character_rotation_inc(obj, -elapsed * ROT_SPEED, 0);
                break;
            }
        }
    }

    m_ctl.create_sensor_manifold(_character, "LEFT", m_ctl.CT_CONTINUOUS,
        rotate_array, left_logic, rotate_cb);
    m_ctl.create_sensor_manifold(_character, "RIGHT", m_ctl.CT_CONTINUOUS,
        rotate_array, right_logic, rotate_cb);
}

As we can see it is very similar to setup_movement().

An elapsed sensor was added, which constantly generates a positive pulse. It allows us to retrieve the time elapsed since the previous rendered frame inside the callback, using the controls.get_sensor_value() function. We need it to calculate the turning speed correctly.
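A small standalone sketch shows why the elapsed time matters: scaling the rotation increment by the frame time makes the turn rate frame-rate independent. ROT_SPEED matches the tutorial's constant; simulateTurn is an illustrative helper, and its loop body performs the same math as the m_phy.character_rotation_inc() call in rotate_cb().

```javascript
var ROT_SPEED = 1.5; // radians per second

// Integrate the rotation over totalSeconds using a fixed frame time.
function simulateTurn(frameTime, totalSeconds) {
    var angle = 0;
    var frames = Math.round(totalSeconds / frameTime);
    for (var i = 0; i < frames; i++)
        angle += frameTime * ROT_SPEED; // same as: elapsed * ROT_SPEED
    return angle;
}

// One second of turning yields ~1.5 rad at any frame rate:
var at30fps = simulateTurn(1 / 30, 1);
var at60fps = simulateTurn(1 / 60, 1);
```

Had the callback added a fixed increment per frame instead, a 60 FPS machine would turn the character twice as fast as a 30 FPS one.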

The type of sensor manifolds has changed to CT_CONTINUOUS, i.e. the callback is executed in every frame, not only when the sensor values change.

The following method turns the character around the vertical axis:

m_phy.character_rotation_inc(obj, elapsed * ROT_SPEED, 0)

The ROT_SPEED constant is defined to tweak the turning speed.

Character jumping


The last control setup function is setup_jumping():

function setup_jumping() {
    var key_space = m_ctl.create_keyboard_sensor(m_ctl.KEY_SPACE);

    var jump_cb = function(obj, id, pulse) {
        if (pulse == 1) {
            m_phy.character_jump(obj);
        }
    }

    m_ctl.create_sensor_manifold(_character, "JUMP", m_ctl.CT_TRIGGER, 
        [key_space], function(s){return s[0]}, jump_cb);
}

The space key is used for jumping. When it is pressed the following method is called:

m_phy.character_jump(obj)

Now we can control our character!

Moving the camera


The last thing we cover here is attaching the camera to the character.

Let's add yet another function call - setup_camera() - into the load_cb() callback.

This function looks as follows:

function setup_camera() {
    var camera = m_scs.get_active_camera();
    m_cons.append_semi_soft_cam(camera, _character, CAMERA_OFFSET);
}

The CAMERA_OFFSET constant defines the camera position relative to the character: 1.5 meters above (Y axis in WebGL) and 4 meters behind (Z axis in WebGL).

This function finds the scene's active camera and creates a constraint for it to follow the character smoothly.
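The offset arithmetic behind this can be sketched by hand: the camera's target position is the character's position plus CAMERA_OFFSET. This simplified version ignores the character's orientation and the smoothing that the real m_cons.append_semi_soft_cam() applies; cameraTargetPos is a made-up helper for illustration.

```javascript
var CAMERA_OFFSET = [0, 1.5, -4]; // x, y (up) and z (behind) in WebGL axes

// Component-wise addition of the character position and the offset.
function cameraTargetPos(characterPos, offset) {
    return [
        characterPos[0] + offset[0],
        characterPos[1] + offset[1],
        characterPos[2] + offset[2]
    ];
}

// A character standing at (10, 0, 5) gives a camera target of (10, 1.5, 1):
var target = cameraTargetPos([10, 0, 5], CAMERA_OFFSET);
```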

That's enough for now. Let's run the app and enjoy the result!

ex02_img03.jpg

Link to the standalone application

The source files of the application and the scene are part of the free Blend4Web SDK distribution.

Mobile Game Crowdfunding Experience from Two Kickstarter Campaigns

Hello. In my previous publication, "Development of the Game: From an Idea on a Napkin to a Campaign on Kickstarter", I said I wanted to dedicate a separate article to the Kickstarter campaign. Now that the game is available on the App Store, I finally have a chance to share some Kickstarter experience with you.

The First Campaign


When the game was 90% ready, it was no longer possible to fund the project from our own resources. At that point, outside money seemed to be the only option, so we chose a crowdfunding platform as a source of additional capital.


3993d0d4cb39fb87026457908d54dccd.jpg
One of the main game posters


To start our first campaign we needed a presentation of our game in English, so we began searching for a copywriter capable of writing a campaign description strong enough to attract potential backers. We chose from among freelance "professionals", but all the texts they proposed felt cold-hearted and written without any desire to understand the essence of the game. Despite screening dozens of writers, we still hoped for good results.

We decided to focus on the best candidate. After two weeks of work he delivered texts that still required our participation, with constant changes and additions throughout the writing process. In the end, all the campaign texts for Kickstarter were rewritten from scratch and filled with our own deep understanding of the idea and the product's features. "If you want something done well, do it yourself." Getting exciting content from a person who isn't involved in creating the product is quite difficult.

The first part of the presentation was the text; the second was the schedule. Next we selected the most compelling screenshots and game art.


ec898aa318f86828cb4bfce860449438.png
Demolition Lander screenshot for Kickstarter campaign


The Kickstarter platform imposes several design restrictions. Besides the standard rules for format, text size and graphics resolution, you're not allowed to have more than one blank line between the paragraphs of your campaign text. If you want to add extra space between paragraphs or sections, try inserting a transparent rectangular image as a spacer. We solved this problem with stylized images to cover the spaces:


26e631f69a510ebc682ae8ca3f80d70f.png
The end of previous paragraph separated from the beginning of the next one


The presentation was ready. Remember that team photos and personal information about each member make potential backers more confident: real people are always better than faceless brands.

While assembling the presentation, we began searching for a studio to create the promotional video. We were quickly connected with a local video producer whose creative team and many interesting ideas won our trust, so we kept talking. But even after long negotiations we never pinned down the final cost of their full package of services, and the cooperation turned into a stunning amount of money that our budget could not carry. Do not waste your time: agree on third-party service costs in advance! Still, it was not over. We managed to find a less ambitious team with simpler hardware whose ideas deserved our attention, and the work began. Part of the scenery was ours; the rest was provided by the video producer. The cast was carefully selected, the plot and script were approved, and filming started.

After 2 days of video shooting we were able to combine art (opening scenes in the office) and technical (gameplay and voice) parts into this video trailer:



Demolition Lander Kickstarter presentation


I knew it would be better to have a native English speaker voice the main game features and the gameplay section, but after a collective discussion we decided against it at the time. Even so, a native voice-over helps a developer stand out when the team is foreign to its target customers.

Texts, images, video, campaign design and presentation were ready. Our next challenge was finding a PR agent to promote the crowdfunding project. We focused on agencies that promised a flow of visitors to our campaign landing page for an ultra-low budget (under $100). We found one: CrowdfundBuzz guaranteed, over the phone, to deliver traffic to our campaign for one hundred dollars. They got their money and the project started.

During the campaign there were some backers; every one of them donated money and became a real fan of our games. CrowdfundBuzz did deliver some visitors, but none of the 1,000 page hits turned into a financial investment. Apart from the official fundraising campaign, the PR agency ran Twitter, Facebook and YouTube profiles. Because of its aggressive following and spamming tactics, the Twitter account was suspended and YouTube threatened to remove our material from the site entirely. After some traffic manipulation on Facebook, we ended up with about 600 subscribers with almost zero activity.

Our efforts brought in some money, but the goal was not reached and nobody knew about our product. We decided to cancel the crowdfunding campaign, think over all the nuances and start again.


5893ffc630b609790c6661e9fa3266f5.png
First campaign infographics: collected funds

e93cab9eb8c0f22726a23a2d7049042c.png
First campaign infographics: video views


The Second Campaign


After the first try we spent a long time pondering improvements and new promotion methods for the second campaign. The main change was increasing our PR budget so we could hire a reliable, experienced person to promote the upcoming campaign.

Welcome back, freelancing. We reviewed more than 10 candidates, including Indian managers who promised up to 10,000 visitors a day and a few "veteran" PR people demanding fees three times higher than our most ambitious Kickstarter goal. We settled on a PR manager from the United States who came across as surprisingly reasonable in our conversations, and started working together immediately: we rephrased the existing texts, replaced them with native English, and slightly polished our graphics and video. It is very important to understand that the key success factor is an interesting, exciting presentation video. People form their first impression from your video, and only then read the text.

The PR roadmap for a moderate budget looked like this:
• A press release about the launch of our Kickstarter campaign, distributed to bloggers, journalists and the gaming press.
• Active Facebook and Twitter accounts for the game.
• Subscriptions to the YouTube channels of leading game reviewers and video bloggers.

The PR agency delivered some results: an article on Cliqist, one on GamerHeadlines and a few blog posts. In addition, the agent proposed a variety of reward tiers after detailed research into topics like "the most popular pledge amounts", "how much money the average backer invests in game projects" and "the top reward lists offered by competitors". In our first campaign we had offered 9 different rewards to those who supported us, depending on the contributed amount ($1 minimum).

For one dollar we offered to immortalize the backer's name in the game credits. For $5,000 or more we were ready to make backers co-authors, implement their ideas and keep them up to date with our latest news. The second campaign had 19 reward tiers, including "early bird" (EB) options for the most popular pledge amounts: 1, 4, 4EB, 10, 10EB, 15, 20EB, 25, 35EB, 45EB, 50, 100, 200EB, 350EB, 500, 500, 1000, 2000, 5000.


3c9cf4e437dd5c9707ba03675344159e.png
Screenshot


The second campaign started. This time we decided to ask our friends to pledge some money to the project, reasoning that their funding would make it look alive in the eyes of potential backers. Kickstarter works quite simply: the funds shown on the project's counter are withdrawn from the cardholder's account only if the campaign is successfully funded. The Kickstarter community merely promises to transfer you money, which is why contributions are called "pledges". In the end, our friends accounted for more than half of our campaign total.

By the 11th day of the campaign we had received a negligible amount of traffic on Kickstarter and attracted about 20 backers. We then tried to reanimate the project through cross-promotion with other games whose campaigns were still running: we would promote a partner's posts on our page in exchange for similar promotion from them. I wanted to let everybody know about Demolition Lander, and at least it required no additional costs. The plan might have worked, but I had only about a day left to catch up. After hundreds of cross-promotional messages in a single night, we were nearly shut down for spamming. The effect was tiny and far from the total funding goal.


1964ff18c88c21e4e40353d935eeaeae.png
Second campaign infographics: collected funds

61dc71f38b5985634cfdf172d7a25ea9.png
Second campaign infographics: video views


Results


Crowdfunding is a very complicated way to fund an iOS game, and Kickstarter is not the best platform for mobile game promotion as such. For some reason people consider console and PC games more creative than mobile ones, which is why PC and console projects reach their goals more often than other video game crowdfunding campaigns.

The bottom line: start spreading news about your game and its crowdfunding plans before any actual fundraising activity begins. Create social media accounts, develop contacts with the press and bloggers, let the world know about you while everything is still at the idea stage, and invest more in PR.

Despite our multiple attempts to raise the necessary funds through a crowdfunding platform, our team brought the project to life on its own. There are now two games available on the App Store:

Demolition Lander: Planet Earth. Free version with only one planet and one ship available.
Demolition Lander: Universe. Full version.

3ds Max 2015 Review

$
0
0
Occasionally the development team at Autodesk focuses its efforts on improving application performance instead of adding flashy new features, and the 2015 release of 3ds Max seems to be one of those times. This version has a few nice surprises, but the biggest change is under the hood: the viewports feel more responsive and the software doesn't crash nearly as often as the previous version.

New Welcome Screen


You don't have to go very far into the software to see new features. The Welcome Screen that appears when you first start the software has been overhauled and is now divided into three sections: Learn, Start and Extend.

The Learn section has links to a list of 1-minute movies that cover all the basics of using the software. The videos touch on just the fundamentals, but they are great for those new to the software. There are also links to the 3ds Max Learning Channel on YouTube as well as some example content pages.

The Start section of the Welcome Screen has the same features from the previous Welcome Screen including a list of recently opened files along with a Workspace drop-down list and path to the current Project Folder.

The Extend section includes access to Autodesk Exchange and highlights a new plug-in each time the software is loaded. There is also a link to various scripts found on the Area website and a link to Autodesk 360 where you can access new content such as animation sequences from the Animation Store and new trees and plants using the Download Vegetation link.

One of the new features available through the Autodesk Exchange application store is the Stereo Camera plug-in. This plug-in helps you quickly create stereoscopic scenes with red and blue views for each eye that produce a 3D effect when viewed through red/blue 3D glasses.

Placement Tools


One of the more difficult skills to master for new users is positioning and orienting objects in the scene. 3ds Max 2015 includes a new tool called the Select and Place tool that makes it easier to position objects relative to one another.

Using the Select and Place tool, you can drag the selected object and it automatically snaps the pivot of the object to the surface of the other scene objects. If you right click on the tool in the main toolbar, then you can access a dialog box of settings that give you even more control. Within the dialog box is a Rotate button that causes the selected object to spin about its pivot.

The Use Base As Pivot button in the Placement Settings dialog box makes the point closest to the other object act as the pivot. For example, with this button enabled a sphere will just touch the edge of another sphere instead of being embedded in it, since its pivot is in its center.

The Pillow mode button moves the object over the surface of the other objects without allowing the two to intersect. The AutoParent option automatically makes the moved object a child of the object it is positioned next to. You can also control the Object Up Axis using the buttons in the Placement Settings dialog box.

This tool is great for beginners, but it is also helpful when placing objects such as characters on the ground surface of a scene. Figure 1 shows this tool in use as the raised object is neatly placed on the surface of the rounded housing.


Attached Image: Figure 1 - Place Tool.jpg
Figure 1: The Select and Place tool lets you place the selected object by moving it over the surface of the other scene objects. Image courtesy of Autodesk


Working with Point Cloud Datasets


A Point Cloud is a dataset of a large number of points that together make up a real-world object. Point Cloud datasets are created by scanning an environment, and the points are saved in the RCP or RCS file formats. Each point in a Point Cloud object defines a location in space and also has an associated color, so you could, for example, get a point cloud dataset of an entire bridge to use as a backdrop for your scene.

Autodesk also has an application called ReCap that works with point scanners to create Point Cloud datasets. These datasets can then be imported into 3ds Max where they appear as a Point Cloud object. These Point Cloud objects aren't affected by the scene lighting, which makes them ideal for backdrops. The drawback to using a Point Cloud object is that they cannot be edited using the modeling tools.

Usually the points are so dense that you wouldn't want to edit them anyway, but you can pare down the points to a specific volume to reduce its size.

You also cannot apply materials to Point Cloud objects. These objects automatically get a Point Cloud material applied to them. This material lets you change the color intensity of the points and you can also change the ambient occlusion and shadow reception settings. If you are planning to render out a scene with a Point Cloud object, then you'll need to activate the mental ray renderer.

The Grid and Snap Settings dialog box includes an option to snap to Point Cloud points, but most point clouds are so huge that converting a Point Cloud object to an editable geometry object could easily take hundreds of hours, so this really isn't feasible.

When a Point Cloud object is created, you can change the color channel and level of detail of the point cloud using the settings in the Display rollout. The Color Channel options include True Color, Elevation Ramp, Intensity Ramp, Normal Ramp and Single Color. There is also a map button for applying a map to the Point Cloud object.

The Level of Detail setting includes a slider that moves from Performance to Quality. It also updates the number of points included and the total number of points in the Point Cloud object. There are also settings for changing the size of each point, which is helpful to fill in all the empty space.

Although the Point Cloud object seems like a useful construct, most Point Clouds are way too dense to be interactively used within a scene. Also, Point Clouds typically have large gaping areas without any detail that hurts the visual look of the object as a background. More specifically for game developers, Point Clouds aren't supported by any game engines and the work to convert a Point Cloud object to a useable piece of geometry would take too long to be beneficial.

Quad Chamfer


For those who like to model with edge loops, the Chamfer modifier has a new Quad Chamfer option that ensures all chamfered edges are divided using quad-based polygons. This results in cleaner edits that deform better.

The tool also lets you smooth across just the chamfer area or across the entire object. You can also define the number of edges to include, remove the chamfer area or isolate the chamfer sections. Figure 2 shows how edges are smoothed using the enhanced Chamfer tool with a Quad Chamfer setting.


Attached Image: Figure 2 - Quad Chamfer.jpg
Figure 2: The new Quad Chamfer setting results in smoother objects when applied to edges. Image courtesy of Autodesk


Using the ShaderFX Editor


Previous versions of 3ds Max let you use external apps to generate and import shaders, but 3ds Max 2015 finally adds a ShaderFX Editor to the software. When the DirectX Shader material is added to an object, an Open ShaderFX button appears in the rollout.

The ShaderFX Editor lets you right-click to access the various shader nodes. These nodes can be connected by dragging from output channels to input channels, just like in the Slate Material Editor. The inputs and outputs are color-coded, which helps ensure that the right data type is being passed. At the top of each shader node is a preview of the current texture map.

Once you are happy with the shader results, you can export the shader to the HLSL, GLSL or CgFX formats. Created shaders can then be viewed in the viewports, as shown in Figure 3, without having to export the shader to the game engine.


Attached Image: Figure 3 - ShaderFX.jpg
Figure 3: Shaders created in the ShaderFX Editor can be viewed directly in the viewports. Image courtesy of Autodesk


Layer Explorer


The Scene Explorer has been overhauled and is now docked to the left of the viewports for immediate access. Although it appears docked to the left by default, you can quickly drag it to the right or top of the interface as well.

Having the Scene Explorer docked to the side of the interface provides a quick, easy way to select and organize all the scene objects. It also makes it easy to link objects to create a hierarchy.

It has also been upgraded with a layer manager mode that you can quickly switch to. This makes dividing your scene into layers quick and easy. You can also nest layers, which is a huge benefit.

Populate Improvements


The big feature from the last version of 3ds Max was the Populate tool, which quickly added diverse animated crowds to a scene. Populate has been improved in a number of ways for this release, including the ability to add runners to the crowd and more control over the speed of the crowd characters. Seated characters can now be added to idle areas, so crowds can include restaurants and concert venues, and new controls let you set the appearance of the characters. Finally, if there is a particular character that you like, you can bake its geometry and animation into the scene.

Small Changes with Big Rewards


Autodesk has always been really good at listening to requests from their users. They've even established a set of forum pages called Small Annoying Things that lets users identify annoying parts of the software and then vote on which ones get fixed.

One of the winners in the latest release was to add the Undo and Redo buttons to the main toolbar. This is a good example of a simple change that helps many users greatly.

Another small change that will be a big boost is that mental ray renderings can be viewed within the ActiveShade preview window (Figure 4). The iray renderer also supports render elements, so you can break down rendering to several compositing layers.


Attached Image: Figure 4 - ActiveShade.jpg
Figure 4: Enabling mental ray within an ActiveShade window lets you interact with the scene in real-time. Image courtesy of Autodesk


Scripters will be happy to hear that there is now a 3ds Max Python API that lets you execute Python scripts from the 3ds Max command line.
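As an illustration of the new scripting hook, here is a sketch of a Python script intended to be executed from inside 3ds Max (e.g. via the MAXScript listener). The `MaxPlus` calls shown (`Core.GetRootNode`, `.Children`, `.Name`) are from memory of the 2015 API and should be checked against Autodesk's documentation; the import is guarded because `MaxPlus` only exists inside 3ds Max.

```python
# Sketch: a script meant to be run from within 3ds Max 2015.
# MaxPlus is only importable inside 3ds Max, so the import is guarded.

def list_scene_objects():
    try:
        import MaxPlus  # 3ds Max 2015 Python API (assumed call names)
    except ImportError:
        return None  # running outside 3ds Max
    # Walk the scene graph and collect top-level node names.
    return [node.Name for node in MaxPlus.Core.GetRootNode().Children]

names = list_scene_objects()
print(names if names is not None else "Run this script from within 3ds Max")
```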

Summary


Most of the work in this release was done to improve performance, which is great for users, but can be hard to quantify. The biggest new features in this release are the Point Cloud support and the ShaderFX Editor. As far as game developers go, the Point Cloud features aren't really feasible for game scenes since most Point Clouds are extremely dense and they cannot be used within most of the major game engines. The ShaderFX Editor is a nice addition and one that game creators can take full advantage of, but most shader effect builders already have a process outside of the software and it will take some time before complex shader trees will be built within 3ds Max.

Overall, the improved performance of the software will help users immensely, but something as vague as better performance might be a hard sell for cost-conscious managers.

3ds Max 2015 and 3ds Max Design 2015 are both available as stand-alone products. 3ds Max 2015 is also available as part of the Entertainment Creation Suite, bundled with Autodesk Maya, Mudbox, MotionBuilder, Softimage, and Sketchbook Designer. For more information on 3ds Max 2015, visit the Max product pages on Autodesk’s web site at http://usa.autodesk.com. A free trial version of 3ds Max is also available at http://www.autodesk.com/3dsmaxtrial.

The Art of Feeding Time: Animation

While some movement was best handled programmatically, Feeding Time‘s extensive animal cast and layered environments still left plenty of room for hand-crafted animation. The animals in particular required experimentation to find an approach that could retain the hand-painted texturing of the illustrations while also harking back to hand-drawn animation.


old_dogsketch.gif old_dogeat.gif


An early pass involved creating actual sketched frames, then slicing the illustration into layers and carefully warping those into place to match each sketch. Once we decided to limit all the animals to just a single angle, we dispensed with the sketch phase and settled on creating the posed illustrations directly. When the finalized dog image was ready, a full set of animations was created to test our planned lineup of animations.

The initial approach was to include Sleep, Happy, Idle, Sad, and Eat animations. Sleep would play at the start of the stage, then transition into Happy upon arrival of the delivery, then settle into Idle until the player attempted to eat food, resulting in Sad for incorrect choices and Eat for correct ones.


dog2_sleeping.gif dog3_happy.gif dog1_idle.gif dog_sad2.gif dog3_chomp.gif


Ultimately, we decided to cut Sleep because its low visibility during the level intro didn’t warrant the additional assets. We also discovered that having the animals rush onto the screen in the beginning of the level and dart away at the end helped to better delineate the gameplay phase.

There were also plans to play either Happy or Sad at the end of each level for the animal that ate the most and the least food. The reaction to this, however, was almost overwhelmingly negative! Players hated the idea of always making one of the animals sad regardless of how many points they scored, so we quickly scrapped the idea.

The Happy and Sad animations were still retained to add a satisfying punch to a successful combo and to inform the player when an incorrect match was attempted. As we discovered, a sad puppy asking to be thrown a bone (instead of, say, a kitty’s fish) proved to be a great deterrent for screen mashing and worked quite well as a passive tutorial.

While posing the frames one by one was effectively employed for the Dog, Cat, Mouse, and Rabbit, a more sophisticated and easily iterated upon approach was developed for the rest of the cast:


monkeylayers.gif jaw_cycle.gif lip_pull.gif


With both methods, hidden portions of the animals' faces, such as teeth and tongues, were painted beneath separated layers. In the improved method, however, these layers could be posed and keyframed much more freely, with a variety of puppet and warp tools at our disposal making modifications to posing or frame rate much simpler.


monkey_eat.gif beaver_eating.gif lion_eat.gif


The poses themselves are often fairly extreme, but this was done to ensure that the motion was legible on small screens and at a fast pace in-game:


allframes.png


For Feeding Time’s intro animation and environments, everything was illustrated in advance on its own layer, making animation prep a smoother process than separating the flattened animals had been.

The texture atlas comprising the numerous animal frames grew to quite a large size — this is just a small chunk!


ft_animals_atlas.jpg


Because the background elements wouldn’t require the hand-drawn motion of the animals, our proprietary tool “SLAM” was used to give artists the ability to create movement that would otherwise have to be input programmatically. With SLAM, much like Adobe Flash, artists can nest many layers of images and timelines, all of which loop within one master timeline.

SLAM’s simple interface focuses on maximizing canvas visibility and allows animators to fine-tune image placement by numerical values if desired:


slamscreen.jpg


One advantage over Flash (and the initial reason SLAM was developed) is its ability to output final animation data in a succinct, clean format, which makes it much easier to port assets to other platforms.
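SLAM's actual export schema isn't public, so purely as an illustration of the nested-timeline idea described above, here is what such data might look like, sketched as a Python structure. Every field name here is invented, not SLAM's real format.

```python
# Hypothetical illustration of nested, looping timelines like SLAM's --
# all names are invented for this sketch, not SLAM's actual schema.
clip = {
    "name": "smokestack",
    "fps": 30,
    "layers": [
        {
            "image": "puff.png",
            # keyframes: (frame, x, y, rotation, scale)
            "keys": [(0, 10, 0, 0.0, 1.0), (15, 10, -40, 5.0, 1.3)],
            "loop": True,
        },
        {
            # a nested clip loops on its own timeline inside the parent,
            # all of which loop within one master timeline
            "clip": {
                "name": "spark",
                "fps": 30,
                "layers": [{"image": "spark.png",
                            "keys": [(0, 0, 0, 0.0, 1.0)],
                            "loop": True}],
            },
            "loop": True,
        },
    ],
}
print(len(clip["layers"]))  # 2
```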

Besides environments, SLAM also proved useful for large scale effects, which would otherwise bloat the game’s filesize if rendered as image sequences:


slamconfetti.jpg


Naturally, SLAM stands for Slick Animation, which is what we set out to create with a compact number of image assets. Hopefully ‘slick’ is what you’ll think when you see it in person, now that you have some insight into how we set things into motion!

Article Update Log


16 July 2014: Initial release

OpenGL 3.3+ Tutorials


Megabyte Softworks OpenGL 3.3+ Tutorials


Hello guys! My name is Michal Bubnar and I'm maintaining a series of modern OpenGL tutorials. The minimum version of OpenGL used is 3.3, where all of the deprecated functionality has been removed, so the knowledge you learn is forward compatible. At the time of writing this post there are 24 tutorials, with more to come. These tutorials are completely free :)

List of tutorials so far


01.) Creating OpenGL 3.3 Window - teaches you how to create a window with an OpenGL 3.3 context
02.) First Triangle - in this tutorial, the first triangle (and quad :) ) is rendered
03.) Shaders Are Coming - the most basic shader, which does color interpolation and replaces the old glColor3ub function
04.) Going 3D With Transformations - now we go into 3D space and do some basic rotations and translations
05.) Indexed Drawing - teaches the indexed drawing mode - rendering made by indexing vertices
06.) Textures - texture mapping basics and an explanation of the most common texture filtering modes (bilinear, trilinear, mipmapping etc.)
07.) Blending Basics - creating transparent objects, and discussing how to have fully opaque and transparent objects in the scene at the same time
08.) Simple Lighting - a really simple lighting model that uses only the diffuse part of the light, so that triangles facing the light direction are illuminated more than triangles facing the opposite direction, according to the cosine law
09.) Fonts And Ortho Projection - teaches you how to render 2D fonts using the FreeType library and also discusses orthographic projection
10.) Skybox - make the scene nicer by adding some skies around! A skybox (sometimes called a skydome) is really the oldest and easiest way to achieve this
11.) Multitexturing - mapping two or more textures at once
12.) Fog Outside - fog is always a nice effect. This tutorial teaches you how to make fog using shaders
13.) Point Lights - adding a type of light that has a position and some color (like a bulb or a flaming torch) can really improve the appearance and feel of the scene
14.) Geometry Shaders - a new shader type that generates additional geometry. This tutorial subdivides incoming triangles into three new triangles. All is done in the geometry shader on the GPU.
15.) OBJ Model Loader - a tutorial that loads OBJ model files. It is later superseded by the more robust 20th tutorial - model loading using Assimp - but you can learn what an OBJ file looks like
16.) Rendering To A Texture - offscreen rendering, where the result is a texture with your rendered content. If you were to program a security camera in a 3D game, you could use this to render the scene from the camera's view and then show the result on some display in the game
17.) Spotlight - have you ever played Doom 3? This tutorial presents a really simple yet powerful flashlight model using shaders that looks really nice
18.) 3D Picking Pt. 1 - a picking method using color indexing
19.) 3D Picking Pt. 2 - a picking method using ray casting
20.) Assimp Model Loading - loading 3D models using the Assimp library, which is free and can handle almost every modern model format
21.) Multilayered Terrain - create a nice terrain with multiple textures blended together and some paths and pavements carved into the terrain
22.) Specular Lighting - the specular part of light depends on the position of the camera and creates a nice shining effect on metallic objects. You can control this by setting material properties
23.) Particle System - learn how to program a particle system that runs entirely on the GPU using transform feedback
24.) Animation Pt. 1 - Keyframe MD2 - the very basics of computer animation using keyframes. The good old MD2 file format, used in games like Quake II, used exactly this method for animations, so it's a good starting point
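The "cosine law" mentioned in tutorial 08 is just Lambert's law: the diffuse factor is the cosine of the angle between the surface normal and the light direction, clamped at zero. A minimal Python sketch of the idea (the tutorials themselves use C++ and GLSL):

```python
import math

def lambert_diffuse(normal, to_light):
    """Diffuse factor = max(cos(angle between N and L), 0), per Lambert's law."""
    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)
    n, l = normalize(normal), normalize(to_light)
    return max(sum(a * b for a, b in zip(n, l)), 0.0)

# A surface facing the light is fully lit; one facing away gets no diffuse light.
print(lambert_diffuse((0, 1, 0), (0, 1, 0)))   # 1.0
print(lambert_diffuse((0, 1, 0), (0, -1, 0)))  # 0.0
```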

Conclusion


I hope you will find these tutorials useful, as I've invested quite a lot of time into writing them and the accompanying articles. If they help some of you, I'll be only glad :)

Article Update Log


22 Jul 2014: Initial release

WildStar CREDD System Explained

I rarely think it's worth talking about MMOs and the slightly innovative things they do - the scene has become too bland to show significant differences. That being said, WildStar has designed and implemented an exceedingly interesting monetization system that I wanted to analyze - so let's discuss WildStar CREDD!

This entire design concept is based around a monthly subscription model with two payment options. The first is the "CREDD" system, which functions as an in-game commodity that can be bought and sold like any other virtual good. WildStar CREDD can be consumed by a player to lengthen an account's subscription by one month, giving players the option to exchange in-game gold for their subscription. If players don't want to bother farming gold, they can just pay a regular subscription themselves. A real point of interest is that a normal subscription is $15.99 per month while CREDD costs $19.99.

I'll start by saying that I think this is one of the most brilliant monetization systems I've ever seen, and here's why. This system rewards whale users, who play substantially more than other players, with the potential for a free subscription, and WildStar earns more money each time WildStar CREDD is exchanged. Typically, an MMO relies on heavy micro-transaction-buying whales to contribute the bulk of the game's revenue, but this design leverages the time invested by whales, turning it into a "play to pay" model.

What's interesting about this idea is that it potentially creates two distinct player demographics for their game and WildStar is aware of the nature of the two player types.


Credd.png


I know some of the folks doing the monetization of WildStar and they're not fools; they know exactly what they're doing. Here's the irony of this player type segmentation.

Players who contribute a ton of time to a game, far more than the average, are generally called whales. These players usually represent less than 5% of a total player base and quite often buy micro-transactions significantly more than the average casual player. WildStar is saying that players who would typically become whales won't have to spend their money to play the game. It's basically the reverse of what happens in a typical MMO environment.

Concerns


Although my initial reaction to this model was positive, I found some immediate issues with the concept.

Value Drain - Because players have to spend their in-game time gathering gold to pay for their month of play, you can actually put a number on how many hours a player spends contributing his earnings towards his subscription rather than towards in-game pursuits. Have you ever been to one of those restaurants that lets you wash dishes for an hour in exchange for a meal? Rarely is the food top notch, and a meal is usually less satisfying when you're eating at a table after working in the back.

Value Extension - WildStar isn't doing this because they are philanthropic care bears - they are experimenting with a new form of monetization that they believe will draw more players in and therefore earn them more revenue. Nor is this just a random addition to the game: it's a specific design meant to earn 25% more subscription revenue for WildStar.
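That 25% figure follows directly from the two price points quoted earlier:

```python
sub_price = 15.99    # regular monthly subscription
credd_price = 19.99  # CREDD, bought with real money by another player

# When a month is paid via CREDD, the developer collects credd_price
# instead of sub_price for that month of play.
premium = credd_price / sub_price - 1
print(f"{premium:.1%}")  # 25.0% -- extra revenue per CREDD-funded month
```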

Value Mismatch - This model rests on one assumption I'm unsure about: that a regular player is willing to pay $15 for their subscription and another $20 to gain more in-game gold in a single month. $35 for one month of gameplay is just too high, and I can't imagine how a game can deliver content that makes players feel they're getting good value out of the exchange.

Variable Rate Value - This concept still doesn't make sense to me, so I hope I'm wrong here. It is going to be far more difficult to earn gold at lower levels than at higher levels. The issue I see is that the price of WildStar CREDD will be too high for new players and exceedingly low for high-level players able to do raids and end-game farming. The price of CREDD needs to be, in some way, constant for all players regardless of income rate. My guess is that in the early months players will buy CREDD like mad to enjoy the novel experience, which will drive up the price. As the game matures, CREDD's price will fall after the usual drop-off in player volume occurs (it always happens around 3 months after an MMO launch).

Opportunity Cost - This idea is somewhat basic. If the gold you have in-game can be used for a variety of things like new items and repairing armor, but also for CREDD, then any gold not spent on your subscription now has real-world value, because you could have spent that gold on WildStar CREDD. This always ruins a game for players like me, who want to feel like they're getting value from a game instead of having to fight to get value from it.

Potential Abuse


I'm not going to claim this is something that's actually happening - only that it's possible given the system design. Imagine I decide to buy WildStar CREDD and list it on the in-game marketplace for sale. A crucial requirement of the CREDD system is that the in-game price of CREDD stays high, because no player is willing to pay $20 for just a few dollars' worth of in-game money.

So what's to stop WildStar from buying up any excess CREDD in the marketplace to keep the value high? Letting the fox guard the hen house is something any good design should avoid, if only to preempt accusations of market manipulation.

On top of this, the incentive to keep the in-game gold supply low is exceedingly strong. If players have too much spare gold, they will throw it into CREDD and essentially play for free without even trying. WildStar therefore has a strong interest in keeping your gold income low with money sinks like armor and item repair after dying. If there's any relationship between CREDD sales and how hard the game is (ensuring players die more), the very design of the game may be compromised.

This really isn't different from the controversy Elder Scrolls Online faced over its collector's edition, which came with a mount. Essentially, players could spend an extra $15 or $20 on the special edition of the game to get an in-game mount. Purchasing a mount normally with in-game gold was just too time consuming, so it became an obvious pay-wall.

Summary


It's a brilliant monetization strategy and I'm excited to study the long term effects it has on the game. It creatively distributes the cost of subscription and will allow for some fascinating economic experiments. My guess is that players will really enjoy WildStar CREDD and it will enhance their experience.

I'm drafting a monetization design for another MMO in development right now, but I wouldn't consider utilizing this design. I believe a game should belong to a player for 10 days or 10 years based on their decision, without a variable fee dependent on the time they've invested.


What do you think of WildStar CREDD? I'd love to hear your experience with it or ideas you've had about the concept!

Making a Game with Blend4Web. Part 2: Models for the Location

In this article we will describe the process of creating the models for the location - geometry, textures and materials. This article is aimed at experienced Blender users that would like to familiarize themselves with creating game content for the Blend4Web engine.

Graphical content style


In order to create a game atmosphere a non-photoreal cartoon setting has been chosen. The character and environment proportions have been deliberately hypertrophied in order to give the gaming process something of a comic and unserious feel.

Location elements


This location consists of the following elements:
  • the character's action area: 5 platforms on which the main game action takes place;
  • the background environment, the role of which will be performed by less-detailed ash-colored rocks;
  • lava covering most of the scene surface.
At this stage the source blend files of models and scenes are organized as follows:


ex02_p02_img01.jpg?v=2014072916520120140


  1. env_stuff.blend - the file with the scene's environment elements which the character is going to move on;
  2. character_model.blend - the file containing the character's geometry, materials and armature;
  3. character_animation.blend - the file which has the character's group of objects and animation (including the baked one) linked to it;
  4. main_scene.blend - the scene which has the environment elements from other files linked to it. It also contains the lava model, collision geometry and the lighting settings;
  5. example2.blend - the main file, which has the scene elements and the character linked to it (in the future more game elements will be added here).

In this article we will describe the creation of simple low-poly geometry for the environment elements and the 5 central islands. As the game is intended for mobile devices we decided to manage without normal maps and use only the diffuse and specular maps.

Making the geometry of the central islands


ex02_p02_img02.jpg?v=2014073110210020140


First of all we will make the central islands in order to settle on the scene scale. This process can be divided into 3 steps:

1) A flat outline of the future islands using single vertices, which were later joined into polygons and triangulated for convenient editing when needed.


ex02_p02_img03.jpg?v=2014072916520120140


2) The Solidify modifier was applied to the flat outline with its thickness parameter set to 0.3, pushing the geometry up into a volume.


ex02_p02_img04.jpg?v=2014072916520120140


3) At the last stage, the Solidify modifier was applied to get a mesh for hand editing. The mesh was subdivided where needed at the edges of the islands. Following the final vision, cavities were added and the mesh was reshaped to create the illusion of rock fragments with hollows and projections. The edges were sharpened (using Edge Sharp), after which the Edge Split modifier was added with the Sharp Edges option enabled. The result is a well-outlined shadow around the islands.
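The modifier setup in steps 2 and 3 can also be scripted. A sketch using Blender's Python API is shown below; it only runs inside Blender (`bpy` is unavailable elsewhere), so the import is guarded, and the object name `"island"` is an assumption for illustration.

```python
# Sketch of the island modifier setup as a Blender script -- runnable only
# inside Blender, so the bpy import is guarded.

def setup_island_modifiers(obj_name="island"):
    try:
        import bpy
    except ImportError:
        return None  # not running inside Blender
    obj = bpy.data.objects[obj_name]
    # Step 2: push the flat outline up into a volume.
    solid = obj.modifiers.new("Solidify", type='SOLIDIFY')
    solid.thickness = 0.3
    # Step 3: split the edges marked as sharp so the islands
    # get a well-outlined shadow.
    split = obj.modifiers.new("EdgeSplit", type='EDGE_SPLIT')
    split.use_edge_sharp = True
    return obj

print(setup_island_modifiers() or "Run this inside Blender")
```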

Note:  It's not recommended to apply modifiers (using the Apply button). Enable the Apply Modifiers checkbox in the object settings on the Blend4Web panel instead; as a result the modifiers will be applied to the geometry automatically on export.


ex02_p02_img05.jpg?v=2014073110210020140


Texturing the central islands


Now that the geometry for the main islands has been created, let's move on to texturing and setting up the material for baking. The textures were created using a combination of baking and hand-painting techniques.

Four textures were prepared altogether.


ex02_p02_img06.jpg?v=2014072916520120140


At the first stage, let's define the color, adding small spots and cracks to create the effect of rough, dusty stone. To paint these bumps, texture brushes were used, which can be downloaded from the Internet or drawn by yourself if necessary.


ex02_p02_img07.jpg?v=2014072916520120140


At the second stage the ambient occlusion effect was baked. Because the geometry is low-poly, relatively sharp transitions between light and shadow appeared as a result. These can be slightly blurred with a Gaussian Blur filter in a graphical editor.


ex02_p02_img08.jpg?v=2014072916520120140


The third stage is the most time consuming - painting the black and white texture by hand in Texture Painting mode. It was laid over the other two, lightening and darkening certain areas. It's necessary to keep the model's geometry in mind, so that the darker areas fall mostly in cracks and the brighter ones on sharp geometry angles. A generic brush was used with stylus pressure sensitivity turned on.


ex02_p02_img09.jpg?v=2014072916520120140


The color turned out to be monotonous, so a couple of washed-out spots imitating volcanic dust and stone scratches were added. In order to get more flexibility in the texturing process without reworking the original color texture, yet another texture was introduced. On this texture the light spots decolorize the previous three textures, while the dark spots don't change the color.


ex02_p02_img10.jpg?v=2014072916520120140


You can see how the created textures were combined on the auxiliary node material scheme below.


ex02_p02_img11.jpg?v=2014072916520120140


The color of the diffuse texture (1) was multiplied by itself to increase contrast in dark places.

After that the color was burned a bit in the darker places using baked ambient occlusion (2), and the hand-painted texture (3) was layered on top - the Overlay node gave the best result.

At the next stage the texture with baked ambient occlusion (2) was layered again - this time with the Multiply node - in order to darken the textures in certain places.

Finally the fourth texture (4) was used as a mask, using which the result of the texture decolorizing (using Hue/Saturation) and the original color texture (1) were mixed together.

The specular map was made from applying the Squeeze Value node to the overall result.
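The Multiply and Overlay operations used in the node scheme above are standard blend modes. Per channel (with values in 0..1) they work like this; the formulas are the common definitions, which Blender's nodes follow closely:

```python
def multiply(base, blend):
    """Multiply blend: always darkens -- used here to deepen dark places."""
    return base * blend

def overlay(base, blend):
    """Overlay blend: darkens shadows and brightens highlights, which is
    why it suited layering the hand-painted texture on top."""
    if base < 0.5:
        return 2.0 * base * blend
    return 1.0 - 2.0 * (1.0 - base) * (1.0 - blend)

print(multiply(0.8, 0.5))   # 0.4 -- darkened
print(overlay(0.8, 0.5))    # 0.8 -- a midtone blend leaves the base unchanged
print(overlay(0.8, 0.75))   # 0.9 -- bright paint pushes highlights brighter
```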

As a result we have the following picture.


ex02_p02_img12.jpg?v=2014072916520120140


Creating the background rocks


The geometry of the rocks was made using a similar technique, although with some differences. First we created low-poly geometry of the required form. On top of it we added the Bevel modifier with an angle threshold, which added some beveling to the sharpest parts of the geometry, softening the lighting in these places.


ex02_p02_img13.jpg?v=2014072916520120140


The rock textures were created approximately in the same way as the island textures. This time a texture with decolorizing was not used because such a level of detail is excessive for the background. Also the texture created with the texture painting method is less detailed. Below you can see the final three textures and the results of laying them on top of the geometry.


ex02_p02_img14.jpg?v=2014072916520120140


The texture combination scheme was also simplified.


ex02_p02_img15.jpg?v=2014072916520120140


First comes the color map (1), over which goes the baked ambient occlusion (2), and finally - the hand-painted texture (3).

The specular map was created from the color texture. To do this a single texture channel (Separate RGB) was used, which was corrected (Squeeze Value) and given into the material as the specular color.

There is another special feature in this scheme which makes it different from the previous one - a dirty map baked into the vertex color and overlaid (Overlay node) to create contrast between the cavities and elevations of the geometry.


ex02_p02_img16.jpg?v=2014072916520120140


The final result of texturing the background rocks:


ex02_p02_img17.jpg?v=2014072916520120140


Optimizing the location elements


Let's start optimizing the elements we have and preparing them for display in Blend4Web.

First of all we need to combine all the textures of the above-mentioned elements (background rocks and the islands) into a single texture atlas and then re-bake them into a single texture map. To do this, let's combine the UV maps of all the geometry into a single UV map using the Texture Atlas addon.

Note:  The Texture Atlas addon can be activated in Blender's settings under the Addons tab (UV category)


ex02_p02_img18.jpg?v=2014072916520120140


In texture atlas mode, let's place the UV maps of every mesh so that they fill up the future texture area evenly.

Note:  It's not necessary to follow the same scale for all elements. It's recommended to allow more space for foreground elements (the islands).


ex02_p02_img19.jpg?v=2014072916520120140


After that let's bake the diffuse texture and the specular map from the materials of rocks and islands.


ex02_p02_img20.jpg?v=2014072916520120140


Note:  In order to save video memory, the specular map was packed into the alpha channel of the diffuse texture. As a result we got only one file.
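Packing the specular map into the diffuse alpha is simple channel interleaving. A sketch in plain Python on a couple of example texels (in production an image-processing tool would do this on whole images):

```python
def pack_rgba(diffuse_rgb, spec_gray):
    """Store a grayscale specular map in the alpha channel of the diffuse
    texture so both fit into a single RGBA file, saving video memory."""
    return [(r, g, b, s) for (r, g, b), s in zip(diffuse_rgb, spec_gray)]

# two example texels
diffuse = [(200, 120, 80), (90, 60, 40)]
spec = [255, 32]
print(pack_rgba(diffuse, spec))  # [(200, 120, 80, 255), (90, 60, 40, 32)]
```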


Let's place all the environment elements into a separate file (i.e. a library): env_stuff.blend. For convenience we will put them on different layers. Let's place the mesh bottom of every element at the center of coordinates. Every separate element will need a separate group with the same name.


ex02_p02_img21.jpg?v=2014072916520120140


Once the elements are gathered in the library, we can start creating the material. The material for all the library elements - both the islands and the background rocks - is the same. This lets the engine automatically merge the geometry of all these elements into a single object, which increases performance significantly by decreasing the number of draw calls.

Setting up the material


The previously baked diffuse texture (1), into the alpha channel of which the specular map is packed, serves as the basis for the node material.


ex02_p02_img22.jpg?v=2014072916520120140


Our scene includes lava, which the environment elements will be in contact with. Let's create the effect of the rock glowing and being heated at the contact points. To do this we will use a vertex mask (2), which we will apply to all library elements, painting the vertices along the bottom line of the geometry.


ex02_p02_img23.jpg?v=2014072916520120140


The vertex mask was modified several times with the Squeeze Value node. First, the cooler color of the lava glow (3) is placed on top of the texture using a more blurred mask. Then a brighter yellow color (4) is added near the contact points using a slightly tightened mask, to imitate fritted rock.

Lava should illuminate the rock from below, so to avoid shadowing in the lava-contacting places we pass the same vertex mask into the material's Emit socket.

One last thing remains - passing (5) the specular value from the diffuse texture's alpha channel to the material's Spec socket.
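The Squeeze Value node used throughout this material pushes a value through a sigmoid-style remap, which is what lets one painted vertex mask produce both a blurred falloff and a tightened one. An illustrative Python version follows; the exact formula of Blender's node is an assumption here, but the behavior (width tightens the mask, center shifts the falloff point) matches how it was used above.

```python
import math

def squeeze(value, width, center):
    """Sigmoid remap, approximating Blender's Squeeze Value node:
    larger width tightens the falloff, center shifts its midpoint."""
    return 1.0 / (1.0 + math.exp(-(value - center) * width))

mask = 0.6  # a painted vertex-color value near the lava line
print(squeeze(mask, width=4.0, center=0.5))   # soft falloff -> wide warm glow
print(squeeze(mask, width=20.0, center=0.5))  # tight falloff -> narrow hot rim
```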


ex02_p02_img24.jpg?v=2014072916520120140


Object settings


Let's enable the "Apply Modifiers" checkbox (as mentioned above) and also the "Shadows: Receive" checkbox in the object settings of the islands.


ex02_p02_img25.jpg?v=2014072916520120140


Physics


Let's create exact copies of the islands' geometry (named _collision for convenience). For these meshes we'll replace the material with a new one (named collision) and enable the "Special: Collision" checkbox in its settings (Blend4Web panel). This material will be used by the physics engine for collisions.

Let's add the resulting objects into the same groups as the islands themselves.


ex02_p02_img26.jpg?v=2014072916520120140


Conclusion


We've finished creating the library of the environment models. In one of the upcoming articles we'll demonstrate how the final game location was assembled and also describe making the lava effect.

Link to the standalone application

The source files of the application and the scene are part of the free Blend4Web SDK distribution.

The Art of Feeding Time: Branding

Although a game's branding rarely has much to do with its gameplay, it's still a very important forward-facing aspect to consider.


ft_initial_logos.jpg
Initial concepts for a Feeding Time logo.


For Feeding Time's logo, we decided to create numerous designs and get some feedback before committing to a single concept.

Our early mockups featured both a clock and various types of food. Despite seeming like a perfect fit, the analog clock caused quite a bit of confusion in-game. We wanted a numerical timer to clearly indicate a level's duration, but this was criticized when placed on an analog clock background. Since the concept already prompted some misunderstandings -- and a digital watch was too high-tech for the game's rustic ambiance -- we decided to avoid it for the logo.

The food concepts were more readable than the clock, but Feeding Time was meant to be a game where any type of animal could make an appearance. Consequently we decided to avoid single food-types to prevent the logo from being associated with just one animal.


ft_further_logos.jpg
Even more logo concepts. They're important!


A few more variations included a placemat and a dinner bell, but we didn't feel like these really captured the look of the game. We were trying to be clever, but the end results weren't quite there.

We felt that the designs came across as somewhat sterile, resembling the perfect vector logos of large conglomerates that looked bland compared to the in-game visuals.


ft_logo_bite_white.jpg
Our final logo.


Ultimately we decided to go with big, bubbly letters on top of a simple apéritif salad. It was bright and colourful, and fit right in with the restaurant-themed UI we were pursuing at the time. We even used the cloche-unveiling motif in the trailer!

One final extra touch was a bite mark on the top-right letter. We liked the idea in the early carrot-logo concept, and felt that it added an extra bit of playfulness.


ft_icon_concepts.jpg
Initial sketches for the app icon.


The app icon was a bit easier to nail down, as we decided not to avoid specific foods and animals due to the small amount of space. We still tried out a few different sketches, but the dog-and-bone was easily the winner. It matched the in-game art, represented the core of the gameplay, and was fairly readable at all resolutions.

To help us gauge the clarity of the icon, we used the App Icon Template.

This package contains a large Photoshop file with a Smart Object embedded in various portholes and device screenshots. The Smart Object can be replaced with any logo to quickly get a feel for how it appears at different resolutions and how it is framed within the App Store. This was particularly helpful with the bordering, as iOS 7 increased the corner radius, making the icons appear rounder.


ft_final_icon.jpg
Final icon iterations for Feeding Time.


Despite a lot of vibrant aesthetics, we still felt that Feeding Time was missing a face: a central identifying character.

Our first shot at a "mascot" was a grandmother who sent the player to various parts of the world to feed the hungry animals there. A grandmother fretting over everyone having enough to eat is a fairly identifiable concept, and it nicely fit in with the stall-delivery motif.


ft_babushkas.jpg
Our initial clerk was actually a babushka with some not-so-kindly variations.


However, there was one problem: the introductory animation showed the grandmother tossing various types of food into her basket and random animals periodically snatching 'em away.

We thought this sequence did a good job of previewing the gameplay in a fairly cute and innocuous fashion, but the feedback was quite negative. People were very displeased that all the nasty animals were stealing from the poor old woman!


animation_foodsteal.gif
People were quite appalled by the rapscallion animals when the clerk was played by a kindly grandma.


It was a big letdown as we really liked the animation, but much to our surprise we were told it'd still work OK with a slightly younger male clerk. A quick mockup later, and everyone was pleased with the now seemingly playful shenanigans of the animals!

Having replaced the kindly babushka with a jolly uncle archetype, we also shrank down the in-game menus and inserted the character above them to add an extra dash of personality.


ft_clerk_lineup.jpg
The clerk as he appears over two pause menus, a bonus game in which the player gets a low score, and a bonus game in which the player gets a high score.


The clerk made a substantial impact by keeping the player company on their journey, so we decided to illustrate a few more expressions. We also made these reflect the player's performance, helping to link them with in-game events such as bonus-goal completion and minigame scores.


ft_website.jpg
The official Feeding Time website complete with our logo, title-screen stall and background, a happy clerk, and a bunch of dressed up animals.


Finally, we used the clerk and various game assets for the Feeding Time website and other Incubator Games outlets. We made sure to support 3rd generation iPads with a resolution of 2048x1536, which came in handy for creating various backgrounds, banners, and icons used on our Twitter, Facebook, YouTube, tumblr, SlideDB, etc.

Although branding all these sites wasn't a must, it helped to unify our key message: Feeding Time is now available!

Article Update Log


30 July 2014: Initial release

Anatomy of an Idle Game - A Starters Guide to AngularJS

A little background is in order first, I think, to give some context to what you are reading. I've been a software engineer for over 10 years, but during that time I have done very little web development. It's just not an area I was interested in career-wise, but there is a saying: 'If you're not moving forward, you're moving backwards.' So after moving back to Canada, in an effort to make myself more employable I decided to learn about the hot new technologies that companies here in Toronto are looking for at the moment, which are mainly NodeJS, AngularJS, and Python. To that end I decided to challenge myself to learn and build a simple idle game using AngularJS in 12 hours. This article is based on that effort.

This article is targeting the advanced beginner level; it assumes that you know the basics of HTML, JavaScript, and programming.

I also recommend you download the attached zip file, which contains all the HTML, CSS, and JavaScript files. I'll be calling out key parts, but the full code is available for you to look at and will run straight out of the box. At least in Chrome - cross-browser compatibility is out of scope for this article.

You can learn more about AngularJS at https://angularjs.org/

It All Starts with Spreadsheets


Spreadsheets. Seriously, use them. Learn to love them. With an idle game everything comes down to time and clicks which means you can do all your planning in a nice friendly spreadsheet. You can work out all your buildings and upgrades, figure out how much each will cost and how the costs will change over time without ever writing any code.

A good plan is to think in terms of multipliers, limiters, and diminishing returns. For Gnomore Heroes the main limiter was intended to be free space. Free space was designed to limit how fast the player can expand, forcing them to invest in bigger, more expensive buildings to house their gnome population. I used both a percentage increase and a fixed cost increase.

The formula is 10 * hardness * depth

Each unit of free space increases depth by 1 and hardness by 5%. That determines the number of gnome-seconds it takes to dig out the next unit of free space. As you can see from the chart below, the cost jumps up initially, but eventually only the 5% increase will be significant.


Depth 1 - 10 GS(gnome seconds)
Depth 2 - 21 GS (31 GS total)
Depth 5 - 61 GS (171 GS total)
Depth 10 - 155 GS (742 GS total)
Depth 20 - 505 GS (4000 GS total)
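
The chart above can be reproduced with a few lines of plain JavaScript; the function name here is mine, not from the game's source:

```javascript
// Gnome-seconds needed to dig out the free space unit at a given depth.
// Each level dug increases depth by 1 and multiplies hardness by 1.05 (+5%),
// so hardness at a given depth is 1.05^(depth - 1).
function digCost(depth) {
    var hardness = Math.pow(1.05, depth - 1);
    return Math.round(10 * hardness * depth);
}

console.log(digCost(1));  // 10 GS
console.log(digCost(2));  // 21 GS
console.log(digCost(20)); // 505 GS
```

This is exactly the kind of formula that belongs in a spreadsheet column first, so you can eyeball the curve before writing any game code.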


Once you have everything in your spreadsheet and you have toyed around with the numbers so that it looks good - easy at the start but taking progressively longer to get the same benefit - you can start on coding.

The Bones


angular.module('gnomoreHeroesApp', ['ngCookies'])
    .controller('GameController', ['$scope', '$interval', '$cookieStore', function ($scope, $interval, $cookieStore) {

        const UNAVAILABLE = 0;
        const AVAILABLE = 1;
        const COMPLETE = 2;

        $scope.gamedata =
        {
            gnomes: 2, space: 0, mushrooms: 0, stone: 0, underpants: 0, research: 0,
            miners: 0, farmers: 0, thieves: 0, researchers: 0, builders: 0,
            mining_points: 0, farming_points: 0, thieving_points: 0,
            totalGnomes: 2, totalSpace: 1, dug: 0.0, depth: 1, hardness: 1,
            minerBonus: 1, farmerBonus: 1, thiefBonus: 10, stoneBonus: 25, builderBonus: 1, researchBonus: 1,
            cost: 10.0,
            buildings: { huts: 0, houses: 0, houseVisible: false, houseResearched: false, villages: 0, villageVisible: false, villageResearched: false, towns: 0, townVisible: false, townResearched: false, cities: 0, cityVisible: false, castles: 0, castleVisible: false },
            researchStates: [{ id: 1, value: AVAILABLE }, { id: 2, value: AVAILABLE }, { id: 4, value: AVAILABLE }, { id: 5, value: AVAILABLE }]
        };
    }]);

That block of code makes up the main game object. gamedata is where all of the dynamic data lives. It contains all the state-specific data, and when any action occurs in the game it either uses or updates some information within this object. Keeping most of your game state in one object makes things easier in terms of maintainable code and adding features: there is no confusion with local variables in methods, it's clear when you are updating or accessing this information, and when it comes time to save your game data you can save the entire object without needing to parse out specific parts.

The other important part of the code is

angular.module('gnomoreHeroesApp', ['ngCookies'])
    .controller('GameController', ['$scope', '$interval', '$cookieStore', function ($scope, $interval, $cookieStore) {

This is your first piece of AngularJS code and it forms the heart of the application. Everything we will be using is contained in this module. Pay careful attention to the first parameter, gnomoreHeroesApp - this name is how we will link the HTML page to this code module.

You'll also see $scope, $interval, and $cookieStore. These are injected dependencies that provide access to additional functionality and services - think of them as include statements. In this app we will be using:


$scope, which allows us to bind objects and methods in the controller to HTML view.
$interval, which lets us create methods that are triggered based on a timer.
$cookieStore, which lets us read and write cookie information.


You’ll notice several other objects like:

$scope.buildings =
        {
            hut: { space: 1, mushrooms: 1, buildcost: 10, description: "A hollowed out mushroom." },
            house: { space: 0, mushrooms: 3, buildcost: 30, description: "A lovely two story mushroom house." },
            village: { space: 0, mushrooms: 30, stone: 10, buildcost: 300, description: "The first step in rebuilding the gnomish civilization." },
            town: { space: 0, mushrooms: 200, stone: 50, buildcost: 600, description: "A growing community." },
            city: { space: 0, mushrooms: 1000, stone: 100, buildcost: 900, description: "Only the best gnomes can afford to live here." },
            castle: { space: 2, mushrooms: 5000, stone: 500, buildcost: 6000, description: "Triumph of gnomish engineering." }
        };

Their purpose is to hold the static data about the game. In this case the building object contains information on the different buildings available, the costs of producing those buildings, and descriptions that will appear on the front end.

The Body


One thing that angularJS does extremely well is making webpages dynamic. It makes adding click events and manipulating content very easy and it allows you to bind HTML content to your javascript objects. This means that when you change the data it automatically updates the front end without the need for the developer to write any code.

This part is important. If you don't get it right, nothing else is going to work, and as I found out, you might just end up with some confusing errors.

<HTML ng-app="gnomoreHeroesApp">
<script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0-beta.15/angular.min.js"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.0-beta.15/angular-cookies.min.js"></script>
  <script src="js/controller.js"></script>

<body ng-controller="GameController">

ng-app="gnomoreHeroesApp" - this is the first gotcha moment. That HTML attribute is what binds everything together. It identifies not only that everything within it is part of an AngularJS application, but also tells AngularJS which module to use. Make sure the app field matches the module name in your JavaScript file, otherwise AngularJS won't be able to locate your controller or any other part of your code.

The rest handles the importing of the javascript libraries and then telling AngularJS to bind everything in the body to the GameController.

Now let’s look at the status bar code first. This is the bar at the top that lets you know how many gnomes you have, how many mushrooms etc...

<div>
        <span><label>Gnomes:</label> {{gamedata.gnomes}}/{{gamedata.totalGnomes}}</span>
        <span><label>Free Space:</label> {{gamedata.space}}</span>
        <span><label>Mushrooms:</label> {{gamedata.mushrooms}}</span>
        <span><label>Stone:</label> {{gamedata.stone}}</span>
        <span><label>Underpants!!!:</label> {{gamedata.underpants}}</span>
        <span><label>Research:</label> {{gamedata.research}}</span>
        <span><label>Depth:</label> {{gamedata.depth}}</span>
    </div>

The {{}} notation is the AngularJS notation for creating a template which binds that space to some data in $scope. In this case {{gamedata.gnomes}} binds the field gnomes in gamedata to that span. AngularJS will then replace that template with the current value of gamedata.gnomes along with creating a hook which will update the value whenever it changes.

And it’s just that easy if you want to display any field in your controller - all you need is to use a template. The same works for methods:

<span class="top-gap">{{miningRate()}}/sec</span>

$scope.miningRate = function () {
            return (($scope.gamedata.miners * $scope.gamedata.minerBonus) / ($scope.gamedata.cost * $scope.gamedata.depth * $scope.gamedata.hardness)).toFixed(3);
        }

In this case I’ve bound the miningRate method to a template which will display how much free space is dug a second.

Another worthwhile point to talk about in the body of the app is using repeaters and click events. Let's take a look at the research repeater.

<div class="research-list">
                <div class="research-item " ng-repeat="item in research" ng-click="buyResearch(item.id)"
                     ng-show="isResearchAvailable(item.id)" ng-class="{'buyable-true' : gamedata.research >= item.cost, 'buyable-false' : gamedata.research < item.cost}">
                    <label class="research-name">{{item.name}}</label>
                    <strong class="research-cost">{{item.cost}}</strong>
                    <br />
                    <em class="research-text">{{item.description}}</em>

                </div>
            </div>

ng-repeat is a piece of built-in AngularJS functionality that iterates over an array and generates HTML code and bindings. In this case I'm iterating over the research array to create a div that contains all the information related to a research item. I've also bound a click event to that div which will call buyResearch with the id associated with the research item displayed in that div. In addition, the ng-show directive will dynamically show or hide the div based on whether the research is available, and the ng-class directive will inject a dynamic class into the div, which means it can be styled differently in CSS depending on whether the player has enough research points or not.
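
The research array and buyResearch handler referenced here aren't shown in the article; a minimal sketch might look like the following. The item names, costs, and fields are illustrative - the real versions live in the attached source files:

```javascript
// Static research data, analogous to the $scope.buildings object shown earlier.
var research = [
    { id: 1, name: "Carpentry", cost: 50, description: "Unlocks houses." },
    { id: 2, name: "Masonry", cost: 200, description: "Unlocks villages." }
];

// Spend research points on an item and mark it complete (2 = COMPLETE).
// Returns false when the item is unknown or the player can't afford it.
function buyResearch(gamedata, id) {
    var item = research.filter(function (r) { return r.id === id; })[0];
    if (!item || gamedata.research < item.cost) return false;
    gamedata.research -= item.cost;
    gamedata.researchStates.forEach(function (s) {
        if (s.id === id) s.value = 2;
    });
    return true;
}
```

In the actual controller the handler would hang off $scope (i.e. $scope.buyResearch = function (id) {...}) so the ng-click binding can find it.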

The Guts


Still here? Don’t worry, it’s almost over. The final part that needs going over is the guts of the game: the timer, or game loop.

var timer = $interval(function () {
            var cost = $scope.gamedata.cost;
            $scope.gamedata.mining_points += $scope.gamedata.miners * $scope.gamedata.minerBonus;
            $scope.gamedata.farming_points += $scope.gamedata.farmers * $scope.gamedata.farmerBonus;
            $scope.gamedata.thieving_points += $scope.gamedata.thieves;
            $scope.gamedata.research += $scope.gamedata.researchers * $scope.gamedata.researchBonus;


            while ($scope.gamedata.mining_points >= cost * $scope.gamedata.hardness) {
                $scope.gamedata.dug++;
                $scope.gamedata.mining_points -= cost;
                if ($scope.gamedata.dug >= $scope.gamedata.depth) {
                    $scope.gamedata.space++;
                    $scope.gamedata.dug = 0;
                    $scope.gamedata.depth++;
                    $scope.gamedata.hardness *= 1.05;

                }

                var rnd = Math.floor((Math.random() * 100) + 1);

                if (rnd <= $scope.gamedata.stoneBonus) {
                    $scope.gamedata.stone++;
                }
            }

            while ($scope.gamedata.farming_points >= cost) {
                $scope.gamedata.farming_points -= cost;
                $scope.gamedata.mushrooms++;
            }


        }, 1000, 0);

The timer, or game loop, is where all the idle work is done. In this case it fires every second and repeats until cancelled. I’ve removed part of the code for brevity (the rest can be seen in the source files), but as you can see it is pretty straightforward: the timer fires every second and the game data’s progress fields all increase by the number of gnomes working on them, plus bonuses.

Whenever the accumulated points exceed the cost, it adds another level dug or another mushroom.

One important thing to note is that you have to destroy the timer when you’re done or it will cause performance problems for the user. The code block below performs that function.

$scope.$on('$destroy', function () {
            // Make sure that the interval is destroyed too
            if (angular.isDefined(timer)) {
                $interval.cancel(timer);
                timer = undefined;
            }

            if (angular.isDefined(autoSaveTimer)) {
                $interval.cancel(autoSaveTimer);
                autoSaveTimer = undefined;
            }
        });

Lastly let’s talk about saving. Idle games take time and because of that you are going to want to save the data to a cookie that will let the player continue whenever they want to. Fortunately AngularJS makes that simple enough:

var autoSaveTimer = $interval(function () {
            $cookieStore.put('version', 1);
            $cookieStore.put('gamedata', $scope.gamedata);

        }, 60000, 0);

var init = function () {
            if ($cookieStore.get('version') == 1) {
                $scope.gamedata = $cookieStore.get('gamedata');
            }
        };

        init();


Using $interval and $cookieStore, I’ve created a second timer that writes the contents of gamedata to the cookie every minute. It also writes the current version number of the game to the cookie for compatibility reasons. When the application starts, the init method checks the version number and then loads the contents of the cookie into gamedata.

The reason it does a version check is that as your game expands and you add new options and make other changes you’ll want to ensure that loading still works for existing players. When loading old versions of the game data you’ll want to perform a mapping operation to convert the old gamedata format to the new one, if required.
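
A migration step might look something like this sketch. The version numbers and the gems field are purely hypothetical, invented for illustration:

```javascript
// Map an old saved gamedata object onto the current format.
// 'defaults' is the freshly-initialized gamedata for the current version.
function migrateGameData(version, saved, defaults) {
    if (version < 2) {
        // Hypothetical: suppose v2 added a 'gems' resource that v1 saves lack.
        if (saved.gems === undefined) saved.gems = 0;
    }
    // Backfill any other fields added since the save was written.
    for (var key in defaults) {
        if (saved[key] === undefined) saved[key] = defaults[key];
    }
    return saved;
}
```

The backfill loop means most new fields are handled automatically; the explicit per-version branches are only needed when a field is renamed or its meaning changes.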

Conclusion


That’s it - idle games and AngularJS in a nutshell. It might not be a traditional idle game, but now you have the tools to make your own Cookie Clicker or A Dark Room. If I were developing Gnomore Heroes into a full idle game I would have to add a lot more research and sort out balance issues, and I was planning on adding a quest mechanic. Eagle-eyed coders might also notice there is one key state object that isn’t saved and loaded, but I’ll leave that for you to figure out.

Article Update Log


31 July 2014: Initial release
07 August 2014: Added Missing CSS files to zip

Autodesk Maya 2015 Review

Autodesk recently released Maya 2015. It took a little longer to get this release out than in years past, but the extra time was worth it. Maya 2015 is packed full of new features, new effects engines, improvements and updates that make it easily the best version ever. The developers at Autodesk have really gone above and beyond with this release.

In particular for gamers the new simulation tools, including Bifrost for liquids and Bullet Physics for rigid and soft-body dynamics, add a huge level of realism to animated scenes. For background scene population and control over hair and fur, the new XGen system is remarkable and the ability to create shaders directly in Maya with the ShaderFX Editor is a huge time-saver. Couple these new features with the ability to see all these effects in real-time within Viewport 2.0 and you have one strong tool for game artists.

Bifrost Simulation


Maya 2015 introduces a new liquid simulation platform called Bifrost. It is based on the Naiad technology acquired by Autodesk and has now been fully integrated into Maya 2015. It is quite powerful and surprisingly easy to use.

To create a liquid effect, you simply need to select a geometry object and choose the Create Liquid option from the Bifrost menu. Then you need to select the collision objects and change settings as needed. Bifrost will automatically compute all the splashes and spray from the water particles caused by the collisions and their fall due to gravity.

You can also specify animated geometry to act as emitters and accelerators for the simulation. These emitters and accelerators are used to add forces to the liquid object to create flowing rivers and gentle waves. Figure 1 shows an example of the type of results that Bifrost can create.


Attached Image: Figure 1 - Bifrost.jpg
Figure 1: Bifrost makes it possible to create amazing liquid effects.


Once the simulation is started, the solution is computed in the background using a Scratch Cache and the available frames are shown in green on the animation timeline while frames still to be computed are displayed in yellow. This lets you continue to work while the simulation is still being computed. When you are satisfied with the results, you can save out a User Cache so the simulation doesn't need to be re-computed each time you scrub the timeline and for rendering the simulation.

The resulting water particle motion from the simulation can be viewed in the viewport if Viewport 2.0 is selected. This provides a way to view and tweak the simulation without having to render out frames.

If you are unsure where to start, the Visor has several Bifrost examples to get you started.

XGen Particles


The new XGen system gives you great control over the placement and style of curves, spheres and instanced geometry particles on the surface of other objects. This is a great system for creating unique hair and fur styles. It can also be used to populate an environment with instanced models such as trees and flowers. Custom objects can be exported as an archive and then read back in as multi-instances spread across the face of the scene and multiple archives can be selected and used together.

For all the instanced geometry, you can control the length, width, depth, tilt, twist and density of their placements or even define them using expressions. The XGen system also includes an array of brushes that are used to edit various attributes of the applied objects. These brushes include the ability to attract, repel, part, bend, twist, smooth and add noise, among other things. The system includes support for Ptex maps that can define areas clear of particles. Figure 2 shows a scene where the plants are populated using the XGen system.


Attached Image: Figure 2 - XGen.jpg
Figure 2: The XGen system is great for adding a variety of plants to the current scene.


Bullet Physics


Dynamic animations are a breeze with the new Bullet Physics features. Simply select an object, choose an effect and click the Play button. The constraints and forces, such as gravity, are automatically applied and a full set of settings are available for tweaking the results to get just the motion you want. The Bullet engine includes support for both rigid and soft-body objects and ragdolls. Be aware that the Bullet engine has to be loaded using the Plug-In Manager before it can be used.

Selected objects can be set as Active Rigid Body objects or Passive Rigid Body objects. Active objects will react to gravity and will collide with other rigid body objects. Passive objects will stay put and will collide unyielding with active objects. You can also animate rigid body objects to interact with other objects, such as a meteor impacting a group of rocks. Using the Shatter effect, you can break solid objects into pieces that break apart on impact.

ShaderFX Editor


Shaders can now be created within Maya using a real-time shader editor called ShaderFX. This node-based editor lets you connect nodes together and see the results directly within Viewport 2.0. The ShaderFX Editor supports both OpenGL and DirectX 11.

Once a shader is applied to an object, you can select the Open ShaderFX option in the Attribute Editor for the shader node. Within the editor, you can add nodes using the list to the right or you can right click and select them from the menu. Input and output channels are connected by simply dragging between them and color coding between the channels makes it easy to know which channels are compatible. Each node also has a preview that you can access. Shaders can be saved to the HLSL, GLSL and CgFX formats.

Viewport Improvements


Maya's Viewport 2.0 now supports viewing dynamics and particles. This is a huge benefit for simulations, allowing you to see the results without having to wait for a render. There is also support for viewing Paint Effects, Toon shading, nCloth, nHair, nParticles and fluids, along with controls to enable and disable Ambient Occlusion and Anti-Aliasing on the fly. Viewport 2.0 is now the default renderer in Maya 2015.


Attached Image: Figure 3 - Particles in Viewport.jpg
Figure 3: Viewport 2.0 supports particles and makes it easy to tweak effects to get the right look without having to render the scene. Image courtesy of Autodesk


For navigating an existing scene, the new Walk tool is awesome. Using this mode, you can move through the scene as if you were playing a first-person shooter, using the W, A, S and D keys. This allows navigation of the scene using a method that is familiar to most gamers.

Another navigation option specifically for tablet users is support for Multi-Touch devices including certain Wacom, Cintiq and MacBook systems. Using the common two-finger pinch gesture, you can zoom in and out of the viewport. You can also tumble the scene with a swipe and return to home with a two-finger double-tap.

Texture and Shrinkwrap Deformers


Texture Deformers provide an easy way to view and use displacement maps. Once applied you can control the amount of influence the texture has over changing the surface of the model.

Another new deformer is the Shrinkwrap deformer that wraps one object around another. This is a great way to apply flat textures to the surface of another object like a decal or to completely engulf an object like armor on a character's arm would.

OpenSubDiv Support


Maya 2015 now supports Pixar's OpenSubDiv mesh smoothing method. This allows for much faster playback of animated meshes over legacy smoothing methods. This is possible because OpenSubDiv utilizes parallel CPU and GPU architectures. Support for OpenSubDiv lets you view animated mesh-smoothed characters and objects as smoothed meshes without slowing down the frame rate.

Modeling Improvements


If you look closely at the manipulator for moving and scaling, you'll see that there are now plane handles that let you move and scale within a plane along two axes at once.

The Quad Draw tool has been updated with some great new features, including Edge Extend and Auto Weld. Edge Extend lets you create new polygons by extending a current edge, and when vertices are placed near one another, they can automatically be welded together when Auto Weld is enabled. There is also a new Relax brush that smooths the mesh across the surface. The Make Live tool lets you set a scanned or dense mesh as a surface to follow. This makes tasks such as retopology much faster and easier.

The Quad Draw tool also includes a new setting that lets you customize the hotkeys for controlling the different tool features. You can also customize the color display that is highlighted when using the different tools.

The Split Polygon tool has been combined with the Interactive Split tool and the Cut Faces tool to create a single cutting tool called the Multi-Cut tool. This single tool can now be used to cut, split and combine polygons without having to switch to different tools.

Beveling has been improved allowing for non-destructive bevel of corners. Boolean operations have also been improved to be faster and cleaner. There is also a new Convert Selection option to change the current component selection to the edges or vertices that make up the perimeter of the selection.

UV Updates


The UV Editor has also been improved with several great new features. The new multi-threaded Unfold3D mapping option lets you define cuts and then automatically unfolds the mesh for painting. The results are good because neighboring faces are kept together. There is also an Optimize feature that cleans up the UVs that are twisted along with a Distortion shader that lets you visually see the areas that are tightly bunched with red areas showing the stretched UVs and blue areas showing the compressed UVs, as shown in Figure 4.


Attached Image: Figure 4 - Unfold mapping.jpg
Figure 4: The Distortion shader quickly shows which UVs are stretched and compressed. Image courtesy of Autodesk


There are also several new editing features such as Nudge UVs, Normalize: Scale on Closest Tile, and Create UV Shell, which creates a UV shell from the current selection. The UV Editor also includes support for multi-tiled textures that are common in Mudbox and ZBrush. These tiles are presented in a grid and you can easily move UV shells between the different tiles.

Character Skinning



Skinning characters is one of the trickiest aspects of character modeling, but the new Geodesic Voxel Binding feature handles this task by allowing for some flexibility when dealing with overlapping objects. It can also work with multiple objects. The system voxelizes the skeleton and the skin mesh and computes a good fit. This results in skin weights that are more realistic than other binding options and the system is compatible with most game engines.

Summary


Since Maya is used in so many different industries, it is common to see features custom-made for other types of users. However, with this release, game developers should rejoice because almost every new feature is specifically designed to help game creators be more efficient and cool effects just got easier to do.

From the advanced simulation tools like Bifrost and Bullet Physics to the new ShaderFX Editor and improved Quad Draw and Multi-Cut tools, game developers have a lot to be happy about. And all the back-end engines wouldn't be that helpful if you couldn't see their results, but the new Viewport 2.0 improvements make it possible to see all these simulation improvements directly in the viewport letting you tweak the effect until it is just right and rendering it out only once.

Maya 2015 is available for Windows, Linux, and Macintosh OS X. For more information on any of these products, visit the Autodesk web site located at http://www.autodesk.com. A free trial version is also available at http://www.autodesk.com/freetrials.