What do you think about these mashed-up sounds, which eventually became horror music?
↧
Horror composition - What do you think?
↧
Geometry Shader only working on subset of 10.0 capable hardware
Hi,
I've got a problem with one of my geometry shaders in my game. It works on most hardware, but on a subset of machines, typically laptops with integrated graphics (although I do have a laptop with integrated graphics where it works), my shader doesn't work (that is, nothing is displayed on screen, but the calls themselves don't appear to be failing). My code is set up to require feature level 10.0, and the machines where it's not working report that they support this level. Everything else works on these machines (I also have a pure 9.3 feature level fallback renderer which works perfectly on them).
Usually I'd run the code through the debugger, however these machines are either not mine or struggle to run Visual Studio (one is an old netbook, an Acer Aspire), so that's not an easy option.
So 2 questions:
1. Can anyone think of why one might see such differences between feature level 10.0 compatible hardware, and if there are known issues, how would one programmatically identify them?
2. Any suggestions on how to diagnose these problems without the use of VS?
Background:
The shaders are designed to render a fluid in the game. The fluid is stored in a single large byte array where each droplet of fluid is represented by 4 bits (2 for colour, 2 for movement, i.e. each byte represents 2 droplets). The location of the fluid is determined by its position in the array. The geometry shader takes in an int and then, using bit masks, potentially outputs a set of vertices for every valid droplet. The rendering code then copies the original array to a buffer:
D3D11_INPUT_ELEMENT_DESC waterLayout[1];
waterLayout[0].AlignedByteOffset = 0;
waterLayout[0].Format = DXGI_FORMAT::DXGI_FORMAT_R32_UINT;
waterLayout[0].InputSlot = 0;
waterLayout[0].InputSlotClass = D3D11_INPUT_CLASSIFICATION::D3D11_INPUT_PER_VERTEX_DATA;
waterLayout[0].InstanceDataStepRate = 0;
waterLayout[0].SemanticIndex = 0;
waterLayout[0].SemanticName = "BITMASK";
auto hr = dxDevice->CreateInputLayout(waterLayout, 1, _vertexShader->shaderByteCode.get(), _vertexShader->shaderByteCodeLength, &_inputLayout);
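For reference, unpacking the droplet encoding described above would look something like this on the CPU side (a sketch only; the exact bit positions of colour vs. movement within each nibble are my assumption, since the post just says 2 bits each):
#include <cstdint>

struct Droplet
{
    uint8_t colour;   // 0-3
    uint8_t movement; // 0-3
};

// 4 bits per droplet (2 colour + 2 movement), 2 droplets per byte
inline void UnpackDroplets(uint8_t packed, Droplet& low, Droplet& high)
{
    low.colour    =  packed       & 0x3; // bits 0-1
    low.movement  = (packed >> 2) & 0x3; // bits 2-3
    high.colour   = (packed >> 4) & 0x3; // bits 4-5
    high.movement = (packed >> 6) & 0x3; // bits 6-7
}
The geometry shader would apply the same masks and shifts to each byte of the uint it receives.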
I've attached the files in case there's anything obvious.
Thanks
DataStructures.hlsl
GeometryShader.hlsl
PixelShader.hlsl
VertexShader.hlsl
↧
Profiling
Hey all,
As the code slows to a crawl we have to instrument our binaries, but it's been a while since I did programming on a platform that had options. I'm used to live stack sampling inside small kernels, and because of this I don't want to go back to the stone age of fully instrumented executables.
I had a look at CxxProf: https://github.com/monsdar/CxxProf/wiki/What-is-CxxProf%3F and it looks nice. Especially the part where you can pick and choose where it does the work.
But, before I go ahead and do all this integration, what profilers do you use? Not to mention, CxxProf hasn't been updated in 3 years, although hopefully it still works well.
Actually, I have to sneak in a second question. I'm dividing my levels into a grid of X*X cells so that I only have to iterate over the nearby cells (X-1 to X+1, Y-1 to Y+1) to find nearby objects, using a "hash" of the cell position to assign objects to cells, where each cell has a set of objects. Does this sound like a plan?
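(Not the poster's code, but a minimal sketch of the cell-hash idea described above; all names are illustrative.)
#include <cmath>
#include <cstdint>
#include <unordered_map>
#include <unordered_set>
#include <vector>

struct CellHashGrid
{
  float cell_size;
  // cell key -> ids of the objects currently in that cell
  std::unordered_map<uint64_t, std::unordered_set<int>> cells;

  // pack the two cell coordinates into one 64-bit key
  static uint64_t Key(int32_t cx, int32_t cy)
  {
    return (uint64_t(uint32_t(cx)) << 32) | uint32_t(cy);
  }

  // floor instead of truncation so negative positions bucket correctly
  int32_t CellCoord(float v) const { return int32_t(std::floor(v / cell_size)); }

  void Insert(int id, float x, float y)
  {
    cells[Key(CellCoord(x), CellCoord(y))].insert(id);
  }

  // gather ids from the cell containing (x, y) and its 8 neighbours
  std::vector<int> Nearby(float x, float y) const
  {
    std::vector<int> result;
    const int32_t cx = CellCoord(x), cy = CellCoord(y);
    for (int32_t dx = -1; dx <= 1; ++dx)
    {
      for (int32_t dy = -1; dy <= 1; ++dy)
      {
        auto it = cells.find(Key(cx + dx, cy + dy));
        if (it != cells.end())
          result.insert(result.end(), it->second.begin(), it->second.end());
      }
    }
    return result;
  }
};
As long as the cell size is at least as large as the query radius, checking the 3x3 neighbourhood is sufficient.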
↧
Idle Game language and program
Hi there,
I'm looking for a new, challenging hobby, and I thought that game development could be pretty challenging while still being inexpensive.
I'm a huge fan of Idle Raiders and Valthirian Arc and I'd like to create a game like that.
I have basic programming experience in HTML.
My question is: what language / design program (Unity, RPG Maker, Game Maker) would be the best choice for such a project? I guess there are advantages and disadvantages to each.
Thanks, Zwuckel
↧
↧
The striking difference between liking and wanting
https://sitavriend.wordpress.com/2017/05/29/the-stiking-difference-between-liking-and-wanting/
There are two different kinds of pleasure we experience every day: anticipatory pleasure, or ‘wanting’, and consummatory pleasure, or ‘liking’. ‘Wanting’ is pleasure from looking forward to future events. ‘Liking’, on the other hand, is pleasure from things in the moment. Think of it this way: when you play a game right now and are enjoying it, you experience consummatory pleasure (liking). You might experience anticipatory pleasure when you are at your day job or school but can’t wait to be home this evening so you can play your favorite game. It might surprise you, it certainly surprised me, but these two pleasures are very different from each other and even have their own neural systems in the brain. This means that according to your brain, liking and wanting aren’t the same thing. The wanting-type pleasure relies on the dopamine system: dopamine is released each time you’re looking forward to something you enjoy. The liking-type pleasure relies on your reward-driven system: when you do something you enjoy doing, opiates such as endorphins are released as a reward. These brain chemicals make you feel good.
While wanting and liking are very different, it’s good to realize that you have to like or enjoy a thing first before the wanting system for that same thing kicks in. However, you can have liking without wanting and wanting without liking. Think about a party you are dreading to go to: you really don’t ‘want’ to go, but you know that you will ‘like’ being there once you get to the party. Addiction is probably the best example of wanting without liking: an addict will ‘want’ his drug but he doesn’t ‘like’ the effect of the drug anymore.
Be careful with too much wanting though: it can create addiction (Berridge & Robinson, 1998). I realize it’s an ethical debate whether you as a designer are responsible for a player being addicted to your game. In most cases you simply want people to enjoy your game on a regular basis, and a healthy player shouldn’t become seriously addicted (where gaming becomes a problem for their daily life). While not everyone is equally susceptible to addiction, it’s important never to design for it.
The difference between ‘liking’ and ‘wanting’ isn’t very intuitive, and not much research has been done on it, so it’s no surprise that I couldn’t find many games that apply this theory. The closest application of the wanting-system to a game I could find was Candy Crush.
It takes forever before I can play again!
Candy Crush and other similar mobile games want their players to come back every day. The design of these games is driven by retention, and that’s why they often have a lives-system and short levels. The short levels encourage the player to try another level. Once the player fails too many levels and runs out of lives, he or she has to wait before they are restored. Most games with such a lives-system have cycles of about 20 hours: if a player runs out of lives, it takes about 20 hours for all lives to be fully restored.
Both the wanting- and liking-systems can be applied to all types of games. However, mobile games can probably benefit most from these different neural systems. Chances are that you aim for high retention when you design a mobile game. The wanting system is important here: your players should look forward to playing your game every day. And of course they should ‘like’ playing your game as well, especially the first time they play.
Games with micromanagement can also benefit from the wanting-system, especially if the player has to use a limited resource that replenishes over time. Imagine that you have people as a resource and you can use them to build stuff. Of course building stuff isn’t instant, it takes time. After a while there is nothing left for the player to do because all their people are busy building things. The player will then leave the game with the intention to come back when his people are finished building. The player won’t be annoyed or dislike the game because there is nothing left to do, since that is the nature of the game.
Some design ideas for you
When designing for retention, it’s good practice to ask yourself why the player should come back to play your game a second time. In my opinion your first answer should always be: “because they liked playing the game”. There is no point in playing a game you didn’t like the first time. The other answers are up to you to think about.

Designing your game to be ‘liked’ is much more difficult than designing your game to be ‘wanted’. Whether you like something or not is very personal: some people can’t get enough of shooters while others (like myself) aren’t big fans. But there are a couple of things that can help the player like your game. Completing or finishing something feels good. When your game is level-based, it helps to keep the first couple of levels short. You can increase the time spent in a level slowly as the player progresses. Finishing each level leaves the player wanting more: “just one more level, then I’ll stop”.

Designing your game to be ‘wanted’ is a lot easier. Design your game in such a way that the player has some unfinished business when he or she finishes the first session. Think about a good cliffhanger at the end of an episode: it leaves you wanting more. It’s the reason you and your friends are dying to see the new Game of Thrones season. You can design cliffhangers for your game as well. The only difference is that you might have to “force” the player out of your game somehow. Add a resource system to your game that is time-based but depletes while you are playing. It can be a lives-system like in Candy Crush or a resource such as money or people in a micromanagement game. There is no reason to stay in the game once the player runs out of the resource. Balance the resource in such a way that the player runs out of it when he or she is enjoying your game the most. It’s always important to make sure your player ends the game on a high note. It leaves them wanting more and has them looking forward to the next session. If you want, you can send the player a reminder when the resource is replenished. But there is no need for daily rewards; these kill the player’s intrinsic motivation (I will talk more about intrinsic motivation next time) and they won’t like playing your game anymore.
References and further research
Berridge, K. & Kringelbach, M. (2008). Affective neuroscience of pleasure: Rewards in humans and animals. Psychopharmacology, 199(3), 457-480.
Litman, J. (2005). Curiosity and the pleasure of learning: Wanting and liking new information. Cognition & Emotion, 19(6), 793-814.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2756052/
http://lsa.umich.edu/psych/research&labs/berridge/research/affectiveneuroscience.html
https://www.sciencedaily.com/releases/2007/03/070302115232.htm
https://www.researchgate.net/publication/245823962_Curiosity_and_the_pleasures_of_learning_Wanting_and_liking_new_information
https://www.marketingsociety.com/the-gym/liking-vs-wanting#6ZiiMdJXqRtJvGSX.97
↧
Reactance theory in games
https://sitavriend.wordpress.com/2017/05/15/reactance-theory/
You can make something more desirable by forbidding it. That something can be anything: an item, an action, an idea. This effect is known as the reactance theory. Reactance is the feeling you get when someone limits your freedom or options, basically when you’re not allowed to do something or when you are told you have to do something.
This feeling results in you:
1. Wanting the forbidden option even more.
2. Trying to reclaim your lost option.
3. Experiencing aggressive and angry feelings towards the person (this person may be fictional, or an AI) who limited your options or freedom in the first place. (These feelings can be very subtle and barely noticeable, but they motivate you to do the opposite of what you have been told to do.)
The first scientist to talk about the idea of reactance was Brehm, in A Theory of Psychological Reactance. He was the first to research the reactance theory and explains reactance as a motivational state people experience when their freedom is removed or threatened (1966). But you probably already know the reactance theory as reverse psychology. And that’s what reactance basically comes down to: getting people to do something by telling them they are not allowed to do that something, or the other way around. Unfortunately, it doesn’t always work. Some people are just not as sensitive to reactance as others, and circumstances matter too. For instance, reactance breaks down when people can rationalize why they shouldn’t do something. If someone told you not to buy the bag you really wanted, you’d probably buy the bag anyway. But if that someone explained that he bought the same bag and it broke after 2 days, you’d probably think twice before buying it.
Portal 2 applies the idea of reactance brilliantly in its level design when the player enters Aperture’s dungeons. Along the way back up, the player encounters several warning messages, as you can see in the picture below: “warning”, “do not enter”. Of course these warnings are not meant to discourage the player, they are meant to lure the player closer. Reactance helps the storyline feel less linear than it actually is: the player is more attracted to the forbidden option and goes on to explore it. It also guides the player through the level more naturally, because they want to explore this forbidden option rather than going somewhere else.
You probably want to know what’s behind those walls
The Stanley Parable applied the reactance theory to its gameplay using narrative. The player is encouraged to try all storylines, since the end is never the end in this game. In fact, the game is all about discovering new endings and alternative storylines, and that means you don’t want to listen to the narrator most of the time. The blue door ending is a great example of this: the narrator tells Stanley to walk through the red door when the player approaches a room with a red and a blue door. When you ignore the narrator and walk through the blue door, he’ll send the player back and tell Stanley to walk through the red door again. The blue door becomes a more attractive option now, so the player chooses the blue door again. The player will be sent back to choose the red door again, but this time the blue door is moved behind the player and the narrator stresses that Stanley has to walk through the red door. The blue door has never been a more attractive option.
Such an attractive blue door! Look at those curves!
The reactance theory can easily be applied to your own games. It can help you design interesting levels or create interesting narrative for games that rely on (branching) narration. When you want to implement the idea of reactance into your own game you can make something more desirable by forbidding it or you can make something less attractive by forcing it. This something can be anything: an item, a choice you want the player to make, a path the player should walk, an action you want the player to perform. Be creative! Keep in mind that not everybody is equally sensitive to reactance and that the effect breaks down when the player can rationalize why they shouldn’t do something.
Here are some ideas for you.
Level design:
– Use some art! Show that something is dangerous, or advise the player not to go there with signs or writing on the walls. It doesn’t have to be art-heavy; just tell them a certain area is closed off and that they are not allowed to enter.
Narration games:
– Somewhere in the narrative you can tell the player they are not allowed to make a certain choice (remember: don’t explain why). You can also “force” players to make a certain decision like the red door in the Stanley parable.
– Empower the player by telling them they aren’t good enough to do something; they will do it anyway.
– Tell the player that he/she has to do something a certain way, they will do the opposite.
Items:
– Tell your player an item is forbidden and that they shouldn’t take it.
Want to read more (scientific) stuff on the reactance theory?
Brehm, J. W. (1966). A theory of psychological reactance. London: Academic Press.
Brehm, J. W. (1989). Psychological reactance: Theory and applications. In T. K. Srull (ed.), Advances in Consumer Research, Volume 16 (pp. 72-75). Provo, UT: Association for Consumer Research.
https://books.google.nl/books?hl=nl&lr=&id=gd4iAQAAQBAJ&oi=fnd&pg=PT317&dq=reactance+proneness&ots=RSjeInAUj2&sig=xBekeKqXAkdk5JPYckJvlgZkDdQ#v=onepage&q&f=false
↧
Player’s Emotions
https://sitavriend.wordpress.com/2017/05/22/players-emotions/
This topic will probably be one of the more ambitious topics I will write about, for a number of reasons. First of all, emotions are not just about feeling excited about playing that new game you bought today, or feeling sad because your favorite character in Game of Thrones just got killed; they are very closely related to longer-lasting moods. Secondly, psychologists aren’t completely sure how to explain human emotions. There are a number of different theories that explain what happens when we experience an emotion, and many of them are supported by scientific studies. I’m not going into those theories because I don’t think they are relevant to this article. There is a link to a crash course video in the references below just in case you’d like to know more about emotions in general.
So what is an emotion? And more importantly, why should you take emotions into account when you design and develop games? Emotions are a bit ambiguous; even psychologists can’t agree on a unified definition. One of the definitions I found: an emotion is an internal response to an event. Something within your body might change when you experience an emotion, for example, your heart rate can increase or decrease. Other psychologists might say an emotion is more like a feeling or mood. From these definitions it feels as if emotions aren’t very tangible and are difficult to study. However, specific emotions and moods can be very useful when designing games. Taking emotions into account when designing games can definitely help you enhance the player’s experience. And although emotion is an ambitious and broad topic, it also means there are countless ways you can apply it in your game design.
Russell’s dimensional model of affect
Just like there are multiple theories of emotion, there are several models to classify emotions. I will keep to one: the picture below is Russell’s model of affect (Russell, 1980). This is a two-dimensional model in which emotions are classified based on how active (level of arousal) and how pleasant (positive or negative) an emotion is. Many action games use the model to some extent: you feel your heart pounding in your chest, your arousal is up, and you feel stressed and tense as you approach the enemy camp. On Russell’s model this would be high arousal and a somewhat negative emotion.
Now the important question: Why should you apply all this to your game? Here are a number of reasons:
Emotions can help form memories so players remember your game in more detail (LeDoux & Doyere, 2011). This enhances the player’s experience, making it richer and feel more personal.
Allowing your players to experience a positive mood can help them solve the puzzles and riddles in your game (Isen & Daubman, 1987).
Arousal in general can be quite useful as well. When you want something important to be noticed by the player, make it more arousing to grab their attention (Buodo & Sarlo, 2002).
Arousal can also boost the player’s performance. According to the Yerkes-Dodson law (Yerkes & Dodson, 1908) easy tasks can benefit from high arousal while difficult tasks are handled best when the player’s arousal level is low. You can use this law to adjust the difficulty curve of your game accordingly.
Keeping your player in a positive mood will motivate them and make them try harder (Nadler, 2010). Basically you can keep increasing the difficulty curve of your game as long as the player is in a good mood.
More specific emotions can be beneficial as well. Anger, for example, motivates players to confront a problem or pursue a goal. On the other hand, players who feel guilty about an action can be motivated by their guilt to do good and counteract what they have done (Parrott, 2004).
Even a negative emotion such as frustration can improve your game; it can motivate your player when done right. Remember when you fought an end-boss in a game but lost? What did you do? Did you quit the game, or did you go back to the last save and try again? Most games have a difficulty curve of some form to keep players challenged, and when the curve is just right, you will occasionally lose and have to try again. This trial-and-error comes with a bit of frustration, but it quickly changes to excitement and motivates you to try again. Frustration in these situations only becomes a problem when the difficulty curve is too steep and the player gets stuck somewhere in your game. In that case they might even quit altogether, which is not very good for your retention. Of course there should also be a moment of joy when the player finally overcomes an obstacle, to make all the effort feel rewarding.
Be careful with too much frustration and confusion though. It’s never good when your players become frustrated because they can’t figure out how the controls work, how to read the UI of your game, or what to do. Obviously you need to address this kind of frustration and figure out how to minimize it. Unfortunately, it’s not always possible to get rid of the bad kind of frustration for all players. Not all players are the same: for some the difficulty curve might be a little on the steep side, while others will always be a bit frustrated with your UI. In those cases you can benefit from the halo effect (Nisbett & Wilson, 1977): certain salient characteristics bias the perception of other, less salient characteristics. It’s not about getting rid of frustration altogether; make the desired emotions stand out more and the player will focus on them instead.
You can apply this knowledge about emotions in your game design regardless of the genre; however, I’d like to show you some examples for narrative and puzzle games. Puzzle games are all about frustration, confusion and joy. The halo effect is at work here: the joy of the eureka moment when the player completes a puzzle is much more salient than the frustration and confusion from the trial-and-error. Puzzle games are a great example of the good kind of frustration I talked about before. A great example of a puzzle game that uses the good kind of confusion and frustration is Antichamber. The player is told very little when they start the game; basically, the game is to figure out the game (game-ception!). It can be a great example if you want to make a puzzle game without a tutorial that takes the player by the hand each step of the way.
Antichamber: all you need to know
Narrative games are probably the best type of games to evoke emotions in players. When done right, your player will have a memorable experience of an emotional journey. As I talked about before, emotions help form memories. There is nothing better than remembering the joy you felt when you helped your character do something amazing. Narrative games can allow players to really empathize with characters when something truly sad happens. My favorite example of such a game is Thomas Was Alone, one of my favorite games of all time. The emotional narration makes it such a memorable journey. The designers did a great job expressing a full range of passive emotions such as sadness, happiness and serenity. Everything within the design of the game supports these emotions: the choice of the abstract art style, the music, and the way it is narrated. I’ve never felt so much empathy towards any video game character as I did for Thomas and his friends (and they are just colored squares!).
Thomas Was Alone: squares with a personality!
Some tips and examples for you
Now how could you implement all this knowledge in your game or narrative design? It seems like a lot to take into account, but it all depends on your game. A good place to start is to identify the overall feeling or mood you want the player to get when they play your game. Ask yourself: how should the player feel after each session? And what about when they finish your game? Maybe your game has some key events where you want the player to feel a certain way. Of course your game design document describes how players should interact with your game, but why not add a section on how they should feel when they do it?
PANAS example
Playtesting is where you find out whether players experience your intended emotions. Set your playtests up in such a way that you can either see or film the play-tester’s face directly. To decode all the different emotions, you can use the Facial Action Coding System (FACS) developed by Ekman and Friesen (1978). Even better would be to use software that decodes even the subtlest emotions for you; there is a huge range of apps, software and even APIs and SDKs for this, such as EmoVu (http://emovu.com/e/). When you don’t have the money for these tools or the time to get familiar with FACS, or you want to be more thorough with your playtests, you can use PANAS (Watson, Clark & Tellegen, 1988). PANAS is a questionnaire where your play-testers answer questions about how much they experienced a certain emotion. The picture at the right is a good example of what a PANAS questionnaire can look like. With PANAS you can find out what overall emotions the player experienced during the game or during key events in your game. It will be a bit time-consuming to set up, but once you’ve created one you can use it for all future games. There is a link to a PANAS worksheet in the references below to help you get started.
Some useful links and references
Crash Course Psychology: https://www.youtube.com/watch?v=4KbSRXP0wik&list=PL8dPuuaLjXtOPRKzVLY0jJY-uHOH9KVU6&index=26
Worksheet PANAS questionnaire: http://booksite.elsevier.com/9780123745170/Chapter%203/Chapter_3_Worksheet_3.1.pdf
LeDoux, J.E. & Doyere, V (2011). Emotional memory processing: Synaptic connectivity. In S. Nalantian, P.M. Matthews, & J.L. McClelland (eds), The Memory Process: Neuroscientific and humanistic perspectives (pp. 153-171). Cambridge, MA: MIT Press.
Yerkes R. M. & Dodson, J. D. (1908). The Relation of strength of a stimulus to rapidity of habit-formation. Journal of Comparative Neurology and Psychology, 18, 459-482.
Parrott, W. G. (2004). The nature of emotion. In M. B. Brewer & M. Hewstone (eds), Emotion and Motivation (pp. 5-20). Malden, MA: Blackwell Publishing.
Posner, J., Russell, J. A., & Peterson, B. S. (2005). The circumplex model of affect: An integrative approach to affective neuroscience, cognitive development, and psychopathology. Development and Psychopathology, 17(3), 715-734. http://doi.org/10.1017/S0954579405050340
Isen, A. M., Daubman, K. A., & Nowicki, G. P. (1987). Positive affect facilitates creative problem solving. Journal of Personality and Social Psychology, 52(6), 1122.
Buodo, G., Sarlo, M., & Palomba, D. (2002). Attentional resources measured by reaction times highlight differences within pleasant and unpleasant, high arousing stimuli. Motivation and Emotion, 26(2), 123-138.
Nisbett, R. E., & Wilson, T. D. (1977). The halo effect: Evidence for unconscious alteration of judgments. Journal of Personality and Social Psychology, 35(4), 250.
Nadler, R. T., Rabi, R., & Minda, J. P. (2010). Better mood and better performance: Learning rule-described categories is enhanced by positive mood. Psychological Science, 21(12), 1770-1776.
↧
Emulator for Mobile games?
So I am looking for an emulator to test my mobile games on.
It must be free.
It must be able to emulate multiple devices, or at least their resolutions.
Must be able to run my game from my hard drive and the store.
Most importantly it must be easy to use and setup, I only plan on making a single small mobile game.
I don't want to use Android Studio, although I will if there isn't another option.
Also, Manymo isn't in service anymore; is there a site like it?
Edit:
What do you other mobile devs use?
↧
↧
Are there any 3D games I could clone, as I'm just getting used to UE4 and need a project to get me started?
Hi
I'm a beginner to Blueprints and programming in UE4 and want to produce a game in the next 3 months. I'm getting used to the Blueprint system and making mechanics, but I don't have an idea or goal. Any simple 3D clone would be a fine suggestion. I don't really want 2D that much.
So just any simple, small 3D idea/clone would be good. Hope you leave a comment. Thanks!
↧
Week 2 - Tackling & polishing
After getting into our second week of early access, we have identified a few major flaws in our game: for example, the tower system being unclear, latency issues, and slow response to key input.
Currently, we are shifting our focus to tackling these problems instead of content creation. We would like to give players a better first-time experience rather than replayability features. The following are the things we are working on right now.
1) Client-side prediction
Currently, the response time for pressing jump or WASD is slow, and it gets even worse when latency is high. The unresponsiveness of movement is due to the lack of client-side prediction in our system. Our system currently depends on position synchronization to issue actions; therefore, there is a big delay between input and output. To solve this, we have to implement client-side prediction, which will improve the experience for users. We anticipate this system will take around 2 to 3 days to develop and around a week to debug before it goes live.
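For readers unfamiliar with the technique: client-side prediction applies input locally right away and reconciles with the authoritative server state when it arrives. A minimal sketch of the idea (illustrative names only, not Hell Warders' actual code):
#include <deque>

struct Input { unsigned seq; float move_x, move_y; };
struct State { float x, y; };

struct PredictedPlayer
{
  State state;
  std::deque<Input> pending; // inputs the server hasn't acknowledged yet

  void ApplyLocalInput(const Input& in, float dt)
  {
    pending.push_back(in);
    Simulate(state, in, dt); // respond immediately, no server round trip
  }

  void OnServerState(const State& server_state, unsigned last_acked_seq, float dt)
  {
    // drop inputs the server has already processed
    while (!pending.empty() && pending.front().seq <= last_acked_seq)
      pending.pop_front();
    // rewind to the authoritative state and replay the unacknowledged inputs
    state = server_state;
    for (const Input& in : pending)
      Simulate(state, in, dt);
  }

  static void Simulate(State& s, const Input& in, float dt)
  {
    s.x += in.move_x * dt;
    s.y += in.move_y * dt;
  }
};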
2) Tower re-work and upgrade
Firstly, our towers' functions are unclear. Secondly, our towers are boring. We have identified this problem and discussed it with our team. There are the net-launcher and the cannoneer, but users do not recognize or understand their function without a very detailed manual, and that is against our fast-paced, no-resource-management playstyle. Therefore, we will be modifying most of the defense units into something easier to understand, like a blockade, whose function users can grasp at a glance.
Another issue is the towers being too boring and lacking character. We know that some players want to focus on tower placement and enjoy the beauty of defense arrangement. Therefore, we are working on a tower upgrade system in which players pick up spirit orbs and feed them to the towers. After reaching a certain level, a tower can evolve into different units.
3) Smaller maps
One of our flaws was focusing on the co-op experience instead of solo gameplay. When we launched into early access, we only had 4-player maps, which are too large for the single-player experience. We saw a few players complaining that picking up spirit orbs was tedious. We have already taken immediate measures and developed a 1-2 player map (Dark Alley) in 2 days to compensate for this flaw. A smaller map gives players a better experience and is easier for solo players to handle. Therefore, more small maps will be developed to enhance the solo player's experience.
↧
Week 3 - Incoming Giants & Problems
After a week of development and testing, we are now close to the 0.4 update, Gentle Giants.
Firstly, we would like to address the issue of tower progression. We received comments from players about the lack of depth in the tower aspect of the game. We are trying to turn the tower system into something fun for tower defense fans. Currently, our approach is limited to upgrading towers through the spirit orb system, where you collect spirit orbs and level towers up. This upgrade system will not be the final form of the game; there is a lot more to come.
Over the weekend, we have discussed with our team and planned out the road map of our development. There are a few things we would like to have in the game.
1) Monster pack encounters to increase the depth of gameplay. Currently, the different stages of our game do not vary much, so players fight the same monsters over and over again from map 1 to map 4; we know this is boring! Therefore, we will introduce monster packs, which contain specific monster armies, such as an Abaddon army, a Moloch army, or a mix of both, to challenge the player's defense. We hope this will bring more variation. At the same time, we will also introduce armor, magic defense, and armor & magic penetration for different monsters and equipment. This encounter system will increase the depth and allow players to react to these monster packs, whether by stacking up armor or fielding magic penetration units.
2) Quests, missions and a Hell Warders rank to increase the incentive to progress. What we see in the game is that players do not have an incentive to proceed further apart from the fun of battle itself; therefore we are introducing quests and missions. These quests and missions have rewards such as hero equipment, companion equipment, cosmetics, and ranking. We would like to introduce a "Hell Warders" ranking system in which, as you proceed further, you get more challenging quests and missions, and at the same time better rewards!
3) A tower spellbook to reduce the tedium of picking up spirit orbs. One of the unique points of Hell Warders is picking up spirit orbs without a resource management system. Numerous players commented on the spirit orb system being tedious; therefore we are thinking of scrapping the spirit orb - tower picking system. Removing it does not mean we are throwing the orbs away; we are looking at whether spirit orbs should grant battle upgrades (double damage, movement speed) instead of defensive units, which might be more suitable for players. The player can choose to leave the battle, pick up spirit orbs, then go back to fight.
4) An inventory system, which is vital to hero progression. Currently, the game only has a simple perk system which does not provide a sufficient RPG element. We will be introducing cosmetics & equipment in the future. These items will affect how strong a player is, but not as random "stats"-rolling drops.
In conclusion, we have identified a few areas that we should work on to increase the depth of gameplay. We will continue to iterate on these changes to balance the game and introduce more features to make Hell Warders a better game.
↧
Week 4 - Full speed ahead!
After testing and deploying patch 0.4, we will start the next phase, the 0.5 content patch. We expect this patch to roll out two months from now. Why will it take us this long? It is because there will be a massive update to every aspect of the game.
We heard a lot of user feedback about the tower placement system being tedious but unique at the same time. Although we would like to keep it, it hinders expanding the game's tower content. As the previous blog mentioned, we will be implementing a tower loadout and quest/mission system! More news will be posted on our development blog.
Firstly, we would like to focus on improving monster types & depth. We are introducing armor, magic resistance, physical & magic damage, monster archetypes and boss mechanics. For example, some monsters will reduce or ignore physical damage from towers.
Secondly, we will start designing quests and missions with different monster types to challenge the player's choice of towers. The final goal for these quests and missions is to include a campaign mode.
Thirdly, the tower loadout system will allow players to customize tower progression and usage in different quests!
Time to dive back into Unity.
↧
↧
Farming/Village simulator hobby project
Hello!
My name is Matt. I want to make a 3D farming/village simulator game, similar to the Harvest Moon and Rune Factory series. I am looking for anyone with Unity/C# programming skills, and a 3D artist to develop the look and feel of the game. This is a hobby-only game, so it's not paid, but depending on how the project develops, I may try to push it onto Steam.
I am looking for committed team members that will enjoy working on this style of game. Experience playing these games is a plus. I want this to be a fun, productive development. I want our productivity schedule to be flexible, but I also want committed team members to crank out a fun product. Feel free to message me or reply to this post. Thanks!
Matt
↧
Cleanup work
There were two main pieces of cleanup work that I wanted to take care of in my Direct3D12 framework. The first was to break apart the LoadContent functions of the various test programs. The second was to minimize object lifetimes. Previously all the various framework wrapper objects and their internal D3D12 resources would live for nearly the entire program regardless of how long they were actually needed for.
LoadContent
In each test program the LoadContent function was a mess, due to it doing more than it was originally intended to do, which was to load the models or other content needed by the program. Since I have been trying to minimize dependencies in these test programs to keep the D3D12 code as clear as possible, I'm not using a model library and am instead filling in the vertex, index, and other buffers directly. On top of that, D3D12 also requires setting up the graphics pipeline, which was also being done in those functions. To clean this up and make the code more understandable at a glance, I've introduced new TestModel and TestGraphicsPipeline classes in each test program (some have an additional pipeline class as needed for the particular test case). These new classes take a lot of the burden off the Game subclass by being responsible for managing just a model or a graphics pipeline respectively (even if the model is still hard-coded for demonstration purposes). The LoadContent functions now take care of creating and configuring instances of these classes. So, graphics pipeline setup is still done as needed by Direct3D12, but it is encapsulated and offloaded to an appropriate class. Where before a typical LoadContent function was a few hundred lines, now it looks like:
void GameMain::LoadContent()
{
  GraphicsCore& graphics = GetGraphics();

  m_pipeline = new TestGraphicsPipeline(graphics);
  m_model = new TestModel(graphics, m_pipeline->GetShaderResourceDescHeap(), m_pipeline->GetCommandList());
  m_pipeline->SetModel(m_model);

  // setup the cameras for the viewport
  Viewport full_viewport = graphics.GetDefaultViewport();
  m_camera_angle = 3 * XM_PI / 2;
  m_camera = new Camera(full_viewport.width / full_viewport.height, 0.01f, 100.0f, XMFLOAT4(0, 0, -10, 1), XMFLOAT4(0, 0, 1, 0), XMFLOAT4(0, 1, 0, 0));
  m_pipeline->SetCamera(m_camera);
}
For the various fields that were part of the test programs' Game subclasses, those have been moved to either TestModel or TestGraphicsPipeline as appropriate. One field of note is the descriptor heap. Due to a requirement of ID3D12GraphicsCommandList::SetDescriptorHeaps, where it can use only 1 D3D12_DESCRIPTOR_HEAP_TYPE_CBV_SRV_UAV and only 1 D3D12_DESCRIPTOR_HEAP_TYPE_SAMPLER heap for rendering, the descriptor heap needs to be shared between the model and pipeline classes. The models just need it while creating their texture resources, and the pipeline needs it both for creating non-model resources (e.g. a constant buffer to hold the camera matrices) and for binding the descriptor heap when rendering a frame. So, the TestGraphicsPipeline class owns the descriptor heap instance and provides an accessor method for the TestModel to use it. This means a real game using this approach would need to compute the number of unique textures for its models and add in the number of constant buffers the rendering process requires. It could then either pass that information into the pipeline for creation of the descriptor heap and allow the models access to the same descriptor heap, or move creation of the descriptor heap outside of the pipeline, pass it to both the pipeline and the models, and manage its lifetime alongside them.
Minimizing Object Lifetimes
For the fields that were moved to the TestGraphicsPipeline class, there were a few that were being kept for nearly the whole duration of the program that weren't needed for that long. In particular, the various shader instances and the input layout: those are used to create the framework's Pipeline instance, which internally creates an ID3D12PipelineState. Once that instance is created, there is no need to keep the shaders or input layout around any longer.
For the TestModel classes, they didn't need to keep the TextureUploadBuffer instances around once their content had been copied to the Texture subclass that is actually used for rendering. So, after the TextureUploadBuffer's PrepUpload function has been called for all the textures to upload, the command list has executed, and a fence has been waited on, the TextureUploadBuffer should be safe to delete. However, I would occasionally get an exception and a message in the debug window of:
D3D12 ERROR: ID3D12Resource::: CORRUPTION: An ID3D12Resource object (0x0000027D7F65D880:'Unnamed Object') is referenced by GPU operations in-flight on Command Queue (0x0000027D7D3BB130:'Unnamed ID3D12CommandQueue Object'). It is not safe to final-release objects that may have GPU operations pending. This can result in application instability. [ EXECUTION ERROR #921: OBJECT_DELETED_WHILE_STILL_IN_USE]
So, this exposed a bug in my fence implementation that had been hiding there all along. While the code for D3D12_Core::WaitOnFence is a conversion from Microsoft's D3D12HelloWorld sample, it turns out I had forgotten to initialize the value passed along to the command queue's Signal function. In debug builds this led the value to start at 0, which was also the fence's initial value. This caused D3D12_Core::WaitOnFence to think the fence was already complete and allow execution of the program to continue. Sometimes my system would be fast enough that this was okay for this data set; other times I'd get this error. Once I initialized the signal value to 1, D3D12_Core::WaitOnFence would properly detect whether the fence needed to be waited on. Technically any value greater than the fence's initial value would work.
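For reference, the corrected signal-and-wait pattern looks roughly like this using the raw D3D12 API (a sketch rather than the framework's wrappers; the helper name is made up):
#include <d3d12.h>
#include <windows.h>

// The signaled value must be strictly greater than the fence's initial value,
// otherwise GetCompletedValue() can already equal it and the wait is skipped
// while GPU work is still in flight.
void SignalAndWait(ID3D12CommandQueue* queue, ID3D12Fence* fence, UINT64& next_value)
{
  const UINT64 wait_value = next_value++; // starts at 1 for a fence created at 0
  queue->Signal(fence, wait_value);
  if (fence->GetCompletedValue() < wait_value)
  {
    HANDLE evt = CreateEvent(NULL, FALSE, FALSE, NULL);
    fence->SetEventOnCompletion(wait_value, evt);
    WaitForSingleObject(evt, INFINITE);
    CloseHandle(evt);
  }
}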
Miscellaneous
I also tweaked D3D12_TextureUploadBuffer::PrepUploadInternal to only do state changes on the subresource being uploaded to, instead of all subresources in the target texture.
When I had initially written D3D12_ReadbackBuffer, my starting point was copying D3D12_ConstantBuffer. As a result, the 256-byte alignment required for a constant buffer has also been in the readback buffer code for the past couple of times I've uploaded the framework's source to a dev journal/blog. Since that isn't actually a requirement for a readback buffer, it has been removed.
↧
FPS meter, Moving buffers to the GPU, and Using the stencil part of the depth-stencil
While trying to build a couch and dealing with a broken pipe below the concrete floor of the basement, I've also continued playing with Direct3D12. Since the last blog entry, I have implemented an FPS meter that uses a basic texture atlas for its display, added classes for having vertex and index buffers reside in GPU memory without direct CPU access, and added a depth-fail shadow volume test case to exercise the stencil part of the depth-stencil in the framework.
FPS Meter
So far in the framework, the Game base class passed the value of the fixed timestep to the update and draw functions as the elapsed time. In order to compute the actual number of frames per second, the actual elapsed time between frames is needed instead. So, both values are now provided as arguments to the update and draw functions. This allows for it to easily be the choice of the game for which value to use, or it can use both. This of course required a minor update to all the existing test programs to add in the additional argument even though they are still using the fixed timestep value.
The FPS meter itself is a library in the project named "fps_monitor" so it can be easily re-used for projects as needed. The library is the FPSMonitor class and the shaders needed for rendering it. The FPSMonitor calculates and displays the minimum, maximum, and average FPS over a configurable number of frames. It has its own graphics pipeline for rendering. So that it doesn't get bloated with code for loading different image formats or texture atlas data formats, the already loaded data is taken as arguments to the constructor.
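The bookkeeping behind those three values amounts to a ring buffer of recent frame times. A sketch of the idea (illustrative, not FPSMonitor's actual implementation; it assumes at least one frame time has been added before querying):
#include <algorithm>
#include <cfloat>
#include <cstddef>
#include <vector>

class FpsSampler
{
public:
  explicit FpsSampler(std::size_t num_samples) : m_samples(num_samples, 0.0f) {}

  void AddFrameTime(float seconds)
  {
    m_samples[m_next] = seconds;
    m_next = (m_next + 1) % m_samples.size();
    m_count = std::min(m_count + 1, m_samples.size());
  }

  // minimum FPS comes from the longest recent frame
  float MinFps() const
  {
    float longest = 0.0f;
    for (std::size_t i = 0; i < m_count; ++i) longest = std::max(longest, m_samples[i]);
    return 1.0f / longest;
  }

  // maximum FPS comes from the shortest recent frame
  float MaxFps() const
  {
    float shortest = FLT_MAX;
    for (std::size_t i = 0; i < m_count; ++i) shortest = std::min(shortest, m_samples[i]);
    return 1.0f / shortest;
  }

  // average FPS is the sample count divided by the total time covered
  float AvgFps() const
  {
    float sum = 0.0f;
    for (std::size_t i = 0; i < m_count; ++i) sum += m_samples[i];
    return float(m_count) / sum;
  }

private:
  std::vector<float> m_samples;
  std::size_t m_next = 0;
  std::size_t m_count = 0;
};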
The vertices sent to the vertex shader use projection space x and y coordinates that maintain the width and height of a character as provided to the FPSMonitor constructor (which means this works best with monospaced fonts), uv coordinates for the texture going from 0-1 in both dimensions, and the key into the texture atlas lookup table (initialized to 0; the Update function fills in the desired value for that frame).
m_vertex_buffer_data[i * VERTS_PER_CHAR ] = { XMFLOAT2(-1 + x, y), XMFLOAT2(0.0f, 0.0f), 0 };
m_vertex_buffer_data[i * VERTS_PER_CHAR + 1] = { XMFLOAT2(-1 + x, y - m_char_height), XMFLOAT2(0.0f, 1.0f), 0 };
m_vertex_buffer_data[i * VERTS_PER_CHAR + 2] = { XMFLOAT2(-1 + x + m_char_width, y - m_char_height), XMFLOAT2(1.0f, 1.0f), 0 };
m_vertex_buffer_data[i * VERTS_PER_CHAR + 3] = { XMFLOAT2(-1 + x + m_char_width, y), XMFLOAT2(1.0f, 0.0f), 0 };
The texture atlas lookup table is provided to the vertex shader through a constant buffer that is an array of the uv coordinates to cover a rectangle for that entry.
struct LookupTableEntry
{
float left;
float right;
float top;
float bottom;
};
cbuffer LOOKUP_TABLE : register(b0)
{
LookupTableEntry lookup_table[24];
}
The combination of the 0-1 uv coordinates on each vertex and the lookup table index allow for the vertex shader to easily compute the uv coordinates for the particular character in the texture atlas.
output.uv.x = (1 - input.uv.x) * lookup_table[input.lookup_index].left + input.uv.x * lookup_table[input.lookup_index].right;
output.uv.y = (1 - input.uv.y) * lookup_table[input.lookup_index].top + input.uv.y * lookup_table[input.lookup_index].bottom;
An alternative approach would be to skip the index field in the vertex data and update the uv coordinates on the host so that the vertex shader becomes more of a pass through.
In order to test that the FPS values are computed correctly, the test program needs the frame rate to vary. Conceptually there are 2 ways to accomplish this within a program. One is to switch between two sets of content, one that doesn't stress the system's rendering capabilities and one that does. Another way, and the way taken in the test program, is to change the fixed timestep duration. By pressing and releasing numpad 1, 2, or 3, the test program moves between 60, 30, or 24 FPS respectively. While changing the frame rate up or down instantly changes the min or max FPS, the average FPS takes a little while, based on the number of samples, to get to a steady value. Assuming a system can handle the requested frame rate, once enough samples at the new frame rate have occurred to fill all of the sample slots in the FPSMonitor class, all 3 values should be the same.
GPU Vertex and Index Buffers
The vertex and index buffers in the framework thus far have used D3D12_HEAP_TYPE_UPLOAD so that their memory can be mapped when their data needs to be updated. While the FPS meter discussed in the previous section needs to update a vertex buffer every frame, that is a rare case. Taking the common example of loading a model, normally its vertex and index buffers wouldn't change after loading, so there is no need for CPU access past that point. To cover this, there are additional classes for vertex and index buffers that use D3D12_HEAP_TYPE_DEFAULT, named VertexBufferGPU_* and IndexBufferGPU16. To populate or update the data in these GPU-only buffers, the existing vertex and index buffer classes provide a PrepUpload function for the corresponding GPU-only type. This adds a copy between the two buffers to a command list; the actual copying is done when the command list is executed. Beyond the lack of CPU access, they function the same as the previously existing vertex and index buffers, so there's not too much to say about these.
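At the D3D12 level, the PrepUpload/execute flow amounts to recording a buffer-to-buffer copy followed by a state transition. A sketch (not the framework's actual code; upload_buffer and gpu_buffer are assumed to be already-created ID3D12Resource pointers on upload and default heaps respectively, with gpu_buffer in the copy-dest state):
command_list->CopyBufferRegion(gpu_buffer, 0, upload_buffer, 0, num_bytes);

D3D12_RESOURCE_BARRIER barrier = {};
barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
barrier.Transition.pResource   = gpu_buffer;
barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_COPY_DEST;
barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER;
command_list->ResourceBarrier(1, &barrier);

// close and execute the command list, then wait on a fence before releasing
// upload_buffer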
Stencil Part of the Depth-Stencil Buffer
Up until now, the depth-stencil buffer has been used for just depth data. Exercising the stencil portion of this buffer required framework updates to create a depth-stencil with an appropriate format (previously the depth-stencils were all DXGI_FORMAT_D32_FLOAT), adding the ability to configure the stencil when creating a pipeline, and an algorithm to use for a test case.
For the format, the DepthStencil class has an optional argument of "bool with_stencil" that if true will create the depth stencil with a format of DXGI_FORMAT_D32_FLOAT_S8X24_UINT. If it is false (the default), the format will be DXGI_FORMAT_D32_FLOAT.
For configuring the stencil, the static CreateD3D12 functions in the Pipeline class had their "DepthFuncs depth_func" argument changed to "const DepthStencilConfig* depth_stencil_config". If that argument is NULL, both the depth and stencil tests are disabled. If it points to an instance of the DepthStencilConfig struct, then the depth and stencil tests can be enabled or disabled individually, along with specifying the other configuration data.
/// <summary>
/// Enum of the various stencil operations
/// </summary>
/// <remarks>
/// Values must match D3D12_STENCIL_OP
/// </remarks>
enum StencilOp
{
SOP_KEEP = 1,
SOP_ZERO,
SOP_REPLACE,
SOP_INCREMENT_CLAMP,
SOP_DECREMENT_CLAMP,
SOP_INVERT,
SOP_INCREMENT_ROLLOVER,
SOP_DECREMENT_ROLLOVER
};
/// <summary>
/// Configuration for processing pixels
/// </summary>
struct StencilOpConfig
{
/// <summary>
/// Stencil operation to perform when stencil testing fails
/// </summary>
StencilOp stencil_fail;
/// <summary>
/// Stencil operation to perform when stencil testing passes, but depth testing fails
/// </summary>
StencilOp depth_fail;
/// <summary>
/// Stencil operation to perform when both stencil and depth testing pass
/// </summary>
StencilOp pass;
/// <summary>
/// Comparison function to use to compare stencil data against existing stencil data
/// </summary>
CompareFuncs comparison;
};
/// <summary>
/// Configuration for the depth stencil
/// </summary>
struct DepthStencilConfig
{
/// <summary>
/// true if depth testing is enabled. false otherwise
/// </summary>
bool depth_enable;
/// <summary>
/// true if stencil testing is enabled. false otherwise
/// </summary>
bool stencil_enable;
/// <summary>
/// Format of the depth stencil view. Must be correctly set if either depth_enable or stencil_enable is set to true.
/// </summary>
GraphicsDataFormat dsv_format;
/// <summary>
/// true if writing to the depth portion of the depth stencil is allowed. false otherwise.
/// </summary>
bool depth_write_enabled;
/// <summary>
/// Comparison function to use to compare depth data against existing depth data
/// </summary>
CompareFuncs depth_comparison;
/// <summary>
/// Bitmask for identifying which portion of the depth stencil should be used for reading stencil data
/// </summary>
UINT8 stencil_read_mask;
/// <summary>
/// Bitmask for identifying which portion of the depth stencil should be used for writing stencil data
/// </summary>
UINT8 stencil_write_mask;
/// <summary>
/// Configuration for processing pixels with a surface normal towards the camera
/// </summary>
StencilOpConfig stencil_front_face;
/// <summary>
/// Configuration for processing pixels with a surface normal away from the camera
/// </summary>
StencilOpConfig stencil_back_face;
};
After those changes it was on to an algorithm to use as a test case. While over the years I've read up on different algorithms that use the stencil, I hadn't implemented one before. I ended up picking depth-fail shadow volumes, using both the Wikipedia article and http://joshbeam.com/articles/stenciled_shadow_volumes_in_opengl/ for reference (I don't plan for this entry to be a tutorial on depth-fail, so I'd recommend those links if you want to read up on the algorithm). The scene is a simple one comprised of an omnidirectional light source at (8, 0, 0), an occluder at (1, 0, 0), and a textured cube that can be moved in y and z with the arrow keys and is initially positioned at (-7, 0, 0). The textured cube starts in shadow, and the up, down, and left arrows allow it to be moved partially or completely out of shadow and back in. For the right arrow key, there was an issue where the framework always assumed D3D12_CULL_MODE_BACK, which prevented the stencil buffer from being correct. Since the stencil configuration in D3D12 allows different stencil operations for front faces and back faces, only 1 pass is needed for setting the stencil buffer when the cull mode is set to none. By doing that, the model is correctly lit when moving out of the shadow volume with the right arrow key as well.
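Putting the pieces together, a depth-fail stencil setup with the structs above would look roughly like this (a sketch; the CompareFuncs values are placeholders since that enum isn't shown in this post):
// single-pass depth-fail with cull mode none: back faces increment the
// stencil when the depth test fails, front faces decrement it
DepthStencilConfig config = {};
config.depth_enable        = true;
config.stencil_enable      = true;
config.dsv_format          = /* format matching DXGI_FORMAT_D32_FLOAT_S8X24_UINT */;
config.depth_write_enabled = false; // the volume pass must not write depth
config.depth_comparison    = /* less */;
config.stencil_read_mask   = 0xff;
config.stencil_write_mask  = 0xff;
config.stencil_front_face.stencil_fail = SOP_KEEP;
config.stencil_front_face.depth_fail   = SOP_DECREMENT_ROLLOVER;
config.stencil_front_face.pass         = SOP_KEEP;
config.stencil_front_face.comparison   = /* always */;
config.stencil_back_face.stencil_fail  = SOP_KEEP;
config.stencil_back_face.depth_fail    = SOP_INCREMENT_ROLLOVER;
config.stencil_back_face.pass          = SOP_KEEP;
config.stencil_back_face.comparison    = /* always */;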
I would attach a zip file containing the source for the framework and its test programs to this post, however it seems that ability was removed for non-images. So, I'll look into alternatives for sharing the source.
↧
[Project Peril] Particle Systems/Ability Effects, Procedural Dungeon Generation
This week I implemented a system to spawn and display particle systems which convey abilities, fixed a variety of bugs in the procedural dungeon generator, and refactored old code + optimized a few things here and there to maintain smooth performance. With dungeon generation in a good state I was able to begin writing the logic that handles placement of dungeon assets (walls, gates, textures, doodads, chests, monsters, etc). Overall Project Peril is making excellent progress. I expect to have some sort of playable demo by the end of 2017. Maybe sooner if I bust ass.
Programming is getting a bit stale as of late, so I might switch my workflow to animation and effects for a few days. It never hurts to have more eye candy.
↧
↧
daughter wants to learn coding
I am a professional and certified Network Engineer for a large telecom in the US. I do limited programming at work but do a lot of hobby stuff on my own and consider myself very proficient. I also have a Bachelor's and Master's degree in Computer Science.
My daughter (almost 10 years old) has expressed to me that she would like to learn to code. And that has me nerding out lol. I would like some feedback on what language would be good to start with. I am very proficient in C, C++, C#, Java, and Python, so one of those would be best. I do not want to use a visual/WYSIWYG editor or anything like that for programming; just an IDE and a compiler/interpreter.
I was thinking Java might be the one to go with. Any suggestions?
↧
Making a texture mask
Hello guys! Totally new here, as well as to graphics programming in general.
I'm making a 2D fighting game engine using XNA/MonoGame, and oddly enough what's tripping me up is how I can have the user define their lifebars. Obviously the simplest (old-school) method would be to just use a bounding rect to cut off the display region of the current sprite/animation to only the "remaining life" region. Done.
But what if they want to define some other non-rectangular shape for the end of the lifebar, such as a rhombus, or pill shape? So I figure, why not have them make a mask sprite to define what the end of the lifebar should look like? That way they can have whatever end shape they want at any point in the lifebar, whether it's a static sprite or animation. It will also provide another useful tool familiar to designers that they can also use anywhere else in the engine.
Basically, I want the user to be able to insert a sprite like this:
And a sprite like this:
And end up with this at the end of the lifebar (after aligning the mask with the end of the lifebar):
The algorithm itself is simple. I just need it to take the sprite and multiply it by the color (or color component, to simplify things) in the overlapping mask sprite.
But the problem is this: the mask texture and the lifebar texture may be different dimensions. In what I've worked with in HLSL so far, texture coordinates are represented from 0.0f to 1.0f in the X or Y direction. So in my head, that mask sprite is going to be treated as the same size as the lifebar texture, which means it would create a totally different image as the life decreases!
I'm using HLSL, but I'm unsure how to approach this problem. Is there any way to check image dimensions so I can tell it to use the mask texture as-is, and not clip anything outside of the mask texture?
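One way out, assuming the two textures' pixel sizes can be passed into the shader as effect parameters, is to rescale the lifebar uv into mask space yourself rather than relying on the 0-1 range meaning the same thing for both textures. The math, sketched here in C++ (all names hypothetical; the same arithmetic would run per-pixel in the HLSL):
struct Float2 { float x, y; };

// Map a uv coordinate on the lifebar texture to a uv coordinate on the mask
// texture, keeping the mask at its own pixel size and aligning its right
// edge with the current end of the lifebar (fill_fraction in 0-1).
Float2 LifebarUvToMaskUv(Float2 uv, Float2 lifebar_size, Float2 mask_size,
                         float fill_fraction)
{
  // distance in lifebar pixels from this texel to the fill edge
  float px_from_edge = (fill_fraction - uv.x) * lifebar_size.x;
  Float2 mask_uv;
  mask_uv.x = 1.0f - px_from_edge / mask_size.x; // mask's right edge sits at the fill edge
  mask_uv.y = uv.y * lifebar_size.y / mask_size.y;
  // texels whose mask_uv falls outside 0-1 are outside the mask region:
  // treat them as fully opaque (before the mask) or fully cut off (past it)
  return mask_uv;
}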
↧
PlayerController Question
I'm using this tutorial to make my game.
If I want to use the PlayerController like in this tutorial,
my question is: do I only have to port my input from my character class to my PlayerController?
↧