
Florian Noirbent

Articles about my projects and about game programming


It has now been more than a year since I decided, with a friend, to make mobile games. I was waiting until I had a concrete result before writing an article. That's now done with the release of our first game: Hop Boy. I will probably write another article about our journey and explain why it took us so long to get to this result. But to summarize quickly: I worked on 6 different game prototypes and with at least 5 artists (freelance, Indian, intern, …). My partner handles the administrative side, finding artists, buying music and sound effects, game design, testing, marketing, etc., while I focus on development.
HopBoy

Hop Boy is an endless runner in which the character must climb as high as possible by jumping, avoiding the obstacles in his path and collecting the gems he encounters. These gems let you continue after dying and unlock new characters.

Procedural generation

Development of Hop Boy really started at the beginning of February, and even though the biggest part of the work was integrating analytics, ad networks, Google Play Games, Game Center, in-app purchases, etc., I would rather talk about the procedural level generation. The concept is fairly simple: I created a series of modules split into four difficulty levels, and they are instantiated "randomly" as the character moves forward. To avoid building everything twice, a module can be mirrored left/right when it is instantiated.
But I quickly realized that a plain random pick did not produce a quality course: some sections were too repetitive and some sequences were impossible to clear.
To solve these problems, I developed a generic engine for generating endless courses, which I have already started reusing in other games in development.
Each module is associated with a list of tags, created dynamically in the inspector, that describe it. For example, in Hop Boy we have "left" and "right", which define the position of the module's first obstacle, the difficulty ("start", "easy", "medium", "hard"), the presence of gems, and whether the module is "short" or "long". Then an algorithm, also dynamic and defined in the inspector, sets the spawning rules. I reused the concepts I had put in place during my second-year free project at Epitech, where I had developed a proof of concept for a modular roguelike.
moduleEditor
The goal was to take back as much control as possible over the level design while keeping procedural generation. For now the engine only works with booleans that select a list of "candidate" modules, then an ordinary random pick selects the next module from that list. I have several endless runner projects in development and I plan to reuse and improve this system, notably by specifying spawn probabilities depending on the circumstances, to allow even finer control over the shape the levels take.
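To make this concrete, here is a minimal sketch of that tag-based selection in Unity C#; the class and field names are my own illustration, not the actual engine's:

```csharp
using System.Collections.Generic;
using System.Linq;
using UnityEngine;

// Hypothetical sketch of tag-based module selection: each module carries
// descriptive tags, boolean rules filter a list of candidates, then an
// ordinary random pick chooses the next module to instantiate.
public class ModuleSelector : MonoBehaviour
{
    [System.Serializable]
    public class Module
    {
        public GameObject prefab;
        public List<string> tags; // e.g. "left", "easy", "gems", "short"
    }

    public List<Module> modules;

    // Keep the modules that have all the required tags and none of the
    // forbidden ones, then pick one at random from the candidates.
    public Module PickNext(List<string> required, List<string> forbidden)
    {
        var candidates = modules
            .Where(m => required.All(t => m.tags.Contains(t)))
            .Where(m => !forbidden.Any(t => m.tags.Contains(t)))
            .ToList();
        return candidates.Count == 0 ? null : candidates[Random.Range(0, candidates.Count)];
    }

    // Instantiate a module, optionally mirrored left/right by flipping its X scale.
    public GameObject Spawn(Module module, Vector3 position, bool mirror)
    {
        var go = Instantiate(module.prefab, position, Quaternion.identity);
        if (mirror)
        {
            var s = go.transform.localScale;
            go.transform.localScale = new Vector3(-s.x, s.y, s.z);
        }
        return go;
    }
}
```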

Available on Android and iOS, I'll let you try it for yourself, and I'm eagerly waiting for your feedback. On top of my other projects, I'm working on an update for Hop Boy; follow me on Twitter to learn more 😉

https://play.google.com/store/apps/details?id=fr.supremapp.hopboy
https://itunes.apple.com/us/app/hop-boy/id1076690375

Here is my entry for Ludum Dare #35, made last weekend:

WebGL, Windows, Source
Because of the dropped support for the Web Player, I can't let you play directly inside the article, but I will work on it. Also, I can't easily rebuild every project from the last 5 years for WebGL, because most projects from Unity 3 don't upgrade well to Unity 5 (different physics behaviors, and a lot of API changes).

The development took about 20h:
I started on Saturday at 8:30. I first thought about the theme, "shapeshifter": my first idea was a fighting game with transformations into a werewolf. Too many graphics, so I trashed it. Then I came up with this "transformers" idea. At first I wanted three transformations: a car, a plane and a mecha. The car and the plane would explode at the slightest mistake; the mecha would progress slowly but would be invincible, to be used in the hardest places. After an hour and a half of thinking, I began to code.
By the lunch break, I had a working car and the beginning of a rocket (I replaced the plane because I'm a fan of rockets: @2010, @2013). First review: the controls were too difficult to punish mistakes, so I gave up on the exploding vehicles idea. The mecha lost much of its potential, and because it might be too hard to model and animate, I decided to leave it aside.
Then came 4 hours of struggle: switching vehicles didn't work. The colliders are different, and when I switched, the player would fall below the map, get bumped away or get stuck… I tried several approaches before I found a solution: the new vehicle spawns with a collider at 50% size that grows back to full size over a second. It worked pretty well, but the rocket still crashed immediately and the car would start too slowly. Second idea: add an impulse when switching.
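Here is a hedged reconstruction of that trick in Unity C# (the names and numbers are mine, not the jam code's): the new vehicle's collider starts at 50% size and grows back over a second, and an impulse carries the momentum over:

```csharp
using System.Collections;
using UnityEngine;

// Sketch of the vehicle-switch trick: spawn the new vehicle with a half-size
// collider that grows back to full size over a second, so it doesn't clip
// into the ground or get bumped away, and add an impulse on switch.
public class VehicleSwitcher : MonoBehaviour
{
    public float growDuration = 1f;
    public float switchImpulse = 10f;

    // Both vehicle prefabs are assumed to carry a Rigidbody and a BoxCollider.
    public GameObject Switch(GameObject current, GameObject nextPrefab)
    {
        var next = Instantiate(nextPrefab, current.transform.position, current.transform.rotation);

        // Carry the velocity over and add a forward impulse, so the rocket
        // doesn't stall and the car doesn't start too slowly.
        var newBody = next.GetComponent<Rigidbody>();
        newBody.velocity = current.GetComponent<Rigidbody>().velocity;
        newBody.AddForce(next.transform.forward * switchImpulse, ForceMode.Impulse);

        StartCoroutine(GrowCollider(next.GetComponent<BoxCollider>()));
        Destroy(current);
        return next;
    }

    // Grow the collider from 50% back to its authored size over growDuration.
    IEnumerator GrowCollider(BoxCollider box)
    {
        Vector3 fullSize = box.size;
        for (float t = 0f; t < growDuration; t += Time.deltaTime)
        {
            box.size = fullSize * Mathf.Lerp(0.5f, 1f, t / growDuration);
            yield return null;
        }
        box.size = fullSize;
    }
}
```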
Once the gameplay was working and settled, I spent the rest of the afternoon (2h) working on level design. A new problem appeared: the rocket was too strong, and when I was testing I had no reason to switch back to the car, while my testers couldn't even move. The solution I chose was to greatly increase the rocket's air drag: beginners could control the rocket much more easily (less inertia), and the rocket's max speed became lower than the car's.
After dinner I worked on the menu and the leaderboard. I was lucky to find dreamlo, which is very poorly referenced: I had seen people talking about it on a forum but couldn't find the site, and I almost gave up, believing it dead.
At the end of the first day, the game was functional, with cubes, cylinders and an ugly menu.
I spent Sunday morning on the "artistic" side: modeling, textures, lighting, fonts, sound, menu layout. I spent the afternoon testing the game, balancing it again, changing the level design and publishing it.

I haven't published for almost a year, not because I have nothing to say; actually I have never been busier. I'm more hesitant than before to talk about ongoing projects, and I prefer to wait until they are at a more advanced stage.

I finished Epitech with my internship defense in late September. I had the opportunity to land an excellent internship in the development of military applications, and I have decided to stay on, at least until the end of the current project. Without going into confidential details, I can say that I am working on a set of software that will be embedded in tanks (cartography, TCP/IP networking over radio, messaging, drivers for the turrets, etc.). I was even offered the chance to meet our customer and take part in field testing. So I decided to keep video game development as a hobby for now.

I worked on several projects in my free time this year, and I will write articles about them in the coming weeks: a Unity plugin for editing hexagonal boards, some shaders, and more than five mobile game prototypes, two of which will soon be published on the Android and iOS stores.

I programmed a basic genetic algorithm as a school project a few months ago. I didn't immediately think about putting it on my blog, since it's not directly game related, but it can actually be used for video games in various ways… and it's cool!
By the way, I told you about a game concept mainly built on this last year, when I arrived in China: http://florian-noirbent.com/blog/2013/09/. I talked about a game like Robocraft (which didn't exist yet), with AIs building their own vehicles like every player and even evolving their playstyle depending on the vehicle they built. At the time, I mentioned I didn't know how to evolve behavior, which prevented me from actually starting a prototype.

Now about this year's school project: our teacher gave us a robot with two free wheels and a movable arm, plus an API to control it. The goal was to write software that makes it move as fast as possible using the arm. So I built a genetic algorithm:
I created a population of hundreds of virtual robots with random programs, tried them, and selected the ones that actually moved a little. I combined their programs (the beginning of one with the ending of another, etc.), built a new population that way, and evaluated it again. I repeated this process tens of times to finally get a robot that moves quite well; I even ranked first in this module thanks to this nice robot.
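For the curious, here is a minimal sketch of that loop in C# (an illustration, not the actual sources): evaluate, keep the movers, splice their programs, repeat. The mutation step is a standard addition of mine, not something described above:

```csharp
using System;
using System.Linq;

// Minimal sketch of the evolutionary loop: a genome is a fixed-length
// "program", fitness is how far the virtual robot moves, and crossover
// splices the beginning of one parent with the ending of another.
class GeneticAlgorithm
{
    const int PopulationSize = 200;
    const int GenomeLength = 50;
    const int Generations = 50;
    const double MutationRate = 0.01;
    static readonly Random Rng = new Random();

    static void Main()
    {
        // Random initial population.
        var population = Enumerable.Range(0, PopulationSize)
            .Select(_ => RandomGenome())
            .ToArray();

        for (int gen = 0; gen < Generations; gen++)
        {
            // Keep the half that actually moves the most.
            var parents = population
                .OrderByDescending(Evaluate)
                .Take(PopulationSize / 2)
                .ToArray();
            Console.WriteLine($"generation {gen}: best fitness {Evaluate(parents[0])}");

            // Refill the population with crossovers of two random parents.
            population = Enumerable.Range(0, PopulationSize)
                .Select(_ => Crossover(Pick(parents), Pick(parents)))
                .ToArray();
        }
    }

    static int[] RandomGenome() =>
        Enumerable.Range(0, GenomeLength).Select(_ => Rng.Next(256)).ToArray();

    static T Pick<T>(T[] items) => items[Rng.Next(items.Length)];

    // Splice the beginning of one parent with the ending of the other,
    // with a small chance of mutating each gene.
    static int[] Crossover(int[] a, int[] b)
    {
        int cut = Rng.Next(GenomeLength);
        return a.Take(cut).Concat(b.Skip(cut))
            .Select(g => Rng.NextDouble() < MutationRate ? Rng.Next(256) : g)
            .ToArray();
    }

    // Placeholder fitness: in the real project this ran the robot simulation
    // and measured the distance traveled.
    static double Evaluate(int[] genome) => genome.Sum();
}
```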

If you are interested, you will find a lot of similar experiments on YouTube; here are some links:
An impressive worm.
Some funny behaviors.
And the big one: Boston Dynamics (Cyberdyne?)

You will find the sources on GitHub. After this project I discovered other concepts that give better results, like islands: having separate populations evolving on their own with occasional exchanges, or having a geographical distribution so that only neighboring individuals are matched… all these techniques aim to slow down convergence, to make sure the robot keeps improving for as long as possible.
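As an illustration, the island variant can be bolted onto the sketch above (same hypothetical helpers, with the selection/crossover loop factored into a NextGeneration step):

```csharp
// Extends the previous sketch (same RandomGenome/Evaluate/Crossover/Pick
// helpers and PopulationSize constant, added to the same class).
static int[][] NextGeneration(int[][] population)
{
    // One normal generation: keep the best half, refill with crossovers.
    var parents = population.OrderByDescending(Evaluate)
        .Take(population.Length / 2).ToArray();
    return Enumerable.Range(0, population.Length)
        .Select(_ => Crossover(Pick(parents), Pick(parents)))
        .ToArray();
}

static int[][][] EvolveIslands(int islandCount, int generations)
{
    // Several populations ("islands") start out independent.
    var islands = Enumerable.Range(0, islandCount)
        .Select(_ => Enumerable.Range(0, PopulationSize).Select(__ => RandomGenome()).ToArray())
        .ToArray();

    for (int gen = 0; gen < generations; gen++)
    {
        // Each island evolves in isolation...
        for (int i = 0; i < islands.Length; i++)
            islands[i] = NextGeneration(islands[i]);

        // ...with an occasional exchange: the best individual of each island
        // is copied to the next one, slowing down global convergence.
        if (gen % 10 == 0)
            for (int i = 0; i < islands.Length; i++)
            {
                var best = islands[i].OrderByDescending(Evaluate).First();
                islands[(i + 1) % islands.Length][0] = best;
            }
    }
    return islands;
}
```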

We also worked on artificial neural networks (ANNs) in the "machine learning" course. We made handwriting and captcha recognition software that we trained using backpropagation (starting from a known correct result to adjust the network), but a genetic algorithm could also be used to train an ANN.

I recently tried a lot of graphics software, as I wasn't fully satisfied with my Blender => Paint.NET workflow. I tried, among others, 3ds Max, ZBrush, Photoshop and, most importantly, Substance Painter, which I won't give up anymore. However, I have mixed feelings about ZBrush and 3ds Max, and I think I'll stay with Blender for now: ZBrush doesn't let me do low poly efficiently, and in 3ds Max I don't like the UV mapping tools, and none of the modeling tools it has that Blender lacks is really helpful.
So I'll keep modeling low poly and unwrapping in Blender, then import the mesh into Substance Painter to generate the textures, including normals. Baking from a high poly could produce nice results, but it consumes too much time, and it's not that easy to do "for a dev".
Here is a car modeled in 3ds Max:
Photo QQ20150110181721
And here is the result once painted in Substance Painter:
car
The windows aren't really straight; it would have been better with a drawing tablet and some more patience, but the possibilities are there.
To improve this workflow and increase the level of detail, I might only paint a mask as well as the global normal map in Substance Painter, and then import that into Substance Designer to apply materials.

I developed a portal (in the Portal sense) for my final project.
Here is the result:

The rendering is done using the mirror shader from the wiki (itself built from the water reflection package in the Standard Assets), with a modified projection matrix.
The character still has to be moved through:
the trigger hitbox is actually quite big, and a continuous test waits for the precise frame where the character's position relative to the portal becomes negative along the forward axis before teleporting it; this makes the transition look seamless and keeps the physics working without a hitch even at high speed.
Finally, rotating the player and its inertia vector is just a matter of quaternion multiplications.
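Here is a minimal sketch of that teleport step in Unity C# (the class and variable names are mine, not the project's code):

```csharp
using UnityEngine;

// Sketch of the teleport step: a large trigger collider on the portal
// watches the player, and on the exact frame the player's position goes
// negative along the portal's forward axis, position, rotation and
// velocity are remapped to the destination via quaternion multiplications.
public class Portal : MonoBehaviour
{
    public Transform destination; // the linked portal

    void OnTriggerStay(Collider other)
    {
        var body = other.attachedRigidbody;
        if (body == null) return;

        // Player position in this portal's local frame.
        Vector3 local = transform.InverseTransformPoint(body.position);
        if (local.z >= 0f) return; // not through yet: wait for the exact frame

        // Rotation mapping this portal's frame onto the destination's, with
        // a 180° turn so the player comes out facing away from the exit.
        Quaternion flip = Quaternion.AngleAxis(180f, Vector3.up);
        Quaternion map = destination.rotation * flip * Quaternion.Inverse(transform.rotation);

        // After the flip, local.z becomes positive: the player reappears in
        // front of the destination portal, so it is not re-teleported.
        body.position = destination.TransformPoint(flip * local);
        body.rotation = map * body.rotation;
        body.velocity = map * body.velocity; // the inertia vector gets the same rotation
    }
}
```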

As a reminder, the Cursor Machina beta is now available on our website http://cursormachina.fr/ and, even though there is still a lot to do, the project will be finished soon.

I've been lucky enough to work on an Oculus Rift application during my internship for a few weeks now, and I want to share my experience.
First, let's talk about what everyone else says: the DK2's resolution is better, but there is still a long way to go before it can be considered acceptable (I found this video showing what we actually see inside the first version: https://www.youtube.com/watch?v=eUbFdUzUYmA). The field of view is also a problem, but I don't see it evolving any time soon, and widening it would make the resolution problem worse. Finally, the nausea: I don't know why everyone thinks they won't be affected (I did too). Maybe the marketing team did a great job, insisting heavily on "some people…" in every sentence mentioning it, and announcing every so often that a magical update had fixed it. In practice, everyone gets sick: out of about twenty people, every single one did… but there are differences in resilience; some only need 10 seconds, while others can stand it for 10 minutes on a first try. I've heard that it slowly gets better with training, and it seems true, since at first I could bear 3 minutes and now 8 minutes before feeling anything; but mostly I learned to avoid doing anything that could make me sick, and now I barely dare to move my head when I wear it.

oculus rift

Now, from a developer's point of view, there are other issues: cinematics can't be precomputed, everything has to be fully real-time. We did a lot of research to work around that, but nothing yielded acceptable results.
The framerate can't drop below 75. At a time when more and more console games are satisfied with 30 fps, this requirement of 75 fps is quite disturbing, especially since stereoscopic rendering needs 2 images, so the software has to render the scene 150 times per second. Add to that a rather large resolution (on paper at least), which promises to be increased again for the official release and beyond, until we reach the acceptable minimum of 4K! So huge graphics concessions are necessary. And when I say "requirement", it's not an overstatement: any drop, even to just 70 fps for a single frame, will slash a considerable part of your precious minutes (/seconds) before the sickness catches up.

I regret not having a Razer Hydra or a STEM system (http://sixense.com/wireless)… being in a virtual world "without hands" is very disturbing. Using a mouse and keyboard is not intuitive at all and breaks immersion; virtual reality visualization tools may even have to wait for better interfaces to really shine.

I realize that this is a rather negative picture. I am still convinced that this is the future, or at least a new emerging branch that will coexist with other media, but it will take at least 5 to 10 years of hardware improvements, and more thought about the role of this technology.

I don't have much time to make games at the moment, so I went back to 3D modeling. I started with something classic: having a huge number of photos and preexisting models available is very helpful, and it seems much harder to come up with a science fiction weapon from scratch. So here is an AK-47 WASR-10 (k4l4ch):

AK47_screenshoot

It required around 12 hours: 6 for modeling, 3 for UV unwrapping and 3 for texturing. I don't have any point of comparison to know whether that's quick, but I suspect I'm still extremely slow. Compared to TurboSquid models in this polygon range, it seems equivalent to the best free ones, maybe even to the "serious" ones up to $10.

Photo QQ20141031211404 AK47_screenshoot2

I decided to share this model, here is the link; the package includes .blend, .fbx and textures (diffuse, specular and UV layout). As for the license, do whatever you want 🙂

Lately I have been learning Unreal Engine 4. You can program in C++, but Epic Games has also developed a node-based interface for programming. I started my first tests in C++, the node interface being one more thing to learn, but on this demo I ended up with a 100% Blueprint project (the name of that interface). I made this choice because programming in Blueprints frees you from compilation times (which are surprisingly long when compiling a UE4 project) and gives perfect integration with the editor. The result is very short prototyping times, provided you know how to use it efficiently.

Here is what it looks like; this is the script of my AI:

blueprint AI

Of course it executes a bit more slowly than C++ code, but since the Blueprint and C++ APIs are identical, it is very easy to rewrite the parts with performance problems in C++. In my opinion, a good practice is to identify those parts at design time (voxel engine, elaborate AI, etc.), program them in C++, then expose them to the Blueprints where the gameplay is programmed.

Here is a small video of my test project:

update: youku mirror http://v.youku.com/v_show/id_XNzgyOTY4NDYw.html

Multiplayer is functional, even if the other players are bots. As usual, if anyone thinks the sources could be useful to them, I'll be happy to share them.