TECHNICAL CHALLENGES
CHARLIE BEHAN
3D Modeller
Modelling for an Augmented Reality environment meant I had to keep two important things in mind throughout the process.
Frame Rate
The first was understanding the importance of frame rate in our game and keeping it as high as possible. The tech we are using is already so demanding that we don't have much room for fancy graphics. The original model of the ACC had a somewhat lengthy load time on the HoloLens, so my first goal was to lower the vert count on all components while keeping as much detail as possible. I was also able to add many details through texture maps, eliminating many high-vert areas on the model. Another step I took to improve performance was to remove the need for lighting in our scene; I did this by baking all shading into the texture maps.
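As a rough illustration of the last step (the component name here is my own invention; Unlit/Texture is one of Unity's built-in shaders), once the shading is baked into the textures, the model's materials can be switched to an unlit shader so the HoloLens never has to compute lighting at all:

    using UnityEngine;

    // Hypothetical helper: switch every renderer under this object to an
    // unlit shader so the shading baked into the texture maps replaces
    // real-time lighting entirely.
    public class UseBakedShading : MonoBehaviour
    {
        void Start()
        {
            Shader unlit = Shader.Find("Unlit/Texture");
            foreach (Renderer r in GetComponentsInChildren<Renderer>())
                r.material.shader = unlit;
        }
    }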
Graphics
The second challenge was creating assets that are visually pleasing when viewed inside the HoloLens. How graphics look on a monitor can be very different from how they look inside the HoloLens. This is due to an effect similar to that of projectors: an additive display cannot produce the colour black using light. To prevent this issue I added subtle colour tints to many areas within the textures. Some suggest simply lightening the colours, but I found that gave the models a bland look. I mostly used the three additive primary colours (red, green, blue) when adding tints, as they showed the most improvement.
CORMAC WHITE
Team Lead/Scrum Master
Voice Controls - Lexicon Issues
One of the biggest issues I've encountered in this project so far has been pronunciation issues with voice commands. Certain words are not picked up by the GrammarRecognizer while other, similar words and phrases work perfectly fine. One thing I have found that helps is defining the phonetic spelling of certain words in the srgs.xml file. An example of this is trying to access the ABC component: abbreviations do not work very well, and even defining the phrase to be recognized as "a b c" does not help.
To solve this particular example I changed the phrase to be recognized to "ay bee sea". This phonetically matches what a person would say and can be recognized. There are many other examples to be solved, and it is a slow process, as testing the commands requires a new build being pushed to the HoloLens or the emulator.
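As a sketch of the setup (the file path and handler name are my own; the grammar file itself holds the phonetic workarounds), the srgs.xml grammar is loaded into Unity's GrammarRecognizer like so:

    using UnityEngine;
    using UnityEngine.Windows.Speech;

    public class VoiceCommandListener : MonoBehaviour
    {
        private GrammarRecognizer recognizer;

        void Start()
        {
            // The srgs.xml grammar lists the phrases, including phonetic
            // spellings such as <item>ay bee sea</item> in place of "ABC".
            recognizer = new GrammarRecognizer(
                Application.streamingAssetsPath + "/srgs.xml");
            recognizer.OnPhraseRecognized += OnPhraseRecognized;
            recognizer.Start();
        }

        private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
        {
            Debug.Log("Heard: " + args.text);
        }
    }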
PADDY REID
Character Modeller/Animator
Developing the AI user guide for our game is not as easy as throwing together a 3D model in a piece of software. There are various steps involved in ensuring that our character is represented properly in augmented reality.
Shaders
One of these was working with Unity shaders, which needed to be done to ensure that the hologram our character projects is represented properly in the HoloLens. In the early stages of this project I attempted to represent the hologram using a spotlight; unfortunately, the spotlight was not working correctly in the HoloLens. Unity shaders are very complex and were also very new to me. Written in HLSL, they work very differently from any other programming language I was used to. I was able to source a Unity shader online and make some adjustments to it in order to create a working hologram in augmented reality.
This idea was scrapped at the last minute because the shader was affecting the performance of the scene.
Animation
Another challenge was correctly setting up G.U.S for animation. To set up our AI assistant to be correctly animated in 3ds Max, I was required to CAT-rig the model and apply a morpher to his eyes and eyebrows. Using the morpher to shape G.U.S's expressions proved difficult to set up: when trying to create new shapes, another would often be overwritten, resulting in setbacks. I was able to overcome this obstacle by retracing my steps and ensuring that I had enough object copies to morph. Animating the morphs also caused some trouble and required several keyframes.
Character Presentation in Hololens
I found myself losing a lot of time adjusting G.U.S, attempting to make him look good in the augmented environment. There are things that need to be understood about the HoloLens: something that looks great in Unity will not look the same in augmented reality. There were two team members on Flipside who took responsibility for the Microsoft HoloLens; unfortunately, I was not one of them. If I had been given some personal time with the HoloLens, I feel I could have tested G.U.S's appearance on a regular basis and not every other week.
KIERAN KEEGAN
Sound Design/Testing
Integrating AWS Polly
To implement Amazon Polly in our game, I found a workaround import that a Unity dev put on GitHub, as there is currently no official AWS Polly support for Unity. I then set up Amazon Cognito user pools on AWS. At this stage I hit a bug where the generated audio file was corrupted; the cause was the AWS credentials not having access to Polly.
To fix this I used the AWS IAM service to set up a role that had permission to use AWS Polly. In order to read new dialogue in from a file, I wrote a C# script. The script reads in strings from a .csv file, which associates two strings together: I stored the first as the file name and the second as the string to be generated. If the first value was a '!', the second value is the folder to save the generated files to.
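A minimal sketch of that script's reading loop (class, method, and folder names are my own; the Polly call itself is left as a stub):

    using System.IO;

    public class DialogueImporter
    {
        public void Import(string csvPath)
        {
            string folder = "Dialogue"; // assumed default output folder

            foreach (string line in File.ReadAllLines(csvPath))
            {
                // Each row associates two strings: split on the first comma
                string[] parts = line.Split(new[] { ',' }, 2);
                if (parts.Length < 2) continue;

                if (parts[0].Trim() == "!")
                    folder = parts[1].Trim();  // '!' rows switch the save folder
                else
                    Synthesize(parts[0].Trim(), parts[1].Trim(), folder);
            }
        }

        private void Synthesize(string fileName, string text, string folder)
        {
            // Stub: send 'text' to AWS Polly and save the returned audio
            // stream as Audio/<folder>/<fileName>.
        }
    }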
The next challenge was accessing these lines from the rest of the project. To solve this I wrote a C# sound manager class. The sound manager checks all the folders stored in the Audio folder and loads the files in each folder into its own dictionary; these dictionaries are in turn stored in a root dictionary. The sound manager has a PlayClip function that takes a dictionary and a sound name, and uses the sound name as a key in the specified dictionary. The function also takes the audio source it will play the sound file from, so it can be used from anywhere in the project to play any audio clip from any audio source.
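A sketch of how that manager could look (the folder names and the Resources layout here are my assumptions):

    using System.Collections.Generic;
    using UnityEngine;

    public class SoundManager : MonoBehaviour
    {
        // Folders under Resources/Audio; listed up front for this sketch
        [SerializeField] private string[] folders = { "GUS", "ACC", "UI" };

        // Root dictionary: folder name -> (clip name -> clip)
        private Dictionary<string, Dictionary<string, AudioClip>> library =
            new Dictionary<string, Dictionary<string, AudioClip>>();

        void Awake()
        {
            // Each folder gets its own dictionary inside the root dictionary
            foreach (string folder in folders)
            {
                var table = new Dictionary<string, AudioClip>();
                foreach (AudioClip clip in Resources.LoadAll<AudioClip>("Audio/" + folder))
                    table[clip.name] = clip;
                library[folder] = table;
            }
        }

        // Looks the clip up by dictionary and sound name, then plays it
        // through whichever audio source the caller passes in.
        public void PlayClip(string folder, string soundName, AudioSource source)
        {
            Dictionary<string, AudioClip> table;
            AudioClip clip;
            if (library.TryGetValue(folder, out table) &&
                table.TryGetValue(soundName, out clip))
                source.PlayOneShot(clip);
        }
    }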
Sound Design in Augmented Reality
My other main challenge in the development of ABB Training Day was sound design in augmented reality. Like the rest of the team, I was working with augmented reality on this scale for the first time.
As the game is a simulator, the goal was to make the experience as believable as possible. This meant that all the sounds needed to be as close as possible to real life, something which is tough to judge for anyone other than an ABB engineer. I took a trip to ABB where an engineer showed me an ACC (the cabinet that is central to our game) in operation, and I was able to record reference sounds to work from. There is a lot more creative freedom when designing a sound for GUS than when designing the sound of an ABC unit (which handles communication between the ACC and actuators on machines) being removed from an ACC. If a sound did not match what engineers hear when they carry out maintenance on real ACCs, it would break immersion.
ARNAS CIBULSKIS
Main Programmer
Being the lead programmer, I worked on the majority of the functionality in the game and had to come up with multiple systems that work together to deliver a simple and usable experience. Two of these systems were the Input System and the Event System, which were created to solve two unique and important problems: how the user interacts with the game (a key issue, as the entire game is made up of user interaction), and how the game knows the progress of the user.
Input System
The input system was at first only a way of allowing the user to interact with components in-game using drag-and-drop functionality, achieved by raycasting from the camera. Problems became evident because a lot of functionality had to hang off the HoloLens's single tap gesture. For example, to take out a wire it was necessary to tap precisely on the wire, then tap again to drop it. If the user made a mistake and their tap was slightly off, the system was unforgiving, as a different component would be selected.
The revised input system included all the functionality of the old system, but with a more streamlined and user-friendly approach: the player interacts with components by selecting them, which brings up a menu with all the options for interaction, such as taking out or putting in wires, setting a binary address, and so on.
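A rough sketch of the selection side of that system (the names are mine, and the menu itself is reduced to a stub):

    using UnityEngine;

    public class ComponentSelector : MonoBehaviour
    {
        [SerializeField] private float maxDistance = 5f;
        private Transform focused;

        void Update()
        {
            // Raycast from the camera along the user's gaze each frame
            Ray gaze = new Ray(Camera.main.transform.position,
                               Camera.main.transform.forward);
            RaycastHit hit;
            focused = Physics.Raycast(gaze, out hit, maxDistance)
                ? hit.transform : null;
        }

        // Wired to the HoloLens tap gesture: instead of grabbing the
        // component immediately, a single tap selects it and opens the
        // interaction menu (take out/put in wires, set binary address, ...)
        public void OnTap()
        {
            if (focused != null)
                OpenInteractionMenu(focused);
        }

        private void OpenInteractionMenu(Transform component)
        {
            Debug.Log("Menu for: " + component.name); // stub
        }
    }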
Event System
The event system was created out of the necessity of being able to track exactly what the player is doing. It receives all of the player's actions from the input system: for example, which component the user was interacting with, whether they were attaching or detaching it, and so on.
All this information is then processed, allowing the game to know exactly what stage of a puzzle the player is at and, by extension, whether they are making correct or incorrect steps (incorrect steps lead to the player receiving help from the AI assistant, GUS). This is achieved by having two lists of steps, with each step having a varying degree of importance. One list contains the disassembly and fixing steps for a particular puzzle, and the second contains the reassembly steps. As the player performs correct steps they are removed from the list; once the fixing list is empty the system switches to the reassembly list, and when that is also empty the puzzle is complete.
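A simplified sketch of the two-list idea (assuming steps must happen in strict order; the real system also weights steps by importance):

    using System.Collections.Generic;

    public class PuzzleTracker
    {
        private readonly List<string> fixingSteps;      // disassembly + fix
        private readonly List<string> reassemblySteps;  // putting it back
        private bool reassembling;

        public PuzzleTracker(List<string> fixing, List<string> reassembly)
        {
            fixingSteps = fixing;
            reassemblySteps = reassembly;
        }

        // Called by the event system for every action it receives.
        // Returns false for an incorrect step, which is the cue for GUS
        // to offer the player help.
        public bool ReportAction(string action)
        {
            List<string> current = reassembling ? reassemblySteps : fixingSteps;

            if (current.Count > 0 && current[0] == action)
            {
                current.RemoveAt(0);
                if (!reassembling && fixingSteps.Count == 0)
                    reassembling = true;   // fixing done: switch lists
                return true;
            }
            return false;
        }

        public bool IsComplete
        {
            get { return reassembling && reassemblySteps.Count == 0; }
        }
    }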
CIARAN MOONEY
UI Design/Testing
Creating UI for Augmented Reality
Augmented reality proves challenging to develop solid UI for because of its unique space. It is such a new, emerging area that few specific studies have been done on it. The first issue is that the UI is no longer in a standard space on the screen; it is now in a space in the real world, with the user as the controller. Because the user is the controller, UI elements have to be bigger and must be designed to cope with far less accuracy than a standard mouse selection. If an element is too big it can block the user's view; too small, and they may have trouble keeping a dead-centre focus to select it over neighbouring elements.
The second problem in designing UI for augmented reality is that the UI has to be designed with world space in mind. It cannot rely on a HUD, as that is a big problem for immersion in AR. From researching UI problems in AR, I have also found how important it is not to distract the user. UI should complement the scene as much as possible: if it sits on the user's view plane it should be in their peripheral vision, but preferably it should live within the world itself. This is because AR as a whole is designed to add to the world around the user, not just overlay it.
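One small piece of that world-space approach can be sketched in code (a common Unity pattern, not necessarily the exact one we shipped): keeping a world-anchored panel turned towards the user so it stays readable without ever being pinned to the view like a HUD:

    using UnityEngine;

    // Keeps a world-space UI panel facing the user so it remains readable
    // while still living in the world rather than on the view plane.
    public class FaceUser : MonoBehaviour
    {
        void LateUpdate()
        {
            Vector3 toPanel = transform.position - Camera.main.transform.position;
            transform.rotation = Quaternion.LookRotation(toPanel);
        }
    }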