This week, we developed the final video and added some “finishing touches” to the system.
This week was devoted to updating the current system and preparing slides for the final presentation.
With respect to the problems users reported during the evaluation of the system, we had to make changes to the display, the font size of the questions, and the tracking efficiency.
The general display of the system was re-implemented to enlarge the view and increase the font size, as the questions presented were previously too small to read.
In “Play Mode”, the display is divided into 3 sections to distinguish where the tracked player currently stands.
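The section logic can be sketched as follows. This is a minimal illustration in plain Java rather than the project's actual Processing code; `zoneForX` is a hypothetical helper, assuming the tracker reports a player's head position as an x pixel coordinate on the display:

```java
public class PlayZones {
    // Hypothetical helper: maps a tracked head's x pixel coordinate
    // to one of the three answer sections of the display.
    public static int zoneForX(int x, int screenWidth) {
        int sectionWidth = screenWidth / 3;
        int zone = x / sectionWidth;   // 0 = left, 1 = middle, 2 = right
        return Math.min(zone, 2);      // clamp the right edge into section 2
    }
}
```

Clamping at the right edge keeps a player standing exactly on the border of the display inside the last section instead of producing an out-of-range index.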
Sounds to enhance user experience during the interaction were also integrated into the system.
For the final presentation, an enhanced and functional system has been developed; it will be highlighted in the video at the end of this project.
After the first prototype testing sessions, we decided to make substantial updates to the implementation. The number of simultaneous players was reduced from 4 to 2, as recognizing multiple players still poses a problem when players leave the camera's field of view.
Sounds for interactive feedback were also added to the system to improve the quality of experience and game flow.
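The two-player cap can be sketched as a small registry that ignores additional players until a slot frees up. The method names mirror SimpleOpenNI's `onNewUser`/`onLostUser` callbacks, but the registry itself is a hypothetical plain-Java sketch, not the project's actual code:

```java
import java.util.LinkedHashSet;
import java.util.Set;

public class PlayerRegistry {
    // We cap simultaneous players at 2, since re-recognition after a
    // player leaves the camera view is unreliable.
    public static final int MAX_PLAYERS = 2;
    private final Set<Integer> active = new LinkedHashSet<>();

    // Called when the tracker reports a new user; extra players are ignored.
    public boolean onNewUser(int userId) {
        if (active.size() >= MAX_PLAYERS) return false;
        return active.add(userId);
    }

    // Called when a tracked user is lost; frees a slot for re-entry.
    public void onLostUser(int userId) {
        active.remove(userId);
    }

    public int count() { return active.size(); }
}
```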
The main task this week was to test the prototype and improve the implementation.
Except for performance problems, which we have identified as hardware-based (the type of computer), the prototype currently runs well.
The next stage will involve setting sounds for the various game stages and running more tests.
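Wiring sounds to game stages could look like the mapping below. This is a hypothetical sketch in plain Java; the stage names and file names are placeholders, not the project's actual assets, and the real playback would go through a Processing audio library:

```java
import java.util.EnumMap;
import java.util.Map;

public class StageSounds {
    public enum Stage { START, QUESTION, CORRECT, WRONG, GAME_OVER }

    // Placeholder mapping of game stages to sound files.
    private static final Map<Stage, String> SOUNDS = new EnumMap<>(Stage.class);
    static {
        SOUNDS.put(Stage.START, "start.wav");
        SOUNDS.put(Stage.QUESTION, "question.wav");
        SOUNDS.put(Stage.CORRECT, "correct.wav");
        SOUNDS.put(Stage.WRONG, "wrong.wav");
        SOUNDS.put(Stage.GAME_OVER, "game_over.wav");
    }

    // Look up the sound file for a given stage.
    public static String soundFor(Stage s) { return SOUNDS.get(s); }
}
```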
The focus this week was on the midterm presentation and evaluating the first prototype.
So far, the prototype is not fully functional. We are experiencing some setbacks with respect to the performance of the Kinect sensor with Processing. The processing time required to recognize gestures is relatively high, making it difficult to run all necessary checks on the system, with system halts and reboots occurring constantly.
Nonetheless, we expect to have the system fully up and functional in the upcoming days.
This week was devoted to the development of a functioning prototype.
The project was divided into 3 sub-groups, with each group performing specific tasks which will then be incorporated in the last phase of the project to produce the end product.
This sub-group is in charge of the graphic designs required before, during and after the game, e.g. designing the start menu screen with the necessary instructions and information on how to start, play, pause or stop the game, with corresponding background images.
This group will also be in charge of making the final video for the demo.
The task of this sub-group is to implement the input and output modalities for the game. Gesture recognition, a timer and the other controls required for the interactions will also be implemented and incorporated with the other modules in the final phase.
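The timer mentioned above can be sketched as a simple countdown driven by elapsed milliseconds (in Processing this would typically be fed from `millis()`). This is a hypothetical plain-Java sketch; the duration and method names are illustrative:

```java
public class AnswerTimer {
    // Minimal countdown: players must stand in an answer section
    // before the timer expires. All times are in milliseconds.
    private final long durationMs;
    private long startMs;

    public AnswerTimer(long durationMs) { this.durationMs = durationMs; }

    // Record the moment the countdown starts.
    public void start(long nowMs) { startMs = nowMs; }

    // Milliseconds left, never negative.
    public long remaining(long nowMs) {
        return Math.max(0, durationMs - (nowMs - startMs));
    }

    public boolean expired(long nowMs) { return remaining(nowMs) == 0; }
}
```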
Here, the actual run-down of the game is handled: developing the questions and answers for the game, determining which reactions (sounds etc.) to produce when an answer is right or wrong, the scoring method, and scrolling through the game from the first to the last question, producing the final score at the end.
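The question-and-scoring flow can be sketched as below. The questions, answer options and correct indices are invented placeholders, not the project's actual quiz content; the three answer indices correspond to the three display sections:

```java
public class Quiz {
    // Placeholder question set: each question has three answer options
    // (matching the three display sections); CORRECT holds the index
    // of the right section per question.
    private static final String[] QUESTIONS = {
        "How many legs does a spider have?",
        "Which animal can fly?"
    };
    private static final int[] CORRECT = { 2, 0 }; // placeholder answers

    private int current = 0;
    private int score = 0;

    // Register the section the player chose; returns true while
    // more questions remain.
    public boolean answer(int chosenSection) {
        if (chosenSection == CORRECT[current]) score++;
        current++;
        return current < QUESTIONS.length;
    }

    public int finalScore() { return score; }
}
```

Scrolling from the first to the last question is then just repeated calls to `answer` until it returns false, at which point `finalScore` is shown.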
The first prototype will be tested in the upcoming days.
judge: what are your main functions?
ALICE: What if I asked for you main functions?
judge: i’d say my main functions are reading and interpreting text. What’s yours?
ALICE: Where did you get your main functions are reading and interpreting text? ” Where did you get your main functions are reading and interpreting text?”? What is mine?
judge: What were you programmed to do?
ALICE: I was minding my own business.
judge: what is your business?
ALICE: I am in the chat robot business.
judge: what are chat robots?
ALICE: A chat robot is a program that attempts to simulate the conversation or “chat” of a human being. The Chat robot “Eliza” was a well-known early attempt at creating programs that could at least temporarily fool a real human being into thinking they were talking to another person. it is a kind of artificial intelligence.
judge: How many chats do you make in a day?
ALICE: Right now there are 1 billion clients on line.
judge: Can you chat simultaneously with 1 million clients?
ALICE: All I ever do is chat.
judge: What kind of requests do the clients have?
ALICE: I can’t decide. You choose.
judge: Do you have information of current movies?
ALICE: I have nothing but information.
judge: when was the first Die Hard movie released?
judge: That’s not true
ALICE: Oh yeah?
ALICE: ” Oh yeah?”? yes?
Impressions are given after the chat with Fake Captain Kirk
Chat with fake Captain Kirk.
Analyzing the performance of the system is best done from the specific user's perspective and requirements.
The AI is quite good at recognizing semantics and making deductions, but the output is often not logical. This may be context-dependent and could be better if the discussion were between Kirk and a fan of Kirk.
Impressively, Kirk behaves like a human: when the mouse pointer moves over his face, his eyes track its movements.
The idea is pretty innovative and could perform better in years to come. The discussion is rather random and funny.
This has been a busy week for us. It started with the choice of an appropriate design and layout for the interaction display.
The design above was chosen: it blends in the silhouettes of the players and tracks their heads, so that tracked players are displayed and their positions identified at the end of the timer countdown.
Furthermore, the select menu was designed for implementation.
It has to provide options for starting, pausing and continuing the game.
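The start/pause/continue behaviour of the select menu can be sketched as a tiny state machine. This is a hypothetical plain-Java sketch of the menu logic, not the project's actual implementation:

```java
public class GameMenu {
    public enum State { MENU, PLAYING, PAUSED }

    private State state = State.MENU;

    // Transitions are only valid from the matching state,
    // so e.g. pausing from the menu does nothing.
    public void start()  { if (state == State.MENU)    state = State.PLAYING; }
    public void pause()  { if (state == State.PLAYING) state = State.PAUSED; }
    public void resume() { if (state == State.PAUSED)  state = State.PLAYING; }

    public State state() { return state; }
}
```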
Possible questions and answers for the quiz were developed, and the first implementation of the game is in progress.
At this stage we are already experiencing some drawbacks in obtaining the desired results for the Kinect interactions. Many SimpleOpenNI functions for Kinect interaction are obsolete, and the processing time to acquire users is long, which increases the time it takes to actually start playing the game.
Nonetheless, we are working hard to make the implementation as workable as possible.
The screenshot represents the final stage of the brainstorming process, resulting in the implementation of an interactive game for children aged 5 and above. Slides from the presentation are attached here: PRESENTATION_SLIDES
In the next days and weeks, the ideas will take concrete shape with
i- implementation of display/ interactive game Frame (Grid with 3 sections for 1, 2 or 3)
ii- implementation of body tracking system
iii- development of interactive questions and answers
iv- sound capture
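The planned pieces above can be tied together as a simple phase progression: acquire the players via body tracking, show a question, run the countdown, and score. The phase names and transitions below are a hypothetical plain-Java sketch, not the project's actual architecture:

```java
public class GameLoop {
    public enum Phase { ACQUIRE_PLAYERS, SHOW_QUESTION, COUNTDOWN, SCORE }

    // Advance to the next phase; after the countdown we either show
    // the next question or, if it was the last one, the final score.
    public static Phase next(Phase p, boolean lastQuestion) {
        switch (p) {
            case ACQUIRE_PLAYERS: return Phase.SHOW_QUESTION;
            case SHOW_QUESTION:   return Phase.COUNTDOWN;
            case COUNTDOWN:       return lastQuestion ? Phase.SCORE
                                                      : Phase.SHOW_QUESTION;
            default:              return Phase.SCORE;
        }
    }
}
```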
Subsequent developments will follow as soon as the first processes are completed.