Fire Simulator

This is a fire simulator I worked on a few years ago. The main aim was to create a realistic-looking fire by using fluid dynamics equations to control the behaviour of the particles.

The Navier-Stokes equations are the standard equations used to describe fluid dynamics. However, these equations include variables such as viscosity.

Navier-Stokes

The formula for the Navier-Stokes equation
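The formula image from the original post isn't reproduced here; for reference, the incompressible momentum equation is commonly written as

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \nu\nabla^{2}\mathbf{u} + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0$$

where the $\nu\nabla^{2}\mathbf{u}$ term is the viscosity contribution mentioned above.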

Within the context of a fire these variables are graphically unnoticeable to a user, yet they could prove costly to compute. For these reasons I used the simplified Euler fluid equations.
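Dropping the viscous term gives the incompressible Euler equations, which are the simplification referred to here:

$$\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u} = -\frac{1}{\rho}\nabla p + \mathbf{f}, \qquad \nabla\cdot\mathbf{u} = 0$$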

The other main characteristic needed to allow for a fire is the temperature. When a material is heated it releases a gas once a threshold temperature has been reached. It is this gas, mixed with an oxidiser, that becomes flammable. The fire starts off at the maximum temperature and then cools down as it moves through the atmosphere (unless another fuel source is found). The rate of cooling can be seen as

Rate of Cooling

The greater the difference between the fire temperature and the ambient temperature, the faster the particles cool.
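The rate-of-cooling image from the original post isn't reproduced here, but from the comment in the code below the rate used is

$$\frac{dT}{dt} = -c\left(\frac{T - T_{ambient}}{T_{max} - T_{ambient}}\right)^{4}$$

where $c$ is a cooling constant that depends on the material (e.g. air), $T_{ambient}$ is the grid cell temperature and $T_{max}$ is the maximum temperature of the fuel.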

Within the simulation the heat from the particle is transferred to the grid, causing the grid to heat up and the rate of change from other particles to slow down. The grid itself also cools down by the same ratio. This allows an equilibrium to eventually occur as the grid cells cool down and delete themselves faster than the particles are able to heat them up.

The code provided changes the temperature of the particle. It also returns the temperature that has been removed from the particle, which is then passed to the grids to allow them to update.

float Particle::particleTemperature(const float Cooling, float ambientTemperature, float currentTemp, int i) {
    //cooling = -c((t - Tambient)/(Tmax - Tambient))^4
    //Tambient = temperature in the grid cell
    //Tmax = maximum temperature of the fuel
    //t = current temp
    //Cooling = cooling constant depending on the material, e.g. air
    //static float particleTemp = currentTemp;
    static float tMax; //static for the particle to allow the calculation to be performed correctly
    static float t;
    if(particlesToRender[i].initialTemp) //if the particle has been reset the temperature variables change
    {
        tMax = currentTemp; //set tMax to the maximum temp of the particle in the new grid
        t = currentTemp; //set t to the new maximum temp
        particlesToRender[i].initialTemp = false; //stop the variables from resetting
    }
    float newTemp = (t - ambientTemperature)/(tMax - ambientTemperature); //normalised temperature difference
    float powerOfTemp = newTemp*newTemp*newTemp*newTemp; //raise it to the power of 4
    float tempCooling = -Cooling*powerOfTemp; //apply the cooling constant (the formula above)
    float finalTemp = t + tempCooling; //final temp = temp with the cooling taken off
    particlesToRender[i].temperature = finalTemp; //assign the temperature to the particle
    t = finalTemp; //allow t to change for each iteration
    return tempCooling;
}

The simulation is given a starting temperature. From this the particles move outwards and generate a grid. The grid contains the necessary information about its heat, pressure and density. The particles then lose an amount of heat into the grid; this warms the grid up and cools the particles down. Each grid is also aware of its surroundings. If there is a grid to the side of it in any of the 3 dimensions it uses this information to tell the particles which way to travel in the x and z directions (as high pressure moves to lower pressure). As the particles move between grids the pressure changes, which causes them to move side to side. The colour of the particles is also governed by the temperature of the particle; this allows the particles to become different colours and create a better looking result.

The grids are created using a custom-made linked list that allows grids to be quickly added and removed (if there are no particles in the grid). Additionally, each grid also contains information about the grids around it. This makes it quick for particles to move and travel in the right direction. Each grid requires its own unique information to correctly affect its particles, which means that no two grids can overlap. Since the starting position of the particles is random this may not always occur by default, so the grids are resized to fit within the simulation correctly. The grids being non-uniform allows the movement of particles to appear more random and so achieves a higher level of realism.
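The exact definitions aren't shown in the post, but reconstructed from the member names used in the code below, a list element looks roughly like this (a sketch; in the real project these live inside the MacGrid class):

struct node; // forward declaration, neighbours point back into the list

struct MacGridStruct {
    int minX, minY, minZ, maxX, maxY, maxZ;  // bounding box of the grid cell
    int gridNumber;                          // id used for numbering/debugging
    float absoluteTemperature;               // heat stored in the cell
    float pressure;
    int numberOfParticles;
    bool exist;
    node *leftGrid, *rightGrid, *frontGrid, *backGrid; // neighbouring cells (NULL = ambient)
};

struct node {
    MacGridStruct test; // the cell stored in this list element
    node *next;         // next cell in the custom linked list
};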

A particle is created and checks its position to see if it is already within a grid. If it is not in a grid it then creates one, using the create grid method below.

void MacGrid::generateGrid(Particle &position, int i) {
    static int gridNum = 1;
    gridList = new node; //declare new node
    gridList->test.minX = (int)position.particlesToRender[i].x-5; //create grid elements to the initial size
    gridList->test.minY = (int)position.particlesToRender[i].y-5;
    gridList->test.minZ = (int)position.particlesToRender[i].z-5;
    gridList->test.maxX = (int)position.particlesToRender[i].x+5;
    gridList->test.maxY = (int)position.particlesToRender[i].y+5;
    gridList->test.maxZ = (int)position.particlesToRender[i].z+5;
    gridList->test.gridNumber = gridNum; //number the grid
    gridList->test.absoluteTemperature = 20; //set temperature to the ambient temperature
    gridList->test.pressure = 1; //set pressure
    gridList->test.numberOfParticles = 0; //particles initially 0, updated in the grid
    gridList->test.exist = true; //set to exist
    gridList->test.backGrid = NULL; //set all neighbour pointers to null
    gridList->test.rightGrid = NULL;
    gridList->test.leftGrid = NULL;
    gridList->test.frontGrid = NULL;
    gridList->next = NULL;
    gridNum++; //increase gridNum for numbering
    totalNumberOfGrids++; //increase total grids available to allow for accurate deleting
    nullPointerUpdate(); //update null (ambient) pointers to their defaults
    if(startPointer == NULL)
    {
        startPointer = gridList; //first element
    }
    else
    {
        gridList2 = startPointer;
        while(gridList2->next != NULL)
        {
            gridList2 = gridList2->next; //loop through to the end of the list
        }
        gridList2->next = gridList; //add the new element to the rear of the list
    }
    compareGrid(position,i); //compare grids to resize
}

If no grids are near the new grid then it is created at its default size. If the grid checks around it and intersects with other grids then it resizes itself.

void MacGrid::compareGrid(Particle &position, int i) {
    static node * Localtemp = new node;
    static node * potentialGrid = new node;
    Localtemp = startPointer; //begin iteration, Localtemp walks the list of premade grids
    potentialGrid = new node; //declare new node
    potentialGrid->test.minX = (int)position.particlesToRender[i].x-5; //create grid elements of the new node
    potentialGrid->test.minY = (int)position.particlesToRender[i].y-5;
    potentialGrid->test.minZ = (int)position.particlesToRender[i].z-5;
    potentialGrid->test.maxX = (int)position.particlesToRender[i].x+5;
    potentialGrid->test.maxY = (int)position.particlesToRender[i].y+5;
    potentialGrid->test.maxZ = (int)position.particlesToRender[i].z+5;
    potentialGrid->test.gridNumber = gridList->test.gridNumber;
    potentialGrid->test.pressure = 1;
    potentialGrid->test.numberOfParticles = 0;
    potentialGrid->test.exist = true;
    potentialGrid->test.leftGrid = NULL;
    potentialGrid->test.rightGrid = NULL;
    potentialGrid->test.frontGrid = NULL;
    potentialGrid->test.backGrid = NULL;
    potentialGrid->next = NULL;
    do
    {
        if(potentialGrid->test.maxX >= Localtemp->test.minX && potentialGrid->test.minX <= Localtemp->test.maxX) //if the boxes overlap in x
        {
            if(potentialGrid->test.maxY >= Localtemp->test.minY && potentialGrid->test.minY <= Localtemp->test.maxY) //if y overlaps
            {
                if(potentialGrid->test.maxZ >= Localtemp->test.minZ && potentialGrid->test.minZ <= Localtemp->test.maxZ) //if all overlap, resizing is needed
                {
                    float resizeGridminX = potentialGrid->test.minX - Localtemp->test.maxX; //calculate amount needed to resize the grid
                    float resizeGridMaxX = potentialGrid->test.maxX - Localtemp->test.minX;
                    float resizeGridminY = potentialGrid->test.minY - Localtemp->test.maxY;
                    float resizeGridmaxY = potentialGrid->test.maxY - Localtemp->test.minY;
                    float resizeGridminZ = potentialGrid->test.minZ - Localtemp->test.maxZ;
                    float resizeGridmaxZ = potentialGrid->test.maxZ - Localtemp->test.minZ;
                    std::cout<<"currently i am pointing to "<<gridList->test.leftGrid<<std::endl;
                    if(resizeGridminX >= -5 && resizeGridminX <= 5) //test to see if grid resizing is within the particle grid threshold
                    {
                        potentialGrid->test.minX -= resizeGridminX; //resize the grid as appropriate
                        gridList->test.minX = potentialGrid->test.minX; //update the current grid with the new values
                        Localtemp->test.rightGrid = potentialGrid; //update pointers to existing grids
                        gridList->test.leftGrid = Localtemp;
                    }
                    if(resizeGridMaxX <= 5 && resizeGridMaxX >= -5)
                    {
                        potentialGrid->test.maxX -= resizeGridMaxX; //resize the grid as appropriate
                        gridList->test.maxX = potentialGrid->test.maxX;
                        Localtemp->test.leftGrid = potentialGrid;
                        gridList->test.rightGrid = Localtemp;
                    }
                    if(resizeGridminY >= -5 && resizeGridminY <= 5)
                    {
                        potentialGrid->test.minY -= resizeGridminY;
                        gridList->test.minY = potentialGrid->test.minY;
                    }
                    if(resizeGridmaxY <= 5 && resizeGridmaxY >= -5)
                    {
                        potentialGrid->test.maxY -= resizeGridmaxY;
                        gridList->test.maxY = potentialGrid->test.maxY;
                    }
                    if(resizeGridminZ >= -5 && resizeGridminZ <= 5)
                    {
                        potentialGrid->test.minZ -= resizeGridminZ;
                        gridList->test.minZ = potentialGrid->test.minZ;
                        Localtemp->test.frontGrid = potentialGrid;
                        gridList->test.backGrid = Localtemp;
                    }
                    if(resizeGridmaxZ <= 5 && resizeGridmaxZ >= -5)
                    {
                        potentialGrid->test.maxZ -= resizeGridmaxZ;
                        gridList->test.maxZ = potentialGrid->test.maxZ;
                        Localtemp->test.backGrid = potentialGrid;
                        gridList->test.frontGrid = Localtemp;
                    }
                }
            }
        }
        if(Localtemp != NULL)
        {
            Localtemp = Localtemp->next; //loop through the list
        }
    }
    while(Localtemp != NULL);
}

Additionally, the grids also use comparison functions to calculate the differences in pressure, temperature and so on. If a pointer is connected to another grid it uses that grid as a comparison; if the pointer is null (not connected to a grid) then it uses a default ambient value to update.

An example of one of these functions is given below.

float MacGrid::calculateGridPressure(float numberOfparticles, float temperatureOfGrid) {
    Conversion absoluteTemperature;
    /***************************testing variables****************************
    //MacGrid::MacGridStruct pressure;
    //pressure.numberOfParticles = 50; //test number of particles
    //pressure.absoluteTemperature = 523.15; //test absolute temperature
    //pressure.pressure = (pressure.numberOfParticles*8.314472*pressure.absoluteTemperature)/1000; //p = nRT/V
    //std::cout<<"the pressure of the grid square is "<<pressure.pressure<<std::endl;
    ******************************************************************************/
    temperatureOfGrid = absoluteTemperature.centigradeToKelvin(temperatureOfGrid); //used to convert from centigrade to kelvin
    float pressure = (numberOfparticles*8.314472*temperatureOfGrid)/1000; //p = nRT/V
    return pressure+2.4373875;
}

The primary particles are OpenGL spheres; however, the simulation can be made to display points, which are smaller and faster to compute. The way the grids are created and deleted on the fly means that different fuel sources can be generated at different co-ordinates and the fire can spread. It also saves a lot of memory overhead, as not all the grids are required at once. A number of other functions are also included within the simulator, such as temperature conversion between Centigrade, Kelvin and Rankine.
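The conversion helper itself isn't shown in the post; the centigradeToKelvin call used above only needs the standard offsets, so a sketch of what the Conversion class might contain is (the Rankine method name is an assumption):

class Conversion {
public:
    float centigradeToKelvin(float centigrade)  { return centigrade + 273.15f; }          // K = C + 273.15
    float centigradeToRankine(float centigrade) { return (centigrade + 273.15f) * 1.8f; } // R = K * 9/5
};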

win32template 2012-07-02 13-27-55-57.avi from philip orrill on Vimeo.


Dragonhall builder

This application was written in C++ and OpenGL. It also uses the Bullet engine (an open source physics engine) to allow the building and its components to be constructed. The building itself is based on a real building in Norwich (Dragon Hall), and the order of construction and the details are all believed to be correct.

A team of four of us built this application. One person did the modelling of the hall; he also researched how it was constructed, what materials it was made out of and the history of the hall. A second member was responsible for the lighting. Another member did the interface and touch input, and I worked closely with him to manage the code, making it adaptable and easier to maintain. I was in charge of the construction of the building and how the components interacted.

The system is designed and tested on a touch screen, which is why the buttons are a certain size. It is also designed for the general public. The application is full screen so that no one can change any computer settings whilst the application is running. It is also designed to be extremely quick and easy to use.

To allow the system to run, a physics environment first has to be specified within the Bullet engine. As there are two tabs there are two separate environments (one for each tab). These environments are responsible for initialising the type of collision detection that will be used, what type of spatial decomposition will be applied and what type of algorithms will be used to check for collisions. Additionally, the gravity and the ground are set here.

The initialisation of the physics environment is shown below.

void InitilisePhysicsEnvironment::initiliseEnvironment()

{

broadphase = new btDbvtBroadphase();

collisionConfiguration = new btDefaultCollisionConfiguration();

dispatcher = new btCollisionDispatcher(collisionConfiguration);

solver = new btSequentialImpulseConstraintSolver;

dynamicsWorld = new btDiscreteDynamicsWorld(dispatcher,broadphase,solver,collisionConfiguration);

dynamicsWorld->setGravity(btVector3(0,-9.8,0));

groundShape = new btStaticPlaneShape(btVector3(0,1,0),1);

groundMotionState = new btDefaultMotionState(btTransform(btQuaternion(0,0,0,1),btVector3(0,-1,0)));

btRigidBody::btRigidBodyConstructionInfo

groundRigidBodyCI(0,groundMotionState,groundShape,btVector3(0,0,0));

groundRigidBody = new btRigidBody(groundRigidBodyCI);

groundRigidBody->setFriction(0.5);

groundRigidBody->setRestitution(0.5);

dynamicsWorld->addRigidBody(groundRigidBody);

}

The Bullet engine allows for both rigid bodies and soft bodies to be used within its API. As the building is made of wood and is designed to be stable it makes sense to use rigid body objects. Any model that is then used within the building has to have certain parameters allocated to it to allow the physics engine to update it correctly. These include a collision shape, a rigid body and a mass.
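As a rough sketch of those three ingredients (the shape size, mass and positions here are made up; in the real project this work is wrapped up by the abstract classes described next):

// a minimal Bullet rigid body: collision shape + motion state + mass
btCollisionShape* beamShape = new btBoxShape(btVector3(10, 10, 100)); // half-extents of the box
btDefaultMotionState* beamMotion =
    new btDefaultMotionState(btTransform(btQuaternion(0,0,0,1), btVector3(0, 50, 0)));
btScalar mass = 50;
btVector3 inertia(0, 0, 0);
beamShape->calculateLocalInertia(mass, inertia); // dynamic bodies need their inertia calculated
btRigidBody::btRigidBodyConstructionInfo beamCI(mass, beamMotion, beamShape, inertia);
btRigidBody* beamBody = new btRigidBody(beamCI);
dynamicsWorld->addRigidBody(beamBody); // hand it to the physics environment created above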

Abstract classes are used to allow multiple types of object to be easily maintainable and to prevent code repetition. The abstract classes contain information such as how to load and draw the object. They also add the rigid bodies to the physics environment.

void AbstractBuilding::addRigidBodyToWorld(btDiscreteDynamicsWorld& dynamicsWorld)

{

if(!addedRigidBody)

{

dynamicsWorld.addRigidBody(rigidBody);

addedRigidBody = true;

}

}

The Bullet physics engine only allows one shape to be added to any individual object. It has built-in primitive shapes that can be used, such as boxes, cylinders and spheres. These work well for convex shapes, but not for concave shapes. To allow for concave shapes, multiple primitive shapes have to be combined. Since the engine only allows one shape per object, a "compound shape" can be created. This is a single shape that can contain multiple primitive shapes. Within Dragon Hall it made sense for the stone base to be built out of one large compound shape. This allows any forces that act upon the hall to act upon the object as a whole, as opposed to just one small section.

MainBuilding::MainBuilding()

{

btCollisionShape* sideMain = new btBoxShape(btVector3(360,99,30)); //the two long sides of the building

btCollisionShape* frontBackMain = new btBoxShape(btVector3(50,50,50)); // the side with the two stone arches

btCollisionShape* pillarShape = new btBoxShape(btVector3(13,22,23)); // required for the arcade support beam

btCollisionShape* wing = new btBoxShape(btVector3(35,100,275));

btCollisionShape* insideHallLongSide = new btBoxShape(btVector3(20,20,20)); //the inside hall that beams lean against

btCollisionShape* insideHallShortSide = new btBoxShape(btVector3(20,20,20)); //the short side of the inside hall

btCollisionShape* stoneWings = new btBoxShape(btVector3(100,100,100));

btCollisionShape* woodenSupport = new btBoxShape(btVector3(220,10.0f,10));

btCollisionShape* doorFrame = new btBoxShape(btVector3(35,100,10));

float xMod = -5;

float yMod = -30;

float buildingX = 135;

float buildingY = 320;

float buildingZ = -270;

shape = new btCompoundShape(); // used to allow the convex shape to have the correct collision detection

shape->addChildShape(btTransform(btQuaternion(0,0,0,1),btVector3(-490,-121,270)),woodenSupport);

shape->addChildShape(btTransform(btQuaternion(0,0,0,1),btVector3(-405,-211,345)),sideMain);

shape->addChildShape(btTransform(btQuaternion(0,0,0,1),btVector3(-405,-211,645)),sideMain);

shape->addChildShape(btTransform(btQuaternion(0,0,0,1),btVector3(35,-211,375)),sideMain);

shape->addChildShape(btTransform(btQuaternion(0,0,0,1),btVector3(35,-211,675)),sideMain);

shape->addChildShape(btTransform(btQuaternion(0,0,0,1),btVector3(465,-211,375)),sideMain);

shape->addChildShape(btTransform(btQuaternion(0,0,0,1),btVector3(77-buildingX,-309,278)),pillarShape);

shape->addChildShape(btTransform(btQuaternion(0,0,0,1),btVector3(385,-211,80)),wing);

shape->addChildShape(btTransform(btQuaternion(0,0,0,1),btVector3(785,-211,80)),wing);

motionState =new btDefaultMotionState(btTransform(btQuaternion(0,0,0,1),btVector3(buildingX,330,buildingZ)));

shape->calculateLocalInertia(1000000,btVector3(0,0,0));

btRigidBody::btRigidBodyConstructionInfo MainBuildingRigidBodyCI(1000000,motionState,shape,btVector3(0,0,0));

rigidBody = new btRigidBody(MainBuildingRigidBodyCI);

rigidBody->setLinearVelocity(btVector3(0,0,0));

rigidBody->setAngularVelocity(btVector3(0,0,0));

rigidBody->setLinearFactor(btVector3(0,0,0));

rigidBody->setAngularFactor(btVector3(0,0,0));

name = "Main Building"; //name of the object for debugging

location = "TestModels/stone_base.obj"; // load the model

addedRigidBody = false;

}

The construction of the hall is split into individual sections. These sections are contained within a vector from the Standard Template Library (STL) in C++. Each time a new section is added, the vector updates so it can be used later. Each section contains multiple components. These components are again stored in a vector. This allows each section to have as many or as few components as required and to be easily upgradable. To loop through and build all the sections, the coder need simply add a section to the vector in the desired place. In the example below, first a section is put in, then it is pegged into place, then the floor is added in multiple sections, this is then pegged, and finally the decking is added on top.

//floorSection

sections.push_back(new ArcadeSection);

sections.push_back(new ArcadePegs(1));

sections.push_back(new ArcadePegs(2));

sections.push_back(new FirstFloorSection);

sections.push_back(new SecondFloorSection);

sections.push_back(new ThirdFloorSection);

sections.push_back(new FourthFloorSection);

sections.push_back(new FifthFloorSection);

sections.push_back(new FloorPegs(1));

sections.push_back(new FloorDecking);

Each section has multiple fields associated with it. It decides whether it should use gravity or not and what objects to draw.

ArcadeSection::ArcadeSection()

{

usesGravity = true;

startTimeSet = false;

sectionComplete = false;

elements.push_back(new ArcadePost);

for(int i =0; i  < 4;i++)

{

elements.push_back( new ArcadePlate(i));

}

}

The arcade plate in the example above is used to create the rigid body. A function in the abstract classes handles all the input, draws the object and creates a rigid body to the specification provided. The first parameter creates a collision box of the desired size. The second parameter is the start position of the object. The third parameter is a shape offset (if there is one). The fourth parameter is the shape's end position. The fifth is the shape's rotation. The sixth is a string that is used for debugging purposes, and the final field is where the model is and what it is called.

shape = new btCompoundShape();

if( i == 0)

{

createRigidBody(new btVector3(154, 5.0f, 10), new btVector3(-444.5, 300, -5), new btVector3(0,0,0), new btVector3(-444.5, 226, -5), new btQuaternion(0,0,0,1), "arcade Plate", "TestModels/arcade_plate_1.obj");

}

This is all the information that is required to be able to load a model in a desired start position (such as above the building). The object can then be dropped from above until it hits the building in its final position; once there it stops.

The other algorithms are all hidden inside the abstract classes. To allow an object to move, a start and end position is required. (This gives the effect of the floor beams sliding in from the side.) The algorithm for how to do this is given below.

//move to/from start/end position

bool AbstractElement::update()

{

bool isDone = false;

float velocityX, velocityY, velocityZ;

if(!reverse)

{

velocityX = (endPosition.getX() - this->getX());

velocityY = (endPosition.getY() - this->getY());

velocityZ = (endPosition.getZ() - this->getZ());

}

else

{

velocityX = (startPosition.getX() - this->getX());

velocityY = (startPosition.getY() - this->getY());

velocityZ = (startPosition.getZ() - this->getZ());

}

if((velocityX < -1 || velocityX > 1) || (velocityY < -1 || velocityY > 1) || (velocityZ < -1 || velocityZ > 1))

{

btVector3 linearForce(velocityX*2,velocityY*2,velocityZ*2);

rigidBody->setLinearVelocity(linearForce);

}

else

{

if(reverse)

{

connected = false;

rigidBody->setLinearVelocity(btVector3(1,1,1));

rigidBody->setLinearFactor(btVector3(1,1,1));

}

else

{

setPosition(&endPosition);

rigidBody->setLinearVelocity(btVector3(0,0,0));

rigidBody->setLinearFactor(btVector3(0,0,0));

connected = true;

}

isDone = true;

moving = false;

}

return isDone;

}

The further the object's start position is from the end position, the faster it travels; as it gets closer it slows down. Once the velocity is below the threshold it snaps itself into the correct final place. The start and end position can be swapped to allow the simulation to go both forwards and backwards.

Once the object has been snapped into position it can then be locked into place. This is done by forcing the linear factor and velocity to be 0 in all axes.

//go straight to end position

//lock out velocity as well

void AbstractElement::snapToEnd()

{

setPosition(&endPosition);

rigidBody->setLinearVelocity(btVector3(0,0,0));

rigidBody->setLinearFactor(btVector3(0,0,0));

connected =true;

moving =false;

movedToMiddle =true;

}

If gravity is used to cause the object to fall into place (as opposed to using a start and end position) then a different function is called. Once the object starts moving it will continue doing so until it hits an object that causes it to bounce or stop. Once this happens it is known that this is its final position.

//check if element is falling and has stopped moving

bool AbstractElement::fall()

{

rigidBody->setLinearFactor(btVector3(1,1,1));

if(rigidBody->getLinearVelocity().getY() <-8.5 && !startedFalling)

{

startedFalling = true;

}

if(rigidBody->getLinearVelocity().getY() >-0.001f && rigidBody->getLinearVelocity().getY() < 0.001f && startedFalling)

{

startedFalling = false;

return true;

}else

return false;

}

These are some of the main features of the simulation; there are over 50 classes in total that all work together, from handling the touch screen inputs on the interface using MFC, to rendering and moving the objects within the Bullet engine to the correct place at the correct speeds.

DragonHall from philip orrill on Vimeo.


Kinect voice command tutorial

This is my first tutorial and I'm new to this blogging world. I have created a simple program that runs on my computer and Kinect without a hitch. I am going to put the code up so it can hopefully be used easily by others. If it doesn't work because I have missed something out, or you have any problems, comments or suggestions, feel free to let me know. I'll try and improve this blog as we go.

I thought I would put together a quick tutorial on how to get voice commands working with the Kinect. There are a few things you need:

A Kinect (obviously)

the Kinect SDK for XNA

and interop.speechlib.dll

The first thing to do is to set your application up to be a console application. This is done by right clicking on the name of the project and selecting properties.

Then click on the dropdown box that says windows application and change it to console application.

You then have to add the references for the Kinect and the speech DLL. Right click on the references and click add reference. The Kinect reference should be easy to find under the .NET tab as Microsoft.Kinect. The speech reference I had to browse for; it should be found in

c:->programFiles(x86)->MicrosoftSDKs->Speech->V11.0->Assembly

Click the DLL, then OK, and it should be added.

OK, with that set up, onto the code.

The final bit of preparation is to ensure we use all the necessary libraries. At the top we need

using Microsoft.Speech.AudioFormat; //needed for speech

using Microsoft.Speech.Recognition; // needed for speech recognition engine

using Microsoft.Kinect;

using System.IO; //required to input the stream

First off, some variables we need to set up the Kinect and to store the speech results. The code below is standard stuff for setting up the Kinect sensor.

KinectSensor kinectSensor;

SpeechRecognitionEngine speechEngine; //used to understand speech

Stream stream;

RecognitionResult result;

string connectedStatus;

private string voiceInput;// used to print to the console results of what you have said

The first thing to do is to calculate any energy that passes over the Kinect and convert it into sound. This can be found in the Microsoft Kinect SDK samples. I'll add it here for ease.

private const int WaveImageWidth = 500;

private class EnergyCalculatingPassThroughStream : Stream// class used to convert energy from microphone into speech

{

private const int SamplesPerPixel = 10;

private readonly double[] energy = new double[WaveImageWidth];

private readonly object syncRoot = new object();

private readonly Stream baseStream;

private int index;

private int sampleCount;

private double avgSample;

public EnergyCalculatingPassThroughStream(Stream stream) // find the energy from the device and pass it into the stream

{

this.baseStream = stream;

}

public override long Length

{

get { return this.baseStream.Length; } //input length

}

public override long Position

{

get { return this.baseStream.Position; }

set { this.baseStream.Position = value; }

}

public override bool CanRead

{

get { return this.baseStream.CanRead; }

}

public override bool CanSeek

{

get { return this.baseStream.CanSeek; }

}

public override bool CanWrite

{

get { return this.baseStream.CanWrite; }

}

public override void Flush()

{

this.baseStream.Flush();

}

public void GetEnergy(double[] energyBuffer)

{

lock (this.syncRoot)

{

int energyIndex = this.index;

for (int i = 0; i < this.energy.Length; i++)

{

energyBuffer[i] =

this.energy[energyIndex];

energyIndex++;

if (energyIndex >= this.energy.Length)

{

energyIndex = 0;

}

}

}

}

public override int Read(byte[] buffer, int offset, int count)

{

int retVal = this.baseStream.Read(buffer, offset, count);

const double A = 0.3;

lock (this.syncRoot)

{

for (int i = 0; i < retVal; i += 2)

{

short sample = BitConverter.ToInt16(buffer, i + offset);

this.avgSample += sample * sample;

this.sampleCount++;

if (this.sampleCount == SamplesPerPixel)

{

this.avgSample /= SamplesPerPixel;

this.energy[this.index] = .2 + ((this.avgSample * 11) / (int.MaxValue / 2));

this.energy[this.index] = this.energy[this.index] > 10 ? 10 : this.energy[this.index];

if (this.index > 0)

{

this.energy[this.index] = (this.energy[this.index] * A) + ((1 - A) * this.energy[this.index - 1]);

}

this.index++;

if (this.index >= this.energy.Length)

{

this.index = 0;

}

this.avgSample = 0;

this.sampleCount = 0;

}

}

}

return retVal;

}

public override long Seek(long offset, SeekOrigin origin)

{

return this.baseStream.Seek(offset, origin);

}

public override void SetLength(long value)

{

this.baseStream.SetLength(value);

}

public override void Write(byte[] buffer, int offset, int count)

{

this.baseStream.Write(buffer, offset, count);

}

}

The next thing to do is to create a grammar builder. This is in control of taking the inputs and checking them against the input scenarios to see if it produces a result. Depending on what is input, different outputs can be produced.

private SpeechRecognitionEngine CreateSpeechRecognizer()

{

RecognizerInfo ri = GetKinectRecognizer(); //set the input to be the kinect and check for operation

if (ri == null)

{

Console.WriteLine("the kinect can not do audio for some reason ooops!!!!");

return null;

}

SpeechRecognitionEngine sre; //set up the speech recognition engine (microsofts in-built one)

try

{

sre = new SpeechRecognitionEngine(ri.Id); //use the kinect audio for input to the engine

}

catch

{

Console.WriteLine("the kinect can not do audio for some reason ooops!!!!");

return null;

}

var Choices = new Choices(); // add choices here (remember to, or the Kinect won't pick anything up); var forces a strong type to disallow ambiguity

Choices.Add("WORD"); //add the choice to the grammar

Choices.Add("THIS IS A SENTANCE");

Choices.Add("YOU CAN ADD WHAT YOU WANT");

Choices.Add("EVEN HAHAHAHA");

var grammerBuilder = new GrammarBuilder { Culture = ri.Culture }; //build the grammar

grammerBuilder.Append(Choices); // add the choices for the Kinect to check said words against

var g = new Grammar(grammerBuilder); //create the grammar from the choices

sre.LoadGrammar(g); //load the choices to check against

sre.SpeechRecognized += this.speechRecognized; //check to see if speech is recognized

sre.SpeechHypothesized += this.speechHypothesized; //hypothesise and check for accuracy

sre.SpeechRecognitionRejected += this.speechRecognitionRejected; //reject words that don't match the choices

return sre;

}

The next few functions decide how close what it thinks the sound was is to what is in the builder; they then reject the speech if it's not accurate enough.

private void RejectSpeech(RecognitionResult result)

{

string status = "Rejected " + (this.result == null ? string.Empty : result.Text + " " + result.Confidence); //if the sound wasn't recognised, reject

Console.WriteLine(status);

}

private void speechRecognitionRejection(object sender, SpeechHypothesizedEventArgs e) //hypothesise the sound; if too low a chance, reject

{

Console.WriteLine("Hypothesized: " + e.Result.Text + " " + e.Result.Confidence);

}

private void speechRecognitionRejected(object sender, SpeechRecognitionRejectedEventArgs e)

{

this.RejectSpeech(e.Result);

}

private void speechHypothesized(object sender, SpeechHypothesizedEventArgs e)

{

Console.WriteLine("Hypothesized" + e.Result.Text + " " + e.Result.Confidence);

}

Now for the more interesting bit: what to do if the speech is recognised. Currently it assigns an output to a string and prints the string, but it can also be used to change internal values and other things in your program.

private void speechRecognized(object sender, SpeechRecognizedEventArgs e)

{

if (e.Result.Confidence < 0.4) //if the confidence of the sound is too low, reject (between 0 and 1)

{

this.RejectSpeech(e.Result); //reject if too low

return;

}

switch (e.Result.Text.ToUpperInvariant()) //if accepted find the case that matches

{

case "WORD": //add the string to a possible case

voiceInput = "Ok one word that was easy"; //input a response string so it can be printed to the console easily.

Console.WriteLine(voiceInput); //print to the console

break;

case "THIS IS A SENTANCE":

voiceInput = "wow a sentance clever you :P";

Console.WriteLine(voiceInput);

break;

case "YOU CAN ADD WHAT YOU WANT":

voiceInput = "ok so your adding new things";

Console.WriteLine(voiceInput);

break;

case "EVEN HAHAHAHA":

voiceInput = "was that supposed to be an evil laugh???";

Console.WriteLine(voiceInput);

break;

default:

voiceInput = "your talking Jibberish to me :D";

Console.WriteLine(voiceInput);

break;

}

}

Now with that all set up and ready to go, just the Kinect stuff is left. First initialise the Kinect.

private bool IntialiseKinect()

{

enableEchoCancellation(kinectSensor); //initialise echo cancellation

speechEngine = this.CreateSpeechRecognizer(); //create speech engine

try

{

kinectSensor.Start();

//start kinect

Console.WriteLine("speech Recognizer created");

}

catch (Exception)

{

Console.WriteLine("kinect didnt start oops");

}

Console.WriteLine("Recognizing Speech");

try

{

kinectSensor.Start();

}

catch

{

return false;

}

return true;

}

The initialisation is called within a DiscoverKinectSensor function.

private void DiscoverKinectSensor()

{

foreach (KinectSensor sensor in KinectSensor.KinectSensors)

{

if (sensor.Status == KinectStatus.Connected)

{

kinectSensor = sensor;

break;

}

}

if (this.kinectSensor == null)

{

return;

}

switch (kinectSensor.Status)

{

case KinectStatus.Connected:

{

connectedStatus = " Status connected";

break;

}

case KinectStatus.Disconnected:

{

connectedStatus = " Status disconnected";

break;

}

case KinectStatus.NotPowered:

{

connectedStatus = " Status please connect the power";

break;

}

default:

{

connectedStatus = " Unknown Error";

break;

}

}

if (kinectSensor.Status == KinectStatus.Connected)

{

IntialiseKinect();

}

}

A recognizer has to be created; this is responsible for comparing the input sounds to a language.

private static RecognizerInfo GetKinectRecognizer()

{

Func<RecognizerInfo, bool> matchingFunc = r =>

{

string value;

r.AdditionalInfo.TryGetValue("Kinect", out value);

return "True".Equals(value, StringComparison.CurrentCulture) && "en-US".Equals(r.Culture.Name, StringComparison.CurrentCulture); // set to US; need to find the GB one, there is also a Japanese one

};

return SpeechRecognitionEngine.InstalledRecognizers().Where(matchingFunc).FirstOrDefault(); //uses the 32-bit speech DLL or it won't work; used for matching speech to the engine

}

Next, enable echo cancellation.

private void enableEchoCancellation(object sender)

{

this.kinectSensor.AudioSource.EchoCancellationMode = EchoCancellationMode.CancellationAndSuppression; //do echo and noise cancellation to allow for better sound

}

And finally, start the voice recognition.

private void startSpeechRecogntion()

{

var audioSource = this.kinectSensor.AudioSource; //set the audio source to be the kinect audio

audioSource.BeamAngleMode = BeamAngleMode.Adaptive; //set the beam to adapt to the surrounding

var kinectStream = audioSource.Start(); //start the kinect audio (has to be done in the above order first)

this.stream = new EnergyCalculatingPassThroughStream(kinectStream); //take the kinect audio as the input stream

this.speechEngine.SetInputToAudioStream(this.stream, new SpeechAudioFormatInfo(EncodingFormat.Pcm, 16000, 16, 1, 32000, 2, null)); //convert the microphone input into a stream to pass into the engine.

this.speechEngine.RecognizeAsync(RecognizeMode.Multiple); //seems to work may need more info

}

With all this done, just put it in the initialisation stage of your program and run. It will take a few seconds for it to start responding as it sets itself up, but it should work without a hitch.

protected override void Initialize()

{

// TODO: Add your initialization logic here

DiscoverKinectSensor();

startSpeechRecogntion();

base.Initialize();

}

Good luck and hope it comes in useful to someone 😀


Kinect Game

This game was written in C# and XNA along with the Kinect SDK from Microsoft. All the animations within the game are motion captured and then put into the game using MotionBuilder. I did the coding and animation of the game whilst someone else designed and provided me with the models.

The aim of the game is to get the humans on the mountain to the top without them being killed by the yetis. If a yeti gets close to a human, the human stops moving and will take damage if the yeti attacks. The player needs to hit the yetis to the side so the human can climb the ladder, allowing them to reach their goal. The higher up the mountain the player gets, the more humans appear on different levels to make it more challenging. If the main human (the one the player starts with) dies then the game is over. If you make it to the top you win.

The game is played using the Microsoft Kinect. Voice recognition replaces the need for any interface, allowing the game to play and pause. Additionally, the gameplay can be sped up and slowed down, and cheats can be activated to either stop the yetis from spawning or turn on god mode, which prevents the human characters from dying.

The player uses their left and right hands to control the gloves on screen. These gloves can then be used to move the yetis away from the humans on screen. The direction the yeti moves when hit depends on what direction it is being hit from: if the player hits the yeti from the left it moves right, and if it is hit from the right it moves left. This happens with both the left and right hands.

The first thing needed to achieve this is to be able to calculate where the player's hands are in 3D with the Kinect, and then convert this information so they can be displayed on the screen in 2D.

The code below is given as a guide to achieve this.

void kinectSensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)

{

using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())

{

if(skeletonFrame != null)

{

int skeletonSlot = 0;

Skeleton[] skeletonData = new Skeleton[skeletonFrame.SkeletonArrayLength];

skeletonFrame.CopySkeletonDataTo(skeletonData);

Skeleton playerSkeleton = (from s in skeletonData where s.TrackingState == SkeletonTrackingState.Tracked select s).FirstOrDefault();

if (playerSkeleton != null)

{

Joint rightHand = playerSkeleton.Joints[JointType.HandRight];

float halfScreenWidth = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Width / 2;

float halfScreenHeight = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Height / 2;

Vector3 Test = Vector3.Transform(Vector3.Zero, Matrix.Invert(viewMatrix));

Joint leftHand = playerSkeleton.Joints[JointType.HandLeft];

float halfScreenWidth2 = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Width / 2;

float halfScreenHeight2 = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Height / 2;

Vector3 Test2 = Vector3.Transform(Vector3.Zero, Matrix.Invert(viewMatrix));

handPosition = new Vector3(((((rightHand.Position.X) / rightHand.Position.Z) + 0.5f) * (640 * screenConversion())), ((-(rightHand.Position.Y) / rightHand.Position.Z) + 0.5f) * (480 * screenConversion()), (3.15f - (0.5f * (rightHand.Position.Z))));

leftHandPosition = new Vector3(((((leftHand.Position.X) / leftHand.Position.Z) + 0.5f) * (640 * screenConversion())), ((-(leftHand.Position.Y) / leftHand.Position.Z) + 0.5f) * (480 * screenConversion()), (3.15f - (0.5f * (leftHand.Position.Z))));

}

}

}

}

Within the code above a function is used that converts from the Kinect resolution to the screen resolution. The Kinect by default can only capture at a certain resolution (in the game it is set to 640 by 480). This would only allow the 2D gloves to move around the screen within that resolution; as a result the gloves could only move near the centre of the screen and not to the edges. The small function below takes the screen resolution and converts the Kinect resolution so it can be used on the entire screen.

protected float screenConversion()

{

float convertedValue = GraphicsAdapter.DefaultAdapter.CurrentDisplayMode.Width / 640.0f; //divide as floats so the ratio isn't truncated

return convertedValue;

}

Since the Kinect positions are used in 2D, it makes sense to test whether the player is hitting a yeti in 2D as well. If a yeti is behind the mountain then no checks are performed, as the player shouldn't be able to hit it. If the yeti is in front of the mountain, its 3D co-ordinates are converted into 2D. A quick 2D Axis-Aligned Bounding Box (AABB) test is then performed to see if a collision occurs.

The 2D bounding box test is given below.

public bool BoundingBox(float x1, float y1, float x2, float y2, float x3,float y3, float x4, float y4)

{

if (!((x4 < x1) || (y4 > y1) || (x3 > x2) || (y2 > y3)))

{

return true;

}

else

{

return false;

}

}

The code below shows how this is then used to test if a collision with a yeti has occurred. It can also tell whether the yeti has been hit from the left or the right and move the yeti in the appropriate direction.

foreach

(Yeti yeti in yetiArray)

{

Matrix worldMatrix = Matrix.CreateScale(10.00f, 10.00f, 10.00f) * Matrix.CreateRotationY(MathHelper.ToRadians(yeti.localAxis)) * Matrix.CreateTranslation(yeti.xPos, yeti.yPos, yeti.zPos) * Matrix.CreateRotationY(MathHelper.TwoPi) * Matrix.CreateRotationY(MathHelper.ToRadians(yeti.angle));

Vector3 yeti2D = device.Viewport.Project(testProjection, projectionMatrix, viewMatrix, worldMatrix); //yeti position in 2D

float distanceX = (float)(handPosition.X); //right hand position

float distanceY = (float)(handPosition.Y);

float leftdistanceX = (float)(leftHandPosition.X); //left hand position

float leftdistanceY = (float)(leftHandPosition.Y);

bool kinectCollision = collision.BoundingBox(distanceX - 60, distanceY + 120, distanceX + 60, distanceY - 120, yeti2D.X - 100, yeti2D.Y + 50, yeti2D.X + 80, yeti2D.Y - 50); //collision box of yeti and right hand position of player

bool leftKinectCollision = collision.BoundingBox(leftdistanceX - 60, leftdistanceY + 120, leftdistanceX + 60, leftdistanceY - 120, yeti2D.X - 100, yeti2D.Y + 50, yeti2D.X + 80, yeti2D.Y - 50); //collision box of yeti and left hand position of player

if (kinectCollision) // if a collision occurs

{

if (distanceX < yeti2D.X && (yeti.angle > -30 && yeti.angle < 30)) //ensure the yeti is in front of the mountain check to see if the player is on the left

{

yeti.angle += 2.0f;//move yeti right

yeti.attack =false;

}

else

{

yeti.angle -= 2.0f;//move yeti left

}

}

else if(leftKinectCollision) //if a collision occurs

{

if (leftdistanceX < yeti2D.X && (yeti.angle > -30 && yeti.angle < 30))

{

yeti.angle += 2.0f;

yeti.attack =false;

}

else

{

yeti.angle -= 2.0f;

}

}

}

An environment class is used to allow both the humans and yetis to move and interact. This class contains the x, y, z positions of the ladders, the x, y, z positions of the caves and the angle each cave is at on the platform.

A yeti is spawned every few seconds on a platform that a human is on. The yeti starts off inside the cave and walks out. As it is spawned a random number is generated; this number decides if the yeti will move left or right around the mountain. Once it emerges from the cave it rotates in that direction and a walk animation is played as it walks around the mountain. This happens unless it encounters a human, in which case it stops walking and an attack animation is played.

The main general algorithm is

public void CalculatePosition(Environment caveLocation,Player human,ContentManager Content)

{

if (attack) //if attack stand still

{

angle += 0.0f;

xPos += 0.0f;

yPos += 0.0f;

zPos += 0.0f;

}

else

{

if (zPos > caveLocation.caveZ) //if it hasnt moved out of the cave continue to move from the cave

{

emergeFromCave();

spawned =true;

}

if (zPos < caveLocation.caveZ && !spawned)

{

zPos -= 10f;

}

else

{

if (localAngle < 90 && localAngle > -90)

{

rotateToFaceDirection();

}

else if (localAngle > 90)

{

localAngle = angle + 90; //move around the platform the same direction as the yeti is facing

}

else if (localAngle < -90)

{

localAngle = angle - 90;

}

if (localAngle >= 90 || localAngle <= -90)

{

if (platformNumber == human.currentPlatform && human.active && !human.climb) // if the human isn't climbing and is still on the same platform as the yeti

{

if (!attack)    {

detectHuman(human); //look for the human if not behind the mountain

}

}

else

moveAroundPlatform();//walk around the platform

}

}

}

}

To allow the yeti to move out of a cave when it is first spawned, the code below is used.

void emergeFromCave()

{

zPos -= 10f;

}

To allow the yeti to rotate and face the right direction, the algorithm below is used.

void rotateToFaceDirection()

{

if (localAngle < 90 && localAngle > -90)

{

if (directionToFace <= 5)

{

localAngle += 0.5f;//move right

}

else if (directionToFace > 5)

{//move left

localAngle -= 0.5f;

}

}

}

The human uses a similar system: the environment class is used to look at the position of the ladder compared to the current position of the person. A decision is then made to move either left or right. As long as a yeti isn't attacking the human, it will travel to the ladder using a walk animation. If a yeti is attacking, it stands still and plays a scared animation; it also takes damage every second this occurs.

Below is the function that shows how a human decides which way to rotate and walk based upon the ladder position.

public void calculateLadderPosition(Environment platform)

{

Environment tempEnvironment = new Environment(platform); //current platform

tempEnvironment.currentLevel(currentPlatform + 1); //next platform

if (yPos >= tempEnvironment.ladderY && !climb)

{

currentPlatform++; //move to the next platform so the human doesn't climb infinitely

}

platform.currentLevel(currentPlatform);

if (xPos >= platform.ladderX - 20 && xPos <= platform.ladderX + 20) //if the human is in the same position as the ladder

climb = true;

else if (xPos > platform.ladderX)

{

if (localAxis > -90)

{

localAxis -= 1.0f;//rotate human

}

else

moveToNextLadder(true); //move left

}

else if (xPos < platform.ladderX)

{

if (localAxis < 90)

{

localAxis += 1.0f;

}

else

moveToNextLadder(false); // move right

}

if (climb == false)

{

if ((zPos - platform.ladderZ) > 0.0f)

{

calculateZPosition(true);

}

else if ((zPos - platform.ladderZ) < 0.0f)

{

calculateZPosition(false);

}

else zPos += 0.0f;

}

}

All the animations were done using motion capture. The data was captured and cleaned up as appropriate to create smooth-looking animations. This was then put into Autodesk's MotionBuilder and the animations were put onto an actor so they could be used on a 3D model. I then gave this to someone else who did all the models and the necessary skinning and rigging to make the animations look smooth. Once I had a copy of the models I put them in the game and cut out the beginning T-pose and the end part of the poses to allow for a smoother transition between animations.

The music is from an old Banjo Kazooie game.

KinectGame from philip orrill on Vimeo.

 


Robot Game

This is a 3D game written in C++ and OpenGL. The game uses an Xbox 360 controller as its input. The analogue sticks move the player and look around, the right and left triggers fire the right and left guns, and the bumpers move the arms. As the player shoots, that side of the controller vibrates in response. The animation of the walking and aiming is all done natively in OpenGL.

There are two separate classes for the 360 controller. The first one initialises the hardware and allows the application to access functions such as pushing a button and vibration.

The second class is more abstract and lists what happens when certain presses or actions occur within the game.

The code below initialises the controller.

CXBOXController::CXBOXController(int playerNumber)

{

// Set the Controller Number

controllerNumber = playerNumber - 1;

//controller number = 0,1,2,3

}

XINPUT_STATE CXBOXController::GetState()

{// Zeroise the state

ZeroMemory(&_controllerState,sizeof(XINPUT_STATE));// Get the state

XInputGetState(controllerNumber, &_controllerState);

return _controllerState;

}

bool CXBOXController::IsConnected()

{// Zeroise the state

ZeroMemory(&_controllerState,sizeof(XINPUT_STATE));// Get the state

DWORD Result = XInputGetState(controllerNumber, &_controllerState);

if(Result == ERROR_SUCCESS)

{

return true;

}

else

{

return false;

}

}

void CXBOXController::Vibrate(int leftVal, int rightVal)

{

// Create a Vibraton State

XINPUT_VIBRATION Vibration; // Zeroise the Vibration

ZeroMemory(&Vibration, sizeof(XINPUT_VIBRATION)); // Set the Vibration Values

Vibration.wLeftMotorSpeed = leftVal;

Vibration.wRightMotorSpeed = rightVal; // Vibrate the controller

XInputSetState(controllerNumber, &Vibration);

}

To see if a button is being pressed

void PlayerControls::buttonPush(CXBOXController* player)

{

if(player->IsConnected())

{

if(player->GetState().Gamepad.wButtons & XINPUT_GAMEPAD_A)

{

aPressed = true;

}

}

}

You can change this to a not (!) statement to check if the button isn't being pressed. This comes in useful as some weapons are semi-automatic, so you only want one bullet to shoot each time a trigger or button is pressed.
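As a rough sketch of that idea (aPressed and the class above are from the post; the semiAutoFire function itself is assumed), firing only on the frame the button goes down looks something like this:

void PlayerControls::semiAutoFire(CXBOXController* player)
{
    static bool wasPressed = false; // state of the A button on the previous frame
    bool isPressed = player->IsConnected() &&
                     (player->GetState().Gamepad.wButtons & XINPUT_GAMEPAD_A);
    if(isPressed && !wasPressed)
    {
        //fire a single bullet here - only triggers on the press, not while the button is held
    }
    wasPressed = isPressed; // remember the state for the next frame
}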

The analogue sticks allow for movement in a conventional FPS way.

The player starts at position 0 in the x and 0 in the z. The y axis is then calculated using a height map. As an analogue stick is moved it gives an integer value, which can then be used to move the player.

The code below moves the player in the z axis. If a Controller isn’t plugged in then the keyboard controls can be used.

void PlayerControls::calculateZPosition(CXBOXController* player, double timeFactor)

{

static float TempZPosition = 0;

if(player->IsConnected())

{

if(player->GetState().Gamepad.sThumbLY >6000 || player->GetState().Gamepad.sThumbLY < -6000)

{

TempZPosition = (float)(player->GetState().Gamepad.sThumbLY/32767.0f);

speed = TempZPosition*(float)(timeFactor*30.0);

}

else

{

speed = 0;

}

}

else

{

if(keys['W'] || keys['S'])

{

TempZPosition = (keys['W'] ? (keys[VK_SHIFT] ? 1.0f : 0.5f) : -0.5f);

speed = TempZPosition*(float)(timeFactor*30.0);

}

else

{

speed = 0;

}

}

}

A time factor is taken in. This is so that the player moves the same on every computer, independent of the hardware. This is important for the animation. Without it the player may move very fast or slow depending on how many frames per second the computer is refreshing the screen at.
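The post doesn't show where the time factor comes from; a sketch of the usual way to produce one on Windows is given below (the variable names are assumed, and the real project may compute it differently):

// measure how long the last frame took, so movement can be scaled by it
// requires <windows.h> for the high resolution timer
static LARGE_INTEGER frequency, lastTime;
static bool timerInitialised = false;
if(!timerInitialised)
{
    QueryPerformanceFrequency(&frequency);
    QueryPerformanceCounter(&lastTime);
    timerInitialised = true;
}
LARGE_INTEGER now;
QueryPerformanceCounter(&now);
double timeFactor = double(now.QuadPart - lastTime.QuadPart) / double(frequency.QuadPart); //seconds elapsed this frame
lastTime = now;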

The waves in the water are created by using Perlin noise to move points. The main terrain and the objects (the cacti and rocks) are two independent models. To stop the player from being able to just walk through them, two separate heightmaps are created: one with the terrain only and one with the terrain and objects. The two heightmaps are compared, and anywhere there is a difference an object is known to be there, so the player cannot travel there.
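A sketch of that comparison, assuming the HeightMap class used elsewhere in this post exposes getHeight(x, z) (the function name and threshold here are made up for illustration):

// true if something (a rock or cactus) sits on top of the terrain at (x, z)
bool isBlocked(HeightMap* terrainOnly, HeightMap* terrainAndObjects, float x, float z)
{
    float difference = terrainAndObjects->getHeight(x, z) - terrainOnly->getHeight(x, z);
    return difference > 0.1f; //any noticeable difference means an object is there, so block movement
}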

If the player is close to the building and rotates in third person, the camera will automatically adjust itself to move over the top of the building. This stops the camera from moving inside the building and everything looking messy.

When the program is loading it generates a random start position for each of the enemy robots. It then checks the heightmap to ensure that the position is valid (not in a rock or cactus). If it isn't, it generates a new point until it is. Additionally, it randomly generates a weapon for the right and left hands. This makes the game different each time it is played. The enemies also have a randomly generated amount of health and armour. The lower and upper bounds are increased each round so that the enemies get harder.

void Enemy::createEnemy(HeightMap *heightMap)

{

std::cout<<"Initialising enemies"<<std::endl;

for(int i = 0; i <firstWave; i++)

{

//randomly generate number 1

int leftWeapon = (rand()%4)+1; //assign to left weapon

enemyLeftWeapons[i] = leftWeapon;

//randomly generate number 2

int rightWeapon = (rand()%4)+1;//assign to right weapon

enemyRightWeapons[i] = rightWeapon;

enemy[i].setHealth((float)(rand()%50+50)) ; //generate Health

enemy[i].setArmour((float)(rand()%20+50)) ;//generate Armour

enemy[i].x = ((float)(rand()%800-400));//set start x

enemy[i].z = ((float)(rand()%800-400)); //set start z

enemy[i].y = (heightMap->getHeight(enemy[i].x,enemy[i].z));

enemy[i].angle = ((float)(rand()%90+10));

while(enemy[i].y < 7.5) //put in collision test for objects such as building or rocks

{

enemy[i].x = ((float)(rand()%800-400));//set start x

enemy[i].z = ((float)(rand()%800-400)); //set start z

enemy[i].y = (heightMap->getHeight(enemy[i].x,enemy[i].z));

}

enemy[i].initiliseRobot();

alive[i] = true;

enemyWeapons[i][0].initallWeapons();

enemyWeapons[i][1].initallWeapons();

enemyWeapons[i][0].initSmoke();

enemyWeapons[i][1].initSmoke();

}

}

The enemy robots move towards and shoot at a position close to the player. If two enemies move near each other they create new random points away from each other to avoid a collision; they also do this to move around the scenery without walking into it too much. Once a point that the enemy wishes to shoot at has been generated, it decides if it is to the left or right of this point; it then rotates whilst continually moving forward until it is almost in line with the point, at which stage the robot simply moves forwards without rotating.
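The OLTest (on-the-left test) used in the code below isn't shown in the post; a sketch of a typical 2D version, returning a signed value whose sign says which side the target point is on, might look like this (the parameter names are assumed, and this is not the game's actual code):

// 2D cross product of (ahead - position) and (target - position):
// positive on one side, negative on the other, roughly zero when in line
float OLTest(float posX, float posZ, float targetX, float targetZ, float aheadX, float aheadZ)
{
    float headingX = aheadX - posX;   // direction the robot is facing
    float headingZ = aheadZ - posZ;
    float toTargetX = targetX - posX; // direction to the point it wants to reach
    float toTargetZ = targetZ - posZ;
    return headingX * toTargetZ - headingZ * toTargetX;
}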

The code for how the enemy moves is given below

void Enemy::enemyMovement(HeightMap *heightMap, double timeFactor,Robot &player, int waveNumber)

{

static int loopVariables1;

static int loopVariables2;

if(waveNumber == 1)

{

loopVariables1 = 0;

loopVariables2 = firstWave;

}

if(waveNumber == 2)

{

loopVariables1 = firstWave;

loopVariables2 = secondWave;

}

if(waveNumber == 3)

{

loopVariables1 = secondWave;

loopVariables2 = thirdWave;

}

for(int i = loopVariables1; i <loopVariables2; i++)

{

int tooCloseToOtherEnemies = enemyCollisionWithenemy(i,waveNumber);

float radianAngle = myMath::DegreesToRadian(enemy[i].angle+90);

if(tooCloseToOtherEnemies == 1 && timeFromLastCollision < 0.5  && alive[i] == true)

{

timeFromLastCollision = 3.0;

enemy[i].angle -= 130*timeFactor;

timeFromLastCollision -= (float)timeFactor;

enemy[i].x -= ((sin(radianAngle))*((0.8/100)*(float)-timeFactor*30)); //used for forwards backward movement

enemy[i].z -= ((cos(radianAngle))*((0.8/100)*(float)-timeFactor*30));

enemy[i].y = (heightMap->getHeight(enemy[i].x,enemy[i].z));

enemy[i].calculateArmAngles(timeFactor);

enemy[i].calculateLegAngles(timeFactor);

enemy[i].setMotionType(RUN);

enemy[i].setRightArmHeading(AIM);

enemy[i].setLeftArmHeading(AIM);

}

if(tooCloseToOtherEnemies == 2 && timeFromLastCollision < 0.5  && alive[i] == true)

{

timeFromLastCollision = 3.0;

enemy[i].angle += 130*timeFactor;

timeFromLastCollision -= (float)timeFactor;

enemy[i].x -= ((sin(radianAngle))*((0.8/100)*(float)-timeFactor*30)); //used for forwards backward movement

enemy[i].z -= ((cos(radianAngle))*((0.8/100)*(float)-timeFactor*30));

enemy[i].y = (heightMap->getHeight(enemy[i].x,enemy[i].z));

enemy[i].calculateArmAngles(timeFactor);

enemy[i].calculateLegAngles(timeFactor);

enemy[i].setMotionType(RUN);

enemy[i].setRightArmHeading(AIM);

enemy[i].setLeftArmHeading(AIM);

}

else

{

timeFromLastCollision = 0.0;

int robotHeadingX = playerOffsetX();

int robotHeadingZ = playerOffsetZ();

float rotationHeading = OLTest(enemy[i].x,enemy[i].z,player.x+robotHeadingX,player.z+robotHeadingZ,enemy[i].x+sin(radianAngle),enemy[i].z+cos(radianAngle)); //do on left test to rotate robot

enemylookAngle(heightMap,timeFactor,player,waveNumber);

if(rotationHeading != 0 && rotationHeading<-1)

{

enemy[i].angle += 130*timeFactor;

}else if(rotationHeading != 0 && rotationHeading > 1)

{

enemy[i].angle -= 130*timeFactor;

}

enemy[i].x -= ((sin(radianAngle))*(0.8*(float)timeFactor*30)); //used for forwards backward movement

enemy[i].z -= ((cos(radianAngle))*(0.8*(float)timeFactor*30));

enemy[i].y = (heightMap->getHeight(enemy[i].x,enemy[i].z));

enemy[i].calculateLegAngles(timeFactor);

enemy[i].setMotionType(RUN);

enemy[i].calculateArmAngles(timeFactor);

enemy[i].setRightArmHeading(SIDE);

enemy[i].setLeftArmHeading(SIDE);

}

}

}

At this point each enemy is created with a random weapon and a random amount of health and armour. They are also able to move and find their way towards a player. Other functions then take care of the correct animation and when to shoot. However, the enemies currently always look straight ahead. This looks wrong, especially if the player is on a mountain above them. So a calculation has to be done to find the angle difference between the enemy and the player. This is then used to make the enemy face the player correctly.

void Enemy::enemylookAngle(HeightMap *heightMap, double timeFactor, Robot &player, int waveNumber)
{
    static int loopVariables1;
    static int loopVariables2;

    // pick the range of enemies that belong to the current wave
    if(waveNumber == 1)
    {
        loopVariables1 = 0;
        loopVariables2 = firstWave;
    }
    if(waveNumber == 2)
    {
        loopVariables1 = firstWave;
        loopVariables2 = secondWave;
    }
    if(waveNumber == 3)
    {
        loopVariables1 = secondWave;
        loopVariables2 = thirdWave;
    }

    for(int i = loopVariables1; i < loopVariables2; i++)
    {
        // angle between the full 3D direction to the player and its horizontal
        // projection, i.e. how far up or down the enemy needs to look
        float newLookAngle = -angleBetween<float>(player.x - enemy[i].x, player.y - enemy[i].y, player.z - enemy[i].z,
                                                  player.x - enemy[i].x, 0.0, player.z - enemy[i].z);

        // clamp so the enemy never pitches to an unnatural angle
        if(newLookAngle > 45.0)
        {
            newLookAngle = 45.0;
        }
        else if(newLookAngle < -65)
        {
            newLookAngle = -65.0;
        }

        enemy[i].lookAngle = newLookAngle;
    }
}

Collision detection between the enemy robots and other enemy robots is done using 2D circle tests. The circles are all treated as lying at the same height (y = 0), so the test is purely two-dimensional. This is a quick test with little overhead, but it allows the robots to behave accordingly, stopping them from running into each other or all being drawn in exactly the same position, which would give the illusion of one robot when there may be 5 or 10.
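
A minimal sketch of this kind of 2D circle test is shown below. The function name, the shared radius and the side calculation are assumptions for illustration; only the return codes 1 and 2 (the two avoidance cases used in enemyMovement above) come from the original code.

// Hypothetical sketch of a 2D circle-circle test between two enemies, ignoring height.
// Returns 0 for no collision, otherwise 1 or 2 depending on which side the other enemy is on.
int Enemy::enemyCircleTest(int a, int b, float radius)
{
    float dx = enemy[b].x - enemy[a].x;
    float dz = enemy[b].z - enemy[a].z;
    float distanceSquared = dx * dx + dz * dz;
    float combined = radius * 2.0f;                     // both enemies assumed to share the same radius

    if(distanceSquared > combined * combined)
        return 0;                                       // far enough apart, no collision

    // use enemy a's current heading to decide which side enemy b is on
    float radianAngle = myMath::DegreesToRadian(enemy[a].angle + 90);
    float side = sin(radianAngle) * dz - cos(radianAngle) * dx;
    return (side < 0.0f) ? 1 : 2;
}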

The HUD and text are also all done in OpenGL. A class was set up that creates a font and displays it on screen. Certain factors can be chosen, such as the size, position and colour of the font. Any text is then passed in as a string and displayed in the desired area. The colour of the font changes as the user's health goes down: it starts off green at full health and gradually turns red when the user is low on health.
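
As a rough illustration of that colour change (the variable names here are assumptions, not the ones used by the font class), the red and green components can simply be driven by the health fraction:

// Hypothetical health-to-colour mapping for the HUD text: full health -> green, low health -> red
float healthFraction = (float)playerHealth / (float)maxHealth;   // 1.0 at full health, 0.0 when empty
if(healthFraction < 0.0f) healthFraction = 0.0f;
if(healthFraction > 1.0f) healthFraction = 1.0f;
glColor3f(1.0f - healthFraction, healthFraction, 0.0f);          // set the font colour before drawing the string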

There are 3 different waves of enemies; each one gets slightly more difficult, and the last wave has 10 enemy robots. The robots spawn in a different position each time to stop spawn camping and to keep the player on their toes.
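
A sketch of how such randomised spawn positions might be set up is given below. The function name, the map bounds and the random heading are assumptions; the heightMap lookup mirrors the one used in enemyMovement so the robots start on the ground.

// Hypothetical spawn placement for one wave of enemies
void Enemy::spawnWave(HeightMap *heightMap, int start, int end)
{
    const int mapSize = 500;                                      // assumed half-size of the playable area
    for(int i = start; i < end; i++)
    {
        enemy[i].x = (float)((rand() % (mapSize * 2)) - mapSize); // random x inside the map
        enemy[i].z = (float)((rand() % (mapSize * 2)) - mapSize); // random z inside the map
        enemy[i].y = heightMap->getHeight(enemy[i].x, enemy[i].z);
        enemy[i].angle = (float)(rand() % 360);                   // face a random direction initially
    }
}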

RobotGame from philip orrill on Vimeo.


My first 2D game

This was the first 2D game I wrote; it uses C++ and OpenGL. All the animations are simply sprites that are loaded at different time frames when required to create the effect of an animation. The files are just TGA images with an alpha channel. The player uses the arrow keys to move and the spacebar to shoot. If you hit an enemy it takes an amount of damage; if the enemy dies, it randomly drops health, armour or nothing.

The enemy planes don't fly or shoot into the path around the outside of the level. The player can fly in the path, but doing so causes them to lose armour every second. If the player is out of armour, their health is heavily penalised instead.
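
A rough sketch of that per-second penalty is given below; the variable names and damage values are assumptions used purely for illustration.

// Hypothetical per-second penalty while the player is inside the outer path.
// pathTimer accumulates frame time; every full second either armour or,
// once armour has run out, a larger chunk of health is removed.
if(playerInPath)
{
    pathTimer += (float)timeFactor;
    if(pathTimer >= 1.0f)
    {
        pathTimer -= 1.0f;
        if(playerArmour > 0)
            playerArmour -= 1;       // armour soaks up the damage first
        else
            playerHealth -= 10;      // heavier penalty once armour is gone
    }
}
else
{
    pathTimer = 0.0f;                // reset once the player is back over the map
}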

A set of points is randomly generated at the start of each game; these points are then joined using line segments.

The points are generated using the following function:

void LeftPath()
{
    // generate 25 random x positions for the left-hand path edge;
    // the points are only stored here and are drawn later as a line strip
    for(int i = 0; i < 25; i++)
    {
        float x = (rand() % 100) - 160;
        leftPath[i] = x;
    }
}

These are then joined together with line strips.

glPushMatrix();
glTranslatef(0, scrolling, 0);
glPointSize(5.0);
glColor3f(0, 0, 0);
glLineWidth(3);
glBegin(GL_LINE_STRIP);
for(int path = 0; path < 20 - 1; path++)
{
    int y = path * 400;
    glVertex2f(leftPath[path], y);
}
glEnd();
glPopMatrix();

The blacked-out areas are specified with:

glPushMatrix();
glTranslatef(0, scrolling, 0);
for(int path = 0; path < 20 - 1; path++)
{
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glBegin(GL_POLYGON);
    glColor4f(0, 0, 0, 0.6f); glVertex2f(leftPath[path], path * 400);
    glColor4f(0, 0, 0, 0.6f); glVertex2f(leftPath[path + 1], (path + 1) * 400);
    glColor4f(0, 0, 0, 0.6f); glVertex2f(-200, (path + 1) * 400);
    glColor4f(0, 0, 0, 0.6f); glVertex2f(-200, path * 400);
    glEnd();
}
glPopMatrix();
glDisable(GL_BLEND);

A quick test is then performed to see if the player is to the left or right of the line; the result is then used to affect the player.

// check which line segment to test against
int lineNumber = floor((movY - scrolling) / 800.0);

// on-left test: the sign of the cross product tells us which side of the
// segment (a -> b) the player position c is on
float ax = leftPath[lineNumber];
float ay = 800 * lineNumber;
float bx = leftPath[lineNumber + 1];
float by = 800 * (lineNumber + 1);
float cx = movX;
float cy = movY - scrolling;
float result = (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
float OLtest = result;

This allows the layout to be slightly different every time the game is played. The collision is done using a couple of Axis Aligned Bounding Boxes (AABB) on the bullets and planes, and a circle test between the bullets and turrets.
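
A minimal sketch of such a 2D AABB overlap test is shown below; the struct and function names are assumptions for illustration, not the ones used in the game.

// Hypothetical 2D axis-aligned bounding box and overlap test, as used between
// bullets and planes: two boxes overlap only if they overlap on both axes.
struct AABB
{
    float minX, minY;
    float maxX, maxY;
};

bool aabbOverlap(const AABB &a, const AABB &b)
{
    if(a.maxX < b.minX || b.maxX < a.minX) return false;   // separated on x
    if(a.maxY < b.minY || b.maxY < a.minY) return false;   // separated on y
    return true;                                            // overlapping on both axes
}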

When a plane or turret is destroyed, an explosion animation plays. A set of still TGA images is used for the animation. A timer function is set up within the program; the program then flips through the explosion images after a certain amount of time has passed.

if(turret[i].Thealth < 1)
{
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_BLEND);
    glEnable(GL_TEXTURE_2D);
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    // record the time the explosion started, once only
    if(turret[i].timer1 == true)
    {
        turret[i].turretAnimation = (float)clock() / (float)CLOCKS_PER_SEC;
        turret[i].timer1 = false;
    }

    float turretExplosion = (float)clock() / (float)CLOCKS_PER_SEC;
    turretTime = (turretExplosion - turret[i].turretAnimation);

    // pick the explosion frame based on how long the explosion has been playing
    glBindTexture(GL_TEXTURE_2D, texTXplo1);
    if(turretTime >= 0.15 && turretTime < 0.3)
    {
        glBindTexture(GL_TEXTURE_2D, texTXplo2);
        turretKills += 1;
    }
    else if(turretTime >= 0.45 && turretTime < 0.6)
        glBindTexture(GL_TEXTURE_2D, texTXplo3);
    else if(turretTime >= 0.6 && turretTime < 0.75)
        glBindTexture(GL_TEXTURE_2D, texTXplo4);
    else if(turretTime >= 0.75 && turretTime < 0.9)
        glBindTexture(GL_TEXTURE_2D, texTXplo5);
    else if(turretTime >= 0.9 && turretTime < 1.05)
        glBindTexture(GL_TEXTURE_2D, texTXplo6);
    else if(turretTime >= 1.05 && turretTime < 1.2)
        glBindTexture(GL_TEXTURE_2D, texTXplo7);

    // once the last frame has played, mark the turret as dead
    if(turretTime > 1.2)
        dead = true;

The scenery is a section taken from Google Maps that was duplicated and rotated to make it tileable. This is then repeated a few times until the end of the map. The scrolling is done by having a scrolling variable linked to the y axis of the screen. The player's plane stays stationary; the background, enemy planes and enemy turrets are all translated by the scrolling variable.

glPushMatrix();
// main background quad, translated by the scrolling variable
glTranslatef(0, scrolling, 0);
glColor3f(0.0, 0.0, 1.0);
glBegin(GL_POLYGON);
glVertex2f(-200, 3800);
glVertex2f(200, 3800);
glVertex2f(200, -200);
glVertex2f(-200, -200);
glEnd();

The collision detection takes in the 2D position of the enemy plane plus an offset to make it a collision box; the scrolling variable is then added to the y axis to give it its correct position.

2D WW2 fighter from philip orrill on Vimeo.



Fire Safety Application

This was a 2D application designed to check whether all the people within a building would become aware of a fire and be able to leave the building safely.

A person becomes aware of a fire in 3 ways. If they are close to a fire, they become aware and begin exiting the building. If an aware person comes within a certain proximity of someone who is unaware, that person also becomes aware and leaves the building. If a fire alarm or a safety device such as a sprinkler is activated, all people within its radius become aware and exit the building by the nearest exit. If the nearest exit becomes blocked by the fire, the people automatically find their way to the next nearest exit; this continues until no exit can be found. Only one person can occupy a single space at any given time, which shows how people are forced to queue and can become bottlenecked in certain corridors.
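
The snippet below is a rough sketch of how that awareness spreading might be written; the Person struct, the radii and the distance helper are assumptions used purely for illustration, not the actual classes from the application.

#include <cmath>
#include <vector>

// Hypothetical person record for the fire safety simulation
struct Person
{
    float x, y;     // grid position
    bool  aware;    // has this person noticed the fire?
};

static float distance2D(float ax, float ay, float bx, float by)
{
    float dx = bx - ax;
    float dy = by - ay;
    return std::sqrt(dx * dx + dy * dy);
}

// One update step of awareness spreading: people near the fire become aware,
// and anyone close to an aware person becomes aware as well.
void spreadAwareness(std::vector<Person> &people, float fireX, float fireY,
                     float fireRadius, float personRadius)
{
    for(std::size_t i = 0; i < people.size(); i++)
    {
        if(people[i].aware) continue;

        if(distance2D(people[i].x, people[i].y, fireX, fireY) < fireRadius)
        {
            people[i].aware = true;   // close enough to see the fire directly
            continue;
        }

        for(std::size_t j = 0; j < people.size(); j++)
        {
            if(j != i && people[j].aware &&
               distance2D(people[i].x, people[i].y, people[j].x, people[j].y) < personRadius)
            {
                people[i].aware = true;   // warned by another person nearby
                break;
            }
        }
    }
}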

The alarms and sprinklers can either be linked or not using a check box; if they are linked, as soon as one is activated they all become activated. The system also has built-in error checking to stop a user from inputting the wrong objects or trying to start a simulation without all the necessary parameters being set.

Additionally, a user can load a blueprint into the application. They can trace around this or add new items onto it if they wish. Once they have fully created the scene they can save it as a .fsb (fire safety building) file, which can then be loaded and used at a later date.

This was written in managed C++ and OpenGL with Windows Forms.

phil1 from philip orrill on Vimeo.
