Student Research Topics
This page describes some of the research projects that students
have done here at CSUS under my supervision. Most are M.S. projects,
but some are senior projects or independent work. A few are students
from other universities. Whenever possible, I try to steer student
research into directions which have the potential of producing
publishable work. Much of the work described on this page has been
published in respected, peer-reviewed venues.
In general, I prefer to supervise students who have taken one of my advanced
Artificial Intelligence (CSc-215, CSc-180) or Graphics (CSc-155, CSc-165) courses,
although I sometimes make exceptions. Feel free to come by and talk to me
about your ideas and interests.
3D Graphics Programming
I am an author (with John Clevenger) of the 2017 textbook
Computer Graphics Programming in OpenGL with Java.
Many of the examples in the book were developed with the assistance of
students at Sac State. There are many opportunities for expanding this
development. Projects in this area require background in OpenGL GPU shader
programming, such as is learned in CSc-155.
We also maintain a Java mathematics library for 3D graphics support
called graphicslib3D. This package is an integral part of the textbook
and therefore requires continual maintenance and improvement.
There are too many possible projects to list them all. A few examples include:
- Developing complex or instructive 3D procedural models
- Translating JOGL examples to LWJGL or Android
- Expanding the examples for Virtual Reality
- Implementing advanced special effects, such as water or hair
- Adapting tessellation or geometry shaders for improved render detail
- Making particular additions or improvements to graphicslib3D
Game Engine Architecture / Game Engineering
Computer game design and implementation is a complex and constantly changing field.
Today, games are typically built on top of game engines.
At Sac State, students in CSc-165 learn about game engine internals, as well as
game components, engineering, and implementation. We have our own
simplified game engine called "SAGE," which is always in need of support.
As such, there are always opportunities for SAGE-related projects
for students who have graphics or game engine experience at the CSc-155/165 level.
Genetic Algorithms
Genetic Algorithms are search methods based loosely
on biological evolution, where populations of candidate strings
evolve solutions to difficult problems. Evolutionary concepts
of natural selection, crossover, and mutation are used to
improve the solutions over time.
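As a concrete illustration of these concepts, here is a minimal sketch in Python, run on the classic toy "OneMax" problem of maximizing the number of 1-bits; it is illustrative only and is not taken from any of the student projects below:

```python
import random

def one_max(bits):
    # Fitness: the number of 1-bits (the classic OneMax toy problem).
    return sum(bits)

def tournament(pop, rng):
    # Natural selection: the fitter of two random candidates survives.
    a, b = rng.sample(pop, 2)
    return a if one_max(a) >= one_max(b) else b

def evolve(pop_size=40, length=32, generations=80, p_mut=0.02, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(pop, rng), tournament(pop, rng)
            cut = rng.randrange(1, length)              # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Mutation: occasionally flip a bit.
            child = [b ^ 1 if rng.random() < p_mut else b for b in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=one_max)

best = evolve()
print(one_max(best))   # typically at or near the optimum of 32
```

Selection, crossover, and mutation are all visible in the loop; real applications differ mainly in the encoding and the fitness function.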
Evolving QWOP Gaits
QWOP is an online game in which a human player controls the
limbs of a sprinter so as to complete a simulated 100-metre race.
Steven Ray's masters project utilized a cellular genetic algorithm (CGA)
to evolve the first software system to successfully complete a QWOP race.
Steven and I presented the results
at GECCO 2014, where the paper was published.
It was also a finalist in the "Humies" (human-competitive results) competition.
Although completing a QWOP race is a significant result, winning
a QWOP race would be even better. It would require detecting more
on-screen features in order to evolve a system that can adapt more
effectively to the runner's movements. This would be an interesting
follow-up project and would have an excellent chance of winning a Humie award.
Genetic Programming
Genetic Programming (GP) is an extension of the Genetic Algorithm concept,
in which the system evolves computer programs. The idea was first
popularized by John Koza in the early 1990s.
Sac State CSC undergrad Michael Vollmer (now a PhD student at Indiana University)
developed an elegant GP framework in 2013 using the language Clojure, as a
project in CSc-215. His system is called "fungp."
We use fungp in CSc-215, and there are many opportunities for follow-on projects.
Terrain-Based Genetic Algorithm (TBGA)
In a Terrain-Based Genetic Algorithm (TBGA), the strings
reside on a grid, and the genetic operations (e.g. crossover,
mutation) function differently on various points of the grid.
The differing environments form a sort of terrain, and the
best solutions evolve wherever the parameters are the
most agreeable for the problem at hand.
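The core idea can be sketched as follows. This is an illustrative toy only — OneMax fitness, one string per grid cell, and a mutation rate that rises across the columns to form the "terrain" — not the published TBGA's actual operators or parameters:

```python
import random

def fitness(bits):
    # OneMax again, purely for illustration.
    return sum(bits)

def tbga(rows=6, cols=6, length=24, generations=60, seed=2):
    rng = random.Random(seed)
    # One candidate string per grid cell.
    grid = [[[rng.randint(0, 1) for _ in range(length)]
             for _ in range(cols)] for _ in range(rows)]
    for _ in range(generations):
        for r in range(rows):
            for c in range(cols):
                # Terrain: mutation rate varies with grid position.
                p_mut = 0.01 + 0.10 * c / (cols - 1)
                # Crossover with a vertical neighbor on the grid.
                mate = grid[(r + rng.choice([-1, 1])) % rows][c]
                cut = rng.randrange(1, length)
                child = grid[r][c][:cut] + mate[cut:]
                child = [b ^ 1 if rng.random() < p_mut else b for b in child]
                if fitness(child) >= fitness(grid[r][c]):  # keep the better string
                    grid[r][c] = child
    return max((cell for row in grid for cell in row), key=fitness)

best = tbga()
print(fitness(best))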
Initial implementation and testing of the TBGA was done by
Rebecca Pirie, Adam Wachter, and Scottie Sharp
at Sonoma State University in 1998. Their work was supported by
a RSCAP mini-grant and was presented at the International Conference
on Genetic Algorithms. They showed that the TBGA outperforms
a variety of other popular genetic algorithms on a suite of
difficult test problems, while requiring less tuning.
The four of us published a report at GECCO'1999 in Orlando.
TBGA Visualization Tool
The solid rendering shown at the right was created in 2004 by the TBGA
Visualization Tool, developed by a senior project team here at CSUS.
The team members were Jim Thein, Michael Pope,
Robert Carlin, Reuben Fauth, Matthew SantaCruz,
and James Sanguinetti. The surface represents where on the
terrain the best solutions are evolving. The higher the peak, the more
solutions have been found at that location.
Jim and I wrote a report that appeared
in ICTAI'2004 in Boca Raton.
Terrain-Based Memetic Algorithm (TBMA)
The TBGA was expanded into a Terrain-Based Memetic algorithm (TBMA)
starting in 2009, through a collaboration with Carlos Azevedo, at the
time a student at the University of Pernambuco (Recife, Brazil).
In a "memetic" algorithm, a local search is incorporated into the
genetic algorithm. In the TBMA, the local search is tuned using
terrain parameters. Carlos used the TBMA to achieve superior results
on an image-compression application. We presented the findings
at GECCO'2009 in Montreal.
A follow-up study, which explored additional applications, was done
in collaboration with Claus Aranha (a student at the University
of Tokyo) and Carlos. The results were presented
at the 2011 Brazilian Congress of Computational Intelligence.
The TBGA visualization tool (Vis-TBGA) was expanded into a TBMA visualization
tool (Vis-TBMA) in 2011 by CSUS M.S. student Brad Johnson.
Knight's Tour and Pathfinding
The genetic algorithm can be used to solve the classic
knight's tour problem. Terry Slocum worked on this problem
at Sonoma State University, as part of my Artificial Intelligence course.
More work needs to be done to compare the performance of several
more sophisticated genetic algorithms on this same application.
Terry and I published the results at CEC'2004 in Portland.
The genetic encoding used to solve the Knight's Tour can also be
used to solve the problem of finding all paths through a maze.
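The published papers define the actual encoding; purely for illustration, one plausible move-sequence encoding and fitness function for the knight's tour might look like this:

```python
# The eight legal knight move offsets; a candidate solution is simply a
# list of move indices (0-7), which a genetic algorithm can crossover
# and mutate like any other string.
MOVES = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def tour_length(genes, start=(0, 0), n=8):
    # Fitness: how many legal, non-repeating moves the string makes
    # before leaving the board or revisiting a square.
    # A full knight's tour on an 8x8 board scores 63.
    x, y = start
    visited = {start}
    for g in genes:
        dx, dy = MOVES[g]
        x, y = x + dx, y + dy
        if not (0 <= x < n and 0 <= y < n) or (x, y) in visited:
            break
        visited.add((x, y))
    return len(visited) - 1

print(tour_length([0, 2, 4, 6]))   # → 2 (the third move leaves the board)
```

The same move-sequence idea carries over to maze pathfinding, with the move set replaced by the four compass directions.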
Zach Matley implemented a set of agents for his Master's project,
evolving static direction maps for a game.
The image at the right shows a static direction map evolved
for his application using a simple genetic algorithm.
Zach and I published a report at CEC'2004 in Portland.
Similarly, genetic algorithms can be used to find solutions to
traveling salesman problems. Brian Johnson utilized
a genetic algorithm in conjunction with a GIS system (ArcGIS) to find
delivery truck routes.
Two-Player Game Trees
I am also interested in two-player strategy games such as chess, Go,
checkers, Othello, etc. If you have an interesting idea, I would
enjoy helping you try it. I also have some project ideas of my own.
You would need to become familiar with the standard minimax
and alpha-beta search algorithms.
You can learn these in CSc-180 or CSc-215, or by reading one of the
many web tutorials on game-tree search.
Trappy Minimax Algorithm
Normal minimax search assumes best play by the opponent.
It also evaluates positions at the leaves of the tree only.
Trappy minimax is different in both respects. It evaluates
leaf nodes and interior nodes, and when a path evaluates very
differently depending on the depth searched, that path may
contain a trap. For example, if an opponent needs to
search 5-ply to see that move A fails, but with only a 3-ply search
might conclude that move A wins, then move A is a trap.
Trappy minimax search includes this trap potential
in its evaluation of each variation, sometimes favoring moves with
higher trap potential. The resulting programs typically have
lower ratings against computers, which are not susceptible to such
tricks, but higher ratings against us fallible humans.
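As a rough sketch of the idea — not Ahmed's or Linda's implementation — a move's trap potential can be estimated by comparing its deep and shallow minimax values. Here game trees are nested tuples, where an interior node carries a static estimate (the value a depth-limited search would report) alongside its children:

```python
def static(node):
    # Static evaluation: a leaf is its own value; an interior node
    # carries a (possibly misleading) static estimate as node[0].
    return node if isinstance(node, int) else node[0]

def minimax(node, depth, maximizing):
    # Ordinary depth-limited minimax over (estimate, children) trees.
    if isinstance(node, int) or depth == 0:
        return static(node)
    vals = [minimax(ch, depth - 1, not maximizing) for ch in node[1]]
    return max(vals) if maximizing else min(vals)

def trappy_choice(children, deep=3, shallow=1, weight=0.5):
    # Sketch of the trappy idea: credit moves whose deep value exceeds
    # their shallow value -- a shallow-searching opponent underestimates
    # them and may walk into the line.
    best, best_score = 0, float("-inf")
    for m, child in enumerate(children):
        true_val = minimax(child, deep - 1, False)       # opponent to move
        shallow_val = minimax(child, shallow - 1, False)
        trap = max(0, true_val - shallow_val)
        score = true_val + weight * trap
        if score > best_score:
            best, best_score = m, score
    return best

# Two moves with the same true value (3), but move 1 looks losing (-1)
# to a shallow search, so a shallow opponent may blunder into it.
moves = [(3, [3, 5]), (-1, [(0, [3])])]
print(trappy_choice(moves))   # → 1 (the trappier of two equally good moves)
```

Standard minimax would treat the two moves as interchangeable; the trap bonus breaks the tie in favor of the move a weaker opponent is more likely to misjudge.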
The algorithm was implemented and tested by two graduate students here
at CSUS: Ahmed Reda (using the game Othello), and Linda Huynh
(using the game Checkers). Linda's version only looked for traps
at the first ply, and its results were not much different from those of
standard minimax. However, Ahmed's version more cleverly
looked for traps at the 2nd ply... his version, dubbed "Desdemona",
achieved a rating
of 1702 when competing on Yahoo Games against strong human opponents,
while his same program using standard minimax only reached 1680.
Ahmed and I published a report at the 2006 IEEE Symposium on
Computational Intelligence and Games.
In 2014, Sac State undergraduate student Michael Vollmer (now a PhD student
at Indiana University) adapted trappy minimax to an existing chess
program called "Chess Brain" (developed by Colin Frayn). The resulting
system, dubbed Trappy Beowulf, produced results similar to those
observed above with Desdemona. The preliminary results are described in
a white paper.
A more comprehensive set of tests is being developed
as the result of a recently completed senior project.
For anyone interested in continuing work on Trappy Beowulf, this project
has a high likelihood of resulting in a publication, as the ICGA
Journal has expressed an interest in the results.
Test yourself: the example game tree below contains a trap.
The black values are based on a 5-ply search, while the red values are based
on a 4-ply search. Can you see the trap that a 3-ply opponent would fall into?
Neural Networks
Neural Networks are highly simplified artificial "brains" made up
of simulated neurons that are trained in various ways. The most
common training method is called backpropagation. Sometimes,
when there is no obvious solution to a problem, a system can be trained
to solve it if examples can be provided. For example, Paul Skaggs
trained a neural network to play a complex video game by learning from
the movements of an expert human player, as part of his Master's project.
Paul found that the network not only learned to play decently, it also
emulated the style of the human trainer.
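The core training idea can be shown with the simplest possible case: a single simulated neuron learning the AND function from labeled examples by gradient descent. Backpropagation applies this same error-driven update layer by layer in a multi-layer network. This toy is for illustration only and has no connection to Paul's actual system:

```python
import math

def train_and(epochs=2000, lr=0.5):
    # One simulated neuron (logistic unit) learning AND from labeled
    # examples via gradient descent; backpropagation extends this same
    # update rule to the hidden layers of a multi-layer network.
    w1 = w2 = b = 0.0
    sig = lambda z: 1.0 / (1.0 + math.exp(-z))
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = sig(w1 * x1 + w2 * x2 + b)
            err = out - target          # gradient of the cross-entropy loss
            w1 -= lr * err * x1         # nudge each weight against the error
            w2 -= lr * err * x2
            b -= lr * err
    return lambda x1, x2: sig(w1 * x1 + w2 * x2 + b)

net = train_and()
print([round(net(*x)) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # → [0, 0, 0, 1]
```

Training from an expert's recorded moves, as in Paul's project, works the same way: the recorded game states are the inputs and the expert's chosen actions are the targets.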
Stereoscopic Tracking with Neural Network Hardware
Cognimem is a local company (in Folsom) that manufactures high-speed
neural network chips. They are mounted on boards of various
configurations. They have provided us with two V1KU boards, which
include the neural network chips incorporated with a small camera.
Included was an application for learning and recognizing objects.
Hitesh Wadhwani extended this application to create a simple
learning and tracking system using one V1KU board.
As part of my 2011 sabbatical leave, I extended Hitesh's
single-camera tracking system into a two-camera stereoscopic
tracking system. A video of my crude but effective system was recorded.
Adam Ruggles further improved the stereoscopic tracking
system by building a better harness, and strengthening the learning
and tracking algorithms by incorporating a Kalman filter.
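For readers unfamiliar with Kalman filters, here is a minimal one-dimensional sketch of the smoothing role the filter plays in tracking (the real system, of course, tracks in three dimensions):

```python
def kalman_1d(measurements, q=0.01, r=0.5):
    # Minimal 1-D Kalman filter with a constant-position model:
    # q is the process noise (how fast the target can drift) and
    # r is the measurement noise (how jittery the camera readings are).
    x, p = measurements[0], 1.0          # initial estimate and its variance
    smoothed = []
    for z in measurements:
        p += q                           # predict: uncertainty grows over time
        k = p / (p + r)                  # Kalman gain: trust in the measurement
        x += k * (z - x)                 # update the estimate toward the reading
        p *= (1 - k)                     # uncertainty shrinks after the update
        smoothed.append(x)
    return smoothed

noisy = [1.0, 1.4, 0.7, 1.2, 0.9, 1.1]
print(kalman_1d(noisy))   # jitter damped toward the underlying position
```

The filter's estimate lags noisy jumps but converges on the true position, which is what keeps a tracked object from appearing to jitter between frames.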
Self-Splitting Modular Neural Network
Several students and I have been experimenting with alternative
training methods, including PSO (particle swarm optimization),
genetic algorithms, and an extremely simple method that
we have dubbed Neighbor Annealing. Our first results
using the neighbor annealing algorithm have been written up in a
paper that was presented at IJCNN (the IEEE International Joint Conference
on Neural Networks) in Hong Kong, June 2008.
We have also been experimenting with
Modular Neural Networks, where the problem domain is
partitioned and several neural networks are used to solve it:
Jeb Crouson developed a self-splitting
modular neural network. It works by first trying to solve a problem with
a single neural network, and then if that fails, it looks for chunks of
the problem that have already been solved. It then splits the input
domain along those boundaries and generates new neural networks to solve
the remaining portions. Jeb's self-splitting neural network has solved
a number of very large training sets quickly. For example, it solves
a large version of the famous 2-spiral problem (1058 training cases)
in under 30 seconds, and generalizes with 98% accuracy over the 1056
untrained test cases.
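The self-splitting control flow can be sketched with a deliberately trivial stand-in for the neural network (a constant-label model on 1-D data); the point of the sketch is the splitting logic, not the learner:

```python
def fit_region(points):
    # Self-splitting control flow: try one model over the whole region;
    # if it can't fit the data, split the input domain and recurse.
    # The "model" here is a trivial constant-label stand-in for a network.
    labels = [y for _, y in points]
    if len(set(labels)) == 1:
        return ("leaf", labels[0])               # one model suffices here
    xs = sorted(x for x, _ in points)
    cut = xs[len(xs) // 2]                       # split the domain here
    left = [(x, y) for x, y in points if x < cut]
    right = [(x, y) for x, y in points if x >= cut]
    if not left or not right:                    # cannot split any further
        return ("leaf", max(set(labels), key=labels.count))
    return ("split", cut, fit_region(left), fit_region(right))

def predict(model, x):
    # Route the query to whichever sub-model owns this part of the domain.
    if model[0] == "leaf":
        return model[1]
    _, cut, lo, hi = model
    return predict(lo, x) if x < cut else predict(hi, x)

# Alternating 1-D "stripes" that no single constant model can fit:
data = [(i, (i // 3) % 2) for i in range(12)]
model = fit_region(data)
print(all(predict(model, x) == y for x, y in data))   # → True
```

In the real SSNN each leaf is a trained neural network and the split boundaries come from the chunks the failed network did manage to solve, but the divide-and-recurse structure is the same.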
Jeb and I detailed the results in a paper that was presented
at IJCNN (IEEE International Joint Conference on Neural Networks),
in Hong Kong, June 2008.
Jeb's work built upon earlier work by Michael Olsen-Darter (see below).
Tim Bender and Michael Daniels studied various partitioning strategies
for the SSNN, concluding that doing single divisions and delaying
building complete partitions is the best approach.
We published a report in the 2009 International Joint
Conference on Neural Networks (IJCNN'09).
SSNN Visualization Tool
The partial screenshot shown at the right was created in 2008 by a
Visualization Tool for the Self-Splitting Neural Network, developed by a
senior project team here at CSUS. The team members were
Michael Daniels, James Boheman, Marcus Watstein,
Derek Goering, and Brandon Urban. The green rectangles
represent regions of a classification (2-spiral) problem that have
been solved by separate neural networks during training. The visualization
makes it easier to observe what the SSNN is doing and to test alternative
approaches.
The team and I published a report in the 2009 International Joint
Conference on Neural Networks (IJCNN'09).
Snowplow with Neural-Network-Assisted Driving
Michael Olsen-Darter used a collection of neural networks to
control the steering of a snowplow. Initially, he tried to solve the
problem with a single neural network, but the problem was too large for
the neural network to learn the training data. He came up with the
idea of using several neural networks in a modular fashion, and
divided the input domain uniformly among the networks. His final
system utilized hundreds of neural nets and mimicked an existing
mathematical steering model, with greater efficiency.
AHMCT (the Advanced Highway
Maintenance and Construction Technology Research Center) developed
the snowplow, which includes a GPS screen as an aid to the driver during
low-visibility conditions. Michael Olsen-Darter worked at UC Davis for
AHMCT, and developed the neural-network tool for his CSUS Master's
project. It shows a prediction on a video screen of where the
snowplow will be in the immediate future given its current heading.
Michael and I published a report at IRI'05. A video of the prototype
system was also produced.
Completed M.S. students
2016 - John Keeling - "SpeKculator: Genetic Algorithm Chord Progression Generator"
2016 - Charu Khosla - "YouTube Data Analysis Using Hadoop"
2016 - Kalyani Gandhi - "Earthquake Prediction Using Spark"
2015 - Sripriya Shekar - "Content Analysis of Participant ITEST Journals"
2014 - Aditya Hiran Pilla - "Career Plan Tracker"
2014 - Eric Henderson - "Terrain Based Memetic Algorithm for a Self Splitting Neural Network"
2014 - Shirish Gajera - "An Improved LightsOut app for the Android"
2013 - Arun Pandian - "Training Neural Networks with Ant Colony Optimization"
2013 - Amer Harb - "Face Recognition in Hyper Facial Feature Space"
2013 - Sonal Sharma - "NNGo9x9: Evolving a Neural Network to play Go"
2013 - Greg Cantrell - "Viewer Perspective Kiosk"
2013 - Steven Ray - "Genetic QWOP - Evolving a Bipedal Gait in a 2D World"
2012 - Avinash Pandey - "Model for Communication with a Cognitive Memory"
2012 - Adam Ruggles - "Stereoscopic Tracking in Three Dimensions with Neural Network Hardware"
2012 - Nadir Hajiyani - "Web-Based Technology Solutions for Not-For-Profits"
2012 - Ameya Deo - "Neural Network for Intelligent System Monitoring"
2011 - Phaedra Krizo - "A High School Game Programming Curriculum and its Effects on Student Motivation"
2011 - Brad Johnson - "Terrain-Based Memetic Algorithms Visualization"
2011 - P. Hatwalne - "SQLiDetect: A Web-Based Intrusion Detection Application for SQL Injections"
2011 - Alok Nakate - "Neural Networks Non-Linear Scaling"
2010 - Brian Lavender - "Implementation of Genetic Algorithms into a Network Intrusion Detection System"
2010 - Hitesh Wadhwani - "A Tool for Tracking Objects through V1KU, a Neural Network System"
2010 - Ryan Norton - "Self-Splitting Neural Network Visualization Tool Enhancements"
2010 - Akshaya Togrikar - "Image Sharing Application (IShA)"
2010 - Dylan Bartholemew - "Two-Dimensional Chunk Extraction in Self-Splitting Neural Networks"
2010 - Jake Locken - "An Integrated Solution to Reducing Foam Waste in an Automated Beverage Dispenser"
2009 - Joseph Graf - "Checkers Game for the iPhone with Trappy Minimax"
2009 - Leo Lu - "Xiangqi with Trappy Minimax"
2009 - Teza Madalasa - "RSS Feeds System for News Websites"
2008 - Antoine Hersen - "Holonomic Functions and Ore Algebra in Axiom"
2008 - Philip Truong - "Training Self-Splitting Neural Network with Particle Swarm Optimization"
2008 - Ratnesh Raval - "Retriever for the Desktop"
2007 - Hamid Ghasimeyeh - "Analysis and Implementation of a New Multiuser Strategy Game"
2006 - Tahar Lemseffer - "Course on Oracle Query Optimization"
2006 - Satish Chollangi - "Academic Advising Application Using ASP.NET2.0, C# and PSP2.1"
2006 - Paul Andrew Skaggs - "Games with Personality: Controlling Game Agents Using Artificial Neural Networks Trained for Unique Play Styles"
2006 - Jeb Crouson - "A Self-Splitting Modular Neural Network"
2006 - Venugopal Chilamkur - "Ingres-Oracle Bridge Tool"
2006 - Michael Pope - "Terrain-Based Genetic Algorithm - A Comparative and Visual Study"
2005 - Brian Johnson - "GIS Routing with a Genetic Algorithm"
2005 - Kamal Lemseffer - "Oracle Query Optimization"
2005 - Michael Negley - "A Java Applet that Uses Optimal Expectation to Play Yahtzee"
2005 - Linda Marie Huynh - "Checkers with Trappy Minimax"
2005 - Ahmed Reda - "A Trappy Minimax Algorithm"
2005 - Michael Olsen Darter - "Vehicle Steering Control Using Modular Artificial Neural Networks"
2004 - Hoa Vin Quach - "User Authentication Using Keystroke Analysis and Neural Network"
2003 - Zach Matley - "Evolving Static Direction Maps for Fast Pathfinding"