instructor: Andrei Jalba (MF 4.063, aDOTcDOTjalbaATtueDOTnl)
| task | day | date | place | student deliverable |
| Intro. Assignment choice | Tuesday | 06/02 | see Osiris | - |
| Proposal submission | Tuesday | 13/02, before 24:00 | Canvas | project proposal |
| Evaluation of proposals | Tuesday | 20/02 | Canvas | - |
| Presentation day I (groups 1--7) | Tuesday | 06/03 | see Osiris | presentation + demo |
| Presentation day II (groups 8--14) | Thursday | 08/03 | see Osiris | presentation + demo |
| Submission I | Thursday | 15/03, before 24:00 | Canvas | report: concept version |
| Review | Thursday | 22/03 | Canvas | - |
| Demo day | Tuesday | 03/04 | see Osiris | full demo in a market setting |
| Final submission | Thursday | 12/04, before 24:00 | Canvas | report: final version; all other final deliverables |
Remarks
1. In the column "day", bold font is used to indicate meeting days; participation is mandatory.
2. Deliverables are handed in via Canvas, so it is essential to enroll and be part of a group in Canvas.
3. Feedback on the concept report will be given either during/after the presentation + demo days, or via Canvas.
This project is aimed at improving practical skills in creating computer graphics and visualization applications. The programming language and environment are at the student's choice.
Procedure:
The proposal (as well as all other reports, see below) should be handed in via Canvas, in PDF format. The teacher will provide feedback on your proposal via Canvas. Typical responses are suggestions to add or remove functionality, or changes to the planning.
The report should contain the following elements.
⊕ Braitenberg vehicles (project 1)
Build an interactive, educational virtual environment in which
Braitenberg vehicles may live and do their thing.
The educational aim of this assignment is that, through interaction
with the vehicles and their environment, the working of the Braitenberg
vehicles is clearly illustrated. Furthermore, the
end result should show superb 3D modelling, rendering and interaction.
Additional information
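As a starting point, the sensor-motor coupling that drives a Braitenberg vehicle fits in a few lines. The 2D differential-drive kinematics, the sensor offset, and the inverse-square light model below are illustrative assumptions, not requirements of the assignment:

```python
import math

def sensor_reading(sx, sy, lx, ly):
    # Assumed light model: intensity falls off with squared distance
    # (the +1 avoids a blow-up when a sensor sits on the light).
    return 1.0 / (1.0 + (sx - lx) ** 2 + (sy - ly) ** 2)

def step(vehicle, light, crossed=True, dt=0.1):
    """One simulation step for a Braitenberg 'vehicle 2'.

    crossed=True wires the left sensor to the right motor (type 2b,
    'aggression': the vehicle turns towards the light); crossed=False
    gives type 2a ('fear', turning away)."""
    x, y, theta = vehicle
    lx, ly = light
    half_width = 0.5  # assumed sensor offset from the body centre
    sl = (x + half_width * math.cos(theta + math.pi / 2),
          y + half_width * math.sin(theta + math.pi / 2))
    sr = (x + half_width * math.cos(theta - math.pi / 2),
          y + half_width * math.sin(theta - math.pi / 2))
    left = sensor_reading(*sl, lx, ly)
    right = sensor_reading(*sr, lx, ly)
    # Excitatory wiring: each motor speed is simply a sensor reading.
    vl, vr = (right, left) if crossed else (left, right)
    theta += (vr - vl) / (2 * half_width) * dt  # differential-drive turning
    speed = (vl + vr) / 2
    return (x + speed * math.cos(theta) * dt,
            y + speed * math.sin(theta) * dt,
            theta)

# A crossed-wired vehicle at the origin, facing east, light to its north:
# it starts turning left, towards the light.
v = step((0.0, 0.0, 0.0), (0.0, 5.0), crossed=True)
```

In a 3D environment the same two readings would drive the wheel animation and steering of a modelled vehicle; exposing the wiring interactively is what makes the behaviour educational.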
⊕ Interactive display method for digital photographs (project 2)
Build an interactive environment for displaying collections of
photographs using different arrangements. The various criteria that drive a photo arrangement should be flexibly replaced
or added through interaction, and the results should be displayed dynamically.
Additional information
⊕ Rendering and visualization engine for crowd simulation (project 3)
The goal of this project is to build a crowd visualisation tool capable of showing both 2D and 3D views of crowds in evacuation scenarios. Specifically, the visualisation should contain the following elements:
Additional information
⊕ The Corridor Map Method for path planning (project 4)
A central problem in games is planning
high-quality paths for characters avoiding obstacles in the
environment. Current games require a path planner that is fast (to
ensure real-time interaction) and flexible (to avoid local
hazards). In addition, a path needs to be natural, meaning that the
path is smooth, short, keeps some clearance to obstacles, avoids other
characters, etc. The Corridor Map Method (CMM) satisfies the
requirements mentioned above.
Build an interactive simulation environment for large crowds based on
the CMM. Extend the CMM to allow dynamic manipulation of obstacles:
creation of new obstacles, 'soft' obstacles, etc. The focus of this
project should be on path planning and not on crowd dynamics.
Additional information
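The core idea of corridor-based planning can be sketched without the full CMM machinery. Below, the corridor is simplified to a sequence of clearance disks along a backbone, and the character chases an attraction point a few disks ahead; the `lead` parameter and the single-disk clamping are assumptions of this sketch, not the method as published:

```python
import math

def clamp_to_disk(px, py, cx, cy, r):
    """Project a point back inside a clearance disk if it left it."""
    d = math.hypot(px - cx, py - cy)
    if d <= r:
        return px, py
    return cx + (px - cx) * r / d, cy + (py - cy) * r / d

def follow_corridor(backbone, radii, start, lead=2):
    """Trace a smoothed path through a corridor of clearance disks.

    backbone: list of (x, y) disk centres along the corridor's axis;
    radii: matching clearance radii. The character is pulled towards an
    attraction point a few disks ahead, which cuts corners, while
    clamping to the current disk keeps it clear of the obstacles."""
    x, y = start
    path = [(x, y)]
    for i, (cx, cy) in enumerate(backbone):
        ax, ay = backbone[min(i + lead, len(backbone) - 1)]
        dx, dy = ax - x, ay - y
        d = math.hypot(dx, dy) or 1.0
        step = min(d, radii[i])  # never step further than the clearance
        x, y = clamp_to_disk(x + dx / d * step, y + dy / d * step,
                             cx, cy, radii[i])
        path.append((x, y))
    return path

# An L-shaped corridor: the smoothed path cuts the corner yet stays
# inside every clearance disk.
backbone = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
path = follow_corridor(backbone, [0.6] * 5, (0.0, 0.0))
```

Because each character only queries its local disk, many characters can share one precomputed corridor, which is what makes this style of planner attractive for large crowds.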
⊕ Burst photography for HDR and low-light imaging on mobile cameras (project 5)
Cell phone cameras have small apertures,
which limits the number of photons they can gather, leading to noisy
images in low light. They also have small sensor pixels, which limits
the number of electrons each pixel can store, leading to limited
dynamic range. One possibility to address these issues is to use
information from multiple, successive photos of a burst.
Implement the HDR+ method (see below) for photography enhancement. You may use a computer-vision library (e.g. OpenCV) to help you implement the multiple processing stages of the method. Strive to optimize your implementation, use parallelism, and, if possible, port it to a mobile platform.
Additional information
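The statistical idea behind burst denoising can be shown with plain pixel averaging. This is only the simplest possible merge: the real HDR+ pipeline additionally aligns tiles between frames and merges them robustly in the frequency domain, neither of which is attempted here:

```python
import random

def merge_burst(frames):
    """Average an aligned burst of greyscale frames pixel-wise.

    Averaging N frames reduces the noise standard deviation by a
    factor sqrt(N), which is the core idea behind burst denoising."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

def rms_error(img, ref):
    h, w = len(ref), len(ref[0])
    return (sum((img[y][x] - ref[y][x]) ** 2
                for y in range(h) for x in range(w)) / (h * w)) ** 0.5

# Simulate a burst of 16 noisy exposures of a flat grey patch: the
# merged frame is much closer to the truth than any single exposure.
random.seed(0)
truth = [[100.0] * 8 for _ in range(8)]
burst = [[[p + random.gauss(0, 10) for p in row] for row in truth]
         for _ in range(16)]
merged = merge_burst(burst)
```

For real camera frames the alignment step dominates the effort; hand-held bursts shift by several pixels between shots, so naive averaging without alignment produces ghosting rather than denoising.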
⊕ Procedural city modelling (project 6)
Creation of environments for games is a laborious task. One way to simplify this task is the automatic generation
of environments using parameterized procedures. For this assignment you are asked to,
fully automatically and in a highly parameterizable way,
generate a large city plan and fill it in with buildings, parks, bridges, traffic signs, etcetera.
To increase the level of realism and at the same time reduce the size of the model, generating textures procedurally is highly advisable.
Some additional requirements for this assignment could be: on-demand city generation, walkthrough animation, height-map/landscape generation, high-level parameterization (Roman buildings, Chinese town, Gaudi style, ...). Proposals that make good use of a city map (e.g. OpenStreetMap) and other real data, statistics, or maps are preferred.
Additional information
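A minimal sketch of parameterized generation, assuming a regular street grid and a made-up height falloff from the city centre; everything here (the `park_fraction` parameter, the height distribution, the "crude central business district") is an illustrative choice, not part of the assignment:

```python
import math
import random

def generate_city(blocks_x, blocks_y, park_fraction=0.1, seed=0):
    """Generate a toy city plan on a regular street grid.

    Each block becomes a park or a building whose height falls off with
    distance from the centre. A real proposal would go much further:
    lot subdivision, road hierarchies, facade and texture generation,
    style parameters, and real map data."""
    rng = random.Random(seed)  # fixed seed: same parameters, same city
    cx, cy = blocks_x / 2, blocks_y / 2
    city = []
    for by in range(blocks_y):
        row = []
        for bx in range(blocks_x):
            if rng.random() < park_fraction:
                row.append(("park", 0))
            else:
                dist = math.hypot(bx - cx, by - cy)
                height = max(1, int(rng.uniform(5, 30) / (1 + 0.5 * dist)))
                row.append(("building", height))
        city.append(row)
    return city

plan = generate_city(8, 8)
```

Seeding the generator is the important design choice: it makes city generation reproducible, so a user can tweak one parameter and see its isolated effect, and it allows on-demand regeneration of distant districts without storing them.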
⊕ Object/shape recognition from silhouettes (project 7)
The reconstruction of a 3D object model from a set of images taken from
different viewpoints is an important problem in computer vision. One of the
simplest ways to do this is to use the silhouettes of the object (the binary
classification of images into object and background) to construct a bounding
volume for the object. To efficiently represent this volume, Szeliski [1] uses an
octree, which represents the object as a tree of recursively subdivided cubes.
The algorithm starts with a small number of black cubes. Black cubes are
believed to lie completely within the object, white cubes are known to lie
outside of the object, and gray cubes are ambiguous (e.g. along the object's
boundary). When a new image is acquired, all current cubes are projected into
the image plane and tested for whether they lie totally inside or outside the
silhouette. Then, the color of each cube is updated according to the outcome
of the cube-silhouette intersection test.
Implement the shape-from-silhouettes method above using images
captured by a single, hand-held camera (webcam, mobile phone, etc.).
Use a computer-vision library (e.g. OpenCV) to help you perform
low-level computer vision tasks such as: feature detection, feature
matching, segmentation, camera calibration, camera pose estimation,
etc. Alternatively, sensors available in your phone can be used to
help with camera pose estimation, motion, etc. If necessary, use a
so-called camera calibration pattern (chessboard pattern).
Additional information
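The carving step itself can be illustrated on a flat voxel grid with orthographic views, before any octree or camera calibration is involved. Note the simplifications: Szeliski's method uses an octree with black/white/gray cubes and real perspective cameras, while this sketch only keeps the surviving voxels of an axis-aligned projection test:

```python
def carve(n, silhouettes):
    """Carve a voxel volume from binary silhouettes.

    silhouettes maps a projection axis (0, 1 or 2) to an n-by-n boolean
    image; a voxel survives only if its orthographic projection along
    every given axis lands inside that view's silhouette."""
    voxels = set()
    for x in range(n):
        for y in range(n):
            for z in range(n):
                # Drop the coordinate along each viewing axis and look
                # up the remaining two in that view's silhouette image.
                if all(sil[(y, x, x)[axis]][(z, z, y)[axis]]
                       for axis, sil in silhouettes.items()):
                    voxels.add((x, y, z))
    return voxels

# Two square silhouettes along perpendicular axes carve a 2x2x2 box.
square = [[1 <= i <= 2 and 1 <= j <= 2 for j in range(4)]
          for i in range(4)]
box = carve(4, {0: square, 2: square})
```

With calibrated perspective cameras the projection becomes a matrix multiply per cube corner instead of an index lookup, and the octree replaces the exhaustive triple loop, but the intersection-and-intersect logic stays the same.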
⊕ Open data (project 8)
Making data available to the general public is only half of the story. Using the data to its full potential requires processing it, and how to process it and what to look for is not at all obvious. Here, visualization may come to the rescue, giving a good feel for what kind of information a dataset contains. For this assignment you need to pick a non-trivial dataset and create a tool for visually browsing that dataset for interesting information.
For this assignment, your proposal should fulfil as many of the following features as possible:
Several organizations make their data openly available; see for instance: