2IMV10 - Visual computing project - 2019-2020 Q3
For previous versions see 2IMV10.

instructor: Andrei Jalba (MF 4.063, aDOTcDOTjalbaATtueDOTnl)

instructor: Michael Burch (MF 4.119, mDOTburchATtueDOTnl)



Agenda
task | day | date | place | student deliverable
Intro. Assignment choice | Tuesday | 04/02 | see Osiris | -
Proposal submission | Tuesday | 11/02, before 24h | Canvas | project proposal
Evaluation of proposals & feedback | Tuesday | 18/02 | Canvas | -
Presentation day I (groups 1--10) | Tuesday | 10/03 | see Osiris | presentation + demo
Presentation day II (groups 11--21) | Thursday | 12/03 | see Osiris | presentation + demo
Submission I | Tuesday | 24/03, before 24h | Canvas | report: concept version
Review & feedback | Thursday | 26/03 | Canvas | -
Demo day | Tuesday | 31/03 | see Osiris | full demo in a market setting
Final Submission | Tuesday | 07/04, before 24h | Canvas | report: final version; all other final deliverables

Remarks

1. In the column "day", bold font is used to indicate meeting days; participation is mandatory.

2. Deliverables are handed in via Canvas, so it is essential to enroll and be part of a group in Canvas.

3. Feedback on the concept report will be given either during/after the presentation + demo days, or via Canvas.



Description

This project is aimed at improving students' practical skills in creating computer graphics and visualization applications. The programming language and environment are at the choice of the student.

Procedure:

  1. Students work in groups of three and register as such in Canvas.
  2. Each group chooses one of the assignments given below. At most two groups are allowed to work on the same assignment. To avoid conflicts, select two or even three preferred assignments from those given below. The assignment choice is finalized during the first meeting; see the table above.
  3. Each group writes a proposal for the assignment.
    The total size of this proposal should be up to 3 pages.

    The proposal (as well as all other reports, see below) should be handed in via Canvas in PDF format. The teacher will provide feedback on your proposal via Canvas. Typically, responses are suggestions to add or remove functionality, or changes to the planning.

  4. The students start working on their assignments and can consult the teacher after making an appointment.
  5. Submission I: concept version of the final report, featuring sections and subsections with an appropriate description of the intended content, or the actual content where that is already possible. The teacher will review this report and notify the students of the findings.
  6. Presentation day I/II: each group gives a short presentation (about 10 minutes) about the project and shows a running prototype illustrating the already implemented concepts. Clearly explain requirements, state problems and their solutions, as far as feasible. Care must be taken to state clearly the status of the project: what is already finished, what still needs to be done, etc.
  7. Demo day: Students present their results to fellow students and to a selection of staff members of the visualization group.
  8. Final Submission: final deliverables, including the final report. Students should also hand in a separate report stating the changes with respect to Submission I. This change report should not only state what has been changed in the report, but also clearly state what the students did in response to the review they received. See below for more remarks on deliverables.



Final deliverables




Report requirements

The report should contain the following elements:

The report should be up to 8 pages, including up to 2 pages of figures; it should be written in LaTeX using the provided template (see Files under Canvas).




Hint
Please give your application attractive and functional interactivity. This will not only help you develop and test your own program; moreover, if you do not like your program yourself, others surely will not either. Both the functionality and the usability of the system will be graded.




Assignments 2019-2020 Q3

Braitenberg vehicles (project 1) - AJ
Build an interactive educational virtual environment in which Braitenberg vehicles may live and do their thing. The educational part of this assignment is that, through interaction with the vehicles and the environment, the workings of the Braitenberg vehicles can be clearly illustrated. Furthermore, the end result should show superb 3D modelling, rendering and interaction.
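As an illustration of how little machinery such a vehicle needs, here is a minimal Python sketch of one simulation step (a hypothetical model: sensor placement, gains, and the inverse-square-like light response are chosen for illustration, not taken from any particular implementation):

```python
import math

def braitenberg_step(x, y, heading, light, crossed=True, dt=0.1):
    """One simulation step of a two-sensor, two-wheel Braitenberg vehicle.

    Sensors sit at +/-0.5 rad from the heading, one unit from the body
    centre; each wheel speed is proportional to one sensor reading.
    With crossed wiring (Braitenberg's type 2b, 'aggression') the
    vehicle turns toward the light; uncrossed wiring makes it flee.
    """
    def sensor(offset):
        sx = x + math.cos(heading + offset)
        sy = y + math.sin(heading + offset)
        d2 = (sx - light[0]) ** 2 + (sy - light[1]) ** 2
        return 1.0 / (1.0 + d2)            # brighter when closer

    left, right = sensor(+0.5), sensor(-0.5)
    if crossed:
        v_left, v_right = right, left      # each sensor drives the opposite wheel
    else:
        v_left, v_right = left, right
    speed = 0.5 * (v_left + v_right)
    heading += (v_right - v_left) * dt     # wheel-speed difference steers
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading
```

With crossed wiring, the sensor nearer the light speeds up the opposite wheel, so the vehicle steers toward the light source; exposing such wiring choices interactively is exactly the educational part of the assignment.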

Additional information


Interactive display method for digital photographs (project 2) - AJ
Build an interactive environment for displaying collections of photographs, using different arrangements. Various requirements for photo arrangements should be flexibly replaced or added through the interaction and the results should be dynamically displayed.

Additional information


Ant-Based Bundling (project 3) - MB
Node-link diagrams are a popular visual metaphor for visualizing graph data. However, they typically suffer from visual clutter when many links are drawn, since a graph can become dense and large. Edge bundling has become an attractive way to show node-link diagrams in a more aesthetically appealing manner; however, only hierarchical and force-directed bundling approaches have been tried in the past. In this project we want to learn how efficient swarm-based bundling methods can be when applied to the problem of producing nicer-looking node-link layouts.

Additional information


Burst photography for HDR and low-light imaging on mobile cameras (project 4) - AJ
Cell phone cameras have small apertures, which limits the number of photons they can gather, leading to noisy images in low light. They also have small sensor pixels, which limits the number of electrons each pixel can store, leading to limited dynamic range. One possibility to address these issues is to use information from multiple, successive photos of a burst.

Implement the HDR+ method (see below) for photography enhancement. You may use a computer-vision library (e.g. OpenCV) to help you implement the multiple processing stages of the method. Strive to optimize your implementation, use parallelism, and, if possible, port it to mobile platforms.
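The core idea, before any of HDR+'s refinements, is that averaging N aligned frames reduces noise by roughly a factor of sqrt(N). The following NumPy sketch (a deliberately simplified stand-in: brute-force integer-shift alignment and plain averaging instead of HDR+'s tile-based robust merge) shows that align-and-merge skeleton:

```python
import numpy as np

def merge_burst(frames, ref_index=0, search=2):
    """Naive align-and-merge for a burst of grayscale frames.

    Each alternate frame is aligned to the reference by a brute-force
    integer-shift search (minimising the sum of squared differences),
    then all frames are averaged.  Averaging N frames cuts the noise
    standard deviation by roughly sqrt(N) -- the core idea behind burst
    denoising; HDR+ does this per tile and far more robustly.
    """
    ref = frames[ref_index].astype(np.float64)
    acc, count = ref.copy(), 1
    for i, frame in enumerate(frames):
        if i == ref_index:
            continue
        frame = frame.astype(np.float64)
        best, best_err = frame, np.inf
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                shifted = np.roll(frame, (dy, dx), axis=(0, 1))
                err = np.sum((shifted - ref) ** 2)
                if err < best_err:
                    best_err, best = err, shifted
        acc += best
        count += 1
    return acc / count
```

A real implementation would replace the exhaustive search with a coarse-to-fine tile alignment and handle motion and occlusion during the merge.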

Additional information


Radial Space-Reclaiming Icicle Plots (project 5) - MB
A recent publication introduced space-reclaiming icicle plots, which have benefits over standard icicle plots for showing the hierarchical structure at deeper hierarchy levels. Moreover, they reclaim empty space that would otherwise be lost in standard icicle plots.

In your project you should develop a radial version of the space-reclaiming icicle plots, which has even more benefits for special hierarchy data that does not easily allow reclaiming space due to the restrictions of the left and right borders in Cartesian (standard) space-reclaiming icicle plots. You should also integrate interaction techniques and experiment with different hierarchy datasets, parameter configurations, and hierarchy traversal algorithms.


Visual Analysis of Police, Fire Brigade, and Ambulance Events Around Eindhoven (project 6) - MB
In big cities, various events happen day by day, hour by hour. Understanding correlations between those events can provide support for dealing with problematic environments or traffic issues.

In this project you should use data from typical webpages that provide time-based and event-based information about special events happening in a city like Eindhoven.

The students are asked to search for further useful datasets, also from other cities around the world, and to build a visual analysis tool that helps identify typical issues in a city and also compare them with other cities in the world.

Additional information


Object/shape recognition from silhouettes (project 7) - AJ
The reconstruction of a 3D object model from a set of images taken from different viewpoints is an important problem in computer vision. One of the simplest ways to do this is to use the silhouettes of the object (the binary classification of images into object and background) to construct a bounding volume for the object. To efficiently represent this volume, Szeliski [1] uses an octree, which represents the object as a tree of recursively subdivided cubes.

The algorithm starts with a small number of black cubes. Black cubes are believed to lie completely within the object, white cubes are known to lie outside of the object, and gray cubes are ambiguous (e.g. along the object's boundary). When a new image is acquired, all current cubes are projected into the image plane and tested for whether they lie totally within or outside the silhouette. Then, the color of each cube is updated according to the outcome of the cube-silhouette intersection test.

Implement the shape-from-silhouettes method above using images captured by a single, hand-held camera (webcam, mobile phone, etc.). Use a computer-vision library (e.g. OpenCV) to help you perform low-level computer vision tasks such as feature detection, feature matching, segmentation, camera calibration, and camera pose estimation. Alternatively, the sensors available in your phone can be used to help with camera pose estimation, motion, etc. If necessary, use a so-called camera calibration pattern (chessboard pattern).
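A sketch of the cube classification step might look as follows (simplified and hypothetical: it tests only the projected cube corners rather than the full projected footprint, and the per-view projection functions and silhouette masks are supplied by the caller):

```python
def classify_cube(center, half, silhouettes):
    """Classify an axis-aligned cube against a set of silhouettes.

    Each silhouette is a (project, mask) pair: `project` maps a 3D
    point to integer pixel coordinates (u, v), and `mask` is a boolean
    image where True marks the object.  Returns 'black' (corners inside
    all silhouettes), 'white' (fully outside at least one view, so the
    cube can be carved away), or 'gray' (ambiguous).
    """
    corners = [(center[0] + sx * half, center[1] + sy * half, center[2] + sz * half)
               for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    state = 'black'
    for project, mask in silhouettes:
        hits = []
        for corner in corners:
            u, v = project(corner)
            inside = (0 <= u < mask.shape[1] and 0 <= v < mask.shape[0]
                      and mask[v, u])
            hits.append(inside)
        if not any(hits):
            return 'white'        # fully outside this view: not part of the object
        if not all(hits):
            state = 'gray'        # straddles the silhouette boundary in this view
    return state
```

Gray cubes would then be subdivided into eight children and re-tested, which yields the octree representation.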

Additional information


Comparing Dimensionality Reductions for Eye Movement Data (project 8) - MB
There was a recent paper on using t-SNE for eye movement data. The approach developed in this paper was not as successful and promising as expected.

In this project the students should experiment with further dimensionality reduction methods, such as multidimensional scaling, and compare the results. The major goal of this project is to reduce spatio-temporal eye movement data to feature vectors that are in turn reduced to 2D, in order to reveal groups and clusters in the visual attention behavior of eye-tracked people.


Real-time 2D to 3D video conversion (project 9) - AJ
Using the motion compensation (MC) data of advanced video codecs such as H.264, it is possible to recover depth information in real time, starting from 2D video sequences. The MC data, which is extracted during the video decoding process, is used to calculate the relative projected velocity of the elements in the scene. This velocity is then used to estimate the relative depths of pixel patches in each picture of the video sequence. The depth estimation is performed alongside the standard H.264 decoding process, leading to real-time efficiency. Implement this method using any of the freely available H.264 video decoders. Aim at real-time performance and render the recovered 3D scene simultaneously with the 2D video sequence.
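The velocity-to-depth idea can be sketched in a few lines (a toy stand-in, not the paper's method: it assumes a laterally translating camera and takes the median motion vector as the global camera motion):

```python
import statistics

def depth_from_motion(mvs, eps=1e-3):
    """Relative depth per macroblock from H.264-style motion vectors.

    `mvs` is a grid (list of rows) of (dx, dy) motion vectors, one per
    macroblock.  Under a laterally translating camera, nearer patches
    sweep faster across the image, so relative depth is taken as
    inversely proportional to the residual motion magnitude after
    removing the median (global camera) motion.  Smaller values mean
    nearer patches.
    """
    gx = statistics.median(dx for row in mvs for dx, _ in row)
    gy = statistics.median(dy for row in mvs for _, dy in row)
    return [[1.0 / (((dx - gx) ** 2 + (dy - gy) ** 2) ** 0.5 + eps)
             for dx, dy in row] for row in mvs]
```

In the real pipeline, the motion vectors come straight from the decoder's MC data, so no extra motion estimation is needed.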

Additional information


Voronoi Diagrams Based on n-Nearest Centroids (project 10) - MB
Voronoi diagrams have been developed for creating aesthetically appealing visualizations based on a separation of space into Voronoi cells. The general idea starts with a number of centroids, whereby each point in 2D gets mapped to the nearest centroid. Color coding those mappings by taking into account the category to which a centroid belongs produces useful visualizations. However, if not the nearest centroid but the n nearest, or a combination of several, is taken into account, we can get different results.

The goal of this project is to generate (interactive) Voronoi diagrams from given images or scenes. Each image is first transformed into a spatial point cloud, and those points then serve as the centroids for the approach. Filtering, color coding, and manual point adaptation should be supported.
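The n-nearest generalization is a small change to the standard nearest-centroid mapping, as this brute-force Python sketch shows (illustrative only; a real implementation would use a spatial index or GPU rasterization for speed):

```python
def n_nearest_voronoi(width, height, centroids, n=2):
    """Label each pixel by the *set* of its n nearest centroids.

    Ordinary Voronoi shading is the case n=1 (nearest centroid only);
    using the unordered set of the n nearest instead turns the borders
    between cells into regions of their own, giving the modified
    diagrams this project asks about.  `centroids` is a list of (x, y)
    points; returns a row-major grid of frozensets of centroid indices,
    which can then be mapped to colors.
    """
    grid = []
    for y in range(height):
        row = []
        for x in range(width):
            ranked = sorted(range(len(centroids)),
                            key=lambda i: (centroids[i][0] - x) ** 2 +
                                          (centroids[i][1] - y) ** 2)
            row.append(frozenset(ranked[:n]))
        grid.append(row)
    return grid
```

Color coding the label sets (e.g. blending the colors of the n centroids) then produces the images; interaction would let users change n, filter centroids, or drag points.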


Reinforcement learning for simulating crawler vehicles (project 11) - AJ
Build an interactive educational virtual environment in which various crawler-type vehicles (see below) are simulated. You will have to use a physics engine to implement the dynamics of the vehicle and the sensing of the environment. Start with a basic crawler using simple rules for reinforcement learning and gradually enhance it to perform more and more actions. The educational part of this assignment is that the interaction with the vehicles and environment and the working of the vehicles can be clearly illustrated. Furthermore, the end result should show superb 3D modelling, rendering and interaction.
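The "simple rules for reinforcement learning" can start as small as tabular Q-learning. The sketch below uses a hypothetical four-pose crawler arm with a hand-made reward model (pushing the fully extended arm down drags the body forward); it is meant only to show the shape of the learning loop, not a physics-based simulation:

```python
import random

def train_crawler(steps=5000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning for a toy crawler arm (hypothetical model).

    The arm has 4 poses (0 = retracted .. 3 = fully extended); actions
    raise (+1) or lower (-1) it, clamped to the valid range.  Lowering
    the fully extended arm pushes against the ground and drags the body
    forward (reward +1); every other move costs a little energy (-0.1).
    The learned greedy policy should pump the arm between poses 2 and 3.
    """
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(4) for a in (-1, 1)}
    s = 0
    for _ in range(steps):
        if rng.random() < eps:                       # epsilon-greedy exploration
            a = rng.choice((-1, 1))
        else:
            a = max((-1, 1), key=lambda act: q[(s, act)])
        s2 = min(3, max(0, s + a))                   # clamped arm pose
        r = 1.0 if (s == 3 and a == -1) else -0.1    # power stroke vs. energy cost
        q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
        s = s2
    return q
```

In the actual project, the states and rewards would come from the physics engine (joint angles, body displacement) rather than from this hand-made table.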

Additional information


Eye Movement Roses (project 12) - MB
Eye movement data comes in the form of spatio-temporal trajectories, for example one per participant in an eye tracking study. Exploring vast amounts of such eye movement data is quite challenging, since typical visualizations like visual attention maps or gaze plots produce visual clutter or aggregate over time and participants.

In this project the students should develop a radial visualization, called the Eye Movement Roses. The intention is to show the temporal behavior of eye movement patterns based on certain movement orientations. Interactions should allow users to identify dynamic visual attention patterns in the data, and there should also be a link to the shown stimulus, with the eye movement data visualized on top of it.
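The basic ingredient of such a rose is an angular histogram of saccade directions, which is straightforward to compute (a minimal sketch; real gaze data would first need fixation detection, and screen coordinates usually have the y-axis pointing down):

```python
import math

def movement_rose(fixations, n_bins=8):
    """Angular histogram ('rose') of saccade directions.

    `fixations` is a time-ordered list of (x, y) gaze positions; each
    consecutive pair defines a saccade whose direction is binned into
    one of `n_bins` equal sectors (bin 0 centred on 'east', counting
    counter-clockwise).  The counts give the petal lengths of the rose.
    """
    counts = [0] * n_bins
    sector = 2 * math.pi / n_bins
    for (x0, y0), (x1, y1) in zip(fixations, fixations[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        counts[int(((angle + sector / 2) % (2 * math.pi)) // sector)] += 1
    return counts
```

Computing such a rose per time window, rather than over the whole recording, is what would reveal the temporal behavior the project asks for.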


3D modeling tool using Google Poly (project 13) - AJ
Build a 3D modeling tool which uses as input the various objects/models from the Google Poly database. The tool should support various CAD operations and should allow for easy placement and manipulation of basic 3D models from the database. The focus should be on rendering, interaction and presentation, in the spirit of scenario editors for popular 3D games.

Additional information


Splatting the Hairball (project 14) - MB
Line-based diagrams produce visual clutter when the data gets too big or too dense. Examples are node-link diagrams for graphs, parallel coordinates for multivariate data, or gaze plots for eye movement data (just a few examples from a longer list).

In this project the students should experiment with different parameters to splat the line-based diagrams, making it possible to see the structure of the lines again. Interactions should be integrated to adapt certain parameters or to filter the data.
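The essence of splatting is to accumulate per-pixel line density instead of drawing opaque lines; for example (a minimal sketch that samples points uniformly along each segment instead of using proper rasterization and a smoothing kernel):

```python
def splat_lines(lines, width, height, samples=100):
    """Accumulate line segments into a density map ('splatting').

    Instead of drawing each line opaquely, every segment deposits a
    small amount of density into the pixels it passes through.  The
    resulting scalar field can be color-mapped to reveal the dominant
    line structure even when individually drawn lines would produce an
    unreadable hairball.  `lines` is a list of ((x0, y0), (x1, y1))
    segments; returns a row-major grid of density values.
    """
    density = [[0.0] * width for _ in range(height)]
    for (x0, y0), (x1, y1) in lines:
        for i in range(samples + 1):
            t = i / samples
            x = int(round(x0 + t * (x1 - x0)))
            y = int(round(y0 + t * (y1 - y0)))
            if 0 <= x < width and 0 <= y < height:
                density[y][x] += 1.0 / (samples + 1)
    return density
```

The parameters the project mentions (kernel size, density-to-color transfer function, filtering) would all operate on this density field.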


The Source Code Explorer (project 15) - MB
Source code can grow quite large, running into thousands of lines of code. Understanding the structure of the code, or the several metrics derived from it, is quite challenging.

In this project the students should develop a scalable (pixel-based) visualization for source code that provides an overview of a large project. Similar approaches have been researched in the SeeSoft project, but a more aggregating technique with further statistical analyses could extend that previous work.


Treemap-Based Views on File Systems (project 16) - MB
Treemaps are a great way to get an overview of hierarchies due to their space-filling nature. However, in many cases a hierarchy contains much more information than just the hierarchical organization of the elements. For example, a file system has information about file sizes, file types, or file content. Moreover, the hierarchy itself carries additional properties that are sometimes hard to see, for example the number of subtrees, the number of leaf nodes, the average branching factor, and so on.

In this project the students should develop a treemap-based interactive visualization with which typical file systems can be visually explored. A web-based visualization would be appreciated, letting users see their file system in one view and interactively play with the parameters to identify lost files or strange behaviors. Moreover, relations between files should be indicated as arcs or straight lines on top of the treemap visualization.
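As a starting point, the classic slice-and-dice layout already maps a file-system tree to nested rectangles; the sketch below (with a made-up tuple-based tree format) could later be swapped for a squarified layout:

```python
def _size(node):
    """Total size of a file ('name', size) or directory ('name', [children])."""
    content = node[1]
    if isinstance(content, (int, float)):
        return content
    return sum(_size(child) for child in content)

def slice_and_dice(node, x, y, w, h, depth=0, out=None):
    """Slice-and-dice treemap layout for a nested file-system tree.

    Rectangles are split along alternating axes, proportionally to
    subtree size.  Returns a list of (name, x, y, w, h) leaf cells,
    ready to be drawn and color-coded by file type or age.
    """
    out = [] if out is None else out
    name, content = node
    if isinstance(content, (int, float)):
        out.append((name, x, y, w, h))
        return out
    total = sum(_size(child) for child in content)
    offset = 0.0
    for child in content:
        frac = _size(child) / total
        if depth % 2 == 0:   # split horizontally at even depths
            slice_and_dice(child, x + offset * w, y, frac * w, h, depth + 1, out)
        else:                # split vertically at odd depths
            slice_and_dice(child, x, y + offset * h, w, frac * h, depth + 1, out)
        offset += frac
    return out
```

The arcs between related files mentioned above would then be drawn between the centers of the returned leaf cells.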


3D Generalized Pythagoras Trees (project 17) - MB
There was a recent publication on Generalized Pythagoras Trees with the focus on reducing the overlaps. The described approach was developed in 2D.

The general research problem in this project is whether the same idea can also be implemented in 3D. There the overlap cannot be completely eliminated, but there should at least be good heuristics for reducing it. The students should integrate interaction and apply the technique to hierarchical datasets, for example file systems.


Visualizing Information Hierarchies with Fractals (project 18) - MB
Fractals like Koch's snowflake, the Sierpinski triangle and many more can produce aesthetically appealing results, in particular if the focus is on showing self-similarities.

In this project the students should develop a tool that produces typical fractal visualizations but, as an add-on, adapts the fractals to also encode hierarchical datasets, for example the students' own file systems or NCBI taxonomies. Interactions should be integrated, as well as a way to switch between different fractal approaches.
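As a concrete starting point, a Koch curve generator is only a few lines; hierarchy data could then be encoded by varying recursion depth or color per subtree (the function below is the plain textbook construction, with that data mapping left open):

```python
def koch_curve(p0, p1, depth):
    """Recursively subdivide a segment into the Koch curve.

    Each segment is split into thirds and a triangular bump is raised
    over the middle third; the recursion depth controls the level of
    self-similar detail.  Returns the polyline's points from p0 to p1;
    three such curves around a triangle give Koch's snowflake.
    """
    if depth == 0:
        return [p0, p1]
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = (x1 - x0) / 3.0, (y1 - y0) / 3.0
    a = (x0 + dx, y0 + dy)              # 1/3 point
    b = (x0 + 2 * dx, y0 + 2 * dy)      # 2/3 point
    cos60, sin60 = 0.5, 3 ** 0.5 / 2    # rotate the middle third by +60 degrees
    apex = (a[0] + dx * cos60 - dy * sin60,
            a[1] + dx * sin60 + dy * cos60)
    points = []
    for q0, q1 in ((p0, a), (a, apex), (apex, b), (b, p1)):
        points.extend(koch_curve(q0, q1, depth - 1)[:-1])
    points.append(p1)
    return points
```

A hierarchy-aware variant might, for instance, recurse deeper along segments that correspond to larger subtrees.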