Projects




Protosphere - A GPU-Assisted Prototype Guided Sphere Packing Algorithm for Arbitrary Objects

Project members: Prof. Dr. Gabriel Zachmann, Dr. René Weller

We present a new algorithm that efficiently computes a space-filling sphere packing for arbitrary objects. It is independent of the object's representation and can easily be extended to higher dimensions. The basic idea is very simple and related to prototype-based approaches known from machine learning. This approach directly leads to a parallel algorithm that we have implemented using CUDA. As a byproduct, our algorithm yields an approximation of the object's medial axis.
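The prototype idea can be illustrated with a small CPU sketch. This is not the actual GPU algorithm (which converges many prototypes in parallel and handles arbitrary object representations); here a 2D axis-aligned box stands in for the object, and all names are illustrative:

```python
import random

def pack_spheres_in_box(width, height, n_spheres, n_prototypes=200, seed=0):
    """Greedy prototype-style packing sketch: each iteration scatters
    random prototype points, keeps the one with the largest clearance
    (distance to the box boundary and to all placed spheres), and
    inserts a sphere of that radius there."""
    rng = random.Random(seed)
    spheres = []  # list of (x, y, radius)
    for _ in range(n_spheres):
        best = None
        for _ in range(n_prototypes):
            x, y = rng.uniform(0, width), rng.uniform(0, height)
            # clearance to the box boundary ...
            clearance = min(x, width - x, y, height - y)
            # ... and to the surfaces of already placed spheres
            for sx, sy, sr in spheres:
                d = ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5 - sr
                clearance = min(clearance, d)
            if clearance > 0 and (best is None or clearance > best[2]):
                best = (x, y, clearance)
        if best is None:
            break  # no free space found among the prototypes
        spheres.append(best)
    return spheres
```

By construction, every new sphere's radius equals its clearance, so the packing is overlap-free, and the best prototype positions tend to concentrate near the medial axis.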

This project is partially funded by BMBF grant Avilus / 01 IM 08 001 U.

For further information please visit our project homepage.

Reference: [BibTeX]

E-Mail: rene.weller@collision-detection.com



Haptesha - A Collaborative Multi-User Haptic Workspace

Project members: Prof. Dr. Gabriel Zachmann, Dr. René Weller

Haptesha is a haptic workspace that allows high fidelity two-handed multi-user interactions in scenarios containing a large number of dynamically simulated rigid objects and a polygon count that is only limited by the capabilities of the graphics card.

This project is partially funded by BMBF grant Avilus / 01 IM 08 001 U.

For further information please visit our project homepage.

Awards: Winner of RTT Emerging Technology Contest 2010

Reference: [BibTeX]

E-Mail: rene.weller@collision-detection.com




Inner Sphere Trees

Project members: Prof. Dr. Gabriel Zachmann, Dr. René Weller

Collision detection between rigid objects plays an important role in many fields of robotics and computer graphics, e.g. for path planning, haptics, physically-based simulations, and medical applications. Today, there exists a wide variety of freely available collision detection libraries, and nearly all of them are able to work at interactive rates, even for very complex objects.
Most collision detection algorithms dealing with rigid objects use some kind of bounding volume hierarchy (BVH). The main idea behind a BVH is to subdivide the primitives of an object hierarchically until only single primitives are left at the leaves. BVHs guarantee very fast responses at query time, as long as no information beyond the set of colliding polygons is required for the collision response. However, most applications require much more information in order to resolve or avoid collisions.
One way to do this is to compute repelling forces based on the penetration depth. However, there is no universally accepted definition of the penetration depth between a pair of polygonal models. Mostly, the minimum translation vector to separate the objects is used, but this may lead to discontinuous forces. Moreover, haptic rendering requires update rates of at least 200 Hz, but preferably 1 kHz, to guarantee stable force feedback. Consequently, the collision detection time should never exceed 5 msec.
We present a novel geometric data structure for approximate collision detection at haptic rates between rigid objects. Our data structure, which we call inner sphere trees, supports both proximity queries and penetration volume queries; the penetration volume is related to the water displacement of the overlapping region and thus corresponds to a physically motivated force. Our method is able to compute continuous contact forces and torques that enable a stable rendering of 6-DOF penalty-based distributed contacts.
The main idea of our new data structure is to bound the object from the inside with a bounding volume hierarchy, which can be built based on dense sphere packings. The results show performance at haptic rates both for proximity and penetration volume queries for models consisting of hundreds of thousands of polygons.
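The penetration volume of two inner sphere packings can be illustrated with a brute-force sketch (the actual inner sphere trees prune most sphere pairs with the hierarchy; all names here are illustrative):

```python
import math

def sphere_overlap_volume(c1, r1, c2, r2):
    """Volume of the intersection ('lens') of two spheres."""
    d = math.dist(c1, c2)
    if d >= r1 + r2:                  # disjoint spheres
        return 0.0
    if d <= abs(r1 - r2):             # one sphere inside the other
        r = min(r1, r2)
        return 4.0 / 3.0 * math.pi * r ** 3
    # standard sphere-sphere intersection (lens) volume
    return (math.pi * (r1 + r2 - d) ** 2 *
            (d * d + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2)) / (12 * d)

def penetration_volume(spheres_a, spheres_b):
    """Approximate penetration volume of two objects given as inner
    sphere packings: sum of all pairwise sphere overlaps.  The real
    data structure avoids the quadratic pair enumeration."""
    return sum(sphere_overlap_volume(ca, ra, cb, rb)
               for ca, ra in spheres_a
               for cb, rb in spheres_b)
```

Because the overlap volume varies continuously with the object poses, forces derived from it are continuous as well, which is the property needed for stable haptic rendering.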

This project is partially funded by DFG grant ZA292/1-1 and BMBF grant Avilus / 01 IM 08 001 U.

For further information please visit our project homepage.

Reference: [BibTeX] [BibTeX] [BibTeX] [BibTeX] [BibTeX]

E-Mail: rene.weller@collision-detection.com



Open-Source Benchmarking Suite for Collision Detection Libraries

Project members: Prof. Dr. Gabriel Zachmann, Dr. René Weller

Fast algorithms for collision detection between polygonal objects are needed in many fields of computer science. In nearly all of these applications, collision detection is the computational bottleneck. To achieve maximum application performance, it is essential to select the best-suited algorithm.
A standardized benchmarking suite for collision detection would make fair comparisons between algorithms much easier. Such a benchmark must be designed with care, so that it includes a broad spectrum of different and interesting contact scenarios. However, no standard benchmarks have been available to compare different algorithms, so it is nontrivial to compare two algorithms and their implementations. In this project, we developed a simple benchmark procedure that eliminates such effects. It has been kept very simple so that other researchers can easily reproduce the results and compare their algorithms.
Our benchmarking suite is flexible, robust, and it is easy to integrate other collision detection libraries. Moreover, the benchmarking suite is freely available and can be downloaded here together with a set of objects in different resolutions that cover a wide range of possible scenarios for collision detection algorithms, and a set of precomputed test points for these objects.
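The core timing scheme of such a benchmark can be sketched in a few lines (a hypothetical Python outline with illustrative names; the real suite records far more detail and uses the precomputed test points mentioned above):

```python
import time

def benchmark_collision(check_collision, obj_a, obj_b, placements, repeats=5):
    """Time a collision query over a set of precomputed relative
    placements (e.g. sampled at fixed object distances), averaging
    over several repetitions to smooth out timer noise."""
    results = []
    for transform in placements:
        t0 = time.perf_counter()
        for _ in range(repeats):
            hit = check_collision(obj_a, obj_b, transform)
        elapsed = (time.perf_counter() - t0) / repeats
        results.append((transform, hit, elapsed))
    return results
```

Using the same fixed set of placements for every library under test is what makes the comparison fair and reproducible.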

For further information please visit our project homepage.

Reference: [BibTeX] [BibTeX] [BibTeX]

E-Mail: rene.weller@collision-detection.com



Open-Source Collision Detection Library

Project members: Prof. Dr. Gabriel Zachmann, Dr. René Weller

Fast and exact collision detection between a pair of graphical objects undergoing rigid motions is at the core of many simulation and planning algorithms in computer graphics and related areas (for instance, automatic path finding, or tolerance checking). In particular, virtual reality applications such as virtual prototyping or haptic rendering need exact collision detection at interactive speed for very complex, arbitrary ``polygon soups''. It is also a fundamental problem of dynamic simulation of rigid bodies, simulation of natural interaction with objects, haptic rendering, path planning, and CAD/CAM.
In order to provide an easy-to-use library for other researchers and open-source projects, we have implemented our algorithms in an object-oriented library, which is based on OpenSG. It is structured as a pipeline (proposed by [Zach01a]), contains algorithms for the broad phase (grid, convex hull test, separating planes), and the narrow phase (Dop-Tree, BoxTree, etc.).
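The two-stage pipeline structure can be sketched in a language-neutral way (a Python illustration, not the C++/OpenSG library itself; a simple AABB test stands in for the library's broad-phase options, and all names are illustrative):

```python
def broad_phase_aabb(objects):
    """Broad phase: prune object pairs whose axis-aligned bounding
    boxes are disjoint.  (The library also offers grids, convex hull
    tests, and separating planes for this stage.)"""
    pairs = []
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            (amin, amax) = objects[i]["aabb"]
            (bmin, bmax) = objects[j]["aabb"]
            if all(amin[k] <= bmax[k] and bmin[k] <= amax[k]
                   for k in range(3)):
                pairs.append((i, j))
    return pairs

def collision_pipeline(objects, narrow_phase):
    """Pipeline: the broad phase proposes candidate pairs, then a
    narrow phase (e.g. a DOP-tree traversal) confirms actual contacts."""
    return [(i, j) for i, j in broad_phase_aabb(objects)
            if narrow_phase(objects[i], objects[j])]
```

The point of the pipeline is that the cheap broad phase discards most pairs early, so the expensive exact test runs only on a handful of candidates.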

For further information please visit our project homepage.

Reference: [BibTeX] [BibTeX] [BibTeX] [BibTeX] [BibTeX] [BibTeX] [BibTeX] [BibTeX]

E-Mail: rene.weller@collision-detection.com



Kinetic Bounding Volume Hierarchies for Deformable Objects

Project members: Prof. Dr. Gabriel Zachmann, Dr. René Weller

Bounding volume hierarchies for geometric objects are widely employed in many areas of computer science to accelerate geometric queries, e.g., in computer graphics for ray-tracing, occlusion culling and collision detection. Usually, a bounding volume hierarchy is constructed in a pre-processing step which is suitable as long as the objects are rigid. However, deformable objects play an important role, e.g., for creating virtual environments in medical applications or cloth simulation. If such an object deforms, the pre-processed hierarchy becomes invalid.
In order to use this method for deforming objects as well, it is necessary to update the hierarchies after each deformation.
In this project, we utilize the framework of event-based kinetic data structures for designing and analyzing new algorithms for updating bounding volume hierarchies undergoing arbitrary deformations. In addition, we apply our new algorithms and data structures to the problem of collision detection.
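The event-based flavor of kinetic data structures can be illustrated on a toy 1D problem: tracking the maximum (think: the upper end of a bounding interval) of linearly moving points. A certificate "the current leader stays ahead of point i" fails at their crossing time, and only these events trigger updates. This is an illustrative sketch, not our BVH algorithm:

```python
def kinetic_max(points, t_end):
    """Track the maximum of 1D points x_i(t) = a_i + b_i * t on
    [0, t_end].  points is a list of (a, b) pairs.  Returns the
    history of (event_time, leader_index) changes -- the structure
    is updated only when a certificate fails, never per frame."""
    def value(i, t):
        a, b = points[i]
        return a + b * t

    t = 0.0
    # initial leader: largest value, ties broken by larger velocity
    leader = max(range(len(points)),
                 key=lambda i: (value(i, 0.0), points[i][1]))
    history = [(0.0, leader)]
    while True:
        # earliest certificate failure: some point overtakes the leader
        best_t, challenger = None, None
        for i in range(len(points)):
            if i == leader:
                continue
            da = points[i][0] - points[leader][0]
            db = points[i][1] - points[leader][1]
            if db <= 0:
                continue  # never overtakes the current leader
            cross = -da / db
            if cross > t and (best_t is None or cross < best_t):
                best_t, challenger = cross, i
        if best_t is None or best_t >= t_end:
            return history
        t, leader = best_t, challenger   # process event, reschedule
        history.append((t, leader))
```

The analogy to a kinetic BVH: bounding volumes stay valid between events, so no work is spent while the certificates hold, regardless of how fast the geometry moves.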

Reference: [BibTeX] [publication] [Book]

E-Mail: rene.weller@collision-detection.com



Natural Interaction in Virtual Environments

Project members: Prof. Dr. Gabriel Zachmann, Dr. René Weller

Virtual reality (VR) promises to allow users to experience and work with three-dimensional computer-simulated environments just like with the real world. Currently, VR offers a number of efficient and more or less intuitive interaction paradigms.
However, users still cannot interact with virtual environments in a way they are used to in the real world. In particular, the human hand, which is our most versatile tool, is still only very crudely represented in the virtual world. Natural manual operations, such as grasping, pinching, pushing, etc., cannot be performed with the virtual hand in a plausible and efficient way in real-time.
Therefore, the goal of this project is to model and simulate the real human hand by a virtual hand. Such a virtual hand is controlled by the user of a virtual environment via hand tracking technologies, such as a CyberGlove or camera-based hand tracking (see our companion project). Then, the interaction between such a human hand model and the graphical objects in the virtual environment is to be modelled and simulated, such that the aforementioned natural hand operations can be performed efficiently. Note that our approach is not to try to achieve physical correctness of the interactions, but to achieve real-time performance under all circumstances while maintaining physical plausibility.
In order to achieve our goal, we focus our research on deformable collision detection, physically-based simulation, and realistic animation of the virtual hand.
This technology will enable a number of very useful applications that, until now, could not be performed effectively and satisfactorily. Some of them are virtual assembly simulation, 3D sketching, medical surgery training, and simulation games.

This project is partially funded by DFG grant ZA292/1-1.

Reference: [BibTeX] [BibTeX] [BibTeX]

E-Mail: rene.weller@collision-detection.com



Shader Maker

Project members: Markus Kramer, Dr. René Weller, Prof. Dr. Gabriel Zachmann

Note that this is not a regular publication but a software release.

Shader Maker is a simple, cross-platform GLSL editor. It works on Windows, Linux, and Mac OS X.

It provides the basics of a shader editor, such that students can get started with writing their own shaders as quickly as possible. This includes:
- syntax highlighting in the GLSL editors;
- a geometry shader editor (as well as vertex and fragment shader editors, of course);
- interactive editing of the uniform variables;
- light source parameters;
- pre-defined simple shapes (e.g., torus et al.) and a simple OBJ loader;
- and a few more.

For download and further information, please visit our project website.