2014

Haptic Robotization of Human Body via Data-Driven Vibrotactile Feedback

Kurihara, Y., Takei, S., Nakai, Y., Hachisu, T., Kuchenbecker, K. J., Kajimoto, H.

Entertainment Computing, 5(4):485-494, December 2014 (article)

[BibTex]


Omnidirectional 3D Reconstruction in Augmented Manhattan Worlds

Schoenbein, M., Geiger, A.

IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pages: 716-723, IEEE, Chicago, IL, USA, October 2014 (conference)

Abstract
This paper proposes a method for high-quality omnidirectional 3D reconstruction of augmented Manhattan worlds from catadioptric stereo video sequences. In contrast to existing works, we do not rely on constructing virtual perspective views, but instead propose to optimize depth jointly in a unified omnidirectional space. Furthermore, we show that plane-based prior models can be applied even though planes in 3D do not project to planes in the omnidirectional domain. Towards this goal, we propose an omnidirectional slanted-plane Markov random field model which relies on plane hypotheses extracted using a novel voting scheme for 3D planes in omnidirectional space. To quantitatively evaluate our method, we introduce a dataset captured with our autonomous driving platform AnnieWAY, which we equipped with two horizontally aligned catadioptric cameras and a Velodyne HDL-64E laser scanner for precise ground-truth depth measurements. As evidenced by our experiments, the proposed method clearly benefits from the unified view and significantly outperforms existing stereo matching techniques both quantitatively and qualitatively. Furthermore, our method reduces noise, and the obtained depth maps can be represented very compactly by a small number of image segments and plane parameters.
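
As a rough illustration of the slanted-plane model mentioned in the abstract (a generic sketch, not the paper's exact formulation): each image segment s is assigned plane parameters \pi_s, and depth is obtained by minimizing an energy of the form

E(\pi) = \sum_{s \in \mathcal{S}} \sum_{p \in s} \rho\bigl( d_p(\pi_s) - \hat{d}_p \bigr) + \lambda \sum_{(s,t) \in \mathcal{N}} \psi(\pi_s, \pi_t),

where d_p(\pi_s) is the depth (or disparity) induced at pixel p by plane \pi_s, \hat{d}_p is the measured value, \rho is a robust data penalty, and \psi penalizes disagreement between neighboring segments; in this paper the plane hypotheses entering such an energy come from the omnidirectional voting scheme described above.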

pdf DOI [BibTex]


Automatic Skill Evaluation for a Needle Passing Task in Robotic Surgery

Leung, S., Kuchenbecker, K. J.

In Proc. IROS Workshop on the Role of Human Sensorimotor Control in Robotic Surgery, Chicago, Illinois, September 2014, Poster presentation given by Kuchenbecker. Best Poster Award (inproceedings)

[BibTex]


Modeling and Rendering Realistic Textures from Unconstrained Tool-Surface Interactions

Culbertson, H., Unwin, J., Kuchenbecker, K. J.

IEEE Transactions on Haptics, 7(3):381-393, July 2014 (article)

[BibTex]


Optimizing Average Precision using Weakly Supervised Data

Behl, A., Jawahar, C. V., Kumar, M. P.

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2014 (conference)

[BibTex]


A Data-driven Approach to Remote Tactile Interaction: From a BioTac Sensor to Any Fingertip Cutaneous Device

Pacchierotti, C., Prattichizzo, D., Kuchenbecker, K. J.

In Haptics: Neuroscience, Devices, Modeling, and Applications, Proc. EuroHaptics, Part I, 8618, pages: 418-424, Lecture Notes in Computer Science, Springer-Verlag, Berlin Heidelberg, June 2014, Poster presentation given by Pacchierotti in Versailles, France (inproceedings)

[BibTex]


Evaluating the BioTac’s Ability to Detect and Characterize Lumps in Simulated Tissue

Hui, J. C. T., Kuchenbecker, K. J.

In Haptics: Neuroscience, Devices, Modeling, and Applications, Proc. EuroHaptics, Part II, 8619, pages: 295-302, Lecture Notes in Computer Science, Springer-Verlag, Berlin Heidelberg, June 2014, Poster presentation given by Hui in Versailles, France (inproceedings)

[BibTex]


Simultaneous Underwater Visibility Assessment, Enhancement and Improved Stereo

Roser, M., Dunbabin, M., Geiger, A.

IEEE International Conference on Robotics and Automation (ICRA), pages: 3840-3847, Hong Kong, China, June 2014 (conference)

Abstract
Vision-based underwater navigation and obstacle avoidance demand robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for simultaneous underwater image quality assessment, visibility enhancement, and disparity computation that increases stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates visibility properties from a sparse 3D map of the original degraded image using a physical underwater light attenuation model. First, an iterated distance-adaptive image contrast enhancement enables dense disparity computation and visibility estimation. Second, using a light attenuation model for ocean water, a color-corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show that the proposed technique improves range estimation over the original images as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allow real-time implementation onboard an Autonomous Underwater Vehicle for improved navigation and obstacle avoidance performance.
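
For context, a commonly used physical light attenuation model for water (of the kind the abstract refers to; the paper's exact parameterization may differ) expresses the observed intensity I_c at pixel x in color channel c as

I_c(x) = J_c(x)\, e^{-\beta_c d(x)} + B_c \bigl( 1 - e^{-\beta_c d(x)} \bigr),

where J_c is the unattenuated scene radiance, d(x) the range to the scene point, \beta_c the per-channel attenuation coefficient, and B_c the background (veiling) light. Inverting such a model given a range estimate d(x), e.g. from sparse stereo, yields both a color-corrected image and a visibility estimate via \beta_c.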

pdf DOI [BibTex]


Calibrating and Centering Quasi-Central Catadioptric Cameras

Schoenbein, M., Strauss, T., Geiger, A.

IEEE International Conference on Robotics and Automation (ICRA), pages: 4443-4450, Hong Kong, China, June 2014 (conference)

Abstract
Non-central catadioptric models are able to cope with irregular camera setups and inaccuracies in the manufacturing process but are computationally demanding and thus not suitable for robotic applications. On the other hand, calibrating a quasi-central (almost central) system with a central model introduces errors due to a wrong relationship between the viewing ray orientations and the pixels on the image sensor. In this paper, we propose a central approximation to quasi-central catadioptric camera systems that is both accurate and efficient. We observe that the distance to points in 3D is typically large compared to deviations from the single viewpoint. Thus, we first calibrate the system using a state-of-the-art non-central camera model. Next, we show that by remapping the observations we are able to match the orientation of the viewing rays of a much simpler single viewpoint model with the true ray orientations. While our approximation is general and applicable to all quasi-central camera systems, we focus on one of the most common cases in practice: hypercatadioptric cameras. We compare our model to a variety of baselines in synthetic and real localization and motion estimation experiments. We show that by using the proposed model we are able to achieve near non-central accuracy while obtaining speed-ups of more than three orders of magnitude compared to state-of-the-art non-central models.
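
One way to read the remapping step described above (a sketch under assumed notation, not necessarily the paper's exact procedure): after non-central calibration, each pixel i has a viewing ray with origin o_i and direction r_i; placing a point on that ray at a nominal scene distance \lambda and re-expressing it from a chosen single viewpoint c gives the approximated central ray direction

\tilde{r}_i = \frac{(o_i + \lambda r_i) - c}{\lVert (o_i + \lambda r_i) - c \rVert}.

Because typical scene distances are large compared to the deviation of o_i from c, the angular error of such a central approximation stays small, which is the observation the abstract builds on.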

pdf DOI [BibTex]


3D Traffic Scene Understanding from Movable Platforms

Geiger, A., Lauer, M., Wojek, C., Stiller, C., Urtasun, R.

IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 36(5):1012-1025, IEEE, Los Alamitos, CA, May 2014 (article)

Abstract
In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow and occupancy grids. For each of these cues we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.
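
The contrastive divergence learning mentioned above follows the standard form (shown here generically; the paper's energy and cue-specific likelihoods define the actual gradients): for a model with unnormalized log-density \log \tilde{p}(x;\theta), parameters are updated as

\Delta\theta \propto \mathbb{E}_{x \sim \text{data}}\!\left[ \nabla_\theta \log \tilde{p}(x;\theta) \right] - \mathbb{E}_{x \sim \text{model}}\!\left[ \nabla_\theta \log \tilde{p}(x;\theta) \right],

with the intractable model expectation approximated by a few sampling steps initialized at the training data.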

pdf link (url) [BibTex]


Analyzing Human High-Fives to Create an Effective High-Fiving Robot

Fitter, N. T., Kuchenbecker, K. J.

In Proc. ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages: 156-157, Bielefeld, Germany, March 2014, Poster presentation given by Fitter (inproceedings)

[BibTex]


Dynamic Modeling and Control of Voice-Coil Actuators for High-Fidelity Display of Haptic Vibrations

McMahan, W., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 115-122, Houston, Texas, USA, February 2014, Oral presentation given by Kuchenbecker (inproceedings)

[BibTex]


A Wearable Device for Controlling a Robot Gripper With Fingertip Contact, Pressure, Vibrotactile, and Grip Force Feedback

Pierce, R. M., Fedalei, E. A., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 19-25, Houston, Texas, USA, February 2014, Oral presentation given by Pierce (inproceedings)

[BibTex]


Methods for Robotic Tool-Mediated Haptic Surface Recognition

Romano, J. M., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 49-56, Houston, Texas, USA, February 2014, Oral presentation given by Kuchenbecker. Finalist for Best Paper Award (inproceedings)

[BibTex]


One Hundred Data-Driven Haptic Texture Models and Open-Source Methods for Rendering on 3D Objects

Culbertson, H., Delgado, J. J. L., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 319-325, Houston, Texas, USA, February 2014, Poster presentation given by Culbertson. Finalist for Best Poster Award (inproceedings)

[BibTex]


Learning to Rank using High-Order Information

Dokania, P. K., Behl, A., Jawahar, C. V., Kumar, M. P.

European Conference on Computer Vision (ECCV), 2014 (conference)

[BibTex]


Cutaneous Feedback of Planar Fingertip Deformation and Vibration on a da Vinci Surgical Robot

Pacchierotti, C., Shirsat, P., Koehn, J. K., Prattichizzo, D., Kuchenbecker, K. J.

In Proc. IROS Workshop on the Role of Human Sensorimotor Control in Robotic Surgery, Chicago, Illinois, 2014, Poster presentation given by Koehn (inproceedings)

[BibTex]

2010


Lack of Discriminatory Function for Endoscopy Skills on a Computer-based Simulator

Kim, S., Spencer, G., Makar, G., Ahmad, N., Jaffe, D., Ginsberg, G., Kuchenbecker, K. J., Kochman, M.

Surgical Endoscopy, 24(12):3008-3015, December 2010 (article)

[BibTex]


VerroTouch: High-Frequency Acceleration Feedback for Telerobotic Surgery

Kuchenbecker, K. J., Gewirtz, J., McMahan, W., Standish, D., Martin, P., Bohren, J., Mendoza, P. J., Lee, D. I.

In Haptics: Generating and Perceiving Tangible Sensations, Proc. EuroHaptics, Part I, 6191, pages: 189-196, Lecture Notes in Computer Science, Springer, Amsterdam, Netherlands, July 2010, Oral presentation given by Kuchenbecker (inproceedings)

[BibTex]


Identifying the Role of Proprioception in Upper-Limb Prosthesis Control: Studies on Targeted Motion

Blank, A., Okamura, A. M., Kuchenbecker, K. J.

ACM Transactions on Applied Perception, 7(3):1-23, June 2010 (article)

[BibTex]


Automatic Filter Design for Synthesis of Haptic Textures from Recorded Acceleration Data

Romano, J. M., Yoshioka, T., Kuchenbecker, K. J.

In Proc. IEEE International Conference on Robotics and Automation, pages: 1815-1821, Anchorage, Alaska, USA, May 2010, Oral presentation given by Romano (inproceedings)

[BibTex]


Control of a High Fidelity Ungrounded Torque Feedback Device: The iTorqU 2.1

Winfree, K. N., Romano, J. M., Gewirtz, J., Kuchenbecker, K. J.

In Proc. IEEE International Conference on Robotics and Automation, pages: 1347-1352, Anchorage, Alaska, May 2010, Oral presentation given by Winfree (inproceedings)

[BibTex]


High Frequency Acceleration Feedback Significantly Increases the Realism of Haptically Rendered Textured Surfaces

McMahan, W., Romano, J. M., Rahuman, A. M. A., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 141-148, Waltham, Massachusetts, March 2010, Oral presentation given by McMahan (inproceedings)

[BibTex]


Spatially distributed tactile feedback for kinesthetic motion guidance

Kapur, P., Jensen, M., Buxbaum, L. J., Jax, S. A., Kuchenbecker, K. J.

In Proc. IEEE Haptics Symposium, pages: 519-526, Waltham, Massachusetts, USA, March 2010, Poster presentation given by Kapur. Finalist for Best Poster Award (inproceedings)

[BibTex]


Dimensional Reduction of High-Frequency Accelerations for Haptic Rendering

Landin, N., Romano, J. M., McMahan, W., Kuchenbecker, K. J.

In Haptics: Generating and Perceiving Tangible Sensations, Proc. EuroHaptics, Part II, 6192, pages: 79-86, Lecture Notes in Computer Science, Springer, Amsterdam, Netherlands, 2010, Poster presentation given by Landin (inproceedings)

[BibTex]


VerroTouch: A Vibrotactile Feedback System for Minimally Invasive Robotic Surgery

Kuchenbecker, K. J., Gewirtz, J., McMahan, W., Standish, D., Bohren, J., Martin, P., Wedmid, A., Mendoza, P. J., Lee, D. I.

In Proc. 28th World Congress of Endourology, 2010, PS8-14. Poster presentation given by Wedmid (inproceedings)

[BibTex]

2007


The power of external mentors for women pursuing academic careers in engineering and science: Stories of MentorNet ACE and its Proteges and Mentors

Muller, C. B., Smith, E. H. B., Chou-Green, J., Daniels-Race, T., Drummond, A., Kuchenbecker, K. J.

In Proc. Women in Engineering Programs and Advocates Network (WEPAN) National Conference, Lake Buena Vista, Florida, USA, June 2007, Oral presentation given by Muller (inproceedings)

[BibTex]


Effects of Visual and Proprioceptive Position Feedback on Human Control of Targeted Movement

Kuchenbecker, K. J., Gurari, N., Okamura, A. M.

In Proc. IEEE International Conference on Rehabilitation Robotics, pages: 513-524, Noordwijk, Netherlands, June 2007, Oral and poster presentations given by Kuchenbecker (inproceedings)

[BibTex]


Quantifying the value of visual and haptic position feedback in force-based motion control

Kuchenbecker, K. J., Gurari, N., Okamura, A. M.

In Proc. IEEE World Haptics Conference, pages: 561-562, Tsukuba, Japan, March 2007, Poster presentation given by Kuchenbecker (inproceedings)

[BibTex]


Shaping event-based haptic transients via an improved understanding of real contact dynamics

Fiene, J. P., Kuchenbecker, K. J.

In Proc. IEEE World Haptics Conference, pages: 170-175, Tsukuba, Japan, March 2007, Oral presentation given by Fiene. Best Haptic Technology Paper Award (inproceedings)

[BibTex]

2006


Induced Master Motion in Force-Reflecting Teleoperation

Kuchenbecker, K. J., Niemeyer, G.

ASME Journal of Dynamic Systems, Measurement, and Control, 128(4):800-810, December 2006 (article)

[BibTex]


Improving Telerobotic Touch Via High-Frequency Acceleration Matching

Kuchenbecker, K. J., Niemeyer, G.

In Proc. IEEE International Conference on Robotics and Automation, pages: 3893-3898, Orlando, Florida, USA, May 2006, Oral presentation given by Kuchenbecker (inproceedings)

[BibTex]


Event-Based Haptic Tapping with Grip Force Compensation

Fiene, J. P., Kuchenbecker, K. J., Niemeyer, G.

In Proc. IEEE Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, pages: 117-123, Arlington, Virginia, USA, March 2006, Oral presentation given by Fiene (inproceedings)

[BibTex]


Improving Contact Realism Through Event-Based Haptic Feedback

Kuchenbecker, K. J., Fiene, J. P., Niemeyer, G.

IEEE Transactions on Visualization and Computer Graphics, 12(2):219-230, March 2006 (article)

[BibTex]