

2018


Multi-objective Optimization of Nonconventional Laminated Composite Panels

Serhat, G.

Koç University, October 2018 (phdthesis)

Abstract
Laminated composite panels are extensively used in various industries due to their high stiffness-to-weight ratio and directional properties, which allow stiffness characteristics to be tailored for specific applications. With recent improvements in manufacturing techniques, the technology trend has been shifting toward the development of nonconventional composites. This work aims to develop new methods for the design and optimization of nonconventional laminated composites. The lamination parameters method is used to characterize laminate stiffness matrices in a compact form. An optimization framework based on finite element analysis is developed to compute solutions for different panel geometries, boundary conditions, and load cases. The first part of the work addresses the multi-objective optimization of composite laminates to simultaneously maximize dynamic and load-carrying performance. Conforming and conflicting behaviors of multiple objective functions are investigated by determining Pareto-optimal solutions, which provide valuable insight for multi-objective optimization problems. In the second part, the design of curved laminated panels for optimal dynamic response is studied in detail. First, the designs yielding maximum fundamental frequency values are computed. Next, optimal designs minimizing equivalent radiated power are obtained for panels under harmonic pressure excitation, and their effective frequency bands are shown. The relationship between these two design sets is investigated to assess the effectiveness of the frequency maximization technique. In the last part, a new method based on lamination parameters is proposed for the design of variable-stiffness composite panels. The results demonstrate that the proposed method yields manufacturable designs with smooth fiber paths that outperform constant-stiffness laminates, while retaining the advantages of the lamination parameters formulation.


Multi-objective Optimization of Nonconventional Laminated Composite Panels DOI [BibTex]


Instrumentation, Data, and Algorithms for Visually Understanding Haptic Surface Properties

Burka, A. L.

University of Pennsylvania, Philadelphia, USA, August 2018, Department of Electrical and Systems Engineering (phdthesis)

Abstract
Autonomous robots need to efficiently walk over varied surfaces and grasp diverse objects. We hypothesize that the association between how such surfaces look and how they physically feel during contact can be learned from a database of matched haptic and visual data recorded from various end-effectors' interactions with hundreds of real-world surfaces. Testing this hypothesis required the creation of a new multimodal sensing apparatus, the collection of a large multimodal dataset, and the development of a machine-learning pipeline. This thesis begins by describing the design and construction of the Portable Robotic Optical/Tactile ObservatioN PACKage (PROTONPACK, or Proton for short), an untethered handheld sensing device that emulates the capabilities of the human senses of vision and touch. Its sensory modalities include RGBD vision, egomotion, contact force, and contact vibration. Three interchangeable end-effectors (a steel tooling ball, an OptoForce three-axis force sensor, and a SynTouch BioTac artificial fingertip) allow for different material properties at the contact point and provide additional tactile data. We then detail the calibration process for the motion and force sensing systems, as well as several proof-of-concept surface discrimination experiments that demonstrate the reliability of the device and the utility of the data it collects. This thesis then presents a large-scale dataset of multimodal surface interaction recordings, including 357 unique surfaces such as furniture, fabrics, outdoor fixtures, and items from several private and public material sample collections. Each surface was touched with one, two, or three end-effectors, with approximately one minute of tapping and dragging per end-effector at various forces and speeds. We hope that the larger community of robotics researchers will find broad applications for the published dataset.
Lastly, we demonstrate an algorithm that learns to estimate haptic surface properties given visual input. Surfaces were rated on hardness, roughness, stickiness, and temperature by the human experimenter and by a pool of purely visual observers. We then trained an algorithm to perform the same task as well as infer quantitative properties calculated from the haptic data. Overall, the task of predicting haptic properties from vision alone proved difficult for both humans and computers, but a hybrid algorithm using a deep neural network and a support vector machine achieved correlations between expected and actual regression output of approximately ρ = 0.3 to ρ = 0.5 on previously unseen surfaces.


Project Page [BibTex]



Robust Visual Augmented Reality in Robot-Assisted Surgery

Forte, M.

Politecnico di Milano, Milan, Italy, July 2018, Department of Electronic, Information, and Biomedical Engineering (mastersthesis)

Abstract
The broader objective of this line of research is to test the hypothesis that real-time stereo video analysis and augmented reality can increase safety and task efficiency in robot-assisted surgery. This master’s thesis aims to solve the first step needed to achieve this goal: the creation of a robust system that delivers the envisioned feedback to a surgeon while he or she controls a surgical robot that is identical to those used on human patients. Several approaches for applying augmented reality to da Vinci Surgical Systems have been proposed, but none of them entirely rely on a clinical robot; specifically, they require additional sensors, depend on access to the da Vinci API, are designed for a very specific task, or were tested on systems that are starkly different from those in clinical use. There has also been prior work that presents the real-world camera view and the computer graphics on separate screens, or not in real time. In other scenarios, the digital information is overlaid manually by the surgeons themselves or by computer scientists, rather than being generated automatically in response to the surgeon’s actions. We overcome the aforementioned constraints by acquiring input signals from the da Vinci stereo endoscope and providing augmented reality to the console in real time (less than 150 ms delay, including the 62 ms of inherent latency of the da Vinci). The potential benefits of the resulting system are broad because it was built to be general, rather than customized for any specific task. The entire platform is compatible with any generation of the da Vinci System and does not require a dVRK (da Vinci Research Kit) or access to the API. Thus, it can be applied to existing da Vinci Systems in operating rooms around the world.


Project Page [BibTex]



Tactile perception by electrovibration

Vardar, Y.

Koç University, 2018 (phdthesis)

Abstract
One approach to generating realistic haptic feedback on touch screens is electrovibration. In this technique, the friction force is altered via electrostatic forces, which are generated by applying an alternating voltage signal to the conductive layer of a capacitive touchscreen. Although the technology for rendering haptic effects on touch surfaces using electrovibration is already in place, our knowledge of the perception mechanisms behind these effects is limited. This thesis explores the mechanisms underlying haptic perception of electrovibration in two parts. In the first part, the effect of input signal properties on electrovibration perception is investigated. Our findings indicate that the perception of electrovibration stimuli depends on the frequency-dependent electrical properties of human skin and on human tactile sensitivity. When a voltage signal is applied to a touchscreen, it is filtered electrically by the human finger and generates electrostatic forces in the skin and mechanoreceptors. Depending on the spectral energy content of this electrostatic force signal, different psychophysical channels may be activated. The channel that mediates detection is determined by the frequency component whose energy exceeds the sensory threshold at that frequency. In the second part, the effect of masking on electrovibration perception is investigated. We show that detection thresholds are elevated as linear functions of masking level for both simultaneous and pedestal masking, with pedestal masking being the more effective of the two. Moreover, our results suggest that sharpness perception depends on the local contrast between background and foreground stimuli, which varies as a function of masking amplitude and the activation levels of frequency-dependent psychophysical channels.


Tactile perception by electrovibration [BibTex]

2013


Determination of an Analysis Procedure for FEM-Based Fatigue Calculations

Serhat, G.

Technical University of Munich, December 2013 (mastersthesis)


[BibTex]





Probabilistic Models for 3D Urban Scene Understanding from Movable Platforms

Geiger, A.

Karlsruhe Institute of Technology, April 2013 (phdthesis)

Abstract
Visual 3D scene understanding is an important component in autonomous driving and robot navigation. Intelligent vehicles, for example, often base their decisions on observations obtained from video cameras, as cameras are cheap and easy to employ. Inner-city intersections represent an interesting but also very challenging scenario in this context: the road layout may be very complex, and observations are often noisy or even missing due to heavy occlusions. While highway navigation and autonomous driving on simple and annotated intersections have already been demonstrated successfully, understanding and navigating general inner-city crossings with little prior knowledge remains an unsolved problem. This thesis is a contribution to understanding multi-object traffic scenes from video sequences. All data is provided by a camera system mounted on top of the autonomous driving platform AnnieWAY. The proposed probabilistic generative model reasons jointly about the 3D scene layout as well as the 3D location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. The model takes advantage of monocular information in the form of vehicle tracklets, vanishing lines, and semantic labels. Additionally, the benefit of stereo features such as 3D scene flow and occupancy grids is investigated. Motivated by the impressive driving capabilities of humans, no further information such as GPS, lidar, radar, or map knowledge is required. Experiments conducted on 113 representative intersection sequences show that the developed approach successfully infers the correct layout in a variety of difficult scenarios. To evaluate the importance of each feature cue, experiments with different feature combinations are conducted. Additionally, the proposed method is shown to improve object detection and object orientation estimation performance.


pdf [BibTex]
