962 results for 3D user interfaces
Abstract:
The hemocyanins of the cephalopods Nautilus pompilius and Sepia officinalis carry oxygen between the gills and the tissues. They consist of a cylindrical decamer with an internal collar structure. Whereas a subunit (i.e. one polypeptide chain) of NpH comprises seven paralogous functional units (FU-a to FU-g), a gene duplication of FU-d led to eight FUs in SoH (a, b, c, d, d', e, f, g). In all molluscan hemocyanins, six of these FUs form the outer ring and the remaining ones form the internal collar structure.

In this work, a three-dimensional model of the hemocyanin of Sepia officinalis (SoH) was created. The reconstruction, at a resolution of 8.8 Å (FSC = 0.5), allowed homology models to be fitted and thus a molecular model at pseudo-atomic resolution to be built. Furthermore, two reconstructions of the hemocyanin of Nautilus pompilius (NpH) in different oxygenation states were produced. The models, resolved to 10 and 8.1 Å, show two different conformations of the protein, from which a working model of its allosteric mechanism could be derived. The 8 Å resolution achieved here is currently the highest for any molluscan hemocyanin.

Based on the molecular model of SoH, the topology of the protein was elucidated. It was shown that the additional FU-d' is integrated into the collar, so the basic wall architecture of all molluscan hemocyanins is identical. As the analysis of the molecular model shows, the two isoforms (SoH1 and SoH2) are nearly identical in the interface regions; the comparison with NpH also shows strong agreement. Furthermore, a wealth of information regarding the allosteric signal transfer within the molecule was obtained.

The attempt to image NpH in different oxygenation states was successful. The data sets, prepared under two atmospheric conditions, reproducibly led to two different reconstructions, demonstrating that the experimental approach developed here works; it can now be applied routinely to other proteins. As the structural comparison showed, oxygenation slightly changes the orientation of the FUs, which in turn affects the arrangement within the interfaces and the distances between the amino acids involved. From this analysis, a model of allosteric signal transfer within the molecule, based on a rearrangement of salt bridges, was derived.
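The FSC = 0.5 figure quoted above refers to the standard Fourier shell correlation criterion for cryo-EM resolution. A minimal sketch of how the FSC curve is computed from two independently reconstructed volumes (grid size and shell count here are illustrative, not the values used in the thesis):

```python
import numpy as np

def fourier_shell_correlation(vol_a, vol_b, n_shells=16):
    """Fourier shell correlation between two half-map volumes.

    Returns one correlation value per spherical frequency shell;
    the resolution is read off where the curve drops below a
    threshold such as 0.5 (the FSC = 0.5 criterion).
    """
    fa = np.fft.fftshift(np.fft.fftn(vol_a))
    fb = np.fft.fftshift(np.fft.fftn(vol_b))

    # Radial frequency index for every voxel of the (shifted) spectrum.
    grids = np.meshgrid(*[np.arange(s) - s // 2 for s in vol_a.shape],
                        indexing="ij")
    radius = np.sqrt(sum(g.astype(float) ** 2 for g in grids))
    shell = np.minimum((radius / radius.max() * n_shells).astype(int),
                       n_shells - 1)

    fsc = np.empty(n_shells)
    for s in range(n_shells):
        m = shell == s
        num = np.abs(np.sum(fa[m] * np.conj(fb[m])))
        den = np.sqrt(np.sum(np.abs(fa[m]) ** 2) *
                      np.sum(np.abs(fb[m]) ** 2))
        fsc[s] = num / den if den > 0 else 0.0
    return fsc
```

For two identical volumes every shell correlates perfectly, which is a quick sanity check of the implementation.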
Abstract:
This work presents the highest-resolution cryo-EM structures to date for a cephalopod hemocyanin decamer (Nautilus pompilius hemocyanin, NpH) and a gastropod hemocyanin didecamer (keyhole limpet hemocyanin isoform 1, KLH1). Through molecular modelling and rigid-body fitting, a detailed description of both structures at the atomic level also became possible for the first time. Hemocyanins are copper-containing oxygen-transport proteins that occur freely dissolved in the blood of numerous arthropods and molluscs. Molluscan hemocyanins are generally found as decamers (hollow cylinders of five subunit dimers) or didecamers (assemblies of two decamers); by attachment of further decamers, tubular multidecamers sometimes form. Cephalopod hemocyanins consist exclusively of solitary decamers. In Octopus and Nautilus, each of the 10 subunits comprises seven functional units (FU-a to FU-g), and each FU can bind one oxygen molecule. FUs a-f form the wall of the ring-shaped molecule, and 10 copies of FU-g form a so-called "inner collar complex". The molecular model of NpH created in this work fully resolves the structure of the decamer. We were able, for the first time, to identify the subunit dimer, the course of the polypeptide chain and 15 distinct contact sites between FUs. Many of the inter-FU contacts exhibit amino-acid constellations that could form the basis for transmitting allosteric interactions between FUs and give clues to the composition of the allosteric unit. Potential binding sites for N-glycosidic sugars and divalent cations were also identified. In contrast to NpH, gastropod hemocyanins (including KLH) occur mainly as didecamers, and in this case the collar complex is formed by two FUs (FU-g and FU-h).

The additional C-terminal FU-h is distinguished by a special extension of ~100 amino acids. KLH comes from the Californian keyhole limpet Megathura crenulata and has been used for several decades as an immunostimulant in basic immunological research and in clinical applications. KLH has two isoforms, KLH1 and KLH2. The present model of KLH1 makes it possible to understand the complex architecture of this giant protein in full detail, and to compare it with the NpH decamer at the atomic level. The subunit segment a-b-c-d-e-f-g, as well as the equivalent contact sites between FUs, was found to be strongly conserved. This suggests that, with respect to the transmission of allosteric signals between neighbouring FUs, fundamental mechanisms have been retained in both molecules. Furthermore, the connections between the two decamers could be identified for the first time. Finally, the topology of the N-glycosidic sugars, which are of great importance for the immunological properties of KLH1, was also elucidated. The present work thus represents a substantial step towards understanding the quaternary structure and function of molluscan hemocyanins.
Abstract:
This thesis aimed at addressing some of the issues that, at the current state of the art, prevent P300-based brain-computer interface (BCI) systems from moving from research laboratories to end users' homes. An innovative asynchronous classifier has been defined and validated. It relies on the introduction of a set of thresholds in the classifier; these thresholds were assessed considering the distributions of score values relating to target stimuli, non-target stimuli and epochs of voluntary no-control. With the asynchronous classifier, a P300-based BCI system can adapt its speed to the current state of the user and can automatically suspend control when the user diverts attention from the stimulation interface. Since EEG signals are non-stationary and show inherent variability, making long-term use of BCI possible requires tracking changes in ongoing EEG activity and adapting the BCI model parameters accordingly. To this aim, the asynchronous classifier was subsequently improved by introducing a self-calibration algorithm for the continuous and unsupervised recalibration of the subjective control parameters. Finally, an index for the online monitoring of EEG quality was defined and validated in order to detect potential problems and system failures. The thesis ends with the description of a translational work involving end users (people with amyotrophic lateral sclerosis, ALS). Following a user-centred design approach, the phases relating to the design, development and validation of an innovative assistive device are described. The proposed assistive technology (AT) has been specifically designed to meet the needs of people with ALS during the different phases of the disease (i.e. the degree of motor impairment). Indeed, the AT can be accessed with several input devices, either conventional (mouse, touchscreen) or alternative (switches, head-tracker), up to a P300-based BCI.
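The thresholded asynchronous decision described above can be sketched as follows; the class name, threshold value and score layout are illustrative assumptions, not the thesis' actual implementation:

```python
from dataclasses import dataclass

@dataclass
class AsyncP300Classifier:
    """Minimal sketch of a threshold-based asynchronous selection.

    `control_threshold` separates intentional-control scores from the
    score distribution observed during voluntary no-control epochs;
    the name and value here are hypothetical.
    """
    control_threshold: float

    def decide(self, scores):
        """scores: one classifier score per stimulus item.

        Returns the index of the selected item, or None to signal
        'no control' so the interface suspends instead of guessing.
        """
        best = max(range(len(scores)), key=lambda i: scores[i])
        if scores[best] < self.control_threshold:
            return None          # user not attending: suspend output
        return best
```

Returning None lets the surrounding application pause stimulation rather than emit an unintended selection, which is the key behavioural difference from a synchronous classifier.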
Abstract:
The research activity focused on the study, design and evaluation of innovative human-machine interfaces based on virtual three-dimensional environments and on the user's brain electrical activity, recorded in real time through electroencephalography. The target achieved is to identify and classify, in real time, the different brain states and to adapt the interface and/or the stimuli to the corresponding emotional state of the user. An experimental facility based on an innovative methodology for "man in the loop" simulation was set up. It made it possible to involve, during pilot training in virtually simulated flights, both the pilot and the flight examiner, in order to compare the subjective evaluations of the latter with objective measurements of the pilot's brain activity, recording all the relevant information against a timeline. The different combinations of emotional intensities obtained led to an evaluation of the user's current situational awareness. These results have great implications for current pilot-training methodology, and their use could be extended to a tool that improves the evaluation of pilot/crew performance in interacting with the aircraft when performing tasks and procedures, especially in critical situations. This research also resulted in the design of an interface that adapts the control of the machine to the situation awareness of the user. The new concept aims at improving the efficiency between the user and the interface, gaining capacity by reducing the user's workload and hence improving overall system safety. This innovative research, combining emotions measured through electroencephalography, resulted in a human-machine interface with three aeronautics-related applications: • an evaluation tool during pilot training; • an input for the cockpit environment; • an adaptation tool for cockpit automation.
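As a rough illustration of the kind of real-time EEG feature such a system builds on, here is a minimal band-power computation; the FFT approach and band limits are generic assumptions, not the study's actual processing pipeline:

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` within [lo, hi] Hz.

    A common building block when mapping raw EEG to coarse mental
    states (e.g. alpha power rising as engagement drops); the band
    limits used by a caller are illustrative, not the thesis' values.
    """
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()
```

A state classifier would compare such band powers (alpha, beta, theta, ...) across electrodes and over time, rather than look at a single value.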
Abstract:
Until a few years ago, 3D modelling was a topic confined to a professional environment. Nowadays, technological innovations, the 3D printer above all, have attracted novice users to this application field. This sudden breakthrough has not been supported by adequate software solutions: the 3D editing tools currently available do not assist the non-expert user during the various stages of generation, interaction and manipulation of 3D virtual models. This is mainly due to the current paradigm, which largely relies on two-dimensional input/output devices and is strongly affected by obvious geometrical constraints. We identified three main phases that characterize the creation and management of 3D virtual models, and investigated these directions by evaluating and simplifying the classic editing techniques in order to propose more natural and intuitive tools in a pure 3D modelling environment. In particular, we focused on freehand sketch-based modelling to create 3D virtual models, on interaction and navigation in a 3D modelling environment, and on advanced editing tools for free-form deformation and object composition. To pursue these goals, we asked how new gesture-based interaction technologies can be successfully employed in a 3D modelling environment, how depth perception and interaction in 3D environments could be improved, and which operations could be developed to simplify the classical virtual-model editing paradigm. Our main aim was to propose a set of solutions with which a common user can realize an idea as a 3D virtual model, drawing in the air just as he would on paper. Moreover, we used gestures and mid-air movements to explore and interact with the 3D virtual environment, and we studied simple and effective 3D form transformations. The work was carried out using the discrete representation of the models, thanks to its intuitiveness, but especially because it is full of open challenges.
Abstract:
It is a central premise of the advertising campaigns for nearly all digital communication devices that buying them augments the user: they give us a larger, better memory; make us more “creative” and “productive”; and/or empower us to access whatever information we desire from wherever we happen to be. This study is about how recent popular cinema represents the failure of these technological devices to inspire the enchantment that they once did and opens the question of what is causing this failure. Using examples from the James Bond films, the essay analyzes the ways in which human users are frequently represented as the media connecting and augmenting digital devices and NOT the reverse. It makes use of the debates about the ways in which our subjectivity is itself a networked phenomenon and the extended mind debate from the philosophy of mind. It will prove (1) that this represents an important counter-narrative to the technophilic optimism about augmentation that pervades contemporary advertising, consumer culture, and educational debates; and (2) that this particular discourse of augmentation is really about technological advances and not advances in human capacity.
Abstract:
We developed an object-oriented cross-platform program to perform three-dimensional (3D) analysis of hip joint morphology using two-dimensional (2D) anteroposterior (AP) pelvic radiographs. Landmarks extracted from 2D AP pelvic radiographs, and optionally from an additional lateral pelvic X-ray, were combined with a cone beam projection model to reconstruct the 3D hip joint. Since individual pelvic orientation can vary considerably, a method for standardizing pelvic orientation was implemented to determine the absolute tilt/rotation. Anatomic morphologic differences were evaluated by reconstructing the projected acetabular rim and the measured hip parameters as if they had been obtained in a standardized neutral orientation. The program has been successfully used to interactively objectify acetabular version in hips with femoro-acetabular impingement or developmental dysplasia. Hip(2)Norm is written in the object-oriented programming language C++ using the cross-platform framework Qt (TrollTech, Oslo, Norway) for the graphical user interface (GUI) and is portable to any platform.
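The cone beam projection model mentioned above can be illustrated with a minimal sketch; the function name and geometry values are hypothetical and not taken from Hip(2)Norm:

```python
import numpy as np

def cone_beam_project(point_3d, source, film_z=0.0):
    """Project a 3D landmark onto the film plane z = film_z using a
    point X-ray source: the simple cone-beam model that an AP pelvic
    radiograph approximates. Units and coordinates are illustrative.
    """
    p = np.asarray(point_3d, float)
    s = np.asarray(source, float)
    t = (film_z - s[2]) / (p[2] - s[2])   # ray parameter at the film plane
    return s[:2] + t * (p[:2] - s[:2])    # 2D position on the radiograph
```

Inverting this mapping under an assumed source-film geometry, together with a pelvic-orientation correction, is what allows 2D landmarks to be lifted back into a standardized 3D frame.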
Abstract:
Electronic appliances are increasingly a part of our everyday lives. In particular, mobile devices, with their reduced dimensions and power rivaling that of desktop computers, have substantially augmented our communication abilities, offering instant availability, anywhere, to everyone. These devices have become essential for human communication and now include a comprehensive tool set supporting productivity and leisure applications. However, the many applications commonly available are not adapted to people with special needs: most popular devices are targeted at teenagers or young adults with excellent eyesight and coordination. What is worse, most commonly used assistive control interfaces are not available in a mobile environment, where the user's position, accommodation and capacities can vary widely. To address people with special needs, new approaches and techniques are sorely needed. This paper presents a control interface that allows tetraplegic users to interact with electronic devices. Our method uses myographic information (electromyography, EMG) collected from residually controlled body areas. User evaluations validate electromyography as a daily wearable interface; in particular, our results show that EMG can be used even in mobility contexts.
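One plausible way to turn a raw EMG trace into discrete control events is envelope thresholding. The sketch below is a generic illustration, not the paper's actual processing chain; window length, threshold factor and the burst layout in the usage note are all made-up values:

```python
import numpy as np

def detect_emg_clicks(emg, fs, win_s=0.1, k=2.0):
    """Turn a raw EMG trace into discrete 'click' events: rectify,
    smooth into an envelope, and flag upward threshold crossings.
    The defaults are illustrative, not tuned values.
    """
    win = max(1, int(win_s * fs))
    envelope = np.convolve(np.abs(emg), np.ones(win) / win, mode="same")
    threshold = envelope.mean() + k * envelope.std()
    above = envelope > threshold
    # A click = first sample where the envelope rises above threshold.
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets
```

In a wearable setting the threshold would be recalibrated per user and per session, since residual muscle activity differs widely between users.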
Abstract:
Tracking the user's visual attention is a fundamental aspect of novel human-computer interaction paradigms found in Virtual Reality. For example, multimodal interfaces or dialogue-based communications with virtual and real agents greatly benefit from the analysis of the user's visual attention as a vital source of deictic references or turn-taking signals. Current approaches to determining visual attention rely primarily on monocular eye trackers and are hence restricted to the interpretation of two-dimensional fixations relative to a defined area of projection. The study presented in this article compares the precision, accuracy and application performance of two binocular eye-tracking devices, and compares two algorithms that derive the depth information required for visual attention-based 3D interfaces. This information is then applied to an improved VR selection task in which a binocular eye tracker and an adaptive neural network algorithm are used to disambiguate partly occluded objects.
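One common way to derive such depth information is midpoint triangulation of the two gaze rays; this is a generic sketch, not necessarily either of the two algorithms compared in the article:

```python
import numpy as np

def point_of_regard(o_l, d_l, o_r, d_r):
    """Midpoint triangulation of a 3D gaze point from two eye rays.

    o_l/o_r: eye positions; d_l/d_r: gaze directions. Because the two
    measured rays rarely intersect exactly, the midpoint of their
    closest approach is taken as the 3D point of regard.
    """
    o_l, d_l = np.asarray(o_l, float), np.asarray(d_l, float)
    o_r, d_r = np.asarray(o_r, float), np.asarray(d_r, float)
    d_l, d_r = d_l / np.linalg.norm(d_l), d_r / np.linalg.norm(d_r)
    # Solve for the ray parameters minimizing the inter-ray distance.
    b = d_l @ d_r
    w = o_l - o_r
    denom = 1.0 - b * b                  # rays must not be parallel
    s = (b * (d_r @ w) - (d_l @ w)) / denom
    t = ((d_r @ w) - b * (d_l @ w)) / denom
    return 0.5 * ((o_l + s * d_l) + (o_r + t * d_r))
```

The recovered depth is exactly what a 3D selection technique needs to decide which of several partly occluded objects along the viewing direction the user is actually fixating.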
Abstract:
Two methods for registering laser scans of human heads and transforming them to a new, semantically consistent topology defined by a user-provided template mesh are described. Both algorithms are stated within the Iterative Closest Point (ICP) framework. The first method is based on finding landmark correspondences by iteratively registering the vicinity of a landmark with a re-weighted error function. Thin-plate spline interpolation is then used to deform the template mesh, and finally the scan is resampled in the topology of the deformed template. The second algorithm employs a morphable shape model, which can be computed from a database of laser scans using the first algorithm, and directly optimizes the pose and shape of the morphable model. The use of the algorithm with PCA mixture models, where the shape is split into regions each described by an individual subspace, is addressed; mixture models require either blending or regularization strategies, both of which are described in detail. For both algorithms, strategies for filling in missing geometry in incomplete laser scans are described. While an interpolation-based approach can be used to fill in small or smooth regions, the model-driven algorithm is capable of fitting a plausible complete head mesh to arbitrarily small geometry, which is known as "shape completion". The importance of regularization in the case of extreme shape completion is shown.
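A single Iterative Closest Point step, the core loop both methods build on, can be sketched as follows; the brute-force matching and plain rigid Kabsch solve deliberately omit the re-weighting and shape-model terms described above:

```python
import numpy as np

def icp_step(source, target):
    """One ICP step: pair each source point with its nearest target
    point, then solve for the best rigid transform (Kabsch / SVD).
    A minimal brute-force sketch, not the papers' full algorithms.
    """
    # Nearest-neighbour correspondences (O(n*m); fine for a sketch).
    d2 = ((source[:, None, :] - target[None, :, :]) ** 2).sum(-1)
    matched = target[d2.argmin(axis=1)]

    # Optimal rotation/translation between the matched point sets.
    mu_s, mu_t = source.mean(0), matched.mean(0)
    H = (source - mu_s).T @ (matched - mu_t)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                   # guard against reflections
    t = mu_t - R @ mu_s
    return source @ R.T + t              # transformed source points
```

Iterating this step re-matches and re-solves until convergence; the two described methods differ mainly in what replaces the plain least-squares solve (re-weighted landmark terms vs. morphable-model parameters).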
Abstract:
Visual fixation is employed by humans and some animals to keep a specific 3D location at the center of the visual gaze. Inspired by this phenomenon in nature, this paper explores the idea to transfer this mechanism to the context of video stabilization for a handheld video camera. A novel approach is presented that stabilizes a video by fixating on automatically extracted 3D target points. This approach is different from existing automatic solutions that stabilize the video by smoothing. To determine the 3D target points, the recorded scene is analyzed with a state-of-the-art structure-from-motion algorithm, which estimates camera motion and reconstructs a 3D point cloud of the static scene objects. Special algorithms are presented that search either virtual or real 3D target points, which back-project close to the center of the image for as long a period of time as possible. The stabilization algorithm then transforms the original images of the sequence so that these 3D target points are kept exactly in the center of the image, which, in case of real 3D target points, produces a perfectly stable result at the image center. Furthermore, different methods of additional user interaction are investigated. It is shown that the stabilization process can easily be controlled and that it can be combined with state-of-the-art tracking techniques in order to obtain a powerful image stabilization tool. The approach is evaluated on a variety of videos taken with a hand-held camera in natural scenes.
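The back-projection step, keeping a 3D target point at the image center, reduces in the simplest case to a per-frame pixel shift. The sketch below assumes a pinhole camera and a pure-translation warp, which is much simpler than the full stabilizing transform the paper applies:

```python
import numpy as np

def recenter_shift(K, R, t, target):
    """Pixel shift that moves the projection of a fixed 3D target
    point to the image center for one frame: the core 'fixation'
    idea, reduced to a translation-only warp for this sketch.

    K: 3x3 intrinsics; [R|t]: world-to-camera pose; target: 3-vector.
    """
    p_cam = R @ target + t               # target in camera coordinates
    uvw = K @ p_cam
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]   # projected pixel position
    cx, cy = K[0, 2], K[1, 2]                 # principal point = center
    return cx - u, cy - v                # shift to apply to the frame
```

Applying this shift per frame pins the target's projection to the principal point; a real implementation would instead warp with a full homography to avoid visible border artifacts.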
Abstract:
Second Life (SL) is an ideal platform for language learning. It is called a Multi-User Virtual Environment, in which users can have a variety of learning experiences in life-like settings. Numerous attempts have been made to use SL as a platform for language teaching, and the possibility of SL as a means to promote conversational interactions has been reported. However, the research so far has largely focused on simply using SL, without further augmentation, for communication between learners or between teachers and learners in a school-like environment. Conversely, not enough attention has been paid to its controllability, which builds on the functions embedded in SL. This study, based on the latest theories of second language acquisition (SLA), especially Task-Based Language Teaching and the Interaction Hypothesis, proposes to design and implement an automatized interactive task space (AITS) in which robotic agents act as interlocutors for learners. The paper presents a design that incorporates the SLA theories into SL, describes the implementation method used to construct the AITS, thereby fulfilling the controllability of SL, and reports the results of an evaluation experiment conducted on the constructed AITS.
Abstract:
Recently, stable markerless 6 DOF video-based hand-tracking devices have become available. These devices simultaneously track the positions and orientations of both of the user's hands in different postures at 25 or more frames per second. Such hand-tracking allows the human hands to be used as natural input devices. However, the absence of physical buttons for performing click actions and state changes poses severe challenges in designing an efficient and easy-to-use 3D interface on top of such a device. In particular, a solution has to be found for coupling and decoupling a virtual object's movements to and from the user's hand (i.e. grabbing and releasing). In this paper, we introduce a novel technique for efficient two-handed grabbing and releasing of objects and for intuitively manipulating them in virtual space. This technique is integrated into a novel 3D interface for virtual manipulations. A user experiment shows the superior applicability of this new technique. Last but not least, we describe how the technique can be exploited in practice to improve interaction by integrating it with RTT DeltaGen, a professional CAD/CAS visualization and editing tool.
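Grab/release coupling without a physical button is often solved with a thresholded hand measure plus hysteresis. The following sketch shows that generic pattern, not the paper's own two-handed technique; the pinch measure and threshold values are illustrative:

```python
class GrabStateMachine:
    """Couple/decouple a virtual object to the hand from a pinch
    measure (e.g. thumb-index distance in meters). Hysteresis (two
    thresholds) avoids flickering grab/release at the boundary;
    the threshold values below are made up for illustration.
    """
    def __init__(self, grab_below=0.02, release_above=0.04):
        self.grab_below = grab_below
        self.release_above = release_above
        self.grabbing = False

    def update(self, pinch_distance):
        if not self.grabbing and pinch_distance < self.grab_below:
            self.grabbing = True         # couple object to hand
        elif self.grabbing and pinch_distance > self.release_above:
            self.grabbing = False        # decouple (release)
        return self.grabbing
```

While `grabbing` is true, the object's transform is updated from the hand pose each frame; the hysteresis band keeps tracking jitter near the threshold from producing spurious releases.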
Abstract:
Immersive virtual environments (IVEs) have the potential to afford natural interaction in the three-dimensional (3D) space around a user. However, interaction performance in 3D mid-air is often reduced and depends on a variety of ergonomic factors, including the user's endurance, muscular strength and fitness. In particular, in contrast to traditional desktop-based setups, users often cannot rest their arms in a comfortable pose during the interaction. In this article we analyze the impact of comfort on 3D selection tasks in an immersive desktop setup. First, in a pre-study we identified how comfortable or uncomfortable specific interaction positions and poses are for users who are standing upright. Then, we investigated differences in 3D selection task performance when users interact with their hands in a comfortable or uncomfortable body pose while sitting on a chair in front of a table, with the virtual environment displayed on a head-mounted display (HMD). We conducted a Fitts' Law experiment to evaluate selection performance in the different poses. The results suggest that users achieve a significantly higher performance in a comfortable pose, when they can rest their elbow on the table.
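The Fitts' Law quantities underlying such an experiment are easy to state. A small sketch using the common Shannon formulation (the thesis may use a different variant; values in the test are arbitrary):

```python
import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts' index of difficulty, in bits:
    ID = log2(D / W + 1), with target distance D and target width W.
    """
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time_s):
    """Throughput in bits/s, the usual performance measure reported
    for a Fitts' Law selection experiment: ID divided by movement time.
    """
    return fitts_id(distance, width) / movement_time_s
```

Comparing throughput between the comfortable and uncomfortable poses, rather than raw completion times, normalizes for how hard each individual selection target was.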
Abstract:
We describe a user-assisted technique for 3D stereo conversion from 2D images. Our approach exploits the geometric structure of perspective images, including vanishing points. We allow a user to indicate lines, planes, and vanishing points in the input image, and directly employ these as constraints in an image-warping framework to produce a stereo pair. By sidestepping explicit construction of a depth map, our approach is applicable to more general scenes and avoids potential artifacts of depth-image-based rendering. Our method is most suitable for scenes with large-scale structures such as buildings.