998 results for Virtual prototyping
Abstract:
The full-body control of virtual characters is a promising technique for application fields such as Virtual Prototyping. However, it is important to assess to what extent the user's full-body behavior is modified when immersed in a virtual environment. In the present study we measured reach durations for two types of task (controlling a simple rigid shape vs. a virtual character) and two types of viewpoint (1st person vs. 3rd person). The paper first describes the architecture of the motion capture approach retained for the on-line full-body reach experiment. We then present reach measurements performed in a non-virtual environment. They show that the target height parameter leads to a reach duration variation of ±25% around the average duration for the highest and lowest targets. This characteristic is strongly accentuated in the virtual world, as analyzed in the discussion section. In particular, the discrepancy observed for the first-person viewpoint modality suggests adopting a third-person viewpoint when controlling the posture of a virtual character in a virtual environment.
Abstract:
The aim of this study, conducted in collaboration with Lawrence Technological University in Detroit, is to create, through the Industrial Design Structure (IDeS) method, a new concept for a sport-coupe car based on a restyling of a retro model (the 1967 Ford Mustang). Vintage car models still arouse great interest, both for the history behind them and for their classic, elegant style. Designing a vehicle that combines the charm of retro style with the innovation and comfort of modern cars would meet the needs and desires of a large segment of the market that today is forced to choose between past and future. Through a well-conceived concept car, an automaker can express its future policy and make a statement of intent, as such a prototype ticks all the boxes, from glamour and visual wow-factor to technical intrigue and design fascination. IDeS is an approach that makes use of many engineering tools in a study developed over several steps that must be meticulously organized and timed. A deep analysis of the trends dominating the automotive industry makes it possible to identify a series of product requirements using quality function deployment (QFD). The considerations from this first evaluation lead to the definition of the technical specifications via benchmarking (BM) and top-flop analysis (TFA). Then, the structured methodology of stylistic design engineering (SDE) is applied through six phases: (1) stylistic trend analysis; (2) sketches; (3) 2D CAD drawings; (4) 3D CAD models; (5) virtual prototyping; (6) solid stylistic model. Finally, developing the IDeS method up to the final stages of prototyping and testing yields a product as close as possible to the ideal vehicle conceptualized in the initial analysis.
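As a rough, hedged illustration of the benchmarking and top-flop analysis (TFA) step mentioned above, the sketch below scores hypothetical competitor vehicles against weighted requirements; the requirement names, weights, and scores are invented for the example and are not taken from the study.

```python
# Hedged sketch of a benchmarking / top-flop style scoring step.
# All requirement names, weights, and scores below are hypothetical.

requirements = {          # requirement -> importance weight (e.g. from QFD)
    "retro styling cues": 0.30,
    "aerodynamic drag":   0.25,
    "cabin comfort":      0.25,
    "cost of ownership":  0.20,
}

competitors = {           # competitor -> score per requirement (1 = flop, 5 = top)
    "benchmark A": {"retro styling cues": 5, "aerodynamic drag": 2,
                    "cabin comfort": 3, "cost of ownership": 4},
    "benchmark B": {"retro styling cues": 2, "aerodynamic drag": 5,
                    "cabin comfort": 4, "cost of ownership": 3},
}

def weighted_score(scores: dict) -> float:
    """Aggregate requirement scores using the QFD-style weights."""
    return sum(requirements[r] * s for r, s in scores.items())

for name, scores in competitors.items():
    tops  = [r for r, s in scores.items() if s >= 4]   # strengths ("tops")
    flops = [r for r, s in scores.items() if s <= 2]   # weaknesses ("flops")
    print(f"{name}: weighted score {weighted_score(scores):.2f}, "
          f"tops={tops}, flops={flops}")
```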
Abstract:
The aim of this study was to evaluate the accuracy of virtual three-dimensional (3D) reconstructions of human dry mandibles produced from two segmentation protocols (outline only and all-boundary lines). Twenty virtual 3D images were built from computed tomography (CT) exams of 10 dry mandibles, in which linear measurements between anatomical landmarks were obtained and compared at an error probability of 5%. The results showed no statistically significant difference between the dry mandibles and the virtual 3D reconstructions produced from the segmentation protocols tested (p = 0.24). When designing a virtual 3D reconstruction, both the outline-only and the all-boundary-lines segmentation protocols can be used. Virtual processing of CT images is the most complex stage in the manufacture of the biomodel. Establishing a better protocol for this phase allows the construction of a biomodel with characteristics that are closer to the original anatomical structures, which is essential to ensure correct preoperative planning and suitable treatment.
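A minimal sketch of the kind of comparison reported above, testing physical against virtual linear measurements at a 5% error probability; the measurement values and the choice of a paired t-test are illustrative assumptions, not the study's actual data or statistical procedure.

```python
# Hedged sketch: comparing physical and virtual 3D linear measurements
# at a 5% significance level. The values and the use of a paired t-test
# are illustrative assumptions, not the study's data or method.
from scipy import stats

dry_mandible_mm = [101.2, 98.7, 45.3, 44.9, 76.5, 77.1, 62.0, 61.4, 88.3, 87.9]
virtual_3d_mm   = [101.0, 98.9, 45.6, 44.7, 76.8, 76.9, 62.3, 61.2, 88.1, 88.2]

t_stat, p_value = stats.ttest_rel(dry_mandible_mm, virtual_3d_mm)

alpha = 0.05  # 5% error probability, as in the study
if p_value > alpha:
    print(f"p = {p_value:.2f} > {alpha}: no statistically significant difference")
else:
    print(f"p = {p_value:.2f} <= {alpha}: significant difference detected")
```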
Abstract:
Ubiquitous computing raises new usability challenges that cut across design and development. We are particularly interested in environments enhanced with sensors, public displays and personal devices. How can prototypes be used to explore the users' mobility and interaction, both explicit and implicit, to access services within these environments? Because of the potential cost of development and design failure, these systems must be explored using early assessment techniques and versions of the systems that could be disruptive if deployed in the target environment. These techniques are required to evaluate alternative solutions before making the decision to deploy the system on location. This is crucial for a successful development that anticipates potential user problems and reduces the cost of redesign. This thesis reports on the development of a framework for the rapid prototyping and analysis of ubiquitous computing environments that facilitates the evaluation of design alternatives. It describes APEX, a framework that brings together an existing 3D Application Server with a modelling tool. APEX-based prototypes enable users to navigate a virtual world simulation of the envisaged ubiquitous environment. By this means users can experience many of the features of the proposed design. Prototypes and their simulations are generated in the framework to help the developer understand how the user might experience the system. These are supported through three different layers: a simulation layer (using a 3D Application Server), a modelling layer (using a modelling tool) and a physical layer (using external devices and real users). APEX allows the developer to move between these layers to evaluate different features. It supports exploration of user experience through observation of how users might behave with the system, as well as enabling exhaustive analysis based on models. The models support the checking of properties based on patterns. These patterns are based on ones that have been used successfully in interactive system analysis in other contexts. They help the analyst to generate and verify relevant properties. Where these properties fail, the scenarios suggested by the failure provide an important aid to redesign.
Abstract:
An approximately 9-month-old fox (Pseudalopex vetulus) was presented with malocclusion and deviation of the lower jaw to the right side. Orthodontic treatment was performed using the inclined plane technique. Virtual 3D models and prototypes of the head, based on computed tomography (CT) image data, were used to assist in diagnosis and treatment.
Abstract:
Veterinary surgery for the treatment of wild animals is an increasingly demanding task because it involves animals of different anatomy, many of which are already stressed, and treatment must be performed to the highest standard in the minimum period of time. Craniofacial alterations may occur for three main reasons: genetic, functional, or a combination of both. It is possible to modify the functional cause using intraoral devices such as an inclined plane. Treatment planning can be based on virtual 3D models and rapid prototyping. An approximately 9-month-old, 3.7 kg male Brazilian fox (Lycalopex vetulus) was referred to the Veterinary Hospital. Physical examination showed malocclusion with a deviation of the mandible to the right side. A virtual 3D model of the head was generated from CT image data. The 3D models and rapid prototyping opened up new possibilities for the surgical planning and treatment of wild animals.
Abstract:
This paper presents the implementation of a virtual environment for the design, simulation, and conception of supervision and control systems for mobile robots that are capable of operating and adapting to different environments and conditions. The purpose of this virtual system is to facilitate the development of embedded architecture systems, emphasizing the implementation of tools that allow the simulation of kinematic, dynamic, and control conditions, with real-time monitoring of all important system points. For this, an open control architecture is proposed, integrating the two main techniques of robotic control implementation at the hardware level: microprocessor systems and reconfigurable hardware devices. The implemented simulator is composed of a trajectory-generation module, a kinematic and dynamic simulation module, and a module for the analysis of results and errors. All the kinematic and dynamic results shown during the simulation can be evaluated and visualized as graphs and tables in the results analysis module, allowing improvement of the system and minimization of errors through the necessary adjustments and optimization. For controller implementation in the embedded system, rapid prototyping is used: a technology that, together with the virtual simulation environment, allows the development of a controller design for mobile robots. Validation and tests were carried out with nonholonomic mobile robot models with differential drive. © 2008 IEEE.
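As a hedged illustration of what the kinematic simulation module might compute, the sketch below integrates the standard differential-drive (nonholonomic) kinematic model along a short trajectory; the wheel radius, track width and speed profile are assumptions for the example, not parameters of the robots used in the paper.

```python
# Hedged sketch of a differential-drive kinematic simulation step,
# in the spirit of the trajectory-generation and kinematic modules
# described above. Wheel radius, track width and the wheel-speed
# profile are illustrative assumptions.
import math

def diff_drive_step(x, y, theta, w_left, w_right, r=0.05, L=0.30, dt=0.01):
    """Integrate one step of the standard differential-drive kinematics.

    w_left, w_right : wheel angular velocities [rad/s]
    r               : wheel radius [m]
    L               : distance between the wheels [m]
    """
    v = r * (w_right + w_left) / 2.0        # linear velocity of the chassis
    omega = r * (w_right - w_left) / L      # angular velocity of the chassis
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Simulate 5 s of motion with slightly unequal wheel speeds (gentle arc).
x, y, theta = 0.0, 0.0, 0.0
trajectory = []
for _ in range(500):
    x, y, theta = diff_drive_step(x, y, theta, w_left=10.0, w_right=11.0)
    trajectory.append((x, y, theta))

print(f"final pose: x={x:.2f} m, y={y:.2f} m, theta={math.degrees(theta):.1f} deg")
```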
Abstract:
Virtual platforms are of paramount importance for design space exploration, and their usage in early software development and verification is crucial. In particular, enabling accurate and fast simulation is especially useful, but these features usually conflict and tradeoffs have to be made. In this paper we describe how we integrated TLM communication mechanisms into a state-of-the-art, cycle-accurate MPSoC simulation platform. More specifically, we show how we adapted ArchC fast functional instruction set simulators to the MPARM platform in order to achieve both fast simulation speed and accuracy. Our implementation led to a much faster hybrid platform, reaching speedups of up to 2.9x and 2.1x on average, with negligible impact on power estimation accuracy (3.26% on average, with a standard deviation of 2.25%). © 2011 IEEE.
Abstract:
This thesis proposes a novel technology in the field of swarm robotics that allows a swarm of robots to sense a virtual environment through virtual sensors. Virtual sensing is a desirable and helpful technology in swarm robotics research, because it allows researchers to perform efficiently and quickly experiments that would otherwise be more expensive and time consuming, or even impossible. In particular, we envision two useful applications for virtual sensing technology. On the one hand, it is possible to prototype and foresee the effects of a new sensor on a robot swarm before producing it. On the other hand, this technology makes it possible to study the behaviour of robots operating in environments that are not easily reproducible inside a lab for safety reasons, or simply because they are physically infeasible. The use of virtual sensing technology for sensor prototyping aims to foresee the behaviour of the swarm enhanced with new or more powerful sensors, without producing the hardware. Sensor prototyping can be used to tune a new sensor or to compare the performance of alternative types of sensors. This kind of prototyping experiment can be performed with the presented tool, which allows researchers to rapidly develop and test software virtual sensors of different types and quality, emulating the behaviour of several real hardware sensors. By investigating which sensors are worth investing in, a researcher can minimize sensor production costs while achieving a given swarm performance. Through augmented reality, it is possible to test the performance of the swarm in a desired virtual environment that cannot be set up in the lab for physical, logistic or economic reasons. The virtual environment is sensed by the robots through properly designed virtual sensors. Virtual sensing technology allows a researcher to quickly carry out real-robot experiments in challenging scenarios without all the required hardware and environment.
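A minimal sketch, under stated assumptions, of what such a software virtual sensor could look like: the robot queries a simulated environment instead of physical hardware. The class name, interface and noise model are invented for illustration and are not the thesis's actual implementation.

```python
# Hedged sketch of a "virtual sensor": the robot reads a simulated
# environment instead of physical hardware. Class and method names,
# the range model, and the noise level are illustrative assumptions.
import math
import random

class VirtualProximitySensor:
    """Emulates a range sensor against objects that exist only virtually."""

    def __init__(self, max_range=2.0, noise_std=0.02):
        self.max_range = max_range
        self.noise_std = noise_std

    def read(self, robot_pose, virtual_obstacles):
        """Return the distance to the closest virtual obstacle, with noise."""
        rx, ry = robot_pose
        closest = min(
            (math.hypot(ox - rx, oy - ry) for ox, oy in virtual_obstacles),
            default=self.max_range,
        )
        noisy = closest + random.gauss(0.0, self.noise_std)
        return min(max(noisy, 0.0), self.max_range)

# Example: a robot at the origin sensing two obstacles that exist only
# in the virtual environment overlaid on the real arena.
sensor = VirtualProximitySensor()
print(sensor.read(robot_pose=(0.0, 0.0), virtual_obstacles=[(0.5, 0.2), (1.5, -0.8)]))
```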
Abstract:
In this paper we present XSAMPL3D, a novel language for the high-level representation of actions performed on objects by (virtual) humans. XSAMPL3D was designed to serve as the action representation language in an imitation-based approach to character animation: first, a human demonstrates a sequence of object manipulations in an immersive Virtual Reality (VR) environment. From this demonstration, an XSAMPL3D description is automatically derived that represents the actions in terms of high-level action types and the objects involved. The XSAMPL3D action description can then be used for the synthesis of animations where virtual humans of different body sizes and proportions reproduce the demonstrated action. Actions are encoded in a compact and human-readable XML format. Thus, XSAMPL3D descriptions are also amenable to manual authoring, e.g. for rapid prototyping of animations when no immersive VR environment is at the animator's disposal. However, when XSAMPL3D descriptions are derived from VR interactions, they can accommodate many details of the demonstrated action, such as motion trajectories, hand shapes and other hand-object relations during grasping. Such detail would be hard to specify with manual motion authoring techniques alone. Through the inclusion of language features that allow the representation of all relevant aspects of demonstrated object manipulations, XSAMPL3D is a suitable action representation language for the imitation-based approach to character animation.
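To give a flavour of a compact, human-readable XML action description of this kind, the sketch below parses a small hypothetical manipulation action; the element and attribute names are invented for illustration and do not reflect the actual XSAMPL3D schema.

```python
# Hedged sketch of reading a compact XML action description similar in
# spirit to what is described above. The element and attribute names are
# hypothetical; they are NOT the actual XSAMPL3D schema.
import xml.etree.ElementTree as ET

action_xml = """
<action type="pick-and-place" actor="virtual-human">
  <object id="cup01"/>
  <grasp hand="right" shape="cylindrical"/>
  <target position="0.4 0.1 0.9"/>
</action>
"""

action = ET.fromstring(action_xml)
obj = action.find("object").get("id")
grasp = action.find("grasp")
target = action.find("target").get("position")

print(f"{action.get('type')} of '{obj}' with a {grasp.get('shape')} "
      f"{grasp.get('hand')}-hand grasp, placing it at [{target}]")
```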
Abstract:
Within the framework of heritage preservation, 3D scanning and modeling for heritage documentation has increased significantly in recent years, mainly due to the evolution of laser and image-based techniques, modeling software, powerful computers and virtual reality. 3D laser acquisition constitutes a real development opportunity for 3D modeling, which was previously based on theoretical data. The representation of the object's information relies on knowledge of its historic and theoretical frame in order to reconstitute its previous states a posteriori. This project proposes an approach to data extraction based on architectural knowledge and a laser survey providing measurements, the whole leading to 3D reconstruction. The Khmer objects studied are exhibited at the Guimet museum in Paris. The purpose of this digital modeling is to meet the need for exploitable models for simulation projects, prototyping, exhibitions and the promotion of cultural tourism, and particularly for archiving against any likely disaster and as an aid to the formulation of a virtual museum concept.
Abstract:
Rotational moulding is a unique manufacturing technique for the production of hollow plastic parts. Moulds for rotational moulding are generally not standardized, unlike those for injection moulding, so each new mould must be manufactured entirely, except for a few ancillary parts such as screws or clamps. The aim of this work has been to adapt and apply the advantages of rapid prototyping and electroforming technologies to achieve an innovative mould design for rotational moulding. The new design integrates an electroformed shell, manufactured starting from a rapid prototyping mandrel, with standard aluminium tools of different designs. The shell holder enables a shell to be assembled into the mould with high precision in a few minutes, with the advantage that electroformed shells of different geometries can be exchanged in the same tool. The overall mould cost is significantly decreased because only one or two shells need to be manufactured each time, while the rest of the mould elements are standard and usable for an unlimited number of shells, depending on size. The rapid prototyping of the mandrel also enables a significant decrease in the global cost of mould manufacturing. © 2008 Taylor & Francis Group.
Design and Development of a Research Framework for Prototyping Control Tower Augmented Reality Tools
Abstract:
The purpose of the air traffic management system is to ensure the safe and efficient flow of air traffic. Therefore, while augmenting efficiency, throughput and capacity in airport operations, attention has rightly been placed on doing so in a safe manner. In the control tower, many advances in operational safety have come in the form of visualization tools for tower controllers. However, there is a paradox in developing such systems to increase controllers' situational awareness: by creating additional computer displays, the controller's vision is pulled away from the outside view and the time spent looking down at the monitors is increased. This reduces their situational awareness by forcing them to mentally and physically switch between the head-down equipment and the outside view. This research is based on the idea that augmented reality may be able to address this issue. The augmented reality concept has become increasingly popular over the past decade and is being used proficiently in many fields, such as entertainment, cultural heritage, aviation, and military & defense. This know-how could be transferred to air traffic control with relatively low effort and substantial benefits for controllers' situation awareness. Research on this topic is consistent with the SESAR objectives of increasing air traffic controllers' situation awareness and enabling up to 10% additional flights at congested airports while still increasing safety and efficiency. During the Ph.D., a research framework for prototyping augmented reality tools was set up. This framework consists of methodological tools for designing the augmented reality overlays, as well as of hardware and software equipment to test them. Several overlays have been designed and implemented in a simulated tower environment, a virtual reconstruction of the Bologna airport control tower. The positive impact of such tools was preliminarily assessed by means of the proposed methodology.
Abstract:
Advanced Driver Assistance Systems (ADAS) are proving to have huge potential for road safety, comfort, and efficiency. In recent years, car manufacturers have equipped their high-end vehicles with Level 2 ADAS, which are, according to SAE International, systems that combine longitudinal and lateral active motion control. These automated driving features, while only available in highway scenarios, appear to be very promising steps towards the introduction of hands-free driving. However, as they rely only on an on-board sensor suite, their continued operation may be affected by the current environmental conditions: this prevents certain functionalities, such as automated lane changes, and requires the driver to keep their hands constantly on the steering wheel. The enabling factor for hands-free highway driving proposed by Mobileye is the integration of high-definition maps, leading to the so-called Level 2+. This thesis was carried out during an internship in Maserati's Virtual Engineering team. The activity consisted of the design of an L2+ Highway Assist System following the Rapid Control Prototyping approach, starting from the definition of the requirements up to the real-time implementation and testing on a simulator of the brand-new compact SUV Maserati Grecale. The objective was to enhance the current Level 2 highway driving assistance system with hands-free driving capability; for this purpose an Autonomous Lane Change functionality has been designed, proposing a Model Predictive Control-based decision-maker in charge of assessing both the feasibility and the convenience of performing a lane-change maneuver. The result is a Highway Assist System capable of driving the vehicle safely and efficiently in a traffic scenario, never requiring driver intervention.
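As a loose, highly simplified stand-in for the decision logic described above (not the thesis's actual MPC formulation), the sketch below checks the feasibility (safe gaps in the target lane) and convenience (meaningful speed gain) of a lane change; all thresholds and the gap/speed model are assumptions for illustration.

```python
# Hedged, highly simplified stand-in for the lane-change decision logic
# described above. The thresholds and the simple gap/speed model are
# assumptions for illustration; they are not the thesis's MPC formulation.
from dataclasses import dataclass

@dataclass
class LaneState:
    lead_gap_m: float        # distance to the vehicle ahead in that lane
    rear_gap_m: float        # distance to the vehicle behind in that lane
    lead_speed_mps: float    # speed of the vehicle ahead in that lane

def lane_change_decision(ego_speed_mps, current: LaneState, target: LaneState,
                         min_gap_m=30.0, min_speed_gain_mps=2.0):
    """Return True if a lane change looks both feasible and convenient."""
    # Feasibility: enough free space ahead of and behind the ego vehicle
    # in the target lane.
    feasible = (target.lead_gap_m > min_gap_m and
                target.rear_gap_m > min_gap_m)
    # Convenience: the current lane is slowing us down and the target
    # lane offers a meaningful speed gain.
    blocked = current.lead_speed_mps < ego_speed_mps
    gain = target.lead_speed_mps - current.lead_speed_mps
    convenient = blocked and gain > min_speed_gain_mps
    return feasible and convenient

# Example: ego at 30 m/s behind a slow truck, with a faster, free adjacent lane.
current = LaneState(lead_gap_m=40.0, rear_gap_m=80.0, lead_speed_mps=22.0)
target  = LaneState(lead_gap_m=90.0, rear_gap_m=45.0, lead_speed_mps=31.0)
print(lane_change_decision(30.0, current, target))  # -> True
```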
Abstract:
We have shown how the analysis of angiotomography reconstructions with the OsiriX program has assisted endovascular perioperative planning. We present its application in a situation in which the unexpected presence of an overlapping metallic artifact (orthopedic osteosynthesis) compromised adequate visualization of the arterial lesion during the procedure. By manipulating the images in the OsiriX software, with the assistance of a virtual fluoroscopy preview, it was possible to obtain the angles that would avoid this juxtaposition. These angles were reproduced in the C-arm, allowing visualization of the occluded segment, reducing the need for repeated image acquisitions and contrast overload, and allowing the procedure to continue.