128 results for "Morphing aircrafts"
Abstract:
Recent developments in piston engine technology have increased performance very significantly. Turbocharged/turbocompound diesel engines, fuelled with jet fuels, offer excellent performance. The focal point of this thesis is the transformation of the FIAT 1900 jtd common rail diesel engine for installation on general aviation aircraft such as the Cessna 172. All considerations about the diesel engine are supported by studies carried out in the laboratories of the II Faculty of Engineering in Forlì. This work, mostly experimental, concerns the transformation of the automotive FIAT 1900 jtd (4-cylinder, turbocharged, common rail diesel) into an aircraft engine. The design philosophy of the aluminium alloy crankcase of the spark ignition engine has been transferred to the diesel version, while the pistons and the head of the FIAT 1900 jtd are kept in the aircraft engine. Different solutions have been examined in this work. A first version, with cylinders in a 90° V arrangement, can develop up to 300 CV and weighs 30 kg without auxiliaries and the turbocharging group. The second version is a development of the original 1900 cc diesel engine with an optimized crankshaft that employs a special steel, 300M, and is verified against aircraft requirements. Another version, with an augmented stroke and a total displacement of 2500 cc, has also been examined; the resulting engine is 30% heavier. The last version proposed is a 1600 cc diesel engine that works at 5000 rpm, with a reduced stroke, capable of more than 200 CV; it was inspired by the Yamaha R1 motorcycle engine. The diesel aircraft engine design keeps the bore of 82 mm, while the stroke is reduced to 64.6 mm, so the engine size is reduced along with the weight. The crankcase, in GD AlSi 9 MgMn alloy, weighs 8.5 kg. Crankshaft, rods and accessories have been redesigned to comply with aircraft standards.
The result is that the overall size is increased by only 8% with respect to the spark ignition Yamaha engine, while the crankcase weight increases by 53%, even though the bore of the diesel version is 11% larger. The original FIAT 1900 jtd piston has been slightly modified, with the combustion chamber reworked for a compression ratio of 15:1. The material adopted for the piston is the aluminium alloy A390.0-T5, commonly used in the automotive field. The piston weight is 0.5 kg for the diesel engine. The crankshaft is verified for torsional vibrations according to the Lloyd's Register of Shipping requirements. The 300M special steel crankshaft has a total weight of 14.5 kg. The result is a very small and light engine that may be certified for general aviation: the engine weight, without the supercharger, air inlet assembly, auxiliary generators and high pressure body, is 44.7 kg, and the total engine weight, with a lightened HP pump body and a titanium alloy turbocharger, is less than 100 kg; the total displacement is 1365 cm³ and the estimated output power is 220 CV. The direct conversion of automotive piston engines to aircraft incurs excessive weight penalties. In fact, the main aircraft requirement is to optimize the power-to-weight ratio in order to obtain compact and fast engines for aeronautical use: this 1600 cc common rail diesel engine version demonstrates that these results can be achieved.
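The power-to-weight figures quoted in the abstract can be checked with simple arithmetic. The sketch below is ours, not from the thesis; it only converts the stated numbers (CV to kW, 1 CV ≈ 0.7355 kW) into a specific-power comparison, and the helper name is an assumption.

```python
# Back-of-the-envelope check of the power-to-weight figures quoted in the
# abstract. Powers and weights are the ones stated in the text; the helper
# function and the CV-to-kW constant are ours (1 CV ~= 0.7355 kW).
CV_TO_KW = 0.7355

def specific_power(power_cv, weight_kg):
    """Power-to-weight ratio in kW/kg."""
    return power_cv * CV_TO_KW / weight_kg

# 1600 cc common-rail diesel version: ~220 CV, complete engine < 100 kg
diesel_1600 = specific_power(220, 100)   # ~1.62 kW/kg
# First V-90 version: up to 300 CV at 30 kg bare weight (no auxiliaries)
v90_bare = specific_power(300, 30)       # ~7.36 kW/kg (bare, not comparable)
```

The bare V-90 figure is much higher only because auxiliaries and the turbocharging group are excluded from its 30 kg.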
Abstract:
This dissertation concerns active fibre-reinforced composites with embedded shape memory alloy wires. The structural application of active materials makes it possible to develop adaptive structures which actively respond to changes in the environment, such as morphing structures, self-healing structures and power harvesting devices. In particular, shape memory alloy actuators integrated within a composite actively control the structural shape or stiffness, thus influencing the composite's static and dynamic properties. Envisaged applications include, among others, the prevention of thermal buckling of the outer skin of air vehicles, shape changes in panels for improved aerodynamic characteristics and the deployment of large space structures. The study and design of active composites is a complex and multidisciplinary topic, requiring in-depth understanding of both the coupled behaviour of active materials and the interaction between the different composite constituents. Both fibre-reinforced composites and shape memory alloys are extremely active research topics, whose modelling and experimental characterisation still present a number of open problems. Thus, while this dissertation focuses on active composites, some of the research results presented here can be usefully applied to traditional fibre-reinforced composites or other shape memory alloy applications. The dissertation is composed of four chapters. In the first chapter, active fibre-reinforced composites are introduced by giving an overview of the most common choices available for the reinforcement, matrix and production process, together with a brief introduction and classification of active materials. The second chapter presents a number of original contributions regarding the modelling of fibre-reinforced composites.
Different two-dimensional laminate theories are derived from a parent three-dimensional theory, introducing a procedure for the a posteriori reconstruction of transverse stresses along the laminate thickness. Accurate through-the-thickness stresses are crucial for composite modelling, as they are responsible for some common failure mechanisms. A new finite element, based on First-order Shear Deformation Theory and a hybrid stress approach, is proposed for the numerical solution of the two-dimensional laminate problem. The element is simple and computationally efficient. The transverse stresses through the laminate thickness are reconstructed starting from a general finite element solution. A two-stage procedure is devised, based on Recovery by Compatibility in Patches and three-dimensional equilibrium. Finally, the determination of the elastic parameters of laminated structures via numerical-experimental Bayesian techniques is investigated. Two different estimators are analysed and compared, leading to the definition of an alternative procedure to improve the convergence of the estimation process. The third chapter focuses on shape memory alloys, describing their properties and applications. A number of constitutive models proposed in the literature, both one-dimensional and three-dimensional, are critically discussed and compared, underlining their potential and limitations, which are mainly related to the definition of the phase diagram and the choice of internal variables. Some new experimental results on shape memory alloy material characterisation are also presented. These experimental observations display some features of shape memory alloy behaviour which are generally not included in current models, so some ideas are proposed for the development of a new constitutive model. The fourth chapter, finally, focuses on active composite plates with embedded shape memory alloy wires.
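The idea of recovering transverse stresses from three-dimensional equilibrium can be sketched numerically. The example below is a generic illustration, not the thesis procedure: for a homogeneous section under a shear force V, integrating d(tau_xz)/dz = -d(sigma_x)/dx through the thickness recovers the classic parabolic shear profile. All values are invented.

```python
import numpy as np

# Sketch of transverse-stress recovery via 3-D equilibrium:
#   d(sigma_x)/dx + d(tau_xz)/dz = 0  =>  tau_xz(z) = -int d(sigma_x)/dx dz'
# For a homogeneous beam in bending, sigma_x = M(x) z / I, so
# d(sigma_x)/dx = V z / I, and the integral gives tau_xz = V/(2I)(h^2/4 - z^2).
h, b, V = 0.01, 1.0, 100.0          # thickness [m], width [m], shear force [N]
I = b * h**3 / 12.0                 # second moment of area
z = np.linspace(-h / 2, h / 2, 401)
dsx_dx = V * z / I                  # in-plane stress gradient along the beam

tau_xz = np.zeros_like(z)           # trapezoidal integration from z = -h/2
for k in range(1, len(z)):
    tau_xz[k] = tau_xz[k - 1] - 0.5 * (dsx_dx[k] + dsx_dx[k - 1]) * (z[k] - z[k - 1])

tau_max = tau_xz.max()              # analytic peak: 3V/(2bh) = 15000 Pa at z = 0
```

The recovered profile is zero on both free surfaces and peaks at mid-thickness, matching the analytic solution; for laminates the same integration runs ply by ply with the layer-wise stress gradients.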
A number of different approaches can be used to predict the behaviour of such structures, each model presenting different advantages and drawbacks related to complexity and versatility. A simple model, able to describe both shape and stiffness control configurations within the same framework, is proposed and implemented. The model is then validated considering the shape control configuration, which is the most sensitive to the model parameters. The experimental work is divided into two parts. In the first part, an active composite is built by gluing prestrained shape memory alloy wires onto a carbon fibre laminate strip. This structure is relatively simple to build, but it is useful to experimentally demonstrate the feasibility of the concept proposed in the first part of the chapter. In the second part, the manufacture of a fibre-reinforced composite with embedded shape memory alloy wires is investigated, considering different possible choices of materials and manufacturing processes. Although a number of technological issues still need to be faced, the experimental results demonstrate the mechanism of shape control via embedded shape memory alloy wires, while showing good agreement with the predictions of the proposed model.
Abstract:
In recent years an ever-increasing degree of automation has been observed in industrial processes. This trend is motivated by the demand for systems with high performance in terms of the quality of the products and services generated, productivity, efficiency and low costs in design, realization and maintenance. The growth of complex automation systems is rapidly spreading over automated manufacturing systems (AMS), where the integration of mechanical and electronic technology, typical of Mechatronics, is merging with other technologies such as Informatics and communication networks. An AMS is a very complex system that can be thought of as a set of flexible working stations together with one or more transportation systems. To understand how important these machines are in our society, consider that every day most of us use bottled water or soda, or buy products packed in boxes, such as food or cigarettes. Another indication of their complexity is the fact that the consortium of machine producers has estimated around 350 types of manufacturing machine. A large number of manufacturing machine industries, notably packaging machine industries, are present in Italy; in particular, a great concentration of this kind of industry is located in the Bologna area, which for this reason is called the "packaging valley". Usually, the various parts of an AMS interact in a concurrent and asynchronous way, and coordinating the parts of the machine to obtain a desired overall behaviour is a hard task. This is often the case in large-scale systems, organized in a modular and distributed manner.
Even if the success of a modern AMS from a functional and behavioural point of view is still to be attributed to the design choices made in the definition of the mechanical structure and the electrical/electronic architecture, the system that governs the control of the plant is becoming crucial, because of the large number of duties assigned to it. Apart from the activity inherent in the automation of the machine cycles, the supervisory system is called upon to perform other main functions, such as: emulating the behaviour of traditional mechanical members, thus allowing a drastic constructive simplification of the machine and a crucial functional flexibility; dynamically adapting the control strategies according to the different productive needs and operational scenarios; obtaining a high quality of the final product through the verification of the correctness of the processing; guiding the machine operator to promptly and carefully take the actions needed to establish or restore the optimal operating conditions; and managing diagnostic information in real time, in support of machine maintenance operations. The kinds of facilities that designers can find directly on the market, in terms of software component libraries, in fact provide adequate support for the implementation of either top-level or bottom-level functionalities, typically pertaining to the domains of user-friendly HMIs, closed-loop regulation and motion control, and fieldbus-based interconnection of remote smart devices. What is still lacking is a reference framework comprising a comprehensive set of highly reusable logic control components that, focusing on the cross-cutting functionalities characterizing the automation domain, may help designers in the process of modelling and structuring their applications according to their specific needs.
Historically, the design and verification process for complex automated industrial systems has been performed in an empirical way, without a clear distinction between functional and technological-implementation concepts and without a systematic method to deal organically with the complete system. In the field of analog and digital control, by contrast, design and verification through formal and simulation tools have long been adopted, at least for multivariable and/or nonlinear controllers for complex time-driven dynamics, as in the fields of vehicles, aircraft, robots, electric drives and complex power electronics equipment. Moving to the field of logic control, typical of industrial manufacturing automation, the design and verification process is approached in a completely different way, usually very "unstructured". No clear distinction between functions and implementations, or between functional architectures and technological architectures and platforms, is considered. Probably this difference is due to the different "dynamical framework" of logic control with respect to analog/digital control. As a matter of fact, in logic control discrete-event dynamics replace time-driven dynamics; hence most of the formal and mathematical tools of analog/digital control cannot be directly migrated to logic control to highlight the distinction between functions and implementations. In addition, in the common view of application technicians, logic control design is strictly connected to the adopted implementation technology (relays in the past, software nowadays), leading again to a deep confusion between the functional view and the technological view. In software engineering, concepts such as modularity, encapsulation, composability and reusability are strongly emphasized and profitably realized in the so-called object-oriented methodologies.
Industrial automation has only lately been adopting this approach, as testified by IEC standards such as IEC 61131-3 and IEC 61499, which have been considered in commercial products only recently. On the other hand, in the scientific and technical literature many contributions have already been proposed to establish a suitable modelling framework for industrial automation. In recent years it has been possible to note a considerable growth in the exploitation of innovative concepts and technologies from the ICT world in industrial automation systems. As far as logic control design is concerned, Model Based Design (MBD) is being imported into industrial automation from the software engineering field. Another key point in industrial automated systems is the growth of requirements in terms of availability, reliability and safety for technological systems. In other words, the control system should not only deal with the nominal behaviour, but should also handle other important duties, such as diagnosis and fault isolation, recovery and safety management. Indeed, together with high performance, fault occurrences increase in complex systems. This is a consequence of the fact that, as typically occurs in reliable mechatronic systems, in complex systems such as AMS an increasing number of electronic devices are present alongside reliable mechanical elements, and these devices are intrinsically more vulnerable. The diagnosis and fault isolation problem in a generic dynamical system consists in designing a processing unit that, by appropriately processing the inputs and outputs of the dynamical system, is capable of detecting incipient faults on the plant devices and reconfiguring the control system so as to guarantee satisfactory performance.
The designer should be able to formally verify the product, certifying that, in its final implementation, it will perform its required function while guaranteeing the desired level of reliability and safety; the next step is that of preventing faults and eventually reconfiguring the control system so that faults are tolerated. On this topic, important improvements to formal verification of logic control, fault diagnosis and fault tolerant control derive from Discrete Event Systems theory. The aim of this work is to define a design pattern and a control architecture to help the designer of control logic in industrial automated systems. The work starts with a brief discussion of the main characteristics and a description of industrial automated systems in Chapter 1. Chapter 2 surveys the state of the software engineering paradigms applied to industrial automation. Chapter 3 presents an architecture for industrial automated systems based on the new concept of the Generalized Actuator, showing its benefits, while in Chapter 4 this architecture is refined using a novel entity, the Generalized Device, in order to improve the reusability and modularity of the control logic. Chapter 5 presents a new approach, based on Discrete Event Systems, to the problem of software formal verification, together with an active fault tolerant control architecture using online diagnostics. Finally, concluding remarks and some ideas on new directions to explore are given. Appendix A briefly reports some concepts and results about Discrete Event Systems which should help the reader understand some crucial points in Chapter 5, while Appendix B gives an overview of the experimental testbed of the Laboratory of Automation of the University of Bologna, used to validate the approaches presented in Chapters 3, 4 and 5. Appendix C reports some component models used in Chapter 5 for formal verification.
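The Discrete Event Systems view of formal verification can be illustrated minimally: model a station as a finite automaton and check by reachability that no unsafe state can occur. The states, events and safety property below are invented for illustration; they are not the models of the thesis testbed.

```python
from collections import deque

# Toy discrete-event model of a working station: states, events and the
# "forbidden" safety property are illustrative assumptions, not taken from
# the thesis. Verification here is plain reachability over the transition
# relation: a safety property holds if no forbidden state is reachable.
transitions = {
    ("idle",     "start"):  "loading",
    ("loading",  "loaded"): "ready",
    ("ready",    "press"):  "pressing",
    ("pressing", "done"):   "idle",
    ("loading",  "fault"):  "alarm",   # diagnosable fault path
    ("alarm",    "reset"):  "idle",
}

def reachable(init):
    """Breadth-first search of all states reachable from init."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        for (src, _event), dst in transitions.items():
            if src == s and dst not in seen:
                seen.add(dst)
                frontier.append(dst)
    return seen

states = reachable("idle")
assert "forbidden" not in states   # the safety property holds in this model
```

Real DES verification tools work on synchronized compositions of many such automata, where the state space is too large to enumerate by hand; the principle, however, is the same reachability question.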
Abstract:
One of the most interesting challenges of the coming years will be the automation of Air Space Systems. This process will involve different aspects, such as Air Traffic Management, Aircraft and Airport Operations, and Guidance and Navigation Systems. The use of UAS (Uninhabited Aerial Systems) for civil missions will be one of the most important steps in this automation process. In civil air space, Air Traffic Controllers (ATC) manage the air traffic, ensuring that a minimum separation between the controlled aircraft is always provided. For this purpose ATCs use several operative avoidance techniques, such as holding patterns or rerouting. The use of UAS in this context will require the definition of strategies for a common management of piloted and unpiloted air traffic that allow the UAS to self-separate. As a first employment in civil air space, we consider a UAS surveillance mission that consists of departing from a ground base, taking pictures over a set of mission targets and coming back to the same ground base. Throughout the mission, a set of piloted aircraft fly in the same airspace, and thus the UAS has to self-separate using the ATC avoidance techniques mentioned above. We consider two objectives: the first consists in minimizing the impact of the air traffic on the mission; the second consists in minimizing the impact of the mission on the air traffic. A particular version of the well-known Travelling Salesman Problem (TSP), called the Time-Dependent TSP (TDTSP), has been studied to deal with traffic problems in big urban areas. Its basic idea is that the cost of the route between two clients depends on the period of the day in which it is traversed. This thesis argues that such an idea can be applied to air traffic too, using a convenient time horizon compatible with aircraft operations.
The cost of a UAS sub-route will depend on the air traffic that it will meet when starting such a route at a specific moment, and consequently on the avoidance manoeuvre that it will use to avoid that conflict. Conflict avoidance is a topic that has been intensively developed in past years using different approaches. In this thesis we propose a new approach, based on the use of ATC operative techniques, that makes it possible both to model the UAS problem in a TDTSP framework and to adopt an Air Traffic Management perspective. Starting from this kind of mission, the problem of the insertion of the UAS in civil air space is formalized as the UAS Routing Problem (URP). To this end, we introduce a new structure called the Conflict Graph, which makes it possible to model the avoidance manoeuvres and to define the arc cost as a function of the departure time. Two Integer Linear Programming formulations of the problem are proposed. The first is based on a TDTSP formulation which, unfortunately, is weaker than the TSP formulation. Thus a new formulation, based on a TSP variation that uses specific penalties to model the holdings, is proposed. Different algorithms are presented: exact algorithms, simple heuristics used as upper bounds on the number of time steps used, and metaheuristic algorithms such as Genetic Algorithms and Simulated Annealing. Finally, an air traffic scenario has been simulated using real air traffic data in order to test our algorithms. Graphical tools have been used to represent the Milano Linate air space and its air traffic during different days. Such data have been provided by ENAV S.p.A. (the Italian Agency for Air Navigation Services).
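The core TDTSP idea, arc costs that depend on the departure time, can be sketched on a tiny instance. The 3-node instance below and its costs are invented for illustration (one time step per arc); it is not one of the thesis formulations or algorithms.

```python
import itertools

# Minimal Time-Dependent TSP sketch: the cost of flying arc (i, j) depends
# on the time step at which the UAS leaves node i. Node 0 is the ground
# base; nodes 1 and 2 are mission targets. Instance data are invented:
# leaving the base towards target 1 at t = 0 is expensive (heavy traffic).
cost = {
    (0, 1): [10, 1, 1], (0, 2): [1, 1, 1],
    (1, 0): [1, 1, 1],  (1, 2): [1, 1, 1],
    (2, 0): [1, 1, 1],  (2, 1): [1, 1, 1],
}

def tour_cost(tour):
    """Sum the time-dependent arc costs along a closed tour from node 0."""
    total = 0
    for t, (i, j) in enumerate(zip(tour, tour[1:] + tour[:1])):
        total += cost[(i, j)][t % 3]
    return total

def best_tour(n=3):
    """Brute-force the departure-time-aware optimum (fine for tiny n)."""
    tours = [(0,) + p for p in itertools.permutations(range(1, n))]
    return min(tours, key=tour_cost)

opt = best_tour()   # visits target 2 first, dodging the t = 0 traffic peak
```

A time-independent TSP would see both tours as equal; the time dependence is what makes (0, 2, 1) strictly better here, which is exactly the effect holding patterns and departure timing have on the UAS routes.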
Abstract:
In this work, an aircraft-based laser-ablation single-particle mass spectrometer was designed from scratch, built, characterised and deployed on several field measurement campaigns. ALABAMA (Aircraft-based Laser ABlation Aerosol MAss Spectrometer) is able to investigate the chemical composition and size of individual aerosol particles in the submicrometre range (135–900 nm). After focusing in an aerodynamic lens, the aerodynamic diameter of each particle is first determined by a time-of-flight measurement between two continuous-wave lasers. The particles thus detected and classified are then individually evaporated and ionised by a targeted laser pulse. The ions are separated according to their mass-to-charge ratio and detected in a bipolar time-of-flight mass spectrometer. The resulting mass spectra provide detailed insight into the chemical structure of the individual particles. The entire instrument was designed so that it can be operated on the new high-altitude research aircraft HALO and on other mobile platforms. To make this possible, all components were accommodated in a rack of less than 0.45 m³ volume. The complete instrument, including the rack, weighs less than 150 kg and satisfies the strict safety regulations for operation on board research aircraft. ALABAMA is thus the smallest and lightest instrument of its kind. After its construction, the properties and limits of all components were characterised in detail in the laboratory and on measurement campaigns. First, the properties of the particle beam, such as beam width and divergence, were thoroughly investigated.
These results were important for validating the subsequent measurements of the detection and ablation efficiency. The efficiency measurements showed that, depending on their size and composition, up to 86% of the aerosol particles present are successfully detected and size-classified. Up to 99.5% of the detected particles could be ionised and thus chemically analysed. These very high efficiencies are particularly decisive for measurements at high altitude, where in some cases only very low particle concentrations are present. The bipolar mass spectrometer achieves average mass resolutions of up to R = 331. In laboratory and field measurements, elements such as Au, Rb, Co, Ni, Si, Ti and Pb could thereby be unambiguously identified from their isotope patterns. First measurements on board an ATR-42 research aircraft during the MEGAPOLI campaign in Paris yielded a comprehensive dataset of aerosol particles within the planetary boundary layer. ALABAMA could be operated reliably and precisely under harsh physical conditions (temperatures > 40 °C, accelerations of ±2 g). On the basis of characteristic signals in the mass spectra, the particles could be reliably divided into 8 chemical classes, and individual classes could be attributed to specific sources. For example, particles with a strong sodium and potassium signature could be unambiguously traced back to biomass burning. ALABAMA is thus a valuable instrument for characterising particles in situ and thereby investigating a wide range of scientific questions, particularly in atmospheric research.
Abstract:
Design and performance analysis of a controller, realized with the fuzzy methodology, for a docking manoeuvre between two airships. Preliminary to this was a campaign of experimental data collection in the wind tunnel made available by Clarkson University; the data were then used to build a simulator with which to test the controller. The first chapter presents airship technology, the various types and a description of modern concepts, followed by the applications in which modern airships can be employed. The last part deals with two examples of docking between aerial vehicles: in-flight refuelling and "parasite aircraft". The second chapter deals with the logic used by the controller: fuzzy logic. The basics of classical set theory are the starting point for showing how, by introducing membership functions, it is possible to switch between classical and fuzzy theory. The second part of the chapter covers the notions of fuzzy theory, presenting the methodology with which a controller of this type can be inserted into a "traditional" system. The third chapter presents the flight model of airships. Starting from Newton's law and introducing the concepts of added mass and added inertia, the nonlinear equations of motion are obtained. The last part is devoted to the linearization of the equations and to the trim condition. The fourth chapter covers the experimental wind-tunnel test campaign, with the realization of the scale models and the calibration of the balance; the chapter then discusses the experimental data collected. The fifth chapter shows the methodology with which a fuzzy controller can be designed to control the docking manoeuvre between airships. The second part shows the performance obtained with this type of system.
Abstract:
The main objective of this thesis is to evaluate the robustness of state-of-the-art face recognition techniques with respect to image alterations produced by morphing. Such alterations could be carried out for criminal purposes related to the use of identity documents with biometric elements and, in particular, of the electronic passport, which uses the face as the primary biometric characteristic for identity verification. The thesis work therefore required: - the creation of morphed images starting from the faces of different subjects; - the execution of tests with commercial face recognition software in order to evaluate its robustness.
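The morphing operation at the heart of this attack can be illustrated minimally. A full face morph first warps the two images so that facial landmarks coincide; the sketch below shows only the final pixel-wise cross-dissolve step, on stand-in arrays rather than real face images, and is our illustration, not the thesis pipeline.

```python
import numpy as np

# Minimal cross-dissolve step of an image morph: after the two face images
# have been geometrically aligned (landmark warping, omitted here), the
# morph is a pixel-wise blend. Arrays and values are illustrative stand-ins.

def cross_dissolve(img_a, img_b, alpha):
    """Blend two aligned images; alpha=0 gives img_a, alpha=1 gives img_b."""
    assert img_a.shape == img_b.shape
    blended = (1.0 - alpha) * img_a.astype(float) + alpha * img_b.astype(float)
    return np.clip(blended, 0, 255).astype(np.uint8)

subject_a = np.zeros((4, 4), dtype=np.uint8)          # stand-in "images"
subject_b = np.full((4, 4), 200, dtype=np.uint8)
morphed = cross_dissolve(subject_a, subject_b, 0.5)   # midway morph
```

With alpha near 0.5 the morphed face carries features of both subjects, which is precisely why a recognition system may match it against either one.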
Abstract:
Image editing and manipulation software has become readily available on the market and increasingly easy to use. These powerful editing tools make it possible to modify the content of digital images and violate their authenticity. Nowadays digital images are increasingly used in legal contexts as well, so proving their authenticity and veracity has become a very relevant issue. This thesis studies some approaches found in the literature for detecting alterations in digital images. In particular, the digital alteration technique known as morphing is examined in depth: used in photographs for the issuance of travel identity documents with biometric elements, it could pose security risks. Finally, the thesis work includes the verification of the behaviour of some commercial software in the presence of sample images subjected to different types of alteration.
Abstract:
Statistical models have recently been introduced in computational orthopaedics to investigate bone mechanical properties across several populations. A fundamental aspect of the construction of statistical models concerns the establishment of accurate anatomical correspondences among the objects of the training dataset. Various methods have been proposed to solve this problem, such as mesh morphing or image registration algorithms. The objective of this study is to compare a mesh-based and an image-based statistical appearance model approach for the creation of finite element (FE) meshes. A computed tomography (CT) dataset of 157 human left femurs was used for the comparison. For each approach, 30 finite element meshes were generated with the models. The quality of the obtained FE meshes was evaluated in terms of the volume, size and shape of the elements. Results showed that the quality of the meshes obtained with the image-based approach was higher than that obtained with the mesh-based approach. Future studies are required to evaluate the impact of this finding on the final mechanical simulations.
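Element-quality evaluation of the kind mentioned above can be sketched for a single tetrahedral element. The two measures below (signed volume and longest/shortest edge ratio) are standard, generic metrics; the actual metrics and thresholds used in the study may differ.

```python
import numpy as np

# Generic element-quality measures for one tetrahedral finite element:
# signed volume and edge-length ratio. These are illustrative, standard
# formulas, not necessarily the exact metrics used in the study.

def tet_volume(p0, p1, p2, p3):
    """Signed volume of a tetrahedron from its four vertex positions."""
    return np.dot(np.cross(p1 - p0, p2 - p0), p3 - p0) / 6.0

def tet_edge_ratio(pts):
    """Longest/shortest edge ratio (1.0 for a regular tetrahedron)."""
    edges = [np.linalg.norm(pts[i] - pts[j])
             for i in range(4) for j in range(i + 1, 4)]
    return max(edges) / min(edges)

unit = [np.array(v, dtype=float)
        for v in [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]]
vol = tet_volume(*unit)        # 1/6 for this right-angled unit tetrahedron
ratio = tet_edge_ratio(unit)   # sqrt(2): three unit edges, three diagonals
```

In a mesh-quality comparison such measures are computed for every element of each generated mesh, and their distributions (e.g. worst-case and mean values) are what distinguish the two morphing approaches.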
Abstract:
Worldwide, an increase in terrorist activity can be observed, so it must generally be expected that civil traffic and transport systems also constitute a preferred target of terrorist attacks. Explosive devices have repeatedly been smuggled into means of public and civil passenger and freight transport in order to intimidate and frighten the population through material destruction and massive casualties. An adaptation of the transport containers currently in use to these changed conditions is therefore indispensable, in order to also guarantee protection against explosive devices that are smuggled into air, land and water vehicles together with hand luggage.
Abstract:
We present an algorithm for estimating dense image correspondences. Our versatile approach lends itself to various tasks typical of video post-processing, including image morphing, optical flow estimation, stereo rectification, disparity/depth reconstruction, and baseline adjustment. We incorporate recent advances in feature matching, energy minimization, stereo vision, and data clustering into our approach. At the core of our correspondence estimation we use Efficient Belief Propagation for energy minimization. While state-of-the-art algorithms only work on thumbnail-sized images, our novel feature downsampling scheme, in combination with a simple yet efficient data term compression, can cope with high-resolution data. The incorporation of SIFT (Scale-Invariant Feature Transform) features into the data term computation further resolves matching ambiguities, making long-range correspondence estimation possible. We detect occluded areas by evaluating the correspondence symmetry, and we further apply Geodesic matting to automatically determine plausible values in these regions.
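The energy that belief propagation minimizes in correspondence estimation can be shown in miniature: a per-pixel data term plus a smoothness term between neighbours. The real system runs Efficient Belief Propagation on 2-D images; the toy instance below (a 4-pixel scanline with 3 disparity labels, invented costs) is small enough to brute-force, which is our simplification, not the paper's method.

```python
import itertools

# Toy labelling energy for correspondence estimation on a 1-D scanline:
#   E(d) = sum_i data[i][d_i] + lam * sum_i |d_i - d_{i+1}|
# data[i][d] is the (invented) matching cost of disparity d at pixel i.
data = [[0, 2, 5],
        [2, 0, 4],
        [5, 1, 0],
        [4, 0, 2]]

def energy(assign, lam=1.0):
    e = sum(data[i][d] for i, d in enumerate(assign))               # data term
    e += lam * sum(abs(a - b) for a, b in zip(assign, assign[1:]))  # smoothness
    return e

# 3^4 = 81 labellings: brute force stands in for belief propagation here.
best = min(itertools.product(range(3), repeat=4), key=energy)
# The smoothness term pulls pixel 2 away from its cheapest label (2) to 1.
```

This regularizing effect, trading a slightly worse data cost for a smoother disparity field, is exactly what message passing achieves efficiently on full-resolution 2-D grids.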
Abstract:
The human face is a vital component of our identity, and many people undergo medical aesthetic procedures in order to achieve an ideal or desired look. However, communication between physician and patient is fundamental to understand the patient's wishes and to achieve the desired results. To date, most plastic surgeons rely on either "free hand" 2D drawings on picture printouts or computerized picture morphing. Alternatively, hardware-dependent solutions allow facial shapes to be created and planned in 3D, but they are usually expensive or complex to handle. To offer a simple and hardware-independent solution, we propose a web-based application that uses 3 standard 2D pictures to create a 3D representation of the patient's face, on which facial aesthetic procedures such as filling, skin clearing or rejuvenation, and rhinoplasty are planned in 3D. The proposed application couples a set of well-established methods together in a novel manner to optimize 3D reconstructions for clinical use. Face reconstructions performed with the application were evaluated by two plastic surgeons and also compared to ground truth data. Results showed the application can provide accurate 3D face representations to be used in clinics (with an average error of 2 mm) in less than 5 min.
Abstract:
The application of pesticides and fertilizers in agricultural areas is of crucial importance for crop yields. The use of aircraft is becoming increasingly common in carrying out this task, mainly because of their speed and effectiveness in the spraying operation. However, some factors may reduce the yield, or even cause damage (e.g., crop areas not covered in the spraying process, overlapping spraying of crop areas, applying pesticides on the outer edge of the crop). Weather conditions, such as the intensity and direction of the wind while spraying, add further complexity to the problem of maintaining control. In this paper, we describe an architecture to address the problem of self-adjustment of UAV routes when spraying chemicals in a crop field. We propose and evaluate an algorithm to adjust the UAV route to changes in wind intensity and direction. The path-adaptation algorithm runs on the UAV, and its input is the feedback obtained from the wireless sensor network (WSN) deployed in the crop field. Moreover, we evaluate the impact of the number of communication messages between the UAV and the WSN. The results show that using the feedback information from the sensors to adjust the routes can significantly reduce the waste of pesticides and fertilizers.
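The core of a wind-compensation step like the one this abstract describes can be illustrated with a simple drift model: shift the release point upwind by the displacement the wind imparts on the droplets while they fall. This is a hypothetical sketch under a constant-wind, constant-fall-time assumption, not the paper's algorithm; all names and parameters are illustrative.

```python
def compensate_waypoint(target_xy, wind_vec, fall_time):
    """Shift a spray release point upwind so droplets drift onto the target.
    target_xy: intended deposition point (x, y) in metres
    wind_vec:  wind velocity (vx, vy) in m/s, e.g. from WSN feedback
    fall_time: seconds the droplets take to reach the crop canopy"""
    tx, ty = target_xy
    vx, vy = wind_vec
    # Drift = wind velocity * fall time; release upwind by that amount
    return (tx - vx * fall_time, ty - vy * fall_time)
```

In a feedback loop, the WSN would report the locally measured wind vector and the UAV would recompute each upcoming waypoint with the latest estimate.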
Abstract:
In attempts to elucidate the underlying mechanisms of spinal injuries and spinal deformities, several experimental and numerical studies have been conducted to understand the biomechanical behavior of the spine. However, numerical biomechanical studies suffer from uncertainties associated with hard- and soft-tissue anatomies. Currently, these parameters are identified manually on each mesh model prior to simulation. The determination of soft connective tissues on finite element meshes can be a tedious procedure, which limits the number of models used in numerical studies to a few instances. In order to address these limitations, an image-based method for automatic morphing of soft connective tissues has been proposed. Results showed that the proposed method is capable of accurately determining the spatial locations of predetermined bony landmarks. The present method can be used to automatically generate patient-specific models, which may be helpful in designing studies involving a large number of instances and in understanding the mechanical behavior of biomechanical structures across a given population.
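Image-based morphing methods of this kind typically propagate template landmarks into patient space through a displacement field obtained from image registration. The sketch below illustrates only that lookup step, under assumed array conventions (nearest-voxel sampling, z/y/x field ordering); it is not the authors' method.

```python
import numpy as np

def transfer_landmarks(landmarks, displacement):
    """landmarks: (n, 3) template coordinates in voxel units (x, y, z).
    displacement: (D, H, W, 3) field mapping template -> patient space.
    Nearest-voxel lookup keeps the sketch short; real pipelines interpolate."""
    idx = np.round(landmarks).astype(int)
    # Sample the field at each landmark (array is indexed z, y, x)
    disp = displacement[idx[:, 2], idx[:, 1], idx[:, 0]]
    return landmarks + disp
```

With the landmarks mapped, soft-tissue attachment sites can then be assigned automatically on each patient-specific mesh.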
Abstract:
Statistical appearance models have recently been introduced in bone mechanics to investigate bone geometry and mechanical properties in population studies. The establishment of accurate anatomical correspondences is a critical aspect of constructing reliable models. Depending on whether a bone is represented as an image or a mesh, correspondences are detected using image registration or mesh morphing. The objective of this study was to compare image-based and mesh-based statistical appearance models of the femur for finite element (FE) simulations. To this end, (i) we compared correspondence detection methods on the bone surface and in the bone volume; (ii) we created an image-based and a mesh-based statistical appearance model from 130 images, which we validated using compactness, representation, and generalization, and we analyzed the FE results on 50 recreated bones vs. the original bones; (iii) we created 1000 new instances and compared the quality of the FE meshes. Results showed that the image-based approach was more accurate in volume correspondence detection and FE mesh quality, whereas the mesh-based approach was more accurate in surface correspondence detection and model compactness. Based on our results, we recommend the use of image-based statistical appearance models for FE simulations of the femur.
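The "compactness" criterion used to validate statistical shape/appearance models is conventionally the fraction of total variance captured by the first k principal components of the training data. A minimal sketch of that metric, with assumed data layout and function names:

```python
import numpy as np

def compactness(data, k):
    """data: (n_samples, n_features) matrix of shape/appearance vectors,
    one row per training instance. Returns the fraction of total variance
    captured by the first k principal modes."""
    centered = data - data.mean(axis=0)
    # Singular values of the centered data give the mode variances (up to
    # a constant factor that cancels in the ratio)
    s = np.linalg.svd(centered, compute_uv=False)
    var = s ** 2
    return var[:k].sum() / var.sum()
```

A more compact model reaches a given variance fraction with fewer modes; representation (reconstruction error on training data) and generalization (leave-one-out reconstruction error) complete the standard validation triple.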