880 results for Speed limits.
Abstract:
Theoretical models are developed for the continuous-wave and pulsed laser incision and cutting of thin single- and multi-layer films. A one-dimensional steady-state model establishes the theoretical foundations of the problem by combining a power-balance integral with heat flow in the direction of laser motion. In this approach, classical modelling methods for laser processing are extended by introducing multi-layer optical absorption and thermal properties. The calculation domain is subdivided accordingly as individual layers are progressively removed. A second, time-domain numerical model for the short-pulse laser ablation of metals accounts for changes in optical and thermal properties during a single laser pulse. With sufficient fluence, the target surface is heated towards its critical temperature and homogeneous boiling or "phase explosion" takes place. Improvements over previous works are achieved through more accurate calculation of optical absorption and of the shielding of the incident beam by the ablation products. A third, general time-domain numerical laser processing model combines ablation depth and energy absorption data from the short-pulse model with two-dimensional heat flow in an arbitrary multi-layer structure. Layer removal results both from progressive short-pulse ablation and from classical vaporisation due to long-term heating of the sample. At low velocity, pulsed laser exposure of multi-layer films comprising aluminium-plastic and aluminium-paper is found to be characterised by short-pulse ablation of the metallic layer and vaporisation or degradation of the others due to thermal conduction from the former. At high velocity, all layers of the two films are ultimately removed by vaporisation or degradation as the average beam power is increased to achieve a complete cut. The transition velocity between the two characteristic removal types is shown to be a function of the pulse repetition rate.
An experimental investigation validates the simulation results and provides new laser processing data for some typical packaging materials.
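The kind of power balance underlying the one-dimensional steady-state model can be illustrated with a back-of-the-envelope estimate. This is not the model of the thesis; the absorptivity, kerf geometry and all material values below are assumed, illustrative numbers for a thin aluminium layer.

```python
# Back-of-the-envelope laser-cutting power balance: the absorbed beam power
# must supply the enthalpy needed to heat and vaporise the material removed
# per unit time.  All numbers are assumed, illustrative values.

def required_power(absorptivity, width_m, depth_m, speed_m_s,
                   rho, c_p, delta_t, latent_heat):
    """Beam power needed to remove a kerf of given width and depth at speed v."""
    mass_rate = rho * width_m * depth_m * speed_m_s   # kg/s of material removed
    enthalpy = c_p * delta_t + latent_heat            # J/kg to heat and vaporise
    return mass_rate * enthalpy / absorptivity        # incident power required

# 30 um wide, 10 um thick aluminium layer cut at 1 m/s with 10 % absorption
p = required_power(0.1, 30e-6, 10e-6, 1.0, 2700.0, 900.0, 2400.0, 1.17e7)
```

A full model must of course also account for conduction losses and, in the pulsed regime, for phase explosion and plume shielding, which is precisely what the numerical models above add.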
Abstract:
Model helicopter flying is a passion shared by a growing number of people: new events are organized all over the world and new disciplines are continually proposed. One of these is the speed discipline, in which pilots compete to fly their model helicopters at the highest possible speeds. SAB Heli Division s.r.l., as a manufacturer of model-helicopter blades and of the Goblin helicopter series, has an interest in supporting its pilots and its machines, making them fast and competitive. To this end it set out to develop a blade that, mounted on its helicopter dedicated to this discipline, could beat the competition, with the ambition of setting an international speed record. The problem is therefore to develop a blade that best optimizes the characteristics of the Goblin Speed model helicopter, so as to make the most of the power installed on board. Owing to the limited means available, the optimization was carried out using blade-element theory. The calculation was set up by determining the average power over one rotor revolution in forward flight at 270 km/h; the global optimization algorithms available in MATLAB were then used to search for the rotor that would permit flight at that speed, varying the rotor disc radius, the blade twist and the chord distribution along the blade. To obtain more accurate results, models were used to estimate the induced velocity field and the effects of dynamic stall. Other quantities were estimated for which no real data were available, or for which a precise value would have been too complex to obtain with the knowledge at hand; care was nevertheless taken to produce plausible estimates.
Some of these quantities are the aerodynamic characteristics of the NACA 0012 airfoil used, obtained through two-dimensional CFD analysis, the collective and cyclic pitch commands that trim the aircraft, and the aerodynamic drag of the entire model helicopter. The results of the calculation were first compared with the solutions already adopted by the company. The blade was then manufactured, and flight tests were used to evaluate the performance of the machine fitted with it. Despite the approximations adopted, the blade designed from the optimization results was found to reflect the design philosophy: at speeds comparable to those obtained with the blades produced by SAB Heli Division, the required power is indeed lower. However, it was not possible to obtain an actual improvement in flight speed, presumably because of the estimates of the aerodynamic characteristics of the various parts of the Goblin Speed.
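The blade-element average-power calculation can be illustrated with a minimal sketch. This is not the thesis' MATLAB code: the geometry, rotor speed and constant profile-drag coefficient below are placeholder assumptions, and induced velocity and dynamic stall are deliberately omitted.

```python
import math

# Minimal blade-element sketch: estimate the azimuth-averaged profile-drag
# power of one blade over a full rotor revolution in forward flight.
# All numbers (chord, radius, rpm, drag coefficient) are assumed placeholders.

RHO = 1.225   # air density, kg/m^3
CD0 = 0.011   # assumed constant profile-drag coefficient (NACA 0012-like)

def mean_profile_power(radius_m, chord_m, omega_rad_s, v_flight_m_s,
                       n_span=40, n_azimuth=72):
    """Average profile-drag power of one blade over one revolution."""
    dr = radius_m / n_span
    total = 0.0
    for j in range(n_azimuth):
        psi = 2.0 * math.pi * j / n_azimuth        # azimuth angle
        for i in range(n_span):
            r = (i + 0.5) * dr                     # element mid-point
            # tangential velocity seen by the element: rotation + advance
            u_t = omega_rad_s * r + v_flight_m_s * math.sin(psi)
            drag = 0.5 * RHO * u_t * abs(u_t) * chord_m * CD0 * dr
            total += drag * u_t                    # element drag power
    return total / n_azimuth                       # azimuthal average

# e.g. a 0.35 m blade at 21 000 rpm advancing at 270 km/h
p = mean_profile_power(0.35, 0.04, 21000 * 2 * math.pi / 60, 270 / 3.6)
```

The optimizer in the thesis effectively searches over radius, twist and chord distribution for the rotor that minimizes this kind of averaged power while still trimming the aircraft at the target speed.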
Abstract:
Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmer's responsibility to explicitly manage memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages were proposed that work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budgets are constrained. This dissertation explores the applicability of the shared-memory paradigm to modern many-core systems, focusing on ease of programming. It concentrates on OpenMP, the de facto standard for shared-memory programming. In the first part, the costs of algorithms for synchronization and data partitioning are analyzed, and the algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. The second part of the thesis focuses on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders-of-magnitude gains in speed and energy efficiency compared to the "pure software" version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates the HWPUs through the shared-memory system.
Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with it.
Abstract:
The aim of this study was to examine whether a real high-speed, short-term competition influences clinicopathological data, focusing on muscle enzymes, the iron profile and acute phase proteins. Thirty Thoroughbred racing horses (15 geldings and 15 females) aged between 4 and 12 years (mean 7 years) were used for the study. All the animals performed a high-speed, short-term competition over a distance of 154 m in about 12 seconds, repeated 8 times within approximately one hour (Niballo Horse Race). Blood samples were obtained 24 hours before and within 30 minutes after the end of the races. A complete blood count (CBC) and biochemical and haemostatic profiles were performed on all samples. The post-race concentration of each parameter was corrected using an estimate of the plasma volume contraction based on the individual albumin (Alb) concentration. Data were analysed with descriptive statistics, and the percentage variation from baseline values was recorded. Pre- and post-race results were compared with non-parametric statistics (Mann-Whitney U test). A difference was considered significant at p<0.05. A significant plasma volume contraction after the race was detected (Hct, Alb; p<0.01). Other relevant findings were increased concentrations of muscle enzymes (CK, LDH; p<0.01) and Crt (p<0.01), a significant increase in uric acid (p<0.01), a significant decrease in haptoglobin (p<0.01) associated with an increase in ferritin concentrations (p<0.01), and a significant decrease in fibrinogen (p<0.05) accompanied by a non-significant increase in D-dimer concentrations (p=0.08). This competition produced relevant clinical pathology abnormalities in galloping horses. The study confirms significant muscular damage, oxidative stress, intravascular haemolysis and subclinical haemostatic alterations. Further studies are needed to better understand the pathogenesis, medical relevance and impact on performance of these alterations in equine sports medicine.
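A minimal sketch of the albumin-based plasma-volume correction described above. The proportional-scaling formula and the sample values are assumptions for illustration, since the abstract does not give the exact correction used in the study.

```python
# Hypothetical sketch of the albumin-based plasma-volume correction: assume
# each post-race analyte is scaled by the pre/post albumin ratio to remove the
# effect of plasma volume contraction.  The study's exact formula may differ.

def corrected_post_race(value_post, alb_pre, alb_post):
    """Scale a post-race analyte by the pre/post albumin ratio."""
    return value_post * (alb_pre / alb_post)

def percent_change(pre, post):
    """Percentage variation from the baseline (pre-race) value."""
    return 100.0 * (post - pre) / pre

# Example: CK rises from 250 to 400 U/L while albumin goes 3.0 -> 3.3 g/dL
ck_corr = corrected_post_race(400.0, 3.0, 3.3)   # ~363.6 U/L after correction
delta = percent_change(250.0, ck_corr)           # ~45.5 % true increase
```

Without such a correction, part of the apparent post-race rise would simply reflect haemoconcentration rather than a genuine change in the analyte.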
Abstract:
Due to its practical importance and inherent complexity, the optimisation of distribution networks for supplying drinking water has been the subject of extensive study for the past 30 years. The optimisation typically concerns sizing the pipes in the water distribution network (WDN), optimising specific parts of the network such as pumps and tanks, or analysing and optimising the reliability of a WDN. In this thesis, the author analyses two different WDNs (the Anytown and Cabrera city networks), solving a multi-objective optimisation problem (MOOP). The two main objectives in both cases were the minimisation of energy cost (€) or energy consumption (kWh), together with the total number of pump switches (TNps) during a day. For this purpose, GANetXL, a decision-support-system generator for multi-objective optimisation developed by the Center of Water System at the University of Exeter, was used. GANetXL works by calling the EPANET hydraulic solver each time a hydraulic analysis needs to be performed. The main algorithm used was NSGA-II, a second-generation multi-objective optimisation algorithm, which produced the Pareto front of each configuration. The first experiment carried out concerned the Anytown network, a large network with a pumping station of four fixed-speed parallel pumps boosting the flow. The main intervention was to replace these pumps with variable-speed-driven pumps (VSDPs) by installing inverters capable of varying their speed during the day. In this way, substantial energy and cost savings were achieved, along with a reduction in the number of pump switches. The results of the research are thoroughly illustrated in chapter 7, with comments and a variety of graphs and configurations. The second experiment concerned the Cabrera network, a smaller WDN with a single fixed-speed (FS) pump.
The optimisation problem was the same: the minimisation of energy consumption and, in parallel, the minimisation of TNps. The same optimisation tool (GANetXL) was used. The main scope was to carry out several different experiments covering a wide variety of configurations, using different pumps (this time keeping the FS mode), different tank levels, different pipe diameters and different emitter coefficients. All these different modes produced a large number of results, which are compared in chapter 8. In conclusion, the optimisation of WDNs is a very interesting field with a vast space of options: a large number of algorithms to choose from, different techniques and configurations, and different decision-support-system generators. The researcher has to be ready to "roam" among these choices until a satisfactory result convinces him or her that a good optimisation point has been reached.
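The pump-switch objective (TNps) is easy to make concrete. A minimal sketch, assuming a binary hourly on/off schedule for one pump (the schedule below is an invented example, not data from the thesis):

```python
# Count the total number of pump switches (TNps) in a daily on/off schedule,
# one of the two objectives of the multi-objective problem described above.

def pump_switches(schedule):
    """Count state changes (on->off or off->on) across a daily schedule."""
    return sum(1 for a, b in zip(schedule, schedule[1:]) if a != b)

# 24 hourly states for one pump: 1 = running, 0 = off (assumed example)
schedule = [0, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0,
            0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
tnps = pump_switches(schedule)   # -> 8
```

In the thesis this count is minimised jointly with the energy objective, since frequent switching increases pump wear and maintenance cost.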
Abstract:
Evidence accumulated in the last ten years has demonstrated that a large proportion of the mitochondrial respiratory chain complexes in a variety of organisms is arranged in supramolecular assemblies called supercomplexes or respirasomes. Besides conferring a kinetic advantage (substrate channelling) and being required for the assembly and stability of Complex I, indirect considerations support the view that supercomplexes may also prevent excessive formation of reactive oxygen species (ROS) by the respiratory chain. Following this line of thought, we decided to directly investigate ROS production by Complex I under conditions in which the complex is either arranged as a component of the supercomplex I1III2 or dissociated as an individual enzyme. The study was carried out both in bovine heart mitochondrial membranes and in reconstituted proteoliposomes composed of Complexes I and III, in which the supramolecular organization of the respiratory assemblies was impaired by: (i) treatment of either bovine heart mitochondria or liposome-reconstituted supercomplex I-III with dodecyl maltoside; (ii) reconstitution of Complexes I and III at a high phospholipid-to-protein ratio. The results of this investigation provide experimental evidence that ROS production is strongly increased in either model, supporting the view that disruption or prevention of the association between Complex I and Complex III by different means enhances the generation of superoxide from Complex I. This is the first demonstration that dissociation of the supercomplex I1III2 in the mitochondrial membrane is a cause of oxidative stress from Complex I. Previous work in our laboratory demonstrated that lipid peroxidation can dissociate the supramolecular assemblies; here we therefore confirm the preliminary conclusion that primary causes of oxidative stress may perpetuate ROS generation in a vicious circle involving supercomplex dissociation as a major determinant.
Abstract:
The invariance of physical laws under Lorentz transformations is one of the fundamental postulates of modern physics, and all theories of the fundamental interactions are formulated in covariant form. Although special relativity (SR) has been tested and confirmed with high accuracy in a large number of experiments, further improved tests are of fundamental interest given the far-reaching significance of this postulate. Moreover, modern approaches to unifying gravity with the other interactions point to a possible violation of Lorentz invariance. In this context, Ives-Stilwell experiments testing time dilation in SR play an important role. High-resolution laser spectroscopy is used to probe the validity of the relativistic Doppler formula, and hence of the time dilation factor γ, on relativistic particle beams. In this work, an Ives-Stilwell experiment was performed on 7Li+ ions stored at 34% of the speed of light in the experimental storage ring (ESR) of the GSI Helmholtz Centre for Heavy Ion Research. Using the 1s2s 3S1 → 1s2p 3P2 transition, both Λ spectroscopy and saturation spectroscopy were carried out. Computer-aided analysis of the fluorescence detection and optimized edge filters for the detection decisively improved the signal-to-noise ratio, and with an additional pump laser a saturation signal was observed for the first time. The frequency stability of the two laser systems used was characterized with a frequency comb in order to achieve the highest possible accuracy.
The data obtained from the beam times were interpreted within the Robertson-Mansouri-Sexl (RMS) test theory and the Standard Model Extension (SME), and corresponding upper limits were determined for the relevant test parameters of each theory. The upper limit on the test parameter α of the RMS theory was improved by a factor of 4 compared with the earlier measurements at 6.4% of the speed of light at the test storage ring (TSR) of the Max Planck Institute for Nuclear Physics in Heidelberg.
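The Doppler relation probed by an Ives-Stilwell experiment can be illustrated numerically. The sketch below uses the standard textbook configuration of laser beams parallel and antiparallel to the ion beam, for which special relativity predicts that the product of the two lab-frame frequencies equals the squared rest-frame frequency, independent of the velocity; the frequencies are in arbitrary units and this is not a model of the specific ESR setup.

```python
import math

# Ives-Stilwell relation: with collinear parallel/antiparallel laser beams,
# special relativity predicts nu_p * nu_a = nu_0**2, independent of beta.

def gamma(beta):
    """Relativistic time dilation factor."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

def doppler(nu0, beta, parallel=True):
    """Lab-frame frequency of a transition with rest-frame frequency nu0."""
    g = gamma(beta)
    return nu0 * g * (1.0 + beta) if parallel else nu0 * g * (1.0 - beta)

beta = 0.34                       # 7Li+ velocity in the ESR
nu0 = 1.0                         # rest-frame frequency (arbitrary units)
nu_p = doppler(nu0, beta, True)   # blueshifted, parallel beam
nu_a = doppler(nu0, beta, False)  # redshifted, antiparallel beam
product = nu_p * nu_a             # equals nu0**2 up to rounding
```

Any deviation of this product from nu0**2 would signal a modification of the time dilation factor, which is what the RMS and SME test parameters quantify.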
Abstract:
The aim of this thesis is to investigate the nature of quantum computation and the question of the quantum speed-up over classical computation by comparing two different quantum computational frameworks: the traditional quantum circuit model and the cluster-state quantum computer. After an introductory survey of the theoretical and epistemological questions concerning quantum computation, the first part of this thesis provides a presentation of cluster-state computation suitable for a philosophical audience. In spite of the computational equivalence between the two frameworks, their differences can be considered structural. Entanglement is shown to play a fundamental role in both quantum circuits and cluster-state computers; this supports, from a new perspective, the argument that entanglement can reasonably explain the quantum speed-up over classical computation. However, quantum circuits and cluster-state computers diverge with regard to one of the explanations of quantum computation that actually accords a central role to entanglement, i.e. the Everett interpretation. It is argued that, while cluster-state quantum computation does not expose an Everettian failure in accounting for the computational processes, it threatens to leave that interpretation non-explanatory. The analysis presented here should be integrated into a more general work that also includes further frameworks of quantum computation, e.g. topological quantum computation. However, what this work reveals is that the speed-up question does not capture all that is at stake: both quantum circuits and cluster-state computers achieve the speed-up, but the challenges they pose go beyond that specific question.
The existence of alternative, equivalent quantum computational models then suggests that the ultimate question should be moved from the speed-up to a sort of "representation theorem" for quantum computation, understood as the general goal of identifying the physical features underlying these alternative frameworks that allow them to be labelled "quantum computation".
Abstract:
Oceans are key sources and sinks in the global budgets of significant atmospheric trace gases, termed volatile organic compounds (VOCs). Despite their low concentrations, these species play an important role in the atmosphere, influencing ozone photochemistry and aerosol physics. Surprisingly little work has been done on assessing their emissions or their transport mechanisms and rates between ocean and atmosphere, all of which are important for modelling the atmosphere accurately.
A new needle trap device (NTD) GC-MS method was developed for the effective sampling and analysis of VOCs in seawater. Good repeatability (RSDs <16%), linearity (R2 = 0.96-0.99) and limits of detection in the pM range were obtained for DMS, isoprene, benzene, toluene, p-xylene, (+)-α-pinene and (-)-α-pinene. Laboratory evaluation and subsequent field application indicated that the proposed method can be used successfully in place of the more usually applied extraction techniques (P&T, SPME) to extend the suite of species typically measured in the ocean and to improve detection limits.
During a mesocosm CO2 enrichment study, DMS, isoprene and α-pinene were identified and quantified in seawater samples using the above-mentioned method. Based on correlations with available biological datasets, the effects of ocean acidification as well as possible ocean biological sources were investigated for all examined compounds. The future ocean's acidity was shown to decrease oceanic DMS production, possibly impact isoprene emissions, but not affect the production of α-pinene.
In a separate activity, ocean-atmosphere interactions were simulated in a large-scale wind-wave canal facility in order to investigate the gas exchange process and its controlling mechanisms.
Air-water exchange rates of 14 chemical species (of which 11 were VOCs) spanning a wide range of solubility (dimensionless solubility, α = 0.4 to 5470) and diffusivity (Schmidt number in water, Scw = 594 to 1194) were obtained under various turbulent (wind speed at ten metres height, u10 = 0.8 to 15 m s^-1) and surfactant-modulated (two Triton X-100 layers of different size) surface conditions. Reliable and reproducible total gas transfer velocities were obtained, and the derived values and trends were comparable to previous investigations. Through this study, a much better and more comprehensive understanding of the gas exchange process was achieved. The role of the friction velocity uw* and the mean square slope σs2 in defining phenomena such as waves and wave breaking, near-surface turbulence, bubbles and surface films was recognized as very significant. uw* was determined to be the ideal turbulence parameter, while σs2 best described the related surface conditions. A combination of the two variables uw* and σs2 was found to reproduce the air-water gas exchange process faithfully.
A total transfer velocity (TTV) model based on a compilation of 14 tracers and a combination of the uw* and σs2 parameters is proposed for the first time. Through the proposed TTV parameterization, a new physical perspective is presented which provides an accurate TTV for any tracer within the examined solubility range.
The development of such a comprehensive air-sea gas exchange parameterization represents a highly useful tool for regional and global models, providing accurate total transfer velocity estimates for any tracer and any sea-surface state, simplifying the calculation process and eliminating the inevitable uncertainty connected with the selection or combination of different parameterizations.
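The fitted TTV model itself is not given in the abstract, so the sketch below only illustrates two standard ingredients such a parameterization builds on: the two-film series resistance that spans the solubility range, and Schmidt-number scaling of the water-side transfer velocity. The functional forms and coefficients are illustrative assumptions, not the thesis' result.

```python
# Illustrative total-transfer-velocity ingredients (assumed forms, not the
# thesis' fitted model): series resistance of water and air sides, plus
# Schmidt-number scaling of the water-side velocity.

def total_transfer_velocity(k_water, k_air, alpha):
    """Two-film series resistance for a tracer of dimensionless
    solubility alpha: soluble gases become air-side controlled."""
    return 1.0 / (1.0 / k_water + 1.0 / (alpha * k_air))

def schmidt_scaled(k_ref, sc, sc_ref=600.0, n=-0.5):
    """Scale a reference transfer velocity to another Schmidt number
    (n = -1/2 is the common wavy-surface exponent)."""
    return k_ref * (sc / sc_ref) ** n

k_w = schmidt_scaled(20.0, 1000.0)                    # cm/h, Sc_w = 1000
k_total = total_transfer_velocity(k_w, 3000.0, 0.4)   # sparingly soluble gas
```

A model such as the TTV parameterization of the thesis effectively replaces the fixed reference velocity with a function of uw* and σs2, so that one expression covers all tracers and surface states.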
Abstract:
Measurements of the self-coupling between bosons are important to test the electroweak sector of the Standard Model (SM). The production of pairs of Z bosons through the s-channel is forbidden in the SM. The presence of physics beyond the SM could lead to a deviation of the expected production cross section of pairs of Z bosons due to so-called anomalous triple gauge couplings (aTGCs). Proton-proton collision data recorded at the Large Hadron Collider (LHC) by the ATLAS detector at a centre-of-mass energy of 8 TeV, corresponding to an integrated luminosity of 20.3 fb-1, were analysed. Pairs of Z bosons decaying into two electron-positron pairs were searched for in the data sample. The effect of including detector regions at high values of the pseudorapidity was studied to enlarge the phase space available for the measurement of ZZ production. The number of ZZ candidates was determined, and the ZZ production cross section was measured to be 7.3±1.0(stat.)±0.4(syst.)±0.2(lumi.) pb, consistent with the SM expectation of 7.2±0.3 pb. Limits on the aTGCs were derived from the observed yield; they are twice as stringent as the previous limits obtained by ATLAS at a centre-of-mass energy of 7 TeV.
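Assuming the three quoted uncertainties (statistical, systematic, luminosity) are independent, the total uncertainty on the measured cross section follows by summing them in quadrature; this is the standard convention, not something stated explicitly in the abstract.

```python
import math

# Combine independent uncertainties in quadrature (standard convention).

def quadrature(*errors):
    """Total uncertainty from independent error components."""
    return math.sqrt(sum(e * e for e in errors))

total_err = quadrature(1.0, 0.4, 0.2)   # -> ~1.10 pb on the 7.3 pb result
```

The roughly 1.1 pb total uncertainty makes the measured 7.3 pb fully compatible with the SM expectation of 7.2±0.3 pb.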
Abstract:
The concept of inflation was introduced in the early 1980s to solve some problems of the standard cosmological model, such as the horizon and flatness problems. The predictions of the simplest inflationary models are in good agreement with the most recent cosmological observations, which confirm flat spatial sections and a spectrum of primordial fluctuations with near-Gaussian statistics. The most recent Planck data, while in excellent agreement with a simple power law for the spectrum at scales k > 0.08 Mpc^-1, seem to indicate possible deviations at larger scales, although not at a statistically significant level because of cosmic variance. These deviations in the spectrum can be explained by inflationary models that include a violation of the slow-roll condition and that make precise predictions for the spectrum. For one of the first such models, proposed by Starobinsky and characterized by a discontinuity in the first derivative of the potential, the spectrum and bispectrum of the primordial fluctuations are known analytically. In this thesis we extend that model to non-standard kinetic terms, computing its bispectrum analytically and comparing the results with those in the literature. In particular, the introduction of a non-standard kinetic term yields a non-trivial speed of sound for the inflaton, which allows the known results for the bispectrum of this model to be extended. We first study the corrections to the bispectrum known in the literature that arise because in this case the speed of sound is a time-dependent function; we then attempt to compute analytically a further contribution to the bispectrum proportional to the first derivative of the speed of sound (which vanishes in the original model).
Abstract:
The primary goal of this work is the extension of an analytic electro-optical model. It is used to describe single-junction crystalline silicon solar cells and a silicon/perovskite tandem solar cell in the presence of light trapping, in order to calculate efficiency limits for such a device. In particular, our tandem system is composed of crystalline silicon and a perovskite-structure material, methylammonium lead triiodide (MALI). Perovskites are among the most attractive materials for photovoltaics thanks to their reduced cost and increasing efficiencies: solar cell efficiencies of devices using these materials increased from 3.8% in 2009 to a certified 20.1% in 2014, making this the fastest-advancing solar technology to date. Moreover, texturization increases the amount of light that can be absorbed in an active layer. Using Green's formalism, it is possible to calculate the photogeneration rate of a single-layer structure with Lambertian light trapping analytically. In this work we go further: we study the optical coupling between the two cells of our tandem system in order to calculate the photogeneration rate of the whole structure. We also model the electronic part of the device by treating the perovskite top cell as an ideal diode and solving the drift-diffusion equations with appropriate boundary conditions for the silicon bottom cell. We have a four-terminal structure, so our tandem system is entirely unconstrained. We then calculate the efficiency limits of the tandem, including several recombination mechanisms such as Auger, SRH and surface recombination. We also focus on the dependence of the results on the band gap of the perovskite, and we calculate the optimal band gap that maximizes the tandem efficiency. The whole work has been continuously supported by numerical validation of our analytic model against Silvaco ATLAS, which solves the drift-diffusion equations using a finite-element method.
Our goal is to develop a simpler and cheaper, yet accurate, model for studying such devices.
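The ideal-diode treatment of the top cell can be sketched as follows: scan the voltage along a single-diode IV curve for the maximum power point. The photocurrent and saturation-current values are assumed, illustrative numbers, not parameters from this work.

```python
import math

# Sketch of an ideal single-diode top cell: brute-force scan of the IV curve
# for the maximum power point.  J_sc and J_0 below are assumed values.

KT_Q = 0.02585   # thermal voltage at ~300 K, volts

def diode_current(v, j_sc, j_0):
    """Ideal-diode current density (A/cm^2): photocurrent minus dark current."""
    return j_sc - j_0 * (math.exp(v / KT_Q) - 1.0)

def max_power_point(j_sc, j_0, dv=1e-4):
    """Scan the voltage until the current crosses zero, tracking max power."""
    best_v, best_p = 0.0, 0.0
    v = 0.0
    while True:
        j = diode_current(v, j_sc, j_0)
        if j <= 0.0:          # reached open circuit
            break
        if v * j > best_p:
            best_v, best_p = v, v * j
        v += dv
    return best_v, best_p

v_mp, p_mp = max_power_point(j_sc=0.020, j_0=1e-21)
```

In a four-terminal tandem each sub-cell is operated at its own maximum power point, which is why the configuration is described above as entirely unconstrained.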
Abstract:
Big Data has driven new technologies that improve quality of life by combining heterogeneous data representations across disciplines. A real-time system capable of processing data as they arrive is therefore needed. Such a system is called a speed layer: as the name suggests, it is designed to guarantee that new data are reflected in the query functions as quickly as they arrive. This thesis concerns the realization of an architecture modelled on the speed layer of the Lambda Architecture, able to receive meteorological data published on an MQTT queue, process them in real time and store them in a database to make them available to data scientists. The programming environment used is Java; the project was deployed on the Hortonworks platform, which is based on the Hadoop framework and on the Storm computation system, which makes it possible to work with unbounded data streams, performing the processing in real time. Unlike traditional stream-processing approaches based on networks of queues and workers, Storm is fault-tolerant and scalable. The effort devoted to its development by the Apache Software Foundation, its growing production use by major companies, and the support from cloud-hosting providers are signs that this technology will become increasingly widespread as a solution for managing distributed, event-oriented computations. To store and analyse these volumes of data, which have always posed a problem beyond the reach of traditional databases, a non-relational database, HBase, was used.
Abstract:
In the field of computer assisted orthopedic surgery (CAOS) the anterior pelvic plane (APP) is a common concept to determine the pelvic orientation by digitizing distinct pelvic landmarks. As percutaneous palpation is - especially for obese patients - known to be error-prone, B-mode ultrasound (US) imaging could provide an alternative means. Several concepts of using ultrasound imaging to determine the APP landmarks have been introduced. In this paper we present a novel technique, which uses local patch statistical shape models (SSMs) and a hierarchical speed of sound compensation strategy for an accurate determination of the APP. These patches are independently matched and instantiated with respect to associated point clouds derived from the acquired ultrasound images. Potential inaccuracies due to the assumption of a constant speed of sound are compensated by an extended reconstruction scheme. We validated our method with in-vitro studies using a plastic bone covered with a soft-tissue simulation phantom and with a preliminary cadaver trial.
Abstract:
In many cases, it is not possible to hold motorists to account for considerably exceeding the speed limit, because they deny being the driver in the speed-check photograph. An anthropological comparison of facial features using a photo-to-photo comparison can be very difficult, depending on the quality of the photographs. One difficulty of that analysis method is that the comparison photographs of the presumed driver are taken with a different camera or camera lens, and from a different angle, than the speed-check photo. Taking a comparison photograph with exactly the same camera setup is almost impossible; therefore, only an imprecise comparison of the individual facial features is possible. The geometry and position of each facial feature, for example the distance between the eyes or the positions of the ears, cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitization and photogrammetric calculation of the speed-check photo, which enables a geometric comparison. Thus, the influence of the focal length and the distortion of the objective lens are eliminated, and the precise position and viewing direction of the speed-check camera are calculated. Even with low-quality images, or when the face of the driver is partly hidden, this method delivers good results. The new method, geometric comparison, is evaluated and validated in a dedicated study described in this article.