896 results for Computer aided analysis, Machine vision, Video surveillance


Relevance:

100.00%

Publisher:

Abstract:

[EN] Aortic dissection is a disease that can be deadly even with correct treatment. It consists of a rupture of a layer of the aortic artery wall, which allows blood to flow inside the rupture, called a dissection. The aim of this paper is to contribute to its diagnosis by detecting the dissection edges inside the aorta. A subpixel-accuracy edge detector based on the hypothesis of the partial volume effect is used, in which the intensity of an edge pixel is the sum of the contributions of each colour weighted by its relative area inside the pixel. The method uses a floating window centred on the edge pixel and computes the edge features. The accuracy of our method is evaluated on synthetic images of different thicknesses and noise levels, obtaining an edge detection with a maximal mean error lower than 16 percent of a pixel.
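The partial-volume idea can be illustrated in one dimension: if an edge pixel's intensity is the area-weighted mix of the two region intensities, the fractional edge position inside that pixel can be recovered in closed form. A minimal sketch, assuming a single transition pixel between two flat regions (function name and simplifications are ours, not the paper's):

```python
import numpy as np

def subpixel_edge_1d(row, tol=1e-6):
    """Locate a step edge with subpixel accuracy along a 1-D intensity profile.

    Assumes two flat regions of intensity A (left) and B (right) separated by
    one 'mixed' pixel whose value F = A*a + B*(1 - a), where a is the fraction
    of the pixel covered by the left region (partial volume effect).
    """
    A, B = row[0], row[-1]
    # the edge pixel is the one whose value matches neither flat region
    mixed = np.where((np.abs(row - A) > tol) & (np.abs(row - B) > tol))[0]
    k = int(mixed[0])  # index of the edge pixel
    # pixel k spans [k - 0.5, k + 0.5]; solve F = A*(e - k + 0.5) + B*(k + 0.5 - e) for e
    F = row[k]
    return (F - k * (B - A) - 0.5 * (A + B)) / (A - B)
```

For a profile 10, 10, 10, 10, 10, 20, 50, 50, … with the true edge at x = 5.25, the mixed pixel's value is 0.75·10 + 0.25·50 = 20 and the formula returns 5.25 exactly.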

Relevance:

100.00%

Publisher:

Abstract:

[EN] The aim of this work is to propose a model for computing the optical flow in a sequence of images. We introduce a new temporal regularizer that is suitable for large displacements, and we decouple the spatial and temporal regularizations to avoid an incongruous formulation. For the spatial regularization we use the Nagel-Enkelmann operator, while the temporal regularization is newly designed. Our model is based on an energy functional whose minimization yields a partial differential equation (PDE). This PDE is embedded in a multi-pyramidal strategy to recover large displacements, and a gradient descent technique is applied at each scale to reach the minimum.
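The coarse-to-fine strategy can be illustrated on a drastically reduced 1-D analogue: gradient descent on the data term alone recovers only small shifts, but embedding it in a pyramid recovers large ones. The sketch below is our own simplification (it omits the Nagel-Enkelmann regularizer and estimates a single global displacement rather than a flow field):

```python
import numpy as np

def warp(s, d):
    """Sample signal s at positions x - d with linear interpolation (0 outside)."""
    x = np.arange(len(s), dtype=float) - d
    i = np.floor(x).astype(int)
    f = x - i
    out = np.zeros_like(s)
    ok = (i >= 0) & (i + 1 < len(s))
    out[ok] = s[i[ok]] * (1 - f[ok]) + s[i[ok] + 1] * f[ok]
    return out

def energy(I0, I1, d):
    """SSD data term of the toy model."""
    r = I1 - warp(I0, d)
    return float(np.sum(r * r))

def estimate_displacement(I0, I1, levels=3, iters=300, lr=0.5):
    """Coarse-to-fine gradient descent on the data term."""
    pyr = [(I0, I1)]
    for _ in range(levels - 1):        # build the pyramid by pair averaging
        a, b = pyr[-1]
        pyr.append((0.5 * (a[::2] + a[1::2]), 0.5 * (b[::2] + b[1::2])))
    d = 0.0
    for a, b in reversed(pyr):         # coarsest level first
        d *= 2.0                       # rescale the estimate to this level
        for _ in range(iters):
            h = 1e-2                   # central finite-difference gradient
            g = (energy(a, b, d + h) - energy(a, b, d - h)) / (2 * h)
            d -= lr * g
    return d
```

A 4-pixel shift is far outside the basin of attraction at the finest scale alone, but at the coarsest level it shrinks to 1 pixel, which plain gradient descent recovers; the estimate is then doubled and refined at each finer level.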

Relevance:

100.00%

Publisher:

Abstract:

[EN] In this paper we present a vascular tree model made of synthetic materials that allows us to obtain images for a 3D reconstruction. We used PVC tubes of several diameters and lengths, which let us evaluate the accuracy of our 3D reconstruction. In order to calibrate the camera we used a corner detector, and we used optical flow techniques to track the points through the images, forwards and backwards. We describe two general techniques to extract a sequence of corresponding points from multiple views of an object. The resulting sequence of points is later used to reconstruct a set of 3D points representing the object surfaces in the scene. We performed the 3D reconstruction by randomly choosing a pair of images and computing the projection error; after several repetitions, we kept the best 3D location for each point.
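The selection step described above, keeping the 3-D location whose reprojection error over all views is smallest, can be sketched as follows (the pinhole camera matrices and the candidate list are our illustrative assumptions, not the paper's data):

```python
import numpy as np

def project(P, X):
    """Project a 3-D point X through a 3x4 pinhole camera matrix P."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def best_location(candidates, cameras, observations):
    """Keep the candidate 3-D point with the smallest mean reprojection error.

    candidates   : candidate 3-D points, e.g. one triangulation per image pair
    cameras      : 3x4 projection matrices, one per view
    observations : measured 2-D image points, one per view
    """
    errors = [np.mean([np.linalg.norm(project(P, X) - u)
                       for P, u in zip(cameras, observations)])
              for X in candidates]
    k = int(np.argmin(errors))
    return candidates[k], errors[k]
```

With two axis-aligned cameras a unit baseline apart, the candidate that exactly reprojects onto both observed image points wins with zero error.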

Relevance:

100.00%

Publisher:

Abstract:

Computer-aided design of Monolithic Microwave Integrated Circuits (MMICs) depends critically on active device models that are accurate, computationally efficient, and easily extracted from measurements or device simulators. Empirical models of active electron devices, which are based on actual device measurements, do not provide a detailed description of the electron device physics. However, they are numerically efficient and quite accurate. These characteristics make them very suitable for MMIC design in the framework of commercially available CAD tools. In the empirical model formulation it is very important to separate linear memory effects (parasitic effects) from the nonlinear effects (intrinsic effects). Thus, an empirical active device model is generally described by an extrinsic linear part which accounts for the parasitic passive structures connecting the nonlinear intrinsic electron device to the external world. An important task circuit designers deal with is evaluating the ultimate potential of a device for specific applications. In fact, once the technology has been selected, the designer must choose the best device for the particular application and for the different blocks composing the overall MMIC. Thus, in order to accurately reproduce the behaviour of devices of different sizes, good scalability properties of the model are required. Another important aspect of empirical modelling of electron devices is the mathematical (or equivalent-circuit) description of the nonlinearities inherently associated with the intrinsic device. Once the model has been defined, the proper measurements for the characterization of the device are performed in order to identify the model. Hence, the correct measurement of the device nonlinear characteristics (in the device characterization phase) and their reconstruction (in the identification or even simulation phase) are two of the most important aspects of empirical modelling.
This thesis presents an original contribution to nonlinear electron device empirical modelling, treating the issues of model scalability and of the reconstruction of the device nonlinear characteristics. The scalability of an empirical model strictly depends on the scalability of the linear extrinsic parasitic network, which should possibly maintain the link between technological process parameters and the corresponding device electrical response. Since lumped parasitic networks, together with simple linear scaling rules, cannot provide accurate scalable models, only complicated technology-dependent scaling rules or computationally inefficient distributed models are available in the literature. This thesis shows how the above-mentioned problems can be avoided through the use of commercially available electromagnetic (EM) simulators. They enable the actual device geometry and material stratification, as well as losses in the dielectrics and electrodes, to be taken into account for any given device structure and size, providing an accurate description of the parasitic effects which occur in the device passive structure. It is shown how the electron device behaviour can be described as an equivalent two-port intrinsic nonlinear block connected to a linear distributed four-port passive parasitic network, which is identified by means of the EM simulation of the device layout, allowing for better frequency extrapolation and scalability properties than conventional empirical models. Concerning the reconstruction of the nonlinear electron device characteristics, a data approximation algorithm has been developed for use in the framework of empirical table look-up nonlinear models. The approach is based on the strong analogy between time-domain signal reconstruction from a set of samples and the continuous approximation of device nonlinear characteristics on the basis of a finite grid of measurements.
According to this criterion, nonlinear empirical device modelling can be carried out by applying, in the sampled voltage domain, typical methods of time-domain sampling theory.
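The table look-up idea — reconstructing a continuous nonlinear characteristic from a finite grid of measured samples — can be sketched with a simple bilinear interpolator over a measured I_D(V_GS, V_DS) grid. The grid and variable names are illustrative, and the thesis uses a more sophisticated sampling-theory kernel; this only shows the basic mechanism:

```python
import numpy as np

def lookup_id(vgs_grid, vds_grid, id_table, vgs, vds):
    """Continuous drain-current approximation I_D(V_GS, V_DS) by bilinear
    table look-up on a grid of measured bias points (sketch)."""
    # locate the grid cell containing the query point
    i = int(np.clip(np.searchsorted(vgs_grid, vgs) - 1, 0, len(vgs_grid) - 2))
    j = int(np.clip(np.searchsorted(vds_grid, vds) - 1, 0, len(vds_grid) - 2))
    # normalized position inside the cell
    tu = (vgs - vgs_grid[i]) / (vgs_grid[i + 1] - vgs_grid[i])
    tv = (vds - vds_grid[j]) / (vds_grid[j + 1] - vds_grid[j])
    # weighted sum of the four surrounding measurements
    return ((1 - tu) * (1 - tv) * id_table[i, j]
            + tu * (1 - tv) * id_table[i + 1, j]
            + (1 - tu) * tv * id_table[i, j + 1]
            + tu * tv * id_table[i + 1, j + 1])
```

Like the sampling-theory reconstruction it stands in for, the interpolant reproduces the measured samples exactly at the grid nodes and varies smoothly between them.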

Relevance:

100.00%

Publisher:

Abstract:

[EN] We present a new strategy for constructing spline spaces over hierarchical T-meshes with quadtree and octree subdivision schemes. The proposed technique includes some simple rules for inferring local knot vectors to define C²-continuous cubic tensor-product spline blending functions. Our conjecture is that these rules allow us to obtain, for a given T-mesh, a set of linearly independent spline functions with the property that spaces spanned by nested T-meshes are also nested, and that, therefore, the functions can reproduce cubic polynomials. In order to span spaces with these properties by applying the proposed rules, the T-mesh must only fulfil the requirement of being a 0-balanced mesh...
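A single cubic blending function of the kind inferred by such rules is fully determined by its local knot vector and can be evaluated with the standard Cox-de Boor recursion. A minimal sketch (the function name and half-open support convention are ours):

```python
def bspline_basis(knots, degree, t):
    """Evaluate the one B-spline basis function defined by the local knot
    vector `knots` (length degree + 2) at parameter t, via Cox-de Boor."""
    if degree == 0:
        # piecewise-constant case: indicator of the half-open knot span
        return 1.0 if knots[0] <= t < knots[1] else 0.0
    left = right = 0.0
    if knots[degree] > knots[0]:
        left = ((t - knots[0]) / (knots[degree] - knots[0])
                * bspline_basis(knots[:-1], degree - 1, t))
    if knots[degree + 1] > knots[1]:
        right = ((knots[degree + 1] - t) / (knots[degree + 1] - knots[1])
                 * bspline_basis(knots[1:], degree - 1, t))
    return left + right
```

For the uniform local knot vector [0, 1, 2, 3, 4] this gives the familiar cubic B-spline values 1/6, 4/6, 1/6 at t = 1, 2, 3; non-uniform local knot vectors, as inferred from a T-mesh, are handled by the same recursion.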

Relevance:

100.00%

Publisher:

Abstract:

Ambient Intelligence (AmI) envisions a world where smart, electronic environments are aware of and responsive to their context. People moving through these settings engage many computational devices and systems simultaneously, even if they are not aware of their presence. AmI stems from the convergence of three key technologies: ubiquitous computing, ubiquitous communication and natural interfaces. The dependence on a large number of fixed and mobile sensors embedded in the environment makes Wireless Sensor Networks (WSNs) one of the most relevant enabling technologies for AmI. WSNs are complex systems made up of a number of sensor nodes: simple devices that typically embed a low-power computational unit (microcontroller, FPGA, etc.), a wireless communication unit, one or more sensors and some form of energy supply (either batteries or energy-scavenging modules). Low cost, low computational power, low energy consumption and small size are characteristics that must be taken into consideration when designing and dealing with WSNs. In order to handle the large amount of data generated by a WSN, several multi-sensor data fusion techniques have been developed. The aim of multi-sensor data fusion is to combine data to achieve better accuracy and inferences than could be achieved by the use of a single sensor alone. In this dissertation we present our results in building several AmI applications suitable for a WSN implementation. The work can be divided into two main areas: multimodal surveillance and activity recognition. Novel techniques to handle data from a network of low-cost, low-power Pyroelectric InfraRed (PIR) sensors are presented. Such techniques allow the detection of the number of people moving in the environment, their direction of movement and their position. We discuss how a mesh of PIR sensors can be integrated with a video surveillance system to increase its performance in people tracking.
Furthermore, we embed a PIR sensor within the design of a Wireless Video Sensor Node (WVSN) to extend its lifetime. Activity recognition is a fundamental block in natural interfaces. A challenging objective is to design an activity recognition system that is able to exploit a redundant but unreliable WSN. We present our work in building a novel activity recognition architecture for such a dynamic system. The architecture has a hierarchical structure in which simple nodes perform gesture classification and a high-level meta-classifier fuses a changing number of classifier outputs. We demonstrate the benefit of such an architecture in terms of increased recognition performance and of fault and noise robustness, and we show how network lifetime can be extended through a performance-power trade-off. Smart objects can enhance the user experience within smart environments. We present our work in extending the capabilities of the Smart Micrel Cube (SMCube), a smart object used as a tangible interface within a tangible computing framework, through the development of a gesture recognition algorithm suitable for this device of limited computational power. Finally, the development of activity recognition techniques can greatly benefit from the availability of shared datasets. We report our experience in building a dataset for activity recognition. The dataset is freely available to the scientific community for research purposes and can be used as a test bench for developing, testing and comparing different activity recognition techniques.
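A meta-classifier that fuses a changing number of node-level outputs can be as simple as a weighted vote; when a node drops out, its label is simply absent from the input list. A minimal sketch, with illustrative labels and weights (the dissertation's actual fusion scheme is not specified here):

```python
def fuse_labels(predictions, weights=None):
    """Fuse gesture labels from a variable set of sensor nodes by weighted vote.

    predictions : labels reported by the currently alive nodes
    weights     : optional per-node reliability weights (default: equal)
    """
    if weights is None:
        weights = [1.0] * len(predictions)
    score = {}
    for label, w in zip(predictions, weights):
        score[label] = score.get(label, 0.0) + w
    return max(score, key=score.get)  # label with the highest accumulated weight
```

Because the vote is computed over whatever labels arrive, the same code tolerates node failures (fewer inputs) and noisy nodes (down-weighted inputs) without any structural change.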

Relevance:

100.00%

Publisher:

Abstract:

In cases of severe knee osteoarthritis causing pain, deformity, and loss of stability and mobility, clinicians consider the substitution of the articular surfaces by means of joint prostheses. The objectives pursued by this surgery are complete pain elimination, restoration of normal physiological mobility and joint stability, and correction of all deformities and, thus, of limping. Knee surgical navigation systems have been developed in computer-aided surgery in order to improve the final surgical outcome in total knee arthroplasty. These systems provide the surgeon with quantitative, real-time information about each surgical action, such as bone cut execution and prosthesis component alignment, by means of tracking tools rigidly fixed onto the femur and the tibia. Nevertheless, there is still a margin of error due to incorrect surgical procedures and to the still limited amount of kinematic information provided by current systems. In particular, patello-femoral joint kinematics is not considered in knee surgical navigation. It is also unclear, and thus a source of misunderstanding, what the most appropriate methodology is to study patellar motion. In addition, the knee ligamentous apparatus is only superficially considered in navigated total knee arthroplasty, without taking into account how its physiological behaviour is altered by this surgery. The aim of the present research work was to provide new functional and biomechanical assessments for the improvement of surgical navigation systems for joint replacement in the human lower limb. This was mainly realized by means of the identification and development of new techniques that allow a thorough comprehension of the functioning of the knee joint, with particular attention to the patello-femoral joint and to the main knee soft tissues. A knee surgical navigation system with active markers was used in all the research activities presented in this work.
In particular, preliminary tests were performed in order to assess the system accuracy and the robustness of a number of navigation procedures. Four studies were performed in vivo on patients requiring total knee arthroplasty, randomly assigned to traditional or navigated procedures, in order to check the real efficacy of the latter with respect to the former. To assess patello-femoral joint kinematics in the intact and replaced knee, twenty in-vitro tests were performed using a prototypal tracking tool for the patella as well. In addition to standard anatomical and articular recommendations, original proposals for defining the patellar anatomical-based reference frame and for studying patello-femoral joint kinematics were reported and used in these tests. These definitions were applied in two further in-vitro tests in which, for the first time, the implant of the patellar component was also fully navigated. Furthermore, an original technique to analyse the main knee soft tissues by means of anatomical-based fibre mappings was reported and used in the same tests. The preliminary instrumental tests revealed a system accuracy within the millimetre and a good inter- and intra-observer repeatability in defining all anatomical reference frames. In the in-vivo studies, the general alignment of the femoral and tibial prosthesis components and of the lower limb mechanical axis, as measured on radiographs, was more satisfactory, i.e. within ±3°, in those patients in whom total knee arthroplasty was performed with navigated procedures. As for the in-vitro tests, consistent patello-femoral joint kinematic patterns were observed across specimens throughout the knee flexion arc. In general, the physiological patellar motion of the intact knee was not restored after the implant. This restoration was successfully achieved in the two further tests in which all component implants, including the patellar insert, were fully navigated, i.e. by means of intra-operative assessment of the patellar component positioning as well as of the general tibio-femoral and patello-femoral joints. The tests for assessing the behaviour of the main knee ligaments revealed their complexity and the different functional roles played by the several sub-bundles composing each ligament. Also in this case, total knee arthroplasty altered the physiological behaviour of these knee soft tissues. These results reveal, in vitro, the relevance and feasibility of applying new techniques for accurate knee soft tissue monitoring, patellar tracking assessment and navigated patellar resurfacing intra-operatively in the context of the most modern operative techniques. The present research work contributes to the still controversial knowledge of normal and replaced knee kinematics by testing the reported new methodologies. The consistency of these results provides fundamental information for the comprehension and improvement of knee orthopaedic treatments. In the future, the reported new techniques can safely be applied in vivo and also adopted in other joint replacements.

Relevance:

100.00%

Publisher:

Abstract:

Today, thanks to continuous technological progress, every industrial production system contains at least one machine that automates certain operations. Some of these machines have a machine-vision system that allows them to observe and analyse their surroundings, equipped with algorithms able to make certain choices automatically. At the same time, the steady technological progress that characterizes vision sensors, optics and, as a whole, cameras enables an ever more precise and accurate acquisition of the framed scene. Market demands now make it necessary for machines equipped with modern vision systems to perform non-contact morphometric and dimensional measurements. However, the difficulties involved in designing and mass-producing industrial vision systems that perform non-contact dimensional measurements with 2D sensors mean that the number of companies worldwide producing this kind of machinery is extremely small. Despite their advanced computing capabilities, these machines require an operator to select which parts of the acquired image are of interest and, often, to indicate what to measure in them. This thesis was developed in synergy with one of these companies, which produces machines for the automatic measurement of mechanical parts. Currently, the shapes on which measurements are to be taken are indicated manually in the image of the mechanical part. The goal of this work is to study and prototype an algorithm capable of detecting and interpreting known geometric shapes by analysing the image acquired from the scan of a mechanical part.
The difficulties faced are typical of "real-world" problems and concern all the usual steps of image processing, from "cleaning" the acquired image, to its binarization and, of course, to contour analysis and the identification of characteristic shapes. To reach this goal, image-processing techniques were used that made it possible to interpret, in the image scanned by the machine, all the known shapes we set out to handle. The algorithm proved very robust in interpreting diameters and shoulders, finding all shapes of this type in every benchmark used, while it is less robust in detecting oblique sides and circular arcs because of their non-linear sampling.
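For the binarization step, a standard choice is Otsu's method, which picks the grey-level threshold maximizing the between-class variance of the image histogram. A self-contained sketch of this classic technique (not necessarily the exact variant used in the thesis):

```python
import numpy as np

def otsu_threshold(img):
    """Binarization threshold by Otsu's method: maximize the between-class
    variance of the grey-level histogram (values assumed in 0..255)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    cum = np.cumsum(p)                    # class-0 probability up to each bin
    mean = np.cumsum(p * np.arange(256))  # cumulative first moment
    mu_total = mean[-1]
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = cum[t - 1], 1.0 - cum[t - 1]
        if w0 == 0.0 or w1 == 0.0:
            continue                      # empty class: no valid split here
        mu0 = mean[t - 1] / w0
        mu1 = (mu_total - mean[t - 1]) / w1
        var = w0 * w1 * (mu0 - mu1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

On a bimodal image, typical of a back-lit mechanical part against its background, the returned threshold falls between the two intensity modes, so `img > t` separates part pixels from background before contour extraction.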

Relevance:

100.00%

Publisher:

Abstract:

Wolf-Hirschhorn syndrome (WHS) is a complex and variable malformation-retardation syndrome caused by a deletion in the distal chromosome region 4p16.3; its aetiology and pathogenesis are still largely not understood. The objective of the present work was the identification and preliminary characterization of new genes that could be involved in the origin of the syndrome. At the beginning of this work, the Wolf-Hirschhorn syndrome critical region (WHSCR) could be narrowed down to a region of about 2 Mb between the markers D4S43 and D4S142. To identify new genes, three larger genomic cosmid/PAC contigs (I-III) were first constructed in the region of the markers D4S114 to D4S142 and screened for transcribed regions (exons) by exon amplification. A total of 67 putative 'exons' were isolated, some of which correspond to already known genes (ZNF141, PDEB, MYL5, GAK, DAGK4 and FGFR3). Two of these genes could be mapped to the distal region 4p16.3 for the first time (DAGK4) or more precisely (GAK) within this work. The remaining exons can presumably be assigned, on the basis of homology comparisons and/or EST cDNA homologies, to new genes or to pseudogenes (e.g. YWEE1hu). Following the further narrowing of the WHSCR, published in the course of this work, to a 165 kb region proximal to the FGFR3 gene, further investigations concentrated on the detailed analysis of the WHSCR between the marker D4S43 and FGFR3. By means of exon amplification and computer-assisted evaluation of the available sequence data from this region ('GRAIL', 'GENSCAN' and homology comparisons in the EST databases of the NCBI), several new genes could be identified. In distal-to-proximal order, these are the genes LETM1, 51, 43, 45, 57 and POL4P.
LETM1 encodes a putative transmembrane protein with one leucine-zipper and two EF-hand motifs and, owing to its possible involvement in Ca2+ homeostasis and/or signal transduction, could contribute to features of WHS (seizures, mental retardation and muscular hypotonia). Gene 51 corresponds to a gene described at about the same time by Stec et al. (1998) and Chesi et al. (1998) as WHSC1 and MMSET, respectively, and was therefore not characterized further. Like gene 43, which was described at the same time by Wright et al. (1999b) as WHSC2 and may play a role in transcription elongation, it is ubiquitously expressed. In contrast, gene 45, identified in the present work, shows a markedly specific expression pattern (in nerve cells of the brain and in spermatids). Together with the structural similarity of its putative gene product to signalling molecules, this establishes an interesting link to features of WHS (for example cryptorchidism, uterine malformations or neurological defects). Gene 57, on the other hand, is possibly a truncated pseudogene of the eRFS gene on chromosome 6q24 (Wallrapp et al., 1998). Finally, on the basis of its genomic localization and its possible function (as a DNA-polymerase-like gene) alone, the POL4P gene is not a good candidate gene for specific features of the syndrome and was therefore not characterized in detail. To understand the involvement of these genes in the aetiology and pathogenesis of the syndrome, the development of a mouse model (via the introduction of targeted deletions into the mouse genome) is planned. To make this possible, the orthologous region in the mouse was characterized in the present work. First, the orthologous mouse genes (Letm1, Whsc1, gene 43 (Whsc2h), gene 45 and Pol4p) were identified.
By constructing and precisely mapping a murine genomic P1/PAC clone contig, it could be shown that the murine genes Fgfr3, Letm1, Whsc1, gene 43 (Whsc2h), gene 45 and Pol4p, as well as several further examined mouse EST cDNA clones, are contained in a continuous block of synteny between human (POL4P to FGFR3) and mouse (Mmu 5.20), whose genomic extent roughly corresponds to the situation in humans (between POL4P and FGFR3).

Relevance:

100.00%

Publisher:

Abstract:

The present work investigated object motion vision in the goldfish. First, a suitable method had to be found to study this form of motion perception, since previous experiments on motion vision in the goldfish had been carried out exclusively using the optomotor following response. Subsequently, the question was addressed whether object motion vision, just like the perception of large-field motion, is colour-blind, and which cone type is involved. The use of a random-dot pattern for training on a moving object proved extremely successful. This method has the advantage that the animals can orient themselves exclusively on the basis of motion information. The red-green and blue-green transfer experiments showed that object motion vision in the goldfish is colour-blind but, surprisingly, is not mediated by the L-cone, but probably by the M-cone. What advantage it might have that different inputs are used for the different forms of motion perception cannot be resolved with these experiments. Colour-blindness of motion vision appears to be a general property of visual systems. In humans, this question is currently still unresolved and continues to be discussed, since there are experiments showing that motion vision is colour-blind as well as others suggesting that it is not. The advantage of colour-blindness in a motion-detecting visual system is also evident in machine vision. Here, too, colour information is discarded, which on the one hand reduces the amount of data and on the other hand makes it easier to find corresponding image points. These are needed to determine motion vectors and, ultimately, to detect motion.

Relevance:

100.00%

Publisher:

Abstract:

Computer simulations play an ever-growing role in the development of automotive products. Assembly simulation, like many other processes, is used systematically even before the first physical prototype of a vehicle is built, in order to check whether particular components can be assembled easily or whether another part is in the way. Usually, this kind of simulation is limited to rigid bodies. However, a vehicle contains a multitude of flexible parts of various types: cables, hoses, carpets, seat surfaces, insulations, weatherstrips... Since most of the problems arising in these simulations concern one-dimensional components, and since an intuitive tool for cable routing is still needed, we have chosen to concentrate on this category, which includes cables, hoses and wiring harnesses. In this thesis, we present a system for simulating one-dimensional flexible parts such as cables or hoses. The modelling of bending and torsion follows the Cosserat model. For this purpose we use a generalized spring-mass system and describe its configuration by a carefully chosen set of coordinates. Gravity and contact forces, as well as the forces responsible for length conservation, are expressed in Cartesian coordinates, but bending and torsion effects can be dealt with more effectively by using quaternions to represent the orientation of the segments joining two neighbouring mass points. This augmented system allows an easy formulation of all interactions with the most appropriate coordinate type and yields a strongly banded Hessian matrix. An energy-minimizing process yields a solution free from the oscillations that are typical of spring-mass systems. The use of integral forces, similar to an integral controller, allows the constraints to be enforced exactly. The whole system is numerically stable and can be solved at interactive frame rates.
It is integrated into veo, the DaimlerChrysler in-house Virtual Reality software, for use in applications such as cable routing and assembly simulation, and has been well received by users. Parts of this work have been published at the ACM Solid and Physical Modeling Conference 2006 and have been selected for the special issue of the Computer-Aided Design journal devoted to the conference.
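The energy-minimizing spring-mass idea can be illustrated with a drastically reduced 2-D toy: a chain of point masses joined by stiff length-conservation springs, relaxed by plain gradient descent under gravity. The quaternion bookkeeping for bending and torsion and the integral-force constraint handling are omitted, and all parameters are illustrative:

```python
import numpy as np

def relax_chain(n=10, rest=1.0, k=100.0, g=1.0, iters=10000, lr=1e-3):
    """Relax a 2-D spring-mass chain clamped at both ends by gradient descent
    on E = 0.5*k*sum(|seg| - rest)^2 + g*sum(y_i), with unit masses."""
    # start as a straight, slightly compressed horizontal chain
    x = np.linspace(0.0, 0.8 * n * rest, n + 1)
    pts = np.stack([x, np.zeros_like(x)], axis=1)
    for _ in range(iters):
        seg = pts[1:] - pts[:-1]
        L = np.linalg.norm(seg, axis=1, keepdims=True)
        f = k * (L - rest) * seg / L      # spring energy gradient per segment
        grad = np.zeros_like(pts)
        grad[1:] += f
        grad[:-1] -= f
        grad[:, 1] += g                   # gravity acts on every mass
        grad[0] = grad[-1] = 0.0          # endpoints are clamped
        pts -= lr * grad
    return pts
```

The relaxed chain sags into a catenary-like shape while the stiff springs keep segment lengths close to their rest value; this static, oscillation-free equilibrium is the point of minimizing energy instead of integrating dynamics.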

Relevance:

100.00%

Publisher:

Abstract:

1. Mandibular reconstruction. Mandibular reconstruction is commonly performed using a free fibula flap. The conventional (indirect) Computer Aided Design / Computer Aided Manufacturing method involves the preoperative manual bending of a standard osteosynthesis plate on a stereolithographic model of the mandible. An innovative direct CAD/CAM method comprises three phases: 1) virtual planning, 2) computer-aided design of the mandibular cutting guide, the fibular cutting guide and the osteosynthesis plate, and 3) computer-aided manufacturing of the three customized surgical devices. Seven mandibular reconstructions were carried out with the direct method. The results achieved and the planning procedures are described and discussed. The computer-aided design and computer-aided manufacturing technique facilitates accurate mandibular reconstruction and yields a statistically significant improvement over the conventional method. 2. Oral cavity and oropharynx. A standard reconstructive method for the oral cavity and the oropharynx is described. 163 patients affected by cancer of the oral cavity and oropharynx were treated from 1992 to 2012, with a total of 175 free flaps. The surgical strategy is described in terms of flap choice, shaping and insetting. Two-dimensional templates are used to plan a three-dimensional reconstruction with the best functional and aesthetic result. The templates, the flap choice and the insetting are described for each region. Complications and functional results were systematically evaluated. The results showed good functional recovery with the described reconstructive techniques. A reconstructive algorithm based on standard templates is proposed.

Relevance:

100.00%

Publisher:

Abstract:

The study of artificial intelligence aims at solving a class of problems that require cognitive processes difficult to encode in an algorithm. The visual recognition of shapes and figures, the interpretation of sounds, and games with incomplete knowledge all rely on the human ability to interpret partial inputs as if they were complete, and to act accordingly. In the first chapter of this thesis, a simple mathematical formalism is constructed to describe the act of making choices. The "learning" process is described in terms of the maximization of a performance function over a parameter space, for an ansatz of a function from a vector space to a finite, discrete set of choices, by means of a training set giving examples of the correct choices to reproduce. In the light of this formalism, some of the most widespread artificial-intelligence techniques are analysed, and some problems arising from their use are highlighted. In the second chapter, the same formalism is applied to a less intuitive but more practical redefinition of the performance function which allows, for a linear ansatz, the explicit formulation of a set of equations in the components of the parameter-space vector that identifies the absolute maximum of the performance function. The solution of this set of equations is treated by means of the contraction mapping theorem. A natural polynomial generalization is also shown. In the third chapter, some examples to which the results of the second chapter can be applied are studied in more detail. The concept of the intrinsic degree of a problem is introduced.
Several performance optimizations are also discussed, such as the elimination of zeros, analytic precomputation, fingerprinting, and the reordering of components for the partial evaluation of high-dimensional scalar products. Single-choice problems are then introduced, i.e. the class of problems for which a training set is available for only one choice. In the fourth chapter, an application in the field of medical imaging diagnostics is discussed in more detail, in particular the problem of computer-aided detection of microcalcifications in mammograms.
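The linear-ansatz choice function can be sketched concretely: each possible choice gets a weight vector, the choice made for an input is the argmax of the scalar products, and the parameters are tuned on a training set of correct choices. The sketch below uses a simple perceptron-style update as a stand-in for the fixed-point scheme the thesis derives via the contraction theorem; names and data are illustrative:

```python
import numpy as np

def train_linear_choice(X, y, n_choices, epochs=50, lr=0.1):
    """Fit one weight vector per choice so that argmax_c <W_c, x> reproduces
    the training choices (perceptron-style surrogate, not the thesis scheme)."""
    W = np.zeros((n_choices, X.shape[1]))
    for _ in range(epochs):
        for x, t in zip(X, y):
            p = int(np.argmax(W @ x))
            if p != t:            # wrong choice: shift scores toward the target
                W[t] += lr * x
                W[p] -= lr * x
    return W

def choose(W, x):
    """The choice function: map an input vector to one discrete choice."""
    return int(np.argmax(W @ x))
```

On a linearly separable training set the update converges to a parameter vector that reproduces every example choice, which is the behaviour the performance-function maximization formalizes.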

Relevance:

100.00%

Publisher:

Abstract:

As a positive response to requests coming from the legal world, which is often too distant from the scientific one, the goal is to develop a system that is technically sound and legally clear, aimed at a better search for the truth. The objective is to create a versatile, easy-to-use tool to be made available to the judicial authority (A.G.) and, where appropriate, to the investigating judicial police (P.G.), allowing investigations to proceed very rapidly and with a considerable reduction of justice costs compared with an ordinary expert technical consultancy (CTU). The project concerns forensic computer analyses of digital media involved in various types of proceedings for which a CTU or an expert report would normally be required. The scientific experimentation provides for the direct participation of the P.G. and the A.G. in the computer analysis, making the content of the seized media available in the form of a virtual machine, so that it can be examined just like the original media. In this way the technical consultant becomes a mere guide for the P.G. and the A.G. within the forensic computer investigation, accompanying the judge and the parties towards a better understanding of the information requested. The key phases of the experimentation are: • the repeatability of the operations carried out • clear guidelines for the chain of custody, from the moment the media are taken in charge • data preservation and transmission methods able to guarantee the integrity and confidentiality of the data • reduced times and costs compared with ordinary CTUs/expert reports • direct viewing, by the parties and the judge, of the contents of the analysed media, restricted to the information useful for the purposes of justice