863 results for camera motion


Relevance:

20.00%

Publisher:

Abstract:

Several methods to reduce respiratory-induced motion have been described in the literature, with the goal of increasing treatment accuracy in order to minimize normal-tissue toxicity or to increase the dose to the target volume. We analyzed two different respiratory-gating techniques: the deep-inspiration breath-hold technique and respiratory gating with the Real-time Position Management (RPM) system. The first is a self-gating technique in which radiation treatment takes place during a breath-holding phase. The second technique uses a reflective marker placed on the patient's anterior surface; the motion of the marker is tracked by a camera interfaced to a computer. The gating thresholds are set when the tumor is in the desired portion of the respiratory cycle, and they determine when the gating system turns the treatment beam on and off. We compared both techniques with a standard external radiation treatment. The dosimetric analysis showed a considerable advantage of these methods over the standard external treatment, particularly in reducing the dose to the lung.
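The gating logic described above, in which the beam is enabled only while the tracked marker sits between the two thresholds, can be sketched as a simple window check. The threshold values and marker trace below are hypothetical, chosen only to illustrate the on/off behavior:

```python
def beam_on(marker_position, lower, upper):
    """Return True when the tracked marker lies inside the gating window.

    The beam fires only while the respiratory marker stays between the
    lower and upper gating thresholds (hypothetical units: mm).
    """
    return lower <= marker_position <= upper

# Hypothetical marker trace over one respiratory cycle (mm)
trace = [2.0, 4.5, 7.0, 9.5, 7.0, 4.5, 2.0]
gate = [beam_on(x, lower=3.0, upper=8.0) for x in trace]
print(gate)  # beam is on only in the selected portion of the cycle
```

In a real RPM setup the thresholds are placed on the phase or amplitude signal chosen at simulation; the window check itself is this simple comparison repeated at the camera frame rate.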

Relevance:

20.00%

Publisher:

Abstract:

We present a nonlinear technique to invert strong-motion records with the aim of obtaining the final slip and rupture-velocity distributions on the fault plane. In this thesis, the ground-motion simulation is obtained by evaluating the representation integral in the frequency domain. The Green's tractions are computed using the discrete wave-number integration technique, which provides the full wave-field in a 1D layered propagation medium. The representation integral is computed through a finite-element technique based on a Delaunay triangulation of the fault plane. The rupture velocity is defined on a coarser regular grid, and rupture times are computed by integration of the eikonal equation. For the inversion, the slip distribution is parameterized by overlapping 2D Gaussian functions, which easily relate the spectrum of the possible solutions to the minimum resolvable wavelength, itself related to the source-station distribution and to the data processing. The inverse problem is solved by a two-step procedure aimed at separating the computation of the rupture velocity from the evaluation of the slip distribution, the latter being a linear problem when the rupture velocity is fixed. The nonlinear step is solved by optimization of an L2 misfit function between synthetic and real seismograms, and the solution is searched for with the Neighbourhood Algorithm; the conjugate gradient method is used to solve the linear step. The developed methodology has been applied to the M7.2 Iwate-Miyagi Nairiku, Japan, earthquake. The estimated seismic moment is 2.6×10^26 dyne·cm, corresponding to a moment magnitude MW 6.9, while the mean rupture velocity is 2.0 km/s. A large slip patch extends from the hypocenter to the southern shallow part of the fault plane, and a second relatively large slip patch is found in the northern shallow part. Finally, we give a quantitative estimation of the errors associated with the parameters.
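The parameterization of slip by overlapping 2D Gaussian functions can be sketched as follows. The fault grid and patch parameters below are hypothetical, chosen only to illustrate how the Gaussian width sigma controls the shortest wavelength the parameterization can represent:

```python
import numpy as np

def gaussian_slip(x, y, patches):
    """Slip distribution as a sum of overlapping 2D Gaussian functions.

    Each patch is (amplitude, x0, y0, sigma); coordinates in km,
    amplitudes in m (all values here are hypothetical).
    """
    slip = np.zeros_like(x)
    for amp, x0, y0, sigma in patches:
        slip += amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    return slip

# Hypothetical fault-plane grid (along-strike x, down-dip y, in km)
x, y = np.meshgrid(np.linspace(0, 40, 81), np.linspace(0, 20, 41))
patches = [(3.0, 10.0, 5.0, 4.0), (1.5, 30.0, 5.0, 3.0)]
slip = gaussian_slip(x, y, patches)
```

Choosing sigma no smaller than the minimum resolvable wavelength keeps the solution space consistent with what the source-station geometry and data processing can actually constrain.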

Relevance:

20.00%

Publisher:

Abstract:

Images of a scene, whether static or dynamic, are generally acquired at different epochs from different viewpoints. They potentially gather information about the whole scene and its relative motion with respect to the acquisition device. Data from different visual sources (in the spatial or temporal domain) can be fused to provide a single consistent representation of the whole scene, even recovering the third dimension and thus permitting a more complete understanding of the scene content. Moreover, the pose of the acquisition device can be obtained by estimating the relative motion parameters linking different views, thus providing localization information for automatic guidance purposes. Image registration is based on pattern-recognition techniques that match corresponding parts of different views of the acquired scene. Depending on hypotheses or prior information about the sensor model, the motion model and/or the scene model, this information can be used to estimate global or local geometrical mapping functions between different images or between parts of them. These mapping functions contain the relative motion parameters between the scene and the sensor(s) and can be used to integrate information from the different sources into a wider, or even augmented, representation of the scene. For their scene reconstruction and pose estimation capabilities, image registration techniques from multiple views are nowadays attracting increasing interest from the scientific and industrial community. Depending on the application domain, the accuracy, robustness, and computational load of the algorithms are important issues to be addressed, and a trade-off among them generally has to be reached. Moreover, on-line performance is desirable in order to guarantee the direct interaction of the vision device with human actors or control systems.
This thesis follows a general research approach to cope with these issues, almost independently of the scene content, under the constraint of rigid motions. This approach has been motivated by portability to very different domains, a highly desirable property to achieve. A general image registration approach suitable for on-line applications has been devised and assessed through two challenging case studies in different application domains. The first case study regards scene reconstruction through on-line mosaicing of optical microscopy cell images acquired with non-automated equipment, while the microscope holder is moved manually. By registering the images, the field of view of the microscope can be widened, preserving the resolution while reconstructing the whole cell culture and permitting the microscopist to explore it interactively. In the second case study, the registration of terrestrial satellite images acquired by a camera integral with the satellite is used to estimate its three-dimensional orientation from visual data, for automatic guidance purposes. Critical aspects of these applications are emphasized and the choices adopted are motivated accordingly. Results are discussed in view of promising future developments.
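Under the rigid-motion constraint adopted in this approach, the relative motion between two views can be estimated from matched point pairs. The sketch below uses the standard SVD-based (Kabsch) least-squares estimator; it is offered only as an illustration of rigid-motion estimation, not as the exact method of the thesis, and the matched features are synthetic:

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    Standard SVD (Kabsch) solution: center both point sets, build the
    cross-covariance matrix, and extract the rotation from its SVD.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    D = np.diag([1.0, d])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Hypothetical matched 2D features: dst is src rotated 30 deg and shifted
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src @ R_true.T + np.array([2.0, -1.0])
R, t = rigid_registration(src, dst)
```

With noise-free matches the rotation and translation are recovered exactly; with real feature matches the same closed form gives the least-squares rigid motion, typically inside a robust outlier-rejection loop.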

Relevance:

20.00%

Publisher:

Abstract:

The relevance of human joint models has been shown in the literature. In particular, the great importance of models for the simulation of joint passive motion (i.e. motion under virtually unloaded conditions) has been outlined: they clarify the role played by the principal anatomical structures of the articulation, enhancing the comprehension of surgical treatments, and in particular the design of total ankle replacements and ligament reconstructions. Equivalent rigid-link mechanisms have proved to be an efficient tool for an accurate simulation of joint passive motion. This thesis focuses on the ankle complex (i.e. the anatomical structure composed of the tibiotalar and the subtalar joints), which plays a considerable role in human locomotion. The lack of interpretative models of this articulation and the poor results of total ankle replacement arthroplasty have strongly suggested devising new mathematical models capable of reproducing the restraining function of each structure of the joint and of replicating the relative motion of the bones which constitute the joint itself. In this context, novel equivalent mechanisms are proposed for modelling ankle passive motion. Their geometry is based on the joint's anatomical structures. In particular, the role of the main ligaments of the articulation is investigated under passive conditions by means of nine 5-5 fully parallel mechanisms. Based on this investigation, a one-DOF spatial mechanism is developed for modelling the passive motion of the lower leg. The model considers many of the passive structures constituting the articulation, overcoming the limitations of previous models which took into account only a few anatomical elements of the ankle complex. All the models have been identified from experimental data by means of optimization procedures.
The simulated motions have then been compared with the experimental ones, in order to show the efficiency of the approach and thus to deduce the role of each anatomical structure in the ankle's kinematic behaviour.
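The identification step described above amounts to minimizing a misfit between simulated and experimental motion over the mechanism's geometric parameters. A deliberately minimal one-parameter sketch follows; the "mechanism" here is a hypothetical single rigid link, standing in for the actual 5-5 parallel mechanisms, and the search is a plain grid search:

```python
import numpy as np

def simulate(link_length, angles):
    """Planar trajectory of a hypothetical one-DOF rigid link."""
    return np.column_stack([link_length * np.cos(angles),
                            link_length * np.sin(angles)])

def identify(angles, measured, candidates):
    """Pick the link length whose simulated motion best fits the measured
    trajectory (least-squares misfit over a grid of candidate values)."""
    misfits = [np.sum((simulate(L, angles) - measured) ** 2)
               for L in candidates]
    return candidates[int(np.argmin(misfits))]

angles = np.linspace(0.0, np.pi / 3, 20)
measured = simulate(0.42, angles)          # hypothetical "experimental" data
candidates = np.linspace(0.30, 0.50, 201)  # candidate link lengths (m)
best = identify(angles, measured, candidates)
```

In the actual work the parameter vector is much larger (ligament attachment points, link lengths) and a proper nonlinear optimizer replaces the grid search, but the structure, simulate then score against experimental data, is the same.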

Relevance:

20.00%

Publisher:

Abstract:

The determination of skeletal loading conditions in vivo, and their relationship to the health of bone tissues, remains an open question. Computational modeling of the musculoskeletal system is the only practicable method providing a valuable approach to muscle and joint loading analyses, although crucial shortcomings limit the translation of computational methods into orthopedic and neurological practice. Growing attention has focused on subject-specific modeling, particularly when pathological musculoskeletal conditions need to be studied. Nevertheless, subject-specific data cannot always be collected in research and clinical practice, and there is a lack of efficient methods and frameworks for building models and incorporating them into simulations of motion. The overall aim of the present PhD thesis was to introduce improvements to state-of-the-art musculoskeletal modeling for the prediction of physiological muscle and joint loads during motion. A threefold goal was articulated as follows: (i) develop state-of-the-art subject-specific models and analyze skeletal load predictions; (ii) analyze the sensitivity of model predictions to relevant musculotendon model parameters and kinematic uncertainties; (iii) design an efficient software framework simplifying the effort-intensive pre-processing phases of subject-specific modeling. The first goal underlined the relevance of subject-specific musculoskeletal modeling for determining physiological skeletal loads during gait, corroborating the choice of fully subject-specific modeling for the analysis of pathological conditions. The second goal characterized the sensitivity of skeletal load predictions to the major musculotendon parameters and to kinematic uncertainties, and robust probabilistic methods were applied for methodological and clinical purposes. The last goal produced an efficient software framework for subject-specific modeling and simulation which is practical, user-friendly and effort-effective.
Future research aims at the implementation of more accurate models describing lower-limb joint mechanics and musculotendon paths, and at the assessment, through probabilistic modeling, of an overall picture of the crucial model parameters affecting the skeletal load predictions.
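The probabilistic sensitivity analysis mentioned in goal (ii) can be sketched as Monte Carlo propagation of parameter uncertainty through a model. The load model below is a deliberately simplified hypothetical surrogate, not the actual musculoskeletal model; the distributions are likewise illustrative:

```python
import numpy as np

def joint_load(strength, moment_arm, angle_error):
    """Hypothetical surrogate: joint load grows with muscle strength,
    falls with moment arm, and is perturbed by kinematic error."""
    return strength / moment_arm * (1.0 + 0.1 * angle_error)

rng = np.random.default_rng(0)
n = 10_000

# Sample uncertain inputs (hypothetical distributions)
strength = rng.normal(1000.0, 100.0, n)   # muscle strength (N)
moment_arm = rng.normal(0.05, 0.005, n)   # moment arm (m)
angle_error = rng.normal(0.0, 1.0, n)     # kinematic uncertainty (deg)

loads = joint_load(strength, moment_arm, angle_error)
mean, std = loads.mean(), loads.std()
```

The spread of `loads` relative to the spread of each input is the sensitivity information; ranking inputs by their contribution to the output variance identifies the parameters that matter most, which is the methodological point of goal (ii).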

Relevance:

20.00%

Publisher:

Abstract:

The reform of bicameralism has been one of the most debated topics in the Italian legal order ever since the "granting" of the Albertine Statute. In the Constituent Assembly, in fact, first the choice between unicameralism and bicameralism, and then the debate on what type of bicameralism should be adopted, produced an organizational part of the Constitution that is particularly weak and largely outdated with respect to the needs of its first part, which, on the contrary, was the distillation of a deep consonance of ideals. The thesis therefore aims to demonstrate that the need to reform the bicameral system in Italy is today more topical and necessary than ever. On the one hand, such a reform would help overcome the inefficiencies of the parliamentary system and could be a tool to remedy the weak rationalization of the form of government which has always determined the structural instability of the executives. On the other hand, the reform would above all serve to establish the organic connection between the State and the regions that is necessary to complete the regionalist design derived from the Constitution itself. This need has, moreover, been considerably strengthened by the reform of Title V of the Constitution, whose innovative scope has been substantially emptied of content because of the difficulties encountered in its implementation.

Relevance:

20.00%

Publisher:

Abstract:

This thesis is the result of a five-month internship at the company "Aliva", which deals with façade systems. The aim of the thesis is the development of a new concealed fixing system for glass panels. The research arose from the company's request for a new product for fixing large glass panels with a very reduced visual impact. The system combines two technologies: a mechanical system and a high-performance structural adhesive system. Starting from laboratory tests, the thesis carries out finite-element verifications, with the aid of the Straus7 software, of laminated-glass and insulating-glass panels.

Relevance:

20.00%

Publisher:

Abstract:

The research aims at developing a framework for the semantic-based digital survey of architectural heritage. Rooted in knowledge-based modeling, which extracts mathematical constraints on geometry from architectural treatises, as-built information obtained from image-based modeling is integrated with the ideal model in a BIM platform. Knowledge-based modeling transforms the geometry and parametric relations of architectural components from 2D prints into 3D digital models, and creates a large number of variations based on a shape grammar in real time thanks to parametric modeling. It also provides prior knowledge for semantically segmenting unorganized survey data. The emergence of SfM (Structure from Motion) provides access to the reconstruction of large, complex architectural scenes with high flexibility, low cost and full automation, but low reliability of metric accuracy. We address this problem by combining photogrammetric approaches consisting of camera configuration, image enhancement, bundle adjustment, etc. Experiments show that the accuracy of image-based modeling following our workflow is comparable to that of range-based modeling. We also demonstrate positive results of our optimized approach in the digital reconstruction of a portico, where a low-texture vault and dramatic transitions in illumination cause great difficulties for the workflow without optimization. Once the as-built model is obtained, it is integrated with the ideal model in the BIM platform, which allows multiple forms of data enrichment. In spite of its promising prospects in the AEC industry, BIM has been developed with limited consideration of reverse engineering from survey data. Besides representing the architectural heritage in parallel ways (ideal model and as-built model) and comparing their differences, we are concerned with how to create the as-built model in BIM software, which is still an open problem.
The research is expected to be fundamental for the study of architectural history, for the documentation and conservation of architectural heritage, and for the renovation of existing buildings.
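The bundle adjustment used in the photogrammetric workflow above minimizes the reprojection error of 3D points over camera parameters. A minimal sketch of that residual for an ideal pinhole camera follows; all numeric values (focal length, point positions) are hypothetical:

```python
import numpy as np

def project(point_3d, focal, R, t):
    """Pinhole projection of a 3D point given rotation R, translation t
    and focal length (principal point at the image origin)."""
    p_cam = R @ point_3d + t
    return focal * p_cam[:2] / p_cam[2]

def reprojection_error(points_3d, observations, focal, R, t):
    """RMS distance between observed and reprojected image points,
    the quantity bundle adjustment minimizes over cameras and points."""
    residuals = [project(X, focal, R, t) - obs
                 for X, obs in zip(points_3d, observations)]
    return float(np.sqrt(np.mean(np.sum(np.square(residuals), axis=1))))

# Hypothetical scene: identity camera, three points on a facade
R, t, focal = np.eye(3), np.zeros(3), 800.0
points = [np.array([0.0, 0.0, 4.0]),
          np.array([1.0, 0.5, 5.0]),
          np.array([-1.0, 0.2, 6.0])]
obs = [project(X, focal, R, t) for X in points]  # perfect observations
err = reprojection_error(points, obs, focal, R, t)
```

Real SfM pipelines add lens-distortion terms and optimize this residual jointly over all cameras and points with a sparse nonlinear least-squares solver; the residual itself is exactly what is shown here.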

Relevance:

20.00%

Publisher:

Abstract:

This work is motivated by the problem of the formation of perceptual units at the level of the primary visual cortex V1. The geometric model of Citti-Sarti is studied in detail, with particular attention to the modeling of visual association phenomena, and a connectivity model is examined in depth. The original contribution lies in the adaptation of the diffusion maps method, recently introduced by Coifman and Lafon, to the sub-Riemannian geometry of the visual cortex. Tools from potential theory, spectral theory and harmonic analysis on Lie groups are used to approximate the eigenfunctions of the heat operator on the group of rigid motions of the plane. The eigenfunctions are used for the extraction of perceptual units from the visual stimulus. Original experimental demonstrations of the method's capabilities are presented.
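The Coifman-Lafon diffusion maps construction adapted in the thesis can be sketched in its basic Euclidean form: build a Gaussian affinity between data points, normalize it into a Markov kernel, and take the leading nontrivial eigenvectors as embedding coordinates. The point cloud below is hypothetical and Euclidean, not the sub-Riemannian cortical geometry of the thesis:

```python
import numpy as np

def diffusion_map(points, epsilon, n_coords=2):
    """Basic diffusion maps (Coifman-Lafon): Gaussian kernel, row
    normalization to a Markov matrix, then spectral decomposition."""
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / epsilon)              # Gaussian affinity
    P = K / K.sum(axis=1, keepdims=True)   # Markov (diffusion) kernel
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)
    idx = order[1:n_coords + 1]            # skip the constant eigenvector
    return evals.real[idx] * evecs.real[:, idx]

# Hypothetical stimulus: two nearby but distinct clusters of points
rng = np.random.default_rng(1)
a = rng.normal([0.0, 0.0], 0.1, (20, 2))
b = rng.normal([2.0, 2.0], 0.1, (20, 2))
coords = diffusion_map(np.vstack([a, b]), epsilon=1.0, n_coords=1)
```

The sign of the first nontrivial diffusion coordinate separates the two clusters, which is the same mechanism by which the heat-operator eigenfunctions extract perceptual units in the thesis, there computed on the group of rigid motions of the plane rather than on Euclidean points.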

Relevance:

20.00%

Publisher:

Abstract:

Joseph Nicolas Cugnot built the first primitive car in 1769, and approximately one hundred years later the first automotive race took place. Thanks to this, aerodynamic principles began to be applied to cars for the first time. The aerodynamic study of a car is important to improve its performance on the road or on the track: it enhances stability in the turns and increases the maximum velocity. It is also useful for decreasing fuel consumption and, therefore, reducing pollution. Given that cars are very complex bodies, an aerodynamic study cannot be conducted by analytical methods; in general, one chooses between two different approaches: the numerical and the experimental one. The results of numerical studies depend on the computers' power and on the method used to implement the mathematical model. Today, the best way to perform an aerodynamic study is still experimental, which means that in the first phase of the design process the study is performed in a wind tunnel and in later phases directly on the track. Automotive wind tunnels are singular mainly because of the test chamber, which typically contains a ground-simulation system. The test chamber can have different types of walls: open, closed, adaptive or slotted. The best solution is slotted walls, because they minimize the interference between the walls and the streamlines and the interaction between the flow and the environment, while also containing the overall costs. Furthermore, it is necessary to minimize the boundary layer at the walls, without accelerating the flow, in order to provide the maximum section of homogeneous flow. This thesis aims at redefining the divergence angle of the walls of the Dallara Automobili S.p.A. wind tunnel, in order to improve the overall flow homogeneity.
To perform this study it was necessary to acquire pressure data in the boundary layer; the boundary-layer velocity profile was then constructed and, to minimize the experimental errors, the displacement thickness was calculated. The results obtained show, even though the instrument used in the experiment was not the best available, that the boundary-layer thickness could be smaller with a low diffusion angle. It is therefore advisable to repeat the experiment with a more sensitive instrument in order to verify which wall configuration is better.
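The displacement thickness used above is defined as the integral of (1 - u/U) across the boundary layer and can be computed from a measured velocity profile by numerical quadrature. The profile below is a hypothetical 1/7-power-law turbulent profile, not the Dallara tunnel data:

```python
import numpy as np

def displacement_thickness(y, u, u_edge):
    """Displacement thickness delta* = integral of (1 - u/U) dy,
    evaluated with the trapezoidal rule over the measured stations."""
    f = 1.0 - u / u_edge
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(y)))

# Hypothetical turbulent profile: 1/7-power law over a 5 cm layer
delta = 0.05                      # boundary-layer thickness (m)
y = np.linspace(0.0, delta, 200)  # wall-normal measurement positions (m)
u_edge = 40.0                     # free-stream velocity (m/s)
u = u_edge * (y / delta) ** (1.0 / 7.0)

d_star = displacement_thickness(y, u, u_edge)  # analytic value: delta / 8
```

Because delta* integrates the whole profile, it is far less sensitive to noise at individual pressure stations than a direct estimate of the boundary-layer edge, which is why it is the preferred measure when the instrumentation is imperfect.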

Relevance:

20.00%

Publisher:

Abstract:

In many areas of industrial manufacturing, for example in the automotive industry, digital mock-ups are used to support the development of complex machines with computer systems as effectively as possible. Motion-planning algorithms play an important role here in guaranteeing that these digital prototypes can actually be assembled without collisions. In recent decades, sampling-based methods have proved particularly successful for this task. They generate a large number of random placements for the object to be installed or removed and use a collision-detection mechanism to check each placement for validity. Collision detection therefore plays an essential role in the design of efficient motion-planning algorithms. One difficulty for this class of planners are so-called "narrow passages", which occur wherever the freedom of movement of the objects being planned is strongly restricted. In such regions it can be difficult to find a sufficient number of collision-free samples, and more sophisticated techniques may be needed to achieve good performance.

This thesis is divided into two parts. In the first part we investigate parallel collision-detection algorithms. Since we target an application in sampling-based motion planners, we choose a problem setting in which the same two objects are repeatedly tested for collision in a large number of different placements. We implement and compare several methods that use bounding-volume hierarchies (BVHs) and hierarchical grids as acceleration structures. All the described methods have been parallelized across multiple CPU cores.

Beyond this, we compare several CUDA kernels for performing BVH-based collision tests on the GPU. In addition to different distributions of the work across the parallel GPU threads, we investigate the effect of different memory-access patterns on the performance of the resulting algorithms. We further present a number of approximate collision tests based on the described methods; when a lower test accuracy is tolerable, a further performance improvement can be achieved in this way.

In the second part of the thesis we describe a parallel, sampling-based motion planner of our own design for handling highly complex problems with several narrow passages. The method works in two phases. The basic idea is to conceptually tolerate small errors in the first planning phase in order to increase planning efficiency, and then to repair the resulting path in a second phase. The planner employed in phase I is based on so-called Expansive Space Trees. In addition, we have equipped the planner with a push-out operation that allows small collisions to be resolved, increasing efficiency in regions of restricted freedom of movement. Optionally, our implementation allows the use of approximate collision tests. This further reduces the accuracy of the first planning phase, but also leads to a further performance gain. The motion paths resulting from phase I may then not be completely collision-free. To repair these paths, we designed a novel planning algorithm that plans a new, collision-free motion path locally, restricted to a small neighbourhood around the existing path.

We tested the described algorithm on a class of new, difficult metal puzzles, some of which exhibit several narrow passages. To our knowledge, no collection of comparably complex benchmarks is publicly available, nor have we found descriptions of comparably complex benchmarks in the motion-planning literature.
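The sampling loop at the heart of such planners can be sketched as follows. The "collision test" here is a trivial sphere-overlap check standing in for the BVH-based tests discussed in the thesis, and all geometry is hypothetical:

```python
import random

def collides(center, radius, obstacles):
    """Stand-in collision test: does a sphere at `center` overlap any
    obstacle sphere? (A BVH would accelerate this in a real planner.)"""
    for (ox, oy, oz), orad in obstacles:
        dx, dy, dz = center[0] - ox, center[1] - oy, center[2] - oz
        if dx * dx + dy * dy + dz * dz < (radius + orad) ** 2:
            return True
    return False

def sample_free_placements(n, radius, obstacles, bounds=(-1.0, 1.0), seed=0):
    """Generate n random placements and keep only the collision-free ones,
    as in the validity-checking stage of a sampling-based planner."""
    rng = random.Random(seed)
    lo, hi = bounds
    samples = [(rng.uniform(lo, hi), rng.uniform(lo, hi), rng.uniform(lo, hi))
               for _ in range(n)]
    return [p for p in samples if not collides(p, radius, obstacles)]

# One obstacle blocking the center of the workspace (hypothetical)
obstacles = [((0.0, 0.0, 0.0), 0.5)]
free = sample_free_placements(1000, radius=0.1, obstacles=obstacles)
```

In a narrow passage the fraction of samples surviving this filter drops sharply, which is exactly why the thesis invests in fast (and optionally approximate) collision tests and in the push-out operation of phase I.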