651 results for SMOOTHING SPLINE
Abstract:
The context of this thesis is the development of interactive graphics applications. It discusses how certain techniques are implemented in the design of a video game built on a 3D graphics engine. The thesis covers both the theory of curves, explaining how paths can be described in a computer and justifying why particular algorithms were chosen, and the tools used to create the game, focusing on how the engine (Unity3D) works and providing details of the code implementation. Information about the development of the idea and the artistic side of a video game is also included.
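As a purely illustrative aside (the abstract does not name the curve family, and the thesis itself targets Unity3D/C#), a common way to describe a path through a list of waypoints in a game is a Catmull-Rom spline; a minimal Python sketch of evaluating one segment:

```python
# Hypothetical illustration: evaluating one Catmull-Rom spline segment, a curve
# type often used to route game objects through waypoints. The thesis does not
# state which curve family it adopts; this is an assumption.
import numpy as np

def catmull_rom_point(p0, p1, p2, p3, t):
    """Point on the segment between p1 and p2, for t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return 0.5 * ((2.0 * p1)
                  + (-p0 + p2) * t
                  + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                  + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3)

# Sample a path segment through four 3D waypoints.
waypoints = [np.array(p, float) for p in [(0, 0, 0), (1, 2, 0), (3, 3, 1), (4, 1, 1)]]
path = [catmull_rom_point(*waypoints, t) for t in np.linspace(0.0, 1.0, 20)]
```

In an engine, such a sampled path is typically traversed by moving an object's transform along the precomputed points.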
Abstract:
This thesis concerns the study and implementation of the main algorithms for representing and modelling T-Spline surfaces. In particular, it seeks to determine the advantages and disadvantages that these surfaces offer compared with the NURBS surfaces used in CAD software.
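For context, the tensor-product NURBS surfaces that the comparison refers to have the standard rational form

```latex
S(u,v) \;=\; \frac{\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\, N_{j,q}(v)\, w_{i,j}\, \mathbf{P}_{i,j}}
             {\sum_{i=0}^{n}\sum_{j=0}^{m} N_{i,p}(u)\, N_{j,q}(v)\, w_{i,j}},
```

where the N_{i,p} and N_{j,q} are B-spline basis functions of degrees p and q, the P_{i,j} are control points and the w_{i,j} their weights. T-splines relax the requirement that the control points form a full rectangular grid, allowing T-junctions and hence local refinement with fewer control points.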
Abstract:
This work explores and extends the theory of generalized spline function spaces that are useful in geometric modelling. In particular, the extensive but fragmented literature, produced in different countries and periods, is analysed with the aim of defining a single, complete theory that can actually be used in applications, especially geometric modelling. In this setting, the spline space one chooses to work in must possess a basis with very specific properties, on which our treatment focuses. Supported by numerical and symbolic experiments, we prove the Variation Diminishing property and identify several spline spaces with the desired characteristics.
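For reference, the Variation Diminishing property mentioned above is usually stated in terms of sign changes: for a function written in a normalized totally positive basis B_1, ..., B_n,

```latex
S^{-}\!\left(\sum_{i=1}^{n} c_i\, B_i\right) \;\le\; S^{-}(c_1, c_2, \dots, c_n),
```

where S^{-} counts strict sign changes. Geometrically, a spline curve built on such a basis crosses any straight line no more often than its control polygon does, which is what makes these bases attractive for geometric modelling.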
Abstract:
The thesis is concerned with local trigonometric regression methods. The aim was to develop a method for extracting cyclical components from time series. The main results of the thesis are the following. First, a generalization of the filter proposed by Christiano and Fitzgerald is provided for the smoothing of ARIMA(p,d,q) processes. Second, a local trigonometric filter is built and its statistical properties are derived. Third, the convergence properties of trigonometric estimators are discussed, together with the problem of choosing the order of the model. A large-scale simulation experiment was designed to assess the performance of the proposed models and methods. The results show that local trigonometric regression may be a useful tool for periodic time series analysis.
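The thesis' filter is not reproduced in the abstract; as a rough illustration of the basic building block, a kernel-weighted trigonometric least-squares fit at a fixed frequency might look like the following Python sketch (the function names, the Epanechnikov kernel, and the single-harmonic model are assumptions):

```python
# Minimal sketch of local trigonometric regression (an assumed building block,
# not the thesis' actual filter): around every time point the series is
# approximated by a constant plus one harmonic, fitted by kernel-weighted
# least squares; only the trigonometric part is returned as the cycle.
import numpy as np

def local_trig_fit(y, omega, bandwidth):
    y = np.asarray(y, dtype=float)
    n = len(y)
    t = np.arange(n, dtype=float)
    X = np.column_stack([np.ones(n), np.cos(omega * t), np.sin(omega * t)])
    cycle = np.empty(n)
    for i in range(n):
        u = (t - t[i]) / bandwidth
        w = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)  # Epanechnikov weights
        sw = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
        cycle[i] = beta[1] * np.cos(omega * t[i]) + beta[2] * np.sin(omega * t[i])
    return cycle

# Example: extract a cycle of period 12 from a noisy trending series.
rng = np.random.default_rng(0)
time = np.arange(240)
series = np.sin(2 * np.pi * time / 12) + 0.02 * time + rng.normal(scale=0.5, size=time.size)
cycle = local_trig_fit(series, omega=2 * np.pi / 12, bandwidth=24.0)
```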
Abstract:
This thesis presents a general method for constructing local interpolating generalized spline curves. These curves are built by blending generalized interpolating polynomials with generalized blending functions. Some of the properties of these curves are also verified experimentally.
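To make the blending idea concrete, here is a Python sketch of the plain polynomial special case (an illustration, not the generalized construction of the thesis): on each interval the curve smoothly blends two overlapping local quadratic interpolants, yielding a C^1 curve that interpolates the data.

```python
# Illustration of "blend two overlapping local interpolants with a smooth
# blending function"; the thesis replaces polynomials and the blending
# function by generalized counterparts.
import numpy as np

def local_quadratic(xs, ys, x):
    """Lagrange quadratic through three points, evaluated at x."""
    (x0, x1, x2), (y0, y1, y2) = xs, ys
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

def blended_spline(x_nodes, y_nodes, x):
    """C^1 local interpolating curve on the interior intervals."""
    i = np.searchsorted(x_nodes, x) - 1
    i = min(max(i, 1), len(x_nodes) - 3)           # stay on an interior interval
    t = (x - x_nodes[i]) / (x_nodes[i + 1] - x_nodes[i])
    b = 3 * t**2 - 2 * t**3                        # smooth blending function
    left = local_quadratic(x_nodes[i - 1:i + 2], y_nodes[i - 1:i + 2], x)
    right = local_quadratic(x_nodes[i:i + 3], y_nodes[i:i + 3], x)
    return (1 - b) * left + b * right

x_nodes = np.linspace(0.0, 5.0, 6)
y_nodes = np.sin(x_nodes)
print(blended_spline(x_nodes, y_nodes, 2.3))
```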
Abstract:
The interest in automatic volume meshing for finite element analysis (FEA) has grown since the appearance of microfocus CT (μCT), whose high resolution allows mechanical behaviour to be assessed with high precision. Nevertheless, the basic meshing approach of generating one hexahedron per voxel produces jagged edges. To prevent this effect, smoothing algorithms have been introduced to improve the topology of the mesh. However, whether smoothing also improves the accuracy of voxel-based meshes in clinical applications is still in question. There is a trade-off between smoothing and the quality of the elements in the mesh: excessive smoothing may produce distorted elements and reduce the accuracy of the mesh. In the present work, the influence of smoothing on the accuracy of voxel-based meshes in micro-FE was assessed. An accurate 3D model of a trabecular structure with known apparent mechanical properties was used as a reference model. Virtual CT scans of this reference model (at resolutions of 16, 32 and 64 μm) were then created and used to build voxel-based meshes of the microarchitecture. The effects of smoothing on the apparent mechanical properties of the voxel-based meshes, compared with the reference model, were evaluated. Apparent Young's moduli of the smoothed voxel-based meshes were significantly closer to those of the reference model for the 16 and 32 μm resolutions. Improvements were not significant at 64 μm, due to the loss of trabecular connectivity in the model. This study shows that smoothing offers a real benefit to voxel-based meshes used in micro-FE. It might also extend voxel-based meshing to other biomechanical domains where it was previously not used due to lack of accuracy. As an example, this work will be used in the framework of the European project ContraCancrum, which aims at providing oncologists with patient-specific simulations of tumour development in the brain and lungs. For this type of clinical application, such fast, automatic, and accurate mesh generation is of great benefit.
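The abstract does not specify which smoothing algorithm was used; a common baseline for the surface nodes of voxel meshes is simple Laplacian smoothing, sketched below in Python (the names and the relaxation factor are illustrative).

```python
# Illustrative baseline only: simple Laplacian smoothing of mesh surface nodes.
import numpy as np

def laplacian_smooth(vertices, edges, iterations=10, lam=0.5, fixed=frozenset()):
    """Repeatedly move each free vertex toward the centroid of its neighbours."""
    verts = np.asarray(vertices, dtype=float).copy()
    neighbours = {i: set() for i in range(len(verts))}
    for a, b in edges:
        neighbours[a].add(b)
        neighbours[b].add(a)
    for _ in range(iterations):
        new = verts.copy()
        for i, nbrs in neighbours.items():
            if i in fixed or not nbrs:
                continue
            centroid = verts[list(nbrs)].mean(axis=0)
            new[i] = verts[i] + lam * (centroid - verts[i])
        verts = new
    return verts

# Tiny example: smooth the middle node of a jagged three-node chain.
smoothed = laplacian_smooth([[0, 0, 0], [1, 1, 0], [2, 0, 0]],
                            edges=[(0, 1), (1, 2)], fixed={0, 2})
```

Large relaxation factors or many iterations shrink and distort the structure, which is exactly the smoothing-versus-element-quality trade-off described above.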
Abstract:
We present a new approach for corpus-based speech enhancement that significantly improves over a method published by Xiao and Nickel in 2010. Corpus-based enhancement systems do not merely filter an incoming noisy signal, but resynthesize its speech content via an inventory of pre-recorded clean signals. The goal of the procedure is to perceptually improve the sound of speech signals in background noise. The proposed new method modifies Xiao's method in four significant ways. Firstly, it employs a Gaussian mixture model (GMM) instead of a vector quantizer in the phoneme recognition front-end. Secondly, the state decoding of the recognition stage is supported with an uncertainty modeling technique. With the GMM and the uncertainty modeling it is possible to eliminate the need for noise-dependent system training. Thirdly, the post-processing of the original method via sinusoidal modeling is replaced with a powerful cepstral smoothing operation. Lastly, thanks to these modifications, it is possible to extend the operational bandwidth of the procedure from 4 kHz to 8 kHz. The performance of the proposed method was evaluated across different noise types and different signal-to-noise ratios. The new method was able to significantly outperform traditional methods, including the one by Xiao and Nickel, in terms of PESQ scores and other objective quality measures. Results of subjective CMOS tests over a smaller set of test samples support our claims.
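The paper's cepstral smoothing operation is more elaborate than this, but the basic idea can be sketched in a few lines of Python: low-pass liftering of the real cepstrum of a speech frame yields a smoothed log-spectral envelope (the frame length, FFT size, and lifter cutoff below are arbitrary choices, not the paper's).

```python
# Basic idea of cepstral smoothing: keep only the low-quefrency part of the
# real cepstrum and transform back to obtain a smoothed log-magnitude envelope.
import numpy as np

def cepstrally_smoothed_log_spectrum(frame, n_fft=512, n_keep=30):
    spectrum = np.fft.rfft(frame, n_fft)
    log_mag = np.log(np.abs(spectrum) + 1e-12)
    cepstrum = np.fft.irfft(log_mag, n_fft)        # real cepstrum of the frame
    lifter = np.zeros(n_fft)
    lifter[:n_keep] = 1.0                          # keep low quefrencies ...
    lifter[-(n_keep - 1):] = 1.0                   # ... and their mirrored counterparts
    smoothed_log_mag = np.fft.rfft(cepstrum * lifter, n_fft).real
    return smoothed_log_mag                        # smoothed log-magnitude envelope

# Example: one 32 ms frame of a synthetic 16 kHz signal.
t = np.arange(512) / 16000.0
frame = np.sin(2 * np.pi * 200 * t) * np.hanning(512)
envelope = cepstrally_smoothed_log_spectrum(frame)
```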
Abstract:
2D-3D registration of pre-operative 3D volumetric data with a series of calibrated and undistorted intra-operative 2D projection images has shown great potential in CT-based surgical navigation because it obviates the invasive procedures required by conventional registration methods. In this study, a recently introduced spline-based multi-resolution 2D-3D image registration algorithm was adapted, together with a novel least-squares normalized pattern intensity (LSNPI) similarity measure, for image-guided minimally invasive spine surgery. A phantom and a cadaver, together with their respective ground truths, were specially designed to experimentally assess the factors that may affect the robustness, accuracy, and efficiency of the registration. Our experiments have shown that the assessed 2D-3D registration algorithm can achieve sub-millimeter accuracy in a realistic setup in less than one minute.
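As an illustration only (the paper's LSNPI measure is not specified in the abstract), the classical pattern intensity measure from the same family scores a candidate pose by how little structure remains in the difference between the fluoroscopic image and the rendered projection (DRR); a simple NumPy sketch:

```python
# Sketch of the classical pattern intensity similarity measure (not the
# paper's LSNPI variant): high when the difference image fluoro - s * DRR
# is nearly structure-free within a small neighbourhood of every pixel.
import numpy as np

def pattern_intensity(fluoro, drr, scale=1.0, radius=3, sigma=10.0):
    diff = fluoro.astype(float) - scale * drr.astype(float)
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dx == 0 and dy == 0) or dx * dx + dy * dy > radius * radius:
                continue
            shifted = np.roll(np.roll(diff, dy, axis=0), dx, axis=1)
            total += np.sum(sigma**2 / (sigma**2 + (diff - shifted) ** 2))
    return total
```

An optimizer perturbs the rigid pose of the CT volume, re-renders the DRR, and keeps the pose that maximizes the score; the wrap-around of np.roll at the borders is a simplification.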
Abstract:
This paper proposes a numerically simple routine for locally adaptive smoothing. The locally heterogeneous regression function is modelled as a penalized spline with a smoothly varying smoothing parameter, which is itself modelled as another penalized spline. This is formulated as a hierarchical mixed model, with spline coefficients following a normal distribution that in turn has a smooth structure over the variances. The modelling exercise is in line with Baladandayuthapani, Mallick & Carroll (2005) and Crainiceanu, Ruppert & Carroll (2006), but in contrast to these papers, Laplace's method is used for estimation based on the marginal likelihood. This is numerically simple and fast and quickly provides satisfactory results. We also extend the idea to spatial smoothing and to smoothing in the presence of non-normal responses.
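The adaptive, hierarchical part is the paper's contribution; the non-adaptive penalized spline it builds on can be sketched with a truncated-line basis and a single global smoothing parameter, in the spirit of the mixed-model literature cited above (a minimal Python sketch; all names and tuning values are illustrative):

```python
# Non-adaptive penalized spline baseline: truncated-line basis plus a ridge
# penalty on the knot coefficients. The paper lets the smoothing parameter
# itself vary smoothly; here it is a single constant lam.
import numpy as np

def pspline_fit(x, y, n_knots=20, lam=1.0):
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    P = np.diag([0.0, 0.0] + [1.0] * n_knots)      # penalize knot coefficients only
    beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
    return X @ beta, beta

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(8 * x) + rng.normal(scale=0.3, size=x.size)
fitted, _ = pspline_fit(x, y, lam=2.0)
```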
Abstract:
Numerous time series studies have provided strong evidence of an association between increased levels of ambient air pollution and increased levels of hospital admissions, typically at 0, 1, or 2 days after an air pollution episode. An important research aim is to extend existing statistical models so that a more detailed understanding of the time course of hospitalization after exposure to air pollution can be obtained. Information about this time course, combined with prior knowledge about biological mechanisms, could provide the basis for hypotheses concerning the mechanism by which air pollution causes disease. Previous studies have identified two important methodological questions: (1) How can we estimate the shape of the distributed lag between increased air pollution exposure and increased mortality or morbidity? and (2) How should we estimate the cumulative population health risk from short-term exposure to air pollution? Distributed lag models are appropriate tools for estimating air pollution health effects that may be spread over several days. However, estimation for distributed lag models in air pollution and health applications is hampered by the substantial noise in the data and the inherently weak signal that is the target of investigation. We introduce a hierarchical Bayesian distributed lag model that incorporates prior information about the time course of pollution effects and combines information across multiple locations. The model has a connection to penalized spline smoothing using a special type of penalty matrix. We apply the model to estimating the distributed lag between exposure to particulate matter air pollution and hospitalization for cardiovascular and respiratory disease using data from a large United States air pollution and hospitalization database of Medicare enrollees in 94 counties covering the years 1999-2002.
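A stripped-down, non-Bayesian sketch of the core idea (not the paper's hierarchical model) is a distributed lag regression whose lag coefficients are shrunk toward a smooth curve by a difference penalty, playing the role of the special penalty matrix mentioned above; a hypothetical Python sketch:

```python
# Smooth distributed lag regression: the outcome is regressed on lags
# 0..max_lag of the exposure, and a second-order difference penalty forces
# the estimated lag coefficients to vary smoothly across lags.
import numpy as np

def distributed_lag_fit(pollution, outcome, max_lag=14, lam=10.0):
    n = len(outcome)
    # Row for time t holds pollution at lags 0..max_lag (first max_lag days dropped).
    X = np.column_stack([pollution[max_lag - l : n - l] for l in range(max_lag + 1)])
    y = np.asarray(outcome[max_lag:], dtype=float)
    D = np.diff(np.eye(max_lag + 1), n=2, axis=0)      # second-difference penalty matrix
    theta = np.linalg.solve(X.T @ X + lam * (D.T @ D), X.T @ y)
    return theta                                        # smooth lag curve

rng = np.random.default_rng(2)
pm = rng.normal(size=400)
true_lag = np.exp(-np.arange(15) / 3.0) * 0.5
risk = np.convolve(pm, true_lag)[:400] + rng.normal(scale=0.5, size=400)
lag_curve = distributed_lag_fit(pm, risk)
```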