7 results for Goodness
in AMS Tesi di Dottorato - Alm@DL - Università di Bologna
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected over many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and the expected disease counts computed from reference population disease rates, an SMR is derived in each area as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method focused on multiple testing control, without abandoning the preliminary-study perspective that an analysis of SMR indicators is required to keep. We implement control of the False Discovery Rate (FDR), a quantity widely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
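To make the small-area problem concrete, here is a minimal sketch (illustrative numbers, not from the thesis) of the SMR as the Poisson MLE and of the exact one-sided p-value for the null hypothesis of no excess risk:

```python
import math

def smr_and_pvalue(observed, expected):
    """SMR = O/E is the MLE of the relative risk theta under O ~ Poisson(E * theta).
    The one-sided p-value tests H0: theta = 1 against theta > 1."""
    smr = observed / expected
    # P(X >= observed) for X ~ Poisson(expected), via the exact pmf
    cdf = sum(math.exp(-expected) * expected**k / math.factorial(k)
              for k in range(observed))
    return smr, 1.0 - cdf

# A small area: 4 observed cases against 1.5 expected gives a large SMR
# (about 2.67) yet only moderate evidence (p ~ 0.066), illustrating the
# weak power of p-value based screening where expected counts are low.
smr, p = smr_and_pvalue(4, 1.5)
```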
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than by the single observation alone. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (FDR-hat) can be calculated on any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The FDR-hat can be used to provide a simple decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules. The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991).
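The FDR-hat based selection rule described above can be sketched as follows (a minimal illustration with made-up posterior probabilities, not the thesis's implementation): sort the areas by their posterior null probability b_i and keep adding areas while the running average stays within the prefixed level.

```python
def select_high_risk(b, alpha):
    """Greedy FDR-hat rule: reject areas with the smallest posterior null
    probabilities b_i while the average of the rejected b_i's (the FDR-hat)
    does not exceed alpha."""
    order = sorted(range(len(b)), key=lambda i: b[i])
    selected, running_sum = [], 0.0
    for i in order:
        running_sum += b[i]
        if running_sum / (len(selected) + 1) > alpha:
            break
        selected.append(i)
    return selected  # indices of areas declared high-risk

# Illustrative posterior null probabilities for six areas:
b = [0.01, 0.03, 0.20, 0.45, 0.70, 0.90]
areas = select_high_risk(b, alpha=0.10)  # selects the first three areas
```

Since the b_i's are sorted in ascending order, the running average is non-decreasing, so stopping at the first violation indeed selects as many areas as possible.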
A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and quality of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true, and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimation on sets formed by all the b_i's lower than a threshold t. We show graphs of the FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation), obtained both with our model and with the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). The results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of FDR-hat based decision rules is generally low, but the specificity is high; in these scenarios, selection rules based on FDR-hat = 0.05 or FDR-hat = 0.10 can be suggested.
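When the truth is known by simulation, the quantities used in the summaries above (true FDR, sensitivity, specificity of a selection) reduce to a simple confusion-matrix computation; a minimal sketch with hypothetical inputs:

```python
def evaluate_selection(selected, truly_high_risk, n_areas):
    """True FDR, sensitivity and specificity of a set of rejected areas,
    given the (simulated) set of truly high-risk areas."""
    sel, true_set = set(selected), set(truly_high_risk)
    tp = len(sel & true_set)            # correctly declared high-risk
    fp = len(sel - true_set)            # false discoveries
    fn = len(true_set - sel)            # missed high-risk areas
    tn = n_areas - tp - fp - fn         # correctly left out
    true_fdr = fp / len(sel) if sel else 0.0
    sensitivity = tp / (tp + fn) if (tp + fn) else 1.0
    specificity = tn / (tn + fp) if (tn + fp) else 1.0
    return true_fdr, sensitivity, specificity
```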
In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and FDR-hat = 0.15 based decision rules gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios, decision rules based on FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 cannot be suggested, because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of the relative risk values and the control of the FDR, except in the scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
Abstract:
Dielectric Elastomers (DE) are incompressible dielectrics which can undergo deviatoric (isochoric) finite deformations in response to large applied electric fields. Thanks to the strong electro-mechanical coupling, DE intrinsically offer great potential for conceiving novel solid-state mechatronic devices, in particular linear actuators, which are more integrated, lightweight, economical, silent, resilient and disposable than equivalent devices based on traditional technologies. Such systems may have a huge impact in applications where traditional technology cannot cope with weight or size limits, or with problems involving interaction with humans or unknown environments. Fields such as medicine, home automation, entertainment, aerospace and transportation may profit. For actuation, DE are typically shaped into thin films coated with compliant electrodes on both sides and piled one on top of the other to form a multilayered DE. DE-based Linear Actuators (DELA) are entirely made of polymeric materials, and their overall performance is strongly influenced by several interacting factors: first, the electromechanical properties of the film; second, the mechanical properties and geometry of the polymeric frame designed to support the film; and finally, the driving circuits and activation strategies. In the last decade, much effort has been devoted to the development of analytical and numerical models that can explain and predict the hyperelastic behavior of different types of DE materials. Nevertheless, at present the use of DELA is limited. The main reasons are 1) the lack of quantitative and qualitative models of the actuator as a whole system, and 2) the lack of a simple and reliable design methodology. In this thesis, a new point of view on the study of DELA is presented which takes into account the interaction between the DE film and the film-supporting frame.
Hyperelastic models of the DE film are reported which are capable of modeling both the DE and the compliant electrodes. The supporting frames are analyzed and designed as compliant mechanisms using pseudo-rigid-body models and subsequent finite element analysis. A new design methodology is reported which optimizes the actuator performance and allows its inherent stiffness to be chosen explicitly. As a particular case, the methodology focuses on the design of constant-force actuators, a class of actuators that exemplifies how force control can be greatly simplified. Three new DE actuator concepts are proposed which demonstrate the effectiveness of the proposed method.
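The actuation principle mentioned above can be quantified with the classic electrostatic (Maxwell) pressure for an incompressible DE film, p = ε0·εr·E²; the material and drive values below are typical illustrative figures, not taken from the thesis:

```python
# Back-of-the-envelope actuation pressure for a thin DE film between
# compliant electrodes (illustrative values, not from the thesis).
eps0 = 8.854e-12          # vacuum permittivity, F/m
epsr = 3.0                # typical relative permittivity of an acrylic DE
voltage = 3.0e3           # applied voltage, V
thickness = 50e-6         # film thickness, m

e_field = voltage / thickness          # electric field: 6e7 V/m
pressure = eps0 * epsr * e_field ** 2  # Maxwell pressure, ~1e5 Pa
```

Pressures on the order of 0.1 MPa at kilovolt drive voltages explain why film thickness and frame compliance dominate the actuator design.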
Abstract:
This thesis presents the measurement of the neutrino velocity with the OPERA experiment in the CNGS beam, a muon neutrino beam produced at CERN. The OPERA detector observes muon neutrinos 730 km away from the source. Previous measurements of the neutrino velocity have been performed by other experiments. Since the OPERA experiment aims at the direct observation of muon neutrino oscillations into tau neutrinos, a higher-energy beam is employed. This characteristic, together with the higher number of interactions in the detector, allows for a measurement with a much smaller statistical uncertainty. Moreover, a much more sophisticated timing system (composed of cesium clocks and GPS receivers operating in "common view mode") and a Fast Waveform Digitizer (installed at CERN and able to measure the internal time structure of the proton pulses used for the CNGS beam) allow for a new measurement with a smaller systematic error. Theoretical models of Lorentz-violating effects can be investigated by neutrino velocity measurements with terrestrial beams. The analysis has been carried out with a blind method in order to guarantee the internal consistency and the quality of each calibration measurement. The measurement performed is the most precise one done with a terrestrial neutrino beam: the statistical accuracy achieved by the OPERA measurement is about 10 ns, and the systematic error is about 20 ns.
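A quick back-of-the-envelope check puts the quoted numbers in perspective: a ~730 km baseline corresponds to a light time of flight of about 2.4 ms, so a ~10 ns timing accuracy probes the relative velocity deviation (v − c)/c at the few-parts-per-million level (rounded figures for illustration only):

```python
# Order-of-magnitude check of the quoted timing accuracy.
c = 299_792_458.0          # speed of light, m/s
baseline = 730.0e3         # approximate CERN - Gran Sasso baseline, m

tof = baseline / c         # light time of flight, ~2.435e-3 s
sigma_t = 10e-9            # quoted statistical accuracy, s
relative_precision = sigma_t / tof   # sensitivity on (v - c)/c, ~4e-6
```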
Abstract:
The specific way content is acquired through digital interfaces condemns the epistemic agent to a fragmented interaction, insufficient from a computational, mnemonic and temporal point of view with respect to the mass of information accessible today through any implementation of the human-computer relation. It invalidates the applicability of the standard model of knowledge as justified true belief, undermining the concept of rationally grounded belief, since forming such a belief would require the agent to have at his disposal conceptual, computational and temporal resources that are in fact inaccessible. The consequence is that the agent, constrained by the ontological limitations typical of interaction with cultural interfaces, is forced to fall back on processes of information selection and management that are ambiguous, arbitrary and often more random than he believes. These give rise to genuine epistemological hybrids (in Latour's sense), made of sensations and program outputs, ungrounded beliefs and bits of indirect testimony, and of a whole series of human-digital relations that prompt an escape into a transcendent dimension which finds in the sacred its most immediate sphere of actuation. Given all this, the present work sets out to build a new epistemological paradigm of propositional knowledge obtainable through a digital content-acquisition interface, founded on the new concept of Digital Tracing (Tracciatura Digitale), defined as a process of digital acquisition of a set of traces, that is, meta-information of a testimonial nature.
Once recognized as a process of content communication, this device will be based on the search for and selection of meta-information, i.e. traces, which will allow the implementation of approaches derived from decision analysis under bounded rationality. These approaches, besides being almost never used in this field, are ontologically suited to managing the kind of uncertainty found in the instantiation of the informational hybrid and, under certain conditions, can assure the agent of the epistemic goodness of the acquired content.
Abstract:
Hybrid vehicles (HV), comprising a conventional ICE-based powertrain and a secondary energy source that is also converted into mechanical power, represent a well-established alternative for substantially reducing both the fuel consumption and the tailpipe emissions of passenger cars. Several HV architectures are either being studied or already available on the market, e.g. Mechanical, Electric, Hydraulic and Pneumatic Hybrid Vehicles. Among these, the Electric (HEV) and Mechanical (HSF-HV) parallel hybrid configurations are examined throughout this Thesis. To fully exploit the potential of HVs, the hybrid components to be installed must be properly chosen and sized, and an effective Supervisory Control must be adopted to coordinate how the different power sources are managed and how they interact. Real-time controllers can be derived starting from the optimal benchmark results obtained. However, the application of these powerful instruments requires a simplified and yet reliable and accurate model of the hybrid vehicle system. This can be a complex task, especially as the complexity of the system grows, as for the HSF-HV system assessed in this Thesis. The first task of the following dissertation is to establish the optimal modeling approach for an innovative and promising mechanical hybrid vehicle architecture. It will be shown how the chosen modeling paradigm affects both the quality of the solution and the computational effort required, using an optimization technique based on Dynamic Programming. The second goal concerns the control of pollutant emissions in a parallel Diesel HEV, whose emission levels under real-world driving conditions are substantially higher than those usually obtained in a homologation cycle.
For this reason, an on-line control strategy capable of guaranteeing that the desired emission levels are respected, while minimizing fuel consumption and avoiding excessive battery depletion, is the target of the corresponding section of the Thesis.
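The Dynamic Programming benchmark mentioned above can be sketched in its bare essentials: a backward recursion over a discretized battery state, minimizing fuel over a power-demand profile. The model below (`fuel_cost`, the state grid, the demand profile) is a deliberately toy stand-in, not the HSF-HV model of the Thesis:

```python
def dp_benchmark(demands, soc_levels, fuel_cost):
    """Backward DP over a discretized state of charge.
    Returns the minimal total fuel indexed by initial state of charge;
    at each step the power demand is split between engine and battery."""
    INF = float("inf")
    # cost_to_go[s] = minimal fuel from the current step to the end
    cost_to_go = [0.0] * len(soc_levels)
    for p_dem in reversed(demands):
        new_cost = []
        for soc in soc_levels:
            best = INF
            for s2, soc_next in enumerate(soc_levels):
                batt_power = soc - soc_next       # energy drawn from battery
                engine_power = p_dem - batt_power
                if engine_power < 0:              # engine cannot absorb power
                    continue
                best = min(best, fuel_cost(engine_power) + cost_to_go[s2])
            new_cost.append(best)
        cost_to_go = new_cost
    return cost_to_go

# Toy usage: two steps of unit demand, three battery levels, linear fuel
# model. Starting with a full battery, the demand can be met without fuel.
result = dp_benchmark([1.0, 1.0], [0.0, 1.0, 2.0], lambda p: p)
```

The nested loop over all state pairs shows directly how the curse of dimensionality arises: computational effort scales with the square of the state grid at every time step, which is why the choice of modeling paradigm matters.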
Abstract:
Traditionally, the goal of calibrating a rainfall-runoff model has been to obtain a parameter set (or a probability distribution of the parameters) that maximizes the fit of the simulated data to the observed reality, only partially addressing the intended application of the model. This thesis proposes a calibration methodology motivated by the evidence that the correspondence between observed and simulated data is not always the most appropriate criterion for calibrating a hydrological model. For practical purposes, a better representation of one particular aspect of the hydrograph may in fact be more useful than another. The proposed calibration method evaluates the model performance by estimating its utility in the intended application. Through suitable functions, the utility of the simulation is evaluated at each time step; calibration is then performed by maximizing an objective function given by the sum of the utilities estimated at the individual time steps. The analyses show that such objective functions make it possible to improve the model performance where it matters most for the intended application.
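The utility-based objective described above can be sketched as follows; the specific utility function (weighting errors above a flood threshold more heavily) is a hypothetical example of an application-oriented choice, not the one used in the thesis:

```python
def calibration_objective(simulated, observed, utility):
    """Objective function: sum of per-time-step utilities, to be maximized."""
    return sum(utility(s, o) for s, o in zip(simulated, observed))

def flood_oriented_utility(sim, obs, threshold=100.0):
    # Hypothetical utility: squared errors above a flood threshold
    # count double, reflecting a flood-forecasting application.
    weight = 2.0 if obs >= threshold else 1.0
    return -weight * (sim - obs) ** 2

# Two time steps of discharge (same units for sim and obs):
score = calibration_objective([90.0, 120.0], [95.0, 110.0],
                              flood_oriented_utility)
```

Swapping the utility function changes which aspect of the hydrograph the calibration rewards, which is precisely the flexibility the proposed methodology is after.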
Abstract:
The evaluation of knee joint behavior is fundamental in many applications, such as joint modeling and prosthesis and orthosis design. In-vitro tests are important in order to analyse knee behavior when simulating various loading conditions and studying the physiology of the joint. A new test rig for the in-vitro evaluation of knee joint behavior is presented in this paper. It represents the evolution of a previously proposed rig, designed to overcome its principal limitations and to improve its performance. The design procedure and the solutions adopted to satisfy the specifications are presented here. Thanks to its 6-6 Gough-Stewart parallel manipulator loading system, the rig replicates general loading conditions on the specimen, such as daily actions or clinical tests, over a wide range of flexion angles. The restraining actions of the knee muscles can be reproduced when active actions are simulated. The joint motion in response to the applied loads, guided by the passive articular structures and muscles, is permitted by the force-controlled loading system. The new test rig guarantees visibility, so that motion can be measured by an optoelectronic system. Furthermore, the control system of the new test rig allows the contribution of the principal leg muscles to the equilibrium of the joint to be estimated by the muscle simulation system. Accuracy in positioning is guaranteed by the designed tibia and femur fixation systems, which allow unmounting and remounting the specimen in the same pose. The test rig presented in this paper permits the analysis of knee joint behavior and comparative analyses on the same specimen before and after surgery, so as to assess the quality of prostheses or surgical treatments.