960 results for Probability Metrics
Abstract:
The aim of the present study was to determine the impact of trabecular bone score (TBS) on the probability of fracture beyond that provided by the clinical risk factors used in FRAX. We performed a retrospective cohort study of 33,352 women aged 40-99 years from the province of Manitoba, Canada, with baseline measurements of lumbar spine TBS and FRAX risk variables. The analysis was cohort-specific rather than based on the Canadian version of FRAX. The associations between TBS, the FRAX risk factors and the risk of fracture or death were examined using an extension of the Poisson regression model, and were used to calculate 10-year probabilities of fracture with and without TBS and to derive an algorithm that adjusts fracture probability for the independent contribution of TBS to fracture and mortality risk. During a mean follow-up of 4.7 years, 1754 women died, 1639 sustained one or more major osteoporotic fractures (excluding hip fracture) and 306 sustained one or more hip fractures. When fully adjusted for FRAX risk variables, TBS remained a statistically significant predictor of major osteoporotic fractures excluding hip fracture (HR/SD 1.18, 95% CI 1.12-1.24), death (HR/SD 1.20, 95% CI 1.14-1.26) and hip fracture (HR/SD 1.23, 95% CI 1.09-1.38). Models adjusting major osteoporotic fracture and hip fracture probability were derived, accounting for age and TBS with death considered as a competing event. Lumbar spine texture analysis using TBS is a risk factor for both osteoporotic fracture and death. The predictive ability of TBS is independent of FRAX clinical risk factors and femoral neck BMD. Adjustment of fracture probability for the independent contribution of TBS to fracture and mortality risk requires validation in independent cohorts.
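The adjustment described above treats death as a competing event. Purely as an illustrative sketch (not the published algorithm), the 10-year probability of a first fracture under constant annual hazards, with both the fracture and mortality hazards scaled by the per-SD hazard ratios quoted in the abstract, could be computed as follows; the function name, baseline hazards and constant-hazard assumption are all hypothetical.

```python
import math

def ten_year_fracture_probability(h_frac, h_death, tbs_z=0.0,
                                  hr_frac_per_sd=1.18, hr_death_per_sd=1.20):
    """Illustrative 10-year fracture probability with death as a competing event.

    h_frac, h_death : assumed constant baseline annual hazards of fracture and death
    tbs_z           : TBS deficit in standard deviations (positive = lower/worse TBS)
    hr_*_per_sd     : hazard ratios per SD of TBS, taken from the abstract
    """
    hf = h_frac * hr_frac_per_sd ** tbs_z     # TBS scales the fracture hazard
    hd = h_death * hr_death_per_sd ** tbs_z   # ...and the mortality hazard
    total = hf + hd
    # Probability that fracture occurs first within 10 years (constant competing hazards)
    return hf / total * (1.0 - math.exp(-10.0 * total))

# Example: 1% annual fracture hazard, 1.5% annual death hazard, TBS one SD below average
print(round(ten_year_fracture_probability(0.01, 0.015, tbs_z=1.0), 3))
```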
Abstract:
This study aims to determine how supply chain performance can be measured in the case company. In 1996 the Supply Chain Council (SCC) developed the Supply Chain Operations Reference (SCOR) model, which also enables performance measurement. The purpose of this study is to apply the SCOR performance measurement framework in the case company. The work is a qualitative case study. The theoretical part mainly reviews the literature on supply chains and performance measurement. The construction of the measurement system begins with an introduction to the case company. The SCOR metrics were built in the case company according to the SCC's recommendations, so that the results would also be usable for benchmarking. The model contains 10 SCOR metrics as well as a few of Halton's own metrics. The conclusion is that the SCOR model gives a good overall picture of supply chain performance, but the case company still needs to develop more informative metrics that would provide more detailed information for its management.
Abstract:
In this study we used market settlement prices of European call options on stock index futures to extract the implied probability density function (PDF). The method produces a PDF of the returns of the underlying asset at the expiration date from the implied volatility smile. With this method, the assumption of lognormally distributed returns (the Black-Scholes model) is tested. The market's view of the asset price dynamics can then be used for various purposes (hedging, speculation). We used the so-called smoothing approach for implied PDF extraction presented by Shimko (1993). In our analysis we obtained implied volatility smiles from index futures markets (S&P 500 and DAX indices) and standardized them. The method introduced by Breeden and Litzenberger (1978) was then used for PDF extraction. The results show significant deviations from the assumption of lognormal returns for S&P 500 options, while DAX options mostly fit the lognormal distribution. A subjective view of the PDF that deviates from the market's can be used to form a strategy, as discussed in the last section.
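A minimal sketch of the Breeden-Litzenberger step under an assumed smooth smile: a fitted volatility curve is converted to call prices, and the risk-neutral density is recovered as the discounted second derivative of price with respect to strike. The Black-Scholes pricer, strike grid and flat smile below are illustrative, not the paper's data or its Shimko-style fit.

```python
import numpy as np
from scipy.stats import norm

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (K and sigma may be arrays)."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_pdf(S, T, r, strikes, smile):
    """Breeden-Litzenberger: risk-neutral density = exp(rT) * d2C/dK2.

    strikes : dense, evenly spaced grid of strikes
    smile   : implied volatility at each strike (e.g. from a fitted smooth curve)
    """
    calls = bs_call(S, strikes, T, r, smile)
    dK = strikes[1] - strikes[0]
    density = np.exp(r * T) * np.gradient(np.gradient(calls, dK), dK)
    return density

# Toy example: a flat 20% smile recovers (approximately) the lognormal density
K = np.linspace(60, 140, 401)
pdf = implied_pdf(S=100.0, T=0.5, r=0.02, strikes=K, smile=np.full_like(K, 0.20))
print(float(np.trapz(pdf, K)))  # should be close to 1
```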
Abstract:
Computed tomography (CT) is an imaging technique in which interest has been quickly growing since it began to be used in the 1970s. Today, it has become an extensively used modality because of its ability to produce accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionising radiation on the population. Among those negative effects, one of the major remaining risks is the development of cancers associated with exposure to diagnostic X-ray procedures. In order to ensure that the benefit-risk ratio remains in favour of the patient, it is necessary to make sure that the delivered dose leads to the proper diagnosis without producing unnecessarily high-quality images. This optimisation scheme is already an important concern for adult patients, but it must become an even greater priority when examinations are performed on children or young adults, in particular with follow-up studies which require several CT procedures over the patient's life. Indeed, children and young adults are more sensitive to radiation due to their faster metabolism. In addition, harmful consequences have a higher probability of occurring because of a younger patient's longer life expectancy. The recent introduction of iterative reconstruction algorithms, which were designed to substantially reduce dose, is certainly a major achievement in CT evolution, but it has also created difficulties in the quality assessment of the images produced using those algorithms. The goal of the present work was to propose a strategy to investigate the potential of iterative reconstructions to reduce dose without compromising the ability to answer the diagnostic questions. The major difficulty lies in having a clinically relevant way to estimate image quality. To ensure the choice of pertinent image quality criteria, this work was performed in close collaboration with radiologists. The work began by tackling the way to characterise image quality in musculoskeletal examinations.
We focused, in particular, on the behaviour of image noise and spatial resolution when iterative image reconstruction was used. The analysis of these physical parameters allowed radiologists to adapt their acquisition and reconstruction protocols while knowing what loss of image quality to expect. This work also dealt with the loss of low-contrast detectability associated with dose reduction, a major concern when reducing patient dose in abdominal investigations. Knowing that alternatives to classical Fourier-space metrics had to be used to assess image quality, we focused on the use of mathematical model observers. Our experimental parameters then determined which type of model to use. Ideal model observers were applied to characterise image quality when purely objective results about signal detectability were sought, whereas anthropomorphic model observers were used in a more clinical context, when the results had to be compared with those of human observers, taking advantage of the fact that such models incorporate elements of the human visual system. This work confirmed that the use of model observers makes it possible to assess image quality with a task-based approach, which in turn establishes a bridge between medical physicists and radiologists. It also demonstrated that statistical iterative reconstructions have the potential to reduce the delivered dose without impairing the quality of the diagnosis. Among the different types of iterative reconstructions, model-based ones offer the greatest potential, since images produced with this modality can still lead to an accurate diagnosis even when acquired at very low dose. This work has clarified the role of medical physicists in CT imaging: the standard metrics of the field remain important for assessing a unit's compliance with legal requirements, but model observers are the tool of choice for optimising imaging protocols.
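As an aside, a common anthropomorphic model observer of the kind mentioned above is the channelized Hotelling observer. The sketch below shows how its detectability index could be computed from sets of signal-present and signal-absent images, assuming a channel matrix (e.g. Gabor or difference-of-Gaussian channels) is supplied elsewhere and ignoring internal noise; it is illustrative, not the implementation used in the thesis.

```python
import numpy as np

def cho_detectability(signal_present, signal_absent, channels):
    """Channelized Hotelling observer detectability index d'.

    signal_present, signal_absent : arrays of shape (n_images, n_pixels)
    channels                      : channel matrix of shape (n_pixels, n_channels);
                                    n_images should exceed n_channels so the channel
                                    covariance is invertible
    """
    vp = signal_present @ channels             # channel outputs, signal present
    va = signal_absent @ channels              # channel outputs, signal absent
    dv = vp.mean(axis=0) - va.mean(axis=0)     # mean difference of channel outputs
    cov = 0.5 * (np.cov(vp, rowvar=False) + np.cov(va, rowvar=False))
    return float(np.sqrt(dv @ np.linalg.solve(cov, dv)))
```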
Abstract:
Background Virtual reality (VR) simulation is increasingly used in surgical disciplines. Since VR simulators measure multiple outcomes, standardized reporting is needed. Methods We present an algorithm for combining multiple VR outcomes into dimension summary measures, which are then integrated into a meaningful total score. We reanalyzed the data of two VR studies applying the algorithm. Results The proposed algorithm was successfully applied to both VR studies. Conclusions The algorithm contributes to standardized and transparent reporting in VR-related research.
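The abstract does not spell the algorithm out, but a minimal, hypothetical z-score scheme illustrates the general idea of collapsing heterogeneous simulator outcomes into dimension summaries and a total score; the function and its equal-weighting choices are assumptions, not the published algorithm.

```python
import numpy as np

def vr_total_score(outcomes, dimensions, higher_is_better):
    """Combine raw VR simulator outcomes into dimension summaries and a total score.

    outcomes         : dict of outcome name -> array of participant values
    dimensions       : dict of dimension name -> list of outcome names in that dimension
    higher_is_better : dict of outcome name -> bool (False for e.g. time or error counts)

    Illustrative only: z-standardise each outcome, orient so higher = better,
    average within dimensions, then average dimensions into a total score.
    """
    z = {}
    for name, values in outcomes.items():
        values = np.asarray(values, dtype=float)
        zi = (values - values.mean()) / values.std(ddof=1)
        z[name] = zi if higher_is_better[name] else -zi
    dim_scores = {d: np.mean([z[m] for m in members], axis=0)
                  for d, members in dimensions.items()}
    total = np.mean(list(dim_scores.values()), axis=0)
    return dim_scores, total
```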
Abstract:
A solitary pulmonary nodule is a common radiographic finding, frequently detected incidentally. The investigation of this entity remains complex, since the characteristics of benign and malignant processes overlap in the differential diagnosis. Currently, many strategies are available to evaluate solitary pulmonary nodules, with the main objectives of characterizing benign lesions as well as possible while avoiding exposing patients to the risks inherent in invasive methods, and of correctly detecting cases of lung cancer so that potentially curative treatment is not delayed. This first part of the study focuses on the epidemiology, the morphological evaluation and the methods used to determine the likelihood of cancer in cases of an indeterminate solitary pulmonary nodule.
Abstract:
The speed of traveling fronts for a two-dimensional model of a delayed reaction-dispersal process is derived analytically and from molecular dynamics simulations. We show that the one-dimensional (1D) and two-dimensional (2D) versions of a given kernel do not always yield the same speed. It is also shown that the speeds of time-delayed fronts may be higher than those predicted by the corresponding non-delayed models. This result is shown for systems with peaked dispersal kernels, which lead to ballistic transport.
Abstract:
This thesis presents two graphical user interfaces for the project DigiQ - Fusion of Digital and Visual Print Quality, a project for computationally modeling the subjective human experience of print quality by measuring the image with certain metrics. After presenting the user interfaces, methods for reducing the computation time of several of the metrics and of the image registration process required to compute them are described, together with details of their performance. The weighted sample method for image registration significantly decreased calculation times, at the cost of some error. The random sampling method for the metrics greatly reduced calculation time while maintaining excellent accuracy, but worked with only two of the metrics.
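The random sampling idea mentioned above amounts to estimating a metric from a subset of pixels rather than the whole image. A hedged sketch of that idea (not the DigiQ implementation; the function, metric and sampling fraction are hypothetical):

```python
import numpy as np

def sampled_metric(image_a, image_b, metric, fraction=0.1, seed=0):
    """Estimate a full-reference metric on a random subset of pixels to cut computation time.

    metric : callable taking two 1-D arrays of corresponding pixel values.
    Illustrative only: accuracy depends on the metric and the sampling fraction.
    """
    rng = np.random.default_rng(seed)
    flat_a, flat_b = image_a.ravel(), image_b.ravel()
    idx = rng.choice(flat_a.size, size=int(fraction * flat_a.size), replace=False)
    return metric(flat_a[idx], flat_b[idx])

# Example: root-mean-square difference estimated from 10% of the pixels
rmse = lambda x, y: float(np.sqrt(np.mean((x - y) ** 2)))
```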
Abstract:
This Ph.D. thesis consists of four original papers. The papers cover several topics from geometric function theory, more specifically, hyperbolic type metrics, conformal invariants, and the distortion properties of quasiconformal mappings. The first paper deals mostly with the quasihyperbolic metric. The main result gives the optimal bilipschitz constant with respect to the quasihyperbolic metric for the Möbius self-mappings of the unit ball. A quasiinvariance property, sharp in a local sense, of the quasihyperbolic metric under quasiconformal mappings is also proved. The second paper studies some distortion estimates for the class of quasiconformal self-mappings fixing the boundary values of the unit ball or convex domains. The distortion is measured by the hyperbolic metric or hyperbolic type metrics. The results provide explicit, asymptotically sharp inequalities when the maximal dilatation of quasiconformal mappings tends to 1. These explicit estimates involve special functions which have a crucial role in this study. In the third paper, we investigate the notion of the quasihyperbolic volume and find the growth estimates for the quasihyperbolic volume of balls in a domain in terms of the radius. It turns out that in the case of domains with Ahlfors regular boundaries, the rate of growth depends not merely on the radius but also on the metric structure of the boundary. The topic of the fourth paper is complete elliptic integrals and inequalities. We derive some functional inequalities and elementary estimates for these special functions. As applications, some functional inequalities and the growth of the exterior modulus of a rectangle are studied.
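For reference, the quasihyperbolic metric used in the first and third papers is a standard object of geometric function theory; its usual definition (stated here from the general literature, not quoted from the thesis) is

```latex
% Quasihyperbolic distance between points x and y of a proper subdomain G of R^n
k_G(x, y) \;=\; \inf_{\gamma} \int_{\gamma} \frac{|dz|}{d(z, \partial G)},
```

where the infimum is taken over all rectifiable curves gamma in G joining x and y, and d(z, ∂G) denotes the Euclidean distance from z to the boundary of G.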
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Simple reaction time (SRT) in response to visual stimuli can be influenced by many stimulus features. The speed and accuracy with which observers respond to a visual stimulus may be improved by prior knowledge about the stimulus location, which can be obtained by manipulating the spatial probability of the stimulus. However, when higher spatial probability is achieved by holding constant the stimulus location throughout successive trials, the resulting improvement in performance can also be due to local sensory facilitation caused by the recurrent spatial location of a visual target (position priming). The main objective of the present investigation was to quantitatively evaluate the modulation of SRT by the spatial probability structure of a visual stimulus. In two experiments the volunteers had to respond as quickly as possible to the visual target presented on a computer screen by pressing an optic key with the index finger of the dominant hand. Experiment 1 (N = 14) investigated how SRT changed as a function of both the different levels of spatial probability and the subject's explicit knowledge about the precise probability structure of visual stimulation. We found a gradual decrease in SRT with increasing spatial probability of a visual target regardless of the observer's previous knowledge concerning the spatial probability of the stimulus. Error rates, below 2%, were independent of the spatial probability structure of the visual stimulus, suggesting the absence of a speed-accuracy trade-off. Experiment 2 (N = 12) examined whether changes in SRT in response to a spatially recurrent visual target might be accounted for simply by sensory and temporally local facilitation. The findings indicated that the decrease in SRT brought about by a spatially recurrent target was associated with its spatial predictability, and could not be accounted for solely in terms of sensory priming.
Abstract:
The calyx of Held, a specialized synaptic terminal in the medial nucleus of the trapezoid body, undergoes a series of changes during postnatal development that prepares this synapse for reliable high frequency firing. These changes reduce short-term synaptic depression during tetanic stimulation and thereby prevent action potential failures during a stimulus train. We measured presynaptic membrane capacitance changes in calyces from young postnatal day 5-7 (p5-7) or older (p10-12) rat pups to examine the effect of calcium buffer capacity on vesicle pool size and the efficiency of exocytosis. Vesicle pool size was sensitive to the choice and concentration of exogenous Ca2+ buffer, and this sensitivity was much stronger in younger animals. Pool size and exocytosis efficiency in p5-7 calyces were depressed by 0.2 mM EGTA to a greater extent than with 0.05 mM BAPTA, even though BAPTA is a 100-fold faster Ca2+ buffer. However, this was not the case for p10-12 calyces. With 5 mM EGTA, exocytosis efficiency was reduced to a much larger extent in young calyces compared to older calyces. Depression of exocytosis using pairs of 10-ms depolarizations was reduced by 0.2 mM EGTA compared to 0.05 mM BAPTA to a similar extent in both age groups. These results indicate a developmentally regulated heterogeneity in the sensitivity of different vesicle pools to Ca2+ buffer capacity. We propose that, during development, a population of vesicles that are tightly coupled to Ca2+ channels expands at the expense of vesicles more distant from Ca2+ channels.