25 results for Standard models
Abstract:
PURPOSE Computed tomography (CT) accounts for more than half of the total radiation exposure from medical procedures, which makes dose reduction in CT an effective means of reducing radiation exposure. We analysed the dose reduction that can be achieved with a new CT scanner [Somatom Edge (E)] that incorporates new developments in hardware (detector) and software (iterative reconstruction). METHODS We compared volume CT dose index (CTDIvol) and dose-length product (DLP) values of 25 consecutive patients studied with non-enhanced standard brain CT on the new scanner and on each of two previous models, a 64-row multi-detector CT (MDCT) scanner (S64) and a 16-row MDCT scanner (S16). We analysed signal-to-noise and contrast-to-noise ratios in images from the three scanners, and three neuroradiologists rated image quality to assess whether the dose reduction techniques still yield sufficient diagnostic quality. RESULTS The CTDIvol of scanner E was 41.5% and 36.4% lower than the values of scanners S16 and S64, respectively; the DLP values were 40% and 38.3% lower. All differences were statistically significant (p < 0.0001). Signal-to-noise and contrast-to-noise ratios were best in S64; these differences also reached statistical significance. Image analysis, however, showed "non-inferiority" of scanner E regarding image quality. CONCLUSIONS This first experience with the new scanner shows that new dose reduction techniques allow for up to 40% dose reduction while maintaining image quality at a diagnostically usable level.
Abstract:
A search for direct chargino production in anomaly-mediated supersymmetry breaking scenarios is performed in pp collisions at √s = 7 TeV using 4.7 fb⁻¹ of data collected with the ATLAS experiment at the LHC. In these models, the lightest chargino is predicted to have a lifetime long enough to be detected in the tracking detectors of collider experiments. This analysis explores such models by searching for chargino decays that result in tracks with few associated hits in the outer region of the tracking system. The transverse-momentum spectrum of candidate tracks is found to be consistent with the expectation from Standard Model background processes, and constraints on chargino properties are obtained.
Abstract:
Since 2010, the client base of online-trading service providers has grown significantly. Such companies enable small investors to access the stock market at advantageous rates. Because small investors buy and sell stocks in moderate amounts, they should consider fixed transaction costs, integral transaction units, and dividends when selecting their portfolio. In this paper, we consider the small investor’s problem of investing capital in stocks in a way that maximizes the expected portfolio return and guarantees that the portfolio risk does not exceed a prescribed risk level. Portfolio-optimization models known from the literature are in general designed for institutional investors and do not consider the specific constraints of small investors. We therefore extend four well-known portfolio-optimization models to make them applicable for small investors. We consider one nonlinear model that uses variance as a risk measure and three linear models that use the mean absolute deviation from the portfolio return, the maximum loss, and the conditional value-at-risk as risk measures. We extend all models to consider piecewise-constant transaction costs, integral transaction units, and dividends. In an out-of-sample experiment based on Swiss stock-market data and the cost structure of the online-trading service provider Swissquote, we apply both the basic models and the extended models; the former represent the perspective of an institutional investor, and the latter the perspective of a small investor. The basic models compute portfolios that yield on average a slightly higher return than the portfolios computed with the extended models. However, all generated portfolios yield on average a higher return than the Swiss performance index. There are considerable differences between the four risk measures with respect to the mean realized portfolio return and the standard deviation of the realized portfolio return.
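The small-investor constraints described above (integral transaction units, piecewise-constant transaction costs, dividends) and the mean-absolute-deviation risk measure can be illustrated with a minimal pure-Python sketch. This is not the paper's exact formulation or Swissquote's actual fee schedule; all tiers, prices, dividends, and scenarios below are hypothetical:

```python
# Illustrative sketch: evaluating a small investor's integer portfolio under
# scenario returns, with a hypothetical piecewise-constant fee schedule and
# the mean absolute deviation (MAD) as risk measure.

def transaction_cost(order_value):
    """Piecewise-constant fee per stock position (hypothetical tiers)."""
    if order_value == 0:
        return 0.0
    if order_value <= 500:
        return 9.0
    if order_value <= 2000:
        return 20.0
    return 50.0

def evaluate_portfolio(units, prices, scenario_returns, dividends, capital):
    """units: integral share counts per stock;
    scenario_returns: one return vector (per stock) per scenario."""
    invested = [u * p for u, p in zip(units, prices)]
    costs = sum(transaction_cost(v) for v in invested)
    assert sum(invested) + costs <= capital, "budget exceeded"
    # Per-scenario portfolio profit: price returns plus dividends, net of fees.
    scen = [sum(v * r for v, r in zip(invested, rs)) + sum(dividends) - costs
            for rs in scenario_returns]
    mean_ret = sum(scen) / len(scen)
    mad = sum(abs(s - mean_ret) for s in scen) / len(scen)  # MAD risk
    return mean_ret, mad

mean_ret, mad = evaluate_portfolio(
    units=[10, 5], prices=[50.0, 100.0],
    scenario_returns=[[0.02, 0.01], [-0.01, 0.03], [0.00, -0.02]],
    dividends=[5.0, 8.0], capital=1200.0)
```

An optimizer would search over the integral `units` vector to maximize `mean_ret` subject to `mad` not exceeding the prescribed risk level; the extended models in the paper do this within linear or nonlinear programs.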
Abstract:
INTRODUCTION The aim of this study was to determine the reproducibility and accuracy of linear measurements on 2 types of dental models derived from cone-beam computed tomography (CBCT) scans: CBCT images, and Anatomodels (InVivoDental, San Jose, Calif); these were compared with digital models generated from dental impressions (Digimodels; Orthoproof, Nieuwegein, The Netherlands). The Digimodels were used as the reference standard. METHODS The 3 types of digital models were made from 10 subjects. Four examiners repeated 37 linear tooth and arch measurements 10 times. Paired t tests and the intraclass correlation coefficient were performed to determine the reproducibility and accuracy of the measurements. RESULTS The CBCT images showed significantly smaller intraclass correlation coefficient values and larger duplicate measurement errors compared with the corresponding values for Digimodels and Anatomodels. The average difference between measurements on CBCT images and Digimodels ranged from -0.4 to 1.65 mm, with limits of agreement values up to 1.3 mm for crown-width measurements. The average difference between Anatomodels and Digimodels ranged from -0.42 to 0.84 mm with limits of agreement values up to 1.65 mm. CONCLUSIONS Statistically significant differences between measurements on Digimodels and Anatomodels, and between Digimodels and CBCT images, were found. Although the mean differences might be clinically acceptable, the random errors were relatively large compared with corresponding measurements reported in the literature for both Anatomodels and CBCT images, and might be clinically important. Therefore, with the CBCT settings used in this study, measurements made directly on CBCT images and Anatomodels are not as accurate as measurements on Digimodels.
Abstract:
The counterfactual decomposition technique popularized by Blinder (1973, Journal of Human Resources, 436–455) and Oaxaca (1973, International Economic Review, 693–709) is widely used to study mean outcome differences between groups. For example, the technique is often used to analyze wage gaps by sex or race. This article summarizes the technique and addresses several complications, such as the identification of effects of categorical predictors in the detailed decomposition or the estimation of standard errors. A new command called oaxaca is introduced, and examples illustrating its usage are given.
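The twofold decomposition can be sketched for a single predictor in a few lines; the sketch below is an illustrative pure-Python version using group B's coefficients as the reference (the `oaxaca` command supports multiple predictors, several weighting schemes, and standard errors), with made-up data:

```python
# Blinder-Oaxaca twofold decomposition, single predictor, group B as reference.
# gap = explained (endowments) + unexplained (coefficients). Data are made up.

def ols(x, y):
    """One-predictor OLS: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def oaxaca_twofold(xa, ya, xb, yb):
    aa, ba = ols(xa, ya)
    ab, bb = ols(xb, yb)
    mxa, mxb = sum(xa) / len(xa), sum(xb) / len(xb)
    explained = bb * (mxa - mxb)               # differences in characteristics
    unexplained = (aa - ab) + mxa * (ba - bb)  # differences in coefficients
    return explained, unexplained

# Hypothetical data: x = years of schooling, y = log wage, groups A and B.
xa, ya = [10, 12, 14, 16], [2.3, 2.6, 2.9, 3.2]
xb, yb = [9, 11, 13, 15], [2.0, 2.3, 2.6, 2.9]
explained, unexplained = oaxaca_twofold(xa, ya, xb, yb)
gap = sum(ya) / len(ya) - sum(yb) / len(yb)
```

By construction the two components sum exactly to the mean outcome gap, which is the identity the decomposition is built on.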
Abstract:
When a firearm projectile hits a biological target, a spray of biological material (e.g., blood and tissue fragments) can be propelled from the entrance wound back towards the firearm. This phenomenon has become known as "backspatter" and, in the case of contact shots or shots from short distances, traces of backspatter may reach, consolidate on, and be recovered from the inside surfaces of the firearm. Thus, a comprehensive investigation of firearm-related crimes must comprise not only wound ballistic assessment but also backspatter analysis, and may even take into account potential correlations between these phenomena. The aim of the present study was to evaluate and expand the applicability of the "triple contrast" method by probing its compatibility with forensic analysis of nuclear and mitochondrial DNA and the simultaneous investigation of co-extracted mRNA and miRNA from backspatter collected from internal components of different types of firearms after experimental shootings. We demonstrate that "triple contrast" stained biological samples collected from the inside surfaces of firearms are amenable to forensic co-analysis of DNA and RNA and permit sequence analysis of the entire mtDNA displacement loop, even for "low template" DNA amounts that preclude standard short tandem repeat DNA analysis. Our findings underscore the "triple contrast" method's usefulness as a research tool in experimental forensic ballistics.
Abstract:
Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and thus are conceptually unable to predict fundamental features of iEEG time-series, e.g., in the context of therapeutic brain stimulation. In this paper we present first steps towards overcoming the limitations of functional network analysis, by showing that its results are implied by a simple predictive model of time-sliced iEEG time-series. More specifically, we learn distinct graphical models (so-called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks, based on thresholding of the absolute value of the Pearson correlation coefficient (CC) matrix. Using various measures, the networks thus obtained are then compared to those derived in the classical way from the empirical CC matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as previously reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation, by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL approach for modeling also the temporal features of iEEG signals.
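The core of Chow–Liu learning is a maximum spanning tree over pairwise dependencies between channels. Chow–Liu trees maximize total pairwise mutual information; for jointly Gaussian signals mutual information is a monotone function of |r|, so ranking edges by the absolute Pearson CC yields the same tree. A minimal sketch (with made-up toy "channels", not iEEG data) under that Gaussian assumption:

```python
# Chow-Liu-style tree learning: maximum spanning tree (Kruskal) over |Pearson r|
# between channels. Toy data; real use would apply this to iEEG time slices.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def chow_liu_tree(signals):
    """signals: list of equally long channel time-series. Returns tree edges."""
    n = len(signals)
    edges = sorted(((abs(pearson(signals[i], signals[j])), i, j)
                    for i in range(n) for j in range(i + 1, n)), reverse=True)
    parent = list(range(n))
    def find(i):                         # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    tree = []
    for w, i, j in edges:                # greedily keep the strongest edges
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Channels 0 and 1 are perfectly dependent; channel 2 is only loosely related.
tree = chow_liu_tree([[1, 2, 3, 4, 5], [2, 4, 6, 8, 10], [1, 3, 2, 5, 4]])
```

The resulting tree keeps n−1 of the n(n−1)/2 pairwise correlations, which is why the CL representation of the CC matrix is sparse.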
Abstract:
High-resolution, ground-based and independent observations including co-located wind radiometer, lidar stations, and infrasound instruments are used to evaluate the accuracy of general circulation models and data-constrained assimilation systems in the middle atmosphere at northern hemisphere midlatitudes. Systematic comparisons between the observations, the European Centre for Medium-Range Weather Forecasts (ECMWF) operational analyses including the recent Integrated Forecast System cycles 38r1 and 38r2, NASA's Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalyses, and the free-running Max Planck Institute Earth System Model, low resolution (MPI-ESM-LR) climate model are carried out in both the temporal and spectral domains. We find that ECMWF and MERRA are broadly consistent with lidar and wind radiometer measurements up to ~40 km. For both temperature and horizontal wind components, deviations increase with altitude as the assimilated observations become sparser. Between 40 and 60 km altitude, the standard deviation of the mean difference exceeds 5 K for the temperature and 20 m/s for the zonal wind. The largest deviations are observed in winter, when variability from large-scale planetary waves dominates. Between lidar data and MPI-ESM-LR, there is an overall agreement in spectral amplitude down to 15–20 days. At shorter time scales, the model underestimates the variability by ~10 dB. Infrasound observations indicate generally good agreement with ECMWF wind and temperature products. As such, this study demonstrates the potential of the infrastructure of the Atmospheric Dynamics Research Infrastructure in Europe project, which integrates various measurements and provides a quantitative understanding of stratosphere-troposphere dynamical coupling for numerical weather prediction applications.
Abstract:
BACKGROUND Pelvic floor muscle training is effective and recommended as first-line therapy for female patients with stress urinary incontinence. However, standard pelvic floor physiotherapy concentrates on voluntary contractions, even though the situations that provoke stress urinary incontinence (for example, sneezing, coughing, running) require fast, involuntary reflexive pelvic floor muscle contractions. Training procedures for involuntary reflexive muscle contractions are widely implemented in rehabilitation and sports, but not yet in pelvic floor rehabilitation. The research group therefore developed a training protocol that includes standard physiotherapy and additionally focuses on involuntary reflexive pelvic floor muscle contractions. METHODS/DESIGN The aim of the planned study is to compare this newly developed physiotherapy program (experimental group) with the standard physiotherapy program (control group) regarding their effect on stress urinary incontinence. The working hypothesis is that the experimental group, focusing on involuntary reflexive muscle contractions, will show greater improvement in continence as measured by the International Consultation on Incontinence Modular Questionnaire Urinary Incontinence (short form) and, regarding secondary and tertiary outcomes, higher pelvic floor muscle activity during activities that provoke stress urinary incontinence, better pad-test results, higher quality-of-life scores (International Consultation on Incontinence Modular Questionnaire), and higher intravaginal muscle strength (digitally tested) from before to after the intervention phase. This study is designed as a prospective, triple-blinded (participant, investigator, outcome assessor), randomized controlled trial with two physiotherapy intervention groups and a 6-month follow-up, including 48 women with stress urinary incontinence per group.
For both groups the intervention will last 16 weeks and will include 9 personal physiotherapy consultations and 78 short home training sessions (weeks 1-5: 3x/week, 3x/day; weeks 6-16: 3x/week, 1x/day). Thereafter both groups will continue with home training sessions (3x/week, 1x/day) until the 6-month follow-up. To compare the primary outcome, the International Consultation on Incontinence Modular Questionnaire (short form), between and within the two groups at ten time points (before the intervention, at physiotherapy sessions 2-9, and after the intervention), ANOVA models for longitudinal data will be applied. DISCUSSION This study closes a gap, as involuntary reflexive pelvic floor muscle training has not yet been included in stress urinary incontinence physiotherapy; if shown to be successful, it could be implemented in clinical practice immediately. TRIAL REGISTRATION NCT02318251; registered 4 December 2014. First patient randomized: 11 March 2015.
Abstract:
We present a novel surrogate model-based global optimization framework allowing a large number of function evaluations. The method, called SpLEGO, is based on a multi-scale expected improvement (EI) framework relying on both sparse and local Gaussian process (GP) models. First, a bi-objective approach relying on a global sparse GP model is used to determine potential next sampling regions. Local GP models are then constructed within each selected region. The method subsequently employs the standard expected improvement criterion to deal with the exploration-exploitation trade-off within the selected local models, leading to a decision on where to perform the next function evaluation(s). The potential of our approach is demonstrated using the so-called Sparse Pseudo-input GP as the global model. The algorithm is tested on four benchmark problems, whose numbers of starting points range from 10² to 10⁴. Our results show that SpLEGO is effective and capable of solving problems with a large number of starting points, and it even provides significant advantages when compared with state-of-the-art EI algorithms.
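The standard expected improvement criterion mentioned above can be written in closed form from a GP's predictive mean μ and standard deviation σ at a candidate point. A minimal sketch (assuming minimization; the candidate values below are hypothetical, not from the paper's benchmarks):

```python
# Expected improvement for minimization: EI = (f_min - mu) * Phi(z) + sigma * phi(z),
# with z = (f_min - mu) / sigma. Low mu favours exploitation, high sigma exploration.
import math

def normal_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_improvement(mu, sigma, f_min):
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)  # no predictive uncertainty left
    z = (f_min - mu) / sigma
    return (f_min - mu) * normal_cdf(z) + sigma * normal_pdf(z)

# Among hypothetical (mu, sigma) candidates with best observed value f_min = 1.0,
# EI selects the point where improvement is most promising.
candidates = [(0.9, 0.1), (1.2, 0.8), (1.0, 0.3)]
best = max(candidates, key=lambda c: expected_improvement(*c, f_min=1.0))
```

Note that the highest-EI candidate here has a mean worse than `f_min` but large uncertainty, which is exactly the exploration behaviour the multi-scale framework relies on within its local GP models.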