79 results for "Estimator standard error and efficiency"

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

The aim of this study was to compare the diagnostic efficiency of plain film and spiral CT examinations with 3D reconstructions of 42 tibial plateau fractures, and to assess the accuracy of the two techniques in pre-operative surgical planning in 22 cases. Forty-two tibial plateau fractures were examined with plain film (anteroposterior, lateral, two obliques) and with spiral CT with surface-shaded-display 3D reconstructions. The Swiss AO-ASIF classification system of bone fractures (Müller) was used. In 22 cases the surgical plans and the sequence of reconstruction of the fragments were prospectively determined with both techniques, successively, and then correlated with the surgical reports and post-operative plain film. The fractures were underestimated with plain film in 18 of 42 cases (43%). Owing to the precise pre-operative information provided by the spiral CT 3D reconstructions, the surgical plans based on plain film were modified and adjusted in 13 of 22 cases (59%). Spiral CT 3D reconstructions give a better and more accurate demonstration of tibial plateau fractures and allow more precise pre-operative surgical planning.

Relevance: 100.00%

Abstract:

Summary points:
- The bias introduced by random measurement error will be different depending on whether the error is in an exposure variable (risk factor) or an outcome variable (disease).
- Random measurement error in an exposure variable will bias the estimates of regression slope coefficients towards the null.
- Random measurement error in an outcome variable will instead increase the standard error of the estimates and widen the corresponding confidence intervals, making results less likely to be statistically significant.
- Increasing sample size will help minimise the impact of measurement error in an outcome variable, but will only make estimates more precisely wrong when the error is in an exposure variable.
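
A small simulation makes both points concrete. The sketch below is hypothetical (a simple linear model in numpy, not taken from the article): classical random error added to the exposure attenuates the slope towards the null, while the same error added to the outcome leaves the slope roughly unbiased but inflates its standard error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 5_000, 0.5

def ols_slope_and_se(x, y):
    """OLS slope of y on x and its standard error."""
    x_c = x - x.mean()
    slope = (x_c * y).sum() / (x_c ** 2).sum()
    resid = y - y.mean() - slope * x_c
    se = np.sqrt(resid.var(ddof=2) / (x_c ** 2).sum())
    return slope, se

x = rng.normal(0, 1, n)                 # true exposure
y = beta * x + rng.normal(0, 1, n)      # true outcome
noise = rng.normal(0, 1, n)             # random measurement error (same SD as the exposure)

print(ols_slope_and_se(x, y))           # no error: slope close to 0.50
print(ols_slope_and_se(x + noise, y))   # error in the exposure: slope attenuated (about half here)
print(ols_slope_and_se(x, y + noise))   # error in the outcome: slope still ~0.50, but larger SE
```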

Relevance: 100.00%

Abstract:

MOTIVATION: Microarray results accumulated in public repositories are widely reused in meta-analytical studies and secondary databases. The quality of the data obtained with this technology varies from experiment to experiment, and an efficient method for quality assessment is necessary to ensure their reliability. RESULTS: The lack of a good benchmark has hampered the evaluation of existing methods for quality control. In this study, we propose a new independent quality metric that is based on evolutionary conservation of expression profiles. We show, using 11 large organ-specific datasets, that IQRray, a new quality metric developed by us, exhibits the highest correlation with this reference metric among the 14 metrics tested. IQRray outperforms other methods in the identification of poor-quality arrays in datasets composed of arrays from many independent experiments. In contrast, the performance of methods designed for detecting outliers within a single experiment, such as Normalized Unscaled Standard Error and Relative Log Expression, was low, because these methods cannot detect datasets containing only low-quality arrays and because their scores cannot be directly compared between experiments. AVAILABILITY AND IMPLEMENTATION: The R implementation of IQRray is available at ftp://lausanne.isb-sib.ch/pub/databases/Bgee/general/IQRray.R. CONTACT: Marta.Rosikiewicz@unil.ch. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
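
To illustrate why a within-experiment score cannot flag a uniformly bad experiment, here is a minimal numpy sketch of Relative Log Expression (RLE) as it is commonly computed; each array is compared only to the median of the arrays in its own dataset, so if every array in the dataset is degraded, the RLE distributions can still look unremarkable. The sketch is ours for illustration and is not part of IQRray.

```python
import numpy as np

def rle(log_expr):
    """Relative Log Expression: log expression of each probe set in each array,
    relative to the median of that probe set across all arrays of the dataset.

    log_expr: 2-D array, shape (n_probesets, n_arrays), already on a log scale.
    Per-array distributions of the returned values are the usual RLE quality plot
    (centred near 0 with narrow spread for a good array).
    """
    per_gene_median = np.median(log_expr, axis=1, keepdims=True)
    return log_expr - per_gene_median

# Toy dataset: 1000 probe sets, 6 arrays; array 5 carries extra technical noise.
rng = np.random.default_rng(1)
expr = rng.normal(8.0, 1.0, size=(1000, 6))
expr[:, 5] += rng.normal(0.0, 1.5, size=1000)

scores = rle(expr)
print(np.abs(scores).mean(axis=0))  # the noisy array stands out, but only relative to its own dataset
```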

Relevance: 100.00%

Abstract:

BACKGROUND: Advances in nebulizer design have produced both ultrasonic nebulizers and devices based on a vibrating mesh (vibrating mesh nebulizers), which are expected to enhance the efficiency of aerosol drug therapy. The aim of this study was to compare 4 different nebulizers, of 3 different types, in an in vitro model using albuterol delivery and physical characteristics as benchmarks. METHODS: The following nebulizers were tested: the Sidestream Disposable jet nebulizer, the Multisonic Infra Control ultrasonic nebulizer, and the Aerogen Pro and Aerogen Solo vibrating mesh nebulizers. Aerosol duration, temperature, and drug solution osmolality were measured during nebulization. Albuterol delivery was measured by a high-performance liquid chromatography system with fluorometric detection. The droplet size distribution was analyzed with a laser granulometer. RESULTS: The ultrasonic nebulizer was the fastest device based on the duration of nebulization; the jet nebulizer was the slowest. Solution temperature decreased during nebulization when the jet nebulizer and vibrating mesh nebulizers were used, but increased with the ultrasonic nebulizer. Osmolality was stable during nebulization with the vibrating mesh nebulizers, but increased with the jet nebulizer and the ultrasonic nebulizer, indicating solvent evaporation. Albuterol delivery was 1.6 and 2.3 times higher with the ultrasonic nebulizer and the vibrating mesh nebulizers, respectively, than with the jet nebulizer. Particle size was significantly larger with the ultrasonic nebulizer. CONCLUSIONS: The in vitro model was effective for comparing nebulizer types and demonstrated important differences between them. The new devices, both the ultrasonic nebulizer and the vibrating mesh nebulizers, delivered more aerosolized drug than the traditional jet nebulizer.

Relevance: 100.00%

Abstract:

Many complex systems may be described not by one but by a number of complex networks mapped onto each other in a multi-layer structure. Because of the interactions and dependencies between these layers, the state of a single layer does not necessarily reflect well the state of the entire system. In this paper we study the robustness of five examples of two-layer complex systems: three real-life data sets in the fields of communication (the Internet), transportation (the European railway system), and biology (the human brain), and two models based on random graphs. In order to cover the whole range of features specific to these systems, we focus on two extreme policies of the system's response to failures, no rerouting and full rerouting. Our main finding is that multi-layer systems are much more vulnerable to errors and intentional attacks than they appear from a single-layer perspective.
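
The following is only a toy sketch, not the paper's data sets, failure model, or rerouting policies: two random-graph layers share a node set, random node failures are applied, and the size of the largest connected component seen from one layer is compared with a crude "still in the giant component of both layers" criterion. It illustrates why a single-layer view can overstate robustness.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(2)
n = 500
layer_a = nx.erdos_renyi_graph(n, 0.006, seed=2)   # e.g. a "physical" layer
layer_b = nx.erdos_renyi_graph(n, 0.006, seed=3)   # e.g. a "logical" layer on the same node set

def giant(graph, alive):
    """Largest connected component of graph restricted to the surviving nodes."""
    comps = list(nx.connected_components(graph.subgraph(alive)))
    return max(comps, key=len) if comps else set()

for f in (0.0, 0.2, 0.4):
    alive = [v for v in range(n) if rng.random() >= f]   # random node failures
    comp_a = giant(layer_a, alive)
    comp_b = giant(layer_b, alive)
    single_layer = len(comp_a) / n                       # how healthy layer A looks on its own
    both_layers = len(comp_a & comp_b) / n               # nodes in the giant component of *both* layers
    print(f"failed={f:.0%}  giant component, layer A: {single_layer:.2f}  in both layers: {both_layers:.2f}")
```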

Relevance: 100.00%

Abstract:

OBJECTIVE: To compare image quality of a standard-dose (SD) and a low-dose (LD) cervical spine CT protocol using filtered back-projection (FBP) and iterative reconstruction (IR). MATERIALS AND METHODS: Forty patients investigated by cervical spine CT were prospectively randomised into two groups: SD (120 kVp, 275 mAs) and LD (120 kVp, 150 mAs), both applying automatic tube current modulation. Data were reconstructed using both FBP and sinogram-affirmed IR. Image noise, signal-to-noise (SNR) and contrast-to-noise (CNR) ratios were measured. Two radiologists independently and blindly assessed the following anatomical structures at C3-C4 and C6-C7 levels, using a four-point scale: intervertebral disc, content of neural foramina and dural sac, ligaments, soft tissues and vertebrae. They subsequently rated overall image quality using a ten-point scale. RESULTS: For both protocols and at each disc level, IR significantly decreased image noise and increased SNR and CNR, compared with FBP. SNR and CNR were statistically equivalent in LD-IR and SD-FBP protocols. Regardless of the dose and disc level, the qualitative scores with IR compared with FBP, and with LD-IR compared with SD-FBP, were significantly higher or not statistically different for intervertebral discs, neural foramina and ligaments, while significantly lower or not statistically different for soft tissues and vertebrae. The overall image quality scores were significantly higher with IR compared with FBP, and with LD-IR compared with SD-FBP. CONCLUSION: LD-IR cervical spine CT provides better image quality for intervertebral discs, neural foramina and ligaments, and worse image quality for soft tissues and vertebrae, compared with SD-FBP, while reducing radiation dose by approximately 40 %.
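
For reference, SNR and CNR are typically computed from region-of-interest (ROI) measurements. The minimal sketch below uses the common definitions (mean signal over noise SD, and difference of mean signals over noise SD) with made-up Hounsfield-unit values; the exact ROI placement and formulas used in the study may differ.

```python
import numpy as np

def snr(roi, noise_sd):
    """Signal-to-noise ratio: mean ROI attenuation divided by image noise (SD in a homogeneous ROI)."""
    return np.mean(roi) / noise_sd

def cnr(roi_a, roi_b, noise_sd):
    """Contrast-to-noise ratio: difference of mean attenuations divided by image noise."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / noise_sd

# Hypothetical ROI values in Hounsfield units (not from the study)
muscle = np.array([55.0, 58.0, 52.0, 60.0])
fat = np.array([-95.0, -100.0, -92.0, -98.0])
noise_sd = 9.0   # SD measured in a homogeneous region

print(f"SNR(muscle) = {snr(muscle, noise_sd):.1f}, CNR(muscle vs fat) = {cnr(muscle, fat, noise_sd):.1f}")
```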

Relevance: 100.00%

Abstract:

A comparison of the administration of justice among cantons shows, on the one hand, large differences in the three major types of sentencing, in the use of pre-trial detention and in the use of the unsuspended prison sanction; when these measures are combined, however, only very weak relationships are found, whether absolute, percentage or weighted results are considered. On the other hand, the outcome of these different policies is paradoxical, since there are no differences in recidivism rates among cantons, despite the strong differences in the use of pre-trial detention and of prison sanctions. This paradoxical outcome of crime policies in terms of recidivism - i.e. the absence of differences in outcome across sanctions in the domain of less severe delinquency - suggests the need for more empirically informed crime policies. The role of justice administrators could be to participate in the dissemination of these findings, as well as of best practices among cantons with regard to outcomes and the use of resources - especially with regard to the prison sanction, which is the most costly and the least efficient of all sanctions. Furthermore, observance of the principle of equality before the law would most likely be promoted.

Relevance: 100.00%

Abstract:

Restriction site-associated DNA sequencing (RADseq) provides researchers with the ability to record genetic polymorphism across thousands of loci for nonmodel organisms, potentially revolutionizing the field of molecular ecology. However, as with other genotyping methods, RADseq is prone to a number of sources of error that may have consequential effects for population genetic inferences, and these have received only limited attention in terms of the estimation and reporting of genotyping error rates. Here we use individual sample replicates, under the expectation of identical genotypes, to quantify genotyping error in the absence of a reference genome. We then use sample replicates to (i) optimize de novo assembly parameters within the program Stacks, by minimizing error and maximizing the retrieval of informative loci; and (ii) quantify error rates for loci, alleles and single-nucleotide polymorphisms. As an empirical example, we use a double-digest RAD data set of a nonmodel plant species, Berberis alpina, collected from high-altitude mountains in Mexico.
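
The core of the replicate-based error estimate is a mismatch rate between the genotype calls of replicate pairs; allele- and SNP-level rates follow the same logic at a finer resolution. Below is a minimal sketch with a hypothetical call format; it is not the Stacks pipeline or the authors' scripts.

```python
from typing import Dict, Tuple

# Genotype calls per sample: locus ID -> genotype string (e.g. "A/G"); absent key = missing data.
Calls = Dict[str, str]

def replicate_error_rate(rep1: Calls, rep2: Calls) -> Tuple[float, int]:
    """Locus-level genotyping error rate from one replicate pair: the fraction of loci
    genotyped in both replicates whose calls disagree (replicates should be identical)."""
    shared = [loc for loc in rep1 if loc in rep2]
    if not shared:
        return 0.0, 0
    mismatches = sum(rep1[loc] != rep2[loc] for loc in shared)
    return mismatches / len(shared), len(shared)

# Toy replicate pair (hypothetical data)
r1 = {"locus_1": "A/A", "locus_2": "A/G", "locus_3": "C/C", "locus_4": "T/T"}
r2 = {"locus_1": "A/A", "locus_2": "A/A", "locus_3": "C/C"}   # locus_2 disagrees, locus_4 missing

rate, n_shared = replicate_error_rate(r1, r2)
print(f"locus genotyping error: {rate:.2%} over {n_shared} shared loci")
```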

Relevance: 100.00%

Abstract:

BACKGROUND: Pathogen reduction of platelets (PRT-PLTs) using riboflavin and ultraviolet light treatment has undergone Phase 1 and 2 studies examining efficacy and safety. This randomized controlled clinical trial (RCT) assessed the efficacy and safety of PRT-PLTs using the 1-hour corrected count increment (CCI(1 hour)) as the primary outcome. STUDY DESIGN AND METHODS: A noninferiority RCT was performed in which patients with chemotherapy-induced thrombocytopenia (six centers) were randomly allocated to receive PRT-PLTs (Mirasol PRT, CaridianBCT Biotechnologies) or reference platelet (PLT) products. The treatment period was 28 days, followed by a 28-day follow-up (safety) period. The primary outcome was the CCI(1 hour) determined using up to the first eight on-protocol PLT transfusions given during the treatment period. RESULTS: A total of 118 patients were randomly assigned (60 to PRT-PLTs; 58 to reference). Four patients per group did not require PLT transfusions, leaving 110 patients in the analysis (56 PRT-PLTs; 54 reference). A total of 541 on-protocol PLT transfusions were given (303 PRT-PLTs; 238 reference). The least square mean CCI was 11,725 (standard error [SE], 1.140) for PRT-PLTs and 16,939 (SE, 1.149) for the reference group (difference, -5214; 95% confidence interval, -7542 to -2887; p < 0.0001 for a test of the null hypothesis of no difference between the two groups). CONCLUSION: The study failed to show noninferiority of PRT-PLTs based on the predefined CCI criteria. PLT and red blood cell utilization in the two groups was not significantly different, suggesting that the slightly lower CCIs (PRT-PLTs) did not increase blood product utilization. Safety data showed similar findings in the two groups. Further studies are required to determine whether the lower CCI observed with PRT-PLTs translates into an increased risk of bleeding.
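
For context, the corrected count increment is conventionally defined as the platelet count increment scaled by body surface area and divided by the platelet dose transfused (in units of 10^11 platelets). The sketch below simply applies that standard formula to made-up numbers; it is not code or data from the trial.

```python
def corrected_count_increment(pre_count, post_count, bsa_m2, platelets_transfused_e11):
    """Corrected count increment (CCI), conventional definition.

    pre_count, post_count: platelet counts in platelets per microlitre
    bsa_m2: body surface area in square metres
    platelets_transfused_e11: platelet dose in units of 10^11 platelets
    """
    return (post_count - pre_count) * bsa_m2 / platelets_transfused_e11

# Hypothetical 1-hour post-transfusion example
print(corrected_count_increment(pre_count=12_000, post_count=40_000,
                                bsa_m2=1.8, platelets_transfused_e11=3.5))
# -> 14400.0, in the general range of the least square means reported above
```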

Relevance: 100.00%

Abstract:

Zero correlation between measurement error and model error has been assumed in existing panel data models dealing specifically with measurement error. We extend this literature and propose a simple model in which one regressor is mismeasured, allowing the measurement error to correlate with the model error. Zero correlation between measurement error and model error is a special case of our model in which the correlated measurement error equals zero. We ask two research questions. First, can the correlated measurement error be identified in the context of panel data? Second, do classical instrumental variables in panel data need to be adjusted when the correlation between measurement error and model error cannot be ignored? Under some regularity conditions the answer to both questions is yes. We then propose a two-step estimation procedure corresponding to the two questions. The first step estimates the correlated measurement error from a reverse regression, and the second step estimates the usual coefficients of interest using adjusted instruments.
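
The classical special case that this paper generalizes (measurement error uncorrelated with the model error) is easy to see in a simulation: OLS on the mismeasured regressor is attenuated, while an instrument correlated with the true regressor but independent of both errors recovers the slope. The sketch below illustrates only that special case in a cross-section; it is not the authors' two-step panel estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
n, beta = 100_000, 1.0

x = rng.normal(0, 1, n)              # true regressor
z = x + rng.normal(0, 1, n)          # instrument: correlated with x, independent of both errors
u = rng.normal(0, 1, n)              # model error
e = rng.normal(0, 1, n)              # classical measurement error (uncorrelated with u)

x_obs = x + e                        # mismeasured regressor
y = beta * x + u

ols = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)    # attenuated: ~ beta * Var(x) / (Var(x) + Var(e)) = 0.5
iv = np.cov(z, y)[0, 1] / np.cov(z, x_obs)[0, 1]         # ~ beta = 1.0
print(f"OLS: {ols:.2f}   IV: {iv:.2f}")
```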

Relevance: 100.00%

Abstract:

Analyzing the relationship between the baseline value and subsequent change of a continuous variable is a frequent matter of inquiry in cohort studies. These analyses are surprisingly complex, particularly if only two waves of data are available. It is unclear to non-biostatisticians where the complexity of this analysis lies and which statistical method is adequate. With the help of simulated longitudinal data on body mass index in children, we review statistical methods for analyzing the association between the baseline value and subsequent change, assuming linear growth with time. Key issues in such analyses are mathematical coupling, measurement error, variability of change between individuals, and regression to the mean. Ideally, it is better to rely on multiple repeated measurements at different times, and a linear random-effects model is a standard approach if more than two waves of data are available. If only two waves of data are available, our simulations show that Blomqvist's method - which consists of adjusting the estimated regression coefficient of observed change on baseline value for the measurement error variance - provides accurate estimates. The adequacy of a method for assessing the relationship between the baseline value and subsequent change depends on the number of data waves, the availability of information on measurement error, and the variability of change between individuals.
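
As we understand the adjustment, under classical measurement error with a known (e.g. replicate-based) error variance, the naive slope of observed change on observed baseline, b_obs, is corrected as (b_obs + lambda) / (1 - lambda), where lambda is the ratio of the measurement error variance to the variance of the observed baseline. The sketch below uses hypothetical values and checks the correction against a simulated truth; it is not the simulation study itself.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50_000
beta_true = -0.20                     # true slope of change on true baseline
sigma_me = 0.8                        # measurement error SD, assumed known (e.g. from replicate measurements)

x_true = rng.normal(18.0, 2.0, n)                             # true baseline BMI
change = 1.0 + beta_true * x_true + rng.normal(0, 1.0, n)     # true change over follow-up

x_obs = x_true + rng.normal(0, sigma_me, n)                   # observed baseline
change_obs = change + rng.normal(0, sigma_me, n) - (x_obs - x_true)
# observed change = (true follow-up + error2) - (true baseline + error1)

b_naive = np.cov(x_obs, change_obs)[0, 1] / np.var(x_obs, ddof=1)
lam = sigma_me ** 2 / np.var(x_obs, ddof=1)
b_adjusted = (b_naive + lam) / (1 - lam)

print(f"true {beta_true:.3f}   naive {b_naive:.3f}   adjusted {b_adjusted:.3f}")
```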

Relevance: 100.00%

Abstract:

Summary: Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as the influence function of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient.

The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WMLs based on each of them is studied. The result is that, in a great variety of situations, the WML substantially improves on the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Moreover, the performance of the WML remains rather stable under a change of the MDE, even though the MDEs themselves behave very differently.

Two examples of application of the WML to real data are considered. In both of them, the necessity for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. The procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
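
The following is a strongly simplified illustration of the two-phase idea for a Poisson model: a crude robust initial fit, hard 0/1 weights for observations that are implausible under that fit, then a weighted maximum likelihood step. The thesis uses minimum disparity estimators and an adaptive weighting scheme; this sketch only mimics the overall robust-then-weighted-ML structure.

```python
import numpy as np
from scipy import stats

def robust_poisson_wml(counts, tail=1e-4):
    """Two-phase estimate of a Poisson mean, mimicking the robust-then-weighted-ML structure.

    Phase 1: a crude robust (but inefficient) initial estimate of lambda: the sample median.
    Phase 2: give weight 0 to observations with a very small two-sided tail probability under
             the initial fit and weight 1 otherwise, then compute the weighted ML estimate
             (for the Poisson mean this is the weighted average of the counts).
    """
    counts = np.asarray(counts, dtype=float)
    lam0 = np.median(counts)
    lower = stats.poisson.cdf(counts, lam0)        # P(X <= count) under the initial fit
    upper = stats.poisson.sf(counts - 1, lam0)     # P(X >= count) under the initial fit
    weights = (np.minimum(lower, upper) > tail).astype(float)   # 0/1 weights: drop gross outliers
    return (weights * counts).sum() / weights.sum(), weights

data = np.concatenate([np.random.default_rng(5).poisson(3.0, 200), [40, 55]])  # two gross outliers
lam_hat, w = robust_poisson_wml(data)
print(f"plain MLE (mean): {data.mean():.2f}   weighted MLE: {lam_hat:.2f}   downweighted: {int((w == 0).sum())}")
```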

Relevance: 100.00%

Abstract:

Given the important role of the shoulder sensorimotor system in shoulder stability, its assessment appears of interest. Force-platform monitoring of the centre of pressure (CoP) in upper-limb weight-bearing positions is of interest because it allows integration of all aspects of shoulder sensorimotor control. This study aimed to determine the feasibility and reliability of shoulder sensorimotor control assessment by force platform. Forty-five healthy subjects performed two sessions of CoP measurement using a Win-Posturo® (Medicapteurs) force platform in an upper-limb weight-bearing position, with the lower limbs resting on a table up to either the anterior superior iliac spines (P1) or the upper patellar poles (P2). Four different conditions were tested in each position in random order: eyes open or eyes closed with the trunk supported by both hands, and eyes open with the trunk supported on the dominant or the non-dominant side. P1 reliability values were globally moderate to high for CoP length, CoP velocity and CoP standard deviation (SD), with standard errors of measurement ranging from 6.0% to 26.5%, except for CoP area. P2 reliability values were globally low and not clinically acceptable. Our results suggest that shoulder sensorimotor control assessment by force platform is feasible and has good reliability in upper-limb weight-bearing positions when the lower limbs are resting on a table up to the anterior superior iliac spines. CoP length, CoP velocity and CoP SD appear to be the most reliable variables.
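
For reference, the CoP variables mentioned above are straightforward to compute from the platform's CoP time series. The minimal sketch below uses the usual definitions (path length as the summed point-to-point displacement, mean velocity as path length over duration, and the SD of the CoP position on each axis) with synthetic data; the Win-Posturo software's own computations may differ in detail.

```python
import numpy as np

def cop_metrics(x_mm, y_mm, fs_hz):
    """Basic centre-of-pressure (CoP) variables from a recorded trajectory.

    x_mm, y_mm: CoP coordinates in millimetres, sampled at fs_hz.
    Returns path length (mm), mean velocity (mm/s) and the SD of position (mm) on each axis.
    """
    dx, dy = np.diff(x_mm), np.diff(y_mm)
    path_length = np.sum(np.hypot(dx, dy))
    duration = (len(x_mm) - 1) / fs_hz
    return {
        "length_mm": path_length,
        "velocity_mm_s": path_length / duration,
        "sd_x_mm": np.std(x_mm, ddof=1),
        "sd_y_mm": np.std(y_mm, ddof=1),
    }

# Toy 20-second recording at 40 Hz (synthetic sway, not study data)
rng = np.random.default_rng(6)
t = np.arange(0, 20, 1 / 40)
x = 2.0 * np.sin(0.4 * 2 * np.pi * t) + rng.normal(0, 0.3, t.size)
y = 3.0 * np.sin(0.25 * 2 * np.pi * t) + rng.normal(0, 0.3, t.size)
print(cop_metrics(x, y, fs_hz=40))
```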