86 results for Asymptotic Mean Squared Errors


Relevance:

20.00%

Publisher:

Abstract:

In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons considered here are the so-called equilateral random walks and polygons, in which each segment of the walk or polygon is of unit length. We show that the mean average inter-crossing number (ICN) between two equilateral random walks of the same length n is approximately linear in n, and we determine the prefactor of the linear term, which is a = (3 ln 2)/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average ICN is also linear in n, but the prefactor of the linear term differs from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance p apart and p is small compared to n. We propose a fitting model that captures the theoretical asymptotic behaviour of the mean average ICN for large values of p. Our simulation results show that the model in fact works very well over the entire range of p. We also study the mean average ICN between two equilateral random walks or polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two still approaches infinity as the length of the other random walk (polygon) tends to infinity. The data provided by our simulations match our theoretical predictions very well.
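To make the central object concrete, the following sketch generates an equilateral random walk as described above: each step is a unit vector drawn uniformly from the sphere, so every segment has length 1. This is only an illustration, not the authors' simulation code.

```python
import numpy as np

def equilateral_random_walk(n, seed=None):
    """Generate an n-step equilateral random walk in 3D: each step is a
    unit vector drawn uniformly from the sphere, so every segment has
    unit length. Returns the (n+1, 3) array of vertex positions."""
    rng = np.random.default_rng(seed)
    # Uniform directions on the unit sphere via normalized Gaussians.
    steps = rng.normal(size=(n, 3))
    steps /= np.linalg.norm(steps, axis=1, keepdims=True)
    return np.vstack([np.zeros(3), np.cumsum(steps, axis=0)])

walk = equilateral_random_walk(100, seed=0)
seg_lengths = np.linalg.norm(np.diff(walk, axis=0), axis=1)
```

Closing such a walk into an equilateral random polygon requires a conditioned construction and is deliberately omitted here.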


The use of observer-rated scales requires that raters be trained until they have become reliable in using the scales. However, few studies properly report how training in using a given rating scale is conducted or indeed how it should be conducted. This study examined progress in interrater reliability over 6 months of training with two observer-rated scales, the Cognitive Errors Rating Scale and the Coping Action Patterns Rating Scale. The evolution of the intraclass correlation coefficients was modeled using hierarchical linear modeling. Results showed an overall training effect as well as effects of the basic training phase and of the rater calibration phase, the latter being smaller than the former. The results are discussed in terms of implications for rater training in psychotherapy research.
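The reliability index tracked in this study, the intraclass correlation coefficient, can be computed from a subjects-by-raters score matrix. The sketch below implements the two-way random-effects, single-rater form ICC(2,1); the study does not specify which ICC variant was used, so this is an assumed choice for illustration.

```python
import numpy as np

def icc_2_1(x):
    """Two-way random-effects, single-rater ICC(2,1) from an
    (n_subjects, k_raters) matrix of ratings."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    col_means = x.mean(axis=0)
    ssr = k * ((row_means - grand) ** 2).sum()   # between subjects
    ssc = n * ((col_means - grand) ** 2).sum()   # between raters
    sst = ((x - grand) ** 2).sum()
    sse = sst - ssr - ssc                        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three raters in perfect agreement on four subjects give ICC = 1;
# disagreement in one rater's scores lowers it.
scores = np.tile(np.array([[2.0], [5.0], [3.0], [7.0]]), (1, 3))
scores_noisy = scores.copy()
scores_noisy[:, 2] += np.array([1.0, -1.0, 1.0, -1.0])
```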


Measurement of the three-dimensional (3D) knee joint angle outside a laboratory is of benefit for clinical examination and for comparing therapeutic treatments. Although several motion capture devices exist, there is a need for an ambulatory system that could be used in routine practice. To date, inertial measurement units (IMUs) have proven suitable for unconstrained measurement of the differential orientation across the knee joint. Nevertheless, this differential orientation must be converted into three reliable and clinically interpretable angles. Thus, the aim of this study was to propose a new calibration procedure adapted to the joint coordinate system (JCS) that requires only IMU data. The repeatability of the calibration procedure, as well as the errors in the measurement of the 3D knee angle during gait relative to a reference system, were assessed on eight healthy subjects. The new procedure, relying on active and passive movements, showed high repeatability of the mean values (offset < 1 degree) and angular patterns (SD < 0.3 degrees and CMC > 0.9). Compared with the reference system, this functional procedure showed high precision (SD < 2 degrees and CC > 0.75) and moderate accuracy (between 4.0 and 8.1 degrees) for the three knee angles. The combination of the inertial-based system with the functional calibration procedure proposed here results in a promising tool for the measurement of the 3D knee joint angle. Moreover, this method could be adapted to measure other complex joints, such as the ankle or elbow.
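The conversion step mentioned here, from a differential orientation to three interpretable angles, can be sketched as follows. The snippet assumes each IMU yields a segment rotation matrix and decomposes their relative rotation with a common ZYX Euler sequence; the paper's actual JCS (Grood and Suntay style) decomposition differs, so this is a simplified stand-in.

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def knee_angles(R_thigh, R_shank):
    """Differential (thigh-to-shank) orientation decomposed into three
    angles via a ZYX Euler sequence. Simplified illustration only; the
    clinical JCS decomposition is not identical to this."""
    R = R_thigh.T @ R_shank                     # relative rotation
    flexion = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    abduction = np.degrees(np.arcsin(-R[2, 0]))
    rotation = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    return flexion, abduction, rotation

# Shank flexed 30 degrees about the thigh's x axis:
flex, abd, rot = knee_angles(np.eye(3), rot_x(np.radians(30.0)))
```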


Summary: Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as that of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WMLs based on each of them is studied. The result is that, in a great variety of situations, the WML substantially improves on the initial estimator, both in terms of finite-sample mean squared error and in terms of bias under contamination. Moreover, the performance of the WML remains rather stable under a change of the MDE, even when the MDEs themselves behave very differently. Two applications of the WML to real data are considered. In both of them, the need for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. The procedure is particularly natural in the discrete-distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
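The two-phase idea (robust initial estimate, then weighted maximum likelihood) can be illustrated on the simplest discrete model, a Poisson mean. This toy uses the sample median as the initial robust estimate rather than a minimum disparity estimator, and hard 0/1 weights rather than the adaptive weights of the actual procedure; the cutoff c is an assumed tuning constant.

```python
import numpy as np

def weighted_poisson_mle(x, c=3.0):
    """Toy two-phase robust fit of a Poisson mean. Phase 1: a crude
    robust estimate (the median). Phase 2: observations whose Pearson
    residual exceeds c get weight 0, and the weighted ML estimate of a
    Poisson mean (the weighted average) is returned."""
    x = np.asarray(x, dtype=float)
    lam0 = np.median(x)                          # robust starting value
    resid = np.abs(x - lam0) / np.sqrt(max(lam0, 1e-9))
    w = (resid <= c).astype(float)               # hard rejection weights
    return (w * x).sum() / w.sum()

# Roughly Poisson(2.5) counts plus two gross outliers:
data = np.array([2.0, 3.0, 1.0, 4.0, 2.0, 3.0, 2.0, 50.0, 60.0])
plain_mle = data.mean()                          # pulled up by outliers
robust = weighted_poisson_mle(data)              # close to the bulk
```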


This study examined the validity and reliability of the French version of two observer-rated measures developed to assess cognitive errors (Cognitive Errors Rating System [CERS]) [6] and coping action patterns (Coping Action Patterns Rating System [CAPRS]) [22,24]. The CERS measures 14 cognitive errors, broken down according to their valence (positive or negative; see the definitions by A.T. Beck), and the CAPRS measures 12 coping categories, based on a comprehensive literature review, each broken down into three levels of action (affective, behavioural, cognitive). Thirty (N = 30) subjects recruited from a community sample participated in the study. They were interviewed according to a standardized clinical protocol; the interviews were transcribed and analysed with both observer-rated systems. Results showed that the inter-rater reliability of the two measures is good and that their internal validity is satisfactory, as indicated by a non-significant canonical correlation between CAPRS and CERS. With regard to discriminant validity, we found a non-significant canonical correlation between the CAPRS and the CISS, one of the most widely used self-report questionnaires measuring coping. The same holds for the correlation with a self-report questionnaire measuring symptoms (SCL-90-R). These results confirm the absence of confounds in the assessment of cognitive errors and of coping as assessed by these observer-rated scales and add an argument in favour of the French validation of the CE-CAP rating scales. (C) 2010 Elsevier Masson SAS. All rights reserved.


Abstract: Purpose: To evaluate the outcomes of combined deep sclerectomy and trabeculectomy (penetrating deep sclerectomy) in pediatric glaucoma. Design: Retrospective, non-consecutive, non-comparative, interventional case series. Participants: Children suffering from pediatric glaucoma who underwent surgery between March 1997 and October 2006 were included in this study. Methods: A primary combined deep sclerectomy and trabeculectomy was performed in 35 eyes of 28 patients. Complete examinations were performed before surgery, postoperatively at 1 and 7 days, at 1, 2, 3, 4, 6, 9 and 12 months, and then every 6 months after surgery. Main Outcome Measures: Surgical outcome was assessed in terms of intraocular pressure (IOP) change, additional glaucoma medication, complication rate, need for surgical revision, as well as refractive errors, best-corrected visual acuity (BCVA), and corneal clarity and diameters. Results: The mean age before surgery was 3.6 ± 4.5 years, and the mean follow-up was 3.5 ± 2.9 years. The mean preoperative IOP was 31.9 ± 11.5 mmHg. At the end of follow-up, the mean IOP had decreased by 58.3% (p < 0.005), and of the 14 patients with available BCVA, 8 (57.1%) achieved 0.5 (20/40) or better, 3 (21.4%) 0.2 (20/100), and 2 (14.3%) 0.1 (20/200) in their better eye. The mean refractive error (spherical equivalent) at the final follow-up visit was +0.83 ± 5.4 D. Six patients (43%) were affected by myopia. The complete and qualified success rates, based on a cumulative survival curve, after 9 years were 52.3% and 70.6%, respectively (p < 0.05). Sight-threatening complications were more common (8.6%) in refractory glaucomas. Conclusions: Combined deep sclerectomy and trabeculectomy is a surgical technique developed to control IOP in congenital, secondary and juvenile glaucomas. The intermediate results are satisfactory and promising. Cases operated on before this new technique was introduced had less favourable results. The number of sight-threatening complications is related to the severity of the glaucoma and to the number of previous surgeries.


This paper discusses five strategies for dealing with five types of errors in Qualitative Comparative Analysis (QCA): condition errors, systematic errors, random errors, calibration errors, and deviant-case errors. The strategies are: the comparative inspection of the complex, intermediate, and parsimonious solutions; the use of an adjustment factor; the use of probabilistic criteria; testing the robustness of the calibration parameters; and the use of a frequency threshold for observed combinations of conditions. The strategies are systematically reviewed and evaluated with regard to their applicability, advantages, limitations, and complementarities.
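The last of these strategies, a frequency threshold on observed combinations of conditions, can be sketched as a simple filter over truth-table rows. This is an illustrative helper under assumed crisp-set (0/1) coding, not code from any specific QCA package.

```python
from collections import Counter

def apply_frequency_threshold(rows, threshold):
    """Keep only those combinations of condition values (truth-table
    rows) observed at least `threshold` times, so that rare, possibly
    erroneous configurations do not drive the QCA solution.
    `rows` is a list of tuples of 0/1 condition values."""
    counts = Counter(rows)
    return {combo: n for combo, n in counts.items() if n >= threshold}

# Six cases described by three crisp-set conditions (A, B, C); the
# combination (1, 1, 0) occurs only once and is filtered out.
cases = [(1, 0, 1), (1, 0, 1), (1, 0, 1), (0, 1, 1), (0, 1, 1), (1, 1, 0)]
kept = apply_frequency_threshold(cases, threshold=2)
```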


BACKGROUND: Maintaining therapeutic concentrations of drugs with a narrow therapeutic window is a complex task. Several computer systems have been designed to help doctors determine the optimum drug dosage. Significant improvements in health care could be achieved if computer advice improved health outcomes and could be implemented in routine practice in a cost-effective fashion. This is an updated version of an earlier Cochrane systematic review by Walton et al., published in 2001. OBJECTIVES: To assess whether computerised advice on drug dosage has beneficial effects on the process or outcome of health care. SEARCH STRATEGY: We searched the Cochrane Effective Practice and Organisation of Care Group specialised register (June 1996 to December 2006), MEDLINE (1966 to December 2006) and EMBASE (1980 to December 2006), hand-searched the journals Therapeutic Drug Monitoring (1979 to March 2007) and the Journal of the American Medical Informatics Association (1996 to March 2007), and checked the reference lists of primary articles. SELECTION CRITERIA: Randomized controlled trials, controlled trials, controlled before-and-after studies and interrupted-time-series analyses of computerized advice on drug dosage were included. The participants were health professionals responsible for patient care. The outcomes were: any objectively measured change in the behaviour of the health care provider (such as changes in the dose of drug used); any change in the health of patients resulting from computerized advice (such as adverse reactions to drugs). DATA COLLECTION AND ANALYSIS: Two reviewers independently extracted data and assessed study quality. MAIN RESULTS: Twenty-six comparisons (23 articles) were included (compared with fifteen comparisons in the original review), covering a wide range of drugs in inpatient and outpatient settings. Interventions usually targeted doctors, although some studies attempted to influence prescriptions by pharmacists and nurses.
Although all studies used reliable outcome measures, their quality was generally low. Computerized advice on drug dosage gave significant benefits by:

1. increasing the initial dose (standardised mean difference 1.12, 95% CI 0.33 to 1.92);
2. increasing serum concentrations (standardised mean difference 1.12, 95% CI 0.43 to 1.82);
3. reducing the time to therapeutic stabilisation (standardised mean difference -0.55, 95% CI -1.03 to -0.08);
4. reducing the risk of toxic drug levels (rate ratio 0.45, 95% CI 0.30 to 0.70);
5. reducing the length of hospital stay (standardised mean difference -0.35, 95% CI -0.52 to -0.17).

AUTHORS' CONCLUSIONS: This review suggests that computerized advice on drug dosage has some benefits: it increased the initial dose of the drug, increased serum drug concentrations and led to more rapid therapeutic control. It also reduced the risk of toxic drug levels and the length of hospital stay. However, it had no effect on adverse reactions. In addition, there was no evidence that particular decision-support technical features (such as integration into a computerized physician order entry system) or aspects of the organization of care (such as the setting) optimise the effect of computerised advice.
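The pooled effects above are standardised mean differences. As a hedged illustration with made-up numbers, the sketch below computes Cohen's d with a normal-approximation 95% CI; review software typically also applies Hedges' small-sample correction, which is omitted here.

```python
import math

def smd_with_ci(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d (standardised mean difference) between two groups,
    with a normal-approximation 95% confidence interval."""
    s_pooled = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    se = math.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))
    return d, (d - 1.96 * se, d + 1.96 * se)

# Hypothetical dosing study: intervention vs control initial dose (mg).
d, (lo, hi) = smd_with_ci(260.0, 40.0, 30, 220.0, 40.0, 30)
```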


A new method of measuring joint angle using a combination of accelerometers and gyroscopes is presented. The method proposes a minimal sensor configuration with one sensor module mounted on each segment. The model is based on estimating the acceleration of the joint center of rotation by placing a pair of virtual sensors on the adjacent segments at the center of rotation. In the proposed technique, joint angles are found without the need for integration, so absolute angles can be obtained that are free from any source of drift. The model considers anatomical aspects and is personalized for each subject prior to each measurement. The method was validated by measuring the knee flexion-extension angles of eight subjects walking at three different speeds and comparing the results with a reference motion measurement system. The results are very close to those of the reference system, with very small errors (rms = 1.3, mean = 0.2, SD = 1.1 deg) and excellent correlation coefficients (0.997). The algorithm provides joint angles in real time and is ready for use in gait analysis. Technically, the system is portable, easily mountable, and can be used for long-term monitoring without hindering natural activities.
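The "virtual sensor" step described here amounts to rigid-body transfer of a measured acceleration to the joint center: a_joint = a_sensor + alpha x r + omega x (omega x r), with r the vector from the physical sensor to the joint center. The sketch below implements just that kinematic step with made-up values; the full angle-estimation pipeline of the paper is not reproduced.

```python
import numpy as np

def joint_center_accel(a_sensor, omega, alpha, r):
    """Transfer a measured acceleration to a virtual sensor at the
    joint center via rigid-body kinematics:
    a_joint = a_sensor + alpha x r + omega x (omega x r)."""
    return (a_sensor + np.cross(alpha, r)
            + np.cross(omega, np.cross(omega, r)))

# Sanity check: a segment rotating at constant rate about a fixed joint
# center. The sensor at position p feels pure centripetal acceleration,
# while the virtual sensor at the (fixed) joint center feels none.
omega = np.array([0.0, 0.0, 2.0])                 # rad/s about z
p = np.array([0.3, 0.0, 0.0])                     # sensor offset, m
a_sensor = np.cross(omega, np.cross(omega, p))    # centripetal term
a_joint = joint_center_accel(a_sensor, omega, np.zeros(3), -p)
```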


OBJECTIVE: Current hypertension guidelines stress the importance of assessing total cardiovascular risk, but do not describe precisely how ambulatory blood pressures should be used in cardiovascular risk stratification. METHOD: We calculated global cardiovascular risk according to the 2003 European Society of Hypertension/European Society of Cardiology guidelines in 127 patients in whom daytime ambulatory blood pressure was recorded and carotid/femoral ultrasonography performed. RESULTS: The presence of ambulatory blood pressures ≥ 135/85 mmHg shifted cardiovascular risk to higher categories, as did the presence of hypercholesterolemia and, even more so, the presence of atherosclerotic plaques. CONCLUSION: Further studies are needed to define the place of ambulatory blood pressures in the assessment of cardiovascular risk.


PURPOSE: To evaluate the outcomes of combined deep sclerectomy and trabeculectomy (penetrating deep sclerectomy) in pediatric glaucoma. DESIGN: Retrospective, nonconsecutive, noncomparative, interventional case series. PARTICIPANTS: Children suffering from pediatric glaucoma who underwent surgery between March 1997 and October 2006 were included in this study. METHODS: A primary combined deep sclerectomy and trabeculectomy was performed in 35 eyes of 28 patients. Complete examinations were performed before surgery, postoperatively at 1 and 7 days, at 1, 2, 3, 4, 6, 9, and 12 months, and then every 6 months after surgery. MAIN OUTCOME MEASURES: Surgical outcome was assessed in terms of intraocular pressure (IOP) change, additional glaucoma medication, complication rate, need for surgical revision, as well as refractive errors, best-corrected visual acuity (BCVA), and corneal clarity and diameters. RESULTS: The mean age before surgery was 3.6 ± 4.5 years, and the mean follow-up was 3.5 ± 2.9 years. The mean preoperative IOP was 31.9 ± 11.5 mmHg. At the end of follow-up, the mean IOP decreased by 58.3% (P < 0.005), and of the 14 patients with available BCVA, 8 (57.1%) achieved 0.5 (20/40) or better, 3 (21.4%) 0.2 (20/100), and 2 (14.3%) 0.1 (20/200) in their better eye. The mean refractive error (spherical equivalent [SE]) at final follow-up visits was +0.83 ± 5.4. Six patients (43%) were affected by myopia. The complete and qualified success rates, based on a cumulative survival curve, after 9 years were 52.3% and 70.6%, respectively (P < 0.05). Sight-threatening complications were more common (8.6%) in refractory glaucomas. CONCLUSIONS: Combined deep sclerectomy and trabeculectomy is an operative technique developed to control IOP in congenital, secondary, and juvenile glaucomas. The intermediate results are satisfactory and promising. Previous classic glaucoma surgeries performed before this new technique had less favorable results. The number of sight-threatening complications is related to the severity of glaucoma and number of previous surgeries. FINANCIAL DISCLOSURE(S): The authors have no proprietary or commercial interest in any materials discussed in this article.