942 results for mean-square error
Abstract:
Synchronization of data coming from different sources is of high importance in biomechanics to ensure reliable analyses. This synchronization can either be performed through hardware to obtain perfect matching of data, or post-processed digitally. Hardware synchronization can be achieved using trigger cables connecting different devices in many situations; however, this is often impractical, and sometimes impossible in outdoor situations. The aim of this paper is to describe a wireless system for outdoor use, allowing synchronization of different types of - potentially embedded and moving - devices. In this system, each synchronization device is composed of: (i) a GPS receiver (used as the time reference), (ii) a radio transmitter, and (iii) a microcontroller. These components are used to provide synchronized trigger signals at the desired frequency to the connected measurement device. The synchronization devices communicate wirelessly, are very lightweight, battery-operated and thus very easy to set up. They are adaptable to any measurement device equipped with either a trigger input or a recording channel. The accuracy of the system was validated using an oscilloscope. The mean synchronization error was 0.39 μs, and pulses are generated with an accuracy of <2 μs. The system provides synchronization accuracy about two orders of magnitude better than commonly used post-processing methods, and does not suffer from any drift in trigger generation.
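As a rough numerical companion to the accuracy figures above, the following Python sketch (purely illustrative, not the authors' firmware; the 1 µs jitter value and the trigger rate are assumptions) shows how the mean synchronization error between two GPS-disciplined trigger trains could be quantified:

import numpy as np

# Minimal sketch: two devices derive trigger pulses from a common GPS time reference,
# each with its own (hypothetical) timing jitter; quantify their mutual offset.
rng = np.random.default_rng(0)
pulse_rate_hz = 100                                   # assumed trigger frequency
duration_s = 10
ideal = np.arange(0, duration_s, 1 / pulse_rate_hz)   # ideal trigger times (s)

jitter_us = 1.0                                       # assumed per-device jitter (us)
dev_a = ideal + rng.normal(0.0, jitter_us * 1e-6, ideal.size)
dev_b = ideal + rng.normal(0.0, jitter_us * 1e-6, ideal.size)

offset = dev_a - dev_b                                # pairwise trigger offset
print(f"mean sync error: {np.mean(np.abs(offset)) * 1e6:.2f} us")
print(f"max sync error:  {np.max(np.abs(offset)) * 1e6:.2f} us")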
Abstract:
The investigation of perceptual and cognitive functions with non-invasive brain imaging methods critically depends on the careful selection of stimuli for use in experiments. For example, it must be verified that any observed effects follow from the parameter of interest (e.g. semantic category) rather than from other low-level physical features (e.g. luminance, or spectral properties). Otherwise, the interpretation of results is confounded. Often, researchers circumvent this issue by including additional control conditions or tasks, both of which are flawed and also prolong experiments. Here, we present new approaches for controlling classes of stimuli intended for use in cognitive neuroscience; these methods can be readily extrapolated to other applications and stimulus modalities. Our approach comprises two levels. The first level equalizes individual stimuli in terms of their mean luminance: each data point in the stimulus is adjusted so that the stimulus matches a standard value defined across the whole stimulus battery. The second level compares two populations of stimuli in terms of their spectral properties (i.e. spatial frequency), using a dissimilarity metric equal to the root mean square of the distance between the two populations as a function of spatial frequency along the x- and y-dimensions of the image. Randomized permutations are then used to minimize, in a completely data-driven manner, the spectral differences between the image sets. While another paper in this issue applies these methods to acoustic stimuli (Aeschlimann et al., Brain Topogr 2008), we illustrate the approach here in detail for complex visual stimuli.
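A minimal Python sketch of the two-level idea, under assumptions of my own (per-image mean-luminance equalization and a dissimilarity defined as the RMS difference between the sets' mean 2-D amplitude spectra), not the authors' toolbox:

import numpy as np

# Level 1: bring every image to a common mean luminance (battery-wide standard value).
def equalize_mean_luminance(images, target=None):
    imgs = [np.asarray(im, dtype=float) for im in images]
    if target is None:
        target = np.mean([im.mean() for im in imgs])
    return [im - im.mean() + target for im in imgs]

# Level 2: compare two image sets via the RMS difference of their mean amplitude spectra.
def mean_amplitude_spectrum(images):
    return np.mean([np.abs(np.fft.fft2(im)) for im in images], axis=0)

def rms_spectral_dissimilarity(set_a, set_b):
    diff = mean_amplitude_spectrum(set_a) - mean_amplitude_spectrum(set_b)
    return np.sqrt(np.mean(diff ** 2))     # RMS over spatial frequencies

rng = np.random.default_rng(1)
set_a = equalize_mean_luminance(rng.random((20, 64, 64)))   # 20 fake 64x64 images
set_b = equalize_mean_luminance(rng.random((20, 64, 64)))
print("dissimilarity:", rms_spectral_dissimilarity(set_a, set_b))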
Abstract:
An ab initio structure prediction approach adapted to the peptide-major histocompatibility complex (MHC) class I system is presented. Based on structure comparisons of a large set of peptide-MHC class I complexes, a molecular dynamics protocol is proposed using simulated annealing (SA) cycles to sample the conformational space of the peptide in its fixed MHC environment. A set of 14 peptide-human leukocyte antigen (HLA) A0201 and 27 peptide-non-HLA A0201 complexes for which X-ray structures are available is used to test the accuracy of the prediction method. For each complex, 1000 peptide conformers are obtained from the SA sampling. A graph theory clustering algorithm based on heavy atom root-mean-square deviation (RMSD) values is applied to the sampled conformers. The clusters are ranked using cluster size, mean effective or conformational free energies, with solvation free energies computed using Generalized Born MV 2 (GB-MV2) and Poisson-Boltzmann (PB) continuum models. The final conformation is chosen as the center of the best-ranked cluster. With conformational free energies, the overall prediction success is 83% using a 1.00 Å crystal RMSD criterion for main-chain atoms, and 76% using a 1.50 Å RMSD criterion for heavy atoms. The prediction success is even higher for the set of 14 peptide-HLA A0201 complexes: 100% of the peptides have main-chain RMSD values ≤1.00 Å and 93% of the peptides have heavy atom RMSD values ≤1.50 Å. This structure prediction method can be applied to complexes of natural or modified antigenic peptides in their MHC environment with the aim of performing rational structure-based optimizations of tumor vaccines.
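The following Python sketch only illustrates how heavy-atom RMSD can drive the grouping of sampled conformers; it uses a simple threshold-based clustering with made-up coordinates, not the graph-theory algorithm or the free-energy ranking of the paper:

import numpy as np

# RMSD between two conformers given as (N_atoms x 3) arrays, assumed pre-superposed.
def rmsd(a, b):
    return np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1)))

# Greedy threshold clustering: each conformer joins the first cluster whose
# representative lies within the cutoff, otherwise it starts a new cluster.
def cluster_by_rmsd(conformers, cutoff=1.5):
    clusters = []
    for i, c in enumerate(conformers):
        for cl in clusters:
            if rmsd(c, conformers[cl[0]]) <= cutoff:
                cl.append(i)
                break
        else:
            clusters.append([i])
    return sorted(clusters, key=len, reverse=True)

rng = np.random.default_rng(2)
conformers = [rng.normal(size=(50, 3)) for _ in range(100)]   # fake 50-atom peptides
print("largest cluster size:", len(cluster_by_rmsd(conformers)[0]))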
Abstract:
Recently, the spin-echo full-intensity acquired localized (SPECIAL) spectroscopy technique was proposed to unite the advantages of short TEs on the order of milliseconds (ms) with full sensitivity, and was applied to the rat brain in vivo. In the present study, SPECIAL was adapted and optimized for use on a clinical platform at 3T and 7T by combining interleaved water suppression (WS) and outer volume saturation (OVS), optimized sequence timing, and improved shimming using FASTMAP. High-quality single voxel spectra of human brain were acquired at TEs of 6 ms or less on a clinical 3T and 7T system for six volunteers. Narrow linewidths (6.6 +/- 0.6 Hz at 3T and 12.1 +/- 1.0 Hz at 7T for water) and the high signal-to-noise ratio (SNR) of the artifact-free spectra enabled the quantification of a neurochemical profile consisting of 18 metabolites with Cramér-Rao lower bounds (CRLBs) below 20% at both field strengths. The enhanced sensitivity and increased spectral resolution at 7T compared to 3T allowed a two-fold reduction in scan time, an increased precision of quantification for 12 metabolites, and the additional quantification of lactate with CRLB below 20%. Improved sensitivity at 7T was also demonstrated by a 1.7-fold increase in average SNR (defined as peak height divided by the root mean square [RMS] of the noise) per unit time.
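A small worked example of the SNR definition quoted above (peak height divided by the RMS of the noise), applied to a synthetic spectrum in Python; the peak shape, noise level, and "signal-free" region are assumptions for illustration only:

import numpy as np

rng = np.random.default_rng(3)
freq = np.linspace(0, 5, 2048)                              # ppm axis (arbitrary)
spectrum = np.exp(-((freq - 2.0) ** 2) / (2 * 0.02 ** 2))   # one synthetic peak
spectrum += rng.normal(0, 0.02, freq.size)                  # additive noise

noise_region = spectrum[freq > 4.5]                         # assumed signal-free segment
rms_noise = np.sqrt(np.mean((noise_region - noise_region.mean()) ** 2))
snr = spectrum.max() / rms_noise                            # peak height / RMS of noise
print(f"SNR = {snr:.1f}")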
Abstract:
INTRODUCTION: Spectral frequencies of the surface electromyogram (sEMG) increase with contraction force, but debate still exists on whether this increase is affected by various methodological and anatomical factors. This study aimed to investigate the influence of inter-electrode distance (IED) and contraction modality (step-wise vs. ramp) on the changes in spectral frequencies with increasing contraction strength for the vastus lateralis (VL) and vastus medialis (VM) muscles. METHODS: Twenty healthy male volunteers were assessed for isometric sEMG activity of the VM and VL, with the knee at 90° flexion. Subjects performed isometric ramp contractions in knee extension (6-s duration) with the force gradually increasing from 0 to 80 % MVC. Also, subjects performed 4-s step-wise isometric contractions at 10, 20, 30, 40, 50, 60, 70, and 80 % MVC. Interference sEMG signals were recorded simultaneously at different IEDs: 10, 20, 30, and 50 mm. The mean (Fmean) and median (Fmedian) frequencies and root mean square (RMS) of sEMG signals were calculated. RESULTS: For all IEDs, contraction modalities, and muscles tested, spectral frequencies increased significantly with increasing level of force up to 50-60 % MVC force. Spectral indexes increased systematically as IED was decreased. The sensitivity of spectral frequencies to changes in contraction force was independent of IED. The behaviour of spectral indexes with increasing contraction force was similar for step-wise and ramp contractions. CONCLUSIONS: In the VL and VM muscles, it is highly unlikely that a particular inter-electrode distance or contraction modality could have prevented the observation of the full extent of the increase in spectral frequencies with increasing force level.
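For concreteness, a short Python sketch of the three sEMG descriptors named above (RMS amplitude, mean frequency, and median frequency from the power spectrum); the sampling rate and the surrogate signal are assumptions, and this is not the authors' processing chain:

import numpy as np

fs = 2048                                            # sampling rate (Hz), assumed
rng = np.random.default_rng(4)
emg = rng.normal(0, 1, fs * 4)                       # 4-s surrogate interference signal

# RMS amplitude of the epoch
rms = np.sqrt(np.mean(emg ** 2))

# Power spectrum, then mean and median frequencies
freqs = np.fft.rfftfreq(emg.size, d=1 / fs)
power = np.abs(np.fft.rfft(emg)) ** 2
f_mean = np.sum(freqs * power) / np.sum(power)
cumulative = np.cumsum(power)
f_median = freqs[np.searchsorted(cumulative, cumulative[-1] / 2)]

print(f"RMS = {rms:.3f}, Fmean = {f_mean:.1f} Hz, Fmedian = {f_median:.1f} Hz")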
Abstract:
Neutrality tests in quantitative genetics provide a statistical framework for the detection of selection on polygenic traits in wild populations. However, the existing method based on comparisons of divergence at neutral markers and quantitative traits (Qst-Fst) suffers from several limitations that hinder a clear interpretation of the results with typical empirical designs. In this article, we propose a multivariate extension of this neutrality test based on empirical estimates of the among-populations (D) and within-populations (G) covariance matrices by MANOVA. A simple pattern is expected under neutrality: D = 2Fst/(1 - Fst) G, so that neutrality implies both proportionality of the two matrices and a specific value of the proportionality coefficient. This pattern is tested using Flury's framework for matrix comparison [common principal-component (CPC) analysis], a well-known tool in G matrix evolution studies. We show the importance of using a Bartlett adjustment of the test for the small sample sizes typically found in empirical studies. We propose a dual test: (i) that the proportionality coefficient is not different from its neutral expectation [2Fst/(1 - Fst)], and (ii) that the MANOVA estimates of the among-populations and within-populations mean square matrices are proportional. These two tests combined provide a more stringent test for neutrality than the classic Qst-Fst comparison and avoid several statistical problems. Extensive simulations of realistic empirical designs suggest that these tests correctly detect the expected pattern under neutrality and have enough power to efficiently detect mild to strong selection (homogeneous, heterogeneous, or mixed) when it is occurring on a set of traits. This method also provides a rigorous and quantitative framework for disentangling the effects of different selection regimes and of drift on the evolution of the G matrix. We discuss practical requirements for the proper application of our test in empirical studies and potential extensions.
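A rough numerical illustration of the neutral expectation D = 2Fst/(1 - Fst) G (not the Flury/CPC test or the Bartlett-adjusted procedure used in the paper): given estimates of D and G, a least-squares proportionality coefficient can be compared with the neutral value. All matrices and the Fst value below are simulated:

import numpy as np

# Least-squares scalar c minimizing ||D - c*G||_F.
def proportionality_coefficient(D, G):
    return np.sum(D * G) / np.sum(G * G)

rng = np.random.default_rng(5)
G = np.cov(rng.normal(size=(200, 4)), rowvar=False)   # fake within-population matrix
Fst = 0.15
neutral_c = 2 * Fst / (1 - Fst)                        # neutral expectation of c
noise = rng.normal(0, 1e-3, G.shape)
D = neutral_c * G + (noise + noise.T) / 2              # simulate a near-neutral D

print("estimated c:        ", proportionality_coefficient(D, G))
print("neutral expectation:", neutral_c)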
Abstract:
CONTEXT: Hamstrings strains are common and debilitating injuries in many sports. Most hamstrings exercises are performed at an inadequately low hip-flexion angle, given that hip flexion surpasses 70° at the end of the sprinting leg's swing phase, when most injuries occur. OBJECTIVE: To evaluate the influence of various hip-flexion angles on peak torques of the knee flexors in isometric, concentric, and eccentric contractions and on the hamstrings-to-quadriceps ratio. DESIGN: Descriptive laboratory study. SETTING: Research laboratory. PATIENTS AND OTHER PARTICIPANTS: Ten national-level sprinters (5 men, 5 women; age = 21.2 ± 3.6 years, height = 175 ± 6 cm, mass = 63.8 ± 9.9 kg). INTERVENTION(S): For each hip position (0°, 30°, 60°, and 90° of flexion), participants used the right leg to perform (1) 5 seconds of maximal isometric hamstrings contraction at 45° of knee flexion, (2) 5 maximal concentric knee flexion-extensions at 60° per second, (3) 5 maximal eccentric knee flexion-extensions at 60° per second, and (4) 5 maximal eccentric knee flexion-extensions at 150° per second. MAIN OUTCOME MEASURE(S): Hamstrings and quadriceps peak torque, hamstrings-to-quadriceps ratio, and lateral and medial hamstrings root mean square. RESULTS: We found no difference in quadriceps peak torque for any condition across all hip-flexion angles, whereas hamstrings peak torque was lower at 0° of hip flexion than at any other angle (P < .001) and greater at 90° of hip flexion than at 30° and 60° (P < .05), especially in eccentric conditions. As hip flexion increased, the hamstrings-to-quadriceps ratio increased. No difference in lateral or medial hamstrings root mean square was found for any condition across all hip-flexion angles (P > .05). CONCLUSIONS: Hip-flexion angle influenced hamstrings peak torque in all muscular contraction types; as hip flexion increased, hamstrings peak torque increased. Researchers should investigate further whether an eccentric resistance training program at sprint-specific hip-flexion angles (70° to 80°) could help prevent hamstrings injuries in sprinters. Moreover, hamstrings-to-quadriceps ratio assessment should be standardized at 80° of hip flexion.
Abstract:
INTRODUCTION: Anhedonia is defined as a diminished capacity to experience pleasant emotion and is commonly included among the negative symptoms of schizophrenia. However, although patients report experiencing a lower level of pleasure than controls, they report experiencing as much pleasure as controls when emotion is measured online. OBJECTIVE: The Temporal Experience of Pleasure Scale (TEPS) measures pleasure experienced in the moment and in anticipation of future activities. The TEPS is an 18-item self-report measure of anticipatory (10 items) and consummatory (eight items) pleasure. The goal of this paper is to assess the psychometric characteristics of the French translation of this scale. METHODS: The control sample was composed of 60 women and 22 men, with a mean age of 38.1 years (S.D.: 10.8); 36 were without qualification and 46 held a professional diploma. A sample of 21 patients meeting DSM-IV-TR criteria for schizophrenia was recruited from the community psychiatry service of the department of psychiatry in Lausanne. There were five women and 16 men; the mean age was 34.1 years (S.D.: 7.5). Ten had obtained a professional qualification and 11 were without qualification. None worked in competitive employment. Their mean dose in chlorpromazine equivalents was 431 mg (S.D.: 259). All patients were on atypical antipsychotics. The control sample completed the TEPS and the Physical Anhedonia Scale (PAS). The patient sample completed the TEPS and was independently rated on the Calgary Depression Scale and the Scale for the Assessment of Negative Symptoms (SANS). For comparison with controls, patients were matched on age, sex and professional qualification; this required the supplementary recruitment of two control subjects. RESULTS: Results for the control sample indicate that the TEPS shows acceptable internal consistency, with Cronbach's alphas of 0.84 for the total scale, 0.74 for the anticipatory pleasure scale and 0.79 for the consummatory pleasure scale. The confirmatory factor analysis indicated that the model fits our data well (chi2/df = 1.333; df = 134; p < 0.0006; root mean square error of approximation, RMSEA = 0.064). External validity measured against the PAS showed R = -0.27 (p < 0.05) for the consummatory scale and R = -0.26 for the total score. Comparisons between patients and matched controls indicated that patients scored significantly lower than controls on anticipatory pleasure (t = 2.7, df = 40, 2-tailed p = 0.01; Cohen's d = 0.83) and on the total TEPS score (t = 2.8, df = 40, 2-tailed p = 0.01; Cohen's d = 0.87). The two samples did not differ on consummatory pleasure. The anticipatory pleasure factor and the total TEPS showed significant negative correlations with SANS anhedonia: R = -0.78 (p < 0.01) for the anticipatory factor and R = -0.61 (p < 0.01) for the total TEPS. There was also a negative correlation between the anticipatory factor and SANS avolition of R = -0.50 (p < 0.05). These correlations were maintained in partial correlations controlling for depression and chlorpromazine equivalents. CONCLUSION: The results of this validation show that the French version of the TEPS has psychometric characteristics similar to those of the original version. These results highlight the discrepancy between direct and indirect reports of experienced pleasure in patients with schizophrenia. Patients may have difficulties in anticipating the pleasure of future enjoyable activities, but not in experiencing pleasure once engaged in an enjoyable activity.
Medication and depression do not seem to modify our results, but this should be better controlled in a longitudinal study. The distinction between anticipatory and consummatory pleasure appears to be useful for the development of new psychosocial interventions tailored to improve desire in patients suffering from schizophrenia. The major limitations of the study are the small size of the patient sample and the under-representation of men in the control sample.
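As a side illustration of the internal-consistency statistic reported above, here is a short Python sketch of Cronbach's alpha on made-up item scores (the item count and data are assumptions, not TEPS responses):

import numpy as np

# Cronbach's alpha for a (subjects x items) score matrix.
def cronbach_alpha(items):
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(6)
latent = rng.normal(size=(82, 1))                  # 82 subjects, as in the control sample
scores = latent + rng.normal(0, 1, size=(82, 10))  # 10 hypothetical items
print(f"alpha = {cronbach_alpha(scores):.2f}")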
Abstract:
Human arteries affected by atherosclerosis are characterized by altered wall viscoelastic properties. The possibility of noninvasively assessing arterial viscoelasticity in vivo would significantly contribute to the early diagnosis and prevention of this disease. This paper presents a noniterative technique to estimate the viscoelastic parameters of a vascular wall Zener model. The approach requires the simultaneous measurement of flow variations and wall displacements, which can be provided by suitable ultrasound Doppler instruments. Viscoelastic parameters are estimated by fitting the theoretical constitutive equations to the experimental measurements using an ARMA parameter approach. The accuracy and sensitivity of the proposed method are tested using reference data generated by numerical simulations of arterial pulsation, in which the physiological conditions and the viscoelastic parameters of the model can be suitably varied. The estimated values quantitatively agree with the reference values, showing that the only parameter affected by changing the physiological conditions is viscosity, whose relative error remained about 27% even when a poor signal-to-noise ratio was simulated. Finally, the feasibility of the method is illustrated through three measurements made at different flow regimes on a cylindrical vessel phantom, yielding a mean parameter estimation error of 25%.
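A much-simplified Python sketch of the general idea of fitting a Zener (standard linear solid) model noniteratively: because the constitutive law sigma + a*dsigma/dt = b*eps + c*deps/dt is linear in (a, b, c), the parameters can be estimated by ordinary least squares on sampled waveforms. This is an illustration only, not the ARMA-based estimator or the hemodynamic model of the paper:

import numpy as np

# Estimate Zener parameters (a, b, c) by least squares on sampled stress/strain.
def fit_zener(sigma, eps, dt):
    dsigma = np.gradient(sigma, dt)
    deps = np.gradient(eps, dt)
    X = np.column_stack([-dsigma, eps, deps])          # sigma = -a*sigma' + b*eps + c*eps'
    a, b, c = np.linalg.lstsq(X, sigma, rcond=None)[0]
    return a, b, c

# Synthetic test: generate data from known parameters, then recover them.
dt = 1e-3
t = np.arange(0, 2, dt)
eps = 0.05 * np.sin(2 * np.pi * 1.0 * t)               # strain-like waveform
deps = np.gradient(eps, dt)
a_true, b_true, c_true = 0.1, 2.0, 0.3
sigma = np.zeros_like(t)
for i in range(1, t.size):                             # Euler integration of the Zener law
    sigma[i] = sigma[i-1] + dt / a_true * (b_true*eps[i-1] + c_true*deps[i-1] - sigma[i-1])

print(fit_zener(sigma, eps, dt))                       # should be close to (0.1, 2.0, 0.3)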
Abstract:
Summary: This work is a retrospective study of a series of young patients operated on for pediatric glaucoma. The aim is to evaluate the long-term outcome of a surgical procedure combining deep sclerectomy and trabeculectomy (penetrating deep sclerectomy). Between March 1997 and October 2006, 28 patients were followed to evaluate the outcome of this surgery performed on 35 eyes. A complete ophthalmological examination was performed before surgery, at 1 and 7 days, then at 1, 2, 3, 4, 6, 9, and 12 months, and every 6 months thereafter. The criteria for evaluating the postoperative outcome were: changes in intraocular pressure, adjuvant antiglaucoma treatment, complication rate, number of surgical revisions, refractive error, best corrected visual acuity, and corneal clarity and diameter. The mean age was 3.6 ± 4.5 years and the mean follow-up 3.6 ± 2.9 years. The preoperative intraocular pressure of 31.9 ± 11.5 mmHg had decreased by 58.3% (p<0.005) at the end of follow-up. Of the 14 patients whose visual acuity could be measured, 8 (57.1%) had an acuity of 5/10 or better and 3 (21.4%) an acuity of 2/10 after surgery. The cumulative complete success rate at 9 years was 52.3%, and the qualified success rate 70.6%. Sight-threatening complications (8.6%) were more frequent in cases of refractory glaucoma. In conclusion, deep sclerectomy combined with trabeculectomy is a surgical technique developed to control intraocular pressure in congenital, juvenile and secondary glaucomas. The intermediate results are encouraging and promising. Cases operated on before the introduction of this technique, however, have a less favourable prognosis. The number of sight-threatening complications is essentially related to the severity of the glaucoma and to the number of previous surgeries.
Abstract: Purpose: To evaluate the outcomes of combined deep sclerectomy and trabeculectomy (penetrating deep sclerectomy) in pediatric glaucoma. Design: Retrospective, non-consecutive, non-comparative, interventional case series. Participants: Children suffering from pediatric glaucoma who underwent surgery between March 1997 and October 2006 were included in this study. Methods: A primary combined deep sclerectomy and trabeculectomy was performed in 35 eyes of 28 patients. Complete examinations were performed before surgery, postoperatively at 1 and 7 days, at 1, 2, 3, 4, 6, 9, and 12 months, and then every 6 months after surgery. Main Outcome Measures: Surgical outcome was assessed in terms of intraocular pressure (IOP) change, additional glaucoma medication, complication rate, need for surgical revision, as well as refractive error, best corrected visual acuity (BCVA), and corneal clarity and diameters. Results: The mean age before surgery was 3.6 ± 4.5 years, and the mean follow-up was 3.5 ± 2.9 years. The mean preoperative IOP was 31.9 ± 11.5 mmHg. At the end of follow-up, the mean IOP had decreased by 58.3% (p<0.005), and of the 14 patients with available BCVA, 8 (57.1%) achieved 0.5 (20/40) or better, 3 (21.4%) 0.2 (20/100), and 2 (14.3%) 0.1 (20/200) in their better eye. The mean refractive error (spherical equivalent) at the final follow-up visit was +0.83 ± 5.4 D. Six patients (43%) were affected by myopia. The complete and qualified success rates, based on a cumulative survival curve, after 9 years were 52.3% and 70.6%, respectively (p<0.05). Sight-threatening complications were more common (8.6%) in refractory glaucomas. Conclusions: Combined deep sclerectomy and trabeculectomy is a surgical technique developed to control IOP in congenital, secondary and juvenile glaucomas. The intermediate results are satisfactory and promising. Eyes that had undergone classic glaucoma surgery before this technique had less favourable results. The number of sight-threatening complications is related to the severity of the glaucoma and the number of previous surgeries.
Abstract:
We introduce simple nonparametric density estimators that generalize the classical histogram and frequency polygon. The new estimators are expressed as a linear combination of density functions that are piecewise polynomials, where the coefficients are chosen optimally to minimize the integrated squared error of the estimator. We establish the asymptotic behaviour of the proposed estimators and study their performance in a simulation study.
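As a worked example of the integrated squared error criterion mentioned above (not the proposed estimators themselves), the following Python sketch compares a plain histogram and a frequency polygon against a known density; the bin count and sample size are arbitrary choices:

import numpy as np

# Piecewise-constant histogram density evaluated on a grid.
def histogram_density(x, grid, bins):
    hist, edges = np.histogram(x, bins=bins, density=True)
    idx = np.clip(np.searchsorted(edges, grid, side="right") - 1, 0, bins - 1)
    dens = hist[idx]
    dens[(grid < edges[0]) | (grid > edges[-1])] = 0.0   # zero outside the data range
    return dens

# Frequency polygon: linear interpolation between bin-center densities.
def frequency_polygon(x, grid, bins):
    hist, edges = np.histogram(x, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return np.interp(grid, centers, hist, left=0.0, right=0.0)

rng = np.random.default_rng(7)
x = rng.normal(size=500)
grid = np.linspace(-4, 4, 2001)
true = np.exp(-grid ** 2 / 2) / np.sqrt(2 * np.pi)       # standard normal density

for name, est in [("histogram", histogram_density), ("freq. polygon", frequency_polygon)]:
    ise = np.trapz((est(x, grid, 20) - true) ** 2, grid)  # integrated squared error
    print(f"{name}: ISE = {ise:.4f}")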
Abstract:
Nonlinear regression problems can often be reduced to linearity by transforming the response variable (e.g., using the Box-Cox family of transformations). The classic estimates of the parameter defining the transformation as well as of the regression coefficients are based on the maximum likelihood criterion, assuming homoscedastic normal errors for the transformed response. These estimates are nonrobust in the presence of outliers and can be inconsistent when the errors are nonnormal or heteroscedastic. This article proposes new robust estimates that are consistent and asymptotically normal for any unimodal and homoscedastic error distribution. For this purpose, a robust version of conditional expectation is introduced for which the prediction mean squared error is replaced with an M scale. This concept is then used to develop a nonparametric criterion to estimate the transformation parameter as well as the regression coefficients. A finite sample estimate of this criterion based on a robust version of smearing is also proposed. Monte Carlo experiments show that the new estimates compare favorably with respect to the available competitors.
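A loose Python sketch of the underlying idea, replacing the paper's M-scale and smearing machinery with a simple MAD criterion and an ordinary least-squares fit: the Box-Cox parameter is chosen to minimize a robust scale of residuals rather than a Gaussian likelihood, so a few outliers do not drive the choice of lambda. Every modeling choice below is an assumption made for illustration:

import numpy as np

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1) / lam

def mad_scale(r):
    return 1.4826 * np.median(np.abs(r - np.median(r)))   # normalized MAD

def robust_lambda(x, y, grid=np.linspace(-1, 2, 61)):
    gm = np.exp(np.mean(np.log(y)))                 # geometric mean (Jacobian normalization)
    X = np.column_stack([np.ones_like(x), x])
    scores = []
    for lam in grid:
        z = boxcox(y, lam) / gm ** (lam - 1)        # normalized transform, comparable across lam
        beta = np.linalg.lstsq(X, z, rcond=None)[0]
        scores.append(mad_scale(z - X @ beta))      # robust scale instead of mean squared error
    return grid[int(np.argmin(scores))]

rng = np.random.default_rng(8)
x = rng.uniform(1, 10, 200)
y = np.exp(0.5 + 0.3 * x + rng.normal(0, 0.2, x.size))   # log-linear truth (lambda near 0)
y[:5] *= 10                                               # a few gross outliers
print("selected lambda:", robust_lambda(x, y))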
Abstract:
We compare a set of empirical Bayes and composite estimators of the population means of the districts (small areas) of a country, and show that the natural modelling strategy of searching for a well fitting empirical Bayes model and using it for estimation of the area-level means can be inefficient.
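For orientation, a toy Python example of a composite small-area estimator (a variance-weighted blend of each district's direct mean and the overall mean); the data, weights, and variance estimates are schematic and are not the estimators compared in the paper:

import numpy as np

# Composite estimate: shrink the direct area mean toward the overall mean,
# with more shrinkage when the direct estimate is noisy.
def composite_estimate(direct_mean, sampling_var, between_area_var, overall_mean):
    w = between_area_var / (between_area_var + sampling_var)
    return w * direct_mean + (1 - w) * overall_mean

rng = np.random.default_rng(9)
true_means = rng.normal(100, 5, size=50)                   # 50 hypothetical districts
n = rng.integers(5, 30, size=50)                           # small samples per district
samples = [rng.normal(m, 20, size=k) for m, k in zip(true_means, n)]

direct = np.array([s.mean() for s in samples])
samp_var = np.array([s.var(ddof=1) / s.size for s in samples])
overall = direct.mean()
between = max(direct.var(ddof=1) - samp_var.mean(), 1e-9)  # crude moment estimate

composite = composite_estimate(direct, samp_var, between, overall)
print("direct    MSE:", np.mean((direct - true_means) ** 2))
print("composite MSE:", np.mean((composite - true_means) ** 2))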
Abstract:
By means of classical Itô calculus, we decompose option prices as the sum of the classical Black-Scholes formula, with volatility parameter equal to the root-mean-square future average volatility, plus a term due to correlation and a term due to the volatility of the volatility. This decomposition allows us to develop first- and second-order approximation formulas for option prices and implied volatilities in the Heston volatility framework, as well as to study their accuracy. Numerical examples are given.
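A worked example of the zeroth-order term of such a decomposition: a Black-Scholes call price evaluated at the root-mean-square of a (here deterministic) future volatility path, with the correlation and vol-of-vol correction terms omitted. The volatility path and contract parameters below are arbitrary illustrations:

import math

# Standard Black-Scholes call price.
def black_scholes_call(S, K, r, sigma, T):
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
    return S * N(d1) - K * math.exp(-r * T) * N(d2)

# Deterministic volatility path sigma(t); its RMS over [0, T] is the effective volatility.
T, steps = 1.0, 1000
sigma_path = [0.20 + 0.10 * (i * T / steps) / T for i in range(steps)]
sigma_rms = math.sqrt(sum(s ** 2 for s in sigma_path) / steps)

print("RMS volatility:", round(sigma_rms, 4))
print("call price    :", round(black_scholes_call(100, 100, 0.01, sigma_rms, T), 4))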
Abstract:
A national survey designed for estimating a specific population quantity is sometimes used for estimation of this quantity also for a small area, such as a province. Budget constraints do not allow a greater sample size for the small area, and so other means of improving estimation have to be devised. We investigate such methods and assess them by a Monte Carlo study. We explore how a complementary survey can be exploited in small area estimation. We use the context of the Spanish Labour Force Survey (EPA) and the Barometer in Spain for our study.