958 results for Quantitative EEG analysis
Quantitative analysis of benign paroxysmal positional vertigo fatigue under canalithiasis conditions
Abstract:
In our daily life, small flows in the semicircular canals (SCCs) of the inner ear displace a sensory structure called the cupula, which mediates the transduction of head angular velocities into afferent signals. We consider a dysfunction of the SCCs known as canalithiasis. Under this condition, small debris particles disturb the flow in the SCCs and can cause benign paroxysmal positional vertigo (BPPV), arguably the most common form of vertigo in humans. The diagnosis of BPPV is mainly based on the analysis of typical eye movements (positional nystagmus) following provocative head maneuvers that are known to lead to vertigo in BPPV patients. These eye movements are triggered by the vestibulo-ocular reflex, and their velocity provides an indirect measurement of the cupula displacement. An attenuation of the vertigo and the nystagmus is often observed when the provocative maneuver is repeated. This attenuation is known as BPPV fatigue. It has not been quantitatively described so far, and the mechanisms causing it remain unknown. We quantify fatigue by eye velocity measurements and propose a fluid dynamic interpretation of our results based on a computational model of the fluid–particle dynamics in an SCC with canalithiasis. Our model suggests that the particles may not return to their initial position after a first head maneuver, so that a second maneuver leads to different particle trajectories causing smaller cupula displacements.
Abstract:
Linkage disequilibrium methods can be used to find genes influencing quantitative trait variation in humans. They can require smaller sample sizes than linkage equilibrium methods, such as the variance component approach, to find loci with a specific effect size. The increase in power comes at the expense of requiring more markers to be typed to scan the entire genome. This thesis compares different linkage disequilibrium methods to determine which factors influence the power to detect disequilibrium. The costs of disequilibrium and equilibrium tests were compared to determine whether the savings in phenotyping costs when using disequilibrium methods outweigh the additional genotyping costs.

Nine linkage disequilibrium tests were examined by simulation. Five tests involve selecting isolated unrelated individuals, while four involve the selection of parent-child trios (TDT). All nine tests were found to identify disequilibrium with the correct significance level in Hardy-Weinberg populations. Increasing linked genetic variance and trait allele frequency increased the power to detect disequilibrium, while increasing the number of generations and the distance between marker and trait loci decreased it. Discordant sampling was used for several of the tests; the more stringent the sampling, the greater the power to detect disequilibrium in a sample of given size. The power to detect disequilibrium was not affected by the presence of polygenic effects.

When the trait locus had more than two trait alleles, the power of the tests maximized at less than one. For the simulation methods used here, when there were more than two trait alleles there was a probability equal to 1 - heterozygosity of the marker locus that both trait alleles were in disequilibrium with the same marker allele, resulting in the marker being uninformative for disequilibrium. The five tests using isolated unrelated individuals were found to have excess error rates when there was disequilibrium due to population admixture. Increased error rates also resulted from increased unlinked major gene effects, discordant trait allele frequency, and increased disequilibrium. Polygenic effects did not affect the error rates. The transmission disequilibrium test (TDT)-based tests were not liable to any increase in error rates.

For all sample ascertainment costs, for recent mutations (<100 generations) linkage disequilibrium tests were less expensive to carry out than the variance component test. Candidate gene scans saved even more money. The use of recently admixed populations also decreased the cost of performing a linkage disequilibrium test.
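The uninformative-marker argument above reduces to simple algebra: if each of two trait alleles independently ends up associated with marker allele i with probability equal to that allele's frequency p_i, then the chance that both land on the same marker allele is the sum of p_i squared, which is exactly 1 - heterozygosity. A minimal sketch with hypothetical marker allele frequencies (not values from the thesis):

```python
# Probability that two trait alleles associate with the same marker allele.
# Under independent assignment with probabilities p_i, P(same) = sum_i p_i^2,
# which equals 1 - heterozygosity of the marker locus.
marker_freqs = [0.4, 0.3, 0.2, 0.1]  # hypothetical marker allele frequencies

p_same = sum(p * p for p in marker_freqs)   # probability the marker is uninformative
heterozygosity = 1.0 - p_same

print(f"heterozygosity H   = {heterozygosity:.2f}")  # 0.70
print(f"P(uninformative)   = {p_same:.2f}")          # 0.30
```

With these frequencies, roughly a third of such markers would carry no disequilibrium signal, consistent with the power ceiling described above.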
Abstract:
AIMS: We conducted a meta-analysis to evaluate the accuracy of quantitative stress myocardial contrast echocardiography (MCE) in coronary artery disease (CAD). METHODS AND RESULTS: A database search was performed through January 2008. We included studies evaluating the accuracy of quantitative stress MCE for detection of CAD compared with coronary angiography or single-photon emission computed tomography (SPECT) and measuring the reserve parameters A, beta, and Abeta. Data from the studies were verified and supplemented by the authors of each study. Using random-effects meta-analysis, we estimated weighted mean differences (WMD), likelihood ratios (LRs), diagnostic odds ratios (DORs), and summary area under the curve (AUC), all with 95% confidence intervals (CI). Of 1443 studies, 13 including 627 patients (age range, 38-75 years) and comparing MCE with angiography (n = 10), SPECT (n = 1), or both (n = 2) were eligible. WMDs (95% CI) were significantly lower in the CAD group than in the no-CAD group: 0.12 (0.06-0.18) (P < 0.001), 1.38 (1.28-1.52) (P < 0.001), and 1.47 (1.18-1.76) (P < 0.001) for A, beta, and Abeta reserves, respectively. Pooled LRs for a positive test were 1.33 (1.13-1.57), 3.76 (2.43-5.80), and 3.64 (2.87-4.78), and LRs for a negative test were 0.68 (0.55-0.83), 0.30 (0.24-0.38), and 0.27 (0.22-0.34) for A, beta, and Abeta reserves, respectively. Pooled DORs were 2.09 (1.42-3.07), 15.11 (7.90-28.91), and 14.73 (9.61-22.57), and AUCs were 0.637 (0.594-0.677), 0.851 (0.828-0.872), and 0.859 (0.842-0.750) for A, beta, and Abeta reserves, respectively. CONCLUSION: Evidence supports the use of quantitative MCE as a non-invasive test for detection of CAD. Standardizing MCE quantification analysis and adherence to reporting standards for diagnostic tests could enhance the quality of evidence in this field.
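The random-effects pooling behind WMD estimates of this kind is commonly done with the DerSimonian-Laird procedure. The sketch below illustrates that computation; the per-study mean differences and standard errors are hypothetical, not data from this meta-analysis:

```python
import math

def pooled_wmd(diffs, ses):
    """Inverse-variance pooled weighted mean difference with a
    DerSimonian-Laird random-effects adjustment.

    diffs: per-study mean differences; ses: their standard errors.
    Returns (pooled estimate, 95% confidence interval)."""
    w = [1.0 / se ** 2 for se in ses]                # fixed-effect weights
    fixed = sum(wi * d for wi, d in zip(w, diffs)) / sum(w)
    # Cochran's Q measures between-study heterogeneity.
    q = sum(wi * (d - fixed) ** 2 for wi, d in zip(w, diffs))
    df = len(diffs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                    # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for se in ses]    # random-effects weights
    est = sum(wi * d for wi, d in zip(w_re, diffs)) / sum(w_re)
    se_est = math.sqrt(1.0 / sum(w_re))
    return est, (est - 1.96 * se_est, est + 1.96 * se_est)

# Hypothetical beta-reserve differences from three studies:
est, (lo, hi) = pooled_wmd([1.2, 1.5, 1.4], [0.2, 0.3, 0.25])
print(f"pooled WMD = {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

When Q is below its degrees of freedom, tau-squared is truncated to zero and the estimate coincides with the fixed-effect result.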
Abstract:
PURPOSE Fundus autofluorescence (FAF) can be characterized not only by its intensity or emission spectrum, but also by its lifetime. As the lifetime of a fluorescent molecule is sensitive to its local microenvironment, this technique may provide more information than fundus autofluorescence intensity imaging. We report here the characteristics and repeatability of FAF lifetime measurements of the human macula using a new fluorescence lifetime imaging ophthalmoscope (FLIO). METHODS A total of 31 healthy phakic subjects, aged 22 to 61 years, were included in this study. For image acquisition, a fluorescence lifetime ophthalmoscope based on a Heidelberg Engineering Spectralis system was used. Fluorescence lifetime maps of the retina were recorded in a short- (498-560 nm) and a long- (560-720 nm) spectral channel. For quantification of fluorescence lifetimes, a standard ETDRS grid was used. RESULTS Mean fluorescence lifetimes were shortest in the fovea: 208 picoseconds in the short-spectral channel and 239 picoseconds in the long-spectral channel. Fluorescence lifetimes increased from the central area to the outer ring of the ETDRS grid. The test-retest reliability of FLIO was very high for all ETDRS areas (Spearman's ρ = 0.80 for the short- and 0.97 for the long-spectral channel, P < 0.0001). Fluorescence lifetimes increased with age. CONCLUSIONS FLIO allows reproducible measurements of fluorescence lifetimes of the macula in healthy subjects. Using custom-built software, we were able to quantify fluorescence lifetimes within the ETDRS grid. Establishing a clinically accessible standard against which to measure FAF lifetimes within the retina is a prerequisite for future studies in retinal disease.
Abstract:
Epileptic seizures are associated with highly stereotyped behavior of the patients, and characteristic signal patterns can be found in the EEG of epilepsy patients during and between seizures. Here we use ordinal patterns to analyze EEGs of epilepsy patients and quantify the degree of signal determinism. Besides the relative signal redundancy and the fraction of forbidden patterns, we introduce the fraction of under-represented patterns as a new measure. Using the logistic map, parameter scans are performed to explore the sensitivity of the measures to signal determinism. Thereafter, the measures are applied to two types of EEGs recorded in two epilepsy patients. Intracranial EEG shows pronounced determinism peaks during seizures. Finally, we demonstrate that ordinal patterns may be useful for improving the analysis of non-invasive simultaneous EEG-fMRI.
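The ordinal-pattern machinery referred to above can be sketched in a few lines. The snippet below is an illustrative reimplementation, not the authors' code: it maps a time series to its order-3 ordinal patterns and computes the fraction of forbidden patterns for the fully chaotic logistic map, where one of the six possible patterns (two successive decreases) provably never occurs:

```python
from itertools import permutations

def ordinal_patterns(x, m=3):
    """Map a time series to its sequence of order-m ordinal patterns
    (each pattern is the argsort of a sliding window of length m)."""
    pats = []
    for i in range(len(x) - m + 1):
        window = x[i:i + m]
        pats.append(tuple(sorted(range(m), key=lambda k: window[k])))
    return pats

def forbidden_fraction(x, m=3):
    """Fraction of the m! possible patterns that never occur ("forbidden")."""
    seen = set(ordinal_patterns(x, m))
    total = len(list(permutations(range(m))))
    return (total - len(seen)) / total

# Logistic map at r = 4: deterministic chaos. Two successive decreases are
# impossible (a decrease requires x > 0.75, but f maps (0.75, 1) below 0.75),
# so exactly one of the six order-3 patterns is forbidden.
x, series = 0.3, []
for _ in range(2000):
    x = 4.0 * x * (1.0 - x)
    series.append(x)

print(forbidden_fraction(series, m=3))  # 1/6 for this deterministic signal
```

For a sufficiently long random (non-deterministic) signal the forbidden fraction tends to zero, which is what makes it a usable determinism measure.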
Abstract:
BACKGROUND Pressure ulcers are associated with severe impairment for the patients and a high economic burden. With this study we wanted to gain more insight into skin perfusion dynamics under external loading. Furthermore, we evaluated the effect of different types of pressure relief mattresses. METHODS A total of 25 healthy volunteers were enrolled in the study. Perfusion dynamics of the sacral and heel areas were assessed using the O2C device, which combines laser light, to determine blood flow, with white light, to determine the relative amount of hemoglobin. Three mattresses were evaluated against a hard surface: a standard hospital foam mattress, a visco-elastic foam mattress, and an air-fluidized bed. RESULTS In the heel area, only the air-fluidized bed was able to maintain the blood circulation (mean blood flow of 13.6 ± 6 versus 3.9 ± 3 AU and mean relative amount of hemoglobin of 44.0 ± 14 versus 32.7 ± 12 AU). In the sacral area, all mattresses tested improved blood circulation compared with the hard surface. CONCLUSION The results of this study form a more precise picture of perfusion changes caused by external loading on various pressure relief mattresses. This knowledge may help reduce the incidence of pressure ulcers and may inform the selection of pressure relief mattresses.
Abstract:
BACKGROUND Current guidelines for evaluating cleft palate treatments are mostly based on two-dimensional (2D) evaluation, but three-dimensional (3D) imaging methods to assess treatment outcome are steadily gaining ground. OBJECTIVE To identify 3D imaging methods for quantitative assessment of soft tissue and skeletal morphology in patients with cleft lip and palate. DATA SOURCES Literature was searched using PubMed (1948-2012), EMBASE (1980-2012), Scopus (2004-2012), Web of Science (1945-2012), and the Cochrane Library. The last search was performed September 30, 2012. Reference lists were hand-searched for potentially eligible studies. There was no language restriction. STUDY SELECTION We included publications using 3D imaging techniques to assess facial soft tissue or skeletal morphology in patients older than 5 years with a cleft lip with or without cleft palate. We reviewed studies involving the facial region when at least 10 subjects in the sample had at least one cleft type. Only primary publications were included. DATA EXTRACTION Independent extraction of data and quality assessments were performed by two observers. RESULTS Five hundred full-text publications were retrieved; 144 met the inclusion criteria, of which 63 were high-quality studies. Because of differences in study designs, topics studied, patient characteristics, and success measurements, only a systematic review could be conducted. The main 3D techniques used in cleft lip and palate patients are CT, CBCT, MRI, stereophotogrammetry, and laser surface scanning. These techniques are mainly used for soft tissue analysis, evaluation of bone grafting, and changes in the craniofacial skeleton. Digital dental casts are used to evaluate treatment and changes over time. CONCLUSION Available evidence implies that 3D imaging methods can be used for documentation of CLP patients. No data are available yet showing that 3D methods are more informative than conventional 2D methods. Further research is warranted to elucidate this.
Abstract:
We present the results of an investigation into the nature of information needs of software developers who work in projects that are part of larger ecosystems. This work is based on a quantitative survey of 75 professional software developers. We corroborate the results identified in the survey with needs and motivations proposed in a previous survey and discover that tool support for developers working in an ecosystem context is even more meager than we thought: mailing lists and internet search are the most popular tools developers use to satisfy their ecosystem-related information needs.
Abstract:
Microindentation in bone is a micromechanical testing technique routinely used to extract material properties related to bone quality. As the analysis of microindentation data is based on assumptions about the contact between sample and indenter, the aim of this study was to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties. Indentations were performed in dry human and ovine bone in axial and transverse directions, and their topology was measured by atomic force microscopy. Statistical shape modeling of the residual imprint allowed us to define a mean shape and to describe the variability in terms of 21 principal components related to imprint depth, surface curvature, and roughness. The indentation profile of bone was found to be highly consistent and free of any pile-up, differing mostly in depth between species and directions. A few of the topological parameters, in particular depth, showed significant but rather weak and inconsistent correlations with variations in mechanical properties. The mechanical response of bone, as well as the residual imprint shape, was highly consistent within each category. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from an ideally flat surface.
Abstract:
The long-term integrity of protected areas (PAs), and hence the maintenance of related ecosystem services (ES), depends on the support of local people. In the present study, local people's perceptions of ecosystem services from PAs and the factors that govern local preferences for PAs are assessed. Fourteen study villages were randomly selected from three different protected forest areas and one control site along the southern coast of Côte d'Ivoire. Data were collected through a mixed-method approach, including qualitative semi-structured interviews and a household survey based on hypothetical choice scenarios. Local people's perceptions of ecosystem service provision were examined through qualitative content analysis, while the relations between people's preferences and potential influencing factors were analyzed through multinomial models. This study shows that rural villagers do perceive a number of different ecosystem services as benefits from PAs in Côte d'Ivoire. The results based on quantitative data also suggest that local preferences for PAs and related ecosystem services are driven by the PAs' management rules, age, and people's dependence on natural resources.
Abstract:
PURPOSE Quantification of retinal layers using automated segmentation of optical coherence tomography (OCT) images allows for longitudinal studies of retinal and neurological disorders in mice. The purpose of this study was to compare the performance of automated retinal layer segmentation algorithms with manual segmentation in mice using the Spectralis OCT. METHODS Spectral-domain OCT images from a total of 55 mice from three different strains were analyzed. The OCT scans from 22 C57Bl/6, 22 BALBc, and 11 C3A.Cg-Pde6b(+)Prph2(Rd2)/J mice were automatically segmented using three commercially available retinal segmentation algorithms and compared with manual segmentation. RESULTS Fully automated segmentation performed well in mice, with coefficients of variation (CV) below 5% for total retinal volume. However, all three automated segmentation algorithms yielded much thicker total retinal thickness values than manual segmentation (P < 0.0001), owing to segmentation errors at the basement membrane. CONCLUSIONS Whereas the automated retinal segmentation algorithms performed well for the inner layers, the retinal pigment epithelium (RPE) was delineated within the sclera, leading to consistently thicker measurements of the photoreceptor layer and the total retina. TRANSLATIONAL RELEVANCE The introduction of spectral-domain OCT allows for accurate imaging of the mouse retina. Exact quantification of retinal layer thicknesses in mice is important for studying layers of interest under various pathological conditions.
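The repeatability criterion quoted above (CV below 5%) is a plain coefficient-of-variation computation over repeated measurements. A minimal sketch; the volume values below are hypothetical, not data from the study:

```python
def coefficient_of_variation(values):
    """CV = sample standard deviation / mean, reported in percent."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return 100.0 * sd / mean

# Hypothetical repeated total-retinal-volume measurements (mm^3) of one eye:
volumes = [5.10, 5.04, 5.12, 5.08, 5.06]
cv = coefficient_of_variation(volumes)
print(f"CV = {cv:.2f}%")  # well below the 5% threshold
```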