88 results for AUDITORY THRESHOLD


Relevance:

20.00%

Publisher:

Abstract:

Introduction. Auditory hallucinations exist in psychotic disorders as well as in the general population. Proneness to hallucinations, as measured by positive schizotypy, predicts false perceptions during an auditory signal detection task (Barkus, Stirling, Hopkins, McKie, & Lewis, 2007). Our aim was to replicate this result and extend it by examining effects of age and sex, both important demographic predictors of psychosis.

Method. A sample of 76 healthy volunteers, split into groups aged 15-17 years (n = 46) and 19 years and over (n = 30), underwent a signal detection task designed to detect propensity towards false perceptions under ambiguous auditory conditions. Scores on the Unusual Experiences (UE) subscale of the O-LIFE schizotypy scale, IQ, and a measure of working memory were also assessed.

Results. We replicated our initial finding (Barkus et al., 2007): high scores on positive schizotypy were associated with false perceptions. Younger participants who scored highly on positive schizotypy reported significantly more false perceptions compared to other groups (p = .04). Older participants who had had an imaginary friend reported more false perceptions during the signal detection task (p < .01).

Conclusions. Younger participants seem most vulnerable to the effects of positive schizotypal traits in terms of a signal detection deficit that underlies auditory hallucinations. Schizotypy may have greatest impact closer to the risk period for development of psychotic disorders.
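
For context, a minimal sketch of the kind of signal-detection measures such a task yields; the counts, the log-linear correction, and the helper function below are illustrative assumptions, not data or code from the study.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return hit rate, false-alarm rate, and d' with a log-linear correction
    so that rates of exactly 0 or 1 do not give infinite z-scores."""
    n_signal = hits + misses
    n_noise = false_alarms + correct_rejections
    hit_rate = (hits + 0.5) / (n_signal + 1)
    fa_rate = (false_alarms + 0.5) / (n_noise + 1)
    z = NormalDist().inv_cdf
    return hit_rate, fa_rate, z(hit_rate) - z(fa_rate)

# Hypothetical listener who reports many "signal present" responses in noise-only trials
print(sdt_measures(hits=40, misses=10, false_alarms=12, correct_rejections=38))
```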

Relevance:

20.00%

Publisher:

Abstract:

A criterion is derived for delamination onset in transversely isotropic laminated plates under small-mass, high-velocity impact. The resulting delamination threshold load is about 21% higher than the corresponding quasi-static threshold load. A closed-form approximation for the peak impact load is then used to predict the delamination threshold velocity. The theory is validated for a range of test cases by comparison with 3D finite element simulation using LS-DYNA and a newly developed interface element to model delamination onset and growth. The predicted delamination threshold loads and velocities are in very good agreement with the finite element simulations. Good agreement is also shown in a comparison with published experimental results. In contrast to quasi-static impacts, delamination growth occurs under a rapidly decreasing load. Inclusion of finite thickness effects and a proper description of the contact stiffness are found to be vital for accurate prediction of the delamination threshold velocity.
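
A rough numerical illustration of the relationship stated above, using a commonly cited quasi-static delamination-threshold expression of the Davies type and invented material properties; this is a hedged sketch, not the criterion derived in the paper.

```python
import math

def quasi_static_threshold_load(E, h, G_IIc, nu):
    """Quasi-static delamination threshold load (N) for an isotropic plate:
    P_c = sqrt(8 * pi^2 * E * h^3 * G_IIc / (9 * (1 - nu^2)))."""
    return math.sqrt(8.0 * math.pi**2 * E * h**3 * G_IIc / (9.0 * (1.0 - nu**2)))

# Illustrative laminate properties (placeholders, not values from the paper)
E, h, G_IIc, nu = 60e9, 2e-3, 800.0, 0.3      # Pa, m, J/m^2, dimensionless
P_static = quasi_static_threshold_load(E, h, G_IIc, nu)
P_impact = 1.21 * P_static                    # ~21% higher under small-mass impact
print(f"quasi-static: {P_static/1e3:.2f} kN, small-mass impact: {P_impact/1e3:.2f} kN")
```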

Relevance:

20.00%

Publisher:

Abstract:

BACKGROUND: To compare the ability of Glaucoma Progression Analysis (GPA) and Threshold Noiseless Trend (TNT) programs to detect visual-field deterioration.

METHODS: Patients with open-angle glaucoma followed for a minimum of 2 years and with a minimum of seven reliable visual fields were included. Progression was assessed subjectively by four masked glaucoma experts, and compared with GPA and TNT results. Each case was judged to be stable, deteriorated, or suspicious of deterioration.

RESULTS: A total of 56 eyes of 42 patients were followed with a mean of 7.8 (SD 1.0) tests over an average of 5.5 (SD 1.04) years. Interobserver agreement to detect progression was good (mean kappa = 0.57). Progression was detected in 10-19 eyes by the experts, in six by GPA and in 24 by TNT. Using the consensus expert opinion as the gold standard (four clinicians detected progression), the GPA sensitivity and specificity were 75% and 83%, respectively, while the TNT sensitivity and specificity were 100% and 77%, respectively.

CONCLUSION: TNT showed greater concordance with the experts than GPA in the detection of visual-field deterioration. GPA showed a high specificity but lower sensitivity, mainly detecting cases of high focality and pronounced mean defect slopes.
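
For reference, a minimal sketch of how sensitivity and specificity are computed against a consensus gold standard; the confusion-matrix counts below are invented for illustration, not taken from the study.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical program flagging 3 of 4 truly progressing eyes and 9 of 52 stable eyes
print(sensitivity_specificity(tp=3, fn=1, tn=43, fp=9))
```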

Relevance:

20.00%

Publisher:

Abstract:

Purpose: The authors sought to quantify neighboring and distant interpoint correlations of threshold values within the visual field in patients with glaucoma. Methods: Visual fields of patients with confirmed or suspected glaucoma were analyzed (n = 255). One eye per patient was included. Patients were examined using program 32 of the Octopus 1-2-3. Linear regression analysis between each location and every other point of the visual field was performed, and the correlation coefficient was calculated. The degree of correlation was categorized as high (r > 0.66), moderate (0.66 ≥ r > 0.33), or low (r ≤ 0.33). The standard error of threshold estimation was calculated. Results: Most locations of the visual field had high and moderate correlations with neighboring points and with distant locations corresponding to the same nerve fiber bundle. Locations of the visual field had low correlations with those of the opposite hemifield, with the exception of locations temporal to the blind spot. The standard error of threshold estimation increased from 0.6 to 0.9 dB with an r reduction of 0.1. Conclusion: Locations of the visual field have the highest interpoint correlation with neighboring points and with distant points in areas corresponding to the distribution of the retinal nerve fiber layer. The quantification of interpoint correlations may be useful in the design and interpretation of visual field tests in patients with glaucoma.
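
A small sketch of this kind of interpoint-correlation analysis on synthetic data; the patient count matches the abstract, but the number of test locations, the simulated thresholds, and the helper names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_locations = 255, 76     # 76 is illustrative; use the program's actual test-point count
thresholds = rng.normal(25, 5, (n_patients, n_locations))   # synthetic sensitivities in dB

corr = np.corrcoef(thresholds, rowvar=False)   # n_locations x n_locations correlation matrix

def category(r):
    if r > 0.66:
        return "high"
    if r > 0.33:
        return "moderate"
    return "low"

# Correlation of the first location with every other location, categorised
print([category(r) for i, r in enumerate(corr[0]) if i != 0][:10])
```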

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To evaluate the sensitivity and specificity of the screening mode of the Humphrey-Welch Allyn frequency-doubling technology (FDT), Octopus tendency-oriented perimetry (TOP), and the Humphrey Swedish Interactive Threshold Algorithm (SITA)-fast (HSF) in patients with glaucoma. DESIGN: A comparative consecutive case series. METHODS: This was a prospective study conducted in the glaucoma unit of an academic department of ophthalmology. One eye of 70 consecutive glaucoma patients and 28 age-matched normal subjects was studied. Eyes were examined with the program C-20 of FDT, G1-TOP, and 24-2 HSF in one visit and in random order. The gold standard for glaucoma was presence of a typical glaucomatous optic disk appearance on stereoscopic examination, which was judged by a glaucoma expert. The sensitivity and specificity, positive and negative predictive value, and receiver operating characteristic (ROC) curves of two algorithms for the FDT screening test, two algorithms for TOP, and three algorithms for HSF, as defined before the start of this study, were evaluated. The time required for each test was also analyzed. RESULTS: Values for area under the ROC curve ranged from 82.5% to 93.9%. The largest area (93.9%) under the ROC curve was obtained with the FDT criterion defining abnormality as presence of at least one abnormal location. Mean test time was 1.08 ± 0.28 minutes, 2.31 ± 0.28 minutes, and 4.14 ± 0.57 minutes for the FDT, TOP, and HSF, respectively. The difference in testing time was statistically significant (P < .0001). CONCLUSIONS: The C-20 FDT, G1-TOP, and 24-2 HSF appear to be useful tools to diagnose glaucoma. The C-20 FDT and G1-TOP tests take approximately one-quarter and one-half, respectively, of the time taken by the 24-2 HSF. © 2002 by Elsevier Science Inc. All rights reserved.
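
As a side note, a minimal sketch of an ROC-area computation (Mann-Whitney formulation) of the kind used to compare such tests; the scores below are invented, not the study's data.

```python
def auc(scores_diseased, scores_healthy):
    """Probability that a randomly chosen glaucoma eye scores higher than a
    randomly chosen normal eye (ties count as half)."""
    wins = 0.0
    for d in scores_diseased:
        for h in scores_healthy:
            if d > h:
                wins += 1.0
            elif d == h:
                wins += 0.5
    return wins / (len(scores_diseased) * len(scores_healthy))

glaucoma = [5, 7, 9, 6, 8, 4]   # e.g. number of abnormal test locations per eye
normal   = [0, 1, 2, 0, 3, 1]
print(f"AUC = {auc(glaucoma, normal):.3f}")
```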

Relevance:

20.00%

Publisher:

Abstract:

Before a natural sound can be recognized, an auditory signature of its source must be learned through experience. Here we used random waveforms to probe the formation of new memories for arbitrary complex sounds. A behavioral measure was designed, based on the detection of repetitions embedded in noises up to 4 s long. Unbeknownst to listeners, some noise samples reoccurred randomly throughout an experimental block. Results showed that repeated exposure induced learning for otherwise totally unpredictable and meaningless sounds. The learning was unsupervised and resilient to interference from other task-relevant noises. When memories were formed, they emerged rapidly, performance became abruptly near-perfect, and multiple noises were remembered for several weeks. The acoustic transformations to which recall was tolerant suggest that the learned features were local in time. We propose that rapid sensory plasticity could explain how the auditory brain creates useful memories from the ever-changing, but sometimes repeating, acoustical world. © 2010 Elsevier Inc.
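
A hedged sketch of how such a repeated-noise stimulus can be constructed; the sampling rate, durations, and function names are assumptions, not the study's exact stimulus code.

```python
import numpy as np

fs = 44100                                   # assumed sampling rate
rng = np.random.default_rng(1)

def repeated_noise(total_s, seed_s):
    """Gaussian noise of total_s seconds built by repeating a seed_s-second seed."""
    seed = rng.standard_normal(int(seed_s * fs))
    n_repeats = int(np.ceil(total_s / seed_s))
    return np.tile(seed, n_repeats)[: int(total_s * fs)]

stimulus = repeated_noise(total_s=4.0, seed_s=2.0)   # 4-s noise containing a repetition
print(stimulus.shape)
```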

Relevance:

20.00%

Publisher:

Abstract:

We propose a low-complexity closed-loop spatial multiplexing method with limited feedback over multi-input-multi-output (MIMO) fading channels. The transmit adaptation is simply performed by selecting transmit antennas (or substreams) by comparing their signal-to-noise ratios to a given threshold with a fixed nonadaptive constellation and fixed transmit power per substream. We analyze the performance of the proposed system by deriving closed-form expressions for spectral efficiency, average transmit power, and bit error rate (BER). Depending on practical system design constraints, the threshold is chosen to maximize the spectral efficiency (or minimize the average BER) subject to average transmit power and average BER (or spectral efficiency) constraints, respectively. We present numerical and Monte Carlo simulation results that validate our analysis. Compared to open-loop spatial multiplexing and other approaches that select the best antenna subset in spatial multiplexing, the numerical results illustrate that the proposed technique obtains significant power gains for the same BER and spectral efficiency. We also provide numerical results that show improvement over rate-adaptive orthogonal space-time block coding, which requires highly complex constellation adaptation. We analyze the impact of feedback delay using analytical and Monte Carlo approaches. The proposed approach is arguably the simplest possible adaptive spatial multiplexing system from an implementation point of view. However, our approach and analysis can be extended to other systems using multiple constellations and power levels.
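
A Monte Carlo sketch of the threshold-based substream selection idea, under assumed conditions (i.i.d. Rayleigh fading per substream, fixed BPSK, one feedback bit per substream); it illustrates the selection rule rather than reproducing the paper's closed-form analysis.

```python
import numpy as np
from scipy.special import erfc

def qfunc(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def simulate(n_streams=4, avg_snr_db=10.0, threshold_db=5.0, n_trials=200_000, seed=0):
    rng = np.random.default_rng(seed)
    avg_snr = 10.0 ** (avg_snr_db / 10.0)
    thr = 10.0 ** (threshold_db / 10.0)
    snr = rng.exponential(avg_snr, size=(n_trials, n_streams))  # Rayleigh fading -> exponential SNR
    active = snr >= thr                                # one feedback bit per substream
    spectral_eff = active.mean() * n_streams           # bits/s/Hz with 1 bit per active stream
    ber = qfunc(np.sqrt(2.0 * snr[active])).mean()     # BPSK BER on active substreams
    return spectral_eff, ber

eff, ber = simulate()
print(f"spectral efficiency ~ {eff:.2f} bit/s/Hz, average BER ~ {ber:.2e}")
```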

Relevance:

20.00%

Publisher:

Abstract:

We employ the time-dependent R-matrix (TDRM) method to calculate anisotropy parameters for positive and negative sidebands of selected harmonics generated by two-color two-photon above-threshold ionization of argon. We consider odd harmonics of an 800-nm field ranging from the 13th to 19th harmonic, overlapped by a fundamental 800-nm IR field. The anisotropy parameters obtained using the TDRM method are compared with those obtained using a second-order perturbation theory with a model potential approach and a soft photon approximation approach. Where available, a comparison is also made to published experimental results. All three theoretical approaches provide similar values for anisotropy parameters. The TDRM approach obtains values that are closest to published experimental values. At high photon energies, the differences between each of the theoretical methods become less significant.
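
For orientation, a small sketch of how anisotropy parameters define a photoelectron angular distribution for a two-photon process, W(θ) ∝ 1 + β2 P2(cos θ) + β4 P4(cos θ); the β values below are placeholders, not results from the paper.

```python
import numpy as np

def angular_distribution(theta, beta2, beta4):
    """W(theta) proportional to 1 + beta2*P2(cos theta) + beta4*P4(cos theta)."""
    c = np.cos(theta)
    p2 = 0.5 * (3 * c**2 - 1)
    p4 = (35 * c**4 - 30 * c**2 + 3) / 8.0
    return (1.0 + beta2 * p2 + beta4 * p4) / (4.0 * np.pi)

theta = np.linspace(0, np.pi, 7)
print(angular_distribution(theta, beta2=1.2, beta4=-0.3))
```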

Relevance:

20.00%

Publisher:

Abstract:

Sounds such as the voice or musical instruments can be recognized on the basis of timbre alone. Here, sound recognition was investigated with severely reduced timbre cues. Short snippets of naturally recorded sounds were extracted from a large corpus. Listeners were asked to report a target category (e.g., sung voices) among other sounds (e.g., musical instruments). All sound categories covered the same pitch range, so the task had to be solved on timbre cues alone. The minimum duration for which performance was above chance was found to be short, on the order of a few milliseconds, with the best performance for voice targets. Performance was independent of pitch and was maintained when stimuli contained less than a full waveform cycle. Recognition was not generally better when the sound snippets were time-aligned with the sound onset compared to when they were extracted with a random starting time. Finally, performance did not depend on feedback or training, suggesting that the cues used by listeners in the artificial gating task were similar to those relevant for longer, more familiar sounds. The results show that timbre cues for sound recognition are available at a variety of time scales, including very short ones.

Relevance:

20.00%

Publisher:

Abstract:

The environmental quality of land can be assessed by calculating relevant threshold values, which differentiate between concentrations of elements resulting from geogenic and diffuse anthropogenic sources and concentrations generated by point sources of elements. A simple process allowing the calculation of these typical threshold values (TTVs) was applied across a region of highly complex geology (Northern Ireland) to six elements of interest; arsenic, chromium, copper, lead, nickel and vanadium. Three methods for identifying domains (areas where a readily identifiable factor can be shown to control the concentration of an element) were used: k-means cluster analysis, boxplots and empirical cumulative distribution functions (ECDF). The ECDF method was most efficient at determining areas of both elevated and reduced concentrations and was used to identify domains in this investigation. Two statistical methods for calculating normal background concentrations (NBCs) and upper limits of geochemical baseline variation (ULBLs), currently used in conjunction with legislative regimes in the UK and Finland respectively, were applied within each domain. The NBC methodology was constructed to run within a specific legislative framework, and its use on this soil geochemical data set was influenced by the presence of skewed distributions and outliers. In contrast, the ULBL methodology was found to calculate more appropriate TTVs that were generally more conservative than the NBCs. TTVs indicate what a "typical" concentration of an element would be within a defined geographical area and should be considered alongside the risk that each of the elements pose in these areas to determine potential risk to receptors.
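
A hedged sketch of the ECDF-plus-percentile idea on synthetic data; the domain names, concentrations, and the 95th-percentile cut-off are illustrative assumptions and do not reproduce the NBC or ULBL procedures exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic soil concentrations (mg/kg) for one element across two hypothetical domains
data = {
    "basalt_domain": rng.lognormal(mean=3.0, sigma=0.4, size=500),
    "sandstone_domain": rng.lognormal(mean=2.2, sigma=0.5, size=500),
}

def ecdf(values):
    """Empirical cumulative distribution function as (sorted values, probabilities)."""
    x = np.sort(values)
    p = np.arange(1, len(x) + 1) / len(x)
    return x, p

for domain, conc in data.items():
    x, p = ecdf(conc)
    ttv = np.interp(0.95, p, x)        # illustrative: 95th percentile as the domain TTV
    print(f"{domain}: TTV ~ {ttv:.1f} mg/kg")
```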

Relevance:

20.00%

Publisher:

Abstract:

Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid sequential visual presentation" paradigm (RSVP), which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
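
A minimal sketch of the gating manipulation described in the first experiment; the sampling rate, window construction, and helper names are assumptions, not the study's code.

```python
import numpy as np

fs = 44100                                    # assumed sampling rate
rng = np.random.default_rng(0)

def gated_snippet(sound, duration_ms):
    """Return a raised-cosine-windowed snippet of `duration_ms` milliseconds
    starting at a random position within `sound`."""
    n = max(2, int(round(duration_ms * fs / 1000.0)))
    start = rng.integers(0, len(sound) - n)
    window = 0.5 * (1 - np.cos(2 * np.pi * np.arange(n) / (n - 1)))  # raised cosine
    return sound[start:start + n] * window

recording = rng.standard_normal(fs)           # stand-in for a 1-s recorded voice token
print(gated_snippet(recording, duration_ms=8).shape)
```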

Relevance:

20.00%

Publisher:

Abstract:

Loss-of-mains protection is an important component of the protection systems of embedded generation. The role of loss-of-mains is to disconnect the embedded generator from the utility grid in the event that connection to utility dispatched generation is lost. This is necessary for a number of reasons, including the safety of personnel during fault restoration and the protection of plant against out-of-synchronism reclosure to the mains supply. The incumbent methods of loss-of-mains protection were designed when the installed capacity of embedded generation was low, and known problems with nuisance tripping of the devices were considered acceptable because of the insignificant consequence to system operation. With the dramatic increase in the installed capacity of embedded generation over the last decade, the limitations of current islanding detection methods are no longer acceptable. This study describes a new method of loss-of-mains protection based on phasor measurement unit (PMU) technology, specifically using a low cost PMU device of the authors' design which has been developed for distribution network applications. The proposed method addresses the limitations of the incumbent methods, providing a solution that is free of nuisance tripping and has a zero non-detection zone. This system has been tested experimentally and is shown to be practical, feasible and effective. Threshold settings for the new method are recommended based on data acquired from both the Great Britain and Ireland power systems.
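
As a generic illustration only, one way a PMU-based loss-of-mains test can be framed is as a threshold on the phase-angle drift between local and reference synchrophasors; the logic and trip threshold below are assumptions and are not the settings recommended in the study.

```python
import numpy as np

def islanding_detected(local_angle_deg, ref_angle_deg, trip_threshold_deg=10.0):
    """Return True when the unwrapped local-vs-reference phase-angle difference
    drifts beyond the threshold, suggesting loss of synchronism with the grid."""
    diff = np.unwrap(np.radians(np.asarray(local_angle_deg) - np.asarray(ref_angle_deg)))
    drift_deg = np.degrees(diff - diff[0])
    return bool(np.any(np.abs(drift_deg) > trip_threshold_deg))

t = np.arange(0.0, 1.0, 0.02)                        # 50 synchrophasor frames per second
ref = (360.0 * 50.0 * t) % 360.0                     # nominal 50 Hz reference angle
local = ref + np.cumsum(np.full_like(t, 0.5))        # local angle drifting after islanding
print(islanding_detected(local, ref))                # -> True
```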