946 results for TO-NOISE RATIO


Relevância:

100.00%

Publicador:

Resumo:

Acute acoustic trauma (AAT) is a sudden sensorineural hearing loss caused by exposure of the hearing organ to acoustic overstimulation, typically an intense sound impulse. Hyperbaric oxygen therapy (HOT), which favors repair of the microcirculation, can potentially be used to treat it. Hence, this study aimed to assess the effects of HOT on guinea pigs exposed to acoustic trauma. Fifteen guinea pigs were exposed to noise in the 4-kHz range at an intensity of 110 dB sound pressure level for 72 h. They were assessed by brainstem auditory evoked potential (BAEP) and by distortion product otoacoustic emission (DPOAE) before and after exposure and after HOT at 2.0 absolute atmospheres for 1 h. The cochleae were then analyzed using scanning electron microscopy (SEM). There was a statistically significant difference in the signal-to-noise ratio of the DPOAE amplitudes for the 1- to 4-kHz frequencies, and the SEM findings revealed damaged outer hair cells (OHC) after exposure to noise, with recovery after HOT (p = 0.0159), which did not occur for BAEP thresholds and amplitudes (p = 0.1593). The electrophysiological BAEP data did not demonstrate effectiveness of HOT against AAT damage. However, there was improvement in the anatomical pattern of damage detected by SEM, with a significant reduction in the number of injured cochlear OHC, and recovery of their functionality was detected by DPOAE.


Resumo:

Aim - A quantitative primary study to determine whether increasing source-to-image distance (SID), with and without the use of automatic exposure control (AEC) for antero-posterior (AP) pelvis imaging, reduces dose whilst still producing an image of diagnostic quality. Methods - Using a computed radiography (CR) system, an anthropomorphic pelvic phantom was positioned for an AP examination using the table bucky. SID was initially set at 110 cm, with tube potential set at a constant 75 kVp, the two outer chambers selected and a fine focal spot of 0.6 mm. SID was then varied from 90 cm to 140 cm with two exposures made at each 5 cm interval, one using the AEC and another with a constant 16 mAs derived from the initial exposure. Effective dose (E) and entrance surface dose (ESD) were calculated for each acquisition. Seven experienced observers blindly graded image quality using a 5-point Likert scale and 2 Alternative Forced Choice software. Signal-to-noise ratio (SNR) was calculated for comparison. For each acquisition, femoral head diameter was also measured as an indication of magnification. Results - When SID was increased from 110 cm to 140 cm, E and ESD fell by 3.7% and 17.3% respectively when using AEC, and by 50.13% and 41.79% respectively when the constant mAs was used. No statistically significant difference (t-test, p = 0.967) in image quality was detected when increasing SID, with an intra-observer correlation of 0.77 (95% confidence level). SNR reduced slightly for both AEC (38%) and no AEC (36%) with increasing SID. Conclusion - For CR, increasing SID significantly reduces both E and ESD for AP pelvis imaging without adversely affecting image quality.
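The two quantities compared in this abstract can be sketched in a few lines. The helper names below are hypothetical (the study does not publish code): `snr` is the usual mean-over-standard-deviation measure on a uniform region of interest, and `mas_for_sid` applies the standard inverse-square-law correction used in radiography to keep detector exposure roughly constant when SID changes (the study itself held mAs constant at 16 in its non-AEC arm).

```python
def snr(roi):
    """Signal-to-noise ratio of a uniform region of interest:
    mean pixel value divided by the standard deviation."""
    n = len(roi)
    mean = sum(roi) / n
    std = (sum((p - mean) ** 2 for p in roi) / n) ** 0.5
    return mean / std

def mas_for_sid(mas_ref, sid_ref_cm, sid_new_cm):
    """Inverse-square-law correction: mAs needed at a new SID to keep
    detector exposure roughly equal to a reference setting."""
    return mas_ref * (sid_new_cm / sid_ref_cm) ** 2
```

For example, moving from 110 cm to 140 cm at a reference of 16 mAs would call for roughly 25.9 mAs under this correction, which is why a constant 16 mAs at the longer SID reduces dose.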


Resumo:

Electrocardiographic (ECG) signals are emerging as a recent trend in the field of biometrics. In this paper, we propose a novel ECG biometric system that combines clustering and classification methodologies. Our approach is based on dominant-set clustering, and provides a framework for outlier removal and template selection. It enhances the typical workflows, making them better suited to new ECG acquisition paradigms that use fingers or hand palms, which lead to signals with a lower signal-to-noise ratio that are more prone to noise artifacts. Preliminary results show the potential of the approach, helping to further validate these highly usable setups and ECG signals as a complementary biometric modality.


Resumo:

Objective: To examine the association between obesity and food group intakes, physical activity and socio-economic status in adolescents. Design: A cross-sectional study was carried out in 2008. Cole's cut-off points were used to categorize BMI. Abdominal obesity was defined by a waist circumference at or above the 90th percentile, as well as a waist-to-height ratio at or above 0.500. Diet was evaluated using an FFQ, and the food group consumption was categorized using sex-specific tertiles of each food group amount. Physical activity was assessed via a self-report questionnaire. Socio-economic status was assessed referring to parental education and employment status. Data were analysed separately for girls and boys and the associations among food consumption, physical activity, socio-economic status and BMI, waist circumference and waist-to-height ratio were evaluated using logistic regression analysis, adjusting the results for potential confounders. Setting: Public schools in the Azorean Archipelago, Portugal. Subjects: Adolescents (n 1209) aged 15–18 years. Results: After adjustment, in boys, higher intake of ready-to-eat cereals was a negative predictor while vegetables were a positive predictor of overweight/obesity and abdominal obesity. Active boys had lower odds of abdominal obesity compared with inactive boys. Boys whose mother showed a low education level had higher odds of abdominal obesity compared with boys whose mother presented a high education level. Concerning girls, higher intake of sweets and pastries was a negative predictor of overweight/obesity and abdominal obesity. Girls in tertile 2 of milk intake had lower odds of abdominal obesity than those in tertile 1. Girls whose father had no relationship with employment displayed higher odds of abdominal obesity compared with girls whose father had high employment status.
Conclusions: We have found that different measures of obesity have distinct associations with food group intakes, physical activity and socio-economic status.
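The waist-to-height criterion used in this study is a one-line computation. The function names below are illustrative, not from the paper; the 0.500 cut-off is the one stated in the abstract (the waist-circumference percentile criterion would additionally need the study's age- and sex-specific reference tables).

```python
def waist_to_height_ratio(waist_cm, height_cm):
    """Both measurements in the same unit (centimetres here)."""
    return waist_cm / height_cm

def abdominal_obesity(waist_cm, height_cm, cutoff=0.5):
    """Flags abdominal obesity at a waist-to-height ratio at or above 0.500,
    the definition used in the abstract above."""
    return waist_to_height_ratio(waist_cm, height_cm) >= cutoff
```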


Resumo:

Abstract: Ototoxic substances have been associated with damage to the auditory system, and their effects are potentiated by noise exposure. The present study aims to analyze auditory changes from combined exposure to noise and organic solvents, through a pilot study in the furniture industry sector. Audiological tests were performed on 44 workers, their levels of exposure to toluene, xylene and ethylbenzene were determined, and their levels of noise exposure were evaluated. The results showed that workers are generally exposed to high noise levels, and that workers in the priming-filler and varnish cabin sector have high levels of exposure to toluene. However, no hearing loss was registered among the workers. Workers exposed simultaneously to noise and ototoxic substances did not have a higher degree of hearing loss than those exposed only to noise. Thus, the results of this study did not show that combined exposure to noise and organic solvents is associated with hearing disorders.


Resumo:

OBJECTIVE To evaluate the audiometric profile of civilian pilots according to the noise exposure level. METHODS This observational cross-sectional study evaluated 3,130 male civilian pilots aged between 17 and 59 years. These pilots were subjected to audiometric examinations for obtaining or revalidating the functional capacity certificate in 2011. The degree of hearing loss was classified as normal, suspected noise-induced hearing loss, and no suspected hearing loss with other associated complications. Pure-tone air-conduction audiometry was performed using supra-aural headphones and a pure-tone acoustic stimulus, covering tone thresholds at frequencies between 250 Hz and 6,000 Hz. The independent variables were professional category, length of service, hours of flight, and right or left ear. The dependent variable was suspected noise-induced hearing loss. The noise exposure level was considered low/medium or high, the latter corresponding to > 5,000 flight hours and > 10 years of flight service. RESULTS A total of 29.3% of the pilots had suspected noise-induced hearing loss, which was bilateral in 12.8% and predominant in the left ear (23.7%). The number of pilots with suspected hearing loss increased as the noise exposure level increased. CONCLUSIONS Hearing loss in civilian pilots may be associated with noise exposure during the period of service and hours of flight.


Resumo:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then to decompose a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It assumes that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift-wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices, the latter based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
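The linear mixing model underlying all of the above can be sketched compactly. This is a minimal illustration, not the chapter's algorithm: `mix` forms an observed spectrum from endmember signatures and abundances, and `unmix_ls` recovers abundances by unconstrained least squares; practical unmixing additionally enforces the nonnegativity and sum-to-one constraints on the abundance fractions discussed in the text.

```python
import numpy as np

def mix(M, a):
    """Linear mixing model: the observed spectral vector is M @ a, where the
    columns of M are endmember signatures (one band per row) and a holds the
    abundance fractions."""
    return M @ a

def unmix_ls(x, M):
    """Unconstrained least-squares abundance estimate for a mixed pixel x.
    Real unmixing pipelines add a >= 0 and sum(a) == 1 constraints."""
    a, *_ = np.linalg.lstsq(M, x, rcond=None)
    return a
```

With noiseless data and a full-rank signature matrix, the least-squares estimate recovers the true abundances exactly; noise and endmember variability are what make the constrained and geometric (simplex-based) approaches necessary.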


Resumo:

Doctoral Thesis for PhD degree in Industrial and Systems Engineering


Resumo:

Background: Neutrophil-to-lymphocyte ratio (NLR) has been found to be a good predictor of future adverse cardiovascular outcomes in patients with ST-segment elevation myocardial infarction (STEMI). Changes in the QRS terminal portion have also been associated with adverse outcomes following STEMI. Objective: To investigate the relationship between ECG ischemia grade and NLR in patients presenting with STEMI, in order to determine additional conventional risk factors for early risk stratification. Methods: Patients with STEMI were investigated. The grade of ischemia was analyzed from the ECG performed on admission. White blood cells and subtypes were measured as part of the automated complete blood count (CBC) analysis. Patients were classified into two groups according to the ischemia grade presented on the admission ECG, as grade 2 ischemia (G2I) and grade 3 ischemia (G3I). Results: Patients with G3I had significantly lower mean left ventricular ejection fraction than those in G2I (44.58 ± 7.23 vs. 48.44 ± 7.61, p = 0.001). As expected, in-hospital mortality rate increased proportionally with the increase in ischemia grade (p = 0.036). There were significant differences in percentage of lymphocytes (p = 0.010) and percentage of neutrophils (p = 0.004), and therefore, NLR was significantly different between G2I and G3I patients (p < 0.001). Multivariate logistic regression analysis revealed that only NLR was the independent variable with a significant effect on ECG ischemia grade (odds ratio = 1.254, 95% confidence interval 1.120–1.403, p < 0.001). Conclusion: We found an association between G3I and elevated NLR in patients with STEMI. We believe that such an association might provide an additional prognostic value for risk stratification in patients with STEMI when combined with standardized risk scores.


Resumo:

Abstract Background: Neutrophil-to-lymphocyte ratio (NLR) and platelet-to-lymphocyte ratio (PLR) are inflammatory markers used as prognostic factors in various diseases. The aims of this study were to compare the PLR and the NLR of heart failure (HF) patients with those of age-sex matched controls, to evaluate the predictive value of those markers in detecting HF, and to demonstrate the effect of NLR and PLR on mortality in HF patients during follow-up. Methods: This study included 56 HF patients and 40 controls without HF. All subjects underwent transthoracic echocardiography to evaluate cardiac functions. The NLR and the PLR were calculated as the ratio of neutrophil count to lymphocyte count and as the ratio of platelet count to lymphocyte count, respectively. All HF patients were followed after their discharge from the hospital to evaluate mortality, cerebrovascular events, and re-hospitalization. Results: The NLR and the PLR of HF patients were significantly higher compared to those of the controls (p < 0.01). There was an inverse correlation between the NLR and the left ventricular ejection fraction of the study population (r: -0.409, p < 0.001). The best cut-off value of NLR to predict HF was 3.0, with 86.3% sensitivity and 77.5% specificity, and the best cut-off value of PLR to predict HF was 137.3, with 70% sensitivity and 60% specificity. Only NLR was an independent predictor of mortality in HF patients. A cut-off value of 5.1 for NLR can predict death in HF patients with 75% sensitivity and 62% specificity during a 12.8-month follow-up period on average. Conclusion: NLR and PLR were higher in HF patients than in age-sex matched controls. However, NLR and PLR were not sufficient to establish a diagnosis of HF. NLR can be used to predict mortality during the follow-up of HF patients.
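The two ratios in these studies are direct quotients of routine CBC counts. A minimal sketch, with illustrative function names (the counts must be in the same units, e.g., 10^3 cells/µL); the cut-offs are the ones reported in the abstract above and are study-specific, not diagnostic thresholds.

```python
def nlr(neutrophils, lymphocytes):
    """Neutrophil-to-lymphocyte ratio from an automated CBC."""
    return neutrophils / lymphocytes

def plr(platelets, lymphocytes):
    """Platelet-to-lymphocyte ratio from an automated CBC."""
    return platelets / lymphocytes

def above_hf_cutoff(nlr_value, plr_value):
    """Study-specific cut-offs from the abstract above:
    NLR 3.0 (86.3% sens., 77.5% spec.), PLR 137.3 (70% sens., 60% spec.)."""
    return nlr_value >= 3.0 or plr_value >= 137.3
```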


Resumo:

A method of objectively determining imaging performance for a mammography quality assurance programme for digital systems was developed. The method is based on the assessment of the visibility of a spherical microcalcification of 0.2 mm using a quasi-ideal observer model. It requires the assessment of the spatial resolution (modulation transfer function) and the noise power spectra of the systems. The contrast is measured using a 0.2-mm thick Al sheet and polymethylmethacrylate (PMMA) blocks. The minimal image quality was defined as that giving a target contrast-to-noise ratio (CNR) of 5.4. Several evaluations of this objective method for evaluating image quality in mammography quality assurance programmes have been considered on computed radiography (CR) and digital radiography (DR) mammography systems. The measurement gives the threshold CNR necessary to reach the minimum standard image quality required with regard to the visibility of a 0.2-mm microcalcification. This method may replace the CDMAM image evaluation and simplify the threshold contrast visibility test used in mammography quality assurance.
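The CNR figure of merit used here is commonly computed from two regions of interest, one over the contrast object (e.g., the Al sheet) and one over the background. A minimal sketch with illustrative names; the full method in the abstract additionally folds in MTF and noise power spectra, which this quotient does not capture.

```python
def cnr(roi_signal, roi_background):
    """Contrast-to-noise ratio: difference of the mean ROI values divided by
    the standard deviation of the background ROI."""
    mean_s = sum(roi_signal) / len(roi_signal)
    n_b = len(roi_background)
    mean_b = sum(roi_background) / n_b
    std_b = (sum((p - mean_b) ** 2 for p in roi_background) / n_b) ** 0.5
    return (mean_s - mean_b) / std_b

TARGET_CNR = 5.4  # minimum image-quality target stated in the text above
```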


Resumo:

BACKGROUND: We sought to improve upon previously published statistical modeling strategies for binary classification of dyslipidemia for general population screening purposes based on the waist-to-hip circumference ratio and body mass index anthropometric measurements. METHODS: Study subjects were participants in WHO-MONICA population-based surveys conducted in two Swiss regions. Outcome variables were based on the total serum cholesterol to high density lipoprotein cholesterol ratio. The other potential predictor variables were gender, age, current cigarette smoking, and hypertension. The models investigated were: (i) linear regression; (ii) logistic classification; (iii) regression trees; (iv) classification trees (iii and iv are collectively known as "CART"). Binary classification performance of the region-specific models was externally validated by classifying the subjects from the other region. RESULTS: Waist-to-hip circumference ratio and body mass index remained modest predictors of dyslipidemia. Correct classification rates for all models were 60-80%, with marked gender differences. Gender-specific models provided only small gains in classification. The external validations provided assurance about the stability of the models. CONCLUSIONS: There were no striking differences between either the algebraic (i, ii) vs. non-algebraic (iii, iv), or the regression (i, iii) vs. classification (ii, iv) modeling approaches. Anticipated advantages of the CART vs. simple additive linear and logistic models were less than expected in this particular application with a relatively small set of predictor variables. CART models may be more useful when considering main effects and interactions between larger sets of predictor variables.
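Of the four modeling strategies compared, logistic classification (ii) is the easiest to make concrete. A minimal sketch with synthetic one-feature data, not the WHO-MONICA models themselves: plain stochastic gradient descent on the logistic loss, with a 0.5 probability threshold for the binary dyslipidemia call.

```python
import math

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Gradient-descent logistic regression; w[0] is the intercept,
    w[1:] the feature weights. X is a list of feature lists, y is 0/1."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability
            g = p - yi                      # gradient of the log loss
            w[0] -= lr * g
            for j, xj in enumerate(xi):
                w[j + 1] -= lr * g * xj
    return w

def classify(w, xi, threshold=0.5):
    """Binary classification at a 0.5 probability threshold."""
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return (1.0 / (1.0 + math.exp(-z))) >= threshold
```

In the study's setting the feature list would hold waist-to-hip ratio, BMI and the other predictors (gender, age, smoking, hypertension), and correct classification rates would be estimated by external validation on the other region, as described above.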


Resumo:

RATIONALE AND OBJECTIVES: Recent developments in magnetic resonance imaging have enabled free-breathing coronary MRA (cMRA) using steady-state free precession (SSFP) for endogenous contrast. The purpose of this study was a systematic comparison of SSFP cMRA with standard T2-prepared gradient-echo and spiral cMRA. METHODS: Navigator-gated free-breathing T2-prepared SSFP, T2-prepared gradient-echo and T2-prepared spiral cMRA were performed in 18 healthy swine (body weight 45–68 kg). Image quality was assessed subjectively, and signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR) and vessel sharpness were compared. RESULTS: SSFP cMRA allowed for high-quality cMRA during free breathing, with substantial improvements in SNR, CNR and vessel sharpness when compared with standard T2-prepared gradient-echo imaging. Spiral imaging demonstrated the highest SNR, while image quality score and vessel definition were best for SSFP imaging. CONCLUSION: Navigator-gated free-breathing T2-prepared SSFP cMRA is a promising new imaging approach for high-signal and high-contrast imaging of the coronary arteries with improved vessel border definition.