158 results for Refract index matching
Abstract:
The applicability of ultra-short-term wind power prediction (USTWPP) models is reviewed. The proposed USTWPP method extracts features from historical wind power time series (WPTS) data and classifies every short WPTS into one of several subsets, each well defined by a stationary pattern. All WPTS that match none of the stationary patterns are sorted into a nonstationary-pattern subset. Each WPTS subset requires a USTWPP model specially optimized for it offline. For online application, the pattern of the most recent short WPTS is recognized, and the corresponding prediction model is then invoked for USTWPP. The validity of the proposed method is verified by simulations.
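A minimal sketch of the classify-then-dispatch scheme described above; the feature set, pattern centroids, distance threshold and per-pattern predictors are all illustrative assumptions rather than the paper's actual design:

```python
import numpy as np

def extract_features(wpts):
    """Illustrative features for a short wind power time series."""
    t = np.arange(len(wpts))
    slope = np.polyfit(t, wpts, 1)[0]          # linear trend
    return np.array([wpts.mean(), wpts.std(), slope])

def classify_pattern(wpts, centroids, threshold=1.0):
    """Assign the series to the nearest stationary pattern, or to the
    nonstationary subset if no pattern is close enough."""
    f = extract_features(wpts)
    dists = [np.linalg.norm(f - c) for c in centroids]
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else len(centroids)  # last = nonstationary

def predict(wpts, centroids, models):
    """Dispatch to the model optimized offline for the recognized pattern."""
    return models[classify_pattern(wpts, centroids)](wpts)

# Hypothetical setup: two stationary patterns plus a nonstationary fallback.
centroids = [np.array([0.5, 0.05, 0.0]), np.array([0.7, 0.2, 0.01])]
models = [lambda s: s[-1],                      # persistence
          lambda s: s[-1] + (s[-1] - s[-2]),    # trend extrapolation
          lambda s: s.mean()]                   # fallback for nonstationary series
print(predict(np.random.rand(12), centroids, models))
```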
Abstract:
PURPOSE. Raman spectroscopy is an effective probe of advanced glycation end products (AGEs) in Bruch's membrane. However, because it is the outermost layer of the retina, this extracellular matrix is difficult to analyze in vivo with current technology. The sclera shares many compositional characteristics with Bruch's membrane, but it is much easier to access for in vivo Raman analysis. This study investigated whether sclera could act as a surrogate tissue for Raman-based investigation of pathogenic AGEs in Bruch's membrane.
METHODS. Human sclera and Bruch's membrane were dissected from postmortem eyes (n = 67) across a wide age range (33-92 years) and were probed by Raman spectroscopy. The biochemical composition, AGEs, and their age-related trends were determined from data reduction of the Raman spectra and compared for the two tissues.
RESULTS. Raman microscopy demonstrated that Bruch's membrane and sclera are composed of a similar range of biomolecules but in distinct relative quantities, for example in the heme/collagen and elastin/collagen ratios. Both tissues accumulated AGEs, and these correlated with chronological age (R² = 0.824 and R² = 0.717 for sclera and Bruch's membrane, respectively). The sclera accumulated AGE adducts more slowly than Bruch's membrane: models of overall age-related change gave a rate roughly one-fourth that of Bruch's membrane, yet still showed a significant increase with age (P < 0.05).
CONCLUSIONS. The results suggest that the sclera is a viable surrogate marker for estimating AGE accumulation in Bruch's membrane and for reliably predicting chronological age. These findings also suggest that sclera could be a useful target tissue for future patient-based, Raman spectroscopy studies. (Invest Ophthalmol Vis Sci 2011;52:1593-1598) DOI:10.1167/iovs.10-6554
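The reported age trends are, in essence, linear regressions of a Raman-derived AGE score against donor age. A minimal sketch with synthetic placeholder scores (not the study's measurements) shows how the rates and R² values of the two tissues can be compared:

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(33, 92, 67)                 # donor ages, matching the study's range
# Hypothetical AGE scores derived from Raman spectra (placeholder data only).
bruchs = 0.020 * age + rng.normal(0, 0.15, age.size)
sclera = 0.005 * age + rng.normal(0, 0.05, age.size)

def fit(x, y):
    """Least-squares slope (accumulation rate) and coefficient of determination."""
    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    return slope, r2

for name, score in [("Bruch's membrane", bruchs), ("sclera", sclera)]:
    slope, r2 = fit(age, score)
    print(f"{name}: rate = {slope:.4f} per year, R^2 = {r2:.3f}")
```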
Abstract:
This paper presents a new approach to speech enhancement from single-channel measurements involving both noise and channel distortion (i.e., convolutional noise), and demonstrates its applications to robust speech recognition and to improving noisy speech quality. The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. It adds three novel developments to our previous LMS research. First, we address channel distortion as well as additive noise. Second, we present an improved method for modeling noise for speech estimation. Third, we present an iterative algorithm that updates the noise and channel estimates of the corpus data model. In speech recognition experiments on the Aurora 4 database, using our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In a further comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to those of the other methods for noisy speech enhancement.
Abstract:
This paper presents a new approach to single-channel speech enhancement involving both noise and channel distortion (i.e., convolutional noise). The approach is based on finding longest matching segments (LMS) from a corpus of clean, wideband speech. It adds three novel developments to our previous LMS research. First, we address channel distortion as well as additive noise. Second, we present an improved method for modeling noise. Third, we present an iterative algorithm for improved speech estimates. In speech recognition experiments on the Aurora 4 database, using our enhancement approach as a preprocessor for feature extraction significantly improved the performance of a baseline recognition system. In a further comparison against conventional enhancement algorithms, both the PESQ and the segmental SNR ratings of the LMS algorithm were superior to those of the other methods for noisy speech enhancement. Index Terms: corpus-based speech model, longest matching segment, speech enhancement, speech recognition
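As a toy illustration of the longest-matching-segment idea (not the paper's iterative noise and channel estimation), the sketch below scans a clean corpus of feature vectors for the longest contiguous segment that stays frame-by-frame close to the noisy input:

```python
import numpy as np

def longest_matching_segment(noisy, corpus, tol=0.5):
    """Brute-force LMS search: for each start frame in the noisy utterance,
    find the longest contiguous corpus segment whose frames all lie within
    `tol` (Euclidean distance) of the corresponding noisy frames."""
    best = (0, 0, 0)  # (length, noisy_start, corpus_start)
    for i in range(len(noisy)):
        for j in range(len(corpus)):
            k = 0
            while (i + k < len(noisy) and j + k < len(corpus)
                   and np.linalg.norm(noisy[i + k] - corpus[j + k]) < tol):
                k += 1
            if k > best[0]:
                best = (k, i, j)
    return best

# Hypothetical 2-D "features": a corpus excerpt reused with small noise added.
rng = np.random.default_rng(1)
corpus = rng.normal(size=(200, 2))
noisy = corpus[50:80] + rng.normal(0, 0.1, (30, 2))
print(longest_matching_segment(noisy, corpus))   # expect a long match near j=50
```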
Abstract:
Quasi-phase matching (QPM) can be used to increase the conversion efficiency of the high harmonic generation (HHG) process. We observed QPM with an improved dual-gas foil target and a 1 kHz, 10 mJ, 30 fs laser system. Phase tuning and enhancement were possible within a spectral range from 17 nm to 30 nm. Furthermore, analytical calculations and numerical simulations were carried out to distinguish QPM from other effects, such as the influence of adjacent jets on each other or the laser-gas interaction. The simulations were performed with a three-dimensional code to investigate the phase matching of the short and long trajectories individually over a large spectral range.
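The mechanism can be illustrated with a one-dimensional coherent sum: harmonic field generated at position z contributes with phase Δk·z, so a continuous medium interferes destructively, while emitting only in every other coherence length (which a modulated multi-jet target approximates) lets the contributions add. A minimal numeric sketch in arbitrary units, not a model of the experiment:

```python
import numpy as np

dk = 2 * np.pi        # phase mismatch (arb. units); coherence length Lc = pi/dk = 0.5
z = np.linspace(0, 4, 4000)
dz = z[1] - z[0]

def harmonic_yield(mask):
    """|integral of exp(i*dk*z) over the emitting regions|^2."""
    return abs(np.sum(np.exp(1j * dk * z) * mask) * dz) ** 2

uniform = np.ones_like(z)                          # continuous medium: near-total cancellation
qpm = (np.floor(z / 0.5) % 2 == 0).astype(float)   # emit only in every other coherence length
print(harmonic_yield(uniform), harmonic_yield(qpm))  # QPM yield grows with the number of jets
```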
Abstract:
In this paper, we propose a sparse multi-carrier index keying (MCIK) method for orthogonal frequency division multiplexing (OFDM) systems, which uses the indices of sparse subcarriers to transmit data and improves signal detection over highly correlated subcarriers. Although a receiver can exploit a power gain through precoding in OFDM, signal detection is usually highly sensitive because orthogonality is not retained in highly dispersive environments. To overcome this, we develop the trade-off between the sparsity of MCIK, correlation, and performance, analyzing the average probability of error propagation caused by incorrect index detection over highly correlated subcarriers. In the asymptotic regime, we show how the sparsity of MCIK should be designed so that it outperforms the classical OFDM system. Based on this property, sparse MCIK-based OFDM is the better choice for low detection error over highly correlated subcarriers.
Abstract:
Multicarrier Index Keying (MCIK) is a recently developed technique that modulates not only the subcarriers but also the indices of the subcarriers. In this paper, a novel low-complexity scheme for detecting subcarrier indices is proposed for MCIK systems, achieving a substantial reduction in complexity over optimal maximum likelihood (ML) detection. For the performance evaluation, a closed-form expression for the pairwise error probability (PEP) of an active subcarrier index and a tight closed-form approximation of the average PEP over multiple subcarrier indices are derived. The theoretical results are validated using simulations, agreeing to within 0.1 dB. Compared to optimal ML detection, the proposed detector achieves a substantial reduction in complexity with only a small loss in error performance (<= 0.6 dB).
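A typical low-complexity alternative to ML index detection ranks subcarriers by received energy and declares the k strongest active. The sketch below contrasts such a detector with an exhaustive ML search over activation patterns; the AWGN model, unit-energy symbols and parameters are illustrative assumptions, not the paper's detector:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
n, k, snr = 8, 2, 10.0                       # block size, active carriers, linear SNR

true_set = {2, 7}
y = rng.normal(0, 1/np.sqrt(2*snr), n) + 1j * rng.normal(0, 1/np.sqrt(2*snr), n)
for i in true_set:
    y[i] += 1.0                              # unit-energy symbol on each active carrier

# Low-complexity detector: pick the k largest-energy subcarriers (a simple sort).
energy_det = set(np.argsort(np.abs(y))[-k:])

# ML detector: exhaustive search over all C(n, k) activation patterns.
ml_det = min(combinations(range(n), k),
             key=lambda s: sum(abs(y[i] - 1.0)**2 for i in s)
                         + sum(abs(y[i])**2 for i in set(range(n)) - set(s)))

print(sorted(energy_det), sorted(ml_det), sorted(true_set))
```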
Abstract:
Background: Lung clearance index (LCI) derived from sulfur hexafluoride (SF6) multiple breath washout (MBW) is a sensitive measure of lung disease in people with cystic fibrosis (CF). However, it can be time-consuming, limiting its clinical use. Aim: To compare the repeatability, sensitivity and test duration of LCI derived from washout to 1/30th (LCI1/30), 1/20th (LCI1/20) and 1/10th (LCI1/10) of the initial concentration with 'standard' LCI derived from washout to 1/40th of the initial concentration (LCI1/40). Methods: Triplicate MBW test results from 30 clinically stable people with CF and 30 healthy controls were analysed retrospectively. MBW tests were performed using 0.2% SF6 and a modified Innocor device. All LCI end points were calculated using SimpleWashout software. Repeatability was assessed using the coefficient of variation (CV%). The proportions of people with CF with and without abnormal LCI and forced expiratory volume in 1 s (FEV1) % predicted were compared. Receiver operating characteristic (ROC) curve statistics were calculated. Test durations of all LCI end points were compared using paired t tests. Results: In people with CF, the CV% of LCI1/40 (p=0.16), LCI1/30 (p=0.53), LCI1/20 (p=0.14) and LCI1/10 (p=0.25) did not differ significantly from that of controls. The sensitivity of LCI1/40, LCI1/30 and LCI1/20 to the presence of CF was equal (67%); the sensitivity of LCI1/10 and of FEV1% predicted was lower (53% and 47%, respectively). The area under the ROC curve (95% CI) for LCI1/40, LCI1/30, LCI1/20, LCI1/10 and FEV1% predicted was 0.89 (0.80 to 0.97), 0.87 (0.77 to 0.96), 0.87 (0.78 to 0.96), 0.83 (0.72 to 0.94) and 0.73 (0.60 to 0.86), respectively. The test duration of LCI1/30, LCI1/20 and LCI1/10 was significantly shorter than that of LCI1/40 in people with CF (p<0.0001), equating to time savings of 5%, 9% and 15%, respectively. Conclusions: In this study, LCI1/20 was a repeatable and sensitive measure with diagnostic performance equal to that of LCI1/40. LCI1/20 took less time to measure, potentially offering a more feasible research and clinical end point.
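For context, LCI is the number of lung turnovers (cumulative expired volume divided by functional residual capacity) taken for the end-tidal tracer concentration to fall below a given fraction of its starting value. A minimal sketch of computing the four end points from per-breath data, using a synthetic single-compartment washout (the volumes and decay model are placeholders, not study data):

```python
import numpy as np

def lci(cev_per_breath, concentrations, frc, c0, cutoff):
    """Turnovers (CEV/FRC) at the first breath whose end-tidal tracer
    concentration drops below `cutoff` times the starting concentration."""
    cev = np.cumsum(cev_per_breath)
    below = np.flatnonzero(concentrations <= cutoff * c0)
    return cev[below[0]] / frc if below.size else np.nan

# Synthetic washout: FRC 3 L, tidal volume 0.5 L, simple dilution per breath.
frc, c0 = 3.0, 0.2                       # L, % SF6
vol = np.full(60, 0.5)                   # expired volume per breath (L)
conc = c0 * (frc / (frc + 0.5)) ** np.arange(1, 61)

for name, cutoff in [("LCI1/40", 1/40), ("LCI1/30", 1/30),
                     ("LCI1/20", 1/20), ("LCI1/10", 1/10)]:
    print(name, round(lci(vol, conc, frc, c0, cutoff), 2))
```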
Abstract:
In forensic investigations, it is common for investigators to obtain photographs of evidence left at crime scenes to help them catch the culprit(s). Although fingerprints are the most popular form of evidence, scene-of-crime officers report that more than 30% of the evidence recovered from crime scenes originates from palms. Palmprint evidence left at crime scenes is usually partial, since full palmprints are very rarely obtained. In particular, partial palmprints do not exhibit a structured shape and often do not contain a reference point that can be used to align them for efficient matching. As a result, conventional matching methods based on alignment and minutiae pairing, as used in fingerprint recognition, fail on partial palmprint recognition problems. In this paper, a new partial-to-full palmprint recognition method based on invariant minutiae descriptors is proposed, in which the partial palmprint's minutiae are extracted and treated as the distinctive and discriminating features of each palmprint image. This is achieved by assigning to each minutia a feature descriptor formed from the values of all the orientation histograms of the minutia at hand. The descriptors are therefore rotation invariant and require no image alignment at the matching stage. The results obtained show that the proposed technique yields a recognition rate of 99.2%, giving the judicial jury high confidence in its deliberations and decisions.
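The rotation invariance comes from describing each minutia relative to its own local orientation, so no global alignment is needed. A minimal sketch, assuming descriptors are orientation histograms canonicalized by a circular shift and matched by nearest-neighbor distance (the 8-bin histograms and greedy pairing are illustrative simplifications, not the paper's descriptor):

```python
import numpy as np

def rotation_invariant(hist):
    """Make an orientation histogram rotation invariant by circularly
    shifting it so its dominant bin comes first."""
    h = np.asarray(hist, dtype=float)
    return np.roll(h, -int(np.argmax(h)))

def match(partial_descs, full_descs, threshold=0.3):
    """Greedy nearest-neighbor pairing of partial-print descriptors against
    full-print descriptors; no image alignment is required."""
    pairs = []
    for i, d in enumerate(partial_descs):
        dists = np.linalg.norm(full_descs - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] < threshold:
            pairs.append((i, j))
    return pairs

# Hypothetical 8-bin orientation histograms; the "partial" print is a rotated
# (circularly shifted) subset of the full print's minutiae plus small noise.
rng = np.random.default_rng(3)
full = rng.random((50, 8))
partial = np.roll(full[10:20], 3, axis=1) + rng.normal(0, 0.01, (10, 8))

full_ri = np.array([rotation_invariant(h) for h in full])
partial_ri = np.array([rotation_invariant(h) for h in partial])
print(match(partial_ri, full_ri))   # expect mostly pairs (i, i + 10)
```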
Abstract:
It is shown that, under certain conditions, it is possible to obtain a good speech estimate from noisy speech without requiring noise estimation. We study an implementation of this theory, namely wide matching, for speech enhancement. The new approach performs sentence-wide joint speech segment estimation subject to maximum recognizability to gain noise robustness. Experiments have been conducted to evaluate the new approach with a variety of noises and SNRs ranging from -5 dB to noise-free. The new approach, without any estimation of the noise, significantly outperformed conventional methods in the low-SNR conditions while retaining comparable performance in the high-SNR conditions. We further suggest that the wide matching and deep learning approaches can be combined towards a highly robust and accurate speech estimator.
Abstract:
Given the success of patch-based approaches to image denoising, this paper addresses the ill-posed problem of patch size selection. Large patch sizes improve noise robustness in the presence of good matches, but can also lead to artefacts in textured regions due to the rare-patch effect; smaller patch sizes reconstruct details more accurately but risk over-fitting to the noise in uniform regions. We propose to jointly optimize each matching patch's identity and size for grayscale image denoising, and present several implementations. The new approach effectively selects the largest matching areas, subject to the constraints of the available data and noise level, to improve noise robustness. Experiments on standard test images demonstrate our approach's ability to improve on fixed-size reconstruction, particularly at high noise levels, on smoother image regions.
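A minimal sketch of the size-selection idea: grow the patch and keep the largest radius whose matching error is still consistent with the noise level. The growth rule and the sigma-based threshold are illustrative assumptions, not the paper's optimization:

```python
import numpy as np

def largest_matching_size(img, p, q, sizes, sigma):
    """Return the largest half-width r such that the patches of radius r
    around pixels p and q still match to within the noise level."""
    best = None
    for r in sizes:
        a = img[p[0]-r:p[0]+r+1, p[1]-r:p[1]+r+1]
        b = img[q[0]-r:q[0]+r+1, q[1]-r:q[1]+r+1]
        mse = np.mean((a - b) ** 2)
        if mse <= 3 * sigma ** 2:   # slack above the 2*sigma^2 expected for two noisy copies
            best = r
        else:
            break                   # stop growing once the match breaks down
    return best

# Synthetic test: a flat image plus Gaussian noise, so large patches should match.
rng = np.random.default_rng(4)
sigma = 0.1
img = 0.5 + rng.normal(0, sigma, (64, 64))
print(largest_matching_size(img, (32, 32), (20, 40), sizes=(1, 2, 4, 8), sigma=sigma))
```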
Abstract:
In this paper, we introduce a novel approach to face recognition that simultaneously tackles three combined challenges: 1) uneven illumination; 2) partial occlusion; and 3) limited training data. The new approach performs lighting normalization, occlusion de-emphasis and, finally, face recognition based on finding the largest matching area (LMA) at each point on the face, as opposed to traditional fixed-size local-area-based approaches. Robustness is achieved with novel approaches to feature extraction, LMA-based face image comparison and unseen-data modeling. On the Extended Yale B and AR face databases for face identification, our method, using only a single training image per person, outperforms other methods that use a single training image, and matches or exceeds methods that require multiple training images. On the Labeled Faces in the Wild face verification database, our method outperforms comparable unsupervised methods. We also show that the new method performs competitively even when the training images are corrupted.