931 results for Receiver
Abstract:
The current study discusses new opportunities for secure ground-to-satellite communications using shaped femtosecond pulses that induce spatial hole burning in the atmosphere, with data encoded within super-continua generated by the femtosecond pulses. Refractive index variation across the different layers of the atmosphere may be modelled by assuming that the upper strata of the atmosphere and the troposphere behave as layered composite amorphous dielectric networks composed of resistors and capacitors with different time constants in each layer. Input-output expressions for the dynamics of the networks in the frequency domain provide the transmission characteristics of the propagation medium. Femtosecond pulse shaping may be used to optimize the pulse phase-front and spectral composition across the different layers of the atmosphere. A generic procedure based on evolutionary algorithms to perform the pulse shaping is proposed. In contrast to alternative procedures that would require ab initio modelling and calculation of the propagation constant for the pulse through the atmosphere, the proposed approach is adaptive, compensating for refractive index variations along the column of air between the transmitter and the receiver.
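The adaptive evolutionary loop described in this abstract can be sketched as a simple elitist search over spectral-phase coefficients. The fitness function below is a stand-in for a receiver-side quality measurement; the number of coefficients, population size, mutation scheme, and the quadratic surrogate are all illustrative assumptions, not details from the paper.

```python
import random

random.seed(0)
N_GENES = 8            # spectral-phase coefficients on the pulse shaper (illustrative)
POP, GENERATIONS = 20, 40

def fitness(phases):
    # Placeholder for a measured quality metric at the receiver: here, the
    # closer each coefficient is to an unknown "compensating" profile, the
    # higher the score. A real system would measure this, not compute it.
    target = [0.3 * i for i in range(N_GENES)]   # stand-in channel distortion
    return -sum((p - t) ** 2 for p, t in zip(phases, target))

def mutate(ind, rate=0.3, step=0.2):
    return [g + random.uniform(-step, step) if random.random() < rate else g
            for g in ind]

pop = [[random.uniform(-2, 2) for _ in range(N_GENES)] for _ in range(POP)]
initial_best = max(map(fitness, pop))

for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 4]                       # keep the best quarter unchanged
    pop = elite + [mutate(random.choice(elite)) for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
```

Because the elite individuals are carried over unchanged, the best fitness is monotonically non-decreasing, which is what makes the loop usable as a blind, adaptive compensator.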
Abstract:
Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform on previously unseen data, and thus, how they would perform in clinical practice when there is no real opportunity to adapt the algorithm to the data at hand. To address these comparability, generalizability and clinical applicability issues, we organized a grand challenge that aimed to objectively compare algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease, patients with mild cognitive impairment and healthy controls. The diagnosis based on clinical criteria was used as reference standard, as it was the best available reference despite its known limitations. For evaluation, a previously unseen test set was used consisting of 354 T1-weighted MRI scans with the diagnoses blinded. Fifteen research teams participated with a total of 29 algorithms. The algorithms were trained on a small training set (n = 30) and optionally on data from other sources (e.g., the Alzheimer's Disease Neuroimaging Initiative, the Australian Imaging Biomarkers and Lifestyle flagship study of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume, cortical thickness, shape and intensity. The challenge is open for new submissions via the web-based framework: http://caddementia.grand-challenge.org.
On-line Gaussian mixture density estimator for adaptive minimum bit-error-rate beamforming receivers
Abstract:
We develop an on-line Gaussian mixture density estimator (OGMDE) in the complex-valued domain to facilitate an adaptive minimum bit-error-rate (MBER) beamforming receiver for multiple-antenna-based space-division multiple-access systems. Specifically, the novel OGMDE is proposed to adaptively model the probability density function of the beamformer's output by tracking the incoming data sample by sample. With the aid of the proposed OGMDE, our adaptive beamformer is capable of updating the beamformer's weights sample by sample to directly minimize the achievable bit error rate (BER). We show that this OGMDE-based MBER beamformer outperforms the existing on-line MBER beamformer, known as the least-BER beamformer, in terms of both convergence speed and achievable BER.
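A minimal real-valued sketch of a sample-by-sample mixture-density recursion in the spirit of the OGMDE (the actual estimator operates on complex-valued beamformer outputs; the component count, learning rate, and exact update form here are illustrative assumptions):

```python
import math, random

random.seed(1)
K, eta = 2, 0.05                 # mixture components, learning rate (illustrative)
w   = [0.5, 0.5]                 # mixture weights
mu  = [-1.0, 1.0]                # component means
var = [1.0, 1.0]                 # component variances

def pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

def update(x):
    # Responsibilities of each component for the incoming sample.
    resp = [w[k] * pdf(x, mu[k], var[k]) for k in range(K)]
    s = sum(resp)
    resp = [r / s for r in resp]
    for k in range(K):
        w[k]   += eta * (resp[k] - w[k])                    # weight recursion
        mu[k]  += eta * resp[k] * (x - mu[k])               # mean recursion
        var[k] += eta * resp[k] * ((x - mu[k]) ** 2 - var[k])  # variance recursion

# Stream samples from a two-mode source; the estimator tracks both modes
# one sample at a time, with no batch reprocessing.
for _ in range(4000):
    x = random.gauss(-2.0, 0.5) if random.random() < 0.5 else random.gauss(2.0, 0.5)
    update(x)
```

The weight recursion preserves the normalization constraint (the weights always sum to one), so the estimator remains a valid density at every sample.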
Abstract:
In cooperative communication networks, owing to the nodes' arbitrary geographical locations and individual oscillators, the system is fundamentally asynchronous. Such a timing mismatch may cause rank deficiency of the conventional space-time codes and, thus, performance degradation. One efficient way to overcome this issue is to use delay-tolerant space-time codes (DT-STCs). The existing DT-STCs are designed assuming that the transmitter has no knowledge of the channels. In this paper, we show how the performance of DT-STCs can be improved by utilizing some feedback information. A general framework for designing DT-STCs with limited feedback is first proposed, allowing for flexible system parameters such as the number of transmit/receive antennas, the modulated symbols, and the length of the codewords. Then, a new design method is proposed that combines Lloyd's algorithm and the stochastic gradient-descent algorithm to obtain an optimal codebook of STCs, particularly for systems with a linear minimum-mean-square-error receiver. Finally, simulation results confirm the performance of the newly designed DT-STCs with limited feedback.
Abstract:
To mitigate the inter-carrier interference (ICI) of doubly-selective (DS) fading channels, we consider a hybrid carrier modulation (HCM) system employing discrete partial fast Fourier transform (DPFFT) demodulation and banded minimum mean square error (MMSE) equalization in this letter. We first provide the discrete form of partial FFT demodulation, and then apply the banded MMSE equalization to suppress the residual interference at the receiver. Numerical simulations demonstrate that the proposed algorithm is superior to the single-carrier modulation (SCM) system and the circularly prefixed orthogonal frequency-division multiplexing (OFDM) system over a typical DS channel. Moreover, it represents a good trade-off between computational complexity and performance.
Abstract:
The extensive use of land resources for food production, fibre for construction, wood pulp for paper, removal for extractive industries, sealing for urban and industrial development and as a receiver (either deliberate or accidental) of polluting substances has wrought huge changes in the chemistry, structure and biology of soils, away from their natural state.
Abstract:
I examine the factors underpinning the British radio-equipment sector's particularly poor interwar productivity performance relative to the United States. Differences in socio-legal environments were crucial in allowing key players in the British industry to derive higher monopoly rents than their American counterparts. Higher British rents in turn, had the unintended outcome of stimulating innovation around restrictive patents, initiating a path-dependent process of technical change in favor of expensive multifunctional valves. These valves both raised direct production costs and prevented British firms from following the American path of broadening the radio market beyond the household's prime receiver.
Abstract:
The notions of resolution and discrimination of probability forecasts are revisited. It is argued that the common concept underlying both resolution and discrimination is the dependence (in the sense of probability theory) of forecasts and observations. More specifically, a forecast has no resolution if and only if it has no discrimination, which in turn holds if and only if forecast and observation are stochastically independent. A statistical test for independence is thus also a test for no resolution and, at the same time, for no discrimination. The resolution term in the decomposition of the logarithmic scoring rule and the area under the receiver operating characteristic curve are investigated in this light.
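The claimed equivalence can be illustrated numerically: for a forecast generated independently of the observation, the area under the ROC curve (computed here via the Mann-Whitney statistic) comes out at approximately 0.5, i.e. no discrimination. A toy sketch, with the sample size, event rate, and forecast distribution all invented:

```python
import random, bisect

random.seed(2)
n = 20000
obs      = [random.random() < 0.3 for _ in range(n)]   # binary event, rate 0.3
forecast = [random.random() for _ in range(n)]         # independent of the event

pos = sorted(f for f, o in zip(forecast, obs) if o)        # forecasts on events
neg = sorted(f for f, o in zip(forecast, obs) if not o)    # forecasts on non-events

# AUC = P(forecast for an event exceeds forecast for a non-event):
# count, for each event forecast, how many non-event forecasts lie below it.
hits = sum(bisect.bisect_left(neg, p) for p in pos)
auc = hits / (len(pos) * len(neg))
```

Making the forecast depend on the event (e.g. drawing it from a higher distribution when the event occurs) pushes the AUC away from 0.5, which is the discrimination the abstract refers to.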
Abstract:
The urban boundary layer, above the canopy, is still poorly understood. One of the challenges is obtaining data by sampling more than a few meters above the rooftops, given the spatial and temporal inhomogeneities in both the horizontal and the vertical. Sodars are generally useful tools for ground-based remote sensing of winds and turbulence, but rely on horizontal homogeneity (as do lidars) in building up three-component wind vectors from sampling three or more spatially separated volumes. The time taken for sound to travel to a typical range of 200 m and back is also a limitation. A sodar of radically different design, aimed at addressing these problems, is investigated. It has a single vertical transmitted sound pulse. Doppler-shifted signals are received from a number of volumes around the periphery of the transmitted beam with microphones, each having tight angular sensitivity at zenith angles slightly off-vertical. The spatial spread of sampled volumes is therefore smaller. Having more receiver microphones than a conventional sodar offsets the effect of the smaller zenith angle. More rapid profiling is also possible with a single vertical transmitted beam instead of the usual multiple beams. A prototype design is described, together with initial field measurements. It is found that the beam forming using a single dish antenna and the drift of the sound pulse downwind both give rise to reduced performance compared with expectations. It is concluded that, while the new sodar works in principle, the compromises arising in the design mean that the expected advantages have not been realized.
Abstract:
Scope: The use of biomarkers in the objective assessment of dietary intake is a high priority in nutrition research. The aim of this study was to examine pentadecanoic acid (C15:0) and heptadecanoic acid (C17:0) as biomarkers of dairy food intake. Methods and results: The data used in the present study were obtained as part of the Food4me Study. Estimates of C15:0 and C17:0 from dried blood spots and dairy intakes from a food-frequency questionnaire (FFQ) were obtained from participants (n = 1,180) across 7 countries. Regression analyses were used to explore associations of the biomarkers with dairy intake levels, and receiver operating characteristic (ROC) analyses were used to evaluate the fatty acids. Significant positive associations were found between C15:0 and total intakes of high-fat dairy products. C15:0 showed good ability to distinguish between low and high consumers of high-fat dairy products. Conclusion: C15:0 can be used as a biomarker of high-fat dairy intake and of specific high-fat dairy products. Both C15:0 and C17:0 performed poorly for total dairy intake, highlighting the need for caution when using these fatty acids in epidemiological studies.
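ROC analyses of the kind described often select a biomarker cutoff via the Youden index (sensitivity + specificity - 1), maximized over candidate thresholds. A toy sketch with invented biomarker values, not data from the Food4me Study:

```python
# Hypothetical biomarker concentrations in low and high consumers.
low  = [1.0, 1.2, 1.1, 1.3, 1.4, 1.2]   # low-intake group
high = [1.5, 1.7, 1.6, 1.8, 1.4, 1.9]   # high-intake group

best_cut, best_j = None, -1.0
for cut in sorted(set(low + high)):
    sens = sum(h > cut for h in high) / len(high)   # true-positive rate
    spec = sum(l <= cut for l in low) / len(low)    # true-negative rate
    j = sens + spec - 1                             # Youden index
    if j > best_j:
        best_j, best_cut = j, cut
```

Sweeping every observed value as a candidate threshold is exactly how an empirical ROC curve is traced; the chosen cutoff is the point on that curve farthest above the no-discrimination diagonal.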
Genetic algorithm inversion of the average 1D crustal structure using local and regional earthquakes
Abstract:
Knowing the best 1D model of the crustal and upper mantle structure is useful not only for routine hypocenter determination, but also for linearized joint inversions of hypocenters and 3D crustal structure, where a good choice of the initial model can be very important. Here, we tested the combination of a simple GA inversion with the widely used HYPO71 program to find the best three-layer model (upper crust, lower crust, and upper mantle) by minimizing the overall P- and S-arrival residuals, using local and regional earthquakes in two areas of the Brazilian shield. Results from the Tocantins Province (Central Brazil) and the southern border of the Sao Francisco craton (SE Brazil) indicated an average crustal thickness of 38 and 43 km, respectively, consistent with previous estimates from receiver functions and seismic refraction lines. The GA + HYPO71 inversion produced correct Vp/Vs ratios (1.73 and 1.71, respectively), as expected from Wadati diagrams. Tests with synthetic data showed that the method is robust for the crustal thickness, Pn velocity, and Vp/Vs ratio when using events with distance up to about 400 km, despite the small number of events available (7 and 22, respectively). The velocities of the upper and lower crusts, however, are less well constrained. Interestingly, in the Tocantins Province, the GA + HYPO71 inversion showed a secondary solution (local minimum) for the average crustal thickness, besides the global minimum solution, which was caused by the existence of two distinct domains in the Central Brazil with very different crustal thicknesses. (C) 2010 Elsevier Ltd. All rights reserved.
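The Wadati-diagram check mentioned above rests on a simple relation: for a constant Vp/Vs ratio, the S-P arrival-time difference grows linearly with the P travel time, with slope Vp/Vs - 1. A sketch with synthetic, noise-free arrivals (the travel times are invented, and Vp/Vs = 1.73 is taken from the abstract):

```python
vp_vs_true = 1.73
tp = [5.0, 10.0, 18.0, 25.0, 33.0, 41.0]        # P travel times (s), invented
ts_tp = [(vp_vs_true - 1) * t for t in tp]      # S-P time differences (s)

# Least-squares slope of the Wadati diagram (S-P difference vs. P time).
n = len(tp)
mx, my = sum(tp) / n, sum(ts_tp) / n
slope = sum((x - mx) * (y - my) for x, y in zip(tp, ts_tp)) / \
        sum((x - mx) ** 2 for x in tp)
vp_vs = 1 + slope                               # recovered Vp/Vs ratio
```

With real picks the points scatter about the line, and the fitted slope gives the network-average Vp/Vs independently of the velocity model, which is why it serves as a consistency check on the GA + HYPO71 inversion.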
Abstract:
Data obtained during routine diagnosis of human T-cell lymphotropic virus type 1 (HTLV-1) and 2 (HTLV-2) in "at-risk" individuals from Sao Paulo, Brazil using signal-to-cutoff (S/C) values obtained by first, second, and third generation enzyme immunoassay (EIA) kits, were compared. The highest S/C values were obtained with third generation EIA kits, but no correlation was detected between these values and specific antibody reactivity to HTLV-1, HTLV-2, or untyped HTLV (p = 0.302). In addition, use of these third generation kits resulted in HTLV-1/2 false-positive samples. In contrast, first and second generation EIA kits showed high specificity, and the second generation EIA kits showed the highest efficiency, despite lower S/C values. Using first and second generation EIA kits, significant differences in specific antibody detection of HTLV-1, relative to HTLV-2 (p = 0.019 for first generation and p < 0.001 for second generation EIA kits) and relative to untyped HTLV (p = 0.025 for first generation EIA kits), were observed. These results were explained by the composition and format of the assays. In addition, using receiver operating characteristics (ROC) analysis, a slight adjustment in cutoff values for third generation EIA kits improved their specificities and should be used when HTLV "at-risk" populations from this geographic area are to be evaluated. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
There is an increasing interest in the application of Evolutionary Algorithms (EAs) to induce classification rules. This hybrid approach can benefit areas where classical methods for rule induction have not been very successful. One example is the induction of classification rules in imbalanced domains. Imbalanced data occur when one or more classes heavily outnumber the other classes. Frequently, classical machine learning (ML) classifiers are not able to learn in the presence of imbalanced data sets, inducing classification models that always predict the most numerous classes. In this work, we propose a novel hybrid approach to deal with this problem. We create several balanced data sets, each with all minority-class cases and a random sample of majority-class cases. These balanced data sets are fed to classical ML systems, which produce rule sets. The rule sets are combined to create a pool of rules, and an EA is used to build a classifier from this pool. This hybrid approach has some advantages over undersampling, since it reduces the amount of discarded information, and some advantages over oversampling, since it avoids overfitting. The proposed approach was experimentally analysed, and the results showed an improvement in classification performance, measured as the area under the receiver operating characteristic (ROC) curve.
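The balancing step described in this abstract (every minority-class case plus an equally sized random sample of majority-class cases, repeated to form several sets) can be sketched as follows; the class sizes and number of sets are invented, and the rule-induction and EA stages are not shown:

```python
import random

random.seed(3)
# Hypothetical imbalanced data set: 950 majority cases vs. 50 minority cases.
majority = [("maj", i) for i in range(950)]
minority = [("min", i) for i in range(50)]

def balanced_sets(n_sets=5):
    """Build several balanced sets, each keeping ALL minority cases and
    drawing a fresh random sample of majority cases of the same size."""
    sets = []
    for _ in range(n_sets):
        sample = random.sample(majority, len(minority))  # without replacement
        sets.append(minority + sample)                   # 50 + 50, balanced
    return sets

sets = balanced_sets()
```

Because each set draws a different majority sample, most majority cases appear in at least one set, which is the sense in which the approach discards less information than plain undersampling.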
Abstract:
The Mario Schenberg gravitational wave detector has started its commissioning phase at the Physics Institute of the University of Sao Paulo. We have collected almost 200 h of data from the instrument in order to check its behavior and performance. We have also been developing a data acquisition system for it on a VXI system, composed of an analog-to-digital converter and a GPS receiver for time synchronization, and have been building the software that controls and sets up the data acquisition. Here we present an overview of the Mario Schenberg detector and its data acquisition system, some results from the first commissioning run, and solutions for some problems we have identified.
Abstract:
BACKGROUND: Optical spectroscopy is a noninvasive technique with potential applications for diagnosis of oral dysplasia and early cancer. In this study, we evaluated the diagnostic performance of a depth-sensitive optical spectroscopy (DSOS) system for distinguishing dysplasia and carcinoma from non-neoplastic oral mucosa. METHODS: Patients with oral lesions and volunteers without any oral abnormalities were recruited to participate. Autofluorescence and diffuse reflectance spectra of selected oral sites were measured using the DSOS system. A total of 424 oral sites in 124 subjects were measured and analyzed, including 154 sites in 60 patients with oral lesions and 270 sites in 64 normal volunteers. Measured optical spectra were used to develop computer-based algorithms to identify the presence of dysplasia or cancer. Sensitivity and specificity were calculated using a gold standard of histopathology for patient sites and clinical impression for normal volunteer sites. RESULTS: Differences in oral spectra were observed in: (1) neoplastic versus non-neoplastic sites, (2) keratinized versus nonkeratinized tissue, and (3) shallow versus deep depths within oral tissue. Algorithms based on spectra from 310 nonkeratinized anatomic sites (buccal, tongue, floor of mouth, and lip) yielded an area under the receiver operating characteristic curve of 0.96 in the training set and 0.93 in the validation set. CONCLUSIONS: The ability to selectively target epithelial and shallow stromal depth regions appeared to be diagnostically useful. For nonkeratinized oral sites, the sensitivity and specificity of this objective diagnostic technique were comparable to that of clinical diagnosis by expert observers. Thus, DSOS has potential to augment oral cancer screening efforts in community settings. Cancer 2009;115:1669-79. (C) 2009 American Cancer Society.