67 results for "array processing"
in the Biblioteca Digital da Produ
Abstract:
Sound source localization (SSL) is an essential task in many applications involving speech capture and enhancement. As such, speaker localization with microphone arrays has received significant research attention. Nevertheless, existing SSL algorithms for small arrays still have two significant limitations: lack of range resolution, and accuracy degradation with increasing reverberation. The latter is natural and expected, given that strong reflections can have amplitudes similar to that of the direct signal, but different directions of arrival. Therefore, correctly modeling the room and compensating for the reflections should reduce the degradation due to reverberation. In this paper, we show a stronger result. If modeled correctly, early reflections can be used to provide more information about the source location than would have been available in an anechoic scenario. The modeling not only compensates for the reverberation, but also significantly increases resolution for range and elevation. Thus, we show that under certain conditions and limitations, reverberation can be used to improve SSL performance. Prior attempts to compensate for reverberation tried to model the room impulse response (RIR). However, RIRs change quickly with speaker position, and are nearly impossible to track accurately. Instead, we build a 3-D model of the room, which we use to predict early reflections, which are then incorporated into the SSL estimation. Simulation results with real and synthetic data show that even a simplistic room model is sufficient to produce significant improvements in range and elevation estimation, tasks which would be very difficult when relying only on direct path signal components.
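The early-reflection prediction described above can be illustrated with the classical image-source construction. The sketch below is a minimal version, assuming a shoebox room with walls on the coordinate planes; the function names and geometry are ours, not the paper's. It mirrors the source across each wall and reports how much longer each first-order reflection path is than the direct path, which is the kind of extra geometric information that range and elevation estimation can exploit.

```python
import numpy as np

def first_order_images(source, room_dims):
    """First-order image sources for a shoebox room with walls at
    x=0, x=Lx, y=0, y=Ly, z=0, z=Lz. Mirroring the source across each
    wall yields six virtual sources whose direct paths model the six
    earliest reflections."""
    s = np.asarray(source, dtype=float)
    L = np.asarray(room_dims, dtype=float)
    images = []
    for axis in range(3):
        for wall in (0.0, L[axis]):
            img = s.copy()
            img[axis] = 2.0 * wall - img[axis]  # mirror across the wall
            images.append(img)
    return np.array(images)

def extra_path_delays(source, mic, room_dims, c=343.0):
    """Delay (seconds) of each first-order reflection relative to the
    direct path, i.e. the extra information reverberation carries about
    the source position."""
    mic = np.asarray(mic, dtype=float)
    direct = np.linalg.norm(np.asarray(source, dtype=float) - mic)
    imgs = first_order_images(source, room_dims)
    return (np.linalg.norm(imgs - mic, axis=1) - direct) / c
```

Because the reflected path is always at least as long as the direct path, all relative delays are non-negative; their pattern changes with source range and elevation even when the direction of arrival does not.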
Abstract:
The classical approach for acoustic imaging consists of beamforming, and produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Deconvolution methods have been proposed to enhance acoustic images and have produced significant improvements. Other proposals involve covariance fitting techniques, which avoid deconvolution altogether. However, in their traditional presentation, these enhanced reconstruction methods have very high computational costs, mostly because they have no means of efficiently transforming back and forth between a hypothetical image and the measured data. In this paper, we propose the Kronecker Array Transform (KAT), a fast separable transform for array imaging applications. Under the assumption of a separable array, it enables the acceleration of imaging techniques by several orders of magnitude with respect to the fastest previously available methods, and enables the use of state-of-the-art regularized least-squares solvers. Using the KAT, one can reconstruct images with higher resolutions than previously possible and use more accurate reconstruction techniques, opening new and exciting possibilities for acoustic imaging.
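The separability that a transform like the KAT exploits can be illustrated with the standard Kronecker identity vec(A X B^T) = (B ⊗ A) vec(X): applying a separable operator as two small matrix products instead of one huge matrix-vector product is what delivers the orders-of-magnitude speedups. The sketch below is our own illustration of that identity, not the KAT itself.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 40, 50
A = rng.standard_normal((m, m))
B = rng.standard_normal((n, n))
X = rng.standard_normal((m, n))

# Direct application of the full (m*n x m*n) Kronecker operator:
y_direct = np.kron(B, A) @ X.reshape(-1, order='F')

# Separable application: vec(A X B^T) = (B kron A) vec(X).
# Cost drops from O((mn)^2) to O(mn(m+n)) per application.
y_sep = (A @ X @ B.T).reshape(-1, order='F')

assert np.allclose(y_direct, y_sep)
```

Note the column-major (`order='F'`) vectorization: the identity holds for vec stacking columns, so the reshape order matters.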
Abstract:
In Part I ["Fast Transforms for Acoustic Imaging-Part I: Theory," IEEE Transactions on Image Processing], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix which would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
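The pattern of plugging a fast forward/adjoint pair into a general-purpose regularized least-squares solver can be sketched with SciPy's `LinearOperator` interface. The separable model, sizes, and conditioning below are made up for illustration; this shows the interface idea, not the KAT itself.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

# Hypothetical separable forward model y = vec(A X B^T). The operator
# is applied via two small matrix products rather than one huge
# matrix, which is how a fast transform is exposed to a generic solver.
rng = np.random.default_rng(1)
m, n = 30, 30
A = np.eye(m) + 0.05 * rng.standard_normal((m, m))  # well-conditioned toy model
B = np.eye(n) + 0.05 * rng.standard_normal((n, n))

def matvec(x):
    X = x.reshape(m, n, order='F')
    return (A @ X @ B.T).ravel(order='F')

def rmatvec(y):
    # Adjoint of (B kron A) is (B^T kron A^T): vec(A^T Y B).
    Y = y.reshape(m, n, order='F')
    return (A.T @ Y @ B).ravel(order='F')

op = LinearOperator((m * n, m * n), matvec=matvec, rmatvec=rmatvec)

x_true = rng.standard_normal(m * n)
y = op.matvec(x_true)
# damp > 0 gives Tikhonov-regularized least squares.
x_hat = lsqr(op, y, damp=1e-8, atol=1e-12, btol=1e-12, iter_lim=5000)[0]
```

The solver never sees a dense matrix; it only calls `matvec` and `rmatvec`, so the per-iteration cost is that of the fast transform.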
Abstract:
We present a novel array RLS algorithm with forgetting factor that circumvents the problem of fading regularization, inherent to the standard exponentially-weighted RLS, by allowing for time-varying regularization matrices with generic structure. Simulations in finite precision show the algorithm's superiority as compared to alternative algorithms in the context of adaptive beamforming.
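For reference, the standard exponentially-weighted RLS that such array algorithms improve upon can be sketched as follows. The initial regularization P(0) = delta*I is discounted by the forgetting factor at every step, which is the fading-regularization effect the abstract refers to. Variable names and the identification example are ours.

```python
import numpy as np

def rls_identify(x, d, order, lam=0.99, delta=1e2):
    """Standard exponentially-weighted RLS for FIR system identification.
    lam: forgetting factor; delta: initial regularization P(0) = delta*I,
    whose influence decays like lam^n (the fading-regularization issue)."""
    w = np.zeros(order)
    P = delta * np.eye(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]      # regressor, most recent sample first
        k = P @ u / (lam + u @ P @ u)         # gain vector
        e = d[n] - w @ u                      # a priori error
        w = w + k * e                         # weight update
        P = (P - np.outer(k, u @ P)) / lam    # Riccati (inverse-covariance) update
    return w

# Identify a known FIR channel from noisy input/output data.
rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(2000)
d = np.convolve(x, h, mode='full')[:len(x)] + 1e-3 * rng.standard_normal(len(x))
w = rls_identify(x, d, order=3)
```

The weight vector converges to the true channel taps; array (QR-based) formulations propagate a square-root factor of P instead, which is what gives them their finite-precision robustness.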
Abstract:
In about 50% of first trimester spontaneous abortions, the cause remains undetermined after standard cytogenetic investigation. We evaluated the usefulness of array-CGH in diagnosing chromosome abnormalities in products of conception from first trimester spontaneous abortions. Cell culture was carried out in short- and long-term cultures of 54 specimens and cytogenetic analysis was successful in 49 of them. Cytogenetic abnormalities (numerical and structural) were detected in 22 (44.89%) specimens. Subsequently, array-CGH based on large insert clones spaced at ~1 Mb intervals over the whole genome was used in 17 cases with a normal G-banding karyotype. This revealed chromosome aneuploidies in three additional cases, giving a final total of 51% of cases in which an abnormal karyotype was detected. In keeping with other recently published works, this study shows that array-CGH detects abnormalities in a further ~10% of spontaneous abortion specimens considered to be normal using standard cytogenetic methods. As such, the array-CGH technique may be a suitable complementary test to cytogenetic analysis in cases with a normal karyotype.
Abstract:
Due to the imprecise nature of biological experiments, biological data is often characterized by the presence of redundant and noisy data. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. This is the case for gene expression data, where the equipment and tools currently used frequently produce noisy biological data. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can improve the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. This evaluation analyzes the effectiveness of the techniques investigated in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers over the pre-processed data.
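One classical distance-based noise filter of the kind this line of work evaluates is Wilson's edited nearest neighbour rule: drop any training instance whose label disagrees with the majority of its k nearest neighbours. The minimal NumPy sketch below is illustrative only; the paper does not necessarily use this exact variant, and the toy data is ours.

```python
import numpy as np

def edited_nearest_neighbours(X, y, k=3):
    """Distance-based noise filter: return a boolean mask that keeps
    only instances whose label matches the majority label of their
    k nearest neighbours (self excluded)."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)               # exclude self-distance
    keep = []
    for i in range(len(X)):
        nn = np.argsort(D[i])[:k]             # indices of k nearest neighbours
        majority = np.bincount(y[nn]).argmax()
        keep.append(majority == y[i])
    return np.array(keep)

# Two well-separated clusters plus one mislabelled point.
X = np.array([[0.0, 0], [0.1, 0], [0.2, 0],
              [5.0, 0], [5.1, 0], [5.2, 0],
              [0.05, 0]])
y = np.array([0, 0, 0, 1, 1, 1, 1])           # last point is label noise
mask = edited_nearest_neighbours(X, y, k=3)
```

Training a classifier on `X[mask], y[mask]` then discards the mislabelled instance, which is exactly the accuracy effect such evaluations measure.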
Abstract:
This study addressed the use of conventional and vegetable-origin polyurethane foams to extract C. I. Acid Orange 61 dye. The quantitative determination of the residual dye was carried out with a UV/Vis absorption spectrophotometer. The extraction of the dye was found to depend on various factors such as pH of the solution, foam cell structure, contact time, and dye and foam interactions. After 45 days, better results were obtained for conventional foam when compared to vegetable foam. Despite presenting a lower percentage of extraction, vegetable foam is advantageous as it is considered a polymer with biodegradable characteristics.
Abstract:
This paper describes a new food classification which assigns foodstuffs according to the extent and purpose of the industrial processing applied to them. Three main groups are defined: unprocessed or minimally processed foods (group 1), processed culinary and food industry ingredients (group 2), and ultra-processed food products (group 3). The use of this classification is illustrated by applying it to data collected in the Brazilian Household Budget Survey, which was conducted in 2002/2003 through a probabilistic sample of 48,470 Brazilian households. The average daily food availability was 1,792 kcal/person, of which 42.5% came from group 1 (mostly rice and beans and meat and milk), 37.5% from group 2 (mostly vegetable oils, sugar, and flours), and 20% from group 3 (mostly breads, biscuits, sweets, soft drinks, and sausages). The share of group 3 foods increased with income, and represented almost one third of all calories in higher income households. The impact of the replacement of group 1 foods and group 2 ingredients by group 3 products on the overall quality of the diet, eating patterns and health is discussed.
Abstract:
Orthodox teaching and practice on nutrition and health almost always focuses on nutrients, or else on foods and drinks. Thus, diets that are high in folate and in green leafy vegetables are recommended, whereas diets high in saturated fat and in full-fat milk and other dairy products are not recommended. Food guides such as the US Food Guide Pyramid are designed to encourage consumption of healthier foods, by which is usually meant those higher in vitamins, minerals and other nutrients seen as desirable. What is generally overlooked in such approaches, which currently dominate official and other authoritative information and education programmes, and also food and nutrition public health policies, is food processing. It is now generally acknowledged that the current pandemic of obesity and related chronic diseases has as one of its important causes increased consumption of convenience foods, including pre-prepared foods (1,2). However, the issue of food processing is largely ignored or minimised in education and information about food, nutrition and health, and also in public health policies. A short commentary cannot be comprehensive, and a general proposal such as that made here is bound to have some problems and exceptions. Also, the social, cultural, economic and environmental consequences of food processing are not discussed here. Readers' comments and queries are invited.
Abstract:
This paper presents a rational approach to the design of a catamaran's hydrofoil applied within a modern context of multidisciplinary optimization. The approach used includes the use of response surfaces represented by neural networks and a distributed programming environment that increases the optimization speed. A rational approach to the problem simplifies the complex optimization model; when combined with the distributed dynamic training used for the response surfaces, this model increases the efficiency of the process. The results achieved using this approach have justified this publication.
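The response-surface idea above can be shown in miniature: sample the expensive simulation at a few design points, fit a cheap surrogate, then query the surrogate during optimization. The paper uses neural networks for its surfaces; the sketch below substitutes a quadratic least-squares fit and a made-up objective purely to illustrate the workflow, so every name and value here is an assumption.

```python
import numpy as np

def quadratic_features(X):
    """Design matrix [1, x1, x2, x1^2, x1*x2, x2^2] for 2-D design points."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1**2, x1 * x2, x2**2])

# Stand-in for an expensive simulation (hypothetical drag-like objective,
# minimised at x1 = 1, x2 = -0.5 with value 3.0).
def simulate(X):
    return (X[:, 0] - 1.0) ** 2 + 2.0 * (X[:, 1] + 0.5) ** 2 + 3.0

rng = np.random.default_rng(0)
X_train = rng.uniform(-2, 2, size=(40, 2))    # sampled design points
y_train = simulate(X_train)                   # expensive evaluations

# Fit the surrogate by least squares, then query it cheaply.
coef, *_ = np.linalg.lstsq(quadratic_features(X_train), y_train, rcond=None)
X_query = np.array([[1.0, -0.5]])             # candidate the optimizer proposes
y_pred = quadratic_features(X_query) @ coef
```

An optimizer can now evaluate thousands of candidates against the surrogate, calling the real simulation only to refresh the training set, which is the speedup such multidisciplinary frameworks rely on.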
Abstract:
Previously, we isolated two strains of spontaneous oxidative (SpOx2 and SpOx3) stress mutants of Lactococcus lactis subsp. cremoris. Herein, we compared these mutants to a parental wild-type strain (J60011) and a commercial starter in experimental fermented milk production. Total solid contents of milk and fermentation temperature both affected the acidification profile of the spontaneous oxidative stress-resistant L. lactis mutants during fermented milk production. Fermentation times to pH 4.7 ranged from 6.40 h (J60011) to 9.36 h (SpOx2); Vmax values were inversely proportional to fermentation time. Bacterial counts increased to above 8.50 log10 cfu/mL. The counts of viable SpOx3 mutants were higher than those of the parental wild-type strain in all treatments. All fermented milk products showed post-fermentation acidification after 24 h of storage at 4 °C; they remained stable after one week of storage.
Abstract:
Background: Various neuroimaging studies, both structural and functional, have provided support for the proposal that a distributed brain network is likely to be the neural basis of intelligence. The theory of Distributed Intelligent Processing Systems (DIPS), first developed in the field of Artificial Intelligence, was proposed to adequately model distributed neural intelligent processing. In addition, the neural efficiency hypothesis suggests that individuals with higher intelligence display more focused cortical activation during cognitive performance, resulting in lower total brain activation when compared with individuals who have lower intelligence. This may be understood as a property of the DIPS. Methodology and Principal Findings: In our study, a new EEG brain mapping technique, based on the neural efficiency hypothesis and the notion of the brain as a Distributed Intelligent Processing System, was used to investigate the correlations between IQ evaluated with the WAIS (Wechsler Adult Intelligence Scale) and WISC (Wechsler Intelligence Scale for Children), and the brain activity associated with visual and verbal processing, in order to test the validity of a distributed neural basis for intelligence. Conclusion: The present results support these claims and the neural efficiency hypothesis.
Abstract:
Maltose-binding protein is the periplasmic component of the ABC transporter responsible for the uptake of maltose/maltodextrins. The Xanthomonas axonopodis pv. citri maltose-binding protein MalE has been crystallized at 293 K using the hanging-drop vapour-diffusion method. The crystal belonged to the primitive hexagonal space group P6(1)22, with unit-cell parameters a = 123.59, b = 123.59, c = 304.20 angstrom, and contained two molecules in the asymmetric unit. It diffracted to 2.24 angstrom resolution.
Abstract:
We have modeled, fabricated, and characterized superhydrophobic surfaces with a morphology formed of periodic microstructures which are cavities. This surface morphology is the inverse of that generally reported in the literature when the surface is formed of pillars or protrusions, and has the advantage that when immersed in water the confined air inside the cavities tends to expel the invading water. This differs from the case of a surface morphology formed of pillars or protrusions, for which water can penetrate irreversibly among the microstructures, necessitating complete drying of the surface in order to again recover its superhydrophobic character. We have developed a theoretical model that allows calculation of the microcavity dimensions needed to obtain superhydrophobic surfaces composed of patterns of such microcavities, and that provides estimates of the advancing and receding contact angle as a function of microcavity parameters. The model predicts that the cavity aspect ratio (depth-to-diameter ratio) can be much less than unity, indicating that the microcavities do not need to be deep in order to obtain a surface with enhanced superhydrophobic character. Specific microcavity patterns have been fabricated in polydimethylsiloxane and characterized by scanning electron microscopy, atomic force microscopy, and contact angle measurements. The measured advancing and receding contact angles are in good agreement with the predictions of the model. (C) 2010 American Institute of Physics. [doi:10.1063/1.3466979]
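For orientation, the simplest composite-surface estimate of the apparent contact angle on such a pattern is the classical Cassie-Baxter relation. The paper's own model is more detailed (it also predicts advancing and receding angles), but the sketch below shows how the cavity area fraction alone already pushes the apparent angle well above the Young angle; the material values are illustrative assumptions.

```python
import numpy as np

def cassie_baxter_angle(theta_y_deg, cavity_diameter, pitch):
    """Apparent contact angle (degrees) on a square lattice of circular
    microcavities, from the classical Cassie-Baxter relation
    cos(theta*) = f_s cos(theta_Y) - (1 - f_s), where f_s is the wetted
    solid fraction. This is the textbook composite-surface estimate,
    not the paper's model."""
    f_air = np.pi * cavity_diameter**2 / (4.0 * pitch**2)  # cavity area fraction
    f_s = 1.0 - f_air                                      # wetted solid fraction
    cos_star = f_s * np.cos(np.radians(theta_y_deg)) - (1.0 - f_s)
    return np.degrees(np.arccos(cos_star))

# Illustrative values: PDMS has a Young angle near 105 degrees;
# take 40 um cavities on a 50 um pitch.
theta = cassie_baxter_angle(105.0, 40e-6, 50e-6)
```

Note that the relation depends only on the area fraction, not on cavity depth, which is consistent with the paper's finding that shallow cavities (aspect ratio well below unity) can suffice.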