931 results for Processing Graph
Abstract:
Foliage Penetration (FOPEN) radar systems were introduced in 1960 and have been constantly improved by several organizations since that time. The use of Synthetic Aperture Radar (SAR) approaches for this application has important advantages, due to the need for high resolution in two dimensions. The design of this type of system, however, includes some complications that are not present in standard SAR systems. FOPEN SAR systems need to operate with a low central frequency (VHF or UHF bands) in order to penetrate the foliage. High bandwidth is also required to obtain high resolution. Due to the low central frequency, large integration angles are required during SAR image formation, and therefore the Range Migration Algorithm (RMA) is used. This thesis identifies the three main complications that arise from these requirements. First, a high fractional bandwidth makes narrowband propagation models no longer valid. Second, the VHF and UHF bands are used by many communications systems; the transmitted signal spectrum needs to be notched to avoid interfering with them. Third, those communications systems cause Radio Frequency Interference (RFI) on the received signal. The thesis carries out a thorough analysis of the three problems, their degrading effects and possible solutions to compensate for them. The ultra-wideband (UWB) model is applied to the SAR signal, and the degradation induced by it is derived. The result is tested through simulation of both a single-pulse stretch processor and the complete RMA image formation. Both methods show that the degradation is negligible, and therefore the UWB propagation effect does not need compensation. A technique is derived to design a notched transmitted signal, and its effect on the SAR image formation is then evaluated analytically. It is shown that the stretch processor introduces a processing gain that reduces the degrading effects of the notches. The remaining degradation after processing gain is assessed through simulation, and an experimental graph of degradation as a function of the percentage of nulled frequencies is obtained. The RFI is characterized and its effect on the SAR processor is derived. Once again, a processing gain is found to be introduced by the receiver. As the RFI power can be much higher than that of the desired signal, an algorithm is proposed to remove the RFI from the received signal before RMA processing. This algorithm is a modification of the Chirp Least Squares Algorithm (CLSA) explained in [4], adapting it to deramped signals. The algorithm is derived analytically and its performance is then evaluated through simulation, showing that it is effective in removing the RFI and in reducing the degradation caused by both RFI and notching. Finally, conclusions are drawn as to the importance of each of these problems in SAR system design.
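As a rough illustration of the notching and stretch-processing ideas described above (not the thesis code), the following sketch generates a linear FM chirp, zeroes part of its spectrum to stand in for protected communication bands, then deramps and compresses the pulse to observe the residual peak loss. All parameter values and the notched band are assumptions.

```python
import numpy as np

fs = 400e6          # sampling rate (Hz), assumed
B = 200e6           # chirp bandwidth (Hz), assumed
T = 20e-6           # pulse length (s), assumed
k = B / T           # chirp rate
t = np.arange(int(T * fs)) / fs

chirp = np.exp(1j * np.pi * k * t**2)          # baseband LFM pulse

# Notch a hypothetical occupied band (stand-in for protected comm channels)
spec = np.fft.fft(chirp)
freqs = np.fft.fftfreq(chirp.size, 1 / fs)
notch = np.abs(freqs - 30e6) < 10e6
spec_notched = np.where(notch, 0.0, spec)
chirp_notched = np.fft.ifft(spec_notched)

def stretch_compress(rx, ref):
    """Deramp the echo with the reference chirp and compress with an FFT."""
    deramped = rx * np.conj(ref)
    return np.fft.fftshift(np.abs(np.fft.fft(deramped)))

clean = stretch_compress(chirp, chirp)
notched = stretch_compress(chirp_notched, chirp)
print(f"peak loss due to notching: {20*np.log10(notched.max()/clean.max()):.2f} dB")
```

The small peak loss (well below the notched energy fraction would suggest) is the processing-gain effect the abstract refers to.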
Abstract:
The structural connectivity of the brain is considered to encode species-wise and subject-wise patterns that will unlock large areas of understanding of the human brain. Currently, diffusion MRI of the living brain makes it possible to map the microstructure of tissue, allowing the pathways of fiber bundles connecting cortical regions across the brain to be tracked. These bundles are summarized in a network representation called the connectome, which is analyzed using graph theory. The extraction of the connectome from diffusion MRI requires a large processing flow including image enhancement, reconstruction, segmentation, registration, diffusion tracking, etc. Although a concerted effort has been devoted to the definition of standard pipelines for connectome extraction, it is still crucial to define quality assessment protocols for these workflows. The definition of quality control protocols is hindered by the complexity of the pipelines under test and the absolute lack of gold standards for diffusion MRI data. Here we characterize the impact on structural connectivity workflows of the geometrical deformation typically shown by diffusion MRI data due to the inhomogeneity of magnetic susceptibility across the imaged object. We propose an evaluation framework to compare the existing methodologies for correcting these artifacts, including whole-brain realistic phantoms. Additionally, we design and implement an image segmentation and registration method that avoids the correction task altogether and enables processing in the native space of diffusion data. We release PySDCev, an evaluation framework for the quality control of connectivity pipelines, specialized in the study of susceptibility-derived distortions. In this context, we propose Diffantom, a whole-brain phantom that provides a solution to the lack of gold-standard data. The three correction methodologies under comparison performed reasonably, and it is difficult to determine which method is more advisable. We demonstrate that susceptibility-derived correction is necessary to increase the sensitivity of connectivity pipelines, at the cost of specificity. Finally, with the registration and segmentation tool called regseg, we demonstrate how the problem of susceptibility-derived distortion can be overcome, allowing data to be used in their original coordinates. This is crucial to increase the sensitivity of the whole pipeline without any loss in specificity.
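For readers unfamiliar with the final stage of such pipelines, the sketch below shows a generic way to assemble a connectome graph from tractography streamlines and a parcellation volume: streamline endpoints are mapped to labeled regions and edges accumulate streamline counts. This is an illustrative sketch, not PySDCev or Diffantom code; the function names, array shapes, and affine handling are assumptions.

```python
import numpy as np
import networkx as nx

def build_connectome(streamlines, parcellation, affine_inv):
    """streamlines: list of (N_i, 3) arrays in scanner coordinates (mm);
    parcellation: 3D integer label volume; affine_inv: world-to-voxel transform."""
    G = nx.Graph()
    for sl in streamlines:
        ends = np.vstack([sl[0], sl[-1]])                      # endpoints in mm
        vox = (affine_inv[:3, :3] @ ends.T + affine_inv[:3, 3:]).T
        vox = np.round(vox).astype(int)
        a, b = (parcellation[tuple(v)] for v in vox)            # region labels
        if a > 0 and b > 0 and a != b:                          # skip background/self
            w = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)                      # streamline count
    return G
```

Geometric distortions of the kind studied here shift the endpoint voxels, which is why susceptibility correction (or native-space processing with regseg) changes the sensitivity and specificity of the resulting graph.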
Abstract:
Electroencephalographic (EEG) signals of the human brain represent electrical activity recorded over the scalp across a number of channels. The main purpose of this thesis is to investigate the interactions and causality between different parts of the brain using EEG signals recorded while subjects performed verbal fluency tasks. Subjects who have Parkinson's Disease (PD) have difficulties with mental tasks, such as switching from one behavioral task to another. The behavioral tasks include phonemic fluency, semantic fluency, category semantic fluency and reading fluency. These tasks rely on verbal generation skills, activating Broca's area (Brodmann areas BA44 and BA45). Advanced signal processing techniques are used to determine the frequency bands activated in the Granger causality analysis of the verbal fluency tasks. A graph learning technique based on channel strength is used to characterize the complex graph of Granger causality. In addition, the support vector machine (SVM) method is used to train a classifier between two subjects with PD and two healthy controls. Neural data for the study were recorded at the Colorado Neurological Institute (CNI). The study reveals significant differences between PD subjects and healthy controls in terms of brain connectivity in Broca's area (BA44 and BA45) at the corresponding EEG electrodes. The results in this thesis also demonstrate the possibility of classification based on the flow of information and causality in the brain during verbal fluency tasks. These methods have the potential to be applied in the future to identify pathological information flow and causality in neurological diseases.
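The following minimal sketch shows the general shape of such an analysis: pairwise Granger causality between EEG channels forms a directed connectivity matrix whose entries serve as features for an SVM. The lag order, the use of the SSR F-test p-value, and the commented usage are assumptions for illustration; real preprocessing (filtering, epoching, band selection) is omitted.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests
from sklearn.svm import SVC

def granger_matrix(eeg, maxlag=5):
    """eeg: (n_samples, n_channels) array. Returns an (n_channels, n_channels)
    matrix of -log10(p) values for 'channel j Granger-causes channel i'."""
    n_ch = eeg.shape[1]
    G = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(n_ch):
            if i == j:
                continue
            res = grangercausalitytests(eeg[:, [i, j]], maxlag=maxlag, verbose=False)
            p = res[maxlag][0]["ssr_ftest"][1]
            G[i, j] = -np.log10(max(p, 1e-12))
    return G

# Hypothetical usage: one flattened causality matrix per recording as a feature vector
# X = np.vstack([granger_matrix(rec).ravel() for rec in recordings])
# clf = SVC(kernel="linear").fit(X, labels)   # labels: PD vs. healthy control
```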
Abstract:
Communication presented at the XI Workshop of Physical Agents, Valencia, 9-10 September 2010.
Abstract:
"COO-2118-0031."
Abstract:
GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs on a single PC efficiently. GraphChi is able to execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. Using the novel technique of parallel sliding windows (PSW) to load subgraphs from disk into memory for vertex and edge updates, it can achieve data processing performance close to, and even better than, that of mainstream distributed graph engines. The GraphChi authors noted that its memory is not effectively utilized with large datasets, which leads to suboptimal computational performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory during the whole computing process. Part-in-memory mode is successfully implemented with only about 40 additional lines of code on top of the original GraphChi engine. Extensive experiments are performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach effectively reduces the GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, it is found that pinning a larger portion of the data in memory does not always lead to better performance when the whole dataset cannot fit in memory; there exists an optimal portion of data to keep in memory to achieve the best computational performance.
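The core idea can be illustrated independently of the GraphChi codebase with the conceptual sketch below: a fixed fraction of edge shards is pinned in RAM for the entire computation, while the remaining shards are streamed from disk on every pass. The class, file format, and pin fraction are hypothetical and chosen only to show the contrast between resident and streamed data.

```python
import numpy as np

class PartInMemoryStore:
    """Conceptual sketch of Part-in-memory shard access (not GraphChi source)."""

    def __init__(self, shard_files, pin_fraction=0.3):
        n_pinned = int(len(shard_files) * pin_fraction)
        self.pinned = {f: np.load(f) for f in shard_files[:n_pinned]}  # kept in RAM
        self.on_disk = shard_files[n_pinned:]                          # re-read each pass

    def shards(self):
        for f, data in self.pinned.items():
            yield f, data               # no I/O: already resident
        for f in self.on_disk:
            yield f, np.load(f)         # streamed from disk, then dropped
```

The trade-off the paper reports follows from this structure: pinning more shards saves I/O only as long as the pinned portion does not crowd out the working memory needed by the sliding-window updates.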
Abstract:
One of the ultimate aims of Natural Language Processing is to automate the analysis of the meaning of text. A fundamental step in that direction consists in enabling effective ways to automatically link textual references to their referents, that is, real world objects. The work presented in this paper addresses the problem of attributing a sense to proper names in a given text, i.e., automatically associating words representing Named Entities with their referents. The method for Named Entity Disambiguation proposed here is based on the concept of semantic relatedness, which in this work is obtained via a graph-based model over Wikipedia. We show that, without building the traditional bag of words representation of the text, but instead only considering named entities within the text, the proposed method achieves results competitive with the state-of-the-art on two different datasets.
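A toy sketch of graph-based entity disambiguation in this spirit is given below: each mention has candidate Wikipedia pages, relatedness between pages is computed from a link graph, and the candidate most coherent with the other mentions' candidates is selected. The neighborhood-overlap relatedness measure and the greedy selection are illustrative assumptions, not the paper's exact model.

```python
import networkx as nx

def relatedness(G, a, b):
    """Neighborhood-overlap relatedness between two Wikipedia pages in link graph G."""
    na, nb = set(G.neighbors(a)), set(G.neighbors(b))
    return len(na & nb) / len(na | nb) if na | nb else 0.0

def disambiguate(G, candidates):
    """candidates: dict mapping each mention to a list of candidate page titles.
    For each mention, pick the candidate most related to the other mentions' candidates."""
    chosen = {}
    for mention, cands in candidates.items():
        others = [c for m, cs in candidates.items() if m != mention for c in cs]
        chosen[mention] = max(
            cands, key=lambda c: sum(relatedness(G, c, o) for o in others))
    return chosen
```

Note that, as in the abstract, only the named entities in the text participate in the decision; no bag-of-words representation of the surrounding text is needed.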
Abstract:
This dissertation investigates the connection between spectral analysis and frame theory. When considering the spectral properties of a frame, we present a few novel results relating to the spectral decomposition. We first show that scalable frames have the property that the inner product of the scaling coefficients and the eigenvectors must equal the inverse eigenvalues. From this, we prove a similar result when an approximate scaling is obtained. We then focus on the optimization problems inherent to scalable frames by first showing that there is an equivalence between scaling a frame and optimization problems with a non-restrictive objective function. Various objective functions are considered, and an analysis of the solution type is presented. For linear objectives, we can encourage sparse scalings, and with barrier objective functions, we force dense solutions. We further consider frames in high dimensions and derive various solution techniques. From here, we restrict ourselves to various frame classes to add more specificity to the results. Using frames generated from distributions allows for the placement of probabilistic bounds on scalability. For discrete distributions (Bernoulli and Rademacher), we bound the probability of encountering an orthonormal basis (ONB), and for continuous symmetric distributions (Uniform and Gaussian), we show that symmetry is retained in the transformed domain. We also prove several hyperplane-separation results. With the theory developed, we discuss graph applications of the scalability framework. We make a connection with graph conditioning and show the infeasibility of the problem in the general case. After a modification, we show that any complete graph can be conditioned. We then present a modification of standard PCA (robust PCA) developed by Candès, and give some background on Electron Energy-Loss Spectroscopy (EELS). We design a novel scheme for the processing of EELS data through robust PCA and least-squares regression, and test this scheme on biological samples. Finally, we take the idea of robust PCA and apply the technique of kernel PCA to perform robust manifold learning. We derive the problem and present an algorithm for its solution. There is also a discussion of the differences with RPCA that make theoretical guarantees difficult.
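For context, the standard notion of frame scalability that underlies these results can be checked numerically as in the short sketch below: a frame {f_i} in R^n is scalable if nonnegative weights c_i exist such that the scaled frame operator sum_i c_i^2 f_i f_i^T equals the identity. This is the textbook definition, not code from the dissertation; the example frame is the Mercedes-Benz frame in R^2.

```python
import numpy as np

def scaled_frame_operator(F, c):
    """F: (n, m) matrix whose columns are the frame vectors; c: (m,) scaling weights."""
    return (F * c**2) @ F.T          # sum_i c_i^2 f_i f_i^T

def is_tight_after_scaling(F, c, tol=1e-8):
    S = scaled_frame_operator(F, c)
    return np.allclose(S, np.eye(F.shape[0]), atol=tol)

# Example: the Mercedes-Benz frame in R^2 is scalable with equal weights sqrt(2/3)
F = np.array([[0.0, -np.sqrt(3) / 2, np.sqrt(3) / 2],
              [1.0, -0.5, -0.5]])
c = np.full(3, np.sqrt(2.0 / 3.0))
print(is_tight_after_scaling(F, c))   # True
```

The optimization problems discussed in the abstract search over the weights c (with sparse- or dense-encouraging objectives) subject to this tightness constraint.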
Abstract:
The aim of this study was to evaluate fat substitutes in the processing of sausages prepared with surimi made from piramutaba filleting waste. The formulation ingredients were mixed with the fat substitutes according to a 2^(4-1) fractional factorial design, in which the independent variables manioc starch (Ms), hydrogenated soy fat (F), texturized soybean protein (Tsp) and carrageenan (Cg) were evaluated on the responses of pH, texture (Tx), raw batter stability (RBS) and water holding capacity (WHC) of the sausage. The fat substitutes were evaluated in 11 formulations, and the results showed that the greatest effects on the responses were found for Ms, F and Cg, with Tsp being eliminated from the formulation. To find the best formulation for processing piramutaba sausage, a full 2^3 factorial design was then carried out to evaluate the concentrations of the fat substitutes over a wider range. The optimum conditions found for the fat substitutes in the sausage formulation were carrageenan (0.51%), manioc starch (1.45%) and fat (1.2%).
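For readers unfamiliar with the 2^(4-1) notation, the sketch below enumerates one such design: three factors are varied over two coded levels and the fourth is generated from their product (defining relation I = ABCD), giving 8 runs instead of the 16 of a full 2^4 design. The choice of which factor is generated and the coded levels are standard illustrative assumptions, not taken from the paper.

```python
from itertools import product
import pandas as pd

runs = []
for a, b, c in product([-1, 1], repeat=3):
    # Fourth factor generated as the product of the first three (assumed generator)
    runs.append({"Ms": a, "F": b, "Tsp": c, "Cg": a * b * c})

design = pd.DataFrame(runs)
print(design)   # 8 coded runs of a 2^(4-1) fractional factorial design
```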
Abstract:
The aim was to investigate central auditory processing in children with unilateral stroke and to verify whether the hemisphere affected by the lesion influenced auditory competence. 23 children (13 male) between 7 and 16 years old were evaluated through speech-in-noise tests (auditory closure), the dichotic digit test and staggered spondaic word test (selective attention), and the pitch pattern and duration pattern sequence tests (temporal processing), and their results were compared with those of control children. Auditory competence was established according to performance in auditory analysis ability. Similar performance between groups was verified in auditory closure ability, with pronounced deficits in selective attention and temporal processing abilities. Most children with stroke showed auditory ability impaired to a moderate degree. Children with stroke showed deficits in auditory processing, and the degree of impairment was not related to the hemisphere affected by the lesion.
Abstract:
The aim of this research was to analyze temporal auditory processing and phonological awareness in school-age children with benign childhood epilepsy with centrotemporal spikes (BECTS). The patient group (GI) consisted of 13 children diagnosed with BECTS. The control group (GII) consisted of 17 healthy children. After neurological and peripheral audiological assessment, the children underwent a behavioral auditory evaluation and a phonological awareness assessment. The procedures applied were the Gaps-in-Noise test (GIN), the Duration Pattern test, and the Phonological Awareness test (PCF). Results were compared between the groups, and a correlation analysis was performed between temporal tasks and phonological awareness performance. GII performed significantly better than the children with BECTS (GI) in both the GIN and Duration Pattern tests (P < 0.001). GI performed significantly worse in all four categories of phonological awareness assessed: syllabic (P = 0.001), phonemic (P = 0.006), rhyme (P = 0.015) and alliteration (P = 0.010). Statistical analysis showed a significant positive correlation between the phonological awareness assessment and the Duration Pattern test (P < 0.001). From the analysis of the results, it was concluded that children with BECTS may have difficulties in temporal resolution, temporal ordering, and phonological awareness skills. A correlation was observed between auditory temporal processing and phonological awareness in the studied sample.
Abstract:
The biofilm formation of Enterococcus faecalis and Enterococcus faecium isolated from ricotta processing was evaluated on stainless steel coupons, and the effect of cleaning and sanitization procedures in the control of these biofilms was determined. The formation of biofilms was observed while varying the incubation temperature (7, 25 and 39°C) and time (0, 1, 2, 4, 6 and 8 days). At 7°C, the counts of E. faecalis and E. faecium were below 2 log10 CFU/cm². At 25 and 39°C, after 1 day, the counts of E. faecalis and E. faecium were 5.75 and 6.07 log10 CFU/cm², respectively, which is characteristic of biofilm formation. The tested sanitation procedures, (a) acid-anionic tensioactive cleaning, (b) anionic tensioactive cleaning + sanitizer and (c) acid-anionic tensioactive cleaning + sanitizer, were effective in removing the biofilms, reducing the counts to levels below 0.4 log10 CFU/cm². The sanitizer biguanide was the least effective, and peracetic acid was the most effective. These studies revealed the ability of enterococci to form biofilms, as well as the importance of the cleaning step and the type of sanitizer used in sanitation processes for the effective removal of biofilms.
Abstract:
Due to the imprecise nature of biological experiments, biological data are often characterized by the presence of redundant and noisy data. This may be due to errors that occurred during data collection, such as contamination of laboratory samples. This is the case for gene expression data, where the equipment and tools currently used frequently produce noisy measurements. Machine Learning algorithms have been successfully used in gene expression data analysis. Although many Machine Learning algorithms can deal with noise, detecting and removing noisy instances from the training data set can help the induction of the target hypothesis. This paper evaluates the use of distance-based pre-processing techniques for noise detection in gene expression data classification problems. The evaluation analyzes the effectiveness of the techniques investigated in removing noisy data, measured by the accuracy obtained by different Machine Learning classifiers over the pre-processed data.
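One common distance-based noise-detection strategy of the kind evaluated is sketched below: an instance is flagged as noisy when the majority of its k nearest neighbors carry a different class label (Edited Nearest Neighbors style filtering). The specific rule, the value of k, and the commented usage are illustrative assumptions, not necessarily the techniques compared in the paper.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def enn_filter(X, y, k=3):
    """Return a boolean mask of instances to keep (True = kept, False = flagged as noise)."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    keep = np.ones(len(y), dtype=bool)
    for i, neighbors in enumerate(idx):
        neighbor_labels = y[neighbors[1:]]          # drop the instance itself
        if np.mean(neighbor_labels != y[i]) > 0.5:  # most neighbors disagree
            keep[i] = False
    return keep

# Hypothetical usage with a gene expression matrix X (samples x genes) and labels y:
# mask = enn_filter(X, y)
# clf.fit(X[mask], y[mask])   # train any classifier on the cleaned training set
```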
Abstract:
This study addressed the use of conventional and vegetable-origin polyurethane foams to extract C. I. Acid Orange 61 dye. The quantitative determination of the residual dye was carried out with a UV/Vis absorption spectrophotometer. The extraction of the dye was found to depend on various factors, such as the pH of the solution, the foam cell structure, the contact time, and dye-foam interactions. After 45 days, better results were obtained for the conventional foam than for the vegetable foam. Despite presenting a lower extraction percentage, the vegetable foam is advantageous as it is considered a polymer with biodegradable characteristics.
Abstract:
This paper describes a new food classification which assigns foodstuffs according to the extent and purpose of the industrial processing applied to them. Three main groups are defined: unprocessed or minimally processed foods (group 1), processed culinary and food industry ingredients (group 2), and ultra-processed food products (group 3). The use of this classification is illustrated by applying it to data collected in the Brazilian Household Budget Survey, which was conducted in 2002/2003 through a probabilistic sample of 48,470 Brazilian households. The average daily food availability was 1,792 kcal/person, with 42.5% from group 1 (mostly rice and beans, meat and milk), 37.5% from group 2 (mostly vegetable oils, sugar, and flours), and 20% from group 3 (mostly breads, biscuits, sweets, soft drinks, and sausages). The share of group 3 foods increased with income and represented almost one third of all calories in higher-income households. The impact of replacing group 1 foods and group 2 ingredients with group 3 products on the overall quality of the diet, eating patterns and health is discussed.