39 results for Independent components

at Universidade Federal do Rio Grande do Norte (UFRN)


Relevance:

100.00%

Publisher:

Abstract:

Blind Source Separation (BSS) refers to the problem of estimating original signals from observed linear mixtures with no knowledge about the sources or the mixing process. Independent Component Analysis (ICA) is a technique mainly applied to the BSS problem, and among the algorithms that implement it, FastICA is a high-performance iterative algorithm of low computational cost that uses nongaussianity measures based on higher-order statistics to estimate the original sources. The great number of applications where ICA has proved useful reflects the need to implement this technique in hardware, and the natural parallelism of FastICA favors the implementation of this algorithm on digital hardware. This work proposes the implementation of FastICA on a reconfigurable hardware platform to assess the viability of its use in blind source separation problems, more specifically in a hardware prototype embedded in a Field Programmable Gate Array (FPGA) board for the monitoring of beds in hospital environments. The implementations are carried out as Simulink models, and their synthesis is done through the DSP Builder software from Altera Corporation.
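For readers unfamiliar with the algorithm, the FastICA iteration the abstract refers to can be sketched in a few lines of NumPy. This is a minimal software illustration on synthetic signals, not the Simulink/DSP Builder implementation proposed in the work; the tanh contrast function and symmetric decorrelation used below are one common choice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic, statistically independent sources (toy stand-ins for sensor signals).
n = 5000
t = np.linspace(0, 8, n)
S = np.vstack([np.sign(np.sin(3 * t)),            # square-ish wave
               rng.uniform(-1, 1, n)])            # uniform noise
A = np.array([[0.6, 0.4], [0.45, 0.55]])          # unknown mixing matrix
X = A @ S                                          # observed mixtures

# Step 1: centering and whitening (PCA-based).
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
V = E @ np.diag(d ** -0.5) @ E.T                   # whitening matrix
Z = V @ X                                          # whitened data: cov(Z) = I

# Step 2: symmetric FastICA fixed-point iteration with g(u) = tanh(u):
#   W+ = E[g(WZ) Z^T] - diag(E[g'(WZ)]) W, then symmetric decorrelation.
W = rng.standard_normal((2, 2))
for _ in range(200):
    U = W @ Z
    G, Gp = np.tanh(U), 1 - np.tanh(U) ** 2
    W_new = (G @ Z.T) / n - np.diag(Gp.mean(axis=1)) @ W
    d2, E2 = np.linalg.eigh(W_new @ W_new.T)       # W <- (W W^T)^{-1/2} W
    W = E2 @ np.diag(d2 ** -0.5) @ E2.T @ W_new

Y = W @ Z  # estimated sources (up to permutation, sign, and scale)
```

Each step of this loop maps naturally onto matrix-multiply hardware blocks, which is the parallelism the abstract alludes to.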

Relevance:

100.00%

Publisher:

Abstract:

This work considers the development of a filtering system composed of an intelligent algorithm that separates information and noise coming from sensors interconnected by a Foundation Fieldbus (FF) network. The algorithm is implemented with FF standard function blocks, with on-line training through OPC (OLE for Process Control), and embedded in a DSP (Digital Signal Processor) that interacts with the fieldbus devices. Independent Component Analysis (ICA), which explores the possibility of separating mixed signals based on the fact that they are statistically independent, was chosen for this Blind Source Separation (BSS) process. The algorithm and its implementations are presented, as well as the results.

Relevance:

70.00%

Publisher:

Abstract:

The episodic memory system allows us to retrieve information about events, including their contextual aspects. It has been suggested that episodic memory is composed of two independent components: recollection and familiarity. Recollection is related to the vivid and detailed retrieval of item and contextual information, while familiarity is the capability to recognize previously seen items as familiar. Despite the fact that emotion is one of the most influential processes affecting memory, only a few studies have investigated its effect on recollection and familiarity. Another limitation of studies on the effect of emotion on memory is that the majority of them have not adequately considered the differential effects of arousal and positive/negative valence. The main purpose of the current work is to investigate the independent effects of emotional valence and arousal on recollection and familiarity, as well as to test some hypotheses that have been suggested about the effect of emotion on episodic memory. The participants performed a recognition task for three lists of emotional pictures: high-arousal negative, high-arousal positive, and low-arousal positive. At the test session, participants also rated the confidence level of their responses. The confidence ratings were used to plot ROC curves and estimate the contributions of recollection and familiarity to recognition performance. As the main results, we found that negative valence enhanced the recollection component without any effect on familiarity or recognition accuracy. Arousal did not affect recognition performance or its components, but high arousal was associated with a higher proportion of false memories. This work highlights the importance of considering both the emotional dimensions and the episodic memory components in the study of the effect of emotion on episodic memory, since they interact in complex and independent ways.
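The ROC procedure mentioned above can be illustrated with a toy computation: confidence ratings are turned into cumulative hit and false-alarm rates, one ROC point per confidence criterion. The counts below are made up for illustration only; fitting the dual-process model (where recollection raises the ROC intercept and familiarity curves it) to such points is what yields the two estimates reported in studies like this one.

```python
import numpy as np

# Hypothetical confidence counts (index 0 = "sure old" ... 5 = "sure new"),
# illustrative numbers, not data from the study above.
old_counts = np.array([150, 60, 40, 25, 15, 10])   # responses to studied pictures
new_counts = np.array([20, 30, 40, 50, 70, 90])    # responses to unstudied pictures

# Cumulative rates from the strictest to the most lenient criterion:
# each prefix sum gives one (false-alarm rate, hit rate) ROC point.
hit_rate = np.cumsum(old_counts) / old_counts.sum()
fa_rate = np.cumsum(new_counts) / new_counts.sum()

# In the dual-process signal-detection model the predicted hit rate is
#   P(hit) = R + (1 - R) * P(familiarity > criterion),
# so recollection R sets the y-intercept and familiarity bends the curve;
# fitting R and the familiarity d' to these points yields the two estimates.
roc_points = list(zip(fa_rate.round(3), hit_rate.round(3)))
```

A hit-rate curve that sits above the false-alarm curve at every criterion, as here, is what an above-chance recognition ROC looks like.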

Relevance:

60.00%

Publisher:

Abstract:

The exponential growth in radio frequency (RF) applications is accompanied by great challenges, such as more efficient use of the spectrum and the design of new architectures for multi-standard receivers or software defined radio (SDR). The key challenge in designing an SDR architecture is the implementation of a wide-band, reconfigurable receiver with low cost, low power consumption, and a higher level of integration and flexibility. As a new SDR design solution, a direct demodulator architecture based on five-port technology, or multi-port demodulator, has been proposed. However, the use of the five-port junction as a direct-conversion receiver requires an I/Q calibration (or regeneration) procedure in order to generate the in-phase (I) and quadrature (Q) components of the transmitted baseband signal. In this work, we propose to evaluate the performance of a blind calibration technique, requiring no knowledge of training or pilot sequences in the transmitted signal, based on independent component analysis for the I/Q regeneration of the five-port downconversion, by exploiting the statistical properties of the three output signals.

Relevance:

60.00%

Publisher:

Abstract:

Conventional methods to solve the nonlinear blind source separation problem generally use a series of restrictions to obtain the solution, often leading to an imperfect separation of the original sources and a high computational cost. In this work, we propose an alternative measure of independence based on information theory and use artificial intelligence tools to solve blind source separation problems, first linear and later nonlinear. In the linear model, we apply genetic algorithms with the Rényi negentropy as the measure of independence to find a separation matrix from linear mixtures of waveform, audio, and image signals. A comparison is made with two types of Independent Component Analysis algorithms widespread in the literature. Subsequently, we use the same measure of independence as the cost function in the genetic algorithm to recover source signals that were mixed by nonlinear functions modeled with a radial basis function artificial neural network. Genetic algorithms are powerful tools for global search, and therefore well suited for use in blind source separation problems. Tests and analyses are carried out through computer simulations.
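A minimal sketch of the linear stage: after whitening, a 2x2 separation matrix reduces to a single rotation angle, which a small genetic algorithm can search. For simplicity, the fitness below is the classical kurtosis-based nongaussianity proxy rather than the Rényi negentropy used in the work, and all signals are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

# Whitened mixture of two independent toy sources.
n = 4000
S = np.vstack([np.sign(np.sin(np.linspace(0, 40, n))),  # square-ish wave
               rng.uniform(-1, 1, n)])                   # uniform noise
A = np.array([[0.7, 0.3], [0.2, 0.8]])
X = A @ S
X -= X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ X                     # whitened data

def fitness(theta):
    """Nongaussianity of both outputs of R(theta) @ Z (squared-kurtosis
    proxy; the thesis uses Renyi negentropy instead)."""
    c, s = np.cos(theta), np.sin(theta)
    Y = np.array([[c, -s], [s, c]]) @ Z
    kurt = (Y ** 4).mean(axis=1) - 3.0
    return float((kurt ** 2).sum())

# Minimal real-coded genetic algorithm over the angle theta in [0, pi/2).
pop = rng.uniform(0, np.pi / 2, 30)
for _ in range(40):
    f = np.array([fitness(th) for th in pop])
    parents = pop[np.argsort(f)[-10:]]                   # truncation selection
    p1, p2 = rng.choice(parents, 20), rng.choice(parents, 20)
    kids = (p1 + p2) / 2 + rng.normal(0, 0.05, 20)       # crossover + mutation
    pop = np.clip(np.concatenate([parents, kids]), 0, np.pi / 2)  # elitism

best = pop[np.argmax([fitness(th) for th in pop])]
```

Because the search space is one-dimensional here, the GA converges quickly; the full problem in the work searches a higher-dimensional separation structure with the same logic.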

Relevance:

60.00%

Publisher:

Abstract:

The technical advances of the last two decades in the recording of neuroelectrophysiological signals were essential for testing long-standing hypotheses about how nerve cells process and store information. However, by allowing greater detail in the collected data, the new technologies inevitably increase their statistical complexity and, consequently, the need for new mathematical and computational tools for their analysis. In this thesis, we present new methods for the analysis of two fundamental components in current theories of neural coding: (1) cell assemblies, defined by the co-activation of neuronal subgroups; and (2) the temporal activity pattern of individual neurons. Regarding (1), we developed a method based on independent component analysis to identify and track significant co-activation patterns with high temporal resolution. We overcame limitations of previous methods by effectively isolating assemblies and opening the possibility of simultaneously analyzing large neuronal populations. Regarding (2), we present a new technique for extracting activity patterns from spike trains based on the wavelet decomposition. We demonstrate, through simulations and application to real data, that our tool outperforms those most widely used today for decoding neuronal responses and estimating the Shannon information between spike trains and external stimuli.
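The assembly-detection idea in (1) can be sketched as follows: bin the spike trains, z-score each neuron, and compare the eigenvalues of the neuron correlation matrix against the Marchenko-Pastur bound; eigenvalues above the bound indicate co-activating subgroups, whose membership weights ICA can then refine. Below is a toy version of this approach on synthetic data with one planted assembly; it illustrates the general idea from this literature, not the thesis code itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Binned spike matrix: 20 neurons x 2000 time bins of independent Poisson
# noise, with neurons 0-4 co-activating in ~5% of bins (a planted assembly).
n_neurons, n_bins = 20, 2000
spikes = rng.poisson(1.0, (n_neurons, n_bins)).astype(float)
active = rng.random(n_bins) < 0.05
spikes[:5, active] += rng.poisson(3.0, (5, int(active.sum())))

# Z-score each neuron and diagonalize the correlation matrix.
Z = (spikes - spikes.mean(axis=1, keepdims=True)) / spikes.std(axis=1, keepdims=True)
corr = Z @ Z.T / n_bins
eigvals, eigvecs = np.linalg.eigh(corr)

# Marchenko-Pastur upper bound for uncorrelated data: eigenvalues above it
# signal correlated subgroups, i.e. candidate cell assemblies.
q = n_bins / n_neurons
lambda_max = (1 + np.sqrt(1 / q)) ** 2
n_assemblies = int((eigvals > lambda_max).sum())
pattern = eigvecs[:, -1]   # leading eigenvector ~ assembly membership weights
```

Projecting Z onto the significant eigenvectors (or their ICA refinement) then gives the assembly activation time course at full bin resolution.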

Relevance:

20.00%

Publisher:

Abstract:

The building envelope is the principal means of interaction between the indoors and the environment, with direct influence on the thermal and energy performance of the building. By intervening in the envelope, with the proposal of specific architectural elements, it is possible to promote the use of passive conditioning strategies, such as natural ventilation. Cross ventilation is recommended by NBR 15220-3 as the main bioclimatic strategy for the hot and humid climate of Natal/RN, offering, among other benefits, the thermal comfort of occupants. The analysis tools for natural ventilation, on the other hand, cover a variety of techniques, from simplified calculation methods to computational fluid dynamics, whose limitations are discussed in several papers, but without detailing the problems encountered. In this sense, the present study aims to evaluate the potential of wind catchers, envelope elements used to increase natural ventilation in buildings, through simplified CFD simulation. Moreover, it seeks to document the limitations encountered during the analysis. The procedure adopted to evaluate the implementation and efficiency of the elements was CFD (Computational Fluid Dynamics) simulation with the DesignBuilder CFD software. A base case was defined, to which wind catchers with various settings were added, in order to compare them with each other and assess the differences in air flows and air speeds encountered. Initially, sensitivity tests were performed for familiarization with the software and to observe simulation patterns, mapping the settings used and the simulation time for each case. The results show the limitations encountered during the simulation process, as well as an overview of the efficiency and potential of wind catchers: increased ventilation with the use of catchers, differences in air flow patterns, and a significant increase in indoor air speeds, besides changes due to the different element geometries. It is considered that the software used can help designers during preliminary analysis in the early stages of design.

Relevance:

20.00%

Publisher:

Abstract:

Thermal processing of ceramic materials via microwave energy has been gaining importance, in view of its numerous applications, for example in mineral processing (heating of ores before grinding, drying, carbothermic reduction of mineral oxides, leaching, melting, pre-treatment of refractory gold ores and concentrates, carbon regeneration, etc., according to Kingman & Rowson, 1998). This technology stands out due to a series of potential advantages over conventional heating methods, such as reduced processing time, energy savings, reduction of the mean particle diameter, and improvement of technological properties in general. In this context, the general objective of this work is to identify and characterize new options of ceramic raw materials, such as clays, feldspars, and kaolins, suitable for formulating one or more bodies for the production of structural ceramic components with adequate physical, mechanical, and aesthetic properties after conventional and microwave sintering, highlighting the advantages of the latter. In addition to the technical and process requirements, the formulations must meet price and supply logistics expectations. In the study, specimens were shaped by extrusion and pressing and sintered in microwave and conventional furnaces, under firing cycles faster than those currently practiced. The raw materials were characterized and analyzed using X-ray fluorescence (XRF), X-ray diffraction (XRD), differential thermal analysis (DTA), thermogravimetric analysis (TGA), particle size analysis, scanning electron microscopy (SEM), water absorption (WA), apparent specific mass, apparent porosity, linear shrinkage, and flexural rupture strength. The results indicated that the water absorption and flexural rupture strength targets proposed in the work were successfully achieved and are well beyond the limits required by the specifications of ABNT standards NBR 15.270/05 and 15.310/09.

Relevance:

20.00%

Publisher:

Abstract:

Dyslipidemia and excess weight in adolescents, when combined, suggest a progression of risk factors for cardiovascular disease (CVD). Besides these, dietary habits and lifestyle have also been considered unsuitable, impacting the development of chronic diseases. The objectives of the study were: (1) to estimate the prevalence of lipid profile alterations and correlate them with body mass index (BMI), waist circumference (WC), and waist/height ratio (WHR) in adolescents, considering sexual maturation; (2) to determine the sources of variance in the diet and the number of days needed to estimate the usual diet of adolescents; and (3) to describe the dietary and lifestyle patterns of adolescents, family history of CVD, and age, and correlate them with the patterns of risk for CVD, adjusted for sexual maturation. A cross-sectional study was performed with 432 adolescents, aged 10-19 years, from public schools of the city of Natal, Brazil. Dyslipidemias were evaluated considering the lipid profile, the Castelli indexes I (TC/HDL) and II (LDL/HDL), and non-HDL cholesterol. The anthropometric indicators were BMI, WC, and WHR. The intake of energy and nutrients, including fiber, fatty acids, and cholesterol, was estimated from two 24-hour recalls (24HR). The lipid profile, anthropometric, and clinical variables were used in Pearson correlation and linear regression models, considering sexual maturation. The variance ratio of the diet was calculated from the per-person variance components, determined by analysis of variance (ANOVA). The number of days needed to estimate the usual intake of each nutrient was obtained by assuming a hypothetical correlation (r) ≥ 0.9 between the observed and the true nutrient intake. We used principal component analysis (PCA) as a method of extracting factors that accounted for the dependent, known cardiovascular risk variables obtained from the lipid profile, the Castelli indexes I and II, non-HDL cholesterol, BMI, WC, and WHR. Dietary and lifestyle patterns were obtained from the independent variables, based on the nutrients consumed and weekly physical activity. With the PCA, associations were investigated between the patterns of cardiovascular risk factors and the dietary and lifestyle patterns, age, and positive family history of CVD, through bivariate and multiple logistic regression adjusted for sexual maturation. Low HDL-C was the most prevalent dyslipidemia (50.5%) among the adolescents. Significant correlations were observed between hypercholesterolemia and a positive family history of CVD (r = 0.19, p < 0.01), and between hypertriglyceridemia and BMI (r = 0.30, p < 0.01), WC (r = 0.32, p < 0.01), and WHR (r = 0.33, p < 0.01). The linear model constructed with sexual maturation, age, and BMI explained about 1 to 10.4% of the variation in the lipid profile. The sources of variance between individuals were greater for all nutrients in both sexes. The variance ratios were below 1 for all nutrients and were higher in females. The results suggest that, to assess the diet of adolescents with greater precision, 2 days of 24HR would be enough for the consumption of energy, carbohydrates, fiber, and saturated and monounsaturated fatty acids. In contrast, 3 days would be recommended for protein, lipids, polyunsaturated fatty acids, and cholesterol. Two cardiovascular risk factor patterns were extracted in the PCA from the dependent variables: a "lipid profile" pattern (HDL-C and non-HDL cholesterol) and an "anthropometric indicator" pattern (BMI, WC, WHR), with an explanatory power of 75% of the variance of the original data. The factors representative of the independent variables led to two dietary patterns, a "western diet" pattern and a "protein diet" pattern, and one lifestyle pattern, the "energy balance" pattern. Together, these patterns provide an explanatory power of 67%. After adjustment for sexual maturation, in males the following associations remained significant: between puberty and the anthropometric indicator pattern (OR = 3.32, CI: 1.34-8.17), and between family history of CVD and the lipid profile pattern (OR = 2.62, CI: 1.20-5.72). In female adolescents, associations were identified between age after the first stage of puberty and the anthropometric pattern (OR = 3.59, CI: 1.58-8.17) and the lipid profile pattern (OR = 0.33, CI: 0.15-0.75). Conclusions: Low HDL-C was the most prevalent dyslipidemia, independent of the sex and nutritional status of the adolescents. Hypercholesterolemia was influenced by family history of CVD and sexual maturation; in turn, hypertriglyceridemia was closely associated with anthropometric indicators. The variance between diets was greater for all nutrients, which was reflected in a variance ratio below 1 and, consequently, in a smaller number of days required to estimate the usual diet of adolescents considering sex. The two dietary patterns extracted were considered unhealthy and the lifestyle pattern healthy. Associations were found between the CVD risk patterns and age and family history of CVD in the studied adolescents.
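The factor-extraction step used above can be illustrated with a generic PCA sketch: standardize the variables, diagonalize their correlation matrix, and keep the leading components as "patterns". The data below are synthetic stand-ins (two latent factors mimicking an anthropometric and a lipid cluster), not the study's data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for the dependent variables (BMI-, WC-, WHR-, HDL-C-
# and non-HDL-C-like measures); illustrative only.
n = 300
anthro = rng.normal(0, 1, n)                      # latent "adiposity" factor
lipid = rng.normal(0, 1, n)                       # latent "lipid" factor
X = np.column_stack([
    anthro + 0.3 * rng.normal(0, 1, n),           # BMI-like
    anthro + 0.3 * rng.normal(0, 1, n),           # WC-like
    anthro + 0.3 * rng.normal(0, 1, n),           # WHR-like
    lipid + 0.4 * rng.normal(0, 1, n),            # HDL-C-like
    lipid + 0.4 * rng.normal(0, 1, n),            # non-HDL-C-like
])

# Standardize, then diagonalize the correlation matrix.
Z = (X - X.mean(0)) / X.std(0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z.T))
order = np.argsort(eigvals)[::-1]                 # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals[:2].sum() / eigvals.sum()     # variance kept by 2 factors
scores = Z @ eigvecs[:, :2]                       # component scores per subject
```

The per-subject scores on the retained components are what then enter the logistic regression models as "pattern" variables.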

Relevance:

20.00%

Publisher:

Abstract:

Farming of marine shrimp is growing worldwide, and Litopenaeus vannamei (L. vannamei) is the most widely cultivated species. Shrimp is an attractive food for its nutritional value and sensory aspects, and maintaining these attributes throughout storage, which takes place largely under freezing, is essential. The aim of this research was to evaluate quality characteristics of Litopenaeus vannamei shrimp during frozen storage and to verify the effect of adding rosemary (Rosmarinus officinalis). Considering the reutilization of shrimp processing wastes, total carotenoid analyses were conducted on Litopenaeus vannamei shrimp waste and on the flour obtained after drying. Monthly physicochemical and sensory analyses were carried out on shrimp stored at -28.3 ± 3.8 °C for 180 days. Samples were placed in polyethylene bags and were categorized as whole shrimp (WS), peeled shrimp (PS), and PS with 0.5% dehydrated rosemary (RS). TBARS, pH, total carotenoid, and sensory Quantitative Descriptive Analysis (QDA) were carried out. Total carotenoid analysis was conducted on fresh wastes and processed flour (day 0) and after 60, 120, and 180 days of frozen storage. After 180 days, RS had lower pH (p = 0.001) and TBARS (p = 0.001) values and higher carotenoids (p = 0.003), while WS showed higher carotenoid losses. Sensory analysis showed that WS was firmer, although rancid taste and smell were perceived with greater intensity (p = 0.001). Rancid taste was detected in RS only at 120 days, at significantly lower intensity (p = 0.001) than in WS and PS. Fresh wastes had 42.74 µg/g of total carotenoids and processed flour 98.51 µg/g. After 180 days of frozen storage, total carotenoids were significantly lower than at day 0 (p < 0.05). The addition of rosemary can improve the sensory quality of frozen shrimp and reduce nutritional losses during storage. The wastes and flour of L. vannamei shrimp showed considerable astaxanthin content; however, losses of this pigment were observed during storage.

Relevance:

20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico

Relevance:

20.00%

Publisher:

Abstract:

Due to the current need of industry to integrate shop-floor data originating from several sources and to transform them into useful information for decision making, there is an ever greater demand for information visualization systems that support this functionality. On the other hand, a common practice nowadays, due to the high competitiveness of the market, is the development of industrial systems with characteristics of modularity, distribution, flexibility, scalability, adaptability, interoperability, reusability, and web access. Those characteristics provide extra agility and make it easier to adapt to the frequent changes in market demand. Based on the arguments exposed above, this work specifies a component-based architecture, with the corresponding development of a system based on that architecture, for the visualization of industrial data. The system was conceived to be capable of supplying on-line information and, optionally, historical information of shop-floor variables. This work shows that the component-based architecture developed possesses the necessary requirements for obtaining a robust, reliable, and easily maintained system, in agreement with industrial needs. The architecture also allows components to be added, removed, or updated at runtime, through a web-based component manager, further streamlining the process of adapting and updating the system.
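The runtime component management described can be sketched with a minimal registry pattern. All names below are illustrative inventions; the actual system is web-based and its interfaces are not detailed in the abstract.

```python
class ComponentManager:
    """Toy registry allowing components to be added, swapped, or removed
    at runtime (hypothetical sketch, not the system's real API)."""

    def __init__(self):
        self._components = {}

    def register(self, name, component):
        # Adding under an existing name hot-swaps the component.
        self._components[name] = component

    def unregister(self, name):
        self._components.pop(name, None)

    def get(self, name):
        return self._components[name]


class OnlineTrendView:
    """Toy visualization component: renders the latest value of a variable."""

    def render(self, variable, value):
        return f"{variable}: {value:.2f}"


manager = ComponentManager()
manager.register("trend", OnlineTrendView())
line = manager.get("trend").render("reactor_temp", 87.5)  # "reactor_temp: 87.50"
```

In the real system, the registry would be driven through the web-based manager, so views can be updated without stopping data acquisition.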

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, when market competition requires products with better quality and a constant search for cost savings and better use of raw materials, the research for more efficient control strategies becomes vital. In Natural Gas Processing Units (NGPUs), as in most chemical processes, quality control is accomplished through the composition of the products. However, chemical composition analysis has a long measurement time, even when performed by instruments such as gas chromatographs. This fact hinders the development of control strategies that could provide a better process yield. Natural gas processing is one of the most important activities in the petroleum industry. The main economic product of an NGPU is liquefied petroleum gas (LPG). LPG is ideally composed of propane and butane; in practice, however, its composition includes contaminants such as ethane and pentane. In this work, an inferential system using neural networks is proposed to estimate the ethane and pentane mole fractions in LPG and the propane mole fraction in the residual gas. The goal is to provide the values of these estimated variables every minute using a single multilayer neural network, making it possible to apply inferential control techniques in order to monitor the LPG quality and to reduce the propane loss in the process. To develop this work, an NGPU composed of two distillation columns, a deethanizer and a debutanizer, was simulated in the HYSYS software. The inference is performed from the process variables of the PID controllers present in the instrumentation of these columns. To reduce the complexity of the inferential neural network, the statistical technique of principal component analysis is used to decrease the number of network inputs, thus forming a hybrid inferential system. A simple strategy is also proposed to correct the inferential system in real time, based on measurements from the chromatographs that may exist in the process under study.
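The hybrid structure (a PCA front end feeding a small neural network) can be sketched with synthetic data. Everything below is illustrative: the latent "process states", the eight stand-in PID variables, and the mole-fraction-like target are invented, and the network is a bare one-hidden-layer regressor rather than the model actually trained in the work.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-ins for the PID-loop process variables: 8 correlated
# measurements driven by 3 latent process states (hypothetical setup).
n = 2000
latent = rng.normal(0, 1, (n, 3))
X = latent @ rng.normal(0, 1, (3, 8)) + 0.1 * rng.normal(0, 1, (n, 8))
# Hypothetical quality target, standing in for an ethane mole fraction.
y = latent[:, 0] + 0.5 * np.tanh(2 * latent[:, 1])
y = (y - y.mean()) / y.std()

# Step 1: PCA compresses the 8 inputs to 3 principal-component scores.
Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
P = Z @ Vt[:3].T

# Step 2: one-hidden-layer network trained by batch gradient descent.
W1 = rng.normal(0, 0.5, (3, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, 16);      b2 = 0.0
lr = 0.05
for _ in range(3000):
    H = np.tanh(P @ W1 + b1)           # hidden activations
    err = H @ W2 + b2 - y              # prediction error
    dH = np.outer(err, W2) * (1 - H ** 2)   # backprop through tanh
    W2 -= lr * H.T @ err / n;  b2 -= lr * err.mean()
    W1 -= lr * P.T @ dH / n;   b1 -= lr * dH.mean(0)

pred = np.tanh(P @ W1 + b1) @ W2 + b2
mse = float(((pred - y) ** 2).mean())
```

The dimensionality reduction is what keeps the network small enough to be evaluated every minute; the chromatograph-based correction the abstract mentions would periodically re-bias such a model against slow reference measurements.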

Relevance:

20.00%

Publisher:

Abstract:

We developed an assay methodology that used temperature variation and scanning electron microscopy to quantify and characterize, respectively, the wear evolution in three 46 LA machines, with internal-combustion, two-stroke engines of 7.64 cm³ displacement, 23.0 mm bore, 18.4 mm stroke, service speed from 2,000 to 16,000 rpm, 1.2 HP power, and 272 g weight. The engine components investigated were: (1) the engine head (Al-Si alloy), (2) the piston (Al-Si alloy), and (3) the piston pin (AISI 52100 steel). The assays were carried out on a test bench; engines 1 and 2 were assayed with no load, whereas in two assays of engine 3 we added a fan with wind speed varying from 8.10 m/s to 11.92 m/s, in order to identify and compare the engine's dynamic behavior relative to the engines assayed with no load. The temperatures of the engine's surface and surroundings were measured by two type K thermocouples connected to the assay device and registered on a microcomputer with data recording and parameter control and monitoring software throughout the assays. The worn surfaces of the components were analyzed by scanning electron microscopy (SEM) and EDS microanalysis. The study was complemented with shape deformation and mass measurement assays. The temperature variation was associated with the oxide morphology, and the wear mechanisms were discussed based on the relation between the thermomechanical effects and the materials characterization results.

Relevance:

20.00%

Publisher:

Abstract:

The semiarid region suffers from water scarcity. To regularize water availability during dry periods, dams are built. However, the quality of the stored water has suffered the effects of the irregular disposal of waste in the environment and of the anthropic activities carried out in the watersheds. Water degradation can be detected by monitoring water quality parameters. These data can be analyzed through statistical methods such as Principal Component Analysis and cluster analysis, which groups individuals with similar characteristics. The objective of this work is to cluster the reservoirs of Rio Grande do Norte, based on water quality parameters, to identify homogeneous groups of reservoirs in terms of pollution sources. The Piranhas-Açu, Apodi-Mossoró, Trairi, Potengi, and Ceará-Mirim basins are the object of this study. The parameters mercury, lead, chromium, total phosphorus, total nitrogen, and zinc contributed to the first principal component, which may indicate heavy metal pollution; total solids, BOD, DO, and copper to the second component, which may indicate pollution by organic matter and anthropic activities; and chlorophyll a, cadmium, and nickel to the third component, which may indicate eutrophication and heavy metal pollution. With the principal components in hand, the reservoirs were clustered into four distinct groups. Groups 1 and 2 consist of reservoirs of the Piranhas-Açu basin, which showed the highest heavy metal values. Group 3, consisting of reservoirs of the Ceará-Mirim, Potengi, and Trairi basins, showed the highest BOD and total solids values, and group 4 is formed by reservoirs of the Apodi-Mossoró basin. In the Trairi and Piranhas-Açu basins, the uncontrolled discharge of effluents and diffuse pollution sources should be curbed, and a sewage collection system should be implemented to minimize pollution by organic matter.
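The clustering step described above can be sketched with a plain k-means (Lloyd's algorithm) on principal-component scores. The scores below are synthetic (24 hypothetical reservoirs scattered around four planted group centers), and seeding with one point per expected group is a deterministic simplification for the sketch.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical PC scores for 24 reservoirs on three components, e.g.
# "heavy metals", "organic matter / anthropic", "eutrophication".
centers = np.array([[3.0, 0, 0], [1.5, 0, 2], [0, 3.0, 0], [0, 0, 0]])
scores = np.vstack([c + 0.2 * rng.normal(0, 1, (6, 3)) for c in centers])

# Lloyd's algorithm, k = 4, seeded with one point from each planted group.
k = 4
cent = scores[[0, 6, 12, 18]].copy()
for _ in range(50):
    # Assign each reservoir to its nearest centroid...
    dist = np.linalg.norm(scores[:, None, :] - cent[None, :, :], axis=2)
    labels = dist.argmin(axis=1)
    # ...then move each centroid to the mean of its assigned points.
    cent = np.array([scores[labels == j].mean(axis=0) for j in range(k)])
```

Each resulting label set is a candidate homogeneous group of reservoirs, to be interpreted against the component loadings (heavy metals, organic matter, eutrophication) as in the study.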