36 results for "Suavização Exponencial" (exponential smoothing)
at Universidade Federal do Rio Grande do Norte (UFRN)
Abstract:
Random walk models with temporal correlation (i.e. memory) are of interest in the study of anomalous diffusion phenomena. The random walk and its generalizations occupy a prominent place in the characterization of various physical, chemical and biological phenomena. Temporal correlation is an essential feature of anomalous diffusion models. Models with long-range temporal correlation can be called non-Markovian, whereas their short-range counterparts are Markovian. Within this context, we reviewed the existing models with temporal correlation: full memory, as in the elephant walk model, and partial memory, as in the Alzheimer walk model and the walk model with a Gaussian memory profile. These models show superdiffusion, with a Hurst exponent H > 1/2. In this work we study a superdiffusive random walk model with exponentially decaying memory. This seems to be a self-contradictory statement, since it is well known that random walks with exponentially decaying temporal correlations can be approximated arbitrarily well by Markov processes, and that central limit theorems forbid superdiffusion for Markovian walks with finite step-size variance. The solution to the apparent paradox is that the model is genuinely non-Markovian, owing to a time-dependent decay constant associated with the exponential behavior. Finally, we discuss ideas for future investigations.
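As an illustration of the kind of model discussed above, the following Python sketch simulates a toy one-dimensional walk whose next step repeats or reverses a past step recalled with an exponentially decaying weight governed by a time-dependent decay constant. The function memory_walk and all parameter choices are hypothetical and do not reproduce the thesis's exact model.

```python
import numpy as np

def memory_walk(n_steps=2000, p=0.75, lam0=0.01, rng=None):
    """Toy 1-D random walk with exponentially decaying memory.

    At time t a past step k is recalled with weight exp(-lambda(t)*(t-k)),
    where lambda(t) = lam0 / t is a hypothetical time-dependent decay
    constant, included only to illustrate the non-Markovian ingredient
    described in the abstract.  With probability p the recalled step is
    repeated, otherwise it is reversed.
    """
    rng = rng or np.random.default_rng(0)
    steps = np.empty(n_steps, dtype=int)
    steps[0] = 1
    for t in range(1, n_steps):
        lags = t - np.arange(t)                  # t - k for past times k = 0..t-1
        w = np.exp(-(lam0 / t) * lags)           # exponentially decaying memory weights
        k = rng.choice(t, p=w / w.sum())         # recall one past step
        steps[t] = steps[k] if rng.random() < p else -steps[k]
    return np.cumsum(steps)

positions = memory_walk()
print("final position:", positions[-1])
```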
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
REGIS, Josiana Florencio Vieira; CAMPOS, Ana Celia Cavalcanti Fernandes. O paradigma tecnológico e a revolução informacional: fundamentos da sociedade da informação. In: CONGRESSO INTERNACIONAL EM SISTEMAS DE INFORMAÇÃO E GESTÃO DA TECNOLOGIA, 6., 2009, São Paulo. Anais eletrônicos... São Paulo: FEA/USP, 2009. Oral presentation.
Abstract:
The civilizational model of global society is founded on large-scale production and on the exponential, diversified growth of consumption. This model impacts the environment, since it demands large quantities of natural resources and causes environmental contamination. Within this range of contamination, the generation of solid waste stands out as one of the main problems, because its harmful effects are felt immediately by people. In countries such as Brazil, one of the solutions put forward to minimize and/or address the problems created by solid waste is the recycling of materials. The official justification for this emphasis on recycling lies in the characteristics of the activity: the use of recycled materials reduces the demand for natural resources in industrial production processes, extends the useful life of sanitary landfills (the final destination of waste), and generates employment and income for waste pickers, people who survive by collecting and sorting recyclable materials. From the standpoint of environmental ethics, the question to be asked when analyzing the implications of waste generation is: why does global society generate solid waste so intensively? By contrast, in light of the market-oriented assumptions of capitalism, the question that drives discussions about solid waste is: what should be done with the growing generation of solid waste? This article proposes a reflection on the elements used to justify this ode to recycling. In our view, recycling fosters what we call economic environmentalism, in which the pro-recycling discourse appropriates the environmental features and potential of recycling in order to justify economically driven decisions about what to do with the waste generated every day.
Abstract:
The objective is to analyze the relationship between risk and the number of stocks in a portfolio for an individual investor when stocks are chosen by a "naive strategy". To this end, we carried out an experiment in which individuals selected stocks so as to reproduce this relationship. 126 participants were told that the risk of the first choice would be the average of the standard deviations of all portfolios consisting of a single asset, and that the same procedure would be applied to portfolios composed of two, three, and so on, up to 30 stocks. They selected the assets they wanted in their portfolios without the support of any financial analysis. For comparison, we also ran a hypothetical simulation of 126 investors who selected stocks from the same universe by means of a random number generator, so that each real participant is paired with a random hypothetical investor facing the same opportunity set. Patterns were observed in the individual participants' portfolios, characterizing curves for components of the samples. Because such groupings are somewhat arbitrary, a more objective measure of behavior was used: a simple linear regression for each participant, predicting portfolio variance as a function of the number of assets. In addition, we ran a pooled regression on all observations in a cross-section analysis. The expected pattern emerges on average but not for most individuals, many of whom effectively "de-diversify" when adding seemingly random securities. Furthermore, the results are slightly worse when a random number generator is used. This finding challenges the belief that only a small number of securities is needed for diversification and shows that it holds only for large samples. The implications are important, since many individual investors hold few stocks in their portfolios.
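A minimal Python sketch of the naive-diversification exercise described above: randomly ordered assets are added one at a time to an equally weighted portfolio and the portfolio variance is regressed on the number of assets, as done per participant in the study. The return-generating process and all numbers below are synthetic assumptions used only for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical universe of 30 assets with correlated returns (illustrative only).
n_assets, n_obs = 30, 250
common = rng.normal(0, 0.01, n_obs)                        # shared market factor
returns = common[:, None] + rng.normal(0, 0.02, (n_obs, n_assets))

def naive_portfolio_std(picks):
    """Standard deviation of an equally weighted (1/N) portfolio of the picked assets."""
    return returns[:, picks].mean(axis=1).std()

# One simulated "investor": add randomly chosen assets one at a time.
order = rng.permutation(n_assets)
stds = [naive_portfolio_std(order[:k]) for k in range(1, n_assets + 1)]

# Simple linear regression of portfolio variance on number of assets,
# analogous to the per-participant regression in the abstract.
k = np.arange(1, n_assets + 1)
slope, intercept = np.polyfit(k, np.square(stds), 1)
print(f"slope={slope:.2e}, intercept={intercept:.2e}")
```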
Abstract:
Cerium oxide has high potential for use in post-combustion pollutant removal, removal of organic matter from wastewater, and fuel-cell technology. Nickel oxide is an attractive material owing to its excellent chemical stability and its optical, electrical and magnetic properties. In this work, CeO2-NiO systems with metal-to-citric-acid molar ratios of 1:1 (I), 1:2 (II) and 1:3 (III) were synthesized using the Pechini method. TG/DTG and DTA techniques were used to monitor the degradation of the organic matter up to the formation of the oxide. Through thermogravimetric analysis and the dynamic method proposed by Coats and Redfern, it was possible to study the thermal decomposition reactions, propose the likely mechanism by which the reaction takes place, and determine kinetic parameters such as the activation energy Ea, the pre-exponential factor and the activation parameters. It was observed that both variables exert a significant influence on the formation of the polymeric precursor complex. The model that best fitted the experimental data in dynamic mode was R3, which describes nuclear growth in which the formed nuclei grow through a continuous reaction interface, assuming spherical symmetry (order 2/3). The activation enthalpy values showed that the reaction in the transition state is exothermic. The composition variables, together with the calcination temperature, were studied by different techniques such as XRD, IR and SEM. A microstructural study was also carried out by the Rietveld method; the calculation routine was developed to run in the FullProf Suite package and the profiles were analyzed with a pseudo-Voigt function. It was found that the metal-to-citric-acid molar ratio in the CeO2-NiO systems (I), (II) and (III) strongly influences the microstructural properties, crystallite size and lattice microstrain, and can be used to control these properties.
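The Coats-Redfern step can be illustrated with a short Python sketch that linearizes ln(g(α)/T²) against 1/T for the R3 (contracting-sphere) model and reads the activation energy from the slope; the temperature and conversion arrays below are hypothetical placeholders, not data from the work.

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def coats_redfern_ea(T, alpha, beta):
    """Estimate Ea via the Coats-Redfern linearization with the R3 model.

    g(alpha) = 1 - (1 - alpha)**(1/3); from
    ln(g(alpha)/T^2) = ln(A*R/(beta*Ea)) - Ea/(R*T),
    the slope of the line versus 1/T gives -Ea/R.
    """
    g = 1.0 - (1.0 - alpha) ** (1.0 / 3.0)
    y = np.log(g / T**2)
    slope, intercept = np.polyfit(1.0 / T, y, 1)
    Ea = -slope * R                          # activation energy, J/mol
    A = np.exp(intercept) * beta * Ea / R    # pre-exponential factor, 1/s
    return Ea, A

# Hypothetical TG data (temperature in K, conversion fraction), illustration only.
T = np.linspace(550, 750, 30)
alpha = np.clip((T - 540) / 220, 0.01, 0.99)
print(coats_redfern_ea(T, alpha, beta=10 / 60))   # heating rate of 10 K/min in K/s
```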
Abstract:
Originally aimed at operational objectives, the continuous measurement of bottomhole pressure and temperature, recorded by permanent downhole gauges (PDG), finds wide applicability in reservoir management. It contributes to the monitoring of well performance and makes it possible to estimate reservoir parameters over the long term. However, notwithstanding its unquestionable value, PDG data are characterized by a high noise content, and the presence of outliers among valid measurements is a major problem as well. In this work, the initial treatment of PDG signals is addressed, based on curve smoothing, self-organizing maps and the discrete wavelet transform. Additionally, a system based on the coupling of fuzzy clustering with feed-forward neural networks is proposed for transient detection. The results obtained were considered quite satisfactory for offshore wells and met the actual requirements for practical use.
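A minimal sketch of the wavelet part of such a treatment, assuming PyWavelets and standard soft thresholding with the universal threshold; the synthetic pressure record and parameter choices are illustrative and do not reproduce the thesis's full pipeline (curve smoothing and self-organizing maps are omitted).

```python
import numpy as np
import pywt  # PyWavelets

def dwt_denoise(signal, wavelet="db4", level=4):
    """Denoise a noisy signal with the discrete wavelet transform.

    Soft thresholding with the universal threshold; the noise level is
    estimated from the finest detail coefficients (MAD estimator).
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(signal)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

# Hypothetical PDG-like record: a slow pressure trend plus measurement noise.
t = np.linspace(0, 1, 2048)
p = 250 - 30 * np.exp(-5 * t) + np.random.default_rng(1).normal(0, 0.5, t.size)
p_clean = dwt_denoise(p)
```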
Abstract:
One of the greatest challenges in demography today is to obtain consistent mortality estimates, especially for small areas. The lack of this information hinders public health actions and impairs the quality of death classification, which worries demographers and epidemiologists seeking reliable mortality statistics for the country. In this context, the objective of this work is to obtain death-adjustment factors for the correction of adult mortality, by state, meso-region and age group, for the Northeast region of Brazil in 2010. The proposal rests on two lines of observation, one demographic and one statistical, and considers two levels of coverage in the states of the Northeast region: the meso-regions, as larger areas, and the counties, as small areas. The methodological principle is to use the General Growth Balance (GGB) demographic method to correct the observed deaths in the larger areas (meso-regions) of the states, since they are less prone to violations of the method's assumptions. Next, an empirical Bayes estimator is applied, taking as the total of deaths in each meso-region the value corrected by the demographic method, and as the small-area observations the deaths recorded in the counties. This combination produces a smoothing effect on the degree of death coverage, due to the empirical Bayes estimator, and makes it possible to evaluate the degree of death coverage by age group at the county, meso-region and state levels, with the advantage of estimating adjustment factors at the desired level of aggregation. The results grouped by state point to a significant improvement in the degree of death coverage after combining the methods, with values above 80%: Alagoas (0.88), Bahia (0.90), Ceará (0.90), Maranhão (0.84), Paraíba (0.88), Pernambuco (0.93), Piauí (0.85), Rio Grande do Norte (0.89) and Sergipe (0.92). Advances in the control of registry information in the health system, together with improvements in socioeconomic conditions and in the urbanization of the counties over the last decade, have provided better quality death-registry information in small areas.
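The shrinkage idea behind the empirical Bayes step can be sketched as follows in Python: county-level coverage ratios are pulled toward the GGB-corrected meso-region value with a weight that grows with the expected number of deaths. The moment-based variance estimates and the example numbers are assumptions made for illustration, not the thesis's exact estimator.

```python
import numpy as np

def empirical_bayes_coverage(obs, expected, regional_coverage):
    """Shrink county-level death-coverage ratios toward the meso-region value.

    Minimal moment-based empirical Bayes sketch: each county's ratio
    obs/expected is pulled toward the regional coverage (taken here as the
    GGB-corrected level) with a weight that increases with the expected count.
    """
    obs, expected = np.asarray(obs, float), np.asarray(expected, float)
    raw = obs / expected
    sampling_var = raw / expected                            # rough Poisson-based variance
    between_var = max(raw.var() - sampling_var.mean(), 1e-6) # between-county variance
    weight = between_var / (between_var + sampling_var)
    return weight * raw + (1 - weight) * regional_coverage

# Hypothetical counties: observed deaths, expected deaths, regional coverage of 0.90.
print(empirical_bayes_coverage(obs=[40, 7, 120], expected=[50, 12, 130],
                               regional_coverage=0.90))
```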
Abstract:
This work presents the analysis of a retaining wall designed for the basement of a residential building in Natal/RN, consisting of a spaced pile wall anchored by tiebacks in sand. The structure was instrumented in order to measure the wall's horizontal movements and the load distribution along the anchor fixed length. The horizontal movements were measured with an inclinometer, and the loads in the anchors were measured with strain gauges installed at three points along the anchor fixed length. Displacement measurements were taken right after the execution of each construction stage and right after the conclusion of the building, and the anchor loads were measured during the performance test, at the moment of lock-off and right after the conclusion of the building. From the displacement data, velocity and acceleration data for the wall were obtained. It was found that the time elapsed before the installation of the bracing was decisive for the magnitude of the displacements. The maximum horizontal displacement of the wall ranged between 0.18 and 0.66% of the final excavation depth. The loads in the anchors decreased strongly up to approximately half the anchor fixed length, following an exponential distribution. Furthermore, a loss of load in the anchors over time was observed, reaching 50% in one of them.
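The exponential load distribution along the fixed length can be illustrated with a short curve-fitting sketch; the strain-gauge positions and loads below are hypothetical values, not the instrumented readings from the monitored wall.

```python
import numpy as np
from scipy.optimize import curve_fit

def load_decay(x, P0, k):
    """Exponential decay of anchor load along the fixed (bonded) length."""
    return P0 * np.exp(-k * x)

# Hypothetical strain-gauge readings at three points along the fixed length
# (metres from its start, load in kN) -- illustrative values only.
x = np.array([0.0, 4.0, 8.0])
P = np.array([400.0, 210.0, 95.0])

(P0, k), _ = curve_fit(load_decay, x, P, p0=(400.0, 0.2))
half_length = np.log(2) / k    # distance over which the load halves
print(f"P0={P0:.0f} kN, k={k:.3f} 1/m, load halves after {half_length:.1f} m")
```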
Abstract:
This work proposes a new online algorithm for solving the k-Server Problem (KSP). Its performance is compared with that of other algorithms in the literature, namely the Harmonic and Work Function algorithms, which have been shown to be competitive and are therefore meaningful benchmarks. An algorithm that performs efficiently relative to them tends to be competitive as well, although this would obviously still have to be proven; such a proof, however, is beyond the scope of this work. The algorithm proposed for solving the KSP is based on reinforcement learning techniques. To this end, the problem was modeled as a multi-stage decision process, to which the Q-Learning algorithm, one of the most popular methods for establishing optimal policies in this kind of decision problem, is applied. It should be observed, however, that the size of the storage structure used by reinforcement learning to obtain the optimal policy grows with the number of states and actions, which in turn is proportional to the number n of nodes and k of servers. Analyzing this growth, one finds that it is exponential, which limits the application of the method to smaller problems, in which the numbers of nodes and servers are small. This problem, known as the curse of dimensionality, was introduced by Bellman and implies that an algorithm cannot be executed for certain instances of a problem because the computational resources needed to produce its output are exhausted. So that the proposed solution, based exclusively on reinforcement learning, is not restricted to small applications, an alternative solution is proposed for more realistic problems involving larger numbers of nodes and servers. This alternative solution is hierarchical and uses two methods for solving the KSP: reinforcement learning, applied to a reduced number of nodes obtained through an aggregation process, and a greedy method, applied to the subsets of nodes resulting from the aggregation, in which the criterion for scheduling the servers is the shortest distance to the demand location.
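A minimal tabular Q-learning sketch for a tiny k-server instance is shown below; the state/action encoding, the uniform demand model and all hyperparameters are illustrative assumptions rather than the formulation actually adopted in the thesis.

```python
import itertools
import random

import numpy as np

# Tiny k-server instance on a line: n nodes, k servers, distance |i - j|.
n, k = 6, 2
dist = lambda i, j: abs(i - j)
states = list(itertools.combinations(range(n), k))         # server configurations
Q = {(s, d, a): 0.0 for s in states for d in range(n) for a in range(k)}

alpha, gamma, eps, episodes = 0.1, 0.9, 0.1, 5000
rng = random.Random(0)
s = states[0]
for _ in range(episodes):
    d = rng.randrange(n)                                   # node issuing the demand
    if d in s:                                             # already covered: no move
        continue
    if rng.random() < eps:                                 # epsilon-greedy choice of
        a = rng.randrange(k)                               # which server to move
    else:
        a = min(range(k), key=lambda a_: Q[(s, d, a_)])
    cost = dist(s[a], d)
    s_next = tuple(sorted(s[:a] + (d,) + s[a + 1:]))
    # Expected cost-to-go: average over the next (uniform) demand of the
    # cheapest action available in the new configuration.
    future = np.mean([min(Q[(s_next, d2, a2)] for a2 in range(k))
                      for d2 in range(n) if d2 not in s_next])
    Q[(s, d, a)] += alpha * (cost + gamma * future - Q[(s, d, a)])
    s = s_next

# Greedy policy read-out: which server should serve node 4 from the initial state?
print(min(range(k), key=lambda a_: Q[(states[0], 4, a_)]))
```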
Abstract:
This work proposes a new phasor estimation technique for microprocessor-based numerical relays used in the distance protection of transmission lines, based on the recursive least squares method and called modified random-walking least squares. The performance of phasor estimation methods is compromised mainly by the exponentially decaying DC component present in fault currents. In order to reduce the influence of the DC component, a morphological filter (MF) was added to the least squares method and applied prior to the phasor estimation process. The presented method is implemented in MATLAB and its performance is compared with the one-cycle Fourier technique and with conventional phasor estimation, also based on the least squares algorithm. The least-squares-based methods used for comparison with the proposed one were: recursive with forgetting factor, covariance resetting and random walking. The performance analysis was carried out using synthetic signals and signals obtained from simulations in the Alternative Transients Program (ATP). Compared with other phasor estimation methods, the proposed method showed satisfactory results regarding estimation speed, steady-state oscillation and overshoot. The method's performance was then analyzed under variations of the fault parameters (resistance, distance, incidence angle and fault type); the results did not show significant variations in performance. In addition, the apparent impedance trajectory and the estimated fault distance were analyzed, and the presented method showed better results than the one-cycle Fourier algorithm.
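For reference, here is a short Python sketch of one of the baselines mentioned above, forgetting-factor recursive least squares phasor estimation (not the proposed modified random-walking method); the sampling rate, fault-current waveform and parameter choices are hypothetical.

```python
import numpy as np

def rls_phasor(signal, fs, f0=60.0, lam=0.98):
    """Forgetting-factor recursive least squares phasor estimation.

    The signal is modelled as y(t) = Yc*cos(w t) + Ys*sin(w t) and
    theta = [Yc, Ys] is tracked recursively; the fundamental magnitude
    is returned at every sample.
    """
    w = 2 * np.pi * f0
    theta = np.zeros(2)
    P = np.eye(2) * 1e4
    mags = []
    for n, y in enumerate(signal):
        t = n / fs
        phi = np.array([np.cos(w * t), np.sin(w * t)])
        K = P @ phi / (lam + phi @ P @ phi)            # gain vector
        theta = theta + K * (y - phi @ theta)          # parameter update
        P = (P - np.outer(K, phi) @ P) / lam           # covariance update
        mags.append(np.hypot(*theta))
    return np.array(mags)

# Hypothetical fault current: 60 Hz component plus a decaying DC offset.
fs = 1920
t = np.arange(0, 0.2, 1 / fs)
i_fault = 10 * np.cos(2 * np.pi * 60 * t - 1.2) + 8 * np.exp(-t / 0.05)
print(rls_phasor(i_fault, fs)[-1])   # magnitude estimate at the end of the window
```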
Abstract:
Static and cyclic tests are commonly used to characterize materials in structures. Cyclic tests assess the fatigue behavior of the material and thus provide the S-N curves used to build constant-life diagrams. However, when constructed from a small number of S-N curves, these diagrams underestimate or overestimate the actual behavior of the composite, so more tests are needed to obtain accurate results. In this context, one way of reducing costs is the statistical analysis of fatigue behavior. The aim of this research was to evaluate the probabilistic fatigue behavior of composite materials. The research was conducted in three parts. The first part consists of associating the Weibull probability equation with the equations commonly used to model the S-N curves of composite materials, namely the exponential equation and the power law and their generalizations. In the second part, the results obtained with the probability equation that best represents the S-N curves were used to train a modular network for a 5% failure probability. In the third part, a comparative study was carried out between the results obtained with the piecewise nonlinear model (PNL) and those of a modular network (MN) architecture in the analysis of fatigue behavior. A database of ten materials taken from the literature was used to assess the generalization ability of the modular network as well as its robustness. From the results, it was found that the generalized probabilistic power law best represents the probabilistic fatigue behavior of the composites and that, although the MN training for the 5% failure rate was not robust, for mean values the MN produced more accurate results than the PNL model.
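The first part can be illustrated with a Python sketch that fits a power-law S-N curve in log-log space and derives a 5% failure-probability factor from a rough Weibull fit to the normalized lives; the data and the moment-style parameter estimates are hypothetical and only show the idea, not the thesis's equations.

```python
import numpy as np

# Hypothetical S-N fatigue data: stress amplitude S (MPa) and cycles to failure N.
S = np.array([300, 300, 250, 250, 200, 200, 150, 150], dtype=float)
N = np.array([1.2e4, 1.8e4, 4.5e4, 6.1e4, 2.0e5, 2.9e5, 1.1e6, 1.6e6])

# Power-law S-N curve  S = A * N**b  fitted in log-log space.
b, logA = np.polyfit(np.log(N), np.log(S), 1)
A = np.exp(logA)
print(f"S = {A:.1f} * N^{b:.3f}")

# Weibull-based 5% failure-probability factor (rough moment-style estimates):
# lives normalised by the power-law prediction are assumed Weibull distributed.
N_pred = (S / A) ** (1.0 / b)
ratio = N / N_pred
shape = 1.2825 / np.log(ratio).std()                     # from std of log-lives
scale = np.exp(np.log(ratio).mean() + 0.5772 / shape)    # from mean of log-lives
q05 = scale * (-np.log(1 - 0.05)) ** (1.0 / shape)       # 5% quantile of the Weibull
print(f"5% failure life factor relative to the median curve: {q05:.2f}")
```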
Abstract:
Global warming due to greenhouse gas (GHG) emissions, especially CO2, has been identified as one of the major problems of the twenty-first century, considering the consequences it could have for the planet. Currently, biological processes have been mentioned as a possible solution, especially CO2 biofixation associated with microalgae growth. This strategy has been emphasized because, in addition to CO2 mitigation, it produces biomass rich in compounds of high added value. Microalgae show high photosynthetic capacity and growth rates higher than those of higher plants, doubling their biomass in one day. Their cultivation is not seasonal, they grow in salt water and they do not require irrigation, herbicides or pesticides. The lipid content of these microorganisms, depending on the species, may range from 10 to 70% of their dry weight, reaching 90% under certain culture conditions. Studies indicate that the most effective way to increase lipid production in microalgae is to induce stress by limiting the nitrogen content of the culture medium. This evidence justifies continued research on the production of biofuels from microalgae. In this work, the strategy of increasing lipid production in the microalga I. galbana through programmed nutritional stress, by nitrogen limitation, was studied. The physiological responses of the microalga, grown in f/2 medium with different nitrogen concentrations (N:P 15.0, control; N:P 5.0; and N:P 2.5), were monitored. During the exponential phase, the results showed no differences among the studied conditions; however, the cultures subjected to stress showed lower biomass yields in the stationary phase. There was an increase of 32.5% in carbohydrate content and 87.68% in lipid content at an N:P ratio of 5.0, and an average decrease of 65% in protein content at N:P ratios of 5.0 and 2.5. There were no significant variations in ash content, regardless of cultivation condition and growth phase. Despite the limitation of biomass production in the cultures with smaller N:P ratios, higher lipid accumulation was observed compared with the control culture. Given the increased lipid concentration associated with stress, this study suggests the use of the microalga Isochrysis galbana as an alternative raw material for biofuel production.
Abstract:
The optimization and control of a chemical process are strongly correlated with the amount of information that can be obtained from the system. In biotechnological processes, in which the transforming agent is a cell, many variables can interfere with the process, leading to changes in the microorganism's metabolism and affecting the quantity and quality of the final product. Therefore, continuous monitoring of the variables that affect the bioprocess is crucial in order to act on certain variables of the system, keeping it under the desired operational conditions and under control. In general, during a fermentation process, the analysis of important parameters such as substrate, product and cell concentrations is done off-line, requiring sampling, pretreatment and analytical procedures; these steps demand significant run time and the use of high-purity chemical reagents. In order to implement a real-time monitoring system for a bench-top bioreactor, this study was conducted in two steps: (i) the development of software providing a communication interface between the bioreactor and a computer, based on the acquisition and recording of the process variables pH, temperature, dissolved oxygen, level, foam level and agitation frequency, as well as the input setpoints of the operational parameters of the bioreactor control unit; (ii) the development of an analytical method using near-infrared spectroscopy (NIRS) to enable the monitoring of substrate, product and cell concentrations during a fermentation process for ethanol production with the yeast Saccharomyces cerevisiae. Three fermentation runs were conducted (F1, F2 and F3), monitored by NIRS and subsequently sampled for analytical characterization. The data obtained were used for calibration and validation, with pre-treatments, combined or not with smoothing filters, applied to the spectral data. The most satisfactory results were obtained when the calibration models were built from real samples of culture medium taken from fermentation runs F1, F2 and F3, showing that the NIRS-based analytical method can be used as a fast and effective way to quantify cell, substrate and product concentrations, enabling the implementation of in situ, real-time monitoring of fermentation processes.
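A minimal sketch of the NIRS calibration workflow, assuming Savitzky-Golay pre-treatment and a PLS regression model (the abstract does not name the regression technique, so PLS is an assumption); the spectra and concentrations below are synthetic.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

# Hypothetical NIR spectra (rows = samples, columns = wavelengths) and reference
# ethanol concentrations measured off-line; values are synthetic and only
# illustrate the calibration workflow described in the abstract.
rng = np.random.default_rng(0)
wavelengths = np.linspace(1100, 2500, 400)
conc = rng.uniform(0, 80, 40)                               # g/L ethanol
spectra = (conc[:, None] * np.exp(-((wavelengths - 1700) / 80) ** 2) / 80
           + rng.normal(0, 0.01, (40, 400)))

# Pre-treatment: Savitzky-Golay smoothing with a first derivative.
X = savgol_filter(spectra, window_length=11, polyorder=2, deriv=1, axis=1)

# PLS calibration model evaluated by 5-fold cross-validation.
pls = PLSRegression(n_components=3)
pred = cross_val_predict(pls, X, conc, cv=5).ravel()
rmsecv = np.sqrt(np.mean((pred - conc) ** 2))
print(f"RMSECV = {rmsecv:.2f} g/L")
```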
Abstract:
This study aimed to evaluate the potential use of smectite clays for color removal from textile effluents. The experiments were performed as exploratory tests planned with full and fractional factorial designs, in which the factors and levels are predetermined. The smectite clays used came from the gypsum hub of the Araripe-PE region, and the dye used was Reactive Yellow BF-4G 200%. The smectite clay was collected and taken to the Soil Physics Laboratory of UFRPE, where it was prepared by air drying, lump breaking and sieving, before being submitted to the adsorption process. From the 2² full factorial design, removal percentages of 96, 96.5 and 95.8% and adsorbed amounts of 4.80, 4.61 and 4.74 mg/g were obtained for the clay as received, chemically activated and thermally activated, respectively, showing that the activation processes used did not increase the adsorption capacity of the smectite clay. The adsorption data were best fitted by the Freundlich isotherm, which assumes an exponential distribution of active sites and, for the adsorption of cations and anions by clays, lies above the Langmuir equation. The kinetic model that best described the results was the pseudo-second-order model. In the 2⁴⁻¹ fractional factorial design, at concentrations of up to 500 mg/L, high color-removal percentages (92.37, 90.92 and 93.40%) and adsorbed amounts (230.94, 227.31 and 233.50 mg/g) were obtained for the three clays. These data fitted both the Langmuir and Freundlich isotherms well, and the kinetic model that best described the results was again the pseudo-second-order model.
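The isotherm and kinetic fits can be sketched as follows; the Freundlich and pseudo-second-order expressions are standard, while the concentration and uptake data are hypothetical values used only to show the fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data for dye adsorption on smectite clay:
# Ce = equilibrium concentration (mg/L), qe = adsorbed amount (mg/g).
Ce = np.array([5., 20., 60., 150., 300., 500.])
qe = np.array([4.1, 12.0, 28.5, 60.2, 105.0, 160.0])

# Freundlich isotherm qe = Kf * Ce**(1/n), fitted in linearised (log-log) form.
slope, intercept = np.polyfit(np.log(Ce), np.log(qe), 1)
Kf, n = np.exp(intercept), 1.0 / slope
print(f"Freundlich: Kf={Kf:.2f}, n={n:.2f}")

# Pseudo-second-order kinetics q(t) = (k2*qmax**2*t) / (1 + k2*qmax*t),
# fitted to hypothetical contact-time data (t in min, q in mg/g).
t = np.array([5., 10., 20., 40., 80., 160.])
q = np.array([60., 95., 130., 155., 168., 172.])
pso = lambda t, qmax, k2: (k2 * qmax**2 * t) / (1 + k2 * qmax * t)
(qmax, k2), _ = curve_fit(pso, t, q, p0=(180., 0.001))
print(f"Pseudo-second-order: qmax={qmax:.1f} mg/g, k2={k2:.4f} g/(mg.min)")
```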