42 results for Models of separation of effects
Abstract:
In this work we investigate aspects of damage spreading in cooperative systems described by models of mutually interacting discrete variables (spins) distributed on the sites of a regular lattice. The following cases were examined: (i) the influence of the type of update (parallel or sequential) of the microscopic configurations, during Monte Carlo computer simulation, in the Ising model on a triangular lattice. We observe that sequential updating produces a dynamic phase transition (chaotic-frozen) at a temperature TD ≈ TC (the Curie temperature), for ferromagnetic couplings (TC = 3.6409 J/kB) and antiferromagnetic couplings (TC = 0). Parallel updating, which in this case is unable to distinguish the two types of coupling, leads to a transition at TD ≠ TC; (ii) a study of the Ising model on the square lattice with quenched site dilution showed that the damage-spreading technique is an efficient method for computing the critical frontier and the fractal dimension of the percolating cluster, since the results obtained (despite a relatively modest computational effort) are comparable to those resulting from the application of other high-cost analytical and/or computational methods; (iii) finally, we present analytical results showing how certain special combinations of damages can be used to compute thermodynamic quantities (order parameters, correlation functions and susceptibilities) of the Nα x Nβ model, which contains as particular cases some of the most studied models in Statistical Mechanics (Ising, Potts, Ashkin-Teller and cubic).
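The damage-spreading technique of items (i) and (ii) can be pictured as follows: two replicas of the same lattice, initially differing at a single site, evolve under identical random numbers, and the Hamming distance between them measures the damage. The sketch below is a minimal, hypothetical Python illustration on a small square lattice with sequential Metropolis updates, not the triangular-lattice simulation of the thesis; lattice size, temperature and sweep count are arbitrary choices.

```python
import math
import random

def sweep(spins, L, T, rands):
    """Sequential Metropolis sweep driven by a pre-drawn random-number
    stream, so that two replicas can share exactly the same noise."""
    for idx, r in enumerate(rands):
        i, j = divmod(idx, L)
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2.0 * spins[i][j] * nn          # flip cost, units J = kB = 1
        if dE <= 0 or r < math.exp(-dE / T):
            spins[i][j] = -spins[i][j]

def damage(a, b, L):
    """Hamming distance: fraction of sites where the replicas differ."""
    return sum(a[i][j] != b[i][j] for i in range(L) for j in range(L)) / (L * L)

random.seed(42)
L, T = 8, 1.5
ref = [[1] * L for _ in range(L)]
rep = [row[:] for row in ref]
rep[L // 2][L // 2] *= -1                    # initial damage: one flipped spin
for _ in range(50):
    rands = [random.random() for _ in range(L * L)]   # shared stream
    sweep(ref, L, T, rands)
    sweep(rep, L, T, rands)
d = damage(ref, rep, L)
print(d)
```

Tracking the averaged damage as a function of temperature is how the dynamical transition at TD is located; whether the damage spreads or heals at a given temperature depends on the dynamics and the update scheme, which is precisely the point of item (i).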
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Abstract:
The effect of confinement on the magnetic structure of vortices in dipolar-coupled ferromagnetic nanoelements is an issue of current interest, not only for academic reasons, but also for the potential impact on a number of promising applications. Most applications, such as nano-oscillators for wireless data transmission, benefit from the possibility of tailoring the vortex-core magnetic pattern. We report a theoretical study of vortex nucleation in pairs of coaxial iron and Permalloy cylinders, with diameters ranging from 21 nm to 150 nm and thicknesses of 12 nm and 21 nm, separated by a non-magnetic layer. Isolated (single) 12 nm thick iron and Permalloy cylinders do not hold a vortex, whereas isolated 21 nm thick cylinders do. Our results indicate that one may tailor the magnetic structure of the vortices, and their relative chirality, by selecting the thickness of the non-magnetic spacer and the values of the cylinder diameters and thicknesses. Also, the dipolar interaction may induce vortex formation in pairs of 12 nm thick nanocylinders and inhibit the formation of vortices in pairs of 21 nm thick nanocylinders. These new phases form according to the distance between the cylinders. Furthermore, we show that the preparation route may control the relative chirality and polarity of the vortex pair. For instance, by saturating a pair of Fe cylinders of 81 nm diameter and 21 nm thickness along the crystalline anisotropy direction, a pair of 36 nm core-diameter vortices with the same chirality and polarity is prepared; by saturating along the perpendicular direction, one prepares a 30 nm core-diameter vortex pair with opposite chirality and opposite polarity. We also present a theoretical discussion of the impact of vortices on the thermal hysteresis of a pair of interface-biased elliptical iron nanoelements separated by an ultrathin nonmagnetic insulating layer.
We have found that iron nanoelements exchange-coupled to a noncompensated NiO substrate display thermal hysteresis at room temperature, well below the iron Curie temperature. The thermal hysteresis consists of different sequences of magnetic states in the heating and cooling branches of a thermal loop, and originates in the thermal reduction of the interface field and in the rearrangements of the magnetic structure at high temperatures produced by the strong dipolar coupling. The width of the thermal hysteresis varies from 500 K to 100 K for lateral dimensions of 125 nm x 65 nm and 145 nm x 65 nm. We focus on the thermal effects on two particular states: the antiparallel state, which has, at low temperatures, the interface-biased nanoelement with its magnetization aligned with the interface field and the second nanoelement aligned opposite to the interface field; and the parallel state, which has both nanoelements with their magnetization aligned with the interface field at low temperatures. We show that the dipolar interaction leads to enhanced thermal stability of the antiparallel state and reduces the thermal stability of the parallel state. These states are the key phases in the application of pairs of ferromagnetic nanoelements, separated by a thin insulating layer, to tunneling magnetic memory cells. We have found that for a pair of 125 nm x 65 nm nanoelements, separated by 1.1 nm, with a low-temperature interface field strength of 5.88 kOe, the low-temperature state (T = 100 K) consists of a pair of nearly parallel buckle states. This low-temperature phase is kept with minor changes up to T = 249 K, when the magnetization is reduced to 50% of its low-temperature value due to the nucleation of a vortex centered around the middle of the free-surface nanoelement. By further increasing the temperature, there is another small change in the magnetization due to vortex motion.
Apart from minor changes in the vortex position, the high-temperature vortex state remains stable in the cooling branch down to low temperatures. We note that wide-loop thermal hysteresis may pose limits on the design of tunneling magnetic memory cells.
Abstract:
The recent observational advances of Astronomy and a more consistent theoretical framework have turned Cosmology into one of the most exciting frontiers of contemporary science. In this thesis, homogeneous and inhomogeneous Universe models containing dark matter and different kinds of dark energy are confronted with recent observational data. Initially, we analyze constraints from the existence of old high-redshift objects, type Ia Supernovae and the gas mass fraction of galaxy clusters for two distinct classes of homogeneous and isotropic models: decaying vacuum and X(z)CDM cosmologies. By considering the quasar APM 08279+5255 at z = 3.91, with age between 2 and 3 Gyr, we obtain 0.2 < ΩM < 0.4, while the β parameter, which quantifies the contribution of Λ(t), is restricted to the interval 0.07 < β < 0.32, thereby implying that the minimal age of the Universe amounts to 13.4 Gyr. A lower limit to the quasar formation redshift (zf > 5.11) was also obtained. Our analyses, including flat, closed and hyperbolic models, show that there is no age crisis for this kind of decaying Λ(t) scenario. Tests from SNe Ia and gas mass fraction data were performed for flat X(z)CDM models. For an equation of state ω(z) = ω0 + ω1 z, the best fit is ω0 = -1.25, ω1 = 1.3 and ΩM = 0.26, whereas for models with ω(z) = ω0 + ω1 z/(1+z) we obtain ω0 = -1.4, ω1 = 2.57 and ΩM = 0.26. In another line of development, we have discussed the influence of the observed inhomogeneities by considering the Zeldovich-Kantowski-Dyer-Roeder (ZKDR) angular diameter distance. By applying the statistical χ² method to a sample of angular diameters of compact radio sources, the best fit to the cosmological parameters for XCDM models is ΩM = 0.26, ω = -1.03 and α = 0.9, where ω and α are the equation-of-state and smoothness parameters, respectively. Such results are compatible with a phantom energy component (ω < -1).
The possible bidimensional spaces associated with the (α, ΩM) plane were constrained by using data from SNe Ia and the gas mass fraction of galaxy clusters. For Supernovae the parameters are restricted to the intervals 0.32 < ΩM < 0.5 (2σ) and 0.32 < α < 1.0 (2σ), while for the gas mass fraction we find 0.18 < ΩM < 0.32 (2σ) with all values of α allowed. For a joint analysis involving Supernovae and gas mass fraction data we obtained 0.18 < ΩM < 0.38 (2σ). On general grounds, the present study suggests that the influence of the cosmological inhomogeneities in the matter distribution needs to be considered in more detail in the analyses of the observational tests. Further, the analytical treatment based on the ZKDR distance may give non-negligible corrections to the so-called background tests of FRW-type cosmologies.
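The two equation-of-state expansions used in this analysis can be written down directly. The following Python sketch evaluates them with the best-fit values quoted above; the density-evolution law for the second (CPL-like) form is the standard textbook result, stated here for illustration rather than taken from the thesis:

```python
import math

def w_linear(z, w0, w1):
    """First expansion: w(z) = w0 + w1 * z."""
    return w0 + w1 * z

def w_cpl(z, w0, w1):
    """Second expansion (CPL-like): w(z) = w0 + w1 * z / (1 + z)."""
    return w0 + w1 * z / (1.0 + z)

def rho_x_cpl(z, w0, w1):
    """Dark-energy density in units of its z = 0 value for the CPL-like
    form, from rho(z) = exp(3 * integral_0^z (1 + w(z')) / (1 + z') dz')."""
    return (1.0 + z) ** (3.0 * (1.0 + w0 + w1)) * math.exp(-3.0 * w1 * z / (1.0 + z))

# Best-fit values quoted in the abstract, evaluated at a sample redshift
print(w_linear(0.5, -1.25, 1.3))     # linear fit at z = 0.5
print(w_cpl(0.5, -1.4, 2.57))        # CPL-like fit at z = 0.5
```

Note that both forms reduce to ω(0) = ω0 today, and a pure cosmological constant (ω0 = -1, ω1 = 0) gives a constant dark-energy density, as expected.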
Abstract:
Recent astronomical observations indicate that the universe has null spatial curvature, is accelerating, and that its matter-energy content is composed of circa 30% matter (baryons + dark matter) and 70% dark energy, a relativistic component with negative pressure. However, in order to build more realistic models, it is necessary to consider the evolution of small density perturbations to explain the richness of observed structures on the scale of galaxies and clusters of galaxies. The structure formation process was first described by Press and Schechter (PS) in 1974, by means of the galaxy-cluster mass function. The PS formalism assumes a Gaussian distribution for the primordial density perturbation field. Besides a serious normalization problem, such an approach does not explain the recent cluster X-ray data, and it is also in disagreement with the most up-to-date computational simulations. In this thesis, we discuss several applications of the nonextensive (non-Gaussian) q-statistics, proposed in 1988 by C. Tsallis, with special emphasis on the cosmological process of large-scale structure formation. Initially, we investigate the statistics of the primordial fluctuation field of the density contrast, since the most recent data from the Wilkinson Microwave Anisotropy Probe (WMAP) indicate a deviation from Gaussianity. We assume that such deviations may be described by the nonextensive statistics, because it reduces to the Gaussian distribution in the limit of the free parameter q = 1, thereby allowing a direct comparison with the standard theory. We study its application to a galaxy-cluster catalog based on the ROSAT All-Sky Survey (hereafter HIFLUGCS). We conclude that the standard Gaussian model applied to HIFLUGCS does not agree with the most recent data independently obtained by WMAP. Using the nonextensive statistics, we obtain values much more aligned with the WMAP results.
We also demonstrate that the Burr distribution corrects the normalization problem. The cluster mass function formalism was also investigated in the presence of dark energy; in this case, constraints on several cosmic parameters were obtained as well. The nonextensive statistics was further applied to two distinct problems: (i) the plasma probe and (ii) the description of Bremsstrahlung radiation (the primary radiation from X-ray clusters), a problem of considerable interest in astrophysics. In another line of development, using supernova data and the gas mass fraction of galaxy clusters, we discuss a redshift variation of the equation-of-state parameter by considering two distinct expansions. An interesting aspect of this work is that the results do not need a prior on the mass parameter, as usually occurs in analyses involving only supernova data. Finally, we obtain a new estimate of the Hubble parameter through a joint analysis involving the Sunyaev-Zeldovich effect (SZE), the X-ray data from galaxy clusters and the baryon acoustic oscillations. We show that the degeneracy of the observational data with respect to the mass parameter is broken when the signature of the baryon acoustic oscillations, as given by the Sloan Digital Sky Survey (SDSS) catalog, is considered. Our analysis, based on the SZE/X-ray data for a sample of 25 galaxy clusters with triaxial morphology, yields a Hubble parameter in good agreement with independent studies provided by the Hubble Space Telescope project and the recent WMAP estimates.
Abstract:
We present residual analysis techniques to assess the fit of correlated survival data by accelerated failure time models (AFTM) with random effects. We propose an imputation procedure for censored observations and consider three types of residuals to evaluate different model characteristics. We illustrate the proposal by fitting an AFTM with random effects to a real data set involving times between failures of oil-well equipment.
Abstract:
Caffeine is a mild psychostimulant that has positive cognitive effects at low doses, while promoting detrimental effects on these processes at higher doses. Episodic-like memory can be evaluated in rodents through hippocampus-dependent tasks. The dentate gyrus is a hippocampal subregion in which adult neurogenesis occurs, and this process is believed to be related to the function of pattern separation, such as the identification of spatial and temporal patterns when discriminating events. Furthermore, neurogenesis is influenced by spatial and contextual learning tasks. Our goal was to evaluate the performance of male Wistar rats in episodic-like tasks after acute or chronic caffeine treatment (15 mg/kg or 30 mg/kg). Moreover, we assessed the effect of the chronic caffeine treatment, as well as the influence of the hippocampus-dependent learning tasks, on the survival of newborn neurons generated at the beginning of treatment. For this purpose, we used BrdU to label the new cells generated in the dentate gyrus. Regarding the acute treatment, we found that the saline group showed a tendency toward better spatial and temporal discrimination than the caffeine groups. The chronic 15 mg/kg (low-dose) caffeine group showed the best discrimination of the temporal aspect of episodic-like memory, whereas the chronic 30 mg/kg (high-dose) caffeine group was able to discriminate temporal order only under a condition of greater difficulty. Assessment of neurogenesis using immunohistochemistry to evaluate the survival of newborn neurons generated in the dentate gyrus revealed no difference among the chronic-treatment groups. Thus, the positive mnemonic effects of the chronic caffeine treatment were not related to neuronal survival. However, another plastic mechanism could explain the positive mnemonic effect, given that there was no improvement in the acute caffeine groups.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Abstract:
Exposure to stressors produces physiological changes that adapt the individual to the environment. Depending on its type, intensity and duration, stress can affect some cognitive functions, particularly learning and memory processes. Several studies have also proposed that some level of anxiety is necessary for memory formation. In this context, memories of previous aversive experiences may determine the manner and intensity with which fear responses are expressed, which explains the great interest in analyzing both anxiety and memory in animals. In addition, males and females react differently to stressful stimuli, showing different levels of anxiety and differences in the acquisition, retention and recall of information. Based on this information, the present study aimed to verify the effect of stress on learning, memory and anxiety behavioral parameters in rats exposed to different types of long-duration stressors (seven consecutive days): restraint (4 h/day), overcrowding (18 h/day) and social isolation (18 h/day), in the different phases of the estrous cycle. Our results showed that stress induced by restraint and social isolation did not change the acquisition process but impaired memory recall in the rats. Furthermore, a protective effect of sex hormones on the retrieval of aversive memory is suggested, since female rats in the proestrus or estrus phases, characterized by high estrogen concentrations, showed no aversive-memory deficits. Moreover, despite the increased plasma corticosterone levels observed in female rats subjected to restraint stress and social isolation, anxiety levels were unaltered across these stress conditions. Animal models based on psychological and social stress have been extensively discussed in the literature.
Correlating behavioral, physiological and psychological responses has contributed to increasing the understanding of stress-induced psychophysiological disorders.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior
Abstract:
This work proposes a systematic approach to the management of variability in models and aspects, using mechanisms from Aspect-Oriented Software Development (AOSD) and Model-Driven Development (MDD). The main goal of the approach, named CrossMDA-SPL, is to improve the management, modularization and separation of the variabilities of a Software Product Line (SPL) architecture at a high level of abstraction (model), at the design and implementation phases of SPL development, exploiting the synergy between AOSD and MDD. The CrossMDA-SPL approach defines base artifacts to advance a clear separation between the mandatory and optional features of the SPL architecture. The artifacts are represented by two models: (i) the core model (base domain), responsible for specifying the features common to all members of the SPL, and (ii) the variability model, responsible for representing the variable features of the SPL. In addition, the CrossMDA-SPL approach comprises: (i) guidelines for the modeling and representation of variability; (ii) the CrossMDA-SPL services and process; and (iii) models of the SPL architecture or of a product instance of the SPL. The guidelines use the advantages of AOSD and MDD to promote better modularization of the variable features of the SPL architecture during the creation of the core and variability models. The services and sub-processes are responsible for automatically combining, through a transformation process, the core and variability models, and for generating new models that represent the implementation of the SPL architecture or an instance model of the SPL, providing mechanisms for the effective modularization of variability in SPL architectures at the model level. The concepts are described and assessed through a case study of an SPL for electronic-ticket transport management systems.
Abstract:
The occurrence of problems related to the scattering and tangling phenomena, such as the difficulty of maintaining a system, is increasingly frequent. One way to address this problem is the identification of crosscutting concerns. To maximize its benefits, the identification must be performed from the early stages of the development process, but some works have reported that this is not done in most cases, making system development susceptible to errors and prone to later refactoring. This situation directly affects the quality and cost of the system. PL-AOVgraph is a goal-oriented requirements modeling language that supports the representation of relationships among requirements and provides separation of crosscutting concerns through the representation of crosscutting relationships. Accordingly, this work presents a semi-automatic method for crosscutting-concern identification in requirements specifications written in PL-AOVgraph. An adjacency matrix is used to identify the contribution relationships among the elements. The crosscutting-concern identification is based on a fan-out analysis of the contribution relationships, using the information in the adjacency matrix; once concerns are identified, the corresponding crosscutting relationships are created. This method is implemented as a new module of the ReqSys-MDD tool.
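The fan-out analysis over the adjacency matrix can be pictured with a toy example. Everything below — the element names, the matrix and the threshold — is hypothetical and illustrative, not taken from PL-AOVgraph or the thesis:

```python
# Rows/columns stand for requirements-model elements; adjacency[i][j] = 1
# means element i has a contribution relationship toward element j.
elements = ["security", "persistence", "ordering", "payment"]
adjacency = [
    [0, 1, 1, 1],   # "security" contributes to three other elements
    [0, 0, 1, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 0],
]

def fan_out(matrix, i):
    """Number of contribution relationships leaving element i (row sum)."""
    return sum(matrix[i])

# An element whose fan-out exceeds the threshold is flagged as a
# crosscutting-concern candidate (threshold chosen arbitrarily here).
THRESHOLD = 2
candidates = [e for i, e in enumerate(elements) if fan_out(adjacency, i) > THRESHOLD]
print(candidates)
```

Here only "security" exceeds the threshold, so it alone would be flagged and turned into a crosscutting relationship in the model.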
Abstract:
Separation methods have limited application as a result of operational costs, low throughput and the long time needed to separate the fluids. Nevertheless, these treatment methods are important because of the need to extract unwanted contaminants from the oil production stream: the concentration of oil in water must be minimal (around 40 to 20 ppm) before the water can be discharged to the sea. Given the need for primary treatment, the objective of this project is to study and implement algorithms for the identification of polynomial NARX (Nonlinear Auto-Regressive with Exogenous Input) models in closed loop, to implement structural identification, and to compare strategies using PI control and on-line-updated NARX predictive models on a combination of a three-phase separator in series with three hydrocyclone batteries. The main goals of this project are: to obtain an optimized phase-separation process that regulates the system even in the presence of oil surges; to show that it is possible to obtain optimized tunings for the controllers by analyzing the loop as a whole; and to evaluate and compare the PI and predictive control strategies applied to the process. To accomplish these goals, a simulator was used to represent the three-phase separator and the hydrocyclones. Algorithms were developed for system identification (NARX) using RLS (Recursive Least Squares), along with methods for model structure detection. Predictive control algorithms were also implemented with the NARX model updated on-line, together with optimization algorithms using PSO (Particle Swarm Optimization). The project ends with a comparison of the results obtained with the PI and predictive controllers (both tuned through the particle swarm algorithm) in the simulated system.
We conclude that the optimizations performed make the system less sensitive to external perturbations and that, when optimized, the two controllers show similar results, with the predictive controller somewhat less sensitive to disturbances.
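The RLS identification of a polynomial NARX model can be sketched as follows. The model structure, coefficients and input signal below are illustrative, not the separator model of the thesis; this is plain-Python RLS for a noiseless toy system y(k) = a·y(k-1) + b·u(k-1) + c·y(k-1)·u(k-1):

```python
import random

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def outer(u, v):
    return [[ui * vj for vj in v] for ui in u]

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares update of the parameter estimate theta
    and covariance P, given regressor phi and measured output y."""
    Pphi = mat_vec(P, phi)
    denom = lam + sum(f * p for f, p in zip(phi, Pphi))
    K = [p / denom for p in Pphi]                      # gain vector
    err = y - sum(t * f for t, f in zip(theta, phi))   # prediction error
    theta = [t + k * err for t, k in zip(theta, K)]
    KP = outer(K, Pphi)                                # K * (phi^T P), P symmetric
    n = len(phi)
    P = [[(P[i][j] - KP[i][j]) / lam for j in range(n)] for i in range(n)]
    return theta, P

# Noiseless toy system to be identified (coefficients are illustrative)
a_true, b_true, c_true = 0.5, 1.2, -0.3
random.seed(1)
theta = [0.0, 0.0, 0.0]
P = [[1000.0 * (i == j) for j in range(3)] for i in range(3)]
y_prev = 0.0
for _ in range(200):
    u_prev = random.uniform(-1.0, 1.0)                 # persistently exciting input
    phi = [y_prev, u_prev, y_prev * u_prev]            # polynomial NARX regressor
    y = a_true * y_prev + b_true * u_prev + c_true * y_prev * u_prev
    theta, P = rls_step(theta, P, phi, y)
    y_prev = y
print([round(t, 3) for t in theta])                    # close to [0.5, 1.2, -0.3]
```

With a forgetting factor lam < 1, the same recursion tracks time-varying dynamics, which is what allows a NARX predictive model to be updated on-line while the plant runs.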
Abstract:
We consider prediction techniques based on accelerated failure time models with random effects for correlated survival data. Besides the Bayesian approach through the empirical Bayes estimator, we also discuss the use of a classical predictor, the Empirical Best Linear Unbiased Predictor (EBLUP). In order to illustrate the use of these predictors, we consider applications to a real data set from the oil industry. More specifically, the data set involves the mean time between failures of petroleum-well equipment in the Bacia Potiguar. The goal of this study is to predict the risk/probability of failure in order to support a preventive maintenance program. The results show that both methods are suitable for predicting future failures, supporting good decisions on the deployment and economy of resources for preventive maintenance.
Abstract:
The principal effluent of the oil industry is produced water, which commonly accompanies the produced oil. Its volume is substantial, and it can affect the environment and society if its discharge is inappropriate; careful management of this effluent is therefore indispensable. The traditional treatment of produced water usually includes two techniques, flocculation and flotation. In flocculation processes, the traditional flocculant agents are poorly specified in technical information tables and are still expensive. The flotation process, in turn, is the step in which the particles suspended in the effluent can be separated. Dissolved air flotation (DAF) is a technique that has been consolidating economically and environmentally, presenting great reliability when compared with other processes, and it is widely used in various fields of water and wastewater treatment around the globe. In this regard, this study aimed to evaluate the potential of an alternative natural flocculant agent based on Moringa oleifera to reduce the total oil and grease (TOG) content of produced water from the oil industry by the flocculation/DAF method. The natural flocculant agent was evaluated for its efficacy, as well as for its efficiency compared with two commercial flocculant agents normally used by the petroleum industry. The experiments followed an experimental design, and the overall efficiencies of all flocculants were treated through statistical calculation using the STATISTICA software, version 10.0. Contour surfaces were obtained from the experimental design and interpreted in terms of the response variable, TOG removal efficiency. The design also allowed mathematical models to be obtained for calculating the response variable under the studied conditions.
The commercial flocculants showed similar behavior, with an average overall efficiency of 90% for oil removal; the economic analysis is therefore the decisive factor in choosing one of these flocculant agents for the process. The natural alternative flocculant agent based on Moringa oleifera showed lower separation efficiency than the commercial ones (70% on average); on the other hand, it causes less environmental impact and is less expensive.