175 results for "Simulações numéricas" (numerical simulations)
Abstract:
The central question of the present study is to identify the epistemological knowledge that teacher-trainees possess regarding the characteristics (properties) of the decimal numbering system. Its purpose is to offer a contribution to the pedagogic practice of teachers who work within the Basic Literacy Cycle, concerning both the acquisition of content and the development of the knowledge that helps them elaborate adequate strategies for working with the Decimal Numbering System in the classroom. The study is based on the constructivist, socio-interactionist approach to teaching Mathematics and constitutes, in itself, a methodological intervention with the teacher-trainees engaged in the Professional Qualification Program in Basic Education of the Federal University of Rio Grande do Norte. The study was grounded in investigations by researchers who have studied the construction of numerical writing, showing, for instance, that the process of constructing the ideas and procedures involved in grouping and exchanging in base 10 takes much longer to be accomplished than one might imagine. A set of activities was then elaborated that could not only contribute to the acquisition of content but also make the teacher-trainees reflect upon their teaching practices in the classroom, so that they become able to elaborate more consistent didactic approaches, taking into consideration the previous knowledge of the students as well as some obstacles that often appear along the way. Even when teachers have access to the most appropriate didactic resources, lack of knowledge of the content and of its real meaning means that the Decimal Numbering System, a subject of fundamental importance, is most times taught in a mechanical way. The analysis of the discussions and behaviors of the teacher-trainees during the activities revealed that the activities made them reflect upon their current classroom practices and that, as a whole, the aims of each of the activities carried out with the teacher-trainees were reached.
Abstract:
This study seeks to elucidate the motivations that lead readers of the novel Grande Sertão: Veredas to categorize certain grammatical constructions found in the work as proverbial. To that end, we investigate the cognitive processes involved in the configuration of the proverb as a discourse pattern, taking Cognitive Linguistics as theoretical support. In this endeavor, we anchor ourselves in the notions of embodied constructions, mental simulation, frequency, discourse pattern, and idiomatic expression. We assume that proverbs constitute a discourse pattern crystallized through recurrent use and therefore ask: which cognitive mechanisms do readers activate in the process of categorizing the expressions said to be proverbial? Motivated by this problem, we designed some experiments intended to clarify the questions under investigation, and we conclude that readers draw on the constructional makeup underlying the proverbs they know from interactions in their sociocultural environment in order to account for the semantics of novel constructions. In this process, schemas and frames are activated through mental simulations instantiated by bodily and cultural experiences, which are decisive for the realization of proverbial constructions.
Abstract:
This research aims to reconstruct and explain the argument proposed by Peter Singer to justify the principle of equal consideration of interests (PECI). The PECI is the basic normative principle according to which one should consider the interests of all sentient beings affected when making a moral decision. It is the conjunction that Singer proposes between universalizability and the principle of equal consideration of interests that constitutes a compelling reason to justify it. Universalizability requires disregarding numerical differences, putting yourself in other people's shoes, and considering the preferences, interests, desires, and ideals of those affected. Singer joins universalizability to the normative principle and thereby molds the form and content of his theory. The first chapter introduces the discussion to be developed in this essay. The second chapter deals with the historical and philosophical viewpoint from which Singer starts his studies. The third chapter addresses Singer's critiques of naturalism, intuitionism, relativism, simple subjectivism, and emotivism. The fourth chapter expounds the conception of universal prescriptivism proposed by R. M. Hare. Universal prescriptivism indicates, in Singer's view, a consistent way to establish the connection between universalizability and the PECI. It also highlights the criticisms directed at universal prescriptivism by J. L. Mackie and by Singer himself. The second part of this chapter briefly presents some of the main points of the classical conception of utilitarianism and its possible relationship with Singer's theory. The fifth chapter introduces Singer's thesis about the origin of ethics and universalizability as a necessary feature of the ethical point of view, and the way this argument is developed to form the PECI. The sixth chapter expounds the main distinctions that characterize the PECI. Finally, the seventh chapter provides a discussion of the reasons highlighted by Singer for one who wants to orient his life according to the standpoint of ethics. This structure allows us to explain the author's main ideas concerning the theoretical foundations of his moral philosophy.
Abstract:
In this work, using computer simulations, we identify the physical phenomena associated with the growth and dynamics of polymers viewed as complex systems, exhibiting nonlinear behavior, chaos, and self-organized criticality, among others. The first chapter is a brief introduction describing some basic concepts important for understanding our work. Chapter 2 describes our study of the distribution of segments in a branched polymer. Based on calculations similar to those used for linear polymer chains, we employ the Branched Polymer Growth Model (BPGM) proposed by Lucena et al. and analyze the probability distribution of monomers in a branched polymer in 2 dimensions, until then unknown. In the following chapter we study the universality class of the branched polymers generated by the BPGM. Using 3-dimensional computer simulations of the model proposed by Lucena et al., we compute some critical dimensions (the fractal, minimum, and chemical dimensions) in an attempt to elucidate the question of the universality class. Still in this chapter, we describe a new model for simulating branched polymers, which we developed in order to save computational effort. Next, in chapter 4, we study the chaotic behavior of the growth of polymers generated by the BPGM. We start from critically organized polymers and use a technique very similar to the one used to study damage spreading in phase transitions of Ising models, namely the Hamming distance. We found that the Hamming distance for branched polymers behaves as a power law, indicating a non-extensive character in the growth dynamics. In chapter 5 we analyze the molecular motion of polymer chains in the presence of obstacles and potential gradients. We use a generalized reptation model to study the diffusion of linear polymers in disordered media. We investigate the time evolution of these chains on square lattices and measure the characteristic transport times t. We close this dissertation with a chapter containing the general conclusion of our work (chapter 6), two appendices (appendices A and B) containing the basic phenomenology of some concepts used throughout this thesis (fractals and percolation, respectively), and a third and final appendix (appendix C) containing a description of a computer program that simulates the growth of branched polymers on a square lattice.
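As a rough illustration of the kind of growth dynamics the BPGM describes (not Lucena et al.'s actual algorithm), here is a minimal sketch of kinetic growth with branching on a square lattice; the branching probability b, the trapping rule, and the size cutoff are illustrative assumptions:

```python
import random

L = 201        # lattice size (odd so the seed sits at the center)
B = 0.1        # branching probability per growth step (illustrative value)
N_MAX = 2000   # stop after this many monomers

def grow_branched_polymer(seed=0):
    random.seed(seed)
    occupied = {(L // 2, L // 2)}   # seed monomer at the center
    tips = [(L // 2, L // 2)]       # active growth tips
    while tips and len(occupied) < N_MAX:
        tip = tips.pop(random.randrange(len(tips)))
        x, y = tip
        # empty nearest neighbors available for growth
        free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in occupied
                and 0 <= x + dx < L and 0 <= y + dy < L]
        if not free:
            continue                # trapped tip dies
        new = random.choice(free)
        occupied.add(new)
        tips.append(new)            # growth continues from the new monomer
        if random.random() < B and len(free) > 1:
            tips.append(tip)        # tip branches: the old site stays active too
    return occupied

cluster = grow_branched_polymer()
# radius of gyration as a crude size measure
cx = sum(x for x, _ in cluster) / len(cluster)
cy = sum(y for _, y in cluster) / len(cluster)
rg = (sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in cluster) / len(cluster)) ** 0.5
print(f"monomers: {len(cluster)}, radius of gyration: {rg:.2f}")
```

Measuring how the radius of gyration scales with the number of monomers over many such runs is the usual route to the fractal dimension mentioned in the abstract.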
Abstract:
High-precision calculations of correlation functions and order parameters were performed in order to investigate the critical properties of several two-dimensional ferromagnetic systems: (i) the q-state Potts model; (ii) the isotropic Ashkin-Teller model; (iii) the spin-1 Ising model. We deduced exact relations connecting specific damages (the difference between two microscopic configurations of a model) and the above-mentioned thermodynamic quantities, which permit their numerical calculation by computer simulation using any ergodic dynamics. The results obtained (critical temperatures and exponents) reproduced all the known values, with agreement up to several significant figures; of particular relevance were the estimates along the Baxter critical line (Ashkin-Teller model), where the exponents vary continuously. We also showed that this approach is less sensitive to finite-size effects than the standard Monte Carlo method. This analysis shows that the present approach produces results as accurate as, or more accurate than, the usual Monte Carlo simulation, and can be useful for investigating these models in circumstances where their behavior is not yet fully understood.
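To make the notion of "damage" concrete, here is a minimal sketch (not the thesis's exact-relations method) of the standard two-replica measurement on the plain Ising model: two configurations differing at a single site are evolved by heat-bath dynamics with the same random numbers, and the damage is the fraction of sites at which they disagree. Lattice size, temperature, and sweep count are illustrative:

```python
import math, random

L, T, SWEEPS = 32, 2.0, 200   # lattice size, temperature, sweeps (illustrative)
random.seed(1)

def heat_bath_sweep(spins, noise):
    """One heat-bath sweep; `noise` holds the random numbers shared by both replicas."""
    k = 0
    for i in range(L):
        for j in range(L):
            h = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                 + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            p_up = 1.0 / (1.0 + math.exp(-2.0 * h / T))  # P(spin = +1 | neighbors)
            spins[i][j] = 1 if noise[k] < p_up else -1
            k += 1

# replica A: random start; replica B: same start with one flipped spin (the "damage")
a = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
b = [row[:] for row in a]
b[L // 2][L // 2] *= -1

for _ in range(SWEEPS):
    noise = [random.random() for _ in range(L * L)]  # identical noise for both replicas
    heat_bath_sweep(a, noise)
    heat_bath_sweep(b, noise)

damage = sum(a[i][j] != b[i][j] for i in range(L) for j in range(L)) / L**2
print(f"damage fraction after {SWEEPS} sweeps at T={T}: {damage:.4f}")
```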
Abstract:
We study the critical behavior of the one-dimensional pair contact process (PCP), using the Monte Carlo method for several lattice sizes and three different updating schemes: random, sequential, and parallel. We also added a small modification to the model, called "Monte Carlo com Ressucitamento" (MCR, Monte Carlo with resuscitation), which consists of resuscitating one particle when the order parameter goes to zero. This was done because it is difficult to accurately determine the critical point of the model, since the order parameter (particle pair density) rapidly goes to zero in the traditional approach. With the MCR, the order parameter vanishes more smoothly, allowing us to use finite-size scaling to determine the critical point and the critical exponents β, ν, and z. Our results are consistent with those already found in the literature for this model, showing that the process of resuscitating one particle not only leaves the critical behavior of the system unchanged, but also makes it easier to determine the critical point and critical exponents of the model. This extension of the Monte Carlo method has already been used in other contact process models, leading us to believe it is useful for studying several other non-equilibrium models.
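A minimal sketch of the 1D PCP with random-sequential updating, plus a resuscitation step in the spirit of the MCR, might look as follows; the annihilation probability, the re-seeding rule (here a whole pair rather than a single particle), and all parameter values are illustrative assumptions rather than the thesis's exact algorithm:

```python
import random

L, P, STEPS = 200, 0.08, 100_000  # chain length, annihilation prob. (near p_c ~ 0.077), attempts
random.seed(2)
site = [1] * L                    # start fully occupied

def pair_density():
    """Order parameter: density of nearest-neighbor occupied pairs."""
    return sum(site[i] and site[(i + 1) % L] for i in range(L)) / L

for _ in range(STEPS):
    i = random.randrange(L)
    j = (i + 1) % L
    if site[i] and site[j]:                      # found a nearest-neighbor pair
        if random.random() < P:
            site[i] = site[j] = 0                # pair annihilation
            if pair_density() == 0.0:            # MCR-style step: if the order
                k = random.randrange(L)          # parameter vanished, re-seed a
                site[k] = site[(k + 1) % L] = 1  # pair so the run can continue
        else:                                    # creation at a site adjacent to the pair
            k = (i - 1) % L if random.random() < 0.5 else (j + 1) % L
            site[k] = 1

print(f"pair density after {STEPS} attempts: {pair_density():.4f}")
```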
Abstract:
We use a finite-difference Eulerian numerical code, ZEUS 3D, to perform simulations of the collision between two magnetized molecular clouds, aiming to evaluate the rate of star formation triggered by the collision and to analyze how that rate varies with the relative orientations of the cloud magnetic fields before the shock. ZEUS 3D is not an easy code to handle: we had to create two subroutines, one to set up the cloud-cloud collision and the other for the data output. ZEUS is a modular code; its hierarchical way of working is explained, as is the way our subroutines work. We adopt two sets of initial values for the density, temperature, and magnetic field of the clouds and of the external medium. For each set, we analyze in detail six cases with different directions and orientations of the cloud magnetic fields relative to the direction of motion of the clouds. The analysis of these twelve cases allowed us to confirm analytical-theoretical proposals found in the literature and to obtain several original results. Previous works indicate that, if the cloud magnetic fields before the collision are orthogonal to the direction of motion, star formation is strongly inhibited during a cloud-cloud shock, whereas if those fields are parallel to the direction of motion, star formation is stimulated. Our treatment of the problem confirmed those results numerically and further allowed us to quantify the relative star-forming efficiencies in each case. Moreover, we propose and analyze an intermediate case in which the field of one cloud is orthogonal to the motion and the field of the other is parallel to it. We conclude that, in this case, the star formation rate takes a value intermediate between the two extreme cases mentioned above. Besides that, we study the case in which the fields are orthogonal to the direction of motion but anti-parallel to each other rather than parallel, and we obtain the corresponding variation of the star formation rate due to this change of field configuration. This last case had not been studied in the literature before. Our study allows us to obtain, from the simulations, the star formation rate in each case, as well as the temporal dependence of that rate as each collision evolves, which we follow in detail for one particular case. The values we obtain for the star formation rate are in accordance with those expected from the presently existing observational data.
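The inhibiting role of a field orthogonal to the motion can be seen with a back-of-the-envelope estimate independent of ZEUS 3D: under flux freezing, compressing gas across a transverse field amplifies B in proportion to the density, so the compression stalls roughly where the magnetic pressure B²/8π matches the collision's ram pressure ρv², whereas a field parallel to the motion is not amplified and offers no such resistance. The cloud parameters below are illustrative assumptions:

```python
import math

# illustrative molecular-cloud parameters (cgs units)
n_h2 = 100.0            # H2 number density [cm^-3]
m_h2 = 2 * 1.67e-24     # H2 molecular mass [g]
rho0 = n_h2 * m_h2      # mass density [g cm^-3]
v = 10e5                # collision speed: 10 km/s in cm/s
b0 = 10e-6              # pre-shock field: 10 microgauss [G]

p_ram = rho0 * v**2                      # ram pressure of the collision
p_mag0 = b0**2 / (8 * math.pi)           # initial magnetic pressure

# For a transverse field, flux freezing gives B ~ B0 * (rho/rho0), so the
# compression stalls roughly where B^2 / 8pi ~ p_ram:
compression = math.sqrt(p_ram / p_mag0)  # rho/rho0 at pressure balance

print(f"ram pressure: {p_ram:.2e} erg/cm^3")
print(f"initial magnetic pressure: {p_mag0:.2e} erg/cm^3")
print(f"max compression factor (transverse field): ~{compression:.0f}")
```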
Abstract:
In this work we study a connection between a non-Gaussian statistics, the Kaniadakis statistics, and complex networks. We show that the degree distribution P(k) of a scale-free network can be calculated by maximizing the information entropy in the context of non-Gaussian statistics. As an example, a numerical analysis based on the preferential attachment growth model is discussed, and the numerical behavior of the Kaniadakis and Tsallis degree distributions is compared. We also analyze the diffusive epidemic process (DEP) on a one-dimensional regular lattice. The model is composed of A (healthy) and B (sick) species that independently diffuse on the lattice with diffusion rates DA and DB, subject to the probabilistic dynamical rules A + B → 2B and B → A. This model belongs to the category of non-equilibrium systems with an absorbing state and a phase transition between active and inactive states. We investigate the critical behavior of the DEP using an auto-adaptive algorithm to find critical points: the method of automatic searching for critical points (MASCP). We compare our results with the literature and find that the MASCP successfully finds the critical exponents 1/ν and 1/zν in all the cases DA = DB, DA < DB, and DA > DB.
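For reference, the two deformed exponentials being compared are standard: the Kaniadakis κ-exponential exp_κ(x) = (√(1+κ²x²) + κx)^(1/κ) and the Tsallis q-exponential e_q(x) = [1+(1−q)x]^(1/(1−q)), both reducing to eˣ in the appropriate limit. A short sketch contrasting the tails of hypothetical degree distributions P(k) ∝ f(−k/λ) follows; the values of κ, q, and λ are illustrative, not those of the thesis:

```python
import math

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    if kappa == 0:
        return math.exp(x)
    return (math.sqrt(1 + kappa**2 * x**2) + kappa * x) ** (1 / kappa)

def exp_q(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if q == 1:
        return math.exp(x)
    base = 1 + (1 - q) * x
    return base ** (1 / (1 - q)) if base > 0 else 0.0

kappa, q, lam = 0.5, 1.5, 10.0  # illustrative parameter choices
for k in (1, 10, 100, 1000):
    print(f"k={k:5d}  Kaniadakis: {exp_kappa(-k / lam, kappa):.3e}  "
          f"Tsallis: {exp_q(-k / lam, q):.3e}  exp: {math.exp(-k / lam):.3e}")
```

Both deformed distributions develop power-law tails at large k, which is why they can describe scale-free networks where the ordinary exponential cannot.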
Abstract:
In this work we have studied, by Monte Carlo computer simulation, several properties that characterize damage spreading in the Ising model, defined on Bravais lattices (the square and triangular lattices) and on the Sierpinski gasket. First, we investigated the antiferromagnetic model on the triangular lattice with a uniform magnetic field, using Glauber dynamics; the chaotic-frozen critical frontier we obtained coincides, within error bars, with the paramagnetic-ferromagnetic frontier of the static transition. Using heat-bath dynamics, we studied the ferromagnetic model on the Sierpinski gasket: we showed that there are two times characterizing the relaxation of the damage. One of them satisfies the generalized scaling theory proposed by Henley (critical exponent z ~ A/T at low temperatures), while the other does not obey any of the known scaling theories. Finally, we used time-series analysis methods to study, under Glauber dynamics, the damage in the ferromagnetic Ising model on a square lattice. We obtained a Hurst exponent with value 0.5 at high temperatures, growing to 1 close to the temperature TD that separates the chaotic and frozen phases.
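One common way to extract a Hurst exponent from a damage time series is rescaled-range (R/S) analysis; the abstract does not say which estimator was used, so the sketch below is simply the textbook R/S procedure, sanity-checked on white noise (for which H ≈ 0.5, the high-temperature value quoted above):

```python
import math, random

def hurst_rs(series, min_window=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis."""
    n = len(series)
    windows, rs_values = [], []
    w = min_window
    while w <= n // 2:
        rs_sum, count = 0.0, 0
        for start in range(0, n - w + 1, w):
            chunk = series[start:start + w]
            mean = sum(chunk) / w
            cum, lo, hi, dev = 0.0, 0.0, 0.0, 0.0
            for x in chunk:                 # range of the cumulative deviation
                cum += x - mean
                lo, hi = min(lo, cum), max(hi, cum)
                dev += (x - mean) ** 2
            s = math.sqrt(dev / w)          # standard deviation of the chunk
            if s > 0:
                rs_sum += (hi - lo) / s
                count += 1
        if count:
            windows.append(w)
            rs_values.append(rs_sum / count)
        w *= 2
    # least-squares slope of log(R/S) vs log(window) gives H
    lx = [math.log(v) for v in windows]
    ly = [math.log(v) for v in rs_values]
    mx, my = sum(lx) / len(lx), sum(ly) / len(ly)
    return (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
            / sum((x - mx) ** 2 for x in lx))

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(4096)]
print(f"estimated H for white noise: {hurst_rs(noise):.2f}")
```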
Abstract:
In this thesis, we address two issues of broad conceptual and practical relevance in the study of complex networks. The first is associated with the topological characterization of networks, while the second relates to dynamical processes that occur on top of them. Regarding the first line of study, we designed a model for network growth in which preferential attachment includes: (i) connectivity and (ii) homophily (links between sites with similar characteristics are more likely). We observe that the competition between these two aspects leads to a heterogeneous pattern of connections, with the topological properties of the network showing quite interesting results. In particular, we emphasize that there is a region where the characteristics of the sites play an important role not only in the rate at which they acquire links, but also in the number of connections between sites with similar and dissimilar characteristics. Finally, we investigate the spread of epidemics on the network topology developed, assuming the disease spreads according to the rules of the contact process. Using Monte Carlo simulations, we show that the competition between site states (infected/healthy) induces a transition between an active phase (sick individuals present) and an inactive one (no sick individuals). In this context, we estimate the critical point of the phase transition through the Binder cumulant and the ratio between moments of the order parameter. Then, using finite-size scaling analysis, we determine the critical exponents associated with this transition.
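A minimal sketch of growth combining degree and homophily might weight each existing node j by k_j / (1 + |η_new − η_j|), where η is a site characteristic drawn uniformly at random; this particular kernel and all parameter values are assumptions for illustration, since the abstract does not specify the exact attachment rule:

```python
import random

N, M = 2000, 2  # final network size, links added per new node (illustrative)
random.seed(3)

eta = [random.random() for _ in range(N)]  # intrinsic characteristic of each site
degree = [0] * N
edges = set()

# fully connected seed of M + 1 nodes
for i in range(M + 1):
    for j in range(i):
        edges.add((j, i))
        degree[i] += 1
        degree[j] += 1

for new in range(M + 1, N):
    # attachment weight: degree times an assumed homophily kernel
    weights = [degree[j] / (1.0 + abs(eta[new] - eta[j])) for j in range(new)]
    targets = set()
    while len(targets) < M:                # M distinct targets, degree-and-similarity biased
        targets.add(random.choices(range(new), weights=weights)[0])
    for j in targets:
        edges.add((j, new))
        degree[new] += 1
        degree[j] += 1

print(f"nodes: {N}, edges: {len(edges)}, max degree: {max(degree)}")
```

Histogramming `degree` over many runs gives the P(k) whose shape the competition between the two ingredients is expected to deform away from the pure Barabási-Albert power law.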
Abstract:
Lithium (Li) is the chemical element with atomic number 3 and is among the lightest elements known in the universe. In nature, lithium is found in the form of two stable isotopes, 6Li and 7Li; the latter is dominant, accounting for about 93% of the Li found in the Universe. Because of its fragility, this element is widely used in astrophysics, especially for understanding the physical processes that have occurred since the Big Bang, through the evolution of galaxies and stars. For the primordial nucleosynthesis at the time of the Big Bang (BBN), theoretical calculations predict the production of Li along with the other light elements, such as deuterium and beryllium. For Li, BBN theory predicts a primordial abundance of log ε(Li) = 2.72 dex on the logarithmic scale relative to H. The Li abundance found in metal-poor stars (Pop II stars), taken as the primordial Li abundance, is measured as log ε(Li) = 2.27 dex. In the ISM (interstellar medium), which reflects the current value, the lithium abundance is log ε(Li) = 3.2 dex. This value is of great importance for our comprehension of the chemical evolution of the Galaxy. The processes responsible for the increase over the primordial Li value are still not clearly understood. There is a real contribution of Li from low-mass giant stars, and this contribution needs to be well constrained if we want to understand our Galaxy. The main obstacle in this logical sequence is the appearance of some low-mass giant stars of G and K spectral types whose atmospheres are highly enriched in Li. Such elevated values are exactly the opposite of what is expected for typical low-mass giants, in which the convective envelope deepens in mass and dilutes any Li, leading to abundances around log ε(Li) ∼ 1.4 dex according to stellar evolution models. Three suggestions are found in the literature that try to reconcile the theoretical and observed Li abundances in these Li-rich giants, but none of them provides conclusive answers. In the present work, we propose a qualitative study of the evolutionary state of the Li-rich stars in the literature, together with the recently discovered first Li-rich star observed by the Kepler satellite. The main objective of this work is to promote a solid discussion of the evolutionary state based on the characteristics obtained from the seismic analysis of the object observed by Kepler. We used evolutionary tracks and simulations carried out with the population synthesis code TRILEGAL in order to evaluate as precisely as possible the evolutionary state and internal structure of these groups of stars. The results indicate a characteristic time for the enrichment of these stars that is very short compared with the evolutionary timescale.
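All the abundances above use the standard astronomical scale log ε(Li) = log10(N_Li/N_H) + 12. A two-line converter makes the convention explicit, applied to the values quoted in the abstract:

```python
def n_li_over_n_h(log_eps):
    """Convert log eps(Li) = log10(N_Li / N_H) + 12 to a number ratio."""
    return 10 ** (log_eps - 12)

for label, log_eps in [("BBN prediction", 2.72), ("Pop II plateau", 2.27),
                       ("ISM today", 3.2), ("diluted giants", 1.4)]:
    print(f"{label:15s} log eps = {log_eps:.2f} -> "
          f"N_Li/N_H = {n_li_over_n_h(log_eps):.2e}")
```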
Abstract:
Recent astronomical observations indicate that the universe has null spatial curvature, is accelerating, and has a matter-energy content composed of roughly 30% matter (baryons + dark matter) and 70% dark energy, a relativistic component with negative pressure. However, in order to build more realistic models it is necessary to consider the evolution of small density perturbations to explain the richness of observed structures on the scale of galaxies and clusters of galaxies. The structure formation process was first described by Press and Schechter (PS) in 1974, by means of the galaxy cluster mass function. The PS formalism assumes a Gaussian distribution for the primordial density perturbation field. Besides a serious normalization problem, such an approach does not explain the recent cluster X-ray data, and it also disagrees with the most up-to-date computational simulations. In this thesis, we discuss several applications of the nonextensive (non-Gaussian) q-statistics, proposed in 1988 by C. Tsallis, with special emphasis on the cosmological process of large-scale structure formation. Initially, we investigate the statistics of the primordial fluctuation field of the density contrast, since the most recent data from the Wilkinson Microwave Anisotropy Probe (WMAP) indicate a deviation from Gaussianity. We assume that such deviations may be described by the nonextensive statistics, because it reduces to the Gaussian distribution in the limit of the free parameter q = 1, thereby allowing a direct comparison with the standard theory. We study its application to a galaxy cluster catalog based on the ROSAT All-Sky Survey (hereafter HIFLUGCS). We conclude that the standard Gaussian model applied to HIFLUGCS does not agree with the most recent data independently obtained by WMAP, whereas the nonextensive statistics yields values much more aligned with the WMAP results. We also demonstrate that the Burr distribution corrects the normalization problem. The cluster mass function formalism was also investigated in the presence of dark energy; in this case, constraints on several cosmic parameters were also obtained. The nonextensive statistics was further applied to two distinct problems: (i) the plasma probe and (ii) the description of Bremsstrahlung radiation (the primary radiation from X-ray clusters), a problem of considerable interest in astrophysics. In another line of development, using supernova data and the gas mass fraction of galaxy clusters, we discuss a redshift variation of the equation-of-state parameter, considering two distinct expansions. An interesting aspect of this work is that the results do not require a prior on the mass parameter, as usually occurs in analyses involving only supernova data. Finally, we obtain a new estimate of the Hubble parameter through a joint analysis involving the Sunyaev-Zeldovich effect (SZE), X-ray data from galaxy clusters, and baryon acoustic oscillations. We show that the degeneracy of the observational data with respect to the mass parameter is broken when the signature of the baryon acoustic oscillations, as given by the Sloan Digital Sky Survey (SDSS) catalog, is considered. Our analysis, based on SZE/X-ray data for a sample of 25 galaxy clusters with triaxial morphology, yields a Hubble parameter in good agreement with independent studies, provided by the Hubble Space Telescope project and the recent estimates of WMAP.
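To illustrate why a non-Gaussian density-contrast field changes a PS-type mass function, a sketch can compare the collapsed fraction F(δ > δ_c) under a Gaussian and under a Tsallis q-Gaussian built from the q-exponential (one common convention, normalized numerically here). δ_c = 1.686 is the standard spherical-collapse threshold; σ and q are illustrative choices, not the thesis's fitted values:

```python
import math

DELTA_C, SIGMA, Q = 1.686, 0.9, 1.2  # collapse threshold, field rms, Tsallis q (illustrative)

def q_exp(x, q):
    """Tsallis q-exponential, reducing to exp(x) at q = 1."""
    if q == 1:
        return math.exp(x)
    base = 1 + (1 - q) * x
    return base ** (1 / (1 - q)) if base > 0 else 0.0

def tail_fraction(q, sigma, delta_c, x_max=60.0, n=200_000):
    """F(delta > delta_c) for p(delta) ~ e_q(-delta^2 / 2 sigma^2), normalized numerically."""
    dx = 2 * x_max / n
    xs = [-x_max + i * dx for i in range(n + 1)]
    ps = [q_exp(-x * x / (2 * sigma**2), q) for x in xs]
    norm = sum(ps) * dx                # crude Riemann normalization
    tail = sum(p for x, p in zip(xs, ps) if x > delta_c) * dx
    return tail / norm

print(f"Gaussian   F(>delta_c) = {tail_fraction(1.0, SIGMA, DELTA_C):.4e}")
print(f"q-Gaussian F(>delta_c) = {tail_fraction(Q,   SIGMA, DELTA_C):.4e}")
```

The fatter tail of the q > 1 distribution raises the collapsed fraction, and hence the predicted abundance of massive clusters, which is the qualitative effect exploited in the comparison with the HIFLUGCS data.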
Abstract:
Inventory management in hospitals is of paramount importance, since interruptions in the supply of materials and drugs can cause irreparable damage to human lives, while excess inventory ties up capital. Hospitals should use inventory management techniques to perform replenishment at ever shorter intervals, in order to reduce inventories and fixed assets and to meet citizens' requirements properly. Inventory management can be an even bigger problem for public hospitals, which face restrictions on the use of resources and a more bureaucratized decision-making structure. Currently the University Hospital Onofre Lopes (HUOL) uses a periodic replacement policy for hospital medical supplies and medicines, which at one moment produces surplus stock and at the next leaves items out of stock. This study proposes a continuous replenishment system based on an order point for the inventory of medical supplies and medicines at HUOL. To that end, a literature review covering the management of federal university hospitals, logistics, inventory management, and replenishment systems in hospitals was performed, emphasizing demand forecasting, ABC classification (the ABC curve), and the order point system. The current inventory management policy and the proposal were then described, covering the profile of the institution, its current inventory policy, and a simulation of continuous replenishment by order point. For the simulation, the sample consisted of 102 medical supply items and 44 drug items, selected using the ABC classification of inventory and prioritizing class A items, which comprise the most relevant items by added value, representing 80% of the financial value in the 2012 fiscal year. Considering that this is a public organization subject to specific legislation, we performed two simulations: the first following the inventory management guidelines of Normative Instruction No. 205 (IN 205) of the Secretariat of Public Administration of the Presidency (SEDAP/PR), and the second based on the literature specializing in hospital inventory management. The results of the two simulations were compared with the current replenishment policy. Among these results are: an indication that continuous replenishment by reorder point based on IN 205 provides lower levels of safety stock and maximum stock and enables a 17% reduction in the amount spent on full replenishment of inventories, in other words, less capital tied up, as well as a reduction in stock quantities; the simulation based on the literature, in turn, indicated parameters that prevent the application of that technique to all items of the sample. Hence, a change in the inventory management of HUOL, with the application of continuous replenishment according to IN 205, provides a significant reduction in the acquisition costs of medical supplies and medicines.
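The core of a continuous-review order-point policy can be sketched in a few lines: reorder when the stock position falls to ROP = d̄ · LT + SS, with safety stock SS = z · σ_d · √LT. This is the textbook formulation, not necessarily the exact parameters prescribed by IN 205; the demand series and lead time below are hypothetical:

```python
import math, statistics

# twelve months of demand for a hypothetical class-A item (units), plus lead time
monthly_demand = [120, 135, 110, 150, 142, 128, 137, 125, 160, 118, 133, 145]
lead_time_months = 0.5  # replenishment lead time (illustrative)
z = 1.65                # safety factor for roughly a 95% service level

d_bar = statistics.mean(monthly_demand)      # average monthly demand
sigma_d = statistics.stdev(monthly_demand)   # demand variability

safety_stock = z * sigma_d * math.sqrt(lead_time_months)
reorder_point = d_bar * lead_time_months + safety_stock

print(f"mean demand: {d_bar:.1f}/month, sigma: {sigma_d:.1f}")
print(f"safety stock: {safety_stock:.1f} units")
print(f"reorder point: {reorder_point:.1f} units")
```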
Abstract:
In this work we study the survival cure rate model proposed by Yakovlev (1993), considered in a competing risks setting. Covariates are introduced for modeling the cure rate, and we allow some covariates to have missing values. We consider only the case in which the missing covariates are categorical, and we implement the EM algorithm via the method of weights for maximum likelihood estimation. We present a Monte Carlo simulation experiment to compare the properties of the estimators based on this method with those of the estimators under the complete-case scenario. In this experiment, we also evaluate the impact on the parameter estimates of increasing the proportion of immune and censored individuals among the non-immune ones. We illustrate the proposed methodology with a real data set involving the time until graduation for the undergraduate course in Statistics of the Universidade Federal do Rio Grande do Norte.
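In Yakovlev's promotion-time formulation, each subject carries N ~ Poisson(θ) latent competing causes with i.i.d. activation times, so the population survival is S(t) = exp(−θF(t)) and the cure fraction is exp(−θ). A small Monte Carlo sketch of data generation under this model follows; exponential latent times and a fixed censoring time are illustrative choices, and the covariate and missing-data machinery of the thesis is omitted:

```python
import math, random

random.seed(4)
THETA, RATE, CENSOR = 1.2, 0.5, 8.0  # Poisson mean, latent-time rate, censoring time

def sample_poisson(lam):
    """Poisson sampler by inversion (the random module has no built-in Poisson)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def simulate_subject():
    """Return (time, event indicator) under S(t) = exp(-theta * F(t))."""
    n = sample_poisson(THETA)
    if n == 0:                       # cured subject: never experiences the event
        return CENSOR, 0
    t = min(random.expovariate(RATE) for _ in range(n))  # first latent cause to fire
    return (t, 1) if t < CENSOR else (CENSOR, 0)

data = [simulate_subject() for _ in range(20_000)]
event_rate = sum(d for _, d in data) / len(data)

print(f"observed event proportion: {event_rate:.3f}")
print(f"theoretical cure fraction exp(-theta): {math.exp(-THETA):.3f}")
print(f"theoretical P(event by t={CENSOR}): "
      f"{1 - math.exp(-THETA * (1 - math.exp(-RATE * CENSOR))):.3f}")
```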