888 results for Quantitative Dynamic General Equilibrium
Abstract:
Study abroad programmes (SAP) have become increasingly popular with university students and within academia. They are often seen as an experiential opportunity to expand student learning and development, including increases in global, international, and intercultural competences. However, despite the increasing popularity of and participation in study abroad programmes, many student concerns and uncertainties remain. This research investigates the initial pre-departure concerns and apprehensions of students undertaking a one-semester study abroad programme and uses these as context for an examination of students' violated expectations during their programme. The research uses interpretative phenomenological analysis to interpret data collected from regularly updated blogs composed by students throughout their SAP experience. The process of using blogs to collect data is less formalised than many other approaches to interpretative phenomenological analysis, enabling ‘in the moment’ feedback during the SAP and lending greater depth to the understanding of student perceptions.
Volatility analysis, price integration and predictability for the Brazilian shrimp market
Abstract:
This paper investigates the dynamics of the volatility structure of shrimp prices in the Brazilian fish market. First, the initial characteristics of the shrimp price series were described. From this information, statistical tests were performed and univariate models were selected as price predictors. We then verified whether a long-term equilibrium relationship exists between Brazilian and American imported shrimp prices and, where such a relationship was confirmed, whether there is a causal link between these assets, given that the two countries have maintained trade relations over the years. The study is exploratory and applied in nature, with a quantitative approach. The data were collected through direct contact with the Companhia de Entrepostos e Armazéns Gerais de São Paulo (CEAGESP) and from the official American import website of the National Marine Fisheries Service - National Oceanic and Atmospheric Administration (NMFS-NOAA). The results showed that the large variability in the asset price is directly related to the gains and losses of market agents. The price series presents strong seasonal and biannual effects. The average shrimp price over the last 12 years was R$ 11.58, and external factors beyond production and marketing (U.S. antidumping measures, floods and pathologies) strongly affected prices. Among the models tested for predicting shrimp prices, four were selected which, using one-step-ahead forecasts over a 12-period horizon, proved statistically more robust. Only weak evidence of long-term equilibrium between Brazilian and American shrimp prices was found and, correspondingly, no causal link was identified between them. We conclude that the price dynamics of the shrimp commodity are strongly influenced by external productive factors and that these phenomena cause seasonal effects in prices. There is no long-term stability relationship between Brazilian and American shrimp prices, although Brazil imports production inputs from the USA, which indicates a degree of productive dependence. For market agents, the risk of external prices interfering with Brazilian prices through cointegration is practically nonexistent. Through statistical modeling it is possible to minimize the risk and uncertainty embedded in the fish market, so that sales and marketing strategies for Brazilian shrimp can be consolidated and disseminated.
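The abstract does not name the specific tests or software used; the sketch below only illustrates the kind of analysis described (unit-root tests, an Engle-Granger cointegration check and a Granger causality test between two price series), with hypothetical file and column names.

```python
# Illustrative sketch of a cointegration and causality check between two
# price series. File name and column names ("brl_price", "usd_price") are
# hypothetical; the thesis does not state which tests or software were used.
import pandas as pd
from statsmodels.tsa.stattools import adfuller, coint, grangercausalitytests

prices = pd.read_csv("shrimp_prices.csv", parse_dates=["date"], index_col="date").dropna()
brl, usd = prices["brl_price"], prices["usd_price"]

# Unit-root check on each series (Augmented Dickey-Fuller test).
for name, series in [("BRL", brl), ("USD", usd)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"ADF {name}: stat={stat:.3f}, p={pvalue:.3f}")

# Engle-Granger test for a long-term equilibrium (cointegration) relation.
stat, pvalue, _ = coint(brl, usd)
print(f"Engle-Granger cointegration: stat={stat:.3f}, p={pvalue:.3f}")

# Granger causality on first differences, up to 12 lags; tests whether the
# second column Granger-causes the first (swap columns for the other direction).
returns = prices[["brl_price", "usd_price"]].diff().dropna()
gc_results = grangercausalitytests(returns, maxlag=12)
```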
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message-passing multiprocessor machine [5], the combination of these characteristics leads to system performance which deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved with periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
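The abstract describes a decision policy that weighs remapping costs against performance benefits; the following is a minimal sketch of one such accumulated-degradation heuristic, not the CAPTools implementation, whose details are not given here.

```python
# Sketch of a generic "when to rebalance" policy: remap once the time lost
# to load imbalance since the last remap exceeds the estimated remap cost.
# The heuristic and the numbers below are illustrative only.
def should_remap(step_times, remap_cost):
    """step_times: per-processor compute times for each step since the last remap."""
    lost = 0.0
    for times in step_times:
        # A synchronised step costs max(times); a perfectly balanced one
        # would cost the mean, so the difference is time lost to imbalance.
        lost += max(times) - sum(times) / len(times)
    return lost > remap_cost

# Example: 4 processors, three steps of growing imbalance, remap costs 2.0 s.
history = [[1.0, 1.1, 1.0, 1.2], [1.0, 1.4, 1.0, 1.6], [1.0, 1.9, 1.0, 2.3]]
print(should_remap(history, remap_cost=2.0))
```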
Abstract:
Many-core systems are emerging from the need for more computational power and greater power efficiency. However, many issues still surround many-core systems. These systems need specialized software before they can be fully utilized, and the hardware itself may differ from conventional computational systems. To gain efficiency from a many-core system, programs need to be parallelized. In many-core systems the cores are small and less powerful than cores used in traditional computing, so running a conventional program is not an efficient option. Also, in Network-on-Chip-based processors, the network might become congested and the cores might work at different speeds. In this thesis, a dynamic load balancing method is proposed and tested on the Intel 48-core Single-Chip Cloud Computer by parallelizing a fault simulator. The maximum speedup is difficult to obtain due to severe bottlenecks in the system. In order to exploit all the available parallelism of the Single-Chip Cloud Computer, a runtime approach capable of dynamically balancing the load during the fault simulation process is used. The proposed dynamic fault simulation approach on the Single-Chip Cloud Computer shows up to 45X speedup compared to a serial fault simulation approach. Many-core systems can draw enormous amounts of power, and if this power is not controlled properly, the system might get damaged. One way to manage power is to set a power budget for the system. But if this power is drawn by just a few of the many cores, those few cores get extremely hot and might get damaged. Due to the increase in power density, multiple thermal sensors are deployed on the chip area to provide real-time temperature feedback for thermal management techniques. Thermal sensor accuracy is extremely prone to intra-die process variation and aging phenomena. These factors lead to a situation where thermal sensor readings drift from their nominal values. This necessitates efficient calibration techniques to be applied before the sensor values are used. In addition, cores in modern many-core systems support dynamic voltage and frequency scaling. Thermal sensors located on cores are sensitive to the core's current voltage level, meaning that dedicated calibration is needed for each voltage level. In this thesis, a general-purpose software-based auto-calibration approach is also proposed to calibrate the thermal sensors across the different voltage levels.
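The abstract does not detail the auto-calibration algorithm; as a rough illustration of software-based, per-voltage-level calibration, the sketch below fits a linear correction of raw sensor readings against a reference temperature (all names and numbers are hypothetical).

```python
# Sketch: per-voltage-level linear calibration of an on-chip thermal sensor.
# Raw readings drift with process variation and aging, so for each supported
# DVFS voltage level we fit raw -> reference and store the correction.
# The procedure and numbers are illustrative, not the thesis's algorithm.
import numpy as np

def fit_calibration(raw_readings, reference_temps):
    """Least-squares fit of reference = a * raw + b for one voltage level."""
    a, b = np.polyfit(raw_readings, reference_temps, deg=1)
    return a, b

def calibrate(raw, coeffs_by_voltage, voltage):
    a, b = coeffs_by_voltage[voltage]
    return a * raw + b

# Calibration points gathered at two hypothetical voltage levels.
coeffs = {
    0.8: fit_calibration([410, 450, 505], [35.0, 45.0, 60.0]),
    1.1: fit_calibration([395, 440, 500], [35.0, 45.0, 60.0]),
}
print(calibrate(470, coeffs, voltage=1.1))  # corrected temperature estimate
```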
Abstract:
The central product of the DRAMA (Dynamic Re-Allocation of Meshes for parallel Finite Element Applications) project is a library comprising a variety of tools for dynamic re-partitioning of unstructured Finite Element (FE) applications. The input to the DRAMA library is the computational mesh, and corresponding costs, partitioned into sub-domains. The core library functions then perform a parallel computation of a mesh re-allocation that will re-balance the costs based on the DRAMA cost model. We discuss the basic features of this cost model, which allows a general approach to load identification, modelling and imbalance minimisation. Results from crash simulations are presented which show the necessity for multi-phase/multi-constraint partitioning components.
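The DRAMA cost model itself is not reproduced in the abstract; the toy computation below merely illustrates why per-phase cost imbalance matters, and hence why multi-phase/multi-constraint partitioning components are needed (the cost figures are invented).

```python
# Sketch: per-phase load imbalance from sub-domain costs.
# A partition balanced on total cost can still be badly imbalanced within a
# single phase (e.g. contact detection in a crash simulation); since each
# phase ends in a synchronisation, per-phase imbalance drives the runtime.
def imbalance(costs):
    """max/mean ratio over sub-domains; 1.0 means perfectly balanced."""
    return max(costs) / (sum(costs) / len(costs))

# Hypothetical costs for 4 sub-domains in two computation phases.
element_phase = [100, 105, 110, 60]   # FE element computations
contact_phase = [5, 0, 0, 55]         # contact detection, highly localised

totals = [e + c for e, c in zip(element_phase, contact_phase)]
print(imbalance(totals))          # ~1.06: looks acceptable on total cost
print(imbalance(contact_phase))   # ~3.67: the contact phase is far from balanced
```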
Abstract:
Permeability of a rock is a dynamic property that varies spatially and temporally. Fractures provide the most efficient channels for fluid flow and thus directly contribute to the permeability of the system. Fractures usually form as a result of a combination of tectonic stresses, gravity (i.e. lithostatic pressure) and fluid pressures. High pressure gradients alone can cause fracturing, a process termed hydrofracturing, which can determine caprock (seal) stability or reservoir integrity. Fluids also transport mass and heat, and are responsible for the formation of veins by precipitating minerals within open fractures. Veining (healing) thus directly influences the rock’s permeability. Upon deformation these closed fractures (veins) can refracture and the cycle starts again. This fracturing-healing-refracturing cycle is a fundamental part of studying the deformation dynamics and permeability evolution of rock systems. Such study is generally accompanied by fracture network characterization focusing on the network topology that determines network connectivity. Fracture characterization makes it possible to acquire quantitative and qualitative data on fractures and forms an important part of reservoir modeling. This thesis highlights the importance of fracture healing and of the veins’ mechanical properties for the deformation dynamics. It shows that permeability varies spatially and temporally, and that healed systems (veined rocks) should not be treated as fractured systems (rocks without veins). Field observations also demonstrate the influence of contrasting mechanical properties, in addition to the complexities of vein microstructures that can form in low-porosity and low-permeability layered sequences. The thesis also presents graph theory as a characterization method to obtain statistical measures on evolving network connectivity. It further proposes which measures a good reservoir should exhibit in order to combine potentially large permeability with robustness against healing. The results presented in the thesis can have applications in hydrocarbon and geothermal reservoir exploration, the mining industry, underground waste disposal, CO2 injection and groundwater modeling.
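As a rough illustration of the graph-theoretic characterization mentioned above, the sketch below computes a few generic connectivity statistics for a toy fracture network using networkx; the specific measures used in the thesis are not given in the abstract.

```python
# Sketch: graph-based connectivity measures for a toy fracture network.
# Nodes are fracture tips/intersections, edges are fracture segments.
# The measures shown are generic graph statistics, not the thesis's exact set.
import networkx as nx

G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (3, 4), (2, 5), (5, 6), (7, 8)])  # toy network

components = list(nx.connected_components(G))
largest = max(components, key=len)

print("number of segments:", G.number_of_edges())
print("mean degree:", 2 * G.number_of_edges() / G.number_of_nodes())
print("connected clusters:", len(components))
print("fraction of nodes in largest cluster:", len(largest) / G.number_of_nodes())
# A spanning (percolating) cluster and a high mean degree suggest a network
# that stays permeable even if some segments heal (are removed from the graph).
```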
Abstract:
With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in increasing the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, the same is no longer true of software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging which leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively the sequence covers, of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared among a number of test cases failing for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause from the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace the common subsequences back from the end to the root cause. A debugging tool is created to enable developers to use the approach and to integrate it with an existing Integrated Development Environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm running time and the output subsequence length.
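The abstract does not spell out the subsequence algorithm; the sketch below illustrates the underlying idea by folding a pairwise longest-common-subsequence computation over the sequence covers of several failing test cases. This is a simple heuristic with invented trace data, not the thesis's optimized algorithm.

```python
# Sketch: narrow the search for a faulty execution path by intersecting the
# code-block sequences covered by failing test cases. Folding pairwise LCS
# over all traces is a simple heuristic approximation.
from functools import reduce

def lcs(a, b):
    """Classic dynamic-programming longest common subsequence of two traces."""
    dp = [[()] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if x == y:
                dp[i + 1][j + 1] = dp[i][j] + (x,)
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[-1][-1]

# Hypothetical sequence covers (basic-block ids) of three failing test cases.
failing_traces = [
    ["init", "parse", "lookup", "format", "write"],
    ["init", "lookup", "retry", "format", "write"],
    ["init", "parse", "lookup", "format", "flush", "write"],
]
suspect_path = reduce(lcs, failing_traces)
print(suspect_path)  # a short candidate path to inspect for the root cause
```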
Abstract:
Metabolism in an environment containing 21% oxygen carries a high risk of oxidative damage due to the formation of reactive oxygen species (ROS). Therefore, plants have evolved an antioxidant system consisting of metabolites and enzymes that either directly scavenge ROS or recycle the antioxidant metabolites. Ozone is a temporally dynamic molecule that is both naturally occurring and an environmental pollutant, and it is predicted to increase in concentration in the future as anthropogenic precursor emissions rise. It has been hypothesized that any elevation in ozone concentration will cause increased oxidative stress in plants and therefore enhanced subsequent antioxidant metabolism, but evidence for this response is variable. Along with increasing atmospheric ozone concentrations, atmospheric carbon dioxide concentration is also rising and is predicted to continue rising in the future. The effect of elevated carbon dioxide concentrations on antioxidant metabolism varies among studies in the literature. Therefore, the question of how antioxidant metabolism will be affected in the most realistic future atmosphere, with increased carbon dioxide concentration and increased ozone concentration, has yet to be answered, and is the subject of my thesis research. First, in order to capture as much of the variability in the antioxidant system as possible, I developed a suite of high-throughput quantitative assays for a variety of antioxidant metabolites and enzymes. I optimized these assays for Glycine max (soybean), one of the most important food crops in the world. These assays provide accurate, rapid and high-throughput measures of both the general and specific antioxidant action of plant tissue extracts. Second, I investigated how growth at either elevated carbon dioxide concentration or chronic elevated ozone concentration altered antioxidant metabolism and the ability of soybean to respond to an acute oxidative stress in a controlled environment study. I found that growth at chronic elevated ozone concentration increased the antioxidant capacity of leaves, which was unchanged or only slightly increased further following an acute oxidative stress, suggesting that growth at chronic elevated ozone concentration primed the antioxidant system. Growth at high carbon dioxide concentration decreased the antioxidant capacity of leaves and increased the response of the existing antioxidant enzymes to an acute oxidative stress, but dampened and delayed the transcriptional response, suggesting an entirely different regulation of the antioxidant system. Third, I tested the findings from the controlled environment study in a field setting by investigating the response of the soybean antioxidant system to growth at elevated carbon dioxide concentration, at chronic elevated ozone concentration, and at the combination of elevated carbon dioxide and elevated ozone concentrations. In this study, I confirmed that growth at elevated carbon dioxide concentration decreased specific components of antioxidant metabolism in the field. I also verified that increasing ozone concentration is highly correlated with increases in the metabolic and genomic components of antioxidant metabolism, regardless of the carbon dioxide environment, but that the response to increasing ozone concentration was dampened at elevated carbon dioxide concentration.
In addition, I found evidence suggesting an up-regulation of respiratory metabolism at higher ozone concentration, which would supply energy and carbon for detoxification and repair of cellular damage. These results consistently support the conclusion that growth at elevated carbon dioxide concentration decreases antioxidant metabolism, while growth at elevated ozone concentration increases antioxidant metabolism.
Abstract:
PURPOSE: To quantitatively evaluate visual function 12 months after bilateral implantation of the Physiol FineVision® trifocal intraocular lens (IOL) and to compare these results with those obtained in the first postoperative month. METHODS: In this prospective case series, 20 eyes of 10 consecutive patients were included. Monocular and binocular, uncorrected and corrected visual acuities (distance, near, and intermediate) were measured. Metrovision® was used to test contrast sensitivity under static and dynamic conditions, in both photopic and low-mesopic settings. The same software was used for pupillometry and glare evaluation. Motion, achromatic, and chromatic contrast discrimination were tested using 2 innovative psychophysical tests. A complete ophthalmologic examination was performed preoperatively and at 1, 3, 6, and 12 months postoperatively. Psychophysical tests were performed 1 month after surgery and repeated 12 months postoperatively. RESULTS: Final distance uncorrected visual acuity (VA) was 0.00 ± 0.08 and distance corrected VA was 0.00 ± 0.05 logMAR. Distance corrected near VA was 0.00 ± 0.09 and distance corrected intermediate VA was 0.00 ± 0.06 logMAR. Glare testing, pupillometry, contrast sensitivity, motion, and chromatic and achromatic contrast discrimination did not differ significantly between the first and last visit (p>0.05) or when compared to an age-matched control group (p>0.05). CONCLUSIONS: The Physiol FineVision® trifocal IOL provided a satisfactory full range of vision and satisfactory quality-of-vision parameters 12 months after surgery. Visual acuity and psychophysical test results did not vary significantly between the first and last visit.
Abstract:
A detailed non-equilibrium state diagram of shape-anisotropic particle fluids is constructed. The effects of particle shape are explored using Naive Mode Coupling Theory (NMCT) and a single-particle Non-linear Langevin Equation (NLE) theory. The dynamical behavior of non-ergodic fluids is discussed. We employ a rotationally frozen approach to NMCT in order to determine a transition to center-of-mass (translational) localization. Both ideal and kinetic glass transitions are found to be highly shape dependent, and uniformly increase with particle dimensionality. The glass transition volume fraction of quasi-1- and 2-dimensional particles falls monotonically with the number of sites (aspect ratio), while 3-dimensional particles display a non-monotonic dependence of glassy vitrification on the number of sites. Introducing interparticle attractions results in a far more complex state diagram. The ideal non-ergodic boundary shows a glass-fluid-gel re-entrance previously predicted for spherical particle fluids. The non-ergodic region of the state diagram presents qualitatively different dynamics in different regimes, distinguished by the different behaviors of the NLE dynamic free energy. The caging-dominated, repulsive glass regime is characterized by long localization lengths and barrier locations dictated by repulsive hard-core interactions, while the bonding-dominated gel region has short localization lengths (commensurate with the attraction range) and barrier locations. There exists a small region of the state diagram characterized by both glassy and gel localization lengths in the dynamic free energy. A much larger (high volume fraction and high attraction strength) region of phase space is characterized by short gel-like localization lengths and long barrier locations. This region is called the attractive glass and represents a two-step relaxation process whereby a particle first breaks attractive physical bonds and then escapes its topological cage. The dynamic fragility of these fluids is highly particle-shape dependent: it increases with particle dimensionality and falls with aspect ratio for quasi-1- and 2-dimensional particles. An ultralocal limit analysis of the NLE theory predicts universalities in the behavior of relaxation times and elastic moduli. The equilibrium phase diagrams of chemically anisotropic Janus spheres and Janus rods are calculated employing a mean-field Random Phase Approximation. The calculations for Janus rods are corroborated by the full liquid-state Reference Interaction Site Model theory. The Janus particles consist of attractive and repulsive regions. Both rods and spheres display rich phase behavior. The phase diagrams of these systems display fluid, macrophase-separated, attraction-driven microphase-separated, repulsion-driven microphase-separated and crystalline regimes. Macrophase separation is predicted in highly attractive, low-volume-fraction systems. Attraction-driven microphase separation is characterized by long-length-scale divergences, where the ordering length scale determines the microphase-ordered structures. The ordering length scale of repulsion-driven microphase separation is determined by the repulsive range. At high volume fractions, particles forgo the enthalpic considerations of attractions and repulsions to satisfy hard-core constraints and maximize vibrational entropy. This results in site-length-scale ordering in rods and sphere-length-scale ordering in Janus spheres, i.e., crystallization.
A change in the Janus balance of both rods and spheres results in quantitative changes in spinodal temperatures and in the position of phase boundaries. However, a change in the block sequence of Janus rods causes qualitative changes in the type of microphase-ordered state and induces prominent features (such as the Lifshitz point) in the phase diagrams of these systems. A detailed study of the number of nearest neighbors in Janus rod systems reflects a deep connection between this local measure of structure and the structure factor, which represents the most global measure of order.
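For orientation, single-particle NLE theory is commonly written as an overdamped stochastic equation for the scalar displacement r(t); the schematic form below is the standard statement found in the NLE literature rather than a formula quoted from this thesis:

\[ \zeta_s \frac{dr(t)}{dt} = -\frac{\partial F_{\mathrm{dyn}}(r)}{\partial r} + \delta f(t), \qquad \langle \delta f(0)\,\delta f(t) \rangle = 2 k_B T\, \zeta_s\, \delta(t), \]

where \( \zeta_s \) is the short-time friction constant and \( F_{\mathrm{dyn}}(r) \) is the dynamic free energy whose localization minima and barrier locations are discussed above.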
Abstract:
In this paper we consider a class of scalar integral equations with a form of space-dependent delay. These non-local models arise naturally when modelling neural tissue with active axons and passive dendrites. Such systems are known to support a dynamic (oscillatory) Turing instability of the homogeneous steady state. In this paper we develop a weakly nonlinear analysis of the travelling and standing waves that form beyond the point of instability. The appropriate amplitude equations are found to be the coupled mean-field Ginzburg-Landau equations describing a Turing-Hopf bifurcation with modulation group velocity of O(1). Importantly we are able to obtain the coefficients of terms in the amplitude equations in terms of integral transforms of the spatio-temporal kernels defining the neural field equation of interest. Indeed our results cover not only models with axonal or dendritic delays but those which are described by a more general distribution of delayed spatio-temporal interactions. We illustrate the predictive power of this form of analysis with comparison against direct numerical simulations, paying particular attention to the competition between standing and travelling waves and the onset of Benjamin-Feir instabilities.
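The coefficients derived in the paper come from integral transforms of the spatio-temporal kernels and are not reproduced in the abstract; purely as a generic illustration, coupled mean-field Ginzburg-Landau equations for the envelopes A and B of left- and right-travelling waves with O(1) group velocity s are typically written as

\[ \partial_T A + s\,\partial_X A = \mu A + d\,\partial_{XX} A + A\left(a|A|^2 + b\,\overline{|B|^2}\right), \]
\[ \partial_T B - s\,\partial_X B = \mu B + d\,\partial_{XX} B + B\left(a|B|^2 + b\,\overline{|A|^2}\right), \]

where the overbar denotes a spatial average and \( \mu, a, b, d \) are complex coefficients. This generic form is an assumption for illustration, not the paper's exact amplitude equations.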
Abstract:
This thesis proves certain results concerning an important question in non-equilibrium quantum statistical mechanics: the derivation of effective evolution equations approximating the dynamics of a system of a large number of bosons initially at equilibrium (the ground state at very low temperatures). The dynamics of such systems are governed by the time-dependent linear many-body Schroedinger equation, from which it is typically difficult to extract useful information because the number of particles is large. We study quantitatively (i.e. with explicit bounds on the error) how a suitable one-particle non-linear Schroedinger equation arises in the mean field limit as the number of particles N → ∞, and how appropriate corrections to the mean field provide better approximations of the exact dynamics. In the first part of this thesis we consider the evolution of N bosons, where N is large, with two-body interactions of the form N³ᵝv(Nᵝ⋅), 0≤β≤1. The parameter β measures the strength and the range of the interactions. We compare the exact evolution with an approximation which considers the evolution of a mean field coupled with an appropriate description of pair excitations; see [18,19] by Grillakis-Machedon-Margetis. We extend the results for 0 ≤ β < 1/3 in [19, 20] to the case β < 1/2 and obtain an error bound of the form p(t)/Nᵅ, where α>0 and p(t) is a polynomial, which implies a specific rate of convergence as N → ∞. In the second part, utilizing estimates of the type discussed in the first part, we compare the exact evolution with the mean field approximation in the sense of marginals. We prove that the exact evolution is close to the approximate one in trace norm for times of the order o(1)√N, compared to log(o(1)N) as obtained in Chen-Lee-Schlein [6] for the Hartree evolution. Estimates of a similar type are obtained for stronger interactions as well.
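For context, and as a generic illustration rather than the thesis's precise statement, the one-particle mean-field equation arising in limits of this kind is of Hartree type,

\[ i\,\partial_t \varphi = -\Delta \varphi + \left( v * |\varphi|^2 \right) \varphi, \]

with the convolution replaced by a local nonlinearity proportional to \( |\varphi|^2 \varphi \) for scalings with β > 0; the exact form of the nonlinearity and its coupling constant depend on β and are specified in the thesis.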
Abstract:
This thesis reports on the development of quantitative measurement using micromachined scanning thermal microscopy (SThM) probes. These thermal probes employ a resistive element at their end, which can be used in passive or active modes. A review of SThM reveals the current issues and potential associated with this technique. Building on this understanding, several experimental and theoretical methods are discussed which expand our understanding of these probes. The thesis can be summarized in three parts: one focusing on the thermal probe, one on probe-sample thermal interactions, and the third on heat transfer within the sample. In the first part, a series of experiments is presented, aimed at characterizing the electrical and thermal properties of the probe, benefiting advanced probe design and laying a foundation for quantifying the temperature of the probe. The second part focuses on two artifacts observed during thermal scans, one induced by topography and the other by air conduction. Correspondingly, two devices probing these artifacts are developed. A topography-free sample, produced using a pattern transfer technique, minimises the topography-related artifacts that limit the reliability of SThM data; a controlled-temperature ‘Johnson noise device’, with a multiple-heater design, offers a uniform and accurate temperature distribution. Analyzing scan results from these samples provides data for studying the thermal interactions within the probe and at the tip-sample interface. In the final part, it is shown that quantification of measurements depends not only on an accurate measurement tool, but also on a deep understanding of the heat transfer within the sample resulting from the nanoscopic contact. It is believed that the work in this thesis will help SThM gain wider application in the scientific community.