962 results for Monte-Carlo-Simulation
Abstract:
This paper addresses the probabilistic analysis of the corrosion initiation time in reinforced concrete structures exposed to chloride ion penetration. Structural durability is an important criterion that must be evaluated for every type of structure, especially for structures built in aggressive atmospheres. For reinforced concrete members, the chloride diffusion process is widely used to evaluate durability. Therefore, by modelling this phenomenon, corrosion of the reinforcement can be better estimated and prevented. The corrosion process begins when a threshold chloride concentration is reached at the steel reinforcement bars. Despite the robustness of several models proposed in the literature, deterministic approaches fail to predict the corrosion initiation time accurately because of the inherent randomness of this process. In this regard, durability can be represented more realistically using probabilistic approaches. A probabilistic analysis of chloride ion penetration is presented in this paper. The chloride penetration is simulated using Fick's second law of diffusion, which represents the chloride diffusion process including time-dependent effects. The probability of failure is calculated using Monte Carlo simulation and the First Order Reliability Method (FORM) with a direct coupling approach. Some examples are considered in order to study these phenomena, and a simplified method is proposed to determine optimal values of the concrete cover.
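As an illustration of the probabilistic approach described above, the following minimal Python sketch combines the error-function solution of Fick's second law with a Monte Carlo estimate of the probability that the chloride concentration at the reinforcement exceeds a critical threshold within a given service life. All distributions and parameter values are illustrative assumptions, not the ones used in the paper.

import numpy as np
from scipy.special import erf

# Minimal sketch: probability that the chloride concentration at the rebar
# exceeds the critical threshold within a service life t, via Monte Carlo.
# Fick's second law (constant D, constant surface concentration Cs):
#   C(x, t) = Cs * (1 - erf(x / (2 * sqrt(D * t))))
# All distributions and parameters below are illustrative assumptions.

rng = np.random.default_rng(0)
n = 100_000
t = 50.0 * 365.25 * 24 * 3600                                # service life: 50 years in seconds

Cs  = rng.lognormal(mean=np.log(3.5), sigma=0.2, size=n)     # surface chloride concentration
D   = rng.lognormal(mean=np.log(5e-13), sigma=0.3, size=n)   # diffusion coefficient (m^2/s)
x   = rng.normal(loc=0.05, scale=0.005, size=n)              # concrete cover (m)
Ccr = rng.normal(loc=0.9, scale=0.15, size=n)                # critical chloride threshold

C_at_bar = Cs * (1.0 - erf(x / (2.0 * np.sqrt(D * t))))
pf = np.mean(C_at_bar >= Ccr)                                # probability of corrosion initiation
print(f"Estimated probability of failure: {pf:.4f}")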
Abstract:
Background HBV genotype F is found primarily in indigenous populations of South America and is classified into four subgenotypes (F1 to F4). Subgenotype F2a is the most common among genotype F cases in Brazil. The aim of this study was to characterize HBV subgenotype F2a circulating in 16 patients from São Paulo, Brazil. Samples were collected between 2006 and 2012 and sent to Hospital Israelita Albert Einstein. A fragment of 1306 bp partially comprising the HBsAg and DNA polymerase coding regions was amplified and sequenced. Viral sequences were genotyped by phylogenetic analysis using reference sequences from GenBank (n=198), including 80 classified as subgenotype F2a. Bayesian Markov chain Monte Carlo simulation implemented in BEAST v.1.5.4 was applied to obtain the best possible estimates using the GTR+G+I model of nucleotide substitution. Findings Three groups of subgenotype F2a sequences were identified: 1) 10 sequences from São Paulo state; 2) 3 sequences from Rio de Janeiro state and one from São Paulo state; 3) 8 sequences from the western Amazon Basin. Conclusions These results show for the first time the distribution of subgenotype F2a in Brazil. Understanding the spread and dynamics of subgenotype F2a in Brazil requires the study of a larger number of samples from different regions, as it is found in almost all Brazilian populations studied so far. We cannot infer with certainty the origin of these different groups because of the lack of available sequences. Nevertheless, our data suggest that the common origin of these groups probably dates back a long time.
Abstract:
There is special interest in the incorporation of metallic nanoparticles into a surrounding dielectric matrix in order to obtain composites with desirable characteristics, such as surface plasmon resonance, which can be used in photonics and sensing, and controlled surface electrical conductivity. We investigated nanocomposites produced through metallic ion implantation into insulating substrates, where the implanted metal self-assembles into nanoparticles. During implantation, the excess metal atom concentration above the solubility limit leads to nucleation and growth of metal nanoparticles, driven by the temperature and temperature gradients within the implanted sample, including the beam-induced thermal characteristics. The nanoparticles nucleate near the maximum of the implantation depth profile (projected range), which can be estimated by computer simulation using TRIDYN. This is a Monte Carlo simulation program based on the TRIM (Transport and Range of Ions in Matter) code that takes into account compositional changes in the substrate due to two factors: previously implanted dopant atoms and sputtering of the substrate surface. Our study suggests that the nanoparticles form a two-dimensional array buried a few nanometers below the substrate surface. More specifically, we have studied the Au/PMMA (polymethylmethacrylate), Pt/PMMA, Ti/alumina and Au/alumina systems. Transmission electron microscopy of the implanted samples showed the metallic nanoparticles formed in the insulating matrix. The nanocomposites were characterized by measuring the resistivity of the composite layer as a function of the implanted dose. These experimental results were compared with a model based on percolation theory, in which electron transport through the composite is explained by conduction through a random resistor network formed by the metallic nanoparticles. Excellent agreement was found between the experimental results and the predictions of the theory. In all cases it was possible to conclude that the conduction process is due only to percolation (when the conducting elements are in geometric contact) and that the contribution from tunneling conduction is negligible.
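The percolation picture invoked above can be illustrated with a toy Monte Carlo of site percolation on a square lattice: occupied sites stand in for metallic particles in geometric contact, and the spanning probability rises sharply around the two-dimensional site-percolation threshold (about 0.59). This is only a schematic sketch of the geometric-contact idea, not the random-resistor-network calculation used in the study.

import numpy as np
from scipy import ndimage

# Toy Monte Carlo sketch of geometric percolation on a 2D square lattice:
# estimate the probability that occupied sites form a cluster spanning the
# sample, as a function of the occupation fraction p.

def spans(grid):
    labels, _ = ndimage.label(grid)                 # 4-connected clusters
    top, bottom = set(labels[0, :]) - {0}, set(labels[-1, :]) - {0}
    return len(top & bottom) > 0                    # any cluster touching both edges?

rng = np.random.default_rng(1)
L, trials = 64, 200
for p in (0.50, 0.55, 0.59, 0.65, 0.70):
    hits = sum(spans(rng.random((L, L)) < p) for _ in range(trials))
    print(f"p = {p:.2f}: spanning probability ~ {hits / trials:.2f}")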
Abstract:
In this work, we have used a combination of atomistic simulation methods to explore the effects of confinement of water molecules between silica surfaces. First, the mechanical properties of water severely confined (~3 Å) between two silica alpha-quartz surfaces were determined from first-principles calculations within density functional theory (DFT). Simulated annealing methods were employed because of the complexity of the potential energy surface and the difficulty of avoiding local minima. Our results suggest that much of the stiffness of the material (46%) remains even after the insertion of a water monolayer in the silica. Second, in order to access typical time scales for confined systems, classical molecular dynamics was used to determine the dynamical properties of water confined in cylindrical silica pores with diameters varying from 10 to 40 Å. In this case we varied the passivation of the silica surface from 13% to 100% SiOH, with the other terminations being SiOH2 and SiOH3; the distribution of the different terminations was obtained with a Monte Carlo simulation. The simulations indicate a lowering of the diffusion coefficient as the pore diameter decreases, due to the structuring of hydrogen bonds between water molecules; we have also obtained the density profiles of the confined water and the interfacial tension.
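As an illustration of how diffusion coefficients are extracted from molecular dynamics trajectories, the following sketch applies the Einstein relation MSD(t) = 2 d D t to a synthetic random-walk trajectory standing in for the confined water; the time step and step size are assumed values, not the simulation parameters of this work.

import numpy as np

# Minimal sketch: diffusion coefficient from the mean-square displacement
# via the Einstein relation MSD(t) = 6 D t in three dimensions.
# A synthetic 3D random walk stands in for the water-oxygen trajectory.

rng = np.random.default_rng(2)
n_mol, n_steps, dt = 200, 2000, 1.0e-3                       # dt in ns (assumed)
steps = rng.normal(scale=0.01, size=(n_steps, n_mol, 3))     # nm per step (assumed)
traj = np.cumsum(steps, axis=0)                              # positions, shape (n_steps, n_mol, 3)

t = np.arange(1, n_steps + 1) * dt
msd = np.mean(np.sum(traj**2, axis=2), axis=1)               # average over molecules

# Fit the linear (long-time) part of MSD(t); the slope equals 6 D.
slope = np.polyfit(t[n_steps // 2:], msd[n_steps // 2:], 1)[0]
D = slope / 6.0
print(f"Estimated D ~ {D:.4e} nm^2/ns")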
Abstract:
Máster Universitario en Sistemas Inteligentes y Aplicaciones Numéricas en Ingeniería (SIANI)
Abstract:
The inherent stochastic character of most of the physical quantities involved in engineering models has led to an ever-increasing interest in probabilistic analysis. Many approaches to stochastic analysis have been proposed. However, it is widely acknowledged that the only universal method available to solve accurately any kind of stochastic mechanics problem is Monte Carlo simulation. One of the key parts in the implementation of this technique is the accurate and efficient generation of samples of the random processes and fields involved in the problem at hand. In the present thesis an original method for the simulation of homogeneous, multi-dimensional, multi-variate, non-Gaussian random fields is proposed. The algorithm has proved to be very accurate in matching both the target spectrum and the marginal probability distribution. The computational efficiency and robustness are also very good, even when dealing with strongly non-Gaussian distributions. Moreover, the resulting samples possess all the relevant, well-defined and desired properties of "translation fields", including crossing rates and distributions of extremes. The second part of the thesis lies in the field of non-destructive parametric structural identification. Its objective is to evaluate the mechanical characteristics of the constituent bars of existing truss structures, using static loads and strain measurements. In the cases of missing data and of damage affecting only a small portion of a bar, Genetic Algorithms have proved to be an effective tool for solving the problem.
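The translation-field construction mentioned above can be sketched in one dimension: a stationary Gaussian process is generated with the spectral-representation method and then mapped through the Gaussian CDF and the inverse CDF of the target marginal. The target spectrum and the lognormal marginal below are illustrative assumptions, not those of the thesis.

import numpy as np
from scipy import stats

# Minimal 1-D sketch of a "translation field": a Gaussian sample generated by
# the spectral-representation method is mapped to a non-Gaussian marginal.

rng = np.random.default_rng(3)
N, dw = 512, 0.05
w = (np.arange(N) + 0.5) * dw                       # frequency grid
S = np.exp(-w**2)                                   # assumed target one-sided PSD
A = np.sqrt(2.0 * S * dw)                           # spectral amplitudes
phi = rng.uniform(0, 2 * np.pi, N)                  # random phases

t = np.linspace(0, 100, 2000)
g = np.sqrt(2.0) * np.sum(A[:, None] * np.cos(np.outer(w, t) + phi[:, None]), axis=0)
g /= g.std()                                        # standardize the Gaussian sample

# Translation: non-Gaussian sample with a lognormal marginal, same rank ordering.
u = stats.norm.cdf(g)
x = stats.lognorm.ppf(u, s=0.5)                     # assumed target marginal
print(f"sample mean = {x.mean():.3f}, sample std = {x.std():.3f}")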
Abstract:
This thesis is focused on the financial model for interest rates called the LIBOR Market Model. In the appendices, we provide the necessary mathematical theory. In the inner chapters, we first define the main interest rates and financial instruments relevant to interest rate models; then we set up the LIBOR market model, demonstrate its existence, derive the dynamics of forward LIBOR rates, and justify the pricing of caps according to Black's formula. We also present the Swap Market Model, which models the forward swap rates instead of the LIBOR rates. This model too is justified by a theoretical demonstration, and the resulting formula for pricing swaptions coincides with Black's. However, the two models are not compatible from a theoretical point of view. Therefore, we derive several approximate analytical formulae for pricing swaptions in the LIBOR market model and explain how to perform a Monte Carlo simulation. Finally, we present the calibration of the LIBOR market model to the markets of both caps and swaptions, together with various examples of application to the historical correlation matrix and the cascade calibration of the forward volatilities to the matrix of implied swaption volatilities provided by the market.
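A minimal illustration of Monte Carlo pricing in the LIBOR market model: under its own forward measure a forward LIBOR rate is a driftless lognormal martingale, so a simulated caplet price must reproduce Black's formula. The market inputs below are assumed values for illustration only.

import numpy as np
from scipy.stats import norm

# Minimal sketch: price a single caplet in the LIBOR market model.
# F0 = initial forward LIBOR, K = strike, sigma = Black volatility,
# T = fixing time, delta = accrual period, P_T_delta = discount factor to T+delta.

F0, K, sigma, T, delta, P_T_delta = 0.03, 0.03, 0.20, 1.0, 0.5, 0.97

def black_caplet(F0, K, sigma, T, delta, df):
    d1 = (np.log(F0 / K) + 0.5 * sigma**2 * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return df * delta * (F0 * norm.cdf(d1) - K * norm.cdf(d2))

rng = np.random.default_rng(4)
z = rng.standard_normal(1_000_000)
F_T = F0 * np.exp(-0.5 * sigma**2 * T + sigma * np.sqrt(T) * z)   # exact lognormal step
mc = P_T_delta * delta * np.mean(np.maximum(F_T - K, 0.0))

print(f"Black formula : {black_caplet(F0, K, sigma, T, delta, P_T_delta):.6f}")
print(f"Monte Carlo   : {mc:.6f}")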
Abstract:
The AMANDA-II detector is primarily designed for the direction-resolved detection of high-energy neutrinos. Nevertheless, low-energy neutrino bursts, as expected from supernovae, can also be detected with high significance, provided they occur within the Milky Way. The experimental signature in the detector is a collective increase of the noise rates of all optical modules. To estimate the strength of the expected signal, theoretical models and simulations of supernovae as well as experimental data from supernova SN1987A were studied. In addition, the sensitivities of the optical modules were redetermined. For this purpose, the energy losses of charged particles in South Polar ice had to be investigated and a simulation of photon propagation developed. Finally, the signal measured in the Kamiokande-II detector could be scaled to the conditions of the AMANDA-II detector. As part of this work, an algorithm for the real-time search for supernova signals was implemented as a submodule of the data acquisition. It contains various improvements over the version previously used by the AMANDA collaboration. Thanks to an optimization for computing speed, several real-time searches with different analysis time bases can now run simultaneously within the data acquisition. Optical modules showing unsuitable behavior are disqualified in real time. However, for this purpose the behavior of the modules has to be judged on the basis of buffered data, so the analysis of the data from the qualified modules cannot proceed without a delay of about 5 minutes. If a supernova is detected, the data are archived in 10-millisecond intervals over several minutes for later evaluation. Since the noise-rate data of the optical modules are otherwise available in intervals of 500 ms, the time base of the analysis can be chosen freely in units of 500 ms. Within this work, three analyses of this kind were activated at the South Pole: one with the data-acquisition time base of 500 ms, one with a time base of 4 s, and one with a time base of 10 s. This maximizes the sensitivity to signals with a characteristic exponential decay time of 3 s while maintaining good sensitivity over a wide range of exponential decay times. These analyses were studied in detail using data from the years 2000 to 2003. While the analysis with t = 500 ms produced results that could not be fully understood, the results of the two analyses with the longer time bases could be reproduced by simulations and are correspondingly well understood. Based on the measured data, the expected signals from supernovae were simulated. From a comparison between this simulation, the measured data of the years 2000 to 2003, and the simulation of the expected statistical background, it can be concluded with a confidence level of at least 90% that no more than 3.2 supernovae per year occur in the Milky Way. For the identification of a supernova, a rate increase with a significance of at least 7.4 standard deviations is required. The number of events expected from the statistical background at this level is less than one in a million. Nevertheless, one such event was measured.
With the chosen significance threshold, 74% of all possible supernova progenitor stars in the Galaxy are monitored. In combination with the most recent result published by the AMANDA collaboration, an even lower upper limit of only 2.6 supernovae per year is obtained. Within the real-time analysis, a significance of at least 5.5 standard deviations is required for the collective rate increase before a notification about the detection of a supernova candidate is sent out. With this threshold the monitored fraction of stars in the Galaxy is 81%, but the rate of false alarms also rises to about 2 events per week. The alarm messages are transmitted to the northern hemisphere via an Iridium modem and are soon to contribute to SNEWS, the worldwide network for the early detection of supernovae.
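The choice of the 4 s time base can be motivated with a back-of-the-envelope figure of merit: for a rate excess decaying as exp(-t/tau) on a flat Poisson background, the significance collected in a window of length T scales as (1 - exp(-T/tau))/sqrt(T), which peaks near T ≈ 1.26 tau, i.e. about 3.8 s for tau = 3 s. The sketch below evaluates this simplified criterion on the 500 ms grid; it is not the full detector simulation used in the thesis.

import numpy as np

# Simplified figure of merit for the analysis time base T: signal counts in
# the window scale as tau*(1 - exp(-T/tau)), background fluctuations as sqrt(T).

tau = 3.0                                          # characteristic decay time (s)
T = np.arange(0.5, 15.0 + 1e-9, 0.5)               # candidate time bases, 500 ms steps
fom = tau * (1.0 - np.exp(-T / tau)) / np.sqrt(T)
best = T[np.argmax(fom)]
print(f"Best time base on the 500 ms grid: {best:.1f} s")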
Abstract:
The three-spectrometer facility at the Mainz Institute for Nuclear Physics has been extended by an additional spectrometer, which is distinguished by its short overall length and is therefore called the Short-Orbit Spectrometer (SOS). At the nominal distance of the SOS from the target (66 cm), the particles to be detected travel a mean path length of 165 cm between the reaction point and the detector. For pion production near threshold, this increases the survival probability of charged pions with a momentum of 100 MeV/c from 15% to 73% compared with the large spectrometers. Consequently, the systematic error ("muon contamination"), for instance in the planned measurement of the weak form factors G_A(Q²) and G_P(Q²), is reduced significantly. The focus of the present work is the drift chamber of the SOS. Its low material budget (0.03% X_0), intended to reduce small-angle scattering, is optimized for the detection of low-energy pions. Because of the novel geometry of the detector, dedicated software for track reconstruction, efficiency determination, etc. had to be developed. A convenient method for calibrating the drift-distance/drift-time relation, which is represented by cubic splines, was implemented. The resolution of the tracking detector in the dispersive plane is 76 µm for the position coordinate and 0.23° for the angular coordinate (most probable error), and correspondingly 110 µm and 0.29° in the non-dispersive plane. To trace the detector coordinates back to the reaction point, the inverse transfer matrix of the spectrometer was determined. For this purpose, electrons quasi-elastically scattered off protons in the ¹²C nucleus were used, whose initial angles were defined by a hole collimator. This yields experimental values for the mean angular resolution at the target of sigma_phi = 1.3 mrad and sigma_theta = 10.6 mrad. Since the momentum calibration of the SOS can only be carried out by means of quasi-elastic scattering (a two-arm experiment), the contribution of the proton arm to the width of the missing-mass peak has to be estimated with a Monte Carlo simulation and unfolded. For the time being it can only be stated that the momentum resolution is certainly better than 1%.
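The survival probabilities quoted above follow directly from the pion decay length: P = exp(-L/(beta gamma c tau)) with beta gamma = p/(m c). The short sketch below gives about 74% for the 165 cm path of the SOS, in line with the 73% quoted above (the exact figure depends on the assumed mean path), while the ~10.6 m path used for the large spectrometers is an illustrative value chosen to reproduce the quoted 15%.

import math

# Survival probability of a charged pion over a path length L:
#   P = exp(-L / (beta*gamma*c*tau)), with beta*gamma = p / (m*c).

m_pi  = 139.57   # charged-pion mass (MeV/c^2)
p     = 100.0    # momentum (MeV/c)
c_tau = 7.804    # charged-pion decay length constant (m)

decay_length = (p / m_pi) * c_tau                  # beta*gamma*c*tau, about 5.6 m

for name, L in (("SOS (1.65 m)", 1.65), ("large spectrometer (~10.6 m, assumed)", 10.6)):
    print(f"{name}: survival = {math.exp(-L / decay_length):.0%}")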
Abstract:
In this thesis we consider three different models for strongly correlated electrons, namely a multi-band Hubbard model as well as the spinless Falicov-Kimball model, both with a semi-elliptical density of states in the limit of infinite dimensions d, and the attractive Hubbard model on a square lattice in d=2.
In the first part, we study a two-band Hubbard model with unequal bandwidths and anisotropic Hund's rule coupling (J_z-model) in the limit of infinite dimensions within the dynamical mean-field theory (DMFT). Here, the DMFT impurity problem is solved with the use of quantum Monte Carlo (QMC) simulations. Our main result is that the J_z-model describes the occurrence of an orbital-selective Mott transition (OSMT), in contrast to earlier findings. We investigate the model with a high-precision DMFT algorithm, which was developed as part of this thesis and which supplements QMC with a high-frequency expansion of the self-energy.
The main advantage of this scheme is the extraordinary accuracy of the numerical solutions, which can be obtained even with moderate computational effort, so that studies of multi-orbital systems within DMFT+QMC are substantially improved. We also found that a suitably defined
Falicov-Kimball (FK) model exhibits an OSMT, revealing the close connection of the Falicov-Kimball physics to the J_z-model in the OSM phase.
In the second part of this thesis we study the attractive Hubbard model in two spatial dimensions within second-order self-consistent perturbation theory.
This model is considered on a square lattice at finite doping and at low temperatures. Our main result is that the predictions of first-order perturbation theory (Hartree-Fock approximation) are renormalized by a factor of the order of unity even at arbitrarily weak interaction (U->0). The renormalization factor q can be evaluated as a function of the filling n for 0
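For reference, the semi-elliptical density of states mentioned above and the corresponding local Green's function on the Matsubara axis can be written in closed form; the short numerical sketch below checks the Hilbert transform and notes the simple Bethe-lattice DMFT self-consistency condition. The half-bandwidth is an illustrative value.

import numpy as np

# Semi-elliptical (Bethe-lattice) density of states and its Hilbert transform,
# giving the local non-interacting Green's function at the Matsubara frequencies.

D = 1.0                                                      # half-bandwidth (illustrative)
eps = np.linspace(-D, D, 20001)
d_eps = eps[1] - eps[0]
rho = 2.0 / (np.pi * D**2) * np.sqrt(D**2 - eps**2)          # semi-elliptic DOS
print("DOS normalization:", rho.sum() * d_eps)               # -> ~1.0

beta, n_max = 20.0, 200
wn = (2 * np.arange(n_max) + 1) * np.pi / beta               # positive Matsubara frequencies

# Closed form of the Hilbert transform of the semicircle (w_n > 0):
#   G(i w_n) = (2 / D^2) * i * (w_n - sqrt(w_n^2 + D^2))
G_closed = (2.0 / D**2) * 1j * (wn - np.sqrt(wn**2 + D**2))

# Brute-force check: G(i w_n) = sum_eps rho(eps) / (i w_n - eps) * d_eps
G_num = np.array([(rho / (1j * w - eps)).sum() * d_eps for w in wn])
print("max |closed form - numerical|:", np.abs(G_closed - G_num).max())

# On the Bethe lattice the DMFT self-consistency is simply Delta(i w_n) = (D/2)^2 * G(i w_n).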
Abstract:
In this thesis we describe in detail the Monte Carlo simulation (LVDG4) built to interpret the experimental data collected by LVD and to measure the muon-induced neutron yield in iron and in liquid scintillator. A full Monte Carlo simulation, based on the Geant4 (v 9.3) toolkit, has been developed and validation tests have been performed. We used LVDG4 to determine the active vetoing and the shielding power of LVD. The idea was to evaluate the feasibility of hosting a dark matter detector in the innermost part, called the Core Facility (LVD-CF). The first conclusion is that LVD is a good moderator, but the iron supporting structure produces a large number of neutrons near the core. The second conclusion is that if LVD is used as an active veto for muons, the neutron flux in the LVD-CF is reduced by a factor of 50, to the same order of magnitude as the neutron flux in the deepest laboratory in the world, Sudbury. Finally, the muon-induced neutron yield has been measured. In liquid scintillator we found $(3.2 \pm 0.2) \times 10^{-4}$ n/g/cm$^2$, in agreement with previous measurements performed at different depths and with the general trend predicted by theoretical calculations and Monte Carlo simulations. Moreover, we present the first measurement, to our knowledge, of the neutron yield in iron: $(1.9 \pm 0.1) \times 10^{-3}$ n/g/cm$^2$. This measurement provides an important check for the Monte Carlo modelling of neutron production in the heavy materials that are often used as shielding in low-background experiments.
Abstract:
The radiative decay of a hyperon into a lighter hyperon and a photon allows a study of the structure of the electroweak interaction of hadrons. For this purpose the decay asymmetry $\alpha$ is considered. It describes the distribution of the daughter hyperon with respect to the polarization $\vec{P}$ of the parent hyperon via $dN/d\cos\Theta \propto 1 + \alpha |\vec{P}| \cos\Theta$, where $\Theta$ is the angle between $\vec{P}$ and the momentum of the daughter hyperon. Of particular interest is the radiative decay $\Xi^0 \to \Lambda\gamma$, for which all quark-level calculations predict a positive asymmetry, whereas a negative asymmetry of $\alpha_{\Lambda\gamma} = -0.73 \pm 0.17$ had been measured so far. The aim of this work was to check the previous measurements and to determine the asymmetry with considerably higher precision. Furthermore, the decay asymmetry of the radiative decay $\Xi^0 \to \Sigma^0\gamma$ was determined, and the well-known decay $\Xi^0 \to \Lambda\pi^0$ was used to test the analysis method. During data taking in 2002, the NA48/1 experiment at CERN specifically recorded rare $K_S$ and hyperon decays. This yielded the world's largest data set of $\Xi^0$ decays, from which about 52,000 $\Xi^0 \to \Lambda\gamma$ decays, 15,000 $\Xi^0 \to \Sigma^0\gamma$ decays and 4 million $\Xi^0 \to \Lambda\pi^0$ decays were extracted with only small background. The corresponding $\bar\Xi^0$ decays were also recorded, with about one tenth of the above event numbers. The decay asymmetries were determined by comparing the measured data with a detailed detector simulation, leading to the following results of this work: $\alpha_{\Lambda\gamma} = -0.701 \pm 0.019_{stat} \pm 0.064_{sys}$, $\alpha_{\Sigma^0\gamma} = -0.683 \pm 0.032_{stat} \pm 0.077_{sys}$, $\alpha_{\Lambda\pi^0} = -0.439 \pm 0.002_{stat} \pm 0.056_{sys}$, $\alpha_{\bar\Lambda\gamma} = 0.772 \pm 0.064_{stat} \pm 0.066_{sys}$, $\alpha_{\bar\Sigma^0\gamma} = 0.811 \pm 0.103_{stat} \pm 0.135_{sys}$, $\alpha_{\bar\Lambda\pi^0} = 0.451 \pm 0.005_{stat} \pm 0.057_{sys}$. Thus the uncertainty of the $\Xi^0 \to \Lambda\gamma$ decay asymmetry was reduced to about one third. Its negative sign, and hence the contradiction with the predictions of quark-model calculations, is thereby confirmed beyond doubt. With the $\bar\Xi^0$ asymmetries, measured here for the first time, limits on a possible CP violation in $\Xi^0$ decays, which would imply $\alpha_{\Xi^0} \neq -\alpha_{\bar\Xi^0}$, could additionally be determined.
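The role of the decay asymmetry can be illustrated with a toy Monte Carlo: decay angles are generated from $dN/d\cos\Theta \propto 1 + \alpha |\vec{P}| \cos\Theta$ and $\alpha$ is recovered from the mean of $\cos\Theta$, which equals $\alpha |\vec{P}|/3$. The asymmetry and polarization values below are assumptions for illustration; the actual analysis compares the data with a detailed detector simulation.

import numpy as np

# Toy Monte Carlo: draw cos(Theta) from the linear distribution (1 + a*c)/2
# with a = alpha*|P|, then recover alpha from <cos(Theta)> = a/3.

rng = np.random.default_rng(5)
alpha_true, P = -0.70, 0.4                       # assumed asymmetry and polarization
a = alpha_true * P

# Rejection sampling of cos(Theta) on [-1, 1].
n = 500_000
c = rng.uniform(-1.0, 1.0, size=4 * n)
keep = rng.uniform(0.0, 1.0, size=c.size) < (1.0 + a * c) / (1.0 + abs(a))
c = c[keep][:n]

alpha_est = 3.0 * c.mean() / P
err = 3.0 * c.std() / (P * np.sqrt(len(c)))
print(f"alpha = {alpha_est:.3f} +/- {err:.3f}  (true {alpha_true})")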
Abstract:
By pulling and releasing the tension on protein homomers with the Atomic Force Microscope (AFM) at different pulling speeds, dwell times and dwell distances, the observed force response of the protein can be fitted with suitable theoretical models. To this end we developed mathematical procedures and open-source computer codes for driving such experiments and for fitting Bell's model to experimental protein unfolding forces and protein folding frequencies. We applied these techniques to the study of the proteins GB1 (the B1 IgG-binding domain of protein G from Streptococcus) and I27 (a module of human cardiac titin) in aqueous solutions of protecting osmolytes such as dimethyl sulfoxide (DMSO), glycerol and trimethylamine N-oxide (TMAO). In order to gain a molecular understanding of the experimental results, we developed an Ising-like model for proteins that incorporates the osmophobic nature of their backbone. The model benefits from analytical thermodynamics and from kinetics amenable to Monte Carlo simulation. The prevailing view used to be that small protecting osmolytes bridge the separating beta-strands of proteins with mechanical resistance, presumably shifting the transition state to significantly larger distances that correlate with the molecular size of the osmolyte molecules. Our experiments showed instead that protecting osmolytes slow down protein unfolding and speed up protein folding at physiological pH without shifting the protein transition state along the mechanical reaction coordinate. Together with the theoretical results of the Ising model, our results lend support to the osmophobic theory, according to which osmolyte stabilisation is a result of the preferential exclusion of the osmolyte molecules from the protein backbone. The results obtained during this thesis work have markedly improved our understanding of the strategy selected by Nature to strengthen protein stability in hostile environments, shifting the focus from hypothetical protein-osmolyte interactions to the more general mechanism based on the osmophobicity of the protein backbone.
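As an illustration of a Bell-model fit, the sketch below uses the standard dynamic force spectroscopy relation F* = (kB T / Δx) ln(r Δx / (k0 kB T)), linear in the logarithm of the loading rate r, to recover the transition-state distance Δx and the spontaneous unfolding rate k0 from synthetic force data. The parameter values are assumptions, not the results obtained for GB1 or I27.

import numpy as np

# Bell-model fit to (synthetic) most probable unfolding forces vs loading rate:
#   F* = (kB*T/Dx) * ln( r * Dx / (k0 * kB*T) ), i.e. linear in ln(r).

kBT = 4.11                                   # pN*nm at room temperature
Dx_true, k0_true = 0.25, 0.05                # nm, 1/s (assumed)

rates = np.array([10.0, 100.0, 1000.0, 10000.0])                   # loading rates (pN/s)
F = (kBT / Dx_true) * np.log(rates * Dx_true / (k0_true * kBT))    # synthetic forces
F += np.random.default_rng(6).normal(0.0, 2.0, size=F.size)        # +/- 2 pN scatter

slope, intercept = np.polyfit(np.log(rates), F, 1)
Dx_fit = kBT / slope
k0_fit = (Dx_fit / kBT) * np.exp(-intercept / slope)
print(f"Dx = {Dx_fit:.2f} nm, k0 = {k0_fit:.3f} 1/s")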
Abstract:
This thesis is concerned with the adsorption and detachment of polymers at planar, rigid surfaces. We have carried out a systematic investigation of polymer adsorption using analytical techniques as well as Monte Carlo simulations with a coarse-grained off-lattice bead-spring model. The investigation was carried out in three stages. In the first stage, the adsorption of a single multiblock AB copolymer on a solid surface was investigated by means of simulations and scaling analysis. It was shown that the problem can be mapped onto an effective homopolymer problem. Our main result was the phase diagram of regular multiblock copolymers, which shows an increase in the critical adsorption potential of the substrate with decreasing block size. We also considered the adsorption of random copolymers, which was found to be well described within the annealed disorder approximation. In the next stage, we studied the adsorption kinetics of a single polymer on a flat, structureless surface in the regime of strong physisorption. The idea of a 'stem-flower' polymer conformation and the mechanism of 'zipping' during the adsorption process were used to derive a Fokker-Planck equation with reflecting boundary conditions for the time-dependent probability distribution function (PDF) of the number of adsorbed monomers. The numerical solution of the time-dependent PDF obtained from a discrete set of coupled differential equations was shown to be in perfect agreement with Monte Carlo simulation results. Finally, we studied force-induced desorption of a polymer chain adsorbed on an attractive surface. We approached the problem within the framework of two different statistical ensembles: (i) by keeping the pulling force fixed while measuring the position of the polymer chain end, and (ii) by measuring the force necessary to keep the chain end at a fixed distance above the adsorbing plane. In the first case, we treated the problem within the framework of the Grand Canonical Ensemble approach and derived analytic expressions for the various conformational building blocks characterizing the structure of an adsorbed linear polymer chain subject to a pulling force of fixed strength. The main result was the phase diagram of a polymer chain under pulling. We demonstrated a novel first-order phase transformation which is dichotomic, i.e. phase coexistence is not possible. In the second case, we carried out our study in the "fixed height" statistical ensemble, where one measures the fluctuating force exerted by the chain on the last monomer when the chain end is kept fixed at a height h above the solid plane at different adsorption strengths ε. The phase diagram in the h − ε plane was calculated both analytically and by Monte Carlo simulations. We demonstrated that in the vicinity of the polymer desorption transition a number of properties, such as fluctuations and probability distributions of various quantities, behave differently if h rather than the force f is used as the independent control parameter.
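One common way to state the annealed-disorder approximation mentioned above is to assign each surface contact of a random AB copolymer the composition-averaged Boltzmann weight, which defines an effective homopolymer contact energy. The short sketch below evaluates this effective energy for a range of A-monomer fractions; the energies used are illustrative and the expression is a generic textbook form, not necessarily the exact one used in the thesis.

import numpy as np

# Annealed-disorder sketch for random AB copolymer adsorption: each contact
# carries the composition-averaged Boltzmann weight, defining
#   eps_eff = kB*T * ln( p * exp(eps_A/kB*T) + (1-p) * exp(eps_B/kB*T) ).

kBT = 1.0
eps_A, eps_B = 1.5, 0.0          # sticky A monomers, neutral B monomers (in units of kBT)

for p in (0.1, 0.25, 0.5, 0.75, 1.0):
    eps_eff = kBT * np.log(p * np.exp(eps_A / kBT) + (1 - p) * np.exp(eps_B / kBT))
    print(f"fraction of A monomers p = {p:.2f}: eps_eff = {eps_eff:.3f} kBT")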
Abstract:
The advances that have characterized spatial econometrics in recent years are mostly theoretical and have not yet found extensive empirical application. In this work we aim to provide a review of the main tools of spatial econometrics and to present an empirical application of one of the most recently introduced estimators. Despite the numerous alternatives that econometric theory provides for the treatment of spatial (and spatiotemporal) data, empirical analyses are still limited by the lack of availability of the corresponding routines in statistical and econometric software. Spatiotemporal modeling represents one of the most recent developments in spatial econometric theory, and the finite-sample properties of the estimators that have been proposed are currently being tested in the literature. We provide a comparison between some estimators (a quasi-maximum likelihood, QML, estimator and some GMM-type estimators) for a fixed-effects dynamic panel data model under certain conditions, by means of a Monte Carlo simulation analysis. We focus on different settings, characterized either by fully stable or by quasi-unit-root series. We also investigate the extent of the bias that is caused by a non-spatial estimation of a model when the data are characterized by different degrees of spatial dependence. Finally, we provide an empirical application of a QML estimator for a time-space dynamic model which includes a temporal, a spatial and a spatiotemporal lag of the dependent variable. This is done by choosing a relevant and prolific field of analysis, in which spatial econometrics has so far found only limited space, in order to explore the value added of considering the spatial dimension of the data. In particular, we study the determinants of cropland values in the Midwestern U.S.A. in the years 1971-2009, taking the present value model (PVM) as the theoretical framework of analysis.
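The bias from a non-spatial estimation can be illustrated with a small Monte Carlo experiment: data are generated from a spatial-lag (SAR) process y = ρWy + Xβ + ε and the slope is then estimated by OLS, ignoring the spatial lag. The ring-shaped weight matrix and all parameter values below are illustrative, not the design used in the thesis.

import numpy as np

# Monte Carlo sketch of the bias of non-spatial OLS when the data follow
# a spatial-lag process y = rho*W*y + x*beta + eps.

rng = np.random.default_rng(7)
n, beta, reps = 200, 1.0, 500

# Row-normalized ring weight matrix: the neighbours of unit i are i-1 and i+1.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

for rho in (0.0, 0.3, 0.6, 0.9):
    A_inv = np.linalg.inv(np.eye(n) - rho * W)
    est = []
    for _ in range(reps):
        x = rng.standard_normal(n)
        y = A_inv @ (beta * x + rng.standard_normal(n))
        est.append((x @ y) / (x @ x))          # OLS slope ignoring the spatial lag
    print(f"rho = {rho:.1f}: mean OLS beta = {np.mean(est):.3f} (true 1.0)")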