995 results for Self-optimization
Abstract:
We propose a procedure for analyzing and characterizing complex networks. We apply it to the social network constructed from email communications within a medium-sized university with about 1700 employees. Email networks provide an accurate and nonintrusive description of the flow of information within human organizations. Our results reveal the self-organization of the network into a state where the distribution of community sizes is self-similar. This suggests that a universal mechanism, responsible for the emergence of scaling in other self-organized complex systems such as river networks, could also be the underlying driving force in the formation and evolution of social networks.
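As an illustration of this kind of analysis (a minimal sketch, not the authors' procedure: the community-detection algorithm, edge construction, and demo data are placeholder assumptions), one can build the email graph, extract communities, and tabulate the community-size distribution:

```python
# Minimal sketch: email graph -> communities -> community-size distribution.
# The detection algorithm (greedy modularity) and the demo edges are illustrative
# choices, not those used in the paper.
import collections
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def community_size_distribution(edges):
    """edges: iterable of (sender, recipient) pairs from an email log."""
    G = nx.Graph()
    G.add_edges_from(edges)                        # undirected "who emailed whom" network
    communities = greedy_modularity_communities(G)
    sizes = [len(c) for c in communities]
    return sorted(collections.Counter(sizes).items())  # (size, count) pairs for a log-log plot

if __name__ == "__main__":
    demo_edges = [("a", "b"), ("b", "c"), ("c", "a"), ("d", "e"), ("e", "f"), ("f", "d")]
    print(community_size_distribution(demo_edges))
```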
Abstract:
Different microscopic models exhibiting self-organized criticality are studied numerically and analytically. Numerical simulations are performed to compute critical exponents, mainly the dynamical exponent, and to check universality classes. We find that various models lead to the same exponent, but this universality class is sensitive to disorder. From the dynamic microscopic rules we obtain continuum equations with different sources of noise, which we call internal and external. Different correlations of the noise give rise to different critical behavior. A model for external noise is proposed that makes the upper critical dimensionality equal to 4 and leads to the possible existence of a phase transition above d=4. Limitations of approximating these models by a simple nonlinear equation are discussed.
Abstract:
We propose a general scenario to analyze technological changes in socio-economic environments. We illustrate the ideas with a model that, while incorporating the main trends, is simple enough to yield analytical results and, at the same time, sufficiently complex to display rich dynamical behavior. Our study shows that there exists a macroscopic observable that is maximized in a regime where the system is critical, in the sense that the distribution of events follows power laws. Computer simulations show that, in addition, the system always self-organizes to achieve optimal performance in the stationary state.
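For concreteness, checking that "the distribution of events follows power laws" typically involves estimating the tail exponent; below is a minimal sketch using the standard continuous-data maximum-likelihood formula (the event sizes and cutoff are placeholders, not data from the paper):

```python
# Minimal sketch: ML estimate of a power-law exponent alpha for event sizes x >= x_min,
# alpha_hat = 1 + n / sum(ln(x_i / x_min)).  Data and cutoff are placeholders.
import math

def powerlaw_exponent(events, x_min):
    tail = [x for x in events if x >= x_min]
    return 1.0 + len(tail) / sum(math.log(x / x_min) for x in tail)

if __name__ == "__main__":
    sizes = [1.2, 3.4, 2.2, 10.5, 7.1, 55.0, 1.1, 4.8]   # placeholder event sizes
    print(round(powerlaw_exponent(sizes, x_min=1.0), 3))
```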
Abstract:
The self-intermediate dynamic structure factor Fs(k,t) of liquid lithium near the melting temperature is calculated by molecular dynamics. The results are compared with the predictions of several theoretical approaches, paying special attention to the Lovesey model and the Wahnström and Sjögren mode-coupling theory. To this end, the results for the Fs(k,t) second memory function predicted by both models are compared with the ones calculated from the simulations.
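For reference, the quantity being computed and the memory-function hierarchy in which the second memory function appears can be written, in one standard convention (the model-specific expressions are in the cited works), as
\[
F_s(k,t) = \frac{1}{N}\sum_{j=1}^{N}\left\langle e^{\,i\mathbf{k}\cdot\left[\mathbf{r}_j(t)-\mathbf{r}_j(0)\right]}\right\rangle ,
\]
\[
\frac{\partial F_s(k,t)}{\partial t} = -\int_0^t K_1(k,t-t')\,F_s(k,t')\,dt' ,\qquad
\frac{\partial K_1(k,t)}{\partial t} = -\int_0^t K_2(k,t-t')\,K_1(k,t')\,dt' ,
\]
where K_2(k,t) is the second memory function and K_1(k,0) = k_B T k^2 / m fixes the initial value of the first one.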
Abstract:
Molecular dynamics simulation is applied to the study of diffusion properties in binary liquid mixtures made up of soft-sphere particles with different sizes and masses. Self- and distinct velocity correlation functions and the related diffusion coefficients have been calculated. Special attention has been paid to the dynamic cross correlations, which have been computed through recently introduced relative mean molecular velocity correlation functions that are independent of the reference frame. The differences between the distinct velocity correlations and diffusion coefficients in different reference frames (mass-fixed, number-fixed, and solvent-fixed) are discussed.
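As background, the self-diffusion coefficients follow from the standard Green-Kubo relation, while the distinct (cross) correlations involve velocities of different particles and, schematically, take the form below; unlike the self term, they depend on the reference frame unless relative velocities are used (the precise relative-velocity definitions are those introduced in the cited work):
\[
D_a = \frac{1}{3}\int_0^{\infty}\left\langle \mathbf{v}_{a,i}(0)\cdot\mathbf{v}_{a,i}(t)\right\rangle dt ,
\qquad
C^{d}_{ab}(t) \sim \left\langle \mathbf{v}_{a,i}(0)\cdot\mathbf{v}_{b,j}(t)\right\rangle_{i\neq j} ,
\]
where a, b label the species, i, j label particles, and the frame of reference (mass-fixed, number-fixed, or solvent-fixed) enters only through the distinct term.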
Abstract:
Critical exponents of the infinitely slowly driven Zhang model of self-organized criticality are computed for d=2 and 3, with particular emphasis on the various roughening exponents. Besides confirming recent estimates of some exponents, new quantities are monitored and their critical exponents computed. Among other results, it is shown that the three-dimensional exponents do not coincide with those of the Bak-Tang-Wiesenfeld [Phys. Rev. Lett. 59, 381 (1987); Phys. Rev. A 38, 364 (1988)] (Abelian) model, and that the dynamical exponents computed from the correlation length and from the roughness of the energy profile do not necessarily coincide, as is usually implicitly assumed. An explanation for this is provided. The possibility of comparing these results with those obtained from renormalization-group arguments is also briefly addressed.
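For orientation, the Zhang-model dynamics referred to above can be sketched as follows (a minimal 2D implementation in the slow-driving limit; the lattice size, threshold, driving increments, and measured quantities are illustrative, not the paper's protocol):

```python
# Minimal 2D Zhang-model sketch: continuous energies, slow driving, open boundaries.
import random

L, E_C = 32, 1.0
E = [[0.0] * L for _ in range(L)]

def relax():
    """Topple supercritical sites; return the avalanche size (number of topplings)."""
    size = 0
    active = [(i, j) for i in range(L) for j in range(L) if E[i][j] >= E_C]
    while active:
        nxt = []
        for i, j in active:
            if E[i][j] < E_C:          # may already have relaxed earlier in this sweep
                continue
            share = E[i][j] / 4.0      # Zhang rule: energy split equally among the 4 neighbours
            E[i][j] = 0.0              # and the toppling site is reset to zero
            size += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < L and 0 <= nj < L:   # energy crossing the boundary is dissipated
                    E[ni][nj] += share
                    if E[ni][nj] >= E_C:
                        nxt.append((ni, nj))
        active = nxt
    return size

def drive():
    """Slow driving: add small random energy increments until an avalanche starts."""
    while True:
        i, j = random.randrange(L), random.randrange(L)
        E[i][j] += random.uniform(0.0, 0.25)
        if E[i][j] >= E_C:
            return relax()

if __name__ == "__main__":
    sizes = [drive() for _ in range(2000)]       # avalanche sizes for exponent estimates
    print(max(sizes), sum(sizes) / len(sizes))
```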
Abstract:
Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question: how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, has made them the models of choice for many theoretical constructions and practical applications. At the same time, estimation of the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. Several estimation methodologies deal with the estimation of latent variables. One appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps. Thus, it became the subject of my research. This thesis consists of three parts. Each one is written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function for stochastic volatility jump-diffusion models. The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which has been chosen as a general representative of the stock asset class. Hence, the next question is: which jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either an exponential or a double-exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide.
The conclusion of the second chapter provides one more reason to perform that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets. The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of recovering the true parameters, and the third chapter proves that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question naturally arises: whether the computational effort can be reduced without affecting the efficiency of the estimator, or whether the efficiency of the estimator can be improved without dramatically increasing the computational burden. The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used in its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward because of increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on the simulated data. It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, due to the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
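For readers unfamiliar with the setup, a typical affine stochastic volatility jump-diffusion specification and the continuous ECF criterion look, schematically, as follows (a generic sketch under common assumptions; the thesis's exact specification, jump distributions, and weight function may differ):
\[
d\ln S_t = \mu\,dt + \sqrt{V_t}\,dW_t^{S} + Z^{S}\,dN_t ,\qquad
dV_t = \kappa(\theta - V_t)\,dt + \sigma_v\sqrt{V_t}\,dW_t^{V} + Z^{V}\,dN_t ,
\]
with correlated Brownian motions, \(\operatorname{corr}(dW_t^{S},dW_t^{V})=\rho\,dt\), a Poisson process \(N_t\) of intensity \(\lambda\), and jump sizes \(Z^{S}\), \(Z^{V}\) drawn from the chosen distributions (e.g. normal for the log-price, exponential for the variance). The continuous ECF estimator then matches the empirical characteristic function of blocks of observed returns \(x_j\) to the model's unconditional characteristic function \(\phi_\theta\):
\[
\hat{\theta} = \arg\min_{\theta}\int \left|\frac{1}{n}\sum_{j=1}^{n} e^{\,i u^{\prime} x_j} - \phi_{\theta}(u)\right|^{2} w(u)\,du ,
\]
where \(w(u)\) is a weight function and the dimension of \(u\) corresponds to the marginal, bi-dimensional, or three-dimensional characteristic function discussed above.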
Abstract:
Previous Iowa DOT sponsored research has shown that some Class C fly ashes are cementitious (because calcium is combined as calcium aluminates) while other Class C ashes containing similar amounts of elemental calcium are not (1). Fly ashes from modern power plants in Iowa contain significant amounts of calcium in their glassy phases, regardless of their cementitious properties. The present research was based on these findings and on the hypothesis that attack of the amorphous phase of high-calcium fly ash could be initiated with trace additives, thus making calcium available for the formation of useful calcium-silicate cements. Phase I research was devoted to finding potential additives through a screening process; the likely chemicals were tested with fly ashes representative of the cementitious and non-cementitious ashes available in the state. Ammonium phosphate, a fertilizer, was found to produce 3,600 psi cement with cementitious Neal #4 fly ash; this strength is roughly equivalent to that of portland cement, but at about one-third the cost. Neal #2 fly ash, a slightly cementitious Class C ash, was found to respond best with ammonium nitrate; through this additive, a near-zero strength material was transformed into a 1,200 psi cement. The second research phase was directed at optimizing trace additive concentrations, defining the behavior of the resulting cements, evaluating more comprehensively the fly ashes available in Iowa, and explaining the cement formation mechanisms of the most promising trace additives. X-ray diffraction data demonstrate that both the amorphous and crystalline hydrates of chemically enhanced fly ash differ from those of unaltered fly ash hydrates. Calcium-aluminum-silicate hydrates were formed, rather than the expected (and hypothesized) calcium-silicate hydrates. These new reaction products explain the observed strength enhancement. The final phase concentrated on laboratory application of the chemically enhanced fly ash cements to road base stabilization. Emphasis was placed on the use of marginal aggregates, such as limestone crusher fines and unprocessed blow sand. The nature of the chemically modified fly ash cements led to an evaluation of fine-grained soil stabilization, where a wide range of materials, defined by plasticity index, could be stabilized. Parameters used for evaluation included strength, compaction requirements, set time, and frost resistance.
Abstract:
The goal of the project was to develop a new type of self-consolidating concrete (SCC) for slip-form paving to simplify construction and make smoother pavements. Developing the new SCC involved two phases: a feasibility study (Phase I, sponsored by TPF-5[098] and the concrete admixtures industry) and an in-depth mix proportioning and performance study with field applications (Phase II). The Phase I study demonstrated that the new type of SCC needs to possess not only excellent self-consolidating ability before a pavement slab is extruded, but also sufficient “green” strength (the strength of the concrete in a plastic state) after the extrusion. To meet these performance criteria, the new type of SCC mixtures should not be as fluid as conventional SCC but just flowable enough to be self-consolidating. That is, this new type of SCC should be a semi-flowable self-consolidating concrete (SFSCC). In the Phase II study, the effects of different materials and admixtures on the rheology, especially the thixotropy, and the green strength of fresh SFSCC were further investigated. The results indicate that SFSCC can be designed to (1) be workable enough for machine placement, (2) be self-consolidating without segregation, (3) hold its shape after extrusion from a paver, and (4) have performance properties (strength and durability) comparable with current pavement concrete. Due to the combined flowability (for self-consolidation) and shape-holding (for slip-forming) requirements, SFSCC demands a higher cementitious content than conventional pavement concrete. Generally, high cementitious content is associated with high drying shrinkage potential of the concrete. However, well-proportioned and well-constructed SFSCC in a bike path built in Ames, IA, has not shown any shrinkage cracks after approximately 3 years of field service. On the other hand, another SFSCC pavement with different mix proportions and construction conditions showed random cracking. The results from the field SFSCC performance monitoring imply that not only the mix proportioning method but also the construction practice is important for producing durable SFSCC pavements. A carbon footprint, energy consumption, and cost analysis conducted in this study suggests that SFSCC is economically comparable to conventional pavement concrete in fixed-form paving construction, with the benefit of faster, quieter, and easier construction.
Abstract:
Geographic information systems (GIS) and artificial intelligence (AI) techniques were used to develop an intelligent snow removal asset management system (SRAMS). The system has been evaluated through a case study examining snow removal from the roads in Black Hawk County, Iowa, for which the Iowa Department of Transportation (Iowa DOT) is responsible. The SRAMS comprises an expert system that contains the logical rules and expertise of the Iowa DOT’s snow removal experts in Black Hawk County, and a geographic information system to access and manage road data. The system is implemented on a mid-range PC by integrating MapObjects 2.1 (a GIS package), Visual Rule Studio 2.2 (an AI shell), and Visual Basic 6.0 (a programming tool). It can efficiently be used to generate prioritized snowplowing routes in visual format, to optimize the allocation of assets for plowing, and to track materials (e.g., salt and sand). A test of the system reveals an improvement in snowplowing time of 1.9 percent for moderate snowfall and 9.7 percent for snowstorm conditions over the current manual system.
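To illustrate the flavour of rule-based route prioritization (a purely hypothetical sketch; the actual rules live in the Visual Rule Studio knowledge base and are not reproduced here, and the attributes and thresholds below are invented placeholders):

```python
# Hypothetical sketch of rule-based snowplowing priority; attributes and
# thresholds are placeholders, not the SRAMS rule base.
from dataclasses import dataclass

@dataclass
class RoadSegment:
    name: str
    functional_class: str    # e.g. "interstate", "arterial", "local"
    traffic_volume: int      # average daily traffic
    is_emergency_route: bool

def plow_priority(seg: RoadSegment) -> int:
    """Lower number = plowed earlier; rules are illustrative only."""
    if seg.is_emergency_route or seg.functional_class == "interstate":
        return 1
    if seg.functional_class == "arterial" or seg.traffic_volume > 5000:
        return 2
    return 3

if __name__ == "__main__":
    segments = [RoadSegment("US-218", "arterial", 12000, False),
                RoadSegment("Oak St", "local", 800, False)]
    for s in sorted(segments, key=plow_priority):
        print(plow_priority(s), s.name)
```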
Abstract:
Central and peripheral tolerance prevent autoimmunity by deleting the most aggressive CD8(+) T cells, but they spare cells that react weakly to tissue-restricted antigen (TRA). To reveal the functional characteristics of these spared cells, we generated a transgenic mouse expressing the TCR of a TRA-specific T cell that had escaped negative selection. Interestingly, the isolated TCR matches the affinity/avidity threshold for negatively selecting T cells, and when developing transgenic cells are exposed to their TRA in the thymus, only a fraction of them are eliminated while significant numbers enter the periphery. In contrast to high-avidity cells, low-avidity T cells persist in the antigen-positive periphery with no signs of anergy, unresponsiveness, or prior activation. Upon activation during an infection, they cause autoimmunity and form memory cells. Unexpectedly, peptide ligands that stimulate the transgenic T cells more weakly than the thymic threshold ligand also induce profound activation in the periphery. Thus, the peripheral T cell activation threshold during an infection is below that of negative selection for TRA. These results demonstrate the existence of a level of self-reactivity to TRA against which the thymus confers no protection and illustrate that organ damage can occur without genetic predisposition to autoimmunity.