988 results for Optimization framework


Relevance: 20.00%

Abstract:

Coalescing compact binary systems are important sources of gravitational waves. Here we investigate the detectability of this gravitational radiation by the recently proposed laser interferometers. The spectral density of noise for various practicable configurations of the detector is also reviewed, including laser interferometers with delay lines and Fabry-Pérot cavities in the arms, in both standard and dual recycling arrangements. The sensitivity of the detector in all these configurations is presented graphically, and the signal-to-noise ratio is calculated numerically. For all configurations we find values of the detector's parameters which maximize the detectability of coalescing binaries, the discussion comprising Newtonian- as well as post-Newtonian-order effects. Contour plots of the signal-to-noise ratio are also presented in certain parameter domains to illustrate the interferometer's response to coalescing binary signals.
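The signal-to-noise calculation described above follows the usual matched-filter form, SNR² = 4∫|h̃(f)|²/Sₙ(f) df. A minimal numerical sketch, assuming an illustrative toy noise curve and a Newtonian chirp amplitude scaling of f^(−7/6); the actual detector configurations and noise spectra of the paper are not reproduced here:

```python
import numpy as np

# Matched-filter SNR for a coalescing-binary signal:
#   SNR^2 = 4 * integral |h(f)|^2 / S_n(f) df
# Both the noise curve S_n(f) and the signal normalization below are
# illustrative placeholders, not the interferometer configurations of the paper.

def snr(f, h_abs, S_n):
    """Trapezoidal integration of the matched-filter SNR."""
    integrand = 4.0 * h_abs**2 / S_n
    return np.sqrt(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(f)))

f = np.linspace(10.0, 1000.0, 5000)          # frequency band, Hz
h_abs = 1e-21 * (f / 100.0) ** (-7.0 / 6.0)  # Newtonian chirp ~ f^(-7/6)
S_n = 1e-46 * (1.0 + (100.0 / f) ** 4 + (f / 300.0) ** 2)  # toy noise PSD

print(f"SNR = {snr(f, h_abs, S_n):.1f}")
```

Maximizing the detectability over detector parameters, as done in the paper, amounts to repeating this integral while varying the parameters that shape Sₙ(f).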

Relevance: 20.00%

Abstract:

We derive a Hamiltonian formulation for the three-dimensional formalism of predictive relativistic mechanics. This Hamiltonian structure is used to derive a set of dynamical equations describing the interaction among systems in perturbation theory.
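The canonical structure referred to can be written schematically as follows; the specific three-dimensional predictive-mechanics Hamiltonian of the paper is not reproduced here, only the generic form with a perturbative expansion in a coupling g:

```latex
\dot{q}^{\,i} = \frac{\partial H}{\partial p_i}, \qquad
\dot{p}_i = -\frac{\partial H}{\partial q^{\,i}}, \qquad
H = H_0 + g\,H_1 + g^2 H_2 + \cdots
```

The dynamical equations for the interacting systems then follow order by order in g.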

Relevance: 20.00%

Abstract:

Because of increasing workplace automation and the diversification of industrial processes, workplaces have become more and more complex. Classical approaches to workplace hazard concerns, such as checklists or sequence models, are therefore of limited use in such complex systems. Moreover, because of the multifaceted nature of workplaces, single-oriented methods, such as AEA (man oriented), FMEA (system oriented), or HAZOP (process oriented), are not satisfactory. A dynamic modeling approach that allows multiple-oriented analyses may overcome this limitation. The qualitative modeling aspects of the MORM (man-machine occupational risk modeling) model are discussed in this article. The model, realized with an object-oriented Petri net tool (CO-OPN), has been developed to simulate and analyze industrial processes from an occupational health and safety (OH&S) perspective. The industrial process is modeled as a set of interconnected subnets (state spaces) which describe its constitutive machines. Process-related factors are introduced explicitly through machine interconnections and flow properties. Man-machine interactions are modeled as triggering events for the machines' state spaces, and the CREAM cognitive behavior model is used to establish the relevant triggering events. In the CO-OPN formalism, the model is expressed as a set of interconnected CO-OPN objects defined over data types expressing the measure attached to the flow of entities transiting through the machines. Constraints on the measures assigned to these entities are used to determine the state changes in each machine. Interconnecting machines implies the composition of such flows and consequently the interconnection of the measure constraints. This is reflected in the construction of constraint enrichment hierarchies, which can be used for simulation and analysis optimization in a clear mathematical framework.
The use of Petri nets to perform multiple-oriented analysis opens perspectives in the field of industrial risk management. It may significantly reduce the duration of the assessment process and, most of all, it opens perspectives in the fields of risk comparison and integrated risk management. Moreover, because of the generic nature of the model and tool used, the same concepts and patterns may be used to model a wide range of systems and application fields.
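The machine-subnet idea above can be sketched with a minimal token-based Petri net: places hold token counts, a transition fires when its input places are marked, and a guard on the marking stands in for the measure constraints described in the text. All names and the guard below are illustrative, not the CO-OPN/MORM model itself:

```python
# Minimal Petri-net sketch: two "machines" linked by a buffer place, with a
# guard mimicking a flow constraint. Illustrative only, not the CO-OPN model.

class PetriNet:
    def __init__(self, marking):
        self.marking = dict(marking)   # place -> token count
        self.transitions = {}          # name -> (inputs, outputs, guard)

    def add_transition(self, name, inputs, outputs, guard=lambda m: True):
        self.transitions[name] = (inputs, outputs, guard)

    def enabled(self, name):
        inputs, _, guard = self.transitions[name]
        return all(self.marking[p] >= n for p, n in inputs.items()) and guard(self.marking)

    def fire(self, name):
        if not self.enabled(name):
            raise ValueError(f"transition {name!r} not enabled")
        inputs, outputs, _ = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] += n

net = PetriNet({"raw": 3, "buffer": 0, "done": 0})
net.add_transition("machine_A", {"raw": 1}, {"buffer": 1},
                   guard=lambda m: m["buffer"] < 2)   # flow constraint on buffer
net.add_transition("machine_B", {"buffer": 1}, {"done": 1})

# Fire enabled transitions until the net is dead.
while net.enabled("machine_A") or net.enabled("machine_B"):
    for t in ("machine_A", "machine_B"):
        if net.enabled(t):
            net.fire(t)

print(net.marking)  # all raw tokens end up in "done"
```

Interconnecting machines, as in the article, corresponds to sharing places between subnets so that the guards (measure constraints) of one machine constrain the flow of another.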

Relevance: 20.00%

Abstract:

Preface. The starting point for this work, and eventually the subject of the whole thesis, was the question of how to estimate the parameters of affine stochastic volatility jump-diffusion models. These models are very important for contingent claim pricing. Their major advantage, the availability of analytical solutions for characteristic functions, made them the models of choice for many theoretical constructions and practical applications. At the same time, estimating the parameters of stochastic volatility jump-diffusion models is not a straightforward task. The problem stems from the variance process, which is not observable. Several estimation methodologies deal with the estimation of latent variables. One of them appeared particularly interesting: it proposes an estimator that, in contrast to the other methods, requires neither discretization nor simulation of the process, namely the Continuous Empirical Characteristic Function (ECF) estimator based on the unconditional characteristic function. However, the procedure was derived only for stochastic volatility models without jumps, and thus it became the subject of my research. This thesis consists of three parts, each written as an independent and self-contained article. At the same time, the questions answered by the second and third parts of this work arise naturally from the issues investigated and the results obtained in the first one. The first chapter is the theoretical foundation of the thesis. It proposes an estimation procedure for stochastic volatility models with jumps in both the asset price and variance processes. The estimation procedure is based on the joint unconditional characteristic function of the stochastic process. The major analytical result of this part, as well as of the whole thesis, is the closed-form expression for the joint unconditional characteristic function of stochastic volatility jump-diffusion models.
The empirical part of the chapter suggests that, besides stochastic volatility, jumps in both the mean and the volatility equation are relevant for modelling returns of the S&P500 index, which was chosen as a general representative of the stock asset class. Hence, the next question is what jump process to use to model returns of the S&P500. The decision about the jump process in the framework of affine jump-diffusion models boils down to defining the intensity of the compound Poisson process, either a constant or some function of the state variables, and to choosing the distribution of the jump size. While the jump in the variance process is usually assumed to be exponential, there are at least three distributions of the jump size currently used for the asset log-prices: normal, exponential, and double exponential. The second part of this thesis shows that normal jumps in the asset log-returns should be used if we are to model the S&P500 index by a stochastic volatility jump-diffusion model. This is a surprising result: the exponential distribution has fatter tails, and for this reason either the exponential or the double exponential jump size was expected to provide the best fit of the stochastic volatility jump-diffusion models to the data. The idea of testing the efficiency of the Continuous ECF estimator on simulated data had already appeared when the first estimation results of the first chapter were obtained. In the absence of a benchmark or any ground for comparison, it is unreasonable to be sure that our parameter estimates and the true parameters of the models coincide. The conclusion of the second chapter provides one more reason to run that kind of test. Thus, the third part of this thesis concentrates on the estimation of the parameters of stochastic volatility jump-diffusion models on the basis of asset price time series simulated from various "true" parameter sets.
The goal is to show that the Continuous ECF estimator based on the joint unconditional characteristic function is capable of finding the true parameters, and the third chapter shows that our estimator indeed has this ability. Once it is clear that the Continuous ECF estimator based on the unconditional characteristic function works, the next question arises immediately: can the computational effort be reduced without affecting the efficiency of the estimator, or can the efficiency of the estimator be improved without dramatically increasing the computational burden? The efficiency of the Continuous ECF estimator depends on the number of dimensions of the joint unconditional characteristic function used for its construction. Theoretically, the more dimensions there are, the more efficient the estimation procedure. In practice, however, this relationship is not so straightforward because of the increasing computational difficulties. The second chapter, for example, in addition to the choice of the jump process, discusses the possibility of using the marginal, i.e. one-dimensional, unconditional characteristic function in the estimation instead of the joint, bi-dimensional, unconditional characteristic function. As a result, the preference for one or the other depends on the model to be estimated; thus, the computational effort can be reduced in some cases without affecting the efficiency of the estimator. Improving the estimator's efficiency by increasing its dimensionality faces more difficulties. The third chapter of this thesis, in addition to what was discussed above, compares the performance of the estimators with bi- and three-dimensional unconditional characteristic functions on simulated data.
It shows that the theoretical efficiency of the Continuous ECF estimator based on the three-dimensional unconditional characteristic function is not attainable in practice, at least for the moment, because of the limitations of the computing power and optimization toolboxes available to the general public. Thus, the Continuous ECF estimator based on the joint, bi-dimensional, unconditional characteristic function has every reason to exist and to be used for the estimation of the parameters of stochastic volatility jump-diffusion models.
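The core of the Continuous ECF idea can be sketched on a toy model: build the empirical characteristic function from a simulated sample and recover parameters by minimizing a weighted distance to the model characteristic function. The Gaussian model, grid search, and weight function below are illustrative stand-ins for the joint SV jump-diffusion characteristic function and the optimizer used in the thesis:

```python
import numpy as np

# Toy ECF estimation: match the empirical characteristic function of an
# i.i.d. sample to a model CF by minimizing a weighted L2 distance over a
# frequency grid u. A plain Gaussian stands in for the (far richer)
# SV jump-diffusion characteristic function of the thesis.

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=20000)   # simulated "returns"

u = np.linspace(-2.0, 2.0, 81)
ecf = np.exp(1j * np.outer(u, x)).mean(axis=1)   # empirical CF on the grid
weights = np.exp(-u ** 2)                        # damp high frequencies

def model_cf(u, mu, sigma):
    """Characteristic function of N(mu, sigma^2)."""
    return np.exp(1j * u * mu - 0.5 * (sigma * u) ** 2)

def objective(mu, sigma):
    return np.sum(weights * np.abs(ecf - model_cf(u, mu, sigma)) ** 2)

# Crude grid search in place of a proper optimizer, for transparency.
mu_grid = np.linspace(0.0, 2.0, 81)
sigma_grid = np.linspace(1.0, 3.0, 81)
scores = [[objective(m, s) for s in sigma_grid] for m in mu_grid]
i, j = divmod(int(np.argmin(scores)), len(sigma_grid))
mu_hat, sigma_hat = mu_grid[i], sigma_grid[j]
print(f"mu ≈ {mu_hat:.3f}, sigma ≈ {sigma_hat:.3f}")
```

Moving from this one-dimensional CF to the joint bi- or three-dimensional CF of consecutive observations is exactly the dimensionality trade-off discussed in the second and third chapters.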

Relevance: 20.00%

Abstract:

Outgoing radiation is introduced in the framework of classical predictive electrodynamics, using the Lorentz-Dirac equation as a subsidiary condition. In a perturbative scheme in the charges, the first radiative self-terms of the accelerations, momentum, and angular momentum of a two-charge system without external field are calculated.
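For reference, the Lorentz-Dirac equation used as the subsidiary condition reads, in its standard covariant form (Gaussian units; u^μ and a^μ are the four-velocity and four-acceleration, F^μ_ext the external four-force):

```latex
m\,a^{\mu} = F^{\mu}_{\mathrm{ext}}
  + \frac{2q^{2}}{3c^{3}}\left(\dot{a}^{\mu}
  - \frac{1}{c^{2}}\,a^{\nu}a_{\nu}\,u^{\mu}\right)
```

The second term is the radiation-reaction self-force, which generates the radiative self-terms computed in the paper order by order in the charges.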

Relevance: 20.00%

Abstract:

Previous Iowa DOT sponsored research has shown that some Class C fly ashes are cementitious (because calcium is combined as calcium aluminates) while other Class C ashes containing similar amounts of elemental calcium are not (1). Fly ashes from modern power plants in Iowa contain significant amounts of calcium in their glassy phases, regardless of their cementitious properties. The present research was based on these findings and on the hypothesis that attack of the amorphous phase of high-calcium fly ash could be initiated with trace additives, thus making calcium available for the formation of useful calcium-silicate cements. Phase I research was devoted to finding potential additives through a screening process; the likely chemicals were tested with fly ashes representative of the cementitious and non-cementitious ashes available in the state. Ammonium phosphate, a fertilizer, was found to produce 3,600 psi cement with cementitious Neal #4 fly ash; this strength is roughly equivalent to that of portland cement, but at about one-third the cost. Neal #2 fly ash, a slightly cementitious Class C, was found to respond best with ammonium nitrate; through the additive, a near-zero-strength material was transformed into a 1,200 psi cement. The second research phase was directed at optimizing trace additive concentrations, defining the behavior of the resulting cements, evaluating more comprehensively the fly ashes available in Iowa, and explaining the cement formation mechanisms of the most promising trace additives. X-ray diffraction data demonstrate that both amorphous and crystalline hydrates of chemically enhanced fly ash differ from those of unaltered fly ash hydrates. Calcium-aluminum-silicate hydrates were formed, rather than the expected (and hypothesized) calcium-silicate hydrates. These new reaction products explain the observed strength enhancement.
The final phase concentrated on laboratory application of the chemically-enhanced fly ash cements to road base stabilization. Emphasis was placed on use of marginal aggregates, such as limestone crusher fines and unprocessed blow sand. The nature of the chemically modified fly ash cements led to an evaluation of fine grained soil stabilization where a wide range of materials, defined by plasticity index, could be stabilized. Parameters used for evaluation included strength, compaction requirements, set time, and frost resistance.

Relevance: 20.00%

Abstract:

Geographic information systems (GIS) and artificial intelligence (AI) techniques were used to develop an intelligent snow removal asset management system (SRAMS). The system has been evaluated through a case study examining snow removal from the roads in Black Hawk County, Iowa, for which the Iowa Department of Transportation (Iowa DOT) is responsible. The SRAMS comprises an expert system that contains the logical rules and expertise of the Iowa DOT's snow removal experts in Black Hawk County, and a geographic information system to access and manage road data. The system is implemented on a mid-range PC by integrating MapObjects 2.1 (a GIS package), Visual Rule Studio 2.2 (an AI shell), and Visual Basic 6.0 (a programming tool). It can be used efficiently to generate prioritized snowplowing routes in visual format, to optimize the allocation of assets for plowing, and to track materials (e.g., salt and sand). A test of the system reveals an improvement in snowplowing time of 1.9 percent for moderate snowfall and 9.7 percent for snowstorm conditions over the current manual system.
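The expert-system side of such a tool can be sketched as a rule-based scoring function over road segments, plowed highest score first. The rules, weights, and segment data below are invented for illustration and are not the Iowa DOT rule base:

```python
# Toy rule-based snowplow prioritizer: each segment is scored from simple
# if-then rules (road class, traffic volume, snowfall severity).
# All rules, weights, and segments are hypothetical.

def priority(segment, snowfall_in):
    score = {"primary": 3, "secondary": 2, "local": 1}[segment["class"]]
    if segment["aadt"] > 10000:            # heavy average daily traffic
        score += 2
    if snowfall_in > 6 and segment["class"] == "primary":
        score += 1                         # storm conditions: bump arterials
    return score

segments = [
    {"name": "US-218", "class": "primary", "aadt": 24000},
    {"name": "Main St", "class": "secondary", "aadt": 8000},
    {"name": "Elm Ct", "class": "local", "aadt": 300},
]

plow_order = sorted(segments, key=lambda s: priority(s, snowfall_in=8),
                    reverse=True)
print([s["name"] for s in plow_order])  # primary routes first
```

In the real system the GIS layer supplies the segment attributes and renders the resulting prioritized routes on the map.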

Relevance: 20.00%

Abstract:

In June 2006, the Swiss Parliament made two important decisions with regard to public registers' governance and individuals' identification. It adopted a new law on the harmonisation of population registers in order to simplify statistical data collection and data exchange among around 4,000 decentralized registers, and it also approved the introduction of a Unique Person Identifier (UPI). The law is rather vague about the implementation of this harmonisation, and even though many projects are currently being undertaken in this domain, most of them are quite technical. We believe there is a need for analysis tools, and we therefore propose a conceptual framework based on three pillars (Privacy, Identity, and Governance) to analyse the requirements in terms of data management for population registers.

Relevance: 20.00%

Abstract:

The applicability, repeatability, and discriminating capacity of different analytical methods for distinguishing oil samples with different degrees of oxidation were evaluated using oils collected from continuous frying processes in several Spanish companies. The aim of this work was to find methods complementary to the determination of the acid value for the routine quality control of the frying oils used in these companies. Optimizing the determination of the dielectric constant clearly improved its variability. Nevertheless, except in the case of the ATB index, the other methods tested showed lower variability. The determination of the ATB index was discarded because its sensitivity was insufficient to discriminate between oils with different degrees of oxidation. The different alteration parameters determined in the frying oils showed significant correlations between the acid value and several oxidation parameters, such as the dielectric constant, the p-anisidine value, ultraviolet absorption, and the triacylglycerol polymer content. The acid value only evaluates hydrolytic alteration, so these parameters provide complementary information by evaluating thermo-oxidative alteration.
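The correlation analysis reported here amounts to computing a Pearson correlation coefficient between the acid value and each oxidation parameter. A minimal sketch with made-up readings (not the study's data), pairing acid value with the dielectric constant:

```python
import numpy as np

# Pearson correlation between acid value and one oxidation parameter
# (dielectric constant). The frying-oil readings below are invented toy
# values for illustration, not the study's measurements.

acid_value = np.array([0.5, 1.1, 1.8, 2.4, 3.0, 3.9])   # mg KOH/g
dielectric = np.array([2.9, 3.1, 3.4, 3.6, 3.9, 4.3])   # dielectric constant

r = np.corrcoef(acid_value, dielectric)[0, 1]
print(f"Pearson r = {r:.3f}")
```

A high r between the hydrolytic marker (acid value) and a thermo-oxidative marker is what motivates using the latter as a complementary routine check.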

Relevance: 20.00%

Abstract:

The purpose of this article is to review strategies for controlling patient dose in adult and pediatric computed tomography (CT), taking into account the change of technology from single-detector row CT to multi-detector row CT. First, the relationships between the computed tomography dose index (CTDI), the dose-length product (DLP), and the effective dose in adult and pediatric CT are reviewed, along with the diagnostic reference level concept. Then the effect of image noise as a function of volume CTDI, reconstructed slice thickness, and patient size is described. Finally, the potential of tube current modulation CT is discussed.
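The dose quantities reviewed here are linked by two simple relations: DLP = CTDIvol × scan length, and E ≈ k × DLP, with k a body-region conversion coefficient. A minimal sketch; the 0.014 mSv/(mGy·cm) value is an approximate adult-chest literature figure used only for illustration, and the scan values are hypothetical:

```python
# Dose bookkeeping linking CTDI_vol, DLP, and effective dose E:
#   DLP = CTDI_vol * scan length   [mGy*cm]
#   E  ≈ k * DLP                   [mSv], k depends on body region and age
# The k value and scan parameters below are illustrative, not prescriptive.

def dose_length_product(ctdi_vol_mgy, scan_length_cm):
    return ctdi_vol_mgy * scan_length_cm

def effective_dose(dlp_mgy_cm, k_msv_per_mgy_cm):
    return dlp_mgy_cm * k_msv_per_mgy_cm

dlp = dose_length_product(ctdi_vol_mgy=10.0, scan_length_cm=30.0)
e = effective_dose(dlp, k_msv_per_mgy_cm=0.014)  # ~adult chest coefficient
print(f"DLP = {dlp:.0f} mGy*cm, E ≈ {e:.1f} mSv")
```

Comparing such a DLP against the diagnostic reference level for the examination is the basic control step the article describes.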

Relevance: 20.00%

Abstract:

Rigorous quantum dynamics calculations of reaction rates and initial-state-selected reaction probabilities of polyatomic reactions can be performed efficiently within the quantum transition state concept, employing flux correlation functions and wave packet propagation based on the multi-configurational time-dependent Hartree approach. Here, analytical formulas and a numerical scheme extending this approach to the calculation of state-to-state reaction probabilities are presented. The formulas derived allow the use of three different dividing surfaces: two surfaces located in the reactant and product asymptotic regions provide full state resolution, while a third surface placed in the transition state region can be used to define an additional flux operator. The eigenstates of the corresponding thermal flux operator then correspond to vibrational states of the activated complex. Transforming these states to reactant and product coordinates and propagating them into the respective asymptotic regions, the full scattering matrix can be obtained. To illustrate the new approach, test calculations study the D + H2(ν, j) → HD(ν′, j′) + H reaction for J = 0.
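For orientation, the rate expression underlying the quantum transition state concept is usually written in terms of a flux-flux correlation function; this is the standard textbook form, not the specific state-to-state formulas derived in the paper:

```latex
k(T)\,Q_r(T) = \int_0^{\infty} C_{ff}(t)\,dt,
\qquad
C_{ff}(t) = \mathrm{tr}\!\left[\,\hat{F}_T\,
  e^{\,i\hat{H}t/\hbar}\,\hat{F}\,e^{-i\hat{H}t/\hbar}\right],
\qquad
\hat{F}_T = e^{-\beta\hat{H}/2}\,\hat{F}\,e^{-\beta\hat{H}/2}
```

Here Q_r is the reactant partition function and F̂ the flux operator through a dividing surface; the eigenstates of the thermal flux operator F̂_T are the states that the paper propagates into the asymptotic regions to assemble the scattering matrix.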