986 results for Threshold models
Abstract:
A conceptually unifying and flexible approach to the ABC and FGH segments of the nortriterpenoid rubrifloradilactone C, each embodying a furo[3,2-b]furanone moiety, from the appropriate Morita-Baylis-Hillman adducts is delineated. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
The problem of time-variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time-varying mean and a random component that can be treated as weakly stationary. For linear systems, an approximate analytical solution to the problem of reliability model updating is obtained by combining the theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using strong forms of Itô-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data.
The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations. (C) 2010 Elsevier Ltd. All rights reserved.
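The linear-system procedure described in this abstract combines a discrete Kalman filter with level crossing statistics. The idea can be sketched minimally as below, assuming a scalar linear state, invented noise levels and threshold, and a simple Gaussian approximation for the exceedance probability; none of these specifics come from the paper.

```python
import numpy as np
from math import erf

# Scalar discrete Kalman filter assimilating noisy displacement
# measurements, followed by a Gaussian threshold-exceedance check on the
# filtered state. All numerical values are illustrative only.
rng = np.random.default_rng(0)

a, q, r = 0.95, 0.05, 0.10      # state transition, process var, measurement var
x_true, x_hat, P = 0.0, 0.0, 1.0
for _ in range(50):
    x_true = a * x_true + rng.normal(0.0, np.sqrt(q))   # latent dynamics
    y = x_true + rng.normal(0.0, np.sqrt(r))            # sparse-point measurement
    x_hat, P = a * x_hat, a * a * P + q                 # predict
    K = P / (P + r)                                     # Kalman gain
    x_hat, P = x_hat + K * (y - x_hat), (1.0 - K) * P   # update

# Probability that the (Gaussian) filtered state exceeds a safety threshold
threshold = 1.5
p_exceed = 1.0 - 0.5 * (1.0 + erf((threshold - x_hat) / np.sqrt(2.0 * P)))
```

The assimilation step drives the posterior variance `P` well below its prior value, which is what makes the updated reliability estimate sharper than one postulated before any measurements.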
Abstract:
Summary: Development of mass-balance models for lakes.
Abstract:
Summary: Snow models in hydrological forecasting models.
Abstract:
Deterministic models have been widely used to predict water quality in distribution systems, but their calibration requires extensive and accurate data sets for numerous parameters. In this study, alternative data-driven modeling approaches based on artificial neural networks (ANNs) were used to predict temporal variations of two important characteristics of water quality: chlorine residual and biomass concentrations. The authors considered three types of ANN algorithms. Of these, the Levenberg-Marquardt algorithm provided the best results in predicting residual chlorine and biomass with both error-free and "noisy" data. The ANN models developed here can generate water quality scenarios for piped systems in real time to help utilities identify weak points of low chlorine residual and high biomass concentration and select optimum remedial strategies.
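The Levenberg-Marquardt algorithm named above is a nonlinear least-squares method, so a tiny network can be fitted with it directly via SciPy's `'lm'` solver. The sketch below uses synthetic chlorine-decay data and an invented one-hidden-layer architecture; the study's actual inputs, network size and training data are not given here.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "chlorine residual vs. time" data (invented for illustration)
rng = np.random.default_rng(1)
t = np.linspace(0.0, 24.0, 40)                       # hours
chlorine = 1.2 * np.exp(-0.08 * t) + rng.normal(0.0, 0.02, t.size)

def net(params, x):
    # 1 input -> 3 tanh hidden units -> 1 linear output (10 parameters)
    w1, b1, w2, b2 = params[:3], params[3:6], params[6:9], params[9]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def residuals(params):
    return net(params, t) - chlorine

# method="lm" is SciPy's Levenberg-Marquardt implementation
fit = least_squares(residuals, x0=rng.normal(0.0, 0.5, 10), method="lm")
rmse = np.sqrt(np.mean(fit.fun ** 2))
```

Levenberg-Marquardt trains the weights by minimizing the sum of squared prediction errors, which is why it maps naturally onto a generic least-squares routine for small networks like this one.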
Abstract:
Polarizabilities and hyperpolarizabilities of conjugated organic chains are calculated using correlated model Hamiltonians. While correlations reduce the polarizabilities and extend the range of linear response, the hyperpolarizabilities are essentially unaffected. This explains the apparently large hyperpolarizabilities of conjugated electronic systems.
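The quantity at stake can be illustrated with a much simpler system than the correlated Hamiltonians used in the paper: a non-interacting tight-binding (Hückel-like) chain, where the longitudinal polarizability follows from a finite-field derivative of the ground-state dipole moment. Everything here (hopping, field strength, units) is a generic model assumption, not the paper's setup.

```python
import numpy as np

def dipole(N, field, t_hop=1.0):
    """Ground-state dipole of an N-site tight-binding chain in a uniform
    field (model units), at half filling with two electrons per level."""
    sites = np.arange(N) - (N - 1) / 2.0                 # site positions
    H = -t_hop * (np.eye(N, k=1) + np.eye(N, k=-1))      # nearest-neighbour hopping
    H += np.diag(-field * sites)                         # linear on-site potential
    vals, vecs = np.linalg.eigh(H)
    occ = vecs[:, : N // 2]                              # occupied orbitals
    density = 2.0 * (occ ** 2).sum(axis=1)               # site occupations
    return float((density * sites).sum())

# Linear polarizability from a symmetric finite difference
E = 1e-3
alpha = (dipole(10, E) - dipole(10, -E)) / (2.0 * E)
```

In this non-interacting limit the polarizability grows rapidly with chain length; the abstract's point is that adding correlations tempers that growth while leaving the hyperpolarizabilities essentially unchanged.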
Abstract:
Cosmopolitan ideals have been on the philosophical agenda for several millennia, but the end of the Cold War started a new discussion on state sovereignty, global democracy, the role of international law and global institutions. The Westphalian state system, in practice since the 17th century, is transforming, and the democracy deficit needs new solutions. One impetus has been the fact that in the present world, an international body representing global citizens does not exist. In this Master’s thesis, the possibility of establishing a world parliament is examined. In a case analysis, 17 models of a world parliament from two journals, a volume of essays and two other publications are discussed. Based on general observations, the models are divided into four thematic groups. The models are analyzed with an emphasis on feasible and probable elements. Further, a new scenario with a time frame of thirty years is proposed, based on the methodology of normative futures studies and taking special interest in causal relationships and actions leading to change. The scenario presents three gradual steps that each need to be realized before a sustainable world parliament is established. The theoretical framework is based on social constructivism, and changes in international and multi-level governance are examined through the concepts of globalization, democracy and sovereignty. A feasible, desirable and credible world parliament is constituted gradually by implementing electoral, democratic and legal measures, with members initially drawn from exclusively democratic states, parliamentarians, non-governmental organizations and other groups. The parliament should be located outside the United Nations context, since a new body avoids the problem of inefficiency currently prevailing in the UN. The main objectives of the world parliament are to safeguard peace and international law and to offer legal advice in cases where international law has been violated.
A feasible world parliament is advisory in the beginning, but is granted legislative powers in the future. The number of members in the world parliament could also be extended, following the example of the EU enlargement process.
Abstract:
Ecology and evolutionary biology is the study of life on this planet. One of the many methods applied to answering the great diversity of questions regarding the lives and characteristics of individual organisms is the use of mathematical models. Such models are used in a wide variety of ways. Some help us to reason, functioning as aids to, or substitutes for, our own fallible logic, thus making argumentation and thinking clearer. Models which help our reasoning can lead to conceptual clarification; by expressing ideas in algebraic terms, the relationships between different concepts become clearer. Other mathematical models are used to better understand yet more complicated models, or to develop mathematical tools for their analysis. Though helping us to reason and serving as tools in the craftsmanship of science, many models do not tell us much about the real biological phenomena we are, at least initially, interested in. The main reason for this is that any mathematical model is a simplification of the real world, reducing the complexity and variety of interactions and the idiosyncrasies of individual organisms. What such models can tell us, however, both is and has been very valuable throughout the history of ecology and evolution. Minimally, a model simplifying the complex world can tell us that the patterns produced in the model could, in principle, also be produced in the real world. We can never know how different a simplified mathematical representation is from the real world, but the similarity models strive for gives us confidence that their results could apply. This thesis deals with a variety of different models, used for different purposes. One model deals with how one can measure and analyse invasions, the expanding phase of invasive species. Earlier analyses claim to have shown that such invasions can be a regulated phenomenon, in which higher invasion speeds at a given point in time lead to a reduction in speed.
Two simple mathematical models show that analyses of this particular measure of invasion speed need not be evidence of regulation. In the context of dispersal evolution, two models acting as proofs of principle are presented. Parent-offspring conflict emerges when there are different evolutionary optima for the adaptive behavior of parents and offspring. We show that the evolution of dispersal distances can entail such a conflict, and that under parental control of dispersal (as, for example, in higher plants) wider dispersal kernels are optimal. We also show that dispersal homeostasis can be optimal: in a setting where dispersal decisions (to leave or stay in a natal patch) are made, strategies that divide their seeds or eggs into fixed fractions that disperse or not, as opposed to randomizing the decision for each seed, can prevail. We also present a model of the evolution of bet-hedging strategies: evolutionary adaptations that occur despite their fitness, on average, being lower than that of a competing strategy. Such strategies can win in the long run because their reduced variance in fitness is coupled with a reduction in mean fitness, and fitness is multiplicative across generations and therefore sensitive to variability. This model is used for conceptual clarification: by developing a population-genetic model with uncertain fitness and expressing genotypic variance in fitness as a product of individual-level variance and correlations between individuals of a genotype, we arrive at expressions that intuitively reflect two of the main categorizations of bet-hedging strategies: conservative vs. diversifying and within- vs. between-generation bet hedging. In addition, this model shows that these divisions are in fact false dichotomies.
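The bet-hedging argument above rests on one arithmetic fact: because fitness multiplies across generations, long-run growth is governed by the geometric, not the arithmetic, mean. The fitness values below are invented purely to illustrate that a lower-mean, lower-variance strategy can still win.

```python
import numpy as np

# Per-generation fitness in [good year, bad year], equally likely.
spec = np.array([2.2, 0.4])     # specialist: high mean, high variance
hedge = np.array([1.3, 1.1])    # bet-hedger: lower mean, low variance

arith_spec, arith_hedge = spec.mean(), hedge.mean()   # 1.3 vs 1.2
geom_spec = np.sqrt(spec.prod())                      # sqrt(0.88) ~ 0.94
geom_hedge = np.sqrt(hedge.prod())                    # sqrt(1.43) ~ 1.20
```

The specialist has the higher arithmetic mean, yet its geometric mean is below 1, so over many generations it declines while the bet-hedger grows; this is the sense in which a strategy with lower average fitness "wins in the long run".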
Abstract:
A study is presented which is aimed at developing techniques suitable for effective planning and efficient operation of fleets of aircraft typical of the air force of a developing country. An important aspect of fleet management, the problem of resource allocation for achieving prescribed operational effectiveness of the fleet, is considered. For analysis purposes, it is assumed that the planes operate in a single flying-base repair-depot environment. The perennial problem of resource allocation for fleet and facility buildup that faces planners is modeled and solved as an optimal control problem. These models contain two "policy" variables representing investments in aircraft and repair facilities. The feasibility of decentralized control is explored by assuming the two policy variables are under the control of two independent decision makers guided by different and often poorly coordinated objectives.
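The kind of two-control buildup problem described can be sketched as a discrete-time optimal control problem: stocks of aircraft and repair capacity grow from two investment streams under a shared per-period budget. The dynamics, decay rates, budget and effectiveness measure below are all invented for illustration, not taken from the study.

```python
import numpy as np
from scipy.optimize import minimize

T, budget = 10, 1.0   # planning horizon (periods) and per-period budget

def neg_effectiveness(u):
    # u[:T] = investment in aircraft, u[T:] = investment in repair facilities
    uA, uR = u[:T], u[T:]
    A = R = 0.0
    eff = 0.0
    for k in range(T):
        A += uA[k] - 0.05 * A        # aircraft stock with attrition
        R += uR[k] - 0.02 * R        # repair capacity with depreciation
        eff += min(A, 4.0 * R)       # sorties limited by repair capacity
    return -eff                      # minimize the negative => maximize

# Per-period budget constraint: u_A[k] + u_R[k] <= budget
cons = [{"type": "ineq", "fun": lambda u, k=k: budget - u[k] - u[T + k]}
        for k in range(T)]
res = minimize(neg_effectiveness, x0=np.full(2 * T, 0.3),
               bounds=[(0.0, budget)] * (2 * T), constraints=cons)
```

Splitting the control vector between two independent objective functions, each optimizing "its own" investment stream, is the natural way to probe the decentralized-control question the abstract raises.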
Abstract:
This thesis is composed of an introductory chapter and four applications, each of them constituting a chapter of its own. The common element underlying the chapters is the econometric methodology. The applications rely mostly on leading econometric techniques for the estimation of causal effects. The first chapter introduces the econometric techniques that are employed in the remaining chapters. Chapter 2 studies the effects of shocking news on student performance. It exploits the fact that the school shooting in Kauhajoki in 2008 coincided with the matriculation examination period of that fall. It shows that the performance of men declined due to the news of the school shooting; for women, no similar pattern is observed. Chapter 3 studies the effects of the minimum wage on employment by employing the original Card and Krueger (1994; CK) and Neumark and Wascher (2000; NW) data together with the changes-in-changes (CIC) estimator. As its main result, it shows that the employment effect of an increase in the minimum wage is positive for small fast-food restaurants and negative for big fast-food restaurants. The controversial positive employment effect reported by CK is thus overturned for big fast-food restaurants, and the NW data, in contrast to their original results, are shown to provide support for the positive employment effect. Chapter 4 employs the state-specific U.S. data (collected by Cohen and Einav [2003; CE]) on traffic fatalities to re-evaluate the effects of seat belt laws using the CIC estimator. It confirms the CE results that, on average, the implementation of a mandatory seat belt law results in an increase in the seat belt usage rate and a decrease in the total fatality rate. In contrast to CE, it also finds evidence for the compensating-behavior theory, observed especially in states along the borders of the U.S.
Chapter 5 studies life cycle consumption in Finland, with special interest in the baby boomers and older households. It shows that the baby boomers smooth their consumption over the life cycle more than other generations do. It also shows that, as a result of the recession of the 1990s, old households smoothed their life cycle consumption more than young households did.
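The changes-in-changes estimator used in Chapters 3 and 4 (Athey and Imbens's generalization of difference-in-differences) has a simple core: the counterfactual period-1 outcome for a treated unit with period-0 outcome y is F_01^{-1}(F_00(y)), i.e. y mapped through the control group's quantile change. The sketch below uses synthetic data with a built-in treatment effect of 0.5; it is a bare-bones illustration, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
y00 = rng.normal(10.0, 2.0, 500)    # control group, before
y01 = y00 + 1.0                      # control group, after (+1 common trend)
y10 = rng.normal(11.0, 2.0, 500)     # treated group, before
y11 = y10 + 1.0 + 0.5                # treated group, after (trend + effect 0.5)

def cic_effect(y00, y01, y10, y11):
    # F00(y10): empirical rank of each treated-before outcome in the
    # control-before distribution
    ranks = np.searchsorted(np.sort(y00), y10) / len(y00)
    # F01^{-1}(.): read those ranks off the control-after distribution
    counterfactual = np.quantile(y01, np.clip(ranks, 0.0, 1.0))
    return np.mean(y11) - np.mean(counterfactual)

effect = cic_effect(y00, y01, y10, y11)   # should recover roughly 0.5
```

Unlike plain difference-in-differences, this construction lets the "time effect" differ across the outcome distribution, which is what allows the heterogeneous small- vs. big-restaurant conclusions reported above.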
Abstract:
The nonminimal coupling of a massive self-interacting scalar field to a gravitational field is studied. Spontaneous symmetry breaking occurs in the open universe even when the sign of the mass term is positive. In contrast to grand unified theories, symmetry breakdown is more important for the early universe, and symmetry is restored only in the limit of infinite expansion. Symmetry breakdown is shown to occur in flat and closed universes when the mass term carries the wrong sign. The model has a naturally defined effective gravitational coupling coefficient which is rendered time-dependent by the novel symmetry breakdown. It changes sign below a critical value of the cosmic scale factor, indicating the onset of a repulsive field. The presence of the mass term severely alters the behaviour of ordinary matter and radiation in the early universe. The total energy density becomes negative in a certain domain. These features make possible a nonsingular cosmology.