980 results for stochastic load factor
Abstract:
The current state of knowledge and understanding of the long fatigue crack propagation behavior of nickel-base superalloys is reviewed, with particular emphasis on turbine disk materials. The data are presented in the form of crack growth rate versus stress intensity factor range curves, and the effects of variables such as microstructure, load ratio, and temperature in the near-threshold and Paris regimes of the curves are discussed.
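For reference, crack growth in the Paris regime named here is conventionally described by a power law in the stress intensity factor range (a standard textbook relation, not an equation reproduced from this review):

```latex
\frac{da}{dN} = C\,(\Delta K)^{m},
\qquad
\Delta K = K_{\max} - K_{\min} = (1 - R)\,K_{\max}
```

where a is the crack length, N the number of load cycles, C and m material- and temperature-dependent constants, and R the load ratio. Written this way, the relation makes explicit how load ratio and temperature shift the crack growth rate curves discussed above.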
Abstract:
The aim of this article is to draw attention to calculations of the environmental effects of agriculture and to the definition of marginal agricultural yield. When calculating the environmental impacts of agricultural activities, the real environmental load generated by agriculture is not revealed properly through ecological footprint indicators, as the type of agricultural farming (and thus the nature of the pollution it creates) is not incorporated in the calculation. It is commonly known that extensive farming uses relatively small amounts of labor and capital. It produces a lower yield per unit of land and thus requires more land than intensive farming practices to produce similar yields, so it has a larger crop and grazing footprint. Intensive farms, by contrast, apply fertilizers, insecticides, herbicides, etc. to achieve higher yields, and cultivation and harvesting are often mechanized. In this study, the focus is on highlighting the differences in the environmental impacts of extensive and intensive farming practices through a statistical analysis of the factors determining agricultural yield. A marginal function is constructed for the relation between chemical fertilizer use and yield per unit of fertilizer input. Furthermore, a proposal is presented for how the calculation of the yield factor could be improved. The yield factor used in the calculation of biocapacity is not the marginal yield for a given area but is calculated from actual yields, so that biocapacity and the ecological footprint for cropland are equivalent. Calculations of cropland biocapacity therefore do not show the area needed for sustainable production, but rather the actual land area used for agricultural production. The authors propose a modification of the yield factor and calculate the correspondingly changed biocapacity. The results of the statistical analyses reveal the need to clarify the methodology for calculating marginal yield, which could contribute to assessing the real environmental impacts of agriculture.
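As a toy illustration of the marginal function described above (all numbers invented; the article's data and functional form may differ), one can fit a diminishing-returns yield response to fertilizer input and take its derivative as the marginal yield:

```python
import numpy as np

# Invented data: fertilizer input (kg/ha) vs. crop yield (t/ha).
fertilizer = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])
crop_yield = np.array([2.1, 3.4, 4.2, 4.7, 4.9, 5.0])

# Quadratic response: yield(f) = a*f^2 + b*f + c (diminishing returns if a < 0).
a, b, c = np.polyfit(fertilizer, crop_yield, 2)

def marginal_yield(f):
    """Extra yield per extra unit of fertilizer: d(yield)/df."""
    return 2.0 * a * f + b

# Marginal yield shrinks as input grows -- the diminishing-returns effect
# behind the proposed modification of the yield factor.
print(marginal_yield(50.0), marginal_yield(200.0))
```

The gap between this marginal yield and the average (actual) yield is exactly what the authors argue the current yield factor fails to capture.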
Abstract:
This research is based on the premises that teams can be designed to optimize their performance, and that appropriate team coordination is a significant factor in team performance. Contingency theory argues that the effectiveness of a team depends on the right fit of the team design factors to the particular job at hand. Therefore, organizations need computational tools capable of predicting the performance of different team configurations. This research created an agent-based model of teams called the Team Coordination Model (TCM). The TCM estimates the coordination load and performance of a team based on its composition, coordination mechanisms, and the job's structural characteristics. The TCM can be used to determine the team design characteristics most likely to lead the team to optimal performance. The TCM is implemented as an agent-based discrete-event simulation application built using JAVA and the Cybele Pro agent architecture. The model implements the effect of individual team design factors on team processes, but the resulting performance emerges from the behavior of the agents. These team member agents use decision making and explicit and implicit mechanisms to coordinate the job. Model validation included comparing the TCM's results with statistics from a real team and with the results predicted by the team performance literature. An illustrative 2^(6-1) fractional factorial experimental design demonstrates the application of the simulation model to the design of a team. The results of the ANOVA have been used to recommend the combination of levels of the experimental factors that minimizes the completion time for a team that races sailboats. This research's main contribution to the team modeling literature is a model capable of simulating teams working in complex job environments. The TCM implements a stochastic job structure model capable of capturing some of the complexity not captured by current models. In a stochastic job structure, the tasks required to complete the job change during the team's execution of the job. This research proposes three new types of dependencies between tasks, required to model a job as a stochastic structure: conditional sequential, single-conditional sequential, and merge dependencies.
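The three dependency types are only named in the abstract; as one hypothetical reading (all class names and probabilities invented for illustration), a conditional sequential dependency in a discrete-event task model might look like this:

```python
import random

class Task:
    """A job task whose successors are enabled probabilistically."""

    def __init__(self, name):
        self.name = name
        self.successors = []  # list of (task, probability) pairs

    def add_conditional_successor(self, task, prob):
        """Conditional sequential dependency: 'task' follows with probability prob."""
        self.successors.append((task, prob))

    def complete(self):
        """On completion, sample which successor tasks actually become active."""
        return [t for t, p in self.successors if random.random() < p]

# The job structure is stochastic: the tasks required change as the team
# executes the job, e.g. a tack may or may not trigger a sail trim.
tack = Task("tack")
trim = Task("trim_sails")
tack.add_conditional_successor(trim, 0.7)
print([t.name for t in tack.complete()])
```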
Abstract:
The lateral load distribution factor is a key factor in designing and analyzing curved steel I-girder bridges. In this dissertation, the effects of various parameters on moment and shear distribution in curved steel I-girder bridges were studied using the Finite Element Method (FEM). The parameters considered in the study were: radius of curvature, girder spacing, overhang, span length, number of girders, ratio of girder stiffness to overall bridge stiffness, slab thickness, girder longitudinal stiffness, cross frame spacing, and girder torsional inertia. The variations of these parameters were based on a statistical analysis of a real bridge database, created by extracting data from existing or newly designed curved steel I-girder bridge plans collected from across the nation. A hypothetical bridge superstructure model built from the mean values of the data was created and used for the parameter study. The study showed that cross frame spacing and girder torsional inertia had negligible effects; the other parameters were identified as key parameters. Regression analysis was conducted on the FEM results, and simplified formulas for predicting positive moment, negative moment, and shear distribution factors were developed. Thirty-three real bridges were analyzed using FEM to verify the formulas. The ratio of the distribution factor obtained from the formula to the one obtained from the FEM analysis, referred to as the g-ratio, was examined. The results showed that the standard deviation of the g-ratios was within 0.04 to 0.06 and that the mean value of the g-ratios was greater than unity by one standard deviation. This indicates that the formulas are conservative in most cases but not overly conservative. The final formulas are similar in format to the current American Association of State Highway and Transportation Officials (AASHTO) Load and Resistance Factor Design (LRFD) specifications. The developed formulas were compared with other simplified methods, and the comparison showed that the proposed formulas were the most accurate among all methods. The formulas developed in this study will assist bridge engineers and researchers in predicting the actual live load distribution in horizontally curved steel I-girder bridges.
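As a small numerical illustration of the verification statistic described above (the g-ratio), with made-up values rather than the dissertation's bridge data:

```python
import numpy as np

# g = DF(formula) / DF(FEM) for each verification bridge (invented values).
df_formula = np.array([0.62, 0.71, 0.58, 0.66, 0.73])
df_fem     = np.array([0.59, 0.66, 0.57, 0.61, 0.68])

g = df_formula / df_fem
mean, std = g.mean(), g.std(ddof=1)

# The dissertation's finding: std of the g-ratios within 0.04-0.06 and a
# mean above unity by about one std, i.e. conservative but not overly so.
print(f"mean = {mean:.3f}, std = {std:.3f}")
```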
Abstract:
Implicit in the current design practice for minimum uplift capacity is the assumption that a connection's capacity is proportional to the number of fasteners per connection joint. This assumption may overestimate the capacity of joints by a factor of two or more and may be the cause of connection failures in extreme wind events. The current research modifies this practice by proposing a realistic relationship between the number of fasteners and the capacity of the joint. The research also aims at further development of a non-intrusive continuous load path (CLP) connection system using Glass Fiber Reinforced Polymer (GFRP) and epoxy. Suitable designs were developed for stud-to-top-plate and gable end connections, and tests were performed to evaluate ultimate load, creep, and fatigue behavior. The objective was to determine the performance of the connections under simulated sustained hurricane conditions. The performance of the new connections was satisfactory.
Abstract:
In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields.
In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components.
The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data.
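As a minimal sketch (not the dissertation's code) of this characterization, the phase differences between adjacent Fourier components can be extracted from a wind speed record as follows; a near-uniform distribution indicates stationarity, while concentration around a preferred value signals temporal coherence:

```python
import numpy as np

def phase_differences(u):
    """Wrapped phase differences between adjacent Fourier components of u."""
    phases = np.angle(np.fft.rfft(u - u.mean()))
    dphi = np.diff(phases)
    return (dphi + np.pi) % (2.0 * np.pi) - np.pi  # wrap to [-pi, pi)

# Synthetic stationary record; measured wind data would be used in practice.
rng = np.random.default_rng(0)
u = 8.0 + rng.standard_normal(4096)
dphi = phase_differences(u)
print(dphi.mean(), dphi.std())
```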
This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence.
The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions.
The prevalence of temporal coherence and its relationship to other standard wind parameters were modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations.
EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site.
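A hedged sketch of EJD-style sampling (variable names and distributions invented): fit marginal distributions to two wind parameters, estimate their rank correlation, and couple them through a Gaussian copula to draw site-reflecting parameter pairs:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Stand-ins for measured site data: mean wind speed and turbulence intensity.
U = rng.gamma(4.0, 2.0, 1000)
TI = 0.10 + 0.02 * rng.standard_normal(1000) + 0.005 * U

# Correlation of the normal scores (Gaussian copula parameter).
zU = stats.norm.ppf(stats.rankdata(U) / (len(U) + 1))
zT = stats.norm.ppf(stats.rankdata(TI) / (len(TI) + 1))
rho = np.corrcoef(zU, zT)[0, 1]

# Draw correlated normals, then map through the fitted marginals.
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=5)
u_params = stats.gamma.fit(U, floc=0)
ti_params = stats.norm.fit(TI)
samples = np.column_stack([
    stats.gamma.ppf(stats.norm.cdf(z[:, 0]), *u_params),
    stats.norm.ppf(stats.norm.cdf(z[:, 1]), *ti_params),
])
print(samples)  # (wind speed, turbulence intensity) pairs
```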
Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence.
The training data for the response surfaces was generated from exhaustive FAST simulations that were run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory.
This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the recommended procedure in the IEC 61400-1 wind turbine design standard.
The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence.
Abstract:
Energy efficiency improvement has been a key objective of China's long-term energy policy. In this paper, we derive single-factor technical energy efficiency (abbreviated as energy efficiency) in China from multi-factor efficiency estimated by means of a translog production function and a stochastic frontier model, on the basis of panel data on 29 Chinese provinces over the period 2003–2011. We find that average energy efficiency increased over the research period and that the provinces with the highest energy efficiency lie on the east coast and those with the lowest in the west, with an intermediate corridor in between. In the analysis of the determinants of energy efficiency by means of a spatial Durbin error model, factors both in the own province and in first-order neighboring provinces are considered. Per capita income in the own province has a positive effect. Furthermore, foreign direct investment and population density in the own province and in neighboring provinces have positive effects, whereas the share of state-owned enterprises in Gross Provincial Product in the own province and in neighboring provinces has negative effects. From the analysis it follows that inflow of foreign direct investment and reform of state-owned enterprises are important policy levers.
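For concreteness, the translog stochastic frontier mentioned here has the generic textbook form (symbols assumed for illustration, not taken from the paper):

```latex
\ln Y_{it} = \beta_0
  + \sum_{j} \beta_j \ln X_{jit}
  + \tfrac{1}{2} \sum_{j} \sum_{k} \beta_{jk} \ln X_{jit} \ln X_{kit}
  + v_{it} - u_{it}
```

where Y_{it} is the output of province i in year t, the X_{jit} are inputs such as capital, labor, and energy, v_{it} is statistical noise, and u_{it} >= 0 is the inefficiency term; single-factor energy efficiency is then derived from the estimated multi-factor frontier.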
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message passing multiprocessor machine [5], the combination of these characteristics leads to system performance that deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved by periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, a semi-automatic tool for the parallelisation of mesh based FORTRAN codes.
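One common shape for such a decision policy (a hedged sketch, not the paper's algorithm): accumulate the processor time lost to imbalance at each synchronisation since the last remap, and trigger a remap once that loss exceeds the measured cost of remapping:

```python
def should_remap(step_times, remap_cost):
    """step_times: per-processor wall-clock times for each step since the
    last remap; the loss per step is the gap between the slowest processor
    and the average (time the others spend waiting at the synchronisation)."""
    lost = sum(max(ts) - sum(ts) / len(ts) for ts in step_times)
    return lost > remap_cost

# Invented timings for three processors over three steps.
history = [[1.0, 1.4, 1.1], [1.0, 1.6, 1.2], [1.1, 1.9, 1.0]]
print(should_remap(history, remap_cost=0.8))
```

A policy of this shape weighs remapping cost against benefit automatically and needs no knowledge of the particular computation, which is the generality the paper asks of a DLB strategy.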
Abstract:
This Ph.D. thesis contains four essays in mathematical finance, focusing on pricing Asian options (Chapter 4), pricing futures and futures options (Chapters 5 and 6), and time-dependent volatility in futures options (Chapter 7). In Chapter 4, the applicability of the comonotonicity approach of Albrecher et al. (2005) is investigated in the context of various benchmark models for equities and commodities. Instead of the classical Levy models of Albrecher et al. (2005), the focus is on the Heston stochastic volatility model, the constant elasticity of variance (CEV) model, and the Schwartz (1997) two-factor model. It is shown that the method delivers rather tight upper bounds for the prices of Asian options in these models and, as a by-product, delivers super-hedging strategies that can be easily implemented. In Chapter 5, two types of three-factor models that allow volatility to be stochastic are studied for valuing commodity futures contracts. Both models have closed-form solutions for the futures price. However, it is shown that Model 2 is better than Model 1 theoretically and also performs very well empirically. Moreover, Model 2 can easily be implemented in practice. In comparison to the Schwartz (1997) two-factor model, Model 2 has unique advantages; hence, it is also a good choice for pricing commodity futures contracts. Furthermore, if the two models are used at the same time, a more accurate price for commodity futures contracts can be obtained in most situations. In Chapter 6, the applicability of the asymptotic approach developed in Fouque et al. (2000b) is investigated for pricing commodity futures options in a Schwartz (1997) multi-factor model featuring both stochastic convenience yield and stochastic volatility. It is shown that the zero-order term in the expansion coincides with the Schwartz (1997) two-factor term with averaged volatility, and an explicit expression for the first-order correction term is provided. With empirical data from the natural gas futures market, it is also demonstrated that a significantly better calibration can be achieved by using the correction term as compared to the standard Schwartz (1997) two-factor expression, at virtually no extra effort. In Chapter 7, a new pricing formula is derived for futures options in the Schwartz (1997) two-factor model with time-dependent spot volatility. The pricing formula can also be used to back out the time-dependent spot volatility from futures option prices observed in the market. Furthermore, the limitations of the method used to find the time-dependent spot volatility are explained, and it is shown how to verify its accuracy.
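For context on the model that recurs throughout these chapters, the Schwartz (1997) two-factor model gives the futures price in a well-known closed form (standard in the literature; A(T) collects the deterministic terms):

```latex
F(S, \delta, T) = S \exp\!\left( -\,\delta\,\frac{1 - e^{-\kappa T}}{\kappa} + A(T) \right)
```

where S is the spot price, \delta the mean-reverting convenience yield with reversion speed \kappa, T the time to maturity, and A(T) a deterministic function of the remaining model parameters.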
Abstract:
T-cell based vaccines against human immunodeficiency virus (HIV) generate specific responses that may limit both transmission and disease progression by controlling viral load. Broad, polyfunctional, and cytotoxic CD4+ T-cell responses have been associated with control of simian immunodeficiency virus/HIV-1 replication, supporting the inclusion of CD4+ T-cell epitopes in vaccine formulations. Plasmid-encoded granulocyte-macrophage colony-stimulating factor (pGM-CSF) co-administration has been shown to induce potent CD4+ T-cell responses and to promote accelerated priming and increased migration of antigen-specific CD4+ T-cells. However, no study has shown whether co-immunisation with pGM-CSF enhances the number of vaccine-induced polyfunctional CD4+ T-cells. Our group has previously developed a DNA vaccine encoding conserved, multiple human leukocyte antigen (HLA)-DR binding HIV-1 subtype B peptides, which elicited broad, polyfunctional and long-lived CD4+ T-cell responses. Here, we show that pGM-CSF co-immunisation improved both the magnitude and the quality of vaccine-induced T-cell responses, particularly by increasing proliferating CD4+ T-cells that simultaneously produce interferon-γ, tumour necrosis factor-α, and interleukin-2. Thus, we believe that the use of pGM-CSF may be helpful for vaccine strategies focused on the activation of anti-HIV CD4+ T-cell immunity.
Abstract:
Two trends are emerging from modern electric power systems: the growth of renewable (e.g., solar and wind) generation, and the integration of information technologies and advanced power electronics. The former introduces large, rapid, and random fluctuations in power supply, demand, frequency, and voltage, which become a major challenge for real-time operation of power systems. The latter creates a tremendous number of controllable intelligent endpoints such as smart buildings and appliances, electric vehicles, energy storage devices, and power electronic devices that can sense, compute, communicate, and actuate. Most of these endpoints are distributed on the load side of power systems, in contrast to traditional control resources such as centralized bulk generators. This thesis focuses on controlling power systems in real time, using these load-side resources. Specifically, it studies two problems.
(1) Distributed load-side frequency control: We establish a mathematical framework to design distributed frequency control algorithms for flexible electric loads. In this framework, we formulate a category of optimization problems, called optimal load control (OLC), to incorporate the goals of frequency control, such as balancing power supply and demand, restoring frequency to its nominal value, restoring inter-area power flows, etc., in a way that minimizes total disutility for the loads to participate in frequency control by deviating from their nominal power usage. By exploiting distributed algorithms to solve OLC and analyzing convergence of these algorithms, we design distributed load-side controllers and prove stability of closed-loop power systems governed by these controllers. This general framework is adapted and applied to different types of power systems described by different models, or to achieve different levels of control goals under different operation scenarios. We first consider a dynamically coherent power system which can be equivalently modeled with a single synchronous machine. We then extend our framework to a multi-machine power network, where we consider primary and secondary frequency controls, linear and nonlinear power flow models, and the interactions between generator dynamics and load control.
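Schematically, an OLC problem of the kind described takes the following form (notation assumed for illustration; the thesis's exact formulation may include further constraints):

```latex
\min_{\underline{d}_i \,\le\, d_i \,\le\, \overline{d}_i} \;\; \sum_{i} c_i(d_i)
\qquad \text{subject to} \qquad \sum_{i} d_i = \Delta P
```

where d_i is the power deviation of load i from its nominal usage, c_i its disutility function, and \Delta P the supply-demand mismatch. A key feature of this framework is that the frequency deviation can be interpreted as a dual variable of the balance constraint, so locally measured frequency lets the loads solve OLC in a distributed way.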
(2) Two-timescale voltage control: The voltage of a power distribution system must be maintained closely around its nominal value in real time, even in the presence of highly volatile power supply or demand. For this purpose, we jointly control two types of reactive power sources: a capacitor operating at a slow timescale, and a power electronic device, such as a smart inverter or a D-STATCOM, operating at a fast timescale. Their control actions are solved from optimal power flow problems at two timescales. Specifically, the slow-timescale problem is a chance-constrained optimization, which minimizes power loss and regulates the voltage at the current time instant while limiting the probability of future voltage violations due to stochastic changes in power supply or demand. This control framework forms the basis of an optimal sizing problem, which determines the installation capacities of the control devices by minimizing the sum of power loss and capital cost. We develop computationally efficient heuristics to solve the optimal sizing problem and implement real-time control. Numerical experiments show that the proposed sizing and control schemes significantly improve the reliability of voltage control with a moderate increase in cost.
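The slow-timescale problem described here has, schematically, a chance-constrained form like the following (symbols assumed for illustration):

```latex
\min_{q_c} \;\; L(q_c)
\qquad \text{subject to} \qquad
\Pr\!\left( \underline{V} \le V(q_c, \omega) \le \overline{V} \right) \ge 1 - \epsilon
```

where q_c is the capacitor setting, L the power loss, V(q_c, \omega) the voltage under random future supply and demand \omega, and \epsilon the tolerated probability of voltage violation; the fast-timescale inverter or D-STATCOM then corrects the residual deviations in real time.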
Abstract:
This work presents an optical non-contact technique to evaluate the fatigue damage state of CFRP structures by measuring the irregularity factor of the surface. This factor includes information about surface topology and can be measured easily in the field by techniques such as optical profilometers. The surface irregularity factor has been correlated with stiffness degradation, which is a well-accepted parameter for evaluating the fatigue damage state of composite materials. Constant amplitude fatigue loads (CAL) and realistic variable amplitude loads (VAL), representative of real in-flight conditions, were applied to dog-bone-shaped tensile specimens. It has been shown that the measurement of the surface irregularity parameters can be applied to evaluate the damage state of a structure, and that it is independent of the type of fatigue load that caused the damage. As a result, this measurement technique is applicable to a wide range of inspections of composite material structures, from pressurized tanks under constant amplitude loads, to variable amplitude loaded aeronautical structures such as wings and empennages, to automotive and other industrial applications.
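As an illustration of the quantity involved: in fatigue and signal analysis the irregularity factor of a record is commonly defined as the ratio of mean up-crossings to peaks (the paper's exact surface-based definition may differ), which can be computed from a measured profile as follows:

```python
import numpy as np

def irregularity_factor(z):
    """Ratio of mean up-crossings to peaks for a sampled profile z."""
    z = z - z.mean()
    upcrossings = np.sum((z[:-1] < 0) & (z[1:] >= 0))
    peaks = np.sum((z[1:-1] > z[:-2]) & (z[1:-1] > z[2:]))
    return upcrossings / peaks if peaks else np.nan

# Synthetic surface trace standing in for a profilometer measurement.
rng = np.random.default_rng(1)
profile = np.cumsum(rng.standard_normal(2000)) * 1e-3
print(irregularity_factor(profile))
```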