855 results for uncertain volatility


Relevance:

10.00%

Publisher:

Abstract:

Transmission investments are currently needed to meet an increasing electricity demand, to address security of supply concerns, and to reach carbon-emissions targets. A key issue when assessing the benefits from an expanded grid concerns the valuation of the uncertain cash flows that result from the expansion. We propose a valuation model that accommodates both physical and economic uncertainties following the Real Options approach. It combines optimization techniques with Monte Carlo simulation. We illustrate the use of our model in a simplified, two-node grid and assess the decision of whether or not to invest in a particular upgrade. The generation mix includes coal- and natural gas-fired stations that operate under carbon constraints. The underlying parameters are estimated from observed market data.
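As a minimal illustration of the valuation idea, assuming hypothetical parameters and geometric Brownian motion for the upgrade's annual cash flow (the paper's actual model combines optimization with market-calibrated dynamics), a Monte Carlo sketch in Python of the value of deferring the investment decision might look like this:

import numpy as np

# Minimal Monte Carlo sketch of a real-options valuation for a grid upgrade.
# This is an illustrative toy, not the paper's model: it assumes the yearly
# congestion-relief cash flow follows geometric Brownian motion (GBM), and
# values a simple "invest now" NPV alongside a one-shot option to defer the
# decision by one year. All parameters are hypothetical.

rng = np.random.default_rng(0)

I = 100.0      # investment cost
C0 = 12.0      # current annual cash flow from the upgrade
mu, sigma = 0.02, 0.25   # GBM drift and volatility of the cash flow
r = 0.05       # discount rate
T = 20         # project lifetime in years
n_paths = 100_000

# Simulate the cash-flow level one year ahead.
C1 = C0 * np.exp((mu - 0.5 * sigma**2) + sigma * rng.standard_normal(n_paths))

def pv_annuity(c, r, T):
    """Present value of a constant annual cash flow c over T years."""
    return c * (1 - (1 + r) ** -T) / r

npv_now = pv_annuity(C0, r, T) - I
# Option to defer: next year, invest only if the project is then in the money.
npv_defer = np.maximum(pv_annuity(C1, r, T) - I, 0.0)
option_value = np.exp(-r) * npv_defer.mean()

print(f"NPV of investing now : {npv_now:8.2f}")
print(f"Value with deferral  : {option_value:8.2f}")
print(f"Option premium       : {option_value - npv_now:8.2f}")

The gap between the deferral value and the invest-now NPV is the option premium that a static discounted-cash-flow analysis would miss.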

Relevance:

10.00%

Publisher:

Abstract:

Elkhorn Slough was first exposed to direct tidal forcing from the waters of Monterey Bay with the construction of Moss Landing Harbor in 1946. Elkhorn Slough is located midway between Santa Cruz and Monterey, close to the head of Monterey Submarine Canyon, and follows a 10 km circuitous path inland from its entrance at Moss Landing Harbor. Today, Elkhorn Slough is a habitat and sanctuary for a wide variety of marine mammals, fish, and seabirds. The Slough also serves as a sink and pathway for various nutrients and pollutants. These attributes are directly or indirectly affected by its circulation and physical properties. Currents, tides, and physical properties of Elkhorn Slough have been observed on an irregular basis since 1970. Based on these observations, the physical characteristics of Elkhorn Slough are examined and summarized. Elkhorn Slough is an ebb-dominated estuary and, as a result, the rise and fall of the tides is asymmetric. Lower low water always follows higher high water, and this tidal asymmetry produces ebb currents that are stronger than flood currents. The presence of extensive mud flats and Salicornia marsh contributes to tidal distortion. Tidal distortion also produces several shallow-water constituents, including the M3, M4, and M6 overtides and the 2MK3 and MK3 compound tides. Tidal elevations and currents are approximately in quadrature; thus, the tides in Elkhorn Slough have some of the characteristics of a standing wave system. The temperature and salinity of lower Elkhorn Slough waters reflect, to a large extent, the influence of Monterey Bay waters, whereas the temperature and salinity of the waters of the upper Slough (>5 km from the mouth) are more sensitive to local processes. During the summer, temperature and salinity are higher in the upper Slough due to local heating and evaporation. Maximum tidal currents in Elkhorn Slough have increased from approximately 75 to 120 cm/s over the past 30 years. This increase in current speed is primarily due to the change in tidal prism, which increased from approximately 2.5 to 6.2 x 10^6 m^3 between 1956 and 1993. The increase in tidal prism is the result of both rapid man-made changes to the Slough and the continuing process of tidal erosion. Because of the increase in the tidal prism, the currents in Elkhorn Slough exhibit positive feedback, a process with uncertain consequences. [PDF contains 55 pages]
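The shallow-water constituents mentioned above are typically extracted by harmonic analysis of an elevation record. A minimal Python sketch, using synthetic data in place of the Slough observations, fits the M2 constituent and its M4 overtide by linear least squares; the M4/M2 amplitude ratio is a standard index of the tidal distortion described in the report:

import numpy as np

# Illustrative harmonic analysis of a tidal record: least-squares fit of the
# M2 constituent and its M4 overtide, whose growth signals shallow-water
# distortion. Synthetic data stand in for the Slough record.

M2_PERIOD = 12.4206  # hours (principal lunar semidiurnal)
M4_PERIOD = M2_PERIOD / 2.0

t = np.arange(0.0, 24.0 * 30, 0.5)           # 30 days, half-hour sampling
w2 = 2 * np.pi / M2_PERIOD
w4 = 2 * np.pi / M4_PERIOD

rng = np.random.default_rng(1)
eta = (0.80 * np.cos(w2 * t - 0.3)           # M2, 0.8 m amplitude
       + 0.15 * np.cos(w4 * t - 1.2)         # M4 overtide from distortion
       + 0.05 * rng.standard_normal(t.size)) # measurement noise

# Design matrix of cosine/sine pairs; linear least squares gives amplitudes.
A = np.column_stack([np.cos(w2*t), np.sin(w2*t), np.cos(w4*t), np.sin(w4*t)])
coef, *_ = np.linalg.lstsq(A, eta, rcond=None)

amp_m2 = np.hypot(coef[0], coef[1])
amp_m4 = np.hypot(coef[2], coef[3])
print(f"M2 amplitude: {amp_m2:.3f} m, M4 amplitude: {amp_m4:.3f} m")
print(f"M4/M2 ratio (distortion index): {amp_m4/amp_m2:.3f}")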

Relevance:

10.00%

Publisher:

Abstract:

Background & Aims: Pro-inflammatory cytokines are important for liver regeneration after partial hepatectomy (PH). Expression of Fibroblast growth factor-inducible 14 (Fn14), the receptor for TNF-like weak inducer of apoptosis (TWEAK), is induced rapidly after PH and remains elevated throughout the period of peak hepatocyte replication. The role of Fn14 in post-PH liver regeneration is uncertain because Fn14 is expressed by liver progenitors and TWEAK-Fn14 interactions stimulate progenitor growth, but replication of mature hepatocytes is thought to drive liver regeneration after PH. Methods: To clarify the role of TWEAK-Fn14 after PH, we compared post-PH regenerative responses in wild type (WT) mice, Fn14 knockout (KO) mice, TWEAK KO mice, and WT mice treated with anti-TWEAK antibodies. Results: In WT mice, rare Fn14(+) cells localized with other progenitor markers in peri-portal areas before PH. PH rapidly increased proliferation of Fn14(+) cells; hepatocytic cells that expressed Fn14 and other progenitor markers, such as Lgr5, progressively accumulated from 12-8 h post-PH and then declined to baseline by 96 h. When TWEAK/Fn14 signaling was disrupted, progenitor accumulation, induction of pro-regenerative cytokines, hepatocyte and cholangiocyte proliferation, and overall survival were inhibited, while post-PH liver damage and bilirubin levels were increased. TWEAK stimulated proliferation and increased Lgr5 expression in cultured liver progenitors, but had no effect on either parameter in cultured primary hepatocytes. Conclusions: TWEAK-Fn14 signaling is necessary for the healthy adult liver to regenerate normally after acute partial hepatectomy.

Relevance:

10.00%

Publisher:

Abstract:

This paper deals with the economics of gasification facilities in general and IGCC power plants in particular. Regarding the prospects of these systems, passing the technological test is one thing; passing the economic test can be quite another. In this respect, traditional valuations assume constant input and/or output prices. Since this is hardly realistic, we allow for uncertainty in prices. We naturally look at the markets where many of the products involved are regularly traded. Futures markets on commodities are particularly useful for valuing uncertain future cash flows. Thus, revenues and variable costs can be assessed by means of sound financial concepts and actual market data. On the other hand, these complex systems provide a number of flexibility options (e.g., to choose among several inputs, outputs, modes of operation, etc.). Typically, flexibility contributes significantly to the overall value of real assets. Indeed, maximization of the asset value requires the optimal exercise of any flexibility option available. Yet the economic value of flexibility is elusive, the more so under (price) uncertainty, and the right choice of input fuels and/or output products is a main concern for facility managers. As a particular application, we deal with the valuation of input flexibility, following the Real Options approach. In addition to economic variables, we also address technical and environmental issues such as energy efficiency, utility performance characteristics, and emissions (note that carbon constraints are looming). Lastly, a brief introduction to some stochastic processes suitable for valuation purposes is provided.
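To make the input-flexibility idea concrete, a hedged Monte Carlo sketch in Python compares a plant committed to a single fuel with one that can pick the better of two fuel margins each month. The correlated lognormal margin processes and all parameters are hypothetical stand-ins for the futures-calibrated dynamics used in the paper:

import numpy as np

# Illustrative sketch of valuing input flexibility: a plant that can burn
# either of two fuels each month earns max(margin_A, margin_B), whereas an
# inflexible plant is committed to one fuel. The margin processes below are
# hypothetical correlated GBMs, not calibrated futures curves.

rng = np.random.default_rng(2)
n_paths, n_months = 50_000, 12
r = 0.05 / 12                      # monthly discount rate

sigma_a, sigma_b, rho = 0.30, 0.45, 0.6
m0_a, m0_b = 5.0, 5.0              # initial monthly margins per unit output

z = rng.standard_normal((2, n_paths, n_months))
eps_a = z[0]
eps_b = rho * z[0] + np.sqrt(1 - rho**2) * z[1]
dt = 1.0 / 12

margin_a = m0_a * np.exp(np.cumsum(-0.5*sigma_a**2*dt + sigma_a*np.sqrt(dt)*eps_a, axis=1))
margin_b = m0_b * np.exp(np.cumsum(-0.5*sigma_b**2*dt + sigma_b*np.sqrt(dt)*eps_b, axis=1))

disc = np.exp(-r * np.arange(1, n_months + 1))
v_flex = (np.maximum(margin_a, margin_b) * disc).sum(axis=1).mean()
v_a    = (margin_a * disc).sum(axis=1).mean()
v_b    = (margin_b * disc).sum(axis=1).mean()

print(f"Committed to fuel A: {v_a:7.2f}")
print(f"Committed to fuel B: {v_b:7.2f}")
print(f"Flexible plant     : {v_flex:7.2f}  (flexibility premium: {v_flex - max(v_a, v_b):.2f})")

Because max(A, B) dominates each margin path by path, the flexible value is never below the committed one; the premium grows with volatility and shrinks as the two margins become more correlated.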

Relevance:

10.00%

Publisher:

Abstract:

Background: Consensus development techniques were used in the late 1980s to create explicit criteria for the appropriateness of cataract extraction. We developed a new appropriateness-of-indications tool for cataract extraction following the RAND method and tested the validity of our panel results. Methods: Criteria were developed using a modified Delphi panel judgment process. A panel of 12 ophthalmologists was assembled. Ratings were analyzed regarding the level of agreement among panelists. We studied the influence of all variables on the final panel score using linear and logistic regression models. The explicit criteria developed were summarized by classification and regression tree analysis. Results: Of the 765 indications evaluated by the main panel in the second round, 32.9% were found appropriate, 30.1% uncertain, and 37% inappropriate. Agreement was found for 53% of the indications and disagreement for 0.9%. Seven variables were used to construct the indications, which were divided into three groups: simple cataract, cataract with diabetic retinopathy, and cataract with other ocular pathologies. The preoperative visual acuity in the cataractous eye and visual function were the variables that best explained the panel scoring. The panel results were synthesized and presented in three decision trees. The misclassification error of the decision trees, as compared with the panel's original criteria, was 5.3%. Conclusion: The parameters tested showed acceptable validity for an evaluation tool. These results support the use of this indication algorithm as a screening tool for assessing the appropriateness of cataract extraction in field studies and for the development of practice guidelines.
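As an illustration of the tree-based summary, the following Python sketch fits a classification tree to synthetic indication data; the predictors (preoperative visual acuity and a visual-function score) echo the variables the panel found most influential, but the data, thresholds, and simulated panel logic are entirely hypothetical:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Illustrative sketch of summarizing panel ratings with a classification
# tree, as the study does with CART. The data here are synthetic: two
# predictors drive a simulated appropriate / uncertain / inappropriate
# rating. All thresholds are hypothetical.

rng = np.random.default_rng(3)
n = 765
acuity = rng.uniform(0.05, 1.0, n)        # decimal visual acuity, worse = lower
function = rng.uniform(0.0, 100.0, n)     # visual-function score, worse = lower

# Simulated panel logic: poor acuity plus poor function -> appropriate, etc.
score = (1 - acuity) * 60 + (100 - function) * 0.4 + rng.normal(0, 5, n)
rating = np.where(score > 60, "appropriate",
         np.where(score > 40, "uncertain", "inappropriate"))

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(
    np.column_stack([acuity, function]), rating)
pred = tree.predict(np.column_stack([acuity, function]))
print(f"Misclassification vs. panel ratings: {(pred != rating).mean():.1%}")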

Relevance:

10.00%

Publisher:

Abstract:

This paper argues that the widespread belief that ambiguity is beneficial in design communication stems from conceptual confusion. Communicating imprecise, uncertain and provisional ideas is a vital part of design teamwork, but what is uncertain and provisional needs to be expressed as clearly as possible. This paper argues that viewing design communication as conveying permitted spaces for further designing is a useful rationalisation for understanding what designers need from their notations and computer tools to achieve clear communication of uncertain ideas. The paper presents a typology of ways that designs can be uncertain. It discusses how sketches and other representations of designs can be both intrinsically ambiguous and ambiguous or misleading by failing to convey information about uncertainty and provisionality, with reference to knitwear design, where communication using inadequate representations causes severe problems. It concludes that systematic use of meta-notations for conveying provisionality and uncertainty can reduce these problems.

Relevance:

10.00%

Publisher:

Abstract:

Five fishing villages in the Lake Chad Basin region of Borno State (Nigeria) were assessed for the roles of children in fishing activities in the area. The villages surveyed were Bundaram, Yobe, Daba masara, Dumba and Doro. The results show that the children were largely between 12 and 18 years of age. Generally, the younger children (less than 12 years) participate in activities that require no technical skill and little physical strength, while the older children (12 years and above) engage in skilled fabrication of gear and in fishing activities. Some activities in the surveyed villages were gender specific: fish processing (smoking) was carried out almost exclusively by female children, with a few male children performing the preliminary cleaning of fish before any processing method is applied. 80% of the children in the five fishing villages claimed a proper understanding of the techniques and procedures involved in most fishing activities. About 65% of the children sampled showed willingness to become full-time fishermen, while 22% were uncertain and claimed that they do not know what the future holds for them. 15% resolved to migrate to town so that they could live a city life.

Relevance:

10.00%

Publisher:

Abstract:

This paper explores the benefits of including age structure in the harvest control rule (HCR) when decision makers regard their (age-structured) models as approximations. We find that introducing age structure into the HCR reduces both the volatility of the spawning biomass and that of the yield. Although the benefits are smaller at fairly imprecise assessment levels, there are still major advantages at the actual assessment precision of the case study. Moreover, we find that when age structure is included in the HCR, the relative ranking of different policies in terms of variance in biomass and yield does not differ. These results are shown both theoretically and numerically by applying the model to the Southern Hake fishery.
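A toy age-structured simulation gives a feel for the kind of comparison involved, though not the paper's results: the sketch below contrasts an HCR that responds only to total spawning biomass with a crude age-aware variant that exempts recruits, and reports the coefficient of variation of biomass and yield. All dynamics and parameters are hypothetical and far simpler than the Southern Hake assessment model.

import numpy as np

# Minimal sketch of comparing harvest control rules in a toy age-structured
# model. The age-aware rule here simply excludes age 0 from exploitation, a
# crude stand-in for weighting age classes in the HCR.

rng = np.random.default_rng(4)
A, T = 5, 200                       # age classes, years
M = 0.2                             # natural mortality
w = np.array([0.1, 0.5, 1.0, 1.5, 2.0])    # weight at age
mat = np.array([0.0, 0.2, 0.8, 1.0, 1.0])  # maturity at age

def simulate(age_weighted):
    N = np.full(A, 1000.0)
    ssb_series, yield_series = [], []
    for _ in range(T):
        ssb = (N * w * mat).sum()
        F = min(0.4, 0.4 * ssb / 2000.0)   # HCR: F ramps up with SSB
        sel = np.array([0.0, 1, 1, 1, 1.0]) if age_weighted else np.ones(A)
        Z = M + F * sel
        catch = (F * sel / Z * (1 - np.exp(-Z)) * N * w).sum()  # Baranov
        survivors = N * np.exp(-Z)
        rec = 800.0 * ssb / (500.0 + ssb) * np.exp(rng.normal(0, 0.4))
        N = np.concatenate([[rec], survivors[:-1]])
        N[-1] += survivors[-1]             # plus group
        ssb_series.append(ssb); yield_series.append(catch)
    s, y = np.array(ssb_series[50:]), np.array(yield_series[50:])
    return s.std() / s.mean(), y.std() / y.mean()

for flag, name in [(False, "biomass-only HCR"), (True, "age-structured HCR")]:
    cv_ssb, cv_y = simulate(flag)
    print(f"{name:20s} CV(SSB)={cv_ssb:.2f}  CV(yield)={cv_y:.2f}")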

Relevance:

10.00%

Publisher:

Abstract:

Abstract to Part I

The inverse problem of seismic wave attenuation is solved by an iterative back-projection method. The seismic wave quality factor, Q, can be estimated approximately by inverting the S-to-P amplitude ratios. Effects of various uncertainties in the method are tested, and attenuation tomography is shown to be useful in solving for the spatial variations in attenuation structure and in estimating the effective seismic quality factor of attenuating anomalies.
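The back-projection idea can be sketched in a few lines: each ray's attenuation residual is distributed back along the cells it crosses, and the cell estimates are iteratively refined. The two-cell, three-ray toy below is illustrative only, not the California or Hawaii inversions:

import numpy as np

# Illustrative iterative back-projection for attenuation tomography: solve
# t*_i = sum_j G_ij q_j for cell attenuation q = 1/Q, distributing each
# ray's residual back along the cells it crosses (a SIRT-style update).
# The geometry is a toy, not the data sets of the thesis.

G = np.array([[4.0, 1.0],     # ray path lengths in each cell
              [1.0, 4.0],
              [2.5, 2.5]])
q_true = np.array([1/30, 1/200])   # one strong attenuator (Q ~ 30)
t_star = G @ q_true                # synthetic t* observations

q = np.zeros(2)                    # start from a non-attenuating model
for it in range(200):
    resid = t_star - G @ q
    # Back-project: apportion each residual along its ray, normalized by
    # the squared path lengths per cell.
    update = (G * resid[:, None]).sum(axis=0) / (G**2).sum(axis=0)
    q += 0.5 * update              # damped step for stability

print("Recovered Q per cell:", np.round(1.0 / q, 1))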

Back-projection attenuation tomography is applied to two cases in southern California: Imperial Valley and the Coso-Indian Wells region. In the Coso-Indian Wells region, a highly attenuating body (S-wave quality factor Q_β ≈ 30) coincides with a slow P-wave anomaly mapped by Walck and Clayton (1987). This coincidence suggests the presence of a magmatic or hydrothermal body 3 to 5 km deep in the Indian Wells region. In the Imperial Valley, slow P-wave travel-time anomalies and highly attenuating S-wave anomalies were found in the Brawley seismic zone at a depth of 8 to 12 km. The effective S-wave quality factor is very low (Q_β ≈ 20) and the P-wave velocity is 10% slower than in the surrounding areas. These results suggest either magmatic or hydrothermal intrusions, or fractures at depth, possibly related to active shear in the Brawley seismic zone.

No-block inversion is a generalized tomographic method utilizing the continuous form of an inverse problem. The inverse problem of attenuation can be posed in a continuous form, and the no-block inversion technique is applied to the same data set used in the back-projection tomography. A relatively small data set with little redundancy enables us to apply both techniques to a similar degree of resolution. The results obtained by the two methods are very similar. By applying the two methods to the same data set, formal errors and resolution can be directly computed for the final model, and the objectivity of the final result can be enhanced.

Both methods of attenuation tomography are applied to a data set of local earthquakes in Kilauea, Hawaii, to solve for the attenuation structure under Kilauea and the East Rift Zone. The shallow Kilauea magma chamber, East Rift Zone and the Mauna Loa magma chamber are delineated as attenuating anomalies. Detailed inversion reveals shallow secondary magma reservoirs at Mauna Ulu and Puu Oo, the present sites of volcanic eruptions. The Hilina Fault zone is highly attenuating, dominating the attenuating anomalies at shallow depths. The magma conduit system along the summit and the East Rift Zone of Kilauea shows up as a continuous supply channel extending down to a depth of approximately 6 km. The Southwest Rift Zone, on the other hand, is not delineated by attenuating anomalies, except at a depth of 8-12 km, where an attenuating anomaly is imaged west of Puu Kou. The Mauna Loa chamber is seated at a deeper level (about 6-10 km) than the Kilauea magma chamber. Resolution in the Mauna Loa area is not as good as in the Kilauea area, and there is a trade-off between the depth extent of the magma chamber imaged under Mauna Loa and the error that is due to poor ray coverage. The Kilauea magma chamber, on the other hand, is well resolved, according to a resolution test done at the location of the magma chamber.

Abstract to Part II

Long-period seismograms recorded at Pasadena from earthquakes occurring along a profile extending to the Imperial Valley are studied in terms of source phenomena (e.g., source mechanisms and depths) versus path effects. Some of the events have known source parameters, determined by teleseismic or near-field studies, and are used as master events in a forward modeling exercise to derive the Green's functions (SH displacements at Pasadena due to a pure strike-slip or dip-slip mechanism) that describe the propagation effects along the profile. Both the timing and the waveforms of the records are matched by synthetics calculated from 2-dimensional velocity models. The best 2-dimensional section begins at Imperial Valley with a thin crust containing the basin structure and thickens towards Pasadena. The detailed nature of the transition zone at the base of the crust controls the early-arriving shorter periods (strong motions), while the edge of the basin controls the scattered longer-period surface waves. From the waveform characteristics alone, shallow events in the basin are easily distinguished from deep events, and the amount of strike-slip versus dip-slip motion is also easily determined. Events rupturing the sediments, such as the 1979 Imperial Valley earthquake, can be recognized easily by a late-arriving scattered Love wave that has been delayed by the very slow path across the shallow valley structure.
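The master-event forward-modeling step can be caricatured in code: a synthetic SH displacement is a mechanism-weighted combination of pure strike-slip and dip-slip Green's functions convolved with a source time function. The wavelets below are toy stand-ins for the profile's true Green's functions:

import numpy as np

# Sketch of the forward-modeling idea: synthetic displacement as a
# source-weighted mix of strike-slip and dip-slip path responses,
# convolved with a smooth source time function. All waveforms are toys.

dt = 0.1
t = np.arange(0, 60, dt)

def toy_green(t0, f0):
    """Hypothetical SH Green's function: delayed, decaying oscillation."""
    return np.where(t >= t0,
                    np.exp(-0.15*(t - t0)) * np.sin(2*np.pi*f0*(t - t0)),
                    0.0)

g_ss = toy_green(t0=20.0, f0=0.20)    # strike-slip path response
g_ds = toy_green(t0=22.0, f0=0.15)    # dip-slip path response

rake = np.deg2rad(30.0)               # source mechanism: mix of the two
stf = np.exp(-((np.arange(0, 4, dt) - 2.0)**2))   # source time function

u = np.convolve(np.cos(rake)*g_ss + np.sin(rake)*g_ds, stf)[:t.size] * dt
print("Peak synthetic displacement:", round(float(np.abs(u).max()), 3))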

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents composition measurements for atmospherically relevant inorganic and organic aerosol from laboratory and ambient measurements using the Aerodyne aerosol mass spectrometer. Studies include the oxidation of dodecane in the Caltech environmental chambers, and several aircraft- and ground-based field studies, which include the quantification of wildfire emissions off the coast of California, and Los Angeles urban emissions.

The oxidation of dodecane by OH under low-NO conditions and the formation of secondary organic aerosol (SOA) were explored using a gas-phase chemical model, gas-phase CIMS measurements, and high molecular weight ion traces from particle-phase HR-TOF-AMS mass spectra. The combination of these measurements supports the hypothesis that particle-phase chemistry leading to peroxyhemiacetal formation is important. Positive matrix factorization (PMF) was applied to the AMS mass spectra, revealing three factors representing a combination of gas-particle partitioning, chemical conversion in the aerosol, and wall deposition.
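For readers unfamiliar with PMF, the sketch below factorizes a synthetic AMS-like matrix (time x m/z) into non-negative factor time series and factor spectra. True PMF weights residuals by measurement uncertainty; scikit-learn's NMF, used here as a stand-in, does not:

import numpy as np
from sklearn.decomposition import NMF

# Illustrative factorization of an AMS-like data matrix into factor time
# series and factor mass spectra. The data are synthetic, and unweighted
# NMF is only an analogue of the uncertainty-weighted PMF used in the study.

rng = np.random.default_rng(5)
n_times, n_mz, k = 200, 60, 3

true_profiles = rng.dirichlet(np.ones(n_mz) * 0.3, size=k)   # factor spectra
true_series = rng.gamma(2.0, 1.0, size=(n_times, k))          # contributions
X = true_series @ true_profiles + rng.normal(0, 0.01, (n_times, n_mz))
X = np.clip(X, 0, None)                                       # keep non-negative

model = NMF(n_components=k, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)      # factor time series (n_times x k)
H = model.components_           # factor mass spectra (k x n_mz)
print("Reconstruction error:", round(model.reconstruction_err_, 3))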

Airborne measurements of biomass burning emissions from a chaparral fire on the central Californian coast were carried out in November 2009. Physical and chemical changes were reported for smoke ages 0-4 h old. CO2-normalized ammonium, nitrate, and sulfate increased, whereas the normalized OA decreased sharply in the first 1.5-2 h and then slowly increased for the remaining 2 h (a net decrease in normalized OA). Comparison to wildfire samples from the Yucatan revealed that factors such as relative humidity, incident UV radiation, age of smoke, and concentration of emissions are important for wildfire smoke evolution.

Ground-based aerosol composition is reported for Pasadena, CA during the summer of 2009. The OA component, which dominated the submicron aerosol mass, was deconvolved into hydrocarbon-like organic aerosol (HOA), semi-volatile oxidized organic aerosol (SVOOA), and low-volatility oxidized organic aerosol (LVOOA). The HOA/OA ratio was only 0.08-0.23, indicating that most Pasadena OA in the summer months is dominated by oxidized OA resulting from transported emissions that have undergone photochemistry and/or moisture-influenced processing, as opposed to only primary organic aerosol emissions. Airborne measurements and model predictions of aerosol composition are reported for the 2010 CalNex field campaign.

Relevance:

10.00%

Publisher:

Abstract:

The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution side, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability, but there are daunting technical challenges in managing them and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization that achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we present how to exploit the power network structure to design efficient and distributed markets and algorithms for energy management. We also show how to connect the algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
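A textbook dual-decomposition sketch conveys the flavor of such distributed energy management, though it is not the thesis's algorithm: each DER responds to a broadcast price with a locally optimal output, and a simple coordinator adjusts the price until supply matches demand. Costs and limits are hypothetical:

import numpy as np

# Illustrative dual decomposition for distributed dispatch: agents solve
# small local problems given a price; the coordinator updates the price to
# enforce the power-balance constraint. Quadratic costs give closed forms.

np.set_printoptions(precision=2)
a = np.array([0.10, 0.15, 0.08, 0.20])   # quadratic cost coefficients per DER
p_max = np.array([40.0, 30.0, 50.0, 25.0])
demand = 100.0

price = 0.0
for it in range(500):
    # Local step: each DER maximizes price*p - a*p^2 on [0, p_max],
    # using only the broadcast price.
    p = np.clip(price / (2 * a), 0.0, p_max)
    # Coordinator step: raise price if supply is short, lower it if long.
    mismatch = demand - p.sum()
    price += 0.01 * mismatch
    if abs(mismatch) < 1e-6:
        break

print(f"Clearing price: {price:.3f}")
print(f"DER outputs   : {p}  (total {p.sum():.2f} vs demand {demand})")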

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents that ensure that the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work achieves this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the systemic objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
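The design pattern can be illustrated with a toy coverage game, not drawn from the thesis itself: agents share the global welfare as their utility, which makes the game an identical-interest (hence potential) game, so asynchronous best-response dynamics converge to an equilibrium that locally maximizes the welfare:

import numpy as np

# Minimal potential-game sketch: agents pick sites to cover; each agent's
# utility is the global welfare itself (identical-interest game), so
# best-response dynamics climb the potential and terminate at a Nash
# equilibrium. The task and parameters are hypothetical.

rng = np.random.default_rng(6)
n_agents, n_sites = 4, 6
value = rng.uniform(1.0, 5.0, n_sites)      # worth of covering each site

def welfare(action):
    covered = np.unique(action)             # a site counts once, however many agents sit on it
    return value[covered].sum()

action = rng.integers(0, n_sites, n_agents) # initial site choices
for sweep in range(20):
    changed = False
    for i in range(n_agents):               # asynchronous best responses
        best, best_w = action[i], welfare(action)
        for s in range(n_sites):
            trial = action.copy(); trial[i] = s
            w = welfare(trial)
            if w > best_w + 1e-12:
                best, best_w = s, w
        if best != action[i]:
            action[i] = best; changed = True
    if not changed:
        break                               # Nash equilibrium reached

print("Equilibrium assignment:", action, " welfare:", round(welfare(action), 2))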

Relevance:

10.00%

Publisher:

Abstract:

Secondary-ion mass spectrometry (SIMS), electron probe analysis (EPMA), analytical scanning electron microscopy (SEM) and infrared (IR) spectroscopy were used to determine the chemical composition and the mineralogy of sub-micrometer inclusions in cubic diamonds and in overgrowths (coats) on octahedral diamonds from Zaire, Botswana, and some unknown localities.

The inclusions are sub-micrometer in size. The typical diameter encountered during transmission electron microscope (TEM) examination was 0.1-0.5 µm. The micro-inclusions are sub-rounded and their shape is crystallographically controlled by the diamond. Normally they are not associated with cracks or dislocations and appear to be well isolated within the diamond matrix. The number density of inclusions is highly variable on any scale and may reach 10^(11) inclusions/cm^3 in the most densely populated zones. The total concentration of metal oxides in the diamonds varies between 20 and 1270 ppm (by weight).

SIMS analysis yields the average composition of about 100 inclusions contained in the sputtered volume. Comparison of analyses of different volumes of an individual diamond shows roughly uniform composition (typically ±10% relative). The variation among the average compositions of different diamonds is somewhat greater (typically ±30%). Nevertheless, all diamonds exhibit similar characteristics, being rich in water, carbonate, SiO_2, and K_2O, and depleted in MgO. The compositions of micro-inclusions in most diamonds vary within the following ranges: SiO_2, 30-53%; K_2O, 12-30%; CaO, 8-19%; FeO, 6-11%; Al_2O_3, 3-6%; MgO, 2-6%; TiO_2, 2-4%; Na_2O, 1-5%; P_2O_5, 1-4%; and Cl, 1-3%. In addition, BaO, 1-4%; SrO, 0.7-1.5%; La_2O_3, 0.1-0.3%; Ce_2O_3, 0.3-0.5%; and smaller amounts of other rare-earth elements (REE), as well as Mn, Th, and U, were detected by instrumental neutron activation analysis (INAA). Mg/(Fe+Mg) ratios of 0.40-0.62 are low compared with other mantle-derived phases; K/Al ratios of 2-7 are very high; and the chondrite-normalized Ce/Eu ratios of 10-21 are also high, indicating extremely fractionated REE patterns.

SEM analyses indicate that individual inclusions within a single diamond are roughly of similar composition. The average composition of individual inclusions as measured with the SEM is similar to that measured by SIMS. Compositional variations revealed by the SEM are larger than those detected by SIMS and indicate a small variability in the composition of individual inclusions. No compositions of individual inclusions were determined that might correspond to mono-mineralic inclusions.

IR spectra of inclusion-bearing zones exhibit characteristic absorption due to: (1) pure diamond; (2) nitrogen and hydrogen in the diamond matrix; and (3) mineral phases in the micro-inclusions. Nitrogen concentrations of 500-1100 ppm, typical of the micro-inclusion-bearing zones, are higher than the average nitrogen content of diamonds. Only type IaA centers were detected by IR. A yellow coloration may indicate a small concentration of type Ib centers.

The absorption due to the micro-inclusions in all diamonds produces similar spectra and indicates the presence of hydrated sheet silicates (most likely, Fe-rich clay minerals), carbonates (most likely calcite), and apatite. Small quantities of molecular CO_2 are also present in most diamonds. Water is probably associated with the silicates but the possibility of its presence as a fluid phase cannot be excluded. Characteristic lines of olivine, pyroxene and garnet were not detected and these phases cannot be significant components of the inclusions. Preliminary quantification of the IR data suggests that water and carbonate account for, on average, 20-40 wt% of the micro-inclusions.

The composition and mineralogy of the micro-inclusions are completely different from those of the more common, larger inclusions of the peridotitic or eclogitic assemblages. Their bulk composition resembles that of potassic magmas, such as kimberlites and lamproites, but is enriched in H_2O, CO_3, K_2O, and incompatible elements, and depleted in MgO.

It is suggested that the composition of the micro-inclusions represents a volatile-rich fluid or a melt trapped by the diamond during its growth. The high content of K, Na, P, and incompatible elements suggests that the trapped material found in the micro-inclusions may represent an effective metasomatizing agent. It may also be possible that fluids of similar composition are responsible for the extreme enrichment of incompatible elements documented in garnet and pyroxene inclusions in diamonds.

The origin of the fluid trapped in the micro-inclusions is still uncertain. It may have been formed by incipient melting of highly metasomatized mantle rocks. More likely, it is the result of fractional crystallization of a potassic parental magma at depth. In either case, the micro-inclusions document the presence of highly potassic fluids or melts at depths corresponding to the diamond stability field in the upper mantle. The phases presently identified in the inclusions are believed to be the result of closed-system reactions at lower pressures.

Relevance:

10.00%

Publisher:

Abstract:

This thesis is motivated by safety-critical applications involving autonomous air, ground, and space vehicles carrying out complex tasks in uncertain and adversarial environments. We use temporal logic as a language to formally specify complex tasks and system properties. Temporal logic specifications generalize the classical notions of stability and reachability that are studied in the control and hybrid systems communities. Given a system model and a formal task specification, the goal is to automatically synthesize a control policy for the system that ensures that the system satisfies the specification. This thesis presents novel control policy synthesis algorithms for optimal and robust control of dynamical systems with temporal logic specifications. Furthermore, it introduces algorithms that are efficient and extend to high-dimensional dynamical systems.

The first contribution of this thesis is the generalization of a classical linear temporal logic (LTL) control synthesis approach to optimal and robust control. We show how we can extend automata-based synthesis techniques for discrete abstractions of dynamical systems to create optimal and robust controllers that are guaranteed to satisfy an LTL specification. Such optimal and robust controllers can be computed at little extra computational cost compared to computing a feasible controller.

The second contribution of this thesis addresses the scalability of control synthesis with LTL specifications. A major limitation of the standard automaton-based approach for control with LTL specifications is that the automaton might be doubly-exponential in the size of the LTL specification. We introduce a fragment of LTL for which one can compute feasible control policies in time polynomial in the size of the system and specification. Additionally, we show how to compute optimal control policies for a variety of cost functions, and identify interesting cases when this can be done in polynomial time. These techniques are particularly relevant for online control, as one can guarantee that a feasible solution can be found quickly, and then iteratively improve on the quality as time permits.

The final contribution of this thesis is a set of algorithms for computing feasible trajectories for high-dimensional, nonlinear systems with LTL specifications. These algorithms avoid a potentially computationally-expensive process of computing a discrete abstraction, and instead compute directly on the system's continuous state space. The first method uses an automaton representing the specification to directly encode a series of constrained-reachability subproblems, which can be solved in a modular fashion by using standard techniques. The second method encodes an LTL formula as mixed-integer linear programming constraints on the dynamical system. We demonstrate these approaches with numerical experiments on temporal logic motion planning problems with high-dimensional (10+ states) continuous systems.
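The MILP encoding of the second method can be illustrated on a toy instance, assuming the PuLP modeling library and a big-M encoding of the single requirement "eventually reach [4, 5]" for a 1-D integrator; the thesis's encoding handles full LTL and richer dynamics:

import pulp

# Illustrative MILP encoding in the spirit of the second method: the
# requirement "eventually x is in [4, 5]" over a finite horizon is expressed
# with binary indicators and big-M constraints. A generic textbook encoding,
# not the thesis's tool.

T, M = 10, 100.0
prob = pulp.LpProblem("eventually_reach", pulp.LpMinimize)

x = [pulp.LpVariable(f"x{t}", -M, M) for t in range(T + 1)]
u = [pulp.LpVariable(f"u{t}", -1, 1) for t in range(T)]        # bounded input
v = [pulp.LpVariable(f"v{t}", 0, 1) for t in range(T)]         # |u| surrogate
z = [pulp.LpVariable(f"z{t}", cat="Binary") for t in range(T + 1)]

prob += x[0] == 0                                  # initial condition
for t in range(T):
    prob += x[t + 1] == x[t] + u[t]                # integrator dynamics
    prob += v[t] >= u[t]
    prob += v[t] >= -u[t]                          # v_t >= |u_t|
for t in range(T + 1):
    prob += x[t] >= 4 - M * (1 - z[t])             # z_t = 1 forces x_t in [4, 5]
    prob += x[t] <= 5 + M * (1 - z[t])
prob += pulp.lpSum(z) >= 1                         # "eventually": some z_t is 1
prob += pulp.lpSum(v)                              # minimize control effort

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print("trajectory:", [round(pulp.value(xi), 1) for xi in x])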

Relevance:

10.00%

Publisher:

Abstract:

In this work, the development of a probabilistic approach to robust control is motivated by structural control applications in civil engineering. Often in civil structural applications, a system's performance is specified in terms of its reliability. In addition, the model and input uncertainty for the system may be described most appropriately using probabilistic or "soft" bounds on the model and input sets. The probabilistic robust control methodology contrasts with existing H∞/μ robust control methodologies that do not use probability information for the model and input uncertainty sets, yielding only the guaranteed (i.e., "worst-case") system performance, and no information about the system's probable performance which would be of interest to civil engineers.

The design objective for the probabilistic robust controller is to maximize the reliability of the uncertain structure/controller system for a probabilistically-described uncertain excitation. The robust performance is computed for a set of possible models by weighting the conditional performance probability for a particular model by the probability of that model, then integrating over the set of possible models. This integration is accomplished efficiently using an asymptotic approximation. The probable performance can be optimized numerically over the class of allowable controllers to find the optimal controller. Also, if structural response data becomes available from a controlled structure, its probable performance can easily be updated using Bayes's Theorem to update the probability distribution over the set of possible models. An updated optimal controller can then be produced, if desired, by following the original procedure. Thus, the probabilistic framework integrates system identification and robust control in a natural manner.
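A brute-force Monte Carlo caricature of the robust-performance integral is given below: the failure probability conditional on an uncertain model parameter is averaged under that parameter's probability distribution. The toy oscillator, threshold, and distributions are hypothetical, and the thesis evaluates this integral with an asymptotic approximation rather than sampling:

import numpy as np

# Illustrative model-averaged failure probability: weight the conditional
# failure probability P(F | model) by the model's probability and integrate
# by sampling. Everything here is a hypothetical stand-in.

rng = np.random.default_rng(7)

def conditional_failure_prob(omega, n_mc=2000):
    """P(failure | model): toy response under an uncertain excitation
    amplitude exceeds a fixed threshold."""
    amp = rng.lognormal(mean=0.0, sigma=0.5, size=n_mc)   # uncertain excitation
    peak = amp / omega**2                                  # static-like response proxy
    return (peak > 0.8).mean()

# "Soft" model uncertainty: natural frequency is lognormal around 1.2 rad/s.
omegas = rng.lognormal(mean=np.log(1.2), sigma=0.1, size=500)
p_fail_robust = np.mean([conditional_failure_prob(w) for w in omegas])
p_fail_nominal = conditional_failure_prob(1.2)

print(f"Nominal-model failure probability : {p_fail_nominal:.3f}")
print(f"Robust (model-averaged) estimate  : {p_fail_robust:.3f}")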

The probabilistic robust control methodology is applied to two systems in this thesis. The first is a high-fidelity computer model of a benchmark structural control laboratory experiment. For this application, uncertainty in the input model only is considered. The probabilistic control design minimizes the failure probability of the benchmark system while remaining robust with respect to the input model uncertainty. The performance of an optimal low-order controller compares favorably with higher-order controllers for the same benchmark system which are based on other approaches. The second application is to the Caltech Flexible Structure, which is a light-weight aluminum truss structure actuated by three voice coil actuators. A controller is designed to minimize the failure probability for a nominal model of this system. Furthermore, the method for updating the model-based performance calculation given new response data from the system is illustrated.

Relevance:

10.00%

Publisher:

Abstract:

In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.

The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
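For orientation, a standard Monte Carlo estimate of a first-excursion probability for a toy linear oscillator is sketched below; each sample path draws on the order of a thousand random load pulses, which is exactly the high-dimensional uncertain-parameter setting described above. The parameters are hypothetical, and at realistic (much smaller) failure probabilities this direct approach becomes infeasible, which motivates the dissertation's methods:

import numpy as np

# Standard Monte Carlo estimate of a first-excursion probability: a linear
# SDOF oscillator driven by Gaussian white noise, failure defined as the
# displacement exceeding a threshold within the duration. All parameters
# are hypothetical.

rng = np.random.default_rng(8)
wn, zeta = 2 * np.pi, 0.05         # natural frequency (rad/s), damping ratio
dt, T = 0.01, 10.0                 # time step and duration (s)
n_steps = int(T / dt)
S = 1.0                            # white-noise spectral intensity
threshold = 1.0

n_samples, failures = 2000, 0
for _ in range(n_samples):
    x = v = 0.0
    excursion = False
    # Each sample path draws n_steps i.i.d. Gaussian load pulses -- the
    # "large number of uncertain parameters" noted above.
    w = rng.normal(0.0, np.sqrt(2 * np.pi * S / dt), size=n_steps)
    for k in range(n_steps):
        a = w[k] - 2 * zeta * wn * v - wn**2 * x
        v += a * dt                # semi-implicit Euler integration
        x += v * dt
        if abs(x) > threshold:
            excursion = True
            break
    failures += excursion

print(f"First-excursion probability ~ {failures / n_samples:.4f}")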