940 results for equivalence
Abstract:
The present work focused on improving engine performance with different fuel equivalence ratios and fuel injection schemes. A scramjet model with integrated strut/cavity configurations was tested in Mach 5.8 flows. The results showed that the strut may serve as an effective tool in a kerosene-fueled scramjet. The strut/cavity integration also had a strong stabilizing effect on combustion over a wide range of fuel equivalence ratios. A one-dimensional analysis method was used to analyze the main characteristics of the model. Two-stage fuel injection should give better performance in increasing the chemical reaction rate in the first cavity region.
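For reference, the fuel equivalence ratio used in such studies is the standard quantity defined below; this is the conventional definition, stated here for orientation rather than taken from the paper.

\[
  \phi \;=\; \frac{(m_{\mathrm{fuel}}/m_{\mathrm{air}})_{\mathrm{actual}}}{(m_{\mathrm{fuel}}/m_{\mathrm{air}})_{\mathrm{stoich}}},
  \qquad \phi < 1 \ \text{(fuel-lean)}, \quad \phi > 1 \ \text{(fuel-rich)}.
\]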
Abstract:
The identification of the phenomena regarded as sublunar in Antiquity on the basis of Latin terminology is particularly difficult. Our proposal is to consider the terms that appear in the enumerations of Pliny and Seneca, taking modern astronomical nomenclature as the reference parameter and working under the hypothesis that ancient observations, made by people of great observational skill under a sky free of light pollution, can be compared with those we can obtain today through astrophotography. The conclusion is that, since in Greco-Latin Antiquity each phenomenon was reported with the same descriptive formula, it is possible to determine the equivalence between the Roman terms and the current ones (comets, meteors, etc.) as a step prior to adopting a translation decision, whether we regard them as scientific or as culture-specific terms.
Abstract:
In this paper, the antiplane dynamic problem is investigated by means of the boundary integral equation method, together with the Green's basic-solution technique and singularity analysis. The problem is reduced to solving a Cauchy singular integral equation in Laplace transform space. This equation is rigorously proved to be equivalent to the dual integral equations obtained by Sih [Mechanics of Fracture, Vol. 4. Noordhoff, Leyden (1977)]. On this basis, the dynamic interaction between two parallel cracks is also investigated. Using a high-precision numerical method for the singular integral equation together with numerical Laplace inversion, the dynamic stress intensity factors of several typical problems are calculated. The numerical results are shown to be consistent with those of Sih, indicating that the present method is successful and can be applied to more complicated problems. Copyright (C) 1996 Elsevier Science Ltd
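The abstract does not reproduce the equation itself; for orientation only, a Cauchy singular integral equation with a dominant Cauchy kernel plus a regular (Fredholm) part typically takes the generic form below, where the density \(\varphi\) is the unknown and \(k\) and \(f\) are problem-dependent (the paper's specific kernel is not implied here).

\[
  \frac{1}{\pi}\int_{-1}^{1}\frac{\varphi(t)}{t-x}\,dt
  \;+\;\int_{-1}^{1} k(x,t)\,\varphi(t)\,dt \;=\; f(x),
  \qquad -1 < x < 1 .
\]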
Abstract:
Four methods to control the smooth cordgrass Spartina (Spartina alterniflora), together with the footwear worn by treatment personnel at several sites in Willapa Bay, Washington, were evaluated to determine their non-target impacts on eelgrass (Zostera japonica). Clone-sized infestations of Spartina were treated by mowing, by a single hand-spray application of Rodeo® formulated at 480 g L-1 acid equivalence (ae) of the isopropylamine salt of glyphosate (Monsanto Agricultural Co., St. Louis, MO; currently Dow AgroSciences, Indianapolis, IN) with the nonionic surfactant LI 700® (2% v/v), or by a combination of mowing and hand spraying. An aerial application of Rodeo® with X-77 Spreader® (0.13% v/v) to a 2-ha meadow was also investigated. Monitoring consisted of measuring eelgrass shoot densities and percent cover pre-treatment and 1 yr post-treatment. Impacts to eelgrass adjacent to treated clones were determined 1 m from the clones and compared to a control 5 m away. Impacts from footwear were assessed at 5 equidistant intervals along a 10-m transect on mudflat and along an untreated control transect at each of the three clone-treatment sites. Impacts from the aerial application were determined by comparing shoot densities and percent cover 1, 3 and 10 m from the edge of the treated Spartina meadow with those at comparable distances from an untreated meadow. The methods used to control Spartina clones did not impact surrounding eelgrass at two of the three sites. Decreases in shoot densities observed at the third site were consistent across treatments. Most impacts to eelgrass from the footwear worn by treatment personnel were negligible, and those that were significant were limited to soft mud substrate. The aerial application of the herbicide was associated with reductions in eelgrass (shoot density and percent cover) at two of the three sampling distances, but reductions on the control plot were greater. We conclude that the unchecked spread of Spartina is a far greater threat to the survival and health of eelgrass than any of the control measures we studied. The basis for evaluating control measures for Spartina should be efficacy and logistical constraints, not impacts to eelgrass.
Abstract:
In a two-stage delegation game model with Nash bargaining between a manager and an owner, an equivalence result is found between this game and Fershtman and Judd's strategic delegation game (Fershtman and Judd, 1987). Interestingly, although both games are equivalent in terms of profits under certain conditions, managers obtain greater rewards in the bargaining game. This results in a redistribution of profits between owners and managers.
Abstract:
Injection and combustion of vaporized kerosene were experimentally investigated in a Mach 2.5 model combustor at various fuel temperatures and injection pressures. A unique kerosene heating and delivery system, which can prepare heated kerosene up to 820 K at a pressure of 5.5 MPa with negligible fuel coking, was developed. A three-species surrogate was employed to simulate the thermophysical properties of kerosene. The calculated thermophysical properties of the surrogate provided insight into the fuel flow control in the experiments. Kerosene jet structures at various preheat temperatures, injecting into both a quiescent environment and a Mach 2.5 crossflow, were characterized. It was shown that the use of vaporized kerosene injection holds the potential of enhancing fuel-air mixing and promoting overall burning. Supersonic combustion tests further confirmed this conjecture by comparing the combustor performance of supercritical kerosene with that of liquid kerosene and of effervescent atomization with hydrogen barbotage. Under similar flow conditions and overall kerosene equivalence ratios, the experimental results showed that the combustion efficiency of supercritical kerosene increased by approximately 10-15% over that of liquid kerosene and was comparable to that of effervescent atomization.
Abstract:
This paper investigates the presence of limit oscillations in an adaptive sampling system. The basic sampling criterion operates so that each next sampling occurs when the absolute difference between the signal amplitude and its most recently sampled value equals a prescribed threshold amplitude. The sampling criterion is then extended to involve a prescribed set of amplitudes. The limit oscillations may be interpreted through the equivalence of the adaptive sampling-and-hold device with a nonlinear one consisting of a relay with multiple hysteresis, whose parameterization is, in general, dependent on the initial conditions of the dynamic system. The study is carried out in the time domain.
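A minimal sketch of the basic sampling criterion described above, in Python; the signal, time step, threshold value and function names are illustrative assumptions rather than material from the paper.

```python
import numpy as np

def adaptive_sample(signal, dt, threshold):
    """Emit a new sample whenever the signal departs from the last
    held (sampled) value by the prescribed threshold amplitude."""
    sample_times = [0.0]
    samples = [signal[0]]
    held = signal[0]
    for k, x in enumerate(signal[1:], start=1):
        if abs(x - held) >= threshold:      # next sampling instant
            held = x
            sample_times.append(k * dt)
            samples.append(x)
    return np.array(sample_times), np.array(samples)

# Illustrative use: sample a decaying oscillation with threshold 0.1.
t = np.arange(0.0, 10.0, 1e-3)
x = np.exp(-0.3 * t) * np.sin(2 * np.pi * t)
ts, xs = adaptive_sample(x, 1e-3, 0.1)
```

The extension mentioned in the abstract would replace the single threshold by a prescribed set of amplitudes; the relay-with-hysteresis equivalence concerns the closed-loop behaviour of this sampling-and-hold device and is not captured by this sketch.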
Abstract:
In this thesis we propose a new approach to deduction methods for temporal logic. Our proposal is based on an inductive definition of eventualities that is different from the usual one. On the basis of this non-customary inductive definition of eventualities, we first provide dual systems of tableaux and sequents for Propositional Linear-time Temporal Logic (PLTL). Then, we adapt the deductive approach introduced by means of these dual tableau and sequent systems to the resolution framework and present a clausal temporal resolution method for PLTL. Finally, we use this new clausal temporal resolution method to establish logical foundations for declarative temporal logic programming languages. The key issue in deduction systems for temporal logic is dealing with eventualities and the hidden invariants that may prevent their fulfillment. Different ways of addressing this issue can be found in the literature on deduction systems for temporal logic. Traditional tableau systems for temporal logic generate an auxiliary graph in a first pass. Then, in a second pass, unsatisfiable nodes are pruned; in particular, the second pass must check whether the eventualities are fulfilled. The one-pass tableau calculus introduced by S. Schwendimann requires an additional handling of information in order to detect cyclic branches that contain unfulfilled eventualities. In traditional sequent calculi for temporal logic, the issue of eventualities and hidden invariants is tackled by means of inference rules (mainly invariant-based rules or infinitary rules) that complicate automation. A remarkable consequence of using either a two-pass approach based on auxiliary graphs or a one-pass approach that requires additional handling of information in the tableau framework, and either invariant-based rules or infinitary rules in the sequent framework, is that the classical correspondence between tableaux and sequents fails to carry over to temporal logic. In this thesis, we first provide a one-pass tableau method TTM that, instead of a graph, builds a cyclic tree to decide whether a set of PLTL-formulas is satisfiable. In TTM, tableaux are classical-like. For unsatisfiable sets of formulas, TTM produces tableaux whose leaves contain a formula and its negation. For satisfiable sets of formulas, TTM builds tableaux in which each fully expanded open branch characterizes a collection of models for the set of formulas in the root. The tableau method TTM is complete and yields a decision procedure for PLTL. This tableau method is directly associated with a one-sided sequent calculus called TTC. Since TTM is free from all the structural rules that hinder the mechanization of deduction, e.g. weakening and contraction, the resulting sequent calculus TTC is also free from such structural rules. In particular, TTC is free of any kind of cut, including invariant-based cut. From the deduction system TTC, we obtain a two-sided sequent calculus GTC that preserves all these freeness properties and is finitary, sound and complete for PLTL. We thereby show that the classical correspondence between tableaux and sequent calculi can be extended to temporal logic. The most fruitful approach in the literature on resolution methods for temporal logic, initiated by the seminal paper of M. Fisher, deals with PLTL and requires generating invariants for performing resolution on eventualities.
In this thesis, we present a new approach to resolution for PLTL. The main novelty of our approach is that we do not generate invariants to perform resolution on eventualities. Our method is based on the dual methods of tableaux and sequents for PLTL mentioned above. It involves translation into a clausal normal form that is a direct extension of classical CNF. We first show that any PLTL-formula can be transformed into this clausal normal form. Then, we present our temporal resolution method, called TRS-resolution, which extends classical propositional resolution. Finally, we prove that TRS-resolution is sound and complete. In fact, it terminates for any input formula, deciding its satisfiability, and hence gives rise to a new decision procedure for PLTL. In the field of temporal logic programming, the declarative proposals that provide a completeness result do not allow eventualities, whereas the proposals that follow the imperative future approach either restrict the use of eventualities or deal with them by calculating an upper bound based on the small model property of PLTL. In the latter, when the length of a derivation reaches the upper bound, the derivation is abandoned and backtracking is used to try another possible derivation. In this thesis we present a declarative propositional temporal logic programming language, called TeDiLog, that is a combination of the temporal and disjunctive paradigms in Logic Programming. We establish the logical foundations of our proposal by formally defining operational and logical semantics for TeDiLog and by proving their equivalence. Since TeDiLog is, syntactically, a sublanguage of PLTL, the logical semantics of TeDiLog is supported by PLTL logical consequence. The operational semantics of TeDiLog is based on TRS-resolution. TeDiLog allows both eventualities and always-formulas to occur in clause heads as well as in clause bodies. To the best of our knowledge, TeDiLog is the first declarative temporal logic programming language that achieves this degree of expressiveness. Since the tableau method presented in this thesis is able to detect that the fulfillment of an eventuality is prevented by a hidden invariant without checking for it by means of an extra process, since our finitary sequent calculi do not include invariant-based rules, and since our resolution method dispenses with invariant generation, we say that our deduction methods are invariant-free.
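For context, the customary treatment of eventualities that the thesis departs from is the standard fixpoint unfolding of PLTL shown below; the thesis's alternative inductive definition is not reproduced here.

\[
  \Diamond\varphi \;\equiv\; \varphi \,\vee\, \circ\Diamond\varphi,
  \qquad
  \varphi\,\mathcal{U}\,\psi \;\equiv\; \psi \,\vee\, \bigl(\varphi \wedge \circ(\varphi\,\mathcal{U}\,\psi)\bigr).
\]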
Abstract:
Background: Budesonide has a long history as an intranasal drug, with many marketed products. Efforts should be made to demonstrate the therapeutic equivalence and comparable safety between them. Given that systemic availability varies significantly between formulations, the clinical comparability of diverse products is of clinical interest and a regulatory requirement. The aim of the present study was to compare the systemic availability, pharmacodynamic effect, and safety of two intranasal budesonide formulations for the treatment of rhinitis. Methods: Eighteen healthy volunteers participated in this randomised, controlled, crossover clinical trial. On two separate days, subjects received a single dose of 512 μg budesonide (4 puffs per nostril) from each of the assayed devices (Budesonida nasal 64 (R), Aldo-Union, Spain and Rhinocort 64 (R), AstraZeneca, Spain). Budesonide availability was determined by measuring budesonide plasma concentrations. The pharmacodynamic effect on the hypothalamic-adrenal axis was evaluated through both plasma and urine cortisol levels. Adverse events were tabulated and described. Budesonide availability was compared between formulations by calculating 90% confidence intervals for the ratios of the main pharmacokinetic parameters describing budesonide bioavailability. Plasma cortisol concentration-time curves were compared by means of a general linear model (GLM) for repeated measures. Urine cortisol excretion was compared between formulations with the Wilcoxon test. Results: All the enrolled volunteers successfully completed the study. Pharmacokinetic parameters were comparable in terms of AUC(t) (2.6 +/- 1.5 vs 2.2 +/- 0.7), AUC(i) (2.9 +/- 1.5 vs 2.4 +/- 0.7), t(max) (0.4 +/- 0.1 vs 0.4 +/- 0.2), C(max)/AUC(i) (0.3 +/- 0.1 vs 0.3 +/- 0.0), and MRT (5.0 +/- 1.4 vs 4.5 +/- 0.6), but not in the case of C(max) (0.9 +/- 0.3 vs 0.7 +/- 0.2) and t(1/2) (3.7 +/- 1.8 vs 2.9 +/- 0.4). The pharmacodynamic effects, measured as the effect on plasma and urine cortisol, were also comparable between the two formulations. No severe adverse events were reported and tolerance was comparable between formulations. Conclusion: The systemic availability of intranasal budesonide was comparable for both formulations in terms of most pharmacokinetic parameters. The pharmacodynamic effect on the hypothalamic-pituitary-adrenal axis was also similar. Side effects were scarce and equivalent between the two products. This methodology for comparing different budesonide-containing devices is reliable and easy to perform, and should be recommended for similar products intended to be marketed or already on the market.
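A minimal sketch of how a 90% confidence interval for the between-formulation ratio of a pharmacokinetic parameter (e.g. AUC) can be computed on the log scale from paired crossover data; the variable names, the simulated data and the simplified paired analysis are assumptions, not the study's actual statistical model.

```python
import numpy as np
from scipy import stats

def ratio_90ci(test, reference):
    """90% CI for the geometric mean ratio test/reference,
    computed from log-transformed within-subject paired data."""
    diff = np.log(np.asarray(test)) - np.log(np.asarray(reference))
    n = diff.size
    mean, sem = diff.mean(), diff.std(ddof=1) / np.sqrt(n)
    t90 = stats.t.ppf(0.95, df=n - 1)        # two-sided 90% interval
    lo, hi = mean - t90 * sem, mean + t90 * sem
    return np.exp(mean), (np.exp(lo), np.exp(hi))

# Illustrative use with simulated AUC values for 18 subjects.
rng = np.random.default_rng(0)
auc_ref = rng.lognormal(mean=1.0, sigma=0.3, size=18)
auc_test = auc_ref * rng.lognormal(mean=0.05, sigma=0.1, size=18)
gmr, ci = ratio_90ci(auc_test, auc_ref)
```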
Abstract:
Effects of flame stretch on the laminar burning velocities of near-limit fuel-lean methane/air flames have been studied experimentally using a microgravity environment to minimize the complications of buoyancy. Outwardly propagating spherical flames were employed to assess the sensitivities of the laminar burning velocity to flame stretch, represented by Markstein lengths, and the fundamental laminar burning velocities of unstretched flames. Resulting data were reported for methane/air mixtures at ambient temperature and pressure, over the range of equivalence ratios extending from 0.512 (the microgravity flammability limit found in the combustion chamber) to 0.601. The present measurements of unstretched laminar burning velocities were in good agreement with the only existing microgravity data set at all measured equivalence ratios. Most previous 1-g experiments, using a variety of experimental techniques, however, appeared to give significantly higher burning velocities than the microgravity results. Furthermore, the burning velocities predicted by three chemical reaction mechanisms, which have been tuned primarily under off-limit conditions, were also considerably higher than the present experimental data. Additional results of the present investigation were derived for the overall activation energy and corresponding Zeldovich numbers, and for the variation of the global flame Lewis numbers with equivalence ratio. The implications of these results were discussed. (C) 2010 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
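For outwardly propagating spherical flames, the Markstein-length analysis referred to above conventionally rests on the linear stretch relation below; this is the standard form, not necessarily the exact expression used in the paper.

\[
  S_b \;=\; S_b^{0} \;-\; L_b\,\kappa,
  \qquad
  \kappa \;=\; \frac{2}{R_f}\,\frac{dR_f}{dt},
\]

where \(S_b\) is the stretched flame speed, \(S_b^{0}\) its unstretched value, \(L_b\) the burned-gas Markstein length, and \(\kappa\) the stretch rate of a spherical flame of radius \(R_f\).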
Abstract:
The primary focus of this thesis is on the interplay of descriptive set theory and the ergodic theory of group actions. This incorporates the study of turbulence and Borel reducibility on the one hand, and the theory of orbit equivalence and weak equivalence on the other. Chapter 2 is joint work with Clinton Conley and Alexander Kechris; we study measurable graph combinatorial invariants of group actions and employ the ultraproduct construction as a way of constructing various measure preserving actions with desirable properties. Chapter 3 is joint work with Lewis Bowen; we study the property MD of residually finite groups, and we prove a conjecture of Kechris by showing that under general hypotheses property MD is inherited by a group from one of its co-amenable subgroups. Chapter 4 is a study of weak equivalence. One of the main results answers a question of Abért and Elek by showing that within any free weak equivalence class the isomorphism relation does not admit classification by countable structures. The proof relies on affirming a conjecture of Ioana by showing that the product of a free action with a Bernoulli shift is weakly equivalent to the original action. Chapter 5 studies the relationship between mixing and freeness properties of measure preserving actions. Chapter 6 studies how approximation properties of ergodic actions and unitary representations are reflected group theoretically and also operator algebraically via a group's reduced C*-algebra. Chapter 7 is an appendix which includes various results on mixing via filters and on Gaussian actions.
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools that exploit those structures, leading to computationally efficient and distributed solutions, and to apply them to improve systems operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution system, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we present how to exploit the power network structure to design efficient and distributed markets and algorithms for energy management. We also show how to connect these algorithms with the physical dynamics and existing control mechanisms for real-time control in power networks.
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work focused on achieving this goal using the framework of game theory. In particular, we derived a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g. potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the design of the system-level objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
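One well-known construction in this spirit, stated here for illustration and not claimed to be the thesis's own methodology, is the marginal-contribution (wonderful-life) utility, which is known to yield an exact potential game whose potential is the system objective \(G\) itself:

\[
  U_i(a_i, a_{-i}) \;=\; G(a_i, a_{-i}) \;-\; G(a_i^{0}, a_{-i}),
\]

where \(a_i^{0}\) is a fixed baseline action of agent \(i\); a unilateral change in \(U_i\) then equals the corresponding change in \(G\), so distributed learning dynamics for potential games apply.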
Abstract:
Since the discovery of D-branes as non-perturbative, dynamic objects in string theory, various configurations of branes in type IIA/B string theory and M-theory have been considered to study their low-energy dynamics described by supersymmetric quantum field theories.
One example of such a construction is based on the description of Seiberg-Witten curves of four-dimensional N = 2 supersymmetric gauge theories as branes in type IIA string theory and M-theory. This enables us to study the gauge theories in strongly-coupled regimes. Spectral networks are another tool for utilizing branes to study non-perturbative regimes of two- and four-dimensional supersymmetric theories. Using spectral networks of a Seiberg-Witten theory we can find its BPS spectrum, which is protected from quantum corrections by supersymmetry, and also the BPS spectrum of a related two-dimensional N = (2,2) theory whose (twisted) superpotential is determined by the Seiberg-Witten curve. When we don’t know the perturbative description of such a theory, its spectrum obtained via spectral networks is a useful piece of information. In this thesis we illustrate these ideas with examples of the use of Seiberg-Witten curves and spectral networks to understand various two- and four-dimensional supersymmetric theories.
First, we examine how the geometry of a Seiberg-Witten curve serves as a useful tool for identifying various limits of the parameters of the Seiberg-Witten theory, including Argyres-Seiberg duality and Argyres-Douglas fixed points. Next, we consider the low-energy limit of a two-dimensional N = (2, 2) supersymmetric theory from an M-theory brane configuration whose (twisted) superpotential is determined by the geometry of the branes. We show that, when the two-dimensional theory flows to its infra-red fixed point, particular cases realize Kazama-Suzuki coset models. We also study the BPS spectrum of an Argyres-Douglas type superconformal field theory on the Coulomb branch by using its spectral networks. We provide strong evidence of the equivalence of superconformal field theories from different string-theoretic constructions by comparing their BPS spectra.
Abstract:
This thesis consists of three essays in the areas of political economy and game theory, unified by their focus on the effects of pre-play communication on equilibrium outcomes.
Communication is fundamental to elections. Chapter 2 extends canonical voter turnout models, where citizens, divided into two competing parties, choose between costly voting and abstaining, to include any form of communication, and characterizes the resulting set of Aumann's correlated equilibria. In contrast to previous research, high-turnout equilibria exist in large electorates and uncertain environments. This difference arises because communication can coordinate behavior in such a way that citizens find it incentive compatible to follow their correlated signals to vote more. The equilibria have expected turnout of at least twice the size of the minority for a wide range of positive voting costs.
In Chapter 3 I introduce a new equilibrium concept, called subcorrelated equilibrium, which fills the gap between Nash and correlated equilibrium, extending the latter to multiple mediators. Subcommunication equilibrium similarly extends communication equilibrium for incomplete information games. I explore the properties of these solutions and establish an equivalence between a subset of subcommunication equilibria and Myerson's quasi-principals' equilibria. I characterize an upper bound on expected turnout supported by subcorrelated equilibrium in the turnout game.
Chapter 4, co-authored with Thomas Palfrey, reports a new study of the effect of communication on voter turnout using a laboratory experiment. Before voting occurs, subjects may engage in various kinds of pre-play communication through computers. We study three communication treatments: No Communication, a control; Public Communication, where voters exchange public messages with all other voters; and Party Communication, where messages are exchanged only within one's own party. Our results point to a strong interaction effect between the form of communication and the voting cost. With a low voting cost, party communication increases turnout, while public communication decreases turnout. The data are consistent with correlated equilibrium play. With a high voting cost, public communication increases turnout. With communication, we find essentially no support for the standard Nash equilibrium turnout predictions.
Abstract:
A technique for obtaining approximate periodic solutions to nonlinear ordinary differential equations is investigated. The approach is based on defining an equivalent differential equation whose exact periodic solution is known. Emphasis is placed on the mathematical justification of the approach. The relationship between the differential equation error and the solution error is investigated, and, under certain conditions, bounds are obtained on the latter. The technique employed is to consider the equation governing the exact solution error as a two point boundary value problem. Among other things, the analysis indicates that if an exact periodic solution to the original system exists, it is always possible to bound the error by selecting an appropriate equivalent system.
Three equivalence criteria for minimizing the differential equation error are compared, namely, minimum mean square error, minimum mean absolute value error, and minimum maximum absolute value error. The problem is analyzed by way of example, and it is concluded that, on the average, the minimum mean square error is the most appropriate criterion to use.
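Stated over one period \(T\) of the assumed periodic solution, with \(e(t)\) denoting the differential equation error, the three criteria amount to minimizing the following functionals; this is a generic statement of the norms rather than the paper's own notation.

\[
  J_{2} = \frac{1}{T}\int_{0}^{T} e^{2}(t)\,dt, \qquad
  J_{1} = \frac{1}{T}\int_{0}^{T} |e(t)|\,dt, \qquad
  J_{\infty} = \max_{0 \le t \le T} |e(t)| .
\]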
A comparison is made between the use of linear and cubic auxiliary systems for obtaining approximate solutions. In the examples considered, the cubic system provides noticeable improvement over the linear system in describing periodic response.
A comparison of the present approach to some of the more classical techniques is included. It is shown that certain of the standard approaches where a solution form is assumed can yield erroneous qualitative results.