33 results for Infinite horizon problems


Relevance:

20.00%

Publisher:

Abstract:

The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance, and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node-to-node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through the use of two design examples, at the extremes of the power and performance spectra.

A novel all-digital clock and data recovery technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
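The eye-sweep phase-selection logic described above can be sketched in software. The following is a minimal illustrative model, not the thesis hardware: the phase count, the open/closed eye map, and all names are assumptions.

```python
# Sketch of the ping-pong eye-sweep phase picker (illustrative model):
# a "data" phase samples in the eye centre while a "monitor" phase sweeps
# the 2-UI delay line; phases whose samples agree with the data samples
# count as "open", and the next data phase is the centre of the widest
# open run. The two clocks then swap roles (the ping-pong).

def sweep_eye(sample_agrees, n_phases):
    """sample_agrees(p) -> True if monitor samples at phase p match data samples."""
    return [sample_agrees(p) for p in range(n_phases)]

def best_phase(open_map):
    """Centre of the longest circular run of open phases."""
    n = len(open_map)
    best_len, best_start = 0, 0
    for start in range(n):
        length = 0
        while length < n and open_map[(start + length) % n]:
            length += 1
        if length > best_len:
            best_len, best_start = length, start
    return (best_start + best_len // 2) % n

# Toy eye: phases 10..53 of a 64-phase (2 UI) delay line are "open".
open_map = sweep_eye(lambda p: 10 <= p <= 53, 64)
data_phase = best_phase(open_map)   # lands near the middle of the open run
```

Because the monitor clock produces the eye map as a side effect of phase selection, the same information can drive equalizer adaptation, which is the hardware-sharing point made above.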

On the other side of the performance/power spectrum, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on-demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
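The alignment readout chain (coupling to delay to digital word) can be caricatured in a few lines. The delay scale, LSB, and word width below are invented for illustration and are not the actual circuit values.

```python
# Illustrative model (not the actual circuit) of the TDC-based alignment
# readout: stronger plate coupling -> larger rectified bias -> shorter delay,
# which the TDC quantises into an alignment word.

def tdc_code(delay_s, lsb_s=10e-12, bits=6):
    """Quantise a delay into a digital word, saturating at full scale."""
    code = int(delay_s / lsb_s)
    return min(code, 2**bits - 1)

def alignment_word(coupling):
    # Hypothetical monotone map: better coupling (0..1) -> shorter delay.
    delay = 700e-12 * (1.0 - 0.9 * coupling)
    return tdc_code(delay)

well_aligned = alignment_word(0.95)   # small code: short delay
misaligned   = alignment_word(0.05)   # saturates at the 6-bit full scale
```

The point of the structure is that the same plates carry data and produce this word, so link quality is read directly rather than inferred from a separate alignment sensor.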


This thesis focuses mainly on linear algebraic aspects of combinatorics. Let N_t(H) be an incidence matrix with edges versus all subhypergraphs of a complete hypergraph that are isomorphic to H. Richard M. Wilson and the author find the general formula for the Smith normal form or diagonal form of N_t(H) for all simple graphs H and for a very general class of t-uniform hypergraphs H.
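For context, a diagonal form of an integer matrix can always be computed by elementary row and column operations over the integers. The brute-force sketch below shows the generic mechanism; it does not reproduce the closed-form answers the thesis derives for N_t(H).

```python
# Generic integer diagonalisation by row/column operations (the mechanism
# behind Smith/diagonal forms; illustrative, not the thesis' formula).

def diagonal_form(M):
    """Reduce an integer matrix to a diagonal form over Z; returns the
    nonzero diagonal entries (divisibility chain not enforced)."""
    A = [row[:] for row in M]
    m, n = len(A), len(A[0])
    d, t = [], 0
    while t < min(m, n):
        # find a nonzero pivot of smallest absolute value
        piv = None
        for i in range(t, m):
            for j in range(t, n):
                if A[i][j] and (piv is None or abs(A[i][j]) < abs(A[piv[0]][piv[1]])):
                    piv = (i, j)
        if piv is None:
            break
        i0, j0 = piv
        A[t], A[i0] = A[i0], A[t]
        for row in A:
            row[t], row[j0] = row[j0], row[t]
        # clear the pivot column and row by Euclidean steps
        dirty = False
        for i in range(t + 1, m):
            q = A[i][t] // A[t][t]
            for j in range(t, n):
                A[i][j] -= q * A[t][j]
            if A[i][t]:
                dirty = True
        for j in range(t + 1, n):
            q = A[t][j] // A[t][t]
            for i in range(t, m):
                A[i][j] -= q * A[i][t]
            if A[t][j]:
                dirty = True
        if dirty:
            continue  # remainders survive: a smaller pivot now exists, repeat
        d.append(abs(A[t][t]))
        t += 1
    return d

# Edge-vertex incidence matrix of the triangle K3: diagonal form {1, 1, 2}.
tri = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
```

The value of the closed-form results is precisely that they bypass this elimination, which gives no structural insight and scales poorly for incidence matrices of large complete (hyper)graphs.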

As a continuation, the author determines the formula for diagonal forms of integer matrices obtained from other combinatorial structures, including incidence matrices for subgraphs of a complete bipartite graph and inclusion matrices for multisets.

One major application of diagonal forms is in zero-sum Ramsey theory. For instance, Caro's results in zero-sum Ramsey numbers for graphs and Caro and Yuster's results in zero-sum bipartite Ramsey numbers can be reproduced. These results are further generalized to t-uniform hypergraphs. Other applications include signed bipartite graph designs.

Research results on some other problems are also included in this thesis, such as a Ramsey-type problem on equipartitions, Hartman's conjecture on large sets of designs and a matroid theory problem proposed by Welsh.


In a probabilistic assessment of the performance of structures subjected to uncertain environmental loads such as earthquakes, an important problem is to determine the probability that the structural response exceeds some specified limits within a given duration of interest. This problem is known as the first excursion problem, and it has been a challenging problem in the theory of stochastic dynamics and reliability analysis. In spite of the enormous amount of attention the problem has received, there is no procedure available for its general solution, especially for engineering problems of interest where the complexity of the system is large and the failure probability is small.

The application of simulation methods to solving the first excursion problem is investigated in this dissertation, with the objective of assessing the probabilistic performance of structures subjected to uncertain earthquake excitations modeled by stochastic processes. From a simulation perspective, the major difficulty in the first excursion problem comes from the large number of uncertain parameters often encountered in the stochastic description of the excitation. Existing simulation tools are examined, with special regard to their applicability in problems with a large number of uncertain parameters. Two efficient simulation methods are developed to solve the first excursion problem. The first method is developed specifically for linear dynamical systems, and it is found to be extremely efficient compared to existing techniques. The second method is more robust to the type of problem, and it is applicable to general dynamical systems. It is efficient for estimating small failure probabilities because the computational effort grows at a much slower rate with decreasing failure probability than standard Monte Carlo simulation. The simulation methods are applied to assess the probabilistic performance of structures subjected to uncertain earthquake excitation. Failure analysis is also carried out using the samples generated during simulation, which provide insight into the probable scenarios that will occur given that a structure fails.
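As a baseline for comparison, standard Monte Carlo on a toy first-excursion problem might look as follows. The oscillator, the noise discretisation, and all parameter values are illustrative assumptions, not the dissertation's models.

```python
# Standard Monte Carlo for a first-excursion probability: a linear SDOF
# oscillator driven by discretised white noise; "failure" means |x|
# exceeds a threshold within the duration. Illustrative sketch only.
import math, random

def max_response(wn=2*math.pi, zeta=0.05, dt=0.01, n_steps=1000,
                 sigma=1.0, rng=random):
    """Peak |x| of x'' + 2*zeta*wn*x' + wn^2*x = f(t), explicit Euler."""
    x, v, peak = 0.0, 0.0, 0.0
    for _ in range(n_steps):
        f = rng.gauss(0.0, sigma / math.sqrt(dt))   # white-noise increment
        a = f - 2*zeta*wn*v - wn**2*x
        x, v = x + dt*v, v + dt*a
        peak = max(peak, abs(x))
    return peak

def first_excursion_prob(threshold, n_samples=500, seed=1):
    rng = random.Random(seed)
    fails = sum(max_response(rng=rng) > threshold for _ in range(n_samples))
    return fails / n_samples
```

The difficulty named above is visible here: estimating a failure probability p to fixed relative accuracy needs on the order of 1/p samples, which is what makes standard Monte Carlo impractical for the small probabilities of engineering interest.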


This work concerns itself with the possibility of solutions, both cooperative and market based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.

Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.

However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of the implementation problems, and in part exacerbated them. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment with the potential for inefficiency arising from the transaction costs enumerated above and the illiquidity induced by the myriad assets and relatively few participants in this market.

It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.

With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits is reduced to the level of historical emissions, prices increase. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to mis-report emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.


Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity with humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows for the use of formal verification techniques for controller synthesis that can give guarantees for safety and performance. Robots operating in unstructured environments also face limited sensing capability. Correctly inferring a robot's progress toward a high-level goal can be challenging.

This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to have a finite abstraction as a Partially Observable Markov Decision Process (POMDP), which is a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem which has received only limited attention.

This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of the global Markov chain can be related to two different criteria with respect to satisfaction of LTL formulas. First, the maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic computation of gradients is derived, which allows the use of first-order optimization techniques.
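The closed-loop construction can be sketched directly: pairing each POMDP state with an FSC memory state yields a finite Markov chain whose long-run behavior is then analyzed. Sizes, probabilities, and the deterministic FSC below are all invented for illustration.

```python
# Sketch: closing the loop between a finite POMDP and a finite-state
# controller (FSC) yields a finite Markov chain over (state, memory) pairs.

def global_chain(T, O, act, mem_next, n_s, n_m):
    """Transition matrix over (s, m) pairs.
    T[s][a][s2]: state transitions, O[s2][o]: observation probabilities,
    act[m]: action chosen in memory m, mem_next[m][o]: memory update."""
    n = n_s * n_m
    P = [[0.0] * n for _ in range(n)]
    for s in range(n_s):
        for m in range(n_m):
            a = act[m]
            for s2 in range(n_s):
                for o in range(len(O[s2])):
                    m2 = mem_next[m][o]
                    P[s * n_m + m][s2 * n_m + m2] += T[s][a][s2] * O[s2][o]
    return P

def stationary(P, iters=2000):
    """Steady-state distribution by power iteration (ergodic part)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

# Toy example: 2 states, 2 actions, perfect observation, 2-memory FSC
# that remembers the last observation and acts on it.
T = [[[0.9, 0.1], [0.5, 0.5]],
     [[0.3, 0.7], [0.8, 0.2]]]
O = [[1.0, 0.0], [0.0, 1.0]]
P = global_chain(T, O, act=[0, 1], mem_next=[[0, 1], [0, 1]], n_s=2, n_m=2)
pi = stationary(P)
```

Analyzing transient versus steady-state mass of such a chain is what connects the controller's parameters to the probability of satisfying an LTL formula.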

The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective by a novel use of a fundamental equation for Markov chains, the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.

The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.


The emphasis in reactor physics research has shifted toward investigations of fast reactors. The effects of high energy neutron processes have thus become fundamental to our understanding, and one of the most important of these processes is nuclear inelastic scattering. In this research we include inelastic scattering as a primary energy transfer mechanism, and study the resultant neutron energy spectrum in an infinite medium. We assume that the moderator material has a high mass number, so that in a laboratory coordinate system the energy loss of an inelastically scattered neutron may be taken as discrete. It is then consistent to treat elastic scattering with an age theory expansion. Mathematically these assumptions lead to balance equations of the differential-difference type.
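A schematic single-level, steady-state balance of this differential-difference type might read as follows. The notation is assumed here for illustration (u lethargy, ξΣ_s the elastic slowing-down power in the age approximation, Σ_in the inelastic cross section, Δ the discrete lethargy gain per inelastic event, S a source); it is not transcribed from the thesis.

```latex
% Schematic steady-state balance, one inelastic level (illustrative form):
% continuous (age-theory) elastic slowing-down plus a discrete inelastic jump.
\frac{\mathrm{d}}{\mathrm{d}u}\bigl[\xi\,\Sigma_s(u)\,\phi(u)\bigr]
  \;+\; \Sigma_{\mathrm{in}}(u)\,\phi(u)
  \;=\; \Sigma_{\mathrm{in}}(u-\Delta)\,\phi(u-\Delta) \;+\; S(u)
```

The difference term couples the spectrum at lethargy u to its value a fixed step Δ earlier, which is exactly what makes Laplace transformation in the energy (lethargy) variable a natural first approach.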

The steady state problem is explored first by way of Laplace transformation of the energy variable. We then develop another steady state technique, valid for multiple inelastic level excitations, which depends on the level structure satisfying a physically reasonable constraint. In all cases the solutions we generate are compared with results obtained by modeling inelastic scattering with a separable, evaporative kernel.

The time dependent problem presents some new difficulties. By modeling the elastic scattering cross section in a particular way, we generate solutions to this more interesting problem. We conjecture the method of characteristics may be useful in analyzing time dependent problems with general cross sections. These ideas are briefly explored.


Part I: The dynamic response of an elastic half space to an explosion in a buried spherical cavity is investigated by two methods. The first is implicit, and the final expressions for the displacements at the free surface are given as a series of spherical wave functions whose coefficients are solutions of an infinite set of linear equations. The second method is based on Schwarz's technique to solve boundary value problems, and leads to an iterative solution, starting with the known expression for the point source in a half space as first term. The iterative series is transformed into a system of two integral equations, and into an equivalent set of linear equations. In this way, a dual interpretation of the physical phenomena is achieved. The systems are treated numerically and the Rayleigh wave part of the displacements is given in the frequency domain. Several comparisons with simpler cases are analyzed to show the effect of the cavity radius-depth ratio on the spectra of the displacements.

Part II: A high speed, large capacity, hypocenter location program has been written for an IBM 7094 computer. Important modifications to the standard method of least squares have been incorporated in it. Among them are a new way to obtain the depth of shocks from the normal equations, and the computation of variable travel times for the local shocks in order to account automatically for crustal variations. The multiregional travel times, largely based upon the investigations of the United States Geological Survey, are confronted with actual traverses to test their validity.
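A toy version of the location step can be written as a grid search with a uniform crustal velocity, in place of the least-squares normal equations and multiregional travel-time tables described above. All coordinates, the velocity, and the station layout are invented.

```python
# Toy epicentre search (illustrative, not the thesis program): minimise the
# RMS travel-time residual over a horizontal grid, straight rays, uniform
# velocity. Distances in km, times in s.
import math

def travel_time(src, sta, v=6.0):
    return math.dist(src, sta) / v

def locate(stations, arrivals, v=6.0, span=100.0, step=1.0):
    best = None
    x = -span
    while x <= span:
        y = -span
        while y <= span:
            tts = [travel_time((x, y), s, v) for s in stations]
            # origin time minimising squared residuals is the mean offset
            t0 = sum(a - t for a, t in zip(arrivals, tts)) / len(tts)
            rms = math.sqrt(sum((a - (t0 + t))**2
                                for a, t in zip(arrivals, tts)) / len(tts))
            if best is None or rms < best[0]:
                best = (rms, (x, y), t0)
            y += step
        x += step
    return best

stations = [(-50.0, 0.0), (50.0, 0.0), (0.0, 60.0), (20.0, -40.0)]
true_src, true_t0 = (10.0, 5.0), 2.0
arrivals = [true_t0 + travel_time(true_src, s) for s in stations]
rms, epicentre, t0 = locate(stations, arrivals)
```

Depth is the hard coordinate in practice, which is why the program's new way of extracting depth from the normal equations, and its variable travel times for crustal corrections, matter.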

It is shown that several crustal phases provide enough control to obtain good solutions in depth for nuclear explosions, though not all the recording stations are in the region where crustal corrections are considered. The use of European travel times to locate the French nuclear explosion of May 1962 in the Sahara proved more adequate than previous work.

A simpler program, with manual crustal corrections, is used to process the Kern County series of aftershocks, and a clearer picture of the tectonic mechanism of the White Wolf fault is obtained.

Shocks in the California region are processed automatically, and statistical frequency-depth and energy-depth curves are discussed in relation to the tectonics of the area.


Interest in the possible applications of a priori inequalities in linear elasticity theory motivated the present investigation. Korn's inequality under various side conditions is considered, with emphasis on the Korn's constant. In the "second case" of Korn's inequality, a variational approach leads to an eigenvalue problem; it is shown that, for simply-connected two-dimensional regions, the problem of determining the spectrum of this eigenvalue problem is equivalent to finding the values of Poisson's ratio for which the displacement boundary-value problem of linear homogeneous isotropic elastostatics has a non-unique solution.
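For orientation, the inequality in question has the following standard statement in the second case (the precise side conditions used in the thesis may differ; this is the classical form):

```latex
% Korn's inequality, "second case": the side condition is zero mean rotation.
\int_R \sum_{i,j} u_{i,j}^{\,2} \, dA
  \;\le\; K \int_R \sum_{i,j} e_{ij}^{\,2} \, dA,
\qquad e_{ij} = \tfrac{1}{2}\bigl(u_{i,j} + u_{j,i}\bigr),
\qquad \int_R \bigl(u_{1,2} - u_{2,1}\bigr)\, dA = 0 .
```

The smallest admissible K is the Korn constant, and the variational problem for this constant is what produces the eigenvalue problem connected to non-uniqueness in the displacement boundary-value problem.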

Previous work on the uniqueness and non-uniqueness issue for the latter problem is examined and the results applied to the spectrum of the Korn eigenvalue problem. In this way, further information on the Korn constant for general regions is obtained.

A generalization of the "main case" of Korn's inequality is introduced and the associated eigenvalue problem is again related to the displacement boundary-value problem of linear elastostatics in two dimensions.


This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for the Zaremba problems on smooth domains concerns detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules which give rise to high-order convergence even around singular points for the Zaremba problem. The resulting algorithms enjoy high-order convergence and can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansion for time-domain problems in non-separable physical domains with mixed boundary conditions.
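Schematically, the model Zaremba problem referred to is (standard formulation, stated here for orientation):

```latex
% Laplace equation with mixed Dirichlet/Neumann boundary data:
\Delta u = 0 \ \text{in } \Omega, \qquad
u = f \ \text{on } \Gamma_D, \qquad
\frac{\partial u}{\partial n} = g \ \text{on } \Gamma_N, \qquad
\partial\Omega = \overline{\Gamma_D} \cup \overline{\Gamma_N}.
% Near a Dirichlet/Neumann junction on a smooth boundary the solution
% generically behaves like u \sim c\, r^{1/2}\sin(\theta/2) in local polar
% coordinates (half-plane model).
```

It is this square-root behavior at the junctions of the Dirichlet and Neumann portions, even on smooth boundaries, that the regularization and specialized quadrature rules must resolve to retain high-order convergence.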


The intent of this study is to provide formal apparatus which facilitates the investigation of problems in the methodology of science. The introduction contains several examples of such problems and motivates the subsequent formalism.

A general definition of a formal language is presented, and this definition is used to characterize an individual’s view of the world around him. A notion of empirical observation is developed which is independent of language. The interplay of formal language and observation is taken as the central theme. The process of science is conceived as the finding of that formal language that best expresses the available experimental evidence.

To characterize the manner in which a formal language imposes structure on its universe of discourse, the fundamental concepts of elements and states of a formal language are introduced. Using these, the notion of a basis for a formal language is developed as a collection of minimal states distinguishable within the language. The relation of these concepts to those of model theory is discussed.

An a priori probability defined on sets of observations is postulated as a reflection of an individual’s ontology. This probability, in conjunction with a formal language and a basis for that language, induces a subjective probability describing an individual’s conceptual view of admissible configurations of the universe. As a function of this subjective probability, and consequently of language, a measure of the informativeness of empirical observations is introduced and is shown to be intuitively plausible – particularly in the case of scientific experimentation.

The developed formalism is then systematically applied to the general problems presented in the introduction. The relationship of scientific theories to empirical observations is discussed, and certain tacit, unstatable knowledge is shown to be necessary to fully comprehend the meaning of realistic theories. The idea that many common concepts can be specified only by drawing on knowledge obtained from an infinite number of observations is presented, and the problems of reductionism are examined in this context.

A definition of when one formal language can be considered to be more expressive than another is presented, and the change in the informativeness of an observation as language changes is investigated. In this regard it is shown that the information inherent in an observation may decrease for a more expressive language.

The general problem of induction and its relation to the scientific method are discussed. Two hypotheses concerning an individual’s selection of an optimal language for a particular domain of discourse are presented and specific examples from the introduction are examined.


Three different categories of flow problems of a fluid containing small particles are considered here. They are: (i) a fluid containing small, non-reacting particles (Parts I and II); (ii) a fluid containing reacting particles (Parts III and IV); and (iii) a fluid containing particles of two distinct sizes with collisions between the two groups of particles (Part V).

Part I

A numerical solution is obtained for a fluid containing small particles flowing over an infinite disc rotating at a constant angular velocity. It is a boundary layer type flow, and the boundary layer thickness for the mixture is estimated. For large Reynolds number, the solution suggests the boundary layer approximation of a fluid-particle mixture by assuming W = Wp. The error introduced is consistent with Prandtl’s boundary layer approximation. Outside the boundary layer, the flow field has to satisfy the “inviscid equation” in which the viscous stress terms are absent while the drag force between the particle cloud and the fluid is still important. Increase of particle concentration reduces the boundary layer thickness, and the amount of mixture being transported outwardly is reduced. A new parameter, β = 1/(Ωτv), which is also proportional to μ, is introduced. The secondary flow of the particle cloud depends very much on β. For small values of β, the particle cloud velocity attains its maximum value on the surface of the disc, and for infinitely large values of β, both the radial and axial particle velocity components vanish on the surface of the disc.

Part II

The “inviscid” equation for a gas-particle mixture is linearized to describe the flow over a wavy wall. Corresponding to the Prandtl-Glauert equation for pure gas, a fourth order partial differential equation in terms of the velocity potential ϕ is obtained for the mixture. The solution is obtained for the flow over a periodic wavy wall. For equilibrium flows where λv and λT approach zero and frozen flows in which λv and λT become infinitely large, the flow problem is basically similar to that obtained by Ackeret for a pure gas. For finite values of λv and λT, all quantities except v are not in phase with the wavy wall. Thus the drag coefficient CD is present even in the subsonic case, and similarly, all quantities decay exponentially for supersonic flows. The phase shift and the attenuation factor increase for increasing particle concentration.

Part III

Using the boundary layer approximation, the initial development of the combustion zone between the laminar mixing of two parallel streams of oxidizing agent and small, solid, combustible particles suspended in an inert gas is investigated. For the special case when the two streams are moving at the same speed, a Green’s function exists for the differential equations describing first order gas temperature and oxidizer concentration. Solutions in terms of error functions and exponential integrals are obtained. Reactions occur within a relatively thin region of the order of λD. Thus, it seems advantageous in the general study of two-dimensional laminar flame problems to introduce a chemical boundary layer of thickness λD within which reactions take place. Outside this chemical boundary layer, the flow field corresponds to the ordinary fluid dynamics without chemical reaction.

Part IV

The shock wave structure in a condensing medium of small liquid droplets suspended in a homogeneous gas-vapor mixture consists of the conventional compressive wave followed by a relaxation region in which the particle cloud and gas mixture attain momentum and thermal equilibrium. Immediately following the compressive wave, the partial pressure corresponding to the vapor concentration in the gas mixture is higher than the vapor pressure of the liquid droplets and condensation sets in. Farther downstream of the shock, evaporation appears when the particle temperature is raised by the hot surrounding gas mixture. The thickness of the condensation region depends very much on the latent heat. For relatively high latent heat, the condensation zone is small compared with λD.

For solid particles suspended initially in an inert gas, the relaxation zone immediately following the compression wave consists of a region where the particle temperature is first being raised to its melting point. When the particles are totally melted as the particle temperature is further increased, evaporation of the particles also plays a role.

The equilibrium condition downstream of the shock can be calculated and is independent of the model of the particle-gas mixture interaction.
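The relaxation toward that downstream equilibrium can be illustrated with a model problem. The linear relaxation laws and all constants below are assumptions for illustration, not the thesis equations.

```python
# Illustrative relaxation-zone integration: behind the compressive wave the
# particle velocity and temperature relax toward the gas values with time
# constants tau_v (momentum) and tau_T (thermal). Model problem only.

def relax(u_g=1.0, T_g=2.0, u_p0=0.3, T_p0=1.0,
          tau_v=1.0, tau_T=1.5, dt=0.001, t_end=10.0):
    u_p, T_p, t = u_p0, T_p0, 0.0
    while t < t_end:
        u_p += dt * (u_g - u_p) / tau_v   # drag-driven momentum exchange
        T_p += dt * (T_g - T_p) / tau_T   # convective heat exchange
        t += dt
    return u_p, T_p

u_p, T_p = relax()   # both approach the gas values far downstream
```

Whatever relaxation law is assumed only sets how fast the gap closes; the final equilibrium state is fixed by the conservation laws, which is the point made above about its independence from the interaction model.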

Part V

For a gas containing particles of two distinct sizes and satisfying certain conditions, momentum transfer due to collisions between the two groups of particles can be taken into consideration using the classical elastic spherical ball model. Both in the relatively simple problem of normal shock wave and the perturbation solutions for the nozzle flow, the transfer of momentum due to collisions which decreases the velocity difference between the two groups of particles is clearly demonstrated. The difference in temperature as compared with the collisionless case is quite negligible.


Let $\{Z_n\}_{n=-\infty}^{\infty}$ be a stochastic process with state space $S_1 = \{0, 1, \ldots, D-1\}$. Such a process is called a chain of infinite order. The transitions of the chain are described by the functions

$$Q_i(i^{(0)}) = P\bigl(Z_n = i \mid Z_{n-1} = i^{(0)}_1,\; Z_{n-2} = i^{(0)}_2,\; \ldots\bigr) \qquad (i \in S_1),$$

where $i^{(0)} = (i^{(0)}_1, i^{(0)}_2, \ldots)$ ranges over infinite sequences from $S_1$. If $i^{(n)} = (i^{(n)}_1, i^{(n)}_2, \ldots)$ for $n = 1, 2, \ldots$, then $i^{(n)} \to i^{(0)}$ means that for each $k$, $i^{(n)}_k = i^{(0)}_k$ for all $n$ sufficiently large.

Given functions $Q_i(i^{(0)})$ such that

(i) $0 \le Q_i(i^{(0)}) \le \xi < 1$,

(ii) $\sum_{i=0}^{D-1} Q_i(i^{(0)}) \equiv 1$,

(iii) $Q_i(i^{(n)}) \to Q_i(i^{(0)})$ whenever $i^{(n)} \to i^{(0)}$,

we prove the existence of a stationary chain of infinite order $\{Z_n\}$ whose transitions are given by

$$P\bigl(Z_n = i \mid Z_{n-1}, Z_{n-2}, \ldots\bigr) = Q_i(Z_{n-1}, Z_{n-2}, \ldots)$$

with probability 1. The method also yields stationary chains $\{Z_n\}$ for which (iii) does not hold but whose transition probabilities are, in a sense, “locally Markovian.” These and similar results extend a paper by T. E. Harris [Pac. J. Math., 5 (1955), 707-724].

Included is a new proof of the existence and uniqueness of a stationary absolute distribution for an Nth order Markov chain in which all transitions are possible. This proof allows us to achieve our main results without the use of limit theorem techniques.
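The Nth-order construction can be sketched concretely: histories of length N become the states of an ordinary first-order Markov chain, and when all transitions are positive that lifted chain has a unique stationary distribution. The chain below, with its made-up probabilities, is purely illustrative.

```python
# Lift an N-th order Markov chain to a first-order chain on length-N
# histories, then find the stationary distribution by power iteration.
from itertools import product

def lift(Q, D, N):
    """Q(i, history) -> probability of next symbol i given history
    (Z_{n-1}, ..., Z_{n-N}). Returns a first-order transition dict."""
    states = list(product(range(D), repeat=N))
    return {h: {(i,) + h[:-1]: Q(i, h) for i in range(D)} for h in states}

def stationary(P, iters=3000):
    states = list(P)
    pi = {h: 1.0 / len(states) for h in states}
    for _ in range(iters):
        new = {h: 0.0 for h in states}
        for h, row in P.items():
            for h2, p in row.items():
                new[h2] += pi[h] * p
        pi = new
    return pi

# 2nd-order chain on {0, 1}: the next symbol repeats the previous one with
# probability 0.8 when the last two symbols agree, else it is uniform.
def Q(i, h):
    if h[0] == h[1]:
        return 0.8 if i == h[0] else 0.2
    return 0.5

P = lift(Q, D=2, N=2)
pi = stationary(P)
```

Chains of infinite order are exactly the case where this lifting is unavailable (the history is unbounded), which is why the existence proof needs condition (iii) or the "locally Markovian" substitute instead.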


The Maxwell integral equations of transfer are applied to a series of problems involving flows of arbitrary density gases about spheres. As suggested by Lees, a two-sided Maxwellian-like weighting function containing a number of free parameters is utilized, and a sufficient number of partial differential moment equations is used to determine these parameters. Maxwell's inverse fifth-power force law is used to simplify the evaluation of the collision integrals appearing in the moment equations. All flow quantities are then determined by integration of the weighting function which results from the solution of the differential moment system. Three problems are treated: the heat flux from a slightly heated sphere at rest in an infinite gas; the velocity field and drag of a slowly moving sphere in an unbounded space; and the velocity field and drag torque on a slowly rotating sphere. Solutions to the third problem are found to both first and second order in surface Mach number, with the secondary centrifugal fan motion being of particular interest. Singular aspects of the moment method are encountered in the last two problems, and an asymptotic study of these difficulties leads to a formal criterion for a "well posed" moment system. The previously unanswered question of just how many moments must be used in a specific problem is now clarified to a great extent.


This thesis deals with two problems. The first is the determination of λ-designs, combinatorial configurations which are essentially symmetric block designs with the condition that every subset have the same cardinality negated. We construct an infinite family of such designs from symmetric block designs and obtain some basic results about their structure. These results enable us to solve the problem for λ = 3 and λ = 4. The second problem deals with configurations related to both λ-designs and (v, k, λ)-configurations. We have n − 1 k-subsets S_1, ..., S_{n−1} of {1, 2, ..., n} such that S_i ∩ S_j is a λ-set for i ≠ j. We determine the replication numbers of such a design in terms of n, k, and λ, apart from one exceptional class which we determine explicitly. In certain special cases we settle the problem entirely.
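The symmetric-design starting point can be checked concretely. In the Fano plane, a symmetric 2-(7,3,1) design, any two distinct blocks meet in exactly one point; λ-designs keep this constant-intersection property while dropping only the constant block size. The check below is illustrative and not from the thesis.

```python
# Verify the constant pairwise-intersection property on the Fano plane:
# 7 blocks of size 3 on 7 points, any two distinct blocks share exactly
# lambda = 1 point.
from itertools import combinations

fano = [frozenset(b) for b in
        [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6),
         (2, 5, 7), (3, 4, 7), (3, 5, 6)]]

intersections = {len(b1 & b2) for b1, b2 in combinations(fano, 2)}
# all 21 block pairs meet in exactly one point
```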


The problem considered is that of minimizing the drag of a symmetric plate in infinite cavity flow under the constraints of fixed arclength and fixed chord. The flow is assumed to be steady, irrotational, and incompressible. The effects of gravity and viscosity are ignored.

Using complex variables, expressions for the drag, arclength, and chord are derived in terms of two hodograph variables, Γ (the logarithm of the speed) and β (the flow angle), and two real parameters, a magnification factor and a parameter which determines how much of the plate is a free streamline.

Two methods are employed for optimization:

(1) The parameter method. Γ and β are expanded in finite orthogonal series of N terms. Optimization is performed with respect to the N coefficients in these series and the magnification and free-streamline parameters. This method is carried out for the case N = 1, and minimum drag profiles and drag coefficients are found for all values of the ratio of arclength to chord.

(2) The variational method. A variational calculus method for minimizing integral functionals of a function and its finite Hilbert transform is introduced. This method is applied to functionals of quadratic form, and a necessary condition for the existence of a minimum solution is derived. The variational method is applied to the minimum drag problem, and a nonlinear integral equation is derived but not solved.