12 results for "fixed speed induction generator"

in CaltechTHESIS


Relevance: 20.00%

Abstract:

A method is developed to calculate the settling speed of dilute arrays of spheres for three cases: (I) a random array of freely moving particles; (II) a random array of rigidly held particles; and (III) a cubic array of particles. The basic idea of the technique is to give a formal representation for the solution and then manipulate this representation in a straightforward manner to obtain the result. For infinite arrays of spheres, our results agree with the results previously found by other authors, and the analysis here appears to be simpler. This method is able to obtain more terms in the answer than was possible by Saffman's unified treatment for point particles. Some results for arbitrary two-sphere distributions are presented, and an analysis of the wall effect for particles settling in a tube is given. It is expected that the method presented here can be generalized to solve other types of problems.

Relevance: 20.00%

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W_circ^SQL / W_circ). Here W_circ is the light power circulating in the interferometer arms and W_circ^SQL ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(-2R)) is injected into the interferometer's output port, the SQL can be beat with a much reduced laser power: h/h_SQL ~ √(e^(-2R) W_circ^SQL / W_circ). For realistic parameters (e^(-2R) ≃ 0.1 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
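As a purely numerical illustration of this scaling (a minimal sketch: the square-root relation and the representative parameter values are the ones quoted above; the function name and code structure are ours):

```python
import math

def sql_beating_factor(w_circ_kw, w_sql_kw=800.0, power_squeeze=1.0):
    """Evaluate h/h_SQL ~ sqrt(power_squeeze * W_circ^SQL / W_circ), as quoted above.

    w_circ_kw     -- circulating arm power in kW
    w_sql_kw      -- circulating power needed to reach the SQL at 100 Hz (~800 kW)
    power_squeeze -- power-squeeze factor e^(-2R); 1.0 means no injected squeezing
    """
    return math.sqrt(power_squeeze * w_sql_kw / w_circ_kw)

# Representative values quoted above: e^(-2R) ~ 0.1 and W_circ ~ 800 to 2000 kW.
for w_circ in (800.0, 2000.0):
    ratio = sql_beating_factor(w_circ, power_squeeze=0.1)
    print(f"W_circ = {w_circ:6.0f} kW  ->  h/h_SQL ~ {ratio:.2f}  (beats SQL by ~{1/ratio:.1f}x)")
```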

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.

Relevance: 20.00%

Abstract:

Smartphones and other powerful sensor-equipped consumer devices make it possible to sense the physical world at an unprecedented scale. Nearly 2 million Android and iOS devices are activated every day, each carrying numerous sensors and a high-speed internet connection. Whereas traditional sensor networks have typically deployed a fixed number of devices to sense a particular phenomenon, community networks can grow as additional participants choose to install apps and join the network. In principle, this allows networks of thousands or millions of sensors to be created quickly and at low cost. However, making reliable inferences about the world using so many community sensors involves several challenges, including scalability, data quality, mobility, and user privacy.

This thesis focuses on how learning at both the sensor- and network-level can provide scalable techniques for data collection and event detection. First, this thesis considers the abstract problem of distributed algorithms for data collection, and proposes a distributed, online approach to selecting which set of sensors should be queried. In addition to providing theoretical guarantees for submodular objective functions, the approach is also compatible with local rules or heuristics for detecting and transmitting potentially valuable observations. Next, the thesis presents a decentralized algorithm for spatial event detection, and describes its use in detecting strong earthquakes within the Caltech Community Seismic Network. Despite the fact that strong earthquakes are rare and complex events, and that community sensors can be very noisy, our decentralized anomaly detection approach obtains theoretical guarantees for event detection performance while simultaneously limiting the rate of false alarms.
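The flavor of the sensor-selection problem is easiest to see with a toy sketch. The snippet below is not the thesis's distributed, online algorithm; it is the classical centralized greedy rule for a monotone submodular objective (here an invented coverage function), which is the kind of objective for which the guarantees above are stated.

```python
import random

def greedy_select(sensors, utility, budget):
    """Greedily choose `budget` sensors to maximize a set-valued utility.

    For a monotone submodular utility, the greedy set achieves at least a
    (1 - 1/e) fraction of the best possible value.
    """
    chosen = set()
    for _ in range(budget):
        candidates = [s for s in sensors if s not in chosen]
        if not candidates:
            break
        best = max(candidates, key=lambda s: utility(chosen | {s}) - utility(chosen))
        chosen.add(best)
    return chosen

# Toy submodular objective: how many grid cells the chosen sensors cover.
random.seed(0)
coverage = {s: {random.randrange(20) for _ in range(4)} for s in range(50)}

def utility(selected):
    covered = set()
    for s in selected:
        covered |= coverage[s]
    return len(covered)

print(sorted(greedy_select(range(50), utility, budget=5)))
```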

Relevance: 20.00%

Abstract:

Optical microscopy has become an indispensable tool for biological research since its invention, mostly owing to its sub-cellular spatial resolution, non-invasiveness, instrumental simplicity, and the intuitive observations it provides. Nonetheless, obtaining reliable, quantitative spatial information from conventional wide-field optical microscopy is not as straightforward as it appears to be. This is because in the images acquired by optical microscopy, the information about out-of-focus regions is spatially blurred and mixed with in-focus information. In other words, conventional wide-field optical microscopy transforms the three-dimensional spatial information, or volumetric information, about the objects into a two-dimensional form in each acquired image, and therefore distorts the spatial information about the object. Several fluorescence holography-based methods have demonstrated the ability to obtain three-dimensional information about the objects, but these methods generally rely on decomposing stereoscopic visualizations to extract volumetric information and are unable to resolve complex three-dimensional structures such as a multi-layer sphere.

The concept of optical-sectioning techniques, on the other hand, is to detect only two-dimensional information about an object at each acquisition. Specifically, each image obtained by optical-sectioning techniques contains mainly the information about an optically thin layer inside the object, as if only a thin histological section is being observed at a time. Using such a methodology, obtaining undistorted volumetric information about the object simply requires taking images of the object at sequential depths.

Among existing methods of obtaining volumetric information, the practicability of optical sectioning has made it the most commonly used and most powerful one in biological science. However, when applied to imaging living biological systems, conventional single-point-scanning optical-sectioning techniques often cause a certain degree of photo-damage because of the high focal intensity at the scanning point. In order to overcome this issue, several wide-field optical-sectioning techniques have been proposed and demonstrated, although not without introducing new limitations and compromises such as low signal-to-background ratios and reduced axial resolution. As a result, single-point-scanning optical-sectioning techniques remain the most widely used instruments for volumetric imaging of living biological systems to date.

In order to develop wide-field optical-sectioning techniques that have optical performance equivalent to single-point-scanning ones, this thesis first introduces the mechanisms and limitations of existing wide-field optical-sectioning techniques, and then presents our innovations that aim to overcome these limitations. We demonstrate, theoretically and experimentally, that our proposed wide-field optical-sectioning techniques can achieve diffraction-limited optical sectioning, low out-of-focus excitation, and high-frame-rate imaging in living biological systems. In addition to these imaging capabilities, our proposed techniques can be instrumentally simple and economical, and are straightforward to implement on conventional wide-field microscopes. These advantages together show the potential of our innovations to be widely used for high-speed, volumetric fluorescence imaging of living biological systems.

Relevance: 20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop an adaptive testing methodology, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
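The overall adaptive loop is easy to sketch. The snippet below is not BROAD and does not implement the EC2 criterion; it is a generic Bayesian adaptive loop that greedily picks tests by expected information gain (one of the baseline criteria mentioned above), with invented toy likelihoods standing in for the theories' predictions.

```python
import numpy as np

def expected_information_gain(prior, likelihoods):
    """Expected entropy reduction over hypotheses from running one test.

    prior       -- shape (H,), current posterior over candidate theories
    likelihoods -- shape (H, O), P(outcome o | theory h) for this test
    """
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    p_outcome = np.clip(prior @ likelihoods, 1e-12, None)      # (O,)
    posteriors = prior[:, None] * likelihoods / p_outcome      # column o = posterior given o
    return entropy(prior) - sum(p_outcome[o] * entropy(posteriors[:, o])
                                for o in range(likelihoods.shape[1]))

def adaptive_experiment(prior, tests, observe, n_rounds=10):
    """Greedy adaptive loop: pick the most informative test, observe, update beliefs."""
    posterior = prior.copy()
    for _ in range(n_rounds):
        best_test = max(tests, key=lambda t: expected_information_gain(posterior, tests[t]))
        outcome = observe(best_test)                 # the subject's (possibly noisy) choice
        posterior = posterior * tests[best_test][:, outcome]
        posterior = posterior / posterior.sum()
    return posterior

# Tiny synthetic run: 3 candidate theories, 4 binary-outcome tests, truth = theory 2.
rng = np.random.default_rng(0)
tests = {}
for t in range(4):
    p0 = rng.uniform(0.1, 0.9, size=3)               # P(outcome 0 | theory), per theory
    tests[t] = np.stack([p0, 1.0 - p0], axis=1)      # shape (3 theories, 2 outcomes)
truth = 2
observe = lambda t: int(rng.random() >= tests[t][truth, 0])
print(np.round(adaptive_experiment(np.ones(3) / 3, tests, observe), 3))
```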

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
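A minimal sketch of the kind of reference-dependent utility inside a multinomial-logit discrete-choice model described here; the functional form, coefficient values, and loss-aversion parameter below are illustrative assumptions, not the thesis's estimated specification.

```python
import numpy as np

def loss_averse_utility(price, ref_price, beta_price=-1.0, lam=2.25):
    """Reference-dependent utility: a price below the reference is a 'gain';
    a price above it is a 'loss' weighted `lam` times more heavily.  Illustrative only."""
    gap = np.asarray(ref_price, dtype=float) - np.asarray(price, dtype=float)
    gain_loss = np.where(gap >= 0, gap, lam * gap)
    return beta_price * np.asarray(price, dtype=float) + gain_loss

def choice_probabilities(prices, ref_prices):
    """Multinomial-logit choice probabilities over substitute items."""
    u = loss_averse_utility(prices, ref_prices)
    expu = np.exp(u - u.max())
    return expu / expu.sum()

# Two close substitutes: item 0 just came off a discount (reference 8, now 10),
# item 1 is unchanged.  Loss aversion shifts demand toward the substitute.
print(choice_probabilities(prices=[10.0, 10.0], ref_prices=[8.0, 10.0]))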

In future work, BROAD can be widely applicable for testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed, but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
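A minimal, centralized sketch of the kind of gradient step Algorithm 1 takes; the abstract does not give the exact objective, so a squared net-load ("valley-filling") objective is assumed here, and the distributed message-passing and non-negativity of consumption are omitted for brevity.

```python
import numpy as np

def schedule_deferrable_loads(base_load, energy_req, n_iters=15, step=0.05):
    """Shift each deferrable load's energy across T time slots to flatten net demand.

    base_load  -- shape (T,), non-deferrable demand net of renewable generation
    energy_req -- shape (N,), total energy each deferrable load must receive
    Plain gradient descent on sum_t (net load)^2, followed by a projection back
    onto each load's total-energy constraint.
    """
    T = len(base_load)
    profiles = np.tile(energy_req[:, None] / T, (1, T))       # start from flat profiles
    for _ in range(n_iters):
        net = base_load + profiles.sum(axis=0)                # aggregate net demand
        profiles -= step * 2.0 * net                          # gradient of sum_t net_t^2
        profiles += (energy_req - profiles.sum(axis=1))[:, None] / T   # restore energy totals
    return profiles

base = np.array([5.0, 3.0, 1.0, 0.5, 2.0, 6.0])               # e.g. evening peak, midday dip
profiles = schedule_deferrable_loads(base, energy_req=np.array([3.0, 2.0]))
print(np.round(base + profiles.sum(axis=0), 2))               # net demand is nearly flat
```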

We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: Algorithm 2 uses updated predictions of renewable generation as if they were the true values, and computes a pseudo load to stand in for future deferrable loads. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.
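A rough sketch of one receding-horizon step in this spirit, reusing the schedule_deferrable_loads function from the previous sketch; the handling of the pseudo load's zero-power constraint is simplified here (applied after the optimization rather than inside it), and all prediction inputs are placeholders.

```python
import numpy as np

def mpc_step(horizon_forecast, active_energy_remaining, expected_future_energy):
    """One receding-horizon step in the spirit of Algorithm 2 (rough sketch).

    The forecast net base load over the remaining horizon is treated as exact,
    and all not-yet-arrived deferrable loads are lumped into a single pseudo
    load carrying their expected total energy.
    """
    energy = np.append(active_energy_remaining, expected_future_energy)
    profiles = schedule_deferrable_loads(horizon_forecast, energy)
    profiles[-1, 0] = 0.0      # pseudo load draws no power in the current slot
                               # (a post-hoc simplification of the constraint)
    return profiles[:-1, 0]    # commit only the current-slot power of the real loads
```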

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, bring significant challenges to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other seeking a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, and 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternating-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70× speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
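Algorithm 9's gradient estimates rest on exactly this kind of linearization. The sketch below is the standard single-phase linearized DistFlow voltage approximation under the two assumptions just stated (losses neglected, voltages nearly balanced); it only illustrates the approximation and does not reproduce the thesis's multiphase model. The feeder data at the bottom are invented.

```python
import numpy as np

def lindistflow_voltages(parent, r, x, p_load, q_load, v0=1.0):
    """Approximate voltage magnitudes on a single-phase radial feeder (per unit).

    Linearized DistFlow: with line losses neglected, the power through the line
    into bus j equals the total load at and below j, and
        v_j^2 ~ v_parent^2 - 2 * (r_j * P_j + x_j * Q_j).
    parent[j] is the upstream bus of bus j; bus 0 is the substation at voltage v0.
    """
    n = len(parent)
    children = [[] for _ in range(n)]
    for j in range(1, n):
        children[parent[j]].append(j)

    def downstream(j):                       # total load at bus j and below
        P, Q = p_load[j], q_load[j]
        for c in children[j]:
            dP, dQ = downstream(c)
            P, Q = P + dP, Q + dQ
        return P, Q

    v2 = np.full(n, v0 ** 2)
    for j in range(1, n):                    # buses numbered so that parents come first
        P, Q = downstream(j)
        v2[j] = v2[parent[j]] - 2.0 * (r[j] * P + x[j] * Q)
    return np.sqrt(v2)

# Toy 4-bus feeder: 0 -> 1 -> 2 and 0 -> 1 -> 3 (loads and impedances in per unit).
print(np.round(lindistflow_voltages(parent=[0, 0, 1, 1],
                                    r=[0, 0.01, 0.02, 0.02], x=[0, 0.02, 0.04, 0.04],
                                    p_load=[0, 0.3, 0.2, 0.1], q_load=[0, 0.1, 0.05, 0.05]), 4))
```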

Relevance: 20.00%

Abstract:

Mean velocity profiles were measured in the 5” x 60” wind channel of the turbulence laboratory at the GALCIT, by the use of a hot-wire anemometer. The repeatability of results was established, and the accuracy of the instrumentation estimated. Scatter in the experimental results is little, if any, beyond this limit, although some effect might be expected to arise from variations in atmospheric humidity, no account of this factor having been taken in the present work. Also, slight unsteadiness in flow conditions will be responsible for some scatter.

Irregularities of a hot-wire in close proximity to a solid boundary at low speeds were observed, as others have already found.

That Kármán’s logarithmic law holds reasonably well over the main part of a fully developed turbulent flow was checked, the equation u/u_τ = 6.0 + 6.25 log10(y u_τ/ν) being obtained (u_τ the friction velocity, ν the kinematic viscosity), and, as has previously been the case, the experimental points do not quite form one straight line in the region where viscosity effects are small. The values of the constants of this law giving the best over-all agreement were determined and compared with those obtained by others.
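For a quick numerical reading of the fitted constants (the law is the one quoted above; the wall-distance values below are arbitrary illustrations):

```python
import math

def u_plus(y_plus):
    """Fitted law from above: u/u_tau = 6.0 + 6.25 * log10(y * u_tau / nu)."""
    return 6.0 + 6.25 * math.log10(y_plus)

for y in (30, 100, 300, 1000):
    print(f"y+ = {y:5d}   u/u_tau = {u_plus(y):.2f}")
```

In natural-logarithm form, the fitted slope corresponds to 1/κ = 6.25/ln 10 ≈ 2.71, i.e. a Kármán constant κ ≈ 0.37, close to the commonly quoted value near 0.4.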

The range of Reynolds numbers used (based on half-width of channel) was from 20,000 to 60,000.

Relevance: 20.00%

Abstract:

I. Studies on Nicotinamide Adenine Dinucleotide Glycohydrase (NADase)

NADase, like tyrosinase and L-amino acid oxidase, is not present in two-day-old cultures of wild-type Neurospora, but it is coinduced with those two enzymes during starvation in phosphate buffer. The induction of NADase, like that of tyrosinase, is inhibited by puromycin. The induction of all three enzymes is inhibited by actinomycin D. These results suggest that NADase is synthesized de novo during induction, as has been shown directly for tyrosinase. NADase induction differs in being inhibited by certain amino acids.

The tyrosinaseless mutant ty-1 contains a non-dialyzable, heat-labile inhibitor of NADase. A new mutant, P110A, synthesizes NADase and L-amino acid oxidase while growing. A second strain, pe, fl;cot, makes NADase while growing. Both strains can be induced to make the other enzymes. These two strains prove that the control of these three enzymes is divisible. The strain P110A makes NADase even when grown in the presence of Tween 80. The synthesis of both NADase and L-amino acid oxidase by P110A is suppressed by complete medium. The theory of control of the synthesis of the enzymes is discussed.

II. Studies with EDTA

Neurospora tyrosinase contains copper but, unlike other phenol oxidases, this copper has never been removed reversibly. It was thought that the apo-enzyme might be made in vivo in the absence of copper. Therefore cultures were treated with EDTA to remove copper before the enzyme was induced. Although no apo-tyrosinase was detected, new information on the induction process was obtained.

Treatment of Neurospora with 0.5% EDTA, pH 7, inhibits the subsequent induction, during starvation in phosphate buffer, of tyrosinase, L-amino acid oxidase, and NADase. The inhibition of tyrosinase and L-amino acid oxidase induction is completely reversed by adding 5 × 10^-5 M CaCl2, 5 × 10^-4 M CuSO4, and a mixture of L-amino acids (2 × 10^-3 M each) to the buffer. Tyrosinase induction is also fully restored by 5 × 10^-4 M CaCl2 and amino acids. As yet, NADase has been only partially restored.

The copper probably acts by sequestering EDTA left in the mycelium and may be replaced by nickel. The EDTA apparently removes some calcium from the mycelium, which the added calcium replaces. Magnesium cannot replace calcium. The amino acids probably replace endogenous amino acids lost to the buffer after the EDTA treatment.

The EDTA treatment also increases permeability, thereby increasing the sensitivity of induction to inhibition by actinomycin D and allowing cell contents to be lost to the induction buffer. EDTA treatment also inhibits the uptake of exogenous amino acids and their incorporation into proteins.

The lag period that precedes the first appearance of tyrosinase is demonstrated to be a separate dynamic phase of induction. It requires oxygen. It is inhibited by EDTA, but can be completed after EDTA treatment in the presence of 5 × 10^-5 M CaCl2 alone, although no tyrosinase is synthesized under these conditions.

The time course of induction has an early exponential phase suggesting an autocatalytic mechanism of induction.

The mode of action of EDTA, the process of induction and the kinetics of induction are discussed.

Relevance: 20.00%

Abstract:

Suppose that AG is a solvable group with normal subgroup G, where (|A|, |G|) = 1. Assume that A is a class-two odd p-group all of whose irreducible representations are isomorphic to subgroups of extraspecial p-groups. If p^c ≠ r^d + 1 for any c = 1, 2 and any prime r where r^(2d+1) divides |G|, and if C_G(A) = 1, then the Fitting length of G is bounded by the power of p dividing |A|.

The theorem is proved by applying a fixed point theorem to a reduction of the Fitting series of G. The fixed point theorem is proved by reducing a minimal counterexample. If R is an extraspecial r-subgroup of G fixed by A_1, a subgroup of A, where A_1 centralizes D(R), then all irreducible characters of A_1R that are nontrivial on Z(R) are computed. All nonlinear characters of a class-two p-group are computed.

Relevance: 20.00%

Abstract:

A series of meso-phenyloctamethylporphyrins covalently bonded at the 4'-phenyl position to quinones via rigid bicyclo[2.2.2]octane spacers were synthesized for the study of the dependence of electron transfer reaction rate on solvent, distance, temperature, and energy gap. A general and convergent synthesis was developed based on the condensation of ac-biladienes with masked quinone-spacer-benzaldehydes. From picosecond fluorescence spectroscopy, emission lifetimes were measured in seven solvents of varying polarity. Rate constants were determined to vary from 5.0 × 10^9 s^-1 in N,N-dimethylformamide to 1.15 × 10^10 s^-1 in benzene, and were observed to rise at most by about a factor of three with decreasing solvent polarity. Experiments at low temperature in 2-MTHF glass (77 K) revealed fast, nearly temperature-independent electron transfer characterized by non-exponential fluorescence decays, in contrast to monophasic behavior in fluid solution at 298 K. This example evidently represents the first photosynthetic model system not based on proteins to display nearly temperature-independent electron transfer at high temperatures (nuclear tunneling). Low temperatures appear to freeze out the rotational motion of the chromophores, and the observed non-exponential fluorescence decays may be explained as a result of electron transfer from an ensemble of rotational conformations. The non-exponentiality demonstrates the sensitivity of the electron transfer rate to the precise magnitude of the electronic matrix element, which supports the expectation that electron transfer is nonadiabatic in this system. The addition of a second bicyclooctane moiety (15 Å vs. 18 Å edge-to-edge between porphyrin and quinone) reduces the transfer rate by at least a factor of 500-1500. Porphyrin-quinones with variously substituted quinones allowed an examination of the dependence of the electron transfer rate constant κ_ET on reaction driving force. The classical trend of increasing rate with increasing exothermicity occurs from 0.7 eV ≤ |ΔG0'(R)| ≤ 1.0 eV until a maximum is reached (κ_ET = 3 × 10^8 s^-1 rising to 1.15 × 10^10 s^-1 in acetonitrile). The rate remains insensitive to ΔG0 for ~300 mV, from 1.0 eV ≤ |ΔG0'(R)| ≤ 1.3 eV, and then decreases slightly in the most exothermic case studied (cyanoquinone, κ_ET = 5 × 10^9 s^-1).
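The driving-force behavior summarized here (rate rising with exothermicity, reaching a maximum, then falling off) is the shape predicted by the classical Marcus expression. The snippet below evaluates that standard expression purely for orientation: the reorganization energy and prefactor are placeholder values, nothing is fit to the data above, and the ~300 mV plateau reported in the abstract is not reproduced by this simplest classical form.

```python
import math

def marcus_rate(dG0_eV, lambda_eV=1.0, prefactor=1.0e10, kT_eV=0.0257):
    """Classical Marcus rate shape: k = A * exp(-(dG0 + lambda)^2 / (4 * lambda * kT))."""
    return prefactor * math.exp(-(dG0_eV + lambda_eV) ** 2 / (4.0 * lambda_eV * kT_eV))

for dG0 in (-0.7, -1.0, -1.3, -1.6):
    print(f"dG0 = {dG0:5.2f} eV   k ~ {marcus_rate(dG0):.2e} s^-1")
```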

Relevance: 20.00%

Abstract:

The problem considered is that of minimizing the drag of a symmetric plate in infinite cavity flow under the constraints of fixed arclength and fixed chord. The flow is assumed to be steady, irrotational, and incompressible. The effects of gravity and viscosity are ignored.

Using complex variables, expressions for the drag, arclength, and chord are derived in terms of two hodograph variables, Γ (the logarithm of the speed) and β (the flow angle), and two real parameters: a magnification factor and a parameter which determines how much of the plate is a free streamline.

Two methods are employed for optimization:

(1) The parameter method. Γ and β are expanded in finite orthogonal series of N terms. Optimization is performed with respect to the N coefficients in these series and the magnification and free-streamline parameters. This method is carried out for the case N = 1, and minimum-drag profiles and drag coefficients are found for all values of the ratio of arclength to chord. (A schematic numerical sketch of this workflow appears after this list.)

(2) The variational method. A variational calculus method for minimizing integral functionals of a function and its finite Hilbert transform is introduced. This method is applied to functionals of quadratic form, and a necessary condition for the existence of a minimum solution is derived. The variational method is applied to the minimum drag problem, and a nonlinear integral equation is derived but not solved.
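As a schematic of the parameter-method workflow in (1): expand in a short series, then minimize drag subject to the fixed-arclength and fixed-chord constraints. The drag, arclength, and chord functionals below are placeholder stubs (the abstract does not give the hodograph integrals), so only the optimization scaffolding is meaningful.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder functionals: in the thesis these are integrals of the hodograph
# variables Gamma (log speed) and beta (flow angle); stand-in stubs are used here
# so the constrained-minimization workflow can be run end to end.
def drag(params):      return params @ params + 1.0
def arclength(params): return 2.0 + 0.5 * params[0]
def chord(params):     return 1.0 + 0.2 * params[1]

def minimize_drag(n_coeffs=1, target_arclength=2.2, target_chord=1.05):
    """Parameter method with N = n_coeffs: optimize the series coefficients plus the
    magnification and free-streamline parameters under fixed arclength and chord."""
    x0 = np.zeros(n_coeffs + 2)                     # N coefficients + 2 extra parameters
    cons = [{"type": "eq", "fun": lambda p: arclength(p) - target_arclength},
            {"type": "eq", "fun": lambda p: chord(p) - target_chord}]
    return minimize(drag, x0, constraints=cons, method="SLSQP")

print(minimize_drag().x)
```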

Relevance: 20.00%

Abstract:

In a paper published in 1961, L. Cesari [1] introduces a method which extends certain earlier existence theorems of Cesari and Hale ([2] to [6]) for perturbation problems to strictly nonlinear problems. Various authors ([1], [7] to [15]) have now applied this method to nonlinear ordinary and partial differential equations. The basic idea of the method is to use the contraction principle to reduce an infinite-dimensional fixed point problem to a finite-dimensional problem which may be attacked using the methods of fixed point indexes.

The following is my formulation of the Cesari fixed point method:

Let B be a Banach space and let S be a finite-dimensional linear subspace of B. Let P be a projection of B onto S and suppose Γ ⊆ B is such that PΓ is compact and such that, for every x in PΓ, P⁻¹x ∩ Γ is closed. Let W be a continuous mapping from Γ into B. The Cesari method gives sufficient conditions for the existence of a fixed point of W in Γ.

Let I denote the identity mapping in B. Clearly y = Wy for some y in Γ if and only if both of the following conditions hold:

(i) Py = PWy.

(ii) y = (P + (I - P)W)y.

Definition. The Cesari fixed point method applies to (Γ, W, P) if and only if the following three conditions are satisfied:

(1) For each x in PΓ, P + (I - P)W is a contraction from P⁻¹x ∩ Γ into itself. Let y(x) be that element (uniqueness follows from the contraction principle) of P⁻¹x ∩ Γ which satisfies the equation y(x) = Py(x) + (I - P)Wy(x).

(2) The function y just defined is continuous from PΓ into B.

(3) There are no fixed points of PWy on the boundary of PΓ, so that the (finite-dimensional) fixed point index i(PWy, int PΓ) is defined.

Definition. If the Cesari fixed point method applies to (Γ, W, P), then define i(Γ, W, P) to be the index i(PWy, int PΓ).
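As a purely numerical illustration of the two-level structure in this definition (not part of the thesis): take B to be periodic functions sampled on a grid, P the projection onto a few low Fourier modes, and W a mildly nonlinear operator, so that condition (1) holds by the contraction principle. The finite-dimensional fixed point of x ↦ PWy(x) is then found here by direct iteration rather than through an index argument.

```python
import numpy as np

N_GRID, N_MODES = 128, 3
t = 2 * np.pi * np.arange(N_GRID) / N_GRID

def P(y):
    """Projection onto the span of Fourier modes |k| <= N_MODES."""
    c = np.fft.rfft(y)
    c[N_MODES + 1:] = 0
    return np.fft.irfft(c, n=N_GRID)

def W(y):
    """A toy nonlinear operator with small Lipschitz constant, so that
    P + (I - P)W is a contraction on each fiber P^(-1)x, as in condition (1)."""
    return np.cos(t) + 0.3 * np.sin(y)

def fiber_fixed_point(x, tol=1e-12):
    """Solve y = x + (I - P)W(y), i.e. y = Py + (I - P)Wy with Py = x."""
    y = x.copy()
    for _ in range(200):
        y_new = x + (W(y) - P(W(y)))
        if np.max(np.abs(y_new - y)) < tol:
            return y_new
        y = y_new
    return y

# Outer, finite-dimensional problem: find x in PB with x = P W(y(x)).
x = np.zeros(N_GRID)
for _ in range(200):
    x = P(W(fiber_fixed_point(x)))

y = fiber_fixed_point(x)
print("residual ||y - W(y)||_inf =", np.max(np.abs(y - W(y))))
```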

The three theorems of this thesis can now be easily stated.

Theorem 1 (Cesari). If i(Γ, W, P) is defined and i(Γ, W, P) ≠ 0, then there is a fixed point of W in Γ.

Theorem 2. Let the Cesari fixed point method apply to both (Γ, W, P1) and (Γ, W, P2). Assume that P2P1 = P1P2 = P1 and assume that either of the following two conditions holds:

(1) For every b in B and every z in the range of P2, we have that ‖b - P2b‖ ≤ ‖b - z‖.

(2) P2Γ is convex.

Then i(Γ, W, P1) = i(Γ, W, P2).

Theorem 3. If Ω is a bounded open set and W is a compact operator defined on Ω so that the (infinite-dimensional) Leray-Schauder index i_LS(W, Ω) is defined, and if the Cesari fixed point method applies to (Ω, W, P), then i(Ω, W, P) = i_LS(W, Ω).

Theorems 2 and 3 are proved using mainly a homotopy theorem and a reduction theorem for the finite-dimensional and the Leray-Schauder indexes. These and other properties of indexes will be listed before the theorem in which they are used.