Abstract:
Fluid diffusion in glassy polymers proceeds in ways that are not explained by the standard diffusion model. Although the reasons for the anomalous effects are not known, much of the observed behavior is attributed to the long times that polymers below their glass transition temperature take to adjust to changes in their condition. The slow internal relaxations of the polymer chains ensure that the material properties are history-dependent, and also allow both local inhomogeneities and differential swelling to occur. Two models are developed in this thesis with the intent of accounting for these effects in the diffusion process.
In Part I, a model is developed to account for both the history dependence of the glassy polymer and the dual sorption that occurs when gas molecules are immobilized by local heterogeneities. A preliminary study of a special case of this model is conducted, showing the existence of travelling wave solutions and using perturbation techniques to investigate the effect of generalized diffusion mechanisms on their form. An integral averaging method is used to estimate the penetrant front position.
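For orientation only, the classical dual-mode sorption picture that such models generalize partitions the penetrant into a mobile dissolved population $c_D$ and an immobilized population $c_H$; the constitutive choices below (Henry's-law dissolution and Langmuir-type immobilization) are the standard textbook forms, not necessarily the exact model developed in Part I:

\[
\frac{\partial}{\partial t}\left(c_D + c_H\right) = \frac{\partial}{\partial x}\!\left(D \frac{\partial c_D}{\partial x}\right),
\qquad
c_D = k_D\, p,
\qquad
c_H = \frac{C'_H\, b\, p}{1 + b\, p},
\]

where $p$ is the penetrant activity, $D$ the diffusivity of the mobile species, and $k_D$, $C'_H$, $b$ the usual dual-mode parameters.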
In Part II, a model is developed for particle diffusion coupled with displacements in isotropic viscoelastic materials. The nonlinear dependence of the material properties on the fluid concentration is taken into account, while pure displacements are assumed to remain within the range of linear viscoelasticity. A fairly general model is obtained for three-dimensional irrotational movements, with its development based on the assumptions of irreversible thermodynamics. With the help of some dimensional analysis, this model is reduced to a simpler version proposed for the study of Case II behavior.
Abstract:
This document contains three papers examining the microstructure of financial interaction in development and market settings. I first examine the industrial organization of financial exchanges, specifically limit order markets. In this section, I perform a case study of Google stock surrounding a surprising earnings announcement in the 3rd quarter of 2009, uncovering parameters that describe information flows and liquidity provision. I then explore the disbursement process for community-driven development projects. This section is game theoretic in nature, using a novel three-player ultimatum structure. I finally develop econometric tools to simulate equilibrium and identify equilibrium models in limit order markets.
In chapter two, I estimate an equilibrium model using limit order data, finding parameters that describe information and liquidity preferences for trading. As a case study, I estimate the model for Google stock surrounding an unexpected good-news earnings announcement in the 3rd quarter of 2009. I find a substantial decrease in asymmetric information prior to the earnings announcement. I also simulate counterfactual dealer markets and find empirical evidence that limit order markets perform more efficiently than do their dealer market counterparts.
In chapter three, I examine Community-Driven Development (CDD). CDD is considered a tool for empowering communities to develop their own aid projects. While evidence has been mixed as to the effectiveness of CDD in achieving disbursement to intended beneficiaries, the literature maintains that local elites generally take control of most programs. I present a three-player ultimatum game which describes a potential decentralized aid procurement process. Players successively split a dollar in aid money, and the final player, the targeted community member, decides whether or not to blow the whistle. Despite the elite capture present in my model, I find conditions under which money reaches targeted recipients. My results describe a perverse possibility in the decentralized aid process which could make detection of elite capture more difficult than previously considered. These processes may reconcile recent empirical work claiming effectiveness of the decentralized aid process with case studies claiming otherwise.
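As a purely illustrative sketch of backward induction in a sequential three-player split with a whistle-blowing option at the end (the participation thresholds and whistle payoff here are hypothetical placeholders, not the game or calibration of the chapter):

```python
# Illustrative three-player sequential "split the dollar" game with a whistle-blowing
# option for the last player. All parameters below are hypothetical and serve only to
# show the backward-induction logic described in the text.

def solve(total=1.0, whistle_payoff=0.0, eps=1e-6):
    """Backward induction:
    - Player 3 (targeted community member) accepts a residual r iff r >= whistle_payoff;
      otherwise the whistle is blown and everyone upstream receives 0.
    - Player 2 keeps as much as possible of what reaches them, passing player 3 just
      enough to keep them silent.
    - Player 1 keeps the rest, leaving just enough for players 2 and 3 combined.
    """
    r3 = whistle_payoff + eps      # minimal residual player 3 will accept
    s2 = eps                       # minimal amount that keeps player 2 participating
    s1 = total - (s2 + r3)         # player 1 (first elite) captures the remainder
    return {"elite_1": s1, "elite_2": s2, "community_member": r3}

if __name__ == "__main__":
    # With a zero outside option for whistle-blowing, nearly all aid is captured upstream,
    # yet a small amount still reaches the targeted recipient, so no whistle is blown.
    print(solve())
```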
In chapter four, I develop in more depth the empirical and computational means to estimate model parameters in the case study in chapter two. I describe the liquidity supplier problem and equilibrium among those suppliers. I then outline the analytical forms for computing certainty-equivalent utilities for the informed trader. Following this, I describe a recursive algorithm which facilitates computing equilibrium in supply curves. Finally, I outline implementation of the Method of Simulated Moments in this context, focusing on Indirect Inference and formulating the pseudo model.
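A generic sketch of the Method of Simulated Moments criterion referred to above is given below; the moment functions, weighting matrix, and toy simulator are placeholders standing in for the limit-order-book model and pseudo model of the chapter:

```python
# Generic Method of Simulated Moments objective: average simulated moments over
# replications and form the quadratic criterion g' W g. The toy simulator below is
# illustrative only and is not the thesis's limit-order model.
import numpy as np

def msm_objective(theta, data_moments, simulate_moments, weight, n_sims=50, seed=0):
    """Quadratic MSM criterion at parameter value theta."""
    rng = np.random.default_rng(seed)
    sims = np.mean([simulate_moments(theta, rng) for _ in range(n_sims)], axis=0)
    g = data_moments - sims                       # moment discrepancies
    return float(g @ weight @ g)

def toy_simulator(theta, rng):
    """Hypothetical simulator: moments are the mean and variance of a Gaussian draw."""
    x = rng.normal(theta, 1.0, size=1000)
    return np.array([x.mean(), x.var()])

if __name__ == "__main__":
    data_moments = np.array([0.7, 1.0])           # observed moments (illustrative)
    W = np.eye(2)                                 # identity weighting for the sketch
    grid = np.linspace(0.0, 2.0, 41)
    best = min(grid, key=lambda th: msm_objective(th, data_moments, toy_simulator, W))
    print("argmin over grid:", best)              # close to 0.7 for this toy problem
```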
Abstract:
The construction and LHC phenomenology of the razor variables MR, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance and missing transverse energy of events, are presented. The variables are used in the analysis of the first proton-proton collision dataset at CMS (35 pb^-1) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the highest-sensitivity SUSY results to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC using 4.7±0.1 fb^-1 of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of MR and R, and the analysis is performed in 12 tiers of all-hadronic, single-lepton, and double-lepton final states, in the presence and absence of b-quarks, probing the third-generation sector through the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV for small neutralino masses, and squarks of the first two generations, stops, and sbottoms are excluded for masses up to about 800, 425, and 400 GeV, respectively.
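For reference, the razor variables as commonly defined in the literature for an event clustered into two megajets $j_1$, $j_2$ (a form consistent with, though possibly differing in detail from, the construction presented here) are

\[
M_R \equiv \sqrt{\left(|\vec{p}^{\,j_1}| + |\vec{p}^{\,j_2}|\right)^2 - \left(p_z^{j_1} + p_z^{j_2}\right)^2},
\qquad
M_T^R \equiv \sqrt{\frac{E_T^{\text{miss}}\left(p_T^{j_1} + p_T^{j_2}\right) - \vec{E}_T^{\text{miss}} \cdot \left(\vec{p}_T^{\,j_1} + \vec{p}_T^{\,j_2}\right)}{2}},
\qquad
R \equiv \frac{M_T^R}{M_R},
\]

so that $M_R$ estimates the heavy-particle mass scale and $R$ quantifies the transverse momentum imbalance relative to that scale.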
With the discovery of a new boson by the CMS and ATLAS experiments in the γγ and 4-lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method, developed before the discovery, for measuring the J^PC of this particle using the observed signal events in the H to ZZ* to 4 lepton channel. Adaptations of the razor kinematic variables are introduced for the H to WW* to 2 lepton/2 neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^PC are discussed, with indications that it could provide complementary information to the H to ZZ* to 4 lepton final state, particularly for measuring CP violation in these decays.
Abstract:
The goal of this thesis is to develop a suitable microelectromechanical systems (MEMS) process to manufacture piezoelectric Parylene-C (PA-C), a material known for its chemical inertness, favorable mechanical and thermal properties, and electrical insulation. The piezoelectric PA-C is then used to build miniature, inexpensive piezoelectric microphones that require no bias voltage.
These piezoelectric PA-C MEMS microphones can be used in any application where a conventional piezoelectric or electret microphone can be used, such as in cell phones and hearing aids, with the advantage of a simplified fabrication process compared with existing technology. In addition, as a piezoelectric polymer, PA-C has a wide variety of applications owing to its low dielectric constant, low elastic stiffness, low density, high voltage sensitivity, high temperature stability, and low acoustic and mechanical impedance. Furthermore, PA-C is an FDA-approved biocompatible material and is able to operate at high temperature.
To realize piezoelectric PA-C, a MEMS-compatible poling technology has been developed. The PA-C film is poled by applying an electric field during heating. A piezoelectric coefficient of -3.75 pC/N is obtained without film stretching.
The millimeter-scale piezoelectric PA-C microphone is fabricated with an in-plane spiral arrangement of two electrodes. The dynamic range extends from less than 30 dB to above 110 dB SPL (referenced to 20 µPa), and the open-circuit sensitivities are 0.001–0.11 mV/Pa over a frequency range of 1–10 kHz. The total harmonic distortion of the device is less than 20% at 110 dB SPL and 1 kHz.
Abstract:
Our understanding of the structure and evolution of the deep Earth is strongly linked to knowledge of the thermodynamic properties of rocky materials at extreme temperatures and pressures. In this thesis, I present work that helps constrain the equation of state properties of iron-bearing Mg-silicate perovskite as well as oxide-silicate melts. I use a mixture of experimental, statistical, and theoretical techniques to obtain knowledge about these phases. These include laser-heated diamond anvil cell experiments, Bayesian statistical analysis of powder diffraction data, and the development of a new simplified model for understanding oxide and silicate melts at mantle conditions. By shedding light on the thermodynamic properties of such ubiquitous Earth-forming materials, I hope to aid our community’s progress toward understanding the large-scale processes operating in the Earth’s mantle, both in the modern day and early in Earth’s history.
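As an example of the kind of equation-of-state relation being constrained, the sketch below evaluates the third-order Birch-Murnaghan isothermal equation of state, a common choice for compression data on mantle silicates; the thesis may use a different or extended formulation, and the parameter values shown are illustrative placeholders rather than fitted results from this work:

```python
# Third-order Birch-Murnaghan isothermal equation of state P(V).
# Parameter values below are illustrative, not results of this thesis.

def birch_murnaghan_3rd(V, V0, K0, K0p):
    """Pressure (same units as K0) at volume V, given zero-pressure volume V0,
    isothermal bulk modulus K0, and its pressure derivative K0p."""
    f = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * K0 * (f**7 - f**5) * (1.0 + 0.75 * (K0p - 4.0) * (f**2 - 1.0))

if __name__ == "__main__":
    # Numbers loosely typical of Mg-silicate perovskite, for illustration only.
    V0, K0, K0p = 162.3, 250.0, 4.0   # A^3 per unit cell, GPa, dimensionless
    for V in (150.0, 140.0, 130.0):
        print(f"V = {V:6.1f} A^3  ->  P = {birch_murnaghan_3rd(V, V0, K0, K0p):6.1f} GPa")
```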
Abstract:
This thesis presents a simplified state-variable method to solve for the nonstationary response of linear MDOF systems subjected to modulated stationary excitation in both the time and frequency domains. The resulting covariance matrix and evolutionary spectral density matrix of the response may be expressed as the product of a constant system matrix and a time-dependent matrix, the latter of which can be evaluated explicitly for most envelopes currently used in engineering. The stationary correlation matrix of the response may be found by taking the limit of the covariance response when a unit step envelope is used. Reliability analysis can then be performed based on the first two moments of the response so obtained.
The method presented facilitates obtaining explicit solutions for general linear MDOF systems and is flexible enough to be applied to different stochastic models of excitation such as the stationary models, modulated stationary models, filtered stationary models, and filtered modulated stationary models and their stochastic equivalents including the random pulse train model, filtered shot noise, and some ARMA models in earthquake engineering. This approach may also be readily incorporated into finite element codes for random vibration analysis of linear structures.
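As a minimal numerical illustration of the underlying covariance evolution (here for a single-degree-of-freedom oscillator under modulated white noise, time-stepped directly rather than evaluated in the explicit closed form developed in the thesis; the envelope and parameter values are hypothetical):

```python
# Nonstationary covariance response of an SDOF oscillator under modulated white noise,
# obtained by time-stepping the covariance (Lyapunov) equation
#   dP/dt = A P + P A^T + g(t)^2 * D * B B^T.
# Brute-force numerical sketch; the thesis derives explicit solutions instead.
import numpy as np

wn, zeta, D = 2.0 * np.pi, 0.05, 1.0                 # natural freq (rad/s), damping ratio, noise intensity
A = np.array([[0.0, 1.0], [-wn**2, -2.0 * zeta * wn]])
B = np.array([[0.0], [1.0]])

def envelope(t, t1=2.0, t2=10.0, c=0.5):
    """Ramp / boxcar / exponential-tail modulating envelope (one of many used in practice)."""
    if t < t1:
        return t / t1
    if t < t2:
        return 1.0
    return np.exp(-c * (t - t2))

def covariance_history(T=20.0, dt=1e-3):
    P = np.zeros((2, 2))
    history = []
    for k in range(int(T / dt)):
        t = k * dt
        g = envelope(t)
        Pdot = A @ P + P @ A.T + (g**2) * D * (B @ B.T)
        P = P + dt * Pdot                             # forward-Euler step of the matrix ODE
        history.append((t, P[0, 0]))                  # displacement variance vs. time
    return history

if __name__ == "__main__":
    hist = covariance_history()
    print("peak displacement variance:", max(v for _, v in hist))
```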
A set of explicit solutions for the response of simple linear structures subjected to modulated white-noise earthquake models with four different envelopes is presented as illustration. In addition, the method has been applied to three selected topics of interest in earthquake engineering, namely, nonstationary analysis of primary-secondary systems with classical or nonclassical damping, soil layer response and the related structural reliability analysis, and the effect of vertical components on the seismic performance of structures. For all three cases, explicit solutions are obtained, the dynamic characteristics of the structures are investigated, and some suggestions are given for the aseismic design of structures.
Abstract:
A Bayesian probabilistic methodology for on-line structural health monitoring, which addresses the issue of parameter uncertainty inherent in the problem, is presented. The method uses, as the measured structural data, modal parameters for a limited number of modes identified from measurements taken at a restricted number of degrees of freedom of a structure. The application presented uses a linear structural model whose stiffness matrix is parameterized to develop a class of possible models. Within the Bayesian framework, a joint probability density function (PDF) for the model stiffness parameters given the measured modal data is determined. Using this PDF, the marginal PDF of the stiffness parameter for each substructure given the data can be calculated.
Monitoring the health of a structure using these marginal PDFs involves two steps. First, the marginal PDF for each model parameter given modal data from the undamaged structure is found. The structure is then periodically monitored and updated marginal PDFs are determined. A measure of the difference between the calibrated and current marginal PDFs is used as a means to characterize the health of the structure. A procedure for interpreting the measure for use by an expert system in on-line monitoring is also introduced.
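A toy illustration of this two-step comparison is sketched below; the Gaussian marginals and the particular "probability of stiffness reduction" measure are hypothetical stand-ins for the PDFs computed from modal data and the measure developed in the thesis:

```python
# Toy comparison of calibrated vs. updated marginal PDFs of one substructure stiffness
# parameter. Gaussian marginals and this specific damage measure are illustrative only.
from math import erf, sqrt

def prob_stiffness_loss(mu_cal, sig_cal, mu_cur, sig_cur):
    """P(theta_current < theta_calibrated) for independent Gaussian marginals."""
    mu_d = mu_cal - mu_cur                     # mean of the difference theta_cal - theta_cur
    sig_d = sqrt(sig_cal**2 + sig_cur**2)      # std of the difference
    return 0.5 * (1.0 + erf(mu_d / (sig_d * sqrt(2.0))))

if __name__ == "__main__":
    # Calibrated (undamaged) marginal vs. a later, updated marginal for one substructure.
    p = prob_stiffness_loss(mu_cal=1.00, sig_cal=0.05, mu_cur=0.90, sig_cur=0.06)
    print(f"probability of stiffness reduction: {p:.3f}")   # values near 1 flag possible damage
```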
The probabilistic framework is developed in order to address the model parameter uncertainty inherent in the health monitoring problem. To illustrate this issue, consider a very simplified deterministic structural health monitoring method. In such an approach, the model parameters which minimize an error measure between the measured and model modal values would be used as the "best" model of the structure. Changes between the model parameters identified using modal data from the undamaged structure and those identified from subsequent modal data would be used to find the existence, location, and degree of damage. Due to measurement noise, limited modal information, and model error, the "best" model parameters might vary from one modal dataset to the next without any damage present in the structure. Difficulties would thus arise in separating normal variations in the identified model parameters, which stem from limitations of the identification method, from variations due to true change in the structure. The Bayesian framework described in this work provides a means of handling this parametric uncertainty.
The probabilistic health monitoring method is applied to simulated data and laboratory data. The results of these tests are presented.
Abstract:
The Hamilton-Jacobi-Bellman (HJB) equation is central to stochastic optimal control (SOC) theory, yielding the optimal solution to general problems defined by known dynamics and a given cost functional. Given the assumption of quadratic cost on the control input, it is well known that the HJB reduces to a particular partial differential equation (PDE). While powerful, this reduction is not commonly used because the PDE is of second order, is nonlinear, and examples exist where the problem may not have a solution in a classical sense. Furthermore, each state of the system appears as another dimension of the PDE, giving rise to the curse of dimensionality. Since the number of degrees of freedom required to solve the optimal control problem grows exponentially with dimension, the problem becomes intractable for systems of all but modest dimension.
In the last decade researchers have found that under certain, fairly non-restrictive structural assumptions, the HJB may be transformed into a linear PDE, with an interesting analogue in the discretized domain of Markov Decision Processes (MDP). The work presented in this thesis uses the linearity of this particular form of the HJB PDE to push the computational boundaries of stochastic optimal control.
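For orientation, the standard linearization in the form popularized in the path-integral/linearly-solvable control literature (which this work builds on) proceeds as follows: for dynamics $dx = (f + Gu)\,dt + G\,d\xi$, state cost $q(x)$, quadratic control cost $\tfrac12 u^\top R u$, and the structural assumption that the noise covariance satisfies $\Sigma_\varepsilon = \lambda R^{-1}$ (noise and control act through the same channels), the substitution $V = -\lambda \log \Psi$ removes the quadratic nonlinearity:

\[
-\partial_t V = \min_u\Big\{ q + \tfrac12 u^\top R u + (f + Gu)^\top \nabla_x V + \tfrac12\,\mathrm{tr}\!\big(G\Sigma_\varepsilon G^\top \nabla_x^2 V\big)\Big\}
\;\;\Longrightarrow\;\;
\partial_t \Psi = \frac{q}{\lambda}\,\Psi - f^\top \nabla_x \Psi - \tfrac12\,\mathrm{tr}\!\big(G\Sigma_\varepsilon G^\top \nabla_x^2 \Psi\big),
\]

which is linear in the desirability function $\Psi$.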
This is done by crafting together previously disjoint lines of research in computation. The first of these is the use of Sum of Squares (SOS) techniques for synthesis of control policies. A candidate polynomial with variable coefficients is proposed as the solution to the stochastic optimal control problem. An SOS relaxation is then taken to the partial differential constraints, leading to a hierarchy of semidefinite relaxations with improving sub-optimality gap. The resulting approximate solutions are shown to be guaranteed over- and under-approximations for the optimal value function. It is shown that these results extend to arbitrary parabolic and elliptic PDEs, yielding a novel method for Uncertainty Quantification (UQ) of systems governed by partial differential constraints. Domain decomposition techniques are also made available, allowing for such problems to be solved via parallelization and low-order polynomials.
The optimization-based SOS technique is then contrasted with the Separated Representation (SR) approach from the applied mathematics community. The technique allows for systems of equations to be solved through a low-rank decomposition that results in algorithms that scale linearly with dimensionality. Its application in stochastic optimal control allows for previously uncomputable problems to be solved quickly, scaling to such complex systems as the Quadcopter and VTOL aircraft. This technique may be combined with the SOS approach, yielding not only a numerical technique, but also an analytical one that allows for entirely new classes of systems to be studied and for stability properties to be guaranteed.
The analysis of the linear HJB is completed by the study of its implications in application. It is shown that the HJB and a popular technique in robotics, the use of navigation functions, sit on opposite ends of a spectrum of optimization problems, upon which tradeoffs may be made in problem complexity. Analytical solutions to the HJB in these settings are available in simplified domains, yielding guidance towards optimality for approximation schemes. Finally, the use of HJB equations in temporal multi-task planning problems is investigated. It is demonstrated that such problems are reducible to a sequence of SOC problems linked via boundary conditions. The linearity of the PDE allows us to pre-compute control policy primitives and then compose them, at essentially zero cost, to satisfy a complex temporal logic specification.
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975 with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969] and [(670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
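The structure of such a least-cost calculation can be sketched as a small linear program; the control measures, costs, and reduction figures below are hypothetical placeholders and only the form (minimize cost subject to emission-reduction targets, with adoption fractions between 0 and 1) mirrors the model described in the text:

```python
# Sketch of a least-cost emission-control linear program. All numbers are hypothetical;
# only the structure mirrors the kind of LP described in the text.
from scipy.optimize import linprog

# Columns are control measures (used cars, aircraft, stationary sources);
# rows are RHC and NOx reductions (tons/day) achieved at full adoption.
reductions = [[120.0,  60.0, 250.0],   # RHC
              [ 40.0,  10.0, 180.0]]   # NOx
costs = [30.0, 12.0, 55.0]             # annualized cost (millions of dollars) at full adoption
target = [300.0, 200.0]                # required RHC and NOx reductions (tons/day)

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the targets enter as -R x <= -target.
res = linprog(c=costs,
              A_ub=[[-r for r in row] for row in reductions],
              b_ub=[-t for t in target],
              bounds=[(0.0, 1.0)] * 3)
print("status:", res.status, "adoption fractions:", res.x, "cost:", res.fun)
```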
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).
The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).
Abstract:
A general review of stochastic processes is given in the introduction; definitions, properties and a rough classification are presented together with the position and scope of the author's work as it fits into the general scheme.
The first section presents a brief summary of the pertinent analytical properties of continuous stochastic processes and their probability-theoretic foundations which are used in the sequel.
The remaining two sections (II and III), comprising the body of the work, are the author's contribution to the theory. It turns out that a very inclusive class of continuous stochastic processes is characterized by a fundamental partial differential equation and its adjoint (the Fokker-Planck equations). The coefficients appearing in those equations assimilate, in a most concise way, all the salient properties of the process, freed from boundary value considerations. The writer's work consists in characterizing the processes through these coefficients without recourse to solving the partial differential equations.
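In one dimension, the pair of equations referred to here, the forward (Fokker-Planck) equation and its adjoint, the backward Kolmogorov equation, for the transition density $p(x,t \mid y,s)$ take the standard form

\[
\frac{\partial p}{\partial t} = -\frac{\partial}{\partial x}\big[a(x,t)\,p\big] + \frac{1}{2}\frac{\partial^2}{\partial x^2}\big[b(x,t)\,p\big],
\qquad
-\frac{\partial p}{\partial s} = a(y,s)\,\frac{\partial p}{\partial y} + \frac{1}{2}\,b(y,s)\,\frac{\partial^2 p}{\partial y^2},
\]

with drift coefficient $a$ and diffusion coefficient $b$; these are the coefficients through which the processes are characterized in sections II and III.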
First, a class of coefficients leading to a unique, continuous process is presented, and several facts are proven to show why this class is restricted. Then, in terms of the coefficients, the unconditional statistics are deduced, these being the mean, variance, and covariance. The most general class of coefficients leading to the Gaussian distribution is deduced, and a complete characterization of these processes is presented. By specializing the coefficients, all the known stochastic processes may be readily studied, and some examples are presented, viz. the Einstein process, the Bachelier process, the Ornstein-Uhlenbeck process, etc. The calculations are effectively reduced to ordinary first-order differential equations, and in addition to giving a comprehensive characterization, the derivations are materially simpler than solving the original partial differential equations.
In the last section the properties of the integral process are presented. After an expository section on the definition, meaning, and importance of the integral process, a particular example is carried through starting from the basic definition. This illustrates the fundamental properties and an inherent paradox. Next, the basic coefficients of the integral process are studied in terms of the original coefficients, and the integral process is uniquely characterized. It is shown that the integral process, with a slight modification, is a continuous Markoff process.
The elementary statistics of the integral process are deduced: means, variances, and covariances, in terms of the original coefficients. It is shown that the integral process of a non-degenerate process is never temporally homogeneous.
Finally, in terms of the original class of admissible coefficients, the statistics of the integral process are explicitly presented, and the integral processes of all known continuous processes are specified.
A model for energy and morphology of crystalline grain boundaries with arbitrary geometric character
Abstract:
It has been well-established that interfaces in crystalline materials are key players in the mechanics of a variety of mesoscopic processes such as solidification, recrystallization, grain boundary migration, and severe plastic deformation. In particular, interfaces with complex morphologies have been observed to play a crucial role in many micromechanical phenomena such as grain boundary migration, stability, and twinning. Interfaces are a unique type of material defect in that they demonstrate a breadth of behavior and characteristics eluding simplified descriptions. Indeed, modeling the complex and diverse behavior of interfaces is still an active area of research, and to the author's knowledge there are as yet no predictive models for the energy and morphology of interfaces with arbitrary character. The aim of this thesis is to develop a novel model for interface energy and morphology that i) provides accurate results (especially regarding "energy cusp" locations) for interfaces with arbitrary character, ii) depends on a small set of material parameters, and iii) is fast enough to incorporate into large scale simulations.
In the first half of the work, a model for planar, immiscible grain boundaries is formulated. By building on the assumption that anisotropic grain boundary energetics are dominated by geometry and crystallography, a construction on lattice density functions (referred to as "covariance") is introduced that provides a geometric measure of the order of an interface. Covariance forms the basis for a fully general model of the energy of a planar interface, and it is demonstrated by comparison with a wide selection of molecular dynamics energy data for FCC and BCC tilt and twist boundaries that the model accurately reproduces the energy landscape using only three material parameters. It is observed that the planar constraint on the model is, in some cases, over-restrictive; this motivates an extension of the model.
In the second half of the work, the theory of faceting in interfaces is developed and applied to the planar interface model for grain boundaries. Building on previous work in mathematics and materials science, an algorithm is formulated that returns the minimal possible energy attainable by relaxation and the corresponding relaxed morphology for a given planar energy model. It is shown that the relaxation significantly improves the energy results of the planar covariance model for FCC and BCC tilt and twist boundaries. The ability of the model to accurately predict faceting patterns is demonstrated by comparison to molecular dynamics energy data and experimental morphological observation for asymmetric tilt grain boundaries. It is also demonstrated that by varying the temperature in the planar covariance model, it is possible to reproduce a priori the experimentally observed effects of temperature on facet formation.
Finally, with the range and scope of the covariance and relaxation models having been demonstrated by means of extensive MD and experimental comparison, future applications and implementations of the model are explored.
Abstract:
The microscopic properties of a two-dimensional model dense fluid of Lennard-Jones disks have been studied using the so-called "molecular dynamics" method. Analyses of the computer-generated simulation data in terms of "conventional" thermodynamic and distribution functions verify the physical validity of the model and the simulation technique.
The radial distribution functions g(r) computed from the simulation data exhibit several subsidiary features rather similar to those appearing in some of the g(r) functions obtained by X-ray and thermal neutron diffraction measurements on real simple liquids. In the case of the model fluid, these "anomalous" features are thought to reflect the existence of two or more alternative configurations for local ordering.
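A minimal sketch of how a radial distribution function is computed from particle positions in a periodic two-dimensional box is given below; the actual reduction of the simulation data will differ in details such as binning, averaging over configurations, and normalization:

```python
# Radial distribution function g(r) for particles in a square periodic 2D box.
# Minimal illustrative sketch; normalization uses ideal-gas shell areas in 2D.
import numpy as np

def radial_distribution_2d(pos, box, r_max, n_bins=100):
    """pos: (N, 2) particle coordinates; box: side length of the square periodic box."""
    n = len(pos)
    rho = n / box**2                               # number density
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)               # minimum-image convention
        r = np.hypot(d[:, 0], d[:, 1])
        counts += np.histogram(r[r < r_max], bins=edges)[0]
    shell_area = np.pi * (edges[1:]**2 - edges[:-1]**2)
    ideal = rho * shell_area * n / 2.0             # expected pair counts for an uncorrelated gas
    r_mid = 0.5 * (edges[:-1] + edges[1:])
    return r_mid, counts / ideal

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    positions = rng.uniform(0.0, 10.0, size=(400, 2))   # uncorrelated positions -> g(r) near 1
    r, g = radial_distribution_2d(positions, box=10.0, r_max=4.0)
    print(g[:5])
```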
Graphical display techniques have been used extensively to provide some intuitive insight into the various microscopic phenomena occurring in the model. For example, "snapshots" of the instantaneous system configurations for different times show that the "excess" area allotted to the fluid is collected into relatively large, irregular, and surprisingly persistent "holes". Plots of the particle trajectories over intervals of 2.0 to 6.0 × 10^-12 sec indicate that the mechanism for diffusion in the dense model fluid is "cooperative" in nature, and that extensive diffusive migration is generally restricted to groups of particles in the vicinity of a hole.
A quantitative analysis of diffusion in the model fluid shows that the cooperative mechanism is not inconsistent with the statistical predictions of existing theories of singlet, or self-diffusion in liquids. The relative diffusion of proximate particles is, however, found to be retarded by short-range dynamic correlations associated with the cooperative mechanism--a result of some importance from the standpoint of bimolecular reaction kinetics in solution.
A new, semi-empirical treatment for relative diffusion in liquids is developed, and is shown to reproduce the relative diffusion phenomena observed in the model fluid quite accurately. When incorporated into the standard Smoluchowski theory of diffusion-controlled reaction kinetics, the more exact treatment of relative diffusion is found to lower the predicted rate of reaction appreciably.
Finally, an entirely new approach to an understanding of the liquid state is suggested. Our experience in dealing with the simulation data--and especially, graphical displays of the simulation data--has led us to conclude that many of the more frustrating scientific problems involving the liquid state would be simplified considerably, were it possible to describe the microscopic structures characteristic of liquids in a concise and precise manner. To this end, we propose that the development of a formal language of partially-ordered structures be investigated.
Abstract:
This study investigates lateral mixing of tracer fluids in turbulent open-channel flows when the tracer and ambient fluids have different densities. Longitudinal dispersion in flows with longitudinal density gradients is investigated also.
Lateral mixing was studied in a laboratory flume by introducing fluid tracers at the ambient flow velocity continuously and uniformly across a fraction of the flume width and over the entire depth of the ambient flow. Fluid samples were taken to obtain concentration distributions in cross-sections at various distances, x, downstream from the tracer source. The data were used to calculate variances of the lateral distributions of the depth-averaged concentration. When there was a difference in density between the tracer and the ambient fluids, lateral mixing close to the source was enhanced by density-induced secondary flows; however, far downstream where the density gradients were small, lateral mixing rates were independent of the initial density difference. A dimensional analysis of the problem and the data show that the normalized variance is a function of only three dimensionless numbers, which represent: (1) the x-coordinate, (2) the source width, and (3) the buoyancy flux from the source.
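As an illustration of the quantity being tracked, the variance of the lateral distribution of depth-averaged concentration at one cross-section can be computed from sampled values as sketched below; the profile used here is hypothetical, and the thesis's data reduction may differ in detail:

```python
# Variance of the lateral distribution of depth-averaged tracer concentration
# at one downstream cross-section. Sample values are hypothetical.
import numpy as np

def lateral_variance(y, c_bar):
    """y: lateral positions across the flume; c_bar: depth-averaged concentrations at y."""
    mass = np.trapz(c_bar, y)                        # zeroth moment of the distribution
    y_centroid = np.trapz(y * c_bar, y) / mass       # lateral centroid
    return np.trapz((y - y_centroid)**2 * c_bar, y) / mass

if __name__ == "__main__":
    y = np.linspace(-0.5, 0.5, 21)                   # meters across the flume
    c_bar = np.exp(-(y / 0.15)**2)                   # hypothetical near-Gaussian profile
    print(f"sigma_y^2 = {lateral_variance(y, c_bar):.4f} m^2")   # about 0.15^2 / 2 for this profile
```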
A simplified set of equations of motion for a fluid with a horizontal density gradient was integrated to give an expression for the density-induced velocity distribution. The dispersion coefficient due to this velocity distribution was also obtained. Using this dispersion coefficient in an analysis for predicting lateral mixing rates in the experiments of this investigation gave only qualitative agreement with the data. However, predicted longitudinal salinity distributions in an idealized laboratory estuary agree well with published data.
Abstract:
In this study, the dynamics of flow over the blades of vertical-axis wind turbines was investigated using a simplified periodic motion to uncover the fundamental flow physics and provide insight into the design of more efficient turbines. Time-resolved, two-dimensional velocity measurements were made with particle image velocimetry on a wing undergoing pitching and surging motion to mimic the flow on a turbine blade in a non-rotating frame. Dynamic stall prior to maximum angle of attack and the development of a leading-edge vortex were identified in the phase-averaged flow field and captured by a simple model with five modes, including the first two harmonics of the pitch/surge frequency, identified using the dynamic mode decomposition. Analysis of these modes identified vortical structures corresponding to both frequencies that led the separation and reattachment processes, while their phase relationship determined the evolution of the flow.
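A compact sketch of the exact dynamic mode decomposition step used to extract such modes from a sequence of flow-field snapshots is given below; the snapshot arrangement, rank truncation, and synthetic test signal are simplified relative to the actual PIV processing chain:

```python
# Exact dynamic mode decomposition (DMD) of a snapshot sequence. Each column of
# `snapshots` is one flattened velocity field; this is a minimal sketch of the
# algorithm, not the thesis's full processing chain.
import numpy as np

def dmd(snapshots, rank):
    X, Y = snapshots[:, :-1], snapshots[:, 1:]          # time-shifted snapshot pairs
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :rank], s[:rank], Vh[:rank]          # rank truncation
    A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)   # reduced linear operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W       # exact DMD modes
    return eigvals, modes

if __name__ == "__main__":
    # Synthetic example: two traveling waves sampled at 60 instants.
    x = np.linspace(0, 2 * np.pi, 200)
    t = np.linspace(0, 4 * np.pi, 60)
    data = np.array([np.sin(x - 1.0 * ti) + 0.5 * np.sin(2 * x - 2.3 * ti) for ti in t]).T
    eigvals, modes = dmd(data, rank=4)
    dt = t[1] - t[0]
    print("recovered frequencies (rad/s):", np.sort(np.abs(np.log(eigvals).imag / dt)))
```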
Detailed analysis of the leading edge vortex found multiple regimes of vortex development coupled to the time-varying flow field on the airfoil. The vortex was shown to grow on the airfoil for four convection times, before shedding and causing dynamic stall in agreement with 'optimal' vortex formation theory. Vortex shedding from the trailing edge was identified from instantaneous velocity fields prior to separation. This shedding was found to be in agreement with classical Strouhal frequency scaling and was removed by phase averaging, which indicates that it is not exactly coupled to the phase of the airfoil motion.
The flow field over an airfoil undergoing solely pitching motion was shown to develop similarly to that of the pitch/surge motion; however, flow separation took place earlier, corresponding to the earlier formation of the leading-edge vortex. A reduced-order model similar to that of the pitch/surge case was developed, with similar vortical structures leading separation and reattachment; however, the relative phase lead of the separation mode, corresponding to earlier separation, necessitated that a third frequency be incorporated into the reattachment mode to provide a relative lag in reattachment.
Finally, the results are returned to the rotating frame and the effects of each flow phenomenon on the turbine are estimated, suggesting kinematic criteria for the design of improved turbines.
Abstract:
An investigation was conducted to estimate the error when the flat-flux approximation is used to compute the resonance integral for a single absorber element embedded in a neutron source.
The investigation was initiated by assuming a parabolic flux distribution in computing the flux-averaged escape probability which occurs in the collision density equation. In addition, both wide-resonance and narrow-resonance expressions for the resonance integral were assumed. The fact that this simple model demonstrated a decrease in the resonance integral motivated the more detailed investigation of the thesis.
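For reference, the narrow-resonance (NR) and wide-resonance (infinite-mass) forms of the effective resonance integral commonly written in this context are, in standard textbook notation (the precise expressions and notation adopted in the thesis may differ),

\[
I_{\mathrm{NR}} = \int \sigma_a(E)\,\frac{\sigma_p + \sigma_b}{\sigma_t(E) + \sigma_b}\,\frac{dE}{E},
\qquad
I_{\mathrm{WR}} = \int \sigma_a(E)\,\frac{\sigma_b}{\sigma_a(E) + \sigma_b}\,\frac{dE}{E},
\]

where $\sigma_a$, $\sigma_p$, and $\sigma_t$ are the absorption, potential-scattering, and total cross sections of the absorber and $\sigma_b$ is the background scattering cross section per absorber atom; both forms rest on assuming an undepressed $1/E$ flux outside the resonance, which is the flat-flux assumption whose error is examined here.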
An integral equation describing the collision density as a function of energy, position and angle is constructed and is subsequently specialized to the case of energy and spatial dependence. This equation is further simplified by expanding the spatial dependence in a series of Legendre polynomials (since a one-dimensional case is considered). In this form, the effects of slowing-down and flux depression may be accounted for to any degree of accuracy desired. The resulting integral equation for the energy dependence is thus solved numerically, considering the slowing down model and the infinite mass model as separate cases.
From the solution obtained by the above method, the error ascribable to the flat-flux approximation is obtained. In addition to this, the error introduced in the resonance integral in assuming no slowing down in the absorber is deduced. Results by Chernick for bismuth rods, and by Corngold for uranium slabs, are compared to the latter case, and these agree to within the approximations made.