Abstract:
Aquaculture in the Philippines is a long-standing activity but has undergone relatively recent, rapid technical change, with the introduction of hatchery technology and commercial feed mills altering the production possibilities for a fishpond operator. We are confronted with a diversity of aquaculture practices in the coastal areas of the Philippines, with new technologies being incorporated into more traditional systems. As a first step to understanding the sector, we therefore present a typology of farming systems, with the aim of generating domains (farm "types") over which we can compare performance on a number of indicators. Our typology, restricted to brackish-water pond systems, is constructed using multivariate methods (principal components analysis, cluster analysis). Eight variables are used, relating to the management of the farm across all the major factors of production. A stratified net sample of 136 observations provides the data for the analysis, from a farm-level survey carried out between January and June 2003 in the two main brackish-water production regions of the Philippines. We define five farm types from this analysis. In later work we will show how this typology can be used for comparative study of economic, social and ecological performance at the farm level.
Abstract:
In this work, we measured 14 horizontal velocity profiles along the vertical direction of a rectangular microchannel with aspect ratio alpha = h/w = 0.35 (h is the height and w the width of the channel) using microPIV at Re = 1.8 and 3.6. The experimental velocity profiles are compared with the full 3D theoretical solution and with a parabolic Poiseuille profile. The experimental velocity profiles in the horizontal and vertical planes agree with the theoretical profiles, except for the planes close to the wall. The discrepancies between the experimental data and the 3D theoretical results in the center vertical plane are less than 3.6%, but the deviations between the experimental data and Poiseuille's results approach 5%. This indicates that the 2D Poiseuille profile is no longer an adequate theoretical approximation at an aspect ratio of 0.35. The experiments also reveal that, very near the hydrophilic wall (z = 0.5-1 μm), the measured velocities are significantly larger than the theoretical velocity based on the no-slip assumption. A discussion of the physical effects influencing near-wall velocity measurement is given.
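The full 3D solution referred to here is the classical Fourier-series result for pressure-driven flow in a rectangular duct. A minimal sketch of the comparison, assuming the textbook form of the series (function names and the coordinate convention are ours, not taken from the paper):

```python
import numpy as np

def u_rect(y, z, w, h, dpdx, mu, nterms=99):
    """Series solution for pressure-driven laminar flow in a rectangular channel
    (Fourier form found in standard microfluidics texts): -w/2 <= y <= w/2,
    0 <= z <= h.  dpdx is the axial pressure gradient dp/dx (negative for flow in +x)."""
    y, z = np.broadcast_arrays(np.asarray(y, float), np.asarray(z, float))
    u = np.zeros(y.shape)
    for n in range(1, nterms + 1, 2):  # odd modes only
        u += (1.0 / n**3) * (1.0 - np.cosh(n * np.pi * y / h)
                             / np.cosh(n * np.pi * w / (2 * h))) * np.sin(n * np.pi * z / h)
    return 4.0 * h**2 * (-dpdx) / (np.pi**3 * mu) * u

def u_poiseuille_2d(z, h, dpdx, mu):
    """2D parabolic (plane Poiseuille) profile used as the reference approximation."""
    return (-dpdx) / (2.0 * mu) * z * (h - z)
```

Evaluating both profiles across the channel height at alpha = 0.35 reproduces the kind of percent-level gap between the series solution and the parabolic approximation that the measurements quantify.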
Abstract:
Large-eddy simulation (LES) has emerged as a promising tool for simulating turbulent flows in general and, in recent years, has also been applied to particle-laden turbulence with some success (Kassinos et al., 2007). The motion of inertial particles is much more complicated than that of fluid elements, and therefore LES of turbulent flow laden with inertial particles encounters new challenges. In conventional LES, only large-scale eddies are explicitly resolved, and the effects of the unresolved, small or subgrid-scale (SGS) eddies on the large-scale eddies are modeled. The SGS turbulent flow field is not available. The effects of the SGS turbulent velocity field on particle motion have been studied by Wang and Squires (1996), Armenio et al. (1999), Yamamoto et al. (2001), Shotorban and Mashayek (2006a,b), Fede and Simonin (2006), Berrouk et al. (2007), Bini and Jones (2008), and Pozorski and Apte (2009), amongst others. One contemporary method to include the effects of SGS eddies on inertial particle motion is to introduce a stochastic differential equation (SDE), that is, a Langevin equation to model the SGS fluid velocity seen by inertial particles (Fede et al., 2006; Shotorban and Mashayek, 2006a,b; Berrouk et al., 2007; Bini and Jones, 2008; Pozorski and Apte, 2009). However, the accuracy of such a Langevin model depends primarily on the prescription of the SGS fluid velocity autocorrelation time seen by an inertial particle, or the inertial particle–SGS eddy interaction timescale (denoted by $\delta T_{Lp}$), and on a second model constant in the diffusion term, which controls the intensity of the random force received by an inertial particle (denoted by $C_0$; see Eq. (7)). From the theoretical point of view, $\delta T_{Lp}$ differs significantly from the Lagrangian fluid velocity correlation time (Reeks, 1977; Wang and Stock, 1993), and this carries the essential nonlinearity in the statistical modeling of particle motion. $\delta T_{Lp}$ and $C_0$ may depend on the filter width and particle Stokes number even for a given turbulent flow. In previous studies, $\delta T_{Lp}$ is modeled either by the fluid SGS Lagrangian timescale (Fede et al., 2006; Shotorban and Mashayek, 2006b; Pozorski and Apte, 2009; Bini and Jones, 2008) or by a simple extension of the timescale obtained from the full flow field (Berrouk et al., 2007). In this work, we study the subtle and non-monotonic dependence of $\delta T_{Lp}$ on the filter width and particle Stokes number using a flow field obtained from direct numerical simulation (DNS). We then propose an empirical closure model for $\delta T_{Lp}$. Finally, the model is validated against LES of particle-laden turbulence in predicting single-particle statistics such as particle kinetic energy. As a first step, we consider particle motion under the one-way coupling assumption in isotropic turbulent flow and neglect the gravitational settling effect. The one-way coupling assumption is only valid for low particle mass loading.
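A Langevin model of the type discussed can be integrated with a simple Euler–Maruyama step. A minimal sketch, assuming an Ornstein–Uhlenbeck form whose drift timescale plays the role of $\delta T_{Lp}$; the paper's actual diffusion coefficient involves $C_0$ and the SGS dissipation rate (Eq. (7)), for which we substitute a prescribed SGS velocity variance:

```python
import numpy as np

def sgs_velocity_seen(nsteps, dt, T_Lp, sigma2, seed=0):
    """Euler-Maruyama integration of an Ornstein-Uhlenbeck (Langevin) model for
    the SGS fluid velocity seen by an inertial particle.
    T_Lp: particle-SGS eddy interaction timescale; sigma2: SGS velocity variance."""
    rng = np.random.default_rng(seed)
    u = np.zeros(nsteps)
    for i in range(1, nsteps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment
        u[i] = u[i - 1] - (u[i - 1] / T_Lp) * dt + np.sqrt(2.0 * sigma2 / T_Lp) * dW
    return u
```

The stationary autocorrelation of this process decays as exp(-t / T_Lp), which is why the closure for $\delta T_{Lp}$, and in particular its dependence on filter width and Stokes number, controls the quality of the predicted particle statistics.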
Abstract:
In this work, a level set method is developed for simulating the motion of a fluid particle rising in non-Newtonian fluids, described by generalized Newtonian as well as viscoelastic model fluids. The shear-thinning behaviour is described by a Carreau-Yasuda model, and the viscoelastic effect is modeled with the Oldroyd-B constitutive equations. The control volume formulation, with the SIMPLEC algorithm incorporated, is used to solve the governing equations on a staggered Eulerian grid. The level set method is first applied to compute the motion of a bubble in a Newtonian fluid as a typical validation example, and the computational results are in good agreement with reported experimental data. The level set method is then applied to simulate a Newtonian drop rising in Carreau-Yasuda and Oldroyd-B fluids. Numerical results, including the noticeably negative wake behind the drop and the viscosity field, are obtained and compare satisfactorily with known literature data.
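The Carreau-Yasuda model referred to here is a standard algebraic viscosity law; a minimal sketch (parameter names are the conventional ones, not taken from the paper):

```python
import numpy as np

def carreau_yasuda(gdot, eta0, eta_inf, lam, n, a):
    """Carreau-Yasuda apparent viscosity:
    eta(gdot) = eta_inf + (eta0 - eta_inf) * [1 + (lam*gdot)**a]**((n-1)/a).
    eta0/eta_inf: zero- and infinite-shear viscosities; lam: relaxation time;
    n < 1 gives shear thinning; a controls the sharpness of the transition."""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * np.asarray(gdot))**a) ** ((n - 1.0) / a)
```

In a generalized-Newtonian simulation this law is evaluated from the local shear-rate magnitude at every grid point, which is why the viscosity field reported in the results varies in space around the rising drop.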
Abstract:
In this work, a simple correlation, incorporating the mixture velocity, the drift velocity, and the correction factor of Farooqi and Richardson, was proposed to predict the void fraction of gas/non-Newtonian intermittent flow in upward inclined pipes. The correlation was based on 352 data points covering a wide range of flow rates for different CMC solutions at diverse angles. Good agreement was obtained between the predicted and experimental results, substantiating the general validity of the model for gas/non-Newtonian two-phase intermittent flows.
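The drift-flux structure underlying such correlations is simple to state. A minimal sketch, assuming the generic one-dimensional drift-flux relation; the specific Farooqi-Richardson correction factor used in the paper is not reproduced here:

```python
def void_fraction_drift_flux(u_sg, u_m, c0=1.2, u_d=0.25):
    """Generic drift-flux estimate of void fraction:
        alpha = U_sg / (C0 * U_m + U_d),
    where U_sg is the superficial gas velocity, U_m the mixture velocity,
    C0 a distribution parameter and U_d the drift velocity (values illustrative)."""
    return u_sg / (c0 * u_m + u_d)
```

The proposed correlation refines this basic structure with the Farooqi-Richardson correction so that it holds for shear-thinning (CMC) liquids over a range of inclination angles.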
Abstract:
The technology of laser quenching is widely used to improve the surface properties of steels in surface engineering. Generally, laser quenching of steels leads to two important results. One is the generation of residual stress in the surface layer. In general, the residual stress varies from the surface to the interior along the depth of the quenched track; this variation is termed the residual stress gradient effect in this work. The other is the change in the mechanical properties of the surface layer, such as an increase in micro-hardness, resulting from changes in the microstructure of the surface layer. In this work, a mechanical model of a laser-quenched specimen with a crack in the middle of the quenched layer is developed to quantify the effect of the residual stress gradient and the average micro-hardness over the crack length on the crack tip opening displacement (CTOD). It is assumed that the crack in the middle of the quenched layer is created after laser quenching; the crack can be a pre-crack or a defect such as a void, cavity or micro-crack. Based on elastic-plastic fracture mechanics theory and using the relationship between micro-hardness and yield strength, a concise analytical solution is obtained that quantifies the effect on CTOD of the residual stress gradient and the average micro-hardness over the crack length resulting from laser quenching. This solution can not only be used to predict the crack driving force in terms of the CTOD, but can also serve as a baseline for further experimental investigation of the effect of laser-quenching treatment on fracture toughness in terms of the critical CTOD of a specimen. A numerical example presented in this work shows that the CTOD of the quenched specimen can be significantly smaller than that of the unquenched one.
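The hardness-to-yield-strength link used in such analyses is commonly the Tabor relation, and a small-scale-yielding CTOD estimate then follows directly. A minimal sketch under those textbook assumptions; this is not the paper's full solution, which additionally accounts for the residual stress gradient:

```python
def ctod_estimate(K_I, hv_mpa, E, m=2.0):
    """Small-scale-yielding CTOD estimate:
        delta = K_I**2 / (m * sigma_y * E),
    with yield strength taken from the Tabor relation sigma_y ~ HV / 3.
    hv_mpa: Vickers hardness expressed in MPa; E: Young's modulus (MPa);
    m ~ 1 (plane stress) to 2 (plane strain); K_I in units consistent with E."""
    sigma_y = hv_mpa / 3.0
    return K_I**2 / (m * sigma_y * E)
```

Because laser quenching raises the micro-hardness, and hence the inferred yield strength over the crack length, this estimate already shows the direction of the reported result: a smaller CTOD for the quenched specimen, even before the residual stress contribution is included.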
Abstract:
Because so little is known about the structure of membrane proteins, an attempt has been made in this work to develop techniques by which to model them in three dimensions. The procedures devised rely heavily upon the availability of several sequences of a given protein. The modelling procedure is composed of two parts. The first identifies transmembrane regions within the protein sequence on the basis of hydrophobicity, β-turn potential, and the presence of certain amino acid types, specifically, proline and basic residues. The second part of the procedure arranges these transmembrane helices within the bilayer based upon the evolutionary conservation of their residues. Conserved residues are oriented toward other helices and variable residues are positioned to face the surrounding lipids. Available structural information concerning the protein's helical arrangement, including the lengths of interhelical loops, is also taken into account. Rhodopsin, band 3, and the nicotinic acetylcholine receptor have all been modelled using this methodology, and mechanisms of action could be proposed based upon the resulting structures.
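The hydrophobicity step of the first part is typically implemented as a sliding-window hydropathy average. A minimal sketch, assuming the standard Kyte-Doolittle scale; the procedure described here additionally uses β-turn potential and the positions of proline and basic residues, which are not sketched:

```python
# Standard Kyte-Doolittle hydropathy scale (Kyte & Doolittle, 1982)
KD = {'A': 1.8, 'R': -4.5, 'N': -3.5, 'D': -3.5, 'C': 2.5,
      'Q': -3.5, 'E': -3.5, 'G': -0.4, 'H': -3.2, 'I': 4.5,
      'L': 3.8, 'K': -3.9, 'M': 1.9, 'F': 2.8, 'P': -1.6,
      'S': -0.8, 'T': -0.7, 'W': -0.9, 'Y': -1.3, 'V': 4.2}

def hydropathy_profile(seq, window=19):
    """Mean hydropathy over a sliding window centered on each residue.
    Sustained stretches above roughly 1.6 are candidate transmembrane helices."""
    half = window // 2
    return [sum(KD[a] for a in seq[i - half:i + half + 1]) / window
            for i in range(half, len(seq) - half)]
```

Averaging such profiles over several aligned sequences of the same protein, as the procedure requires, sharpens the candidate regions and supplies the residue-conservation signal used in the second, helix-packing part.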
Specific residues in the rhodopsin and iodopsin sequences were identified, which may regulate the proteins' wavelength selectivities. A hinge-like motion of helices M3, M4, and M5 with respect to the rest of the protein was proposed to result in the activation of transducin, the G-protein associated with rhodopsin. A similar mechanism is also proposed for signal transduction by the muscarinic acetylcholine and β-adrenergic receptors.
The nicotinic acetylcholine receptor was modelled with four transmembrane helices per subunit and with the five homologous M2 helices forming the cation channel. Putative channel-lining residues were identified, and a mechanism of channel-opening based upon the concerted, tangential rotation of the M2 helices was proposed.
Band 3, the anion exchange protein found in the erythrocyte membrane, was modelled with 14 transmembrane helices. In general the pathway of anion transport can be viewed as a channel composed of six helices that contains a single hydrophobic restriction. This hydrophobic region will not allow the passage of charged species, unless they are part of an ion-pair. An arginine residue located near this restriction is proposed to be responsible for anion transport. When ion-paired with a transportable anion it rotates across the barrier and releases the anion on the other side of the membrane. A similar process returns it to its original position. This proposed mechanism, based on the three-dimensional model, can account for the passive, electroneutral, anion exchange observed for band 3. Dianions can be transported through a similar mechanism with the additional participation of a histidine residue. Both residues are located on M10.
Abstract:
This thesis presents a study of the dynamical, nonlinear interaction of colliding gravitational waves, as described by classical general relativity. It is focused mainly on two fundamental questions: First, what is the general structure of the singularities and Killing-Cauchy horizons produced in the collisions of exactly plane-symmetric gravitational waves? Second, under what conditions will the collisions of almost-plane gravitational waves (waves with large but finite transverse sizes) produce singularities?
In the work on the collisions of exactly-plane waves, it is shown that Killing horizons in any plane-symmetric spacetime are unstable against small plane-symmetric perturbations. It is thus concluded that the Killing-Cauchy horizons produced by the collisions of some exactly plane gravitational waves are nongeneric, and that generic initial data for the colliding plane waves always produce "pure" spacetime singularities without such horizons. This conclusion is later proved rigorously (using the full nonlinear theory rather than perturbation theory), in connection with an analysis of the asymptotic singularity structure of a general colliding plane-wave spacetime. This analysis also proves that asymptotically the singularities created by colliding plane waves are of inhomogeneous-Kasner type; the asymptotic Kasner axes and exponents of these singularities in general depend on the spatial coordinate that runs tangentially to the singularity in the non-plane-symmetric direction.
In the work on collisions of almost-plane gravitational waves, first some general properties of single almost-plane gravitational-wave spacetimes are explored. It is shown that, by contrast with an exact plane wave, an almost-plane gravitational wave cannot have a propagation direction that is Killing; i.e., it must diffract and disperse as it propagates. It is also shown that an almost-plane wave cannot be precisely sandwiched between two null wavefronts; i.e., it must leave behind tails in the spacetime region through which it passes. Next, the occurrence of spacetime singularities in the collisions of almost-plane waves is investigated. It is proved that if two colliding, almost-plane gravitational waves are initially exactly plane-symmetric across a central region of sufficiently large but finite transverse dimensions, then their collision produces a spacetime singularity with the same local structure as in the exact-plane-wave collision. Finally, it is shown that a singularity still forms when the central regions are only approximately plane-symmetric initially. Stated more precisely, it is proved that if the colliding almost-plane waves are initially sufficiently close to being exactly plane-symmetric across a bounded central region of sufficiently large transverse dimensions, then their collision necessarily produces spacetime singularities. In this case, however, nothing is yet known about the local and global structures of these singularities.
Abstract:
In this work, computationally efficient approximate methods are developed for analyzing uncertain dynamical systems. Uncertainties in both the excitation and the modeling are considered and examples are presented illustrating the accuracy of the proposed approximations.
For nonlinear systems under uncertain excitation, methods are developed to approximate the stationary probability density function and statistical quantities of interest. The methods are based on approximating solutions to the Fokker-Planck equation for the system and differ from traditional methods in which approximate solutions to stochastic differential equations are found. The new methods require little computational effort and examples are presented for which the accuracy of the proposed approximations compare favorably to results obtained by existing methods. The most significant improvements are made in approximating quantities related to the extreme values of the response, such as expected outcrossing rates, which are crucial for evaluating the reliability of the system.
Laplace's method of asymptotic approximation is applied to approximate the probability integrals which arise when analyzing systems with modeling uncertainty. The asymptotic approximation reduces the problem of evaluating a multidimensional integral to solving a minimization problem and the results become asymptotically exact as the uncertainty in the modeling goes to zero. The method is found to provide good approximations for the moments and outcrossing rates for systems with uncertain parameters under stochastic excitation, even when there is a large amount of uncertainty in the parameters. The method is also applied to classical reliability integrals, providing approximations in both the transformed (independently, normally distributed) variables and the original variables. In the transformed variables, the asymptotic approximation yields a very simple formula for approximating the value of SORM integrals. In many cases, it may be computationally expensive to transform the variables, and an approximation is also developed in the original variables. Examples are presented illustrating the accuracy of the approximations and results are compared with existing approximations.
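The reduction of the integral to a minimization problem described above is the standard multidimensional Laplace approximation. A minimal numerical sketch; the function names and the finite-difference Hessian are ours, not the thesis's implementation:

```python
import numpy as np
from scipy.optimize import minimize

def hessian_fd(f, x, eps=1e-4):
    """Central finite-difference Hessian of a scalar function f at x."""
    x = np.atleast_1d(np.asarray(x, float))
    d = x.size
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = eps
            ej = np.zeros(d); ej[j] = eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4.0 * eps**2)
    return H

def laplace_approx(g, theta0, N):
    """Laplace approximation to I(N) = integral of exp(-N * g(theta)) d(theta):
        I ~ exp(-N g*) * (2*pi/N)**(d/2) * det(Hess g at the minimizer)**(-1/2),
    asymptotically exact as N -> infinity."""
    res = minimize(g, theta0)   # the multidimensional integral becomes a minimization
    d = np.atleast_1d(theta0).size
    H = hessian_fd(g, res.x)
    return np.exp(-N * res.fun) * (2.0 * np.pi / N)**(d / 2) / np.sqrt(np.linalg.det(H))
```

In the reliability setting, g encodes the log of the integrand of the probability integral, and the large asymptotic parameter corresponds to the modeling uncertainty shrinking to zero, which is why the approximation becomes exact in that limit.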
Abstract:
The negative impacts of ambient aerosol particles, or particulate matter (PM), on human health and climate are well recognized. However, owing to the complexity of aerosol particle formation and chemical evolution, emissions control strategies remain difficult to develop in a cost effective manner. In this work, three studies are presented to address several key issues currently stymieing California's efforts to continue improving its air quality.
Gas-phase organic mass (GPOM) and CO emission factors are used in conjunction with measured enhancements in oxygenated organic aerosol (OOA) relative to CO to quantify the significant lack of closure between expected and observed organic aerosol concentrations attributable to fossil-fuel emissions. Two possible conclusions emerge from the analysis if consistency with the ambient organic data is to be achieved: (1) vehicular emissions are not a dominant source of anthropogenic fossil SOA in the Los Angeles Basin, or (2) the ambient SOA mass yields used to determine the SOA formation potential of vehicular emissions are substantially higher than those derived from laboratory chamber studies. Additional laboratory chamber studies confirm that, owing to vapor-phase wall loss, the SOA mass yields currently used in virtually all 3D chemical transport models are biased low by as much as a factor of 4. Furthermore, predictions from the Statistical Oxidation Model suggest that this bias could be as high as a factor of 8 if the influence of the chamber walls could be removed entirely.
Once vapor-phase wall loss has been accounted for in a new suite of laboratory chamber experiments, the SOA parameterizations within atmospheric chemical transport models should also be updated. To address the numerical challenges of implementing the next generation of SOA models in atmospheric chemical transport models, a novel mathematical framework, termed the Moment Method, is designed and presented. Assessment of the Moment Method's strengths and weaknesses provides valuable insight that can guide future development of SOA modules for atmospheric CTMs.
Finally, regional inorganic aerosol formation and evolution is investigated via detailed comparison of predictions from the Community Multiscale Air Quality (CMAQ version 4.7.1) model against a suite of airborne and ground-based meteorological measurements, gas- and aerosol-phase inorganic measurements, and black carbon (BC) measurements over Southern California during the CalNex field campaign in May/June 2010. Results suggest that continuing to target sulfur emissions in the hope of reducing ambient PM concentrations may not be the most effective strategy for Southern California. Instead, targeting dairy emissions is likely to be an effective strategy for substantially reducing ammonium nitrate concentrations in the eastern part of the Los Angeles Basin.
Abstract:
The interaction between the integrin Mac-1 (macrophage-1 antigen, complement receptor 3) and intercellular adhesion molecule-1 (ICAM-1), which is tightly controlled by the ligand-binding activity of Mac-1, is central to the regulation of neutrophil adhesion in host defense. Several "inside-out" signals and extracellular metal ions or antibodies have been found to activate Mac-1, resulting in increased adhesiveness of Mac-1 to its ligands. However, the molecular basis for Mac-1 activation is not yet well understood. In this work, we have carried out a single-molecule study of the Mac-1/ICAM-1 interaction force in living cells by atomic force microscopy (AFM). Our results show that the binding probability and adhesion force of Mac-1 with ICAM-1 increase upon Mac-1 activation. Moreover, by comparing the dynamic force spectra of different Mac-1 mutants, we infer that Mac-1 activation is governed by the downward movement of its α7 helix.
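Dynamic force spectra of the kind compared here are commonly interpreted with the Bell-Evans model, in which the most probable rupture force grows logarithmically with the loading rate. A minimal sketch under that standard interpretation, not the paper's own analysis:

```python
import numpy as np

KBT = 4.11e-21  # thermal energy at ~298 K, in joules

def bell_evans_rupture_force(loading_rate, x_beta, k_off):
    """Most probable rupture force versus loading rate r (N/s):
        F* = (kBT / x_beta) * ln(r * x_beta / (k_off * kBT)).
    x_beta: distance to the transition state (m); k_off: zero-force off-rate (1/s).
    Fitting F* against ln(r) gives x_beta from the slope and k_off from the intercept."""
    return (KBT / x_beta) * np.log(loading_rate * x_beta / (k_off * KBT))
```

Shifts in the fitted x_beta and k_off between mutants are what allow conformational conclusions, such as the proposed α7-helix movement, to be drawn from force spectra.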
Biophysical and network mechanisms of high frequency extracellular potentials in the rat hippocampus
Abstract:
A fundamental question in neuroscience is how distributed networks of neurons communicate and coordinate dynamically and specifically. Several models propose that oscillating local networks can transiently couple to each other through phase-locked firing. Coherence of local field potentials (LFPs) between synaptically connected regions is often presented as evidence for such coupling. However, the physiological correlates of LFP signals depend on many anatomical and physiological factors, and how the underlying neural processes collectively generate features at different spatiotemporal scales is poorly understood. High frequency oscillations in the hippocampus, including gamma rhythms (30-100 Hz), which are organized by the theta oscillations (5-10 Hz) during active exploration and REM sleep, as well as sharp wave-ripples (SWRs, 140-200 Hz) during immobility or slow wave sleep, have each been associated with various aspects of learning and memory. Deciphering their physiology and functional consequences is crucial to understanding the operation of the hippocampal network.
We investigated the origins and coordination of high frequency LFPs in the hippocampo-entorhinal network using both biophysical models and analyses of large-scale recordings in behaving and sleeping rats. We found that the synchronization of pyramidal cell spikes substantially shapes, or even dominates, the electrical signature of SWRs in area CA1 of the hippocampus. The precise mechanisms coordinating this synchrony are still unresolved, but they appear to also affect CA1 activity during theta oscillations. The input to CA1, which often arrives in the form of gamma-frequency waves of activity from area CA3 and layer 3 of entorhinal cortex (EC3), did not strongly influence the timing of CA1 pyramidal cells. Rather, our data are more consistent with local network interactions governing pyramidal cells' spike timing during the integration of their inputs. Furthermore, the relative timing of input from EC3 and CA3 during the theta cycle matched that found in previous work to engage mechanisms for synapse modification and active dendritic processes. Our work demonstrates how local networks interact with upstream inputs to generate a coordinated hippocampal output during behavior and sleep, in the form of theta-gamma coupling and SWRs.
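The frequency bands named above are operationally defined by filtering, and SWR detection in large-scale recordings is commonly done by thresholding the ripple-band envelope. A minimal sketch of that common heuristic, not necessarily the thesis's exact detection pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def ripple_band_envelope(lfp, fs, band=(140.0, 200.0)):
    """Band-pass the LFP in the ripple band and return the analytic-signal envelope.
    Candidate SWR events are typically segments where the envelope exceeds a
    multiple (e.g. 3-5x) of its standard deviation above the mean."""
    b, a = butter(3, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    filtered = filtfilt(b, a, lfp)  # zero-phase filtering preserves event timing
    return np.abs(hilbert(filtered))
```

The same machinery with band=(30, 100) or band=(5, 10) isolates the gamma and theta components whose relative timing (theta-gamma coupling) the analyses examine.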
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We first look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
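For contrast with EC2, the Information Gain baseline mentioned above is easy to state: greedily pick the test with the largest expected reduction in posterior entropy over hypotheses. A minimal sketch of that baseline; BROAD's EC2 criterion itself is more involved and is not reproduced here:

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

def next_test_info_gain(prior, likelihoods):
    """Greedy adaptive design with the Information Gain criterion.
    prior: P(hypothesis), shape (H,).
    likelihoods[t, h, y] = P(response y | hypothesis h, test t).
    Returns the index of the test with maximal expected entropy reduction."""
    n_tests, _, n_responses = likelihoods.shape
    h0 = entropy(prior)
    best_t, best_gain = 0, -np.inf
    for t in range(n_tests):
        gain = h0
        for y in range(n_responses):
            p_y = (prior * likelihoods[t, :, y]).sum()  # marginal response probability
            if p_y > 0:
                posterior = prior * likelihoods[t, :, y] / p_y
                gain -= p_y * entropy(posterior)        # expected posterior entropy
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t
```

The thesis's point is that this intuitive criterion can fail under noisy responses, whereas EC2's adaptive submodularity yields guarantees relative to the Bayes-optimal testing sequence.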
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting; the candidate discount functions are sketched below. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
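For reference, these discount functions take simple closed forms. A minimal sketch using the conventional parameterizations; the thesis's own notation, e.g. its (α, β) quasi-hyperbolic form, may differ:

```python
import numpy as np

def discount(t, model, **p):
    """Discount factor D(t) for the time-preference models compared in the experiment."""
    t = np.asarray(t, float)
    if model == 'exponential':             # D(t) = delta**t
        return p['delta']**t
    if model == 'hyperbolic':              # D(t) = 1 / (1 + k*t)
        return 1.0 / (1.0 + p['k'] * t)
    if model == 'quasi_hyperbolic':        # D(0) = 1, D(t) = beta * delta**t (present bias)
        return np.where(t == 0, 1.0, p['beta'] * p['delta']**t)
    if model == 'generalized_hyperbolic':  # D(t) = (1 + a*t)**(-b/a)
        return (1.0 + p['a'] * t)**(-p['b'] / p['a'])
    raise ValueError(f"unknown model: {model}")
```

A subject choosing between a payoff x now and y at delay t is then classified by which model's D(t) best rationalizes choices of the form x versus D(t) * y across the adaptively chosen tests.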
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity alone would explain; even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
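A reference-dependent price term can be added to a standard multinomial logit to capture this asymmetry. A minimal sketch of such a specification, illustrative rather than the estimated model from the retailer data:

```python
import numpy as np

def logit_choice_prob(prices, ref_prices, beta, eta, lam):
    """Multinomial logit with a loss-averse, reference-dependent price term.
    Gains (price below reference) enter utility with weight eta; losses
    (price above reference) with weight eta*lam, where lam > 1 is loss aversion."""
    prices, ref_prices = np.asarray(prices, float), np.asarray(ref_prices, float)
    gain = np.maximum(ref_prices - prices, 0.0)
    loss = np.maximum(prices - ref_prices, 0.0)
    v = -beta * prices + eta * gain - eta * lam * loss  # systematic utility
    ev = np.exp(v - v.max())                            # numerically stable softmax
    return ev / ev.sum()
```

With lam > 1, ending a discount (the reference price having anchored at the discounted level) makes the restored price read as a loss, shifting predicted demand toward close substitutes, which is exactly the pattern tested against the retailer data.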
In future work, BROAD can be widely applied for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
The superspace approach provides a manifestly supersymmetric formulation of supersymmetric theories. For N=1 supersymmetry one can use either constrained or unconstrained superfields for such a formulation. Only the unconstrained formulation is suitable for quantum calculations. Until now, all interacting N>1 theories have been written using constrained superfields. No solutions of the nonlinear constraint equations were known.
In this work, we first review the superspace approach and its relation to conventional component methods. The difference between constrained and unconstrained formulations is explained, and the origin of the nonlinear constraints in supersymmetric gauge theories is discussed. It is then shown that these nonlinear constraint equations can be solved by transforming them into linear equations. The method is shown to work for N=1 Yang-Mills theory in four dimensions.
N=2 Yang-Mills theory is formulated in constrained form in six-dimensional superspace, which can be dimensionally reduced to four-dimensional N=2 extended superspace. We construct a superfield calculus for six-dimensional superspace, and show that known matter multiplets can be described very simply. Our method for solving constraints is then applied to the constrained N=2 Yang-Mills theory, and we obtain an explicit solution in terms of an unconstrained superfield. The solution of the constraints can easily be expanded in powers of the unconstrained superfield, and a similar expansion of the action is also given. A background-field expansion is provided for any gauge theory in which the constraints can be solved by our methods. Some implications of this for superspace gauge theories are briefly discussed.