934 results for Experimental Methods.
Abstract:
The present study focuses on exploring air-assisted atomization strategies for effective atomization of high-viscosity biofuels, such as pure plant oils (PPOs). The first part of the study concerns the application of a novel air-assisted impinging jet atomization for continuous spray applications, and the second part concerns transient spray applications. The particle/droplet imaging analysis (PDIA) technique, along with direct imaging methods, is used for spray characterization. In the first part, effective atomization of Jatropha PPO is demonstrated at gas-to-liquid ratios (GLRs) on the order of 0.1. The effect of liquid and gas flow rates on the spray characteristics is evaluated, and the results indicate that a Sauter mean diameter (SMD) of 50 μm is achieved with GLRs as low as 0.05. In the second part of the study, a commercially available air-assisted transient atomizer is evaluated using Jatropha PPO. The effect of the pressure difference across the air injector and of the ambient gas pressure on the liquid spray characteristics is studied. The results indicate that it is possible to achieve the same level of atomization of Jatropha oil as of diesel fuel by operating the atomizer at a higher pressure difference. Specifically, an SMD of 44 μm is obtained for the Jatropha oil using injection pressures of <1 MPa. A further interesting observation associated with this injector is the near constancy of a nondimensional spray penetration rate for the Jatropha oil spray.
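For readers unfamiliar with the two spray metrics quoted above, their standard definitions (not spelled out in the abstract; the droplet counts n_i and diameters d_i below are generic symbols) are:

```latex
% Sauter mean diameter of a droplet ensemble and gas-to-liquid ratio by mass flow
D_{32} \;=\; \frac{\sum_i n_i d_i^{3}}{\sum_i n_i d_i^{2}},
\qquad
\mathrm{GLR} \;=\; \frac{\dot{m}_{\mathrm{gas}}}{\dot{m}_{\mathrm{liquid}}}
```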
Abstract:
The space experimental device for testing Marangoni drop migration is discussed in the present paper. The experiment is one of the spaceship projects of China. In comparison with similar devices, it is able to complete all the scientific experiments by both automatic control and telescience methods. It can not only perform drop migration experiments at large Reynolds numbers but also includes an equal-thickness interference system.
Abstract:
In HCCI engines, the Air/Fuel Ratio (AFR) and Residual Gas Fraction (RGF) are difficult to control during the SI-HCCI-SI transition, and this may result in incomplete combustion and/or high pressure-rise rates. As a result, there may be undesirably high engine load fluctuations. The objectives of this work are to further understand this process and to develop control methods that minimize these load fluctuations. This paper presents data on instantaneous AFR and RGF measurements, both taken by novel experimental techniques. The data provide insight into the cyclic AFR and RGF fluctuations during the switch. These results suggest that the relatively slow change in the intake Manifold Air Pressure (MAP) and the actuation time of the Variable Valve Timing (VVT) are the main causes of the undesired AFR and RGF fluctuations, and hence of an unacceptable Net IMEP (NIMEP) fluctuation. We also found large cylinder-to-cylinder AFR variations during the transition. Therefore, besides throttle opening control and VVT shifting, cycle-by-cycle and individual-cylinder fuel injection control is necessary to achieve a smooth transition. The control method was developed and implemented in a test engine, and the result was a considerably reduced NIMEP fluctuation during the mode switch. The instantaneous AFR and RGF measurements could furthermore be adopted to develop more sophisticated control methods for SI-HCCI-SI transitions. © 2010 SAE International.
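As context for the two controlled quantities, the usual mass-ratio definitions (assumed here, since the abstract does not state them) are:

```latex
\mathrm{AFR} \;=\; \frac{m_{\mathrm{air}}}{m_{\mathrm{fuel}}},
\qquad
\mathrm{RGF} \;=\; \frac{m_{\mathrm{residual}}}{m_{\mathrm{residual}} + m_{\mathrm{fresh}}},
```

where m_residual is the burned gas retained from the previous cycle and m_fresh is the newly inducted charge.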
Abstract:
In this paper, a series of experiments was conducted in a U-shaped oscillatory flow tunnel, which provides a more realistic simulation than previous actuator loading methods. Based on the experimental data of pipe displacement under two different constraint conditions (freely laid pipelines and anti-rolling pipelines), three characteristic times in the process of a pipeline losing stability are identified. The effects of sand size on the pipeline lateral stability are examined for freely laid pipelines. Empirical relationships between the non-dimensional pipeline weight (G) and the Froude number (Fr_b) are established for the different constraint conditions, which provides a guide for engineering practice. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
In this paper, we study the issues of modeling, numerical methods, and simulation with comparison to experimental data for the particle-fluid two-phase flow problem involving a solid-liquid mixed medium. The physical situation being considered is a pulsed liquid fluidized bed. The mathematical model is based on the assumptions of one-dimensional flow, incompressibility of both the particle and fluid phases, equal particle diameters, and negligible wall friction on both phases. The model consists of a set of coupled differential equations describing the conservation of mass and momentum in both phases, with coupling and interaction between the two phases. We demonstrate conditions under which the system is either mathematically well posed or ill posed. We consider the general model with additional physical viscosities and/or additional virtual mass forces, both of which stabilize the system. Two numerical methods, one first-order accurate and the other fifth-order accurate, are used to solve the models. A change-of-variable technique effectively handles the changing domain and boundary conditions. The numerical methods are demonstrated to be stable and convergent through careful numerical experiments. Simulation results for a realistic pulsed liquid fluidized bed are provided and compared with experimental data. (C) 2004 Elsevier Ltd. All rights reserved.
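The abstract does not reproduce the governing equations; a hedged sketch of the generic one-dimensional, incompressible two-fluid balance laws of the type it describes (with s and f denoting the solid and fluid phases, and M_k a placeholder for the interphase drag, virtual-mass, and viscous terms) is:

```latex
\frac{\partial \alpha_k}{\partial t} + \frac{\partial (\alpha_k u_k)}{\partial x} = 0,
\qquad
\rho_k\!\left[\frac{\partial (\alpha_k u_k)}{\partial t} + \frac{\partial (\alpha_k u_k^{2})}{\partial x}\right]
= -\alpha_k \frac{\partial p}{\partial x} - \alpha_k \rho_k g + M_k,
\qquad k \in \{s, f\},\quad \alpha_s + \alpha_f = 1.
```

Well-posedness hinges on the closure terms collected in M_k; as the abstract notes, the added physical viscosities and virtual-mass forces are the terms that stabilize the system.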
Abstract:
The transition processes from steady flow to oscillatory flow in a liquid bridge of the half floating zone are studied experimentally. Two methods of non-contact diagnostics are developed to measure the distribution of critical Marangoni numbers defined by the onset of oscillation at the free surface of the liquid bridge. The experimental results obtained for both the upper-rod-heated and lower-rod-heated cases agree with the prediction of Rayleigh's instability theory. The sensitive relations between a relatively fat or slender liquid bridge and the onset of oscillatory convection are also discussed to give insight into the pressure distribution near the free surface. The experiments were performed in a small liquid bridge, where the Bond number is much smaller than 1, so the results can be used to simulate experiments in the microgravity environment.
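For reference, with the usual symbol choices (characteristic length L, applied temperature difference ΔT, surface tension σ, dynamic viscosity μ, thermal diffusivity κ, density ρ), the two dimensionless groups referred to above are conventionally defined as:

```latex
Ma \;=\; \frac{\left|\partial\sigma/\partial T\right|\,\Delta T\,L}{\mu\,\kappa},
\qquad
Bo \;=\; \frac{\rho\,g\,L^{2}}{\sigma}.
```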
Abstract:
Validated by comparison with DNS, a numerical database of turbulent channel flows is generated by Large Eddy Simulation (LES). Three conventional techniques, namely the uv quadrant 2, VITA, and mu-level techniques, are applied to the identification of turbulent bursts. With a grouping parameter introduced by Bogard & Tiederman (1986) or Luchik & Tiederman (1987), multiple ejections detected by these techniques that originate from a single burst can be grouped into a single-burst event. The results are compared with experimental results, showing that all techniques yield a reasonable average burst period. However, the uv quadrant 2 and mu-level techniques are found to be superior to VITA in having a larger threshold-independent range.
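As an illustration of the kind of detection the abstract refers to, the sketch below implements a minimal uv quadrant 2 detector together with a simple grouping step; the thresholding convention (|u'v'| > H·u_rms·v_rms in the second quadrant), the grouping-time parameter, and all variable names are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def quadrant2_flags(u, v, H=1.0):
    """Flag samples in quadrant 2 (u' < 0, v' > 0) whose |u'v'| exceeds
    H times the product of the rms fluctuations (a common convention)."""
    up = u - u.mean()                      # streamwise fluctuation
    vp = v - v.mean()                      # wall-normal fluctuation
    threshold = H * up.std() * vp.std()
    return (up < 0) & (vp > 0) & (np.abs(up * vp) > threshold)

def group_ejections(flags, dt, max_gap):
    """Group successive ejections separated by less than max_gap seconds
    into single-burst events; return the starting index of each burst."""
    idx = np.flatnonzero(flags)
    bursts, last = [], None
    for i in idx:
        if last is None or (i - last) * dt > max_gap:
            bursts.append(i)               # a new burst starts here
        last = i
    return bursts

# Example on synthetic data: average burst period = record length / number of bursts.
rng = np.random.default_rng(0)
u, v, dt = rng.standard_normal(10000), rng.standard_normal(10000), 1e-3
bursts = group_ejections(quadrant2_flags(u, v), dt, max_gap=0.05)
print(len(u) * dt / max(len(bursts), 1))
```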
Abstract:
The work presented here is part of a larger study to identify novel technologies and biomarkers for early Alzheimer disease (AD) detection, and it focuses on evaluating the suitability of a new approach for early AD diagnosis by non-invasive methods. The purpose is to examine, in a pilot study, the potential of applying intelligent algorithms to speech features obtained from suspected patients in order to contribute to improving the diagnosis of AD and of its degree of severity. In this sense, Artificial Neural Networks (ANN) have been used for the automatic classification of the two classes (AD and control subjects). Two aspects of human behaviour have been analyzed for feature selection: Spontaneous Speech and Emotional Response. Not only linear features but also non-linear ones, such as Fractal Dimension, have been explored. The approach is non-invasive, low cost, and free of side effects. The experimental results obtained were very satisfactory and promising for the early diagnosis and classification of AD patients.
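A minimal sketch of the two-class ANN classification described above, written with scikit-learn; the feature matrix, labels, and network size below are synthetic placeholders rather than the study's actual speech features or architecture.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder data: one row per speech segment with acoustic features
# (e.g. energy, pause statistics, fractal dimension); label 1 = AD, 0 = control.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 12))
y = rng.integers(0, 2, size=60)

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
print("mean CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```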
Abstract:
We study quantum state tomography, entanglement detection and channel noise reconstruction of propagating quantum microwaves via dual-path methods. The presented schemes make use of the following key elements: propagation channels, beam splitters, linear amplifiers and field quadrature detectors. Remarkably, our methods are tolerant to the ubiquitous noise added to the signals by phase-insensitive microwave amplifiers. Furthermore, we analyse our techniques with numerical examples and experimental data, and compare them with the scheme developed in Eichler et al (2011 Phys. Rev. Lett. 106 220503; 2011 Phys. Rev. Lett. 107 113601), based on a single path. Our methods provide key toolbox components that may pave the way towards quantum microwave teleportation and communication protocols.
Abstract:
In this study, the vortex-induced vibrations of a cylinder near a rigid plane boundary in a steady flow are studied experimentally. The phenomenon of vortex-induced vibrations of the cylinder near the rigid plane boundary is reproduced in the flume. The vortex shedding frequency and mode are also measured by means of a hot-film velocimeter and hydrogen bubbles. A parametric study is carried out to investigate the influences of the reduced velocity, gap-to-diameter ratio, stability parameter, and mass ratio on the amplitude and frequency responses of the cylinder. The experimental results indicate that: (1) the Strouhal number (St) is around 0.2 for the stationary cylinder near a plane boundary in the sub-critical flow regime; (2) with increasing gap-to-diameter ratio (e_0/D), the amplitude ratio (A/D) gets larger, but the frequency ratio (f/f_n) varies only slightly for larger values of e_0/D (e_0/D > 0.66 in this study); (3) there is a clear difference in the amplitude and frequency responses of the cylinder between the larger gap-to-diameter ratios (e_0/D > 0.66) and the smaller ones (e_0/D < 0.3); (4) the vibration of the cylinder occurs more easily, and the range of vibration in terms of the V_r number becomes more extensive, with decreasing stability parameter, but the frequency response is affected only slightly by the stability parameter; (5) with decreasing mass ratio, the width of the lock-in range in terms of V_r and the frequency ratio (f/f_n) both become larger.
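The non-dimensional groups used in the list above follow the usual VIV conventions (free-stream velocity U, cylinder diameter D, shedding frequency f_s, natural frequency f_n, vibration amplitude A, gap e_0):

```latex
St = \frac{f_s D}{U}, \qquad
V_r = \frac{U}{f_n D}, \qquad
\frac{A}{D}, \qquad \frac{f}{f_n}, \qquad \frac{e_0}{D}.
```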
Abstract:
An experimental system to study hydrate dissociation in porous media is built, and experiments on hydrate dissociation by depressurization are carried out. A mathematical model is developed to simulate hydrate dissociation by depressurization in hydrate-bearing porous media. The model can be used to analyze the effects of multiphase fluid flow, the kinetic and endothermic processes of hydrate dissociation, ice-water phase equilibrium, permeability variation, and convection and conduction on the hydrate dissociation and on gas and water production. The numerical results agree well with the experimental results, which validates our mathematical model. For a 3-D hydrate reservoir of Class 3, the evolutions of pressure, temperature, and saturations are elucidated, and the effects of the main parameters on gas and water rates are analyzed. The numerical results show that gas can be produced effectively from the hydrate reservoir in the first stage of depressurization. Afterwards, methods such as thermal stimulation or inhibitor injection should be considered because of the depletion of formation energy. The numerical results for a 3-D hydrate reservoir of Class 1 show that the overlying gas hydrate zone can markedly enhance the gas rate and prolong the life span of the gas reservoir.
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise, or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
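To make the adaptive loop concrete, here is a deliberately simplified sketch of sequential Bayesian test selection. It is not the thesis's BROAD implementation: it greedily maximizes one-step information gain rather than the EC2 objective, and the likelihood array and its indexing are assumptions for illustration.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def most_informative_test(prior, likelihood):
    """Pick the test with the largest expected reduction in posterior entropy.
    likelihood[k, t, r] = P(response r | theory k, test t); prior[k] = P(theory k)."""
    n_theories, n_tests, n_responses = likelihood.shape
    best_t, best_gain = 0, -np.inf
    for t in range(n_tests):
        p_r = prior @ likelihood[:, t, :]          # predictive P(response)
        expected_H = sum(
            p_r[r] * entropy(prior * likelihood[:, t, r] / p_r[r])
            for r in range(n_responses) if p_r[r] > 0
        )
        gain = entropy(prior) - expected_H
        if gain > best_gain:
            best_t, best_gain = t, gain
    return best_t

def update_beliefs(prior, likelihood, t, r):
    """Bayes update of the beliefs over theories after observing response r to test t."""
    post = prior * likelihood[:, t, r]
    return post / post.sum()
```

In use, one alternates most_informative_test and update_beliefs until the posterior concentrates on a single theory; BROAD replaces the information-gain objective with EC2, whose adaptive submodularity is what yields the near-optimality guarantees and the accelerated (lazy) greedy speedups described above.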
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of the CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
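For reference, the discount functions being compared usually take the following standard forms (parameterizations vary across papers, and the thesis's own (α, β) notation for the quasi-hyperbolic model may differ from the (β, δ) convention shown here):

```latex
D_{\text{exp}}(t) = \delta^{t}, \qquad
D_{\text{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\text{quasi-hyp}}(t) =
\begin{cases} 1, & t = 0,\\ \beta\,\delta^{t}, & t > 0, \end{cases} \qquad
D_{\text{gen-hyp}}(t) = (1 + \alpha t)^{-\beta/\alpha}.
```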
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioral theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
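Schematically, the kind of reference-dependent, loss-averse utility that can be embedded in a discrete choice (logit) model looks as follows; this is the textbook prospect-theory value function with loss-aversion parameter λ > 1, not necessarily the exact specification estimated on the retailer data:

```latex
v(x \mid r) =
\begin{cases}
(x - r)^{\alpha}, & x \ge r,\\
-\lambda\,(r - x)^{\beta}, & x < r,
\end{cases}
\qquad
P(\text{choose } j) = \frac{e^{\,v_j}}{\sum_k e^{\,v_k}},
```

where r is the reference point, for example the previously observed price of the item.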
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preferences and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
The problem of finding the depths of glaciers and the current methods are discussed briefly. Radar methods are suggested as a possible improvement for, or adjunct to, seismic and gravity survey methods. The feasibility of propagating electromagnetic waves in ice and the maximum range to be expected are then investigated theoretically with the aid of experimental data on the dielectric properties of ice. It is found that the maximum expected range is great enough to measure the depth of many glaciers at the lower radar frequencies if there is not too much liquid water present. Greater ranges can be attained by going to lower frequencies.
The results are given of two expeditions in two different years to the Seward Glacier in the Yukon Territory. Experiments were conducted on a small valley glacier whose depth was determined by seismic sounding. Many echoes were received but their identification was uncertain. Using the best echoes, a profile was obtained each year, but they were not in exact agreement with each other. It could not be definitely established that echoes had been received from bedrock. Agreement with seismic methods for a considerable number of glaciers would have to be obtained before radar methods could be relied upon. The presence of liquid water in the ice is believed to be one of the greatest obstacles. Besides increasing the attenuation and possibly reflecting energy, it makes it impossible to predict the velocity of propagation. The equipment used was far from adequate for such purposes, so many of the difficulties could be attributed to this. Partly because of this, and the fact that there are glaciers with very little liquid water present, radar methods are believed to be worthy of further research for the exploration of glaciers.
Abstract:
Melting temperature calculation has important applications in the theoretical study of phase diagrams and computational materials screenings. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly.
We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first-principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments.
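The particle insertion method referred to above rests on Widom's identity for the excess chemical potential, which in its standard form (the abstract does not reproduce it) reads:

```latex
\mu_{\mathrm{ex}} \;=\; -\,k_{B}T \,\ln \left\langle e^{-\Delta U / k_{B}T} \right\rangle_{N},
```

where ΔU is the potential-energy change upon inserting a test particle at a random position in an N-particle configuration and the average is taken over the equilibrium ensemble; the cavity-biased sampling mentioned above addresses the fact that in a dense liquid almost all random insertions overlap with existing atoms and contribute negligibly to the average.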
We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which the melting temperature is a design criterion.
We present in detail two examples of refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated in a motivating example of lanthanum zirconate (La2Zr2O7). The close agreement with experiment in the known but structurally complex compound La2Zr2O7 provides good indication that the computation methods described can be used within a computational screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with a general formula TaxHf1-xCy, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered as the highest melting temperature for any solid. Very few measurements of melting point in tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at extreme temperatures. The investigation lets us identify three major chemical factors that contribute to the high melting temperatures. Based on these three factors, we propose and explore a new class of materials, which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.
Abstract:
As is known, copepods play an important role in the nutrition of fish. Therefore, with a view to facilitating research on the quantitative side of feeding, a considerable number of papers have recently appeared devoted to the development of methods for determining the wet weight of these crustaceans. To further facilitate research on the nutrition of fish, it would be of great interest to clarify whether there is some kind of rule governing the growth of the crustaceans during metamorphosis and, if there is such a rule, whether it is possible to determine the length of the larvae at each stage not by measuring them but by using formulae derived on the basis of these rules. This article examines the growth curves of different species of freshwater Copepoda, obtained on the basis of experimental observations in cultures or by measurement of mass material at all stages of development in samples from water bodies. The authors study in particular the ratio of the mean diameter of the eggs to the mean length of the egg-bearing females.