889 results for Drop tests.
Abstract:
The seasonal stability tests of Canova & Hansen (1995) (CH) provide a method complementary to that of Hylleberg et al. (1990) for testing for seasonal unit roots. However, the distribution of the CH tests is unknown in small samples. We present a method to numerically compute critical values and P-values for the CH tests for any sample size and any seasonal periodicity. The method applies not only to the types of seasonality in common use but also to any other.
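A generic way to obtain such critical values is to simulate the test statistic under the null many times and read off empirical quantiles. The sketch below (Python, with a placeholder statistic supplied by the caller; not the authors' specific algorithm) illustrates the idea:

```python
import numpy as np

def mc_critical_values(statistic, n, n_reps=10000, levels=(0.90, 0.95, 0.99), seed=0):
    """Empirical critical values of `statistic` under an i.i.d. Gaussian null.

    `statistic` maps a simulated sample of length `n` to a scalar test value.
    This is a generic Monte Carlo scheme, not the paper's specific method.
    """
    rng = np.random.default_rng(seed)
    draws = np.array([statistic(rng.standard_normal(n)) for _ in range(n_reps)])
    return {lvl: float(np.quantile(draws, lvl)) for lvl in levels}

def mc_p_value(statistic, observed, n, n_reps=10000, seed=0):
    """Right-tail Monte Carlo P-value for an observed test value."""
    rng = np.random.default_rng(seed)
    draws = np.array([statistic(rng.standard_normal(n)) for _ in range(n_reps)])
    return float(np.mean(draws >= observed))
```

The same scheme works for any sample size and any seasonal periodicity, since both enter only through the simulated sample and the statistic.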
Abstract:
Research on two-phase flow and pool boiling heat transfer in microgravity, including ground-based tests, flight experiments, and theoretical analyses, was conducted in the National Microgravity Laboratory/CAS. A semi-theoretical Weber number model was proposed to predict the slug-to-annular flow transition of two-phase gas–liquid flows in microgravity, while the influence of the initial bubble size on the bubble-to-slug flow transition was investigated numerically using the Monte Carlo method. Two-phase flow pattern maps in microgravity were obtained in experiments aboard the Russian space station Mir and aboard the IL-76 reduced-gravity airplane. Mini-scale modeling was also used to simulate the behavior of microgravity two-phase flow on the ground. Pressure drops of two-phase flow in microgravity were measured experimentally and correlated successfully on the basis of their characteristics. Two space experiments on pool boiling phenomena in microgravity were performed aboard Chinese recoverable satellites. Steady pool boiling of R113 on a thin wire with a temperature-controlled heating method was studied aboard RS-22, while quasi-steady pool boiling of FC-72 on a plate was studied aboard SJ-8. Ground-based experiments were also performed both in normal gravity and in short-term microgravity in the Drop Tower Beijing. Only slight enhancement of heat transfer was observed in the wire case, while enhancement at low heat flux and deterioration at high heat flux were observed in the plate case. Lateral motions of vapor bubbles were observed before their departure in microgravity. The relationship between bubble behavior and heat transfer on the plate was analyzed. A semi-theoretical model was also proposed for predicting the bubble departure diameter during pool boiling on wires. The results are intended to aid further investigation in this discipline and the development of two-phase systems for space applications.
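The slug-to-annular criterion in such Weber number models compares gas inertia against surface tension. A minimal sketch, with an illustrative critical value rather than the paper's fitted one:

```python
def gas_weber(rho_g, u_g, d, sigma):
    """Gas Weber number We = rho_g * u_g**2 * d / sigma (SI units)."""
    return rho_g * u_g**2 * d / sigma

def flow_regime(we_g, we_crit=20.0):
    """Classify the regime; we_crit is a placeholder, not the model's value."""
    return "annular" if we_g > we_crit else "slug"
```

For air at roughly 1.2 kg/m^3 flowing at 10 m/s through a 10 mm channel with water (sigma = 0.072 N/m), the gas Weber number is about 17, below the illustrative threshold.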
Abstract:
We present in this paper the results obtained from a parabolic flight campaign regarding the contact angle and the interface behavior of sessile drops created under terrestrial gravity (1g) or in microgravity (μg). This is a preliminary study before further investigations of sessile drop evaporation under microgravity. In this study, drops are created by means of a syringe pump through injection through the substrate. The drops are recorded with a video camera to extract their contact angles. Three fluids were used: de-ionized water, HFE-7100 and FC-72, and two heating surfaces: aluminum and PTFE. The results demonstrate the feasibility of sessile drop creation in microgravity even for low-surface-tension liquids (below 15 mN m^-1) such as FC-72 and HFE-7100. We also show how the contact angle depends on the drop diameter and the gravity level. A second objective of this study is to analyze the drop interface shape in microgravity. The goal of these experiments is to obtain reference data on sessile drop behavior in microgravity for future experiments to be performed with a French-Chinese scientific instrument (IMPACHT).
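When the drop is small enough, or gravity weak enough, that the interface is a spherical cap, the contact angle follows directly from the drop height and the contact-line radius extracted from the video frames. A minimal sketch under that spherical-cap assumption:

```python
import math

def contact_angle_deg(height, base_radius):
    """Contact angle of a spherical-cap sessile drop, in degrees.

    theta = 2 * arctan(h / a), where h is the apex height and a the
    contact-line radius. Valid only when the interface is a spherical
    cap, a reasonable assumption for small drops and in microgravity.
    """
    return math.degrees(2.0 * math.atan2(height, base_radius))
```

A hemispherical drop (height equal to base radius) gives a 90-degree contact angle, as expected.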
Abstract:
Some factors that affect the experimental results of nanoindentation tests, such as the contact depth, contact area, load and loading duration, are analyzed in this article. Combined with the results of finite element numerical simulation, we find that the creep property of the tested material is one of the important factors causing the micron-scale indentation hardness to decrease with increasing indentation depth. Analysis of experimental results at different indentation depths demonstrates that the hardness decrease can be mitigated if the continuous stiffness measurement technique is not adopted; this indicates that the test method itself may also be one of the factors causing the hardness decrease.
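Indentation hardness is conventionally computed from the load-displacement curve via the Oliver-Pharr contact depth; creep during the test inflates the measured depth and biases the hardness low, in line with the effect discussed above. A sketch assuming an ideal Berkovich tip:

```python
def oliver_pharr_hardness(p_max, h_max, stiffness, eps=0.75):
    """Indentation hardness from a load-displacement curve (Oliver-Pharr).

    Contact depth h_c = h_max - eps * P_max / S, with eps ~ 0.75 for a
    Berkovich tip; the projected area uses the ideal Berkovich function
    A = 24.5 * h_c**2 (no tip-rounding correction). SI units throughout.
    """
    h_c = h_max - eps * p_max / stiffness
    area = 24.5 * h_c**2
    return p_max / area
```

Because creep adds displacement at constant load, a larger h_max at the same P_max and S yields a larger contact area and hence a lower apparent hardness.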
Abstract:
The problem of the slow viscous flow of a gas past a sphere is considered. The fluid cannot be treated as incompressible in the limit when the Reynolds number Re and the Mach number M tend to zero in such a way that Re ~ o(M^2). In this case, the lowest-order approximation to the steady Navier-Stokes equations of motion leads to a paradox discovered by Lagerstrom and Chester. This paradox is resolved within the framework of continuum mechanics using the classical slip condition and an iteration scheme that takes into account certain terms in the full Navier-Stokes equations that drop out in the approximation used by the above authors. It is found, however, that the drag predicted by the theory does not agree with R. A. Millikan's classic experiments on sphere drag.
The whole question of the applicability of the Navier-Stokes theory when the Knudsen number M/Re is not small is examined. A new slip condition is proposed. The idea that the Navier-Stokes equations coupled with this condition may adequately describe small Reynolds number flows when the Knudsen number is not too large is examined in some detail. First, a general discussion of asymptotic solutions of the equations for all such flows is given. The theory is then applied to several concrete problems of fluid motion. The deductions from this theory appear to interpret and summarize the results of Millikan over a much wider range of Knudsen numbers (almost up to the free molecular or kinetic limit) than hitherto believed possible by a purely continuum theory. Further experimental tests are suggested and certain interesting applications to the theory of dilute suspensions in gases are noted. Some of the questions raised in the main body of the work are explored further in the appendices.
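A widely used continuum-level summary of Millikan's sphere-drag data is the Stokes drag divided by a slip (Cunningham-type) correction in the Knudsen number. The constants below are the classic Millikan-style fitted values found in the literature, not the slip condition proposed in this work:

```python
import math

def cunningham_drag(mu, radius, velocity, knudsen):
    """Stokes drag with the Cunningham slip correction.

    F = 6*pi*mu*R*U / (1 + Kn*(A + B*exp(-C/Kn))).
    A, B, C are Millikan-type empirical constants; reported values vary
    slightly across the literature.
    """
    A, B, C = 1.257, 0.400, 1.10
    correction = 1.0 + knudsen * (A + B * math.exp(-C / knudsen))
    return 6.0 * math.pi * mu * radius * velocity / correction
```

As Kn tends to zero the correction tends to one and the classical Stokes drag is recovered; as Kn grows the predicted drag falls below the no-slip value.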
Abstract:
Compliant foams are usually characterized by a wide range of desirable mechanical properties, including viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic, and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. The characteristic dimensions of foams thus span multiple length scales, making their mechanical properties difficult to model. Continuum mechanics-based models capture some salient experimental features, like the linear elastic regime followed by a non-linear plateau stress regime, but they lack mesostructural physical detail. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods can capture the physical origins of deformation at smaller scales, but suffer from impractical computational intensity. Capturing deformation at the so-called meso-scale, which describes the phenomenon at a continuum level but with some physical insight, requires developing new theoretical approaches.
A fundamental question that motivates the modeling of foams is: how can the intrinsic material response be extracted from simple mechanical test data, such as the stress vs. strain response? A 3D model was developed to simulate the mechanical response of foam-type materials. Its novel features include a hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011, "Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes," J. Mech. Phys. Solids, 59, pp. 2227–2237; Erratum 60, 1753–1756 (2012)], the property-space exploration was extended to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in the experimental data, such as:
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.
The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of its efficiency in extracting material properties. The uniaxial simulation results proved to reflect a combination of structural and material influences. Of all the deformation paths, flat-punch indentation proved superior, being the most sensitive in capturing the material properties.
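The hardening-softening-hardening material input mentioned above can be sketched as a simple interpolated flow-stress curve; the anchor points here are placeholders, not the calibrated values of the model:

```python
import numpy as np

def hsh_flow_stress(eps):
    """Illustrative hardening-softening-hardening flow-stress input.

    Built by linear interpolation through a few (strain, stress) anchor
    points: initial hardening, softening, a plateau, then re-hardening
    at large strain (densification-like). All values are placeholders.
    """
    strain = [0.0, 0.05, 0.15, 0.50, 0.80]
    stress = [1.0, 1.8, 1.2, 1.1, 3.0]
    return np.interp(eps, strain, stress)
```

Feeding such a curve into the constitutive update reproduces the peak-drop-plateau-rehardening shape typical of foam and VACNT compression data.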
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which inform the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
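The core Bayesian step in such an adaptive design is the posterior update over the competing theories after each observed choice. A minimal sketch of that update step only, not of the EC2 selection criterion:

```python
import numpy as np

def update_posterior(prior, likelihoods, choice):
    """One Bayesian update over competing decision theories.

    `likelihoods[k]` is theory k's predicted probability that the subject
    picks option A on the current test (a logit/softmax response model
    would supply these); `choice` is 1 for A, 0 for B. Returns the
    normalized posterior, which then informs the next test to run.
    """
    likelihoods = np.asarray(likelihoods, dtype=float)
    like = np.where(choice == 1, likelihoods, 1.0 - likelihoods)
    post = np.asarray(prior, dtype=float) * like
    return post / post.sum()
```

Repeating this step over a sequence of informative tests concentrates the posterior on the theory most consistent with the subject's choices.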
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility of strategic manipulation: subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice, and because we find no signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
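The competing discount functions can be written compactly; the parameter values below are illustrative defaults, not estimates from the experiment:

```python
def exponential(d, delta=0.95):
    """Exponential discounting: value of 1 unit delayed by d periods."""
    return delta ** d

def hyperbolic(d, k=0.5):
    """Hyperbolic discounting: 1 / (1 + k*d)."""
    return 1.0 / (1.0 + k * d)

def quasi_hyperbolic(d, beta=0.7, delta=0.95):
    """Present-bias (beta-delta) discounting: beta < 1 penalizes any delay."""
    return 1.0 if d == 0 else beta * delta ** d
```

Present bias shows up as a discrete drop at the first unit of delay: the quasi-hyperbolic discount factor at d = 1 is strictly below the exponential one with the same delta.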
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than its price elasticity explains. More importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
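Loss aversion is usually encoded with a kinked value function that weighs losses more heavily than gains. A sketch using the standard prospect-theory form, with Tversky and Kahneman's commonly cited parameter estimates for illustration (not the values estimated from the retail data):

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function around a reference point of 0.

    v(x) = x**alpha for gains, -lam * (-x)**alpha for losses; lam > 1
    encodes loss aversion. alpha = 0.88 and lam = 2.25 are the classic
    Tversky-Kahneman estimates, used here purely for illustration.
    """
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha
```

The kink at the reference point is what generates the asymmetric demand response: removing a discount registers as a loss, which is felt more strongly than the equivalent gain when the discount was introduced.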
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
A constantly growing number of manufactured substances is under investigation for effectiveness in the feeding of ducks. The author examines the effects of some substances tested previously, as well as substances that had not hitherto been studied. The use of different supplements for late-autumn fattening is studied through various experiments.