997 results for PRECISION EXPERIMENTS
Abstract:
We have evaluated techniques for estimating animal density through direct counts using line transects during 1988-92 in the tropical deciduous forests of Mudumalai Sanctuary in southern India for four species of large herbivorous mammals, namely, chital (Axis axis), sambar (Cervus unicolor), Asian elephant (Elephas maximus) and gaur (Bos gaurus). Density estimates derived from the Fourier Series and the Half-Normal models consistently had the lowest coefficients of variation. These two models also generated similar mean density estimates. For the Fourier Series estimator, appropriate cut-off widths for analysing line transect data for the four species are suggested. Grouping data into various distance classes did not produce any appreciable differences in estimates of mean density or their variances, although model fit was generally better when data were placed in fewer groups. The sampling effort needed to achieve a desired precision (coefficient of variation) in the density estimate is derived. A sampling effort of 800 km of transects yielded a 10% coefficient of variation in the density estimate for chital; for the other species a higher effort was needed to achieve this level of precision. There was no statistically significant relationship between detectability of a group and the size of the group for any species. Density estimates along roads were generally significantly different from those in the interior of the forest, indicating that road-side counts may not be appropriate for most species.
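For readers unfamiliar with the half-normal estimator mentioned above, the Python sketch below illustrates the basic calculation. It is a generic textbook version under simplifying assumptions (no truncation correction, no grouping into distance classes), not the authors' analysis code, and the function and variable names are placeholders.

```python
import numpy as np

def halfnormal_density(perp_distances_m, transect_length_km, mean_group_size):
    """Illustrative half-normal line-transect density estimate (animals per km^2).

    perp_distances_m   : perpendicular sighting distances of detected groups (m)
    transect_length_km : total transect length walked (km)
    mean_group_size    : average number of animals per detected group
    """
    x = np.asarray(perp_distances_m, dtype=float)
    n = x.size
    # MLE of the half-normal detection-function scale: sigma^2 = mean(x^2)
    sigma = np.sqrt(np.mean(x ** 2))
    # Effective strip half-width: mu = integral_0^inf exp(-x^2 / 2 sigma^2) dx = sigma*sqrt(pi/2)
    mu_m = sigma * np.sqrt(np.pi / 2.0)
    L_m = transect_length_km * 1000.0
    group_density_per_m2 = n / (2.0 * L_m * mu_m)
    return group_density_per_m2 * mean_group_size * 1e6   # convert m^-2 to km^-2
```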
Abstract:
Theoretical approaches are of fundamental importance for predicting the potential impact of waste disposal facilities on groundwater contamination. Appropriate design parameters are, in general, estimated by fitting theoretical models to field monitoring or laboratory experimental data. Double-reservoir diffusion (transient through-diffusion) experiments are generally conducted in the laboratory to estimate the mass transport parameters of the proposed barrier material. These design parameters are usually estimated by manual parameter-adjustment (eye-fitting) techniques using software such as Pollute. In this work an automated inverse model is developed to estimate the mass transport parameters from transient through-diffusion experimental data. The proposed inverse model uses a particle swarm optimization (PSO) algorithm, which is inspired by the social behaviour of animals searching for food sources. A finite difference numerical solution of the transient through-diffusion mathematical model is integrated with the PSO algorithm to solve the inverse problem of parameter estimation. The working principle of the new solver is demonstrated by estimating mass transport parameters from published transient through-diffusion experimental data. The estimated values are compared with the values obtained by the existing procedure. The present technique is robust and efficient, and the mass transport parameters are obtained with very good precision in less time.
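The following Python sketch illustrates the kind of PSO loop described above. It assumes a user-supplied `forward_model` that wraps a finite-difference solution of the through-diffusion equations and uses a sum-of-squares misfit; it is a minimal, generic PSO, not the authors' solver, and all names and constants are placeholders.

```python
import numpy as np

def pso_fit(forward_model, observed, bounds, n_particles=30, n_iter=200, seed=0):
    """Minimal PSO sketch for inverse parameter estimation (illustrative only).

    forward_model(params) must return model concentrations at the observation times;
    bounds is a list of (low, high) tuples, one per transport parameter.
    """
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    dim = len(bounds)
    x = lo + rng.random((n_particles, dim)) * (hi - lo)   # particle positions
    v = np.zeros_like(x)                                   # particle velocities

    def cost(p):                                           # sum of squared residuals
        return np.sum((forward_model(p) - observed) ** 2)

    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[np.argmin(pbest_cost)].copy()            # global best position
    w, c1, c2 = 0.7, 1.5, 1.5                              # inertia, cognitive, social weights

    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()

    return gbest, pbest_cost.min()
```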
Abstract:
The two-pion contribution from low energies to the muon magnetic moment anomaly, although small, has a large relative uncertainty since in this region the experimental data on the cross sections are neither sufficient nor precise enough. It is therefore of interest to see whether the precision can be improved by means of additional theoretical information on the pion electromagnetic form factor, which controls the leading-order contribution. In the present paper, we address this problem by exploiting analyticity and unitarity of the form factor in a parametrization-free approach that uses as input the phase in the elastic region, known with high precision from the Fermi-Watson theorem and Roy equations for $\pi\pi$ elastic scattering. The formalism also includes experimental measurements of the modulus in the region 0.65-0.70 GeV, taken from the most recent $e^+e^- \to \pi^+\pi^-$ experiments, and recent measurements of the form factor on the spacelike axis. By combining the results obtained with inputs from CMD2, SND, BABAR, and KLOE, we make the predictions $a_\mu^{\pi\pi,\mathrm{LO}}[2m_\pi,\,0.30~\mathrm{GeV}] = (0.553 \pm 0.004)\times 10^{-10}$ and $a_\mu^{\pi\pi,\mathrm{LO}}[0.30~\mathrm{GeV},\,0.63~\mathrm{GeV}] = (133.083 \pm 0.837)\times 10^{-10}$. These are consistent with other recent determinations and have slightly smaller errors.
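For context, the leading-order two-pion contribution evaluated here is conventionally expressed as a dispersive integral over the modulus squared of the pion form factor. The expressions below are the standard textbook form of that integral, quoted as background only and not as the paper's parametrization-free machinery:

$$a_\mu^{\pi\pi,\mathrm{LO}} = \frac{\alpha^2}{3\pi^2}\int_{4m_\pi^2}^{s_{\max}} \frac{\mathrm{d}s}{s}\, K(s)\, \frac{\beta_\pi^3(s)}{4}\,\lvert F_\pi(s)\rvert^2, \qquad \beta_\pi(s)=\sqrt{1-\frac{4m_\pi^2}{s}},$$

$$K(s)=\int_0^1 \mathrm{d}x\, \frac{x^2(1-x)}{x^2+(1-x)\,s/m_\mu^2}.$$

Restricting the integration to $\sqrt{s}\in[2m_\pi,\,0.30~\mathrm{GeV}]$ or $[0.30,\,0.63]~\mathrm{GeV}$ gives the partial contributions quoted above.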
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which inform the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedup over other methods.
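As an illustration of the greedy test-selection idea, the sketch below implements a noiseless, simplified EC2-style rule in Python: hypotheses are grouped into equivalence classes (theories), and each candidate test is scored by the expected prior weight of between-class edges it cuts. This is a toy version for intuition only; BROAD's actual implementation handles noisy responses and uses an accelerated (lazy) greedy, both of which are omitted here, and all names are placeholders.

```python
import numpy as np
from itertools import combinations

def ec2_greedy(prior, predictions, classes, n_tests_to_run, seed=0):
    """Noiseless EC2-style greedy test selection (illustrative sketch).

    prior       : (H,) prior weights over hypotheses (e.g. parametrized theories)
    predictions : (H, T) array of 0/1 predicted choices of each hypothesis on each test
    classes     : (H,) equivalence-class label (which theory each hypothesis belongs to)
    """
    rng = np.random.default_rng(seed)
    truth = rng.choice(len(prior), p=prior / prior.sum())   # simulated ground-truth subject
    alive = np.ones(len(prior), bool)
    asked = []

    def edge_weight(mask):
        # total weight of edges between still-alive hypotheses from different classes
        idx = np.flatnonzero(mask)
        return sum(prior[i] * prior[j]
                   for i, j in combinations(idx, 2) if classes[i] != classes[j])

    for _ in range(n_tests_to_run):
        best_t, best_gain = None, -1.0
        for t in range(predictions.shape[1]):
            if t in asked:
                continue
            gain = 0.0
            for y in (0, 1):   # expected weight of edges cut by observing outcome y
                consistent = alive & (predictions[:, t] == y)
                p_y = prior[consistent].sum() / max(prior[alive].sum(), 1e-12)
                gain += p_y * (edge_weight(alive) - edge_weight(consistent))
            if gain > best_gain:
                best_t, best_gain = t, gain
        y_obs = predictions[truth, best_t]             # simulated response
        alive &= predictions[:, best_t] == y_obs       # eliminate inconsistent hypotheses
        asked.append(best_t)
    return asked
```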
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice and since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
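For reference, the discount functions of the model families compared here can be written compactly as below; the parametrizations are the standard textbook forms and the symbols are placeholders, not necessarily the thesis's notation.

```python
import numpy as np

def discount(t, model, **p):
    """Discount factor D(t) for the standard discounting families (illustrative)."""
    t = np.asarray(t, dtype=float)
    if model == "exponential":             # D(t) = delta**t
        return p["delta"] ** t
    if model == "hyperbolic":              # D(t) = 1 / (1 + k t)
        return 1.0 / (1.0 + p["k"] * t)
    if model == "quasi_hyperbolic":        # D(0) = 1, D(t) = beta * delta**t for t > 0
        return np.where(t == 0, 1.0, p["beta"] * p["delta"] ** t)
    if model == "generalized_hyperbolic":  # D(t) = (1 + alpha t)**(-beta/alpha)
        return (1.0 + p["alpha"] * t) ** (-p["beta"] / p["alpha"])
    raise ValueError(f"unknown model: {model}")
```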
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
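The sketch below shows one way a reference-dependent, loss-averse price term can enter a multinomial logit choice model, in the spirit of the analysis described above; the functional form and parameter names are illustrative assumptions, not the estimated specification.

```python
import numpy as np

def loss_averse_logit_probs(prices, reference_prices, beta, eta, lam, temp=1.0):
    """Illustrative logit choice probabilities with a reference-dependent price term.

    A price below the reference acts as a gain; a price above it is a loss weighted
    lam (> 1) times more heavily, which is the loss-aversion asymmetry. All
    parameters (beta, eta, lam, temp) are placeholders for this sketch.
    """
    prices = np.asarray(prices, float)
    ref = np.asarray(reference_prices, float)
    gain = np.maximum(ref - prices, 0.0)
    loss = np.maximum(prices - ref, 0.0)
    utility = -beta * prices + eta * (gain - lam * loss)
    expu = np.exp((utility - utility.max()) / temp)   # subtract max for numerical stability
    return expu / expu.sum()                          # multinomial logit over offered items
```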
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
The time series of abundance indices for many groundfish populations, as determined from trawl surveys, are often imprecise and short, causing stock assessment estimates of abundance to be imprecise. To improve precision, prior probability distributions (priors) have been developed for parameters in stock assessment models by using meta-analysis, expert judgment on catchability, and empirically based modeling. This article presents a synthetic approach for formulating priors for rockfish trawl survey catchability (qgross). A multivariate prior for qgross for different surveys is formulated by using 1) a correction factor for bias in estimating fish density between trawlable and untrawlable areas, 2) expert judgment on trawl net catchability, 3) observations from trawl survey experiments, and 4) data on the fraction of population biomass in each of the areas surveyed. The method is illustrated using bocaccio (Sebastes paucispinis) in British Columbia. Results indicate that expert judgment can be updated markedly by observing the catch-rate ratio from different trawl gears in the same areas. The marginal priors for qgross are consistent with empirical estimates obtained by fitting a stock assessment model to the survey data under a noninformative prior for qgross. Despite high prior uncertainty (prior coefficients of variation ≥0.8) and high prior correlation among the qgross, the prior for qgross still enhances the precision of key stock assessment quantities.
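As a rough illustration of how several independent sources of information can be combined into a synthetic prior by Monte Carlo, the Python sketch below multiplies assumed priors for net catchability, a trawlable/untrawlable bias correction, and the surveyed biomass fraction. The distributions and the product form are assumptions made purely for this sketch and are not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Assumed component priors (placeholders, not the article's elicited distributions)
q_net  = rng.beta(8, 4, n)            # expert prior on trawl-net catchability
delta  = rng.lognormal(0.0, 0.3, n)   # bias correction, trawlable vs. untrawlable density
f_area = rng.beta(6, 4, n)            # fraction of population biomass in the surveyed area

# Assumed product form for the synthetic gross survey catchability prior
q_gross = q_net * delta * f_area

print(f"prior mean = {q_gross.mean():.3f}, prior CV = {q_gross.std() / q_gross.mean():.2f}")
```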
Abstract:
Structured precision modelling is an important approach to improving the intra-frame correlation modelling of the standard HMM, in which Gaussian mixture models with diagonal covariances are used. Previous work has focused on direct structured representations of the precision matrices. In this paper, a new framework is proposed in which the structure of the Cholesky square root of the precision matrix is investigated, referred to as Cholesky Basis Superposition (CBS). The Cholesky matrix associated with each Gaussian distribution is represented as a linear combination of a set of Gaussian-independent basis upper-triangular matrices. Efficient optimization methods are derived for both the combination weights and the basis matrices. Experiments on a Chinese dictation task showed that the proposed approach can significantly outperform direct structured precision modelling with a similar number of parameters, as well as full covariance modelling. © 2011 IEEE.
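A minimal numerical sketch of the CBS construction is given below, assuming shared upper-triangular bases and per-Gaussian weights; names and shapes are illustrative, and whatever constraints the paper imposes during optimization are omitted.

```python
import numpy as np

def cbs_precision(weights, basis):
    """Build a precision matrix from a Cholesky Basis Superposition (illustrative).

    weights : (K,) combination weights for one Gaussian component
    basis   : (K, D, D) shared upper-triangular basis matrices
    Returns P = C^T C, where C = sum_k w_k B_k is the upper-triangular Cholesky
    factor associated with that component (symmetric positive semi-definite).
    """
    C = np.einsum("k,kij->ij", np.asarray(weights, float), np.asarray(basis, float))
    return C.T @ C

# toy usage: D = 3 feature dimensions, K = 2 shared bases
rng = np.random.default_rng(0)
D, K = 3, 2
basis = np.triu(rng.normal(size=(K, D, D)))   # upper-triangular bases
P = cbs_precision(rng.normal(size=K), basis)
```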
Abstract:
The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways. The studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while the ones that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-term memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size with one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when they are homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations.
Abstract:
Looking for a target in a visual scene becomes more difficult as the number of stimuli increases. In a signal detection theory view, this is due to the cumulative effect of noise in the encoding of the distractors and, potentially on top of that, to an increase of the noise (i.e., a decrease of precision) per stimulus with set size, reflecting divided attention. It has long been argued that human visual search behavior can be accounted for by the first factor alone. While such an account seems adequate for search tasks in which all distractors have the same, known feature value (i.e., are maximally predictable), we recently found a clear effect of set size on encoding precision when distractors are drawn from a uniform distribution (i.e., when they are maximally unpredictable). Here we interpolate between these two extreme cases to examine which of the two conclusions holds more generally as distractor statistics are varied. In one experiment, we vary the level of distractor heterogeneity; in another, we dissociate distractor homogeneity from predictability. In all conditions of both experiments, we found a strong decrease of precision with increasing set size, suggesting that precision being independent of set size is the exception rather than the rule.
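A common way to formalize the two hypotheses contrasted in these two abstracts is a power law relating encoding precision to set size. The sketch below uses that assumed parametrization (not necessarily the authors'), where alpha = 0 corresponds to constant precision and alpha > 0 to precision decreasing with set size.

```python
import numpy as np

def precision_per_item(set_size, j1, alpha):
    """Assumed power-law precision model: J(N) = j1 * N**(-alpha) (illustrative)."""
    return j1 * np.asarray(set_size, float) ** (-alpha)

def simulate_measurement(stimulus, set_size, j1, alpha, rng=None):
    """Noisy measurement of one item's feature value under the precision model above."""
    rng = rng or np.random.default_rng(0)
    J = precision_per_item(set_size, j1, alpha)
    return rng.normal(stimulus, 1.0 / np.sqrt(J))   # noise s.d. = 1/sqrt(precision)
```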
Abstract:
Test strip detectors of 125 μm, 500 μm, and 1 mm pitches with about 1 cm² areas have been made on medium-resistivity silicon wafers (1.3 and 2.7 kΩ·cm). Detectors of 500 μm pitch have been tested for charge collection and position precision before and after neutron irradiation (up to 2 × 10¹⁴ n/cm²) using 820 and 1030 nm laser light with different beam-spot sizes. It has been found that for a bias of 250 V a strip detector made of 1.3 kΩ·cm material (300 μm thick) can be fully depleted before and after an irradiation of 2 × 10¹⁴ n/cm². For a 500 μm pitch strip detector made of 2.7 kΩ·cm material, tested with 1030 nm laser light with a 200 μm spot size, the position reconstruction error is about 14 μm before irradiation and 17 μm after about 1.7 × 10¹³ n/cm² irradiation. We demonstrated in this work that medium-resistivity silicon strip detectors can work just as well as the traditional high-resistivity ones, but with higher radiation tolerance. We also tested charge sharing and position reconstruction using a 1030 nm wavelength laser (300 μm absorption length in Si at RT), which provides a simulation of MIP particles in high-energy physics experiments in terms of charge collection and position reconstruction. (C) 1999 Elsevier Science B.V. All rights reserved.
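Charge sharing between neighbouring strips is typically exploited with a charge-weighted centre-of-gravity (or eta-like) interpolation. The snippet below shows the basic centre-of-gravity estimate as a generic illustration; the abstract does not state the exact reconstruction algorithm used.

```python
import numpy as np

def cog_position(strip_indices, charges, pitch_um):
    """Charge-weighted centre-of-gravity position estimate in micrometres (generic)."""
    idx = np.asarray(strip_indices, float)
    q = np.asarray(charges, float)
    return pitch_um * np.sum(idx * q) / np.sum(q)

# toy usage: charge split 60/40 between strips 10 and 11 of a 500 um pitch detector
x = cog_position([10, 11], [0.6, 0.4], 500.0)   # -> 5200.0 um from strip 0
```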
Abstract:
Spectra of the ionized oxygen atom were studied with a Pro-500i monochromator equipped with a CCD. The beam-foil method was used at an energy of 2 MeV in a 2 x 1.7 Tandem accelerator. In this work, we report 201 spectral lines measured in the region 250-350 nm; most of these lines were attributed to n, l energy level transitions of O II to O IV ions. Our experimental results are in good agreement with existing theoretical calculations. Many lines reported in this paper have not been measured in previous experiments, and a majority of them are weak transition lines.
Abstract:
In this paper we present a set of field tests for the detection of humans in the water with an unmanned surface vehicle using infrared and color cameras. These experiments aimed to contribute to the development of victim target tracking and obstacle avoidance for unmanned surface vehicles operating in marine search and rescue missions. This research is part of the work conducted in the European FP7 research project Icarus, which aims to develop robotic tools for large-scale rescue operations. The tests used the ROAZ unmanned surface vehicle, equipped with a precision GPS system for localization and with both visible-spectrum and IR cameras to detect the target. In the experimental setup, the test human target was deployed in the water wearing a life vest and a diver suit (thus having a lower temperature signature over the body except the hands and head) and was equipped with a GPS logger. Multiple target approaches were performed in order to test the system at different relative sun incidence angles. The experimental setup, the detection method and preliminary results from the field trials performed in the summer of 2013 in Sesimbra, Portugal and in La Spezia, Italy are also presented in this work.
Abstract:
The utility of the decimal growth stage (DGS) scoring system for cereals is reviewed. The DGS is the most widely used scale in academic and commercial applications because of its comprehensive coverage of cereal developmental stages, its ease of use, the definitions provided, and its adoption by official agencies. The DGS has demonstrable and established value in helping to optimise the timing of agronomic inputs, particularly with regard to plant growth regulators, herbicides, fungicides and soluble nitrogen fertilisers. In addition, the DGS is used to help parameterise crop models, and also in understanding the response and adaptation of crops to the environment. The value of the DGS for increasing precision relies on it indicating, to some degree, the various stages in the development of the stem apex and spike. Coincidence of specific growth stage scores with the transition of the apical meristem from a vegetative to a reproductive state, and also with the period of meiosis, is unreliable. Nonetheless, pot experiments show that the broad period of booting (DGS 41–49) appears adequate for covering the period during which meiosis is vulnerable to drought and heat stress. Similarly, the duration of anthesis (DGS 61–69) is particularly susceptible to abiotic stresses: initially from a fertility perspective, but increasingly from a mean grain weight perspective as flowering progresses to DGS 69 and then milk development. These associations with DGS can have value at the crop level of organisation: for interpreting environmental effects, and in crop modelling. However, genetic, biochemical and physiological analysis to develop a greater understanding of stress acclimation during the vegetative state, and of tolerance at meiosis, requires more precision than the DGS can provide. Similarly, individual floret analysis is needed to further understand the genetic basis of stress tolerance during anthesis.
Abstract:
This paper presents a practical experimental comparison of reactive/non-active energy measurements in three-phase four-wire non-sinusoidal and unbalanced circuits, involving five different commercial electronic meters. The experimental setup provides independent voltage and current generation, each with arbitrary waveforms containing harmonic components up to the fifty-first, reproducing acquisitions obtained from the utility. The experimental accuracy is guaranteed by a class A power analyzer, according to the IEC 61000-4-30 standard. Several current and voltage combination profiles are presented and compared against two reference methodologies for reactive/non-active power calculation: instantaneous power theory and IEEE 1459-2010. The first methodology uses the instantaneous power theory implemented in the internal algorithm of the WT3000 power analyzer; the second, compliant with the IEEE 1459-2010 standard, uses the voltage and current waveforms acquired by the WT3000 as input data for a virtual meter developed in Matlab/Simulink. © 2012 IEEE.
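As background for the quantities being compared, the sketch below computes active, apparent and non-active power from sampled waveforms for a single phase. IEEE 1459-2010's full three-phase effective-value treatment is considerably more involved, so this is only a simplified illustration, not the virtual meter described in the paper.

```python
import numpy as np

def powers_from_samples(v, i):
    """Single-phase active, apparent and non-active power from sampled waveforms.

    v, i : voltage and current samples covering an integer number of periods.
    Non-active power follows the IEEE 1459 definition N = sqrt(S^2 - P^2).
    """
    v = np.asarray(v, float)
    i = np.asarray(i, float)
    P = np.mean(v * i)                                       # active power
    S = np.sqrt(np.mean(v ** 2)) * np.sqrt(np.mean(i ** 2))  # apparent power = Vrms * Irms
    N = np.sqrt(max(S ** 2 - P ** 2, 0.0))                   # non-active power
    return P, S, N
```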
Abstract:
Variable rate sprinklers (VRS) have been developed to promote localized water application in irrigated areas. In precision irrigation, VRS that permit better control of flow adjustment and, at the same time, provide satisfactory radial distribution profiles for various pressures and flow rates are really necessary. The objective of this work was to evaluate the performance and radial distribution profiles of a developed VRS which varies the nozzle cross-sectional area by moving a pin in or out using a stepper motor. Field tests were performed under different conditions of service pressure, rotation angle imposed on the pin and flow rate, which resulted in maximum water throw radii ranging from 7.30 to 10.38 m. In the experiments in which the service pressure remained constant, the maximum throw radius varied from 7.96 to 8.91 m. Averages of repetitions performed under conditions with no wind or with winds below 1.3 m s-1 were used. The VRS with the four-stream deflector resulted in a greater water application throw radius compared to the six-stream deflector. However, the six-stream deflector produced greater precipitation intensities, as well as better distribution. Thus, selection of the deflector to be used should be based on project requirements, respecting the difference in the obtained results. With a small nozzle opening, the VRS produced small water droplets that appeared visually suitable for foliar chemigation. Regarding the comparison between estimated and observed flow rates, the stepper motor produced excellent results.
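Flow-rate estimates for an adjustable nozzle can, in principle, be obtained from the orifice equation relating nozzle area and service pressure. The sketch below uses that generic relation with an assumed discharge coefficient; it is offered only as an illustration and is not necessarily the calculation used in the paper.

```python
import numpy as np

def estimated_flow_lph(area_mm2, pressure_kpa, cd=0.9, rho=1000.0):
    """Orifice-equation flow estimate Q = Cd * A * sqrt(2 * dP / rho), in litres per hour.

    area_mm2     : nozzle cross-sectional area set by the pin position (mm^2)
    pressure_kpa : service pressure (kPa)
    cd           : assumed discharge coefficient (placeholder value)
    rho          : water density (kg/m^3)
    """
    A = area_mm2 * 1e-6                      # m^2
    dP = pressure_kpa * 1e3                  # Pa
    q_m3s = cd * A * np.sqrt(2.0 * dP / rho)
    return q_m3s * 1000.0 * 3600.0           # convert m^3/s to L/h
```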