997 results for Simulated experiment


Relevance: 20.00%

Abstract:

One of the main problems that public institutions face in the management of protected areas, such as the European Natura 2000 network, is determining how to design and implement sustainable management plans that account for the wide range of marketed and non-marketed benefits these areas provide to society. This paper presents an application of a stated-preference valuation approach aimed at evaluating the preferences of the population of the Basque Country, Spain, for the key attributes of a regional Natura 2000 network site. According to our results, individuals' willingness-to-pay (WTP) is higher for attributes associated with non-use values (native tree species and biodiversity conservation) than for attributes associated with use values (agricultural development and commercial forestry). The paper concludes that management policies for Natura 2000 network sites should account both for the importance of non-use values and for the heterogeneity of the population's preferences in order to minimize potential land-use conflicts.
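The abstract does not report the underlying estimates, but marginal WTP in such stated-preference choice experiments is typically derived from a conditional-logit model as the ratio of an attribute coefficient to the payment coefficient. A minimal sketch, with hypothetical coefficient values (not from the paper):

```python
# Minimal sketch of marginal WTP from a conditional-logit choice model.
# All coefficient values below are hypothetical placeholders.
beta = {
    "native_trees": 0.85,      # non-use attribute (assumed)
    "biodiversity": 0.70,      # non-use attribute (assumed)
    "agriculture":  0.30,      # use attribute (assumed)
    "forestry":     0.25,      # use attribute (assumed)
}
beta_cost = -0.02              # marginal utility of the payment attribute (assumed)

# Marginal WTP for attribute k: WTP_k = -beta_k / beta_cost
for attr, b in beta.items():
    print(f"WTP for {attr}: {-b / beta_cost:.2f} EUR")
```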

Relevance: 20.00%

Abstract:

Pathogenic conformational conversion is a common cause of many diseases, such as transmissible spongiform encephalopathy (TSE), which results from misfolding of the prion protein, and sickle cell anemia. In such structural changes, misfolding first occurs in regions important for the stability of the native structure. This destabilizes the normal conformation and leads to subsequent errors in the folding pathway. Sites involved in this first stage can be regarded as switch regions of the protein and are vital for conformational conversion; in effect, each acts as a disease switch at the residue level. Here we report an algorithm that identifies such sites computationally with an accuracy of 93% by calculating the probability that the native structure of a short segment jumps to a misfolded one. Knowledge of such switch sites could be used to target clinical therapy and to study the physiological and pathological mechanisms of proteins.
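The abstract does not spell out the scoring procedure. The following is only a structural sketch, assuming a sliding-window scan in which each short segment receives a misfolding probability and sites above a threshold are flagged; the scoring function is a toy stand-in, not the paper's probability model:

```python
# Hypothetical sketch: flag switch sites by scanning short segments and
# thresholding a per-segment misfolding probability. The scoring function
# is a stand-in; the paper's actual probability model is not given here.
def find_switch_sites(sequence, misfold_prob, window=7, threshold=0.5):
    """Return residue indices whose segment misfolding probability exceeds
    `threshold`. `misfold_prob(segment)` is assumed to map a segment to a
    probability in [0, 1]."""
    sites = []
    for i in range(len(sequence) - window + 1):
        segment = sequence[i:i + window]
        if misfold_prob(segment) > threshold:
            sites.append(i + window // 2)  # center residue of the window
    return sites

# Example with a toy scoring function (purely illustrative):
toy_prob = lambda seg: seg.count("V") / len(seg)  # hydrophobic-rich proxy
print(find_switch_sites("MKVVLVVAGGKKL", toy_prob))  # -> [3, 4, 5]
```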

Relevance: 20.00%

Abstract:

Liquid mixtures of water and deuterium oxide were used as the liquid phase to match the density of charged colloidal particles. The Kossel diffraction method was used to detect the crystal structures. Experiments under density-matched (g=0) and unmatched (g=1) conditions were compared to examine the influence of gravity on the crystal structures formed by self-assembly of 110 nm (diameter) polystyrene microspheres. The results show that gravity tends to make the lattice constants of the colloidal crystals smaller at lower positions, which indicates that the effect of gravity should be taken into account in studies of colloidal crystals.
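As a back-of-the-envelope check on why an H2O/D2O mixture cancels sedimentation, the sketch below computes the density-matching D2O fraction and the gravitational length l_g = kB*T / (Δρ·V·g) of a 110 nm polystyrene sphere. The density values are standard handbook figures, not taken from the paper:

```python
import math

# Handbook densities (assumed, not from the paper):
kB, T, g = 1.381e-23, 298.0, 9.81       # J/K, K, m/s^2
rho_ps  = 1050.0                        # polystyrene, kg/m^3
rho_h2o = 998.0                         # water, kg/m^3
rho_d2o = 1105.0                        # heavy water, kg/m^3

r = 55e-9                               # particle radius, m
V = 4.0 / 3.0 * math.pi * r**3          # particle volume, m^3

# D2O volume fraction x that matches the particle density:
x = (rho_ps - rho_h2o) / (rho_d2o - rho_h2o)
print(f"density-matching D2O volume fraction ~ {x:.2f}")

# Gravitational length l_g = kB*T / (delta_rho * V * g) in pure water:
l_g = kB * T / ((rho_ps - rho_h2o) * V * g)
print(f"gravitational length in pure water ~ {l_g*1e3:.1f} mm")
```

Because l_g is comparable to the height of a typical sample cell, an unmatched suspension is measurably compressed at the bottom, consistent with the smaller lattice constants reported at lower positions.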

Relevance: 20.00%

Abstract:

A two-stage sampling design is used to estimate the variances of the numbers of yellowfin in different age groups caught in the eastern Pacific Ocean. For purse seiners, the primary sampling unit (n) is a brine well containing fish from a month-area stratum; the fish lengths (m) measured from each well are the secondary units. The fish cannot be selected at random from the wells because of practical limitations. The effects of different sampling methods and other factors on the reliability and precision of statistics derived from the length-frequency data were therefore examined, and modifications are recommended where necessary. Lengths of fish measured during the unloading of six test wells revealed two forms of inherent size stratification: 1) short-term disruptions of the existing pattern of sizes, and 2) transition zones between long-term trends in sizes. To some degree, all wells exhibited cyclic changes in mean size and variance during unloading. In half of the wells, size selection by the unloaders induced a change in mean size. As a result of stratification, the sequence of sizes removed from all wells was non-random, regardless of whether a well contained fish from a single set or from more than one set. The number of modal sizes in a well was not related to the number of sets. In an additional well composed of fish from several sets, an experiment on vertical mixing indicated that a representative sample of the contents may be restricted to the bottom half of the well. The contents of the test wells were used to generate 25 simulated wells and to compare the results of three sampling methods applied to them: (1) random sampling (also used as a standard), (2) protracted sampling, in which the selection process was extended over a large portion of a well, and (3) measuring fish consecutively during removal from the well. Repeated sampling by each method with different combinations of n and m indicated that, because the principal source of size variation occurred among primary units, increasing n was the most effective way to reduce the variance estimates of both the age-group sizes and the total number of fish in the landings. Protracted sampling largely circumvented the effects of size stratification, and its performance was essentially comparable to that of random sampling; sampling by this method is recommended. Consecutive-fish sampling produced more biased estimates with greater variances. Analysis of the 1988 length-frequency samples indicated that, for age groups that appear most frequently in the catch, a minimum sampling frequency of one primary unit in six for each month-area stratum would reduce the coefficients of variation (CV) of their size estimates to approximately 10 percent or less. Additional stratification of samples by set type, rather than by month-area alone, further reduced the CVs of scarce age groups, such as the recruits, and potentially improved their accuracy. The CVs of recruitment estimates for completely-fished cohorts during the 1981-1984 period were in the vicinity of 3 to 8 percent. Recruitment estimates and their variances were also relatively insensitive to changes in the individual quarterly catches and variances, respectively, of which they were composed. (PDF contains 70 pages)
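The "increase n, not m" recommendation follows from the variance structure of two-stage sampling. A minimal sketch with synthetic well data (the estimator is the standard textbook form, not necessarily the paper's exact implementation):

```python
import random, statistics

random.seed(1)

def estimate_mean_and_var(n_wells, m_fish, mu=90.0, between_sd=6.0, within_sd=2.0):
    """Two-stage sample: n_wells primary units, m_fish lengths per well.
    Returns the estimated mean length and the standard with-replacement
    variance estimator s_b^2 / n, where s_b^2 is the variance among well
    means. All data are synthetic (assumed)."""
    well_means = []
    for _ in range(n_wells):
        well_mu = random.gauss(mu, between_sd)   # well-to-well variation dominates
        fish = [random.gauss(well_mu, within_sd) for _ in range(m_fish)]
        well_means.append(statistics.mean(fish))
    est = statistics.mean(well_means)
    var = statistics.variance(well_means) / n_wells
    return est, var

# Because size variation occurs mostly among wells, doubling n (wells) shrinks
# the variance far more than doubling m (fish per well):
for n, m in [(6, 50), (12, 50), (6, 100)]:
    est, var = estimate_mean_and_var(n, m)
    print(f"n={n:2d} wells, m={m:3d} fish/well -> mean={est:5.1f}, var={var:.3f}")
```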

Relevance: 20.00%

Abstract:

We analyzed catches per unit of effort (CPUE) from the Japanese longline fishery for bigeye tuna (Thunnus obesus) in the central and eastern Pacific Ocean (EPO) with regression tree methods. Regression trees have not previously been used to estimate time series of abundance indices from CPUE data. The "optimally sized" tree had 139 parameters; year, month, latitude, and longitude interacted to affect bigeye CPUE. The trend in tree-based abundance indices for the EPO was similar to trends estimated from a generalized linear model and from an empirical model that combines oceanographic data with information on the distribution of fish relative to environmental conditions. The regression tree was more parsimonious and would be easier to implement than the other two models, but the tree provided no information about the mechanisms that caused bigeye CPUEs to vary in time and space. Bigeye CPUEs increased sharply during the mid-1980's and were more variable at the northern and southern edges of the fishing grounds. Both of these results can be explained by changes in actual abundance and changes in catchability. Results from a regression tree that was fitted to a subset of the data indicated that, in the EPO, bigeye are about equally catchable with regular and deep longlines. This is not consistent with observations that bigeye are more abundant at depth, and it indicates that classification by gear type (regular or deep longline) may not provide a good measure of capture depth. A simulated annealing algorithm was used to summarize the tree-based results by partitioning the fishing grounds into regions where trends in bigeye CPUE were similar. Simulated annealing can be useful for designing spatial strata in future sampling programs. (PDF contains 45 pages.)
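To make the tree-based approach concrete, here is a minimal sketch on synthetic CPUE records with year/month/latitude/longitude predictors. It uses scikit-learn's DecisionTreeRegressor rather than the paper's original software, and the year-index extraction over a fixed spatial grid is only illustrative:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Synthetic CPUE records (assumed): year, month, lat, lon -> CPUE
n = 5000
year  = rng.integers(1975, 1993, n)
month = rng.integers(1, 13, n)
lat   = rng.uniform(-20, 20, n)
lon   = rng.uniform(-150, -80, n)
cpue  = (0.05 * (year - 1975)            # rising trend over time
         + 0.3 * np.cos(lat / 10.0)      # spatial structure
         + rng.normal(0, 0.3, n))        # noise

X = np.column_stack([year, month, lat, lon])
tree = DecisionTreeRegressor(max_leaf_nodes=139, random_state=0).fit(X, cpue)

# A crude abundance index: predicted CPUE per year, averaged over a fixed
# grid so that uneven spatial coverage does not distort the year effect.
grid_lat, grid_lon = np.meshgrid(np.arange(-20, 21, 5), np.arange(-150, -79, 10))
for y in (1975, 1985, 1992):
    Xg = np.column_stack([np.full(grid_lat.size, y), np.full(grid_lat.size, 6),
                          grid_lat.ravel(), grid_lon.ravel()])
    print(y, round(float(tree.predict(Xg).mean()), 3))
```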

Relevance: 20.00%

Abstract:

Cosmic birefringence (CB), a rotation of the photon-polarization plane in vacuum, is a generic signature of new scalar fields that could provide dark energy. Previously, WMAP observations excluded a uniform CB rotation angle larger than a degree.

In this thesis, we develop a minimum-variance estimator formalism for reconstructing direction-dependent rotation from full-sky CMB maps, and forecast more than an order-of-magnitude improvement in sensitivity with incoming Planck data and future satellite missions. Next, we perform the first analysis of WMAP-7 data to look for rotation-angle anisotropies and report a null detection of the rotation-angle power-spectrum multipoles below L=512, constraining the quadrupole amplitude of a scale-invariant power spectrum to less than one degree. We further explore the use of a cross-correlation between CMB temperature and the rotation for detecting the CB signal under different quintessence models. We find that it may improve sensitivity in the case of a marginal detection, and provide an empirical handle for distinguishing details of the new physics indicated by CB.

We then consider other parity-violating physics beyond the standard models, in particular a chiral inflationary-gravitational-wave background. We show that WMAP has no constraining power, while a cosmic-variance-limited experiment would be capable of detecting only a large parity violation. In the case of a strong detection of EB/TB correlations, CB can be readily distinguished from chiral gravity waves.

We next adapt our CB analysis to investigate patchy screening of the CMB, driven by inhomogeneities during the Epoch of Reionization (EoR). We constrain a toy model of reionization with WMAP-7 data, and show that data from Planck should start approaching interesting portions of the EoR parameter space and can be used to exclude reionization tomographies with large ionized bubbles.

In light of the upcoming data from low-frequency radio observations of the redshifted 21-cm line from the EoR, we examine probability-distribution functions (PDFs) and difference PDFs of the simulated 21-cm brightness temperature, and discuss the information that can be recovered using these statistics. We find that the PDFs are insensitive to the details of small-scale physics, but highly sensitive to the properties of the ionizing sources and the size of the ionized bubbles.
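As an illustration of these two statistics, the sketch below computes the one-point PDF and a difference PDF of a toy 21-cm brightness-temperature cube; the two-phase toy field (warm neutral background with fully ionized bubbles) stands in for a real reionization simulation and is not from the thesis:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 21-cm brightness-temperature field (mK): ionized bubbles (T = 0)
# punch holes in a warm neutral background. Parameters are assumed.
N = 64
T = rng.normal(25.0, 5.0, (N, N, N))           # neutral background
centers = rng.integers(0, N, (40, 3))          # 40 random bubble centers
x, y, z = np.indices((N, N, N))
for cx, cy, cz in centers:
    mask = (x - cx)**2 + (y - cy)**2 + (z - cz)**2 < 6**2
    T[mask] = 0.0                              # fully ionized -> no signal

# One-point PDF: histogram of T over all voxels.
pdf, edges = np.histogram(T, bins=50, density=True)
print("fraction of ionized voxels:", float((T == 0).mean()))
print("PDF peak near T =", float(edges[np.argmax(pdf)]), "mK")

# Difference PDF: histogram of T(x) - T(x + r) at a fixed lag r,
# which is sensitive to the characteristic bubble scale.
r = 8
diff = T - np.roll(T, r, axis=0)
dpdf, dedges = np.histogram(diff, bins=50, density=True)
print("variance of the difference field at lag", r, ":", round(float(diff.var()), 2))
```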

Finally, we discuss prospects for related future investigations.

Relevance: 20.00%

Abstract:

Inspired by key experimental and analytical results regarding Shape Memory Alloys (SMAs), we propose a modelling framework to explore the interplay between martensitic phase transformations and plastic slip in polycrystalline materials, with an eye towards computational efficiency. The resulting framework uses a convexified potential for the internal energy density to capture the stored energy associated with transformation at the meso-scale, and introduces kinetic potentials to govern the evolution of transformation and plastic slip. The framework is novel in the way it treats plasticity on par with transformation.

We implement the framework in the setting of anti-plane shear, using a staggered implicit/explicit update: we first use a Fast Fourier Transform (FFT) solver based on an Augmented Lagrangian formulation to implicitly solve for the full-field displacements of a simulated polycrystal, then explicitly update the volume fraction of martensite and the plastic slip using their respective stick-slip type kinetic laws. We observe that, even in this simple setting with an idealized material comprising four martensitic variants and four slip systems, the model recovers a rich variety of SMA-type behaviors. We use this model to gain insight into the isothermal behavior of stress-stabilized martensite, looking at the effects of the relative plastic yield strength, the memory of deformation history under non-proportional loading, and several other factors.
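The staggered structure can be sketched in a heavily simplified scalar setting: a spectral (FFT) solve for the anti-plane displacement of a homogeneous medium with a single transformation eigenstrain, followed by an explicit stick-slip update of the martensite volume fraction. All constitutive constants are placeholders, and the Augmented Lagrangian treatment and plastic slip are omitted; this shows the algorithmic skeleton only:

```python
import numpy as np

# Simplified staggered scheme on a periodic 2-D grid (anti-plane shear,
# single martensite variant, homogeneous shear modulus). Constants are
# placeholders, not the paper's.
N, mu, gamma0 = 64, 1.0, 0.1          # grid size, shear modulus, transformation shear
f_c, dt, mob  = 0.02, 1.0, 0.5        # critical driving force, time step, mobility
rng = np.random.default_rng(0)
lam = 0.05 * rng.random((N, N))       # martensite volume fraction in [0, 1]

k = 2 * np.pi * np.fft.fftfreq(N)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                        # avoid division by zero (zero-mean mode)

applied = 0.5                         # applied average shear strain
for step in range(50):
    # 1) implicit/spectral solve of equilibrium: lap(u) = gamma0 * d(lam)/dx
    rhs_hat = 1j * kx * np.fft.fft2(gamma0 * lam)
    u_hat = -rhs_hat / k2
    u_hat[0, 0] = 0.0
    u = np.real(np.fft.ifft2(u_hat))
    # local shear strain (fluctuation + applied mean) and stress
    e_xz = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u))) + applied
    tau = mu * (e_xz - gamma0 * lam)
    # 2) explicit stick-slip update: lam evolves only where the driving
    #    force tau*gamma0 exceeds the critical value f_c
    drive = tau * gamma0
    rate = mob * (np.abs(drive) - f_c).clip(min=0.0) * np.sign(drive)
    lam = np.clip(lam + dt * rate, 0.0, 1.0)

print("mean martensite fraction:", round(float(lam.mean()), 3))
```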

We extend the framework to the generalized 3-D setting, for which the convexified potential is a lower bound on the actual internal energy, and show that the fully implicit discrete-time formulation of the framework is governed by a variational principle for mechanical equilibrium. We further propose an extension of the method to finite deformations via an exponential mapping. We implement the generalized framework using an existing Optimal Transport Mesh-free (OTM) solver. We then model the α-γ and α-ε transformations in pure iron, with an initial attempt in the latter to account for twinning in the parent phase. We demonstrate the scalability of the framework to large-scale computing by simulating Taylor impact experiments, observing nearly linear (ideal) speed-up through 256 MPI tasks. Finally, we present preliminary results of a simulated Split-Hopkinson Pressure Bar (SHPB) experiment using the α-ε model.

Relevance: 20.00%

Abstract:

The mobility of heavy metals (Zn, Cd, Pb and Ni) was studied in the laboratory by acidic leaching of two different soils from around Ibadan with simulated acid rain. Samples were collected from two sites, Orogun and Ilupeju. At the Orogun site a depth of 128 cm was reached (comprising four horizons). Polyvinyl chloride (PVC) pipes were cut to lengths matching the soil horizon depths observed in the field and packed with the required masses of soil. The columns were then leached with simulated acid rain at pH 2.0, 4.0, 6.0 and 8.0 after being spiked with a known volume of a standard solution of the metals of interest. It was found that simulated acid rain enhanced the mobility of the metals in solution. The pH, cation exchange capacity, % clay and organic matter were found to contribute most to the mobility of the metals. In general, the mobility of the metals followed the order Zn>Ni>Pb>Cd as the soil became more acidic.

Relevance: 20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
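The adaptive loop has the following shape. In this minimal sketch the scoring function is plain expected posterior-entropy reduction (information gain) rather than the EC2 objective, purely to keep the example short, and all theories, tests, and choice probabilities are toy stand-ins:

```python
import math, random

random.seed(0)

# Toy hypotheses: each theory predicts the probability of choosing lottery A
# in each of four candidate tests. These numbers are made up for illustration.
theories = {
    "expected_value": [0.9, 0.2, 0.7, 0.4],
    "prospect":       [0.6, 0.8, 0.3, 0.4],
    "CRRA":           [0.9, 0.8, 0.7, 0.1],
}
posterior = {name: 1.0 / len(theories) for name in theories}

def entropy(p):
    return -sum(q * math.log(q) for q in p.values() if q > 0)

def expected_posterior_entropy(test, posterior):
    """Expected entropy of the posterior after observing the response to `test`."""
    total = 0.0
    for resp_is_A in (True, False):
        w = {n: posterior[n] * (theories[n][test] if resp_is_A
                                else 1 - theories[n][test]) for n in posterior}
        z = sum(w.values())
        if z > 0:
            total += z * entropy({n: w[n] / z for n in w})
    return total

# Greedy adaptive loop: pick the test with the lowest expected entropy,
# simulate a subject whose true type is "prospect", update the posterior.
for round_ in range(3):
    best = min(range(4), key=lambda t: expected_posterior_entropy(t, posterior))
    resp_is_A = random.random() < theories["prospect"][best]
    for n in posterior:
        posterior[n] *= theories[n][best] if resp_is_A else 1 - theories[n][best]
    z = sum(posterior.values())
    posterior = {n: p / z for n, p in posterior.items()}
    print(f"test {best}, response {'A' if resp_is_A else 'B'} ->",
          {n: round(posterior[n], 2) for n in sorted(posterior)})
```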

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out because it is infeasible in practice and because we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
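For reference, the sketch below evaluates the standard textbook forms of these discount functions on a common horizon. Parameter values are illustrative, and the quasi-hyperbolic form is written in the usual (β, δ) notation, which may differ from the thesis's (α, β) parameterization:

```python
# Standard discount functions from the time-preference literature.
# Parameter values are illustrative only.
def exponential(t, delta=0.9):
    return delta ** t

def hyperbolic(t, k=0.5):
    return 1.0 / (1.0 + k * t)

def quasi_hyperbolic(t, beta=0.7, delta=0.95):
    # "present bias": immediate rewards are undiscounted; all delayed
    # rewards incur an extra one-off penalty beta
    return 1.0 if t == 0 else beta * delta ** t

def generalized_hyperbolic(t, alpha=1.0, beta=2.0):
    return (1.0 + alpha * t) ** (-beta / alpha)

for t in (0, 1, 5, 20):
    print(t, round(exponential(t), 3), round(hyperbolic(t), 3),
          round(quasi_hyperbolic(t), 3), round(generalized_hyperbolic(t), 3))
```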

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitutes will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
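A minimal sketch of the kind of utility specification involved: a multinomial-logit choice probability with a reference-dependent price term, in which losses relative to the reference price are weighted by λ > 1. The functional form and numbers are illustrative, not the paper's estimated model:

```python
import math

def loss_averse_utility(price, ref_price, alpha=1.0, lam=2.25):
    """Reference-dependent price utility: paying more than the reference
    price is felt as a loss and weighted by lam > 1."""
    gap = ref_price - price
    gain_loss = gap if gap >= 0 else lam * gap
    return -alpha * price + gain_loss

def choice_probs(prices, ref_prices):
    """Multinomial-logit choice probabilities over substitute items."""
    u = [loss_averse_utility(p, r) for p, r in zip(prices, ref_prices)]
    z = sum(math.exp(v) for v in u)
    return [math.exp(v) / z for v in u]

# Two substitute items. When item 0's discount ends (price back up to 10,
# reference still anchored at the discounted 8), demand shifts to item 1
# by more than the price change alone would imply:
print(choice_probs([8.0, 10.0], [10.0, 10.0]))   # item 0 on discount
print(choice_probs([10.0, 10.0], [8.0, 10.0]))   # discount ended, reference lags
```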

In future work, BROAD can be applied widely for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.