30 results for Experimental methods
in CentAUR: Central Archive University of Reading - UK
Abstract:
We apply experimental methods to study the role of risk aversion in players' behavior in repeated prisoners' dilemma games. Faced with quantitatively equal discount factors, the most risk-averse players choose Nash strategies more often in the presence of uncertainty than when future profits are discounted deterministically. Overall, we find that risk aversion is negatively related to the frequency of collusive outcomes.
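The mechanism behind this finding — a risk-averse player valuing an uncertain stream of future profits less than a deterministically discounted stream with the same expected value — can be sketched numerically. The payoff, discount factor and CRRA coefficient below are illustrative assumptions, not the experiment's parameters:

```python
import math
import random

random.seed(0)

DELTA = 0.9   # continuation probability / discount factor (assumed value)
R = 3.0       # per-period cooperation payoff (assumed value)
RHO = 2.0     # CRRA risk-aversion coefficient; higher = more risk averse

def crra(x, rho=RHO):
    """CRRA utility over the total payoff."""
    return math.log(x) if rho == 1.0 else x ** (1 - rho) / (1 - rho)

# Deterministic discounting: a certain total of R + DELTA*R + DELTA^2*R + ...
certain_total = R / (1 - DELTA)

def risky_total():
    """Stochastic horizon: the game ends each round with probability
    1 - DELTA, so the total payoff is random with the same mean as
    certain_total."""
    total = R  # the first round is always played
    while random.random() < DELTA:
        total += R
    return total

draws = [risky_total() for _ in range(20000)]
expected_utility_risky = sum(crra(d) for d in draws) / len(draws)

# By Jensen's inequality, a risk-averse player strictly prefers the
# deterministic stream, pushing her toward the safe (Nash) strategy.
print(expected_utility_risky < crra(certain_total))  # True
```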
Abstract:
Experimental philosophy of language uses experimental methods developed in the cognitive sciences to investigate topics of interest to philosophers of language. This article describes the methodological background for the development of experimental approaches to topics in philosophy of language, distinguishes negative and positive projects in experimental philosophy of language, and evaluates experimental work on the reference of proper names and natural kind terms. The reliability of expert judgments vs. the judgments of ordinary speakers, the role that ambiguity plays in influencing responses to experiments, and the reliability of meta-linguistic judgments are also assessed.
Abstract:
Motivation: Intrinsic protein disorder is functionally implicated in numerous biological roles and is, therefore, ubiquitous in proteins from all three kingdoms of life. Determining the disordered regions in proteins presents a challenge for experimental methods, and so recently there has been much focus on the development of improved predictive methods. In this article, a novel technique for disorder prediction, called DISOclust, is described, which is based on the analysis of multiple protein fold recognition models. The DISOclust method is rigorously benchmarked against the top five methods from the CASP7 experiment. In addition, the optimal consensus of the tested methods is determined and the added value from each method is quantified. Results: The DISOclust method is shown to add the most value to a simple consensus of methods, even in the absence of target sequence homology to known structures. A simple consensus of methods that includes DISOclust can significantly outperform all of the previous individual methods tested.
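The "simple consensus of methods" used as a baseline here can be sketched as a per-residue average of predictor scores. The scores and the 0.5 threshold below are hypothetical; DISOclust's own analysis of fold-recognition models is not reproduced:

```python
def consensus_disorder(score_lists, threshold=0.5):
    """Average per-residue disorder scores from several predictors and
    call a residue disordered when the mean crosses the threshold."""
    n = len(score_lists[0])
    assert all(len(s) == n for s in score_lists), "predictors must cover the same sequence"
    means = [sum(scores[i] for scores in score_lists) / len(score_lists)
             for i in range(n)]
    return [m >= threshold for m in means], means

# Hypothetical per-residue scores from three predictors over a 6-residue stretch
preds = [
    [0.1, 0.2, 0.8, 0.9, 0.7, 0.3],
    [0.2, 0.1, 0.6, 0.8, 0.9, 0.2],
    [0.0, 0.3, 0.7, 0.7, 0.8, 0.1],
]
calls, means = consensus_disorder(preds)
# calls marks residues 3-5 (0-indexed 2-4) as disordered
```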
Abstract:
Question: What are the key physiological and life-history trade-offs responsible for the evolution of different suites of plant traits (strategies) in different environments? Experimental methods: Common-garden experiments were performed on physiologically realistic model plants, evolved in contrasting environments, in computer simulations. This allowed the identification of the trade-offs that resulted in different suites of traits (strategies). The environments considered were: resource rich, low disturbance (competitive); resource poor, low disturbance (stressed); resource rich, high disturbance (disturbed); and stressed environments containing herbivores (grazed). Results: In disturbed environments, plants increased reproduction at the expense of ability to compete for light and nitrogen. In competitive environments, plants traded off reproductive output and leaf production for vertical growth. In stressed environments, plants traded off vertical growth and reproductive output for nitrogen acquisition, contradicting Grime's (2001) theory that slow-growing, competitively inferior strategies are selected in stressed environments. The contradiction is partly resolved by incorporating herbivores into the stressed environment, which selects for increased investment in defence, at the expense of competitive ability and reproduction. Conclusion: Our explicit modelling of trade-offs produces rigorous testable explanations of observed associations between suites of traits and environments.
Abstract:
Bubble inclusion is one of the fastest growing operations practiced in the food industry. A variety of aerated foods is currently available in supermarkets, and newer products are emerging all the time. This paper aims to combine knowledge on chocolate aeration with studies performed on bubble formation and dispersion characteristics. More specifically, we have investigated bubble formation induced by applying vacuum. Experimental methods to determine gas hold-up (volume fraction of air), bubble section distributions along specific planes, and chocolate rheological properties are presented. This study concludes that decreasing pressures elevate gas hold-up values due to an increase in the number of bubble nuclei being formed and the release of a greater volume of dissolved gases. Furthermore, bubbles are observed to be larger at lower pressures for a set amount of gas, because the internal pressure needs to be in equilibrium with the surrounding pressure. Temperature-induced changes to the properties of the chocolate have less of an effect on bubble formation. On the other hand, when different fats and emulsifiers were added to a standard chocolate recipe, milk fat was found to significantly increase the gas hold-up values and the mean bubble-section diameters. It is hypothesized that this behavior is related to the way milk fats, which contain different fatty acids from those in cocoa butter, crystallize and influence the setting properties of the final product. It is highlighted that apparent viscosity values at low shear rates, as well as setting behavior, play an important role in bubble formation and entrainment.
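The claim that bubbles are larger at lower pressures for a fixed amount of gas follows directly from mechanical equilibrium with the surroundings. An ideal-gas back-of-envelope check (surface tension and the chocolate's yield stress neglected; all numbers illustrative) looks like this:

```python
import math

R_GAS = 8.314  # universal gas constant, J/(mol K)

def bubble_radius(n_mol, temp_k, pressure_pa):
    """Radius of a spherical bubble holding n_mol of ideal gas whose
    internal pressure has equilibrated with the ambient pressure."""
    volume = n_mol * R_GAS * temp_k / pressure_pa  # V = nRT / P, in m^3
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)

# Same amount of gas at atmospheric pressure vs under partial vacuum
r_atm = bubble_radius(1e-9, 300.0, 101_325.0)
r_vac = bubble_radius(1e-9, 300.0, 50_000.0)
# Halving the pressure roughly doubles the volume, so the radius
# grows by a factor of about 2**(1/3) ~ 1.26.
```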
Abstract:
Often, firms have no information on the specification of the true demand model they face. It is, however, a well-established fact that trial-and-error algorithms may be used to learn how to make optimal decisions. Using experimental methods, we identify a property of the information on past actions which helps the seller of two asymmetric demand substitutes to reach the optimal prices faster and more precisely. The property concerns the possibility of disaggregating changes in each product's demand into client exit/entry and shifts from one product to the other.
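The trial-and-error element (though not the information-disaggregation treatment) can be sketched as derivative-free hill climbing on profit. The linear demand system below is an illustrative assumption only; in the experiment the seller does not know the true demand model:

```python
import random

random.seed(2)

def demand(p1, p2):
    """Hypothetical linear demand for two substitutes
    (positive cross-price effects)."""
    q1 = max(0.0, 10.0 - 2.0 * p1 + 1.0 * p2)
    q2 = max(0.0, 8.0 - 2.0 * p2 + 1.0 * p1)
    return q1, q2

def profit(p1, p2):
    q1, q2 = demand(p1, p2)
    return p1 * q1 + p2 * q2

def trial_and_error(steps=2000, step_size=0.05):
    """Perturb each price by a small random step and keep the change
    only if realized profit improves."""
    p1, p2 = 1.0, 1.0
    for _ in range(steps):
        d1 = random.choice([-step_size, 0.0, step_size])
        d2 = random.choice([-step_size, 0.0, step_size])
        if profit(p1 + d1, p2 + d2) > profit(p1, p2):
            p1, p2 = p1 + d1, p2 + d2
    return p1, p2

p1, p2 = trial_and_error()
```

With this demand system the rule climbs toward the joint profit maximum; the experiment's question is how the format of past-demand information changes the speed and precision of that climb.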
Abstract:
In the present research, we conducted 4 studies designed to examine the hypothesis that perceived competence moderates the relation between performance-approach and performance-avoidance goals. Each study yielded supportive data, indicating that the correlation between the 2 goals is lower when perceived competence is high. This pattern was observed at the between- and within-subject level of analysis, with correlational and experimental methods and using both standard and novel achievement goal assessments, multiple operationalizations of perceived competence, and several different types of focal tasks. The findings from this research contribute to the achievement goal literature on theoretical, applied, and methodological fronts and highlight the importance of and need for additional empirical work in this area.
Abstract:
M-type barium hexaferrite (BaM) is a hard ferrite, crystallizing in space group P6(3)/mmc and possessing a hexagonal magnetoplumbite structure, which consists of alternating hexagonal and spinel blocks. The structure of BaM is thus related to those of garnet and spinel ferrite. However, the material has proved difficult to synthesize. By taking into account the presence of the spinel block in barium hexagonal ferrite, highly efficient new synthetic methods were devised with routes significantly different from existing ones. These successful variations in synthetic methods were derived from a detailed investigation of the structural features of barium hexagonal ferrite and the least-change principle, whereby configuration changes are kept to a minimum. Considering the relevant mechanisms has thus helped to improve the synthesis efficiencies of both the hydrothermal and co-precipitation methods by choosing conditions that invoke the formation of the cubic block or the less stable Fe3O4. The role played by BaFe2O4 in the synthesis is also discussed. The distribution of iron from reactants or intermediates among different sites was also successfully explained. The proposed mechanisms are based on the principle that the cubic block must be self-assembled to form the final product. It is therefore believed that these mechanisms should be helpful in designing experiments to obtain a deeper understanding of the synthesis process and to investigate the substitution of magnetic ions with doping ions.
Abstract:
The role and function of a given protein depend on its structure. In recent years, however, numerous studies have highlighted the importance of unstructured, or disordered, regions in governing a protein's function. Disordered proteins have been found to play important roles in pivotal cellular functions, such as DNA binding and signalling cascades. Studying proteins with extended disordered regions is often problematic, as they can be challenging to express, purify and crystallise. This means that interpretable experimental data on protein disorder are hard to generate. As a result, predictive computational tools have been developed with the aim of predicting the level and location of disorder within a protein. Currently, over 60 prediction servers exist, utilizing different methods for classifying disorder and different training sets. Here we review several well-performing, publicly available prediction methods, comparing their application and discussing how disorder prediction servers can be used to aid the experimental solution of protein structure. The use of disorder prediction methods allows us to adopt a more targeted approach to experimental studies by accurately identifying the boundaries of ordered protein domains so that they may be investigated separately, thereby increasing the likelihood of their successful experimental solution.
Abstract:
1. Suction sampling is a popular method for the collection of quantitative data on grassland invertebrate populations, although there have been no detailed studies of the effectiveness of the method. 2. We investigate the effect of effort (duration and number of suction samples) and sward height on the efficiency of suction sampling of grassland beetle, true bug, planthopper and spider populations. We also compare suction sampling with an absolute sampling method based on the destructive removal of turfs. 3. Sampling for durations of 16 seconds was sufficient to collect 90% of all individuals and species of grassland beetles, with less time required for the true bugs, spiders and planthoppers. The number of samples required to collect 90% of the species was more variable, although in general 55 sub-samples was sufficient for all groups except the true bugs. Increasing sward height had a negative effect on the capture efficiency of suction sampling. 4. The assemblage structure of beetles, planthoppers and spiders was independent of the sampling method (suction or absolute) used. 5. Synthesis and applications. In contrast to other sampling methods used in grassland habitats (e.g. sweep netting or pitfall trapping), suction sampling is an effective quantitative tool for the measurement of invertebrate diversity and assemblage structure, provided that sward height is included as a covariate. The effective sampling of beetles, true bugs, planthoppers and spiders together requires a minimum sampling effort of 110 sub-samples of 16 seconds each. Such sampling intensities can be adjusted depending on the taxa sampled, and we provide information to minimize sampling problems associated with this versatile technique. Suction sampling should remain an important component in the toolbox of techniques used during both experimental and management sampling regimes within agroecosystems, grasslands and other low-lying vegetation types.
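The "number of sub-samples needed to collect 90% of the species" criterion is a read-off from a species-accumulation curve. A sketch of that calculation on simulated sub-sample data (the community and detection probabilities below are invented, not the study's field data):

```python
import random

random.seed(1)

# Hypothetical community: 30 species with unequal detection probabilities,
# from common (rank 0) to rare.
n_species, n_subsamples = 30, 120
detect_p = [0.5 / (rank + 1) for rank in range(n_species)]
subsamples = [{s for s in range(n_species) if random.random() < detect_p[s]}
              for _ in range(n_subsamples)]
total_species = len(set().union(*subsamples))

def subsamples_for_fraction(samples, fraction=0.9, trials=200):
    """Smallest number of pooled sub-samples whose mean species count
    (averaged over random pools) reaches `fraction` of the observed total."""
    target = fraction * total_species
    for n in range(1, len(samples) + 1):
        mean_richness = sum(
            len(set().union(*random.sample(samples, n))) for _ in range(trials)
        ) / trials
        if mean_richness >= target:
            return n
    return len(samples)

needed = subsamples_for_fraction(subsamples)
```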
Abstract:
In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetics now frequently studied, attention is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains quantifiable in terms of information, productivity and accuracy of each experiment. Developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types, sets of design rules and the key conclusion that such designs should be based on some prior knowledge of the Michaelis constant K_M and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, number of measurements and choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the estimated parameters.
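For Michaelis-Menten kinetics, v(S) = Vmax*S / (K_M + S), the idea of a Bayesian optimal design can be sketched by maximizing the prior-weighted log-determinant of the Fisher information (a D-optimality criterion) over candidate substrate concentrations. The prior support points, candidate grid and three-point design size below are all assumptions for illustration, not the paper's utility functions:

```python
import itertools
import math

def sensitivities(S, Vmax, Km):
    """Partial derivatives of v = Vmax*S/(Km+S) w.r.t. (Vmax, Km)."""
    dv_dVmax = S / (Km + S)
    dv_dKm = -Vmax * S / (Km + S) ** 2
    return dv_dVmax, dv_dKm

def log_det_information(design, Vmax, Km):
    """log det of the 2x2 Fisher information F^T F for the design points."""
    a = b = c = 0.0
    for S in design:
        f1, f2 = sensitivities(S, Vmax, Km)
        a += f1 * f1
        b += f1 * f2
        c += f2 * f2
    det = a * c - b * b
    return math.log(det) if det > 0 else float("-inf")

# Prior knowledge of Km as a few weighted support points (assumed values)
km_prior = [(0.5, 0.3), (1.0, 0.4), (2.0, 0.3)]  # (Km, weight)
Vmax = 1.0
candidates = [0.1, 0.25, 0.5, 1.0, 2.0, 5.0, 10.0]  # substrate concentrations

def expected_utility(design):
    """Bayesian D-optimality: prior-weighted expected log-determinant."""
    return sum(w * log_det_information(design, Vmax, km) for km, w in km_prior)

# Exhaustive search over three-point designs from the candidate grid
best = max(itertools.combinations(candidates, 3), key=expected_utility)
```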
Abstract:
The purpose of this paper is to present two multi-criteria decision-making models, an Analytic Hierarchy Process (AHP) model and an Analytic Network Process (ANP) model, for the assessment of deconstruction plans, and to compare the two models with an experimental case study. Deconstruction planning is under pressure to reduce operation costs, adverse environmental impacts and duration, while improving productivity and safety in accordance with structure characteristics, site conditions and past experience. To achieve these targets in deconstruction projects, there is a pressing need for a formal procedure by which contractors can select the most appropriate deconstruction plan. Because numerous factors influence the selection of deconstruction techniques, engineers need effective tools to conduct the selection process. In this regard, multi-criteria decision-making methods such as AHP have been adopted to support deconstruction technique selection in previous research, in which it has been shown that the AHP method can help decision-makers make informed decisions on deconstruction technique selection based on a sound technical framework. In this paper, the authors present the application and comparison of two decision-making models, the AHP model and the ANP model, for deconstruction plan assessment. The paper concludes that both AHP and ANP are viable and capable tools for deconstruction plan assessment under the same set of evaluation criteria. However, although the ANP can measure relationships among selection criteria and their sub-criteria, which are normally ignored in the AHP, the authors also note that whether the ANP model provides a more accurate result should be examined in further research.
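A minimal AHP sketch: criterion weights derived from a pairwise comparison matrix via the geometric-mean (row) approximation of the principal eigenvector, with Saaty's consistency check. The three criteria and the judgments below are hypothetical, not the paper's case-study data:

```python
import math

def ahp_weights(matrix):
    """Geometric-mean approximation of the AHP priority vector."""
    n = len(matrix)
    geo = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(geo)
    return [g / total for g in geo]

def consistency_ratio(matrix, weights, random_index=0.58):
    """Saaty's CR; random_index 0.58 is the standard value for n = 3.
    Values below 0.1 are conventionally acceptable."""
    n = len(matrix)
    aw = [sum(matrix[i][j] * weights[j] for j in range(n)) for i in range(n)]
    lam = sum(aw[i] / weights[i] for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return ci / random_index

# Hypothetical judgments on Saaty's 1-9 scale: cost vs environmental
# impact vs duration; matrix[i][j] = importance of criterion i over j.
comparisons = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(comparisons)
# weights sum to 1 and rank cost > environmental impact > duration
```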
Abstract:
A novel sparse kernel density estimator is derived based on a regression approach, which selects a very small subset of significant kernels by means of the D-optimality experimental design criterion using an orthogonal forward selection procedure. The weights of the resulting sparse kernel model are calculated using the multiplicative nonnegative quadratic programming algorithm. The proposed method is computationally attractive, in comparison with many existing kernel density estimation algorithms. Our numerical results also show that the proposed method compares favourably with other existing methods, in terms of both test accuracy and model sparsity, for constructing kernel density estimates.
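The sparsity idea — representing a density with a very small subset of kernels chosen by forward selection — can be illustrated with a simplified greedy scheme. Note the substitutions: the paper selects kernels by a D-optimality criterion with orthogonal forward selection and computes weights by multiplicative nonnegative quadratic programming, whereas this sketch greedily maximizes the training log-likelihood and uses equal weights. The data are toy values:

```python
import math

def gauss(x, c, h):
    """1-D Gaussian kernel with centre c and width h."""
    return math.exp(-0.5 * ((x - c) / h) ** 2) / (h * math.sqrt(2 * math.pi))

def sparse_kde(data, n_kernels=3, h=0.5):
    """Greedy forward selection of kernel centres: at each step add the
    data point that most improves the total log-density of the sample."""
    centres = []
    remaining = list(data)
    for _ in range(n_kernels):
        def score(candidate):
            cs = centres + [candidate]
            return sum(math.log(sum(gauss(x, c, h) for c in cs) / len(cs) + 1e-300)
                       for x in data)
        best = max(remaining, key=score)
        centres.append(best)
        remaining.remove(best)
    # Equal weights here; the paper computes the weights by MNQP instead.
    return lambda x: sum(gauss(x, c, h) for c in centres) / len(centres), centres

# Toy sample with three clusters around -2, 0 and 2
data = [-2.1, -1.9, -2.0, 0.1, 0.0, -0.1, 2.0, 1.9, 2.1]
density, centres = sparse_kde(data)
# Three kernels out of nine points suffice: one centre lands in each cluster.
```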