117 results for Factorial experiment designs.
Abstract:
We assessed the effect of biochar incorporation into the soil on the soil-atmosphere exchange of greenhouse gases (GHG) from an intensive subtropical pasture. For this, we measured N2O, CH4 and CO2 emissions with high temporal resolution from April to June 2009 in an existing factorial experiment where cattle feedlot biochar had been applied at 10 t ha⁻¹ in November 2006. Over the whole measurement period, significant emissions of N2O and CO2 were observed, whereas a net uptake of CH4 was measured. N2O emissions were found to be highly episodic, with one major emission pulse (up to 502 µg N2O-N m⁻² h⁻¹) following heavy rainfall. There was no significant difference in the net flux of GHGs from the biochar-amended vs. the control plots. Our results demonstrate that intensively managed subtropical pastures on Ferrosols in northern New South Wales, Australia can be a significant source of GHG. Our hypothesis that the application of biochar would lead to a reduction in GHG emissions from soils was not supported in this field assessment. Additional studies with longer observation periods are needed to clarify the long-term effect of biochar amendment on soil microbial processes and the emission of GHGs under field conditions.
Abstract:
The cotton strip assay (CSA) is an established technique for measuring soil microbial activity. The technique involves burying cotton strips and measuring their tensile strength after a certain time, which gives a measure of the rotting rate, R, of the cotton strips; R in turn serves as a measure of soil microbial activity. This paper examines properties of the technique and indicates how the assay can be optimised. Humidity conditioning of the cotton strips before measuring their tensile strength reduced the within-day and between-day variance and enabled the distribution of the tensile strength measurements to approximate normality. The test data came from a three-way factorial experiment (two soils, two temperatures, three moisture levels). The cotton strips were buried in the soil for intervals of time ranging up to 6 weeks, which enabled the rate of loss of cotton tensile strength with time to be studied under a range of conditions. An inverse cubic model accounted for greater than 90% of the total variation within each treatment combination, supporting the summary of the decomposition process by the single parameter R. The approximate variance of the decomposition rate was estimated from a function incorporating the variance of tensile strength and the derivative of the decomposition rate, R, with respect to tensile strength. This variance function has a minimum when the measured strength is approximately 2/3 of the original strength. The estimates of R are almost unbiased and relatively robust against the cotton strips being left in the soil for more or less than the optimal time. We conclude that the rotting rate R should be measured using the inverse cubic equation, and that the cotton strips should be left in the soil until their strength has been reduced to about 2/3 of the original.
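A minimal sketch of the variance calculation described in this abstract, using the delta method: Var(R) ≈ (dR/dS)² · Var(S). The specific functional form of the inverse cubic model used below (S0/S = 1 + (Rt)³) and the symbols S0, S, and t are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Assumed inverse cubic model relating strength loss to rotting rate R:
#   S0 / S = 1 + (R * t)^3
# where S0 = original strength, S = measured strength, t = burial time.
# This form is illustrative only.
def rotting_rate(S, S0, t):
    return ((S0 / S - 1.0) ** (1.0 / 3.0)) / t

def var_rotting_rate(S, S0, t, var_S, eps=1e-6):
    """Delta method: Var(R) ~ (dR/dS)^2 * Var(S), dR/dS by central difference."""
    dR_dS = (rotting_rate(S + eps, S0, t) - rotting_rate(S - eps, S0, t)) / (2 * eps)
    return dR_dS ** 2 * var_S

# Under this model, Var(R) is smallest when the measured strength is
# exactly 2/3 of the original strength, matching the abstract's claim:
S0, t, var_S = 100.0, 4.0, 4.0
for S in np.linspace(40.0, 90.0, 6):
    print(f"S = {S:5.1f}  Var(R) = {var_rotting_rate(S, S0, t, var_S):.4g}")
```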
Abstract:
This paper investigates why entrepreneurs experience stigma after firm failure and what can be done to reduce it. We use attribution theory as an overarching theoretical framework and hypothesize that entrepreneurs are held more accountable than employees for their unemployment after firm failure, irrespective of the circumstances causing the failure. To test this hypothesis we conduct a between-group, 2×2 full factorial experiment in which the cause of the failure is manipulated. We find that entrepreneurs are indeed held more accountable for firm failure irrespective of its cause, and that respondents who view failure as an inherent risk of firm ownership are less likely to stigmatize failed entrepreneurs.
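As an illustration of how data from such a between-group 2×2 design are typically analysed, the sketch below simulates accountability ratings and fits a two-way ANOVA. All variable names, cell sizes, and effect magnitudes are invented for demonstration; this is not the paper's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(42)
n = 40  # hypothetical participants per cell

# 2x2 between-group layout: role (entrepreneur/employee) x failure cause
df = pd.DataFrame({
    "role": np.repeat(["entrepreneur", "employee"], 2 * n),
    "cause": np.tile(np.repeat(["internal", "external"], n), 2),
})
# Simulate higher blame ratings for entrepreneurs regardless of cause
df["blame"] = (rng.normal(4.0, 1.0, 4 * n)
               + np.where(df["role"] == "entrepreneur", 0.8, 0.0))

# Two-way ANOVA with main effects and the role x cause interaction
model = smf.ols("blame ~ role * cause", data=df).fit()
print(anova_lm(model, typ=2))
```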
Abstract:
The paper provides a systematic approach to designing the laboratory phase of a multiphase experiment, taking into account previous phases. General principles are outlined for experiments in which orthogonal designs can be employed. Multiphase experiments occur widely, although their multiphase nature is often not recognized. The need to randomize, in the laboratory phase, the material produced in the first phase is emphasized. Factor-allocation diagrams are used to depict the randomizations in a design, and the use of skeleton analysis-of-variance (ANOVA) tables to evaluate their properties is discussed. The methods are illustrated using a scenario and a case study. A basis for categorizing designs is suggested. This article has supplementary material online.
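A minimal sketch of the randomization step emphasized above: material produced by the first (field) phase is given a randomized processing order in the laboratory phase. Sample labels and counts are invented for illustration.

```python
import random

# Hypothetical first-phase material: 4 field plots x 2 replicates
field_samples = [f"plot{p}-rep{r}" for p in range(1, 5) for r in (1, 2)]

# Randomize the laboratory-phase run order so laboratory effects
# (day, operator, machine drift) are not confounded with field treatments
lab_order = field_samples[:]
random.Random(2024).shuffle(lab_order)
for position, sample in enumerate(lab_order, start=1):
    print(f"lab run {position}: {sample}")
```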
Abstract:
PURPOSE To compare diffusion-weighted functional magnetic resonance imaging (DfMRI), a novel alternative to blood oxygenation level-dependent (BOLD) contrast, with BOLD in a functional MRI experiment. MATERIALS AND METHODS Nine participants viewed contrast-reversing (7.5 Hz) black-and-white checkerboard stimuli in block and event-related paradigms. DfMRI (b = 1800 s/mm²) and BOLD sequences were acquired. Four parameters describing the observed signal were assessed: percent signal change, spatial extent of the activation, the Euclidean distance between peak voxel locations, and the time-to-peak (TTP) of the best-fitting impulse response for the different paradigms and sequences. RESULTS The BOLD conditions showed a higher percent signal change relative to DfMRI; however, event-related DfMRI showed the strongest group activation (t = 21.23, P < 0.0005). Activation was more diffuse and spatially closer to the BOLD response for DfMRI when the block design was used. Event-related DfMRI showed the shortest TTP (4.4 ± 0.88 s). CONCLUSION The hemodynamic contribution to DfMRI may increase with the use of block designs.
Abstract:
There is a wide range of potential study designs for intervention studies to decrease nosocomial infections in hospitals. The analysis is complex due to competing events, clustering, multiple timescales and time-dependent period and intervention variables. This review considers the popular pre-post quasi-experimental design and compares it with randomized designs. Randomization can be done in several ways: randomization of the cluster [intensive care unit (ICU) or hospital] in a parallel design; randomization of the sequence in a cross-over design; and randomization of the time of intervention in a stepped-wedge design. We introduce each design in the context of nosocomial infections and discuss the designs with respect to the following key points: bias, control for nonintervention factors, and generalizability. Statistical issues are discussed. A pre-post-intervention design is often the only choice that will be informative for a retrospective analysis of an outbreak setting. It can be seen as a pilot study with further, more rigorous designs needed to establish causality. To yield internally valid results, randomization is needed. Generally, the first choice in terms of the internal validity should be a parallel cluster randomized trial. However, generalizability might be stronger in a stepped-wedge design because a wider range of ICU clinicians may be convinced to participate, especially if there are pilot studies with promising results. For analysis, the use of extended competing risk models is recommended.
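To make the stepped-wedge idea concrete, here is a minimal sketch of a randomized crossover schedule: each cluster (ICU) starts under control and crosses to the intervention at a randomized time step. Cluster names and period counts are invented for illustration.

```python
import random

def stepped_wedge(clusters, n_periods, seed=1):
    """Randomize the order in which clusters cross over to the intervention."""
    order = list(clusters)
    random.Random(seed).shuffle(order)  # randomized crossover sequence
    # Cluster k (in randomized order) starts the intervention in period k+1,
    # so the first period is all-control and the last is all-intervention.
    schedule = {}
    for k, icu in enumerate(order):
        schedule[icu] = ["control" if t <= k else "intervention"
                         for t in range(n_periods)]
    return schedule

for icu, arms in stepped_wedge(["ICU-A", "ICU-B", "ICU-C"], 4).items():
    print(icu, arms)
```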
Abstract:
Measuring quality attributes of object-oriented designs (e.g. maintainability and performance) has been covered by a number of studies. However, these studies have not considered security as much as other quality attributes. Also, most security studies focus at the level of individual program statements, which makes it hard and expensive to discover and fix vulnerabilities caused by design errors. In this work, we focus on the security of an object-oriented application at the design level and define a number of security metrics. These metrics allow designers to discover and fix security vulnerabilities at an early stage, and help compare the security of alternative designs. In particular, we propose seven security metrics to measure Data Encapsulation (accessibility) and Cohesion (interactions) of a given object-oriented class from the point of view of potential information flow.
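The seven metrics themselves are not listed in the abstract, so the sketch below shows one plausible encapsulation-style (accessibility) measure of the kind described: the fraction of a class's attributes that are publicly exposed. The metric definition, class model, and numbers are assumptions for illustration, not the paper's metrics.

```python
from dataclasses import dataclass

@dataclass
class ClassDesign:
    """Simplified view of an object-oriented class for metric computation."""
    name: str
    public_attrs: int
    private_attrs: int

def attribute_accessibility(cls: ClassDesign) -> float:
    """Fraction of attributes that are publicly accessible; lower values
    indicate better data encapsulation (less exposed state)."""
    total = cls.public_attrs + cls.private_attrs
    return cls.public_attrs / total if total else 0.0

# Compare two hypothetical alternative designs of the same class
designs = [ClassDesign("PatientRecord", 1, 9), ClassDesign("Session", 4, 2)]
for d in designs:
    print(f"{d.name}: accessibility = {attribute_accessibility(d):.2f}")
```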
Abstract:
The Internet theoretically enables marketers to personalize a Web site to an individual consumer. This article examines optimal Web-site design from the perspective of personality-trait theory and resource-matching theory. The influence of two traits relevant to Internet Web-site processing, sensation seeking and need for cognition, was studied in the context of resource matching and different levels of Web-site complexity. Data were collected at two points in time: personality-trait data first, followed by a laboratory experiment using constructed Web sites. Results reveal that (a) subjects prefer Web sites of a medium level of complexity, rather than high or low complexity; (b) high sensation seekers prefer complex visual designs, and low sensation seekers simple visual designs, both in Web sites of medium complexity; and (c) high need-for-cognition subjects evaluated Web sites with high verbal and low visual complexity more favourably.
Abstract:
The aim of this paper is to provide a contemporary summary of statistical and non-statistical meta-analytic procedures that have relevance to the type of experimental designs often used by sport scientists when examining differences/change in dependent measure(s) as a result of one or more independent manipulation(s). Using worked examples from studies on observational learning in the motor behaviour literature, we adopt a random effects model and give a detailed explanation of the statistical procedures for the three types of raw score difference-based analyses applicable to between-participant, within-participant, and mixed-participant designs. Major merits and concerns associated with these quantitative procedures are identified and agreed methods are reported for minimizing biased outcomes, such as those for dealing with multiple dependent measures from single studies, design variation across studies, different metrics (i.e. raw scores and difference scores), and variations in sample size. To complement the worked examples, we summarize the general considerations required when conducting and reporting a meta-analysis, including how to deal with publication bias, what information to present regarding the primary studies, and approaches for dealing with outliers. By bringing together these statistical and non-statistical meta-analytic procedures, we provide the tools required to clarify understanding of key concepts and principles.
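As a concrete illustration of the random-effects pooling step underlying such analyses, the sketch below implements the standard DerSimonian-Laird estimator for a set of raw-score difference effect sizes. The effect sizes and variances are made-up numbers; the paper's own worked examples are not reproduced here.

```python
import numpy as np

def random_effects(d, v):
    """DerSimonian-Laird random-effects pooling of effect sizes d
    with within-study variances v."""
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                   # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)            # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / c)       # between-study variance
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    d_re = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return d_re, se, tau2

# Hypothetical effect sizes from three observational-learning studies
d_re, se, tau2 = random_effects([0.35, 0.52, 0.10], [0.02, 0.04, 0.03])
print(f"pooled d = {d_re:.2f} +/- {1.96 * se:.2f}, tau^2 = {tau2:.3f}")
```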
Abstract:
In recent years a large body of research has investigated the various factors affecting child development and the consequent impact of child development on future educational and labour-market outcomes. In this article we contribute to this literature by investigating the effect of handedness on child development, given recent research demonstrating that child development strongly affects adult outcomes. Using a large, nationally representative sample of young children, we find that the probability of a child being left-handed is not significantly related to child health at birth, family composition, parental employment or household income. We also find robust evidence that left-handed (and mixed-handed) children perform significantly worse in nearly all measures of development than right-handed children, with the relative disadvantage being larger for boys than girls. Importantly, these differentials cannot be explained by different socioeconomic characteristics of the household, parental attitudes or investments in learning resources.
Abstract:
One of the classic forms of intermediate representation used for communication between compiler front-ends and back-ends is that based on abstract stack machines. It is possible to compile the stack-machine instructions into machine code by means of an interpretive code generator, or to simulate the stack machine at runtime using an interpreter. This paper describes an approach intermediate between these two extremes. The front-end of a commercial Modula-2 compiler was ported to the "industry standard PC", and a partially compiling back-end written. The object code runs with the assistance of an interpreter, but may be linked with libraries which are fully compiled. The intent was to provide a programming environment on the PC identical to that of the same compilers on 32-bit UNIX machines. This objective has been met, and the compiler is available to educational institutions as freeware. The design basis of the new compiler is described, and its performance critically evaluated.
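To illustrate the abstract-stack-machine style of intermediate representation discussed above, here is a minimal interpreter sketch; an interpretive code generator would translate the same instruction stream into native code instead of executing it directly. The instruction set is invented for illustration and is not the Modula-2 compiler's actual IR.

```python
def run(program):
    """Execute a list of stack-machine instructions."""
    stack = []
    for op, *args in program:
        if op == "push":
            stack.append(args[0])          # push a literal operand
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)            # replace top two values by sum
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)            # replace top two values by product
        elif op == "print":
            print(stack.pop())

# The expression (2 + 3) * 4 compiled to stack-machine form
run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",), ("print",)])
```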
Abstract:
Our working hypothesis is that cross-cultural differences in tax-compliance behaviour have foundations in the institutions of tax administration and in citizens' assessment of the quality of governance. Tax compliance is a complex behavioural issue, and its investigation requires the use of a variety of methods and data sources. Results from artefactual field experiments conducted in countries with substantially different political histories and records of governance quality demonstrate that observed differences in tax-compliance levels persist over alternative levels of enforcement. The experimental results are shown to be robust by replicating them for the same countries using survey-response measures of tax compliance.