31 results for economic-statistical design
in CentAUR: Central Archive University of Reading - UK
Abstract:
Pharmacogenetic trials investigate the effect of genotype on treatment response. When there are two or more treatment groups and two or more genetic groups, investigation of gene-treatment interactions is of key interest. However, calculation of the power to detect such interactions is complicated because this depends not only on the treatment effect size within each genetic group, but also on the number of genetic groups, the size of each genetic group, and the type of genetic effect that is both present and tested for. The scale chosen to measure the magnitude of an interaction can also be problematic, especially for the binary case. Elston et al. proposed a test for detecting the presence of gene-treatment interactions for binary responses, and gave appropriate power calculations. This paper shows how the same approach can also be used for normally distributed responses. We also propose a method for analysing and performing sample size calculations based on a generalized linear model (GLM) approach. The powers of the Elston et al. and GLM approaches are compared for the binary and normal cases using several illustrative examples. While more sensitive to errors in model specification than the Elston et al. approach, the GLM approach is much more flexible and in many cases more powerful. Copyright © 2005 John Wiley & Sons, Ltd.
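As a rough illustration of a GLM-based power calculation for a gene-treatment interaction, the sketch below simulates a two-treatment, two-genotype trial with a binary response and estimates power by repeatedly fitting a logistic GLM with an interaction term. The effect sizes, cell counts and use of statsmodels are illustrative assumptions, not the paper's actual calculations.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

def simulate_trial(n_per_cell=100, beta_interaction=0.8):
    # two treatments x two genetic groups, binary response (assumed effect sizes)
    rows = []
    for treat in (0, 1):
        for geno in (0, 1):
            logit = -0.5 + 0.3 * treat + 0.2 * geno + beta_interaction * treat * geno
            p = 1 / (1 + np.exp(-logit))
            y = rng.binomial(1, p, size=n_per_cell)
            rows.append(pd.DataFrame({"y": y, "treat": treat, "geno": geno}))
    return pd.concat(rows, ignore_index=True)

def interaction_power(n_sims=200, alpha=0.05, **kwargs):
    # fraction of simulated trials in which the interaction term is significant
    hits = 0
    for _ in range(n_sims):
        data = simulate_trial(**kwargs)
        fit = smf.glm("y ~ treat * geno", data=data,
                      family=sm.families.Binomial()).fit()
        if fit.pvalues["treat:geno"] < alpha:
            hits += 1
    return hits / n_sims

print(interaction_power())
```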
Abstract:
It is generally accepted that genetics may be an important factor in explaining the variation between patients' responses to certain drugs. However, identification and confirmation of the responsible genetic variants is proving to be a challenge in many cases. A number of difficulties that may be encountered in pursuit of these variants, such as non-replication of a true effect, population structure and selection bias, can be mitigated or at least reduced by appropriate statistical methodology. Another major statistical challenge facing pharmacogenetic studies is trying to detect possibly small polygenic effects using large volumes of genetic data, while controlling the number of false positive signals. Here we review statistical design and analysis options available for investigations of genetic resistance to anti-epileptic drugs.
Abstract:
This paper, the second in a series of three papers on the statistical aspects of interim analyses in clinical trials, is concerned with stopping rules in phase II clinical trials. Phase II trials are generally small-scale studies, and may include one or more experimental treatments with or without a control. A common feature is that the results primarily determine the course of further clinical evaluation of a treatment rather than providing definitive evidence of treatment efficacy. This means that there is more flexibility available in the design and analysis of such studies than in phase III trials. This has led to a range of different approaches being taken to the statistical design of stopping rules for such trials. This paper briefly describes and compares the different approaches. In most cases the stopping rules can be described and implemented easily without knowledge of the detailed statistical and computational methods used to obtain the rules.
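As a concrete illustration of the kind of easily implemented stopping rule referred to above, the sketch below computes exact operating characteristics of a generic two-stage rule (stop early for futility, otherwise continue to the full sample size). The boundary values and response rates are invented for illustration and are not taken from the paper.

```python
from math import comb

def two_stage_oc(p, n1, r1, n, r):
    """Stop after stage 1 if responses <= r1 out of n1; otherwise enrol to n
    patients in total and call the treatment promising if responses > r."""
    def binom_pmf(k, n_, p_):
        return comb(n_, k) * p_**k * (1 - p_)**(n_ - k)

    # probability of early termination for futility
    pet = sum(binom_pmf(x, n1, p) for x in range(r1 + 1))
    # probability of declaring the treatment promising
    accept = 0.0
    for x1 in range(r1 + 1, n1 + 1):
        for x2 in range(0, n - n1 + 1):
            if x1 + x2 > r:
                accept += binom_pmf(x1, n1, p) * binom_pmf(x2, n - n1, p)
    return pet, accept

# illustrative numbers only (not from the paper)
print(two_stage_oc(p=0.20, n1=10, r1=1, n=29, r=5))  # "uninteresting" response rate
print(two_stage_oc(p=0.40, n1=10, r1=1, n=29, r=5))  # "promising" response rate
```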
Abstract:
In order to identify the factors influencing adoption of technologies promoted by government to small-scale dairy farmers in the highlands of central Mexico, a field survey was conducted. A total of 115 farmers were grouped through cluster analysis (CA) and divided into three wealth status categories (high, medium and low) using wealth ranking. Chi-square analysis was used to examine the association of wealth status with technology adoption. Four groups of farms were differentiated in terms of farm dimensions, farmers' education, sources of income, wealth status, herd management, monetary support by government and technological availability. Statistical differences (p < 0.05) were observed in milk yield per herd per year among groups. Government organizations (GO) participated little in the promotion of the 17 technologies identified, six of which focused on crop or forage production and 11 of which were related to animal husbandry. Relatives and other farmers played an important role in knowledge diffusion and technology adoption. Although wealth status had a significant association (p < 0.05) with adoption, other factors, including the importance of the technology to farmers, the usefulness and productive benefits of innovations, and farmers' knowledge of them, were also important. It is concluded that analysing the information by group and wealth status was useful for identifying suitable crop- or forage-related and animal husbandry technologies for each group and wealth status of farmers. The characterization of farmers could therefore provide a useful starting point for the design and delivery of more appropriate and effective extension services.
Abstract:
The chapter examines how far medieval economic crises can be identified by analysing the residuals from a simultaneous equation model of the medieval English economy. High inflation, falls in gross domestic product and large intermittent changes in wage rates are all considered as potential indicators of crisis. Potential causal factors include bad harvests, wars and political instability. The chapter suggests that crises arose when a combination of different problems overwhelmed the capacity of government to address them. It may therefore be a mistake to look for a single cause of any crisis. The coincidence of separate problems is a more plausible explanation of many crises.
Abstract:
Experimental results from the open literature have been employed for the design and techno-economic evaluation of four process flowsheets for the production of microbial oil or biodiesel. The fermentation of glucose-based media using the yeast strain Rhodosporidium toruloides has been considered. Biodiesel production was based on the exploitation of either direct transesterification (without extraction of lipids from microbial biomass) or indirect transesterification of extracted microbial oil. When glucose-based renewable resources are used as the carbon source for an annual production capacity of 10,000 t microbial oil and a zero cost of glucose (assuming development of integrated biorefineries in existing industries utilising waste or by-product streams), the estimated unitary cost of purified microbial oil is $3.4/kg. Biodiesel production via indirect transesterification of extracted microbial oil proved to be a more cost-competitive process than the direct conversion of dried yeast cells. For a glucose price of $400/t, the oil and biodiesel production costs are estimated to be $5.5/kg oil and $5.9/kg biodiesel, respectively. Industrial implementation of microbial oil production from oleaginous yeast is strongly dependent on the feedstock used and on the fermentation stage, where significantly higher productivities and final microbial oil concentrations should be achieved.
Abstract:
A dynamic, deterministic, economic simulation model was developed to estimate the costs and benefits of controlling Mycobacterium avium subsp. paratuberculosis (Johne's disease) in a suckler beef herd. The model is intended as a demonstration tool for veterinarians to use with farmers. The model design process involved user consultation and participation, and the model is freely accessible on a dedicated website. The user-friendly model interface allows the input of key assumptions and farm-specific parameters, enabling model simulations to be tailored to individual farm circumstances. The model simulates the effect of Johne's disease and various measures for its control in terms of herd prevalence and the shedding states of animals within the herd, the financial costs of the disease and of any control measures, and the likely benefits of controlling Johne's disease for the beef suckler herd over a 10-year period. The model thus helps to make more transparent the 'hidden costs' of Johne's disease in a herd and the likely benefits to be gained from controlling the disease. The control strategies considered within the model are 'no control', 'testing and culling of diagnosed animals', 'improving management measures', or a dual strategy of 'testing and culling in association with improving management measures'. An example run of the model shows that the 'improving management measures' strategy, which reduces infection routes during the early stages, results in a marked fall in herd prevalence and total costs. Testing and culling does little to reduce prevalence and does not reduce total costs over the 10-year period.
Abstract:
The conventional method for assessing acute oral toxicity (OECD Test Guideline 401) was designed to identify the median lethal dose (LD50), using the death of animals as an endpoint. Introduced as an alternative method (OECD Test Guideline 420), the Fixed Dose Procedure (FDP) relies on the observation of clear signs of toxicity, uses fewer animals and causes less suffering. More recently, the Acute Toxic Class method and the Up-and-Down Procedure have also been adopted as OECD test guidelines. Both of these methods also use fewer animals than the conventional method, although they still use death as an endpoint. Each of the three new methods incorporates a sequential dosing procedure, which results in increased efficiency. In 1999, with a view to replacing OECD Test Guideline 401, the OECD requested that the three new test guidelines be updated. This was to bring them in line with the regulatory needs of all OECD Member Countries, provide further reductions in the number of animals used, and introduce refinements to reduce the pain and distress experienced by the animals. This paper describes a statistical modelling approach for the evaluation of acute oral toxicity tests, by using the revised FDP for illustration. Opportunities for further design improvements are discussed.
Abstract:
The aim of phase II single-arm clinical trials of a new drug is to determine whether it has sufficiently promising activity to warrant its further development. In recent years, Bayesian statistical methods have been proposed and used for such trials. Bayesian approaches are well suited to earlier-phase trials because they take into account information that accrues during a trial: predictive probabilities are updated and so become more accurate as the trial progresses. Suitable priors can act as pseudo samples, which make small-sample clinical trials more informative, so patients have a better chance of receiving better treatments. The goal of this paper is to provide a tutorial for statisticians who are using Bayesian methods for the first time, or for investigators who have some statistical background. In addition, real data from three clinical trials are presented as examples to illustrate how to conduct a Bayesian analysis of phase II single-arm clinical trials with binary outcomes.
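To make the conjugate Beta-Binomial machinery concrete, here is a minimal sketch, assuming a Beta prior and invented interim data (not the trials reported in the paper), of the two quantities such a tutorial revolves around: the posterior probability that the response rate exceeds a target, and the predictive probability that the trial will end in success.

```python
from scipy.stats import beta, betabinom

# Beta(a, b) prior acts like a "pseudo sample" of a + b prior patients
a, b = 1, 1                 # assumed non-informative prior
n_obs, x_obs = 14, 5        # illustrative interim data: 5 responses in 14 patients
n_max = 40                  # planned maximum sample size
p_target = 0.30             # response rate considered promising
post_threshold = 0.90       # posterior certainty required at the end of the trial

# posterior after the interim data
a_post, b_post = a + x_obs, b + n_obs - x_obs
print("P(p > target | data) =", 1 - beta.cdf(p_target, a_post, b_post))

# predictive probability of eventual success: average, over the beta-binomial
# distribution of future responses, of whether the final posterior clears the threshold
n_rem = n_max - n_obs
pred_success = 0.0
for x_future in range(n_rem + 1):
    w = betabinom.pmf(x_future, n_rem, a_post, b_post)
    a_fin, b_fin = a_post + x_future, b_post + n_rem - x_future
    if 1 - beta.cdf(p_target, a_fin, b_fin) > post_threshold:
        pred_success += w
print("predictive probability of success =", pred_success)
```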
Abstract:
As a continuing effort to establish the structure-activity relationships (SARs) within the series of the angiotensin II antagonists (sartans), a pharmacophoric model was built by using novel TOPP 3D descriptors. Statistical values were satisfactory (PC4: r² = 0.96, q² (5 random groups) = 0.84; SDEP = 0.26) and encouraged the synthesis and consequent biological evaluation of a series of new pyrrolidine derivatives. SAR together with a combined 3D quantitative SAR and high-throughput virtual screening showed that the newly synthesized 1-acyl-N-(biphenyl-4-ylmethyl)pyrrolidine-2-carboxamides may represent an interesting starting point for the design of new antihypertensive agents. In particular, biological tests performed on CHO-hAT₁ cells stably expressing the human AT₁ receptor showed that the length of the acyl chain is crucial for the receptor interaction and that the valeric chain is the optimal one.
Abstract:
From a statistician's standpoint, the interesting kind of isomorphism for fractional factorial designs depends on the statistical application. Combinatorially isomorphic fractional factorial designs may have different statistical properties when factors are quantitative. This idea is illustrated by using Latin squares of order 3 to obtain fractions of the 3³ factorial design in 18 runs.
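A minimal sketch of the construction mentioned above: each Latin square of order 3 defines a 9-run fraction of the 3³ factorial by setting the level of the third factor from the first two, and two such squares together give an 18-run fraction. The particular squares below are illustrative and are not necessarily those used in the paper.

```python
from itertools import product

# two Latin squares of order 3; entries give the level of factor C for each (A, B)
L1 = [[(a + b) % 3 for b in range(3)] for a in range(3)]      # C = A + B (mod 3)
L2 = [[(a + b + 1) % 3 for b in range(3)] for a in range(3)]  # C = A + B + 1 (mod 3)

# each square yields a 9-run fraction {(A, B, L[A][B])}; together they give an
# 18-run fraction of the 3^3 factorial (different square pairs give different fractions)
runs = [(a, b, L[a][b]) for L in (L1, L2) for a, b in product(range(3), repeat=2)]
print(len(runs), len(set(runs)))   # 18 runs, all distinct
for r in sorted(runs):
    print(r)
```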
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings presents a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcings, rather than the few highly replicated ensembles more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (non-additive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, the latter of which itself contains some forcing from anthropogenic and natural influences, have the most influence. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. It was also found that an interaction between these two anthropogenic effects exists in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, these show that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model was suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
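The factorial logic of the design can be illustrated with a toy linear model: every on/off combination of a handful of anthropogenic forcings is "run", and a single regression with pairwise interaction terms then separates main effects from interactions. The forcing names, effect sizes and use of statsmodels below are invented for illustration; this is not the GCM experiment itself.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from itertools import product

rng = np.random.default_rng(1)

# every on/off combination of three hypothetical anthropogenic forcings,
# with two replicate "integrations" per combination
combos = list(product([0, 1], repeat=3)) * 2
design = pd.DataFrame(combos, columns=["ghg", "sulphate", "ozone"])

# fake "simulated temperature response": main effects plus a ghg x sulphate interaction
design["dT"] = (0.8 * design.ghg - 0.3 * design.sulphate + 0.1 * design.ozone
                - 0.15 * design.ghg * design.sulphate
                + rng.normal(0, 0.02, len(design)))

# one linear model with all pairwise interactions, as in a factorial analysis
fit = smf.ols("dT ~ (ghg + sulphate + ozone) ** 2", data=design).fit()
print(fit.params)
```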
Abstract:
The facilitation of healthier dietary choices by consumers is one of the key elements of the UK Government’s food strategy. Designing and targeting dietary interventions requires a clear understanding of the determinants of dietary choice. Conventional analysis of the determinants of dietary choice has focused on mean response functions which may mask significant differences in the dietary behaviour of different segments of the population. In this paper we use a quantile regression approach to investigate how food consumption behaviour varies amongst UK households in different segments of the population, especially in the upper and lower quantiles characterised by healthy or unhealthy consumption patterns. We find that the effect of demographic determinants of dietary choice on households that exhibit less healthy consumption patterns differs significantly from that on households that make healthier consumption choices. A more nuanced understanding of the differences in the behavioural responses of households making less-healthy eating choices provides useful insights for the design and targeting of measures to promote healthier diets.
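A small sketch of the quantile regression idea using synthetic household data (the variable names and data-generating process are invented; the paper uses UK household data): fitting the same specification at the 10th, 50th and 90th percentiles shows how covariate effects can differ between the less-healthy and healthier tails of the consumption distribution.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# synthetic stand-in for household survey data (illustrative only)
n = 2000
df = pd.DataFrame({
    "income": rng.normal(30, 10, n),      # household income, arbitrary units
    "hh_size": rng.integers(1, 6, n),     # household size
    "age_head": rng.normal(45, 12, n),    # age of household head
})
# fruit & veg consumption with effects that differ across the distribution
noise = rng.normal(0, 1, n) * (1 + 0.03 * df.income)
df["fruit_veg"] = 2 + 0.05 * df.income - 0.2 * df.hh_size + 0.02 * df.age_head + noise

# compare the lower tail, the median and the upper tail of consumption
for tau in (0.1, 0.5, 0.9):
    fit = smf.quantreg("fruit_veg ~ income + hh_size + age_head", df).fit(q=tau)
    print(f"q = {tau}:", fit.params.round(3).to_dict())
```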
Abstract:
It is considered that systemisation, the use of standard systems of components and design solutions, has an effect on the activities of the designer. This paper draws on many areas of knowledge (design movements, economic theory, quality evaluation and organisation theory) to substantiate the view that systemisation will reduce the designer's discretion at the point of design. A methodology to test this hypothesis is described, which will be of use to other researchers studying the social processes of construction organisations.
Abstract:
This paper presents a meta-analysis of the economic and agronomic performance of genetically modified (GM) crops worldwide. Bayesian, classical and non-parametric approaches were used to evaluate the performance of GM crops v. their conventional counterparts. The two main GM crop traits (herbicide tolerant (HT) and insect resistant (Bt)) and three of the main GM crops produced worldwide (Bt cotton, HT soybean and Bt maize) were analysed in terms of yield, production cost and gross margin. The scope of the analysis covers developing and developed countries, six world regions, and all countries combined. Results from the statistical analyses indicate that GM crops perform better than their conventional counterparts in agronomic and economic (gross margin) terms. Regarding countries' level of development, GM crops tend to perform better in developing countries than in developed countries, with Bt cotton being the most profitable crop grown.
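As a hedged illustration of the classical pooling step in such a meta-analysis, the sketch below applies a DerSimonian-Laird random-effects estimator to made-up study-level yield differences; the numbers are not from the paper, and the Bayesian and non-parametric analyses it also reports are not shown.

```python
import numpy as np

# illustrative study-level effects: yield difference (GM minus conventional, t/ha)
# and their variances; the values are invented, not taken from the meta-analysis
effects = np.array([0.25, 0.10, 0.40, 0.05, 0.30])
variances = np.array([0.010, 0.020, 0.015, 0.030, 0.012])

# DerSimonian-Laird random-effects pooling
w = 1 / variances
fixed = np.sum(w * effects) / np.sum(w)                 # fixed-effect estimate
Q = np.sum(w * (effects - fixed) ** 2)                  # heterogeneity statistic
df = len(effects) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_star = 1 / (variances + tau2)
pooled = np.sum(w_star * effects) / np.sum(w_star)
se = np.sqrt(1 / np.sum(w_star))
print(f"pooled effect = {pooled:.3f} +/- {1.96*se:.3f} (95% CI), tau^2 = {tau2:.4f}")
```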