979 results for Factorial experiment designs.
Abstract:
The purpose of this study was to compare kinematics and kinetics during walking for healthy subjects using unstable shoes with different designs. Ten subjects participated in this study, and foot biomechanical data during walking were quantified using a motion analysis system and a force plate. Data were collected for each unstable shoe condition after an accommodation period of one week. With soft material added in the heel region, the peak impact force was effectively reduced compared with shoes of similar shape. In addition, shoes with soft material added to the rocker bottom showed the foot to be in a more dorsiflexed position during initial stance. The shoe with a three-rocker-curve design reduced the contact area at heel strike, which may increase the forward speed of the body. Further studies should be carried out after subjects have adapted to longer periods of wearing unstable shoes.
Abstract:
Ordinal qualitative data are often collected for phenotypic measurements in plant pathology and other biological sciences. Statistical methods such as t tests or analysis of variance are usually used to analyze ordinal data when comparing two or more groups. However, the underlying assumptions, such as normality and homogeneous variances, are often violated for qualitative data. To this end, we investigated an alternative methodology, rank regression, for analyzing ordinal data. The rank-based methods are essentially based on pairwise comparisons and can therefore handle qualitative data naturally. They require neither a normality assumption nor data transformation. Apart from robustness against outliers and high efficiency, rank regression can also incorporate covariate effects in the same way as ordinary regression. By reanalyzing a data set from a wheat Fusarium crown rot study, we illustrated the use of the rank regression methodology and demonstrated that the rank regression models appear to be more appropriate and sensible for analyzing nonnormal data and data with outliers.
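As a rough illustration of the rank-based idea (not the authors' exact pairwise-comparison estimator), the Python sketch below ranks hypothetical ordinal severity scores and fits a least-squares model to the ranks with a group indicator and a covariate; all data values and variable names are invented.

```python
# Minimal sketch of a rank-transform analysis of ordinal scores.
# This is NOT the article's rank-regression estimator; it only illustrates
# analysing the ranks of an ordinal response on hypothetical data.
import numpy as np
from scipy.stats import rankdata

# Hypothetical severity scores (0-4) for two groups, plus a covariate.
scores    = np.array([0, 1, 1, 2, 3, 2, 3, 4, 4, 3])
group     = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])   # 0 = control, 1 = treated
covariate = np.array([5.1, 4.8, 5.0, 5.3, 4.9, 5.2, 5.4, 5.0, 5.1, 4.7])

# Replace the ordinal response by its ranks (ties get mid-ranks),
# then fit an ordinary least-squares model to the ranks.
r = rankdata(scores)
X = np.column_stack([np.ones_like(r), group, covariate])
beta, *_ = np.linalg.lstsq(X, r, rcond=None)
print("rank-scale effects (intercept, group, covariate):", beta)
```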
Abstract:
Sampling strategies are developed based on the idea of ranked set sampling (RSS) to increase efficiency and thereby reduce the cost of sampling in fishery research. RSS incorporates information on concomitant variables that are correlated with the variable of interest into the selection of samples. For example, estimating a monitoring survey abundance index would be more efficient if the sampling sites were selected based on information from previous surveys or catch rates of the fishery. We use two practical fishery examples to demonstrate the approach: site selection for a fishery-independent monitoring survey in the Australian northern prawn fishery (NPF), and fish age prediction for a short-lived tropical clupeoid by simple linear regression modelling. The relative efficiencies of the new designs were derived analytically and compared with traditional simple random sampling (SRS). Optimal sampling schemes were assessed under different optimality criteria. For the NPF monitoring survey, the efficiency in terms of the variance or mean squared error of the estimated mean abundance index ranged from 114 to 199% relative to SRS. In the case of a fish ageing study for Tenualosa ilisha in Bangladesh, the efficiency of age prediction from fish body weight reached 140%.
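The basic balanced RSS scheme is simple to sketch. The Python example below draws a balanced ranked set sample from a simulated population, ranking each randomly drawn set by a concomitant variable before selecting one unit; the population, set size, and variable names are illustrative and not the NPF survey data.

```python
# Minimal sketch of balanced ranked set sampling (RSS) with ranking done on a
# concomitant variable; all data are simulated for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def rss_sample(y, x, set_size, cycles, rng):
    """Draw a balanced RSS of size set_size*cycles from population y,
    ranking each randomly drawn set by the concomitant variable x."""
    sample = []
    n = len(y)
    for _ in range(cycles):
        for i in range(set_size):
            idx = rng.choice(n, size=set_size, replace=False)  # SRS of one set
            order = idx[np.argsort(x[idx])]                    # rank by concomitant
            sample.append(y[order[i]])                         # keep i-th ranked unit
    return np.array(sample)

# Hypothetical population: y = current survey catch rate, x = previous catch rate.
x = rng.gamma(shape=2.0, scale=1.0, size=10_000)
y = x + rng.normal(scale=0.5, size=10_000)

rss = rss_sample(y, x, set_size=4, cycles=25, rng=rng)   # sample of 100
srs = rng.choice(y, size=100, replace=False)              # SRS of the same size
print("RSS mean:", rss.mean(), " SRS mean:", srs.mean(), " true mean:", y.mean())
```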
Abstract:
So far, most Phase II trials have been designed and analysed under a frequentist framework. Under this framework, a trial is designed so that its overall Type I and Type II errors are controlled at some desired levels. Recently, a number of articles have advocated the use of Bayesian designs in practice. Under a Bayesian framework, a trial is designed so that it stops when the posterior probability of treatment lies within certain prespecified thresholds. In this article, we argue that trials under a Bayesian framework can also be designed to control frequentist error rates. We introduce a Bayesian version of Simon's well-known two-stage design to achieve this goal. We also consider two other errors, which are called Bayesian errors in this article because of their similarity to posterior probabilities. We show that our method can also control these Bayesian-type errors. We compare our method with other recent Bayesian designs in a numerical study and discuss the implications of the different designs for error rates. An example of a clinical trial for patients with nasopharyngeal carcinoma is used to illustrate the differences between the designs.
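A minimal sketch of the kind of posterior-probability stopping rule described here, assuming a conjugate Beta prior on the response rate; the prior, thresholds, and interim data below are hypothetical and are not the article's actual design.

```python
# Minimal sketch of a Bayesian stopping rule for a single-arm phase II trial:
# with a Beta prior on the response rate, stop when the posterior probability
# that the rate exceeds an uninteresting level p0 crosses prespecified bounds.
# The prior, bounds, and interim data are illustrative assumptions.
from scipy.stats import beta

a0, b0 = 0.5, 0.5                      # Beta prior parameters (assumed)
p0 = 0.20                              # uninteresting response rate (assumed)
futility_bound, efficacy_bound = 0.05, 0.95

def posterior_prob_exceeds(responses, n, p0, a0, b0):
    """P(p > p0 | data) under a Beta(a0, b0) prior and binomial data."""
    return beta.sf(p0, a0 + responses, b0 + n - responses)

# Interim look after stage 1: 4 responses in 17 patients (hypothetical).
pp = posterior_prob_exceeds(4, 17, p0, a0, b0)
if pp < futility_bound:
    decision = "stop for futility"
elif pp > efficacy_bound:
    decision = "stop for efficacy"
else:
    decision = "continue to stage 2"
print(f"P(p > {p0} | data) = {pp:.3f} -> {decision}")
```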
Abstract:
The goal of this article is to provide a new design framework and its corresponding estimation for phase I trials. Existing phase I designs assign each subject to one dose level based on responses from previous subjects. Yet it is possible that subjects with neither toxicity nor efficacy responses can be treated at higher dose levels, and their subsequent responses to higher doses will provide more information. In addition, for some trials, it might be possible to obtain multiple responses (repeated measures) from a subject at different dose levels. In this article, a nonparametric estimation method is developed for such studies. We also explore how the designs of multiple doses per subject can be implemented to improve design efficiency. The gain of efficiency from "single dose per subject" to "multiple doses per subject" is evaluated for several scenarios. Our numerical study shows that using "multiple doses per subject" and the proposed estimation method together increases the efficiency substantially.
Abstract:
A decision-theoretic framework is proposed for designing sequential dose-finding trials with multiple outcomes. The optimal strategy is solvable theoretically via backward induction. However, for dose-finding studies involving k doses, the computational complexity is the same as that of a bandit problem with k dependent arms, which is computationally prohibitive. We therefore provide two computationally compromised strategies, which are of practical interest because the computational complexity is greatly reduced: one is closely related to the continual reassessment method (CRM), and the other improves on the CRM and approximates the optimal strategy more closely. In particular, we present the framework for phase I/II trials with multiple outcomes. Applications to a pediatric HIV trial and a cancer chemotherapy trial are given to illustrate the proposed approach. Simulation results for the two trials show that the computationally compromised strategies can perform well and appear to be ethical for allocating patients. The proposed framework can provide a better approximation to the optimal strategy if more extensive computing is available.
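Since the first compromised strategy is closely related to the CRM, a minimal sketch of a one-parameter CRM dose-assignment step may help fix ideas; the skeleton, prior, target rate, and accumulated data below are hypothetical and do not reproduce the article's decision-theoretic strategy.

```python
# Minimal sketch of a CRM dose-assignment step with the one-parameter power
# model p_d = skeleton_d ** exp(a) and a normal prior on a. All numbers are
# illustrative assumptions, not the article's design.
import numpy as np

skeleton = np.array([0.05, 0.10, 0.20, 0.35, 0.50])   # prior toxicity guesses
target = 0.25                                          # target toxicity rate
a_grid = np.linspace(-3, 3, 601)
prior = np.exp(-0.5 * a_grid**2 / 1.34**2)             # N(0, 1.34^2) prior, unnormalised

def next_dose(doses_given, toxicities):
    """Posterior-mean toxicity at each dose; pick the dose closest to target."""
    probs = skeleton[None, :] ** np.exp(a_grid)[:, None]   # p_d for each grid value of a
    lik = np.ones_like(a_grid)
    for d, y in zip(doses_given, toxicities):               # binary toxicity outcomes
        p = probs[:, d]
        lik *= p**y * (1 - p)**(1 - y)
    post = prior * lik
    post /= post.sum()
    est = (post[:, None] * probs).sum(axis=0)               # posterior-mean p_d
    return int(np.argmin(np.abs(est - target))), est

dose, est = next_dose(doses_given=[0, 0, 1, 1, 2], toxicities=[0, 0, 0, 1, 1])
print("estimated toxicity by dose:", np.round(est, 3), " next dose index:", dose)
```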
Abstract:
Several articles in this journal have studied optimal designs for testing a series of treatments to identify promising ones for further study. These designs formulate testing as an ongoing process until a promising treatment is identified. This formulation is considered to be more realistic but substantially increases the computational complexity. In this article, we show that these new designs, which control the error rates for a series of treatments, can be reformulated as conventional designs that control the error rates for each individual treatment. This reformulation leads to a more meaningful interpretation of the error rates and hence easier specification of the error rates in practice. The reformulation also allows us to use conventional designs from published tables or standard computer programs to design trials for a series of treatments. We illustrate these using a study in soft tissue sarcoma.
Abstract:
The purpose of a phase I trial in cancer is to determine the level (dose) of the treatment under study that has an acceptable level of adverse effects. Although substantial progress has recently been made in this area using parametric approaches, the method that is widely used is based on treating small cohorts of patients at escalating doses until the frequency of toxicities seen at a dose exceeds a predefined tolerable toxicity rate. This method is popular because of its simplicity and freedom from parametric assumptions. In this paper, we consider cases in which it is undesirable to assume a parametric dose-toxicity relationship. We propose a simple model-free approach by modifying the method that is in common use. The approach assumes toxicity is nondecreasing with dose and fits an isotonic regression to accumulated data. At any point in a trial, the dose given is that with estimated toxicity deemed closest to the maximum tolerable toxicity. Simulations indicate that this approach performs substantially better than the commonly used method and it compares favorably with other phase I designs.
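A minimal sketch of this model-free idea, assuming a target toxicity rate and hypothetical accumulated data: fit a nondecreasing (isotonic) dose-toxicity curve to the pooled observations and assign the next cohort to the dose whose estimated toxicity is closest to the maximum tolerable rate.

```python
# Minimal sketch of isotonic-regression dose finding on hypothetical data.
import numpy as np
from sklearn.isotonic import IsotonicRegression

target = 0.30                              # maximum tolerable toxicity rate (assumed)
n_treated = np.array([3, 6, 6, 3])         # patients treated per dose so far
n_toxic   = np.array([0, 1, 2, 2])         # toxicities observed per dose

observed = n_toxic / n_treated             # raw toxicity rates per dose
iso = IsotonicRegression(increasing=True)  # enforce nondecreasing dose-toxicity curve
est = iso.fit_transform(np.arange(len(observed)), observed, sample_weight=n_treated)

next_dose = int(np.argmin(np.abs(est - target)))
print("isotonic toxicity estimates:", np.round(est, 3), "-> next dose index:", next_dose)
```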
Abstract:
Space-time codes from complex orthogonal designs (CODs) with no zero entries offer a low peak-to-average power ratio (PAPR) and avoid the problem of switching off antennas. However, square CODs for 2^a antennas with a + 1 complex variables and no zero entries have been discovered only for a <= 3 and when a + 1 = 2^k, for k >= 4. In this paper, a method of obtaining no-zero-entry (NZE) square designs, called Complex Partial-Orthogonal Designs (CPODs), for 2^(a+1) antennas whenever a certain type of NZE code exists for 2^a antennas is presented. Then, starting from such a constructed NZE CPOD for n = 2^(a+1) antennas, a construction procedure is given to obtain NZE CPODs for 2n antennas, successively. Compared to CODs, CPODs have slightly higher ML decoding complexity for rectangular QAM constellations and the same ML decoding complexity for other complex constellations. Using the recently constructed NZE CODs for 8 antennas, our method leads to NZE CPODs for 16 antennas. The class of CPODs does not offer full diversity for all complex constellations; for the NZE CPODs presented in the paper, conditions on the signal sets that guarantee full diversity are identified. Simulation results show that the bit error performance of our codes is the same as that of CODs under an average power constraint and superior to CODs under a peak power constraint.
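The defining property these designs generalise can be seen with the familiar 2x2 Alamouti COD; the sketch below only verifies the orthogonality that enables symbol-by-symbol ML decoding, and does not construct the paper's 16-antenna NZE CPODs.

```python
# Minimal sketch of the complex orthogonal design (COD) property, using the
# classic 2x2 Alamouti code; the paper's larger NZE CPODs are not built here.
import numpy as np

def alamouti(x1, x2):
    """2x2 COD: columns correspond to antennas, rows to time slots."""
    return np.array([[x1,           x2],
                     [-np.conj(x2), np.conj(x1)]])

x1, x2 = 1 + 1j, -1 + 2j            # arbitrary complex information symbols
G = alamouti(x1, x2)

# Orthogonality: G^H G = (|x1|^2 + |x2|^2) * I, which gives full diversity and
# allows the symbols to be decoded independently.
print(np.allclose(G.conj().T @ G, (abs(x1)**2 + abs(x2)**2) * np.eye(2)))
```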
Abstract:
Objective: To assess the value of S-methylmethionine sulphonium chloride (SMMSC) (200 mg/kg) on the nutritional performance of pigs and as prevention or therapy for oesophagogastric ulcers. Design: Sixty pigs from a high health status herd with continuing oesophagogastric ulcer problems were endoscopically assessed for the presence or absence of oesophagogastric ulcers. Forty-eight pigs were then selected and allocated, according to an initial oesophagogastric epithelial (ulcer score) classification, to replicated treatment groups in a 2 × 2 factorial design. Weight gain and feed intake were measured over 49 d, after which pigs were killed and stomachs were collected, re-examined and scored for oesophagogastric ulceration. Results: Over the 49 d there was no difference in weight gain, feed intake or backfat between pigs with and without SMMSC supplementation, whether or not they had fully developed oesophagogastric ulcers at the start of the study. In pigs with an initially low ulcer score, feeding SMMSC did not prevent further oesophagogastric ulcer development. No significant effect of SMMSC was apparent when final mean oesophagogastric ulcer scores were compared in pigs with an existing high ulcer score. However, further analysis of the changes in individual pig oesophagogastric ulcer scores during the experiment showed that the observed reduction in scores in the high ulcer group was significantly different from all other groups. Conclusion: This study indicated that supplementation of pig diets with SMMSC cannot be justified unless the slight ulcer score improvement observed could be translated into some commercial production advantage, such as a reduction in pig mortalities due to oesophagogastric ulcers. The study further confirmed the benefit of endoscopy as a tool for objective assessment of oesophagogastric health.
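As a sketch of how a 2 × 2 factorial response such as weight gain could be analysed, the snippet below fits a two-factor model with interaction; the factor names follow the abstract, but the data values are invented.

```python
# Minimal sketch of a 2x2 factorial analysis (SMMSC supplementation x initial
# ulcer score class) for a weight-gain response; all numbers are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "gain":  [38, 41, 40, 37, 42, 39, 36, 40],   # 49-d weight gain, kg (invented)
    "smmsc": ["yes", "yes", "no", "no", "yes", "yes", "no", "no"],
    "ulcer": ["high", "low", "high", "low", "high", "low", "high", "low"],
})

# Two-way ANOVA with the SMMSC-by-ulcer-class interaction.
model = smf.ols("gain ~ C(smmsc) * C(ulcer)", data=df).fit()
print(anova_lm(model, typ=2))
```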
Abstract:
Design embraces several disciplines dedicated to the production of artifacts and services. These disciplines are quite independent, and only recently has psychological interest focused on them. Nowadays, psychological theories of design, also called the design cognition literature, describe the design process from the information processing viewpoint. These models co-exist with normative standards of how designs should be crafted. In many places there are concrete discrepancies between the two, in a way that resembles the differences between actual and ideal decision-making. This study aimed to explore the possible difference related to problem decomposition. Decomposition is a standard component of human problem-solving models and is also included in the normative models of design. The idea of decomposition is to focus on a single aspect of the problem at a time. Despite its significance, the nature of decomposition in conceptual design is poorly understood and has only been investigated preliminarily. This study addressed the status of decomposition in the conceptual design of products using protocol analysis. Previous empirical investigations have argued that there are implicit and explicit forms of decomposition, but have not provided a theoretical basis for the two. Therefore, the current research began by reviewing the problem-solving and design literature and then composing a cognitive model of the solution search of conceptual design. The result is a synthetic view which describes recognition and decomposition as the basic schemata for conceptual design. A psychological experiment was conducted to explore decomposition. In the test, sixteen (N=16) senior students of mechanical engineering created concepts for two alternative tasks. The concurrent think-aloud method and protocol analysis were used to study decomposition. The results showed that despite the emphasis on decomposition in formal education, only a few designers (N=3) used decomposition explicitly and spontaneously in the presented tasks, although the designers in general applied a top-down control strategy. Instead, as inferred from their use of structured strategies, the designers relied on implicit decomposition. These results confirm the initial observations found in the literature, but they also suggest that decomposition should be investigated further. In the future, the benefits and possibilities of explicit decomposition should be considered along with the cognitive mechanisms behind decomposition. After that, the current results could be reinterpreted.
Abstract:
Sufficient evidence tended to indicate that at least four factors can negatively influence the performance of broilers offered sorghum-based diets, in particular the energy utilisation of sorghum by young birds. It was proposed that CT in particular would further influence sorghum grain AME values when the grain is consumed by young chicks (0-7 and 7-14 d old). Overall, birds consuming sorghum-based diets during the starter phase (0-21 d) did not match the performance of birds offered wheat-based diets. The use of phytase enzymes in sorghum-based diets tended to improve bird performance. However, reducing the assigned AME of sorghum grains by 0.8 MJ during the 0-21 d period appears to be a practical solution.
Abstract:
This picture was taken during her last year of high school. The chemistry teacher, Professor Schmigielski, was one of Elizabeth's favorite teachers.