117 results for Factorial experiment designs.
Abstract:
The goal of this article is to provide a new design framework, and a corresponding estimation method, for phase I trials. Existing phase I designs assign each subject to one dose level based on responses from previous subjects. Yet subjects showing neither toxicity nor efficacy responses could be treated at higher dose levels, and their subsequent responses to higher doses would provide more information. In addition, for some trials it may be possible to obtain multiple responses (repeated measures) from a subject at different dose levels. In this article, a nonparametric estimation method is developed for such studies. We also explore how designs with multiple doses per subject can be implemented to improve design efficiency. The efficiency gain from "single dose per subject" to "multiple doses per subject" is evaluated for several scenarios. Our numerical study shows that using "multiple doses per subject" together with the proposed estimation method increases efficiency substantially.
Abstract:
A decision-theoretic framework is proposed for designing sequential dose-finding trials with multiple outcomes. The optimal strategy is solvable in principle via backward induction. However, for dose-finding studies involving k doses, the computational complexity is the same as that of the bandit problem with k dependent arms, which is computationally prohibitive. We therefore provide two computationally compromised strategies of practical interest, since the computational burden is greatly reduced: one is closely related to the continual reassessment method (CRM), and the other improves on the CRM and better approximates the optimal strategy. In particular, we present the framework for phase I/II trials with multiple outcomes. Applications to a pediatric HIV trial and a cancer chemotherapy trial illustrate the proposed approach. Simulation results for the two trials show that the computationally compromised strategies can perform well and appear to be ethical for allocating patients. The proposed framework can provide a better approximation to the optimal strategy when more extensive computing is available.
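To make the CRM connection concrete, here is a minimal sketch of a single CRM dose-assignment step under a one-parameter power model with a grid-based posterior. The skeleton, prior, target rate and patient data are all hypothetical illustrations, not the authors' decision-theoretic strategy.

```python
# Minimal sketch of one CRM dose-assignment step (hypothetical skeleton,
# prior and data; NOT the decision-theoretic strategy of the article).
# Power model: p_i(a) = skeleton[i] ** exp(a), normal prior on a,
# posterior evaluated on a discrete grid.
import math

def crm_next_dose(skeleton, doses_given, tox_observed, target=0.25,
                  prior_sd=1.34):
    """Index of the dose whose posterior-mean toxicity is closest to target."""
    grid = [i / 100.0 for i in range(-300, 301)]  # grid for parameter a
    post = []
    for a in grid:
        # Log prior (normal) plus Bernoulli log likelihood of the data.
        logp = -a * a / (2 * prior_sd ** 2)
        for d, y in zip(doses_given, tox_observed):
            p = skeleton[d] ** math.exp(a)
            logp += math.log(p) if y else math.log(1 - p)
        post.append(math.exp(logp))
    z = sum(post)
    post = [w / z for w in post]  # normalised posterior weights
    # Posterior-mean toxicity probability at each dose level.
    est = [sum(w * (s ** math.exp(a)) for w, a in zip(post, grid))
           for s in skeleton]
    return min(range(len(skeleton)), key=lambda i: abs(est[i] - target))

# Example: five dose levels, three patients treated so far,
# one toxicity seen at dose level 1.
skeleton = [0.05, 0.12, 0.25, 0.40, 0.55]
next_dose = crm_next_dose(skeleton, doses_given=[0, 1, 1],
                          tox_observed=[0, 0, 1])
```

The grid posterior stands in for the numerical integration a production CRM implementation would use; it keeps the sketch self-contained.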
Abstract:
Several articles in this journal have studied optimal designs for testing a series of treatments to identify promising ones for further study. These designs formulate testing as an ongoing process until a promising treatment is identified. This formulation is considered more realistic but substantially increases the computational complexity. In this article, we show that these new designs, which control the error rates for a series of treatments, can be reformulated as conventional designs that control the error rates for each individual treatment. This reformulation leads to a more meaningful interpretation of the error rates and hence easier specification of the error rates in practice. The reformulation also allows us to use conventional designs from published tables or standard computer programs to design trials for a series of treatments. We illustrate these ideas using a study in soft tissue sarcoma.
Abstract:
The purpose of a phase I trial in cancer is to determine the level (dose) of the treatment under study that has an acceptable level of adverse effects. Although substantial progress has recently been made in this area using parametric approaches, the method that is widely used is based on treating small cohorts of patients at escalating doses until the frequency of toxicities seen at a dose exceeds a predefined tolerable toxicity rate. This method is popular because of its simplicity and freedom from parametric assumptions. In this paper, we consider cases in which it is undesirable to assume a parametric dose-toxicity relationship. We propose a simple model-free approach by modifying the method that is in common use. The approach assumes toxicity is nondecreasing with dose and fits an isotonic regression to accumulated data. At any point in a trial, the dose given is that with estimated toxicity deemed closest to the maximum tolerable toxicity. Simulations indicate that this approach performs substantially better than the commonly used method and it compares favorably with other phase I designs.
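The isotonic step described above can be sketched with the pool-adjacent-violators algorithm (PAVA): fit a nondecreasing curve to the observed toxicity rates, then assign the dose whose fitted rate is closest to the target. The dose counts and target rate below are hypothetical.

```python
# Sketch of the model-free isotonic dose-finding step (hypothetical data).

def pava(rates, weights):
    """Weighted isotonic regression: nondecreasing fit to observed rates."""
    vals, wts = list(rates), list(weights)
    blocks = [[i] for i in range(len(rates))]  # pooled dose indices
    i = 0
    while i < len(vals) - 1:
        if vals[i] > vals[i + 1]:  # adjacent violator: pool the two blocks
            w = wts[i] + wts[i + 1]
            vals[i] = (vals[i] * wts[i] + vals[i + 1] * wts[i + 1]) / w
            wts[i] = w
            blocks[i] += blocks[i + 1]
            del vals[i + 1], wts[i + 1], blocks[i + 1]
            i = max(i - 1, 0)  # re-check the previous pair
        else:
            i += 1
    fit = [0.0] * len(rates)
    for v, block in zip(vals, blocks):
        for j in block:
            fit[j] = v
    return fit

def next_dose(n_treated, n_toxic, target=0.30):
    """Dose whose isotonic toxicity estimate is closest to the target rate."""
    rates = [t / n if n else 0.0 for t, n in zip(n_toxic, n_treated)]
    fit = pava(rates, [max(n, 1e-9) for n in n_treated])
    return min(range(len(fit)), key=lambda i: abs(fit[i] - target))

# Accumulated data at four doses (treated, toxicities): (3,0) (3,1) (3,2) (0,0).
dose = next_dose([3, 3, 3, 0], [0, 1, 2, 0])  # -> dose index 1
```

Ties in the distance to the target would need the trial's own tie-breaking rule; the sketch simply takes the first minimiser.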
Abstract:
Genotyping in DNA pools reduces the cost and the time required to complete large genotyping projects. The aim of the present study was to evaluate pooling as part of a strategy for fine mapping in regions of significant linkage. Thirty-nine single nucleotide polymorphisms (SNPs) were analyzed in two genomic DNA pools of 384 individuals each and results compared with data after typing all individuals used in the pools. There were no significant differences using data from either 2 or 8 heterozygous individuals to correct frequency estimates for unequal allelic amplification. After correction, the mean difference between estimates from the genomic pool and individual allele frequencies was 0.033. A major limitation of the use of DNA pools is the time and effort required to carefully adjust the concentration of each individual DNA sample before mixing aliquots. Pools were also constructed by combining DNA after Multiple Displacement Amplification (MDA). The MDA pools gave similar results to pools constructed after careful DNA quantitation (mean difference from individual genotyping 0.040) and MDA provides a rapid method to generate pools suitable for some applications. Pools provide a rapid and cost-effective screen to eliminate SNPs that are not polymorphic in a test population and can detect minor allele frequencies as low as 1% in the pooled samples. With current levels of accuracy, pooling is best suited to an initial screen in the SNP validation process that can provide high-throughput comparisons between cases and controls to prioritize SNPs for subsequent individual genotyping.
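The heterozygote-based correction for unequal allelic amplification can be illustrated as follows: heterozygotes carry the two alleles in a true 1:1 ratio, so their mean observed signal ratio estimates the amplification bias k, which is then divided out of the pooled signals. All signal values here are hypothetical.

```python
# Illustration of correcting pooled allele-frequency estimates for unequal
# allelic amplification, using heterozygotes as a 1:1 reference
# (hypothetical signal intensities; a sketch of the general idea).

def amplification_ratio(het_signals):
    """Mean A/B signal ratio across heterozygotes (true allele ratio is 1:1)."""
    return sum(a / b for a, b in het_signals) / len(het_signals)

def corrected_frequency(pool_a, pool_b, k):
    """Allele-A frequency in the pool after dividing out the bias factor k."""
    return (pool_a / k) / (pool_a / k + pool_b)

# Two heterozygotes suggest allele A amplifies ~1.5x more strongly.
k = amplification_ratio([(1.52, 1.0), (1.48, 1.0)])     # k = 1.5
freq = corrected_frequency(pool_a=3.0, pool_b=7.0, k=k)  # raw 0.30 -> ~0.22
```

Averaging over more heterozygotes (the study compares 2 versus 8) simply tightens the estimate of k.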
Abstract:
We have shown that novel synthesis methods, combined with careful evaluation of DFT phonon calculations, provide new insight into boron compounds, including a capacity to predict Tc for AlB2-type superconductors.
Abstract:
Influential creative industries and creative place thinkers Richard Florida and Charles Landry agree that creativity is necessary for a prospering, liveable and, therefore, sustainable city. Following Florida’s work, the ‘creative class’ has become central to what have turned out to be city-centre-centric growth policies. However, until the Queensland University of Technology’s Australian Research Council sponsored research into “creative suburbia”, few researchers had demonstrated – let alone challenged – the notion that a substantial cohort of creative industries workers might prefer to live and work at home in the suburbs rather than in city centres. The “creative suburb” work builds on the creative suburbia research. In a practice-led inquiry embedded in the property development industry, the creative suburb draws on significant primary research with suburban, home-based creative industries workers, vernacular architecture, and town planning in the Toowoomba region, in the state of Queensland, Australia, as inspiration for a series of new building and urban designs available to innovators operating in new suburban greenfield situations and in suburban areas undergoing a refit in Queensland and possibly further afield. This paper focuses on one building design informed by this inquiry, with the intention of its construction as a ‘showcase study’ ‘homework house’ suitable for creative industries workers in the Toowoomba region.
Abstract:
A mathematics classroom comprises many mathematicians with varying understandings of mathematical knowledge, including the teacher, students and sometimes researchers. To align with this conceptualisation of knowledge and understanding, the multi-faceted teaching experiment is introduced as an approach to studying all classroom participants’ interactions with the shared knowledge of mathematics. Drawing on the experiences of a large curriculum project, it is claimed that, unlike a multi-tiered teaching experiment, the multi-faceted teaching experiment provides a research framework that allows the study of mathematicians’ building of knowledge in a classroom without privileging the experience of any one participant.
Abstract:
Purpose: To identify and understand the emotions behind a passenger’s airport experience and how these can inform digital channel engagements. Design/methodology/approach: This study investigates the emotional experiences of 200 passengers’ journeys through an Australian domestic airport. A survey was conducted using Emocards and a laddering interview approach; responses were then analysed into attributes, consequences and values. Findings: The results indicate that across key stages of the airport journey (parking, retail, gates and arrivals), passengers had different emotional experiences (positive, negative and neutral). The attributes, consequences and values behind these emotions were then used to propose digital channel content and purposes for various future digital channel engagements. Research limitations/implications: By gaining emotional insights, airports are able to generate digital channel engagements that align with passengers’ needs and values rather than internal operational motivations. Theoretical contributions include extending the Technology Acceptance Model to include emotional drivers as influences on the use of digital channels. Originality/value: This research provides a unique method for understanding passengers’ emotional journeys across airport infrastructure and suggests how to better design digital channel engagements to address passengers’ latent needs.
Abstract:
A promotional brochure celebrating the completion of the Seagram Building in spring 1957 features on its cover intense portraits of seven men bisected by a single line of bold text that asks, “Who are these Men?” The answer appears on the next page: “They Dreamed of a Tower of Light” (Figures 1, 2). Each photograph is reproduced with the respective man’s name and project credit: architects, Mies van der Rohe and Philip Johnson; associate architect, Eli Jacques Kahn; electrical contractor, Harry F. Fischbach; lighting consultant, Richard Kelly; and electrical engineer, Clifton E. Smith. To the right, a rendering of the new Seagram Tower anchors the composition, standing luminous against a star-speckled night sky; its glass walls and bronze mullions are transformed into a gossamer skin that reveals the tower’s structural skeleton. Lightolier, the contract lighting manufacturer, produced the brochure to promote its role in the lighting of the Seagram Building, but Lightolier’s promotional copy was not far from the truth.
Abstract:
Objective: To identify key stakeholder preferences and priorities when considering a national healthcare-associated infection (HAI) surveillance programme through the use of a discrete choice experiment (DCE). Setting: Australia does not have a national HAI surveillance programme. An online web-based DCE was developed and made available to participants in Australia. Participants: A sample of 184 purposively selected healthcare workers based on their senior leadership role in infection prevention in Australia. Primary and secondary outcomes: A DCE requiring respondents to select 1 HAI surveillance programme over another based on 5 different characteristics (or attributes) in repeated hypothetical scenarios. Data were analysed using a mixed logit model to evaluate preferences and identify the relative importance of each attribute. Results: A total of 122 participants completed the survey (response rate 66%) over a 5-week period. Excluding 22 who mismatched a duplicate choice scenario, analysis was conducted on 100 responses. The key findings included: 72% of stakeholders exhibited a preference for a surveillance programme with continuous mandatory core components (mean coefficient 0.640 (p<0.01)), 65% for a standard surveillance protocol where patient-level data are collected on infected and non-infected patients (mean coefficient 0.641 (p<0.01)), and 92% for hospital-level data that are publicly reported on a website and not associated with financial penalties (mean coefficient 1.663 (p<0.01)). Conclusions: The use of the DCE has provided a unique insight into key stakeholder priorities when considering a national HAI surveillance programme. The application of a DCE offers a meaningful method to explore and quantify preferences in this setting.
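To show how such logit coefficients translate into predicted choices, the sketch below plugs the three mean coefficients reported above into the standard logit choice probability. The binary attribute coding of the two hypothetical programmes is an assumption for illustration, not the study's actual experimental design.

```python
# Sketch: turning the reported mean logit coefficients into a choice
# probability. The attribute coding of the two programmes is assumed
# (hypothetical), only the coefficients come from the abstract.
import math

def choice_probability(v1, v2):
    """Logit probability of choosing programme 1 over programme 2."""
    return math.exp(v1) / (math.exp(v1) + math.exp(v2))

# Mean coefficients reported in the abstract.
beta = {"mandatory_core": 0.640,
        "standard_protocol": 0.641,
        "public_reporting": 1.663}

# Programme 1 carries all three preferred features; programme 2 carries none.
v1 = sum(beta.values())  # deterministic utility of programme 1
v2 = 0.0                 # baseline utility of programme 2
p = choice_probability(v1, v2)  # ~0.95: a strong predicted preference
```

A mixed logit additionally lets the coefficients vary across respondents; the fixed-coefficient calculation here only illustrates the mean preference.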