979 results for sequential design
Abstract:
Simple meso-scale capacitor structures have been made by incorporating thin (300 nm) single crystal lamellae of KTiOPO4 (KTP) between two coplanar Pt electrodes. The influence that either patterned protrusions in the electrodes or focused ion beam milled holes in the KTP have on the nucleation of reverse domains during switching was mapped using piezoresponse force microscopy imaging. The objective was to assess whether or not variations in the magnitude of field enhancement at localised “hot-spots,” caused by such patterning, could be used to both control the exact locations and bias voltages at which nucleation events occurred. It was found that both the patterning of electrodes and the milling of various hole geometries into the KTP could allow controlled sequential injection of domain wall pairs at different bias voltages; this capability could have implications for the design and operation of domain wall electronic devices, such as memristors, in the future.
Abstract:
A potential fungal strain producing extracellular β-glucosidase enzyme was isolated from sea water and identified as Aspergillus sydowii BTMFS 55 by a molecular approach based on 28S rDNA sequence homology, which showed 93% identity with already reported sequences of Aspergillus sydowii in GenBank. A sequential optimization strategy was used to enhance the production of β-glucosidase under solid state fermentation (SSF) with wheat bran (WB) as the growth medium. The two-level Plackett-Burman (PB) design was implemented to screen medium components that influence β-glucosidase production, and among the 11 variables, moisture content, inoculum, and peptone were identified as the most significant factors for β-glucosidase production. The enzyme was purified by (NH4)2SO4 precipitation followed by ion exchange chromatography on DEAE sepharose. The enzyme was a monomeric protein with a molecular weight of ~95 kDa as determined by SDS-PAGE. It was optimally active at pH 5.0 and 50°C. It showed high affinity towards pNPG, with a Km and Vmax of 0.67 mM and 83.3 U/mL, respectively. The enzyme was tolerant to glucose inhibition, with a Ki of 17 mM. Low concentrations of alcohols (10%), especially ethanol, could activate the enzyme. A considerable level of ethanol could be produced from wheat bran and rice straw after 48 and 24 h, respectively, with the help of Saccharomyces cerevisiae in the presence of cellulase and the purified β-glucosidase of Aspergillus sydowii BTMFS 55.
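The reported kinetic constants drop straight into the Michaelis-Menten rate law. Below is a minimal Python sketch using the abstract's values (Km = 0.67 mM, Vmax = 83.3 U/mL, Ki = 17 mM); the glucose tolerance is modelled as competitive inhibition purely for illustration, since the abstract does not state the inhibition mode.

```python
# Minimal sketch: Michaelis-Menten rate for the purified beta-glucosidase,
# using the reported constants (Km = 0.67 mM, Vmax = 83.3 U/mL, Ki = 17 mM).
# Competitive inhibition by glucose is an assumption made only for illustration.

def velocity(s_mM, vmax=83.3, km=0.67, glucose_mM=0.0, ki=17.0):
    """Reaction velocity (U/mL) at substrate concentration s_mM (mM),
    optionally with glucose treated as a competitive inhibitor."""
    km_app = km * (1.0 + glucose_mM / ki)   # apparent Km under competitive inhibition
    return vmax * s_mM / (km_app + s_mM)

if __name__ == "__main__":
    for glucose in (0.0, 17.0):             # no glucose vs. glucose at the reported Ki
        v = velocity(s_mM=2.0, glucose_mM=glucose)
        print(f"[pNPG] = 2.0 mM, glucose = {glucose:4.1f} mM -> v = {v:5.1f} U/mL")
```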
Abstract:
In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC-style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
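The flavour of the adaptive strategy for monotone functions can be sketched in a few lines: after each observation, the learner queries inside the interval whose width times observed rise (a worst-case error bound for monotone functions) is largest. The target function and the scoring rule below are illustrative stand-ins for the paper's exact sequential optimal recovery criteria.

```python
# A minimal sketch of adaptive sampling for monotonically increasing functions:
# repeatedly bisect the interval whose worst-case uncertainty (interval width
# times the observed rise of f across it) is largest. Both the scoring rule and
# the test function are illustrative only.
import math

def active_sample(f, a=0.0, b=1.0, n_samples=16):
    xs = [a, b]
    ys = [f(a), f(b)]
    while len(xs) < n_samples:
        # score each interval by width * rise (both bound the error for monotone f)
        scores = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i]) for i in range(len(xs) - 1)]
        i = max(range(len(scores)), key=scores.__getitem__)
        x_new = 0.5 * (xs[i] + xs[i + 1])   # query the midpoint of the worst interval
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return xs, ys

if __name__ == "__main__":
    f = lambda x: 1.0 / (1.0 + math.exp(-20 * (x - 0.7)))   # steep monotone test function
    xs, _ = active_sample(f)
    print("queried points:", [round(x, 3) for x in xs])      # samples cluster near the steep region
```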
Abstract:
The conventional method for assessing acute oral toxicity (OECD Test Guideline 401) was designed to identify the median lethal dose (LD50), using the death of animals as an endpoint. Introduced as an alternative method (OECD Test Guideline 420), the Fixed Dose Procedure (FDP) relies on the observation of clear signs of toxicity, uses fewer animals and causes less suffering. More recently, the Acute Toxic Class method and the Up-and-Down Procedure have also been adopted as OECD test guidelines. Both of these methods also use fewer animals than the conventional method, although they still use death as an endpoint. Each of the three new methods incorporates a sequential dosing procedure, which results in increased efficiency. In 1999, with a view to replacing OECD Test Guideline 401, the OECD requested that the three new test guidelines be updated. This was to bring them in line with the regulatory needs of all OECD Member Countries, provide further reductions in the number of animals used, and introduce refinements to reduce the pain and distress experienced by the animals. This paper describes a statistical modelling approach for the evaluation of acute oral toxicity tests, by using the revised FDP for illustration. Opportunities for further design improvements are discussed.
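As a hedged illustration of the kind of statistical evaluation described, the sketch below simulates a simplified sequential fixed-dose procedure under an assumed probit dose-response model. The dose levels, the one-animal-per-step escalation rule and the parameters ed50 and sigma are illustrative assumptions, not the revised FDP protocol itself.

```python
# Simplified simulation of a sequential fixed-dose procedure under an assumed
# probit dose-response model for "evident toxicity". All numbers below are
# illustrative; the revised FDP has its own sighting-study and group-size rules.
import math, random
from statistics import NormalDist

FIXED_DOSES = [5, 50, 300, 2000]                      # mg/kg: illustrative fixed dose levels

def p_evident_toxicity(dose, ed50=500.0, sigma=0.5):
    """Assumed probit model: probability of evident toxicity at a given dose."""
    return NormalDist().cdf((math.log10(dose) - math.log10(ed50)) / sigma)

def run_fdp(ed50, sigma, rng):
    """Dose one animal per step and escalate until evident toxicity is observed."""
    for dose in FIXED_DOSES:
        if rng.random() < p_evident_toxicity(dose, ed50, sigma):
            return dose                               # dose at which toxicity was first seen
    return None                                       # no evident toxicity even at the top dose

if __name__ == "__main__":
    rng = random.Random(1)
    runs = [run_fdp(ed50=500.0, sigma=0.5, rng=rng) for _ in range(10_000)]
    for dose in FIXED_DOSES:
        print(f"evident toxicity first seen at {dose:>4} mg/kg in {runs.count(dose) / 100:.1f}% of runs")
    print(f"no evident toxicity in {runs.count(None) / 100:.1f}% of runs")
```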
Abstract:
In clinical trials, situations often arise where more than one response from each patient is of interest; and it is required that any decision to stop the study be based upon some or all of these measures simultaneously. Theory for the design of sequential experiments with simultaneous bivariate responses is described by Jennison and Turnbull (Jennison, C., Turnbull, B. W. (1993). Group sequential tests for bivariate response: interim analyses of clinical trials with both efficacy and safety endpoints. Biometrics 49:741-752) and Cook and Farewell (Cook, R. J., Farewell, V. T. (1994). Guidelines for monitoring efficacy and toxicity responses in clinical trials. Biometrics 50:1146-1152) in the context of one efficacy and one safety response. These expositions are in terms of normally distributed data with known covariance. The methods proposed require specification of the correlation, ρ, between the test statistics monitored as part of the sequential test. It can be difficult to quantify ρ, and previous authors have suggested simply taking the lowest plausible value, as this will guarantee power. This paper begins with an illustration of the effect that inappropriate specification of ρ can have on the preservation of trial error rates. It is shown that both the type I error and the power can be adversely affected. As a possible solution to this problem, formulas are provided for the calculation of correlation from data collected as part of the trial. An adaptive approach is proposed and evaluated that makes use of these formulas and an example is provided to illustrate the method. Attention is restricted to the bivariate case for ease of computation, although the formulas derived are applicable in the general multivariate case.
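A minimal sketch of the adaptive idea: at an interim analysis the correlation is estimated from the accumulated paired responses instead of being fixed in advance at its lowest plausible value. The pooled Pearson correlation used here is a simplification; the paper derives the exact formulas linking the data to the correlation between the monitored test statistics.

```python
# Estimate the efficacy/safety correlation from interim data rather than fixing
# rho in advance. The Pearson estimate below is an illustration; the simulated
# data and the true correlation of 0.6 are assumptions for the example only.
import random, statistics

def pearson(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

if __name__ == "__main__":
    rng = random.Random(7)
    efficacy, safety = [], []
    for _ in range(80):                       # accumulated interim observations
        z = rng.gauss(0, 1)
        efficacy.append(z)
        safety.append(0.6 * z + 0.8 * rng.gauss(0, 1))
    rho_hat = pearson(efficacy, safety)
    print(f"interim estimate of rho = {rho_hat:.2f}  (feeds the sequential boundary calculation)")
```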
Abstract:
Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment, must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
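For concreteness, the following sketch simulates a two-look group-sequential design with the standard Pocock critical value (2.178 for two equally spaced looks at two-sided alpha = 0.05) and checks its type I error under the null hypothesis; the sample sizes and the known-variance z-statistic are illustrative choices.

```python
# Monte Carlo check of the type I error of a two-look Pocock group-sequential
# design under H0 (true effect zero, known unit variance). The per-stage sample
# size of 50 is illustrative; 2.178 is the standard Pocock constant for K = 2.
import math, random

def one_trial(rng, n_per_stage=50, c_pocock=2.178):
    """Return True if the sequential design rejects H0 when there is no true effect."""
    data = []
    for _ in range(2):                                   # two equally spaced looks
        data.extend(rng.gauss(0.0, 1.0) for _ in range(n_per_stage))
        z = sum(data) / math.sqrt(len(data))             # z-statistic for mean 0, known sd 1
        if abs(z) > c_pocock:
            return True                                  # cross the boundary: stop and reject
    return False

if __name__ == "__main__":
    rng = random.Random(42)
    n_sim = 20_000
    rejections = sum(one_trial(rng) for _ in range(n_sim))
    print(f"empirical type I error of the two-look Pocock design: {rejections / n_sim:.3f}")  # ~0.05
```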
Abstract:
A sequential study design generally makes more efficient use of available information than a fixed sample counterpart of equal power. This feature is gradually being exploited by researchers in genetic and epidemiological investigations that utilize banked biological resources and in studies where time, cost and ethics are prominent considerations. Recent work in this area has focussed on the sequential analysis of matched case-control studies with a dichotomous trait. In this paper, we extend the sequential approach to a comparison of the associations within two independent groups of paired continuous observations. Such a comparison is particularly relevant in familial studies of phenotypic correlation using twins. We develop a sequential twin method based on the intraclass correlation and show that use of sequential methodology can lead to a substantial reduction in the number of observations without compromising the study error rates. Additionally, our approach permits straightforward allowance for other explanatory factors in the analysis. We illustrate our method in a sequential heritability study of dysplasia that allows for the effect of body mass index and compares monozygotes with pairs of singleton sisters. Copyright (c) 2006 John Wiley & Sons, Ltd.
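The fixed-sample building block behind the sequential twin method can be sketched as a one-way ANOVA intraclass correlation for each group of pairs, followed by a Fisher-transformed z comparison of the two independent groups (e.g. monozygotic twins versus singleton sister pairs). The variance approximation 1/(n - 3/2) and the toy data are illustrative; the paper embeds such a comparison in a sequential monitoring scheme with controlled error rates.

```python
# One-way ANOVA intraclass correlation for pairs, and an approximate z comparison
# of the ICCs from two independent groups via Fisher's transformation. The data
# and the large-sample variance approximation are for illustration only.
import math

def icc_pairs(pairs):
    """One-way ANOVA intraclass correlation for a list of (x1, x2) pairs."""
    n = len(pairs)
    grand = sum(x + y for x, y in pairs) / (2 * n)
    msb = 2 * sum(((x + y) / 2 - grand) ** 2 for x, y in pairs) / (n - 1)   # between-pair mean square
    msw = sum((x - y) ** 2 / 2 for x, y in pairs) / n                        # within-pair mean square
    return (msb - msw) / (msb + msw)

def compare_iccs(pairs_a, pairs_b):
    """Approximate z-statistic comparing two independent intraclass correlations."""
    za = math.atanh(icc_pairs(pairs_a))
    zb = math.atanh(icc_pairs(pairs_b))
    se = math.sqrt(1 / (len(pairs_a) - 1.5) + 1 / (len(pairs_b) - 1.5))      # var ~ 1/(n - 3/2) per group
    return (za - zb) / se

if __name__ == "__main__":
    twins   = [(1.0, 1.1), (2.0, 1.8), (0.5, 0.6), (1.6, 1.7), (2.4, 2.2)]   # highly correlated pairs
    sisters = [(1.0, 1.9), (2.0, 0.7), (0.5, 1.4), (1.6, 0.9), (2.4, 1.1)]   # weakly correlated pairs
    print(f"z = {compare_iccs(twins, sisters):.2f}")
```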
Abstract:
We propose a bridge between two important parallel programming paradigms: data parallelism and communicating sequential processes (CSP). Data-parallel pipelined architectures obtained with the Alpha language can be embedded in a control-intensive application expressed in the CSP-based Handel formalism. The interface is formally defined from the semantics of the languages Alpha and Handel. This work will ease the design of compute-intensive applications on FPGAs.
Abstract:
A supramolecular polymer based upon two complementary polymer components is formed by sequential deposition from solution in THF, using a piezoelectric drop-on-demand inkjet printer. Highly efficient cycloaddition or ‘click’ chemistry afforded a well-defined poly(ethylene glycol) featuring chain-folding diimide end groups, which possesses greatly enhanced solubility in THF relative to earlier materials featuring random diimide sequences. Blending the new polyimide with a complementary poly(ethylene glycol) system bearing pyrene end groups (which bind to the chain-folding diimide units) overcomes the limited solubility encountered previously with chain-folding polyimides in inkjet printing applications. The solution state properties of the resulting polymer blend were assessed via viscometry to confirm the presence of a supramolecular polymer before depositing the two electronically complementary polymers by inkjet printing techniques. The novel materials so produced offer an insight into ways of controlling the properties of printed materials through tuning the structure of the polymer at the (supra)molecular level.
Abstract:
Background It can be argued that adaptive designs are underused in clinical research. We have explored concerns related to inadequate reporting of such trials, which may influence their uptake. Through a careful examination of the literature, we evaluated the standards of reporting of group sequential (GS) randomised controlled trials, one form of a confirmatory adaptive design. Methods We undertook a systematic review, by searching Ovid MEDLINE from 1st January 2001 to 23rd September 2014, supplemented with trials from an audit study. We included parallel group, confirmatory, GS trials that were prospectively designed using a frequentist approach. Eligible trials were examined for compliance in their reporting against the CONSORT 2010 checklist. In addition, as part of our evaluation, we developed a supplementary checklist to explicitly capture group sequential specific reporting aspects, and investigated how these are currently being reported. Results Of the 284 screened trials, 68 (24%) were eligible. Most trials were published in “high impact” peer-reviewed journals. Examination of trials established that 46 (68%) were stopped early, predominantly either for futility or efficacy. Suboptimal reporting compliance was found in general items relating to: access to full trial protocols; methods to generate randomisation list(s); and details of randomisation concealment and its implementation. Benchmarking against the supplementary checklist, GS aspects were largely inadequately reported. Only 3 (7%) trials that stopped early reported the use of statistical bias correction. Moreover, 52 (76%) trials failed to disclose the methods used to minimise the risk of operational bias due to the knowledge or leakage of interim results. The occurrence of changes to trial methods and outcomes could not be determined in most trials, due to inaccessible protocols and amendments. Discussion and Conclusions There are issues with the reporting of GS trials, particularly those specific to the conduct of interim analyses. Suboptimal reporting of bias correction methods could imply that most GS trials stopping early are giving biased estimates of treatment effects. As a result, research consumers may question the credibility of findings to change practice when trials are stopped early. These issues could be alleviated through a CONSORT extension. Assurance of scientific rigour through transparent, adequate reporting is paramount to the credibility of findings from adaptive trials. Our systematic literature search was restricted to one database due to resource constraints.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In this work we report a theoretical investigation of the solvation of the isomers of tris(8-hydroxyquinolinolato)aluminium(III), Alq3, and of the electroluminescent properties of Alq3 solvated in organic liquids such as methanol, ethanol, dimethylformamide (DMF) and acetonitrile, in order to understand how the system depends on changes of environment and thereby improve the performance of transport films in electroluminescent devices of the OLED (Organic Light-Emitting Diode) type; finally, we investigate the mechanism of electronic transport in Alq3 by applying a low electric current to the molecule and obtaining the characteristic current-voltage curves of the device. The simulation applies the sequential Monte Carlo / quantum mechanics (S-MC/QM) method, which starts with a stochastic treatment to select the most probable, lowest-energy structures and then applies a quantum treatment to compute the electronic spectra of the separated solvation shells with the ZINDO/S method. For the electrical transport properties we used the non-equilibrium Green's function method coupled to density functional theory (DFT), inferring that the outermost branches corresponding to the rings of Alq3 would act as terminals for electron transfer. Our results showed that the averaged absorption spectra for the solvation of Alq3 in solution shift only minimally with the change of environment, in excellent agreement with experimental results in the literature, and the I-V curves confirmed the diode behaviour of the device, corroborating the most suitable directions for the Alq3 terminals in order to obtain satisfactory electronic transport.
Abstract:
Little is known about the situational contexts in which individuals consume processed sources of dietary sugars. This study aimed to describe the situational contexts associated with the consumption of sweetened food and drink products in a Catholic Middle Eastern Canadian community. A two-stage exploratory sequential mixed-method design was employed with a rationale of triangulation. In stage 1 (n = 62), items and themes describing the situational contexts of sweetened food and drink product consumption were identified from semi-structured interviews and were used to develop the content for the Situational Context Instrument for Sweetened Product Consumption (SCISPC). Face validity, readability and cultural relevance of the instrument were assessed. In stage 2 (n = 192), a cross-sectional study was conducted and exploratory factor analysis was used to examine the structure of themes that emerged from the qualitative analysis as a means of furthering construct validation. The SCISPC's reliability and its predictive validity for the daily consumption of sweetened products were also assessed. In stage 1, six themes and 40 items describing the situational contexts of sweetened product consumption emerged from the qualitative analysis and were used to construct the first draft of the SCISPC. In stage 2, factor analysis enabled the clarification and/or expansion of the instrument's initial thematic structure. The revised SCISPC has seven factors and 31 items describing the situational contexts of sweetened product consumption. Initial validation of the instrument indicated it has excellent internal consistency and adequate test-retest reliability. Two factors of the SCISPC had predictive validity for the daily consumption of total sugar from sweetened products (Snacking and Energy demands), while the other factors (Socialization, Indulgence, Constraints, Visual Stimuli and Emotional needs) were associated rather with occasional consumption of these products.
Abstract:
The etiology of complex diseases is heterogeneous. The presence of risk alleles in one or more genetic loci affects the function of a variety of intermediate biological pathways, resulting in the overt expression of disease. Hence, there is an increasing focus on identifying the genetic basis of disease by systematically studying phenotypic traits pertaining to the underlying biological functions. In this paper we focus on identifying genetic loci linked to quantitative phenotypic traits in experimental crosses. Such genetic mapping methods often use a one-stage design by genotyping all the markers of interest on the available subjects. A genome scan based on single-locus or multi-locus models is used to identify the putative loci. Since the number of quantitative trait loci (QTLs) is very likely to be small relative to the number of markers genotyped, a one-stage selective genotyping approach is commonly used to reduce the genotyping burden, whereby markers are genotyped solely on individuals with extreme trait values. This approach is powerful in the presence of a single quantitative trait locus (QTL) but may result in substantial loss of information in the presence of multiple QTLs. Here we investigate the efficiency of sequential two-stage designs to identify QTLs in experimental populations. Our investigations for backcross and F2 crosses suggest that genotyping all the markers on 60% of the subjects in Stage 1, genotyping the chromosomes significant at the 20% level using additional subjects in Stage 2, and testing using all the subjects provides an efficient approach to identify the QTLs and utilizes only 70% of the genotyping burden relative to a one-stage design, regardless of the heritability and genotyping density. Complex traits are a consequence of multiple QTLs conferring main effects as well as epistatic interactions. We propose a two-stage analytic approach where a single-locus genome scan is conducted in Stage 1 to identify promising chromosomes, and interactions are examined using the loci on these chromosomes in Stage 2. We examine settings under which the two-stage analytic approach provides sufficient power to detect the putative QTLs.
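A schematic simulation of the proposed two-stage genotyping design is sketched below: all markers are genotyped on 60% of subjects in Stage 1, only chromosomes significant at the 20% level are carried forward and genotyped on the remaining subjects in Stage 2, and the final test uses all subjects. One marker per chromosome, a normal trait model and the 5% final threshold are simplifications made only for illustration.

```python
# Schematic simulation of a sequential two-stage genotyping design for a backcross:
# Stage 1 genotypes all markers on 60% of subjects, Stage 2 genotypes only the
# chromosomes that screened in at the 20% level on the remaining 40%, and the
# final test uses everyone. The trait model and thresholds are illustrative.
import math, random, statistics

def simulate(n=400, n_chrom=10, qtl_chrom=0, effect=0.5, rng=random.Random(0)):
    geno = [[rng.randint(0, 1) for _ in range(n_chrom)] for _ in range(n)]   # backcross genotypes
    trait = [effect * g[qtl_chrom] + rng.gauss(0, 1) for g in geno]

    def pvalue(subjects, chrom):
        a = [trait[i] for i in subjects if geno[i][chrom] == 0]
        b = [trait[i] for i in subjects if geno[i][chrom] == 1]
        t = (statistics.fmean(b) - statistics.fmean(a)) / math.sqrt(
            statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
        return 2 * (1 - statistics.NormalDist().cdf(abs(t)))                 # normal approximation

    stage1 = list(range(int(0.6 * n)))                                       # 60% of subjects
    carried = [c for c in range(n_chrom) if pvalue(stage1, c) < 0.20]        # 20% screening level
    everyone = list(range(n))
    hits = [c for c in carried if pvalue(everyone, c) < 0.05]                # final test on all subjects

    burden = (0.6 * n_chrom + 0.4 * len(carried)) / n_chrom                  # fraction of one-stage genotyping
    return carried, hits, burden

if __name__ == "__main__":
    carried, hits, burden = simulate()
    print("chromosomes carried to Stage 2:", carried)
    print("chromosomes significant on all subjects:", hits)
    print(f"genotyping burden vs. one-stage design: {burden:.0%}")
```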