862 results for Sequential optimization
Abstract:
Pharmacovigilance, the monitoring of adverse events (AEs), is an integral part of the clinical evaluation of a new drug. Until recently, attempts to relate the incidence of AEs to putative causes have been restricted to the evaluation of simple demographic and environmental factors. The advent of large-scale genotyping, however, provides an opportunity to look for associations between AEs and genetic markers, such as single nucleotide polymorphisms (SNPs). It is envisaged that a very large number of SNPs, possibly over 500 000, will be used in pharmacovigilance in an attempt to identify any genetic difference between patients who have experienced an AE and those who have not. We propose a sequential genome-wide association test for analysing AEs as they arise, allowing evidence-based decision-making at the earliest opportunity. This gives us the capability of quickly establishing whether there is a group of patients at high risk of an AE based upon their DNA. Our method provides a valid test which takes account of linkage disequilibrium and allows for the sequential nature of the procedure. The method is more powerful than using a correction, such as the Šidák correction, that assumes that the tests are independent. Copyright © 2006 John Wiley & Sons, Ltd.
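The Šidák correction mentioned in the abstract assumes independent tests; a minimal sketch of how it sets the per-test threshold (the 500 000-SNP figure follows the abstract, but the function name and code are purely illustrative):

```python
def sidak_threshold(alpha, m):
    """Per-test significance level that keeps the familywise
    error rate at alpha across m independent tests."""
    return 1.0 - (1.0 - alpha) ** (1.0 / m)

# With 500 000 SNPs and a familywise error rate of 0.05, each
# individual test must reach roughly the 1e-7 level:
print(sidak_threshold(0.05, 500_000))  # ≈ 1.03e-07
```

When tests are positively correlated, as SNPs in linkage disequilibrium are, this threshold is conservative, which is the motivation for the sequential test proposed in the abstract.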
Abstract:
While planning the GAIN International Study of gavestinel in acute stroke, a sequential triangular test was proposed but not implemented. Before the trial commenced it was agreed to reconstruct the sequential design retrospectively in order to assess the differences in the resulting analyses, trial durations and sample sizes, and hence the potential of sequential procedures for future stroke trials. This paper presents four sequential reconstructions of the GAIN study made under various scenarios. For the data as observed, the sequential design would have reduced the trial sample size by 234 patients and shortened its duration by 3 or 4 months. Had the study not achieved a recruitment rate that far exceeded expectation, the advantages of the sequential design would have been much greater. Sequential designs appear to be an attractive option for trials in stroke. Copyright 2004 S. Karger AG, Basel
Abstract:
A sequential study design generally makes more efficient use of available information than a fixed sample counterpart of equal power. This feature is gradually being exploited by researchers in genetic and epidemiological investigations that utilize banked biological resources and in studies where time, cost and ethics are prominent considerations. Recent work in this area has focussed on the sequential analysis of matched case-control studies with a dichotomous trait. In this paper, we extend the sequential approach to a comparison of the associations within two independent groups of paired continuous observations. Such a comparison is particularly relevant in familial studies of phenotypic correlation using twins. We develop a sequential twin method based on the intraclass correlation and show that use of sequential methodology can lead to a substantial reduction in the number of observations without compromising the study error rates. Additionally, our approach permits straightforward allowance for other explanatory factors in the analysis. We illustrate our method in a sequential heritability study of dysplasia that allows for the effect of body mass index and compares monozygotes with pairs of singleton sisters. Copyright © 2006 John Wiley & Sons, Ltd.
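The intraclass correlation on which the twin method rests can be sketched with the standard one-way ANOVA estimator for pairs; this is not the authors' sequential procedure, and the data below are illustrative:

```python
def intraclass_correlation(pairs):
    """One-way ANOVA estimator of the intraclass correlation
    for paired observations (e.g. twins), with 2 members per pair."""
    n = len(pairs)
    grand_mean = sum(x + y for x, y in pairs) / (2 * n)
    # Between-pair mean square (each pair contributes its mean twice).
    msb = sum(2 * ((x + y) / 2 - grand_mean) ** 2 for x, y in pairs) / (n - 1)
    # Within-pair mean square: squared deviations from the pair mean.
    msw = sum((x - y) ** 2 / 2 for x, y in pairs) / n
    return (msb - msw) / (msb + msw)

# Perfectly concordant pairs give an intraclass correlation of 1:
print(intraclass_correlation([(1.0, 1.0), (2.0, 2.0), (3.0, 3.0)]))  # 1.0
```

In the sequential setting this statistic would be recomputed as each new pair accrues, with stopping boundaries controlling the error rates.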
Abstract:
The International Citicoline Trial in acUte Stroke (ICTUS) is a sequential phase III study of the use of the drug citicoline in the treatment of acute ischaemic stroke, which was initiated in 2006 in 56 treatment centres. The primary objective of the trial is to demonstrate improved recovery of patients randomized to citicoline relative to those randomized to placebo after 12 weeks of follow-up. The primary analysis will take the form of a global test combining the dichotomized results of assessments on three well-established scales: the Barthel Index, the modified Rankin scale and the National Institutes of Health Stroke Scale. This approach was previously used in the analysis of the influential National Institute of Neurological Disorders and Stroke trial of recombinant tissue plasminogen activator in stroke. The purpose of this paper is to describe how this trial was designed, and in particular how it simultaneously addressed the objectives of taking into account three assessment scales, performing a series of interim analyses, and conducting treatment allocation and adjusting the analyses to account for prognostic factors, including more than 50 treatment centres. Copyright © 2008 John Wiley & Sons, Ltd.
Abstract:
This paper explores the theoretical developments and subsequent uptake of sequential methodology in clinical studies in the 25 years since Statistics in Medicine was launched. The review examines the contributions which have been made to all four phases into which clinical trials are traditionally classified and highlights major statistical advancements, together with assessing application of the techniques. The vast majority of work has been in the setting of phase III clinical trials and so emphasis will be placed here. Finally, comments are given indicating how the subject area may develop in the future.
Abstract:
The influence of abiotic parameters on the isolation of protoplasts from in vitro seedling cotyledons of white lupin was investigated. The protoplasts were found to be competent in withstanding a wide range of osmotic potentials of the enzyme medium; however, −2.25 MPa (0.5 M mannitol) resulted in the highest yield of protoplasts. The pH of the isolation medium also had a profound effect on protoplast production. Vacuum infiltration of the enzyme solution into the cotyledon tissue resulted in a progressive drop in the yield of protoplasts. The speed and duration of orbital agitation of the cotyledon tissue played a significant role in the release of protoplasts, and a two-step (stationary-gyratory) regime was found to be better than the gyratory-only system.
Abstract:
Sequential crystallization of poly(L-lactide) (PLLA) followed by poly(epsilon-caprolactone) (PCL) in double crystalline PLLA-b-PCL diblock copolymers is studied by differential scanning calorimetry (DSC), polarized optical microscopy (POM), wide-angle X-ray scattering (WAXS) and small-angle X-ray scattering (SAXS). Three samples with different compositions are studied. The sample with the shortest PLLA block (32 wt.-% PLLA) crystallizes from a homogeneous melt, the other two (with 44 and 60% PLLA) from microphase-separated structures. The microphase structure of the melt is changed as PLLA crystallizes at 122 degrees C (a temperature at which the PCL block is molten), forming spherulites regardless of composition, even with 32% PLLA. For the melt-segregated samples, SAXS indicates that a lamellar structure forms with a periodicity different from that observed in the melt. Where PCL is the majority block, PCL crystallization at 42 degrees C following PLLA crystallization leads to rearrangement of the lamellar structure, as observed by SAXS, possibly due to local melting at the interphases between domains. POM results showed that PCL crystallizes within previously formed PLLA spherulites. WAXS data indicate that the PLLA unit cell is modified by crystallization of PCL, at least for the two majority-PCL samples. The PCL-minority sample did not crystallize at 42 degrees C (well below the PCL homopolymer crystallization temperature), pointing to the influence of pre-crystallization of PLLA on PCL crystallization, although it did crystallize at lower temperature. Crystallization kinetics were examined by DSC and WAXS, with good agreement in general. The crystallization rate of PLLA decreased with increasing PCL content in the copolymers, and the crystallization rate of PCL decreased with increasing PLLA content. The Avrami exponents were in general depressed for both components in the block copolymers compared to the parent homopolymers.
Figure: Polarized optical micrographs during isothermal crystallization of (a) homo-PLLA, (b) homo-PCL, and (c, d) the block copolymer after 30 min at 122 degrees C and after 15 min at 42 degrees C.
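The Avrami analysis referred to in the abstract fits relative crystallinity to X(t) = 1 - exp(-k t^n); a minimal sketch with an illustrative rate constant and exponent (not values fitted to these copolymers):

```python
import math

def avrami_fraction(t, k, n):
    """Relative crystallinity at time t under the Avrami model
    X(t) = 1 - exp(-k * t**n); k and n here are purely illustrative."""
    return 1.0 - math.exp(-k * t ** n)

# The crystallization half-time follows from X(t_half) = 0.5:
k, n = 0.01, 3.0
t_half = (math.log(2) / k) ** (1.0 / n)
print(avrami_fraction(t_half, k, n))  # 0.5 by construction
```

A depressed exponent n, as reported for both blocks in the copolymers, flattens the sigmoid and is usually read as a change in nucleation or growth geometry.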
Abstract:
The carbohydrate-derived substrate 3-C-allyl-1,2:5,6-di-O-isopropylidene-alpha-D-allofuranose was judiciously manipulated to prepare suitable synthons, which could be converted to a variety of isoxazolidino-spirocycles and -tricycles through the application of ring-closing metathesis (RCM) and intramolecular nitrone cycloaddition (INC) reactions. Cleavage of the isoxazolidine rings of some of these derivatives by transfer hydrogenolysis, followed by coupling of the generated amino functionalities with 5-amino-4,6-dichloropyrimidine, furnished the corresponding chloropyrimidine nucleosides, which were elaborated to spiroannulated carbanucleosides and conformationally locked bicyclo[2.2.1]heptane/oxa-bicyclo[3.2.1]octane nucleosides. However, use of a higher temperature for the cyclization of one of the chloropyrimidines led to the dimethylaminopurine analogue as the sole product, formed via nucleophilic displacement of the chloro group by dimethylamine generated from DMF.
Abstract:
Treatment of the [Ir(bpa)(cod)](+) complex [1](+) with a strong base (e.g., tBuO(-)) led to unexpected double deprotonation to form the anionic [Ir(bpa-2H)(cod)](-) species [3](-), via the mono-deprotonated neutral amido complex [Ir(bpa-H)(cod)] as an isolable intermediate. A certain degree of aromaticity of the resulting metal-chelate ring may explain the favourable double deprotonation. The rhodium analogue [4](-) was prepared in situ. The new species [M(bpa-2H)(cod)](-) (M = Rh, Ir) are best described as two-electron reduced analogues of the cationic imine complexes [M-I(cod)(Py-CH2-N=CH-Py)](+). One-electron oxidation of [3](-) and [4](-) produced the ligand radical complexes [3]* and [4]*. Oxygenation of [3](-) with O-2 gave the neutral carboxamido complex [Ir(cod)(py-CH2-N-CO-py)] via the ligand radical complex [3]* as a detectable intermediate.
Abstract:
Metallized plastics have recently received significant interest for their useful applications in electronic devices such as integrated circuits, packaging, printed circuits and sensors. There are several techniques for metal deposition on polymer surfaces, such as evaporation, sputtering, electroless plating and electrolysis. In this work, metallized films were developed by electroless copper plating of polyethylene films grafted with vinyl ether of monoethanolamine. Polyethylene films were subjected to gamma-radiation-induced surface graft copolymerization with vinyl ether of monoethanolamine, and electroless copper plating was then carried out effectively on the modified films. The catalytic processes for electroless copper plating in the presence and absence of SnCl2 sensitization were studied, and the optimum activation conditions giving the highest plating rate were determined. The effect of grafting degree on the plating rate was studied, and the electroless plating conditions (bath additives, pH and temperature) were optimized. The plating rate was determined gravimetrically and spectrophotometrically at different grafting degrees. The results reveal that the plating rate is a function of the degree of grafting and increases with increasing grafted vinyl ether of monoethanolamine onto polyethylene. A bath pH of 13 and a plating temperature of 40°C were found to be the optimal conditions for the plating process, and increasing the grafting degree results in a faster plating rate at the same pH and temperature. The surface morphology of the metallized films was investigated using scanning electron microscopy (SEM), and the adhesion strength between the metallized layer and the grafted polymer was studied using a tensile machine.
SEM photographs and adhesion measurements confirmed that uniform, well-adhered deposits were obtained under the optimum conditions.
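The gravimetric determination of plating rate mentioned above reduces to mass gain per unit area per unit time; a minimal sketch with invented numbers (not the paper's data):

```python
def plating_rate(mass_before_g, mass_after_g, area_cm2, time_min):
    """Gravimetric plating rate in g cm^-2 min^-1."""
    return (mass_after_g - mass_before_g) / (area_cm2 * time_min)

# A 24 mg copper gain on a 4 cm^2 grafted film over 30 min:
print(plating_rate(1.000, 1.024, 4.0, 30.0))  # ≈ 2e-4 g cm^-2 min^-1
```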
Abstract:
Background: Shifting gaze and attention ahead of the hand is a natural component in the performance of skilled manual actions. Very few studies have examined the precise co-ordination between the eye and hand in children with Developmental Coordination Disorder (DCD). Methods: This study directly assessed the maturity of eye-hand co-ordination in children with DCD. A double-step pointing task was used to investigate the coupling of the eye and hand in 7-year-old children with and without DCD. Sequential targets were presented on a computer screen, and eye and hand movements were recorded simultaneously. Results: There were no differences between the typically developing (TD) and DCD groups when completing fast single-target tasks. There were very few differences in the completion of the first movement in the double-step tasks, but differences did occur during the second sequential movement. One factor appeared to be the propensity for the children with DCD to delay their hand movement until some period after the eye had landed on the target. This resulted in a marked increase in eye-hand lead during the second movement, disrupting the close coupling and leading to a slower and less accurate hand movement among children with DCD. Conclusions: In contrast to skilled adults, both groups of children preferred to foveate the target prior to initiating a hand movement if time allowed. The TD children, however, were more able to reduce this foveation period and shift towards a feedforward mode of control for hand movements. The children with DCD persevered with a look-then-move strategy, which led to an increase in error. For the group of children with DCD in this study, there was no evidence of a problem in the speed or accuracy of simple movements, but there was a difficulty in concatenating the sequential shifts of gaze and hand required for the completion of everyday tasks or typical assessment items.
Abstract:
Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixated. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback to guide the hand to the target (feedback loop); shifting gaze generates ocular proprioception which can be used to update a movement (feedback-feedforward); or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback on eye-to-head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial and secondary movements. The observation that participants move gaze well ahead of the current hand target rules out foveal feedback as a major contribution. We argue for a feedforward model based on eye-movement efference as the major factor in enabling accurate hand movements. The results with the double-step task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.
Abstract:
Whilst radial basis function (RBF) equalizers have been employed to combat the linear and nonlinear distortions in modern communication systems, most of them do not take into account the equalizer's generalization capability. In this paper, it is first proposed that the model's generalization capability can be improved by treating the modelling problem as a multi-objective optimization (MOO) problem, with each objective based on one of several training sets. Then, as a modelling application, a new RBF equalizer learning scheme is introduced based on directional evolutionary MOO (EMOO). Directional EMOO improves the computational efficiency of conventional EMOO, which has been widely applied in solving MOO problems, by explicitly making use of directional information. Computer simulations demonstrate that the new scheme can be used to derive RBF equalizers with good performance not only in explaining the training samples but also in predicting unseen samples.
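The model underlying such equalizers is a weighted sum of Gaussian basis functions; a minimal sketch of the forward pass (the centres, width and weights below are illustrative placeholders, not values trained by the directional EMOO scheme):

```python
import math

def rbf_output(x, centres, width, weights):
    """Scalar output of a Gaussian RBF model at input x."""
    return sum(w * math.exp(-((x - c) ** 2) / (2.0 * width ** 2))
               for c, w in zip(centres, weights))

# At a centre, that basis function contributes its full weight:
print(rbf_output(0.0, [0.0, 1.5], 1.0, [2.0, 0.5]))
```

Generalization quality then hinges on how the centres and weights are chosen, which is exactly where multi-objective training over several data sets enters.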