38 results for TURF analysis, Binary programming, product design
Abstract:
The aim of phase II single-arm clinical trials of a new drug is to determine whether it has sufficiently promising activity to warrant its further development. Over the last several years, Bayesian statistical methods have been proposed and used for such trials. Bayesian approaches are well suited to early-phase trials because they take into account information that accrues during a trial: predictive probabilities are updated and so become more accurate as the trial progresses. Suitable priors can act as pseudo-samples, which make small-sample clinical trials more informative, so patients have a better chance of receiving better treatments. The goal of this paper is to provide a tutorial for statisticians who are using Bayesian methods for the first time and for investigators who have some statistical background. In addition, real data from three clinical trials are presented as examples to illustrate how to conduct a Bayesian analysis of phase II single-arm clinical trials with binary outcomes.
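The conjugate Beta-Binomial updating described in this abstract (a prior acting as a pseudo-sample, updated as binary responses accrue) can be sketched as follows. All numbers here are invented for illustration, and `posterior_prob_exceeds` is a hypothetical helper, not a function from the paper:

```python
import random

# Minimal sketch, assuming a Beta(a, b) prior on the response rate.
# The prior acts as a pseudo-sample of a + b patients; after observing
# r responses in n patients the posterior is Beta(a + r, b + n - r).

def posterior_prob_exceeds(a, b, r, n, p0, draws=200_000):
    """Monte Carlo estimate of P(response rate > p0 | data)."""
    a_post, b_post = a + r, b + (n - r)
    hits = sum(random.betavariate(a_post, b_post) > p0 for _ in range(draws))
    return hits / draws

# Illustrative interim look: 15 responses in 20 patients, null rate 20%.
print(posterior_prob_exceeds(1, 1, 15, 20, 0.2))
```

A trial monitor would compare such a posterior probability against a pre-specified threshold to decide whether to continue enrolment; the prior `Beta(1, 1)` and the null rate `0.2` are assumptions for the example only.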
Abstract:
Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, i.e. a sparse binary outcome variable. A mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted by maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring continuous covariates. The initial model building indicated no significant 'type of operation' by surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power: the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
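The hierarchical structure described here (patients nested within surgeons, a sparse binary outcome, and a surgeon-level random intercept on the logit scale) can be illustrated by simulation. This is not the trial's data; the random-effect standard deviation and cluster sizes are assumed purely for illustration:

```python
import numpy as np

# Sketch of the data-generating model: logit(p_ij) = beta0 + u_j, with
# u_j a surgeon-level random intercept. All parameter values assumed.
rng = np.random.default_rng(0)
n_surgeons, patients_per_surgeon = 43, 32          # ~1376 patients in total
baseline_logit = np.log(0.10 / 0.90)               # ~10% complication rate
surgeon_effect = rng.normal(0.0, 0.3, n_surgeons)  # random intercepts u_j

logits = baseline_logit + np.repeat(surgeon_effect, patients_per_surgeon)
p = 1.0 / (1.0 + np.exp(-logits))                  # inverse logit link
complication = rng.binomial(1, p)                  # one binary row per patient
print(complication.mean())                         # sample complication rate
```

With only ~10% events and a small random-effect variance, the surgeon variance component is estimated from very little information, which is consistent with the convergence problems and imprecise variance estimates the abstract reports.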
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings poses a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcings rather than a few highly replicated ensembles, the approach more commonly used in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. The changes in natural and oceanic forcing, which itself contains some forcing from anthropogenic and natural influences, have the most influence. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. It was also found that an interaction between these two anthropogenic effects exists in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model.
For the global mean, this shows that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model was suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
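The design principle above, running every combination of forcings rather than replicating a few ensembles, so a single linear model can separate main effects and their interaction, can be shown with a toy two-factor example. The response values below are invented; the point is only that four unreplicated runs identify both main effects and the interaction exactly:

```python
import numpy as np

# Toy 2x2 full factorial: two "forcings" A and B, each off (0) or on (1),
# one run per combination. Hypothetical mean temperature responses.
A = np.array([0, 1, 0, 1])
B = np.array([0, 0, 1, 1])
y = np.array([0.0, 0.5, 0.3, 1.0])   # invented responses for the 4 runs

# Linear model: y = b0 + b1*A + b2*B + b3*(A*B), fitted by least squares.
X = np.column_stack([np.ones(4), A, B, A * B])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, effect_A, effect_B, interaction = coef
print(effect_A, effect_B, interaction)   # 0.5, 0.3, 0.2
```

Here the nonadditive interaction (0.2) is the extra response when A and B act together, exactly the kind of pairwise forcing interaction the abstract reports detecting between greenhouse gases and the indirect aerosol effect.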
Abstract:
The results of an experimental study into the oxidative degradation of proxies for atmospheric aerosol are presented. We demonstrate that the laser Raman tweezers method can be used successfully to obtain uptake coefficients for gaseous oxidants on individual aqueous and organic droplets, whilst the size and composition of the droplets are simultaneously followed. A laser tweezers system was used to trap individual droplets containing an unsaturated organic compound in either an aqueous or organic (alkane) solvent. The droplet was exposed to gas-phase ozone and the reaction kinetics and products were followed using Raman spectroscopy. The reactions of three different organic compounds with ozone were studied: fumarate anions, benzoate anions and alpha-pinene. The fumarate and benzoate anions in aqueous solution were used to represent components of humic-like substances (HULIS); alpha-pinene in an alkane solvent was studied as a proxy for biogenic aerosol. The kinetic analysis shows that for these systems the diffusive transport and mass accommodation of ozone are relatively fast, and that liquid-phase diffusion and reaction are the rate-determining steps. Uptake coefficients, gamma, were found to be (1.1 +/- 0.7) x 10^-5, (1.5 +/- 0.7) x 10^-5 and (3.0-7.5) x 10^-3 for the reactions of ozone with the fumarate-, benzoate- and alpha-pinene-containing droplets, respectively. Liquid-phase bimolecular rate coefficients for reactions of dissolved ozone molecules with fumarate, benzoate and alpha-pinene were also obtained: k(fumarate) = (2.7 +/- 2) x 10^5, k(benzoate) = (3.5 +/- 3) x 10^5 and k(alpha-pinene) = (1-3) x 10^7 dm^3 mol^-1 s^-1. The droplet size was found to remain stable over the course of the oxidation process for the HULIS proxies and for the oxidation of alpha-pinene in pentadecane.
The study of the alpha-pinene/ozone system is the first using organic seed particles to show that the hygroscopicity of the particle does not increase dramatically over the course of the oxidation. No products were detected by Raman spectroscopy for the reaction of benzoate ions with ozone. One product peak, consistent with aqueous carbonate anions, was observed when following the oxidation of fumarate ions by ozone. Product peaks observed in the reaction of ozone with alpha-pinene suggest the formation of new species containing carbonyl groups.
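The bimolecular rate coefficients quoted above translate directly into pseudo-first-order loss rates for the dissolved organics once an aqueous ozone concentration is assumed. The ozone concentration below is an assumed, illustrative value, not one reported in the study:

```python
# Back-of-envelope kinetics: for a dissolved ozone concentration [O3]
# (assumed here as 1e-5 mol dm^-3), the pseudo-first-order loss rate of
# the organic solute is k' = k2 * [O3], with characteristic lifetime 1/k'.
k2 = {
    "fumarate": 2.7e5,      # dm^3 mol^-1 s^-1, from the abstract
    "benzoate": 3.5e5,      # dm^3 mol^-1 s^-1, from the abstract
    "alpha-pinene": 2.0e7,  # midpoint of the quoted (1-3) x 10^7 range
}
o3_aq = 1.0e-5              # mol dm^-3, assumed for illustration

lifetimes = {}
for name, k in k2.items():
    k_prime = k * o3_aq                 # pseudo-first-order rate, s^-1
    lifetimes[name] = 1.0 / k_prime     # lifetime in seconds
    print(name, k_prime, lifetimes[name])
```

On these assumed numbers alpha-pinene reacts away roughly two orders of magnitude faster than the HULIS proxies, consistent with the much larger uptake coefficient reported for the alpha-pinene droplets.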
Abstract:
The journey from the concept of a building to the actual built form is mediated by the use of various artefacts, such as drawings, product samples and models. These artefacts are produced for different purposes and for people with different levels of understanding of the design and construction processes. This paper studies design practice as it occurs naturally in a real-world situation by observing the conversations that surround the use of artefacts at the early stages of a building's design. Drawing on ethnographic data, insights are given into how the use of artefacts can reveal a participant's understanding of the scheme. The appropriateness of conversation analysis as a method for revealing the users' understanding of a scheme is explored by observing spoken micro-interactional behaviours. It is shown that the users' understanding of the design was developed in the conversations around the use of artefacts, as well as through the knowledge embedded in the artefacts themselves. The users' confidence in the appearance of the building was gained in conversation, rather than through the artefacts' ability to represent a future reality.
Abstract:
The purpose of this study was to apply and compare two time-domain analysis procedures in the determination of oxygen uptake (VO2) kinetics in response to a pseudorandom binary sequence (PRBS) exercise test. PRBS exercise tests have typically been analysed in the frequency domain. However, the complex interpretation of frequency responses may have limited the application of this procedure in both sporting and clinical contexts, where a single time measurement would facilitate subject comparison. The relative potential of both a mean response time (MRT) and a peak cross-correlation time (PCCT) was investigated. This study was divided into two parts: a test-retest reliability study (part A), in which 10 healthy male subjects completed two identical PRBS exercise tests, and a comparison of the VO2 kinetics of 12 elite endurance runners (ER) and 12 elite sprinters (SR; part B). In part A, 95% limits of agreement were calculated for comparison between MRT and PCCT. The results of part A showed no significant difference between test and retest as assessed by MRT [mean (SD) 42.2 (4.2) s and 43.8 (6.9) s] or by PCCT [21.8 (3.7) s and 22.7 (4.5) s]. Measurement error (%) was lower for MRT in comparison with PCCT (16% and 25%, respectively). In part B of the study, the VO2 kinetics of ER were significantly faster than those of SR, as assessed by MRT [33.4 (3.4) s and 39.9 (7.1) s, respectively; P < 0.01] and PCCT [20.9 (3.8) s and 24.8 (4.5) s; P < 0.05]. It is possible that either analysis procedure could provide a single test measurement of VO2 kinetics; however, the greater reliability of the MRT data suggests that this method has more potential for development in the assessment of VO2 kinetics by PRBS exercise testing.
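The peak cross-correlation time (PCCT) idea can be illustrated schematically: cross-correlate a pseudorandom binary work-rate input with the VO2 response and take the lag of maximum correlation. The signal below is simulated, the "PRBS" is a stand-in random binary sequence, and the delay and time constant are assumed values, not data from this study:

```python
import numpy as np

# Schematic PCCT sketch: simulate a delayed first-order VO2 response to a
# binary input, then find the lag of peak input-output correlation.
rng = np.random.default_rng(1)
dt = 1.0                                        # s per sample
prbs = rng.integers(0, 2, 300).astype(float)    # stand-in PRBS work rate
delay, tau = 15.0, 5.0                          # assumed delay and time constant, s

t = np.arange(0.0, 100.0, dt)
kernel = np.where(t >= delay,                   # impulse response: zero before
                  (dt / tau) * np.exp(-(t - delay) / tau), 0.0)  # the delay
vo2 = np.convolve(prbs, kernel)[: prbs.size]    # simulated VO2 signal

xcorr = [np.corrcoef(prbs[: -lag or None], vo2[lag:])[0, 1]
         for lag in range(60)]
pcct = int(np.argmax(xcorr)) * dt               # lag of peak correlation, s
print(pcct)
```

Here the PCCT recovers (approximately) the assumed onset delay of the simulated response; on real data it summarises the overall VO2 lag in a single number, which is exactly the appeal the abstract describes for clinical and sporting use.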
Abstract:
Proteomic analysis using electrospray liquid chromatography-mass spectrometry (ESI-LC-MS) has been used to compare the sites of glycation (Amadori adduct formation) and carboxymethylation of RNase and to assess the role of the Amadori adduct in the formation of the advanced glycation end-product (AGE), N-epsilon-(carboxymethyl)lysine (CML). RNase (13.7 mg/mL, 1 mM) was incubated with glucose (0.4 M) at 37 degrees C for 14 days in phosphate buffer (0.2 M, pH 7.4) under air. On the basis of ESI-LC-MS of tryptic peptides, the major sites of glycation of RNase were, in order, K41, K7, K1, and K37. Three of these, in order, K41, K7, and K37, were also the major sites of CML formation. In other experiments, RNase was incubated under anaerobic conditions (1 mM DTPA, N2 purged) to form Amadori-modified protein, which was then incubated under aerobic conditions to allow AGE formation. Again, the major sites of glycation were, in order, K41, K7, K1, and K37, and the major sites of carboxymethylation were K41, K7, and K37. RNase was also incubated with 1-5 mM glyoxal, substantially more than is formed by autoxidation of glucose under experimental conditions, but there was only trace modification of lysine residues, primarily at K41. We conclude the following: (1) that the primary route to formation of CML is by autoxidation of Amadori adducts on protein, rather than by glyoxal generated on autoxidation of glucose; and (2) that carboxymethylation, like glycation, is a site-specific modification of protein affected by neighboring amino acids and bound ligands, such as phosphate or phosphorylated compounds. Even when the overall extent of protein modification is low, localization of a high proportion of the modifications at a few reactive sites might have important implications for understanding losses in protein functionality in aging and diabetes and also for the design of AGE inhibitors.
Abstract:
Semiotics is the study of signs. The application of semiotics in information systems design is based on the notion that information systems are organizations within which agents deploy signs in the form of actions according to a set of norms. An analysis of the relationships among the agents, their actions and the norms would give a better specification of the system. Distributed multimedia systems (DMMS) can be viewed as systems consisting of many dynamic, self-controlled normative agents engaging in complex interaction and the processing of multimedia information. This paper reports the work of applying the semiotic approach to the design and modeling of DMMS, with emphasis on using semantic analysis under the semiotic framework. A semantic model of DMMS describing various components and their ontological dependencies is presented, which then serves as a design model and is implemented in a semantic database. Benefits of using the semantic database are discussed with reference to various design scenarios.
Abstract:
Across the world there are many bodies currently involved in research into the design of autonomous guided vehicles (AGVs). One of the greatest problems at present, however, is that much of the research work is being conducted by isolated groups, with the resulting AGV sensor/control/command systems being almost completely nontransferable to other AGV designs. This paper describes a new modular method for robot design which, when applied to AGVs, overcomes the above problems. The method is explained here with respect to all forms of robotics, but the examples have been specifically chosen to reflect typical AGV systems.
Abstract:
This report describes the analysis and development of novel tools for the global optimisation of relevant mission design problems. A taxonomy was created for mission design problems, and an empirical analysis of their optimisation complexity was performed; it was demonstrated that global optimisation is necessary for most classes, which informed the selection of appropriate global algorithms. The selected algorithms were then applied to the different problem classes: Differential Evolution was found to be the most efficient. Considering the specific problem of multiple-gravity-assist trajectory design, a search space pruning algorithm was developed that displays both polynomial time and space complexity. Empirically, this was shown to typically achieve search space reductions of greater than six orders of magnitude, thus significantly reducing the complexity of the subsequent optimisation. The algorithm was fully implemented in a software package that allows simple visualisation of high-dimensional search spaces, and effective optimisation over the reduced search bounds.
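A minimal sketch of the kind of global optimiser the report found most efficient, Differential Evolution, can be run with SciPy on a standard multimodal test function. The Rastrigin function here merely stands in for the far harder trajectory design problems discussed above; the bounds and settings are illustrative, not those of the report:

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin test function: many regularly spaced local minima, one
# global minimum of 0 at the origin -- a classic case where local
# gradient methods fail and a global optimiser is needed.
def rastrigin(x):
    x = np.asarray(x)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2          # 2-D illustrative search box
result = differential_evolution(rastrigin, bounds, seed=1, popsize=30)
print(result.x, result.fun)           # near the global minimum at the origin
```

Mission design search spaces are far larger and more rugged than this toy box, which is why the report pairs the global optimiser with a pruning step that first shrinks the search bounds by orders of magnitude.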