22 results for Sequential organ failure assessment score
in CentAUR: Central Archive University of Reading - UK
Abstract:
Background: Total enteral nutrition (TEN) within 48 h of admission has recently been shown to be safe and efficacious as part of the management of severe acute pancreatitis. Our aim was to ascertain the safety of immediate TEN in these patients and the effect of TEN on systemic inflammation, psychological state, oxidative stress, plasma glutamine levels and endotoxaemia. Methods: Patients admitted with predicted severe acute pancreatitis (APACHE II score 15) were randomised to total enteral (TEN; n = 8) or total parenteral nutrition (TPN; n = 9). Measurements of systemic inflammation (C-reactive protein), fatigue (visual analogue scale), oxidative stress (plasma thiobarbituric acid-reactive substances), plasma glutamine and anti-endotoxin IgG and IgM antibody concentrations were made on admission and repeated on days 3 and 7 thereafter. Clinical progress was monitored using the APACHE II score. Organ failure and complications were recorded. Results: All patients tolerated the feeding regime well, with few nutrition-related complications. Fatigue improved in both groups, but more rapidly in the TEN group. Oxidative stress was high on admission and rose by similar amounts in both groups. Plasma glutamine concentrations did not change significantly in either group. In the TPN group, 3 patients developed respiratory failure and 3 developed non-respiratory single organ failure. There were no such complications in the TEN group. Hospital stay was shorter in the TEN group [7 (4-14) vs. 10 (7-26) days; p = 0.05], as was time to passing flatus and time to opening bowels [1 (0-2) vs. 2 (1-5) days; p = 0.01]. The cost of TEN was considerably less than that of TPN. Conclusion: Immediate institution of nutritional support in the form of TEN is safe in predicted severe acute pancreatitis. It is as safe and as efficacious as TPN and may be beneficial in the clinical course of this disease. Copyright © 2003 S. Karger AG, Basel and IAP.
Abstract:
We propose a novel method for scoring the accuracy of protein binding site predictions: the Binding-site Distance Test (BDT) score. Recently, the Matthews Correlation Coefficient (MCC) has been used to evaluate binding site predictions, both by developers of new methods and by the assessors for the community-wide prediction experiment CASP8. Whilst being a rigorous scoring method, the MCC does not take into account the actual 3D distance of the predicted residues from the observed binding site. Thus, an incorrectly predicted site that is nevertheless close to the observed binding site will obtain an identical score to the same number of non-binding residues predicted at random. The MCC is also somewhat affected by the subjectivity of determining observed binding residues and the ambiguity of choosing distance cutoffs. By contrast, the BDT method produces continuous scores ranging between 0 and 1, relating to the distance between the predicted and observed residues. Residues predicted close to the binding site will score higher than those more distant, providing a better reflection of the true accuracy of predictions. The CASP8 function predictions were evaluated using both the MCC and BDT methods and the scores were compared. The BDT scores were found to correlate strongly with the MCC scores, whilst also being less susceptible to the subjectivity of defining binding residues. We therefore suggest that this new, simple score is a potentially more robust method for future evaluations of protein-ligand binding site predictions.
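The binary character of the MCC that the abstract contrasts with BDT can be seen from its closed form; a minimal sketch, with illustrative residue sets (the BDT formula itself is not reproduced here, since the abstract does not state it):

```python
import math

def mcc(predicted: set, observed: set, all_residues: set) -> float:
    """Matthews Correlation Coefficient for a binding-site prediction.

    `predicted` and `observed` are sets of residue indices; `all_residues`
    is the full set of residues in the protein.
    """
    tp = len(predicted & observed)               # correctly predicted binding residues
    fp = len(predicted - observed)               # predicted but not binding
    fn = len(observed - predicted)               # binding but missed
    tn = len(all_residues - predicted - observed)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

With predicted = {1, 2, 3} and observed = {2, 3, 4} over ten residues the score is 11/21 ≈ 0.52; note that any prediction with the same TP/FP/FN/TN counts receives the same score regardless of how far the wrong residues sit from the true site, which is exactly the limitation BDT is designed to address.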
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
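For binary data, the efficient score and Fisher information that underpin such sequential boundary calculations have a simple closed form under a log-odds-ratio parametrisation; a sketch of that textbook statistic (not the paper's full boundary construction):

```python
def efficient_score_binary(s_e: int, n_e: int, s_c: int, n_c: int):
    """Efficient score Z and approximate Fisher information V for a
    log-odds-ratio comparison of a binary outcome, as used in
    Whitehead-style sequential tests.

    s_e/n_e: successes and patients on the experimental arm;
    s_c/n_c: the same for the control arm.  Returns (Z, V).
    """
    n = n_e + n_c
    s = s_e + s_c
    z = s_e - n_e * s / n                 # observed minus expected successes
    v = n_e * n_c * s * (n - s) / n**3    # approximate information
    return z, v
```

For example, 30/50 successes on the experimental arm against 20/50 on control gives Z = 5 and V = 6.25, so the standardised statistic Z/√V = 2.0.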
The sequential analysis of repeated binary responses: a score test for the case of three time points
Abstract:
In this paper a robust method is developed for the analysis of data consisting of repeated binary observations taken at up to three fixed time points on each subject. The primary objective is to compare outcomes at the last time point, using earlier observations to predict this for subjects with incomplete records. A score test is derived. The method is developed for application to sequential clinical trials, as at interim analyses there will be many incomplete records occurring in non-informative patterns. Motivation for the methodology comes from experience with clinical trials in stroke and head injury, and data from one such trial is used to illustrate the approach. Extensions to more than three time points and to allow for stratification are discussed. Copyright © 2005 John Wiley & Sons, Ltd.
Abstract:
Radiometric data in the visible domain acquired by satellite remote sensing have proven to be powerful for monitoring the states of the ocean, both physical and biological. With the help of these data it is possible to understand certain variations in biological responses of marine phytoplankton on ecological time scales. Here, we implement a sequential data-assimilation technique to estimate from a conventional nutrient–phytoplankton–zooplankton (NPZ) model the time variations of observed and unobserved variables. In addition, we estimate the time evolution of two biological parameters, namely, the specific growth rate and specific mortality of phytoplankton. Our study demonstrates that: (i) the series of time-varying estimates of specific growth rate obtained by sequential data assimilation improves the fitting of the NPZ model to the satellite-derived time series: the model trajectories are closer to the observations than those obtained by implementing static values of the parameter; (ii) the estimates of unobserved variables, i.e., nutrient and zooplankton, obtained from an NPZ model by implementation of a pre-defined parameter evolution can be different from those obtained on applying the sequences of parameters estimated by assimilation; and (iii) the maximum estimated specific growth rate of phytoplankton in the study area is more sensitive to the sea-surface temperature than would be predicted by temperature-dependent functions reported previously. The overall results of the study are potentially useful for enhancing our understanding of the biological response of phytoplankton in a changing environment.
Abstract:
AEA Technology has provided an assessment of the probability of α-mode containment failure for the Sizewell B PWR. After a preliminary review of the methodologies available, it was decided to use the probabilistic approach described in the paper, based on an extension of the methodology developed by Theofanous et al. (Nucl. Sci. Eng. 97 (1987) 259–325). The input to the assessment comprises 12 probability distributions; the bases for the quantification of these distributions are discussed. The α-mode assessment performed for the Sizewell B PWR has demonstrated the practicality of the event-tree method with input data represented by probability distributions. The assessment itself has drawn attention to a number of topics, which may be plant and sequence dependent, and has indicated the importance of melt relocation scenarios. The α-mode failure probability following an accident that leads to core melt relocation to the lower head for the Sizewell B PWR has been assessed as a few parts in 10 000, on the basis of current information. This assessment has been the first to consider elevated pressures (6 MPa and 15 MPa) besides atmospheric pressure, but the results suggest only a modest sensitivity to system pressure.
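The event-tree-with-distributions approach can be illustrated by a toy Monte Carlo propagation; everything below (the three branches, the triangular distributions and their parameters) is an invented placeholder for illustration, not the study's twelve input distributions:

```python
import random

def alpha_mode_probability(n_samples: int = 100_000, seed: int = 1) -> float:
    """Toy Monte Carlo propagation of uncertain branch probabilities
    through a much simplified event tree.  The branch names and the
    triangular distributions are illustrative placeholders only.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Draw one value from each uncertain branch probability.
        p_relocation = rng.triangular(0.1, 0.5, 0.3)       # melt relocates to lower head
        p_energetic = rng.triangular(0.01, 0.1, 0.03)      # energetic fuel-coolant interaction
        p_vessel_fail = rng.triangular(0.005, 0.05, 0.01)  # missile defeats containment
        total += p_relocation * p_energetic * p_vessel_fail
    return total / n_samples  # mean failure probability over the input uncertainty
```

The point of the sketch is structural: each event-tree branch probability is itself a random variable, and the failure probability is averaged over samples from those distributions rather than computed from point estimates.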
Abstract:
There is increasing interest in combining Phases II and III of clinical development into a single trial in which one of a small number of competing experimental treatments is ultimately selected and where a valid comparison is made between this treatment and the control treatment. Such a trial usually proceeds in stages, with the least promising experimental treatments dropped as soon as possible. In this paper we present a highly flexible design that uses adaptive group sequential methodology to monitor an order statistic. By using this approach, it is possible to design a trial which can have any number of stages, begins with any number of experimental treatments, and permits any number of these to continue at any stage. The test statistic used is based upon efficient scores, so the method can be easily applied to binary, ordinal, failure time, or normally distributed outcomes. The method is illustrated with an example, and simulations are conducted to investigate its type I error rate and power under a range of scenarios.
Abstract:
The International Citicoline Trial in acUte Stroke is a sequential phase III study of the use of the drug citicoline in the treatment of acute ischaemic stroke, initiated in 2006 in 56 treatment centres. The primary objective of the trial is to demonstrate improved recovery in patients randomized to citicoline relative to those randomized to placebo after 12 weeks of follow-up. The primary analysis will take the form of a global test combining the dichotomized results of assessments on three well-established scales: the Barthel Index, the modified Rankin scale and the National Institutes of Health Stroke Scale. This approach was previously used in the analysis of the influential National Institute of Neurological Disorders and Stroke trial of recombinant tissue plasminogen activator in stroke. The purpose of this paper is to describe how this trial was designed, and in particular how the simultaneous objectives of combining three assessment scales, performing a series of interim analyses, and accounting for prognostic factors, including more than 50 treatment centres, in both treatment allocation and analysis were addressed. Copyright © 2008 John Wiley & Sons, Ltd.
Abstract:
Background: Selecting the highest quality 3D model of a protein structure from a number of alternatives remains an important challenge in the field of structural bioinformatics. Many Model Quality Assessment Programs (MQAPs) have been developed which adopt various strategies in order to tackle this problem, ranging from the so-called "true" MQAPs capable of producing a single energy score based on a single model, to methods which rely on structural comparisons of multiple models or additional information from meta-servers. However, it is clear that no current method can consistently separate the highest accuracy models from the lowest. In this paper, a number of the top performing MQAP methods are benchmarked in the context of the potential value that they add to protein fold recognition. Two novel methods are also described: ModSSEA, which is based on the alignment of predicted secondary structure elements, and ModFOLD, which combines several true MQAP methods using an artificial neural network. Results: The ModSSEA method is found to be an effective model quality assessment program for ranking multiple models from many servers; however, further accuracy can be gained by using the consensus approach of ModFOLD. The ModFOLD method is shown to significantly outperform the true MQAPs tested and is competitive with methods which make use of clustering or additional information from multiple servers. Several of the true MQAPs are also shown to add value to most individual fold recognition servers by improving model selection, when applied as a post-filter in order to re-rank models. Conclusion: MQAPs should be benchmarked appropriately for the practical context in which they are intended to be used. Clustering-based methods are the top performing MQAPs where many models are available from many servers; however, they often do not add value to individual fold recognition servers when limited models are available. Conversely, the true MQAP methods tested can often be used as effective post-filters for re-ranking the few models available from individual fold recognition servers, and further improvements can be achieved using a consensus of these methods.
Abstract:
Risk management (RM) comprises the risk identification, risk analysis, response planning, monitoring and action planning tasks that are carried out throughout the life cycle of a project in order to ensure that project objectives are met. Although the methodological aspects of RM are well defined, its philosophical background is rather vague. In this paper, a learning-based approach is proposed. In order to implement this approach in practice, a tool has been developed to facilitate the construction of a lessons-learned database that contains risk-related information and risk assessments made throughout the life cycle of a project. The tool was tested on a real construction project. The case study findings demonstrate that it can be used for storing and updating risk-related information and, finally, for carrying out a post-project appraisal. The major weaknesses of the tool are the subjectivity of the risk rating process and the unwillingness of people to enter information about the reasons for failure.
Abstract:
A fast neutron-mutagenized population of Arabidopsis (Arabidopsis thaliana) Columbia-0 wild-type plants was screened for floral phenotypes, and a novel mutant, termed hawaiian skirt (hws), was identified that failed to shed its reproductive organs. The mutation is the consequence of a 28 bp deletion that introduces a premature amber termination codon into the open reading frame of a putative F-box protein (At3g61590). The most striking anatomical characteristic of hws plants is seen in flowers, where individual sepals are fused along the lower part of their margins. Crossing of the abscission marker Pro(PGAZAT):beta-glucuronidase into the mutant reveals that, while floral organs are retained, this is not the consequence of a failure of abscission zone cells to differentiate. Anatomical analysis indicates that the fusion of sepal margins precludes shedding even though abscission, albeit delayed, does occur. Spatial and temporal characterization, using Pro(HWS):beta-glucuronidase or Pro(HWS):green fluorescent protein fusions, has shown HWS expression to be restricted to the stele and lateral root cap, cotyledonary margins, tip of the stigma, pollen, abscission zones, and developing seeds. Comparative phenotypic analyses performed on the hws mutant, Columbia-0 wild type, and Pro(35S):HWS ectopically expressing lines have revealed that loss of HWS results in greater growth of both aerial and below-ground organs, while overexpressing the gene brings about the converse effect. These observations are consistent with HWS playing an important role in regulating plant growth and development.
Abstract:
Background: Shifting gaze and attention ahead of the hand is a natural component in the performance of skilled manual actions. Very few studies have examined the precise co-ordination between the eye and hand in children with Developmental Coordination Disorder (DCD). Methods: This study directly assessed the maturity of eye-hand co-ordination in children with DCD. A double-step pointing task was used to investigate the coupling of the eye and hand in 7-year-old children with and without DCD. Sequential targets were presented on a computer screen, and eye and hand movements were recorded simultaneously. Results: There were no differences between typically developing (TD) and DCD groups when completing fast single-target tasks. There were very few differences in the completion of the first movement in the double-step tasks, but differences did occur during the second sequential movement. One factor appeared to be the propensity for the DCD children to delay their hand movement until some period after the eye had landed on the target. This resulted in a marked increase in eye-hand lead during the second movement, disrupting the close coupling and leading to a slower and less accurate hand movement among children with DCD. Conclusions: In contrast to skilled adults, both groups of children preferred to foveate the target prior to initiating a hand movement if time allowed. The TD children, however, were more able to reduce this foveation period and shift towards a feedforward mode of control for hand movements. The children with DCD persevered with a look-then-move strategy, which led to an increase in error. For the group of DCD children in this study, there was no evidence of a problem in the speed or accuracy of simple movements, but there was a difficulty in concatenating the sequential shifts of gaze and hand required for the completion of everyday tasks or typical assessment items.
Abstract:
There is growing interest, especially for trials in stroke, in combining multiple endpoints in a single clinical evaluation of an experimental treatment. The endpoints might be repeated evaluations of the same characteristic or alternative measures of progress on different scales. Often they will be binary or ordinal, and those are the cases studied here. In this paper we take a direct approach to combining the univariate score statistics for comparing treatments with respect to each endpoint. The correlations between the score statistics are derived and used to allow a valid combined score test to be applied. A sample size formula is deduced and application in sequential designs is discussed. The method is compared with an alternative approach based on generalized estimating equations in an illustrative analysis and replicated simulations, and the advantages and disadvantages of the two approaches are discussed.
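One simple way to combine correlated score statistics, in the spirit of the direct approach described above, is an equally weighted sum standardised by its variance; the equal weighting and the known correlation matrix below are assumptions for illustration (the paper itself derives the correlations between the score statistics):

```python
import math

def combined_score_test(z_stats, corr):
    """Combine standardized score statistics Z_1..Z_k for k endpoints
    into a single test statistic, given their correlation matrix.

    Uses an equally weighted sum (an illustrative choice):
        T = sum(Z_i) / sqrt(sum_ij corr[i][j])
    so that T is standard normal under the null if each Z_i is.
    """
    k = len(z_stats)
    total_var = sum(corr[i][j] for i in range(k) for j in range(k))
    return sum(z_stats) / math.sqrt(total_var)
```

Two endpoints each with Z = 2.0 and correlation 0.5 give T = 4/√3 ≈ 2.31: larger than either statistic alone, but smaller than the 4/√2 ≈ 2.83 that independent endpoints would yield, reflecting the overlap in evidence.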
Abstract:
Huntington’s disease (HD) is a fatal neurodegenerative disease for which there is no known cure. Proxy evaluation is relevant for HD because its manifestations might limit the ability of persons to report their health-related quality of life (HrQoL). This study explored patient–proxy ratings of the HrQoL of persons at different stages of HD, and examined factors that may affect proxy ratings. A total of 105 patient–proxy pairs completed the Huntington’s disease health-related quality of life questionnaire (HDQoL) and other established HrQoL measures (EQ-5D and SF-12v2). Proxy–patient agreement was assessed in terms of absolute level (mean ratings) and intraclass correlation. Proxies’ ratings were at a similar level to patients’ self-ratings on the overall Summary Score and on most of the six Specific Scales of the HDQoL. On the Specific Hopes and Worries Scale, proxies on average rated HrQoL as better than patients’ self-ratings, while on both the Specific Cognitive Scale and the Specific Physical and Functional Scale proxies tended to rate HrQoL more poorly than patients themselves. The patient’s disease stage and mental wellbeing (SF-12 Mental Component scale) were the two factors that primarily affected proxy assessment. Proxy scores were strongly correlated with patients’ self-ratings of HrQoL on the Summary Scale and all Specific Scales. The patient–proxy correlation was lower for patients at moderate stages of HD than for patients at early and advanced stages. The proxy-report version of the HDQoL is a useful complementary tool to self-assessment, and a promising alternative when individual patients with advanced HD are unable to self-report.