842 results for "systematic toxicological analysis"
Abstract:
An analytic method to evaluate nuclear contributions to the electrical properties of polyatomic molecules is presented. Such contributions control the changes induced by an electric field in the equilibrium geometry (nuclear relaxation contribution) and in the vibrational motion (vibrational contribution) of a molecular system. Expressions to compute the nuclear contributions have been derived from a power series expansion of the potential energy. These contributions to the electrical properties are given in terms of energy derivatives with respect to normal coordinates, to the electric field intensity, or to both. Only one calculation of such derivatives at the field-free equilibrium geometry is required. To demonstrate the usefulness and efficiency of the analytical evaluation of electrical properties (the so-called AEEP method), results of calculations on water and pyridine at the SCF/TZ2P and MP2/TZ2P levels of theory are reported. The results obtained are compared with previous theoretical calculations and with experimental values.
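For orientation, the kind of expansion underlying such expressions can be sketched in generic notation (these are not the paper's own symbols): expanding the energy jointly in normal coordinates Q_i and field components F_a about the field-free equilibrium geometry gives

\[
E(\mathbf{Q},\mathbf{F}) = E_0
+ \tfrac{1}{2}\sum_{ij}\frac{\partial^2 E}{\partial Q_i\,\partial Q_j}\,Q_i Q_j
- \sum_{a}\mu_a F_a
- \sum_{i,a}\frac{\partial \mu_a}{\partial Q_i}\,Q_i F_a
- \tfrac{1}{2}\sum_{ab}\alpha_{ab}\,F_a F_b - \dots
\]

(the first derivatives with respect to Q_i vanish at the field-free equilibrium geometry). Minimising E with respect to Q at fixed F gives the field-induced change in equilibrium geometry, and re-evaluating the field derivatives, e.g. \mu_a = -\partial E/\partial F_a and \alpha_{ab} = -\partial^2 E/\partial F_a\,\partial F_b, at the relaxed geometry yields the nuclear relaxation and vibrational contributions in terms of the mixed derivatives above.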
Abstract:
We compare European Centre for Medium-Range Weather Forecasts 15-year reanalysis (ERA-15) moisture over the tropical oceans with satellite observations and with the U.S. National Centers for Environmental Prediction (NCEP)–National Center for Atmospheric Research 40-year reanalysis. When systematic differences in moisture between the observational and reanalysis data sets are removed, the NCEP data show excellent agreement with the observations, while the ERA-15 variability exhibits marked differences. When ERA-15 column water vapor is forced to agree with the observations, where available, by scaling the entire moisture column accordingly, the height-dependent moisture variability remains unchanged for all but the 550–850 hPa layer, where the moisture variability is reduced significantly. Thus the excess variation of column moisture in ERA-15 appears to originate in this layer. The moisture variability provided by ERA-15 is not deemed of sufficient quality for use in the validation of climate models.
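A minimal sketch of the column-scaling step described above, in Python (all function and variable names are illustrative assumptions; the actual ERA-15/satellite processing is more involved):

```python
import numpy as np

def scale_moisture_column(q_layers, layer_weights, observed_cwv):
    """Rescale a reanalysis specific-humidity profile so its column-integrated
    water vapor matches an observed value.

    q_layers      : specific humidity per layer (kg/kg)
    layer_weights : mass weight per layer (dp/g, kg/m^2)
    observed_cwv  : observed column water vapor (kg/m^2)

    The same multiplicative factor is applied to every layer, so the vertical
    distribution of moisture (and hence its layer-to-layer variability) is
    left unchanged, as in the procedure described above.
    """
    reanalysis_cwv = np.sum(q_layers * layer_weights)
    return q_layers * (observed_cwv / reanalysis_cwv)

# Hypothetical 5-layer profile scaled to a satellite CWV of 40 kg/m^2
q = np.array([0.015, 0.010, 0.006, 0.003, 0.001])        # kg/kg
w = np.array([1000.0, 1200.0, 1500.0, 1800.0, 2000.0])   # kg/m^2 per layer
q_scaled = scale_moisture_column(q, w, observed_cwv=40.0)
```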
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared difference, and their correlation. However, when the model predictions are spatially distributed across a landscape, the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may differ between spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed with the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were poorly correlated with the observations. At the scale of the trend, the predictions and observations shared a common surface. The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation. © 2007 Elsevier B.V. All rights reserved.
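Schematically, and in notation that is illustrative rather than the authors' own, the joint model for observations z_o and predictions z_p at locations s can be written as

\[
\begin{pmatrix} z_{\mathrm{o}}(\mathbf{s}) \\ z_{\mathrm{p}}(\mathbf{s}) \end{pmatrix}
= \mathbf{X}(\mathbf{s})\,\boldsymbol{\beta} + \boldsymbol{\eta}(\mathbf{s}) + \boldsymbol{\varepsilon}(\mathbf{s}),
\]

where \mathbf{X}\boldsymbol{\beta} is the spatial trend, \boldsymbol{\eta}(\mathbf{s}) is spatially correlated random variation at the coarse scale with a cross-covariance linking observations and predictions, and \boldsymbol{\varepsilon}(\mathbf{s}) is independent random variation at the fine scale; the variance and cross-covariance parameters are estimated by residual maximum likelihood (REML), and the cross-correlation at each scale quantifies how well the predictions track the observations at that scale.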
Abstract:
The principles of organization theory are applied to the organization of construction projects. This is done by proposing a framework for modelling the whole process of building procurement, which includes a framework for describing the environments within which construction projects take place. This is followed by the development of a series of hypotheses about the organizational structure of construction projects. Four case studies are undertaken, and the extent to which their organizational structure matches the model is compared with the level of success achieved by each project. To this end, a systematic method for evaluating the success of building project organizations is developed, because any conclusions about the adequacy of a particular organization must be related to the degree of success achieved by that organization. In order to test these hypotheses, a mapping technique is developed. The technique offered is a development of a technique known as Linear Responsibility Analysis, and is called "3R analysis" because it deals with roles, responsibilities and relationships. The analysis of the case studies shows that they tended to suffer from inappropriate organizational structure. One of the prevailing problems of public sector organization is that organizational structures are inadequately defined and too cumbersome to respond to environmental demands on the project. The projects tended to be organized as rigid hierarchies, particularly at decision points, when what was required was a more flexible, dynamic and responsive organization. The study concludes with a series of recommendations, including suggestions for increasing the responsiveness of construction project organizations and reducing the lead-in times for the inception periods.
Abstract:
BACKGROUND: Serial Analysis of Gene Expression (SAGE) is a powerful tool for genome-wide transcription studies. Unlike microarrays, it has the ability to detect novel forms of RNA, such as alternatively spliced and antisense transcripts, without the need for prior knowledge of their existence. One limitation of using SAGE on an organism with a complex genome and lacking detailed sequence information, such as the hexaploid bread wheat Triticum aestivum, is accurate annotation of the tags generated. Without accurate annotation it is impossible to fully understand the dynamic processes involved in such complex polyploid organisms. Hence we have developed and utilised novel procedures to characterise, in detail, SAGE tags generated from the whole grain transcriptome of hexaploid wheat. RESULTS: Examination of 71,930 LongSAGE tags generated from six libraries derived from two wheat genotypes grown under two different conditions suggested that SAGE is a reliable and reproducible technique for studying the hexaploid wheat transcriptome. However, our results also showed that in poorly annotated and/or poorly sequenced genomes, such as hexaploid wheat, considerably more information can be extracted from SAGE data by carrying out a systematic analysis of both perfect and "fuzzy" (partially matched) tags. This detailed analysis of the SAGE data shows, first, that while there is evidence of alternative polyadenylation, it appears to occur exclusively within the 3' untranslated regions. Secondly, we found no strong evidence for widespread alternative splicing in the developing wheat grain transcriptome. However, analysis of our SAGE data shows that antisense transcripts are probably widespread within the transcriptome and appear to be derived from numerous locations within the genome. Examination of antisense transcripts showing sequence similarity to the Puroindoline a and Puroindoline b genes suggests that such antisense transcripts might have a role in the regulation of gene expression. CONCLUSION: Our results indicate that detailed analysis of transcriptome data, such as SAGE tags, is essential to understand fully the factors that regulate gene expression, and that such analysis of the wheat grain transcriptome reveals that antisense transcripts may be widespread and hence probably play a significant role in the regulation of gene expression during grain development.
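As an illustration of what "fuzzy" (partially matched) tag annotation can mean in practice, here is a minimal Python sketch that classifies a tag as a perfect or one-mismatch hit against a reference set; the tag sequences and the one-mismatch threshold are assumptions for the example, not the authors' annotation pipeline:

```python
def hamming(a, b):
    """Number of mismatched positions between two equal-length tag sequences."""
    return sum(x != y for x, y in zip(a, b))

def classify_tag(tag, reference_tags, max_mismatches=1):
    """Classify a SAGE tag as a 'perfect', 'fuzzy' (partial), or unmatched hit."""
    if tag in reference_tags:
        return "perfect"
    for ref in reference_tags:
        if len(ref) == len(tag) and hamming(tag, ref) <= max_mismatches:
            return "fuzzy"
    return "unmatched"

# Hypothetical tag sequences
refs = {"CATGTTTGGGACCTTGT", "CATGAAACCCTGGGTTA"}
print(classify_tag("CATGTTTGGGACCTTGT", refs))  # perfect
print(classify_tag("CATGTTTGGGACCTTGA", refs))  # fuzzy (one mismatch)
```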
Abstract:
Background: The objective was to evaluate the efficacy and tolerability of donepezil (5 and 10 mg/day) compared with placebo in alleviating manifestations of mild to moderate Alzheimer's disease (AD). Method: A systematic review of individual patient data from Phase II and III double-blind, randomised, placebo-controlled studies of up to 24 weeks' duration completed by 20 December 1999. The main outcome measures were the ADAS-cog, the CIBIC-plus, and reports of adverse events. Results: A total of 2376 patients from ten trials were randomised to donepezil 5 mg/day (n = 821), 10 mg/day (n = 662) or placebo (n = 893). Cognitive performance was better in patients receiving donepezil than in patients receiving placebo. At 12 weeks the differences in ADAS-cog scores were 5 mg/day vs placebo: −2.1 [95% confidence interval (CI), −2.6 to −1.6; p < 0.001], and 10 mg/day vs placebo: −2.5 (−3.1 to −2.0; p < 0.001). The corresponding results at 24 weeks were −2.0 (−2.7 to −1.3; p < 0.001) and −3.1 (−3.9 to −2.4; p < 0.001). The difference between the 5 and 10 mg/day doses was significant at 24 weeks (p = 0.005). The odds ratios (OR) of improvement on the CIBIC-plus at 12 weeks were: 5 mg/day vs placebo 1.8 (1.5 to 2.1; p < 0.001), and 10 mg/day vs placebo 1.9 (1.5 to 2.4; p < 0.001). The corresponding values at 24 weeks were 1.9 (1.5 to 2.4; p = 0.001) and 2.1 (1.6 to 2.8; p < 0.001). Donepezil was well tolerated; adverse events were cholinergic in nature and generally mild and brief in duration. Conclusion: Donepezil (5 and 10 mg/day) provides meaningful benefits in alleviating deficits in cognitive and clinician-rated global function in AD patients relative to placebo. Greater improvements in cognition were observed with the higher dose. Copyright © 2004 John Wiley & Sons, Ltd.
Abstract:
Background: Meta-analyses based on individual patient data (IPD) are regarded as the gold standard for systematic reviews. However, the methods used for analysing and presenting results from IPD meta-analyses have received little discussion. Methods: We review 44 IPD meta-analyses published during the years 1999–2001. We summarize whether they obtained all the data they sought, what types of approaches were used in the analysis, including assumptions of common or random effects, and how they examined the effects of covariates. Results: Twenty-four of the 44 analyses focused on time-to-event outcomes, and most analyses (28) estimated treatment effects within each trial and then combined the results assuming a common treatment effect across trials. Three analyses failed to stratify by trial, analysing the data as if they came from a single mega-trial. Only nine analyses used random-effects methods. Covariate–treatment interactions were generally investigated by subgrouping patients. Seven of the meta-analyses included data from fewer than 80% of the randomized patients sought, but did not address the resulting potential biases. Conclusions: Although IPD meta-analyses have many advantages in assessing the effects of health care, there are several aspects that could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to investigate more fully the influence of covariates on heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity, and the use of random effects, are seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analysis.
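The common two-stage approach noted above (estimate a treatment effect within each trial, then pool assuming a common effect) can be sketched as follows; the per-trial estimates and the inverse-variance weighting below are illustrative, not taken from any of the reviewed meta-analyses:

```python
import numpy as np

def pool_fixed_effect(estimates, std_errors):
    """Second stage of a two-stage IPD meta-analysis: combine per-trial
    treatment-effect estimates with inverse-variance weights under a
    common (fixed) effect assumption."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Hypothetical per-trial log hazard ratios and their standard errors
log_hr = [-0.25, -0.10, -0.40]
se = [0.12, 0.20, 0.15]
pooled, pooled_se = pool_fixed_effect(log_hr, se)
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```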
Abstract:
OBJECTIVES: To evaluate the evidence for strategies to prevent falls or fractures in residents in care homes and hospital inpatients and to investigate the effect of dementia and cognitive impairment. DESIGN: Systematic review and meta-analyses of studies grouped by intervention and setting (hospital or care home). Meta-regression to investigate the effects of dementia and of study quality and design. DATA SOURCES: Medline, CINAHL, Embase, PsychInfo, Cochrane Database, Clinical Trials Register, and hand searching of references from reviews and guidelines to January 2005. RESULTS: 1207 references were identified, including 115 systematic reviews, expert reviews, or guidelines. Of the 92 full papers inspected, 43 were included. Meta-analysis for multifaceted interventions in hospital (13 studies) showed a rate ratio of 0.82 (95% confidence interval 0.68 to 0.997) for falls but no significant effect on the number of fallers or fractures. For hip protectors in care homes (11 studies) the rate ratio for hip fractures was 0.67 (0.46 to 0.98), but there was no significant effect on falls and not enough studies on fallers. For all other interventions (multifaceted interventions in care homes; removal of physical restraints in either setting; fall alarm devices in either setting; exercise in care homes; calcium/vitamin D in care homes; changes in the physical environment in either setting; medication review in hospital) meta-analysis was either unsuitable because of insufficient studies or showed no significant effect on falls, fallers, or fractures, despite strongly positive results in some individual studies. Meta-regression showed no significant association between effect size and prevalence of dementia or cognitive impairment. CONCLUSION: There is some evidence that multifaceted interventions in hospital reduce the number of falls and that use of hip protectors in care homes prevents hip fractures. There is insufficient evidence, however, for the effectiveness of other single interventions in hospitals or care homes or multifaceted interventions in care homes.
Abstract:
1. Population growth rate (PGR) is central to the theory of population ecology and is crucial for projecting population trends in conservation biology, pest management and wildlife harvesting. Furthermore, PGR is increasingly used to assess the effects of stressors. Image analysis that can automatically count and measure photographed individuals offers a potential methodology for estimating PGR. 2. This study evaluated two ways in which the PGR of Daphnia magna, exposed to different stressors, can be estimated using an image analysis system. The first method estimated PGR as the ratio of counts of individuals obtained at two different times, while the second method estimated PGR as the ratio of population sizes at two different times, where size is measured by the sum of the individuals' surface areas, i.e. total population surface area. This method is attractive if surface area is correlated with reproductive value (RV), as it is for D. magna, because of the theoretical result that PGR is the rate at which the population RV increases. 3. The image analysis system proved reliable and reproducible in counting populations of up to 440 individuals in 5 L of water. Image counts correlated well with manual counts but with a systematic underestimate of about 30%. This does not affect accuracy when estimating PGR as the ratio of two counts. Area estimates of PGR correlated well with count estimates, but were systematically higher, possibly reflecting their greater accuracy in the study situation. 4. Analysis of relevant scenarios suggested the correlation between RV and body size will generally be good for organisms in which fecundity correlates with body size. In these circumstances, area estimation of PGR is theoretically better than count estimation. 5. Synthesis and applications. There are both theoretical and practical advantages to area estimation of population growth rate when individuals' reproductive values are consistently well correlated with their surface areas. Because stressors may affect both the number and quality of individuals, area estimation of population growth rate should improve the accuracy of predicting stress impacts at the population level.
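A minimal sketch of the two estimators, assuming the common convention of a log-ratio growth rate per unit time (the function names and numbers below are illustrative, not the study's data):

```python
import numpy as np

def pgr_from_counts(n_t1, n_t2, dt):
    """Population growth rate from counts at two times, dt apart.
    A constant proportional counting bias cancels in the ratio."""
    return np.log(n_t2 / n_t1) / dt

def pgr_from_areas(areas_t1, areas_t2, dt):
    """PGR from total surface area (a proxy for total reproductive value)
    measured at the two times."""
    return np.log(np.sum(areas_t2) / np.sum(areas_t1)) / dt

# Hypothetical image-analysis output: individual surface areas (mm^2)
areas_day0 = np.array([1.2, 0.8, 1.5, 0.9])
areas_day7 = np.array([1.3, 1.1, 1.6, 0.7, 0.9, 1.0, 1.2])
print(pgr_from_counts(len(areas_day0), len(areas_day7), dt=7))
print(pgr_from_areas(areas_day0, areas_day7, dt=7))
```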
Abstract:
Glycogen phosphorylase (GP) catalyzes the phosphorolysis of glycogen to glucose-1-phosphate (Glc-1-P). Because of its fundamental role in the metabolism of glycogen, GP has been the target of systematic structure-assisted design of inhibitory compounds, which could be of value in the therapeutic treatment of type 2 diabetes mellitus. The most potent catalytic-site inhibitor of GP identified to date is the spirohydantoin of glucopyranose (hydan). In this work, we employ molecular dynamics (MD) free energy simulations to calculate the relative binding affinities for GP of hydan and two spirohydantoin analogues, methyl-hydan and n-hydan, in which a hydrogen atom is replaced by a methyl or an amino group, respectively. The results are compared with the experimental relative affinities of these ligands, estimated from kinetic measurements of the ligand inhibition constants. The calculated binding affinity for methyl-hydan (relative to hydan) is 3.75 +/- 1.4 kcal/mol, in excellent agreement with the experimental value (3.6 +/- 0.2 kcal/mol). For n-hydan, the calculated value is 1.0 +/- 1.1 kcal/mol, somewhat smaller than the experimental result (2.3 +/- 0.1 kcal/mol). A free energy decomposition analysis shows that hydan makes optimum interactions with protein residues and specific water molecules in the catalytic site. In the other two ligands, structural perturbations of the active site by the additional methyl or amino group reduce the corresponding binding affinities. The computed binding free energies are sensitive to the preference of a specific water molecule for two well-defined positions in the catalytic site. The behavior of this water is analyzed in detail, and the free energy profile for the translocation of the water between the two positions is evaluated. The results provide insights into the role of water molecules in modulating ligand binding affinities. A comparison of the interactions between a set of ligands and their surrounding groups in X-ray structures is often used in the interpretation of binding free energy differences and in guiding the design of new ligands. For the systems in this work, such an approach fails to predict the order of relative binding strengths, in contrast to the rigorous free energy treatment.
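The quantity being computed can be summarized by the standard thermodynamic cycle for relative binding free energies (a textbook relation, not a formula quoted from this paper):

\[
\Delta\Delta G_{\mathrm{bind}}(\mathrm{hydan}\to\mathrm{analogue})
= \Delta G_{\mathrm{bind}}^{\mathrm{analogue}} - \Delta G_{\mathrm{bind}}^{\mathrm{hydan}}
= \Delta G_{\mathrm{mut}}^{\mathrm{protein}} - \Delta G_{\mathrm{mut}}^{\mathrm{solution}},
\]

where \Delta G_{\mathrm{mut}} is the free energy of alchemically mutating hydan into the analogue in the GP catalytic site and in solution, respectively. The experimental counterpart follows from the inhibition constants as \Delta\Delta G_{\mathrm{bind}} \approx RT\,\ln\!\left(K_i^{\mathrm{analogue}}/K_i^{\mathrm{hydan}}\right), which is how kinetic measurements provide the relative affinities used for comparison.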
Abstract:
Details about the parameters of kinetic systems are crucial for progress in both medical and industrial research, including drug development, clinical diagnosis and biotechnology applications. Such details must be collected through a series of kinetic experiments and investigations. The correct design of the experiment is essential to collecting data suitable for analysis, modelling and deriving the correct information. We have developed a systematic and iterative Bayesian method, and sets of rules, for the design of enzyme kinetic experiments. Our method selects the optimum design to collect data suitable for accurate modelling and analysis, and minimises the error in the estimated parameters. The rules select features of the design such as the substrate range and the number of measurements. We show here that this method can be directly applied to the study of other important kinetic systems, including drug transport, receptor binding, microbial culture and cell transport kinetics. It is possible to reduce the errors in the estimated parameters and, most importantly, to increase efficiency and cost-effectiveness by reducing the necessary number of experiments and data points measured. © 2003 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.
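One way such an information-based selection of substrate concentrations could look in practice is sketched below; the D-optimality criterion, the Michaelis–Menten model, the prior, and every name in the code are assumptions made for illustration, not the authors' implementation:

```python
import numpy as np
from itertools import combinations

def fisher_det(substrates, vmax, km, sigma=1.0):
    """Determinant of the Fisher information matrix for (Vmax, Km) at a set
    of substrate concentrations, assuming Michaelis-Menten kinetics with
    constant Gaussian measurement noise."""
    s = np.asarray(substrates, dtype=float)
    dv = s / (km + s)                  # d(rate)/dVmax
    dk = -vmax * s / (km + s) ** 2     # d(rate)/dKm
    J = np.column_stack([dv, dk]) / sigma
    return np.linalg.det(J.T @ J)

def bayesian_d_optimal(candidate_s, n_points, km_prior_samples, vmax=1.0):
    """Choose the n_points substrate concentrations that maximise the Fisher
    determinant averaged over prior draws of Km (a simple Bayesian
    D-optimality criterion)."""
    best, best_score = None, -np.inf
    for design in combinations(candidate_s, n_points):
        score = np.mean([fisher_det(design, vmax, km) for km in km_prior_samples])
        if score > best_score:
            best, best_score = design, score
    return best

# Hypothetical prior on Km and a grid of feasible substrate concentrations
km_prior = np.random.default_rng(0).lognormal(mean=np.log(2.0), sigma=0.5, size=50)
candidates = [0.1, 0.5, 1, 2, 5, 10, 20, 50]
print(bayesian_d_optimal(candidates, n_points=4, km_prior_samples=km_prior))
```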
Abstract:
In areas such as drug development, clinical diagnosis and biotechnology research, acquiring details about the kinetic parameters of enzymes is crucial. The correct design of an experiment is critical to collecting data suitable for analysis, modelling and deriving the correct information. As classical design methods are not targeted at the more complex kinetic models now frequently studied, care is needed to estimate the parameters of such models with low variance. We demonstrate that a Bayesian approach (the use of prior knowledge) can produce major gains, quantifiable in terms of the information, productivity and accuracy of each experiment. By developing the use of Bayesian utility functions, we have used a systematic method to identify the optimum experimental designs for a number of kinetic model data sets. This has enabled the identification of trends between kinetic model types, sets of design rules, and the key conclusion that such designs should be based on some prior knowledge of K_M and/or the kinetic model. We suggest an optimal and iterative method for selecting features of the design such as the substrate range, the number of measurements and the choice of intermediate points. The final design collects data suitable for accurate modelling and analysis and minimises the error in the estimated parameters. © 2003 Elsevier Science B.V. All rights reserved.
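In generic Bayesian design notation (standard textbook form, not quoted from the paper), the expected utility of a candidate design d is

\[
U(d) = \iint u(d, y, \boldsymbol{\theta})\, p(y \mid \boldsymbol{\theta}, d)\, p(\boldsymbol{\theta})\, \mathrm{d}y\, \mathrm{d}\boldsymbol{\theta},
\]

where \boldsymbol{\theta} collects the kinetic parameters (e.g. V_max and K_M), y is the data the design would yield, p(\boldsymbol{\theta}) is the prior, and u is a utility such as the expected gain in information about \boldsymbol{\theta}; the optimal design maximises U(d), typically approximated by Monte Carlo sampling over the prior and simulated data.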
Abstract:
Objectives: To clarify the role of growth monitoring in primary school children, including obesity, and to examine issues that might impact on the effectiveness and cost-effectiveness of such programmes. Data sources: Electronic databases were searched up to July 2005. Experts in the field were also consulted. Review methods: Data extraction and quality assessment were performed on studies meeting the review's inclusion criteria. The performance of growth monitoring to detect disorders of stature and obesity was evaluated against National Screening Committee (NSC) criteria. Results: In the 31 studies that were included in the review, there were no controlled trials of the impact of growth monitoring and no studies of the diagnostic accuracy of different methods for growth monitoring. Analysis of the studies that presented a 'diagnostic yield' of growth monitoring suggested that one-off screening might identify between 1:545 and 1:1793 new cases of potentially treatable conditions. Economic modelling suggested that growth monitoring is associated with health improvements [incremental cost per quality-adjusted life-year (QALY) of £9500] and indicated that monitoring was cost-effective 100% of the time over the given probability distributions for a willingness-to-pay threshold of £30,000 per QALY. Studies of obesity focused on the performance of body mass index against measures of body fat. A number of issues relating to human resources required for growth monitoring were identified, but data on attitudes to growth monitoring were extremely sparse. Preliminary findings from economic modelling suggested that primary prevention may be the most cost-effective approach to obesity management, but the model incorporated a great deal of uncertainty. Conclusions: This review has indicated the potential utility and cost-effectiveness of growth monitoring in terms of increased detection of stature-related disorders. It has also pointed strongly to the need for further research. Growth monitoring does not currently meet all NSC criteria. However, it is questionable whether some of these criteria can be meaningfully applied to growth monitoring given that short stature is not a disease in itself, but is used as a marker for a range of pathologies and as an indicator of general health status. Identification of effective interventions for the treatment of obesity is likely to be considered a prerequisite to any move from monitoring to a screening programme designed to identify individual overweight and obese children. Similarly, further long-term studies of the predictors of obesity-related co-morbidities in adulthood are warranted. A cluster randomised trial comparing growth monitoring strategies with no growth monitoring in the general population would most reliably determine the clinical effectiveness of growth monitoring. Studies of diagnostic accuracy, alongside evidence of effective treatment strategies, could provide an alternative approach. In this context, careful consideration would need to be given to target conditions and intervention thresholds. Diagnostic accuracy studies would require long-term follow-up of both short and normal children to determine the sensitivity and specificity of growth monitoring.
Abstract:
Aims: We conducted a systematic review of studies examining relationships between measures of beverage alcohol tax or price levels and alcohol sales or self-reported drinking. A total of 112 studies of alcohol tax or price effects were found, containing 1003 estimates of the tax/price–consumption relationship. Design: The studies included analyses of alternative outcome measures, varying subgroups of the population, several statistical models, and different units of analysis. Multiple estimates were coded from each study, along with numerous study characteristics. Using reported estimates, standard errors, t-ratios, sample sizes and other statistics, we calculated the partial correlation for the relationship between alcohol price or tax and sales or drinking measures for each major model or subgroup reported within each study. Random-effects models were used to combine studies for inverse-variance weighted overall estimates of the magnitude and significance of the relationship between alcohol tax/price and drinking. Findings: Simple means of reported elasticities are -0.46 for beer, -0.69 for wine and -0.80 for spirits. Meta-analytical results document highly significant relationships (P < 0.001) between alcohol tax or price measures and indices of sales or consumption of alcohol (aggregate-level r = -0.17 for beer, -0.30 for wine, -0.29 for spirits and -0.44 for total alcohol). Price/tax also affects heavy drinking significantly (mean reported elasticity = -0.28, individual-level r = -0.01, P < 0.01), but the magnitude of the effect is smaller than the effect on overall drinking. Conclusions: A large literature establishes that beverage alcohol prices and taxes are inversely related to drinking. Effects are large compared with other prevention policies and programs. Public policies that raise the price of alcohol are an effective means of reducing drinking.
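The core computation described above can be summarized with standard meta-analytic formulas (shown for illustration; the paper's exact implementation may differ):

\[
r_i = \frac{t_i}{\sqrt{t_i^2 + \mathrm{df}_i}},
\qquad
\widehat{\mathrm{Var}}(r_i) \approx \frac{(1 - r_i^2)^2}{\mathrm{df}_i},
\qquad
\bar r = \frac{\sum_i w_i\, r_i}{\sum_i w_i},
\quad
w_i = \frac{1}{\widehat{\mathrm{Var}}(r_i) + \hat{\tau}^2},
\]

where t_i and df_i are the reported t-ratio and degrees of freedom for estimate i, r_i is the implied partial correlation, and \hat{\tau}^2 is the estimated between-study variance in the random-effects model used to pool the estimates.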
Abstract:
The Stochastic Diffusion Search (SDS) was developed as a solution to the best-fit search problem. Thus, as a special case, it is capable of solving the transform-invariant pattern recognition problem. SDS is efficient and, although inherently probabilistic, produces very reliable solutions in widely ranging search conditions. However, to date a systematic formal investigation of its properties has not been carried out. This thesis addresses this problem. The thesis reports results pertaining to the global convergence of SDS and characterises its time complexity. However, the main emphasis of the work is on the resource allocation aspects of the Stochastic Diffusion Search's operation. The thesis introduces a novel model of the algorithm, generalising an Ehrenfest Urn Model from statistical physics. This approach makes it possible to obtain a thorough characterisation of the response of the algorithm, in terms of the parameters describing the search conditions, in the case of a unique best-fit pattern in the search space. This model is further generalised to account for different search conditions: two solutions in the search space, and search for a unique solution in a noisy search space. An approximate solution in the case of two alternative solutions is also proposed and compared with the predictions of the extended Ehrenfest Urn model. The analysis performed enabled a quantitative characterisation of the Stochastic Diffusion Search in terms of exploration and exploitation of the search space. It appeared that SDS is biased towards the latter mode of operation. This novel perspective on the Stochastic Diffusion Search led to an investigation of extensions of the standard SDS that would strike a different balance between these two modes of search space processing. Thus, two novel algorithms were derived from the standard Stochastic Diffusion Search, 'context-free' and 'context-sensitive' SDS, and their properties were analysed with respect to resource allocation. It appeared that they shared some of the desired features of their predecessor but also possessed properties not present in the classic SDS. The theory developed in the thesis is illustrated throughout with carefully chosen simulations of a best-fit search for a string pattern, a simple but representative domain that enables careful control of the search conditions.
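To make the algorithm under analysis concrete, here is a minimal Python sketch of standard SDS applied to best-fit string search, the illustrative domain mentioned above (a toy implementation following the usual test/diffusion description of SDS, not the thesis code; all names and parameters are assumptions):

```python
import random

def sds_string_search(text, pattern, n_agents=100, n_iterations=200, seed=0):
    """Minimal standard Stochastic Diffusion Search for the best-fit
    position of `pattern` in `text`.

    Each agent holds a candidate position (hypothesis). In the test phase it
    checks one randomly chosen character of the pattern against the text and
    becomes active on a match; in the diffusion phase each inactive agent
    polls a random agent and copies its hypothesis if that agent is active,
    otherwise re-seeds at random. Agents cluster at the best-fitting
    position even when no perfect match exists.
    """
    rng = random.Random(seed)
    positions = list(range(len(text) - len(pattern) + 1))
    hyps = [rng.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(n_iterations):
        # Test phase: partial evaluation of each agent's hypothesis.
        for i, h in enumerate(hyps):
            j = rng.randrange(len(pattern))
            active[i] = (text[h + j] == pattern[j])
        # Diffusion phase: recruitment of inactive agents.
        for i in range(n_agents):
            if not active[i]:
                other = rng.randrange(n_agents)
                hyps[i] = hyps[other] if active[other] else rng.choice(positions)

    # Report the hypothesis supported by the largest cluster of agents.
    return max(set(hyps), key=hyps.count)

print(sds_string_search("xxxyzsysteematicxyz", "systematic"))
```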