32 results for Abbott, Andrew: Methods of discovery
Abstract:
As part of an international intercomparison project, the weak temperature gradient (WTG) and damped gravity wave (DGW) methods are used to parameterize large-scale dynamics in a set of cloud-resolving models (CRMs) and single-column models (SCMs). The WTG or DGW method is implemented using a configuration that couples a model to a reference state defined with profiles obtained from the same model in radiative-convective equilibrium. We investigated the sensitivity of each model to changes in SST, given a fixed reference state. This allowed a systematic comparison of the WTG and DGW methods across the different models, and of the behavior of those models under each method. The sensitivity to SST depends on both the large-scale parameterization method and the choice of the cloud model. In general, SCMs display a wider range of behaviors than CRMs. All CRMs using either the WTG or DGW method show an increase of precipitation with SST, while SCMs show sensitivities which are not always monotonic. CRMs using either method show a similar relationship between mean precipitation rate and column relative humidity, while SCMs exhibit a much wider range of behaviors. The DGW simulations produce large-scale velocity profiles which are smoother and less top-heavy than those produced by the WTG simulations. These large-scale parameterization methods provide a useful tool to identify the impact of parameterization differences on model behavior in the presence of two-way feedback between convection and the large-scale circulation.
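A minimal sketch of the core WTG diagnostic may help fix ideas: in the weak temperature gradient method, the large-scale vertical velocity is diagnosed so that adiabatic cooling relaxes the model's potential temperature toward the reference profile over a prescribed timescale. The profiles, timescale, and grid below are illustrative assumptions, not the configuration used in the intercomparison.

```python
import numpy as np

def wtg_vertical_velocity(theta, theta_ref, z, tau=3 * 3600.0):
    """Diagnose a large-scale vertical velocity (m/s) from the WTG balance
    w * d(theta_ref)/dz = (theta - theta_ref) / tau.

    theta, theta_ref : potential temperature profiles (K) on heights z (m)
    tau              : relaxation timescale (s); 3 h is a common choice
    (illustrative sketch only; boundary-layer treatment is omitted)
    """
    dtheta_ref_dz = np.gradient(theta_ref, z)
    # avoid division by a vanishing stratification near the surface
    dtheta_ref_dz = np.where(np.abs(dtheta_ref_dz) < 1e-4, 1e-4, dtheta_ref_dz)
    return (theta - theta_ref) / (tau * dtheta_ref_dz)

# Illustrative use with idealized profiles
z = np.linspace(0.0, 15e3, 60)                        # height grid (m)
theta_ref = 300.0 + 4.0e-3 * z                        # reference profile (K)
theta = theta_ref + 0.5 * np.sin(np.pi * z / 15e3)    # warm anomaly (K)
print(wtg_vertical_velocity(theta, theta_ref, z).max())
```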
Abstract:
The Bureau International des Poids et Mesures, the BIPM, was established by Article 1 of the Convention du Mètre, on 20 May 1875, and is charged with providing the basis for a single, coherent system of measurements to be used throughout the world. The decimal metric system, dating from the time of the French Revolution, was based on the metre and the kilogram. Under the terms of the 1875 Convention, new international prototypes of the metre and kilogram were made and formally adopted by the first Conférence Générale des Poids et Mesures (CGPM) in 1889. Over time this system developed, so that it now includes seven base units. In 1960 it was decided at the 11th CGPM that it should be called the Système International d’Unités, the SI (in English: the International System of Units). The SI is not static but evolves to match the world’s increasingly demanding requirements for measurements at all levels of precision and in all areas of science, technology, and human endeavour. This document is a summary of the SI Brochure, a publication of the BIPM which is a statement of the current status of the SI. The seven base units of the SI, listed in Table 1, provide the reference used to define all the measurement units of the International System. As science advances, and methods of measurement are refined, their definitions have to be revised. The more accurate the measurements, the greater the care required in the realization of the units of measurement.
Abstract:
This review article addresses recent advances in the analysis of foods and food components by capillary electrophoresis (CE). CE has found application to a number of important areas of food analysis, including quantitative chemical analysis of food additives, biochemical analysis of protein composition, and others. The speed, resolution and simplicity of CE, combined with low operating costs, make the technique an attractive option for the development of improved methods of food analysis for the new millennium.
Abstract:
In a sequential clinical trial, accrual of data on patients often continues after the stopping criterion for the study has been met. This is termed “overrunning.” Overrunning occurs mainly when the primary response from each patient is measured after some extended observation period. The objective of this article is to compare two methods of allowing for overrunning. In particular, simulation studies are reported that assess the two procedures in terms of how well they maintain the intended type I error rate. The effect on power resulting from the incorporation of “overrunning data” using the two procedures is evaluated.
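As an illustration of the kind of simulation study described, the sketch below estimates the type I error rate of a naively pooled final analysis in a two-stage design with overrunning; the boundaries, sample sizes, and amount of overrun are arbitrary assumptions, and the two specific adjustment methods compared in the article are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_type1(n1=100, n2=100, overrun=30, c1=2.797, c_final=1.977,
                   n_sim=20_000):
    """Estimate type I error when overrun data are naively pooled.

    Two-stage design under H0 (mean 0, SD 1): stop at stage 1 if |z1| > c1,
    otherwise continue; 'overrun' observations arrive after the stopping
    decision and are simply pooled into the final z-statistic.
    Boundary values are illustrative only.
    """
    rejections = 0
    for _ in range(n_sim):
        x1 = rng.normal(0.0, 1.0, n1)
        z1 = x1.mean() * np.sqrt(n1)
        if abs(z1) > c1:
            # trial stopped early: pool the overrun data and re-test at c1
            x_all = np.concatenate([x1, rng.normal(0.0, 1.0, overrun)])
            rejections += abs(x_all.mean() * np.sqrt(len(x_all))) > c1
        else:
            # trial continued: final analysis includes stage 2 and overrun data
            x_all = np.concatenate([x1, rng.normal(0.0, 1.0, n2 + overrun)])
            rejections += abs(x_all.mean() * np.sqrt(len(x_all))) > c_final
    return rejections / n_sim

print(f"estimated type I error: {simulate_type1():.3f}")
```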
Abstract:
Background: Meta-analyses based on individual patient data (IPD) are regarded as the gold standard for systematic reviews. However, the methods used for analysing and presenting results from IPD meta-analyses have received little discussion. Methods: We review 44 IPD meta-analyses published during the years 1999–2001. We summarize whether they obtained all the data they sought, what types of approaches were used in the analysis, including assumptions of common or random effects, and how they examined the effects of covariates. Results: Twenty-four of the 44 analyses focused on time-to-event outcomes, and most analyses (28) estimated treatment effects within each trial and then combined the results assuming a common treatment effect across trials. Three analyses failed to stratify by trial, analysing the data as if they came from a single mega-trial. Only nine analyses used random effects methods. Covariate-treatment interactions were generally investigated by subgrouping patients. Seven of the meta-analyses included data from fewer than 80% of the randomized patients sought, but did not address the resulting potential biases. Conclusions: Although IPD meta-analyses have many advantages in assessing the effects of health care, several aspects could be further developed to make fuller use of the potential of these time-consuming projects. In particular, IPD could be used to investigate more fully the influence of covariates on heterogeneity of treatment effects, both within and between trials. The impact of heterogeneity, or the use of random effects, is seldom discussed. There is thus considerable scope for enhancing the methods of analysis and presentation of IPD meta-analyses.
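The most commonly reported approach, estimating a treatment effect within each trial and then combining under a common-effect assumption, can be sketched as below; the trial data, effect measure (a difference in means), and inverse-variance weighting are illustrative assumptions rather than the reviewed analyses themselves.

```python
import numpy as np

def trial_effect(y_treat, y_ctrl):
    """Within-trial treatment effect (difference in means) and its variance."""
    diff = np.mean(y_treat) - np.mean(y_ctrl)
    var = np.var(y_treat, ddof=1) / len(y_treat) + np.var(y_ctrl, ddof=1) / len(y_ctrl)
    return diff, var

def fixed_effect_pool(effects, variances):
    """Inverse-variance (common-effect) pooling of the per-trial estimates."""
    w = 1.0 / np.asarray(variances)
    est = np.sum(w * np.asarray(effects)) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

# Illustrative IPD from three trials: (treatment arm, control arm)
rng = np.random.default_rng(0)
trials = [(rng.normal(0.3, 1, 80),  rng.normal(0.0, 1, 80)),
          (rng.normal(0.2, 1, 150), rng.normal(0.0, 1, 150)),
          (rng.normal(0.4, 1, 60),  rng.normal(0.0, 1, 60))]

effects, variances = zip(*(trial_effect(t, c) for t, c in trials))
est, se = fixed_effect_pool(effects, variances)
print(f"pooled effect {est:.3f} (95% CI {est - 1.96*se:.3f} to {est + 1.96*se:.3f})")
```

Stratifying by trial in this way, rather than pooling patients into a single mega-trial, preserves the within-trial randomization that the review identifies as essential.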
Abstract:
The commercial process in construction projects is an expensive and highly variable overhead. Collaborative working practices carry many benefits, which are widely disseminated, but little information is available about their costs. Transaction Cost Economics is a theoretical framework that seeks to explain why firms exist and how their boundaries are defined through the "make-or-buy" decision. However, it does not offer explanations for the relative costs of procuring construction projects in different ways. The idea that different methods of procurement have characteristically different costs is tested by way of a survey. The relevance of transaction cost economics to the study of commercial costs in procurement is doubtful. The survey shows that collaborative working methods cost neither more nor less than traditional methods, but the benefits of collaborative working mean that there is a great deal of enthusiasm for collaboration rather than competition.
Abstract:
The assessment of cellular effects by the aqueous phase of human feces (fecal water, FW) is a useful biomarker approach to study cancer risks and protective activities of food. In order to refine and develop the biomarker, different protocols of preparing FW were compared. Fecal waters were prepared by 3 methods: (A) direct centrifugation; (B) extraction of feces in PBS before centrifugation; and (C) centrifugation of lyophilized and reconstituted feces. Genotoxicity was determined in colon cells using the Comet assay. Selected samples were investigated for additional parameters related to carcinogenesis. Two of 7 FWs obtained by methods A and B were similarly genotoxic. Method B, however, yielded higher volumes of FW, allowing sterile filtration for long-term culture experiments. Four of 7 samples were non-genotoxic when prepared according to all 3 methods. FW from lyophilized feces and from fresh samples were equally genotoxic. FWs modulated cytotoxicity, paracellular permeability, and invasion, independent of their genotoxicity. All 3 methods of FW preparation can be used to assess genotoxicity. The higher volumes of FW obtained by preparation method B greatly enhance the perspectives of measuring different types of biological parameters and using these to disclose activities related to cancer development.
Abstract:
Physiological evidence from infrared video microscopy during the uncaging of glutamate has demonstrated the existence of excitable calcium ion channels in spine heads, highlighting the need for reliable models of spines. In this study we compare the three main methods of simulating excitable spines: Baer & Rinzel's continuum (B&R) model, Coombes' Spike-Diffuse-Spike (SDS) model, and paired cable and ion channel equations (Cable model). Tests are performed to determine how well the models approximate each other in terms of the speeds and heights of travelling waves. Significant quantitative differences are found between the models: travelling waves in the SDS model, in particular, travel at much lower speeds and sometimes reach much higher voltages than in the Cable or B&R models. Qualitative differences are also found between the B&R and SDS models over realistic parameter ranges. The cause of these differences is investigated and potential solutions are proposed.
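For orientation, the B&R continuum approach couples a passive dendritic cable to a continuum of active spine heads through the spine-stem resistance; a schematic form is sketched below (the notation and the lumping of spine-head kinetics into a single ionic current are assumptions made here for illustration, not details taken from the paper).

```latex
\[
% passive cable driven by the spine-stem current of a spine density \bar{n}(x)
\tau \frac{\partial V}{\partial t}
  = \lambda^{2}\frac{\partial^{2} V}{\partial x^{2}} - V
    + r_{m}\,\bar{n}(x)\,\frac{U - V}{R_{ss}},
\qquad
% excitable spine-head potential with active ionic current I_{\mathrm{ion}}
\hat{C}\frac{\partial U}{\partial t}
  = -I_{\mathrm{ion}}(U) - \frac{U - V}{R_{ss}} ,
\]
```

Here V(x,t) is the dendritic membrane potential, U(x,t) the spine-head potential, and R_ss the spine-stem resistance; the SDS model replaces the continuum by discrete spines firing stereotyped action potentials, while the Cable model retains explicit ion channel equations in each compartment.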
Abstract:
Importance of biomarker discovery in men's cancer diagnosis and prognosis: Each year around 10,000 men in the UK die as a result of prostate cancer (PCa), making it the 3rd most common cancer behind lung and breast cancer; worldwide more than 670,000 men are diagnosed with the disease every year [1]. Current methods of diagnosing PCa rely mainly on the detection of elevated prostate-specific antigen (PSA) levels in serum and/or physical examination by a doctor to detect an abnormal prostate. PSA is a glycoprotein produced almost exclusively by the epithelial cells of the prostate gland [2]. Its role is not fully understood, although it is known that it forms part of the ejaculate and that its function is to liquefy the semen, giving the sperm the mobility to swim. Raised serum PSA levels are thought to be due both to increased production of PSA by the proliferating prostate cells and to the disrupted architecture of the affected cells, which allows PSA to pass more easily into the wider circulatory system.
Abstract:
This paper examines issues related to potential analytical performance systems for global property funds. These include traditional attribution methods but also cover the performance concepts of alpha and beta widely used in other asset classes. We look at several questions: what creates beta, and what drives alpha, in real estate investment? How can they be measured and isolated? How do these concepts relate to traditional attribution systems? Can performance records and performance fees adequately distinguish between these drivers? We illustrate these issues by reference to a case study addressing the complete performance record of a single unlisted fund.
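In other asset classes, alpha and beta are typically estimated by regressing a fund's excess returns on the benchmark's excess returns; a minimal sketch of that calculation is below (the quarterly return series are invented for illustration and are not the fund studied in the paper).

```python
import numpy as np

def alpha_beta(fund_returns, benchmark_returns, risk_free=0.0):
    """Estimate per-period alpha and beta by OLS of fund excess returns
    on benchmark excess returns."""
    rf = np.asarray(fund_returns) - risk_free
    rb = np.asarray(benchmark_returns) - risk_free
    beta = np.cov(rf, rb)[0, 1] / np.var(rb, ddof=1)
    alpha = rf.mean() - beta * rb.mean()
    return alpha, beta

# Illustrative quarterly returns for an unlisted fund and a property index
rng = np.random.default_rng(42)
index = rng.normal(0.015, 0.03, 40)                    # benchmark returns
fund = 0.002 + 0.8 * index + rng.normal(0, 0.01, 40)   # fund returns
alpha, beta = alpha_beta(fund, index)
print(f"alpha per quarter: {alpha:.4f}, beta: {beta:.2f}")
```

Whether such a regression-based split is meaningful for appraisal-based, illiquid property benchmarks is precisely one of the issues the paper raises.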
Abstract:
Automatic keyword or keyphrase extraction is concerned with assigning keyphrases to documents based on words from within the document. Previous studies have shown that in a significant number of cases author-supplied keywords are not appropriate for the document to which they are attached. This can be either because they represent what the author believes a paper is about rather than what it actually is, or because they include keyphrases that are more classificatory than explanatory, e.g., "University of Poppleton" rather than "Knowledge Discovery in Databases". Thus there is a need for a system that can generate an appropriate and diverse range of keyphrases that reflect the document. This paper proposes two possible solutions that examine the synonyms of words and phrases in the document to find the underlying themes, and present these as appropriate keyphrases. Using three different freely available thesauri, the work examines two different methods of producing keywords and compares the outcomes across multiple strands in the timeline. The primary method takes n-grams of the source document phrases and examines their synonyms, while the secondary method groups outputs by their synonyms. The experiments show that the primary method produces good results and that the secondary method produces both good results and potential for future work. In addition, the different qualities of the thesauri are examined, and it is concluded that the more entries a thesaurus has, the better it is likely to perform; the age of the thesaurus and the size of each entry do not correlate with performance.
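A rough sketch of the primary method as described, extracting n-grams and looking up their synonyms to surface recurring themes, is given below; it uses WordNet via NLTK as a stand-in thesaurus (the paper's three thesauri and its scoring details are not reproduced), so the WordNet corpus must be downloaded beforehand.

```python
# Count how often the synonyms of document unigrams/bigrams recur,
# and propose the most frequent shared synonyms as keyphrases.
# Requires: pip install nltk; then nltk.download("wordnet") once.
from collections import Counter
import re

from nltk.corpus import wordnet as wn

def candidate_ngrams(text, n_max=2):
    words = re.findall(r"[a-z]+", text.lower())
    for n in range(1, n_max + 1):
        for i in range(len(words) - n + 1):
            # WordNet joins multiword entries with underscores
            yield "_".join(words[i:i + n])

def keyphrases(text, top_k=5):
    theme_counts = Counter()
    for gram in candidate_ngrams(text):
        for synset in wn.synsets(gram):
            # credit every synonym of the candidate towards a shared "theme"
            for lemma in synset.lemmas():
                theme_counts[lemma.name().replace("_", " ")] += 1
    return [phrase for phrase, _ in theme_counts.most_common(top_k)]

sample = ("Knowledge discovery in databases uses data mining methods "
          "to extract patterns and knowledge from large databases.")
print(keyphrases(sample))
```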
Abstract:
We discuss several methods of calculating the DIS structure function F2(x,Q2) based on BFKL-type small-x resummations. Taking into account new HERA data ranging down to small x and low Q2, the pure leading-order BFKL-based approach is excluded. Other methods based on high-energy factorization are closer to conventional renormalization group equations. Despite several difficulties and ambiguities in combining the renormalization group equations with small-x resummed terms, we find that a fit to the current data is hardly feasible, since the data in the low-Q2 region are not as steep as the BFKL formalism predicts. Thus we conclude that deviations from the (successful) renormalization group approach towards summing up logarithms in 1/x are disfavoured by experiment.
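For context, the leading-order BFKL resummation predicts a power-like rise of the structure function at small x, with the exponent fixed by the hard-pomeron intercept; schematically (a textbook fixed-coupling form, quoted here only to indicate the steepness at issue):

```latex
\[
F_{2}(x,Q^{2}) \;\sim\; x^{-\lambda},
\qquad
\lambda \;=\; \frac{4 N_{c}\,\alpha_{s}}{\pi}\,\ln 2 \;\approx\; 0.5 ,
\]
```

which is considerably steeper than the rise observed in the low-Q2 HERA data discussed above.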
Abstract:
Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics, rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three non-mutually-exclusive classes consisting of best subset selection methods, projection techniques and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets.
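To make the regularized-projection idea concrete, the sketch below runs rejection ABC after projecting the raw summaries with a ridge regression of the parameter on pilot-run summaries (in the spirit of semi-automatic projection methods); the toy model, pilot-set size, and acceptance tolerance are all assumptions and do not reproduce the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """Toy model: n draws from N(theta, 1); summaries deliberately redundant/noisy."""
    x = rng.normal(theta, 1.0, n)
    return np.array([x.mean(), np.median(x), x.std(ddof=1), x.max(), rng.normal()])

def ridge_fit(S, theta, lam=1.0):
    """Closed-form ridge regression of theta on summaries S (with intercept)."""
    X = np.column_stack([np.ones(len(S)), S])
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ theta)

def project(S, coef):
    X = np.column_stack([np.ones(len(S)), S])
    return X @ coef

# "Observed" data generated from an unknown theta
s_obs = simulate(2.0)

# Pilot stage: learn the one-dimensional projection of the summaries
theta_pilot = rng.uniform(-5, 5, 2000)
S_pilot = np.array([simulate(t) for t in theta_pilot])
coef = ridge_fit(S_pilot, theta_pilot)

# Rejection ABC on the projected summary
theta_prop = rng.uniform(-5, 5, 50_000)
S_prop = np.array([simulate(t) for t in theta_prop])
dist = np.abs(project(S_prop, coef) - project(s_obs[None, :], coef)[0])
accepted = theta_prop[dist < np.quantile(dist, 0.01)]
print(f"posterior mean ~ {accepted.mean():.2f} (true theta = 2.0)")
```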
Abstract:
As climate changes, temperatures will play an increasing role in determining crop yield. Both climate model error and poorly constrained physiological thresholds limit the predictability of yield. We used a perturbed-parameter climate model ensemble, with two methods of bias-correction, as input to a regional-scale wheat simulation model over India to examine future yields. This model configuration accounted for uncertainty in climate, planting date, optimization, and temperature-induced changes in development rate and reproduction. It also accounted for lethal temperatures, which have been somewhat neglected to date. Using uncertainty decomposition, we found that fractional uncertainty due to temperature-driven processes in the crop model was on average larger than climate model uncertainty (0.56 versus 0.44), and that the crop model uncertainty is dominated by crop development. Simulations with the raw and the bias-corrected climate data did not agree on the impact on future wheat yield, nor on its geographical distribution. However, the method of bias-correction was not an important source of uncertainty. We conclude that bias-correction of climate model data and improved constraints, especially on crop development, are critical for robust impact predictions.
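Two simple flavours of bias-correction that impact studies commonly contrast are a mean ("delta") shift and empirical quantile mapping; a hedged sketch of both follows (the temperature series are invented for illustration and are not the ensemble used in the study, which does not name its two methods here).

```python
import numpy as np

def delta_correction(model_hist, obs, model_future):
    """Shift the future series by the historical mean bias of the model."""
    return model_future - (model_hist.mean() - obs.mean())

def quantile_mapping(model_hist, obs, model_future, n_quantiles=100):
    """Map each future value through the historical model->observation
    quantile relation (values beyond the historical range are clamped)."""
    q = np.linspace(0.0, 1.0, n_quantiles)
    return np.interp(model_future, np.quantile(model_hist, q), np.quantile(obs, q))

# Illustrative daily temperatures (degC): the model runs warm and too variable
rng = np.random.default_rng(3)
obs = rng.normal(25.0, 3.0, 3650)
model_hist = rng.normal(27.0, 4.0, 3650)
model_future = rng.normal(29.0, 4.0, 3650)

print(delta_correction(model_hist, obs, model_future).mean())
print(quantile_mapping(model_hist, obs, model_future).mean())
```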
Abstract:
This paper employs an extensive Monte Carlo study to test the size and power of the BDS and close-return methods of testing for departures from independent and identical distribution. It is found that the finite-sample properties of the BDS test are far superior and that the close-return method cannot be recommended as a model diagnostic. Neither test can be reliably used for very small samples, while the close-return test has low power even at large sample sizes.
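A size study of the kind described can be sketched as follows, assuming the BDS implementation in statsmodels (statsmodels.tsa.stattools.bds) is available; the sample sizes, embedding dimension, and number of replications are arbitrary choices for illustration, and the close-return test is not implemented here.

```python
# Empirical size of the BDS test under IID data, for a few sample sizes.
# Assumes statsmodels is installed and exposes statsmodels.tsa.stattools.bds.
import numpy as np
from statsmodels.tsa.stattools import bds

rng = np.random.default_rng(7)

def empirical_size(n, n_rep=500, alpha=0.05, max_dim=2):
    """Fraction of IID samples rejected at nominal level alpha."""
    rejections = 0
    for _ in range(n_rep):
        x = rng.standard_normal(n)                 # IID null data
        _, pvalue = bds(x, max_dim=max_dim)
        p2 = np.atleast_1d(pvalue)[0]              # p-value for embedding dimension 2
        rejections += p2 < alpha
    return rejections / n_rep

for n in (50, 100, 250):
    print(n, empirical_size(n))
```

A rejection frequency close to the nominal 5% indicates good finite-sample size; a power study would repeat the exercise with data generated under a dependent alternative.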