929 results for scientific computation
Abstract:
To investigate the effects of the middle atmosphere on climate, the World Climate Research Programme is supporting the project "Stratospheric Processes and their Role in Climate" (SPARC). A central theme of SPARC, to examine model simulations of the coupled troposphere-middle atmosphere system, is being performed through the initiative called GRIPS (GCM-Reality Intercomparison Project for SPARC). In this paper, an overview of the objectives of GRIPS is given. Initial activities include an assessment of the performance of middle atmosphere climate models, and preliminary results from this evaluation are presented here. It is shown that although all 13 models evaluated represent most major features of the mean atmospheric state, there are deficiencies in the magnitude and location of the features, which cannot easily be traced to the formulation (resolution or the parameterizations included) of the models. Most models show a cold bias in all locations, apart from the tropical tropopause region where they can be either too warm or too cold. The strengths and locations of the major jets are often misrepresented in the models. Looking at three-dimensional fields reveals, for some models, more severe deficiencies in the magnitude and positioning of the dominant structures (such as the Aleutian high in the stratosphere), although undersampling might explain some of these differences from observations. All the models have shortcomings in their simulations of the present-day climate, which might limit the accuracy of predictions of the climate response to ozone change and other anomalous forcing.
Abstract:
Approximate Bayesian computation (ABC) methods make use of comparisons between simulated and observed summary statistics to overcome the problem of computationally intractable likelihood functions. As the practical implementation of ABC requires computations based on vectors of summary statistics, rather than full data sets, a central question is how to derive low-dimensional summary statistics from the observed data with minimal loss of information. In this article we provide a comprehensive review and comparison of the performance of the principal methods of dimension reduction proposed in the ABC literature. The methods are split into three non-mutually exclusive classes consisting of best subset selection methods, projection techniques and regularization. In addition, we introduce two new methods of dimension reduction. The first is a best subset selection method based on Akaike and Bayesian information criteria, and the second uses ridge regression as a regularization procedure. We illustrate the performance of these dimension reduction techniques through the analysis of three challenging models and data sets.
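The best-subset-selection idea can be illustrated with a short sketch: regress a parameter from pilot simulations on every subset of candidate summary statistics and keep the subset that minimises BIC. The toy simulator, the candidate statistics, and all variable names below are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Minimal sketch of BIC-based best subset selection of ABC summary statistics.
# The simulator and candidate statistics are toy placeholders.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)

# Pilot simulations: draw a parameter from its prior and simulate data.
n_sim = 500
theta = rng.uniform(0.0, 10.0, n_sim)                 # prior draws
data = rng.normal(theta[:, None], 1.0, (n_sim, 50))   # toy simulator

# Candidate summary statistics (some informative, some not).
stats = np.column_stack([data.mean(1), data.var(1),
                         np.median(data, 1), data.max(1)])
names = ["mean", "var", "median", "max"]

def bic(X, y):
    """BIC of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    rss = np.sum((y - X1 @ beta) ** 2)
    n, k = len(y), X1.shape[1]
    return n * np.log(rss / n) + k * np.log(n)

# Search all non-empty subsets of candidate statistics, keep the lowest BIC.
best = min((bic(stats[:, list(s)], theta), s)
           for r in range(1, len(names) + 1)
           for s in combinations(range(len(names)), r))
print("selected statistics:", [names[i] for i in best[1]])
```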
Abstract:
Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models, but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values, and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data, and we then use these estimates as our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
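The two-stage construction described above can be sketched in a few lines: a pilot set of simulations is used to fit a linear regression of the parameter on the simulated data, and the fitted predictor (an estimate of the posterior mean) then serves as the summary statistic inside a rejection-ABC step. The toy model, prior, and acceptance fraction below are illustrative assumptions, not the authors' code.

```python
# Sketch of semi-automatic summary statistics: regress the parameter on
# simulated data, then use the fitted posterior-mean estimate as the summary.
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=30):
    """Toy simulator standing in for the intractable model."""
    return rng.normal(theta, 1.0, n)

# Stage 1: pilot simulations and a linear regression of theta on the data.
pilot_theta = rng.uniform(0.0, 10.0, 2000)
pilot_data = np.array([simulate(t) for t in pilot_theta])
X = np.column_stack([np.ones(len(pilot_theta)), pilot_data])
beta, *_ = np.linalg.lstsq(X, pilot_theta, rcond=None)

def summary(y):
    """Estimated posterior mean of theta given data y (used as the summary)."""
    return beta[0] + y @ beta[1:]

# Stage 2: rejection ABC using the learned summary statistic.
y_obs = simulate(4.2)                      # pretend observation
s_obs = summary(y_obs)
prior = rng.uniform(0.0, 10.0, 20000)
dist = np.array([abs(summary(simulate(t)) - s_obs) for t in prior])
posterior = prior[dist <= np.quantile(dist, 0.01)]   # keep the closest 1%
print("approximate posterior mean:", posterior.mean())
```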
Abstract:
Traditionally, the formal scientific output in most fields of natural science has been limited to peer-reviewed academic journal publications, with less attention paid to the chain of intermediate data results and their associated metadata, including provenance. In effect, this has constrained the representation and verification of the data provenance to the confines of the related publications. Detailed knowledge of a dataset's provenance is essential to establish the pedigree of the data for its effective re-use, and to avoid redundant re-enactment of the experiment or computation involved. It is increasingly important for open-access data to determine their authenticity and quality, especially considering the growing volumes of datasets appearing in the public domain. To address these issues, we present an approach that combines the Digital Object Identifier (DOI) – a widely adopted citation technique – with existing, widely adopted climate science data standards to formally publish detailed provenance of a climate research dataset as an associated scientific workflow. This is integrated with linked-data compliant data re-use standards (e.g. OAI-ORE) to enable a seamless link between a publication and the complete trail of lineage of the corresponding dataset, including the dataset itself.
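As a rough illustration of the linked-data side of such an approach, the snippet below builds a small OAI-ORE aggregation with rdflib, tying a publication DOI to a dataset DOI and a workflow description. All identifiers are hypothetical placeholders, and the vocabulary choices are only one plausible reading of the idea, not the paper's actual implementation.

```python
# Minimal sketch of an OAI-ORE aggregation linking a publication, a dataset
# and its provenance workflow. All DOIs/URIs are hypothetical placeholders.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF, DCTERMS

ORE = Namespace("http://www.openarchives.org/ore/terms/")

publication = URIRef("https://doi.org/10.0000/example-paper")     # placeholder
dataset     = URIRef("https://doi.org/10.0000/example-dataset")   # placeholder
workflow    = URIRef("https://example.org/workflows/run-42")      # placeholder
aggregation = URIRef("https://example.org/aggregations/example-1")

g = Graph()
g.bind("ore", ORE)
g.bind("dcterms", DCTERMS)

# The aggregation groups the publication, the dataset and its workflow.
g.add((aggregation, RDF.type, ORE.Aggregation))
for resource in (publication, dataset, workflow):
    g.add((aggregation, ORE.aggregates, resource))

# Provenance links: the dataset was produced via the workflow and is
# referenced by the publication.
g.add((dataset, DCTERMS.provenance, workflow))
g.add((publication, DCTERMS.references, dataset))

print(g.serialize(format="turtle"))
```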
Abstract:
Black carbon aerosol plays a unique and important role in Earth's climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr⁻¹ in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m⁻² with 90% uncertainty bounds of (+0.08, +1.27) W m⁻². Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m⁻². Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m⁻² with 90% uncertainty bounds of +0.17 to +2.1 W m⁻². Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m⁻², is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (-0.50 to +1.08) W m⁻² during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (-0.06 W m⁻² with 90% uncertainty bounds of -1.45 to +1.29 W m⁻²). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon.
In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility, play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation. This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.
Abstract:
Computational formalisms have been pushing the boundaries of the field of computing for the last 80 years and much debate has surrounded what computing entails: what it is, and what it is not. This paper seeks to explore the boundaries of the ideas of computation and provide a framework for enabling a constructive discussion of computational ideas. First, a review of computing is given, ranging from Turing Machines to interactive computing. Then, a variety of natural physical systems are considered for their computational qualities. From this exploration, a framework is presented under which all dynamical systems can be considered as instances of the class of abstract computational platforms. An abstract computational platform is defined by both its intrinsic dynamics and how it allows computation that is meaningful to an external agent through the configuration of constraints upon those dynamics. It is asserted that a platform's computational expressiveness is directly related to the freedom with which constraints can be placed. Finally, the requirements for a formal constraint description language are considered and it is proposed that Abstract State Machines may provide a reasonable basis for such a language.
Abstract:
As the Enlightenment drew to a close, translation had gradually acquired an increasingly important role in the international circulation and transmission of scientific knowledge. Yet comparatively little attention has been paid to the translators responsible for making such accounts accessible in other languages, some of whom were women. In this article I explore how European women cast themselves as intellectually enquiring, knowledgeable and authoritative figures in their translations. Focusing specifically on the genre of scientific travel writing, I investigate the narrative strategies deployed by women translators to mark their involvement in the process of scientific knowledge-making. These strategies ranged from rhetorical near-invisibility, driven by women's modest marginalization of their own public engagement in science, to the active advertisement of themselves as intellectually curious consumers of scientific knowledge. A detailed study of Elizabeth Helme's translation of the French ornithologist François Le Vaillant's Voyage dans l'intérieur de l'Afrique [Voyage into the Interior of Africa] (1790) allows me to explore how her reworking of the original text for an Anglophone reading public enabled her to engage cautiously – or sometimes more openly – with questions regarding how scientific knowledge was constructed, for whom and with which aims in mind.
Abstract:
The purpose of this study was to develop an understanding of the current state of scientific data sharing that stakeholders could use to develop and implement effective data sharing strategies and policies. The study developed a conceptual model to describe the process of data sharing, and the drivers, barriers, and enablers that determine stakeholder engagement. The conceptual model was used as a framework to structure discussions and interviews with key members of all stakeholder groups. Analysis of data obtained from interviewees identified a number of themes that highlight key requirements for the development of a mature data sharing culture.
Abstract:
Performance modelling is a useful tool in the lifecycle of high performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These predictions are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success is achieved in relating source code to achieved performance for the K10 series of Opterons, but the method is found to be inadequate for the next-generation Interlagos processor. This experience leads to the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented using the shallow water model as an example.
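A data-driven flavour of such performance modelling can be sketched very simply: fit a small analytic model of run time to measured timings and use it to extrapolate to a new core count. The model form, the timing numbers, and every name below are hypothetical; this is only an illustration of the general idea, not the paper's method.

```python
# Sketch of a simple performance model: fit
#   T(n, p) ~ a*(n/p) + b*n*log2(p) + c
# (computation + a crude communication term + overhead) to measured timings,
# then extrapolate to a new core count. All numbers are made up.
import numpy as np

# Hypothetical measurements: (grid points n, cores p, wall-clock seconds).
runs = np.array([
    (1_000_000,   16, 52.0),
    (1_000_000,   64, 14.8),
    (4_000_000,   64, 55.1),
    (4_000_000,  256, 16.9),
    (16_000_000, 256, 61.3),
])
n, p, t = runs[:, 0], runs[:, 1], runs[:, 2]

# Design matrix for the three model terms.
X = np.column_stack([n / p, n * np.log2(p) * 1e-9, np.ones_like(n)])
coeffs, *_ = np.linalg.lstsq(X, t, rcond=None)

def predict(n, p):
    """Predicted wall-clock time for n grid points on p cores."""
    return coeffs @ (n / p, n * np.log2(p) * 1e-9, 1.0)

print("predicted time for 16M points on 1024 cores: %.1f s"
      % predict(16_000_000, 1024))
```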
Abstract:
The typographical naivety of much scientific legibility research has caused designers to question the value of the research and the results. Examining the reasons underlying this questioning, the paper discusses the importance of designers being more accepting of scientific findings, and why legibility investigations have value. To demonstrate how typographic knowledge can be incorporated into the design of studies to increase their validity, the paper reports on a new investigation into the role of serifs when viewed at a distance. The experiment looks into the identification of the lowercase letters ‘j’, ‘i’, ‘l’, ‘b’, ‘h’, ‘n’, ‘u’, and ‘a’ in isolation. All of the letters originate in the same typeface and are presented in one version with serifs and one version without serifs. Although the experiment found no overall legibility difference between the sans serif and the serif versions, the study showed that letters with serifs placed on the vertical extremes were more legible at a distance than the same letters in a sans serif. These findings can therefore provide specific guidance on the design of individual letters and demonstrate the product of collaboration between designer and scientist on the planning, implementation, and analysis of the study.
Abstract:
The automatic transformation of sequential programs for efficient execution on parallel computers involves a number of analyses and restructurings of the input. Some of these analyses are based on computing array sections, a compact description of a range of array elements. Array sections describe the set of array elements that are either read or written by program statements. These sections can be compactly represented using shape descriptors such as regular sections, simple sections, or generalized convex regions. However, binary operations such as Union performed on these representations do not satisfy a straightforward closure property: for example, if the operands to Union are convex, the result may be nonconvex. Approximations are therefore used to enforce this closure property. These approximations introduce imprecision into the analyses and, furthermore, the imprecision resulting from successive operations has a cumulative effect. Delayed merging is a technique suggested and used in some of the existing analyses to minimize the effects of approximation. However, this technique does not guarantee an exact solution in a general setting. This article presents a generalized technique for computing Union precisely, which can overcome these imprecisions.
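The closure problem can be seen with a toy bounding-box version of array sections: the exact union of two rectangular sections is generally not rectangular, so a conservative analysis replaces it with the enclosing box and thereby over-approximates. The sketch below is a hypothetical illustration using simple dense boxes rather than strided regular sections; all names are invented for the example.

```python
# Toy illustration of why Union on array sections needs approximation.
# A "section" here is a rectangular box ((lo...), (hi...)) in each dimension;
# real analyses use strided regular sections, but the closure issue is the same.
from math import prod

def size(sec):
    """Number of array elements covered by a box section ((lo...), (hi...))."""
    lo, hi = sec
    return prod(h - l + 1 for l, h in zip(lo, hi))

def union_approx(a, b):
    """Conservative Union: the smallest box enclosing both sections."""
    (alo, ahi), (blo, bhi) = a, b
    lo = tuple(min(x, y) for x, y in zip(alo, blo))
    hi = tuple(max(x, y) for x, y in zip(ahi, bhi))
    return lo, hi

# Two disjoint 2-D sections, e.g. A[0:9, 0:9] and A[90:99, 90:99].
s1 = ((0, 0), (9, 9))
s2 = ((90, 90), (99, 99))

merged = union_approx(s1, s2)
exact = size(s1) + size(s2)                 # the sections do not overlap
print("exact elements   :", exact)          # 200
print("approximated box :", size(merged))   # 10000 -> large over-approximation

# Delayed merging keeps the operands separate and merges only when forced,
# postponing (but not eliminating) this loss of precision.
pending = [s1, s2]
print("delayed set covers exactly", sum(size(s) for s in pending), "elements")
```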
Abstract:
An expert panel was convened in October 2013 by the International Scientific Association for Probiotics and Prebiotics (ISAPP) to discuss the field of probiotics. It is now 13 years since the definition of probiotics and 12 years after guidelines were published for regulators, scientists and industry by the Food and Agriculture Organization of the United Nations and the WHO (FAO/WHO). The FAO/WHO definition of a probiotic—“live microorganisms which when administered in adequate amounts confer a health benefit on the host”—was reinforced as relevant and sufficiently accommodating for current and anticipated applications. However, inconsistencies between the FAO/WHO Expert Consultation Report and the FAO/WHO Guidelines were clarified to take into account advances in science and applications. A more precise use of the term 'probiotic' will be useful to guide clinicians and consumers in differentiating the diverse products on the market. This document represents the conclusions of the ISAPP consensus meeting on the appropriate use and scope of the term probiotic.
Abstract:
This paper investigates the feasibility of using approximate Bayesian computation (ABC) to calibrate and evaluate complex individual-based models (IBMs). As ABC evolves, various versions are emerging, but here we only explore the most accessible version, rejection-ABC. Rejection-ABC involves running models a large number of times, with parameters drawn randomly from their prior distributions, and then retaining the simulations closest to the observations. Although well-established in some fields, whether ABC will work with ecological IBMs is still uncertain. Rejection-ABC was applied to an existing 14-parameter earthworm energy budget IBM for which the available data consist of body mass growth and cocoon production in four experiments. ABC was able to narrow the posterior distributions of seven parameters, estimating credible intervals for each. ABC's accepted values produced slightly better fits than literature values did. The accuracy of the analysis was assessed using cross-validation and coverage, currently the best available tests. Of the seven unnarrowed parameters, ABC revealed that three were correlated with other parameters, while the remaining four were found to be not estimable given the data available. It is often desirable to compare models to see whether all component modules are necessary. Here we used ABC model selection to compare the full model with a simplified version which removed the earthworm's movement and much of the energy budget. We are able to show that inclusion of the energy budget is necessary for a good fit to the data. We show how our methodology can inform future modelling cycles, and briefly discuss how more advanced versions of ABC may be applicable to IBMs. We conclude that ABC has the potential to represent uncertainty in model structure, parameters and predictions, and to embed the often complex process of optimizing an IBM's structure and parameters within an established statistical framework, thereby making the process more transparent and objective.
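The rejection-ABC recipe the abstract describes can be written down in a few lines. The toy "model" below is only a stand-in for the earthworm IBM, and the priors, observation, and acceptance fraction are hypothetical choices made purely to show the shape of the algorithm.

```python
# Minimal rejection-ABC sketch: sample parameters from their priors, run the
# model, and keep the draws whose simulated output is closest to the
# observations. The "model" is a toy stand-in, not the earthworm IBM.
import numpy as np

rng = np.random.default_rng(42)

def model(growth_rate, assimilation):
    """Toy simulator returning a 'body mass growth curve' (placeholder)."""
    days = np.arange(0, 100, 10)
    return assimilation * (1.0 - np.exp(-growth_rate * days))

observed = model(0.05, 0.9) + rng.normal(0, 0.02, 10)   # pretend field data

n_sim, keep_frac = 50_000, 0.001
# Priors over the two toy parameters.
growth = rng.uniform(0.0, 0.2, n_sim)
assim = rng.uniform(0.0, 2.0, n_sim)

# Distance between each simulation and the observations (Euclidean here).
dist = np.array([np.linalg.norm(model(g, a) - observed)
                 for g, a in zip(growth, assim)])

# Retain the closest simulations as an approximate posterior sample.
accepted = dist <= np.quantile(dist, keep_frac)
print("posterior means:", growth[accepted].mean(), assim[accepted].mean())
```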