24 results for Scientist
in CentAUR: Central Archive University of Reading - UK
Abstract:
Compute grids are used widely in many areas of environmental science, but there has been limited uptake of grid computing by the climate modelling community, partly because the characteristics of many climate models make them difficult to use with popular grid middleware systems. In particular, climate models usually produce large volumes of output data, and running them usually involves complicated workflows implemented as shell scripts. For example, NEMO (Smith et al., 2008) is a state-of-the-art ocean model that is currently used for operational ocean forecasting in France, and will soon be used in the UK for both ocean forecasting and climate modelling. On a typical modern cluster, a one-year global ocean simulation at 1-degree resolution takes about three hours running on 40 processors, and produces roughly 20 GB of output as 50,000 separate files. 50-year simulations are common, during which the model is resubmitted as a new job after each year. Running NEMO relies on a set of complicated shell scripts and command-line utilities for data pre-processing and post-processing prior to job resubmission. Grid Remote Execution (G-Rex) is a pure Java grid middleware system that allows scientific applications to be deployed as Web services on remote computer systems, and then launched and controlled as if they were running on the user's own computer. Although G-Rex is general-purpose middleware, it has two key features that make it particularly suitable for remote execution of climate models: (1) output from the model is transferred back to the user while the run is in progress, to prevent it from accumulating on the remote system and to allow the user to monitor the model; (2) the client component is a command-line program that can easily be incorporated into existing model workflow scripts.
G-Rex has a REST (Fielding, 2000) architectural style, which allows client programs to be very simple and lightweight, and allows users to interact with model runs using only a basic HTTP client (such as a Web browser or the curl utility) if they wish. This design also allows new client interfaces to be developed in other programming languages with relatively little effort. The G-Rex server is a standard Web application that runs inside a servlet container such as Apache Tomcat, and is therefore easy for system administrators to install and maintain. G-Rex is employed as the middleware for the NERC Cluster Grid, a small grid of HPC clusters belonging to collaborating NERC research institutes. Currently the NEMO (Smith et al., 2008) and POLCOMS (Holt et al., 2008) ocean models are installed, and there are plans to install the Hadley Centre's HadCM3 model for use in the decadal climate prediction project GCEP (Haines et al., 2008). The science projects involving NEMO on the Grid have a particular focus on data assimilation (Smith et al., 2008), a technique that involves constraining model simulations with observations. The POLCOMS model will play an important part in the GCOMS project (Holt et al., 2008), which aims to simulate the world's coastal oceans. A typical use of G-Rex by a scientist to run a climate model on the NERC Cluster Grid proceeds as follows: (1) The scientist prepares input files on his or her local machine. (2) Using information provided by the Grid's Ganglia monitoring system, the scientist selects an appropriate compute resource. (3) The scientist runs the relevant workflow script on his or her local machine; the script is unmodified except that calls to run the model (e.g. with "mpirun") are simply replaced with calls to "GRexRun". (4) The G-Rex middleware automatically handles the uploading of input files to the remote resource and, during the run, the downloading of output files back to the user and their deletion from the remote system.
(5) The scientist monitors the output files, using familiar analysis and visualization tools on his or her own local machine. G-Rex is well suited to climate modelling because it addresses many of the middleware usability issues that have led to limited uptake of grid computing by climate scientists. It is a lightweight, low-impact and easy-to-install solution that is currently designed for use in relatively small grids such as the NERC Cluster Grid. A current topic of research is the use of G-Rex as an easy-to-use front-end to larger-scale Grid resources such as the UK National Grid Service.
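The in-run output transfer described in step (4) can be sketched as follows. This is a minimal illustration of the general pattern, not the real G-Rex client; the function and variable names are hypothetical. On each polling pass, any new output file is copied back to the user's machine and then deleted from the (simulated) remote run directory, so that output never accumulates remotely and the scientist can inspect it as the run progresses.

```python
import shutil
from pathlib import Path

def transfer_new_output(remote_dir: Path, local_dir: Path, seen: set) -> list:
    """One polling pass: copy each not-yet-seen output file from the
    (simulated) remote run directory to the user's local directory, then
    delete it remotely so output never accumulates there.
    Illustrative only -- these names are not the real G-Rex API."""
    transferred = []
    for f in sorted(remote_dir.iterdir()):
        if f.is_file() and f.name not in seen:
            shutil.copy2(f, local_dir / f.name)  # "download" to the user's machine
            f.unlink()                           # free space on the remote system
            seen.add(f.name)
            transferred.append(f.name)
    return transferred
```

A real client would run such a routine repeatedly while the model writes its output, which is what lets the scientist monitor the run with local analysis and visualization tools.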
Abstract:
As a result of climate change over the past 5000 years, the Sahara changed from savannah to a desert landscape. The beds of ancient lakes are home to snail shells and the petrified roots of trees and shrubs. Examples of human occupation can also be seen in the form of fireplaces and discarded tools. Examination of the geological history of these sites can give a clearer picture of how the climate changed and how humans coped with these changes.
Abstract:
One of the recurring themes in any discussion concerning the application of genetic transformation technology is the role of Intellectual Property Rights (IPR). This term covers both the content of patents and the confidential expertise, usually related to methodology and referred to as “Trade Secrets”. This review will explain the concepts behind patent protection, and will discuss the wide-ranging scope of existing patents that cover all aspects of transgenic technology, from selectable markers and novel promoters to methods of gene introduction. Although few of these patents have any significant commercial value, there are a small number of key patents that may restrict the “freedom to operate” of any company seeking to exploit the methods. Over the last twenty years, these restrictions have forced extensive cross-licensing between ag-biotech companies and have been one of the driving forces behind the consolidation of these companies. Although such issues are often considered to be of little interest to the academic scientist working in the public sector, they are of great importance in any debate about the role of “public-good breeding” and of the relationship between the public and private sectors.
Abstract:
One of the recurring themes of the debates concerning the application of genetic transformation technology has been the role of Intellectual Property Rights (IPR). This term covers both the content of patents and the confidential expertise usually related to methodology and referred to as 'Trade Secrets'. This review explains the concepts behind patent protection, and discusses the wide-ranging scope of existing patents that cover all aspects of transgenic technology, from selectable markers and novel promoters to methods of gene introduction. Although few of the patents in this area have any real commercial value, there are a small number of key patents that restrict the 'freedom to operate' of new companies seeking to exploit the methods. Over the last 20 years, these restrictions have forced extensive cross-licensing between ag-biotech companies and have been one of the driving forces behind the consolidation of these companies. Although such issues are often considered of little interest to the academic scientist working in the public sector, they are of great importance in any discussion of the role of 'public-good breeding' and of the relationship between the public and private sectors.
Abstract:
In this review we describe how concepts of shoot apical meristem function have developed over time. The role of the scientist is emphasized, as proposer, receiver and evaluator of ideas about the shoot apical meristem. Models have become increasingly popular over the last 250 years, and we consider their role. They provide valuable grounding for the development of hypotheses, but they also have a strong human element, and their uptake relies on varying degrees of persuasion. The most influential models are probably those that are supported by the most data, consolidating them as insights into reality; but models also work by altering how we see meristems, redirecting which data we collect and which questions we consider meaningful.
Abstract:
Focuses on recent advances in research on block copolymers, covering chemistry (synthesis), physics (phase behaviors, rheology, modeling), and applications (melts and solutions). Written by a team of internationally respected scientists from industry and academia, this text compiles and reviews the expanse of research that has taken place over the last five years into one accessible resource. Ian Hamley is the world-leading scientist in the field of block copolymer research. The book presents the recent advances in the area, covering chemistry, physics and applications; provides broad coverage from synthesis to fundamental physics through to applications; and examines the potential of block copolymers in nanotechnology as self-assembling soft materials.
Abstract:
In the earth sciences, data are commonly cast on complex grids in order to model irregular domains such as coastlines, or to evenly distribute grid points over the globe. It is common for a scientist to wish to re-cast such data onto a grid that is more amenable to manipulation, visualization, or comparison with other data sources. The complexity of the grids presents a significant technical difficulty to the regridding process. In particular, the regridding of complex grids may suffer from severe performance issues, in the worst case scaling with the product of the sizes of the source and destination grids. We present a mechanism for the fast regridding of such datasets, based upon the construction of a spatial index that allows fast searching of the source grid. We discover that the most efficient spatial index under test (in terms of memory usage and query time) is a simple look-up table. A kd-tree implementation was found to be faster to build and to give similar query performance at the expense of a larger memory footprint. Using our approach, we demonstrate that regridding of complex data may proceed at speeds sufficient to permit regridding on-the-fly in an interactive visualization application, or in a Web Map Service implementation. For large datasets with complex grids the new mechanism is shown to significantly outperform algorithms used in many scientific visualization packages.
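The look-up-table spatial index described above can be sketched as follows. This is a simplified illustration under stated assumptions (2-D points, Euclidean distance, a uniform bucket grid), not the authors' implementation: source grid points are bucketed into a uniform table in a single pass, and each destination point is matched by searching its own bucket and then growing rings of neighbouring buckets until no unexamined bucket can hold a closer point.

```python
import math
from collections import defaultdict

def build_lut(points, cell_size):
    """Bucket irregular source-grid points into a uniform look-up table.
    Building the index is a single pass over the source grid."""
    lut = defaultdict(list)
    for i, (x, y) in enumerate(points):
        lut[(math.floor(x / cell_size), math.floor(y / cell_size))].append(i)
    return lut

def nearest(points, lut, cell_size, qx, qy):
    """Index of the source point nearest (qx, qy): search the query's own
    bucket first, then grow rings of neighbouring buckets until no
    unexamined bucket can contain a closer point."""
    if not points:
        return None
    cx, cy = math.floor(qx / cell_size), math.floor(qy / cell_size)
    best, best_d2 = None, float("inf")
    ring = 0
    while True:
        for dx in range(-ring, ring + 1):
            for dy in range(-ring, ring + 1):
                if max(abs(dx), abs(dy)) != ring:
                    continue  # inner buckets were examined in earlier rings
                for i in lut.get((cx + dx, cy + dy), ()):
                    x, y = points[i]
                    d2 = (x - qx) ** 2 + (y - qy) ** 2
                    if d2 < best_d2:
                        best, best_d2 = i, d2
        # any point in an unexamined bucket is at least ring * cell_size away
        if best is not None and best_d2 <= (ring * cell_size) ** 2:
            return best
        ring += 1
```

With a well-chosen cell size each query touches only a handful of buckets, which is what makes regridding on-the-fly feasible; a kd-tree offers the same query interface with different build-time and memory trade-offs, as the review notes.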