878 results for Scientists.


Relevance:

10.00%

Publisher:

Abstract:

Multi-gas approaches to climate change policies require a metric establishing ‘equivalences’ among emissions of various species. Climate scientists and economists have proposed four kinds of such metrics and debated their relative merits. We present a unifying framework that clarifies the relationships among them. We show, as have previous authors, that the global warming potential (GWP), used in international law to compare emissions of greenhouse gases, is a special case of the global damage potential (GDP), assuming (1) a finite time horizon, (2) a zero discount rate, (3) constant atmospheric concentrations, and (4) impacts that are proportional to radiative forcing. Both the GWP and GDP follow naturally from a cost–benefit framing of the climate change issue. We show that the global temperature change potential (GTP) is a special case of the global cost potential (GCP), assuming a (slight) fall in the global temperature after the target is reached. We show how the four metrics should be generalized if there are intertemporal spillovers in abatement costs, distinguishing between private (e.g., capital stock turnover) and public (e.g., induced technological change) spillovers. Both the GTP and GCP follow naturally from a cost-effectiveness framing of the climate change issue. We also argue that if (1) damages are zero below a threshold and (2) infinitely large above a threshold, then cost-effectiveness analysis and cost–benefit analysis lead to identical results. Therefore, the GCP is a special case of the GDP. The UN Framework Convention on Climate Change uses the GWP, a simplified cost–benefit concept. The UNFCCC is framed around the ultimate goal of stabilizing greenhouse gas concentrations. Once a stabilization target has been agreed under the convention, implementation is clearly a cost-effectiveness problem. It would therefore be more consistent to use the GCP or its simplification, the GTP.
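
To make the relationship concrete, the two cost–benefit metrics can be written side by side. The rendering below is schematic, with assumed notation (radiative forcing RF_i of gas i, damage function D, discount rate r, time horizon H); it is not the authors' exact formulation.

```latex
% Schematic cost-benefit metrics; notation is assumed, not taken from the paper.
\[
\mathrm{GWP}_i(H) =
  \frac{\int_0^{H} \mathrm{RF}_i(t)\,\mathrm{d}t}
       {\int_0^{H} \mathrm{RF}_{\mathrm{CO_2}}(t)\,\mathrm{d}t},
\qquad
\mathrm{GDP}_i =
  \frac{\int_0^{\infty} D\bigl(\mathrm{RF}_i(t)\bigr)\,e^{-rt}\,\mathrm{d}t}
       {\int_0^{\infty} D\bigl(\mathrm{RF}_{\mathrm{CO_2}}(t)\bigr)\,e^{-rt}\,\mathrm{d}t}.
\]
```

Truncating the GDP's integrals at H, setting r = 0, evaluating forcing against constant background concentrations, and taking D proportional to RF reduces the GDP to the GWP; these are precisely the four assumptions listed above.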

Relevance:

10.00%

Publisher:

Abstract:

The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change appear mainly as changes in the statistics of regional weather variations, the scientific and computational requirements for reliable prediction are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenge of developing the capability to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities, each with computing capability of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade, and each with sufficient scientific workforce to develop and maintain the software and data-analysis infrastructure. Such facilities will make it possible to investigate what horizontal and vertical resolution atmospheric and ocean models need for more confident predictions at the regional and local level; current computing power has severely constrained such investigation, which is now badly needed. These facilities will also provide the world's scientists with computational laboratories for fundamental research on weather–climate interactions using 1-km-resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure, including hardware, software, and data-analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it, and will ultimately enable the climate community to provide society with climate predictions based on the best available science and the most advanced technology.
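
As a rough illustration of why the recommended computing capability grows a thousandfold, consider how model cost scales with grid spacing. The sketch below is a back-of-envelope estimate under assumed scaling (cost proportional to the inverse cube of horizontal grid spacing: two horizontal dimensions plus a CFL-limited time step) and an assumed 100 km baseline; neither figure is from the paper.

```python
# Back-of-envelope cost scaling for finer climate-model grids (illustrative
# assumptions only: cost ~ dx**-3, with a 100 km reference model).

def relative_cost(dx_km, dx_ref_km=100.0, exponent=3):
    """Cost relative to the reference model, assuming cost ~ dx**-exponent."""
    return (dx_ref_km / dx_km) ** exponent

for dx in (100, 25, 10, 1):
    print(f"{dx:>3} km grid: ~{relative_cost(dx):>9,.0f}x the 100 km cost")
```

On these assumptions, a 1-km model costs about a million times more than a 100 km model, which is why kilometre-scale research ambitions point toward exaflop machines.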

Relevance:

10.00%

Publisher:

Abstract:

The need for, and benefits of, establishing an international Earth-system Prediction Initiative (EPI) are discussed by scientists associated with the World Meteorological Organization (WMO) World Weather Research Programme (WWRP), the World Climate Research Programme (WCRP), the International Geosphere–Biosphere Programme (IGBP), the Global Climate Observing System (GCOS), and the natural-hazards and socioeconomic communities. The proposed initiative will provide research and services to accelerate advances in weather, climate, and Earth-system prediction and in the use of this information by global societies. It will build upon the WMO, the Group on Earth Observations (GEO), the Global Earth Observation System of Systems (GEOSS), and the International Council for Science (ICSU) to coordinate the effort across the weather, climate, Earth-system, natural-hazards, and socioeconomic disciplines. It will require (i) advanced high-performance computing facilities supporting a worldwide network of research and operational modeling centers and early-warning systems; (ii) science, technology, and education projects to enhance knowledge, awareness, and utilization of weather, climate, environmental, and socioeconomic information; (iii) investments in maintaining existing observational capabilities and developing new ones; and (iv) infrastructure to transition achievements into operational products and services.

Relevance:

10.00%

Publisher:

Abstract:

This paper has two aims. The first is to present cases in which scientists developed a defensive system for their homeland: Blackett and the air defense of Britain in WWII, Forrester and the SAGE system for North America in the Cold War, and Archimedes' work defending Syracuse during the Second Punic War. In each case the historical context and the individual's other achievements are outlined, and the contribution's relationship to OR/MS is described. The second aim is to consider the features the cases share and to examine them in terms of contemporary OR/MS methodology, with particular reference to a recent analysis of the field's strengths and weaknesses. This allows both a critical appraisal of the field and a set of potential responses for strengthening it. Although the lessons are mixed, the overall conclusion is that the cases are examples to build on and that OR/MS retains the ability to do high-stakes work.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVES: The prediction of protein structure and the precise understanding of protein folding and unfolding processes remain among the greatest challenges in structural biology and bioinformatics. Computer simulations based on molecular dynamics (MD) are at the forefront of the effort to gain a deeper understanding of these complex processes. Currently, these MD simulations usually cover tens of nanoseconds, generate large amounts of conformational data, and are computationally expensive. More and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing them. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate the data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository that allows researchers to pool and share protein unfolding data. METHODS: To adequately organize, manage, and analyze the data generated by unfolding simulation studies, we designed a data warehouse system embedded in a grid environment to facilitate the seamless sharing of available computing resources and thus enable many groups to share complex molecular dynamics simulations on a more regular basis. RESULTS: To gain insight into the conformational fluctuations and stability of the monomeric forms of the amyloidogenic protein transthyretin (TTR), molecular dynamics unfolding simulations of the monomer of human TTR were conducted. Trajectory data and metadata of the wild-type (WT) protein and the highly amyloidogenic variant L55P-TTR serve as the test case for the data warehouse. CONCLUSIONS: Web and grid services, especially predefined data mining services that can run on or 'near' the data repository of the data warehouse, are likely to play a pivotal role in the analysis of molecular dynamics unfolding data.
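
As a concrete illustration of what such a warehouse might catalogue, here is a minimal sketch of per-trajectory metadata with a warehouse-style query. All field names, values, and URIs are hypothetical, invented for illustration rather than taken from the system described above.

```python
# Hypothetical per-trajectory metadata record; fields and values are
# illustrative only, not the actual schema of the paper's data warehouse.
from dataclasses import dataclass

@dataclass
class TrajectoryMeta:
    protein: str        # e.g. the human TTR monomer of the test case
    variant: str        # "WT" or "L55P"
    length_ns: float    # simulated time in nanoseconds
    n_frames: int       # number of stored conformations
    storage_uri: str    # where the raw trajectory lives on the grid

catalogue = [
    TrajectoryMeta("TTR monomer", "WT", 10.0, 5000, "grid://siteA/ttr/wt/run1"),
    TrajectoryMeta("TTR monomer", "L55P", 10.0, 5000, "grid://siteB/ttr/l55p/run1"),
]

# A typical warehouse-style query: find all runs of the amyloidogenic variant.
print([t.storage_uri for t in catalogue if t.variant == "L55P"])
```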

Relevance:

10.00%

Publisher:

Abstract:

With increasing awareness of protein folding disorders, the explosion of genomic information, and the need for efficient ways to predict protein structure, protein folding and unfolding have become a central issue in molecular sciences research. Molecular dynamics computer simulations are increasingly employed to understand the folding and unfolding of proteins. Running protein unfolding simulations is computationally expensive, and finding ways to enhance performance is a grid computing issue in its own right. However, more and more groups run such simulations and generate a myriad of data, which raises new challenges in managing and analyzing them. Because of the vast range of proteins researchers want to study and simulate, the computational effort needed to generate the data, the large data volumes involved, and the different types of analyses scientists need to perform, it is desirable to provide a public repository that allows researchers to pool and share protein unfolding data. This paper describes efforts to provide a grid-enabled data warehouse for protein unfolding data. We outline the challenge and present first results on the design and implementation of the data warehouse.

Relevance:

10.00%

Publisher:

Abstract:

The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data and a data warehouse. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: first, how grid data management technologies can be used to access the distributed data warehouses; and second, how the grid can be used to transfer analysis programs to the primary repositories. The latter is an important and challenging aspect of P-found because the data volumes involved are too large to be centralised. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling new scientific discoveries.

Relevance:

10.00%

Publisher:

Abstract:

The P-found protein folding and unfolding simulation repository is designed to allow scientists to perform data mining and other analyses across large, distributed simulation data sets. There are two storage components in P-found: a primary repository of simulation data, which is used to populate the second component, a data warehouse containing important molecular properties. These properties may be used for data mining studies. Here we demonstrate how grid technologies can support multiple, distributed P-found installations. In particular, we look at two aspects: first, how grid data management technologies can be used to access the distributed data warehouses; and second, how the grid can be used to transfer analysis programs to the primary repositories. The latter is an important and challenging aspect of P-found, due to the large data volumes involved and the desire of scientists to maintain control of their own data. The grid technologies we are developing with the P-found system will allow new large data sets of protein folding simulations to be accessed and analysed in novel ways, with significant potential for enabling scientific discovery.
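
The "transfer analysis programs to the data" idea can be sketched in a few lines. Everything below is invented for illustration (the toy repositories, the run_remote helper, and the per-frame energies); it shows the pattern, not P-found's actual interfaces.

```python
# Minimal sketch of the "move the code to the data" pattern: the analysis
# travels to the repository, and only the small result travels back.
repositories = {
    "repo-A": {"run-001": [-500.2, -498.7, -495.1]},  # toy per-frame energies
    "repo-B": {"run-042": [-510.0, -509.5, -501.3]},
}

def run_remote(repo_name, dataset, analysis):
    """Stand-in for executing `analysis` at the site holding the data, so
    multi-terabyte trajectories never cross the network."""
    frames = repositories[repo_name][dataset]
    return analysis(frames)

def mean_energy(frames):
    return sum(frames) / len(frames)

print(run_remote("repo-A", "run-001", mean_energy))
print(run_remote("repo-B", "run-042", mean_energy))
```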

Relevance:

10.00%

Publisher:

Abstract:

The International Plant Proteomics Organization (INPPO) is a non-profit organization of people who are involved or interested in plant proteomics. INPPO is growing steadily in membership and activity, largely because plant proteomics researchers worldwide have recognized the need for such a global platform. Their active participation produced rapid growth within the first year of INPPO's official launch in 2011 via its website (www.inppo.com) and the publication of a 'viewpoint paper' in a special issue of PROTEOMICS (May 2011). Here we highlight the progress achieved in 2011 and the targets for 2012 and beyond. INPPO has established a working administrative structure comprising a Core Committee (CC; composed of the President, Vice-President, and General Secretaries), an Executive Council (EC), and a General Body (GB), directed toward achieving the INPPO objectives through its proposed initiatives. Various committees and subcommittees are being made operational through discussion among scientists around the globe. INPPO's primary aim of popularizing plant proteomics research within the biological sciences has also been recognized by PROTEOMICS, which introduced a section devoted to plant proteomics in January 2012, following the journal's first issue devoted to plant proteomics in May 2011. To disseminate organizational activities to the scientific community, INPPO has launched a biannual (January and July) newsletter entitled "INPPO Express: News & Views", whose first issue was published in January 2012. INPPO is also planning several activities for 2012, including Education Outreach committee programs in different countries and the development of research ideas and proposals prioritizing crop and horticultural plants, while keeping close interaction with proteomics programs on model plants such as Arabidopsis thaliana, rice, and Medicago truncatula. Altogether, INPPO's progress and upcoming activities reflect the immense support, dedication, and hard work of all members of the INPPO family, as well as wide encouragement and support from the scientific and non-scientific communities.

Relevance:

10.00%

Publisher:

Abstract:

Simulating spiking neural networks is of great interest to scientists who want to model the functioning of the brain. However, large-scale models are expensive to simulate because of the number and interconnectedness of neurons in the brain. Furthermore, where such simulations are used in an embodied setting, the simulation must run in real time to be useful. In this paper we present NeMo, a platform for such simulations that achieves high performance through the use of highly parallel commodity hardware in the form of graphics processing units (GPUs). NeMo makes use of the Izhikevich neuron model, which provides a range of realistic spiking dynamics while being computationally efficient. Our GPU kernel can deliver up to 400 million spikes per second. This corresponds to a real-time simulation of around 40 000 neurons under biologically plausible conditions, with 1000 synapses per neuron and a mean firing rate of 10 Hz.
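
For reference, the Izhikevich model named above reduces to two coupled update equations plus a reset rule. The minimal sketch below uses the published regular-spiking parameter set and a constant input current; NeMo runs this arithmetic for many neurons in parallel on the GPU, but the per-neuron logic is just this.

```python
# Izhikevich neuron model (Izhikevich, 2003): v is the membrane potential,
# u a recovery variable; a spike fires when v reaches 30 mV.
a, b, c, d = 0.02, 0.2, -65.0, 8.0   # regular-spiking cortical parameters
I = 10.0                             # constant input current
v, u = c, b * c
spikes = []

for t in range(1000):                # 1000 steps of 1 ms = 1 s simulated
    if v >= 30.0:                    # spike: reset the membrane state
        spikes.append(t)
        v, u = c, u + d
    # two half-steps for v improve stability, as in Izhikevich's reference code
    v += 0.5 * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    v += 0.5 * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += a * (b * v - u)

print(f"{len(spikes)} spikes in 1 s of simulated time")
```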

Relevance:

10.00%

Publisher:

Abstract:

In order to influence global policy effectively, conservation scientists need to provide robust predictions of the impact of alternative policies on biodiversity and to measure progress towards goals using reliable indicators. We present a framework for using biodiversity indicators predictively to inform policy choices at a global level, illustrated with two case studies in which we project forward the impacts of feasible policies on trends in biodiversity and in relevant indicators. The policies are based on targets agreed at the Convention on Biological Diversity (CBD) meeting in Nagoya in October 2010. The first case study compares protected-area policies for African mammals, assessed using the Red List Index; the second uses the Living Planet Index to assess the impact of a complete halt, versus a reduction, in bottom trawling. In the protected-areas example, we find that the indicator can aid decision-making because it differentiates between the impacts of the different policies. In the bottom-trawling example, the indicator exhibits some counter-intuitive behaviour, due to over-representation of some taxonomic and functional groups in the indicator and to contrasting impacts of the policies on different groups caused by trophic interactions. Our results support the need for further research on how to use predictive models and indicators to credibly track trends and inform policy. If their work is to be useful and relevant, scientists must make testable predictions about the impact of global policy on biodiversity, to ensure that targets such as those set at Nagoya catalyse effective and measurable change.
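
To show the mechanics behind an aggregate indicator such as the Living Planet Index, here is a hedged sketch: a geometric mean of year-on-year population trends, chained into an index. The published LPI method additionally smooths and weights the underlying data; the abundance series below are invented.

```python
# Toy composite indicator in the spirit of the Living Planet Index:
# geometric mean of annual rates of change, chained from a base value of 1.
import math

populations = {                      # invented yearly abundance series
    "pop_a": [100, 95, 90, 88],
    "pop_b": [40, 42, 41, 39],
    "pop_c": [10, 9, 7, 6],
}

n_years = len(next(iter(populations.values())))
index = [1.0]
for t in range(1, n_years):
    rates = [math.log10(p[t] / p[t - 1]) for p in populations.values()]
    index.append(index[-1] * 10 ** (sum(rates) / len(rates)))  # geometric mean

print([round(i, 3) for i in index])
```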

Relevance:

10.00%

Publisher:

Abstract:

Advances in hardware and software technology enable us to collect, store, and distribute data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence but is also used in many other application areas, such as research, marketing, and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertising campaigns; and finance experts are interested in patterns that forecast the development of certain stock-market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth, and power consumption. These challenges have led to the development of parallel and distributed data-analysis approaches and to the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
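
The core idea behind scaling data mining out to parallel and distributed hardware can be shown in a few lines: partition the data, mine each partition locally, and merge the partial results. In the sketch below, four local processes stand in for distributed Grid or Cloud nodes, and the transaction data are invented.

```python
# Partition-and-merge pattern for parallel data mining: each worker counts
# item occurrences in its partition, then the partial counts are merged.
from collections import Counter
from multiprocessing import Pool

def local_counts(partition):
    """Mine one partition locally: count item occurrences."""
    counts = Counter()
    for transaction in partition:
        counts.update(transaction)
    return counts

if __name__ == "__main__":
    transactions = [["milk", "bread"], ["bread", "butter"],
                    ["milk", "bread", "butter"], ["bread"]] * 1000
    partitions = [transactions[i::4] for i in range(4)]  # split across 4 workers
    with Pool(4) as pool:
        partials = pool.map(local_counts, partitions)
    totals = sum(partials, Counter())                    # merge step
    print(totals.most_common(3))
```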

Relevance:

10.00%

Publisher:

Abstract:

In response to evidence of insect pollinator declines, organisations in many sectors, including the food and farming industry, are investing in pollinator conservation. They are keen to ensure that their efforts use the best available science. We convened a group of 32 ‘conservation practitioners’ with an active interest in pollinators and 16 insect pollinator scientists. The conservation practitioners include representatives from UK industry (including retail), environmental non-government organisations and nature conservation agencies. We collaboratively developed a long list of 246 knowledge needs relating to conservation of wild insect pollinators in the UK. We refined and selected the most important knowledge needs, through a three-stage process of voting and scoring, including discussions of each need at a workshop. We present the top 35 knowledge needs as scored by conservation practitioners or scientists. We find general agreement in priorities identified by these two groups. The priority knowledge needs will structure ongoing work to make science accessible to practitioners, and help to guide future science policy and funding. Understanding the economic benefits of crop pollination, basic pollinator ecology and impacts of pesticides on wild pollinators emerge strongly as priorities, as well as a need to monitor floral resources in the landscape.

Relevance:

10.00%

Publisher:

Abstract:

ESA's Venus Express mission has monitored Venus since April 2006, and scientists worldwide have used mathematical models to investigate its atmosphere and model its circulation. This book summarizes recent work to explore and understand the climate of the planet through a research program under the auspices of the International Space Science Institute (ISSI) in Bern, Switzerland. Among the distinctive topics discussed are Venus's extreme surface temperature (a massive greenhouse effect raises the surface to 460°C; without it, the temperature would plummet to as low as -40°C), the surprisingly small amount of sunlight the planet absorbs (despite being closer to the Sun, Venus absorbs less solar radiation than Earth because its dense cloud cover reflects 76% of it back to space), and the striking mismatch between its atmospheric circulation and its planetary rotation (wind speeds climb to 200 m/s, so the upper atmosphere circles the planet in a matter of days, while a sidereal day on Venus lasts 243 Earth days).
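
The claim that cloud-covered Venus absorbs less sunlight than Earth follows from simple energy-budget arithmetic, sketched below. The solar fluxes and Earth's albedo are standard textbook values, and the 76% Venusian reflectivity is the figure quoted above; treat the numbers as round approximations.

```python
# Average absorbed solar flux per unit surface area: S * (1 - albedo) / 4,
# where the factor 4 spreads the intercepted beam over the whole sphere.
S = {"Venus": 2601.0, "Earth": 1361.0}   # top-of-atmosphere flux, W/m^2
A = {"Venus": 0.76, "Earth": 0.30}       # Bond albedo (fraction reflected)

for planet in ("Venus", "Earth"):
    absorbed = S[planet] * (1 - A[planet]) / 4
    print(f"{planet}: ~{absorbed:.0f} W/m^2 absorbed on average")
```

This works out to roughly 156 W/m^2 for Venus against 238 W/m^2 for Earth, even though Venus intercepts nearly twice the solar flux.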