960 results for Data Availability


Relevance:

60.00%

Publisher:

Abstract:

One of the current challenges in evolutionary ecology is understanding the long-term persistence of contemporarily evolving predator–prey interactions across space and time. To address this, we developed an extension of a multi-locus, multi-trait eco-evolutionary individual-based model that incorporates several interacting species in explicit landscapes. We simulated the eco-evolutionary dynamics of multi-species food webs with different degrees of connectance across soil-moisture islands. A broad set of parameter combinations led to the local extinction of species, but some species persisted, and this was associated with (1) high connectance and omnivory and (2) ongoing evolution, due to the multi-trait genetic variability of the embedded species. Furthermore, persistence was highest at intermediate island distances, likely because of a balance between predation-induced extinction (strongest at short island distances) and the coupling of island diversity by top predators, which, by travelling among islands, exert global top-down control of biodiversity. In the simulations with high genetic variation, we also found widespread trait evolutionary changes indicative of eco-evolutionary dynamics. We discuss how ever-increasing computing power and high-resolution data availability will soon allow researchers to start bridging the in vivo–in silico gap.
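
Purely as an illustrative aid (a toy sketch, not the authors' model), the following Python snippet shows the minimal individual-based ingredients the abstract names: a multi-locus additive trait, trait-matching predation, and reproduction with mutation. All parameters and rules are assumptions for demonstration.

```python
# Toy eco-evolutionary individual-based model: prey carry a multi-locus
# trait, a predator removes the best-matched individuals, survivors
# reproduce with per-locus mutation. Illustrative only.
import random

LOCI, POP, PREDATOR_TRAIT, GENERATIONS = 10, 200, 0.8, 50

def phenotype(genome):
    # Additive multi-locus trait: the mean allelic value across loci.
    return sum(genome) / len(genome)

# Initial population: each individual is a list of allelic values per locus.
population = [[random.random() for _ in range(LOCI)] for _ in range(POP)]

for _ in range(GENERATIONS):
    # Trait-matching predation: individuals closest to the predator's
    # trait optimum are eaten; the worst-matched half survives.
    population.sort(key=lambda g: abs(phenotype(g) - PREDATOR_TRAIT))
    survivors = population[POP // 2:]
    # Clonal reproduction with per-locus Gaussian mutation.
    population = [
        [allele + random.gauss(0, 0.02) for allele in random.choice(survivors)]
        for _ in range(POP)
    ]

mean_trait = sum(phenotype(g) for g in population) / POP
print(f"Mean prey trait after selection: {mean_trait:.2f}")  # evolves away from 0.8
```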

Relevance:

60.00%

Publisher:

Abstract:

Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the climate policy process, and project future climate change. Present-day analysis requires the combination of a range of data, algorithms, statistics and model estimates and their interpretation by a broad scientific community. Here we describe datasets and a methodology developed by the global carbon cycle science community to quantify all major components of the global carbon budget, including their uncertainties. We discuss changes compared to previous estimates, consistency within and among components, and methodology and data limitations. Based on energy statistics, we estimate that global emissions of CO2 from fossil fuel combustion and cement production were 9.5 ± 0.5 PgC yr⁻¹ in 2011, 3.0% above 2010 levels. We project these emissions will increase by 2.6% (1.9–3.5%) in 2012 based on projections of Gross World Product and recent changes in the carbon intensity of the economy. Global net CO2 emissions from land-use change, including deforestation, are more difficult to update annually because of data availability, but combined evidence from land-cover change data, fire activity in regions undergoing deforestation and models suggests those net emissions were 0.9 ± 0.5 PgC yr⁻¹ in 2011. The global atmospheric CO2 concentration is measured directly and reached 391.38 ± 0.13 ppm at the end of 2011, increasing by 1.70 ± 0.09 ppm yr⁻¹, or 3.6 ± 0.2 PgC yr⁻¹, in 2011. Estimates from four ocean models suggest that the ocean CO2 sink was 2.6 ± 0.5 PgC yr⁻¹ in 2011, implying a global residual terrestrial CO2 sink of 4.1 ± 0.9 PgC yr⁻¹. All uncertainties are reported as ±1 sigma (68% confidence, assuming Gaussian error distributions, that the real value lies within the given interval), reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. This paper is intended to provide a baseline for keeping track of annual carbon budgets in the future.
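
For reference, the components quoted above are tied together by the standard global carbon budget identity (conventional notation; the reported values are rounded, so the identity closes only to within rounding):

$$E_{\mathrm{FF}} + E_{\mathrm{LUC}} = G_{\mathrm{ATM}} + S_{\mathrm{OCEAN}} + S_{\mathrm{LAND}}$$

With the 2011 estimates, $9.5 + 0.9 \approx 3.6 + 2.6 + 4.1\ \mathrm{PgC\,yr^{-1}}$; the residual terrestrial sink is the balancing term, $S_{\mathrm{LAND}} = E_{\mathrm{FF}} + E_{\mathrm{LUC}} - G_{\mathrm{ATM}} - S_{\mathrm{OCEAN}}$.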

Relevance:

60.00%

Publisher:

Abstract:

The Gravity field and steady-state Ocean Circulation Explorer (GOCE) was the first Earth Explorer core mission of the European Space Agency. It was launched on March 17, 2009 into a Sun-synchronous dusk-dawn orbit and re-entered the Earth's atmosphere on November 11, 2013. The satellite altitude was between 225 and 255 km during the measurement phases. The European GOCE Gravity Consortium is responsible for the Level 1b to Level 2 data processing in the frame of the GOCE High-level Processing Facility (HPF). The Precise Science Orbit (PSO) is one Level 2 product, produced under the responsibility of the Astronomical Institute of the University of Bern within the HPF. This PSO product was delivered continuously during the entire mission, and regular checks guaranteed high consistency and quality of the orbits. A correlation between solar activity, GPS data availability and orbit quality was found; the accuracy of the kinematic orbit suffers most from this. Improvements in modelling the range corrections at the retro-reflector array for the SLR measurements were made and implemented in the independent SLR validation of the GOCE PSO products. The satellite laser ranging (SLR) validation ultimately indicates an orbit accuracy of 2.42 cm for the kinematic and 1.84 cm for the reduced-dynamic orbits over the entire mission. The common-mode accelerations from the GOCE gradiometer were not used for the official PSO product, but in addition to the operational HPF work a study was performed to investigate to what extent common-mode accelerations improve the reduced-dynamic orbit determination results. The accelerometer data may be used to derive realistic constraints for the empirical accelerations estimated in the reduced-dynamic orbit determination, which already improves the orbit quality. Beyond that, the accelerometer data may further improve the orbit quality if realistic constraints and state-of-the-art background models, such as gravity field and ocean tide models, are used for the reduced-dynamic orbit determination.
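
The quoted centimetre-level accuracies are RMS values of SLR residuals, i.e. the differences between the measured laser ranges and the ranges computed from the orbit positions and the station coordinates. Stated generically (this is the standard validation measure, not a formula quoted from the paper):

$$\Delta\rho_i = \rho_i^{\mathrm{SLR}} - \left\lVert \mathbf{r}_{\mathrm{sat}}(t_i) - \mathbf{r}_{\mathrm{sta}}(t_i) \right\rVert, \qquad \mathrm{RMS} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \Delta\rho_i^{2}}$$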

Relevance:

60.00%

Publisher:

Abstract:

The European Registry for Patients with Mechanical Circulatory Support (EUROMACS) was founded on 10 December 2009 on the initiative of Roland Hetzer (Deutsches Herzzentrum Berlin, Berlin, Germany) and Jan Gummert (Herz- und Diabeteszentrum Nordrhein-Westfalen, Bad Oeynhausen, Germany), together with 15 other founding international members. It aims to promote scientific research to improve the care of end-stage heart failure patients with a ventricular assist device or a total artificial heart as long-term mechanical circulatory support. Likewise, the organization aims to provide and maintain a registry of device implantation data and long-term follow-up of patients with mechanical circulatory support. Hence, EUROMACS affiliated itself with Dendrite Clinical Systems Ltd to offer its members a software tool that allows input and analysis of patient clinical data on a daily basis. EUROMACS facilitates further scientific studies by offering research groups access to the available data, with patients and centres anonymized. Furthermore, EUROMACS aims to stimulate cooperation with clinical and research institutions and with peer associations involved in furthering its aims. EUROMACS is the only European-based registry for patients with mechanical circulatory support, with rapidly increasing institutional and individual membership. Because of this expeditious growth in data input, the European Association for Cardio-Thoracic Surgery saw the need to optimize the data availability and the significance of the registry for improving the care of patients with mechanical circulatory support and its potential contribution to scientific aims; hence the beginning of their alliance in 2012. This first annual report is designed to provide an overview of EUROMACS' structure, its activities, a first data collection and an insight into its scientific contributions.

Relevance:

60.00%

Publisher:

Abstract:

Atoll islands are subject to a variety of processes that influence their geomorphological development. Analysis of historical shoreline changes using remotely sensed images has become an efficient approach to both quantify past changes and estimate future island response. However, the detection of long-term changes in beach width is challenging for two main reasons: first, data availability is limited for many remote Pacific islands; second, beach environments are highly dynamic and strongly influenced by seasonal or episodic shoreline oscillations. Consequently, remote-sensing studies of the beach morphodynamics of atoll islands deal with dynamic features covered by a low sampling frequency. Here we present a study of beach dynamics for nine islands on Takú Atoll, Papua New Guinea, over a seven-decade period. A considerable chronological gap between aerial photographs and satellite images was addressed by applying a new method that reweights positions of the beach limit by identifying "outlier" shoreline positions. Beyond the natural beach variability observed along the reweighted beach sections, we found that one third of the analyzed islands show a statistically significant decrease in reweighted beach width since 1943. The total loss of beach area for all islands corresponds to 44% of the initial beach area. Variable shoreline trajectories suggest that changes in beach width on Takú Atoll depend on local controls (that is, human activity and longshore sediment transport). Our results show that remote imagery with a low sampling frequency may be sufficient to characterize prominent morphological changes in the planform beach configuration of reef islands.
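
The abstract does not spell out the reweighting scheme; as a hedged illustration of the general idea, the Python sketch below downweights shoreline positions flagged as outliers (via the median absolute deviation) before fitting a long-term beach-width trend. All data and thresholds are hypothetical, not the paper's.

```python
# Hedged sketch of the outlier-reweighting idea: flag shoreline positions
# that deviate strongly from the series norm (median absolute deviation)
# and downweight them before fitting a long-term beach-width trend.
import numpy as np

def reweighted_trend(years: np.ndarray, widths: np.ndarray) -> float:
    """Beach-width trend (m/yr) with MAD-based downweighting of outliers."""
    resid = widths - np.median(widths)
    mad = np.median(np.abs(resid)) or 1e-9       # robust scale estimate
    u = resid / (3.0 * 1.4826 * mad)             # ~3-sigma cutoff at |u| = 1
    weights = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)  # Tukey biweight
    slope, _ = np.polyfit(years, widths, deg=1, w=np.sqrt(weights))
    return slope

years = np.array([1943, 1969, 1983, 2002, 2012, 2014], dtype=float)
widths = np.array([28.0, 26.5, 60.0, 24.0, 22.5, 21.8])  # 1983: storm outlier
print(f"trend: {reweighted_trend(years, widths):+.2f} m/yr")
```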

Relevance:

60.00%

Publisher:

Abstract:

In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches that are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in the reads may not always have the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run-time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, allowing it to elastically scale up or down the number of replicas involved in read operations so as to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces stale reads by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% while maintaining the desired consistency requirements of the applications when compared to the strong consistency model in Cassandra.
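
As a hedged sketch of the core idea (not the paper's actual estimation model, which is more elaborate), the Python snippet below estimates a stale-read probability under a toy model of write propagation and picks the smallest number of read replicas that keeps it under the application's tolerance, which is the adaptive step Harmony performs at run-time.

```python
# Hedged sketch: Harmony-style adaptive consistency tuning, in spirit only.
# The staleness estimator is a deliberately simple toy model (independent
# replicas, Poisson write arrivals), not the paper's estimation model.
import math

def estimate_stale_rate(read_replicas: int, write_rate: float,
                        propagation_delay: float) -> float:
    """Probability that a read returns stale data.

    A replica is assumed stale if a write arrived within the last
    `propagation_delay` seconds (Poisson writes at `write_rate` per second);
    a read is stale only if every contacted replica is stale.
    """
    p_replica_stale = 1.0 - math.exp(-write_rate * propagation_delay)
    return p_replica_stale ** read_replicas

def tune_read_replicas(total_replicas: int, write_rate: float,
                       propagation_delay: float, tolerance: float) -> int:
    """Smallest read consistency level whose estimated stale rate is tolerable
    (analogous to moving between Cassandra's ONE/QUORUM/ALL levels)."""
    for r in range(1, total_replicas + 1):
        if estimate_stale_rate(r, write_rate, propagation_delay) <= tolerance:
            return r
    return total_replicas

# Example: 3 replicas, 50 writes/s, 20 ms propagation, at most 1% stale reads.
print(tune_read_replicas(3, 50.0, 0.02, 0.01))
```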

Relevance:

60.00%

Publisher:

Abstract:

There is a growing concern about natural catastrophes to come, and studies are being carried out in almost every branch of science. Even though it is not the only motive, the main one is the fear that upcoming events might jeopardize human activities. As a result, there is a major divergence even over the most elementary concepts, such as what is to be considered or how one element or another should be named and catalogued. Consequently, the methods for understanding natural risks also differ widely, and a truly multidisciplinary approach has seldom been followed. Some efforts have been made to create a common frame of understanding; the "Floods Directive" or, more recently, the Inspire Directive are two examples. The insurance and reinsurance sector is an important actor among the many involved in risk studies. Its interest lies in the fact that, eventually, it pays most of the bill, if not all of it. But how much that bill could amount to is not an easy question to answer even in very specific cases, and yet it is the question constantly posed by decision makers at all levels. This document summarizes research activities that have been carried out in order to lay out a frame of reference, implementing numerical approaches capable of coping with some of the most relevant issues found in almost all natural risk studies, and testing concepts pragmatically. To do so, an experimental site was selected according to several criteria, such as population density, the ease of providing clear geographical boundaries, the presence of three of the most important geological processes (floods, earthquakes and volcanism) and data availability. The model proposed here takes advantage of very different data sources in the assessment of natural hazards, pointing out the need for a multidisciplinary approach, and uses a single unified, independent, consistent and homogeneous (non-objective-driven) data catalogue for estimating property value. The data are nevertheless exploited differently for each hazard type, while the underlying concepts remain unchanged. During this research, a wide gap was found between actual losses and hazard probabilities, contrary to what has been thought to be the most likely behaviour of natural hazards, showing that risk studies have a very limited lifespan. Partly because of this finding, the model in this study works with scenarios of fixed probability of occurrence, as opposed to the classical approach of evaluating continuous risk functions. Another reason for addressing the question through scenarios is to force the model to provide credible figures of maximum damage once parameters such as the spatial location of an event and its probability are fixed, offering a new view of the "worst case" scenario of known probability.
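
A standard relation underlies such fixed-probability scenarios (stated here for reference; it is textbook convention rather than a result of the thesis): an event with return period $T$ years has annual exceedance probability $p = 1/T$, and the probability of observing at least one such event in $n$ years is

$$P = 1 - \left(1 - \frac{1}{T}\right)^{n}.$$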

Relevance:

60.00%

Publisher:

Abstract:

In crop insurance, the accuracy with which the insurer quantifies the actual risk is highly dependent on the availability of actual yield data. Crop models can be valuable tools to generate data on expected yields for risk assessment when no historical records are available. However, selecting a crop model for a specific objective, location and implementation scale is a difficult task. A look inside the different crop and soil modules, to understand how outputs are obtained, can facilitate model choice. The objectives of this paper were (i) to assess the usefulness of crop models within crop insurance analysis and design and (ii) to select the most suitable crop model for drought risk assessment in semi-arid regions of Spain. For that purpose, first, a pre-selection was made of crop models simulating wheat yield under rainfed growing conditions at the field scale, and second, four selected models (AquaCrop, CERES-Wheat, CropSyst and WOFOST) were compared in terms of modelling approaches, process descriptions and model outputs. Outputs of the four models for the simulation of winter wheat growth are comparable when water is not limiting, but differences are larger when simulating yields under rainfed conditions. These differences in rainfed yields are mainly related to the dissimilar simulated soil water availability and the assumed linkages with dry matter formation. We conclude that for the simulation of winter wheat growth at field scale in such semi-arid conditions, CERES-Wheat and CropSyst are preferred. WOFOST is a satisfactory compromise between data availability and complexity when detailed soil data are limited. AquaCrop integrates physiological processes into a few representative parameters, thus reducing the number of input parameters, which is an advantage when observed data are scarce; however, the high sensitivity of this model to low water availability limits its use in the region considered. Contrary to the use of ensembles of crop models, we endorse concentrating efforts on selecting, or rebuilding, a model that includes approaches that better describe the agronomic conditions of the regions in which it will be applied. The use of methodologies as complex as crop models is associated with numerous sources of uncertainty, although these models remain the best tools available for gaining insight into these complex agronomic systems.
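
The linkage between soil water availability and dry matter formation that drives the inter-model differences has, in its simplest form, the shape of the classic FAO-33 yield-response-to-water relation, from which AquaCrop descends (quoted here as general background, not as any one model's exact formulation):

$$1 - \frac{Y_a}{Y_m} = K_y \left( 1 - \frac{ET_a}{ET_m} \right)$$

where $Y_a/Y_m$ is the ratio of actual to maximum yield, $ET_a/ET_m$ the ratio of actual to maximum evapotranspiration, and $K_y$ a crop-specific yield-response factor.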

Relevance:

60.00%

Publisher:

Abstract:

Funding: Funded by the Scottish Government’s Rural and Environment Science and Analytical Services Division (RESAS, Theme 7: Diet and Health). The funder had no role in study design, data collection and analysis, decision to publish, or preparation of this manuscript. Data Availability: All relevant data are owned by the Aberdeen Maternity and Neonatal Databank. Interested parties may request access to the data by following the instructions at http://www.abdn.ac.uk/iahs/research/obsgynae/amnd/access.php.

Relevance:

60.00%

Publisher:

Abstract:

We thank Karim Gharbi and Urmi Trivedi for their assistance with RNA sequencing, carried out in the GenePool genomics facility (University of Edinburgh). We also thank Susan Fairley and Eduardo De Paiva Alves (Centre for Genome Enabled Biology and Medicine, University of Aberdeen) for help with the initial bioinformatics analysis. We thank Aaron Mitchell for kindly providing the ALS3 mutant, Julian Naglik for the gift of TR146 cells, and Jon Richardson for technical assistance. We thank the Genomics and Bioinformatics core of the Faculty of Health Sciences for Next Generation Sequencing and Bioinformatics support, the Information and Communication Technology Office at the University of Macau for providing access to a High Performance Computer, and Jacky Chan and William Pang for their expert support on the High Performance Computer. Finally, we thank Amanda Veri for generating CaLC2928. M.D.L. is supported by a Sir Henry Wellcome Postdoctoral Fellowship (Wellcome Trust 096072); R.A.F. by a Wellcome Trust-Massachusetts Institute of Technology (MIT) Postdoctoral Fellowship; L.E.C. by a Canada Research Chair in Microbial Genomics and Infectious Disease and by Canadian Institutes of Health Research Grants MOP-119520 and MOP-86452; A.J.P.B. by the UK Biotechnology and Biological Sciences Research Council (BB/F00513X/1) and by the European Research Council (ERC-2009-AdG-249793-STRIFE); K.H.W. by the Science and Technology Development Fund of Macau S.A.R. (FDCT) (085/2014/A2) and the Research and Development Administrative Office of the University of Macau (SRG2014-00003-FHS); and R.T.W. by the Burroughs Wellcome Fund and NIH R15AO094406. Data availability: RNA-sequencing data sets are available at ArrayExpress (www.ebi.ac.uk) under accession code E-MTAB-4075. ChIP-seq data sets are available at the NCBI SRA database (http://www.ncbi.nlm.nih.gov) under accession code SRP071687. The authors declare that all other data supporting the findings of this study are available within the article and its supplementary information files, or from the corresponding author upon request.

Relevance:

60.00%

Publisher:

Abstract:

At the height of the financial crisis, the Western welfare state prevented a repeat of the Great Depression. But there were also suggestions that social policy had contributed to the crisis, particularly by promoting households' access to credit in pursuit of welfare goals. Others claim that it was the withdrawal of state welfare that led to the disaster. Against this background, which motivated our interest, we propose a systematic way of assessing the relationship between financial markets and public welfare provision. We use structural vector auto-regression to establish the causal link and its direction. Two hypotheses about this relationship can be inferred from the literature. First, the notion that welfare states 'decommodify' livelihoods, or that there is an equity-efficiency tradeoff, would suggest that welfare states substitute to varying degrees for financial-market offers of insurance and savings. By contrast, welfare states may support private interests selectively and/or help markets for households to function better; the nexus would then be one of complementarity. Our empirical strategy is to spell out the causal mechanisms that can account for a substitutive or complementary relationship and then to see whether advanced econometric techniques find evidence for either of these mechanisms in six OECD countries. We find complementarity between public welfare (spending and tax subsidies) and life insurance markets for four of our six countries, notably even for the United States. Substitution between welfare and finance is the more plausible interpretation for France and the Netherlands, which is surprising. Data availability prevents us from testing the implications for the welfare state's contribution to the crisis directly, but our findings suggest that the welfare state cannot generally be blamed for the financial crisis.
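
As a hedged illustration of the econometric strategy, the Python sketch below fits a reduced-form VAR and runs a Granger-type causality test on synthetic data. The variable names and data are assumptions for demonstration, not the authors' dataset, and the structural identification step is omitted.

```python
# Minimal VAR with a causality test, illustrating the paper's strategy.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 120  # e.g. 30 years of quarterly data, illustrative

# Synthetic series in which welfare spending leads life-insurance demand,
# i.e. a complementarity mechanism holds by construction.
welfare = np.cumsum(rng.normal(0.2, 1.0, n))
insurance = np.zeros(n)
for t in range(1, n):
    insurance[t] = 0.6 * insurance[t - 1] + 0.3 * welfare[t - 1] + rng.normal()

# Difference to (approximate) stationarity before estimating the VAR.
data = pd.DataFrame({"welfare": np.diff(welfare),
                     "insurance": np.diff(insurance)})

results = VAR(data).fit(maxlags=4, ic="aic")

# Does welfare spending help predict insurance demand? Rejection is
# evidence of a causal link running from public welfare to finance.
print(results.test_causality("insurance", ["welfare"], kind="f").summary())
```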

Relevance:

60.00%

Publisher:

Abstract:

Addressing high and volatile natural resource prices, uncertain supply prospects, reindustrialization attempts and environmental damages related to resource use, resource efficiency has evolved into a highly debated proposal among academia, policy makers, firms and international financial institutions (IFIs). In 2011, the European Union (EU) declared resource efficiency one of the seven flagship initiatives of its Europe 2020 strategy. This paper contributes to the discussion by assessing its key initiative, the Roadmap to a Resource Efficient Europe (EC 2011 571), along two streams of evaluation. In a first step, resource efficiency is linked to two theoretical frameworks regarding sustainability: (i) the sustainability triangle (consisting of economic, social and ecological dimensions) and (ii) balanced sustainability (combining weak and strong sustainability). Both sustainability frameworks are then used to assess the degree to which the Roadmap follows the concept of sustainability. It can be concluded that it partially respects the sustainability triangle as well as balanced sustainability, primarily lacking a social dimension. In a second step, following Steger and Bleischwitz (2009), the impact of resource efficiency on competitiveness as advocated in the Roadmap is empirically evaluated. An Arellano–Bond dynamic panel data model reveals no robust impact of resource efficiency on competitiveness in the EU between 2004 and 2009 – a puzzling result. Further empirical research and enhanced data availability are needed to better understand the impacts of resource efficiency on competitiveness at the macroeconomic, microeconomic and industry levels. In that regard, strengthening the methodologies behind resource indicators seems essential. Last but certainly not least, political will is required to achieve the transition of the EU economy into a resource-efficient future.
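
For reference, the Arellano–Bond estimator addresses dynamic panel specifications of the generic form below (the paper's exact regressors are not reproduced here). First-differencing removes the country fixed effect $\eta_i$, and deeper lags of the dependent variable serve as GMM instruments for the differenced lag:

$$y_{it} = \alpha\, y_{i,t-1} + \boldsymbol{\beta}'\mathbf{x}_{it} + \eta_i + \varepsilon_{it} \quad\Rightarrow\quad \Delta y_{it} = \alpha\, \Delta y_{i,t-1} + \boldsymbol{\beta}'\Delta\mathbf{x}_{it} + \Delta\varepsilon_{it},$$

with $y_{i,t-2}, y_{i,t-3}, \dots$ as instruments for $\Delta y_{i,t-1}$.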

Relevance:

60.00%

Publisher:

Abstract:

The Balkan Vegetation Database (BVD; GIVD ID: EU-00-019; http://www.givd.info/ID/EU-00-019) is a regional database that consists of phytosociological relevés from different vegetation types in six countries on the Balkan Peninsula (Albania, Bosnia and Herzegovina, Bulgaria, Kosovo, Montenegro and Serbia). Currently, it contains 9,580 relevés, and most of them (78%) are geo-referenced. The database includes digitized relevés from the literature (79%) and unpublished data (21%). Herein we present descriptive statistics on the attributive relevé information. We have developed rules that regulate governance of the database, data provision, the types of data availability regimes, data requests and terms of use, authorship, and relationships with other databases. The database offers an extensive overview of studies at the local, regional and SE European levels, including information on flora, vegetation and habitats.

Relevance:

60.00%

Publisher:

Abstract:

Identifying cloud interference in satellite-derived data is a critical step toward developing useful remotely sensed products. Most MODIS land products use a combination of the MODIS (MOD35) cloud mask and the 'internal' cloud mask of the surface reflectance product (MOD09) to mask clouds, but there has been little discussion of how these masks differ globally. We calculated global mean cloud frequency for both products for 2009 and found that inflated proportions of observations were flagged as cloudy in the Collection 5 MOD35 product. These erroneously categorized areas were spatially and environmentally non-random and usually occurred over high-albedo land-cover types (such as grassland and savanna) in several regions around the world. Additionally, we found that spatial variability in the processing path applied in the Collection 5 MOD35 algorithm affects the likelihood of a cloudy observation by up to 20% in some areas. These factors result in abrupt transitions in recorded cloud frequency across land-cover and processing-path boundaries, impeding their use for fine-scale spatially contiguous modeling applications. We show that together these artifacts have resulted in significantly decreased and spatially biased data availability for Collection 5 MOD35-derived composite MODIS land products such as land surface temperature (MOD11) and net primary productivity (MOD17). Finally, we compare our results with mean cloud frequency in the new Collection 6 MOD35 product and find that land-cover artifacts have been reduced but not eliminated. Collection 6 thus increases data availability for some regions and land-cover types in MOD35-derived products, but practitioners need to consider how the remaining artifacts might affect their analyses.
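
As a hedged illustration of the kind of computation behind a per-pixel cloud-frequency comparison, the Python sketch below derives cloud frequency from a stack of daily MOD09GA state_1km QA words. The bit layout follows the MOD09 user guide (bits 0-1 encode cloud state: 00 clear, 01 cloudy, 10 mixed, 11 not set); reading the HDF files is assumed to happen elsewhere.

```python
# Per-pixel annual cloud frequency from daily MOD09GA "state_1km" QA words.
import numpy as np

def cloud_frequency(state_1km_stack: np.ndarray) -> np.ndarray:
    """state_1km_stack: uint16 array of shape (days, rows, cols)."""
    cloud_state = state_1km_stack & 0b11                     # bits 0-1
    cloudy = (cloud_state == 0b01) | (cloud_state == 0b10)   # cloudy or mixed
    valid = cloud_state != 0b11                              # drop "not set"
    with np.errstate(divide="ignore", invalid="ignore"):
        return cloudy.sum(axis=0) / valid.sum(axis=0)

# Toy example: one year over a 2x2 pixel window.
stack = np.random.randint(0, 4, size=(365, 2, 2)).astype(np.uint16)
print(cloud_frequency(stack))
```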