874 results for Distributed data access


Relevance: 40.00%

Abstract:

Peer reviewed

Relevance: 40.00%

Abstract:

Funding and trial registration: Scottish Government Chief Scientist Office grant CZH/3/17. ClinicalTrials.gov registration NCT01602705.

Relevance: 40.00%

Abstract:

Acknowledgements. This work was mainly funded by the EU FP7 CARBONES project (contract FP7-SPACE-2009-1-242316), with a small contribution also from the GEOCARBON project (ENV.2011.4.1.1-1-283080). This work used eddy covariance data acquired by the FLUXNET community and in particular by the following networks: AmeriFlux (U.S. Department of Energy, Biological and Environmental Research, Terrestrial Carbon Program; DE-FG02-04ER63917 and DE-FG02-04ER63911), AfriFlux, AsiaFlux, CarboAfrica, CarboEuropeIP, CarboItaly, CarboMont, ChinaFlux, Fluxnet-Canada (supported by CFCAS, NSERC, BIOCAP, Environment Canada, and NRCan), GreenGrass, KoFlux, LBA, NECC, OzFlux, TCOS-Siberia, USCCC. We acknowledge the financial support for the eddy covariance data harmonization provided by CarboEuropeIP, FAO-GTOS-TCO, iLEAPS, the Max Planck Institute for Biogeochemistry, the National Science Foundation, the University of Tuscia, Université Laval, Environment Canada, and the US Department of Energy, and the database development and technical support from the Berkeley Water Center, Lawrence Berkeley National Laboratory, Microsoft Research eScience, Oak Ridge National Laboratory, the University of California-Berkeley, and the University of Virginia. Philippe Ciais acknowledges support from the European Research Council through Synergy grant ERC-2013-SyG-610028 “IMBALANCE-P”. The authors wish to thank M. Jung for providing access to the GPP MTE data, which were downloaded from the GEOCARBON data portal (https://www.bgc-jena.mpg.de/geodb/projects/Data.php). The authors are also grateful for the computing support and resources provided at LSCE and to the overall ORCHIDEE project that coordinates the development of the code (http://labex.ipsl.fr/orchidee/index.php/about-the-team).

Relevance: 40.00%

Abstract:

Background: Statin therapy reduces the risk of occlusive vascular events, but uncertainty remains about potential effects on cancer. We sought to provide a detailed assessment of any effects on cancer of lowering LDL cholesterol (LDL-C) with a statin using individual patient records from 175,000 patients in 27 large-scale statin trials. Methods and Findings: Individual records of 134,537 participants in 22 randomised trials of statin versus control (median duration 4.8 years) and 39,612 participants in 5 trials of more intensive versus less intensive statin therapy (median duration 5.1 years) were obtained. Reducing LDL-C with a statin for about 5 years had no effect on newly diagnosed cancer or on death from such cancers in either the trials of statin versus control (cancer incidence: 3755 [1.4% per year [py]] versus 3738 [1.4% py], RR 1.00 [95% CI 0.96-1.05]; cancer mortality: 1365 [0.5% py] versus 1358 [0.5% py], RR 1.00 [95% CI 0.93-1.08]) or in the trials of more versus less statin (cancer incidence: 1466 [1.6% py] vs 1472 [1.6% py], RR 1.00 [95% CI 0.93-1.07]; cancer mortality: 447 [0.5% py] versus 481 [0.5% py], RR 0.93 [95% CI 0.82-1.06]). Moreover, there was no evidence of any effect of reducing LDL-C with statin therapy on cancer incidence or mortality at any of 23 individual categories of sites, with increasing years of treatment, for any individual statin, or in any given subgroup. In particular, among individuals with low baseline LDL-C (<2 mmol/L), there was no evidence that further LDL-C reduction (from about 1.7 to 1.3 mmol/L) increased cancer risk (381 [1.6% py] versus 408 [1.7% py]; RR 0.92 [99% CI 0.76-1.10]). Conclusions: In 27 randomised trials, a median of five years of statin therapy had no effect on the incidence of, or mortality from, any type of cancer (or the aggregate of all cancer).

Relevance: 40.00%

Abstract:

Ocean acidification (OA), induced by the rapid rise of anthropogenic CO2 and its dissolution in seawater, is known to have consequences for marine organisms. However, the evolutionary responses of phytoplankton to OA have been poorly studied. Here we examined the coccolithophore Gephyrocapsa oceanica while growing it for 2000 generations under ambient and elevated CO2 levels. OA stimulated growth in the earlier selection period (generations 700 to 1550) but reduced it in the later selection period, up to 2000 generations. Similarly, the initially stimulated production of particulate organic carbon and nitrogen declined as selection progressed and was lower under OA by generation 2000. The specific adaptation of growth to OA seen at generation 1000 had disappeared by generations 1700 to 2000. Both phenotypic plasticity and fitness decreased over the selection period, suggesting that the species' resilience to OA declined after 2000 generations under high-CO2 selection.

Relevance: 40.00%

Abstract:

We describe the contemporary hydrography of the pan-Arctic land area draining into the Arctic Ocean, northern Bering Sea, and Hudson Bay on the basis of observational records of river discharge and computed runoff. The Regional Arctic Hydrographic Network data set, R-ArcticNET, is presented, based on 3754 recording stations drawn from Russian, Canadian, European, and U.S. archives. R-ArcticNET represents the single largest data compendium of observed discharge in the Arctic. Approximately 73% of the nonglaciated area of the pan-Arctic is monitored by at least one river discharge gage, giving a mean gage density of 168 gages per 10⁶ km². Average annual runoff is 212 mm yr⁻¹, with approximately 60% of the river discharge occurring from April to July. Gridded runoff surfaces are generated for the gaged portion of the pan-Arctic region to investigate global change signals. Siberia and Alaska showed increases in runoff, notably in winter, during the 1980s relative to the 1960s and 1970s; these changes are consistent with observed changes in the climatology of the region. Western Canada experienced decreased spring and summer runoff.
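As a rough arithmetic cross-check, the station count and mean gage density quoted above imply the size of the monitored area. The implied-area value below is inferred from the abstract's numbers, not stated in it:

```python
# Consistency check of the gage-density figure quoted above.
# Station count and density are taken from the abstract; the
# monitored area is inferred from them, not stated in the abstract.
gages = 3754                 # recording stations in R-ArcticNET
density_per_1e6_km2 = 168    # reported mean gage density (gages per 10^6 km^2)

implied_area_1e6_km2 = gages / density_per_1e6_km2
print(f"implied monitored area ~ {implied_area_1e6_km2:.1f} x 10^6 km^2")
```

This lands at roughly 22 x 10⁶ km², a plausible magnitude for the gaged pan-Arctic drainage area.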

Relevance: 40.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 40.00%

Abstract:

Abstract not available

Relevance: 40.00%

Abstract:

Presentation from the MARAC conference in Pittsburgh, PA on April 14–16, 2016. S13 - Student Poster Session; Analysis of Federal Policy on Public Access to Scientific Research Data

Relevance: 40.00%

Abstract:

Presentation at the CRIS2016 conference in St Andrews, June 10, 2016

Relevance: 40.00%

Abstract:

We propose three research problems to explore the relations between trust and security in the setting of distributed computation.

In the first problem, we study trust-based adversary detection in distributed consensus computation. The adversaries we consider behave arbitrarily, disobeying the consensus protocol. We propose a trust-based consensus algorithm with local and global trust evaluations. The algorithm can be abstracted as a two-layer structure, with the top layer running a trust-based consensus algorithm and the bottom layer executing a global trust update scheme as a subroutine. We utilize a set of pre-trusted nodes, called headers, to propagate local trust opinions throughout the network. This two-layer framework is flexible in that it can easily be extended with more complicated decision rules and global trust schemes. The first problem assumes that normal nodes are homogeneous, i.e., a normal node is guaranteed to behave as it is programmed. In the second and third problems, however, we assume that nodes are heterogeneous, i.e., given a task, the probability that a node generates a correct answer varies from node to node. The adversaries considered in these two problems are workers from the open crowd who either invest little effort in the tasks assigned to them or intentionally give wrong answers.

In the second part of the thesis, we consider a typical crowdsourcing task that aggregates input from multiple workers as a problem in information fusion. To cope with noisy and sometimes malicious input from workers, trust is used to model workers' expertise. In a multi-domain knowledge learning task, however, scalar-valued trust is not sufficient to reflect a worker's trustworthiness in each of the domains. To address this issue, we propose a probabilistic model that jointly infers the multi-dimensional trust of workers, the multi-domain properties of questions, and the true labels of questions. The model is flexible and extensible to incorporate metadata associated with questions; to demonstrate this, we propose two extended models, one handling input tasks with real-valued features and the other handling tasks with text features by incorporating topic models. Our models can effectively recover the trust vectors of workers, which can then be used for task assignment adapted to workers' trust. These results can be applied to the fusion of information from multiple data sources such as sensors, human input, machine-learning results, or a hybrid of them. In the second subproblem, we address crowdsourcing with adversaries under logical constraints. We observe that questions are often not independent in real-life applications; instead, there are logical relations between them. Similarly, the workers that provide answers are not independent of each other: answers given by workers with similar attributes tend to be correlated. We therefore propose a unified two-layer graphical model. The top layer encodes domain knowledge, allowing users to express logical relations using first-order logic rules, and the bottom layer encodes a traditional crowdsourcing graphical model. Our model can be seen as a generalized probabilistic soft logic framework that encodes both logical relations and probabilistic dependencies. To solve the collective inference problem efficiently, we devised a scalable joint inference algorithm based on the alternating direction method of multipliers.

The third part of the thesis considers optimal assignment under budget constraints when workers are unreliable and sometimes malicious. In a real crowdsourcing market, each answer obtained from a worker incurs a cost that depends on both the worker's level of trustworthiness and the difficulty of the task: access to expert-level (more trustworthy) workers is typically more expensive than to the average crowd, and completing a challenging task costs more than a click-away question. We address the optimal assignment of heterogeneous tasks to workers of varying trust levels under budget constraints. Specifically, we design a trust-aware task allocation algorithm that takes as input the estimated trust of workers and a pre-set budget, and outputs an optimal assignment of tasks to workers. We derive a bound on the total error probability that relates naturally to the budget, the trustworthiness of the crowd, and the costs of obtaining labels: a higher budget, a more trustworthy crowd, and less costly jobs all yield a lower theoretical bound. Our allocation scheme does not depend on the specific design of the trust evaluation component and can therefore be combined with generic trust evaluation algorithms.
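The fusion step described in the second problem, aggregating noisy worker answers weighted by trust, can be illustrated with a minimal sketch. This is a generic trust-weighted majority vote with a simple agreement-based trust update, not the probabilistic model proposed in the thesis; the worker names, trust prior, and smoothing rule are illustrative assumptions.

```python
from collections import defaultdict

def trust_weighted_vote(answers, trust):
    """Aggregate worker answers by trust-weighted majority vote.

    answers: {question: {worker: label}}
    trust:   {worker: weight in (0, 1]}
    Returns {question: winning label}.
    """
    result = {}
    for q, votes in answers.items():
        scores = defaultdict(float)
        for worker, label in votes.items():
            scores[label] += trust[worker]
        result[q] = max(scores, key=scores.get)
    return result

def update_trust(answers, labels, smoothing=1.0):
    """Re-estimate each worker's trust as smoothed agreement with current labels."""
    correct, total = defaultdict(float), defaultdict(float)
    for q, votes in answers.items():
        for worker, label in votes.items():
            total[worker] += 1
            if label == labels[q]:
                correct[worker] += 1
    return {w: (correct[w] + smoothing) / (total[w] + 2 * smoothing) for w in total}

# Example: worker "c" disagrees with the reliable majority on two of
# three questions and ends up with lower trust than workers "a" and "b".
answers = {
    "q1": {"a": "yes", "b": "yes", "c": "no"},
    "q2": {"a": "no",  "b": "no",  "c": "yes"},
    "q3": {"a": "yes", "b": "yes", "c": "yes"},
}
trust = {"a": 0.5, "b": 0.5, "c": 0.5}   # uninformative prior
for _ in range(3):                        # alternate labels <-> trust
    labels = trust_weighted_vote(answers, trust)
    trust = update_trust(answers, labels)
print(labels)               # {'q1': 'yes', 'q2': 'no', 'q3': 'yes'}
print(trust["c"] < trust["a"])  # True
```

Scalar trust, as the abstract notes, cannot express per-domain expertise; the thesis's multi-dimensional model would replace the single `trust[worker]` weight with a vector indexed by question domain.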

Relevance: 40.00%

Abstract:

Following the workshop on new developments in daily licensing practice in November 2011, fourteen representatives from national consortia (from Denmark, Germany, the Netherlands and the UK) and publishers (Elsevier, SAGE and Springer) met in Copenhagen on 9 March 2012 to discuss provisions in licences to accommodate new developments. The one-day workshop aimed to: present background and ideas regarding the provisions the KE Licensing Expert Group developed; introduce and explain the provisions the invited publishers currently use; reach agreement on the wording for long-term preservation, continuous access and course packs; give insight and more clarity about the use of open access provisions in licences; discuss a roadmap for inclusion of the provisions in the publishers' licences; and produce a report disseminating the outcome of the meeting.

Participants of the workshop were: United Kingdom: Lorraine Estelle (Jisc Collections); Denmark: Lotte Eivor Jørgensen (DEFF), Lone Madsen (Southern University of Denmark), Anne Sandfær (DEFF/Knowledge Exchange); Germany: Hildegard Schaeffler (Bavarian State Library), Markus Brammer (TIB); The Netherlands: Wilma Mossink (SURF), Nol Verhagen (University of Amsterdam), Marc Dupuis (SURF/Knowledge Exchange); Publishers: Alicia Wise (Elsevier), Yvonne Campfens (Springer), Bettina Goerner (Springer), Leo Walford (Sage); Knowledge Exchange: Keith Russell.

The main outcome of the workshop was that it would be valuable to have a standard set of clauses that could be used in negotiations; this would make concluding licences much easier and more efficient. The comments on the model provisions the Licensing Expert Group had drafted will be taken into account, and the provisions will be reformulated. Data and text mining is a new development, and demand for access to allow for it is growing. It would be easier if there were a simpler way to access materials so they could be more easily mined. However, there are still outstanding questions about how the authors of articles that have been mined can be properly attributed.

Relevance: 40.00%

Abstract:

Cache-coherent non-uniform memory access (ccNUMA) architecture is a standard design pattern for contemporary multicore processors, and future generations of architectures are likely to be NUMA. NUMA architectures create new challenges for managed runtime systems: memory-intensive applications use the system's distributed memory banks to allocate data, and the automatic memory manager collects garbage left in these memory banks. The garbage collector may need to access remote memory banks, which entails access-latency overhead and potential bandwidth saturation of the interconnect between memory banks. This dissertation makes five significant contributions to garbage collection on NUMA systems, with a case-study implementation using the Hotspot Java Virtual Machine. It empirically studies data locality for a stop-the-world garbage collector when tracing connected objects in NUMA heaps. First, it identifies a locality richness that exists naturally in connected objects comprising a root object and its reachable set: 'rooted sub-graphs'. Second, the dissertation leverages the locality of rooted sub-graphs to develop a new NUMA-aware garbage collection mechanism: a garbage collector thread processes a local root and its reachable set, which is likely to contain a large number of objects on the same NUMA node. Third, a garbage collector thread steals references from sibling threads running on the same NUMA node to improve data locality. The new NUMA-aware garbage collector is evaluated using seven benchmarks from the established real-world DaCapo benchmark suite, the widely used SPECjbb benchmark, the Neo4j graph database Java benchmark, and an artificial benchmark. On a multi-hop NUMA architecture, the NUMA-aware garbage collector shows an average performance improvement of 15%, and this gain is shown to result from improved NUMA memory access in the ccNUMA system. Fourth, the existing Hotspot JVM adaptive policy for configuring the number of garbage collection threads is shown to be suboptimal for current NUMA machines: the policy rests on outdated assumptions and generates a constant thread count, yet the production version of the Hotspot JVM still uses it. This research shows that the optimal number of garbage collection threads is application-specific, and that configuring the optimal number yields better collection throughput than the default policy. Fifth, the dissertation designs and implements a runtime technique that uses heuristics from dynamic collection behaviour to calculate an optimal number of garbage collector threads for each collection cycle, yielding an average 21% improvement in garbage collection performance on the DaCapo benchmarks.
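The NUMA-aware stealing policy described above, in which a collector thread prefers victims on its own NUMA node, can be sketched as follows. This is an illustrative toy model, not Hotspot code; the thread and node layout, queue contents, and single-steal behaviour are invented for the example.

```python
from collections import deque

class GCThread:
    """Toy marking worker with a work queue pinned to a NUMA node."""
    def __init__(self, tid, node):
        self.tid, self.node = tid, node
        self.queue = deque()   # object references still to be traced

def steal(me, threads):
    """Steal one reference, preferring victims on the same NUMA node.

    Same-node victims are tried first so the stolen object is likely
    resident in local memory; remote nodes are a fallback only.
    """
    same_node = [t for t in threads
                 if t is not me and t.node == me.node and t.queue]
    remote = [t for t in threads
              if t is not me and t.node != me.node and t.queue]
    victims = same_node + remote          # local victims ranked first
    if victims:
        return victims[0].queue.pop()     # steal from the victim's tail
    return None                           # no work anywhere: terminate

# Two threads on node 0, one on node 1; both potential victims have work.
threads = [GCThread(0, node=0), GCThread(1, node=0), GCThread(2, node=1)]
threads[1].queue.append("obj-a")   # sibling on node 0
threads[2].queue.append("obj-b")   # remote thread on node 1

ref = steal(threads[0], threads)
print(ref)  # obj-a  (taken from the same-node sibling, not the remote thread)
```

A real collector would add synchronisation on the queues and a termination protocol; the sketch only shows the victim-selection order that gives the locality benefit.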