974 results for Orion DBMS, Database, Uncertainty, Uncertain values, Benchmark


Relevance:

30.00%

Publisher:

Abstract:

[EN] Marine N2 fixing microorganisms, termed diazotrophs, are a key functional group in marine pelagic ecosystems. The biological fixation of dinitrogen (N2) to bioavailable nitrogen provides an important new source of nitrogen for pelagic marine ecosystems and influences primary productivity and organic matter export to the deep ocean. As one of a series of efforts to collect biomass and rates specific to different phytoplankton functional groups, we have constructed a database on diazotrophic organisms in the global pelagic upper ocean by compiling about 12 000 direct field measurements of cyanobacterial diazotroph abundances (based on microscopic cell counts or qPCR assays targeting the nifH genes) and N2 fixation rates. Biomass conversion factors are estimated based on cell sizes to convert abundance data to diazotrophic biomass. The database is spatially limited, lacking large regions of the ocean, especially the Indian Ocean. The data are approximately log-normally distributed, and large variances exist in most sub-databases, with non-zero values differing by 5 to 8 orders of magnitude. A lower mean N2 fixation rate was found in the North Atlantic Ocean than in the Pacific Ocean. Reporting the geometric mean and the range of one geometric standard error below and above the geometric mean, the pelagic N2 fixation rate in the global ocean is estimated to be 62 (53–73) Tg N yr−1, and the pelagic diazotrophic biomass in the global ocean is estimated to be 4.7 (2.3–9.6) TgC from cell counts and 89 (40–200) TgC from nifH-based abundances. Uncertainties related to biomass conversion factors can change the estimate of the geometric mean pelagic diazotrophic biomass in the global ocean by about ±70 %. This evolving database can be used to study spatial and temporal distributions and variations of marine N2 fixation, to validate geochemical estimates, and to parameterize and validate biogeochemical models. The database is stored in PANGAEA (http://doi.pangaea.de/10.1594/PANGAEA.774851).
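As a minimal sketch of the summary statistics reported above, the geometric mean and the range of one geometric standard error below and above it can be computed from approximately log-normal measurements as follows (the data here are synthetic, not values from the PANGAEA database):

```python
# Geometric mean and the range one geometric standard error below/above
# it, for approximately log-normal data (synthetic values; not taken
# from the diazotroph database).
import numpy as np

rng = np.random.default_rng(0)
rates = rng.lognormal(mean=2.0, sigma=1.5, size=500)  # fake N2 fixation rates

logs = np.log(rates)
gm = np.exp(logs.mean())                              # geometric mean
gse = np.exp(logs.std(ddof=1) / np.sqrt(len(logs)))   # geometric standard error factor

print(f"geometric mean: {gm:.1f}")
print(f"one geometric SE below/above: {gm / gse:.1f} to {gm * gse:.1f}")
```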

Relevance:

30.00%

Publisher:

Abstract:

An analysis of the NewSQL approach and an experimental evaluation of the VoltDB software on a defined benchmark.

Relevance:

30.00%

Publisher:

Abstract:

An application based on the non-relational database MongoDB, integrated into an online tourist booking system.

Relevance:

30.00%

Publisher:

Abstract:

The thesis first introduces the concept of Big Data, describing its main characteristics, its uses, where it comes from, and the opportunities it can offer. It then explains the reasons behind the rise of the NoSQL movement, such as the need to manage Big Data while keeping a structure that stays flexible over time. After a comparison with traditional systems, these DBMSs are classified into families, touching on the structural concepts they are based on and then explaining how they work. The document-oriented database MongoDB is then described: its structural details, the concepts it builds on, and the goals it pursues are examined in depth, followed by an analysis of important specific operations, such as inserts and deletes, as well as how the database is queried. Thanks to the characteristics that make it highly performant, MongoDB was used as the database backend for a web application that displays a map of urban connectivity.
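Since the abstract walks through MongoDB's insert, delete, and query operations, a minimal pymongo sketch of those three operations may help (the database, collection, and field names are invented for illustration):

```python
# Minimal pymongo sketch of the insert / query / delete operations the
# thesis describes (database and field names are hypothetical).
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
points = client["connectivity"]["measurements"]  # hypothetical db/collection

# Insert a document describing one urban-connectivity measurement.
points.insert_one({"lat": 44.49, "lon": 11.34, "signal_dbm": -67})

# Query: all measurements with signal stronger than -70 dBm.
for doc in points.find({"signal_dbm": {"$gt": -70}}):
    print(doc)

# Delete the document again.
points.delete_one({"lat": 44.49, "lon": 11.34})
```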

Relevance:

30.00%

Publisher:

Abstract:

The thesis gives a general overview of time-series databases and the systems that manage them. The focus then turns to the DBMS InfluxDB. Finally, a project built on InfluxDB is presented.
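As an illustration of the kind of project described, here is a minimal sketch that writes one time-series point to an InfluxDB 2.x instance over its HTTP line-protocol endpoint (the URL, token, org, and bucket are placeholders):

```python
# Sketch: writing one point to InfluxDB 2.x via the /api/v2/write
# line-protocol endpoint (all credentials below are placeholders).
import time
import requests

resp = requests.post(
    "http://localhost:8086/api/v2/write",
    params={"org": "my-org", "bucket": "my-bucket", "precision": "ns"},
    headers={"Authorization": "Token MY_TOKEN"},
    data=f"cpu,host=server01 usage=0.64 {time.time_ns()}",
)
resp.raise_for_status()  # InfluxDB returns 204 No Content on success
```

The official influxdb-client library wraps this same endpoint; the raw HTTP call is shown only to keep the sketch dependency-light.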

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Detecting a benefit from closure of patent foramen ovale in patients with cryptogenic stroke is hampered by low rates of stroke recurrence and uncertainty about the causal role of patent foramen ovale in the index event. A method to predict patent foramen ovale-attributable recurrence risk is needed. However, individual databases generally have too few stroke recurrences to support risk modeling. Prior studies of this population have been limited by low statistical power for examining factors related to recurrence. AIMS: The aim of this study was to develop a database to support modeling of patent foramen ovale-attributable recurrence risk by combining extant data sets. METHODS: We identified investigators with extant databases including subjects with cryptogenic stroke investigated for patent foramen ovale, determined the availability and characteristics of data in each database, collaboratively specified the variables to be included in the Risk of Paradoxical Embolism database, harmonized the variables across databases, and collected new primary data when necessary and feasible. RESULTS: The Risk of Paradoxical Embolism database has individual clinical, radiologic, and echocardiographic data from 12 component databases, including subjects with cryptogenic stroke both with (n = 1925) and without (n = 1749) patent foramen ovale. In the patent foramen ovale subjects, a total of 381 outcomes (stroke, transient ischemic attack, death) occurred (median follow-up 2.2 years). While there were substantial variations in data collection between studies, there was sufficient overlap to define a common set of variables suitable for risk modeling. CONCLUSION: While individual studies are inadequate for modeling patent foramen ovale-attributable recurrence risk, collaboration between investigators has yielded a database with sufficient power to identify those patients at highest risk for a patent foramen ovale-related stroke recurrence who may have the greatest potential benefit from patent foramen ovale closure.
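A toy sketch of the variable-harmonization step described in the methods, assuming two component databases that store the same clinical variables under different names and codings (all column names and values below are invented):

```python
# Toy sketch of harmonizing variables across two component databases
# before pooling (column names and values are invented).
import pandas as pd

cohort_a = pd.DataFrame({"age_yrs": [54, 61], "pfo": [1, 0]})
cohort_b = pd.DataFrame({"age": [47], "pfo_present": ["yes"]})

# Map each database onto the common variable set.
a = cohort_a.rename(columns={"age_yrs": "age"})
b = cohort_b.rename(columns={"pfo_present": "pfo"})
b["pfo"] = (b["pfo"] == "yes").astype(int)   # recode yes/no to 1/0

pooled = pd.concat([a.assign(source="A"), b.assign(source="B")],
                   ignore_index=True)
print(pooled)
```

In practice each of the 12 component databases would get its own mapping onto the agreed common variable set before pooling.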

Relevance:

30.00%

Publisher:

Abstract:

This project examined the change in values in the still unfinished transitional period in Serbia during the 1990s and compared it with Greece in the same period. During this period the social and political transition affected the ruling value system primarily through changes in the modes of production and representation of reality. The most remarkable trait of this period in Serbia is the parallel and interwoven existence of different value systems. The very perception of reality has been blurred by the emergence of a very complex technical and ideological structure. Reality is presented by and through extensions and additions, the models of which are language and the media. With the development of media technology and global communication and information systems, representation has become the only available reality. This enables the media to intervene in reality openly and without limit, to manage and change it without constraint, and thus to have a direct impact on values. The difference between public and private is abolished, so the media start promoting exclusive collective values. However, since the collective thus loses its counterpart, it itself needs to be redefined. This confusion of values makes the possible results of their change uncertain. It will either open up a space for multiculturalism and social pluralism and thus completely replace the old systems of values, or result in an indefinite survival of different, often contradictory, value systems and conceptions of reality, which often lead to all forms of exclusivity and intolerance.

Relevance:

30.00%

Publisher:

Abstract:

We analyze three sets of doubly-censored cohort data on incubation times, estimating incubation distributions using semi-parametric methods and assessing the comparability of the estimates. Weibull models appear to be inappropriate for at least one of the cohorts, and the estimates for the different cohorts are substantially different. We use these estimates as inputs for backcalculation, using a nonparametric method based on maximum penalized likelihood. The different incubation distributions all produce fits to the reported AIDS counts that are as good as the fit from a nonstationary incubation distribution that models treatment effects, but the estimated infection curves are very different. We also develop a method for estimating nonstationarity as part of the backcalculation procedure and find that such estimates also depend very heavily on the assumed incubation distribution. We conclude that incubation distributions are so uncertain that meaningful error bounds are difficult to place on backcalculated estimates and that backcalculation may be too unreliable to be used without being supplemented by other sources of information on HIV prevalence and incidence.
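The core identity behind backcalculation is that expected AIDS diagnoses are the convolution of past infections with the incubation distribution. A toy discrete version is sketched below (invented numbers; the paper's penalized-likelihood estimator is not reproduced):

```python
# Toy discrete backcalculation identity: expected AIDS diagnoses in
# period t equal sum_s infections[s] * f[t - s], where f is the
# incubation probability mass function (all numbers invented).
import numpy as np

infections = np.array([100, 150, 200, 250, 300])  # new infections per period
f = np.array([0.05, 0.10, 0.20, 0.30, 0.35])      # incubation pmf

expected_aids = np.convolve(infections, f)[: len(infections)]
print(expected_aids)  # expected diagnoses in periods 0..4
```

Backcalculation inverts this relation: given reported AIDS counts and an assumed f, it estimates the infection curve, which is why the estimates above depend so heavily on the assumed incubation distribution.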

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: In recent years, treatment options for human immunodeficiency virus type 1 (HIV-1) infection have changed from nonboosted protease inhibitors (PIs) to nonnucleoside reverse-transcriptase inhibitors (NNRTIs) and boosted PI-based antiretroviral drug regimens, but the impact on immunological recovery remains uncertain. METHODS: From January 1996 through December 2004, all patients in the Swiss HIV Cohort were included if they received their first combination antiretroviral therapy (cART) and had known baseline CD4(+) T cell counts and HIV-1 RNA values (n = 3293). For follow-up, we used the Swiss HIV Cohort Study database update of May 2007. The mean (±SD) duration of follow-up was 26.8 ± 20.5 months. The follow-up time was limited to the duration of the first cART. CD4(+) T cell recovery was analyzed in 3 different treatment groups: nonboosted PI, NNRTI, or boosted PI. The end point was the absolute increase of CD4(+) T cell count in the 3 treatment groups after the initiation of cART. RESULTS: Two thousand five hundred ninety individuals (78.7%) initiated a nonboosted-PI regimen, 452 (13.7%) initiated an NNRTI regimen, and 251 (7.6%) initiated a boosted-PI regimen. Absolute CD4(+) T cell count increases at 48 months were as follows: in the nonboosted-PI group, from 210 to 520 cells/µL; in the NNRTI group, from 220 to 475 cells/µL; and in the boosted-PI group, from 168 to 511 cells/µL. In a multivariate analysis, the treatment group did not affect the response of CD4(+) T cells; however, increased age, pretreatment with nucleoside reverse-transcriptase inhibitors, serological tests positive for hepatitis C virus, Centers for Disease Control and Prevention stage C infection, lower baseline CD4(+) T cell count, and lower baseline HIV-1 RNA level were risk factors for smaller increases in CD4(+) T cell count. CONCLUSION: CD4(+) T cell recovery was similar in patients receiving nonboosted PI-, NNRTI-, and boosted PI-based cART.

Relevance:

30.00%

Publisher:

Abstract:

Recently, Branzei, Dimitrov, and Tijs (2003) introduced cooperative interval-valued games. Among other insights, they coined the notion of an interval core and proposed it as a solution concept for interval-valued games. In this paper we present a general mathematical programming algorithm that can be applied to find an element of the interval core. As an example, we discuss lot sizing with uncertain demand to provide an application for interval-valued games and to demonstrate how interval core elements can be computed. We also reveal that pitfalls exist if interval core elements are computed in a straightforward manner by considering the interval borders separately.
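A minimal sketch of finding an interval core element as a joint feasibility LP over both interval borders. The three-player game below is invented, and the constraints follow the usual interval-core definition (efficiency on both borders, coalitional rationality on both borders, and lower payoff at most the upper payoff for each player); this is a sketch, not the paper's algorithm:

```python
# Feasibility LP for an interval core element of a 3-player
# interval-valued game (game values invented for illustration).
# Decision variables: x = (x1-, x2-, x3-, x1+, x2+, x3+).
import numpy as np
from scipy.optimize import linprog

n = 3
v = {  # coalition -> (lower, upper) interval worth
    (0,): (0, 1), (1,): (0, 1), (2,): (0, 1),
    (0, 1): (2, 3), (0, 2): (2, 3), (1, 2): (2, 3),
    (0, 1, 2): (5, 7),
}

A_ub, b_ub = [], []
for S, (lo, hi) in v.items():
    if len(S) == n:
        continue
    for off, bound in ((0, lo), (n, hi)):   # lower- and upper-border games
        row = np.zeros(2 * n)
        row[[i + off for i in S]] = -1.0    # -x(S) <= -v(S)
        A_ub.append(row)
        b_ub.append(-bound)
for i in range(n):                          # coupling: x_i^- <= x_i^+
    row = np.zeros(2 * n)
    row[i], row[i + n] = 1.0, -1.0
    A_ub.append(row)
    b_ub.append(0.0)

A_eq = np.zeros((2, 2 * n))                 # border efficiency
A_eq[0, :n] = 1.0
A_eq[1, n:] = 1.0
b_eq = v[(0, 1, 2)]

res = linprog(np.zeros(2 * n), A_ub=np.array(A_ub), b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(None, None)] * (2 * n))
print("lower payoffs:", res.x[:n])
print("upper payoffs:", res.x[n:])
```

Solving the two border games separately can instead return payoff vectors with x_i^- > x_i^+ for some player, which is the pitfall noted above; the coupling constraints are what rule this out.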

Relevance:

30.00%

Publisher:

Abstract:

Increasing demand for marketing accountability requires an efficient allocation of marketing expenditures. Managers who know the elasticity of their marketing instruments can allocate their budgets optimally. Meta-analyses offer a basis for deriving benchmark elasticities for advertising. Although they provide a variety of valuable insights, a major shortcoming of prior meta-analyses is that they report only generalized results, as the disaggregated raw data are not made available. This problem is highly relevant because coding of empirical studies, at least to a certain extent, involves subjective judgment. For this reason, meta-studies would be more valuable if researchers and practitioners had access to disaggregated data allowing them to conduct further analyses tailored to their individual (e.g., product-level-specific) interests. We are the first to address this gap by providing (1) an advertising elasticity database (AED) and (2) empirical generalizations about advertising elasticities and their determinants. Our findings indicate that the average current-period advertising elasticity is 0.09, which is substantially smaller than the value of 0.12 that was recently reported by Sethuraman, Tellis, and Briesch (2011). Furthermore, our meta-analysis reveals a wide range of significant determinants of advertising elasticity. For example, we find that advertising elasticities are higher (i) for hedonic and experience goods than for other goods; (ii) for new than for established goods; (iii) when advertising is measured in gross rating points (GRP) instead of absolute terms; and (iv) when the lagged dependent or lagged advertising variable is omitted.
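To make the headline estimate concrete, under a constant-elasticity response model S = c·A^β (a standard textbook assumption, not something estimated in this paper), an elasticity of β = 0.09 translates advertising changes into sales changes as follows:

```python
# What an advertising elasticity of 0.09 implies under a
# constant-elasticity sales response model S = c * A**beta.
beta = 0.09

for ad_change in (0.01, 0.50, 1.00):       # +1%, +50%, +100% ad spend
    sales_change = (1 + ad_change) ** beta - 1
    print(f"+{ad_change:.0%} advertising -> +{sales_change:.2%} sales")
```

Doubling the advertising budget would thus lift sales by only about 6%, which is why small differences in benchmark elasticities matter for budget allocation.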

Relevance:

30.00%

Publisher:

Abstract:

The extraction of the finite temperature heavy quark potential from lattice QCD relies on a spectral analysis of the Wilson loop. General arguments tell us that the lowest lying spectral peak encodes, through its position and shape, the real and imaginary parts of this complex potential. Here we benchmark this extraction strategy using leading order hard-thermal loop (HTL) calculations. In other words, we analytically calculate the Wilson loop and determine the corresponding spectrum. By fitting its lowest lying peak we obtain the real and imaginary parts and confirm that the knowledge of the lowest peak alone is sufficient for obtaining the potential. Access to the full spectrum allows an investigation of spectral features that do not contribute to the potential but can pose a challenge to numerical attempts of an analytic continuation from imaginary time data. Differences in these contributions between the Wilson loop and gauge fixed Wilson line correlators are discussed. To better understand the difficulties in a numerical extraction we deploy the maximum entropy method with extended search space to HTL correlators in Euclidean time and observe how well the known spectral function and values for the real and imaginary parts are reproduced. Possible avenues for improvement of the extraction strategy are discussed.
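As a minimal sketch of the fitting step described above, and assuming the lowest-lying peak is well approximated by a Lorentzian, the peak position (Re V) and half-width (Im V) can be read off with a simple fit (synthetic spectrum; not the actual HTL calculation):

```python
# Sketch: extract Re[V] (peak position) and Im[V] (half-width) by
# fitting the lowest-lying spectral peak with a Lorentzian
# (synthetic spectrum; parameters invented for illustration).
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, re_v, im_v, amp):
    return amp * im_v / ((w - re_v) ** 2 + im_v ** 2)

omega = np.linspace(-2.0, 2.0, 400)
rng = np.random.default_rng(1)
rho = lorentzian(omega, 0.5, 0.15, 1.0) + rng.normal(0.0, 0.01, omega.size)

popt, _ = curve_fit(lorentzian, omega, rho, p0=[0.0, 0.1, 1.0])
print(f"Re V = {popt[0]:.3f}, Im V = {popt[1]:.3f}")
```

In the full HTL spectrum, additional non-Lorentzian structures away from the lowest peak are exactly the features that, as noted above, complicate a numerical analytic continuation even though they do not contribute to the potential.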