922 results for Operational Data Stores


Relevance:

30.00%

Publisher:

Abstract:

We present the results of an operational use of experimentally measured optical tomograms to determine state characteristics (purity) while avoiding any reconstruction of quasiprobabilities. We also develop a natural way to estimate the errors (both statistical and systematic) from an analysis of the experimental data themselves. The precision of the experiment can be increased by postselecting the data with minimal (systematic) errors. We demonstrate these techniques on coherent and photon-added coherent states measured via time-domain improved homodyne detection. The operational use and precision of the data allowed us to check purity-dependent uncertainty relations and uncertainty relations for Shannon and Rényi entropies.
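
The abstract does not give the entropy formulas; purely as a reminder of the quantities being checked, here is a minimal Python sketch (not from the paper) that computes Shannon and Rényi entropies from an empirical probability distribution. The order parameter alpha and the toy histogram are chosen for illustration only:

import numpy as np

def shannon_entropy(p):
    """Shannon entropy H = -sum p_i ln p_i of a normalized distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def renyi_entropy(p, alpha):
    """Renyi entropy H_alpha = ln(sum p_i**alpha) / (1 - alpha), alpha != 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.log(np.sum(p ** alpha)) / (1.0 - alpha)

# Example: a histogram of quadrature outcomes, normalized to a distribution.
counts = np.array([5, 20, 50, 20, 5], dtype=float)
p = counts / counts.sum()
print(shannon_entropy(p), renyi_entropy(p, alpha=2.0))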

Relevance:

30.00%

Publisher:

Abstract:

A faithful depiction of the tropical atmosphere requires three-dimensional sets of observations. Despite the increasing number of observations presently available, these will hardly ever encompass the entire atmosphere and, in addition, observations have errors. Additional (background) information will always be required to complete the picture. Valuable added information comes from the physical laws governing the flow, usually mediated via a numerical weather prediction (NWP) model. These models are, however, never going to be error-free, which is why a reliable estimate of their errors poses a real challenge: the whole truth will never be within our grasp. The present thesis addresses the question of how to improve the analysis procedures for NWP in the tropics. Improvements are sought by addressing the following issues: the efficiency of the internal model adjustment; the potential of reliable background-error information, as compared to observations; the impact of new, space-borne line-of-sight wind measurements; and the usefulness of multivariate relationships for data assimilation in the tropics. Most NWP assimilation schemes are effectively univariate near the equator. In this thesis, a multivariate formulation of variational data assimilation in the tropics has been developed. The proposed background-error model supports mass-wind coupling based on convectively coupled equatorial waves. The resulting assimilation model produces balanced analysis increments and thereby increases the efficiency of all types of observations. Idealized adjustment and multivariate analysis experiments highlight the importance of direct wind measurements in the tropics. In particular, the presented results confirm the superiority of wind observations over mass data, even when exact multivariate relationships are available from the background information. The internal model adjustment is also more efficient for wind observations than for mass data. In accordance with these findings, new satellite wind observations are expected to contribute to the improvement of NWP and climate modeling in the tropics. Although incomplete, the new wind-field information has the potential to reduce uncertainties in the tropical dynamical fields, if used together with the existing satellite mass-field measurements. The results obtained by applying the new background-error representation to the tropical short-range forecast errors of a state-of-the-art NWP model suggest that achieving useful tropical multivariate relationships may be feasible within an operational NWP environment.
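
The role of a multivariate background-error covariance can be illustrated with a minimal numpy sketch of the standard linear analysis update x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b). The toy matrices below are invented, not from the thesis; the off-diagonal term of B is what makes an observation of one variable correct the other:

import numpy as np

def analysis_increment(xb, y, H, B, R):
    """Return x_a = x_b + B H^T (H B H^T + R)^-1 (y - H x_b)."""
    innovation = y - H @ xb
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
    return xb + K @ innovation

# Two-variable toy state (say, a mass and a wind component); the coupled
# background-error covariance makes the assimilation multivariate.
xb = np.array([0.0, 0.0])                 # background state
B  = np.array([[1.0, 0.8], [0.8, 1.0]])   # background-error covariance (coupled)
H  = np.array([[1.0, 0.0]])               # observe the first variable only
R  = np.array([[0.5]])                    # observation-error covariance
y  = np.array([1.0])                      # observation
print(analysis_increment(xb, y, H, B, R)) # both components are updated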

Relevance:

30.00%

Publisher:

Abstract:

The quality of temperature and humidity retrievals from the infrared SEVIRI sensors on the geostationary Meteosat Second Generation (MSG) satellites is assessed by means of a one-dimensional variational (1D-Var) algorithm. The study is performed with the aim of improving the spatial and temporal resolution of the observations available to feed analysis systems designed for high-resolution, regional-scale numerical weather prediction (NWP) models. The non-hydrostatic forecast model COSMO (COnsortium for Small scale MOdelling), in the ARPA-SIM operational configuration, is used to provide background fields. Only clear-sky observations over sea are processed. An optimised 1D-Var set-up comprising the two water vapour and the three window channels is selected. It maximises the reduction of errors in the model backgrounds while ensuring ease of operational implementation through accurate bias-correction procedures and correct radiative transfer simulations. The 1D-Var retrieval quality is first quantified in relative terms, employing statistics to estimate the reduction in the background model errors. Additionally, the absolute retrieval accuracy is assessed by comparing the analysis with independent radiosonde and satellite observations. The inclusion of satellite data brings a substantial reduction in the warm and dry biases present in the forecast model. Moreover, it is shown that the retrieval profiles generated by the 1D-Var are well correlated with the radiosonde measurements. Subsequently, the 1D-Var technique is applied to two three-dimensional case studies: a false-alarm case that occurred in Friuli Venezia Giulia on 8 July 2004 and a heavy-precipitation case that occurred in the Emilia-Romagna region between 9 and 12 April 2005. The impact of satellite data for these two events is evaluated in terms of increments in the column-integrated water vapour and saturation water vapour, in the 2-metre temperature and specific humidity, and in the surface temperature. To improve the 1D-Var technique, a method to calculate flow-dependent model error covariance matrices is also assessed. The approach employs members from an ensemble forecast system generated by perturbing physical parameterisation schemes inside the model. The improved set-up applied to the 8 July 2004 case shows a substantially neutral impact.
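
The flow-dependent model error covariances mentioned at the end are built from an ensemble of forecasts with perturbed physics; a minimal sketch of the usual sample-covariance estimate from ensemble perturbations (with an invented toy ensemble, not the ensemble used in the study) is:

import numpy as np

def ensemble_covariance(ensemble):
    """Sample background-error covariance from an ensemble of forecasts.

    ensemble: array of shape (n_members, n_state). Returns an
    (n_state, n_state) covariance built from member perturbations
    about the ensemble mean."""
    X = np.asarray(ensemble, dtype=float)
    perturbations = X - X.mean(axis=0)
    return perturbations.T @ perturbations / (X.shape[0] - 1)

# Illustrative 5-member ensemble of a 3-variable state (e.g. T and q values).
members = np.random.default_rng(0).normal(size=(5, 3))
B_flow = ensemble_covariance(members)   # flow-dependent B for this date/case
print(B_flow)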

Relevance:

30.00%

Publisher:

Abstract:

Assimilation in the Unstable Subspace (AUS) was introduced by Trevisan and Uboldi in 2004, and developed by Trevisan, Uboldi and Carrassi, to minimize the analysis and forecast errors by exploiting the flow-dependent instabilities of the forecast-analysis cycle system, which may be thought of as a system forced by observations. In the AUS scheme the assimilation is obtained by confining the analysis increment to the unstable subspace of the forecast-analysis cycle system, so that it has the same structure as the dominant instabilities of the system. The unstable subspace is estimated by Breeding on the Data Assimilation System (BDAS). AUS-BDAS has already been tested in realistic models and observational configurations, including a quasi-geostrophic model and a high-dimensional, primitive-equation ocean model; the experiments include both fixed and "adaptive" observations. In these contexts, the AUS-BDAS approach greatly reduces the analysis error, with reasonable computational costs compared, for example, with a prohibitively expensive full Extended Kalman Filter. This is a follow-up study in which we revisit the AUS-BDAS approach in the more basic, highly nonlinear Lorenz 1963 convective model. We run observing system simulation experiments in a perfect-model setting, and also with two types of model error: random and systematic. In the different configurations examined, and in a perfect-model setting, AUS once again shows better efficiency than other advanced data assimilation schemes. In the present study, we develop an iterative scheme that leads to a significant improvement of the overall assimilation performance with respect to standard AUS as well. In particular, it improves the tracking of regime changes, at a low computational cost. Other data assimilation schemes need estimates of ad hoc parameters, which have to be tuned for the specific model at hand. In numerical weather prediction models, tuning these parameters, and in particular estimating the model error covariance matrix, may turn out to be quite difficult. Our proposed approach, instead, may be easier to implement in operational models.
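
For reference, the Lorenz 1963 convective model used as the test bed is the three-variable system dx/dt = sigma(y - x), dy/dt = x(rho - z) - y, dz/dt = xy - beta z. The sketch below integrates it with the classical chaotic parameters (sigma = 10, rho = 28, beta = 8/3), which are standard choices and not necessarily those of the study:

import numpy as np

def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz 1963 convective model."""
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, state, dt):
    """One fourth-order Runge-Kutta step."""
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0

# Generate a "truth" trajectory that observing system simulation
# experiments could sample observations from.
state = np.array([1.0, 1.0, 1.0])
trajectory = [state]
for _ in range(2000):
    state = rk4_step(lorenz63, state, dt=0.01)
    trajectory.append(state)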

Relevance:

30.00%

Publisher:

Abstract:

Next generation sequencing techniques are a powerful tool for many applications, especially since their costs began to fall and the quality of their data to improve. One application of sequencing is certainly metagenomics, that is, the analysis of microorganisms within a given environment, such as the gut. In this field, sequencing has made it possible to sample bacterial species that could not be accessed with traditional culture techniques. The study of gut bacterial populations is very important, since these populations turn out to be altered as an effect, but also as a cause, of numerous diseases, such as metabolic diseases (obesity, type 2 diabetes, etc.). In this work we started from next generation sequencing data of the gut microbiota of 5 animals (16S rRNA sequencing) [Jeraldo et al.]. We applied optimized algorithms (UCLUST) to cluster the generated sequences into OTUs (Operational Taxonomic Units), which correspond to clusters of bacterial species at a given taxonomic level. We then applied the master-equation ecological theory developed by [Volkov et al.] to describe the relative species abundance (RSA) distribution of our samples. The RSA is a well-established tool for studying the biodiversity of ecological systems and shows a transition from a log-series to a lognormal behaviour when moving from small, isolated local communities to larger metacommunities made up of several local communities that can interact to some extent. We showed that the OTUs of gut bacterial populations form an ecological system that follows these same rules when obtained using different similarity thresholds in the clustering procedure. We therefore expect that this result can be exploited to understand the dynamics of bacterial populations and hence how they change in the presence of particular diseases.
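
A minimal, purely illustrative sketch of the analysis step described above: building relative abundances and a Preston-octave summary from OTU counts, with a scipy lognormal fit standing in for the Volkov et al. master-equation theory. The counts below are invented:

import numpy as np
from scipy import stats

# Illustrative OTU abundance table (reads per OTU in one sample); real data
# would come from the UCLUST clustering step described in the abstract.
otu_counts = np.array([1200, 850, 400, 90, 60, 25, 12, 7, 3, 1], dtype=float)

relative_abundance = otu_counts / otu_counts.sum()

# Preston-style RSA summary: histogram of log2 abundances ("octaves").
octaves = np.floor(np.log2(otu_counts)).astype(int)
rsa_hist = np.bincount(octaves)

# Fit a lognormal to the abundances as a simple stand-in for the
# master-equation theory (illustrative only).
shape, loc, scale = stats.lognorm.fit(otu_counts, floc=0)
print(rsa_hist, shape, scale)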

Relevance:

30.00%

Publisher:

Abstract:

The past decade has seen the energy consumption of servers and Internet Data Centers (IDCs) skyrocket. A recent survey estimated that worldwide spending on servers and cooling has risen above $30 billion and is likely to exceed spending on new server hardware. The rapid rise in energy consumption poses a serious threat to both energy resources and the environment, which makes green computing not only worthwhile but also necessary. This dissertation tackles the challenges of reducing the energy consumption of server systems and of reducing the cost for Online Service Providers (OSPs). Two distinct subsystems account for most of an IDC's power: the server system, which accounts for 56% of the total power consumption of an IDC, and the cooling and humidification systems, which account for about 30%. The server system dominates the energy consumption of an IDC, and its power draw can vary drastically with data center utilization. In this dissertation, we propose three models to achieve energy efficiency in web server clusters: an energy-proportional model, an optimal server allocation and frequency adjustment strategy, and a constrained Markov model. The proposed models combine Dynamic Voltage/Frequency Scaling (DV/FS) and Vary-On, Vary-Off (VOVF) mechanisms that work together for greater energy savings, and corresponding strategies are proposed to deal with the transition overheads. We further extend server energy management to the IDC's cost management, helping OSPs conserve energy, manage their own electricity costs, and lower carbon emissions. We have developed an optimal energy-aware load dispatching strategy that periodically maps more requests to the locations with lower electricity prices. A carbon emission limit is imposed, and the volatility of the carbon offset market is also considered. Two energy-efficient strategies are applied to the server system and the cooling system, respectively. With the rapid development of cloud services, we also carry out research to reduce server energy consumption in cloud computing environments. In this work, we propose a new live virtual machine (VM) placement scheme that can effectively map VMs to Physical Machines (PMs) with substantial energy savings in a heterogeneous server cluster. A VM/PM mapping probability matrix is constructed, in which each VM request is assigned a probability of running on each PM. The VM/PM mapping probability matrix takes into account resource limitations, VM operation overheads, and server reliability as well as energy efficiency. The evolution of Internet Data Centers and the increasing demands of web services raise great challenges for improving the energy efficiency of IDCs. We also identify several potential areas for future research in each chapter.
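
A minimal sketch of a generic energy-proportional cluster power model in the spirit of the DV/FS and VOVF mechanisms described above; the linear power curve and all numbers are common modelling assumptions, not parameters from the dissertation:

def server_power(utilization, p_idle=100.0, p_peak=250.0):
    """Linear energy-proportional model: idle power plus a utilization-
    dependent dynamic part (watts; illustrative numbers)."""
    return p_idle + (p_peak - p_idle) * utilization

def cluster_power(request_rate, per_server_capacity, active_servers):
    """VOVF view: only active servers draw power; load is spread evenly."""
    utilization = min(1.0, request_rate / (per_server_capacity * active_servers))
    return active_servers * server_power(utilization)

# Turning servers off (VOVF) saves idle power; lowering frequency (DVFS)
# would further reduce p_peak for the remaining servers.
print(cluster_power(request_rate=900.0, per_server_capacity=300.0, active_servers=4))
print(cluster_power(request_rate=900.0, per_server_capacity=300.0, active_servers=3))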

Relevance:

30.00%

Publisher:

Abstract:

Time-averaged discharge rates (TADR) were calculated for five lava flows at Pacaya Volcano (Guatemala), using an adapted version of a previously developed satellite-based model. Imagery acquired during periods of effusive activity between 2000 and 2010 was obtained from two sensors of differing temporal and spatial resolution: the Moderate Resolution Imaging Spectroradiometer (MODIS) and the Geostationary Operational Environmental Satellites (GOES) Imager. A total of 2873 MODIS and 2642 GOES images were searched manually for volcanic "hot spots". MODIS imagery, with its superior spatial resolution, produced better results than GOES imagery, so only MODIS data were used for the quantitative analyses. Spectral radiances were transformed into TADR via two methods: first, by best-fitting some of the parameters of the TADR estimation model (i.e. density, vesicularity, crystal content, temperature change) to match flow volumes previously estimated from ground surveys and aerial photographs; and second, by measuring those parameters from lava samples to make independent estimates. A relatively stable relationship was defined using the second method, which suggests the possibility of estimating lava discharge rates in near-real time during future volcanic crises at Pacaya.
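
The abstract names the fitted parameters (density, vesicularity, crystal content, temperature change) without stating the model; the sketch below uses a generic Harris-type heat-budget relation, TADR = Q_rad / (rho_bulk * (c_p * dT + phi * c_L)), as a hedged stand-in, with placeholder parameter values rather than the values fitted or measured for Pacaya:

def tadr_from_radiant_flux(q_rad, density=2600.0, vesicularity=0.2,
                           cp=1150.0, delta_t=200.0,
                           crystal_fraction=0.3, latent_heat=3.5e5):
    """Convert a radiant heat flux (W) to a time-averaged discharge rate (m^3/s)
    using a generic heat-budget relation:

        TADR = Q_rad / (rho_bulk * (cp * dT + phi * c_L))

    All parameter values are placeholders for illustration only."""
    bulk_density = density * (1.0 - vesicularity)
    return q_rad / (bulk_density * (cp * delta_t + crystal_fraction * latent_heat))

# Example: a hot-spot radiant flux of 5 GW (illustrative), giving a TADR of a few m^3/s.
print(tadr_from_radiant_flux(5.0e9))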

Relevance:

30.00%

Publisher:

Abstract:

Seasonal snow cover is of great environmental and socio-economic importance for the European Alps. Therefore, a high priority has been assigned to quantifying its temporal and spatial variability. Complementary to land-based monitoring networks, optical satellite observations can be used to derive spatially comprehensive information on snow cover extent. For understanding long-term changes in alpine snow cover extent, the data acquired by the Advanced Very High Resolution Radiometer (AVHRR) sensors mounted onboard the National Oceanic and Atmospheric Administration (NOAA) and Meteorological Operational satellite (MetOp) platforms offer a unique source of information. In this paper, we present the first space-borne 1 km snow extent climatology for the Alpine region derived from AVHRR data over the period 1985-2011. The objective of this study is twofold: first, to generate a new set of cloud-free satellite snow products using a specific cloud gap-filling technique, and second, to examine the spatiotemporal distribution of snow cover in the European Alps over the last 27 years from the satellite perspective. For this purpose, snow parameters such as snow onset day, snow cover duration (SCD), melt-out date and snow cover area percentage (SCA) were employed to analyze the spatiotemporal variability of snow cover over the course of three decades. On the regional scale, significant trends were found toward a shorter SCD at lower elevations in the south-east and south-west. However, our results do not show any significant trends in the monthly mean SCA over the last 27 years. This is in agreement with other research findings and may indicate a deceleration of the decreasing snow trend in the Alpine region. Furthermore, such data may provide spatially and temporally homogeneous snow information for comprehensive use in related research fields (e.g., hydrologic and economic applications) or can serve as a reference for climate models.
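
The snow parameters named above (snow onset day, SCD, melt-out date, SCA) can be derived from daily binary snow maps; the sketch below uses simplified, assumed definitions that are not necessarily those of the study:

import numpy as np

def snow_parameters(daily_snow):
    """daily_snow: 1-D boolean array, one value per day of a hydrological year
    for a single pixel (True = snow-covered). Returns simplified versions of
    the parameters named in the abstract."""
    days = np.flatnonzero(daily_snow)
    scd = int(days.size)                               # snow cover duration
    onset = int(days[0]) if scd else None              # first snow-covered day
    melt_out = int(days[-1]) if scd else None          # last snow-covered day
    return onset, scd, melt_out

def snow_cover_area(snow_map):
    """Snow cover area percentage (SCA) over a 2-D binary snow map."""
    return 100.0 * np.count_nonzero(snow_map) / snow_map.size

daily = np.zeros(365, dtype=bool)
daily[60:200] = True                                   # illustrative winter
print(snow_parameters(daily), snow_cover_area(np.array([[1, 0], [1, 1]])))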

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Kidney recipients maintaining prolonged allograft survival in the absence of immunosuppressive drugs and without evidence of rejection are presumed to be exceptional. The ERA-EDTA-DESCARTES working group, together with Nantes University, launched a European-wide survey to identify new patients, describe them and estimate their frequency for the first time. METHODS: Seventeen coordinators distributed a questionnaire to 256 transplant centres in 28 countries in order to report as many 'operationally tolerant' patients (TOL; defined as having a serum creatinine <1.7 mg/dL and proteinuria <1 g/day or g/g creatinine despite at least 1 year without any immunosuppressive drug) and 'almost tolerant' patients (minimally immunosuppressed (MIS) patients receiving low-dose steroids) as possible. We recorded their number and the total number of kidney transplants performed at each centre to calculate their frequency. RESULTS: One hundred and forty-seven questionnaires were returned and we identified 66 TOL (61 with complete data) and 34 MIS patients. Of the 61 TOL patients, 26 were previously described by the Nantes group and 35 new patients are presented here. Most of them were non-compliant patients. At data collection, 31/35 patients were alive and 22/31 were still TOL. Of the remaining 9/31, 2 were restarted on immunosuppressive drugs and 7 had rising creatinine, of whom 3 resumed dialysis. Considering all patients, 10-year death-censored graft survival post-immunosuppression weaning reached 85% in TOL patients and 100% in MIS patients. With 218 913 kidney recipients surveyed, the cumulative incidences of operational tolerance and almost tolerance were estimated at 3 and 1.5 per 10 000 kidney recipients, respectively. CONCLUSIONS: In kidney transplantation, operational tolerance and almost tolerance are infrequent findings associated with excellent long-term death-censored graft survival.
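
The cumulative incidences follow from simple proportions of the counts quoted in the abstract; a minimal check:

# Cumulative incidence per 10,000 kidney recipients, using the counts quoted
# in the abstract (66 TOL and 34 MIS patients among 218,913 recipients).
tol_patients, mis_patients, recipients = 66, 34, 218_913

tol_per_10000 = 10_000 * tol_patients / recipients   # about 3.0
mis_per_10000 = 10_000 * mis_patients / recipients   # about 1.55, quoted as 1.5
print(round(tol_per_10000, 1), round(mis_per_10000, 1))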

Relevance:

30.00%

Publisher:

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis

We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods used. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 errors per 10,000 files to 5019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70-5019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers. Error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis

Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language. This lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality, which builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality in clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data

Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses. Factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors that were not found in the literature, and differed from the literature by 5 factors in the top 25%. The Delphi results refuted seven factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate the content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms

Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates. Distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support employed in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and exceedingly so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working-memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping or calculation. The representational analysis used here can be employed to identify data elements with high cognitive demands.
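
The pooled analysis expresses accuracy as errors per 10,000 fields; a minimal sketch of how such rates are computed and pooled across studies, with invented study counts:

# Error rates expressed as errors per 10,000 fields, pooled across studies.
# The (errors, fields_inspected) pairs below are invented for illustration.
studies = [(12, 45_000), (7, 20_000), (150, 60_000)]

def per_10000(errors, fields):
    return 10_000 * errors / fields

rates = [per_10000(e, f) for e, f in studies]
pooled = per_10000(sum(e for e, _ in studies), sum(f for _, f in studies))
print(rates, round(pooled, 1))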

Relevance:

30.00%

Publisher:

Abstract:

A simple method for the efficient inversion of arbitrary radiative transfer models for image analysis is presented. The method operates by representing the shape of the function that maps model parameters to spectral reflectance with an adaptive look-up tree (ALUT) that evenly distributes the discretization error of the tabulated reflectances in spectral space. A post-processing step organizes the data into a binary space partitioning tree that facilitates an efficient inversion search algorithm. In an example shallow-water remote sensing application, the method performs faster than an implementation of a previously published methodology and achieves the same accuracy in bathymetric retrievals. The method has no user-configuration parameters requiring expert knowledge and minimizes the number of forward model runs required, making it highly suitable for routine operational implementation of image analysis methods. For the research community, straightforward and robust inversion allows research to focus on improving the radiative transfer models themselves without the added complication of devising an inversion strategy.
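
The paper's ALUT and binary space partitioning structures are not reproduced here; the sketch below illustrates only the underlying idea, inverting a forward model by nearest-neighbour search over tabulated spectra, using a k-d tree from scipy and a toy two-parameter forward model as stand-ins rather than the published method:

import numpy as np
from scipy.spatial import cKDTree

# Toy forward model: maps two parameters (e.g. depth, bottom albedo) to a
# 4-band "reflectance" spectrum. Purely illustrative, not a real RT model.
def forward_model(depth, albedo):
    bands = np.array([0.4, 0.5, 0.6, 0.7])
    return albedo * np.exp(-depth * bands)

# Build a look-up table over a regular parameter grid (the paper's ALUT
# instead refines the grid adaptively to even out the discretization error).
depths = np.linspace(0.1, 20.0, 60)
albedos = np.linspace(0.05, 0.5, 30)
params = np.array([(d, a) for d in depths for a in albedos])
spectra = np.array([forward_model(d, a) for d, a in params])

tree = cKDTree(spectra)   # index tabulated spectra for fast nearest-neighbour search

# Invert an "observed" pixel spectrum by finding the closest tabulated spectrum.
observed = forward_model(7.3, 0.21) + np.random.default_rng(1).normal(0, 1e-3, 4)
_, idx = tree.query(observed)
print(params[idx])        # retrieved (depth, albedo)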