965 results for Eventual consistency
Abstract:
In just a few years, cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches, which are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in a read may not always hold the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run time according to the application's requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, which allows it to elastically scale up or down the number of replicas involved in read operations so as to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the amount of stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% while maintaining the applications' desired consistency requirements, when compared to the strong consistency model in Cassandra.
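A minimal sketch of the adaptive idea described above, choosing a Cassandra read consistency level from an estimated stale-read rate; the estimator, the thresholds, and all function names below are illustrative assumptions, not Harmony's actual model.

    # Illustrative sketch (not Harmony's implementation): pick a Cassandra
    # consistency level for each read from an estimated probability of
    # reading stale data. The estimator below is a crude placeholder.

    def estimate_stale_read_rate(write_rate, read_rate, replication_lag):
        # Stale reads become more likely as write pressure and replication
        # lag grow relative to the read rate.
        pressure = write_rate * replication_lag
        return min(1.0, pressure / (pressure + read_rate + 1e-9))

    def choose_consistency_level(stale_rate, tolerated_stale_rate):
        # Scale the number of replicas involved in a read up or down so the
        # expected fraction of stale reads stays within the tolerance.
        if stale_rate <= tolerated_stale_rate:
            return "ONE"      # fewest replicas: fastest, may return stale data
        elif stale_rate <= 2 * tolerated_stale_rate:
            return "QUORUM"   # a majority of replicas
        else:
            return "ALL"      # every replica: strongest consistency

    # Example: a write-heavy workload with 50 ms replication lag and a 5%
    # stale-read tolerance ends up using ALL for its reads.
    rate = estimate_stale_read_rate(write_rate=800, read_rate=200, replication_lag=0.05)
    print(choose_consistency_level(rate, tolerated_stale_rate=0.05))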
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering
Abstract:
Large-scale distributed data stores rely on optimistic replication to scale and to remain highly available in the face of network partitions. Managing data without coordination results in eventually consistent data stores that allow concurrent data updates. These systems often use anti-entropy mechanisms (like Merkle trees) to detect and repair divergent data versions across nodes. However, in practice hash-based data structures are too expensive for large amounts of data and create too many false conflicts. Another aspect of eventual consistency is detecting write conflicts. Logical clocks are often used to track data causality, which is necessary to detect causally concurrent writes on the same key. However, there is a non-negligible metadata overhead per key, which also keeps growing with time, proportionally to the node churn rate. Another challenge is deleting keys while respecting causality: while the values can be deleted, per-key metadata cannot be permanently removed without coordination. We introduce a new causality management framework for eventually consistent data stores that leverages node logical clocks (Bitmapped Version Vectors) and a new key logical clock (Dotted Causal Container) to provide advantages on multiple fronts: 1) a new efficient and lightweight anti-entropy mechanism; 2) greatly reduced per-key causality metadata size; 3) accurate key deletes without permanent metadata.
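The write-conflict detection mentioned above can be illustrated with the classic per-key version-vector check; this is a generic sketch, not the paper's Bitmapped Version Vectors or Dotted Causal Containers.

    # Generic version-vector causality check: two writes to the same key are
    # causally concurrent (a true conflict) when neither clock dominates the other.

    def dominates(a, b):
        # True if version vector `a` has seen at least everything `b` has seen.
        return all(a.get(node, 0) >= counter for node, counter in b.items())

    def compare(a, b):
        if dominates(a, b) and dominates(b, a):
            return "equal"
        if dominates(a, b):
            return "a-after-b"
        if dominates(b, a):
            return "b-after-a"
        return "concurrent"   # keep both versions as siblings

    # Example: replicas n1 and n2 accepted independent writes for the same key.
    v1 = {"n1": 2, "n2": 1}
    v2 = {"n1": 1, "n2": 2}
    print(compare(v1, v2))    # -> concurrent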
Abstract:
We present a concurrent semantics (i.e. a semantics where concurrency is explicitly represented) for CC programs with atomic tells. This allows concurrency, dependency, and nondeterminism information to be derived for such languages. The ability to treat failure information also brings CLP programs into the range of applicability of our semantics: although such programs are not concurrent, the concurrency information derived in the semantics may be interpreted as possible parallelism, thus allowing the computation steps that appear to be concurrent in the net to be safely parallelized. Dually, the dependency information may also be interpreted as necessary sequentialization, and can thus be exploited to schedule CC programs. The fact that the semantic structure contains dependency information suggests a new tell operation, which checks for consistency only against the constraints it depends on, achieving a reasonable trade-off between efficiency and atomicity.
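The dependency-restricted tell suggested at the end can be pictured with a toy store of equality constraints; this sketch only illustrates the idea, not the paper's semantics, and the representation of constraints and dependencies is an assumption.

    # Toy illustration: a tell that checks consistency only against the
    # constraints the new constraint depends on (here: those over the same
    # variable), instead of against the whole store.

    store = []   # constraints already told, as (variable, value) equalities

    def tell(var, value):
        dependent = [(v, x) for (v, x) in store if v == var]
        if any(x != value for (_, x) in dependent):
            return False          # inconsistent with a constraint it depends on
        store.append((var, value))
        return True

    print(tell("x", 1))   # True
    print(tell("y", 2))   # True, never inspects the constraint on x
    print(tell("x", 3))   # False, conflicts with x = 1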
Abstract:
Monte Carlo track structure (MCTS) simulations have been recognized as useful tools for radiobiological modeling. However, the authors noticed several issues regarding the consistency of reported data. Therefore, in this work, they analyze the impact of various user-defined parameters on simulated direct DNA damage yields. In addition, they draw attention to discrepancies in the published literature on DNA strand break (SB) yields and selected methodologies. The MCTS code Geant4-DNA was used to compare radial dose profiles in a nanometer-scale region of interest (ROI) for photon sources of varying sizes and energies. Then, electron tracks of 0.28 keV-220 keV were superimposed on a geometric DNA model composed of 2.7 × 10^6 nucleosomes, and SBs were simulated according to four definitions based on energy deposits or energy transfers in DNA strand targets compared to a threshold energy E_TH. The SB frequencies and complexities in nucleosomes as a function of incident electron energy were obtained. SBs were classified into higher-order clusters such as single and double strand breaks (SSBs and DSBs) based on inter-SB distances and on the number of affected strands. Comparisons of different nonuniform dose distributions lacking charged-particle equilibrium may lead to erroneous conclusions regarding the effect of energy on relative biological effectiveness. The energy-transfer-based SB definitions give SB yields similar to the energy-deposit-based one when E_TH ≈ 10.79 eV, but deviate significantly for higher E_TH values. Between 30 and 40 nucleosomes/Gy show at least one SB in the ROI. The number of nucleosomes that present a complex damage pattern of more than 2 SBs, and the degree of complexity of the damage in these nucleosomes, diminish as the incident electron energy increases. DNA damage classification into SSBs and DSBs is highly dependent on the definitions of these higher-order structures and their implementations. The authors show that, for the four studied models, yields are expected to differ by up to 54% for SSBs and by up to 32% for DSBs, as a function of the incident electron energy and of the models being compared. MCTS simulations make it possible to compare direct DNA damage types and complexities induced by ionizing radiation. However, simulation results depend to a large degree on user-defined parameters, definitions, and algorithms such as the DNA model, dose distribution, SB definition, and the DNA damage clustering algorithm. These interdependencies should be well controlled during the simulations and explicitly reported when comparing results to experiments or calculations.
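One common way to cluster SBs into SSBs and DSBs, as described above, is by strand and separation; the sketch below uses the widely cited 10 base-pair window, which is an assumption since the abstract does not state the paper's criterion, and it presumes the energy threshold E_TH has already been applied.

    # Hedged sketch: pair breaks on opposite strands that lie within
    # max_separation_bp of each other into DSBs; the rest are SSBs.
    # Each SB is a (strand, position_in_bp) tuple.

    def classify_breaks(breaks, max_separation_bp=10):
        dsb, used = [], set()
        for i, (s1, p1) in enumerate(breaks):
            for j, (s2, p2) in enumerate(breaks):
                if j <= i or i in used or j in used:
                    continue
                if s1 != s2 and abs(p1 - p2) <= max_separation_bp:
                    dsb.append(((s1, p1), (s2, p2)))
                    used.update((i, j))
        ssb = [b for k, b in enumerate(breaks) if k not in used]
        return ssb, dsb

    # Example: breaks on opposite strands 4 bp apart form one DSB,
    # while the isolated break remains an SSB.
    ssb, dsb = classify_breaks([(1, 100), (2, 104), (1, 500)])
    print(len(ssb), len(dsb))   # -> 1 1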
Abstract:
Quantum field theory with an external background can be considered a consistent model only if the backreaction is relatively small with respect to the background. To find the corresponding consistency restrictions on an external electric field and its duration in QED and QCD, we analyze the mean energy density of the quantized fields for an arbitrary constant electric field E acting during a large but finite time T. Using the corresponding asymptotics with respect to the dimensionless parameter eET^2, one can see that the leading contributions to the energy are due to the creation of particles by the electric field. Assuming that these contributions are small in comparison with the energy density of the electric background, we establish the above-mentioned restrictions, which, in fact, bound from above the time scales of depletion of an electric field due to the backreaction.
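Schematically, the small-backreaction condition described above compares the mean energy density of the created particles with that of the background field (Gaussian units); the left-hand side stands in for the paper's explicit asymptotics in eET^2, which are not reproduced here.

    % Schematic consistency condition: the energy density transferred to
    % created pairs during the time T must stay well below the energy
    % density of the constant background field.
    \varepsilon_{\text{created}}(E, T) \;\ll\; \frac{E^{2}}{8\pi},
    \qquad \text{in the large-time regime } eET^{2} \gg 1 .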
Abstract:
The Cluster Variation Method (CVM), introduced over 50 years ago by Prof. Dr. Ryoichi Kikuchi, is applied to the thermodynamic modeling of the BCC Cr-Fe system in the irregular tetrahedron approximation, using experimental thermochemical data as the initial input for assessing the model parameters. The results are checked against independent data on the low-temperature miscibility gap, using increasingly accurate thermodynamic models: first by including the magnetic degrees of freedom of iron, and then also those of chromium. It is shown that a reasonably accurate description of the phase diagram on the iron-rich side (i.e. the miscibility-gap borders and the Curie line) is obtained, but only at the expense of the agreement with the above-mentioned thermochemical data. Reasons for these inconsistencies are discussed, especially with regard to the need to introduce vibrational degrees of freedom into the CVM model. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Introduction: Two hundred ten patients with newly diagnosed Hodgkin's lymphoma (HL) were consecutively enrolled in this prospective trial to evaluate the cost-effectiveness of fluorine-18 (18F)-fluoro-2-deoxy-D-glucose positron emission tomography (FDG-PET) scanning in the initial staging of patients with HL. Methods: All 210 patients were staged with conventional clinical staging (CCS) methods, including computed tomography (CT), bone marrow biopsy (BMB), and laboratory tests. Patients also underwent metabolic staging (MS) with a whole-body FDG-PET scan before the beginning of treatment. A standard of reference for staging was determined from all staging procedures, histologic examination, and follow-up examinations. The accuracy of the CCS was compared with that of the MS. Local unit costs of procedures and tests were evaluated. The incremental cost-effectiveness ratio (ICER) was calculated for both strategies. Results: In the 210 patients with HL, the sensitivity of FDG-PET for initial staging was higher than that of CT and BMB (97.9% vs. 87.3%, P < 0.001, and 94.2% vs. 71.4%, P < 0.003, respectively). The incorporation of FDG-PET into the staging procedure upstaged disease in 50 (24%) patients and downstaged disease in 17 (8%) patients. Changes in treatment would be seen in 32 (15%) patients. The cumulative cost of staging procedures was $3751/patient for CCS, compared with $5081 for CCS + PET and $4588 for PET/CT. The ICER of the PET/CT strategy was $16,215 per patient with modified treatment. PET/CT costs at the beginning and end of treatment would increase the total costs of HL staging and first-line treatment by only 2%. Conclusion: FDG-PET is more accurate than CT and BMB in HL staging. Given the observed probabilities, FDG-PET is highly cost-effective in the public health care program in Brazil.
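For reference, the ICER reported above follows the standard definition of an incremental cost-effectiveness ratio; the general formula below is textbook material, with the effect measured here, per the abstract, in patients with modified treatment.

    % Standard definition: incremental cost of the new strategy divided by
    % its incremental effect relative to the comparator.
    \mathrm{ICER} = \frac{C_{\text{new}} - C_{\text{old}}}{E_{\text{new}} - E_{\text{old}}}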
Abstract:
Background & aim: Many foodborne disease outbreaks are caused by foods prepared in the food service and nutrition units of hospitals, affecting hospitalized patients who, in most cases, are immunocompromised and therefore at a higher risk of severe worsening of their clinical status. The aim of this study was to determine the variations in temperature and the time-temperature factor of hospital diets. Methods: The time and temperature for the preparation of 4 diets of modified consistency were determined on 5 nonconsecutive days in a hospital diet and nutrition unit at the end of preparation and during the holding period, portioning, and distribution at 3 sites, i.e., the first, the middle, and the last to receive the diets. Results and discussion: All foods reached an adequate temperature at the end of cooking, but temperature varied significantly from the holding period to the final distribution, characterizing critical periods for microorganism proliferation. During holding, temperatures that posed a risk were reached by 16.7% of the meats and 59% of the salads of the general diet, by 16.7% of the garnishes in the bland diet, and by 20% of the meats and garnishes in the viscous diet. The same occurred at the end of distribution for 100% of the hot samples and of the salads and for 61% of the desserts. None of the preparations remained at a risk temperature for a time exceeding that established by law. Conclusion: The exposure to inadequate temperature did not last long enough to pose a risk to the patients.
Abstract:
The aim of this research was to examine the nature and order of recovery of orientation and memory functioning during post-traumatic amnesia (PTA) in relation to injury severity and PTA duration. The Westmead PTA Scale was used across consecutive testing days to assess the recovery of orientation and memory during PTA in 113 patients. Two new indices were examined: a Consistency-of-Recovery index and a Duration-to-Recovery index. A predictable order of recovery was observed during PTA: orientation-to-person recovered sooner and more consistently than the following cluster: orientation-to-time, orientation-to-place, and the ability to remember a face and name. However, the type of memory functioning required for the face-and-name recall task recovered more consistently than that required for memorizing three pictures. An important overall finding was that the order of recovery of orientation and memory functioning depended upon both the elapsed days since injury and the consistency of recovery. The newly developed indices were shown to be a valuable means of accounting for differences between groups in the elapsed days to recovery of orientation and memory. These indices also clearly increase the clinical utility of the Westmead PTA Scale and supply an objective means of charting (and potentially predicting) patients' recovery on the different components of orientation and memory throughout their period of hospitalization.
Abstract:
The procedures for hiring individuals to provide specialized professional technical services on an occasional basis, and for granting the GECC, are established in Resolution No. 7 of June 16, 2014.