951 results for Query errors


Relevance:

20.00%

Publisher:

Abstract:

This research focuses on automatically adapting a search engine's size in response to fluctuations in query workload. Deploying a search engine in an Infrastructure as a Service (IaaS) cloud facilitates allocating computer resources to the engine or deallocating them from it. Our solution is to contribute an adaptive search engine that repeatedly re-evaluates its load and, when appropriate, switches over to a different number of active processors. We focus on three aspects, broken out into three sub-problems: Continually determining the Number of Processors (CNP), the New Grouping Problem (NGP) and the Regrouping Order Problem (ROP). CNP is the problem of determining, in light of changes in the query workload, the ideal number of processors p to keep active at any given time. NGP arises once a change in the number of processors has been decided: it must then be determined how the groups of search data will be distributed across the processors. ROP is the problem of redistributing this data across processors while keeping the engine responsive and minimising both the switchover time and the incurred network load. We propose solutions for these sub-problems. For NGP we propose an algorithm for incrementally adjusting the index to fit the varying number of virtual machines. For ROP we present an efficient method for redistributing data among processors while keeping the search engine responsive. For CNP we propose an algorithm that determines the new size of the search engine by re-evaluating its load. We tested the solution's performance using a custom-built prototype search engine deployed in the Amazon EC2 cloud. Our experiments show that, compared with computing the index from scratch, our incremental NGP algorithm speeds up the index computation 2-10 times while maintaining similar search performance. The chosen redistribution method is 25% to 50% faster than other methods and reduces the network load by around 30%. For CNP we present a deterministic algorithm that shows a good ability to determine the new size of the search engine. Combined, these algorithms yield an adaptive algorithm that adjusts the search engine size under a variable workload.
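As a rough illustration of the CNP decision described above, utilisation-threshold scaling can be sketched as a simple feedback rule. The thresholds, the doubling/halving policy and the function name below are hypothetical, not taken from the dissertation:

```python
def choose_processor_count(current, load, high=0.75, low=0.35, p_min=1, p_max=64):
    """Return a new processor count given utilisation `load` in [0, 1].

    Hypothetical CNP-style rule: scale up when the engine is saturated,
    scale down when it is mostly idle, otherwise keep the current size.
    """
    if load > high and current < p_max:
        return min(p_max, current * 2)   # double under heavy load
    if load < low and current > p_min:
        return max(p_min, current // 2)  # halve under light load
    return current

# A loaded 8-node engine grows, a mostly idle one shrinks, a balanced one stays.
print(choose_processor_count(8, 0.9))   # 16
print(choose_processor_count(8, 0.1))   # 4
print(choose_processor_count(8, 0.5))   # 8
```

A real policy would also smooth the load signal over time to avoid oscillating between sizes; the sketch only shows the sizing decision itself.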


Starting from the failures of the market and of government, the author identifies the failures of the third system aimed at attaining the common good: ethical responsibility. Using a statistical analogy, she identifies as a Type I error the case where ethics is not taken into account even though it is needed (a true null hypothesis is rejected). She treats the use of ethics to increase profit as a Type II error: it misleads stakeholders and opens the way even wider to opportunistic business conduct (a false null hypothesis is accepted). In her view, the three systems (the hand of the market, the government and ethical management) not only complement but also mutually correct one another. This holds more generally in the case of the Type I error; resolving the Type II error, however, requires redefining the core principles of business: a new, more holistic economics in place of self-interest and one-dimensional performance evaluation.


The spill-over of the global financial crisis has uncovered the weaknesses in the governance of the EMU. As one of the most open economies in Europe, Hungary has suffered from the ups and downs of the global and European crisis and its mismanagement. Domestic policy blunders have complicated the situation. This paper examines how Hungary has withstood the ups and downs of the eurozone crisis. It also addresses the questions of whether the country has converged with or diverged from EMU membership, whether joining the EMU is still a good idea for Hungary, and whether the measures taken to ward off the crisis have actually helped to face the challenge of growth.


This research pursued the conceptualization, implementation, and verification of a system that enhances digital information displayed on an LCD panel for users with visual refractive errors. The target user group for this system is individuals who have moderate to severe visual aberrations for which conventional means of compensation, such as glasses or contact lenses, do not improve their vision. This research is based on a priori knowledge of the user's visual aberration, as measured by a wavefront analyzer. With this information it is possible to generate images that, when displayed to the user, will counteract his or her visual aberration. The method described in this dissertation advances the development of techniques for providing such compensation by integrating spatial information in the image as a means to eliminate some of the shortcomings inherent in using display devices such as monitors or LCD panels. Additionally, physiological considerations are discussed and integrated into the method for providing this compensation. In order to provide a realistic sense of the performance of the methods described, they were tested by mathematical simulation in software, by using a single-lens high-resolution CCD camera that models an aberrated eye, and finally with human subjects having various forms of visual aberrations. Experiments were conducted on these systems, and the data collected from them were evaluated using statistical analysis. The experimental results revealed that the pre-compensation method produced a statistically significant improvement in vision for all of the systems. Although significant, the improvement was not as large as expected in the human subject tests. Further analysis suggests that even under the controlled conditions employed for testing with human subjects, the characterization of the eye may be changing. This would require real-time monitoring of relevant variables (e.g. pupil diameter) and continuous adjustment of the pre-compensation process to yield maximum viewing enhancement.
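The pre-compensation idea, generating an image that counteracts a known aberration, is closely related to inverse filtering. Below is a minimal sketch, assuming the aberration is summarised by a point-spread function (here a stand-in Gaussian rather than real wavefront-derived data) and using a Wiener-style inverse filter; it is an illustration of the principle, not the dissertation's method:

```python
import numpy as np

def wiener_precompensate(image, psf, k=0.01):
    """Pre-distort `image` so that blurring it by `psf` approximately
    restores the original: a Wiener inverse filter in the frequency
    domain. `k` regularises frequencies where the PSF has little energy."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
    G = np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

# Toy PSF: a Gaussian blur standing in for the eye's aberration.
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = np.exp(-(x ** 2 + y ** 2) / 2.0)
psf /= psf.sum()

image = np.zeros((n, n))
image[24:40, 24:40] = 1.0                # simple square test pattern
pre = wiener_precompensate(image, psf)

# Viewing the pre-compensated image through the blur should come closer
# to the original than viewing the plain image through the blur.
H = np.fft.fft2(np.fft.ifftshift(psf), s=image.shape)
blur = lambda im: np.real(np.fft.ifft2(np.fft.fft2(im) * H))
err_plain = np.abs(blur(image) - image).mean()
err_pre = np.abs(blur(pre) - image).mean()
print(err_pre < err_plain)  # True
```

A real system would derive the PSF from the wavefront measurement and, as the abstract notes, would have to track changes such as pupil diameter over time.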


Rates of survival of victims of sudden cardiac arrest (SCA) treated with cardiopulmonary resuscitation (CPR) have shown little improvement over the past three decades. Since registered nurses (RNs) comprise the largest group of healthcare providers in U.S. hospitals, it is essential that they be competent in performing the four primary measures of CPR (compression, ventilation, medication administration, and defibrillation) in order to improve survival rates of SCA patients. The purpose of this experimental study was to test a color-coded SMOCK system on: 1) time to implement emergency patient care measures, 2) technical skills performance, 3) number of medical errors, and 4) team performance during simulated CPR exercises. The study sample was 260 RNs (M=40 years, SD=11.6) with work experience as an RN (M=7.25 years, SD=9.42). Nurses were allocated to a control or intervention arm consisting of 20 groups of 5-8 RNs per arm, for a total of 130 RNs in each arm. Nurses in each study arm were given clinical scenarios requiring emergency CPR. Nurses in the intervention group wore aprons (smocks) in different colors indicating their role assignment (medications, ventilation, compression, defibrillation, etc.) on the code team during CPR. Findings indicated that the intervention using color-labeled smocks for pre-assigned roles had a significant effect on the time nurses started compressions (t=3.03, p=0.005), ventilations (t=2.86, p=0.004) and defibrillations (t=2.00, p=0.05) when compared to the controls using the standard of care. In performing technical skills, nurses in the intervention groups performed compressions and ventilations significantly better than those in the control groups. The control groups made significantly (t=-2.61, p=0.013) more total errors (M=7.55, SD=1.54) than the intervention groups (M=5.60, SD=1.90). There were no significant differences in team performance measures between the groups. Study findings indicate that the use of color-labeled smocks during CPR emergencies resulted in shorter times to start emergency CPR, reduced errors, more technical skills completed successfully, and no differences in team performance.
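The group comparisons reported above rest on independent-samples t statistics. A minimal sketch of Welch's t statistic follows; the error counts below are invented for illustration and are not the study's data:

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples (no equal-variance
    assumption): (mean(a) - mean(b)) / sqrt(var(a)/n_a + var(b)/n_b)."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs):  # sample variance with n - 1 denominator
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    return (mean(a) - mean(b)) / math.sqrt(var(a) / len(a) + var(b) / len(b))

# Synthetic per-group error counts: controls average more errors
# than intervention groups, as in the study's direction of effect.
control      = [8, 7, 9, 6, 8, 7, 9, 8, 6, 7]
intervention = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]
t = welch_t(control, intervention)
print(round(t, 2))
```

In practice one would use a library routine (e.g. scipy.stats.ttest_ind with equal_var=False), which also returns the p-value; the sketch only shows where the statistic comes from.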


Current technology permits connecting local networks via high-bandwidth telephone lines. Central coordinator nodes may use Intelligent Networks to manage data flow over dialed data lines, e.g. ISDN, and to establish connections between LANs. This dissertation focuses on cost minimization and on establishing operational policies for query distribution over heterogeneous, geographically distributed databases. Based on our study of query distribution strategies, public network tariff policies, and database interface standards, we propose methods for communication cost estimation, strategies for the reduction of bandwidth allocation, and guidelines for central-to-node communication protocols. Our conclusion is that dialed data lines offer a cost-effective alternative for the implementation of distributed database query systems, and that existing commercial software may be adapted to support query processing in heterogeneous distributed database systems.
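The shape of the communication-cost estimation described above can be illustrated with a toy tariff model. The set-up fee and per-minute rate below are invented, not real tariff figures; only the 64 kbit/s bandwidth (one ISDN B-channel) is a standard value:

```python
import math

def call_cost(bytes_to_send, bandwidth_bps=64_000, setup_fee=0.10,
              rate_per_minute=0.05):
    """Estimated cost of shipping a query result over a dialed data line.

    Hypothetical ISDN-style tariff: a fixed call set-up fee plus a
    per-minute rate, billed in whole minutes.
    """
    seconds = bytes_to_send * 8 / bandwidth_bps
    minutes = math.ceil(seconds / 60)
    return setup_fee + minutes * rate_per_minute

# A 3 MB result: 3_000_000 * 8 / 64_000 = 375 s, billed as 7 minutes.
print(round(call_cost(3_000_000), 2))  # 0.45
```

A query distribution policy would compare such estimates across candidate sites, trading the set-up fee of a new call against keeping an existing connection open.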



The Last Interglacial (LIG, 129-116 thousand years before present, ka) represents a test bed for climate model feedbacks in warmer-than-present high-latitude regions. However, mainly because aligning palaeoclimatic archives of different types and from different parts of the world is not trivial, a spatio-temporal picture of LIG temperature changes is difficult to obtain. Here, we have selected 47 polar ice core and sub-polar marine sediment records and developed a strategy to align them onto the recent AICC2012 ice core chronology. We provide the first compilation of high-latitude temperature changes across the LIG associated with a coherent temporal framework built between ice core and marine sediment records. Our new data synthesis highlights non-synchronous maximum temperature changes between the two hemispheres, with the Southern Ocean and Antarctic records showing an early warming compared to North Atlantic records. We also observe that warmer-than-present-day conditions persist for a longer period in southern high latitudes than in northern high latitudes. Finally, the amplitude of temperature change at high northern latitudes is larger than that recorded at high southern latitudes at the onset and the demise of the LIG. We have also compiled four data-based time slices with temperature anomalies (relative to present-day conditions) at 115 ka, 120 ka, 125 ka and 130 ka and quantitatively estimated temperature uncertainties that include relative dating errors. This provides an improved benchmark for performing more robust model-data comparisons. The surface temperature simulated by two General Circulation Models (CCSM3 and HadCM3) for 130 ka and 125 ka is compared to the corresponding time-slice data synthesis. This comparison shows that the models predict warmer-than-present conditions earlier than documented in the North Atlantic, while neither model is able to reproduce the reconstructed early Southern Ocean and Antarctic warming. Our results highlight the importance of producing a sequence of time slices rather than a single time slice averaging the LIG climate conditions.
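Mechanically, aligning a record onto a common chronology such as AICC2012 amounts to transferring its age model through tie points and then resampling it at the chosen time slices. A minimal numpy sketch, with invented tie points and proxy values (not data from the compilation):

```python
import numpy as np

# Hypothetical proxy record on its own age model (ages in ka, increasing).
record_age  = np.array([114.0, 118.0, 122.0, 126.0, 131.0])
record_temp = np.array([-1.0, 1.5, 2.0, 0.5, -2.0])  # anomaly vs present, degC

# Tie points mapping the record's age model onto the AICC2012 chronology
# (these tie-point values are invented for illustration).
ties_record = np.array([114.0, 122.0, 131.0])
ties_aicc   = np.array([115.0, 123.5, 130.0])

# Step 1: transfer each sample's age onto AICC2012 by interpolating ties.
age_on_aicc = np.interp(record_age, ties_record, ties_aicc)

# Step 2: resample the record at the common time slices used for the
# data-model comparison (115, 120, 125, 130 ka).
slices = np.array([115.0, 120.0, 125.0, 130.0])
temp_at_slices = np.interp(slices, age_on_aicc, record_temp)
print(np.round(temp_at_slices, 2))
```

The published alignment strategy is considerably more involved (it must propagate relative dating errors into the quoted uncertainties), but the resampling step has this basic form.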


Cloud computing can be defined as a distributed computing model through which resources (hardware, storage, development platforms and communication) are shared as paid services, accessible with minimal management effort and interaction. A great benefit of this model is that it enables the use of multiple providers (e.g. a multi-cloud architecture) to compose a set of services in order to obtain an optimal configuration for performance and cost. However, multi-cloud use is hindered by the problem of cloud lock-in: the dependency between an application and a cloud platform. It is commonly addressed by three strategies: (i) use of an intermediary layer that stands between consumers of cloud services and the provider, (ii) use of standardized interfaces to access the cloud, or (iii) use of models with open specifications. This work outlines an approach to evaluating these strategies. The evaluation was performed, and it was found that despite the advances made by these strategies, none of them actually solves the problem of cloud lock-in. In this light, this work proposes the use of the Semantic Web to avoid cloud lock-in, where RDF models are used to specify the features of a cloud and are managed by SPARQL queries. In this direction, this work: (i) presents an evaluation model that quantifies the problem of cloud lock-in, (ii) evaluates cloud lock-in in three multi-cloud solutions and three cloud platforms, (iii) proposes using RDF and SPARQL for the management of cloud resources, (iv) presents the Cloud Query Manager (CQM), a SPARQL server that implements the proposal, and (v) compares three multi-cloud solutions with CQM in terms of response time and effectiveness in resolving cloud lock-in.
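The core of the proposal, describing cloud resources as RDF triples and retrieving them with SPARQL queries, can be sketched with a toy in-memory triple store. The vocabulary (ex:vm1, ex:provider, ...) is invented, and the matcher below handles only a single triple pattern, unlike a real SPARQL engine such as the CQM server:

```python
# Toy triple store: each fact is a (subject, predicate, object) triple,
# mimicking RDF descriptions of cloud resources. Vocabulary is invented.
triples = {
    ("ex:vm1", "ex:provider", "ex:aws"),
    ("ex:vm1", "ex:cores",    "4"),
    ("ex:vm2", "ex:provider", "ex:azure"),
    ("ex:vm2", "ex:cores",    "8"),
}

def match(pattern):
    """Match one SPARQL-like triple pattern; terms starting with '?' are
    variables. Returns a list of bindings, e.g. [{'?vm': 'ex:vm1'}]."""
    results = []
    for t in sorted(triples):
        binding, ok = {}, True
        for p, v in zip(pattern, t):
            if p.startswith("?"):
                binding[p] = v
            elif p != v:
                ok = False
                break
        if ok:
            results.append(binding)
    return results

# Analogous to: SELECT ?vm WHERE { ?vm ex:provider ex:aws }
for b in match(("?vm", "ex:provider", "ex:aws")):
    print(b["?vm"])  # ex:vm1
```

The point of expressing resources this way is that the same query runs unchanged against any provider whose offering is described in the shared vocabulary, which is what removes the platform dependency.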


Diffraction gratings are not always ideal: the fabrication process can introduce several kinds of errors. In this work we show that when the strips of a binary phase diffraction grating present a certain randomness in their height, the intensity of the diffraction orders varies with respect to that obtained with a perfect grating. To show this, we perform an analysis of the mutual coherence function, from which the intensity distribution at the far field is obtained. In addition to the far-field diffraction orders, a "halo" that surrounds the diffraction orders is found, which is due to the randomness of the strip heights.
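The effect is straightforward to reproduce numerically: build a binary pi-phase grating, perturb the phase of each etched strip randomly, and compare far-field intensities obtained by FFT. This is an illustrative sketch, not the paper's mutual-coherence analysis; the perturbation strength 0.3 rad is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n_strips, samples = 64, 16          # 64 periods, 16 samples per half-period

def far_field(phase_per_strip):
    """Far-field intensity of a binary grating whose periods alternate
    between phase 0 and the given etched-strip phase, via a plain FFT."""
    field = []
    for phi in phase_per_strip:
        field += [1.0] * samples                  # flat half-period, phase 0
        field += [np.exp(1j * phi)] * samples     # etched half-period
    return np.abs(np.fft.fft(np.array(field))) ** 2 / len(field) ** 2

perfect = far_field([np.pi] * n_strips)
noisy   = far_field(np.pi + 0.3 * rng.standard_normal(n_strips))  # height errors

# An ideal 50%-duty pi-grating extinguishes the zero order; random strip
# heights make it reappear and spread light between the orders (the halo).
print(perfect[0] < 1e-20)       # True: zero order fully suppressed
print(noisy[0] > perfect[0])    # True: randomness restores some zero order
```

Index 0 of the returned array is the zero order and index n_strips the first order, since the grating has n_strips periods across the window.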


Topological quantum error correction codes are currently among the most promising candidates for efficiently dealing with the decoherence effects inherently present in quantum devices. Numerically, their theoretical error threshold can be calculated by mapping the underlying quantum problem to a related classical statistical-mechanical spin system with quenched disorder. Here, we present results for the general fault-tolerant regime, where we consider both qubit and measurement errors. However, unlike in previous studies, here we vary the strength of the different error sources independently. Our results highlight peculiar differences between toric and color codes. This study complements previous results published in New J. Phys. 13, 083006 (2011).


People are always at risk of making errors when they attempt to retrieve information from memory. An important question is how to create the optimal learning conditions so that, over time, the correct information is learned and the number of mistakes declines. Feedback is a powerful tool, both for reinforcing new learning and correcting memory errors. In 5 experiments, I sought to understand the best procedures for administering feedback during learning. First, I evaluated the popular recommendation that feedback is most effective when given immediately, and I showed that this recommendation does not always hold when correcting errors made with educational materials in the classroom. Second, I asked whether immediate feedback is more effective in a particular case—when correcting false memories, or strongly-held errors that may be difficult to notice even when the learner is confronted with the feedback message. Third, I examined whether varying levels of learner motivation might help to explain cross-experimental variability in feedback timing effects: Are unmotivated learners less likely to benefit from corrective feedback, especially when it is administered at a delay? Overall, the results revealed that there is no best “one-size-fits-all” recommendation for administering feedback; the optimal procedure depends on various characteristics of learners and their errors. As a package, the data are consistent with the spacing hypothesis of feedback timing, although this theoretical account does not successfully explain all of the data in the larger literature.