952 results for Data recovery (Computer science)
Abstract:
Mode of access: Internet.
Abstract:
Thesis (M. S.)--University of Illinois at Urbana-Champaign.
Abstract:
Originally presented as the author's thesis, University of Illinois at Urbana-Champaign, 1974.
Abstract:
Thesis--Illinois.
Abstract:
"COO-1469-0152. File no. 818."
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
The notion of compensation is widely used in advanced transaction models as a means of recovery from failure. Similar concepts are adopted to provide transaction-like behaviour for long business processes supported by workflow technology. In general, it is not trivial to design compensating tasks for tasks in the context of a workflow. In fact, a task in a workflow process need not be compensatable, in the sense that the application semantics do not always guarantee that the task's operations can be reversed. In addition, the isolation requirement on data resources may make a task difficult to compensate. In this paper, we first look into the requirements that a compensating task has to satisfy. Then we introduce a new concept called confirmation. With the help of confirmation, we are able to modify most non-compensatable tasks so that they become compensatable. This can substantially increase the availability of shared resources and greatly improve backward recovery for workflow applications in case of failure. To effectively incorporate confirmation and compensation into a workflow management environment, a three-level bottom-up workflow design method is introduced. The implementation issues of this design are also discussed. (C) 2003 Elsevier Science Inc. All rights reserved.
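For orientation, the compensation-with-confirmation pattern this abstract describes can be sketched in a few lines of Python. This is a generic illustration, not the paper's design; the Task fields and the run_workflow helper are hypothetical names.

```python
# A minimal sketch (not the paper's design) of a workflow task executed
# tentatively, then confirmed on success or compensated on failure.

class Task:
    """A workflow task with confirmation and compensation hooks."""

    def __init__(self, name, do, confirm, compensate):
        self.name = name
        self.do = do                   # tentative forward operation
        self.confirm = confirm         # makes the tentative effect permanent
        self.compensate = compensate   # undoes the tentative effect

def run_workflow(tasks):
    """Run tasks in order; on failure, compensate completed tasks in reverse."""
    done = []
    try:
        for t in tasks:
            t.do()
            done.append(t)
        for t in done:                 # workflow succeeded: confirm everything
            t.confirm()
    except Exception:
        for t in reversed(done):       # backward recovery via compensation
            t.compensate()
        raise
```

The point of the confirmation step is that a tentative effect (e.g., a held reservation) is easy to release, whereas a fully committed effect may not be reversible at all.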
Abstract:
The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework to combine and analyse qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. Different geo-variables are considered as fuzzy sets and their appropriate membership functions are defined and modelled. A new weighted average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of the new fuzzy set are taken as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables, and are derived using information from training areas and L1 regression. The technique is demonstrated in a case study of skarn tin deposits and is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km² and is divided into 349 cells, which include nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model the fuzzy membership of the geo-variables involved. (C) 2002 Elsevier Science Ltd. All rights reserved.
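A toy sketch of the weighted-average aggregation of membership grades described above; the membership function, geo-variables, and weights below are invented stand-ins, not the study's calibrated values (which are fitted from control cells, e.g. by L1 regression).

```python
import numpy as np

def linear_membership(x, lo, hi):
    """Piecewise-linear membership rising from 0 at lo to 1 at hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# Three hypothetical geo-variables measured on five cells.
geochem = np.array([0.1, 0.4, 0.8, 0.2, 0.9])
magnet  = np.array([0.3, 0.7, 0.6, 0.1, 0.8])
geology = np.array([0.0, 1.0, 1.0, 0.0, 1.0])   # already a membership grade

memberships = np.vstack([
    linear_membership(geochem, 0.0, 1.0),
    linear_membership(magnet, 0.0, 1.0),
    geology,
])

# Invented weights, normalized to sum to 1.
w = np.array([0.5, 0.3, 0.2])

mfi = w @ memberships   # mineral favourability index per cell
print(mfi)
```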
Abstract:
The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics. Each of the three metrics provided excellent predictions of end user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternate data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end user query complexity.
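The Halstead metrics mentioned are standard and easy to compute from operator/operand counts; the toy SQL query and its counts below are illustrative only.

```python
import math

def halstead(n1, n2, N1, N2):
    """n1/n2: distinct operators/operands; N1/N2: total occurrences."""
    vocabulary = n1 + n2
    length = N1 + N2                          # Halstead program length
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)
    effort = difficulty * volume
    return {"length": length, "volume": volume,
            "difficulty": difficulty, "effort": effort}

# e.g. SELECT name FROM emp WHERE dept = 'R&D'
# operators: SELECT, FROM, WHERE, =    operands: name, emp, dept, 'R&D'
print(halstead(n1=4, n2=4, N1=4, N2=4))
```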
Abstract:
Normal mixture models are often used to cluster continuous data. However, conventional approaches for fitting these models will have problems in producing nonsingular estimates of the component-covariance matrices when the dimension of the observations is large relative to the number of observations. In this case, methods such as principal components analysis (PCA) and the mixture of factor analyzers model can be adopted to avoid these estimation problems. We examine these approaches applied to the Cabernet wine data set of Ashenfelter (1999), considering the clustering of both the wines and the judges, and comparing our results with another analysis. The mixture of factor analyzers model proves particularly effective in clustering the wines, accurately classifying many of the wines by location.
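One of the two remedies named above, dimension reduction by PCA followed by a normal mixture on the reduced scores, can be sketched with scikit-learn. The data here are random stand-ins, not the Cabernet wine data, and the mixture of factor analyzers model itself is not part of scikit-learn.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 100))   # n=30 observations, p=100 dimensions (p >> n)

# Fitting a full-covariance mixture directly in 100 dimensions with 30
# observations would yield singular covariance estimates; reduce first.
scores = PCA(n_components=5).fit_transform(X)
labels = GaussianMixture(n_components=3, covariance_type="full",
                         random_state=0).fit_predict(scores)
print(labels)
```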
Abstract:
The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining quickly the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
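The ex ante comparison the abstract describes reduces to simple arithmetic: a frequency-weighted average of per-query complexity for each candidate instantiation. A sketch with invented numbers:

```python
def weighted_avg_complexity(queries):
    """queries: list of (frequency_weight, complexity) pairs."""
    total_w = sum(w for w, _ in queries)
    return sum(w * c for w, c in queries) / total_w

# Hypothetical Halstead effort scores for three representative queries,
# weighted by how often each information request is expected to occur.
relational        = [(0.5, 120.0), (0.3, 310.0), (0.2, 95.0)]
object_relational = [(0.5, 100.0), (0.3, 260.0), (0.2, 110.0)]

for name, q in [("relational", relational),
                ("object-relational", object_relational)]:
    print(name, weighted_avg_complexity(q))   # lower average wins, ex ante
```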
Abstract:
Mixed feature data induce problems in modeling the gating network of normalized Gaussian (NG) networks, as the assumption of a multivariate Gaussian becomes invalid. In this paper, we propose an independence model to handle mixed feature data within the framework of NG networks. The method is illustrated using a real example of breast cancer data.
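One plausible reading of such an independence model for a gating density: continuous features get univariate Gaussians, discrete features get categorical tables, and the gate multiplies them. A sketch with invented parameters, not the breast cancer study's fitted model:

```python
import numpy as np
from scipy.stats import norm

def gate_density(x_cont, x_disc, mus, sigmas, cat_probs):
    """p(x) = prod_j N(x_j; mu_j, sigma_j) * prod_k P(x_k) under independence."""
    p = np.prod(norm.pdf(x_cont, loc=mus, scale=sigmas))
    for k, level in enumerate(x_disc):
        p *= cat_probs[k][level]
    return p

# Two continuous features and one categorical feature (hypothetical values).
mus, sigmas = np.array([0.0, 1.0]), np.array([1.0, 0.5])
cat_probs = [{"benign": 0.7, "malignant": 0.3}]
print(gate_density(np.array([0.2, 1.1]), ["benign"], mus, sigmas, cat_probs))
```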
Abstract:
Fuzzy data has grown to be an important factor in data mining. Whenever uncertainty exists, simulation can be used as a model. Simulation is very flexible, although it can involve significant levels of computation. This article discusses fuzzy decision-making using the grey related analysis method. Fuzzy models are expected to better reflect decision-making uncertainty, at some cost in accuracy relative to crisp models. Monte Carlo simulation is used to incorporate experimental levels of uncertainty into the data and to measure the impact of fuzzy decision tree models using categorical data. Results are compared with decision tree models based on crisp continuous data.
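A generic Monte Carlo sketch in the spirit of the article: inject experimental levels of noise into the training data and measure the effect on a decision tree. This stands in for, and is not, the grey related analysis procedure itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for noise in [0.0, 0.5, 1.0]:            # experimental uncertainty levels
    scores = []
    for _ in range(20):                  # Monte Carlo replications
        X_noisy = X_tr + rng.normal(scale=noise, size=X_tr.shape)
        tree = DecisionTreeClassifier(random_state=0).fit(X_noisy, y_tr)
        scores.append(tree.score(X_te, y_te))
    print(noise, np.mean(scores))        # accuracy degrades as noise grows
```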
Abstract:
In this paper, we consider how refinements between state-based specifications (e.g., written in Z) can be checked by use of a model checker. Specifically, we are interested in the verification of downward and upward simulations, which are the standard approach to verifying refinements in state-based notations. We show how downward and upward simulations can be checked using existing temporal logic model checkers. In particular, we show how the branching-time temporal logic CTL can be used to encode the standard simulation conditions. We do this both for a blocking, or guarded, interpretation of operations (often used when specifying reactive systems) and for the more common non-blocking interpretation of operations used in many state-based specification languages (for modelling sequential systems). The approach is general enough to use with any state-based specification language, and we illustrate how refinements between Z specifications can be checked with the SAL CTL model checker on a small example.
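For reference, the standard (non-blocking) downward simulation conditions that such a CTL encoding targets, for abstract operation AOp, concrete operation COp and retrieve relation R, are usually stated as follows; this is the textbook formulation, quoted for orientation rather than the paper's encoding.

```latex
\begin{align*}
&\text{Initialisation: } \forall\, CState' \bullet CInit \Rightarrow
    \exists\, AState' \bullet AInit \wedge R' \\
&\text{Applicability: } \forall\, AState;\, CState \bullet
    R \wedge \operatorname{pre} AOp \Rightarrow \operatorname{pre} COp \\
&\text{Correctness: } \forall\, AState;\, CState;\, CState' \bullet
    R \wedge \operatorname{pre} AOp \wedge COp \Rightarrow
    \exists\, AState' \bullet R' \wedge AOp
\end{align*}
```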
Abstract:
Bang-bang phase-detector-based PLLs are simple to design, suffer no systematic phase error, and can run at the highest speed at which a process can produce a working flip-flop. For these reasons, designers are employing them in very high-speed Clock Data Recovery (CDR) architectures. The major drawback of this class of PLL is the inherent jitter due to quantized phase and frequency corrections. Reducing the loop gain can proportionally improve jitter performance, but it also increases locking time and reduces the pull-in range. This paper presents a novel PLL design that dynamically scales its gain in order to achieve fast lock times while improving jitter performance in lock. Under certain circumstances the design also demonstrates improved capture range. This paper also analyses the behaviour of a bang-bang type PLL when far from lock, and demonstrates that the pull-in range is proportional to the square root of the PLL loop gain.
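A behavioural sketch of the gain-scaling idea: a bang-bang loop that halves its step when early/late decisions alternate (suggesting lock) and restores it when they agree (suggesting a large error). The scaling rule and constants are invented for illustration, not the paper's circuit.

```python
import numpy as np

rng = np.random.default_rng(1)
phase_err = 2.0            # initial phase error (rad)
step = 0.5                 # loop gain: phase correction per update (rad)
history = []

for n in range(200):
    sign = np.sign(phase_err + rng.normal(scale=0.01))  # early/late decision
    phase_err -= step * sign                            # quantized correction
    history.append(sign)
    if len(history) >= 2 and history[-1] != history[-2]:
        step = max(step * 0.5, 0.01)   # alternating decisions: near lock,
                                       # shrink gain to cut bang-bang jitter
    else:
        step = min(step * 2.0, 0.5)    # same-sign decisions: far from lock,
                                       # restore gain for fast acquisition

print(f"residual phase error: {phase_err:.4f} rad")
```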