896 results for Asset Maintenance
Abstract:
© Springer International Publishing Switzerland 2015. Making sound asset management decisions, such as whether to replace or maintain an ageing underground water pipe, is critical to ensuring that organisations maximise the performance of their assets. These decisions are only as good as the data that supports them, and hence many asset management organisations are in desperate need to improve the quality of their data. This chapter reviews the key academic research on data quality (DQ) and information quality (IQ) (used interchangeably in this chapter) in asset management, combines this with the current DQ problems faced by asset management organisations in various business sectors, and presents a classification of the most important DQ problems that need to be tackled by asset management organisations. In this research, eleven semi-structured interviews were carried out with asset management professionals in a range of business sectors in the UK. The problems described in the academic literature were cross-checked against the problems found in industry. In order to support asset management professionals in solving these problems, we categorised them into seven different DQ dimensions used in the academic literature, so that it is clear how these problems fit within the standard frameworks for assessing and improving data quality. Asset management professionals can therefore now use these frameworks to underpin their DQ improvement initiatives while focussing on the most critical DQ problems.
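To make the notion of DQ dimensions concrete, the sketch below turns a few commonly cited dimensions into record-level checks on asset data. It is an illustration only, not material from the chapter: the seven dimensions are not named in this abstract, so the dimensions, field names and thresholds used here are assumptions.

```python
# Illustrative only: assumed dimensions (completeness, validity, timeliness)
# and assumed field names; the chapter's own seven DQ dimensions are not
# listed in this abstract.
from datetime import date

def assess_asset_record(record, max_age_days=365):
    issues = []
    # Completeness: required attributes are present and non-empty.
    for field in ("asset_id", "install_date", "material"):
        if not record.get(field):
            issues.append(f"completeness: missing {field}")
    # Validity: values fall within an allowed domain.
    if record.get("material") not in (None, "", "PVC", "cast iron", "steel"):
        issues.append("validity: unknown material")
    # Timeliness: the record has been confirmed recently enough.
    last_inspection = record.get("last_inspection")
    if last_inspection and (date.today() - last_inspection).days > max_age_days:
        issues.append("timeliness: inspection data out of date")
    return issues

print(assess_asset_record({"asset_id": "P-104",
                           "install_date": date(1968, 5, 1),
                           "material": "unknown alloy"}))
# ['validity: unknown material']
```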
A Constraint-Driven Human Resource Scheduling Method in Software Development and Maintenance Process
Abstract:
Natl Tech Univ Ukraine, Huazhong Univ Sci & Technol, Huazhong Normal Univ, Wuhan Univ, Ternopil Natl Econ Univ, IEEE Ukraine Sect, I&M CI Joint Chapter
Abstract:
The thesis developed here is that reasoning programs which take care to record the logical justifications for program beliefs can apply several powerful, but simple, domain-independent algorithms to (1) maintain the consistency of program beliefs, (2) realize substantial search efficiencies, and (3) automatically summarize explanations of program beliefs. These algorithms use the recorded justifications to maintain the consistency and well-founded basis of the set of beliefs. The set of beliefs can be efficiently updated in an incremental manner when hypotheses are retracted and when new information is discovered. The recorded justifications also enable the pinpointing of exactly those assumptions which support any particular belief. The ability to pinpoint the underlying assumptions is the basis for an extremely powerful domain-independent backtracking method. This method, called Dependency-Directed Backtracking, offers vastly improved performance over traditional backtracking algorithms.
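A minimal sketch of the justification-recording idea described above, written for this summary rather than taken from the thesis: beliefs are kept only while they have well-founded support from recorded justifications, so retracting a premise removes the beliefs that depended on it. The class and method names (JTMS, justify, retract) are assumptions made for the example.

```python
# Sketch of a justification-based belief store; not the thesis's code.
class JTMS:
    def __init__(self):
        self.justifications = {}   # belief -> list of supporting-belief sets
        self.premises = set()      # beliefs held without justification

    def add_premise(self, belief):
        self.premises.add(belief)

    def justify(self, belief, supporters):
        # Record that `belief` is justified by the set of `supporters`.
        self.justifications.setdefault(belief, []).append(frozenset(supporters))

    def believed(self):
        # A belief is in iff it is a premise or has a justification whose
        # supporters are all (recursively) believed: well-founded support.
        current = set(self.premises)
        changed = True
        while changed:
            changed = False
            for belief, justs in self.justifications.items():
                if belief not in current and any(j <= current for j in justs):
                    current.add(belief)
                    changed = True
        return current

    def retract(self, premise):
        # Beliefs whose only support depended on this premise will no longer
        # appear in believed(), since their justifications lose their basis.
        self.premises.discard(premise)

tms = JTMS()
tms.add_premise("valve-open")
tms.justify("flow-present", {"valve-open"})
tms.justify("pump-running", {"flow-present"})
print(tms.believed())   # {'valve-open', 'flow-present', 'pump-running'}
tms.retract("valve-open")
print(tms.believed())   # set(): dependent beliefs lose well-founded support
```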
Abstract:
Overlay networks have been used for adding and enhancing functionality to the end-users without requiring modifications in the Internet core mechanisms. Overlay networks have been used for a variety of popular applications including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks. Overlay users may act selfishly and deviate from the default wiring protocols by utilizing knowledge they have about the network when selecting neighbors to improve the performance they receive from the overlay. This thesis goes against the conventional thinking that overlay users conform to a specific protocol. The contributions of this thesis are threefold. It provides a systematic evaluation of the design space of selfish neighbor selection strategies in real overlays, evaluates the performance of overlay networks that consist of users that select their neighbors selfishly, and examines the implications of selfish neighbor and server selection for overlay protocol design and service provisioning respectively. This thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general, and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework the notion of a user's "best response" wiring strategy is formalized as a k-median problem on asymmetric distance and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay. Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies. To capitalize on the performance advantages of optimal neighbor selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation and maintenance routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST's neighbor selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads. This thesis also studies selfish neighbor selection strategies for swarming applications. The main focus is on n-way broadcast applications where each of the n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to the performance on top of existing overlays.
In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users. To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol to migrate servers based on end-users' demand and only on local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
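A sketch of the kind of "best response" wiring the thesis formalises, under a bound k on the number of neighbours a node may choose. This is an assumption-laden illustration rather than EGOIST's code: the cost model (direct delay to a neighbour plus that neighbour's current cost to the destination) and the exhaustive search over k-subsets are choices made for the example, whereas the thesis casts the exact problem as a k-median on asymmetric distance.

```python
# Illustrative best-response wiring for a selfish overlay node; assumed API.
import itertools

def best_response(u, nodes, d, reach, k):
    """Return the k-subset of neighbours minimising u's total cost.

    d[u][v]     : direct (possibly asymmetric) delay from u to v
    reach[v][t] : v's current overlay cost to destination t
    """
    candidates = [v for v in nodes if v != u]
    destinations = [t for t in nodes if t != u]

    def total_cost(wiring):
        # Each destination is reached through the cheapest chosen neighbour.
        return sum(min(d[u][v] + reach[v][t] for v in wiring)
                   for t in destinations)

    # Exhaustive search over k-subsets keeps the sketch simple; a k-median
    # heuristic or local search would replace it at realistic overlay sizes.
    return min(itertools.combinations(candidates, k), key=total_cost)
```

The brute-force search makes the stability notion explicit: a wiring is stable when no node can lower its own total cost by re-wiring, which mirrors the equilibrium structures evaluated in the thesis.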
Abstract:
Case-Based Reasoning (CBR) uses past experiences to solve new problems. The quality of the past experiences, which are stored as cases in a case base, is a major factor in the performance of a CBR system. The system's competence may be improved by adding problems to the case base after they have been solved and their solutions verified to be correct. However, from time to time, the case base may have to be refined to reduce redundancy and to remove any noisy cases that may have been introduced. Many case base maintenance algorithms have been developed to delete noisy and redundant cases. However, different algorithms work well in different situations, and it may be difficult for a knowledge engineer to know which one is best to use for a particular case base. In this thesis, we investigate ways to combine algorithms to produce better deletion decisions than the decisions made by individual algorithms, and ways to choose which algorithm is best for a given case base at a given time. We analyse five of the most commonly used maintenance algorithms in detail and show how the different algorithms perform better on different datasets. This motivates us to develop a new approach: maintenance by a committee of experts (MACE). MACE allows us to combine maintenance algorithms to produce a composite algorithm which exploits the merits of each of the algorithms that it contains. By combining different algorithms in different ways we can also define algorithms that have different trade-offs between accuracy and deletion. While MACE allows us to define an infinite number of new composite algorithms, we still face the problem of choosing which algorithm to use. To make this choice, we need to be able to identify properties of a case base that are predictive of which maintenance algorithm is best. We examine a number of measures of dataset complexity for this purpose. These provide a numerical way to describe a case base at a given time. We use the numerical description to develop a meta-case-based classification system. This system uses previous experience about which maintenance algorithm was best to use for other case bases to predict which algorithm to use for a new case base. Finally, we give the knowledge engineer more control over the deletion process by creating incremental versions of the maintenance algorithms. These incremental algorithms suggest one case at a time for deletion rather than a group of cases, which allows the knowledge engineer to decide whether each case in turn should be deleted or kept. We also develop incremental versions of the complexity measures, allowing us to create an incremental version of our meta-case-based classification system. Since the case base changes after each deletion, the best algorithm to use may also change. The incremental system allows us to choose which algorithm is best to use at each point in the deletion process.
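As an illustration of the committee idea, the sketch below combines the deletion suggestions of several maintenance algorithms by majority vote. It is an assumption-based sketch, not the MACE implementation from the thesis: the function name, the voting rule and the toy member algorithms are invented for the example (real members would be established CBR maintenance algorithms for condensation and noise removal).

```python
# Illustrative committee-of-experts deletion; not the thesis's MACE code.
def committee_deletions(case_base, members, quorum=None):
    """Return cases flagged for deletion by at least `quorum` members.

    members: callables, each mapping a case base to the set of cases it
             would delete.
    """
    if quorum is None:
        quorum = len(members) // 2 + 1   # simple majority by default
    votes = {}
    for algorithm in members:
        for case in algorithm(case_base):
            votes[case] = votes.get(case, 0) + 1
    return {case for case, count in votes.items() if count >= quorum}

# Toy members: one flags duplicate cases, one flags suspiciously short ones.
flag_duplicates = lambda cb: {c for c in cb if cb.count(c) > 1}
flag_short = lambda cb: {c for c in cb if len(c) < 3}
print(committee_deletions(["ab", "ab", "xyz"], [flag_duplicates, flag_short]))
# {'ab'}: both members voted for it, so it meets the majority quorum
```

Varying the quorum yields composite algorithms with different trade-offs between how aggressively cases are deleted and how conservative the committee is, in line with the accuracy/deletion trade-off discussed above.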
Abstract:
The desire to obtain competitive advantage is a motivator for implementing Enterprise Resource Planning (ERP) Systems (Adam & O'Doherty, 2000). However, while it is accepted that Information Technology (IT) in general may contribute to the improvement of organisational performance (Melville, Kraemer, & Gurbaxani, 2004), the nature and extent of that contribution is poorly understood (Jacobs & Bendoly, 2003; Ravichandran & Lertwongsatien, 2005). Accordingly, Henderson and Venkatraman (1993) assert that it is the application of business and IT capabilities to develop and leverage a firm's IT resources for organisational transformation, rather than the acquired technological functionality, that secures competitive advantage for firms. Application of the Resource Based View of the firm (Wernerfelt, 1984) and Dynamic Capabilities Theory (DCT) (Teece and Pisano (1998) in particular) may yield insights into whether or not the use of Enterprise Systems enhances organisations' core capabilities and thereby secures competitive advantage, sustainable or otherwise (Melville et al., 2004). An operational definition of Core Capabilities that is independent of the construct of Sustained Competitive Advantage is formulated. This study proposes and utilises an applied Dynamic Capabilities framework to facilitate the investigation of the role of Enterprise Systems. The objective of this research study is to investigate the role of Enterprise Systems in the Core Dynamic Capabilities of Asset Lifecycle Management. The study explores the activities of Asset Lifecycle Management, the Core Dynamic Capabilities inherent in Asset Lifecycle Management and the footprint of Enterprise Systems on those Dynamic Capabilities. Additionally, the study explains the mechanisms by which Enterprise Systems sustain the Exploitability and the Renewability of those Core Dynamic Capabilities. The study finds that Enterprise Systems contribute directly to the Value, Exploitability and Renewability of Core Dynamic Capabilities and indirectly to their Inimitability and Non-substitutability. The study concludes by presenting an applied Dynamic Capabilities framework, which integrates Alter's (1992) definition of Information Systems with Teece and Pisano's (1998) model of Dynamic Capabilities to provide a robust diagnostic for determining the sustained value-generating contributions of Enterprise Systems. These frameworks are used in the conclusions to frame the findings of the study. The conclusions go on to assert that these frameworks are free-standing and analytically generalisable, per Siggelkow (2007) and Yin (2003).
Abstract:
Using a classic grounded theory methodology (CGT), this study explores the phenomenon of moral shielding within mental health multidisciplinary teams (MDTs). The study was located within three catchment areas engaged in acute mental health service practice. The main concern identified was the maintenance of a sense of personal integrity during situational binds. Through theoretical sampling, thirty-two practitioners, including doctors, nurses, social workers, occupational therapists, counsellors and psychologists, were interviewed face to face. In addition, emergent concepts were identified through observation of MDTs in clinical and research practice. Following a classic grounded theory methodology, data collection and analysis occurred simultaneously. A constant comparative approach was adopted and resulted in the emergence of three sub-core categories: moral abdication, moral gauging and pseudo-compliance. Moral abdication involves re-positioning oneself within an event in order to avoid or deflect the initial obligation to act; it is a strategy used to remove or reduce moral ownership. Moral gauging represents the monitoring of an event with the goal of judging the congruence of personal principles and commitments with those of other practitioners. This strategy is enacted in a bid to seek allies for the support of a given moral position. Pseudo-compliance represents behaviour that hides desired principles and commitments in order to shield them from challenge. This strategy portrays agreement with the dominant position within the MDT, whilst holding a contrary position. It seeks to preserve a reservoir of emotional energy required to maintain a sense of personal integrity. Practitioners who were successful in enacting moral shielding were found not to experience significant emotional distress associated with the phenomenon of moral distress, suggesting that these practitioners had found mechanisms to manage situational binds that threatened their sense of personal integrity.