901 results for Lean maintenance
Abstract:
Studies have attributed several functions to the Eaf family, including tumor suppression and eye development. Given the potential association between cancer and development, we set forth to explore Eaf1 and Eaf2/U19 activity in vertebrate embryogenesis, using zebrafish. In situ hybridization revealed similar eaf1 and eaf2/u19 expression patterns. Morpholino-mediated knockdown of either eaf1 or eaf2/u19 expression produced similar morphological changes that could be reversed by ectopic expression of target or reciprocal-target mRNA. However, combination of Eaf1 and Eaf2/U19 (Eafs)-morpholinos increased the severity of defects, suggesting that Eaf1 and Eaf2/U19 share only partial functional redundancy. The Eafs knockdown phenotype resembled that of embryos with defects in convergence and extension movements. Indeed, knockdown caused expression pattern changes for convergence and extension movement markers, whereas cell tracing experiments using kaede mRNA showed a correlation between Eafs knockdown and cell migration defects. Cardiac and pancreatic differentiation markers revealed that Eafs knockdown also disrupted midline convergence of heart and pancreatic organ precursors. Noncanonical Wnt signaling plays a key role in both convergence and extension movements and midline convergence of organ precursors. We found that Eaf1 and Eaf2/U19 maintained expression levels of wnt11 and wnt5. Moreover, wnt11 or wnt5 mRNA partially rescued the convergence and extension movement defects occurring in eafs morphants. Wnt11 and Wnt5 converge on rhoA, so not surprisingly, rhoA mRNA rescued defects more effectively than either wnt11 or wnt5 mRNA alone. However, the ectopic expression of wnt11 and wnt5 did not affect eaf1 and eaf2/u19 expression. These data indicate that eaf1 and eaf2/u19 act upstream of noncanonical Wnt signaling to mediate convergence and extension movements.
Abstract:
This paper aims to elucidate practitioners' understanding and implementation of Lean in Product Development (LPD). We report on a workshop held in the UK during 2012. Managers and engineers from four organizations discussed their understanding of LPD and their ideas and practice regarding management and assessment of value and waste. The study resulted in a set of insights into current practice and lean thinking from the industry perspective. Building on this, the paper introduces a balanced value and waste model that can be used by practitioners as a checklist to identify issues that need to be considered when applying LPD. The main results indicate that organizations tend to focus on waste elimination rather than value enhancement in LPD. Moreover, the lean metrics that were discussed by the workshop participants do not link the strategic level with the operational one, and poorly reflect the value and waste generated in the process. Future directions for research are explored, and include the importance of a balanced approach considering both value and waste when applying LPD, and the need to link lean metrics with value and waste levels.
A constraint-driven human resource scheduling method in software development and maintenance process
Abstract:
…experimentally at elevated pressures and under normal- and micro-gravity conditions, utilizing the 3.5 s drop tower of the National Microgravity Laboratory of China. The results showed that under micro-gravity conditions natural convection is minimized and the flames become more planar and symmetric compared to normal gravity. In both normal- and micro-gravity experiments, and for a given strain rate and fuel concentration, the flame luminosity was found to intensify as the pressure increases. On the other hand, at a given pressure, the flame luminosity was determined to weaken as the strain rate decreases. At a given strain rate, the fuel concentration at extinction was found to vary non-monotonically with pressure, namely it first increases and subsequently decreases with pressure. The limit fuel concentration peaks around 3 and 4 atm under normal- and micro-gravity, respectively. The extinction limits measured at micro-gravity were in good agreement with predictions obtained through detailed numerical simulations, but they are notably lower compared to the data obtained under normal gravity. The simulations confirmed the non-monotonic variation of flammability limits with pressure, in agreement with previous studies. Sensitivity analysis showed that for pressures between 1 and 5 atm, the near-limit flame response is dominated by the competition between the main branching reaction, H + O2 → OH + O, and the pressure-sensitive termination reaction, H + O2 + M → HO2 + M. However, for pressures greater than 5 atm it was determined that HO2 kinetics result in further chain branching, in a way that is analogous to the third explosion limit of H2/O2 mixtures. © 2010 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
Abstract:
The thesis developed here is that reasoning programs which take care to record the logical justifications for program beliefs can apply several powerful, but simple, domain-independent algorithms to (1) maintain the consistency of program beliefs, (2) realize substantial search efficiencies, and (3) automatically summarize explanations of program beliefs. These algorithms use the recorded justifications to maintain the consistency and well-founded basis of the set of beliefs. The set of beliefs can be efficiently updated in an incremental manner when hypotheses are retracted and when new information is discovered. The recorded justifications also enable the pinpointing of exactly those assumptions which support any particular belief. The ability to pinpoint the underlying assumptions is the basis for an extremely powerful domain-independent backtracking method. This method, called Dependency-Directed Backtracking, offers vastly improved performance over traditional backtracking algorithms.
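The core mechanism described above — recording justifications and incrementally recomputing which beliefs remain well-founded after a retraction — can be sketched as follows. This is a minimal, hypothetical illustration of a justification-based truth maintenance system; the class and method names are invented for the sketch and do not reproduce the thesis's actual implementation.

```python
# Minimal sketch of a justification-based truth maintenance system (TMS).
# Premises are unconditional beliefs; other beliefs hold only while some
# recorded justification's antecedents all hold. Retracting a premise
# automatically withdraws every belief that loses well-founded support.

class TMS:
    def __init__(self):
        self.justifications = {}   # belief -> list of antecedent sets
        self.premises = set()

    def add_premise(self, belief):
        self.premises.add(belief)

    def justify(self, belief, antecedents):
        """Record that `belief` is justified by all of `antecedents`."""
        self.justifications.setdefault(belief, []).append(set(antecedents))

    def believed(self):
        """Compute the well-founded belief set from the current premises."""
        beliefs = set(self.premises)
        changed = True
        while changed:
            changed = False
            for belief, justs in self.justifications.items():
                if belief not in beliefs and any(j <= beliefs for j in justs):
                    beliefs.add(belief)
                    changed = True
        return beliefs

    def retract(self, premise):
        """Retract a premise; dependent beliefs lose support implicitly."""
        self.premises.discard(premise)

tms = TMS()
tms.add_premise("a")
tms.justify("b", ["a"])    # b depends on a
tms.justify("c", ["b"])    # c depends on b, hence transitively on a
print(tms.believed())      # → {'a', 'b', 'c'} (element order may vary)
tms.retract("a")
print(tms.believed())      # → set(): b and c lost their support
```

Dependency-directed backtracking builds on the same records: when a contradiction is detected, the justification chain identifies exactly which assumptions to revise, rather than chronologically undoing choices.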
Abstract:
Overlay networks have been used for adding and enhancing functionality for end-users without requiring modifications to the Internet core mechanisms. Overlay networks have been used for a variety of popular applications including routing, file sharing, content distribution, and server deployment. Previous work has focused on devising practical neighbor selection heuristics under the assumption that users conform to a specific wiring protocol. This is not a valid assumption in highly decentralized systems like overlay networks. Overlay users may act selfishly and deviate from the default wiring protocols by utilizing knowledge they have about the network when selecting neighbors to improve the performance they receive from the overlay. This thesis goes against the conventional thinking that overlay users conform to a specific protocol. The contributions of this thesis are threefold. It provides a systematic evaluation of the design space of selfish neighbor selection strategies in real overlays, evaluates the performance of overlay networks that consist of users that select their neighbors selfishly, and examines the implications of selfish neighbor and server selection for overlay protocol design and service provisioning, respectively. This thesis develops a game-theoretic framework that provides a unified approach to modeling Selfish Neighbor Selection (SNS) wiring procedures on behalf of selfish users. The model is general, and takes into consideration costs reflecting network latency and user preference profiles, the inherent directionality in overlay maintenance protocols, and connectivity constraints imposed on the system designer. Within this framework, the notion of a user's "best response" wiring strategy is formalized as a k-median problem on asymmetric distances and is used to obtain overlay structures in which no node can re-wire to improve the performance it receives from the overlay.
Evaluation results presented in this thesis indicate that selfish users can reap substantial performance benefits when connecting to overlay networks composed of non-selfish users. In addition, in overlays that are dominated by selfish users, the resulting stable wirings are optimized to such a great extent that even non-selfish newcomers can extract near-optimal performance through naïve wiring strategies. To capitalize on the performance advantages of optimal neighbor selection strategies and the emergent global wirings that result, this thesis presents EGOIST: an SNS-inspired overlay network creation and maintenance routing system. Through an extensive measurement study on the deployed prototype, results presented in this thesis show that EGOIST's neighbor selection primitives outperform existing heuristics on a variety of performance metrics, including delay, available bandwidth, and node utilization. Moreover, these results demonstrate that EGOIST is competitive with an optimal but unscalable full-mesh approach, remains highly effective under significant churn, is robust to cheating, and incurs minimal overheads. This thesis also studies selfish neighbor selection strategies for swarming applications. The main focus is on n-way broadcast applications where each of the n overlay users wants to push its own distinct file to all other destinations as well as download their respective data files. Results presented in this thesis demonstrate that the performance of our swarming protocol for n-way broadcast on top of overlays of selfish users is far superior to the performance on top of existing overlays. In the context of service provisioning, this thesis examines the use of distributed approaches that enable a provider to determine the number and location of servers for optimal delivery of content or services to its selfish end-users.
To leverage recent advances in virtualization technologies, this thesis develops and evaluates a distributed protocol to migrate servers based on end-users demand and only on local topological knowledge. Results under a range of network topologies and workloads suggest that the performance of the distributed deployment is comparable to that of the optimal but unscalable centralized deployment.
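The "best response" idea above — a selfish node choosing the k outgoing links that minimize its total cost to all destinations, given everyone else's fixed wirings — can be sketched as a brute-force k-median computation on asymmetric distances. All names, the cost model (direct link cost plus the chosen neighbor's existing overlay distance), and the toy data are hypothetical illustrations; a real system would use heuristics or approximation rather than exhaustive search.

```python
# Hypothetical sketch of a selfish node's best-response wiring as a
# k-median problem on asymmetric distances. d_link[u][n] is node u's
# direct cost to candidate neighbor n; d_over[n][t] is n's existing
# overlay distance to target t (asymmetric in general).

from itertools import combinations

def best_response(u, candidates, targets, d_link, d_over, k):
    best_cost, best_wiring = float("inf"), None
    for wiring in combinations(candidates, k):
        # Cost to each target: cheapest first hop plus that neighbor's
        # overlay distance onward to the target.
        cost = sum(min(d_link[u][n] + d_over[n][t] for n in wiring)
                   for t in targets)
        if cost < best_cost:
            best_cost, best_wiring = cost, wiring
    return best_wiring, best_cost

# Toy instance: node "u" picks k=2 neighbors from {a, b, c} to reach {x, y}.
d_link = {"u": {"a": 1, "b": 2, "c": 5}}
d_over = {"a": {"x": 4, "y": 1},
          "b": {"x": 1, "y": 4},
          "c": {"x": 1, "y": 1}}
wiring, cost = best_response("u", ["a", "b", "c"], ["x", "y"],
                             d_link, d_over, k=2)
# → ('a', 'b') with cost 5: "b" serves x (2+1), "a" serves y (1+1);
#   "c" has the best onward distances but its expensive direct link loses.
```

A stable (equilibrium) wiring is then one where no node can lower its own cost by re-running this computation, which matches the fixed-point structures the thesis evaluates.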
Abstract:
Case-Based Reasoning (CBR) uses past experiences to solve new problems. The quality of the past experiences, which are stored as cases in a case base, is a big factor in the performance of a CBR system. The system's competence may be improved by adding problems to the case base after they have been solved and their solutions verified to be correct. However, from time to time, the case base may have to be refined to reduce redundancy and to get rid of any noisy cases that may have been introduced. Many case base maintenance algorithms have been developed to delete noisy and redundant cases. However, different algorithms work well in different situations and it may be difficult for a knowledge engineer to know which one is the best to use for a particular case base. In this thesis, we investigate ways to combine algorithms to produce better deletion decisions than the decisions made by individual algorithms, and ways to choose which algorithm is best for a given case base at a given time. We analyse five of the most commonly-used maintenance algorithms in detail and show how the different algorithms perform better on different datasets. This motivates us to develop a new approach: maintenance by a committee of experts (MACE). MACE allows us to combine maintenance algorithms to produce a composite algorithm which exploits the merits of each of the algorithms that it contains. By combining different algorithms in different ways we can also define algorithms that have different trade-offs between accuracy and deletion. While MACE allows us to define an infinite number of new composite algorithms, we still face the problem of choosing which algorithm to use. To make this choice, we need to be able to identify properties of a case base that are predictive of which maintenance algorithm is best. We examine a number of measures of dataset complexity for this purpose. These provide a numerical way to describe a case base at a given time. 
We use the numerical description to develop a meta-case-based classification system. This system uses previous experience about which maintenance algorithm was best to use for other case bases to predict which algorithm to use for a new case base. Finally, we give the knowledge engineer more control over the deletion process by creating incremental versions of the maintenance algorithms. These incremental algorithms suggest one case at a time for deletion rather than a group of cases, which allows the knowledge engineer to decide whether or not each case in turn should be deleted or kept. We also develop incremental versions of the complexity measures, allowing us to create an incremental version of our meta-case-based classification system. Since the case base changes after each deletion, the best algorithm to use may also change. The incremental system allows us to choose which algorithm is the best to use at each point in the deletion process.
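The committee idea behind MACE — several maintenance algorithms each nominating cases for deletion, with a composite rule combining their votes — can be sketched as follows. The expert functions here are toy stand-ins (a duplicate detector and a noise flagger), not the five maintenance algorithms analysed in the thesis, and the majority-vote rule is just one of the combination schemes such a framework admits.

```python
# Illustrative sketch of a "committee of experts" for case-base
# maintenance: each expert proposes a set of cases to delete, and the
# composite algorithm deletes only cases reaching a vote threshold.
# Raising the threshold trades deletion volume for caution, mirroring
# the accuracy/deletion trade-offs described for composite algorithms.

from collections import Counter

def committee_deletions(case_base, experts, threshold=None):
    """Return cases that at least `threshold` experts vote to delete."""
    if threshold is None:
        threshold = len(experts) // 2 + 1   # simple majority
    votes = Counter()
    for expert in experts:
        votes.update(expert(case_base))      # each expert returns a set
    return {case for case, n in votes.items() if n >= threshold}

# Toy experts: one flags duplicated cases, one flags cases marked noisy.
dup_expert   = lambda cb: {c for c in cb if cb.count(c) > 1}
noise_expert = lambda cb: {c for c in cb if c.startswith("noisy")}

cb = ["case1", "case2", "case2", "noisy3", "noisy3"]
print(committee_deletions(cb, [dup_expert, noise_expert]))
# → {'noisy3'}: only the case both experts flag survives the majority vote
```

An incremental variant, as described above, would instead rank the voted cases and surface them one at a time, letting the knowledge engineer confirm each deletion and letting the system re-evaluate which algorithm is best as the case base shrinks.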