852 results for Critical chain method
Abstract:
Principal components analysis (PCA) has been described for over 50 years; however, it is rarely applied to the analysis of epidemiological data. In this study PCA was critically appraised for its ability to reveal relationships between pulsed-field gel electrophoresis (PFGE) profiles of methicillin-resistant Staphylococcus aureus (MRSA), in comparison to the more commonly employed cluster analysis and representation by dendrograms. The PFGE type following SmaI chromosomal digest was determined for 44 multidrug-resistant hospital-acquired methicillin-resistant S. aureus (MR-HA-MRSA) isolates, two multidrug-resistant community-acquired MRSA (MR-CA-MRSA) isolates, 50 hospital-acquired MRSA (HA-MRSA) isolates (from the University Hospital Birmingham, NHS Trust, UK) and 34 community-acquired MRSA (CA-MRSA) isolates (from general practitioners in Birmingham, UK). Strain relatedness was determined using Dice band-matching with UPGMA clustering and PCA. The results indicated that PCA revealed relationships between MRSA strains that were more strongly correlated with known epidemiology, most likely because, unlike cluster analysis, PCA does not have the constraint of generating a hierarchic classification. In addition, PCA provides the opportunity for further analysis to identify key polymorphic bands within complex genotypic profiles, which is not always possible with dendrograms. Here we provide a detailed description of a PCA method for the analysis of PFGE profiles to further complement the epidemiological study of infectious disease. © 2005 Elsevier B.V. All rights reserved.
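The abstract does not include an implementation, but the core analysis step it describes (PCA applied to a band-presence matrix derived from PFGE profiles) can be sketched as follows. This is a minimal illustration on invented data, not the authors' pipeline; the use of scikit-learn's PCA and the loading-based ranking of bands are assumptions.

```python
# Sketch: PCA on a binary PFGE band-presence matrix (toy data),
# as a complement to Dice/UPGMA dendrogram analysis.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Rows = isolates (130, matching the 44 + 2 + 50 + 34 in the abstract),
# columns = polymorphic bands (1 = band present, 0 = absent).
# Real profiles would come from gel band-matching software; this is random toy data.
band_matrix = rng.integers(0, 2, size=(130, 25))

pca = PCA(n_components=2)
scores = pca.fit_transform(band_matrix)   # isolate coordinates on PC1/PC2
loadings = pca.components_                # band contributions to each PC

print("Variance explained:", pca.explained_variance_ratio_)
# Bands with the largest absolute loadings on PC1 are candidate key polymorphic bands.
key_bands = np.argsort(np.abs(loadings[0]))[::-1][:5]
print("Most influential bands on PC1:", key_bands)
```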
Abstract:
In recent work we have developed a novel variational inference method for partially observed systems governed by stochastic differential equations. In this paper we provide a comparison of the Variational Gaussian Process Smoother with an exact solution computed using a Hybrid Monte Carlo approach to path sampling, applied to a stochastic double-well potential model. It is demonstrated that the variational smoother provides a very accurate estimate of the mean path, while the conditional variance is slightly underestimated. We conclude with some remarks on the advantages and disadvantages of the variational smoother. © 2008 Springer Science + Business Media LLC.
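For readers unfamiliar with the test system, the following is a minimal sketch of simulating a stochastic double-well potential model with the Euler-Maruyama scheme. The drift 4x(1 - x²), noise level and step size are illustrative assumptions, not the parameters used in the paper.

```python
# Euler-Maruyama simulation of a stochastic double-well potential model
# (a minimal sketch of this kind of test system; parameter values are illustrative).
import numpy as np

def simulate_double_well(x0=0.0, T=10.0, dt=0.01, sigma=0.5, seed=1):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        drift = 4.0 * x[k] * (1.0 - x[k] ** 2)   # wells at x = -1 and x = +1
        x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

path = simulate_double_well()
print("final state:", path[-1])
```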
Abstract:
Matrix application continues to be a critical step in sample preparation for matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging (MSI). Imaging of small molecules such as drugs and metabolites is particularly problematic because the commonly used washing steps to remove salts are usually omitted as they may also remove the analyte, and analyte spreading is more likely with conventional wet matrix application methods. We have developed a method which uses the application of matrix as a dry, finely divided powder, here referred to as dry matrix application, for the imaging of drug compounds. This appears to offer a complementary method to wet matrix application for the MALDI-MSI of small molecules, with the alternative matrix application techniques producing different ion profiles, and allows the visualization of compounds not observed using wet matrix application methods. We demonstrate its value in imaging clozapine from rat kidney and 4-bromophenyl-1,4-diazabicyclo(3.2.2)nonane-4-carboxylic acid from rat brain. In addition, exposure of the dry matrix coated sample to a saturated moist atmosphere appears to enhance the visualization of a different set of molecules.
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes to control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of necessary communications for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
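As a rough illustration of the decentralised estimation idea, the sketch below has two coupled discrete-time subsystems, each running a local observer on its own measurement and communicating its state estimate to the other. The plant model, observer gains and noise level are invented for illustration and are not taken from the thesis.

```python
# Sketch of a decentralised estimator: two coupled discrete-time subsystems, each
# running a local observer on its own measurement and exchanging state estimates
# over a (simulated) communication link. Model and gains are illustrative.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],
              [0.05, 0.8]])        # coupled plant dynamics
L1, L2 = 0.5, 0.5                  # local observer gains (chosen for stable error dynamics)

x = np.array([1.0, -1.0])          # true state
xhat = np.array([0.0, 0.0])        # decentralised estimates

for k in range(50):
    y = x + 0.05 * rng.standard_normal(2)   # local noisy measurements
    # Each node predicts with its own row of A, using the other node's communicated estimate.
    pred1 = A[0, 0] * xhat[0] + A[0, 1] * xhat[1]
    pred2 = A[1, 0] * xhat[0] + A[1, 1] * xhat[1]
    xhat = np.array([pred1 + L1 * (y[0] - xhat[0]),
                     pred2 + L2 * (y[1] - xhat[1])])
    x = A @ x                               # plant update (no process noise in this sketch)

print("true state:", x, "estimate:", xhat)
```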
Abstract:
In this paper we develop a set of novel Markov chain Monte Carlo algorithms for Bayesian smoothing of partially observed non-linear diffusion processes. The sampling algorithms developed herein use a deterministic approximation to the posterior distribution over paths as the proposal distribution for a mixture of an independence and a random walk sampler. The approximating distribution is sampled by simulating an optimized time-dependent linear diffusion process derived from the recently developed variational Gaussian process approximation method. Flexible blocking strategies are introduced to further improve mixing, and thus the efficiency, of the sampling algorithms. The algorithms are tested on two diffusion processes: one with a double-well potential drift and another with a sine drift. The accuracy and efficiency of the new algorithms are compared with state-of-the-art hybrid Monte Carlo based path sampling. It is shown that in practical, finite-sample applications the algorithms are accurate except in the presence of large observation errors and low observation densities, which lead to a multi-modal structure in the posterior distribution over paths. More importantly, the variational approximation assisted sampling algorithm outperforms hybrid Monte Carlo in terms of computational efficiency, except when the diffusion process is densely observed with small errors, in which case both algorithms are equally efficient.
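A minimal sketch of one ingredient described here, a Metropolis-Hastings independence sampler that proposes from a fixed Gaussian approximation to the target, is given below. The one-dimensional double-well target, the Gaussian proposal parameters and the iteration count are illustrative assumptions; the paper's algorithms operate on whole diffusion paths rather than a scalar.

```python
# Minimal Metropolis-Hastings independence sampler using a fixed Gaussian approximation
# as the proposal, illustrating the idea of proposing from a deterministic approximation
# to the target. Target and proposal here are simple 1-D stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    return -(x**2 - 1.0)**2          # unnormalised double-well density

mu, sd = 0.0, 1.5                    # parameters of the Gaussian approximation (illustrative)

def log_proposal(x):
    return -0.5 * ((x - mu) / sd)**2

x = mu
samples = []
for _ in range(20000):
    x_new = mu + sd * rng.standard_normal()
    log_alpha = (log_target(x_new) - log_target(x)) + (log_proposal(x) - log_proposal(x_new))
    if np.log(rng.uniform()) < log_alpha:
        x = x_new
    samples.append(x)

print("posterior mean estimate:", np.mean(samples))
```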
Abstract:
This paper presents a Decision Support System framework based on Constraint Logic Programming and offers suggestions for using RFID technology to improve several of the critical procedures involved. The paper suggests that a widely distributed and semi-structured network of waste-producing and waste-collecting/processing enterprises can improve their planning both by adopting the proposed Decision Support System and by implementing RFID technology to update and validate information in a continuous manner. © 2010 IEEE.
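The abstract gives no model details, but the flavour of the constrained planning problem such a Decision Support System addresses can be sketched as a capacity-constrained assignment of waste producers to processing sites, solved here by brute force. All producers, sites, capacities and costs are invented; a Constraint Logic Programming engine, as named in the paper, would express the same constraints declaratively.

```python
# Hypothetical sketch: assign waste producers to processing sites without exceeding
# site capacity, minimising transport cost. Data and names are invented for illustration.
from itertools import product

producers = {"P1": 4, "P2": 6, "P3": 3}              # tonnes of waste produced
sites = {"S1": 8, "S2": 7}                           # processing capacity in tonnes
cost = {("P1", "S1"): 2, ("P1", "S2"): 5,            # transport cost per tonne
        ("P2", "S1"): 4, ("P2", "S2"): 3,
        ("P3", "S1"): 6, ("P3", "S2"): 1}

best = None
for assignment in product(sites, repeat=len(producers)):   # each producer -> one site
    plan = dict(zip(producers, assignment))
    load = {s: sum(producers[p] for p, site in plan.items() if site == s) for s in sites}
    if any(load[s] > sites[s] for s in sites):              # capacity constraint
        continue
    total = sum(producers[p] * cost[(p, s)] for p, s in plan.items())
    if best is None or total < best[0]:
        best = (total, plan)

print("cheapest feasible plan:", best)
```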
Abstract:
One of the most significant paradigm shifts of modern business management is that individual businesses no longer compete as solely autonomous entities, but rather as supply chains. Firms worldwide have embraced the concept of supply chain management as important and sometimes critical to their business. The idea of a collaborative supply chain is to gain a competitive advantage by improving overall performance through measuring the supply chain from a holistic perspective. However, contemporary performance measurement theory is somewhat fragmented and fails to support this idea. This research therefore develops and applies an integrated supply chain performance measurement framework that provides a more holistic approach to the study of supply chain performance measurement by combining both supply chain macro processes and decision-making levels. The proposed framework can thus provide a balanced horizontal (cross-process) and vertical (hierarchical decision) view and measure the performance of the entire supply chain system. Firstly, literature on performance measurement frameworks and performance measurement factors of supply chain management is used to develop a conceptual framework. The proposed framework is then presented and validated through in-depth interviews with three Thai manufacturing companies. The fieldwork combined varied sources in order to understand the views of manufacturers on supply chain performance in the three case study companies. The collected data were analyzed, interpreted, and reported using thematic analysis and the analytical hierarchy process (AHP), guided by the study's conceptual framework. This research contributes a new theory of supply chain performance measurement and knowledge on the supply chain characteristics of a developing country, Thailand. The research also benefits organisations by preparing decision makers to make strategic, tactical and operational level decisions with respect to supply chain macro processes. The results from the case studies also indicate the similarities and differences in their supply chain performance. Furthermore, the implications of the study are offered for both academic and practical use.
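The AHP step mentioned above can be illustrated with a small, self-contained calculation: priority weights are taken from the principal eigenvector of a pairwise comparison matrix and checked for consistency. The comparison matrix below is an invented example, not data from the Thai case studies.

```python
# Sketch of the AHP step used to weight performance criteria: derive priority weights
# from a pairwise comparison matrix via its principal eigenvector.
import numpy as np

# Pairwise comparisons of three criteria (e.g. strategic vs tactical vs operational).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

# Consistency ratio (CR) check; RI = 0.58 is the standard random index for n = 3.
n = A.shape[0]
CI = (eigvals.real[principal] - n) / (n - 1)
CR = CI / 0.58
print("weights:", weights.round(3), "consistency ratio:", round(CR, 3))
```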
Abstract:
This study investigates the discursive patterns of interactions between police interviewers and women reporting rape in significant witness interviews. Data in the form of video recorded interviews were obtained from a UK police force for the purposes of this study. The data are analysed using a multi-method approach, incorporating tools from micro-sociology, Conversation Analysis and Discursive Psychology, to reveal patterns of interactional control, negotiation, and interpretation. The study adopts a critical approach, which is to say that as well as describing discursive patterns, it explains them in light of the discourse processes involved in the production and consumption of police interview talk, and comments on the relationship between these discourse processes and the social context in which they occur. A central focus of the study is how interviewers draw on particular interactional resources to shape interviewees’ accounts in particular ways, and this is discussed in relation to the institutional role of the significant witness interview. The discussion is also extended to the ways in which mainstream rape ideology is both reflected in, and maintained by, the discursive choices of participants. The findings of this study indicate that there are a number of issues to be addressed in terms of the training currently offered to officers at Level 2 of the Professionalising Investigation Programme (PIP) (NPIA, 2009) who intend to conduct significant witness interviews. Furthermore, a need is identified to bring the linguistic and discursive processes of negotiation and transformation identified by the study to the attention of the justice system as a whole. This is a particularly pressing need in light of judicial reluctance to replace written witness statements, the current ‘end product’ of significant witness interviews, with the video recorded interview in place of direct examination in cases of rape.
Abstract:
The oxidation of lipids has long been a topic of interest in biological and food sciences, and the fundamental principles of non-enzymatic free radical attack on phospholipids are well established, although questions about the details of the mechanisms remain. The number of end products that are formed following the initiation of phospholipid peroxidation is large, and is continually growing as new structures of oxidized phospholipids are elucidated. Common products are phospholipids with esterified isoprostane-like structures and chain-shortened products containing hydroxy, carbonyl or carboxylic acid groups; the carbonyl-containing compounds are reactive and readily form adducts with proteins and other biomolecules. Phospholipids can also be attacked by reactive nitrogen and chlorine species, further expanding the range of products to nitrated and chlorinated phospholipids. Key to understanding the mechanisms of oxidation is the development of advanced and sensitive technologies that enable structural elucidation. Tandem mass spectrometry has proved invaluable in this respect and is generally the method of choice for structural work. A number of studies have investigated whether individual oxidized phospholipid products occur in vivo, and mass spectrometry techniques have been instrumental in detecting a variety of oxidation products in biological samples such as atherosclerotic plaque material, brain tissue, intestinal tissue and plasma, although relatively few have achieved an absolute quantitative analysis. The levels of oxidized phospholipids in vivo are a critical question, as there is now substantial evidence that many of these compounds are bioactive and could contribute to pathology. The challenges for the future will be to adopt lipidomic approaches to map the profile of oxidized phospholipid formation in different biological conditions, and relate this to their effects in vivo. This article is part of a Special Issue entitled: Oxidized phospholipids - their properties and interactions with proteins.
Abstract:
Purpose – The purpose of this paper is to report on an investigation into the selection and evaluation of a suitable strategic positioning methodology for SMEs in Singapore. Design/methodology/approach – The research methodology is based on a critical review of the literature to identify the potentially most suitable strategic positioning methodology, evaluation and testing of the methodology within the context of SMEs in Singapore, and analysis to determine the strengths and weaknesses of the methodology and opportunities for further research. Findings – This paper illustrates a leading integrated strategic positioning decision-making process, which has been found to be potentially suitable for SMEs in Singapore, and the process is then applied and evaluated in two industrial case studies. Results in the form of strengths, weaknesses and opportunities are evaluated and discussed in detail, and further research to improve the process has been identified. Practical implications – The paper offers a practical and integrated strategic supply chain positioning methodology for SMEs to define their own competitive space, among other companies in the manufacturing supply chain, so as to maximize business competitiveness. Originality/value – This paper contributes to the knowledge of the strategic positioning decision process and identifies further research to adapt the process for SMEs in Singapore.
Abstract:
The purpose of this article is to highlight the value of ‘strategic positioning’ as a means of providing a competitive edge, and to introduce and describe a novel method of managing this. Strategic positioning is concerned with the choice of business activities a company carries out itself, compared to those provided by suppliers, partners, distributors and even customers. It is therefore directly impacted by, and has direct impact upon, such decisions as outsourcing, off-shoring, partnering, innovation, technology acquisition and customer servicing.
Abstract:
Decentralised supply chain formation involves determining the set of producers within a network able to supply goods to one or more consumers at the lowest cost. This problem is frequently tackled using auctions and negotiations. In this paper we show how it can be cast as an optimisation of a pairwise cost function. Optimising this class of functions is NP-hard but good approximations to the global minimum can be obtained using Loopy Belief Propagation (LBP). Here we detail a LBP-based approach to the supply chain formation problem, involving decentralised message-passing between potential participants. Our approach is evaluated against a well-known double-auction method and an optimal centralised technique, showing several improvements: it obtains better solutions for most networks that admit a competitive equilibrium (competitive equilibrium as defined in [3] is used as a means of classifying results on certain networks, to allow for minor inefficiencies in their auction protocol and agent bidding strategies), while also solving problems where no competitive equilibrium exists, for which the double-auction method frequently produces inefficient solutions.
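To make the 'pairwise cost function' formulation concrete, the sketch below sets up a tiny three-tier network with unary (production) and pairwise (link) costs and solves it with the min-sum recursion that belief propagation reduces to on a chain-structured graph. The network, costs and tier structure are invented for illustration; this is not the authors' decentralised implementation.

```python
# Sketch: supply chain formation as minimisation of a pairwise cost function.
# On a chain-structured network the min-sum messages below give the exact optimum;
# on general (loopy) graphs the same messages are passed iteratively as an approximation.
import numpy as np

# Three tiers, each with 2 candidate producers; unary[i][s] = cost of producer s at tier i.
unary = [np.array([3.0, 4.0]),
         np.array([2.0, 1.0]),
         np.array([5.0, 2.5])]
# pair[i][s, t] = cost of linking producer s at tier i with producer t at tier i+1.
pair = [np.array([[0.5, 2.0], [1.5, 0.5]]),
        np.array([[1.0, 3.0], [0.2, 0.8]])]

# Forward min-sum messages: msg[t] = min over s of (msg_prev[s] + unary[s] + pair[s, t]).
msg = np.zeros(2)
back = []
for i in range(2):
    scores = msg[:, None] + unary[i][:, None] + pair[i]   # shape (s, t)
    back.append(np.argmin(scores, axis=0))
    msg = np.min(scores, axis=0)

total = msg + unary[2]
choice = [int(np.argmin(total))]
for b in reversed(back):                                   # backtrack the optimal chain
    choice.insert(0, int(b[choice[0]]))

print("selected producers per tier:", choice, "total cost:", float(total.min()))
```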
Abstract:
This thesis examined solar thermal collectors for use in alternative hybrid solar-biomass power plant applications in Gujarat, India. Following a preliminary review, the cost-effective selection and design of the solar thermal field were identified as critical factors underlying the success of hybrid plants. Consequently, the existing solar thermal technologies were reviewed and ranked for use in India by means of a multi-criteria decision-making method, the Analytical Hierarchy Process (AHP). Informed by the outcome of the AHP, the thesis went on to pursue the Linear Fresnel Reflector (LFR), the design of which was optimised with the help of ray-tracing. To further enhance collector performance, LFR concepts incorporating novel mirror spacing and drive mechanisms were evaluated. Subsequently, a new variant, termed the Elevation Linear Fresnel Reflector (ELFR), was designed, constructed and tested at Aston University, UK, thereby allowing theoretical models for the performance of a solar thermal field to be verified. Based on the resulting characteristics of the LFR, and data gathered for the other hybrid system components, models of hybrid LFR- and ELFR-biomass power plants were developed and analysed in TRNSYS®. The techno-economic and environmental consequences of varying the size of the solar field in relation to the total plant capacity were modelled for a series of case studies to evaluate different applications: tri-generation (electricity, ice and heat), electricity-only generation, and process heat. The case studies also encompassed varying site locations, capacities, operational conditions and financial situations. In the case of a hybrid tri-generation plant in Gujarat, it was recommended to use an LFR solar thermal field of 14,000 m² aperture with a 3 tonne biomass boiler, generating 815 MWh per annum of electricity for nearby villages and 12,450 tonnes of ice per annum for local fisheries and food industries. However, at the expense of a 0.3 ¢/kWh increase in levelised energy costs, the ELFR increased savings of biomass (100 t/a) and land (9 ha/a). For solar thermal applications in areas with high land cost, the ELFR reduced levelised energy costs. It was determined that off-grid hybrid plants for tri-generation were the most feasible application in India. Although biomass-only plants were found to be more economically viable, it was concluded that hybrid systems will soon become cost competitive and can considerably improve current energy security and biomass supply chain issues in India.
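Since the comparison above hinges on levelised energy costs, a generic LCOE calculation is sketched below. The 815 MWh/a output echoes the figure quoted in the abstract, but the capital cost, O&M cost, lifetime and discount rate are placeholder assumptions, not values from the thesis.

```python
# Illustrative levelised cost of energy (LCOE) calculation of the kind used to compare
# hybrid plant configurations. All figures except annual_energy are invented placeholders.
capital_cost = 2_000_000.0   # $ upfront
annual_om_cost = 80_000.0    # $ per year (operation, maintenance, biomass fuel)
annual_energy = 815.0        # MWh generated per year (figure quoted in the abstract)
lifetime = 20                # years
discount_rate = 0.10

# Discounted sums of costs and energy over the plant lifetime.
discounted_costs = capital_cost + sum(annual_om_cost / (1 + discount_rate) ** y
                                      for y in range(1, lifetime + 1))
discounted_energy = sum(annual_energy / (1 + discount_rate) ** y
                        for y in range(1, lifetime + 1))

lcoe = discounted_costs / discounted_energy      # $ per MWh
print(f"LCOE ~ {lcoe:.1f} $/MWh ({lcoe / 10:.2f} cents/kWh)")
```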
Abstract:
Supply chain formation is the process by which a set of producers within a network determine the subset of these producers able to form a chain to supply goods to one or more consumers at the lowest cost. This problem has been tackled in a number of ways, including auctions, negotiations, and argumentation-based approaches. In this paper we show how this problem can be cast as an optimization of a pairwise cost function. Optimizing this class of energy functions is NP-hard but efficient approximations to the global minimum can be obtained using loopy belief propagation (LBP). Here we detail a max-sum LBP-based approach to the supply chain formation problem, involving decentralized message-passing between supply chain participants. Our approach is evaluated against a well-known decentralized double-auction method and an optimal centralized technique, showing several improvements on the auction method: it obtains better solutions for most network instances which allow for competitive equilibrium (Competitive equilibrium in Walsh and Wellman is a set of producer costs which permits a Pareto optimal state in which agents in the allocation receive non-negative surplus and agents not in the allocation would acquire non-positive surplus by participating in the supply chain) while also optimally solving problems where no competitive equilibrium exists, for which the double-auction method frequently produces inefficient solutions. © 2012 Wiley Periodicals, Inc.
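The competitive-equilibrium definition quoted in the abstract can be read, in highly simplified form, as a surplus condition on producers inside and outside the allocation. The sketch below checks that condition for an invented set of producer costs, prices and allocation flags; it ignores the Pareto-optimality part of the definition.

```python
# Simplified surplus check behind the competitive-equilibrium definition quoted above:
# producers in the allocation should receive non-negative surplus (price - cost) and
# producers outside it should not be able to gain by joining. All numbers are invented.
producers = {
    # name: (production cost, price it would be paid, in_allocation)
    "A": (4.0, 5.0, True),
    "B": (6.0, 5.0, False),
    "C": (3.0, 3.5, True),
}

def is_competitive_equilibrium(producers):
    for name, (cost, price, allocated) in producers.items():
        surplus = price - cost
        if allocated and surplus < 0:
            return False        # an allocated producer would lose money
        if not allocated and surplus > 0:
            return False        # an excluded producer could profit by participating
    return True

print("competitive equilibrium:", is_competitive_equilibrium(producers))
```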
Abstract:
Simulation is an effective method for improving supply chain performance. However, there is limited advice available to assist practitioners in selecting the most appropriate method for a given problem. Much of the advice that does exist relies on custom and practice rather than a rigorous conceptual or empirical analysis. An analysis of the different modelling techniques applied in the supply chain domain was conducted, and the three main approaches to simulation used were identified; these are System Dynamics (SD), Discrete Event Simulation (DES) and Agent Based Modelling (ABM). This research has examined these approaches in two stages. Firstly, a first principles analysis was carried out in order to challenge the received wisdom about their strengths and weaknesses and a series of propositions were developed from this initial analysis. The second stage was to use the case study approach to test these propositions and to provide further empirical evidence to support their comparison. The contributions of this research are both in terms of knowledge and practice. In terms of knowledge, this research is the first holistic cross paradigm comparison of the three main approaches in the supply chain domain. Case studies have involved building ‘back to back’ models of the same supply chain problem using SD and a discrete approach (either DES or ABM). This has led to contributions concerning the limitations of applying SD to operational problem types. SD has also been found to have risks when applied to strategic and policy problems. Discrete methods have been found to have potential for exploring strategic problem types. It has been found that discrete simulation methods can model material and information feedback successfully. Further insights have been gained into the relationship between modelling purpose and modelling approach. In terms of practice, the findings have been summarised in the form of a framework linking modelling purpose, problem characteristics and simulation approach.
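As a flavour of the System Dynamics end of the comparison, the sketch below is a single-echelon stock-and-flow inventory model with an order pipeline delay, integrated with a simple Euler scheme. The structure and parameters are illustrative and are not taken from the case studies; a DES or ABM treatment of the same problem would represent individual orders or agents instead.

```python
# Minimal System Dynamics (SD) style stock-and-flow sketch of a single-echelon inventory
# with an order pipeline delay, of the kind compared against discrete approaches.
dt, horizon = 0.25, 60.0
steps = int(horizon / dt)

inventory = 100.0                  # stock of goods on hand
target_inventory = 100.0
lead_time = 4.0                    # weeks
adjustment_time = 2.0              # weeks
demand = 10.0
pipeline = demand * lead_time      # orders placed but not yet received (start in equilibrium)

history = []
for k in range(steps):
    if k * dt >= 10.0:             # step increase in demand at week 10
        demand = 15.0
    orders = max(0.0, demand + (target_inventory - inventory) / adjustment_time)
    deliveries = pipeline / lead_time          # first-order delay
    # Euler integration of the two stocks.
    inventory += (deliveries - demand) * dt
    pipeline += (orders - deliveries) * dt
    history.append(inventory)

print("minimum inventory after demand step:", round(min(history), 1))
```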