13 results for extended BHF approach
in Aston University Research Archive
Abstract:
Recently Homer and Percival have postulated that intermolecular van der Waals dispersion forces can be characterised by three mechanisms. The first arises via the mean square reaction field ⟨R₁²⟩ due to the transient dipole of a particular solute molecule that is considered situated in a cavity surrounded by solvent molecules; this was characterised by an extended Onsager approach. The second stems from the extra-cavity mean square reaction field ⟨R₂²⟩ of the near-neighbour solvent molecules. The third originates from square electric fields E_B² due to a newly characterised effect in which solute atoms are `buffeted' by the peripheral atoms of adjacent solvent molecules. The present work concerns more detailed studies of the buffeting screening, which is governed by a sterically controlled parameter (2T̄ − T)², where T̄ and T are geometric structural parameters. The original approach is used to characterise the buffeting shifts induced by large solvent molecules and is found to be inadequate. Consequently, improved methods of calculating T̄ and T are reported. Using the improved approach it is shown that buffeting depends on the nature of the solvent as well as on that of the solute molecule. Detailed investigation of the buffeting component of the van der Waals chemical shifts of selected solutes in a range of solvents containing either H or Cl as peripheral atoms has enabled the determination of a theoretically acceptable value of the classical screening coefficient B for protons. ¹H and ¹³C resonance studies of tetraethylmethane and ¹H, ¹³C and ²⁹Si resonance studies of TMS have been used to support the original contention that three components (⟨R₁²⟩, ⟨R₂²⟩ and E_B²) of intermolecular van der Waals dispersion fields are required to characterise vdW chemical shifts.
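On the classical picture, the van der Waals screening of a nucleus is taken to be proportional to the mean square electric field it experiences. As a sketch, assuming a single screening coefficient B applies to all three contributions (the precise weighting used by Homer and Percival may differ), the three-component contention above amounts to:

```latex
\sigma_{\mathrm{vdW}} \;=\; -B\left(\langle R_1^2\rangle \;+\; \langle R_2^2\rangle \;+\; E_B^2\right)
```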
Abstract:
The Thouless-Anderson-Palmer (TAP) approach was originally developed for analysing the Sherrington-Kirkpatrick model in the study of spin glasses, and has since been employed mainly in the context of extensively connected systems, in which each dynamical variable interacts weakly with all the others. Recently, we extended this method to handle general intensively connected systems, where each variable has only O(1) connections characterised by strong couplings. However, the new formulation looks quite different from existing analyses, and it is natural to ask whether it actually reproduces known results for systems of extensive connectivity. In this chapter, we apply our formulation of the TAP approach to an extensively connected system, the Hopfield associative memory model, showing that it produces results identical to those obtained by the conventional formulation.
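For orientation, the conventional TAP equations for the Sherrington-Kirkpatrick model (the extensively connected setting mentioned above), with couplings J_ij and external fields h_i, take the standard textbook form below; the Onsager reaction term −β²(1 − q)mᵢ is what distinguishes TAP from naive mean field. This is the classical formulation, not the chapter's extended one:

```latex
m_i \;=\; \tanh\!\Big(\beta \sum_{j \ne i} J_{ij} m_j \;+\; \beta h_i \;-\; \beta^2 (1 - q)\, m_i\Big),
\qquad q \;=\; \frac{1}{N}\sum_{j=1}^{N} m_j^{2}.
```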
Abstract:
In many Environmental Information Systems the actual observations arise from a discrete monitoring network which might be rather heterogeneous in both location and the types of measurements made. In this paper we describe the architecture and infrastructure for a system, developed as part of the EU FP6 funded INTAMAP project, that provides a service oriented solution for constructing an interoperable, automatic interpolation system. This system will be based on the Open Geospatial Consortium’s Web Feature Service (WFS) standard. The essence of our approach is to extend the GML3.1 observation feature to include information about the sensor using SensorML, and to further extend this to incorporate observation error characteristics. Our extended WFS will accept observations and store them in a database. The observations will be passed to our R-based interpolation server, which will use a range of methods, including a novel sparse, sequential kriging method (only briefly described here), to produce an internal representation of the interpolated field resulting from the observations currently uploaded to the system. The extended WFS will then accept queries such as ‘What is the probability distribution of the desired variable at a given point?’, ‘What is the mean value over a given region?’, or ‘What is the probability of exceeding a certain threshold at a given location?’. To support information-rich transfer of complex and uncertain predictions we are developing schemas to represent probabilistic results in a GML3.1 (object-property) style. The system will also offer the more easily accessible Web Map Service and Web Coverage Service interfaces, to allow users to access the system at the level of complexity they require for their specific application. Such a system will offer a very valuable contribution to the next generation of Environmental Information Systems in the context of real time mapping for monitoring and security, particularly for systems that employ a service oriented architecture.
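As a rough illustration of the interpolation queries described above, here is a minimal simple-kriging sketch. It is not the INTAMAP R server or its sparse sequential kriging method; the covariance function, data and noise level are hypothetical:

```python
# Illustrative simple-kriging sketch (not the INTAMAP implementation):
# a squared-exponential covariance answering the kinds of queries listed above.
import numpy as np
from scipy.stats import norm

def cov(a, b, sill=1.0, length=2.0):
    """Squared-exponential covariance between coordinate arrays a (n,2) and b (m,2)."""
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return sill * np.exp(-0.5 * d2 / length**2)

# Hypothetical observations: locations (x, y), values, prior mean, error variance.
X = np.array([[0.0, 0.0], [1.0, 3.0], [4.0, 1.0], [3.0, 4.0]])
z = np.array([1.2, 0.7, 1.9, 0.4])
mu, noise = 1.0, 0.05

K = cov(X, X) + noise * np.eye(len(X))
w = np.linalg.solve(K, z - mu)

def predict(points):
    """Kriging mean and variance at query points (m,2)."""
    k = cov(points, X)
    mean = mu + k @ w
    var = cov(points, points).diagonal() - np.einsum('ij,ij->i', k @ np.linalg.inv(K), k)
    return mean, var

# 'What is the probability of exceeding a certain threshold at a given location?'
p = np.array([[2.0, 2.0]])
m, v = predict(p)
print('mean:', m, 'P(Z > 1.5):', 1 - norm.cdf(1.5, m, np.sqrt(v)))
```

Because the predictive distribution at each point is Gaussian, the same mean and variance also answer the point-distribution and threshold-exceedance queries directly.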
Abstract:
Purpose - The purpose of this research paper is to demonstrate how existing performance measurement approaches may be adapted to measure and manage performance in extended enterprises. Design/methodology/approach - The paper reviews the literature on performance measurement and extended enterprises. It explains the collaborative architecture of an extended enterprise and demonstrates this architecture through a case study. A model for measuring and managing performance in extended enterprises is developed using the case study. Findings - The research found that, owing to structural differences between traditional and extended enterprises, the systems required to measure and manage the performance of extended enterprises, whilst being based upon existing performance measurement frameworks, would be structurally and operationally different. Based on this, a model for measuring and managing performance in extended enterprises is proposed which includes intrinsic and extrinsic inter-enterprise coordinating measures. Research limitations/implications - There are two limitations to this research. First, the evidence is based on a single case, so further cases should be studied to establish the generalisability of the presented results. Second, the practical limitations of the EE performance measurement model should be established through longitudinal action research. Practical implications - In practice, the proposed model requires collaborating organisations to be more open and to share critical performance information with one another. This will require changes in practices and attitudes. Originality/value - The main contribution this paper makes is that it highlights the structural differences between traditional and collaborative enterprises and specifies the performance measurement and management requirements of these collaborative organisations. © Emerald Group Publishing Limited.
Abstract:
This paper extends the minimax disparity approach for determining ordered weighted averaging (OWA) operator weights based on linear programming. It introduces a minimax disparity constraint between every distinct pair of weights and uses the duality of linear programming to prove the feasibility of the extended OWA operator weights model. The paper finishes with an open problem. © 2006 Elsevier Ltd. All rights reserved.
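A minimal sketch of the kind of linear program this describes, assuming the extension imposes the disparity bound over all distinct pairs of weights rather than only adjacent ones (the paper's exact formulation may differ):

```python
# Sketch of an extended minimax-disparity OWA weight model solved as an LP:
# minimise the largest |w_i - w_j| over all distinct pairs, subject to a
# prescribed orness level and the weights summing to one.
import numpy as np
from scipy.optimize import linprog

def owa_minimax_disparity(n, alpha):
    """Variables: [w_1..w_n, delta]; minimise delta."""
    c = np.zeros(n + 1)
    c[-1] = 1.0                       # objective: minimise delta
    A_ub, b_ub = [], []
    for i in range(n):
        for j in range(i + 1, n):     # |w_i - w_j| <= delta, both directions
            for sign in (1, -1):
                row = np.zeros(n + 1)
                row[i], row[j], row[-1] = sign, -sign, -1.0
                A_ub.append(row); b_ub.append(0.0)
    A_eq = np.zeros((2, n + 1))
    A_eq[0, :n] = [(n - 1 - i) / (n - 1) for i in range(n)]  # orness(w) = alpha
    A_eq[1, :n] = 1.0                                        # sum(w) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[alpha, 1.0],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n]

print(owa_minimax_disparity(5, alpha=0.7))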
Abstract:
In 1957, Farrell proposed to measure technical (in)efficiency as the realised deviation from a frontier isoquant. Since then, researchers have developed several methods to derive the production frontier and have extended the scope of frontier techniques to the measurement of total factor productivity. In this paper, I present the core techniques for the measurement of technical efficiency and productivity based on the notion of a frontier and introduce the more recent technological advances in the field.
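As one concrete instance of a frontier technique of the kind surveyed here, the following sketch computes Farrell's input-oriented efficiency by data envelopment analysis under constant returns to scale; the data are hypothetical:

```python
# Minimal sketch of Farrell's input-oriented technical efficiency via DEA
# under constant returns to scale (one frontier method among those surveyed).
import numpy as np
from scipy.optimize import linprog

def farrell_efficiency(X, Y, o):
    """Efficiency of unit o: smallest theta such that theta*x_o is still
    producible by a non-negative combination of observed units.
    X: (J, K) inputs, Y: (J, M) outputs. Variables: [theta, lambda_1..lambda_J]."""
    J, K = X.shape
    M = Y.shape[1]
    c = np.zeros(1 + J); c[0] = 1.0              # minimise theta
    A_ub, b_ub = [], []
    for k in range(K):                           # sum_j lam_j*x_jk <= theta*x_ok
        A_ub.append(np.concatenate(([-X[o, k]], X[:, k]))); b_ub.append(0.0)
    for m in range(M):                           # sum_j lam_j*y_jm >= y_om
        A_ub.append(np.concatenate(([0.0], -Y[:, m]))); b_ub.append(-Y[o, m])
    bounds = [(None, None)] + [(0, None)] * J
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[0]

X = np.array([[2.0], [4.0], [3.0]])              # one input per unit
Y = np.array([[2.0], [3.0], [4.0]])              # one output per unit
print([round(farrell_efficiency(X, Y, o), 3) for o in range(3)])  # unit 3 is efficient
```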
Abstract:
A technique for interrogating multiplexed fibre Bragg grating (FBG) sensors using an arrayed waveguide grating (AWG) is described. The approach considerably extends the sensing range from that achieved previously, while providing a strain resolution of 17 nε/√Hz at 30 Hz.
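For context, the textbook relation between Bragg wavelength shift and axial strain (not specific to this work) is

```latex
\frac{\Delta\lambda_B}{\lambda_B} \;=\; (1 - p_e)\,\varepsilon,
```

so, assuming a typical λ_B ≈ 1550 nm and effective photo-elastic coefficient p_e ≈ 0.22, a strain resolution of 17 nε/√Hz corresponds to resolving wavelength shifts of roughly 0.02 pm/√Hz.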
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and to ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two phase commit protocol that provide fault-tolerant and real-time behaviour were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner.

A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis helps to cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must still be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow the re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space.

The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. The hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
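For reference, a minimal sketch of the standard two-phase commit that these protocols extend, with a voting-phase timeout giving the basic fault-tolerant behaviour (the Petri-net-modelled, real-time variants developed in the thesis are not reproduced here):

```python
# Sketch of standard two-phase commit with a coordinator-side deadline:
# any missing or negative vote within the timeout aborts the atomic action.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

class Participant:
    def prepare(self) -> bool:   # phase 1: vote on whether to commit
        return True
    def commit(self):            # phase 2a: apply the action
        print("committed")
    def abort(self):             # phase 2b: roll back the action
        print("aborted")

def two_phase_commit(participants, timeout_s=0.5) -> bool:
    """Coordinator: collect votes with a deadline, then broadcast the decision."""
    with ThreadPoolExecutor() as pool:
        futures = [pool.submit(p.prepare) for p in participants]
        try:
            decision = all(f.result(timeout=timeout_s) for f in futures)
        except FutureTimeout:    # a late vote is treated as a NO vote
            decision = False
    for p in participants:
        p.commit() if decision else p.abort()
    return decision

print(two_phase_commit([Participant(), Participant()]))
```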
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically; the univariate, highly non-linear stochastic double well; and the multivariate, chaotic stochastic Lorenz ’63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well known methods, such as the hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically how their behaviour changes as the observation density or the length of the time window increases. In particular we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) part of the model evolution equations using our new methods.
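The Ornstein–Uhlenbeck process is the one benchmark above whose likelihood is available in closed form. The sketch below simulates an OU path exactly and evaluates that Gaussian transition likelihood; the parameter values are hypothetical and this is not the paper's variational algorithm:

```python
# Exact simulation and exact log-likelihood of an Ornstein-Uhlenbeck path,
# dx = -theta*(x - mu)*dt + sigma*dW, whose transitions are Gaussian.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
theta, mu, sigma, dt, n = 1.5, 0.0, 0.8, 0.01, 5000

a = np.exp(-theta * dt)                       # exact one-step decay factor
s = sigma * np.sqrt((1 - a**2) / (2 * theta)) # exact one-step noise scale
x = np.empty(n); x[0] = mu
for t in range(n - 1):
    x[t + 1] = mu + (x[t] - mu) * a + s * rng.standard_normal()

def ou_loglik(x, theta, mu, sigma, dt):
    """Exact log-likelihood of an observed OU path via its transition density."""
    a = np.exp(-theta * dt)
    var = sigma**2 * (1 - a**2) / (2 * theta)
    mean = mu + (x[:-1] - mu) * a
    return norm.logpdf(x[1:], mean, np.sqrt(var)).sum()

# The exact likelihood peaks near the true drift parameter theta = 1.5.
for th in (0.5, 1.5, 3.0):
    print(th, round(ou_loglik(x, th, mu, sigma, dt), 1))
```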
Abstract:
Central venous catheters (CVCs) are being utilized with increasing frequency in intensive care and general medical wards. In spite of the extensive experience gained in their application, CVCs carry long-term risks of catheter sheath formation, infection, and thrombosis (of the catheter or the vessel itself) during catheterization. Such CVC-related complications are associated with increased morbidity, mortality, duration of hospitalization, and medical care cost [1]. The present study incorporates a novel group of Factor XIIIa (FXIIIa, plasma transglutaminase) inhibitors into a lubricious silicone elastomer in order to generate an optimized drug delivery system for catheters and other medical devices, in which an initial burst release is followed by a secondary sustained drug release profile. We propose that the incorporation of FXIIIa inhibitors into catheters, stents, and other medical implant devices would reduce the incidence of catheter sheath formation, thrombotic occlusion, and associated staphylococcal infection. This technique could also serve as a local delivery system, providing extended release with an immediate onset of action, for other poorly water-soluble compounds. © 2012 Elsevier B.V. All rights reserved.
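One simple way to picture the biphasic profile the study targets is a burst fraction followed by Higuchi-type (square-root-of-time) matrix release. This is a generic empirical sketch for illustration only; the burst fraction and rate constant are hypothetical, not kinetics fitted in the study:

```python
# Generic empirical sketch of a biphasic release profile: an initial burst
# followed by sustained, matrix-controlled (Higuchi, sqrt-t) release.
# Parameter values are hypothetical.
import numpy as np

def fraction_released(t_hours, burst=0.25, k_higuchi=0.04):
    """Cumulative fraction of drug released: burst + k*sqrt(t), capped at 1."""
    return np.minimum(burst + k_higuchi * np.sqrt(t_hours), 1.0)

for t in (0, 1, 24, 24 * 7, 24 * 30):
    print(f"{t:>5} h: {fraction_released(t):.2f}")
```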
Abstract:
A technique for interrogating multiplexed fibre Bragg grating (FBG) sensors using an arrayed waveguide grating (AWG) is described. The approach considerably extends the sensing range from that achieved previously, while providing a strain resolution of 17 nε/√Hz at 30 Hz.
Abstract:
A Finite Element Analysis (FEA) model is used to explore the relationship between clogging and hydraulics in Horizontal Subsurface Flow Treatment Wetlands (HSSF TWs) in the United Kingdom (UK). Clogging is assumed to be caused by particle transport, and an existing single collector efficiency model is implemented to describe this behaviour. The flow model was validated against HSSF TW survey results obtained from the literature. The model successfully simulated the influence of overland flow on hydrodynamics, and the interaction between vertical flow through the low permeability surface layer and the horizontal flow of the saturated water table. The clogging model described the development of clogging within the system but under-predicted the extent of clogging which occurred over 15 years, because important clogging mechanisms, such as biomass growth and vegetation establishment, were not considered by the model. The model showed the usefulness of FEA for linking hydraulic and clogging phenomena in HSSF TWs and could be extended to include treatment processes. © 2011 Springer Science+Business Media B.V.
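For orientation, single collector efficiency models of this kind rest on the classic clean-bed colloid filtration relation (Yao et al. form), in which suspended particle concentration decays exponentially along the flow path. The sketch below uses illustrative parameter values, not those calibrated in the paper:

```python
# Clean-bed colloid filtration sketch underlying single collector efficiency
# clogging models: C(x) = C0 * exp(-(3/2)*(1 - porosity)/d_collector * alpha * eta0 * x).
# Parameter values are illustrative, not those calibrated in the paper.
import numpy as np

def concentration_profile(x, porosity=0.38, d_collector=0.01,
                          alpha=0.5, eta0=0.01, c0=1.0):
    """Suspended particle concentration along the flow path x (m)."""
    rate = 1.5 * (1 - porosity) / d_collector * alpha * eta0
    return c0 * np.exp(-rate * x)

x = np.array([0.0, 1.0, 5.0, 10.0])   # distance from the inlet (m)
print(concentration_profile(x))       # the deposited mass drives local clogging
```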