70 results for two input two output
in Aston University Research Archive
Abstract:
Data envelopment analysis (DEA) is a methodology for measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. Crisp input and output data are fundamentally indispensable in conventional DEA. However, the observed values of the input and output data in real-world problems are sometimes imprecise or vague. Many researchers have proposed various fuzzy methods for dealing with the imprecise and ambiguous data in DEA. In this study, we provide a taxonomy and review of the fuzzy DEA methods. We present a classification scheme with four primary categories, namely, the tolerance approach, the α-level based approach, the fuzzy ranking approach and the possibility approach. We discuss each classification scheme and group the fuzzy DEA papers published in the literature over the past 20 years. To the best of our knowledge, this paper appears to be the only review and complete source of references on fuzzy DEA. © 2011 Elsevier B.V. All rights reserved.
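All four approaches in the taxonomy are built on top of the crisp DEA programme. As a point of reference, here is a minimal sketch of the input-oriented CCR envelopment model that the fuzzy variants modify; the data and the function name are illustrative, not taken from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU `o`.
    X: (m, n) inputs, Y: (s, n) outputs; columns are DMUs."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # minimise theta over [theta, lambda]
    A_in = np.hstack([-X[:, [o]], X])          # sum_j lam_j x_ij <= theta * x_io
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # sum_j lam_j y_rj >= y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                             # efficiency score in (0, 1]

# two inputs, two outputs, five DMUs (illustrative data only)
X = np.array([[2., 3., 4., 5., 6.],
              [4., 2., 5., 3., 6.]])
Y = np.array([[1., 2., 2., 3., 2.],
              [2., 1., 3., 2., 3.]])
scores = [ccr_efficiency(X, Y, j) for j in range(X.shape[1])]
```

The fuzzy methods surveyed replace the crisp entries of X and Y with tolerances, α-cuts, fuzzy-ranked quantities or possibility distributions, and re-solve programmes of this form.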
Abstract:
Error rates of a Boolean perceptron with threshold and either spherical or Ising constraint on the weight vector are calculated for storing patterns from biased input and output distributions, derived within a one-step replica symmetry breaking (RSB) treatment. For unbiased output distribution and non-zero stability of the patterns, we find a critical load, α_p, above which two solutions to the saddle-point equations appear; one with higher free energy and zero threshold and a dominant solution with non-zero threshold. We examine this second-order phase transition and the dependence of α_p on the required pattern stability, κ, for both one-step RSB and replica symmetry (RS) in the spherical case and for one-step RSB in the Ising case.
Abstract:
This paper develops two new indices for measuring productivity in multi-input multi-output situations. One index enables the measurement of productivity change of a unit over time, while the second index makes it possible to compare two units on productivity at the same or different points in time. Productivity in a single-input single-output context is defined as the ratio of output to input. In multi-input multi-output contexts this ratio is not defined. Instead, one of the methods traditionally used is the Malmquist Index of productivity change over time. This is computed by reference to the distances of the input-output bundles of a production unit at two different points in time from the efficient boundaries corresponding to those two points in time. The indices developed in this paper depart from the use of two different reference boundaries; instead, they use a single reference boundary which, in a sense, is the most efficient boundary observed over two or more successive time periods. We discuss the assumptions which make possible the definition of such a single reference boundary and proceed to develop the two Malmquist-type indices for measuring productivity. One key advantage of using a single reference boundary is that the resulting index values are circular. That is, it is possible to use the index values of successive time periods to derive an index value of productivity change over a time period of any length covered by successive index values, or vice versa. Further, the use of a single reference boundary makes it possible to construct an index for comparing the productivities of two units either at the same or at two different points in time. This was not possible with the traditional Malmquist Index. We decompose both new indices into components which isolate production unit effects from industry or comparator unit effects. The components themselves, like the indices developed, are also circular. The components of the indices drill down to reveal more clearly the performance of each unit over time, relative either to itself or to other units. The indices developed and their components are aimed at managers of production units, to enable them to diagnose the performance of their units with a view to guiding them to improved performance.
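To make the circularity property concrete, the following sketch uses standard Malmquist notation (the distance function D and the symbols are ours, not necessarily the paper's): the conventional index averages across two period boundaries, whereas a single reference boundary R reduces the index to a plain ratio, from which circularity is immediate.

```latex
% Conventional Malmquist index between periods t and t+1 (two boundaries):
M_{t,t+1} = \left[
  \frac{D^{t}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})} \cdot
  \frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t},y^{t})}
\right]^{1/2}

% With a single reference boundary R the index becomes a ratio,
% and circularity follows immediately:
M^{R}_{t,t+1} = \frac{D^{R}(x^{t+1},y^{t+1})}{D^{R}(x^{t},y^{t})},
\qquad
M^{R}_{1,3} = M^{R}_{1,2}\, M^{R}_{2,3}.
```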
Abstract:
This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri net based techniques is that of state space explosion; a modular approach to both design and analysis would help cope with this problem. Although extensions to Petri nets that allow module construction exist, generally the modularisation is restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques that are used to model large systems in Petri nets are considered. A hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events allows the management of state space. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri net based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
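For orientation, the standard two-phase commit protocol that the thesis extends can be sketched as follows; the class and method names are hypothetical, and the sketch deliberately omits exactly the fault-tolerance and real-time behaviour that the extended protocols add.

```python
from enum import Enum

class Vote(Enum):
    COMMIT = 1
    ABORT = 2

def two_phase_commit(coordinator_log, participants):
    """Bare two-phase commit round (no failure handling).

    Each participant exposes prepare() -> Vote, commit() and abort().
    The robust real-time variants developed in the thesis add timeouts,
    logging and recovery on top of this skeleton."""
    # Phase 1 (voting): ask every participant to prepare
    votes = [p.prepare() for p in participants]
    decision = Vote.COMMIT if all(v is Vote.COMMIT for v in votes) else Vote.ABORT
    coordinator_log.append(decision)        # the decision point must be durable
    # Phase 2 (completion): broadcast the decision to all participants
    for p in participants:
        p.commit() if decision is Vote.COMMIT else p.abort()
    return decision
```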
Abstract:
This thesis is concerned with the measurement of the characteristics of nonlinear systems by crosscorrelation, using pseudorandom input signals based on m-sequences. The systems are characterised by Volterra series, and analytical expressions relating the rth-order Volterra kernel to r-dimensional crosscorrelation measurements are derived. It is shown that the two-dimensional crosscorrelation measurements are related to the corresponding second-order kernel values by a set of equations which may be structured into a number of independent subsets. The m-sequence properties determine how the maximum order of the subsets for off-diagonal values is related to the upper bound of the arguments for nonzero kernel values. The upper bound of the arguments is used as a performance index, and the performance of antisymmetric pseudorandom binary, ternary and quinary signals is investigated. The performance indices obtained above are small in relation to the periods of the corresponding signals. To achieve higher performance with ternary signals, a method is proposed for combining the estimates of the second-order kernel values so that the effects of some of the undesirable nonzero values in the fourth-order autocorrelation function of the input signal are removed. The identification of the dynamics of two-input, single-output systems with multiplicative nonlinearity is investigated. It is shown that the characteristics of such a system may be determined by crosscorrelation experiments using phase-shifted versions of a common signal as inputs. The effects of nonlinearities on the estimates of system weighting functions obtained by crosscorrelation are also investigated. Results obtained by correlation testing of an industrial process are presented, and the differences between theoretical and experimental results are discussed for this case.
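The first-order (weighting function) version of such a crosscorrelation measurement can be sketched in a few lines; the system model, kernel length and nonlinearity below are illustrative assumptions, not values from the thesis.

```python
import numpy as np
from scipy.signal import max_len_seq, lfilter

# maximum-length (m) sequence as a pseudorandom binary input, levels +/-1
N_BITS = 10
x = 2.0 * max_len_seq(N_BITS)[0] - 1.0           # one period of length 2**10 - 1

# hypothetical system under test: linear dynamics plus a mild squarer
h_true = np.exp(-np.arange(64) / 8.0)            # first-order (linear) kernel
y_lin = lfilter(h_true, [1.0], np.tile(x, 3))    # drive with repeated periods
y = y_lin + 0.1 * y_lin**2                       # nonlinear contamination

# first-order kernel estimate by crosscorrelation over one steady-state period
P = len(x)
y_ss = y[-P:]
h_est = np.array([np.mean(y_ss * np.roll(x, tau)) for tau in range(64)])
h_est /= np.mean(x * x)                          # normalise by input power
# the squarer perturbs the estimate slightly -- the kind of nonlinear
# contamination of weighting-function estimates the thesis analyses
```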
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimising the differences between the experimental and the model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system. This system was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured, and a very good agreement between the two profiles was achieved within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled both as a multi-loop decentralised SISO (Single Input Single Output) problem and as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection.
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops pairing rotor speed with raffinate concentration and solvent flowrate with extract concentration showed weak interaction. Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and input-output variable constraints.
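Interaction analysis of this kind is commonly summarised by the relative gain array; a minimal sketch follows, with an illustrative 2x2 steady-state gain matrix (the numbers are ours, not the thesis's). An RGA close to the identity, as here, indicates the weak loop interaction reported.

```python
import numpy as np

def rga(G):
    """Relative gain array: elementwise product of G and its inverse-transpose."""
    return G * np.linalg.inv(G).T

# hypothetical steady-state gains: rows = (raffinate, extract) concentration,
# columns = (rotor speed, solvent flowrate); values are illustrative only
G = np.array([[-1.8, 0.4],
              [ 0.3, 2.1]])
print(rga(G))   # diagonal near 1 => pair inputs and outputs on the diagonal
```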
Abstract:
In this study some common types of rolling bearing vibrations are analysed in depth, both theoretically and experimentally. The study is restricted to vibrations in the radial direction of bearings having a pure radial load and a positive radial clearance. The general vibrational behaviour of such bearings has been investigated with respect to the effects of varying compliance, manufacturing tolerances and the interaction between the bearing and the machine structure into which it is fitted. The equations of motion for a rotor supported by a bearing in which the stiffness varies with cage position have been set up, and examples of solutions obtained by digital simulation are given. A method to calculate the amplitudes and frequencies of vibration components due to out-of-roundness of the inner ring and varying roller diameters has been developed. The results from these investigations have been combined with a theory for bearing/machine frame interaction using the mechanical impedance technique, thereby facilitating prediction of the vibrational behaviour of the whole set-up. Finally, the effects of bearing fatigue and wear have been studied, with particular emphasis on the use of vibration analysis for condition monitoring purposes. A number of monitoring methods have been tried and their effectiveness discussed. The experimental investigation was carried out using two purpose-built rigs. For the analysis of the experimental measurements, a digital minicomputer was adapted for signal processing and a suite of programs was written. The program package performs several of the commonly used signal analysis processes and includes all necessary input and output functions.
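Condition monitoring of rolling bearings typically compares the measured vibration spectrum against the bearing's characteristic defect frequencies. The following sketch implements the standard ball-pass frequency formulas, which are general results rather than this thesis's own derivations; the geometry values are illustrative.

```python
import numpy as np

def ball_pass_frequencies(f_shaft, n_rollers, d_roller, d_pitch, contact_deg=0.0):
    """Standard outer/inner-race ball-pass frequencies for a rolling bearing.
    Peaks at these frequencies in the vibration spectrum indicate race defects."""
    ratio = (d_roller / d_pitch) * np.cos(np.radians(contact_deg))
    bpfo = 0.5 * n_rollers * f_shaft * (1.0 - ratio)   # outer-race defect frequency
    bpfi = 0.5 * n_rollers * f_shaft * (1.0 + ratio)   # inner-race defect frequency
    return bpfo, bpfi

# illustrative geometry: 25 Hz shaft, 12 rollers, 10 mm rollers on a 65 mm pitch
print(ball_pass_frequencies(25.0, 12, 0.010, 0.065))   # ~ (126.9, 173.1) Hz
```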
Abstract:
This thesis is organised into three parts. In Part 1 relevant literature is reviewed and three critical components in the development of a cognitive approach to instruction are identified. These three components are considered to be the structure of the subject matter, the learner's cognitive structures, and the learner's cognitive strategies, which act as control and transfer devices between the instructional materials and the learner's cognitive structures. Six experiments are described in Part 2, which is divided into two methodologically distinct units. The three experiments of Unit 1 examined how learning from materials constructed from concept name by concept attribute matrices is influenced by learner- or experimenter-controlled sequence and organisation. The results suggested that the relationships between input organisation, output organisation and recall are complex, and highlighted the importance of investigating organisational strategies at both acquisition and recall. The role of subjects' previously acquired knowledge and skills in relation to the instructional material was considered to be an important factor. The three experiments of Unit 2 utilised a "diagramming relationships" methodology, which was devised as one means of investigating the processes by which new information is assimilated into an individual's cognitive structure. The methodology was found to be useful in identifying cognitive strategies related to successful task performance. The results suggested that errors could be minimised and comprehension improved on the diagramming relationships task by instructing subjects in ways which induced successful processing operations. Part 3 of this thesis highlights salient issues raised by the experimental work within the framework outlined in Part 1 and discusses potential implications for future theoretical developments and research.
Abstract:
The development of more realistic constitutive models for granular media, such as sand, requires ingredients which take into account the internal micro-mechanical response to deformation. Unfortunately, at present, very little is known about these mechanisms, and it is therefore instructive to find out more about the internal nature of granular samples by conducting suitable tests. In contrast to physical testing, the method of investigation used in this study employs the Distinct Element Method. This is a computer-based, iterative, time-dependent technique that allows the deformation of granular assemblies to be numerically simulated. By making assumptions regarding contact stiffnesses, each individual contact force can be measured, and by resolution the particle centroid forces can be calculated. Then, dividing the particle forces by their respective masses gives the accelerations, from which particle centroid velocities and displacements are obtained by numerical integration. The Distinct Element Method is incorporated into a computer program, 'Ball'. This program is effectively a numerical apparatus which forms a logical housing for the method, allows data input and output, and also provides testing control. Using this numerical apparatus, tests have been carried out on disc assemblies, and many new and interesting observations regarding the micromechanical behaviour are revealed. In order to relate the observed microscopic mechanisms of deformation to the flow of the granular system, two separate approaches have been used. Firstly, a constitutive model has been developed which describes the yield function, flow rule and translation rule for regular assemblies of spheres and discs when subjected to coaxial deformation. Secondly, statistical analyses have been carried out using data extracted from the simulation tests. These analyses define and quantify granular structure and then show how the force and velocity distributions use the structure to produce the corresponding stress and strain-rate tensors.
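The contact-force/acceleration/integration cycle described above can be sketched for two discs with a linear contact spring; the stiffness, masses and timestep are illustrative, and the actual 'Ball' program adds shear springs, damping, boundary control and many more particles.

```python
import numpy as np

# minimal distinct-element step for two discs with a linear contact spring
k = 1e5                                      # contact normal stiffness [N/m]
dt = 1e-5                                    # timestep [s]
r = np.array([0.01, 0.01])                   # disc radii [m]
m = np.array([0.002, 0.002])                 # disc masses [kg]
x = np.array([[0.0, 0.0], [0.019, 0.0]])     # centroids, slightly overlapping
v = np.array([[0.1, 0.0], [-0.1, 0.0]])      # centroid velocities [m/s]

for _ in range(1000):
    f = np.zeros_like(x)
    d = x[1] - x[0]
    dist = np.linalg.norm(d)
    overlap = r[0] + r[1] - dist
    if overlap > 0.0:                        # contact: repulsive spring force
        n = d / dist
        f[0] -= k * overlap * n
        f[1] += k * overlap * n
    a = f / m[:, None]                       # force/mass -> centroid acceleration
    v += a * dt                              # numerical integration of velocity...
    x += v * dt                              # ...and of displacement
```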
Abstract:
Financial institutions are an integral part of any modern economy. In the 1970s and 1980s, Gulf Cooperation Council (GCC) countries made significant progress in financial deepening and in building a modern financial infrastructure. This study aims to evaluate the performance (efficiency) of financial institutions (the banking sector) in GCC countries. Since the selected variables include negative data for some banks and positive data for others, and the available evaluation methods cannot handle this case, we developed a semi-oriented radial model (SORM) to perform this evaluation. Furthermore, since the SORM evaluation result provides limited information for any decision maker (bankers, investors, etc.), we propose a second-stage analysis using the classification and regression (C&R) method, combining the SORM results with other environmental data (financial, economic and political) to derive rules for the efficient banks; the results will thus be useful to bankers seeking to improve their banks' performance and to investors seeking to maximise their returns. There are two main approaches to evaluating the performance of decision making units (DMUs), and under each there are different methods with different assumptions. The parametric approach is based on econometric regression theory, while the nonparametric approach is based on mathematical linear programming theory. Under the nonparametric approach there are two methods: Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH), while there are three methods under the parametric approach: Stochastic Frontier Analysis (SFA), Thick Frontier Analysis (TFA) and Distribution-Free Analysis (DFA). The results show that DEA and SFA are the most applicable methods in the banking sector, with DEA seemingly the more popular among researchers. However, DEA, like SFA, still faces many challenges; one of these is how to deal with negative data, since DEA requires the assumption that all input and output values are non-negative, while in many applications negative outputs can appear, e.g. losses as opposed to profits. Although a few DEA models have been developed to deal with negative data, we believe that each of them has its own limitations; we therefore developed the semi-oriented radial model (SORM) to handle the negativity issue in DEA. The application results using SORM show that the overall performance of GCC banking is relatively high (85.6%). Although the efficiency score fluctuated over the study period (1998-2007), owing to the second Gulf War and the international financial crisis, it remained higher than the efficiency scores of counterparts in other countries. Banks operating in Saudi Arabia appear to be the most efficient, followed by UAE, Omani and Bahraini banks, while banks operating in Qatar and Kuwait appear to be the least efficient; this is because these two countries were the most affected by the second Gulf War. The results also show that there is no statistical relationship between operating style (Islamic or Conventional) and bank efficiency. Even though there are no statistically significant differences due to operating style, Islamic banks appear to be more efficient than Conventional banks, with an average efficiency score of 86.33% compared to 85.38% for Conventional banks. Furthermore, Islamic banks appear to be more affected by the political crisis (the second Gulf War), whereas Conventional banks appear to be more affected by the financial crisis.
Abstract:
With careful calculation of signal forwarding weights, relay nodes can work collaboratively to enhance downlink transmission performance by forming a virtual multiple-input multiple-output beamforming system. Although collaborative relay beamforming schemes for a single user have been widely investigated for cellular systems in the previous literature, there are few studies on relay beamforming for multiple users. In this paper, we study collaborative downlink signal transmission with multiple amplify-and-forward relay nodes for multiple users in cellular systems. We propose two new algorithms to determine the beamforming weights, with the common objective of minimizing the power consumption of the relay nodes. In the first algorithm, we aim to guarantee the received signal-to-noise ratio at the users for relay beamforming with orthogonal channels. We prove that the solution obtained by a semidefinite relaxation technique is optimal. In the second algorithm, we propose an iterative algorithm that jointly selects the base station antennas and optimizes the relay beamforming weights to reach the target signal-to-interference-and-noise ratio at the users with nonorthogonal channels. Numerical results validate our theoretical analysis and demonstrate that the proposed optimal schemes can effectively reduce the relay power consumption compared with several other beamforming approaches. © 2012 John Wiley & Sons, Ltd.
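A simplified version of the semidefinite relaxation step in the first algorithm can be sketched as follows. This is a generic relay power-minimisation SDR under per-user SNR constraints; it ignores the forwarded relay noise and inter-user terms of the paper's full formulation, and the channel values and targets are illustrative.

```python
import numpy as np
import cvxpy as cp

# hypothetical setup: N relays, K users, flat complex channels
rng = np.random.default_rng(0)
N, K, gamma, sigma2 = 6, 3, 2.0, 1.0
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

# SDR: the rank-one matrix W = w w^H is relaxed to any Hermitian PSD matrix
W = cp.Variable((N, N), hermitian=True)
constraints = [W >> 0]
for k in range(K):
    A_k = np.outer(H[k], H[k].conj())          # |h_k^H w|^2 = trace(A_k W)
    constraints.append(cp.real(cp.trace(A_k @ W)) >= gamma * sigma2)
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(W))), constraints)
prob.solve()

# rank-one extraction: principal eigenvector scaled to the allocated power
vals, vecs = np.linalg.eigh(W.value)
w = np.sqrt(max(vals[-1], 0.0)) * vecs[:, -1]
```

If the solver returns a rank-one W, the extracted w is optimal for the original problem, which is the flavour of optimality result the paper proves for its orthogonal-channel formulation.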
Abstract:
Performance evaluation in conventional data envelopment analysis (DEA) requires crisp numerical values. However, the observed values of the input and output data in real-world problems are often imprecise or vague. These imprecise and vague data can be represented by linguistic terms characterised by fuzzy numbers in DEA, to reflect the decision-makers' intuition and subjective judgements. This paper extends the conventional DEA models to a fuzzy framework by proposing a new fuzzy additive DEA model for evaluating the efficiency of a set of decision-making units (DMUs) with fuzzy inputs and outputs. The contribution of this paper is threefold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA; (2) we propose a new fuzzy additive DEA model derived from the α-level approach; and (3) we demonstrate the practical aspects of our model with two numerical examples and show its comparability with five different fuzzy DEA methods in the literature. Copyright © 2011 Inderscience Enterprises Ltd.
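The α-level approach replaces each fuzzy observation by an interval at a chosen α and solves the DEA model at the interval bounds, yielding an efficiency interval per DMU. Below is a minimal sketch of the interval construction for a triangular fuzzy number, a common representation; the function name and values are ours, not the paper's.

```python
def alpha_cut(l, m, u, alpha):
    """alpha-cut [lo, hi] of a triangular fuzzy number (l, m, u), 0 <= alpha <= 1.
    At alpha = 1 the interval collapses to the crisp peak m."""
    return l + alpha * (m - l), u - alpha * (u - m)

# e.g. a fuzzy output 'about 10' spanning 8..13:
lo, hi = alpha_cut(8.0, 10.0, 13.0, alpha=0.5)   # -> (9.0, 11.5)
```

Solving the DEA programme once with the favourable bounds and once with the unfavourable bounds then brackets each DMU's efficiency at that α.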
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. All the data in conventional DEA with input and/or output ratios assume the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios to demonstrate the applicability and efficacy of the proposed models, where the traditional indicators are mostly financial ratios. © 2011 Elsevier Inc.
Abstract:
Conventional DEA models assume deterministic, precise and non-negative data for input and output observations. However, real applications may be characterized by observations that are given in the form of intervals and include negative numbers. For instance, the consumption of electricity in decentralized energy resources may be either negative or positive, depending on the heat consumption. Likewise, the heat losses in distribution networks may lie within a certain range, depending on, e.g., external temperature and real-time outtake. Complementing earlier work that separately addressed the two problems of interval data and negative data, we propose a comprehensive evaluation process for measuring the relative efficiencies of a set of DMUs in DEA. In our general formulation, the intervals may have upper or lower bounds with different signs. The proposed method determines upper and lower bounds for the technical efficiency through the limits of the intervals after decomposition. Based on the interval scores, DMUs are then classified into three classes, namely the strictly efficient, the weakly efficient and the inefficient. An intuitive ranking approach is presented for the respective classes. The approach is demonstrated through an application to the evaluation of bank branches. © 2013.
Abstract:
The entorhinal cortex (EC) controls hippocampal input and output, playing major roles in memory and spatial navigation. Different layers of the EC subserve different functions and a number of studies have compared properties of neurones across layers. We have studied synaptic inhibition and excitation in EC neurones, and we have previously compared spontaneous synaptic release of glutamate and GABA using patch clamp recordings of synaptic currents in principal neurones of layers II (L2) and V (L5). Here, we add comparative studies in layer III (L3). Such studies essentially look at neuronal activity from a presynaptic viewpoint. To correlate this with the postsynaptic consequences of spontaneous transmitter release, we have determined global postsynaptic conductances mediated by the two transmitters, using a method to estimate conductances from membrane potential fluctuations. We have previously presented some of this data for L3 and now extend to L2 and L5. Inhibition dominates excitation in all layers but the ratio follows a clear rank order (highest to lowest) of L2>L3>L5. The variance of the background conductances was markedly higher for excitation and inhibition in L2 compared to L3 or L5. We also show that induction of synchronized network epileptiform activity by blockade of GABA inhibition reveals a relative reluctance of L2 to participate in such activity. This was associated with maintenance of a dominant background inhibition in L2, whereas in L3 and L5 the absolute level of inhibition fell below that of excitation, coincident with the appearance of synchronized discharges. Further experiments identified potential roles for competition for bicuculline by ambient GABA at the GABAA receptor, and strychnine-sensitive glycine receptors in residual inhibition in L2. We discuss our results in terms of control of excitability in neuronal subpopulations of EC neurones and what these may suggest for their functional roles. © 2014 Greenhill et al.