Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and its sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed in practice under stable and efficient conditions. Data were collected at steady state using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model: a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimising the differences between the experimental and model-predicted concentration profiles at steady state using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured, and very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled both as a multi-loop decentralised SISO (Single Input Single Output) system and as a centralised MIMO (Multi-Input Multi-Output) system, using conventional as well as model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capability and load rejection.
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops pairing rotor speed with raffinate concentration and solvent flowrate with extract concentration showed weak interaction. Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and constraints on the input and output variables.
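As a concrete illustration of the RGA pairing criterion mentioned above, the sketch below computes the relative gain array for a two-input two-output gain matrix. The gain values are invented for illustration only; the thesis's identified gains are not given in the abstract.

```python
import numpy as np

def relative_gain_array(G):
    """RGA of a square steady-state gain matrix: Lambda = G .* (G^-1)^T."""
    return G * np.linalg.inv(G).T

# Illustrative 2x2 gain matrix for the TITO extraction column:
# rows = (raffinate concentration, extract concentration),
# cols = (agitator speed, solvent flowrate). Values are made up.
G = np.array([[1.8, 0.4],
              [0.3, 1.2]])

print(relative_gain_array(G))
# Diagonal elements close to 1 (here ~1.06) support the diagonal pairing
# (agitator speed -> raffinate, solvent flowrate -> extract),
# consistent with the weak loop interaction reported above.
```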
Abstract:
In this study some common types of rolling bearing vibration are analysed in depth, both theoretically and experimentally. The study is restricted to vibrations in the radial direction of bearings having a pure radial load and a positive radial clearance. The general vibrational behaviour of such bearings has been investigated with respect to the effects of varying compliance, manufacturing tolerances and the interaction between the bearing and the machine structure into which it is fitted. The equations of motion for a rotor supported by a bearing in which the stiffness varies with cage position have been set up, and examples of solutions obtained by digital simulation are given. A method to calculate the amplitudes and frequencies of vibration components due to out-of-roundness of the inner ring and varying roller diameters has been developed. The results from these investigations have been combined with a theory for bearing/machine frame interaction using the mechanical impedance technique, thereby facilitating prediction of the vibrational behaviour of the whole set-up. Finally, the effects of bearing fatigue and wear have been studied, with particular emphasis on the use of vibration analysis for condition monitoring purposes. A number of monitoring methods have been tried and their effectiveness discussed. The experimental investigation was carried out using two purpose-built rigs. For the analysis of the experimental measurements a digital minicomputer was adapted for signal processing and a suite of programs was written. The program package performs several of the commonly used signal analysis processes and includes all the necessary input and output functions.
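The varying-compliance mechanism described above can be illustrated with a minimal digital simulation of a single-degree-of-freedom rotor on a bearing whose stiffness is modulated at the cage-pass frequency. All parameter values below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the thesis): rotor mass, damping,
# mean stiffness, stiffness modulation depth, and cage-pass frequency.
m, c = 1.0, 50.0          # kg, N.s/m
k0, dk = 2.0e6, 0.1e6     # N/m mean stiffness and modulation amplitude
f_vc = 120.0              # Hz, varying-compliance (cage-pass) frequency
F = 500.0                 # N, static radial load

def rhs(t, y):
    x, v = y
    k = k0 + dk * np.cos(2 * np.pi * f_vc * t)  # stiffness varies with cage position
    return [v, (F - c * v - k * x) / m]

sol = solve_ivp(rhs, (0.0, 0.5), [F / k0, 0.0], max_step=1e-4)
# sol.y[0] is the radial displacement; its spectrum contains components
# at the varying-compliance frequency and its harmonics.
```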
Abstract:
This thesis is organised into three parts. In Part 1 the relevant literature is reviewed and three critical components in the development of a cognitive approach to instruction are identified. These three components are considered to be the structure of the subject matter, the learner's cognitive structures, and the learner's cognitive strategies, which act as control and transfer devices between the instructional materials and the learner's cognitive structures. Six experiments are described in Part 2, which is divided into two methodologically distinct units. The three experiments of Unit 1 examined how learning from materials constructed from concept-name by concept-attribute matrices is influenced by learner- or experimenter-controlled sequence and organisation. The results suggested that the relationships between input organisation, output organisation and recall are complex, and highlighted the importance of investigating organisational strategies at both acquisition and recall. The role of subjects' previously acquired knowledge and skills in relation to the instructional material was considered to be an important factor. The three experiments of Unit 2 utilised a "diagramming relationships" methodology, which was devised as one means of investigating the processes by which new information is assimilated into an individual's cognitive structure. The methodology was found to be useful in identifying cognitive strategies related to successful task performance. The results suggested that errors could be minimised and comprehension improved on the diagramming relationships task by instructing subjects in ways which induced successful processing operations. Part 3 of this thesis highlights salient issues raised by the experimental work within the framework outlined in Part 1 and discusses potential implications for future theoretical developments and research.
Abstract:
The development of more realistic constitutive models for granular media, such as sand, requires ingredients which take into account the internal micro-mechanical response to deformation. Unfortunately, at present very little is known about these mechanisms, and it is therefore instructive to find out more about the internal nature of granular samples by conducting suitable tests. In contrast to physical testing, the method of investigation used in this study employs the Distinct Element Method. This is a computer-based, iterative, time-dependent technique that allows the deformation of granular assemblies to be numerically simulated. By making assumptions regarding contact stiffnesses, each individual contact force can be measured, and by resolution the particle centroid forces can be calculated. Then, by dividing the particle forces by their respective masses, particle centroid velocities and displacements are obtained by numerical integration. The Distinct Element Method is incorporated into a computer program, 'Ball'. This program is effectively a numerical apparatus which forms a logical housing for the method, allows data input and output, and provides testing control. By using this numerical apparatus, tests have been carried out on disc assemblies and many new and interesting observations regarding the micromechanical behaviour are revealed. In order to relate the observed microscopic mechanisms of deformation to the flow of the granular system, two separate approaches have been used. Firstly, a constitutive model has been developed which describes the yield function, flow rule and translation rule for regular assemblies of spheres and discs when subjected to coaxial deformation. Secondly, statistical analyses have been carried out using data extracted from the simulation tests. These analyses define and quantify granular structure and then show how the force and velocity distributions use the structure to produce the corresponding stress and strain-rate tensors.
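A minimal sketch of the force-resolution and integration cycle described above, for a single pair of discs with a linear contact stiffness. The values and the plain Euler update are illustrative assumptions; the program 'Ball' itself is not reproduced here.

```python
import numpy as np

kn = 1.0e5        # N/m, assumed linear normal contact stiffness
dt = 1.0e-5       # s, time step
pos = np.array([[0.0, 0.0], [0.019, 0.0]])   # disc centres (m)
vel = np.array([[0.1, 0.0], [0.0, 0.0]])     # centroid velocities (m/s)
radius = np.array([0.01, 0.01])
mass = np.array([0.05, 0.05])

# Contact detection and linear force-displacement law.
force = np.zeros_like(pos)
d = pos[1] - pos[0]
dist = np.linalg.norm(d)
overlap = radius[0] + radius[1] - dist
if overlap > 0.0:
    n = d / dist                      # unit normal from disc 0 to disc 1
    f = kn * overlap * n
    force[0] -= f                     # discs pushed apart
    force[1] += f

# Resolve centroid accelerations (force / mass) and integrate to get
# velocities and displacements, as in the cycle described above.
acc = force / mass[:, None]
vel += acc * dt
pos += vel * dt
```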
Abstract:
Financial institutions are an integral part of any modern economy. In the 1970s and 1980s, Gulf Cooperation Council (GCC) countries made significant progress in financial deepening and in building a modern financial infrastructure. This study aims to evaluate the performance (efficiency) of financial institutions (the banking sector) in GCC countries. Since the selected variables include data that are negative for some banks and positive for others, and the available evaluation methods cannot handle this case, we developed a Semi-Oriented Radial Model (SORM) to perform the evaluation. Furthermore, since the SORM evaluation result provides only limited information for a decision maker (bankers, investors, etc.), we propose a second-stage analysis using the classification and regression (C&R) method, combining the SORM results with other environmental data (financial, economic and political) to set rules for the efficient banks; the results are thus useful to bankers seeking to improve their banks' performance and to investors seeking to maximise their returns. There are two main approaches to evaluating the performance of Decision Making Units (DMUs), and under each there are different methods with different assumptions. The parametric approach is based on econometric regression theory, while the nonparametric approach is based on mathematical linear programming theory. Under the nonparametric approach there are two methods, Data Envelopment Analysis (DEA) and Free Disposal Hull (FDH), while there are three methods under the parametric approach: Stochastic Frontier Analysis (SFA), Thick Frontier Analysis (TFA) and Distribution-Free Analysis (DFA). The review shows that DEA and SFA are the most applicable methods in the banking sector, with DEA apparently the most popular among researchers. However, DEA, like SFA, still faces many challenges; one of these is how to deal with negative data, since DEA requires the assumption that all input and output values are non-negative, while in many applications negative outputs can appear, e.g. losses in contrast with profits. Although a few models have been developed under DEA to deal with negative data, we believe each of them has its own limitations, and we therefore developed SORM to handle the negativity issue in DEA. The application results using SORM show that the overall performance of GCC banking is relatively high (85.6%). Although the efficiency score fluctuated over the study period (1998-2007), owing to the second Gulf War and the international financial crisis, it remained higher than the efficiency scores of counterpart banks in other countries. Banks operating in Saudi Arabia appear to be the most efficient, followed by UAE, Omani and Bahraini banks, while banks operating in Qatar and Kuwait appear to be the least efficient; this is because these two countries were the most affected by the second Gulf War. The results also show no statistical relationship between operating style (Islamic or conventional) and bank efficiency. Even though there are no statistical differences due to operating style, Islamic banks appear to be more efficient than conventional banks, with an average efficiency score of 86.33% compared to 85.38% for conventional banks. Furthermore, Islamic banks appear to have been more affected by the political crisis (the second Gulf War), whereas conventional banks appear to have been more affected by the financial crisis.
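The SORM formulation itself is not given in the abstract. For context, the sketch below implements the standard input-oriented CCR envelopment LP that such models extend; handling negative input/output values is exactly what SORM adds on top of this. The toy data are invented.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.

    X: (m, n) inputs, Y: (s, n) outputs, columns are DMUs.
    Decision vector z = [theta, lambda_1 .. lambda_n]; minimise theta
    subject to sum(lam*x) <= theta*x_o and sum(lam*y) >= y_o.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                   # objective: theta
    A_in = np.hstack([-X[:, [o]], X])             # input constraints
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # output constraints
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    bounds = [(None, None)] + [(0.0, None)] * n   # theta free, lambdas >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.fun

# Toy data: 2 inputs, 1 output, 4 banks (illustrative and non-negative;
# accommodating negative values is precisely what SORM contributes).
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
print([round(ccr_efficiency(X, Y, o), 3) for o in range(4)])
```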
Abstract:
With careful calculation of the signal forwarding weights, relay nodes can work collaboratively to enhance downlink transmission performance by forming a virtual multiple-input multiple-output beamforming system. Although collaborative relay beamforming schemes for a single user have been widely investigated for cellular systems in the previous literature, there are few studies on relay beamforming for multiple users. In this paper, we study collaborative downlink signal transmission with multiple amplify-and-forward relay nodes for multiple users in cellular systems. We propose two new algorithms to determine the beamforming weights, both with the objective of minimizing the power consumption of the relay nodes. In the first algorithm, we aim to guarantee the received signal-to-noise ratio at the users for relay beamforming with orthogonal channels. We prove that the solution obtained by a semidefinite relaxation technique is optimal. In the second algorithm, we propose an iterative algorithm that jointly selects the base station antennas and optimizes the relay beamforming weights to reach the target signal-to-interference-and-noise ratio at the users with nonorthogonal channels. Numerical results validate our theoretical analysis and demonstrate that the proposed optimal schemes can effectively reduce the relay power consumption compared with several other beamforming approaches. © 2012 John Wiley & Sons, Ltd.
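The abstract does not give the paper's constraint matrices, so the following is only a generic sketch of the semidefinite-relaxation step for relay power minimization under per-user SNR constraints, with randomly generated positive semidefinite matrices standing in for the channel-dependent ones. It assumes the cvxpy package.

```python
import numpy as np
import cvxpy as cp

# Generic SDR sketch for relay beamforming: W = w w^H is relaxed to W >= 0.
rng = np.random.default_rng(0)
n, K, gamma, sigma2 = 4, 2, 2.0, 1.0    # relays, users, target SNR, noise

def rand_psd(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T

D = rand_psd(n)                          # relay transmit-power matrix (assumed)
R = [rand_psd(n) for _ in range(K)]      # desired-signal matrices per user
Q = [rand_psd(n) for _ in range(K)]      # forwarded-noise matrices per user

W = cp.Variable((n, n), hermitian=True)
cons = [W >> 0]
for k in range(K):                       # SNR_k >= gamma, linear in W
    cons.append(cp.real(cp.trace(R[k] @ W))
                >= gamma * (cp.real(cp.trace(Q[k] @ W)) + sigma2))
prob = cp.Problem(cp.Minimize(cp.real(cp.trace(D @ W))), cons)
prob.solve()

# If the optimal W is (near) rank one, its principal eigenvector yields the
# beamforming weights -- the tightness case the paper proves for
# orthogonal channels.
eigval, eigvec = np.linalg.eigh(W.value)
w = np.sqrt(max(eigval[-1], 0.0)) * eigvec[:, -1]
```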
Abstract:
Performance evaluation in conventional data envelopment analysis (DEA) requires crisp numerical values. However, the observed values of the input and output data in real-world problems are often imprecise or vague. Such imprecise and vague data can be represented by linguistic terms characterised by fuzzy numbers in DEA to reflect decision-makers' intuition and subjective judgement. This paper extends the conventional DEA models to a fuzzy framework by proposing a new fuzzy additive DEA model for evaluating the efficiency of a set of decision-making units (DMUs) with fuzzy inputs and outputs. The contribution of this paper is threefold: (1) we consider ambiguous, uncertain and imprecise input and output data in DEA; (2) we propose a new fuzzy additive DEA model derived from the α-level approach; and (3) we demonstrate the practical aspects of our model with two numerical examples and show its comparability with five different fuzzy DEA methods in the literature. Copyright © 2011 Inderscience Enterprises Ltd.
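The α-level device the model builds on can be made concrete: at each α, a fuzzy number collapses to an interval over which the DEA model is then solved. Below is a minimal sketch for triangular fuzzy numbers; the paper's actual model formulation is not given in the abstract.

```python
def alpha_cut_triangular(a, b, c, alpha):
    """Interval [lo, hi] of a triangular fuzzy number (a, b, c) at level alpha.

    At alpha = 0 this is the full support [a, c]; at alpha = 1 it is the
    core {b}. An alpha-level DEA model solves over these intervals.
    """
    return a + alpha * (b - a), c - alpha * (c - b)

# A fuzzy output "about 50", represented here as (45, 50, 58):
print(alpha_cut_triangular(45, 50, 58, 0.0))   # (45.0, 58.0) -- full support
print(alpha_cut_triangular(45, 50, 58, 1.0))   # (50.0, 50.0) -- the core
```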
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools to improve the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. All the data in conventional DEA with input and/or output ratios assume the form of crisp numbers. However, the observed values of data in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios to demonstrate the applicability and efficacy of the proposed models in a setting where the traditional indicators are mostly financial ratios. © 2011 Elsevier Inc.
Abstract:
Conventional DEA models assume deterministic, precise and non-negative data for input and output observations. However, real applications may be characterized by observations that are given in the form of intervals and include negative numbers. For instance, the consumption of electricity in decentralized energy resources may be either negative or positive, depending on the heat consumption. Likewise, the heat losses in distribution networks may lie within a certain range, depending on, e.g., the external temperature and real-time outtake. Complementing earlier work that addressed the two problems of interval data and negative data separately, we propose a comprehensive evaluation process for measuring the relative efficiencies of a set of DMUs in DEA. In our general formulation, the intervals may have upper or lower bounds with different signs. The proposed method determines upper and lower bounds for the technical efficiency through the limits of the intervals after decomposition. Based on the interval scores, DMUs are then classified into three classes, namely the strictly efficient, the weakly efficient and the inefficient. An intuitive ranking approach is presented for the respective classes. The approach is demonstrated through an application to the evaluation of bank branches. © 2013.
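One standard way to obtain interval efficiency scores of the kind described above is to evaluate each DMU at the optimistic and pessimistic limits of the data and feed each resulting crisp set to an ordinary DEA model. The sketch below shows that endpoint construction only; note that the paper's decomposition additionally handles bounds of different signs, which this simple swap does not address.

```python
import numpy as np

def optimistic_pessimistic(X_lo, X_hi, Y_lo, Y_hi, o):
    """Crisp data sets bounding DMU o's efficiency under interval data.

    Upper bound: DMU o at its best (low inputs, high outputs), rivals at
    their worst. Lower bound: the reverse. Each returned (X, Y) pair can
    then be scored with any ordinary crisp DEA model.
    """
    Xb, Yb = X_hi.copy(), Y_lo.copy()               # rivals: worst case
    Xb[:, o], Yb[:, o] = X_lo[:, o], Y_hi[:, o]     # DMU o: best case
    Xw, Yw = X_lo.copy(), Y_hi.copy()               # rivals: best case
    Xw[:, o], Yw[:, o] = X_hi[:, o], Y_lo[:, o]     # DMU o: worst case
    return (Xb, Yb), (Xw, Yw)
```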
Abstract:
The entorhinal cortex (EC) controls hippocampal input and output, playing major roles in memory and spatial navigation. Different layers of the EC subserve different functions, and a number of studies have compared the properties of neurones across layers. We have studied synaptic inhibition and excitation in EC neurones, and have previously compared spontaneous synaptic release of glutamate and GABA using patch clamp recordings of synaptic currents in principal neurones of layers II (L2) and V (L5). Here, we add comparative studies in layer III (L3). Such studies essentially look at neuronal activity from a presynaptic viewpoint. To correlate this with the postsynaptic consequences of spontaneous transmitter release, we determined the global postsynaptic conductances mediated by the two transmitters, using a method to estimate conductances from membrane potential fluctuations. We have previously presented some of these data for L3 and now extend the analysis to L2 and L5. Inhibition dominates excitation in all layers, but the ratio follows a clear rank order (highest to lowest) of L2 > L3 > L5. The variance of the background conductances was markedly higher for both excitation and inhibition in L2 compared with L3 or L5. We also show that induction of synchronized network epileptiform activity by blockade of GABA inhibition reveals a relative reluctance of L2 to participate in such activity. This was associated with maintenance of a dominant background inhibition in L2, whereas in L3 and L5 the absolute level of inhibition fell below that of excitation, coincident with the appearance of synchronized discharges. Further experiments identified potential roles for competition with bicuculline by ambient GABA at the GABAA receptor, and for strychnine-sensitive glycine receptors, in the residual inhibition in L2. We discuss our results in terms of the control of excitability in neuronal subpopulations of the EC and what these may suggest for their functional roles. © 2014 Greenhill et al.
Abstract:
Desktop user interface design originates from the fact that users are stationary and can devote all of their visual resource to the application with which they are interacting. In contrast, users of mobile and wearable devices are typically in motion while using their device, which means that they cannot devote all, or any, of their visual resource to interaction with the mobile application; it must remain with the primary task, often for safety reasons. Additionally, such devices have limited screen real estate, and traditional input and output capabilities are generally restricted. Consequently, if we are to develop effective applications for use on mobile or wearable technology, we must embrace a paradigm shift with respect to the interaction techniques we employ for communication with such devices. This paper discusses why it is necessary to embrace a paradigm shift in terms of interaction techniques for mobile technology and presents two novel multimodal interaction techniques, which are effective alternatives to traditional, visual-centric interface designs on mobile devices, as empirical examples of the potential to achieve this shift.
Abstract:
The entorhinal cortex (EC) is a key brain area controlling both hippocampal input and output, via neurones in layer II and layer V, respectively. It is also a pivotal area in the generation and propagation of epilepsies involving the temporal lobe. We have previously shown that, within the network of the EC, neurones in layer V are subject to powerful synaptic excitation but weak inhibition, whereas the reverse is true in layer II. The deep layers are also highly susceptible to acutely provoked epileptogenesis. Considerable evidence now points to a role for spontaneous background synaptic activity in the control of neuronal, and hence network, excitability. In the present article we describe the results of studies in which we compared background release of the excitatory transmitter, glutamate, and the inhibitory transmitter, GABA, in the two layers, the role of this background release in the balance of excitability, and its control by auto- and heteroreceptors on presynaptic terminals. © The Physiological Society 2004.
Abstract:
The energy balancing capability of cooperative communication can be used to solve the energy hole problem in wireless sensor networks. We first propose a cooperative transmission strategy in which intermediate nodes participate in two cooperative multi-input single-output (MISO) transmissions, with the node at the previous hop and with a selected node at the next hop, respectively. We then study the power allocation optimization problems for the cooperative transmission strategy by examining two different approaches: network lifetime maximization (NLM) and energy consumption minimization (ECM). For NLM, the numerical optimal solution is derived, and a search algorithm for a suboptimal solution is provided when the optimal solution does not exist. For ECM, a closed-form solution is obtained. Numerical and simulation results show that both approaches achieve much longer network lifetimes than SISO transmission strategies and other cooperative communication schemes. Moreover, NLM, which features energy balancing, outperforms ECM, which focuses on energy efficiency, in the network lifetime sense.
Abstract:
Clogging is the main operational problem associated with horizontal subsurface flow constructed wetlands (HSSF CWs). The measurement of saturated hydraulic conductivity has proven to be a suitable technique for assessing clogging within HSSF CWs. The vertical and horizontal distribution of hydraulic conductivity was assessed in two full-scale HSSF CWs using two different in situ permeameter methods (the falling head (FH) and constant head (CH) methods). Horizontal hydraulic conductivity profiles showed that the two methods are related by a power function (FH = CH^0.7821, r² = 0.76) within the recorded range of hydraulic conductivities (0-70 m/day). However, the FH method provided lower values of hydraulic conductivity than the CH method (one to three times lower). Despite the discrepancies between the magnitudes of the reported readings, the relative distribution of clogging obtained via the two methods was similar. Therefore, both methods are useful for exploring the general distribution of clogging and, especially, for assessing clogged areas originating from preferential flow paths within full-scale HSSF CWs. Discrepancies between the methods (in both magnitude and pattern) arose from the vertical hydraulic conductivity profiles under highly clogged conditions. It is believed this can be attributed to procedural differences between the methods, such as the method of permeameter insertion (twisting versus hammering). Results from both methods suggest that clogging develops along the shortest distance between the water input and output. The results also show that the design and maintenance of inlet distributors and outlet collectors appear to have a great influence on the pattern of clogging, and hence on the asset lifetime of HSSF CWs. © Springer Science+Business Media B.V. 2011.
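The reported power fit can be applied directly to relate readings from the two permeameter methods. A small sketch, assuming one stays within the 0-70 m/day range over which the fit was obtained:

```python
def fh_equivalent(ch):
    """Falling-head equivalent (m/day) of a constant-head reading,
    using the power fit reported above (FH = CH**0.7821, r^2 = 0.76,
    valid over the recorded 0-70 m/day range)."""
    if not 0.0 <= ch <= 70.0:
        raise ValueError("fit only supported on 0-70 m/day")
    return ch ** 0.7821

print(fh_equivalent(40.0))  # ~17.9 m/day: FH reads lower, as reported
```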
Abstract:
Data Envelopment Analysis (DEA) is a powerful analytical technique for measuring the relative efficiency of alternatives based on their inputs and outputs. The alternatives can be in the form of countries that attempt to enhance their productivity and environmental efficiencies concurrently. However, when desirable outputs such as productivity increase, undesirable outputs (e.g. carbon emissions) increase as well, making the performance evaluation questionable. In addition, environmental efficiency has traditionally been measured with crisp input and output data (desirable and undesirable). However, the input and output data, such as CO2 emissions, in real-world evaluation problems are often imprecise or ambiguous. This paper proposes a DEA-based framework in which the input and output data are characterized by symmetrical and asymmetrical fuzzy numbers. The proposed method allows the environmental evaluation to be assessed at different levels of certainty. The validity of the proposed model has been tested and its usefulness is illustrated using two numerical examples. An application to energy efficiency among 23 European Union (EU) member countries is further presented to show the applicability and efficacy of the proposed approach under asymmetric fuzzy numbers.