937 results for modelli input-output programmazione lineare grafi pesati
Abstract:
Electronic communications devices intended for government or military applications must be rigorously evaluated to ensure that they maintain data confidentiality. High-grade information security evaluations require a detailed analysis of the device's design, to determine how it achieves necessary security functions. In practice, such evaluations are labour-intensive and costly, so there is a strong incentive to find ways to make the process more efficient. In this paper we show how well-known concepts from graph theory can be applied to a device's design to optimise information security evaluations. In particular, we use end-to-end graph traversals to eliminate components that do not need to be evaluated at all, and minimal cutsets to identify the smallest group of components that needs to be evaluated in depth.
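The traversal and cutset steps lend themselves to a short illustration. The sketch below (component names and graph entirely hypothetical, not taken from the paper) uses networkx: an end-to-end reachability check discards components that lie on no path from the plaintext input to the ciphertext output, and a minimum node cutset gives the smallest component group to evaluate in depth.

    # Hypothetical device graph; edges are possible data flows between components.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("red_input", "crypto"), ("crypto", "modem"), ("modem", "black_output"),
        ("keypad", "crypto"), ("power_supply", "modem"),
    ])
    source, sink = "red_input", "black_output"

    # 1. Components on no end-to-end path need no evaluation at all.
    on_path = ({source} | nx.descendants(G, source)) & ({sink} | nx.ancestors(G, sink))
    print("No evaluation needed:", set(G) - on_path)

    # 2. A minimal node cutset is the smallest group intercepting every such path.
    print("Evaluate in depth:", nx.minimum_node_cut(G, source, sink))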
Abstract:
Aim: To identify an appropriate dosage strategy for patients receiving enoxaparin by continuous intravenous infusion (CII). Methods: Monte Carlo simulations were performed in NONMEM (200 replicates of 1000 patients) to predict steady-state anti-Xa concentrations (Css) for patients receiving a CII of enoxaparin. The covariate distribution model was simulated based on covariate demographics in the CII study population. The impact of patient weight, renal function (creatinine clearance, CrCL) and patient location (intensive care unit, ICU) was evaluated. A population pharmacokinetic model was used as the input-output model (a one-compartment model with first-order elimination and a mixed residual error structure). Success of a dosing regimen was defined as the percentage of Css values falling within the therapeutic range of 0.5 IU/ml to 1.2 IU/ml. Results: The best dose for patients in the ICU was 4.2 IU/kg/h (success mean 64.8%, 90% prediction interval (PI): 60.1–69.8%) if CrCL was below 60 ml/min; for the remaining patients the best dose was 8.3 IU/kg/h (success mean 65.4%, 90% PI: 58.5–73.2%). Simulations suggest that there was a 50% improvement in the success of the CII when the dose rate for ICU patients was stratified by CrCL in this way.
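The simulation logic can be sketched briefly. The Monte Carlo fragment below uses entirely hypothetical pharmacokinetic values (typical clearance, between-subject variability, weight distribution); it reproduces the form of the assessment, not the study's NONMEM model:

    # Success of a candidate CII dose rate, judged on steady-state concentrations.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000                                        # patients per replicate
    dose_rate = 4.2                                 # IU/kg/h, candidate dose
    weight = rng.normal(80, 15, n).clip(40, 150)    # kg (assumed distribution)
    cl = 0.5 * np.exp(rng.normal(0, 0.3, n))        # L/h anti-Xa clearance (assumed)

    # At steady state under a constant-rate infusion, Css = infusion rate / clearance.
    css = dose_rate * weight / (cl * 1000.0)        # IU/ml
    success = np.mean((css >= 0.5) & (css <= 1.2)) * 100
    print(f"Css within 0.5-1.2 IU/ml: {success:.1f}% of patients")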
Abstract:
Drawing on extensive academic research and theory on clusters and their analysis, the methodology employed in this pilot study (sponsored by the Welsh Assembly Government's Economic Research Grants Assessment Board) seeks to create a framework for reviewing and monitoring clusters in Wales on an ongoing basis, and to generate the information necessary for successful cluster development policy. The multi-method framework developed and tested in the pilot study is designed to map existing Welsh sectors with cluster characteristics, uncover existing linkages, and better understand areas of strength and weakness. The approach adopted relies on synthesising both quantitative and qualitative evidence. Statistical measures, including the size of potential clusters, are united with other evidence on input-output derived inter-linkages within clusters and to other sectors in Wales and the UK, as well as the export and import intensity of the cluster. Multi-Sector Qualitative Analysis is then used to assess competencies/capacity, risk factors, markets and types and, crucially, the perceived strengths of cluster structures and relationships. With the refinements recommended through the review process, the approach outlined above can provide policy-makers with a valuable tool for reviewing and monitoring individual sectors and for ameliorating problems in sectors likely to decline further.
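The input-output derived inter-linkages mentioned above are conventionally read off the Leontief inverse of a technical-coefficients table; a small sketch with an invented three-sector table:

    # Backward linkages (output multipliers) from a hypothetical 3-sector table.
    import numpy as np

    A = np.array([[0.10, 0.25, 0.05],    # a_ij: input from sector i required
                  [0.20, 0.05, 0.30],    # per unit of output of sector j
                  [0.05, 0.10, 0.15]])   # (invented coefficients)

    L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1
    print("Backward linkages:", L.sum(axis=0).round(2))   # column sums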
Abstract:
Typical performance of low-density parity-check (LDPC) codes over a general binary-input output-symmetric memoryless channel is investigated using methods of statistical mechanics. The binary-input additive-white-Gaussian-noise channel and the binary-input Laplace channel are considered as specific channel noise models.
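For intuition about the two noise models, each binary-input channel is summarised at the decoder by a log-likelihood ratio (LLR) per received value; a small sketch with assumed noise parameters:

    # Channel LLRs for transmitted x in {+1, -1} (noise parameters invented).
    import numpy as np

    y = np.array([0.9, -0.3, 1.4])    # received values

    sigma = 0.8                       # AWGN noise standard deviation (assumed)
    llr_awgn = 2.0 * y / sigma**2     # Gaussian-noise LLR

    b = 0.6                           # Laplace scale parameter (assumed)
    llr_laplace = (np.abs(y + 1) - np.abs(y - 1)) / b   # Laplace-noise LLR
    print(llr_awgn.round(2), llr_laplace.round(2))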
Abstract:
This paper presents a general methodology for estimating and incorporating uncertainty in the controller and forward models for noisy nonlinear control problems. Conditional distribution modeling in a neural network context is used to estimate uncertainty around the prediction of neural network outputs. The developed methodology circumvents the dynamic programming problem by using the predicted neural network uncertainty to localize the possible control solutions to consider. A nonlinear multivariable system with different delays between the input-output pairs is used to demonstrate the successful application of the developed control algorithm. The proposed method is suitable for redundant control systems and allows us to model strongly non-Gaussian distributions of control signals as well as processes with hysteresis.
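One common realisation of conditional distribution modelling of this kind is a network that predicts a mean and a log-variance, trained by Gaussian negative log-likelihood; the sketch below (architecture, data and training schedule all invented, not the paper's setup) illustrates the idea in PyTorch:

    # Heteroscedastic regression: the network outputs [mu, log_var] per input.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))

    x = torch.linspace(-2, 2, 256).unsqueeze(1)
    y = torch.sin(2 * x) + 0.1 * (1 + x.abs()) * torch.randn_like(x)  # input-dependent noise

    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(500):
        mu, log_var = net(x).chunk(2, dim=1)
        loss = (0.5 * log_var + 0.5 * (y - mu) ** 2 / log_var.exp()).mean()  # Gaussian NLL
        opt.zero_grad(); loss.backward(); opt.step()
    # exp(0.5 * log_var) estimates the output uncertainty around each prediction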
Abstract:
We consider the direct adaptive inverse control of nonlinear multivariable systems with different delays between every input-output pair. In direct adaptive inverse control, the inverse mapping is learned from examples of input-output pairs. This makes the obtained controller suboptimal, since the network may have to learn the response of the plant over a larger operational range than necessary. Moreover, in certain applications, the control problem can be redundant, implying that the inverse problem is ill-posed. In this paper we propose a new algorithm which allows estimating and exploiting uncertainty in nonlinear multivariable control systems. This approach allows us to model strongly non-Gaussian distributions of control signals as well as processes with hysteresis. The proposed algorithm circumvents the dynamic programming problem by using the predicted neural network uncertainty to localise the possible control solutions to consider.
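The way predicted uncertainty can localise the control search admits a compact illustration (all models and numbers below are hypothetical stand-ins, not the authors' algorithm): the inverse model supplies a point estimate and an uncertainty, and only controls inside the resulting credible interval are evaluated against the plant model.

    # Restrict the search for a control to the inverse model's credible region.
    import numpy as np

    def plant(u):                           # stand-in for the plant response
        return np.tanh(u) + 0.5 * u

    def inverse_model(y_target):            # stand-in for a learned inverse network
        u_hat = 0.6 * y_target              # point estimate of the control
        sigma = 0.3 + 0.1 * abs(y_target)   # predicted uncertainty
        return u_hat, sigma

    y_target = 1.2
    u_hat, sigma = inverse_model(y_target)
    candidates = np.linspace(u_hat - 2 * sigma, u_hat + 2 * sigma, 41)  # local grid
    best = candidates[np.argmin((plant(candidates) - y_target) ** 2)]
    print(f"chosen control {best:.3f} -> output {plant(best):.3f}")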
Abstract:
As we enter the 21st Century, technologies originally developed for defense purposes, such as computers and satellite communications, appear to have become a driving force behind economic growth in the United States. Paradoxically, almost all previous econometric models suggest that the largely defense-oriented federal industrial R&D funding that helped create these technologies had no discernible effect on U.S. industrial productivity growth. This paper addresses this paradox by stressing that defense procurement as well as federal R&D expenditures were targeted to a few narrowly defined manufacturing sub-sectors that produced high-tech weaponry. Analysis employing data from the NBER Manufacturing Productivity Database and the BEA's Input-Output tables then demonstrates that defense procurement policies did have significant effects on the productivity performance of disaggregated manufacturing industries because of a process of procurement-driven technological change.
Abstract:
Using data from the UK Census of Production, including foreign ownership data, and information from UK industry input-output tables, this paper examines whether the intensity of transactions linkages between foreign and domestic firms affects productivity growth in domestic manufacturing industries. The implications of the findings for policies promoting linkages between multinational and domestic firms in the UK economy are outlined.
Abstract:
This thesis is concerned with the study of a non-sequential identification technique, so that it may be applied to the identification of process plant mathematical models from process measurements with the greatest degree of accuracy and reliability. In order to study the accuracy of the technique under differing conditions, simple mathematical models were set up on a parallel hybrid computer and these models identified from input/output measurements by a small on-line digital computer. Initially, the simulated models were identified on-line. However, this method of operation was found not suitable for a thorough study of the technique due to equipment limitations. Further analysis was carried out in a large off-line computer using data generated by the small on-line computer. Hence identification was not strictly on-line. Results of the work have shown that the identification technique may be successfully applied in practice. An optimum sampling period is suggested, together with noise level limitations for maximum accuracy. A description of a double-effect evaporator is included in this thesis. It is proposed that the next stage in the work will be the identification of a mathematical model of this evaporator using the technique described.
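The flavour of identifying a process model from sampled input/output data can be shown with a small least-squares sketch (a first-order ARX model fitted to synthetic data; this illustrates the general idea, not the thesis's non-sequential technique):

    # Identify a first-order discrete model y[k] = a*y[k-1] + b*u[k-1] + noise.
    import numpy as np

    rng = np.random.default_rng(1)
    a_true, b_true = 0.85, 0.40
    u = rng.standard_normal(500)                # input sequence
    y = np.zeros(501)
    for k in range(500):
        y[k + 1] = a_true * y[k] + b_true * u[k] + 0.05 * rng.standard_normal()

    Phi = np.column_stack([y[:-1], u])          # regressors [y[k-1], u[k-1]]
    theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    print("estimated a, b:", theta.round(3))    # close to 0.85, 0.40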
Abstract:
In this paper we propose a data envelopment analysis (DEA) based method for assessing the comparative efficiencies of units operating production processes where input-output levels are inter-temporally dependent. One cause of inter-temporal dependence between input and output levels is capital stock, which influences output levels over many production periods. Such units cannot be assessed by traditional or 'static' DEA, which assumes input-output correspondences are contemporaneous in the sense that the output levels observed in a time period are the product solely of the input levels observed during that same period. The method developed in the paper overcomes the problem of inter-temporal input-output dependence by using input-output 'paths' mapped out by operating units over time as the basis of assessing them. As an application we compare the results of the dynamic and static models for a set of UK universities. The paper suggests that the dynamic model captures efficiency better than the static model. © 2003 Elsevier Inc. All rights reserved.
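To fix ideas on what a 'static' DEA assessment computes, here is a minimal input-oriented, constant-returns envelopment model solved as a linear programme, with invented single-input, single-output data for four units (the paper's dynamic 'paths' model is not reproduced here):

    # Input-oriented CCR DEA efficiency of one unit (hypothetical data).
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([2.0, 3.0, 4.0, 5.0])     # inputs of units A..D
    Y = np.array([1.0, 3.0, 3.5, 4.0])     # outputs of units A..D
    k = 0                                   # assess unit A

    # variables: [theta, lambda_1..lambda_4]; minimise theta
    c = np.r_[1.0, np.zeros(4)]
    A_ub = np.vstack([
        np.r_[-X[k], X],                    # sum(lambda*X) <= theta * X_k
        np.r_[0.0, -Y],                     # sum(lambda*Y) >= Y_k
    ])
    b_ub = np.array([0.0, -Y[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * 4)
    print("efficiency of unit A:", round(res.x[0], 3))   # 0.5 here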
Abstract:
In for-profit organizations efficiency measurement with reference to the potential for profit augmentation is particularly important, as is its decomposition into technical and allocative components. Different profit efficiency approaches can be found in the literature to measure and decompose overall profit efficiency. In this paper, we highlight some problems within existing approaches and propose a new measure of profit efficiency based on a geometric mean of input/output adjustments needed for maximizing profits. Overall profit efficiency is calculated through this efficiency measure and is decomposed into its technical and allocative components. Technical efficiency is calculated based on a non-oriented geometric distance function (GDF) that is able to incorporate all the sources of inefficiency, while allocative efficiency is retrieved residually. We also define a measure of profitability efficiency which complements profit efficiency in that it makes it possible to retrieve the scale efficiency of a unit as a component of its profitability efficiency. In addition, the measure of profitability efficiency allows for a dual profitability interpretation of the GDF measure of technical efficiency. The concepts introduced in the paper are illustrated using a numerical example.
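In generic notation, a geometric-mean measure of the kind introduced here can be sketched as follows (this records the general form of a geometric distance function, not the paper's exact definition): if inputs are scaled to their targets by factors \theta_i and outputs by factors \beta_r, i.e. x_i^{*} = \theta_i x_i and y_r^{*} = \beta_r y_r, then

    GDF \;=\; \frac{\left(\prod_{i=1}^{m}\theta_i\right)^{1/m}}
                   {\left(\prod_{r=1}^{s}\beta_r\right)^{1/s}},

so that required input contractions (\theta_i < 1) and output expansions (\beta_r > 1) both drive the measure below one for inefficient units, while a fully efficient unit scores one.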
Abstract:
This paper re-assesses three independently developed approaches that are aimed at solving the problem of zero-weights or non-zero slacks in Data Envelopment Analysis (DEA). The methods are weights restricted, non-radial and extended facet DEA models. Weights restricted DEA models are dual to envelopment DEA models with restrictions on the dual variables (DEA weights) aimed at avoiding zero values for those weights; non-radial DEA models are envelopment models which avoid non-zero slacks in the input-output constraints. Finally, extended facet DEA models recognize that only projections on facets of full dimension correspond to well defined rates of substitution/transformation between all inputs/outputs, which in turn correspond to non-zero weights in the multiplier version of the DEA model. We demonstrate how these methods are equivalent, not only in their aim but also in the solutions they yield. In addition, we show that the aforementioned methods modify the production frontier by extending existing facets or creating unobserved facets. Further, we propose a new approach that uses weight restrictions to extend existing facets. This approach has some advantages in computational terms, because extended facet models normally make use of mixed integer programming models, which are computationally demanding.
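For orientation, the multiplier DEA model in which these weights (and the zero-weight problem) live can be sketched in generic input-oriented, constant-returns form, with one weight restriction appended (the bounds \alpha and \beta are assumed, illustrative values):

    \max_{u,v}\ \sum_{r} u_r y_{rk}
    \quad \text{s.t.} \quad \sum_{i} v_i x_{ik} = 1, \qquad
    \sum_{r} u_r y_{rj} - \sum_{i} v_i x_{ij} \le 0 \ \ \forall j, \qquad
    u_r,\ v_i \ge 0,

    \alpha \;\le\; v_1 / v_2 \;\le\; \beta .

Homogeneous ratio restrictions of this type keep the optimal weights away from zero and are one of the routes, discussed in the paper, to projecting units onto full-dimensional facets.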
Abstract:
This paper develops two new indices for measuring productivity in multi-input multi-output situations. One index enables the measure of productivity change of a unit over time, while the second index makes it possible to compare two units on productivity at the same or different points in time. Productivity in a single input single output context is defined as the ratio of output to input. In multi-input multi-output contexts this ratio is not defined. Instead, one of the methods traditionally used is the Malmquist Index of productivity change over time. This is computed by reference to the distances of the input-output bundles of a production unit at two different points in time from the efficient boundaries corresponding to those two points in time. The indices developed in this paper depart from the use of two different reference boundaries and instead use a single reference boundary, which in a sense is the most efficient boundary observed over two or more successive time periods. We discuss the assumptions which make possible the definition of such a single reference boundary and proceed to develop the two Malmquist-type indices for measuring productivity. One key advantage of using a single reference boundary is that the resulting index values are circular. That is, it is possible to use the index values of successive time periods to derive an index value of productivity change over a time period of any length covered by successive index values, or vice versa. Further, the use of a single reference boundary makes it possible to construct an index for comparing the productivities of two units either at the same or at two different points in time. This was not possible with the traditional Malmquist Index. We decompose both new indices into components which isolate production unit from industry or comparator unit effects. The components themselves, like the indices developed, are also circular. The components of the indices drill down to reveal more clearly the performance of each unit over time relative either to itself or to other units. The indices developed and their components are aimed at managers of production units to enable them to diagnose the performance of their units with a view to guiding them to improved performance.
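The circularity property claimed for the new indices can be stated compactly in generic notation (a sketch, not the paper's exact definitions): if D(x_t, y_t) denotes the distance of the period-t input-output bundle from the single fixed reference boundary, then

    M_{t,t+2} \;=\; \frac{D(x_{t+2},\,y_{t+2})}{D(x_t,\,y_t)}
             \;=\; \frac{D(x_{t+1},\,y_{t+1})}{D(x_t,\,y_t)} \cdot
                   \frac{D(x_{t+2},\,y_{t+2})}{D(x_{t+1},\,y_{t+1})}
             \;=\; M_{t,t+1}\, M_{t+1,t+2}.

The factorisation holds because every distance is measured against the same boundary; the traditional Malmquist Index loses it because its reference technology changes with each pair of periods.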
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operation conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was studied first experimentally under carefully selected operation conditions, which resemble the ranges employed practically under stable and efficient conditions. Data were collected at steady state using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and the backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and the model-predicted concentration profiles at steady state using a non-linear optimisation technique. The estimated values were then correlated as functions of operating parameters and were incorporated in the model equations. The model equations comprise a stiff differential-algebraic system. This system was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those experimentally measured, and a very good agreement of the two profiles was achieved within a percent relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulative variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as multi-loop decentralised SISO (Single Input Single Output) as well as centralised MIMO (Multi-Input Multi-Output) control, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). Control performance of each control scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection.
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops Rotor speed–Raffinate concentration and Solvent flowrate–Extract concentration showed weak interaction. Multivariable MPC showed more effective performance than the conventional techniques since it accounts for loop interaction, time delays, and input-output variable constraints.
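The RGA-based pairing mentioned above is easy to illustrate: for a 2x2 steady-state gain matrix G (numbers invented, not the column's identified gains), the relative gain array is the elementwise product of G with the transpose of its inverse, and pairings with relative gains near 1 are preferred.

    # Relative gain array for a hypothetical 2x2 steady-state gain matrix.
    import numpy as np

    G = np.array([[2.0, 0.4],      # rows = outputs (raffinate, extract conc.),
                  [0.5, 1.5]])     # cols = inputs (rotor speed, solvent flow)
    Lam = G * np.linalg.inv(G).T   # RGA
    print(Lam.round(3))            # diagonal terms near 1 favour diagonal pairing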
Abstract:
Hazard and operability (HAZOP) studies on chemical process plants are very time-consuming, and often tedious, tasks. The requirement for HAZOP studies is that a team of experts systematically analyse every conceivable process deviation, identifying possible causes and any hazards that may result. The systematic nature of the task, and the fact that some team members may be unoccupied for much of the time, can lead to tedium, which in turn may lead to serious errors or omissions. One aid to HAZOP is the fault tree, which presents the system failure logic graphically so that the study team can readily assimilate their findings. Fault trees are also useful for identifying design weaknesses, and may additionally be used to estimate the likelihood of hazardous events occurring. The one drawback of fault trees is that they are difficult to generate by hand, because of the sheer size and complexity of modern process plants. The work in this thesis proposes a computer-based method to aid the development of fault trees for chemical process plants. The aim is to produce concise, structured fault trees that are easy for analysts to understand. Standard plant input-output equation models for major process units are modified so that they include ancillary units and pipework. This results in a reduction in the nodes required to represent a plant. Control loops and protective systems are modelled as operators which act on process variables. This modelling maintains the functionality of loops, making fault tree generation easier and improving the structure of the fault trees produced. A method, called event ordering, is proposed which allows the magnitude of deviations of controlled or measured variables to be defined in terms of the control loops and protective systems with which they are associated.
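To make the fault-tree representation concrete, here is a minimal sketch of AND/OR gate evaluation over basic events (the structure and event names are invented; this is not the thesis's generation method):

    # Minimal fault-tree sketch: evaluate the top event from basic-event states.
    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Gate:
        kind: str                                   # "AND" or "OR"
        children: List[Union["Gate", str]] = field(default_factory=list)

    def occurs(node, events):
        if isinstance(node, str):                   # basic event, e.g. "valve_stuck"
            return events.get(node, False)
        results = (occurs(c, events) for c in node.children)
        return all(results) if node.kind == "AND" else any(results)

    top = Gate("OR", ["pump_failure",
                      Gate("AND", ["high_level", "alarm_failed"])])
    print(occurs(top, {"high_level": True, "alarm_failed": True}))  # True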