834 results for Input-Output analysis
Abstract:
This paper re-examines the stability of multi-input multi-output (MIMO) control systems designed using sequential MIMO quantitative feedback theory (QFT). To establish the results, recursive design equations are derived for the SISO equivalent plants employed in a sequential MIMO QFT design. The equations apply to sequential MIMO QFT designs in both the direct plant domain, which employs the elements of the plant in the design, and the inverse plant domain, which employs the elements of the plant inverse. Stability theorems giving necessary and sufficient conditions for robust closed-loop internal stability are developed for sequential MIMO QFT designs in both domains. The theorems and design equations facilitate less conservative designs and improved design transparency.
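As a rough illustration of the kind of recursion involved, here is a hedged sketch in generic notation (not necessarily the paper's): write the plant inverse as $P^{-1} = [p^{ij}]$ and let $g_k$ be the compensator closed at design step $k$. A common form of sequential loop-closing recursion in the inverse plant domain updates the remaining elements as

$$p^{ij}_{(k+1)} = p^{ij}_{(k)} - \frac{p^{ik}_{(k)}\, p^{kj}_{(k)}}{p^{kk}_{(k)} + g_k},$$

with the SISO equivalent plant for loop $k$ taken as $q_k = 1/p^{kk}_{(k)}$. The paper's own design equations should be consulted for the exact forms in the direct and inverse domains.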
Abstract:
In view of the need to provide tools to facilitate the re-use of existing knowledge structures such as ontologies, we present in this paper a system, AKTiveRank, for the ranking of ontologies. AKTiveRank uses as input the search terms provided by a knowledge engineer and, using the output of an ontology search engine, ranks the ontologies. We apply a number of metrics in an attempt to investigate their appropriateness for ranking ontologies, and compare the results with a questionnaire-based human study. Our results show that AKTiveRank will have great utility although there is potential for improvement.
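A minimal sketch of the ranking step, assuming, as in the general AKTiveRank approach, that each ontology receives several metric scores that are normalised and combined by a weighted sum; the metric names, weights, and function below are illustrative placeholders, not the system's actual interface:

    # Illustrative rank aggregation; metric names and weights are placeholders.
    def rank_ontologies(scores, weights):
        """scores: {ontology: {metric: value}}; weights: {metric: weight}."""
        metrics = list(weights)
        # Normalise each metric to [0, 1] across ontologies so they are comparable.
        peak = {m: max(s[m] for s in scores.values()) or 1.0 for m in metrics}
        total = {o: sum(weights[m] * s[m] / peak[m] for m in metrics)
                 for o, s in scores.items()}
        return sorted(total.items(), key=lambda kv: kv[1], reverse=True)

    ranking = rank_ontologies(
        {"onto_a": {"coverage": 3.0, "density": 0.4},
         "onto_b": {"coverage": 5.0, "density": 0.2}},
        {"coverage": 0.6, "density": 0.4})

In the real system the per-ontology scores come from structural metrics computed over the ontologies returned by the search engine.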
Abstract:
Using analytical methods of statistical mechanics, we analyse the typical behaviour of a multiple-input multiple-output (MIMO) Gaussian channel with binary inputs under low-density parity-check (LDPC) network coding and joint decoding. The saddle point equations for the replica symmetric solution are found in particular realizations of this channel, including a small and large number of transmitters and receivers. In particular, we examine the cases of a single transmitter, a single receiver and symmetric and asymmetric interference. Both dynamical and thermodynamical transitions from the ferromagnetic solution of perfect decoding to a non-ferromagnetic solution are identified for the cases considered, marking the practical and theoretical limits of the system under the current coding scheme. Numerical results are provided, showing the typical level of improvement/deterioration achieved with respect to the single transmitter/receiver result, for the various cases. © 2007 IOP Publishing Ltd.
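For orientation, the underlying channel model, written in generic notation rather than the paper's: with $N$ transmitters sending binary symbols $x_j \in \{-1,+1\}$ and $M$ receivers, receiver $\mu$ observes

$$y_\mu = \sum_{j=1}^{N} s_{\mu j}\, x_j + n_\mu, \qquad \mu = 1,\dots,M,$$

where $s_{\mu j}$ are the (possibly asymmetric) channel gains and $n_\mu$ is additive Gaussian noise. LDPC coding constrains the transmitted vector $\boldsymbol{x}$, and the replica analysis characterises the typical posterior over this model; the single-transmitter and single-receiver cases correspond to $N = 1$ and $M = 1$ respectively.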
Abstract:
Data envelopment analysis defines the relative efficiency of a decision making unit (DMU) as the ratio of the sum of its weighted outputs to the sum of its weighted inputs, allowing the DMUs to freely allocate weights to their inputs/outputs. However, this measure may not reflect a DMU's true efficiency, as some inputs/outputs may not contribute reasonably to the efficiency measure. Traditionally, weights restrictions have been imposed to overcome this problem. This paper offers a new approach for the case where DMUs operate a constant returns to scale technology in a single-input multi-output context. The approach is based on introducing unobserved DMUs, created by adjusting the output levels of certain observed relatively efficient DMUs, reflecting a combination of technical information on feasible production levels and the decision maker's (DM's) value judgments. Its main advantage is that the information conveyed by the DM is local, with reference to a specific observed DMU. The approach is illustrated on a real life application. © 2003 Elsevier B.V. All rights reserved.
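For reference, the classic DEA ratio program behind this description (the standard CCR form, in generic notation): each DMU $o$ chooses the weights most favourable to itself,

$$e_o = \max_{u,v}\ \frac{\sum_r u_r y_{ro}}{\sum_i v_i x_{io}} \quad \text{s.t.} \quad \frac{\sum_r u_r y_{rj}}{\sum_i v_i x_{ij}} \le 1 \ \ \forall j, \qquad u_r,\, v_i \ge 0.$$

The unobserved DMUs proposed in the paper enter this program simply as additional constraints indexed by $j$.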
Abstract:
Derivational morphology proposes meaningful connections between words and is largely unrepresented in lexical databases. This thesis presents a project to enrich a lexical database with morphological links and to evaluate their contribution to disambiguation. A lexical database with sense distinctions was required; WordNet was chosen because of its free availability and widespread use. Its suitability was assessed through critical evaluation with respect to specifications and criticisms, using a transparent, extensible model. The identification of serious shortcomings suggested a portable enrichment methodology, applicable to alternative resources. Although 40% of the most frequent words are prepositions, they have been largely ignored by computational linguists, so the addition of prepositions was also required.

The preferred approach to morphological enrichment was to infer relations from phenomena discovered algorithmically. Both existing databases and existing algorithms can capture regular morphological relations but cannot capture exceptions correctly, and neither provides any semantic information. Some morphological analysis algorithms are subject to the fallacy that morphological analysis can be performed simply by segmentation. Morphological rules, grounded in observation and etymology, govern associations between and attachment of suffixes and contribute to defining the meaning of morphological relationships. Specifying character substitutions circumvents the segmentation fallacy. Morphological rules are prone to undergeneration, minimised through a variable lexical validity requirement, and to overgeneration, minimised by rule reformulation and by restricting monosyllabic output. Rules take into account the morphology of ancestor languages through co-occurrences of morphological patterns. Multiple rules applicable to an input suffix need their precedence established. The resistance of prefixations to segmentation has been addressed by identifying linking-vowel exceptions and irregular prefixes.

The automatic affix discovery algorithm applies heuristics to identify meaningful affixes and is combined with morphological rules into a hybrid model, fed only with empirical data collected without supervision. Further algorithms apply the rules optimally to automatically pre-identified suffixes and break words into their component morphemes. To handle exceptions, stoplists were created in response to initial errors and fed back into the model through iterative development, leading to 100% precision, contestable only on lexicographic criteria. Stoplist length is minimised by special treatment of monosyllables and reformulation of rules. 96% of words and phrases are analysed. 218,802 directed derivational links have been encoded in the lexicon rather than in the wordnet component of the model because the lexicon provides the optimal clustering of word senses. Both the links and the analyser are portable to an alternative lexicon.

The evaluation uses the extended gloss overlaps disambiguation algorithm. The enriched model outperformed WordNet in terms of recall without loss of precision. The failure of all experiments to outperform disambiguation by frequency reflects on WordNet's sense distinctions.
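A minimal sketch of the character-substitution idea: a suffixation rule records the suffix to strip and a stem-final rewrite, and an analysis is accepted only if the resulting stem is lexically valid. The rule format and the tiny lexicon below are illustrative placeholders, not the thesis's actual encoding:

    # Illustrative character-substitution suffix rules; the rule format and
    # lexicon are placeholders, not the thesis's actual encoding.
    LEXICON = {"happy", "lazy", "kind", "dark"}

    # Each rule: (suffix to strip, stem-final substring to rewrite, replacement).
    RULES = [("ness", "i", "y"),   # happiness -> happi -> happy
             ("ness", "", "")]     # kindness  -> kind

    def analyse(word):
        """Return (stem, suffix) for the first rule yielding a valid stem."""
        for suffix, old, new in RULES:
            if not word.endswith(suffix):
                continue
            stem = word[: -len(suffix)]
            if old:
                if not stem.endswith(old):
                    continue              # substitution required but not applicable
                stem = stem[: -len(old)] + new
            if stem in LEXICON:           # the lexical validity requirement
                return stem, suffix       # minimises overgeneration
        return None

    print(analyse("happiness"))  # ('happy', 'ness')

Rewriting characters rather than merely segmenting is what lets such rules avoid the segmentation fallacy discussed above.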
Abstract:
To make vision possible, the visual nervous system must represent the most informative features in the light pattern captured by the eye. Here we use Gaussian scale-space theory to derive a multiscale model for edge analysis and we test it in perceptual experiments. At all scales there are two stages of spatial filtering. An odd-symmetric, Gaussian first derivative filter provides the input to a Gaussian second derivative filter. Crucially, the output at each stage is half-wave rectified before feeding forward to the next. This creates nonlinear channels selectively responsive to one edge polarity while suppressing spurious or "phantom" edges. The two stages have properties analogous to simple and complex cells in the visual cortex. Edges are found as peaks in a scale-space response map that is the output of the second stage. The position and scale of the peak response identify the location and blur of the edge. The model predicts remarkably accurately our results on human perception of edge location and blur for a wide range of luminance profiles, including the surprising finding that blurred edges look sharper when their length is made shorter. The model enhances our understanding of early vision by integrating computational, physiological, and psychophysical approaches. © ARVO.
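A minimal numerical sketch of the two-stage cascade described above, using SciPy's Gaussian derivative filters. The scale sampling, the yoking of the two stages' scales, and the sign convention are illustrative choices, not the paper's fitted model:

    # Two-stage edge model sketch: Gaussian 1st-derivative filter, half-wave
    # rectification, Gaussian 2nd-derivative filter, rectification.  Scale
    # choices and sign convention are illustrative, not the fitted model.
    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def stage_response(profile, sigma1, sigma2):
        d1 = gaussian_filter1d(profile, sigma1, order=1)  # odd-symmetric stage 1
        d1 = np.maximum(d1, 0.0)                          # half-wave rectify
        d2 = -gaussian_filter1d(d1, sigma2, order=2)      # even-symmetric stage 2
        return np.maximum(d2, 0.0)                        # rectify again

    def find_edge(profile, scales):
        # Peak of the (scale, position) response map: position ~ edge location,
        # scale of the peak ~ edge blur.
        rmap = np.stack([stage_response(profile, s, s) for s in scales])
        k, x = np.unravel_index(np.argmax(rmap), rmap.shape)
        return x, scales[k]

For a rising edge the rectified first-derivative response is a single bump, and the negated second derivative peaks at the bump's centre, which is why the peak of the map marks the edge.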
Abstract:
Predicting future need for water resources has traditionally been, at best, a crude mixture of art and science. This has prevented the evaluation of water need from being carried out in either a consistent or comprehensive manner. This inconsistent and somewhat arbitrary approach to water resources planning led to well-publicised premature developments in the 1970s and 1980s, but privatisation of the Water Industry, including creation of the Office of Water Services and the National Rivers Authority in 1989, turned the tide of resource planning to the point where funding of schemes and their justification by the Regulators could no longer be assumed. Furthermore, considerable areas of uncertainty were beginning to enter the debate and complicate the assessment. It was also no longer appropriate to assume that contingencies would continue to lie solely on the demand side of the equation. An inability to calculate the balance between supply and demand may mean an inability to meet standards of service or, arguably worse, an excessive provision of water resources and excessive costs to customers. The United Kingdom Water Industry Research Limited (UKWIR) Headroom project in 1998 provided a simple methodology for the calculation of planning margins. This methodology, although well received, was not accepted by the Regulators as a tool sufficient to promote resource development. This thesis begins by considering the history of water resource planning in the UK, moving on to discuss events following privatisation of the water industry post-1985. The mid section of the research forms the bulk of the original work and provides a scoping exercise which reveals a catalogue of uncertainties prevalent within the supply-demand balance. Each of these uncertainties is considered in terms of materiality, scope, and whether it can be quantified within a risk analysis package. Many of the areas of uncertainty identified would merit further research. A workable, yet robust, methodology for evaluating the balance between water resources and water demands using a spreadsheet-based risk analysis package is presented. The technique involves statistical sampling and simulation such that samples are taken from input distributions on both the supply and demand sides of the equation and the imbalance between supply and demand is calculated in the form of an output distribution. The percentiles of the output distribution represent different standards of service to the customer. The model allows dependencies between distributions to be considered, improved uncertainty estimates to be incorporated, and the impact of uncertain solutions to any imbalance to be calculated directly. The method is considered a significant leap forward in the field of water resource planning.
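A minimal sketch of the sampling-and-simulation technique just described; the distribution shapes and all numbers below are invented placeholders, not the thesis's calibrated inputs:

    # Monte Carlo sketch of the supply-demand balance.  All distribution
    # parameters are invented placeholders, not calibrated values.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000                                           # simulation draws

    supply = rng.normal(520.0, 25.0, size=n)              # Ml/d, placeholder
    demand = rng.triangular(440.0, 470.0, 530.0, size=n)  # Ml/d, placeholder

    imbalance = supply - demand                           # output distribution
    for pct in (5, 50, 95):
        print(f"P{pct:02d}: {np.percentile(imbalance, pct):+6.1f} Ml/d")

Each percentile of the imbalance distribution corresponds to a different standard of service; dependencies between inputs, as in the thesis, would be modelled by correlating the draws rather than sampling independently.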
Abstract:
As an alternative fuel for compression ignition engines, plant oils are in principle renewable and carbon-neutral. However, their use raises technical, economic and environmental issues. A comprehensive and up-to-date technical review of using both edible and non-edible plant oils (either pure or as blends with fossil diesel) in CI engines, based on comparisons with standard diesel fuel, has been carried out. The properties of several plant oils, and the results of engine tests using them, are reviewed based on the literature. Findings regarding engine performance, exhaust emissions and engine durability are collated. The causes of technical problems arising from the use of various oils are discussed, as are the modifications to oil and engine employed to alleviate these problems. The review shows that a number of plant oils can be used satisfactorily in CI engines, without transesterification, by preheating the oil and/or modifying the engine parameters and the maintenance schedule. As regards life-cycle energy and greenhouse gas emission analyses, these reveal considerable advantages of raw plant oils over fossil diesel and biodiesel. Typical results show that the life-cycle output-to-input energy ratio of raw plant oil is around 6 times higher than fossil diesel. Depending on either primary energy or fossil energy requirements, the life-cycle energy ratio of raw plant oil is in the range of 2–6 times higher than corresponding biodiesel. Moreover, raw plant oil has the highest potential of reducing life-cycle GHG emissions as compared to biodiesel and fossil diesel.
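For clarity, the ratio referred to here is simply

$$R = \frac{E_{\text{fuel delivered}}}{E_{\text{life-cycle input}}},$$

the energy content of the fuel produced divided by the primary (or fossil) energy consumed over its life cycle; the comparisons quoted above ("around 6 times higher", "2-6 times higher") are comparisons of this ratio across fuels.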
Abstract:
The main advantage of Data Envelopment Analysis (DEA) is that it does not require any a priori weights for inputs and outputs: each DMU evaluates its efficiency with the input and output weights that are most favourable to itself. It can be argued, however, that if DMUs operate under similar circumstances, the pricing of inputs and outputs should apply uniformly across all DMUs; using different weights for different DMUs makes their efficiencies incomparable and makes it impossible to rank them on a common basis. This is a significant drawback of DEA, and the literature offers many remedies, including the use of a common set of weights (CSW). In addition, conventional DEA methods require accurate measurement of both inputs and outputs, yet crisp input and output data may not be available in real-world applications. This paper develops a new model for the calculation of a CSW in fuzzy environments using fuzzy DEA. A numerical example is used to show the validity and efficacy of the proposed model and to compare its results with previous models available in the literature.
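For orientation, one widely used crisp CSW formulation (a sketch only; the paper's contribution is a fuzzy generalisation of this kind of program) selects a single weight vector $(u, v)$ for all DMUs by minimising the total efficiency shortfall:

$$\min_{u,v,\Delta}\ \sum_j \Delta_j \quad \text{s.t.} \quad \frac{\sum_r u_r y_{rj}}{\sum_i v_i x_{ij}} + \Delta_j = 1, \qquad \Delta_j \ge 0,\ u_r,\, v_i \ge \varepsilon,$$

after which every DMU is scored and ranked with the same $(u, v)$.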
Abstract:
This study employs stochastic frontier analysis to analyse Malaysian commercial banks during 1996-2002, focusing particularly on determining the impact of Islamic banking on performance. We derive both net and gross efficiency estimates, thereby demonstrating that differences in operating characteristics explain much of the difference in outputs between Malaysian banks. We also decompose productivity change into efficiency, technical, and scale change using a generalised Malmquist productivity index. On average, Malaysian banks experience mildly decreasing returns to scale and annual productivity change of 2.37 percent, with the latter driven primarily by technical change, which has declined over time. Our gross efficiency estimates suggest that Islamic banking is associated with higher input requirements. In addition, our productivity estimates indicate that the potential for full-fledged Islamic banks and conventional banks with Islamic banking operations to overcome the output disadvantages associated with Islamic banking is relatively limited. Merged banks are found to have higher input usage and lower productivity change, suggesting that bank mergers have not contributed positively to bank performance. Finally, our results suggest that while the East Asian financial crisis had an interim output-increasing effect in 1998, it prompted a continuing negative impact on output performance by increasing the volume of non-performing loans.
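For orientation, a generalised Malmquist index decomposes total factor productivity change between periods $t$ and $t+1$ into exactly the three components named above (notation generic, not the paper's):

$$M_{t,t+1} = \underbrace{\frac{TE_{t+1}}{TE_{t}}}_{\text{efficiency change}} \times \underbrace{\Delta T_{t,t+1}}_{\text{technical change}} \times \underbrace{S_{t,t+1}}_{\text{scale change}},$$

so the reported 2.37 percent annual productivity change is the product of these terms, with the technical-change factor dominating.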