55 results for two input two output
in CentAUR: Central Archive University of Reading - UK
Abstract:
In this study a minimum variance neuro self-tuning proportional-integral-derivative (PID) controller is designed for complex multiple input-multiple output (MIMO) dynamic systems. An approximation model is constructed, which consists of two functional blocks. The first block uses a linear submodel to approximate dominant system dynamics around a selected number of operating points. The second block is used as an error agent, implemented by a neural network, to accommodate the inaccuracy possibly introduced by the linear submodel approximation, various complexities/uncertainties, and the complicated coupling effects frequently exhibited in non-linear MIMO dynamic systems. With the proposed model structure, the controller design for a MIMO plant with n inputs and n outputs can, for example, be decomposed into n independent single input-single output (SISO) subsystem designs. The effectiveness of the controller design procedure is initially verified through simulations of industrial examples.
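As a reading aid, here is a minimal Python sketch of the two-block idea described above; the coefficients, network weights and PID gains are invented placeholders, not the authors' design:

```python
# Minimal sketch (not the authors' implementation): a two-block
# approximation model for one channel of an n-by-n MIMO plant.
# The linear submodel captures dominant local dynamics; a small
# neural network acts as an "error agent" for the residual.
import numpy as np

rng = np.random.default_rng(0)

# --- Block 1: linear submodel around a chosen operating point ---
a, b = 0.8, 0.5          # hypothetical local coefficients
def linear_submodel(y_prev, u_prev):
    return a * y_prev + b * u_prev

# --- Block 2: one-hidden-layer network modelling the residual ---
W1 = rng.normal(scale=0.1, size=(4, 2))   # weights are placeholders;
b1 = np.zeros(4)                          # in practice they would be
W2 = rng.normal(scale=0.1, size=4)        # trained on model-error data
def error_agent(y_prev, u_prev):
    h = np.tanh(W1 @ np.array([y_prev, u_prev]) + b1)
    return W2 @ h

def approx_model(y_prev, u_prev):
    # plant output ~ dominant linear part + learned residual
    return linear_submodel(y_prev, u_prev) + error_agent(y_prev, u_prev)

# With n such channel models, each loop can be closed by its own SISO
# PID controller, shown here in incremental (velocity) form.
def pid_step(e, e1, e2, kp=1.0, ki=0.1, kd=0.05):
    return kp * (e - e1) + ki * e + kd * (e - 2 * e1 + e2)
```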
Abstract:
In this paper we estimate a translog output distance function for a balanced panel of state-level data for the Australian dairy processing sector. We estimate a fixed effects specification employing Bayesian methods, with and without the imposition of monotonicity and curvature restrictions. Our results indicate that Tasmania and Victoria are the most technically efficient states, with New South Wales being the least efficient. The imposition of theoretical restrictions marginally affects the results, especially with respect to estimates of technical change and industry deregulation. Importantly, our bias estimates show changes in both input use and output mix that result from deregulation. Specifically, we find that deregulation has positively biased the production of butter, cheese and powders.
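For readers unfamiliar with the functional form, a generic translog output distance function over outputs y_m and inputs x_k is sketched below; the coefficient symbols are illustrative, not the paper's notation, and the symmetry and linear-homogeneity-in-outputs restrictions shown are the ones usually imposed:

```latex
\ln D_O = \alpha_0 + \sum_{m}\beta_m \ln y_m
        + \tfrac{1}{2}\sum_{m}\sum_{n}\beta_{mn}\ln y_m \ln y_n
        + \sum_{k}\alpha_k \ln x_k
        + \tfrac{1}{2}\sum_{k}\sum_{l}\alpha_{kl}\ln x_k \ln x_l
        + \sum_{k}\sum_{m}\delta_{km}\ln x_k \ln y_m,
\qquad
\beta_{mn}=\beta_{nm},\quad \sum_m \beta_m = 1,\quad
\sum_n \beta_{mn} = 0,\quad \sum_m \delta_{km} = 0.
```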
Abstract:
Motivation: There is a frequent need to apply a large range of local or remote prediction and annotation tools to one or more sequences. We have created a tool able to dispatch one or more sequences to assorted services by defining a consistent XML format for data and annotations. Results: By analyzing annotation tools, we have determined that annotations can be described using one or more of six forms of data: numeric or textual annotation of residues, domains (residue ranges) or whole sequences. With this in mind, XML DTDs have been designed to store the input and output of any server. Plug-in wrappers to a number of services have been written, which are called from a master script. The resulting APATML is then formatted for display in HTML; alternatively, further tools may be written to perform post-analysis.
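The abstract does not reproduce the DTDs, so the element names below are hypothetical; this Python sketch merely illustrates the six annotation forms described (numeric or textual values attached to a residue, a domain, or a whole sequence):

```python
# Hypothetical illustration of the six annotation forms; the real
# APATML element names and DTDs are not shown in the abstract.
import xml.etree.ElementTree as ET

doc = ET.Element("annotation_set", sequence_id="P12345")

residue = ET.SubElement(doc, "residue_annotation", position="42")
ET.SubElement(residue, "numeric").text = "0.87"      # e.g. a score
ET.SubElement(residue, "textual").text = "buried"

domain = ET.SubElement(doc, "domain_annotation", start="10", end="55")
ET.SubElement(domain, "numeric").text = "1.2e-6"     # e.g. an E-value
ET.SubElement(domain, "textual").text = "SH3 domain"

seq = ET.SubElement(doc, "sequence_annotation")
ET.SubElement(seq, "numeric").text = "63.1"          # e.g. % identity
ET.SubElement(seq, "textual").text = "putative kinase"

print(ET.tostring(doc, encoding="unicode"))
```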
Abstract:
In immediate recall tasks, visual recency is substantially enhanced when output interference is low (Cowan, Saults, Elliott, & Moreno, 2002; Craik, 1969), whereas auditory recency remains high even under conditions of high output interference. This auditory advantage has been interpreted in terms of auditory resistance to output interference (e.g., Neath & Surprenant, 2003). In this study the auditory-visual difference at low output interference re-emerged when ceiling effects were accounted for, but only with spoken output. With written responding the auditory advantage remained significantly larger with high than with low output interference. These new data suggest that both superior auditory encoding and modality-specific output interference contribute to the classic auditory-visual modality effect.
Abstract:
To improve the welfare of the rural poor and keep them in the countryside, the government of Botswana has been spending 40% of the value of agricultural GDP on agricultural support services. But can investment make smallholder agriculture prosperous in such adverse conditions? This paper derives an answer by applying a two-output six-input stochastic translog distance function, with inefficiency effects and biased technical change, to panel data for the 18 districts and the commercial agricultural sector, from 1979 to 1996. This model demonstrates that herds are the most important input, followed by draft power, land and seeds. Multilateral indices for technical change, technical efficiency and total factor productivity (TFP) show that the technology level of the commercial agricultural sector is more than six times that of traditional agriculture and that the gap has been increasing, due to technological regression in traditional agriculture and modest progress in commercial agriculture. Since the levels of efficiency are similar, the same pattern is repeated by the TFP indices. This result highlights the policy dilemma of the trade-off between efficiency and equity objectives.
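A reading aid rather than the paper's own formula: multilateral TFP comparisons decompose multiplicatively into a technology term and an efficiency term, which is why similar efficiency levels force the TFP gap to mirror the technology gap:

```latex
\frac{\mathrm{TFP}_i}{\mathrm{TFP}_j}
  = \frac{T_i}{T_j}\times\frac{TE_i}{TE_j},
\qquad
TE_i \approx TE_j \;\Rightarrow\;
\frac{\mathrm{TFP}_i}{\mathrm{TFP}_j} \approx \frac{T_i}{T_j}.
```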
Abstract:
The use of special units for logarithmic ratio quantities is reviewed. The neper is used with a natural logarithm (logarithm to the base e) to express the logarithm of the amplitude ratio of two pure sinusoidal signals, particularly in the context of linear systems where it is desired to represent the gain or loss in amplitude of a single-frequency signal between the input and output. The bel, and its more commonly used submultiple, the decibel, are used with a decadic logarithm (logarithm to the base 10) to measure the ratio of two power-like quantities, such as a mean square signal or a mean square sound pressure in acoustics. Thus two distinctly different quantities are involved. In this review we define the quantities first, without reference to the units, as is standard practice in any system of quantities and units. We show that two different definitions of the quantity power level, or logarithmic power ratio, are possible. We show that this leads to two different interpretations for the meaning and numerical values of the units bel and decibel. We review the question of which of these alternative definitions is actually used, or is used by implication, by workers in the field. Finally, we discuss the relative advantages of the alternative definitions.
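For concreteness, the conventional definitions under discussion can be written compactly (the review's point is that an alternative definition of the power level, and hence of the bel, is also defensible):

```latex
L_A = \ln\frac{A_1}{A_2}\ \mathrm{Np} = 20\,\lg\frac{A_1}{A_2}\ \mathrm{dB},
\qquad
L_P = \tfrac{1}{2}\ln\frac{P_1}{P_2}\ \mathrm{Np} = 10\,\lg\frac{P_1}{P_2}\ \mathrm{dB},
\qquad
1\ \mathrm{Np} = \frac{20}{\ln 10}\ \mathrm{dB} \approx 8.686\ \mathrm{dB}.
```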
Abstract:
Modal filtering is based on the capability of single-mode waveguides to transmit only one complex amplitude function, eliminating virtually any perturbation of the interfering wavefronts and thus making very high rejection ratios possible in a nulling interferometer. In the present paper we focus on the progress of Integrated Optics in the thermal infrared [6-20 μm] range, one of the two candidate technologies for the fabrication of Modal Filters, together with fiber optics. In conclusion of the European Space Agency's (ESA) "Integrated Optics for Darwin" activity, etched layers of chalcogenide material deposited on chalcogenide glass substrates were selected among four candidates as the technology with the best potential to simultaneously meet the filtering efficiency, absolute and spectral transmission, and beam coupling requirements. ESA's new "Integrated Optics" activity started in mid-2007 with the purpose of improving the technology until compliant prototypes can be manufactured and validated, expected by the end of 2009. The present paper introduces the project and the components' requirements and functions. The selected materials and preliminary designs, as well as the experimental validation logic and test benches, are presented. More details are provided on the progress of the main technology: vacuum deposition in the co-evaporation mode and subsequent etching of chalcogenide layers. In addition, preliminary investigations of an alternative technology, based on burying a chalcogenide optical fiber core into a chalcogenide substrate, are presented. Specific developments of anti-reflective solutions designed for the mitigation of Fresnel losses at the input and output surfaces of the components are also introduced.
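To see why anti-reflective treatment matters here, consider the normal-incidence Fresnel reflectance per facet, taking n ≈ 2.8 as a representative mid-infrared chalcogenide refractive index (an assumed value, not one quoted in the paper):

```latex
R = \left(\frac{n-1}{n+1}\right)^{2}
  \approx \left(\frac{1.8}{3.8}\right)^{2} \approx 0.22,
```

i.e. roughly 22% of the light would be lost at each uncoated surface.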
Abstract:
A neural network was used to map three PID operating regions for a two-input two-output steam generator system. The network was used in stand-alone feedforward operation to control the whole operating range of the process, after being trained on the PID controllers corresponding to each control region. The network inputs are the plant error signals, their integral, their derivative, and a 4-error delay train.
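A sketch of such a controller, with the architecture inferred from the abstract (per channel: error, integral, derivative, plus a 4-sample error delay train, so 7 features x 2 channels = 14 network inputs); the weights here are untrained placeholders:

```python
# Sketch only: a feedforward network standing in for the region-mapped
# PIDs of a two-input two-output plant. Weights would be trained on
# recorded PID behaviour in each operating region.
import numpy as np
from collections import deque

class NeuroPID:
    def __init__(self, n_hidden=10, rng=np.random.default_rng(1)):
        self.W1 = rng.normal(scale=0.1, size=(n_hidden, 14))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(scale=0.1, size=(2, n_hidden))  # 2 outputs
        self.integ = np.zeros(2)
        self.prev_e = np.zeros(2)
        self.delays = [deque([0.0] * 4, maxlen=4) for _ in range(2)]

    def step(self, e, dt=1.0):
        e = np.asarray(e, dtype=float)
        self.integ += e * dt
        deriv = (e - self.prev_e) / dt
        feats = []
        for i in range(2):      # error, integral, derivative, delay train
            feats += [e[i], self.integ[i], deriv[i], *self.delays[i]]
            self.delays[i].appendleft(e[i])
        self.prev_e = e.copy()
        h = np.tanh(self.W1 @ np.array(feats) + self.b1)
        return self.W2 @ h      # two control signals

controller = NeuroPID()
u = controller.step([0.3, -0.1])   # hypothetical error pair
```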
Cross-layer design for MIMO systems over spatially correlated and keyhole Nakagami-m fading channels
Abstract:
Cross-layer design is a generic designation for a set of efficient adaptive transmission schemes, across multiple layers of the protocol stack, that are aimed at enhancing the spectral efficiency and increasing the transmission reliability of wireless communication systems. In this paper, one such cross-layer design scheme that combines physical layer adaptive modulation and coding (AMC) with link layer truncated automatic repeat request (T-ARQ) is proposed for multiple-input multiple-output (MIMO) systems employing orthogonal space-time block coding (OSTBC). The performance of the proposed cross-layer design is evaluated in terms of achievable average spectral efficiency (ASE), average packet loss rate (PLR) and outage probability, for which analytical expressions are derived, considering transmission over two types of MIMO fading channels, namely, spatially correlated Nakagami-m fading channels and keyhole Nakagami-m fading channels. Furthermore, the effects of the maximum number of ARQ retransmissions, the numbers of transmit and receive antennas, the Nakagami fading parameter and the spatial correlation parameters are studied and discussed based on numerical results and comparisons.
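As an illustration of how the ASE term is typically assembled under AMC (the thresholds and per-mode rates below are invented, and the paper's analytical expressions for correlated and keyhole channels are not reproduced), here is a sketch for the plain Nakagami-m case, where the post-processing SNR is Gamma-distributed:

```python
# Minimal ASE sketch under AMC: mode n (rate R_n bit/s/Hz) is used when
# the SNR falls in [g_n, g_{n+1}); ASE = sum_n R_n * Pr(mode n).
import numpy as np
from scipy.stats import gamma

m = 2.0                  # Nakagami fading parameter
avg_snr = 10.0           # mean SNR (linear scale)
rates = np.array([1.0, 2.0, 3.0, 4.5, 6.0])         # bit/s/Hz per mode
thresholds = np.array([1.0, 3.0, 7.0, 15.0, 30.0])  # switching SNRs

snr_dist = gamma(a=m, scale=avg_snr / m)   # Gamma SNR for Nakagami-m
edges = np.append(thresholds, np.inf)
mode_probs = snr_dist.cdf(edges[1:]) - snr_dist.cdf(edges[:-1])

ase = np.sum(rates * mode_probs)           # bit/s/Hz
outage = snr_dist.cdf(thresholds[0])       # SNR too low for any mode
print(f"ASE = {ase:.2f} bit/s/Hz, outage = {outage:.3f}")
```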
Abstract:
This paper proposes a set of well-defined steps to design functional verification monitors intended to verify Floating Point Units (FPUs) described in HDL. The first step consists of defining the input and output domain coverage. Next, the corner cases are defined. Finally, an already verified reference model is used to test the correctness of the Device Under Verification (DUV). As a case study, a monitor for an IEEE 754-2008 compliant design is implemented. This monitor is built to be easily instantiated into verification frameworks such as OVM. Two different designs were verified, reaching complete input coverage and fully compliant results.
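A minimal sketch of the reference-model check at the heart of such a monitor, in Python, with the host's double-precision addition standing in for both DUV and golden model; function and case names are hypothetical:

```python
# Sketch of a monitor's core check: DUV outputs are compared
# encoding-for-encoding against an already-verified reference model
# over directed corner cases plus random stimuli.
import math
import random
import struct

CORNER_CASES = [0.0, -0.0, 1.0, -1.0, math.inf, -math.inf, math.nan,
                5e-324,                    # smallest subnormal double
                1.7976931348623157e308]    # largest finite double

def reference_add(a, b):
    return a + b   # stand-in reference; a real flow uses a golden model

def bits(x):       # compare encodings so -0.0 and NaN are distinguished
    return struct.pack(">d", x)

def check(duv_add, n_random=1000, seed=0):
    rng = random.Random(seed)
    stimuli = [(a, b) for a in CORNER_CASES for b in CORNER_CASES]
    stimuli += [(rng.uniform(-1e6, 1e6), rng.uniform(-1e6, 1e6))
                for _ in range(n_random)]
    failures = [(a, b) for a, b in stimuli
                if bits(duv_add(a, b)) != bits(reference_add(a, b))]
    return failures   # empty list => monitor passes

print(len(check(lambda a, b: a + b)))   # trivially passing DUV: 0
```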
Abstract:
A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model is briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in the modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. In fact, different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and where regulatory remits allow, to produce a coherent and consistent exposure modelling process. We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model input and output, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process.
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe and elsewhere. They are assessed by several elaborate and ingenious models, each of which requires many parameters, and have to be applied on a spatially-distributed basis. Often the values of the input parameters are poorly known, calling into question the validity of the calculated critical loads. This paper attempts to quantify the uncertainty in the critical loads due to this "parameter uncertainty", using examples from the UK. Models used for calculating critical loads for deposition of acidity and nitrogen in forest and heathland ecosystems were tested at four contrasting sites. Uncertainty was assessed by Monte Carlo methods. Each input parameter or variable was assigned a value, range and distribution in as objective a fashion as possible. Each model was run 5000 times at each site using parameters sampled from these input distributions, and output distributions of various critical load parameters were calculated. The results were surprising. Confidence limits of the calculated critical loads were typically considerably narrower than those of most of the input parameters. This may be due to a "compensation of errors" mechanism. The range of possible critical load values at a given site is, however, rather wide, and the tails of the distributions are typically long. The deposition reductions required for a high level of confidence that the critical load is not exceeded are thus likely to be large. The implication for pollutant regulation is that requiring a high probability of non-exceedance is likely to carry high costs. The relative contribution of the input variables to critical load uncertainty varied from site to site: any input variable could be important, so it was not possible to identify variables as likely targets for research into narrowing uncertainties. Sites where a number of good measurements of input parameters were available had lower uncertainties, so use of in situ measurement could be a valuable way of reducing critical load uncertainty at particularly valuable or disputed sites. From a restricted number of samples, uncertainties in heathland critical loads appear comparable to those of coniferous forest, and nutrient nitrogen critical loads to those of acidity. It was important to include correlations between input variables in the Monte Carlo analysis, but the choice of statistical distribution type was of lesser importance. Overall, the analysis provided objective support for the continued use of critical loads in policy development.
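The Monte Carlo scheme described is straightforward to sketch in Python; the critical-load model below is a placeholder (real models, e.g. the simple mass balance, differ), and the distributions and correlation are invented for illustration:

```python
# Illustrative Monte Carlo scheme of the kind described: sample
# correlated inputs, run the model 5000 times, report percentile
# confidence limits of the output distribution.
import numpy as np

rng = np.random.default_rng(42)
n_runs = 5000

# Hypothetical inputs: base-cation deposition and weathering rate,
# lognormally distributed and positively correlated (rho = 0.5).
mean_log = np.array([np.log(0.8), np.log(0.5)])
sd_log = np.array([0.3, 0.4])
rho = 0.5
cov = np.array([[sd_log[0]**2, rho * sd_log[0] * sd_log[1]],
                [rho * sd_log[0] * sd_log[1], sd_log[1]**2]])
samples = np.exp(rng.multivariate_normal(mean_log, cov, size=n_runs))

def critical_load(bc_dep, weathering):
    # placeholder model: real critical-load models are more elaborate
    return bc_dep + weathering

cl = critical_load(samples[:, 0], samples[:, 1])
lo_, med, hi_ = np.percentile(cl, [2.5, 50, 97.5])
print(f"median = {med:.2f}, 95% CI = [{lo_:.2f}, {hi_:.2f}]")
```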
Abstract:
The creation of OFDM-based Wireless Personal Area Networks (WPANs) has allowed high bit-rate wireless communication devices suitable for streaming High Definition video between consumer products, as demonstrated in Wireless-USB. However, these devices need high clock rates, particularly for the OFDM sections, resulting in high silicon cost and high electrical power. Acknowledging that electrical power in wireless consumer devices is more critical than the number of implemented logic gates, this paper presents a Double Data Rate (DDR) architecture to reduce the OFDM input and output clock rate by a factor of 2. The architecture has been implemented and tested for Wireless-USB (ECMA-368), resulting in a maximum clock of 264 MHz instead of 528 MHz anywhere on the die.
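The clock-rate saving follows directly from moving two samples per clock cycle, one on each edge:

```latex
f_{\mathrm{clk}} = \frac{f_s}{2}
                 = \frac{528\ \mathrm{MHz}}{2}
                 = 264\ \mathrm{MHz}.
```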
Abstract:
This paper reviews the economic framework for the delivery of livestock services to the poor. It is argued that the demand for livestock products is likely to increase rapidly and the ability of the poor to participate in the opportunities presented by this growth is linked critically to the availability of good service support, both on the input and output side. Governments therefore have a responsibility to supply the necessary public goods (including the institutions and legal frameworks), and the market infrastructure for facilitating the emergence of efficient markets for livestock services. The paper further argues that the dynamics of public policy in developing countries are much more complex than the simple application of economic logic. It is the larger political economy that often dictates policy choices. It is therefore important to integrate political economy and governance issues into the economic debate on livestock service delivery. The paper also reviews the context in which the markets for livestock services will need to function. Different countries are facing very different sets of issues, and the identification of possible interventions in livestock service markets would require careful field research and analysis. In this context, the paper suggests the elements of a research agenda for the next few years.