929 results for two input two output


Relevance:

80.00%

Publisher:

Abstract:

A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model is briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in the modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. In fact, different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and where regulatory remits allow, to achieve a coherent and consistent exposure modelling process. We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model input and output, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process. (C) 2006 Elsevier Ltd. All rights reserved.
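As a minimal illustration of the probabilistic, aggregate modelling recommended above, the following Python sketch sums exposure across several pathways using Monte Carlo sampling; the pathway names, distributions and parameter values are hypothetical and are not taken from any of the models discussed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000  # Monte Carlo iterations

# Hypothetical lognormal intake distributions for three pathways (mg/kg bw/day)
pathways = {
    "dietary":          rng.lognormal(mean=-4.0, sigma=0.5, size=n),
    "consumer_product": rng.lognormal(mean=-5.0, sigma=0.8, size=n),
    "environmental":    rng.lognormal(mean=-6.0, sigma=0.6, size=n),
}

# Aggregate exposure: sum over pathways for each simulated individual
aggregate = sum(pathways.values())

print("median aggregate exposure:", np.median(aggregate))
print("95th percentile:          ", np.percentile(aggregate, 95))
```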

Relevance:

80.00%

Publisher:

Abstract:

Critical loads are the basis for policies controlling emissions of acidic substances in Europe and elsewhere. They are assessed by several elaborate and ingenious models, each of which requires many parameters, and have to be applied on a spatially-distributed basis. Often the values of the input parameters are poorly known, calling into question the validity of the calculated critical loads. This paper attempts to quantify the uncertainty in the critical loads due to this "parameter uncertainty", using examples from the UK. Models used for calculating critical loads for deposition of acidity and nitrogen in forest and heathland ecosystems were tested at four contrasting sites. Uncertainty was assessed by Monte Carlo methods. Each input parameter or variable was assigned a value, range and distribution in as objective a fashion as possible. Each model was run 5000 times at each site using parameters sampled from these input distributions. Output distributions of various critical load parameters were calculated. The results were surprising. Confidence limits of the calculated critical loads were typically considerably narrower than those of most of the input parameters. This may be due to a "compensation of errors" mechanism. The range of possible critical load values at a given site is, however, rather wide, and the tails of the distributions are typically long. The deposition reductions required for a high level of confidence that the critical load is not exceeded are thus likely to be large. The implication for pollutant regulation is that requiring a high probability of non-exceedance is likely to carry high costs. The relative contribution of the input variables to critical load uncertainty varied from site to site: any input variable could be important, and thus it was not possible to identify variables as likely targets for research into narrowing uncertainties. Sites where a number of good measurements of input parameters were available had lower uncertainties, so use of in situ measurement could be a valuable way of reducing critical load uncertainty at particularly valuable or disputed sites. From a restricted number of samples, uncertainties in heathland critical loads appear comparable to those of coniferous forest, and nutrient nitrogen critical loads to those of acidity. It was important to include correlations between input variables in the Monte Carlo analysis, but the choice of statistical distribution type was of lesser importance. Overall, the analysis provided objective support for the continued use of critical loads in policy development. (c) 2007 Elsevier B.V. All rights reserved.
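A minimal sketch of the kind of Monte Carlo analysis described above: correlated input distributions are sampled 5000 times and percentiles of the resulting critical load are reported. The simplified mass-balance expression, parameter values and correlation used here are assumptions for illustration only, not the models tested in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000  # runs per site, as in the study design described above

# Hypothetical log-scale means, spreads and correlation for two correlated inputs:
# base-cation weathering (BCw) and base-cation uptake (BCu), keq/ha/yr
mu = np.array([np.log(0.8), np.log(0.3)])
sd = np.array([0.4, 0.4])
corr = 0.6
cov = np.array([[sd[0]**2, corr * sd[0] * sd[1]],
                [corr * sd[0] * sd[1], sd[1]**2]])

bcw, bcu = np.exp(rng.multivariate_normal(mu, cov, size=n)).T
anc_crit = rng.uniform(0.1, 0.3, size=n)  # critical ANC leaching, hypothetical range

# Simplified mass-balance stand-in for a critical load of acidity
cl = bcw - bcu + anc_crit

lo, med, hi = np.percentile(cl, [5, 50, 95])
print(f"critical load: median {med:.2f}, 90% interval [{lo:.2f}, {hi:.2f}] keq/ha/yr")
```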

Relevance:

80.00%

Publisher:

Abstract:

The creation of OFDM-based Wireless Personal Area Networks (WPANs) has allowed high bit-rate wireless communication devices suitable for streaming High Definition video between consumer products, as demonstrated in Wireless-USB. However, these devices need high clock rates, particularly for the OFDM sections, resulting in high silicon cost and high electrical power consumption. Acknowledging that electrical power in wireless consumer devices is more critical than the number of implemented logic gates, this paper presents a Double Data Rate (DDR) architecture to reduce the OFDM input and output clock rate by a factor of 2. The architecture has been implemented and tested for Wireless-USB (ECMA-368), resulting in a maximum clock rate of 264 MHz, instead of 528 MHz, existing anywhere on the die.
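A trivial sketch of the clock-rate arithmetic behind the DDR approach: handling one OFDM sample on each clock edge halves the required clock. The 528 MHz sample rate and the 264 MHz result come from the abstract; the helper function itself is purely illustrative.

```python
def ddr_clock_mhz(sample_rate_mhz: float, samples_per_cycle: int = 2) -> float:
    """Clock needed when `samples_per_cycle` samples are handled per clock cycle
    (2 for a double-data-rate datapath using both clock edges)."""
    return sample_rate_mhz / samples_per_cycle

print(ddr_clock_mhz(528))      # 264.0 MHz with the DDR datapath
print(ddr_clock_mhz(528, 1))   # 528.0 MHz with a conventional single-edge design
```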

Relevance:

80.00%

Publisher:

Abstract:

This paper reviews the economic framework for the delivery of livestock services to the poor. It is argued that the demand for livestock products is likely to increase rapidly, and that the ability of the poor to participate in the opportunities presented by this growth is linked critically to the availability of good service support, on both the input and output sides. Governments therefore have a responsibility to supply the necessary public goods (including the institutions and legal frameworks) and the market infrastructure for facilitating the emergence of efficient markets for livestock services. The paper further argues that the dynamics of public policy in developing countries are much more complex than the simple application of economic logic. It is the larger political economy that often dictates policy choices. It is therefore important to integrate political economy and governance issues into the economic debate on livestock service delivery. The paper also reviews the context in which the markets for livestock services will need to function. Different countries are facing very different sets of issues, and the identification of possible interventions in livestock service markets would require careful field research and analysis. In this context, the paper suggests the elements of a research agenda for the next few years.

Relevance:

80.00%

Publisher:

Abstract:

The measurement of the impact of technical change has received significant attention within the economics literature. One popular method of quantifying the impact of technical change is the use of growth accounting index numbers. However, in a recent article Nelson and Pack (1999) criticise the use of such index numbers in situations where technical change is likely to be biased in favour of one or other inputs. In particular, they criticise the common approach of applying observed cost shares, as proxies for partial output elasticities, to weight the change in quantities, which they claim is only valid under Hicks neutrality. Recent advances in the measurement of product and factor biases of technical change developed by Balcombe et al. (2000) provide a relatively straightforward means of correcting product and factor shares in the face of biased technical progress. This paper demonstrates the correction of both revenue and cost shares used in the construction of a TFP index for UK agriculture over the period 1953 to 2000, using both revenue and cost function share equations appended with stochastic latent variables to capture the bias effect. Technical progress is shown to be biased between both individual input and output groups. Output and input quantity aggregates are then constructed using both observed and corrected share weights, and the resulting TFPs are compared. There does appear to be some significant bias in TFP if the effect of biased technical progress is not taken into account when constructing the weights.
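A minimal sketch of how share weights enter a Törnqvist-type TFP calculation, using invented two-output, two-input data; bias-corrected shares such as those of Balcombe et al. (2000) would simply replace the observed shares in the same formula. This illustrates the index-number mechanics only, not the paper's estimation procedure.

```python
import numpy as np

# Hypothetical quantity indices for two outputs and two inputs in periods 0 and 1
q_out = np.array([[100.0, 100.0],   # period 0
                  [104.0,  98.0]])  # period 1
q_in  = np.array([[100.0, 100.0],
                  [101.0, 100.5]])

# Observed revenue and cost shares (rows: periods); corrected shares would be
# substituted here when technical change is biased
s_out = np.array([[0.60, 0.40],
                  [0.62, 0.38]])
s_in  = np.array([[0.70, 0.30],
                  [0.69, 0.31]])

def tornqvist_growth(q, s):
    """Share-weighted log growth between the two periods."""
    w = 0.5 * (s[0] + s[1])                 # average shares across the two periods
    return float(np.sum(w * np.log(q[1] / q[0])))

tfp_growth = tornqvist_growth(q_out, s_out) - tornqvist_growth(q_in, s_in)
print(f"TFP growth: {100 * tfp_growth:.2f}%")
```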

Relevance:

80.00%

Publisher:

Abstract:

This paper reviews four approaches used to create rational tools to aid the planning and management of the building design process, and then proposes a fifth approach. The new approach that has been developed is based on the mechanical aspects of technology rather than subjective design issues. The knowledge base contains, for each construction technology, a generic model of the detailed design process. Each activity in the process is specified by its input and output information needs. By connecting the input demands of one technology with the output supply from another technology, a map or network of design activity is formed. Thus, it is possible to structure a specific model from the generic knowledge base within a knowledge-based engineering (KBE) system.
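A minimal sketch of that mapping idea: each (hypothetical) construction technology declares its input demands and output supplies, and an edge is created wherever one technology's output satisfies another's input. The technology and information names are invented for illustration.

```python
# Each construction technology lists the information it needs (inputs) and
# the information it produces (outputs). Names are purely illustrative.
technologies = {
    "steel_frame":   {"in": {"column grid", "floor loads"},      "out": {"frame geometry"}},
    "precast_floor": {"in": {"frame geometry", "floor loads"},   "out": {"floor depths"}},
    "cladding":      {"in": {"frame geometry", "floor depths"},  "out": {"panel layout"}},
}

# Build the design-activity network: an edge a -> b wherever an output of a
# supplies an input demanded by b
edges = [
    (a, b)
    for a, ta in technologies.items()
    for b, tb in technologies.items()
    if a != b and ta["out"] & tb["in"]
]

for a, b in edges:
    print(f"{a} -> {b}")
```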

Relevance:

80.00%

Publisher:

Abstract:

We argue that hyper-systemizing predisposes individuals to show talent, and review evidence that hyper-systemizing is part of the cognitive style of people with autism spectrum conditions (ASC). We then clarify the hyper-systemizing theory, contrasting it with the weak central coherence (WCC) and executive dysfunction (ED) theories. The ED theory has difficulty explaining the existence of talent in ASC. While both the hyper-systemizing and WCC theories postulate excellent attention to detail, by itself excellent attention to detail will not produce talent. By contrast, the hyper-systemizing theory argues that the excellent attention to detail is directed towards detecting 'if p, then q' rules (or [input-operation-output] reasoning). Such law-based pattern recognition systems can produce talent in systemizable domains. Finally, we argue that the excellent attention to detail in ASC is itself a consequence of sensory hypersensitivity. We review an experiment from our laboratory demonstrating sensory hypersensitivity in visual detection thresholds. We conclude that the origins of the association between autism and talent begin at the sensory level, include excellent attention to detail and end with hyper-systemizing.

Relevance:

80.00%

Publisher:

Abstract:

This paper proposes impedance control of redundant drive joints with double actuation (RDJ-DA) to produce compliant motions with the future goal of higher bandwidth. First, to reduce joint inertia, a double-input-single-output mechanism with one internal degree of freedom (DOF) is presented as part of the basic structure of the RDJ-DA. Next, the basic structure of RDJ-DA is further explained and its dynamics and statics are derived. Then, the impedance control scheme of RDJ-DA to produce compliant motions is proposed and the validity of the proposed controller is investigated using numerical examples.
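The RDJ-DA controller itself is not reproduced here; the following sketch only illustrates a generic joint-space impedance law on a single rigid joint, to show the kind of compliant behaviour being targeted. All parameter values are assumptions.

```python
import numpy as np

# Hypothetical single-joint parameters (the RDJ-DA double-actuation dynamics are not modelled)
J = 0.05           # joint inertia [kg m^2]
Kd, Dd = 8.0, 0.8  # desired stiffness [Nm/rad] and damping [Nms/rad]
q_ref = 0.5        # reference position [rad]
tau_ext = 0.2      # constant external torque, to show the compliant offset

dt, T = 1e-3, 2.0
q, dq = 0.0, 0.0

for _ in range(int(T / dt)):
    tau = Kd * (q_ref - q) - Dd * dq   # impedance control law
    ddq = (tau + tau_ext) / J          # rigid single-joint dynamics
    dq += ddq * dt
    q += dq * dt

print(f"final position {q:.3f} rad vs reference {q_ref} rad "
      f"(compliant offset ~ tau_ext/Kd = {tau_ext / Kd:.3f} rad)")
```

The steady-state offset under the external torque reflects the commanded stiffness, which is the essence of producing compliant rather than stiff position-controlled motion.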

Relevance:

80.00%

Publisher:

Abstract:

When the orthogonal space-time block code (STBC), or the Alamouti code, is applied in a multiple-input multiple-output (MIMO) communications system, optimum reception can be achieved by a simple signal decoupling at the receiver. The performance, however, deteriorates significantly in the presence of co-channel interference (CCI) from other users. In this paper, this CCI problem is overcome by applying independent component analysis (ICA), a blind source separation algorithm. This is based on the fact that, if the transmission data from every transmit antenna are mutually independent, they can be effectively separated at the receiver using the principle of blind source separation, and the CCI is thereby suppressed. Although they are not required by the ICA algorithm itself, a small number of training data are necessary to eliminate the phase and order ambiguities at the ICA outputs, leading to a semi-blind approach. Numerical simulations are also presented to verify the proposed ICA approach in the multiuser MIMO system.
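A minimal sketch of Alamouti encoding and the simple receiver decoupling mentioned above, for a single-user 2x1 link with a known channel; the co-channel interference and the ICA separation stage are not included, and the symbol and channel values are randomly generated for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two QPSK symbols transmitted in one Alamouti block
s1, s2 = [(2 * rng.integers(0, 2, 2) - 1) @ np.array([1, 1j]) / np.sqrt(2) for _ in range(2)]

# Flat-fading channel gains from the two transmit antennas to one receive antenna
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Received samples in the two symbol periods of the Alamouti block
r1 = h1 * s1 + h2 * s2 + noise[0]
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]

# Simple linear decoupling at the receiver (channel assumed known here)
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g

print("sent:     ", s1, s2)
print("recovered:", np.round(s1_hat, 3), np.round(s2_hat, 3))
```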

Relevance:

80.00%

Publisher:

Abstract:

A quasi-optical de-embedding technique for characterizing waveguides is demonstrated using wide-band time-resolved terahertz spectroscopy. A transfer function representation is adopted for the description of the signal in the input and output ports of the waveguides. The time-domain responses were discretized and the waveguide transfer function was obtained through a parametric approach in the z-domain, after describing the system with an AutoRegressive with eXogenous input (ARX) model as well as with a state-space model. Prior to the identification procedure, filtering was performed in the wavelet domain to minimize both the signal distortion and the noise propagating in the ARX and subspace models. The optimal filtering procedure used in the wavelet domain for the recorded time-domain signatures is described in detail. The effect of filtering prior to the identification procedures is elucidated with the aid of pole-zero diagrams. Models derived from measurements of terahertz transients in a precision WR-8 waveguide adjustable short are presented.
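A minimal sketch of ARX identification from discretized input/output records by least squares; the simulated signals stand in for the measured terahertz transients, and the wavelet-domain filtering and state-space modelling steps are omitted. The model orders and coefficients are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated input/output records standing in for the discretized THz signals:
# y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1] + b2*u[k-2] + noise
a1, a2, b1, b2 = 1.2, -0.5, 0.4, 0.2
u = rng.normal(size=500)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = a1 * y[k-1] + a2 * y[k-2] + b1 * u[k-1] + b2 * u[k-2] + 0.01 * rng.normal()

# ARX(2,2) regression matrix and least-squares parameter estimate
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)

print("true:     ", [a1, a2, b1, b2])
print("estimated:", np.round(theta, 3))
```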

Relevance:

80.00%

Publisher:

Abstract:

A quasi-optical technique for characterizing micromachined waveguides is demonstrated with wideband time-resolved terahertz spectroscopy. A transfer-function representation is adopted for the description of the relation between the signals in the input and output ports of the waveguides. The time-domain responses were discretized, and the waveguide transfer function was obtained through a parametric approach in the z domain after describing the system with an autoregressive with exogenous input model. The a priori assumption of the number of modes propagating in the structure was inferred from comparisons of the theoretical with the measured characteristic impedance, as well as from parsimony arguments. Measurements for a precision WR-8 waveguide-adjustable short, as well as for G-band reduced-height micromachined waveguides, are presented. (C) 2003 Optical Society of America.
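Complementing the identification sketch after the previous abstract, the following shows how an identified ARX model can then be examined as a transfer function on the unit circle; the coefficients here are illustrative, not values estimated from the micromachined waveguides.

```python
import numpy as np
from scipy.signal import freqz

# Illustrative ARX(2,2) coefficients, A(q) y = B(q) u:
# A(z) = 1 - 1.2 z^-1 + 0.5 z^-2,  B(z) = 0.4 z^-1 + 0.2 z^-2
a = [1.0, -1.2, 0.5]
b = [0.0, 0.4, 0.2]

w, h = freqz(b, a, worN=512)            # H(e^{jw}) = B/A evaluated on the unit circle
mag_db = 20 * np.log10(np.abs(h))
print(f"peak gain {mag_db.max():.1f} dB at normalised frequency {w[np.argmax(mag_db)]:.2f} rad/sample")
```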

Relevance:

80.00%

Publisher:

Abstract:

A new identification algorithm is introduced for the Hammerstein model, which consists of a nonlinear static function followed by a linear dynamical model. The nonlinear static function is characterised using the Bezier-Bernstein approximation. The identification method is based on a hybrid scheme including the application of the inverse of de Casteljau's algorithm, the least squares algorithm and the Gauss-Newton algorithm subject to constraints. The related work and the extension of the proposed algorithm to multi-input multi-output systems are discussed. Numerical examples, including systems with some hard nonlinearities, are used to illustrate the efficacy of the proposed approach through comparisons with other approaches.
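The following is a generic illustration of the Hammerstein structure (a static nonlinearity followed by linear dynamics) identified with plain over-parameterised least squares; it does not reproduce the paper's Bezier-Bernstein characterisation or the constrained Gauss-Newton steps, and the nonlinearity and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def static_nonlinearity(u):
    # Hypothetical static nonlinear block (a saturation-like cubic)
    return u - 0.3 * u**3

# Hammerstein simulation: x = f(u), then y[k] = 0.7*y[k-1] + 0.5*x[k-1] + noise
u = rng.uniform(-1.5, 1.5, size=400)
x = static_nonlinearity(u)
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = 0.7 * y[k-1] + 0.5 * x[k-1] + 0.01 * rng.normal()

# Over-parameterised least squares: regress y[k] on y[k-1] and powers of u[k-1],
# which recovers the linear pole and the polynomial shape of the nonlinearity
Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2, u[:-1]**3])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)

print("estimated pole:", round(theta[0], 3), "(true 0.7)")
print("estimated b*f coefficients:", np.round(theta[1:], 3), "(true ~[0.5, 0, -0.15])")
```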

Relevance:

80.00%

Publisher:

Abstract:

A processing system comprises: input means arranged to receive at least one input group of bits representing at least one respective input number; output means arranged to output at least one output group of bits representing at least one respective output number; and processing means arranged to perform an operation on the at least one input group of bits to produce the at least one output group of bits such that the at least one output number is related to the at least one input number by a mathematical operation; and wherein each of the numbers can be any of a set of numbers which includes a series of numbers, positive infinity, negative infinity and nullity.
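A minimal software sketch of an extended number set containing positive infinity, negative infinity and nullity, with a division operation that is defined for every pair of inputs; the particular conventions chosen here are assumptions for illustration, and this is not the claimed hardware arrangement.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Ext:
    """A number from a set extended with +inf, -inf and nullity."""
    kind: str          # "real", "+inf", "-inf" or "nullity"
    value: float = 0.0

    def __truediv__(self, other: "Ext") -> "Ext":
        # Division is total: every input pair produces an output element.
        if self.kind == "nullity" or other.kind == "nullity":
            return Ext("nullity")
        if self.kind == "real" and other.kind == "real":
            if other.value != 0.0:
                return Ext("real", self.value / other.value)
            if self.value > 0.0:
                return Ext("+inf")
            if self.value < 0.0:
                return Ext("-inf")
            return Ext("nullity")              # 0 / 0
        if self.kind != "real" and other.kind != "real":
            return Ext("nullity")              # infinite / infinite
        if self.kind == "real":
            return Ext("real", 0.0)            # finite / infinite
        sign = (1 if self.kind == "+inf" else -1) * (1 if other.value >= 0 else -1)
        return Ext("+inf" if sign > 0 else "-inf")  # infinite / finite

print(Ext("real", 1.0) / Ext("real", 0.0))   # +inf under these conventions
print(Ext("real", 0.0) / Ext("real", 0.0))   # nullity under these conventions
```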

Relevance:

80.00%

Publisher:

Abstract:

Symmetrical behaviour of the covariance matrix and the positive-definite criterion are used to simplify identification of single-input/single-output systems using recursive least squares. Simulation results are obtained and these are compared with ordinary recursive least squares. The adaptive nature of the identifier is verified by varying the system parameters on convergence.
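A minimal sketch of a recursive least squares identifier for a single-input/single-output model, with the symmetric covariance matrix explicitly symmetrised after each update as a simple stand-in for the simplification described above; the simulated system and parameter values are assumptions, and this is not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated SISO system: y[k] = 0.8*y[k-1] + 0.5*u[k-1] + noise
u = rng.normal(size=300)
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = 0.8 * y[k-1] + 0.5 * u[k-1] + 0.01 * rng.normal()

theta = np.zeros(2)            # parameter estimates [a, b]
P = 1000.0 * np.eye(2)         # covariance matrix (symmetric, positive definite)

for k in range(1, len(u)):
    phi = np.array([y[k-1], u[k-1]])
    denom = 1.0 + phi @ P @ phi          # scalar; positive while P stays positive definite
    gain = P @ phi / denom
    theta = theta + gain * (y[k] - phi @ theta)
    P = P - np.outer(gain, phi @ P)
    P = 0.5 * (P + P.T)                  # enforce symmetry of the covariance matrix

print("estimated parameters:", np.round(theta, 3), "(true [0.8, 0.5])")
```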

Relevance:

80.00%

Publisher:

Abstract:

A self-tuning controller which automatically assigns weightings to control and set-point following is introduced. This discrete-time single-input single-output controller is based on a generalized minimum-variance control strategy. The automatic on-line selection of weightings is very convenient, especially when the system parameters are unknown or slowly varying with respect to time, which is generally considered to be the type of systems for which self-tuning control is useful. This feature also enables the controller to overcome difficulties with non-minimum phase systems.
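A compact sketch of the self-tuning idea for a hypothetical first-order plant: recursive least squares estimates the plant on-line and a minimum-variance-style law with a fixed control weighting computes the input. The automatic on-line selection of weightings described above is not reproduced, and all numerical values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# Unknown first-order plant: y[k] = a*y[k-1] + b*u[k-1] + noise
a_true, b_true = 0.9, 0.4
lam = 0.1                       # fixed control weighting (the paper selects this on-line)
setpoint = 1.0

theta = np.array([0.5, 0.5])    # initial estimates of [a, b]
P = 100.0 * np.eye(2)
y_prev, u_prev = 0.0, 0.0

for k in range(300):
    # Plant response to the previous control action
    y = a_true * y_prev + b_true * u_prev + 0.02 * rng.normal()

    # Recursive least-squares update of the plant model
    phi = np.array([y_prev, u_prev])
    gain = P @ phi / (1.0 + phi @ P @ phi)
    theta = theta + gain * (y - phi @ theta)
    P = P - np.outer(gain, phi @ P)

    # Minimum-variance-style law: minimise (a*y + b*u - setpoint)^2 + lam*u^2 over u,
    # plus a small dither to keep the estimator excited in closed loop
    a_hat, b_hat = theta
    u = b_hat * (setpoint - a_hat * y) / (b_hat**2 + lam) + 0.05 * rng.normal()

    y_prev, u_prev = y, u

print("estimated [a, b]:", np.round(theta, 3), "(true [0.9, 0.4])")
print("final output:", round(y_prev, 3), "setpoint:", setpoint)
```

The fixed weighting lam leaves a small steady-state offset from the setpoint, which is exactly the trade-off between control effort and set-point following that the automatic weighting selection described in the abstract is designed to manage.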