842 results for Input-output model
Abstract:
Better management of knowledge assets has the potential to improve business processes and increase productivity. This fact has led to considerable interest in recent years in the knowledge management (KM) phenomenon, and in the main dimensions that can impact on its application in construction. However, a lack of a systematic way of assessing the contribution of KM initiatives towards achieving organisational business objectives is evident. This paper describes the first stage of a research project intended to develop, and empirically test, a KM input-process-output framework comprising unique and well-defined theoretical constructs representing the KM process and its internal and external determinants in the context of construction. The paper presents the underlying principles used in operationally defining each construct through the use of the extant KM literature. The KM process itself is explicitly modelled via a number of clearly articulated phases that ultimately lead to knowledge utilisation and capitalisation, which in turn adds value or otherwise to meeting defined business objectives. The main objective of the model is to reduce the impact of subjectivity in assessing the contribution made by KM practices and initiatives toward achieving performance improvements.
Abstract:
In Chapters 1 through 9 of the book (with the exception of a brief discussion on observers and integral action in Section 5.5 of Chapter 5) we considered constrained optimal control problems for systems without uncertainty, that is, with no unmodelled dynamics or disturbances, and where the full state was available for measurement. More realistically, however, it is necessary to consider control problems for systems with uncertainty. This chapter addresses some of the issues that arise in this situation. As in Chapter 9, we adopt a stochastic description of uncertainty, which associates probability distributions to the uncertain elements, that is, disturbances and initial conditions. (See Section 12.6 for references to alternative approaches to model uncertainty.) When incomplete state information exists, a popular observer-based control strategy in the presence of stochastic disturbances is to use the certainty equivalence (CE) principle, introduced in Section 5.5 of Chapter 5 for deterministic systems. In the stochastic framework, CE consists of estimating the state and then using these estimates as if they were the true state in the control law that results if the problem were formulated as a deterministic problem (that is, without uncertainty). This strategy is motivated by the unconstrained problem with a quadratic objective function, for which CE is indeed the optimal solution (Åström 1970, Bertsekas 1976). One of the aims of this chapter is to explore the issues that arise from the use of CE in RHC in the presence of constraints. We then turn to the obvious question about the optimality of the CE principle. We show that CE is, indeed, not optimal in general. We also analyse the possibility of obtaining truly optimal solutions for single-input linear systems with input constraints and uncertainty related to output feedback and stochastic disturbances. We first find the optimal solution for the case of horizon N = 1, and then we indicate the complications that arise in the case of horizon N = 2. Our conclusion is that, for the case of linear constrained systems, the extra effort involved in the optimal feedback policy is probably not justified in practice. Indeed, we show by example that CE can give near-optimal performance. We thus advocate this approach in real applications.
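A minimal sketch of the certainty-equivalence loop described above, assuming a linear system with Gaussian disturbances: a Kalman filter produces the state estimate, and the deterministic control law is applied to that estimate as if it were the true state. All matrices are illustrative, and the saturated finite-horizon LQ gain is only a stand-in for the constrained receding-horizon optimisation discussed in the chapter.

```python
import numpy as np

# Hypothetical double-integrator-like plant with one input.
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
C = np.array([[1.0, 0.0]])
Q, R = np.eye(2), np.array([[0.1]])          # stage costs
W, V = 0.01 * np.eye(2), np.array([[0.1]])   # process / measurement noise covariances
u_max = 1.0                                  # input constraint |u| <= u_max
N = 10                                       # horizon

def rhc_gain(A, B, Q, R, N):
    """First-step feedback gain of the unconstrained horizon-N LQ problem."""
    P = Q.copy()
    for _ in range(N):  # Riccati difference equation, iterated N times
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

K = rhc_gain(A, B, Q, R, N)
x_hat, P_est = np.zeros((2, 1)), np.eye(2)   # filter state

def ce_rhc_step(y):
    """One CE-RHC step: measurement update, CE control, time update."""
    global x_hat, P_est
    # Kalman measurement update
    S = C @ P_est @ C.T + V
    L = P_est @ C.T @ np.linalg.inv(S)
    x_hat = x_hat + L @ (y - C @ x_hat)
    P_est = (np.eye(2) - L @ C) @ P_est
    # CE control: deterministic law applied to the estimate, saturated
    # (a simple surrogate for solving the constrained QP each step)
    u = np.clip(-K @ x_hat, -u_max, u_max)
    # Kalman time update
    x_hat = A @ x_hat + B @ u
    P_est = A @ P_est @ A.T + W
    return u
```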
Abstract:
Costs of purchasing new piglets and of feeding them until slaughter are the main variable expenditures in pig fattening. They both depend on slaughter intensity, the nature of feeding patterns and the technological constraints of pig fattening, such as genotype. Therefore, it is of interest to examine the effect of production technology and changes in input and output prices on feeding and slaughter decisions. This study examines the problem by using a dynamic programming model that links genetic characteristics of a pig to feeding decisions and the timing of slaughter and takes into account how these jointly affect the quality-adjusted value of a carcass. The model simulates the growth mechanism of a pig under optional feeding and slaughter patterns and then solves the optimal feeding and slaughter decisions recursively. The state of nature and the genotype of a pig are known in the analysis. The main contribution of this study is the dynamic approach that explicitly takes into account carcass quality while simultaneously optimising feeding and slaughter decisions. The method maximises the internal rate of return to the capacity unit. Hence, the results can have a vital impact on the competitiveness of pig production, which is known to be quite capital-intensive. The results suggest that the producer can benefit significantly from improvements in the pig's genotype, because they improve the efficiency of pig production. The annual benefits from obtaining pigs of improved genotype can be more than €20 per capacity unit. The annual net benefits of animal breeding to pig farms can also be considerable. Animals of improved genotype can reach optimal slaughter maturity faster and produce leaner meat than animals of poor genotype. In order to fully utilise the benefits of animal breeding, the producer must adjust feeding and slaughter patterns on the basis of genotype. The results suggest that the producer can benefit from flexible feeding technology. The flexible feeding technology segregates pigs into groups according to their weight, carcass leanness, genotype and sex, and thereafter optimises feeding and slaughter decisions separately for these groups. Typically, such a technology provides incentives to feed piglets with protein-rich feed such that the genetic potential to produce leaner meat is fully utilised. When the pig approaches slaughter maturity, the share of protein-rich feed in the diet gradually decreases and the amount of energy-rich feed increases. Generally, the optimal slaughter weight is within the weight range that pays the highest price per kilogram of pig meat. The optimal feeding pattern and the optimal timing of slaughter depend on price ratios. In particular, an increase in the price of pig meat provides incentives to increase growth rates up to the pig's biological maximum by increasing the amount of energy in the feed. Price changes and changes in slaughter premiums can also have large income effects. Key words: barley, carcass composition, dynamic programming, feeding, genotypes, lean, pig fattening, precision agriculture, productivity, slaughter weight, soybeans
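A hedged sketch of the backward recursion underlying such a model: each period the producer either slaughters (collecting a quality-adjusted carcass price) or chooses a feed mix and lets the pig grow. All growth, price and cost functions below are illustrative placeholders, not the paper's estimated model.

```python
import numpy as np

weights = np.arange(70, 131)          # live-weight grid, kg
feeds = [0.0, 0.5, 1.0]               # share of protein-rich feed in the diet
T = 16                                # weeks remaining in the planning horizon
beta = 0.999                          # weekly discount factor

def carcass_value(w):
    # Illustrative price schedule with a premium window around 110 kg
    price = 1.5 if 100 <= w <= 120 else 1.3
    return price * 0.75 * w           # 0.75 = assumed carcass yield

def growth(w, f):
    # Illustrative growth: more energy-rich feed (1 - f) -> faster gain
    return min(w + 4.0 + 2.0 * (1.0 - f), float(weights[-1]))

def feed_cost(f):
    return 6.0 + 3.0 * f              # protein-rich feed assumed dearer

V = np.array([carcass_value(w) for w in weights], dtype=float)  # terminal: must slaughter
for t in range(T - 1, -1, -1):
    V_new = np.empty_like(V)
    for i, w in enumerate(weights):
        slaughter = carcass_value(w)
        # value of growing one more week under the best feed mix
        # (next weight snapped to the grid via searchsorted)
        grow = max(
            beta * V[min(np.searchsorted(weights, growth(w, f)), len(weights) - 1)]
            - feed_cost(f)
            for f in feeds
        )
        V_new[i] = max(slaughter, grow)   # slaughter now vs. keep feeding
    V = V_new
```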
Abstract:
The “distractor-frequency effect” refers to the finding that high-frequency (HF) distractor words slow picture naming less than low-frequency (LF) distractors in the picture–word interference paradigm. Rival input and output accounts of this effect have been proposed. The former attributes the effect to attentional selection mechanisms operating during distractor recognition, whereas the latter attributes it to monitoring/decision mechanisms operating on distractor and target responses in an articulatory buffer. Using high-density (128-channel) EEG, we tested hypotheses from these rival accounts. In addition to conducting stimulus- and response-locked whole-brain corrected analyses, we investigated the correct-related negativity, an ERP observed on correct trials at fronto-central electrodes proposed to reflect the involvement of domain-general monitoring. The whole-brain ERP analysis revealed a significant effect of distractor frequency at inferior right frontal and temporal sites between 100 and 300 msec post-stimulus onset, during which lexical access is thought to occur. Response-locked, region-of-interest (ROI) analyses of fronto-central electrodes revealed a correct-related negativity starting 121 msec before and peaking 125 msec after vocal onset on the grand averages. Slope analysis of this component revealed a significant difference between HF and LF distractor words, with the former associated with a steeper slope on the time window spanning from 100 msec before to 100 msec after vocal onset. The finding of ERP effects in time windows and components corresponding to both lexical processing and monitoring suggests the distractor-frequency effect is most likely associated with more than one physiological mechanism.
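A minimal sketch of the slope analysis described above: fit a straight line to the response-locked waveform in the window from 100 msec before to 100 msec after vocal onset, separately per condition, and compare the fitted slopes. The waveform arrays and sampling rate below are synthetic placeholders, not the study's data.

```python
import numpy as np

fs = 512  # assumed sampling rate, Hz
# erp_hf / erp_lf: response-locked grand-average waveforms (µV) over a
# fronto-central ROI; times in seconds, 0 = vocal onset. Fake data here.
times = np.linspace(-0.3, 0.3, int(fs * 0.6) + 1)
rng = np.random.default_rng(0)
erp_hf = -times * 8.0 + rng.normal(0, 0.2, times.size)   # placeholder HF condition
erp_lf = -times * 5.0 + rng.normal(0, 0.2, times.size)   # placeholder LF condition

def crn_slope(erp, times, t0=-0.1, t1=0.1):
    """Least-squares slope (µV/s) of the ERP within the [t0, t1] window."""
    mask = (times >= t0) & (times <= t1)
    slope, _intercept = np.polyfit(times[mask], erp[mask], 1)
    return slope

print("HF slope:", crn_slope(erp_hf, times))
print("LF slope:", crn_slope(erp_lf, times))
```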
Abstract:
Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq. km in Bangalore. The N values have been corrected (N_c) for different parameters such as overburden stress, size of borehole, type of sampler, length of connecting rod, etc. In the three-dimensional site characterization model, the function N_c = N_c(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an N_c value, is to be approximated, so that the N_c value at any half-space point in Bangalore can be determined. The first algorithm uses the least-square support vector machine (LSSVM), which is related to a ridge-regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
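A minimal LSSVM regression sketch for N_c = N_c(X, Y, Z): LSSVM reduces to solving a single linear system in the dual (hence its ridge-regression flavor), here with an RBF kernel. The coordinates, N_c values, and hyperparameters below are placeholders, not the paper's data or settings.

```python
import numpy as np

rng = np.random.default_rng(1)
XYZ = rng.uniform(0, 1, size=(200, 3))             # fake borehole point coordinates
Nc = 10 + 30 * XYZ[:, 2] + rng.normal(0, 1, 200)   # fake corrected SPT values

def rbf(A, B, sigma=0.3):
    """Pairwise RBF (Gaussian) kernel matrix between rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

gamma = 10.0                                       # regularization parameter
n = XYZ.shape[0]
K = rbf(XYZ, XYZ)
# LSSVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
M = np.zeros((n + 1, n + 1))
M[0, 1:], M[1:, 0] = 1.0, 1.0
M[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate(([0.0], Nc))
sol = np.linalg.solve(M, rhs)
b, alpha = sol[0], sol[1:]

def predict(points):
    """Predict N_c at new (X, Y, Z) points."""
    return rbf(points, XYZ) @ alpha + b

print(predict(np.array([[0.5, 0.5, 0.5]])))
```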
Abstract:
The present study of the stability of systems governed by a linear multidimensional time-varying equation, which are encountered in spacecraft dynamics, economics, demographics, and biological systems, gives attention to a lemma dealing with L∞ stability of an integral equation that results from the differential equation of the system under consideration. Using the proof of this lemma, the main result on L∞ stability is derived accordingly; a corollary of the theorem deals with constant-coefficient systems perturbed by small periodic terms.
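The abstract does not reproduce the lemma itself; a standard form of the reduction it alludes to, assuming a constant Hurwitz matrix A perturbed by a time-varying term P(t) (notation ours, not the paper's), is:

```latex
\dot{x}(t) = \bigl[A + P(t)\bigr]\,x(t)
\;\Longrightarrow\;
x(t) = e^{A(t-t_0)}\,x(t_0) + \int_{t_0}^{t} e^{A(t-s)}\,P(s)\,x(s)\,ds .
```

If \|e^{At}\| \le M e^{-\lambda t} and \sup_t \|P(t)\| is small enough, Gronwall's inequality applied to this integral equation yields a uniform bound on \|x(t)\|, i.e., L∞ stability of the kind the lemma addresses.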
Abstract:
This paper presents a novel hypothesis on the function of massive feedback pathways in mammalian visual systems. We propose that the cortical feature detectors compete not for the right to represent the output at a point, but for exclusive rights to abstract and represent part of the underlying input. Feedback can do this very naturally. A computational model that implements the above idea for the problem of line detection is presented, and based on that we suggest a functional role for the thalamo-cortical loop during perception of lines. We show that the model successfully tackles the so-called Cross problem. Based on some recent experimental results, we discuss the biological plausibility of our model. We also comment on the relevance of our hypothesis (on the role of feedback) to general sensory information processing and recognition. (C) 1998 Published by Elsevier Science Ltd. All rights reserved.
Abstract:
Sub-pixel classification is essential for the successful description of many land cover (LC) features with spatial resolution less than the size of the image pixels. A commonly used approach to sub-pixel classification is the linear mixture model (LMM). Even though LMM have shown acceptable results, in practice purely linear mixtures do not exist. A non-linear mixture model may therefore better describe the resultant mixture spectra for endmember (pure pixel) distribution. In this paper, we propose a new methodology for inferring LC fractions by a process called the automatic linear-nonlinear mixture model (AL-NLMM). AL-NLMM is a three-step process where the endmembers are first derived from an automated algorithm. These endmembers are used by the LMM in the second step, which provides abundance estimation in a linear fashion. Finally, the abundance values, along with the training samples representing the actual proportions, are fed as input to a multi-layer perceptron (MLP) architecture to train the neurons, which further refines the abundance estimates to account for the non-linear nature of the mixing classes of interest. AL-NLMM is validated on computer-simulated hyperspectral data of 200 bands. Validation of the output showed an overall RMSE of 0.0089±0.0022 with the LMM and 0.0030±0.0001 with the MLP-based AL-NLMM, when compared to actual class proportions, indicating that individual class abundances obtained from AL-NLMM are very close to the real observations.
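A sketch of the linear (second) step on synthetic data: given endmember spectra, estimate per-pixel abundances by least squares, then clip and renormalize to satisfy non-negativity and sum-to-one. The MLP refinement (third step) would take these abundances as input features; it is only indicated in a comment. All arrays are synthetic, not the paper's simulated dataset.

```python
import numpy as np

rng = np.random.default_rng(2)
bands, classes, pixels = 200, 4, 1000
E = rng.uniform(0, 1, (bands, classes))                  # endmember spectra
true_ab = rng.dirichlet(np.ones(classes), pixels)        # actual class proportions
X = true_ab @ E.T + rng.normal(0, 0.005, (pixels, bands))  # mixed pixel spectra

ab, *_ = np.linalg.lstsq(E, X.T, rcond=None)             # LMM unmixing, per pixel
ab = np.clip(ab.T, 0, None)                              # enforce non-negativity
ab /= ab.sum(axis=1, keepdims=True)                      # enforce sum-to-one

rmse = np.sqrt(((ab - true_ab) ** 2).mean())
print(f"LMM abundance RMSE: {rmse:.4f}")
# These abundances (plus training samples with known proportions) would
# then be fed to an MLP to correct for non-linear mixing, as in AL-NLMM.
```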
Abstract:
With the emergence of voltage scaling as one of the most powerful power-reduction techniques, it has become important to support voltage-scalable statistical static timing analysis (SSTA) in deep-submicrometer process nodes. In this paper, we propose a single delay model of a logic gate using a neural network, which comprehensively captures process, voltage, and temperature variation along with input slew and output load. The number of SPICE (simulation program with integrated circuit emphasis) runs required to create this model over a large voltage and temperature range is found to be modest, and 4x less than that required for a conventional table-based approach with comparable accuracy. We show how the model can be used to derive the sensitivities required for linear SSTA at an arbitrary voltage and temperature. Our experimentation on ISCAS 85 benchmarks across a voltage range of 0.9-1.1 V shows that the average error in mean delay is less than 1.08% and the average error in standard deviation is less than 2.85%. The errors in predicting the 99% and 1% probability points are 1.31% and 1%, respectively, with respect to SPICE. Two potential applications of voltage-aware SSTA are presented: one for improving the accuracy of timing analysis by considering instance-specific voltage drops in power grids, and the other for determining the optimum supply voltage for a target yield in dynamic voltage scaling applications.
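A sketch of the core idea: one small neural network maps (process parameter, supply voltage, temperature, input slew, output load) to gate delay, replacing per-corner lookup tables. The network below is trained on a synthetic delay function with plain gradient descent; in the paper the training targets would come from SPICE runs, and the architecture details are the authors' own.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (2000, 5))    # normalized (P, V, T, slew, load) samples
# Fake delay: grows with slew/load, shrinks with supply voltage.
y = ((0.5 + 0.3 * X[:, 3] + 0.4 * X[:, 4]) / (0.4 + X[:, 1]))[:, None]

H, lr = 16, 0.05                    # hidden width and learning rate (assumed)
W1, b1 = rng.normal(0, 0.5, (5, H)), np.zeros(H)
W2, b2 = rng.normal(0, 0.5, (H, 1)), np.zeros(1)

for _ in range(3000):               # full-batch gradient descent on MSE
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    g = 2 * (pred - y) / len(X)     # dMSE/dpred
    gW2, gb2 = h.T @ g, g.sum(0)
    gh = (g @ W2.T) * (1 - h ** 2)  # backprop through tanh
    gW1, gb1 = X.T @ gh, gh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("train MSE:", float(((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean()))
```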
Abstract:
This paper is concerned with the optimal flow control of an ATM switching element in a broadband integrated services digital network. We model the switching element as a stochastic fluid flow system with a finite buffer, a constant-output-rate server, and a Gaussian process to characterize the input, which is a heterogeneous set of traffic sources. The fluid level should be maintained between two levels, namely b1 and b2, with b1
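The abstract is truncated above, so the exact control objective is not reproduced here. A heavily hedged simulation sketch of the model as described: a finite buffer fed by a Gaussian input process, drained at a constant rate, with a two-threshold (b1 < b2) on/off throttling of the sources. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
buf_max, c = 100.0, 10.0          # buffer size and constant service rate
b1, b2 = 30.0, 70.0               # lower / upper control thresholds
mu, sigma, dt = 11.0, 3.0, 0.01   # Gaussian input mean/std and time step

x, on, trace = 50.0, True, []
for _ in range(20000):
    if x >= b2:                   # throttle the sources above b2
        on = False
    elif x <= b1:                 # re-admit the sources below b1
        on = True
    inflow = max(rng.normal(mu, sigma), 0.0) if on else 0.0
    # fluid balance, clipped to [0, buf_max] (losses at a full buffer)
    x = min(max(x + (inflow - c) * dt, 0.0), buf_max)
    trace.append(x)

print("mean buffer occupancy:", np.mean(trace))
```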
Achievable rate region of Gaussian broadcast channel with finite input alphabet and quantized output
Abstract:
In this paper, we study the achievable rate region of the two-user Gaussian broadcast channel (GBC) when the messages to be transmitted to both users take values from finite signal sets and the received signal is quantized at both users. We refer to this channel as the quantized broadcast channel (QBC). We first observe that the capacity region defined for a GBC does not carry over as such to the QBC. Also, we show that the optimal decoding scheme for the GBC (i.e., the high-SNR user doing successive decoding and the low-SNR user decoding its message alone) is not optimal for the QBC. We then propose an achievable rate region for the QBC based on two different schemes. We present achievable rate region results for the case of uniform quantization at the receivers. We find that rotation of one user's input alphabet with respect to the other user's alphabet marginally enlarges the achievable rate region of the QBC when almost equal powers are allotted to the two users.
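A sketch of the basic computation such rate regions are built from: the mutual information I(X; Yq) of an AWGN channel with a finite input alphabet (4-PAM here) and a uniform b-bit output quantizer, evaluated by integrating the Gaussian noise density over the quantizer cells. The alphabet, noise level, and quantizer below are assumptions; the paper's two-user superposition and rotation analysis builds on per-user computations of this kind.

```python
import numpy as np
from scipy.stats import norm

alphabet = np.array([-3.0, -1.0, 1.0, 3.0]) / np.sqrt(5)  # unit-power 4-PAM
sigma = 0.5                                   # noise std (assumed SNR)
b = 3                                         # quantizer resolution, bits
edges = np.linspace(-2.5, 2.5, 2 ** b + 1)    # uniform quantizer cell edges
edges[0], edges[-1] = -np.inf, np.inf         # outermost cells unbounded

# P[i, j] = P(Yq = j | X = alphabet[i]): Gaussian mass in each cell
P = (norm.cdf(edges[None, 1:], loc=alphabet[:, None], scale=sigma)
     - norm.cdf(edges[None, :-1], loc=alphabet[:, None], scale=sigma))

px = np.full(len(alphabet), 1.0 / len(alphabet))  # uniform input distribution
py = px @ P                                       # output distribution
ratio = np.divide(P, py[None, :], out=np.ones_like(P), where=P > 0)
I = np.sum(px[:, None] * P * np.log2(ratio))      # I(X; Yq) in bits/use
print(f"I(X;Yq) = {I:.3f} bits/use")
```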
Abstract:
The objective of this study is to evaluate the ability of a European chemistry transport model, 'CHIMERE', driven by the US meteorological model MM5, in simulating aerosol concentrations [dust, PM10 and black carbon (BC)] over the Indian region. An evaluation of a meteorological event (a dust storm), and of the impact of changes in soil-related parameters and meteorological input grid resolution on these aerosol concentrations, has been performed. A dust storm simulation over the Indo-Gangetic basin indicates the ability of the model to capture dust storm events. Measured (AERONET data) and simulated parameters such as aerosol optical depth (AOD) and the Angstrom exponent are used to evaluate the performance of the model in capturing the dust storm event. A sensitivity study is performed to investigate the impact of changes in soil characteristics (thickness of the soil layer in contact with air, volumetric water, and air content of the soil) and meteorological input grid resolution on the aerosol (dust, PM10, BC) distribution. Results show that soil parameters and meteorological input grid resolution have an important impact on the spatial distribution of aerosol (dust, PM10, BC) concentrations.