Abstract:
Enhanced Scan design can significantly improve the fault coverage of two-pattern delay tests, but at an exorbitantly high area overhead. The redundant flip-flops introduced in the scan chains have traditionally been used only to launch the two-pattern delay test inputs, not to capture test results. This paper presents a new, much lower-cost partial Enhanced Scan methodology with both improved controllability and observability. Observing some hard-to-observe internal nodes by capturing their responses in the already available and underutilized redundant flip-flops improves delay fault coverage at minimal cost. Experimental results on ISCAS'89 benchmark circuits show significant improvement in transition delay fault (TDF) coverage for this new partial Enhanced Scan methodology.
Abstract:
In this article, a new flame extinction model based on the k/epsilon turbulence time scale concept is proposed to predict flame liftoff heights over a wide range of coflow temperatures and O2 mass fractions. The flame is assumed to be quenched when the fluid time scale is less than the chemical time scale (Da < 1). The chemical time scale is derived as a function of temperature, oxidizer mass fraction, fuel dilution, jet velocity and fuel type. The extinction model has been tested for a variety of conditions: (a) ambient coflow (1 atm and 300 K) for propane, methane and hydrogen jet flames, (b) highly preheated coflow, and (c) high-temperature, low-oxidizer-concentration coflow. Predicted flame liftoff heights of jet diffusion and partially premixed flames are in excellent agreement with the experimental data for all the simulated conditions and fuels. Flame stabilization is observed to occur at a point near the stoichiometric mixture fraction surface, where the local flow velocity equals the local flame propagation speed. The method is then used to determine the chemical time scale for the conditions existing in the mild/flameless combustion burners investigated by the authors earlier. The model successfully predicts the initial premixing of the fuel with combustion products before the combustion reaction initiates. The simulations indicate that fuel injection is followed by intense premixing with hot combustion products in the primary zone, with the combustion reaction following further downstream. Reaction rate contours suggest that reaction takes place over a large volume and that its magnitude is lower than in the conventional combustion mode. The appearance of attached flames in the mild combustion burners at low thermal inputs is also predicted, and is attributed to lower average jet velocity and larger residence times in the near-injection zone.
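The quench criterion in this abstract reduces to a Damköhler-number comparison, which can be sketched in a few lines. The time-scale values below are illustrative placeholders, not the paper's fitted correlations:

```python
# Sketch of the Damkohler-number quench test described above.
# The numerical values are illustrative, not the paper's correlations.

def damkohler(tau_flow, tau_chem):
    """Da = fluid (turbulence) time scale / chemical time scale."""
    return tau_flow / tau_chem

def is_quenched(tau_flow, tau_chem):
    """Local extinction is assumed when Da < 1, i.e. the flow
    disturbs the flame faster than the chemistry can respond."""
    return damkohler(tau_flow, tau_chem) < 1.0

# k/epsilon turbulence time scale: tau_flow = k / epsilon
k, epsilon = 2.0e-3, 4.0        # m^2/s^2, m^2/s^3
tau_flow = k / epsilon          # 5e-4 s

print(is_quenched(tau_flow, tau_chem=1.0e-3))  # slow chemistry -> quenched
print(is_quenched(tau_flow, tau_chem=1.0e-4))  # fast chemistry -> burning
```

In the model itself the chemical time scale is not a constant but a function of temperature, oxidizer mass fraction, fuel dilution, jet velocity and fuel type, evaluated locally in the CFD solution.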
Abstract:
Following the spirit of the enhanced Russell graph measure, this paper proposes an enhanced Russell-based directional distance measure (ERBDDM) model for dealing with desirable and undesirable outputs in data envelopment analysis (DEA) while allowing some inputs and outputs to be zero. The proposed method is analogous to the output-oriented slacks-based measure (OSBM) and the directional output distance function approach because it allows the expansion of desirable outputs and the contraction of undesirable outputs. The ERBDDM is superior to both: it identifies all the inefficiency slacks, as the OSBM does, while avoiding the misperception and misspecification of the traditional approach, which fails to capture null-jointness in the production of goods and bads. The paper also imposes a strong complementary slackness condition on the ERBDDM model to deal with the occurrence of multiple projections. Furthermore, we use the Penn World Table data to explore the new approach in the context of environmental policy evaluations and guidance for performance improvements in 111 countries.
Abstract:
The purpose of this study is to investigate banks' accounting choice decisions to employ Level 3 inputs in estimating the value of their financial assets and liabilities. Using a sample of 146 bank-year observations from 18 countries over 2009-2012, this study finds that banks' incentives to use Level 3 valuation inputs are associated with both firm-level and country-level determinants. At the firm level, leverage, profitability (in terms of net income), Tier 1 capital ratio, size and audit committee independence are associated with the percentage of Level 3 valuation inputs. At the country level, economic development, legal regime, legal enforcement and investor rights are also associated with the Level 3 classification choice. Lastly, ‘secrecy’, the proxy for culture dimensions and values, is found to be positively associated with the use of Level 3 valuation inputs. Altogether, these findings suggest that banks use the discretion available under Level 3 inputs opportunistically to avoid violating debt covenant limits, to increase earnings and to manage their capital ratios. The results also highlight that firm-level corporate governance quality (e.g. audit committee independence) and institutional features can constrain banks' opportunistic behavior in using the discretion available under Level 3 inputs. The results have important implications for standard setters and contribute to the debate on the use of fair value accounting in an international context.
Abstract:
One of the objectives of general-purpose financial reporting is to provide information about the financial position, financial performance and cash flows of an entity that is useful to a wide range of users in making economic decisions. The current focus on potentially increased relevance of fair value accounting weighed against issues of reliability has failed to consider the potential impact on the predictive ability of accounting. Based on a sample of international (non-U.S.) banks from 24 countries during 2009-2012, we test the usefulness of fair values in improving the predictive ability of earnings. First, we find that the increasing use of fair values on balance-sheet financial instruments enhances the ability of current earnings to predict future earnings and cash flows. Second, we provide evidence that the fair value hierarchy classification choices affect the ability of earnings to predict future cash flows and future earnings. More precisely, we find that the non-discretionary fair value component (Level 1 assets) improves the predictability of current earnings whereas the discretionary fair value components (Level 2 and Level 3 assets) weaken the predictive power of earnings. Third, we find a consistent and strong association between factors reflecting country-wide institutional structures and predictive power of fair values based on discretionary measurement inputs (Level 2 and Level 3 assets and liabilities). Our study is timely and relevant. The findings have important implications for standard setters and contribute to the debate on the use of fair value accounting.
Abstract:
Pristine peatlands are carbon (C) accumulating wetland ecosystems sustained by a high water level (WL) and consequent anoxia that slows down decomposition. Persistent WL drawdown as a response to climate and/or land-use change directly affects decomposition: increased oxygenation stimulates decomposition of the old C (peat) sequestered under prior anoxic conditions. Responses of the new C (plant litter) in terms of quality, production and decomposability, and the consequences for the whole C cycle of peatlands, are not fully understood. WL drawdown induces changes in the plant community, resulting in a shift in dominance from Sphagnum and graminoids to shrubs and trees. There is increasing evidence that the indirect effects of WL drawdown via these changes in plant communities will have more impact on ecosystem C cycling than any direct effects. The aim of this study is to disentangle the direct and indirect effects of WL drawdown on the new C by measuring the relative importance of 1) environmental parameters (WL depth, temperature, soil chemistry) and 2) plant community composition on litter production, microbial activity, litter decomposition rates and, consequently, on C accumulation. This information is crucial for modelling the C cycle under changing climate and/or land use. The effects of WL drawdown were tested in a large-scale experiment with manipulated WL at two time scales and three nutrient regimes. Furthermore, the effect of climate on litter decomposability was tested along a north-south gradient. Additionally, a novel method for estimating litter chemical quality and decomposability was explored by combining near-infrared spectroscopy with multivariate modelling. WL drawdown had direct effects on litter quality, microbial community composition and activity, and litter decomposition rates. However, the direct effects of WL drawdown were overruled by the indirect effects via changes in litter type composition and production.
Short-term (years) responses to WL drawdown were small. In the long term (decades), dramatically increased litter inputs resulted in large accumulation of organic matter in spite of increased decomposition rates. Further, the quality of the accumulated matter differed greatly from that accumulated under pristine conditions. The response of a peatland ecosystem to persistent WL drawdown was more pronounced at sites with more nutrients. The study demonstrates that the shift in vegetation composition in response to climate and/or land-use change is the main factor affecting the peatland ecosystem C cycle, and thus dynamic vegetation is a necessity in any model applied for estimating responses of C fluxes to changes in the environment. The time scale for vegetation changes caused by hydrological changes needs to extend to decades. This study provides a grouping of litter types (plant species and plant part) into functional types, based on their chemical quality and/or decomposability, that the models could utilize. Further, the results clearly show a drop in soil temperature in response to WL drawdown when an initially open peatland converts into a forest ecosystem, which has not yet been considered in the existing models.
Abstract:
The swelling pressure of soil depends upon various soil parameters such as mineralogy, clay content, Atterberg's limits, dry density, moisture content, initial degree of saturation, etc., along with structural and environmental factors. It is very difficult to model and analyze swelling pressure effectively taking all the above aspects into consideration. Various statistical/empirical methods have been attempted to predict swelling pressure based on index properties of soil. In this paper, two computational intelligence techniques, the artificial neural network and the support vector machine, have been used to develop models based on the set of available experimental results to predict swelling pressure from the inputs: natural moisture content, dry density, liquid limit, plasticity index, and clay fraction. The generalization of the models to new data outside the training set, which is required for successful application, is discussed. A detailed study of the relative performance of the computational intelligence techniques has been carried out based on different statistical performance criteria.
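The regression task above (five index properties in, swelling pressure out) can be sketched with a kernel method in the same spirit as the paper's SVM; a minimal kernel-ridge fit on synthetic data is shown below. The data, target relationship and kernel width are invented placeholders, and the paper's actual models are a trained ANN and SVM on experimental results:

```python
import numpy as np

# Minimal kernel-ridge sketch of the "index properties -> swelling
# pressure" regression. Data and kernel width are synthetic placeholders.

rng = np.random.default_rng(0)

# columns: moisture content, dry density, liquid limit,
#          plasticity index, clay fraction (all scaled to [0, 1])
X = rng.uniform(0.0, 1.0, size=(40, 5))
# synthetic "swelling pressure" target with a known relationship
y = 2.0 * X[:, 4] + X[:, 2] ** 2 + 0.05 * rng.standard_normal(40)

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix, as used inside an SVM."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

lam = 1e-3                                   # ridge regularisation
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xnew):
    """Kernel expansion over the training points."""
    return rbf_kernel(Xnew, X) @ alpha

rmse = np.sqrt(np.mean((predict(X) - y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

The generalization question the abstract raises corresponds to evaluating `predict` on held-out data rather than on `X` itself.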
Abstract:
This paper presents a study of kinematic and force singularities in parallel manipulators and closed-loop mechanisms and their relationship to the accessibility and controllability of such manipulators and mechanisms. Parallel manipulators and closed-loop mechanisms are classified according to their degrees of freedom, the number of output Cartesian variables used to describe their motion and the number of actuated joint inputs. The singularities in the workspace are obtained by considering the force transformation matrix which maps the forces and torques in joint space to output forces and torques in Cartesian space. The regions in the workspace which violate the small time local controllability (STLC) and small time local accessibility (STLA) conditions are obtained by deriving the equations of motion in terms of Cartesian variables and by using techniques from Lie algebra. We show that for fully actuated manipulators, when the number of actuated joint inputs is equal to the number of output Cartesian variables and the force transformation matrix loses rank, the parallel manipulator does not meet the STLC requirement. For the case where the number of joint inputs is less than the number of output Cartesian variables, if the constraint forces and torques (represented by the Lagrange multipliers) become infinite, the force transformation matrix loses rank. Finally, we show that the singular and non-STLC regions in the workspace of a parallel manipulator and closed-loop mechanism can be reduced by adding redundant joint actuators and links. The results are illustrated with the help of numerical examples where we plot the singular and non-STLC/non-STLA regions of parallel manipulators and closed-loop mechanisms belonging to the above-mentioned classes. (C) 2000 Elsevier Science Ltd. All rights reserved.
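The core numerical test in the fully actuated case is a rank check on the force transformation matrix, which is easy to sketch. The 2x2 matrices below are illustrative stand-ins, not taken from the paper's examples:

```python
import numpy as np

# Sketch of the rank test described above: the manipulator is singular
# where the force transformation matrix loses rank. The matrices here
# are illustrative, not the paper's manipulator models.

def loses_rank(J, tol=1e-9):
    """True when the force transformation matrix is rank-deficient,
    i.e. joint torques cannot produce arbitrary Cartesian forces."""
    return np.linalg.matrix_rank(J, tol=tol) < min(J.shape)

J_regular  = np.array([[1.0, 0.2],
                       [0.1, 0.8]])
J_singular = np.array([[1.0, 2.0],
                       [0.5, 1.0]])   # rows proportional -> rank 1

print(loses_rank(J_regular))    # False: full rank, STLC can hold
print(loses_rank(J_singular))   # True: singular configuration
```

In the paper this matrix is configuration-dependent, so the check would be evaluated over a grid of workspace points to plot the singular and non-STLC regions.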
Abstract:
A nonlinear adaptive system theoretic approach is presented in this paper for effective treatment of infectious diseases that affect various organs of the human body. The generic model used does not represent any specific disease. However, it mimics the generic immunological dynamics of the human body under pathological attack, including the response to external drugs. From a system theoretic point of view, drugs can be interpreted as control inputs. Assuming a set of nominal parameters in the mathematical model, first a nonlinear controller is designed based on the principle of dynamic inversion. This treatment strategy was found to be effective in completely curing "nominal patients". However, in some cases it is ineffective in curing "realistic patients". This leads to serious (sometimes fatal) damage to the affected organ. To make the drug dosage design more effective, a model-following neuro-adaptive control design is carried out using neural networks, which are trained (adapted) online. From simulation studies, this adaptive controller is found to be effective in killing the invading microbes and healing the damaged organ even in the presence of parameter uncertainties and continuing pathogen attack.
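The dynamic-inversion step mentioned above can be illustrated on a toy scalar plant: for xdot = f(x) + g(x)·u, the control u = (nu - f(x)) / g(x) cancels the nonlinearity and imposes chosen linear error dynamics. The pathogen-like dynamics below are an invented illustration, not the paper's immunological model:

```python
# Toy scalar illustration of dynamic inversion. The plant f, gain k and
# drug effectiveness g are illustrative placeholders.

def f(x):
    """Nonlinear growth of the 'infection' state (logistic-like)."""
    return 0.5 * x * (1.0 - x)

g = -1.0              # drug input reduces the state
k, x_ref = 2.0, 0.0   # drive the infection level to zero
dt, x = 0.01, 0.8     # Euler step and initial condition

for _ in range(1000):
    nu = -k * (x - x_ref)        # desired linear error dynamics
    u = (nu - f(x)) / g          # dynamic inversion control law
    x += dt * (f(x) + g * u)     # closed-loop Euler step

print(round(x, 6))               # state driven near x_ref = 0
```

The paper's neuro-adaptive layer addresses exactly the weakness this sketch hides: when f is only nominally known, the cancellation is imperfect, and an online-trained network compensates the residual model error.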
Abstract:
The specific objective of this paper is to develop multivariable controllers that would achieve asymptotic regulation in the presence of parameter variations and disturbance inputs for a tubular reactor used in ammonia synthesis. A ninth order state space model with three control inputs and two disturbance inputs is generated from the nonlinear distributed model using linearization and lumping approximations. Using this model, an approach for control system design is developed keeping in view the imperfections of the model and the measurability of the state variables. Specifically, the design of feedforward and robust integral controllers using state and output feedback is considered. Also, the design of robust multiloop proportional integral controllers is presented. Finally the performance of these controllers is evaluated through simulation.
Abstract:
The paper proposes two methodologies for damage identification from the measured natural frequencies of a contiguously damaged reinforced concrete beam, idealised with a distributed damage model. The first method identifies damage from Iso-Eigen-Value-Change contours plotted between pairs of different frequencies. The performance of the method is checked for a wide variation of damage positions and extents. The method is also extended to a discrete structure in the form of a five-storey shear building, and its simplicity is demonstrated. The second method uses a smeared damage model, where the damage is assumed constant over different segments of the beam and the lengths and centres of these segments are known inputs. A first-order perturbation method is used to derive the relevant expressions. Both methods are based on distributed damage models and have been checked against an experimental programme on simply supported reinforced concrete beams subjected to successive stages of symmetric and unsymmetric damage. The experimental results are encouraging and show that the two methods can be adopted together in a damage identification scenario.
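The first-order perturbation step underlying the second method can be sketched on a small discrete system: for a stiffness change dK, each eigenvalue (squared natural frequency) shifts by approximately phi' · dK · phi for the mass-normalised mode shape phi. The 3-DOF shear-building matrices below are an illustrative stand-in, not the paper's beam model:

```python
import numpy as np

# First-order eigenvalue perturbation sketch. The 3-DOF chain below is
# illustrative; the paper applies the idea to a smeared-damage beam model.

K = np.array([[ 2., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])      # storey stiffness matrix
M = np.eye(3)                        # unit masses -> standard eigenproblem

w2, Phi = np.linalg.eigh(K)          # eigenvalues w2 = omega^2, modes Phi

dK = np.zeros((3, 3))
dK[0, 0] = -0.1                      # "damage": first-storey stiffness loss

# first-order prediction of each eigenvalue change: phi_i' dK phi_i
dw2_pred = np.array([Phi[:, i] @ dK @ Phi[:, i] for i in range(3)])

# exact change, for comparison
w2_new, _ = np.linalg.eigh(K + dK)
dw2_exact = w2_new - w2

print(np.round(dw2_pred, 4))
print(np.round(dw2_exact, 4))
```

Inverting this relationship, i.e. solving for the segment-wise stiffness losses that reproduce the measured frequency changes, is the identification step of the smeared damage method.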
Abstract:
A scheme for the detection and isolation of actuator faults in linear systems is proposed. A bank of unknown input observers is constructed to generate residual signals that deviate in characteristic ways in the presence of actuator faults. The residual signals are unaffected by the unknown inputs acting on the system, which decreases the false alarm and miss probabilities. The results are illustrated through a simulation study of actuator fault detection and isolation in a pilot-plant double-effect evaporator.
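The residual-generation idea can be sketched on a scalar discrete-time plant. Note this toy uses a plain Luenberger observer: the paper's unknown input observers additionally decouple the residual from unknown disturbances, which this sketch omits. Plant, gains and fault size are illustrative:

```python
# Toy residual generation for actuator fault detection. A plain
# Luenberger observer stands in for the paper's unknown input observers;
# plant, gains and fault magnitude are illustrative placeholders.

A, B, C = 0.9, 1.0, 1.0   # stable scalar discrete-time plant
L = 0.5                   # observer gain

def run(fault_at=None, steps=30):
    x, xhat, residuals = 0.0, 0.0, []
    for k in range(steps):
        u = 1.0                        # commanded input
        fault = 2.0 if fault_at is not None and k >= fault_at else 0.0
        x = A * x + B * (u + fault)    # true plant, possibly faulty actuator
        y = C * x
        xhat = A * xhat + B * u        # observer predicts with commanded u
        r = y - C * xhat               # residual = output prediction error
        residuals.append(r)
        xhat += L * r                  # measurement update
    return residuals

r_healthy = run()
r_faulty = run(fault_at=15)
print(abs(r_healthy[-1]), abs(r_faulty[-1]))  # ~0 vs. clearly nonzero
```

In the full scheme a bank of such observers is run, each insensitive to one actuator, so the pattern of which residuals fire isolates the faulty actuator.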
Abstract:
The capacity region of the two-user Gaussian Broadcast Channel (GBC) is well known, with the optimal input being Gaussian. In this paper we explore the capacity region of the GBC when the users' symbols are taken from finite complex alphabets (like M-QAM and M-PSK). When the alphabets of both users are the same, we show that rotating one of the alphabets enlarges the capacity region, and we arrive at an optimal angle of rotation by simulation. The effect of rotation on the capacity region at different SNRs is also studied using simulation results. Using the Fading Broadcast Channel (FBC) setup of [Li and Goldsmith, 2001], we study the ergodic capacity region with inputs from finite complex alphabets. It is seen that when the optimum power allocation procedure of [Li and Goldsmith, 2001], derived for Gaussian inputs, is used to allocate power to symbols from finite complex alphabets, relative rotation between the alphabets does not improve the capacity region. Simulation results for a modified heuristic power allocation procedure for the finite-constellation case show that the constellation-constrained capacity region enlarges with rotation.
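A quick way to see why rotation helps is to look at the superposed alphabet the receiver observes. The sketch below counts the distinguishable points of {x1 + e^(j·theta)·x2} for equal-power QPSK users; the paper's optimal angle is found by simulating the constellation-constrained capacity, not by this geometric shortcut:

```python
import numpy as np

# Rotation of one user's QPSK alphabet removes collisions in the
# superposed (sum) constellation. Equal user powers are assumed here
# for illustration.

qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def distinct_points(theta, tol=1e-9):
    """Number of distinguishable points in {x1 + exp(j*theta) * x2}."""
    s = (qpsk[:, None] + np.exp(1j * theta) * qpsk[None, :]).ravel()
    kept = []
    for p in s:
        if all(abs(p - q) > tol for q in kept):
            kept.append(p)
    return len(kept)

print(distinct_points(0.0))          # 9: several user-symbol pairs collide
print(distinct_points(np.pi / 4))    # 16: rotation separates all 4x4 pairs
```

With no rotation, several (x1, x2) pairs map to the same received point and are indistinguishable even at high SNR, which caps the constellation-constrained sum rate; rotation restores a one-to-one superposition.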