20 results for "error model"
Abstract:
Energy efficiency improvement has been a key objective of China's long-term energy policy. In this paper, we derive single-factor technical energy efficiency (abbreviated as energy efficiency) in China from multi-factor efficiency estimated by means of a translog production function and a stochastic frontier model, on the basis of panel data on 29 Chinese provinces over the period 2003–2011. We find that average energy efficiency increased over the research period and that the provinces with the highest energy efficiency lie on the east coast and those with the lowest in the west, with an intermediate corridor in between. In the analysis of the determinants of energy efficiency by means of a spatial Durbin error model, factors both in a province itself and in its first-order neighboring provinces are considered. Per capita income in the province itself has a positive effect. Furthermore, foreign direct investment and population density in the province and its neighbors have positive effects, whereas the share of state-owned enterprises in Gross Provincial Product has a negative effect both locally and in neighboring provinces. It follows from the analysis that inflow of foreign direct investment and reform of state-owned enterprises are important policy levers.
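The efficiency-derivation step can be sketched as follows. A translog production function is fitted and province-year efficiency is read off the residuals; corrected OLS (COLS) stands in for the paper's stochastic frontier estimator, and all data here are synthetic, not the Chinese provincial panel.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 29 * 9                                   # 29 provinces x 9 years (2003-2011)

k, l, e = rng.normal(1.0, 0.3, (3, n))       # synthetic log capital, labor, energy
y = 0.4 * k + 0.3 * l + 0.3 * e + rng.normal(0, 0.05, n)   # synthetic log output

# Translog regressors: levels, squared terms and cross-terms of the log inputs.
X = np.column_stack([np.ones(n), k, l, e,
                     0.5 * k ** 2, 0.5 * l ** 2, 0.5 * e ** 2,
                     k * l, k * e, l * e])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Shift residuals so the best observation defines the frontier (COLS shortcut).
resid = y - X @ beta
efficiency = np.exp(resid - resid.max())     # in (0, 1]; 1 = on the frontier
```

A true stochastic frontier would split the residual into noise and inefficiency components; the COLS shift above is the common deterministic simplification.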
Abstract:
Estimating a time interval and temporally coordinating movements in space are fundamental skills, but the relationships between these different forms of timing, and the neural processes they engage, are not well understood. While different theories have been proposed to account for time perception, time estimation, and the temporal patterns of coordination, there is no general mechanism that unifies these various timing skills. This study considers whether a model of perceptuo-motor timing, the tau(GUIDE), can also describe how certain judgements of elapsed time are made. To evaluate this, an equation for determining interval estimates was derived from the tau(GUIDE) model and tested in a task where participants had to throw a ball and estimate when it would hit the floor. The results showed that, in accordance with the model, very accurate judgements could be made without vision (mean timing error -19.24 msec), and the model was a good predictor of skilled participants' estimation timing. It was concluded that since the tau(GUIDE) principle provides temporal information in a generic form, it could be a unitary process linking different forms of timing.
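A first-order time-to-contact sketch in the spirit of tau theory: the time remaining until the ball hits the floor is approximated as the current gap over its closure rate (tau = z / |v|). This illustrates tau-based timing only, not the tau(GUIDE) model itself, and the drop height is an assumption.

```python
import math

g, h = 9.81, 2.0                  # gravity (m/s^2) and assumed drop height (m)
T = math.sqrt(2 * h / g)          # true fall time from rest

def tau_estimate(t):
    z = h - 0.5 * g * t ** 2      # height above the floor at time t
    v = g * t                     # downward speed at time t
    return z / v                  # first-order time-to-contact estimate

t = 0.9 * T
err_ms = 1000 * (tau_estimate(t) - (T - t))   # estimate bias late in the fall, msec
```

For an accelerating closure the first-order tau slightly overestimates the remaining time, which is why the bias is small but positive late in the fall.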
Abstract:
In the IEEE 802.11 MAC layer protocol, there are different trade-off points between the number of nodes competing for the medium and the network capacity provided to them. There is also a trade-off between the wireless channel condition during the transmission period and the energy consumption of the nodes. Current approaches to modeling energy consumption in 802.11-based networks do not consider the influence of the channel condition on all types of frames (control and data) in the WLAN, nor do they consider the effect on the different MAC and PHY schemes that can occur in 802.11 networks. In this paper, we investigate energy consumption as a function of the number of competing nodes in IEEE 802.11's MAC and PHY layers under error-prone wireless channel conditions, and present a new energy consumption model. Analysis of the power consumed by each type of MAC and PHY over different bit error rates shows that the parameters in these layers play a critical role in determining the overall energy consumption of the ad-hoc network. The goal of this research is not only to compare energy consumption using exact formulae in saturated IEEE 802.11-based DCF networks under varying numbers of competing nodes, but also, as the results show, to demonstrate that channel errors have a significant impact on energy consumption.
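The channel-error effect on energy can be illustrated with a back-of-the-envelope model: a frame of L bits survives a bit error rate p with probability (1-p)^L, so the expected number of attempts, and hence energy, grows as errors accumulate. The frame sizes, powers and the simple DATA/ACK exchange below are illustrative assumptions, not the paper's exact formulae.

```python
def frame_success_prob(ber: float, bits: int) -> float:
    """Probability that a frame of `bits` arrives with no bit errors."""
    return (1.0 - ber) ** bits

def energy_per_delivery(ber, data_bits=12000, ack_bits=112,
                        p_tx=1.65, p_rx=1.4, rate=54e6):
    """Expected energy (J) until a DATA/ACK exchange succeeds: each attempt
    costs TX energy for the DATA frame plus RX energy for the ACK, and the
    expected number of attempts is 1/p for per-attempt success probability p."""
    p = frame_success_prob(ber, data_bits) * frame_success_prob(ber, ack_bits)
    attempt = p_tx * data_bits / rate + p_rx * ack_bits / rate
    return attempt / p
```

Even a bit error rate of 1e-4 roughly triples the expected energy per delivered 1500-byte frame in this toy model, since (1-1e-4)^12000 is about 0.3.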
Abstract:
This study explores using artificial neural networks to predict the rheological and mechanical properties of underwater concrete (UWC) mixtures and to evaluate the sensitivity of such properties to variations in mixture ingredients. Artificial neural networks (ANN) mimic the structure and operation of biological neurons and have the unique ability of self-learning, mapping, and functional approximation. Details of the development of the proposed neural network model, its architecture, training, and validation are presented in this study. A database incorporating 175 UWC mixtures from nine different studies was developed to train and test the ANN model. The data are arranged in a patterned format. Each pattern contains an input vector that includes quantity values of the mixture variables influencing the behavior of UWC mixtures (that is, cement, silica fume, fly ash, slag, water, coarse and fine aggregates, and chemical admixtures) and a corresponding output vector that includes the rheological or mechanical property to be modeled. Results show that the ANN model thus developed is not only capable of accurately predicting the slump, slump-flow, washout resistance, and compressive strength of underwater concrete mixtures used in the training process, but it can also effectively predict the aforementioned properties for new mixtures designed within the practical range of the input parameters used in the training process with an absolute error of 4.6, 10.6, 10.6, and 4.4%, respectively.
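A minimal sketch of the kind of feed-forward network described above: one hidden layer trained by batch gradient descent on a synthetic mixture-to-property mapping. The 175x8 data shape echoes the abstract, but all values, sizes and hyperparameters here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (175, 8))                  # 175 mixtures, 8 ingredient inputs
y = (X @ rng.uniform(0.5, 1.5, 8))[:, None]      # synthetic target property

W1 = rng.normal(0, 0.5, (8, 16)); b1 = np.zeros(16)   # hidden layer
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)    # output layer

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

mse0 = float(((forward(X)[1] - y) ** 2).mean())  # error before training
for _ in range(2000):                            # plain batch gradient descent
    h, pred = forward(X)
    err = pred - y
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)             # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
mse = float(((forward(X)[1] - y) ** 2).mean())
```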
Abstract:
A Newton–Raphson solution scheme with a stress point algorithm is presented for the implementation of an elastic–viscoplastic soil model in a finite element program. Viscoplastic strain rates are calculated using the stress and volumetric states of the soil. Sub-increments of time are defined for each iterative calculation of elastic–viscoplastic stress changes so that their sum adds up to the time increment for the load step. This carefully defined ‘iterative time’ ensures that the correct amount of viscoplastic straining is accumulated over the applied load step. The algorithms and assumptions required to implement the solution scheme are provided. Verification of the solution scheme is achieved by using it to analyze typical boundary value problems.
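The sub-increment idea can be sketched with a one-dimensional stress-relaxation example: within one load-step time increment, an explicit viscoplastic update is applied over many sub-increments whose durations sum to the step. The Perzyna-style overstress law and all material constants are illustrative assumptions, not the paper's soil model.

```python
def relax(stress0, yield_stress, dt, n_sub, E=1e4, fluidity=1e-3, n=2.0):
    """Stress relaxation at fixed total strain: viscoplastic flow unloads the
    elastic stress. The time increment dt is split into n_sub sub-increments
    so the accumulated viscoplastic strain matches the step duration."""
    stress, vp = stress0, 0.0
    sub_dt = dt / n_sub
    for _ in range(n_sub):
        over = max(stress - yield_stress, 0.0) / yield_stress
        rate = fluidity * over ** n          # Perzyna-style overstress law
        vp += rate * sub_dt                  # explicit sub-increment update
        stress = stress0 - E * vp            # elastic unloading at fixed strain
    return stress, vp
```

Refining the sub-increments leaves the accumulated result essentially unchanged, which is the consistency property the 'iterative time' construction is after.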
Abstract:
The development of computer-based devices for music control has created a need to study how spectators understand new performance technologies and practices. As a part of a larger project examining how interactions with technology can be communicated to spectators, we present a model of a spectator's understanding of error by a performer. This model is broadly applicable throughout HCI, as interactions with technology are increasingly public and spectatorship is becoming more common.
Abstract:
This brief investigates a possible application of the inverse Preisach model, in combination with feedforward and feedback control strategies, to control shape memory alloy actuators. In the feedforward control design, a fuzzy-based inverse Preisach model is used to compensate for the hysteresis nonlinearity effect. An extrema input history and fuzzy inference are utilized to replace the inverse classical Preisach model, which reduces the number of experimental parameters and the computation time for the inversion of the classical Preisach model. A proportional-integral-derivative (PID) controller is used as a feedback controller to regulate the error between the desired output and the system output. To demonstrate the effectiveness of the proposed controller, real-time control experiment results are presented.
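The feedforward-plus-feedback structure can be sketched as follows: an exactly invertible quadratic nonlinearity stands in for the fuzzy inverse Preisach model, and a discrete PID loop regulates the residual error. The plant, gains and time constants are illustrative assumptions, not the SMA actuator model.

```python
def pid_step(err, state, kp=2.0, ki=1.0, kd=0.001, dt=0.01):
    """One discrete PID update; state carries the integral and previous error."""
    integ, prev = state
    integ += err * dt
    deriv = (err - prev) / dt
    return kp * err + ki * integ + kd * deriv, (integ, err)

plant = lambda u: u + 0.2 * u ** 2                     # mildly nonlinear plant
inverse = lambda r: (-1 + (1 + 0.8 * r) ** 0.5) / 0.4  # its exact inverse

target, y, state = 1.0, 0.0, (0.0, 0.0)
for _ in range(500):
    err = target - y
    u_fb, state = pid_step(err, state)
    u = inverse(target) + u_fb            # feedforward + feedback correction
    y += 0.01 * (plant(u) - y) / 0.05     # first-order actuator lag, tau = 50 ms
final_error = abs(target - y)
```

The feedforward inverse does most of the work; the PID term only has to absorb model mismatch and dynamics, mirroring the division of labor in the brief.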
Abstract:
Shape memory alloy (SMA) actuators, which have the ability to return to a predetermined shape when heated, have many potential applications in aeronautics, surgical tools, robotics and so on. The nonlinear hysteresis effects present in SMA actuators pose a problem in the motion control of these smart actuators. This paper investigates the control problem of SMA actuators in both simulation and experiment. In the simulation, the numerical Preisach model with geometrical interpretation is used for hysteresis modeling of SMA actuators. This model is then incorporated in a closed-loop PID control strategy. The optimal values of the PID parameters are determined by using a genetic algorithm to minimize the mean squared error between the desired output displacement and the simulated output. However, when these parameters are applied to the real SMA control, the control performance is not as good as in simulation, since the system is disturbed by unknown factors and changes in its surrounding environment. A further automated readjustment of the PID parameters using fuzzy logic is proposed to compensate for this limitation. To demonstrate the effectiveness of the proposed controller, real-time control experiment results are presented.
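The GA tuning step can be sketched as a small evolutionary loop over (kp, ki, kd) triples. In the paper the fitness would be the mean squared tracking error from the Preisach-model simulation; the quadratic stand-in cost and all GA settings below are illustrative assumptions that keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(2)
TARGET = np.array([2.0, 0.5, 0.05])    # hypothetical "best" gains, illustrative

def fitness(gains):
    """Stand-in for the simulated tracking MSE of a PID with these gains."""
    return float(((gains - TARGET) ** 2).sum())

pop = rng.uniform(0, 3, (40, 3))       # 40 candidate (kp, ki, kd) triples
for _ in range(60):
    scores = np.array([fitness(g) for g in pop])
    parents = pop[np.argsort(scores)[:20]]          # truncation selection
    p1 = parents[rng.integers(0, 20, 20)]
    p2 = parents[rng.integers(0, 20, 20)]
    alpha = rng.uniform(size=(20, 1))
    kids = alpha * p1 + (1 - alpha) * p2            # blend crossover
    kids += rng.normal(0, 0.05, (20, 3))            # Gaussian mutation
    pop = np.vstack([parents, kids])                # elitist replacement
best = pop[np.argmin([fitness(g) for g in pop])]
```

Keeping the unmutated parents each generation makes the best fitness non-increasing, a common safeguard when each evaluation is an expensive simulation.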
Abstract:
GC-MS data on veterinary drug residues in bovine urine are used for controlling the illegal practice of fattening cattle. According to current detection criteria, peak patterns of preferably four ions should agree within 10 or 20% from a corresponding standard pattern. These criteria are rigid, rather arbitrary and do not match daily practice. A new model, based on multivariate modeling of log peak abundance ratios, provides a theoretical basis for the identification of analytes and optimizes the balance between the avoidance of false positives and false negatives. The performance of the model is demonstrated on data provided by five laboratories, each supplying GC-MS measurements on the detection of clenbuterol, dienestrol and 19 beta-nortestosterone in urine. The proposed model shows a better performance than confirmation by using the current criteria and provides a statistical basis for inspection criteria in terms of error probabilities.
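The log-ratio criterion can be sketched as follows: ion abundances are compared to a standard on the log peak-ratio scale, where multiplicative measurement error becomes additive, and acceptance uses a k-sigma window rather than fixed 10-20% tolerances. The four-ion reference pattern and the standard deviation below are illustrative assumptions, not the laboratories' data.

```python
import numpy as np

def log_ratios(abundances):
    """Log ratios of each ion's abundance to the base (first) peak."""
    a = np.asarray(abundances, dtype=float)
    return np.log(a[1:] / a[0])

standard = np.array([100.0, 55.0, 30.0, 12.0])   # 4-ion reference pattern
sigma = 0.08                                      # assumed sd of a log ratio

def matches(sample, k=2.0):
    """Accept if every log ratio lies within k standard deviations of the
    standard's pattern; k sets the false-positive probability."""
    d = log_ratios(sample) - log_ratios(standard)
    return bool(np.all(np.abs(d) < k * sigma))
```

Because ratios are taken against the base peak, an overall concentration change cancels out, so only the pattern shape is tested.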
Abstract:
The development of accurate structural/thermal numerical models of complex systems, such as aircraft fuselage barrels, is often limited and determined by the smallest scales that need to be modelled. Developing reduced-order models of the smallest scales and integrating them with higher-level models can minimise this bottleneck while retaining efficient, robust and accurate numerical models. In this paper a methodology for developing compact thermal fluid models (CTFMs) for compartments with mixed convection regimes is demonstrated. Detailed numerical simulations (CFD) were developed for an aircraft crown compartment and validated against experimental data obtained from a 1:1 scale compartment rig. The crown compartment is the confined area between the upper fuselage and the passenger cabin in a single-aisle commercial aircraft. CFD results were used to extract average quantities (temperatures and heat fluxes) and characteristic parameters (heat transfer coefficients) to generate CTFMs. The CTFMs were then compared with the results obtained from the detailed models, showing average errors for temperature predictions lower than 5%. This error can be deemed acceptable when compared to the nominal experimental error associated with the thermocouple measurements.
The CTFM methodology allows accurate reduced-order models to be generated, although their accuracy is restricted to the range of boundary conditions applied. This limitation arises from the sensitivity of the internal flow structures to the applied boundary condition set. The CTFMs thus generated can then be integrated into complex numerical models of whole fuselage sections.
A further step towards an exhaustive methodology would be the implementation of a rule-based approach to extract the number and positions of the CTFM nodes directly from the CFD simulations.
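The extraction step can be sketched as follows: a characteristic heat transfer coefficient is computed from CFD-averaged wall heat flux and temperatures, then reused in a compact nodal model. All numbers below are illustrative, not values from the crown-compartment study.

```python
def htc(q_wall, t_wall, t_air):
    """Characteristic heat transfer coefficient h = q / (T_wall - T_air),
    computed from CFD surface-averaged quantities (W/m^2 K)."""
    return q_wall / (t_wall - t_air)

h = htc(q_wall=45.0, t_wall=320.0, t_air=300.0)

# Two-node compact model: an air node coupled to a fixed-temperature wall
# node through the conductance h * A, advanced by explicit time stepping.
A, cap, t_wall, t_air, dt = 2.0, 5000.0, 320.0, 300.0, 0.1
for _ in range(10000):
    t_air += dt * h * A * (t_wall - t_air) / cap
```

The compact model is only as good as the boundary conditions under which h was extracted, which is exactly the limitation noted above.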
Abstract:
In this article the multibody simulation software package MADYMO for analysing and optimizing occupant safety design was used to model crash tests for Normal Containment barriers in accordance with EN 1317. The verification process was carried out by simulating a TB31 and a TB32 crash test performed on vertical portable concrete barriers and by comparing the numerical results to those obtained experimentally. The same modelling approach was applied to both tests to evaluate the predictive capacity of the modelling at two different impact speeds. A sensitivity analysis of the vehicle stiffness was also carried out. The capacity to predict all of the principal EN 1317 criteria was assessed for the first time: the acceleration severity index, the theoretical head impact velocity, the barrier working width and the vehicle exit box. Results showed a maximum error of 6% for the acceleration severity index and 21% for theoretical head impact velocity for the numerical simulation in comparison to the recorded data. The exit box position was predicted with a maximum error of 4°. For the working width, a large percentage difference was observed for test TB31 due to the small absolute value of the barrier deflection but the results were well within the limit value from the standard for both tests. The sensitivity analysis showed the robustness of the modelling with respect to contact stiffness increase of ±20% and ±40%. This is the first multibody model of portable concrete barriers that can reproduce not only the acceleration severity index but all the test criteria of EN 1317 and is therefore a valuable tool for new product development and for injury biomechanics research.
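Of the EN 1317 criteria listed, the acceleration severity index (ASI) has a compact closed form: accelerations are averaged over a 50 ms moving window, scaled by the limit values (12 g longitudinal, 9 g lateral, 10 g vertical), and the peak of the combined index is taken. The crash pulse below is synthetic, not data from the TB31/TB32 tests.

```python
import numpy as np

def asi(ax, ay, az, dt):
    """ASI from vehicle accelerations in g, sampled every dt seconds."""
    w = max(int(round(0.05 / dt)), 1)        # 50 ms moving-average window
    kern = np.ones(w) / w
    ax_m = np.convolve(ax, kern, mode="valid")
    ay_m = np.convolve(ay, kern, mode="valid")
    az_m = np.convolve(az, kern, mode="valid")
    idx = np.sqrt((ax_m / 12) ** 2 + (ay_m / 9) ** 2 + (az_m / 10) ** 2)
    return float(idx.max())

t = np.arange(0, 0.3, 0.001)                 # 300 ms record at 1 kHz
ax = 6.0 * np.exp(-((t - 0.1) / 0.03) ** 2)  # synthetic longitudinal pulse
value = asi(ax, np.zeros_like(t), np.zeros_like(t), 0.001)
```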
Abstract:
This paper investigates the construction of linear-in-the-parameters (LITP) models for multi-output regression problems. Most existing stepwise forward algorithms choose the regressor terms one by one, each time maximizing the model error reduction ratio. The drawback is that such procedures cannot guarantee a sparse model, especially under highly noisy learning conditions. The main objective of this paper is to improve the sparsity and generalization capability of a model for multi-output regression problems, while reducing the computational complexity. This is achieved by proposing a novel multi-output two-stage locally regularized model construction (MTLRMC) method using the extreme learning machine (ELM). In this new algorithm, the nonlinear parameters in each term, such as the width of the Gaussian function and the power of a polynomial term, are firstly determined by the ELM. An initial multi-output LITP model is then generated according to the termination criteria in the first stage. The significance of each selected regressor is checked and the insignificant ones are replaced at the second stage. The proposed method can produce an optimized compact model by using the regularized parameters. Further, to reduce the computational complexity, a proper regression context is used to allow fast implementation of the proposed method. Simulation results confirm the effectiveness of the proposed technique.
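The first-stage idea can be sketched with a minimal extreme learning machine: hidden-layer parameters are drawn at random and only the linear output weights are solved, here by regularized least squares for a two-output problem. The second-stage regressor replacement is omitted, and the data, sizes and regularization value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, (200, 2))
Y = np.column_stack([np.sin(X[:, 0]), X[:, 1] ** 2])   # two smooth targets

W = rng.normal(0, 1, (2, 50)); b = rng.normal(0, 1, 50)  # random hidden layer
H = np.tanh(X @ W + b)                     # fixed random nonlinear regressors
lam = 1e-6                                 # local regularization parameter
beta = np.linalg.solve(H.T @ H + lam * np.eye(50), H.T @ Y)

mse = float(((H @ beta - Y) ** 2).mean())  # training error of the LITP model
```

Since only `beta` is fitted, the model stays linear in the parameters, which is what makes the two-stage selection and regularization in the paper tractable.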
Abstract:
Molecular communication is set to play an important role in the design of complex biological and chemical systems. An important class of molecular communication systems is based on the timing channel, where information is encoded in the delay of the transmitted molecule - a synchronous approach. At present, a widely used modeling assumption is perfect synchronization between the transmitter and the receiver. Unfortunately, this assumption is unlikely to hold in most practical molecular systems. To remedy this, we introduce a clock into the model, leading to the molecular timing channel with synchronization error. To quantify the behavior of this new system, we derive upper and lower bounds on the variance-constrained capacity, which we view as an intermediate step between the mean-delay and the peak-delay constrained capacities. By numerically evaluating our bounds, we obtain a key practical insight: the drift velocity of the clock links need not be significantly larger than that of the information link in order to achieve the variance-constrained capacity under perfect synchronization.
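A toy simulation of a timing channel with imperfect synchronization: each symbol is a molecule release time, and the receiver observes it plus a random propagation delay and a clock offset. All distributions and parameters are illustrative stand-ins, not the paper's channel model or its capacity bounds.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 10000
symbols = rng.choice([0.0, 1.0], n)        # release times encoding one bit each
delay = rng.exponential(0.15, n)           # random first-arrival delay
offset = rng.normal(0, 0.05, n)            # synchronization (clock) error
arrivals = symbols + delay + offset        # what the receiver actually sees

decoded = (arrivals > 0.65).astype(float)  # threshold between the two means
error_rate = float((decoded != symbols).mean())
```

Even this crude threshold decoder shows the channel remains usable under moderate clock error, since the symbol spacing dominates both noise sources here.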
Abstract:
A parametric regression model for right-censored data with a log-linear median regression function and a transformation in both the response and regression parts, named the parametric Transform-Both-Sides (TBS) model, is presented. The TBS model has a parameter that handles data asymmetry while allowing various distributions for the error, as long as they are unimodal symmetric distributions centered at zero. The discussion focuses on the estimation procedure with five important error distributions (normal, double-exponential, Student's t, Cauchy and logistic) and presents properties, associated functions (that is, survival and hazard functions) and estimation methods based on maximum likelihood and on the Bayesian paradigm. These procedures are implemented in TBSSurvival, an open-source, fully documented R package. The use of the package is illustrated and the performance of the model is analyzed using both simulated and real data sets.
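The transform-both-sides idea can be sketched as follows: the same power transform is applied to the response and to the log-linear median function, and the transform parameter is chosen by (grid-search) maximum likelihood with normal errors. Censoring, the other error laws and the TBSSurvival API are omitted; the data and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(0, 2, 300)
t = np.exp(1.0 + 0.5 * x) * np.exp(rng.normal(0, 0.3, 300))  # log-normal times

def g(y, lam):                      # Box-Cox-style transform (applied both sides)
    return np.log(y) if lam == 0 else (y ** lam - 1) / lam

def negloglik(b0, b1, lam, sigma):
    med = np.exp(b0 + b1 * x)                    # log-linear median function
    r = g(t, lam) - g(med, lam)                  # residual on the transformed scale
    jac = (lam - 1) * np.log(t)                  # log-Jacobian of the transform
    return float((0.5 * (r / sigma) ** 2 + np.log(sigma)).sum() - jac.sum())

# Crude profile over the transform parameter at the true regression values:
best = min([0.0, 0.25, 0.5, 1.0], key=lambda l: negloglik(1.0, 0.5, l, 0.3))
```

Since the data were generated with log-normal errors, the profile should pick the log transform, illustrating how the transform parameter absorbs data asymmetry.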