951 results for Input Distance Function


Relevance:

80.00%

Publisher:

Abstract:

Climate change has become one of the most challenging issues facing the world. The Chinese government has recognised the importance of energy conservation and climate change mitigation for the sustainable development of China's economy and has set targets for CO2 emissions reduction. Industry contributes 84.2% of China's total CO2 emissions, particularly the manufacturing industries. Data envelopment analysis (DEA) and the Malmquist productivity (MP) index are widely used mathematical techniques for assessing the relative efficiency and productivity of a group of homogeneous decision making units (DMUs), e.g. industries or countries. In many real applications, however, especially those related to energy efficiency, there are often undesirable outputs, e.g. pollution, waste and CO2 emissions, which are inevitably produced alongside the desirable outputs. This paper introduces a novel Malmquist-Luenberger productivity (MLP) index based on the directional distance function (DDF) to address the productivity evolution of DMUs in the presence of undesirable outputs. The new RAM (range-adjusted measure)-based global MLP index is applied to evaluate CO2 emissions reduction in Chinese light manufacturing industries, and recommendations for policy makers are discussed.
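At the core of such an MLP index is a directional distance function evaluated by linear programming. Below is a minimal sketch of a DDF with undesirable outputs under a standard DEA technology; the paper's RAM-based global formulation is not reproduced, and the data and direction vector are illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(x, y, b, o, gx, gy, gb):
    """DEA directional distance function for DMU o.

    x: (m, n) inputs, y: (s, n) desirable outputs, b: (k, n) undesirable
    outputs for n DMUs; (gx, gy, gb) is the direction vector. Maximises
    beta such that (x_o - beta*gx, y_o + beta*gy, b_o - beta*gb) is
    producible; undesirable outputs are held with equality (a common way
    to model weak disposability)."""
    n = x.shape[1]
    c = np.zeros(n + 1)
    c[-1] = -1.0                                  # maximise beta
    A_ub = np.block([[x, gx[:, None]],            # sum(lam*x) + beta*gx <= x_o
                     [-y, gy[:, None]]])          # sum(lam*y) - beta*gy >= y_o
    b_ub = np.concatenate([x[:, o], -y[:, o]])
    A_eq = np.hstack([b, gb[:, None]])            # sum(lam*b) + beta*gb = b_o
    b_eq = b[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[-1]                              # beta = 0 means efficient

# toy data: 2 inputs, 1 desirable and 1 undesirable output, 4 DMUs
x = np.array([[2.0, 3.0, 4.0, 5.0],
              [1.0, 2.0, 2.0, 3.0]])
y = np.array([[1.0, 2.0, 3.0, 3.0]])
b = np.array([[0.5, 1.0, 1.5, 2.5]])              # e.g. CO2 emissions
print(directional_distance(x, y, b, o=3, gx=x[:, 3], gy=y[:, 3], gb=b[:, 3]))
```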

Relevance:

80.00%

Publisher:

Abstract:

Forecasting is the basis for strategic, tactical and operational business decisions. In financial economics, several techniques have been used over the past decades to predict the behavior of assets. There are thus many methods to assist in time series forecasting; however, conventional modeling techniques, such as statistical models and those based on theoretical mathematical models, have produced unsatisfactory predictions, spurring research into more advanced prediction methods. Among these, Artificial Neural Networks (ANNs) are a relatively new and promising method that has attracted much interest in the financial environment and has been used successfully in a wide variety of financial modeling applications, in many cases proving superior to statistical ARIMA-GARCH models. In this context, this study examined whether ANNs are a more appropriate method for predicting the behavior of capital market indices than traditional methods of time series analysis. To this end we conducted a quantitative study based on financial economic indices and developed two supervised-learning feedforward ANN models, each with 20 inputs in the input layer, 90 neurons in one hidden layer, and one output neuron (the Ibovespa). The models were trained with backpropagation, using a tangent sigmoid activation function and a linear output function. To analyse how well the ANN method predicts the Ibovespa, we compared its results with those of a GARCH(1,1) time series model. Once both methods (ANN and GARCH) were applied, we analysed the results by comparing the forecasts with the historical data and studying the forecast errors via the MSE, RMSE, MAE, standard deviation, Theil's U and forecast encompassing tests. The ANN models had lower MSE, RMSE and MAE than the GARCH(1,1) model, and the Theil's U test indicated that all three models have smaller errors than a naïve forecast. Although the return-based ANN has lower precision indicator values than the price-based ANN, the forecast encompassing test rejected the hypothesis that one model is better than the other, indicating that the ANN models have a similar level of accuracy. We concluded that, for the data series studied, the ANN models provide more appropriate Ibovespa forecasts than traditional time series models, represented here by the GARCH model.
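A minimal sketch of the described network (20 inputs, 90 tanh hidden neurons, one linear output, trained by backpropagation on MSE), written with Keras; the optimizer, epoch count, and windowing scheme are assumptions not stated in the abstract.

```python
import numpy as np
from tensorflow import keras

# the architecture described above: 20 lagged values in, 90 tanh hidden
# neurons, one linear output forecasting the next Ibovespa value
model = keras.Sequential([
    keras.layers.Dense(90, activation="tanh", input_shape=(20,)),
    keras.layers.Dense(1, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")   # trained by backpropagation
                                              # (optimizer choice assumed)

# sliding-window dataset from a series (synthetic stand-in for the Ibovespa)
series = np.cumsum(np.random.randn(1000))
X = np.array([series[i:i + 20] for i in range(len(series) - 20)])
t = series[20:]
model.fit(X, t, epochs=10, batch_size=32, validation_split=0.2, verbose=0)
one_step_forecast = model.predict(series[-20:].reshape(1, 20))
```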

Relevance:

40.00%

Publisher:

Abstract:

The problem considered is that of determining optimal power spectral density models for earthquake excitation which satisfy constraints on total average power and zero crossing rate while producing the highest response variance in a given linear system. The solution to this problem is obtained using linear programming methods. The resulting solutions display a highly deterministic structure and therefore fail to capture the stochastic nature of the input. A modification to the definition of critical excitation is proposed which takes the entropy rate into account as a measure of uncertainty in the earthquake loads. The resulting problem is solved using the calculus of variations and also within a linear programming framework. Illustrative examples of specifying seismic inputs for a nuclear power plant and a tall earth dam are considered, and the resulting solutions are shown to be realistic.
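In its simplest form (total-power constraint only; the zero-crossing and entropy-rate constraints of the paper are omitted), the critical excitation problem is a small linear program over discretised PSD ordinates, and its solution illustrates the deterministic structure noted above by concentrating all power at resonance. A hedged sketch, with an illustrative single-degree-of-freedom transfer function:

```python
import numpy as np
from scipy.optimize import linprog

# discretised PSD ordinates S(w_i) are the LP variables; SDOF |H(w)|^2
wn, zeta, P0 = 10.0, 0.05, 1.0                  # natural freq, damping, power
w = np.linspace(0.1, 30.0, 300)
dw = w[1] - w[0]
H2 = 1.0 / ((wn**2 - w**2)**2 + (2 * zeta * wn * w)**2)

# maximise response variance sum(H2*S)*dw  s.t.  sum(S)*dw <= P0, S >= 0
res = linprog(-(H2 * dw), A_ub=[np.full_like(w, dw)], b_ub=[P0],
              bounds=[(0, None)] * len(w), method="highs")
S = res.x
print("power concentrated at w =", w[S > 1e-9])  # a single spike at resonance
```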

Relevance:

40.00%

Publisher:

Abstract:

The effect of structure height on the lightning striking distance is estimated using a lightning strike model that takes into account the effect of connecting leaders. According to the results, the lightning striking distance may differ significantly from the values assumed in the IEC standard for structure heights beyond 30 m, whereas for structure heights below about 30 m the values assumed by the IEC do not differ significantly from the predictions of the connecting-leader attachment model. Since the IEC assumes a smaller striking distance than the one predicted by the adopted model, safety is not compromised by adhering to the IEC standard. Results obtained from the model are also compared with the Collection Volume Method (CVM) and other commonly used lightning attachment models available in the literature. In the case of the CVM, the calculated attractive distances are much larger than those obtained from physically based lightning attachment models, indicating that lightning protection procedures may be compromised when the CVM is used.
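For orientation, a commonly used electrogeometric relation (e.g. in IEEE Std 998 and rolling-sphere analyses) ties the striking distance to the peak return-stroke current alone, r = 10·I^0.65 (r in metres, I in kA), with no height dependence; connecting-leader models add exactly the height effect studied here. A small sketch of that baseline relation, offered as an assumption about the IEC-style baseline rather than a reproduction of the paper's model:

```python
import numpy as np

def striking_distance(I_kA):
    """Electrogeometric striking distance r = 10 * I**0.65 (metres, kA peak).
    Note it depends only on current, not on structure height."""
    return 10.0 * np.asarray(I_kA, dtype=float) ** 0.65

for I in (5, 10, 31, 100):                      # typical first-stroke peaks
    print(f"I = {I:>3} kA  ->  r = {striking_distance(I):6.1f} m")
```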

Relevance:

40.00%

Publisher:

Abstract:

The study introduces two new alternatives for global response sensitivity analysis based on the L2-norm and Hellinger's metric for measuring the distance between two probabilistic models. Both procedures are shown to be capable of treating dependent non-Gaussian random variable models for the input variables. The sensitivity indices based on the L2-norm involve second-order moments of the response and, when applied to an independent and identically distributed sequence of input random variables, are shown to be related to the classical Sobol response sensitivity indices. The analysis based on Hellinger's metric addresses variability across the entire range, or segments, of the response probability density function, and is shown to be a conceptually more satisfying alternative to the Kullback-Leibler divergence based analysis reported in the existing literature. The study also covers Monte Carlo simulation based methods for computing the sensitivity indices and sensitivity analysis with respect to grouped variables. Illustrative examples consist of studies on the global sensitivity analysis of the natural frequencies of a random multi-degree-of-freedom system, the response of a nonlinear frame, and the safety margin associated with a nonlinear performance function.
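For reference, the classical Sobol first-order indices that the L2-norm based measure reduces to (for independent inputs) can be estimated with the standard pick-freeze Monte Carlo scheme; a minimal sketch on the Ishigami test function, offered as an illustration and not as the paper's own estimator:

```python
import numpy as np

def sobol_first_order(model, d, n=100_000, seed=0):
    """Pick-freeze Monte Carlo estimate of the first-order Sobol indices
    of a model with d independent U(0,1) inputs (Saltelli estimator)."""
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var = yA.var()
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                 # resample only the i-th input
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

def ishigami(U):
    """Ishigami test function on inputs mapped from U(0,1) to U(-pi, pi)."""
    X = -np.pi + 2.0 * np.pi * U
    return (np.sin(X[:, 0]) + 7.0 * np.sin(X[:, 1]) ** 2
            + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))

print(sobol_first_order(ishigami, d=3))     # approx [0.31, 0.44, 0.00]
```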

Relevance:

40.00%

Publisher:

Abstract:

This study uses a discrete choice experiment (DCE) to elicit willingness-to-pay estimates for changes in the water quality of three rivers. Like many regions, the Berlin-Brandenburg metropolitan region is struggling to achieve the objectives of the Water Framework Directive by 2015, a major problem being the high load of nutrients. As the region spans two states (Länder) and the river stretches run throughout the whole region, we account for the spatial context in two ways: first, we incorporate the distance between each respondent and all river stretches in all MNL and RPL models, and second, we consider whether respondents reside in the state of Berlin or Brandenburg. The compensating variation (CV) calculated for various scenarios shows that overall people would benefit significantly from improved water quality. The CV measures, however, also reveal that ignoring the spatial context would result in severely biased welfare measures. While the distance decay effect lowers CV, state residency is connected to the frequency of status quo choices, and not accounting for residency would underestimate the possible welfare gains in one state. Another finding is that the extent of the market varies with respect to attributes (river stretches) and attribute levels (water quality levels).
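In MNL/RPL models of this kind, the compensating variation for a scenario is typically computed from the change in the expected maximum utility (the "logsum") divided by the marginal utility of money. A hedged sketch with invented coefficients (not the paper's estimates), showing how a distance-decay term reduces CV:

```python
import numpy as np

def compensating_variation(V0, V1, beta_cost):
    """Logsum-based CV for a logit model: money-metric change in expected
    maximum utility between choice sets V0 (status quo) and V1 (policy).
    beta_cost is the (negative) marginal utility of money."""
    logsum0 = np.log(np.exp(V0).sum())
    logsum1 = np.log(np.exp(V1).sum())
    return (logsum1 - logsum0) / (-beta_cost)

# illustrative: status quo vs. improved water quality on one river stretch,
# with a distance-decay term lowering the improvement's utility
beta_cost, beta_quality, beta_dist = -0.05, 0.8, -0.02
for dist_km in (1, 10, 50):
    V0 = np.array([0.0])                            # status quo only
    V1 = np.array([0.0, beta_quality + beta_dist * dist_km])
    print(dist_km, "km:", round(compensating_variation(V0, V1, beta_cost), 2))
```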

Relevance:

40.00%

Publisher:

Abstract:

Context. The ESA Rosetta spacecraft, currently orbiting around comet 67P/Churyumov-Gerasimenko, has already provided in situ measurements of the dust grain properties from several instruments, particularly OSIRIS and GIADA. We propose adding value to those measurements by combining them with ground-based observations of the dust tail to monitor the overall, time-dependent dust-production rate and size distribution.
Aims. To constrain the dust grain properties, we take Rosetta OSIRIS and GIADA results into account, and combine OSIRIS data during the approach phase (from late April to early June 2014) with a large data set of ground-based images that were acquired with the ESO Very Large Telescope (VLT) from February to November 2014.
Methods. A Monte Carlo dust tail code, which has already been used to characterise the dust environments of several comets and active asteroids, has been applied to retrieve the dust parameters. Key properties of the grains (density, velocity, and size distribution) were obtained from Rosetta observations: these parameters were used as inputs to the code to considerably reduce the number of free parameters. In this way, the overall dust mass-loss rate and its dependence on the heliocentric distance could be obtained accurately.
Results. The dust parameters derived from the inner coma measurements by OSIRIS and GIADA and from distant imaging using VLT data are consistent, except for the power index of the size-distribution function, which is α = −3, instead of α = −2, for grains smaller than 1 mm. This is possibly linked to the presence of fluffy aggregates in the coma. The onset of cometary activity occurs at approximately 4.3 AU, with a dust production rate of 0.5 kg/s, increasing up to 15 kg/s at 2.9 AU. This implies a dust-to-gas mass ratio varying between 3.8 and 6.5 for the best-fit model when combined with water-production rates from the MIRO experiment.
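Monte Carlo dust codes of this kind typically draw grain radii from the power-law size distribution n(a) ∝ a^α by inverse-transform sampling; a sketch using the α = −3 differential index reported above (the sampling scheme and size limits are illustrative assumptions, not taken from the paper's code):

```python
import numpy as np

def sample_power_law(alpha, a_min, a_max, n, seed=1):
    """Inverse-transform sampling of grain radii from n(a) ~ a**alpha
    on [a_min, a_max], valid for alpha != -1."""
    u = np.random.default_rng(seed).random(n)
    p = alpha + 1.0
    return (u * (a_max**p - a_min**p) + a_min**p) ** (1.0 / p)

# grains from 1 micron to 1 mm with differential index alpha = -3
a = sample_power_law(alpha=-3.0, a_min=1e-6, a_max=1e-3, n=100_000)
print(f"median radius: {np.median(a):.2e} m")   # dominated by small grains
```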

Relevance:

40.00%

Publisher:

Abstract:

This study provides detailed information on the ability of healthy ears to generate distortion product otoacoustic emissions (DPOAEs).

Relevance:

40.00%

Publisher:

Abstract:

A new Radial Basis Function (RBF) neural network structure, the Dual-orthogonal RBF network (DRBF), is introduced for nonlinear time series prediction. The hidden nodes of a conventional RBF network compute the Euclidean distance between the network input vector and the centres, and the node responses are radially symmetrical. But in time series prediction, where the system input vectors are lagged system outputs and thus usually highly correlated, the Euclidean distance measure may not be appropriate. The DRBF network modifies the distance metric by introducing a classification function based on the estimation data set. Training a DRBF network consists of two stages: learning the classification-related basis functions and the important input nodes, followed by selecting the regressors and learning the weights of the hidden nodes. In both stages a forward Orthogonal Least Squares (OLS) selection procedure is applied, initially to select the important input nodes and then to select the important centres. Simulation results for single-step and multi-step-ahead predictions over a test data set demonstrate the effectiveness of the new approach.
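Both selection stages rest on the forward OLS procedure, which greedily picks the candidate whose orthogonalised component explains the largest fraction of the target variance (the error reduction ratio). A minimal sketch of that generic procedure, assuming plain Gram-Schmidt orthogonalisation rather than the paper's classification-based variant:

```python
import numpy as np

def forward_ols(P, y, n_select):
    """Greedy forward OLS selection: P is an (N, M) candidate regressor
    matrix, y an (N,) target. Returns column indices in selection order,
    ranked by error reduction ratio (ERR)."""
    selected, basis = [], []          # chosen indices, orthogonalised columns
    yy = float(y @ y)
    for _ in range(n_select):
        best_err, best_j, best_w = -1.0, None, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].astype(float).copy()
            for q in basis:                       # orthogonalise vs. basis
                w -= (q @ w) / (q @ q) * q
            denom = w @ w
            if denom < 1e-12:                     # numerically dependent
                continue
            err = (w @ y) ** 2 / (denom * yy)     # error reduction ratio
            if err > best_err:
                best_err, best_j, best_w = err, j, w
        selected.append(best_j)
        basis.append(best_w)
    return selected

# toy usage: pick the 2 most relevant of 5 random candidate regressors
rng = np.random.default_rng(0)
P = rng.standard_normal((200, 5))
y = 2.0 * P[:, 3] - 1.0 * P[:, 1] + 0.1 * rng.standard_normal(200)
print(forward_ols(P, y, n_select=2))              # expect [3, 1]
```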

Relevance:

40.00%

Publisher:

Abstract:

Individuals involved in explosive sports, such as the 100-m sprint and long jump, have greater bone density, leg muscle size, jumping height and grip strength than individuals involved in long-distance running. INTRODUCTION: The purpose of this study is to examine the relationship between different types of physical activity and bone, lean mass and neuromuscular performance in older individuals. METHODS: We examined short- (n = 50), middle- (n = 19) and long-distance (n = 109) athletes at the 15th European Masters Championships in Poznań, Poland. Dual X-ray absorptiometry was used to measure areal bone mineral density (aBMD) and lean tissue mass. Maximal countermovement jump, multiple one-leg hopping and maximal grip force tests were performed. RESULTS: Short-distance athletes showed significantly higher aBMD at the legs, hip, lumbar spine and trunk than long-distance athletes (p ≤ 0.0012). Countermovement jump performance, hop force, grip force, leg lean mass and arm lean mass were also greater in short-distance athletes (p ≤ 0.027). A similar pattern was seen in middle-distance athletes, who typically showed higher aBMD and better neuromuscular performance than long-distance athletes, but of lower magnitude than short-distance athletes. In all athletes, aBMD was the same as or higher than the expected age-adjusted population mean at the lumbar spine, hip and whole body; this effect was greater in the short- and middle-distance athletes. CONCLUSIONS: The stepwise relation between short-, middle- and long-distance athletes on bone suggests that the higher-impact loading protocols of short-distance disciplines are more effective in promoting aBMD. The regional effect on bone, with the differences between the groups being most marked at load-bearing regions (legs, hip, spine and trunk) rather than non-load-bearing regions, is further evidence that bone adaptation to exercise depends on the local loading environment rather than being part of a systemic effect.

Relevance:

40.00%

Publisher:

Abstract:

The influence of curing tip distance and storage time on the kinetics of water diffusion (water sorption, WSP; solubility, WSB; and net water uptake) and on the color stability of a composite was evaluated. Composite samples were polymerized at different distances (5, 10, and 15 mm) and compared to a control group (0 mm). After desiccation, the specimens were stored in distilled water to evaluate water diffusion over a 120-day period. Net water uptake was calculated as the sum of WSP and WSB. Color stability after immersion in grape juice was compared to that in distilled water. Data were submitted to three-way ANOVA/Tukey's test (α = 5%). Greater distances caused higher net water uptake (p < 0.05). Immersion in the juice caused significantly higher color change as a function of curing tip distance and time (p < 0.05). The photoactivation distance and storage time increased the color alteration and net water uptake of the resin composite tested.

Relevance:

40.00%

Publisher:

Abstract:

The average travel distance in a low-level picker-to-part order picking system can in most cases be estimated by analytical methods. Often a uniform distribution of access frequency over all bin locations in the storage system is assumed; this applies only if bin locations are assigned randomly. If the access frequency of the articles is considered in the bin location assignment, in order to reduce the picker's average total travel distance, the access frequency over the bin locations of one aisle can be approximated by an exponential density function or a similar density function. All known calculation methods assume that the average number of orderlines per order is greater than the number of aisles of the storage system; for small orders this assumption is often invalid. This paper presents a new approach for calculating the average total travel distance that allows the average number of orderlines per order to be lower than the total number of aisles in the storage system and the access frequency over the bin locations of an aisle to be approximated by any density function.
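The small-order setting is easy to explore numerically. A Monte Carlo sketch of the average total travel distance when each order has fewer orderlines than there are aisles and the within-aisle access frequency decays exponentially over the bin locations; the routing policy (return routing) and all geometry parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_aisles, bins_per_aisle = 10, 50
aisle_pitch, bin_pitch = 4.0, 1.0       # metres between aisles / bins (assumed)
lines_per_order = 4                     # fewer orderlines than aisles

# exponential access-frequency profile over the bins of each aisle
p_bin = np.exp(-0.1 * np.arange(bins_per_aisle))
p_bin /= p_bin.sum()

def order_travel():
    """Travel for one order: return routing within each visited aisle,
    plus travel along the front cross-aisle to the farthest aisle."""
    aisles = rng.integers(0, n_aisles, lines_per_order)
    bins = rng.choice(bins_per_aisle, lines_per_order, p=p_bin)
    dist = 0.0
    for a in np.unique(aisles):
        dist += 2 * bin_pitch * bins[aisles == a].max()
    dist += 2 * aisle_pitch * aisles.max()
    return dist

print(np.mean([order_travel() for _ in range(20_000)]), "m on average")
```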

Relevance:

40.00%

Publisher:

Abstract:

DCE-MRI is an important technique in the study of small animal cancer models because its sensitivity to vascular changes opens the possibility of quantitative assessment of early therapeutic response. However, extraction of physiologically descriptive parameters from DCE-MRI data relies upon measurement of the vascular input function (VIF), which represents the contrast agent concentration time course in the blood plasma. This is difficult in small animal models due to artifacts associated with partial volume, inflow enhancement, and the limited temporal resolution achievable with MR imaging. In this work, the development of a suite of techniques for high-temporal-resolution, artifact-resistant measurement of the VIF in mice is described. One obstacle in VIF measurement is inflow enhancement, which decreases the sensitivity of the MR signal to the presence of contrast agent. Because the traditional techniques used to suppress inflow enhancement degrade the achievable spatiotemporal resolution of the pulse sequence, improvements can be achieved by reducing the time required for the suppression. Thus, a novel RF pulse that provides spatial presaturation contemporaneously with the RF excitation was implemented and evaluated. This maximizes the achievable temporal resolution by removing the additional RF and gradient pulses typically required for suppression of inflow enhancement. A second challenge is achieving the temporal resolution required for accurate characterization of the VIF, which exceeds what can be achieved with conventional imaging techniques while maintaining adequate spatial resolution and tumor coverage. Thus, an anatomically constrained reconstruction strategy was developed that allows the VIF to be sampled at extremely high acceleration factors, permitting capture of the initial pass of the contrast agent in mice. Simulation, phantom, and in vivo validation of all components were performed. Finally, the two components were used to perform VIF measurement in the murine heart. An in vivo study of VIF reproducibility was performed, and an improvement in the measured injection-to-injection variation was observed. This will improve the reliability of quantitative DCE-MRI measurements and increase their sensitivity.
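As background on why the VIF is central: in quantitative DCE-MRI the tissue concentration is commonly modelled (e.g. by the standard Tofts model, which the abstract itself does not name) as the convolution of the VIF with an exponential kernel, C_t(t) = K^trans ∫ C_p(τ) e^(−k_ep(t−τ)) dτ, so errors in the measured VIF propagate directly into the fitted parameters. A minimal discrete sketch with an assumed biexponential VIF shape:

```python
import numpy as np

def tofts_tissue_curve(Cp, t, Ktrans, kep):
    """Standard Tofts model: tissue concentration as the convolution of
    the vascular input function Cp(t) with Ktrans * exp(-kep * t)."""
    dt = t[1] - t[0]
    kernel = Ktrans * np.exp(-kep * t)
    return np.convolve(Cp, kernel)[: len(t)] * dt

# illustrative VIF: biexponential wash-in/wash-out (parameters assumed)
t = np.linspace(0, 300, 600)                        # seconds
Cp = 5.0 * (np.exp(-0.05 * t) - np.exp(-0.5 * t))
Ct = tofts_tissue_curve(Cp, t, Ktrans=0.0025, kep=0.01)   # per-second units
print(f"peak tissue concentration: {Ct.max():.3f} (arb. units)")
```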
