943 results for engineering, electrical


Relevance:

60.00%

Publisher:

Abstract:

This paper introduces a new portfolio risk measure: the uncertainty of the portfolio's fuzzy return. Going beyond the well-known Sharpe ratio (the reward-to-variability ratio) of modern portfolio theory, we define the so-called fuzzy Sharpe ratio in the fuzzy modeling context. In addition to the new risk measure, we also put forward a reward-to-uncertainty ratio to assess portfolio performance under fuzzy modeling. Corresponding to two approaches based on TM and TW fuzzy arithmetic, two portfolio optimization models are formulated in which the uncertainty of the portfolio's fuzzy return is minimized while the fuzzy Sharpe ratio is maximized. These models are solved either by a fuzzy approach or by a genetic algorithm (GA). Solutions of the two proposed models are shown to dominate, in terms of portfolio return uncertainty, those of the conventional mean-variance optimization (MVO) model used prevalently in the financial literature. In terms of portfolio performance evaluated by the fuzzy Sharpe ratio and the reward-to-uncertainty ratio, the model using TW fuzzy arithmetic yields higher-performing portfolios than both the MVO model and the fuzzy model employing TM fuzzy arithmetic. We also find that the fuzzy approach tends to achieve better solutions to the multiobjective problems than the GA, although the GA can offer a series of well-diversified portfolio solutions along a Pareto frontier.
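The reward-to-uncertainty idea above can be sketched for a portfolio whose return is modelled as a triangular fuzzy number; the representation, the centroid defuzzification, and the support-width uncertainty measure are illustrative assumptions, not the paper's TM/TW constructions:

```python
def defuzzified_mean(a, b, c):
    # Centroid of a triangular fuzzy number (a, b, c) with a <= b <= c.
    return (a + b + c) / 3.0

def uncertainty(a, b, c):
    # Width of the support, used here as a stand-in uncertainty measure.
    return c - a

def fuzzy_sharpe(a, b, c, risk_free=0.0):
    # Reward-to-uncertainty ratio: excess defuzzified return per unit spread.
    return (defuzzified_mean(a, b, c) - risk_free) / uncertainty(a, b, c)
```

For a fuzzy return (0.0, 0.1, 0.2) and zero risk-free rate, the ratio is (0.1 - 0) / 0.2 = 0.5.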

The nonlinear, noisy and outlier-prone characteristics of electroencephalography (EEG) signals motivate the use of fuzzy logic, given its power to handle uncertainty. This paper introduces an approach to classifying motor imagery EEG signals using an interval type-2 fuzzy logic system (IT2FLS) in combination with the wavelet transform. Wavelet coefficients are ranked based on statistics of the receiver operating characteristic curve criterion, and the most informative coefficients serve as inputs to the IT2FLS for the classification task. Two benchmark datasets, Ia and Ib, downloaded from the brain-computer interface (BCI) competition II, are employed in the experiments. Classification performance is evaluated using accuracy, sensitivity, specificity and F-measure. Widely used classifiers, including a feedforward neural network, support vector machine, k-nearest neighbours, AdaBoost and an adaptive neuro-fuzzy inference system, are also implemented for comparison. The wavelet-IT2FLS method considerably outperforms the comparison classifiers on both datasets, and exceeds the best performance on the Ia and Ib datasets reported in the BCI competition II by 1.40% and 2.27%, respectively. The proposed approach yields high accuracy at low computational cost, and can be applied to a real-time BCI system for motor imagery data analysis.
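The coefficient-ranking step can be sketched with a rank-based AUC; the function names and the |AUC - 0.5| ranking criterion are assumptions for illustration, not the paper's exact statistic:

```python
def auc(scores, labels):
    # Rank-based (Mann-Whitney) area under the ROC curve, labels in {0, 1}.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def rank_coefficients(matrix, labels):
    # Rank coefficient columns by how far their AUC is from chance (0.5).
    n_cols = len(matrix[0])
    scored = [(abs(auc([row[j] for row in matrix], labels) - 0.5), j)
              for j in range(n_cols)]
    return [j for _, j in sorted(scored, reverse=True)]
```

The top-ranked columns would then be passed on as classifier inputs.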

Traffic congestion on urban roads is one of the biggest challenges of the 21st century. Despite a myriad of research efforts in the last two decades, network-level optimization of traffic signals remains an open research problem. This paper, for the first time, employs the cuckoo search optimization algorithm to optimally tune the parameters of intelligent controllers. A neural network (NN) and an adaptive neuro-fuzzy inference system (ANFIS) are the two intelligent controllers implemented in this study. For comparison, we also implement Q-learning and fixed-time controllers as benchmarks. Comprehensive simulation scenarios are designed and executed for a traffic network composed of nine four-way intersections. The results demonstrate the optimality of the intelligent controllers trained using the cuckoo search method. The average performance gains of the NN, ANFIS, and Q-learning controllers over the fixed-time controller are 44%, 39%, and 35%, respectively.
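A minimal sketch of cuckoo search for minimisation follows; the heavy-tailed step standing in for a Lévy flight, the nest count, and the abandonment rule are simplifying assumptions, not the paper's tuned variant:

```python
import random

def cuckoo_search(objective, dim, n_nests=15, iters=400, pa=0.25, seed=1):
    # Minimal cuckoo search sketch; step sizes and pa are illustrative.
    rng = random.Random(seed)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    fit = [objective(n) for n in nests]
    b = min(range(n_nests), key=lambda k: fit[k])
    best_x, best_f = nests[b][:], fit[b]
    for _ in range(iters):
        i = rng.randrange(n_nests)
        # Heavy-tailed random step standing in for a Levy flight.
        cand = [x + 0.1 * rng.gauss(0, 1) / (abs(rng.gauss(0, 1)) + 1e-9) ** 0.5
                for x in nests[i]]
        f = objective(cand)
        j = rng.randrange(n_nests)
        if f < fit[j]:                       # lay the egg in a random nest
            nests[j], fit[j] = cand, f
        if rng.random() < pa:                # abandon the worst nest
            w = max(range(n_nests), key=lambda k: fit[k])
            nests[w] = [rng.uniform(-5, 5) for _ in range(dim)]
            fit[w] = objective(nests[w])
        b = min(range(n_nests), key=lambda k: fit[k])
        if fit[b] < best_f:
            best_x, best_f = nests[b][:], fit[b]
    return best_x, best_f
```

In the paper's setting, `objective` would score a controller parameter vector via traffic simulation rather than a test function.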

This brief proposes an efficient technique for the construction of optimized prediction intervals (PIs) by using the bootstrap technique. The method employs an innovative PI-based cost function in the training of neural networks (NNs) used for estimation of the target variance in the bootstrap method. An optimization algorithm is developed for minimization of the cost function and adjustment of NN parameters. The performance of the optimized bootstrap method is examined for seven synthetic and real-world case studies. It is shown that application of the proposed method improves the quality of constructed PIs by more than 28% over the existing technique, leading to narrower PIs with a coverage probability greater than the nominal confidence level.
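A percentile bootstrap is the simplest relative of the technique described above; this sketch builds an interval for a scalar statistic rather than the brief's NN-based PIs, and all names and defaults are illustrative:

```python
import random
import statistics

def bootstrap_interval(sample, estimator=statistics.mean,
                       n_boot=2000, alpha=0.1, seed=0):
    # Percentile bootstrap: resample with replacement, re-estimate, and
    # take the alpha/2 and 1 - alpha/2 percentiles of the estimates.
    rng = random.Random(seed)
    estimates = sorted(
        estimator([rng.choice(sample) for _ in sample]) for _ in range(n_boot)
    )
    lo = estimates[int(n_boot * alpha / 2)]
    hi = estimates[int(n_boot * (1 - alpha / 2)) - 1]
    return lo, hi
```

The brief's contribution sits on top of this idea: an NN estimates the target variance, and its training cost function is itself PI-based.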

Neural networks (NNs) are an effective tool to model nonlinear systems. However, their forecasting performance significantly drops in the presence of process uncertainties and disturbances. NN-based prediction intervals (PIs) offer an alternative solution to appropriately quantify uncertainties and disturbances associated with point forecasts. In this paper, an NN ensemble procedure is proposed to construct quality PIs. A recently developed lower-upper bound estimation method is applied to develop NN-based PIs. Then, constructed PIs from the NN ensemble members are combined using a weighted averaging mechanism. Simulated annealing and a genetic algorithm are used to optimally adjust the weights for the aggregation mechanism. The proposed method is examined for three different case studies. Simulation results reveal that the proposed method improves the average PI quality of individual NNs by 22%, 18%, and 78% for the first, second, and third case studies, respectively. The simulation study also demonstrates that a 3%-4% improvement in the quality of PIs can be achieved using the proposed method compared to the simple averaging aggregation method.
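The aggregation step can be sketched as a weighted average of the lower and upper bounds proposed by the ensemble members; the weight-optimization stage (simulated annealing or the GA) is omitted, and names are assumptions:

```python
def combine_intervals(member_intervals, weights):
    # Weighted average of each ensemble member's (lower, upper) bounds;
    # the weights would be tuned by SA or a GA against a PI quality score.
    total = sum(weights)
    lo = sum(w * l for w, (l, _) in zip(weights, member_intervals)) / total
    hi = sum(w * u for w, (_, u) in zip(weights, member_intervals)) / total
    return lo, hi
```

With equal weights this reduces to the simple averaging baseline that the paper reports improving on by 3%-4%.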

The aim of this research is to examine the efficiency of different aggregation algorithms applied to the forecasts obtained from individual neural network (NN) models in an ensemble. In this study an ensemble of 100 NN models with heterogeneous architectures is constructed. The outputs of the NN models are combined by three different aggregation algorithms: a simple average, a trimmed mean, and Bayesian model averaging. These methods are utilized with certain modifications and are applied to the forecasts obtained from all individual NN models. The output of the aggregation algorithms is analyzed and compared with the individual NN models in the ensemble and with a naïve approach. Thirty-minute interval electricity demand data from the Australian Energy Market Operator (AEMO) and the New York Independent System Operator (NYISO) websites are used in the empirical analysis. The aggregation algorithms are observed to perform better than many of the individual NN models, and in comparison with the naïve approach they exhibit somewhat better forecasting performance.
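The first two aggregation rules, plus a crude inverse-error weighting standing in for Bayesian model averaging, can be sketched as follows (function names, the trim fraction, and the weighting rule are assumptions, not the paper's modified versions):

```python
import statistics

def simple_average(forecasts):
    # Equal-weight combination of the ensemble forecasts.
    return statistics.mean(forecasts)

def trimmed_mean(forecasts, trim=0.1):
    # Drop the lowest and highest `trim` fraction before averaging.
    s = sorted(forecasts)
    k = int(len(s) * trim)
    return statistics.mean(s[k:len(s) - k] if k else s)

def weighted_average(forecasts, errors):
    # Crude stand-in for Bayesian model averaging: weight each model
    # by the inverse of its historical error.
    weights = [1.0 / (e + 1e-9) for e in errors]
    return sum(w * f for w, f in zip(weights, forecasts)) / sum(weights)
```

The trimmed mean illustrates why such rules help: a single wild forecast (say 100 among values near 3) is discarded before averaging.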

Vision-based tracking of an object using perspective projection inherently involves nonlinearly modelled measurements, even though the underlying dynamic system encompassing the object and the vision sensors can be linear. Based on a necessary stereo vision setting, we introduce an appropriate measurement conversion technique that subsequently facilitates the use of a linear filter. The linear filter, together with the aforementioned measurement conversion approach, forms a robust linear filter based on set-valued state estimation ideas, a particularly rich area of the robust control literature. We provide a rigorous theoretical analysis ensuring bounded state estimation errors, formulated in terms of an ellipsoidal set in which the actual state is guaranteed to be included with arbitrarily high probability. Using computer simulations as well as a practical implementation on a robotic manipulator, we demonstrate that our robust linear filter significantly outperforms the traditionally used extended Kalman filter in this stereo vision scenario. © 2008 IEEE.
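The measurement-conversion idea can be sketched for an idealised rectified stereo pair under a pinhole model; the paper's actual conversion and the set-valued filter are more involved, and all parameter names here are assumptions:

```python
def stereo_to_cartesian(u_left, u_right, v, focal, baseline):
    # Convert a rectified stereo pixel pair into Cartesian coordinates so
    # that a linear filter can be run on the converted measurement.
    disparity = u_left - u_right
    z = focal * baseline / disparity      # depth from triangulation
    x = u_left * z / focal
    y = v * z / focal
    return x, y, z
```

Once measurements live in Cartesian space, the tracking filter itself can stay linear, which is the point of the conversion step.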

In 2010 the Australian government commissioned the Australian Learning and Teaching Council (ALTC) to undertake a national project to facilitate the disciplinary development of threshold learning standards. The aim was to lay the foundation for all higher education providers to demonstrate to the new national higher education regulator, the Tertiary Education Quality and Standards Agency (TEQSA), that graduates achieved or exceeded minimum academic standards. Through a year-long consultative process, representatives of employers, professional bodies, academics and students developed learning standards applicable to any Australian higher education provider. Willey and Gardner reported using a software tool, SPARKPLUS, to calibrate academic standards among teaching staff in large classes. In this paper, we investigate the effectiveness of this technology in promoting calibrated understandings of the national accounting learning standards. We found that integrating the software with a purposely designed activity provided significant efficiencies in calibrating understandings of the learning standards, and developed expertise and a better understanding of what is required to meet these standards and how best to demonstrate them. The software and the supporting calibration and assessment process can be adopted by other disciplines, including engineering, seeking to provide direct evidence of performance against learning standards. © 2012 IEEE.

This paper addresses the problem of fully automatic localization and segmentation of 3D intervertebral discs (IVDs) from MR images. Our method consists of two steps: we first localize the center of each IVD, and then segment the IVDs by classifying the image pixels around each disc center as foreground (disc) or background. Localization is done by estimating the image displacements from a set of randomly sampled 3D image patches to the disc center. The displacements are estimated by jointly optimizing the training and test displacement values in a data-driven way, taking into consideration both the training data and the geometric constraints on the test image. Segmentation follows a similar data-driven approach, but estimates the foreground/background probability of each pixel instead of the image displacements; in addition, a neighborhood smoothness constraint is introduced to enforce local smoothness of the label field. Our method is validated on 3D T2-weighted turbo spin echo MR images of 35 patients from two different studies. Experiments show that, compared to the state of the art, our method achieves better or comparable results: a mean localization error of 1.6-2.0 mm, a mean Dice metric of 85%-88%, and a mean surface distance of 1.3-1.4 mm for segmentation.
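The displacement-based localization can be sketched as patches voting for the disc centre, aggregated with a robust per-axis median; the paper's joint train/test optimization of the displacements is omitted, and the names here are assumptions:

```python
import statistics

def localize_center(patch_positions, displacements):
    # Each sampled patch votes for the centre as position + predicted
    # displacement; a per-axis median keeps the vote robust to a minority
    # of bad displacement estimates.
    votes = [tuple(p + d for p, d in zip(pos, disp))
             for pos, disp in zip(patch_positions, displacements)]
    return tuple(statistics.median(v[k] for v in votes)
                 for k in range(len(votes[0])))
```

In the full method the per-patch displacements come from the data-driven estimator, not from hand-set values as in this toy.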

Despite significant advancements in wireless sensor networks (WSNs), energy conservation remains one of the most important research challenges. One approach commonly used to prolong the network lifetime is to aggregate data at the cluster heads (CHs). However, there is a possibility that the CHs may fail or function incorrectly due to a number of reasons, such as power instability. During a failure, the CHs are unable to collect and transfer data correctly, which degrades the performance of the WSN. Early detection of CH failures reduces data loss and enables minimal recovery efforts. This paper proposes a self-configurable clustering mechanism to detect failed CHs and replace them with other nodes. Simulation results verify the effectiveness of the proposed approach.
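A heartbeat-timeout check is one simple way to detect failed cluster heads, paired here with replacement by highest residual energy; the paper does not specify its mechanism at this level, so the names and criteria below are assumptions:

```python
def detect_failed_heads(last_heartbeat, now, timeout):
    # Flag a cluster head as failed when its last heartbeat is too old.
    return [ch for ch, t in last_heartbeat.items() if now - t > timeout]

def elect_replacement(candidate_energy):
    # Replace a failed head with the candidate holding the most energy.
    return max(candidate_energy, key=candidate_energy.get)
```

Detecting the failure early, then promoting a healthy node, is exactly the recovery path the abstract describes.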

© 2001-2012 IEEE. Sensing coverage is a fundamental design problem in wireless sensor networks (WSNs), because sensor nodes may function incorrectly for a number of reasons, such as failure, power or noise instability, which negatively affects the coverage of the WSN. To address this problem, we propose a fuzzy-based self-healing coverage scheme for randomly deployed mobile sensor nodes. The proposed scheme determines the uncovered sensing areas and then selects the best mobile nodes to be moved in order to minimize the coverage holes. In addition, it distributes the sensor nodes uniformly, considering the Euclidean distance and coverage redundancy among the mobile nodes. We have performed an extensive performance analysis of the proposed scheme; the experimental results show that it outperforms existing approaches.
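The node-relocation step can be sketched as choosing the mobile node closest, in Euclidean distance, to an uncovered point; the scheme's fuzzy decision logic and redundancy handling are omitted, and the names are assumptions:

```python
import math

def nearest_mobile_node(hole_center, mobile_nodes):
    # Pick the mobile node with the smallest Euclidean distance to the
    # uncovered point, minimising the travel needed to heal the hole.
    return min(mobile_nodes, key=lambda node: math.dist(node, hole_center))
```

In the full scheme this distance would be one input to the fuzzy selector rather than the whole criterion.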

Nowadays, classical washout filters are extensively used in commercial motion simulators. Even though classical washout filters have several advantages, such as short processing time, simplicity and ease of adjustment, they also have shortcomings. The main disadvantage is that the fixed scheme and parameters of the classical washout filter make the structure inflexible, so the resulting simulator fails to suit all circumstances. Moreover, it is a conservative approach, and the platform cannot be fully exploited. The aim of this research is to present a fuzzy logic approach that takes the human perception error into account in the classical motion cueing algorithm, in order to better exploit the physical limits of the platform and produce more realistic human sensations. The fuzzy compensator signal adjusts the filtered signals on the longitudinal and rotational channels online, as well as the tilt coordination, to keep the vestibular sensation error below the human perception threshold. The results indicate that the proposed fuzzy logic controllers significantly reduce the drawbacks of fixed parameters and conservativeness in the classical washout filter, and the performance of the motion cueing algorithm and the perceived realism are improved in most situations.
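The core of a classical washout channel is a high-pass filter that washes out sustained accelerations while letting transients through; a first-order discrete sketch (the time constant and time step are illustrative, and this omits the fuzzy compensation entirely):

```python
def high_pass(signal, dt, tau):
    # First-order discrete high-pass filter: constant inputs are washed
    # out to zero, while transients pass through and then decay.
    alpha = tau / (tau + dt)
    out = [0.0]
    for i in range(1, len(signal)):
        out.append(alpha * (out[-1] + signal[i] - signal[i - 1]))
    return out
```

The fixed `tau` is exactly the kind of parameter the paper's fuzzy compensator adjusts online.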

In many network applications, traffic is bursty in nature. The transient response of a network to such traffic is often the result of a series of interdependent events whose occurrence is not trivial to predict. Previous efforts for IEEE 802.15.4 networks typically followed top-down approaches to model these sequences of events: through top-level models of the whole network, they tried to track the transient response to burst packet arrivals. The problem with such approaches is that they cannot give station-level views of the network response and are usually complex. In this paper, we propose a non-stationary analytical model for the IEEE 802.15.4 slotted CSMA/CA medium access control (MAC) protocol under a burst traffic arrival assumption and without the optional acknowledgements. We develop a station-level stochastic time-domain method from which network-level metrics are extracted. Our bottom-up approach makes it possible to obtain station-level details such as delay, collision and failure distributions, while network-level metrics such as the average packet loss or transmission success rate can also be extracted from the model. Compared to previous models, our model is proven to have lower memory and computational complexity, and it supports contention window sizes greater than one. Extensive comparative simulations show the high accuracy of our model.
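A toy Monte Carlo of slotted contention conveys the flavour of what a CSMA/CA model must predict, although the paper's model is analytical and far more detailed; the success rule here (a uniquely minimal backoff slot wins the medium) is a simplifying assumption:

```python
import random

def slotted_success_rate(n_stations, cw, trials=10000, seed=0):
    # Toy slotted-contention Monte Carlo: each station draws a backoff
    # slot in [0, cw); a frame gets through when the minimal slot is
    # held by exactly one station, otherwise the attempt collides.
    rng = random.Random(seed)
    successes = 0
    for _ in range(trials):
        slots = [rng.randrange(cw) for _ in range(n_stations)]
        if slots.count(min(slots)) == 1:
            successes += 1
    return successes / trials
```

For two stations and cw = 8 the theoretical success rate is 1 - 1/8 = 0.875, which the simulation approaches; an analytical model like the paper's computes such quantities without sampling.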

Online social networks (OSNs) have become one of the major platforms for people to exchange information. Both positive information (e.g., ideas, news and opinions) and negative information (e.g., rumors and gossip) spreading in social media can greatly influence our lives. Researchers have previously proposed models to understand propagation dynamics, but these were merely simulations in nature and focused on the spread of only one type of information. Because of the human factors involved, the simultaneous spread of negative and positive information cannot be treated as the superposition of two independent propagations. To address these deficiencies, we propose an analytical model built stochastically from the node level up. It captures the temporal dynamics of the spread, such as the times at which people check newly arrived messages or forward them, and it models people's behavioral differences in preferring what to believe or disbelieve. Using this model, we studied the impact of social parameters on propagation. We found that some factors, such as people's preferences and the injection time of the opposing information, are critical to the propagation, while others, such as the hearsay forwarding intention, have little impact. Extensive simulations conducted on real topologies confirm the high accuracy of our model.
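A node-level toy of competing positive/negative messages can be sketched as follows; the sticky beliefs, per-message adoption probabilities, and synchronous rounds are all simplifying assumptions, not the paper's stochastic model:

```python
import random

def spread(adj, seed_pos, seed_neg, p_pos, p_neg, steps, seed=0):
    # Synchronous-round toy: an undecided node that hears a message
    # adopts it with a type-dependent probability, keeps that belief
    # forever, and forwards it to its neighbours in the next round.
    rng = random.Random(seed)
    state = {n: "+" for n in seed_pos}
    state.update({n: "-" for n in seed_neg})
    frontier = [(n, state[n]) for n in state]
    for _ in range(steps):
        nxt = []
        for node, msg in frontier:
            for nb in adj.get(node, []):
                if nb in state:
                    continue  # beliefs are sticky in this toy model
                if rng.random() < (p_pos if msg == "+" else p_neg):
                    state[nb] = msg
                    nxt.append((nb, msg))
        frontier = nxt
    return state
```

Varying `p_pos`, `p_neg`, and when `seed_neg` nodes are injected mirrors, very loosely, the social parameters the paper studies.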