956 results for Computer Science, Artificial Intelligence


Relevance: 100.00%

Abstract:

In this paper, a five-level cascaded H-bridge multilevel inverter topology is applied to an induction motor drive controlled by the direct torque control (DTC) strategy. A five-level inverter can generate more inverter states, which improves the voltage-vector selection capability. This paper also introduces two different control methods to select the appropriate output voltage vector for driving the torque and flux errors to zero. The first is based on the conventional DTC scheme, using a pair of hysteresis comparators and a lookup table to select the output voltage vector for controlling the torque and flux. The second is based on a new fuzzy logic controller using Sugeno as the inference method to select the output voltage vector, replacing the hysteresis comparators and lookup table of the conventional DTC; the results show a greater reduction in torque ripple and a smoother stator current. Matlab/Simulink simulations verify that using a five-level inverter in the DTC drive reduces torque ripple compared with conventional DTC, and that applying the fuzzy logic controller yields a further torque-ripple reduction. The simulation results also verify that using a fuzzy controller instead of a hysteresis controller significantly reduces the flux ripple and lowers the total harmonic distortion of the stator current to below 4%.
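
The selection logic of the conventional scheme is compact enough to sketch. Below is a minimal Python illustration of the hysteresis-comparator-plus-lookup-table step; the band widths and table contents are hypothetical placeholders, and the paper's five-level table is considerably larger than this two-level simplification.

    # Sketch of conventional DTC vector selection. Hysteresis bands and the
    # lookup table are hypothetical placeholders; a five-level inverter would
    # use a much larger table of voltage vectors.

    def torque_comparator(error, band=0.05, prev=0):
        # Three-level hysteresis: +1 increase torque, -1 decrease, else hold.
        if error > band:
            return 1
        if error < -band:
            return -1
        return prev

    def flux_comparator(error):
        # Two-level hysteresis comparator for the stator flux magnitude.
        return 1 if error > 0 else -1

    def select_vector(torque_err, flux_err, sector, table):
        # table maps (flux demand, torque demand) to one vector per sector.
        key = (flux_comparator(flux_err), torque_comparator(torque_err))
        return table[key][sector]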

Relevance: 100.00%

Abstract:

This paper presents a convex geometry (CG)-based method for blind separation of nonnegative sources. First, the inaccessible source matrix is normalized to be column-sum-to-one by mapping the available observation matrix. Then, its zero-samples are found by searching the facets of the convex hull spanned by the mapped observations. Given these zero-samples, a quadratic cost function with respect to each row of the unmixing matrix, together with a linear constraint on the involved variables, is proposed. Building on this, an algorithm is presented that estimates the unmixing matrix by solving a classical convex optimization problem. Unlike traditional blind source separation (BSS) methods, the CG-based method requires neither the independence assumption nor the uncorrelatedness assumption. Compared with BSS methods specifically designed for nonnegative sources, the proposed method requires a weaker sparsity condition. Simulation results illustrate the performance of our method.
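
A minimal sketch of the geometric preprocessing step described above, assuming strictly positive column sums (so the simplex mapping is well defined) and enough samples in general position for the hull to exist; the facet tolerance `tol` is a hypothetical choice.

    import numpy as np
    from scipy.spatial import ConvexHull

    # Sketch: map observations onto the probability simplex, then collect the
    # samples lying on facets of their convex hull as candidate zero-samples.

    def candidate_zero_samples(X, tol=1e-9):
        S = X / X.sum(axis=0)            # column-sum-to-one normalization
        P = S[:-1].T                     # drop one redundant coordinate
        hull = ConvexHull(P)
        on_facet = set()
        for eq in hull.equations:        # each row: facet normal a, offset b
            a, b = eq[:-1], eq[-1]
            dist = P @ a + b             # signed distance of samples to facet
            on_facet |= set(np.where(np.abs(dist) < tol)[0].tolist())
        return sorted(on_facet)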

Relevance: 100.00%

Abstract:

Certain tasks in image processing require the preservation of fine image details while applying a broad operation to the image, such as image reduction, filtering, or smoothing. In such cases, the objects of interest are typically represented by small, spatially cohesive clusters of pixels which are to be preserved or removed, depending on the requirements. When images are corrupted by noise or contain intensity variations generated by imaging sensors, identifying these clusters within the intensity space is problematic, as they are corrupted by outliers. This paper presents a novel approach to accounting for the spatial organization of the pixels and to measuring the compactness of pixel clusters, based on the construction of fuzzy measures with specific properties: monotonicity with respect to cluster size; invariance with respect to translation, reflection, and rotation; and discrimination between pixel sets of fixed cardinality with different spatial arrangements. We present construction methods based on Sugeno-type fuzzy measures, minimum spanning trees, and fuzzy measure decomposition, and we demonstrate their application on real and artificial images.
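
Of the constructions named above, the minimum-spanning-tree one is the easiest to sketch: score a pixel set by the total length of its MST, so spatially cohesive clusters come out as more compact. The normalization below is a hypothetical choice, not the paper's exact fuzzy-measure definition.

    import numpy as np
    from scipy.sparse.csgraph import minimum_spanning_tree
    from scipy.spatial.distance import pdist, squareform

    # Compactness of a pixel set via its minimum spanning tree: a shorter
    # tree over the same number of pixels means a more cohesive cluster.

    def mst_compactness(pixels):
        """pixels: (n, 2) array of (row, col) coordinates."""
        if len(pixels) < 2:
            return 1.0
        dist = squareform(pdist(np.asarray(pixels, dtype=float)))
        mst_len = minimum_spanning_tree(dist).sum()
        return (len(pixels) - 1) / mst_len   # higher => more compact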

Relevance: 100.00%

Abstract:

The Internet has provided an increasingly popular platform for individuals to voice their thoughts and for like-minded people to share stories. This unintentionally leaves behind characteristics of individuals and communities that are often difficult to collect in traditional studies. Individuals with autism are one such case: the Internet could facilitate even more communication for them, given that its social-spatial distance matches a characteristic preference of individuals with autism. Previous studies examined the traces left in the posts of online autism communities (Autism) in comparison with other online communities (Control). This work further investigates these online populations through the contents of not only their posts but also their comments. We first compare the Autism and Control blogs based on three features: topics, language styles, and affective information. The autism groups are then further examined, based on the same three features, by looking at their personal (Personal) and community (Community) blogs separately. Machine learning and statistical methods are used to discriminate blog contents in both cases. All three features are found to differ significantly between Autism and Control, and between autism Personal and Community. These features also show good indicative power in predicting autism blogs in both personal and community settings.
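
As a rough illustration of the discrimination step, the sketch below classifies blog texts with a simple bag-of-words pipeline. `posts` and `labels` are hypothetical inputs (raw blog texts and Autism/Control group labels), and the study's actual feature sets (topics, language styles, affective information) and classifiers are richer than this.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    # Illustrative discrimination of Autism vs Control blog contents using
    # only lexical features; a stand-in for the study's feature pipeline.

    def autism_vs_control_score(posts, labels):
        clf = make_pipeline(TfidfVectorizer(min_df=2),
                            LogisticRegression(max_iter=1000))
        return cross_val_score(clf, posts, labels, cv=5).mean()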

Relevance: 100.00%

Abstract:

The support vector machine (SVM) is a popular method for classification, well known for finding the maximum-margin hyperplane. Combining the SVM with an l1-norm penalty further enables it to perform feature selection and margin maximization simultaneously within a single framework. However, the l1-norm SVM is unstable in selecting features in the presence of correlated features. We propose a new method that increases the stability of the l1-norm SVM by encouraging similarities between feature weights based on feature correlations, which are captured via a feature covariance matrix. Our proposed method can capture both positive and negative correlations between features. We formulate the model as a convex optimization problem and propose a solution based on alternating minimization. Using both synthetic and real-world datasets, we show that our model achieves better stability and classification accuracy than several state-of-the-art regularized classification methods.
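
One plausible reading of the correlation-based penalty can be written directly as a convex program (using cvxpy for brevity). The pairwise term below pulls the weights of positively correlated features together and pushes those of negatively correlated features apart; this exact penalty form and the regularization constants are assumptions for illustration, not the paper's precise formulation.

    import numpy as np
    import cvxpy as cp

    def fit_stable_l1_svm(X, y, lam1=0.1, lam2=0.1):
        # X: (n, d) features; y: (n,) labels in {-1, +1}.
        n, d = X.shape
        C = np.corrcoef(X, rowvar=False)   # feature correlation matrix
        w, b = cp.Variable(d), cp.Variable()
        hinge = cp.sum(cp.pos(1 - cp.multiply(y, X @ w + b))) / n
        # Sign-adjusted similarity penalty: convex (sum of squared affines).
        pair = sum(abs(C[i, j]) * cp.square(w[i] - np.sign(C[i, j]) * w[j])
                   for i in range(d) for j in range(i + 1, d))
        cp.Problem(cp.Minimize(hinge + lam1 * cp.norm1(w) + lam2 * pair)).solve()
        return w.value, b.value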

Relevance: 100.00%

Abstract:

Cancer remains a major challenge in modern medicine. The increasing prevalence of cancer, particularly in developing countries, demands a better understanding of the effectiveness and adverse consequences of different cancer treatment regimes in real patient populations. Current understanding of cancer treatment toxicities is often derived from either "clean" patient cohorts or coarse population statistics, and it is difficult to obtain up-to-date, local assessments of treatment toxicities for specific cancer centres. In this paper, we apply an Apriori-based method for discovering toxicity progression patterns in the form of temporal association rules. Our experiments show the effectiveness of the proposed method in discovering major toxicity patterns in comparison with pairwise association analysis. Our method is applicable to most cancer centres, even those with only rudimentary electronic medical records, and has the potential to provide real-time surveillance and quality assurance in cancer care.
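
The first mining level is easy to sketch: count, per patient, rules of the form "toxicity A is followed later by toxicity B", and keep those above a support threshold. A full Apriori-style miner would extend frequent patterns level by level; the threshold and timeline format below are hypothetical.

    from collections import Counter

    # Mine first-level temporal rules "A -> later B" from patient timelines;
    # support = fraction of patients whose timeline exhibits the rule.

    def temporal_pairs(timelines, min_support=0.1):
        """timelines: one list of (time, toxicity) events per patient."""
        counts = Counter()
        for events in timelines:
            pairs, seen = set(), set()
            for _, tox in sorted(events):
                pairs |= {(e, tox) for e in seen if e != tox}
                seen.add(tox)
            counts.update(pairs)          # each rule counted once per patient
        n = len(timelines)
        return {r: c / n for r, c in counts.items() if c / n >= min_support}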

Relevance: 100.00%

Abstract:

Visual notations are a key aspect of visual languages. They provide a direct mapping between the intended information and a set of graphical symbols. Visual notations are most often implemented using the low-level syntax of programming languages, which is time consuming, error prone, difficult to maintain, and hardly human-centric. In this paper we describe an alternative approach to generating visual notations using by-example model transformations. In our new approach, a semantic mapping between model and view is implemented using model transformations. The notations resulting from this approach can be reused by mapping varieties of input data to their models, and can be composed into different visualizations. Our approach is implemented in the CONVErT framework and has been applied to many visualization examples. Three case studies are presented in this paper: visualizing statistical charts, visualizing traffic data, and reusing the components of a Minard's map visualization. A detailed user study of our approach for reusing notations and generating visualizations is also reported: 80% of the participants agreed that the approach was easy to use, and 87% stated that they quickly learned to use the tool support.
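
The core idea of a notation as a model-to-view mapping can be shown in a few lines. The sketch below renders a toy chart model as SVG; it is a generic illustration of the mapping concept only, not the CONVErT framework's actual transformation syntax.

    # Generic sketch of a model-to-view transformation: each model element is
    # mapped to a graphical symbol (an SVG bar). All sizes are illustrative.

    def bar_chart_view(model, bar_width=30, gap=10, scale=2, height=200):
        # model: list of {'label': str, 'value': number} elements
        bars = []
        for i, el in enumerate(model):
            h = el['value'] * scale
            x = i * (bar_width + gap)
            bars.append(f'<rect x="{x}" y="{height - h}" width="{bar_width}" '
                        f'height="{h}"><title>{el["label"]}</title></rect>')
        return f'<svg width="400" height="{height}">' + ''.join(bars) + '</svg>'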

Relevance: 100.00%

Abstract:

Requirements validation is a crucial process for determining whether client-stakeholders' needs and expectations of a product are sufficiently correct and complete. Various requirements validation techniques have been used to evaluate the correctness and quality of requirements, but most of these techniques are tedious, expensive, and time consuming. Accordingly, most project members are reluctant to invest their time and effort in the requirements validation process. Moreover, automated tool support that promotes effective collaboration between the client-stakeholders and the engineers is still lacking. In this paper, we describe a novel approach that combines prototyping and test-based requirements techniques to improve the requirements validation process and to promote better communication and collaboration between requirements engineers and client-stakeholders. To demonstrate the potential of the prototype tool, we also present three types of evaluation conducted on it: a usability survey, a three-tool comparison analysis, and expert reviews.

Relevance: 100.00%

Abstract:

Dynamically changing backgrounds (dynamic backgrounds) still present a great challenge to many motion-based video surveillance systems. In the context of event detection, they are a major source of false alarms. There is a strong need from the security industry either to detect and suppress these false alarms, or to dampen the effects of background changes, so as to increase the sensitivity to meaningful events of interest. In this paper, we restrict our focus to one of the most common causes of dynamic background change: swaying tree branches and their shadows under windy conditions. Considering the ultimate goal of a video analytics pipeline, we formulate a new dynamic background detection problem as a signal processing alternative to the previously described but unreliable computer-vision-based approaches. Within this new framework, we directly reduce the number of false alarms by testing whether the detected events are due to characteristic background motions. In addition, we introduce a new data set suitable for the evaluation of dynamic background detection; it consists of real-world events detected by a commercial surveillance system from two static surveillance cameras. The research question we address is whether dynamic background can be detected reliably and efficiently using simple motion features, even in the presence of similar but meaningful events such as loitering. Inspired by tree aerodynamics theory, we propose a novel method named local variation persistence (LVP), which captures the key characteristics of swaying motions. The method is posed as a convex optimization problem whose variable is the local variation. We derive a computationally efficient algorithm for solving the optimization problem, and its solution is then used to form a powerful detection statistic. On our newly collected data set, we demonstrate that the proposed LVP achieves excellent detection results and outperforms the best alternative adapted from existing work in the dynamic background literature.
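
A simple stand-in for the intuition (not the paper's convex formulation): swaying branches produce motion energy that oscillates persistently around a stable level, while meaningful events such as loitering produce sustained drifts. The window size and the ratio statistic below are illustrative assumptions.

    import numpy as np

    # Proxy statistic: high values indicate persistent oscillation (likely
    # swaying background), low values indicate drift (likely a real event).

    def variation_persistence(motion_energy, win=25):
        x = np.asarray(motion_energy, dtype=float)
        local_var = np.abs(np.diff(x))                      # local variation
        kernel = np.ones(win) / win
        persist = np.convolve(local_var, kernel, mode="valid")
        drift = np.abs(np.convolve(np.diff(x), kernel, mode="valid"))
        return persist / (drift + 1e-9)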

Relevance: 100.00%

Abstract:

Despite several years of research, the type reduction (TR) operation in interval type-2 fuzzy logic systems (IT2FLS) cannot be performed as fast as type-1 defuzzification. In particular, the widely used Karnik-Mendel (KM) TR algorithm is computationally much more demanding than alternative TR approaches. In this work, a data-driven framework is proposed to quickly, yet accurately, estimate the output of the KM TR algorithm using simple regression models. Comprehensive simulations performed in this study show that the centroid end-points of the KM algorithm can be approximated with a mean absolute percentage error as low as 0.4%, and that switch-point prediction accuracy can be as high as 100%. Since the simple regression models can be trained on data generated by the exhaustive defuzzification method, this work demonstrates the potential of the proposed approach to provide a highly accurate, yet extremely fast, TR approximation. In speed, the proposed method should theoretically outperform all available TR methods while keeping the uncertainty information intact in the process.
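
A minimal sketch of the data-driven idea, with a brute-force switch-point enumeration standing in for the exact KM computation and a plain linear regression as the estimator; the domain discretization, sample count, and feature encoding are hypothetical choices.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    def left_endpoint(z, lo, hi):
        # Enumerate all switch points k: upper memberships left of k, lower
        # to the right; the minimum over k is the exact left centroid.
        best = np.inf
        for k in range(len(z) + 1):
            w = np.r_[hi[:k], lo[k:]]
            best = min(best, (w @ z) / w.sum())
        return best

    rng = np.random.default_rng(0)
    z = np.linspace(0, 1, 16)                       # output domain samples
    L = rng.uniform(0.0, 0.5, (2000, 16))           # lower memberships
    U = L + rng.uniform(0.0, 0.5, (2000, 16))       # upper memberships
    y = np.array([left_endpoint(z, lo, hi) for lo, hi in zip(L, U)])
    model = LinearRegression().fit(np.hstack([L, U]), y)  # fast KM surrogate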

Relevance: 100.00%

Abstract:

Prognosis, such as predicting mortality, is common in medicine. When confronted with small numbers of samples, as in rare medical conditions, the task is challenging. We propose a framework for classification from data with small numbers of samples. Conceptually, our solution is a hybrid of multi-task and transfer learning: it employs data samples from source tasks as in transfer learning, but considers all tasks together as in multi-task learning. Each task is modelled jointly with other related tasks by directly augmenting its data with the data from those tasks, where the degree of augmentation depends on the task relatedness and is estimated directly from the data. We apply the model to three diverse real-world data sets (healthcare data, handwritten digit data, and face data) and show that our method outperforms several state-of-the-art multi-task learning baselines. We also extend the model to online multi-task learning, where the model parameters are incrementally updated given new data or new tasks. The novelty of our method lies in offering a hybrid multi-task/transfer learning model that exploits sharing across tasks at the data level together with joint parameter learning.
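
A minimal sketch of the data-level sharing idea: train a target task on its own samples plus samples borrowed from a source task, down-weighted by an estimated relatedness. Estimating relatedness by cross-task model accuracy is an assumption for illustration, not the paper's estimator.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def augmented_fit(Xt, yt, Xs, ys):
        # Proxy for task relatedness: how well a source-task model transfers.
        src = LogisticRegression(max_iter=1000).fit(Xs, ys)
        rel = src.score(Xt, yt)
        X = np.vstack([Xt, Xs])
        y = np.concatenate([yt, ys])
        # Target samples at full weight; borrowed samples scaled by relatedness.
        w = np.concatenate([np.ones(len(yt)), np.full(len(ys), rel)])
        return LogisticRegression(max_iter=1000).fit(X, y, sample_weight=w)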

Relevance: 100.00%

Abstract:

A methodology for selecting the individual numerical scale and prioritization method in the analytic hierarchy process (AHP) has recently been presented and justified. In this study, we further propose a novel AHP-group decision making (GDM) model in a local context (a single criterion), based on the individual selection of the numerical scale and prioritization method. The resolution framework of the AHP-GDM with individual numerical scales and prioritization methods is first proposed. Then, based on the linguistic Euclidean distance (LED) and linguistic minimum violations (LMV), a novel consensus measure is defined so that the consensus degree among decision makers who use different numerical scales and prioritization methods can be analyzed. Next, a consensus reaching model is proposed to help decision makers improve the consensus degree; in this model, LED-based and LMV-based consensus rules are proposed and used. Finally, a new individual consistency index and its properties are proposed for the use of the individual numerical scale and prioritization method in the AHP-GDM. Simulation experiments and numerical examples demonstrate the validity of the proposed model.
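
A simplified numeric stand-in for the consensus-measurement step (the paper's LED and LMV measures act on linguistic scales): reduce each decision maker's pairwise-comparison matrix to a priority vector, here with the geometric-mean prioritization method, and take consensus as one minus the average pairwise distance between those vectors.

    import numpy as np

    def priorities(A):
        # Geometric-mean prioritization of a pairwise-comparison matrix A.
        g = np.prod(A, axis=1) ** (1.0 / len(A))
        return g / g.sum()

    def consensus_degree(matrices):
        P = np.array([priorities(A) for A in matrices])
        k = len(P)
        d = [np.linalg.norm(P[i] - P[j])
             for i in range(k) for j in range(i + 1, k)]
        return 1.0 - 2.0 * sum(d) / (k * (k - 1))   # 1 = full agreement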

Relevance: 100.00%

Abstract:

Malware is pervasive in networks and poses a critical threat to network security. However, we have very limited understanding of malware behavior in networks to date. In this paper, we investigate how malware propagates in networks from a global perspective. We formulate the problem and establish a rigorous two-layer epidemic model for malware propagation from network to network. Based on the proposed model, our analysis indicates that the distribution of a given malware follows an exponential distribution at its early stage, a power-law distribution with a short exponential tail at its late stage, and a power-law distribution at its final stage. Extensive experiments performed on two real-world, global-scale malware data sets confirm our theoretical findings.
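
A toy simulation of the two-layer view: malware grows within a network and occasionally jumps to another network. All rates and the random inter-network topology are hypothetical; the point is only to produce per-network infection counts whose distribution can be inspected stage by stage.

    import random

    def simulate(n_nets=200, hosts=100, beta_in=0.05, beta_out=0.002, steps=50):
        infected = [0] * n_nets
        infected[0] = 1                    # single initially infected network
        for _ in range(steps):
            new = list(infected)
            for i, k in enumerate(infected):
                if k:
                    # Layer 1: internal growth, capped at the network size.
                    new[i] = min(hosts, k + sum(
                        random.random() < beta_in * k / hosts
                        for _ in range(hosts - k)))
                    # Layer 2: occasional jump to a random other network.
                    j = random.randrange(n_nets)
                    if random.random() < beta_out * k and new[j] == 0:
                        new[j] = 1
            infected = new
        return infected                    # per-network infection counts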

Relevance: 100.00%

Abstract:

In a group decision making setting, we consider the potential impact an expert can have on the overall ranking by providing a biased assessment of the alternatives that differs substantially from the majority opinion. In the framework of similarity based averaging functions, we show that some alternative approaches to weighting the experts' inputs during the aggregation process can minimize the influence the biased expert is able to exert.
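
A minimal sketch of one such weighting scheme, assuming a Gaussian similarity kernel around a robust centre (the kernel and its width are illustrative choices, not the paper's specific functions): each expert's weight decays with the distance of their score from the group's median, so a single strongly biased assessment is largely neutralised.

    import numpy as np

    def similarity_weighted_mean(scores, width=0.5):
        x = np.asarray(scores, dtype=float)
        centre = np.median(x)                      # robust reference point
        w = np.exp(-((x - centre) / width) ** 2)   # similarity to majority
        return (w * x).sum() / w.sum()

    # One biased expert among five: result ~0.68, versus a plain mean of ~0.58.
    print(similarity_weighted_mean([0.71, 0.68, 0.74, 0.70, 0.05]))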

Relevance: 100.00%

Abstract:

Regression is a cornerstone of statistical analysis. Multilevel regression, on the other hand, receives little research attention, though it is prevalent in economics, biostatistics, and healthcare, to name a few fields. We present a Bayesian nonparametric framework for multilevel regression where individuals, including their observations and outcomes, are organized into groups. Furthermore, our approach exploits additional group-specific context observations: we use a Dirichlet process with a product-space base measure in a nested structure to model the group-level context distribution and the regression distribution, accommodating the multilevel structure of the data. The proposed model simultaneously partitions groups into clusters and performs regression. We provide a collapsed Gibbs sampler for posterior inference, and we perform extensive experiments on econometric panel data and healthcare longitudinal data to demonstrate the effectiveness of the proposed model.
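
A heavily simplified, two-stage analogue of the model: cluster groups by their context vectors with a truncated Dirichlet-process mixture, then fit one regression per cluster. The paper infers both parts jointly with a collapsed Gibbs sampler; this sketch only illustrates the partition-then-regress idea, and sklearn's mixture stands in for the nested DP.

    import numpy as np
    from sklearn.mixture import BayesianGaussianMixture
    from sklearn.linear_model import LinearRegression

    def cluster_and_regress(contexts, group_data, max_clusters=10):
        """contexts: (g, d) group-context matrix; group_data: list of (X, y)."""
        dp = BayesianGaussianMixture(
            n_components=max_clusters,
            weight_concentration_prior_type="dirichlet_process")
        labels = dp.fit_predict(contexts)       # partition of the groups
        models = {}
        for c in np.unique(labels):
            members = np.where(labels == c)[0]
            Xc = np.vstack([group_data[g][0] for g in members])
            yc = np.concatenate([group_data[g][1] for g in members])
            models[c] = LinearRegression().fit(Xc, yc)  # per-cluster regression
        return labels, models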