970 results for information sciences
Abstract:
This volume is based on a seminar on advanced methods in adaptive control for industrial applications, held in Prague in May 1990, which brought together experts from the UK and Czechoslovakia to suggest solutions to specific current and anticipated problems faced by industry.
Abstract:
In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising a considerably smaller number of parameters than those generated by the conventional PNN algorithm. Three benchmark examples are elaborated, including modeling of the gas furnace process as well as the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
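The error-reduction-ratio (ERR) selection that OLS performs can be made concrete with a short sketch. The following is a minimal, generic illustration of OLS forward selection over a candidate regressor matrix, not the authors' exact PNN/PD construction; `P`, `y` and `n_terms` are assumed inputs, with `P` holding one candidate polynomial term per column.

```python
import numpy as np

def ols_select(P, y, n_terms):
    """Greedy OLS selection: repeatedly pick the column of P with the largest
    error reduction ratio (ERR), orthogonalizing candidates against the
    regressors already chosen (Gram-Schmidt)."""
    selected, basis = [], []
    yy = float(y @ y)
    remaining = list(range(P.shape[1]))
    for _ in range(n_terms):
        best_j, best_err, best_w = None, -1.0, None
        for j in remaining:
            w = P[:, j].astype(float).copy()
            for q in basis:                      # remove components along chosen terms
                w -= (q @ w) / (q @ q) * q
            denom = float(w @ w)
            if denom < 1e-12:                    # candidate is (nearly) dependent, skip
                continue
            err = (float(w @ y) ** 2) / (denom * yy)
            if err > best_err:
                best_j, best_err, best_w = j, err, w
        if best_j is None:                       # no independent candidate left
            break
        selected.append(best_j)
        basis.append(best_w)
        remaining.remove(best_j)
    return selected
```

Applied layer by layer, the selected columns would define a parsimonious partial description, which is the mechanism the abstract credits for the reduced parameter count.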
Abstract:
This paper provides a high-level overview of E-UTRAN interworking and interoperability with existing Third Generation Partnership Project (3GPP) and non-3GPP wireless networks. E-UTRAN access networks (LTE and LTE-A) are currently the latest technologies for 3GPP evolution, specified in Release 8, 9 and beyond. These technologies promise higher throughput and lower latency while also reducing the cost of delivering services to meet subscriber demands. 3GPP offers a direct transition path from current 3GPP UTRAN/GERAN networks to LTE, including seamless handover. Interworking between E-UTRAN and other wireless networks is an option that allows operators to maximize the life of their existing network components before a complete transition to true 4G networks. Network convergence, backward compatibility and interoperability are regarded as the next major challenge in the evolution and integration of mobile wireless communications. In this paper, interworking and interoperability between the E-UTRAN Evolved Packet Core (EPC) architecture and 3GPP, 3GPP2 and IEEE-based networks are clearly explained. How the EPC is designed to deliver multimedia and facilitate interworking is also explained, and the seamless handover needed to perform this interworking efficiently is described briefly. This study shows that interoperability and interworking between existing networks and E-UTRAN are highly recommended as an interim solution before the transition to full 4G. Furthermore, wireless operators have to consider a clear interoperability and interworking plan for their existing networks before deciding to migrate completely to LTE. Interworking not only provides communication between different wireless networks; in many scenarios it also adds technical enhancements to one or both environments.
Abstract:
Knowledge management has become a promising method for supporting clinicians' decisions and improving the quality of medical services in the constantly changing clinical environment. However, current medical knowledge management systems cannot understand users' requirements accurately or realize personalized matching. Therefore, this paper proposes an ontological approach, based on semiotic principles, to personalized medical knowledge matching. In particular, healthcare domain knowledge is conceptualized and an ontology-based user profile is built. Furthermore, the personalized matching mechanism and algorithm are illustrated.
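As a rough illustration of what ontology-based personalized matching can look like (a hypothetical sketch, not the paper's semiotic mechanism), the snippet below expands a weighted user profile along a made-up `is_a` hierarchy and ranks knowledge items by weighted concept overlap; all concept names, weights and the decay factor are illustrative assumptions.

```python
# Hypothetical is_a hierarchy (child concept -> parent concept); placeholder only.
IS_A = {
    "type_2_diabetes": "diabetes",
    "diabetes": "endocrine_disorder",
    "hypertension": "cardiovascular_disorder",
}

def expand_profile(profile, decay=0.5):
    """Propagate profile weights upward through the hierarchy with a decay,
    so that more general concepts also contribute to the match."""
    expanded = dict(profile)
    for concept, weight in profile.items():
        parent, w = IS_A.get(concept), weight * decay
        while parent is not None:
            expanded[parent] = max(expanded.get(parent, 0.0), w)
            parent, w = IS_A.get(parent), w * decay
    return expanded

def match_score(profile, item_concepts):
    """Weighted overlap between the expanded user profile and an item's concepts."""
    expanded = expand_profile(profile)
    return sum(expanded.get(c, 0.0) * w for c, w in item_concepts.items())

def rank_items(profile, items):
    """Return knowledge items sorted by how well they match the user profile."""
    return sorted(items, key=lambda item: match_score(profile, item["concepts"]), reverse=True)
```

A real system would also expand the items' concept annotations and use richer semantic similarity measures; the sketch only shows how an ontology-based profile can drive personalized ranking.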
Abstract:
The extensive use of cloud computing in educational institutes around the world brings unique challenges for universities. Some of these challenges are due to clear differences between European and Middle Eastern universities, differences that stem from the natural variation among people. Cloud computing has created a new way of dealing with software services and hardware infrastructure. Some benefits are gained immediately, for instance allowing students to share their information easily and to discover new experiences of the education system. However, it also introduces challenges, such as security and the configuration of resources in shared environments, which educational institutes cannot escape. Yet some differences occur between universities that use cloud computing as an educational tool or as a form of social connection. This paper discusses some benefits and limitations of using cloud computing, and the major differences in its use at universities in Europe and the Middle East, from the perspectives of social context, security and economics, and personal responsibility.
Abstract:
Environment monitoring applications using Wireless Sensor Networks (WSNs) have received considerable attention in recent years. In much of this research, tasks such as sensor data processing, decision making about environment states and events, and emergency message sending are performed by a remote server. A cross-layer protocol proposed for two different applications, in which the reliability of delivered data, delay and network lifetime need to be considered, has been simulated and the results are presented in this paper. A WSN designed for the proposed applications needs efficient MAC and routing protocols to guarantee the reliability of the data delivered from source nodes to the sink. A cross-layer protocol based on the design given in [1] has been extended and simulated for the proposed applications, with new features such as route discovery algorithms added. Simulation results show that the proposed cross-layer protocol can conserve energy for nodes and provide the required performance in terms of network lifetime, delay and reliability.
Abstract:
Recent advancements in wireless communication technologies and automobiles have enabled the evolution of the Intelligent Transport System (ITS), which addresses various vehicular traffic issues such as traffic congestion, information dissemination and accidents. The Vehicular Ad-hoc Network (VANET), a distinctive class of Mobile Ad-hoc Network (MANET), is an integral component of ITS in which moving vehicles are connected and communicate wirelessly. Wireless communication technologies play a vital role in supporting both Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication in VANETs. This paper surveys some of the key vehicular wireless access technology standards, such as 802.11p, the P1609 protocols, cellular systems, CALM, MBWA, WiMAX, microwave, Bluetooth and ZigBee, which serve as a basis for supporting both safety and non-safety applications. It also analyses and compares these wireless standards using various parameters, such as bandwidth, ease of use, upfront cost, maintenance, accessibility, signal coverage, signal interference and security. Finally, it discusses some of the issues associated with interoperability among those protocols.
Abstract:
This paper proposes a filter-based algorithm for feature selection. The filter is based on partitioning the set of features into clusters. The number of clusters, and consequently the cardinality of the subset of selected features, is automatically estimated from the data. The computational complexity of the proposed algorithm is also investigated. A variant of this filter that considers feature-class correlations is also proposed for classification problems. Empirical results involving ten datasets illustrate the performance of the developed algorithm, which in general obtains competitive results in terms of classification accuracy when compared to state-of-the-art algorithms that find clusters of features. We show that, if computational efficiency is an important issue, the proposed filter may be preferred over its counterparts, thus becoming eligible to join a pool of feature selection algorithms to be used in practice. As an additional contribution of this work, a theoretical framework is used to formally analyze some properties of feature selection methods that rely on finding clusters of features. (C) 2011 Elsevier Inc. All rights reserved.
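The general "cluster the features, keep one per cluster" idea behind such filters can be sketched briefly. This is a generic illustration under assumed choices (absolute Pearson correlation as the similarity, a greedy threshold-based grouping), not the algorithm proposed in the paper; the feature-class variant keeps, in each cluster, the feature most correlated with the class.

```python
import numpy as np

def correlation_cluster_filter(X, y=None, threshold=0.8):
    """Greedy correlation-based feature clustering: a seed feature absorbs all
    remaining features whose absolute correlation with it exceeds `threshold`;
    one representative per cluster is kept."""
    corr = np.abs(np.corrcoef(X, rowvar=False))         # feature-by-feature correlation
    unassigned = list(range(X.shape[1]))
    selected = []
    while unassigned:
        seed = unassigned.pop(0)
        cluster = [seed] + [j for j in unassigned if corr[seed, j] >= threshold]
        unassigned = [j for j in unassigned if j not in cluster]
        if y is not None:
            # Classification variant: keep the member most correlated with the class.
            relevance = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in cluster]
            selected.append(cluster[int(np.argmax(relevance))])
        else:
            selected.append(seed)
    return selected
```

Unlike this sketch, which relies on a fixed correlation threshold, the paper's filter estimates the number of clusters (and hence the number of selected features) automatically from the data.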
Abstract:
An important feature of a database management system (DBMS) is its client/server architecture, in which managing memory shared between the clients and the server is always a tough issue. Similarity queries are especially sensitive to this kind of architecture, since their answer sizes vary widely. Usually, the answer of a similarity query is fully processed and sent in full to the user, who is often interested in just part of it, e.g. a few elements closer to or farther from the query reference. Compelling the DBMS to retrieve the full answer and then ignore most of it is, at the least, a waste of server processing power. Paging splits the answer into several pages that are delivered following client requests. Despite the success of paging for traditional queries, little work has been done to support it in similarity queries. In this work, we present a technique that not only provides paging for similarity range and k-nearest neighbor queries, but also supports two variations of them: the forward similarity query and the backward similarity query, which return elements either increasingly farther from or increasingly closer to the query reference. The reported experiments show that, depending on the proportion of the interesting part over the full answer, both techniques allow answering queries much faster than in the non-paged way. (C) 2010 Elsevier Inc. All rights reserved.
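The idea of paging a similarity answer can be sketched with a toy in-memory example (a generic illustration, not the DBMS-level technique of the paper): distances to the query reference are computed once, and pages are then served either from the closest element outward (forward) or from the farthest element inward (backward). Class and parameter names are assumptions.

```python
import numpy as np

class PagedSimilarityQuery:
    """Serve a similarity answer in pages: 'forward' pages return elements
    increasingly farther from the query reference, 'backward' pages return
    elements increasingly closer to it."""
    def __init__(self, data, query, page_size=10, direction="forward"):
        dists = np.linalg.norm(data - query, axis=1)   # distance to the query reference
        order = np.argsort(dists)                      # closest first
        self.order = order if direction == "forward" else order[::-1]
        self.page_size = page_size
        self.cursor = 0

    def next_page(self):
        """Return the indices of the next page of elements, on client request."""
        page = self.order[self.cursor : self.cursor + self.page_size]
        self.cursor += len(page)
        return page.tolist()
```

A real implementation would produce pages incrementally, e.g. through an incremental traversal of a metric index, instead of materializing and sorting the full answer up front; avoiding that full processing is precisely the saving the paper targets.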
Abstract:
A conceptual problem that appears in different contexts of clustering analysis is that of measuring the degree of compatibility between two sequences of numbers. This problem is usually addressed by means of numerical indexes referred to as sequence correlation indexes. This paper elaborates on why some specific sequence correlation indexes may not be good choices depending on the application scenario at hand. A variant of the Product-Moment correlation coefficient and a weighted formulation for the Goodman-Kruskal and Kendall's indexes are derived that may be more appropriate for some particular application scenarios. The proposed and existing indexes are analyzed from different perspectives, such as their sensitivity to the ranks and magnitudes of the sequences under evaluation, among other relevant aspects of the problem. The results help suggest scenarios within the context of clustering analysis that are possibly more appropriate for the application of each index. (C) 2008 Elsevier Inc. All rights reserved.
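To make the notion of a sequence correlation index concrete, the sketch below computes the Product-Moment (Pearson) coefficient and a Kendall-style pair-counting index that accepts an optional per-pair weight; the weighting hook is only a generic illustration of how a weighted formulation can enter the computation, not the specific indexes derived in the paper.

```python
from itertools import combinations
import math

def product_moment(a, b):
    """Pearson (Product-Moment) correlation between two equally long sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    var_a = sum((x - ma) ** 2 for x in a)
    var_b = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def weighted_kendall_like(a, b, weight=lambda i, j: 1.0):
    """Pair-counting index: concordant pairs add their weight, discordant pairs
    subtract it, ties contribute nothing; the result is normalized by the total
    weight. With unit weights and no ties this reduces to Kendall's tau."""
    numerator = total = 0.0
    for i, j in combinations(range(len(a)), 2):
        w = weight(i, j)
        total += w
        sign = (a[i] - a[j]) * (b[i] - b[j])
        numerator += w if sign > 0 else -w if sign < 0 else 0.0
    return numerator / total
```

The first index is sensitive to the magnitudes of the sequences, while the second depends only on their ranks, which is exactly the kind of difference in sensitivity the paper analyzes.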
Abstract:
Aspect-oriented programming (AOP) is a promising technology that supports the separation of crosscutting concerns (i.e., functionality that tends to be tangled with, and scattered through, the rest of the system). In AOP, a method-like construct named advice is applied to join points in the system through a special construct named pointcut. This mechanism supports the modularization of crosscutting behavior; however, since the added interactions are not explicit in the source code, it is hard to ensure their correctness. To tackle this problem, this paper presents a rigorous coverage analysis approach to ensure that the logic of each advice - statements, branches, and def-use pairs - is exercised at each affected join point. To make this analysis possible, a structural model based on Java bytecode - called the PointCut-based Def-Use Graph (PCDU) - is proposed, along with three integration testing criteria. Theoretical, empirical, and exploratory studies involving 12 aspect-oriented programs and several fault examples present evidence of the feasibility and effectiveness of the proposed approach. (C) 2010 Elsevier Inc. All rights reserved.
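For readers unfamiliar with the advice/pointcut vocabulary, the toy snippet below mimics the mechanism in plain Python as a stand-in for AspectJ-style aspects (the paper itself works at the level of Java bytecode): a pointcut pattern selects join points, here method executions matched by name, and before/after advice is woven around them. The point of the paper's criteria is to ensure that this advice logic is actually exercised at every affected join point.

```python
import fnmatch
import functools

def weave(cls, pointcut, before=None, after=None):
    """Wrap every public method of `cls` whose name matches the `pointcut`
    pattern with before/after advice -- a rough Python analogy for
    AspectJ-style pointcuts and advice."""
    for name in dir(cls):
        method = getattr(cls, name)
        if callable(method) and not name.startswith("_") and fnmatch.fnmatch(name, pointcut):
            def make_wrapper(m):
                @functools.wraps(m)
                def wrapped(self, *args, **kwargs):
                    if before:
                        before(m.__name__, args)       # advice before the join point
                    result = m(self, *args, **kwargs)  # proceed with the join point
                    if after:
                        after(m.__name__, result)      # advice after the join point
                    return result
                return wrapped
            setattr(cls, name, make_wrapper(method))

class Account:
    def __init__(self, balance=0.0):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount
    def withdraw(self, amount):
        self.balance -= amount

# Weave a logging "aspect" into every public Account method.
weave(Account, "*", before=lambda name, args: print(f"entering {name}{args}"))
```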
Abstract:
Model trees are a particular case of decision trees employed to solve regression problems. They have the advantage of presenting an interpretable output, helping the end user gain more confidence in the prediction and providing the basis for new insights about the data, confirming or rejecting hypotheses previously formed. Moreover, model trees present an acceptable level of predictive performance in comparison to most techniques used for solving regression problems. Since generating the optimal model tree is an NP-complete problem, traditional model tree induction algorithms make use of a greedy top-down divide-and-conquer strategy, which may not converge to the globally optimal solution. In this paper, we propose a novel algorithm based on the evolutionary algorithms paradigm as an alternative heuristic to generate model trees, in order to improve convergence to globally near-optimal solutions. We call our new approach evolutionary model tree induction (E-Motion). We test its predictive performance using public UCI data sets, and we compare the results to traditional greedy regression/model tree induction algorithms, as well as to other evolutionary approaches. Results show that our method presents a good trade-off between predictive performance and model comprehensibility, which may be crucial in many machine learning applications. (C) 2010 Elsevier Inc. All rights reserved.
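As a very rough illustration of the evolutionary paradigm applied to tree induction (emphatically not the E-Motion algorithm itself), the sketch below evolves depth-one regression trees encoded as (feature, threshold, leaf means): mutation resamples the split threshold and selection keeps the individuals with the lowest training mean squared error. A real model-tree GA such as the one described would evolve full trees with linear models at the leaves and use crossover and a more careful fitness; every choice here is an illustrative assumption.

```python
import random
import numpy as np

def fit_leaves(feat, thr, X, y):
    """Mean predictions for the two partitions induced by splitting on `feat` at `thr`."""
    mask = X[:, feat] <= thr
    left = y[mask].mean() if mask.any() else y.mean()
    right = y[~mask].mean() if (~mask).any() else y.mean()
    return float(left), float(right)

def random_individual(X, y):
    feat = random.randrange(X.shape[1])
    thr = float(random.choice(X[:, feat]))
    return (feat, thr) + fit_leaves(feat, thr, X, y)

def mutate(ind, X, y):
    feat = ind[0]
    thr = float(random.choice(X[:, feat]))        # resample the split threshold
    return (feat, thr) + fit_leaves(feat, thr, X, y)

def fitness(ind, X, y):
    feat, thr, left, right = ind
    pred = np.where(X[:, feat] <= thr, left, right)
    return -float(np.mean((pred - y) ** 2))       # negative MSE: higher is better

def evolve(X, y, pop_size=30, generations=50):
    population = [random_individual(X, y) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda ind: fitness(ind, X, y), reverse=True)
        elite = population[: pop_size // 2]                        # truncation selection
        population = elite + [mutate(random.choice(elite), X, y) for _ in elite]
    return max(population, key=lambda ind: fitness(ind, X, y))
```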