905 results for Eigensystem realization algorithms
Abstract:
With the emerging prevalence of smartphones and 4G LTE networks, the demand for faster, better, cheaper mobile services anytime and anywhere is ever growing. The Dynamic Network Optimization (DNO) concept emerged as a solution that optimally and continuously tunes network settings in response to varying network conditions and subscriber needs. Yet the realization of DNO is still in its infancy, largely hindered by the bottleneck of lengthy optimization runtimes. This paper presents the design and prototype of a novel cloud-based parallel solution that further enhances the scalability of our prior work on parallel solutions for accelerating network optimization algorithms. The solution aims to satisfy the high performance required by DNO, initially on a sub-hourly basis. The paper then presents a design and a full operating cycle of a DNO system, and proposes a set of potential solutions for large-network and real-time DNO. Overall, this work marks a breakthrough towards the realization of DNO.
Abstract:
Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years. The Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by the significant bottleneck of lengthy optimization runtimes. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented, and applied to real-life projects, yielding significant, highly scalable, and nearly linear speedups of up to 6.9 and 14.5 on distributed 8-core and 16-core systems respectively. Meanwhile, optimization outputs exhibit self-consistency and high precision compared to their sequential counterparts. This is a milestone in realizing DNO, and the techniques may be applied to other applications based on similar greedy optimization algorithms.
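As an aside on what "nearly linear" means here, the parallel fraction implied by the reported speedups can be back-calculated with Amdahl's law. This is purely an illustration of the reported numbers; the paper does not state that its scaling follows this model.

```python
# Illustrative only: estimate the parallelizable fraction implied by the
# reported speedups (6.9 on 8 cores, 14.5 on 16 cores) using Amdahl's law.

def amdahl_speedup(f, n):
    """Predicted speedup for parallel fraction f on n cores."""
    return 1.0 / ((1.0 - f) + f / n)

def parallel_fraction(speedup, n):
    """Invert Amdahl's law: the fraction f consistent with an observed speedup."""
    return (1.0 / speedup - 1.0) / (1.0 / n - 1.0)

f8 = parallel_fraction(6.9, 8)     # ~0.977
f16 = parallel_fraction(14.5, 16)  # ~0.993
print(f"8-core:  f = {f8:.3f}")
print(f"16-core: f = {f16:.3f}")
```

Both figures imply that well over 97% of the runtime is parallelized, consistent with the "nearly linear" characterization.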
Abstract:
There has been increasing interest in the development of new methods that use Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the question becomes how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. The standard tests used for this purpose can consider neither multiple performance measures jointly nor multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that account for multiple competing measures at the same time and compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend both by discovering conditional independences among measures, which reduces the number of model parameters, since the number of studied cases in such comparisons is usually small. Data from a comparison among general-purpose classifiers is used to show a practical application of our tests.
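To make the multinomial-Dirichlet idea concrete, here is a minimal sketch of the conjugate update and a Monte Carlo posterior comparison. The win counts and the uniform prior are hypothetical; the paper's full model (with discovered conditional independences among measures) is considerably more elaborate.

```python
# Minimal sketch of a multinomial-Dirichlet conjugate model: posterior is
# Dirichlet(alpha + counts).  Counts and prior are hypothetical.
import random

random.seed(0)

# Hypothetical outcomes over data sets: on each data set, one of three
# competing algorithms wins on the joint criteria.
counts = {"A": 11, "B": 6, "C": 3}
alpha = 1.0  # symmetric Dirichlet prior

def sample_dirichlet(params):
    """Draw one sample from Dirichlet(params) via normalized Gamma draws."""
    draws = [random.gammavariate(a, 1.0) for a in params]
    total = sum(draws)
    return [d / total for d in draws]

# Estimate P(A has the highest win probability) by Monte Carlo.
names = list(counts)
post = [alpha + counts[k] for k in names]
n_samples = 20000
wins_a = 0
for _ in range(n_samples):
    theta = sample_dirichlet(post)
    if max(range(len(names)), key=lambda i: theta[i]) == 0:
        wins_a += 1
print(f"P(A best | data) = {wins_a / n_samples:.2f}")
```

The conjugacy is what keeps the procedure cheap: no numerical integration is needed, only sampling from the closed-form posterior.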
Abstract:
Background
It is generally acknowledged that a functional understanding of a biological system can only be obtained through an understanding of the collective of molecular interactions in the form of biological networks. Protein networks are a network type of special importance, because proteins form the functional base units of every biological cell. On the mesoscopic level of protein networks, modules are of particular importance: these building blocks may be the next elementary functional level above individual proteins, allowing insight into fundamental organizational principles of biological cells.
Results
In this paper, we provide a comparative analysis of five popular and four novel module detection algorithms. We study these module prediction methods on simulated benchmark networks as well as 10 biological protein interaction networks (PINs). A particular focus of our analysis is the biological meaning of the predicted modules, assessed using the Gene Ontology (GO) database as a gold standard for the definition of biological processes. Furthermore, we investigate the robustness of the results by perturbing the PINs, thereby simulating our incomplete knowledge of protein networks.
Conclusions
Overall, our study reveals that there is large heterogeneity among the different module prediction algorithms once one zooms in on the level of biological processes in the form of GO terms, and that all methods are severely affected by even a slight perturbation of the networks. However, we also find pathways that are enriched in multiple modules, which could provide important information about the hierarchical organization of the system.
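The kind of GO-term enrichment used to judge a module's biological meaning is typically a hypergeometric over-representation test. The sketch below illustrates it with hypothetical counts; a real analysis would use the GO annotations of the studied PIN.

```python
# Hypergeometric over-representation test for a GO term in a predicted
# module.  All counts below are hypothetical.
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """One-sided p-value that a module of n proteins contains k or more
    proteins annotated with a GO term, when K of all N network proteins
    carry that annotation."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

# Hypothetical module: 8 of its 20 proteins share a GO term that annotates
# 100 of the 2000 proteins in the network (expected count: 1).
p = hypergeom_enrichment_p(N=2000, K=100, n=20, k=8)
print(f"enrichment p = {p:.2e}")
```

In practice the resulting p-values are corrected for multiple testing across all GO terms and modules.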
Abstract:
Purpose: To clarify the most appropriate treatment regimen for congenital nasolacrimal duct obstruction (CNLDO). Methods: A retrospective observational analysis was performed of patients undergoing probing with or without intubation to treat CNLDO in a single institution (Royal Victoria Hospital, Belfast) from 2006 to 2011. Results: Based on the exclusion criteria, 246 eyes of 177 patients (aged 0 to 9.8 years, mean age 2.1 years) were included in this study: 187 eyes (76%) had a successful outcome at first intervention with primary probing, whereas 56 eyes (23%) underwent a secondary intervention. There were no significant differences by gender, age, or obstruction complexity between patients with successful and unsuccessful first interventions. Of the patients requiring a secondary intervention, 16 of 24 eyes (67%) had successful probing, whereas 22 of 24 (92%) had successful intubation. Patients receiving intubation as a secondary procedure were significantly more likely to have a successful outcome (P = .037). Statistical analysis was performed using Fisher's exact test and Barnard's exact test. Conclusions: Primary probing for CNLDO has a high success rate that is not adversely affected by increasing age. This study also indicates that if initial probing is unsuccessful, nasolacrimal intubation rather than repeat probing yields a significantly higher success rate.
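The reported comparison can be re-computed from the abstract's own counts (probing 16/24 vs intubation 22/24 successes). The sketch below runs a one-sided Fisher's exact test with only the standard library; it yields a value close to, but not exactly, the quoted P = .037, which may stem from a different formulation (the abstract also mentions Barnard's exact test).

```python
# One-sided Fisher's exact test on the reported secondary-intervention
# outcomes: probing (16 successes, 8 failures) vs intubation (22, 2).
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(successes in row 1 <= a) under the hypergeometric null for the
    2x2 table [[a, b], [c, d]] with fixed margins."""
    row1, col1, total = a + b, a + c, a + b + c + d
    return sum(
        comb(col1, i) * comb(total - col1, row1 - i)
        for i in range(max(0, row1 - (total - col1)), a + 1)
    ) / comb(total, row1)

p = fisher_one_sided(16, 8, 22, 2)
print(f"one-sided P = {p:.3f}")  # ~0.036
```

The two-sided Fisher p-value on the same table is larger (about 0.07), which supports reading the quoted figure as a one-sided or Barnard-type result.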
Abstract:
Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have developed security technology and explored new methods of detecting cyber-attacks using the DARPA 1998 Intrusion Detection dataset and its modified versions, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the Top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN, and Naïve Bayes. Their performance is compared in terms of accuracy, error rate, and average cost on modified versions of the NSL-KDD train and test datasets, where instances are classified as normal or into one of four cyber-attack categories: DoS, Probing, R2L, and U2R. Additionally, the most important features for detecting cyber-attacks, both overall and per category, are evaluated with Weka's Attribute Evaluator and ranked according to Information Gain. The results show that the classification algorithm with the best performance on the dataset is k-NN. The most important features for detecting cyber-attacks are basic features such as the duration of a network connection in seconds, the protocol used for the connection, the network service used, the normal or error status of the connection, and the number of data bytes sent. For DoS, Probing, and R2L attacks, basic features are the most important and content features the least important; for U2R attacks, by contrast, content features are the most important.
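The information-gain ranking used above measures how much a feature reduces label entropy. A tiny illustration with hypothetical (protocol, label) rows follows; real evaluations, like Weka's, run over the full NSL-KDD feature set.

```python
# Information gain IG(feature) = H(label) - H(label | feature),
# computed on a hypothetical toy dataset of (protocol, label) rows.
from math import log2
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, feature_idx, label_idx=-1):
    labels = [r[label_idx] for r in rows]
    base = entropy(labels)
    by_value = {}
    for r in rows:
        by_value.setdefault(r[feature_idx], []).append(r[label_idx])
    n = len(rows)
    return base - sum(len(v) / n * entropy(v) for v in by_value.values())

data = [
    ("tcp", "normal"), ("tcp", "normal"), ("udp", "dos"),
    ("udp", "dos"), ("icmp", "probe"), ("icmp", "probe"),
]
# Here the protocol perfectly determines the label, so IG equals the full
# label entropy, log2(3).
print(f"IG(protocol) = {information_gain(data, 0):.3f}")
```

Features are then ranked by this score, which is how a "most important features" list per attack category is produced.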
Abstract:
Among various optical sensing schemes, infrared spectroscopy is a powerful tool for detecting and determining the composition of complex organic samples, since the vibrational fingerprints of all biomolecules and organic species lie in this window. The technique is simple, reliable, fast, non-destructive, and cost-effective, but suffers from low sensitivity. The use of metallic nanoparticles in association with a good IR-transparent sensing substrate is one promising way to enhance sensitivity. Chalcogenide glasses are a promising substrate material because of their extended optical transmission window, reaching from the visible to the far infrared up to 20 μm, their high refractive index (usually between 2 and 3), and their high optical nonlinearity, which make them good candidates for IR sensors and ultrafast nonlinear optical devices. These glasses are favorable sensor materials for the infrared spectral range: their high IR transparency allows low optical loss at the wavelengths of the characteristic absorption bands of organic molecules; their high refractive index gives tight confinement of optical energy within a resonator structure; and they offer processibility into thin-film form, chemical compatibility for the adhesion of silver nanoparticles and thin films, and resistance to the chemical environment being sensed. Molecules adsorbed on silver island structures show enhanced IR absorption spectra, and the extent of enhancement is determined by many factors, such as the size, density, and morphology of the silver structures and the optical and dielectric properties of the substrate material.
Abstract:
This thesis describes the development of an industrial feeding machine. The system is to be installed between two industrial machines: the apparatus must pace incoming products and synchronize them with the downstream machine. The machine orders the objects using a series of conveyor belts with adjustable speed. Development was carried out at the Liam laboratory at the request of the company Sitma. Sitma already produced a system of the kind described in this thesis; its goal is therefore to modernize the previous application, since the device that performed product pacing was a Siemens PLC that is no longer commercially available. The thesis covers the study of the application and its modeling in Matlab-Simulink, followed by an implementation, albeit not conclusive, in TwinCAT 3.
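The pacing principle described above, adjusting a belt's speed to close the position error between a product and its target slot on the downstream machine, can be sketched as a simple proportional correction. All parameters (gain, nominal speed, time step) are hypothetical and not taken from the thesis, which models the real system in Matlab-Simulink.

```python
# Illustrative proportional pacing loop: the belt speed is corrected in
# proportion to the position error, driving the gap toward zero.
# Gains, speeds, and time step are hypothetical.

def pace_product(error_m, v_nominal=1.0, kp=2.0, dt=0.01, steps=500):
    """Simulate a proportional speed correction; returns the residual
    position error (metres) after `steps` control cycles."""
    for _ in range(steps):
        v = v_nominal + kp * error_m      # belt speed command
        error_m -= (v - v_nominal) * dt   # relative motion shrinks the gap
    return error_m

residual = pace_product(error_m=0.30)
print(f"residual error = {residual:.4f} m")
```

A real implementation would also handle belt hand-offs, speed and acceleration limits, and product detection from sensors, which is where the PLC runtime (TwinCAT 3 here) comes in.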
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06