883 results for pacs: information technology applications
Abstract:
Utilization of renewable energy sources and energy storage systems is increasing as new policies foster change in the energy industries. However, the growth of distributed generation undermines the reliability of power systems. To stabilize them, the virtual power plant (VPP) has emerged as a novel power grid management system. The role of the VPP is to coordinate the participation of diverse distributed energy resources and energy storage systems. This paper defines the core technologies of the VPP, namely demand response and ancillary services, with reference to cases from Korea, America, and Europe. It also suggests ways of applying the VPP to the V2G market in order to restructure the national power industry in Korea.
Abstract:
Much of the bridge stock on major transport links in North America and Europe was constructed in the 1950s and 1960s and has since deteriorated or is carrying loads far in excess of the original design loads. Structural Health Monitoring (SHM) systems can provide valuable information on bridge capacity, but the application of such systems is currently limited by access and bridge type. This paper investigates the use of computer vision systems for SHM. A series of field tests was carried out to test the accuracy of displacement measurements obtained by contactless methods. A video image of each test was processed using a modified version of the optical flow tracking method to track displacement. These results were validated against an established measurement method using linear variable differential transformers (LVDTs). The displacements calculated by the algorithm compared accurately with the validation measurements, agreeing within 2% of the LVDT readings. A number of post-processing methods were then applied in an attempt to reduce this error.
Abstract:
Educational systems worldwide are facing an enormous shift as a result of sociocultural, political, economic, and technological changes. The technologies and practices that have developed over the last decade have been heralded as opportunities to transform both online and traditional education systems. While proponents of these new ideas often postulate that they have the potential to address the educational problems facing both students and institutions, and that they could provide an opportunity to rethink the ways that education is organized and enacted, there is little evidence of emerging technologies and practices in use in online education. Because researchers and practitioners interested in these possibilities often reside in various disciplines and academic departments, the sharing and dissemination of their work across often rigid boundaries is a formidable task. Contributors to Emergence and Innovation in Digital Learning include individuals who are shaping the future of online learning with their innovative applications and investigations of the impact of issues such as openness, analytics, MOOCs, and social media. Building on work first published in Emerging Technologies in Distance Education, the contributors to this collection harness the dispersed knowledge in online education to provide a one-stop locale for work on emergent approaches in the field. Their conclusions will influence the adoption and success of these approaches to education and will enable researchers and practitioners to conceptualize, critique, and enhance their understanding of the foundations and applications of new technologies.
Abstract:
Single-page applications have historically been subject to strong market forces driving fast development and deployment in lieu of quality control and changeable code, which are important factors for maintainability. In this report we develop two functionally equivalent applications using AngularJS and React and compare their maintainability as defined by ISO/IEC 9126. AngularJS and React represent two distinct approaches to web development, with AngularJS being a general framework providing rich base functionality and React a small specialized library for efficient view rendering. The quality comparison was accomplished by calculating the Maintainability Index for each application. Version control analysis was used to determine quality indicators during development and subsequent maintenance, where new functionality was added in two steps. The results show no major differences in maintainability in the initial applications. As more functionality is added, the Maintainability Index decreases faster in the AngularJS application, indicating a steeper increase in complexity compared to the React application. Source code analysis reveals that changes in data flow require significantly larger modifications of the AngularJS application due to its inherent architecture for data flow. We conclude that frameworks are useful when they facilitate development of known requirements but less so when applications and systems grow in size.
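The abstract does not state which Maintainability Index variant the report uses; a common choice is the classic Oman-Hagemeister formula combining Halstead volume, cyclomatic complexity, and lines of code. The sketch below implements that formula with the frequently used 0-100 rescaling (the exact variant and rescaling are assumptions, not taken from the report):

```python
import math

def maintainability_index(halstead_volume, cyclomatic_complexity, loc):
    # Classic Oman-Hagemeister Maintainability Index; higher means more
    # maintainable. The raw value is often clamped and rescaled to 0-100,
    # as done here (Visual Studio convention).
    raw = (171.0
           - 5.2 * math.log(halstead_volume)
           - 0.23 * cyclomatic_complexity
           - 16.2 * math.log(loc))
    return max(0.0, raw * 100.0 / 171.0)
```

Because complexity and size enter negatively, adding functionality that raises cyclomatic complexity or LOC lowers the index, which matches the decline the report observes as the applications grow.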
Abstract:
Digital Image Processing is a rapidly evolving field with growing applications in Science and Engineering. It involves changing the nature of an image in order to either improve its pictorial information for human interpretation or render it more suitable for autonomous machine perception. One of the major areas of image processing for human vision applications is image enhancement. The principal goal of image enhancement is to improve the visual quality of an image, typically by taking advantage of the response of the human visual system. Image enhancement methods are usually carried out in the pixel domain. Transform domain methods can often provide another way to interpret and understand image contents. A suitable transform, thus selected, should have low computational complexity. A sequency-ordered arrangement of unique MRT (Mapped Real Transform) coefficients can give rise to an integer-to-integer transform, named Sequency-based unique MRT (SMRT), suitable for image processing applications. The development of the SMRT from the UMRT (Unique MRT), the forward and inverse SMRT algorithms, and the basis functions are introduced. A few properties of the SMRT are explored and its scope in lossless text compression is presented.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The overwhelming amount and unprecedented speed of publication in the biomedical domain make it difficult for life science researchers to acquire and maintain a broad view of the field and gather all the information relevant to their research. In response to this problem, the BioNLP (Biomedical Natural Language Processing) community of researchers has emerged and strives to assist life science researchers by developing modern natural language processing (NLP), information extraction (IE) and information retrieval (IR) methods that can be applied at large scale to scan the whole publicly available biomedical literature, extract and aggregate the information found within, and automatically normalize the variability of natural language statements. Among the different tasks, biomedical event extraction has recently received much attention within the BioNLP community. Biomedical event extraction is the identification of biological processes and interactions described in biomedical literature, and their representation as a set of recursive event structures. The 2009-2013 series of BioNLP Shared Tasks on Event Extraction has given rise to a number of event extraction systems, several of which have been applied at large scale (the full set of PubMed abstracts and PubMed Central Open Access full-text articles), leading to the creation of massive biomedical event databases, each containing millions of events. Since top-ranking event extraction systems are based on machine learning and are trained on the narrow-domain, carefully selected Shared Task training data, their performance drops when faced with the topically highly varied PubMed and PubMed Central documents. Specifically, false-positive predictions by these systems lead to the generation of incorrect biomolecular events, which are spotted by end-users.
This thesis proposes a novel post-processing approach, utilizing a combination of supervised and unsupervised learning techniques, that can automatically identify and filter out a considerable proportion of incorrect events from large-scale event databases, thus increasing the overall credibility of those databases. The second part of this thesis is dedicated to a system we developed for hypothesis generation from large-scale event databases, which is able to discover novel biomolecular interactions among genes and gene products. We cast the hypothesis generation problem as supervised network topology prediction, i.e., predicting new edges in the network, as well as the types and directions of these edges, utilizing a set of features that can be extracted from large biomedical event networks. Routine machine learning evaluation results, as well as manual evaluation results, suggest that the problem is indeed learnable. This work won the Best Paper Award at the 5th International Symposium on Languages in Biology and Medicine (LBM 2013).
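The thesis's actual feature set and classifier are not given in the abstract. As a toy illustration of the network-topology-prediction framing, the sketch below scores non-adjacent vertex pairs in a small, made-up interaction network with the common-neighbours baseline, a standard starting point for link prediction (the gene names and the choice of baseline are assumptions for illustration only):

```python
from itertools import combinations

def common_neighbour_scores(edges):
    # Treat the event network as an undirected graph and score every
    # non-adjacent vertex pair by its number of shared neighbours --
    # a classic link-prediction baseline. Higher scores suggest more
    # plausible candidate interactions.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    scores = {}
    for u, v in combinations(sorted(adj), 2):
        if v not in adj[u]:
            scores[(u, v)] = len(adj[u] & adj[v])
    return scores
```

In the supervised setting described above, such scores would be one feature among many, and a trained classifier would also predict the type and direction of each hypothesized edge.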
Abstract:
Future pervasive environments will take into consideration not only individual users' interests but also social relationships. In this way, pervasive communities can lead the user to participate beyond traditional pervasive spaces, enabling cooperation among groups and taking into account not only individual interests but also the collective and social context. Social applications in the CSCW (Computer Supported Cooperative Work) field present new challenges and possibilities in the use of social context information for adaptability in pervasive environments. In particular, this research describes the design and development of a context-aware framework for collaborative applications (CAFCA), utilizing users' social context information for proactive adaptations in pervasive environments. In order to validate the proposed framework, an evaluation was conducted with a group of users based on an enterprise scenario. The analysis made it possible to verify the impact of the framework in terms of functionality and efficiency under real-world conditions. The main contribution of this thesis is a context-aware framework to support collaborative applications in pervasive environments. The research focused on providing an innovative socio-technical approach to exploiting collaboration in pervasive communities. Finally, the main results reside in social matching capabilities for session formation, and in the communication and coordination of groupware for collaborative activities.
Abstract:
Recent advances in the massively parallel computational abilities of graphical processing units (GPUs) have increased their use for general purpose computation, as companies look to take advantage of big data processing techniques. This has given rise to the potential for malicious software targeting GPUs, which is of interest to forensic investigators examining the operation of software. The ability to carry out reverse-engineering of software is of great importance within the security and forensics fields, particularly when investigating malicious software or carrying out forensic analysis following a successful security breach. Due to the complexity of the Nvidia CUDA (Compute Unified Device Architecture) framework, it is not clear how best to approach the reverse engineering of a piece of CUDA software. We carry out a review of the different binary output formats which may be encountered from the CUDA compiler, and their implications on reverse engineering. We then demonstrate the process of carrying out disassembly of an example CUDA application, to establish the various techniques available to forensic investigators carrying out black-box disassembly and reverse engineering of CUDA binaries. We show that the Nvidia compiler, using default settings, leaks useful information. Finally, we demonstrate techniques to better protect intellectual property in CUDA algorithm implementations from reverse engineering.
Abstract:
Games of cops and robbers have been studied for some thirty years in computer science and mathematics. As in pursuit games generally, pursuers (the cops) seek to capture evaders (the robbers); here, however, the players move in turns and are constrained to a discrete structure. It is always assumed that the players know the exact positions of their opponents; in other words, the game is played with perfect information. The first definition of a cops-and-robbers game goes back to Nowakowski and Winkler [39] and, independently, Quilliot [46]. This first definition presents a game opposing a single cop and a single robber, with constraints on their movement speeds. Extensions were gradually proposed, such as adding cops and increasing movement speeds. In 2014, Bonato and MacGillivray [6] proposed a generalization of cops-and-robbers games that allows them to be studied as a whole. However, their model does not cover games with stochastic components, such as those in which the robbers can move randomly. This thesis therefore presents a new model that includes stochastic aspects. Secondly, this thesis presents a concrete application of these games in the form of a method for solving a problem from search theory. While cops-and-robbers games assume perfect information, search problems cannot make that assumption. It turns out, however, that the cops-and-robbers game can be analysed as a constraint relaxation of a search problem. This new point of view is exploited to design an upper bound on the objective function of a search problem, which can be used in a branch-and-bound method.
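For the classic single-cop, single-robber game of Nowakowski-Winkler and Quilliot, a well-known characterization states that a graph is cop-win exactly when it is dismantlable: one can repeatedly delete a "corner" vertex whose closed neighbourhood is contained in another vertex's, until one vertex remains. The sketch below implements that standard check (this is the classic deterministic result, not the thesis's stochastic model):

```python
def is_cop_win(adj):
    # adj maps each vertex to its set of neighbours.
    # A finite graph is cop-win iff it is dismantlable: repeatedly remove
    # a corner u with N[u] contained in N[v] for some other vertex v.
    g = {u: set(vs) | {u} for u, vs in adj.items()}  # closed neighbourhoods
    while len(g) > 1:
        corner = next((u for u in g for v in g
                       if u != v and g[u] <= g[v]), None)
        if corner is None:
            return False  # no corner left: not dismantlable
        del g[corner]
        for vs in g.values():
            vs.discard(corner)
    return True
```

For example, every path is cop-win (endpoints are corners), while a 4-cycle has no corner at all, so the single cop can never trap the robber.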
Abstract:
This book presents research in the field of Geophysics, particularly referring to principles, applications and emerging technologies.
Table of Contents:
Preface
Environmental Geophysics: Techniques, Advantages and Limitations (Pantelis Soupios and Eleni Kokinou, Department of Environmental and Natural Resources Engineering, Technological Educational Institute of Crete; Dynamics of the Ocean Floor, Helmholtz Centre for Ocean Research Kiel, Geomar)
Application of Innovative Geophysical Techniques in Coastal Areas (V. Di Fiore, M. Punzo, D. Tarallo, and G. Cavuoto, Institute for Marine Coastal Environment, National Research Council, Naples)
Marine Geophysics of the Naples Bay (Southern Tyrrhenian Sea, Italy): Principles, Applications and Emerging Technologies (Gemma Aiello and Ennio Marsella, Institute for Marine Coastal Environment, National Research Council, Naples)
Oceanic Oscillation Phenomena: Relation to Synchronization and Stochastic Resonance (Shinya Shimokawa and Tomonori Matsuura, National Research Institute for Earth Science and Disaster Prevention; Univ. of Toyama)
Assessment of Ocean Variability in the Sicily Channel from a Numerical Three-Dimensional Model Using EOFs Decomposition (R. Sorgente, A. Olita, A.F. Drago, A. Ribotti, L. Fazioli, and C. Tedesco, Institute for Marine Coastal Environment, National Research Council, Oristano)
Monitoring Test of Crack Opening in Volcanic Tuff (Coroglio Cliff, Italy) Using Distributed Optical Fiber Sensor (A. Minardo, A. Coscetta, M. Caccavale, G. Esposito, F. Matano, M. Sacchi, R. Somma, G. Zeni, and L. Zeni, Department of Industrial and Information Eng., Second University of Naples, Aversa; Institute for Marine Coastal Environment, National Research Council, Naples; National Institute for Geophysics and Volcanology, Osservatorio Vesuviano, Naples; Institute for Electromagnetic Sensing of the Environment, National Research Council, Naples)