970 results for automated process discovery
Abstract:
The development of self-adaptive software (SaS) has specific characteristics compared to traditional software, since it allows changes to be incorporated at runtime. Automated processes have been used as a feasible means of conducting software adaptation at runtime. In parallel, reference models have been used to aggregate knowledge and architectural artifacts, since they capture the essence of systems in specific domains. However, there is currently no reference model based on reflection for the development of SaS. The main contribution of this paper is therefore a reference model based on reflection for the development of SaS that needs to adapt at runtime. To demonstrate the applicability of this model, a case study was conducted, and the results indicate good prospects for contributing efficiently to the SaS area.
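The abstract does not detail the reference model itself; as a purely illustrative aside, the following toy Python sketch shows what reflection-driven adaptation at runtime can look like in general. The component, meta-level class and load threshold are hypothetical and are not taken from the paper.

```python
# Toy illustration of reflection-driven adaptation at runtime (hypothetical names,
# not the reference model proposed in the paper): a meta-level object inspects the
# base-level component and rebinds its behaviour when a monitored condition changes.

class Compressor:
    """Base-level component whose behaviour may be adapted."""
    def process(self, data: bytes) -> bytes:
        return data  # default behaviour: pass data through unchanged

def fast_process(self, data: bytes) -> bytes:
    return data[: len(data) // 2]  # stand-in for a cheaper algorithm

class MetaLevel:
    """Meta-level: observes the component and uses reflection to adapt it."""
    def __init__(self, component):
        self.component = component

    def adapt(self, cpu_load: float) -> None:
        # Reflection: inspect and rebind the method on the live class at runtime.
        if cpu_load > 0.8 and getattr(type(self.component), "process") is not fast_process:
            setattr(type(self.component), "process", fast_process)

if __name__ == "__main__":
    comp = Compressor()
    meta = MetaLevel(comp)
    meta.adapt(cpu_load=0.9)          # adaptation triggered by runtime monitoring
    print(comp.process(b"abcdefgh"))  # behaviour has changed without restarting
```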
Abstract:
Mobile network coverage is traditionally provided by outdoor macro base stations, which have a long range and serve many customers. Because of modern passive houses and tightening construction legislation, mobile network service has deteriorated in many indoor locations. Typical solutions to the indoor coverage problem are expensive and require action from the mobile operator, so better solutions are constantly being researched. The solution presented in this thesis is based on Small Cell technology. Small Cells are low-power access nodes designed to provide voice and data services. This thesis concentrates on a specific Small Cell solution called a Pico Cell. The problem with Pico Cells, and Small Cells in general, is that they are a new technological solution for the mobile operator, and the possible problem sources and incidents have not been properly mapped. The purpose of this thesis is to identify the possible problems in Pico Cell deployment and how they could be solved within the operator's incident management process. The research was carried out as a literature review and a case study, and the possible problems were investigated through lab testing. The automated Pico Cell deployment process was tested in a lab environment and its proper functionality was confirmed. The related network elements were also tested and examined, and the problems that emerged are resolvable. The operator's existing incident management process can be used for Pico Cell troubleshooting with minor updates, although certain prerequisites have to be met before Pico Cell deployment can be considered. The main contribution of this thesis is the Pico Cell integrated incident management process. The presented solution works in theory and solves the problems found during lab testing. The limitations at the customer service level were addressed by adding the necessary tools and by designing a working question pattern. Process structures for automated network discovery and Pico-specific radio parameter planning were also added to the mobile network management layer.
Abstract:
The problem of software (SW) defects is becoming more and more topical because of the increasing amount of software and its growing complexity. The majority of these defects are found during the testing phase, which consumes about 40-50% of the development effort. Test automation makes it possible to reduce the cost of this process and to increase testing effectiveness. The first tools for automated testing appeared in the mid-1980s, and automated processes were introduced into different kinds of SW testing. It soon became obvious, however, that automated testing can cause many problems, such as increased product cost, decreased reliability and even project failure. This thesis describes the automated testing process and its concepts, lists the main problems, and gives an algorithm for selecting automated test tools. The work also presents an overview of the main automated test tools for embedded systems.
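The thesis's selection algorithm is not reproduced in the abstract; the sketch below shows one common way such a selection can be automated, weighted-criteria scoring. The criteria, weights and candidate scores are hypothetical examples, not values from the thesis, whose actual algorithm may differ.

```python
# Minimal sketch of a weighted-criteria scoring approach to test tool selection.
# Criteria, weights and scores are hypothetical, not taken from the thesis.

CRITERIA_WEIGHTS = {          # importance of each criterion, summing to 1.0
    "target_platform_support": 0.30,
    "script_maintainability": 0.25,
    "license_cost": 0.20,
    "ci_integration": 0.15,
    "vendor_support": 0.10,
}

def rank_tools(tool_scores: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Return tools ordered by weighted score (per-criterion scores on a 0-10 scale)."""
    ranked = [
        (tool, sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items()))
        for tool, scores in tool_scores.items()
    ]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    candidates = {
        "ToolA": {"target_platform_support": 9, "script_maintainability": 6,
                  "license_cost": 4, "ci_integration": 8, "vendor_support": 7},
        "ToolB": {"target_platform_support": 7, "script_maintainability": 8,
                  "license_cost": 9, "ci_integration": 6, "vendor_support": 5},
    }
    for tool, score in rank_tools(candidates):
        print(f"{tool}: {score:.2f}")
```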
Abstract:
This work explores the attitude of organizations towards the business processes that sustain them: from the near absence of structure, to functional organization, up to the advent of Business Process Reengineering and of Business Process Management, which arose to overcome the limits and problems of the preceding model. Within the BPM life cycle sits the methodology of process mining, which enables a level of process analysis starting from event data logs, i.e. the event records of all the activities supported by a corporate information system. Process mining can be seen as a natural bridge between process-based (but not data-driven) management disciplines and the new developments in business intelligence, which can manage and manipulate the enormous amounts of data available to companies (but which are not process-driven). The thesis describes the requirements and technologies that enable the use of the discipline, as well as the three techniques it enables: process discovery, conformance checking and process enhancement. Process mining was used as the main tool in a consulting project carried out by HSPI S.p.A. on behalf of an important Italian client, a provider of IT platforms and solutions. The project I took part in, described in this work, aims to support the organization in its plan to improve internal performance, and it made it possible to verify the applicability and limits of process mining techniques. Finally, the appendix contains a paper I wrote that collects the applications of the discipline in real business contexts, drawing data and information from working papers, business cases and direct channels. For its validity and completeness, this document has been published on the website of the IEEE Task Force on Process Mining.
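The abstract names the three process mining techniques without fixing any tooling; purely as an illustration, the sketch below shows process discovery and a conformance check on an event log using the open-source pm4py library's simplified top-level API (exact function names vary between versions, and the log file path is a hypothetical placeholder).

```python
# Minimal sketch of process discovery and conformance checking from an event log,
# assuming the open-source pm4py library (simplified API; names vary by version).
# The log path is a hypothetical placeholder.
import pm4py

# Load an XES event log exported from the information system.
log = pm4py.read_xes("purchase_orders.xes")

# Process discovery: learn a Petri net from the observed behaviour.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Conformance checking: replay the log on the discovered model.
fitness = pm4py.fitness_token_based_replay(log, net, initial_marking, final_marking)
print("average trace fitness:", fitness["average_trace_fitness"])
```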
Abstract:
Model-driven software development advocates the use of models as artifacts that participate actively in the development process, with the model placed at the same level as the code. This is an important approach that has received growing attention in recent years. The Object Management Group (OMG) is responsible for one of the main specifications used to define the architecture of systems developed in a model-driven way: the Model Driven Architecture (MDA). The projects that have emerged around modelling and domain-specific languages for the Eclipse platform are a good example of the attention given to these areas. They are projects fully open to the community, which seek to respect the standards and constitute an excellent opportunity to test and put into practice new ideas and approaches. In this dissertation, tools created within the Amalgamation Project, developed for the Eclipse platform, were used. Exploring UML and using the QVT language, an automated process was developed to extract elements of the system architecture from the requirements definition. The requirements are represented by UML models, which are transformed in order to obtain elements for an initial approximation of the system architecture. In the end, a UML model is obtained that aggregates the components, interfaces and data types extracted from the requirements models. It is a model-driven approach that proved to be feasible, capable of delivering practical results and promising with regard to future work.
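The dissertation performs this transformation with QVT over UML models inside Eclipse; the toy Python sketch below only illustrates the underlying model-to-model mapping idea (requirement elements mapped to components and provided interfaces). The element names and mapping rules are hypothetical and do not reproduce the actual QVT transformation.

```python
# Toy sketch of the model-to-model transformation idea (requirements -> architecture).
# The dissertation uses QVT on UML models in Eclipse; this Python version only
# illustrates the mapping concept, with hypothetical rules and names.
from dataclasses import dataclass, field

@dataclass
class UseCase:                 # simplified requirements model element
    name: str
    actor: str

@dataclass
class Component:               # simplified architecture model element
    name: str
    provided_interfaces: list[str] = field(default_factory=list)

def transform(requirements: list[UseCase]) -> list[Component]:
    """Map each actor to a component and each of its use cases to a provided interface."""
    components: dict[str, Component] = {}
    for uc in requirements:
        comp = components.setdefault(uc.actor, Component(name=f"{uc.actor}Manager"))
        comp.provided_interfaces.append(f"I{uc.name.title().replace(' ', '')}")
    return list(components.values())

if __name__ == "__main__":
    reqs = [UseCase("register order", "Sales"), UseCase("cancel order", "Sales"),
            UseCase("ship order", "Warehouse")]
    for c in transform(reqs):
        print(c.name, c.provided_interfaces)
```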
Abstract:
Dissertation submitted to obtain the degree of Master in Computer Engineering (Engenharia Informática).
Abstract:
Pavements require maintenance in order to provide a good level of service throughout their service life. Because of the significant cost of this operation and the importance of proper planning, a pavement evaluation methodology named the Pavement Condition Index (PCI) was created by the U.S. Army Corps of Engineers. This methodology allows the pavement condition to be evaluated throughout the service life, generally yearly, at minimum cost; in this way it is possible to plan maintenance actions and adopt adequate measures, minimising rehabilitation costs. The PCI methodology provides an evaluation based on visual inspection, namely on the distresses observed on the pavement. The condition index is rated from 0 to 100, where 0 is the worst possible condition and 100 the best. This methodology of pavement assessment is a significant tool for management methods such as airport pavement management systems (APMS) and life-cycle cost analysis (LCCA). Nevertheless, it has some limitations which can jeopardize a correct evaluation of pavement behaviour, and the objective of this dissertation is therefore to help reduce those limitations and make the methodology easier and faster to use. An automated process for PCI calculation was developed, avoiding manual consultation of the abaci and consequently minimizing human error. To further facilitate the visual inspection, a tablet application was developed to replace the usual inspection data sheet, making the survey easier to undertake. An airport pavement was then studied according to the methodology described in the Standard Test Method for Airport Pavement Condition Index Surveys (ASTM D5340, 2011), and its original condition level was compared with the condition level obtained after iterating over possibly misjudged distresses as well as possible rehabilitations. Finally, the results were analyzed and the main conclusions are presented together with some future developments.
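For orientation only, the sketch below shows the basic PCI arithmetic on the 0-100 scale: per-distress deduct values are combined into a corrected deduct value (CDV) and PCI = 100 - CDV. The deduct values are hypothetical inputs, and the CDV correction is deliberately simplified; the dissertation's automated process follows the full ASTM D5340 procedure, which is not reproduced here.

```python
# Highly simplified sketch of the PCI idea from ASTM D5340: individual distress
# deduct values are combined into a corrected deduct value (CDV) and PCI = 100 - CDV.
# Deduct values below are hypothetical; the real method reads them from
# distress-specific abaci and applies an iterative CDV correction omitted here.

def pci_from_deducts(deduct_values: list[float]) -> float:
    """Return a PCI estimate on the 0-100 scale from per-distress deduct values."""
    if not deduct_values:
        return 100.0                       # no distresses observed: best condition
    total = sum(deduct_values)
    # Crude stand-in for the CDV correction: cap the combined deduct at 100.
    cdv = min(total, 100.0)
    return round(100.0 - cdv, 1)

if __name__ == "__main__":
    # Hypothetical deduct values for three observed distresses on one sample unit.
    print(pci_from_deducts([12.5, 8.0, 21.3]))   # -> 58.2
```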
Abstract:
Growing competition and ever more globalised markets have forced companies to seek efficiency also in processes such as financial administration, whose efficiency previously received little attention. The first central objective of this study is to increase knowledge of how financial administration can be made more efficient by examining it through the three central elements of the study's theoretical framework: business process improvement, innovations in electronic financial administration, and cross-border outsourcing of financial administration. The second central objective is to deepen this theoretical knowledge and to analyse how well the methods developed for improving financial administration work for the purchase invoice process of a company operating in Finland. The study clearly identified the factors that have contributed to more efficient financial administration and demonstrated their efficiency-improving effect on the purchase invoice process. When analysing the relative efficiency of the different methods, it can reasonably be assumed, from the perspective of a Finnish company, that cross-border outsourcing is only an intermediate stage on the way to a process automated as far as possible: once invoices are received as electronic e-invoices and automatically matched against electronic purchase orders, one may well ask what part of the purchase invoice process could be outsourced at all.
Abstract:
It is rare for data's history to include computational processes alone. Even when software generates data, users ultimately decide to execute software procedures, choose their configuration and inputs, reconfigure, halt and restart processes, and so on. Understanding the provenance of data thus involves understanding the reasoning of users behind these decisions, but demanding that users explicitly document decisions could be intrusive if implemented naively, and impractical in some cases. In this paper, therefore, we explore an approach to transparently deriving the provenance of user decisions at query time. The user reasoning is simulated, and if the result of the simulation matches the documented decision, the simulation is taken to approximate the actual reasoning. The plausibility of this approach requires that the simulation mirror human decision-making, so we adopt an automated process explicitly modelled on human psychology. The provenance of the decision is modelled in OPM, allowing it to be queried as part of a larger provenance graph, and an OPM profile is provided to allow consistent querying of provenance across user decisions.
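To make the simulate-and-compare step concrete, the toy sketch below simulates a user's choice with a simple weighted preference score and records the simulated reasoning as provenance only when the outcome matches the documented decision. The scoring model, options and record format are hypothetical illustrations, not the psychologically grounded model or the OPM profile used in the paper.

```python
# Toy sketch of the simulate-and-compare idea: simulate the user's reasoning for a
# documented decision and, if the outcome matches, keep the simulated reasoning as
# approximate provenance. The scoring model and record format are hypothetical.
from typing import Optional

def simulate_user_choice(options: dict[str, dict[str, float]],
                         preferences: dict[str, float]) -> str:
    """Pick the option maximising a simple weighted preference score."""
    score = lambda attrs: sum(preferences[k] * v for k, v in attrs.items())
    return max(options, key=lambda name: score(options[name]))

def derive_decision_provenance(documented_choice: str,
                               options: dict[str, dict[str, float]],
                               preferences: dict[str, float]) -> Optional[dict]:
    simulated = simulate_user_choice(options, preferences)
    if simulated != documented_choice:
        return None                       # simulation does not explain the decision
    # OPM-style record: the decision (process) used the option attributes (artifacts)
    # and was controlled by the user (agent); shown here as a plain dict.
    return {"process": f"decide:{documented_choice}",
            "used": list(options[documented_choice]),
            "wasControlledBy": "user",
            "explanation": preferences}

if __name__ == "__main__":
    opts = {"rerun_full": {"accuracy": 0.9, "cost": 0.8},
            "rerun_sample": {"accuracy": 0.6, "cost": 0.2}}
    prefs = {"accuracy": 1.0, "cost": -0.3}
    print(derive_decision_provenance("rerun_full", opts, prefs))
```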
Abstract:
In any welding process it is of the utmost importance that welders and those responsible for quality in the area understand the process and the variables involved in it, in order to achieve maximum welding efficiency both in terms of quality and of final cost, without ever forgetting, of course, the process conditions to which the welder or welding operator will be subjected. Therefore, this work sought to understand the variables relevant to the welding process and to develop an EPS (Welding Procedure Specification) in accordance with ASME Section IX for the cored-wire welding process (FCAW, AWS specification) with shielding gas and an automated process, for ASTM A131 base material, 5/16 thick, using a single weld pass, for conditions with pre- and post-heating, and destructive testing for verification and analysis of the resulting weld bead.
Abstract:
The development of Automated Production Systems involves integrating technological components available on the market, such as programmable logic controllers (PLCs), robot manipulators, various sensors and actuators, image processing systems, communication networks and collaborative supervisory systems, all integrated into a single application. This paper proposes an automated platform for experimentation, implemented through a typical architecture for Automated Production Systems that integrates the technological components described above, in order to allow researchers and students to carry out practical laboratory activities. These activities complement the theoretical knowledge acquired by the students in the classroom, thus improving their training and professional skills. A platform designed using this generic structure allows users to work within an educational environment that reflects most aspects found in industrial automated manufacturing systems, such as technology integration, communication networks, process control and production management. In addition, the platform offers the possibility of a completely automated control and supervision process via remote connection through the internet (WebLab), enabling knowledge sharing between different teaching and research groups.
Abstract:
Ontology design and population, core aspects of semantic technologies, have recently become fields of great interest due to the increasing need for domain-specific knowledge bases that can boost the use of the Semantic Web. For building such knowledge resources, the state-of-the-art tools for ontology design require a lot of human work: producing meaningful schemas and populating them with domain-specific data is in fact a very difficult and time-consuming task, even more so if the task consists in modelling knowledge at web scale. The primary aim of this work is to investigate a novel and flexible methodology for automatically learning ontologies from textual data, lightening the human workload required for conceptualizing domain-specific knowledge and populating an extracted schema with real data, and speeding up the whole ontology production process. Here computational linguistics plays a fundamental role, from automatically identifying facts in natural language and extracting frames of relations among recognized entities, to producing linked data with which to extend existing knowledge bases or create new ones. In the state of the art, automatic ontology learning systems are mainly based on plain pipelined linguistic classifiers performing tasks such as Named Entity Recognition, entity resolution, and taxonomy and relation extraction [11]. These approaches present some weaknesses, especially in capturing the structures through which the meaning of complex concepts is expressed [24]. Humans, in fact, tend to organize knowledge in well-defined patterns, which include participant entities and meaningful relations linking entities with each other. In the literature, these structures have been called Semantic Frames by Fillmore [20], or more recently Knowledge Patterns [23]. Some NLP studies have recently shown the possibility of performing more accurate deep parsing with the ability to logically understand the structure of discourse [7]. In this work, some of these technologies have been investigated and employed to produce accurate ontology schemas. The long-term goal is to collect large amounts of semantically structured information from the web of crowds, through an automated process, in order to identify and investigate the cognitive patterns used by humans to organize their knowledge.
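For contrast with the frame-based approach the work advocates, the sketch below shows a plain pipelined step of the kind referenced above: named entity recognition followed by naive subject-verb-object relation extraction, emitted as RDF triples. It assumes the spaCy and rdflib libraries; the model name, example sentence and namespace are assumptions, and this is not the thesis's actual pipeline.

```python
# Minimal sketch of a plain pipelined ontology-population step: NER plus naive
# subject-verb-object relation extraction emitted as RDF triples (spaCy + rdflib).
# Model name, example sentence and namespace are assumptions.
import spacy
from rdflib import Graph, Literal, Namespace, RDF, URIRef

nlp = spacy.load("en_core_web_sm")          # small English pipeline, assumed installed
EX = Namespace("http://example.org/onto/")

def extract_triples(text: str) -> Graph:
    g = Graph()
    doc = nlp(text)
    # Type each recognized entity with its NER label.
    for ent in doc.ents:
        g.add((URIRef(EX[ent.text.replace(" ", "_")]), RDF.type, EX[ent.label_]))
    # Naive relation extraction: (nominal subject, verb lemma, direct object).
    for token in doc:
        if token.dep_ == "nsubj" and token.head.pos_ == "VERB":
            for obj in (c for c in token.head.children if c.dep_ == "dobj"):
                g.add((URIRef(EX[token.text]), EX[token.head.lemma_], Literal(obj.text)))
    return g

if __name__ == "__main__":
    graph = extract_triples("Fiat acquired Chrysler in 2014.")
    print(graph.serialize(format="turtle"))
```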
Abstract:
Computer-assisted translation (also computer-aided translation, or CAT) is a form of language translation in which a human translator uses computer software to facilitate the translation process. Machine translation (MT) is the automated process by which a computerized system produces a translated text or speech from one natural language to another. Both are leading and promising technologies in the translation industry; it therefore seems important that translation students and professional translators become familiar with these relatively new types of technology. When used together, these two different types of systems might not only reduce translation time, but also lead to further improvements in the field of translation technologies. The dissertation consists of four chapters. The first surveys the chronological development of MT and CAT tools, the emergence of pre-editing, post-editing and controlled language, and the latest frontiers in this sector. The second provides a general overview of the four main CAT tools that are used nowadays and are tested here. The third chapter is dedicated to the experiments conducted in order to analyze and evaluate the performance of the four integrated systems that are the core subject of this dissertation. Finally, the fourth chapter deals with the issue of terminological equivalence in interlinguistic translation. The purpose of this dissertation is not to provide an objective and definitive solution to the complex issues that arise at any time in the field of translation technologies, that aim being far from achievable, but to supply information about the limits and potential of the instruments that are now essential to any professional translator.
Abstract:
Ab initio Hartree-Fock (HF), density functional theory (DFT) and hybrid potentials were employed to compute the optimized lattice parameters and elastic properties of perovskite 3d transition metal oxides. The optimized lattice parameters and elastic properties are interdependent in these materials. An interaction is observed between the electronic charge, spin and lattice degrees of freedom in 3d transition metal oxides. The coupling between the electronic charge, spin and lattice structures originates from the localization of the d atomic orbitals, and it also contributes to the ferroelectric and ferromagnetic properties of perovskites. The cubic and tetragonal crystalline structures of perovskite transition metal oxides ABO3 are studied. The electronic structure and physics of 3d perovskite materials are complex and comparatively little studied; moreover, the novelty of the electronic structure and properties of these perovskite transition metal oxides exceeds the challenge posed by their complex crystalline structures. To understand the structure-property relationship of these materials, a first-principles computational method is employed. The CRYSTAL09 code is used to compute the crystalline structure and the elastic, ferromagnetic and other electronic properties. Second-order elastic constants (SOEC) and bulk moduli (B) are computed in an automated process using the ELASTCON (elastic constants) and EOS (equation of state) programs in the CRYSTAL09 code. ELASTCON, EOS and other computational algorithms are used to determine the elastic properties of tetragonal BaTiO3, rutile TiO2, and cubic and tetragonal BaFeO3, and the ferromagnetic properties of 3d transition metal oxides. Multiple methods are employed to cross-check the consistency of the computational results, which motivated us to explore the ferromagnetic properties of 3d transition metal oxides further. Billyscript and the CRYSTAL09 code are employed to compute the optimized geometry of the cubic and tetragonal crystalline structures of the transition metal oxides from Sc to Cu. The cubic crystalline structure is initially chosen to determine the effect of lattice strains on ferromagnetism due to the spin angular momentum of the electron. The 3d transition metals and their oxides are challenging because the available basis functions and potentials are not fully developed to address their complex physics; moreover, perovskite crystalline structures are extremely demanding with respect to the quality of the computations, which require well-established methods. Ferroelectric and ferromagnetic properties of bulk, surfaces and interfaces are explored using the CRYSTAL09 code. In our computations on the cubic TMOs of Sc-Fe, a coupling is observed between the crystalline structure and the FM/AFM spin polarization. Strained crystalline structures of 3d transition metal oxides undergo changes in their electromagnetic and electronic properties. The electronic structure and properties of bulk, composites and surfaces of 3d transition metal oxides were computed successfully.
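For context, these are the standard textbook relations that tools such as ELASTCON automate (Voigt notation; not equations quoted from this work): the second-order elastic constants are second strain derivatives of the total energy per unit cell volume, and for a cubic crystal the bulk modulus follows from C11 and C12.

```latex
% Standard definitions underlying automated SOEC/bulk-modulus calculations
% (textbook relations, not equations quoted from this thesis).
C_{ij} \;=\; \frac{1}{V_0}\,
      \left.\frac{\partial^{2} E_{\mathrm{tot}}}{\partial \varepsilon_i\, \partial \varepsilon_j}\right|_{\varepsilon=0},
\qquad
B_{\mathrm{cubic}} \;=\; \frac{C_{11} + 2\,C_{12}}{3}.
```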
Abstract:
Secure Access For Everyone (SAFE) is an integrated system for managing trust using a logic-based declarative language. Logical trust systems authorize each request by constructing a proof from a context: a set of authenticated logic statements representing credentials and policies issued by various principals in a networked system. A key barrier to the practical use of logical trust systems is the problem of managing proof contexts: identifying, validating, and assembling the credentials and policies that are relevant to each trust decision.

SAFE addresses this challenge by (i) proposing a distributed authenticated data repository for storing the credentials and policies, and (ii) introducing a programmable credential discovery and assembly layer that generates the appropriate tailored context for a given request. The authenticated data repository is built upon a scalable key-value store with its contents named by secure identifiers and certified by the issuing principal. The SAFE language provides scripting primitives to generate and organize logic sets representing credentials and policies, materialize the logic sets as certificates, and link them to reflect delegation patterns in the application. The authorizer fetches the logic sets on demand, then validates and caches them locally for further use. Upon each request, the authorizer constructs the tailored proof context and provides it to the SAFE inference for certified validation. Delegation-driven credential linking with certified data distribution provides flexible and dynamic policy control, enabling the security and trust infrastructure to be agile while addressing the perennial problems of today's certificate infrastructure: automated credential discovery, scalable revocation, and issuing credentials without relying on a centralized authority.

We envision SAFE as a new foundation for building secure network systems. We used SAFE to build secure services based on case studies drawn from practice: (i) a secure name service resolver, similar to DNS, that resolves a name across multi-domain federated systems; (ii) a secure proxy shim to delegate access control decisions in a key-value store; (iii) an authorization module for a networked infrastructure-as-a-service system with a federated trust structure (NSF GENI initiative); and (iv) a secure cooperative data analytics service that adheres to individual secrecy constraints while disclosing the data. We present an empirical evaluation based on these case studies and demonstrate that SAFE supports a wide range of applications with low overhead.
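To illustrate the context-assembly idea described above, the toy Python sketch below follows delegation links from an issuer-named store, fetching and caching logic sets on demand before handing the assembled context to an inference step. This is not SAFE's actual language, storage layout, or inference engine; the store contents, identifiers, and rules are all hypothetical.

```python
# Toy sketch of tailored proof-context assembly in the spirit described above:
# fetch issuer-certified logic sets from a key-value store by following delegation
# links, caching them locally. Illustrative Python only; not SAFE's actual language
# or API, and all identifiers and statements are hypothetical.
from typing import Iterable

# Hypothetical authenticated repository: secure id -> (statements, linked set ids).
REPO: dict[str, tuple[list[str], list[str]]] = {
    "root/geni-policy": (["mayCreateSlice(X) :- memberOf(X, geni)"], ["uni-a/members"]),
    "uni-a/members":    (["memberOf(alice, geni)"], []),
}

_cache: dict[str, tuple[list[str], list[str]]] = {}

def fetch(set_id: str) -> tuple[list[str], list[str]]:
    """Fetch a logic set on demand, then cache it (signature checks omitted here)."""
    if set_id not in _cache:
        _cache[set_id] = REPO[set_id]      # real system: verify issuer certification
    return _cache[set_id]

def assemble_context(roots: Iterable[str]) -> list[str]:
    """Follow delegation links transitively to build the tailored proof context."""
    context, seen, frontier = [], set(), list(roots)
    while frontier:
        set_id = frontier.pop()
        if set_id in seen:
            continue
        seen.add(set_id)
        statements, links = fetch(set_id)
        context.extend(statements)
        frontier.extend(links)
    return context

if __name__ == "__main__":
    # Context that would be handed to the inference step for a request by "alice".
    print(assemble_context(["root/geni-policy"]))
```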