38 results for Plugins


Relevance:

20.00%

Publisher:

Abstract:

A complete set of working mu-specific plugins for creating a structured, secure, LDAP-authenticated WordPress multisite environment in WP 3.1.3.

Relevance:

20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

20.00%

Publisher:

Abstract:

The present work aims to allow developers to implement small features for a given Android application quickly and easily, and to let users install these features on demand, i.e., install only the ones they are interested in. These small packages of features are called plugins, and JavaScript was chosen as the language for developing them. To achieve this, an Android framework was developed that enables the host application to install, manage and run these plugins at runtime. The framework was designed with a clean, almost readable API, which allowed for better code organization and maintainability. The implementation uses Google's V8 engine to interpret the JavaScript code and, through a set of JNI calls, lets that code invoke Android methods previously registered in the runtime. To test the framework, it was integrated with the client's communication application RCS+ using two plugins developed alongside the framework. Although these plugins covered only the more common requirements, they were shown to work as intended. In conclusion, although the framework was successful, it became clear that this kind of development through a non-native API has its own difficulties, especially when implementing complex features.
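A minimal sketch of what such a host-side plugin API might look like is shown below; all names are illustrative assumptions, not the framework's actual API.

```java
// Hypothetical sketch of a host-side plugin runtime like the one described;
// class and method names are illustrative, not the framework's actual API.
public interface PluginRuntime {

    // Installs a plugin package (JavaScript sources plus a manifest)
    // downloaded on demand by the user.
    void install(String pluginId, byte[] packageBytes);

    // Registers a host method so plugin JavaScript can invoke it;
    // the framework would route the call through V8 via JNI.
    void registerHostMethod(String jsName, HostMethod implementation);

    // Starts an installed plugin inside the embedded JavaScript engine.
    void run(String pluginId);

    @FunctionalInterface
    interface HostMethod {
        Object invoke(Object... jsArguments);
    }
}
```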

Relevance:

10.00%

Publisher:

Abstract:

SAP and its research partners have been developing a language for describing details of services from various viewpoints, called the Unified Service Description Language (USDL). At the time of writing, version 3.0 describes technical implementation aspects of services, as well as stakeholders, pricing, lifecycle, and availability. Work is also underway to address other business and legal aspects of services. This language is designed to be used in service portfolio management, with a repository of service descriptions being available to various stakeholders in an organisation to allow for service prioritisation, development, deployment and lifecycle management. The structure of the USDL metadata is specified using an object-oriented metamodel that conforms to UML, MOF and EMF Ecore. As such it is amenable to code generation for implementations of repositories that store service description instances. Although Web services toolkits can be used to make these programming language objects available as a set of Web services, the practicalities of writing distributed clients against over one hundred class definitions, containing several hundred attributes, will make for very large WSDL interfaces and highly inefficient "chatty" implementations. This paper gives the high-level design for a completely model-generated repository for any version of USDL (or any other data-only metamodel), which uses the Eclipse Modelling Framework's Java code generation, along with several open source plugins, to create a robust, transactional repository running in a Java application with a relational datastore. However, the repository exposes a generated WSDL interface at a coarse granularity, suitable for distributed client code and user-interface creation. It uses heuristics to drive code generation to bridge between the Web service and EMF granularities.
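As a hedged sketch of this coarse-grained, model-generated persistence idea, the following uses the standard EMF resource API to store a whole service-description object graph in one call rather than attribute by attribute; the class name and the file-based datastore are illustrative, not the paper's generated repository code.

```java
import org.eclipse.emf.common.util.URI;
import org.eclipse.emf.ecore.EObject;
import org.eclipse.emf.ecore.resource.Resource;
import org.eclipse.emf.ecore.resource.ResourceSet;
import org.eclipse.emf.ecore.resource.impl.ResourceSetImpl;
import org.eclipse.emf.ecore.xmi.impl.XMIResourceFactoryImpl;

import java.io.IOException;
import java.util.Collections;

public class ServiceDescriptionStore {

    private final ResourceSet resourceSet = new ResourceSetImpl();

    public ServiceDescriptionStore() {
        // Register XMI as the serialization format for all file extensions.
        resourceSet.getResourceFactoryRegistry().getExtensionToFactoryMap()
                .put("*", new XMIResourceFactoryImpl());
    }

    // Coarse-grained save: the whole service-description graph produced by
    // the EMF-generated model code is persisted in one call, instead of the
    // "chatty" per-attribute access a naive WSDL mapping would induce.
    public void save(EObject serviceDescription, String path) throws IOException {
        Resource resource = resourceSet.createResource(URI.createFileURI(path));
        resource.getContents().add(serviceDescription);
        resource.save(Collections.emptyMap());
    }
}
```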

Relevance:

10.00%

Publisher:

Abstract:

Many software applications extend their functionality by dynamically loading executable components into their allocated address space. Such components, exemplified by browser plugins and other software add-ons, not only enable reusability, but also promote programming simplicity, as they reside in the same address space as their host application, supporting easy sharing of complex data structures and pointers. However, such components are also often of unknown provenance and quality and may be riddled with accidental bugs or, in some cases, deliberately malicious code. Statistics show that such component failures account for a high percentage of software crashes and vulnerabilities. Enabling isolation of such fine-grained components is therefore necessary to increase the stability, security and resilience of computer programs. This thesis addresses this issue by showing how host applications can create isolation domains for individual components, while preserving the benefits of a single address space, via a new architecture for software isolation called LibVM. Towards this end, we define a specification which outlines the functional requirements for LibVM, identify the conditions under which these functional requirements can be met, define an abstract Application Programming Interface (API) that encompasses the general problem of isolating shared libraries, thus separating policy from mechanism, and prove its practicality with two concrete implementations based on hardware virtualization and system call interposition, respectively. The results demonstrate that hardware isolation minimises the difficulties encountered with software-based approaches, while also reducing the size of the trusted computing base, thus increasing confidence in the solution's correctness. This thesis concludes that not only is it feasible to create such isolation domains for individual components, but that it should also be a fundamental operating-system-supported abstraction, which would lead to more stable and secure applications.
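The following sketch illustrates, under assumed names, the kind of abstract API the thesis describes: loading a component into its own isolation domain, a policy kept separate from the enforcing mechanism, and calls marshalled across the boundary. It is not the thesis's actual LibVM interface.

```java
// Illustrative sketch of an isolation API in the spirit of LibVM;
// all names are hypothetical, not the thesis's actual interface.
public interface IsolationDomain extends AutoCloseable {

    // Load an untrusted shared component into its own isolation domain
    // rather than directly into the host's address space. The mechanism
    // behind this call is interchangeable: hardware virtualization or
    // system call interposition, as in the thesis's two implementations.
    static IsolationDomain load(String libraryPath, Policy policy) {
        throw new UnsupportedOperationException("mechanism-specific");
    }

    // Invoke an exported function across the isolation boundary.
    // Arguments are marshalled, so a crash or exploit in the component
    // cannot corrupt the host's memory.
    Object call(String exportedFunction, Object... args) throws IsolationFault;

    // Policy (what the component may do) is separated from the mechanism
    // that enforces it.
    interface Policy {
        boolean allowsSystemCall(int syscallNumber);
    }

    class IsolationFault extends Exception {
        public IsolationFault(String message) { super(message); }
    }
}
```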

Relevance:

10.00%

Publisher:

Abstract:

Process mining encompasses the research area concerned with knowledge discovery from event logs. One common process mining task is conformance checking: comparing discovered or designed process models with actual real-life behavior, as captured in event logs, in order to assess the "goodness" of the process model. This paper introduces a novel conformance checking method to measure how well a process model performs in terms of precision and generalization with respect to the actual executions of a process as recorded in an event log. Our approach differs from related work in that we apply the concept of so-called weighted artificial negative events to conformance checking, leading to more robust results, especially when dealing with less complete event logs that contain only a subset of all possible process execution behavior. In addition, our technique offers a novel way to estimate a process model's ability to generalize. Existing literature has focused mainly on the fitness (recall) and precision (appropriateness) of process models, whereas generalization has been much more difficult to estimate. The described algorithms are implemented in a number of ProM plugins, and a Petri net conformance checking tool was developed to inspect process model conformance visually.
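The sketch below illustrates, in a deliberately simplified form, how weighted artificial negative events can feed a precision-style score: negative events that the model nevertheless allows count against precision, weighted by their confidence. The actual definitions in the paper and in the ProM plugins are more involved.

```java
import java.util.List;
import java.util.function.BiPredicate;

// Simplified illustration of precision from weighted artificial negative
// events; not the paper's exact metric.
public class NegativeEventPrecision {

    // A negative event: after this trace prefix, 'event' was (artificially)
    // judged not to occur in real life, with the given weight.
    public record NegativeEvent(List<String> prefix, String event, double weight) {}

    // modelAllows.test(prefix, event) asks whether the model permits
    // 'event' after the given trace prefix.
    public static double precision(List<NegativeEvent> negatives,
                                   BiPredicate<List<String>, String> modelAllows) {
        double allowedWeight = 0.0;   // negative behavior the model wrongly allows
        double totalWeight = 0.0;
        for (NegativeEvent ne : negatives) {
            totalWeight += ne.weight();
            if (modelAllows.test(ne.prefix(), ne.event())) {
                allowedWeight += ne.weight();
            }
        }
        // 1.0 = the model allows no negative behavior; lower = less precise.
        return totalWeight == 0.0 ? 1.0 : 1.0 - allowedWeight / totalWeight;
    }
}
```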

Relevance:

10.00%

Publisher:

Abstract:

Designing the smart grid requires combining varied models. As their number increases, so does the complexity of the software, and a well-thought-out software architecture becomes crucial. This paper presents MODAM, a framework designed to combine agent-based models in a flexible and extensible manner using well-known software engineering solutions (the OSGi specification [1] and Eclipse plugins [2]). Details on how to build a modular agent-based model for the smart grid are given in this paper, illustrated by an example for a small network.
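As a small illustration of the OSGi-based composition this describes, the sketch below shows how a model component could be contributed as a bundle and published as a service; AgentModel and the activator are hypothetical MODAM-style names, with only the BundleActivator API taken from the OSGi specification [1].

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Sketch of contributing an agent-based model component as an OSGi bundle;
// AgentModel and the model class are hypothetical names.
public class SolarPanelModelActivator implements BundleActivator {

    @Override
    public void start(BundleContext context) {
        // Publish this model so the framework can discover it and combine
        // it with other models at runtime.
        context.registerService(AgentModel.class, new SolarPanelModel(), null);
    }

    @Override
    public void stop(BundleContext context) {
        // Services registered by this bundle are unregistered automatically.
    }

    public interface AgentModel { void step(long simulationTime); }

    static class SolarPanelModel implements AgentModel {
        @Override
        public void step(long simulationTime) { /* update generation output */ }
    }
}
```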

Relevance:

10.00%

Publisher:

Abstract:

Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that would otherwise be expensive or impractical to study. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as explicit data at a fine level of detail is used, and computer-intensive, as it requires many interactions between agents, which can learn and have goals. With the growing availability of data and the increase in computer power, these concerns are fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind.

What distinguishes the method presented here from usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but the model itself. This approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model. Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information about the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goals and previous learning experiences (a sketch of this split is given after this abstract). This diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same; this is the case for two identical battery systems whose usage varies depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond (e.g. weather for solar panels), or to describe the assets and their relations to one another (e.g. the network assets).

Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours as well as new types of assets. Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. responses to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport, which is part of future work with the addition of electric vehicles.
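The sketch below illustrates the asset/agent split described above, with illustrative names: one Battery asset carrying only physical characteristics, driven by two different agents pursuing different goals.

```java
// Sketch of the asset/agent separation described above; all names are
// illustrative. The same physical asset is driven by different agents.
public class AssetAgentExample {

    // Asset: physical characteristics only, no behaviour.
    public record Battery(double capacityKWh, double depthOfDischarge, int lifetimeCycles) {}

    // Agent: behaviour with a goal, operating on an asset.
    public interface BatteryAgent {
        double chargeKWh(Battery battery, double surplusKWh);
    }

    // One goal: store as much solar surplus as possible for self-consumption.
    public static class SelfConsumptionAgent implements BatteryAgent {
        public double chargeKWh(Battery b, double surplusKWh) {
            return Math.min(surplusKWh, b.capacityKWh() * b.depthOfDischarge());
        }
    }

    // Another goal: hold half the usable capacity back for evening peak shaving.
    public static class PeakShavingAgent implements BatteryAgent {
        public double chargeKWh(Battery b, double surplusKWh) {
            return Math.min(surplusKWh, 0.5 * b.capacityKWh() * b.depthOfDischarge());
        }
    }

    public static void main(String[] args) {
        Battery battery = new Battery(13.5, 0.9, 6000);  // identical asset...
        System.out.println(new SelfConsumptionAgent().chargeKWh(battery, 10.0));
        System.out.println(new PeakShavingAgent().chargeKWh(battery, 10.0));  // ...different behaviour
    }
}
```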

Relevance:

10.00%

Publisher:

Abstract:

A modular, graphics-oriented Internet browser has been developed to give non-technical clients access to a literal spinning world of information and remotely sensed imagery. The Earth Portal (www.earthportal.net) uses the ManyOne browser (www.manyone.net) to provide engaging point-and-click views of the Earth, fully tessellated with remotely sensed imagery and geospatial data. The ManyOne browser technology uses Mozilla with embedded plugins to drive multiple 3-D graphics engines, e.g. ArcGlobe or GeoFusion, that link directly with the open-systems architecture of the geospatial infrastructure. This innovation allows satellite imagery to be rendered directly over the Earth's surface and requires no technical training by the web user. Effective use of this global distribution system by the remote sensing community requires minimal compliance with the protocols and standards promoted by the NSDI and other open-systems standards organizations.

Relevance:

10.00%

Publisher:

Abstract:

This paper is concerned with several of the most important aspects of Competence-Based Learning (CBL): course authoring, assignments, and categorization of learning content. The latter is part of the so-called Bologna Process (BP) and can be supported effectively by integrating knowledge resources, e.g. standardized skill and competence taxonomies, into the target implementation, with the aim of making effective use of an open integration architecture while fostering the interoperability of hybrid knowledge-based e-learning solutions. Modern scenarios call for interoperable software solutions that seamlessly integrate existing e-learning infrastructures and legacy tools with innovative technologies, while remaining cognitively efficient to handle so that prospective users can work with them without learning overheads. At the same time, methods of Learning Design (LD) in combination with CBL are becoming more and more important for the production and maintenance of easy-to-use solutions. We present our approach to developing competence-based course-authoring and assignment support software. It bridges the gaps between contemporary Learning Management Systems (LMS) and established legacy learning infrastructures by embedding existing resources via Learning Tools Interoperability (LTI). Furthermore, the underlying conceptual architecture for this integration approach is explained, and a competence management structure based on knowledge technologies supporting standardized skill and competence taxonomies is introduced. The overall goal is to develop a software solution that will not only merge flawlessly into a legacy platform and several other learning environments, but also remain intuitively usable. As a proof of concept, the platform-independent conceptual architecture model is validated by a concrete use case scenario.
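As a hedged illustration of the LTI embedding mentioned above, the sketch below assembles the form parameters of an LTI 1.1 basic launch request, which the LMS would POST to the tool's launch URL after signing them with OAuth 1.0a; the parameter values and class name are illustrative, and the signing step is omitted.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal sketch of an LTI 1.1 basic launch: the consumer (LMS) POSTs
// these form parameters to the tool's launch URL, signed with OAuth 1.0a.
// Values below are illustrative; the OAuth signature step is omitted.
public class LtiLaunchExample {

    public static Map<String, String> basicLaunchParams() {
        Map<String, String> p = new LinkedHashMap<>();
        p.put("lti_message_type", "basic-lti-launch-request");
        p.put("lti_version", "LTI-1p0");
        p.put("resource_link_id", "course42-assignment7");  // illustrative value
        p.put("user_id", "student-001");                    // illustrative value
        p.put("roles", "Learner");
        p.put("context_id", "course42");                    // illustrative value
        p.put("oauth_consumer_key", "my-lms-key");          // shared with the tool
        return p;
    }
}
```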

Relevance:

10.00%

Publisher:

Abstract:

Today we speak of the social Web. Facebook, for example, bears the mark of its era: it has become the most coveted social network in the world. However, the company has often been criticized for a policy that infringes on people's privacy. Through its social plugins, Facebook has the potential to collect and use considerable information about Internet users without their knowledge and without their consent. Unfortunately, most users are unaware of this. The company must, of course, sustain itself economically, and the exploitation of personal information is a source of revenue for it. However, this quest for subsistence must not come at the expense of people's privacy. Despite the legal instruments Canada has for protecting privacy, Web companies such as Facebook manage to circumvent them.

Relevance:

10.00%

Publisher:

Abstract:

Simulation has established itself as a supporting planning instrument and forecasting method for studying material-flow systems in various industries. In addition to simulation tools, Virtual Reality (VR) is increasingly used for this purpose, particularly under the Digital Factory approach. A material-flow simulator suitable for VR use does not yet exist, so users currently have to work with several systems to study a simulation model in a VR environment such as a CAVE. This inevitably costs time and can introduce errors when converting the simulation model. Since the in-house material-flow simulator SIMFLEX/3D cannot be used for both fields of application either, and adapting it for such use would require very high effort, a new simulation system was designed instead. In this thesis, the simulation system is described as an interoperable and open system with a flexible software architecture that can exchange data with other tools in a networked environment. The basic idea is to design a flexible software architecture that initially supports interactive, event-oriented 3D simulation but is kept open for other purposes. Thanks to this open approach, extensions developed as plugins can be integrated into the system with little effort, giving the system a high degree of flexibility. For interoperability, software modules are presented that can be used optionally and that rely on standardized formats such as XML. The new simulation system closes the gap between desktop and VR use and thereby reduces both the time required and sources of error. Moreover, the open approach makes it possible to tailor the simulation system to specific roles and tasks by providing the required plugins, so the system can be extended with further research topics, such as avatars, at very little cost. Initial studies in a CAVE were carried out successfully. For the Digital Factory, the prototype can be used to support production planning by means of simulation, and development by means of the viewers created, since the viewers can read numerous CAD formats and can be extended with further formats. The system is suitable for use in various processes of a value chain, as an analysis, communication and control tool.
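As a minimal sketch of the event-oriented core such a simulator needs, the following schedules actions in simulated time order; a real material-flow simulator would add model entities, 3D visualization and the plugin mechanism on top. All names are illustrative.

```java
import java.util.PriorityQueue;

// Illustrative core of an event-oriented simulator; not the system's code.
public class EventScheduler {

    public record Event(double time, Runnable action) {}

    private final PriorityQueue<Event> queue =
            new PriorityQueue<>((a, b) -> Double.compare(a.time(), b.time()));
    private double now = 0.0;

    public void schedule(double time, Runnable action) {
        queue.add(new Event(time, action));
    }

    // Process events in simulated-time order until the queue is empty.
    public void run() {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            now = e.time();
            e.action().run();
        }
    }

    public double now() { return now; }
}
```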

Relevance:

10.00%

Publisher:

Abstract:

At the core of this thesis is research into methods, techniques and tools for debugging in model-based software development processes. First, a novel model-based software development process that I co-developed, the so-called Fujaba Process, is presented. This process is driven by use case scenarios that are formalized by means of special collaboration diagrams. The further artifacts of the process, up to the finished application, are also modelled with UML diagram types; no source-code programming is required. Tool support for the presented process is provided by the Fujaba CASE tool, and large parts of that support, including the tooling for testing and debugging, were developed as part of this thesis. The first part of the thesis explains the Fujaba Process in detail and reports our experience with using the process in industrial projects and in teaching. The second part describes the test generation developed in this thesis, which has become an important part of the Fujaba Process: executable test cases are generated from the formalized use case scenarios. The underlying concept, the concrete technical realization and practical experience with the test generation are presented. The last part deals with debugging in the Fujaba Process. Various concepts and techniques developed in this thesis are presented that simplify fault finding during application development. Care was taken that debugging, like all other steps in the Fujaba Process, takes place exclusively at the model level. Among other things, techniques for the stepwise execution of models, an object browser, and a debugger that allows programs to be executed backwards (back-in-time debugging) are presented. All concepts described were implemented and evaluated as plugins for the Eclipse version of Fujaba, Fujaba4Eclipse, taking care to integrate closely with Fujaba on the one hand and with Eclipse on the other. In summary, a development process is presented, together with the ability to identify faults in it through automated tests and then to locate and finally fix those faults using dedicated debugging techniques, with the entire process taking place at the model level. For the testing and debugging techniques, plugins for Fujaba4Eclipse were developed in this thesis that support developers in these activities as well as possible.
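As a hedged illustration of what a test generated from a formalized scenario could look like, the sketch below follows the scenario pattern of building a start situation, executing the formalized step, and asserting the end situation; the Account class and the JUnit test are hypothetical, not Fujaba's generated output.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Illustrative shape of a test case derived from a formalized use case
// scenario; a real generator derives such tests from the scenario's
// collaboration diagrams.
public class WithdrawCashScenarioTest {

    @Test
    public void scenarioWithdrawCash() {
        // 1. Build the object structure of the scenario's start situation.
        Account account = new Account(100);

        // 2. Trigger the step the scenario formalizes.
        account.withdraw(30);

        // 3. Assert the scenario's expected end situation.
        assertEquals(70, account.getBalance());
    }

    // Hypothetical class under test, included so the sketch is self-contained.
    static class Account {
        private int balance;
        Account(int balance) { this.balance = balance; }
        void withdraw(int amount) { balance -= amount; }
        int getBalance() { return balance; }
    }
}
```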

Relevance:

10.00%

Publisher:

Abstract:

A presentation of the software initially developed by Horacio González Diéguez and Berio Molina Quiroga for the Escoitar project (http://www.escoitar.org), a website built as a mashup that combines podcast technology with Google maps. Escoitar.org will shortly release its version 2.0, developed entirely with SPIP, a CMS of French origin distributed under the GNU/GPL license (http://www.SPIP.net/es). Our contribution to SPIP has been a set of plugins that use GeoRSS, a syndication technology carrying geographic information, to make it easy to embed articles, images or sounds in maps such as Google's (http://www.SPIP-contrib.net/Plugin-GIS-escoitar). SPIP GIS makes it possible to attach geographic information to the traditional elements of a content management system, such as articles, topics or tags. The Google Map API plugin uses that geographic information to build Google maps that display the content of the website. The architecture of these two plugins was designed to allow other platforms, such as Yahoo maps or OpenStreetMap, to be used in the future.
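As a small illustration of the GeoRSS-Simple markup such plugins rely on, the sketch below renders an RSS item carrying a georss:point (latitude, then longitude); the title, URL and coordinates are illustrative, and the feed root would declare the georss namespace.

```java
// Sketch of GeoRSS-Simple markup: a standard RSS item extended with a
// georss:point (latitude longitude) element. Values are illustrative.
public class GeoRssItem {

    public static String item(String title, String soundUrl, double lat, double lon) {
        return """
                <item>
                  <title>%s</title>
                  <enclosure url="%s" type="audio/mpeg"/>
                  <georss:point>%f %f</georss:point>
                </item>""".formatted(title, soundUrl, lat, lon);
    }

    public static void main(String[] args) {
        // Illustrative values: a geolocated field recording.
        System.out.println(item("Campás de Santiago",
                "http://example.org/sound.mp3", 42.8806, -8.5446));
    }
}
```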

Relevance:

10.00%

Publisher:

Abstract:

Wednesday 23rd April 2014
Speaker(s): Willi Hasselbring
Organiser: Leslie Carr
Time: 23/04/2014 14:00-15:00
Location: B32/3077
File size: 802Mb

Abstract: The internal behavior of large-scale software systems cannot be determined on the basis of static (e.g., source code) analysis alone. Kieker provides complementary dynamic analysis capabilities, i.e., monitoring/profiling and analyzing a software system's runtime behavior. Application Performance Monitoring is concerned with continuously observing a software system's performance-specific runtime behavior, including analyses such as assessing service level compliance or detecting and diagnosing performance problems. Architecture Discovery is concerned with extracting architectural information from an existing software system, including both structural and behavioral aspects, such as identifying architectural entities (e.g., components and classes) and their interactions (e.g., local or remote procedure calls). In addition to the Architecture Discovery of Java systems, Kieker supports Architecture Discovery for other platforms, including legacy systems implemented, for instance, in C#, C++, Visual Basic 6, COBOL or Perl. Thanks to Kieker's extensible architecture it is easy to implement and use custom extensions and plugins. Kieker was designed for continuous monitoring in production systems, inducing only a very low overhead, which has been evaluated in extensive benchmark experiments. Please refer to http://kieker-monitoring.net/ for more information.
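As a rough illustration of what such low-overhead monitoring probes do, the sketch below wraps an operation and records its execution time; it uses only plain Java and hypothetical names, not Kieker's actual record and writer API (see http://kieker-monitoring.net/ for that).

```java
import java.util.function.Supplier;

// Generic sketch of an application-level monitoring probe; a framework
// like Kieker provides records, writers and analysis plugins far beyond
// this illustration.
public class MonitoringProbe {

    public static <T> T monitored(String operationSignature, Supplier<T> body) {
        long tin = System.nanoTime();
        try {
            return body.get();
        } finally {
            long tout = System.nanoTime();
            // A real framework would hand a record to an asynchronous
            // writer to keep the monitoring overhead low.
            System.out.printf("%s took %d us%n",
                    operationSignature, (tout - tin) / 1_000);
        }
    }

    public static void main(String[] args) {
        String result = monitored("Demo.compute()", () -> "42");
        System.out.println(result);
    }
}
```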