989 results for software deployment


Relevance:

100.00%

Publisher:

Abstract:

This paper presents work in progress on an on-demand software deployment system based on application virtualization concepts, which eliminates the need for software installation and configuration on each computer. Several mechanisms were created: mapping of the resources used by the application, to improve software distribution and startup; a virtualization middleware that provides all resources needed for the software's execution; an asynchronous P2P transport used to optimize distribution over the network; and off-line support, so the user can run the application even when the server is unavailable or the machine is off the network. © Springer-Verlag Berlin Heidelberg 2010.
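
A minimal sketch of the launch flow the abstract outlines, with hypothetical class and method names (the paper does not give an API): the client runs a cached application image through the virtualization middleware, and only fetches it on demand over the P2P transport when it is not yet cached, which is also what makes off-line execution possible.

```python
# Hypothetical names throughout; a sketch of the described flow, not the paper's API.
import os


class P2PTransport:
    """Stand-in for the asynchronous P2P transport that fetches application images."""

    def fetch(self, app_id: str, dest: str) -> None:
        # Placeholder: a real transport would stream the image from peers.
        with open(dest, "wb") as f:
            f.write(b"application image placeholder")


class VirtualizationMiddleware:
    """Stand-in for the middleware that supplies every resource the application needs."""

    def run(self, image_path: str) -> int:
        # A real implementation would use the recorded resource-usage mapping
        # to set up the virtual environment before starting the application.
        print(f"executing {image_path} inside the virtual environment")
        return 0


def launch(app_id: str, cache_dir: str,
           transport: P2PTransport,
           middleware: VirtualizationMiddleware) -> int:
    os.makedirs(cache_dir, exist_ok=True)
    image = os.path.join(cache_dir, f"{app_id}.img")
    if not os.path.exists(image):
        # Not cached yet: fetch the image on demand over the P2P transport.
        transport.fetch(app_id, image)
    # Off-line support: once the image is cached, execution no longer
    # depends on the server or the network being reachable.
    return middleware.run(image)


middleware = VirtualizationMiddleware()
launch("spreadsheet", "/tmp/appcache", P2PTransport(), middleware)  # first run: fetch + execute
launch("spreadsheet", "/tmp/appcache", P2PTransport(), middleware)  # later runs work off-line
```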

Relevance:

60.00%

Publisher:

Abstract:

The development of High-Integrity Real-Time Systems has a high footprint in terms of human, material and schedule costs. Factoring functional, reusable logic into the application favors incremental development and contains costs. Yet achieving incrementality in the timing behavior is a much harder problem: complex features at all levels of the execution stack, aimed at boosting average-case performance, exhibit timing behavior that is highly dependent on execution history, which wrecks time composability and, with it, incrementality. Our goal here is to restore time composability to the execution stack, working bottom-up across it. We first characterize time composability without making assumptions about the system architecture or the software deployed on it. We then focus on the role played by the real-time operating system. Initially we consider single-core processors and, becoming less permissive about the admissible hardware features, devise solutions that restore a convincing degree of time composability. To show what can be done in practice, we developed TiCOS, an ARINC-compliant kernel, and re-designed ORK+, a kernel for Ada Ravenscar runtimes. In that work we added support for limited preemption to ORK+, a first among real-world kernels. Our implementation allows resource sharing to co-exist with limited-preemptive scheduling, which extends the state of the art. We then turn our attention to multicore architectures, first considering partitioned systems, for which we achieve results close to those obtained for single-core processors. Subsequently, we move away from the over-provisioning of those systems and consider less restrictive uses of homogeneous multiprocessors, where the scheduling algorithm is key to high schedulable utilization. To that end we single out RUN, a promising baseline, and extend it to SPRINT, which supports sporadic task sets and hence better matches real-world industrial needs. To corroborate our results we present findings from case studies in the avionics industry.
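
The limited-preemption support mentioned for ORK+ can be pictured with a toy scheduler, written here as a sketch rather than the actual Ada implementation: a higher-priority task that arrives while another task is inside a non-preemptive region only gets the processor at the next region boundary. All task parameters are invented.

```python
# Illustrative sketch of limited-preemptive scheduling (not ORK+ code):
# preemption is deferred to the end of the running task's current
# non-preemptive region.
from dataclasses import dataclass
from typing import List


@dataclass
class Task:
    name: str
    priority: int          # higher number = higher priority
    arrival: int           # release time
    regions: List[int]     # lengths of the task's non-preemptive regions


def schedule(tasks: List[Task]) -> None:
    time, ready = 0, []
    pending = sorted(tasks, key=lambda t: t.arrival)
    while pending or ready:
        # Release tasks whose arrival time has passed.
        while pending and pending[0].arrival <= time:
            ready.append(pending.pop(0))
        if not ready:
            time = pending[0].arrival
            continue
        # Pick the highest-priority ready task; once a region starts it runs
        # to the region boundary even if a higher-priority task arrives.
        current = max(ready, key=lambda t: t.priority)
        region = current.regions.pop(0)
        print(f"t={time:2d}-{time + region:2d}: {current.name}")
        time += region
        if not current.regions:
            ready.remove(current)


schedule([Task("background", priority=1, arrival=0, regions=[4, 4]),
          Task("alarm",      priority=9, arrival=1, regions=[2])])
# "alarm" arrives at t=1 but only runs after the background region ends at t=4.
```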

Relevance:

60.00%

Publisher:

Abstract:

Software deployment can be seen as the process comprising all the activities needed to make a piece of software available to users without a manual installation on the user's computer or other machine. Several software deployment tools that handle automated installations are available to companies on the market today. The HVDC department at ABB in Ludvika needs to start using a tool for automated software installations, since installations are currently performed manually and are time-consuming. As a Microsoft partner, ABB wants to see how Microsoft's software deployment tools could help meet this need. Our study aimed to examine how software installations are carried out today and to find opportunities for improving installations that cannot currently be automated. The study also included developing a general framework for how organisations can proceed when they want to start using a software deployment tool. The framework includes a requirements specification to be evaluated against Microsoft's tools. To form a picture of how the work is done today, we conducted a survey and interviews with staff at HVDC. To develop the framework, we used data collected from the interviews, the survey and a group interview in order to identify the staff's requirements and wishes for a software deployment tool. Literature studies were carried out to establish a theoretical frame of reference for developing the framework and the requirements specification. The study has resulted in a description of software deployment, opportunities for improving the software installation process, and a general framework describing how organisations can proceed when they start using a software deployment tool. The framework also contains a requirements specification that was used to evaluate Microsoft's software deployment tools. In our study we have not found any previous work offering a general framework and requirements specification that organisations can use as a basis when adopting a software deployment tool; the results of our study can fill this knowledge gap.

Relevance:

30.00%

Publisher:

Abstract:

The number of software vendors offering ‘Software-as-a-Service’ has increased in recent years. In the Software-as-a-Service model, software is operated by the vendor and delivered to the customer as a service. The changes to the deployment and pricing model, compared with traditional software, challenge existing business models and industry structures. However, the full implications for the way companies create, deliver and capture value have not yet been sufficiently analyzed. Current research is scattered across specific aspects; only a few studies provide a more holistic view of the impact from a business model perspective. For vendors, however, it is crucial to be aware of the potentially far-reaching consequences of Software-as-a-Service. Therefore, a literature review and three exploratory case studies of leading software vendors are used to evaluate possible implications of Software-as-a-Service for business models. The results show an impact on all business model building blocks and highlight in particular the often less articulated impact on key activities, customer relationships and key partnerships for leading software vendors, together with related challenges, for example regarding the integration of development and operations processes. The observed implications demonstrate the disruptive character of the concept and identify future research needs.

Relevance:

30.00%

Publisher:

Abstract:

Post-deployment maintenance and evolution can account for up to 75% of the cost of developing a software system. Software refactoring can reduce the costs associated with evolution by improving system quality. Although refactoring can yield benefits, the process includes potentially complex, error-prone, tedious and time-consuming tasks. It is these tasks that automated refactoring tools seek to address. However, although the refactoring process is well-defined, current refactoring tools do not support the full process. To develop better automated refactoring support, we have completed a usability study of software refactoring tools. In the study, we analysed the task of software refactoring using the ISO 9241-11 usability standard and Fitts' List of task allocation. Expanding on this analysis, we reviewed 11 collections of usability guidelines and combined these into a single list of 38 guidelines. From this list, we developed 81 usability requirements for refactoring tools. Using these requirements, the software refactoring tools Eclipse 3.2, Condenser 1.05, RefactorIT 2.5.1, and Eclipse 3.2 with the Simian UI 2.2.12 plugin were studied. Based on the analysis, we have selected a subset of the requirements that can be incorporated into a prototype refactoring tool intended to address the full refactoring process.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a software architecture for real-world robotic applications. We discuss issues of software reliability, testing and realistic off-line simulation that allow the majority of the automation system to be tested off-line in the laboratory before deployment in the field. A recent project, the automation of a very large mining machine, is used to illustrate the discussion.
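
One common way to obtain the kind of off-line testability described here is to place a thin interface between the control logic and the machine, so that a simulator can stand in for the real hardware in the laboratory. The sketch below illustrates that pattern with invented names; it is not taken from the paper's architecture.

```python
# Sketch of off-line testing through a machine-interface abstraction.
from abc import ABC, abstractmethod


class MachineInterface(ABC):
    @abstractmethod
    def read_position(self) -> float: ...

    @abstractmethod
    def command_velocity(self, v: float) -> None: ...


class SimulatedMachine(MachineInterface):
    """Simple kinematic model used for off-line testing in the lab."""

    def __init__(self) -> None:
        self.position, self.velocity, self.dt = 0.0, 0.0, 0.1

    def read_position(self) -> float:
        self.position += self.velocity * self.dt
        return self.position

    def command_velocity(self, v: float) -> None:
        self.velocity = v


def controller_step(machine: MachineInterface, target: float) -> None:
    # The same control code runs unchanged against the simulator or the
    # real machine, which is what makes off-line testing meaningful.
    error = target - machine.read_position()
    machine.command_velocity(0.5 * error)


sim = SimulatedMachine()
for _ in range(20):
    controller_step(sim, target=10.0)
print(f"position after simulated run: {sim.read_position():.2f}")
```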

Relevance:

30.00%

Publisher:

Abstract:

We share our experience in planning, designing and deploying a wireless sensor network covering an area of one square kilometre. Environmental data such as soil moisture, temperature, barometric pressure, and relative humidity are collected in this area, situated in the semi-arid region of Karnataka, India. The hope is that information derived from these data will help marginal farmers improve their farming practices. After establishing the need for such a project, we present the big picture of the data-gathering network, the software architecture we have used, the range measurements needed to determine sensor density, and the packaging issues that play a crucial role in field deployments. Our field-deployment experiences include designing for intermittent grid power, enhancing software tools to enable quicker and more effective deployment, and dealing with flash memory corruption. The first results on data gathering look encouraging.
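
For illustration, a per-node sample record covering the quantities named above might look like the sketch below; the field names, units and binary packing are assumptions, not details of the actual deployment.

```python
# Hypothetical per-node environmental sample and a compact binary encoding.
from dataclasses import dataclass
import struct
import time


@dataclass
class EnvSample:
    node_id: int
    timestamp: float          # seconds since epoch
    soil_moisture: float      # volumetric %, assumed unit
    temperature_c: float
    pressure_hpa: float
    relative_humidity: float  # %

    def pack(self) -> bytes:
        # Compact form suitable for radio transmission or flash logging.
        return struct.pack("<Hdffff", self.node_id, self.timestamp,
                           self.soil_moisture, self.temperature_c,
                           self.pressure_hpa, self.relative_humidity)


sample = EnvSample(node_id=42, timestamp=time.time(),
                   soil_moisture=18.5, temperature_c=31.2,
                   pressure_hpa=948.3, relative_humidity=47.0)
print(len(sample.pack()), "bytes per sample")
```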

Relevance:

30.00%

Publisher:

Abstract:

The goal of a single building information model has existed for at least thirty years, and various standards have been published leading up to the ten-year development of the Industry Foundation Classes. These have been initiatives from researchers, software developers and standards committees. Now large property owners are becoming aware of the benefits of moving IT tools from specific applications towards more comprehensive solutions. This study addresses the state of Building Information Models and the conditions necessary for them to become more widely used. It is a qualitative study based on information from a number of international experts, who were asked a series of questions about the feasibility of BIMs, the conditions necessary for their success, and the role of standards, with particular reference to the IFCs. Some key statements were distilled from the diverse answers received. They indicate that BIM solutions appear too complex for many users and may need to be applied in limited areas initially. Standards are generally supported but not applied rigorously, and a range of these are relevant to BIM. Benefits will depend upon the building procurement methods used, and there should be special roles within the project team to manage information. Case studies are starting to appear and could be used for publicity. The IFCs are rather oversold, and their complexities should be hidden within simple-to-use software. Inevitably, major questions remain, and property owners may be the key to answering some of these. A framework for presenting standards, backed up by case studies of successful projects, is the solution proposed to provide better information on where particular BIM standards and solutions should be applied in building projects.

Relevance:

30.00%

Publisher:

Abstract:

This document presents a Final Degree Project (TFG). The project consists of a set of tools that support the design, implementation and development of the control software for a humanoid robot. The project focuses on improving the effectiveness, robustness, performance and reliability of the software. The proposed changes introduce improvements over the commercial robo nova robot, in particular modularity, allowing total or partial reuse of the chosen solutions and saving time and money in future development on this platform.

Relevance:

30.00%

Publisher:

Abstract:

As the commoditization of sensing, actuation and communication hardware increases, so does the potential for dynamically tasked sense-and-respond networked systems (i.e., Sensor Networks or SNs) to replace existing disjoint and inflexible special-purpose deployments (closed-circuit security video, anti-theft sensors, etc.). While various solutions have emerged for many individual SN-centric challenges (e.g., power management, communication protocols, role assignment), perhaps the largest remaining obstacle to widespread SN deployment is that those who wish to deploy, utilize, and maintain a programmable Sensor Network lack the programming and systems expertise to do so. The contributions of this thesis center on the design, development and deployment of the SN Workbench (snBench). snBench embodies an accessible, modular programming platform coupled with a flexible and extensible run-time system that, together, support the entire life-cycle of distributed sensory services. As it is impossible to find a one-size-fits-all programming interface, this work advocates the use of tiered layers of abstraction that enable a variety of high-level, domain-specific languages to be compiled to a common (thin-waist) tasking language; this common tasking language is statically verified and can be subsequently re-translated, if needed, for execution on a wide variety of hardware platforms. snBench provides: (1) a common sensory tasking language (Instruction Set Architecture) powerful enough to express complex SN services, yet simple enough to be executed by highly constrained resources under soft real-time constraints; (2) a prototype high-level language (and corresponding compiler) to illustrate the utility of the common tasking language and the tiered programming approach in this domain; (3) an execution environment and run-time support infrastructure that abstract a collection of heterogeneous resources into a single virtual Sensor Network, tasked via this common tasking language; and (4) novel formal methods (i.e., static analysis techniques) that verify safety properties and infer implicit resource constraints to facilitate resource allocation for new services. This thesis presents these components in detail, as well as two specific case studies: the use of snBench to integrate physical and wireless network security, and the use of snBench as the foundation for semester-long student projects in a graduate-level Software Engineering course.
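
The tiered-abstraction idea can be illustrated with a toy lowering step from a tiny high-level service description to a flat list of tasking instructions. The mini-language and instruction names below are invented; they are not snBench's actual languages or ISA.

```python
# Toy "compile a high-level service down to a common tasking form" example.

# High-level service: "every 5 s, if temperature > 30, notify".
service = {"every": 5, "if": ("temperature", ">", 30.0), "then": "notify"}


def compile_service(spec: dict) -> list:
    """Lower the high-level spec into a flat list of tasking instructions."""
    sensor, op, threshold = spec["if"]
    return [
        ("SAMPLE", sensor),         # read the named sensor
        ("CMP", op, threshold),     # compare the sample with a constant
        ("BRANCH_FALSE", "done"),   # skip the action if the test fails
        ("ACT", spec["then"]),      # perform the action
        ("LABEL", "done"),
        ("SLEEP", spec["every"]),   # wait before the next iteration
        ("JUMP", 0),                # loop forever
    ]


for instr in compile_service(service):
    print(instr)
```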

Relevance:

30.00%

Publisher:

Abstract:

The emergence of a sensor-networked world produces a clear and urgent need for well-planned, safe and secure software engineering. It is the role of universities to prepare graduates with the knowledge and experience to enter the workforce with a clear understanding of software design and its application to the future safety of computing. The snBench (Sensor Network WorkBench) project aims to support the programming and deployment of Sensor Network Applications, enabling shared sensor-embedded spaces to be easily tasked with various sensory applications by different users for simultaneous execution. In this report we discuss our experience using the snBench research project as the foundation for a semester-long project in a graduate-level software engineering class at Boston University (CS511).

Relevance:

30.00%

Publisher:

Abstract:

This research focuses on the design and implementation of a tool to speed up the development and deployment of heterogeneous wireless sensor networks. The THAWS (Tyndall Heterogeneous Automated Wireless Sensors) tool can be used to quickly create and configure application-specific sensor networks. THAWS presents the user with a choice of options in order to characterise the desired functionality of the network. With this information, THAWS generates the necessary code from pre-written templates and well-tested, optimized software modules. This is then automatically compiled to form binary files for each node in the network. Wireless programming of the network completes the task of targeting the wireless network towards a specific sensing application. THAWS is an adaptable tool that works with both homogeneous and heterogeneous networks built from wireless sensor nodes developed at the Tyndall National Institute.
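
The template-driven generation step can be pictured as in the sketch below, where user-selected options are substituted into a pre-written configuration template before compilation. The template text and option names are invented, as the abstract does not show THAWS' real templates or build flow.

```python
# Hypothetical template-based node-configuration generation.
from string import Template

NODE_TEMPLATE = Template("""\
/* auto-generated node configuration */
#define NODE_ID        $node_id
#define SENSOR_TYPE    $sensor
#define SAMPLE_PERIOD  $period_ms  /* milliseconds */
#define RADIO_CHANNEL  $channel
""")


def generate_node_config(node_id: int, options: dict) -> str:
    """Fill the template for one node from the user's choices."""
    return NODE_TEMPLATE.substitute(node_id=node_id, **options)


# One configuration per node; in the real tool the result would be compiled
# into a binary and sent to the node over the air.
choices = {"sensor": "TEMPERATURE", "period_ms": 1000, "channel": 15}
for node in range(1, 4):
    print(generate_node_config(node, choices))
```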

Relevance:

30.00%

Publisher:

Abstract:

The anticipated rewards of adaptive approaches will only be fully realised when autonomic algorithms can take configuration and deployment decisions that match or exceed those of human engineers. Such decisions are typically characterised as being based on a foundation of experience and knowledge. In humans, these underpinnings are themselves founded on the ashes of failure, the exuberance of courage and (sometimes) the outrageousness of fortune. In this paper we describe an application framework that allows the incorporation of similarly risky, error-prone and downright dangerous software artefacts into live systems – without undermining the certainty of correctness at the application level. We achieve this by introducing the notion of application dreaming.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a highly flexible component architecture, primarily designed for automotive control systems, that supports distributed, dynamically-configurable, context-aware behaviour. The architecture enforces a separation of design-time and run-time concerns, enabling almost all decisions concerning run-time composition and adaptation to be deferred beyond deployment. Dynamic context management contributes to flexibility. The architecture is extensible and can embed potentially many different self-management decision technologies simultaneously. The mechanism that implements the run-time configuration has been designed to be very robust, automatically and silently handling problems arising from the evaluation of self-management logic and ensuring that, in the worst case, the dynamic aspects of the system collapse down to static behaviour in totally predictable ways.
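
The worst-case collapse to static behaviour described above can be sketched as a defensive wrapper around the self-management logic; the names and structure below are illustrative only, since the abstract does not detail the actual mechanism.

```python
# Sketch: any failure in the self-management logic silently falls back
# to a predictable static configuration.
import logging

STATIC_DEFAULT = {"mode": "static", "update_rate_hz": 10}


def evaluate_self_management(context: dict) -> dict:
    """Decision logic that may itself be faulty or raise at run time."""
    if context["speed_kmh"] > 120:
        return {"mode": "adaptive", "update_rate_hz": 50}
    return {"mode": "adaptive", "update_rate_hz": 20}


def choose_configuration(context: dict) -> dict:
    try:
        decision = evaluate_self_management(context)
        # Reject structurally invalid decisions as well as exceptions.
        if not {"mode", "update_rate_hz"}.issubset(decision):
            raise ValueError("incomplete decision")
        return decision
    except Exception as exc:  # worst case: collapse to static behaviour
        logging.debug("self-management failed (%s); using static default", exc)
        return STATIC_DEFAULT


print(choose_configuration({"speed_kmh": 140}))  # adaptive decision
print(choose_configuration({}))                  # missing context -> static default
```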

Relevance:

30.00%

Publisher:

Abstract:

This paper describes the deployment on GPUs of PROP, a program of the 2DRMP suite which models electron collisions with H-like atoms and ions. Because performance on GPUs is better in single precision than in double precision, the numerical stability of the PROP program in single precision has been studied. The numerical quality of PROP results computed in single precision and their impact on the next program of the 2DRMP suite have been analyzed. Successive versions of the PROP program on GPUs have been developed in order to improve its performance, with particular attention paid to the optimization of data transfers and of linear algebra operations. Performance results obtained on several architectures (including NVIDIA Fermi) are presented.
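
The single- versus double-precision comparison at the heart of the study can be sketched as follows: run the same linear-algebra kernel in float32 and float64 and measure the relative discrepancy. The matrices below are random placeholders, not PROP's R-matrix data.

```python
# Single- vs double-precision accuracy check for a matrix product.
import numpy as np

rng = np.random.default_rng(0)
n = 512
a64 = rng.standard_normal((n, n))
b64 = rng.standard_normal((n, n))

# Reference result in double precision.
c64 = a64 @ b64

# Same product in single precision, as it would run on the GPU.
c32 = (a64.astype(np.float32) @ b64.astype(np.float32)).astype(np.float64)

rel_err = np.linalg.norm(c64 - c32) / np.linalg.norm(c64)
print(f"relative error of the single-precision product: {rel_err:.2e}")
```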