882 results for software management infrastructure


Relevance:

100.00%

Publisher:

Abstract:

The complexity associated with the fast growth of B2B and the lack of a complete suite of open standards make it difficult to maintain the underlying collaborative processes. In response to this challenge, this paper contributes to an open architecture for a logistics and transport process management system. A model of an open integrated system is being defined, covering the computational responsibilities of the embedded (on-board) systems, together with a reference implementation (prototype) of a host system to validate the proposed open interfaces. The embedded subsystem can natively be prepared to cooperate with other on-board units and with IT systems in an infrastructure commonly referred to as a central information system or back office. For interaction with the central system, the proposal is to adopt an open cooperation framework in which an embedded unit, wherever it is placed (land or sea), interacts in response to a set of implemented capabilities.

Relevance:

100.00%

Publisher:

Abstract:

Includes bibliographical references (p. 48-49).

Relevance:

100.00%

Publisher:

Abstract:

This deliverable (D1.4) is an intermediate document, expressly included to inform the first project review about RAGE’s methodology of software asset creation and management. The final version of the methodology description (D1.1) will be delivered in Month 29. The document explains how the RAGE project defines, develops, distributes and maintains a series of applied gaming software assets that it aims to make available. It describes a high-level methodology and infrastructure that are needed to support the work in the project as well as after the project has ended.

Relevance:

100.00%

Publisher:

Abstract:

With the growing popularity of IT solutions as a key factor for increasing competitiveness and value creation in companies, the need to invest in IT projects has risen considerably. Limited resources, as an obstacle to investment, have forced companies to look for methodologies to select and prioritize projects, ensuring that the decisions taken are aligned with corporate strategies so as to secure value creation and maximize benefits. This thesis provides the foundations for implementing IT Project Portfolio Management (IT PPM) as an effective methodology for managing IT-based projects, and as a tool that gives executives clear criteria for decision making. The document describes how to implement IT PPM in seven steps, analyzing the processes and functions required for its successful execution. It also presents different methods and criteria for project selection and prioritization. After the theoretical part describing IT PPM, the thesis offers a case-study analysis of a pharmaceutical company. The company already has a project management department, but the need to implement IT PPM was identified because of its broad coverage of end-to-end IT project processes and its way of ensuring benefit maximization. Drawing on the theoretical research and the case-study analysis, the thesis concludes with a practical definition of an approximate IT PPM model as a recommendation for implementation in the Project Management Department.
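The selection and prioritization step described above is often realized as a weighted scoring model. The sketch below is a hypothetical illustration only; the criteria names, weights, and ratings are invented for the example and are not prescribed by the thesis.

```python
# Hypothetical weighted scoring model for IT project prioritization.
# Criteria, weights, and ratings are invented for illustration.

def score_project(ratings, weights):
    """Weighted sum of criterion ratings (0-10 scale per criterion)."""
    return sum(ratings[c] * w for c, w in weights.items())

# Weights sum to 1.0; "risk_inverse" rates low risk highly.
weights = {"strategic_fit": 0.4, "expected_benefit": 0.35, "risk_inverse": 0.25}

projects = {
    "ERP upgrade":  {"strategic_fit": 9, "expected_benefit": 7, "risk_inverse": 5},
    "BI dashboard": {"strategic_fit": 6, "expected_benefit": 8, "risk_inverse": 8},
}

# Rank projects by descending score for portfolio selection.
ranked = sorted(projects, key=lambda p: score_project(projects[p], weights),
                reverse=True)
```

In a real portfolio the weights would be derived from corporate strategy, which is precisely the alignment the methodology aims to enforce.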

Relevance:

100.00%

Publisher:

Abstract:

To simplify computer management, several system administrators are adopting advanced techniques to manage the software configuration of enterprise computer networks, but the tight coupling between hardware and software makes every PC an individually managed entity, lowering scalability and increasing the cost of managing hundreds or thousands of PCs. Virtualization is an established technology, but its use has been focused on server consolidation and virtual desktop infrastructure rather than on managing distributed computers over a network. This paper discusses the feasibility of the Distributed Virtual Machine Environment, a new approach to enterprise computer management that combines virtualization and a distributed system architecture as the basis of the management architecture. © 2008 IEEE.

Relevance:

100.00%

Publisher:

Abstract:

With the increasing use of e-learning platforms, economic considerations are moving into focus, which requires methods for determining system-related costs. However, the growing service orientation and the integration of learning management systems (LMS) with existing components of the application architecture call for new methods that overcome the deficits of traditional Total Cost of Ownership models. A starting point is offered by the ITIL reference model, which provides a framework for tactical and operational IT services and thus the basis for a service-oriented determination of overall costs in the form of the Total Cost of Services (TCS).
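The shift from per-asset TCO to service-oriented TCS can be pictured as summing costs per consumed IT service rather than per device. The service names and figures below are invented for illustration and do not come from the paper.

```python
# Hypothetical TCS aggregation: cost is attributed to the ITIL-style
# services an LMS consumes, not to individual hardware assets.
# Service names and annual cost figures are invented for illustration.

services = {
    "incident_management": 12000.0,   # annual cost, e.g. EUR
    "capacity_management":  8000.0,
    "service_desk":        15000.0,
}

# Total Cost of Services: the sum over all consumed services.
tcs = sum(services.values())

# Share of each service in the total, useful for cost transparency.
shares = {name: cost / tcs for name, cost in services.items()}
```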

Relevance:

100.00%

Publisher:

Abstract:

Software development teams increasingly adopt platform-as-a-service (PaaS), i.e., cloud services that make software development infrastructure available over the internet. Yet, empirical evidence of whether and how software development work changes with the use of PaaS is difficult to find. We performed a grounded-theory study to explore the affordances of PaaS for software development teams. We find that PaaS enables software development teams to enforce uniformity, to exploit knowledge embedded in technology, to enhance agility, and to enrich jobs. These affordances do not arise in a vacuum. Their emergence is closely interwoven with changes in methodologies, roles, and norms that give rise to self-organizing, loosely coupled teams. Our study provides rich descriptions of PaaS-based software development and an emerging theory of affordances of PaaS for software development teams.

Relevance:

100.00%

Publisher:

Abstract:

As researchers and practitioners move towards a vision of software systems that configure, optimize, protect, and heal themselves, they must also consider the implications of such self-management activities for software reliability. Autonomic computing (AC) describes a new generation of software systems characterized by dynamically adaptive self-management features. During dynamic adaptation, autonomic systems modify their own structure and/or behavior in response to environmental changes. Adaptation can result in new system configurations and capabilities, which need to be validated at runtime to prevent costly system failures. However, although the pioneers of AC recognize that validating autonomic systems is critical to the success of the paradigm, the architectural blueprint for AC does not provide a workflow or supporting design models for runtime testing.

This dissertation presents a novel approach for seamlessly integrating runtime testing into autonomic software. The approach introduces an implicit self-test feature into autonomic software by tailoring the existing self-management infrastructure to runtime testing. Autonomic self-testing facilitates activities such as test execution, code coverage analysis, timed test performance, and post-test evaluation. In addition, the approach is supported by automated testing tools and a detailed design methodology. A case study that incorporates self-testing into three autonomic applications is also presented. The findings of the study reveal that autonomic self-testing provides a flexible approach for building safe, reliable autonomic software, while limiting development and performance overhead through software reuse.
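The core idea, validating a new configuration at runtime before committing to it, can be sketched in a few lines. This is an assumed minimal design for illustration, not the dissertation's actual implementation; the class, method names, and the trivial validation check are hypothetical.

```python
# Minimal sketch (assumed design) of an autonomic component that runs an
# implicit self-test after each dynamic adaptation and rolls back when the
# new configuration fails runtime validation.

class AutonomicComponent:
    def __init__(self, config):
        self.config = config

    def self_test(self, config):
        # Placeholder runtime validation; a real system would execute
        # test cases and collect coverage/timing data here.
        return config.get("threads", 1) > 0

    def adapt(self, new_config):
        old = self.config
        self.config = new_config
        if not self.self_test(new_config):  # validate the adaptation
            self.config = old               # roll back on test failure
            return False
        return True

c = AutonomicComponent({"threads": 4})
rejected = c.adapt({"threads": 0})   # invalid adaptation is rolled back
accepted = c.adapt({"threads": 8})   # valid adaptation is kept
```

Reusing the self-management infrastructure for this validation step is what keeps the development and performance overhead low in the approach described above.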

Relevance:

100.00%

Publisher:

Abstract:

Knowledge is one of the most important assets for surviving in the modern business environment. The effective management of that asset mandates continuous adaptation by organizations, and requires employees to strive to improve the company's work processes. Organizations attempt to coordinate their unique knowledge with traditional means as well as in new and distinct ways, and to transform it into innovative resources better than those of their competitors. As a result, how to manage the knowledge asset has become a critical issue for modern organizations, and knowledge management is considered the most feasible solution. Knowledge management is a multidimensional process that identifies, acquires, develops, distributes, utilizes, and stores knowledge. However, many related studies focus only on fragmented or limited knowledge-management perspectives. To make knowledge management more effective, it is important to identify the qualitative and quantitative issues that underlie the challenge of effective knowledge management in organizations. The main purpose of this study was to integrate the fragmented knowledge management perspectives into a holistic framework, which includes knowledge infrastructure capability (technology, structure, and culture) and knowledge process capability (acquisition, conversion, application, and protection), based on Gold's (2001) study. Additionally, because the effect of incentives, widely acknowledged as a prime motivator in facilitating the knowledge management process, was missing from the original framework, this study included incentives in the knowledge management framework. This study also examined the relationship with organizational performance from the standpoint of the Balanced Scorecard, which includes the customer-related, internal business process, learning and growth, and perceptual financial aspects of organizational performance in the Korean business context.
Moreover, this study examined the relationship with objective financial performance by calculating the Tobin's q ratio. Lastly, this study compared group differences between larger and smaller organizations, and between manufacturing and nonmanufacturing firms. Since this study was conducted in Korea, the original instrument was translated into Korean through the back-translation technique. A confirmatory factor analysis (CFA) was used to examine the validity and reliability of the instrument. To identify the relationship between knowledge management capabilities and organizational performance, structural equation modeling (SEM) and multiple regression analysis were conducted. A Student's t test was conducted to examine the mean differences. The results of this study indicated that there is a positive relationship between effective knowledge management and organizational performance. However, no empirical evidence was found to suggest that knowledge management capabilities are linked to objective financial performance, which remains a topic for future review. Additionally, findings showed that knowledge management is affected by an organization's size, but not by the type of organization. The results of this study are valuable in establishing a valid and reliable survey instrument, as well as in providing strong evidence that knowledge management capabilities are essential to improving organizational performance, and in making important recommendations for future research.
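The Tobin's q ratio mentioned above is commonly approximated as the market value of equity plus the book value of liabilities, divided by the book value of total assets; the abstract does not state which variant the study used, and the figures below are invented for illustration.

```python
# Tobin's q, in a common approximation (not necessarily the exact
# formula used in the study). Input figures are invented.

def tobins_q(market_cap, total_liabilities, total_assets):
    """(Market value of equity + book liabilities) / book total assets."""
    return (market_cap + total_liabilities) / total_assets

q = tobins_q(market_cap=500.0, total_liabilities=300.0, total_assets=1000.0)
# q > 1 is usually read as the market valuing the firm above the
# replacement cost of its assets, e.g. due to intangibles like knowledge.
```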

Relevance:

90.00%

Publisher:

Abstract:

Through the real case of implementing an ERP in a micro SME, this final-year project seeks to adapt software implementation and project management techniques to the socioeconomic reality of the vast majority of Spanish companies, using an open-source ERP as the guiding element, in order to achieve the threefold objective of maintaining the quality ratios desired in any software engineering project, at costs affordable to a micro SME, while satisfying the business needs.

Relevance:

90.00%

Publisher:

Abstract:

Managing software maintenance is rarely a precise task because of uncertainties concerning resources and service descriptions. Even when a well-established maintenance process is followed, the risk of delayed tasks remains if new services are not precisely described or if resources change during process execution. Moreover, a task delayed at an early process stage may translate into a different delay at the end of the process, depending on complexity or service reliability requirements. This paper presents a knowledge-based representation (Bayesian networks) of maintenance project delays, built on specialists' experience, and a corresponding tool to help manage software maintenance projects. (c) 2006 Elsevier Ltd. All rights reserved.
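The Bayesian-network idea can be illustrated with a toy two-node network, where task complexity influences the probability of a delay. This is only a sketch of the technique; the paper's actual network structure and probabilities come from specialists' experience, and the numbers below are invented.

```python
# Toy two-node Bayesian network: Complexity -> Delay.
# Probability tables are invented for illustration; the real model
# would be elicited from maintenance specialists.

p_complexity = {"high": 0.3, "low": 0.7}    # prior P(Complexity)
p_delay_given = {"high": 0.6, "low": 0.1}   # P(Delay=yes | Complexity)

# Marginal probability of a delay, summing over complexity states.
p_delay = sum(p_complexity[c] * p_delay_given[c] for c in p_complexity)

# After observing a delay, update the belief about complexity
# via Bayes' rule: P(Complexity=high | Delay=yes).
posterior_high = p_complexity["high"] * p_delay_given["high"] / p_delay
```

A project manager would read the posterior as diagnostic evidence: observing a delay makes high complexity considerably more likely than the prior suggested.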

Relevance:

90.00%

Publisher:

Abstract:

To simplify computer management, several system administrators are adopting advanced techniques to manage software configuration on grids, but the tight coupling between hardware and software makes every PC an individually managed entity, lowering scalability and increasing the cost of managing hundreds or thousands of PCs. This paper discusses the feasibility of a distributed virtual machine environment named FlexLab, a new approach to computer management that combines virtualization and distributed system architectures as the basis of a management system. FlexLab extends the coverage of a computer management solution beyond client operating system limitations and offers a convenient hardware abstraction that decouples software from hardware, simplifying computer management. The results obtained in this work indicate that FlexLab overcomes the limitations imposed by the coupling between software and hardware, simplifying the management of homogeneous and heterogeneous grids. © 2009 IEEE.
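The decoupling argument above can be pictured as managing software images independently of physical hosts and binding them only at assignment time, so any PC can run any managed image. This is an assumed illustration of the general idea, not FlexLab's actual design; all names and identifiers are hypothetical.

```python
# Hypothetical illustration of hardware/software decoupling: software
# images are managed per role, and hosts are bound to an image only at
# assignment time. Image names and host IDs are invented.

images = {"engineering": "img-eng-v3", "office": "img-office-v7"}
assignments = {}    # host id -> image id

def assign(host, role):
    """Bind a host to the current image for a role; the host itself
    carries no role-specific software state."""
    assignments[host] = images[role]
    return assignments[host]

assign("pc-001", "engineering")
assign("pc-002", "office")

# Updating a role's image affects every host on the next assignment,
# without touching individual machines.
images["office"] = "img-office-v8"
```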