851 results for software management


Relevance:

60.00%

Publisher:

Abstract:

Testing in agile methods is poorly defined in the literature, and companies implement quality and testing practices in varying ways. The goal of this thesis was to find a model for organizing testing in agile methods. The goal was approached by collecting experiences, alternatives and models from the literature. The findings were compared with the practical solutions and views of software companies, obtained through a survey conducted in two software companies using the Scrum process model. The literature review showed that a quality plan and a testing strategy can be used to identify the testing methods needed in each context. The methods should be examined and planned along the time horizons of iterative processes (heartbeat, iteration, release and strategic). The main finding of the study was that the companies lacked a broader, systematic view of developing testing and quality. The need for new quality and testing measures was not analyzed systematically, the use of existing measures was not developed over the long term, and the companies had no overall picture of how the required measures relate to one another. The study also showed that the teams were unable to take responsibility for quality because too few quality-related activities were performed within the iterations. There was also room for improvement in adherence to the Scrum process model. Nevertheless, the companies showed willingness and ability to improve their practices once the problems had been identified. ACM Computing Classification System (CCS 1998): D.2.5 Testing and Debugging, D.2.9 Management, K.6.1 Project and People Management, K.6.3 Software Management

Relevance:

60.00%

Publisher:

Abstract:

Design and implementation of a data model that supports the inventory of the telecommunications network and its management from geographic information systems. The work includes the development of the clients and interfaces with other existing applications and the integration with the work processes. Innovative aspects are considered that allow the system to be fed back by its own users, admitting solutions based on free software or on the development processes established for that type of software.
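
A loose illustration of the kind of geo-referenced inventory data model the abstract describes (entirely hypothetical; the thesis schema is not reproduced here, and all class and field names are invented) could look like this:

```python
# Hypothetical sketch: minimal inventory model for geo-referenced network elements.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class NetworkElement:
    element_id: str
    kind: str                 # e.g. "cabinet", "splice", "duct"
    lon: float                # geographic location, so a GIS client can place it
    lat: float
    attributes: Dict[str, str] = field(default_factory=dict)

@dataclass
class Cable:
    cable_id: str
    endpoints: List[str]      # element_ids of the two ends
    fibre_count: int

inventory = {
    "elements": [NetworkElement("C-001", "cabinet", 2.1686, 41.3874)],
    "cables": [Cable("F-100", ["C-001", "C-002"], fibre_count=48)],
}
```

A GIS client could then query elements by coordinates and follow cable endpoints back into the inventory.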

Relevance:

60.00%

Publisher:

Abstract:

Research shows that changes in information technology and an increasing acquisition of new software have led to underlying problems that can affect heterogeneous software licensing environments and large organizations. The underlying problem in the larger context is software management; software license management is one branch of that larger problem. Large organizations such as a municipality are affected by this underlying problem because of the complexity of the organization's environment. Applying changes in the area of software licensing is impossible without also changing the organizational process that accompanies it. The case study's assignment covers a new and extensive area of software license management that can be very instructive and a valuable experience to take part in. The thesis describes what a municipality's software license management looks like and the problems with the current license management process. Preparatory work with a literature study, together with the data generation methods interviews, document studies and observations, is used to study the case in depth. The goal is to identify the current problems, analyze them and give recommendations for measures that the studied case object, the IT office of Falu Kommun, can use. A recommendation for a clear license management process model is considered a good academic contribution, since the problem of software license management is a general one. The result of the thesis is a process model for software license management for organizations with IT service customers. It is a generic solution that could also be used by other municipalities and similar organizations.
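
The bookkeeping problem underneath the license management process described above can be illustrated with a tiny compliance check (a hypothetical sketch, not the process model the thesis recommends; product names and figures are invented): compare the number of installed copies of each product with the number of licenses owned.

```python
# Hypothetical sketch: compare installations against purchased licenses per product.
from collections import Counter

def compliance_report(installations, purchased):
    """installations: list of product names, one entry per installed copy.
    purchased: dict mapping product name -> number of licenses owned."""
    installed = Counter(installations)
    return {
        product: purchased.get(product, 0) - count   # negative = under-licensed
        for product, count in installed.items()
    }

report = compliance_report(
    ["OfficeSuite", "OfficeSuite", "CadTool"],
    {"OfficeSuite": 5, "CadTool": 0},
)
print(report)  # {'OfficeSuite': 3, 'CadTool': -1}
```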

Relevance:

60.00%

Publisher:

Abstract:

The Stroop Effect Detector (SED) is an assistive software tool, developed through the Desarrollo Tecnológico Social (Social Technological Development) research program of the Universidad de Las Palmas de Gran Canaria, that helps professionals in the neuropsychology field identify problems in an individual's orbitofrontal cortex, using the technique devised by Schenker in 1998. As a methodological basis, the knowledge acquired in the different subjects of the adaptation course for the degree in Computer Engineering, such as Software Management, Software Architecture and User Interface Development, was used, together with knowledge previously acquired in the Programming and Software Engineering I and II courses. Since computing knowledge alone was not enough to carry out this project, I also researched the problem itself, gathering information from other scientific documents on the topic and consulting professionals in the field, such as Dr. Ayoze Nauzet González Hernández, neurologist at the Doctor Negrín hospital in Las Palmas de Gran Canaria, and the psychologist José Manuel Rodríguez Pellejero, who discussed this problem in a class of the Master's in Teacher Training that I am currently taking. This work presents the Stroop test with Schenker's two versions: RCN (Reading Color Names) and NCW (Naming Colored Words). As a general rule, both tests present the studied subjects with words (names of colors) written in ink of a different color. The RCN test consists of reading the written word while ignoring the color of its font and trying not to be influenced by it. In contrast, the NCW test requires naming the color of the ink in which the word is written without being influenced by the fact that the word itself is the name of a color.
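
The two Schenker variants differ only in which attribute of the stimulus the subject must report. As a rough illustration of that distinction (not the SED implementation itself; the class, function and color names are hypothetical), a trial could be modelled like this:

```python
import random
from dataclasses import dataclass

COLORS = ["red", "green", "blue", "yellow"]

@dataclass
class StroopTrial:
    word: str   # the color name printed on screen
    ink: str    # the ink color the word is rendered in

    def expected_answer(self, variant: str) -> str:
        # RCN: read the written word, ignoring the ink color.
        # NCW: name the ink color, ignoring the written word.
        return self.word if variant == "RCN" else self.ink

def make_trial(congruent: bool = False) -> StroopTrial:
    word = random.choice(COLORS)
    ink = word if congruent else random.choice([c for c in COLORS if c != word])
    return StroopTrial(word, ink)

# Example: an incongruent trial such as the word "red" printed in blue ink
trial = make_trial(congruent=False)
print(trial, "| RCN ->", trial.expected_answer("RCN"), "| NCW ->", trial.expected_answer("NCW"))
```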

Relevance:

60.00%

Publisher:

Abstract:

E-learning platforms are used for the administrative support of teaching and learning processes; building on the internet, they offer functions for distributing teaching and learning materials and for communication between teachers and learners. Numerous scientific contributions and market studies deal with the multi-criteria evaluation of these software products to provide an informational basis for strategic investment decisions. By contrast, instruments for the cost-oriented controlling of e-learning platforms are addressed only marginally, if at all. This contribution therefore takes up the concept of Total Cost of Ownership (TCO), which offers a methodological starting point for creating cost transparency for e-learning platforms. Building on the conceptual foundations, problem areas and application potentials for the cost-oriented controlling of LMS are identified. For the software-supported construction and analysis of TCO models, the open source tool TCO-Tool is introduced and its application is discussed using a synthetic case example. Finally, further development perspectives of the TCO concept in the context of e-learning are identified. The topic presented is not only of theoretical interest but also addresses the growing need of practitioners in education for instruments that provide an informational basis for investment and divestment decisions in the e-learning environment.
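
As a minimal illustration of the TCO idea taken up above (this is not the TCO-Tool; the cost categories and figures are invented for the example), total cost of ownership simply aggregates one-off and recurring cost items over the platform's planning horizon:

```python
# Hypothetical TCO sketch: sum direct and indirect cost items over a planning horizon.
from dataclasses import dataclass

@dataclass
class CostItem:
    name: str
    annual_cost: float          # recurring cost per year
    one_off_cost: float = 0.0   # initial cost in year 0

def total_cost_of_ownership(items, years: int) -> float:
    return sum(item.one_off_cost + item.annual_cost * years for item in items)

# Invented example figures for an e-learning platform over 5 years
items = [
    CostItem("licences", annual_cost=12000),
    CostItem("hosting", annual_cost=6000),
    CostItem("support staff", annual_cost=30000),
    CostItem("migration project", annual_cost=0, one_off_cost=25000),
]
print(total_cost_of_ownership(items, years=5))  # -> 265000.0
```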

Relevance:

60.00%

Publisher:

Abstract:

The PAMELA (Phased Array Monitoring for Enhanced Life Assessment) SHM™ system is an integrated, embedded system based on ultrasonic guided waves, consisting of several electronic devices and one system manager controller. The data collected by all PAMELA devices in the system must be transmitted to the controller, which is responsible for carrying out the advanced signal processing to obtain SHM maps. PAMELA devices consist of hardware based on a Virtex 5 FPGA with a PowerPC 440 running an embedded Linux distribution. Therefore, in addition to performing tests and transmitting the collected data to the controller, PAMELA devices can perform local data processing or pre-processing (reduction, normalization, pattern recognition, feature extraction, etc.). Local data processing decreases the data traffic over the network and reduces the CPU load of the external computer. It is even possible for PAMELA devices to run autonomously, performing scheduled tests and communicating with the controller only when structural damage is detected or when programmed to do so. Each PAMELA device integrates a software management application (SMA) that allows the developer to download his own algorithm code and add the new data processing algorithm to the device. The development of the SMA is done in a virtual machine with an Ubuntu Linux distribution that includes all the software tools needed to carry out the entire development cycle. The Eclipse IDE (Integrated Development Environment) is used to develop the SMA project and to write the code of each data processing algorithm. This paper presents the developed software architecture and describes the steps necessary to add new data processing algorithms to the SMA in order to increase the processing capabilities of PAMELA devices. An example of basic damage index estimation using a delay-and-sum algorithm is provided.
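
The abstract closes with a delay-and-sum example. The following is a rough, generic sketch of that family of imaging algorithms, not the actual PAMELA/SMA code: the array geometry, sampling rate, wave velocity and the damage-index definition are all assumptions. Each candidate pixel is scored by summing the residual-signal samples whose travel time matches the actuator-pixel-sensor path.

```python
import numpy as np

def delay_and_sum(signals, actuators, sensors, grid, velocity, fs):
    """Generic delay-and-sum imaging sketch (hypothetical).

    signals   : array of shape (n_actuators, n_sensors, n_samples) with the
                baseline-subtracted (residual) signals, sampled at fs Hz
    actuators, sensors, grid : arrays of (x, y) coordinates in metres
    velocity  : assumed guided-wave group velocity in m/s
    """
    image = np.zeros(len(grid))
    for p, pixel in enumerate(grid):
        acc = 0.0
        for i, a in enumerate(actuators):
            for j, s in enumerate(sensors):
                # travel time along the actuator -> pixel -> sensor path
                dist = np.linalg.norm(pixel - a) + np.linalg.norm(pixel - s)
                k = int(round(dist / velocity * fs))
                if k < signals.shape[2]:
                    acc += abs(signals[i, j, k])
        image[p] = acc  # simple damage index for this pixel
    return image

# Tiny invented example: 1 actuator, 2 sensors, random residual signals
rng = np.random.default_rng(0)
signals = rng.normal(size=(1, 2, 2000))
actuators = np.array([[0.0, 0.0]])
sensors = np.array([[0.5, 0.0], [0.0, 0.5]])
grid = np.array([[0.25, 0.25], [0.4, 0.1]])
print(delay_and_sum(signals, actuators, sensors, grid, velocity=1500.0, fs=1e6))
```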

Relevance:

60.00%

Publisher:

Abstract:

This work proposes a study of the acoustic characteristics of artisanal and industrial brick manufactured in Ecuador, considering its particular characteristics with regard to the quality of the raw material, as well as the fact that artisanal production is not regulated: although a number of regulations exist, they are not always followed by the artisanal producer, which gives the material particular properties. The main idea of this work is to generate initial reference data on the acoustic properties of artisanal and industrial brick, since no study of this nature exists in Ecuador on the subject. A further aim is to create the mechanisms needed for a possible extension of the study to other materials native to Ecuador, so as to build a database of their acoustic properties. Another important aspect of this research is becoming familiar with measurement techniques, the handling of equipment and various software, and the use and comparison of standards.

Relevance:

60.00%

Publisher:

Abstract:

This deliverable (D1.4) is an intermediate document, expressly included to inform the first project review about RAGE’s methodology of software asset creation and management. The final version of the methodology description (D1.1) will be delivered in Month 29. The document explains how the RAGE project defines, develops, distributes and maintains a series of applied gaming software assets that it aims to make available. It describes a high-level methodology and infrastructure that are needed to support the work in the project as well as after the project has ended.

Relevance:

40.00%

Publisher:

Abstract:

Although the service-oriented paradigm has been well established in the technical domain for quite some time now, service governance is still considered a research gap. To ensure adequate governance, there is a necessity to manage services as first-class assets throughout the lifecycle. Now that the concept of service-orientation is also increasingly applied on the business level to structure an organisation's capabilities, the problem has become an even bigger challenge. This paper presents a generic business and software service lifecycle and aligns it with the common management layers in organisations. Using service analysis as an example, it moreover illustrates how activities in the service lifecycle may vary on lower levels of granularity depending on the focus on business or software services.
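
As a small illustration of what treating services as first-class assets with a lifecycle can mean in practice (the phase names and transitions here are generic assumptions, not the lifecycle the paper defines), each service can carry an explicit lifecycle state with controlled transitions:

```python
# Hypothetical sketch: generic service lifecycle phases with allowed transitions.
ALLOWED = {
    "analysis": {"design"},
    "design": {"implementation"},
    "implementation": {"operation"},
    "operation": {"operation", "retirement"},
    "retirement": set(),
}

class Service:
    def __init__(self, name: str):
        self.name = name
        self.phase = "analysis"

    def advance(self, target: str) -> None:
        # Reject transitions that skip or reverse lifecycle phases.
        if target not in ALLOWED[self.phase]:
            raise ValueError(f"{self.name}: cannot move from {self.phase} to {target}")
        self.phase = target

svc = Service("customer-billing")
svc.advance("design")
svc.advance("implementation")
print(svc.phase)  # implementation
```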

Relevance:

40.00%

Publisher:

Abstract:

This report demonstrates:
• Development of software agents for data mining
• Linking data mining to the building model in virtual environments
• Linking knowledge development with the building model in virtual environments
• Demonstration of software agents for data mining
• Populating with maintenance data

Relevance:

40.00%

Publisher:

Abstract:

This report presents the demonstration of a software agents prototype system for improving maintenance management [AIMM], including:
• Developing and implementing a user-focused approach for mining the maintenance data of buildings.
• Refining the development of a multi-agent system for data mining in virtual environments (Active Worlds) by developing and implementing a filtering agent on the results obtained from applying data mining techniques to the maintenance data (see the sketch after this list).
• Integrating the filtering agent within the multi-agent system in an interactive networked multi-user 3D virtual environment.
• Populating maintenance data and discovering new rules of knowledge.
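
As a loose illustration of what a filtering agent over data-mining output might do (the rule format, field names and thresholds are assumptions, not the AIMM implementation), it could discard mined maintenance rules that fall below chosen support and confidence thresholds:

```python
# Hypothetical sketch of a filtering agent over mined maintenance rules.
def filter_rules(rules, min_support=0.05, min_confidence=0.7):
    """Keep only rules that meet the agent's thresholds.

    rules: iterable of dicts like
        {"if": "pump_age > 10y", "then": "seal failure", "support": 0.08, "confidence": 0.82}
    """
    return [r for r in rules
            if r["support"] >= min_support and r["confidence"] >= min_confidence]

mined = [
    {"if": "pump_age > 10y", "then": "seal failure", "support": 0.08, "confidence": 0.82},
    {"if": "floor = 3", "then": "light fault", "support": 0.01, "confidence": 0.40},
]
print(filter_rules(mined))  # only the first rule passes
```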

Relevance:

40.00%

Publisher:

Abstract:

"This column is distinguished from previous Impact columns in that it concerns the development tightrope between research and commercial take-up and the role of the LGPL in an open source workflow toolkit produced in a University environment. Many ubiquitous systems have followed this route, (Apache, BSD Unix, ...), and the lessons this Service Oriented Architecture produces cast yet more light on how software diffuses out to impact us all." Michiel van Genuchten and Les Hatton Workflow management systems support the design, execution and analysis of business processes. A workflow management system needs to guarantee that work is conducted at the right time, by the right person or software application, through the execution of a workflow process model. Traditionally, there has been a lack of broad support for a workflow modeling standard. Standardization efforts proposed by the Workflow Management Coalition in the late nineties suffered from limited support for routing constructs. In fact, as later demonstrated by the Workflow Patterns Initiative (www.workflowpatterns.com), a much wider range of constructs is required when modeling realistic workflows in practice. YAWL (Yet Another Workflow Language) is a workflow language that was developed to show that comprehensive support for the workflow patterns is achievable. Soon after its inception in 2002, a prototype system was built to demonstrate that it was possible to have a system support such a complex language. From that initial prototype, YAWL has grown into a fully-fledged, open source workflow management system and support environment

Relevance:

40.00%

Publisher:

Abstract:

With the wide diffusion of Business Process Management (BPM) automation suites, the possibility of managing process-related risks arises. This paper introduces an innovative framework for process-related risk management and describes a working implementation realized by extending the YAWL system. The framework covers three aspects of risk management: risk monitoring, risk prevention, and risk mitigation. Risk monitoring functionality is provided through a sensor-based architecture, where sensors are defined at design time and used at run time for monitoring purposes. Risk prevention functionality is provided in the form of suggestions about what should be executed, by whom, and how, through the use of decision trees. Finally, risk mitigation functionality is provided as a sequence of remedial actions (e.g. reallocating, skipping, or rolling back a work item) that should be executed to restore the process to a normal situation.
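
As a rough sketch of the sensor-based monitoring idea described above (not the actual YAWL extension; the sensor conditions, likelihood values and threshold are assumptions), a sensor defined at design time is essentially a predicate evaluated against run-time case data:

```python
# Hypothetical sketch of design-time risk sensors evaluated at run time.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class RiskSensor:
    name: str
    condition: Callable[[Dict], bool]   # predicate over current case data
    likelihood: float                   # estimated risk likelihood if triggered

def monitor(sensors, case_data, threshold=0.5):
    """Return the names of sensors whose condition fires at or above the risk threshold."""
    return [s.name for s in sensors if s.condition(case_data) and s.likelihood >= threshold]

sensors = [
    RiskSensor("overtime", lambda c: c["elapsed_h"] > c["deadline_h"], likelihood=0.8),
    RiskSensor("cost overrun", lambda c: c["cost"] > 1.2 * c["budget"], likelihood=0.6),
]
print(monitor(sensors, {"elapsed_h": 30, "deadline_h": 24, "cost": 900, "budget": 1000}))
# -> ['overtime']
```

In the same spirit, prevention and mitigation could be layered on top: a triggered sensor would feed a decision tree for suggestions, or start a sequence of remedial actions.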

Relevance:

40.00%

Publisher:

Abstract:

Background: With the advances in DNA sequencer-based technologies, it has become possible to automate several steps of the genotyping process, leading to increased throughput. To efficiently handle the large amounts of genotypic data generated and to help with quality control, there is a strong need for a software system that can help with the tracking of samples and the capture and management of data at the different steps of the process. Such systems, while serving to manage the workflow precisely, also encourage good laboratory practice by standardizing protocols and by recording and annotating data from every step of the workflow.

Results: A laboratory information management system (LIMS) has been designed and implemented at the International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) that meets the requirements of a moderately high throughput molecular genotyping facility. The application is designed as modules and is simple to learn and use. It leads the user through each step of the process, from starting an experiment to storing the output data from the genotype detection step with auto-binning of alleles, thus ensuring that every DNA sample is handled in an identical manner and that all the necessary data are captured. The application keeps track of DNA samples and generated data. Data entry into the system is through the use of forms for file uploads. The LIMS provides functions to trace back to the electrophoresis gel files or sample source for any genotypic data and for repeating experiments. The LIMS is presently being used for the capture of high throughput SSR (simple-sequence repeat) genotyping data from the legume (chickpea, groundnut and pigeonpea) and cereal (sorghum and millets) crops of importance in the semi-arid tropics.

Conclusions: A laboratory information management system is available that has been found useful in the management of microsatellite genotype data in a moderately high throughput genotyping laboratory. The application with source code is freely available for academic users and can be downloaded from http://www.icrisat.org/bt-software-d-lims.htm
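
As an illustration of the auto-binning step mentioned in the Results (a generic sketch, not the ICRISAT LIMS code; the bin width, reference size and fragment sizes are invented), raw fragment sizes from the genotype detection step can be grouped into allele bins spaced by the marker's repeat length:

```python
# Hypothetical sketch of allele auto-binning for SSR genotyping data.
from collections import defaultdict

def auto_bin(fragment_sizes, repeat_length=2, reference=100.0):
    """Group raw fragment sizes (in bp) into allele bins spaced by the SSR repeat length."""
    bins = defaultdict(list)
    for size in fragment_sizes:
        bin_id = round((size - reference) / repeat_length)
        bins[reference + bin_id * repeat_length].append(size)
    return dict(bins)

# Invented raw sizes for one marker across several DNA samples
print(auto_bin([100.2, 99.8, 102.1, 104.0, 103.8]))
# -> {100.0: [100.2, 99.8], 102.0: [102.1], 104.0: [104.0, 103.8]}
```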