978 results for software framework
Abstract:
This paper proposes a highly automated mechanism for easily building an undo facility into a new or existing system. Our proposal is based on the observation that, for a large set of operators, it is not necessary to store in-memory object states or executed system commands to undo an action; storing the input data is enough. This strategy greatly simplifies the design of the undo process and encapsulates most of the required functionality in a framework structure similar to that of many object-oriented programming frameworks.
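A minimal sketch of that observation (names hypothetical, not the paper's actual framework): keep only the inputs of each operation and recompute the state by replaying them from the initial state, so undo just drops the last input.

```python
# Undo by input replay: no object snapshots, no inverse commands.
class ReplayUndo:
    def __init__(self, initial_state, apply_op):
        self._initial = initial_state
        self._apply = apply_op      # pure function: (state, input) -> state
        self._inputs = []           # the only history we store

    def do(self, op_input):
        self._inputs.append(op_input)
        return self.state()

    def undo(self):
        if self._inputs:
            self._inputs.pop()      # forget the last input...
        return self.state()         # ...and replay the rest

    def state(self):
        state = self._initial
        for op_input in self._inputs:
            state = self._apply(state, op_input)
        return state

# Example: a text buffer whose only operator appends text.
buf = ReplayUndo("", lambda s, t: s + t)
buf.do("Hello, ")
buf.do("world")
assert buf.undo() == "Hello, "
```

In practice a scheme like this would checkpoint intermediate states to bound the replay cost, but the stored history itself remains just the inputs.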
Abstract:
During the last century, much research in the business, marketing and technology fields has developed the innovation research line, and a large amount of knowledge can be found in the literature. Currently, the importance of systematic and open approaches to managing the available innovation sources is well established in many knowledge fields, including the software engineering sector, where organizations need to absorb and exploit as many innovative ideas as possible to succeed in the current competitive environment. This Master's Thesis presents a study of the innovation sources in the software engineering field. The main research goals of this work are the identification and relevance assessment of the available innovation sources and the understanding of trends in their usage. Firstly, a general review of the literature has been conducted in order to define the research area and to identify research gaps. Secondly, a Systematic Literature Review (SLR) has been adopted as the research method in order to report reliable conclusions by systematically collecting quality evidence about the innovation sources in the software engineering field. This contribution provides resources, built on the empirical studies included in the SLR, to support a systematic identification and an adequate exploitation of the innovation sources most suitable for the software engineering field. Several artefacts, such as lists, taxonomies and relevance assessments of the innovation sources most suitable for software engineering, have been built, and their usage trends over the last decades and their particularities in some countries and knowledge fields, especially software engineering, have been researched. This work can help researchers, managers and practitioners of innovative software organizations to systematize critical activities in innovation processes, such as the identification and exploitation of the most suitable opportunities. Innovation researchers can use the results of this work to conduct studies in the innovation sources research area, whereas organization managers and software practitioners can use the provided outcomes in a systematic way to improve their innovation capability, consequently increasing the value created by the processes they run to provide products and services useful to their environment. In summary, this Master's Thesis researches the innovation sources in the software engineering field, providing useful resources to support effective innovation sources management. Several aspects should still be studied in depth to increase the accuracy of the presented results and to obtain more resources built on empirical knowledge. This can be supported by the INnovation SOurces MAnagement (InSoMa) framework, which is introduced in this work in order to encourage open and systematic approaches to identifying and exploiting the innovation sources in the software engineering field.
Abstract:
Networks are the substance of human communities and societies; they constitute the structural framework on which we relate to each other, and they determine the way we do it, the way information is disseminated, and even the way people get things done. But the prominence of networks goes beyond the importance they acquire in social networks. Networks are found within numerous known structures, from protein interactions inside a cell to router connections on the internet. Social networks have been present on the internet since its beginnings, in email for example: inside every email client there are contact lists that, added together, constitute a social network. However, it was with the emergence of social network sites (SNS) that these kinds of web applications reached general awareness. SNS are now among the most popular and highest-traffic sites on the web. Sites such as Facebook and Twitter hold astonishing figures for active users, traffic and time invested in the site. Nevertheless, SNS functionalities are not restricted to contact-oriented social networks, those focused on building your own list of contacts and interacting with them. There are other examples of sites that leverage social networking to foster user activity and engagement around other types of content. Examples range from early SNS such as Flickr, the photography-oriented networking site, to Github, the most popular social code repository nowadays. It is not an accident that the popularity of these websites comes hand in hand with their social network capabilities. The scenario is even richer, since SNS interact with each other, sharing and exporting contact lists and authentication services, as well as providing a valuable channel to publicize user activity on other sites. These interactions are very recent, and they are still finding their way to the point where SNS overcome their condition of data silos and reach a stage of full interoperability between sites, in the same way email and instant messaging networks work today.
This work introduces a technology that allows developers to rapidly build any kind of distributed social network website. It first introduces a new technique for creating middleware that can provide any kind of content management feature to a popular model-view-controller (MVC) web development framework, Ruby on Rails. It provides developers with tools that allow them to abstract away the complexities of content management and focus on the development of specific content. This same technique is also used to provide the framework with social network features. Additionally, a new metric of code reuse is described to assert the validity of the kind of middleware that is emerging in MVC frameworks. Secondly, the characteristics of the most popular SNS are analysed in order to find the common patterns they exhibit. This analysis is the ground for defining the requirements of a framework for building social network websites. Next, a reference architecture that supports the features found in the analysis is proposed. This architecture has been implemented in a software component, called Social Stream, and tested in several social networks, both contact- and content-oriented, in local neighbourhood associations and in EU-funded research projects. It has also been the basis for several Master's theses. It has been released as free and open source software, has obtained a growing community, and is now being used beyond the scope of this work. The social architecture has enabled the definition of a new social-based access control model that overcomes some of the limitations currently present in access control models for social networks. Furthermore, paradigms and case studies in distributed SNS have been analysed, gathering a set of features for distributed social networking. Finally, the architecture of the framework has been extended to support distributed SNS capabilities. Its implementation has also been validated in EU-funded research projects.
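Social Stream itself is built on Ruby on Rails; the following sketch (in Python, with invented names) only illustrates the middleware idea described above: a reusable layer supplies generic content-management behaviour once, and the developer writes only what is specific to each content type.

```python
# Generic content-management concerns, provided once by the middleware.
import datetime

class Content:
    """Authorship, timestamps and comments for any content class."""
    def __init__(self, author):
        self.author = author
        self.created_at = datetime.datetime.now()
        self.comments = []

    def comment(self, author, text):
        self.comments.append((author, text))

class Photo(Content):
    """Only the content-specific part is written by the developer."""
    def __init__(self, author, image_path):
        super().__init__(author)
        self.image_path = image_path

photo = Photo("alice", "holiday.jpg")
photo.comment("bob", "Nice shot!")
```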
Abstract:
Background: Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy through high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks; specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task specific, and they do not provide a clear path when one wants to shape a new command line tool out of a prototype shell script. Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as the temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the need to touch or recompile existing code. Conclusion: In this article, we describe the general design of MIA, a general purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
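A hedged sketch of the string-based description mechanism the abstract mentions; the "name:key=value" syntax and the registry below are illustrative assumptions, not MIA's actual plug-in interface. The point is that the same textual description a shell script passes on the command line can select and parameterize a filter inside a compiled program.

```python
# A plug-in registry keyed by filter name.
FILTERS = {}

def register(name):
    def wrap(cls):
        FILTERS[name] = cls
        return cls
    return wrap

@register("gauss")
class Gauss:
    def __init__(self, sigma="1"):
        self.sigma = float(sigma)
    def run(self, image):
        return f"gauss(sigma={self.sigma})({image})"

@register("median")
class Median:
    def __init__(self, width="3"):
        self.width = int(width)
    def run(self, image):
        return f"median(w={self.width})({image})"

def create_filter(description):
    """Turn 'name:key=value,key=value' into a filter instance."""
    name, _, params = description.partition(":")
    kwargs = dict(p.split("=") for p in params.split(",") if p)
    return FILTERS[name](**kwargs)

# A pipeline described entirely by strings, as a shell script would do it:
image = "input.png"
for desc in ["gauss:sigma=2", "median:width=5"]:
    image = create_filter(desc).run(image)
```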
Abstract:
Quantification of neurotransmission Single-Photon Emission Computed Tomography (SPECT) studies of the dopaminergic system can be used to track, stage and facilitate early diagnosis of the disease. The aim of this study was to implement QuantiDOPA, semi-automatic quantification software for clinical routine that reconstructs and quantifies neurotransmission SPECT studies using radioligands which bind the dopamine transporter (DAT). To this end, a workflow-oriented framework for biomedical imaging (GIMIAS) was employed. QuantiDOPA allows the user to perform a semi-automatic quantification of striatal uptake by following three stages: reconstruction, normalization and quantification. QuantiDOPA is a useful tool for semi-automatic quantification in DAT SPECT imaging, and it has proven simple and flexible.
Abstract:
Currently there are applications that simulate the behavior of bacteria in different habitats and the processes that take place inside them, facilitating their study and experimentation without the need for an actual laboratory. One of the most used open source applications for simulating bacterial populations is iDynoMiCS (individual-based Dynamics of Microbial Communities Simulator), an agent-based simulator that allows working with several computational models of 2D and 3D bacteria in biofilms. This simulator allows great freedom through a large number of configurable variables regarding the environment, chemical reactions and other important details of the simulation. Among these features there is a very basic framework to simulate plasmid conjugation.
Plasmids are DNA molecules physically separate from the cell's chromosome, commonly found as small circular, double-stranded DNA molecules, that are replicated, conjugated and transcribed independently of chromosomal DNA. They are normally present in prokaryotes and sometimes in eukaryotes, in which case they are called episomes. Plasmids are mechanisms external to the cell's basic operation and as such, in the majority of cases, they confer various evolutionary advantages on the host cell, such as antibiotic resistance. It is therefore important to further study plasmids and the possibilities they present. However, the operational framework of the iDynoMiCS plasmid simulation is too simple and does not allow operations more complex than the analysis of the spread of a plasmid in the community. This project was conceived to resolve this particular deficiency of iDynoMiCS. This work discusses, develops and implements the changes necessary for iDynoMiCS to simulate plasmid conjugation satisfactorily and more realistically, and to make it possible to solve various logic operations based on plasmids, such as plasmid-based genetic circuits. The results obtained are analyzed against several relevant studies and compared with those obtained with the original iDynoMiCS code. Additionally, a study comparing the efficiency of sensing a substance with two different genetic circuits is presented. This work may also be of interest to the LIA group of the Faculty of Informatics of the Universidad Politécnica de Madrid, which is participating in the European project BACTOCOM, focused on the study of plasmid conjugation and genetic circuits.
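The following toy model (not iDynoMiCS code; all names invented) illustrates the kind of rule involved in agent-based plasmid conjugation: at each step, a cell carrying a plasmid may pass a copy to a plasmid-free neighbour with some probability, which is how a trait such as antibiotic resistance spreads through the colony.

```python
import random

class Bacterium:
    def __init__(self, plasmids=()):
        self.plasmids = set(plasmids)

def conjugation_step(cells, neighbours, p_transfer=0.1):
    """cells: list of Bacterium; neighbours: i -> indices of contacts."""
    transfers = []
    for i, donor in enumerate(cells):
        for plasmid in donor.plasmids:
            for j in neighbours(i):
                if plasmid not in cells[j].plasmids and random.random() < p_transfer:
                    transfers.append((j, plasmid))
    for j, plasmid in transfers:   # apply after scanning: a synchronous update
        cells[j].plasmids.add(plasmid)

# Toy 1D colony where each cell touches its left and right neighbours:
colony = [Bacterium({"ampR"})] + [Bacterium() for _ in range(9)]
for _ in range(50):
    conjugation_step(colony, lambda i: [k for k in (i - 1, i + 1) if 0 <= k < 10])
print(sum("ampR" in c.plasmids for c in colony), "resistant cells")
```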
Abstract:
In this dissertation, after verifying that neither the definition of Agile methodologies nor the current approaches that support them, such as Scrum or XP, give guidance for the stages of software development prior to the definition of the first development iteration, we proceeded to study the state of the art of Inception techniques, that is, techniques for dealing with this early phase of a project that help guide its development. From the analysis of these Inception techniques, we defined what we consider to be the essential properties of an Inception framework. With that list at hand, we found that no current Inception framework supports all of these properties, and that no software application on the market does so either. Finally, having confirmed the above gaps, we defined the Inception framework "Agile Incepti-ON", with all the practices necessary to meet the requirements specified above. In addition, a software application, called "Agile Dojo", was developed to support the practices defined in the Inception framework.
Abstract:
Innovation in Software intensive Systems (SiSs) is becoming relevant for several reasons: software is embedded in many sectors such as automotive, robotics, mobile phones and health care. Firms need knowledge about the factors affecting innovation to increase the probability of success in their product development, and the assessment of innovation in software products is a powerful mechanism to capture this knowledge. Therefore, companies need to assess products from an innovation perspective to reduce the gap between their developed products and the market. This is even more relevant in the case of SiSs, where real time, timeliness, complexity, interoperability, reactivity, and resource sharing are critical features of a new system. Many authors have analysed product innovation assessment and some schemas have been developed, but they are not specific to SiSs; in addition, there is no consensus about the factors or the procedures for performing an assessment. Therefore, it makes sense to work on the definition of an innovation evaluation framework focused on Software intensive Systems. This thesis identifies the elements needed to build a framework to assess software products from the innovation perspective. Two components have been identified as parts of the framework: a reference model and an adaptive, customizable tool to perform the assessment and to position product innovation. The reference model is composed of four main elements characterizing product innovation assessment: concepts, innovation models, assessment questionnaires and product assessment. The reference model provides the umbrella under which instances of product innovation assessment models can be defined; these instances can be assessed and positioned through questionnaires in the proposed tool, which also automates the assessment and the positioning of innovation. The reference model has been rigorously built by applying conceptual modelling and view integration together with qualitative research methods. The tool has been used to assess products such as Skype through models instantiated from the reference model.
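A hedged sketch of how an instance of such a reference model might be represented (all names and the 1..5 scale are hypothetical, not the thesis' actual schema): innovation factors are measured through questionnaire answers and aggregated into a product-level innovation position.

```python
from dataclasses import dataclass

@dataclass
class Question:
    factor: str          # innovation factor the question measures
    text: str

@dataclass
class Assessment:
    product: str
    answers: dict        # question text -> score on a 1..5 scale

    def position(self, questions):
        """Average score per factor gives the product's innovation profile."""
        per_factor = {}
        for q in questions:
            per_factor.setdefault(q.factor, []).append(self.answers[q.text])
        return {f: sum(v) / len(v) for f, v in per_factor.items()}

questions = [
    Question("timeliness", "Does the product reach the market at the right time?"),
    Question("interoperability", "Does it integrate with existing systems?"),
]
skype = Assessment("Skype", {q.text: 4 for q in questions})
print(skype.position(questions))
```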
Abstract:
A gene expression atlas is an essential resource to quantify and understand the multiscale processes of embryogenesis in time and space. The automated reconstruction of a prototypic 4D atlas for vertebrate early embryos, using multicolor fluorescence in situ hybridization with nuclear counterstain, requires dedicated computational strategies. To this end, we designed an original methodological framework implemented in a software tool called Match-IT. With only minimal human supervision, our system is able to gather gene expression patterns observed in different analyzed embryos with phenotypic variability and map them onto a series of common 3D templates over time, creating a 4D atlas. This framework was used to construct an atlas composed of 6 gene expression templates from a cohort of zebrafish early embryos spanning 6 developmental stages, from 4 to 6.3 hpf (hours post fertilization). It included 53 specimens, 181,415 detected cell nuclei and the segmentation of 98 gene expression patterns observed in 3D for 9 different genes. In addition, interactive visualization software, Atlas-IT, was developed to inspect, supervise and analyze the atlas. Match-IT and Atlas-IT, including user manuals, representative datasets and video tutorials, are publicly and freely available online. We also propose computational methods and tools for the quantitative assessment of the gene expression templates at the cellular scale, with the identification, visualization and analysis of coexpression patterns, synexpression groups and their dynamics through developmental stages.
Abstract:
Today, society depends more than ever on technology, but investment in security remains scarce and computer systems are still far from safe. Cryptography is one of the cornerstones of security, so a considerable amount of effort has recently been devoted to the development of tools for the evaluation and improvement of cryptographic algorithms. One of these tools is EasyCrypt, developed recently at the IMDEA Software Institute in response to the increasing need for reliable formal verification tools for cryptography. This work focuses on improving EasyCrypt's term rewriting system by replacing it with a symbolic abstract machine. To that end, we first study and implement two widely known abstract machines, the Krivine Machine and the ZAM, introducing some variations and studying their differences from a practical point of view.
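For illustration, here is a minimal Krivine machine for the untyped lambda calculus with de Bruijn indices; it performs weak head reduction, keeping unevaluated arguments as closures on a stack. This is the textbook construction, not EasyCrypt's actual reducer.

```python
from dataclasses import dataclass

@dataclass
class Var:  # de Bruijn index into the environment
    n: int

@dataclass
class Lam:  # abstraction; body lives under one more binder
    body: object

@dataclass
class App:  # application
    fun: object
    arg: object

def krivine(term):
    """Reduce a closed term to weak head normal form.

    A state is (term, env, stack); env and stack hold closures,
    i.e. (term, env) pairs, so arguments are evaluated lazily.
    """
    env, stack = [], []
    while True:
        if isinstance(term, App):             # push the argument closure
            stack.append((term.arg, env))
            term = term.fun
        elif isinstance(term, Lam) and stack:  # bind the top of the stack
            env = [stack.pop()] + env
            term = term.body
        elif isinstance(term, Var):           # enter the n-th closure
            term, env = env[term.n]
        else:                                 # weak head normal form
            return term, env, stack

# (\x. x) (\y. y)  reduces to  \y. y
identity = Lam(Var(0))
whnf, _, _ = krivine(App(identity, identity))
assert isinstance(whnf, Lam)
```

The ZAM differs chiefly in being eager and in how it manages arguments and returns; comparing such machines on concrete workloads is exactly the kind of practical study the abstract describes.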
Abstract:
Nowadays, there are sound methods and tools which implement the Model-Driven Development (MDD) approach satisfactorily. However, MDD approaches focus on representing and generating code for functionality, behaviour and persistence, putting interaction, and more specifically usability, in second place. If we aim to include usability features in a system developed with an MDD tool, we need to manually extend the generated code.
Abstract:
Context: Replication plays an important role in experimental disciplines. There are still many uncertainties about how to proceed with replications of SE experiments. Should replicators reuse the baseline experiment materials? How much liaison should there be among the original and replicating experimenters, if any? What elements of the experimental configuration can be changed for the experiment to be considered a replication rather than a new experiment? Objective: To improve our understanding of SE experiment replication, in this work we propose a classification intended to provide experimenters with guidance about what types of replication they can perform. Method: The research approach followed is structured according to the following activities: (1) a literature review of experiment replication in SE and in other disciplines, (2) identification of typical elements that compose an experimental configuration, (3) identification of different replication purposes and (4) development of a classification of experiment replications for SE. Results: We propose a classification of replications which provides experimenters in SE with guidance about what changes they can make in a replication and, based on these, what verification purposes such a replication can serve. The proposed classification helped to accommodate opposing views within a broader framework, and it is capable of accounting for replications ranging from less to more similar with regard to the baseline experiment. Conclusion: The aim of replication is to verify results, but different types of replication serve specific verification purposes and afford different degrees of change. Each replication type helps to discover particular experimental conditions that might influence the results. The proposed classification can be used to identify changes in a replication and, based on these, understand the level of verification.
Abstract:
Mosaics are high-resolution images obtained aerially and employed in several scientific research areas, such as, for example, environmental monitoring and precision agriculture. Although many high resolution maps are produced on commercial demand, they can also be acquired with commercial aerial vehicles, which provide more experimental autonomy and availability. As for mosaicing-based aerial mission planners, there is hardly any free-of-charge software. Therefore, this paper presents a framework designed with open source tools and libraries as an alternative to commercial tools for carrying out mosaicing tasks.
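As a hedged illustration of the kind of computation such a planner performs (parameter names assumed, not taken from the paper): the camera footprint follows from altitude and field of view, and parallel flight lines are spaced so that adjacent image swaths overlap enough to be stitched into a mosaic.

```python
import math

def footprint_width(altitude_m, fov_deg):
    """Ground width covered by one image at the given altitude."""
    return 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)

def flight_lines(area_width_m, altitude_m, fov_deg, sidelap=0.3):
    """x-offsets of parallel lines; adjacent swaths overlap by `sidelap`."""
    swath = footprint_width(altitude_m, fov_deg)
    step = swath * (1 - sidelap)
    n = math.ceil(area_width_m / step)
    return [swath / 2 + i * step for i in range(n)]

# A 200 m wide field surveyed at 50 m altitude with a 60-degree-FOV camera:
print(flight_lines(200, 50, 60))
```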
Abstract:
The use of cloud computing offers a new paradigm that aims to provide computing services without the need for large infrastructures and, above all, without the implied complexities of cost, security and maintenance. While it has positioned itself in recent years as an innovative platform in mass-consumption and organizational technology, it can also be an important research topic in certain areas of interest, such as software development, presenting in that field a series of advantages and stimulating challenges that can be explored. In this vein, this research aims to present the current situation regarding the use of cloud computing as a software development environment, focusing on its PaaS layer and covering the conceptual working model, recent perspectives, problems and general implications of its use as a plausible tool in software development projects. The analysis of the different topics covered is generally intended to provide objective, critical and quantitative information about the concentration of research related to PaaS, as well as a recent interpretative framework that offers a referential perspective for future related research.
Abstract:
In the last two decades, the relevance of knowledge acquisition and dissemination processes within companies has been highlighted, and consequently the study of these processes and the implementation of technologies that facilitate them has generated growing interest in the scientific community. In order to ease and optimize knowledge acquisition and dissemination, hierarchical organizations have evolved towards a flatter configuration, with more agile network structures, decreasing dependence on a centralized authority and building team-work-oriented organizations. At the same time, Web 2.0 collaboration tools such as blogs and wikis have developed rapidly. These collaboration tools are characterized by a strong social component and can reach their full potential when deployed in flat organizational structures. Web 2.0, based on user participation, arose as a concept opposed to the website-based technologies of the late 90s. Fortune 500 companies (HP, IBM, Xerox, Cisco) adopted these tools immediately, even though there was no unanimity about their real usefulness or how to measure it. This is partly because the factors that drive employees to adopt these tools are not properly understood, which has led to implementation failures due to the existence of certain barriers. Given this situation, and in view of the theoretical advantages that these Web 2.0 collaboration tools seem to have for companies, managers and the scientific community are showing increasing interest in answering the following question: which factors contribute to the adoption by a company's employees of Web 2.0 tools for collaboration? The answer to this question is complex, since these tools are relatively new in business environments and they enable a move from information management to knowledge management. The approach taken in this work to answer this question is the application of technology adoption models, which are based on individuals' perceptions of different aspects related to the use of technology. From this perspective, the main objective of this thesis is to study the factors influencing the adoption of blogs and wikis in companies, by means of a unified, theoretical, predictive model of technology adoption, with a holistic approach grounded in the literature on technology adoption models and in the particularities of the tools under study and their specific context. This theoretical model makes it possible to determine the factors that predict the intention to use these tools and their actual use.
The research is structured in five parts: introduction to the research subject, development of the theoretical framework, design of the research work, empirical analysis, and conclusions. These five parts are developed sequentially throughout seven chapters: part one corresponds to chapter 1, part two to chapters 2 and 3, part three to chapters 4 and 5, part four to chapter 6, and part five to chapter 7. Chapter 1 focuses on the statement of the research problem and on the main and secondary objectives to be met throughout the work. Likewise, it introduces the concept of collaboration and its link with the Web 2.0 collaborative tools under study, as well as the technology adoption models; it then presents the justification for the research, its objectives and the work plan. After introducing the research topic, chapter 2 reviews the evolution of the main existing technology adoption models (IDT, TRA, SCT, TPB, DTPB, C-TAM-TPB, UTAUT, UTAUT2), describing their foundations and the factors they employ. Building on the models presented in chapter 2, chapter 3 studies those factors adapted to the context of the Web 2.0 collaborative tools under study, blogs and wikis. To facilitate understanding of the final model, the factors are grouped into four types: technological factors, control factors, socio-normative factors, and other factors specific to collaborative tools. Chapter 4 selects the factors that are most appropriate for studying the adoption of collaborative tools and defines a model that specifies the relationships between them; these relationships become the working hypotheses to be tested in the empirical study. Chapter 5 specifies the characteristics of the empirical work carried out to test the hypotheses stated in chapter 4. The research is social and exploratory in nature, and it is based on a quantitative empirical study analysed with multivariate analysis techniques. This chapter describes the construction of the scales of the measurement instrument and the data collection methodology, presents a detailed analysis of the sample, and checks for the existence of bias attributable to the measurement method, known as Common Method Bias. Chapter 6 corresponds to the analysis of results; it first presents the statistical technique employed, PLS-SEM, as a multivariate analysis tool with predictive capabilities, as well as the methodology used to validate the measurement model and the structural model, the requirements the sample must meet, and the thresholds of the parameters considered. The second part of chapter 6 carries out the empirical analysis of the data for the two samples, one for blogs and one for wikis, in order to validate the research hypotheses proposed in chapter 4. Finally, chapter 7 reviews the degree of fulfilment of the objectives set out in chapter 1 and presents the theoretical, methodological and practical contributions derived from the work. It then presents the general conclusions, detailed for each group of factors, together with the practical recommendations that can be drawn to guide the implementation of these tools in real situations. The final part of the chapter includes the limitations of the study and suggests a number of possible lines of future work, along with the partial research results obtained during the course of the research.