998 results for Software Acquisition


Relevance: 30.00%

Abstract:

Service providers make use of cost-effective wireless solutions to identify, localize, and possibly track users through their carried mobile devices (MDs), in order to support added services such as geo-advertisement, security, and management. Indoor and outdoor hotspot areas play a significant role for such services; however, GPS does not work in many of these areas. To solve this problem, service providers leverage available indoor radio technologies, such as WiFi, GSM, and LTE, to identify and localize users. We focus our research on passive services provided by third parties, which are responsible for (i) data acquisition and (ii) processing, and on network-based services, where (i) and (ii) are done inside the serving network. For a better understanding of the parameters that affect indoor localization, we investigate several factors that affect indoor signal propagation for both Bluetooth and WiFi technologies. For GSM-based passive services, we first developed a data acquisition module: a GSM receiver that can overhear GSM uplink messages transmitted by MDs while remaining invisible. A set of optimizations was made to the receiver components to support wideband capturing of the GSM spectrum while operating in real time. Processing the wide GSM spectrum is made possible by a proposed distributed processing approach over an IP network. Then, to overcome the lack of information about tracked devices' radio settings, we developed two novel localization algorithms that rely on proximity-based solutions to estimate device locations in real environments. Given the challenges that indoor environments pose to radio signals, such as NLOS reception and multipath propagation, we developed an original algorithm to detect and remove contaminated radio signals before they are fed to the localization algorithm. To improve the localization algorithm, we extended our work with a hybrid approach that uses both WiFi and GSM interfaces to localize users. For network-based services, we used a software implementation of an LTE base station to develop our algorithms, which characterize the indoor environment before applying the localization algorithm. Experiments were conducted without any special hardware, prior knowledge of the indoor layout, or offline calibration of the system.
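
The abstract does not reproduce the algorithms themselves, so the following is only a minimal sketch of the proximity-based idea it mentions: a weighted centroid over anchor positions, with weights derived from received signal strength. The path-loss exponent and the anchor data are assumptions, not values from the thesis.

```python
import numpy as np

def weighted_centroid(anchors, rssi_dbm, exponent=2.0):
    """Proximity-based position estimate: anchors with stronger RSSI
    (presumably closer) receive larger weights.

    anchors  : (N, 2) array of known transmitter positions [m]
    rssi_dbm : (N,) received signal strength per anchor [dBm]
    exponent : assumed path-loss exponent (environment dependent)
    """
    anchors = np.asarray(anchors, dtype=float)
    rssi = np.asarray(rssi_dbm, dtype=float)
    # Convert dBm to a relative linear "proximity" weight; any common
    # reference level cancels out after normalization.
    weights = 10.0 ** (rssi / (10.0 * exponent))
    weights /= weights.sum()
    return weights @ anchors

# Hypothetical example: three WiFi APs at known positions.
aps = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
print(weighted_centroid(aps, [-45.0, -60.0, -70.0]))
```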

Relevance: 30.00%

Abstract:

In this paper we present a scalable software architecture for on-line multi-camera video processing that guarantees a good trade-off between computational power, scalability, and flexibility. The software system is modular and its main blocks are the Processing Units (PUs) and the Central Unit. The Central Unit works as a supervisor of the running PUs, and each PU manages the acquisition phase and the processing phase. Furthermore, an approach to easily parallelize the desired processing application is presented. As a case study, we apply the proposed software architecture to a multi-camera system in order to efficiently manage multiple 2D object detection modules in a real-time scenario. System performance has been evaluated under different load conditions, such as the number of cameras and image sizes. The results show that the software architecture scales well with the number of cameras and works easily with different image formats while respecting the real-time constraints. Moreover, the parallelization approach can be used to speed up the processing tasks with a low level of overhead.
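
As a rough illustration of the PU/Central Unit split described above, the sketch below runs one worker process per camera under a supervising main process. The queue-based layout and placeholder "detection" step are assumptions for illustration, not the paper's implementation.

```python
import multiprocessing as mp

def processing_unit(cam_id, frames, results):
    """One PU: acquisition (here, pulling work from a queue) plus
    processing (here, a placeholder detection step) for one camera."""
    while True:
        frame = frames.get()
        if frame is None:            # shutdown signal from the Central Unit
            break
        results.put((cam_id, f"detections for {frame}"))

if __name__ == "__main__":
    frames, results = mp.Queue(), mp.Queue()
    # The Central Unit supervises the PUs: it spawns one process per
    # camera stream, distributes frames, and collects detections.
    pus = [mp.Process(target=processing_unit, args=(cid, frames, results))
           for cid in range(2)]
    for p in pus:
        p.start()
    for i in range(4):
        frames.put(f"frame-{i}")
    for _ in pus:                    # one shutdown signal per PU
        frames.put(None)
    for _ in range(4):
        print(results.get())
    for p in pus:
        p.join()
```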

Relevance: 30.00%

Abstract:

The objective of this project is to design a system capable of controlling the rotation speed of a DC motor as a function of the temperature value obtained from a sensor. To that end, a microcontroller generates a PWM signal whose duty cycle is a function of the measured temperature. The design phase has two clearly differentiated parts, concerning the hardware and the software. The hardware design can in turn be divided in two. First, circuitry had to be designed to adapt the voltage levels delivered by the temperature sensor to the levels required by the ADC, which digitizes the information for later processing by the microcontroller. This meant designing a circuit capable of correcting the offset and the slope of the sensor's voltage-temperature function, so as to fit it to the input range required by the ADC. Second, the circuit in charge of controlling the motor's rotation speed had to be designed. This circuit is based on a MOSFET transistor operated in switching mode and controlled by the PWM signal mentioned above. In this way, varying the duty cycle of the PWM signal proportionally varies the voltage across the motor, and therefore its rotation speed. As for the software design, the microcontroller was programmed to generate a PWM signal on one of its pins as a function of the value delivered by the ADC, whose input is connected to the conditioned voltage obtained from the circuit built to adapt the sensor output. The microcontroller is also used to display the measured temperature value on an LCD screen. An mbed development board, which includes the integrated microcontroller, was chosen for this project because it eases prototyping. Both parts were then integrated and the system was tested to verify correct operation. Since the result depends on the measured temperature, it was necessary to simulate temperature variations in order to check the results obtained at different temperatures; a hot-air gun was used for this purpose. Once operation was verified, the printed circuit board was designed as a final step. In conclusion, a system with an acceptable level of accuracy and precision, given the limitations of the system, was developed. SUMMARY: It is obvious that day by day people's daily lives depend more on technology and science. Tasks tend to be done automatically, making them simpler, and as a result user life is more comfortable. Every single task that can be controlled has an electronic system behind it. In this project, a control system based on a microcontroller was designed for a fan, allowing it to go faster when the temperature rises and to slow down as the environment gets colder. For this purpose, a microcontroller was programmed to generate a signal that controls the rotation speed of the fan depending on the data acquired from a temperature sensor. After testing the whole design developed in the laboratory, the next step taken was to build a prototype, which allows future improvements to the system that are discussed in the corresponding section of the thesis.
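
The core software logic is the mapping from measured temperature to PWM duty cycle. A minimal sketch of one such mapping follows; the linear ramp and the two thresholds are illustrative assumptions, not the project's actual calibration.

```python
def duty_cycle(temp_c, t_min=20.0, t_max=50.0):
    """Map a measured temperature [deg C] to a PWM duty cycle in [0, 1].
    Below t_min the motor is stopped; above t_max it runs at full speed;
    in between the duty cycle grows linearly with temperature. The
    thresholds are illustrative, not the values used in the project."""
    if temp_c <= t_min:
        return 0.0
    if temp_c >= t_max:
        return 1.0
    return (temp_c - t_min) / (t_max - t_min)

# 35 degC sits halfway between the thresholds -> 50 % duty cycle
print(duty_cycle(35.0))  # 0.5
```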

Relevance: 30.00%

Abstract:

This paper presents the main contributions of the 1st Symposium on Improvement Process Models and Software Quality of Public Administrations. The results obtained expose the need to promote the implementation of software maturity models and show possible advantages of their application in the software processes of Public Administrations. Specifically, the current status of two process areas was analyzed: Requirements Management and Subcontracting Management.

Relevance: 30.00%

Abstract:

A basic requirement of the data acquisition systems used in long-pulse fusion experiments is the real-time detection of physical events in signals. Developing such applications is usually a complex task, so it is necessary to develop a set of hardware and software tools that simplify their implementation. This type of application can be implemented in ITER using fast controllers. ITER is standardizing the architectures to be used for fast controller implementation. The standards chosen so far are PXIe architectures (based on PCIe) for the hardware and the EPICS middleware for the software. This work presents a methodology for implementing data acquisition and pre-processing using FPGA-based DAQ cards and shows how to integrate them into fast controllers using EPICS.
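
The abstract names EPICS as the software layer; as a rough sketch of what the client side of such a controller can look like, the snippet below monitors a process variable (PV) using the pyepics bindings. The PV name is invented, and a real fast controller would publish its own set of PVs.

```python
import time
import epics  # pyepics: Python bindings for EPICS Channel Access

def on_update(pvname=None, value=None, timestamp=None, **kw):
    # Called whenever the data-acquisition PV posts a new value.
    print(f"{pvname} -> {value} @ {timestamp}")

# Hypothetical PV name; the fast controller would define its own.
epics.camonitor("DAQ:CH0:AMPLITUDE", callback=on_update)
time.sleep(10)  # receive updates for a while
epics.camonitor_clear("DAQ:CH0:AMPLITUDE")
```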

Relevance: 30.00%

Abstract:

A course focused on the acquisition of integration competencies in ship production engineering, organized in collaboration with selected industry partners, is presented in this paper. The first part of the course is dedicated to project management: using MS-PROJECT, the students acquire skills in defining the work breakdown structure (WBS) and the organization breakdown structure (OBS) of engineering projects, through a series of examples of increasing complexity, the final one being the construction planning of a vessel. The second part of the course is dedicated to the use of a database manager, MS-ACCESS, in managing production-related information. A series of examples of increasing complexity is treated, the final one being the management of the piping database of a real vessel. This database consists of several thousand pipes, for which a production timing frame is defined, connecting this part of the course with the first one. Finally, the third part of the course is devoted to working with FORAN, an engineering production application developed by SENER and widely used in the shipbuilding industry. With this application, the structural elements where all the outfittings will be located are defined through cooperative work by the students, working simultaneously on the same 3D model. In this paper, specific details about the learning process are given. Surveys were administered to the students in order to get feedback on their experience as well as to assess their satisfaction with the learning process compared to more traditional ones. Results from these surveys are discussed in the paper.
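
To make the piping-database exercise concrete, here is a toy stand-in for the kind of table the students manage with MS-ACCESS, written with Python's built-in sqlite3 module. The schema, columns, and data are invented for illustration only.

```python
import sqlite3

# Toy stand-in for the course's piping database: each pipe carries
# routing data and a production time frame that links back to the WBS.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE pipe (
    id INTEGER PRIMARY KEY,
    block TEXT,            -- hull block where the pipe is installed
    diameter_mm REAL,
    start_week INTEGER,    -- production timing frame
    end_week INTEGER)""")
con.executemany("INSERT INTO pipe VALUES (?, ?, ?, ?, ?)",
                [(1, "B12", 50.0, 3, 4), (2, "B12", 80.0, 4, 6)])
# Which pipes of block B12 must be finished by week 5?
for row in con.execute(
        "SELECT id, diameter_mm FROM pipe "
        "WHERE block = 'B12' AND end_week <= 5"):
    print(row)
```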

Relevance: 30.00%

Abstract:

Background: Gray scale images make up the bulk of data in bio-medical image analysis, and hence the main focus of many image processing tasks lies in the processing of these monochrome images. With ever-improving acquisition devices, spatial and temporal image resolution increases, and data sets become very large. Various image processing frameworks exist that make the development of new algorithms easy by using high-level programming languages or visual programming. These frameworks are also accessible to researchers who have little or no background in software development, because they take care of otherwise complex tasks. Specifically, the management of working memory is handled automatically, usually at the price of requiring more of it. As a result, processing large data sets with these tools becomes increasingly difficult on workstation-class computers. One alternative to using these high-level processing tools is the development of new algorithms in a language like C++, which gives the developer full control over how memory is handled, but the resulting workflow for prototyping new algorithms is rather time-intensive and also not appropriate for a researcher with little or no knowledge of software development. Another alternative is to use command line tools that run image processing tasks, use the hard disk to store intermediate results, and provide automation through shell scripts. Although not as convenient as, e.g., visual programming, this approach is still accessible to researchers without a background in computer science. However, only few tools exist that provide this kind of processing interface; they are usually quite task-specific and do not provide a clear approach for shaping a new command line tool from a prototype shell script.

Results: The proposed framework, MIA, provides a combination of command line tools, plug-ins, and libraries that make it possible to run image processing tasks interactively in a command shell and to prototype using the corresponding shell scripting language. Since the hard disk serves as temporary storage, memory management is usually a non-issue in the prototyping phase. By using string-based descriptions for filters, optimizers, and the like, the transition from shell scripts to full-fledged programs implemented in C++ is also made easy. In addition, its design based on atomic plug-ins and single-task command line tools makes it easy to extend MIA, usually without the requirement to touch or recompile existing code.

Conclusion: In this article, we describe the general design of MIA, a general-purpose framework for gray scale image processing. We demonstrate the applicability of the software with example applications from three different research scenarios, namely motion compensation in myocardial perfusion imaging, the processing of high-resolution image data that arises in virtual anthropology, and retrospective analysis of treatment outcome in orthognathic surgery. With MIA, prototyping algorithms using shell scripts that combine small, single-task command line tools is a viable alternative to the use of high-level languages, an approach that is especially useful when large data sets need to be processed.
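
The prototyping style described above — small single-task tools chained through files on disk — can be sketched as follows. The tool names and flags below are placeholders, not MIA's actual command-line interface.

```python
import subprocess

# Prototype-style pipeline in the spirit the abstract describes: each
# step is a separate single-task command-line tool, and intermediate
# results live on disk rather than in working memory.
steps = [
    ["denoise-tool", "-i", "input.png",  "-o", "step1.png"],
    ["segment-tool", "-i", "step1.png",  "-o", "step2.png"],
    ["measure-tool", "-i", "step2.png",  "-o", "result.txt"],
]
for cmd in steps:
    subprocess.run(cmd, check=True)  # abort the pipeline on failure
```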

Relevance: 30.00%

Abstract:

The aim of the paper is to discuss the use of knowledge models to formulate general applications. First, the paper presents the recent evolution of the software field, where increasing attention is paid to conceptual modeling. Then, the current state of knowledge modeling techniques is described, where increased reliability is achievable through modern knowledge acquisition techniques and supporting tools. The KSM (Knowledge Structure Manager) tool is described next. First, the concept of a knowledge area is introduced as a building block, where methods to perform a collection of tasks are included together with the bodies of knowledge providing the basic methods to perform the basic tasks. Then, the CONCEL language to define vocabularies of domains and the LINK language for method formulation are introduced. Finally, the object-oriented implementation of a knowledge area is described, and a general methodology for application design and maintenance supported by KSM is proposed. To illustrate the concepts and methods, an example of a system for intelligent traffic management in a road network is described. This example is followed by a proposal of generalization for reuse of the resulting architecture. Finally, some concluding comments are offered about the feasibility of using the knowledge modeling tools and methods for general application design.
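
As a purely conceptual sketch of the "knowledge area" building block the abstract describes — a body of knowledge bundled with the methods (tasks) it supports — consider the following. The class layout and the traffic example data are illustrative assumptions, not KSM's actual object-oriented implementation.

```python
class KnowledgeArea:
    """A body of domain knowledge plus the methods that use it."""
    def __init__(self, name, knowledge_base):
        self.name = name
        self.knowledge_base = knowledge_base   # domain facts/rules
        self.methods = {}                      # task name -> callable

    def register_method(self, task, fn):
        self.methods[task] = fn

    def perform(self, task, **inputs):
        return self.methods[task](self.knowledge_base, **inputs)

# Hypothetical traffic-management example mirroring the paper's domain.
traffic = KnowledgeArea("traffic", {"ramp_limit": 30})
traffic.register_method(
    "diagnose",
    lambda kb, flow: "congestion" if flow > kb["ramp_limit"] else "ok")
print(traffic.perform("diagnose", flow=42))    # -> congestion
```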

Relevance: 30.00%

Abstract:

This paper describes a particular knowledge acquisition tool for the construction and maintenance of the knowledge model of an intelligent system for emergency management in the field of hydrology. This tool has been developed following an innovative approach aimed at end users unfamiliar with computer-oriented terminology. According to this approach, the tool is conceived as a document processor specialized in a particular domain (hydrology), in such a way that the whole knowledge model is viewed by the user as an electronic document. The paper first describes the characteristics of the knowledge model of the intelligent system and summarizes the problems that we found during the development and maintenance of such a model. Then, the paper describes the KATS tool, a software application that we have designed to help with this task, intended for users who are not experts in computer programming. Finally, the paper presents a comparison between KATS and other approaches for knowledge acquisition.

Relevance: 30.00%

Abstract:

EPICS (Experimental Physics and Industrial Control System) is a set of software tools and applications that provide a software infrastructure for building distributed data acquisition and control systems. There is currently an increase in the use of such systems in large physics experiments like ITER, ESS, and FREIA. In these experiments, advanced data acquisition systems using FPGA-based technology like FlexRIO are being used more and more frequently. In the particular case of ITER (International Thermonuclear Experimental Reactor), the instrumentation and control system is supported by CCS (CODAC Core System), based on the RHEL (Red Hat Enterprise Linux) operating system, and by the plant design specifications, in which every CCS element is defined, whether hardware, firmware, or software. This final degree project uses the methodology proposed by Sanz et al. in "Implementation of Intelligent Data Acquisition Systems for Fusion Experiments using EPICS and FlexRIO Technology" [1]. The final objective is to provide a document describing the process followed and the source code of the completed data acquisition system. The proposed methodology comprises two distinct stages. The first one consists of modelling the hardware with graphical design tools like LabVIEW FPGA, to be later implemented on the FlexRIO device. In the next stage, the design cycle is completed by creating an EPICS controller that manages the device using a generic device support layer named NDS (Nominal Device Support). This layer integrates the developed data acquisition system into CCS as an EPICS interface to the system. The use of FlexRIO technology entails the use of LabVIEW and LabVIEW FPGA.
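
From the operator's point of view, the finished system is driven through EPICS process variables exposed by the NDS layer. The sketch below shows what that can look like with the pyepics bindings; the PV names and the sampling rate are invented for illustration.

```python
from epics import PV

# Sketch of operating a DAQ device through its EPICS interface. The
# PV names below are hypothetical; the real NDS layer defines its own.
sampling = PV("DAQ:AI0:SAMPLINGRATE")
acquire  = PV("DAQ:AI0:ACQUIRE")

sampling.put(10000)   # request 10 kS/s; applied by the device support
acquire.put(1)        # start acquisition on the FlexRIO card
print(sampling.get(), acquire.get())
```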

Relevance: 30.00%

Abstract:

In this paper we want to point out, by means of a case study, the importance of incorporating some knowledge engineering techniques into the processes of software engineering. Specifically, we are referring to knowledge eduction techniques. We know the difficulty of requirements acquisition and its importance in minimising the risks of a software project, both in the development phase and in the maintenance phase. Use cases are generally used to capture the functional requirements. However, as we will show in this paper, this technique is insufficient when the problem domain knowledge exists only in the experts' minds. In this situation, combining use cases with eduction techniques in every development phase will let us discover the correct requirements.

Relevance: 30.00%

Abstract:

In the framework of the ITER Control Breakdown Structure (CBS), Plant System Instrumentation & Control (I&C) defines the hardware and software required to control one or more plant systems [1]. For diagnostics, most of the complex Plant System I&C is to be delivered by the ITER Domestic Agencies (DAs). As an example for the DAs, the ITER Organization (IO) has developed several use cases for diagnostics Plant System I&C that fully comply with the guidelines presented in the Plant Control Design Handbook (PCDH) [2]. One such use case is for neutron diagnostics, specifically the Fission Chamber (FC), which is responsible for delivering time-resolved measurements of neutron source strength and fusion power to aid in assessing the functional performance of ITER [3]. ITER will deploy four Fission Chamber units, each consisting of three individual FC detectors. Two of these detectors contain uranium-235 for neutron detection, while a third "dummy" detector provides gamma and noise detection. The neutron flux from each FC is measured by three methods:
- Counting Mode: measures the number of individual pulses and their location in the record. Pulse parameters (threshold and width) are user-configurable.
- Campbelling Mode (Mean Square Voltage): measures the RMS deviation of the signal amplitude from its average value.
- Current Mode: integrates the signal amplitude over the measurement period.
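
For illustration, the three measurement modes listed above can be computed offline on a digitized detector signal as follows. The sample rate, threshold, and synthetic pulse train are invented, and a real system would implement these modes in FPGA firmware rather than in Python.

```python
import numpy as np

fs = 1_000_000.0                          # sample rate [Hz] (assumed)
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 0.05, 100_000)   # noise floor
signal[::5_000] += 1.0                    # fake detector pulses

# Counting mode: number and position of pulses crossing a threshold.
threshold = 0.5
above = signal > threshold
pulse_positions = np.flatnonzero(above[1:] & ~above[:-1]) + 1

# Campbelling (mean square voltage) mode: RMS deviation of the signal
# amplitude from its average value.
campbell = np.sqrt(np.mean((signal - signal.mean()) ** 2))

# Current mode: integral of the signal over the measurement period.
current = signal.sum() / fs

print(len(pulse_positions), campbell, current)
```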

Relevance: 30.00%

Abstract:

Current fusion devices comprise multiple diagnostics and hundreds or even thousands of signals. This situation often makes distributed data acquisition systems the best approach. In this type of distributed system, one of the most important issues is the synchronization between signals, so that the temporal correlation between the acquired samples of all channels is as accurate as possible. In recent decades, many fusion devices have used different types of video cameras to provide inside views of the vessel during operation and to monitor plasma behavior. The synchronization between each video frame and the rest of the signals acquired from the other diagnostics is essential in order to follow the plasma evolution correctly, since all the information can then be analyzed jointly with accurate knowledge of its temporal correlation. The system described in this paper allows timestamping image frames in a real-time acquisition and processing system using IEEE 1588 clock distribution. The system has been implemented using FPGA-based devices together with a 1588-synchronized timing card (see Fig. 1). The solution is based on a previous system [1] that allows image acquisition and real-time image processing based on PXIe technology. This architecture is fully compatible with the ITER Fast Controllers [2] and offers integration with EPICS to control and monitor the entire system. However, that set-up is not able to timestamp the acquired frames, since the frame grabber module does not offer any type of timing input (IRIG-B, GPS, PTP). To address this shortcoming, an IEEE 1588 PXI timing device is used to provide an accurate way to synchronize distributed data acquisition systems using the Precision Time Protocol (PTP) defined in the IEEE 1588-2008 standard. This local timing device can be connected to a master clock device for global synchronization. The timing device has a timestamp buffer for each PXI trigger line and requires that a software application assign each frame the corresponding timestamp. This action is critical and cannot be achieved if the frame rate is high. To solve this problem, a solution has been designed that distributes the clock from the IEEE 1588 timing card to all FlexRIO devices [3]. This solution uses two PXI trigger lines, which provide the capacity to assign timestamps to every acquired frame and to register events in hardware in a deterministic way. The system provides a solution for timestamping frames so that they can be synchronized with the rest of the signals.
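
The software pairing step whose limits at high frame rates motivated the hardware-based clock distribution can be sketched as a FIFO join of two trigger-ordered queues. All names below are invented; this only illustrates the concept.

```python
from collections import deque

timestamps = deque()   # filled by the 1588-synchronized timing card
frames = deque()       # filled by the frame grabber, in trigger order

def on_trigger_timestamp(ts_ns):
    timestamps.append(ts_ns)

def on_frame(frame):
    frames.append(frame)

def paired():
    """Yield (timestamp, frame) pairs; both buffers are filled in
    trigger order, so a simple FIFO join keeps them aligned -- as long
    as software keeps up, which is exactly what fails at high rates."""
    while timestamps and frames:
        yield timestamps.popleft(), frames.popleft()

on_trigger_timestamp(1_700_000_000_000)   # nanoseconds, illustrative
on_frame("frame-0")
for ts, fr in paired():
    print(ts, fr)
```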

Relevance: 30.00%

Abstract:

Configuration tools based on high-level programming languages like LabVIEW allow the development of very complex data acquisition systems based on reconfigurable FPGA hardware in a short period of time. The standardization of the hardware/software design cycle and the use of tools like EPICS ease the integration with the ITER data acquisition and control platform, the Linux-based CODAC Core System (CCS). This project proposes a methodology that simplifies the full integration cycle of novel platforms like CompactRIO (cRIO), in which the functionality of the acquisition hardware can be modified by the user to fit specific requirements. The main objective of this MSc final project is to integrate a cRIO NI9159 system and several analog and digital I/O modules with EPICS and the CODAC Core System (CCS). The CCS consists of a set of software tools that simplify the integration of the instrumentation and control systems of the ITER experiment. To achieve this goal, the following tasks are carried out:
• Development of an FPGA-based data acquisition system on the CompactRIO hardware platform. This task comprises the configuration of the system and the implementation, in LabVIEW for FPGA, of the hardware needed to communicate with the I/O modules NI9205, NI9264, NI9401, NI9477, NI9426, NI9425, and NI9476.
• Implementation of a software driver using the asynDriver methodology to integrate the cRIO system with EPICS. This task requires defining all the records that EPICS demands and creating the appropriate interfaces that allow communication with the hardware.
• Description of the cRIO system and the EPICS driver in the ITER plant description tool named SDD. This automates the creation of the EPICS applications, called IOCs.
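
To give a feel for what an IOC serving device records looks like, the sketch below uses the pcaspy package to run a minimal soft IOC in Python rather than the asynDriver-based C/C++ device support used in the project. The PV name, prefix, and returned value are invented.

```python
from pcaspy import Driver, SimpleServer

prefix = "CRIO:"
pvdb = {"AI0": {"prec": 3}}   # one analog-input record, 3 decimals

class CrioDriver(Driver):
    def read(self, reason):
        if reason == "AI0":
            return 0.42       # placeholder: would query the cRIO here
        return self.getParam(reason)

if __name__ == "__main__":
    server = SimpleServer()
    server.createPV(prefix, pvdb)
    driver = CrioDriver()
    while True:
        server.process(0.1)   # serve Channel Access requests
```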

Relevance: 30.00%

Abstract:

In the last two decades, the relevance of knowledge acquisition and dissemination processes has been highlighted, and consequently the study of these processes and the implementation of the technologies that make them possible has generated growing interest in the scientific community. In order to ease and optimize knowledge acquisition and dissemination, hierarchical organizations have evolved towards a more horizontal configuration with more agile networked structures, decreasing the dependence on a centralized authority and building teamwork-oriented organizations. At the same time, Web 2.0 collaboration tools such as blogs and wikis have developed quickly. These collaboration tools are characterized by a strong social component and can reach their full potential when they are deployed in horizontal organizational structures. Web 2.0, based on user participation, arose as a concept challenging the technologies of the late 1990s, which were based on websites. Fortune 500 companies (HP, IBM, Xerox, Cisco) adopted the concept immediately, even though there was no unanimity about its real usefulness or how it could be measured. This is partly because the factors that drive employees to adopt these tools are not properly understood, which has led to implementation failures due to the existence of certain barriers. Given this situation, and given the theoretical advantages that these Web 2.0 collaboration tools seem to have for companies, managers and the scientific community are showing an increasing interest in answering the following question: which factors contribute to the decision of the employees of a company to adopt Web 2.0 tools for collaborative purposes? The answer is complex, since these tools are relatively new in business environments and allow a move from an information management approach to knowledge management.

In order to answer this question, the chosen approach applies technology adoption models, all of them based on the individual's perceptions of the different aspects related to technology usage. From this perspective, the main objective of this thesis is to study the factors influencing the adoption of blogs and wikis in a company. This is done using a unified, theoretical, predictive model of technology adoption with a holistic approach, built on the literature on technology adoption models and on the particularities of the tools under study in their specific context. This theoretical model makes it possible to determine the factors that predict both the intention to use these tools and their actual usage.

The research is structured in five parts: introduction to the research subject, development of the theoretical framework, research design, empirical analysis, and conclusions. The thesis develops these five parts sequentially across seven chapters: part one (chapter 1), part two (chapters 2 and 3), part three (chapters 4 and 5), part four (chapter 6), and part five (chapter 7). Chapter 1 focuses on the research problem statement and the main and secondary objectives to be achieved throughout the work. Likewise, the concept of collaboration and its link with the Web 2.0 collaborative tools is discussed, together with an introduction to technology adoption models; the chapter closes with the justification of the research, its objectives, and the work plan. After introducing the research topic, chapter 2 reviews the evolution of the main existing technology adoption models (IDT, TRA, SCT, TPB, DTPB, C-TAM-TPB, UTAUT, UTAUT2), explaining their foundations and the factors they employ. Building on the models set out in chapter 2, chapter 3 studies those same factors adapted to the context of the Web 2.0 collaborative tools under study, blogs and wikis; to make the final model easier to understand, the factors are grouped into four types: technological, control, socio-normative, and other factors specific to the collaborative tools. The first part of chapter 4 selects the factors that are most relevant for studying the adoption of collaborative tools, and the second part defines the theoretical model that specifies the relationships between the different factors; these relationships become the working hypotheses to be tested in the empirical study. Chapter 5 specifies the characteristics of the empirical work carried out to test the hypotheses stated in chapter 4. The research is social and exploratory in nature and is based on a quantitative empirical study whose analysis is carried out using multivariate analysis techniques. This chapter describes the construction of the scales of the measurement instrument and the data-gathering methodology, presents a detailed analysis of the sample, and checks for the existence of bias attributable to the measurement method, known as common method bias. The first part of chapter 6 presents the analysis of results; the statistical technique employed, PLS-SEM, is first explained as a multivariate analysis tool with predictive capability, together with the methodology used to validate the model in a two-stage analysis (the measurement model and the structural model), the requirements the sample must meet, and the thresholds of the parameters considered. In the second part of chapter 6, the empirical analysis of the data is performed for the two samples, one for blogs and the other for wikis, in order to validate the research hypotheses proposed in chapter 4. Finally, chapter 7 reviews the degree of fulfillment of the objectives set out in chapter 1 and presents the theoretical, methodological, and practical contributions derived from the work. Next come the general conclusions and the detailed conclusions for each group of factors, together with the practical recommendations that can be drawn to guide the implementation of these tools in real situations. The final part of the chapter includes the limitations of the study and suggests a number of possible future lines of research, along with the partial research results obtained during the project.