946 results for software failure prediction
Abstract:
Peritoneal transport characteristics and residual renal function require regular monitoring and subsequent adjustment of the peritoneal dialysis (PD) prescription. Prescription models should facilitate the prediction of the outcome of such adaptations for a given patient. In the present study, the prescription model implemented in the PatientOnLine software was validated in patients requiring a prescription change. This multicenter, international, prospective cohort study, conducted to validate a PD prescription model, included patients treated with continuous ambulatory peritoneal dialysis. Patients were examined with the peritoneal function test (PFT) to determine the outcome of their current prescription and the necessity for a prescription change. For these patients, a new prescription was modeled using the PatientOnLine software (Fresenius Medical Care, Bad Homburg, Germany). Two to four weeks after implementation of the new PD regimen, a second PFT was performed. The validation of the prescription model included 54 patients. Predicted and measured peritoneal Kt/V were 1.52 ± 0.31 and 1.66 ± 0.35, and total (peritoneal + renal) Kt/V values were 1.96 ± 0.48 and 2.06 ± 0.44, respectively. Predicted and measured peritoneal creatinine clearances were 42.9 ± 8.6 and 43.0 ± 8.8 L/1.73 m²/week, and total creatinine clearances were 65.3 ± 26.0 and 63.3 ± 21.8 L/1.73 m²/week, respectively. The analysis revealed a Pearson's correlation coefficient for peritoneal Kt/V of 0.911 and a Lin's concordance coefficient of 0.829; both coefficients were 0.853 for peritoneal creatinine clearance. Predicted and measured daily net ultrafiltration were 0.77 ± 0.49 and 1.16 ± 0.63 L/24 h, respectively (Pearson's correlation 0.518, Lin's concordance 0.402). Predicted and measured peritoneal glucose absorption were 125.8 ± 38.8 and 79.9 ± 30.7 g/24 h, respectively (Pearson's correlation 0.914, Lin's concordance 0.477). With good predictability of peritoneal Kt/V and creatinine clearance, the present model provides support for individual dialysis prescription in clinical practice. Peritoneal glucose absorption and ultrafiltration are less predictable and are likely influenced by additional clinical factors that must be taken into consideration.
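For readers who want to reproduce the agreement statistics reported above, here is a minimal Python sketch of Lin's concordance correlation coefficient alongside Pearson's r; the paired values below are illustrative, not the study data.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances, per Lin's definition
    cov = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

# Illustrative paired values (predicted vs. measured peritoneal Kt/V):
predicted = [1.2, 1.4, 1.5, 1.7, 1.9]
measured  = [1.3, 1.5, 1.6, 1.9, 2.1]
print("Pearson r :", np.corrcoef(predicted, measured)[0, 1])
print("Lin's CCC :", lin_ccc(predicted, measured))
```

Unlike Pearson's r, which only measures linear association, the CCC penalizes systematic offsets between prediction and measurement, which is why the two coefficients diverge for ultrafiltration and glucose absorption above.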
Abstract:
Acute-on-chronic liver failure (ACLF) is characterized by acute decompensation (AD) of cirrhosis, organ failure(s), and high 28-day mortality. We investigated whether assessments of patients at specific time points predicted their need for liver transplantation (LT) or the potential futility of their care. We assessed the clinical courses of 388 patients who had ACLF at enrollment, from February through September 2011, or during early (28-day) follow-up of the prospective multicenter European Chronic Liver Failure (CLIF) ACLF in Cirrhosis study. We assessed ACLF grades at different time points to define disease resolution, improvement, worsening, or a steady or fluctuating course. ACLF resolved or improved in 49.2% of patients, followed a steady or fluctuating course in 30.4%, and worsened in 20.4%. The 28-day transplant-free mortality was low to moderate (6%-18%) in patients with a nonsevere early course (final no ACLF or ACLF-1) and high to very high (42%-92%) in those with a severe early course (final ACLF-2 or -3), independently of initial grades. Independent predictors of course severity were the CLIF Consortium ACLF score (CLIF-C ACLFs) and the presence of liver failure (total bilirubin ≥12 mg/dL) at ACLF diagnosis. Eighty-one percent of patients reached their final ACLF grade by 1 week, allowing accurate prediction of short-term (28-day) and mid-term (90-day) mortality by ACLF grade at days 3-7. Among patients who underwent early LT, 75% survived for at least 1 year. Among patients with ≥4 organ failures or a CLIF-C ACLFs >64 at days 3-7 who did not undergo LT, mortality was 100% by 28 days. CONCLUSIONS: Assessment of ACLF patients at days 3-7 of the syndrome provides a tool to define the urgency of LT and a rational basis for discontinuing intensive care owing to futility.
Abstract:
BACKGROUND & AIMS: Cirrhotic patients with acute decompensation frequently develop acute-on-chronic liver failure (ACLF), which is associated with high mortality rates. Recently, a specific score for patients with ACLF was developed using the CANONIC study database. The aims of this study were to develop and validate the CLIF-C AD score, a specific prognostic score for hospitalised cirrhotic patients with acute decompensation (AD) but without ACLF, and to compare it with the Child-Pugh, MELD, and MELD-Na scores. METHODS: The derivation set included 1016 CANONIC study patients without ACLF. Proportional hazards models considering liver transplantation as a competing risk were used to identify score parameters. Estimated coefficients were used as relative weights to compute the CLIF-C ADs. External validation was performed in 225 cirrhotic AD patients. CLIF-C ADs was also tested for sequential use. RESULTS: Age, serum sodium, white-cell count, creatinine, and INR were selected as the best predictors of mortality. The C-index was higher for CLIF-C ADs than for the Child-Pugh, MELD, and MELD-Na scores at predicting 3- and 12-month mortality in the derivation, internal-validation, and external datasets. The ability of CLIF-C ADs to predict 3-month mortality improved when using data from days 2, 3-7, and 8-15 (C-index: 0.72, 0.75, and 0.77, respectively). CONCLUSIONS: The new CLIF-C ADs is more accurate than other liver scores in predicting prognosis in hospitalised cirrhotic patients without ACLF. CLIF-C ADs may therefore be used to identify a high-risk cohort for intensive management and a low-risk group that may be discharged early.
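A minimal sketch of the score-building recipe described here (fit a survival model, use the estimated coefficients as relative weights, assess discrimination with the C-index), using the lifelines library on synthetic data; the competing-risk adjustment for transplantation and the actual CANONIC coefficients are not reproduced.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(0)
n = 500
# Synthetic stand-ins for the five selected predictors.
df = pd.DataFrame({
    "age":        rng.normal(58, 10, n),
    "sodium":     rng.normal(136, 5, n),
    "wbc":        rng.normal(8, 3, n),
    "creatinine": rng.normal(1.2, 0.5, n),
    "inr":        rng.normal(1.4, 0.3, n),
})
# Survival times loosely driven by the covariates (illustration only).
risk = (0.03 * df["age"] - 0.05 * df["sodium"] + 0.08 * df["wbc"]
        + 0.6 * df["creatinine"] + 0.9 * df["inr"])
df["time"] = rng.exponential(np.exp(-(risk - risk.mean())) * 365)
df["event"] = rng.integers(0, 2, n)

# Fit the proportional hazards model; coefficients become the score weights.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
weights = cph.params_                 # estimated log-hazard coefficients
score = df[weights.index] @ weights   # linear predictor used as the score
# Discrimination: higher score = higher risk, hence the minus sign.
print("C-index:", concordance_index(df["time"], -score, df["event"]))
```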
Abstract:
BACKGROUND: Strategies to improve risk prediction are of major importance in patients with heart failure (HF). Fibroblast growth factor 23 (FGF-23) is an endocrine regulator of phosphate and vitamin D homeostasis associated with an increased cardiovascular risk. We aimed to assess the prognostic effect of FGF-23 on mortality in HF patients, with a particular focus on differences between patients with HF with preserved ejection fraction (HFpEF) and patients with HF with reduced ejection fraction (HFrEF). METHODS AND RESULTS: FGF-23 levels were measured in 980 patients with HF enrolled in the Ludwigshafen Risk and Cardiovascular Health (LURIC) study, including 511 patients with HFrEF and 469 patients with HFpEF, with a median follow-up time of 8.6 years. FGF-23 was additionally measured in a second cohort comprising 320 patients with advanced HFrEF. FGF-23 was independently associated with mortality, with an adjusted hazard ratio per 1-SD increase of 1.30 (95% confidence interval, 1.14-1.48; P<0.001) in patients with HFrEF, whereas no such association was found in patients with HFpEF (P for interaction=0.043). External validation confirmed the significant association with mortality, with an adjusted hazard ratio per 1-SD increase of 1.23 (95% confidence interval, 1.02-1.60; P=0.027). FGF-23 demonstrated increased discriminatory power for mortality in addition to N-terminal pro-B-type natriuretic peptide (C-statistic: 0.59 versus 0.63) and an improvement in the net reclassification index (39.6%; P<0.001). CONCLUSIONS: FGF-23 is independently associated with an increased risk of mortality in patients with HFrEF but not in those with HFpEF, suggesting a different pathophysiologic role in the two entities.
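The "hazard ratio per 1-SD increase" reported here simply comes from standardizing the biomarker before fitting the survival model. A hypothetical sketch with synthetic data, again using lifelines:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 300
fgf23 = rng.lognormal(4.5, 0.6, n)            # skewed biomarker, synthetic
df = pd.DataFrame({
    # z-scoring the biomarker makes exp(coef) the hazard ratio per 1 SD:
    "fgf23_sd": (fgf23 - fgf23.mean()) / fgf23.std(),
    "time":  rng.exponential(8.6 * 365, n),   # follow-up in days, synthetic
    "death": rng.integers(0, 2, n),
})
cph = CoxPHFitter().fit(df, duration_col="time", event_col="death")
print(cph.summary)                            # exp(coef) column = HR per 1 SD
```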
Abstract:
Objective. To determine whether transforming growth factor beta (TGF-β) receptor blockade using an oral antagonist has an effect on cardiac myocyte size in the hearts of transgenic mice with a heart failure phenotype. Methods. In this pilot experimental study, cardiac tissue sections from the hearts of transgenic mice overexpressing tumor necrosis factor (MHCsTNF mice), which have a heart failure phenotype, and wild-type mice, treated with an orally available TGF-β receptor antagonist, were stained with wheat germ agglutinin to delineate the myocyte cell membrane and imaged using fluorescence microscopy. Using MetaVue software, the cardiac myocyte circumference was traced and the cross-sectional area (CSA) of individual myocytes was measured. Measurements were repeated at the epicardial, mid-myocardial, and endocardial levels to ensure adequate sampling and to minimize the effect of regional variations in myocyte size. ANOVA with post-hoc pairwise comparisons was performed to assess differences between the drug-treated and diluent-treated groups. Results. There were no statistically significant differences in the average myocyte CSA measured at the epicardial, mid-myocardial, or endocardial levels between diluent-treated littermate control mice, drug-treated normal mice, diluent-treated transgenic mice, and drug-treated transgenic mice. There was also no difference in the average pan-myocardial cross-sectional area between any of the four groups. Conclusions. TGF-β receptor blockade using an oral TGF-β receptor antagonist does not alter myocyte size in MHCsTNF mice that have a heart failure phenotype.
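A minimal sketch of the ANOVA-with-post-hoc design described above, with Tukey HSD as an assumed post-hoc test; the CSA values are simulated, not the study measurements.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(2)
# Illustrative myocyte cross-sectional areas (um^2) for the four groups.
groups = {
    "wt_diluent": rng.normal(250, 30, 40),
    "wt_drug":    rng.normal(252, 30, 40),
    "tg_diluent": rng.normal(310, 35, 40),
    "tg_drug":    rng.normal(308, 35, 40),
}
print(f_oneway(*groups.values()))            # overall one-way ANOVA F-test
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels))     # post-hoc pairwise comparisons
```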
Abstract:
Sepsis is a significant cause of multiple organ failure and death in the burn patient, yet its identification in this population is confounded by chronic hypermetabolism and impaired immune function. The purpose of this study was twofold: 1) determine the ability of the systemic inflammatory response syndrome (SIRS) and American Burn Association (ABA) criteria to predict sepsis in the burn patient; and 2) develop a model representing the best combination of clinical predictors associated with sepsis in the same population. A retrospective, case-controlled, within-patient comparison of burn patients admitted to a single intensive care unit (ICU) was conducted for the period January 2005 to September 2010. Blood culture results were paired with clinical condition: "positive-sick", "negative-sick", and "screening-not sick". Data were collected for the 72 hours prior to each blood culture. The most significant predictors were evaluated using logistic regression, generalized estimating equations (GEE), and ROC area under the curve (AUC) analyses to assess model predictive ability. Bootstrapping methods were employed to evaluate potential model over-fitting. Fifty-nine subjects were included, representing 177 culture periods. SIRS criteria were not found to be associated with culture type, with an average of 98% of subjects meeting the criteria in the 3 days prior. ABA sepsis criteria differed significantly among culture types only on the day prior (p = 0.004). The variables identified for the model were: heart rate >130 beats/min, mean blood pressure <60 mmHg, base deficit <-6 mEq/L, temperature >36°C, use of vasoactive medications, and glucose >150 mg/dL. The model was significant in predicting the "positive culture-sick" and sepsis states, with AUCs of 0.775 (p < 0.001) and 0.714 (p < 0.001), respectively; comparatively, the ABA criteria AUCs were 0.619 (p = 0.028) and 0.597 (p = 0.035), respectively. SIRS criteria are not appropriate for identifying sepsis in the burn population. The ABA criteria perform better, but only for the day prior to positive blood culture results. The time period useful for diagnosing sepsis using clinical criteria may be limited to 24 hours. A combination of predictors is superior to individual variable trends, yet algorithms or computer support will be necessary for clinicians to find such models useful.
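A simplified sketch of the modelling pipeline described (logistic regression, ROC AUC, and a bootstrap check of stability), on synthetic stand-in data with the six predictors named above; the GEE step for within-patient correlation is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 177  # one row per culture period, as in the study design
X = np.column_stack([
    rng.normal(110, 25, n),   # heart rate (beats/min)
    rng.normal(70, 12, n),    # mean blood pressure (mmHg)
    rng.normal(-2, 4, n),     # base deficit (mEq/L)
    rng.normal(37.5, 1, n),   # temperature (deg C)
    rng.integers(0, 2, n),    # vasoactive medication use
    rng.normal(140, 40, n),   # glucose (mg/dL)
])
y = rng.integers(0, 2, n)     # 1 = "positive culture-sick"; illustrative labels

model = LogisticRegression(max_iter=1000).fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))

# Bootstrap the AUC to gauge over-fitting, as the study did.
aucs = []
for _ in range(200):
    idx = rng.integers(0, n, n)                       # resample with replacement
    m = LogisticRegression(max_iter=1000).fit(X[idx], y[idx])
    aucs.append(roc_auc_score(y, m.predict_proba(X)[:, 1]))
print("bootstrap AUC 2.5-97.5%:", np.percentile(aucs, [2.5, 97.5]))
```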
Abstract:
We introduce two probabilistic, data-driven models that predict a ship's speed and the situations in which a ship is likely to become stuck in ice, based on the joint effect of ice features such as the thickness and concentration of level ice, ice ridges, and rafted ice; ice compression is also considered. Two datasets were utilized to develop the models. First, data from the Automatic Identification System about the performance of a selected ship were used. Second, the numerical ice model HELMI, developed at the Finnish Meteorological Institute, provided information about the ice field. The relations between the ice conditions and ship movements were established using Bayesian learning algorithms. The case study presented in this paper considers a single, unassisted trip of an ice-strengthened bulk carrier between two Finnish ports in the presence of challenging ice conditions that varied in time and space. The obtained results show good predictive power of the models: on average, 80% accuracy for predicting the ship's speed within specified bins, and above 90% for predicting cases where a ship may become stuck in ice. We expect this new approach to facilitate safe and effective route selection in ice-covered waters where the ship's performance is reflected in the objective function.
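The paper's Bayesian-learning models are not reproduced here, but a stripped-down stand-in using a naive Bayes classifier illustrates the idea of mapping ice features to speed bins plus a "stuck" class; all data below are synthetic and the feature set is assumed.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(4)
n = 2000
# Assumed ice features: level-ice thickness (m), concentration (0-1),
# ridge density, rafted-ice fraction, compression index.
X = np.column_stack([
    rng.uniform(0.0, 0.8, n),
    rng.uniform(0.0, 1.0, n),
    rng.uniform(0.0, 10.0, n),
    rng.uniform(0.0, 0.5, n),
    rng.uniform(0.0, 3.0, n),
])
# Synthetic speed bins: 0 = stuck, 1 = slow, 2 = moderate, 3 = fast.
severity = 2 * X[:, 0] + X[:, 1] + X[:, 4] + rng.normal(0, 0.4, n)
speed_bin = np.digitize(severity, [1.0, 2.0, 3.0])

X_tr, X_te, y_tr, y_te = train_test_split(X, speed_bin, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
print("speed-bin accuracy:", model.score(X_te, y_te))
heavy_ice = [[0.7, 0.95, 8.0, 0.4, 2.5]]
print("P(stuck):", model.predict_proba(heavy_ice)[0, 0])
```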
Abstract:
A finite element model was used to simulate timber beams with defects and predict their maximum load in bending. Taking into account the elastoplastic constitutive law of timber, the prediction of fracture load gives information about the mechanisms of timber failure, particularly with regard to the influence of knots, and their local grain deviation, on the fracture. A finite element model was constructed using the ANSYS element Plane42 in a plane-stress 2D analysis, which equates thickness to the width of the section to create a mesh that is as uniform as possible. Three sub-models reproduced the bending test according to UNE EN 408: i) timber with holes caused by knots; ii) timber with adherent knots which have structural continuity with the rest of the beam material; iii) timber with knots but with only partial contact between knot and beam, artificially simulated by means of contact springs between the two materials. The model was validated using ten 45 × 145 × 3000 mm beams of Pinus sylvestris L. which presented knots and grain deviation. The fracture stress data obtained were compared with the results of the numerical simulations, yielding an adjustment error of less than 9.7%.
Abstract:
Most empirical disciplines promote the reuse and sharing of datasets, as it leads to a greater possibility of replication. While this is increasingly the case in Empirical Software Engineering, some of the most popular bug-fix datasets are now known to be biased. This raises two significant concerns: first, that sample bias may lead to underperforming prediction models, and second, that the external validity of studies based on biased datasets may be suspect. This issue has raised considerable consternation in the ESE literature in recent years. However, there is a confounding factor of these datasets that has not been examined carefully: size. Biased datasets sample only some of the data that could be sampled, and do so in a biased fashion; but biased samples can be smaller or larger. Smaller datasets in general provide less reliable bases for estimating models, and thus could lead to inferior model performance. In this setting, we ask: what affects performance more, bias or size? We conduct a detailed, large-scale meta-analysis, using simulated datasets sampled with bias from a high-quality dataset that is relatively free of bias. Our results suggest that size always matters just as much as bias direction, and in fact much more than bias direction when considering information-retrieval measures such as AUC and F-score. This indicates that, at least for prediction models, even when dealing with sampling bias, simply finding larger samples can sometimes be sufficient. Our analysis also exposes the complexity of the bias issue and raises further issues to be explored in the future.
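A toy version of the meta-analysis design (biased subsampling from a clean dataset, then comparing AUC and F-score across sample sizes and bias levels); the dataset, model, and bias mechanism are all illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

# A "high-quality" reference dataset standing in for the unbiased bug-fix data.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.8],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def sample(X, y, size, bias, rng):
    """Draw `size` training rows; `bias` skews selection toward positive rows."""
    p = np.where(y == 1, 1.0 + bias, 1.0)
    idx = rng.choice(len(y), size=size, replace=False, p=p / p.sum())
    return X[idx], y[idx]

rng = np.random.default_rng(0)
for size in (200, 1000, len(y_tr)):
    for bias in (0.0, 4.0):
        Xs, ys = sample(X_tr, y_tr, size, bias, rng)
        m = LogisticRegression(max_iter=1000).fit(Xs, ys)
        auc = roc_auc_score(y_te, m.predict_proba(X_te)[:, 1])
        f1 = f1_score(y_te, m.predict(X_te))
        print(f"size={size:5d} bias={bias:.0f}  AUC={auc:.3f}  F1={f1:.3f}")
```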
Abstract:
Case-based reasoning (CBR) is a unique tool for the evaluation of possible failure of firms (EOPFOF) for its ease of interpretation and implementation. Ensemble computing, a variation of group decision-making in society, provides a potential means of improving the predictive performance of CBR-based EOPFOF. This research aims to integrate bagging and proportion case-basing with CBR to generate a proportion bagging CBR method for EOPFOF. Diverse multiple case bases are first produced by multiple case-basing, in which a volume parameter is introduced to control the size of each case base. Then, the classic case retrieval algorithm is implemented to generate diverse member CBR predictors. Majority voting, the most frequently used mechanism in ensemble computing, is finally used to aggregate the outputs of the member CBR predictors in order to produce the final prediction of the CBR ensemble. In an empirical experiment, we statistically validated the results of the CBR ensemble from multiple case bases by comparing them with those of multivariate discriminant analysis, logistic regression, classic CBR, the best member CBR predictor, and a bagging CBR ensemble. The results for predicting the failure of Chinese firms three years in advance indicate that the new CBR ensemble, which significantly improved CBR's predictive ability, outperformed all the comparative methods.
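A compact sketch of the proportion-bagging idea, approximating CBR retrieval with k-nearest-neighbour voting (synthetic financial-ratio data; the volume parameter controls each sampled case base's size, as in the abstract).

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(5)
n, d = 400, 6                   # firms x financial ratios (illustrative)
X = rng.normal(0, 1, (n, d))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)  # 1 = failure

def bagging_cbr(X, y, n_members=15, volume=0.6, k=5):
    """Proportion bagging CBR: each member retrieves from its own sampled case base."""
    size = int(volume * len(y))  # volume parameter sets each case-base size
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(y), size)            # bootstrap a case base
        members.append(KNeighborsClassifier(k).fit(X[idx], y[idx]))
    return members

def predict(members, X):
    votes = np.stack([m.predict(X) for m in members])  # one vote per member CBR
    return (votes.mean(axis=0) >= 0.5).astype(int)     # majority voting

members = bagging_cbr(X[:300], y[:300])
print("holdout accuracy:", (predict(members, X[300:]) == y[300:]).mean())
```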
Abstract:
New concepts in air navigation have been introduced recently. Among them are trajectory optimization, 4D trajectories, RBT (Reference Business Trajectory), TBO (Trajectory Based Operations), CDA (Continuous Descent Approach) and ACDA (Advanced CDA), conflict resolution, arrival-time management (AMAN), the introduction of new aircraft (UAVs, UASs) into the airspace, etc. Although some of these concepts are new, future Air Traffic Management will maintain the four ATM key performance areas: Safety, Capacity, Efficiency, and Environmental impact. Thus, the performance of the ATM system is directly related to the accuracy with which the future evolution of the traffic can be predicted. In this sense, future air traffic management will require a variety of support tools to provide suitable help to users and engineers involved in airspace management. Most of these tools are based on an appropriate trajectory prediction module as their main component, and their purpose is the testing and evaluation of air navigation concepts before they become fully operational. The aim of this paper is to provide an overview of the design of a software tool for estimating aircraft trajectories adapted to these air navigation concepts. Other uses of the tool, such as controller design, vertical navigation assessment, procedure validation, and hardware- and software-in-the-loop simulation, are also available. The paper shows the process followed to design the tool, the software modules needed for accurate performance, and the process followed to validate the output data.
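As a flavour of what a trajectory prediction module computes, here is a deliberately idealized point-mass sketch of a Continuous Descent Approach segment (constant speed and flight-path angle; a real TP module would add wind, thrust, drag, and aircraft performance models).

```python
import math
from dataclasses import dataclass

@dataclass
class State:
    """Aircraft point-mass state used by the predictor (SI units)."""
    x: float   # along-track distance (m)
    h: float   # altitude (m)
    v: float   # true airspeed (m/s)

def predict_descent(state: State, target_h: float,
                    fpa_deg: float = -3.0, dt: float = 1.0) -> list:
    """Integrate an idealized CDA at a fixed flight-path angle."""
    states = [state]
    gamma = math.radians(fpa_deg)
    while state.h > target_h:
        state = State(
            x=state.x + state.v * math.cos(gamma) * dt,
            h=state.h + state.v * math.sin(gamma) * dt,
            v=state.v,                  # constant-speed idealization
        )
        states.append(state)
    return states

path = predict_descent(State(x=0.0, h=11000.0, v=220.0), target_h=3000.0)
print(f"{len(path) - 1} s of descent, ground track {path[-1].x / 1000:.1f} km")
```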
Abstract:
In the last decades, software systems have become an intrinsic element of our daily lives. Software exists in our computers, in our cars, and even in our refrigerators. Today's world has become heavily dependent on software and yet we still struggle to deliver quality software products, on time and within budget. When searching for the causes of such an alarming scenario, we find concurrent voices pointing to the role of the project manager. But what is project management, and what makes it so challenging? Part of the answer to this question requires a deeper analysis of why software project managers have been largely ineffective. Answering this question might assist current and future software project managers in avoiding, or at least effectively mitigating, problematic scenarios that, if unresolved, will eventually lead to additional failures. This is where anti-patterns come into play and where they can be a useful tool in identifying and addressing software project management failure. Unfortunately, anti-patterns are still a fairly recent concept, and thus available information is still scarce and loosely organized. This thesis attempts to help remedy this scenario. The objective of this work is to help organize existing, documented software project management anti-patterns by answering our two research questions: · What are the different anti-patterns in software project management? · How can these anti-patterns be categorized?
Abstract:
A numerical and experimental study of ballistic impacts on precipitation-hardened Inconel 718 nickel-base superalloy plates at various temperatures has been performed. A coupled elastoplastic-damage constitutive model with a Lode angle dependent failure criterion was implemented in the LS-DYNA non-linear finite element code to model the mechanical behaviour of the alloy. The ballistic impact tests were carried out at three temperatures: room temperature (25 °C), 400 °C, and 700 °C. The numerical study showed that the mesh size is crucial to correctly predict the shear bands detected in the tested plates; mesh-size convergence was achieved for element sizes of the same order as the shear bands. The residual velocity and ballistic limit predictions were excellent for the high-temperature ballistic tests. The model was less accurate for the numerical simulations performed at room temperature, though still in reasonable agreement with the experimental data. Additionally, the influence of the Lode angle on quasi-static failure patterns such as cup-cone and slanted failure was studied numerically. The study revealed that the combined action of weakened constitutive equations and the Lode angle dependent failure criterion was necessary to predict these failure patterns.
Abstract:
A coupled elastoplastic-damage constitutive model with a Lode angle dependent failure criterion for high-strain and ballistic applications is presented. A Lode angle dependent function is added to the equivalent-plastic-strain-to-failure definition of the Johnson–Cook failure criterion. The weakening in the elastic law and in the Johnson–Cook-like constitutive relation implicitly introduces the Lode angle dependency into the elastoplastic behaviour. The material model is calibrated for precipitation-hardened Inconel 718 nickel-base superalloy. The combination of a Lode angle dependent failure criterion with weakened constitutive equations is shown to predict the fracture patterns of the mechanical tests performed and to provide reliable results. Additionally, the mesh-size dependency of the predicted fracture patterns was studied, showing that mesh size was crucial for predicting such patterns.
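As an illustration of the kind of criterion described, one common way to write a Johnson–Cook failure strain with a multiplicative Lode-angle correction is shown below; the exact functional form and constants used by the authors may differ:

$$
\bar{\varepsilon}_f \;=\; \left[D_1 + D_2\, e^{\,D_3 \sigma^*}\right]\left[1 + D_4 \ln \dot{\varepsilon}^{\,*}\right]\left[1 + D_5\, T^*\right] f(\bar{\theta}),
\qquad
f(\bar{\theta}) \;=\; \gamma + (1-\gamma)\,\lvert\bar{\theta}\rvert^{k},
$$

where $\sigma^*$ is the stress triaxiality, $\dot{\varepsilon}^{\,*}$ the dimensionless strain rate, $T^*$ the homologous temperature, $\bar{\theta} \in [-1,1]$ the normalized Lode angle, $D_1,\dots,D_5$ the Johnson–Cook constants, and $\gamma$, $k$ calibration parameters of the (illustrative) Lode function. Damage then accumulates as $D = \sum \Delta\bar{\varepsilon}_p / \bar{\varepsilon}_f$, with failure at $D = 1$.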
Abstract:
In the last two decades, the relevance of knowledge acquisition and dissemination processes has been highlighted, and consequently the study of these processes and the implementation of the technologies that enable them has generated growing interest in the scientific community. In order to ease and optimize knowledge acquisition and dissemination, hierarchical organizations have evolved toward a more horizontal configuration with more agile networked structures, decreasing dependence on a centralized authority and building teamwork-oriented organizations. At the same time, Web 2.0 collaboration tools such as blogs and wikis have developed quickly. These collaboration tools are characterized by a strong social component and can reach their full potential when they are deployed in horizontal organizational structures. Web 2.0, based on user participation, arose as a concept challenging the technologies of the late 1990s, which were based on static websites. Fortune 500 companies (HP, IBM, Xerox, Cisco) adopted the concept immediately, even though there was no unanimity about its real usefulness or how it could be measured. This is partly because the factors that drive employees to adopt these tools are not properly understood, which has led to implementation failures due to the existence of certain barriers. Given this situation, and given the theoretical advantages that these Web 2.0 collaboration tools seem to offer companies, managers and the scientific community are showing an increasing interest in answering the following question: Which factors lead a company's employees to adopt Web 2.0 tools for collaboration? The answer is complex, since these tools are relatively new in business environments and allow a shift from an information-management approach to knowledge management. In order to answer this question, the chosen approach applies technology adoption models, all of them based on the individual's perceptions of different aspects related to technology usage. From this perspective, this thesis' main objective is to study the factors influencing the adoption of blogs and wikis in companies. This is done using a unified, theoretical, predictive model of technology adoption with a holistic approach, based on the literature on technology adoption models and on the particularities of the tools under study in their specific context. This theoretical model makes it possible to determine the factors that predict the intention to use these tools and their actual usage. The research is structured in five parts: introduction to the research subject, development of the theoretical framework, research design, empirical analysis, and conclusions. The thesis develops these five parts sequentially throughout seven chapters: part one (chapter 1), part two (chapters 2 and 3), part three (chapters 4 and 5), part four (chapter 6), and part five (chapter 7). The first chapter is focused on the research problem statement and on the objectives, principal and secondary, to be met throughout the work.
Likewise, the concept of collaboration and its fit with the Web 2.0 collaborative tools under study is discussed, together with an introduction to technology adoption models, the justification of the research, its objectives, and the work plan. After introducing the research topic, the second chapter reviews the evolution of the main existing technology adoption models (IDT, TRA, SCT, TPB, DTPB, C-TAM-TPB, UTAUT, UTAUT2), highlighting their foundations and the factors they employ. Based on the technology adoption models set out in chapter 2, the third chapter deals with those factors adapted to the context of the Web 2.0 collaborative tools under study, blogs and wikis. To make the final model easier to understand, the factors are grouped into four types: technological factors, control factors, social-normative factors, and other factors specific to the collaborative tools. The first part of chapter 4 analyzes the factors most relevant for studying the adoption of collaborative tools, and the second part presents the theoretical model specifying the relationships among the factors considered. These relationships become working hypotheses to be tested in the empirical study. Chapter 5 covers the characteristics of the empirical study used to test the research hypotheses set out in chapter 4. The research is social and exploratory in nature and is based on a quantitative empirical study analyzed with multivariate techniques. The second part of this chapter describes the scales of the measurement instrument, the data-gathering methodology, a detailed analysis of the sample, and a check for bias attributable to the measurement method (common method bias). Chapter 6 presents the analysis of results. The statistical technique employed, PLS-SEM, is first introduced as a multivariate analysis tool with predictive capability and as the methodology used to validate the model in a two-stage analysis (measurement model and structural model), together with the requirements the sample must meet and the thresholds of the parameters considered. In the second part of chapter 6, an empirical analysis of the data is performed for the two samples, one for blogs and the other for wikis, in order to validate the research hypotheses proposed in chapter 4. Finally, chapter 7 reviews the degree of fulfillment of the objectives set out in chapter 1 and presents the theoretical, methodological, and practical contributions of the work, followed by general and factor-group-specific conclusions and practical recommendations to guide the implementation of these tools in real company settings. The chapter closes with the limitations of the study, a number of suggested lines of future research, and the partial research results obtained over the course of the investigation.
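As a small sketch of one routine step in validating a PLS-SEM measurement model, the following computes Cronbach's alpha for an indicator block; the construct, item names, and responses are hypothetical.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one construct's indicator block (rows = respondents)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative 7-point Likert responses for a three-indicator construct
# (e.g. "perceived usefulness" of blogs; names are hypothetical).
rng = np.random.default_rng(6)
latent = rng.normal(0, 1, 200)
items = pd.DataFrame({
    f"pu{i}": np.clip(np.round(4 + latent + rng.normal(0, 0.8, 200)), 1, 7)
    for i in range(1, 4)
})
print("Cronbach's alpha:", round(cronbach_alpha(items), 3))  # common threshold: >= 0.7
```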