915 results for "Precision and recall"


Relevance: 90.00%

Abstract:

For popular software systems, the number of daily submitted bug reports is high. Triaging these incoming reports is a time-consuming task. Part of bug triage is the assignment of a report to a developer with the appropriate expertise. In this paper, we present an approach to automatically suggest developers who have the appropriate expertise for handling a bug report. We model developer expertise using the vocabulary found in their source code contributions and compare this vocabulary to the vocabulary of bug reports. We evaluate our approach by comparing the suggested experts to the persons who eventually worked on the bug. Using eight years of Eclipse development as a case study, we achieve 33.6% top-1 precision and 71.0% top-10 recall.
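
As an illustration of the kind of ranking and evaluation described above, the sketch below scores a bug report against per-developer term profiles with TF-IDF and cosine similarity and checks a top-k hit. The weighting scheme, the toy commit vocabularies, and the helper names are assumptions for illustration, not the paper's exact implementation.

```python
# Sketch: rank developers for a bug report by comparing the report's text to the
# vocabulary of each developer's source-code contributions (TF-IDF + cosine similarity).
# The weighting and the toy data are assumptions, not the paper's exact setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

developer_docs = {            # one "document" per developer: terms from their commits
    "alice": "parser token lexer syntax error recovery",
    "bob":   "swt widget layout paint event listener",
    "carol": "jdt compiler classpath build incremental",
}
bug_report = "NullPointerException in incremental compiler during classpath resolution"

names = list(developer_docs)
vectorizer = TfidfVectorizer()
dev_matrix = vectorizer.fit_transform(developer_docs.values())
report_vec = vectorizer.transform([bug_report])

scores = cosine_similarity(report_vec, dev_matrix)[0]
ranking = [name for _, name in sorted(zip(scores, names), reverse=True)]
print("suggested developers:", ranking)

# Evaluation as in the abstract: top-1 precision = fraction of reports whose actual
# fixer is ranked first; top-10 recall = fraction whose fixer appears in the top 10.
def top_k_hit(ranking, actual_fixer, k):
    return actual_fixer in ranking[:k]
```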

Relevance: 90.00%

Abstract:

BACKGROUND: Ethyl glucuronide (EtG) and ethyl sulfate (EtS) are non-oxidative minor metabolites of ethanol. They are detectable in various body fluids shortly after initial consumption of ethanol and have a longer detection time frame than the parent compound. They are regarded as highly sensitive and specific markers of recent alcohol uptake. This study evaluates the determination of EtG and EtS from dried blood spots (DBS), a simple and cost-effective sampling method that would shorten the time gap between offense and blood sampling and better reflect the actual impairment. METHODS: For method validation, EtG and EtS standard and quality control samples were prepared in fresh human heparinized blood and spotted onto DBS cards, then extracted and measured by an LC-ESI-MS/MS method. Additionally, 76 heparinized blood samples from traffic offense cases were analyzed for EtG and EtS as whole blood and as DBS specimens. The results from these measurements were then compared by calculating the respective mean values, by a matched-pairs t-test, by a Wilcoxon test, and by Bland-Altman and Mountain plots. RESULTS AND DISCUSSION: Calibrations for EtG and EtS in DBS were linear over the studied calibration range. The precision and accuracy of the method met the requirements of the validation guidelines employed in the study. The stability of the biomarkers stored as DBS was demonstrated under different storage conditions. The t-test showed no significant difference between whole blood and DBS in the determination of EtG and EtS. In addition, the Bland-Altman analysis and Mountain plot confirmed that the concentration differences measured in DBS specimens were not relevant.
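
The paired comparisons named above (matched-pairs t-test, Wilcoxon test, Bland-Altman bias and limits of agreement) can be sketched as follows; the concentration values are invented purely for illustration and are not data from the study.

```python
# Sketch of the paired comparisons named in the abstract for whole-blood vs.
# dried-blood-spot EtG values. The numbers below are illustrative only.
import numpy as np
from scipy import stats

whole_blood = np.array([0.52, 1.10, 0.30, 2.40, 0.75, 1.80])   # EtG, mg/L (hypothetical)
dbs         = np.array([0.49, 1.15, 0.28, 2.35, 0.80, 1.72])

t_stat, p_value = stats.ttest_rel(whole_blood, dbs)            # matched-pairs t-test
w_stat, p_wilcoxon = stats.wilcoxon(whole_blood, dbs)          # non-parametric check

# Bland-Altman summary: mean difference (bias) and 95% limits of agreement
diff = dbs - whole_blood
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"t-test p={p_value:.3f}, Wilcoxon p={p_wilcoxon:.3f}, bias={bias:.3f}, LoA={loa}")
```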

Relevance: 90.00%

Abstract:

OBJECTIVES: The aim of this study was to determine whether the Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI)- or Cockcroft-Gault (CG)-based estimated glomerular filtration rates (eGFRs) perform better in the cohort setting for predicting moderate/advanced chronic kidney disease (CKD) or end-stage renal disease (ESRD). METHODS: A total of 9521 persons in the EuroSIDA study contributed 133 873 eGFRs. Poisson regression was used to model the incidence of moderate and advanced CKD (confirmed eGFR < 60 and < 30 mL/min/1.73 m², respectively) or ESRD (fatal/nonfatal) using CG and CKD-EPI eGFRs. RESULTS: Of 133 873 eGFR values, the ratio of CG to CKD-EPI was ≥ 1.1 in 22 092 (16.5%) and the difference between them (CG minus CKD-EPI) was ≥ 10 mL/min/1.73 m² in 20 867 (15.6%). Differences between CKD-EPI and CG were much greater when CG was not standardized for body surface area (BSA). A total of 403 persons developed moderate CKD using CG [incidence 8.9/1000 person-years of follow-up (PYFU); 95% confidence interval (CI) 8.0-9.8] and 364 using CKD-EPI (incidence 7.3/1000 PYFU; 95% CI 6.5-8.0). CG-derived eGFRs were equal to CKD-EPI-derived eGFRs at predicting ESRD (n = 36) and death (n = 565), as measured by the Akaike information criterion. CG-based moderate and advanced CKD were associated with ESRD [adjusted incidence rate ratio (aIRR) 7.17; 95% CI 2.65-19.36 and aIRR 23.46; 95% CI 8.54-64.48, respectively], as were CKD-EPI-based moderate and advanced CKD (aIRR 12.41; 95% CI 4.74-32.51 and aIRR 12.44; 95% CI 4.83-32.03, respectively). CONCLUSIONS: Differences between eGFRs using CG adjusted for BSA or CKD-EPI were modest. In the absence of a gold standard, the two formulae predicted clinical outcomes with equal precision and can be used to estimate GFR in HIV-positive persons.
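
For reference, here is a hedged sketch of the two estimating equations being compared, using the commonly published coefficients (Cockcroft-Gault 1976, the 2009 CKD-EPI creatinine equation, and the DuBois body surface area formula); treat it as an illustration of the formulas rather than the study's implementation.

```python
# Sketch of the two estimating equations compared in the study, with the commonly
# published coefficients. Illustration only, not the study's exact implementation.
def cockcroft_gault(scr_mg_dl, age, weight_kg, female, height_cm=None, normalize_bsa=True):
    crcl = (140 - age) * weight_kg / (72.0 * scr_mg_dl)
    if female:
        crcl *= 0.85
    if normalize_bsa and height_cm is not None:
        bsa = 0.007184 * weight_kg**0.425 * height_cm**0.725   # DuBois formula
        crcl *= 1.73 / bsa                                     # mL/min/1.73 m2
    return crcl

def ckd_epi_2009(scr_mg_dl, age, female, black=False):
    kappa, alpha = (0.7, -0.329) if female else (0.9, -0.411)
    egfr = 141 * min(scr_mg_dl / kappa, 1) ** alpha \
               * max(scr_mg_dl / kappa, 1) ** -1.209 \
               * 0.993 ** age
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr                                                # mL/min/1.73 m2

# Example: the CG/CKD-EPI ratio flagged in the abstract (>= 1.1 in 16.5% of values)
cg = cockcroft_gault(1.0, 45, 80, female=False, height_cm=178)
epi = ckd_epi_2009(1.0, 45, female=False)
print(cg, epi, cg / epi)
```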

Relevance: 90.00%

Abstract:

Computational network analysis provides new methods to analyze the human connectome. Brain structural networks can be characterized by global and local metrics that have recently given promising insights for the diagnosis and further understanding of neurological, psychiatric and neurodegenerative disorders. In order to ensure the validity of results in clinical settings, the precision and repeatability of the networks and the associated metrics must be evaluated. In the present study, nineteen healthy subjects underwent two consecutive measurements, enabling us to test the reproducibility of the brain network and its global and local metrics. As it is known that network topology depends on network density, the effects of setting a common density threshold for all networks were also assessed. Results showed good to excellent repeatability for global metrics, while repeatability was more variable for local metrics, and some metrics were found to have locally poor repeatability. Moreover, between-subject differences were slightly inflated when the density was not fixed. At the global level, these findings confirm previous results on the validity of global network metrics as clinical biomarkers. However, the new results in our work indicate that the remaining variability at the local level, as well as the effect of methodological choices on network topology, should be considered in the analysis of brain structural networks and especially in network comparisons.
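
A minimal sketch of the density-thresholding step and two of the global metrics typically reported, assuming a networkx-based workflow and a toy connectivity matrix; it is not the processing pipeline used in the study.

```python
# Sketch: threshold a structural connectivity matrix to a fixed edge density and
# compute two commonly reported global metrics. Illustration only.
import numpy as np
import networkx as nx

def threshold_to_density(conn, density):
    """Keep the strongest edges of a symmetric connectivity matrix until the
    requested density is reached; return an unweighted graph."""
    n = conn.shape[0]
    iu = np.triu_indices(n, k=1)
    weights = conn[iu]
    n_keep = int(round(density * len(weights)))
    cutoff = np.sort(weights)[::-1][n_keep - 1]          # weight of the weakest kept edge
    adj = np.zeros_like(conn)
    adj[iu] = weights >= cutoff
    return nx.from_numpy_array(adj + adj.T)

rng = np.random.default_rng(0)
conn = rng.random((90, 90))
conn = (conn + conn.T) / 2                               # toy symmetric matrix
G = threshold_to_density(conn, density=0.15)

print("density:", nx.density(G))
print("global efficiency:", nx.global_efficiency(G))
print("mean clustering:", nx.average_clustering(G))
```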

Relevance: 90.00%

Abstract:

PURPOSE Confidence intervals (CIs) are integral to the interpretation of the precision and clinical relevance of research findings. The aim of this study was to ascertain the frequency of reporting of CIs in leading prosthodontic and dental implantology journals and to explore possible factors associated with improved reporting. MATERIALS AND METHODS Thirty issues of nine journals in prosthodontics and implant dentistry were accessed, covering the years 2005 to 2012: The Journal of Prosthetic Dentistry, Journal of Oral Rehabilitation, The International Journal of Prosthodontics, The International Journal of Periodontics & Restorative Dentistry, Clinical Oral Implants Research, Clinical Implant Dentistry and Related Research, The International Journal of Oral & Maxillofacial Implants, Implant Dentistry, and Journal of Dentistry. Articles were screened and the reporting of CIs and P values was recorded. Other information, including study design, region of authorship, involvement of methodologists, and ethical approval, was also obtained. Univariable and multivariable logistic regression was used to identify characteristics associated with the reporting of CIs. RESULTS Interrater agreement for the data extraction was excellent (kappa = 0.88; 95% CI: 0.87 to 0.89). CI reporting was limited, with a mean reporting rate across journals of 14%. CI reporting was associated with journal type, study design, and involvement of a methodologist or statistician. CONCLUSIONS Reporting of CIs in implant dentistry and prosthodontic journals requires improvement. Improved reporting will aid appraisal of the clinical relevance of research findings by providing a range of values within which the effect size is likely to lie, thus giving the end user the opportunity to interpret the results in relation to clinical practice.
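
As a small illustration of what the authors argue should accompany point estimates, the sketch below computes a 95% confidence interval for a reporting proportion; the counts are hypothetical, since the abstract gives only the 14% mean reporting rate.

```python
# Sketch: a 95% CI for a proportion, e.g. articles reporting CIs in a screened set.
# Counts are hypothetical; only the 14% rate comes from the abstract.
from statsmodels.stats.proportion import proportion_confint

reported, screened = 14, 100
low, high = proportion_confint(reported, screened, alpha=0.05, method="wilson")
print(f"reporting rate = {reported/screened:.0%}, 95% CI {low:.1%} to {high:.1%}")
# The interval conveys the range of plausible underlying rates, which is what lets a
# reader judge relevance rather than relying on a bare percentage.
```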

Relevance: 90.00%

Abstract:

This study focuses on relations between 7- and 9-year-old children's and adults' metacognitive monitoring and control processes. In addition to explicit confidence judgments (CJs), data on participants' control behavior during learning and recall, as well as implicit CJs, were collected with an eye-tracking device (Tobii 1750). Results revealed developmental progression in the accuracy of both implicit and explicit monitoring across age groups. In addition, the efficiency of learning and recall strategies increased with age, as older participants allocated more fixation time to critical information and less time to peripheral or potentially interfering information. Correlational analyses of recall performance, metacognitive monitoring, and control indicate significant interrelations among all of these measures, with varying patterns of correlations within age groups. Results are discussed with regard to the intricate relationship between monitoring and recall and their relation to performance.

Relevance: 90.00%

Abstract:

OBJECTIVE To validate a radioimmunoassay for measurement of procollagen type III amino terminal propeptide (PIIINP) concentrations in canine serum and bronchoalveolar lavage fluid (BALF) and investigate the effects of physiologic and pathologic conditions on PIIINP concentrations. SAMPLE POPULATION Sera from healthy adult (n = 70) and growing dogs (20) and dogs with chronic renal failure (CRF; 10), cardiomyopathy (CMP; 12), or degenerative valve disease (DVD; 26); and sera and BALF from dogs with chronic bronchopneumopathy (CBP; 15) and healthy control dogs (10 growing and 9 adult dogs). PROCEDURE A radioimmunoassay was validated, and a reference range for serum PIIINP (S-PIIINP) concentration was established. Effects of growth, age, sex, weight, CRF, and heart failure on S-PIIINP concentration were analyzed. In CBP-affected dogs, S-PIIINP and BALF-PIIINP concentrations were evaluated. RESULTS The radioimmunoassay had good sensitivity, linearity, precision, and reproducibility and reasonable accuracy for measurement of S-PIIINP and BALF-PIIINP concentrations. The S-PIIINP concentration reference range in adult dogs was 8.86 to 11.48 µg/L. Serum PIIINP concentration correlated with weight and age. Growing dogs had significantly higher S-PIIINP concentrations than adults, but concentrations in CRF-, CMP-, DVD-, or CBP-affected dogs were not significantly different from control values. Mean BALF-PIIINP concentration was significantly higher in CBP-affected dogs than in healthy adults. CONCLUSIONS AND CLINICAL RELEVANCE In dogs, renal or cardiac disease or CBP did not significantly affect S-PIIINP concentration; dogs with CBP had high BALF-PIIINP concentrations. Data suggest that the use of PIIINP as a marker of pathologic fibrosis might be limited in growing dogs.

Relevance: 90.00%

Abstract:

Hip dysplasia is characterized by insufficient femoral head coverage (FHC). Quantification of FHC is of importance, as the underlying goal of surgery to treat hip dysplasia is to restore a normal acetabular morphology and thereby to improve FHC. Unlike a pure 2D X-ray radiograph-based measurement method or a pure 3D CT-based measurement method, we previously presented a 2.5D method to quantify FHC from a single anteroposterior (AP) pelvic radiograph. In this study, we first quantified and compared 3D FHC between a normal control group and a patient group using a CT-based measurement method. Taking the CT-based 3D measurements of FHC as the gold standard, we further quantified the bias, precision and correlation between the 2.5D measurements and the 3D measurements in both the control group and the patient group. Based on digitally reconstructed radiographs (DRRs), we investigated the influence of pelvic tilt on the 2.5D measurements of FHC. The intraclass correlation coefficients (ICCs) for absolute agreement were used to quantify interobserver reliability and intraobserver reproducibility of the 2.5D measurement technique. The Pearson correlation coefficient, r, was used to determine the strength of the linear association between the 2.5D and the 3D measurements. Student's t-test was used to determine whether the differences between different measurements were statistically significant. Our experimental results demonstrated that both the interobserver reliability and the intraobserver reproducibility of the 2.5D measurement technique were very good (ICCs > 0.8). Regression analysis indicated that the correlation was very strong between the 2.5D and the 3D measurements (r = 0.89, p < 0.001). Student's t-test showed that there were no statistically significant differences between the 2.5D and the 3D measurements of FHC in the patient group (p > 0.05). The results of this study provide convincing evidence of the validity of the 2.5D measurements of FHC from a single AP pelvic radiograph and show that the method can serve as a surrogate for 3D CT-based measurements. Thus it may be possible to use this method to avoid a CT scan for the purpose of estimating 3D FHC in the diagnosis and post-operative treatment evaluation of patients with hip dysplasia.
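
The agreement statistics listed above (Pearson correlation and paired t-test between the 2.5D and 3D measurements) can be sketched as follows with invented FHC values; the ICC computation is only pointed to in a comment.

```python
# Sketch of the agreement statistics named in the abstract, applied to paired 2.5D
# and 3D femoral head coverage values. The values below are made up for illustration.
import numpy as np
from scipy import stats

fhc_3d  = np.array([73.1, 68.4, 55.2, 61.7, 79.0, 49.8])   # CT-based FHC, %
fhc_25d = np.array([72.0, 69.5, 54.1, 62.3, 77.8, 51.0])   # radiograph-based FHC, %

r, p_r = stats.pearsonr(fhc_25d, fhc_3d)        # strength of linear association
t, p_t = stats.ttest_rel(fhc_25d, fhc_3d)       # systematic difference between methods

print(f"Pearson r = {r:.2f} (p = {p_r:.3f}); paired t-test p = {p_t:.3f}")
# Interobserver/intraobserver ICCs for absolute agreement would typically be computed
# from repeated ratings with a dedicated routine (e.g. pingouin.intraclass_corr).
```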

Relevance: 90.00%

Abstract:

The isotope composition of selenium (Se) can provide important constraints on biological, geochemical, and cosmochemical processes taking place in different reservoirs on Earth and during planet formation. To provide precise qualitative and quantitative information on these processes, accurate and highly precise isotope data need to be obtained. The currently applied ICP-MS methods for Se isotope measurements are compromised by the necessity to perform a large number of interference corrections. Differences in these correction methods can lead to discrepancies in published Se isotope values of rock standards which are significantly higher than the claimed precision. An independent analytical approach applying a double spike (DS) and state-of-the-art TIMS may yield better precision due to its smaller number of interferences and could test the accuracy of data obtained by ICP-MS approaches. This study shows that the precision of Se isotope measurements performed with two different Thermo Scientific™ Triton™ Plus TIMS instruments is distinctly deteriorated, to about ±1‰ (2 s.d.) in δ80/78Se, by a memory Se signal of up to several millivolts and by additional minor residual mass bias that could not be corrected for with the common isotope fractionation laws. This memory Se has a variable isotope composition, with a DS fraction of up to 20%, and accumulates with increasing number of measurements. It thus represents an accumulation of Se from previous Se measurements, with a potential addition from a sample or machine blank. Several techniques for cleaning the MS parts were tried to decrease the memory signal, but none was sufficient to permit precise Se isotope analysis. If these serious memory problems can be overcome in the future, the precision and accuracy of Se isotope analysis with TIMS should be significantly better than those of the current ICP-MS approaches.

Relevance: 90.00%

Abstract:

The use of intensity-modulated radiotherapy (IMRT) treatments necessitates a significant amount of patient-specific quality assurance (QA). This research investigated the precision and accuracy of Kodak EDR2 film measurements for IMRT verification, the use of comparisons between 2D dose calculations and measurements to improve treatment plan beam models, and the dosimetric impact of delivery errors. New measurement techniques and software were developed and used clinically at M. D. Anderson Cancer Center. The software implemented two new dose comparison parameters, the 2D normalized agreement test (NAT) and the scalar NAT index. A single-film calibration technique using multileaf collimator (MLC) delivery was developed. EDR2 film's optical density response was found to be sensitive to several factors: radiation time, length of time between exposure and processing, and phantom material. The precision of EDR2 film measurements was found to be better than 1%. For IMRT verification, EDR2 film measurements agreed with ion chamber results to 2%/2 mm accuracy for single-beam fluence map verifications and to 5%/2 mm for transverse plane measurements of complete plan dose distributions. The same system was used to quantitatively optimize the radiation field offset and MLC transmission beam modeling parameters for Varian MLCs. While scalar dose comparison metrics can work well for optimization purposes, the influence of external parameters on the dose discrepancies must be minimized. The ability of 2D verifications to detect delivery errors was tested with simulated data. The dosimetric characteristics of delivery errors were compared to patient-specific clinical IMRT verifications. For the clinical verifications, the NAT index and the percentage of pixels failing the gamma index were exponentially distributed and dependent upon the measurement phantom but not the treatment site. Delivery errors affecting all beams in the treatment plan were flagged by the NAT index, although delivery errors impacting only one beam could not be differentiated from routine clinical verification discrepancies. Clinical use of this system will flag outliers, allow physicists to examine their causes, and perhaps improve the level of agreement between radiation dose distribution measurements and calculations. The principles used to design and evaluate this system are extensible to future multidimensional dose measurements and comparisons.
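
The NAT index is specific to this work and is not reproduced here, but the gamma index it is reported alongside is a standard 2D dose-comparison test (dose difference plus distance-to-agreement); a brute-force sketch under assumed 3%/3 mm criteria is shown below.

```python
# Sketch of a brute-force 2D gamma-index comparison with global dose normalization.
# The criteria (3%/3 mm) and grid spacing are assumptions for illustration.
import numpy as np

def gamma_pass_rate(measured, calculated, spacing_mm, dose_tol=0.03, dta_mm=3.0):
    """Fraction of measured pixels with gamma <= 1."""
    norm_dose = dose_tol * measured.max()
    ny, nx = measured.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    search = int(np.ceil(2 * dta_mm / spacing_mm))          # limit the search window
    gammas = np.empty_like(measured, dtype=float)
    for i in range(ny):
        for j in range(nx):
            y0, y1 = max(0, i - search), min(ny, i + search + 1)
            x0, x1 = max(0, j - search), min(nx, j + search + 1)
            dist2 = ((yy[y0:y1, x0:x1] - i) ** 2 + (xx[y0:y1, x0:x1] - j) ** 2) * spacing_mm ** 2
            dose2 = (calculated[y0:y1, x0:x1] - measured[i, j]) ** 2
            gammas[i, j] = np.sqrt(np.min(dist2 / dta_mm ** 2 + dose2 / norm_dose ** 2))
    return float(np.mean(gammas <= 1.0))

# toy example: identical distributions pass everywhere
dose = np.random.default_rng(1).random((40, 40))
print(gamma_pass_rate(dose, dose.copy(), spacing_mm=1.0))
```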

Relevance: 90.00%

Abstract:

Strontium isotopes are useful tracers of fluid-rock interaction in marine hydrothermal systems and provide a potential way to quantify the amount of seawater that passes through these systems. We have determined the whole-rock Sr-isotopic compositions of a section of upper oceanic crust that formed at the fast-spreading East Pacific Rise, now exposed at Hess Deep. This dataset provides the first detailed comparison for the much-studied Ocean Drilling Program (ODP) drill core from Site 504B. Whole-rock and mineral Sr concentrations indicate that Sr exchange between hydrothermal fluids and the oceanic crust is complex, being dependent on the mineralogical reactions occurring; in particular, epidote formation takes up Sr from the fluid, increasing the 87Sr/86Sr of the bulk rock. Calculating the fluid flux required to shift the Sr-isotopic composition of the Hess Deep sheeted-dike complex, using the approach of Bickle and Teagle (1992, doi:10.1016/0012-821X(92)90221-G), gives a fluid flux similar to that determined for ODP Hole 504B. This suggests that the level of isotopic exchange observed in these two regions is probably typical of modern oceanic crust. Unfortunately, uncertainties in the modeling approach do not allow us to determine a fluid flux that is directly comparable to fluxes calculated by other methods.

Relevance: 90.00%

Abstract:

Percent CaCO3 was determined in selected samples aboard the ship by the carbonate-bomb technique (Müller and Gastner, 1971). Results of these analyses are listed in Table 1 and plotted in Figures 1, 3, 4, and 5 as plus signs (+). Samples collected specifically for analyses of CaCO3 and organic carbon were analyzed at three shore-based laboratories. Concentrations of total carbon, organic carbon, and CaCO3 were determined in some samples at the DSDP sediment laboratory, using a Leco carbon analyzer, by personnel of the U.S. Geological Survey under the supervision of T. L. Vallier. Most of these samples were collected from lithologic units containing relatively high concentrations of organic carbon. Sampling procedures are outlined in Boyce and Bode (1972). Precision and accuracy are both ±0.3% absolute for total carbon, ±0.06% absolute for organic carbon, and ±3% absolute for CaCO3.

Relevance: 90.00%

Abstract:

This study presents a new Miocene biostratigraphic synthesis for the high-latitude northeastern North Atlantic region. Via correlations to the bio-magnetostratigraphy and oxygen isotope records of Ocean Drilling Program and Deep Sea Drilling Project sites, the ages of shallower North Sea deposits have been better constrained. The result is improved precision and documentation of the age designations of the existing North Sea foraminiferal zonal boundaries of King (1989) and Gradstein and Bäckström (1996). All calibrations have been updated to the Astronomically Tuned Neogene Time Scale (ATNTS) of Lourens et al. (2004). This improved Miocene biozonation has been achieved through the updating of age calibrations for key microfossil bioevents, the identification of new events, and the integration of new biostratigraphic data from a foraminiferal analysis of commercial wells in the North Sea and Norwegian Sea. The new zonation has been successfully applied to two commercial wells and an onshore research borehole. At these high latitudes, where standard zonal markers are often absent, the integration of microfossil groups significantly improves temporal resolution. The new zonation comprises 11 Nordic Miocene (NM) zones with an average duration of 1 to 2 million years. This multi-group combination of a total of 92 bioevents (70 foraminifers and bolboformids; 16 dinoflagellate cysts and acritarchs; 6 marine diatoms) facilitates zonal identification throughout the Nordic Atlantic region. Because the highest proportion of events are of calcareous-walled microfossils, this zonation is primarily suited to micropaleontologists. A correlation of this Miocene biostratigraphy with a re-calibrated oxygen isotope record for DSDP Site 608 suggests a strong link between Miocene planktonic microfossil turnover rates and the inferred paleoclimatic trends. Benthic foraminiferal zonal boundaries often appear to coincide with Miocene global sequence boundaries. The biostratigraphic record is punctuated by four main stratigraphic hiatuses, which vary in their geographic and temporal extent. These are related to the following regional unconformities: the basal Neogene, Lower/Middle Miocene ("mid-Miocene unconformity"), basal Upper Miocene and basal Messinian unconformities. Further coring of Neogene sections in the North Sea and Norwegian Sea may better constrain their extent and their effect on the biostratigraphic record.

Relevance: 90.00%

Abstract:

The design and development of vehicle suspension systems relies increasingly on computer-aided design and computer-aided engineering tools, which allow problems to be anticipated and solved ahead of time. Dynamic behavior and characteristics can thus be simulated accurately and inexpensively, with moderate computation times and resources. There is, however, an iterative component in the process, which involves the manual definition of designs in a trial-and-error manner. This Thesis takes a step towards the development of an efficient simulation framework capable of simulating, analyzing and evaluating vehicle suspension designs, and of automatically improving them by varying the design parameters towards the optimal solution. The multibody systems approach is used to model a three-dimensional, 18-degree-of-freedom coach in a comprehensive yet efficient way. The suspension geometry and characteristics resemble those of the real vehicle, as do the remaining vehicle parameters. In order to simulate the vehicle dynamics, an efficient, state-of-the-art multibody formulation based on Maggi's equations is employed, and a three-dimensional graphics viewer is developed. As a result, vehicle maneuvers can be simulated faster than real time. Once the dynamics are available, sensitivity analysis is crucial for a robust and efficient optimization. To that end, a mathematical technique is introduced that allows the dynamic variables to be differentiated within the multibody formulation in a general, algorithmic, machine-precision-accurate, and reasonably efficient way: automatic differentiation. This method propagates the derivatives with respect to the design parameters through the computer code with little user intervention. In contrast with other attempts in the literature, which are mostly not general-purpose, a benchmarking of libraries is carried out, a hybrid direct-automatic differentiation approach for the computation of sensitivities is developed, and several real-life examples are analyzed. Finally, a design optimization of the dynamic response of the aforementioned vehicle is carried out. Four different types of dynamic response optimization are presented: parameter identification, handling optimization, ride comfort optimization and multi-objective optimization, all of which are applied to the design of the coach example. Together with analytical and graphical results, efficiency considerations are discussed. In summary, the dynamic behavior of vehicles is improved by means of multibody models and advanced automatic differentiation and optimization techniques, enabling an automatic, accurate and efficient tuning of the design parameters.
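
The core idea of automatic differentiation mentioned above, propagating derivatives with respect to design parameters through the same code that evaluates the model, can be sketched with forward-mode dual numbers; the toy spring-damper force is an assumed stand-in for the multibody model, and this is not the thesis's hybrid direct-automatic formulation.

```python
# Sketch of forward-mode automatic differentiation with dual numbers: derivatives
# with respect to a design parameter ride along with the values through ordinary code.
# Toy model only; not the thesis's multibody formulation.
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)
    __rmul__ = __mul__

def spring_damper_force(k, c, x, v):
    """Toy suspension model: F = k*x + c*v (any code path works the same way)."""
    return k * x + c * v

# Sensitivity of the force with respect to the stiffness k at k=50000, c=1500:
k = Dual(50000.0, deriv=1.0)      # seed dF/dk = 1
c = Dual(1500.0)                  # derivative 0: not the active design parameter
F = spring_damper_force(k, c, x=0.02, v=-0.1)
print(F.value, F.deriv)           # F.deriv == dF/dk == x == 0.02
```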

Relevance: 90.00%

Abstract:

The importance of recommender systems has grown exponentially with the rise of social networks. In this PhD thesis I provide a broad view of the state of the art of recommender systems. These were initially based on demographic, content-based and collaborative filtering; currently they incorporate some social information into the recommendation process, and in the future they will use implicit, local and personal information from the Internet of Things. Recommender systems based on collaborative filtering can be modified to make recommendations to groups of users. Previous works have introduced such modifications at different stages of the collaborative filtering algorithm: establishing the neighborhood, predicting the ratings, and determining the recommended items. In this thesis I provide a new method that carries out the unification process (from several users to one group) in the first stage of the collaborative filtering algorithm: the computation of the similarity metric. I give a full formalization of the proposed method, explain how to obtain the k nearest neighbors of the group of users, and show how to obtain recommendations using those neighbors. I also include a running example of a recommender system with 8 users and 10 items, detailing every step of the proposed method. The main characteristics of the proposed method are: (a) it is faster (more efficient) than the alternatives provided by other authors, and (b) it is at least as accurate and precise as the other solutions studied. To test this hypothesis, I conduct several experiments measuring the accuracy, precision and performance of the method and compare the results with those of other group recommendation approaches. The experiments are carried out using the MovieLens and Netflix datasets.
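
A simplified sketch of the general idea, unifying the group into a single profile before the similarity computation, finding its k nearest neighbors, and recommending unrated items, is shown below with toy ratings; it follows the spirit of the description above, not the thesis's exact formalization.

```python
# Simplified sketch of group recommendation with unification before the similarity
# computation. Toy data and aggregation choices are assumptions for illustration.
import numpy as np

ratings = np.array([          # users x items, 0 = not rated
    [5, 3, 0, 1, 4, 0],
    [4, 0, 0, 1, 5, 2],
    [1, 1, 0, 5, 0, 4],
    [0, 1, 5, 4, 0, 3],
    [2, 0, 4, 0, 1, 5],
])
group = [0, 1]                # indices of the users forming the group

def similarity(a, b):
    """Cosine similarity restricted to items rated by both profiles."""
    mask = (a > 0) & (b > 0)
    if not mask.any():
        return 0.0
    return float(a[mask] @ b[mask] / (np.linalg.norm(a[mask]) * np.linalg.norm(b[mask])))

# 1. unification: average the group's known ratings into a single profile
counts = np.maximum((ratings[group] > 0).sum(axis=0), 1)
group_profile = np.where((ratings[group] > 0).any(axis=0),
                         ratings[group].sum(axis=0) / counts, 0)

# 2. k nearest neighbors of the group among the remaining users
others = [u for u in range(ratings.shape[0]) if u not in group]
neighbors = sorted(others, key=lambda u: similarity(group_profile, ratings[u]), reverse=True)[:2]

# 3. recommend items unrated by the group, scored by the neighbors' mean rating
unrated = np.where((ratings[group] == 0).all(axis=0))[0]
scores = {i: ratings[neighbors, i][ratings[neighbors, i] > 0].mean()
          for i in unrated if (ratings[neighbors, i] > 0).any()}
print(sorted(scores, key=scores.get, reverse=True))   # recommended item indices
```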