939 results for false positive rates


Relevance:

90.00%

Publisher:

Summary:

Objective. To use the Pediatric Rheumatology International Trials Organization (PRINTO) core set of outcome measures to develop a validated definition of improvement for the evaluation of response to therapy in juvenile systemic lupus erythematosus (SLE). Methods. Thirty-seven experienced pediatric rheumatologists from 27 countries, each of whom had specific experience in the assessment of juvenile SLE patients, achieved consensus on 128 patient profiles as being clinically improved or not improved. Using the physicians' consensus ratings as the gold standard measure, the chi-square, sensitivity, specificity, false-positive and false-negative rates, area under the receiver operating characteristic curve, and kappa level of agreement were calculated for 597 candidate definitions of improvement. Only definitions with a kappa value greater than 0.7 were retained. The top definitions were selected based on the product of the content validity score and the kappa statistic. Results. The definition of improvement with the highest final score was at least 50% improvement from baseline in any 2 of the 5 core set measures, with no more than 1 of the remaining measures worsening by more than 30%. Conclusion. PRINTO proposes a valid and reproducible definition of improvement that reflects well the consensus rating of experienced clinicians and that incorporates clinically meaningful change in core set measures in a composite end point for the evaluation of global response to therapy in patients with juvenile SLE. The definition is now proposed for use in juvenile SLE clinical trials and may help physicians to decide whether a child with SLE has responded adequately to therapy.
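
The composite rule above is simple to state programmatically. The following is a minimal sketch, assuming (as the rule is worded here) that improvement corresponds to a decrease in each core set measure; it illustrates the stated criterion and is not the PRINTO validation code.

```python
def printo_improved(baseline, followup, improve_frac=0.50, worsen_frac=0.30):
    """Apply the improvement rule described above to one patient.

    baseline, followup: sequences of the 5 core set measures, where lower
    values are assumed to indicate better disease status (an assumption of
    this sketch; the direction may differ per measure in practice).
    """
    improved = worsened = 0
    for b, f in zip(baseline, followup):
        if b == 0:                # avoid division by zero; treat as unchanged
            continue
        change = (f - b) / b      # relative change from baseline
        if change <= -improve_frac:
            improved += 1
        elif change >= worsen_frac:
            worsened += 1
    # >=50% improvement in at least 2 of 5 measures,
    # with no more than 1 of the remaining measures worsening by >30%
    return improved >= 2 and worsened <= 1

# Example: two measures halved, one slightly worse -> classified as improved
print(printo_improved([10, 8, 6, 4, 2], [5, 4, 6, 4.8, 2]))
```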

Relevance:

90.00%

Publisher:

Summary:

Although enzymuria tends to be associated with renal injury, no studies have evaluated urinary gamma-glutamyl transpeptidase (GGT) measured by spectrophotometry after administration of a non-nephrotoxic agent (Nerium oleander) in order to evaluate the possibility of false-positive results. The urinary GGT/urinary creatinine concentration ratio (uGGT/uCr) of 10 healthy dogs was calculated and subsequently compared with data from clinical evaluation, hematological and serum biochemical profiles, creatinine clearance (CrC), urinalysis, urine protein/creatinine ratio (UPC), electrocardiogram, systemic blood pressure (SBP), and light and electron microscopy. The results for kidney histology, SBP, UPC and CrC were not significantly different at any of the time points analyzed. However, uGGT/uCr was significantly higher when measured 4 hours and 24 hours after administration of N. oleander. The measurement of the urinary GGT enzyme, as performed in many studies, yielded false-positive results in dogs poisoned by a non-nephrotoxic agent.
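
As an illustration of the kind of paired comparison described above (baseline versus post-administration uGGT/uCr), the sketch below uses made-up values for 10 dogs and a Wilcoxon signed-rank test; the study's actual statistical procedure and data are not reproduced here.

```python
import numpy as np
from scipy.stats import wilcoxon

# Made-up illustrative values for 10 dogs (not data from the study)
ggt_baseline = np.array([18, 22, 15, 20, 25, 17, 19, 21, 16, 23], float)
ggt_4h       = np.array([35, 40, 28, 33, 45, 30, 38, 36, 27, 41], float)
cr_baseline  = np.array([45, 50, 42, 48, 55, 44, 47, 52, 43, 49], float)
cr_4h        = np.array([44, 51, 41, 47, 56, 43, 46, 51, 42, 50], float)

ratio_baseline = ggt_baseline / cr_baseline   # uGGT/uCr before dosing
ratio_4h       = ggt_4h / cr_4h               # uGGT/uCr 4 h after dosing

# Paired, non-parametric comparison of the two time points
stat, p = wilcoxon(ratio_baseline, ratio_4h)
print(f"median change: {np.median(ratio_4h - ratio_baseline):.3f}, p = {p:.4f}")
```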

Relevance:

90.00%

Publisher:

Summary:

Background Recurrent nerve injury is 1 of the most important complications of thyroidectomy. During the last decade, nerve monitoring has gained increasing acceptance in several centers as a method to predict and to document nerve function at the end of the operation. We evaluated the efficacy of a nerve monitoring system in a series of patients who underwent thyroidectomy and critically analyzed the negative predictive value (NPV) and positive predictive value (PPV) of the method. Methods. NIM System efficacy was prospectively analyzed in 447 patients who underwent thyroidectomy between 2001 and 2008 (366 female/81 male; 420 white/47 nonwhite; 11 to 82 years of age; median, 43 years old). There were 421 total thyroidectomies and 26 partial thyroidectomies, leading to 868 nerves at risk. The gold standard to evaluate inferior laryngeal nerve function was early postoperative videolaryngoscopy, which was repeated after 4 to 6 months in all patients with abnormal endoscopic findings. Results. At the early evaluation, 858 nerves (98.8%) presented normal videolaryngoscopic features after surgery. Ten paretic/paralyzed nerves (1.2%) were detected (2 unexpected unilateral paresis, 2 unexpected bilateral paresis, 1 unexpected unilateral paralysis, 1 unexpected bilateral paralyses, and 1 expected unilateral paralysis). At the late videolaryngoscopy, only 2 permanent nerve paralyses were noted (0.2%), with an ultimate result of 99.8% functioning nerves. Nerve monitoring showed absent or markedly reduced electrical activity at the end of the operations in 25/868 nerves (2.9%), including all 10 endoscopically compromised nerves, with 15 false-positive results. There were no false-negative results. Therefore, the PPV was 40.0%, and the NPV was 100%. Conclusions. In the present series, nerve monitoring had a very high PPV but a low NPV for the detection of recurrent nerve injury. (C) 2011 Wiley Periodicals, Inc. Head Neck 34: 175-179, 2012
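
The predictive values quoted in this abstract follow directly from the reported counts (10 truly compromised nerves among 25 monitor-positive nerves, no false negatives, 868 nerves at risk); a quick check:

```python
# Counts reported in the abstract
tp, fp, fn = 10, 15, 0
tn = 868 - tp - fp - fn          # 843 monitor-negative, endoscopically normal nerves

ppv = tp / (tp + fp)             # 10 / 25  = 0.40
npv = tn / (tn + fn)             # 843 / 843 = 1.00
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # PPV = 40.0%, NPV = 100.0%
```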

Relevance:

90.00%

Publisher:

Summary:

Background: We aimed to investigate the performance of five different trend analysis criteria for the detection of glaucomatous progression and to determine the most frequently and rapidly progressing locations of the visual field. Design: Retrospective cohort. Participants or Samples: Treated glaucoma patients with ≥8 Swedish Interactive Thresholding Algorithm (SITA)-standard 24-2 visual field tests. Methods: Progression was determined using trend analysis. Five different criteria were used: (A) ≥1 significantly progressing point; (B) ≥2 significantly progressing points; (C) ≥2 progressing points located in the same hemifield; (D) at least two adjacent progressing points located in the same hemifield; (E) ≥2 progressing points in the same Garway-Heath map sector. Main Outcome Measures: Number of progressing eyes and false-positive results. Results: We included 587 patients. The number of eyes reaching a progression endpoint using each criterion was: A = 300 (51%); B = 212 (36%); C = 194 (33%); D = 170 (29%); and E = 186 (31%) (P = 0.03). The numbers of eyes with positive slopes were: A = 13 (4.3%); B = 3 (1.4%); C = 3 (1.5%); D = 2 (1.1%); and E = 3 (1.6%) (P = 0.06). The global slopes for progressing eyes were more negative in Groups B, C and D than in Group A (P = 0.004). The visual field locations that progressed most often were those in the nasal field adjacent to the horizontal midline. Conclusions: Pointwise linear regression criteria that take into account the retinal nerve fibre layer anatomy enhance the specificity of trend analysis for the detection of glaucomatous visual field progression.
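
A hedged sketch of pointwise linear regression over a series of visual fields follows; the slope and significance cut-offs are illustrative assumptions (the abstract does not state them), and criterion B (≥2 significantly progressing points) is used as the example.

```python
import numpy as np
from scipy.stats import linregress

def progressing_points(fields, years, slope_cutoff=-1.0, p_cutoff=0.01):
    """Pointwise linear regression over a visual field series.

    fields: array of shape (n_visits, n_locations), threshold sensitivities in dB
    years:  array of shape (n_visits,), test dates in years
    The cut-off values are illustrative assumptions, not those of the study.
    Returns a boolean array marking locations flagged as progressing.
    """
    flags = np.zeros(fields.shape[1], dtype=bool)
    for j in range(fields.shape[1]):
        fit = linregress(years, fields[:, j])
        flags[j] = (fit.slope < slope_cutoff) and (fit.pvalue < p_cutoff)
    return flags

def criterion_B(fields, years):
    """Criterion B from the abstract: the eye progresses if >=2 points are flagged."""
    return progressing_points(fields, years).sum() >= 2
```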

Relevance:

90.00%

Publisher:

Summary:

Ultrasonography has an inherent noise pattern, called speckle, which is known to hamper object recognition for both humans and computers. Speckle noise is produced by the mutual interference of a set of scattered wavefronts. Depending on the phase of the wavefronts, the interference may be constructive or destructive, which results in brighter or darker pixels, respectively. We propose a filter that minimizes noise fluctuation while simultaneously preserving local gray level information. It is based on steps to attenuate the destructive and constructive interference present in ultrasound images. This filter, called interference-based speckle filter followed by anisotropic diffusion (ISFAD), was developed to remove speckle texture from B-mode ultrasound images, while preserving the edges and the gray level of the region. The ISFAD performance was compared with 10 other filters. The evaluation was based on their application to images simulated by Field II (developed by Jensen et al.) and the proposed filter presented the greatest structural similarity, 0.95. Functional improvement of the segmentation task was also measured, comparing rates of true positive, false positive and accuracy. Using three different segmentation techniques, ISFAD also presented the best accuracy rate (greater than 90% for structures with well-defined borders). (E-mail: fernando.okara@gmail.com) (C) 2012 World Federation for Ultrasound in Medicine & Biology.
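
A minimal sketch of the evaluation step described above, scoring a despeckling filter against a noise-free reference with structural similarity; a median filter stands in for ISFAD (whose implementation is not given in the abstract), and the phantom and speckle model are illustrative.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage.metrics import structural_similarity

rng = np.random.default_rng(0)
clean = np.zeros((128, 128))
clean[32:96, 32:96] = 1.0                                      # toy phantom
speckled = clean * rng.rayleigh(scale=0.6, size=clean.shape)   # multiplicative speckle

filtered = median_filter(speckled, size=5)                     # placeholder for ISFAD

# SSIM against the noise-free reference, as in the filter comparison above
data_range = float(max(clean.max(), filtered.max()) - min(clean.min(), filtered.min()))
ssim = structural_similarity(clean, filtered, data_range=data_range)
print(f"SSIM vs. reference: {ssim:.2f}")
```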

Relevance:

90.00%

Publisher:

Summary:

Abstract Background Spotted cDNA microarrays generally employ co-hybridization of fluorescently-labeled RNA targets to produce gene expression ratios for subsequent analysis. Direct comparison of two RNA samples in the same microarray provides the highest level of accuracy; however, due to the number of combinatorial pair-wise comparisons, the direct method is impractical for studies including large number of individual samples (e.g., tumor classification studies). For such studies, indirect comparisons using a common reference standard have been the preferred method. Here we evaluated the precision and accuracy of reconstructed ratios from three indirect methods relative to ratios obtained from direct hybridizations, herein considered as the gold-standard. Results We performed hybridizations using a fixed amount of Cy3-labeled reference oligonucleotide (RefOligo) against distinct Cy5-labeled targets from prostate, breast and kidney tumor samples. Reconstructed ratios between all tissue pairs were derived from ratios between each tissue sample and RefOligo. Reconstructed ratios were compared to (i) ratios obtained in parallel from direct pair-wise hybridizations of tissue samples, and to (ii) reconstructed ratios derived from hybridization of each tissue against a reference RNA pool (RefPool). To evaluate the effect of the external references, reconstructed ratios were also calculated directly from intensity values of single-channel (One-Color) measurements derived from tissue sample data collected in the RefOligo experiments. We show that the average coefficient of variation of ratios between intra- and inter-slide replicates derived from RefOligo, RefPool and One-Color were similar and 2 to 4-fold higher than ratios obtained in direct hybridizations. Correlation coefficients calculated for all three tissue comparisons were also similar. In addition, the performance of all indirect methods in terms of their robustness to identify genes deemed as differentially expressed based on direct hybridizations, as well as false-positive and false-negative rates, were found to be comparable. Conclusion RefOligo produces ratios as precise and accurate as ratios reconstructed from a RNA pool, thus representing a reliable alternative in reference-based hybridization experiments. In addition, One-Color measurements alone can reconstruct expression ratios without loss in precision or accuracy. We conclude that both methods are adequate options in large-scale projects where the amount of a common reference RNA pool is usually restrictive.
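
The reconstruction step itself reduces to a subtraction in log-ratio space: the indirect ratio between two tissues is the ratio of their individual ratios against the common reference. A small sketch with illustrative (not study) values:

```python
import numpy as np

# Log-ratios measured against a common reference (tissue vs. RefOligo);
# the numbers are illustrative replicate slides, not data from the study.
log2_prostate_vs_ref = np.array([1.2, 1.1, 1.3])
log2_breast_vs_ref   = np.array([0.4, 0.5, 0.3])

# Indirect (reconstructed) prostate/breast ratio: subtract in log space,
# since (prostate/Ref) / (breast/Ref) = prostate/breast
log2_prostate_vs_breast = log2_prostate_vs_ref - log2_breast_vs_ref

ratios = 2 ** log2_prostate_vs_breast
cv = ratios.std(ddof=1) / ratios.mean()     # coefficient of variation across replicates
print(f"reconstructed ratios: {np.round(ratios, 2)}, CV = {cv:.2f}")
```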

Relevance:

90.00%

Publisher:

Summary:

Purpose: To validate a monitoring questionnaire about hearing and language development applied by community health agents in the first year of life. Methods: Seventy-six community health agents, previously trained in infant hearing health, administered a questionnaire to the families of 304 children aged 0 to 1 year. The questionnaire contains questions regarding hearing and language development and, for all age groups, the question "Does your child hear well?" was presented. The validity of the questionnaire was assessed by analyzing the false-positive and false-negative rates of the identified children. A double-blind study was conducted so that all children assessed by the questionnaire underwent hearing evaluation performed by audiologists. Results: Four children (1.32%) were diagnosed with sensorineural hearing loss (two unilateral), and 69 (22.7%) with conductive hearing loss. The monitoring questionnaire showed a specificity of 96% and a sensitivity of 67%, with a false-negative rate of 33% attributable to not identifying the unilateral hearing losses, and a false-positive rate of 4%. Conclusion: The questionnaire has been shown to be feasible and relevant to the actions of the community health agents of the Family Health Strategy program, with high specificity and moderate sensitivity. The use of the validated instrument should be considered to complement Newborn Hearing Screening Programs, in order to identify late-onset or acquired hearing loss.
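
For reference, the quoted error rates are the usual complements of sensitivity and specificity. The counts below are hypothetical, chosen only so the rounded rates match those reported in the abstract:

```python
def screen_metrics(tp, fp, fn, tn):
    """Sensitivity/specificity and the complementary error rates quoted above."""
    sensitivity = tp / (tp + fn)        # proportion of affected children flagged
    specificity = tn / (tn + fp)        # proportion of unaffected children passed
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "false_negative_rate": 1 - sensitivity,   # 33% when sensitivity is 67%
        "false_positive_rate": 1 - specificity,   # 4% when specificity is 96%
    }

# Hypothetical counts: 2/3 = 67% sensitivity, 289/301 = 96% specificity
print(screen_metrics(tp=2, fp=12, fn=1, tn=289))
```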

Relevance:

90.00%

Publisher:

Summary:

A non-parametric method was developed and tested to compare the partial areas under two correlated Receiver Operating Characteristic (ROC) curves. Based on the theory of generalized U-statistics, mathematical formulas were derived for computing the ROC area and the variance and covariance between the portions of two ROC curves. A practical SAS application was also developed to facilitate the calculations. The accuracy of the non-parametric method was evaluated by comparing it to other methods. Applying our method to the data from a published ROC analysis of CT images, our results are very close to theirs. A hypothetical example was used to demonstrate the effect of two crossed ROC curves: the two ROC areas are the same, yet each portion of the area between the two curves was found to be significantly different by the partial ROC curve analysis. For ROC curves computed on a large scale, such as from a logistic regression model, we applied our method to a breast cancer study with Medicare claims data; it yielded the same ROC area computation as the SAS Logistic procedure. Our method also provides an alternative to the global summary of ROC area comparison by directly comparing the true-positive rates for two regression models and by determining the range of false-positive values where the models differ.
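
A minimal non-parametric (trapezoidal) partial-area computation over a restricted false-positive range is sketched below; it illustrates the quantity being compared, but not the U-statistic variance/covariance machinery or the SAS implementation described in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_curve

def partial_auc(y_true, y_score, fpr_max=0.2):
    """Trapezoidal area under the ROC curve restricted to FPR in [0, fpr_max]."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    grid = np.linspace(0.0, fpr_max, 200)          # fine FPR grid up to the cut-off
    tpr_grid = np.interp(grid, fpr, tpr)           # interpolated ROC on that grid
    return float(np.sum((tpr_grid[1:] + tpr_grid[:-1]) / 2 * np.diff(grid)))

# Illustrative comparison of two scoring models on the same simulated cases
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=200)
score_a = y + rng.normal(scale=1.0, size=200)
score_b = y + rng.normal(scale=1.5, size=200)
print(partial_auc(y, score_a), partial_auc(y, score_b))
```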

Relevance:

90.00%

Publisher:

Summary:

Background: The usefulness and modalities of cardiovascular screening in young athletes remain controversial, particularly concerning the role of the 12-lead ECG. One of the reasons is the presumed false-positive ECGs requiring additional examinations and higher costs. Our study aimed to assess the total costs and yield of a preparticipation cardiovascular examination with ECG in young athletes in Switzerland. Methods: Athletes aged 14–35 years were examined according to the 2005 European Society of Cardiology (ESC) protocol. ECGs were interpreted based on the 2010 ESC-adapted recommendations. The costs of the overall screening programme until diagnosis were calculated according to Swiss medical rates. Results: A total of 1070 athletes were examined (75% men, 19.7±6.3 years) over a 15-month period. Among them, 67 (6.3%) required further examinations: 14 (1.3%) due to medical history, 15 (1.4%) due to physical examination and 42 (3.9%) because of abnormal ECG findings. A previously unknown cardiac abnormality was established in 11 athletes (1.0%). In four athletes (0.4%), the abnormality may potentially lead to sudden cardiac death, and all of them were identified by ECG alone. The cost was 157 464 Swiss francs (CHF) for the overall programme, CHF 147 per athlete and CHF 14 315 per finding. Conclusions: Cardiovascular preparticipation examination in young athletes using modern, athlete-specific criteria for interpreting the ECG is feasible in Switzerland at reasonable cost. ECG alone detected all potentially lethal cardiac diseases. The results of our study support the inclusion of ECG in routine preparticipation screening.

Relevance:

90.00%

Publisher:

Summary:

In population studies, most current methods focus on identifying one outcome-related SNP at a time by testing for differences in genotype frequencies between disease and healthy groups or among different population groups. However, testing a great number of SNPs simultaneously raises a multiple-testing problem and will give false-positive results. Although this problem can be dealt with effectively through several approaches, such as Bonferroni correction, permutation testing and false discovery rates, patterns of joint effects by several genes, each with a weak effect, might not be detectable. With the availability of high-throughput genotyping technology, searching for multiple scattered SNPs over the whole genome and modeling their joint effect on the target variable has become possible. Exhaustive search of all SNP subsets is computationally infeasible for millions of SNPs in a genome-wide study. Several effective feature selection methods combined with classification functions have been proposed to search for an optimal SNP subset among big data sets where the number of feature SNPs far exceeds the number of observations.

In this study, we take two steps to achieve this goal. First, we selected 1000 SNPs through an effective filter method and then performed feature selection wrapped around a classifier to identify an optimal SNP subset for predicting disease. We also developed a novel classification method, the sequential information bottleneck (sIB) method, wrapped inside different search algorithms to identify an optimal subset of SNPs for classifying the outcome variable. This new method was compared with classical linear discriminant analysis in terms of classification performance. Finally, we performed chi-square tests to look at the relationship between each SNP and disease from another point of view.

In general, our results show that filtering features using the harmonic mean of sensitivity and specificity (HMSS) through linear discriminant analysis (LDA) is better than using LDA training accuracy or mutual information in our study. Our results also demonstrate that exhaustive search of small subsets (one SNP, two SNPs, or three-SNP subsets built from the best 100 composite 2-SNP combinations) can find an optimal subset, and that further inclusion of more SNPs through a heuristic algorithm does not always increase the performance of SNP subsets. Although sequential forward floating selection can be applied to prevent the nesting effect of forward selection, it does not always outperform the latter, owing to overfitting from observing more complex subset states.

Our results also indicate that HMSS, as a criterion to evaluate the classification ability of a function, can be used on imbalanced data without modifying the original dataset, unlike classification accuracy. Our four studies suggest that the sequential information bottleneck (sIB), a new unsupervised technique, can be adopted to predict the outcome, and that its ability to detect the target status is superior to that of traditional LDA in this study.

From our results we can see that the best test probability-HMSS for predicting CVD, stroke, CAD and psoriasis through sIB is 0.59406, 0.641815, 0.645315 and 0.678658, respectively. In terms of group prediction accuracy, the highest test accuracy of sIB for diagnosing a normal status among controls can reach 0.708999, 0.863216, 0.639918 and 0.850275, respectively, in the four studies if the test accuracy among cases is required to be at least 0.4. On the other hand, the highest test accuracy of sIB for diagnosing a disease among cases can reach 0.748644, 0.789916, 0.705701 and 0.749436, respectively, in the four studies if the test accuracy among controls is required to be at least 0.4.

A further genome-wide association study through chi-square tests shows that no significant SNPs are detected at the cut-off level 9.09451E-08 in the Framingham Heart Study of CVD. In the WTCCC data, only two significant SNPs associated with CAD are detected. In the genome-wide study of psoriasis, most of the top 20 SNP markers with impressive classification accuracy are also significantly associated with the disease through chi-square tests at the cut-off value 1.11E-07.

Although our classification methods can achieve high accuracy in this study, complete descriptions of those classification results (95% confidence intervals or statistical tests of differences) require more cost-effective methods or a more efficient computing system, neither of which can currently be accomplished in our genome-wide study. We should also note that the purpose of this study is to identify subsets of SNPs with high prediction ability; SNPs with good discriminant power are not necessarily causal markers for the disease.
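
For reference, the HMSS criterion used for filtering and evaluation is just the harmonic mean of the two class-conditional rates; a one-function sketch:

```python
def hmss(sensitivity, specificity):
    """Harmonic mean of sensitivity and specificity (HMSS).

    Analogous to an F1 score computed on the two class-conditional accuracies,
    so it is not dominated by the majority class in imbalanced data.
    """
    if sensitivity + specificity == 0:
        return 0.0
    return 2 * sensitivity * specificity / (sensitivity + specificity)

print(hmss(0.70, 0.60))   # 0.646..., a balanced summary of the two rates
```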

Relevance:

90.00%

Publisher:

Summary:

Nutrient intake and specific food item data from 24-hour dietary recalls were utilized to study the relationship between measures of diet diversity and dietary adequacy in a population of white females of child-bearing age and in socioeconomic subgroups of that population. As the basis of the diet diversity measures, twelve food groups were constructed from the 24-hour recall data, and the number of unique foods per food group was counted and weighted according to specified weighting schemes. Utilizing these food groups, nine diet diversity indices were developed.

Sensitivity/specificity analysis was used to determine the ability of varying levels of selected diet diversity indices to identify individuals above and below preselected intakes of different nutrients. The true prevalence proportions, sensitivity and specificity, false-positive and false-negative rates, and positive predictive values observed at the selected levels of the diet diversity indices were investigated in relation to the objectives and resources of a variety of nutrition improvement programs. Diet diversity indices constructed from the total population data were also evaluated as screening tools for respondent nutrient intakes in each of the socioeconomic subgroups.

The results of the sensitivity/specificity analysis demonstrated that the false-positive rate, the false-negative rate, or both were too high at each diversity cut-off level to validate the widespread use of any of the diversity indices in the dietary assessment of the study population. Although diet diversity has been shown to be highly correlated with the intakes of a number of nutrients, the diet diversity indices constructed in this study did not adequately represent nutrient intakes as reported in the 24-hour dietary recall. Specific cut-off levels of selected diversity indices might have limited application in some nutrition programs. These findings applied to the sensitivity/specificity analyses in the socioeconomic subgroups as well as in the total population.

Relevance:

90.00%

Publisher:

Summary:

Entendemos por inteligencia colectiva una forma de inteligencia que surge de la colaboración y la participación de varios individuos o, siendo más estrictos, varias entidades. En base a esta sencilla definición podemos observar que este concepto es campo de estudio de las más diversas disciplinas como pueden ser la sociología, las tecnologías de la información o la biología, atendiendo cada una de ellas a un tipo de entidades diferentes: seres humanos, elementos de computación o animales. Como elemento común podríamos indicar que la inteligencia colectiva ha tenido como objetivo el ser capaz de fomentar una inteligencia de grupo que supere a la inteligencia individual de las entidades que lo forman a través de mecanismos de coordinación, cooperación, competencia, integración, diferenciación, etc. Sin embargo, aunque históricamente la inteligencia colectiva se ha podido desarrollar de forma paralela e independiente en las distintas disciplinas que la tratan, en la actualidad, los avances en las tecnologías de la información han provocado que esto ya no sea suficiente. Hoy en día seres humanos y máquinas a través de todo tipo de redes de comunicación e interfaces, conviven en un entorno en el que la inteligencia colectiva ha cobrado una nueva dimensión: ya no sólo puede intentar obtener un comportamiento superior al de sus entidades constituyentes sino que ahora, además, estas inteligencias individuales son completamente diferentes unas de otras y aparece por lo tanto el doble reto de ser capaces de gestionar esta gran heterogeneidad y al mismo tiempo ser capaces de obtener comportamientos aún más inteligentes gracias a las sinergias que los distintos tipos de inteligencias pueden generar. Dentro de las áreas de trabajo de la inteligencia colectiva existen varios campos abiertos en los que siempre se intenta obtener unas prestaciones superiores a las de los individuos. Por ejemplo: consciencia colectiva, memoria colectiva o sabiduría colectiva. Entre todos estos campos nosotros nos centraremos en uno que tiene presencia en la práctica totalidad de posibles comportamientos inteligentes: la toma de decisiones. El campo de estudio de la toma de decisiones es realmente amplio y dentro del mismo la evolución ha sido completamente paralela a la que citábamos anteriormente en referencia a la inteligencia colectiva. En primer lugar se centró en el individuo como entidad decisoria para posteriormente desarrollarse desde un punto de vista social, institucional, etc. La primera fase dentro del estudio de la toma de decisiones se basó en la utilización de paradigmas muy sencillos: análisis de ventajas e inconvenientes, priorización basada en la maximización de algún parámetro del resultado, capacidad para satisfacer los requisitos de forma mínima por parte de las alternativas, consultas a expertos o entidades autorizadas o incluso el azar. Sin embargo, al igual que el paso del estudio del individuo al grupo supone una nueva dimensión dentro la inteligencia colectiva la toma de decisiones colectiva supone un nuevo reto en todas las disciplinas relacionadas. Además, dentro de la decisión colectiva aparecen dos nuevos frentes: los sistemas de decisión centralizados y descentralizados. En el presente proyecto de tesis nos centraremos en este segundo, que es el que supone una mayor atractivo tanto por las posibilidades de generar nuevo conocimiento y trabajar con problemas abiertos actualmente así como en lo que respecta a la aplicabilidad de los resultados que puedan obtenerse. 
Ya por último, dentro del campo de los sistemas de decisión descentralizados existen varios mecanismos fundamentales que dan lugar a distintas aproximaciones a la problemática propia de este campo. Por ejemplo el liderazgo, la imitación, la prescripción o el miedo. Nosotros nos centraremos en uno de los más multidisciplinares y con mayor capacidad de aplicación en todo tipo de disciplinas y que, históricamente, ha demostrado que puede dar lugar a prestaciones muy superiores a otros tipos de mecanismos de decisión descentralizados: la confianza y la reputación. Resumidamente podríamos indicar que confianza es la creencia por parte de una entidad que otra va a realizar una determinada actividad de una forma concreta. En principio es algo subjetivo, ya que la confianza de dos entidades diferentes sobre una tercera no tiene porqué ser la misma. Por otro lado, la reputación es la idea colectiva (o evaluación social) que distintas entidades de un sistema tiene sobre otra entidad del mismo en lo que respecta a un determinado criterio. Es por tanto una información de carácter colectivo pero única dentro de un sistema, no asociada a cada una de las entidades del sistema sino por igual a todas ellas. En estas dos sencillas definiciones se basan la inmensa mayoría de sistemas colectivos. De hecho muchas disertaciones indican que ningún tipo de organización podría ser viable de no ser por la existencia y la utilización de los conceptos de confianza y reputación. A partir de ahora, a todo sistema que utilice de una u otra forma estos conceptos lo denominaremos como sistema de confianza y reputación (o TRS, Trust and Reputation System). Sin embargo, aunque los TRS son uno de los aspectos de nuestras vidas más cotidianos y con un mayor campo de aplicación, el conocimiento que existe actualmente sobre ellos no podría ser más disperso. Existen un gran número de trabajos científicos en todo tipo de áreas de conocimiento: filosofía, psicología, sociología, economía, política, tecnologías de la información, etc. Pero el principal problema es que no existe una visión completa de la confianza y reputación en su sentido más amplio. Cada disciplina focaliza sus estudios en unos aspectos u otros dentro de los TRS, pero ninguna de ellas trata de explotar el conocimiento generado en el resto para mejorar sus prestaciones en su campo de aplicación concreto. Aspectos muy detallados en algunas áreas de conocimiento son completamente obviados por otras, o incluso aspectos tratados por distintas disciplinas, al ser estudiados desde distintos puntos de vista arrojan resultados complementarios que, sin embargo, no son aprovechados fuera de dichas áreas de conocimiento. Esto nos lleva a una dispersión de conocimiento muy elevada y a una falta de reutilización de metodologías, políticas de actuación y técnicas de una disciplina a otra. Debido su vital importancia, esta alta dispersión de conocimiento se trata de uno de los principales problemas que se pretenden resolver con el presente trabajo de tesis. Por otro lado, cuando se trabaja con TRS, todos los aspectos relacionados con la seguridad están muy presentes ya que muy este es un tema vital dentro del campo de la toma de decisiones. Además también es habitual que los TRS se utilicen para desempeñar responsabilidades que aportan algún tipo de funcionalidad relacionada con el mundo de la seguridad. 
Por último no podemos olvidar que el acto de confiar está indefectiblemente unido al de delegar una determinada responsabilidad, y que al tratar estos conceptos siempre aparece la idea de riesgo, riesgo de que las expectativas generadas por el acto de la delegación no se cumplan o se cumplan de forma diferente. Podemos ver por lo tanto que cualquier sistema que utiliza la confianza para mejorar o posibilitar su funcionamiento, por su propia naturaleza, es especialmente vulnerable si las premisas en las que se basa son atacadas. En este sentido podemos comprobar (tal y como analizaremos en más detalle a lo largo del presente documento) que las aproximaciones que realizan las distintas disciplinas que tratan la violación de los sistemas de confianza es de lo más variado. únicamente dentro del área de las tecnologías de la información se ha intentado utilizar alguno de los enfoques de otras disciplinas de cara a afrontar problemas relacionados con la seguridad de TRS. Sin embargo se trata de una aproximación incompleta y, normalmente, realizada para cumplir requisitos de aplicaciones concretas y no con la idea de afianzar una base de conocimiento más general y reutilizable en otros entornos. Con todo esto en cuenta, podemos resumir contribuciones del presente trabajo de tesis en las siguientes. • La realización de un completo análisis del estado del arte dentro del mundo de la confianza y la reputación que nos permite comparar las ventajas e inconvenientes de las diferentes aproximación que se realizan a estos conceptos en distintas áreas de conocimiento. • La definición de una arquitectura de referencia para TRS que contempla todas las entidades y procesos que intervienen en este tipo de sistemas. • La definición de un marco de referencia para analizar la seguridad de TRS. Esto implica tanto identificar los principales activos de un TRS en lo que respecta a la seguridad, así como el crear una tipología de posibles ataques y contramedidas en base a dichos activos. • La propuesta de una metodología para el análisis, el diseño, el aseguramiento y el despliegue de un TRS en entornos reales. Adicionalmente se exponen los principales tipos de aplicaciones que pueden obtenerse de los TRS y los medios para maximizar sus prestaciones en cada una de ellas. • La generación de un software que permite simular cualquier tipo de TRS en base a la arquitectura propuesta previamente. Esto permite evaluar las prestaciones de un TRS bajo una determinada configuración en un entorno controlado previamente a su despliegue en un entorno real. Igualmente es de gran utilidad para evaluar la resistencia a distintos tipos de ataques o mal-funcionamientos del sistema. Además de las contribuciones realizadas directamente en el campo de los TRS, hemos realizado aportaciones originales a distintas áreas de conocimiento gracias a la aplicación de las metodologías de análisis y diseño citadas con anterioridad. • Detección de anomalías térmicas en Data Centers. Hemos implementado con éxito un sistema de deteción de anomalías térmicas basado en un TRS. Comparamos la detección de prestaciones de algoritmos de tipo Self-Organized Maps (SOM) y Growing Neural Gas (GNG). Mostramos como SOM ofrece mejores resultados para anomalías en los sistemas de refrigeración de la sala mientras que GNG es una opción más adecuada debido a sus tasas de detección y aislamiento para casos de anomalías provocadas por una carga de trabajo excesiva. • Mejora de las prestaciones de recolección de un sistema basado en swarm computing y odometría social. 
Gracias a la implementación de un TRS conseguimos mejorar las capacidades de coordinación de una red de robots autónomos distribuidos. La principal contribución reside en el análisis y la validación de las mejoras increméntales que pueden conseguirse con la utilización apropiada de la información existente en el sistema y que puede ser relevante desde el punto de vista de un TRS, y con la implementación de algoritmos de cálculo de confianza basados en dicha información. • Mejora de la seguridad de Wireless Mesh Networks contra ataques contra la integridad, la confidencialidad o la disponibilidad de los datos y / o comunicaciones soportadas por dichas redes. • Mejora de la seguridad de Wireless Sensor Networks contra ataques avanzamos, como insider attacks, ataques desconocidos, etc. Gracias a las metodologías presentadas implementamos contramedidas contra este tipo de ataques en entornos complejos. En base a los experimentos realizados, hemos demostrado que nuestra aproximación es capaz de detectar y confinar varios tipos de ataques que afectan a los protocoles esenciales de la red. La propuesta ofrece unas velocidades de detección muy altas así como demuestra que la inclusión de estos mecanismos de actuación temprana incrementa significativamente el esfuerzo que un atacante tiene que introducir para comprometer la red. Finalmente podríamos concluir que el presente trabajo de tesis supone la generación de un conocimiento útil y aplicable a entornos reales, que nos permite la maximización de las prestaciones resultantes de la utilización de TRS en cualquier tipo de campo de aplicación. De esta forma cubrimos la principal carencia existente actualmente en este campo, que es la falta de una base de conocimiento común y agregada y la inexistencia de una metodología para el desarrollo de TRS que nos permita analizar, diseñar, asegurar y desplegar TRS de una forma sistemática y no artesanal y ad-hoc como se hace en la actualidad. ABSTRACT By collective intelligence we understand a form of intelligence that emerges from the collaboration and competition of many individuals, or strictly speaking, many entities. Based on this simple definition, we can see how this concept is the field of study of a wide range of disciplines, such as sociology, information science or biology, each of them focused in different kinds of entities: human beings, computational resources, or animals. As a common factor, we can point that collective intelligence has always had the goal of being able of promoting a group intelligence that overcomes the individual intelligence of the basic entities that constitute it. This can be accomplished through different mechanisms such as coordination, cooperation, competence, integration, differentiation, etc. Collective intelligence has historically been developed in a parallel and independent way among the different disciplines that deal with it. However, this is not enough anymore due to the advances in information technologies. Nowadays, human beings and machines coexist in environments where collective intelligence has taken a new dimension: we yet have to achieve a better collective behavior than the individual one, but now we also have to deal with completely different kinds of individual intelligences. Therefore, we have a double goal: being able to deal with this heterogeneity and being able to get even more intelligent behaviors thanks to the synergies that the different kinds of intelligence can generate. 
Within the areas of collective intelligence there are several open topics where they always try to get better performances from groups than from the individuals. For example: collective consciousness, collective memory, or collective wisdom. Among all these topics we will focus on collective decision making, that has influence in most of the collective intelligent behaviors. The field of study of decision making is really wide, and its evolution has been completely parallel to the aforementioned collective intelligence. Firstly, it was focused on the individual as the main decision-making entity, but later it became involved in studying social and institutional groups as basic decision-making entities. The first studies within the decision-making discipline were based on simple paradigms, such as pros and cons analysis, criteria prioritization, fulfillment, following orders, or even chance. However, in the same way that studying the community instead of the individual meant a paradigm shift within collective intelligence, collective decision-making means a new challenge for all the related disciplines. Besides, two new main topics come up when dealing with collective decision-making: centralized and decentralized decision-making systems. In this thesis project we focus in the second one, because it is the most interesting based on the opportunities to generate new knowledge and deal with open issues in this area, as well as these results can be put into practice in a wider set of real-life environments. Finally, within the decentralized collective decision-making systems discipline, there are several basic mechanisms that lead to different approaches to the specific problems of this field, for example: leadership, imitation, prescription, or fear. We will focus on trust and reputation. They are one of the most multidisciplinary concepts and with more potential for applying them in every kind of environments. Besides, they have historically shown that they can generate better performance than other decentralized decision-making mechanisms. Shortly, we say trust is the belief of one entity that the outcome of other entities’ actions is going to be in a specific way. It is a subjective concept because the trust of two different entities in another one does not have to be the same. Reputation is the collective idea (or social evaluation) that a group of entities within a system have about another entity based on a specific criterion. Thus, it is a collective concept in its origin. It is important to say that the behavior of most of the collective systems are based on these two simple definitions. In fact, a lot of articles and essays describe how any organization would not be viable if the ideas of trust and reputation did not exist. From now on, we call Trust an Reputation System (TRS) to any kind of system that uses these concepts. Even though TRSs are one of the most common everyday aspects in our lives, the existing knowledge about them could not be more dispersed. There are thousands of scientific works in every field of study related to trust and reputation: philosophy, psychology, sociology, economics, politics, information sciences, etc. But the main issue is that a comprehensive vision of trust and reputation for all these disciplines does not exist. Every discipline focuses its studies on a specific set of topics but none of them tries to take advantage of the knowledge generated in the other disciplines to improve its behavior or performance. 
Detailed topics in some fields are completely obviated in others, and even though the study of some topics within several disciplines produces complementary results, these results are not used outside the discipline where they were generated. This leads us to a very high knowledge dispersion and to a lack in the reuse of methodologies, policies and techniques among disciplines. Due to its great importance, this high dispersion of trust and reputation knowledge is one of the main problems this thesis contributes to solve. When we work with TRSs, all the aspects related to security are a constant since it is a vital aspect within the decision-making systems. Besides, TRS are often used to perform some responsibilities related to security. Finally, we cannot forget that the act of trusting is invariably attached to the act of delegating a specific responsibility and, when we deal with these concepts, the idea of risk is always present. This refers to the risk of generated expectations not being accomplished or being accomplished in a different way we anticipated. Thus, we can see that any system using trust to improve or enable its behavior, because of its own nature, is especially vulnerable if the premises it is based on are attacked. Related to this topic, we can see that the approaches of the different disciplines that study attacks of trust and reputation are very diverse. Some attempts of using approaches of other disciplines have been made within the information science area of knowledge, but these approaches are usually incomplete, not systematic and oriented to achieve specific requirements of specific applications. They never try to consolidate a common base of knowledge that could be reusable in other context. Based on all these ideas, this work makes the following direct contributions to the field of TRS: • The compilation of the most relevant existing knowledge related to trust and reputation management systems focusing on their advantages and disadvantages. • We define a generic architecture for TRS, identifying the main entities and processes involved. • We define a generic security framework for TRS. We identify the main security assets and propose a complete taxonomy of attacks for TRS. • We propose and validate a methodology to analyze, design, secure and deploy TRS in real-life environments. Additionally we identify the principal kind of applications we can implement with TRS and how TRS can provide a specific functionality. • We develop a software component to validate and optimize the behavior of a TRS in order to achieve a specific functionality or performance. In addition to the contributions made directly to the field of the TRS, we have made original contributions to different areas of knowledge thanks to the application of the analysis, design and security methodologies previously presented: • Detection of thermal anomalies in Data Centers. Thanks to the application of the TRS analysis and design methodologies, we successfully implemented a thermal anomaly detection system based on a TRS.We compare the detection performance of Self-Organized- Maps and Growing Neural Gas algorithms. We show how SOM provides better results for Computer Room Air Conditioning anomaly detection, yielding detection rates of 100%, in training data with malfunctioning sensors. We also show that GNG yields better detection and isolation rates for workload anomaly detection, reducing the false positive rate when compared to SOM. 
• Improving the performance of a harvesting system based on swarm computing and social odometry. Through the implementation of a TRS, we achieved to improve the ability of coordinating a distributed network of autonomous robots. The main contribution lies in the analysis and validation of the incremental improvements that can be achieved with proper use information that exist in the system and that are relevant for the TRS, and the implementation of the appropriated trust algorithms based on such information. • Improving Wireless Mesh Networks security against attacks against the integrity, confidentiality or availability of data and communications supported by these networks. Thanks to the implementation of a TRS we improved the detection time rate against these kind of attacks and we limited their potential impact over the system. • We improved the security of Wireless Sensor Networks against advanced attacks, such as insider attacks, unknown attacks, etc. Thanks to the TRS analysis and design methodologies previously described, we implemented countermeasures against such attacks in a complex environment. In our experiments we have demonstrated that our system is capable of detecting and confining various attacks that affect the core network protocols. We have also demonstrated that our approach is capable of rapid attack detection. Also, it has been proven that the inclusion of the proposed detection mechanisms significantly increases the effort the attacker has to introduce in order to compromise the network. Finally we can conclude that, to all intents and purposes, this thesis offers a useful and applicable knowledge in real-life environments that allows us to maximize the performance of any system based on a TRS. Thus, we deal with the main deficiency of this discipline: the lack of a common and complete base of knowledge and the lack of a methodology for the development of TRS that allow us to analyze, design, secure and deploy TRS in a systematic way.
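
To make the trust/reputation vocabulary concrete, the sketch below shows one common formulation from the TRS literature (a beta-reputation score built from positive and negative interaction reports); it is not the architecture, taxonomy or algorithms proposed in this thesis, only an illustrative example of how a trust score can be computed and updated.

```python
from dataclasses import dataclass

@dataclass
class BetaReputation:
    """Beta-reputation score: pseudo-counts of satisfactory/unsatisfactory outcomes."""
    positive: float = 1.0   # prior pseudo-counts (uniform prior)
    negative: float = 1.0

    def update(self, satisfied: bool, weight: float = 1.0) -> None:
        """Record one interaction outcome reported about the rated entity."""
        if satisfied:
            self.positive += weight
        else:
            self.negative += weight

    @property
    def score(self) -> float:
        """Expected probability that the next interaction is satisfactory."""
        return self.positive / (self.positive + self.negative)

rep = BetaReputation()
for outcome in (True, True, False, True):
    rep.update(outcome)
print(f"reputation score after 4 reports: {rep.score:.2f}")   # 4/6 ≈ 0.67
```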

Relevance:

90.00%

Publisher:

Summary:

Sepsis continues to be a major cause of morbidity and mortality as it can readily lead to severe sepsis, septic shock, multiple organ failure and death. The onset can be rapid and difficult to define clinically. Despite the numerous candidate markers proposed in the literature, to date a serum marker for sepsis has not been found. The aim of this study was to assay the serum of clinically diagnosed patients with either a Gram-negative or Gram-positive bacterial sepsis for elevated levels of nine potential markers of sepsis, using commercially produced enzyme-linked immunosorbent assays (ELISA). The purpose was to find a test marker for sepsis that would be helpful to clinicians in cases of uncertain sepsis and consequently expose false-positive blood cultures (BCs) caused by skin or environmental contaminants. Nine test markers were assayed, including IL-6, IL-10, IL-12, TNF-α, lipopolysaccharide binding protein, procalcitonin, sE-selectin, sICAM-1 and a potential differential marker for Gram-positive sepsis, anti-lipid S antibody. A total of 445 patients were enrolled into this study from the Queen Elizabeth Hospital and Selly Oak Hospital (Birmingham). The results showed that all the markers were elevated in patients with sepsis and that patients with a Gram-negative sepsis consistently produced higher median/range serum levels than those with a Gram-positive sepsis. No single marker was able to identify all the septic patients. Combining two markers caused the sensitivities and specificities for a diagnosis of sepsis to increase to within a 90% to 100% range. By a process of elimination, the markers that survived into the last phase were IL-6 with sICAM-1, and the anti-lipid S IgG assay. Defining cut-off levels for a diagnosis of sepsis became problematic, and a semi-blind trial was devised to test the markers in the absence of both clinical details and positive blood cultures. Patients with pyrexia of unknown origin (PUO) and negative BCs were included in this phase (4). The results showed that IL-6 with sICAM-1 are authentic markers of sepsis. There was 82% agreement between the test-marker diagnosis and the clinical diagnosis of sepsis in patients with a Gram-positive BC and 78% agreement in cases of Gram-negative BC. In the PUO group the test markers identified 12 cases of sepsis and the clinical diagnosis 15. The markers were shown to differentiate between early sepsis and sepsis, inflammatory responses and infection. Anti-lipid S with IL-6 proved to be a sensitive marker for Gram-positive infections/sepsis.
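
A sketch of the two-marker combination idea ("positive if either marker exceeds its cut-off"), with illustrative values and arbitrary cut-offs rather than the cut-offs or data derived in the study:

```python
import numpy as np

def combined_positive(il6, sicam1, il6_cutoff, sicam1_cutoff):
    """'Either marker raised' rule for combining two serum markers.
    Cut-off values here are free parameters, not those defined in the study."""
    return (il6 >= il6_cutoff) | (sicam1 >= sicam1_cutoff)

def sens_spec(predicted, septic):
    sens = (predicted & septic).sum() / septic.sum()
    spec = (~predicted & ~septic).sum() / (~septic).sum()
    return sens, spec

# Illustrative values only, not data from the study
il6    = np.array([300, 1200, 40, 850, 25, 2000, 60, 15], float)
sicam1 = np.array([700, 450, 300, 900, 600, 1100, 280, 200], float)
septic = np.array([1, 1, 0, 1, 0, 1, 0, 0], dtype=bool)

pred = combined_positive(il6, sicam1, il6_cutoff=100, sicam1_cutoff=500)
print(sens_spec(pred, septic))   # sensitivity 1.00, specificity 0.75 on this toy data
```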

Relevance:

90.00%

Publisher:

Summary:

Due to the powerful nature of confession evidence, it is imperative that we investigate the factors that affect the likelihood of obtaining true and false confessions. Previous research has been conducted with a paradigm limited to the study of false confessions to an act of negligence, thereby limiting the generalizability of the findings. The first goal of the current study was to introduce a novel paradigm involving a more serious, intentional act that can be used in the study of both true and false confessions. The second goal was to explore the effects of two police interrogation tactics, minimization and an offer of leniency, on true and false confession rates.

Three hundred and thirty-four undergraduates at a large southeastern university were recruited to participate in a study on problem-solving and decision-making. During the course of the laboratory experiment, participants were induced to intentionally break or not break an experimental rule, an act that was characterized as “cheating.” All participants (i.e., both innocent and guilty) were later accused of the act and interrogated. For half of the participants, the interrogator used minimization tactics, which involved downplaying the seriousness of the offense, expressing sympathy, and providing face-saving excuses, in order to encourage the participant to confess. An offer of leniency was also manipulated in which half the participants were offered a “deal” that involved the option of confessing and accepting a known punishment or not confessing and facing the threat of harsher punishment. Results indicated that guilty persons were more likely to confess than innocent persons, and that the use of minimization and an explicit offer of leniency increased both the true and false confession rates. Furthermore, a cumulative effect of techniques was observed, such that the diagnosticity of the interrogation (the ratio of true confessions to false confessions) decreased as the number of techniques used increased. Taken together, the results suggest that caution should be used when implementing these techniques in the interrogation room.

Relevance:

90.00%

Publisher:

Summary:

Human use of the oceans is increasingly in conflict with conservation of endangered species. Methods for managing the spatial and temporal placement of industries such as military, fishing, transportation and offshore energy, have historically been post hoc; i.e. the time and place of human activity is often already determined before assessment of environmental impacts. In this dissertation, I build robust species distribution models in two case study areas, US Atlantic (Best et al. 2012) and British Columbia (Best et al. 2015), predicting presence and abundance respectively, from scientific surveys. These models are then applied to novel decision frameworks for preemptively suggesting optimal placement of human activities in space and time to minimize ecological impacts: siting for offshore wind energy development, and routing ships to minimize risk of striking whales. Both decision frameworks relate the tradeoff between conservation risk and industry profit with synchronized variable and map views as online spatial decision support systems.

For siting offshore wind energy development (OWED) in the U.S. Atlantic (chapter 4), bird density maps are combined across species with weights for OWED sensitivity to collision and displacement, and 10 km² sites are compared against OWED profitability based on average annual wind speed at 90 m hub height and distance to the transmission grid. A spatial decision support system enables toggling between the map and tradeoff plot views by site. A selected site can be inspected for sensitivity to cetaceans throughout the year, so as to capture the months of the year that minimize episodic impacts of pre-operational activities such as seismic airgun surveying and pile driving.

Routing ships to avoid whale strikes (chapter 5) can be similarly viewed as a tradeoff, but is a different problem spatially. A cumulative cost surface is generated from density surface maps and conservation status of cetaceans, before applying as a resistance surface to calculate least-cost routes between start and end locations, i.e. ports and entrance locations to study areas. Varying a multiplier to the cost surface enables calculation of multiple routes with different costs to conservation of cetaceans versus cost to transportation industry, measured as distance. Similar to the siting chapter, a spatial decisions support system enables toggling between the map and tradeoff plot view of proposed routes. The user can also input arbitrary start and end locations to calculate the tradeoff on the fly.
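
A hedged sketch of this routing step follows, using an illustrative whale-density grid, a uniform per-cell transit cost, and scikit-image's minimum-cost-path routine; sweeping the multiplier reproduces the idea of generating multiple routes along the conservation-versus-distance tradeoff, not the dissertation's actual cost surfaces or ports.

```python
import numpy as np
from skimage.graph import route_through_array

rng = np.random.default_rng(2)
distance_cost = np.ones((60, 80))                                # uniform transit cost per cell
whale_density = rng.gamma(shape=0.5, scale=1.0, size=(60, 80))   # toy density surface

def route(multiplier, start=(30, 0), end=(30, 79)):
    """Least-cost route for a given conservation-cost multiplier."""
    cost = distance_cost + multiplier * whale_density            # resistance surface
    path, _ = route_through_array(cost, start, end, fully_connected=True)
    path = np.asarray(path)
    length = len(path)                                           # proxy for distance travelled
    risk = whale_density[path[:, 0], path[:, 1]].sum()           # exposure along the route
    return length, risk

for m in (0.0, 1.0, 5.0):                                        # sweep the tradeoff
    print(m, route(m))
```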

Essential to the input of these decision frameworks are distributions of the species. The two preceding chapters comprise species distribution models from two case study areas, U.S. Atlantic (chapter 2) and British Columbia (chapter 3), predicting presence and density, respectively. Although density is preferred to estimate potential biological removal, per Marine Mammal Protection Act requirements in the U.S., all the necessary parameters, especially distance and angle of observation, are less readily available across publicly mined datasets.

In the case of predicting cetacean presence in the U.S. Atlantic (chapter 2), I extracted datasets from the online OBIS-SEAMAP geo-database, and integrated scientific surveys conducted by ship (n=36) and aircraft (n=16), weighting a Generalized Additive Model by minutes surveyed within space-time grid cells to harmonize effort between the two survey platforms. For each of 16 cetacean species guilds, I predicted the probability of occurrence from static environmental variables (water depth, distance to shore, distance to continental shelf break) and time-varying conditions (monthly sea-surface temperature). To generate maps of presence vs. absence, Receiver Operator Characteristic (ROC) curves were used to define the optimal threshold that minimizes false positive and false negative error rates. I integrated model outputs, including tables (species in guilds, input surveys) and plots (fit of environmental variables, ROC curve), into an online spatial decision support system, allowing for easy navigation of models by taxon, region, season, and data provider.
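
One reasonable reading of the ROC-based threshold choice described above is Youden's J statistic, which jointly balances the false-positive and false-negative error rates; a short sketch with simulated occurrence probabilities (the GAM fitting itself is not reproduced here):

```python
import numpy as np
from sklearn.metrics import roc_curve

def presence_threshold(y_true, p_occurrence):
    """Probability cut-off that maximizes Youden's J = sensitivity - (1 - specificity),
    i.e. one way to jointly minimize false-positive and false-negative rates."""
    fpr, tpr, thresholds = roc_curve(y_true, p_occurrence)
    j = tpr - fpr
    return thresholds[np.argmax(j)]

# Illustrative use with simulated presences and occurrence probabilities
rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=500)
p = np.clip(0.3 * y + rng.beta(2, 5, size=500), 0, 1)
t = presence_threshold(y, p)
presence_map = (p >= t)                 # presence/absence classification
print(f"threshold = {t:.2f}")
```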

For predicting cetacean density within the inner waters of British Columbia (chapter 3), I calculated density from systematic, line-transect marine mammal surveys over multiple years and seasons (summer 2004, 2005, 2008, and spring/autumn 2007) conducted by Raincoast Conservation Foundation. Abundance estimates were calculated using two different methods: Conventional Distance Sampling (CDS) and Density Surface Modelling (DSM). CDS generates a single density estimate for each stratum, whereas DSM explicitly models spatial variation and offers potential for greater precision by incorporating environmental predictors. Although DSM yields a more relevant product for the purposes of marine spatial planning, CDS has proven to be useful in cases where there are fewer observations available for seasonal and inter-annual comparison, particularly for the scarcely observed elephant seal. Abundance estimates are provided on a stratum-specific basis. Steller sea lions and harbour seals are further differentiated by ‘hauled out’ and ‘in water’. This analysis updates previous estimates (Williams & Thomas 2007) by including additional years of effort, providing greater spatial precision with the DSM method over CDS, novel reporting for spring and autumn seasons (rather than summer alone), and providing new abundance estimates for Steller sea lion and northern elephant seal. In addition to providing a baseline of marine mammal abundance and distribution, against which future changes can be compared, this information offers the opportunity to assess the risks posed to marine mammals by existing and emerging threats, such as fisheries bycatch, ship strikes, and increased oil spill and ocean noise issues associated with increases of container ship and oil tanker traffic in British Columbia’s continental shelf waters.

Starting with marine animal observations at specific coordinates and times, I combine these data with environmental data, often satellite derived, to produce seascape predictions generalizable in space and time. These habitat-based models enable prediction of encounter rates and, in the case of density surface models, abundance that can then be applied to management scenarios. Specific human activities, OWED and shipping, are then compared within a tradeoff decision support framework, enabling interchangeable map and tradeoff plot views. These products make complex processes transparent for gaming conservation, industry and stakeholders towards optimal marine spatial management, fundamental to the tenets of marine spatial planning, ecosystem-based management and dynamic ocean management.