892 results for Detection system
Abstract:
Insect bite hypersensitivity (IBH) is an allergic dermatitis of horses caused by bites of Culicoides and sometimes Simulium spp. The aim of this investigation was to identify Simulium allergens associated with IBH. A phage surface display cDNA library expressing recombinant Simulium vittatum salivary gland proteins was screened using sera of IBH-affected horses shown by immunoblot to be sensitized to S. vittatum salivary gland proteins, resulting in the identification of seven cDNAs encoding IgE-binding proteins. The deduced amino acid sequences of these proteins showed sequence similarities to an antigen 5-like protein (Sim v 1), a serine protease inhibitor (Sim v 2), two alpha-amylases (Sim v 3 and Sim v 4), and three S. vittatum erythema proteins (SVEPs). The cDNA inserts were subcloned and expressed as His₆-tagged proteins in Escherichia coli and purified by Ni²⁺-chelate affinity chromatography. Mice were immunized with the seven recombinant proteins, and the resulting antibodies were tested against the recombinant proteins and against salivary gland extract (SGE) of S. vittatum and Culicoides nubeculosus in immunoblot analyses. Mouse antibodies specific for r-Sim v 1 recognized a band of about 32 kDa in immunoblots of both S. vittatum and C. nubeculosus SGE, which was also detected by serum IgE of IBH-affected horses. Preincubation of horse serum with r-Sim v 1 completely inhibited IgE binding to the 32 kDa band, demonstrating the presence of cross-reactive antigen 5-like proteins in both SGEs. Determination of IgE levels against the r-Sim v proteins and crude S. vittatum extract by ELISA in sera from 25 IBH-affected and 20 control horses showed that IBH-affected horses had significantly higher IgE levels than controls against r-Sim v 1, 2, 3 and 4 and against S. vittatum extract, whereas the r-SVEPs showed only marginal IgE binding. Further analyses showed that 60% of IBH-affected horses reacted to r-Sim v 1, suggesting that it could be a major allergen for IBH. Between 20% and 40% of the IBH-affected horses reacted with r-Sim v 2, 3 or 4. Combining the results obtained with the four r-Sim v proteins showed that 92% of the IBH-affected but only 15% of the healthy horses had IgE against one or more of the four r-Sim v proteins. Seventy percent of the healthy horses had detectable IgE against S. vittatum extract, indicating a low specificity of the detection system used. Optimization of the ELISA system will be required to determine reliable cut-off values for the IBH-related allergens, and the in vivo relevance of these allergens needs to be carefully assessed.
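The abstract does not say how cut-off values might be derived; a common convention is to set the cut-off at the mean plus two standard deviations of the healthy-control readings and then compute sensitivity and specificity against it. A minimal sketch, with made-up optical-density values standing in for real ELISA data:

```python
# Hypothetical sketch of ELISA cut-off derivation; all readings are illustrative.
import numpy as np

def cutoff_mean_plus_2sd(control_od):
    """Cut-off as mean + 2 SD of the healthy-control optical densities."""
    control_od = np.asarray(control_od, dtype=float)
    return control_od.mean() + 2.0 * control_od.std(ddof=1)

def sensitivity_specificity(affected_od, control_od, cutoff):
    affected_od = np.asarray(affected_od, dtype=float)
    control_od = np.asarray(control_od, dtype=float)
    sensitivity = (affected_od > cutoff).mean()   # true-positive rate
    specificity = (control_od <= cutoff).mean()   # true-negative rate
    return sensitivity, specificity

# Made-up OD readings for 25 affected and 20 control horses
rng = np.random.default_rng(0)
affected = rng.normal(1.2, 0.4, 25)
controls = rng.normal(0.5, 0.2, 20)
cut = cutoff_mean_plus_2sd(controls)
sens, spec = sensitivity_specificity(affected, controls, cut)
print(f"cut-off={cut:.2f}  sensitivity={sens:.0%}  specificity={spec:.0%}")
```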
Abstract:
The widespread species Escherichia coli includes a broad variety of different types, ranging from highly pathogenic strains causing worldwide outbreaks of severe disease to avirulent isolates which are part of the normal intestinal flora or which are well-characterized, safe laboratory strains. The pathogenicity of a given E. coli strain is mainly determined by specific virulence factors, which include adhesins, invasins, toxins and capsules. They are often organized in large genetic blocks, either on the chromosome ('pathogenicity islands'), on large plasmids or on phages, and can be transmitted horizontally between strains. In this review we summarize the current knowledge of the virulence attributes which determine the pathogenic potential of E. coli strains and the methodology available to assess the virulence of E. coli isolates. We also focus on a recently developed procedure based on a broad-range detection system for E. coli-specific virulence genes that makes it possible to determine the potential pathogenicity, and its nature, in E. coli strains from various sources. This procedure can be used to determine the pathotype of E. coli strains in medical diagnostics, to assess the virulence and health risks of E. coli contaminating water, food and the environment, and to study potential reservoirs of virulence genes which might contribute to the emergence of new forms of pathogenic E. coli.
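As an illustration of how a gene-based detection system can be mapped to pathotypes, the sketch below assigns candidate pathotypes from a set of detected virulence genes. The marker table is a simplified assumption for illustration, not the panel used by the authors:

```python
# Illustrative pathotype assignment from detected virulence genes.
# The marker table is a simplified assumption, not the authors' actual panel.
PATHOTYPE_MARKERS = {
    "EHEC": {"stx1", "stx2", "eae"},   # Shiga toxins, intimin
    "EPEC": {"eae", "bfpA"},           # intimin, bundle-forming pilus
    "ETEC": {"elt", "est"},            # heat-labile/heat-stable enterotoxins
    "EIEC": {"ipaH"},                  # invasion plasmid antigen
}

def candidate_pathotypes(detected_genes):
    """Return pathotypes sharing at least one marker with the isolate."""
    detected = set(detected_genes)
    hits = {}
    for pathotype, markers in PATHOTYPE_MARKERS.items():
        shared = detected & markers
        if shared:
            hits[pathotype] = sorted(shared)
    return hits

print(candidate_pathotypes(["stx2", "eae", "hlyA"]))
# {'EHEC': ['eae', 'stx2'], 'EPEC': ['eae']}
```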
Abstract:
Background: The lymphocyte transformation test (LTT) is used for the in vitro diagnosis of drug hypersensitivity reactions. While its specificity is over 90%, its sensitivity is limited and depends on the type of reaction, the drug involved and possibly the time interval between the event and the analysis. Removal of regulatory T cells (Treg/CD25hi) from in vitro stimulated cell cultures was previously reported to be a promising way to increase the sensitivity of proliferation tests. Objective: The aim of this investigation was to evaluate the effect of removing regulatory T cells on the sensitivity of the LTT. Methods: Patients with well-documented drug hypersensitivity were recruited. Peripheral blood mononuclear cells, isolated CD3+ T cells and CD3+ T cells depleted of the CD25hi fraction were used as effector cells in the LTT. Irrelevant drugs were also included to determine specificity. ³H-thymidine incorporation was used as the detection system, and results were expressed as a stimulation index (SI). Results: The SIs of 7 of 11 LTTs were reduced after a mean time interval of 10.5 months (LTT 1 vs. LTT 2). Removal of the CD25hi fraction, which was FOXP3+ and suppressed drug-induced proliferation, resulted in an increased response to the relevant drugs. Sensitivity increased from 25% to 82.35%, with a markedly enhanced mean SI (from 2.05 to 6.02). Specificity was not affected. Conclusion: Removal of Treg/CD25hi cells can increase the frequency and strength of drug-specific proliferation without affecting specificity. This approach might be useful in drug hypersensitivity reactions with borderline responses or a long time interval since the reaction.
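For readers unfamiliar with the readout, the stimulation index is simply the ratio of ³H-thymidine counts per minute in drug-stimulated cultures to counts in unstimulated cultures. A small sketch, with illustrative counts and an assumed SI ≥ 2 positivity threshold (thresholds vary by drug and protocol):

```python
# Sketch of the stimulation index (SI) readout: counts with drug divided by
# counts with medium alone. Counts and the SI >= 2 threshold are assumptions.
def stimulation_index(cpm_drug, cpm_medium):
    return cpm_drug / cpm_medium

def is_positive(si, threshold=2.0):
    return si >= threshold

# Same donor, before and after depleting the CD25hi (Treg) fraction:
si_bulk = stimulation_index(cpm_drug=1850, cpm_medium=900)      # ~2.06, borderline
si_depleted = stimulation_index(cpm_drug=5420, cpm_medium=900)  # ~6.02, clearly positive
print(is_positive(si_bulk), is_positive(si_depleted))
```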
Abstract:
Systems for the identification and registration of cattle have gradually been receiving attention for use in syndromic surveillance, a relatively recent approach for the early detection of infectious disease outbreaks. Real-time or near real-time monitoring of deaths or stillbirths reported to these systems offers an opportunity to detect temporal or spatial clusters of increased mortality that could be caused by an infectious disease epidemic. In Switzerland, such data are recorded in the "Tierverkehrsdatenbank" (TVD). To investigate the potential of the Swiss TVD for syndromic surveillance, 3 years of data (2009-2011) were assessed in terms of data quality, including timeliness of reporting and completeness of geographic data. Two time series, consisting of reported on-farm deaths and stillbirths, were retrospectively analysed to define and quantify the temporal patterns that result from non-health-related factors. Geographic data were almost always present in the TVD data, often at different spatial scales. On-farm deaths were reported to the database by farmers in a timely fashion; reports of stillbirths were less timely. Timeliness and geographic coverage are two important features of disease surveillance systems, highlighting the suitability of the TVD for use in a syndromic surveillance system. Both time series exhibited temporal patterns associated with non-health-related factors. To avoid false positive signals, these patterns need to be removed from the data or otherwise accounted for before aberration detection algorithms are applied in real time. Evaluating mortality data reported to systems for the identification and registration of cattle is of value for comparing national data systems and as a first step towards a Europe-wide early detection system for emerging and re-emerging cattle diseases.
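One way to remove or account for such non-health-related patterns, sketched below under illustrative assumptions (a synthetic daily count series, a month-by-weekday baseline, and a simple three-sigma alarm rule), is to subtract the expected seasonal and day-of-week values before thresholding the residuals:

```python
# Sketch: remove predictable day-of-week and seasonal effects from a daily
# mortality count series before applying an aberration-detection rule.
# The series and the alarm rule are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2009-01-01", "2011-12-31", freq="D")
baseline = 50 + 10 * np.sin(2 * np.pi * days.dayofyear / 365.25)  # seasonality
weekday_effect = np.where(days.dayofweek < 5, 5, -8)  # fewer weekend reports
counts = rng.poisson(np.clip(baseline + weekday_effect, 1, None))
series = pd.Series(counts, index=days)

# Expected value from month x weekday means; residuals should be pattern-free.
expected = series.groupby([series.index.month, series.index.dayofweek]).transform("mean")
residual = series - expected

# Simple alarm: residual exceeding three standard deviations of the residuals.
alarms = residual > 3 * residual.std()
print(f"{alarms.sum()} alarm days out of {len(series)}")
```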
Abstract:
An interleaved, dual-resonance volume localization technique for ¹H/³¹P magnetic resonance spectroscopy has been designed, implemented on a 2 T imager/spectrometer, and verified with phantom studies. Localization techniques, including several single-voxel techniques and spectroscopic imaging, were implemented, and studies were performed to compare the efficiency of each sequence of ¹H/³¹P spectral acquisitions. The sequence chosen was a hybrid of the stimulated-echo single-voxel technique and the spectroscopic imaging technique. Water suppression during the ¹H spectral acquisitions was accomplished by the use of three narrow-bandwidth RF saturation pulses in combination with three spoiler gradients. The spoiler gradient amplitudes were selected on the basis of a numerical solution of the Bloch equations. A post-acquisition water suppression algorithm was used to minimize any residual water signal. For interleaved ¹H/³¹P acquisitions, a dual-resonance RF coil was constructed and interfaced to the existing RF detection system via a custom-designed dual-resonance transcoupler and switching system. Programmable attenuators were incorporated to allow for changes in receiver and transmitter attenuation on the fly. To provide the rapidly switched gradient fields required for the ¹H/³¹P acquisitions, an actively screened gradient coil system was designed and implemented. With this system, gradient field rise times on the order of 100 μs were obtained. These rapid switching times were necessary for minimizing intrasequence delays and for improving localization quality and water suppression efficiency. The interleaved ¹H/³¹P volume localization technique was tested using a two-compartment phantom. Analysis of the data showed that the spectral contamination was less than three percent. One-to-one spatial correspondence of the ¹H and ³¹P spectra was verified, allowing direct correlation of the spectral data with a standard magnetic resonance image.
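The role of the spoiler amplitudes can be approximated without a full Bloch simulation: a spoiler of amplitude G applied for a duration τ imparts a position-dependent phase γGxτ across the voxel, and the residual transverse signal is the magnitude of the averaged complex phasor. A sketch with illustrative numbers (the thesis itself solved the full Bloch equations):

```python
# Simplified view of spoiler-gradient selection: stronger gradient-time
# products disperse transverse magnetization more completely across a voxel.
# Numbers (voxel size, duration, amplitudes) are illustrative assumptions.
import numpy as np

GAMMA = 2 * np.pi * 42.577e6  # 1H gyromagnetic ratio, rad s^-1 T^-1

def residual_signal(grad_T_per_m, tau_s, voxel_m=0.01, n_spins=2048):
    x = np.linspace(-voxel_m / 2, voxel_m / 2, n_spins)
    phase = GAMMA * grad_T_per_m * x * tau_s  # phase spread across the voxel
    return abs(np.exp(1j * phase).mean())     # surviving coherent signal

for g_mT in (1, 5, 10):  # candidate spoiler amplitudes in mT/m
    r = residual_signal(g_mT * 1e-3, 3e-3)
    print(f"G = {g_mT:2d} mT/m -> residual signal {r:.4f}")
```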
Abstract:
Recently, vision-based advanced driver-assistance systems (ADAS) have received renewed interest as a means of enhancing driving safety. In particular, due to their high performance-to-cost ratio, mono-camera systems are emerging as the main focus of this field of work. In this paper we present a novel on-board road modeling and vehicle detection system, developed as part of the European I-WAY project. The system relies on a robust estimation of the perspective of the scene, which adapts to the dynamics of the vehicle and generates a stabilized rectified image of the road plane. This rectified plane is used by a recursive Bayesian classifier, which assigns pixels to classes corresponding to the elements of interest in the scenario. This stage works as an intermediate layer that isolates subsequent modules, since it absorbs the inherent variability of the scene. The system has been tested on-road in different scenarios, including varied illumination and adverse weather conditions, and the results have proved remarkable even for such complex scenarios.
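The recursive Bayesian stage can be pictured as a per-pixel Bayes update applied frame after frame: each pixel holds a posterior over scene classes that is multiplied by the current frame's likelihood and renormalized. A toy sketch, with an assumed class set and made-up likelihoods (the paper's actual likelihood model is not specified in the abstract):

```python
# Sketch of a per-pixel recursive Bayesian update in the spirit of the
# classifier described above. Class set and likelihoods are assumptions.
import numpy as np

CLASSES = ("road", "lane_marking", "vehicle")

def update_posterior(prior, likelihood):
    """One Bayes step per pixel: posterior is proportional to likelihood * prior.

    prior, likelihood: arrays of shape (H, W, n_classes).
    """
    post = likelihood * prior
    return post / post.sum(axis=-1, keepdims=True)

# Toy 2x2 rectified image, uniform prior, made-up per-class likelihoods
prior = np.full((2, 2, 3), 1.0 / 3.0)
likelihood = np.random.default_rng(2).dirichlet(np.ones(3), size=(2, 2))
posterior = update_posterior(prior, likelihood)
labels = np.asarray(CLASSES)[posterior.argmax(axis=-1)]
print(labels)
```

In a real pipeline the posterior of one frame becomes the prior of the next, which is what lets the stage absorb frame-to-frame variability.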
Abstract:
Programming languages are the medium programmers use to tell computers what we want them to do. From assembly language, which translates one by one the instructions a computer interprets, to high-level languages, the goal has been to develop languages closer to the way humans think and express themselves. Logic programming languages such as Prolog use the language of first-order logic, so that the programmer can state the premises of the problem to be solved without worrying about how it will be solved. Solving the problem amounts to finding a deduction of the goal from the premises, which corresponds to what we understand as the execution of a program. Ciao is an implementation of Prolog (http://www.ciao-lang.org) that uses SLD resolution, which traverses the search tree depth-first; this can lead to the execution of an infinite search branch (an infinite loop) without ever producing answers. Since Ciao is a modular system, it allows extensions that implement alternative resolution strategies, such as tabling (OLDT). Tabling is an alternative method based on memorizing calls and their answers, so that calls are not repeated and answers can be reused without recomputation. Some programs that loop forever under SLD produce all their answers and terminate under tabling. The tabling package is an implementation of tabling based on the CHAT algorithm. This implementation is a beta version that lacks a memory manager. Memory management in the tabling package is particularly important, because resolution with tabling reduces computation time (by not repeating calls) at the cost of increased memory requirements (to store the calls and answers). The objective of this work is therefore to implement a memory management mechanism in Ciao with the tabling package loaded. To this end, the following were implemented: an error detection system that detects when the computer runs out of memory and triggers reinitialization of the system; a procedure that adjusts the pointers of the tabling package that point into the WAM after some of the WAM memory areas have been reallocated; and a memory manager for the tabling package that detects when its memory areas need to be expanded, requests more memory and adjusts the pointers. To help readers unfamiliar with this topic, we describe the data that Ciao and the tabling package store in the dynamic memory areas we want to manage. The test cases developed to evaluate the implementation of the memory manager show that having a dynamic memory manager allows programs to run in a larger number of cases, and that the memory management policy affects program execution speed.
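The effect of tabling can be illustrated with a toy memoization scheme: record each call before recursing, so that a repeated call reuses the answers accumulated so far instead of looping. The sketch below is a single-pass approximation in Python; real tabling engines such as CHAT iterate to a fixpoint so that the answers of mutually recursive calls are complete:

```python
# Sketch of the tabling idea: table each call and its answers so that a
# query over a cyclic graph terminates where naive depth-first (SLD-style)
# search would loop forever.
EDGES = {"a": ["b"], "b": ["a", "c"], "c": []}  # note the a <-> b cycle

def reachable(node, table={}):
    """Nodes reachable from `node`, computed with a call/answer table."""
    if node in table:            # call already tabled: reuse its answers
        return table[node]
    table[node] = set()          # table the call *before* recursing
    answers = set()
    for nxt in EDGES[node]:
        answers.add(nxt)
        answers |= reachable(nxt, table)
    table[node] |= answers
    return table[node]

print(sorted(reachable("a")))    # ['a', 'b', 'c'] -- and it terminates
```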
Abstract:
The concept of service-oriented architecture has been extensively explored in software engineering, because it produces architectures made up of several interconnected modules that are easy to reuse when building new systems. This approach to design would be impossible without interconnection mechanisms such as REST (Representational State Transfer) services, which allow module communication while minimizing coupling. However, this low coupling brings disadvantages, such as a lack of transparency, which makes it difficult to systematically create tests without knowledge of the inner workings of a system. In this article, we present an automatic error detection system for REST services, based on a statistical analysis of the responses produced by multiple service invocations. Thus, a service can be systematically tested without knowing its full specification. The method can find errors in REST services that could not be identified by traditional testing methods, and provides limited testing coverage for services whose response format is unknown. It can also be useful as a complement to other testing mechanisms.
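The statistical idea can be sketched as follows: invoke the same endpoint repeatedly, summarize each response by a signature (status code plus the set of JSON keys), and flag signatures that occur in only a small fraction of calls. The URL, sample size and threshold below are assumptions for illustration, not the paper's actual procedure:

```python
# Sketch: flag responses whose status code or JSON key set deviates from
# the majority observed across repeated invocations of a REST endpoint.
from collections import Counter
import json
import urllib.request

def sample_responses(url, n=30):
    observations = []
    for _ in range(n):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                body = resp.read()
                keys = tuple(sorted(json.loads(body))) if body else ()
                observations.append((resp.status, keys))
        except Exception as exc:  # network/parse failures are observations too
            observations.append(("error", type(exc).__name__))
    return observations

def outliers(observations, min_share=0.1):
    """Signatures seen in fewer than `min_share` of calls are suspicious."""
    counts = Counter(observations)
    total = len(observations)
    return {sig: c for sig, c in counts.items() if c / total < min_share}

obs = sample_responses("https://api.example.com/items")  # hypothetical URL
print(outliers(obs))
```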
Abstract:
The purpose of this work is to analyze the dynamic properties of a double-curvature arch dam (the La Tajera dam, Guadalajara) in order to update a finite element model. To this end, high-sensitivity, wirelessly synchronized accelerometers were used, and the natural frequencies, damping ratios and mode shapes were identified under ambient excitation (wind, passing vehicles) using output-only identification techniques. First, a finite element model of the dam and its foundation was created, incorporating the effect of the reservoir level. With the numerically obtained dynamic properties of the structure, a measurement plan was drawn up for the points considered most significant. Once the measurements had been taken, the results were analyzed by means of an Operational Modal Analysis, which estimates the modal parameters (natural frequencies, damping ratios and mode shapes) experimentally, and the reliability of these estimates was assessed. Then comes the core of this work: updating the initial finite element model to match the experimentally recorded dynamic behavior. The updated model may be used within a structural health monitoring and damage detection system or, as proposed in this work, for the analysis of the seismic response of the coupled dam-reservoir-foundation system.
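The simplest operational-modal-analysis step, identifying natural frequencies by peak-picking the power spectral density of ambient-vibration records, can be sketched as follows. The synthetic two-mode signal and all parameters are illustrative stand-ins for the real accelerometer data:

```python
# Sketch: estimate natural frequencies by peak-picking the PSD of an
# ambient-vibration acceleration record. Signal and thresholds are assumed.
import numpy as np
from scipy.signal import welch, find_peaks

fs = 200.0                       # sampling rate, Hz (assumed)
t = np.arange(0, 600, 1 / fs)    # ten minutes of ambient response
rng = np.random.default_rng(3)
# Two lightly damped modes at 2.4 Hz and 5.1 Hz buried in broadband noise
signal = (np.sin(2 * np.pi * 2.4 * t) + 0.6 * np.sin(2 * np.pi * 5.1 * t)
          + 2.0 * rng.standard_normal(t.size))

freqs, psd = welch(signal, fs=fs, nperseg=4096)
peaks, _ = find_peaks(psd, prominence=psd.max() * 0.1)
print("identified frequencies (Hz):", np.round(freqs[peaks], 2))
```

Damping ratios and mode shapes require multi-channel records and richer identification techniques, but the peak-picked frequencies are already the quantities compared against the finite element model during updating.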
Abstract:
By collective intelligence we understand a form of intelligence that emerges from the collaboration and participation of many individuals or, strictly speaking, many entities. Based on this simple definition, we can see how this concept is a field of study for a wide range of disciplines, such as sociology, information science and biology, each of them focused on different kinds of entities: human beings, computational resources or animals. As a common factor, collective intelligence has always had the goal of promoting a group intelligence that surpasses the individual intelligence of the basic entities that constitute it, through mechanisms such as coordination, cooperation, competition, integration and differentiation. Collective intelligence has historically developed in a parallel and independent way among the different disciplines that deal with it. However, due to advances in information technologies, this is no longer enough. Nowadays, human beings and machines coexist, through all kinds of communication networks and interfaces, in environments where collective intelligence has taken on a new dimension: we still aim to achieve collective behavior superior to that of the individual entities, but those individual intelligences are now completely different from one another. We therefore face a double challenge: managing this great heterogeneity while obtaining even more intelligent behavior thanks to the synergies that the different kinds of intelligence can generate.
Within collective intelligence there are several open topics that seek better performance from groups than from individuals, for example collective consciousness, collective memory and collective wisdom. Among these we focus on collective decision making, which is present in practically every possible intelligent behavior. The field of decision making is very wide, and its evolution has paralleled that of collective intelligence: it first focused on the individual as the decision-making entity and later developed from a social and institutional point of view. The first studies of decision making were based on very simple paradigms: analysis of pros and cons, prioritization by maximizing some parameter of the outcome, minimal fulfillment of requirements by the alternatives, consultation of experts or authorized entities, or even chance. However, just as the step from the individual to the group meant a new dimension within collective intelligence, collective decision making poses a new challenge for all the related disciplines. Two main fronts arise within collective decision making: centralized and decentralized decision systems. In this thesis we focus on the latter, which is the more attractive both for the opportunities to generate new knowledge and work on currently open problems, and for the applicability of the results that may be obtained.
Finally, within decentralized decision systems there are several fundamental mechanisms that lead to different approaches to the problems of this field, for example leadership, imitation, prescription and fear. We focus on one of the most multidisciplinary mechanisms, applicable in all kinds of disciplines, which has historically been shown to yield performance far superior to other decentralized decision mechanisms: trust and reputation. In short, trust is the belief of one entity that another will carry out a certain activity in a specific way. It is in principle subjective, because the trust of two different entities in a third need not be the same. Reputation, in turn, is the collective idea (or social evaluation) that the entities of a system have about another entity with respect to a given criterion. It is thus collective information, but unique within a system: it is not associated with each entity separately but shared equally by all of them. The vast majority of collective systems are based on these two simple definitions; indeed, many essays argue that no kind of organization would be viable without the existence and use of trust and reputation. From now on, we call any system that uses these concepts in one way or another a Trust and Reputation System (TRS).
Even though TRSs are among the most common everyday aspects of our lives, with a very wide field of application, the existing knowledge about them could hardly be more dispersed. There are numerous scientific works in every related area of knowledge: philosophy, psychology, sociology, economics, politics, information sciences, etc. But the main problem is that no comprehensive vision of trust and reputation in its broadest sense exists. Every discipline focuses its studies on certain aspects of TRSs, but none of them tries to exploit the knowledge generated in the others to improve performance in its own field of application. Aspects treated in great detail in some areas of knowledge are completely ignored in others, and even when several disciplines study the same aspects from different points of view and obtain complementary results, those results are not exploited outside the disciplines that produced them. This leads to a very high dispersion of knowledge and a lack of reuse of methodologies, policies and techniques across disciplines. Because of its importance, this high dispersion of knowledge is one of the main problems this thesis aims to solve.
When working with TRSs, all aspects related to security are ever-present, since security is a vital issue within decision-making systems. Moreover, TRSs are often used to carry out responsibilities that provide some kind of security-related functionality. Finally, we cannot forget that the act of trusting is inextricably linked to delegating a specific responsibility and, when dealing with these concepts, the idea of risk is always present: the risk that the expectations created by the act of delegation are not met, or are met differently. Any system that uses trust to improve or enable its operation is therefore, by its very nature, especially vulnerable if the premises on which it is based are attacked. As we analyze in detail throughout this document, the approaches the various disciplines take to attacks on trust systems are extremely varied. Only within information technology have some of the approaches of other disciplines been used to address problems related to TRS security, but these attempts are usually incomplete, not systematic, and oriented to the requirements of specific applications rather than to consolidating a more general base of knowledge reusable in other contexts.
With all this in mind, the contributions of this thesis to the field of TRSs can be summarized as follows:
• A complete state-of-the-art analysis of trust and reputation that allows us to compare the advantages and disadvantages of the approaches taken to these concepts in different areas of knowledge.
• The definition of a reference architecture for TRSs covering all the entities and processes involved in this kind of system.
• The definition of a reference framework for analyzing the security of TRSs. This involves identifying the main security assets of a TRS and creating a taxonomy of possible attacks and countermeasures based on those assets.
• A methodology for the analysis, design, securing and deployment of TRSs in real-life environments. Additionally, we describe the main kinds of applications that can be built on TRSs and the means to maximize their performance in each of them.
• A software component that can simulate any kind of TRS based on the proposed architecture. This makes it possible to evaluate the performance of a TRS under a given configuration in a controlled environment prior to its deployment in a real one, and it is equally useful for evaluating resistance to different kinds of attacks or system malfunctions.
In addition to the contributions made directly to the field of TRSs, we have made original contributions to different areas of knowledge by applying the analysis, design and security methodologies presented above:
• Detection of thermal anomalies in data centers. We successfully implemented a thermal anomaly detection system based on a TRS and compared the detection performance of Self-Organizing Map (SOM) and Growing Neural Gas (GNG) algorithms. SOM provides better results for computer-room air-conditioning anomaly detection, yielding detection rates of 100% in training data with malfunctioning sensors, while GNG yields better detection and isolation rates for anomalies caused by excessive workload, reducing the false positive rate compared to SOM.
• Improvement of the harvesting performance of a system based on swarm computing and social odometry. Through the implementation of a TRS, we improved the coordination capabilities of a distributed network of autonomous robots. The main contribution lies in the analysis and validation of the incremental improvements that can be achieved through the proper use of information existing in the system that is relevant from a TRS point of view, and through the implementation of trust algorithms based on that information.
• Improvement of the security of Wireless Mesh Networks against attacks on the integrity, confidentiality or availability of the data and communications supported by these networks. Thanks to the implementation of a TRS, we improved detection times against this kind of attack and limited its potential impact on the system.
• Improvement of the security of Wireless Sensor Networks against advanced attacks, such as insider attacks and unknown attacks. Using the methodologies presented, we implemented countermeasures against such attacks in complex environments. Our experiments demonstrate that our approach is capable of quickly detecting and confining several kinds of attacks that affect the core network protocols, and that the inclusion of these early-reaction mechanisms significantly increases the effort an attacker must invest to compromise the network.
Finally, we can conclude that this thesis produces knowledge that is useful and applicable in real-life environments and that allows us to maximize the performance obtained from the use of TRSs in any field of application. In this way we address the main deficiency of this discipline: the lack of a common, aggregated base of knowledge and of a methodology for the development of TRSs that allows us to analyze, design, secure and deploy them in a systematic way, rather than in the artisanal, ad hoc way it is done today.
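As a concrete example of the kind of computation a TRS performs, the sketch below implements the classic beta reputation model, in which an entity's reputation is the expected value of a Beta distribution over its observed positive and negative interactions. This is a standard textbook model used for illustration, not the specific architecture proposed in the thesis:

```python
# Sketch of the beta reputation model: reputation as the expected value of
# Beta(positive + 1, negative + 1) over recorded interaction outcomes.
from dataclasses import dataclass

@dataclass
class Reputation:
    positive: int = 0   # satisfactory interactions observed
    negative: int = 0   # unsatisfactory interactions observed

    def record(self, satisfactory: bool) -> None:
        if satisfactory:
            self.positive += 1
        else:
            self.negative += 1

    @property
    def score(self) -> float:
        """Expected value of Beta(positive + 1, negative + 1), in (0, 1)."""
        return (self.positive + 1) / (self.positive + self.negative + 2)

node = Reputation()
for outcome in (True, True, True, False, True):
    node.record(outcome)
print(f"reputation score: {node.score:.2f}")  # 0.71
```

The +1 terms encode a uniform prior, so an entity with no history starts at a neutral 0.5 rather than an extreme value, one of the properties that makes this family of models robust against newcomers.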
Abstract:
This work is a study of the feasibility of three possible options for capturing the position and posture of people in real environments, together with the design and implementation of a capture prototype for each of them. A comparison is also included in order to highlight the pros and cons of each solution. The first alternative is a high-quality, high-precision infrared optical tracking system, OptiTrack; the second is based on a low-cost solution, Microsoft's Kinect peripheral; and the third combines both devices to find a balance between precision and cost, taking the strong points of each to make up for the weaknesses of the other. An important aspect of this work is that the capture prototypes are aimed at real working environments (specifically, at capturing the movements of personnel working in an operating room). Tests were therefore needed to minimize the effect of light sources on the infrared systems, to determine how many people each device can capture simultaneously without degrading its performance, and to assess how invasive the devices are for the workers (tracking markers), as well as to establish appropriate mechanisms to minimize the impact of occlusions, using interpolation methods aided by knowledge of the context, the movement constraints of the human body and its evolution over time. Knowledge was developed of the operation and configuration of devices such as NaturalPoint's OptiTrack capture system and the Kinect motion detection system developed by Microsoft, as well as of the Unity cross-platform game engine and development environment, the C# programming language that this environment uses for its control scripts, and the communication protocols, such as VRPN and NatNet, between the different systems that make up the prototypes.
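The occlusion handling mentioned above can be reduced to its core step: filling short gaps in a marker trajectory by interpolating over the missing frames. A minimal sketch of that step (real pipelines would add body-model constraints and contextual knowledge, as the text describes):

```python
# Sketch: fill short occlusion gaps in an (n_frames, 3) marker track by
# linear interpolation over the missing frames. Data below are illustrative.
import numpy as np

def fill_gaps(trajectory):
    """Linearly interpolate NaN runs, axis by axis."""
    traj = np.asarray(trajectory, dtype=float).copy()
    frames = np.arange(len(traj))
    for axis in range(traj.shape[1]):
        missing = np.isnan(traj[:, axis])
        traj[missing, axis] = np.interp(
            frames[missing], frames[~missing], traj[~missing, axis])
    return traj

# Marker occluded at frames 2-3 (NaN), e.g. a surgeon blocking the cameras
track = np.array([[0.0, 0.0, 1.0],
                  [1.0, 0.1, 1.0],
                  [np.nan, np.nan, np.nan],
                  [np.nan, np.nan, np.nan],
                  [4.0, 0.4, 1.0]])
print(fill_gaps(track))
```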
Abstract:
Reassembly of enzymes from peptide fragments has been used as a strategy for understanding the evolution, folding, and role of individual subdomains in catalysis and regulation of activity. We demonstrate an oligomerization-assisted enzyme reassembly strategy whereby fragments are covalently linked to independently folding and interacting domains whose interactions serve to promote efficient refolding and complementation of the fragments, forming active enzyme. We show that active murine dihydrofolate reductase (E.C. 1.5.1.3) can be reassembled from complementary N- and C-terminal fragments when fused to homodimerizing GCN4 leucine zipper-forming sequences as well as to heterodimerizing protein partners. Reassembly is detected by an in vivo selection assay in Escherichia coli and in vitro. The effects of mutations that disrupt fragment affinity or enzyme activity were assessed. The steady-state kinetic parameters for the reassembled mutant (Phe-31 → Ser) were determined; they are not significantly different from those of the full-length mutant. The strategy described here provides a general approach for protein dissection and domain swapping studies, with the capacity both for rapid in vivo screening and for in vitro characterization. Further, the strategy suggests a simple in vivo enzyme-based detection system for protein–protein interactions, which we illustrate with two examples: the ras GTPase paired with the ras-binding domain of raf, and the FK506-binding protein–rapamycin complex paired with the target of rapamycin, TOR2.
Abstract:
The factors that regulate the perpetuation and invasiveness of rheumatoid synovitis have been the subject of considerable inquiry, and the possibility that nonimmunologic defects can contribute to the disease has not been rigorously addressed. Using a mismatch detection system, we report that synovial tissue from the joints of patients with severe chronic rheumatoid arthritis contains mutant p53 transcripts, which were not found in skin samples from the same patients or in the joints of patients with osteoarthritis. Mutant p53 transcripts also were identified in synoviocytes cultured from rheumatoid joints. The predicted amino acid substitutions in p53 were identical or similar to those commonly observed in a variety of tumors and might influence the growth and survival of rheumatoid synoviocytes. Thus, mutations in p53 and subsequent selection of the mutant cells may occur in the joints of patients as a consequence of inflammation and contribute to the pathogenesis of the disease.
Abstract:
We report automated DNA sequencing in 16-channel microchips. A microchip prefilled with sieving matrix is aligned on a heating plate affixed to a movable platform. Samples are loaded into sample reservoirs by using an eight-tip pipetting device, and the chip is docked with an array of electrodes in the focal plane of a four-color scanning detection system. Under computer control, high voltage is applied to the appropriate reservoirs in a programmed sequence that injects and separates the DNA samples. An integrated four-color confocal fluorescent detector automatically scans all 16 channels. The system routinely yields more than 450 bases in 15 min in all 16 channels. In the best case, using an automated base-calling program, 543 bases have been called at an accuracy of >99%. Separations, including automated chip loading and sample injection, are normally completed in less than 18 min. The advantages of DNA sequencing on capillary electrophoresis chips include uniform signal intensity and tolerance of high DNA template concentration. To understand the fundamentals of these unique features, we developed a theoretical treatment of cross-channel chip injection that we call the differential concentration effect. We present experimental evidence consistent with the predictions of the theory.
Abstract:
Linkage and association analyses are performed to identify loci affecting disease susceptibility by scoring previously characterized sequence variations such as microsatellites and single nucleotide polymorphisms. Lack of markers in regions of interest, as well as difficulty in adapting various methods to high-throughput settings, often limits the effectiveness of these analyses. We have adapted the Escherichia coli mismatch detection system, employing the factors MutS, MutL and MutH, for use in PCR-based, automated, high-throughput genotyping and mutation detection of genomic DNA. Optimal sensitivity and signal-to-noise ratios were obtained in a straightforward fashion, because the detection reaction proved to depend principally on the monovalent cation concentration and the MutL concentration. Quantitative relationships between the optimal values of these parameters and the length of the DNA test fragment were demonstrated, in support of the translocation model for the mechanism of action of these enzymes rather than the molecular switch model. Thus, rapid, sequence-independent optimization was possible for each new genomic target region. Other factors potentially limiting the flexibility of mismatch scanning, such as the positioning of dam recognition sites within the target fragment, have also been investigated. We developed several strategies, which can easily be adapted to automation, for limiting the analysis to intersample heteroduplexes. Thus, the principal barriers to the use of this methodology, which we have designated PCR candidate region mismatch scanning, in cost-effective, high-throughput settings have been removed.