934 results for computed tomographic scan artifact, false positive, facet subluxation


Relevance: 100.00%

Abstract:

The main purpose of a gene interaction network is to map the relationships among genes that would otherwise remain out of sight when a genomic study is tackled. DNA microarrays allow the measurement of the expression of thousands of genes at the same time; these data constitute the numeric seed for the induction of gene networks. In this paper, we propose a new approach to building gene networks by means of Bayesian classifiers, variable selection and bootstrap resampling. The interactions induced by the Bayesian classifiers are based both on the expression levels and on the phenotype information of the supervised variable. Feature selection and bootstrap resampling add reliability and robustness to the overall process, removing false positive findings. The consensus among all the induced models produces a hierarchy of dependencies and, thus, of variables. Biologists can define the depth level of the model hierarchy, so the set of interactions and genes involved can vary from a sparse to a dense set. Experimental results show that these networks perform well on classification tasks. The biological validation matches previous biological findings and opens new hypotheses for future studies.
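
A minimal sketch of the bootstrap-and-selection loop described above, assuming scikit-learn, synthetic data and a simple consensus threshold; the authors' actual interaction-induction step is not reproduced here:

```python
# Sketch of the general bootstrap/feature-selection idea; not the authors'
# implementation. The synthetic data, scikit-learn tools and threshold are
# illustrative assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))          # 60 samples x 200 genes (synthetic)
y = rng.integers(0, 2, size=60)         # phenotype (supervised variable)

n_boot, k = 100, 10
counts = np.zeros(X.shape[1])           # how often each gene is selected
for _ in range(n_boot):
    idx = rng.integers(0, len(y), len(y))        # bootstrap resample
    sel = SelectKBest(f_classif, k=k).fit(X[idx], y[idx])
    chosen = sel.get_support(indices=True)
    GaussianNB().fit(X[idx][:, chosen], y[idx])  # Bayesian classifier on the subset
    counts[chosen] += 1

# Consensus: genes selected in a large fraction of resamples sit at the top
# of the hierarchy; raising the threshold yields a sparser set, lowering it
# a denser one.
threshold = 0.5
consensus_genes = np.where(counts / n_boot >= threshold)[0]
print("consensus genes:", consensus_genes)
```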

Relevance: 100.00%

Abstract:

The area of Human-Machine Interface is growing fast due to its high importance in all technological systems. The basic idea behind designing human-machine interfaces is to enrich communication with the technology in a natural and easy way. Gesture interfaces are a good example of transparent interfaces. Such interfaces must properly identify the action the user wants to perform, so accurate gesture recognition is of the highest importance. However, most systems based on gesture recognition use complex methods requiring high-resource devices. In this work, we propose to model gestures by capturing their temporal properties, which significantly reduces storage requirements, and to use clustering techniques, namely self-organizing maps and an unsupervised genetic algorithm, for their classification. We further propose to train a number of classifiers with different parameters and combine their decisions using majority voting in order to decrease the false positive rate. The main advantage of the approach is its simplicity, which enables implementation on devices with limited resources, and therefore at low cost. The testing results demonstrate its high potential.
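
The majority-voting step lends itself to a short illustration. A minimal sketch, assuming a simple agreement threshold and made-up gesture labels (the SOM and genetic-algorithm classifiers themselves are not implemented here):

```python
# Several classifiers trained with different parameters vote; a gesture is
# accepted only if most agree. The 'reject' convention is an assumption.
from collections import Counter

def majority_vote(predictions, min_agreement=0.5):
    """Return the majority label, or None (reject) when agreement is too low.

    Rejecting low-agreement inputs is what lowers the false positive rate:
    a spurious movement rarely wins a clear majority across the ensemble.
    """
    votes = Counter(predictions)
    label, n = votes.most_common(1)[0]
    return label if n / len(predictions) >= min_agreement else None

# Example: five classifiers (e.g., SOMs trained with different parameters)
print(majority_vote(["swipe", "swipe", "circle", "swipe", "swipe"]))  # 'swipe'
print(majority_vote(["swipe", "circle", "tap", "wave", "push"]))      # None
```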

Relevance: 100.00%

Abstract:

This study discusses the optimisation of a selective and differential medium to facilitate the isolation of Schizosaccharomyces (a genus with a low incidence compared to other microorganisms), either to select individuals from this genus for industrial purposes, especially in light of the recent approval of the use of yeasts from this genus in the wine industry by the International Organisation of Vine and Wine, or to detect the presence of such yeasts, for the many authors who consider them food spoilers. To this end, we studied various selective and differential agents based on the main physiological characteristics of this genus, such as its high resistance to high concentrations of sugar, sulphur dioxide, sorbic acid, benzoic acid and acetic acid, or its capacity for maloethanolic fermentation. The selective medium is based on the resistance of the genus to the antibiotic actidione and its high resistance to inhibitory agents such as benzoic acid, compared to the microorganisms which could otherwise give rise to false positive results. Malic acid was used as a differential factor owing to the ability of this genus to metabolise it to ethanol, which allows the degradation of this compound to be detected. Lastly, the medium was successfully used to isolate strains of Schizosaccharomyces pombe from honey.

Relevance: 100.00%

Abstract:

By collective intelligence we understand a form of intelligence that emerges from the collaboration and competition of many individuals or, strictly speaking, many entities. Based on this simple definition, we can see how this concept is the field of study of a wide range of disciplines, such as sociology, information science or biology, each of them focused on different kinds of entities: human beings, computational resources, or animals. As a common factor, we can point out that collective intelligence has always had the goal of promoting a group intelligence that overcomes the individual intelligence of the basic entities that constitute it. This can be accomplished through different mechanisms, such as coordination, cooperation, competition, integration and differentiation. Collective intelligence has historically developed in a parallel and independent way among the different disciplines that deal with it; due to the advances in information technologies, however, this is no longer enough. Nowadays, human beings and machines coexist in environments where collective intelligence has taken on a new dimension: we still have to achieve collective behaviour that is better than individual behaviour, but now we also have to deal with completely different kinds of individual intelligence. We therefore have a double goal: being able to manage this heterogeneity, and being able to obtain even more intelligent behaviour thanks to the synergies that the different kinds of intelligence can generate.
Within collective intelligence there are several open topics in which the aim is always to obtain better performance from groups than from individuals, for example collective consciousness, collective memory, or collective wisdom. Among all these topics we will focus on collective decision making, which influences most collective intelligent behaviours. The field of study of decision making is very wide, and its evolution has been completely parallel to that of collective intelligence described above: it focused first on the individual as the main decision-making entity, and later moved on to studying social and institutional groups as basic decision-making entities. The first studies within the decision-making discipline were based on simple paradigms, such as pros-and-cons analysis, criteria prioritization, minimal fulfilment of requirements, consulting experts or authorities, or even chance. However, in the same way that studying the community instead of the individual meant a paradigm shift within collective intelligence, collective decision making poses a new challenge for all the related disciplines. In addition, two main new topics come up when dealing with collective decision making: centralized and decentralized decision-making systems. In this thesis we focus on the latter, both because of the opportunities it offers to generate new knowledge and tackle currently open problems, and because its results can be put into practice in a wider set of real-life environments.
Finally, within the discipline of decentralized decision-making systems there are several basic mechanisms that lead to different approaches to the specific problems of this field, for example leadership, imitation, prescription, or fear. We will focus on trust and reputation. They are among the most multidisciplinary of these concepts, with the greatest potential for application in every kind of environment, and they have historically shown that they can yield much better performance than other decentralized decision-making mechanisms. In short, trust is the belief of one entity that the outcome of another entity's actions will be of a specific kind. It is a subjective concept, because the trust of two different entities in a third one does not have to be the same. Reputation, on the other hand, is the collective idea (or social evaluation) that the entities of a system have about another entity of that system with respect to a specific criterion; it is therefore collective in origin, a single piece of information within a system that belongs to all its entities equally. The behaviour of most collective systems is based on these two simple definitions; indeed, many works argue that no kind of organization would be viable if the concepts of trust and reputation did not exist. From now on, we refer to any system that uses these concepts in one way or another as a Trust and Reputation System (TRS).
Even though TRSs are among the most everyday aspects of our lives, with a very wide field of application, the existing knowledge about them could not be more dispersed. There are a great number of scientific works in every related field of study: philosophy, psychology, sociology, economics, politics, information sciences, etc. The main problem, however, is that a comprehensive vision of trust and reputation in its broadest sense does not exist. Every discipline focuses its studies on certain aspects of TRSs, but none of them tries to exploit the knowledge generated in the others to improve performance in its own field of application. Aspects treated in great detail in some fields are completely ignored in others and, even when several disciplines study the same aspects from different points of view and obtain complementary results, those results are not taken advantage of outside the areas that produced them. This leads to a very high dispersion of knowledge and to a lack of reuse of methodologies, policies and techniques from one discipline to another. Given its vital importance, this high dispersion of knowledge is one of the main problems this thesis sets out to solve. Moreover, when working with TRSs, all aspects related to security are ever-present, since security is a vital issue within the field of decision making, and TRSs are frequently used to carry out responsibilities that provide some kind of security-related functionality.
Finally, we cannot forget that the act of trusting is invariably attached to the act of delegating a specific responsibility and that, when dealing with these concepts, the idea of risk always appears: the risk that the expectations created by the act of delegation will not be fulfilled, or will be fulfilled in a different way. Any system that uses trust to improve or enable its operation is therefore, by its very nature, especially vulnerable if the premises it is based on are attacked. In this respect we can see (as we analyze in more detail throughout this document) that the approaches the different disciplines take to violations of trust systems are extremely varied. Only within the area of information technologies has there been any attempt to use the approaches of other disciplines to address problems related to TRS security; however, these attempts are usually incomplete and are normally carried out to meet the requirements of specific applications rather than to consolidate a more general base of knowledge that could be reused in other contexts. With all this in mind, the contributions of this thesis can be summarized as follows:
• A complete state-of-the-art analysis of the world of trust and reputation, which allows us to compare the advantages and disadvantages of the different approaches to these concepts in different areas of knowledge.
• The definition of a reference architecture for TRSs that covers all the entities and processes involved in this kind of system.
• The definition of a reference framework for analyzing the security of TRSs. This involves identifying the main security assets of a TRS and creating a typology of possible attacks and countermeasures based on those assets.
• The proposal of a methodology for the analysis, design, securing and deployment of a TRS in real environments. Additionally, we present the main kinds of applications that can be built on TRSs and the means to maximize their performance in each of them.
• The development of software that can simulate any kind of TRS based on the architecture proposed above. This makes it possible to evaluate the performance of a TRS under a given configuration in a controlled environment prior to its deployment in a real one, and is equally useful for evaluating its resistance to different kinds of attacks or malfunctions.
In addition to the contributions made directly to the field of TRSs, we have made original contributions to several areas of knowledge through the application of the analysis and design methodologies mentioned above:
• Detection of thermal anomalies in data centers. We successfully implemented a thermal anomaly detection system based on a TRS, comparing the detection performance of Self-Organizing Map (SOM) and Growing Neural Gas (GNG) algorithms. We show that SOM provides better results for anomalies in the Computer Room Air Conditioning systems, yielding detection rates of 100% in training data with malfunctioning sensors, while GNG yields better detection and isolation rates for anomalies caused by excessive workload, reducing the false positive rate compared to SOM.
• Improvement of the harvesting performance of a system based on swarm computing and social odometry. Through the implementation of a TRS, we improved the coordination capabilities of a distributed network of autonomous robots. The main contribution lies in the analysis and validation of the incremental improvements that can be achieved through the proper use of the information existing in the system that is relevant from a TRS point of view, and through the implementation of trust algorithms based on that information.
• Improvement of the security of Wireless Mesh Networks against attacks on the integrity, confidentiality or availability of the data and communications supported by these networks. Thanks to the implementation of a TRS, we improved the detection time for this kind of attack and limited its potential impact on the system.
• Improvement of the security of Wireless Sensor Networks against advanced attacks, such as insider attacks, unknown attacks, etc. Using the methodologies presented above, we implemented countermeasures against this kind of attack in complex environments. Our experiments demonstrated that our approach is capable of detecting and confining several types of attack that affect the essential protocols of the network, offers very fast detection, and shows that the inclusion of these early-reaction mechanisms significantly increases the effort an attacker has to invest to compromise the network.
Finally, we can conclude that this thesis generates knowledge that is useful and applicable in real environments and that allows the performance resulting from the use of TRSs to be maximized in any field of application. We thereby address the main deficiencies currently existing in this field: the lack of a common, aggregated base of knowledge, and the lack of a methodology for the development of TRSs that allows us to analyze, design, secure and deploy them in a systematic way rather than in the artisanal, ad-hoc manner used today.
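
The trust and reputation definitions above can be made concrete with a toy model. A minimal sketch, assuming a simple moving-average update and outcomes scored in [0, 1]; it illustrates the subjectivity of trust and the system-wide nature of reputation, not the reference architecture proposed in the thesis:

```python
# Toy trust/reputation model; NOT the thesis' reference architecture.
# Entity names, the neutral 0.5 prior and the update weight are assumptions.
from collections import defaultdict

class SimpleTRS:
    def __init__(self):
        # trust[a][b]: subjective belief of entity a about entity b (0..1)
        self.trust = defaultdict(dict)

    def record_interaction(self, truster, trustee, outcome, weight=0.3):
        """Update truster's subjective trust in trustee from an outcome in [0, 1]."""
        old = self.trust[truster].get(trustee, 0.5)   # neutral prior
        self.trust[truster][trustee] = (1 - weight) * old + weight * outcome

    def reputation(self, entity):
        """Collective, system-wide evaluation: aggregate of all subjective trusts."""
        scores = [t[entity] for t in self.trust.values() if entity in t]
        return sum(scores) / len(scores) if scores else 0.5

trs = SimpleTRS()
trs.record_interaction("a", "c", 1.0)   # c fulfilled a's expectations
trs.record_interaction("a", "c", 1.0)   # ... twice
trs.record_interaction("b", "c", 0.0)   # c failed b's expectations
print(trs.reputation("c"))              # single value shared by the whole system
```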

Relevance: 100.00%

Abstract:

Nucleic acid sequence-based amplification (NASBA) has proved to be an ultrasensitive method for HIV-1 diagnosis in plasma, even in the primary stage of HIV infection. This technique was combined with fluorescence correlation spectroscopy (FCS), which enables online detection of the HIV-1 RNA molecules amplified by NASBA. A fluorescently labeled DNA probe at nanomolar concentration was introduced into the NASBA reaction mixture and hybridized to a distinct sequence of the amplified RNA molecule. The specific hybridization and extension of this probe during the amplification reaction, resulting in an increase of its diffusion time, was monitored online by FCS. As a consequence, once a critical concentration of 0.1–1 nM (the threshold for unaided FCS detection) had been reached, the number of amplified RNA molecules in the further course of the reaction could be determined. Evaluation of the hybridization/extension kinetics allowed an estimation of the initial HIV-1 RNA concentration present at the beginning of amplification. This initial HIV-1 RNA number enables discrimination between positive and false-positive samples (caused, for instance, by carry-over contamination); such discrimination is an essential requirement for all diagnostic methods using amplification systems (PCR as well as NASBA). Quantitation of HIV-1 RNA in plasma by the combination of NASBA with FCS may also be useful in assessing the efficacy of anti-HIV agents, especially in the early infection stage, when standard ELISA antibody tests often give negative results.
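
The back-calculation of the initial template number can be illustrated with a toy model. A minimal sketch, assuming ideal exponential amplification with an assumed rate constant (the study's actual kinetic evaluation is more involved): the later the amplified RNA crosses the FCS detection threshold, the fewer copies were present initially, which is what separates true positives from carry-over contamination.

```python
# Toy back-calculation of initial template number from the time at which the
# amplified RNA crosses the FCS detection threshold. Ideal exponential growth
# and the rate constant are simplifying assumptions, not the study's model.
import math

def initial_copies(t_threshold_min, k_per_min, threshold_copies):
    """N(t) = N0 * exp(k*t)  =>  N0 = N_threshold * exp(-k * t_threshold)."""
    return threshold_copies * math.exp(-k_per_min * t_threshold_min)

K = 0.3        # assumed amplification rate (per minute)
N_TH = 6e7     # copies at ~0.1 nM in a 1 microlitre volume (order of magnitude)
print(initial_copies(30, K, N_TH))   # early crossing: ~7.4e3 copies, plausible positive
print(initial_copies(60, K, N_TH))   # late crossing: <1 copy, suspect carry-over
```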

Relevance: 100.00%

Abstract:

We present a method for discovering conserved sequence motifs from families of aligned protein sequences. The method has been implemented as a computer program called emotif (http://motif.stanford.edu/emotif). Given an aligned set of protein sequences, emotif generates a set of motifs with a wide range of specificities and sensitivities. emotif can also generate motifs that describe possible subfamilies of a protein superfamily. A disjunction of such motifs can often represent the entire superfamily with high specificity and sensitivity. We have used emotif to generate sets of motifs from all 7,000 protein alignments in the blocks and prints databases. The resulting database, called identify (http://motif.stanford.edu/identify), contains more than 50,000 motifs. For each alignment, the database contains several motifs whose probabilities of matching a false positive range from 10⁻¹⁰ to 10⁻⁵. Highly specific motifs are well suited for searching entire proteomes while generating very few false predictions. identify assigns biological functions to 25–30% of all proteins encoded by the Saccharomyces cerevisiae genome and by several bacterial genomes. In particular, identify assigned functions to 172 proteins of previously unknown function in the yeast genome.
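
The quoted false-positive probabilities can be understood with a rough calculation: the chance that a random sequence window matches a motif is approximately the product, over motif positions, of the background frequencies of the allowed residues. A minimal sketch, assuming a uniform 1/20 background and a made-up motif (not emotif's actual scoring):

```python
# Rough false-positive probability of a motif under a uniform amino-acid
# background. Both the uniform background and the example motif are
# simplifying assumptions, not emotif's actual statistics.
from functools import reduce

BACKGROUND = 1 / 20  # uniform amino-acid frequency (assumption)

def match_probability(motif):
    """motif: list of allowed-residue sets, e.g. [{'D','E'}, {'K'}, ...]."""
    return reduce(lambda p, pos: p * len(pos) * BACKGROUND, motif, 1.0)

# e.g. the pattern [DE]-[LIVM]-K-x-H ('x' matches any residue)
motif = [set("DE"), set("LIVM"), set("K"), set("ACDEFGHIKLMNPQRSTVWY"), set("H")]
print(f"{match_probability(motif):.2e}")  # ~5.00e-05 per random 5-residue window
```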

Relevance: 100.00%

Abstract:

Precise mapping of DNA methylation patterns in CpG islands has become essential for understanding diverse biological processes such as the regulation of imprinted genes, X chromosome inactivation, and tumor suppressor gene silencing in human cancer. We describe a new method, MSP (methylation-specific PCR), which can rapidly assess the methylation status of virtually any group of CpG sites within a CpG island, independent of the use of methylation-sensitive restriction enzymes. This assay entails initial modification of DNA by sodium bisulfite, converting all unmethylated, but not methylated, cytosines to uracil, and subsequent amplification with primers specific for methylated versus unmethylated DNA. MSP requires only small quantities of DNA, is sensitive to 0.1% methylated alleles of a given CpG island locus, and can be performed on DNA extracted from paraffin-embedded samples. MSP eliminates the false positive results inherent to previous PCR-based approaches which relied on differential restriction enzyme cleavage to distinguish methylated from unmethylated DNA. In this study, we demonstrate the use of MSP to identify promoter region hypermethylation changes associated with transcriptional inactivation in four important tumor suppressor genes (p16, p15, E-cadherin, and von Hippel-Lindau) in human cancer.
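
The bisulfite-conversion logic that MSP exploits is easy to show in a few lines. An illustrative sketch with a made-up sequence and methylation map, showing how the methylated and unmethylated templates diverge so that specific primers can tell them apart:

```python
# Illustrative in-silico bisulfite conversion: unmethylated cytosines read as
# T after conversion and PCR, methylated cytosines stay C, so primers can
# distinguish the two templates. The sequence and methylation map are made up.
def bisulfite_convert(seq, methylated_positions):
    """Convert unmethylated C -> T (as seen after PCR); methylated C is protected."""
    return "".join(
        "C" if base == "C" and i in methylated_positions else
        "T" if base == "C" else base
        for i, base in enumerate(seq)
    )

seq = "ACGTTCGAC"
print(bisulfite_convert(seq, methylated_positions={1, 5}))  # methylated CpGs: ACGTTCGAT
print(bisulfite_convert(seq, methylated_positions=set()))   # unmethylated:    ATGTTTGAT
```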

Relevance: 100.00%

Abstract:

Introduction: Congenital Adrenal Hyperplasia due to 21-hydroxylase deficiency (CAH) is a disease with high neonatal mortality and is therefore eligible for public Newborn Screening (NBS) programs. CAH is caused by mutations in the CYP21A2 gene, which impair enzymatic activity to different degrees and result in a wide spectrum of clinical manifestations. Despite the efficiency of NBS in diagnosing severe cases, the high rate of false-positive results (FPR), mainly related to prematurity, is one of its biggest problems. False-negative results can also occur, however, when samples are collected before 24 hours of life. In Brazil, the timing of neonatal sample collection differs among municipalities and may occur on the third day of life or later. Objective: To evaluate whether neonatal 17OH-progesterone (N17OHP) values from samples collected on the third day of life differ significantly from those collected from the fourth day onwards, and to determine which percentile (99.5 or 99.8) should be used as the N17OHP cutoff, according to birth weight and age at collection, in order to yield a lower FPR rate. Methods: N17OHP values of 271,810 newborns were evaluated retrospectively by immunofluorometric assay, according to age at collection (G1: 48–72 h and G2: > 72 h) and birth weight (P1: <= 1,500 g; P2: 1,501–2,000 g; P3: 2,001–2,500 g; P4: >= 2,500 g). Abnormal results were confirmed in serum by tandem mass spectrometry (LC-MS/MS). Affected and/or asymptomatic newborns with persistently elevated serum 17OHP underwent molecular study by sequencing of the CYP21A2 gene. Results: N17OHP values in group G1 were significantly lower than in G2 in all weight groups (p < 0.001). The FPR rate in G1 and G2 was 0.2% for the 99.8th percentile and 0.5% for the 99.5th percentile in both groups. The 99.8th percentile of N17OHP was the best cutoff to distinguish unaffected from affected newborns, with the following values: G1 (P1: 120; P2: 71; P3: 39; P4: 20 ng/mL) and G2 (P1: 173; P2: 90; P3: 66; P4: 25 ng/mL). Twenty-six newborns in group G1 had the salt-wasting (SW) form (13 males and 13 females), with N17OHP ranging from 31 to 524 ng/mL, and twenty newborns in group G2 (8 males and 12 females), with N17OHP ranging from 53 to 736 ng/mL. In both groups, three newborns with the simple virilizing form (1 male and 2 females) were found, with N17OHP values ranging from 36 to 51 ng/mL. No false-negative results were reported. The positive predictive value (PPV) of the filter-paper test was 5.6% and 14.1% in groups G1 and G2, respectively, using the 99.8th percentile, and 2.3% and 7% in groups G1 and G2 using the 99.5th percentile. Among the cases with abnormal screening results (FPR), 29 also had elevated serum 17OHP when measured by LC-MS/MS. Asymptomatic cases were followed until serum 17OHP normalized and/or underwent molecular study, which identified two newborns with genotypes predicting the nonclassical form. Conclusion: the best strategy to optimize the diagnosis of CAH in newborn screening is to standardize N17OHP cutoffs in two groups according to age at collection (before and after 72 hours), subdivided into four weight groups. Using the 99.8th percentile cutoffs remains effective for the diagnosis of CAH due to 21-hydroxylase deficiency in newborn screening, significantly reducing the FPR rate without missing the diagnosis of the SW form.
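
The stratified cutoff strategy can be expressed as a simple lookup. A sketch using the 99.8th-percentile values quoted above; the function is an illustrative reading of the protocol, not the screening program's actual software:

```python
# Stratified 99.8th-percentile N17OHP cutoffs (ng/mL) by age at collection
# (before/after 72 h) and birth-weight group, as reported in the abstract.
# The lookup function itself is an illustrative assumption.
CUTOFFS_P998 = {          # {age group: [(max weight in g, cutoff), ...]}
    "G1": [(1500, 120), (2000, 71), (2500, 39), (float("inf"), 20)],   # 48-72 h
    "G2": [(1500, 173), (2000, 90), (2500, 66), (float("inf"), 25)],   # > 72 h
}

def screen_positive(n17ohp_ng_ml, age_at_collection_h, birth_weight_g):
    group = "G1" if age_at_collection_h <= 72 else "G2"
    cutoff = next(c for w, c in CUTOFFS_P998[group] if birth_weight_g <= w)
    return n17ohp_ng_ml > cutoff

print(screen_positive(45, age_at_collection_h=60, birth_weight_g=3100))  # True (45 > 20)
print(screen_positive(45, age_at_collection_h=96, birth_weight_g=1400))  # False (45 <= 173)
```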

Relevance: 100.00%

Abstract:

Metallic artifacts cause artifactual thickening of stent walls on computed tomography (CT), with an apparent reduction of the stent lumen. The objective of this prospective cross-sectional study, with a repeated-measures design and blinded observers, of 24 consecutive patients with 71 coronary stents, was to compare stent wall thickness on CT after reconstruction with an edge-enhancing algorithm versus a standard algorithm. Coronary CT angiography was performed on a 256-slice scanner, with images reconstructed using both the edge-enhancing and the standard algorithm. Stent wall thickness was measured by the orthogonal (diameters) and circumferential (circumferences) methods. Stent image quality was rated on an ordinal scale, and the data were analyzed with mixed linear models and proportional-odds logistic regression. Stent wall thickness was lower with the edge-enhancing algorithm than with the standard algorithm, using both the orthogonal (0.97 ± 0.02 vs 1.09 ± 0.03 mm, respectively; p < 0.001) and the circumferential method (1.13 ± 0.02 vs 1.21 ± 0.02 mm, respectively; p < 0.001). The former caused less overestimation relative to nominal thickness than the latter, with both the orthogonal (0.89 ± 0.19 vs 1.00 ± 0.26 mm, respectively; p < 0.001) and circumferential methods (1.06 ± 0.26 vs 1.13 ± 0.31 mm, respectively; p = 0.005), and reduced overestimation by 6%. Image quality scores were better with the edge-enhancing algorithm (OR 3.71; 95% CI 2.33–5.92; p < 0.001). In conclusion, image reconstruction with the edge-enhancing algorithm yields thinner stent walls, less overestimation, and better image quality scores than the standard algorithm.
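
For illustration, a sketch of how wall thickness could be derived under each measurement method, assuming the orthogonal method uses outer and inner diameters and the circumferential method uses traced circumferences; the study's exact measurement protocol is not reproduced here:

```python
# Hedged sketch of the two measurement methods named above; the assumption is
# that orthogonal = from diameters and circumferential = from circumferences.
import math

def thickness_orthogonal(outer_diameter_mm, inner_diameter_mm):
    return (outer_diameter_mm - inner_diameter_mm) / 2

def thickness_circumferential(outer_circ_mm, inner_circ_mm):
    # circumference = pi * diameter, so thickness = (C_out - C_in) / (2 * pi)
    return (outer_circ_mm - inner_circ_mm) / (2 * math.pi)

# A stent measuring 3.0 mm outer and 2.8 mm inner diameter:
print(thickness_orthogonal(3.0, 2.8))                           # ~0.1 mm
print(thickness_circumferential(math.pi * 3.0, math.pi * 2.8))  # ~0.1 mm
```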

Relevance: 100.00%

Abstract:

Pulmonary neoplasms remain the leading cause of cancer death in Quebec, accounting for nearly 6,000 deaths per year. In recent years, stereotactic ablative radiotherapy (SABR) has emerged as an alternative to anatomical resection for inoperable patients with early-stage non-small cell lung cancer. This treatment modality delivers high doses, typically 30–60 Gy in 1–8 fractions, with the aim of precisely targeting the treatment volume while sparing healthy tissue. In 2009, the Centre Hospitalier de l'Université de Montréal acquired a state-of-the-art SABR device, the CyberKnife™ (CK), a linear accelerator producing a 6 MV photon beam guided by a robotic arm, allowing non-coplanar treatments with sub-millimetre precision. This thesis is dedicated to characterizing certain clinical and physical issues associated with CK treatment. It is built around two peer-reviewed scientific articles: on the one hand, a prospective clinical study presenting the advantages of pulmonary SABR, a technique that offers excellent long-term tumour control and helps preserve quality of life and pulmonary function; on the other hand, a medical physics study illustrating the limits of breath-hold CT image acquisition for CK treatment planning.

Relevance: 100.00%

Abstract:

The sediments of Hydrate Ridge/Cascadia margin contain extensive amounts of gas hydrate. A total of 57 sediment samples including gas hydrate were preserved in liquid nitrogen and imaged using computerized tomography to visualize hydrate distribution and shape. The analysis gives evidence that gas hydrate in vein and veinlet structures is the predominant shape in the deeper gas hydrate stability zone, with dipping angles from 30° to 90° (vertical).

Relevance: 100.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 100.00%

Abstract:

Objective: To determine the efficacy and toxicity of chemotherapy in the treatment of canine nasal tumours. Design: Retrospective clinical study. Procedure: Eight dogs with histologically confirmed nasal tumours were staged by means of complete blood count, serum biochemical analysis, cytological analysis of fine needle aspirates of the regional lymph nodes, thoracic radiographs and a computed tomography scan of the nasal cavity. All dogs were treated with alternating doses of doxorubicin, carboplatin and oral piroxicam, monitored for side effects of chemotherapy, and evaluated for response to treatment by computed tomography scan of the nasal cavity after the first four treatments. Results: Complete remission was achieved in four dogs, partial remission occurred in two dogs, and two had stable disease on the basis of the computed tomography evaluation. Clinical signs resolved after one to two doses of chemotherapy in all dogs. Conclusions: This chemotherapy protocol was efficacious and well tolerated in this series of eight cases of canine nasal tumours.

Relevance: 100.00%

Abstract:

The Roche Cobas Amplicor system is widely used for the detection of Neisseria gonorrhoeae but is known to cross-react with some commensal Neisseria spp., so a confirmatory test is required. The most common target for confirmatory tests is the cppB gene of N. gonorrhoeae. However, the cppB gene is also present in other Neisseria spp. and is absent in some N. gonorrhoeae isolates. As a result, laboratories targeting this gene run the risk of obtaining both false-positive and false-negative results. In the study presented here, a newly developed N. gonorrhoeae LightCycler assay (NGpapLC) targeting the N. gonorrhoeae porA pseudogene was tested. The NGpapLC assay was used to test 282 clinical samples, and the results were compared to those obtained using a testing algorithm combining the Cobas Amplicor system (Roche Diagnostics, Sydney, Australia) and an in-house LightCycler assay targeting the cppB gene (cppB-LC). In addition, the specificity of the NGpapLC assay was investigated by testing a broad panel of bacteria including isolates of several Neisseria spp. The NGpapLC assay proved to have clinical sensitivity comparable to that of the cppB-LC assay, and testing of the bacterial panel showed it to be highly specific for N. gonorrhoeae DNA. The results of this study show that the NGpapLC assay is a suitable alternative to the cppB-LC assay for confirmation of N. gonorrhoeae-positive results obtained with Cobas Amplicor.
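
The two-step algorithm, screen then confirm against a second target, can be sketched as a small decision function (names are illustrative):

```python
# Sketch of the confirmatory testing algorithm described above: a Cobas
# Amplicor screen followed by confirmation against a second target (here the
# porA pseudogene), so cross-reaction with commensal Neisseria spp. does not
# produce a reportable false positive. Function and label names are assumptions.
def report_gonorrhoeae(amplicor_positive: bool, porA_confirmed: bool) -> str:
    if not amplicor_positive:
        return "negative"
    # Screen-positive samples are reported only when the confirmatory assay,
    # targeting a sequence absent from commensal species, agrees.
    return "positive" if porA_confirmed else "not confirmed (possible cross-reaction)"

print(report_gonorrhoeae(True, True))    # positive
print(report_gonorrhoeae(True, False))   # not confirmed (possible cross-reaction)
```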

Relevance: 100.00%

Abstract:

Physical activity can significantly reduce the risk of cardiovascular disease, diabetes, some forms of cancer, osteoporosis, obesity, falls and fractures, and some mental health problems. While the benefits of physical activity are clear, there is a slightly increased risk of sudden death while exercising (compared with while sedentary), especially in untrained people undertaking unaccustomed vigorous activity. Routine exercise testing yields a significant number of false-positive results, and has not been shown to prevent exercise-related acute cardiac events. There is no convincing evidence that exercise is itself associated with osteoarthritis, but significant joint injury which occurs during sport is associated with an increased risk of subsequent development of osteoarthritis.