956 results for automated analysis


Relevance: 60.00%

Abstract:

PURPOSE. To assess whether baseline Glaucoma Probability Score (GPS; HRT-3; Heidelberg Engineering, Dossenheim, Germany) results are predictive of progression in patients with suspected glaucoma. The GPS is a new feature of the confocal scanning laser ophthalmoscope that generates an operator-independent, three-dimensional model of the optic nerve head and gives a score for the probability that this model is consistent with glaucomatous damage. METHODS. The study included 223 patients with suspected glaucoma during an average follow-up of 63.3 months. Included subjects had a suspect optic disc appearance and/or elevated intraocular pressure, but normal visual fields. Conversion was defined as development of either repeatable abnormal visual fields or glaucomatous deterioration in the appearance of the optic disc during the study period. The association between baseline GPS and conversion was investigated by Cox regression models. RESULTS. Fifty-four (24.2%) eyes converted. In multivariate models, both higher values of global GPS and subjective stereophotograph assessment (larger cup-disc ratio and glaucomatous grading) were predictive of conversion: adjusted hazard ratios (95% CI) were 1.31 (1.15-1.50) per 0.1 higher global GPS, 1.34 (1.12-1.62) per 0.1 higher CDR, and 2.34 (1.22-4.47) for abnormal grading, respectively. No significant differences (P > 0.05 for all comparisons) were found between the c-index values (equivalent to the area under the ROC curve) for the multivariate models (0.732, 0.705, and 0.699, respectively). CONCLUSIONS. GPS values were predictive of conversion in our population of patients with suspected glaucoma. Further, they performed as well as subjective assessment of the optic disc. These results suggest that the GPS could potentially replace stereophotographs as a tool for estimating the likelihood of conversion to glaucoma.
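As a rough illustration of the survival analysis described in this abstract, the sketch below fits a Cox proportional-hazards model with the lifelines library; the CSV file and column names are hypothetical stand-ins, not the study's actual data.

```python
# Hedged sketch: a Cox proportional-hazards model relating a baseline score to
# conversion, in the spirit of the abstract. Column names and the input file
# are hypothetical; assumes the `lifelines` and `pandas` packages.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("glaucoma_suspects.csv")   # hypothetical data file

# Rescale the predictor so the hazard ratio is reported "per 0.1 higher" value,
# matching the units used in the abstract.
df["gps_global_per_0.1"] = df["gps_global"] / 0.1

cph = CoxPHFitter()
cph.fit(
    df[["followup_months", "converted", "gps_global_per_0.1", "age", "iop"]],
    duration_col="followup_months",   # time to conversion or censoring
    event_col="converted",            # 1 = converted, 0 = censored
)
cph.print_summary()                   # hazard ratios, 95% CIs, concordance (c-index)
```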

Relevance: 60.00%

Abstract:

Dissertation submitted to obtain the Master's degree in Biomedical Engineering

Relevance: 60.00%

Abstract:

SUMMARY: The skin is the largest organ of the human body, and its pigmentation is essential for its coloration and for protection against the harmful effects of ultraviolet (UV) radiation. Skin pigmentation results essentially from three processes: the synthesis and storage of melanin by melanocytes in specialized organelles called melanosomes; the transport of melanosomes within melanocytes; and, finally, the transfer of melanosomes to adjacent keratinocytes. In keratinocytes, melanin migrates to the apical perinuclear region of the cell to form a protective shield that protects the DNA from UV-induced damage. Melanocytes are located in the basal layer of the epidermis and contact 30-40 keratinocytes; together, these cells form the "epidermal-melanin unit". Although the processes of melanin synthesis and transport within melanocytes are fairly well characterized, the molecular mechanisms underlying inter-cellular melanin transfer are less well understood and still controversial. Preliminary data obtained by our group, based on the observation of human skin samples by electron microscopy, indicate that the predominant mode of melanin transfer in the epidermis consists of exocytosis of melanosomes by melanocytes followed by endocytosis of melanin by keratinocytes. Furthermore, Rab proteins, which control membrane trafficking, are known to be involved in several steps of skin pigmentation, namely melanin biogenesis and transport. Given their fundamental role in these processes, we asked whether they are also involved in melanin transfer. With this work, we propose to expand current knowledge of melanin transfer in the skin through a detailed study of its molecular mechanisms, identifying the Rab proteins that regulate the process. We also intend to confirm the exo/endocytosis model as the main mechanism of melanin transfer. First, we explored the regulation of melanin secretion by melanocytes and analysed the role of Rab proteins in this process. The results were obtained using an in vitro method previously developed in the laboratory, which measures the amount of melanin secreted into the culture medium by spectrophotometry, and also by microscopy, by counting the number of melanosomes transferred to keratinocytes. Using co-cultures of melanocytes and keratinocytes, we found that keratinocytes stimulate the release of melanin from melanocytes into the extracellular medium, as well as its transfer to keratinocytes. In addition, the protein Rab11b was identified as a regulator of melanin exocytosis and of its transfer to keratinocytes: reducing Rab11b expression in melanocytes decreased keratinocyte-stimulated melanin secretion as well as melanin transfer. Second, to complement our study, we focused our investigation on the internalization of melanin by keratinocytes. Specifically, using an siRNA library, we explored the involvement of Rab proteins in melanin uptake by keratinocytes. As a first approach, we used fluorescent beads as a melanin surrogate, evaluating the results by flow cytometry. However, this method proved ineffective, since the internalization of these beads is independent of the PAR-2 receptor (protease-activated receptor 2), which had previously been described as essential for melanin uptake by keratinocytes. Subsequently, a new microscopy-based endocytosis protocol was developed, using melanosomes lacking the limiting membrane (melanocores) purified from melanocyte culture medium, together with a computer program specifically designed to perform a semi-automated analysis. After internalization, melanocores accumulate in the perinuclear region of keratinocytes, in structures that resemble the supranuclear shield observed in human skin. Next, the involvement of the PAR-2 receptor in melanocore uptake by keratinocytes was confirmed using the newly developed endocytosis protocol. In addition, four Rab proteins were found to be required for melanocore internalization by keratinocytes: reducing the expression of Rab1a or Rab5b in keratinocytes significantly decreased melanocore internalization, whereas silencing Rab2a or Rab14 increased the amount of melanocores internalized by these cells. In conclusion, the results presented corroborate previous observations obtained in human skin samples and suggest that the predominant transfer mechanism is keratinocyte-induced exocytosis of melanin by melanocytes, followed by endocytosis by keratinocytes. Skin pigmentation has implications at both the cosmetic and the medical level, related to photo-ageing and pigmentary disorders. Therefore, by clarifying the molecular mechanisms that regulate melanin transfer in the skin, this work may lead to the development of new strategies to modulate skin pigmentation.----------------ABSTRACT: Skin pigmentation is achieved through the highly regulated production of the pigment melanin in specialized organelles, termed melanosomes, within melanocytes. These are transported from their site of synthesis to the melanocyte periphery before being transferred to keratinocytes, where melanin forms a supra-nuclear cap to protect the DNA from UV-induced damage. Together, melanocytes and keratinocytes form a functional complex, termed the "epidermal-melanin unit", that confers color and photoprotective properties to the skin. Skin pigmentation requires three processes: the biogenesis of melanin; its intracellular transport within the melanocyte to the cell periphery; and the melanin transfer to keratinocytes. The first two processes have been extensively characterized. However, despite significant advances made over the past few years, the mechanisms underlying inter-cellular transfer of pigment from melanocytes to keratinocytes remain controversial. Preliminary studies from our group using electron microscopy and human skin samples found evidence for a mechanism of coupled exocytosis-endocytosis. Rab GTPases are master regulators of intracellular trafficking and have already been implicated in several steps of skin pigmentation. Thus, we proposed to explore and characterize the molecular mechanisms of melanin transfer and the role of Rab GTPases in this process. Moreover, we investigated whether the exo/endocytosis model is the main mechanism of melanin transfer. We first focused on melanin exocytosis by melanocytes.
Then, we started to investigate the key regulatory Rab proteins involved in this step by establishing an in vitro tissue culture model of melanin secretion. Using co-cultures of melanocytes and keratinocytes, we found that keratinocytes stimulate melanin release and transfer. Moreover, depletion of Rab11b decreases keratinocyte-induced melanin exocytosis by melanocytes. In order to determine whether melanin exocytosis is a predominant mechanism of melanin transfer, the amount of melanin transferred to keratinocytes was then assayed in conditions where melanin exocytosis was inhibited. Indeed, Rab11b depletion resulted in a significant decrease in melanin uptake by keratinocytes. Taken together, these observations suggest that Rab11b mediates melanosome exocytosis from melanocytes and transfer to keratinocytes. To complement and extend our study, we centred our attention on the internalization of melanin by keratinocytes. Thus, we aimed to explore the effect of depleting Rab GTPases on melanin uptake and trafficking within keratinocytes. As a first approach, we used fluorescent microspheres as a melanin surrogate. However, the uptake of microspheres was observed to be independent of PAR-2, a receptor that is required for melanin uptake. Therefore, we concluded that microspheres were taken up by keratinocytes through a pathway different from that of melanin. Subsequently, we developed a microscopy-based endocytosis assay using purified melanocores (melanosomes lacking the limiting membrane) from melanocytes, including a program to perform a semi-automated analysis. Melanocores are taken up by keratinocytes and accumulate in structures in the perinuclear area that resemble the physiological supranuclear cap observed in human skin. We then confirmed the involvement of the PAR-2 receptor in the uptake of melanocores by keratinocytes, using the newly developed assay. Furthermore, we identified a role for four Rab GTPases in the uptake of melanocores by keratinocytes. Depletion of Rab1a or Rab5b from keratinocytes significantly reduced the uptake of melanocores, whereas Rab2a and Rab14 silencing increased the amount of melanocores internalized by XB2 keratinocytes. In conclusion, we present evidence supporting keratinocyte-induced melanosome exocytosis from melanocytes, followed by endocytosis of the melanin core by keratinocytes, as the predominant mechanism of melanin transfer in skin. Although advances have been made, there is a need for more effective and safer therapies directed at pigmentation disorders, as well as treatments for cosmetic applications. Hence, understanding the above mechanisms of skin pigmentation will lead to a greater appreciation of the molecular machinery underlying human skin pigmentation and could interest the pharmaceutical and cosmetic industries.
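The semi-automated microscopy analysis mentioned above suggests the following kind of image-processing step. This is a minimal sketch with scikit-image, assuming a hypothetical image file and size cutoff; it is not the authors' actual program.

```python
# Hedged sketch of a semi-automated count of internalized melanocores in a
# keratinocyte image. File name and size cutoff are hypothetical placeholders.
import numpy as np
from skimage import io, filters, measure, morphology

img = io.imread("keratinocytes_brightfield.tif", as_gray=True)

# Melanocores appear as small dark puncta: invert, threshold, clean up.
inverted = 1.0 - img / img.max()
mask = inverted > filters.threshold_otsu(inverted)
mask = morphology.remove_small_objects(mask, min_size=5)   # drop noise specks

labels = measure.label(mask)
regions = measure.regionprops(labels)
print(f"{len(regions)} candidate melanocores detected")
print("mean area (px):", np.mean([r.area for r in regions]))
```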

Relevance: 60.00%

Abstract:

The recent advances in sequencing technologies have given all microbiology laboratories access to whole genome sequencing. Providing that tools for the automated analysis of sequence data and databases for associated meta-data are developed, whole genome sequencing will become a routine tool for large clinical microbiology laboratories. Indeed, the continuing reduction in sequencing costs and the shortening of the 'time to result' makes it an attractive strategy in both research and diagnostics. Here, we review how high-throughput sequencing is revolutionizing clinical microbiology and the promise that it still holds. We discuss major applications, which include: (i) identification of target DNA sequences and antigens to rapidly develop diagnostic tools; (ii) precise strain identification for epidemiological typing and pathogen monitoring during outbreaks; and (iii) investigation of strain properties, such as the presence of antibiotic resistance or virulence factors. In addition, recent developments in comparative metagenomics and single-cell sequencing offer the prospect of a better understanding of complex microbial communities at the global and individual levels, providing a new perspective for understanding host-pathogen interactions. Being a high-resolution tool, high-throughput sequencing will increasingly influence diagnostics, epidemiology, risk management, and patient care.

Relevance: 60.00%

Abstract:

AIMS: Although the coronary artery vessel wall can be imaged non-invasively using magnetic resonance imaging (MRI), the in vivo reproducibility of wall thickness measures has not been previously investigated. Using a refined magnetization preparation scheme, we sought to assess the reproducibility of three-dimensional (3D) free-breathing black-blood coronary MRI in vivo. METHODS AND RESULTS: MRI vessel wall scans parallel to the right coronary artery (RCA) were obtained in 18 healthy individuals (age range 25-43, six women), with no known history of coronary artery disease, using a 3D dual-inversion navigator-gated black-blood spiral imaging sequence. Vessel wall scans were repeated 1 month later in eight subjects. The visible vessel wall segment and the wall thickness were quantitatively assessed using a semi-automatic tool and the intra-observer, inter-observer, and inter-scan reproducibilities were determined. The average imaged length of the RCA vessel wall was 44.5+/-7 mm. The average wall thickness was 1.6+/-0.2 mm. There was a highly significant intra-observer (r=0.97), inter-observer (r=0.94), and inter-scan (r=0.90) correlation for wall thickness (all P<0.001). There was also a significant agreement for intra-observer, inter-observer, and inter-scan measurements on Bland-Altman analysis. The intra-class correlation coefficients for intra-observer (r=0.97), inter-observer (r=0.92), and inter-scan (r=0.86) analyses were also excellent. CONCLUSION: The use of black-blood free-breathing 3D MRI in conjunction with semi-automated analysis software allows for reproducible measurements of right coronary arterial vessel-wall thickness. This technique may be well-suited for non-invasive longitudinal studies of coronary atherosclerosis.
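A minimal sketch of the agreement statistics named in this abstract (Pearson correlation and Bland-Altman limits of agreement), using hypothetical wall-thickness measurements rather than the study data:

```python
# Hedged sketch of inter-observer agreement statistics: Pearson r and
# Bland-Altman bias/limits of agreement. Values are hypothetical placeholders.
import numpy as np
from scipy import stats

obs1 = np.array([1.5, 1.7, 1.6, 1.4, 1.8, 1.6, 1.5, 1.7])  # observer 1 (mm)
obs2 = np.array([1.6, 1.7, 1.5, 1.4, 1.9, 1.6, 1.6, 1.6])  # observer 2 (mm)

r, p = stats.pearsonr(obs1, obs2)
diff = obs1 - obs2
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)          # Bland-Altman limits of agreement

print(f"Pearson r = {r:.2f} (p = {p:.3g})")
print(f"bias = {bias:.3f} mm, limits of agreement = "
      f"[{bias - loa:.3f}, {bias + loa:.3f}] mm")
```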

Relevance: 60.00%

Abstract:

Background: Reconstruction of gene and/or protein networks from automated analysis of the literature is one of the current targets of text mining in biomedical research. Some user-friendly tools already perform this analysis on precompiled databases of abstracts of scientific papers. Other tools allow expert users to elaborate and analyze the full content of a corpus of scientific documents. However, to our knowledge, no user-friendly tool that simultaneously analyzes the latest set of scientific documents available online and reconstructs the set of genes referenced in those documents is available. Results: This article presents such a tool, Biblio-MetReS, and compares its functioning and results to those of other widely used user-friendly applications (iHOP, STRING). Under similar conditions, Biblio-MetReS creates networks that are comparable to those of other user-friendly tools. Furthermore, analysis of full-text documents provides more complete reconstructions than those that result from using only the abstract of the document. Conclusions: Literature-based automated network reconstruction is still far from providing complete reconstructions of molecular networks. However, its value as an auxiliary tool is high, and it will increase as standards for reporting biological entities and relationships become more widely accepted and enforced. Biblio-MetReS is an application that can be downloaded from http://metres.udl.cat/. It provides an easy-to-use environment for researchers to reconstruct their networks of interest from an always up-to-date set of scientific documents.
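The co-occurrence idea behind literature-based network reconstruction can be sketched as follows; the per-document gene mentions are hypothetical and this is not the Biblio-MetReS implementation.

```python
# Hedged sketch of literature-based co-occurrence network reconstruction:
# genes mentioned in the same document are linked, weighted by the number of
# co-mentioning documents. Assumes networkx; the mentions are placeholders.
from itertools import combinations
import networkx as nx

doc_mentions = {
    "doc1": {"TP53", "MDM2", "CDKN1A"},
    "doc2": {"TP53", "ATM"},
    "doc3": {"MDM2", "CDKN1A"},
}

G = nx.Graph()
for genes in doc_mentions.values():
    for a, b in combinations(sorted(genes), 2):
        # edge weight = number of documents mentioning both genes
        w = G[a][b]["weight"] + 1 if G.has_edge(a, b) else 1
        G.add_edge(a, b, weight=w)

for a, b, d in G.edges(data=True):
    print(f"{a} -- {b}: co-mentioned in {d['weight']} document(s)")
```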

Relevance: 60.00%

Abstract:

A didactic experiment is proposed, aimed at extending Flow Injection Analysis (FIA)-based methodology to the area of physical chemistry/chemical reactors in undergraduate labs. Our prime objective was to describe the use of a gradient chamber for determining the rate constant of the reaction between crystal violet and the hydroxide ion. The study was complemented by determining the effect of temperature on the rate constant. The kinetic parameters, activation energy and reaction rate constant, are determined based on assumed reaction orders. The main didactic advantage of the proposed experimental set-up is the use of fewer reagents, contributing to a more environmentally friendly experiment. The experiment also illustrates how automated analysis reduces the associated errors and analysis time owing to decreased operator manipulation.
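A minimal sketch of the data treatment implied by the experiment, assuming pseudo-first-order conditions (excess hydroxide) and hypothetical absorbance and temperature values:

```python
# Hedged sketch: pseudo-first-order rate constant from the exponential decay of
# crystal violet absorbance, and an Arrhenius activation energy from rate
# constants at two temperatures. All numbers are hypothetical placeholders.
import numpy as np

t = np.array([0, 30, 60, 90, 120, 150])             # s
A = np.array([0.80, 0.62, 0.48, 0.37, 0.29, 0.22])  # absorbance at 590 nm

# ln(A) vs t is linear for a pseudo-first-order reaction: slope = -k_obs
k_obs = -np.polyfit(t, np.log(A), 1)[0]
print(f"k_obs = {k_obs:.4f} s^-1")

# Arrhenius: Ea = R * ln(k2/k1) / (1/T1 - 1/T2)
R = 8.314                                            # J mol^-1 K^-1
k1, T1 = k_obs, 298.15
k2, T2 = 0.0165, 308.15                              # hypothetical value at 35 degC
Ea = R * np.log(k2 / k1) / (1 / T1 - 1 / T2)
print(f"Ea = {Ea / 1000:.1f} kJ/mol")
```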

Relevance: 60.00%

Abstract:

The ongoing global financial crisis has demonstrated the importance of a system-wide, or macroprudential, approach to safeguarding financial stability. An essential part of macroprudential oversight concerns the tasks of early identification and assessment of risks and vulnerabilities that eventually may lead to a systemic financial crisis. Such tools are crucial as they allow early policy actions to decrease or prevent further build-up of risks or to otherwise enhance the shock absorption capacity of the financial system. In the literature, three types of systemic risk can be identified: (i) build-up of widespread imbalances, (ii) exogenous aggregate shocks, and (iii) contagion. Accordingly, the systemic risks are matched by three categories of analytical methods for decision support: (i) early-warning, (ii) macro stress-testing, and (iii) contagion models. Stimulated by the prolonged global financial crisis, today's toolbox of analytical methods includes a wide range of innovative solutions to the two tasks of risk identification and risk assessment. Yet, the literature lacks a focus on the task of risk communication. This thesis discusses macroprudential oversight from the viewpoint of all three tasks: within analytical tools for risk identification and risk assessment, the focus concerns a tight integration of means for risk communication. Data and dimension reduction methods, and their combinations, hold promise for representing multivariate data structures in easily understandable formats. The overall task of this thesis is to represent high-dimensional data concerning financial entities on low-dimensional displays. The low-dimensional representations have two subtasks: (i) to function as a display for individual data concerning entities and their time series, and (ii) to use the display as a basis to which additional information can be linked. The final nuance of the task is, however, set by the needs of the domain, data and methods. The following five questions comprise subsequent steps addressed in the process of this thesis: (1) What are the needs for macroprudential oversight? (2) What form do macroprudential data take? (3) Which data and dimension reduction methods hold most promise for the task? (4) How should the methods be extended and enhanced for the task? (5) How should the methods and their extensions be applied to the task? Based upon the Self-Organizing Map (SOM), this thesis not only creates the Self-Organizing Financial Stability Map (SOFSM), but also lays out a general framework for mapping the state of financial stability. This thesis also introduces three extensions to the standard SOM for enhancing the visualization and extraction of information: (i) fuzzifications, (ii) transition probabilities, and (iii) network analysis. Thus, the SOFSM functions as a display for risk identification, on top of which risk assessments can be illustrated. In addition, this thesis puts forward the Self-Organizing Time Map (SOTM) to provide means for visual dynamic clustering, which in the context of macroprudential oversight concerns the identification of cross-sectional changes in risks and vulnerabilities over time. Rather than automated analysis, the aim of visual means for identifying and assessing risks is to support disciplined and structured judgmental analysis based upon policymakers' experience and domain intelligence, as well as external risk communication.
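The basic SOM building block behind the SOFSM can be sketched as follows, assuming the minisom package and a random placeholder matrix of macro-financial indicators; the thesis' fuzzification, transition-probability and network extensions are not reproduced here.

```python
# Hedged sketch: training a plain Self-Organizing Map on standardized
# macro-financial indicators, the building block behind the SOFSM described
# above. The indicator matrix is a random placeholder.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 14))              # 500 country-quarters x 14 indicators
X = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize each indicator

som = MiniSom(x=9, y=6, input_len=X.shape[1], sigma=1.5,
              learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, num_iteration=5000)

# Each observation is projected onto its best-matching unit on the 2-D map,
# which can then be colour-coded by crisis stage for risk identification.
bmus = np.array([som.winner(x) for x in X])
print("first five best-matching units:", bmus[:5])
```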

Relevance: 60.00%

Abstract:

In this work, the behaviour of the geometric distortion correction algorithms of magnetic resonance imaging (MRI) scanners and their effect on image quality were studied. Imaging was performed on a total of six different MRI scanners, and the images were analysed using two different methods. In the first test, a test object (phantom) manufactured by the ACR (American College of Radiology) was imaged using the imaging parameters recommended by the ACR. The phantom was imaged with the distortion correction algorithm switched on and off. Both image series were analysed in two ways: manually and automatically. The ACR has drawn up detailed instructions for performing the analyses. For the automated analysis, the Automated Analysis Tool for ACR MRI Phantom Measurements (AAT-ACR) program, which is still under development, was used. The results of the different scanners were compared with each other. For the second test, a test phantom called the MaxFOV phantom was built and imaged on a total of three different scanners. With this phantom it is possible to determine the true maximum field of view of an MRI scanner. Imaging was performed with distortion correction on and off. A dedicated automated Matlab program was developed for the analysis, and the results of the different scanners were compared with each other. The results of the tests with the ACR phantom show that at small fields of view (about 20 cm and below), using or not using the correction algorithm does not cause a large error in the results; the results are very close to each other. This agrees well with the fact that the linearity of the magnetic field and the gradients is greatest at and near the isocentre of the scanner. The automated software produced larger variations in the results than the manual analysis. However, the automated software is still under development, so its results may contain errors caused by the behaviour of the algorithms. The tests made with the MaxFOV phantom show that distortion correction becomes very important at large fields of view (over 20 cm). Already at a 30 cm field of view, the differences between corrected and uncorrected images were large. When comparing the images, it is also clearly visible that the correction algorithm limits the maximum field of view.
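A minimal sketch of the underlying distortion metric (percent deviation of measured marker separations from nominal values, with correction on and off); the numbers are hypothetical and this is not the thesis' Matlab or AAT-ACR code.

```python
# Hedged sketch of a basic geometric-distortion metric: percent deviation of
# measured marker separations from their known nominal values, with and
# without the vendor correction. All values are hypothetical placeholders.
import numpy as np

nominal_mm = np.array([100.0, 200.0, 300.0, 380.0])        # true separations
measured_corr_mm = np.array([100.2, 200.5, 300.9, 382.1])  # correction ON
measured_raw_mm = np.array([100.3, 202.4, 309.7, 401.8])   # correction OFF

def distortion_percent(measured, nominal):
    """Percent geometric distortion for each marker separation."""
    return 100.0 * (measured - nominal) / nominal

print("distortion ON :", np.round(distortion_percent(measured_corr_mm, nominal_mm), 2))
print("distortion OFF:", np.round(distortion_percent(measured_raw_mm, nominal_mm), 2))
```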

Relevance: 60.00%

Abstract:

Intravascular ultrasound (IVUS) imaging is a catheter-based medical technology that produces cross-sectional images of blood vessels. It makes it possible to quantify and study the morphology of atherosclerotic plaques and to visualize vessel structure (lumen, intima, plaque, media and adventitia) in three dimensions. In recent years, this imaging modality has become a tool of choice in both research and clinical practice for studying atherosclerotic disease. IVUS imaging is, however, affected by artifacts related to the characteristics of the ultrasound transducers, by shadow cones caused by calcifications or collateral arteries, by heterogeneously rendered plaques, and by blood speckle. The automated analysis of large IVUS sequences therefore represents a significant challenge. A three-dimensional (3D) segmentation method based on the multiple-interface fast-marching algorithm is presented. The segmentation uses region and contour attributes of the IVUS images: a new interface propagation speed function combining the gray-level probability density functions of the vessel wall components and the intensity gradient is proposed. The segmentation is largely automated, since the vessel lumen is detected fully automatically. In an original initialization procedure, minimal interaction is required: the automatically computed initial contours of the external vessel wall are proposed to the user for acceptance or correction on a limited number of longitudinal-view images. The segmentation was validated using in vivo IVUS sequences from femoral arteries belonging to different acquisition subgroups, namely pre-balloon angioplasty, post-intervention, and at a follow-up examination one year after the intervention. The results were compared with reference contours traced manually by different experts in IVUS image analysis. The lumen and external vessel wall contours detected by the fast-marching method agree with the experts' manual tracings: the area measurements are similar and the point-to-point differences between contours are small. Moreover, the 3D fast-marching segmentation was performed in a greatly reduced time compared with manual analysis. This is the first study reported in the literature that evaluates segmentation performance on different types of IVUS acquisitions. In conclusion, fast-marching segmentation combining gray-level distribution and intensity gradient information is accurate and efficient for the analysis of large IVUS sequences. A robust segmentation tool could become widely adopted for the arduous and tedious task of analysing this type of image.
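A toy sketch of the fast-marching idea, assuming the scikit-fmm and scikit-image packages: a front propagates from a seed with a speed that drops at strong intensity gradients, so arrival times rise sharply at interfaces. It is a single-interface illustration on a synthetic image, not the thesis' multiple-interface implementation.

```python
# Hedged sketch of front propagation with an image-derived speed map.
import numpy as np
import skfmm
from skimage import filters

# Synthetic IVUS-like frame: a dark lumen disc inside a brighter wall.
yy, xx = np.mgrid[0:128, 0:128]
radius = np.hypot(yy - 64, xx - 64)
img = np.where(radius < 30, 0.2, 0.7)
img = img + 0.05 * np.random.default_rng(0).normal(size=img.shape)

# Speed map: slow down the front where the intensity gradient is strong.
grad = filters.sobel(img)
speed = 1.0 / (1.0 + 50.0 * grad**2)

# Zero level set (the seed): a small region at the lumen centre.
phi = np.ones_like(img)
phi[60:68, 60:68] = -1.0

arrival = skfmm.travel_time(phi, speed)   # arrival time of the propagating front
print("arrival time at lumen border vs far wall:",
      float(arrival[64, 94]), float(arrival[64, 120]))
```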

Relevance: 60.00%

Abstract:

Nowadays, large applications are developed using numerous frameworks and middleware. The excessive use of temporary objects is a performance problem common to these applications, known as object churn. Identifying and understanding the sources of object churn is a difficult and laborious task, despite recent advances in automated analysis techniques. We present an interactive visual approach designed to help developers explore the behaviour of their applications quickly and intuitively in order to find the sources of object churn. We have implemented this technique in Vasco, a new flexible platform. Vasco focuses on three main design principles. First, the data to be visualized are retrieved from execution traces and analysed in order to compute and keep only what is needed to locate the sources of object churn; in this way, large programs can be visualized while keeping a clear and understandable representation. Second, the use of an intuitive representation minimizes the cognitive effort required by the visualization task. Finally, smooth transitions and interactions allow users to keep track of the actions they have performed. We demonstrate the effectiveness of the approach by identifying sources of object churn in three framework-intensive applications, including a commercial system.
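The trace-reduction step described as Vasco's first design principle can be illustrated with a toy aggregation; the trace format and call-site names below are hypothetical.

```python
# Hedged toy sketch: reduce an execution trace to per-call-site allocation
# counts, the kind of summary needed to spot likely object-churn hot spots.
from collections import Counter

trace = [
    # (allocating call site, allocated class)
    ("JsonMapper.writeValue", "char[]"),
    ("JsonMapper.writeValue", "StringBuilder"),
    ("JsonMapper.writeValue", "char[]"),
    ("RequestRouter.dispatch", "HashMap"),
    ("JsonMapper.writeValue", "char[]"),
]

allocs_per_site = Counter(site for site, _ in trace)
for site, count in allocs_per_site.most_common():
    print(f"{site}: {count} temporary allocations")
```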

Relevance: 60.00%

Abstract:

Situated within the fields of Computer-Assisted Reading and Text Analysis (LATAO), Electronic Document Management (GÉD), information visualization and, in part, anthropology, this exploratory research proposes the experimentation of a descriptive text-mining methodology in order to map an anthropological text corpus thematically. More precisely, we wish to test the agglomerative hierarchical clustering method (CHA) to extract and analyse the themes found in the abstracts of master's theses and doctoral dissertations granted from 1985 to 2009 (1,240 abstracts) by the anthropology departments of the Université de Montréal and Université Laval, as well as by the history department of Université Laval (for archaeological and ethnological abstracts). In the first part of the thesis, we present our theoretical framework: we explain what text mining is, its origins, its applications and its methodological steps, and we conclude with a review of the main publications. The second part is devoted to the methodological framework, where we address the different steps through which the project was carried out: data collection, linguistic filtering and automatic classification, to name only a few. Finally, in the last part, we present the results of our research, focusing in particular on two experiments. We also discuss thematic navigation and conceptual approaches to thematization, for example the culture/biology dichotomy in anthropology. We conclude with the limitations of this project and avenues of interest for future research.
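A minimal sketch of the agglomerative hierarchical clustering (CHA) step on a handful of toy abstracts, assuming scikit-learn and scipy; the real corpus and preprocessing are of course far richer.

```python
# Hedged sketch: TF-IDF vectors of toy abstracts clustered with Ward linkage
# (agglomerative hierarchical clustering). The abstracts are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.cluster.hierarchy import linkage, fcluster

abstracts = [
    "ritual kinship and symbolic exchange in rural communities",
    "lithic assemblages and settlement patterns at an archaeological site",
    "kinship terminology and marriage alliance systems",
    "stratigraphy and radiocarbon dating of excavated layers",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts).toarray()
Z = linkage(X, method="ward")            # hierarchical clustering tree
clusters = fcluster(Z, t=2, criterion="maxclust")
print("cluster assignments:", clusters)  # e.g. ethnology vs archaeology themes
```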

Relevance: 60.00%

Abstract:

Ion channel function is finely regulated by structural changes at key sites controlling pore opening. These structural modulations arise from the interaction of the channel with its local environment, since some domains can be sufficiently sensitive to specific physico-chemical properties. The resulting structural movements are notably detectable functionally when the channel opens a pathway for certain ions, thereby generating an ionic current that is measurable according to the electrochemical potential. A detailed description of these structure-function relationships is, however, difficult to obtain from measurements on ensembles of identical channels, since the fluctuations and distributions of individual properties remain hidden in an average. To resolve these properties, single-molecule measurements are required. The main goal of this thesis is to study the structure and molecular mechanisms of ion channels using single-molecule fluorescence spectroscopy, with particular emphasis on developing new methods or improving existing ones. A class of pore-forming toxin served as the first study model; single-molecule fluorescence was also used to study a glutamate receptor, a glycine receptor and a prokaryotic potassium channel. The first part concerns the study of stoichiometry by time-resolved photobleaching measurements. This method directly determines the number of fluorescent monomers in an isolated complex by counting the discrete fluorescence steps that follow photobleaching events. We present here the first description, to our knowledge, of the dynamic assembly of a membrane protein in a lipid environment: the purified monomeric toxin Cry1Aa assembles with other monomers in a concentration-dependent manner and saturates in a tetrameric conformation. An automated program was then developed to determine the stoichiometry of GFP-fused membrane proteins expressed at the surface of mammalian cells. Although this expression system is appropriate for studying proteins of mammalian origin, the fluorescence background is particularly high and significantly increases the risk of error in the manual counting of fluorescent monomers. The presented method enables a fast, automated analysis based on fixed criteria. The algorithm responsible for counting fluorescent monomers was optimized using simulations and automatically adjusts its detection parameters according to the fluorescence trace. The composition of two ion channels was successfully verified with this program. Finally, single-molecule fluorescence was measured simultaneously with the ionic current of KcsA potassium channels using a voltage-clamp fluorometry system. These combined recordings describe ion channel function together with channel position and density as the channels diffuse in a lipid membrane of chosen composition. We observed clustering of KcsA channels for different lipid compositions. This clustering does not appear to be caused by protein-protein interactions, but rather by microdomains induced by the shape of the channels reconstituted in the membrane. It appears that clustered channels can then become coupled, resulting in simultaneous openings and closings in which the conductance levels are a multiple of the "normal" conductance of an isolated channel. Moreover, contrary to what is currently suggested, KcsA does not require a negatively charged phospholipid for its function. Several measurements rather indicate that cone-shaped lipids in the liquid-crystalline phase are sufficient to allow the opening of isolated KcsA channels. Clustered channels, for their part, can overcome the energy barrier and open cooperatively in uncharged, cylinder-shaped lipids.
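The photobleaching step counting described in the first part can be illustrated with a toy trace and a simple jump detector; this is only a sketch, not the thesis' optimized, self-adjusting algorithm.

```python
# Hedged toy sketch: count discrete photobleaching steps in a synthetic
# single-molecule fluorescence trace by detecting large downward jumps.
import numpy as np

rng = np.random.default_rng(1)
true_steps = 3                                                    # e.g. a trimer
levels = np.repeat(np.arange(true_steps, -1, -1), 200) * 100.0    # 300 -> 0 a.u.
trace = levels + rng.normal(scale=8.0, size=levels.size)          # add noise

w = 20
# Drop at index i = mean of the w points after i minus mean of the w points before i.
drop = np.array([trace[i:i + w].mean() - trace[i - w:i].mean()
                 for i in range(w, trace.size - w)])

cand = np.where(drop < -50.0)[0]          # positions with a large downward jump
# Merge nearby candidates into single bleaching events (one event per cluster).
step_count = int(np.sum(np.diff(cand) > w) + 1) if cand.size else 0
print(f"detected {step_count} photobleaching steps (expected {true_steps})")
```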

Relevância:

60.00% 60.00%

Publicador:

Resumo:

The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to the analysis of variable stars' astronomical time series data and hence belongs to the interdisciplinary topic of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various reasons. In some cases, the variation is due to internal thermo-nuclear processes; such stars are generally known as intrinsic variables. In other cases, it is due to external processes, such as eclipse or rotation; these are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospherical stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. A sequence of photometric observations of a variable star produces time series data, which contain time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as the light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as the phased light curve. The unique shape of the phased light curve is characteristic of each type of variable star, and one way to identify and classify a variable star is visual inspection of the phased light curve by an expert. For the last several years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages such as observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine their short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since wrong periods can lead to sparse light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. This is due to daily varying daylight and weather conditions for ground-based observations, while observations from space may suffer from the impact of cosmic-ray particles.
Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. Many period search algorithms exist in astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (not assuming any statistical model, such as Gaussian) methods. Many of the parametric methods are based on variations of the discrete Fourier transform, such as the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them can fully recover the true periods. Wrong period detection can be due to several reasons, such as power leakage to other frequencies caused by the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence, obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton (AAVSO) states, “Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial”. Derekas et al. (2007) and Deb et al. (2010) state, “The processing of huge amounts of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification”. It would be beneficial for the variable star astronomical community if basic parameters such as period, amplitude and phase were obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry into the “General Catalogue of Variable Stars” or other databases such as the “Variable Star Index”, the characteristics of the variability have to be quantified in terms of variable star parameters.
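One of the parametric period-search methods discussed above, the generalized Lomb-Scargle periodogram, can be sketched on a synthetic unevenly sampled light curve, assuming the astropy package:

```python
# Hedged sketch: recover the period of a simulated, unevenly sampled light
# curve with the generalized Lomb-Scargle periodogram. Data are placeholders.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0, 100, 300))          # uneven observation times (days)
true_period = 2.7                              # days
mag = 12.0 + 0.3 * np.sin(2 * np.pi * t / true_period) + rng.normal(0, 0.02, t.size)
dmag = np.full_like(mag, 0.02)                 # photometric errors

frequency, power = LombScargle(t, mag, dmag).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"recovered period: {best_period:.3f} d (true: {true_period} d)")

# Folding the light curve on the recovered period gives the phased light curve
# whose shape is used for classification.
phase = (t / best_period) % 1.0
```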

Relevance: 60.00%

Abstract:

Stable isotope labeling combined with MS is a powerful method for measuring relative protein abundances, for instance, by differential metabolic labeling of some or all amino acids with 14N and 15N in cell culture or hydroponic media. These and most other types of quantitative proteomics experiments using high-throughput technologies, such as LC-MS/MS, generate large amounts of raw MS data. This data needs to be processed efficiently and automatically, from the mass spectrometer to statistically evaluated protein identifications and abundance ratios. This paper describes in detail an approach to the automated analysis of uniformly 14N/15N-labeled proteins using MASCOT peptide identification in conjunction with the trans-proteomic pipeline (TPP) and a few scripts to integrate the analysis workflow. Two large proteomic datasets from uniformly labeled Arabidopsis thaliana were used to illustrate the analysis pipeline. The pipeline can be fully automated and uses only common or freely available software.
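A minimal sketch of the final quantification step (combining peptide-level 14N/15N intensity ratios into a protein ratio); the intensities are hypothetical and the MASCOT/TPP steps themselves are not reproduced here.

```python
# Hedged sketch: protein abundance ratio as the median of log2 peptide
# 14N/15N intensity ratios. Intensities are hypothetical placeholders.
import numpy as np

# (light 14N intensity, heavy 15N intensity) for peptides of one protein
peptides = np.array([
    (1.8e6, 9.5e5),
    (6.2e5, 3.0e5),
    (2.4e6, 1.3e6),
    (9.8e5, 4.6e5),
])

log2_ratios = np.log2(peptides[:, 0] / peptides[:, 1])
protein_log2 = np.median(log2_ratios)
print(f"peptide log2(14N/15N) ratios: {np.round(log2_ratios, 2)}")
print(f"protein ratio: {2**protein_log2:.2f}-fold (log2 = {protein_log2:.2f})")
```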