Abstract:
Difficulty of identification, the lack of segregation systems, and the absence of suitable standards for the coexistence of non-transgenic and transgenic soybean all contribute to contamination during the production system. The objective of this study was to evaluate the efficiency of two methods for detecting mixtures of genetically modified (GM) seeds in samples of non-GM soybean, so that seed lots can be assessed against the standards established by seed legislation. Two soybean sample sizes (200 and 400 seeds) of cv. BRSMG 810C (non-GM) and BRSMG 850GRR (GM) were assessed at four contamination levels (GM seeds added to obtain 0.0%, 0.5%, 1.0%, and 1.5% contamination) with two detection methods: lateral flow immunoassay (ILF) and bioassay (pre-imbibition in a 0.6% herbicide solution; 25 ºC; 16 h). The bioassay is efficient at detecting the presence of GM seeds in samples of non-GM soybean, even at contamination levels below 1.0%, provided the seeds have high physiological quality. The ILF was positive, detecting the target protein in contaminated samples and indicating the test's effectiveness. There was a significant correlation between the two detection methods (r = 0.82; p < 0.0001). Sample size did not influence the efficiency of either method in detecting GM seeds.
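As a minimal illustration of the kind of agreement statistic the abstract reports, the sketch below computes a Pearson correlation between paired detection scores from the two assays; the arrays are hypothetical placeholders, not the study's data.

```python
# Hypothetical sketch: correlating paired detection scores from the
# bioassay and the lateral flow immunoassay (ILF). Values are invented
# placeholders, not data from the study.
from scipy.stats import pearsonr

bioassay_scores = [0.0, 0.4, 0.9, 1.6, 0.1, 0.6, 1.1, 1.4]  # e.g., % GM seedlings detected
ilf_scores      = [0.0, 0.5, 1.0, 1.5, 0.0, 0.5, 1.0, 1.5]  # e.g., % contamination flagged

r, p = pearsonr(bioassay_scores, ilf_scores)
print(f"r = {r:.2f}, p = {p:.4f}")  # the study reports r = 0.82, p < 0.0001
```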
Abstract:
This thesis describes research in which genetic programming is used to automatically evolve shape grammars that construct three-dimensional models of possible external building architectures. A completely automated fitness function evaluates the three-dimensional building models according to different geometric properties such as surface normals, height, and building footprint. To evaluate the buildings on these different criteria, a multi-objective fitness function is used. The results obtained from the automated system satisfied the multiple objective criteria and produced interesting and unique designs that a human-aided system might not discover. In this study of evolutionary design, the architectures created are not meant to be fully functional and structurally sound blueprints for constructing a building, but inspirational ideas for possible architectural designs. The evolved models are applicable to today's architecture industry as well as to the video game and movie industries. Many new avenues for future work have also been discovered and highlighted.
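The abstract does not give the fitness function's exact form; below is a minimal sketch of one plausible multi-objective scheme, a weighted sum over geometric criteria. All property names, targets, and weights are assumptions for illustration, not the thesis's actual scheme.

```python
# Hypothetical sketch of a multi-objective fitness over geometric
# properties of a candidate building model. The criteria, targets,
# and weights are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class BuildingModel:
    height: float            # overall height of the model
    footprint_area: float    # area covered on the ground plane
    normal_variety: float    # 0..1 score for diversity of surface normals

def fitness(model: BuildingModel,
            target_height: float = 30.0,
            target_footprint: float = 400.0,
            weights=(0.4, 0.4, 0.2)) -> float:
    """Weighted-sum scalarization of three geometric objectives."""
    height_score = 1.0 - abs(model.height - target_height) / target_height
    footprint_score = 1.0 - abs(model.footprint_area - target_footprint) / target_footprint
    scores = (max(height_score, 0.0), max(footprint_score, 0.0), model.normal_variety)
    return sum(w * s for w, s in zip(weights, scores))

print(fitness(BuildingModel(height=28.0, footprint_area=380.0, normal_variety=0.7)))
```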
Abstract:
Lung cancer is a major chronic disease responsible for the highest mortality rate among cancers, representing 29% of all cancer deaths in Canada. The clinical diagnosis of lung carcinoma still lacks a standard diagnostic approach, as there are no symptoms in its early stage; it is therefore usually diagnosed at a later stage, when the survival rate is low. With recent advances in molecular biology and biotechnology, a molecular biomarker approach to the diagnosis of early lung cancer appears to be a viable option. In this study, we aimed to investigate and standardize a promising lung cancer biomarker by studying the aberrant methylation of two tumour suppressor genes, RASSF1A and RAR-β, and the miRNA profiling of four commonly deregulated miRNAs (miR-199a-3p, miR-182, miR-100 and miR-221). Four lung cancer cell lines were used (two SCLC and two NSCLC), with comparisons made against normal lung cell lines. From our results, we found that neither of these genes was methylated. We then evaluated TP53, and found the promoter of this gene to be methylated in the cancer cell lines as compared to the normal cell lines, indicating gene inactivation. We carried out miRNA profiling of the cancer cell lines and report that 80 miRNAs are deregulated in lung cancer cell lines as compared to the normal cell lines. Our study was the first of its kind to indicate that hsa-mir-4301, hsa-mir-4707-5p and hsa-mir-4497 (newly discovered miRNAs) are deregulated in lung cancer cell lines. We also investigated miR-199a-3p, miR-100 and miR-182, and found that miR-199a-3p and miR-100 were down-regulated in the cancer cell lines, whereas miR-182 was up-regulated. In the final part of the study, we observed that miR-221 could be a putative biomarker to distinguish between the two types of lung cancer, because it was down-regulated in SCLC and up-regulated in the NSCLC cell lines. In conclusion, we found four miRNA molecular biomarkers that could possibly be used in the early diagnosis of lung cancer. More studies with larger numbers of samples are still required to establish these as effective molecular biomarkers for the diagnosis of lung cancer.
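As a small, hypothetical sketch of how deregulation calls like these are commonly made, the snippet below flags miRNAs whose log2 fold change between cancer and normal expression exceeds a threshold; the expression values and cutoff are illustrative assumptions, not the study's pipeline.

```python
# Hypothetical sketch: flagging deregulated miRNAs by log2 fold change
# between cancer and normal cell lines. Expression values and the
# cutoff are invented for illustration.
import math

expression = {
    # miRNA: (mean cancer expression, mean normal expression)
    "miR-199a-3p": (12.0, 55.0),
    "miR-182":     (80.0, 18.0),
    "miR-100":     (9.0, 40.0),
    "miR-221":     (30.0, 28.0),
}

CUTOFF = 1.0  # |log2 FC| > 1 means at least a two-fold change

for mirna, (cancer, normal) in expression.items():
    lfc = math.log2(cancer / normal)
    if abs(lfc) > CUTOFF:
        direction = "up" if lfc > 0 else "down"
        print(f"{mirna}: {direction}-regulated (log2 FC = {lfc:+.2f})")
```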
Abstract:
Complex networks can arise naturally and spontaneously from all things that act as part of a larger system. From the patterns of socialization between people to the way biological systems organize themselves, complex networks are ubiquitous, but they are currently poorly understood. A number of human-designed algorithms have been proposed to describe the organizational behaviour of real-world networks, and breakthroughs in genetics, medicine, epidemiology, neuroscience, telecommunications and the social sciences have recently resulted. These algorithms, called graph models, represent significant human effort: deriving accurate graph models is non-trivial, time-intensive, and challenging, and may only yield useful results for very specific phenomena. An automated approach can greatly reduce the human effort required and, if effective, provide a valuable tool for understanding the large decentralized systems of interrelated things around us. To the best of the author's knowledge, this thesis proposes the first method for the automatic inference of graph models for complex networks with varied properties, with and without community structure. Furthermore, to the best of the author's knowledge, it is the first application of genetic programming to the automatic inference of graph models. The system and methodology were tested against benchmark data and shown to be capable of reproducing close approximations to well-known algorithms designed by humans. Furthermore, when used to infer a model for real biological data, the resulting model was more representative than models currently used in the literature.
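The thesis's GP machinery is not described in the abstract; the sketch below only illustrates the evaluation idea, scoring a candidate graph model by how closely the networks it generates match a target network's properties. The choice of candidate model, measures, and scoring rule are assumptions for illustration.

```python
# Hypothetical sketch: scoring a candidate graph model against a target
# network by comparing simple structural properties. The candidate
# (Barabasi-Albert) and the measures are illustrative only.
import networkx as nx

def properties(g: nx.Graph) -> tuple:
    degrees = [d for _, d in g.degree()]
    return (sum(degrees) / len(degrees),   # mean degree
            nx.average_clustering(g))       # clustering coefficient

def model_error(candidate_graphs, target: nx.Graph) -> float:
    """Mean absolute property difference over several generated graphs
    (graph models are stochastic, so we average over samples)."""
    t = properties(target)
    errs = []
    for g in candidate_graphs:
        p = properties(g)
        errs.append(sum(abs(a - b) for a, b in zip(p, t)))
    return sum(errs) / len(errs)

target = nx.barabasi_albert_graph(200, 3, seed=1)          # stand-in "real" network
samples = [nx.barabasi_albert_graph(200, 3, seed=s) for s in range(5)]
print(model_error(samples, target))  # low error: candidate reproduces the target
```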
Abstract:
A complex network is an abstract representation of an intricate system of interrelated elements in which the patterns of connection hold significant meaning. One particular kind of complex network is the social network, in which vertices represent people and edges denote their daily interactions. Understanding social network dynamics can be vital to mitigating disease spread, as these networks model the interactions, and thus avenues of spread, between individuals. To better understand complex networks, algorithms that generate graphs exhibiting observed properties of real-world networks, known as graph models, are often constructed. While various efforts to aid the construction of graph models using statistical and probabilistic methods have been proposed, genetic programming (GP) has only recently been considered. However, determining that a graph model accurately describes the target network(s) is not a trivial task, as graph models are often stochastic in nature and the notion of similarity depends on the expected behaviour of the network. This thesis examines a number of well-known network properties to determine which measures best allow networks generated by different graph models, and thus the models themselves, to be distinguished. A proposed meta-analysis procedure was used to demonstrate how these network measures interact when used together as classifiers to determine network, and thus model, (dis)similarity. The analytical results form the basis of the fitness evaluation for a GP system used to automatically construct graph models for complex networks. The GP-based automatic inference system was used to reproduce existing, well-known graph models as well as a real-world network. Results indicated that the automatically inferred models exhibited functional similarity to their respective target networks. The approach also showed promise when used to infer a model for a mammalian brain network.
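As a minimal, hypothetical illustration of using network measures to judge (dis)similarity, the sketch below compares two networks across several well-known properties and reports a per-measure verdict; the measures and thresholds are assumptions, not the thesis's meta-analysis procedure.

```python
# Hypothetical sketch: comparing two networks on several well-known
# measures to judge (dis)similarity. The tolerance is illustrative.
import networkx as nx

MEASURES = {
    "mean degree": lambda g: sum(d for _, d in g.degree()) / g.number_of_nodes(),
    "clustering": nx.average_clustering,
    "density": nx.density,
}

def compare(g1: nx.Graph, g2: nx.Graph, tol: float = 0.15):
    """Flag each measure as similar if the relative difference is within tol."""
    for name, fn in MEASURES.items():
        a, b = fn(g1), fn(g2)
        rel = abs(a - b) / max(abs(a), abs(b), 1e-9)
        print(f"{name}: {a:.3f} vs {b:.3f} -> {'similar' if rel <= tol else 'different'}")

compare(nx.watts_strogatz_graph(300, 6, 0.1, seed=1),
        nx.erdos_renyi_graph(300, 0.02, seed=1))
```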
Abstract:
Globally, prostate cancer (PCa) is the most frequently occurring non-cutaneous cancer and the second highest cause of cancer mortality in men. Serum prostate-specific antigen (PSA) has been the standard in PCa screening since its approval by the US Food and Drug Administration (FDA) in 1994. Currently, PSA is used as an indicator for PCa: patients with a serum PSA level above 4 ng/mL will often undergo prostate biopsy to confirm cancer. Unfortunately, fewer than ~30% of these men will biopsy positive for cancer, meaning that the majority undergo invasive biopsy with little benefit. Despite PSA's notoriously poor specificity (33%), there is still a significant lack of credible alternatives. An ideal biomarker that can specifically detect PCa at an early stage is therefore urgently required. The aim of this study was to investigate the potential of using the deregulation of urinary proteins to detect prostate cancer (PCa) among benign prostatic hyperplasia (BPH) patients. To identify protein signatures specific to PCa, protein expression profiling of 8 PCa patients, 12 BPH patients and 10 healthy males was carried out using LC-MS/MS. This was followed by validating the relative expression levels of these urinary proteins across all patients using quantitative real-time PCR. This approach revealed that significant down-regulation of Fibronectin and TP53INP2 was a characteristic event among PCa patients. Fibronectin mRNA down-regulation was identified as offering improved specificity (50%) over PSA, albeit with a slightly lower although still acceptable sensitivity (75%) for detecting PCa. TP53INP2 down-regulation, on the other hand, was moderately sensitive (75%), identifying many patients with PCa, but was almost entirely non-specific (7%), designating many of the benign samples as malignant and being unable to accurately identify more than one negative.
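A small sketch of the sensitivity/specificity arithmetic used throughout this abstract: given binary test calls against known diagnoses, both figures follow from the confusion matrix. The sample labels below are invented for illustration (they happen to reproduce the reported Fibronectin figures of 75%/50%).

```python
# Hypothetical sketch: computing sensitivity and specificity from
# paired (truth, test) calls. 1 = PCa, 0 = benign. Data are invented.
def sensitivity_specificity(truth, test):
    tp = sum(1 for t, p in zip(truth, test) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(truth, test) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(truth, test) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, test) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

truth = [1, 1, 1, 1, 0, 0, 0, 0]     # biopsy-confirmed status
test  = [1, 1, 1, 0, 1, 1, 0, 0]     # marker call (e.g., Fibronectin down-regulated)
sens, spec = sensitivity_specificity(truth, test)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")  # 75%, 50%
```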
Abstract:
MicroRNAs (miRNAs) are a class of short (~22 nt), single-stranded RNA molecules that function as post-transcriptional regulators of gene expression. MiRNAs can regulate a variety of important biological pathways, including cellular proliferation, differentiation and apoptosis. Profiling of miRNA expression patterns has been shown to be more useful than the equivalent mRNA profiles for characterizing poorly differentiated tumours. As such, miRNA expression "signatures" are expected to offer serious potential for the diagnosis and prognosis of cancers of any provenance. The aim of this study was to investigate the potential of using the deregulation of urinary miRNAs to detect prostate cancer (PCa) among benign prostatic hyperplasia (BPH) patients. To identify the miRNA signatures specific to PCa, miRNA expression profiling of 8 PCa patients, 12 BPH patients and 10 healthy males was carried out using whole-genome expression profiling. Differential expression of two individual miRNAs between healthy males and BPH patients was detected, and these miRNAs were found to possibly target genes related to PCa development and progression. The sensitivity and specificity of miR-1825 for detecting PCa among BPH individuals were found to be 60% and 69%, respectively, whereas the sensitivity and specificity of miR-484 were 80% and 19%, respectively. Additionally, the sensitivity and specificity of miR-1825/484 in tandem were 45% and 75%, respectively. The proposed PCa miRNA signatures may therefore be of great value for the accurate diagnosis of PCa and BPH. This exploratory study has identified several possible targets that merit further investigation towards the development and validation of diagnostically useful, non-invasive, urine-based tests that might not only help diagnose PCa but also possibly help differentiate it from BPH.
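The abstract reports figures for the two markers used "in tandem"; a plausible reading, sketched below as an assumption rather than the study's documented rule, is an AND-combination in which a sample is called positive only when both markers are positive. Such a conjunction typically trades sensitivity for specificity, matching the direction of the reported tandem figures.

```python
# Hypothetical sketch: AND-combining two binary marker calls, so a sample
# is positive only when both miR-1825 and miR-484 are called positive.
# The combination rule and the calls are assumptions for illustration.
def tandem_call(mir1825_pos: bool, mir484_pos: bool) -> bool:
    return mir1825_pos and mir484_pos

# Invented calls for six samples:
calls_1825 = [True, True, False, True, False, False]
calls_484  = [True, False, True, True, False, True]
tandem = [tandem_call(a, b) for a, b in zip(calls_1825, calls_484)]
print(tandem)  # fewer positives than either marker alone:
               # higher specificity, lower sensitivity
```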
Abstract:
A big challenge associated with getting an institutional repository off the ground is getting content into it. This article looks at how to use digitization services at the Internet Archive, alongside software utilities that the author developed, to automate the harvesting of scanned dissertations and their associated Dublin Core XML files into an ETD portal built on the DSpace platform. The end result is a metadata-rich, full-text collection of theses that can be constructed for little out-of-pocket cost.
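The author's actual utilities are not reproduced here; below is a minimal sketch, under assumptions, of the general approach: pull item metadata from the Internet Archive's public metadata API and write a dublin_core.xml file in the layout DSpace's Simple Archive Format expects. The item identifier is a placeholder.

```python
# Hypothetical sketch: fetch item metadata from the Internet Archive's
# public metadata API and emit a dublin_core.xml in the layout used by
# DSpace's Simple Archive Format. The identifier below is a placeholder.
import requests
import xml.etree.ElementTree as ET

def harvest(identifier: str, out_path: str = "dublin_core.xml") -> None:
    meta = requests.get(f"https://archive.org/metadata/{identifier}",
                        timeout=30).json().get("metadata", {})
    root = ET.Element("dublin_core")
    for element in ("title", "creator", "date", "description"):
        value = meta.get(element)
        if value is None:
            continue
        values = value if isinstance(value, list) else [value]
        for v in values:
            dc = ET.SubElement(root, "dcvalue",
                               element=element, qualifier="none")
            dc.text = str(v)
    ET.ElementTree(root).write(out_path, encoding="utf-8",
                               xml_declaration=True)

harvest("some-scanned-dissertation-id")  # placeholder identifier
```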
Object-Oriented Genetic Programming for the Automatic Inference of Graph Models for Complex Networks
Abstract:
Complex networks are systems of entities interconnected through meaningful relationships. The relations between entities form a structure with a statistical complexity that is not produced by random chance. In the study of complex networks, many graph models have been proposed to model the behaviours observed. However, constructing graph models manually is tedious and problematic, and many of the models proposed in the literature have been cited as having inaccuracies with respect to the complex networks they represent. Recently, an approach that automates the inference of graph models was proposed by Bailey [10]. That methodology employs genetic programming (GP) to produce graph models that approximate various properties of an exemplary graph of a targeted complex network. However, a great deal is already known about complex networks in general, and often specific knowledge is held about the network being modelled. This knowledge, albeit incomplete, is important in constructing a graph model, yet it is difficult to incorporate using existing GP techniques. This thesis therefore proposes a novel GP system which can incorporate incomplete expert knowledge to assist in the evolution of a graph model. Inspired by existing graph models, an abstract graph model was developed to serve as an embryo for inferring graph models of some complex networks. The GP system and abstract model were used to reproduce well-known graph models. The results indicated that the system was able to evolve models that produced networks structurally similar to the networks generated by the respective target models.
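As a hypothetical illustration of the "embryo" idea, the sketch below fixes a generic growth loop (the expert-supplied skeleton) and leaves the node-attachment policy as the evolvable component a GP system would supply; the names and the example policy are assumptions, not the thesis's abstract model.

```python
# Hypothetical sketch of an "embryo" graph model: a fixed growth loop
# supplied as expert knowledge, with the attachment policy left open as
# the component a GP system would evolve. Names/policies are illustrative.
import random
import networkx as nx

def embryo_model(n_nodes: int, attach_policy, seed: int = 0) -> nx.Graph:
    """Grow a graph one node at a time; attach_policy picks the neighbour."""
    rng = random.Random(seed)
    g = nx.Graph()
    g.add_edge(0, 1)                      # minimal seed structure
    for new in range(2, n_nodes):
        target = attach_policy(g, rng)    # evolvable part
        g.add_edge(new, target)
    return g

# Example evolvable policy: degree-proportional (preferential) attachment.
def preferential(g: nx.Graph, rng: random.Random) -> int:
    nodes, degrees = zip(*g.degree())
    return rng.choices(nodes, weights=degrees, k=1)[0]

g = embryo_model(100, preferential)
print(g.number_of_nodes(), g.number_of_edges())
```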
Abstract:
Affiliation: Institut de recherche en immunologie et en cancérologie, Université de Montréal
Abstract:
Changes are continuously made to software source code to accommodate customer needs and to correct faults. These continuous changes can lead to code and design defects. Design defects are poor solutions to recurring design or implementation problems, typically in object-oriented development. During comprehension and change activities, and because of time-to-market pressure, lack of understanding, and their level of experience, developers cannot always follow design standards and coding techniques such as design patterns. Consequently, they introduce design defects into their systems. In the literature, several authors have argued that design defects make object-oriented systems harder to understand, more fault-prone, and harder to change than systems without them. Yet only a few of these authors have empirically studied the impact of design defects on comprehension, and none has studied their impact on the effort developers need to correct faults. In this thesis, we make three main contributions. The first contribution is an empirical study providing evidence of the impact of design defects on comprehension and change. We design and conduct two experiments with 59 subjects to assess the impact of the composition of two occurrences of Blob or two occurrences of spaghetti code on the performance of developers carrying out comprehension and change tasks. We measure developer performance using (1) the NASA task load index for their effort, (2) the time spent completing their tasks, and (3) their percentages of correct answers. The results of the two experiments showed that two occurrences of Blob or spaghetti code are a significant obstacle to developer performance during comprehension and change tasks. These results justify previous research on the specification and detection of design defects. Software development teams should warn developers against high numbers of design-defect occurrences and recommend refactorings at each step of the development process to remove them where possible. In the second contribution, we study the relation between design defects and faults, investigating the impact of the presence of design defects on the effort needed to correct faults. We measure the effort to correct faults using three indicators: (1) the duration of the correction period, (2) the number of fields and methods touched by the fault correction, and (3) the entropy of fault corrections in the source code. We conduct an empirical study with 12 design defects detected across 54 releases of four systems: ArgoUML, Eclipse, Mylyn, and Rhino. Our results showed that the duration of the correction period is longer for faults involving classes with design defects, and that correcting faults in classes with design defects touches more files, fields, and methods.
We also observed that, after a fault is corrected, the number of design-defect occurrences in the classes involved in the correction decreases. Understanding the impact of design defects on the effort developers need to correct faults is important to help development teams better assess and predict the impact of their design decisions, and thus channel their efforts toward improving the quality of their systems. Development teams should monitor and remove design defects from their systems, as these defects are likely to increase change effort. The third contribution concerns the detection of design defects. During maintenance activities, it is important to have a tool capable of detecting design defects incrementally and iteratively. Such an incremental, iterative detection process could reduce costs, effort, and resources by letting practitioners identify and address design-defect occurrences as they find them during comprehension and change. Researchers have proposed approaches to detect design-defect occurrences, but these approaches currently have four limitations: (1) they require extensive knowledge of design defects, (2) they have limited precision and recall, (3) they are not iterative and incremental, and (4) they cannot be applied to subsets of systems. To overcome these limitations, we introduce SMURF, a new approach to detect design defects based on a machine learning technique, support vector machines, that takes practitioners' feedback into account. Through an empirical study of three systems and four design defects, we showed that SMURF's precision and recall are higher than those of DETEX and BDTEX when detecting design-defect occurrences. We also showed that SMURF can be applied in both intra-system and inter-system configurations. Finally, we showed that SMURF's precision and recall improve when practitioners' feedback is taken into account.
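SMURF's implementation is not given in the abstract; the sketch below only illustrates the underlying idea as an assumption: train an SVM on per-class code metrics, then fold practitioners' corrected labels back into the training set and retrain. The metric names and data are invented.

```python
# Hypothetical sketch of SVM-based design-defect detection with
# practitioner feedback: train, predict, fold corrected labels back in,
# retrain. Metrics (LOC, methods, coupling) and data are invented.
from sklearn.svm import SVC

# Per-class metric vectors and labels (1 = defect, e.g., Blob; 0 = clean).
X = [[1200, 60, 25], [90, 7, 3], [2000, 85, 30], [150, 10, 4], [1700, 70, 28]]
y = [1, 0, 1, 0, 1]

clf = SVC(kernel="rbf").fit(X, y)

candidate = [[1100, 55, 22]]
print("predicted:", clf.predict(candidate)[0])

# A practitioner reviews the prediction and corrects it; retrain with the
# validated example added (the feedback loop).
X.append(candidate[0])
y.append(0)  # practitioner says: not actually a Blob
clf = SVC(kernel="rbf").fit(X, y)
```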
Abstract:
Methotrexate (MTX), an anti-cancer agent frequently used in chemotherapy, generally requires therapeutic drug monitoring (TDM) to track its blood level in the patient, so as to maximize efficacy while limiting side effects. Despite the narrow therapeutic window between efficacy and toxicity, MTX remains, to this day, one of the most widely used anti-cancer agents in the world. Existing analytical techniques for the TDM of MTX are expensive, require time and effort, and do not necessarily deliver results within the required turnaround time. To speed up MTX assays in TDM, a strategy was proposed based on a competitive assay characterized mainly by the plasmonic coupling of a metallic surface and gold nanoparticles. More precisely, the quantitative assay exploits the competition between MTX and a gold nanoparticle functionalized with folic acid (FA-AuNP) for a molecular receptor, human dihydrofolate reductase (hDHFR), an enzyme associated with proliferative diseases. Free MTX, mixed with the FA-AuNPs, competes for the binding sites of hDHFR, either immobilized on an SPR-active surface or free in solution. The hDHFR-bound FA-AuNPs then provide a signal amplification that is inversely proportional to the MTX concentration. Surface plasmon resonance (SPR) is generally used as a spectroscopic technique for interrogating biomolecular interactions. Commercial SPR instruments are generally found in large analytical laboratories; they are also bulky, expensive, lack selectivity in complex-matrix analyses, and have not yet demonstrated suitability for clinical settings. Moreover, SPR analyses of small molecules such as drugs have not been explored intensively, owing to the technique's lack of sensitivity for this class of molecules. Recent developments in materials science and surface chemistry, exploiting the integration of gold nanoparticles for SPR response amplification and peptide surface chemistry, have shown the potential to overcome the limits posed by this lack of sensitivity and by non-specific adsorption for direct analyses in biological media. These new SPR concepts are incorporated into a miniaturized, compact SPR system to perform fast, reliable, and sensitive analyses for monitoring MTX levels in patient serum during chemotherapy. The objective of this thesis is to explore different strategies for improving drug analysis in complex media with SPR biosensors and to put into perspective the potential of SPR biosensors as a useful TDM tool in the clinical laboratory or at the patient's bedside. To reach these objectives, a colorimetric competitive assay for MTX based on localized surface plasmon resonance (LSPR) was established with FA-labelled gold nanoparticles. This in-solution colorimetric competitive assay was then adapted to an SPR platform. For both assays, the sensitivity, selectivity, detection limit, dynamic-range optimization, and analysis of MTX in complex media were examined.
In addition, the miniaturized SPR platform prototype was validated by its performance, equivalent to that of existing SPR systems, and by its usefulness for analyzing clinical samples from patients under MTX chemotherapy. The MTX concentrations obtained with the prototype were compared against standard techniques, namely a fluorescence polarization immunoassay (FPIA) and liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS), to validate the prototype's usefulness as a clinical tool for rapid MTX quantification. Finally, deployment of the prototype in a hospital biochemistry laboratory demonstrates the enormous potential of SPR biosensors for clinical use.
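Competitive assays like the one described are commonly quantified by fitting a sigmoidal calibration curve in which signal falls as analyte concentration rises; the sketch below fits a four-parameter logistic (4PL) model to invented calibration points and inverts it to estimate an unknown MTX concentration. The model choice and all values are assumptions, not the thesis's calibration procedure.

```python
# Hypothetical sketch: 4-parameter logistic (4PL) calibration for a
# competitive assay where signal is inversely related to concentration.
# Calibration data and parameters are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, top, bottom, ec50, hill):
    return bottom + (top - bottom) / (1.0 + (x / ec50) ** hill)

conc = np.array([0.01, 0.1, 1.0, 10.0, 100.0])       # MTX, arbitrary units
signal = np.array([0.98, 0.90, 0.55, 0.15, 0.05])    # normalized SPR response

popt, _ = curve_fit(four_pl, conc, signal, p0=[1.0, 0.0, 1.0, 1.0])

def invert(sig, top, bottom, ec50, hill):
    """Solve the fitted 4PL for concentration given a measured signal."""
    return ec50 * ((top - bottom) / (sig - bottom) - 1.0) ** (1.0 / hill)

print("estimated MTX conc:", invert(0.40, *popt))
```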
Abstract:
3-D assessment of scoliotic deformities relies on an accurate 3-D reconstruction of bone structures from biplanar X-rays, which requires a precise detection and matching of anatomical structures in both views. In this paper, we propose a novel semiautomated technique for detecting complete scoliotic rib borders from PA-0° and PA-20° chest radiographs, by using an edge-following approach with multiple-path branching and oriented filtering. Edge-following processes are initiated from user starting points along upper and lower rib edges and the final rib border is obtained by finding the most parallel pair among detected edges. The method is based on a perceptual analysis leading to the assumption that no matter how bent a scoliotic rib is, it will always present relatively parallel upper and lower edges. The proposed method was tested on 44 chest radiographs of scoliotic patients and was validated by comparing pixels from all detected rib borders against their reference locations taken from the associated manually delineated rib borders. The overall 2-D detection accuracy was 2.64 ± 1.21 pixels. Comparing this accuracy level to reported results in the literature shows that the proposed method is very well suited for precisely detecting borders of scoliotic ribs from PA-0° and PA-20° chest radiographs.
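The paper selects the final rib border as the most parallel pair among detected edges; as a rough, hypothetical sketch of how such a parallelism criterion could be scored, the snippet below compares local tangent angles along two candidate polylines. All details are assumptions, not the authors' implementation.

```python
# Hypothetical sketch: scoring the parallelism of a candidate pair of
# detected edges (upper/lower rib borders) by comparing local tangent
# angles along the polylines. Details are illustrative assumptions.
import numpy as np

def tangent_angles(points: np.ndarray) -> np.ndarray:
    """Angles of segment tangents along an (N, 2) polyline, in radians."""
    d = np.diff(points, axis=0)
    return np.arctan2(d[:, 1], d[:, 0])

def parallelism_score(edge_a: np.ndarray, edge_b: np.ndarray) -> float:
    """Mean absolute angular difference; lower means more parallel."""
    n = min(len(edge_a), len(edge_b)) - 1
    a = tangent_angles(edge_a[: n + 1])
    b = tangent_angles(edge_b[: n + 1])
    diff = np.angle(np.exp(1j * (a - b)))   # wrap differences to [-pi, pi]
    return float(np.mean(np.abs(diff)))

# The most parallel pair among candidates would minimize this score:
upper = np.array([[0, 0], [10, 2], [20, 5], [30, 9]], dtype=float)
lower = np.array([[0, 6], [10, 8], [20, 11], [30, 15]], dtype=float)
print(parallelism_score(upper, lower))  # ~0 for near-parallel edges
```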
Abstract:
The main objective of the work undertaken here was to develop an appropriate microbial technology to protect the larvae of M. rosenbergii from vibriosis in the hatchery. Precisely, this technology consists of a rapid detection system for vibrios and effective antagonistic probiotics for their management. The present work was undertaken with the realization that, to stabilize the production process of commercial hatcheries, an appropriate, comprehensive and foolproof technology is required, primarily for the rapid detection of Vibrio and subsequently for its management. Nine species of Vibrio were found to be associated with larvae of M. rosenbergii in the hatchery. A haemolytic assay of the Vibrio and Aeromonas isolates on prawn blood agar showed that all isolates of V. alginolyticus and Aeromonas sp. from moribund, necrotized larvae were haemolytic, while the isolates of V. cholerae, V. splendidus II, V. proteolyticus and V. fluvialis from larvae obtained from apparently healthy larval rearing systems were non-haemolytic. Hydrolytic enzymes such as lipase, chitinase and gelatinase were widespread amongst the Vibrio and Aeromonas isolates. The dominance of V. alginolyticus among the isolates from necrotic larvae, and the failure to isolate it from rearing water, strongly suggest that it infects larvae, multiplies in the larval body, and causes mortality in the hatchery. These observations suggest that the V. alginolyticus isolate is pathogenic to the larvae of M. rosenbergii. To sum up, through this work, nine species of Vibrio and the genus Aeromonas associated with M. rosenbergii larval rearing systems were isolated and segregated based on haemolytic activity, and antibodies (PAbs) for use in diagnosis or epidemiological studies were produced from a virulent culture of V. alginolyticus. These could possibly replace the conventional biochemical tests for identification. As prophylaxis against vibriosis, four isolates of Micrococcus spp. and one isolate of Pseudomonas sp. were obtained that could possibly be used as antagonistic probiotics in the larval rearing system of M. rosenbergii.