901 results for Subfractals, Subfractal Coding, Model Analysis, Digital Imaging, Pattern Recognition
Abstract:
The objective of this thesis was to study the removal of gases from paper mill circulation waters experimentally and to provide data for CFD modeling. Flow and bubble size measurements were carried out in a laboratory-scale open gas separation channel. The Particle Image Velocimetry (PIV) technique was used to measure the gas and liquid flow fields, while bubble sizes were measured with a digital imaging technique using backlight illumination. Samples of paper machine waters as well as a model solution were used for the experiments. The PIV results show that gas bubbles near the feed position tend to escape from the circulation channel faster than bubbles farther from the feed position. This was attributed to an increased rate of bubble coalescence, which produced relatively large bubbles near the feed position. Moreover, the measured slip velocities of the paper mill waters agreed closely with literature values. Because the paper mill waters were diluted, the observed average bubble size was considerably larger than the average bubble sizes in real industrial pulp suspensions and circulation waters. Among the studied solutions, the model solution had the highest average drag coefficient due to its relatively high viscosity. The results were compared to a 2D steady-state CFD simulation model. A standard Euler-Euler k-ε turbulence model was used in the simulations. The channel free surface was modeled as a degassing boundary. Of the drag models used in the simulations, the Grace drag model gave velocity fields closest to the experimental values. In general, the results obtained from experiments and CFD simulations are in good qualitative agreement.
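The slip velocities and drag coefficients mentioned above are related through a steady-state force balance on a rising bubble (buoyancy balanced by drag). The following is a minimal illustrative sketch of that relation only; the bubble diameter, slip velocity, and fluid properties are hypothetical placeholders, not measurements from the thesis.

```python
# Illustrative sketch: estimating a bubble drag coefficient from a measured
# slip velocity via a steady-state force balance (buoyancy = drag).
# All numerical values are placeholders, not data from the thesis.

def drag_coefficient(d_bubble, u_slip, rho_liquid=998.0, rho_gas=1.2, g=9.81):
    """C_D = 4 g d (rho_l - rho_g) / (3 rho_l u_slip^2)."""
    return 4.0 * g * d_bubble * (rho_liquid - rho_gas) / (3.0 * rho_liquid * u_slip**2)

if __name__ == "__main__":
    d = 1.0e-3   # bubble diameter [m], hypothetical
    u = 0.11     # measured slip velocity [m/s], hypothetical
    print(f"C_D = {drag_coefficient(d, u):.2f}")
```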
Abstract:
Paper presented at the 40th Annual Conference of LIBER (Ligue des Bibliothèques Européennes de Recherche - Association of European Research Libraries) on 1 July 2011, together with the slides used at the presentation.
Abstract:
Systems biology is an emerging, rapidly developing, multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Unlike “traditional” biology, systems biology focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization, concurrency, and many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools of systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans heuristic, numerical, statistical and analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although they are applied to particular case studies, the presented methods are not limited to them and can be used in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques and model analysis methodologies constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussion of their potential and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
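To give a concrete flavour of the kind of mathematical modelling of life processes referred to above, here is a minimal sketch of a mass-action ODE model integrated numerically. The reactions, rate constants and stress input are purely illustrative assumptions and are not the heat shock response or intermediate filament models of the thesis.

```python
# Minimal sketch of a mass-action ODE model (stress-induced protein synthesis
# with first-order degradation), integrated with scipy. All rates are hypothetical.
from scipy.integrate import solve_ivp

k_syn, k_deg, k_induction = 0.5, 0.1, 2.0   # hypothetical rate constants

def rhs(t, y, stress):
    protein = y[0]
    # d[protein]/dt = basal synthesis + stress-induced synthesis - degradation
    return [k_syn + k_induction * stress - k_deg * protein]

sol = solve_ivp(rhs, (0.0, 100.0), [k_syn / k_deg], args=(1.0,))
print("Protein level at t = 100:", sol.y[0, -1])
```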
Abstract:
Linear programming models are effective tools to support initial or periodic planning of agricultural enterprises; they require, however, technical coefficients that can be determined using computer simulation models. This paper, presented in two parts, deals with the development, application and testing of a methodology and of a computational modeling tool to support the planning of irrigated agriculture activities. Part I covers the development and application, including sensitivity analysis, of a multiyear linear programming model to optimize the financial return and water use at farm level for the Jaíba irrigation scheme, Minas Gerais State, Brazil, using data on crop irrigation requirements and yield obtained from previous simulations with the MCID model. The linear programming model produced a crop pattern for which a maximum total net present value of R$ 372,723.00 was obtained for the four-year period. Constraints on monthly water availability, labor, land and production were critical in the optimal solution. With respect to water use optimization, it was verified that expressive reductions in the irrigation requirements can be achieved with small reductions in the maximum total net present value.
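The structure of such a model (maximize net present value subject to water, labor and land constraints) can be sketched as below. The crop set, net-present-value coefficients and resource limits are hypothetical placeholders; in practice the technical coefficients would come from the simulation model, as described above.

```python
# Minimal sketch of a farm-level linear programming model of the kind described
# above, using scipy.optimize.linprog. All coefficients are hypothetical, not
# the Jaíba data.
from scipy.optimize import linprog

npv_per_ha = [1200.0, 900.0, 1500.0]        # R$/ha for three hypothetical crops
water_m3_per_ha = [4000.0, 2500.0, 6000.0]  # irrigation requirement per crop
labor_h_per_ha = [120.0, 80.0, 150.0]

# linprog minimizes, so negate the objective to maximize total NPV.
c = [-v for v in npv_per_ha]
A_ub = [water_m3_per_ha, labor_h_per_ha, [1.0, 1.0, 1.0]]
b_ub = [450_000.0, 15_000.0, 100.0]         # water, labor, and land availability

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print("Cropped areas (ha):", res.x, "Max NPV (R$):", -res.fun)
```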
Abstract:
The condensation rate has to be high in the safety pressure suppression pool systems of Boiling Water Reactors (BWR) in order for them to fulfill their safety function. The phenomena associated with such a high direct contact condensation (DCC) rate are very challenging to analyse, either with experiments or with numerical simulations. In this thesis, the suppression pool experiments carried out in the POOLEX facility of Lappeenranta University of Technology were simulated. Two different condensation modes were modelled using the two-phase CFD codes NEPTUNE CFD and TransAT. The DCC models applied were typical of those used for separated flows in channels, and their applicability to the rapidly condensing flow in the condensation pool context had not been tested earlier. A low Reynolds number case was simulated first. The POOLEX experiment STB-31 was operated near the boundary between the 'quasi-steady oscillatory interface condensation' mode and the 'condensation within the blowdown pipe' mode. The condensation models of Lakehal et al. and Coste & Laviéville predicted the condensation rate quite accurately, while the other tested models overestimated it. It was possible to get the direct phase change solution to settle near the measured values, but a very fine calculation grid was needed. Secondly, a high Reynolds number case corresponding to the 'chugging' mode was simulated. The POOLEX experiment STB-28 was chosen because various standard and high-speed video samples of bubbles were recorded during it. In order to extract numerical information from the video material, a pattern recognition procedure was programmed. The bubble size distributions and the frequencies of chugging were calculated with this procedure. With the statistical data of the bubble sizes and temporal data of the bubble/jet appearance, it was possible to compare the condensation rates between the experiment and the CFD simulations. In the chugging simulations, a spherically curvilinear calculation grid at the blowdown pipe exit improved the convergence and decreased the required cell count. A compressible flow solver with complete steam tables was beneficial for the numerical success of the simulations. The Hughes-Duffey model and, to some extent, the Coste & Laviéville model produced realistic chugging behavior. The initial level of the steam/water interface was an important factor in determining the initiation of chugging. If the interface was initialized with a water level high enough inside the blowdown pipe, the vigorous penetration of a water plug into the pool created a turbulent wake which triggered self-sustaining chugging. A 3D simulation with a suitable DCC model produced qualitatively very realistic shapes of the chugging bubbles and jets. A comparative FFT analysis of the bubble size data and the pool bottom pressure data gave useful information for distinguishing the eigenmodes of chugging, bubbling, and pool structure oscillations.
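The comparative FFT analysis mentioned at the end amounts to extracting dominant frequencies from time series such as bubble size or pool bottom pressure. The sketch below illustrates only that generic step on a synthetic signal; the sampling rate and the 1.3 Hz component are assumptions, not POOLEX data or the thesis's analysis code.

```python
# Illustrative sketch of extracting a dominant "chugging" frequency from a time
# series with an FFT; the signal below is synthetic, not POOLEX data.
import numpy as np

fs = 500.0                                   # sampling frequency [Hz], hypothetical
t = np.arange(0.0, 20.0, 1.0 / fs)
signal = np.sin(2 * np.pi * 1.3 * t) + 0.3 * np.random.randn(t.size)  # ~1.3 Hz component

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
print(f"Dominant frequency = {freqs[spectrum.argmax()]:.2f} Hz")
```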
Abstract:
This thesis studies automatic traffic sign inventory and condition analysis using machine vision and pattern recognition methods. Automatic traffic sign inventory and condition analysis can be used to make road maintenance more efficient, to improve maintenance processes, and to enable intelligent driving systems. Automatic traffic sign detection and classification has previously been studied from the viewpoint of self-driving vehicles, driver assistance systems, and the use of signs in mapping services. Machine vision based inventory of traffic signs consists of detection, classification, localization, and condition analysis of traffic signs. The performance of the produced machine vision system is estimated with three datasets, two of which were collected for this thesis. Based on the experiments, almost all traffic signs can be detected, classified, and located, and their condition analysed. In the future, the inventory system's performance has to be verified in challenging conditions and the system has to be pilot tested.
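As a rough illustration of the detection stage of such a pipeline, the sketch below finds red sign candidates by colour thresholding and contour filtering with OpenCV. This is a generic example and not the detector developed in the thesis; the image path, colour ranges and area threshold are assumptions.

```python
# Generic detection-stage sketch (colour thresholding + contour filtering with
# OpenCV 4.x); not the thesis's detector. Candidates would then be passed on to
# classification, localization and condition analysis.
import cv2

def detect_red_sign_candidates(image_bgr):
    """Return bounding boxes of red regions that could be traffic signs."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    mask = cv2.inRange(hsv, (0, 100, 80), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 100, 80), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 400]

boxes = detect_red_sign_candidates(cv2.imread("road_scene.jpg"))  # hypothetical file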
Abstract:
The thesis uses an automatic pattern recognition algorithm and common two-moving-average crossover rules to explain the sell-buy imbalance of retail investors trading on the Stuttgart stock exchange, and thereby to answer the question "do retail investors base their trading decisions on technical analysis methods?" Based on previous research on investor behaviour and on the profitability of technical analysis, the baseline assumption was that retail investors would use technical analysis methods. The empirical study, whose data cover the DAX30 companies over the years 2009-2013, did not produce a sufficiently clear answer to the research question. Weak evidence nevertheless seems to indicate that retail investors shift their trading behaviour in the direction suggested by certain chart patterns and crossover rules.
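A two-moving-average crossover rule of the kind referred to above generates a buy signal when a fast moving average crosses above a slow one, and a sell signal on the opposite crossing. The sketch below illustrates this on a synthetic price series; the window lengths and data are placeholders, not the thesis's specification.

```python
# Minimal sketch of a two-moving-average crossover rule using pandas; the price
# series and window lengths (20/50) are placeholders.
import numpy as np
import pandas as pd

prices = pd.Series(100 + np.cumsum(np.random.randn(500)))  # synthetic price path
fast = prices.rolling(20).mean()
slow = prices.rolling(50).mean()

signal = np.sign(fast - slow)          # +1 when fast is above slow, -1 when below
crossovers = signal.diff()             # +2 marks a buy signal, -2 a sell signal
print("Buy signals:", int((crossovers == 2).sum()),
      "Sell signals:", int((crossovers == -2).sum()))
```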
Abstract:
The objective of the present study was to examine gender differences in the influence of paternal alcoholism on children's social-emotional development and to determine whether paternal alcoholism is associated with a greater number of externalizing symptoms in the male offspring. From the Mannheim Study of Risk Children, an ongoing longitudinal study of a high-risk population, the developmental data of 219 children [193 (95 boys and 98 girls) of non-alcoholic fathers, non-COAs, and 26 (14 boys, 12 girls) of alcoholic fathers, COAs] were analyzed from birth to the age of 11 years. Paternal alcoholism was defined according to the ICD-10 categories of alcohol dependence and harmful use. Socio-demographic data, cognitive development, number and severity of behavior problems, and gender-related differences in the rates of externalizing and internalizing symptoms were assessed using standardized instruments (IQ tests, Child Behavior Checklist questionnaire and diagnostic interviews). The general linear model analysis revealed a significant overall effect of paternal alcoholism on the number of child psychiatric problems (F = 21.872, d.f. = 1.217, P < 0.001). Beginning at age 2, significantly higher numbers of externalizing symptoms were observed among COAs. In female COAs, a pattern similar to that of the male COAs emerged, with the predominance of delinquent and aggressive behavior. Unlike male COAs, females showed an increase of internalizing symptoms up to age 11 years. Of these, somatic complaints revealed the strongest discriminating effect in 11-year-old females. Children of alcoholic fathers are at high risk for psychopathology. Gender-related differences seem to exist and may contribute to different phenotypes during development from early childhood to adolescence.
Abstract:
High resolution proton nuclear magnetic resonance spectroscopy (¹H MRS) can be used to detect biochemical changes in vitro caused by distinct pathologies. It can reveal distinct metabolic profiles of brain tumors although the accurate analysis and classification of different spectra remains a challenge. In this study, the pattern recognition method partial least squares discriminant analysis (PLS-DA) was used to classify 11.7 T ¹H MRS spectra of brain tissue extracts from patients with brain tumors into four classes (high-grade neuroglial, low-grade neuroglial, non-neuroglial, and metastasis) and a group of control brain tissue. PLS-DA revealed 9 metabolites as the most important in group differentiation: γ-aminobutyric acid, acetoacetate, alanine, creatine, glutamate/glutamine, glycine, myo-inositol, N-acetylaspartate, and choline compounds. Leave-one-out cross-validation showed that PLS-DA was efficient in group characterization. The metabolic patterns detected can be explained on the basis of previous multimodal studies of tumor metabolism and are consistent with neoplastic cell abnormalities possibly related to high turnover, resistance to apoptosis, osmotic stress and tumor tendency to use alternative energetic pathways such as glycolysis and ketogenesis.
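PLS-DA with leave-one-out cross-validation, as used above, is commonly implemented as PLS regression on one-hot-encoded class labels, predicting the class with the largest fitted response. The sketch below shows that general procedure with scikit-learn; the spectra, group labels, and number of latent components are random placeholders, not the study's ¹H MRS data or settings.

```python
# Sketch of PLS-DA with leave-one-out cross-validation, implemented the usual way
# as PLS regression on one-hot class labels (scikit-learn). The data are random
# placeholders, not the study's spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.preprocessing import LabelBinarizer

X = np.random.rand(50, 200)              # 50 spectra x 200 spectral bins (placeholder)
y = np.random.randint(0, 5, size=50)     # 5 groups (4 tumour classes + control)

lb = LabelBinarizer()
Y = lb.fit_transform(y)                  # one-hot class membership matrix

correct = 0
for train, test in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=5).fit(X[train], Y[train])
    pred = lb.classes_[pls.predict(X[test]).argmax(axis=1)]
    correct += int(pred[0] == y[test][0])
print("Leave-one-out accuracy:", correct / len(y))
```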
Abstract:
Osteoporosis has become a serious global public health issue. Hence, osteoporotic fracture healing has been investigated in several previous studies because there is still controversy over the effect osteoporosis has on the healing process. The current study aimed to analyze two different periods of bone healing in normal and osteopenic rats. Sixty 7-week-old female Wistar rats were randomly divided into four groups: unrestricted and immobilized for 2 weeks after osteotomy (OU2), suspended and immobilized for 2 weeks after osteotomy (OS2), unrestricted and immobilized for 6 weeks after osteotomy (OU6), and suspended and immobilized for 6 weeks after osteotomy (OS6). Osteotomy was performed in the middle third of the right tibia 21 days after tail suspension, when the osteopenic condition was already established. The fractured limb was then immobilized with an orthosis. Tibias were collected 2 and 6 weeks after osteotomy, and were analyzed by bone densitometry, mechanical testing, and histomorphometry. Bone mineral density values from bony calluses were significantly lower in the 2-week post-osteotomy groups compared with the 6-week post-osteotomy groups (multivariate general linear model analysis, P<0.000). Similarly, the mechanical properties showed that animals had stronger bones 6 weeks after osteotomy compared with 2 weeks after osteotomy (multivariate general linear model analysis, P<0.000). Histomorphometry indicated gradual bone healing. Results showed that osteopenia did not influence the bone healing process, and that time was an independent determinant factor regardless of whether the fracture was osteopenic. This suggests that the body is able to compensate for the negative effects of suspension.
Abstract:
Remote sensing techniques involving hyperspectral imagery have applications in a number of sciences that study aspects of the surface of the planet. The analysis of hyperspectral images is complex because of the large amount of information involved and the noise within the data. Investigating images in order to identify minerals, rocks, vegetation and other materials is an application of hyperspectral remote sensing in the earth sciences. This thesis evaluates the performance of two classification and clustering techniques on hyperspectral images for mineral identification. Support Vector Machines (SVM) and Self-Organizing Maps (SOM) are applied as the classification and clustering techniques, respectively. Principal Component Analysis (PCA) is used to prepare the data to be analyzed. The purpose of using PCA is to reduce the amount of data that needs to be processed by identifying the most important components within the data. A well-studied dataset from Cuprite, Nevada, and a more complex dataset from Baffin Island were used to assess the performance of these techniques. The main goal of this research is to evaluate the advantage of training a classifier on a small amount of data compared to an unsupervised method. Determining the effect of feature extraction on the accuracy of the clustering and classification methods is another goal of this research. The thesis concludes that using PCA increases the learning accuracy, especially in classification. SVM classifies the Cuprite data with high precision, while the SOM challenges the SVM on datasets with a high level of noise (such as Baffin Island).
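The PCA-then-SVM classification step described above can be expressed as a simple pipeline: project each pixel's spectrum onto a few principal components, then train the classifier on a small labelled subset. The sketch below uses scikit-learn with random placeholder data in place of the Cuprite or Baffin Island cubes; the band count, class count and number of components are assumptions.

```python
# Sketch of the PCA-then-SVM classification step, using a scikit-learn pipeline.
# The "hyperspectral" data below are random placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

X = np.random.rand(1000, 224)             # 1000 pixels x 224 spectral bands
y = np.random.randint(0, 4, size=1000)    # 4 hypothetical mineral classes

# Train on a small labelled subset, as in the supervised-vs-unsupervised comparison.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.1, stratify=y)
model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print("Held-out accuracy:", model.score(X_te, y_te))
```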
Abstract:
The aim of this thesis is to study the behavioural and neural correlates of cross-linguistic transfer (CLT) in second language (L2) learning. Given what is known about the influence of linguistic distance on CLT (Paradis, 1987, 2004; Odlin, 1989, 2004, 2005; Gollan, 2005; Ringbom, 2007), we examined the facilitating effect of phonological similarity using functional magnetic resonance imaging, comparing linguistically close languages (Spanish-French) with linguistically distant languages (Persian-French). Study I reports the results obtained for linguistically close languages (Spanish-French), whereas Study II concerns linguistically distant languages (Persian-French). Study III then reports the changes in functional connectivity within the language network (Price, 2010) and within the additional control network involved in second-language processing (Abutalebi & Green, 2007) during the learning of a linguistically distant language (Persian-French). The results of the fMRI analyses based on the general linear model in bilinguals of linguistically close languages (French-Spanish) show that processing words that are phonologically similar in the two languages (cognates and clangs) relies on a neural network shared by the first language (L1) and the L2, whereas processing phonologically distant words (non-clang non-cognates) activates structures involved in working memory and attention. However, in bilinguals whose L1 and L2 are linguistically distant (French-Persian), even words that are phonologically similar across the languages (cognates and clangs) activate regions known to be involved in attention and cognitive control. Moreover, phonologically distant words (non-clang non-cognates) activate regions usually associated with working memory and executive functions. Thus, the cross-linguistic distance between L1 and L2 modulates cognitive load on the basis of the degree of phonological similarity between L1 and L2 items. Structures supporting executive processing are recruited to compensate for the cognitive demands. As L2 proficiency increases and language tasks therefore require less effort, the demand for cognitive resources decreases. As previously reported (Majerus et al., 2008; Prat et al., 2007; Veroude et al., 2010; Dodel et al., 2005; Coynel et al., 2009), the results of the functional connectivity analyses show that after training the integration value (functional connectivity) decreases, since there is less flow of information. The results of this research contribute to a better understanding of the neurocognitive and brain-plasticity aspects of CLT, as well as of the impact of linguistic distance on language learning. These results have implications for L2 learning strategies, L2 teaching methods, and the development of therapeutic approaches for bilingual patients suffering from language disorders.
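As a rough illustration of what a functional connectivity estimate looks like in practice, the sketch below computes pairwise correlations between region-of-interest time series. It uses synthetic signals and plain correlation; it is not the integration measure or the analysis pipeline of Study III.

```python
# Illustrative sketch of a simple functional connectivity estimate: pairwise
# correlation of ROI time series. The signals below are synthetic placeholders.
import numpy as np

n_rois, n_volumes = 6, 200
timeseries = np.random.randn(n_volumes, n_rois)      # placeholder BOLD signals

connectivity = np.corrcoef(timeseries.T)             # n_rois x n_rois correlation matrix
mean_connectivity = connectivity[np.triu_indices(n_rois, k=1)].mean()
print("Mean pairwise functional connectivity:", round(float(mean_connectivity), 3))
```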
Abstract:
The proton-proton collisions produced by the LHC subject the ATLAS detector to a hostile radiation environment. To quantify the effects of this environment on detector performance and personnel safety, several Monte Carlo simulations have been carried out. However, direct measurement is indispensable for monitoring the radiation levels in ATLAS and for verifying the predictions of the simulations. To this end, sixteen ATLAS-MPX detectors were installed at various locations in the experimental and technical areas of ATLAS. They consist of a silicon pixel detector called MPX whose active surface is partially covered with converters for thermal, slow and fast neutrons. The ATLAS-MPX detectors measure the radiation fields in real time by recording the tracks of detected particles as matrix images. Analysis of the acquired images makes it possible to identify the types of detected particles from the shapes of their tracks. For this purpose, a pattern recognition software package called MAFalda was designed. Since the tracks of heavily ionizing particles are affected by charge sharing between adjacent pixels, a semi-empirical model describing this effect was developed. Thanks to this model, the energy of heavily ionizing particles can be estimated from the size of their tracks. The neutron converters covering each ATLAS-MPX detector form six different regions. The efficiency of each region for detecting thermal, slow and fast neutrons was determined through calibration measurements with known sources. The study of the response of the ATLAS-MPX detectors to the radiation produced by head-on proton collisions at a centre-of-mass energy of 7 TeV showed that the number of recorded tracks is proportional to the LHC luminosity. This result allows the ATLAS-MPX detectors to be used as luminosity monitors. The method proposed for measuring and calibrating the absolute luminosity with these detectors is the van der Meer method, which is based on the parameters of the LHC beams. Given the correlation between the response of the ATLAS-MPX detectors and the luminosity, the measured radiation levels are expressed in terms of fluences of different particle types per unit of integrated luminosity. A significant discrepancy was found when comparing these fluences with those predicted by GCALOR, one of the Monte Carlo simulations of the ATLAS detector. Furthermore, measurements performed after the end of the proton-proton collisions showed that the ATLAS-MPX detectors make it possible to observe the decay of the radioactive isotopes generated during the collisions. The residual activation of the ATLAS materials can be measured with these detectors thanks to a calibration in terms of ambient dose equivalent.
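To illustrate the general idea of identifying particle types from track shapes in a pixel-matrix frame, the sketch below labels connected clusters and classifies them crudely by size and compactness. This is a toy example, not MAFalda, and the frame contents, cluster categories and thresholds are assumptions.

```python
# Toy sketch (not MAFalda) of classifying particle tracks in a pixel-matrix frame
# by the size and compactness of their connected clusters, using scipy.ndimage.
import numpy as np
from scipy import ndimage

frame = np.zeros((256, 256), dtype=int)   # placeholder MPX frame (hit map)
frame[100, 100] = 1                       # single-pixel hit
frame[50:54, 60:64] = 1                   # compact blob (e.g. heavily ionizing particle)

labels, n = ndimage.label(frame > 0)
for i in range(1, n + 1):
    ys, xs = np.nonzero(labels == i)
    size = ys.size
    extent = max(ys.ptp(), xs.ptp()) + 1
    kind = "dot" if size == 1 else ("heavy blob" if size / extent**2 > 0.5 else "track")
    print(f"cluster {i}: {size} pixels -> {kind}")
```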
Abstract:
Presentation: This article was published in the journal Computerized Medical Imaging and Graphics (CMIG). The aim of this article is to register vertebrae extracted from MR images with vertebrae extracted from X-ray images for scoliotic patients, taking into account the non-rigid deformations due to the change of posture between these two modalities. To this end, a registration method based on an articulated model is proposed. This method was compared with a rigid registration by computing the error on landmark points, as well as by computing the difference between the Cobb angle before and after registration. An additional validation of the registration method presented here can be found in Appendix A. This work will serve as a first step in the fusion of MR, X-ray and TP images of the full trunk. Thus, this article verifies hypothesis 1 described in Section 3.2.1.
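For context on the rigid-registration baseline mentioned above, the sketch below performs a least-squares rigid (Kabsch) alignment of two landmark sets and reports the mean landmark error. It is a generic baseline illustration with hypothetical coordinates, not the article's articulated-model registration.

```python
# Generic sketch of rigid (Kabsch) landmark alignment and the resulting landmark
# error, used only as a baseline illustration; landmark coordinates are placeholders.
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation and translation mapping src landmarks onto dst."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # avoid an improper rotation (reflection)
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

mr_landmarks = np.random.rand(17, 3)                       # hypothetical MR landmarks
xray_landmarks = mr_landmarks + 0.05 * np.random.rand(17, 3)  # hypothetical X-ray landmarks
R, t = rigid_align(mr_landmarks, xray_landmarks)
error = np.linalg.norm((mr_landmarks @ R.T + t) - xray_landmarks, axis=1).mean()
print("Mean landmark error after rigid registration:", error)
```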
Abstract:
International School of Photonics, Cochin University of Science and Technology