913 results for MS-based methods
Abstract:
Previous studies have demonstrated that long-chain fatty acids influence fibroblast function at sub-lethal concentrations. This study is the first to assess the effects of oleic, linoleic or palmitic acid on the protein expression of fibroblasts, as determined by standard proteomic techniques. The fatty acids were not cytotoxic at the concentration used in this work, as assessed by membrane integrity, DNA fragmentation and the MTT assay, but significantly increased cell proliferation. Subsequently, a proteomic analysis was performed using two-dimensional difference gel electrophoresis (2D-DIGE) and MS-based identification. Treatment with 50 μM oleic, linoleic or palmitic acid for 24 h was associated with 24, 22 and 16 differentially expressed spots, respectively. Among the identified proteins, α-enolase and far upstream element binding protein 1 (FBP-1) are of particular interest because of their role in fibroblast-associated diseases. However, the modulation of α-enolase and FBP-1 expression by fatty acids was not validated by Western blotting.
Abstract:
Industrial recurrent event data, in which an event of interest can be observed more than once in a single sample unit, arise in several areas, such as engineering, manufacturing and industrial reliability. This type of data provides information about the number of events, the time to their occurrence and their costs. Nelson (1995) presents a methodology for obtaining asymptotic confidence intervals for the cost and the number of cumulative recurrent events. Although this is a standard procedure, it may not perform well in some situations, in particular when the available sample size is small. In this context, computer-intensive methods such as the bootstrap can be used to construct confidence intervals. In this paper, we propose a technique based on the bootstrap method to obtain interval estimates for the cost and the number of cumulative events. Among the advantages of the proposed methodology are its applicability in several areas and its easy computational implementation. In addition, according to Monte Carlo simulations, it can be a better alternative than asymptotic-based methods for calculating confidence intervals. An example from the engineering area illustrates the methodology.
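As a rough illustration of the idea, the following is a minimal sketch of a nonparametric bootstrap confidence interval for the mean cumulative cost of recurrent events at a given time. The data layout, the percentile interval and the 95% level are illustrative assumptions, not the authors' implementation.

```python
# Sketch: percentile bootstrap CI for the mean cumulative cost at time t,
# obtained by resampling whole sample units with replacement.
import numpy as np

rng = np.random.default_rng(42)

def mean_cumulative_cost(units, t):
    """Average, over sample units, of the total event cost accrued up to time t.
    `units` is a list of (event_times, event_costs) pairs, one per unit."""
    totals = [costs[times <= t].sum() for times, costs in units]
    return np.mean(totals)

def bootstrap_ci(units, t, n_boot=2000, level=0.95):
    """Percentile bootstrap CI obtained by resampling whole units."""
    n = len(units)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample units with replacement
        stats[b] = mean_cumulative_cost([units[i] for i in idx], t)
    lo, hi = np.quantile(stats, [(1 - level) / 2, 1 - (1 - level) / 2])
    return lo, hi

# toy data: 5 units, each with event times (months) and repair costs
units = [(np.array([2.0, 7.5]), np.array([120.0, 300.0])),
         (np.array([4.1]), np.array([80.0])),
         (np.array([]), np.array([])),
         (np.array([1.2, 3.3, 9.0]), np.array([60.0, 60.0, 150.0])),
         (np.array([6.6]), np.array([200.0]))]
print(bootstrap_ci(units, t=8.0))
```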
Abstract:
Haemophilus parasuis infection, known as Glässer's disease, is characterized by fibrinous polyserositis, arthritis and meningitis in piglets. Although traditional diagnosis is based on herd history, clinical signs, bacterial isolation and serotyping, molecular methods are alternatives for species-specific tests and epidemiological studies. The aim of this study was to characterize H. parasuis strains isolated from different states of Brazil by serotyping, PCR and ERIC-PCR. Serotyping revealed serovar 4 as the most prevalent (24%), followed by serovars 14 (14%), 5 (12%), 13 (8%) and 2 (2%), whereas 40% of the strains were considered non-typeable. Of the 50 strains tested, 43 (86%) were positive for the group 1 vtaA gene, which has been related to virulent strains of H. parasuis. ERIC-PCR was able to type the isolates into 23 different patterns, including the non-typeable strains. The ERIC-PCR patterns were very heterogeneous and showed high similarity between strains from the same animal or farm. The results indicate that ERIC-PCR is a valuable tool for typing H. parasuis isolates collected in Brazil.
Abstract:
The ideal approach for the long-term treatment of intestinal disorders, such as inflammatory bowel disease (IBD), is a safe and well tolerated therapy able to reduce mucosal inflammation and maintain homeostasis of the intestinal microbiota. A combined therapy with antimicrobial agents, to reduce the antigenic load, and immunomodulators, to ameliorate the dysregulated responses, followed by probiotic supplementation, has been proposed. Because of the complementary mechanisms of action of antibiotics and probiotics, a combined therapeutic approach would give advantages in terms of enlargement of the antimicrobial spectrum, owing to the barrier effect of probiotic bacteria, and limitation of some side effects of traditional chemotherapy (i.e. indiscriminate reduction of both aggressive and protective intestinal bacteria, altered absorption of nutrients, allergic and inflammatory reactions). Rifaximin (4-deoxy-4’-methylpyrido[1’,2’-1,2]imidazo[5,4-c]rifamycin SV) is a product of synthesis experiments designed to modify the parent compound, rifamycin, in order to achieve low gastrointestinal absorption while retaining good antibacterial activity. Both experimental and clinical pharmacology clearly show that this compound is a non-systemic antibiotic with a broad spectrum of antibacterial action, covering Gram-positive and Gram-negative organisms, both aerobes and anaerobes. Being virtually non-absorbed, its bioavailability within the gastrointestinal tract is rather high, with intraluminal and faecal drug concentrations that largely exceed the MIC values observed in vitro against a wide range of pathogenic microorganisms. The gastrointestinal tract therefore represents the primary therapeutic target, and gastrointestinal infections the main indication. The negligible activity of rifaximin outside the enteric area minimizes both antimicrobial resistance and systemic adverse events. Fermented dairy products enriched with probiotic bacteria have developed into one of the most successful categories of functional foods. Probiotics are defined as “live microorganisms which, when administered in adequate amounts, confer a health benefit on the host” (FAO/WHO, 2002), and mainly include Lactobacillus and Bifidobacterium species. Probiotic bacteria exert a direct effect on the intestinal microbiota of the host and contribute to the organoleptic, rheological and nutritional properties of food. Administration of pharmaceutical probiotic formulas has been associated with therapeutic effects in the treatment of diarrhoea, constipation, flatulence, colonization by enteropathogens, gastroenteritis, hypercholesterolemia, IBD such as ulcerative colitis (UC), Crohn’s disease and pouchitis, and irritable bowel syndrome. Probiotics must be both effective and safe. The characteristics of an effective probiotic for gastrointestinal tract disorders are tolerance of the upper gastrointestinal environment (resistance to digestion by enteric or pancreatic enzymes, gastric acid and bile), adhesion to the intestinal surface to lengthen the retention time, the ability to prevent the adherence, establishment and/or replication of pathogens, production of antimicrobial substances, degradation of toxic catabolites by bacterial detoxifying enzymatic activities, and modulation of the host immune responses.
This study was carried out using a validated three-stage continuous fermentative system and aimed to investigate the effect of rifaximin on the colonic microbial flora of a healthy individual, in terms of bacterial composition and production of fermentative metabolic end products. Moreover, this is the first study that investigates in vitro the impact of the simultaneous administration of the antibiotic rifaximin and the probiotic B. lactis BI07 on the intestinal microbiota. Bacterial groups of interest were evaluated using culture-based methods and molecular culture-independent techniques (FISH, PCR-DGGE). Metabolic outputs, in terms of SCFA profiles, were determined by HPLC analysis. The collected data demonstrated that neither rifaximin alone nor the combined antibiotic and probiotic treatment drastically changed the intestinal microflora, whereas bacteria belonging to Bifidobacterium and Lactobacillus increased significantly over the course of the treatment, suggesting a spontaneous emergence of rifaximin resistance. These results are in agreement with a previous study, which demonstrated that rifaximin administration in patients with UC causes only minor variations of the intestinal microflora and that the microbiota is restored over a wash-out period. In particular, several rifaximin-resistant Bifidobacterium mutants could be isolated during the antibiotic treatment, but they disappeared after the antibiotic was discontinued. Furthermore, bacteria belonging to Atopobium spp. and the E. rectale/Clostridium cluster XIVa increased significantly after the rifaximin and probiotic treatment. The Atopobium genus and the E. rectale/Clostridium cluster XIVa comprise saccharolytic, butyrate-producing bacteria and are therefore widely considered health-promoting microorganisms. The absence of major variations in the intestinal microflora of a healthy individual and the significant increase in the concentrations of probiotic and health-promoting bacteria support the rationale of administering rifaximin as an efficacious and non-dysbiosis-promoting therapy, and suggest the efficacy of a combined antibiotic/probiotic treatment in several gut pathologies, such as IBD. To assess the use of an antibiotic/probiotic combination for the clinical management of intestinal disorders, genetic, proteomic and physiological approaches were employed to elucidate the molecular mechanisms determining rifaximin resistance in Bifidobacterium and the interactions expected to occur in the gut between these bacteria and the drug. The ability of an antimicrobial agent to select for resistance is a relevant factor that affects its usefulness and may diminish its useful life. The rifaximin resistance phenotype was easily acquired by all bifidobacteria analyzed [type strains of the most representative intestinal bifidobacterial species (B. infantis, B. breve, B. longum, B. adolescentis and B. bifidum) and three bifidobacteria included in a pharmaceutical probiotic preparation (B. lactis BI07, B. breve BBSF and B. longum BL04)] and persisted for more than 400 bacterial generations in the absence of selective pressure. The exclusion of any reversion phenomenon suggested two hypotheses: (i) stable and immobile genetic elements encode resistance; (ii) the drug moiety does not act as an inducer of the resistance phenotype, but enables the selection of resistant mutants. Since point mutations in rpoB have been indicated as the principal factor determining rifampicin resistance in E. coli and M.
tuberculosis, it was verified whether a similar mechanism also occurs in Bifidobacterium. The analysis of a 129 bp rpoB core region of several wild-type and resistant bifidobacteria revealed five different types of missense mutations in codons 513, 516, 522 and 529. Position 529 was a novel mutation site, not previously described, and position 522 appeared interesting both for the double point substitutions and for the heterogeneous profile of nucleotide changes. The sequence heterogeneity of codon 522 in Bifidobacterium leads to the hypothesis of an indirect role of its encoded amino acid in binding the rifaximin moiety. These results demonstrated the chromosomal nature of rifaximin resistance in Bifidobacterium, minimizing the risk factors for horizontal transmission of resistance elements between intestinal microbial species. Further proteomic and physiological investigations were carried out using B. lactis BI07, a component of a pharmaceutical probiotic preparation, as a model strain. The choice of this strain was based on the following considerations: (i) B. lactis BI07 is able to survive and persist in the gut; (ii) a proteomic overview of this strain has recently been reported. The involvement of metabolic changes associated with rifaximin resistance was investigated by proteomic analysis performed with two-dimensional electrophoresis and mass spectrometry. Comparative proteomic mapping of BI07-wt and BI07-res revealed that most differences in protein expression patterns were genetically encoded rather than induced by antibiotic exposure. In particular, the rifaximin resistance phenotype was characterized by increased expression levels of stress proteins. Overexpression of stress proteins was expected, as they represent a common non-specific response of bacteria to different shock conditions, including exposure to toxic agents such as heavy metals, oxidants, acids, bile salts and antibiotics. Positive transcription regulators were also found to be overexpressed in BI07-res, suggesting that the bacteria could activate compensatory mechanisms to assist the transcription process in the presence of RNA polymerase inhibitors. Other differences in the expression profiles were related to proteins involved in central metabolism; these modifications suggest metabolic disadvantages of the resistant mutants, in comparison with sensitive bifidobacteria, in the gut environment in the absence of selective pressure, explaining their disappearance from the faeces of patients with UC after interruption of the antibiotic treatment. The differences observed between the BI07-wt and BI07-res proteomic patterns, as well as the high frequency of silent mutations reported for resistant mutants of Bifidobacterium, could be the consequence of an increased mutation rate, a mechanism that may lead to the persistence of resistant bacteria in the population. However, the in vivo disappearance of resistant mutants in the absence of selective pressure allows the emergence of compensatory mutations without loss of resistance to be excluded. Furthermore, the proteomic characterization of the resistant phenotype suggests that rifaximin resistance is associated with a reduced bacterial fitness in B. lactis BI07-res, supporting the hypothesis of a biological cost of antibiotic resistance in Bifidobacterium. The hypothesis of rifaximin inactivation by bacterial enzymatic activities was tested using liquid chromatography coupled with tandem mass spectrometry. Neither chemical modifications nor degradation derivatives of the rifaximin moiety were detected.
The exclusion of drug biodegradation was further supported by the quantitative recovery, in the BI07-res culture fractions, of the total amount of rifaximin (100 μg/ml) added to the culture medium. To confirm the main role of the mutation in the β chain of RNA polymerase in the acquisition of rifaximin resistance, the transcription activity of crude enzymatic extracts of BI07-res cells was evaluated. Although the inhibitory effect of rifaximin on in vitro transcription was definitely higher for BI07-wt than for BI07-res, a partial resistance of the mutated RNA polymerase at rifaximin concentrations > 10 μg/ml was inferred, on the basis of the calculated differences in inhibition percentages between BI07-wt and BI07-res. Considering that whole BI07-res cells resist rifaximin concentrations > 100 μg/ml, supplementary resistance mechanisms may operate in vivo. A barrier to rifaximin uptake in BI07-res cells was suggested in this study, on the basis of the larger portion of the antibiotic found bound to the cellular pellet compared with the portion recovered in the cellular lysate. In relation to this finding, a resistance mechanism involving changes in membrane permeability was hypothesized. A previous study supports this hypothesis, demonstrating the involvement of surface properties and permeability in the natural resistance to rifampicin of mycobacteria isolated from cases of human infection, which possessed a rifampicin-susceptible RNA polymerase. To understand the mechanism of the membrane barrier, variations in the percentages of saturated and unsaturated FAs and their methylation products in BI07-wt and BI07-res membranes were investigated. While saturated FAs confer rigidity on the membrane and resistance to stress agents such as antibiotics, a high level of lipid unsaturation is associated with high fluidity and susceptibility to stresses. Thus, the higher percentage of saturated FAs during the stationary phase of BI07-res could represent a defence mechanism of the mutant cells to prevent antibiotic uptake. Furthermore, the increase of CFAs such as dihydrosterculic acid during the stationary phase of BI07-res suggests that this CFA could be more suitable than its isomer lactobacillic acid for interacting with and preventing the penetration of exogenous molecules, including rifaximin. Finally, the impact of rifaximin on the immune regulatory functions of the gut was evaluated. A potential anti-inflammatory effect of rifaximin has been suggested, with reduced secretion of IFN-γ in a rodent model of colitis. Analogously, a significant decrease in IL-8, MCP-1, MCP-3 and IL-10 levels has been reported in patients affected by pouchitis and treated with a combined therapy of rifaximin and ciprofloxacin. Since rifaximin enables the in vivo and in vitro selection of resistant Bifidobacterium mutants at high frequency, the immunomodulatory activity of rifaximin in association with a resistant B. lactis mutant was also taken into account. Data obtained from PBMC stimulation experiments suggest the following conclusions: (i) rifaximin does not exert any effect on the production of IL-1β, IL-6 and IL-10, whereas it weakly stimulates the production of TNF-α; (ii) B. lactis appears to be a good inducer of IL-1β, IL-6 and TNF-α; (iii) the combination of BI07-res and rifaximin exhibits a lower stimulation effect than BI07-res alone, especially for IL-6.
These results confirm the potential anti-inflammatory effect of rifaximin and are in agreement with several studies reporting a transient pro-inflammatory response associated with probiotic administration. Understanding the molecular factors determining rifaximin resistance in the genus Bifidobacterium has practical significance at the pharmaceutical and medical level, as it provides the scientific basis justifying the simultaneous use of the antibiotic rifaximin and probiotic bifidobacteria in the clinical treatment of intestinal disorders.
Abstract:
In this work we propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from the collected disease counts and from expected disease counts calculated by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method that focuses on multiple testing control, without, however, abandoning the preliminary-study perspective that an analysis of SMR indicators is required to keep. We implement control of the False Discovery Rate (FDR), a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-area issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value have weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed to be independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a fully Bayesian hierarchical model for FDR estimation when testing many null hypotheses of absence of risk. We use concepts from Bayesian disease mapping models, referring in particular to the Besag, York and Mollié (1991) model, often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood, typical of a hierarchical Bayesian model, has the advantage of evaluating a single test (i.e. a test in a single area) by means of all the observations in the map under study, rather than just the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model estimates the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) in each area. An estimate of the expected FDR conditional on the data (the estimated FDR) can be calculated for any set of areas declared at high risk (where the null hypothesis is rejected) by averaging the corresponding b_i's. The estimated FDR can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the estimated FDR is not higher than a prefixed value; we call these estimated-FDR-based decision (or selection) rules.
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié (1991) model. A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the size of the areas, the number of areas where the null hypothesis is true and the risk level in the remaining areas. In summarizing the simulation results we always consider FDR estimation in the sets constituted by all areas whose posterior probabilities b_i fall below a threshold t. We show graphs of the estimated FDR and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between the estimated and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the estimated FDR we can check the sensitivity and specificity of the corresponding estimated-FDR-based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained by both our model and the classical Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we obtain an over-estimation, hence a conservative FDR control) in scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the estimated-FDR-based decision rules is generally low, but the specificity is high; in these scenarios a selection rule based on an estimated FDR of 0.05 or 0.10 can be suggested. In cases where the number of true alternative hypotheses (the number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and a decision rule based on an estimated FDR of 0.15 gains power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of a decision rule based on an estimated FDR of 0.05. In such scenarios, rules based on an estimated FDR of 0.05 or, even worse, 0.10 cannot be suggested, because the true FDR is actually much higher. As regards relative risk estimation, our model achieves almost the same results as the classical Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
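To make the decision rule concrete, the following is a minimal sketch of the estimated-FDR selection step described above: given posterior probabilities b_i that each area is not at risk, the estimated FDR of a selected set is the average of its b_i's, and areas are selected in order of increasing b_i while that average stays below a target. The function name and the toy values are illustrative assumptions.

```python
# Sketch: estimated-FDR-based selection of high-risk areas.
import numpy as np

def fdr_selection(b, target=0.05):
    """Return the indices of areas declared at high risk and the estimated FDR."""
    order = np.argsort(b)                                # most convincing areas first
    running_fdr = np.cumsum(b[order]) / np.arange(1, len(b) + 1)
    n_selected = int(np.sum(running_fdr <= target))      # largest admissible prefix
    selected = order[:n_selected]
    est_fdr = running_fdr[n_selected - 1] if n_selected else 0.0
    return selected, est_fdr

# toy posterior null probabilities for 8 areas
b = np.array([0.01, 0.03, 0.04, 0.20, 0.55, 0.70, 0.85, 0.95])
areas, est_fdr = fdr_selection(b, target=0.05)
print(areas, round(float(est_fdr), 3))   # e.g. areas [0 1 2] with estimated FDR ~ 0.027
```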
Abstract:
This work deals with the seismic upgrading of existing masonry structures by means of hysteretic ADAS dampers. The ADAS devices are installed on external concrete walls, built parallel to the building, and then linked to the building's slabs by means of a steel rod connection system. In order to assess the effectiveness of the intervention, a parametric study considering the variation of the main damper features has been conducted. To this aim, the concepts of the equivalent linear system (ELS) and of equivalent viscous damping are examined in depth. Results of the simplified equivalent linear model are then checked against the results for the yielding structures. Two alternative displacement-based methods for damper design are proposed. Both methods have been validated through non-linear time-history analyses with spectrum-compatible accelerograms. Finally, an ADAS arrangement for the non-conventional implementation is proposed.
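As a hedged sketch of the equivalent-viscous-damping idea invoked above (replacing a yielding hysteretic device with an equivalent linear system), the snippet below takes the damping ratio as the energy dissipated in one cycle divided by 4π times the strain energy at peak response, Jacobsen's classical approach. The elastic-perfectly-plastic loop is an illustrative assumption, not the specific ADAS model adopted in the thesis.

```python
# Sketch: equivalent viscous damping of an elastic-perfectly-plastic cycle.
import math

def equivalent_viscous_damping(f_y, u_y, u_max):
    """Equivalent damping ratio at cyclic amplitude u_max (yield force f_y, yield disp. u_y)."""
    if u_max <= u_y:
        return 0.0                                # still elastic: no hysteretic dissipation
    e_d = 4.0 * f_y * (u_max - u_y)               # area of the hysteresis loop
    k_eff = f_y / u_max                           # secant (effective) stiffness
    e_s0 = 0.5 * k_eff * u_max ** 2               # strain energy at peak displacement
    return e_d / (4.0 * math.pi * e_s0)

# ductility of 4 (u_max = 4 * u_y): xi_eq = (2/pi) * (1 - 1/4) ~ 0.48
print(round(equivalent_viscous_damping(f_y=100.0, u_y=0.005, u_max=0.02), 3))
```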
Abstract:
The aim of this work was the development of computer-based methods for producing a hazard indication map (Gefahrenhinweiskarte) for the Rheinhessen region, in order to minimize the landslide hazard. Using two statistical procedures (discriminant analysis, logistic regression) and one method from the field of artificial intelligence (fuzzy logic), an attempt was made to classify the potential hazard of slopes, including those that have not yet been affected by mass movements. Since engineering-geological and geotechnical slope investigations are not feasible at the regional scale for reasons of time and cost, the work relied on point data on individual landslides of the winter of 1981/82, compiled in a landslide database; the knowledge gained from them about process mechanisms and triggering factors was used and integrated into the respective model. Spatial data (lithology, slope angle, land use, etc.) required for the calculation of slope stability were obtained by remote sensing methods, by digitizing maps and by the analysis of digital terrain models (relief analysis). For a more detailed investigation of individual areas classified as landslide-prone in the hazard indication map, a method based on the infinite-slope-stability model was examined for a test area; at the scale of base maps (1:5000) it also takes geotechnical and hydrogeological parameters into account and thus allows a more precise hazard assessment adapted to the respective climatic situation.
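The following is a hedged sketch of the logistic-regression part of such a susceptibility analysis: each slope unit or cell is described by predictors such as slope angle, lithology class and land use, and the fitted model returns a probability of belonging to the landslide class. The predictor choice, column layout and data are illustrative assumptions, not the thesis data set.

```python
# Sketch: logistic regression for landslide susceptibility on toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# toy predictors: [slope angle (deg), clay-rich lithology (0/1), vineyard land use (0/1)]
X = np.column_stack([rng.uniform(2, 35, 200),
                     rng.integers(0, 2, 200),
                     rng.integers(0, 2, 200)])
# toy labels: steeper, clay-rich slopes fail more often
p = 1 / (1 + np.exp(-(0.15 * X[:, 0] + 1.5 * X[:, 1] - 4)))
y = rng.random(200) < p

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# susceptibility (probability of the landslide class) for two new cells
new_cells = np.array([[30.0, 1, 0], [8.0, 0, 1]])
print(model.predict_proba(new_cells)[:, 1])
```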
Abstract:
In the last decade, the reverse vaccinology approach shifted the paradigm of vaccine discovery from conventional culture-based methods to high-throughput genome-based approaches for the development of recombinant protein-based vaccines against pathogenic bacteria. Besides reaching its main goal of identifying new vaccine candidates, this new procedure also produced a huge amount of molecular knowledge related to them. In the present work, we explored this knowledge in a species-independent way and performed a systematic in silico molecular analysis of more than 100 protective antigens, looking at their sequence similarity, domain composition and protein architecture in order to identify possible common molecular features. This meta-analysis revealed that, despite a low sequence similarity, most of the known bacterial protective antigens share structural/functional Pfam domains as well as specific protein architectures. Based on this, we formulated the hypothesis that the occurrence of these molecular signatures can be predictive of possible protective properties of other proteins in different bacterial species. We tested this hypothesis in Streptococcus agalactiae and identified four new protective antigens. Moreover, in order to provide a second proof of concept for our approach, we used Staphylococcus aureus as a second pathogen and identified five new protective antigens. This new knowledge-driven selection process, named MetaVaccinology, represents the first in silico vaccine discovery tool based on conserved and predictive molecular and structural features of bacterial protective antigens and not dependent upon the prediction of their sub-cellular localization.
Abstract:
Several approaches to cluster analysis of categorical data exist in the literature, and the choice among them is strongly related to the aim of the researcher, leaving aside time and economic constraints. The main approaches to clustering are usually divided into model-based and distance-based methods: the former assume that objects belonging to the same class are similar in the sense that their observed values come from the same probability distribution, whose parameters are unknown and need to be estimated; the latter evaluate distances among objects by a defined dissimilarity measure and, based on it, allocate units to the closest group. In clustering, one may be interested in the classification of similar objects into groups, or in finding observations that come from the same true homogeneous distribution. But do both of these aims lead to the same clustering? And how good are clustering methods designed to fulfil one of these aims in terms of the other? To answer these questions, two approaches, namely a latent class model (a mixture of multinomial distributions) and partitioning around medoids, are evaluated and compared using the Adjusted Rand Index, Average Silhouette Width and Pearson-Gamma indexes in a fairly wide simulation study. Simulation outcomes are plotted in two-dimensional graphs via Multidimensional Scaling; point size is proportional to the number of overlapping points, and different colours are used according to cluster membership.
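For orientation, the sketch below computes two of the comparison criteria mentioned above: the Adjusted Rand Index between two label vectors for the same categorical data set (e.g. one from a latent class model, one from partitioning around medoids), and the Average Silhouette Width of a partition on a simple-matching dissimilarity matrix. The data, labels and choice of dissimilarity are toy assumptions, not the simulation design of the thesis.

```python
# Sketch: agreement and internal validation indexes for two clusterings.
import numpy as np
from sklearn.metrics import adjusted_rand_score, silhouette_score

# toy categorical data: 6 objects, 3 categorical variables (coded as integers)
X = np.array([[0, 1, 2],
              [0, 1, 2],
              [0, 0, 2],
              [1, 2, 0],
              [1, 2, 1],
              [1, 2, 1]])

# simple matching dissimilarity: share of variables on which two objects differ
D = (X[:, None, :] != X[None, :, :]).mean(axis=2)

labels_lcm = np.array([0, 0, 0, 1, 1, 1])   # e.g. latent-class assignment
labels_pam = np.array([0, 0, 1, 1, 1, 1])   # e.g. medoid-based assignment

print("ARI:", adjusted_rand_score(labels_lcm, labels_pam))
print("ASW (PAM):", silhouette_score(D, labels_pam, metric="precomputed"))
```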
Abstract:
The verification of numerical models is indispensable for improving quantitative precipitation forecasting (QPF). The aim of the present work is the development of new methods for verifying the precipitation forecasts of the regional model of MeteoSwiss (COSMO-aLMo) and of the global model of the European Centre for Medium-Range Weather Forecasts (ECMWF). For this purpose, a novel observational data set for Germany with hourly resolution was created and applied. For the evaluation of the model forecasts, the new quality measure "SAL" was developed. The novel observational data set for Germany, with high temporal and spatial resolution, is produced with the disaggregation method developed during MAP (Mesoscale Alpine Programme). The idea is to combine the high temporal resolution of the radar data (hourly) with the accuracy of the precipitation amounts from station measurements (within the measurement errors). This disaggregated data set offers new possibilities for the quantitative verification of precipitation forecasts. For the first time, an area-wide analysis of the diurnal cycle of precipitation was carried out. It showed that in winter no diurnal cycle exists and that this is well reproduced by COSMO-aLMo. In summer, in contrast, both the disaggregated data set and COSMO-aLMo show a clear diurnal cycle, but the precipitation maximum in COSMO-aLMo occurs too early, between 11-14 UTC compared with 15-20 UTC in the observations, and is clearly overestimated, by a factor of about 1.5. A new quality measure was developed because conventional grid-point-based error measures no longer do justice to model development. SAL consists of three independent components and is based on the identification of precipitation objects (threshold-dependent) within a region (e.g. a river catchment). Differences between the modelled and observed precipitation fields are computed with respect to structure (S), amplitude (A) and location (L) within the region. SAL was tested extensively on idealized and real examples. SAL detects and confirms known model deficiencies such as the diurnal-cycle problem or the simulation of too many relatively weak precipitation events. It provides additional insight into the characteristics of the errors, e.g. whether they are mainly errors in amplitude, in the displacement of a precipitation field or in its structure (e.g. stratiform or small-scale convective). Daily and hourly precipitation sums of COSMO-aLMo and of the ECMWF model were verified with SAL. In a statistical sense, SAL shows a good quality of the COSMO-aLMo forecasts especially for stronger (and thus societally relevant) precipitation events, compared with weak precipitation. The comparison of the two models showed that the global model predicts more widespread precipitation and thus larger objects, whereas COSMO-aLMo shows clearly more realistic precipitation structures. Given the resolutions of the models this is not surprising, but it could not be shown with conventional error measures. The methods developed in this work are very useful for the verification of QPF from models with high temporal and spatial resolution. The use of the disaggregated observational data set and of SAL as a quality measure provides new insight into QPF and allows more appropriate statements about the quality of precipitation forecasts.
Future applications of SAL include the verification of the new generation of numerical weather prediction models, which explicitly simulate the life cycle of deep convective cells.
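As a hedged illustration, the sketch below computes two pieces of SAL as commonly defined in the literature: the amplitude component A, which compares domain-averaged precipitation, and the first part of the location component L, which compares the centres of mass of the two fields normalized by the largest distance across the domain. The structure component S (based on individual precipitation objects) is omitted, and the fields and grid are toy assumptions.

```python
# Sketch: amplitude and (partial) location components of a SAL-type comparison.
import numpy as np
from scipy.ndimage import center_of_mass

def sal_amplitude(model, obs):
    """A = normalized difference of domain-averaged precipitation, in [-2, 2]."""
    dm, do = model.mean(), obs.mean()
    return (dm - do) / (0.5 * (dm + do))

def sal_location_part1(model, obs):
    """First part of L = distance between centres of mass, scaled by the domain diagonal."""
    cm_m = np.array(center_of_mass(model))
    cm_o = np.array(center_of_mass(obs))
    diag = np.hypot(*model.shape)                 # largest distance in grid units
    return float(np.linalg.norm(cm_m - cm_o) / diag)

# toy 50x50 fields: the "model" shifts the observed rain area and doubles its amount
obs = np.zeros((50, 50)); obs[10:20, 10:20] = 2.0
model = np.zeros((50, 50)); model[25:35, 30:40] = 4.0
print("A :", round(sal_amplitude(model, obs), 3))       # > 0: overestimation
print("L1:", round(sal_location_part1(model, obs), 3))  # displaced rain area
```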
Abstract:
This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming, used in particular in artificial intelligence for the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly accessible to traditional grid-based approximation schemes. The goal of this thesis is to make reinforcement learning applicable to, in principle arbitrarily, high-dimensional problems by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data, so that the explicit choice of nodes/basis functions is no longer needed and the "curse of dimensionality" can be avoided for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for example, feed-forward neural networks). All these theoretical advantages face, however, a very practical problem: the computational cost of regularization networks inherently scales as O(n^3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process is online: the samples are generated by an agent/robot while it interacts with the environment. Adjustments to the solution must therefore be made immediately and with little computational effort. The contribution of this thesis is therefore divided into two parts. In the first part we formulate, for regularization networks, an efficient learning algorithm for solving general regression tasks that is specifically tailored to the requirements of online learning. Our approach is based on the recursive least-squares procedure, but can insert not only new data but also new basis functions into the existing model at constant cost. This is made possible by the "subset of regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into an overall system for the autonomous learning of optimal behaviour. Overall, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions.
In doing so, we do not rely on a model of the environment, work largely independently of the dimension of the state space, achieve convergence with comparatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the performance of our approach on two realistic and complex application examples: the RoboCup Keepaway problem and the control of a (simulated) octopus tentacle.
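The following is a hedged sketch of the recursive-least-squares style of online update that underlies the approach described above: the weight vector and the inverse Gram matrix are updated in O(m^2) per sample (m = number of basis functions), so new data can be incorporated immediately. The feature map, regularization constant and toy target are illustrative assumptions; growing the basis via the subset-of-regressors approximation is not shown.

```python
# Sketch: online recursive least squares with a rank-one update of the inverse Gram matrix.
import numpy as np

class OnlineRLS:
    def __init__(self, n_features, reg=1.0):
        self.w = np.zeros(n_features)             # current weight estimate
        self.P = np.eye(n_features) / reg         # inverse of the regularized Gram matrix

    def update(self, phi, target):
        """Incorporate one sample (features phi, regression target)."""
        Pphi = self.P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)           # Kalman-style gain vector
        self.w += gain * (target - phi @ self.w)   # correct the prediction error
        self.P -= np.outer(gain, Pphi)             # rank-one downdate of P
        return self.w

# toy usage: learn y = 2*x1 - x2 from a stream of noisy samples
rng = np.random.default_rng(1)
model = OnlineRLS(n_features=2, reg=0.1)
for _ in range(500):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - phi[1] + 0.01 * rng.normal()
    model.update(phi, y)
print(np.round(model.w, 2))   # approximately [ 2., -1.]
```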
Comparative computer-based functional-morphological analysis of molars of cercopithecoid primates
Abstract:
The analysis of functional relationships between diet and tooth morphology is an important aspect of primatological and palaeontological research. As the most durable part of the digestive system, teeth provide the best possible evidence of the dietary strategies of (extinct) species and a wealth of further information. It is therefore of the utmost importance for scientific work to capture teeth in their entire structure as precisely and in as much detail as possible. So far, mostly two-dimensional parameters have been used to comparatively investigate the complex crown morphology of primate molars. The aim of the present work was to capture the teeth of various Old World monkey species three-dimensionally using computer-based methods and to define new parameters with which the shape of these teeth can be objectively quantified and functionally interpreted. Using a surface scanner, the dentitions of a sample of 48 primates of five different species were scanned and processed with image-processing methods so that three-dimensional digital models of individual molars were available for analysis. Species were selected that have a diet typical of their genus (frugivory in the cercopithecines and folivory in the colobines) as well as species that prefer a different diet. All Old World monkeys have very similar molars. Colobines, however, have higher and sharper cusps, thinner enamel, and appear to wear their teeth down less than the cercopithecines. These observations could be quantified with the help of the new parameters. From the 3D surface area and the base area of the teeth, an index was calculated that indicates the strength of the surface relief. This index has clearly higher values in colobines than in cercopithecines, even for heavily worn teeth. The steepness of the cusps and their orientation were also measured, and these angle measurements confirmed the picture: the higher the proportion of leaves in the diet, the higher the index values and the steeper the cusps. It was particularly important to confirm this also for worn teeth, which have so far not been included in functional analyses. The orientation of the cusp slopes provides evidence about the chewing movements needed to break down food efficiently. The orientation of the colobine cusps indicates that these primates perform flat, gliding chewing movements in which the high cusps shear past one another. This is useful for cutting up fibre-rich food such as leaves. Cercopithecines, in contrast, appear to use their molars more like mortar and pestle, to crush and grind fruits and seeds. The species differ gradually depending on what is chewed in addition to the main diet. Contrary to previous assumptions, it could be shown that colobines, despite their thin enamel, wear their teeth down less and expose less dentine. This gives clear indications of differences in the mechanical load acting on the teeth during chewing and can be related well to the diets of the species. Based on these model observations, extinct species can in future be examined with 3D techniques with regard to their diet.
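The following is a hedged sketch of a relief index of the kind described above: the 3D crown surface area of a tooth model divided by the area of its 2D projection onto the occlusal plane (here the x-y plane). The mesh layout (vertex coordinates plus triangle indices) and the use of the convex hull of the projected vertices as the base area are illustrative assumptions, not the thesis procedure.

```python
# Sketch: relief index (3D surface area / projected base area) of a triangle mesh.
import numpy as np
from scipy.spatial import ConvexHull

def surface_area(vertices, faces):
    """Sum of triangle areas of a triangulated surface."""
    a = vertices[faces[:, 1]] - vertices[faces[:, 0]]
    b = vertices[faces[:, 2]] - vertices[faces[:, 0]]
    return 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()

def relief_index(vertices, faces):
    """3D surface area over projected (occlusal) base area."""
    base_area = ConvexHull(vertices[:, :2]).volume   # for 2D hulls, .volume is the area
    return surface_area(vertices, faces) / base_area

# toy "tooth": a pyramid with a 1x1 square base and one cusp of height 1
vertices = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0], [0.5, 0.5, 1.0]])
faces = np.array([[0, 1, 4], [1, 2, 4], [2, 3, 4], [3, 0, 4]])   # side facets only
print(round(relief_index(vertices, faces), 3))
```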
Abstract:
We have developed a method for locating sources of volcanic tremor and applied it to a dataset recorded on Stromboli volcano before and after the onset of the February 27th, 2007 effusive eruption. Volcanic tremor has attracted considerable attention from seismologists because of its potential value as a tool for forecasting eruptions and for better understanding the physical processes that occur inside active volcanoes. Commonly used methods to locate volcanic tremor sources are: 1) array techniques, 2) semblance-based methods, 3) modelling of wave-field amplitudes. We have chosen the third approach, using quantitative modelling of the seismic wavefield. For this purpose, we have calculated the Green's functions (GF) in the frequency domain with the Finite Element Method (FEM). We have used this method because it is well suited to solving elliptic problems, such as elastodynamics in the Fourier domain. The volcanic tremor source is located by determining the source function over a regular grid of points, and the best-fit point is chosen as the tremor source location. The source inversion is performed in the frequency domain, using only the wavefield amplitudes. We illustrate the method and its validation on a synthetic dataset. We show some preliminary results on the Stromboli dataset, which evidence temporal variations of the volcanic tremor sources.
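For intuition, the sketch below implements a much-simplified, single-frequency version of such an amplitude-based grid search: given precomputed Green's function amplitudes from every candidate grid node to every station, each node is scored by how well a single source strength explains the observed station amplitudes in the least-squares sense, and the best-fitting node is taken as the source location. Array shapes and the synthetic numbers are illustrative assumptions, not the FEM-based scheme of the study.

```python
# Sketch: grid search over candidate source nodes using amplitude misfit only.
import numpy as np

def locate_tremor(obs_amp, gf_amp):
    """obs_amp: (n_stations,) observed amplitudes at one frequency.
    gf_amp: (n_nodes, n_stations) Green's function amplitudes per candidate node.
    Returns (best node index, residual per node)."""
    # optimal source strength per node in the least-squares sense
    s = (gf_amp @ obs_amp) / np.sum(gf_amp ** 2, axis=1)
    residuals = np.linalg.norm(gf_amp * s[:, None] - obs_amp, axis=1)
    return int(np.argmin(residuals)), residuals

# synthetic test: 4 stations, 3 candidate nodes, true source at node 1
gf_amp = np.array([[1.0, 0.8, 0.5, 0.3],
                   [0.6, 1.0, 0.9, 0.4],
                   [0.2, 0.5, 1.0, 0.8]])
obs_amp = 2.0 * gf_amp[1] + 0.01 * np.random.default_rng(0).normal(size=4)
best, res = locate_tremor(obs_amp, gf_amp)
print(best)   # expected: 1
```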
Abstract:
In this thesis we have developed solutions to common issues regarding widefield microscopes, facing the problem of the intensity inhomogeneity of an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or images of deep 3D objects. First, we cope with the problem of the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular, we proposed, for both light and fluorescence microscopy, non-parametric multi-image-based methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field-corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. For this purpose, we developed mosaicing techniques capable of working on-line. Starting from a set of manually acquired overlapping images, we validated a fast registration approach to accurately stitch the images together. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single, completely in-focus image of objects that have a relevant depth or are displaced in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. In order to compare the outcome of existing methods, different standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
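The snippet below is a hedged sketch of a multi-image, non-parametric flat-field correction in the spirit described above: the vignetting function is estimated directly from a stack of frames (here simply as a smoothed per-pixel median, assuming the image content averages out across positions) and each frame is divided by it. The median estimator, the smoothing width and the toy data are illustrative assumptions, not the thesis algorithm.

```python
# Sketch: estimating a vignetting field from an image stack and flat-field correcting a frame.
import numpy as np
from scipy.ndimage import gaussian_filter

def estimate_vignetting(stack, sigma=10):
    """stack: (n_images, H, W) array of raw frames taken at different positions."""
    flat = np.median(stack, axis=0)              # image content assumed to average out
    flat = gaussian_filter(flat, sigma=sigma)    # keep only the smooth illumination field
    return flat / flat.mean()                    # normalize to mean 1

def flat_field_correct(image, vignetting, eps=1e-6):
    """Divide out the estimated vignetting function."""
    return image / np.maximum(vignetting, eps)

# toy stack: constant scenes modulated by a radially decaying illumination field
H = W = 128
y, x = np.mgrid[0:H, 0:W]
true_vignetting = 1.0 - 0.3 * (((y - H / 2) ** 2 + (x - W / 2) ** 2) / (H / 2) ** 2)
stack = np.stack([100.0 * true_vignetting + np.random.default_rng(i).normal(0, 1, (H, W))
                  for i in range(20)])
v = estimate_vignetting(stack)
corrected = flat_field_correct(stack[0], v)
print(round(float(corrected.std() / corrected.mean()), 3))   # much flatter than the raw frame
```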
Abstract:
The growing traffic volumes on road pavements cause stress states of considerable magnitude that produce permanent damage to the superstructure. Such damage reduces its service life and entails high maintenance costs. Asphalt concrete is a multiphase material composed of aggregates, bitumen and air voids. The physical properties and the performance of the mixture depend on the characteristics of the aggregate and the binder and on their interaction. The approach traditionally used for the numerical modelling of asphalt concrete is based on a macroscopic study of its mechanical response through continuum constitutive models which, by their nature, do not consider the mutual interaction between the heterogeneous phases that compose it and use equivalent homogeneous schematizations. For these methodologies to evolve, it is necessary to go beyond this simplification, considering the discrete nature of the system and adopting a microscopic approach that makes it possible to represent the real physical-mechanical processes on which the overall macroscopic response depends. In the present work, after a general review of the main numerical methods traditionally employed for the study of asphalt concrete, the theory of the Particle Discrete Element Method (DEM-P) is examined in depth; it schematizes the granular material as a set of independent particles that interact with one another at their mutual contact points according to appropriate constitutive laws. The influence of aggregate shape and size on the macroscopic (maximum deviatoric stress) and microscopic (normal and tangential contact forces, number of contacts, void ratio, porosity, packing, internal friction angle) characteristics of the mixture is evaluated. This is made possible by the comparison between numerical and experimental results of triaxial tests carried out on specimens made of three different mixtures composed of spheres and elements of generic shape.
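As a hedged sketch of the kind of contact constitutive law typically used in particle DEM codes (not the calibrated model of this work), the snippet below combines a linear normal spring, proportional to the particle overlap, with an incrementally updated tangential spring capped by a Coulomb friction limit. Stiffness and friction values are illustrative assumptions.

```python
# Sketch: linear-spring contact with Coulomb friction cap, evaluated at one contact.
import numpy as np

def contact_forces(overlap, rel_tang_disp, prev_ft, kn=1e7, kt=5e6, mu=0.5):
    """Return (normal force, tangential force) at one particle-particle contact.
    overlap: normal interpenetration (m); rel_tang_disp: incremental tangential
    slip (m) since the last step; prev_ft: tangential force from the last step."""
    fn = kn * overlap                              # linear normal spring
    ft_trial = prev_ft - kt * rel_tang_disp        # incremental tangential spring
    ft_max = mu * fn                               # Coulomb friction limit
    ft = np.clip(ft_trial, -ft_max, ft_max)        # slide if the limit is exceeded
    return fn, ft

# one contact over two time steps: sticking first, then sliding
fn, ft = contact_forces(overlap=1e-5, rel_tang_disp=2e-6, prev_ft=0.0)
print(fn, ft)                                       # 100 N normal, -10 N tangential
fn, ft = contact_forces(overlap=1e-5, rel_tang_disp=1e-5, prev_ft=ft)
print(fn, ft)                                       # tangential force capped at -50 N
```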