923 results for model selection in binary regression
Abstract:
We study some perturbative and non-perturbative effects in the framework of the Standard Model of particle physics. In particular, we consider the time dependence of the Higgs vacuum expectation value given by the dynamics of the Standard Model and study the non-adiabatic production of both bosons and fermions, which is intrinsically non-perturbative. In the Hartree approximation, we analyze the general expressions that describe the dissipative dynamics due to the backreaction of the produced particles. Then, we solve numerically some relevant cases for Standard Model phenomenology in the regime of relatively small oscillations of the Higgs vacuum expectation value (vev). As perturbative effects, we consider the leading logarithmic resummation in small Bjorken-x QCD, concentrating on the Nc dependence of the Green functions associated with reggeized gluons. Here the eigenvalues of the BKP kernel for states of more than three reggeized gluons are unknown in general, contrary to the large-Nc (planar) limit, where the problem becomes integrable. In this context we consider a 4-gluon kernel for a finite number of colors and define some simple toy models for the configuration-space dynamics, which are directly solvable with group-theoretical methods. In particular, we study the dependence of the spectrum of these models on the number of colors and make comparisons with the planar limit case. In the final part we move on to the study of theories beyond the Standard Model, considering models built on AdS5 x S5/Γ orbifold compactifications of the type IIB superstring, where Γ is the abelian group Zn. We present an appealing three-family N = 0 SUSY model with n = 7 for the order of the orbifolding group. This results in a modified Pati–Salam model which reduces to the Standard Model after symmetry breaking and has interesting phenomenological consequences for the LHC.
Abstract:
Ensemble forecasting is a methodology to deal with uncertainties in numerical wind prediction. In this work we propose to apply ensemble methods to the adaptive wind forecasting model presented in. The wind field forecasting is based on a mass-consistent model and a log-linear wind profile, using as input data the forecast wind from Harmonie, a non-hydrostatic dynamic model used experimentally at AEMET with promising results. The mass-consistent model parameters are estimated using genetic algorithms. The mesh is generated using the meccano method and adapted to the geometry…
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies, which avoid the bias carried by aggregated analyses. Starting from collecting disease counts and calculating expected disease counts by means of reference population disease rates, in each area an SMR is derived as the MLE under the Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or because of the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature according to both the classic and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple testing control without, however, abandoning the preliminary-study perspective that an analysis of SMR indicators is required to retain. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, the tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous.
The Bayesian paradigm offers a way to overcome the inappropriateness of p-value based methods. Another peculiarity of the present work is to propose a hierarchical full Bayesian model for FDR estimation in testing many null hypotheses of absence of risk. We will use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model has the advantage of evaluating a single test (i.e. a test in a single area) by means of all observations in the map under study, rather than just by means of the single observation. This improves the power of the test in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities b_i of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data (denoted FDR-hat) can be calculated on any set of b_i's relative to areas declared at high risk (where the null hypothesis is rejected) by averaging the b_i's themselves. The FDR-hat can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the FDR-hat does not exceed a prefixed value; we call these FDR-hat based decision (or selection) rules. The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power and under-estimation of the FDR produces a loss of specificity. Moreover, our model has the interesting feature of still being able to provide an estimate of the relative risk values, as in the Besag, York and Mollié model (1991).
A simulation study was set up to evaluate the model performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of estimation of the relative risks. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true and the risk level in the latter areas. In summarizing the simulation results we always consider the FDR estimation in sets constituted by all areas whose b_i is lower than a threshold t. We show graphs of the FDR-hat and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. By varying the threshold we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (from the closeness between FDR-hat and true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the FDR-hat we can check the sensitivity and specificity of the corresponding FDR-hat based decision rules. To investigate the over-smoothing of the relative risk estimates we compare box-plots of such estimates in high-risk areas (known by simulation), obtained with both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the scenarios with small areas, low risk levels and spatially correlated risks, which are our primary aim. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10. The sensitivity of the FDR-hat based decision rules is generally low, but the specificity is high. In these scenarios the use of an FDR-hat = 0.05 or FDR-hat = 0.10 based selection rule can be recommended.
In cases where the number of true alternative hypotheses (the number of truly high-risk areas) is small, FDR values up to 0.15 are also well estimated, and FDR-hat = 0.15 based decision rules gain power while maintaining a high specificity. On the other hand, in scenarios with non-small areas and non-small risk levels the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity of an FDR-hat = 0.05 based decision rule. In such scenarios FDR-hat = 0.05 or, even worse, FDR-hat = 0.10 based decision rules cannot be recommended, because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason our model is interesting for its ability to perform both the estimation of the relative risk values and the FDR control, except in scenarios with non-small areas and large risk levels. A case study is finally presented to show how the method can be used in epidemiology.
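Computationally, the FDR-based selection rule described above reduces to a simple operation on the posterior null probabilities: sort them, take running means, and keep the largest prefix whose running mean stays at or below the target level. A minimal sketch (illustrative only; the function name and interface are assumptions, and in practice the b_i's would come from the MCMC output of the hierarchical model):

```python
import numpy as np

def fdr_selection(b, alpha=0.05):
    """Select high-risk areas by the posterior-probability FDR rule.

    b     : posterior probabilities of the null hypothesis (absence of risk),
            one per area, e.g. estimated from MCMC output.
    alpha : target FDR level.

    Returns the indices of the rejected areas (declared high-risk) and the
    estimated FDR of that set, i.e. the mean of the selected b_i's.
    """
    b = np.asarray(b, dtype=float)
    order = np.argsort(b)                      # most convincing rejections first
    # running mean of sorted b_i's = estimated FDR of each candidate prefix
    running_fdr = np.cumsum(b[order]) / np.arange(1, b.size + 1)
    ok = np.nonzero(running_fdr <= alpha)[0]
    if ok.size == 0:                           # no set meets the target level
        return np.array([], dtype=int), float("nan")
    k = ok[-1] + 1                             # largest admissible prefix
    return order[:k], running_fdr[k - 1]
```

Because the running mean of an ascending sequence is non-decreasing, taking the largest admissible prefix yields the rule "select as many areas as possible" directly.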
Abstract:
To investigate phase separation in binary polymer blends, two dynamic extensions of self-consistent field theory (SCFT) are developed. The first method uses a temporal evolution of the densities and is called dynamic self-consistent field theory (DSCFT), while the second method exploits the temporal propagation of the effective external fields of SCFT and is referred to as External Potential Dynamics (EPD). For DSCFT, kinetic coefficients are used that reproduce either the local dynamics of point particles or the non-local dynamics of Rouse polymers. The EPD method generates the dynamics of Rouse chains with a constant kinetic coefficient and requires less computation time than DSCFT. These methods are applied to various systems. First, spinodal decomposition in the bulk is investigated, focusing on the difference between local and non-local dynamics. To check the validity of the results, Monte Carlo simulations are performed. In polymer blends confined between two walls that both prefer the same polymer species, the formation of enrichment layers at the walls is studied. For thin polymer films between antisymmetric walls, i.e. where each wall prefers a different polymer species, the tension of an interface formed parallel to the walls is analyzed and the phase transition from an initially homogeneous mixture to the localized phase is considered. Furthermore, the dynamics of capillary wave modes is investigated.
Abstract:
In this work, the influence of spatial confinement on the dynamics of a supercooled liquid is characterized. In particular, the role played by the cooperativity of particle motion at low temperatures is to be clarified. To this end, we use molecular dynamics computer simulations to study the dynamical properties of a simple model glass former, a binary Lennard-Jones liquid, for systems with different geometries and wall types. By a suitable choice of the wall potentials it was possible to make the structure of the liquid nearly identical to that in the bulk. In films with smooth walls one observes that the dynamics of the liquid near the wall is strongly accelerated and that this modified dynamics propagates far into the film. The opposite effect is obtained with a structured, rough wall, near which the dynamics is strongly slowed down. The continuous slowing down or acceleration of the dynamics, from the behaviour at the surface to the bulk behaviour at a sufficiently large distance from the wall, can be described phenomenologically. From this one can read off characteristic dynamical length scales that grow continuously with decreasing temperature; i.e., the region in which the existence of the wall has an (indirect) influence on the dynamics of a liquid particle spreads further and further. One can therefore speak of regions of cooperative motion that grow with decreasing temperature. Our investigations of tubes show that, owing to the stronger influence of the walls, the observed effects are larger than in the film geometry. Upon reduction of the system size, ever larger deviations from the bulk behaviour appear.
Abstract:
Among the experimental methods commonly used to define the behaviour of a full-scale system, dynamic tests are the most complete and efficient procedures. A dynamic test is an experimental process that defines a set of characteristic parameters of the dynamic behaviour of the system, such as the natural frequencies of the structure, the mode shapes and the associated modal damping values. An assessment of these modal characteristics can be used both to verify the theoretical assumptions of the project and to monitor the performance of the structural system during its operational use. The thesis is structured in the following chapters. The first, introductory chapter recalls some basic notions of structural dynamics, focusing the discussion on systems with multiple degrees of freedom (MDOF), which can represent a generic real system under study when it is excited by a harmonic force or in free vibration. The second chapter is entirely centred on the dynamic identification of a structure subjected to an experimental test in forced vibration. It first describes the construction of the FRF through the classical FFT of the recorded signal. A different method, also in the frequency domain, is subsequently introduced; it allows the FRF to be computed accurately using the geometric characteristics of the ellipse that represents the direct input-output comparison. The two methods are compared and the attention is then focused on some advantages of the proposed methodology. The third chapter focuses on the study of real structures subjected to experimental tests where the force is not known, as in an ambient or impact test. In this analysis we decided to use the CWT, which allows a simultaneous investigation in the time and frequency domain of a generic signal x(t). The CWT is first introduced to process free oscillations, with excellent results in terms of frequencies, damping and vibration modes.
The application to the case of ambient vibrations yields accurate modal parameters of the system, although some important observations must be made concerning the damping. The fourth chapter remains on the problem of post-processing the data acquired after a vibration test, this time through the application of the discrete wavelet transform (DWT). In the first part the results obtained by the DWT are compared with those obtained by the application of the CWT. Particular attention is given to the use of the DWT as a tool for filtering the recorded signal, since in the case of ambient vibrations the signals are often affected by a significant level of noise. The fifth chapter focuses on another important aspect of the identification process: model updating. In this chapter, starting from the modal parameters obtained from environmental vibration tests performed on the Humber Bridge in England by the University of Porto in 2008 and the University of Sheffield, an FE model of the bridge is defined, in order to establish which type of model captures the real dynamic behaviour of the bridge most accurately. The sixth chapter draws the conclusions of the presented research. They concern the application of a frequency-domain method to evaluate the modal parameters of a structure and its advantages, the advantages of applying a procedure based on wavelet transforms in identification tests with unknown input and, finally, the problem of 3D modelling of systems with many degrees of freedom and different types of uncertainty.
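The thesis's own procedures (FRF ellipse fitting, CWT ridge analysis) are beyond a short excerpt, but the classical single-mode baseline they refine can be sketched: estimate the damped natural frequency from the FFT peak of a free-decay record, and the damping ratio from the logarithmic decrement of successive peaks. A minimal sketch under the assumption of one well-separated dominant mode (the function name and interface are hypothetical):

```python
import numpy as np

def modal_params_free_decay(x, fs):
    """Estimate natural frequency (Hz) and damping ratio from a free-decay record.

    x  : measured free-vibration signal dominated by a single mode
    fs : sampling rate in Hz
    """
    # damped natural frequency from the FFT amplitude peak (skip the DC bin)
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    fd = freqs[np.argmax(spec[1:]) + 1]

    # successive positive peaks of the decaying oscillation
    peaks = [i for i in range(1, len(x) - 1)
             if x[i] > x[i - 1] and x[i] > x[i + 1] and x[i] > 0]
    amps = x[peaks]

    # logarithmic decrement from a least-squares fit of ln(amplitude) vs peak index
    delta = -np.polyfit(np.arange(len(amps)), np.log(amps), 1)[0]
    zeta = delta / np.sqrt(4 * np.pi**2 + delta**2)   # damping ratio
    return fd, zeta
```

The least-squares fit over all peaks averages out the jitter introduced by sampling the peaks off-grid, which a two-peak decrement would not.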
Abstract:
To arrive at a better understanding of the process of biomineralization, the interplay of the various types of biological macromolecules involved in the nucleation and growth of the minerals must be taken into account. In this work a new model system is introduced, consisting of a SAM (self-assembled monolayer) with different functionalities and various dissolved macromolecules. It could be shown that the crystallization of vaterite (CaCO3) as well as strontianite (SrCO3) nanowires can be attributed to the presence of polyacrylate in cooperation with a COOH-functionalized SAM surface. The combination of a polar SAM surface and polyacrylate acts as an interface for the structure-directing crystallization of nanowire crystals. It could further be shown that the phase selection of CaCO3 is controlled by the cooperative interaction between a SAM surface and an hb-polyglycerol adsorbed on it. The functionality of a SAM surface in the presence of carboxymethyl cellulose also exerts a decisive influence on the phase selection of the resulting product. In the present work, small-angle neutron scattering was used to investigate the homogeneous nucleation of CaCO3, nucleation in the presence of a protein, and nucleation on colloids acting as templates. Homogeneous crystallization in aqueous solution turned out to be a multi-step process. In the presence of the egg-white protein ovalbumin, three phases could be identified, among them an initially present amorphous phase and two crystalline phases.
Abstract:
Pig meat quality is determined by several parameters, such as lipid content, tenderness, water-holding capacity, pH, color and flavor, that affect consumers' acceptance and the technological properties of meat. Carcass quality parameters are important for the production of fresh and dry-cured high-quality products, in particular the fat deposition and the lean cut yield. The identification of genes and markers associated with meat and carcass quality traits is of prime interest, given the possibility of improving these traits by marker-assisted selection (MAS) schemes. Therefore, the aim of this thesis was to investigate seven candidate genes for meat and carcass quality traits in pigs. In particular, we focused on genes belonging to the family of the lipid droplet coat proteins, the perilipins (PLIN1 and PLIN2), on the calpain/calpastatin system (CAST, CAPN1, CAPN3, CAPNS1) and on the gene encoding PPARg-coactivator 1A (PPARGC1A). In general, the candidate gene investigation included the protein localization, the detection of polymorphisms, the association analysis with meat and carcass traits and the analysis of the expression level, in order to assess the involvement of each gene in pork quality. Some of the analyzed genes showed effects on various pork traits that are subject to selection in genetic improvement programs, suggesting a possible involvement of these genes in controlling the variability of the traits. In particular, significant association results have been obtained for the PLIN2, CAST and PPARGC1A genes, which are worth further validation. The obtained results contribute to a better understanding of biological mechanisms important for pig production, as well as to a possible use of the pig as an animal model for studies on obesity in humans.
Abstract:
In the present work, the dynamics of various alkali silicates in the melt and in the glass is investigated by means of molecular dynamics (MD) computer simulations. These systems are known to be ion conductors, which is due to the high mobility of the alkali ions compared with the glass-forming components Si and O. The focus of interest is the so-called mixed-alkali effect (MAE), which occurs in ternary mixtures of silicon dioxide with two alkali oxides. Compared with mixtures containing only one alkali species, these ternary systems show a significant slowing down of the alkali-ion diffusion. First, two binary alkali silicates are simulated, namely lithium disilicate (LS2) and potassium disilicate (KS2). The simulations show that the origin of the high mobility of the alkali ions lies in the structure. On intermediate length scales, KS2 and LS2 exhibit order that is reflected in the partial static structure factors by prepeaks. The structure underlying the prepeaks is explained by percolating networks of alkali-oxide-rich channels, which act as diffusion channels for the mobile alkali ions. In these channels the ions move by hopping between preferred sites. In the simulation, one observes for high temperatures (4000 K >= T >= 1500 K) an activation energy similar to experiment. In experiment, however, a crossover to an Arrhenius behaviour with higher activation energy takes place below about 1200 K, which is not reproduced by the simulation. This can be explained by the Si-O matrix not being in equilibrium in the simulation, for which aging effects are observed. The MAE is strongest for an alkali component whose concentration in a ternary mixed-alkali system approaches zero. Therefore an LS2 system is investigated in which one Li ion is exchanged for a K ion.
The influence of the K ion is visible both locally, in the characteristic distances to the first nearest neighbours (NN), and in the spatially resolved coordination-number distribution up to length scales of about 8.5 Angstrom. The investigation of the dynamics of the inserted K ion shows that the jump probability is not correlated with the localization, a measure of the motion of a particle about its rest position, but that a chemical environment with few Li and many O nearest neighbours, or many Li and few O nearest neighbours, favours a jump event. Finally, a ternary alkali silicate (LKS2) is investigated whose structure exhibits all the characteristic length scales of LS2 and KS2. A complex structure with two percolating subnetworks for the alkali ions thus emerges. The investigation of the dynamics reveals a low probability for ions to jump into the subnetwork of the other ion species. It can also be shown that the model potential reproduces the MAE, i.e. that the diffusion constants in LKS2 are up to an order of magnitude slower than in KS2 or LS2. Moreover, the observed effect has the functional form expected for the MAE. It was also found that, despite the temporal delay in the dynamic quantities, the number of jumps per unit time is not smaller, and that at low temperatures (i.e. in the glass) jumps to a neighbouring site followed by a jump back to the previous position are considerably more probable than at high temperatures (i.e. in the melt). The present results shed light on the details of the mechanisms of microscopic ion conduction in binary and ternary alkali silicates and on the MAE.
Abstract:
This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming, applied in particular in artificial intelligence, that can be used for the autonomous control of simulated agents or real hardware robots in dynamic and uncertain environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly accessible to traditional grid-based approximation methods. The goal of this work is to make reinforcement learning applicable, through non-parametric function approximation (more precisely, regularization networks), to problems of in principle arbitrarily high dimension. Regularization networks are a generalization of ordinary basis-function networks that parametrize the sought solution by the data, so that the explicit choice of nodes/basis functions is avoided and the "curse of dimensionality" can be circumvented for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, say, feed-forward neural networks). All these theoretical advantages, however, are offset by a very practical problem: the computational cost of regularization networks naturally scales as O(n^3), where n is the number of data points.
This is particularly problematic because in reinforcement learning the learning process takes place online: the samples are generated by an agent/robot while it interacts with the environment. Adjustments to the solution must therefore be made immediately and with little computational effort. The contribution of this work accordingly falls into two parts. In the first part, we formulate for regularization networks an efficient learning algorithm for solving general regression tasks, tailored specifically to the requirements of online learning. Our approach builds on recursive least squares but can, at constant cost per step, insert not only new data but also new basis functions into the existing model. This is made possible by the "subset of regressors" approximation, whereby the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part, we carry this algorithm over to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behaviour. Overall, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions. In doing so, we do not rely on a model of the environment, work largely independently of the dimension of the state space, achieve convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications.
We demonstrate the performance of our approach on two realistic and complex application examples: the RoboCup keepaway problem and the control of a (simulated) octopus tentacle.
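The constant-per-step update at the heart of this approach is recursive least squares; the thesis extends it with kernels, the subset-of-regressors approximation and greedy online basis selection, none of which are reproduced here. A minimal sketch of plain RLS for online linear regression (the class name and interface are assumptions):

```python
import numpy as np

class RecursiveLeastSquares:
    """Plain recursive least squares for online linear regression.

    Minimizes sum_i (y_i - phi_i . w)^2 + reg * ||w||^2, updating the
    weight vector w one sample at a time in O(dim^2) per step via the
    Sherman-Morrison identity (no matrix inversion in the loop).
    """
    def __init__(self, dim, reg=1.0):
        self.w = np.zeros(dim)            # current weight estimate
        self.P = np.eye(dim) / reg        # inverse regularized Gram matrix

    def update(self, phi, y):
        """Incorporate one sample (feature vector phi, target y)."""
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)     # gain vector
        self.w += k * (y - phi @ self.w)  # correct by the prediction error
        self.P -= np.outer(k, Pphi)       # Sherman-Morrison rank-1 downdate
        return self.w
```

Each `update` costs O(dim^2) regardless of how many samples have been seen, which is what makes the scheme usable while the agent interacts with its environment.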
Abstract:
The objective of this work is to characterize the genome of chromosome 1 of A. thaliana, a small flowering plant used as a model organism in studies of biology and genetics, on the basis of a recent mathematical model of the genetic code. I analyze and compare different portions of the genome: genes, exons, coding sequences (CDS), introns, long introns, intergenes, untranslated regions (UTR) and regulatory sequences. In order to accomplish the task, I transformed nucleotide sequences into binary sequences based on the definition of three different dichotomic classes. The descriptive analysis of the binary strings indicates the presence of regularities in each portion of the genome considered. In particular, there are remarkable differences between coding sequences (CDS and exons) and non-coding sequences, suggesting that the reading frame is important only for coding sequences and that dichotomic classes can be useful to recognize them. Then, I assessed the existence of short-range dependence between binary sequences computed on the basis of the different dichotomic classes. I used three different measures of dependence: the well-known chi-squared test and two indices derived from the concept of entropy, i.e. Mutual Information (MI) and Sρ, a normalized version of the "Bhattacharya Hellinger Matusita distance". The results show that there is a significant short-range dependence structure only for the coding sequences, whose existence is a clue to an underlying error detection and correction mechanism. No doubt, further studies are needed in order to assess how the information carried by dichotomic classes could discriminate between coding and non-coding sequences and, therefore, contribute to unveiling the role of the mathematical structure in error detection and correction mechanisms. Still, I have shown the potential of the presented approach for understanding the management of genetic information.
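As a small illustration of the dependence measures mentioned above, the mutual information between two binary sequences can be computed directly from joint and marginal frequencies. This is a generic plug-in estimator in bits, not the thesis's implementation:

```python
import numpy as np
from collections import Counter

def mutual_information(x, y):
    """Plug-in mutual information (in bits) between two equal-length sequences.

    MI is zero for independent sequences and grows with the strength of the
    association; for two identical fair binary sequences it equals 1 bit.
    """
    n = len(x)
    pxy = Counter(zip(x, y))    # joint symbol counts
    px = Counter(x)             # marginal counts of x
    py = Counter(y)             # marginal counts of y
    mi = 0.0
    for (a, b), c in pxy.items():
        p_ab = c / n
        # p_ab * log2( p_ab / (p_a * p_b) ), written with raw counts
        mi += p_ab * np.log2(p_ab * n * n / (px[a] * py[b]))
    return mi
```

In a sliding-window analysis one would compute this between binary strings derived from the different dichotomic classes at various lags to probe short-range dependence.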
Abstract:
In this thesis, three measurements of the top-antitop differential cross section at a centre-of-mass energy of 7 TeV are presented, as a function of the transverse momentum, the mass and the rapidity of the top-antitop system. The analysis has been carried out on a data sample of about 5/fb recorded with the ATLAS detector. The events have been selected with a cut-based approach in the "one lepton plus jets" channel, where the lepton can be either an electron or a muon. The most relevant backgrounds (multi-jet QCD and W+jets) have been extracted using data-driven methods; the others (Z+jets, diboson and single top) have been simulated with Monte Carlo techniques. The final, background-subtracted distributions have been corrected, using unfolding methods, for detector and selection effects. Finally, the results have been compared with the theoretical predictions. The measurements are dominated by the systematic uncertainties and show no relevant deviation from the Standard Model predictions.
Abstract:
The diagnosis, grading and classification of tumours has benefited considerably from the development of DCE-MRI, which is now essential to the adequate clinical management of many tumour types thanks to its capability of detecting active angiogenesis. Several strategies have been proposed for DCE-MRI evaluation. Visual inspection of the contrast agent concentration curves vs time is a very simple yet operator-dependent procedure, so more objective approaches have been developed in order to facilitate comparison between studies. In so-called model-free approaches, descriptive or heuristic information extracted from the raw time series data is used for tissue classification. The main issue with these schemes is that they do not have a direct interpretation in terms of the physiological properties of the tissues. On the other hand, model-based investigations typically involve compartmental tracer kinetic modelling and pixel-by-pixel estimation of kinetic parameters via non-linear regression applied to regions of interest appropriately selected by the physician. This approach has the advantage of providing parameters directly related to the pathophysiological properties of the tissue, such as vessel permeability, local regional blood flow, extraction fraction, and the concentration gradient between plasma and the extravascular-extracellular space. However, non-linear modelling is computationally demanding and the accuracy of the estimates can be affected by the signal-to-noise ratio and by the initial values. The principal aim of this thesis is to investigate the use of semi-quantitative and quantitative parameters for the segmentation and classification of breast lesions.
The objectives can be subdivided as follows: to describe the principal techniques for evaluating time-intensity curves in DCE-MRI, with a focus on the kinetic models proposed in the literature; to evaluate the influence of the parametrization choice for classic bi-compartmental kinetic models; to evaluate the performance of a method for simultaneous tracer kinetic modelling and pixel classification; and to evaluate the performance of machine learning techniques for the segmentation and classification of breast lesions.
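The pixel-by-pixel non-linear regression mentioned above can be sketched for the standard bi-compartmental (Tofts) model, Ct(t) = Ktrans ∫ Cp(τ) exp(−kep (t − τ)) dτ. The arterial input function and parameter values below are synthetic assumptions for illustration, not the thesis data:

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 5.0, 200)   # minutes
dt = t[1] - t[0]
# Toy biexponential arterial input function (assumed for this sketch).
Cp = 5.0 * (np.exp(-0.2 * t) - np.exp(-2.0 * t))

def tofts(t, Ktrans, kep):
    # Discrete convolution of the AIF with the tissue impulse response
    # exp(-kep * t), scaled by Ktrans.
    irf = np.exp(-kep * t)
    return Ktrans * dt * np.convolve(Cp, irf)[: len(t)]

# Generate a noise-free tissue curve with known parameters, then fit it
# the way a single pixel would be fitted.
Ct = tofts(t, 0.25, 0.625)
popt, _ = curve_fit(tofts, t, Ct, p0=[0.1, 0.3])
```

With noise added, the fit's sensitivity to the initial guess `p0` illustrates the dependence on "initial solutions" that the abstract points out.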
Resumo:
Inbreeding can lead to a fitness reduction due to the unmasking of deleterious recessive alleles and the loss of heterosis. Therefore, most sexually reproducing organisms avoid inbreeding, often by dispersal. Besides the avoidance of inbreeding, dispersal lowers intraspecific competition on a local scale and leads to a spreading of genotypes into new habitats. In social insects, winged reproductives disperse and mate during nuptial flights. Thereafter, queens independently found a new colony. However, some species also produce wingless sexuals as an alternative reproductive tactic. Wingless sexuals mate within or close to their colony, and queens either stay in the nest or found a new colony by budding. During this dependent colony foundation, wingless queens are accompanied by a fraction of their nestmate workers. The production of wingless reproductives therefore circumvents the risks associated with dispersal and independent colony foundation. However, the absence of dispersal can lead to inbreeding and local competition.

In my PhD project, I investigated the mating biology of Hypoponera opacior, an ant that produces winged and wingless reproductives in a population in Arizona. Besides the investigation of the annual reproductive cycle, I focused particularly on the consequences of wingless reproduction. An analysis of sex ratios in wingless sexuals should reveal the relative importance of local resource competition among queens (which mainly compete for the help of workers) and local mate competition among males. Further, sexual selection was expected to act on wingless males, which had previously been found to mate with and mate-guard pupal queens in response to local mate competition. We studied whether males are able to adapt their mating behaviour to the current competitive situation in the nest and which traits are under selection in this mating situation. Last, we investigated the extent and effects of inbreeding.
As the species appeared to produce non-dispersive males and queens quite frequently, we expected to find no or only weak negative effects of inbreeding, and potentially mechanisms that moderate inbreeding levels despite frequent nest-matings.

We found that winged and wingless males and queens are produced during two separate seasons of the year. Winged sexuals emerge in early summer and conduct nuptial flights in July, when climatic conditions due to frequent rainfalls lower the risks of dispersal and independent colony foundation. In fall, wingless sexuals are produced that reproduce within the colonies, leading to an expansion on the local scale. The absence of dispersal during this second reproductive season resulted in local genetic population viscosity and high levels of inbreeding within the colonies. Male-biased sex ratios in fall indicated a greater importance of local resource competition among queens than of local mate competition among males. Males were observed to adjust mate-guarding durations to the competitive situation (i.e. the number of competing males and pupae) in the nest, an adaptation that helps maximise their reproductive success. Further, sexual selection was found to act on the timing of emergence as well as on body size in these males, i.e. earlier-emerging and larger males show a higher mating success. Genetic analyses revealed that wingless males do not actively avoid inbreeding by choosing less related queens as mating partners. Further, we detected diploid males, a male type that is produced instead of diploid females when close relatives mate. In contrast to many other hymenopteran species, diploid males were here viable and able to sire sterile triploid offspring. They did not differ in lifespan, body size, or mating success from "normal" haploid males. Hence, diploid male production in H. opacior is less costly than in other social hymenopteran species.
No evidence of inbreeding depression was found at the colony level, but more inbred colonies invested more resources into the production of sexuals. This effect was more pronounced in the dispersive summer generation. The increased investment in outbreeding sexuals can be regarded as an active strategy to moderate the extent and effects of inbreeding.

In summary, my thesis describes an ant species that has evolved alternative reproductive tactics as an adaptation to seasonal environmental variations. Thereby, the species is able to maintain its adaptive mating system without suffering negative effects from the absence of dispersal flights in fall.
Resumo:
Objective: To compare two scoring systems that classify treatment outcome in terms of dental arch relationships in patients with complete bilateral cleft lip and palate (CBCLP): the Huddart/Bodenham system (HB system) and the Bauru-BCLP yardstick (BCLP yardstick). The predictive value of these scoring systems for treatment outcome was also evaluated. Design: Retrospective longitudinal study. Patients: Dental arch relationships of 43 CBCLP patients were evaluated at 6, 9, and 12 years. Setting: Treatment outcome in BCLP patients using two scoring systems. Main Outcome Measures: For each age group, the HB scores were correlated with the BCLP yardstick scores using Spearman's correlation coefficient. The predictive value of the two scoring systems was evaluated by backward regression analysis. Results: Intraobserver kappa values for the BCLP yardstick scoring for the two observers were .506 and .627, respectively, and the interobserver reliability ranged from .427 to .581. The intraobserver reliability for the HB system ranged from .92 to .97 and the interobserver reliability from .88 to .96. The BCLP yardstick scores at 6 and 9 years together were predictors of the outcome at 12 years (explained variance 41.3%). Adding the incisor and lateral HB scores to the regression model increased the explained variance to 67%. Conclusions: The BCLP yardstick and the HB system are reliable scoring systems for the evaluation of dental arch relationships in CBCLP patients. The HB system categorizes treatment outcome into categories similar to those of the BCLP yardstick. If a more sensitive measure of treatment outcome is needed, both scoring systems should be used selectively.
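The rank correlation between the two scoring systems described above can be sketched with Spearman's coefficient, which is appropriate here because both scales are ordinal. The scores below are hypothetical toy values for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired scores for the same patients under the two
# systems: HB scores (negative = crossbite) and BCLP yardstick
# categories (1 = excellent ... 5 = poor).
hb_scores   = np.array([-6, -4, -3, -1, 0, 2, 3, 5])
bclp_scores = np.array([ 1,  1,  2,  2, 3, 4, 4, 5])

# Spearman's rho works on ranks, so monotonic agreement between the
# two ordinal scales yields a coefficient close to 1.
rho, pval = spearmanr(hb_scores, bclp_scores)
```

A high rho, as found in the study, indicates that the two systems order patients' outcomes in essentially the same way even though their category definitions differ.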