897 results for Simulation-based methods


Relevance: 80.00%

Abstract:

In this thesis we describe in detail the Monte Carlo simulation (LVDG4) built to interpret the experimental data collected by LVD and to measure the muon-induced neutron yield in iron and liquid scintillator. A full Monte Carlo simulation, based on the Geant4 (v 9.3) toolkit, has been developed and validation tests have been performed. We used LVDG4 to determine the active vetoing and shielding power of LVD, with the aim of evaluating the feasibility of hosting a dark matter detector in its innermost part, the Core Facility (LVD-CF). The first conclusion is that LVD is a good moderator, but the iron supporting structure produces a large number of neutrons near the core. The second conclusion is that, if LVD is used as an active muon veto, the neutron flux in the LVD-CF is reduced by a factor of 50, to the same order of magnitude as the neutron flux in the deepest laboratory in the world, Sudbury. Finally, the muon-induced neutron yield has been measured. In liquid scintillator we found $(3.2 \pm 0.2) \times 10^{-4}$ n/g/cm$^2$, in agreement with previous measurements performed at different depths and with the general trend predicted by theoretical calculations and Monte Carlo simulations. Moreover, we present the first measurement, to our knowledge, of the neutron yield in iron: $(1.9 \pm 0.1) \times 10^{-3}$ n/g/cm$^2$. This measurement provides an important check for Monte Carlo simulations of neutron production in heavy materials, which are often used as shielding in low-background experiments.
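
The quoted yield is, in essence, the number of detected neutrons normalised to the number of muons and to the traversed material thickness in g/cm². A minimal sketch of that normalisation, with hypothetical counts and an illustrative average track length (none of these numbers come from the thesis):

```python
# Hypothetical illustration of a muon-induced neutron yield estimate:
# Y = N_n / (N_mu * <x>), with <x> the average muon path length
# expressed in g/cm^2 (geometric path length times material density).
def neutron_yield(n_neutrons, n_muons, mean_path_cm, density_g_cm3):
    """Neutrons produced per muon per g/cm^2 of traversed material."""
    path_g_cm2 = mean_path_cm * density_g_cm3
    return n_neutrons / (n_muons * path_g_cm2)

# Illustrative numbers only (not the LVD data set):
print(neutron_yield(n_neutrons=5_000, n_muons=2_000_000,
                    mean_path_cm=9.0, density_g_cm3=0.9))
```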

Relevance: 80.00%

Abstract:

In the last decade, the reverse vaccinology approach shifted the paradigm of vaccine discovery from conventional culture-based methods to high-throughput genome-based approaches for the development of recombinant protein-based vaccines against pathogenic bacteria. Besides reaching its main goal of identifying new vaccine candidates, this procedure also produced a large amount of molecular knowledge about them. In the present work, we explored this knowledge in a species-independent way and performed a systematic in silico molecular analysis of more than 100 protective antigens, looking at their sequence similarity, domain composition and protein architecture in order to identify possible common molecular features. This meta-analysis revealed that, despite low sequence similarity, most of the known bacterial protective antigens share structural/functional Pfam domains as well as specific protein architectures. Based on this, we formulated the hypothesis that the occurrence of these molecular signatures can be predictive of the protective properties of other proteins in different bacterial species. We tested this hypothesis in Streptococcus agalactiae and identified four new protective antigens. Moreover, in order to provide a second proof of concept for our approach, we used Staphylococcus aureus as a second pathogen and identified five new protective antigens. This knowledge-driven selection process, named MetaVaccinology, represents the first in silico vaccine discovery tool based on conserved and predictive molecular and structural features of bacterial protective antigens, rather than on the prediction of their sub-cellular localization.
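
A toy illustration of the kind of knowledge-driven screening described above: candidate proteins are matched against a set of Pfam-style domain architectures observed in known protective antigens. The domain names, architectures and locus tags below are placeholders, not the actual MetaVaccinology signatures:

```python
# Hypothetical sketch: flag candidate proteins whose (ordered) domain
# architecture matches one seen in known protective antigens.
protective_architectures = {
    ("LysM", "CHAP"),                         # placeholder architectures,
    ("Cna_B", "Cna_B", "Gram_pos_anchor"),    # not the real signature set
}

candidates = {
    "SAG0032": ("LysM", "CHAP"),              # placeholder locus tags
    "SAG1331": ("DUF1542",),
}

hits = {name: arch for name, arch in candidates.items()
        if arch in protective_architectures}
print(hits)   # {'SAG0032': ('LysM', 'CHAP')}
```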

Relevance: 80.00%

Abstract:

This thesis addresses the problem of scaling reinforcement learning to high-dimensional, complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming that is used in particular in artificial intelligence and can be applied to the autonomous control of simulated agents or real hardware robots in dynamic, uncertain environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major obstacle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation schemes. The goal of this thesis is to make reinforcement learning applicable to, in principle, arbitrarily high-dimensional problems by means of non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data themselves, so that the explicit choice of nodes/basis functions is avoided and, with it, the "curse of dimensionality" for high-dimensional inputs. At the same time, regularization networks are linear approximators, which are technically easy to handle and for which the existing convergence results for reinforcement learning remain valid (unlike, for example, feed-forward neural networks). All these theoretical advantages are, however, offset by a very practical problem: the computational cost of regularization networks inherently scales as O(n^3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process takes place online: the samples are generated by an agent/robot while it interacts with the environment, so updates to the solution must be made immediately and with little computational effort. The contribution of this thesis is therefore twofold. In the first part we formulate, for regularization networks, an efficient learning algorithm for general regression tasks that is specifically tailored to the requirements of online learning. Our approach is based on recursive least squares, but can incorporate not only new data points but also new basis functions into the existing model at constant cost per update. This is made possible by the "subset of regressors" approximation, in which the kernel is approximated using a strongly reduced selection of the training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behavior. Overall, we develop a highly data-efficient method that is particularly well suited to learning problems in robotics with continuous, high-dimensional state spaces and stochastic state transitions. The approach does not rely on a model of the environment, operates largely independently of the dimension of the state space, achieves convergence with comparatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the performance of our approach on two realistic and complex application examples: the RoboCup keepaway problem and the control of a (simulated) octopus tentacle.
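
The core algorithmic component of the first part, recursive least squares over kernel basis functions drawn from a greedily grown "subset of regressors" dictionary, can be sketched as follows. This is a simplified illustration under generic assumptions (an RBF kernel and a coherence-based novelty test standing in for the approximate-linear-dependence criterion), not the thesis implementation, and it does not reproduce its constant-time update guarantees:

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

class OnlineKernelRLS:
    """Recursive least squares on kernel features k(x, d_i) over a
    greedily grown dictionary (a simplified 'subset of regressors')."""
    def __init__(self, novelty=0.1, lam=1e-2):
        self.dict, self.w = [], np.zeros(0)
        self.P = np.zeros((0, 0))            # inverse covariance estimate
        self.novelty, self.lam = novelty, lam

    def _features(self, x):
        return np.array([rbf(x, d) for d in self.dict])

    def update(self, x, y):
        phi = self._features(x)
        # grow the dictionary if x is poorly represented by current bases
        if len(self.dict) == 0 or np.max(phi) < 1.0 - self.novelty:
            self.dict.append(np.asarray(x, dtype=float))
            self.w = np.append(self.w, 0.0)
            P = np.zeros((len(self.dict), len(self.dict)))
            P[:-1, :-1] = self.P
            P[-1, -1] = 1.0 / self.lam
            self.P = P
            phi = self._features(x)
        # standard RLS update of the linear weights
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)
        err = y - self.w @ phi
        self.w = self.w + k * err
        self.P = self.P - np.outer(k, Pphi)

    def predict(self, x):
        return float(self.w @ self._features(x)) if self.dict else 0.0
```

In the thesis setting this regression module would sit inside least-squares temporal-difference learning, with the targets supplied by the Bellman backup rather than by a fixed label.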

Relevance: 80.00%

Abstract:

The analysis of functional relationships between diet and tooth morphology is an important aspect of primatological and palaeontological research. As the most durable part of the digestive system, teeth provide the best possible evidence of the dietary strategies of (extinct) species, along with a wealth of further information. It is therefore of great importance for scientific work to capture the entire structure of the teeth as precisely and in as much detail as possible. Until now, mostly two-dimensional parameters have been used to comparatively study the complex crown morphology of primate molars. The aim of this work was to record the teeth of several Old World monkey species three-dimensionally using computer-based methods and to define new parameters with which the shape of these teeth can be quantified objectively and interpreted functionally. Using a surface scanner, the dentitions of a sample of 48 primates from five species were scanned and processed with image-processing methods so that three-dimensional digital models of individual molars were available for analysis. Species were selected that have a diet typical of their genus, namely frugivory in the cercopithecines and folivory in the colobines, as well as species that prefer a different diet. All Old World monkeys have very similar molars. Colobines, however, have higher and more pointed cusps, thinner enamel, and appear to wear their teeth down less than the cercopithecines. These observations could be quantified with the help of the new parameters. An index was derived from the 3D surface area and the base area of the teeth that quantifies the degree of surface relief. This index is markedly higher in colobines than in cercopithecines, even for heavily worn teeth. The steepness of the cusps and their orientation were also measured, and these angle measurements confirmed the picture: the higher the proportion of leaves in the diet, the higher the index values and the steeper the cusps. It was particularly important to confirm this for worn teeth as well, which until now have not been included in functional analyses. The orientation of the cusp flanks provides clues about the chewing movement required to comminute food efficiently. The orientation of the colobine cusps suggests that these primates make flat, gliding chewing movements in which the high cusps shear past one another, which is well suited to cutting up fibrous food such as leaves. Cercopithecines, in contrast, appear to use their molars more like a mortar and pestle to crush and grind fruits and seeds. Depending on what is chewed in addition to the main diet, the species differ gradually. Contrary to previous assumptions, it could be shown that colobines, despite their thin enamel, wear their teeth down less and expose less dentine. This provides clear evidence of differences in the mechanical loads acting on the teeth during chewing and can readily be linked to the diets of the species. Based on these model observations, extinct species can in future be investigated with respect to their diet using 3D techniques.
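
The relief index described above is essentially the ratio of the 3D crown surface area to its 2D projected (base) area. A minimal sketch for a triangulated tooth model, assuming generic vertex/face arrays rather than the software actually used in the thesis:

```python
import numpy as np

def relief_index(vertices, faces):
    """Relief index: 3D crown surface area divided by the area of its
    projection onto the occlusal (x-y) plane.

    vertices : (n_vertices, 3) float array
    faces    : (n_faces, 3) int array of vertex indices
    Note: summing absolute projected triangle areas equals the outline
    area only for crowns without overhanging surfaces."""
    v = vertices[faces]                              # (n_faces, 3, 3)
    a, b = v[:, 1] - v[:, 0], v[:, 2] - v[:, 0]
    area_3d = 0.5 * np.linalg.norm(np.cross(a, b), axis=1).sum()
    area_2d = 0.5 * np.abs(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]).sum()
    return area_3d / area_2d
```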

Relevance: 80.00%

Abstract:

We have developed a method for locating sources of volcanic tremor and applied it to a dataset recorded on Stromboli volcano before and after the onset of the February 27th, 2007 effusive eruption. Volcanic tremor has attracted considerable attention from seismologists because of its potential value as a tool for forecasting eruptions and for better understanding the physical processes occurring inside active volcanoes. Commonly used methods to locate volcanic tremor sources are: 1) array techniques, 2) semblance-based methods, 3) calculation of the wavefield amplitude. We have chosen the third approach, using quantitative modeling of the seismic wavefield. For this purpose, we have calculated the Green's functions (GF) in the frequency domain with the Finite Element Method (FEM). We used this method because it is well suited to solving elliptic problems, such as elastodynamics in the Fourier domain. The volcanic tremor source is located by determining the source function over a regular grid of points; the best-fitting point is chosen as the tremor source location. The source inversion is performed in the frequency domain, using only the wavefield amplitudes. We illustrate the method and its validation on a synthetic dataset, and we show some preliminary results on the Stromboli dataset, which reveal temporal variations of the volcanic tremor sources.
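
The amplitude-based location scheme can be illustrated by a simple grid search: for each candidate source node, the observed station amplitudes are compared with precomputed Green's function amplitudes and the best-fitting node is retained. A schematic sketch under generic assumptions (the actual inversion uses FEM Green's functions and a frequency-domain source term):

```python
import numpy as np

def locate_tremor(obs_amp, gf_amp, grid_points):
    """Grid search over candidate source nodes.

    obs_amp     : (n_stations,) observed wavefield amplitudes
    gf_amp      : (n_nodes, n_stations) Green's function amplitudes
    grid_points : (n_nodes, 3) node coordinates
    For each node, the best-fitting scalar source strength is obtained
    by least squares and the residual norm is used as the misfit."""
    misfits = np.empty(len(grid_points))
    for i, g in enumerate(gf_amp):
        s = (g @ obs_amp) / (g @ g)          # optimal source amplitude
        misfits[i] = np.linalg.norm(obs_amp - s * g)
    best = int(np.argmin(misfits))
    return grid_points[best], misfits[best]
```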

Relevance: 80.00%

Abstract:

In this thesis we have developed solutions to common issues affecting widefield microscopes, addressing the problem of intensity inhomogeneity within an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we cope with the problem of the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular we proposed, for both light and fluorescence microscopy, non-parametric multi-image based methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field corrected images, we studied how to overcome the limited field of view of the camera, so as to be able to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working online: starting from a set of overlapping images acquired manually, we validated a fast registration approach to accurately stitch the images together. Finally, we worked on virtually extending the field of view of the camera in the third dimension, with the purpose of reconstructing a single image completely in focus from objects that have a significant depth or lie in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. In order to compare the outcome of existing methods, different standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
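
One common non-parametric, multi-image way to estimate a vignetting (flat-field) function is to exploit the fact that, across many frames of randomly placed sample content, every pixel should on average see the same signal; a per-pixel median across the stack then approximates the illumination field. A minimal sketch along those lines, under that assumption rather than the exact method developed in the thesis:

```python
import numpy as np

def estimate_vignetting(stack):
    """stack: (n_images, H, W) array of raw frames.
    The per-pixel median across many frames approximates the smooth
    illumination/vignetting field, up to a global scale factor."""
    field = np.median(stack, axis=0)
    return field / field.mean()              # normalise to mean 1

def flat_field_correct(image, field, eps=1e-6):
    """Divide out the estimated vignetting field."""
    return image / (field + eps)
```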

Relevance: 80.00%

Abstract:

The growing traffic volumes borne by road pavements produce stress states of considerable magnitude that cause permanent damage to the superstructure. Such damage reduces its service life and entails high maintenance costs. Bituminous mixture is a multiphase material composed of aggregates, bitumen and air voids. The physical properties and the performance of the mixture depend on the characteristics of the aggregate, of the binder, and on their interaction. The approach traditionally used for the numerical modelling of bituminous mixtures is based on a macroscopic study of their mechanical response through continuum constitutive models which, by their nature, do not account for the mutual interaction between the heterogeneous phases and rely on equivalent homogeneous schematizations. To advance these methodologies it is necessary to go beyond this simplification, taking into account the discrete nature of the system and adopting a microscopic approach that makes it possible to represent the real physical-mechanical processes on which the overall macroscopic response depends. In the present work, after a general review of the main numerical methods traditionally employed for the study of bituminous mixtures, the Particle Discrete Element Method (DEM-P) is examined in depth; it schematizes the granular material as a set of independent particles that interact with one another at their mutual contact points according to appropriate constitutive laws. The influence of the shape and size of the aggregate on the macroscopic characteristics (peak deviatoric stress) and on the microscopic characteristics (normal and tangential contact forces, number of contacts, void ratio, porosity, packing, internal friction angle) of the mixture is evaluated. This is made possible by comparing numerical and experimental results of triaxial tests carried out on specimens made of three different mixtures composed of spheres and generically shaped elements.
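
At the heart of a particle DEM code is the contact law evaluated for each pair of touching particles; a linear spring-dashpot normal contact between two spheres, for instance, can be sketched as below (generic parameters, not those calibrated in the thesis). In a full simulation this force, plus a tangential/frictional counterpart, is summed over all contacts and integrated in time:

```python
import numpy as np

def normal_contact_force(x1, x2, v1, v2, r1, r2, kn=1e7, cn=1e2):
    """Linear spring-dashpot normal force between two spheres.
    Returns the force acting on particle 1 (zero if not in contact)."""
    d = np.asarray(x2) - np.asarray(x1)
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                              # unit normal, 1 -> 2
    rel_vn = np.dot(np.asarray(v1) - np.asarray(v2), n)  # approach speed
    fn = kn * overlap + cn * rel_vn           # elastic + viscous term
    return -fn * n                            # pushes particle 1 away from 2
```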

Relevance: 80.00%

Abstract:

Toxoplasma gondii is an obligate intracellular parasite capable of infecting virtually all warm-blooded species, including humans, but cats are the only definitive hosts. Humans and animals acquire T. gondii infection by ingesting food or water contaminated with sporulated oocysts or by ingesting tissue cysts containing bradyzoites. Toxoplasmosis has the highest human incidence among zoonotic parasitic diseases, but it is still considered an underreported zoonosis. The importance of T. gondii primary infection in livestock is related to the ability of the parasite to produce tissue cysts in infected animals, which may represent important sources of infection for humans; consumption of undercooked mutton and pork is considered an important source of human T. gondii infection. The first aim of this thesis was to develop a rapid and sensitive in-house indirect ELISA for the detection of antibodies against T. gondii in sheep sera. ROC-curve analysis showed high discriminatory power (AUC = 0.999) and high sensitivity (99.4%) and specificity (99.8%) of the method. The ELISA was used to test a batch of 375 sheep sera collected in the Forlì-Cesena district. The overall prevalence was estimated at 41.9%, demonstrating that T. gondii infection is widely distributed in sheep reared in the Forlì-Cesena district. Since the epidemiological impact of the waterborne transmission route of T. gondii to humans is now thought to be more significant than previously believed, the second aim of the thesis was to evaluate PCR-based methods for detecting T. gondii DNA in raw and finished drinking water samples collected in Scotland. Samples were tested using a quantitative PCR targeting the 529 bp repetitive element. Only one raw water sample (0.3%) out of the 358 examined tested positive for T. gondii, showing no evidence that tap water is a source of Toxoplasma infection in Scotland.
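
The ROC analysis used to characterise an ELISA cut-off can be reproduced with a few lines of code; a short sketch with simulated optical-density values (the numbers are placeholders, not the Forlì-Cesena data):

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Empirical AUC: probability that a random positive sample scores
    higher than a random negative one (ties count one half)."""
    pos = np.asarray(scores_pos)[:, None]
    neg = np.asarray(scores_neg)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

def sens_spec(scores_pos, scores_neg, cutoff):
    """Sensitivity and specificity at a given optical-density cut-off."""
    sens = np.mean(np.asarray(scores_pos) >= cutoff)
    spec = np.mean(np.asarray(scores_neg) < cutoff)
    return sens, spec

# Placeholder optical densities for reference positive/negative sera
rng = np.random.default_rng(0)
pos = rng.normal(1.2, 0.20, 200)
neg = rng.normal(0.4, 0.15, 500)
print(roc_auc(pos, neg), sens_spec(pos, neg, cutoff=0.8))
```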

Relevance: 80.00%

Abstract:

Over the past ten years, the cross-correlation of long time series of ambient seismic noise (ASN) has been widely adopted to extract the surface-wave part of the Green's functions (GF). This stochastic procedure relies on the assumption that the ASN wavefield is diffuse and stationary. At frequencies < 1 Hz, the ASN is mainly composed of surface waves, whose origin is attributed to the sea-wave climate. Consequently, marked directional properties may be observed, which call for careful investigation of the location and temporal evolution of the ASN sources before attempting any GF retrieval. Within this general context, this thesis is aimed at a thorough investigation of the feasibility and robustness of noise-based methods for the imaging of complex geological structures at the local (∼10-50 km) scale. The study focused on the analysis of an extended (11-month) seismological data set collected at the Larderello-Travale geothermal field (Italy), an area for which the underground geological structures are well constrained thanks to decades of geothermal exploration. Focusing on the secondary microseism band (SM; f > 0.1 Hz), I first investigate the spectral features and the kinematic properties of the noise wavefield using beamforming analysis, highlighting a marked variability with time and frequency. In the 0.1-0.3 Hz frequency band and during spring and summer, the SM waves propagate with high apparent velocities and from well-defined directions, likely associated with ocean storms in the southern hemisphere. Conversely, at frequencies > 0.3 Hz the distribution of back-azimuths is more scattered, indicating that this frequency band is the most appropriate for the application of stochastic techniques. For this latter frequency interval, I tested two correlation-based methods, acting in the time (NCF) and frequency (modified SPAC) domains, respectively yielding estimates of the group- and phase-velocity dispersions. The velocity data provided by the two methods are markedly discordant; comparison with independent geological and geophysical constraints suggests that the NCF results are more robust and reliable.
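
The noise cross-correlation (NCF) step underlying the group-velocity estimates can be sketched as follows: traces from two stations are correlated day by day and the daily correlations stacked, so that the coherent surface-wave part of the Green's function emerges. A schematic FFT-based implementation under generic assumptions (no pre-processing such as spectral whitening or one-bit normalisation, which real workflows usually apply):

```python
import numpy as np

def daily_ncf(trace_a, trace_b, max_lag, dt):
    """Cross-correlation of two equal-length traces, returned for lags
    in [-max_lag, +max_lag] seconds."""
    n = len(trace_a)
    nfft = 2 * n                              # zero-pad to avoid wrap-around
    spec = np.fft.rfft(trace_a, nfft) * np.conj(np.fft.rfft(trace_b, nfft))
    cc = np.fft.irfft(spec, nfft)
    cc = np.concatenate((cc[-n + 1:], cc[:n]))   # lags -(n-1) .. +(n-1)
    k = int(max_lag / dt)
    mid = n - 1                                  # index of zero lag
    return cc[mid - k: mid + k + 1]

def stacked_ncf(days_a, days_b, max_lag, dt):
    """Linear stack of daily correlations over lists of day-long traces."""
    return np.mean([daily_ncf(a, b, max_lag, dt)
                    for a, b in zip(days_a, days_b)], axis=0)
```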

Relevance: 80.00%

Abstract:

In this thesis, new advances in the development of spectroscopy-based methods for the characterization of heritage materials have been achieved. Concerning FTIR spectroscopy, new approaches aimed at exploiting the near- and far-IR regions for the characterization of inorganic or organic materials have been tested. Paint cross-sections have been analysed by FTIR spectroscopy in the NIR range and an ad hoc chemometric approach has been developed for processing the resulting hyperspectral maps. Moreover, a new method for the characterization of calcite, based on the use of grinding curves, has been set up both in the MIR and in the far-IR region. Calcite is a material widely used in cultural heritage, and this spectroscopic approach is an efficient and rapid tool to distinguish between different calcite samples. Different enhanced vibrational techniques for the characterisation of dyed fibres have also been tested. First, a SEIRA (Surface Enhanced Infra-Red Absorption) protocol was optimised, allowing the analysis of colorant micro-extracts thanks to the enhancement produced by the addition of gold nanoparticles. These preliminary studies led to a new enhanced FTIR method, named ATR/RAIRS, which reaches lower detection limits. Regarding Raman microscopy, the research followed two lines, which have in common the aim of avoiding the use of colloidal solutions. AgI-based supports, obtained after deposition on gold-coated glass slides, have been developed and tested by spotting colorant solutions: a SERS spectrum can be obtained thanks to the photoreduction that the laser may induce in the silver salt. Moreover, these supports can be used for the TLC separation of a mixture of colorants, and analyses by means of both Raman/SERS and ATR-RAIRS can then be carried out successfully. Finally, a photoreduction method for the "on fibre" analysis of colorants, without the need for any extraction, has been optimised.
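
A common chemometric treatment of a hyperspectral FTIR map is to unfold the data cube into a pixels-by-wavenumbers matrix and decompose it, for example by PCA, so that the score images highlight the distribution of different materials in a paint cross-section. A generic sketch of that unfolding and decomposition, not the specific ad hoc approach developed in the thesis:

```python
import numpy as np

def pca_score_images(cube, n_components=3):
    """cube: (H, W, n_wavenumbers) hyperspectral map.
    Returns (H, W, n_components) PCA score images."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                      # mean-center each band
    # SVD of the unfolded data; right singular vectors are the loadings
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ Vt[:n_components].T
    return scores.reshape(h, w, n_components)
```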

Relevance: 80.00%

Abstract:

Movement analysis carried out in laboratory settings is a powerful but costly solution, since it requires dedicated instrumentation, space and personnel. Recently, new technologies such as magnetic and inertial measurement units (MIMUs) have become widely accepted as tools for the assessment of human motion in clinical and research settings. They are relatively easy to use and potentially suitable for estimating gait kinematic features, including spatio-temporal parameters. The objective of this thesis is the development and testing in clinical contexts of robust MIMU-based methods for assessing gait spatio-temporal parameters, applicable across a number of different pathological gait patterns. First, considering the need for a solution as unobtrusive as possible, the validity of the single-unit approach was explored: a comparative evaluation of the performance of various methods reported in the literature for estimating gait temporal parameters using a single unit attached to the trunk was performed, first in normal gait and then in different pathological gait conditions. The second part of the research then addressed the development of new methods for estimating gait spatio-temporal parameters using shank-worn MIMUs in different groups of pathological subjects. In addition to the conventional gait parameters, new methods for estimating changes in the direction of progression were explored. Finally, a new hardware solution and the relevant methodology for estimating inter-feet distance during walking were proposed. The results of the technical validation of the proposed methods at different walking speeds and along different paths against a gold standard showed that the use of two MIMUs attached to the lower limbs, combined with a robust method, guarantees a much higher accuracy in determining gait spatio-temporal parameters. In conclusion, the proposed methods can be reliably applied to various abnormal gaits, in some cases achieving a level of accuracy comparable to that obtained in normal gait.
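
Shank-mounted gyroscopes are typically exploited by detecting the large mid-swing peak in the sagittal angular velocity and locating gait events around it; stride time and cadence then follow directly. A schematic sketch of this idea under simplified assumptions (fixed thresholds, a clean signal), not the validated methods of the thesis:

```python
import numpy as np

def mid_swing_peaks(gyro_z, fs, min_height=2.0, min_dist_s=0.5):
    """Indices of mid-swing peaks in the sagittal angular velocity
    (rad/s), at least min_dist_s seconds apart."""
    idx, last = [], -np.inf
    for i in range(1, len(gyro_z) - 1):
        if (gyro_z[i] > min_height and gyro_z[i] >= gyro_z[i - 1]
                and gyro_z[i] > gyro_z[i + 1]
                and (i - last) > min_dist_s * fs):
            idx.append(i)
            last = i
    return np.array(idx)

def stride_times(gyro_z, fs):
    """Stride duration (s) as the interval between successive mid-swing peaks."""
    peaks = mid_swing_peaks(gyro_z, fs)
    return np.diff(peaks) / fs
```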

Relevance: 80.00%

Abstract:

As a large and long-lived species with high economic value, restricted spawning areas and short spawning periods, the Atlantic bluefin tuna (BFT; Thunnus thynnus) is particularly susceptible to over-exploitation. Although BFT have been targeted by fisheries in the Mediterranean Sea for thousands of years, only in recent decades has the exploitation rate reached far beyond sustainable levels. An understanding of the population structure, spatial dynamics, exploitation rates and the environmental variables that affect BFT is crucial for the conservation of the species. The aims of this PhD project were 1) to assess the accuracy of larval identification methods, 2) to determine the genetic structure of modern BFT populations, 3) to assess the self-recruitment rate in the Gulf of Mexico and Mediterranean spawning areas, 4) to estimate the immigration rate of BFT to feeding aggregations from the various spawning areas, and 5) to develop tools capable of investigating the temporal stability of population structuring in the Mediterranean Sea. Several weaknesses of modern morphology-based taxonomy are reviewed, including the demographic decline of expert taxonomists, flawed identification keys, the reluctance of the taxonomic community to embrace advances in digital communications, and a general scarcity of modern user-friendly materials. Barcoding of scombrid larvae revealed important differences in the accuracy of the taxonomic identifications carried out by different ichthyoplanktologists following morphology-based methods. Using a genotyping-by-sequencing approach, a panel of 95 SNPs was developed and used to characterize the population structuring of BFT and the composition of adult feeding aggregations. Using novel molecular techniques, DNA was extracted from bluefin tuna vertebrae excavated from late Iron Age and ancient Roman settlements, Byzantine-era Constantinople, and a 20th-century collection. A second panel of 96 SNPs was developed to genotype historical and modern samples in order to elucidate changes in population structuring and in the allele frequencies of loci associated with selective traits.
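
Characterising population structuring with a SNP panel ultimately comes down to comparing allele frequencies between sample groups. As a toy illustration only, a per-locus Wright's FST computed from heterozygosities (not the estimator actually used for the BFT panels) looks like this:

```python
import numpy as np

def fst_per_locus(p_pop1, p_pop2):
    """Wright's FST per locus from the reference-allele frequencies of
    two populations: FST = (HT - HS) / HT, with H = 2p(1-p)."""
    p1, p2 = np.asarray(p_pop1, float), np.asarray(p_pop2, float)
    p_bar = (p1 + p2) / 2.0
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2.0   # mean within-pop H
    ht = 2 * p_bar * (1 - p_bar)                          # total H
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(ht > 0, (ht - hs) / ht, 0.0)

# Placeholder frequencies for a handful of loci
print(fst_per_locus([0.10, 0.50, 0.80], [0.15, 0.55, 0.30]))
```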

Relevance: 80.00%

Abstract:

At the Mainz Microtron, Lambda hypernuclei can be produced in (e,e'K^+) reactions. Detecting the produced kaon in the KAOS spectrometer tags reactions in which a hyperon was created. The spectroscopy of charged pions originating from weak two-body decays of light hypernuclei makes it possible to determine the binding energy of the hyperon in the nucleus with high precision. In addition to the direct production of hypernuclei, production via fragmentation of a highly excited continuum state is also possible, which allows different hypernuclei to be studied within a single experiment. High-resolution magnetic spectrometers are available for the spectroscopy of the decay pions. In order to calculate the ground-state mass of the hypernuclei from the pion momentum, the hyperfragment must be stopped in the target before it decays. Based on the known cross section of elementary kaon photoproduction, the expected event rate was calculated. A Monte Carlo simulation was developed that includes the fragmentation process and the stopping of the hyperfragments in the target; it uses a statistical break-up model to describe the fragmentation. This approach yields a prediction of the expected decay-pion count rate for hydrogen-4-Lambda hypernuclei. In a pilot experiment in 2011, the detection of hadrons with the KAOS spectrometer at a scattering angle of 0° was demonstrated for the first time at MAMI, with pions detected in coincidence. It turned out that, owing to the high background rates of positrons in KAOS, an unambiguous identification of hypernuclei was not possible in this configuration. Based on these findings, the KAOS spectrometer was modified to act as a dedicated kaon tagger. For this purpose, a lead absorber was mounted in the spectrometer, in which positrons are stopped through shower formation. The effect of such an absorber was studied in a beam test. A simulation based on Geant4 was developed with which the layout of absorber and detectors was optimised and which allowed predictions of the impact on data quality. In addition, the simulation was used to generate individual backtracking matrices for kaons, pions and protons that include the interaction of the particles with the lead wall and thus make it possible to correct for its effects. With the improved setup, a production run was carried out in 2012, in which kaons at a 0° scattering angle were successfully detected in coincidence with pions from weak decays. In the momentum spectrum of the decay pions, an excess with a significance corresponding to a p-value of 2.5 x 10^-4 was observed. Based on their momentum, these events can be attributed to decays of hydrogen-4-Lambda hypernuclei, and the number of detected pions is consistent with the calculated yield.
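
The link between the decay-pion momentum and the hypernuclear ground-state mass is plain two-body kinematics: for a hyperfragment stopped in the target and decaying into a daughter nucleus and a π⁻, the pion momentum is fixed by the three masses. A sketch with approximate, rounded literature masses, used for illustration only:

```python
import numpy as np

def two_body_momentum(M, m1, m2):
    """Momentum (MeV/c) of either daughter in the rest-frame two-body
    decay M -> m1 + m2 (all masses in MeV/c^2)."""
    return np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)

# Approximate masses for 4H-Lambda -> 4He + pi- (illustrative values)
m_he4   = 3727.4           # MeV/c^2
m_pi    = 139.6
m_h4lam = 3922.5           # ~ m(3H) + m(Lambda) - B_Lambda(~2 MeV)
print(two_body_momentum(m_h4lam, m_he4, m_pi))   # ~133 MeV/c
```

Inverting the same relation for a measured pion momentum gives the hypernuclear mass and hence the Lambda binding energy, which is why stopping the hyperfragment before its decay is essential.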

Relevance: 80.00%

Abstract:

Robust and accurate identification of intervertebral discs from low-resolution, sparse MRI scans is essential for the automated planning of MRI spine scans. This paper presents a graphical-model-based solution for detecting both the positions and the orientations of intervertebral discs from low-resolution, sparse MRI scans. Compared with existing graphical-model-based methods, the proposed method does not need a training process on training data, and it can also automatically determine the number of vertebrae visible in the image. Experiments on 25 low-resolution, sparse spine MRI data sets verified its performance.
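
A chain-structured graphical model of this kind can be optimised exactly by dynamic programming: each disc has a set of candidate positions with appearance (unary) scores, and consecutive discs are linked by a spacing (pairwise) term. A generic Viterbi-style sketch of that inference, not the specific potentials of the paper:

```python
import numpy as np

def best_disc_chain(unary, pairwise):
    """unary    : (n_discs, n_candidates) appearance scores (higher = better)
    pairwise    : function(cand_prev, cand_next) -> compatibility score
    Returns the candidate index chosen for each disc (exact MAP on a chain)."""
    n_discs, n_cand = unary.shape
    score = unary[0].copy()
    back = np.zeros((n_discs, n_cand), dtype=int)
    for d in range(1, n_discs):
        trans = np.array([[pairwise(i, j) for j in range(n_cand)]
                          for i in range(n_cand)])      # (prev, next)
        total = score[:, None] + trans
        back[d] = np.argmax(total, axis=0)              # best predecessor
        score = unary[d] + np.max(total, axis=0)
    # backtrack the best chain from the last disc
    path = [int(np.argmax(score))]
    for d in range(n_discs - 1, 0, -1):
        path.append(int(back[d][path[-1]]))
    return path[::-1]
```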

Relevance: 80.00%

Abstract:

Most butterfly monitoring protocols rely on counts along transects (Pollard walks) to generate species abundance indices and track population trends. It is still too often ignored that a population count results from two processes: the biological process (true abundance) and the statistical process (our ability to properly quantify abundance). Because individual detectability tends to vary in space (e.g., among sites) and time (e.g., among years), it remains unclear whether index counts truly reflect population sizes and trends. This study compares capture-mark-recapture (absolute abundance) and count-index (relative abundance) monitoring methods in three species (Maculinea nausithous and Iolana iolas: Lycaenidae; Minois dryas: Satyridae) in contrasting habitat types. We demonstrate that intraspecific variability in individual detectability under standard monitoring conditions is probably the rule rather than the exception, which calls into question the reliability of count-based indices for estimating and comparing population abundance. Our results suggest that the accuracy of count-based methods depends heavily on the ecology and behavior of the target species, as well as on the type of habitat in which surveys take place. Monitoring programs designed to assess the abundance and trends of butterfly populations should incorporate a measure of detectability. We discuss the relative advantages and drawbacks of current monitoring methods and analytical approaches with respect to the characteristics of the species under scrutiny and the resources available.
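
The gap between an index count and absolute abundance is easiest to see by comparing a raw transect count with a capture-mark-recapture estimate such as the Chapman-corrected Lincoln-Petersen estimator; detectability is then simply the count divided by the estimated population size. A minimal sketch with placeholder numbers:

```python
def lincoln_petersen_chapman(n_marked, n_caught, n_recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimate of population size."""
    return (n_marked + 1) * (n_caught + 1) / (n_recaptured + 1) - 1

# Placeholder survey: 60 butterflies marked, 55 caught later, 15 recaptures
n_hat = lincoln_petersen_chapman(60, 55, 15)
transect_count = 34                      # Pollard-walk index for the same site
print(n_hat, transect_count / n_hat)     # estimated abundance and detectability
```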