978 results for radial distribution functions
Abstract:
In this thesis, various water models are studied in so-called multiscale computer simulations at two resolutions: an atomistic resolution, in which a water molecule is described by three atoms according to its chemical structure, and a coarser resolution, referred to as "coarse-grained", in which a molecule is represented by a single bead.

The coarse-grained models presented in this work are developed with different coarse-graining methods, mainly the "iterative Boltzmann inversion" and the "iterative Monte Carlo inversion". Both are structure-based approaches that aim to reproduce selected structural properties of the underlying atomistic system, such as the pair distribution functions. The software package "Versatile Object-oriented Toolkit for Coarse-Graining Applications" (VOTCA) was developed to apply these methods in an automated way.

It is investigated to what extent coarse-grained models can simultaneously reproduce several properties of the underlying atomistic model, e.g. thermodynamic properties such as pressure and compressibility, or structural properties that were not used in constructing the model, e.g. the tetrahedral packing behavior that is responsible for many of the special properties of water.

Using the Adaptive Resolution Scheme, both resolutions are combined in a single simulation. This exploits the advantages of both models: the detailed representation of a spatially small region at atomistic resolution, and the computational efficiency of the coarse-grained model, which enlarges the accessible time and length scales.

These simulations make it possible to study the influence of the hydrogen-bond network on the hydration of fullerenes. It turns out that the structure of the water molecules at the surface is dominated mainly by the type of interaction between the fullerene and the water, and less by the hydrogen-bond network.
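The iterative Boltzmann inversion named above has a compact update rule, V_{n+1}(r) = V_n(r) + k_B T ln[g_n(r)/g_target(r)], iterated until the coarse-grained simulation reproduces the target pair distribution function. A minimal sketch of one update step (plain NumPy; the damping factor is a common stabilization device, not something the abstract specifies):

```python
import numpy as np

def ibi_update(V, g_current, g_target, kT=1.0, damping=1.0):
    """One iterative Boltzmann inversion step: shift the pair
    potential so the simulated RDF g_current moves toward g_target."""
    # Only update where both RDFs are nonzero (outside the core).
    mask = (g_current > 1e-8) & (g_target > 1e-8)
    dV = np.zeros_like(V)
    dV[mask] = kT * np.log(g_current[mask] / g_target[mask])
    return V + damping * dV

# A common starting guess is the potential of mean force,
# V_0(r) = -kT * ln g_target(r), wherever g_target > 0.
```

In practice each iteration requires a new coarse-grained simulation to produce g_current, which is what toolkits such as VOTCA automate.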
Abstract:
Amphiphilic peptides, Pro-Glu-(Phe-Glu)n-Pro, Pro-Asp-(Phe-Asp)n-Pro, and Phe-Glu-(Phe-Glu)n-Phe, can be constructed from n alternating sequences of hydrophobic and hydrophilic amino acids such that they assemble into monolayers at the air-water interface. In biological systems, structures at the organic-aqueous interface can serve as a matrix for the crystallization of hydroxyapatite, a process that can be exploited in the treatment of osteoporosis. In this work, computer simulations were used to investigate the structures and the underlying interactions that govern the aggregation of the peptides on the microscopic level. Atomistic molecular dynamics simulations of single peptide strands show that they readily arrange at the air-water interface and are able to fold into β-turns, even for relatively short peptide lengths (n = 2). Rare events such as these conformational changes require the use of advanced sampling techniques; here, replica exchange molecular dynamics was used to examine the influence of the peptide sequences. The simulations showed that peptides with shorter acidic side chains (Asp vs. Glu) adopted more extended conformations than those with longer side chains, which were able to reach the proline termini. Moreover, the proline termini (Pro vs. Phe) proved necessary to obtain two-dimensional order within the aggregates. The peptide Pro-Asp-(Phe-Asp)n-Pro, which combines both of these features, shows the most ordered behavior and a small backbone twist, and is able to stabilize the aggregates it forms through hydrogen bonds between the acidic side chains; it is therefore best suited for aggregation. This was also supported by assessing the stability of peptide aggregates set up to match experimental conditions, as well as the propensity of individual peptides to self-assemble from initially disordered configurations. Since atomistic simulations are limited to small system sizes and relatively short time scales, a coarse-grained model was developed so that self-assembly can be studied on a larger scale. Because the self-assembly at the interface is of interest, existing coarse-graining methods were extended to determine non-bonded potentials for inhomogeneous systems. The method developed here is analogous to the iterative Boltzmann inversion, but forms the update for the interaction potential based on the radial distribution function in a slab geometry and on the widths of the slab and of the interface. In this way, a compromise between the local liquid structure and the thermodynamic properties of the interface can be reached. The new method was demonstrated for a water slab and a methanol slab in vacuum, as well as for a single benzene molecule at the vacuum-water interface, an application of particular relevance in biology, where the thermodynamic/interfacial behavior often has to be preserved in addition to the structural properties of the system. Building on this, a coarse-grained model was parametrized via a fragment-based approach and the affinity of the peptide for the vacuum-water interface was tested.
Although the individual fragments reproduced both the structure and the probability distributions at the interface, the peptide as a whole diffused away from the interface. However, reparametrizing the non-bonded interactions for one of the backbone fragments within a trimer led the peptide to stay at the interface. This indicates that chain connectivity plays an important role in the behavior of the peptide at the interface.
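The abstract describes the interface-aware update only qualitatively; a plausible reading is a standard iterative Boltzmann inversion term computed from the slab RDF, plus a correction driven by the mismatch of the slab and interface widths. A rough sketch under that assumption (the weights and the linear ramp are illustrative guesses, not the thesis's actual functional form):

```python
import numpy as np

def slab_ibi_update(V, g_slab, g_target, widths, widths_ref, r,
                    kT=1.0, w_slab=0.1, w_int=0.1):
    """Structure part: IBI on the RDF measured in the slab.
    Thermodynamic part: a small ramp correction driven by the
    slab/interface width mismatch (illustrative ansatz only)."""
    mask = (g_slab > 1e-8) & (g_target > 1e-8)
    dV = np.zeros_like(V)
    dV[mask] = kT * np.log(g_slab[mask] / g_target[mask])
    d_slab = widths["slab"] - widths_ref["slab"]
    d_int = widths["interface"] - widths_ref["interface"]
    ramp = 1.0 - r / r[-1]          # linear ramp, zero at the cutoff
    dV += (w_slab * d_slab + w_int * d_int) * kT * ramp
    return V + dV
```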
Abstract:
In this thesis the measurement of the effective weak mixing angle sin²θ_eff in proton-proton collisions is described. The results are extracted from the forward-backward asymmetry (AFB) in electron-positron final states at the ATLAS experiment at the LHC. The AFB is defined from the distribution of the polar angle between the incoming quark and the outgoing lepton. The signal process used in this study is the reaction pp → Z/γ* + X → ee + X, using a dataset corresponding to a total integrated luminosity of 4.8 fb⁻¹. The data were recorded at a proton-proton center-of-mass energy of √s = 7 TeV. The weak mixing angle is a central parameter of the electroweak theory of the Standard Model (SM) and relates the neutral-current interactions of electromagnetism and the weak force. Higher-order corrections to sin²θ_eff are related to other SM parameters such as the mass of the Higgs boson.

Because of the symmetric initial state of the colliding protons, there is no preferred forward or backward direction in the experimental setup. The reference axis used in the definition of the polar angle is therefore chosen with respect to the longitudinal boost of the electron-positron final state. As a consequence, events with low absolute rapidity have a higher chance of being assigned to the direction opposite to the reference axis. This effect, called dilution, is reduced when events at higher rapidities are used. It can be studied by including electrons and positrons in the forward regions of the ATLAS calorimeters (electrons and positrons are referred to collectively as electrons in the following). To include the electrons from the forward region, the energy calibration of the forward calorimeters had to be redone. This calibration is performed by inter-calibrating the forward electron energy scale using pairs of one central and one forward electron, together with the previously derived central electron energy calibration. The uncertainty is shown to be dominated by the systematic variations.

The extraction of sin²θ_eff is performed using χ² tests, comparing the measured AFB distribution in data to a set of template distributions with varied values of sin²θ_eff. The templates are built with a forward-folding technique using modified generator-level samples and the official signal sample with full detector simulation, particle reconstruction, and identification. The analysis is performed in two channels: pairs of central electrons, and pairs of one central and one forward electron. The results of the two channels are in good agreement and constitute the first measurements of sin²θ_eff at the Z resonance using electron final states in proton-proton collisions at √s = 7 TeV. The precision of the measurement is already systematically limited, mostly by the uncertainties arising from the knowledge of the parton distribution functions (PDFs) and the systematic uncertainties of the energy calibration.

The extracted results are combined and yield sin²θ_eff = 0.2288 ± 0.0004 (stat.) ± 0.0009 (syst.) = 0.2288 ± 0.0010 (tot.). The measurements are compared to the results of previous measurements at the Z boson resonance. The deviation from the combined result provided by the LEP and SLC experiments is up to 2.7 standard deviations.
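The template comparison described above reduces to a one-parameter χ² scan: evaluate χ² between the measured AFB distribution and each template, then fit a parabola near the minimum. A minimal sketch (NumPy only; templates and per-bin uncertainties are assumed given, and the minimum is assumed interior to the scan):

```python
import numpy as np

def chi2(afb_data, afb_template, sigma):
    """Bin-by-bin chi^2 between measured and template AFB."""
    return np.sum(((afb_data - afb_template) / sigma) ** 2)

def extract_mixing_angle(afb_data, sigma, templates, trial_values):
    """Scan chi^2 over templates; a parabola through the points
    bracketing the minimum gives the best value, and the
    Delta chi^2 = 1 interval gives the statistical error."""
    chi2s = np.array([chi2(afb_data, t, sigma) for t in templates])
    i = int(np.argmin(chi2s))
    a, b, _ = np.polyfit(trial_values[i-1:i+2], chi2s[i-1:i+2], 2)
    best = -b / (2 * a)
    stat_err = 1.0 / np.sqrt(a)
    return best, stat_err
```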
Abstract:
The Standard Model of particle physics is a very successful theory which describes nearly all known processes of particle physics very precisely. Nevertheless, there are several observations which cannot be explained within the existing theory. In this thesis, two analyses with high-energy electrons and positrons using data of the ATLAS detector are presented: one probing the Standard Model of particle physics, and another searching for phenomena beyond the Standard Model.
The production of an electron-positron pair via the Drell-Yan process leads to a very clean signature in the detector with low background contributions. This allows for a very precise measurement of the cross-section and can be used as a precision test of perturbative quantum chromodynamics (pQCD), where this process has been calculated at next-to-next-to-leading order (NNLO). The invariant mass spectrum mee is sensitive to parton distribution functions (PDFs), in particular to the poorly known distribution of antiquarks at large momentum fraction (Bjorken x). The measurement of the high-mass Drell-Yan cross-section in proton-proton collisions at a center-of-mass energy of √s = 7 TeV is performed on a dataset collected with the ATLAS detector, corresponding to an integrated luminosity of 4.7 fb⁻¹. The differential cross-section of pp → Z/γ* + X → e⁺e⁻ + X is measured as a function of the invariant mass in the range 116 GeV < mee < 1500 GeV. The background is estimated using a data-driven method and Monte Carlo simulations. The final cross-section is corrected for detector effects and for different levels of final-state radiation corrections. A comparison is made to various event generators and to predictions of pQCD calculations at NNLO. Good agreement, within the uncertainties, between the measured cross-sections and the Standard Model predictions is observed.
Examples of observed phenomena which cannot be explained by the Standard Model are the amount of dark matter in the universe and neutrino oscillations. To explain these phenomena, several extensions of the Standard Model have been proposed, some of them leading to new processes with a high multiplicity of electrons and/or positrons in the final state. A model-independent search in multi-object final states, with objects defined as electrons and positrons, is performed to look for such phenomena. The dataset collected at a center-of-mass energy of √s = 8 TeV, corresponding to an integrated luminosity of 20.3 fb⁻¹, is used. The events are separated into different categories using the object multiplicity. The data-driven background method already used for the cross-section measurement was developed further for up to five objects to obtain an estimate of the number of events including fake contributions. Within the uncertainties, the comparison between data and the Standard Model predictions shows no significant deviations.
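Schematically, each bin of such a measurement combines the ingredients listed above: data yield, estimated background, luminosity, bin width, and a correction for detector effects. In generic notation (not taken from the thesis):

```latex
\frac{d\sigma}{dm_{ee}}
  \;=\; \frac{N_{\mathrm{data}} - N_{\mathrm{bkg}}}
             {C_{\mathrm{corr}} \cdot L_{\mathrm{int}} \cdot \Delta m_{ee}} ,
```

where C_corr accounts for detector efficiency and resolution in the bin.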
Abstract:
Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential or its parameters from given structural data. Due to discrepancies between model and reality, the potential is not unique, so that the stability of such a method and its convergence to a meaningful solution are issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause the non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies. We then compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to construct a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen. We show that the modified method is stable, convergent, and fast. Furthermore, the singular value analysis of the structure and its approximation allows one to determine the crucial interaction parameters, that is, to simplify the modeling of interactions. Our results thus build a rigorous bridge between the inverse problem from physics and the powerful solution tools from mathematics.
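The regularizing Levenberg-Marquardt method referred to here has a standard core: solve a damped normal equation at each step, with the damping term supplying both stability and regularization. A minimal sketch (NumPy; residual(theta) would wrap the model or simulation returning the structure mismatch):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, theta0,
                        lam=1e-2, n_iter=50, tol=1e-10):
    """Regularized Levenberg-Marquardt:
    solve (J^T J + lam*I) dtheta = -J^T r at each iteration."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_iter):
        r = residual(theta)
        J = jacobian(theta)
        A = J.T @ J + lam * np.eye(theta.size)
        step = np.linalg.solve(A, -J.T @ r)
        theta = theta + step
        if np.linalg.norm(step) < tol:
            break
    return theta

# The singular value analysis mentioned above inspects
# np.linalg.svd(J, compute_uv=False): directions with tiny singular
# values are the "weak" parameters that barely affect the structure.
```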
Abstract:
Climate monitoring requires an operational, spatio-temporal analysis of climate variability. With the aim of producing ready-to-use maps on a regular basis, it is helpful to display at a glance the spatial variability of the climate elements and their changes over time. For current and recent years, the German Weather Service (Deutscher Wetterdienst) developed a standard procedure for producing such maps. The method of producing them varies between the different climate elements, owing to the underlying data, the natural variability, and the availability of in-situ data.
Within the analysis of spatio-temporal variability in this dissertation, several interpolation methods are applied to the mean temperature of the five decades of the years 1951-2000 for a relatively large area, Region VI of the World Meteorological Organization (Europe and the Near East). Climatologically, the region covers a rather heterogeneous study area reaching from Greenland in the northwest to Syria in the southeast.
The central goal of the dissertation is to develop a method for the spatial interpolation of the decadal mean temperature values for Region VI. This method should be suitable for operational monthly climate-map production in the future, transferable to other climate elements, and usable anywhere given the appropriate software. Two central databases are used in this dissertation: so-called CLIMAT data over land and ship data over the sea.
In essence, the transfer of the point temperature values to the area by spatial interpolation is carried out in three steps. The first step comprises a multiple regression that reduces the station values to a common level using the four predictors geographic latitude, elevation above sea level, annual temperature amplitude, and thermal continentality. In the second step, the reduced temperature values, the so-called residuals, are interpolated with the radial basis function method from the family of neural network models (NNM). In the last step, the interpolated temperature grids are raised back to their original level by inverting the multiple regression from step one using the four predictors.
For all station values, the difference between the value estimated by the interpolation and the true measured value is computed and summarized by the geostatistical measure of the root mean square error (RMSE). The central advantages are the faithful reproduction of the values, the absence of generalization, and the avoidance of interpolation islands. The developed procedure is transferable to other climate elements such as precipitation, snow depth, or sunshine duration.
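The three steps described here (regression reduction, residual interpolation, back-transformation) translate directly into code; the sketch below uses scipy's RBFInterpolator as a stand-in for the thesis's radial-basis-function variant from the neural-network-model family (an assumption about the exact RBF flavor):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.linear_model import LinearRegression

def interpolate_temperature(coords, predictors, temps,
                            grid_coords, grid_predictors):
    """coords: (n, 2) station lon/lat; predictors: (n, 4) columns of
    latitude, elevation, annual amplitude, thermal continentality."""
    # Step 1: reduce station values to a common level by regression.
    reg = LinearRegression().fit(predictors, temps)
    residuals = temps - reg.predict(predictors)
    # Step 2: interpolate the residuals with radial basis functions.
    grid_residuals = RBFInterpolator(coords, residuals)(grid_coords)
    # Step 3: invert the regression to restore the original level.
    return grid_residuals + reg.predict(grid_predictors)
```

The RMSE verification mentioned above would then compare these estimates at held-out stations against the measured values.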
Abstract:
The work presented in this thesis is based on the computation of dynamical models for dwarf spheroidal galaxies, studying the problem by means of distribution functions. A particular class of distribution functions was treated, "action-based distribution functions", which are functions of the action variables only. Fornax was described with an appropriate distribution function, and the problem of constructing dynamical models was addressed by considering both a dark matter halo with a constant density distribution in the inner regions and a cuspy halo. For simplicity, spherical symmetry was assumed and the gravitational potential of the stellar component was not computed explicitly (the stars are tracers in a fixed gravitational potential). Through a direct comparison with observables, such as the projected stellar density profile and the line-of-sight velocity dispersion profile, several models representative of the dynamics of Fornax were found. Models computed via action-based distribution functions make it possible to determine anisotropy profiles self-consistently. All of the computed models are characterized by an anisotropy profile with strong tangential anisotropy. The dark matter estimates of these models were then compared with the most common mass estimators used in the literature. The ratio between the total mass of the system (stellar component and dark matter) and the stellar component of Fornax was also estimated, within 1600 pc and within 3 kpc. As a preliminary exploration, this work also presents some examples of spherical two-component models in which the gravitational field is determined by the self-gravity of the stars and by an external potential representing the dark matter halo.
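For orientation, a widely used family of action-based distribution functions is exponential in a linear combination of the actions; this is a generic example of the class described above, not necessarily the form adopted for Fornax:

```latex
f(\mathbf{J}) \;=\; \frac{M_0}{J_0^3}\,
  \exp\!\left[-\,\frac{J_r + \eta\,(J_\vartheta + |J_\varphi|)}{J_0}\right],
```

where J_r, J_ϑ, J_φ are the radial, latitudinal, and azimuthal actions, J_0 sets the characteristic action scale, and η biases the model toward radial or tangential velocity anisotropy.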
Abstract:
We present studies of the spatial clustering of inertial particles embedded in turbulent flow. A major part of the thesis is experimental, involving the technique of Phase Doppler Interferometry (PDI). The thesis also includes a significant amount of simulation studies and some theoretical considerations. We describe the details of PDI and explain why it is suitable for the study of particle clustering in turbulent flow with a strong mean velocity. We introduce the concept of the radial distribution function (RDF) as our chosen way of quantifying inertial particle clustering and present some original work on foundational and practical considerations related to it. These include methods of treating finite sampling size, the interpretation of the magnitude of the RDF, and the possibility of isolating the RDF signature of inertial clustering from that of large-scale mixing. In the experimental work, we used PDI to observe the clustering of water droplets in a turbulent wind tunnel. From that we present, in the form of a published paper, evidence of dynamical similarity (Stokes number similarity) of inertial particle clustering, together with other results in qualitative agreement with available theoretical predictions and simulation results. We next show detailed quantitative comparisons of results from our experiments, direct numerical simulation (DNS), and theory. Very promising agreement was found for like-sized particles (mono-disperse). The theory is found to be incorrect regarding the clustering of different-sized particles, and we propose an empirical correction based on the DNS and experimental results. Besides this, we also discovered a few interesting characteristics of inertial clustering. Firstly, through observations, we found an intriguing possibility for modeling the RDF arising from inertial clustering with only one (sensitive) parameter. We also found that clustering becomes saturated at high Reynolds number.
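The RDF central to this work has a standard pair-counting estimator: count particle pairs at separation [r, r + dr] and normalize by the count expected for uniformly distributed particles. A minimal sketch for point data in a cubic periodic box (an assumed geometry; real PDI data would need the finite-sampling treatment mentioned above):

```python
import numpy as np

def radial_distribution_function(pos, box, bins):
    """pos: (N, 3) positions in a cubic periodic box of side `box`.
    Returns bin centers and the pair-counting estimate of g(r)."""
    n = len(pos)
    d = pos[:, None, :] - pos[None, :, :]
    d -= box * np.round(d / box)                  # minimum image
    r = np.sqrt((d ** 2).sum(-1))[np.triu_indices(n, k=1)]
    counts, edges = np.histogram(r, bins=bins)
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    pair_density = n * (n - 1) / 2.0 / box**3
    g = counts / (shell_vol * pair_density)
    return 0.5 * (edges[1:] + edges[:-1]), g
```

Values of g(r) above 1 at small r are the clustering signature quantified throughout the thesis.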
Abstract:
We describe a technique for interactive rendering of diffraction effects produced by biological nanostructures such as snake skin surface gratings. Our approach uses imagery from atomic force microscopy that accurately captures the nanostructures responsible for structural coloration, that is, coloration due to wave interference, in a variety of animals. We develop a rendering technique that constructs bidirectional reflection distribution functions (BRDFs) directly from the measured data and leverages precomputation to achieve interactive performance. We demonstrate results of our approach using various shapes of the surface grating nanostructures. Finally, we evaluate the accuracy of our precomputation-based technique and compare to a reference BRDF construction technique.
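Constructing a wave-interference BRDF from measured surface data typically runs through the Fourier transform of the height field: the power spectrum of the surface-phase modulation determines how strongly each wavelength scatters into each direction. A toy sketch of such a precomputation over an AFM height map (a generic scalar-diffraction approximation, not the paper's exact construction):

```python
import numpy as np

def precompute_diffraction_tables(height_map, dx, wavelengths):
    """height_map: (n, n) AFM heights in meters; dx: sample spacing.
    For each wavelength, tabulate the normalized power spectrum of
    the reflection phase factor exp(i * 4*pi*h / lambda); its
    frequency axes map to differences of projected directions."""
    tables = {}
    for lam in wavelengths:
        phase = np.exp(1j * 4.0 * np.pi * height_map / lam)
        power = np.abs(np.fft.fftshift(np.fft.fft2(phase))) ** 2
        tables[lam] = power / power.sum()
    freqs = np.fft.fftshift(np.fft.fftfreq(height_map.shape[0], dx))
    return freqs, tables
```

At render time, a BRDF lookup then amounts to indexing these tables by the projected direction difference, which is what makes interactive performance possible.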
Abstract:
The inclusive jet cross-section has been measured in proton-proton collisions at √s = 2.76 TeV in a dataset corresponding to an integrated luminosity of 0.20 pb⁻¹ collected with the ATLAS detector at the Large Hadron Collider in 2011. Jets are identified using the anti-kt algorithm with two radius parameters of 0.4 and 0.6. The inclusive jet double-differential cross-section is presented as a function of the jet transverse momentum pT and jet rapidity y, covering a range of 20 ≤ pT < 430 GeV and |y| < 4.4. The ratio of this cross-section to the inclusive jet cross-section measurement at √s = 7 TeV, published by the ATLAS Collaboration, is calculated as a function of both the transverse momentum and the dimensionless quantity xT = 2pT/√s, in bins of jet rapidity. The systematic uncertainties on the ratios are significantly reduced due to the cancellation of correlated uncertainties in the two measurements. Results are compared to the predictions of next-to-leading-order perturbative QCD calculations corrected for non-perturbative effects, and to next-to-leading-order Monte Carlo simulation. Furthermore, the ATLAS jet cross-section measurements at √s = 2.76 TeV and √s = 7 TeV are analysed within a framework of next-to-leading-order perturbative QCD calculations to determine parton distribution functions of the proton, taking into account the correlations between the measurements.
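The xT variable puts the two beam energies on a common scale, which is why correlated uncertainties cancel in the ratio. A quick worked example of the mapping:

```latex
x_T = \frac{2\,p_T}{\sqrt{s}}
\quad\Rightarrow\quad
x_T = 0.1 \;\leftrightarrow\;
p_T = 138~\mathrm{GeV}\ \text{at}\ \sqrt{s} = 2.76~\mathrm{TeV},
\qquad
p_T = 350~\mathrm{GeV}\ \text{at}\ \sqrt{s} = 7~\mathrm{TeV}.
```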
Abstract:
Double-differential dijet cross-sections measured in pp collisions at the LHC at a 7 TeV centre-of-mass energy are presented as functions of the dijet mass and half the rapidity separation of the two highest-pT jets. These measurements are obtained using data corresponding to an integrated luminosity of 4.5 fb⁻¹, recorded by the ATLAS detector in 2011. The data are corrected for detector effects so that cross-sections are presented at the particle level. Cross-sections are measured up to a dijet mass of 5 TeV using jets reconstructed with the anti-kt algorithm for values of the jet radius parameter of 0.4 and 0.6. The cross-sections are compared with next-to-leading-order perturbative QCD calculations by NLOJet++ corrected to account for non-perturbative effects. Comparisons with POWHEG predictions, using a next-to-leading-order matrix element calculation interfaced to a parton-shower Monte Carlo simulation, are also shown. Electroweak effects are accounted for in both cases. The quantitative comparison of data and theoretical predictions obtained using various parameterizations of the parton distribution functions is performed using a frequentist method. In general, good agreement with data is observed for the NLOJet++ theoretical predictions when using the CT10, NNPDF2.1 and MSTW 2008 PDF sets. Disagreement is observed when using the ABM11 and HERAPDF1.5 PDF sets for some ranges of dijet mass and half the rapidity separation. An example setting a lower limit on the compositeness scale for a model of contact interactions is presented, showing that the unfolded results can be used to constrain contributions to dijet production beyond those predicted by the Standard Model.
Abstract:
With continuous improvements in brachytherapy source designs and techniques, a method of 3D dosimetry for treatment dose verification would better ensure accurate patient radiotherapy treatment. This study first aimed to evaluate the 3D dose distributions of the low-dose-rate (LDR) Amersham 6711 Oncoseed™ using PRESAGE® dosimeters, to establish PRESAGE® as a suitable brachytherapy dosimeter. The new AgX100 ¹²⁵I seed model (Theragenics Corporation) was then characterized using PRESAGE® following the TG-43 protocol. PRESAGE® dosimeters are solid, polyurethane-based 3D dosimeters doped with radiochromic leuco dyes that produce a linear optical density response to radiation dose. For this project, the radiochromic response in PRESAGE® was captured using optical-CT scanning (632 nm) and the final 3D dose matrix was reconstructed using MATLAB. An Amersham 6711 seed with an air-kerma strength of approximately 9 U was used to irradiate two dosimeters to 2 Gy and 11 Gy at 1 cm, to evaluate dose rates in the r = 1 cm to r = 5 cm region. The dosimetry parameters were compared to the values published in the updated AAPM Report No. 51 (TG-43U1). An AgX100 seed with an air-kerma strength of about 6 U was used to irradiate two dosimeters to 3.6 Gy and 12.5 Gy at 1 cm. The dosimetry parameters for the AgX100 were compared to values measured in previous Monte Carlo and experimental studies. In general, the measured dose rate constant, anisotropy function, and radial dose function for the Amersham 6711 agreed with consensus values to better than 5% in the r = 1 to r = 3 cm region. The dose rates and radial dose functions measured for the AgX100 agreed with the MCNPX and TLD-measured values within 3% in the r = 1 to r = 3 cm region. The anisotropy function measured in PRESAGE® showed relative differences of up to 9% from the MCNPX-calculated values. It was determined that the post-irradiation optical density change over several days was non-linear in different dose regions, and therefore the dose values in the r = 4 to r = 5 cm regions had a higher uncertainty due to this effect. This study demonstrated that, within a radial distance of 3 cm, brachytherapy dosimetry in PRESAGE® can be accurate to within 5% as long as irradiation times are within 48 hours.
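The quantities compared here (dose rate constant Λ, radial dose function g_L(r), anisotropy function F(r,θ)) are the factors of the standard TG-43 line-source dose-rate equation:

```latex
\dot{D}(r,\theta) \;=\; S_K\,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\, g_L(r)\, F(r,\theta),
\qquad r_0 = 1~\mathrm{cm},\ \theta_0 = 90^\circ,
```

with S_K the air-kerma strength and G_L the line-source geometry function; each measured factor can therefore be validated against consensus data independently.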
Abstract:
The episodic occurrence of debris flow events in response to stochastic precipitation and wildfire events makes hazard prediction challenging. Previous work has shown that frequency-magnitude distributions of non-fire-related debris flows follow a power law, but less is known about the distribution of post-fire debris flows. As a first step in parameterizing hazard models, we use frequency-magnitude distributions and cumulative distribution functions to compare volumes of post-fire debris flows to non-fire-related debris flows. Due to the large number of events required to parameterize frequency-magnitude distributions, and the relatively small number of post-fire event magnitudes recorded in the literature, we collected data on 73 recent post-fire events in the field. The resulting catalog of 988 debris flow events is presented as an appendix to this article. We found that the empirical cumulative distribution function of post-fire debris flow volumes is composed of smaller events than that of non-fire-related debris flows. In addition, the slope of the frequency-magnitude distribution of post-fire debris flows is steeper than that of non-fire-related debris flows, evidence that differences in the post-fire environment tend to produce a higher proportion of small events. We propose two possible explanations: 1) post-fire events occur on shorter return intervals than debris flows in similar basins that do not experience fire, causing their distribution to shift toward smaller events due to limitations in sediment supply, or 2) fire causes changes in resisting and driving forces on a package of sediment, such that a smaller perturbation of the system is required in order for a debris flow to occur, resulting in smaller event volumes.
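Comparing frequency-magnitude slopes of the kind described above is commonly done by fitting a power-law tail p(V) ∝ V^(−α) above a threshold volume; the maximum-likelihood estimator for α avoids binning altogether. A minimal sketch (the standard Hill-type estimator; the threshold choice is left to the user):

```python
import numpy as np

def power_law_alpha(volumes, v_min):
    """MLE for the exponent of p(V) ~ V**(-alpha) for V >= v_min,
    with the standard error of the estimate."""
    v = np.asarray(volumes, dtype=float)
    v = v[v >= v_min]
    n = v.size
    alpha = 1.0 + n / np.sum(np.log(v / v_min))
    return alpha, (alpha - 1.0) / np.sqrt(n)

# A steeper (larger) alpha for the post-fire catalog would reflect
# the higher proportion of small events reported above.
```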