962 results for Mean-Reverting Jump-Diffusion
Abstract:
In this work, the growth and the ultrafast electron dynamics of the surface plasmon polariton of gold nanoparticles on titanium dioxide were investigated. The dephasing time of the surface plasmon of nanoparticles with defined shape and size was measured using the method of spectral hole burning. The nanoparticles were prepared by deposition of gold atoms from a thermal atomic beam, followed by diffusion and nucleation (i.e. Volmer-Weber growth) on titanium dioxide substrates, and were systematically characterised by a combination of optical spectroscopy and atomic force microscopy. The nanoparticle ensemble can be characterised by the mean axial ratio and the mean equivalent radius. The measurements show that the samples exhibit broad size and shape distributions and that a well-defined relationship exists between particle size and shape: while small gold nanoparticles are nearly spherical, the particles flatten progressively with increasing size. Furthermore, the method of laser-assisted growth was applied to the gold-on-titanium-dioxide system. Systematic studies showed that the axial ratio of the particles can be set deliberately by a suitable choice of the photon energy and fluence of the incident laser light. Laser-assisted growth thus opens up the regime inaccessible to natural growth. Because the spectral position of the plasmon resonance depends on particle shape, the optical properties of the nanoparticles can be tuned deliberately and, for example, optimised for technical applications. The ultrafast electron dynamics of gold nanoparticles on titanium dioxide with equivalent radii between 8 and 15 nm was investigated in this work by spectral hole burning.
For this purpose, the dephasing time of the surface plasmon was measured systematically as a function of photon energy in the range from 1.45 to 1.85 eV. The measured dephasing times of 8.5 to 16.2 fs lay well below the values contained in the dielectric function of gold, demonstrating the expected influence of the reduced dimensions of the particles. To compare measurements across different particle sizes and to quantify the influence of intrinsic damping, the damping parameter A was also determined. The A factors obtained showed a strong dependence on the plasmon energy. For particles with plasmon energies of 1.45 to 1.55 eV, a damping factor of A ~ 0.2 nm/fs was found, reflecting surface scattering as the only dominant damping mechanism. In contrast, for particles with plasmon energies above 1.55 eV, a drastic increase of the damping to A ~ 0.4 nm/fs was observed. This increased damping was attributed to an additional chemical damping caused by the titanium dioxide substrate. In summary, the results show a strong dependence of the chemical damping on the photon energy. It was demonstrated for the first time that chemical damping sets in only above a certain lower photon-energy threshold, which for gold nanoparticles on titanium dioxide lies at about 1.6 eV.
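For reference, a damping parameter A with units of nm/fs is usually defined through a 1/R size dependence of the plasmon dephasing rate; a standard form consistent with the units quoted above (the exact convention is an assumption, not stated in the abstract) is:

```latex
% Size-dependent dephasing rate: bulk term plus a surface/interface term,
% with A in nm/fs and R_eq the equivalent particle radius (assumed convention).
\frac{1}{T_2(R_{\mathrm{eq}})} \;=\; \frac{1}{T_{2,\mathrm{bulk}}} \;+\; \frac{A}{R_{\mathrm{eq}}}
```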
Abstract:
Kriging is an interpolation technique whose optimality criteria are based on normality assumptions for either the observed or the transformed data. This is the case for normal, lognormal and multigaussian kriging. When kriging is applied to transformed scores, optimality of the obtained estimators becomes a cumbersome concept: back-transformed optimal interpolations in transformed scores are not optimal in the original sample space, and vice versa. This lack of compatible optimality criteria induces a variety of problems in both point and block estimates. For instance, lognormal kriging, widely used to interpolate positive variables, has no straightforward way to build consistent and optimal confidence intervals for estimates. These problems are ultimately linked to the assumed space structure of the data support: for instance, positive values, when modelled with lognormal distributions, are assumed to be embedded in the whole real space, with the usual real space structure and Lebesgue measure.
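The incompatibility of optimality criteria can be seen numerically: back-transforming the optimal (mean) estimate in log space yields the geometric mean, which systematically underestimates the mean in the original space by a factor of exp(sigma^2/2). A minimal sketch with synthetic data (the parameters are illustrative, not from the abstract):

```python
import numpy as np

# Synthetic lognormal "sample"; mu and sigma are illustrative choices.
rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
x = rng.lognormal(mean=mu, sigma=sigma, size=200_000)

# The optimal (unbiased) mean estimate in log space, naively back-transformed,
# is the geometric mean of x:
naive = np.exp(np.log(x).mean())

# The mean in the original space, and its theoretical value for a lognormal,
# carry the extra factor exp(sigma^2 / 2):
sample_mean = x.mean()
true_mean = np.exp(mu + sigma**2 / 2)

print(naive, sample_mean, true_mean)
```

The naive back-transform clusters around exp(mu) = 1, while the actual mean is close to exp(mu + sigma^2/2) ≈ 1.65, illustrating why the two optimality notions cannot both hold.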
Abstract:
There is hardly a case in exploration geology where the studied data do not include below-detection-limit and/or zero values, and since most geological data follow lognormal distributions, these "zero data" represent a mathematical challenge for interpretation. We need to start by recognising that there are zero values in geology. For example, the amount of quartz in a foyaite (nepheline syenite) is zero, since quartz cannot coexist with nepheline. Another common essential zero is a North azimuth; however, we can always replace that zero with the value 360°. These are known as "essential zeros", but what can we do with "rounded zeros", which result from values below the detection limit of the equipment? Amalgamation, e.g. adding Na2O and K2O as total alkalis, is one solution, but sometimes we need to differentiate between sodic and potassic alteration. Pre-classification into groups requires good knowledge of the distribution of the data and of the geochemical characteristics of the groups, which is not always available. Setting the zero values equal to the detection limit of the equipment used will generate spurious distributions, especially in ternary diagrams. The same will occur if we replace the zero values by a small amount using non-parametric or parametric techniques (imputation). The method we propose takes into consideration the well-known relationships between certain elements. For example, in copper porphyry deposits there is always a good direct correlation between copper and molybdenum values, but while copper will always be above the detection limit, many of the molybdenum values will be "rounded zeros". We therefore take the lower quartile of the real molybdenum values, establish a regression equation with copper, and then estimate the "rounded" zero values of molybdenum from their corresponding copper values.
The method can be applied to any type of data, provided we first establish their correlation dependency. One of the main advantages of this method is that we do not obtain a fixed value for the "rounded zeros", but one that depends on the value of the other variable. Key words: compositional data analysis, treatment of zeros, essential zeros, rounded zeros, correlation dependency
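The procedure described in this abstract (regress the lower quartile of the measured values of the sparse element on the well-measured one, then predict the below-detection values) can be sketched as follows. This is a minimal reading of the method, not the authors' implementation; the linear-regression choice and the function name are assumptions:

```python
import numpy as np

def impute_rounded_zeros(cu, mo, detection_limit):
    """Impute below-detection ("rounded zero") Mo values from Cu.

    Sketch of the proposed method: fit a regression on the lower quartile
    of the *measured* Mo values against the corresponding Cu values, then
    predict Mo wherever it fell below the detection limit.
    """
    measured = mo >= detection_limit
    # Lower quartile of the real (measured) Mo values
    q1 = np.quantile(mo[measured], 0.25)
    low = measured & (mo <= q1)
    # Simple least-squares line Mo = a*Cu + b fitted on that subset
    a, b = np.polyfit(cu[low], mo[low], 1)
    imputed = mo.astype(float).copy()
    imputed[~measured] = a * cu[~measured] + b
    return imputed
```

Unlike replacing zeros with a constant fraction of the detection limit, each imputed value here varies with the paired copper value, which is the advantage the abstract emphasises.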
Abstract:
Exhaustive statistical information-gathering operations pose major logistical challenges. By using GISs, managing the associated information becomes simpler, and monitoring the quality control of the information gathered can be stricter.
Abstract:
This article summarises the results published in a December 2006 report of the ISS (Istituto Superiore di Sanità) on a mathematical model developed by a working group that includes researchers from the Universities of Trento, Pisa and Rome and the National Institutes of Health (Istituto Superiore di Sanità, ISS), to assess and measure the impact of the transmission and control of the influenza pandemic.
Abstract:
In this paper we find that the diffusion pattern of mobile telephony in Colombia is best characterised as following a logistic curve. Although in recent years the rate of growth of mobile phone subscribers has started to slow down, we find evidence that there is still room for further expansion, as the saturation level is expected to be reached in five years' time. The estimated saturation level is consistent with some individuals possessing more than one mobile device.
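A logistic diffusion curve has three parameters: the saturation level K, the growth rate r, and the inflection time t0. A minimal sketch of fitting r and t0 when K is taken as given (e.g. from a grid search); this is an illustrative linearisation, not the estimation method used in the paper:

```python
import numpy as np

def logistic(t, K, r, t0):
    """Logistic diffusion curve: saturation K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

def fit_logistic_known_K(t, s, K):
    """Fit r and t0 by linearising: ln(K/S - 1) = r*t0 - r*t.

    Assumes the saturation level K is given, and that 0 < s < K.
    """
    y = np.log(K / s - 1.0)             # linear in t
    slope, intercept = np.polyfit(t, y, 1)
    r = -slope
    t0 = intercept / r
    return r, t0
```

Once fitted, the "room for further expansion" claim corresponds to the current subscriber level still sitting below K, with saturation reached as t moves well past t0.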
Abstract:
In this paper we introduce a financial market model based on continuous-time random motions with alternating constant velocities and with jumps occurring when the velocity switches. If the jump directions are in a certain correspondence with the velocity directions of the underlying random motion with respect to the interest rate, the model is free of arbitrage. The replicating strategies for options are constructed in detail. Closed-form formulas for the option prices are obtained.
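The dynamics described above (a telegraph-type log-price whose drift alternates between two constant velocities, with a jump applied at every switch) can be simulated directly. This is an illustrative sketch of such a process, with all parameter values assumed rather than taken from the paper:

```python
import numpy as np

def simulate_jump_telegraph(T=1.0, lam=5.0, c=(0.5, -0.3), h=(-0.05, 0.04),
                            x0=1.0, rng=None):
    """Simulate one terminal price of a jump-telegraph log-price:
    the log-return drifts at velocity c[i] in regime i, the regime
    switches at exponential rate lam, and a relative jump h[i] is
    applied at each switch. Parameter values are illustrative only."""
    rng = rng if rng is not None else np.random.default_rng()
    t, state, logx = 0.0, 0, np.log(x0)
    while True:
        tau = rng.exponential(1.0 / lam)      # time to the next switch
        if t + tau >= T:
            logx += c[state] * (T - t)        # drift to maturity, no more switches
            break
        logx += c[state] * tau + np.log1p(h[state])
        t += tau
        state = 1 - state                     # alternate the velocity regime
    return np.exp(logx)
```

In the no-arbitrage regime the paper describes, the sign of each jump h[i] must be matched to the velocity c[i] relative to the interest rate; the sketch leaves that correspondence to the caller.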
Abstract:
Abstract taken from the publication.
Abstract:
Diffusion tensor magnetic resonance imaging, which measures directional information of water diffusion in the brain, has emerged as a powerful tool for human brain studies. In this paper, we introduce a new Monte Carlo-based fiber tracking approach to estimate brain connectivity. One of the main characteristics of this approach is that all parameters of the algorithm are automatically determined at each point using the entropy of the eigenvalues of the diffusion tensor. Experimental results show the good performance of the proposed approach.
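The quantity driving the per-point parameter selection, the entropy of the diffusion tensor's eigenvalues, can be computed as the Shannon entropy of the normalised eigenvalue vector. This is a plausible reading of that measure, not necessarily the paper's exact definition:

```python
import numpy as np

def eigenvalue_entropy(D):
    """Shannon entropy of the normalised eigenvalues of a 3x3 diffusion
    tensor D: near 0 for a single dominant diffusion direction, ln(3)
    for fully isotropic diffusion."""
    w = np.linalg.eigvalsh(D)         # real eigenvalues (symmetric tensor)
    p = w / w.sum()                   # normalise to a probability vector
    p = p[p > 0]                      # avoid log(0) for degenerate tensors
    return float(-(p * np.log(p)).sum())
```

Low entropy flags strongly anisotropic voxels (confident fiber direction), while high entropy flags isotropic ones, which is what makes it a natural knob for tuning tracking parameters point by point.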
Abstract:
Pressure-jump (p-jump)-induced relaxation kinetics was used to explore the energy landscape of protein folding/unfolding of Y115W, a fluorescent variant of ribonuclease A. Pressure jumps of 40 MPa amplitude (5 ms dead-time) were conducted both to higher (unfolding) and to lower (folding) pressure, in the range from 100 to 500 MPa, between 30 and 50 °C. Significant deviations from the expected symmetrical protein relaxation kinetics were observed. Whereas downward p-jumps always resulted in single-exponential kinetics, the kinetics induced by upward p-jumps were biphasic in the low-pressure range and monophasic at higher pressures. The relative amplitude of the slow phase decreased as a function of both pressure and temperature; at 50 °C, only the fast phase remained. These results can be interpreted within the framework of a two-dimensional energy surface containing a pressure- and temperature-dependent barrier between two unfolded states differing in the isomeric state of the Asn-113–Pro-114 bond. Analysis of the activation volume of the fast kinetic phase revealed a temperature-dependent shift of the unfolding transition state to a larger volume. The observed compensation of this effect by glycerol offers an explanation for its protein-stabilizing effect.
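Relaxation signals of this kind are conventionally fitted as sums of exponentials; the biphasic upward-jump kinetics correspond to a form like the following (a standard model shape, assumed here rather than taken from the paper):

```latex
% Biphasic relaxation: fast and slow phases with amplitudes a_i and
% time constants tau_i; the slow amplitude vanishes at high pressure
% and temperature, leaving single-exponential (monophasic) kinetics.
S(t) = S_\infty + a_{\mathrm{fast}}\, e^{-t/\tau_{\mathrm{fast}}}
                + a_{\mathrm{slow}}\, e^{-t/\tau_{\mathrm{slow}}}
```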
Abstract:
Diffusion Tensor Imaging (DTI) is a new magnetic resonance imaging modality capable of producing quantitative maps of microscopic natural displacements of water molecules that occur in brain tissues as part of the physical diffusion process. This technique has become a powerful tool in the investigation of brain structure and function because it allows for in vivo measurements of white matter fiber orientation. The application of DTI in clinical practice requires specialized processing and visualization techniques to extract and represent acquired information in a comprehensible manner. Tracking techniques are used to infer patterns of continuity in the brain by following in a step-wise mode the path of a set of particles dropped into a vector field. In this way, white matter fiber maps can be obtained.
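The step-wise tracking described above, dropping a particle into a vector field and following the local direction, is essentially fixed-step streamline integration. A minimal sketch (the function name, Euler scheme and defaults are illustrative assumptions):

```python
import numpy as np

def track_fiber(direction_at, seed, step=0.5, n_steps=100):
    """Step-wise particle tracking through a direction field.

    `direction_at(p)` returns the local fiber direction at point p; in
    DTI this would be the principal eigenvector of the diffusion tensor.
    Uses fixed-step Euler integration and stops where no direction exists.
    """
    path = [np.asarray(seed, dtype=float)]
    for _ in range(n_steps):
        d = np.asarray(direction_at(path[-1]), dtype=float)
        norm = np.linalg.norm(d)
        if norm == 0.0:               # undefined direction: terminate the track
            break
        path.append(path[-1] + step * d / norm)
    return np.array(path)
```

Repeating this from many seed points yields the white matter fiber maps the abstract refers to; practical trackers add stopping criteria such as anisotropy and curvature thresholds.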
Abstract:
Azelaic acid is a drug with bacteriostatic activity against many microorganisms and is therefore frequently used in the treatment of acne. However, topical formulations of this drug are generally associated with some adverse effects and poor adherence to therapy. Nanotechnology can thus be considered an innovative strategy to overcome these obstacles. This study focused on the development and characterisation of PLGA nanoparticles containing azelaic acid. The nanoparticles were produced by the modified solvent emulsification/diffusion method and subsequently incorporated into a Carbopol 940 gel. Several parameters of the formulation were characterised, such as zeta potential, particle size and encapsulation efficiency. The mean particle size was 378.63 nm (with a polydispersity index of about 0.09) and the zeta potential was -7.82 mV. The encapsulation efficiency of azelaic acid was 76 ± 3.81%. Consequently, these nanoparticles may be considered useful tools for the delivery of azelaic acid.