870 results for Path-dependence
Abstract:
[ES] Path Planning is a discipline of Robotics concerned with the search for feasible or optimal paths. For most vehicles and environments it is not a trivial problem, so we find a great diversity of algorithms to solve it, not only in Robotics and Artificial Intelligence but also in the Optimization literature, with Numerical Methods and bio-inspired algorithms such as Genetic Algorithms and Ant Colony Optimization. The particular case of variable-cost scenarios is considerably difficult to address because the environment in which the vehicle moves changes over time. This thesis studies this problem and proposes several practical solutions for Underwater Robotics applications.
Abstract:
The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of interest, yet their use in clustering had not previously been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. Attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, because their performance is compared. Then the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data-generating process. Different simulations are performed by varying several conditions (e.g., the kind of margins, whether distinct, overlapping or nested, and the value of the dependence parameter), and the results are evaluated by means of different performance measures. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written, together with their output, are given.
The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and compared with model-based clustering using different performance measures, such as the percentage of well-identified numbers of clusters and the non-rejection percentage of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and of the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. The CoClust exhibits several distinctive characteristics, e.g. the capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
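The dependence-parameter estimation that underlies copula-based clustering can be illustrated with a small sketch. This is a generic moment-inversion estimate for a Gaussian copula, not the CoClust algorithm itself; all names and parameter values are illustrative:

```python
import numpy as np
from math import erf, sqrt

def gaussian_copula_sample(rho, n, rng):
    """Draw n pairs (u, v) from a bivariate Gaussian copula with parameter rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    phi = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))  # standard normal CDF
    return phi(z[:, 0]), phi(z[:, 1])

def kendall_tau(u, v):
    """Empirical Kendall's tau (O(n^2), fine for a demo)."""
    n, s = len(u), 0.0
    for i in range(n - 1):
        s += np.sum(np.sign(u[i] - u[i + 1:]) * np.sign(v[i] - v[i + 1:]))
    return 2.0 * s / (n * (n - 1))

rng = np.random.default_rng(0)
u, v = gaussian_copula_sample(0.7, 2000, rng)
# For the Gaussian copula, rho = sin(pi * tau / 2): invert tau to recover rho
rho_hat = float(np.sin(np.pi * kendall_tau(u, v) / 2.0))
```

With 2000 draws, the moment estimate typically lands close to the true rho = 0.7; likelihood-based procedures such as Inference for Margins refine estimates of this kind.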
Abstract:
This work provides a step forward in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we brought the reader through the fundamental notions of probability and stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric estimation methods. Then, we introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. Then, we focused on the study of generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed orders.
In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), a parametric class of H-sssi processes whose marginal probability density function evolves in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work we have remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space. However, we have been able to provide a characterization independent of the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process, and in particular that it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which was made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation was interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t)=X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation involving the same memory kernel K(t).
We developed several applications and derived the exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
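The kind of process discussed above can be sketched with a generic Cholesky-based fGn/fBm sampler. This is textbook material, not one of the thesis's own methods, and the Hurst exponent and path length below are arbitrary:

```python
import numpy as np

def fgn_cov(n, H):
    """Autocovariance matrix of fractional Gaussian noise with Hurst exponent H."""
    k = np.arange(n)
    g = 0.5 * (np.abs(k - 1)**(2 * H) - 2 * np.abs(k)**(2 * H) + np.abs(k + 1)**(2 * H))
    i, j = np.indices((n, n))
    return g[np.abs(i - j)]

def simulate_fbm(n, H, rng):
    """One fBm path of length n via Cholesky factorisation (O(n^3), fine for small n)."""
    L = np.linalg.cholesky(fgn_cov(n, H) + 1e-12 * np.eye(n))
    noise = L @ rng.standard_normal(n)   # correlated increments: fractional Gaussian noise
    return np.cumsum(noise)              # fBm is the cumulative sum of fGn

rng = np.random.default_rng(1)
path = simulate_fbm(512, 0.8, rng)
# For H > 1/2 the increments are positively correlated: long-range dependence
inc = np.diff(path)
lag1 = float(np.corrcoef(inc[:-1], inc[1:])[0, 1])
```

For H = 0.8 the theoretical lag-1 autocorrelation of the increments is about 0.52, and the empirical value from a path of this length should be clearly positive, in contrast to ordinary Brownian motion (H = 1/2), whose increments are uncorrelated.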
Abstract:
This thesis deals with Visual Servoing and the disciplines strictly connected to it, such as projective geometry, image processing, robotics and non-linear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used Visual Servoing techniques: Image Based Visual Servoing (IBVS). In Image Based Visual Servoing the robot is driven by performing on-line a feedback control loop that is closed directly in the 2D space of the camera sensor. The work considers the case of a monocular system with the only camera mounted on the robot end effector (eye-in-hand configuration). Through IBVS the system can be positioned with respect to a fixed 3D target by minimizing the differences between its initial view and its goal view, corresponding respectively to the initial and the goal system configurations: the robot's Cartesian motion is thus generated only by means of visual information. However, the execution of a positioning control task by IBVS is not straightforward, because singularity problems may occur and local minima may be reached in which the attained image is very close to the target one while the 3D positioning task is far from being fulfilled: this happens in particular for large camera displacements, when the initial and the goal target views are noticeably different. To overcome the singularity and local-minima drawbacks, while maintaining the robustness of IBVS with respect to modeling and camera calibration errors, suitable image path planning can be exploited. This work deals with the problem of generating suitable image-plane trajectories for the tracked points of the servoing control scheme (a trajectory being a path plus a time law). The generated image-plane paths must be feasible, i.e. compliant with the rigid-body motion of the camera with respect to the object, so as to avoid image Jacobian singularities and local-minima problems.
In addition, the planned image trajectories must generate camera velocity screws that are smooth and within the allowed bounds of the robot. We will show that a scaled 3D motion-planning algorithm can be devised to generate feasible image-plane trajectories. Since the paths in the image are generated off-line, it is also possible to tune the planning parameters so as to keep the target inside the camera field of view even when, in some unfortunate cases, the feature target points would otherwise leave the camera images due to the 3D robot motion. To test the validity of the proposed approach, both experimental and simulation results are reported, also taking into account the influence of noise on the path-planning strategy. The experiments were carried out with a 6-DOF anthropomorphic manipulator with a FireWire camera installed on its end effector: the results demonstrate the good performance and the feasibility of the proposed approach.
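The role of the image Jacobian in the IBVS loop can be sketched with the standard interaction matrix of a normalised point feature. This is textbook visual-servoing material, not the planning algorithm of the thesis, and the target geometry and gain below are made up:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalised image point (x, y) at depth Z,
    mapping the camera velocity screw (vx, vy, vz, wx, wy, wz) to (xdot, ydot)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(feats, goals, depths, lam=0.5):
    """Classical IBVS law v = -lambda * pinv(L) * e, one 2x6 block per tracked point."""
    L = np.vstack([interaction_matrix(x, y, Z) for (x, y), Z in zip(feats, depths)])
    e = (np.asarray(feats) - np.asarray(goals)).ravel()
    return L, e, -lam * np.linalg.pinv(L) @ e

# Hypothetical four-point target, slightly offset from its goal view, all at depth 1 m
feats = [(0.11, 0.10), (-0.10, 0.12), (-0.10, -0.10), (0.10, -0.09)]
goals = [(0.10, 0.10), (-0.10, 0.10), (-0.10, -0.10), (0.10, -0.10)]
L, e, v = ibvs_velocity(feats, goals, depths=[1.0] * 4)
# One Euler step of the closed loop: the image error should shrink
e_next = e + 0.1 * (L @ v)
```

The singularities and local minima discussed above arise exactly here: when the stacked L loses rank or the error falls outside its column space, the pseudo-inverse law stalls, which is what feasible image-plane path planning is designed to avoid.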
Abstract:
This thesis presents a creative and practical approach to dealing with the problem of selection bias. Selection bias may be the most vexing problem in program evaluation, or in any line of research that attempts to assert causality. Some of the greatest minds in economics and statistics have scrutinized the problem of selection bias, with the resulting approaches, Rubin's Potential Outcome Approach (Rosenbaum and Rubin, 1983; Rubin, 1991, 2001, 2004) and Heckman's selection model (Heckman, 1979), being widely accepted and used as the best fixes. These solutions to the bias that arises, in particular, from self-selection are imperfect, and many researchers, when feasible, reserve their strongest causal inference for data from experimental rather than observational studies. The innovative aspect of this thesis is to propose a data transformation that allows measuring and testing, in an automatic and multivariate way, the presence of selection bias. The approach involves the construction of a multi-dimensional conditional space of the X matrix in which the bias associated with the treatment assignment has been eliminated. Specifically, we propose the use of a partial dependence analysis of the X-space as a tool for investigating the dependence relationship between a set of observable pre-treatment categorical covariates X and a treatment indicator variable T, in order to obtain a measure of bias according to their dependence structure. The measure of selection bias is then expressed in terms of the inertia due to the dependence between X and T that has been eliminated. Given the measure of selection bias, we propose a multivariate test of imbalance to check whether the detected bias is significant, by using the asymptotic distribution of the inertia due to T (Estadella et al., 2005) and by preserving the multivariate nature of the data.
Further, we propose the use of a clustering procedure as a tool to find groups of comparable units on which to estimate local causal effects, and the use of the multivariate test of imbalance as a stopping rule in choosing the best cluster solution. The method is non-parametric: it does not call for modeling the data based on some underlying theory or assumption about the selection process, but instead exploits the existing variability within the data and lets the data speak. The idea of proposing this multivariate approach to measuring selection bias and testing balance comes from the observation that, in applied research, all aspects of multivariate balance not represented in the univariate variable-by-variable summaries are ignored. The first part contains an introduction to evaluation methods as part of public and private decision processes and a review of the evaluation-methods literature. Attention is focused on Rubin's Potential Outcome Approach, matching methods, and briefly on Heckman's selection model. The second part focuses on some resulting limitations of conventional methods, with particular attention to the problem of how to test balance correctly. The third part contains the proposed original contribution, a simulation study that checks the performance of the method for a given dependence setting, and an application to a real data set. Finally, we discuss, conclude and outline future perspectives.
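The inertia idea can be caricatured with a toy sketch: a plain chi-square-per-observation measure of the dependence between covariate strata X and treatment T. This is a simplification of the thesis's partial dependence analysis, and the data and cell structure are invented:

```python
import numpy as np

def inertia(x_cells, t):
    """Total inertia (chi-square statistic divided by n) of the contingency
    table between a categorical covariate profile and a treatment indicator."""
    cats_x, cats_t = sorted(set(x_cells)), sorted(set(t))
    N = np.zeros((len(cats_x), len(cats_t)))
    for xi, ti in zip(x_cells, t):
        N[cats_x.index(xi), cats_t.index(ti)] += 1
    P = N / N.sum()
    r = P.sum(axis=1, keepdims=True)   # row (covariate) margins
    c = P.sum(axis=0, keepdims=True)   # column (treatment) margins
    E = r @ c                          # expected profile under independence
    return float(((P - E) ** 2 / E).sum())

rng = np.random.default_rng(2)
x = rng.integers(0, 4, size=2000)                            # 4 covariate strata
t_random = rng.integers(0, 2, size=2000)                     # randomised assignment
t_biased = (rng.random(2000) < 0.2 + 0.15 * x).astype(int)   # selection depends on x
```

Under randomised assignment the inertia is close to zero, while self-selection driven by x inflates it; the thesis's test compares such an observed inertia against its asymptotic distribution instead of eyeballing the gap.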
Abstract:
The fundamental goal of this thesis is the determination of the isospin dependence of the Ar+Ni fusion-evaporation cross section. Three Ar isotope beams, with energies of about 13 AMeV, were accelerated and impinged onto isotopically enriched Ni targets in order to produce Pd nuclei with mass numbers varying from 92 to 104. The measurements were performed with the high-performance 4π detector INDRA, coupled with the magnetic spectrometer VAMOS. Even though the results are very preliminary, the behaviour of the obtained fusion-evaporation cross sections hints at a possible isospin dependence of the fusion-evaporation cross sections.
Abstract:
Sudden cardiac death due to ventricular arrhythmia is one of the leading causes of mortality in the world. In recent decades, it has been shown that anti-arrhythmic drugs which prolong the refractory period, by means of prolongation of the cardiac action potential duration (APD), play an important role in preventing relevant human arrhythmias. However, it has long been observed that this "class III antiarrhythmic effect" diminishes at faster heart rates, a serious weakness, since fast rates are precisely the situation in which arrhythmias are most prone to occur. It is well known that mathematical modeling is a useful tool for investigating cardiac cell behavior. In the last 60 years a multitude of cardiac models has been created; since the pioneering work of Hodgkin and Huxley (1952), who first described the ionic currents of the squid giant axon quantitatively, mathematical modeling has made great strides. The O'Hara model, which I employed in this research work, is one of the modern computational models of the ventricular myocyte, part of a new generation that began in 1991 with the ventricular cell model of Noble et al. The strength of these models is that they can generate novel predictions, suggest experiments and provide a quantitative understanding of the underlying mechanisms. The obvious drawback is that they remain simplified models; they do not represent the real system. The overall goal of this research is to provide an additional tool, through mathematical modeling, for understanding the behavior of the main ionic currents involved during the action potential (AP), especially the differences between slower and faster heart rates. In particular, the aims are to evaluate the role of rate dependence in the action potential duration, to implement a new method for interpreting the behavior of ionic currents after a perturbation, and to verify the validity of the work proposed by Antonio Zaza, using an injected current as the perturbing effect.
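The rate dependence of APD can be caricatured with the classical one-dimensional restitution map APD_{n+1} = APD_max - a*exp(-DI_n/tau). This is a textbook caricature, nowhere near the O'Hara ionic model, and all constants below are invented:

```python
import math

def steady_apd(bcl, apd_max=300.0, a=150.0, tau=80.0, n_beats=200):
    """Iterate the exponential restitution map APD_{n+1} = apd_max - a*exp(-DI_n/tau),
    where the diastolic interval is DI_n = bcl - APD_n, until it settles (times in ms)."""
    apd = 200.0
    for _ in range(n_beats):
        di = max(bcl - apd, 1.0)          # keep the diastolic interval positive
        apd = apd_max - a * math.exp(-di / tau)
    return apd

slow = steady_apd(1000.0)  # basic cycle length for ~60 bpm
fast = steady_apd(400.0)   # ~150 bpm: shorter cycle length
```

Faster pacing leaves less diastolic interval for recovery, so the steady-state APD shortens; this is the same qualitative rate dependence that the thesis investigates quantitatively with the full ionic model.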
Abstract:
A path integral simulation algorithm which includes a higher-order Trotter approximation (HOA) is analyzed and compared to an approach which includes the correct quantum mechanical pair interaction (effective propagator, EPr). It is found that the HOA algorithm converges to the quantum limit with increasing Trotter number P as P^{-4}, while the EPr algorithm converges as P^{-2}. The convergence rate of the HOA algorithm is analyzed for various physical systems, such as a harmonic chain, a particle in a double-well potential, gaseous argon, gaseous helium and crystalline argon. A new expression for the estimator of the pair correlation function in the HOA algorithm is derived. A new path integral algorithm, the hybrid algorithm, is developed. It combines an exact treatment of the quadratic part of the Hamiltonian with higher-order Trotter expansion techniques. For the discrete quantum sine-Gordon chain (DQSGC), it is shown that this algorithm works more efficiently than all other improved path integral algorithms discussed in this work. The new simulation techniques developed in this work allow the analysis of the DQSGC and of disordered model systems in the highly quantum mechanical regime using path integral molecular dynamics (PIMD) and adiabatic centroid path integral molecular dynamics (ACPIMD). The ground-state phonon dispersion relation is calculated for the DQSGC by the ACPIMD method. It is found that the excitation gap at zero wave vector is reduced by quantum fluctuations. Two different phases exist: one phase with a finite excitation gap at zero wave vector, and a gapless phase where the excitation gap vanishes. The reaction of the DQSGC to an external driving force is analyzed at T=0. In the gapless phase the system creeps if a small force is applied, while in the phase with a gap the system is pinned. At a critical force, the system undergoes a depinning transition in both phases and flow is induced.
The analysis of the DQSGC is extended to models with disordered substrate potentials. Three different cases are analyzed: disordered substrate potentials with roughness exponents H=0 and H=1/2, and a model with disordered bond lengths. For all models, the ground-state phonon dispersion relation is calculated.
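The quoted convergence orders can be made concrete with a generic matrix example: a symmetric (Strang) Trotter splitting of exp(-beta(A+B)) has error O(P^{-2}), so doubling P should divide the error by roughly four. This demonstrates only the P^{-2} baseline; the HOA's P^{-4} behaviour requires higher-order factorizations. The matrices here are random stand-ins, not any Hamiltonian from the thesis:

```python
import numpy as np

def expm_sym(M):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return (V * np.exp(w)) @ V.T

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4)); A = 0.5 * (A + A.T)   # random symmetric "kinetic" part
B = rng.standard_normal((4, 4)); B = 0.5 * (B + B.T)   # random symmetric "potential" part
beta = 1.0
exact = expm_sym(-beta * (A + B))

def strang_error(P):
    """Error of the symmetric splitting (e^{-bA/2P} e^{-bB/P} e^{-bA/2P})^P."""
    half = expm_sym(-beta * A / (2 * P))
    step = half @ expm_sym(-beta * B / P) @ half
    return np.linalg.norm(np.linalg.matrix_power(step, P) - exact)

ratio = strang_error(16) / strang_error(32)   # near 4 for an O(P^-2) scheme
```

A fourth-order scheme of the HOA type would instead show this error ratio approaching 16 when P is doubled.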
Abstract:
In 1998 a pilot experiment was carried out to study the helicity dependence of photoreaction cross sections using circularly polarized real photons on longitudinally polarized deuterons in a deuterated butanol target. Knowledge of these cross sections is required to test the validity of the Gerasimov-Drell-Hearn sum rule for the deuteron and the neutron. The focus of this thesis is on the results for the differential and total cross sections of the photodisintegration reaction at various photon energies in the range from 200 to 450 MeV, using data taken with the detector system DAPHNE. The current understanding of the NN interaction, as represented by the calculations of M. Schwamb, could be confirmed within the given uncertainties. In addition, the detector DAPHNE was prepared for the main experiment in 2003. The corresponding work is presented together with the results of quality-test measurements of the renewed detector components.
Abstract:
The first experimental test of the GDH sum rule for the proton was carried out in 1998 at the MAMI accelerator of the University of Mainz. In addition, a pilot experiment with a polarized deuteron target was performed. The same collaboration carried out a dedicated deuteron experiment in 2003 with the aim of investigating the GDH sum rule for the neutron. The setup used in both experiments allows not only the measurement of the total cross section, but also the simultaneous study of individual partial reactions. In this work, the data of the 1998 deuteron pilot experiment are analyzed. Furthermore, a study of the helicity dependence of the differential cross sections for three pion photoproduction channels of the deuteron in the upper half of the Delta resonance is presented. These results are compared with a theoretical model. Sufficiently good agreement was found for the unpolarized reactions, while smaller discrepancies were observed for the polarized channels. The degree of target polarization is one of the relevant parameters required for an absolute normalization of the cross sections. The analysis of this parameter for the 2003 data is presented in this work. A frozen-spin target is currently under construction in Mainz. It will be available as a polarized-proton or polarized-deuteron target for future experiments with the Crystal Ball. The preparation of the various subsystems of this setup formed an important part of this work. The fundamentals of the method and its technical implementation, as well as the current status of the target construction, are presented in detail.
Abstract:
Biomedical analyses are becoming increasingly complex, with respect to both the type of data to be produced and the procedures to be executed. This trend is expected to continue in the future. The development of information and protocol management systems that can sustain this challenge is therefore becoming an essential enabling factor for all actors in the field. The use of custom-built solutions that require the biology domain expert to acquire or procure software engineering expertise in the development of the laboratory infrastructure is not fully satisfactory, because it incurs undesirable mutual knowledge dependencies between the two camps. We propose instead an infrastructure concept that enables domain experts to express laboratory protocols using proper domain knowledge, free from interference by software implementation artefacts. In the system that we propose, this is made possible by basing the modelling language on an authoritative domain-specific ontology and then using modern model-driven architecture technology to transform the user models into software artefacts ready for execution on a multi-agent execution platform specialized for biomedical laboratories.
Abstract:
Exploratory Search, a search paradigm based on discovery and learning activities, was for a long time ignored by traditional search engines. Yet it is often from exploratory searches that the most innovative ideas arise. Recent Semantic Web technologies provide the means to implement search engines capable of accompanying users engaged in this kind of search. Aemoo, the search engine on which this thesis builds, is an effective example. Starting from Aemoo, and again with the help of Web of Data technologies, this work proposes a methodology that takes the singularity of each user's profile into account in order to guide the user through an exploratory search in a personalized way. The personalization criterion we chose is behavioral, i.e. based on the decisions the user makes at each step of the search process. By implementing a prototype, we were able to test the validity of this approach, thus allowing the user to no longer be alone on the long and winding road that leads to knowledge.
Abstract:
The study of dissipative quantum systems makes it possible to observe quantum phenomena even on macroscopic length scales. The microscopic model chosen in this dissertation allows the effect of quantum dissipation, previously accessible only phenomenologically, to be derived and investigated mathematically and physically. The microscopic model under consideration is a one-dimensional chain of harmonic degrees of freedom that are coupled both to one another and to r anharmonic degrees of freedom. The cases of one and of two anharmonic bonds, respectively, are treated explicitly in this work. For this purpose, an analytical separation of the harmonic from the anharmonic degrees of freedom is carried out in two different ways. The anharmonic potential is chosen as a symmetric double-well potential, which, with the help of the Wick rotation, allows the calculation of the transitions between the two minima. The harmonic degrees of freedom are eliminated using the well-known Feynman-Vernon path-integral formalism [21]. This work first investigates how the position of an anharmonic bond affects its tunneling behavior. For an anharmonic bond localized far from the boundaries, Ohmic dissipative tunneling is found, which at temperature T = 0 leads to a phase transition as a function of a critical coupling constant Ccrit. This phase transition has already been explained in purely phenomenological models with Ohmic dissipation by mapping the system onto the Ising model [26]. If, however, the anharmonic bond lies at one of the boundaries of the macroscopically large chain, a transition from Ohmic to super-Ohmic dissipation occurs after a time tD that depends on the distance between the two anharmonic bonds, which is clearly visible in the kernel KM(τ).
For two anharmonic bonds, their indirect interaction plays a decisive role. It is shown that the distance D between the two bonds and the choice of the initial and final states determine the dissipation. Under the assumption that both anharmonic bonds tunnel simultaneously, a tunneling probability p(t) is calculated analogously to [14], but for two anharmonic bonds. As a result, we obtain either Ohmic dissipation, in the case that both anharmonic bonds change their total length, or super-Ohmic dissipation, if the tunneling leaves the total length of the two bonds unchanged.
Abstract:
The aim of this work was the construction and deployment of the atmospheric chemical ionization mass spectrometer AIMS for ground-based and airborne measurements of nitrous acid (HONO). A DC-driven gas-discharge ion source and a dedicated pressure-regulating valve were developed for the mass spectrometer. During the instrument intercomparison campaign FIONA (Formal Intercomparisons of Observations of Nitrous Acid) at an atmospheric simulation chamber in Valencia (Spain), AIMS was calibrated for HONO and deployed for the first time. In various experiments, HONO mixing ratios between 100 pmol/mol and 25 nmol/mol were generated and measured interference-free with AIMS. Within the measurement uncertainty of ±20%, the mass-spectrometric measurements agree well with the methods of differential optical absorption spectrometry and long-path absorption photometry. Mass spectrometry can therefore be used for the fast and sensitive detection of HONO in polluted urban air and in exhaust plumes. The first airborne measurements of HONO with AIMS were performed in 2011 during the CONCERT (Contrail and Cirrus Experiment) campaign on the DLR research aircraft Falcon. A detection limit of < 10 pmol/mol (3σ, 1 s) was achieved. In chase flights, molar HONO to nitrogen oxide ratios (HONO/NO) of 2.0 to 2.5% were measured in the young exhaust plumes of passenger aircraft. HONO is formed in the engine by the reaction of NO with OH. A measured decreasing trend of the HONO/NO ratio with increasing nitrogen-oxide emission index was confirmed and points to an OH limitation in the young exhaust plume. In addition to the mass-spectrometric measurements, aircraft measurements taken by the particle probe Forward Scattering Spectrometer Probe FSSP-300 in young contrails were evaluated and analyzed.
From the measured particle size distributions, extinction and optical-depth distributions were derived and made available for the investigation of various scientific questions, e.g. regarding the particle shape in young contrails and their climate impact. Within this work, the influence of aircraft and engine type on the microphysical and optical properties of contrails was investigated. Under similar meteorological conditions with respect to humidity, temperature and stable thermal stratification, two-minute-old contrails of the passenger aircraft types A319-111, A340-311 and A380-841 were compared. Within the measurement uncertainty, no change in the effective diameter of the particle size distributions was found. By contrast, with increasing aircraft weight, the particle number density (162 to 235 cm⁻³), the extinction (2.1 to 3.2 km⁻¹), the sinking depth of the contrail (120 to 290 m) and hence the optical depth of the contrails (0.25 to 0.94) all increase. The measured trend was confirmed by comparison with two independent contrail models. With the measurements, a linear dependence of the total extinction (extinction times contrail cross-sectional area) on fuel consumption per flight distance was found and confirmed.