Abstract:
The interaction of short intense laser pulses with atoms/molecules produces a multitude of highly nonlinear processes requiring a non-perturbative treatment. Detailed study of these highly nonlinear processes by numerically solving the time-dependent Schrödinger equation becomes a daunting task when the number of degrees of freedom is large. Moreover, the coupling between the electronic and nuclear degrees of freedom further aggravates the computational problem. In the present work we show that the time-dependent Hartree (TDH) approximation, which neglects correlation effects, gives an unreliable description of the system dynamics both in the absence and in the presence of an external field. A theoretical framework is therefore required that treats the electrons and nuclei on an equal footing and fully quantum mechanically. To address this issue we discuss two approaches, namely multicomponent density functional theory (MCDFT) and the multiconfiguration time-dependent Hartree (MCTDH) method, that go beyond the TDH approximation and describe the correlated electron-nuclear dynamics accurately. In the MCDFT framework, where the time-dependent electronic and nuclear densities are the basic variables, we discuss an algorithm to calculate the exact Kohn-Sham (KS) potentials for small model systems. By simulating the photodissociation process in a model hydrogen molecular ion, we show that the exact KS potentials contain all the many-body effects and give insight into the system dynamics. In the MCTDH approach, the wave function is expanded as a sum of products of single-particle functions (SPFs). The MCTDH method is able to describe electron-nuclear correlation effects because both the SPFs and the expansion coefficients evolve in time, giving an accurate description of the system dynamics. We show that the MCTDH method is suitable for studying a variety of processes such as the fragmentation of molecules, high-order harmonic generation, the two-center interference effect, and the lochfrass effect. We discuss these phenomena in a model hydrogen molecular ion and a model hydrogen molecule. The inclusion of absorbing boundaries in the mean-field approximation and its consequences are discussed using the model hydrogen molecular ion. To this end, two types of calculations are considered: (i) a variational approach with a complex absorbing potential included in the full many-particle Hamiltonian and (ii) an approach in the spirit of time-dependent density functional theory (TDDFT), including complex absorbing potentials in the single-particle equations. It is elucidated that for small grids the TDDFT approach is superior to the variational approach.
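For reference, the MCTDH expansion referred to above has the standard form of a sum of Hartree products of time-dependent SPFs; the following is a generic sketch of that ansatz, not a formula specific to the model systems treated in this work:

```latex
\Psi(q_1,\ldots,q_f,t) \;=\; \sum_{j_1=1}^{n_1}\cdots\sum_{j_f=1}^{n_f}
  A_{j_1\ldots j_f}(t)\,\prod_{\kappa=1}^{f}\varphi_{j_\kappa}^{(\kappa)}(q_\kappa,t)
```

Both the coefficients and the SPFs evolve in time; the TDH approximation corresponds to the single-configuration limit (one SPF per degree of freedom), which is why it cannot capture electron-nuclear correlation.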
Abstract:
This thesis presents a method for the numerical solution of the two-dimensional shallow water equations, which model the flow behavior of bodies of water whose surface extent is much larger than their depth. These equations describe the gravity-driven temporal evolution of a given initial state in free-surface flows. This class includes problems such as the behavior of waves on shallow beaches or the propagation of a flood wave in a river. These examples clearly show the need to account for the influence of topography and to handle wet/dry transitions within the scheme. This dissertation presents a finite-volume scheme, of high order of accuracy in regions of sufficient water depth, for computing the temporal evolution of the solution of the two-dimensional shallow water equations from given initial and boundary conditions on an unstructured grid. The scheme accounts for the influence of topographic source terms on the flow and, in so-called "lake at rest" steady states, balances this influence exactly against the numerical fluxes. The scheme is based on a first-order finite-volume approach, which is extended by a WENO reconstruction using a least-squares method and a so-called space-time expansion with the goal of obtaining a scheme of arbitrarily high order. The Riemann problems arising in the scheme are solved with the Riemann solver of Chinnayya, LeRoux and Seguin (1999), which takes the influence of topography on the flow into account. It is proved in the thesis that the coefficients of the reconstruction polynomials computed by the WENO procedure approximate the spatial derivatives of the function to be reconstructed with an order of accuracy consistent with the order of the scheme. Likewise, it is proved that the coefficients of the polynomial resulting from the space-time expansion approximate the spatial and temporal derivatives of the solution of the initial value problem. Furthermore, the well-balancedness of the scheme is proved for arbitrarily high numerical order. For the treatment of wet/dry transitions, a method of order reduction depending on water depth and cell size is proposed. This is necessary in order to avoid negative values of the water depth in the computation, which can arise as a consequence of oscillations of the space-time polynomial. Numerical results confirming the theoretical order of the scheme are presented, as well as examples demonstrating the excellent properties of the overall scheme in the computation of challenging problems.
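For context, a common form of the two-dimensional shallow water equations with a bathymetry source term is given below as a generic sketch (the notation is not necessarily that of the thesis):

```latex
\begin{align*}
\partial_t h + \partial_x(hu) + \partial_y(hv) &= 0,\\
\partial_t(hu) + \partial_x\!\bigl(hu^2 + \tfrac{1}{2}gh^2\bigr) + \partial_y(huv) &= -\,g\,h\,\partial_x b,\\
\partial_t(hv) + \partial_x(huv) + \partial_y\!\bigl(hv^2 + \tfrac{1}{2}gh^2\bigr) &= -\,g\,h\,\partial_y b,
\end{align*}
```

where h is the water depth, (u, v) the depth-averaged velocity, g the gravitational acceleration and b the bottom topography. The "lake at rest" steady state is u = v = 0 with h + b = const; a well-balanced scheme reproduces this state exactly at the discrete level.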
Abstract:
An improved understanding of soil organic carbon (Corg) dynamics in interaction with the mechanisms of soil structure formation is important in terms of sustainable agriculture and the reduction of the environmental costs of agricultural ecosystems. However, information on the physical and chemical processes influencing the formation and stabilization of water-stable aggregates in association with Corg sequestration is scarce. Long-term soil experiments are important in evaluating open questions about management-induced effects on soil Corg dynamics in interaction with soil structure formation. The objectives of the present thesis were: (i) to determine the long-term impacts of different tillage treatments on the interaction between macro-aggregation (>250 µm) and light fraction (LF) distribution and on C sequestration in plots differing in soil texture and climatic conditions; (ii) to determine the impact of different tillage treatments on temporal changes in the size distribution of water-stable aggregates and on macro-aggregate turnover; and (iii) to evaluate macro-aggregate rebuilding in soils with varying initial Corg contents, organic matter (OM) amendments and clay contents in a short-term incubation experiment. Soil samples were taken at 0-5 cm, 5-25 cm and 25-40 cm depth from up to four commercially used fields located in arable loess regions of eastern and southern Germany after 18-25 years of different tillage treatments with almost identical experimental setups per site. At each site, one large field with spatially homogeneous soil properties was divided into three plots, and one of the following three tillage treatments was carried out in each plot: (i) conventional tillage (CT) with annual mouldboard ploughing to 25-30 cm, (ii) mulch tillage (MT) with a cultivator or disc harrow to 10-15 cm depth, and (iii) no tillage (NT) with direct drilling. The crop rotation at each site consisted of sugar beet (Beta vulgaris L.) - winter wheat (Triticum aestivum L.) - winter wheat. Crop residues were left on the field and crop management was carried out following the regional standards of agricultural practice. To investigate the research objectives stated above, three experiments were conducted: experiment (i) was performed with soils sampled from four sites in April 2010 (wheat stand); experiment (ii) was conducted with soils sampled from three sites in April 2010, September 2011 (after harvest or sugar beet stand), November 2011 (after tillage) and April 2012 (bare soil or wheat stand); and an incubation study (experiment (iii)) was performed with soil sampled from one site in April 2010. Based on the aforementioned research objectives and experiments, the main findings were: (i) Consistent results were found between the four long-term tillage fields, which varied in texture and climatic conditions. Correlation analysis of the yields of macro-aggregates against the yields of free LF (≤1.8 g cm-3) and occluded LF, respectively, suggested that the effective litter translocation into greater soil depths and the higher litter input under CT and MT compensated in the long term for the greater physical impact of the tillage equipment compared with NT. The Corg stocks (kg Corg m−2) in 522 kg soil, based on the equivalent soil mass approach (CT: 0–40 cm, MT: 0–38 cm, NT: 0–36 cm), increased in the order CT (5.2) = NT (5.2) < MT (5.7).
The significantly (p ≤ 0.05) highest Corg stocks under MT were probably a result of high crop yields in combination with reduced physical tillage impact and effective litter incorporation, resulting in a Corg sequestration rate of 31 g C m−2 yr−1. (ii) Significantly higher yields of macro-aggregates (g kg−1 soil) under NT (732-777) and MT (680-726) than under CT (542-631) were generally restricted to the 0-5 cm sampling depth for all sampling dates. Temporal changes in aggregate size distribution were only small and no tillage-induced net effect was detectable. Thus, we assume that the physical impact of the tillage equipment was only small, or that the impact was compensated by greater soil mixing and effective litter translocation into greater soil depths under CT, which probably resulted in high re-aggregation. (iii) The short-term incubation study showed that macro-aggregate yields (g kg−1 soil) were higher after 28 days in soils receiving OM (121.4-363.0) than in the control soils (22.0-52.0), accompanied by higher contents of microbial biomass carbon and ergosterol. The highest soil respiration rates, observed within the first three days of incubation after OM amendment, indicated that macro-aggregate formation is a fast process. Most of the rebuilt macro-aggregates were formed within the first seven days of incubation (42-75%). Nevertheless, macro-aggregate formation was ongoing throughout the entire 28 days of incubation, as indicated by higher soil respiration rates at the end of the incubation period in OM-amended soils than in the control soils. At the same time, decreasing carbon contents within macro-aggregates over time indicated that newly occluded OM within the rebuilt macro-aggregates served as a Corg source for the microbial biomass. The different clay contents played only a minor role in macro-aggregate formation under the particular conditions of the incubation study. Overall, no net changes in macro-aggregation were identified in the short term. Furthermore, no indications of effective long-term Corg sequestration under NT in comparison with CT were found. The interaction of soil disturbance, litter distribution and fast re-aggregation suggested that a distinct steady state in terms of soil aggregation had become established for each tillage treatment. However, continuous application of MT, with a combination of reduced physical tillage impact and effective litter incorporation, may offer some potential for improving soil structure and may therefore prevent incorporated LF from rapid decomposition and result in higher C sequestration in the long term.
Abstract:
In model-based vision, there are a huge number of possible ways to match model features to image features. In addition to model shape constraints, there are important match-independent constraints that can efficiently reduce the search without the combinatorics of matching. I demonstrate two specific modules in the context of a complete recognition system, Reggie. The first is a region-based grouping mechanism to find groups of image features that are likely to come from a single object. The second is an interpretive matching scheme to make explicit hypotheses about occlusion and instabilities in the image features.
Abstract:
A new information-theoretic approach is presented for finding the pose of an object in an image. The technique does not require information about the surface properties of the object, besides its shape, and is robust with respect to variations of illumination. In our derivation, few assumptions are made about the nature of the imaging process. As a result the algorithms are quite general and can foreseeably be used in a wide variety of imaging situations. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images with computed tomography (CT) images, aligning a complex 3D object model to real scenes including clutter and occlusion, tracking a human head in a video sequence and aligning a view-based 2D object model to real images. The method is based on a formulation of the mutual information between the model and the image called EMMA. As applied here the technique is intensity-based, rather than feature-based. It works well in domains where edge or gradient-magnitude based methods have difficulty, yet it is more robust than traditional correlation. Additionally, it has an efficient implementation that is based on stochastic approximation. Finally, we will describe a number of additional real-world applications that can be solved efficiently and reliably using EMMA. EMMA can be used in machine learning to find maximally informative projections of high-dimensional data. EMMA can also be used to detect and correct corruption in magnetic resonance images (MRI).
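As an illustration of the underlying idea (not the EMMA estimator itself, which relies on stochastic approximation rather than dense histograms), the following is a minimal sketch of a histogram-based mutual information score between a rendered model image and an observed image, assuming both are available as NumPy arrays:

```python
import numpy as np

def mutual_information(model_vals, image_vals, bins=32):
    """Histogram estimate of the mutual information between two intensity arrays."""
    joint, _, _ = np.histogram2d(model_vals.ravel(), image_vals.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of model intensities
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image intensities
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# A pose search would evaluate this score for candidate transformations and
# keep the pose that maximizes it.
```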
Abstract:
The registration of pre-operative volumetric datasets to intra-operative two-dimensional images provides an improved way of verifying patient position and medical instrument location. In applications from orthopedics to neurosurgery, it has great value in maintaining up-to-date information about changes due to intervention. We propose a mutual-information-based registration algorithm to establish the proper alignment. For optimization purposes, we compare the performance of the non-gradient Powell method and two slightly different versions of a stochastic gradient ascent strategy: one using a sparsely sampled histogramming approach and the other Parzen windowing to carry out probability density approximation. Our main contribution lies in adapting the stochastic approximation scheme successfully applied in 3D-3D registration problems to the 2D-3D scenario, which obviates the need for the generation of full DRRs at each iteration of pose optimization. This facilitates considerable savings in computation expense. We also introduce a new probability density estimator for image intensities via sparse histogramming, derive gradient estimates for the density measures required by the maximization procedure, and introduce the framework for a multiresolution strategy for the problem. Registration results are presented on fluoroscopy and CT datasets of a plastic pelvis and a real skull, and on a high-resolution CT-derived simulated dataset of a real skull, a plastic skull, a plastic pelvis and a plastic lumbar spine segment.
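As a rough sketch of the Parzen-window density estimation that underlies such stochastic-approximation schemes (variable names and the fixed kernel width are illustrative assumptions; the actual gradient estimates are derived in the paper), the entropy of an intensity distribution can be approximated from two small random samples A and B:

```python
import numpy as np

def parzen_entropy(sample_a, sample_b, sigma=10.0):
    """Entropy estimate: the density at each point of B is a Parzen sum of
    Gaussian kernels centered on the points of A."""
    a = np.asarray(sample_a, dtype=float)
    b = np.asarray(sample_b, dtype=float)
    diff = b[:, None] - a[None, :]                         # pairwise differences
    kernel = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    density_at_b = kernel.mean(axis=1)                     # Parzen estimate p(b_j)
    return float(-np.mean(np.log(density_at_b + 1e-12)))

# Mutual information can be assembled from such entropy estimates,
# I = H(X) + H(Y) - H(X, Y), and its gradient with respect to the pose
# parameters drives the stochastic gradient ascent.
```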
Abstract:
We investigate the differences --- conceptually and algorithmically --- between affine and projective frameworks for the tasks of visual recognition and reconstruction from perspective views. It is shown that an affine invariant exists between any view and a fixed view chosen as a reference view. This implies that for tasks for which a reference view can be chosen, such as in alignment schemes for visual recognition, projective invariants are not really necessary. We then use the affine invariant to derive new algebraic connections between perspective views. It is shown that three perspective views of an object are connected by certain algebraic functions of image coordinates alone (no structure or camera geometry needs to be involved).
Abstract:
This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare this to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric that is used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_epsilon norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account high-order structure in images.
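A minimal sketch of the sparse-approximation step, using iterative shrinkage-thresholding (ISTA) to solve the basis pursuit de-noising problem min_x ½||y − Dx||² + λ||x||₁ over a dictionary D. This is illustrative only: the paper's dictionary consists of class-specific correlation functions or multiscale bases, and the authors' solver may differ.

```python
import numpy as np

def basis_pursuit_denoise(D, y, lam=0.1, n_iter=500):
    """ISTA for min_x 0.5*||y - D x||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - y)           # gradient of the quadratic term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# Example: recover a sparse signal over a random dictionary (synthetic data).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[[3, 57, 190]] = [1.5, -2.0, 0.8]
y = D @ x_true + 0.01 * rng.standard_normal(64)
x_hat = basis_pursuit_denoise(D, y, lam=0.05)
```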
Abstract:
We study four measures of problem instance behavior that might account for the observed differences in interior-point method (IPM) iterations when these methods are used to solve semidefinite programming (SDP) problem instances: (i) an aggregate geometry measure related to the primal and dual feasible regions (aspect ratios) and norms of the optimal solutions, (ii) the (Renegar-) condition measure C(d) of the data instance, (iii) a measure of the near-absence of strict complementarity of the optimal solution, and (iv) the level of degeneracy of the optimal solution. We compute these measures for the SDPLIB suite problem instances and measure the correlation between these measures and IPM iteration counts (solved using the software SDPT3) when the measures have finite values. Our conclusions are roughly as follows: the aggregate geometry measure is highly correlated with IPM iterations (CORR = 0.896), and is a very good predictor of IPM iterations, particularly for problem instances with solutions of small norm and aspect ratio. The condition measure C(d) is also correlated with IPM iterations, but less so than the aggregate geometry measure (CORR = 0.630). The near-absence of strict complementarity is weakly correlated with IPM iterations (CORR = 0.423). The level of degeneracy of the optimal solution is essentially uncorrelated with IPM iterations.
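A minimal sketch of how such a correlation between a behavioral measure and IPM iteration counts might be computed (the values are hypothetical, and whether raw or log-transformed measures are correlated is an assumption here):

```python
import numpy as np

# Hypothetical values: one geometry measure and one iteration count per SDPLIB instance.
geometry_measure = np.array([1.2e2, 3.4e3, 8.9e1, 5.6e4, 2.3e2])
ipm_iterations = np.array([14, 23, 12, 31, 16])

# Pearson correlation, here on log-transformed measures (an assumption).
corr = np.corrcoef(np.log10(geometry_measure), ipm_iterations)[0, 1]
print(f"CORR = {corr:.3f}")
```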
Abstract:
A problem in the archaeometric classification of Catalan Renaissance pottery is the fact that the clay supply of the pottery workshops was centrally organized by guilds, and therefore usually all potters of a single production centre produced chemically similar ceramics. However, when the glazes of the ware are analysed, a large number of inclusions is usually found in the glaze, which reveal technological differences between individual workshops. These inclusions were used by the potters in order to opacify the transparent glaze and to achieve a white background for further decoration. In order to distinguish the different technological preparation procedures of the individual workshops, the chemical composition of these inclusions as well as their size in the two-dimensional cut is recorded with a scanning electron microscope. Based on the latter, a frequency distribution of the apparent diameters is estimated for each sample and type of inclusion. Following an approach by S.D. Wicksell (1925), it is in principle possible to transform the distributions of the apparent 2D diameters back to those of the true three-dimensional bodies. The applicability of this approach and its practical problems are examined using different ways of kernel density estimation and Monte-Carlo tests of the methodology. Finally, it is tested to what extent the obtained frequency distributions can be used to classify the pottery.
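A minimal Monte-Carlo sketch of the forward direction of Wicksell's problem: simulating the apparent 2D diameters produced when spheres with a given true diameter distribution are cut by a random plane. The assumptions are illustrative (spherical inclusions, a lognormal size distribution, and size-weighted sampling because larger spheres are more likely to be intersected):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed true 3D diameter distribution of the inclusions (lognormal, arbitrary parameters).
true_d = rng.lognormal(mean=0.0, sigma=0.4, size=100_000)

# A random plane hits a sphere with probability proportional to its diameter,
# so resample with size-weighting.
weights = true_d / true_d.sum()
hit_d = rng.choice(true_d, size=50_000, p=weights)

# For an intersected sphere, the distance of its center to the plane is uniform
# on [0, D/2]; the apparent section diameter follows from Pythagoras.
z = rng.uniform(0.0, hit_d / 2.0)
apparent_d = np.sqrt(hit_d**2 - 4.0 * z**2)

# Comparing the histogram of apparent_d with measured section diameters is the
# basis for Monte-Carlo tests of the unfolding methodology.
```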
Abstract:
The Hardy-Weinberg law, formulated about 100 years ago, states that under certain assumptions, the three genotypes AA, AB and BB at a bi-allelic locus are expected to occur in the proportions p², 2pq, and q² respectively, where p is the allele frequency of A, and q = 1 − p. There are many statistical tests in use to check whether empirical marker data obey the Hardy-Weinberg principle. Among these are the classical chi-square test (with or without continuity correction), the likelihood ratio test, Fisher's exact test, and exact tests in combination with Monte Carlo and Markov chain algorithms. Tests for Hardy-Weinberg equilibrium (HWE) are numerical in nature, requiring the computation of a test statistic and a p-value. There is, however, ample room for the use of graphics in HWE tests, in particular for the ternary plot. Nowadays, many genetic studies use genetic markers known as single nucleotide polymorphisms (SNPs). SNP data come in the form of counts, but from the counts one typically computes genotype frequencies and allele frequencies. These frequencies satisfy the unit-sum constraint, and their analysis therefore falls within the realm of compositional data analysis (Aitchison, 1986). SNPs are usually bi-allelic, which implies that the genotype frequencies can be adequately represented in a ternary plot. Compositions that are in exact HWE describe a parabola in the ternary plot. Compositions for which HWE cannot be rejected in a statistical test are typically "close" to the parabola, whereas compositions that differ significantly from HWE are "far". By rewriting the statistics used to test for HWE in terms of heterozygote frequencies, acceptance regions for HWE can be obtained that can be depicted in the ternary plot. This way, compositions can be tested for HWE purely on the basis of their position in the ternary plot (Graffelman & Morales, 2008). This leads to attractive graphical representations in which large numbers of SNPs can be tested for HWE in a single graph. Several examples of graphical tests for HWE (implemented in R software) will be shown, using SNP data from different human populations.
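A minimal sketch of the classical chi-square test for HWE from genotype counts, written in Python rather than the R implementation mentioned above (the example counts are hypothetical):

```python
from scipy.stats import chi2

def hwe_chisq(n_aa, n_ab, n_bb):
    """Classical chi-square test for Hardy-Weinberg equilibrium (1 degree of freedom)."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # allele frequency of A
    q = 1.0 - p
    expected = [n * p**2, 2 * n * p * q, n * q**2]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return stat, chi2.sf(stat, df=1)

# Example with hypothetical counts: 6022 AA, 4421 AB, 843 BB.
stat, p_value = hwe_chisq(6022, 4421, 843)
```

The genotype frequencies (fAA, fAB, fBB) are exactly the barycentric coordinates of a marker in the ternary plot, which is why the acceptance region of this test can be drawn directly in that plot.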
Abstract:
The aims are to present the main theoretical currents that have arisen around achievement motivation, as well as the various interpretations given to the results obtained in research in this area over the last 35 years, and to analyze the personality characteristics of groups of subjects with different achievement motivation. Participants: volunteer third-year BUP students from the Instituto Padre Manjón in Granada, with a mean age of 17 years. Different questionnaires and tests were administered to the students in order to determine their degree of resultant achievement motivation, based on the subjects' performance and their mean final course grades. Instruments: the 'N ACH' measure of achievement motivation, the Ray-Lynn achievement motivation scale, Hermans' PMT, Mehrabian's achievement motivation questionnaire, anxiety questionnaires (DAS, TAQ, MAS) to measure the fear-of-failure motive, an achievement test, a general intelligence test, the Differential Aptitude Test (DAT), and the 16 PF personality questionnaire. Analyses: BMDP 2D Frequency Count Routine; a chi-square test of the frequency of achievement-related versus non-achievement-related stories; Pearson correlation; application of the Kuder-Richardson formula; BMDP 6R Partial Correlation and Multivariate Regression; and computation of the profile similarity coefficient. The results of the research support the position of Atkinson and Birch. Given the impossibility of using the questionnaires as an alternative measure to the projective system, the combined 'N ACH-DAS' score is the only one that offers sufficient guarantees of actually measuring the subjects' resultant achievement motivation. The pictures with the highest level of suggestion are those that elicit the greatest activation of the need for achievement; the 'N ACH TAT' score makes it possible to predict the subjects' performance; subjects with the same level of resultant achievement motivation differ significantly from one another in personality depending on sex; and Hermans' PMT questionnaire and Ray's OT subscale showed the highest predictive validity with respect to the performance of women and men.
Abstract:
This project was carried out with the Computer Vision Group of the Department of Computer Architecture and Technology (ATC) of the Universitat de Girona. It focuses on medical image analysis; specifically, prostate images are analyzed in connection with developments under way in the aforementioned vision group. The objectives of this project are to develop two image-processing modules that address two important stages of image processing: an image pre-processing module, consisting of three filters, and a segmentation module to locate the prostate within the images to be processed. The project uses the C++ programming language, in particular the open-source ITK (Insight Toolkit) libraries, which are aimed at medical image processing. In addition to this tool, others are used, such as Qt, a toolkit for building graphical user interfaces.
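A rough sketch of such a pre-processing and segmentation pipeline, written here with the Python SimpleITK bindings rather than the C++ ITK code the project actually uses; the filter choices, parameters, file names and seed position are illustrative assumptions, not the project's:

```python
import SimpleITK as sitk

# Pre-processing: three illustrative filters (denoise, smooth, rescale intensities).
image = sitk.ReadImage("prostate_slice.png", sitk.sitkFloat32)   # hypothetical input
denoised = sitk.Median(image, [2, 2])                            # median filter
smoothed = sitk.CurvatureFlow(denoised, timeStep=0.125,
                              numberOfIterations=5)              # edge-preserving smoothing
normalized = sitk.RescaleIntensity(smoothed, 0.0, 255.0)         # intensity rescaling

# Segmentation: region growing from a seed assumed to lie inside the prostate.
seed = (128, 128)                                                # hypothetical seed position
mask = sitk.ConnectedThreshold(normalized, seedList=[seed],
                               lower=80.0, upper=180.0)
sitk.WriteImage(sitk.Cast(mask, sitk.sitkUInt8), "prostate_mask.png")
```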