880 results for Lagrangian Formulation
Abstract:
A study was made to evaluate the effect of a castor oil-based detergent on strawberry crops treated with different classes of pesticides, namely deltamethrin, folpet, tebuconazole, abamectin and mancozeb, in a controlled environment. Experimental greenhouse strawberry crops were cultivated in five different ways, with control groups using pesticides and castor oil-based detergent. The results showed that group 2, which was treated with the castor oil-based detergent, presented the lowest amount of pesticide residues and the highest quality of fruit produced.
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst-case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ||F(P)||_∞ of the map F(P) that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ||F(P)||_q for q ∈ [1,∞]. Of these, the best known minimization problem is for the energy ||F(P)||_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ||F(P)||_q, q ∈ [1,∞), is identical to that for ||F(P)||_1 when the original weight function w is replaced by w^q.
Thus, any algorithm GC(sum) solving the ||F(P)||_1 minimization problem also solves the one for ||F(P)||_q with q ∈ [1,∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ||F(P)||_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ||F(P)||_q-minimization problems converge to a solution of the ||F(P)||_∞-minimization problem (the identity ||F(P)||_∞ = lim_{q→∞} ||F(P)||_q is not enough to deduce that). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
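The two reductions above can be checked numerically; a minimal sketch in Python (the weight vectors and function names are illustrative, not taken from the paper):

```python
# Minimal numeric sketch (not the paper's implementation) of two facts stated
# above, for a hypothetical boundary weight map F(P) given as a weight list:
#  1. minimizing the l_q energy ||F(P)||_q is equivalent to minimizing the
#     l_1 energy after replacing each weight w(e) by w(e)**q, and
#  2. ||F(P)||_q converges to the l_inf energy max_e w(e) as q grows.

def lq_energy(weights, q):
    """l_q norm of the boundary weight map F(P)."""
    return sum(w ** q for w in weights) ** (1.0 / q)

# Boundary weights of two hypothetical segmentations P1, P2.
P1 = [0.2, 0.3, 0.9]
P2 = [0.5, 0.5, 0.5]

for q in (1, 2, 8, 64):
    # Fact 1: the l_q ordering equals the l_1 ordering of the powered weights,
    # because x -> x**(1/q) is monotone.
    assert (lq_energy(P1, q) < lq_energy(P2, q)) == \
           (sum(w ** q for w in P1) < sum(w ** q for w in P2))

# Fact 2: for large q the l_q energy approaches the maximum boundary weight.
assert abs(lq_energy(P1, 200) - max(P1)) < 0.01
```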
Abstract:
We compute the effective Lagrangian of static gravitational fields interacting with thermal fields. Our approach employs the usual imaginary time formalism as well as the equivalence between static and space-time independent external gravitational fields. This allows us to obtain a closed-form expression for the thermal effective Lagrangian in d space-time dimensions.
Abstract:
At each outer iteration of standard Augmented Lagrangian methods one tries to solve a box-constrained optimization problem with some prescribed tolerance. In the continuous world, using exact arithmetic, this subproblem is always solvable. Therefore, the possibility of finishing the subproblem resolution without satisfying the theoretical stopping conditions is not contemplated in usual convergence theories. However, in practice, one might not be able to solve the subproblem up to the required precision. This may be due to different reasons. One of them is that the presence of an excessively large penalty parameter could impair the performance of the box-constraint optimization solver. In this paper a practical strategy for decreasing the penalty parameter in situations like the one mentioned above is proposed. More generally, the different decisions that may be taken when, in practice, one is not able to solve the Augmented Lagrangian subproblem will be discussed. As a result, an improved Augmented Lagrangian method is presented, which takes into account numerical difficulties in a satisfactory way, preserving suitable convergence theory. Numerical experiments are presented involving all the CUTEr collection test problems.
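The outer iteration described above can be sketched on a toy equality-constrained quadratic; this illustrates only the classical multiplier and penalty updates (the problem data and constants are assumptions, and the box-constrained inner solver of the method is replaced by an exact linear solve, since here the subproblem is an unconstrained quadratic):

```python
# Toy sketch of an Augmented Lagrangian outer loop for
#   min ||x - t||^2  subject to  a.x = b
# (illustrative data; not the algorithm of the paper, which also handles the
# case where the inner solver fails and the penalty must be decreased).
import numpy as np

t = np.array([1.0, 2.0])          # unconstrained minimizer
a = np.array([1.0, 1.0])          # constraint normal
b = 2.0

x, lam, rho = np.zeros(2), 0.0, 10.0
for k in range(15):
    # Inner subproblem: minimize
    #   L(x) = ||x - t||^2 + lam*(a.x - b) + (rho/2)*(a.x - b)^2,
    # whose stationarity condition is the linear system below.
    H = 2.0 * np.eye(2) + rho * np.outer(a, a)
    g = 2.0 * t - (lam - rho * b) * a
    x = np.linalg.solve(H, g)
    c = a @ x - b                  # constraint violation
    lam += rho * c                 # first-order multiplier update
    if abs(c) > 1e-8:
        rho *= 10.0                # increase penalty while still infeasible
# x is now close to the constrained minimizer (0.5, 1.5).
```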
Abstract:
The generalized finite element method (GFEM) is applied to a nonconventional hybrid-mixed stress formulation (HMSF) for plane analysis. In the HMSF, three approximation fields are involved: stresses and displacements in the domain and displacement fields on the static boundary. The GFEM-HMSF shape functions are then generated by the product of a partition of unity associated to each field and the polynomials enrichment functions. In principle, the enrichment can be conducted independently over each of the HMSF approximation fields. However, stability and convergence features of the resulting numerical method can be affected mainly by spurious modes generated when enrichment is arbitrarily applied to the displacement fields. With the aim to efficiently explore the enrichment possibilities, an extension to GFEM-HMSF of the conventional Zienkiewicz-Patch-Test is proposed as a necessary condition to ensure numerical stability. Finally, once the extended Patch-Test is satisfied, some numerical analyses focusing on the selective enrichment over distorted meshes formed by bilinear quadrilateral finite elements are presented, thus showing the performance of the GFEM-HMSF combination.
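The construction of GFEM shape functions as products of a partition of unity and polynomial enrichments can be sketched in one dimension (the mesh, node placement, and enrichment choice below are illustrative assumptions, not the paper's HMSF discretization):

```python
# Minimal 1-D sketch of GFEM shape-function construction: each hat function
# phi_i (the partition of unity) is multiplied by local polynomial
# enrichments, here the scaled monomials ((x - x_i)/h)**j.
import numpy as np

nodes = np.array([0.0, 0.5, 1.0])  # toy mesh on [0, 1]
h = 0.5                            # element size

def hat(i, x):
    """Linear partition-of-unity function attached to node i."""
    return np.maximum(0.0, 1.0 - np.abs(x - nodes[i]) / h)

def gfem_shape(i, j, x):
    """Enriched shape function: PU times the j-th local polynomial."""
    return hat(i, x) * ((x - nodes[i]) / h) ** j

x = np.linspace(0.0, 1.0, 201)
# With j = 0 the enriched functions reproduce the partition of unity exactly:
pu_sum = sum(gfem_shape(i, 0, x) for i in range(len(nodes)))
assert np.allclose(pu_sum, 1.0)
```

Higher values of j add the extra approximation power discussed above; applying them selectively per field is what makes the stability (Patch-Test) check necessary.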
Abstract:
The stability of two recently developed pressure spaces has been assessed numerically: the space proposed by Ausas et al. [R.F. Ausas, F.S. Sousa, G.C. Buscaglia, An improved finite element space for discontinuous pressures, Comput. Methods Appl. Mech. Engrg. 199 (2010) 1019-1031], which is capable of representing discontinuous pressures, and the space proposed by Coppola-Owen and Codina [A.H. Coppola-Owen, R. Codina, Improving Eulerian two-phase flow finite element approximation with discontinuous gradient pressure shape functions, Int. J. Numer. Methods Fluids, 49 (2005) 1287-1304], which can represent discontinuities in pressure gradients. We assess the stability of these spaces by numerically computing the inf-sup constants of several meshes. The inf-sup constant is obtained as the solution of a generalized eigenvalue problem. Both spaces are in this way confirmed to be stable in their original form. An application of the same numerical assessment tool to the stabilized equal-order P-1/P-1 formulation is then reported. An interesting finding is that the stabilization coefficient can be safely set to zero in an arbitrary band of elements without compromising the formulation's stability. An analogous result is also reported for the mini-element P-1(+)/P-1 when the velocity bubbles are removed in an arbitrary band of elements.
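The numerical inf-sup test mentioned above can be sketched algebraically (the matrices below are toy stand-ins, not a finite element discretization): the squared inf-sup constant is the smallest eigenvalue of the generalized problem (B A⁻¹ Bᵀ) p = λ M_p p, with A the velocity norm matrix, B the discrete divergence and M_p the pressure mass matrix.

```python
# Minimal algebraic sketch of the numerical inf-sup test: compute the
# smallest eigenvalue of the Schur complement B A^{-1} B^T against the
# pressure mass matrix M_p. All matrices are illustrative toy data.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
nu, npr = 8, 3                       # velocity / pressure dofs (toy sizes)
A = np.eye(nu)                       # velocity norm matrix (SPD)
Mp = np.eye(npr)                     # pressure mass matrix (SPD)
B = rng.standard_normal((npr, nu))   # divergence (constraint) matrix

S = B @ np.linalg.solve(A, B.T)      # Schur complement
eigvals = eigh(S, Mp, eigvals_only=True)
beta = np.sqrt(max(eigvals.min(), 0.0))
# beta > 0: this (toy) velocity/pressure pair passes the inf-sup test;
# beta -> 0 under mesh refinement would indicate instability.
```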
The boundedness of penalty parameters in an augmented Lagrangian method with constrained subproblems
Abstract:
Augmented Lagrangian methods are effective tools for solving large-scale nonlinear programming problems. At each outer iteration, a minimization subproblem with simple constraints, whose objective function depends on updated Lagrange multipliers and penalty parameters, is approximately solved. When the penalty parameter becomes very large, solving the subproblem becomes difficult; therefore, the effectiveness of this approach is associated with the boundedness of the penalty parameters. In this paper, it is proved that under more natural assumptions than the ones employed until now, penalty parameters are bounded. For proving the new boundedness result, the original algorithm has been slightly modified. Numerical consequences of the modifications are discussed and computational experiments are presented.
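The link between feasibility progress and penalty boundedness can be illustrated with the classical update test (constants and names below are illustrative, not the paper's modified rule): the penalty parameter is increased only when the infeasibility has not decreased by a prescribed factor, so if the iterates become feasible fast enough, the parameter stays bounded.

```python
# Minimal sketch of the classical penalty update test: increase rho only if
# the constraint violation ||h(x_k)|| did not shrink by a factor tau.
# gamma and tau are typical illustrative values.
def update_penalty(rho, viol, viol_prev, gamma=10.0, tau=0.5):
    """Return the new penalty parameter after one outer iteration."""
    return rho * gamma if viol > tau * viol_prev else rho

rho, viol_prev = 10.0, 1.0
# Suppose infeasibility is halved at every outer iteration (tau-test passes):
for viol in (0.5, 0.25, 0.125):
    rho = update_penalty(rho, viol, viol_prev)
    viol_prev = viol
assert rho == 10.0   # the penalty parameter remains bounded
```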
Abstract:
The Lagrangian evolution of a 4-month-old anticyclonic intrathermocline eddy of the Canary Eddy Corridor is investigated from the trajectories of 5 satellite-tracked drifting buoys. Buoys were drogued above and below the Ekman depth, at 15 m and 100 m respectively. One buoy remained inside the eddy for almost 4 months, making it a long-lived coherent feature with a life span of at least 8 months. The eddy consisted of a central core in solid-body rotation, with a rather constant period of 4 days, and an outer ring rotating much more slowly, with periods between 8 and 12 days…
Abstract:
This work aims at assessing the acoustic efficiency of different thin noise barrier models. These designs frequently feature complex profiles, and their implementation in shape optimization processes may not always be easy in terms of determining their topological feasibility. A methodology is proposed to conduct both overall-shape and top-edge optimisations of thin cross-section acoustic barriers by idealizing them as profiles with null boundary thickness.
Abstract:
Global observations of the chemical composition of the atmosphere are essential for understanding and studying the present and future state of the earth's atmosphere. However, when analyzing field experiments, consideration of atmospheric motion is indispensable, because transport enables different chemical species, with different local natural and anthropogenic sources, to interact chemically, and so influences the chemical composition of the atmosphere. The distance over which that transport occurs is highly dependent upon meteorological conditions (e.g., wind speed, precipitation) and the properties of the chemical species themselves (e.g., solubility, reactivity). This interaction between chemistry and dynamics makes the study of atmospheric chemistry both difficult and challenging, and also demonstrates the relevance of including atmospheric motions in that context. In this doctoral thesis the large-scale transport of air over the eastern Mediterranean region during summer 2001, with a focus on August during the Mediterranean Intensive Oxidant Study (MINOS) measurement campaign, was investigated from a Lagrangian perspective. Analysis of back trajectories demonstrated transport of polluted air masses from western and eastern Europe in the boundary layer, from the North Atlantic/North American area in the middle and upper troposphere, and additionally from South Asia in the upper troposphere towards the eastern Mediterranean. Investigation of air mass transport near the tropopause indicated enhanced cross-tropopause transport, relative to the surrounding area, over the eastern Mediterranean region in summer. A large band of air mass transport across the dynamical tropopause develops in June, and is shifted toward higher latitudes in July and August.
This shift is associated with the development and intensification of the Arabian and South Asian upper-level anticyclones and, consequently, with areas of maximum clear-air turbulence, suggesting quasi-permanent areas of turbulent mixing of tropospheric and stratospheric air during summer over the eastern Mediterranean as a result of the large-scale synoptic circulation. In the context of the latest knowledge about the transport of polluted air masses towards the Mediterranean, and with increasing emissions, especially in developing countries like India, this likely gains in importance.
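The back-trajectory analysis underlying these results can be sketched kinematically; the wind field, receptor point and step sizes below are illustrative assumptions, not the trajectory model used in the thesis:

```python
# Minimal kinematic sketch of a Lagrangian back trajectory: starting at a
# receptor point, positions are integrated backward in time through a wind
# field (here a toy analytic field in degrees per hour).
import numpy as np

def wind(lon, lat):
    """Toy steady wind field (u, v); a real analysis uses gridded reanalysis winds."""
    return 0.2, 0.1 * np.sin(np.radians(lon))

lon, lat = 33.0, 35.0          # receptor over the eastern Mediterranean
dt = -1.0                      # backward time step, hours
track = [(lon, lat)]
for _ in range(120):           # 5-day back trajectory
    u, v = wind(lon, lat)      # simple Euler step (real models use better schemes)
    lon, lat = lon + u * dt, lat + v * dt
    track.append((lon, lat))
# The track runs from the receptor back toward the upstream source region
# (here westward, since u > 0 and time runs backward).
```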
Abstract:
This doctoral thesis studies blood flow by means of a finite element code (COMSOL Multiphysics). The artery contains either a Doppler catheter (in a concentric or off-centre position with respect to the axis of symmetry) or stenoses of various shapes and extents. The arteries are cylindrical solids, rigid, elastic or hyperelastic, with diameters of 6 mm, 5 mm, 4 mm and 2 mm. The blood flow is laminar, in steady and transient regimes, and blood is treated as a non-Newtonian Casson fluid, modified according to the formulation of Gonzales & Moraga. The numerical analyses are carried out in three-dimensional and two-dimensional domains, in the latter case analyzing the fluid-structure interaction. In the three-dimensional cases the arteries (fluid-dynamic simulations) are infinitely rigid: once the pressure field is obtained, a structural analysis is carried out to determine the changes in cross-section and the persistence of the disturbance on the flow. In the three-dimensional cases with a catheter, the blood flow rate is determined by identifying three values (maximum, minimum and mean), while for the 2D and three-dimensional cases with stenotic arteries a pressure law reproduces the blood pulse. The mesh is triangular (2D) or tetrahedral (3D), refined at the wall and downstream of the obstacle to capture the recirculations. Two appendices are attached to the thesis, which use CFD codes to study heat transfer in microchannels and the evaporation of water droplets in unconfined systems. The fluid dynamics in microchannels is analogous to hemodynamics in capillaries, and the Eulerian-Lagrangian method (evaporation simulations) schematizes the mixed nature of blood. The part concerning microchannels analyzes the transient following the application of a time-varying heat flux, varying the inlet velocity and the dimensions of the microchannel.
The investigation of droplet evaporation is a parametric 3D analysis that examines the weight of each individual parameter (external temperature, initial diameter, relative humidity, initial velocity, diffusion coefficient) to identify the one that most influences the phenomenon.
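The shear-thinning behaviour of the Casson blood model used in the thesis can be sketched as follows (parameter values are typical illustrative numbers, and this is the standard Casson law, not the Gonzales & Moraga modification):

```python
# Minimal sketch of the Casson model for blood:
#   sqrt(tau) = sqrt(tau_y) + sqrt(mu_c * gdot),
# which yields a shear-rate dependent apparent viscosity tau / gdot.
import numpy as np

tau_y = 0.005   # yield stress, Pa (typical order of magnitude; assumption)
mu_c = 0.0035   # Casson viscosity, Pa.s (assumption)

def apparent_viscosity(gdot):
    """Apparent viscosity of a Casson fluid at shear rate gdot (1/s)."""
    tau = (np.sqrt(tau_y) + np.sqrt(mu_c * gdot)) ** 2
    return tau / gdot

# Shear-thinning: viscosity decreases toward mu_c at high shear rates.
assert apparent_viscosity(1.0) > apparent_viscosity(1000.0)
assert abs(apparent_viscosity(1e6) - mu_c) / mu_c < 0.01
```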
Abstract:
Proxy data are essential for the investigation of climate variability on time scales larger than the historical meteorological observation period. The potential value of a proxy depends on our ability to understand and quantify the physical processes that relate the corresponding climate parameter and the signal in the proxy archive. These processes can be explored under present-day conditions. In this thesis, both statistical and physical models are applied for their analysis, focusing on two specific types of proxies, lake sediment data and stable water isotopes.
In the first part of this work, the basis is established for statistically calibrating new proxies from lake sediments in western Germany. A comprehensive meteorological and hydrological data set is compiled and statistically analyzed. In this way, meteorological time series are identified that can be applied for the calibration of various climate proxies. A particular focus is laid on the investigation of extreme weather events, which have rarely been the objective of paleoclimate reconstructions so far. Subsequently, a concrete example of a proxy calibration is presented. Maxima in the quartz grain concentration from a lake sediment core are compared to recent windstorms. The latter are identified from the meteorological data with the help of a newly developed windstorm index, combining local measurements and reanalysis data. The statistical significance of the correlation between extreme windstorms and signals in the sediment is verified with the help of a Monte Carlo method. This correlation is fundamental for employing lake sediment data as a new proxy to reconstruct windstorm records of the geological past.
The second part of this thesis deals with the analysis and simulation of stable water isotopes in atmospheric vapor on daily time scales.
In this way, a better understanding of the physical processes determining these isotope ratios can be obtained, which is an important prerequisite for the interpretation of isotope data from ice cores and the reconstruction of past temperature. In particular, the focus here is on the deuterium excess and its relation to the environmental conditions during evaporation of water from the ocean. As a basis for the diagnostic analysis and for evaluating the simulations, isotope measurements from Rehovot (Israel) are used, provided by the Weizmann Institute of Science. First, a Lagrangian moisture source diagnostic is employed in order to establish quantitative linkages between the measurements and the evaporation conditions of the vapor (and thus to calibrate the isotope signal). A strong negative correlation between relative humidity in the source regions and measured deuterium excess is found. On the contrary, sea surface temperature in the evaporation regions does not correlate well with deuterium excess. Although requiring confirmation by isotope data from different regions and longer time scales, this weak correlation might be of major importance for the reconstruction of moisture source temperatures from ice core data. Second, the Lagrangian source diagnostic is combined with a Craig-Gordon fractionation parameterization for the identified evaporation events in order to simulate the isotope ratios at Rehovot. In this way, the Craig-Gordon model can be directly evaluated with atmospheric isotope data, and better constraints for uncertain model parameters can be obtained. A comparison of the simulated deuterium excess with the measurements reveals that a much better agreement can be achieved using a wind speed independent formulation of the non-equilibrium fractionation factor instead of the classical parameterization introduced by Merlivat and Jouzel, which is widely applied in isotope GCMs. 
Finally, the first steps of the implementation of water isotope physics in the limited-area COSMO model are described, and an approach is outlined that allows simulated isotope ratios to be compared to measurements in an event-based manner by using a water tagging technique. The good agreement between model results from several case studies and measurements at Rehovot demonstrates the applicability of the approach. Because the model can be run with high, potentially cloud-resolving spatial resolution, and because it contains sophisticated parameterizations of many atmospheric processes, a complete implementation of isotope physics will allow detailed, process-oriented studies of the complex variability of stable isotopes in atmospheric waters in future research.
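The Monte Carlo significance test used in the first part for the windstorm/sediment correlation can be sketched as a permutation test; the two toy series below are assumptions, not data from the thesis:

```python
# Minimal sketch of a Monte Carlo (permutation) significance test for a
# correlation between a windstorm index and a sediment proxy signal.
import numpy as np

rng = np.random.default_rng(1)
n = 100
storms = rng.random(n)                     # toy windstorm index
signal = storms + 0.5 * rng.random(n)      # toy correlated proxy signal

obs = np.corrcoef(storms, signal)[0, 1]    # observed correlation

# Null distribution: correlation after destroying the pairing by shuffling.
null = np.array([np.corrcoef(rng.permutation(storms), signal)[0, 1]
                 for _ in range(2000)])
p_value = np.mean(np.abs(null) >= abs(obs))
# A small p-value means the observed correlation is unlikely to arise by
# chance, supporting the use of the proxy for windstorm reconstruction.
```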