993 results for Lagrangian drifter
Abstract:
[EN] Aim: A key life-history component for many animals is the need for movement between different geographical locations at particular times. Green turtle (Chelonia mydas) hatchlings disperse from their natal location to spend an early pelagic stage in the ocean, followed by a neritic stage where small juveniles settle in coastal areas. In this study, we combined genetic and Lagrangian drifter data to investigate the connectivity between natal and foraging locations. In particular, we focus on the evidence for transatlantic transport.
Abstract:
[EN] Green turtle hatchlings disperse away from their natal location to spend an early pelagic stage in the ocean, followed by a neritic stage where small juveniles settle in coastal areas. Here, we combined genetic and Lagrangian drifter data to investigate the connectivity between natal and foraging locations, focusing in particular on the evidence for transatlantic transport. Our results supported the general hypothesis that turtles tend to select foraging areas ‘closest-to-home’.
Abstract:
The skill of numerical Lagrangian drifter trajectories in three numerical models is assessed by comparing these numerically obtained paths to the trajectories of drifting buoys in the real ocean. The skill assessment is performed using the two-sample Kolmogorov–Smirnov statistical test. To demonstrate the assessment procedure, it is applied to three different models of the Agulhas region. The test can be performed either on the crossing positions of one-dimensional sections, to test model performance at specific locations, or on the total two-dimensional data set of trajectories. The test yields four quantities: a binary decision of model skill, a confidence level that can be used as a measure of goodness-of-fit of the model, a test statistic that can be used to determine the sensitivity of the confidence level, and cumulative distribution functions that aid in the qualitative analysis. The ordering of models by their confidence levels is the same as the ordering based on the qualitative analysis, which suggests that the method is suited for model validation. Only one of the three models, a 1/10° two-way nested regional ocean model, might have skill in the Agulhas region. The other two models, a 1/2° global model and a 1/8° assimilative model, might have skill only on some sections in the region.
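To make the assessment procedure concrete, here is a minimal sketch (not the paper's code) of a two-sample Kolmogorov–Smirnov comparison between observed and simulated crossing positions along one section, using scipy; the synthetic arrays and the 5% significance threshold are assumptions for illustration only.

```python
# Illustrative sketch (not the paper's code): two-sample Kolmogorov-Smirnov test
# comparing drifter and model crossing positions along one hypothetical section.
# The synthetic data and the 5% significance level below are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
obs_crossings = rng.normal(-34.0, 1.5, size=200)   # observed buoy crossing latitudes (deg)
mod_crossings = rng.normal(-34.5, 1.8, size=500)   # simulated trajectory crossing latitudes (deg)

res = ks_2samp(obs_crossings, mod_crossings)
alpha = 0.05                                       # assumed significance level
has_skill = res.pvalue > alpha                     # binary skill decision: fail to reject equality

print(f"KS statistic = {res.statistic:.3f}, p-value = {res.pvalue:.3f}, skill = {has_skill}")
```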
Abstract:
Time series of transports in the Agulhas region have been constructed by simulating Lagrangian drifter trajectories in a 1/10° two-way nested ocean model. Using these 34-year-long time series, it is shown that smaller (larger) Agulhas Current transport leads to larger (smaller) Indian-Atlantic inter-ocean exchange. When transport is low, the Agulhas Current detaches farther downstream from the African continental slope. Moreover, the lower inertia suppresses the generation of anticyclonic vorticity. These two effects cause the Agulhas retroflection to move westward and enhance Agulhas leakage. In the model, a 1 Sv decrease in Agulhas Current transport at 32°S results in a 0.7 ± 0.2 Sv increase in Agulhas leakage.
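The quoted sensitivity can be read as a regression slope between the two transport time series; the snippet below is only a schematic illustration with synthetic monthly data (the 0.7 Sv per Sv response is taken from the abstract, everything else is assumed), showing how such a slope could be estimated with a least-squares fit.

```python
# Schematic only: estimating the sensitivity of Agulhas leakage to Agulhas Current
# transport as a least-squares slope between two (here synthetic) transport series.
import numpy as np

rng = np.random.default_rng(0)
n_months = 34 * 12                                                     # length matching the 34-year series
current_sv = 70.0 + 5.0 * rng.standard_normal(n_months)                # hypothetical transport at 32S (Sv)
leakage_sv = 15.0 - 0.7 * (current_sv - 70.0) + 2.0 * rng.standard_normal(n_months)  # built-in -0.7 response

slope, intercept = np.polyfit(current_sv, leakage_sv, deg=1)
print(f"d(leakage)/d(transport) ~ {slope:.2f} Sv per Sv")              # should recover roughly -0.7
```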
Abstract:
Current dynamics in the Strait of Bonifacio (southern Corsica) were investigated at small scale during the STELLAMARE1 multidisciplinary cruise in summer 2012, using in situ measurements and modeling data. The Strait of Bonifacio is a particularly sensitive marine area in which specific conservation measures have been taken to preserve the natural environment and wild species. Good knowledge of the hydrodynamics in this area is essential to optimize the Marine Protected Area's management rules. Therefore, we used a high-resolution model (400 m) based on the MARS3D code to investigate the main flux exchanges and to formulate certain hypotheses about the formation of possible eddy structures. The aim of the present paper is first to synthesize the results obtained by combining Acoustic Doppler Current Profiler data, hydrological parameters, Lagrangian drifter data, and satellite observations such as MODIS OC5 chlorophyll a data and Metop-A AVHRR Sea Surface Temperature (SST) data. These elements are then used to validate the presence of the mesoscale eddies simulated by the model and their recurrence outside the cruise period. To complete the analysis, the response of the 3D hydrodynamical model was evaluated under two opposing wind systems, and certain biases were detected. Strong velocities of up to 1 m s⁻¹ were recorded in the eastern part due to the Venturi effect; a complementary system of vortices governed by the Coriolis effect and the west wind was observed in the western part, and horizontal stratification was identified in the central part under typical wind conditions.
Abstract:
We use a Lagrangian descriptor (the so-called function M), which measures the length of particle trajectories on the ocean surface over a given interval of time. With this tool we identify the Lagrangian skeleton of the flow and compare it across three datasets over the Gulf of Mexico during the year 2010. The satellite altimetry data come from AVISO and the simulations from HYCOM GOMl0.04 experiments 30.1 and 31.0. We contrast the Lagrangian structure and transport using the evolution of several surface drifters. We show that the agreement, in relevant cases, between Lagrangian structures and the dynamics of drifters depends on the quality of the data in the studied area.
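For reference, the Lagrangian descriptor M is commonly defined as the arc length of the trajectory through a point over a symmetric time window; a standard formulation (the paper's exact window and normalization may differ), with v the surface velocity field and τ the integration half-window, is:

```latex
% Arc-length definition of the Lagrangian descriptor M (standard form; the
% paper's exact convention may differ).
M(\mathbf{x}_0, t_0; \tau) \;=\; \int_{t_0-\tau}^{t_0+\tau}
  \left\| \mathbf{v}\bigl(\mathbf{x}(t; \mathbf{x}_0, t_0),\, t\bigr) \right\| \, dt ,
\qquad \text{with } \dot{\mathbf{x}} = \mathbf{v}(\mathbf{x}, t), \;\; \mathbf{x}(t_0) = \mathbf{x}_0 .
```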
Abstract:
This work examines the sources of moisture affecting the semi-arid Brazilian Northeast (NEB) during its pre-rainy and rainy season (JFMAM) through a Lagrangian diagnosis method. The FLEXPART model identifies the humidity contributions to the moisture budget over a region through the continuous computation of changes in the specific humidity along backward or forward trajectories over a period of up to 10 days. The numerical experiments were done for the period spanning 2000 to 2004, and results were aggregated on a monthly basis. Results show that, besides a minor local recycling component, the vast majority of moisture reaching the NEB area originates in the South Atlantic basin and that the nearby wet Amazon basin has almost no impact. Moreover, although the maximum precipitation in the "Poligono das Secas" region (PS) occurs in March and the maximum precipitation associated with air parcels emanating from the South Atlantic towards PS is observed from January to March, the highest moisture contribution from this oceanic region occurs slightly later (April). A dynamical analysis suggests that the maximum precipitation observed in the PS sector does not coincide with the maximum moisture supply, probably due to the combined effect of the Walker and Hadley cells in inhibiting rising motions over the region in the months following April.
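The usual diagnostic behind this type of FLEXPART moisture tracking (a sketch of the standard formulation, which may differ in detail from the paper's setup) evaluates the net freshwater flux of each parcel of mass m from the change of its specific humidity q along the trajectory, and sums over the K parcels residing above an area A:

```latex
% Sketch of the standard Lagrangian moisture diagnostic: e - p is the parcel-scale
% evaporation-minus-precipitation rate, E - P the areal net surface freshwater flux.
e - p \;=\; m \,\frac{dq}{dt},
\qquad
E - P \;\approx\; \frac{1}{A} \sum_{k=1}^{K} (e - p)_k .
```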
Abstract:
Hyperspectral imaging can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems of hyperspectral data analysis is the presence of mixed pixels, due to the low spatial resolution of such images. This means that several spectrally pure signatures (endmembers) are combined into the same mixed pixel. Linear spectral unmixing follows an unsupervised approach which aims at inferring pure spectral signatures and their material fractions at each pixel of the scene. The huge data volumes acquired by such sensors put stringent requirements on processing and unmixing methods. This paper proposes an efficient implementation of an unsupervised linear unmixing method on GPUs using CUDA. The method, simplex identification via split augmented Lagrangian (SISAL), finds the smallest simplex by solving a sequence of nonsmooth convex subproblems, using variable splitting to obtain a constrained formulation and then applying an augmented Lagrangian technique. The parallel implementation of SISAL presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced accesses to memory. The results presented herein indicate that the GPU implementation can significantly accelerate the method's execution over large datasets while maintaining the method's accuracy.
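As a hedged illustration of the underlying linear mixing model (synthetic data only; this is not the SISAL/GPU implementation described above, which also estimates the endmembers themselves), each pixel spectrum is modeled as a nonnegative, sum-to-one combination of endmember signatures plus noise:

```python
# Synthetic illustration of the linear mixing model used in spectral unmixing
# (not the SISAL/GPU code from the abstract): Y = M A + noise, with abundances
# A nonnegative and summing to one at every pixel.
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_endmembers, n_pixels = 50, 3, 1000

M = rng.random((n_bands, n_endmembers))                      # endmember signatures (columns of M)
A = rng.dirichlet(np.ones(n_endmembers), size=n_pixels).T    # abundances: >= 0 and sum to one per pixel
Y = M @ A + 0.01 * rng.standard_normal((n_bands, n_pixels))  # observed (mixed) pixel spectra

# With the endmembers known, abundances can be recovered by least squares;
# SISAL itself solves the harder problem of estimating M without pure pixels.
A_hat, *_ = np.linalg.lstsq(M, Y, rcond=None)
print("mean absolute abundance error:", float(np.abs(A_hat - A).mean()))
```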
Abstract:
Hyperspectral imaging has become one of the main topics in remote sensing applications; hyperspectral sensors acquire hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems involved in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks in hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive, and even power-intensive, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Specifically, several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian-based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced accesses to memory.
Abstract:
One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture (CUDA). SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion, using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.
Abstract:
Understanding the behavior of complex composite materials during mixing procedures is fundamental in several industrial processes. For instance, polymer composites are usually manufactured by dispersing fillers in polymer melt matrices. The success of the filler dispersion depends both on the complex flow patterns generated and on the rheological behavior of the polymer melt. Consequently, a numerical tool that allows modeling both the fluid and the particles would be very useful to increase process insight. Nowadays there are computational tools that allow modeling the behavior of filled systems, taking into account both the behavior of the fluid (Computational Rheology) and the particles (Discrete Element Method). One example is the DPMFoam solver of the OpenFOAM® framework, where the volume-fraction-averaged momentum and mass conservation equations are used to describe the rheology of the fluid (continuous phase), and Newton's second law of motion is used to compute the movement of the particles (discrete phase). In this work the referred solver is extended to take into account the elasticity of the polymer melts for the continuous phase. The solver capabilities are illustrated by studying the effect of the fluid rheology on filler dispersion, taking into account different fluid types (generalized Newtonian or viscoelastic) and different particle volume fractions and sizes. The results obtained are used to evaluate the relevance of considering the complex rheology of the fluid for predicting the composite morphology.
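For orientation, a schematic form of this kind of CFD-DEM coupling (assumed notation, not the exact DPMFoam equation set) combines volume-fraction-averaged conservation equations for the continuous phase with Newton's second law per particle:

```latex
% Schematic CFD-DEM equations (assumed notation): \alpha_f = fluid volume fraction,
% \boldsymbol{\tau} = stress tensor (generalized Newtonian or viscoelastic),
% \mathbf{F}_{fp} = fluid-particle momentum exchange, \mathbf{F}_{d,i} and
% \mathbf{F}_{c,ij} = drag and contact forces on particle i.
\frac{\partial (\alpha_f \rho_f)}{\partial t} + \nabla\!\cdot(\alpha_f \rho_f \mathbf{u}_f) = 0,
\qquad
\frac{\partial (\alpha_f \rho_f \mathbf{u}_f)}{\partial t} + \nabla\!\cdot(\alpha_f \rho_f \mathbf{u}_f \mathbf{u}_f)
  = -\alpha_f \nabla p + \nabla\!\cdot(\alpha_f \boldsymbol{\tau}) + \alpha_f \rho_f \mathbf{g} - \mathbf{F}_{fp},
\qquad
m_i \frac{d\mathbf{v}_i}{dt} = \mathbf{F}_{d,i} + m_i \mathbf{g} + \sum_{j} \mathbf{F}_{c,ij}.
```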
Abstract:
Magdeburg, Univ., Faculty of Computer Science, Diss., 2013
Abstract:
The network revenue management (RM) problem arises in airline, hotel, media, and other industries where the sale products use multiple resources. It can be formulated as a stochastic dynamic program, but the dynamic program is computationally intractable because of an exponentially large state space, and a number of heuristics have been proposed to approximate it. Notable amongst these, both for their revenue performance and their theoretically sound basis, are approximate dynamic programming methods that approximate the value function by basis functions (both affine functions and piecewise-linear functions have been proposed for network RM) and decomposition methods that relax the constraints of the dynamic program to solve simpler dynamic programs (such as the Lagrangian relaxation methods). In this paper we show that these two seemingly distinct approaches coincide for the network RM dynamic program, i.e., the piecewise-linear approximation method and the Lagrangian relaxation method are one and the same.
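To fix ideas (a hedged sketch of the standard approximation forms, not the paper's exact notation), with x_i the remaining capacity of resource i at time t, the two families of approximations look like this:

```latex
% Standard value-function approximations for network RM (sketch): x_i = remaining
% capacity of resource i, V_t = value function at time t.
V_t(\mathbf{x}) \;\approx\; \theta_t + \sum_{i} v_{t,i}\, x_i \quad \text{(affine)},
\qquad
V_t(\mathbf{x}) \;\approx\; \sum_{i} v_{t,i}(x_i) \quad \text{(separable piecewise-linear)}.
```

A Lagrangian relaxation dualizes the constraints that couple the resources, so the relaxed network dynamic program decomposes into single-resource dynamic programs; the equivalence claimed in the abstract is between this decomposition and the separable piecewise-linear approximation.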
Abstract:
Isotopic and isotonic chains of superheavy nuclei are analyzed to search for spherical double shell closures beyond Z=82 and N=126 within the new effective field theory model of Furnstahl, Serot, and Tang for the relativistic nuclear many-body problem. We take into account several indicators to identify the occurrence of possible shell closures, such as two-nucleon separation energies, two-nucleon shell gaps, average pairing gaps, and the shell correction energy. The effective Lagrangian model predicts (N=172, Z=120) and (N=258, Z=120) as spherical doubly magic superheavy nuclei, whereas (N=184, Z=114) shows some magic character depending on the parameter set. The magicity of a particular neutron (proton) number in the analyzed mass region is found to depend on the number of protons (neutrons) present in the nucleus.
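For reference, two of the indicators listed above have standard definitions in terms of the binding energy B(N, Z); the two-proton quantities are defined analogously:

```latex
% Two-neutron separation energy and two-neutron shell gap.
S_{2n}(N, Z) \;=\; B(N, Z) - B(N-2, Z),
\qquad
\delta_{2n}(N, Z) \;=\; S_{2n}(N, Z) - S_{2n}(N+2, Z).
```

A sharp drop in S_{2n} just beyond a given neutron number, i.e. a pronounced peak of the shell gap δ_{2n} at that N, is the signature of a neutron shell closure.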