47 results for Calculated, eddy covariance method


Relevance:

20.00%

Publisher:

Abstract:

In this article, we calibrate the Vasicek interest rate model under the risk-neutral measure by learning the model parameters using Gaussian process regression. The calibration is done by maximizing the likelihood of zero-coupon bond log prices, using mean and covariance functions computed analytically, as well as likelihood derivatives with respect to the parameters. Maximization is carried out with the conjugate gradient method. The only prices needed for calibration are zero-coupon bond prices, and the parameters are obtained directly in the arbitrage-free risk-neutral measure.
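For reference, the Vasicek model gives zero-coupon bond prices in closed form, which is what makes calibration from bond prices alone possible; a minimal sketch of that pricing formula (the function name and parameter values are illustrative, not taken from the article):

```python
import numpy as np

def vasicek_zcb_price(r0, tau, kappa, theta, sigma):
    """Closed-form Vasicek price P(0, tau) of a zero-coupon bond under the
    risk-neutral measure: mean-reversion speed kappa, long-run level theta,
    volatility sigma, current short rate r0."""
    B = (1.0 - np.exp(-kappa * tau)) / kappa
    A = np.exp((theta - sigma**2 / (2.0 * kappa**2)) * (B - tau)
               - sigma**2 * B**2 / (4.0 * kappa))
    return A * np.exp(-B * r0)

# Prices for a positive short rate lie below par and decay with maturity.
print(vasicek_zcb_price(0.03, 1.0, 0.5, 0.04, 0.01))
```

Calibration then amounts to maximizing the likelihood of the observed log prices over (kappa, theta, sigma), for instance with a conjugate-gradient optimizer.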


This paper addresses the problem of optimal positioning of surface-bonded piezoelectric patches in sandwich plates with a viscoelastic core and laminated face layers. The objective is to maximize a set of modal loss factors for a given frequency range using multiobjective topology optimization. Active damping is introduced through co-located negative velocity feedback control. The multiobjective topology optimization problem is solved using the Direct MultiSearch (DMS) method. An application to a simply supported sandwich plate is presented, with results for the maximization of the first six modal loss factors. The influence of the finite element mesh is analyzed, and the results are, to some extent, compared with those obtained using an alternative single-objective optimization.


Brain dopamine transporter imaging by Single Photon Emission Computed Tomography (SPECT) with 123I-FP-CIT (DaTScanTM) has become an important tool in the diagnosis and evaluation of Parkinson syndromes. This diagnostic method allows the visualization of a portion of the striatum, where the healthy pattern resembles two symmetric commas, allowing the evaluation of the presynaptic dopaminergic system, in which dopamine transporters are responsible for the reuptake of dopamine from the synaptic cleft into the nigrostriatal nerve terminals, where it is stored or degraded. In daily practice, assessment of DaTScanTM commonly relies on visual inspection alone. However, this process is complex and subjective, as it depends on the observer's experience, and it is associated with high intra- and inter-observer variability. Studies have shown that semiquantification can improve the diagnosis of Parkinson syndromes. Semiquantification requires image segmentation methods based on regions of interest (ROIs): ROIs are drawn over specific (striatum) and nonspecific (background) uptake areas, and specific binding ratios are then calculated. The low adoption of semiquantification in the diagnosis of Parkinson syndromes is related not only to the time it requires, but also to the need for a database of reference values adapted to the population concerned and to the examination protocol of each department. Studies have concluded that such a database increases the reproducibility of semiquantification. The aim of this investigation was to create and validate a database of healthy controls for dopamine transporter imaging with DaTScanTM, named DBRV. The created database has been adapted to the protocol of the Nuclear Medicine Department and to the population of the Infanta Cristina Hospital in Badajoz, Spain.
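The specific binding ratio mentioned above is a simple count ratio between the two ROI types; a minimal sketch, assuming mean counts per ROI have already been extracted (names and numbers are illustrative):

```python
import numpy as np

def specific_binding_ratio(striatal_roi, background_roi):
    """SBR = (mean specific uptake - mean nonspecific uptake) / mean nonspecific uptake."""
    c_striatum = np.mean(striatal_roi)
    c_background = np.mean(background_roi)
    return (c_striatum - c_background) / c_background

# Striatal counts three times the background level give an SBR of 2.
print(specific_binding_ratio([300, 300, 300], [100, 100, 100]))
```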


In this article, we present the first study of probabilistic tsunami hazard assessment for the Northeast (NE) Atlantic region related to earthquake sources. The methodology combines probabilistic seismic hazard assessment, tsunami numerical modeling, and statistical approaches. We consider three main tsunamigenic areas, namely the Southwest Iberian Margin, the Gloria, and the Caribbean. For each tsunamigenic zone, we derive the annual recurrence rate for each magnitude range, from Mw 8.0 up to Mw 9.0 at a regular interval, using a Bayesian method that incorporates seismic information from historical and instrumental catalogs. A numerical code solving the shallow water equations is employed to simulate tsunami propagation and compute nearshore wave heights. The probability of exceeding a specific tsunami hazard level during a given time period is calculated using the Poisson distribution. The results are presented in terms of the probability of exceedance of a given tsunami amplitude for 100- and 500-year return periods. The hazard level varies along the NE Atlantic coast, being maximum along the northern segment of the Moroccan Atlantic coast, the southern Portuguese coast, and the Spanish coast of the Gulf of Cadiz. We find that the probability that the maximum wave height exceeds 1 m somewhere in the NE Atlantic region reaches 60% and 100% for the 100- and 500-year return periods, respectively. These probabilities decrease to about 15% and 50%, respectively, when considering an exceedance threshold of 5 m for the same return periods.
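The Poisson step has a standard closed form: the probability of at least one exceedance in T years at annual rate lambda is 1 - exp(-lambda * T); a minimal sketch (the rate value is illustrative, not one of the study's results):

```python
import math

def exceedance_probability(annual_rate, period_years):
    """Probability of at least one exceedance in `period_years`, assuming
    exceedances follow a Poisson process with the given annual rate."""
    return 1.0 - math.exp(-annual_rate * period_years)

# An event with a 1/100 annual rate has roughly a 63% chance of
# occurring at least once in a 100-year return period.
print(exceedance_probability(0.01, 100))
```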


Naturally Occurring Radioactive Materials (NORM) are materials found naturally in the environment that contain radioactive isotopes capable of harming the health of the workers who handle them. Present in underground work such as mining and tunnel construction in granite zones, these materials are difficult to identify and characterize for risk evaluation without appropriate equipment. The assessment methods are exemplified with a case study on the handling and processing of phosphate rock, in which significant amounts of radioactive isotopes were found and, consequently, elevated radon concentrations in enclosed spaces containing these materials. © 2015 Taylor & Francis Group, London.


Reporter genes are routinely used in every molecular and cellular biology laboratory for studying heterologous gene expression and general cellular mechanisms, such as transfection processes. Although well characterized and broadly implemented, reporter genes present serious limitations, either by involving time-consuming procedures or by having possible side effects on the expression of the heterologous gene or even on general cellular metabolism. Fourier transform mid-infrared (FT-MIR) spectroscopy was evaluated to analyze simultaneously, in a rapid (minutes) and high-throughput mode (using 96-well microplates), the transfection efficiency and the effect of the transfection process on the host cell's biochemical composition and metabolism. Semi-adherent HEK and adherent AGS cell lines, transfected with the plasmid pVAX-GFP using Lipofectamine, were used as model systems. Good partial least squares (PLS) models were built to estimate the transfection efficiency, either considering each cell line independently (R² ≥ 0.92; RMSECV ≤ 2%) or considering both cell lines simultaneously (R² = 0.90; RMSECV = 2%). Additionally, the effect of the transfection process on the biochemical and metabolic features of HEK cells could be evaluated directly from the FT-IR spectra. Due to the high sensitivity of the technique, it was also possible to discriminate the effect of the transfection process from that of the transfection reagent on HEK cells, e.g., by the analysis of spectral biomarkers and biochemical and metabolic features. The present results go far beyond what any reporter gene assay or other specific probe can offer for these purposes.
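The quoted PLS figures of merit, R² and RMSE-type errors, can be computed from predicted versus reference transfection efficiencies; a minimal sketch, not the authors' code (the sample values are illustrative):

```python
import numpy as np

def r2_and_rmse(y_true, y_pred):
    """Coefficient of determination (R^2) and root-mean-square error
    between reference values and model predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
    return r2, rmse

# Illustrative reference vs. predicted efficiencies (%).
print(r2_and_rmse([10, 20, 30, 40], [12, 19, 29, 41]))
```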


The Chaves basin is a pull-apart tectonic depression implanted on granites, schists, and graywackes, and filled with a sedimentary sequence of variable thickness. It is a rather complex structure, as it includes an intricate network of faults and hydrogeological systems. The topography of the basement of the Chaves basin remains unclear, as no drill hole has ever intersected the bottom of the sediments, and resistivity surveys suffer from severe equivalence issues resulting from the geological setting. In this work, a joint inversion approach for 1D resistivity and gravity data, designed for layered environments, is used to combine the consistent spatial distribution of the gravity data with the depth sensitivity of the resistivity data. A comparison between the results of inverting each data set individually and the results of the joint inversion shows that, although the joint inversion has more difficulty adjusting to the observed data, it provides more realistic and geologically meaningful models than those calculated by individual inversion of each data set. This work contributes to a better understanding of the Chaves basin, while taking the opportunity to further study both the advantages and the difficulties of applying the joint inversion of gravity and resistivity data.
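One common way to pose such a joint inversion is to minimize a weighted combination of the two data misfits; a minimal sketch of that kind of objective (the weighting scheme and names are illustrative, not the paper's exact formulation):

```python
import numpy as np

def joint_misfit(res_obs, res_pred, grav_obs, grav_pred, w=0.5):
    """Weighted sum of mean-square data misfits for resistivity and gravity;
    w balances the influence of the two data sets on the joint model."""
    chi_res = np.mean((np.asarray(res_obs) - np.asarray(res_pred)) ** 2)
    chi_grav = np.mean((np.asarray(grav_obs) - np.asarray(grav_pred)) ** 2)
    return w * chi_res + (1.0 - w) * chi_grav

# Resistivity fit is perfect here, so only the gravity residuals contribute.
print(joint_misfit([10.0, 12.0], [10.0, 12.0], [1.0, 2.0], [1.1, 1.9]))
```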


We have calculated the equilibrium shape of the axially symmetric meniscus along which a spherical bubble contacts a flat liquid surface by analytically integrating the Young-Laplace equation in the presence of gravity, in the limit of large Bond numbers. This method has the advantage that it provides semianalytical expressions for key geometrical properties of the bubble in terms of the Bond number. Results are in good overall agreement with experimental data and are consistent with fully numerical (Surface Evolver) calculations. In particular, we are able to describe how the bubble shape changes from hemispherical, with a flat, shallow bottom, to lenticular, with a deeper, curved bottom, as the Bond number is decreased.
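The controlling dimensionless group here is the Bond number, Bo = ρgR²/γ, which compares gravity with surface tension; a minimal sketch (the material values are illustrative):

```python
def bond_number(density, gravity, radius, surface_tension):
    """Bond number Bo = rho * g * R**2 / gamma: the ratio of gravitational
    to capillary forces for a bubble of radius R."""
    return density * gravity * radius**2 / surface_tension

# A 1 mm water bubble (rho ~ 1000 kg/m^3, gamma ~ 0.072 N/m) has Bo < 1,
# i.e., its shape is dominated by surface tension rather than gravity.
print(bond_number(1000.0, 9.81, 1e-3, 0.072))
```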


The bending of simply supported composite plates is analyzed using a direct collocation meshless numerical method. In order to optimize the node distribution, the Direct MultiSearch (DMS) method for multi-objective optimization is applied. In addition, the method optimizes the shape parameter of the radial basis functions. The optimization algorithm was able to find good solutions for a large variety of node distributions.
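The role of the RBF shape parameter can be illustrated with a minimal 1-D multiquadric interpolation sketch (the node set and names are illustrative, not the paper's collocation solver):

```python
import numpy as np

def rbf_interpolate(x_nodes, f_nodes, x_eval, c=0.5):
    """Interpolate with multiquadric RBFs phi(r) = sqrt(r^2 + c^2);
    c is the shape parameter that an optimizer would tune."""
    phi = lambda r: np.sqrt(r**2 + c**2)
    A = phi(np.abs(x_nodes[:, None] - x_nodes[None, :]))  # collocation matrix
    weights = np.linalg.solve(A, f_nodes)                 # interpolation weights
    return phi(np.abs(x_eval[:, None] - x_nodes[None, :])) @ weights

x = np.linspace(0.0, 1.0, 5)
f = x**2
# Interpolation is exact at the nodes; x = 0.25 is the second node.
print(rbf_interpolate(x, f, np.array([0.25])))
```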


The optimal design of cold-formed steel columns is addressed in this paper, with two objectives: maximizing the local-global buckling strength and maximizing the distortional buckling strength. The design variables of the problem are the orientation angles of the cross-section wall elements; the thickness and width of the steel sheet that forms the cross-section are fixed. The elastic local, distortional, and global buckling loads are determined using the Finite Strip Method (CUFSM), and the strength of cold-formed steel columns (of given length) is calculated using the Direct Strength Method (DSM). The bi-objective optimization problem is solved using the Direct MultiSearch (DMS) method, which does not use any derivatives of the objective functions. Trade-off Pareto-optimal fronts are obtained separately for symmetric and anti-symmetric cross-section shapes. The results are analyzed and further discussed, and some interesting conclusions about the individual strengths (local-global and distortional) are drawn.
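DMS builds such trade-off fronts by retaining only nondominated points; the Pareto-dominance test at its core can be sketched as follows (assuming both objectives are posed as minimization, so the strengths above would be negated):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b under minimization:
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# (1, 2) dominates (2, 3); (1, 3) and (2, 2) are mutually nondominated,
# so both would remain on the Pareto front.
print(dominates((1, 2), (2, 3)), dominates((1, 3), (2, 2)))
```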


A multiobjective approach to the optimization of passive damping for vibration reduction in sandwich structures is presented in this paper. Constrained optimization is conducted to maximize modal loss factors and minimize the weight of sandwich beams and plates with elastic laminated constraining layers and a viscoelastic core, with layer thicknesses, materials, and laminate ply orientation angles as design variables. The problem is solved using the Direct MultiSearch (DMS) solver for derivative-free multiobjective optimization, and the solutions are compared with alternative ones obtained using genetic algorithms.


Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral sensors acquire hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GB per flight. This high spectral resolution can be used for object detection and for discriminating between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks in hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-consuming, which compromises their use in applications under onboard constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Specifically, several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for onboard data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.


Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Since the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral data sets, namely the high correlation among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results conducted using synthetic and real hyperspectral data sets on two different NVIDIA GPU architectures, the GeForce GTX 590 and the GeForce GTX TITAN, reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.


One of the main problems in hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture (CUDA). SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation works in a pixel-by-pixel fashion, using coalesced memory accesses and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining its accuracy.
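The linear mixture model underlying SISAL writes each pixel as y = M a, with endmember matrix M and abundance vector a; a minimal unconstrained sketch (SISAL itself estimates M and enforces the simplex constraints on the GPU; names and data here are illustrative):

```python
import numpy as np

def unmix_pixel(endmembers, pixel):
    """Least-squares abundance estimate under the linear mixture model y = M a.
    (The nonnegativity and sum-to-one constraints handled by full unmixing
    methods are omitted in this sketch.)"""
    abundances, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    return abundances

M = np.array([[1.0, 0.2], [0.1, 0.9], [0.5, 0.5]])  # 3 bands, 2 endmembers
y = M @ np.array([0.3, 0.7])                        # synthetic mixed pixel
print(unmix_pixel(M, y))
```

A GPU implementation would run this per-pixel solve over millions of pixels in parallel, which is where coalesced memory access patterns pay off.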


The parallel hyperspectral unmixing problem is considered in this paper. A semisupervised approach is developed under the linear mixture model, in which the physical constraints on the abundances are taken into account. The proposed approach relies on the increasing availability of spectral libraries of materials measured on the ground, instead of resorting to endmember extraction methods. Since libraries are potentially very large and hyperspectral data sets are of high dimensionality, a parallel implementation in a pixel-by-pixel fashion is derived that properly exploits the graphics processing unit (GPU) architecture at a low level, thus taking full advantage of the computational power of GPUs. Experimental results obtained for real hyperspectral data sets reveal significant speedup factors, of up to 164 times, with respect to an optimized serial implementation.