961 results for Spectral projected gradient method


Relevance:

30.00%

Publisher:

Abstract:

Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Because the bandwidth of the link between the satellite/airborne platform and the ground station is limited, an onboard compression method is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). This method takes into account two main properties of hyperspectral datasets, namely the high correlation among the spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results, conducted using synthetic and real hyperspectral datasets on two different NVIDIA GPU architectures (GeForce GTX 590 and GeForce GTX TITAN), reveal that the use of GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
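The following sketch is not P-HYCA (and uses CPU NumPy rather than CUDA); it only illustrates the property the method exploits: when pixels lie in a low-dimensional endmember subspace, far fewer random measurements than spectral bands suffice for reconstruction. Unlike HYCA, it assumes the endmember matrix `E` is known.

```python
import numpy as np

rng = np.random.default_rng(0)
bands, p, pixels = 200, 5, 1000          # spectral bands, endmembers, pixels
m = 20                                   # measurements per pixel (m << bands)

# Synthetic linear-mixing data: X = E @ A, abundances on the simplex
E = rng.random((bands, p))               # endmember signatures (assumed known)
A = rng.dirichlet(np.ones(p), size=pixels).T
X = E @ A

# Per-pixel compressive measurements Y = Phi @ X
Phi = rng.normal(size=(m, bands)) / np.sqrt(m)
Y = Phi @ X

# Recovery: since each pixel x = E a, solve the small m x p system for a
A_hat = np.linalg.lstsq(Phi @ E, Y, rcond=None)[0]
X_hat = E @ A_hat

print("relative error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```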

Relevance:

30.00%

Publisher:

Abstract:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture. SISAL aims to identify the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure pixel assumption is violated. The proposed implementation works in a pixel-by-pixel fashion, using coalesced memory accesses and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, thereby achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution on large data sets while maintaining the method's accuracy.
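SISAL itself solves a non-convex minimum-volume problem, so the sketch below is not SISAL; it only illustrates the per-pixel, data-parallel structure that makes this family of unmixing computations GPU-friendly. Given endmembers `M` (assumed known here), each pixel's abundances are estimated independently by projected gradient onto the probability simplex; all names are illustrative.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(v)) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def unmix_pixel(M, x, iters=500):
    """Fully constrained least squares via projected gradient."""
    L = np.linalg.norm(M.T @ M, 2)       # Lipschitz constant of the gradient
    a = np.full(M.shape[1], 1.0 / M.shape[1])
    for _ in range(iters):
        a = project_simplex(a - (M.T @ (M @ a - x)) / L)
    return a

rng = np.random.default_rng(1)
M = rng.random((100, 4))                 # 100 bands, 4 endmembers
a_true = np.array([0.5, 0.3, 0.2, 0.0])
x = M @ a_true
print(unmix_pixel(M, x))                 # close to a_true
```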

Relevance:

30.00%

Publisher:

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of the MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
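The projection loop described above can be sketched as follows (a schematic in NumPy, not the published VCA code; it omits VCA's SNR-dependent preprocessing and signal-subspace identification):

```python
import numpy as np

def extract_endmembers(X, p, seed=0):
    """Schematic endmember extraction: repeatedly project the data onto a
    direction orthogonal to the endmembers found so far and keep the extreme
    pixel as the next endmember."""
    rng = np.random.default_rng(seed)
    bands, n = X.shape
    idx = []
    E = np.zeros((bands, 0))
    for _ in range(p):
        d = rng.normal(size=bands)
        if E.shape[1] > 0:
            Q, _ = np.linalg.qr(E)            # orthonormal basis of span(E)
            d -= Q @ (Q.T @ d)                # component orthogonal to span(E)
        d /= np.linalg.norm(d)
        k = np.argmax(np.abs(X.T @ d))        # extreme of the projection
        idx.append(k)
        E = X[:, idx]
    return idx

# Toy check: pixels mixed from 3 endmembers, with pure pixels planted
rng = np.random.default_rng(2)
M = rng.random((50, 3))
A = rng.dirichlet(np.ones(3), size=500).T
A[:, :3] = np.eye(3)                          # one pure pixel per endmember
print(sorted(extract_endmembers(M @ A, 3)))   # ideally [0, 1, 2]
```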

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present the operational matrices of the left Caputo fractional derivative, the right Caputo fractional derivative, and the Riemann–Liouville fractional integral for shifted Legendre polynomials. We develop an accurate numerical algorithm to solve the two-sided space–time fractional advection–dispersion equation (FADE) based on a spectral shifted Legendre tau (SLT) method in combination with the derived shifted Legendre operational matrices. The fractional derivatives are described in the Caputo sense. We propose a spectral SLT method, in both the temporal and spatial discretizations, for the two-sided space–time FADE. This technique reduces the two-sided space–time FADE to a system of algebraic equations, which greatly simplifies the problem. Numerical experiments were carried out to confirm the spectral accuracy and efficiency of the proposed algorithm. By selecting relatively few Legendre polynomial degrees, we are able to obtain very accurate approximations, demonstrating the utility of the new approach over other numerical methods.
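As a small worked illustration (not code from the paper), the building block of such operational matrices is the Caputo rule for monomials, D^ν t^k = Γ(k+1)/Γ(k−ν+1) t^(k−ν) for k ≥ ⌈ν⌉ and zero for smaller integer powers; applied termwise to shifted Legendre polynomials on [0, 1] it gives their fractional derivatives directly:

```python
import numpy as np
from math import gamma, ceil
from numpy.polynomial import Polynomial, legendre

def shifted_legendre(n):
    """Power-basis representation of P_n(2t - 1) on [0, 1]."""
    p = Polynomial(legendre.leg2poly([0.0] * n + [1.0]))
    return p(Polynomial([-1.0, 2.0]))        # compose with x = 2t - 1

def caputo(coeffs, nu):
    """Return f(t) = D^nu of sum_k coeffs[k] * t**k (Caputo sense)."""
    k0 = ceil(nu)
    def f(t):
        t = np.asarray(t, dtype=float)
        out = np.zeros_like(t)
        for k, a in enumerate(coeffs):
            if k >= k0 and a != 0.0:
                out += a * gamma(k + 1) / gamma(k - nu + 1) * t ** (k - nu)
        return out
    return f

P3 = shifted_legendre(3)
d_half = caputo(P3.coef, 0.5)                # half-order Caputo derivative
print(d_half(np.array([0.25, 0.5, 0.75])))
```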

Relevance:

30.00%

Publisher:

Abstract:

Recently, operational matrices have been adapted for solving several kinds of fractional differential equations (FDEs). The use of numerical techniques in conjunction with operational matrices of some orthogonal polynomials, for the solution of FDEs on finite and infinite intervals, has produced highly accurate solutions for such equations. This article discusses spectral techniques based on operational matrices of fractional derivatives and integrals for solving several kinds of linear and nonlinear FDEs. More precisely, we present the operational matrices of fractional derivatives and integrals for several polynomials on bounded domains, such as the Legendre, Chebyshev, Jacobi, and Bernstein polynomials, and we use them with different spectral techniques for solving the aforementioned equations on bounded domains. The operational matrices of fractional derivatives and integrals are also presented for orthogonal Laguerre and modified generalized Laguerre polynomials, and their use with numerical techniques for solving FDEs on a semi-infinite interval is discussed. Several examples are presented to illustrate the numerical and theoretical properties of the various spectral techniques for solving FDEs on finite and semi-infinite intervals.
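In all of these constructions the operational matrix plays the same role. Schematically (the standard definition, not notation specific to this article), for a vector of basis polynomials $\Phi(t) = [\phi_0(t), \ldots, \phi_N(t)]^T$ one seeks constant matrices $\mathbf{D}^{(\nu)}$ and $\mathbf{P}^{(\nu)}$ such that

$$
{}^{C}D^{\nu}\,\Phi(t) \simeq \mathbf{D}^{(\nu)}\Phi(t),
\qquad
I^{\nu}\,\Phi(t) \simeq \mathbf{P}^{(\nu)}\Phi(t),
$$

so that applying a fractional operator to a truncated expansion $u(t) \approx \mathbf{c}^{T}\Phi(t)$ reduces to a matrix multiplication, which is what turns the FDE into an algebraic system.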

Relevance:

30.00%

Publisher:

Abstract:

The electrohysterogram (EHG) is a new instrument for pregnancy monitoring. It measures the electrical signal of the uterine muscle, which is closely related to uterine contractions. The EHG is described as a viable alternative to, and a more precise instrument than, the external tocogram, currently the most widely used method for describing uterine contractions. The EHG has also been indicated as a promising tool for assessing the risk of preterm delivery. This work intends to contribute towards the characterization of the EHG through an inventory of its components, which are:

• contractions;
• labor contractions;
• Alvarez waves;
• fetal movements;
• long-duration low-frequency waves.

The tools used for cataloguing were parametric and non-parametric spectral analysis, energy estimators, time-frequency methods, and tocograms annotated by expert physicians. The EHG recordings and respective tocograms were obtained from the Icelandic 16-electrode Electrohysterogram Database. In total, 288 components were classified; no component database of this type was previously available for consultation. A spectral analysis and power estimation module was added to Uterine Explorer, an EHG analysis software package developed at FCT-UNL. The importance of this component database stems from the need to improve the understanding of the EHG, which is a relatively complex signal, as well as to contribute towards the detection of preterm birth. Preterm birth accounts for 10% of all births and is one of the most relevant obstetric conditions. Despite the technological and scientific advances in perinatal medicine, prematurity is the major cause of neonatal death in developed countries. Although various risk factors, such as previous preterm births, infection, uterine malformations, multiple gestation, and a short uterine cervix in the second trimester, have been associated with this condition, its etiology remains unknown [1][2][3].
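As an illustrative fragment (not the Uterine Explorer code), the non-parametric part of such a spectral inventory can be done with a Welch periodogram; a parametric analysis would instead fit, e.g., an autoregressive model. The sampling rate and signal below are synthetic stand-ins:

```python
import numpy as np
from scipy.signal import welch

fs = 20.0                                 # Hz; EHG energy lies at low frequencies
t = np.arange(0, 600, 1 / fs)             # 10 minutes of synthetic signal
rng = np.random.default_rng(3)
x = np.sin(2 * np.pi * 0.4 * t) + 0.5 * rng.normal(size=t.size)

# Non-parametric PSD estimate (Welch's averaged periodogram)
f, pxx = welch(x, fs=fs, nperseg=1024)
print("dominant frequency: %.2f Hz" % f[np.argmax(pxx)])
```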

Relevance:

30.00%

Publisher:

Abstract:

One of the most popular approaches to path planning and control is the potential field method. This method is particularly attractive because it is suitable for on-line feedback control. In this approach, the gradient of a potential field is used to generate the robot's trajectory; thus, the path is generated by the transient solutions of a dynamical system. In the nonlinear attractor dynamics approach, by contrast, the path is generated by a sequence of attractor solutions (i.e., asymptotically stable states) of a dynamical system, which replaces the transient solutions of the potential field method. We discuss, at a theoretical level, some of the main differences between these two approaches.
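A minimal sketch of the potential field idea summarized above (illustrative, not from the paper): the robot descends the gradient of an attractive-plus-repulsive potential, so the path is the transient solution of q' = -grad U(q). Gains, goal, and obstacle are arbitrary choices; note that such fields can also exhibit spurious local minima.

```python
import numpy as np

goal = np.array([9.0, 9.0])
obstacle, d0 = np.array([4.0, 5.0]), 2.0     # repulsion active within d0

def grad_U(q, k_att=1.0, eta=4.0):
    g = k_att * (q - goal)                   # attractive: U = k/2 ||q - goal||^2
    d = np.linalg.norm(q - obstacle)
    if d < d0:                               # repulsive: U = eta/2 (1/d - 1/d0)^2
        g += eta * (1.0 / d0 - 1.0 / d) * (q - obstacle) / d**3
    return g

q, dt = np.array([0.0, 0.0]), 0.01
path = [q.copy()]
for _ in range(5000):                        # Euler integration of q' = -grad U
    q = q - dt * grad_U(q)
    path.append(q.copy())
print("final position:", np.round(q, 3))     # approaches the goal
```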

Relevance:

30.00%

Publisher:

Abstract:

Purpose: Higher myopic refractive errors are associated with serious ocular complications that can put visual function at risk. There is a corresponding interest in slowing and, if possible, stopping myopia progression before it reaches a level associated with increased risk of secondary pathology. The purpose of this report was to review our understanding of the rationale(s) behind, and the success of, contact lenses (CLs) used to reduce myopia progression. Methods: The review commenced by searching the PubMed database. The inclusion criteria stipulated publications of clinical trials evaluating the efficacy of CLs in regulating myopia progression, based on the primary endpoint of changes in axial length measurements, and published in peer-reviewed journals. Other publications from conference proceedings or patents were exceptionally considered when no peer-reviewed articles were available. Results: The mechanisms that presently support myopia regulation with CLs are based on changing the relative peripheral defocus and changing the foveal image quality signal to potentially interfere with the accommodative system. Ten clinical trials addressing myopia regulation with CLs were reviewed, including corneal refractive therapy (orthokeratology), peripheral gradient lenses, and bifocal (dual-focus) and multifocal lenses. Conclusions: CLs were reported to be well accepted, consistent, and safe methods to address myopia regulation in children. Corneal refractive therapy (orthokeratology) is so far the method with the largest demonstrated efficacy in myopia regulation across different ethnic groups. However, factors such as patient convenience, the degree of initial myopia, and non-CL treatments may also be considered. The combination of different strategies (i.e., central defocus, peripheral defocus, spectral filters, pharmaceutical delivery, and active lens-borne illumination) in a single device will present further testable hypotheses exploring how different mechanisms can reinforce or compete with each other to improve or reduce myopia regulation with CLs.

Relevance:

30.00%

Publisher:

Abstract:

A simple protocol is described for the silver staining of polyacrylamide gradient gels used for the separation of restriction fragments of kinetoplast DNA [schizodeme analysis of trypanosomatids (Morel et al., 1980)]. The method overcomes the problems of non-uniform staining and strong background color which are frequently encountered when conventional protocols for the silver staining of linear gels are used. The method described has proven to be of general applicability for DNA, RNA, and protein separations in gradient gels.

Relevance:

30.00%

Publisher:

Abstract:

We propose a mixed finite element method for a class of nonlinear diffusion equations, which is based on their interpretation as gradient flows in optimal transportation metrics. We introduce an appropriate linearization of the optimal transport problem, which leads to a mixed symmetric formulation. This formulation preserves the maximum principle for the semi-discrete scheme as well as for the fully discrete scheme for a certain class of problems. In addition, solutions of the mixed formulation maintain exponential convergence in the relative entropy towards the steady state in the case of a nonlinear Fokker-Planck equation with uniformly convex potential. We demonstrate the behavior of the proposed scheme with 2D simulations of the porous medium equation and of blow-up questions in the Patlak-Keller-Segel model.
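For concreteness, here is the standard identity behind the gradient-flow interpretation in the case of the porous medium equation (a well-known fact, not reproduced from the paper): it is the gradient flow, in the 2-Wasserstein metric, of an entropy functional,

$$
\partial_t u \;=\; \Delta u^m \;=\; \nabla\cdot\Big(u\,\nabla\,\tfrac{m}{m-1}\,u^{m-1}\Big),
\qquad
E(u) \;=\; \frac{1}{m-1}\int u^m\,dx,
$$

i.e. $\partial_t u = \nabla\cdot\big(u\,\nabla\,\delta E/\delta u\big)$, and it is this transport structure that the linearized formulation discretizes.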

Relevance:

30.00%

Publisher:

Abstract:

Among the various determinants of treatment response, the achievement of sufficient blood levels is essential for curing malaria. To help improve our current understanding of antimalarial drug pharmacokinetics, efficacy, and toxicity, we have developed a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method requiring 200 μl of plasma for the simultaneous determination of 14 antimalarial drugs and their metabolites, which are the components of the current first-line combination treatments for malaria (artemether, artesunate, dihydroartemisinin, amodiaquine, N-desethyl-amodiaquine, lumefantrine, desbutyl-lumefantrine, piperaquine, pyronaridine, mefloquine, chloroquine, quinine, pyrimethamine and sulfadoxine). Plasma is purified by a combination of protein precipitation, evaporation, and reconstitution in methanol/ammonium formate 20 mM (pH 4.0) 1:1. Reverse-phase chromatographic separation of the antimalarial drugs is obtained using a gradient elution of 20 mM ammonium formate and acetonitrile, both containing 0.5% formic acid, followed by rinsing and re-equilibration to the initial solvent composition up to 21 min. Analyte quantification, using matrix-matched calibration samples, is performed by electrospray ionization-triple quadrupole mass spectrometry with selected reaction monitoring detection in the positive mode. The method was validated according to FDA recommendations, including assessment of extraction yield, matrix effect variability, overall process efficiency, standard addition experiments, as well as the short- and long-term stability of antimalarials in plasma. The reactivity of endoperoxide-containing antimalarials in the presence of hemolysis was tested both in vitro and on samples from malaria patients. With this method, the signal intensity of artemisinin decreased by about 20% in the presence of 0.2% hemolysed red blood cells in plasma, whereas its derivatives were essentially not affected. The method is precise (inter-day CV%: 3.1-12.6%) and sensitive (lower limits of quantification 0.15-3.0 and 0.75-5 ng/ml for basic/neutral antimalarials and artemisinin derivatives, respectively). This is the first broad-range LC-MS/MS assay covering the antimalarials currently in use. It is an improvement over previous methods in terms of convenience (a single extraction procedure for 14 major antimalarials and metabolites, significantly reducing the analytical time), sensitivity, selectivity, and throughput. While its main limitation is the investment cost of the equipment, plasma samples can be collected in the field and kept at 4 °C for up to 48 h before storage at -80 °C. It is suited to detecting the presence of drug in subjects for screening purposes and to quantifying drug exposure after treatment. It may contribute to filling the current knowledge gaps in the pharmacokinetic/pharmacodynamic relationships of antimalarials and to better defining the therapeutic dose ranges in different patient populations.

Relevance:

30.00%

Publisher:

Abstract:

We describe a simple method using a Percoll gradient for the isolation of highly enriched human monocytes. High numbers of fully functional cells are obtained from whole blood or buffy coat cells. The use of simple laboratory equipment and a relatively cheap reagent makes the described method a convenient approach to obtaining human monocytes.

Relevance:

30.00%

Publisher:

Abstract:

The trabecular bone score (TBS) is a gray-level textural metric that can be extracted from the two-dimensional lumbar spine dual-energy X-ray absorptiometry (DXA) image. TBS is related to bone microarchitecture and provides skeletal information that is not captured from the standard bone mineral density (BMD) measurement. Based on experimental variograms of the projected DXA image, TBS has the potential to discern differences between DXA scans that show similar BMD measurements. An elevated TBS value correlates with better skeletal microstructure; a low TBS value correlates with weaker skeletal microstructure. Lumbar spine TBS has been evaluated in cross-sectional and longitudinal studies. The following conclusions are based upon publications reviewed in this article: 1) TBS gives lower values in postmenopausal women and in men with previous fragility fractures than their nonfractured counterparts; 2) TBS is complementary to data available by lumbar spine DXA measurements; 3) TBS results are lower in women who have sustained a fragility fracture but in whom DXA does not indicate osteoporosis or even osteopenia; 4) TBS predicts fracture risk as well as lumbar spine BMD measurements in postmenopausal women; 5) efficacious therapies for osteoporosis differ in the extent to which they influence the TBS; 6) TBS is associated with fracture risk in individuals with conditions related to reduced bone mass or bone quality. Based on these data, lumbar spine TBS holds promise as an emerging technology that could well become a valuable clinical tool in the diagnosis of osteoporosis and in fracture risk assessment. © 2014 American Society for Bone and Mineral Research.
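TBS is reported in the literature to be derived from the slope of the log–log experimental variogram of the DXA gray levels near the origin. The sketch below is illustrative only — an experimental variogram with a log–log slope fit on synthetic textures — and not the proprietary TBS algorithm; `hmax` and the synthetic images are arbitrary choices.

```python
import numpy as np

def variogram_loglog_slope(img, hmax=8):
    """Experimental variogram gamma(h) = 0.5 * E[(Z(x+h) - Z(x))^2] along the
    two image axes, then the slope of log gamma(h) vs log h near the origin."""
    img = np.asarray(img, dtype=float)
    h = np.arange(1, hmax + 1)
    gamma = np.array([
        0.5 * (np.mean((img[k:, :] - img[:-k, :]) ** 2)
               + np.mean((img[:, k:] - img[:, :-k]) ** 2)) / 2.0
        for k in h
    ])
    slope, _ = np.polyfit(np.log(h), np.log(gamma), 1)
    return slope

rng = np.random.default_rng(4)
smooth = rng.normal(size=(64, 64)).cumsum(0).cumsum(1)   # correlated texture
noisy = rng.normal(size=(64, 64))                        # uncorrelated texture
print(variogram_loglog_slope(smooth), variogram_loglog_slope(noisy))
```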

Relevance:

30.00%

Publisher:

Abstract:

Starting with logratio biplots for compositional data, which are based on the principle of subcompositional coherence, and then adding weights, as in correspondence analysis, we rediscover Lewi's spectral map and many connections to analyses of two-way tables of non-negative data. Thanks to the weighting, the method also achieves the property of distributional equivalence.
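A minimal NumPy sketch of the weighted log-ratio computation as commonly formulated (a sketch under the assumption of a strictly positive table; function names are illustrative, and this is not code from the paper): the log-transformed table is double-centered with row and column masses as weights, then factored by a weighted SVD.

```python
import numpy as np

def weighted_logratio(N):
    """Weighted log-ratio analysis (spectral map) of a positive table N."""
    P = N / N.sum()
    r, c = P.sum(axis=1), P.sum(axis=0)      # row and column masses (weights)
    L = np.log(P)
    # Weighted double-centering: subtract weighted row and column means
    L = L - (L @ c)[:, None] - (r @ L)[None, :] + r @ L @ c
    S = np.sqrt(r)[:, None] * L * np.sqrt(c)[None, :]
    U, sv, Vt = np.linalg.svd(S, full_matrices=False)
    F = U * sv / np.sqrt(r)[:, None]         # principal row coordinates
    G = Vt.T * sv / np.sqrt(c)[:, None]      # principal column coordinates
    return F, G, sv

N = np.array([[10.0, 20, 30], [20, 15, 5], [30, 10, 40]])
F, G, sv = weighted_logratio(N)
print(np.round(sv, 4))
```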

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a field application of a high-level reinforcement learning (RL) control system for solving the action selection problem of an autonomous robot in a cable tracking task. The learning system is characterized by the use of a direct policy search method for learning the internal state/action mapping. Policy-only algorithms may suffer from long convergence times when dealing with real robotics. In order to speed up the process, the learning phase has been carried out in a simulated environment and, in a second step, the policy has been transferred to and tested successfully on a real robot. Future work plans to continue the learning process online on the real robot while it performs the mentioned task. We demonstrate the feasibility of the approach with real experiments on the underwater robot ICTINEU AUV.
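This is not the authors' learning system, but a minimal sketch of direct policy search in the REINFORCE style on a toy corridor task (all parameters illustrative): a tabular softmax policy is updated along the gradient of the expected return, which can first be done in simulation, as the abstract advocates, before transfer to a robot.

```python
import numpy as np

rng = np.random.default_rng(5)
n_states, n_actions, goal = 6, 2, 5          # toy corridor; actions: 0=left, 1=right
theta = np.zeros((n_states, n_actions))      # softmax policy parameters

def policy(s):
    e = np.exp(theta[s] - theta[s].max())
    return e / e.sum()

def episode(max_steps=30):
    s, traj = 0, []
    for _ in range(max_steps):
        a = rng.choice(n_actions, p=policy(s))
        s2 = max(0, min(goal, s + (1 if a == 1 else -1)))
        r = 1.0 if s2 == goal else -0.01     # small step cost, reward at goal
        traj.append((s, a, r))
        s = s2
        if s == goal:
            break
    return traj

alpha, discount = 0.1, 0.95
for _ in range(2000):                        # REINFORCE: theta += alpha * G * grad log pi
    traj, G = episode(), 0.0
    for s, a, r in reversed(traj):
        G = r + discount * G
        grad = -policy(s)                    # grad of log softmax: onehot(a) - pi(s)
        grad[a] += 1.0
        theta[s] += alpha * G * grad
print("P(right) per state:", np.round([policy(s)[1] for s in range(goal)], 2))
```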