983 results for Mixed-integer linear programming


Relevance:

20.00%

Publisher:

Abstract:

This work provides an assessment of layerwise mixed models using a least-squares formulation for the coupled electromechanical static analysis of multilayered plates. In agreement with three-dimensional (3D) exact solutions, due to compatibility and equilibrium conditions at the layer interfaces, certain mechanical and electrical variables must fulfill interlaminar C0 continuity, namely: displacements, in-plane strains, transverse stresses, electric potential, in-plane electric field components and transverse electric displacement (if no potential is imposed between layers). Hence, two layerwise mixed least-squares models are investigated here, with two different sets of chosen independent variables: Model A, developed earlier, fulfills a priori the interlaminar C0 continuity of all the aforementioned variables, taken as independent variables; Model B, newly developed here, reduces the number of independent variables, fulfilling a priori the interlaminar C0 continuity of displacements, transverse stresses, electric potential and transverse electric displacement, taken as independent variables. The predictive capabilities of both models are assessed by comparison with 3D exact solutions, considering multilayered piezoelectric composite plates of different aspect ratios, under an applied transverse load or surface potential. It is shown that both models are able to provide an accurate quasi-3D description of the static electromechanical response of multilayered plates for all aspect ratios.
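For quick reference, the two sets of independent variables can be summarized as follows; this is a schematic restatement of the abstract in standard notation, not taken verbatim from the paper:

```latex
% Interlaminar C^0-continuous independent variables of each model
% (u_i: displacements, \varepsilon_{\alpha\beta}: in-plane strains,
%  \sigma_{i3}: transverse stresses, \phi: electric potential,
%  E_\alpha: in-plane electric field, D_3: transverse electric displacement)
\begin{aligned}
\text{Model A:}\quad & \{\, u_i,\ \varepsilon_{\alpha\beta},\ \sigma_{i3},\ \phi,\ E_\alpha,\ D_3 \,\}\\
\text{Model B:}\quad & \{\, u_i,\ \sigma_{i3},\ \phi,\ D_3 \,\}
\end{aligned}
```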

Relevance:

20.00%

Publisher:

Abstract:

This paper studies several topics related to the concept of “fractional” that are not directly related to Fractional Calculus, but can help the reader pursue new research directions. We introduce the concepts of non-integer positional number systems, fractional sums, fractional powers of a square matrix, tolerant computing and FracSets, negative probabilities, fractional delay discrete-time linear systems, and the fractional Fourier transform.
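As a concrete taste of one of these topics, the sketch below computes a fractional power of a square matrix; SciPy is our choice of tool here, an assumption of this illustration rather than anything prescribed by the paper:

```python
# Fractional power of a square matrix: A^(1/2) such that A^(1/2) @ A^(1/2) = A.
import numpy as np
from scipy.linalg import fractional_matrix_power

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

half = fractional_matrix_power(A, 0.5)  # non-integer exponent
print(half @ half)                      # recovers A up to round-off
```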

Relevance:

20.00%

Publisher:

Abstract:

Hyperspectral imaging has become one of the main topics in remote sensing. Hyperspectral images comprise hundreds of spectral bands at different (almost contiguous) wavelength channels over the same area, generating large data volumes of several GBs per flight. This high spectral resolution can be used for object detection and to discriminate between different objects based on their spectral characteristics. One of the main problems in hyperspectral analysis is the presence of mixed pixels, which arise when the spatial resolution of the sensor is not able to separate spectrally distinct materials. Spectral unmixing is one of the most important tasks for hyperspectral data exploitation. However, unmixing algorithms can be computationally very expensive and power-consuming, which compromises their use in applications under on-board constraints. In recent years, graphics processing units (GPUs) have evolved into highly parallel and programmable systems. Several hyperspectral imaging algorithms have been shown to benefit from this hardware, taking advantage of the extremely high floating-point processing performance, compact size, huge memory bandwidth, and relatively low cost of these units, which make them appealing for on-board data processing. In this paper, we propose a parallel implementation of an augmented Lagrangian based method for unsupervised hyperspectral linear unmixing on GPUs using CUDA. The method, called simplex identification via split augmented Lagrangian (SISAL), identifies the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The efficient implementation of the SISAL method presented in this work exploits the GPU architecture at a low level, using shared memory and coalesced memory accesses.
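For reference, the linear mixing model that SISAL inverts can be written as follows (standard formulation, our notation):

```latex
% y_p: observed spectrum of pixel p, M: matrix of endmember signatures,
% a_p: fractional abundances of pixel p, n_p: noise.
y_p = M a_p + n_p, \qquad a_p \succeq 0, \qquad \mathbf{1}^{\top} a_p = 1
```

SISAL estimates M by fitting a minimum-volume simplex to the data, which is why it does not require pure pixels to be present.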

Relevance:

20.00%

Publisher:

Abstract:

One of the main problems of hyperspectral data analysis is the presence of mixed pixels due to the low spatial resolution of such images. Linear spectral unmixing aims at inferring pure spectral signatures and their fractions at each pixel of the scene. The huge data volumes acquired by hyperspectral sensors put stringent requirements on processing and unmixing methods. This letter proposes an efficient implementation of the method called simplex identification via split augmented Lagrangian (SISAL), which exploits the graphics processing unit (GPU) architecture at a low level using the Compute Unified Device Architecture. SISAL identifies the endmembers of a scene, i.e., it is able to unmix hyperspectral data sets in which the pure-pixel assumption is violated. The proposed implementation is performed in a pixel-by-pixel fashion using coalesced accesses to memory and exploiting shared memory to store temporary data. Furthermore, the kernels have been optimized to minimize thread divergence, therefore achieving high GPU occupancy. The experimental results obtained for simulated and real hyperspectral data sets reveal speedups of up to 49 times, which demonstrates that the GPU implementation can significantly accelerate the method's execution over big data sets while maintaining the method's accuracy.
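The reason a pixel-by-pixel GPU mapping works so well here is that, once the endmembers are known, each pixel can be unmixed independently. The CPU sketch below illustrates that per-pixel independence with a plain least-squares step; the names, shapes and solver are our own assumptions, not the letter's CUDA kernels:

```python
# Per-pixel unmixing reference: every column of Y is processed independently,
# which is what a pixel-per-thread GPU kernel exploits.
import numpy as np

def unmix_pixels(Y, M):
    """Least-squares abundance estimates for each pixel (column) of Y."""
    A, *_ = np.linalg.lstsq(M, Y, rcond=None)  # solves M a = y per column
    return A

rng = np.random.default_rng(0)
M = rng.random((50, 3))                      # 50 bands, 3 endmembers
A_true = rng.dirichlet(np.ones(3), 1000).T   # abundances sum to one
Y = M @ A_true + 0.001 * rng.standard_normal((50, 1000))
print(np.abs(unmix_pixels(Y, M) - A_true).max())  # small residual error
```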

Relevance:

20.00%

Publisher:

Abstract:

The development of high-spatial-resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of the observed data that yields statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, IFA performance. Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. Aiming at a lower computational complexity, some algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum-volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief overview of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
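The dependence induced by the sum-to-one abundance constraint, which is the chapter's central obstacle for ICA and IFA, is easy to verify numerically. The toy check below (our own illustration) draws Dirichlet abundances and shows the nonzero correlation that independence would forbid:

```python
# Abundance fractions sum to one, so they cannot be mutually independent:
# the constraint forces negative correlation between them.
import numpy as np

rng = np.random.default_rng(1)
abund = rng.dirichlet(alpha=[1.0, 1.0, 1.0], size=10000)  # rows sum to 1

print(np.corrcoef(abund, rowvar=False).round(2))
# Off-diagonal entries come out near -0.5, not 0 as ICA/IFA assume.
```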

Relevance:

20.00%

Publisher:

Abstract:

We study the effects of environmental and trade policies in an international duopoly serving two countries, with pollution abatement. The analysis is carried out in both mixed and privatized markets. The model has two stages: first, governments simultaneously choose environmental taxes and import tariffs; then, the firms compete in the market by choosing output levels for the domestic market and for export, as well as abatement levels.
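The timing of the game, restated schematically (our notation; the abstract specifies the stages but no functional forms):

```latex
% Stage 1: governments i = 1, 2 simultaneously set the environmental
%          (emission) tax t_i and the import tariff \tau_i.
% Stage 2: given (t_i, \tau_i), each firm i chooses domestic output q_i,
%          exports x_i and abatement a_i.
% Solved, as usual for two-stage games, by backward induction: the
% stage-2 first-order conditions (\pi_i being firm i's profit)
\frac{\partial \pi_i}{\partial q_i} = \frac{\partial \pi_i}{\partial x_i} = \frac{\partial \pi_i}{\partial a_i} = 0
% feed each government's stage-1 welfare maximization.
```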

Relevance:

20.00%

Publisher:

Abstract:

Submitted in partial fulfillment of the requirements for the degree of PhD in Mathematics, speciality in Statistics, at the Faculdade de Ciências e Tecnologia.

Relevance:

20.00%

Publisher:

Abstract:

The Member States of the European Union have been concerned with reducing the size of public administration in the economy, while making it much more efficient so as to promote economic growth. This article analyses the relationships between public expenditure and economic growth in 14 of the 15 European Union Member States, with the aim of determining the optimal size of general government, taking the Armey Curve as its theoretical basis. The results, for the period 1965-2007, suggest a growth-maximizing size of the public sector of 47.37% and 22.17% of GDP, when measured by total public expenditure and public consumption, respectively.
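The Armey-curve exercise behind figures like these is a quadratic regression of growth on government size, whose vertex gives the growth-maximizing share. A minimal sketch with synthetic data follows; the 47.37% and 22.17% figures above come from the paper's own 1965-2007 series, not from this illustration:

```python
# Armey curve: growth rises with government size up to a point, then falls.
# Fit growth = c0 + c1*size + c2*size^2 and take the vertex -c1/(2*c2).
import numpy as np

rng = np.random.default_rng(2)
size = rng.uniform(10, 60, 200)            # public spending, % of GDP
growth = 2.0 + 0.20 * size - 0.0025 * size**2 \
         + 0.3 * rng.standard_normal(200)  # synthetic data, not the paper's

c2, c1, c0 = np.polyfit(size, growth, 2)   # quadratic fit
print(f"growth-maximizing size: {-c1 / (2 * c2):.1f}% of GDP")  # about 40%
```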

Relevance:

20.00%

Publisher:

Abstract:

Competition between public and private firms exists in a range of industries such as telecommunications, electricity, natural gas and airlines, as well as in services including hospitals, banking and education. Some authors have studied mixed oligopolies under Cournot competition (firms move simultaneously), while others have considered Stackelberg models (firms move sequentially). Tomaru [1] analyzed, in a Cournot model, how decision-making on cost-reducing R&D investment by a domestic public firm is affected by privatization when it competes in the domestic market with a foreign firm. He shows that privatization of the domestic public firm lowers productive efficiency and deteriorates domestic social welfare. In this paper, we examine the same question but in a Stackelberg formulation instead of a Cournot one. The model is a three-stage game. In the first stage, the domestic firm chooses the amount of cost-reducing R&D investment. Then, the firms compete à la Stackelberg. Two cases are considered: (i) the domestic firm is the leader; (ii) the foreign firm is the leader. We show that the results obtained in [1] for Cournot competition are robust, in the sense that they also hold when firms move sequentially.
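The Stackelberg stage is solved by backward induction, as the symbolic sketch below illustrates for the simplest linear-demand, constant-cost specification; this is our own toy instance, not the paper's full model (which adds the R&D stage and the welfare analysis):

```python
# Stackelberg duopoly by backward induction with sympy:
# the follower's reaction is computed first, then the leader optimizes
# against it. Inverse demand p = 1 - q_l - q_f, constant unit costs.
import sympy as sp

q_l, q_f, c_l, c_f = sp.symbols('q_l q_f c_l c_f', nonnegative=True)
p = 1 - q_l - q_f

# Follower's best response to the leader's output:
br_f = sp.solve(sp.diff((p - c_f) * q_f, q_f), q_f)[0]

# Leader anticipates the reaction and maximizes its own profit:
pi_l = (1 - q_l - br_f - c_l) * q_l
q_l_star = sp.solve(sp.diff(pi_l, q_l), q_l)[0]
print(sp.simplify(q_l_star))   # equals (1 + c_f - 2*c_l)/2
```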

Relevance:

20.00%

Publisher:

Abstract:

In this manuscript we tackle the problem of semidistributed user selection with distributed linear precoding for sum-rate maximization in multiuser multicell systems. A set of adjacent base stations (BSs) form a cluster in order to perform coordinated transmission to cell-edge users, and coordination is carried out through a central processing unit (CU). However, the message exchange between BSs and the CU is limited to scheduling control signaling, and no user data or channel state information (CSI) exchange is allowed. In the considered multicell coordinated approach, each BS has its own set of cell-edge users and transmits only to one intended user, while interference to non-intended users at other BSs is suppressed by signal steering (precoding). We use two distributed linear precoding schemes: Distributed Zero Forcing (DZF) and Distributed Virtual Signal-to-Interference-plus-Noise Ratio (DVSINR). Considering multiple users per cell and the backhaul limitations, the BSs rely on local CSI to solve the user selection problem. First, we investigate how the signal-to-noise ratio (SNR) regime and the number of antennas at the BSs impact the effective channel gain (the magnitude of the channels after precoding) and its relationship with multiuser diversity. Considering that user selection must be based on the type of implemented precoding, we develop metrics of compatibility (estimates of the effective channel gains) that can be computed from local CSI at each BS and reported to the CU for scheduling decisions. Based on such metrics, we design user selection algorithms that can find a set of users that potentially maximizes the sum rate. Numerical results show the effectiveness of the proposed metrics and algorithms for different configurations of users and antennas at the base stations.
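To make the "effective channel gain" notion concrete, the sketch below applies a zero-forcing style projection computed from local CSI and reads off the gain the served user retains; this is a single-cell toy of our own, not the paper's DZF/DVSINR schemes or their CU signaling:

```python
# Effective channel gain after zero-forcing style precoding: steer toward
# the intended user inside the null space of the users to be protected.
import numpy as np

rng = np.random.default_rng(3)
n_ant = 4
H = (rng.standard_normal((3, n_ant))
     + 1j * rng.standard_normal((3, n_ant))) / np.sqrt(2)

h_intended = H[0]   # channel toward the served cell-edge user
H_victims = H[1:]   # channels toward users that must see no interference

# Projector onto the null space of the victim channels:
A = H_victims.conj().T
P = np.eye(n_ant) - A @ np.linalg.pinv(A)
w = P @ h_intended.conj()
w /= np.linalg.norm(w)                 # unit-power precoder

print(abs(h_intended @ w))             # effective channel gain after precoding
print(np.abs(H_victims @ w).max())     # ~1e-16: interference nulled
```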

Relevance:

20.00%

Publisher:

Abstract:

This work uses surface imprinting to design a novel smart plastic antibody material (SPAM) for haemoglobin (Hb). Charged binding sites are described here for the first time to tailor plastic antibody nanostructures for a large protein such as Hb. Its application in the design of small, portable and low-cost potentiometric devices is presented. The SPAM material was obtained by linking Hb to silica nanoparticles and allowing its ionic interaction with charged vinyl monomers. A neutral polymeric matrix was created around these and the imprinted protein was removed. Additional materials were designed in parallel to act as controls: a neutral imprinted material (NSPAM), obtained by removing the charged monomers from the procedure, and the non-imprinted (NI) versions of SPAM and NSPAM, obtained by removing the template. SEM analysis confirmed the surface modification of the silica nanoparticles. All materials were mixed with PVC/plasticizer and applied as selective membranes in potentiometric transduction. Electromotive force (emf) variations were detected only for selective membranes having a lipophilic anionic additive in the membrane. The presence of Hb inside these membranes was evident and confirmed by FTIR, optical microscopy and Raman spectroscopy. The best performance was found for SPAM-based selective membranes with an anionic lipophilic additive, at pH 5. The limits of detection were 43.8 mg mL^-1 and linear responses were obtained down to 83.8 mg mL^-1, with an average cationic slope of +40 mV per decade. Good selectivity was also observed against other coexisting biomolecules. The analytical application was conducted successfully, showing accurate and precise results.
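The figures of merit above follow the usual potentiometric calibration law, restated here in our notation for clarity:

```latex
% emf of the selective membrane versus Hb concentration:
E = E^{0} + S \,\log_{10} [\mathrm{Hb}]
% with a cationic slope S of about +40 mV/decade over the linear range
% (down to 83.8 mg mL^{-1}) and a detection limit of 43.8 mg mL^{-1}.
```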

Relevance:

20.00%

Publisher:

Abstract:

This paper employs the Lyapunov direct method for the stability analysis of fractional-order linear systems subject to input saturation. A new stability condition based on the saturation function is adopted for estimating the domain of attraction via an ellipsoid approach. To further improve this estimate, the auxiliary feedback is also supported by the concept of stability region. The advantages of the proposed method are twofold: (1) the problem is straightforward to handle, both in analysis and design, because the Lyapunov method is used; (2) the estimation leads to less conservative results. A numerical example illustrates the feasibility of the proposed method.
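For reference, the ellipsoidal estimate of the domain of attraction takes its standard form (our notation; the paper's specific saturation-dependent condition is not reproduced here):

```latex
% Quadratic Lyapunov function V(x) = x^T P x with P \succ 0; the estimate
% of the domain of attraction is the invariant ellipsoid
\mathcal{E}(P) = \{\, x \in \mathbb{R}^{n} \;:\; x^{\top} P x \le 1 \,\},
% and a less conservative estimate corresponds to enlarging \mathcal{E}(P)
% while the stability condition under saturation still holds.
```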

Relevance:

20.00%

Publisher:

Abstract:

This paper studies the statistical distributions of worldwide earthquakes from 1963 up to 2012. A Cartesian grid, dividing the Earth into geographic regions, is considered. Entropy and the Jensen–Shannon divergence are used to analyze and compare real-world data. Hierarchical clustering and multidimensional scaling techniques are adopted for data visualization. Entropy-based indices have the advantage of leading to a single parameter expressing the relationships between the seismic data. Classical and generalized (fractional) entropy and Jensen–Shannon divergence are tested. The generalized measures lead to a clear identification of patterns embedded in the data and contribute to a better understanding of earthquake distributions.
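The two classical measures the paper starts from are easy to state in code; the sketch below computes them for two placeholder histograms (the earthquake catalog itself is, of course, not reproduced here):

```python
# Shannon entropy and Jensen-Shannon divergence for discrete distributions.
import numpy as np

def entropy(p):
    """Shannon entropy in nats; zero-probability bins contribute nothing."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def jensen_shannon(p, q):
    """JS divergence: entropy of the mixture minus mean of the entropies."""
    m = 0.5 * (p + q)
    return entropy(m) - 0.5 * (entropy(p) + entropy(q))

p = np.array([0.70, 0.20, 0.10])   # e.g. normalized event counts, region A
q = np.array([0.40, 0.40, 0.20])   # ... and region B
print(entropy(p), jensen_shannon(p, q))
```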