979 results for Convergence Analysis
Abstract:
This paper analyzes the convergence of the constant modulus algorithm (CMA) in a decision feedback equalizer that uses only a feedback filter. Several works had already observed that the CMA outperforms the decision-directed algorithm in the adaptation of the decision feedback equalizer, but theoretical analysis has always proved difficult, especially because of the analytical challenges posed by the constant modulus criterion. In this paper, we surmount this obstacle by using a recent result on constant modulus (CM) analysis, first obtained in a linear finite impulse response context with the objective of comparing its solutions to those obtained through the Wiener criterion. The theoretical analysis presented here confirms the robustness of the CMA when applied to the adaptation of the decision feedback equalizer and also defines a class of channels for which the algorithm suffers from ill-convergence when initialized at the origin.
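For reference, the constant modulus criterion at the heart of this analysis is the standard CM cost (textbook form, not quoted from the paper itself):

```latex
% Constant modulus cost minimized stochastically by the CMA; y(n) is the
% equalizer output, a(n) the transmitted symbol, gamma the dispersion constant.
J_{\mathrm{CM}} = \mathbb{E}\!\left[\left(|y(n)|^{2}-\gamma\right)^{2}\right],
\qquad
\gamma = \frac{\mathbb{E}\!\left[|a(n)|^{4}\right]}{\mathbb{E}\!\left[|a(n)|^{2}\right]} .
```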
Abstract:
The increase in women's purchasing power has led some companies to adopt product differentiation strategies and to produce products specifically for the female public. The auto industry is not immune to this phenomenon, since women account for approximately half of automobile sales in the country. Considering the differences in consumption and behavior between women and men, the following question arises: are there differences between the choices that men associate with the automobile and those that women associate with it? Participants were presented with items found in people's everyday lives and valued by them, and were asked to choose and associate these items with the automobile. The analysis of the results revealed more similarities than differences between the choices associated with the automobile by men and by women. This similarity suggests that the representations, meanings, and values assigned to the car by men and women are alike, and thus a product differentiation strategy does not apply to the automotive industry.
Abstract:
The mid-crustal Alpine Schist in the central Southern Alps, New Zealand, has been exhumed during the past ~3 m.y. on the hanging wall of the oblique-slip Alpine Fault. These rocks underwent ductile deformation during their passage through the ~150-km-wide Pacific-Australia plate boundary zone. Likely to be Cretaceous in age, peak metamorphism predates the largely Pliocene and younger oblique convergence that continues to uplift the Southern Alps today. Late Cenozoic ductile deformation constructively reinforced a pre-existing fabric that was well oriented to accommodate a dextral-transpressive overprint. Quartz microstructures below a recently exhumed brittle-ductile transition zone reflect a late Cenozoic increment of ductile strain that was distributed across deeper levels of the Pacific Plate. Deformation was transpressive, including a dextral-normal shear component that bends and rotates a delaminated panel of Pacific Plate crust onto the oblique footwall ramp of the Alpine Fault. Progressive ductile shear in mylonites at the base of the Pacific Plate overprints earlier fabrics in a dextral-reverse sense, a deformation that accompanies translation of the schists up the Alpine Fault. Ductile shear along that structure affects not only the 1–2-km-thick section of Alpine mylonites, but is distributed across several kilometres of overlying nonmylonitic rocks.
Abstract:
In the last few decades, Central American countries have been making a significant effort to modernize their governments' legislation on both financial management and financial information systems. These countries aim to enhance the quality of public financial information in order to improve decision-making processes, reduce the level of corruption, and keep citizens informed. In this context, the purpose of this paper is twofold: first, to assess the degree of similarity between the financial information being developed by Central American governments and the recommendations set out by the IPSAS; and second, to analyse the efforts and strategies those countries are carrying out in the process of implementing those standards. To determine the differences between the information contained in the annual financial statements issued by national public authorities and the recommendations set out by the IPSAS, we conducted a deductive content analysis. In view of the results, we can say that the quality of the annual financial statements presented by the countries of Central America, compared with the IPSAS recommendations issued by the IFAC, is not sufficient. Hence, in order to bring about significant changes, it is still necessary to create new strategies for the implementation of the IPSAS.
Abstract:
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, in fulfilment of the requirements for the degree of Master in Biotechnology.
Abstract:
The most common techniques for stress analysis and strength prediction of adhesive joints involve analytical or numerical methods such as the Finite Element Method (FEM). However, the Boundary Element Method (BEM) is an alternative numerical technique that has been successfully applied to the solution of a wide variety of engineering problems. This work evaluates the applicability of the boundary element code BEASY as a design tool to analyze adhesive joints. The linearity of peak shear and peel stresses with the applied displacement is studied and compared between BEASY and the analytical model of Frostig et al., considering a bonded single-lap joint under tensile loading. The BEM results are also compared with FEM in terms of stress distributions. To evaluate the mesh convergence of BEASY, the influence of mesh refinement on the peak shear and peel stress distributions is assessed. Joint stress predictions are carried out numerically in BEASY and ABAQUS®, and analytically with the models of Volkersen, Goland and Reissner, and Frostig et al. The failure loads for each model are compared with experimental results. The preparation, processing, and mesh creation times are compared for all models. The BEASY results show good agreement with the conventional methods.
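The mesh-convergence assessment described above can be sketched generically. The sketch below is illustrative only; `peak_peel_stress` is a hypothetical stand-in for a BEASY or FEM run at a given refinement, not an actual BEASY API:

```python
def mesh_convergence(peak_peel_stress, n0=8, tol=0.01, max_doublings=6):
    """Double the mesh density until the peak stress changes by < tol (relative).

    peak_peel_stress: callable mapping element count -> peak stress (hypothetical
    wrapper around a solver run; no such interface is implied by the paper).
    """
    n, prev = n0, None
    for _ in range(max_doublings):
        s = peak_peel_stress(n)
        if prev is not None and abs(s - prev) / abs(prev) < tol:
            return n, s          # converged mesh density and stress value
        prev, n = s, 2 * n
    return n, prev               # not converged within the refinement budget


# Toy usage: a stress estimate that approaches 40 MPa as the mesh refines.
if __name__ == "__main__":
    estimate = lambda n: 40.0 * (1.0 + 1.0 / n)
    print(mesh_convergence(estimate))
```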
Abstract:
Purpose – This paper aims to analyze how different European countries cope with the European Energy Policy, which proposes a set of measures (free energy market, smart meters, energy certificates) to improve energy utilization and management in Europe. Design/methodology/approach – The paper first reports the general vision, regulations, and goals set up by Europe to implement the European Energy Policy. It then analyzes how some European countries are coping with these goals through financial, legal, economic, and regulatory measures. Finally, the paper draws a comparison between the countries to present a view of how Europe is responding to the emerging energy emergency of the modern world. Findings – Our analysis of different use cases (countries) showed that European countries are converging towards a common energy policy, even though some countries appear to be further behind than others. In particular, Southern European countries were slowed down by the world financial and economic crisis. Still, it appears that contingency plans were put into action, and Europe as a whole is proceeding steadily towards the common vision. Research limitations/implications – European countries are applying yet more cuts to the financing of green technologies, and it is not possible to predict clearly how each country's support for the European energy policy will evolve. Practical implications – Different countries applied the concepts and measures in different ways. The implementation of the European energy policy has to cope with the resulting plethora of regulations, and a company proposing enhancements to energy management must still possess robust knowledge of the single country before being able to export experience and know-how between European countries. Originality/value – Even though a few surveys on energy measures in Europe are already part of the state of the art, an organic analysis cutting across the different topics of the European Energy Policy is missing. Moreover, this paper highlights how European countries are converging on a common view, and provides some details on the differences between the countries, thus facilitating parties interested in cross-country export of experience and technology for energy management.
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and then hyperspectral unmixing falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm, vertex component analysis (VCA), to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
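The iterative orthogonal-projection step just described can be sketched compactly. The following is a minimal illustrative rendering in Python, not the authors' full VCA (which also includes SNR-dependent projection onto the signal subspace); all names are mine:

```python
import numpy as np

def extract_vertices(X, p, seed=0):
    """Sketch of simplex-vertex (endmember) extraction by orthogonal projection.

    X : (L, N) array, N spectral vectors with L bands (pure pixels assumed).
    p : number of endmembers to extract.
    Returns the column indices of the selected (purest) pixels.
    """
    rng = np.random.default_rng(seed)
    L, _ = X.shape
    E = np.zeros((L, p))                   # endmember signatures found so far
    indices = []
    for k in range(p):
        f = rng.standard_normal(L)         # random direction ...
        if k > 0:                          # ... made orthogonal to span(E[:, :k])
            Q, _ = np.linalg.qr(E[:, :k])
            f -= Q @ (Q.T @ f)
        i = int(np.argmax(np.abs(f @ X)))  # extreme of the projected data
        indices.append(i)
        E[:, k] = X[:, i]
    return indices

# Toy usage: 3 endmembers in 5 bands, 1000 mixed pixels with random abundances.
rng = np.random.default_rng(1)
M = rng.random((5, 3))                      # true endmember signatures
A = rng.dirichlet(np.ones(3), size=1000).T  # abundance fractions sum to one
X = np.hstack([M, M @ A])                   # prepend one pure pixel per endmember
print(extract_vertices(X, 3))               # should pick the pure columns 0-2
```

Because a linear functional over a simplex attains its extremes at the vertices, each projection selects a pure pixel, which is exactly the geometric fact the chapter exploits.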
Abstract:
Waves of globalization reflect historical technical progress and modern economic growth. The dynamics of this process are approached here using the multidimensional scaling (MDS) methodology to analyze the evolution of GDP per capita, international trade openness, life expectancy, and tertiary education enrollment in 14 countries. MDS provides the appropriate theoretical concepts and the exact mathematical tools to describe the joint evolution of these indicators of economic growth, globalization, welfare, and human development of the world economy from 1977 up to 2012. The polarization dance of the countries illuminates the convergence paths, potential warfare, and present-day rivalries in the global geopolitical scene.
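As a hedged illustration of the methodology (not the authors' code; the data shapes and names below are hypothetical), an MDS embedding of country indicator trajectories can be computed along these lines:

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical data: rows are countries, columns are standardized yearly values
# of the four indicators (GDP per capita, trade openness, life expectancy,
# tertiary enrollment) over 1977-2012.
rng = np.random.default_rng(42)
X = rng.standard_normal((14, 4 * 36))     # 14 countries, 4 indicators x 36 years

# Pairwise distances between country trajectories drive the MDS embedding.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)             # 2-D map: nearby points evolve similarly
print(coords.shape)                        # (14, 2)
```

Tracking how these 2-D positions shift when the embedding is recomputed over successive time windows is what produces the "polarization dance" the abstract refers to.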
Abstract:
This paper examines modern economic growth using the multidimensional scaling (MDS) method and state space portrait (SSP) analysis. Taking GDP per capita as the main indicator of economic growth and prosperity, the long-run perspective from 1870 to 2010 identifies the main similarities among the modern economic growth of 34 world partners and exemplifies the historical waving mechanics of the largest world economy, the USA. MDS reveals two main clusters among the European countries and their old offshore territories, and SSP identifies the Great Depression as a mild challenge to American global performance when compared to the Second World War and the 2008 crisis.
Abstract:
This paper investigates the extent of disparities amongst the provinces of China from the economic reform in 1978 up to the most recent year for which data are available. After a brief review of the theoretical and, in particular, recent empirical literature on regional inequality in China, it investigates whether or not the dynamic economic growth in China has been coupled with increasing disparities amongst the Chinese provinces. The paper utilises several models of convergence along the lines of those hypothesised by neoclassical economists. It employs per capita income and per capita consumption to identify possible absolute and conditional convergence since the economic reforms. The coverage and impact of the disparities, in terms of the relative size of the population affected, are then taken into account in the analysis of inequality in income and consumption.
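For reference, a sketch of the neoclassical convergence regressions such models typically follow (standard textbook notation, not taken from the paper):

```latex
% Absolute (unconditional) beta-convergence across provinces i over horizon T:
\frac{1}{T}\,\ln\frac{y_{i,T}}{y_{i,0}} \;=\; \alpha + \beta \ln y_{i,0} + \varepsilon_i ,
\qquad \beta < 0 \;\Rightarrow\; \text{convergence.}

% Conditional convergence adds province-specific controls X_i, so that
% provinces converge to their own steady states:
\frac{1}{T}\,\ln\frac{y_{i,T}}{y_{i,0}} \;=\; \alpha + \beta \ln y_{i,0} + \gamma^{\top} X_i + \varepsilon_i .
```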
Abstract:
This paper analyses the differential impact of human capital, in terms of different levels of schooling, on regional productivity and convergence. The potential existence of geographical spillovers of human capital is also considered by applying spatial panel data techniques. The empirical analysis of the Spanish provinces between 1980 and 2007 confirms the positive impact of human capital on regional productivity and convergence, but reveals no evidence of any positive geographical spillovers of human capital. In fact, in some specifications the spatial lag of tertiary studies has a negative effect on the variables under consideration.
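As a hedged sketch of the kind of specification such spatial panel analyses typically estimate (notation mine, not the paper's):

```latex
% Spatial Durbin-type panel model: productivity y_{it} depends on own human
% capital h_{it}, neighbours' outcomes, and neighbours' human capital, with
% W = (w_{ij}) a row-standardized spatial weights matrix:
y_{it} \;=\; \rho \sum_{j} w_{ij}\, y_{jt} \;+\; \beta^{\top} h_{it}
\;+\; \theta^{\top} \sum_{j} w_{ij}\, h_{jt} \;+\; \mu_i + \varepsilon_{it} .
```

A negative estimate on the spatially lagged tertiary-schooling term in this kind of specification corresponds to the finding the abstract reports.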
Abstract:
In this paper we propose a stabilized conforming finite volume element method for the Stokes equations. The convergence of the method is established, and optimal a priori error estimates in different norms are obtained by setting up an adequate connection between the finite volume and stabilized finite element formulations. A superconvergence result is also derived by using a postprocessing projection method. In particular, the stabilization of the continuous lowest equal-order pair finite volume element discretization is achieved by enriching the velocity space with local functions that do not necessarily vanish on the element boundaries. Finally, some numerical experiments that confirm the predicted behavior of the method are provided.
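For context, the Stokes system being discretized, in its standard form (the paper's exact formulation and scaling may differ):

```latex
% Stationary Stokes equations: u is the velocity, p the pressure,
% nu the viscosity, and f a body force on the domain Omega.
-\nu\,\Delta \mathbf{u} + \nabla p = \mathbf{f} \quad \text{in } \Omega, \qquad
\nabla \cdot \mathbf{u} = 0 \quad \text{in } \Omega, \qquad
\mathbf{u} = \mathbf{0} \quad \text{on } \partial\Omega .
```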