842 results for hidden borrowing


Relevance:

10.00%

Publisher:

Abstract:

With the increasing generation, storage and dissemination of information in recent years, the former problem of lack of information has turned into a problem of extracting useful knowledge from the available information. Visual representations of abstract information have been used to aid the interpretation of data and to reveal otherwise hidden patterns. Information visualization seeks to amplify human cognition by exploiting human visual capabilities, making abstract information perceptible and providing the means for a human to absorb growing amounts of information with his or her perceptual abilities. The goal of data clustering techniques is to divide a data set into several groups, so that similar data items are placed in the same group and dissimilar ones in different groups. More specifically, constrained clustering aims to incorporate a priori knowledge into the clustering process, in order to improve clustering quality and, simultaneously, to find solutions suited to specific tasks and interests. This dissertation studies the Interactive Visual Clustering approach, which allows the user, by interacting with a visual representation of the information, to incorporate prior knowledge about the data domain and thus steer the resulting clustering towards his or her goals. The approach combines and extends techniques from interactive information visualization, force-directed graph drawing, and constrained clustering. To evaluate the performance of different user-interaction strategies, comparative studies are carried out on synthetic and real data sets.
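To make the constrained-clustering idea concrete, the following is a minimal sketch in Python (not the dissertation's actual algorithm) of a COP-KMeans-style assignment step, in which user-supplied must-link and cannot-link pairs, such as those gathered through interaction with the visual representation, restrict which cluster each point may join; the data and constraint lists are hypothetical.

import numpy as np

def violates(idx, cluster, labels, must_link, cannot_link):
    # An assignment is infeasible if it separates a must-link pair or
    # joins a cannot-link pair whose partner is already placed.
    for a, b in must_link:
        other = b if a == idx else a if b == idx else None
        if other is not None and labels[other] not in (-1, cluster):
            return True
    for a, b in cannot_link:
        other = b if a == idx else a if b == idx else None
        if other is not None and labels[other] == cluster:
            return True
    return False

def constrained_assignment(X, centers, must_link, cannot_link):
    # One constrained assignment pass: each point takes the nearest
    # center that does not break any pairwise constraint.
    labels = np.full(len(X), -1)
    for i, x in enumerate(X):
        for c in np.argsort(np.linalg.norm(centers - x, axis=1)):
            if not violates(i, c, labels, must_link, cannot_link):
                labels[i] = c
                break
    return labels

In the interactive setting described above, the constraint lists would grow as the user drags points together or apart in the force-directed layout, and the assignment pass would be rerun to update the clustering.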

Relevance:

10.00%

Publisher:

Abstract:

Master's degree in Accounting

Relevance:

10.00%

Publisher:

Abstract:

Diane Arbus' photographs are mainly about difference. Most of the time she is trying '[…] to suppress, or at least reduce, moral and sensory queasiness' (Sontag 1977: 40) in order to represent a world where the subject of the photograph is not merely the 'other' but also the I. Her technique does not coax her subjects into natural poses; instead she encourages them to be strange and awkward, so that in posing for her the revelation of the self is identified with what is odd. This paper aims at understanding a geography of difference that is, at the same time, one of resistance, since Diane Arbus reveals what was forcefully hidden by bringing it into the light in such a way that it is impossible to ignore. Her photographs display a poetic beauty that belongs not only to the 'I' but also to the 'eye'. The world that is depicted is one in which we are all the same. She "atomizes" reality by separating each element, and 'Instead of showing identity between things which are different […] everybody is shown to look the same.' (Sontag 1977: 47). Furthermore, this paper analyses some of Arbus' photographs so as to substantiate this point of view, arguing that, between rejecting and reacting against what is standardized, she does not forget the geography of the body, which is also a geography of the self. While creating a new imagetic topos, where what is trivial becomes divine, she also presents the frailty of others as our own.

Relevance:

10.00%

Publisher:

Abstract:

Seismic data are difficult to analyze, and classical mathematical tools reveal strong limitations in exposing hidden relationships between earthquakes. In this paper, we study earthquake phenomena from the perspective of complex systems. Global seismic data covering the period from 1962 to 2011 are analyzed. The events, characterized by their magnitude, geographic location and time of occurrence, are divided into groups, either according to the Flinn-Engdahl (F-E) seismic regions of the Earth or using a rectangular grid based on latitude and longitude coordinates. Two methods of analysis are considered and compared. In the first, the distributions of magnitudes are approximated by Gutenberg-Richter (G-R) distributions and the fitted parameters are used to reveal the relationships among regions. In the second, the mutual information is calculated and adopted as a measure of similarity between regions. In both cases, clustering analysis is applied to generate visualization maps, providing an intuitive and useful representation of the complex relationships present in the seismic data. Such relationships might not be perceived on classical geographic maps, so the generated charts are a valid alternative to other visualization tools for understanding the global behavior of earthquakes.
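As a rough illustration of the two methods (a sketch under assumed data, not the authors' code), the Gutenberg-Richter b-value of a region can be estimated with Aki's maximum-likelihood formula, and the mutual information between two regions can be computed from a joint histogram, assuming each region's events have first been reduced to equal-length magnitude series on a common time grid, which is an assumption not detailed in the abstract.

import numpy as np

def gr_b_value(mags, m_min):
    # Aki's maximum-likelihood estimate of the Gutenberg-Richter b-value
    # for all events at or above the completeness magnitude m_min.
    m = np.asarray(mags, dtype=float)
    m = m[m >= m_min]
    return np.log10(np.e) / (m.mean() - m_min)

def mutual_information(x, y, bins=10):
    # Histogram-based mutual information between two magnitude series;
    # larger values indicate more similar (less independent) regions.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])))

A matrix of such pairwise mutual-information values between regions is what the clustering and visualization steps would then operate on.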

Relevance:

10.00%

Publisher:

Abstract:

Catastrophic events, such as wars and terrorist attacks, tornadoes and hurricanes, earthquakes, tsunamis, floods and landslides, are always accompanied by a large number of casualties. The size distributions of these casualties have separately been shown to follow approximate power-law (PL) distributions. In this paper, we analyze the statistical distributions of the number of victims of catastrophic phenomena, in particular terrorism, and find double-PL behavior, meaning that the data sets are better approximated by two PLs instead of a single one. We plot the PL parameters corresponding to several events and observe an interesting pattern in the charts, where the lines that connect each pair of points defining the double PLs are almost parallel to each other. A complementary analysis is performed by computing the entropy of the data. The results reveal relationships hidden in the data that may lead to a future comprehensive explanation of these phenomena.
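A minimal sketch of the double-PL idea (with hypothetical data; the paper's actual fitting procedure may differ) is to rank the casualty counts, scan candidate breakpoints, and fit a separate power law to each segment in log-log coordinates.

import numpy as np

def fit_double_pl(sizes):
    # Fit two power-law segments to a rank-size distribution. Returns
    # (breakpoint_rank, slope_low, slope_high) from least-squares fits in
    # log-log coordinates, scanning all candidate breakpoints. Assumes the
    # input has at least seven observations.
    x = np.log(np.arange(1, len(sizes) + 1))   # log rank
    y = np.log(np.sort(sizes)[::-1])           # log size, descending
    best = (None, None, None, np.inf)
    for k in range(3, len(x) - 3):             # keep >= 3 points per segment
        s1, b1 = np.polyfit(x[:k], y[:k], 1)
        s2, b2 = np.polyfit(x[k:], y[k:], 1)
        err = np.sum((np.polyval([s1, b1], x[:k]) - y[:k]) ** 2) \
            + np.sum((np.polyval([s2, b2], x[k:]) - y[k:]) ** 2)
        if err < best[3]:
            best = (k, s1, s2, err)
    return best[:3]

The two fitted slopes correspond to the pair of PL exponents whose connecting lines, plotted across several events, form the near-parallel pattern described above.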

Relevance:

10.00%

Publisher:

Abstract:

Final Master's project for obtaining the degree of Master in Informatics and Computer Engineering

Relevance:

10.00%

Publisher:

Abstract:

This study describes how the ultraviolet spectral bands change as the slit width varies from 0.1 to 5.0 nm over the spectral range of 200–400 nm. The analysis of the spectral bands is carried out using the multidimensional scaling (MDS) approach to uncover the latent spectral background. This approach indicates that a 0.1 nm slit width gives higher-order noise together with better spectral detail, whereas a 5.0 nm slit width gives higher peak amplitude and lower-order noise together with poor spectral detail. Under these conditions, the main problem is to find the relationship between the spectral band properties and the slit width. To this end, the MDS tool is used to recognize the hidden information in the ultraviolet spectra of sildenafil citrate recorded on a Shimadzu UV–VIS 2550, which is regarded as the best double monochromator instrument in the world. The proposed mathematical approach yields rich findings for the efficient use of the spectrophotometer in qualitative and quantitative studies.
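As an illustration of the approach (a sketch with a hypothetical file name, not the study's code), scikit-learn's MDS can embed the spectra recorded at different slit widths into two dimensions, so that spectrally similar slit widths land close together in the map.

import numpy as np
from sklearn.manifold import MDS

# spectra: rows are absorbance curves of sildenafil citrate recorded at
# different slit widths (0.1 ... 5.0 nm) over 200-400 nm; hypothetical file.
spectra = np.loadtxt("sildenafil_slit_widths.csv", delimiter=",")

# Metric MDS on pairwise Euclidean distances between the spectra; nearby
# points in the 2-D embedding correspond to spectrally similar slit widths.
embedding = MDS(n_components=2, dissimilarity="euclidean",
                random_state=0).fit_transform(spectra)
print(embedding)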

Relevance:

10.00%

Publisher:

Abstract:

We propose a wireless medium access control (MAC) protocol that provides static-priority scheduling of messages in a guaranteed collision-free manner. Our protocol supports multiple broadcast domains, resolves the wireless hidden terminal problem and allows for parallel transmissions across a mesh network. Arbitration of messages is achieved without the notion of a master coordinating node, global clock synchronization or out-of-band signaling. The protocol relies on bit-dominance similar to what is used in the CAN bus except that in order to operate on a wireless physical layer, nodes are not required to receive incoming bits while transmitting. The use of bit-dominance efficiently allows for a much larger number of priorities than would be possible using existing wireless solutions. A MAC protocol with these properties enables schedulability analysis of sporadic message streams in wireless multihop networks.
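The bit-dominance arbitration can be sketched as a tournament over priority bits, in the style of the CAN bus (a simplified model that ignores the wireless-specific details the protocol addresses, such as nodes not receiving while transmitting).

def arbitrate(priorities, nbits=16):
    # CAN-style bitwise arbitration: priorities are sent MSB-first, a 0 bit
    # is dominant, and a node that sends a recessive 1 while the channel
    # carries a dominant 0 withdraws. The lowest priority value wins.
    contenders = set(range(len(priorities)))
    for bit in reversed(range(nbits)):
        sent = {i: (priorities[i] >> bit) & 1 for i in contenders}
        channel = min(sent.values())   # dominant 0 overrides recessive 1
        contenders = {i for i in contenders if sent[i] == channel}
    return contenders

# Three nodes contend for the medium; node 1 (priority 5) survives every round.
print(arbitrate([12, 5, 9]))   # -> {1}

Because arbitration resolves one bit per round, the number of distinct priorities grows exponentially with the number of arbitration bits, which is why bit-dominance supports far more priority levels than contention schemes that arbitrate in a single step.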

Relevance:

10.00%

Publisher:

Abstract:

The potential of the electrocardiographic (ECG) signal as a biometric trait has been ascertained in the literature over the past decade. The inherent characteristics of the ECG make it an interesting biometric modality, given its universality, intrinsic aliveness detection, continuous availability, and inbuilt hidden nature. These properties enable the development of novel applications, where non-intrusive and continuous authentication are critical factors. Examples include, among others, electronic trading platforms, the gaming industry, and the auto industry, in particular for car sharing programs and fleet management solutions. However, there are still some challenges to overcome in order to make the ECG a widely accepted biometric. In particular, the questions of uniqueness (inter-subject variability) and permanence over time (intra-subject variability) are still largely unanswered. In this paper we focus on the uniqueness question, presenting a preliminary study of our biometric recognition system, testing it on a database encompassing 618 subjects. We also performed tests with subsets of this population. The results reinforce that the ECG is a viable trait for biometrics, having obtained an Equal Error Rate of 9.01% and an Error of Identification of 15.64% for the entire test population.
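The reported figures can be reproduced in form (not in value) by the standard Equal Error Rate computation; this is a generic sketch over hypothetical genuine and impostor score arrays, assuming higher scores mean a better match.

import numpy as np

def equal_error_rate(genuine, impostor):
    # Sweep thresholds over all observed scores and return the operating
    # point where the false-accept rate (impostors passing) meets the
    # false-reject rate (genuine users failing).
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0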

Relevance:

10.00%

Publisher:

Abstract:

Paper presented at the ECKM 2010 – 11th European Conference on Knowledge Management, 2-3 September, 2010, Famalicão, Portugal. URL: http://www.academic-conferences.org/eckm/eckm2010/eckm10-home.htm

Relevance:

10.00%

Publisher:

Abstract:

All everyday activities take place in space, and it is around space that information and knowledge revolve. These are the key elements in the organisation of territories. Their creation, use and distribution should therefore occur in a balanced way throughout the whole territory, in order to allow all individuals to participate in an egalitarian society in which the flow of knowledge can take precedence over the flow of interests. The information society depends, to a large extent, on the technological capacity to disseminate information, and consequently knowledge, throughout the territory, thereby creating the conditions for a more balanced development from both the social and the economic points of view and avoiding the existence of info-exclusion territories. The Internet should therefore be considered more than a mere technology, given that its importance goes well beyond the frontiers of culture and society. It is already a part of daily life and of the new forms of thinking and transmitting information, making it a basic necessity, essential for full socio-economic development. Its role as a platform for the creation and distribution of content is regarded as an indispensable element for education in today's society, since it makes information a much more easily acquired good. '…in the same way that the new technologies of generation and distribution of energy allowed factories and large companies to establish themselves as the organisational bases of industrial society, so the internet today constitutes the technological base of the organisational form that characterises the Information Era: the network' (CASTELLS, 2004: 15). The changes taking place today in regional and urban structures are increasingly evident, due to a combination of factors such as faster means of transport, more efficient telecommunications, and other cheaper and more advanced technologies of information and knowledge. Although their impact on society is obvious, society itself also has a strong influence on the evolution of these technologies. And although physical distance has lost much of its power to explain particular phenomena of the economy and of society, other aspects, such as telecommunications, new forms of mobility, networks of innovation, the internet and cyberspace, have become more important and are the subject of study and profound analysis. Geographical information science allows a much more rigorous analysis of these problems, integrating in a more balanced way the concepts of place, space and time. Among the traditional disciplines that have already found their place in this process of research and analysis, special attention should be given to a geography of new spaces which, while not being a geography of 'innovation', of the 'Internet', or even a 'virtual' geography, can be defined as a geography of the 'Information Society', encompassing not only technological aspects but also a socio-economic approach. According to the latest European statistics, Portugal shows a deficit in information and knowledge dissemination compared with its European partners. Some of the causes are very well identified - low education levels, weak investment in innovation and R&D (in both the private and public sectors) - but others seem to be hidden behind socio-economic and technological factors.
The choice of Portugal as the case study therefore arose naturally, in a difficult quest to find the major causes of these territorial asymmetries. The substantial amount of data needed for this work was very difficult to obtain, and for the islands of Madeira and the Azores it was insufficient, so only Continental Portugal was considered. In an effort to understand the various aspects of the Geography of the Information Society, and bearing in mind the increasingly generalised use of information technologies together with the range of technologies available for the dissemination of information, it is important to: (i) reflect on the geography of the new socio-technological spaces; (ii) evaluate the potential for the dissemination of information and knowledge through the selection of variables that allow us to determine the dynamics of a given territory or region; (iii) define a Geography of the Information Society in Continental Portugal.

Relevance:

10.00%

Publisher:

Abstract:

Dissertation presented at the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa for obtaining the degree of Master in Electrical and Computer Engineering

Relevance:

10.00%

Publisher:

Abstract:

The application of mathematical methods and computer algorithms to the analysis of economic and financial data series aims to give empirical descriptions of the hidden relations between many complex or unknown variables and systems. This strategy avoids the requirement of building models based on a set of 'fundamental laws', which is the usual paradigm for studying phenomena in physics and engineering. In spite of this shortcut, financial series prove hard to tackle, involving complex memory effects and an apparently chaotic behaviour. Several measures for describing these objects have been adopted by market agents but, due to their simplicity, they are not capable of coping with the diversity and complexity embedded in the data. It is therefore important to propose new measures that, on the one hand, are easily interpretable by standard personnel and, on the other hand, capture a significant part of the dynamical effects.

Relevance:

10.00%

Publisher:

Abstract:

Cosmic microwave background (CMB) radiation is the imprint of an early stage of the Universe, and the investigation of its properties is crucial for understanding the fundamental laws governing the structure and evolution of the Universe. Measurements of the CMB anisotropies are decisive for cosmology, since any cosmological model must explain them. The brightness, strongest at microwave frequencies, is almost uniform in all directions, but tiny variations reveal a spatial pattern of small anisotropies. Active research is under way seeking better interpretations of the phenomenon. This paper analyses the recent data from the perspective of fractional calculus. By taking advantage of the inherent memory of fractional operators, some hidden properties are captured and described.
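The 'inherent memory' of fractional operators can be made concrete with the Grünwald-Letnikov fractional derivative, whose binomial weights decay slowly so that every past sample keeps contributing; this is a generic numerical sketch, not the paper's specific formulation.

import numpy as np

def gl_fractional_diff(x, alpha, h=1.0):
    # Grunwald-Letnikov fractional derivative of order alpha. The weights
    # w[k] decay slowly, so the value at each point depends on the whole
    # past of the signal, which is the memory property exploited when
    # analysing long-range structure in data such as the CMB measurements.
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (k - 1 - alpha) / k   # recursive binomial weights
    y = np.empty(n)
    for t in range(n):
        y[t] = np.dot(w[: t + 1], x[t::-1]) / h ** alpha
    return y

For alpha = 1 the weights reduce to the ordinary first difference; for fractional alpha, every earlier sample contributes with a power-law decaying weight.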

Relevance:

10.00%

Publisher:

Abstract:

The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]; the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17], while the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data into statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which does not hold for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step, to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, in which abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
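As a concrete illustration of the linear mixing model and its least-squares inversion discussed above (a sketch with synthetic data and assumed, known endmember signatures; the chapter itself treats the harder blind case), abundances can be recovered by nonnegative least squares, with the sum-to-one constraint enforced through an appended, heavily weighted row.

import numpy as np
from scipy.optimize import nnls

# Linear mixing model: each pixel spectrum y is M @ a + noise, where the
# columns of M are the endmember signatures and a holds the abundances.
L, p = 200, 3                        # bands, endmembers (hypothetical sizes)
rng = np.random.default_rng(0)
M = rng.uniform(0.0, 1.0, (L, p))    # stand-in endmember matrix
a_true = np.array([0.6, 0.3, 0.1])   # abundances: nonnegative, sum to one
y = M @ a_true + 0.01 * rng.standard_normal(L)

# Nonnegative least-squares inversion; full additivity (sum-to-one) is
# enforced by appending a heavily weighted constraint row to the system.
delta = 10.0
M_aug = np.vstack([M, delta * np.ones((1, p))])
y_aug = np.append(y, delta)
a_hat, _ = nnls(M_aug, y_aug)
print(a_hat)                         # close to a_true

The weight delta trades off fidelity to the spectra against strictness of the sum-to-one constraint; the blind methods sketched in the chapter must additionally estimate M itself.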