826 results for Geometric representation
Abstract:
Electrocardiogram (ECG) biometrics are a relatively recent trend in biometric recognition, with at least 13 years of development in peer-reviewed literature. Most of the proposed biometric techniques perform classification on features extracted either from heartbeats or from ECG-based transformed signals. The best representation is yet to be decided. This paper studies an alternative representation, a dissimilarity space, based on the pairwise dissimilarity between templates and subjects' signals. Additionally, this representation can make use of ECG signals sourced from multiple leads. Configurations of three leads are tested and contrasted with single-lead experiments. Using the same k-NN classifier, the results proved superior to those obtained through a similar algorithm that does not employ a dissimilarity representation. The best authentication EER was as low as 1.53% for a database of 503 subjects. However, the use of extra leads did not prove advantageous.
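The representation described above can be illustrated with a short sketch (not the paper's implementation): each signal is mapped to a vector of distances to a fixed set of prototype templates, and classification is a plain k-NN vote in that space. The Euclidean distance and the function names are assumptions for illustration only.

```python
import numpy as np

def dissimilarity_space(signals, prototypes):
    """Map each signal to a vector of pairwise dissimilarities,
    one coordinate per prototype template."""
    return np.array([[np.linalg.norm(s - p) for p in prototypes]
                     for s in signals])

def knn_predict(train_X, train_y, query, k=3):
    """Plain k-NN majority vote in the dissimilarity space."""
    d = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(d)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

Multi-lead configurations would simply concatenate the per-lead dissimilarity coordinates into one feature vector before the k-NN step.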
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixing of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions.
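The linear mixing model just described can be written as a short numerical sketch: an observed pixel is the endmember signature matrix times the abundance vector plus noise, with abundances non-negative and summing to one. The band count, endmember count, and noise level below are hypothetical values for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical endmember signatures: 3 materials sampled in 5 bands
# (columns of M are the endmember spectra).
M = rng.uniform(0.0, 1.0, size=(5, 3))

# Abundance fractions: non-negative and summing to one -- the
# constraint that makes them statistically dependent.
a = np.array([0.6, 0.3, 0.1])

# Observed mixed pixel under the linear mixing model, plus sensor noise.
pixel = M @ a + rng.normal(0.0, 0.01, size=5)
```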
Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures.
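The constrained least-squares idea mentioned among the solutions above can be sketched for the known-endmember case: minimise the residual subject to the sum-to-one constraint via a Lagrange multiplier. This is a minimal illustration, not the method of Ref. [26]; the function name is hypothetical and non-negativity is not enforced here.

```python
import numpy as np

def scls_unmix(M, x):
    """Sum-to-one constrained least squares: minimise ||M a - x||^2
    subject to 1^T a = 1, via the closed-form Lagrange solution."""
    G_inv = np.linalg.inv(M.T @ M)
    a_ls = G_inv @ M.T @ x            # unconstrained LS solution
    ones = np.ones(M.shape[1])
    # shift a_ls along G_inv @ 1 so the abundances sum to one
    lam = (1.0 - ones @ a_ls) / (ones @ G_inv @ ones)
    return a_ls + lam * (G_inv @ ones)
```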
The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that, in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.
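The skewer-projection step of PPI described above can be sketched as follows. This is a simplified illustration, not the reference implementation: the MNF preprocessing step is omitted and the function name is hypothetical.

```python
import numpy as np

def ppi_scores(X, n_skewers=500, seed=0):
    """Pixel purity index sketch: project every spectral vector onto
    random 'skewer' directions and count how often each pixel is an
    extreme of the projection. Highest scores = purest pixels."""
    rng = np.random.default_rng(seed)
    n_pixels, n_bands = X.shape
    scores = np.zeros(n_pixels, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.normal(size=n_bands)
        proj = X @ skewer
        scores[np.argmin(proj)] += 1   # extreme at one end of the skewer
        scores[np.argmax(proj)] += 1   # extreme at the other end
    return scores
```

Only points on the convex hull of the data cloud can ever be extremes of a projection, which is why pure pixels (the simplex vertices) accumulate the highest counts.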
ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter estimate is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data.
The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet its computational complexity is between one and two orders of magnitude lower than that of N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
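The iterative projection just described can be sketched as follows. This is a simplified illustration of the idea, not the actual VCA implementation: the endmember count p is assumed known, noise and the signal-subspace estimation step are ignored, and the function name is hypothetical.

```python
import numpy as np

def extract_endmembers(X, p, seed=0):
    """Sketch of the iterative step: repeatedly project the data onto a
    direction orthogonal to the span of the endmembers found so far and
    take the extreme of the projection as the next endmember index."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    E = np.zeros((d, 0))              # endmembers found so far (columns)
    idx = []
    for _ in range(p):
        w = rng.normal(size=d)
        if E.shape[1] > 0:
            # remove the component of w lying in span(E), so already
            # found endmembers project to (near) zero
            w = w - E @ np.linalg.lstsq(E, w, rcond=None)[0]
        proj = X @ w
        i = int(np.argmax(np.abs(proj)))   # extreme of the projection
        idx.append(i)
        E = np.column_stack([E, X[i]])
    return idx
```

Because the data lie in a simplex, the extreme of each projection falls on a vertex, so the loop recovers pure-pixel indices one per iteration.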
Abstract:
The concepts and instruments required for the teaching and learning of geometric optics are introduced in the didactic process without a proper didactic transposition. This claim is secured by the ample evidence of both wide- and deep-rooted alternative concepts on the topic. Didactic transposition is a theory that comes from a reflection on the teaching and learning process in mathematics but has been used in other disciplinary fields. It will be used in this work in order to clear up the main obstacles in the teaching-learning process of geometric optics. We proceed to argue that, since Newton's approach to optics in Book I of his Opticks is independent of the corpuscular or undulatory nature of light, it is the most suitable for a constructivist learning environment. However, Newton's theory must be subject to a proper didactic transposition to help overcome the aforementioned alternative concepts. We then describe our didactic transposition, which creates knowledge to be taught using a dialogical process between students' previous knowledge, the history of optics, and the desired outcomes in geometrical optics in an elementary pre-service teacher training course. Finally, we use the scheme-facet structure of knowledge both to analyse and discuss our results and to illuminate shortcomings that must be addressed in the next stage of the inquiry.
Abstract:
Dissertation presented at the Faculdade de Ciência e Tecnologia, Universidade Nova de Lisboa, to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Proceedings of the 10th Conference on Dynamical Systems Theory and Applications
Abstract:
Dissertation to obtain the degree of Master in Electrical and Computer Engineering
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation to obtain the degree of Master in Informatics Engineering
Abstract:
Dissertation presented to obtain the Ph.D. degree in Biology, Computational Biology.
Staging the Scientist: The Representation of Science and its Processes in American and British Drama
Abstract:
Dissertation presented in fulfillment of the requirements for the degree of Master in English and North American Studies
Abstract:
This dissertation presents an approach to three-dimensional obstacle detection for all-terrain robots. Given the huge amount of acquired information, the adversities such environments present to an autonomous system, and the swiftness consequently required of each navigation decision, it is imperative that the 3-D perception system map obstacles and passageways as swiftly and in as much detail as possible. This document presents a hybrid approach that brings the best of several methods together, combining the lightness of less meticulous analyses with the detail brought by more thorough ones. The former is realized by a terrain-slope mapping system built upon a low-resolution volumetric representation of the surrounding occupancy. For the latter's detailed evaluation, two novel metrics were conceived to discriminate the small depth discrepancies found between a range scanner's beam distance measurements. The hybrid solution resulting from the conjunction of these two representations provides a reliable answer to traversability mapping and a robust discrimination of penetrable vegetation from real obstructions. Two distinct robotic platforms offered the possibility of testing the hybrid approach in very different applications: a boat, under a European project, the ECHORD Riverwatch, and a terrestrial four-wheeled robot for a national project, the Introsys Robot.
Abstract:
The term res publica (literally “thing of the people”) was coined by the Romans to translate the Greek word politeia, which, as we know, referred to a political community organised in accordance with certain principles, amongst which the notion of the “good life” (as against exclusively private interests) was paramount. This ideal also came to be known as political virtue. To achieve it, it was necessary to combine the best of each “constitutional” type and avoid their worst aspects (tyranny, oligarchy and ochlocracy). Hence, the term acquired from the Greeks a sense of being a “mixed” and “balanced” system. Anyone who was entitled to citizenship could participate in the governance of the “public thing”. This implied the institutionalization of open debate and confrontation between interested parties as a way of achieving the consensus necessary to ensure that man the political animal, who fought with words and reason, prevailed over his “natural” counterpart. These premises lie at the heart of the project which is now being presented under the title of Res Publica: Citizenship and Political Representation in Portugal, 1820-1926. The fact that it is integrated into the centenary commemorations of the establishment of the Republic in Portugal is significant, as it was the idea of revolution – with its promise of rupture and change – that inspired it. However, it has also sought to explore events that could be considered precursors of democratization in the history of Portugal, namely the vintista, setembrista and patuleia revolutions. It is true that the republican regime was opposed to the monarchic one. However, although the thesis that monarchy would inevitably lead to tyranny had held sway for centuries, it had also been long believed that the monarchic system could be as “politically virtuous” as a republic (in the strict sense of the word) provided that power was not concentrated in the hands of a single individual.
Moreover, various historical experiments had shown that republics could also degenerate into Caesarism and different kinds of despotism. Thus, when absolutism began to be overturned in continental Europe in the name of the natural rights of man and the new social pact theories, initiating the difficult process of (written) constitutionalization, the monarchic principle began to be qualified as a “monarchy hedged by republican institutions”, a situation in which not even the king was exempt from isonomy. This context justifies the time frame chosen here, as it captures the various changes and continuities that run through it. Having rejected the imperative mandate and the reinstatement of the model of corporative representation (which did not mean that, in new contexts, this might not be revived, or that the second chamber established by the Constitutional Charter of 1826 might not be given another lease of life), a new power base was convened: national sovereignty, a precept that would be shared by the monarchic constitutions of 1822 and 1838, and by the republican one of 1911. This followed the French example (manifested in the monarchic constitution of 1791 and in the Spanish constitution of 1812), as not even republicans entertained a tradition of republicanism based upon popular sovereignty. This enables us to better understand the rejection of direct democracy and universal suffrage, and also the long incapacitation (concerning voting and standing for office) of the vast body of “passive” citizens, justified by “enlightened”, property- and gender-based criteria. Although the republicans had promised in the propaganda phase to alter this situation, they ultimately failed to do so. Indeed, throughout the whole period under analysis, the realisation of the potential of national sovereignty was mediated above all by the individual citizen through his choice of representatives. 
However, this representation was indirect and took place at national level, in the hope that action would be motivated not by particular local interests but by the common good, as dictated by reason. This was considered the only way for the law to be virtuous, a requirement that was also manifested in the separation and balance of powers. As sovereignty was postulated as single and indivisible, so would be the nation that gave it soul and the State that embodied it. Although these characteristics were common to foreign paradigms of reference, in Portugal, the constitutionalization process also sought to nationalise the idea of Empire. Indeed, this had been the overriding purpose of the 1822 Constitution, and it persisted, even after the loss of Brazil, until decolonization. Then, the dream of a single nation stretching from the Minho to Timor finally came to an end.
Abstract:
11th International Colloquium on Ancient Mosaics, October 16th–20th, 2009, Bursa, Turkey. Mosaics of Turkey and Parallel Developments in the Rest of the Ancient and Medieval World: Questions of Iconography, Style and Technique from the Beginnings of Mosaic until the Late Byzantine Era
Abstract:
This thesis focuses on the representation of Popular Music in museums by mapping, analyzing, and characterizing its practices in Portugal at the beginning of the 21st century. Now that museums' ability to shape public discourse is acknowledged, the examination of popular music's discourses in museums is of the utmost importance for Ethnomusicology and Popular Music Studies as well as for Museum Studies. The concept of 'heritage' is at the heart of these processes. The study was designed with the aim of moving the exhibiting of popular music in museums forward through a qualitative inquiry of case studies. Data collection involved surveying pop-rock music exhibitions as a qualitative sampling of popular music exhibitions in Portugal from 2007 to 2013. Two of these exhibitions were selected as case studies: No Tempo do Gira-Discos: Um Percurso pela Produção Fonográfica Portuguesa at the Museu da Música in Lisbon in 2007 (also Faculdade de Letras, 2009), and A Magia do Vinil, a Música que Mudou a Sociedade at the Oficina da Cultura in Almada in 2008 (and several other venues, from 2009 to 2013). Two specific domains were observed: popular music exhibitions as instances of museum practice, and museum professionals. The first domain encompasses analyzing the types of objects selected for exhibition; the interactive museum practices fostered by the exhibitions; and the concepts and narratives used to address popular music discursively, as well as the interpretative practices they allow. The second domain focuses on museum professionals and curators of popular music exhibitions as members of a group, namely their goals, motivations and perspectives. The theoretical frameworks adopted were drawn from the fields of ethnomusicology, popular music studies, and museum studies. The written materials of the exhibitions were subjected to discourse analysis. Semi-structured interviews with curators and museum professionals were also conducted and analysed.
From the museum studies perspective, the study suggests that the practice adopted by popular music museums largely matches that of conventional museums. From the ethnomusicological and popular music studies standpoint, the two case studies reveal two distinct conceptual worlds: the first exhibition, curated by an academic and an independent researcher, points to a mental configuration in which popular music is explained through a framework of genres supported by different musical practices. Moreover, it is industry actors such as decision makers and gatekeepers that govern popular music, which implies that the visitors' romantic conception of the musician is to some extent dismantled. The second exhibition, curated by a record collector and specialist, is based on a more conventional, everyday historical discourse that encodes a distinction between “good” and “bad” music. Data generated by a survey show that only one curator, in fact the curator of my first case study, has an academic background. The backgrounds of all the others are in some way similar to that of the curator of the second case study. Therefore, I conclude that the second case study best conveys the current practice of exhibiting Popular Music in Portugal.