83 results for Bluetooth Data Noise
Abstract:
Low noise surfaces have been increasingly considered a viable and cost-effective alternative to acoustical barriers. However, road planners and administrators frequently lack information on the correlation between the type of road surface and the resulting noise emission profile. To address this problem, a method to identify and classify different types of road pavements was developed, whereby near-field road noise is analyzed using statistical learning methods. The vehicle rolling sound signal near the tires and close to the road surface was acquired by two microphones in a special arrangement that implements the Close-Proximity method. A set of features characterizing the properties of the road pavement was extracted from the corresponding sound profiles. A feature selection method was used to automatically select those that are most relevant in predicting the type of pavement, while reducing the computational cost. A set of different types of road pavement segments was tested and the performance of the classifier was evaluated. Results of pavement classification performed during a road journey are presented on a map, together with geographical data. This procedure leads to a considerable improvement in the quality of road pavement noise data, thereby increasing the accuracy of road traffic noise prediction models.
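As an illustration of the pipeline sketched above, the following is a minimal, hypothetical sketch of feature extraction, feature selection, and classification; the band-energy features, the SelectKBest selector, and the SVM classifier are illustrative assumptions, not the paper's actual choices.

```python
# Hypothetical sketch of the pavement-classification pipeline described above.
# Feature definition, selector, and classifier are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def band_energy_features(signal, n_bands=16):
    """Energy in n_bands equal-width frequency bands of one CPX sound segment."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])

# X: one feature vector per road segment, y: pavement-type labels
# X = np.vstack([band_energy_features(seg) for seg in segments])
clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=8),   # keep the most relevant features
                    SVC(kernel="rbf"))
# scores = cross_val_score(clf, X, y, cv=5)        # evaluate the classifier
```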
Abstract:
Wireless local-area networks (WLANs) have been deployed as office and home communications infrastructures worldwide. The diversification of standards, such as the IEEE 802.11 series, demands the design of RF front-ends. Low power consumption is one of the most important design concerns in the application of these technologies. To maintain competitive hardware costs, CMOS has been used, since it is the best solution for low-cost, highly integrated processing, allowing analog circuits to be mixed with digital ones. In the receiver chain, the low noise amplifier (LNA) is one of the most critical blocks in transceiver design. The sensitivity is mainly determined by the LNA noise figure and gain. It interfaces with the pre-select filter and the mixer. Furthermore, since it is the first gain stage, care must be taken to provide accurate input matching, a low noise figure, good linearity, and sufficient gain over a wide band of operation. Several CMOS LNAs have been reported during the last decade, showing that most research has targeted the 802.11b and GSM standards (900-2400 MHz spectrum) and, more recently, 802.11a (5 GHz band). One of the more significant disadvantages of 802.11b is that its frequency band is crowded and subject to interference from other technologies, such as 2.4 GHz cordless phones and Bluetooth. As the demand for radio-frequency integrated circuits operating at higher frequency bands increases, the IEEE 802.11a standard becomes a very attractive option for wireless communication system developers. This paper presents the design and implementation of a low-power, low-noise amplifier aimed at IEEE 802.11a WLAN applications. It was designed to be integrated with an active balun and mixer, representing the first step toward a fully integrated monolithic WLAN receiver. All the required circuits are integrated on the same die and are powered by a 1.8 V supply. Preliminary experimental results (S-parameters) are shown and are promising. The LNA circuit design details are illustrated in Section 2. Spectre simulation results focused on gain, noise figure (NF), and input/output matching are presented in Section 3. Finally, conclusions and a comparison with other recently reported LNAs are made in Section 4, followed by future work.
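The statement that receiver sensitivity is set mainly by the LNA noise figure and gain follows from the standard Friis cascade formula, quoted here for context (a textbook relation, not taken from the paper itself):

```latex
F_{\text{rx}} \;=\; F_{\text{LNA}} \;+\; \frac{F_{\text{mixer}}-1}{G_{\text{LNA}}}
             \;+\; \frac{F_{\text{IF}}-1}{G_{\text{LNA}}\,G_{\text{mixer}}} \;+\; \cdots
```

so a low LNA noise figure together with sufficient LNA gain suppresses the noise contribution of the mixer and subsequent stages.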
Abstract:
Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the corresponding abundance fractions (sources); 2) sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, sources cannot be statistically independent, compromising the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods tends to improve with increasing signature variability, number of endmembers, and signal-to-noise ratio. In any case, there are always endmembers that are incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
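For reference, the linear mixing model and the abundance constraints that cause the dependence discussed above can be written as follows (standard notation, ours rather than the paper's):

```latex
\mathbf{r} \;=\; \mathbf{M}\,\boldsymbol{\alpha} \;+\; \mathbf{n},
\qquad \alpha_i \ge 0,
\qquad \sum_{i=1}^{p} \alpha_i = 1,
```

where r is the observed spectral vector, M holds the p endmember signatures, α the abundance fractions (the sources), and n the noise; the sum-to-one constraint alone already prevents the sources from being mutually independent.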
Abstract:
Chapter in book proceedings with peer review: First Iberian Conference, IbPRIA 2003, Puerto de Andratx, Mallorca, Spain, June 4-6, 2003. Proceedings.
Abstract:
Seismic ambient noise tomography is applied to central and southern Mozambique, located at the tip of the East African Rift (EAR). The deployment of the MOZART seismic network, with a total of 30 broad-band stations continuously recording for 26 months, allowed us to carry out the first tomographic study of the crust under this region, which until now remained largely unexplored at this scale. From cross-correlations extracted from coherent noise we obtained Rayleigh wave group velocity dispersion curves for the period range 5–40 s. These dispersion relations were inverted to produce group velocity maps, and 1-D shear wave velocity profiles at selected points. High group velocities are observed at all periods on the eastern edge of the Kaapvaal and Zimbabwe cratons, in agreement with the findings of previous studies. Further east, a pronounced slow anomaly is observed in central and southern Mozambique, where the rifting between southern Africa and Antarctica created a passive margin in the Mesozoic, and further rifting is currently happening as a result of the southward propagation of the EAR. In this study, we also addressed the question concerning the nature of the crust (continental versus oceanic) in the Mozambique Coastal Plains (MCP), which is still under debate. Our data do not support previous suggestions that the MCP are floored by oceanic crust, since a shallow Moho could not be detected, and we discuss an alternative explanation for its ocean-like magnetic signature. Our velocity maps suggest that the crystalline basement of the Zimbabwe craton may extend further east well into Mozambique underneath the sediment cover, contrary to what is usually assumed, while further south the Kaapvaal craton passes into slow rifted crust at the Lebombo monocline as expected. The sharp passage from fast crust to slow crust in the northern part of the study area coincides with the seismically active NNE-SSW Urema rift, while further south the Mazenga graben adopts an N-S direction parallel to the eastern limit of the Kaapvaal craton. We conclude that these two extensional structures herald the southward continuation of the EAR, and infer a structural control of the transition between the two types of crust on the ongoing deformation.
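A minimal sketch of the cross-correlation step at the core of ambient noise tomography is given below; the normalization and stacking choices are illustrative assumptions and do not reproduce the actual MOZART processing chain.

```python
# Hypothetical sketch: stack daily cross-correlations of ambient noise
# between two stations to approximate the inter-station Green's function.
import numpy as np
from scipy.signal import fftconvolve

def daily_cross_correlation(trace_a, trace_b, max_lag):
    """Cross-correlate two same-length, same-rate daily noise records."""
    a = (trace_a - trace_a.mean()) / (trace_a.std() + 1e-12)   # simple normalization
    b = (trace_b - trace_b.mean()) / (trace_b.std() + 1e-12)
    full = fftconvolve(a, b[::-1], mode="full")                # correlation via FFT
    mid = len(full) // 2                                       # zero-lag sample
    return full[mid - max_lag: mid + max_lag + 1]

def stack(correlations):
    """Linear stack over all available days."""
    return np.mean(correlations, axis=0)
```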
Abstract:
Dimensionality reduction plays a crucial role in many hyperspectral data processing and analysis algorithms. This paper proposes a new mean-squared-error-based approach to determine the signal subspace in hyperspectral imagery. The method first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least-squares sense. The effectiveness of the proposed method is illustrated using simulated and real hyperspectral images.
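The following is a rough, hypothetical sketch of eigenvalue-based signal-subspace selection of the kind described; it assumes the noise covariance has already been estimated, and the selection rule used here (keep eigen-directions whose power exceeds twice the projected noise power) is an illustrative assumption, not the paper's mean-squared-error criterion.

```python
import numpy as np

def estimate_signal_subspace(X, noise_cov):
    """X: (n_pixels, n_bands) hyperspectral data; noise_cov: (n_bands, n_bands)."""
    R = X.T @ X / X.shape[0]                   # sample correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    noise_power = np.diag(eigvecs.T @ noise_cov @ eigvecs)
    keep = eigvals > 2.0 * noise_power         # crude selection rule (assumption)
    return eigvecs[:, keep]                    # basis of the estimated signal subspace
```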
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originated by the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], the spectral signature matching [21], the spectral angle mapper [22], and the subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, to feature extraction, and to unsupervised recognition [28, 29]. ICA consists in finding a linear decomposition of observed data yielding statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed by the abundance fractions, and in references 9, 25, and 31–38, where sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among abundances. This dependence compromises ICA applicability to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades the ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Considering the linear mixing model, hyperspectral observations are in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at a lower computational complexity, some algorithms such as the vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and the N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, very often, the processing of hyperspectral data, including unmixing, is preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. The newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations. To overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model. This model takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to data. The MOG parameters (number of components, means, covariances, and weights) are inferred using the minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information might be very far from the true one. Nevertheless, some abundance fractions might be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end this chapter by sketching a new methodology to blindly unmix hyperspectral data, where abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with a mixture of Dirichlet sources. Compared with geometric-based approaches, the advantage of this model is that there is no need to have pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates the spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief summary of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the ICA and IFA limitations in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
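As a small illustration of the generative view mentioned above, abundances drawn from a Dirichlet density automatically satisfy the positivity and full-additivity constraints; the toy generator below uses invented parameter values purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n_bands, n_pixels = 3, 200, 5000
M = rng.uniform(0.0, 1.0, size=(n_bands, p))          # toy endmember signatures
alpha = rng.dirichlet(np.ones(p), size=n_pixels)       # abundances: >= 0, sum to 1
noise_sigma = 0.01                                     # illustrative noise level
X = alpha @ M.T + noise_sigma * rng.standard_normal((n_pixels, n_bands))
assert np.allclose(alpha.sum(axis=1), 1.0)             # full additivity holds
```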
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (or intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate. Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises ICA applicability to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix, which minimizes the mutual information among sources. If sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33]. Under the linear mixing model, the observations from a scene are in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref.
[37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. The MVT-type approaches are complex from the computational point of view. Usually, these algorithms find in the first place the convex hull defined by the observed data and then fit a minimum volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a logarithmic law [39] to assure convergence (in probability) to the desired solution. Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and the N-FINDR [40] still find the minimum volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data. ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of a lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46]. In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating signal and noise correlation matrices.
The latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data, in the least-squares sense [48, 49]; we note, however, that VCA works with projected and with unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. As the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparable to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
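A compact, hypothetical sketch of the pure-pixel extraction idea described above is given below; it captures the iterate, project, and take-the-extreme structure but is not the published VCA code (the initial SNR-dependent projection and scaling steps are omitted, and the projection direction here is simply a random vector made orthogonal to the endmembers already found).

```python
import numpy as np

def extract_endmembers(X, p, seed=0):
    """X: (n_pixels, n_bands) data assumed to contain pure pixels; p: # endmembers."""
    rng = np.random.default_rng(seed)
    E = np.zeros((p, X.shape[1]))
    for k in range(p):
        # direction orthogonal to the subspace spanned by endmembers found so far
        f = rng.standard_normal(X.shape[1])
        if k > 0:
            A = E[:k].T                                   # (n_bands, k)
            f = f - A @ np.linalg.pinv(A) @ f             # remove the spanned component
        idx = np.argmax(np.abs(X @ f))                    # extreme of the projection
        E[k] = X[idx]                                     # new endmember candidate
    return E
```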
Abstract:
One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation to recover independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In simulated scenarios hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not correctly unmix all sources. This conclusion is based on a study of the mutual information. Nevertheless, some sources might be well separated, mainly if the number of sources is large and the signal-to-noise ratio (SNR) is high.
Abstract:
Lossless compression algorithms of the Lempel-Ziv (LZ) family are widely used nowadays. Regarding time and memory requirements, LZ encoding is much more demanding than decoding. In order to speed up the encoding process, efficient data structures, like suffix trees, have been used. In this paper, we explore the use of suffix arrays to hold the dictionary of the LZ encoder, and propose an algorithm to search over it. We show that the resulting encoder attains roughly the same compression ratios as those based on suffix trees. However, the amount of memory required by the suffix array is fixed, and much lower than the variable amount of memory used by encoders based on suffix trees (which depends on the text to encode). We conclude that suffix arrays, when compared to suffix trees in terms of the trade-off among time, memory, and compression ratio, may be preferable in scenarios (e.g., embedded systems) where memory is at a premium and high speed is not critical.
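A minimal sketch of the suffix-array dictionary search idea is shown below; the naive quadratic construction and the materialized list of suffixes are only for readability, and the function names are ours, not the authors'.

```python
import bisect

def build_suffix_array(text):
    """Naive O(n^2 log n) construction, adequate for a short illustration."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def longest_match(text, sa, lookahead):
    """Longest prefix of `lookahead` occurring in `text` (the LZ dictionary)."""
    suffixes = [text[i:] for i in sa]                  # materialized for clarity only
    pos = bisect.bisect_left(suffixes, lookahead)
    best_len, best_off = 0, -1
    for j in (pos - 1, pos):                           # longest match is adjacent in sorted order
        if 0 <= j < len(sa):
            s = suffixes[j]
            k = 0
            while k < len(s) and k < len(lookahead) and s[k] == lookahead[k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, sa[j]
    return best_off, best_len                          # (position in dictionary, match length)

# Example: find the longest match for "abrac" in the dictionary "abracadabra"
# offset, length = longest_match("abracadabra", build_suffix_array("abracadabra"), "abrac")
```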
Abstract:
Fluorescent protein microscopy imaging is nowadays one of the most important tools in biomedical research. However, the resulting images present a low signal-to-noise ratio and a time intensity decay due to the photobleaching effect. This phenomenon is a consequence of the decrease in the radiation emission efficiency of the tagging protein. This occurs because the fluorophore permanently loses its ability to fluoresce, due to photochemical reactions induced by the incident light. The Poisson multiplicative noise that corrupts these images, together with the quality degradation due to photobleaching, makes long-term biological observation very difficult. In this paper a denoising algorithm for Poisson data, where the photobleaching effect is explicitly taken into account, is described. The algorithm is designed in a Bayesian framework where the data fidelity term models the Poisson noise generation process as well as the exponential intensity decay caused by photobleaching. The prior term is conceived with Gibbs priors and log-Euclidean potential functions, suitable for coping with the positivity-constrained nature of the parameters to be estimated. Monte Carlo tests with synthetic data are presented to characterize the performance of the algorithm. One example with real data is included to illustrate its application.
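Under the usual assumptions (independent Poisson counts and an exponential photobleaching decay), the observation model and the MAP criterion sketched in the abstract can be written as follows; the notation is ours and the exact functional forms used in the paper may differ:

```latex
y_i(t) \sim \mathrm{Poisson}\!\bigl(x_i\, e^{-\lambda t}\bigr),
\qquad
(\hat{\mathbf{x}}, \hat{\lambda}) \;=\;
\arg\min_{\mathbf{x} \ge 0,\; \lambda \ge 0}
\;-\log p(\mathbf{y}\mid \mathbf{x}, \lambda)
\;+\; \beta\,\Phi(\mathbf{x}),
```

where x_i is the underlying fluorescence intensity at pixel i, λ the photobleaching rate, the first term is the Poisson data fidelity, and Φ is a Gibbs prior with log-Euclidean potentials enforcing the positivity constraint.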
Abstract:
Opposite enantiomers exhibit different NMR properties in the presence of an external common chiral element, and a chiral molecule exhibits different NMR properties in the presence of external enantiomeric chiral elements. Automatic prediction of such differences, and comparison with experimental values, leads to the assignment of the absolute configuration. Here two cases are reported, one using a dataset of 80 chiral secondary alcohols esterified with (R)-MTPA and the corresponding 1H NMR chemical shifts, and the other with 94 13C NMR chemical shifts of chiral secondary alcohols in two enantiomeric chiral solvents. For the first application, counterpropagation neural networks were trained to predict the sign of the difference between chemical shifts of opposite stereoisomers. The neural networks were trained to process the chirality code of the alcohol as the input and to give the NMR property as the output. In the second application, similar neural networks were employed, but the property to predict was the difference of chemical shifts in the two enantiomeric solvents. For independent test sets of 20 objects, 100% correct predictions were obtained in both applications concerning the sign of the chemical shift differences. Additionally, with the second dataset, the difference of chemical shifts in the two enantiomeric solvents was quantitatively predicted, yielding r2 = 0.936 for the test set between the predicted and experimental values.
Abstract:
The rapid evolution of mobile devices and wireless communication technologies has turned the mobile phone into a powerful mobile computing device. The need to be always reachable, common to modern civilization, has increased our dependence on this device, which is carried by most people in urban environments and plays a role perhaps more important than the wallet itself. The ubiquity and computing power of mobile phones increase the interest in developing mobile services beyond traditional voice services. A mobile phone may soon become an active element in our daily tasks, serving as an instrument for payment and access control and thus providing new interfaces to existing services. Unifying several services in a single device is a challenge that can simplify our daily routine and increase comfort; ultimately we would no longer need physical money, credit or debit cards, house and car keys, or even identification documents such as identity cards or passports. The interest shown by the stakeholders, from mobile phone manufacturers and mobile network operators to financial institutions, has led to the appearance of multiple mobile service solutions. However, these solutions generally address specific problems, covering only a single service provider or a particular payment operation, such as ticket purchase or parking payment. These emerging solutions also typically consist of closed specifications and proprietary protocols. The definition of a generic, open, interoperable, and extensible architecture is necessary so that mobile services can be widely adopted by different service providers and for various types of payment. Most current mobile payment solutions depend on communication over the mobile network: some use the phone merely as an interface to access the Internet, while others allow sending an SMS (Short Message Service) to authorize a transaction, which implies communication costs in every payment operation. This operating cost makes such solutions unsuitable for micropayments and may therefore limit their acceptance by customers. Existing solutions focus mostly on remote payments, failing to exploit the characteristics of in-person payment and therefore not offering a true alternative to the current credit/debit card payment model. The computational capabilities of mobile phones and their support for several local wireless communication protocols have not been exploited; the phone is seen merely as a GSM (Global System for Mobile Communications) terminal, and no additional services such as dynamic risk assessment or expense control are offered. This dissertation proposes and validates, through a demonstrator, an open architecture for payment and access control based on mobile devices, called WPAC (Wireless Payment and Access Control). To arrive at the presented solution, other payment solutions were studied, from the appearance of debit cards to the era of mobile electronic payments, including Internet payment solutions.
The capabilities of mobile devices, namely mobile phones, and of wireless communication technologies were also analysed in order to determine the current state of technology. The WPAC architecture uses design patterns employed by industry in successful solutions; using tested patterns and reusing proven solutions increases confidence in this solution, one example being the use of a public key infrastructure to establish a secure communication channel. This specification is a service-oriented architecture that uses Web Services to define the payment service contract. The feasibility of the solution in orchestrating a set of technologies, and the proof of concept of new approaches, is achieved by building a prototype and running tests. The WPAC architecture enables in-person mobile payments, that is, payments made at the premises of the goods or services provider, following the credit/debit card payment model with respect to the parties involved and the relationships between them. As an innovative aspect, this specification includes dynamic risk assessment, which uses the payment amount, the occurrence of frequent payments within a short period, and the date, time, and place of the payment as risk factors, requesting from the customer the set of credentials appropriate to the assessed risk, from personal codes to biometric data. An alternative to the normal payment process is also presented which, although less convenient, allows payments to be made when a wireless communication channel cannot be established, thereby increasing fault tolerance. This solution implies no operating costs for the customer in communicating with the merchant's point of sale, which is carried out over local wireless communication technologies; communication over the mobile network with the payment agent issuer may be required to update the software agent or security data, but such transmissions are occasional. The security model relies on certificates for the authentication of the parties and on a public key infrastructure for message encryption and signing. The security data included in the mobile software agent, both to prevent copying or corruption of the application and for comparison with the credentials entered by the customer, must likewise be encrypted and signed so as to guarantee their confidentiality and integrity. The payment architecture uses the Web Services standard, which is widely known, open, and interoperable, to define the payment service. There are security-related extensions to the Web Services specification that allow security tokens to be exchanged and define how messages are encrypted and signed, making them usable in applications that require security, as is the case for payment and access control services. A Web Service contract defines how services are invoked, how information is transmitted, and how data are represented, normally using the SOAP protocol, which in practice is nothing more than a protocol for exchanging XML (eXtensible Markup Language) messages. Sending and receiving XML messages, that is, transmitting simple character sequences, is supported by most communication protocols, making this a broad solution that allows the adoption of various wireless communication technologies.
The prototype includes a mobile software agent, implemented as a MIDlet (a Java application for mobile devices), which implements the payment protocol by communicating over a Bluetooth link with the merchant's point of sale, simulated by an application developed on the .NET platform, thereby demonstrating the heterogeneity of the solution. Communication between the merchant and its bank for payment authorization and funds transfer uses the existing credit/debit card payment authorization protocol. The definition of this open and generic specification, together with the strong interest shown by the stakeholders, provides a good outlook for the adoption of the solution, which may drive the implementation of mobile services and thereby simplify people's daily routines. Mobile payment solutions reduce the need to carry several credit/debit cards in our wallet. Dynamic risk assessment increases payment security by requesting additional credentials from the customer for payments with a higher associated risk, an important point both for customers and for financial institutions, since it reduces the risk of fraud and increases confidence in the system. This electronic payment solution can also make it easier to check payments made and balances by keeping a history of transactions, something that is not possible with credit/debit cards without visiting an ATM (Automated Teller Machine) or using home banking.
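As a toy illustration of the dynamic risk assessment described above, the sketch below combines the risk factors named in the text (payment amount, frequent payments within a short period, date/time, and location) into a coarse risk level; all thresholds, scores, and credential names are invented for the example and are not part of the WPAC specification.

```python
from datetime import datetime

def assess_risk(amount_eur, recent_payment_count, when: datetime, known_location: bool):
    """Combine the risk factors named in the text into a coarse risk score (hypothetical)."""
    score = 0
    score += 2 if amount_eur > 200 else (1 if amount_eur > 20 else 0)
    score += 1 if recent_payment_count > 3 else 0          # many payments in a short period
    score += 1 if when.hour < 7 or when.hour > 22 else 0   # unusual time of day
    score += 1 if not known_location else 0
    return score

def required_credentials(score):
    """Map the risk score to the credentials requested from the customer (illustrative)."""
    if score <= 1:
        return ["none"]                  # low-value, low-risk payment
    if score <= 3:
        return ["PIN"]
    return ["PIN", "biometric"]
```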
Abstract:
This paper presents an investigation into cloud-to-ground lightning activity over the continental territory of Portugal with data collected by the national Lightning Location System. The Lightning Location System in Portugal is first presented. Analyses of the geographical, seasonal, and polarity distribution of cloud-to-ground lightning activity and of the cumulative probability of peak current are carried out. An overall ground flash density map is constructed from the database, which contains more than five years of information and almost four million records. This map is compared with the thunderstorm days map, produced by the Portuguese Institute of Meteorology, and with the orographic map of Portugal. Finally, conclusions are duly drawn.
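A compact sketch of how a ground flash density map can be computed from such a stroke database is shown below; the grid cell size, the function name, and the cell-area approximation are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def flash_density_map(lat, lon, years, cell_deg=0.2):
    """Flashes per km^2 per year on a regular lat/lon grid (rough sketch)."""
    lat_edges = np.arange(lat.min(), lat.max() + cell_deg, cell_deg)
    lon_edges = np.arange(lon.min(), lon.max() + cell_deg, cell_deg)
    counts, _, _ = np.histogram2d(lat, lon, bins=[lat_edges, lon_edges])
    # approximate cell area in km^2 (valid for small cells at mid-latitudes)
    mean_lat = np.deg2rad(0.5 * (lat.min() + lat.max()))
    cell_km2 = (cell_deg * 111.32) * (cell_deg * 111.32 * np.cos(mean_lat))
    return counts / (cell_km2 * years), lat_edges, lon_edges
```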
Abstract:
We present a study of the magnetic properties of a group of basalt samples from the Saldanha Massif (Mid-Atlantic Ridge - MAR - 36° 33' 54" N, 33° 26' W), and we set out to interpret these properties in the tectono-magmatic framework of this sector of the MAR. Most samples have low magnetic anisotropy and magnetic minerals of single domain grain size, typical of rapid cooling. The thermomagnetic study mostly shows two different susceptibility peaks. The high temperature peak is related to mineralogical alteration due to heating. The low temperature peak shows a distinction between three different stages of low temperature oxidation: the presence of titanomagnetite, titanomagnetite and titanomaghemite, and exclusively of titanomaghemite. Based on established empirical relationships between Curie temperature and degree of oxidation, the latter is tentatively deduced for all samples. Finally, swath bathymetry and sidescan sonar data combined with dive observations show that the Saldanha Massif is located over an exposed section of upper mantle rocks interpreted to be the result of detachment tectonics. Basalt samples inside the detachment zone often have higher than expected oxidation rates; this effect can be explained by the higher permeability caused by the detachment fault activity.