12 results for Cumulative time
in the Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
Recent literature has proved that many classical pricing models (Black and Scholes, Heston, etc.) and risk measures (VaR, CVaR, etc.) may lead to “pathological meaningless situations”, since traders can build sequences of portfolios whose risk level tends to −∞ and whose expected return tends to +∞, i.e., (risk = −∞, return = +∞). Such a sequence of strategies may be called a “good deal”. This paper focuses on the risk measures VaR and CVaR and analyzes this caveat in a discrete-time complete pricing model. Under quite general conditions the explicit expression of a good deal is given, and its sensitivity with respect to possible measurement errors is provided as well. We point out that a critical property is the absence of short sales. In such a case we first construct a “shadow riskless asset” (SRA) without short sales, and then the good deal is given by borrowing more and more money so as to invest in the SRA. It is also shown that the SRA is of interest in itself, even when there are short-selling restrictions.
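To make the mechanics of a good deal concrete, the sketch below is a minimal numerical illustration (the Monte Carlo setup and all names are our own assumptions, not the paper's model): empirical VaR and CVaR scale linearly with leverage, so leveraging a position whose losses are strictly negative, the role played here by the shadow riskless asset, drives (risk, return) toward (−∞, +∞).

```python
import numpy as np

def var_cvar(losses, alpha=0.95):
    """Empirical VaR and CVaR of a loss sample at confidence level alpha."""
    var = np.quantile(losses, alpha)        # loss threshold at level alpha
    cvar = losses[losses >= var].mean()     # mean loss beyond the VaR
    return var, cvar

rng = np.random.default_rng(0)
# Hypothetical position with strictly positive payoff (losses are negative),
# standing in for the "shadow riskless asset" described above.
base_losses = -rng.uniform(0.01, 0.05, size=100_000)

for leverage in (1, 10, 100):
    var, cvar = var_cvar(leverage * base_losses)
    expected_return = -leverage * base_losses.mean()
    print(f"leverage={leverage:>3}: return={expected_return:.2f}, CVaR={cvar:.2f}")
# Both the expected return and the (negative) CVaR grow linearly with the
# leverage: risk -> -infinity while return -> +infinity, i.e., a good deal.
```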
Abstract:
The population growth of a Staphylococcus aureus culture, an active colloidal system of spherical cells, was followed by rheological measurements under steady-state and oscillatory shear flows. We observed a rich viscoelastic behavior as a consequence of the bacterial activity, namely of their multiplication and density-dependent aggregation properties. In the early stages of growth (lag and exponential phases), the viscosity increases by about a factor of 20, presenting several drops and full recoveries. This points to the existence of a percolation phenomenon. Remarkably, as the bacteria reach their late phase of development, in which the population stabilizes, the viscosity returns close to its initial value. Most probably, this is caused by a change in the bacteria's physiological activity and, in particular, by the decrease of their adhesion properties. The viscous and elastic moduli exhibit power-law behaviors compatible with the "soft glassy materials" model, whose exponents depend on the bacterial growth stage. DOI: 10.1103/PhysRevE.87.030701.
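As an aside on the power-law behavior mentioned above: exponents of G′(ω) ∝ ω^x are typically estimated by a linear fit in log-log space. A minimal sketch with synthetic data (the prefactors and the exponent 0.3 are invented for illustration, not values from the paper):

```python
import numpy as np

def powerlaw_exponent(omega, modulus):
    """Estimate x in modulus ~ A * omega**x via log-log linear regression."""
    slope, _ = np.polyfit(np.log(omega), np.log(modulus), 1)
    return slope

omega = np.logspace(-1, 2, 30)        # angular frequency sweep (rad/s)
G_elastic = 5.0 * omega**0.3          # synthetic elastic modulus G'
G_viscous = 2.0 * omega**0.3          # synthetic viscous modulus G''
print(powerlaw_exponent(omega, G_elastic))   # ~0.3
print(powerlaw_exponent(omega, G_viscous))   # ~0.3
```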
Abstract:
This work proposes a strategy for the construction and launch of a new business model for Public Relations practice in Portugal, in a proposal directed at micro and small enterprises. Between the in-house service and classical consultancy there is a space left uncovered in Portugal: a shared in-house service. Presented here is this Public Relations service project for those for whom it is unaffordable to keep a Communication Officer on their staff.
Abstract:
Conference - 16th International Symposium on Wireless Personal Multimedia Communications (WPMC), Jun 24-27, 2013
Abstract:
The 41 years of armed conflict (1961 to 2002) resulted in poor development of the health care and education infrastructures, and forced the relocation of people to safer places, namely major urban cities like Luanda. This phase was characterized by typical demographic, nutritional and epidemiological profiles. Since the end of this period, Angola has been repeatedly ranked as one of the three fastest growing economies in the world and, along with social stabilization and globalization, the country is facing the introduction of new medical technologies, the improvement of health systems and services, better access to them, and overall better quality of life. These changes may also be translating into socio-cultural, demographic and nutritional changes, which in turn may be leading to changes in the epidemiological profile of the country. Thus, non-communicable diseases are likely to become an increasingly important public health problem in Angola. Also, considering that several infectious diseases persist, our weakened health system will have to face a double burden. Surveillance data on non-communicable diseases, to determine their prevalence and impact, along with data on the major behavioural risk factors such as tobacco and alcohol consumption, diet and physical inactivity, are therefore urgently needed.
Abstract:
Contrasting with the important legacy of the Portuguese organ masters of the sixteenth and seventeenth centuries, Portuguese organ music after 1700 seems almost nonexistent (excluding rare examples, such as the four organ sonatas by Carlos Seixas). Whether owing to the destruction caused by the great Lisbon earthquake of 1755, or to other causes, the absence of sources is surprising, considering the testimonies of musical activity during that period. This article deals with a source that has been relatively ignored until today: manuscript CLI/1-4 nº 7 of the Library of the Ducal Palace of Vila Viçosa (Versos / Sobre o Canto Chão / Para Orgão / De Fr. Jeronimo da M.dre de DS.). This collection of twenty organ verses by Jerónimo da Madre de Deus is by far the largest Portuguese organ work from the first half of the eighteenth century known to date. Clearly conceived for the organ, these short pieces bear witness to the transformation of keyboard writing in Portugal during the reign of King João V (namely through the absorption of Italian influences) and provide valuable information about the type of instrument on which they were played.
Abstract:
Behavioral biometrics is one of the areas with growing interest within the biosignal research community. A recent trend in the field is ECG-based biometrics, where electrocardiographic (ECG) signals are used as input to the biometric system. Previous work has shown this to be a promising trait, with the potential to serve as a good complement to other existing, and already more established, modalities, due to its intrinsic characteristics. In this paper, we propose a system for ECG biometrics centered on signals acquired at the subject's hand. Our work is based on a previously developed custom, non-intrusive sensing apparatus for data acquisition at the hands, and involves the pre-processing of the ECG signals and the evaluation of two classification approaches targeted at real-time or near real-time applications. Preliminary results show that this system leads to competitive results for both authentication and identification, and further validate the potential of ECG signals as a complementary modality in the toolbox of the biometric system designer.
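The abstract does not specify the two classifiers; as a purely illustrative sketch of the kind of lightweight template matching often used for near real-time ECG identification (all names, the correlation metric, and the toy signals below are our assumptions), a segmented heartbeat can be compared against enrolled templates:

```python
import numpy as np

def identify(beat, templates):
    """Return the enrolled subject whose template best matches `beat`.

    beat      : 1-D array, one segmented and normalized heartbeat
    templates : dict mapping subject id -> 1-D mean heartbeat template
    """
    def ncc(a, b):  # normalized cross-correlation at zero lag
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float(np.dot(a, b) / len(a))

    scores = {sid: ncc(beat, tpl) for sid, tpl in templates.items()}
    return max(scores, key=scores.get)

# Toy usage with synthetic "heartbeats"
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
templates = {"subject_a": np.sin(8 * t), "subject_b": np.sin(8 * t + 0.5)}
probe = templates["subject_a"] + 0.1 * rng.standard_normal(200)
print(identify(probe, templates))  # -> "subject_a"
```

For authentication rather than identification, the same score would instead be compared against a decision threshold for a single claimed identity.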
Abstract:
Final Master's Project for obtaining the degree of Master in Mechanical Engineering / Energy
Abstract:
Starting with the explanation of metanarrative as a sort of self-reflexive storytelling (as defended by Kenneth Weaver Hope in his unpublished PhD thesis), I propose to talk about enunciative practices that stress the telling more than the told. In line with some metafictional practices applied to cinema, such as the ‘mindfuck’ film (Jonathan Eig, 2003), the ‘psychological puzzle film’ (Elliot Panek, 2003) and the ‘mind-game film’ (Thomas Elsaesser, 2009), I will address the manipulations that a narrative film endures in order to produce a more fruitful and complex experience for the viewer. I will particularly concentrate on the misrepresentation of time as a way to produce a labyrinthine work of fiction where the linear description of events is replaced by a game of time disclosure. The viewer is thus called upon to reconstruct the order of the various situations portrayed, in a process that I call ‘temporal mapping’. However, as the viewer attempts to do this, the film, because of the intricate nature of the plot and the uncertain status of the characters, ironically resists the attempt. There is a sort of teasing taking place between the film and its spectator: an invitation to decode that is half-denied until the end, where the puzzle is finally solved. I will use three of Alejandro Iñárritu’s films to better convey my point: Amores perros (2000), 21 Grams (2003) and Babel (2006). I will consider Iñárritu’s methods of producing non-linear storytelling as a way to stress the importance of time and its validity as one of the elements that make up a metanarrative experience in films. I will focus especially on 21 Grams, which I consider to be a paragon of the labyrinth.
Abstract:
Hyperspectral instruments have been incorporated in satellite missions, providing large amounts of high-spectral-resolution data of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, the latter usually done at a ground station, onboard systems have emerged to process data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality-reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose FPGA programmable logic is based on the Artix-7, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems, opening perspectives for onboard hyperspectral image processing.
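A back-of-the-envelope computation of why onboard unmixing reduces the downlink volume: a cube of n pixels and b bands is replaced by p endmember signatures plus p abundance maps. The dimensions below are illustrative assumptions, not values from the paper:

```python
# Illustrative downlink-volume comparison for onboard unmixing (sample counts).
n_pixels = 512 * 512   # scene size (assumed)
n_bands = 224          # number of spectral bands (assumed, AVIRIS-like)
p = 10                 # number of endmembers (assumed)

raw = n_pixels * n_bands                # full hyperspectral cube
unmixed = p * n_bands + p * n_pixels    # signatures + abundance maps
print(f"reduction factor: {raw / unmixed:.1f}x")   # ~22x with these numbers
```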
Abstract:
Hyperspectral remote sensing exploits the electromagnetic scattering patterns of the different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors with an unmixing matrix that minimizes the mutual information among sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is obtained only when the sources are independent. This is no longer true for dependent abundance fractions; nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to assure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, some algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets. In any case, these algorithms find the set of most pure pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and to improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme. The pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory consisting of several algorithms organized into modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select spectral vectors that best represent the smaller convex cone containing the data. The other pixels are rejected when the spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis for a subspace of lower dimension using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace, and a simplex is found by an MVT process. ORASIS is oriented toward real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the most pure pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]. We note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA also assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined. The new endmember signature corresponds to the extreme of the projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR; yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR.

The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Section 19.3 and Section 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
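A minimal sketch of the projection step just described (a simplification, not the full VCA algorithm: subspace identification, noise handling, and the SNR-dependent projection are omitted): iteratively project the data onto a direction orthogonal to the span of the endmembers found so far, and take the extreme of the projection as the next endmember.

```python
import numpy as np

def vca_sketch(R, p, seed=0):
    """Toy endmember extraction in the spirit of VCA.

    R : (bands, pixels) data matrix; p : number of endmembers.
    Returns the indices of the selected (assumed pure) pixels.
    """
    rng = np.random.default_rng(seed)
    bands, _ = R.shape
    E = np.zeros((bands, p))                 # endmember signatures found so far
    indices = []
    for i in range(p):
        w = rng.standard_normal(bands)       # random direction
        if i > 0:                            # make w orthogonal to span(E[:, :i])
            Q, _ = np.linalg.qr(E[:, :i])
            w = w - Q @ (Q.T @ w)
        proj = w @ R                         # project every pixel onto w
        idx = int(np.argmax(np.abs(proj)))   # extreme of the projection
        indices.append(idx)
        E[:, i] = R[:, idx]
    return indices

# Toy usage: mixtures of 3 endmembers on a simplex, pure pixels included.
rng = np.random.default_rng(1)
M = rng.uniform(size=(50, 3))                # 3 endmember spectra, 50 bands
A = rng.dirichlet(np.ones(3), size=1000).T   # abundances summing to one
R = np.hstack([M, M @ A])                    # columns 0-2 are the pure pixels
print(vca_sketch(R, 3))                      # recovers {0, 1, 2} (in some order)
```

The correctness of the extreme-of-projection step rests on the simplex geometry stated above: a linear functional over a convex hull attains its maximum at a vertex, i.e., at a pure pixel.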
Abstract:
The importance of wind power for energy and environmental policies has been growing in recent years. However, because of its random nature over time, wind generation cannot be reliably dispatched or perfectly forecast, which becomes a challenge when integrating this production into power systems. In addition, wind energy has to cope with the diversity of production resulting from alternative wind power profiles located in different regions. In 2012, Portugal had a cumulative installed capacity distributed over 223 wind farms [1]. In this work, circular data statistical methods are used to analyze and compare alternative spatial wind generation profiles. Variables indicating extreme situations are analyzed. The hour(s) of the day at which farm production attains its daily maximum is considered. This variable was converted into a circular variable, and the use of circular statistics makes it possible to identify the daily hour distribution for different wind production profiles. The methodology was applied to a real case, considering data from the Portuguese power system for the year 2012 at 15-minute intervals. Six geographical locations were considered, representing different wind generation profiles in the Portuguese system.
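For reference, converting the hour of daily maximum production into a circular variable and summarizing it is straightforward (a generic sketch, not the authors' code; with 15-minute data the hours would be fractional):

```python
import numpy as np

def circular_mean_hour(hours):
    """Circular mean hour and mean resultant length of hour-of-day data."""
    theta = 2 * np.pi * np.asarray(hours) / 24.0    # hours -> angles (radians)
    C, S = np.cos(theta).mean(), np.sin(theta).mean()
    R = np.hypot(C, S)                              # concentration, in [0, 1]
    mean_theta = np.arctan2(S, C) % (2 * np.pi)
    return 24.0 * mean_theta / (2 * np.pi), R

# Hours of maximum daily production clustered around midnight: the ordinary
# arithmetic mean (~11.8 h) would be meaningless; the circular mean is not.
hours = [23.0, 23.5, 0.25, 1.0, 22.75, 0.5]
mean_hour, R = circular_mean_hour(hours)
print(f"circular mean hour = {mean_hour:.2f}, concentration R = {R:.2f}")
```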