972 results for real-effort task


Relevance: 20.00%

Abstract:

Internship report submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Theatre, specialization in Theatre and Community.

Relevance: 20.00%

Abstract:

Nowadays, many real-time operating systems discretize time using a system time unit. To account for this behavior, real-time scheduling algorithms must adopt a discrete-time model in which both the timing requirements of tasks and their time allocations are integer multiples of the system time unit. That is, tasks cannot execute for less than one time unit, which implies that they must always complete a minimum amount of work before they can be preempted. Assuming such a discrete-time model, Zhu et al. (Proceedings of the 24th IEEE International Real-Time Systems Symposium (RTSS 2003), 2003; J Parallel Distrib Comput 71(10):1411–1425, 2011) proposed an efficient "boundary fair" algorithm (named BF) and proved its optimality for scheduling periodic tasks at full system utilization. However, BF cannot handle sporadic tasks due to their inherently irregular and unpredictable job release patterns. In this paper, we propose an optimal boundary-fair scheduling algorithm for sporadic tasks (named BF²), which follows the same principle as BF by making scheduling decisions only at job arrival times and (expected) task deadlines. This new algorithm was implemented in Linux, and experiments conducted on a multicore machine show that BF² outperforms the state-of-the-art discrete-time optimal scheduler (PD²) thanks to its much lower scheduling overhead. Furthermore, these experimental results show that BF² is barely sensitive to the length of the system time unit, while PD², the only other existing solution for scheduling sporadic tasks in discrete-time systems, sees its number of preemptions and migrations, and the time spent making scheduling decisions, grow linearly as the time resolution of the system improves.
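
To make the boundary-fair principle concrete, here is a minimal Python sketch, under stated assumptions (a single fully utilized processor, with `utilizations` summing to 1): at each boundary, every task receives the integer part of its fluid-schedule lag, and the remaining units go to the tasks with the largest fractional lag. All names are illustrative; BF²'s actual allocation and tie-breaking rules are more involved, and it additionally recomputes boundaries as sporadic jobs arrive.

```python
# Sketch of a boundary-fair allocation step (illustrative, not BF^2 itself).
# Invariant it aims at: each task's allocated work stays within one system
# time unit of its fluid (proportional-rate) schedule at every boundary.
import math

def boundary_fair_allocations(utilizations, allocated, b_start, b_end):
    """Integer time units granted to each task over the window [b_start, b_end)."""
    span = b_end - b_start
    grants, fractions = [], []
    for u, done in zip(utilizations, allocated):
        pending = u * b_end - done          # lag w.r.t. the fluid schedule at b_end
        grants.append(int(math.floor(pending)))
        fractions.append(pending - math.floor(pending))
    # Hand the leftover units to the tasks with the largest fractional lag
    # (a stand-in for BF's more careful tie-breaking rules).
    leftover = span - sum(grants)           # >= 0 when sum(utilizations) == 1
    for i in sorted(range(len(grants)), key=lambda i: -fractions[i])[:max(leftover, 0)]:
        grants[i] += 1
    return grants

# Three tasks with utilizations 0.5/0.3/0.2, no work done yet, boundary window [0, 7):
print(boundary_fair_allocations([0.5, 0.3, 0.2], [0.0, 0.0, 0.0], 0, 7))  # [4, 2, 1]
```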

Relevance: 20.00%

Abstract:

“Many-core” systems based on a Network-on-Chip (NoC) architecture offer various opportunities in terms of performance and computing capabilities, but at the same time they pose many challenges for the deployment of real-time systems, which must fulfill specific timing requirements at runtime. It is therefore essential to identify, at design time, the parameters that impact the execution time of the tasks deployed on these systems, and to derive upper bounds on the other key parameters. The focus of this work is to determine an upper bound on the traversal time of a packet transmitted over the NoC infrastructure. Towards this aim, we first identify and explore some limitations of the existing recursive-calculus-based approaches for computing the Worst-Case Traversal Time (WCTT) of a packet. Then, we extend the existing model by integrating the characteristics of the tasks that generate the packets. For this extended model, we propose an algorithm called “Branch and Prune” (BP). Our proposed method provides tighter yet still safe estimates than the existing recursive-calculus-based approaches. Finally, we introduce a more general approach, “Branch, Prune and Collapse” (BPC), which offers a configurable parameter providing a flexible trade-off between computational complexity and the tightness of the computed estimate. The recursive-calculus methods and BP are two special cases of BPC, obtained when the trade-off parameter is set to 1 or ∞, respectively. Through simulations, we analyze this trade-off, reason about the implications of certain choices, and provide case studies to observe the impact of task parameters on the WCTT estimates.
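
The branch-and-prune idea can be sketched compactly. In the toy Python below (hypothetical names and model, not the paper's BP algorithm), each contending flow contributes some blocking only if a feasibility predicate, standing in for the task-level characteristics the extended model adds, allows it in the current scenario; branches whose optimistic bound cannot exceed the worst case found so far are pruned, which is what tightens the estimate relative to plain recursive calculus.

```python
# Toy branch-and-prune upper bound on a packet's traversal time (illustrative).
# contenders: list of (is_feasible, blocking) where is_feasible(scenario)
# decides whether that flow can interfere given the flows already included.

def wctt_branch_and_prune(base_latency, contenders):
    # suffix[i]: total blocking still obtainable from contenders i..end (for pruning)
    suffix = [0] * (len(contenders) + 1)
    for i in range(len(contenders) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + contenders[i][1]
    best = [base_latency]                       # worst feasible case found so far

    def explore(i, scenario, acc):
        if acc + suffix[i] <= best[0]:          # prune: cannot exceed current worst case
            return
        if i == len(contenders):
            best[0] = max(best[0], acc)         # a complete feasible scenario
            return
        is_feasible, blocking = contenders[i]
        if is_feasible(scenario):               # branch: flow i interferes
            explore(i + 1, scenario + [i], acc + blocking)
        explore(i + 1, scenario, acc)           # branch: flow i stays out

    explore(0, [], base_latency)
    return best[0]

always, never = (lambda s: True), (lambda s: False)
print(wctt_branch_and_prune(10, [(always, 4), (never, 7), (always, 2)]))  # 16
```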

Relevance: 20.00%

Abstract:

The interest of studying the use of expanded cork agglomerate as an exterior wall covering derives from two critical factors from a sustainable-development perspective: the use of a product made of a renewable natural material (cork) and the concern to contribute to greater sustainability in construction. The study aims to assess the feasibility of this use by analyzing the material's behaviour under different conditions. Since this application is relatively recent, only about ten years old, there is still much to learn about the reliability of its long-term properties. In this context, this study seeks to deepen the understanding of aspects, some of them poorly studied or even unknown, concerning the characteristics that make the agglomerate a good choice for exterior wall covering. These and other characteristics are being analyzed by testing both under real exposure conditions, in an experimental cell at LNEC, and in the laboratory. In this paper the main laboratory tests are presented and the obtained results are compared with the outcome of the field study. © (2015) Trans Tech Publications, Switzerland.

Relevance: 20.00%

Abstract:

Abstract: Based on Gollwitzer's (1993, 1999) concept of implementation intentions and on Kirsch & Lynn's (1997) response set theory, this dissertation tested the effectiveness of an intervention combining implementation intentions with hypnosis and posthypnotic suggestion in enhancing adherence to a simple health-related task (mood report) and a difficult one (physical activity). Participants were college students enrolled at a university in New Jersey (N=124, Study 1, USA) and at two universities in Lisbon (N=323, Study 2, Portugal). In both studies, participants were selected from a broader sample based on their suggestibility scores on the Waterloo-Stanford Group C (WSGC) scale of hypnotic susceptibility and then randomly assigned to the experimental groups. Study 1 used a 2 × 2 × 3 factorial design (instruction × hypnosis × level of suggestibility) and Study 2 used a 2 × 2 × 2 × 4 factorial design (task × instruction × hypnosis × level of suggestibility). In Study 1, participants were asked to run in place for 5 minutes each day for a three-week period, to take their pulse rate before and after the activity, and to send a daily e-mail report to the experimenter, thus providing both a self-report and a behavioral measure of adherence. Participants in the goal intention condition were simply asked to run in place and send the e-mail once a day. Those in the implementation intention condition were further asked to specify the exact place and time at which they would perform the physical activity and send the e-mail. In addition, half of the participants were hypnotized and given a posthypnotic suggestion indicating that the thought of running in place would come to mind effortlessly at the appropriate moment; the other half did not receive a posthypnotic suggestion. Study 2 followed the same procedure, but half of the participants were instructed to send a daily mood report by SMS (easy task) and half were assigned the physical activity task described above (difficult task).
Study 1 results showed a significant interaction between participants' suggestibility level and posthypnotic suggestion (p<.01), indicating that the posthypnotic suggestion enhanced adherence among highly suggestible participants but lowered it among low-suggestible ones. No differences were found between the goal intention and implementation intention groups. In Study 2, participants adhered significantly more (p<.001) to the easy task than to the difficult task.
Results revealed no significant differences between the implementation intentions condition, the hypnosis condition, and the two combined, indicating that implementation intentions were not effective in enhancing adherence to either task and did not benefit from being combined with posthypnotic suggestion. Hypnosis with posthypnotic suggestion alone significantly reduced adherence to both tasks compared with participants who did not receive hypnosis. Since no Portuguese-language instrument existed to assess hypnotic suggestibility, the Waterloo-Stanford Group C (WSGC) scale of hypnotic susceptibility was translated and adapted to Portuguese and used to screen a sample of college students from Lisbon (N=625). The Portuguese sample showed score distributions and item difficulty patterns similar to the reference samples, except that the proportion of Portuguese students scoring in the high range of hypnotic suggestibility was significantly lower than in the reference samples. To shed some light on the reasons for this finding, participants' attitudes toward hypnosis were surveyed using a Portuguese translation and adaptation of the Escala de Valencia de Actitudes y Creencias Hacia la Hipnosis, Versión Cliente, and compared with those of participants with no prior hypnosis experience (N=444). Significant differences were found between the two groups, with participants without hypnosis experience scoring higher on factors indicating misconceptions and negative attitudes about hypnosis.

Relevance: 20.00%

Abstract:

This text presents the fundamental concepts of real-valued functions of more than two variables. These slides complement the Matemática II lectures on this topic for undergraduate management students.

Relevance: 20.00%

Abstract:

Master's final project submitted to obtain the degree of Master in Communication Networks and Multimedia Engineering.

Relevance: 20.00%

Abstract:

The application of compressive sensing (CS) to hyperspectral images has been an active area of research in recent years, both in terms of hardware and of signal processing algorithms. However, CS algorithms can be computationally very expensive due to the extremely large volumes of data collected by imaging spectrometers, a fact that compromises their use in applications under real-time constraints. This paper proposes four efficient implementations of hyperspectral coded aperture (HYCA) for CS on commodity graphics processing units (GPUs): two of them, termed P-HYCA and P-HYCA-FAST, and two additional implementations of its constrained version (CHYCA), termed P-CHYCA and P-CHYCA-FAST. The HYCA algorithm exploits the high correlation among the spectral bands of hyperspectral data sets and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. The proposed P-HYCA and P-CHYCA implementations were developed using the compute unified device architecture (CUDA) and the cuFFT library. Moreover, in the P-HYCA-FAST and P-CHYCA-FAST implementations this library is replaced by a fast iterative method that leads to very significant speedup factors, meeting real-time requirements. The proposed algorithms are evaluated not only in terms of reconstruction error for different compression ratios but also in terms of computational performance, using two different NVIDIA GPU architectures: 1) GeForce GTX 590 and 2) GeForce GTX TITAN. Experiments conducted using both simulated and real data reveal considerable acceleration factors and good results in the task of compressing remotely sensed hyperspectral data sets.
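
The premise HYCA builds on, that hyperspectral pixels lie near a low-dimensional subspace spanned by a few endmembers, is easy to demonstrate. The numpy toy below (illustrative; it is not HYCA, and it assumes the endmember dictionary is known) takes far fewer random measurements than there are bands and still reconstructs a pixel by least squares on the projected dictionary.

```python
# Toy compressed-sensing demo of the low-dimensionality premise (not HYCA).
import numpy as np

rng = np.random.default_rng(0)
nbands, p, m = 200, 5, 20                 # spectral bands, endmembers, measurements

E = rng.random((nbands, p))               # endmember signatures (assumed known here)
x = E @ rng.dirichlet(np.ones(p))         # one pixel: convex mixture of endmembers

Phi = rng.standard_normal((m, nbands)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                               # m << nbands compressed measurements

a_hat, *_ = np.linalg.lstsq(Phi @ E, y, rcond=None)   # abundances from measurements
x_hat = E @ a_hat                         # re-synthesized full spectrum
print(np.linalg.norm(x - x_hat) / np.linalg.norm(x))  # ~0: exact in the noise-free case
```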

Relevance: 20.00%

Abstract:

Master's degree in Informatics Engineering, specialization in Graphics Systems and Multimedia.

Relevance: 20.00%

Abstract:

Hyperspectral instruments have been incorporated into satellite missions, providing large amounts of high-spectral-resolution data of the Earth's surface. These data can be used in remote sensing applications that often require a real-time or near-real-time response. To avoid delays between hyperspectral image acquisition and its interpretation, usually done at a ground station, onboard systems have emerged to process the data, reducing the volume of information to transfer from the satellite to the ground station. For this purpose, compact reconfigurable hardware modules, such as field-programmable gate arrays (FPGAs), are widely used. This paper proposes an FPGA-based architecture for hyperspectral unmixing. The method is based on vertex component analysis (VCA) and works without a dimensionality-reduction preprocessing step. The architecture has been designed for a low-cost Xilinx Zynq board with a Zynq-7020 system-on-chip, whose programmable logic is based on the Artix-7 FPGA, and tested using real hyperspectral data. Experimental results indicate that the proposed implementation can achieve real-time processing while maintaining the method's accuracy, which indicates the potential of the proposed platform for implementing high-performance, low-cost embedded systems and opens perspectives for onboard hyperspectral image processing.
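
For context, the end product of unmixing is a set of per-pixel abundances. The numpy toy below (illustrative; the paper's contribution is an FPGA design for VCA itself, not this software shortcut) shows the abundance-estimation step by least squares once endmember signatures, such as those VCA extracts, are available.

```python
# Toy abundance estimation given known endmembers (illustrative only).
import numpy as np

def unmix(pixels, endmembers):
    """pixels: (npixels, nbands); endmembers: (nbands, p) -> (npixels, p)."""
    pinv = np.linalg.pinv(endmembers)     # solve E @ a ~= x for all pixels at once
    return pixels @ pinv.T

rng = np.random.default_rng(1)
E = rng.random((100, 4))                  # 100 bands, 4 endmembers
A = rng.dirichlet(np.ones(4), size=500)   # true abundances for 500 pixels
X = A @ E.T                               # noise-free observed pixels
print(np.allclose(unmix(X, E), A))        # True
```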

Relevance: 20.00%

Abstract:

Endmember extraction (EE) is a fundamental and crucial task in hyperspectral unmixing. Among other methods, vertex component analysis (VCA) has become a very popular and useful tool for unmixing hyperspectral data. VCA is a geometry-based method that extracts endmember signatures from large hyperspectral datasets without any a priori knowledge about the constituent spectra. Many hyperspectral imagery applications require a response in real time or near real time. To meet this requirement, this paper proposes a parallel implementation of VCA developed for graphics processing units (GPUs). The impact of the proposed parallel implementation on complexity and accuracy is examined using both simulated and real hyperspectral datasets.
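
The reason VCA maps well onto GPUs is visible in its hot loop: each iteration projects every pixel onto one direction and takes an argmax, a textbook data-parallel pattern. The simplified numpy sketch below (hypothetical; not the paper's GPU code, and it omits VCA's SNR-dependent projections) keeps just that structure.

```python
# Simplified VCA-style endmember extraction (illustrative, not the GPU code).
import numpy as np

def vca_sketch(X, p, seed=2):
    """X: (nbands, npixels) data matrix; returns (nbands, p) endmember estimates."""
    rng = np.random.default_rng(seed)
    nbands = X.shape[0]
    E = np.zeros((nbands, p))
    for k in range(p):
        w = rng.standard_normal(nbands)   # random direction...
        if k > 0:                         # ...made orthogonal to found endmembers
            Q, _ = np.linalg.qr(E[:, :k])
            w -= Q @ (Q.T @ w)
        scores = w @ X                    # per-pixel projections: the parallel step
        E[:, k] = X[:, np.argmax(np.abs(scores))]   # extreme of the projection
    return E
```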

Relevance: 20.00%

Abstract:

Proceedings of the 17th Congress of the International Association for the History of Glass.

Relevance: 20.00%

Abstract:

Hyperspectral remote sensing exploits the electromagnetic scattering patterns of different materials at specific wavelengths [2, 3]. Hyperspectral sensors have been developed to sample the scattered portion of the electromagnetic spectrum extending from the visible region through the near-infrared and mid-infrared, in hundreds of narrow contiguous bands [4, 5]. The number and variety of potential civilian and military applications of hyperspectral remote sensing is enormous [6, 7]. Very often, the resolution cell corresponding to a single pixel in an image contains several substances (endmembers) [4]. In this situation, the scattered energy is a mixture of the endmember spectra. A challenging task underlying many hyperspectral imagery applications is then decomposing a mixed pixel into a collection of reflectance spectra, called endmember signatures, and the corresponding abundance fractions [8–10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds approximately when the mixing scale is macroscopic [13] and there is negligible interaction among distinct endmembers [3, 14]. If, however, the mixing scale is microscopic (intimate mixtures) [15, 16] and the incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [17], the linear model is no longer accurate.

Linear spectral unmixing has been intensively researched in recent years [9, 10, 12, 18–21]. It considers that a mixed pixel is a linear combination of endmember signatures weighted by the corresponding abundance fractions. Under this model, and assuming that the number of substances and their reflectance spectra are known, hyperspectral unmixing is a linear problem for which many solutions have been proposed (e.g., maximum likelihood estimation [8], spectral signature matching [22], spectral angle mapper [23], subspace projection methods [24, 25], and constrained least squares [26]). In most cases, however, the number of substances and their reflectances are not known, and hyperspectral unmixing then falls into the class of blind source separation problems [27]. Independent component analysis (ICA) has recently been proposed as a tool to blindly unmix hyperspectral data [28–31]. ICA is based on the assumption of mutually independent sources (abundance fractions), which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying statistical dependence among them. This dependence compromises the applicability of ICA to hyperspectral images, as shown in Refs. [21, 32]. In fact, ICA finds the endmember signatures by multiplying the spectral vectors by an unmixing matrix that minimizes the mutual information among the sources. If the sources are independent, ICA provides the correct unmixing, since the minimum of the mutual information is attained only when the sources are independent. This is no longer true for dependent abundance fractions. Nevertheless, some endmembers may be approximately unmixed. These aspects are addressed in Ref. [33].

Under the linear mixing model, the observations from a scene lie in a simplex whose vertices correspond to the endmembers. Several approaches [34–36] have exploited this geometric feature of hyperspectral mixtures [35]. The minimum volume transform (MVT) algorithm [36] determines the simplex of minimum volume containing the data. The method presented in Ref. [37] is also of MVT type but, by introducing the notion of bundles, it takes into account the endmember variability usually present in hyperspectral mixtures. MVT-type approaches are computationally complex. Usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum-volume simplex to it. For example, the gift wrapping algorithm [38] computes the convex hull of n data points in a d-dimensional space with a computational complexity of O(n^(⌊d/2⌋+1)), where ⌊x⌋ is the largest integer less than or equal to x and n is the number of samples. The complexity of the method presented in Ref. [37] is even higher, since the temperature of the simulated annealing algorithm used must follow a log(·) law [39] to ensure convergence (in probability) to the desired solution.

Aiming at a lower computational complexity, algorithms such as the pixel purity index (PPI) [35] and N-FINDR [40] still find the minimum-volume simplex containing the data cloud, but they assume the presence of at least one pure pixel of each endmember in the data. This is a strong requisite that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. The PPI algorithm uses the minimum noise fraction (MNF) [41] as a preprocessing step to reduce dimensionality and improve the signal-to-noise ratio (SNR). The algorithm then projects every spectral vector onto skewers (a large number of random vectors) [35, 42, 43]. The points corresponding to the extremes, for each skewer direction, are stored. A cumulative account records the number of times each pixel (i.e., a given spectral vector) is found to be an extreme; the pixels with the highest scores are the purest ones. The N-FINDR algorithm [40] is based on the fact that in p spectral dimensions, the p-volume defined by a simplex formed by the purest pixels is larger than any other volume defined by any other combination of pixels. This algorithm finds the set of pixels defining the largest volume by inflating a simplex inside the data.

ORASIS [44, 45] is a hyperspectral framework developed by the U.S. Naval Research Laboratory, consisting of several algorithms organized in six modules: exemplar selector, adaptive learner, demixer, knowledge base or spectral library, and spatial postprocessor. The first step consists in flat-fielding the spectra. Next, the exemplar selection module is used to select the spectral vectors that best represent the smaller convex cone containing the data; the other pixels are rejected when their spectral angle distance (SAD) is less than a given threshold. The procedure finds the basis of a lower-dimensional subspace using a modified Gram–Schmidt orthogonalization. The selected vectors are then projected onto this subspace and a simplex is found by an MVT process. ORASIS is oriented to real-time target detection from uncrewed air vehicles using hyperspectral data [46].

In this chapter we develop a new algorithm to unmix linear mixtures of endmember spectra. First, the algorithm determines the number of endmembers and the signal subspace using a newly developed concept [47, 48]. Second, the algorithm extracts the purest pixels present in the data. Unlike other methods, this algorithm is completely automatic and unsupervised. To estimate the number of endmembers and the signal subspace in hyperspectral linear mixtures, the proposed scheme begins by estimating the signal and noise correlation matrices; the latter is based on multiple regression theory. The signal subspace is then identified by selecting the set of signal eigenvalues that best represents the data in the least-squares sense [48, 49]; we note, however, that VCA works with both projected and unprojected data. The extraction of the endmembers exploits two facts: (1) the endmembers are the vertices of a simplex, and (2) the affine transformation of a simplex is also a simplex. Like the PPI and N-FINDR algorithms, VCA assumes the presence of pure pixels in the data. The algorithm iteratively projects the data onto a direction orthogonal to the subspace spanned by the endmembers already determined; the new endmember signature corresponds to the extreme of this projection. The algorithm iterates until all endmembers are exhausted. VCA performs much better than PPI and better than or comparably to N-FINDR, yet it has a computational complexity between one and two orders of magnitude lower than N-FINDR. The chapter is structured as follows. Section 19.2 describes the fundamentals of the proposed method. Sections 19.3 and 19.4 evaluate the proposed algorithm using simulated and real data, respectively. Section 19.5 presents some concluding remarks.
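
For reference, the linear mixing model the chapter builds on can be written out as follows (generic notation: L spectral bands, p endmembers):

```latex
% Linear mixing model: an observed L-band spectral vector y is a convex
% combination of the p endmember signatures m_j plus noise n.
\[
  \mathbf{y} = \mathbf{M}\boldsymbol{\alpha} + \mathbf{n},
  \qquad
  \mathbf{M} = [\mathbf{m}_1, \ldots, \mathbf{m}_p] \in \mathbb{R}^{L \times p},
\]
\[
  \alpha_j \ge 0 \quad (j = 1, \ldots, p),
  \qquad
  \sum_{j=1}^{p} \alpha_j = 1 .
\]
% Under these constraints the noise-free observations lie in a (p-1)-simplex
% whose vertices are the endmembers -- the geometric fact exploited by MVT,
% PPI, N-FINDR, and VCA.
```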

Relevance: 20.00%

Abstract:

Report submitted in fulfilment of the requirements for the degree of Master in Teaching English and a Foreign Language (Spanish) in the 3rd Cycle of Basic Education and in Secondary Education.