917 results for dynamic time warping
Abstract:
This paper analyses forest fires from the perspective of dynamical systems. Forest fires exhibit complex correlations in size, space and time, revealing features often present in complex systems, such as the absence of a characteristic length-scale or the emergence of long-range correlations and persistent memory. This study addresses a public domain forest fire catalogue containing information on events in Portugal during the period from 1980 up to 2012. The data are analysed on an annual basis, modelling the occurrences as sequences of Dirac impulses with amplitude proportional to the burnt area. First, we use mutual information to correlate annual patterns, together with visualization trees, generated by hierarchical clustering algorithms, to compare the data and extract relationships among them. Second, we adopt the Multidimensional Scaling (MDS) visualization tool. MDS generates maps where each object corresponds to a point, and objects perceived to be similar to each other are placed close together, forming clusters. The results are analysed in order to extract relationships among the data and to identify forest fire patterns.
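The two analysis steps lend themselves to a compact illustration. The following is a minimal Python sketch, under illustrative assumptions (synthetic data, a histogram mutual-information estimator, and an ad hoc MI-to-distance conversion, none of which are the authors' exact choices), of correlating annual impulse sequences by mutual information and embedding the resulting distance matrix with hierarchical clustering and MDS.

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
years = list(range(1980, 2013))
# Placeholder data: one daily burnt-area impulse train per year (365 samples).
signals = {y: rng.pareto(2.0, 365) * (rng.random(365) < 0.05) for y in years}

def mi(x, y, bins=16):
    """Histogram estimate of the mutual information between two sequences."""
    cx = np.digitize(x, np.histogram_bin_edges(x, bins))
    cy = np.digitize(y, np.histogram_bin_edges(y, bins))
    return mutual_info_score(cx, cy)

n = len(years)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        # Higher MI means more similar years, so invert it into a distance.
        D[i, j] = D[j, i] = 1.0 / (1.0 + mi(signals[years[i]], signals[years[j]]))

# Visualization tree over the condensed distance matrix.
tree = linkage(D[np.triu_indices(n, 1)], method="average")
# 2-D MDS map: years perceived as similar land close together, forming clusters.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
```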
Abstract:
The Smart Grid environment allows the integration of resources of small and medium players through the use of Demand Response programs. Despite the clear advantages for the grid, the integration of consumers must be done carefully. This paper proposes a system that simulates small and medium players. The system is essential for producing tests and studies on the active participation of small and medium players in the Smart Grid environment. Compared with similar systems, its advantages include the capability to deal with three types of loads (virtual, contextual and real), support for several load optimization modules, and the ability to run in real time. The use of modules and the dynamic configuration of the player result in a system that can represent different players in an easy and independent way. This paper describes the system and all its capabilities.
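As a rough sketch of such a simulated player, the Python below models the three load types and a pluggable load-optimization module answering a demand-response request. All class, field and function names, and the greedy curtailment rule, are illustrative assumptions rather than the system's actual design.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Load:
    name: str
    kind: str          # "virtual", "contextual" or "real"
    power_w: float     # current consumption in watts
    curtailable: bool  # whether a demand-response event may reduce it

@dataclass
class Player:
    loads: List[Load] = field(default_factory=list)
    # Optimization modules are interchangeable: each maps the current loads and
    # a reduction target (W) to new per-load set-points.
    optimizer: Callable[[List[Load], float], Dict[str, float]] = None

    def demand_response(self, reduction_w: float) -> Dict[str, float]:
        return self.optimizer(self.loads, reduction_w)

def greedy_curtailment(loads, reduction_w):
    """Example module: shed the largest curtailable loads first."""
    setpoints = {l.name: l.power_w for l in loads}
    for l in sorted(loads, key=lambda l: -l.power_w):
        if reduction_w <= 0 or not l.curtailable:
            continue
        cut = min(l.power_w, reduction_w)
        setpoints[l.name] = l.power_w - cut
        reduction_w -= cut
    return setpoints

player = Player(
    loads=[Load("heater", "real", 2000, True),
           Load("fridge", "contextual", 150, False),
           Load("sim_ev", "virtual", 3500, True)],
    optimizer=greedy_curtailment,
)
print(player.demand_response(reduction_w=2500))  # largest curtailable load is shed first
```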
Abstract:
The scheduling function plays an important role in production systems. Scheduling systems aim to generate a schedule that efficiently manages a set of tasks that must be executed in the same time period by the same resources. However, dynamic adaptation and optimization are a critical need in scheduling systems, since production organizations are dynamic by nature. In these organizations, disturbances in working conditions and requirements occur regularly and unexpectedly; examples include the arrival of a new task, the cancellation of a task, and changes to due dates. These dynamic events must be taken into account, since they can affect the existing schedule and render it inefficient. Production environments therefore require an immediate response to such events, using a real-time rescheduling method that minimizes their effect on the production system. Scheduling systems must thus be able to adapt, automatically and intelligently, the schedule the organization is following to unexpected events in real time. This dissertation addresses the problem of incorporating new tasks into an existing schedule. To this end, an optimization approach, a Constructive Selection Hyper-heuristic for Dynamic Scheduling, is proposed to handle the dynamic events that may occur in a production environment, keeping the schedule as robust as possible. The approach is inspired by evolutionary computation and hyper-heuristics. The computational study carried out shows that the constructive selection hyper-heuristic can be advantageous in solving dynamic adaptation optimization problems.
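To make the selection step concrete, the Python sketch below shows a constructive selection hyper-heuristic reacting to the dynamic arrival of a new job on a single machine: each low-level insertion heuristic builds a candidate schedule, and the candidate with the lowest total tardiness is kept. The two heuristics and the tardiness objective are illustrative assumptions, not the dissertation's exact formulation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Job:
    name: str
    proc_time: int
    due_date: int

def total_tardiness(seq: List[Job]) -> int:
    t = tard = 0
    for j in seq:
        t += j.proc_time
        tard += max(0, t - j.due_date)
    return tard

# Low-level constructive heuristics: each returns a candidate schedule.
def insert_edd(seq, new):        # keep the sequence sorted by due date
    return sorted(seq + [new], key=lambda j: j.due_date)

def insert_best_slot(seq, new):  # try every slot, keep the cheapest one
    cands = [seq[:i] + [new] + seq[i:] for i in range(len(seq) + 1)]
    return min(cands, key=total_tardiness)

def reschedule(seq: List[Job], new: Job) -> List[Job]:
    """Selection step: evaluate every heuristic and keep the best schedule."""
    return min((h(seq, new) for h in (insert_edd, insert_best_slot)),
               key=total_tardiness)

plan = [Job("J1", 4, 5), Job("J2", 3, 10), Job("J3", 6, 20)]
plan = reschedule(plan, Job("J4", 2, 8))  # dynamic event: a new job arrives
```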
Abstract:
Over the past decades, several approaches to schedulability analysis have been proposed for both uni-processor and multi-processor real-time systems. Although different techniques are employed, very little has been put forward on the use of formal specifications, with the consequent possibility of misinterpretations or ambiguities in the problem statement. Using a logic-based approach to schedulability analysis in the design of hard real-time systems eases the synthesis of correct-by-construction procedures for both static and dynamic verification processes. In this paper we propose a novel approach to schedulability analysis based on a timed temporal logic with time durations. Our approach subsumes classical methods for uni-processor scheduling analysis over compositional resource models by providing the developer with counter-examples and by ruling out schedules that cause unsafe violations in the system. We also provide an example showing the effectiveness of our proposal.
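For context, the sketch below implements one of the classical uni-processor methods that such a logic-based approach subsumes: exact response-time analysis for fixed-priority preemptive scheduling, computed as a fixed-point iteration over higher-priority interference. The task set is illustrative; the paper's timed temporal logic formalism is not reproduced here.

```python
import math

# (worst-case execution time C, period = deadline T), highest priority first.
tasks = [(1, 4), (2, 6), (3, 12)]

def schedulable(tasks):
    for i, (c_i, t_i) in enumerate(tasks):
        r = c_i
        while True:
            # Interference from all higher-priority tasks released in [0, r).
            r_next = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
            if r_next == r:
                break            # fixed point: worst-case response time found
            r = r_next
            if r > t_i:
                return False     # response time already exceeds the deadline
        if r > t_i:
            return False
    return True

print(schedulable(tasks))  # True: all three tasks meet their deadlines
```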
A real-time quantitative assay for hepatitis B virus (HBV) DNA developed to detect all HBV genotypes
Abstract:
Hepatitis B virus (HBV) is a major cause of chronic liver disease worldwide. Besides genotyping, quantitative analysis of HBV infection is extensively used for monitoring disease progression and treatment. Affordable viral load monitoring is desirable in resource-limited settings, and it has already been shown to be useful in developing countries for other viruses such as hepatitis C virus (HCV) and HIV. In this paper, we describe the validation of a real-time PCR assay for HBV DNA quantification with TaqMan chemistry and MGB probes. Primers and probes were designed using an alignment of sequences from all HBV genotypes, in order to amplify all of them equally. The assay is internally controlled and was standardized against an international HBV panel. Its efficacy was evaluated by comparing its results with two other methods: the Versant HBV DNA Assay 3.0 (bDNA, Siemens, NY, USA) and another real-time PCR from a reference laboratory. Intra-assay and inter-assay reproducibilities were determined, and the mean CV values obtained were 0.12 and 0.09, respectively. The assay was validated over a broad dynamic range and efficiently amplifies all HBV genotypes, providing a good option for quantifying HBV DNA as a routine procedure, with a cheap and reliable protocol.
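Although the validation itself is a laboratory result, the quantification step is easy to illustrate: calibrator Ct values with known concentrations define a linear fit of Ct against log10 concentration, from which sample viral loads are interpolated. The calibrator values in this Python sketch are illustrative numbers, not the assay's actual standard panel.

```python
import numpy as np

# Standard-panel calibrators: (log10 IU/mL, measured Ct); illustrative values.
log_conc = np.array([7.0, 6.0, 5.0, 4.0, 3.0, 2.0])
ct       = np.array([16.1, 19.5, 22.9, 26.3, 29.8, 33.2])

slope, intercept = np.polyfit(log_conc, ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0  # perfect doubling per cycle gives 100%

def quantify(sample_ct: float) -> float:
    """Viral load (IU/mL) of a sample, read off the standard curve."""
    return 10 ** ((sample_ct - intercept) / slope)

print(f"PCR efficiency: {efficiency:.1%}")
print(f"Sample with Ct 24.5: {quantify(24.5):.2e} IU/mL")
```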
Abstract:
Faced with the stagnation of uniprocessor technology over the past decade, the major microprocessor manufacturers found in multi-core technology the answer to the market's growing processing needs. For years, software developers saw their applications ride the performance gains delivered by each new generation of sequential processors; but as processing capacity now scales with the number of processors, sequential computations must be decomposed into concurrent parts that can execute in parallel, so that they can use the additional processing units and complete sooner. Parallel programming entails a paradigm completely distinct from sequential programming. Unlike the sequential computers typified by the Von Neumann model, the heterogeneity of parallel architectures calls for parallel programming models that abstract away architectural details and simplify the development of concurrent applications. The most popular parallel programming models encourage programmers to identify concurrency in their program logic and to express it as tasks that can be assigned to distinct processors and executed simultaneously. These tasks are typically spawned at run time and assigned to processors by the underlying execution engine. Since processing requirements tend to vary and are not known a priori, the mapping of tasks to processors must be determined dynamically, in response to unpredictable changes in execution requirements. As the volume of computation grows, guaranteeing its timing constraints on uniprocessor platforms becomes ever less feasible. While real-time systems begin to adapt to the parallel computing paradigm, there is a growing drive to integrate real-time executions with interactive applications on the same hardware, in a world where technology becomes ever smaller, lighter, more ubiquitous and more portable. This integration requires scheduling solutions that simultaneously guarantee the timing requirements of real-time tasks and maintain an acceptable level of QoS for the remaining executions. To this end, real-time applications must parallelize, so as to minimize their response times and maximize the utilization of processing resources. This introduces a new dimension to the scheduling problem, which must respond correctly to new, unpredictable execution requirements and quickly derive the task mapping that best serves the system's performance criteria. Server-based scheduling makes it possible to reserve a fraction of the processing capacity for real-time tasks and to ensure that latency effects on their execution do not disturb the reservations stipulated for other executions. For tasks scheduled by their worst-case execution time, or tasks with variable execution times, the allotted bandwidth is likely not to be fully consumed. To improve system utilization, capacity-sharing algorithms donate unused capacity to the execution of other tasks while preserving the isolation guarantees between servers.
With proven efficiency in terms of space, time and communication, the work-stealing mechanism has been gaining popularity as a methodology for scheduling tasks with dynamic and irregular parallelism. The p-CSWS algorithm combines server-based scheduling with capacity-sharing and work-stealing to meet the scheduling needs of open real-time systems. While server-based scheduling shares the processing resources without delay interference, a new work-stealing policy operating on top of the capacity-sharing mechanism exploits parallelism in a way that improves application response times and system utilization. This thesis proposes an implementation of the p-CSWS algorithm for Linux. In line with the modular structure of the Linux scheduler, a new scheduling class is defined to assess the applicability of the p-CSWS heuristic under real circumstances. Having overcome the obstacles intrinsic to Linux kernel programming, extensive experimental tests prove that p-CSWS is more than an attractive theoretical concept, and that the heuristic exploitation of parallelism proposed by the algorithm benefits the response times of real-time applications as well as the performance and efficiency of the multiprocessor platform.
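The work-stealing discipline at the heart of p-CSWS can be sketched compactly: each worker owns a double-ended queue, pushes and pops its own tasks at one end, and steals from the opposite end of a victim's deque when it runs dry. The single-threaded Python sketch below shows only this stealing rule; server budgets, capacity donation and all real-time machinery are omitted, and every name is illustrative.

```python
import collections
import random

class Worker:
    def __init__(self, wid):
        self.wid = wid
        self.deque = collections.deque()

    def push(self, task):
        self.deque.append(task)            # owner works at the "bottom"

    def next_task(self, workers):
        if self.deque:
            return self.deque.pop()        # LIFO for the owner: better locality
        victims = [w for w in workers if w is not self and w.deque]
        if victims:
            victim = random.choice(victims)
            return victim.deque.popleft()  # FIFO steal from the "top"
        return None

workers = [Worker(i) for i in range(4)]
for t in range(10):
    workers[0].push(f"task{t}")            # all work starts on one worker
for w in workers[1:]:
    print(w.wid, "stole", w.next_task(workers))  # idle workers steal the oldest tasks
```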
Abstract:
Dissertation submitted to obtain the Doctoral Degree in Electrical and Computer Engineering.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted to obtain the Doctoral Degree in Civil Engineering.
Abstract:
Dissertation submitted to obtain the Master's Degree in Informatics Engineering.
Abstract:
Dissertation submitted to obtain the Doctoral Degree in Climate Change and Sustainable Development Policies.
Abstract:
Spin-lattice relaxation, self-diffusion coefficients and residual dipolar couplings (RDCs) are the basis of well-established Nuclear Magnetic Resonance techniques for the physicochemical study of small molecules (typically organic compounds and natural products with MW < 1000 Da), as they have proved to be a powerful and complementary source of information about structural dynamic processes in solution. The work developed in this thesis consists in the application of the aforementioned NMR techniques to explore, analyze and systematize patterns of the molecular dynamic behavior of selected small molecules under particular experimental conditions. Two systems were chosen to investigate molecular dynamic behavior by these techniques: the dynamics of ion-pair formation and ion interaction in ionic liquids (IL), and the dynamics of molecular reorientation when molecules are placed in oriented phases (alignment media). NMR spin-lattice relaxation and self-diffusion measurements were applied to study the rotational and translational molecular dynamics of the IL 1-butyl-3-methylimidazolium tetrafluoroborate [BMIM][BF4]. The cation-anion dynamics in neat IL and IL-water mixtures was systematically investigated by a combination of multinuclear NMR relaxation techniques and diffusion data (using 1H, 13C and 19F NMR spectroscopy). Spin-lattice relaxation time (T1), self-diffusion coefficient and nuclear Overhauser effect experiments were combined to determine the conditions that favor the formation of long-lived [BMIM][BF4] ion pairs in water. For this purpose, and using the self-diffusion coefficients of cation and anion as a probe, different IL-water compositions were screened (from neat IL to infinite dilution) to find the conditions where cation and anion present equal diffusion coefficients (8% water fraction at 25 ºC). This condition, as well as the neat IL and infinite dilution, was then further studied by 13C NMR relaxation in order to determine correlation times (τc) for the molecular reorientational motion, using a mathematical iterative procedure and experimental data obtained in the temperature range between 273 and 353 K. The behavior of the self-diffusion and relaxation data obtained in our experiments points to the combination of 8% molar fraction and 298 K as the most favorable condition for the formation of long-lived ion pairs. When molecules are subjected to soft anisotropic motion by being placed in certain special media, residual dipolar couplings (RDCs) can be measured, owing to the partial alignment induced by these media. RDCs are emerging as a powerful routine tool in conformational analysis, as they complement and even outperform approaches based on the classical NMR NOE or 3J couplings. In this work, three different alignment media were characterized and evaluated in terms of integrity using 2H and 1H 1D-NMR spectroscopy, namely stretched and compressed PMMA gels and the lyotropic liquid crystals CpCl/n-hexanol/brine and cromolyn/water. The influence that different media and degrees of alignment have on the dynamic properties of several molecules was explored. Sugars of different sizes were used, and their self-diffusion was determined, as well as conformational features using RDCs.
The results obtained indicate that the diffusion and conformational features of the small molecules studied are not influenced within the alignment degree range investigated (3, 5 and 6% CpCl/n-hexanol/brine for diffusion, and 5 and 7.5% CpCl/n-hexanol/brine for conformation). It was also possible to determine that the small-molecule diffusion coefficients observed in the alignment media are close to those observed in water, reinforcing the idea that molecular properties are not conditioned by such media.
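The iterative extraction of a correlation time from 13C T1 data can be illustrated under textbook assumptions: purely 1H-13C dipolar relaxation with an isotropic Lorentzian spectral density. The field strength, C-H distance, proton count and measured T1 in this Python sketch are illustrative values, not the thesis's data.

```python
import numpy as np
from scipy.optimize import brentq

MU0 = 4e-7 * np.pi         # vacuum permeability (T m/A)
HBAR = 1.054571817e-34     # reduced Planck constant (J s)
GAMMA_H = 2.675e8          # 1H gyromagnetic ratio (rad/s/T)
GAMMA_C = 0.6728e8         # 13C gyromagnetic ratio (rad/s/T)
B0 = 9.4                   # field (T), i.e. a 400 MHz spectrometer
R_CH = 1.09e-10            # C-H bond length (m)

w_h, w_c = GAMMA_H * B0, GAMMA_C * B0
K = 0.1 * (MU0 / (4 * np.pi) * GAMMA_H * GAMMA_C * HBAR / R_CH**3) ** 2

def J(w, tau):
    """Lorentzian spectral density for isotropic reorientation."""
    return tau / (1.0 + (w * tau) ** 2)

def r1(tau, n_h=2):
    """1H-13C dipolar relaxation rate 1/T1 for a carbon bearing n_h protons."""
    return n_h * K * (J(w_h - w_c, tau) + 3 * J(w_c, tau) + 6 * J(w_h + w_c, tau))

t1_measured = 0.5          # s, e.g. one ring carbon of [BMIM][BF4] (illustrative)
# Solve r1(tau_c) = 1/T1 numerically on the fast-motion branch.
tau_c = brentq(lambda t: r1(t) - 1.0 / t1_measured, 1e-13, 1e-9)
print(f"tau_c = {tau_c * 1e12:.1f} ps")
```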
Abstract:
INTRODUCTION: Sylvatic yellow fever (SYF) is enzootic in Brazil, causing periodic outbreaks in humans living near forest borders or in rural areas. In this study, the cycling patterns of this arboviral disease were analyzed. METHODS: Spectral Fourier analysis was used to capture the periodicity patterns of SYF time series. RESULTS: SYF outbreaks have not increased in frequency, only in the number of cases. There are two dominant cycles in SYF outbreaks: a seven-year cycle for the central-western region and a 14-year cycle for the northern region. Most of the variance was concentrated in the central-western region and dominated the entire endemic region. CONCLUSIONS: The seven-year cycle is predominant in the endemic region of the disease due to the greater contribution of variance from the central-western region; however, it was possible to identify a 14-year cycle that governs SYF outbreaks in the northern region. No periodicities were identified for the remaining geographical regions.
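The method reduces to a periodogram of the annual case-count series, with the dominant cycles read off as spectral peaks. The Python sketch below runs the computation on a synthetic series with a built-in seven-year component; it illustrates the technique only, not the study's data.

```python
import numpy as np

years = np.arange(1950, 2014)
n = len(years)
# Synthetic annual SYF case counts with a seven-year component plus noise.
rng = np.random.default_rng(1)
cases = 50 + 30 * np.sin(2 * np.pi * years / 7.0) + rng.normal(0, 10, n)

detrended = cases - cases.mean()
power = np.abs(np.fft.rfft(detrended)) ** 2  # periodogram
freqs = np.fft.rfftfreq(n, d=1.0)            # frequencies in cycles per year

peak = np.argmax(power[1:]) + 1              # skip the zero-frequency bin
print(f"dominant period: {1.0 / freqs[peak]:.1f} years")
```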
Abstract:
Digital Businesses have become a major driver of economic growth and have seen an explosion of new startups. At the same time, the sector also includes mature enterprises that have become global giants in a relatively short period of time. Digital Businesses have unique characteristics that make the running and management of a Digital Business much different from that of traditional offline businesses. Digital Businesses respond to online users who are highly interconnected and networked. This enables a rapid flow of word of mouth, at a pace far greater than ever envisioned for traditional products and services. The relatively low cost of adding an incremental user has led to a variety of innovation in the pricing of digital products, including various forms of free and freemium pricing models. This thesis explores the unique characteristics and complexities of Digital Businesses and their implications for the design of Digital Business Models and Revenue Models. The thesis proposes an Agent Based Modeling Framework that can be used to develop Simulation Models that capture the complex dynamics of Digital Businesses and the interactions between users of a digital product. Such Simulation Models can be used for a variety of purposes, such as simple forecasting, analysing the impact of market disturbances, analysing the impact of changes in pricing models, and optimising pricing for maximum revenue generation or for a balance between growth in usage and revenue generation. These models can be developed for a mature enterprise with a large historical record of user growth as well as for early-stage enterprises without much historical data. Through three case studies, the thesis demonstrates the applicability of the Framework and its potential applications.
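As a minimal example of the kind of Simulation Model such a Framework targets, the Python sketch below lets networked users adopt a freemium product through word of mouth, with a small fraction of adopters converting to the paid tier each month. Network size, probabilities and price are illustrative assumptions, not a calibrated case study.

```python
import random

random.seed(0)
N, STEPS = 5000, 36            # users, months
P_WOM = 0.04                   # chance an adopter converts a given contact
P_PREMIUM = 0.03               # monthly free-to-paid conversion chance
PRICE = 9.99                   # monthly subscription price

# Random contact network: each user knows about 8 others.
contacts = {u: random.sample(range(N), 8) for u in range(N)}
state = {u: "unaware" for u in range(N)}
for seed_user in random.sample(range(N), 25):
    state[seed_user] = "free"  # seeded early adopters

for month in range(STEPS):
    adopters = [u for u in range(N) if state[u] != "unaware"]
    for u in adopters:
        for v in contacts[u]:  # word of mouth flows along network edges
            if state[v] == "unaware" and random.random() < P_WOM:
                state[v] = "free"
        if state[u] == "free" and random.random() < P_PREMIUM:
            state[u] = "paid"
    if month % 12 == 11:
        paid = sum(1 for s in state.values() if s == "paid")
        print(f"year {month // 12 + 1}: {len(adopters)} users, "
              f"monthly revenue ${paid * PRICE:,.2f}")
```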