846 results for Flexible Design Framework for Airport (FlexDFA)
Abstract:
This report was produced within the DIPRE curricular unit (Dissertation/Project/Internship) of the Master's programme in Civil Engineering – Infrastructures at the Instituto Superior de Engenharia do Porto. The internship took place at the Divisão Municipal de Obras e Iluminação Pública of the Câmara Municipal do Porto. The report seeks to describe and characterise all pavement types, to develop and study new budgeting and planning techniques, and to analyse several cases that demonstrate the validity of the work carried out. It opens with a theoretical part covering the different pavement types, analysing their behaviour, construction, pathologies and design methods; it also interprets Decree-Law no. 163/2006 and discusses urban mobility issues. The internship required several working tools, some provided by the Divisão Municipal de Obras e Iluminação Pública and others proposed and explored by the student. With these tools a new budgeting method was developed, studying workers' productivity rates to bring greater rigour to the cost estimates produced. The cases presented to illustrate the work were chosen for their importance and scope, so as to demonstrate everything that was followed and carried out during the internship. The first is Rua do Dr. Magalhães Lemos, selected because it offered the opportunity to follow and supervise a project involving two distinct pavement types: a flexible pavement and a rigid one in continuously reinforced concrete. Two accessibility-improvement cases in the city centre were also selected, because they were projects developed by the student in which the different decisions that had to be taken could be explored. Finally, the pavement design study for Rua de Santo Ildefonso is presented, carried out in accordance with the guidelines of the Câmara Municipal do Porto and the methods studied during the academic programme.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation to obtain the Degree of Master in Informatics Engineering
Abstract:
With the growing number of mobile platforms available on the market and the constant increase in their computational power, the possibility of running applications, and in particular games with high performance requirements, has grown considerably. The video game market thus has an ever larger number of potential customers. In particular, the massive multiplayer online (MMO) game market has become very attractive for game development companies. These games support a large number of simultaneous players who may be running the game on different platforms and spread across an extensive game "world". To encourage exploration of that world, points of interest that the player can explore are distributed across it in an intelligent way. This approach requires substantial effort in planning and building those worlds, consuming time and resources during the development phase. This is a problem for game development companies and, in some cases, such costs are impracticable for indie teams. This thesis presents an approach for creating worlds for MMO games. Several successful MMO games are studied in order to identify common properties of their worlds. The goal is to create a flexible framework capable of generating worlds whose structure respects sets of rules defined by game designers. So that the approach presented here can be used in several different applications, two main modules were developed. The first, called rule-based-map-generator, contains the logic and operations needed to create worlds. The second, called blocker, is a wrapper around the rule-based-map-generator module that manages the communication between server and clients. In summary, the overall goal is to provide a framework that eases the generation of worlds for MMO games, normally a very time-consuming process that significantly increases production costs, through a semi-automatic approach combining the benefits of procedural content generation (PCG) with manually created graphical content.
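To make the idea of designer-defined rules concrete, the following is a minimal sketch of rule-constrained placement of points of interest on a grid world. The rule (a minimum spacing between points of interest) and all names are illustrative assumptions, not the thesis's actual rule-based-map-generator interface.

```python
import random

# Minimal sketch: place points of interest (POIs) on a grid so that a
# designer-defined spacing rule is respected.  Illustrative only; not the
# rule-based-map-generator API described in the abstract.
def generate_world(width, height, n_pois, min_distance, seed=None):
    rng = random.Random(seed)
    pois = []
    attempts = 0
    while len(pois) < n_pois and attempts < 10_000:
        attempts += 1
        candidate = (rng.randrange(width), rng.randrange(height))
        # Rule: keep POIs spread out to encourage exploration of the world.
        if all(abs(candidate[0] - x) + abs(candidate[1] - y) >= min_distance
               for x, y in pois):
            pois.append(candidate)
    return pois

if __name__ == "__main__":
    print(generate_world(100, 100, n_pois=12, min_distance=15, seed=42))
```

In a full generator, additional rules (biome adjacency, road connectivity, quest-hub density, and so on) would be checked in the same rejection loop or enforced constructively.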
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation to obtain the degree of Doctor of Philosophy in Electrical and Computer Engineering (Industrial Information Systems)
Abstract:
Dissertation to obtain the Degree of Master in Electrical and Computer Engineering
Abstract:
The Graphics Processing Unit (GPU) is present in almost every modern personal computer. Despite its special-purpose design, GPUs have been increasingly used for general-purpose computation, with very good results. Hence, there is a growing effort from the community to integrate this kind of device seamlessly into everyday computing. However, to fully exploit the potential of a system comprising GPUs and CPUs, these devices should be presented to the programmer as a single platform. The efficient combination of CPU and GPU power is highly dependent on each device's characteristics, resulting in platform-specific applications that cannot be ported to different systems. Moreover, the most efficient work balance among devices depends heavily on the computation to be performed and on the respective data sizes. In this work, we propose a solution for heterogeneous environments based on the abstraction level provided by algorithmic skeletons. Our goal is to take full advantage of all CPU and GPU devices present in a system, without the need for different kernel implementations or explicit work distribution. To that end, we extended Marrow, an algorithmic skeleton framework for multi-GPU systems, to support CPU computations and to efficiently balance the workload between devices. Our approach is based on an offline training execution that identifies the ideal work balance and platform configuration for a given application and input data size. The evaluation of this work shows that combining CPU and GPU devices can significantly boost the performance of our benchmarks in the tested environments when compared with GPU-only executions.
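A minimal sketch of the offline-training idea described above: measure candidate CPU/GPU work splits for a given input size and keep the fastest. The run_partitioned callback and the candidate ratios are assumptions for illustration, not Marrow's actual interface.

```python
import time

# Offline search for a good CPU/GPU work split (illustrative sketch).
def train_offline(run_partitioned, data_size, ratios=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Return the CPU share of the work that minimises measured runtime."""
    best_ratio, best_time = None, float("inf")
    for cpu_share in ratios:
        start = time.perf_counter()
        run_partitioned(cpu_share=cpu_share, data_size=data_size)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best_ratio, best_time = cpu_share, elapsed
    return best_ratio

if __name__ == "__main__":
    # Dummy workload standing in for a real partitioned CPU/GPU execution.
    def dummy_run(cpu_share, data_size):
        time.sleep(0.01 * abs(cpu_share - 0.3))  # fastest near a 30% CPU share
    print(train_offline(dummy_run, data_size=1 << 20))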
Abstract:
After almost a decade of experience applying model-driven approaches to system development, the reported productivity gains from using models and model transformations to develop entire systems are already an undeniable benefit of the approach. However, the slowness of higher-level, rule-based model transformation languages hinders its applicability at industrial scale. Lower-level, efficient languages can be used instead, but productivity and ease of maintenance then cease to exist. The abstraction-penalty problem is not new; it also existed for high-level, object-oriented languages, yet everyone is using them now. Why, then, is not everyone using rule-based model transformation languages? In this thesis, we propose a framework, comprising a language and its respective environment, designed to tackle the most performance-critical operation of high-level model transformation languages: pattern matching. This framework shows that it is possible to mitigate the performance penalty while still using high-level model transformation languages.
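To illustrate why pattern matching dominates the cost of rule-based transformations, here is a deliberately naive sketch that enumerates all bindings of a small pattern over a model encoded as a set of edges. The encoding and names are illustrative assumptions, not the thesis's language; the point is that unoptimised matching explodes combinatorially with model size.

```python
from itertools import permutations

# Naive pattern matching: find every binding of pattern nodes to model
# nodes that preserves all pattern edges.  Illustrative sketch only.
def match(pattern_edges, model_edges):
    p_nodes = sorted({n for e in pattern_edges for n in e})
    m_nodes = sorted({n for e in model_edges for n in e})
    model_set = set(model_edges)
    matches = []
    for candidate in permutations(m_nodes, len(p_nodes)):
        binding = dict(zip(p_nodes, candidate))
        if all((binding[a], binding[b]) in model_set for a, b in pattern_edges):
            matches.append(binding)
    return matches

# A "class A references class B" pattern matched against a tiny model.
print(match([("A", "B")], [("Order", "Customer"), ("Invoice", "Order")]))
```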
Abstract:
The Intel® Xeon Phi™ is the first processor based on Intel's MIC (Many Integrated Core) architecture. It is a co-processor specially tailored for data-parallel computations, whose basic architectural design is similar to that of GPUs (Graphics Processing Units), leveraging many simple integrated cores to perform parallel computations. The main novelty of the MIC architecture relative to GPUs is its compatibility with the Intel x86 architecture. This enables the use of many of the tools commonly available for parallel programming of x86-based architectures, which may lead to a smaller learning curve. However, programming the Xeon Phi still entails aspects intrinsic to accelerator-based computing in general and to the MIC architecture in particular. In this thesis we advocate the use of algorithmic skeletons for programming the Xeon Phi. Algorithmic skeletons abstract the complexity inherent to parallel programming, hiding details such as resource management, parallel decomposition and inter-execution-flow communication, thus removing these concerns from the programmer's mind. In this context, the goal of the thesis is to lay the foundations for a simple yet powerful and efficient skeleton framework for programming the Xeon Phi processor. For this purpose we build upon Marrow, an existing framework for the orchestration of OpenCL™ computations in multi-GPU and CPU environments, and extend it to execute both OpenCL and C++ parallel computations on the Xeon Phi. We evaluate the newly developed framework with several well-known benchmarks, such as Saxpy and N-Body, comparing its performance on the co-processor with that of the existing framework, and also assessing performance on the Xeon Phi versus a multi-GPU environment.
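As a toy illustration of what an algorithmic skeleton hides from the programmer, the sketch below shows a "map" skeleton that owns the worker pool and data decomposition, so the user only supplies the per-element function (here a Saxpy element, one of the benchmarks cited). This is plain Python for illustration, not the Marrow API.

```python
from concurrent.futures import ProcessPoolExecutor

# "map" skeleton: the skeleton manages workers and chunking; the caller
# only provides the element-wise function.  Illustrative sketch only.
def map_skeleton(fn, data, workers=4, chunk=1024):
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fn, data, chunksize=chunk))

def saxpy_element(pair, a=2.0):
    x, y = pair
    return a * x + y

if __name__ == "__main__":
    xs, ys = range(10_000), range(10_000)
    result = map_skeleton(saxpy_element, list(zip(xs, ys)))
    print(result[:5])
```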
Abstract:
Nowadays, the consumption of goods and services on the Internet is increasing steadily. Small and Medium Enterprises (SMEs), mostly from traditional industry sectors, usually do business in weak and fragile market sectors, where customised products and services prevail. To survive and compete in today's markets they have to readjust their business strategies by creating new manufacturing processes and establishing new business networks through new technological approaches. In order to compete with large enterprises, these partnerships aim at sharing resources, knowledge and strategies to boost the sector's business consolidation through the creation of dynamic manufacturing networks. To meet this demand, the development of a centralised information system is proposed, allowing enterprises to select and create dynamic manufacturing networks capable of monitoring the entire manufacturing process, including the assembly, packaging and distribution phases. Even networking partners from the same area hold multiple, heterogeneous representations of the same knowledge, each denoting its own view of the domain. Thus, conceptually, semantically and, consequently, lexically diverse knowledge representations may occur in the network, causing non-transparent sharing of information and interoperability inconsistencies. A framework, supported by a tool, that flexibly enables the identification, classification and resolution of such semantic heterogeneities is therefore required. This tool will support the network in establishing semantic mappings, facilitating the integration of the various enterprises' information systems.
Abstract:
This paper offers a new approach to estimating time-varying covariance matrices in the framework of the diagonal-vech version of the multivariate GARCH(1,1) model. Our method is numerically feasible for large-scale problems, produces positive semidefinite conditional covariance matrices, and does not impose unrealistic a priori restrictions. We provide an empirical application in the context of international stock markets, comparing the new estimator with a number of existing ones.
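For reference, the diagonal-vech GARCH(1,1) specification underlying this abstract updates each conditional covariance element from its own lag and the corresponding product of lagged innovations (standard textbook form; the paper's contribution concerns estimating it so that the conditional covariance matrix stays positive semidefinite):

h_{ij,t} = \omega_{ij} + \alpha_{ij}\,\varepsilon_{i,t-1}\,\varepsilon_{j,t-1} + \beta_{ij}\,h_{ij,t-1}, \qquad i, j = 1, \dots, N,

where H_t = [h_{ij,t}] is the conditional covariance matrix and \varepsilon_t is the vector of innovations.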
Abstract:
Digital Businesses have become a major driver of economic growth and have seen an explosion of new startups; at the same time, the sector includes mature enterprises that have become global giants in a relatively short period of time. Digital Businesses have unique characteristics that make running and managing them quite different from traditional offline businesses. They respond to online users who are highly interconnected and networked, enabling word of mouth to flow at a pace far greater than ever envisioned for traditional products and services. The relatively low cost of adding incremental users has led to a variety of innovations in the pricing of digital products, including various forms of free and freemium pricing models. This thesis explores the unique characteristics and complexities of Digital Businesses and their implications for the design of Digital Business Models and Revenue Models. It proposes an Agent-Based Modeling Framework that can be used to develop simulation models capturing the complex dynamics of Digital Businesses and the interactions between users of a digital product. Such simulation models can be used for a variety of purposes, such as simple forecasting, analysing the impact of market disturbances, analysing the impact of changes in pricing models, and optimising pricing for maximum revenue generation or for a balance between usage growth and revenue generation. These models can be developed for a mature enterprise with a long historical record of user growth as well as for early-stage enterprises without much historical data. Through three case studies, the thesis demonstrates the applicability of the Framework and its potential applications.
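The following is a minimal agent-style sketch of the kind of dynamics such models capture: word-of-mouth-driven adoption of a freemium product with a small per-period conversion to the paid tier. The rates and the conversion rule are illustrative assumptions, not parameters from the thesis's case studies.

```python
import random

# Word-of-mouth adoption plus freemium conversion (illustrative sketch).
def simulate(pop=10_000, steps=24, p_innovate=0.01, p_imitate=0.4,
             p_convert=0.05, seed=1):
    rng = random.Random(seed)
    adopters, paying = 0, 0
    history = []
    for _ in range(steps):
        share = adopters / pop
        # Each non-adopter may adopt spontaneously or through word of mouth.
        new = sum(1 for _ in range(pop - adopters)
                  if rng.random() < p_innovate + p_imitate * share)
        adopters += new
        # A fraction of free users converts to the paid tier each period.
        paying += sum(1 for _ in range(adopters - paying)
                      if rng.random() < p_convert)
        history.append((adopters, paying))
    return history

if __name__ == "__main__":
    for month, (users, payers) in enumerate(simulate(), start=1):
        print(f"month {month:2d}: users={users:6d} paying={payers:5d}")
```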
Abstract:
Airport pavements deteriorate in service due to traffic and climate effects; systematic monitoring is therefore required to assess their structural and functional condition. The aim of this work is to present the methodologies currently used for airport pavement evaluation and to contribute to their improvement in the area of structural analysis. The main aspects addressed are the application of Ground Penetrating Radar (GPR) and Falling Weight Deflectometer (FWD) tests for structural evaluation, and the use of the GRIP tester and the measurement of the texture depth of the wearing course for the functional evaluation of the runway. Freeware computer programs used to design new runways (FAARFIELD and COMFAA) are also presented, with examples. Case studies are described for both structural and functional evaluation.
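As one example of the kind of relation used in FWD-based structural evaluation (a standard elastic half-space result, not necessarily the exact back-calculation procedure followed in this work), the equivalent surface modulus can be estimated from the deflection measured at the centre of the loading plate:

E_0 = \frac{2\,(1-\nu^{2})\,\sigma_0\,a}{d_0},

where \sigma_0 is the contact pressure under the plate, a the plate radius, \nu the Poisson ratio and d_0 the centre deflection.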
Abstract:
Existing wireless networks are characterized by a fixed spectrum assignment policy. However, the scarcity of available spectrum and its inefficient usage call for a new communication paradigm that exploits the existing spectrum opportunistically. Future Cognitive Radio (CR) devices should be able to sense unoccupied spectrum, allowing the deployment of truly opportunistic networks. Still, traditional Physical (PHY) and Medium Access Control (MAC) protocols are not suitable for this new type of network because they are optimized to operate over fixed assigned frequency bands. Therefore, novel PHY-MAC cross-layer protocols should be developed to cope with the specific features of opportunistic networks. This thesis is mainly focused on the design and evaluation of MAC protocols for Decentralized Cognitive Radio Networks (DCRNs). It starts with a characterization of the spectrum sensing framework based on the Energy-Based Sensing (EBS) technique, considering multiple scenarios. Then, guided by the sensing results obtained with this technique, we present two novel decentralized CR MAC schemes: the first designed to operate in single-channel scenarios and the second intended for multichannel scenarios. Analytical models for network goodput, packet service time and individual transmission probability are derived and used to compute the performance of both protocols. Simulation results confirm the accuracy of the analytical models as well as the benefits of the proposed CR MAC schemes.
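For context, Energy-Based Sensing is conventionally formulated as a binary hypothesis test on the received samples (standard formulation; the threshold \lambda is typically set from a target false-alarm probability):

T(\mathbf{y}) = \frac{1}{N}\sum_{n=1}^{N} |y[n]|^{2} \;\underset{\mathcal{H}_0}{\overset{\mathcal{H}_1}{\gtrless}}\; \lambda,

where \mathcal{H}_0 denotes an idle channel (noise only) and \mathcal{H}_1 an occupied channel (primary signal plus noise).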