18 results for Input-output data
at Instituto Politécnico do Porto, Portugal
Abstract:
This technical report describes the output data files generated by the Repeater-Based Hybrid Wired/Wireless Network Simulator and the Bridge-Based Hybrid Wired/Wireless Network Simulator, together with the tools used to validate these files and to extract information from them.
Abstract:
This work set out to test the feasibility of offline programming for sanding tasks at the company Grohe Portugal. To that end it was first necessary to understand what offline programming is, so a survey of the topic was carried out, which made it evident that offline programming is in every respect similar to online programming, the main difference being that the robot itself is not used while the program is being developed. Because of the robot's absence, offline programming requires detailed knowledge of the work cell, as well as of all the inputs and outputs associated with it; the need to know the inputs and outputs can be worked around by loading a backup of the robot or by loading the system modules. However, manufacturers usually do not provide detailed information about the work cells, which complicates the process of implementing the unit in the 3D model for offline programming. After this initial study, the characteristics inherent to each of the existing cells were studied, with the aim of obtaining a better perception of everything surrounding the sanding tasks. Throughout that study several tests were performed to validate the various programs developed, as well as to test the 3D modelling. The project itself consisted of developing offline programs so as to minimise the impact (especially the downtime) of programming new products. Until then, all programming work was done using the robot, which implied downtimes that could exceed three days. With the development of the programs in offline mode, robot downtime was reduced to little more than one shift (8 h), leaving only the need for some tuning and corrections of the entry, exit and inter-routine/inter-unit movements, since these movements are essential to a good finish of the part and should be smooth. Several stages had to be overcome to carry out and conclude this project, the most relevant being:
- the correct 3D modelling of the cell, taking into account the whole surrounding scenario, to avoid collisions between the robot and the cell;
- the adaptation of offline programming to a language more familiar to the tuners, i.e. programming with inline targets and creating different routines for each part of the piece, thus making tuning easier;
- getting used to programming using only modules to transfer the programs to the cell, as well as to using the inputs, outputs and some already existing routines and functionalities.
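The abstract mentions programming with inline targets, one routine per part of the piece, and transferring programs to the cell as modules. As a loose illustration (not Grohe's actual cells or programs), the Python sketch below generates an ABB RAPID module in that style; all coordinates, routine names and motion parameters are hypothetical placeholders.

```python
# Sketch: generate an ABB RAPID module with inline move targets, one routine
# per part region, so the program can be transferred to the cell as a module.
# All target coordinates, names and parameters are hypothetical.

def rapid_target(x, y, z):
    """Format an inline robtarget (position only, neutral orientation)."""
    return (f"[[{x:.1f},{y:.1f},{z:.1f}],[1,0,0,0],[0,0,0,0],"
            "[9E9,9E9,9E9,9E9,9E9,9E9]]")

def rapid_routine(name, points, speed="v100", zone="z10", tool="tool0"):
    lines = [f"  PROC {name}()"]
    for x, y, z in points:
        lines.append(f"    MoveL {rapid_target(x, y, z)}, {speed}, {zone}, {tool};")
    lines.append("  ENDPROC")
    return "\n".join(lines)

def rapid_module(module_name, routines):
    body = "\n\n".join(rapid_routine(n, pts) for n, pts in routines.items())
    return f"MODULE {module_name}\n\n{body}\n\nENDMODULE\n"

# One routine per part of the piece, to make manual tuning easier.
routines = {
    "sand_handle": [(850.0, 0.0, 400.0), (850.0, 120.0, 400.0)],
    "sand_spout":  [(780.0, -60.0, 430.0), (780.0, 60.0, 430.0)],
}
print(rapid_module("SandingPaths", routines))
```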
Abstract:
Modern real-time systems, with a more flexible and adaptive nature, demand approaches for timeliness evaluation based on probabilistic measures of meeting deadlines. In this context, simulation can emerge as an adequate solution for understanding and analysing the timing behaviour of actual systems. However, care must be taken with the obtained outputs, under penalty of producing results that lack credibility. It is particularly important to consider that we are more interested in values from the tail of a probability distribution (near worst-case probabilities) than in deriving confidence on mean values. We approach this subject by considering the random nature of simulation output data. We start by discussing well-known approaches for estimating distributions from simulation output, and the confidence which can be placed on their mean values. This is the basis for a discussion of the applicability of such approaches to deriving confidence on the tail of distributions, where the worst case is expected to be.
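To make the contrast between confidence on mean values and confidence on tail values concrete, here is a minimal Python sketch, assuming independent, identically distributed simulation output (the data below are a synthetic stand-in): a t-based confidence interval for the mean versus a nonparametric, order-statistic confidence interval for a near-worst-case quantile.

```python
# Sketch: confidence on the mean vs. on a tail quantile of simulation output.
# Assumes i.i.d. replications; the response times are hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.lognormal(mean=2.0, sigma=0.5, size=5000)  # stand-in output data

# Classical t-based confidence interval for the mean.
m, s, n = sample.mean(), sample.std(ddof=1), len(sample)
half = stats.t.ppf(0.975, n - 1) * s / np.sqrt(n)
print(f"mean: {m:.2f} +/- {half:.2f}")

# Nonparametric (order-statistic) confidence interval for the 0.99 quantile:
# the rank of the p-quantile in a sorted i.i.d. sample is Binomial(n, p).
p, alpha = 0.99, 0.05
xs = np.sort(sample)
lo = int(stats.binom.ppf(alpha / 2, n, p))
hi = int(stats.binom.ppf(1 - alpha / 2, n, p))
print(f"0.99 quantile ~ {np.quantile(sample, p):.2f}, "
      f"95% CI [{xs[max(lo - 1, 0)]:.2f}, {xs[min(hi, n - 1)]:.2f}]")
```

Note how much wider the quantile interval is than the mean interval for the same sample size; this is the credibility problem the abstract raises for near-worst-case estimates.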
Abstract:
The problem of uncertainty propagation in composite laminate structures is studied. An approach based on the optimal design of composite structures to achieve a target reliability level is proposed. Using the Uniform Design Method (UDM), a set of design points is generated over a design domain centred at the mean values of the random variables, aimed at studying the variability over the design space. The most critical Tsai number, the structural reliability index and the sensitivities are obtained for each UDM design point, using the maximum load obtained from the optimal design search. Using the UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on supervised evolutionary learning. Finally, using the developed ANN, a Monte Carlo simulation procedure is implemented and the variability of the structural response is studied based on global sensitivity analysis (GSA). The GSA is based on first-order Sobol indices and relative sensitivities, and an appropriate GSA algorithm for obtaining the Sobol indices is proposed. The most important sources of uncertainty are identified.
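A minimal sketch of Monte Carlo estimation of first-order Sobol indices (a Saltelli-style pick-and-freeze estimator); the model function here is a hypothetical stand-in for the trained ANN, not the abstract's actual surrogate or algorithm.

```python
# Sketch: Monte Carlo estimation of first-order Sobol indices for y = f(x).
# The model function is a placeholder for the ANN approximation of the
# critical Tsai number; inputs are uniform on [0, 1] for illustration.
import numpy as np

def model(x):  # hypothetical stand-in for the trained surrogate
    return x[:, 0] + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 2]

def first_order_sobol(model, d, n=100_000, seed=0):
    rng = np.random.default_rng(seed)
    A, B = rng.random((n, d)), rng.random((n, d))
    yA, yB = model(A), model(B)
    var = np.var(np.concatenate([yA, yB]), ddof=1)
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # "pick-and-freeze" matrix A_B^(i)
        S[i] = np.mean(yB * (model(ABi) - yA)) / var
    return S

print(first_order_sobol(model, d=3))
```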
Abstract:
An approach for the analysis of uncertainty propagation in reliability-based design optimization of composite laminate structures is presented. Using the Uniform Design Method (UDM), a set of design points is generated over a domain centered on the mean reference values of the random variables. A methodology based on inverse optimal design of composite structures to achieve a specified reliability level is proposed, and the corresponding maximum load is obtained as a function of ply angle. Using the generated UDM design points as input/output patterns, an Artificial Neural Network (ANN) is developed based on an evolutionary learning process. Then, a Monte Carlo simulation using the developed ANN is performed to simulate the behavior of the critical Tsai number, the structural reliability index, and their relative sensitivities as functions of the ply angle of the laminates. The results are generated for uniformly distributed random variables on a domain centered on mean values. The statistical analysis of the results enables the study of the variability of the reliability index and of its sensitivity relative to the ply angle. Numerical examples showing the utility of the approach for the robust design of angle-ply laminates are presented.
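As an illustration of the surrogate-plus-Monte-Carlo workflow, the sketch below trains a small network on design points and propagates random samples through it. It uses scikit-learn's gradient-based MLPRegressor rather than the evolutionary learning described in the abstract, and all data are synthetic stand-ins.

```python
# Sketch: train a small neural network on design points (stand-ins for the
# UDM input/output patterns) and run a Monte Carlo simulation through it.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 3))          # UDM-like design points
y = X[:, 0] ** 2 + np.sin(X[:, 1]) + X[:, 2]  # stand-in for the Tsai number

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000,
                   random_state=0).fit(X, y)

# Monte Carlo: propagate uniform variability around the mean values
# through the cheap surrogate instead of the expensive structural model.
samples = rng.uniform(-1, 1, size=(100_000, 3))
out = ann.predict(samples)
print(f"response mean {out.mean():.3f}, std {out.std():.3f}")
```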
Abstract:
The main purpose of this work was the development of procedures for the simulation of atmospheric flows over complex terrain using OpenFOAM. To this end, tools and procedures for pre-processing and data extraction were developed apart from this code, and were thereafter applied in the simulation of a real case. For the generation of the computational domain, a systematic method able to translate the terrain elevation model into a native OpenFOAM format (blockMeshDict) was developed. The outcome was a structured mesh in which the user can define the number of control volumes and their dimensions. With this procedure, the difficulties of case set-up and the high computational effort reported in the literature for snappyHexMesh, the OpenFOAM resource used until then for this task, were considered to be overcome. The developed procedures for the generation of boundary conditions allowed the automatic creation of idealized inlet vertical profiles, the definition of wall-function boundary conditions and the calculation of internal-field first guesses for the iterative solution process, taking as input experimental data supplied by the user. The applicability of the generated boundary conditions was limited to the simulation of turbulent, steady-state, incompressible and neutrally stratified atmospheric flows, always resorting to RaNS (Reynolds-averaged Navier-Stokes) models. For the modelling of terrain roughness, the developed procedure allowed the user to define idealized conditions, such as a uniform aerodynamic roughness length or a value varying as a function of characteristic topographic features, or to use real site data; it was complemented by techniques for the visual inspection of the generated roughness maps. The absence of a forest canopy model limited the applicability of this procedure to low aerodynamic roughness lengths. The developed tools and procedures were then applied to the simulation of a neutrally stratified atmospheric flow over the Askervein hill. In the performed simulations, the sensitivity of the solution to different convection schemes, mesh dimensions, ground roughness and formulations of the k-ε and k-ω models was evaluated. When compared to experimental data, the calculated values showed good agreement of the speed-up at the hill top and on the lee side, with a relative error of less than 10% at a height of 10 m above ground level. Turbulent kinetic energy was considered to be well simulated on the windward side and at the hill top, and poorly predicted on the lee side, where a zone of flow separation was also identified. Despite the need for further work to evaluate the importance of the downstream recirculation zone for the quality of the gathered results, the agreement between calculated and experimental values and the OpenFOAM sensitivity to the tested parameters were considered to be generally in line with the simulations presented in the reviewed bibliographic sources.
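A small sketch of the kind of idealized inlet profile generation described above, assuming neutral stratification and Richards-Hoxey-style log-law profiles for mean speed, turbulent kinetic energy and dissipation; the roughness length and reference measurement are hypothetical user inputs, not values from the Askervein case.

```python
# Sketch: idealized neutral log-law inlet profiles of the kind generated for
# the OpenFOAM boundary conditions; u_star is derived from a user-supplied
# reference speed at a reference height. All numbers are hypothetical.
import numpy as np

kappa, Cmu = 0.41, 0.09
z0 = 0.03                      # aerodynamic roughness length [m]
u_ref, z_ref = 10.0, 10.0      # user-supplied measurement [m/s at m]

u_star = kappa * u_ref / np.log((z_ref + z0) / z0)
z = np.linspace(0.5, 100.0, 10)
U   = u_star / kappa * np.log((z + z0) / z0)        # mean wind speed
k   = u_star ** 2 / np.sqrt(Cmu) * np.ones_like(z)  # turbulent kinetic energy
eps = u_star ** 3 / (kappa * (z + z0))              # dissipation rate

for zi, ui, ki, ei in zip(z, U, k, eps):
    print(f"z = {zi:6.1f} m  U = {ui:5.2f} m/s  k = {ki:.3f}  eps = {ei:.4f}")
```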
Abstract:
An ever-increasing need for extra functionality in a single embedded system demands extra Input/Output (I/O) devices, which are usually connected externally and are expensive in terms of energy consumption. To reduce their energy consumption, these devices are equipped with power-saving mechanisms. While I/O device scheduling for real-time (RT) systems with such power-saving features has been studied in the past, the use of energy resources by these scheduling algorithms can still be improved. Technology enhancements in the semiconductor industry have allowed hardware vendors to reduce device transition and energy overheads. The decrease in the overhead of sleep transitions has opened new opportunities to further reduce device energy consumption. In this research effort, we propose an intra-task device scheduling algorithm for real-time systems that wakes up a device on demand and reduces its active time while ensuring system schedulability. This intra-task device scheduling algorithm is extended for devices with multiple sleep states to further minimise the overall device energy consumption of the system. The proposed algorithms have lower complexity than conservative inter-task device scheduling algorithms. The system model used relaxes some of the assumptions commonly made in the state of the art that restrict their practical relevance. Apart from the aforementioned advantages, the proposed algorithms are shown to achieve substantial energy savings.
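The core decision in multi-sleep-state device scheduling can be illustrated by a break-even computation: a sleep state only saves energy if the idle interval is long enough to amortize its transition overheads. The sketch below is a generic illustration with hypothetical device parameters, not the proposed algorithm itself.

```python
# Sketch: choosing the deepest device sleep state whose break-even time fits
# a known idle interval. States and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class SleepState:
    name: str
    power: float        # power drawn in this state [W]
    trans_time: float   # total time to enter and leave the state [s]
    trans_energy: float # energy spent on the transitions [J]

P_ACTIVE = 2.0  # idle-but-powered-on device power [W]

def break_even(s: SleepState) -> float:
    """Minimum idle length for the state to save energy vs. staying on."""
    return max(s.trans_time,
               (s.trans_energy - s.power * s.trans_time)
               / (P_ACTIVE - s.power))

def pick_state(states, idle_interval):
    """Deepest (lowest-power) state whose break-even fits the interval."""
    ok = [s for s in states if break_even(s) <= idle_interval]
    return min(ok, key=lambda s: s.power) if ok else None

states = [SleepState("light", 0.8, 0.001, 0.01),
          SleepState("deep", 0.1, 0.050, 0.30)]
print(pick_state(states, idle_interval=0.04))   # short idle -> light sleep
print(pick_state(states, idle_interval=0.50))   # long idle  -> deep sleep
```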
Abstract:
The main purpose of this thesis is to create a data interface between a source of tourist route information and a mobile, interactive system for navigating and visualizing that data. The technological format will be portable and mobility-oriented (PDA) and should be practical, intuitive and multi-faceted, offering good usability to users of various age groups. There will be an AI (Artificial Intelligence) component, which will use the available information to make weighted decisions taking a diversity of aspects into account. The system to be developed should thus be able to deal with imponderables (route changes, schedule management, cancellation of visiting points, new visiting points) and, finally, should help tourists manage their time between Points of Interest (POI). It should also allow a given pre-defined route to be followed or not, with the possibility of POI exploration scenarios, suggested in loco, that are similar to places included in the itinerary and fit the users' profile. The geographical test area of this project will be the riverside zone of Porto, since it is an ex-libris of the city and, simultaneously, an area with many challenges both geographically (given its slopes) and in the large number of events and places to visit.
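As a rough illustration of the time-management decisions described above (not the thesis's actual AI component), the following Python sketch greedily re-plans the remaining POIs against their closing times after a schedule change; the POIs, travel times and opening hours are hypothetical.

```python
# Sketch: greedy re-planning of remaining points of interest (POIs) when a
# visit is cancelled or the schedule slips. All data are hypothetical.
from dataclasses import dataclass

@dataclass
class POI:
    name: str
    travel_min: int   # travel time from the previous stop (simplified)
    visit_min: int
    closes_at: int    # minutes from the start of the day

def plan(pois, now):
    """Greedily keep the POIs that can still be visited before they close."""
    route = []
    for poi in sorted(pois, key=lambda p: p.closes_at):
        arrival = now + poi.travel_min
        if arrival + poi.visit_min <= poi.closes_at:
            route.append(poi.name)
            now = arrival + poi.visit_min
    return route, now

pois = [POI("Ribeira viewpoint", 10, 20, 17 * 60),
        POI("Port wine cellar", 15, 45, 18 * 60),
        POI("D. Luis I bridge", 5, 15, 20 * 60)]
print(plan(pois, now=16 * 60))  # re-plan starting at 16:00
```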
Abstract:
We describe a novel approach to explore DNA nucleotide sequence data, aiming to produce high-level categorical and structural information about the underlying chromosomes, genomes and species. The article starts by analyzing chromosomal data through histograms using fixed length DNA sequences. After creating the DNA-related histograms, a correlation between pairs of histograms is computed, producing a global correlation matrix. These data are then used as input to several data processing methods for information extraction and tabular/graphical output generation. A set of 18 species is processed and the extensive results reveal that the proposed method is able to generate significant and diversified outputs, in good accordance with current scientific knowledge in domains such as genomics and phylogenetics.
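A minimal sketch of the first steps described: histograms of fixed-length DNA words per sequence, followed by a global correlation matrix between the histograms. The sequences below are random stand-ins for real chromosome data, and the word length is an arbitrary choice.

```python
# Sketch: fixed-length DNA word histograms and a correlation matrix between
# sequences; the sequences are random stand-ins for chromosome data.
import itertools
import numpy as np

WORD = 3  # fixed word length (arbitrary for illustration)
WORDS = ["".join(w) for w in itertools.product("ACGT", repeat=WORD)]
INDEX = {w: i for i, w in enumerate(WORDS)}

def histogram(seq):
    """Normalized histogram of overlapping fixed-length words."""
    h = np.zeros(len(WORDS))
    for i in range(len(seq) - WORD + 1):
        w = seq[i:i + WORD]
        if w in INDEX:  # skips words with ambiguous bases (N, etc.)
            h[INDEX[w]] += 1
    return h / h.sum()

rng = np.random.default_rng(0)
seqs = ["".join(rng.choice(list("ACGT"), size=5000)) for _ in range(4)]
H = np.array([histogram(s) for s in seqs])

corr = np.corrcoef(H)   # global correlation matrix between histogram pairs
print(np.round(corr, 3))
```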
Abstract:
The growing complexity of electronic systems, combined with developments in packaging technologies, has led to the miniaturization of integrated circuits, causing difficulties and limitations in fault diagnosis and detection and drastically reducing the applicability of ICT (in-circuit test) equipment. The Boundary Scan infrastructure, described in the IEEE 1149.1 standard "Test Access Port and Boundary-Scan Architecture", approved in 1990, emerged as a way of dealing with this problem. Besides being technically viable and economically attractive for defect diagnosis, this solution also supports other applications. SVF (Serial Vector Format) arose from the desire to have independent vendors adopt the IEEE 1149.1 standard; it is an ASCII format whose purpose is to send stimuli and wait for the corresponding responses, according to a data mask, as defined by IEEE 1149.1. Nowadays the incorporation of Boundary Scan in integrated circuits is expanding rapidly, and consequently it enjoys a strong presence in the market. In this context, the objective of this dissertation is the development of a boundary scan controller that implements an interface with a PC and makes it possible to control and monitor the application of tests to a PCB. The architecture of the developed controller comprises an input memory module, a TAP controller and an output memory. The controller was implemented on an FPGA, a reconfigurable logic device consisting of logic blocks and an interconnection network, both configurable, which allow the user to implement a wide variety of digital functions. Using an FPGA has the advantage of making the controller versatile, easy to modify in code, and able to host additional controllers inside the same FPGA. A communication and synchronization protocol between the various modules was developed, allowing the control and monitoring of the stimuli sent to and received from the PCB, executed automatically by the TAP controller software in accordance with the IEEE 1149.1 standard. The proposed solution was validated by simulation using the Xilinx simulator. All the signals that make up the controller were analysed and the correct operation of all its modules was verified. This solution executes all the sequences required for testing the PCB (sending stimuli), and receives and stores the obtained data, subsequently sending it to the output memory. The work carried out leads to the conclusion that electronic component designs will tend to be described at a higher level of abstraction, relying increasingly on hardware description languages, among which VHDL is an excellent programming tool. The developed controller will be a very useful and versatile tool for testing PCBs and for other functionalities provided by BS infrastructures.
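For reference, here is a Python model of the IEEE 1149.1 TAP controller state machine, the component at the heart of the developed controller; it mirrors the sixteen standard TAP states driven by the TMS signal, and is an illustration of the standard, not the dissertation's VHDL implementation.

```python
# Sketch: the IEEE 1149.1 TAP controller state machine, useful for checking
# the TMS sequences a boundary-scan controller must drive on each TCK edge.
TRANSITIONS = {
    # state: (next state if TMS=0, next state if TMS=1)
    "Test-Logic-Reset": ("Run-Test/Idle", "Test-Logic-Reset"),
    "Run-Test/Idle":    ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-DR-Scan":   ("Capture-DR", "Select-IR-Scan"),
    "Capture-DR":       ("Shift-DR", "Exit1-DR"),
    "Shift-DR":         ("Shift-DR", "Exit1-DR"),
    "Exit1-DR":         ("Pause-DR", "Update-DR"),
    "Pause-DR":         ("Pause-DR", "Exit2-DR"),
    "Exit2-DR":         ("Shift-DR", "Update-DR"),
    "Update-DR":        ("Run-Test/Idle", "Select-DR-Scan"),
    "Select-IR-Scan":   ("Capture-IR", "Test-Logic-Reset"),
    "Capture-IR":       ("Shift-IR", "Exit1-IR"),
    "Shift-IR":         ("Shift-IR", "Exit1-IR"),
    "Exit1-IR":         ("Pause-IR", "Update-IR"),
    "Pause-IR":         ("Pause-IR", "Exit2-IR"),
    "Exit2-IR":         ("Shift-IR", "Update-IR"),
    "Update-IR":        ("Run-Test/Idle", "Select-DR-Scan"),
}

def walk(tms_bits, state="Test-Logic-Reset"):
    for bit in tms_bits:
        state = TRANSITIONS[state][bit]
    return state

# Five TMS=1 clocks reset the TAP from any state; 0,1,0,0 reaches Shift-DR.
print(walk([1, 1, 1, 1, 1]))   # Test-Logic-Reset
print(walk([0, 1, 0, 0]))      # Shift-DR
```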
Abstract:
Networked control systems (NCSs) are spatially distributed systems in which the communication between sensors, actuators and controllers occurs through a shared band-limited digital communication network. The use of a shared communication network, in contrast to several dedicated independent connections, introduces new challenges, which are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network to be used in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data is exploited as offering a good trade-off between accuracy in the measurement of the input signals and the delay to actuation, both important aspects for the quality of control. We introduce a variation of the state-of-the-art algorithms which we prove performs better because it takes into account the changes of the input signal over time within the process of obtaining an approximate interpolation.
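The following sketch is only loosely inspired by the idea of accounting for the input signal's change over time, and is not the algorithm proposed in the paper: a plain inverse-distance spatial interpolation in which older samples are exponentially down-weighted. All data, weights and parameters are hypothetical.

```python
# Sketch: approximate interpolation of spatially sensed values where older
# samples are down-weighted, illustrating (not reproducing) the idea of
# accounting for the input signal's change over time.
import numpy as np

def interpolate(query_xy, samples, t_now, tau=2.0, power=2.0):
    """Inverse-distance weighting with exponential decay of sample age."""
    num = den = 0.0
    for (x, y), value, t in samples:
        d = np.hypot(query_xy[0] - x, query_xy[1] - y) + 1e-9
        w = d ** -power * np.exp(-(t_now - t) / tau)
        num += w * value
        den += w
    return num / den

# ((x, y), sensed value, timestamp) tuples from a hypothetical sensor field.
samples = [((0.0, 0.0), 20.0, 0.0),
           ((1.0, 0.0), 22.0, 4.5),
           ((0.0, 1.0), 21.0, 4.8)]
print(f"{interpolate((0.5, 0.5), samples, t_now=5.0):.2f}")
```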
Abstract:
Wind resource evaluation in two sites located in Portugal was performed using the mesoscale modelling system Weather Research and Forecasting (WRF) and the wind resource analysis tool commonly used within the wind power industry, the Wind Atlas Analysis and Application Program (WAsP) microscale model. Wind measurement campaigns were conducted in the selected sites, allowing for a comparison between in situ measurements and simulated wind, in terms of flow characteristics and energy yield estimates. Three different methodologies were tested, aiming to provide an overview of their benefits and limitations for wind resource estimation. In the first methodology the mesoscale model acts as a set of "virtual" wind-measuring stations: wind data were computed by WRF for both sites and inserted directly as input in WAsP. In the second approach the same procedure was followed, but the terrain influences induced by the mesoscale model's low-resolution terrain data were removed from the simulated wind data. In the third methodology, the simulated wind data were extracted at the top of the planetary boundary layer for both sites, to assess whether the use of geostrophic winds (which, by definition, are not influenced by the local terrain) can bring any improvement in the models' performance. The results obtained with these methodologies were compared with those resulting from in situ measurements, in terms of mean wind speed, Weibull probability density function parameters and production estimates, considering the installation of one wind turbine in each site. Results showed that the second approach produces the values closest to the measured ones, and fairly acceptable deviations were found using this coupling technique in terms of estimated annual production. However, mesoscale output should not be used directly in wind farm siting projects, mainly because of the poor resolution of the mesoscale model's terrain data. Instead, the use of mesoscale output in microscale models should be seen as a valid alternative to in situ data, mainly for preliminary wind resource assessments; the application of mesoscale and microscale coupling in areas with complex topography should nevertheless be done with extreme caution.
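To make the comparison metrics concrete, here is a sketch of fitting Weibull parameters to a wind-speed series and estimating the annual production of one turbine by integrating a power curve against the fitted density; the wind data and the power curve are hypothetical, not those of the studied sites.

```python
# Sketch: Weibull fit of a wind-speed series and annual energy estimate for
# one turbine with a hypothetical power curve (cut-in 3, rated 12, cut-out
# 25 m/s at 2 MW). All data are synthetic stand-ins.
import numpy as np
from scipy import stats
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
wind = stats.weibull_min.rvs(2.1, scale=7.5, size=8760, random_state=rng)

k, _, A = stats.weibull_min.fit(wind, floc=0)     # shape k, scale A
print(f"Weibull k = {k:.2f}, A = {A:.2f} m/s")

def power_kw(v):
    v = np.asarray(v)
    p = 2000.0 * ((v - 3.0) / (12.0 - 3.0)) ** 3   # simple cubic ramp-up
    return np.where((v < 3.0) | (v > 25.0), 0.0, np.minimum(p, 2000.0))

v = np.linspace(0.0, 30.0, 601)
pdf = stats.weibull_min.pdf(v, k, scale=A)
aep_mwh = trapezoid(power_kw(v) * pdf, v) * 8760.0 / 1000.0
print(f"estimated annual production: {aep_mwh:.0f} MWh")
```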
Abstract:
This paper presents the application of multidimensional scaling (MDS) analysis to data emerging from noninvasive lung function tests, namely the input respiratory impedance. The aim is to obtain a geometrical mapping of the diseases in a 3D space representation, allowing the analysis of (dis)similarities between subjects within the same pathology group, as well as between the various groups. The adult patient groups investigated were healthy subjects and patients diagnosed with chronic obstructive pulmonary disease (COPD) or with kyphoscoliosis; the child patient groups were healthy children and children with asthma or cystic fibrosis. The results suggest that MDS can be successfully employed for mapping restrictive (kyphoscoliosis) and obstructive (COPD) pathologies. Hence, MDS tools can be further examined to define clear limits between pools of patients for clinical classification, and used as a training aid in medical traineeships.
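A minimal sketch of the MDS mapping step, assuming a precomputed dissimilarity matrix between subjects; the impedance-derived features below are random stand-ins, and scikit-learn's metric MDS is used purely for illustration.

```python
# Sketch: embed a patient dissimilarity matrix into a 3-D map with metric
# MDS; the "impedance features" are random stand-ins for real test data.
import numpy as np
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 0.3, size=(10, 6))
copd    = rng.normal(1.0, 0.3, size=(10, 6))   # hypothetical feature shift
features = np.vstack([healthy, copd])

D = squareform(pdist(features))                # pairwise dissimilarities
mds = MDS(n_components=3, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords.shape)                            # (20, 3): 3-D map of subjects
```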
Abstract:
In this paper we describe a low cost distributed system intended to increase the positioning accuracy of outdoor navigation systems based on the Global Positioning System (GPS). Since the accuracy of absolute GPS positioning is insufficient for many outdoor navigation tasks, another GPS based methodology – the Differential GPS (DGPS) – was developed in the nineties. The differential or relative positioning approach is based on the calculation and dissemination of the range errors of the received GPS satellites. GPS/DGPS receivers correlate the broadcasted GPS data with the DGPS corrections, granting users increased accuracy. DGPS data can be disseminated using terrestrial radio beacons, satellites and, more recently, the Internet. Our goal is to provide mobile platforms within our campus with DGPS data for precise outdoor navigation. To achieve this objective, we designed and implemented a three-tier client/server distributed system that, first, establishes Internet links with remote DGPS sources and, then, performs campus-wide dissemination of the obtained data. The Internet links are established between data servers connected to remote DGPS sources and the client, which is the data input module of the campus-wide DGPS data provider. The campus DGPS data provider allows the establishment of both Intranet and wireless links within the campus. This distributed system is expected to provide adequate support for accurate outdoor navigation tasks.
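A bare-bones sketch in the spirit of the described data provider: one thread pulls correction bytes from a remote DGPS source over the Internet, and the server fans them out to campus clients. Host names, ports and framing are hypothetical; a real deployment would need the actual correction protocol and the intermediate data-server tier.

```python
# Sketch: minimal relay between a remote DGPS correction source and campus
# clients. The source host/port are hypothetical placeholders.
import socket
import threading

clients, lock = [], threading.Lock()

def feed(source_host="dgps.example.org", source_port=2101):
    """Read DGPS correction frames from the remote source and fan them out."""
    with socket.create_connection((source_host, source_port)) as src:
        while data := src.recv(1024):
            with lock:
                for c in clients[:]:
                    try:
                        c.sendall(data)
                    except OSError:
                        clients.remove(c)   # drop disconnected clients

def serve(port=2101):
    """Accept campus clients that want the DGPS correction stream."""
    with socket.create_server(("", port)) as srv:
        threading.Thread(target=feed, daemon=True).start()
        while True:
            conn, _ = srv.accept()
            with lock:
                clients.append(conn)

if __name__ == "__main__":
    serve()
```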
Abstract:
Adhesive bonding is nowadays a serious candidate to replace methods such as fastening or riveting because of its attractive mechanical properties. As a result, adhesives are increasingly used in industries such as automotive, aerospace and construction. It is therefore highly important to predict the strength of bonded joints, to assess the feasibility of joining during the fabrication of components (e.g. with complex geometries) or for repair purposes. This work studies the tensile behaviour of adhesive joints between aluminium adherends, considering different values of adherend thickness (h), using the double-cantilever beam (DCB) test. The experimental work consists of the determination of the tensile fracture toughness (GIC) for the different joint configurations. A conventional fracture characterization method was used together with a J-integral approach, which takes into account the plasticity effects occurring in the adhesive layer. An optical measurement method is used to evaluate the crack-tip opening and the adherend rotation at the crack tip during the test, supported by a Matlab® sub-routine for the automated extraction of these quantities. As the output of this work, a comparative evaluation between bonded systems with different adherend thicknesses is carried out, and complete fracture data in tension is provided for the subsequent strength prediction of joints under identical conditions.
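As a worked illustration of a conventional compliance-based fracture characterization for the DCB test, the sketch below fits a beam-theory-like compliance law C = C0 + m·a³ and applies G = (P²/2b)·dC/da; the load, compliance and crack-length records are hypothetical, and this is not the paper's J-integral procedure.

```python
# Sketch: compliance-calibration estimate of G_IC from DCB data. Fits
# C = C0 + m*a^3 (beam-theory form) and uses G = (P^2 / 2b) * dC/da.
# The records below are hypothetical.
import numpy as np

b = 25.0                                                # specimen width [mm]
a = np.array([45.0, 50.0, 55.0, 60.0, 65.0])            # crack length [mm]
C = np.array([0.010, 0.0135, 0.0177, 0.0228, 0.0288])   # compliance [mm/N]
P = np.array([120.0, 110.0, 102.0, 95.0, 89.0])         # load at growth [N]

m, C0 = np.polyfit(a ** 3, C, 1)        # linear fit of C against a^3
dCda = 3.0 * m * a ** 2                 # analytical derivative of the fit
G_IC = P ** 2 * dCda / (2.0 * b)        # N/mm, numerically equal to kJ/m^2

for ai, gi in zip(a, G_IC):
    print(f"a = {ai:4.1f} mm  G_IC = {gi:.3f} N/mm")
```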