883 results for Viscosidade e simulação


Relevance: 10.00%

Abstract:

Diffusion coefficients (D12) are fundamental properties in research and industry, but the scarcity of experimental data and the lack of equations able to estimate them accurately and reliably in compressed or condensed phases are major limitations. The main objectives of this work comprise: i) the compilation of a large database of D12 values for gaseous, liquid and supercritical systems; ii) the development and validation of new models for diffusion coefficients at infinite dilution, applicable over wide ranges of temperature and density, for systems containing components that differ widely in polarity, size and symmetry; iii) the assembly and testing of an experimental set-up to measure diffusion coefficients in liquids and supercritical fluids. Regarding modelling, a new expression for the diffusion coefficients of hard spheres at infinite dilution was developed and validated using molecular dynamics data (average absolute relative deviation, AARD = 4.44%). Binary diffusion coefficients of real systems were also studied. For this purpose, an extensive database of diffusivities of real systems in gases and dense solvents was compiled (622 binary systems, totalling 9407 experimental points and 358 molecules) and used to validate the new models developed in this thesis. A set of new models was proposed for the calculation of diffusion coefficients at infinite dilution, following different approaches: i) two molecularly based models with one system-specific parameter, applicable to gaseous, liquid and supercritical systems, where the solvent is restricted to non-polar or weakly polar (global AARDs in the range 4.26-4.40%); ii) two molecularly based two-parameter models, applicable in all physical states, for any type of solute diluted in any solvent (non-polar, weakly polar or polar). Both models yield global errors between 2.74% and 3.65%; iii) a one-parameter correlation specific to diffusion coefficients in supercritical carbon dioxide (SC-CO2) and liquid water (AARD = 3.56%); iv) nine empirical and semi-empirical two-parameter correlations, depending only on temperature and/or solvent density and/or solvent viscosity. These last models are very simple and show excellent results (AARDs between 2.78% and 4.44%) for liquid and supercritical systems; and v) two predictive equations for solute diffusivities in SC-CO2, both with global errors below 6.80%. Overall, it should be emphasised that the new models cover the wide variety of systems and molecules generally encountered. The results obtained are consistently better than those achieved with the models and approaches found in the literature. For the one- and two-parameter correlations, it was shown that these parameters can be fitted using a very small data set and subsequently used to predict D12 values far from the original set of points. A new experimental set-up to measure binary diffusion coefficients by chromatographic techniques was assembled and tested. The equipment, the experimental procedure and the analytical calculations required to obtain D12 values by the chromatographic peak broadening method were assessed by measuring the diffusivities of toluene and acetone in SC-CO2. Diffusion coefficients of eucalyptol in SC-CO2 were then measured in the ranges 202-252 bar and 313.15-333.15 K. The experimental results were analysed using correlations and predictive models for D12.
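The AARD figure quoted throughout the abstract is the average absolute relative deviation between model and experiment. A minimal sketch of its computation, with hypothetical D12 values purely for illustration:

```python
def aard(experimental, predicted):
    """Average absolute relative deviation (AARD, %) between
    experimental and model-predicted values."""
    if len(experimental) != len(predicted):
        raise ValueError("length mismatch")
    total = sum(abs((p - e) / e) for e, p in zip(experimental, predicted))
    return 100.0 * total / len(experimental)

# Hypothetical D12 values (e.g. in cm^2/s), for illustration only
exp_values = [1.00e-4, 2.00e-4, 4.00e-4]
model_values = [1.05e-4, 1.90e-4, 4.20e-4]
print(round(aard(exp_values, model_values), 2))  # 5.0
```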

Relevance: 10.00%

Abstract:

In recent years, optoelectronics has become established as a research field capable of delivering new technological solutions. The many achievements in optics and lasers, as well as in optical communications, have been of great importance and have triggered a series of innovations. Among the large number of existing optical components, fibre-optic components are particularly relevant owing to their simplicity and to the high data-transport capacity of optical fibre. This work focused on one of these optical components: fibre gratings, which have unique optical processing properties. This class of optical components is extremely attractive for the development of optical communication devices and sensors. The work began with a theoretical analysis of fibre gratings, and the most widely used grating fabrication methods were reviewed. Grating inscription was also addressed: an automated inscription system was implemented for silica optical fibre, and the experimental results showed good agreement with the simulation study. A system for inscribing Bragg gratings in plastic optical fibre was also developed. A detailed study of acousto-optic modulation in silica and plastic fibre gratings was presented. Through a detailed analysis of the mechanical excitation modes applied to the acousto-optic modulator, it was shown that two predominant acoustic excitation modes can be established in the optical fibre, depending on the applied acoustic frequency. This characterisation made it possible to develop new applications for optical communications. Different fibre-grating devices were studied and implemented, using the acousto-optic effect and the fibre regeneration process, for applications such as a fast optical add-drop multiplexer, tunable Bragg-grating group delay, tunable phase-shifted Bragg gratings, a method for inscribing Bragg gratings with complex profiles, a tunable gain-equalisation filter and adjustable optical notch filters.
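The fibre Bragg gratings discussed above reflect at the wavelength set by the first-order Bragg condition, lambda_B = 2 * n_eff * Lambda. A small sketch of this relation; the effective index and grating pitch below are typical illustrative values for silica fibre, not numbers from the thesis:

```python
def bragg_wavelength(n_eff, period_nm):
    """First-order Bragg condition: lambda_B = 2 * n_eff * Lambda."""
    return 2.0 * n_eff * period_nm

# Illustrative silica-fibre values: n_eff ~ 1.447, pitch ~ 535 nm
lam = bragg_wavelength(1.447, 535.0)
print(round(lam, 1))  # 1548.3 (nm, in the optical C-band)
```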

Relevance: 10.00%

Abstract:

The main objective of this work was to study the correlation, in both the fresh and hardened states, between mortars and concretes containing pozzolans, namely a metakaolin and a diatomite. The work also sought to optimise the use of pozzolanic materials in the production of mortars and concretes. The study of rheological behaviour began with the evaluation of the reference mortar and the reference concrete, using rheometers suited to each material. The rheological behaviour of the mortars with pozzolans was analysed against that of the reference mortar. It was found that the rheological behaviour of mortars with pozzolans can be adjusted to match that of the reference mortar and, in this way, the corresponding concretes (with pozzolans) can also be kept within the workability range intended and pre-defined for the reference concrete. It was also concluded that, up to a certain pozzolan content, there was a correlation between the rheological parameters (viscosity and yield stress) of the mortars and those of the corresponding concretes. In the characterisation of the hardened mortars and concretes, a correlation was found between the compressive strength of the mortars and the strengths of the corresponding concretes for most formulations. When the workability adjustment was made by changing the water content, only the metakaolin formulations showed a linear relationship between the strengths of the mortars and those of the corresponding concretes. When a water-reducing admixture was used for the workability adjustment, the metakaolin formulations continued to show a linear relationship between mortar and concrete strengths. The mixed formulations, with metakaolin and diatomite, also showed a linear relationship between mortar and concrete strengths. The diatomite compositions do not show this linear relationship between mortar and concrete strength, although a correlation between them does exist. The study of some hardened-state properties of the concretes showed that using water to adjust workability always decreases the compressive strength of the concretes as the pozzolan content increases. The use of a water-reducing admixture, especially with metakaolin, increases concrete strength relative to the reference, owing to metakaolin's greater pozzolanic reactivity compared with diatomite. These trends in mechanical strength were also visible in the elastic modulus and are explained by the evolution of the microstructure, assessed jointly by porosimetry, thermal analysis and scanning electron microscopy. Finally, in the study of the influence of the pozzolanic materials on concrete durability, specifically resistance to chloride penetration, both pozzolans showed a blocking effect against chloride ingress; here too, the effect was more evident in compositions with metakaolin and in the presence of a water-reducing admixture.
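The rheological parameters mentioned (yield stress and viscosity) are commonly obtained by fitting a Bingham model, tau = tau0 + mu_p * gamma_dot, to the measured flow curve. A sketch of that fit with made-up data; the thesis does not specify its fitting procedure, so this is only one plausible approach:

```python
def fit_bingham(shear_rates, shear_stresses):
    """Least-squares fit of the Bingham model tau = tau0 + mu_p * gamma_dot,
    returning (yield stress tau0, plastic viscosity mu_p)."""
    n = len(shear_rates)
    sx, sy = sum(shear_rates), sum(shear_stresses)
    sxx = sum(x * x for x in shear_rates)
    sxy = sum(x * y for x, y in zip(shear_rates, shear_stresses))
    mu_p = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    tau0 = (sy - mu_p * sx) / n
    return tau0, mu_p

# Illustrative flow-curve data following tau = 20 Pa + 5 Pa.s * gamma_dot
rates = [1.0, 2.0, 4.0, 8.0]
stresses = [25.0, 30.0, 40.0, 60.0]
tau0, mu_p = fit_bingham(rates, stresses)
print(tau0, mu_p)  # 20.0 5.0
```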

Relevance: 10.00%

Abstract:

The Minho River, situated 30 km south of the Rias Baixas, is the most important freshwater source flowing into the western Galician coast (NW Iberian Peninsula). Its discharge largely determines the hydrological patterns adjacent to the river mouth, particularly close to the Galician coastal region. The buoyancy generated by the Minho plume can flood the Rias Baixas for long periods, reversing the normal estuarine density gradients. It is therefore important to analyse its dynamics, as well as the thermohaline patterns of the areas affected by the freshwater spreading. Thus, the main aim of this work was to study the propagation of the Minho estuarine plume to the Rias Baixas, establishing the conditions in which this plume affects the circulation and hydrographic features of these coastal systems, through the development and application of the numerical model MOHID. For this purpose, the hydrographic features of the Rias Baixas mouths were studied. It was observed that at the northern mouths, due to their shallowness, heat fluxes between the atmosphere and the ocean are the major forcing influencing water temperature, while at the southern mouths upwelling events and the Minho River discharge were more influential. Salinity increases from south to north, suggesting that the low values observed may be caused by the Minho River freshwater discharge. An assessment of wind data along the Galician coast was carried out in order to evaluate its applicability to the study of the dispersal of the Minho estuarine plume. First, a comparative analysis between winds obtained from land meteorological stations and offshore QuikSCAT satellite data was performed. This comparison revealed that satellite data constitute a good approach to studying wind-induced coastal phenomena.
However, since the numerical model MOHID requires wind data with high spatial and temporal resolution close to the coast, results from the WRF forecast model were added to the previous study. The analyses revealed that WRF model data are a consistent tool for obtaining representative wind data near the coast, showing good results when compared with in situ wind observations from oceanographic buoys. To study the influence of the Minho buoyant discharge on the Rias Baixas, a set of three one-way nested models was developed and implemented using the numerical model MOHID. The first model domain is barotropic and includes the whole Iberian Peninsula coast. The second and third domains are baroclinic: the second is a coarse representation of the Rias Baixas and the adjacent coastal area, while the third covers the same area at higher resolution. A two-dimensional model was also implemented for the Minho estuary, in order to quantify the flow (and its properties) that the estuary injects into the ocean. The period chosen to validate the propagation of the Minho estuarine plume was the spring of 1998, since a high Minho River discharge was reported, wind patterns were favourable to advecting the estuarine plume towards the Rias Baixas, and field data were available to compare with the model predictions. The results show that the adopted nesting methodology was successfully implemented. Model predictions accurately reproduce the hydrodynamics and thermohaline patterns of the Minho estuary and the Rias Baixas. The importance of the Minho River discharge and of the wind forcing in the May 1998 event was also studied. The model results showed that a continuous moderate Minho River discharge combined with southerly winds is enough to reverse the Rias Baixas circulation pattern, reducing the importance of specific events of very high runoff.
The conditions in which the Minho estuarine plume affects the circulation and hydrography of the Rias Baixas were evaluated. The numerical results revealed that the plume responds rapidly to wind variations and is also influenced by the bathymetry and the morphology of the coastline. Without wind forcing, the plume expands offshore, creating a bulge in front of the river mouth. When the wind blows southwards, the main feature is the offshore extension of the plume; northward wind, by contrast, spreads the river plume towards the Rias Baixas. The plume then remains confined close to the coast, reaching the Rias Baixas after about 1.5 days, and for Minho River discharges higher than 800 m3 s-1 it reverses the circulation patterns in the Rias Baixas. Wind stress and Minho River discharge were also found to be the most important factors influencing the size and shape of the plume. Under the same conditions, the water exchange between the Rias Baixas was analysed by following the trajectories of particles released close to the Minho River mouth. Over 5 days, under Minho River discharges higher than 2100 m3 s-1 combined with southerly winds of 6 m s-1, an intense water exchange between the Rias was observed; however, only 20% of the particles found in the Ria de Pontevedra came directly from the Minho River. In summary, the model application developed in this study contributed to the characterisation and understanding of the influence of the Minho River on the Rias Baixas circulation and hydrography, and the methodology can be replicated in other coastal systems.
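The particle-tracking analysis above can be illustrated by a minimal forward-Euler advection sketch. The velocity and travel distance below are illustrative numbers loosely consistent with the 1.5-day arrival figure, not output from the MOHID model:

```python
def advect(particles, velocity, dt, steps):
    """Forward-Euler advection of passive particles in a uniform flow.
    particles: list of (x, y) positions in km; velocity in km/day; dt in days."""
    u, v = velocity
    for _ in range(steps):
        particles = [(x + u * dt, y + v * dt) for x, y in particles]
    return particles

# Illustrative only: plume water moving northwards at ~30 km/day
# covers ~45 km of coast in about 1.5 days.
released = [(0.0, 0.0)]
final = advect(released, velocity=(0.0, 30.0), dt=0.5, steps=3)
print(final)  # [(0.0, 45.0)]
```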

Relevance: 10.00%

Abstract:

Portugal is one of the European countries with the best spatial and population coverage of its motorway network (5th among the 27 EU member states). The marked growth of this network in recent years makes it necessary to use methodologies for analysing and assessing the quality of service provided by these infrastructures with respect to traffic conditions. Quality of service is usually assessed by internationally accepted methodologies, most notably the one set out in the Highway Capacity Manual (HCM). It is with this methodology that levels of service are usually determined in Portugal for the various components of a motorway (basic segments, ramps and weaving segments). However, its direct transposition to the Portuguese reality raises some reservations, since the elements making up the road environment (infrastructure, vehicle and driver) differ from those of the North American reality for which it was developed. It would therefore be useful for stakeholders in the road sector to have methodologies developed for Portuguese conditions, allowing a more realistic characterisation of the quality of service of motorway operations. It should be noted, however, that developing such methodologies requires a very large amount of geometric and traffic data, which entails an enormous need for both human and material resources. This approach is therefore difficult to carry out, making it necessary to resort to alternative methodologies. Recently, microscopic traffic simulation models, which reproduce the individual movement of vehicles in a virtual environment, have been increasingly used to perform traffic analyses.
This dissertation presents the results of the development of a methodology that seeks to recreate, using microscopic traffic simulators, the behaviour of traffic streams on basic motorway segments, with the aim of subsequently adapting the methodology of the HCM (2000 edition) to the Portuguese reality. To this end, the microscopic simulators used (AIMSUN and VISSIM) were employed to reproduce the traffic conditions on a Portuguese motorway, so that the changes in traffic-stream behaviour could be analysed after modifying the main geometric and traffic factors involved in the HCM 2000 methodology. A sensitivity analysis of the simulators was carried out to assess their ability to represent the influence of these factors, with a view to subsequently quantifying their effect for the national reality and thus adapting the methodology to the Portuguese context. In summary, this work presents the main advantages and limitations of the AIMSUN and VISSIM microsimulators in modelling traffic on a Portuguese motorway. It was concluded that these simulators cannot explicitly represent some of the factors considered in the HCM 2000 methodology, which prevents their use as a tool to quantify those effects and consequently makes it impossible to adapt that methodology to the national reality. Some indications are nevertheless given as to how these limitations may be overcome in future work.
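The HCM methodology referred to above grades basic freeway segments by traffic density. A sketch of that grading step, using the HCM 2000 density thresholds as I recall them (they should be verified against the manual before use):

```python
def level_of_service(flow_pcphpl, speed_mph):
    """Level of service of a basic freeway segment from density
    (density thresholds in pc/mi/ln, quoted from memory of HCM 2000;
    verify against the manual)."""
    density = flow_pcphpl / speed_mph
    for los, limit in (("A", 11), ("B", 18), ("C", 26), ("D", 35), ("E", 45)):
        if density <= limit:
            return los
    return "F"

print(level_of_service(1800, 60))  # density 30 pc/mi/ln -> D
```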

Relevance: 10.00%

Abstract:

Viscoelastic treatments are among the most efficient passive damping treatments, particularly for thin and light structures. In this type of treatment, part of the strain energy generated in the viscoelastic material is dissipated to the surroundings as heat. A layer of viscoelastic material is applied to a structure in an unconstrained or constrained configuration, the latter proving to be the more efficient arrangement, because the relative movement of the host and constraining layers subjects the viscoelastic material to a relatively high strain energy. Several studies claim, however, that partial application of the viscoelastic material can be just as efficient while reducing the material and application costs of the treatment: applying patches of material in specific, selected areas of the structure, thus minimising the extent of damping material, results in an equally efficient treatment. Since the damping mechanism of a viscoelastic material is based on the dissipation of part of the strain energy, the efficiency of a partial treatment can be correlated with the modal strain energy of the structure. Even though the results obtained with this approach in various studies are considered very satisfactory, an optimisation procedure is deemed necessary. Obtaining optimum solutions, however, requires time-consuming numerical simulations: the optimisation process to use the minimum amount of viscoelastic material is based on an evolutionary redesign of the geometry and on recalculating the modal damping, making the procedure computationally costly. To mitigate this disadvantage, this study uses adaptive layerwise finite elements and applies genetic algorithms in the optimisation process.
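The evolutionary optimisation described above can be sketched as a tiny genetic algorithm over candidate patch layouts. The strain-energy fractions and material cost below are hypothetical surrogates for the study's finite-element damping calculation, purely for illustration:

```python
import random

# Hypothetical modal strain energy fraction at each candidate patch site.
ENERGY = [0.30, 0.05, 0.25, 0.02, 0.20, 0.03, 0.10, 0.05]
COST_PER_PATCH = 0.06  # penalty for each patch of viscoelastic material

def fitness(layout):
    """Surrogate objective: damping gained (proportional to covered
    modal strain energy) minus the material used."""
    damping = sum(e for e, on in zip(ENERGY, layout) if on)
    return damping - COST_PER_PATCH * sum(layout)

def evolve(pop_size=30, generations=60, seed=1):
    """Keep the fitter half, recombine pairs, and occasionally mutate."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in ENERGY] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, len(ENERGY))
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:  # mutation
                i = rng.randrange(len(ENERGY))
                child[i] = 1 - child[i]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # patches only where the strain energy justifies the material
```

The expected optimum keeps a patch only at sites whose energy fraction exceeds the per-patch cost, mirroring the modal-strain-energy heuristic in the text.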

Relevance: 10.00%

Abstract:

Citizens' expectations of Information Technologies (ITs) keep increasing as ITs become an integral part of our society, serving all kinds of activities, whether professional, leisure, safety-critical or business. The limitations of traditional network designs in providing innovative and enhanced services and applications motivated a consensus to integrate all services over packet-switching infrastructures, using the Internet Protocol, so as to leverage flexible control and economic benefits in Next Generation Networks (NGNs). However, the Internet is not capable of treating services differently, while each service has its own requirements (e.g., Quality of Service, QoS). The need for more evolved forms of communication has therefore driven radical changes in architectural and layering designs, which demand appropriate solutions for service admission and network resource control. This thesis addresses QoS and network control issues, aiming to improve overall control performance in current and future networks that classify services into classes. The thesis is divided into three parts. In the first part, we propose two resource over-reservation algorithms, Class-based bandwidth Over-Reservation (COR) and Enhanced COR (ECOR). Over-reservation means reserving more bandwidth than a Class of Service (CoS) needs, so that the QoS reservation signalling rate is reduced. COR and ECOR dynamically define over-reservation parameters for CoSs based on the resource conditions of network interfaces; they aim to reduce QoS signalling and the related overhead without incurring CoS starvation or waste of bandwidth. ECOR differs from COR in that it further optimises the minimisation of control overhead.
Further, we propose a centralised control mechanism called Advanced Centralization Architecture (ACA), which uses a single stateful Control Decision Point (CDP) that maintains an up-to-date view of its underlying network topology and of the related link resource statistics in real time, in order to control the overall network. It is important to note that, in this thesis, we use multicast trees as the basis for session transport, not only for group communication, but mainly to pin the packets of a session mapped to a tree so that they follow the desired tree. Our simulation results show a drastic reduction of QoS control signalling and the related overhead, without QoS violation or waste of resources. In addition, we provide a general-purpose analytical model to assess the impact of the various parameters (e.g., link capacity, session dynamics) that generally challenge resource over-provisioning control. In the second part of the thesis, we propose a decentralised control mechanism called Advanced Class-based resource OverpRovisioning (ACOR), which aims to achieve better scalability than the ACA approach. ACOR enables multiple CDPs, distributed at the network edge, to cooperate and exchange appropriate control data (e.g., trees and bandwidth usage information), so that each CDP can maintain good knowledge of the network topology and the related link resource statistics in real time. From a scalability perspective, ACOR cooperation is selective, meaning that control information is exchanged dynamically only among the CDPs that are concerned (correlated). Moreover, synchronisation is carried out through our proposed concept of Virtual Over-Provisioned Resource (VOPR), a share of the over-reservations of each interface assigned to each tree that uses the interface. Thus, each CDP can process several session requests over a tree, without requiring synchronisation between the correlated CDPs, as long as the VOPR of the tree is not exhausted.
Analytical and simulation results demonstrate that aggregate over-reservation control in decentralised scenarios keeps signalling low without QoS violations or waste of resources. We also introduce a control signalling protocol, the ACOR Protocol (ACOR-P), to support both the centralised and the decentralised designs in this thesis. Further, we propose an Extended ACOR (E-ACOR), which aggregates the VOPRs of all trees originating at the same CDP, allowing more session requests to be processed without synchronisation than with ACOR. In addition, E-ACOR introduces a mechanism to efficiently track network congestion information, preventing unnecessary synchronisation during congestion periods, when VOPRs would be exhausted upon every session request. Performance evaluation through analytical and simulation results proves the superiority of E-ACOR in minimising the overall control signalling overhead while keeping all the advantages of ACOR, that is, without incurring QoS violations or waste of resources. The last part of the thesis presents the Survivable ACOR (SACOR) proposal, which supports stable operation of the QoS and network control mechanisms in the presence of failures and recoveries (e.g., of links and nodes). The performance results show flexible survivability, characterised by fast convergence times and differentiated traffic re-routing under efficient resource utilisation, i.e., without wasting bandwidth. In summary, the QoS and architectural control mechanisms proposed in this thesis provide efficient and scalable support for key network control sub-systems (e.g., QoS and resource control, traffic engineering, multicasting), and thus allow the overall network control performance to be optimised.
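The signalling saving that over-reservation buys can be illustrated with a toy admission loop: the class signals the network only when its current reservation is exhausted, and then reserves a margin above the immediate need. The margin factor and bandwidth numbers are hypothetical, not parameters from COR/ECOR:

```python
class OverReservingClass:
    """Sketch of class-based bandwidth over-reservation: each admission
    signals the network only when the class's reservation is exhausted."""
    def __init__(self, margin_factor=1.5):
        self.margin = margin_factor
        self.reserved = 0.0   # bandwidth currently reserved for the class
        self.in_use = 0.0     # bandwidth actually committed to sessions
        self.signals = 0      # QoS signalling messages sent

    def admit(self, bw):
        if self.in_use + bw > self.reserved:
            # Reserve more than immediately needed (over-reservation).
            self.reserved = (self.in_use + bw) * self.margin
            self.signals += 1
        self.in_use += bw

cos = OverReservingClass()
for _ in range(10):
    cos.admit(1.0)   # ten identical session requests
print(cos.signals)   # 4 signalling messages for 10 sessions
```

Without over-reservation (margin of 1.0), every admission would trigger a signalling message; the margin trades a little idle bandwidth for far fewer control messages, which is the core idea behind COR.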

Relevance: 10.00%

Abstract:

The modelling and analysis of integer-valued time series have been the subject of intense research and development in recent years, with applications in several areas of science. This thesis focuses on the class of models based on the binomial thinning operator. Building on this operator, the thesis addresses the construction and study of the SETINAR(2; p(1); p(2)) and PSETINAR(2; 1; 1)T models, self-exciting threshold integer-valued autoregressive models with two regimes, assuming that the innovations form a sequence of independent Poisson-distributed random variables. For the first model analysed, SETINAR(2; p(1); p(2)), in addition to studying its probabilistic properties and classical and Bayesian parameter-estimation methods, the problem of order selection was examined for the case where the orders are unknown. To this end, Markov chain Monte Carlo algorithms were considered, in particular the Reversible Jump algorithm, and the model-selection problem was also addressed using classical and Bayesian methodologies. The analysis was complemented by a simulation study and an application to two real data sets. The proposed PSETINAR(2; 1; 1)T model is also a self-exciting threshold autoregressive model with two regimes, of first order in each regime, but with a periodic structure. Its probabilistic properties were studied, inference and the prediction of future observations were examined, and simulation studies were carried out.
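The binomial thinning operator and the two-regime recursion can be simulated directly. A sketch assuming first-order regimes and Poisson innovations, with illustrative parameter values (the thinning probability switches according to whether the previous count exceeds the threshold):

```python
import math
import random

def thin(alpha, x, rng):
    """Binomial thinning: alpha o x is a sum of x Bernoulli(alpha) draws."""
    return sum(1 for _ in range(x) if rng.random() < alpha)

def poisson(lam, rng):
    """Knuth's multiplication method for a Poisson(lam) draw (small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_setinar(n, alpha1, alpha2, threshold, lam, seed=0):
    """Path of a two-regime, first-order SETINAR model: the thinning
    probability depends on the previous observation."""
    rng = random.Random(seed)
    x, path = 0, []
    for _ in range(n):
        alpha = alpha1 if x <= threshold else alpha2
        x = thin(alpha, x, rng) + poisson(lam, rng)
        path.append(x)
    return path

path = simulate_setinar(200, alpha1=0.7, alpha2=0.3, threshold=4, lam=2.0)
print(min(path) >= 0)  # True: the process stays on non-negative integers
```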

Relevance: 10.00%

Abstract:

The ever-growing energy consumption in mobile networks, stimulated by the expected growth in data traffic, has provided the impetus for mobile operators to refocus network design, planning and deployment towards reducing the cost per bit, while at the same time taking a significant step towards reducing their operational expenditure. As a step towards a cost-effective mobile system, 3GPP LTE-Advanced has adopted the coordinated multi-point (CoMP) transmission technique due to its ability to mitigate and manage inter-cell interference (ICI). Using CoMP, both the cell-average and cell-edge throughput are boosted. However, there is room to reduce energy consumption further by exploiting the inherent flexibility of dynamic resource allocation protocols. To this end, the packet scheduler plays the central role in determining the overall performance of 3GPP Long-Term Evolution (LTE), which is based on packet-switched operation, and provides a promising research playground for optimising energy consumption in future networks. In this thesis we investigate the baseline performance of downlink CoMP using traditional scheduling approaches, and subsequently go beyond it and propose novel energy-efficient scheduling (EES) strategies that can achieve power-efficient transmission to the UEs while enabling both a system energy-efficiency gain and a fairness improvement. However, ICI can still be prominent when multiple nodes use common resources at different power levels inside the cell, as in the so-called heterogeneous network (HetNet) environment. HetNets are comprised of two or more tiers of cells. The first, or higher, tier is a traditional deployment of cell sites, often referred to in this context as macrocells. The lower tiers are termed small cells, and can appear as microcells, picocells or femtocells. HetNets have attracted significant interest from key manufacturers as one of the enablers of high-speed data at low cost.
Research until now has revealed several key hurdles that must be overcome before HetNets can achieve their full potential: bottlenecks in the backhaul must be alleviated, as must their seamless interworking with CoMP. In this thesis we explore the latter hurdle, and present innovative ideas for advancing CoMP to work in synergy with HetNet deployments, complemented by a novel resource allocation policy for tighter HetNet interference management. A system-level simulator was used to analyse the proposed algorithms and protocols, and the results show that an energy gain of up to 20% can be observed.
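The "traditional scheduling approaches" used as a baseline in LTE studies commonly include the proportional-fair rule, which serves the user with the best instantaneous rate relative to its running average throughput. A one-function sketch of that selection rule (an assumption on my part; the abstract does not name its baseline schedulers):

```python
def proportional_fair(instant_rates, average_rates):
    """Return the index of the user maximising the proportional-fair
    metric: instantaneous achievable rate / average served throughput."""
    metric = [r / max(a, 1e-9) for r, a in zip(instant_rates, average_rates)]
    return metric.index(max(metric))

# User 2 has the best channel relative to what it has been served so far.
print(proportional_fair([10.0, 8.0, 6.0], [9.0, 8.0, 2.0]))  # 2
```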

Relevance: 10.00%

Abstract:

The main objective of this work was to monitor a set of physical-chemical properties of heavy oil procedural streams through nuclear magnetic resonance spectroscopy, in order to propose an analysis procedure and online data processing for process control. Different statistical methods which allow to relate the results obtained by nuclear magnetic resonance spectroscopy with the results obtained by the conventional standard methods during the characterization of the different streams, have been implemented in order to develop models for predicting these same properties. The real-time knowledge of these physical-chemical properties of petroleum fractions is very important for enhancing refinery operations, ensuring technically, economically and environmentally proper refinery operations. The first part of this work involved the determination of many physical-chemical properties, at Matosinhos refinery, by following some standard methods important to evaluate and characterize light vacuum gas oil, heavy vacuum gas oil and fuel oil fractions. Kinematic viscosity, density, sulfur content, flash point, carbon residue, P-value and atmospheric and vacuum distillations were the properties analysed. Besides the analysis by using the standard methods, the same samples were analysed by nuclear magnetic resonance spectroscopy. The second part of this work was related to the application of multivariate statistical methods, which correlate the physical-chemical properties with the quantitative information acquired by nuclear magnetic resonance spectroscopy. Several methods were applied, including principal component analysis, principal component regression, partial least squares and artificial neural networks. Principal component analysis was used to reduce the number of predictive variables and to transform them into new variables, the principal components. These principal components were used as inputs of the principal component regression and artificial neural networks models. 
For the partial least squares model, the original data were used as input. Taking into account the performance of the developed models, assessed through selected statistical performance indexes, it was possible to conclude that principal component regression led to the worst performance. The partial least squares and artificial neural network models achieved better results, and it was the artificial neural network model that gave the best predictions for almost all of the properties analysed. From these results, it was concluded that nuclear magnetic resonance spectroscopy combined with multivariate statistical methods can be used to predict physicochemical properties of petroleum fractions, and that this technique can be considered a promising alternative to the conventional standard methods.
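The principal component regression idea described above can be sketched in a few lines: project the (centred) predictors onto their leading principal component, then fit an ordinary least-squares line from the scores to the property of interest. This is an illustrative toy, not the thesis code: it keeps only one component (found by power iteration) and uses invented data, whereas the actual work used several components and NMR spectra as predictors.

```python
# Illustrative principal-component-regression sketch (one component only).
# All data, dimensions and tolerances here are invented for demonstration.

def pcr_fit_1pc(X, y, iters=200):
    n, p = len(X), len(X[0])
    # centre the predictor matrix
    mean = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - mean[j] for j in range(p)] for row in X]
    # sample covariance matrix of the predictors
    C = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
          for b in range(p)] for a in range(p)]
    # power iteration -> leading eigenvector = 1st principal direction
    v = [1.0] * p
    for _ in range(iters):
        w = [sum(C[a][b] * v[b] for b in range(p)) for a in range(p)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # scores on the 1st PC, then univariate least squares y = b0 + b1 * t
    t = [sum(Xc[i][j] * v[j] for j in range(p)) for i in range(n)]
    tm, ym = sum(t) / n, sum(y) / n
    b1 = (sum((t[i] - tm) * (y[i] - ym) for i in range(n))
          / sum((ti - tm) ** 2 for ti in t))
    b0 = ym - b1 * tm
    return mean, v, b0, b1

def pcr_predict(model, x):
    mean, v, b0, b1 = model
    t = sum((x[j] - mean[j]) * v[j] for j in range(len(x)))
    return b0 + b1 * t
```

In the thesis workflow the same scores would instead be fed to several components of a regression or to an artificial neural network; the dimensionality-reduction step is identical.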

Resumo:

Cherenkov imaging counters require large photosensitive areas capable of single-photon detection, operating at stable high gains under radioactive backgrounds while standing high rates, providing a fast response and good time resolution, and being insensitive to magnetic fields. Photon detectors based on Micro-Pattern Gaseous Detectors (MPGDs) represent a new generation of gaseous photon detectors. In particular, gaseous detectors based on stacked Thick Gaseous Electron Multipliers (THGEMs), or THGEM-based structures, coupled to a CsI photoconverter coating, seem to fulfil the requirements imposed by Cherenkov imaging counters. This work focuses on the study of the response of THGEM-based detectors as a function of their geometrical parameters and of the applied voltages and electric fields, aiming at a future upgrade of the Cherenkov imaging counter RICH-1 of the COMPASS experiment at the CERN SPS. Further studies are performed to decrease the fraction of ions that reach the photocathode (Ion Back-Flow, IBF), in order to minimize ageing and maximize photoelectron extraction. The experimental studies are complemented with simulation results, also obtained in this work.
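As a back-of-the-envelope illustration of the two figures of merit discussed above, the effective gain of a THGEM stack can be modelled as the product of per-stage gains and inter-stage electron transfer efficiencies, and the ion back-flow as the product of the per-stage ion transparencies seen by ions drifting back towards the photocathode. This is a simplified book-keeping model with invented numbers, not the COMPASS detector parameters or the simulation used in the work.

```python
# Simplified stack model (illustrative only; real values depend on geometry,
# voltages and fields, which is exactly what the thesis studies).

def stack_gain(stage_gains, transfer_effs):
    # Effective gain: first-stage gain times (transfer efficiency * gain)
    # for each subsequent stage in the stack.
    gain = stage_gains[0]
    for g, eff in zip(stage_gains[1:], transfer_effs):
        gain *= eff * g
    return gain

def ibf_fraction(ion_transparencies):
    # Fraction of avalanche ions (produced mostly in the last stage) that
    # traverses every upper stage and reaches the CsI photocathode.
    frac = 1.0
    for t in ion_transparencies:
        frac *= t
    return frac
```

Lowering the ion transparency of the upper stages (e.g. by misaligning hole patterns or tuning transfer fields) reduces the IBF, which is the optimisation direction pursued in the text.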

Resumo:

Home automation is an area of great interest and untapped potential, which aims at the automatic and autonomous management of household resources, providing greater comfort to users. Moreover, economic and environmental benefits are increasingly being brought into this concept, in order to ensure a sustainable future. Water heating (by electrical means) is one of the largest contributors to the total energy consumption of a residence. In this context arises the topic "low-complexity intelligent algorithms", originating from a partnership between the Department of Electronics, Telecommunications and Informatics (DETI) of the University of Aveiro and Bosch Termotecnologia SA, which aims at the development of so-called "intelligent" algorithms, that is, algorithms with some capacity for learning and autonomous operation. The algorithms must be adapted to 8-bit processing units, so as to equip small domestic appliances, more specifically electric water-heating tanks. Part of the challenge is therefore related to the computational restrictions of 8-bit microcontrollers. In the specific case of this work, water-temperature sensors in the tank were established as the only source of information external to the algorithms, together with user-defined parameters that set the maximum and minimum water-temperature thresholds. On this basis, the developed algorithms rely on the hot-water consumption profile, observed over each week, to try to predict future water draw-offs and, consequently, to act appropriately, bringing forward or postponing the heating of the water in the tank. The goal is to achieve an advantageous balance between energy savings and user comfort (hot water), without any need for direct intervention by the end user.
The envisaged solution also includes the development of a simulator that allows the performance of the developed algorithms to be observed, evaluated and compared.
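A minimal sketch of the profile-learning idea, written here in Python for readability but using only integer shifts and adds so the same logic would fit an 8-bit microcontroller. This is a hypothetical illustration, not the Bosch/DETI algorithm: the slot granularity, the EMA smoothing factor and the pre-heat threshold are all assumptions.

```python
# Weekly hot-water usage profile: one small counter per hour-of-week,
# updated with an integer-only exponential moving average (alpha = 1/8).
# All constants are illustrative assumptions.

SLOTS = 7 * 24  # one slot per hour of the week

def make_profile():
    return [0] * SLOTS

def record_draw(profile, slot, litres):
    # p += (x - p) / 8, done with a shift so no division/float is needed;
    # values are clamped to fit an 8-bit counter.
    profile[slot] += (min(litres, 255) - profile[slot]) >> 3

def should_preheat(profile, next_slot, threshold=4):
    # Predict a draw-off (and thus bring heating forward) whenever the
    # learned profile for the upcoming slot exceeds the threshold.
    return profile[next_slot] >= threshold
```

Slots with no draws decay towards zero through the same update, so the profile slowly forgets habits that change, which is the kind of autonomous adaptation the abstract describes.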

Resumo:

Portugal's high dependence on foreign energy sources (mainly fossil fuels), together with the international commitments it has assumed, the national energy-policy strategy, and resource-sustainability and climate-change issues, inevitably forces the country to invest in its energy self-sufficiency. Under the 20/20/20 Strategy defined by the European Union, 60% of Portugal's total electricity consumption must come from renewable energy sources by 2020. Wind energy is currently a major source of electricity generation in Portugal, supplying about 23% of the total national electricity consumption in 2013. The National Energy Strategy 2020 (ENE2020), which aims to ensure national compliance with the European 20/20/20 Strategy, states that about half of this 60% target will be provided by wind energy. This work aims to implement and optimise a numerical weather prediction model for the simulation and modelling of the wind energy resource in Portugal, in both offshore and onshore areas. The model optimisation consisted in determining which initial and boundary conditions and which planetary-boundary-layer physical parameterisations provide the wind power flux (or energy density), wind speed and wind direction simulations closest to in situ wind measurements. Specifically for offshore areas, it is also intended to evaluate whether the optimised model can produce power flux, wind speed and direction simulations more consistent with in situ measurements than the wind measurements collected by satellites. This work also aims to study and analyse possible impacts that anthropogenic climate change may have on the future wind energy resource in Europe.
The results show that, among all the forcing databases currently available to drive numerical weather prediction models, the ECMWF ERA-Interim reanalysis allows the wind power flux, wind speed and direction simulations most consistent with in situ wind measurements. It was also found that the Pleim-Xiu and ACM2 planetary-boundary-layer parameterisations showed the best performance for these quantities. The model optimisation allowed a significant reduction of the simulation errors and, specifically for offshore areas, yielded wind power flux, wind speed and direction simulations more consistent with in situ measurements than the data obtained from satellites, which is a very valuable achievement. This work also revealed that future anthropogenic climate change can negatively impact the future European wind energy resource, owing to a tendency towards reduced wind speeds, especially by the end of the current century and under stronger radiative-forcing conditions.
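Two quantities recur throughout the abstract above: the wind power flux (energy density), which for air of density rho and wind speed v is (1/2) rho v^3 per unit area, and the agreement between simulated and in situ series, typically summarised by bias and RMSE. A minimal sketch of both, assuming a constant standard air density (the thesis verification will have used its own conventions):

```python
import math

RHO_AIR = 1.225  # kg m^-3; standard air density, assumed constant here

def power_flux(v):
    # Wind power flux through a unit area, in W/m^2: 0.5 * rho * v^3.
    return 0.5 * RHO_AIR * v ** 3

def bias(sim, obs):
    # Mean error of simulated values against observations.
    return sum(s - o for s, o in zip(sim, obs)) / len(obs)

def rmse(sim, obs):
    # Root-mean-square error of simulated values against observations.
    return math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))
```

The cubic dependence on wind speed is why small simulation errors in wind speed translate into large errors in the energy resource, and why the optimisation of the model's forcing and PBL physics matters so much.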

Resumo:

Ionic liquids are a class of solvents that, owing to their unique properties, have been proposed in recent years as alternatives to some hazardous volatile organic compounds. They are already used in industry, where the incorporation of these non-volatile and often liquid solvents has improved several processes. However, even though ionic liquids cannot contribute to air pollution, given their negligible vapour pressures, they can be dispersed through aquatic streams and thereby contaminate the environment. Therefore, the main goals of this work are to study the mutual solubilities between water and different ionic liquids, in order to assess their environmental impact, and to propose effective methods to remove and, whenever possible, recover ionic liquids from aqueous media. The liquid-liquid phase behaviour of different ionic liquids and water was evaluated in the temperature range between (288.15 and 318.15) K; for higher-melting ionic liquids a narrower temperature range was studied. The gathered data allowed a deep understanding of the structural effects of the ionic liquid, namely the cation core, isomerism, symmetry, cation alkyl-chain length and anion nature, on the mutual solubilities (saturation values) with water. The experimental data were also supported by the COnductor-like Screening MOdel for Real Solvents (COSMO-RS), and, for some more specific systems, molecular dynamics simulations were employed for a better comprehension of these systems at the molecular level.
In order to remove and recover ionic liquids from aqueous solutions, two different methods were studied: one based on aqueous biphasic systems, which allowed an almost complete recovery of hydrophilic ionic liquids (those completely miscible with water at temperatures close to room temperature) through the addition of strong salting-out agents (Al2(SO4)3 or AlK(SO4)2); and another based on the adsorption of several ionic liquids onto commercial activated carbon. The first approach, in addition to removing ionic liquids from aqueous solutions, also makes it possible to recover the ionic liquid and to recycle the remaining solution. In the adsorption process, only the removal of the ionic liquid from aqueous solution was attempted; nevertheless, a broad understanding of the structural effects of the ionic liquid on the adsorption process was attained, and the adsorption of hydrophilic ionic liquids was further improved by the addition of an inorganic salt (Na2SO4). Even so, a recovery process that allows the reuse of the ionic liquid is still required for the development of sustainable processes.
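Saturation solubilities measured over a narrow temperature range, like the (288.15 to 318.15) K window above, are commonly correlated with a van't Hoff-type expression, ln x = A + B/T, whose slope is related to the enthalpy of dissolution. The sketch below fits that two-parameter form by least squares; it is a generic correlation exercise with synthetic numbers, not the actual thesis data or its thermodynamic analysis.

```python
import math

def fit_vant_hoff(T, x):
    # Least-squares fit of ln x = A + B / T to saturation mole fractions x
    # measured at temperatures T (in K).
    u = [1.0 / t for t in T]
    v = [math.log(xi) for xi in x]
    n = len(T)
    um, vm = sum(u) / n, sum(v) / n
    B = (sum((ui - um) * (vi - vm) for ui, vi in zip(u, v))
         / sum((ui - um) ** 2 for ui in u))
    A = vm - B * um
    return A, B

def solubility(A, B, T):
    # Interpolated saturation mole fraction at temperature T.
    return math.exp(A + B / T)
```

Fitting each cation/anion family separately is one way the structural trends (alkyl-chain length, anion nature) discussed above become visible as systematic shifts in A and B.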

Resumo:

For the past decades, reducing the emission of harmful gases released during the combustion of fossil fuels has been a worldwide concern. This goal has been addressed through the reduction of sulfur-containing compounds and through the replacement of fossil fuels by biofuels, such as bioethanol, produced on a large scale from biomass. For this purpose, a new class of solvents, the ionic liquids (ILs), has been applied, aiming at developing new processes and at replacing common organic solvents in current processes. ILs can be formed by a very large number of different combinations of cations and anions, which confer unique and desirable properties on them. The ability to fine-tune the properties of ILs to meet the requirements of a specific application, by combining different cations and anions, is the most relevant aspect rendering ILs so attractive to researchers. Nonetheless, given the huge number of possible ion combinations, cheap predictive approaches are required to anticipate how a given IL will behave in a given situation. Molecular dynamics (MD) simulation is a statistical-mechanics computational approach, based on Newton's equations of motion, which can be used to study macroscopic systems at the atomic level through the prediction of their properties and other structural information. MD simulations have been extensively applied to ILs. The slow dynamics associated with ILs constitutes a challenge for their correct description, requiring improvements and developments of existing force fields, as well as larger computational efforts (longer simulation times). The present document reports studies based on MD simulations devoted to disclosing the interaction mechanisms established by ILs in systems representative of fuel and biofuel streams and of the biomass pre-treatment process.
Hence, MD simulations were used to evaluate different systems composed of ILs and thiophene, benzene, water, ethanol and glucose molecules. For the latter, a study was carried out to assess the ability of a recently proposed force field (GROMOS 56ACARBO) to reproduce the dynamic behaviour of such molecules in aqueous solution. The results reported here reveal that the interactions established by ILs depend on the individual characteristics of each IL. In general, the polar character of an IL largely determines its propensity to interact with the other molecules. Although the advantages of using MD simulations are unquestionable, it is necessary to recognise the need for improvements and developments of force fields, not only for a successful description of ILs but also for other relevant compounds such as carbohydrates.
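At the core of every MD package lies a time integrator for Newton's equations of motion. A minimal sketch of the standard velocity Verlet scheme is shown below for a single particle in a harmonic potential; real IL simulations apply exactly this kind of step to thousands of atoms with full force fields, thermostats and barostats, so this toy only illustrates the integration machinery the abstract refers to.

```python
# Velocity Verlet integration for one particle (toy example, not an MD code):
#   x(t+dt) = x + v*dt + (f/2m)*dt^2
#   v(t+dt) = v + (f_old + f_new)/(2m) * dt
# The force is recomputed once per step, as in production MD engines.

def velocity_verlet(x, v, force, m, dt, steps):
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / m) * dt * dt
        f_new = force(x)
        v += 0.5 * (f + f_new) / m * dt
        f = f_new
    return x, v
```

The scheme is time-reversible and conserves energy well over long runs, which is why it (or the closely related leap-frog variant) underlies most MD engines; the "slow dynamics" problem of ILs is not the integrator but the very long simulated times needed for their properties to converge.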