832 results for Route optimization


Relevance: 10.00%

Abstract:

Over the last few decades, a large number of processes have been described in terms of complex networks. Complex network theory has been used successfully to describe, model and characterise natural, artificial and social systems, such as ecosystems, protein-protein interactions, the Internet and the WWW, and even interpersonal relations in society. In this doctoral thesis we present several models of interacting agents on complex networks. We begin with a brief historical introduction (Chapter 1), followed by some basic notions of complex networks (Chapter 2) and by the works and models most relevant to this thesis (Chapter 3). In Chapter 4 we present the study of an opinion-dynamics model in which consensus is sought among the agents of a population, followed by a study of the evolution of interacting agents in a spatially defined branching process (Chapter 5). In Chapter 6 we present a model for the optimisation of flows on a network and a study of the emergence of scale-free networks from an optimisation process. Finally, in Chapter 7, we present our conclusions and future perspectives.
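As an illustration of the kind of agent-based simulation referred to in Chapter 4, the sketch below (in Python) runs a generic voter-style opinion dynamics on a random graph until consensus; it is only a minimal, assumed example, not the specific model studied in the thesis, and the network size, update rule and stopping criterion are arbitrary choices.

    import random

    # Generic voter-model-style opinion dynamics on a random graph
    # (illustrative example only, not the model of the thesis).
    N, P_EDGE, MAX_STEPS = 200, 0.05, 100_000

    # Build an Erdos-Renyi-like undirected graph as an adjacency list.
    adj = {i: set() for i in range(N)}
    for i in range(N):
        for j in range(i + 1, N):
            if random.random() < P_EDGE:
                adj[i].add(j)
                adj[j].add(i)

    opinion = {i: random.choice([0, 1]) for i in range(N)}  # binary opinions

    for step in range(MAX_STEPS):
        i = random.randrange(N)
        if adj[i]:  # copy the opinion of a randomly chosen neighbour
            opinion[i] = opinion[random.choice(sorted(adj[i]))]
        if len(set(opinion.values())) == 1:  # consensus reached
            print(f"consensus on opinion {opinion[0]} after {step} updates")
            break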

Relevance: 10.00%

Abstract:

This thesis presents a study on the economic optimisation of wind farms, with the aim of obtaining an algorithm for economic optimisation based on the cost of the energy produced. A multidisciplinary approach was adopted. First, the main technologies and the different architectures used in wind farms are presented, together with their operation and management schemes. The necessary variables are identified and a sizing model for calculating the cost of the energy produced is presented, with emphasis on onshore installations connected to the electricity distribution network. A rigorous analysis of the characteristics of the wind turbine topologies available on the market is carried out, and the operation of a wind farm is simulated in order to test the validity of the models developed. An algorithm is also implemented to obtain an optimised response for the economic life cycle of the wind farm under study. The proposed approach involves algorithms for the optimisation of the production cost with multiple objective functions, based on a mathematical description of electricity production. Linear optimisation models were developed that link the economic cost to electricity production, also taking into account CO2 emissions within energy-policy instruments for wind energy. Expressions are proposed for calculating the cost of energy with non-conventional variables, namely the variable production of the wind farm, the operating factor and the overall system efficiency coefficient. For the last two, the impact of the prevailing wind distribution on the wind energy conversion system is also analysed. The results obtained with the proposed algorithms are shown to be similar to those obtained with other numerical methods already published in the scientific literature, and the economic optimisation algorithm is significantly influenced by the values obtained for the coefficients in question. Finally, it is shown that the proposed algorithm (LCOEwso) is useful for sizing wind farms and for estimating their capital and O&M costs when information is incomplete or the project is still at the design stage. In this sense, the contribution of this thesis is a decision-support tool for managers, investors or public agents considering the deployment of a wind farm.
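For reference, a standard textbook form of the levelised cost of energy (LCOE), on which cost models of this kind are commonly built, is sketched below; the symbols and the capital-recovery-factor expression are the conventional ones and are assumptions here, not the thesis's LCOEwso formulation.

    \[
    \mathrm{LCOE} \;=\; \frac{\mathrm{CAPEX}\cdot \mathrm{CRF} + C_{\mathrm{O\&M}}}{E_{\mathrm{annual}}},
    \qquad
    \mathrm{CRF} \;=\; \frac{r\,(1+r)^{n}}{(1+r)^{n}-1},
    \qquad
    E_{\mathrm{annual}} \;=\; 8760 \cdot P_{\mathrm{rated}} \cdot c_f \cdot \eta,
    \]

where $r$ is the discount rate, $n$ the project lifetime in years, $c_f$ the capacity (operating) factor and $\eta$ an overall system efficiency coefficient.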

Relevance: 10.00%

Abstract:

The assessment of the ecological status of lotic freshwater bodies, based on stringent classification criteria, has been defined by the Water Framework Directive (WFD) as a result of the implementation and optimization of methodologies that integrate physico-chemical, biological and hydromorphological parameters. It is recognized that the application of this methodology is not easy, because it requires deep technical and scientific knowledge, is time-consuming to apply and involves high financial costs. Thus, the main objective of this study was the development of cheaper and faster complementary methodologies that may contribute to the technical application of the classification criteria defined by the WFD while achieving the same final evaluation results. In order to achieve this main goal, the river Mau, a small mountain river subjected to different stressors (e.g., metals, pesticides), was established as the main sampling area. This thesis reviewed the historical development of various biotic indices and their application in assessing water quality, especially highlighting the new paradigm defined by the WFD and the corresponding actions developed for the optimization and intercalibration of methodologies for evaluating the final state of water bodies. The ecological spatiotemporal characterization of the river Mau focused on the application of the WFD methodology, using at this stage only macroinvertebrates collected during four seasons. Results were compared with historical data from the last three years and demonstrated that the river is in good condition. However, the ecological quality decreased at certain locations, indicating that organisms were subjected to some type of disturbance. As the ecological quality can be conditioned by pulses of contamination from the sediments under adverse environmental conditions, assays were performed with elutriates obtained from sediments collected near the Braçal-Palhal mining complex. Results showed that this method was effective in assessing the state of contamination, which may be important for prioritizing/scoring critical areas within potentially impacted river ecosystems using the WFD methodology. However, this methodology requires the collection of sediment, which can promote the modification and/or loss of contaminants. To solve this potential problem, we developed a new methodology to obtain similar results. For this, we used a benthic microalga, belonging to the Portuguese flora, that is sensitive to organic pollution and metals. This methodology was optimized for in situ application by immobilizing the diatom in calcium alginate beads. The results showed that its sensitivity and normal growth rate are similar to the data obtained with free diatom cells. This new methodology allowed a very quick response on the degree of contamination of a site, providing a methodology complementary to the WFD.

Relevance: 10.00%

Abstract:

The development of a compact gamma camera with high spatial resolution is of great interest in Nuclear Medicine as a means to increase the sensitivity of scintigraphy exams and thus allow the early detection of small tumours. Following the introduction of the wavelength-shifting fibre (WSF) gamma camera by Soares et al. and the evolution of photodiodes into highly sensitive silicon photomultipliers (SiPMs), this thesis explores the development of a WSF gamma camera using SiPMs to obtain the position information of scintillation events in a continuous CsI(Na) crystal. The design is highly flexible, allowing the coverage of different areas and the development of compact cameras with very small dead areas at the edges. After initial studies which confirmed the feasibility of applying SiPMs, a 5 × 5 cm2 prototype was assembled and tested at room temperature, with an active field of view of 10 × 10 mm2. Calibration and characterisation of the intrinsic properties of this prototype were done using 57Co, while extrinsic measurements were performed using a high-resolution parallel-hole collimator and 99mTc. In addition, a small mouse injected with a radiopharmaceutical was imaged with the developed prototype. Results confirm the great potential of SiPMs when applied in a WSF gamma camera, achieving spatial resolution performance superior to that of the traditional Anger camera. Furthermore, performance can be improved by optimising the experimental conditions, in order to minimise and control the undesirable effects of thermal noise and of the non-uniformity of response of multiple SiPMs. The development and partial characterisation of a larger, 10 × 10 cm2 SiPM WSF gamma camera for clinical application are also presented.
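A common, generic way to estimate the position of a scintillation event from an array of photodetector amplitudes is a simple centroid (Anger-logic-style) calculation; the Python sketch below is only an assumed illustration with made-up channel coordinates and is not necessarily the reconstruction used in the camera described above.

    def centroid_position(signals, x_coords, y_coords):
        """Centroid (Anger-logic-style) estimate of an event position
        from per-channel photodetector amplitudes (illustrative only)."""
        total = sum(signals)
        if total == 0:
            raise ValueError("no light collected")
        x = sum(s * xc for s, xc in zip(signals, x_coords)) / total
        y = sum(s * yc for s, yc in zip(signals, y_coords)) / total
        return x, y

    # Example: four channels at the corners of a 10 mm x 10 mm field of view.
    print(centroid_position([10, 30, 30, 90], [0, 10, 0, 10], [0, 0, 10, 10]))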

Relevance: 10.00%

Abstract:

The reaction between magnesium oxide (MgO) and monoammonium phosphate (MAP) at room temperature yields magnesium phosphate cements, materials characterised by their fast setting and by the excellent mechanical properties acquired at an early stage. The final properties depend essentially on the cement composition (magnesia:phosphate molar ratio and use of setting retarders), but they are also influenced by the reactivity of the magnesia used. In this work, the reaction was characterised by studying the influence of the MgO:MAP molar ratio (ranging from 1:1 to 8:1), of the presence and content of retarding additives (boric acid, citric acid and sodium tripolyphosphate) and of the variation of the specific surface area of the magnesia (achieved by calcining the oxide) on the setting time, the maximum temperature reached and the final crystalline phases formed. The setting reaction can be compared to the hydration of Portland cement, with four stages (initial reaction, induction, acceleration and deceleration), the difference being that these stages occur at much higher rates in magnesium phosphate cements. This study was carried out using impedance spectroscopy, accompanied by monitoring of the temperature evolution over the reaction time and, by stopping the reaction, identification of the crystalline phases formed. The investigation of the reaction mechanism was complemented by observation of the microstructure of the cements formed and led to the conclusion that the origin of the magnesia used affects neither the reaction nor the properties of the final cement. Response surface methodology was used to study and optimise the final characteristics of the product and proved to be a very effective method. For the study of the variation of the specific surface area of the magnesia with the calcination conditions (temperature and dwell time), a factorial design of experiments was used, yielding a mathematical model that relates the specific surface area of the magnesia to the calcination conditions. The final properties of the cements (compressive strength and water absorption) were studied using a simplex design of experiments, which allowed models to be found relating each property under study to the values of the variables (MgO:MAP molar ratio, specific surface area of the magnesia and amount of boric acid). These models can be used to formulate compositions and produce cements with specific final properties.
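As a sketch of the response-surface step described above, the Python code below fits a second-order polynomial model of the specific surface area as a function of calcination temperature and dwell time by ordinary least squares; the data points, variable ranges and model form are placeholder assumptions, not measurements or results from this work.

    import numpy as np

    # Hypothetical calcination conditions and responses:
    # temperature (deg C), dwell time (h), specific surface area (m2/g).
    T   = np.array([800.0, 800.0, 1000.0, 1000.0, 900.0, 900.0, 900.0])
    t   = np.array([1.0,   4.0,   1.0,    4.0,    2.5,   2.5,   2.5])
    ssa = np.array([32.0,  28.0,  12.0,   9.0,    18.0,  19.0,  17.5])

    # Second-order response surface: ssa ~ b0 + b1*T + b2*t + b3*T*t + b4*T^2 + b5*t^2
    X = np.column_stack([np.ones_like(T), T, t, T * t, T**2, t**2])
    coeffs, *_ = np.linalg.lstsq(X, ssa, rcond=None)

    def predict(temp, dwell):
        return np.dot([1.0, temp, dwell, temp * dwell, temp**2, dwell**2], coeffs)

    print(predict(950.0, 2.0))  # predicted surface area at an untested condition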

Relevance: 10.00%

Abstract:

Over the last few decades, micro injection moulding (μ-moulding) of thermoplastics has gained a prominent place in the market for electronic equipment and for a wide range of mechanical components. However, as the component size decreases, the assumptions generally accepted in conventional injection moulding are no longer valid for describing the rheological and thermomechanical behaviour of the polymer in the micro impression. Therefore, understanding the dynamic behaviour of the polymer at the micro scale, as well as its characterisation, analysis and the prediction of its mechanical properties, requires broader investigation. The main objective of this doctoral programme is a better understanding of the physical phenomena intrinsic to the μ-injection moulding process. To meet this objective, a parametric study of the μ-injection moulding process was carried out, and its results were compared with those obtained by numerical simulation. Dynamic mechanical characterisation of the μ-parts was performed in order to collect the data needed to predict their long-term mechanical performance. Finally, after calibration of the mathematical model of the polymer, structural analyses were carried out in order to predict the long-term mechanical performance of the μ-parts. It was found that the mechanical performance of the μ-parts can be significantly affected by residual stresses of mechanical and thermal origin. Since the latter result from the manufacturing process and the processing conditions, they must be taken into account when predicting the mechanical performance and service life of the μ-mouldings.

Relevance: 10.00%

Abstract:

The constant evolution of new technologies supporting the way our devices connect, as well as the way we use different online capabilities and services, has created an unprecedented set of new challenges that motivate the development of a recent research area known as the Future Internet. In this new research area, new architectural aspects are being developed which, by restructuring the underlying core components that make up the Internet, evolve it in a way that is able not only to face these new challenges but also to prepare it for the challenges of tomorrow. Key aspects within this set of challenges are heterogeneous network environments composed of different types of access networks, the increasing shift towards peer-to-peer (P2P) traffic as the most used type of traffic on the Internet, the orchestration of Internet of Things (IoT) scenarios exploiting Machine-to-Machine (M2M) interaction mechanisms, and the use of Information-Centric Networking (ICN) mechanisms. This thesis presents a new architecture capable of simultaneously facing these challenges, evolving the connectivity procedures and the entities involved through the addition of a middleware layer that acts as an advanced control management mechanism. This control management mechanism brings high-level entities (such as services, applications, mobility management entities, routing operations, etc.) closer to the lower-layer components (for example, link layers, sensors and actuators), allowing a joint optimisation of the underlying connectivity procedures. The results obtained highlight not only the flexibility of the mechanisms that make up the architecture, but also their ability to provide performance gains when compared with other purpose-specific solutions, while supporting a wider range of scenarios and applications.

Relevance: 10.00%

Abstract:

In high-speed interconnected digital systems, assessing signal integrity through the simulation of physical (transistor-level) models is computationally expensive (for example, in terms of CPU run time and memory storage) and requires the disclosure of physical details of the internal structure of the device. This scenario increases the interest in the alternative of behavioural modelling, which describes the operating characteristics of the device from the observation of the input/output (I/O) electrical signals. The I/O interfaces of memory chips, which contribute the largest computational load, perform complex functions and therefore include a large number of pins. In particular, output buffers inevitably distort the signals due to their dynamic and nonlinear behaviour; they are therefore the critical point in integrated circuits (ICs) for guaranteeing reliable transmission in high-speed digital communications. In this doctoral work, previously neglected nonlinear dynamic effects of the output buffer are studied and modelled efficiently in order to reduce the complexity of parametric black-box modelling, thereby improving the standard IBIS model. This is achieved by following a semi-physical approach that combines the formulation features of black-box models, the analysis of the electrical signals observed at the I/O and properties of the physical structure of the buffer under practical operating conditions. This approach leads to a physically inspired behavioural model construction process that overcomes the problems of previous approaches, optimising the resources used in the different stages of model generation (namely characterisation, formulation, extraction and implementation) to simulate the nonlinear dynamic behaviour of the buffer. Consequently, the most significant contribution of this thesis is the development of a new two-port analogue behavioural model suitable for overclocking simulation, which is of particular interest for the most recent uses of high-data-rate memory I/O interfaces. The effectiveness and accuracy of the behavioural models developed and implemented are qualitatively and quantitatively evaluated by comparing the numerical results of the extraction of their functions and of transient simulation with the corresponding state-of-the-art reference model, IBIS.
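For context, parametric black-box macromodels of digital output buffers are often written in the literature as a two-piece weighted combination of pull-up and pull-down submodels; the generic form below is given as background on that modelling style and is not the specific two-port model proposed in this thesis.

    \[
    i_o(t) \;=\; w_H(t)\, i_H\big(v_o(t), \tfrac{d}{dt}\big) \;+\; w_L(t)\, i_L\big(v_o(t), \tfrac{d}{dt}\big),
    \]

where $i_H$ and $i_L$ are nonlinear dynamic submodels of the buffer driving a logic high and a logic low, respectively, $v_o$ is the output-port voltage and $w_H$, $w_L$ are switching weight signals estimated from the observed output waveforms.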

Relevance: 10.00%

Abstract:

Alzheimer’s disease is a chronic progressive neurodegenerative disease and the most common form of dementia (an estimated 50−60% of all cases), associated with loss of memory (in particular episodic memory), cognitive decline, and behavioural and physical disability, ultimately leading to death. Alzheimer’s disease is a complex disease, mostly occurring sporadically with no apparent inheritance, and age is the main risk factor. The production and accumulation of amyloid-beta peptide in the central nervous system is a key event in the development of Alzheimer’s disease. This project is devoted to the synthesis of amyloid-beta ligands, fluorophores and blood-brain barrier transporters for the diagnosis and therapy of Alzheimer’s disease. Different amyloid-beta ligands will be synthesized, their ability to interact with amyloid-beta plaques will be studied with nuclear magnetic resonance techniques, and a process of lead optimization will be performed. Many natural and synthetic compounds able to interact as amyloid-beta ligands have been identified. Among them is a set of small molecules in which aromatic moieties seem to play a key role in inhibiting amyloid-beta aggregation, in particular heteroaromatic polycyclic compounds such as tetracyclines. Nevertheless, tetracyclines suffer from chemical instability and low water solubility, and possess, in this context, undesired antibacterial activity. In order to overcome these limitations, one of our goals is to synthesize tetracycline analogues bearing a polycyclic structure with improved chemical stability and water solubility, possibly lacking antibacterial activity but conserving the ability to interact with amyloid-beta peptides. Known tetracyclines have in common a fourth, non-aromatic ring with different functionalisations. We aim to synthesize derivatives in which this ring is represented by a sugar moiety, thus bearing different derivatisable positions, or derivatives in which the number of fused rings is increased or decreased. In order to generate a potential drug-tool candidate, these molecules should also possess the correct physicochemical characteristics. The glycidic moiety, not being directly involved in the binding, allows further derivatisations, such as conjugation to other molecular entities (nanoparticles, polymeric supports, etc.) and functionalisation with chemical groups able to modulate the hydro/lipophilicity. In order to be useful, such compounds should exert their action within the brain; they therefore have to be able to cross the blood-brain barrier and to be detectable in some way for diagnostic purposes.

Relevance: 10.00%

Abstract:

Estrogens, such as 17β-estradiol (E2), are essential for the normal growth and differentiation of the mammary gland. There are two estrogen receptors (ERs), ERα and ERβ, which are ligand-activated transcription factors. ERα stimulates proliferation and is the single most powerful predictor of breast cancer prognosis; since 70% of breast cancers express ERα, strategies to block this receptor are the primary breast cancer treatment. Unlike ERα, the role of ERβ in breast cancer and its potential as an alternative therapeutic target remains controversial, mainly due to the lack of correlation between results obtained in vitro and epidemiological studies. The aim of this thesis was to increase our understanding of the molecular and cellular mechanisms of estrogen signaling in normal and cancerous cells, in different cellular contexts and with a focus on ERβ. In Paper I we characterized the effect of the flavone PD098059, a commonly used MEK1 inhibitor, on the activation of transcription by ERα and ERβ. We found that the estrogenic effect of PD098059 is dose dependent at concentrations ranging from 1 to 10 μM and that activation of transcription by ER is suppressed by the inhibitory effect of PD098059 on MEK1 at concentrations above 50 μM. In agreement with its flavone nature, PD098059 had a much stronger effect on ERβ than on ERα transcriptional activity. Therefore, the use of this compound for the study of signalling events in cells expressing ER should be carefully considered. In Paper II we assessed the effect of ERβ agonists in vivo and, administered under different conditions, in vitro. Under basal conditions, ERβ induced apoptosis; in vivo, however, ERβ agonists stimulated proliferation and inhibited apoptosis. The in vivo effects were reproduced in culture by activation of the MAPK/ERK1/2 pathway with epidermal growth factor or basement membrane extract. In addition, insulin signalling and PI3-K/AKT activation were necessary for the stimulation of proliferation. These results suggest that the cellular context modulates ERβ activity. The Manuscript presents preliminary work aimed at setting up a methodological strategy to isolate ERs and to identify interacting proteins that, in different cellular contexts, could modulate the biphasic effects of ERβ on cell growth. In conclusion, the studies presented in this thesis help to clarify the apparently contradictory information regarding ERβ function in normal and cancerous mammary epithelium and suggest that the cellular context should be considered when ERβ effects are studied.

Relevance: 10.00%

Abstract:

This investigation focused on the development, testing and validation of methodologies for mercury fractionation and speciation in soil and sediment. After an exhaustive review of the literature, several methods were chosen and tested on well-characterised soil and sediment samples. Sequential extraction procedures that divide mercury into fractions according to their mobility and potential availability in the environment were investigated. The efficiency of different solvents for the fractionation of mercury was evaluated, as well as the adequacy of different analytical instruments for the quantification of mercury in the extracts. Kinetic experiments to establish the equilibrium time for mercury release from soil or sediment were also performed. It was found that, in the studied areas, only a very small percentage of mercury is present as mobile species, that mobility is associated with higher aluminium and manganese contents, and that high contents of organic matter and sulfur result in mercury tightly bound to the matrix. Sandy soils tend to release mercury faster than clayey soils, and therefore the texture of the soil or sediment has a strong influence on the mobility of mercury. It also became clear that analytical techniques for the quantification of mercury need to be further developed, with lower quantification limits, particularly for the less concentrated fractions: the water-soluble and exchangeable ones. Although the results provided a better understanding of the distribution of mercury in the sample, the complexity of the procedure limits its applicability and robustness. A proficiency-testing scheme targeting total mercury determination in soil, sediment, fish and human hair was organised in order to evaluate the consistency of results obtained by different laboratories applying their routine methods to the same test samples. Additionally, single extractions with 1 mol L-1 ammonium acetate solution, 0.1 mol L-1 HCl and 0.1 mol L-1 CaCl2, as well as extraction of the organometallic fraction, were proposed for soil; the latter was also suggested for sediment and fish. This study was important to update the knowledge on the analytical techniques being used for mercury quantification, the associated problems and sources of error, to improve and standardize mercury extraction techniques, and to implement effective strategies for quality control in mercury determination. A different, "non-chemical" method for mercury species identification was developed, optimised and validated, based on the thermo-desorption of the different mercury species. Compared with conventional extraction procedures, this method has several advantages: it requires little to no sample treatment; a complete identification of the species present is obtained in less than two hours; mercury losses are almost negligible; it can be considered "clean", as no residues are produced; and the worldwide comparison of results is easier and more reliable, an important step towards the validation of the method. Therefore, the main deliverables of this PhD thesis are an improved knowledge of analytical procedures for the identification and quantification of mercury species in soils and sediments, as well as a better understanding of the factors controlling the behaviour of mercury in these matrices.

Relevance: 10.00%

Abstract:

During the last few decades, Metal-Organic Frameworks (MOFs), also known as Coordination Polymers, have attracted worldwide research attention due to their fascinating architectures and unique properties. These multidimensional materials have potential applications in distinct areas: gas storage and separation, ion exchange, catalysis, magnetism and optical sensors, among several others. The MOF research group at the University of Aveiro has prepared MOFs from the combination of phosphonate organic primary building units (PBUs) with, mainly, lanthanides. This thesis documents the latest findings in this area, involving the synthesis of multidimensional MOFs based on four di- or tripodal phosphonate ligands. The organic PBUs were designed and prepared by selecting and optimizing the best reaction conditions and synthetic routes. The self-assembly between phosphonate PBUs and rare-earth cations led to the formation of several 1D, 2D and 3D families of isotypical MOFs. The preparation of these materials was achieved using distinct synthetic approaches: hydro(solvo)thermal, microwave- and ultrasound-assisted, one-pot and ionothermal synthesis. The selection of the organic PBUs proved to have an important role in the final architectures: while flexible phosphonate ligands afforded 1D, 2D and dense 3D structures, a large and rigid organic PBU isolated a porous 3D MOF. The crystal structures of these materials were successfully unveiled by powder or single-crystal X-ray diffraction. All multidimensional MOFs were characterized by standard solid-state techniques (FT-IR, electron microscopy (SEM and EDS), solid-state NMR, elemental and thermogravimetric analysis). Some MOF materials exhibited remarkable thermal stability and robustness up to ca. 400 ºC. The intrinsic properties of some MOFs were investigated. Photoluminescence studies revealed that the selected organic PBUs are suitable sensitizers of Tb3+, leading to the isolation of intensely green-emitting materials. The suppression of the O−H quenchers by deuteration or dehydration processes substantially improves the photoluminescence of the optically active Eu3+-based materials. Some MOF materials exhibited high heterogeneous catalytic activity and excellent regioselectivity in the ring-opening reaction of styrene oxide (PhEtO) with methanol (100% conversion of PhEtO after 30 min at 55 ºC). The porous MOF material was employed in gas separation processes, showing the ability to separate propane from propylene. The ion-exchanged form of this material (containing K+ cations in its network) exhibited a higher affinity for CO2, being capable of separating acetylene from this environmentally unfriendly gas.

Relevance: 10.00%

Abstract:

Viscoelastic treatments are among the most efficient forms of passive damping, particularly in the case of thin and light structures. In this type of treatment, part of the strain energy generated in the viscoelastic material is dissipated to the surroundings in the form of heat. A layer of viscoelastic material is applied to a structure in an unconstrained or constrained configuration, the latter proving to be the most efficient arrangement. This is due to the fact that the relative movement of the host and constraining layers causes the viscoelastic material to be subjected to a relatively high strain energy. There are studies, however, claiming that partial application of the viscoelastic material is just as efficient in terms of economic costs or any other form of treatment application costs. The application of patches of material in specific, selected areas of the structure, thus minimising the amount of damping material, results in an equally efficient treatment. Since the damping mechanism of a viscoelastic material is based on the dissipation of part of the strain energy, the efficiency of the partial treatment can be correlated with the modal strain energy of the structure. Even though the results obtained with this approach in various studies are considered very satisfactory, an optimisation procedure is deemed necessary. In order to obtain optimum solutions, however, time-consuming numerical simulations are required. The optimisation process for using the minimum amount of viscoelastic material is based on an evolutionary redesign of the geometry and on the calculation of the modal damping, making the procedure computationally costly. To mitigate this disadvantage, this study uses adaptive layerwise finite elements and applies Genetic Algorithms in the optimisation process.
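A minimal sketch of the optimisation loop described above is given below in Python: a genetic algorithm selects which candidate regions receive a viscoelastic patch, with a placeholder fitness that rewards covering regions of high modal strain energy while penalising the amount of material used; the strain-energy values, penalty weight and GA parameters are illustrative assumptions and do not come from this study (which evaluates modal damping with layerwise finite elements rather than a lookup table).

    import random

    # Hypothetical modal strain energy fraction of each candidate patch region.
    MSE = [0.18, 0.05, 0.22, 0.02, 0.30, 0.08, 0.10, 0.05]
    MATERIAL_PENALTY = 0.12  # assumed cost per patch applied

    def fitness(genome):
        # Reward captured modal strain energy, penalise material used.
        return sum(m for g, m in zip(genome, MSE) if g) - MATERIAL_PENALTY * sum(genome)

    def evolve(pop_size=30, generations=60, p_mut=0.05):
        pop = [[random.randint(0, 1) for _ in MSE] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]           # truncation selection
            children = []
            while len(children) < pop_size - len(parents):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, len(MSE))  # one-point crossover
                child = a[:cut] + b[cut:]
                children.append([1 - g if random.random() < p_mut else g for g in child])
            pop = parents + children
        return max(pop, key=fitness)

    print(evolve())  # patches tend to be placed only where MSE is high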

Relevance: 10.00%

Abstract:

In modern society, new devices, applications and technologies with sophisticated capabilities are converging on the same network infrastructure. Users are also increasingly demanding in their personal preferences and expectations, desiring Internet connectivity anytime and everywhere. These aspects have triggered many research efforts, since the current Internet is reaching a breaking point in trying to provide enough flexibility for users and profits for operators, while dealing with the complex requirements raised by this recent evolution. Fully aligned with future Internet research, many solutions have been proposed to enhance current Internet-based architectures and protocols so that they become context-aware, that is, dynamically adapted to changes in the information characterizing any network entity. In this sense, this Thesis proposes a new architecture that allows several networks with different characteristics to be created according to their context, on top of a single Wireless Mesh Network (WMN), whose infrastructure and protocols are very flexible and self-adaptable. More specifically, this Thesis models the context of users, which can span their security, cost and mobility preferences, their devices’ capabilities or their services’ quality requirements, in order to turn a WMN into a set of logical networks. Each logical network is configured to meet a set of user context needs (for instance, support for high mobility and low security). To implement this user-centric architecture, this Thesis uses network virtualization, which has often been advocated as a means to deploy independent network architectures and services towards the future Internet while allowing dynamic resource management. In this way, network virtualization can allow a flexible and programmable configuration of a WMN, so that it can be shared by multiple logical networks (or virtual networks, VNs). Moreover, the high level of isolation introduced by network virtualization can be used to differentiate the protocols and mechanisms of each context-aware VN. This architecture raises several challenges in controlling and managing the VNs on demand, in response to user and WMN dynamics. In this context, we target the mechanisms to: (i) discover and select the VN to assign to a user; and (ii) create, adapt and remove the VN topologies and routes. We also explore how the rate of variation of the user context requirements can be taken into account to improve performance and reduce the complexity of VN control and management. Finally, due to the scalability limitations of centralized control solutions, we propose a mechanism to distribute the control functionalities among the architectural entities, which can cooperate to control and manage the VNs in a distributed way.
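As an illustration of the VN discovery and selection step mentioned above, the Python sketch below scores each available virtual network against a user context profile (mobility, security, cost) and picks the best match; the attribute names, scoring rule and example profiles are hypothetical and are not taken from the thesis.

    # Hypothetical context attributes; values in 0..1 (higher = stronger need/support).
    ATTRS = ("mobility", "security", "low_cost")

    def match_score(user_ctx, vn_profile):
        # Penalise a VN for every attribute in which it falls short of the user's need.
        return -sum(max(0.0, user_ctx[a] - vn_profile[a]) for a in ATTRS)

    def select_vn(user_ctx, vns):
        return max(vns, key=lambda name: match_score(user_ctx, vns[name]))

    vns = {  # example logical networks on top of one WMN (made-up profiles)
        "VN-A": {"mobility": 0.9, "security": 0.3, "low_cost": 0.8},
        "VN-B": {"mobility": 0.2, "security": 0.9, "low_cost": 0.4},
    }
    user = {"mobility": 0.8, "security": 0.2, "low_cost": 0.6}
    print(select_vn(user, vns))  # -> "VN-A": it covers the user's high-mobility need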

Relevance: 10.00%

Abstract:

The expectations of citizens regarding Information Technologies (ITs) are increasing, as ITs have become an integral part of our society, serving all kinds of activities, whether professional, leisure, safety-critical or business. Hence, the limitations of traditional network designs in providing innovative and enhanced services and applications motivated a consensus to integrate all services over packet-switching infrastructures, using the Internet Protocol, so as to leverage flexible control and economic benefits in the Next Generation Networks (NGNs). However, the Internet is not capable of treating services differently, even though each service has its own requirements (e.g., Quality of Service, QoS). Therefore, the need for more evolved forms of communication has driven radical changes in architectural and layering designs, which demand appropriate solutions for service admission and network resource control. This Thesis addresses QoS and network control issues, aiming to improve overall control performance in current and future networks which classify services into classes. The Thesis is divided into three parts. In the first part, we propose two resource over-reservation algorithms, a Class-based bandwidth Over-Reservation (COR) and an Enhanced COR (ECOR). Over-reservation means reserving more bandwidth than a Class of Service (CoS) currently needs, so that the QoS reservation signalling rate is reduced. COR and ECOR allow over-reservation parameters for CoSs to be defined dynamically based on the resource conditions of the network interfaces; they aim to reduce QoS signalling and the related overhead without incurring CoS starvation or waste of bandwidth. ECOR differs from COR by additionally optimizing the minimization of control overhead. Further, we propose a centralized control mechanism called Advanced Centralization Architecture (ACA), which uses a single stateful Control Decision Point (CDP) that maintains an up-to-date view of the underlying network topology and the related link resource statistics in real time in order to control the overall network. It is very important to mention that, in this Thesis, we use multicast trees as the basis for session transport, not only for group communication purposes, but mainly to pin the packets of a session mapped to a tree so that they follow the desired tree. Our simulation results show a drastic reduction of QoS control signalling and the related overhead without QoS violations or waste of resources. In addition, we provide a general-purpose analytical model to assess the impact of various parameters (e.g., link capacity, session dynamics, etc.) that generally challenge resource over-provisioning control. In the second part of this Thesis, we propose a decentralized control mechanism called Advanced Class-based resource OverpRovisioning (ACOR), which aims to achieve better scalability than the ACA approach. ACOR enables multiple CDPs, distributed at the network edge, to cooperate and exchange appropriate control data (e.g., trees and bandwidth usage information) such that each CDP is able to maintain good knowledge of the network topology and the related link resource statistics in real time. From a scalability perspective, ACOR cooperation is selective, meaning that control information is exchanged dynamically only among the CDPs which are concerned (correlated). Moreover, synchronization is carried out through our proposed concept of Virtual Over-Provisioned Resource (VOPR), which is a share of the over-reservations of each interface allocated to each tree that uses the interface. Thus, each CDP can process several session requests over a tree without requiring synchronization between the correlated CDPs, as long as the VOPR of the tree is not exhausted. Analytical and simulation results demonstrate that aggregate over-reservation control in decentralized scenarios keeps signalling low without QoS violations or waste of resources. We also introduce a control signalling protocol called ACOR Protocol (ACOR-P) to support the centralized and decentralized designs in this Thesis. Further, we propose an Extended ACOR (E-ACOR), which aggregates the VOPRs of all trees originating at the same CDP, so that more session requests can be processed without synchronization than with ACOR. In addition, E-ACOR introduces a mechanism to efficiently track network congestion information, preventing unnecessary synchronization during congestion periods when VOPRs would be exhausted by every session request. The performance evaluation, through analytical and simulation results, proves the superiority of E-ACOR in minimizing overall control signalling overhead while keeping all the advantages of ACOR, that is, without incurring QoS violations or waste of resources. The last part of this Thesis includes the Survivable ACOR (SACOR) proposal to support stable operation of the QoS and network control mechanisms in case of failures and recoveries (e.g., of links and nodes). The performance results show flexible survivability, characterized by fast convergence time and differentiation of traffic re-routing under efficient resource utilization, i.e., without wasting bandwidth. In summary, the QoS and architectural control mechanisms proposed in this Thesis provide efficient and scalable support for key network control sub-systems (e.g., QoS and resource control, traffic engineering, multicasting, etc.) and thus allow the overall network control performance to be optimized.
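A minimal sketch of the class-based over-reservation idea is shown below in Python: a session is first admitted against the bandwidth already over-reserved for its class, and QoS signalling is triggered only when that residual is exhausted, at which point a new over-reservation (a multiple of the request, capped by the free link capacity) is made; the class names, over-reservation factor and admission policy are illustrative assumptions rather than the actual COR/ECOR algorithms.

    class Interface:
        """Per-interface, per-CoS bandwidth bookkeeping (illustrative only)."""
        def __init__(self, capacity_mbps):
            self.capacity = capacity_mbps
            self.reserved = {}  # CoS -> bandwidth reserved on the link
            self.used = {}      # CoS -> bandwidth consumed by admitted sessions

        def admit(self, cos, bw, over_factor=2.0):
            reserved = self.reserved.get(cos, 0.0)
            used = self.used.get(cos, 0.0)
            if used + bw <= reserved:            # fits in the over-reservation:
                self.used[cos] = used + bw       # no QoS signalling needed
                return "admitted (no signalling)"
            free = self.capacity - sum(self.reserved.values())
            extra = min(over_factor * bw, free)  # new over-reservation request
            if extra < bw:
                return "rejected (insufficient resources)"
            self.reserved[cos] = reserved + extra   # one signalling event
            self.used[cos] = used + bw
            return "admitted (signalling triggered)"

    link = Interface(100.0)
    print(link.admit("EF", 10.0))  # first request triggers signalling
    print(link.admit("EF", 5.0))   # served from the existing over-reservation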