856 results for "Algoritmos experimentais"


Relevance:

10.00%

Publisher:

Abstract:

Emerging market trends point towards position-based services, placing a new perspective on the way we obtain and exploit positioning information. On the one hand, innovations in information technology and wireless communication systems have enabled the development of numerous location-based applications such as vehicle navigation and tracking, sensor network applications, home automation, asset management, security and context-aware location services. On the other hand, wireless networks themselves may benefit from localization information to improve the performance of different network layers. Location-based routing, synchronization and interference cancellation are prime examples of applications where location information can be useful. Typical positioning solutions rely on the measurement and exploitation of distance-dependent signal metrics, such as the received signal strength, time of arrival or angle of arrival. They are cheaper and easier to implement than dedicated positioning systems based on fingerprinting, but at the cost of accuracy. Therefore, intelligent localization algorithms and signal processing techniques have to be applied to mitigate the lack of accuracy in distance estimates. Cooperation between nodes is used in cases where conventional positioning techniques do not perform well due to the lack of existing infrastructure or an obstructed indoor environment. The objective is to concentrate on hybrid architectures where some nodes have points of attachment to an infrastructure and are simultaneously interconnected via short-range ad hoc links. The availability of more capable handsets enables more innovative scenarios that take advantage of multiple radio access networks as well as peer-to-peer links for positioning. Link selection is used to optimize the tradeoff between the power consumption of participating nodes and the quality of target localization. The Geometric Dilution of Precision and the Cramér-Rao Lower Bound can be used as criteria for choosing the appropriate set of anchor nodes and corresponding measurements before attempting location estimation itself. This work analyzes the existing solutions for node selection to improve localization performance, and proposes a novel method based on utility functions. The proposed method is then extended to mobile and heterogeneous environments. Simulations have been carried out, as well as evaluation with real measurement data. In addition, some specific cases have been considered, such as localization in ill-conditioned scenarios and the use of negative information. The proposed approaches have been shown to enhance estimation accuracy while significantly reducing complexity, power consumption and signalling overhead.
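To make the anchor-selection criterion above concrete, here is a minimal sketch of GDOP-based link selection: among all k-subsets of candidate anchors, pick the one whose line-of-sight geometry minimizes the Geometric Dilution of Precision before running the position estimator. This is a generic illustration of the criterion, not the utility-function method proposed in the work; the helper names and the exhaustive search are assumptions for the example.

```python
import itertools
import numpy as np

def gdop(anchors, target):
    """GDOP for range measurements: H holds the unit line-of-sight vectors
    from the (approximate) target to each anchor; GDOP = sqrt(tr((H^T H)^-1))."""
    diff = anchors - target
    H = diff / np.linalg.norm(diff, axis=1, keepdims=True)
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

def select_anchors(anchors, target_guess, k):
    """Exhaustively pick the k-anchor subset with the lowest GDOP."""
    subsets = itertools.combinations(range(len(anchors)), k)
    return min(subsets, key=lambda idx: gdop(anchors[list(idx)], target_guess))

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0], [5.0, 1.0]])
print(select_anchors(anchors, np.array([4.0, 4.0]), k=3))
```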

Relevance:

10.00%

Publisher:

Abstract:

The branch-and-cut algorithm is one of the most efficient exact approaches for solving mixed integer programs. It combines the advantages of a pure branch-and-bound approach with a cutting-plane scheme: at each node of the search tree, the linear programming relaxation of the problem is computed and then improved by the use of cuts, i.e. by the inclusion of valid inequalities. The selection of the strongest cuts is crucial for their effective use in a branch-and-cut algorithm. In this thesis, we focus on the derivation and use of cutting planes to solve general mixed integer problems, and in particular inventory problems combined with other problems such as distribution, supplier selection and vehicle routing. To achieve this goal, we first consider substructures (relaxations) of such problems, obtained by a coherent loss of information. The polyhedral structure of these simpler mixed integer sets is studied to derive strong valid inequalities. Finally, those strong inequalities are included in cutting-plane algorithms to solve the general mixed integer problems. We study three mixed integer sets in this dissertation. The first two arise as subproblems of the lot-sizing with supplier selection, network design and vendor-managed inventory routing problems. These sets are variants of the well-known single-node fixed-charge network set where a binary or integer variable is associated with the node. The third set occurs as a subproblem of mixed integer sets where incompatibility between binary variables is considered. We generate families of valid inequalities for these sets, identify classes of facet-defining inequalities, and discuss the separation problems associated with the inequalities. Cutting-plane frameworks are then implemented to solve some mixed integer programs, and preliminary computational experiments in this direction are presented.
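The cutting-plane scheme described above can be summarized in a few lines. The sketch below is a generic skeleton, assuming SciPy's `linprog` for the LP relaxations; the `separate` callback stands in for the separation routines of the specific inequality families studied in the thesis (e.g., for the single-node fixed-charge variants) and is an assumption of the example.

```python
from scipy.optimize import linprog

def cutting_plane(c, A_ub, b_ub, separate, max_rounds=50):
    """Generic cutting-plane loop: solve the LP relaxation min c @ x subject
    to A_ub @ x <= b_ub, x >= 0; ask the separation oracle for an inequality
    a @ x <= rhs violated by the current fractional point; add it and
    re-solve. Stops when no violated inequality is found (or on max_rounds)."""
    A, b = [list(row) for row in A_ub], list(b_ub)
    res = None
    for _ in range(max_rounds):
        res = linprog(c, A_ub=A, b_ub=b, bounds=(0, None))
        cut = separate(res.x)        # e.g. separation of flow-cover-type cuts
        if cut is None:              # relaxation is tight w.r.t. the family
            return res
        a_row, rhs = cut
        A.append(list(a_row))
        b.append(rhs)
    return res
```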

Relevance:

10.00%

Publisher:

Abstract:

The performance of real-time networks is under continuous improvement as a result of several trends in the digital world. However, these trends not only bring improvements but also exacerbate a series of undesirable aspects of real-time networks, such as communication latency, jitter of the latency and packet drop rate. This Thesis focuses on the communication errors that appear in such real-time networks, from the point of view of automatic control. Specifically, it investigates the effects of packet drops in automatic control over fieldbuses, as well as architectures and optimal techniques for their compensation. Firstly, a new approach to address the problems that arise from such packet drops is proposed. This novel approach is based on the simultaneous transmission of several values in a single message. Such messages can be from sensor to controller, in which case they comprise several past sensor readings, or from controller to actuator, in which case they comprise estimates of several future control values. A series of tests reveals the advantages of this approach. The approach is then expanded to accommodate the techniques of contemporary optimal control. Unlike the first approach, which deliberately withholds certain messages in order to make more efficient use of network resources, in this second case the techniques are used to reduce the effects of packet losses. After these two approaches based on data aggregation, optimal control in packet-dropping fieldbuses is also studied, using generalized actuator output functions. This study ends with the development of a new optimal controller, as well as the identification of the function, among the generalized functions that dictate the actuator's behaviour in the absence of a new control message, that leads to optimal performance. The Thesis also presents a different line of research, related to the output oscillations that take place as a consequence of the use of classic co-design techniques for networked control. The proposed algorithm has the goal of allowing the execution of such classical co-design algorithms without causing an output oscillation that increases the value of the cost function; such increases may, under certain circumstances, negate the advantages of applying the classical co-design techniques. Yet another line of research investigated algorithms, more efficient than existing ones, to generate task execution sequences that guarantee that at least a given number of activated jobs will be executed out of every set composed of a predetermined number of contiguous activations (a classical pattern of this kind is sketched after this abstract). Such an algorithm may, in the future, be applied to the generation of message transmission patterns in the above-mentioned techniques for the efficient use of network resources. The proposed task generation algorithm improves on its predecessors in the sense that it can schedule systems that cannot be scheduled by the predecessor algorithms. The Thesis also presents a mechanism that allows multi-path routing in wireless sensor networks while ensuring that no value is counted in duplicate, improving the performance of wireless sensor networks and rendering them more suitable for control applications.

As mentioned before, this Thesis is centered on techniques for improving the performance of distributed control systems in which several elements are connected through a fieldbus that may be subject to packet drops. The first three approaches are directly related to this topic, with the first two approaching the problem from an architectural standpoint and the third from more theoretical grounds. The fourth approach ensures that methods from the literature that pursue goals similar to those of this Thesis can do so without causing other problems that may invalidate the solutions in question. Finally, the Thesis presents an approach centered on the efficient generation of the transmission patterns used in the aforementioned approaches.
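The guarantee described above, at least m executed jobs out of every k contiguous activations, is known as an (m,k)-firm constraint. Below is a minimal sketch, under the assumption of a static cyclic pattern, of the classical evenly distributed pattern together with a brute-force checker; the thesis's algorithm is more general (it can schedule systems this fixed pattern cannot), so this only illustrates the constraint itself.

```python
def mk_pattern(m, k):
    """Evenly distributed (m,k) execution pattern: slot j is executed iff
    floor((j+1)m/k) > floor(jm/k). Every window of k contiguous slots of the
    infinite repetition then contains exactly m executed jobs."""
    return [(j + 1) * m // k - j * m // k for j in range(k)]

def satisfies_mk(pattern, m, k):
    """Brute-force check: every window of k contiguous activations of the
    cyclic pattern executes at least m jobs."""
    n = len(pattern)
    return all(sum(pattern[(s + t) % n] for t in range(k)) >= m
               for s in range(n))

print(mk_pattern(3, 5))                    # [0, 1, 0, 1, 1]
assert satisfies_mk(mk_pattern(3, 5), 3, 5)
```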

Relevance:

10.00%

Publisher:

Abstract:

The modelling and analysis of integer-valued time series have been the subject of intense research and development in recent years, with applications in several areas of science. In this thesis, attention is centered on the study of the class of models based on the binomial thinning operator. Building on this operator, the thesis focuses on the construction and study of the SETINAR(2; p(1); p(2)) and PSETINAR(2; 1; 1)T models, integer-valued autoregressive models with self-induced thresholds and two regimes, assuming that the innovations form a sequence of independent Poisson-distributed random variables. For the first model analysed, the SETINAR(2; p(1); p(2)) model, besides the study of its probabilistic properties and of classical and Bayesian methods for estimating its parameters, the question of order selection was analysed for the case where the orders are unknown. To this end, Markov chain Monte Carlo algorithms were considered, in particular the Reversible Jump algorithm, and the model selection problem was also addressed using classical and Bayesian methodologies. The analysis was complemented by a simulation study and an application to two real data sets. The proposed PSETINAR(2; 1; 1)T model is also an autoregressive model with self-induced thresholds and two regimes, of order one in each regime, but with a periodic structure. Its probabilistic properties were studied, the problems of inference and of predicting future observations were analysed, and simulation studies were carried out.
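Since the models above are built on the binomial thinning operator, a small simulation may help fix ideas: the thinning α ∘ X is a Binomial(X, α) draw, and the self-induced threshold picks the regime from the previous observation. This is a minimal sketch of a two-regime threshold INAR model of order one in each regime with Poisson innovations, with parameter names chosen for the example; it does not reproduce the thesis's estimation or order-selection machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

def setinar_2_1_1(n, alpha1, alpha2, lam1, lam2, r, x0=0):
    """Simulate a two-regime self-exciting threshold INAR process of order
    one in each regime: X_t = alpha o X_{t-1} + e_t, where 'o' is binomial
    thinning (a Binomial(X_{t-1}, alpha) draw), the regime is chosen by
    whether X_{t-1} <= r, and the innovations e_t are Poisson."""
    x = np.empty(n, dtype=int)
    x[0] = x0
    for t in range(1, n):
        alpha, lam = (alpha1, lam1) if x[t - 1] <= r else (alpha2, lam2)
        x[t] = rng.binomial(x[t - 1], alpha) + rng.poisson(lam)
    return x

print(setinar_2_1_1(15, alpha1=0.3, alpha2=0.7, lam1=2.0, lam2=1.0, r=4))
```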

Relevance:

10.00%

Publisher:

Abstract:

Network virtualisation is seen as a promising approach to overcome the so-called "Internet impasse" and bring innovation back into the Internet, by allowing easier migration towards novel networking approaches as well as the coexistence of complementary network architectures on a shared infrastructure in a commercial context. Recently, interest from operators and mainstream industry in network virtualisation has grown quite significantly, as the potential benefits of virtualisation became clearer, both from an economic and an operational point of view. Initially, the concept was mainly a research topic and was materialized in small-scale testbeds and research network environments. This PhD Thesis aims to provide the network operator with a set of mechanisms and algorithms capable of managing and controlling virtual networks. To this end, we propose a framework that allocates, monitors and controls virtual resources in a centralized and efficient manner. To analyse the performance of the framework, we implemented and evaluated it on a small-scale testbed. To enable the operator to make an efficient allocation of virtual networks onto the substrate network, in real time and on demand, a heuristic algorithm is proposed to perform the virtual network mapping. For the network operator to obtain the highest profit from the physical network, a mathematical formulation is also proposed that aims to maximize the number of virtual networks allocated onto the physical network. Since the power consumption of the physical network is very significant in the operating costs, it is important to allocate virtual networks onto fewer physical resources, and onto physical resources that are already active. To address this challenge, we propose a mathematical formulation that minimizes the energy consumption of the physical network without affecting the efficiency of the allocation of virtual networks. To minimize fragmentation of the physical network while increasing the revenue of the operator, the initial formulation is extended to contemplate the re-optimization of previously mapped virtual networks, so that the operator makes better use of its physical infrastructure. It is also necessary to address the migration of virtual networks, whether for load balancing or because of imminent failure of physical resources, without affecting the proper functioning of the virtual network. To this end, we propose a method based on cloning techniques to migrate virtual networks across the physical infrastructure transparently and without affecting the virtual network. To assess the resilience of virtual networks to physical network failures, while obtaining the optimal solution for the migration of virtual networks in case of imminent failure of physical resources, the mathematical formulation is extended to minimize the number of migrated nodes and the relocation of virtual links. In comparison with our optimization proposals, we found that existing heuristics for mapping virtual networks perform poorly. We also found that it is possible to minimize energy consumption without penalizing the efficiency of the allocation. By applying re-optimization to the virtual networks, it was shown that it is possible to obtain more free resources and to keep the physical resources better balanced. Finally, it was shown that virtual networks are quite resilient to failures in the physical network.
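As an illustration of what a virtual-network-mapping heuristic does, here is a minimal greedy sketch using NetworkX: virtual nodes are placed on the substrate nodes with the most spare CPU, and virtual links are routed on shortest substrate paths with enough spare bandwidth. It is a generic baseline of the kind such work compares against, not the heuristic or the formulations proposed in the thesis; the attribute names (`cpu`, `bw`) are assumptions.

```python
import networkx as nx

def greedy_embed(substrate, virtual):
    """Greedy embedding sketch: map each virtual node (largest demand first)
    to an unused substrate node with the most spare 'cpu', then route each
    virtual link on a shortest substrate path whose edges have enough 'bw'.
    Returns None if a node cannot be placed; link routing raises
    NetworkXNoPath if no feasible path exists (i.e. the request is rejected)."""
    mapping = {}
    for v, vd in sorted(virtual.nodes(data=True), key=lambda nv: -nv[1]["cpu"]):
        candidates = [s for s, sd in substrate.nodes(data=True)
                      if sd["cpu"] >= vd["cpu"] and s not in mapping.values()]
        if not candidates:
            return None
        best = max(candidates, key=lambda s: substrate.nodes[s]["cpu"])
        substrate.nodes[best]["cpu"] -= vd["cpu"]
        mapping[v] = best
    paths = {}
    for a, b, ed in virtual.edges(data=True):
        feasible = nx.subgraph_view(
            substrate, filter_edge=lambda u, w: substrate[u][w]["bw"] >= ed["bw"])
        path = nx.shortest_path(feasible, mapping[a], mapping[b])
        for u, w in zip(path, path[1:]):
            substrate[u][w]["bw"] -= ed["bw"]
        paths[(a, b)] = path
    return mapping, paths

S = nx.cycle_graph(4)                         # toy 4-node ring substrate
nx.set_node_attributes(S, 10, "cpu")
nx.set_edge_attributes(S, 10, "bw")
V = nx.Graph()
V.add_nodes_from([("a", {"cpu": 3}), ("b", {"cpu": 2})])
V.add_edge("a", "b", bw=4)
print(greedy_embed(S, V))
```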

Relevance:

10.00%

Publisher:

Abstract:

The ever-growing energy consumption in mobile networks, stimulated by the expected growth in data traffic, has provided the impetus for mobile operators to refocus network design, planning and deployment towards reducing the cost per bit, whilst at the same time taking a significant step towards reducing their operational expenditure. As a step towards a cost-effective mobile system, 3GPP LTE-Advanced has adopted the coordinated multi-point (CoMP) transmission technique due to its ability to mitigate and manage inter-cell interference (ICI). Using CoMP, the cell-average and cell-edge throughput are boosted. However, there is room for reducing energy consumption further by exploiting the inherent flexibility of dynamic resource allocation protocols. To this end, the packet scheduler plays the central role in determining the overall performance of 3GPP long-term evolution (LTE), which is based on packet-switched operation, and provides a potential research playground for optimizing energy consumption in future networks. In this thesis we investigate the baseline performance of downlink CoMP using traditional scheduling approaches, and subsequently go beyond it and propose novel energy-efficient scheduling (EES) strategies that achieve power-efficient transmission to the UEs whilst enabling both system energy-efficiency gains and fairness improvements. However, ICI can still be prominent when multiple nodes use common resources with different power levels inside the cell, as in the so-called heterogeneous network (HetNet) environment. HetNets are comprised of two or more tiers of cells. The first, or higher, tier is a traditional deployment of cell sites, often referred to in this context as macrocells. The lower tiers are termed small cells, and can appear as microcells, picocells or femtocells. The HetNet has attracted significant interest from key manufacturers as one of the enablers of high-speed data at low cost. Research until now has revealed several key hurdles that must be overcome before HetNets can achieve their full potential: bottlenecks in the backhaul must be alleviated, as well as their seamless interworking with CoMP. In this thesis we explore exactly the latter hurdle, and present innovative ideas on advancing CoMP to work in synergy with HetNet deployments, complemented by a novel resource allocation policy for tighter HetNet interference management. A system-level simulator has been used to analyze the proposed algorithms and protocols, and the results show that an energy gain of up to 20% can be observed.
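As a rough illustration of the kind of trade-off an energy-efficient scheduler negotiates, the toy sketch below assigns each LTE resource block to the user maximizing a weighted mix of energy efficiency (rate per watt) and proportional fairness. This is a generic textbook-style metric, not the EES strategies proposed in the thesis; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def ee_pf_schedule(rates, tx_power, avg_rate, alpha=0.5):
    """Assign each resource block to the user maximizing a weighted mix of
    energy efficiency (achievable rate per watt) and proportional fairness
    (instantaneous over long-term average rate). `rates` is users x RBs."""
    ee = rates / tx_power[:, None]      # bits per joule on each RB
    pf = rates / avg_rate[:, None]      # proportional-fair metric
    return (alpha * ee + (1 - alpha) * pf).argmax(axis=0)

rates = rng.uniform(0.1, 1.0, size=(4, 12))        # 4 users, 12 resource blocks
users = ee_pf_schedule(rates, tx_power=np.full(4, 0.2), avg_rate=rates.mean(axis=1))
print(users)                                        # chosen user per resource block
```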

Relevance:

10.00%

Publisher:

Abstract:

Chapter 1 introduces the scope of the work by identifying the clinically relevant prenatal disorders and the presently available diagnostic methods. The methodology followed in this work is presented, along with a brief account of the principles of the analytical and statistical tools employed. A thorough description of the state of the art of metabolomics in prenatal research concludes the chapter, highlighting the merit of this novel strategy to identify robust disease biomarkers. The scarce use of maternal and newborn urine in previous reports underlines the relevance of this work. Chapter 2 presents a description of all the experimental details involved in the work performed, comprising sampling, sample collection and preparation issues, data acquisition protocols and data analysis procedures. The proton Nuclear Magnetic Resonance (NMR) characterization of maternal urine composition in healthy pregnancies is presented in Chapter 3. The urinary metabolic profile characteristic of each pregnancy trimester was defined, and a 21-metabolite signature was found descriptive of the metabolic adaptations occurring throughout pregnancy. Eight metabolites were found, for the first time to our knowledge, to vary in connection with pregnancy, while known metabolic effects were confirmed. This chapter includes a study of the effects of non-fasting (used in this work) as a possible confounder. Chapter 4 describes the metabolomic study of 2nd trimester maternal urine for the diagnosis of fetal disorders and the prediction of later-developing complications. This was achieved by applying a novel variable selection method developed in the context of this work. It was found that fetal malformations (FM) (specifically those of the central nervous system, CNS) and chromosomal disorders (CD) (specifically trisomy 21, T21) are accompanied by changes in energy, amino acid, lipid and nucleotide metabolic pathways, with CD causing a further deregulation of sugar metabolism, the urea cycle and/or creatinine biosynthesis. Validation of the multivariate analysis models revealed classification rates (CR) of 84% for FM (87%, CNS) and 85% for CD (94%, T21). For later-diagnosed preterm delivery (PTD), preeclampsia (PE) and intrauterine growth restriction (IUGR), urinary NMR profiles were found to have early predictive value, with CRs of 84% for PTD (11-20 gestational weeks, g.w., prior to diagnosis), 94% for PE (18-24 g.w. pre-diagnosis) and 94% for IUGR (2-22 g.w. pre-diagnosis). This chapter includes results obtained in an ultra-performance liquid chromatography-mass spectrometry (UPLC-MS) study of pre-PTD samples and their correlation with the NMR data. One possible marker was detected, although its identification was not possible. Chapter 5 relates to the NMR metabolomic study of gestational diabetes mellitus (GDM), establishing a potentially predictive urinary metabolic profile for GDM, 2-21 g.w. prior to diagnosis (CR 83%). Furthermore, the NMR spectrum was shown to carry information on individual phenotypes, able to predict future insulin treatment requirement (CR 94%). Chapter 6 describes results that demonstrate the impact of delivery mode (CR 88%) and gender (CR 76%) on the newborn urinary profile. It was also found that newborn prematurity, respiratory depression, large-for-gestational-age growth and malformations induce relevant metabolic perturbations (CR 82-92%), as do maternal conditions, namely GDM (CR 82%) and maternal psychiatric disorders (CR 91%).
Finally, the main conclusions of this thesis are presented in Chapter 7, highlighting the value of maternal or newborn urine metabolomics for pregnancy monitoring and disease prediction, towards the development of new early and non-invasive diagnostic methods.
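For readers unfamiliar with the "classification rate" (CR) figures quoted above, the sketch below shows one common way such rates are obtained in NMR metabolomics: cross-validated PLS-DA, here via scikit-learn. It is a generic illustration on synthetic data, not necessarily the exact multivariate models or validation scheme used in this work.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def plsda_classification_rate(X, y, n_components=2, cv=7):
    """Cross-validated PLS-DA: regress a 0/1 class label on the spectral
    matrix X with PLS and threshold the cross-validated prediction at 0.5;
    the fraction of correct assignments is the classification rate."""
    y = np.asarray(y, dtype=float)
    y_hat = cross_val_predict(PLSRegression(n_components), X, y, cv=cv).ravel()
    return np.mean((y_hat > 0.5) == (y > 0.5))

# Toy spectra: 40 "controls" vs 40 "cases" with a shifted mean in 5 variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 200))
X[40:, :5] += 1.0
print(plsda_classification_rate(X, [0] * 40 + [1] * 40))
```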

Relevance:

10.00%

Publisher:

Abstract:

In recent years we have witnessed a change in the way information is made available online. The emergence of the web for everyone made it easy to edit, publish and share information, generating a considerable increase in its volume. Systems quickly appeared that allow the collection and sharing of that information and that, besides enabling the collection of resources, also allow users to describe them using tags or comments. The automatic organization of this information is one of the greatest challenges in the context of the current web. Although several clustering algorithms exist, the compromise between effectiveness (formation of meaningful groups) and efficiency (execution in acceptable time) is hard to achieve. Accordingly, this research investigates whether an automatic document clustering system improves its effectiveness when a social classification system is integrated. We analysed and discussed two methods based on the k-means algorithm for document clustering that enable the integration of social tagging into that process. The first integrates the tags directly into the Vector Space Model, and the second proposes the use of tags for the selection of the initial seeds. The first method allows the tags to be weighted according to their occurrence in the document through the Social Slider parameter. This method was created on the basis of a prediction model which suggests that, when cosine similarity is used, documents that share tags become closer, whereas documents that do not become more distant. The second method gave rise to an algorithm we call k-C, which, besides allowing the initial selection of seeds through a tag network, also changes the way the new centroids are computed at each iteration. The change to the centroid computation took into account a reflection on the use of Euclidean distance and cosine similarity in the k-means clustering algorithm. In the context of the evaluation of the algorithms, two algorithms were proposed: the "automatic ground truth" algorithm and the MCI algorithm. The first allows the structure of the data to be detected, if unknown, and the second is an internal evaluation measure based on the cosine similarity between each document and the document closest to it. The analysis of preliminary results suggests that the first method of integrating tags into the VSM has more impact on the k-means algorithm than on the k-C algorithm. Moreover, the results obtained show no correlation between the choice of the SS parameter and the quality of the clusters. Accordingly, the remaining tests were conducted using only the k-C algorithm (without tag integration in the VSM), and the results obtained indicate that this algorithm tends to generate more effective clusters.
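Two ingredients of the abstract above lend themselves to a short sketch: k-means under cosine similarity (centroids kept on the unit sphere) and a tag-weighting step in the spirit of the Social Slider. The `tag_boost` form below is an assumed simplification of how tags might enter the Vector Space Model, and the k-C seed selection via a tag network is not reproduced; this is a generic illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def spherical_kmeans(X, k, iters=50):
    """k-means under cosine similarity: rows of X are L2-normalized so that
    dot products are cosines, and centroids are renormalized mean directions."""
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = (X @ C.T).argmax(axis=1)      # closest centroid by cosine
        for j in range(k):
            members = X[labels == j]
            if len(members):
                mean_dir = members.sum(axis=0)
                C[j] = mean_dir / np.linalg.norm(mean_dir)
    return labels, C

def tag_boost(tfidf, tag_counts, ss=0.5):
    """Assumed Social-Slider-style weighting: tag occurrences, scaled by the
    parameter ss, are added to the document-term matrix before clustering."""
    return tfidf + ss * tag_counts

X = tag_boost(np.abs(rng.normal(size=(100, 50))), rng.integers(0, 2, (100, 50)))
labels, _ = spherical_kmeans(X, k=5)
print(np.bincount(labels))                     # cluster sizes
```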

Relevance:

10.00%

Publisher:

Abstract:

Studying the mechanisms underlying speech production is a complex and demanding task, requiring data obtained with a variety of techniques, including some imaging modalities. Among these, Magnetic Resonance Imaging (MRI) has gained prominence in recent years, positioning itself as one of the most promising modalities in the domain of speech production. An important contribution of this work concerns the optimization and implementation of MRI protocols and the proposal of image-processing strategies adjusted to the requirements of speech production in general and to the specificities of the different sounds. In addition, motivated by the scarcity of data for European Portuguese (EP), another objective is to obtain articulatory data that complement existing information and clarify some questions regarding the production of EP sounds (namely, lateral consonants and nasal vowels). Thus, for the lateral consonants, MRI images (2D and 3D) were obtained from sustained productions, using a fast Gradient Echo (GE) sequence (3D VIBE), in the sagittal plane, covering the entire vocal tract. The corpus, acquired from seven speakers, covered different syllable positions and vowel contexts. For the nasal vowels, real-time images were acquired from three speakers with a Spoiled GE sequence (TurboFLASH), in the sagittal and coronal planes, achieving a temporal resolution of 72 ms (14 frames/s). Image acquisition was synchronized with the acoustic signal by means of an optical microphone. Several semi-automatic algorithms were used for image processing and analysis. The treatment and analysis of the data enabled an articulatory description of the lateral consonants, anchored in qualitative data (e.g., 3D visualizations, comparison of contours) and in quantitative data including areas, vocal tract area functions, the extent and area of the lateral channels, the assessment of contextual and positional effects, etc. Regarding the velarization of the alveolar lateral /l/, the results point to a velarized /l/ regardless of its syllable position. Regarding /L/, for which the available information was scarce, it was possible to verify that its articulation is considerably more anterior than traditionally described and also more extensive than that of the alveolar lateral. The temporal resolution of 72 ms achieved with the real-time MRI acquisitions proved adequate for studying the dynamic characteristics of the nasal vowels, namely aspects such as the duration of the velar gesture, the oral gesture and the coordination between gestures, complementing and corroborating results already existing for EP obtained with other instrumental techniques. In addition, new production data relevant to a better understanding of nasality were obtained (variation of the nasal/oral area over time, nasal/oral ratio). This study demonstrates the versatility and potential of MRI for the study of speech production, with clear and important contributions to a better knowledge of the articulation of Portuguese, to the evolution of articulatory-based speech synthesis models, and to future application in more clinical areas (e.g., speech disorders).

Relevance:

10.00%

Publisher:

Abstract:

Cherenkov imaging counters require large photosensitive areas capable of single-photon detection, operating at stable high gains under radioactive backgrounds while standing high rates, providing a fast response and a good time resolution, and being insensitive to magnetic fields. Photon detectors based on Micro-Pattern Gaseous Detectors (MPGDs) represent a new generation of gaseous photon detectors. In particular, gaseous detectors based on stacked Thick Gaseous Electron Multipliers (THGEMs), or THGEM-based structures, coupled to a CsI photoconverter coating, seem to fulfil the requirements imposed by Cherenkov imaging counters. This work focuses on the study of the response of THGEM-based detectors as a function of their geometrical parameters and of the applied voltages and electric fields, aiming at a future upgrade of the Cherenkov imaging counter RICH-1 of the COMPASS experiment at the CERN SPS. Further studies are performed to decrease the fraction of ions that reach the photocathode (Ion Back Flow, IBF), in order to minimize ageing and maximize photoelectron extraction. The experimental studies are complemented with simulation results, also obtained in this work.

Relevance:

10.00%

Publisher:

Abstract:

Desulfurization is one of the most important processes in the refining industry. Due to growing concern about the risks to human health and the environment associated with the emission of sulfur compounds, legislation has become more stringent, requiring a drastic reduction of the sulfur content of fuel to levels close to zero (< 10 ppm S). However, conventional desulfurization processes are inefficient and have high operating costs. This scenario stimulates the improvement of existing processes and the development of new and more efficient technologies. Aiming at overcoming these shortcomings, this work investigates an alternative desulfurization process using ionic liquids for the removal of mercaptans from jet-fuel streams. The screening and selection of the most suitable ionic liquid were performed based on experimental and COSMO-RS-predicted liquid-liquid equilibrium data. A model feed of 1-hexanethiol and n-dodecane was selected to represent a jet-fuel stream. High selectivities were determined, as a result of the low mutual solubility between the ionic liquid and the hydrocarbon matrix, proving the potential of the ionic liquid, which prevents the loss of fuel to the solvent. The distribution ratios of mercaptans towards the ionic liquids were not as favorable, making traditional liquid-liquid extraction processes unsuitable for the removal of aliphatic S-compounds, due to the high volume of extractant required. This work explores alternative methods and proposes the use of ionic liquids in a membrane-assisted separation process. In the proposed process, the ionic liquid is used as the extracting solvent of the sulfur species, in a hollow-fiber membrane contactor, without co-extracting the other jet-fuel compounds. In a second contactor, the ionic liquid is regenerated by sweep-gas stripping, which allows its reuse in a closed loop between the two membrane contactors. This integrated extraction/regeneration desulfurization process produced a jet-fuel model with a sulfur content lower than 2 ppm S, as envisaged by legislation for the use of ultra-low-sulfur jet fuel. This result confirms the high potential for the development of ultra-deep desulfurization applications.

Relevance:

10.00%

Publisher:

Abstract:

The solid-fluid transition properties of the n-6 Lennard-Jones system are studied by means of extensive free energy calculations. Different values of the parameter n, which regulates the steepness of the short-range repulsive interaction, are investigated. Furthermore, the free energies of the n < 12 systems are calculated using the n = 12 system as a reference. The method relies on a generalization of the multiple histogram method that combines independent canonical ensemble simulations performed with different Hamiltonians and computes the free energy difference between them. The phase behavior of solid fullerene C60 is studied by performing NPT simulations using atomistic models which treat each carbon in the molecule as a separate interaction site with additional bond charges. In particular, the transition from an orientationally frozen phase at low temperatures to one where the molecules rotate freely at higher temperatures is studied as a function of the applied pressure. The adsorption of molecular hydrogen in the zeolite NaA is investigated by means of grand-canonical Monte Carlo simulations, over a wide range of temperatures and imposed gas pressures, and the results are compared with available experimental data. A potential model is used that comprises three main interactions: van der Waals, Coulomb, and the polarization induced by the permanent electric field in the zeolite.
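As a pointer to how free-energy differences between Hamiltonians can be estimated from simulations of a reference system, here is a toy sketch of exponential averaging (free-energy perturbation), a simpler single-sample-set cousin of the multiple-histogram generalization used in the thesis; the harmonic example and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def delta_f_fep(u_ref, u_target, beta=1.0):
    """Free-energy difference between two Hamiltonians from samples of the
    reference ensemble, via exponential averaging (free-energy perturbation):
    dF = -(1/beta) * ln < exp(-beta * (U_target - U_ref)) >_ref."""
    du = u_target - u_ref
    return -np.log(np.mean(np.exp(-beta * du))) / beta

# Toy check against two harmonic potentials with a known answer:
# U_ref = x^2/2 and U_target = x^2 give dF = 0.5 * ln 2 ~ 0.3466 (beta = 1).
x = rng.normal(0.0, 1.0, 100_000)          # canonical samples of U_ref at beta=1
print(delta_f_fep(0.5 * x**2, x**2))
```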

Relevance:

10.00%

Publisher:

Abstract:

This thesis aims at improving the knowledge of post-fire vegetation regeneration. To that end, forests and shrublands were studied after wildfires and experimental fires. Maritime pine (Pinus pinaster) recruitment after fire was studied, and fire severity emerged as a major driver of this process. High crown fire severity can combust the pines, destroying the seed bank and impeding post-fire pine recruitment. However, crown combustion also influences the post-fire conditions on the soil surface, since high crown combustion (HCC) decreases the post-fire needle cast. After low crown combustion (LCC) (scorched rather than torched crowns), a considerable needle cover was observed, along with a higher density of pine seedlings. The overall trends of post-fire recruitment among LCC and HCC areas could be significantly attributed to cover by needles, as well as to the estimation of fire severity using the diameters of the burned twigs (TSI). Fire increased germination from the soil seed bank of a Pinus pinaster forest, and the effects were also related to fire severity. The seedling densities of the dominant taxa (the genus Erica and Calluna vulgaris) were contrastingly affected relative to the unburned situation, depending on fire severity, as estimated from the degree of fire-induced crown damage (LCC/HCC) and from a severity index based on the diameters of the remaining twigs (TSI). Low-severity patches showed an increase in germination density relative to the control, while high-severity patches suffered a reduction. After an experimental fire in a heathland dominated by Pterospartum tridentatum, Erica australis and E. umbellata, no net differences in seedling emergence were observed relative to the pre-fire situation. However, rather than having no effect, the heterogeneity of temperatures caused by the fire produced divergent effects across the burned plot in terms of Erica australis germination: a progressive increase was observed in the plots where the maximum recorded temperature ranged from 29 to 42.5 ºC, and a decrease in the plots where it ranged from 51.5 to 74.5 ºC. In this heathland, the seed density of two of the main species (E. australis and E. umbellata) was higher under their canopies, but the same was not true for P. tridentatum. The understory regeneration in pine and eucalypt stands, 5 to 6 years after fire, was strongly associated with post-fire management practices; the effect of forest type was, comparatively, insignificant. Soil tilling, tree harvesting and shrub clearance were linked to lower soil cover percentages. However, while all these management operations negatively affected the cover of resprouters, seeders were not affected by soil tilling. A strong influence of biogeographic region was identified, suggesting that more vulnerable regions may suffer greater effects of management, even under comparatively lower management pressure than more productive regions. This emphasizes the need to adapt post-fire management techniques to the target regions.

Relevance:

10.00%

Publisher:

Abstract:

This thesis reports the application of metabolomics to human tissues and biofluids (blood plasma and urine) to unveil the metabolic signature of primary lung cancer. In Chapter 1, a brief introduction to lung cancer epidemiology and pathogenesis, together with a review of the main metabolic dysregulations known to be associated with cancer, is presented. The metabolomics approach is also described, addressing the analytical and statistical methods employed, as well as the current state of the art of its application to clinical lung cancer studies. Chapter 2 provides the experimental details of this work, regarding the subjects enrolled, sample collection and analysis, and data processing. In Chapter 3, the metabolic characterization of intact lung tissues (from 56 patients) by proton High Resolution Magic Angle Spinning (HRMAS) Nuclear Magnetic Resonance (NMR) spectroscopy is described. After careful assessment of acquisition conditions and thorough spectral assignment (over 50 metabolites identified), the metabolic profiles of tumour and adjacent control tissues were compared through multivariate analysis. The two tissue classes could be discriminated with 97% accuracy, with 13 metabolites significantly accounting for this discrimination: glucose and acetate (depleted in tumours), together with lactate, alanine, glutamate, GSH, taurine, creatine, phosphocholine, glycerophosphocholine, phosphoethanolamine, uracil nucleotides and peptides (increased in tumours). Some of these variations corroborated typical features of cancer metabolism (e.g., upregulated glycolysis and glutaminolysis), while others suggested that less well-known pathways (e.g., antioxidant protection, protein degradation) play important roles. Another major and novel finding described in this chapter was the dependence of this metabolic signature on tumour histological subtype. While the main alterations in adenocarcinomas (AdC) related to phospholipid and protein metabolism, squamous cell carcinomas (SqCC) were found to have stronger glycolytic and glutaminolytic profiles, making it possible to build a valid classification model to discriminate these two subtypes. Chapter 4 reports the NMR metabolomic study of blood plasma from over 100 patients and nearly 100 healthy controls; the multivariate model built afforded a classification rate of 87%. The two groups were found to differ significantly in the levels of lactate, pyruvate, acetoacetate, LDL+VLDL lipoproteins and glycoproteins (increased in patients), together with glutamine, histidine, valine, methanol, HDL lipoproteins and two unassigned compounds (decreased in patients). Interestingly, these variations were detected from the initial disease stages, and the magnitude of some of them depended on the histological type, although not allowing AdC vs. SqCC discrimination. Moreover, it is shown in this chapter that an age mismatch between the control and cancer groups could not be ruled out as a possible confounding factor, and exploratory external validation afforded a classification rate of 85%. The NMR profiling of urine from lung cancer patients and healthy controls is presented in Chapter 5. Compared to plasma, the classification model built with urinary profiles resulted in a superior classification rate (97%). After careful assessment of possible bias from gender, age and smoking habits, a set of 19 metabolites was proposed to be cancer-related (out of which 3 were unknowns and 6 were partially identified as N-acetylated metabolites). As for plasma, these variations were detected regardless of disease stage and showed some dependency on histological subtype, with the AdC vs. SqCC model built showing modest predictive power. In addition, preliminary external validation of the urine-based classification model afforded 100% sensitivity and 90% specificity, which are exciting results in terms of potential for future clinical application. Chapter 6 describes the analysis of urine from a subset of patients by a different profiling technique, namely Ultra-Performance Liquid Chromatography coupled to Mass Spectrometry (UPLC-MS). Although the identification of discriminant metabolites was very limited, the multivariate models showed a high classification rate and predictive power, thus reinforcing the value of urine in the context of lung cancer diagnosis. Finally, the main conclusions of this thesis are presented in Chapter 7, highlighting the potential of integrated metabolomics of tissues and biofluids to improve the current understanding of the altered metabolism of lung cancer and to reveal new marker profiles with diagnostic value.

Relevance:

10.00%

Publisher:

Abstract:

When developing software for autonomous mobile robots, one inevitably has to tackle some kind of perception. Moreover, when dealing with agents that possess some level of reasoning for executing their actions, there is the need to model the environment and the robot's internal state in a way that represents the scenario in which the robot operates. Carried out within the ATRI group, part of the IEETA research unit at Aveiro University, this work uses two of the group's projects as a test bed, particularly the scenario of robotic soccer with real robots. With the main objective of developing algorithms for sensor and information fusion that could be used effectively by these teams, several state-of-the-art approaches were studied, implemented and adapted to each of the robot types. Within the MSL RoboCup team CAMBADA, the main focus was the perception of the ball and obstacles, with the creation of models capable of providing extended information so that the reasoning of the robot can be ever more effective. To achieve this, several methodologies were analyzed, implemented, compared and improved. Concerning the ball, an analysis of filtering methodologies for stabilizing its position and estimating its velocity was performed. Also, with the goalkeeper in mind, work was done to provide it with information on aerial balls. As for obstacles, a new definition of the way they are perceived by the vision system and of the type of information provided was created, as well as a methodology for identifying which of the obstacles are team mates. A tracking algorithm was also developed, which ultimately assigns each of the obstacles a unique identifier. Associated with the improvement of obstacle perception, a new algorithm for reactive obstacle avoidance was created. In the context of the SPL RoboCup team Portuguese Team, besides the inevitable adaptation of many of the algorithms already developed for sensor and information fusion, and considering that the team was recently created, the objective was to create a sustainable software architecture that could be the base for future modular development. The software architecture created is based on a series of different processes and the means of communication among them. All processes were created or adapted for the new architecture, and a base set of roles and behaviors was defined during this work to achieve a basic functional framework. In terms of perception, the main focus was to define a projection model and camera pose extraction that could provide information in metric coordinates. The second main objective was to adapt the CAMBADA localization algorithm to work on the NAO robots, considering all the limitations they present when compared to the MSL platform, especially in terms of computational resources. A set of support tools was developed or improved to support testing and development in both teams. In general, the work developed during this thesis improved the performance of the teams during play, as well as the effectiveness of the developers' team in the development and test phases.
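As a rough illustration of the obstacle-tracking idea described above (assigning each perceived obstacle a persistent unique identifier), here is a minimal nearest-neighbour data-association sketch; it is a generic textbook approach, not the CAMBADA implementation, and all names and the gating parameter are illustrative.

```python
import numpy as np
from itertools import count

class ObstacleTracker:
    """Minimal nearest-neighbour tracker: each detection is matched to the
    closest surviving track within `gate` metres and keeps that track's id;
    unmatched detections open new tracks with fresh ids."""

    def __init__(self, gate=0.5):
        self.gate = gate
        self.tracks = {}            # id -> last known position
        self._ids = count()

    def update(self, detections):
        free = dict(self.tracks)    # tracks still available this frame
        assigned = {}
        for d in detections:
            d = np.asarray(d, dtype=float)
            tid = min(free, key=lambda t: np.linalg.norm(free[t] - d), default=None)
            if tid is not None and np.linalg.norm(free[tid] - d) <= self.gate:
                free.pop(tid)       # re-identified obstacle keeps its id
            else:
                tid = next(self._ids)
            assigned[tid] = d
        self.tracks = assigned      # unmatched old tracks are dropped
        return assigned

tracker = ObstacleTracker(gate=0.6)
print(tracker.update([(1.0, 2.0), (3.0, 0.5)]))   # ids 0 and 1 created
print(tracker.update([(1.1, 2.1), (5.0, 5.0)]))   # id 0 persists, id 2 is new
```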