883 results for: Power system stabilizer. Automatic voltage regulator. Salient-pole synchronous generator. Wavelet transform


Relevance:

20.00%

Publisher:

Abstract:

Optical communication systems have evolved rapidly and, as a consequence, the power of the optical signals propagated in fibres has been increasing. The propagation of high-power signals can result in the degradation and/or destruction of the optical fibre. This work studies the impact of the propagation of high-power optical signals on the degradation and on the reduction of the service lifetime of optical fibres. This degradation was studied at two levels: degradation of the coating of fibres subjected to tight bends, and ignition and propagation of the fibre fuse effect, which leads to the destruction of the optical fibre. To carry out this study, the temperature increase in the coating of optical fibres subjected to small-diameter bends in the presence of high-power signals was characterised experimentally, and an analytical model was developed to describe the heating of the fibre coating. Regarding the fibre fuse effect, its main properties were studied experimentally in several fibre types, namely the threshold optical power required for its ignition and propagation, the propagation velocity, and the characteristics of the chain of bubbles formed in the fibre core during the propagation of this effect. A theoretical model was also developed to simulate the propagation of the fibre fuse effect, and was validated and complemented with experimental results. Finally, techniques to detect and halt the propagation of the fibre fuse effect were developed and implemented, as a way of limiting the extent of destroyed fibre and protecting the optical equipment in the network.
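The detect-and-interrupt techniques mentioned at the end could, in their simplest form, monitor back-reflected power for the sharp rise that a propagating fuse produces. Below is a minimal sketch under that assumption; the function name, thresholds and trace are illustrative, not the thesis' actual implementation.

```python
# Hypothetical sketch of a fibre-fuse protection loop: a propagating fuse
# strongly increases back-reflected power, so a sudden rise above a
# multiple of the baseline triggers a transmitter shutdown. All values
# here are illustrative assumptions.

def fuse_protection(reflected_mw, baseline_mw=0.1, ratio_threshold=5.0):
    """Return the sample index at which the laser should be shut down,
    or None if no fuse-like event is seen."""
    for i, p in enumerate(reflected_mw):
        if p > ratio_threshold * baseline_mw:
            return i  # shut down the transmitter here
    return None

# Normal back-reflection, then a fuse-like surge at sample 4.
trace = [0.1, 0.12, 0.11, 0.1, 2.5, 3.0]
print(fuse_protection(trace))  # 4
```

In a real system the shutdown must occur faster than the fuse propagates towards the transmitter, which is why the thesis characterises the propagation velocity experimentally.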


Multicell coordination is a rapidly growing research topic and a promising solution for controlling inter-cell interference in cellular systems, improving system fairness and increasing capacity. This technology is already under study in LTE-Advanced under the coordinated multipoint (CoMP) concept. There are several approaches to multicell coordination, depending on the amount and type of information shared by the base stations over the backhaul network, and on where that information is processed, i.e., at a central processing unit or in a distributed manner at each base station. In this thesis, precoding and power allocation techniques are proposed considering several strategies: centralised, where all processing is done at the central processing unit; semi-distributed, where only part of the processing is performed at the central processing unit, namely the power allocated to each user served by each base station; and distributed, where the processing is done locally at each base station. The proposed schemes are designed in two phases: first, precoding solutions are proposed to mitigate or eliminate inter-cell interference; then, the system is improved through the development of several power allocation schemes. Three centralised power allocation schemes are proposed, subject to per-base-station power constraints and offering different trade-offs between performance and complexity. Distributed allocation schemes are also derived, by assuming that a multicell system can be seen as the superposition of several single-cell systems. Based on this concept, a virtual average error rate is defined for each of the single-cell systems that make up the multicell system, enabling the design of fully distributed power allocation schemes.
All proposed schemes were evaluated in realistic scenarios, very close to those considered in LTE. The results show that the proposed schemes are efficient at removing inter-cell interference, and that the performance of the proposed power allocation techniques is clearly superior to the case without power allocation. The performance of the fully distributed systems is lower than that of those based on centralised processing, but in return they can be used in systems where the backhaul network does not allow the exchange of large amounts of information.
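As a concrete illustration of the first design phase, a zero-forcing style precoder (one standard way to eliminate inter-cell or inter-user interference; the abstract does not specify the exact precoder used) can be sketched as follows, with the channel dimensions chosen arbitrarily:

```python
import numpy as np

# Illustrative sketch, not the thesis' exact scheme: zero-forcing
# precoding removes interference by inverting the aggregate channel,
# after which per-user power allocation remains as a separate step.

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 4)) + 1j * rng.standard_normal((3, 4))  # 3 users, 4 tx antennas

# Zero-forcing precoder: right pseudo-inverse of H.
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)

# The effective channel H @ W is the identity matrix, so each user sees
# only its own stream: interference is eliminated.
effective = H @ W
print(np.allclose(effective, np.eye(3)))  # True
```

Power allocation then scales the columns of `W`, which is where the centralised, semi-distributed and distributed strategies described above differ.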


The present investigation takes as its scope the inter-organisational space in which companies relate to one another and to their suppliers, channels and customers. It aims to study the current state of inter-organisational partnerships in the national insurance sector and to define a strategy for the integrated development of value systems. Based on an analysis model anchored in (1) the economic rationality of TCE (transaction cost economics) and (2) the Dynamic Capabilities perspective, the GPS model (Integrated Partnership Management) is proposed, consistent with a holistic and dynamic vision. The empirical verification methodology comprised (1) data collection through a questionnaire addressed to insurance companies and partners, and (2) semi-structured interviews. The descriptive analysis of the data made it possible to validate the GPS model and to characterise a heterogeneous, complex value system, diversified in the nature and intensity of its relationships. The system of relationships was framed on a maturity scale on which the various partnership management practices were positioned. Currently, the insurance sector operates as a system that is more economic-intensive and transactional than knowledge-intensive. In the hypothesis testing, carried out with the SPSS tool, the expected correlations were found, as well as the (main) absences. In fact, the absence of any trace of correlation between social governance/trust and collaboration in insurance was unexpected, and draws attention to an under-explored dimension leading to a tension-laden picture. Finally, based on the reality captured, recommendations were outlined for the development of value systems aimed at reaching more effective collaborative levels, grounded in the strength of strong ties. However, this new management narrative is not neutral with respect to current models, implying some degree of rupture.
Continued specialisation in core activities, deconstructing the value chain in a (more) pronounced way, supported by higher levels of collaboration and socialisation among peers, are constituent elements of the future reality. Looking beyond the horizon, insurance managers cannot remain indifferent to the prospect of a background matrix of more collaborative relationships as fertile ground for innovation and for the renewal of sources of competitive advantage.


This thesis describes a framework based on the multi-layer paradigm for analysing, modelling, designing and optimising communication systems. It explores a new perspective on the physical layer that arises from the relationships between information theory, estimation theory, probabilistic methods, communication theory and coding. This framework leads to design methods for the next generation of high-rate communication systems. In addition, the thesis explores several access-layer techniques, based on the relationship between delay and throughput, for the design of delay-tolerant wireless networks. Fundamental results on the interplay between information theory and estimation theory lead to proposals for an alternative paradigm for the analysis, design and optimisation of communication systems. Based on studies of the relationship between mutual information and MMSE, the approach described in the thesis makes it possible to overcome, in an innovative way, the difficulties inherent in optimising reliable information transmission rates in communication systems, and enables the exploration of optimal power allocation and optimal precoding structures for different channel models: wired, wireless and optical. The thesis also addresses the delay problem, in an attempt to answer questions raised by the enormous demand for high rates in communication systems. This is done by proposing new models for systems with network coding at layers above the physical layer. In particular, the use of network coding systems for time-varying, delay-sensitive channels is addressed. This was demonstrated through the proposal of a new model and adaptive scheme, whose algorithms were applied to wireless systems with complex fading, of which satellite communication systems are an example.
The thesis also addresses the use of network coding systems in demanding handover scenarios. This is done by proposing new Wi-Fi IEEE 802.11 MAC transmission models, which are compared with network coding and shown to enable seamless handover. It can thus be said that this thesis, through analysis and through proposals supported by simulations, argues that the design of communication systems should consider transmission and coding strategies that are not only close to channel capacity but also delay tolerant, and that such strategies must be designed with the channel characteristics and the physical layer in mind.
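The mutual information-MMSE relationship the abstract builds on can be stated concretely. For a scalar Gaussian channel, the Guo-Shamai-Verdú identity links the derivative of the mutual information to the minimum mean-square error of the input estimate:

```latex
% I-MMSE identity for the channel Y = \sqrt{\mathrm{snr}}\,X + N, with N \sim \mathcal{N}(0,1):
\frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\, I\bigl(X;\, \sqrt{\mathrm{snr}}\,X + N\bigr)
  = \tfrac{1}{2}\,\mathrm{mmse}(\mathrm{snr}),
\qquad
\mathrm{mmse}(\mathrm{snr}) = \mathbb{E}\!\left[\bigl(X - \mathbb{E}[X \mid Y]\bigr)^{2}\right]
```

Because the derivative of the achievable rate is an estimation-theoretic quantity, power allocation and precoder optimisation can be attacked with estimation tools, which is the leverage this line of work exploits.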


Future emerging market trends head towards positioning-based services, placing a new perspective on the way we obtain and exploit positioning information. On the one hand, innovations in information technology and wireless communication systems enabled the development of numerous location-based applications such as vehicle navigation and tracking, sensor network applications, home automation, asset management, security and context-aware location services. On the other hand, wireless networks themselves may benefit from localization information to improve the performance of different network layers. Location-based routing, synchronization and interference cancellation are prime examples of applications where location information can be useful. Typical positioning solutions rely on the measurement and exploitation of distance-dependent signal metrics, such as the received signal strength, time of arrival or angle of arrival. They are cheaper and easier to implement than dedicated positioning systems based on fingerprinting, but at the cost of accuracy. Therefore, intelligent localization algorithms and signal processing techniques have to be applied to mitigate the lack of accuracy in distance estimates. Cooperation between nodes is used in cases where conventional positioning techniques do not perform well due to the lack of existing infrastructure or to obstructed indoor environments. The objective is to concentrate on a hybrid architecture where some nodes have points of attachment to an infrastructure and are simultaneously interconnected via short-range ad hoc links. The availability of more capable handsets enables more innovative scenarios that take advantage of multiple radio access networks as well as peer-to-peer links for positioning. Link selection is used to optimize the trade-off between the power consumption of participating nodes and the quality of target localization.
The Geometric Dilution of Precision and the Cramér-Rao Lower Bound can be used as criteria for choosing the appropriate set of anchor nodes and corresponding measurements before attempting location estimation itself. This work analyzes the existing solutions for node selection aimed at improving localization performance, and proposes a novel method based on utility functions. The proposed method is then extended to mobile and heterogeneous environments. Simulations have been carried out, as well as evaluation with real measurement data. In addition, some specific cases have been considered, such as localization in ill-conditioned scenarios and the use of negative information. The proposed approaches have been shown to enhance estimation accuracy while significantly reducing complexity, power consumption and signalling overhead.
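The GDOP criterion mentioned above can be made concrete with a small sketch. Assuming 2-D range-based positioning with no clock-bias term (an assumption for this example), GDOP is computed from the unit line-of-sight vectors, and anchor selection can simply minimise it over candidate subsets; the thesis' utility-function method refines this baseline:

```python
import numpy as np
from itertools import combinations

# Hedged sketch of GDOP-based anchor selection. Anchor coordinates and
# the target position are arbitrary illustrative values.

def gdop(anchors, target):
    """Geometric Dilution of Precision for 2-D ranging."""
    diffs = np.asarray(anchors, float) - np.asarray(target, float)
    units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
    q = np.linalg.inv(units.T @ units)  # shape factor of the error covariance
    return float(np.sqrt(np.trace(q)))

def best_subset(anchors, target, k):
    """Choose the k anchors whose geometry gives the smallest GDOP."""
    return min(combinations(range(len(anchors)), k),
               key=lambda idx: gdop([anchors[i] for i in idx], target))

anchors = [(0, 0), (10, 0), (0, 10), (1, 1)]
print(best_subset(anchors, (5, 5), 3))
```

Well-spread anchors surround the target and yield a small GDOP; anchors clustered on one side inflate it, which is why geometry-aware selection pays off before any estimator runs.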


The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact that was also verified in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the text mining task that aims to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition - a crucial initial task - with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. This approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget.
We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributed to a more accurate update of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
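To make the recognition-plus-normalisation pipeline concrete, here is a deliberately tiny dictionary-based sketch; the lexicon entries and identifiers are invented for the example, and real systems such as Gimli and Neji use machine-learned models and curated knowledge bases instead:

```python
# Minimal illustration of concept recognition followed by normalisation
# (mapping surface forms to knowledge-base identifiers). The lexicon and
# identifiers below are made up for this example.

LEXICON = {
    "breast cancer": "DOID:1612",   # hypothetical knowledge-base links
    "brca1": "HGNC:1100",
    "tamoxifen": "CHEBI:41774",
}

def annotate(text):
    """Return sorted (surface form, identifier) pairs found in the text."""
    found = []
    lowered = text.lower()
    for term, concept_id in LEXICON.items():
        if term in lowered:
            found.append((term, concept_id))
    return sorted(found)

print(annotate("BRCA1 mutations raise breast cancer risk; tamoxifen helps."))
```

The linking step is what distinguishes normalisation from plain named entity recognition: each recognised mention carries an identifier that a biocurator or downstream tool can resolve.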


The promise of a truly mobile experience is the freedom to roam anywhere without being bound to a single location. However, the energy required to keep mobile devices connected to the network over extended periods of time quickly dissipates. In fact, energy is a critical resource in the design of wireless networks, since wireless devices are usually powered by batteries. Furthermore, multi-standard mobile devices are allowing users to enjoy higher data rates with ubiquitous connectivity. However, the benefits gained from multiple interfaces come at a cost in terms of energy consumption, with a profound effect on mobile battery lifetime and standby time. This concern is reaffirmed by the fact that battery lifetime is one of the top reasons why consumers are deterred from using advanced multimedia services on their mobiles on a frequent basis. In order to secure market penetration for next-generation services, energy efficiency needs to be placed at the forefront of system design. However, despite recent efforts, energy-compliant features in legacy technologies are still in their infancy, and new disruptive architectures coupled with interdisciplinary design approaches are required in order not only to promote the energy gain within a single protocol layer, but to enhance the energy gain from a holistic perspective. A promising approach is cooperative smart systems which, in addition to exploiting context information, are entities able to form a coalition and cooperate in order to achieve a common goal. Starting from this baseline, this thesis investigates how these technology paradigms can be applied to reducing the energy consumption of mobile networks. In addition, we introduce a further energy-saving dimension by adopting an inter-layer design, so that protocol layers are designed to work in synergy with the host system, rather than independently, for harnessing energy.
In this work, we exploit context information, cooperation and inter-layer design to develop new energy-efficient and technology-agnostic building blocks for mobile networks. These technology enablers include energy-efficient node discovery and short-range cooperation for energy saving in mobile handsets, complemented by energy-aware smart scheduling for promoting energy saving on the network side. Analytical and simulation results were obtained, and verified in the lab on a real hardware testbed. Results have shown that up to 50% energy saving can be obtained.


The performance of real-time networks is under continuous improvement as a result of several trends in the digital world. However, these trends not only bring improvements, but also exacerbate a series of undesirable aspects of real-time networks, such as communication latency, latency jitter and packet drop rate. This Thesis focuses on the communication errors that appear in such real-time networks, from the point of view of automatic control. Specifically, it investigates the effects of packet drops in automatic control over fieldbuses, as well as architectures and optimal techniques for their compensation. Firstly, a new approach to address the problems that arise from such packet drops is proposed. This novel approach is based on the simultaneous transmission of several values in a single message. Such messages can be from sensor to controller, in which case they comprise several past sensor readings, or from controller to actuator, in which case they comprise estimates of several future control values. A series of tests reveals the advantages of this approach. The approach is then expanded to accommodate the techniques of contemporary optimal control. Unlike the first approach, which deliberately refrains from sending certain messages in order to make more efficient use of network resources, in this second case the techniques are used to reduce the effects of packet losses. After these two data-aggregation approaches, optimal control in packet-dropping fieldbuses is also studied, using generalized actuator output functions. This study ends with the development of a new optimal controller, together with the identification of the function, among the generalized functions that dictate the actuator's behaviour in the absence of a new control message, that leads to optimal performance.
The Thesis also presents a different line of research, related to the output oscillations that take place as a consequence of the use of classic co-design techniques for networked control. The proposed algorithm has the goal of allowing the execution of such classical co-design algorithms without causing an output oscillation that increases the value of the cost function. Such increases may, under certain circumstances, negate the advantages of applying the classical co-design techniques. Yet another line of research investigated algorithms, more efficient than contemporary ones, to generate task execution sequences that guarantee that at least a given number of activated jobs will be executed out of every set composed of a predetermined number of contiguous activations. This algorithm may, in the future, be applied to the generation of message transmission patterns in the above-mentioned techniques for the efficient use of network resources. The proposed task generation algorithm improves on its predecessors in the sense that it is capable of scheduling systems that cannot be scheduled by its predecessor algorithms. The Thesis also presents a mechanism that allows multi-path routing to be performed in wireless sensor networks while ensuring that no value is counted in duplicate. This technique thereby improves the performance of wireless sensor networks, rendering them more suitable for control applications. As mentioned before, this Thesis is centred on techniques for improving the performance of distributed control systems in which several elements are connected through a fieldbus that may be subject to packet drops. The first three approaches are directly related to this topic, with the first two approaching the problem from an architectural standpoint, whereas the third does so on more theoretical grounds.
The fourth approach ensures that the techniques found in the literature that pursue goals similar to the objectives of this Thesis can do so without causing other problems that might invalidate the solutions in question. Finally, the Thesis presents an approach centred on the efficient generation of the transmission patterns used in the aforementioned techniques.
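The first approach, packing estimates of several future control values into each controller-to-actuator message, can be sketched in a few lines; the horizon, loss pattern and stand-in "prediction" are illustrative assumptions, not the thesis' controller:

```python
# Sketch of the data-aggregation idea: each controller-to-actuator
# message carries HORIZON predicted future control values, so on a drop
# the actuator falls back on the next buffered prediction instead of
# holding a stale value. Plant, horizon and drop pattern are made up.

HORIZON = 4
drops = [False, True, False, False, True, True]  # illustrative loss pattern

buffer, age = [0.0] * HORIZON, 0
applied = []
for step, dropped in enumerate(drops):
    if not dropped:
        # fresh message: predictions for the next HORIZON sampling instants
        buffer, age = [float(step + k) for k in range(HORIZON)], 0
    else:
        age = min(age + 1, HORIZON - 1)  # advance within the buffered future
    applied.append(buffer[age])
print(applied)  # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```

Even with two consecutive drops at the end, the actuator keeps applying values that track the intended trajectory, which is precisely the benefit the tests in the thesis quantify.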


This thesis focuses on the application of optimal alarm systems to non-linear time series models. The most common classes of models in the analysis of real-valued and integer-valued time series are described. The construction of optimal alarm systems is covered and its applications explored. Considering models with conditional heteroscedasticity, particular attention is given to the Fractionally Integrated Asymmetric Power ARCH, FIAPARCH(p, d, q), model, and an optimal alarm system is implemented following both classical and Bayesian methodologies. Taking into consideration the particular characteristics of the APARCH(p, q) representation for financial time series, a possible counterpart for modelling time series of counts is proposed: the INteger-valued Asymmetric Power ARCH, INAPARCH(p, q). The probabilistic properties of the INAPARCH(1, 1) model are comprehensively studied, the conditional maximum likelihood (ML) estimation method is applied and the asymptotic properties of the conditional ML estimator are obtained. The final part of the work consists of the implementation of an optimal alarm system for the INAPARCH(1, 1) model. An application to real data series is presented.
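The optimal alarm principle, raising an alarm whenever the predictive probability of a level crossing exceeds a threshold, can be illustrated with a deliberately simple Gaussian AR(1) predictor instead of the FIAPARCH/INAPARCH models treated in the thesis; all parameter values are arbitrary:

```python
import math

# Sketch of an optimal alarm rule: alarm at time t when
# P(Y_{t+1} > LEVEL | Y_t) >= P_MIN. Shown for a toy Gaussian AR(1)
# predictor, not the conditionally heteroscedastic models of the thesis.

PHI, SIGMA, LEVEL, P_MIN = 0.8, 1.0, 3.0, 0.5

def exceedance_prob(y_t):
    """P(Y_{t+1} > LEVEL | Y_t = y_t) with Y_{t+1} ~ N(PHI*y_t, SIGMA^2)."""
    z = (LEVEL - PHI * y_t) / SIGMA
    return 0.5 * math.erfc(z / math.sqrt(2))  # Gaussian tail probability

def alarm(y_t):
    return exceedance_prob(y_t) >= P_MIN

print(alarm(1.0), alarm(4.0))  # False True
```

Varying `P_MIN` trades false alarms against missed events; the "optimal" alarm system chooses the predictive event region to maximise detection subject to such operating constraints.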


The continuous demand for highly efficient wireless transmitter systems has triggered increased interest in switching-mode techniques to handle the required power amplification. The RF carrier amplitude-burst transmitter, i.e. a wireless transmitter chain in which a phase-modulated carrier is modulated in amplitude in an on-off mode according to some prescribed envelope-to-time conversion, such as pulse-width or sigma-delta modulation, constitutes a promising architecture capable of efficiently transmitting signals with highly demanding complex modulation schemes. However, the tested practical implementations present results that fall well short of the theoretical promises of perfect linearity and efficiency. My original contribution to knowledge presented in this thesis is the first thorough study and model of the power efficiency and linearity characteristics that can actually be achieved with this architecture. The analysis starts with a brief review of the theoretical, idealized behavior of these switched-mode amplifier systems, followed by a study of the many sources of impairment that appear when the real system is implemented. In particular, special attention is paid to the dynamic load modulation caused by the often ignored interaction between the narrowband signal reconstruction filter and the usual single-ended switched-mode power amplifier, which, among many other performance impairments, forces a two-transistor implementation. The performance of this architecture is clearly explained based on the presented theory, which is supported by simulations and corresponding measured results of a fully working implementation. The conclusions drawn allow the development of a set of design rules for future improvements, one of which is proposed and verified in this thesis.
It suggests a significant modification to this traditional architecture, where now the phase modulated carrier is always on – and thus allowing a single transistor implementation – and the amplitude is impressed into the carrier phase according to a bi-phase code.
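The envelope-to-time conversion the abstract refers to can be illustrated with a first-order sigma-delta sketch: the continuous envelope is turned into an on-off burst sequence whose duty cycle tracks the envelope. The modulator form (error feedback) and the test signal are illustrative assumptions:

```python
# Illustrative first-order sigma-delta envelope-to-burst conversion: one
# possible prescribed envelope-to-time mapping, not the thesis' exact
# implementation.

def sigma_delta(envelope):
    """Quantise an envelope in [0, 1] to an on/off burst sequence."""
    acc, bits = 0.0, []
    for sample in envelope:
        acc += sample             # integrate the input
        bit = 1 if acc >= 1.0 else 0
        acc -= bit                # feed the 1-bit decision back
        bits.append(bit)
    return bits

env = [0.25] * 8                  # constant 25% envelope
bits = sigma_delta(env)
print(bits, sum(bits) / len(bits))  # duty cycle equals the envelope level
```

The switched-mode amplifier then only ever sees the on or off state, which is what makes the high theoretical efficiency possible; the impairments the thesis models arise when this ideal burst train meets the reconstruction filter and a real amplifier.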


In recent years we have witnessed a change in the way information is made available online. The emergence of the web for everyone has made it easy to edit, publish and share information, generating a considerable increase in its volume. Systems quickly emerged that allow this information to be collected and shared and that, besides supporting the collection of resources, also allow users to describe them using tags or comments. The automatic organisation of this information is one of the greatest challenges in the context of today's web. Although several clustering algorithms exist, the trade-off between effectiveness (forming groups that make sense) and efficiency (running in acceptable time) is hard to achieve. Accordingly, this investigation asks whether an automatic document clustering system becomes more effective when a social classification system is integrated into it. We analysed and discussed two methods, based on the k-means algorithm, for document clustering that allow social tagging to be integrated into the process. The first integrates the tags directly into the Vector Space Model, and the second proposes using the tags to select the initial seeds. The first method allows the tags to be weighted as a function of their occurrence in the document through a parameter called the Social Slider. This method was created on the basis of a prediction model which suggests that, when cosine similarity is used, documents that share tags become closer, while documents that do not become more distant. The second method gave rise to an algorithm we call k-C, which, besides allowing the initial seeds to be selected through a tag network, also changes the way the new centroids are computed in each iteration.
The change to the centroid computation took into account a reflection on the use of Euclidean distance and cosine similarity in the k-means clustering algorithm. For the evaluation of the algorithms, two further algorithms were proposed: the "automatic ground truth" algorithm and the MCI algorithm. The first allows the structure of the data to be detected when it is unknown, and the second is an internal evaluation measure based on the cosine similarity between each document and its closest document. The analysis of preliminary results suggests that the first method of integrating tags into the VSM has more impact on the k-means algorithm than on the k-C algorithm. Furthermore, the results obtained show no correlation between the choice of the SS parameter and the quality of the clusters. Accordingly, the remaining tests were conducted using only the k-C algorithm (without tag integration in the VSM), and the results obtained indicate that this algorithm tends to generate more effective clusters.
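The first tag-integration method can be sketched in a few lines: tag occurrences are appended to the term vectors with a tunable weight (playing the role of the Social Slider), and cosine similarity then pulls documents that share tags closer together. The toy corpus and weight value are illustrative:

```python
import numpy as np

# Toy sketch of tag integration into the Vector Space Model: tag
# features are concatenated to term features with a weight, after which
# cosine similarity reflects shared tags. Corpus and weight are made up.

def tag_augmented_vectors(term_matrix, tag_matrix, weight):
    """Concatenate term and tag features, scaling tags by `weight`."""
    return np.hstack([term_matrix, weight * tag_matrix])

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

terms = np.array([[1.0, 0.0], [0.0, 1.0]])  # two documents, no shared terms
tags = np.array([[1.0], [1.0]])             # but they share one tag

plain = cosine(terms[0], terms[1])          # 0.0: orthogonal without tags
social = cosine(*tag_augmented_vectors(terms, tags, weight=1.0))
print(plain, social)                        # sharing a tag pulls them closer
```

This is exactly the behaviour the prediction model above describes: the tag columns add a shared component to the dot product, so co-tagged documents gain similarity while untagged pairs are, relatively, pushed apart.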


Wireless communication technologies have become widely adopted, appearing in heterogeneous applications ranging from tracking victims, responders and equipment in disaster scenarios to machine health monitoring in networked manufacturing systems. Very often, applications demand a strictly bounded timing response, which, in distributed systems, is generally highly dependent on the performance of the underlying communication technology. These systems are said to have real-time timeliness requirements, since data communication must be conducted within predefined temporal bounds, whose unfulfillment may compromise the correct behavior of the system and cause economic losses or endanger human lives. The potential adoption of wireless technologies for an increasingly broad range of application scenarios has made the operational requirements more complex and heterogeneous than they were for wired technologies. In step with this trend, there is an increasing demand for cost-effective distributed systems with improved deployment, maintenance and adaptation features. These systems tend to require operational flexibility, which can only be ensured if the underlying communication technology provides both time- and event-triggered data transmission services while supporting on-line, on-the-fly parameter modification. Generally, wireless-enabled applications have deployment requirements that can only be addressed through the use of batteries and/or energy-harvesting mechanisms for power supply. These applications usually have stringent autonomy requirements and demand a small form factor, which hinders the use of large batteries. As the communication support may represent a significant part of the energy requirements of a station, the use of power-hungry technologies is not adequate. Hence, in such applications, low-power technologies have been widely adopted.
In fact, although low-power technologies provide smaller data rates, they spend just a fraction of the energy of their higher-power counterparts. The timeliness requirements of data communications can, in general, be met by ensuring the availability of the medium for any station initiating a transmission. In controlled (closed) environments this can be guaranteed, as there is strict regulation of which stations are installed in the area and for which purpose. Nevertheless, in open environments this is hard to control, because no a priori knowledge is available of which stations and technologies may contend for the medium at any given instant. Hence, the support of wireless real-time communications in unmanaged scenarios is a highly challenging task. Wireless low-power technologies have been the focus of a large research effort, for example in the Wireless Sensor Network domain. Although bringing extended autonomy to battery-powered stations, such technologies are known to be negatively influenced by similar technologies contending for the medium and, especially, by technologies using higher-power transmissions over the same frequency bands. A frequency band that is becoming increasingly crowded with competing technologies is the 2.4 GHz Industrial, Scientific and Medical band, encompassing, for example, Bluetooth and ZigBee, two low-power communication standards that are the basis of several real-time protocols. Although these technologies employ mechanisms to improve their coexistence, they are still vulnerable to transmissions from uncoordinated stations with similar technologies or to higher-power technologies such as Wi-Fi, which hinders the support of wireless dependable real-time communications in open environments.
The Wireless Flexible Time-Triggered protocol (WFTT) is a master/multi-slave protocol that builds on the flexibility and timeliness provided by the FTT paradigm and on the deterministic medium capture and maintenance provided by the bandjacking technique. This dissertation presents the WFTT protocol and argues that it allows wireless real-time communication services with high dependability requirements to be supported in open environments where multiple contention-based technologies may dispute medium access. Moreover, it claims that it is feasible to provide flexible and timely wireless communications simultaneously in open environments. The WFTT protocol was inspired by the FTT paradigm, from which higher-layer services such as admission control have been ported. After realizing that bandjacking was an effective technique to ensure medium access and maintenance in open environments crowded with contention-based communication technologies, it was recognized that this mechanism could be used to devise a wireless medium access protocol that brings the features offered by the FTT paradigm to the wireless domain. The performance of the WFTT protocol is reported in this dissertation, with a description of the implemented devices and the test-bed, and a discussion of the obtained results.

Resumo:

The planar design of the solid oxide fuel cell (SOFC) is the most promising one owing to its easier fabrication, improved performance and relatively high power density. In planar SOFCs and other solid-electrolyte devices, gas-tight seals must be formed along the edges of each cell and between the stack and the gas manifolds. Glasses and glass-ceramics (GCs), in particular those based on alkaline-earth aluminosilicate glasses, are becoming the most promising materials for gas-tight sealing applications in SOFCs. Besides the development of new glass-based materials, additional new concepts are required to overcome the challenges faced by current sealant technology. The present work deals with the development of glass- and GC-based materials to be used as sealants for SOFCs and other electrochemical functional applications. In this pursuit, various glasses and GCs in the field of diopside-based crystalline materials were synthesized and characterized by a wide array of techniques. All glasses were prepared by the melt-quenching technique, while GCs were produced by sintering glass-powder compacts at temperatures in the range 800–900 °C for 1–1000 h. Furthermore, the influence of various ionic substitutions, especially SrO for CaO and Ln2O3 (Ln = La, Nd, Gd and Yb) for MgO + SiO2 in Al-containing diopside, on the structure, sintering and crystallization behaviour of the glasses and on the properties of the resultant GCs was investigated, with relevance to the final application as sealants in SOFCs. Based on the results obtained for the diopside-based glasses, a bi-layered GC sealant concept is proposed to overcome the challenges faced by SOFCs. The systems designated Gd−0.3 (in mol%: 20.62 MgO − 18.05 CaO − 7.74 SrO − 46.40 SiO2 − 1.29 Al2O3 − 2.04 B2O3 − 3.87 Gd2O3) and Sr−0.3 (in mol%: 24.54 MgO − 14.73 CaO − 7.36 SrO − 0.55 BaO − 47.73 SiO2 − 1.23 Al2O3 − 1.23 La2O3 − 1.79 B2O3 − 0.84 NiO) were used to realize the bi-layer concept.
Both GCs exhibit similar thermal properties while differing in their amorphous fractions, and revealed excellent thermal stability over a period of 1000 h. They also bonded well to the metallic interconnect (Crofer22APU) and to the 8 mol% yttria-stabilized zirconia (8YSZ) ceramic electrolyte, without forming undesirable interfacial layers at the joints between the SOFC components and the GC. Two separate layers composed of the Gd−0.3 and Sr−0.3 glasses were prepared and deposited onto the interconnect material using a tape-casting approach. The bi-layered GC showed good wetting and bonding to the Crofer22APU plate, a suitable thermal expansion coefficient (9.7–11.1 × 10−6 K−1), mechanical reliability, high electrical resistivity, and strong adhesion to the SOFC components. All these features confirm the suitability of the investigated bi-layered sealant system for SOFC applications.
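Why the reported thermal expansion coefficient range matters can be illustrated with a back-of-envelope mismatch-strain calculation; the Crofer22APU CTE and the operating/room temperatures used below are assumed typical values, not figures from the study itself.

```python
# Back-of-envelope check of thermal-expansion matching between a glass-ceramic
# sealant and the metallic interconnect. On cooling from operating temperature,
# the accumulated mismatch strain is |delta_alpha| * delta_T; large values
# drive cracking or delamination of the seal.
# ASSUMED values: Crofer22APU CTE ~11.9e-6 1/K, cooling from 800 C to 25 C.

def mismatch_strain(cte_seal, cte_metal, t_high_c, t_low_c):
    """Thermal mismatch strain accumulated over the temperature excursion."""
    return abs(cte_seal - cte_metal) * (t_high_c - t_low_c)

CTE_CROFER = 11.9e-6  # 1/K, assumed for the interconnect
for cte_seal in (9.7e-6, 11.1e-6):  # CTE range reported for the bi-layered GC
    eps = mismatch_strain(cte_seal, CTE_CROFER, 800.0, 25.0)
    print(f"sealant CTE {cte_seal:.1e} 1/K -> mismatch strain {eps:.2e}")
```

Even at the low end of the reported range the mismatch strain stays below ~0.2%, which is consistent with the good adhesion observed after thermal cycling.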

Resumo:

In the field of control systems it is common to use techniques based on model adaptation to carry out control of plants whose mathematical analysis may be intricate. Interest in biologically inspired learning algorithms for control techniques, such as Artificial Neural Networks and Fuzzy Systems, is increasing. Along this line, this paper gives a perspective on the quality of the results obtained with two different biologically inspired learning algorithms for the design of B-spline neural networks (BNNs) and fuzzy systems (FSs): Genetic Programming (GP) is used for BNN design, and the Bacterial Evolutionary Algorithm (BEA) is applied to fuzzy rule extraction. The possibility of incorporating a multi-objective approach into the GP algorithm is also outlined, enabling the designer to obtain models better suited to their intended use.
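The building block of a BNN is the B-spline basis function; a generic sketch of its evaluation via the Cox–de Boor recursion is given below (this illustrates the network's basis only, not the GP or BEA design procedures of the paper, and the knot vector is an illustrative choice).

```python
# Cox-de Boor recursion for B-spline basis functions, the activation basis of
# a B-spline neural network (BNN). A BNN output is a weighted sum of these
# bases; learning algorithms such as GP search over knot placement and order.

def bspline_basis(i, k, t, knots):
    """Value at t of the i-th B-spline basis of order k (degree k - 1)."""
    if k == 1:  # order-1 basis: indicator of the i-th knot span
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

knots = [0, 1, 2, 3, 4]  # uniform knot vector (illustrative)
# Order-2 (piecewise-linear) bases form a partition of unity inside the span:
total = sum(bspline_basis(i, 2, 1.5, knots) for i in range(3))
print(total)
```

The partition-of-unity property shown here is what makes BNN outputs local and interpretable, one reason these networks are attractive for the control applications discussed above.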

Resumo:

Owing to their cost and strategic importance, power transformers are a vital component of electric power generation, transmission and distribution systems. Their reliability is therefore a crucial factor in the operation of electric power systems. It is no surprise, then, that these assets attract great concern regarding their maintenance and, consequently, regarding the development of methods capable of providing a complete and reliable diagnosis of their operating condition. To this end, however, it is indispensable to possess detailed knowledge of the faults that may occur in transformers, as well as of the specific mechanisms underlying them.