925 results for Centralized and Distributed Multi-Agent Routing Schemas


Relevance:

100.00%

Publisher:

Abstract:

In order to analyse the possibilities of improving grid stability on island systems by local demand response mechanisms, a multi-agent simulation model is presented. To support the primary reserve, an under-frequency load shedding (UFLS) scheme using refrigerator loads is modelled. The model represents the system at multiple scales, recreating each refrigerator individually and coupling the whole population of refrigerators to a model which simulates the frequency response of the energy system, allowing for cross-scale interactions. Using a simple UFLS strategy, emergent phenomena appear in the simulation. Synchronisation effects among the individual loads were discovered, which can have strong, undesirable impacts on the system, such as oscillations of loads and frequency. The phase transition from a stable to an oscillating system is discussed.
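A minimal sketch of the cross-scale coupling described above; every constant, threshold, and the thermal model here are illustrative assumptions, not the authors' parameters:

```python
import random

# All constants below are illustrative assumptions, not the paper's values.
F_NOMINAL = 50.0   # nominal grid frequency, Hz
F_SHED = 49.8      # UFLS threshold: compressors drop out below this
N_FRIDGES = 1000

class Fridge:
    """One refrigerator agent: a deadband thermostat plus a UFLS rule."""
    def __init__(self):
        self.temp = random.uniform(2.0, 8.0)   # interior temperature, deg C
        self.on = self.temp > 5.0

    def step(self, freq):
        if freq < F_SHED:
            self.on = False          # under-frequency load shedding
        elif self.temp > 8.0:
            self.on = True           # thermostat upper bound
        elif self.temp < 2.0:
            self.on = False          # thermostat lower bound
        # Crude thermal dynamics: cool while running, drift warm while off.
        self.temp += -0.05 if self.on else 0.02

def simulate(steps=1000):
    fridges = [Fridge() for _ in range(N_FRIDGES)]
    history = []
    for step in range(steps):
        load = sum(f.on for f in fridges) / N_FRIDGES
        # Toy quasi-static frequency response: frequency sags with load.
        freq = F_NOMINAL - 0.5 * (load - 0.5)
        if step == 500:
            freq -= 0.5              # sudden generation loss triggers UFLS
        for f in fridges:
            f.step(freq)
        history.append((round(freq, 3), round(load, 3)))
    return history

print(simulate()[495:510])
```

Because a shedding event switches many compressors off at once, they later warm through the deadband together and restart nearly in phase; this is the synchronisation mechanism behind the load and frequency oscillations discussed in the abstract.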

Relevance:

100.00%

Publisher:

Abstract:

Cyber-physical systems integrate computation, networking, and physical processes. Substantial research challenges exist in the design and verification of such large-scale, distributed sensing, actuation, and control systems. Rapidly improving technology and recent advances in control theory, networked systems, and computer science give us the opportunity to drastically improve our approach to integrated flow of information and cooperative behavior. Current systems rely on text-based specifications and manual design. Using new technology advances, we can create easier, more efficient, and cheaper ways of developing these control systems. This thesis will focus on design considerations for system topologies, ways to formally and automatically specify requirements, and methods to synthesize reactive control protocols, all within the context of an aircraft electric power system as a representative application area.

This thesis consists of three complementary parts: synthesis, specification, and design. The first part focuses on the synthesis of centralized and distributed reactive controllers for an aircraft electric power system. This approach incorporates methodologies from computer science and control. The resulting controllers are correct by construction with respect to system requirements, which are formulated in the specification language of linear temporal logic (LTL). The second part addresses how to formally specify requirements and introduces a domain-specific language for electric power systems. A software tool automatically converts high-level requirements into LTL and synthesizes a controller.
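As a hedged illustration of the kind of requirement involved (the propositions, the formula, and the toy topology are invented for exposition, not taken from the thesis), a safety rule such as "two AC generators may never feed the same bus" can be written in LTL, and its single-state projection checked by enumeration:

```python
from itertools import product

# Hypothetical topology: two generators, one bus, two contactors.
# c1 connects gen1 to the bus, c2 connects gen2 to the bus.
# Safety requirement (LTL): G !(c1 & c2)  -- "always, never both closed".
SAFETY_LTL = "G !(c1 & c2)"

def violates_safety(state):
    c1, c2 = state
    return c1 and c2

# Enumerate contactor configurations and keep only the safe ones; a
# reactive synthesis tool performs the analogous pruning over infinite
# executions rather than over single states.
safe_states = [s for s in product([0, 1], repeat=2) if not violates_safety(s)]
print(safe_states)   # [(0, 0), (0, 1), (1, 0)]
```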

The final part focuses on design-space exploration. A design methodology is proposed that uses mixed-integer linear programming to obtain candidate topologies, which are then used to synthesize controllers. The discrete-time control logic is then verified in real time by two methods: hardware and simulation. Finally, the problem of partial observability and dynamic state estimation is explored. Given a fixed placement of sensors on an electric power system, measurements from these sensors can be used in conjunction with the control logic to infer the state of the system.
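A toy sketch of the state-estimation idea under partial observability; the three-contactor topology and sensor model are assumptions for illustration only:

```python
from itertools import product

# Hypothetical: three contactors feed one bus, but only contactor 0
# carries a sensor. The bus is powered iff at least one contactor closes.
def bus_powered(contactors):
    return any(contactors)

def consistent_states(sensed_c0, observed_bus_power):
    """All full contactor states consistent with the partial observations."""
    return [c for c in product([0, 1], repeat=3)
            if c[0] == sensed_c0 and bus_powered(c) == observed_bus_power]

# Example: contactor 0 reads open, yet the bus is observed powered,
# so contactor 1 or 2 (or both) must be closed.
print(consistent_states(0, True))   # [(0, 0, 1), (0, 1, 0), (0, 1, 1)]
```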

Relevance:

100.00%

Publisher:

Abstract:

Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example personal computers, data centers, and embedded systems. However, such systems suffer from issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
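The following toy example illustrates the access-ratio idea on the smallest possible array. The zigzag-style parity is taken over GF(2) for readability, so it demonstrates only the rebuilding access pattern, not the full MDS property (which requires coefficients from a larger field, as in the actual constructions):

```python
# A 2x2 data array with a row-parity column R and a "zigzag" parity column Z.
a = [1, 0]                        # data column A (to be erased)
b = [1, 1]                        # data column B
r = [a[0] ^ b[0], a[1] ^ b[1]]    # row parities
z = [a[0] ^ b[1], a[1] ^ b[0]]    # zigzag parities (rows permuted)

# Rebuild erased column A by mixing the two parity types so that the
# same surviving symbol b[0] serves both equations:
a0 = r[0] ^ b[0]    # from row parity:    a0 = r0 + b0
a1 = z[1] ^ b[0]    # from zigzag parity: a1 = z1 + b0
assert [a0, a1] == a

# Symbols read: b[0], r[0], z[1] -> 3 of the 6 surviving symbols, i.e. 1/2,
# versus reading all of b and r (4 symbols) with row parity alone.
print("rebuilt column A from half the surviving data")
```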

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only some of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms; we show that overlaps between windows increase capacity. We present Gray codes spanning all possible partial-rank states and using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem, the universal cycle: a sequence of integers generating all possible partial permutations.
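A small sketch of the rank-modulation representation and the "push-to-the-top" primitive; the cell count and charge values are arbitrary:

```python
def permutation(charges):
    """Rank modulation: the stored datum is the permutation of cell
    indices ordered by decreasing charge level."""
    return sorted(range(len(charges)), key=lambda i: -charges[i])

def push_to_top(charges, i):
    """The only programming primitive: raise cell i above the current max."""
    charges[i] = max(charges) + 1.0

cells = [3.2, 1.1, 2.7, 0.4]
print(permutation(cells))    # [0, 2, 1, 3]
push_to_top(cells, 3)
print(permutation(cells))    # [3, 0, 2, 1]
```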

Relevance:

100.00%

Publisher:

Abstract:

This thesis explores the design, construction, and applications of the optoelectronic swept-frequency laser (SFL). The optoelectronic SFL is a feedback loop designed around a swept-frequency (chirped) semiconductor laser (SCL) to control its instantaneous optical frequency, such that the chirp characteristics are determined solely by a reference electronic oscillator. The resultant system generates precisely controlled optical frequency sweeps. In particular, we focus on linear chirps because of their numerous applications. We demonstrate optoelectronic SFLs based on vertical-cavity surface-emitting lasers (VCSELs) and distributed-feedback lasers (DFBs) at wavelengths of 1550 nm and 1060 nm. We develop an iterative bias current predistortion procedure that enables SFL operation at very high chirp rates, up to 10^16 Hz/sec. We describe commercialization efforts and implementation of the predistortion algorithm in a stand-alone embedded environment, undertaken as part of our collaboration with Telaris, Inc. We demonstrate frequency-modulated continuous-wave (FMCW) ranging and three-dimensional (3-D) imaging using a 1550 nm optoelectronic SFL.
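As a quick illustration of FMCW ranging (the target range below is arbitrary): a target at range R imposes a round-trip delay tau = 2R/c, and mixing the return with the outgoing chirp of rate kappa yields a beat frequency f_b = kappa * tau:

```python
C = 3.0e8   # speed of light, m/s

def beat_frequency(range_m, chirp_rate_hz_per_s):
    """Beat frequency of an FMCW return from a target at range_m."""
    tau = 2.0 * range_m / C          # round-trip delay
    return chirp_rate_hz_per_s * tau

# At the 1e16 Hz/s chirp rate quoted above, a target 1 m away produces
# a beat near 67 MHz, comfortably within electronic bandwidths.
print(beat_frequency(1.0, 1e16))     # ~6.7e7 Hz
```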

We develop the technique of multiple source FMCW (MS-FMCW) reflectometry, in which the frequency sweeps of multiple SFLs are "stitched" together in order to increase the optical bandwidth, and hence improve the axial resolution, of an FMCW ranging measurement. We demonstrate computer-aided stitching of DFB and VCSEL sweeps at 1550 nm. We also develop and demonstrate hardware stitching, which enables MS-FMCW ranging without additional signal processing. The culmination of this work is the hardware stitching of four VCSELs at 1550 nm for a total optical bandwidth of 2 THz, and a free-space axial resolution of 75 microns.
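The quoted resolution follows from the stitched bandwidth via the standard relation delta_z = c / (2B); a quick check of the numbers above:

```python
C = 3.0e8   # speed of light, m/s

def axial_resolution(bandwidth_hz):
    """Free-space axial resolution of an FMCW measurement."""
    return C / (2.0 * bandwidth_hz)

# Four stitched VCSEL sweeps totalling 2 THz of optical bandwidth:
print(axial_resolution(2e12))    # 7.5e-05 m, i.e. 75 microns
```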

We describe our work on the tomographic imaging camera (TomICam), a 3-D imaging system based on FMCW ranging that features non-mechanical acquisition of transverse pixels. Our approach uses a combination of electronically tuned optical sources and low-cost full-field detector arrays, completely eliminating the need for the moving parts traditionally employed in 3-D imaging. We describe the basic TomICam principle and demonstrate single-pixel TomICam ranging in a proof-of-concept experiment. We also discuss the application of compressive sensing (CS) to the TomICam platform and perform a series of numerical simulations. These simulations show that tenfold compression is feasible in CS TomICam, which effectively improves the volume acquisition speed by a factor of ten.

We develop chirped-wave phase-locking techniques, and apply them to coherent beam combining (CBC) of chirped-seed amplifiers (CSAs) in a master oscillator power amplifier configuration. The precise chirp linearity of the optoelectronic SFL enables non-mechanical compensation of optical delays using acousto-optic frequency shifters, and its high chirp rate simultaneously increases the stimulated Brillouin scattering (SBS) threshold of the active fiber. We characterize a 1550 nm chirped-seed amplifier coherent-combining system. We use a chirp rate of 5*10^14 Hz/sec to increase the amplifier SBS threshold threefold, when compared to a single-frequency seed. We demonstrate efficient phase-locking and electronic beam steering of two 3 W erbium-doped fiber amplifier channels, achieving temporal phase noise levels corresponding to interferometric fringe visibilities exceeding 98%.

Relevance:

100.00%

Publisher:

Abstract:

This work is concerned with a general analysis of wave interactions in periodic structures and particularly periodic thin film dielectric waveguides.

The electromagnetic wave propagation in an asymmetric dielectric waveguide with a periodically perturbed surface is analyzed in terms of a Floquet mode solution. First-order approximate analytical expressions for the space harmonics are obtained. The solution is used to analyze various applications: (1) phase-matched second-harmonic generation in periodically perturbed optical waveguides; (2) grating couplers and thin-film filters; (3) Bragg reflection devices; (4) the calculation of the traveling-wave interaction impedance for solid-state and vacuum-tube optical traveling-wave amplifiers which utilize periodic dielectric waveguides. Some of these applications are of interest in the field of integrated optics.
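A sketch of the Floquet-mode bookkeeping underlying several of these applications (the wavelength, indices, and period below are illustrative, not from the thesis): a guided mode with propagation constant beta_0 in a structure perturbed with period Lambda acquires space harmonics beta_n = beta_0 + 2*pi*n/Lambda, and, for example, a grating coupler radiates whenever some harmonic falls inside the cover light cone, |beta_n| <= n_cover * k0:

```python
import math

def space_harmonics(beta0, period, orders=range(-3, 4)):
    """Floquet space-harmonic propagation constants of a periodic guide."""
    return {n: beta0 + 2.0 * math.pi * n / period for n in orders}

# Illustrative numbers: 1.0 um free-space wavelength, effective index 1.8,
# cover index 1.0, grating period 0.8 um (all assumed).
wavelength, n_eff, n_cover, period = 1.0e-6, 1.8, 1.0, 0.8e-6
k0 = 2.0 * math.pi / wavelength
for n, beta_n in space_harmonics(n_eff * k0, period).items():
    radiates = abs(beta_n) <= n_cover * k0
    print(f"n={n:+d}: beta_n={beta_n:.3e} rad/m, radiates={radiates}")
```

With these numbers only the n = -1 harmonic lands inside the light cone, which is the usual condition exploited in grating couplers.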

A special emphasis is put on the analysis of traveling-wave interaction between electrons and electromagnetic waves in various operation regimes. Interactions with a finite-temperature electron beam in the collision-dominated, collisionless, and quantum regimes are analyzed in detail, assuming a one-dimensional model and longitudinal coupling.

The analysis is used to examine the possibility of solid-state traveling-wave devices (amplifiers, modulators), and some monolithic structures of these devices are suggested, designed to operate in the submillimeter/far-infrared frequency regime. The estimates of attainable traveling-wave interaction gain are quite low (on the order of a few inverse centimeters); however, the possibility of attaining net gain with different materials, structures, and operating conditions is not ruled out.

The developed model is used to discuss the possibility and the theoretical limitations of high-frequency (optical) operation of vacuum electron beam tubes, and the relations to other electron-electromagnetic wave interaction effects (Smith-Purcell and Cerenkov radiation and the free-electron laser) are pointed out. Finally, the case where the periodic structure is the natural crystal lattice is briefly discussed. The longitudinal component of the optical space harmonics in the crystal is calculated and found to be of the order of magnitude of the macroscopic wave, and some comments are made on the possibility of coherent bremsstrahlung and distributed-feedback lasers in single crystals.

Relevance:

100.00%

Publisher:

Abstract:

Optical coherence tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach them) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects of arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a careful implementation of the importance-sampling/photon-splitting procedure, efficient use of a voxel-based mesh system for determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which are explained in detail later in the thesis.
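At the core of any such simulator is sampling photon free paths from the Beer-Lambert law; the minimal sketch below (with an assumed interaction coefficient and a deliberately crude 1-D scattering rule) shows the skeleton on which importance sampling, photon splitting, and voxel-mesh interception are layered:

```python
import math
import random

MU_T = 10.0e3   # total interaction coefficient, 1/m (illustrative value)

def sample_free_path(mu_t):
    """Draw a free path length s with density mu_t * exp(-mu_t * s)."""
    return -math.log(1.0 - random.random()) / mu_t

def random_walk(depth_limit, mu_t=MU_T, max_events=1000):
    """Trace one photon's depth coordinate until it exits the slab."""
    z, direction = 0.0, 1.0
    for _ in range(max_events):
        z += direction * sample_free_path(mu_t)
        if z <= 0.0:
            return "backscattered"    # contributes to the OCT signal
        if z >= depth_limit:
            return "transmitted"
        # Crude 1-D scattering: flip direction with probability 1/2.
        direction = random.choice([-1.0, 1.0])
    return "terminated"

counts = {}
for _ in range(10000):
    outcome = random_walk(depth_limit=1.0e-3)   # 1 mm slab
    counts[outcome] = counts.get(outcome, 0) + 1
print(counts)
```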

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we can interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do this remarkably well: for simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) about 93%. We achieve this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing an effectively unlimited amount of data in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model that determines its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model, trained specifically for that particular structure, which predicts the thickness of the different layers and thereby reconstructs the ground truth of the image. We also demonstrate that ideas from deep learning can further improve the performance.
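A compact sketch of this committee-of-experts routing; the synthetic scans, the two-structure toy problem, and the random-forest models stand in for the real pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1-D "A-scans" of 64 pixels for two structure
# classes, each labelled with the layer thickness that generated it.
def make_scan(structure, thickness):
    scan = rng.normal(0.0, 0.1, 64)
    scan[: int(thickness)] += 1.0 if structure == 0 else 0.5
    return scan

X, y_struct, y_thick = [], [], []
for s in (0, 1):
    for t in rng.uniform(5, 40, 200):
        X.append(make_scan(s, t)); y_struct.append(s); y_thick.append(t)
X, y_struct, y_thick = np.array(X), np.array(y_struct), np.array(y_thick)

# Gate: a classifier decides which structure the image has ...
gate = RandomForestClassifier(random_state=0).fit(X, y_struct)
# ... then one regressor per structure predicts the layer thickness.
experts = {s: RandomForestRegressor(random_state=0)
              .fit(X[y_struct == s], y_thick[y_struct == s])
           for s in (0, 1)}

def predict(scan):
    s = gate.predict(scan[None, :])[0]
    return s, experts[s].predict(scan[None, :])[0]

print(predict(make_scan(0, 20.0)))   # structure 0, thickness near 20
```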

It is worth pointing out that solving the inverse problem automatically improves the imaging depth: the lower half of an OCT image (i.e., greater depth) could previously hardly be seen, but now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is just another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only the first attempt to reconstruct an OCT image at the pixel level, but also a successful one. Even attempting such a task requires a large number of fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance:

100.00%

Publisher:

Abstract:

Multi-agent systems are receiving increasing attention from researchers and developers of virtual games. The use of agents makes it possible to monitor user performance, adapting the interface and automatically adjusting the difficulty level of tasks. This work describes a strategy for integrating multi-agent systems with three-dimensional virtual environments, and demonstrates the feasibility of this integration through the development of a game with serious-game characteristics. The game aims to stimulate cognitive functions such as attention and memory, and is aimed at people with different neuropsychiatric disorders. The construction of the game followed a development process composed of several stages: theoretical studies of the areas involved, a study of technologies capable of supporting the integration, requirements elicitation with specialists, implementation, and evaluation with specialists. The final product was evaluated by medical experts, who considered the results positive.
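A minimal sketch of the performance-monitoring agent idea described above; the window size, thresholds, and difficulty scale are illustrative assumptions:

```python
class DifficultyAgent:
    """Watches the player's recent outcomes and adapts task difficulty."""
    def __init__(self, window=5):
        self.window = window
        self.results = []          # 1 = task solved, 0 = task failed
        self.level = 1             # difficulty on an assumed 1..10 scale

    def observe(self, solved):
        self.results.append(1 if solved else 0)
        self.results = self.results[-self.window:]
        rate = sum(self.results) / len(self.results)
        # Illustrative policy: ease off when the player struggles,
        # raise the challenge when the tasks look too easy.
        if rate < 0.4 and self.level > 1:
            self.level -= 1
        elif rate > 0.8 and self.level < 10:
            self.level += 1
        return self.level

agent = DifficultyAgent()
for outcome in [1, 1, 1, 1, 1, 0, 0, 0]:
    print(agent.observe(outcome))
```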

Relevance:

100.00%

Publisher:

Abstract:

Blowflies are insects of forensic interest as they may indicate characteristics of the environment where a body has been lying prior to its discovery. In order to estimate changes in community composition related to landscape, and to assess whether blowfly species can be used as indicators of the landscape where a corpse has been decaying, we studied the blowfly community and how it is affected by landscape in a 7,000 km(2) region over a whole year. Using baited traps deployed monthly, we collected 28,507 individuals of 10 calliphorid species, 7 of them well represented and distributed across the study area. Multivariate analysis of variance found changes in abundance between seasons in the 7 analyzed species, and changes related to land use in 4 of them (Calliphora vomitoria, Lucilia ampullacea, L. caesar and L. illustris). Generalised linear model analyses comparing the abundance of these species with landscape descriptors at different scales found a clear significant relationship only between the summer abundance of C. vomitoria and the distance to urban areas and degree of urbanisation. This relationship explained more deviance when landscape composition was considered at larger geographical scales (up to 2,500 m around the sampling site). For the other species, no clear relationship between land use and abundance was found, so the observed changes in their abundance patterns could be the result of other variables, probably small changes in temperature. Our results suggest that blowfly community composition cannot be used to infer in what kind of landscape a corpse has decayed, at least in highly fragmented habitats, the only exception being the summer abundance of C. vomitoria.

Relevance:

100.00%

Publisher:

Abstract:

The aim of this ex vivo study was to evaluate apical bacterial extrusion after mechanized instrumentation with single-file reciprocating systems (WaveOne and Reciproc) compared with a multi-file system (BioRaCe). Forty-five single-rooted human mandibular incisors with oval canals and similar anatomy were used. The teeth were accessed and their root canals contaminated with a suspension of Enterococcus faecalis and incubated for 30 days to allow bacterial growth as a biofilm. The contaminated teeth were divided into three groups of 15 specimens each (RE - Reciproc, WO - WaveOne, and BR - BioRaCe). Eight teeth were used as positive and negative bacterial growth controls. Bacteria extruded apically during instrumentation were collected in glass vials containing 0.9% NaCl. Microbiological samples were taken from the vials and incubated on BHI agar for 24 hours. Bacterial growth was counted and the results expressed in colony-forming units (CFU). The data were analyzed with the Wilcoxon and Kruskal-Wallis statistical tests. There was no statistically significant difference in the number of CFU between the two reciprocating systems (p>0.05). In contrast, the rotary instrument system showed a significantly higher CFU count than the other two groups (p<0.05). From the analysis of the results, and within the limitations of this study, it was possible to conclude that all the instrumentation systems tested extrude bacteria apically; however, both single-file reciprocating systems extruded fewer bacteria apically than the multi-file rotary reference system.
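A sketch of the reported group comparison using scipy's Kruskal-Wallis test; the CFU counts below are hypothetical placeholders, not the study's measurements:

```python
from scipy.stats import kruskal

# Hypothetical CFU counts per specimen (NOT the study's data).
reciproc = [120, 95, 130, 110, 105]
waveone  = [115, 100, 125, 108, 112]
biorace  = [240, 260, 230, 255, 245]

stat, p = kruskal(reciproc, waveone, biorace)
print(f"H = {stat:.2f}, p = {p:.4f}")
# A p-value below 0.05 indicates that at least one system's apical
# extrusion differs, matching the pattern the study reports.
```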

Relevance:

100.00%

Publisher:

Abstract:

Agent-oriented modeling is emerging as a software development paradigm, as evidenced by the number of initiatives and studies that turn to software agents as a solution for more complex problems. Despite the popularity of agents, specialists run into the lack of a universal methodology for building multi-agent systems (MAS), since existing methodologies err through either an excess or a lack of solutions for modeling the problem. This dissertation proposes the use of an ontology of multi-agent methodologies, following the principles of Situational Method Engineering, which proposes using method fragments to build methodologies based on the specifics of the project under development. The objective of the study is to consolidate knowledge in the area of multi-agent methodologies, helping the software engineer choose the best methodology, or the best methodology fragment, for modeling a multi-agent system.

Relevance:

100.00%

Publisher:

Abstract:

Agent technology has been recognized as a promising paradigm for next-generation educational systems. However, the effort demanded by, and the inflexibility of, some agent-specific methodologies result in high cost, long development time, and scope adaptation. This work evaluates alternatives for developing an agent-oriented educational medical game through a case study, in order to verify whether methodologies specific to the implementation of multi-agent systems bring benefits to the final result of the game's implementation, and also whether the results achieved when comparing traditional and agile development processes make a difference in the final outcome. To this end, this work compares three methodologies based on software engineering concepts through a case study: O-MaSE, a traditional multi-agent system development methodology that uses a traditional development process; AgilePASSI, which is based on an agile development process and is specific to multi-agent systems; and, finally, Scrum, an agile methodology that is not specific to the implementation of multi-agent systems.

Relevance:

100.00%

Publisher:

Abstract:

This report provides baseline biological data on fishes, corals, and habitats in Coral and Fish Bays, St. John, USVI. A similar report with data on nutrients and contaminants in the same bays is planned for completion in 2013. Data from NOAA's long-term Caribbean Coral Reef Ecosystem Monitoring program were compiled to provide a baseline assessment of corals, fishes, and habitats from 2001 to 2010, the data needed to assess the impacts of erosion control projects installed from 2010 to 2011. The baseline data supplement other information collected as part of the USVI Watershed Stabilization Project, a project funded by the American Recovery and Reinvestment Act of 2009 and distributed through the NOAA Restoration Center, but use data that are not within the scope of ARRA-funded work. We present data on 16 ecological indicators of fishes, corals, and habitats. These indicators were chosen because of their sensitivity to changes in water quality noted in the scientific literature (e.g., Rogers 1990, Larsen and Webb 2009). We report long-term averages and corresponding standard errors, plot annual averages, map indicator values, and list inventories of the coral and fish species identified in the surveys. Similar data will be needed in the future to make rigorous comparisons and determine the magnitude of any impacts from watershed stabilization.