Abstract:
Enhancing the handover process in broadband wireless communication deployments has traditionally motivated many research initiatives. In the high-speed railway domain, the challenge is even greater: owing to the long distances covered, the mobile node is involved in a compulsory sequence of handover processes, so poor performance during these handovers significantly degrades global end-to-end performance. This article proposes a new handover strategy for the railway domain: the RMPA handover, a Reliable Mobility Pattern Aware IEEE 802.16 handover strategy "customized" for a high-speed mobility scenario. The stringent high-mobility requirement is balanced by three features that a high-speed context provides: mobility pattern awareness, several sources for location discovery techniques, and a previously known traffic data profile. To the best of the authors' knowledge, there is no IEEE 802.16 handover scheme that simultaneously covers the optimization of the handover process itself and the efficient timing of that process. Our strategy covers both areas of research while providing a cost-effective and standards-based solution. To schedule the handover process efficiently, the RMPA strategy uses a context-aware handover policy, that is, a handover policy based on the mobile node's mobility pattern, the time required to perform the handover, the neighboring network conditions, the data traffic profile, the received signal power, and the train's current location and speed. Our proposal merges all these variables in a cross-layer interaction in the handover policy engine. It also enhances the handover process itself by establishing the values for the set of handover configuration parameters and mechanisms of the handover process. RMPA is a cost-effective strategy because compatibility with standards-based equipment is guaranteed. The major contributions of the RMPA handover lie in areas that have been left open to the handover designer's discretion. Our simulation analysis validates the RMPA handover decision rules and design choices. Our results for a high-demand video application in the uplink stream show a significant improvement in end-to-end quality of service parameters, including end-to-end delay (22%) and jitter (80%), when compared with a policy based on signal-to-noise ratio information.
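To make the policy concrete, here is a minimal sketch of how such a context-aware trigger could combine these variables. It is an illustration only, not the article's algorithm: all field names, timing costs, thresholds, and the safety margin are invented assumptions.

```python
# Hypothetical context-aware handover trigger in the spirit of RMPA.
# All constants and names below are illustrative assumptions.

def handover_time_needed(scan_ms, ranging_ms, registration_ms):
    """Assumed time budget for the handover process itself."""
    return scan_ms + ranging_ms + registration_ms

def time_to_cell_edge(position_m, cell_edge_m, speed_mps):
    """A known mobility pattern (the track) makes the time remaining in
    coverage deterministic: distance to the next cell edge over speed."""
    return (cell_edge_m - position_m) / speed_mps

def should_start_handover(position_m, speed_mps, cell_edge_m,
                          rssi_dbm, neighbor_load, traffic_is_critical):
    budget_s = handover_time_needed(50, 40, 60) / 1000.0  # assumed costs
    remaining_s = time_to_cell_edge(position_m, cell_edge_m, speed_mps)
    if remaining_s <= 1.5 * budget_s:   # safety margin: assumption
        return True                     # forced: about to leave coverage
    if rssi_dbm < -85 and neighbor_load < 0.7 and not traffic_is_critical:
        return True                     # opportunistic: good target, idle gap
    return False

# Example: a train 180 m from the cell edge at 83 m/s (about 300 km/h).
print(should_start_handover(0.0, 83.0, 180.0, -80.0, 0.4, False))
```

The point the sketch captures is that mobility pattern awareness turns the handover deadline into a deterministic quantity, so the decision engine can both guarantee completion before coverage is lost and exploit favorable traffic and neighbor conditions.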
Abstract:
The scaling of CMOS technology has driven computation into a diverse range of applications across the power-consumption, performance, and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, a number of key design concepts enable good interconnect design in the age of highly scaled CMOS: an emphasis on highly digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through two design examples at the extremes of the power and performance spectra.
A novel all-digital clock and data recovery (CDR) technique for high-performance, high-density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
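The behavioral sketch below illustrates the eye-scanning half of this ping-pong loop. It is a toy model under stated assumptions: the tap count, the crude transition/metastability model, and the agreement threshold are invented, and the role swap itself is only indicated in a comment.

```python
import random

# Toy model of eye scanning in a ping-pong CDR: a delay line of TAPS taps
# spans 2 UI; samples taken at the swept 'scan' phase are compared against
# samples taken at the current 'data' phase to map out the eye opening.

TAPS = 64  # delay-line taps across 2 UI (assumption)

def sample(phase, bit, prev_bit):
    """Crude eye model: samples near a data transition are unreliable."""
    ui_pos = (phase % (TAPS // 2)) / (TAPS // 2)  # position within one UI
    if bit != prev_bit and (ui_pos < 0.15 or ui_pos > 0.85):
        return random.choice([0, 1])  # metastable edge region
    return bit

def scan_eye(data_phase, n_bits):
    """Sweep the scan clock across all taps, counting agreement with the
    data clock's samples; high agreement marks the open eye."""
    agree = [0] * TAPS
    for ph in range(TAPS):
        prev = 0
        for _ in range(n_bits):
            bit = random.randint(0, 1)
            if sample(ph, bit, prev) == sample(data_phase, bit, prev):
                agree[ph] += 1
            prev = bit
    return agree

n_bits = 2000
agree = scan_eye(data_phase=16, n_bits=n_bits)

# New data phase = middle of the widest run of near-perfect agreement;
# afterwards the two clocks would swap roles (the ping-pong action).
runs, cur = [], []
for p in range(TAPS):
    if agree[p] >= 0.97 * n_bits:
        cur.append(p)
    elif cur:
        runs.append(cur)
        cur = []
if cur:
    runs.append(cur)
best = max(runs, key=len)
print("update data phase to", best[len(best) // 2], "and swap clock roles")
```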
On the other side of the performance/power spectrum, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as to facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled on demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate and rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
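The alignment-sensing path lends itself to a simple behavioral model: plate overlap sets the coupling, the rectified stimulus sets a bias current, the bias sets a delay, and the TDC digitizes that delay. The sketch below walks through that chain; every device constant in it is an invented assumption, not a measured value.

```python
# Toy behavioral model of TDC-based alignment sensing in a capacitive
# proximity link: overlap -> coupling -> rectified bias -> delay -> code.

def coupling_capacitance(overlap_fraction, c_max_ff=10.0):
    """Plate-to-plate capacitance grows with overlap (assumed linear)."""
    return c_max_ff * max(0.0, min(1.0, overlap_fraction))

def bias_current_ua(c_ff, stim_v=1.0, freq_mhz=100.0):
    """Rectified displacement current, roughly i = C * V * f."""
    return c_ff * 1e-15 * stim_v * freq_mhz * 1e6 * 1e6  # result in uA

def delay_ns(i_ua, c_load_ff=50.0, swing_v=0.5):
    """Current-starved delay cell: t = C * V / I."""
    return (c_load_ff * 1e-15 * swing_v / (i_ua * 1e-6)) * 1e9

def tdc_code(t_ns, lsb_ns=1.0, bits=8):
    """Flash-style TDC: quantize the delay into a digital alignment word."""
    return min(int(t_ns / lsb_ns), 2 ** bits - 1)

# Worse alignment -> less coupling -> less bias -> longer delay -> larger code.
for overlap in (1.0, 0.5, 0.25):
    c = coupling_capacitance(overlap)
    print(f"overlap {overlap:4.2f} -> TDC code "
          f"{tdc_code(delay_ns(bias_current_ua(c)))}")
```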
Abstract:
Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation we study these two areas, one in each part. For the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. To that end, we study the network codes constructed from finite groups, and in particular show that linear network codes are embedded in the group network codes constructed from these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike in traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can transmit only a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, one that is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity. In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
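For the first part, the group-theoretic computation behind the search is compact enough to sketch: a finite group G with four subgroups induces an entropy vector via h(S) = log2(|G| / |G_S|), where G_S is the intersection of the chosen subgroups, and Ingleton can be checked directly on that vector. The demo below uses S3, which is far too small to violate Ingleton (the violating groups identified in the dissertation are much larger); it only illustrates the check.

```python
import math

def h(G, *subgroups):
    """Entropy term h(S) = log2(|G| / |G_S|) of a group-characterizable
    entropy vector, with G_S the intersection of the given subgroups."""
    inter = set(G)
    for H in subgroups:
        inter &= set(H)
    return math.log2(len(G) / len(inter))

def ingleton_gap(G, G1, G2, G3, G4):
    """Ingleton in entropy form:
    h12 + h13 + h14 + h23 + h24 >= h1 + h2 + h34 + h123 + h124.
    Returns (left side) - (right side); a negative gap is a violation."""
    rhs = h(G, G1) + h(G, G2) + h(G, G3, G4) \
        + h(G, G1, G2, G3) + h(G, G1, G2, G4)
    lhs = h(G, G1, G2) + h(G, G1, G3) + h(G, G1, G4) \
        + h(G, G2, G3) + h(G, G2, G4)
    return lhs - rhs

# S3 as permutation tuples: three order-2 subgroups and the cyclic A3.
e, a, b, c = (0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1)
r1, r2 = (1, 2, 0), (2, 0, 1)
S3 = [e, a, b, c, r1, r2]
gap = ingleton_gap(S3, {e, a}, {e, b}, {e, c}, {e, r1, r2})
print(f"Ingleton gap for S3 example: {gap:.3f} (negative => violation)")
```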
Abstract:
Buildings in Port Aransas face drastic environmental challenges: the potentially catastrophic storm surge and high winds of a hurricane, and daily conditions hostile to buildings, vehicles, and even most vegetation. Its location a few hundred feet from the Gulf of Mexico and its near-tropical latitude expose buildings to continuous high humidity, winds laden with scouring sand and corrosive salt, and extremes of temperature and ultraviolet light. Building construction methods can address each of these, but doing so in a sustainable way creates significant challenges. The new research building at the Marine Science Institute has been designed, and is being constructed, to meet the demand for both survivability and sustainability. It is tracking toward formal certification as a LEED Gold structure while being robust and resistant to the harsh coastal environment. The effects of a hurricane are mitigated by elevating the building and providing a windproof envelope. Ground-level enclosures are designed to be sacrificial and non-structural so they can wash or blow away without imposing damage on the upper portions of the building, and only non-critical functions and equipment will be housed within them. Design features that integrate survivability with sustainability include: orientation of the building axis; integral shading from direct summer sunlight; light wells; photovoltaic arrays; collection of rainwater and air-conditioning condensate for use in landscape irrigation; reduced impervious cover; xeriscaping and indigenous plants; recycling of waste heat from air-conditioning systems; a roofing system that reflects light and heat; long-life, low-maintenance stainless steel, high-tensile vinyl, hard-anodized aluminum, and hot-dip galvanized mountings throughout; chloride-resistant concrete; reduced visual impact; and recycling of construction materials.
Abstract:
The framework of a laser diode transmitter for intersatellite communication is concisely introduced. A simple, novel, and visual method for measuring the diffraction-limited wavefront of the transmitter with a Jamin double-shearing interferometer is proposed. To verify the validity of the measurement, the far-field divergence of the beam is additionally analysed rigorously in terms of Fraunhofer diffraction. The measurement and the necessary analyses and discussion are given in detail. By directly measuring the fringe widths and quantitatively interpreting the interference fringes, the minimum detectable wavefront height (DWH) of the wavefront is found to be only 0.2λ (the distance between the perfect plane wavefront and the actual wavefront at the transmitting aperture), and the corresponding divergence is only 65.84 µrad. This indicates that the wavefront approaches the diffraction-limited condition. The results show that this interferometer is a powerful tool for testing a semiconductor laser beam's wavefront, especially a diffraction-limited wavefront.
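The Fraunhofer analysis mentioned above is straightforward to reproduce numerically: in the far field, the beam is proportional to the Fourier transform of the aperture field. The sketch below estimates the divergence of a uniformly illuminated circular aperture this way; the wavelength, aperture size, and grid are assumptions for illustration, not the paper's values.

```python
import numpy as np

# Far-field (Fraunhofer) intensity of an aperture field via FFT, and two
# divergence estimates. All physical parameters are assumed values.
lam = 0.8e-6          # wavelength [m] (assumed near-IR laser diode)
D = 0.02              # transmitting aperture diameter [m] (assumption)
N, W = 1024, 0.08     # grid points and physical window [m]

x = np.linspace(-W / 2, W / 2, N)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= (D / 2)**2).astype(float)  # uniform plane wave

far = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture)))
I = np.abs(far) ** 2

# Spatial frequency fx maps to angle via sin(theta) ~ lam * fx.
fx = np.fft.fftshift(np.fft.fftfreq(N, d=W / N))
theta = lam * fx

# Diffraction-limited full angle to the first Airy null: 2 * 1.22 * lam / D.
print(f"Airy-null full divergence: {2 * 1.22 * lam / D * 1e6:.1f} urad")
cut = I[N // 2]
half = cut >= cut.max() / 2
print(f"numerical FWHM divergence: "
      f"{(theta[half].max() - theta[half].min()) * 1e6:.1f} urad")
```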
Abstract:
We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.
This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve the reliability as well as the efficiency of communication networks. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare it with existing MP-TCP algorithms.
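For concreteness, the sketch below simulates a coupled multipath window update of the kind the fluid model describes. It uses the LIA-style coupling of RFC 6356 rather than Balia's exact update rule, and the RTTs and per-path loss probabilities are invented, so it illustrates the mechanism, not the thesis's algorithm.

```python
import random

def coupled_alpha(w, rtt):
    """RFC 6356-style aggressiveness parameter for coupled increase."""
    total = sum(w)
    return total * max(wi / ri**2 for wi, ri in zip(w, rtt)) / \
        sum(wi / ri for wi, ri in zip(w, rtt)) ** 2

def step(w, rtt, loss):
    """One round per subflow: halve on loss, else coupled increase."""
    a = coupled_alpha(w, rtt)
    for r in range(len(w)):
        if random.random() < loss[r]:
            w[r] = max(1.0, w[r] / 2)            # multiplicative decrease
        else:
            w[r] += min(a / sum(w), 1.0 / w[r])  # coupled increase

# Two paths with assumed RTTs (s) and loss probabilities per round.
w, rtt, loss = [10.0, 10.0], [0.05, 0.10], [0.01, 0.02]
for _ in range(5000):
    step(w, rtt, loss)
print(f"long-run windows: {w[0]:.1f}, {w[1]:.1f}")
```

The coupling is what produces the tradeoffs the abstract refers to: the shared term a / sum(w) throttles aggregate aggressiveness for TCP-friendliness, while the per-path cap 1.0 / w[r] keeps any single subflow from out-competing a regular TCP flow.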
Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all the networks we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm, which incurs a loss in optimality of less than 3% on the test networks.
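One common pattern in this family of heuristics, shown below, is to close all switches, use a (relaxed) OPF solve to estimate line flows, and then repeatedly open the switchable line carrying the least flow until the network is radial. This is a generic sketch, not necessarily the thesis's exact algorithm; the network, flow values, and switchability flags are invented, and the flow values stand in for the output of the convex-relaxation OPF solve.

```python
import networkx as nx

# Loop-opening heuristic for feeder reconfiguration on a toy network.
G = nx.Graph()
# (u, v, |flow| estimated by a relaxed OPF solve, is the line switchable?)
edges = [(0, 1, 5.0, False), (1, 2, 3.0, True), (2, 3, 1.0, True),
         (3, 0, 2.0, True), (1, 3, 0.5, True)]
for u, v, f, sw in edges:
    G.add_edge(u, v, flow=f, switchable=sw)

while True:
    cycles = nx.cycle_basis(G)
    if not cycles:
        break  # radial: every load has a unique path to the feeder
    # Switchable edges lying on some loop (assumed non-empty here).
    loop_edges = {frozenset((c[i], c[(i + 1) % len(c)]))
                  for c in cycles for i in range(len(c))}
    candidates = [tuple(e) for e in loop_edges
                  if G.edges[tuple(e)]["switchable"]]
    u, v = min(candidates, key=lambda e: G.edges[e]["flow"])
    G.remove_edge(u, v)  # in practice the OPF would be re-solved here

print("resulting radial topology:", sorted(G.edges))
```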
Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective, such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws couple the network globally. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit problem structure to greatly reduce the computation time. Specifically, for balanced networks our decomposition allows us to derive closed-form solutions for these subproblems, speeding up convergence by 1000x in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, reducing computation time by 100x compared with iterative methods.
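The structural trick, closed-form ADMM subproblems, is easy to see on a toy consensus problem. The sketch below is not the OPF decomposition itself (which splits the SOCP/SDP relaxation across buses), but it shows why iterations become cheap when both the x- and z-updates have closed forms; the data and penalty parameter are assumptions.

```python
import numpy as np

# Scaled-form ADMM for: minimize sum_i 0.5*(x_i - a_i)^2  s.t.  x_i = z.
# Both subproblems below are solved in closed form, so each iteration is
# a few vector operations rather than an inner iterative solve.
rng = np.random.default_rng(0)
a = rng.normal(size=8)       # local data at 8 "buses" (assumption)
rho = 1.0                    # ADMM penalty parameter (assumption)
x = np.zeros_like(a)
z = 0.0
u = np.zeros_like(a)         # scaled dual variables

for _ in range(100):
    # x-update: argmin 0.5*(x-a)^2 + (rho/2)*(x - z + u)^2, closed form.
    x = (a + rho * (z - u)) / (1 + rho)
    # z-update: consensus average, also closed form.
    z = np.mean(x + u)
    # dual update.
    u = u + x - z

print(f"consensus value {z:.4f} vs direct average {a.mean():.4f}")
```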
Abstract:
Far-field spot compression without energy loss in the main lobe is of great significance to wireless laser communication. In this letter, we propose two schemes to achieve such compression. The first is based on the simulated annealing (SA) algorithm: using SA, we design the phase profile of a diffractive phase element (DPE) that achieves far-field spot compression without energy loss in the main lobe. The second is based on the YG algorithm, which we use to design the DPE in the emitting plane to the same effect. (c) 2007 Elsevier GmbH. All rights reserved.
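The SA route can be sketched end to end in a few lines: treat the DPE phase profile as the state, the fraction of far-field energy inside the main lobe as the merit function, and anneal single-pixel perturbations. Everything below (grid size, lobe window, cooling schedule) is an illustrative assumption, not the letter's design.

```python
import numpy as np

# Simulated annealing of a 1-D diffractive phase element (DPE) so that more
# far-field energy lands inside an assumed main-lobe window.
rng = np.random.default_rng(1)
N = 256
phase = rng.uniform(0, 2 * np.pi, N)  # DPE phase profile being designed
aperture = np.ones(N)                 # uniform illumination (assumed)

def main_lobe_fraction(phase, half_width=4):
    """Merit: fraction of far-field energy inside the main-lobe window."""
    far = np.fft.fftshift(np.fft.fft(aperture * np.exp(1j * phase)))
    I = np.abs(far) ** 2
    c = N // 2
    return I[c - half_width:c + half_width + 1].sum() / I.sum()

T = 1.0
cur = main_lobe_fraction(phase)
for _ in range(20000):
    i = rng.integers(N)
    old = phase[i]
    phase[i] = rng.uniform(0, 2 * np.pi)       # perturb one phase pixel
    new = main_lobe_fraction(phase)
    if new >= cur or rng.random() < np.exp((new - cur) / T):
        cur = new                              # accept, possibly uphill
    else:
        phase[i] = old                         # reject and restore
    T = max(T * 0.9995, 1e-3)                  # geometric cooling, floored

print(f"far-field energy fraction in main lobe: {cur:.3f}")
```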
Abstract:
The centralized paradigm of a single controller and a single plant, upon which modern control theory is built, is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks, or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange, and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis are part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories: controller synthesis, architecture design, and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems, which considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller, such as the placement of actuators, sensors, and the communication links between them, can no longer be taken as given -- indeed, the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order but full-rank, while the transfer function of the global dynamics is high-order but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control, and optimization in layered architectures.
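As an illustration of the low-rank/full-rank separation idea above, the sketch below splits an observed response matrix into a low-rank part plus a sparse residual using nuclear norm minimization. It is a generic robust-PCA-style surrogate under stated assumptions, not the thesis's exact formulation; the data, the weight lam, and the use of a sparse (rather than low-order) stand-in for the local part are all illustrative.

```python
import numpy as np
import cvxpy as cp

# Split M into L (low-rank, standing in for the high-order but low-rank
# global dynamics) and S (sparse residual, standing in for the local part)
# via nuclear norm minimization: min ||L||_* + lam*||S||_1  s.t.  L + S = M.
rng = np.random.default_rng(2)
n = 20
global_part = np.outer(rng.normal(size=n), rng.normal(size=n))  # rank 1
local_part = np.zeros((n, n))
idx = rng.integers(0, n, size=(15, 2))
local_part[idx[:, 0], idx[:, 1]] = rng.normal(scale=2.0, size=15)
M = global_part + local_part

L = cp.Variable((n, n))
S = cp.Variable((n, n))
lam = 1.0 / np.sqrt(n)
cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.norm1(S)),
           [L + S == M]).solve()
svals = np.linalg.svd(L.value, compute_uv=False)
print("leading singular values of L:", np.round(svals[:3], 2))
```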
The design of dynamic epithelial interfaces: how new technologies affect tattoo design
Abstract:
The tattoo is an ancient form of body inscription that, despite its age, has undergone little change in materials and techniques. The development of technologies for new modalities of organic intervention will have ramifications across several areas, enabling new interactive epithelial interfaces (responsive dynamic tattoos) and creating new avenues for embodied interaction and communication. In contrast to the traditional practice of static images, dynamic tattoos (DTs) allow the generation of dynamic, interactive images on the skin. Our aim here is to present this new field of research and to reflect on the designer's role in dynamic tattoo design, and on the implications of tattoos that turn the skin into a new medium for interactive and reversible inscriptions.
Abstract:
Migrating to cloud computing is one of the current enterprise challenges. This technology provides a new paradigm, based on "on-demand payment", for information and communication technologies. Small and medium-sized enterprises are expected to be the most interested, since initial investments are avoided and the technology allows gradual implementation. However, even though the technology's characteristics and capacities have been widely discussed, practical, real-world frameworks for entering the cloud are still lacking. This paper aims to fill this gap by presenting a real tool, already implemented and tested, that can be used to support the cloud computing adoption decision. The tool runs a diagnosis based on specific questions to gather the required information and subsequently provides the user with valuable guidance for deploying the business within the cloud, specifically in the form of Software as a Service (SaaS) solutions. This information allows decision makers to generate their particular Cloud Road. A pilot study has been carried out with enterprises at a local level, with a two-fold objective: to ascertain the degree of knowledge of cloud computing, and to identify the most interesting business areas and their related tools for this technology. As expected, the results show high interest in and low knowledge of this subject, and the tool presented aims to redress this mismatch insofar as possible. Copyright: © 2015 Bildosola et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
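A questionnaire-driven diagnostic of the kind described reduces, at its core, to weighted questions and decision thresholds. The sketch below is invented for illustration; the actual tool's questions, weights, and rules are not reproduced here.

```python
# Hypothetical questionnaire-driven cloud-adoption diagnostic. All questions,
# weights, and thresholds are illustrative assumptions.

QUESTIONS = [
    ("Is your workload demand variable or seasonal?",       2),
    ("Do employees need access from multiple locations?",   2),
    ("Is your IT staff smaller than three people?",         1),
    ("Do you want to avoid upfront capital expenditure?",   3),
    ("Are your data free of strict residency constraints?", 2),
]

def recommend(answers):
    """answers: one boolean per question, in order."""
    score = sum(w for (_, w), yes in zip(QUESTIONS, answers) if yes)
    total = sum(w for _, w in QUESTIONS)
    if score >= 0.7 * total:
        return "strong SaaS candidate: draft a Cloud Road"
    if score >= 0.4 * total:
        return "partial fit: pilot one business area first"
    return "low fit: revisit after addressing blockers"

print(recommend([True, True, False, True, True]))
```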