959 results for Communication protocols
Abstract:
One of the major concerns in an Intelligent Transportation System (ITS) scenario, such as that found on a long-distance train service, is the provision of efficient communication services that satisfy users' expectations and fulfill even highly demanding application requirements, such as safety-oriented services. In an ITS scenario, it is common to have a significant number of onboard devices that comprise a cluster of nodes (a mobile network) demanding connectivity to outside networks. This demand has to be satisfied without service disruption; consequently, the mobility of the mobile network has to be managed. Due to the nature of mobile networks, efficient and lightweight protocols are desired in the ITS context to ensure adequate service performance. However, security is also a key factor in this scenario. Since mobility management is essential for providing communications, the protocol that manages this mobility has to be protected. Furthermore, there are safety-oriented services in this scenario, so user application data should also be protected. Nevertheless, providing security is expensive in terms of efficiency. Based on these considerations, we have developed a solution for managing network mobility in ITS scenarios: the NeMHIP protocol. This approach provides secure management of network mobility in an efficient manner. In this article, we present the protocol and the strategy developed to keep its security and efficiency at satisfactory levels. We also present the analytical models developed to quantitatively analyze the efficiency of the protocol. More specifically, we have developed models for assessing it in terms of signaling cost, which demonstrate that NeMHIP generates up to 73.47% less signaling than other relevant approaches.
Therefore, the results obtained demonstrate that NeMHIP is the most efficient and secure solution for providing communications in mobile network scenarios such as an ITS context.
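Relative signaling savings such as the 73.47% figure above are typically derived from analytical cost models. The minimal sketch below shows how such a percentage is computed; the cost model, message counts, and sizes are purely illustrative assumptions, not NeMHIP's actual parameters.

```python
# Illustrative signaling-cost model: cost = handover rate x total bytes of
# signaling exchanged per handover. All numbers are hypothetical.

def signaling_cost(handovers_per_hour, messages_per_handover, bytes_per_message):
    """Signaling bytes generated per hour by a mobility-management protocol."""
    return handovers_per_hour * messages_per_handover * bytes_per_message

def relative_saving(cost_a, cost_b):
    """Fraction of signaling that protocol A saves relative to protocol B."""
    return 1.0 - cost_a / cost_b

# Hypothetical figures for two protocols in the same handover scenario.
cost_lightweight = signaling_cost(12, messages_per_handover=4, bytes_per_message=80)
cost_baseline = signaling_cost(12, messages_per_handover=10, bytes_per_message=120)

saving = relative_saving(cost_lightweight, cost_baseline)
```

A real model would also account for per-hop transmission costs and the mobility pattern of the mobile router, but the headline percentage is always a ratio of two such aggregate costs.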
Abstract:
Fundación Zain is developing new built heritage assessment protocols. The goal is to objectivize and standardize the analysis and decision process that leads to determining the degree of protection of built heritage in the Basque Country. The ultimate step in this objectivization and standardization effort will be the development of an information and communication technology (ICT) tool for the assessment of built heritage. This paper presents the groundwork carried out to make this tool possible: the automatic, image-based delineation of stone masonry. This is a necessary first step in the development of the tool, as the built heritage to be assessed consists of stone masonry construction, and many of the features analyzed can be characterized according to the geometry and arrangement of the stones. Much of the assessment is carried out through visual inspection; this process will therefore be automated by applying image processing to digital images of the elements under inspection. The principal contribution of this paper is the proposed automatic delineation framework. A further contribution is the performance evaluation of this delineation as the input to a classifier for a geometrically characterized feature of a built heritage object. The element chosen for this evaluation is the stone arrangement of masonry walls. The validity of the proposed framework is assessed on real images of masonry walls.
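As a loose illustration of the image-processing idea (not the authors' actual delineation framework), the sketch below thresholds the intensity gradient of a toy grayscale patch to flag candidate mortar joints between stones. The patch values, gradient operator, and threshold are all assumptions for demonstration.

```python
# Toy gradient-based delineation: dark mortar joints between bright stones
# produce large intensity gradients, which we threshold into a joint mask.
# Pure-Python stand-in for a real pipeline (edge detection plus
# morphological cleanup on photographs of masonry walls).

def gradient_magnitude(img):
    """L1 gradient magnitude |dI/dx| + |dI/dy| using forward differences."""
    h, w = len(img), len(img[0])
    grad = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = img[y][x + 1] - img[y][x] if x + 1 < w else 0
            dy = img[y + 1][x] - img[y][x] if y + 1 < h else 0
            grad[y][x] = abs(dx) + abs(dy)
    return grad

def joint_mask(img, threshold):
    """Boolean mask of pixels whose gradient magnitude exceeds the threshold."""
    return [[g > threshold for g in row] for row in gradient_magnitude(img)]

# 4x5 toy patch: two bright "stones" (200) separated by a dark joint column (40).
patch = [
    [200, 200, 40, 200, 200],
    [200, 200, 40, 200, 200],
    [200, 200, 40, 200, 200],
    [200, 200, 40, 200, 200],
]
mask = joint_mask(patch, threshold=100)
```

The flagged pixels straddle the joint column; a real delineation step would then thin and link these responses into closed stone outlines.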
Abstract:
The ACT workshop "Enabling Sensor Interoperability" addressed the need for protocols at the hardware, firmware, and higher levels in order to attain instrument interoperability within and between ocean observing systems. For the purpose of the workshop, participants spoke in terms of "instruments" rather than "sensors," defining an instrument as a device that contains one or more sensors or actuators and can convert signals from analog to digital. An increase in the abundance, variety, and complexity of instruments and observing systems suggests that effective standards would greatly improve "plug-and-work" capabilities. However, few standards or standards bodies currently address instrument interoperability and configuration. Instrument interoperability issues span the length and breadth of these systems, from the measurement to the end user, including middleware services. Instrument interoperability has three major components: the physical, communication, and application/control layers. Participants identified the essential issues, current obstacles, and enabling technologies and standards, and then proposed a series of short- and long-term solutions. The top three recommended actions, deemed achievable within 6 months of the release of this report, are: 1) a list of recommendations for enabling instrument interoperability should be compiled and distributed to instrument developers; 2) a recommendation for funding sources to achieve instrument interoperability should be drafted; and 3) funding should be provided (for example, through NOPP or an IOOS request for proposals) to develop and demonstrate instrument interoperability technologies involving instrument manufacturers, observing system operators, and cyberinfrastructure groups.
Program managers should be identified and made to understand that milestones for achieving instrument interoperability include a) selection of a methodology for uniquely identifying an instrument, b) development of a common protocol for automatic instrument discovery, c) agreement on uniform methods for measurements, d) enablement of end-user-controlled power cycling, and e) implementation of a registry component for IDs and attributes. The top three recommended actions, deemed achievable within 5 years of the release of this report, are: an ocean observing interoperability standards body should be established that addresses standards for a) metadata, b) commands, c) protocols, d) processes, e) exclusivity, and f) naming authorities. [PDF contains 48 pages]
Abstract:
122 p.
Abstract:
Development and management indices identified in the capture fishery resources focus on stock management; freshwater and marine pollution by organic and inorganic compounds, including silting; plankton sustainability; fishing methods; biological productivity; energy cycles; ornamental fish; and sanctuaries. The issue of post-harvest handling and processing is also discussed. The paper also identifies fisheries sectoral problems at the artisanal and industrial level, both at sea and on shore, covering processing plants, infrastructure, manpower, and marketing. The paper suggests that advocacy should be incorporated into extension and communication programmes, ensuring changes in the attitudes of all stakeholders in the fisheries game. The paper concludes by stating that policy makers should stop paying lip service to the fisheries sub-sector and should create a separate Ministry for Fisheries.
Abstract:
In this paper, the gamma-gamma probability distribution is used to model turbulent channels. The bit error rate (BER) performance of free space optical (FSO) communication systems employing on-off keying (OOK) or subcarrier binary phase-shift keying (BPSK) modulation is derived. A tip-tilt adaptive optics system is also incorporated into an FSO system using the above modulation formats. Tip-tilt compensation can alleviate the effects of atmospheric turbulence and thereby improve the BER performance; the improvement differs with turbulence strength and modulation format. In addition, the BER performance of systems employing subcarrier BPSK modulation is much better than that of comparable systems employing OOK modulation, with or without tip-tilt compensation.
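As a rough numerical illustration of this modeling approach (not the paper's exact derivation), the Monte Carlo sketch below averages a conditional BPSK bit-error probability over gamma-gamma fading, generated as the product of two unit-mean gamma variates. The shape parameters, SNR values, and conditional-BER form Q(sqrt(2*snr)*I) are illustrative assumptions.

```python
import math
import random

# Monte Carlo average BER over a gamma-gamma turbulent channel.
# Assumptions: irradiance I = X*Y with X ~ Gamma(alpha), Y ~ Gamma(beta),
# both scaled to unit mean; conditional BER for subcarrier BPSK taken as
# Q(sqrt(2*snr)*I), where Q(x) = 0.5*erfc(x/sqrt(2)). Parameter values are
# not fitted to any measured turbulence strength.

def q_function(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def avg_ber_bpsk(snr, alpha, beta, trials=20000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        # Product of two unit-mean gamma variates models the two scales
        # of turbulence-induced irradiance fluctuation.
        irradiance = rng.gammavariate(alpha, 1.0 / alpha) * rng.gammavariate(beta, 1.0 / beta)
        total += q_function(math.sqrt(2.0 * snr) * irradiance)
    return total / trials

ber_low = avg_ber_bpsk(snr=1.0, alpha=4.0, beta=2.0)
ber_high = avg_ber_bpsk(snr=10.0, alpha=4.0, beta=2.0)
```

Sweeping alpha and beta (which shrink as turbulence strengthens) reproduces the qualitative trend in the abstract: fading flattens the BER curve relative to the unfaded channel.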
Abstract:
The findings are presented of a study conducted in the framework of the Nigerian-German Kainji Lake Fisheries Promotion Project to examine the role and structure of communication in fishing villages around Kainji Lake in Nigeria. The major aim was to be able to utilize the knowledge at a later stage in the project cycle to pass on fisheries extension messages to fishing communities. The study had the following terms of reference: 1) describe the structure and processes of communication of fishermen around Kainji Lake; 2) identify the formal and informal media of communication used by the fishermen to communicate information concerning their job; 3) describe the problems inhibiting usage of the different media identified; 4) ascertain the extent of use of mass media by fishermen around the lake; and, 5) identify acceptable ways by which fisheries information can be repackaged for the use of extension workers. (PDF contains 58 pages)
Abstract:
The scalability of CMOS technology has driven computation into a diverse range of applications across the power consumption, performance and size spectra. Communication is a necessary adjunct to computation, and whether this is to push data from node to node in a high-performance computing cluster or from the receiver of a wireless link to a neural stimulator in a biomedical implant, interconnect can take up a significant portion of the overall system power budget. Although a single interconnect methodology cannot address such a broad range of systems efficiently, there are a number of key design concepts that enable good interconnect design in the age of highly-scaled CMOS: an emphasis on highly-digital approaches to solving ‘analog’ problems, hardware sharing between links as well as between different functions (such as equalization and synchronization) in the same link, and adaptive hardware that changes its operating parameters to mitigate not only variation in the fabrication of the link, but also link conditions that change over time. These concepts are demonstrated through two design examples, at the extremes of the power and performance spectra.
A novel all-digital clock and data recovery technique for high-performance, high density interconnect has been developed. Two independently adjustable clock phases are generated from a delay line calibrated to 2 UI. One clock phase is placed in the middle of the eye to recover the data, while the other is swept across the delay line. The samples produced by the two clocks are compared to generate eye information, which is used to determine the best phase for data recovery. The functions of the two clocks are swapped after the data phase is updated; this ping-pong action allows an infinite delay range without the use of a PLL or DLL. The scheme's generalized sampling and retiming architecture is used in a sharing technique that saves power and area in high-density interconnect. The eye information generated is also useful for tuning an adaptive equalizer, circumventing the need for dedicated adaptation hardware.
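The phase-picking step of the scheme above can be sketched as follows. This is a simplified illustration, assuming the sweep clock has already produced a per-tap error count across the 2-UI delay line; the function name, 16-tap resolution, and error profile are hypothetical.

```python
# Pick the data-recovery phase from an eye scan: the sweep clock's samples
# are compared against the data clock's samples at each delay-line tap, and
# the best data phase is the midpoint of the longest error-free run (the
# widest open region of the eye).

def best_phase(error_counts):
    """Return the tap index at the center of the longest zero-error run."""
    best_start, best_len = 0, 0
    run_start, run_len = 0, 0
    for i, errors in enumerate(error_counts):
        if errors == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len > best_len:
                best_start, best_len = run_start, run_len
        else:
            run_len = 0
    if best_len == 0:
        raise ValueError("no error-free region found in the eye scan")
    return best_start + best_len // 2

# Synthetic 16-tap scan over ~2 UI: errors cluster near the data
# transitions, with an open eye at taps 5..10.
profile = [7, 5, 2, 1, 3, 0, 0, 0, 0, 0, 0, 2, 4, 6, 8, 9]
center = best_phase(profile)
```

In the ping-pong scheme, the data clock would be moved to this tap and the two clocks' roles swapped, so the sweep can continue across the delay line without a PLL or DLL.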
On the other side of the performance/power spectrum, a capacitive proximity interconnect has been developed to support 3D integration of biomedical implants. In order to integrate more functionality while staying within size limits, implant electronics can be embedded onto a foldable parylene (‘origami’) substrate. Many of the ICs in an origami implant will be placed face-to-face with each other, so wireless proximity interconnect can be used to increase communication density while decreasing implant size, as well as to facilitate a modular approach to implant design, where pre-fabricated parylene-and-IC modules are assembled together on demand to make custom implants. Such an interconnect needs to be able to sense and adapt to changes in alignment. The proposed array uses a TDC-like structure to realize both communication and alignment sensing within the same set of plates, increasing communication density and eliminating the need to infer link quality from a separate alignment block. In order to distinguish the communication plates from the nearby ground plane, a stimulus is applied to the transmitter plate, which is rectified at the receiver to bias a delay generation block. This delay is in turn converted into a digital word using a TDC, providing alignment information.
Abstract:
The cytochromes P450 (P450s) are a remarkable class of heme enzymes that catalyze the metabolism of xenobiotics and the biosynthesis of signaling molecules. Controlled electron flow into the thiolate-ligated heme active site allows P450s to activate molecular oxygen and hydroxylate aliphatic C–H bonds via the formation of high-valent metal-oxo intermediates (compounds I and II). Due to the reactive nature and short lifetimes of these intermediates, many of the fundamental steps in catalysis have not been observed directly. The Gray group and others have developed photochemical methods, known as “flash-quench,” for triggering electron transfer (ET) and generating redox intermediates in proteins in the absence of native ET partners. Photo-triggering affords a high degree of temporal precision for the gating of an ET event; the initial ET and subsequent reactions can be monitored on the nanosecond-to-second timescale using transient absorption (TA) spectroscopies. Chapter 1 catalogues critical aspects of P450 structure and mechanism, including the native pathway for formation of compound I, and outlines the development of photochemical processes that can be used to artificially trigger ET in proteins. Chapters 2 and 3 describe the development of these photochemical methods to establish electronic communication between a photosensitizer and the buried P450 heme. Chapter 2 describes the design and characterization of a Ru-P450-BM3 conjugate containing a ruthenium photosensitizer covalently tethered to the P450 surface, and nanosecond-to-second kinetics of the photo-triggered ET event are presented. By analyzing data at multiple wavelengths, we have identified the formation of multiple ET intermediates, including the catalytically relevant compound II; this intermediate is generated by oxidation of a bound water molecule in the ferric resting state enzyme. 
The work in Chapter 3 probes the role of a tryptophan residue situated between the photosensitizer and heme in the aforementioned Ru-P450 BM3 conjugate. Replacement of this tryptophan with histidine does not perturb the P450 structure, yet it completely eliminates the ET reactivity described in Chapter 2. The presence of an analogous tryptophan in Ru-P450 CYP119 conjugates is also necessary for observing oxidative ET, but the yield of heme oxidation is lower. Chapter 4 offers a basic description of the theoretical underpinnings required to analyze ET. Single-step ET theory is first presented, followed by extensions to multistep ET: electron "hopping." The generation of "hopping maps" and the use of a hopping map program to analyze the rate advantage of hopping over single-step ET are described, beginning with an established rhenium-tryptophan-azurin hopping system. This ET analysis is then applied to the Ru-tryptophan-P450 systems described in Chapter 2, strongly supporting the presence of hopping in Ru-P450 conjugates. Chapter 5 explores the implementation of flash-quench and other photo-triggered methods to examine the native reductive ET and gas binding events that activate molecular oxygen. In particular, TA kinetics that demonstrate heme reduction on the microsecond timescale for four Ru-P450 conjugates are presented. In addition, we implement laser flash-photolysis of P450 ferrous–CO to study the rates of CO rebinding in the thermophilic P450 CYP119 at variable temperature. Chapter 6 describes the development and implementation of air-sensitive potentiometric redox titrations to determine the solution reduction potentials of a series of P450 BM3 mutants, which were designed for non-native cyclopropanation of styrene in vivo.
An important conclusion from this work is that substitution of the axial cysteine for serine shifts the wild type reduction potential positive by 130 mV, facilitating reduction by biological redox cofactors in the presence of poorly-bound substrates. While this mutation abolishes oxygenation activity, these mutants are capable of catalyzing the cyclopropanation of styrene, even within the confines of an E. coli cell. Four appendices are also provided, including photochemical heme oxidation in ruthenium-modified nitric oxide synthase (Appendix A), general protocols (Appendix B), Chapter-specific notes (Appendix C) and Matlab scripts used for data analysis (Appendix D).
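For reference, the single-step nonadiabatic ET rate that underlies the hopping analysis in Chapter 4 is conventionally given by the semiclassical Marcus expression; this is the standard textbook form, not a result derived in this work:

```latex
% Semiclassical Marcus expression for the nonadiabatic ET rate:
% H_{AB} is the donor-acceptor electronic coupling, \lambda the
% reorganization energy, and \Delta G^{\circ} the reaction driving force.
k_{\mathrm{ET}} \;=\; \frac{2\pi}{\hbar}\,\lvert H_{AB}\rvert^{2}\,
\frac{1}{\sqrt{4\pi\lambda k_{\mathrm{B}}T}}\,
\exp\!\left[-\frac{\left(\Delta G^{\circ}+\lambda\right)^{2}}{4\lambda k_{\mathrm{B}}T}\right]
```

Hopping maps compare the rate of a single long-distance step against two sequential shorter steps through an intermediate (here, the tryptophan radical), each step governed by an expression of this form.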
Abstract:
16 p.
Abstract:
Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. In the first area, we study so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group, we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that regard, we study the network codes constructed with finite groups, and show in particular that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. In the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each time the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity.
In this work we use techniques from channels with side information, and from finite state channels, to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system, we study the pairwise error probabilities of the input sequences.
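The instantaneous input constraint described above can be made concrete with a small sketch (illustrative dynamics only, not the dissertation's capacity bounds): the transmitted symbol's energy is capped by the current battery level, and the battery then evolves with the causally observed harvest, clipped by the finite battery size.

```python
# Battery-constrained transmission: at each step the symbol energy x_t must
# satisfy x_t <= b_t (current battery), and the battery updates as
# b_{t+1} = min(b_t - x_t + e_t, b_max), where e_t is the harvested energy.
# The harvest sequence and greedy policy below are illustrative assumptions.

def simulate(harvests, requested, b_max, b0=0.0):
    """Run the battery recursion; returns (transmitted energies, battery trace)."""
    battery, sent, trace = b0, [], [b0]
    for e_t, x_req in zip(harvests, requested):
        x_t = min(x_req, battery)                   # instantaneous constraint x_t <= b_t
        battery = min(battery - x_t + e_t, b_max)   # finite battery clips excess harvest
        sent.append(x_t)
        trace.append(battery)
    return sent, trace

harvests = [2.0, 0.0, 3.0, 1.0, 0.0]    # exogenous, observed causally at the transmitter
requested = [1.5, 1.5, 1.5, 1.5, 1.5]   # energy the encoder would like to spend per symbol
sent, trace = simulate(harvests, requested, b_max=2.0)
```

Because the feasible symbol set at each step depends on the entire history of harvests and past transmissions through b_t, the constraint is random, instantaneous, and has memory, which is exactly what makes the capacity analysis difficult.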
Abstract:
International communication strategy followed by Ikea: an analysis of campaigns in different countries, their features, and possible justifications for the differences.