990 results for Maximum distance profile (MDP) convolutional codes


Relevance:

100.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
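
As a hedged illustration of the symmetric case, the sketch below computes the probability of successful recovery under assumptions of mine that the abstract does not fix: the budget T is measured in units of the data size, each of the m nonempty nodes stores T/m, and the collector accesses each node independently with probability p, so recovery succeeds when at least ceil(m/T) nonempty nodes are reached.

```python
# A hedged sketch, not the thesis's method: budget T in units of the data
# size, m nonempty nodes each storing T/m, each node accessed independently
# with probability p. Recovery succeeds when the accessed storage totals at
# least 1, i.e. when at least ceil(m/T) nonempty nodes are reached.
import math

def symmetric_recovery_prob(T: float, m: int, p: float) -> float:
    """P(successful recovery) for a symmetric allocation of budget T over m nodes."""
    k_min = math.ceil(m / T)               # nodes needed so that k * (T/m) >= 1
    if k_min > m:                          # budget below 1: recovery impossible
        return 0.0
    return sum(math.comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(k_min, m + 1))

# Spreading maximally vs. minimally, echoing the heuristic above:
for m in (2, 5, 20):
    print(m, round(symmetric_recovery_prob(T=3.0, m=m, p=0.6), 4))
```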

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
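
One hedged way to see the reduction mentioned above: if the collector meets each storage node according to an independent Poisson contact process with rate lam (a common mobility-model assumption, not stated in the abstract), then a node is reached by deadline d with probability 1 - exp(-lam*d), and the deadline problem becomes the static access-probability problem for a symmetric allocation.

```python
# Hedged sketch: Poisson contacts with rate lam turn the deadline problem
# into the static access problem with p = 1 - exp(-lam * d).
import math

def recovery_by_deadline(T: float, m: int, lam: float, d: float) -> float:
    p = 1.0 - math.exp(-lam * d)           # P(a given node is met by deadline d)
    k_min = math.ceil(m / T)               # nodes needed to accumulate the data
    if k_min > m:
        return 0.0
    return sum(math.comb(m, k) * p**k * (1 - p)**(m - k)
               for k in range(k_min, m + 1))

print(round(recovery_by_deadline(T=3.0, m=5, lam=0.5, d=2.0), 4))
```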

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
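
A minimal Monte Carlo sketch for the i.i.d. erasure model, under assumptions of mine rather than the paper's constructions: each message is allotted T packet slots before its deadline, and an MDS-like intrasession code decodes the message iff at least k of those T packets survive, each packet being erased independently with probability eps.

```python
# Hedged Monte Carlo estimate of the decoding probability under an MDS-like
# assumption (any k of the message's T packets suffice to decode).
import random

def decoding_prob(k: int, T: int, eps: float, trials: int = 100_000) -> float:
    ok = 0
    for _ in range(trials):
        arrived = sum(random.random() >= eps for _ in range(T))
        ok += arrived >= k                 # MDS assumption: any k packets suffice
    return ok / trials

print(round(decoding_prob(k=4, T=8, eps=0.1), 4))
```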

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
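
For context, here is a minimal sketch of the LRU cache replacement used by the baseline protocol; the interface and capacity handling are illustrative, not taken from the paper.

```python
# Minimal LRU cache sketch using OrderedDict; capacity and interface are
# illustrative assumptions, not the paper's protocol.
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.store = OrderedDict()         # keys ordered least to most recent

    def get(self, name):
        if name not in self.store:
            return None                    # miss: forward the interest upstream
        self.store.move_to_end(name)       # mark as most recently used
        return self.store[name]

    def put(self, name, data):
        if name in self.store:
            self.store.move_to_end(name)
        self.store[name] = data
        if len(self.store) > self.capacity:
            self.store.popitem(last=False) # evict least recently used item
```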

Relevance:

100.00%

Abstract:

As consumers demand more functionality from their electronic devices and manufacturers meet that demand, electrical power and clock requirements tend to increase; reassessing the system architecture, however, can lead to worthwhile reductions in both. To maintain low clock rates and thereby reduce electrical power, this paper presents a parallel convolutional coder for the transmit side of many wireless consumer devices. The coder accepts a parallel data input and directly computes punctured convolutional codes without the need for a separate puncturing operation, while the coded bits are available at the output of the coder in parallel. Because the computation is performed in parallel, the coder can be clocked 7 times slower than a conventional shift-register-based convolutional coder (using the DVB 7/8 rate). The presented coder is directly relevant to the design of modern low-power consumer devices.
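
A serial reference sketch of the behavior such a coder computes, with details hedged as assumptions: the DVB rate-1/2, constraint-length-7 mother code with generators 171/133 (octal), punctured to rate 7/8 (patterns X = 1000101, Y = 1111010, keeping 8 coded bits per 7 input bits). The paper's parallel coder produces the kept bits directly; this sketch only fixes the input-output behavior.

```python
# Hedged sketch: rate-1/2, K = 7 convolutional encoder with puncturing to
# rate 7/8. Generators and puncturing patterns are my assumptions based on
# the DVB standard; the bit-ordering convention is one common choice.
G1, G2 = 0o171, 0o133                      # generator polynomials, K = 7
PX, PY = [1,0,0,0,1,0,1], [1,1,1,1,0,1,0]  # assumed DVB 7/8 puncturing patterns

def parity(x: int) -> int:
    return bin(x).count("1") & 1

def encode_punctured(bits):
    state, out = 0, []
    for i, b in enumerate(bits):
        state = ((state << 1) | b) & 0x7F  # 7-bit register incl. current input
        x, y = parity(state & G1), parity(state & G2)
        if PX[i % 7]: out.append(x)        # keep X bit only where pattern is 1
        if PY[i % 7]: out.append(y)
    return out

coded = encode_punctured([1,0,1,1,0,0,1, 0,1,1,0,1,0,1])
print(len(coded), coded)                   # 16 coded bits for 14 inputs: rate 7/8
```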

Relevance:

100.00%

Abstract:

The Shuttle Radar Topography Mission (SRTM) was flown on the space shuttle Endeavour in February 2000, with the objective of acquiring a digital elevation model of all land between 60 degrees north latitude and 56 degrees south latitude, using interferometric synthetic aperture radar (InSAR) techniques. The SRTM data are distributed at a horizontal resolution of 1 arc-second (~30 m) for areas within the USA and at 3 arc-second (~90 m) resolution for the rest of the world. A resolution of 90 m can be considered suitable for small- or medium-scale analysis, but it is too coarse for more detailed purposes. One alternative is to interpolate the SRTM data at a finer resolution; this will not increase the level of detail of the original digital elevation model (DEM), but it will lead to a surface with coherent angular properties (i.e. slope, aspect) between neighbouring pixels, which is an important characteristic in terrain analysis. This work intends to show how the proper adjustment of variogram and kriging parameters, namely the nugget effect and the maximum distance within which values are used in interpolation, can be set to achieve quality results when resampling SRTM data from 3" to 1". We present results for a test area in the western USA, covering different adjustment schemes (changes in the nugget effect value and in the interpolation radius) and comparisons with the original 1" model of the area, with the national elevation dataset (NED) DEMs, and with other interpolation methods (splines and inverse distance weighting (IDW)). The basic concepts for using kriging to resample terrain data are: (i) working only with the immediate neighbourhood of the predicted point, owing to the high spatial correlation of the topographic surface and the omnidirectional behaviour of the variogram at short distances; (ii) adding a very small random variation to the coordinates of the points prior to interpolation, to avoid punctual artifacts generated by predicted points at the same location as original data points; and (iii) using a small nugget effect value, to avoid smoothing that can obliterate terrain features. Drainages derived from the surfaces interpolated by kriging and by splines agree well with streams derived from the 1" NED, with correct identification of watersheds, even though a few differences occur in the positions of some rivers in flat areas. Although the 1" surfaces resampled by kriging and by splines are very similar, we consider the results produced by kriging superior, since the spline-interpolated surface still presented some noise and linear artifacts, which were removed by kriging.
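
The three practices (i)-(iii) can be made concrete in a compact ordinary-kriging sketch; the exponential variogram and all parameter values below are illustrative assumptions, not the paper's fitted model.

```python
# Compact ordinary-kriging sketch following the three practices above:
# (i) local neighbourhood, (ii) tiny coordinate jitter, (iii) small nugget.
# Variogram model and parameters are illustrative assumptions.
import numpy as np

def variogram(h, nugget=1e-3, sill=1.0, rng=900.0):
    """Exponential variogram with a small nugget (practice iii)."""
    return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / rng))

def krige_point(xy, z, x0, radius=300.0, jitter=1e-3, seed=0):
    rs = np.random.default_rng(seed)
    xy = xy + rs.normal(0.0, jitter, xy.shape)   # (ii) avoid exact collocation
    d0 = np.linalg.norm(xy - x0, axis=1)
    idx = np.where(d0 <= radius)[0]              # (i) local neighbourhood only
    pts, vals = xy[idx], z[idx]
    n = len(idx)
    # Ordinary kriging system: gamma matrix bordered with ones (Lagrange term).
    H = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(H)
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(pts - x0, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ vals)                   # predicted elevation at x0

# Toy usage: predict a value on a finer grid from a coarse 90 m grid.
pts = np.array([[0, 0], [90, 0], [0, 90], [90, 90]], dtype=float)
elev = np.array([120.0, 123.0, 118.0, 125.0])
print(round(krige_point(pts, elev, np.array([30.0, 30.0])), 2))
```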

Relevance:

100.00%

Abstract:

Spherical codes in even dimensions n = 2m generated by a commutative group of orthogonal matrices can be determined by a quotient of m-dimensional lattices when the sublattice has an orthogonal basis. We discuss here the existence of orthogonal sublattices of the lattices A2, D3, D4 and E8, which have the best packing density in their dimensions, in order to generate families of commutative group codes approaching the bound presented in Siqueira and Costa (2008) [14].
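
A hedged sketch of the construction in R^4 = R^(2m) with m = 2: the spherical code is the orbit of an initial unit vector under a cyclic (hence commutative) group of block-diagonal plane rotations with angles 2*pi*b_j*i/n. The choices n = 8 and b = (1, 3) are illustrative, not the optimized sublattice-based constructions the paper discusses.

```python
# Hedged sketch of a cyclic group code in R^4; n and b are illustrative.
import numpy as np

def group_code(n, b, u):
    """Orbit of u under the cyclic group of 2x2-block rotations."""
    pts = []
    for i in range(n):
        v = []
        for j, bj in enumerate(b):
            theta = 2.0 * np.pi * bj * i / n
            c, s = np.cos(theta), np.sin(theta)
            x, y = u[2*j], u[2*j + 1]
            v += [c*x - s*y, s*x + c*y]        # rotate the j-th coordinate plane
        pts.append(v)
    return np.array(pts)

u = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2.0)   # initial vector on S^3
code = group_code(n=8, b=(1, 3), u=u)
# Minimum distance of the spherical code (the quantity one tries to maximize):
D = np.linalg.norm(code[:, None, :] - code[None, :, :], axis=2)
print(round(D[D > 1e-9].min(), 4))
```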

Relevance:

100.00%

Abstract:

The implementation of local geodetic networks for the georeferencing of rural properties has become a requirement since the publication of the Georeferencing Technical Standard by INCRA. According to this standard, the maximum baseline length for GNSS L1 receivers is 20 km. Besides the baseline length, the geometry and the number of geodetic control stations are other factors to be considered in the implementation of geodetic networks. Thus, this research aimed to examine the influence of baseline lengths above the regulated limit of 20 km, of the geometry, and of the number of control stations on the quality of local geodetic networks for georeferencing, and also to demonstrate the importance of using specific tests to evaluate the ambiguity solution and the quality of the adjustment. The results indicated that increasing the number of control stations improved the quality of the network, that the geometry did not influence the quality, and that the baseline length did influence it; however, lengths above 20 km did not prevent the establishment, with GPS L1 receivers, of a local geodetic network for georeferencing purposes. Also, the use of different statistical tests, both for evaluating the resolution of the ambiguities and for the adjustment, provided greater clarity in analyzing the results, allowing unsuitable observations to be eliminated.
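
As one example of the kind of ambiguity-validation test mentioned, here is a sketch of the common ratio test; the abstract does not name the specific tests used, and the threshold 3.0 is a conventional illustrative value.

```python
# Hedged sketch of the ratio test for integer ambiguity validation: accept
# the fix when the second-best candidate's quadratic form is sufficiently
# worse than the best one. Threshold is illustrative, not from the paper.
def ratio_test(q_best: float, q_second: float, threshold: float = 3.0) -> bool:
    """Accept the integer ambiguity fix if q_second / q_best >= threshold."""
    return (q_second / q_best) >= threshold

print(ratio_test(q_best=0.8, q_second=3.1))   # True: fix accepted
```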

Relevance:

100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Abstract:

Reuse distance analysis, the prediction of how many distinct memory addresses will be accessed between two accesses to a given address, has been established as a useful technique in profile-based compiler optimization, but the cost of collecting the memory reuse profile has been prohibitive for some applications. In this report, we propose using the hardware monitoring facilities available in existing CPUs to gather an approximate reuse distance profile. The difficulties associated with this monitoring technique are discussed, most importantly that there is no obvious link between the reuse profile produced by hardware monitoring and the actual reuse behavior. Potential applications which would be made viable by a reliable hardware-based reuse distance analysis are identified.
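
A minimal reference implementation of the quantity being profiled may help: the reuse (stack) distance of an access is the number of distinct addresses touched since the previous access to the same address, infinite on a first use. The O(n^2) scan below is for clarity only; practical profilers use tree-based structures or, as proposed here, hardware sampling.

```python
# Minimal reuse (stack) distance computation over an address trace.
def reuse_distances(trace):
    last_seen = {}                         # address -> index of previous access
    out = []
    for i, addr in enumerate(trace):
        if addr in last_seen:
            window = trace[last_seen[addr] + 1 : i]
            out.append(len(set(window)))   # distinct addresses in between
        else:
            out.append(float("inf"))       # cold miss: no previous access
        last_seen[addr] = i
    return out

print(reuse_distances(["a", "b", "c", "a", "b", "b"]))
# [inf, inf, inf, 2, 2, 0]
```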

Relevance:

100.00%

Abstract:

A mass-spectrometric uranium-series dated stalagmite from the Central Alps of Austria provides unprecedented new insights into high-altitude climate change during the peak of isotope stage 3. The stalagmite formed continuously between 57 and 46 kyr before present. A series of 'Hendy tests' demonstrates that the outer parts of the sample show a progressive increase in both stable C and O isotope values. No such covariant increase was detected within the axial zone. This, in conjunction with other observations, suggests that the continuous stable oxygen isotope profile obtained from the axial zone of the stalagmite largely reflects the unaltered isotopic composition of the cave drip water. The δ18O record shows events of high δ18O values that correlate remarkably well with Interstadials 15 (a and b), 14 and 12 identified in the Greenland ice cores. Interstadial 15b started rapidly at 55.6 kyr and lasted only ~300 yr, Interstadial 15a peaked 54.9 kyr ago and was of even shorter duration (~100 yr), and Interstadial 14 commenced 54.2 kyr ago and lasted ~3000 yr. This stalagmite thus represents one of the first terrestrial archives outside the high latitudes to record precisely dated Dansgaard-Oeschger (D/O) events during isotope stage 3. Provided that rapid D/O warmings occurred synchronously in Greenland and the European Alps, the new data provide an independent tool to improve the GRIP and GISP2 chronologies.

Relevance:

100.00%

Abstract:

We present a new high-resolution speleothem stable isotope record from Villars Cave (SW France) that covers part of marine isotope stage (MIS) 3. The Vil14 stalagmite grew between ~52 and 29 ka. The δ13C profile is used as a palaeoclimate proxy and clearly shows interstadial substages 13, 12 and 11. The new results complement and corroborate the previously published stalagmite records Vil9 and Vil27 from the same site. The Vil14 chronology is based on 12 Th-U dates by MC-ICP-MS and 3 by TIMS. A correction for detrital contamination was made using the 230Th/232Th activity ratio measured on clay collected in Villars Cave. The Vil14 results reveal that the onsets of Dansgaard-Oeschger (DO) events 13 and 12 occurred at ~49.8 ka and ~47.8 ka, respectively. Within uncertainties, this is consistent with the latest NorthGRIP time scale (GICC05-60 ka) and with speleothem records from the Central Alps. Our data show an abrupt δ13C increase at the end of DO events 14 to 12 which coincides with a petrographical discontinuity, probably due to rapid cooling. As observed for Vil9 and Vil27, Vil14 growth slowed significantly after ~42 ka and finally stopped ~29 ka ago, where the δ13C increase suggests a strong climate deterioration that coincides with drops in both North Atlantic sea level and sea surface temperature.

Relevance:

100.00%

Abstract:

Thorium and uranium isotopes were measured in a diagenetic manganese nodule from the Peru basin using alpha spectrometry and thermal ionization mass spectrometry (TIMS). Alpha-counting of 62 samples was carried out with a depth resolution of 0.4 mm to obtain a high-resolution Th-230(excess) profile. In addition, 17 samples were measured with TIMS to obtain precise isotope concentrations and isotope ratios. We obtained values of 0.06-0.59 ppb (Th-230), 0.43-1.40 ppm (Th-232), 0.09-0.49 ppb (U-234) and 1.66-8.24 ppm (U-238). The uranium activity ratio in the uppermost samples (1-6 mm) and in two further sections of the nodule, at 12.5±1.0 mm and 27.3-33.5 mm, comes close to the present ocean water value of 1.144±0.004. In two other sections of the nodule, this ratio is significantly higher, probably reflecting incorporation of diagenetic uranium. The upper 25 mm section of the Mn nodule shows a relatively smooth exponential decrease in the Th-230(excess) concentration (TIMS). The slope of the best fit yields a growth rate of 110 mm/Ma up to 24.5 mm depth. The section from 25 to 30.3 mm depth shows constant Th-230(excess) concentrations, probably due to growth rates even faster than those in the top section of the nodule. From 33 to 50 mm depth, the growth rate is approximately 60 mm/Ma. Two layers in the nodule with distinct laminations (11-15 and 28-33 mm depth) probably formed during the transition from isotopic stage 8 to 7 and in stage 5e, respectively. The Mn/Fe ratio shows higher values during interglacials 5 and 7, and lower ones during glacials 4 and 6. A comparison of our data with data from adjacent sediment cores suggests (a) a variable supply of hydrothermal Mn to sediments and Mn nodules of the Peru basin or (b) suboxic conditions at the water-sediment interface during periods with lower Mn/Fe ratios.
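
The growth-rate estimate rests on a simple relation: with a constant growth rate s, material at depth z has age z/s, so the excess Th-230 concentration follows C(z) = C0 * exp(-lambda_230 * z / s), ln C falls linearly with depth, and s = -lambda_230 / slope. The sketch below applies this fit to invented illustrative data; only the method mirrors the abstract.

```python
# Hedged sketch of a growth-rate fit from a Th-230(excess) depth profile.
# The data points are invented for illustration.
import math
import numpy as np

LAMBDA_230 = math.log(2) / 75_584      # Th-230 decay constant, 1/yr (t_1/2 ~ 75.6 kyr)

def growth_rate_mm_per_Ma(depth_mm, th230_excess):
    slope = np.polyfit(depth_mm, np.log(th230_excess), 1)[0]  # d(ln C)/dz, 1/mm
    return -LAMBDA_230 / slope * 1e6   # mm per million years

depth = np.array([2.0, 8.0, 14.0, 20.0])       # illustrative depths, mm
conc = np.array([0.50, 0.30, 0.18, 0.11])      # illustrative Th-230 excess values
print(round(growth_rate_mm_per_Ma(depth, conc), 1))
```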

Relevance:

100.00%

Abstract:

The work carried out by the physical oceanography group on POLARSTERN Leg ANT-V/3 concentrated on four major topics.

A. A large-scale survey of the eastern boundary between the Weddell gyre and the open ocean. On the way to the coastal polynya in early October, 12 CTD stations were occupied between 54°30'S, 6°E and 70°30'S, 8°W. Another set of 16 stations was obtained in early December on the way back north. During this transect three current meter moorings were recovered at Maud Rise. The path between the current meter arrays was used to run an additional section to the NNE across the top of Maud Rise.

B. A large-scale survey of the Antarctic Coastal Current along the eastern shelf area. To obtain the water mass characteristics along the eastern Weddell shelf, 36 CTD stations were occupied between Atka Bay and the Filchner Trench. Most of the stations were located on the shelf. Cross-shelf sections were obtained both near Drescher Inlet and off Halley Bay, in the divergence area of the Coastal Current where the continental slope turns to the west, and south of Vestkapp at Neptune's Point. A longshore section over 120 km was run north of Vestkapp.

C. A mesoscale survey of the Antarctic Coastal Current off Drescher Inlet. The experimental work consisted of 37 CTD stations and direct current measurements. The CTD profiles were grouped into seven sections perpendicular to the coastline off Drescher Inlet, extending once over 70 km but normally over 35 km. The profile depth ranged from 300 m on one section to the complete water column on two sections. Most sections consist of five stations, providing the highest resolution over the upper continental slope with spacing increasing offshore. The stations were chosen to represent the shelf (450 m), the shelf break (800 m), the upper slope (1600 m), the lower slope (2400 m) and the transition to the abyssal plain (3400 m). Rough topography and difficult ice conditions made it impossible to meet those requirements in all cases.

D. A small-scale survey of the hydrographic conditions under the sea ice. The motivation for these studies arose during the cruise, so a suitable instrumentation had to be developed at sea. This was first attempted with an NB Smart CTD inserted on an L-shaped lever through a hole in the ice; however, various water intrusions into the instrument caused this technique to fail. In consequence, a special lever system was built to position an NB Mark 3b weighing about 40 kg below the ice. Twenty-four profiles were obtained, reaching from the bottom of the ice down to 2 m below the ice surface, with a maximum distance of 1 m from the entry hole. As the conductivity sensor was influenced by nearby ice platelets, salinity samples were drawn to check the sensor.

Relevance:

100.00%

Abstract:

Over the past few years, the number of wireless network users has been increasing. Until now, radio frequency (RF) has been the dominant technology. However, the electromagnetic spectrum in this region is becoming saturated, creating demand for alternative wireless technologies. Recently, with the growing market for LED lighting, Visible Light Communication (VLC) has been drawing attention from the research community. First, the LED is an efficient device for illumination. Second, it offers easy modulation and high bandwidth. Finally, it can combine illumination and communication in the same device; in other words, it allows highly efficient wireless communication systems to be implemented. One of the most important aspects of a communication system is its reliability when working over noisy channels, where the received data can be affected by errors. To ensure proper system operation, a channel encoder is usually employed. Its function is to encode the data to be transmitted so as to increase system performance, commonly by means of error-correcting codes (ECC), which append redundant information to the original data; at the receiver side, this redundancy is used to recover the erroneous data. This dissertation presents the implementation steps of a channel encoder for VLC. Several techniques were considered, such as Reed-Solomon and convolutional codes, block and convolutional interleaving, CRC and puncturing. A detailed analysis of the characteristics of each technique was made in order to choose the most appropriate ones. Simulink models were created to simulate how different codes behave in different scenarios. Later, the models were implemented on an FPGA and simulations were performed; hardware co-simulations were also used to speed up the simulations. In the end, different techniques were combined to create a complete channel encoder capable of detecting and correcting both random and burst errors, owing to the use of an RS(255,213) code with a block interleaver. Furthermore, after the decoding process, the proposed system can identify uncorrectable errors in the decoded data by means of the CRC-32 algorithm.
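
A hedged sketch of two of the named building blocks: a block interleaver (write row-wise, read column-wise, so a burst of channel errors is spread across several RS codewords) and CRC-32 detection from the Python standard library. The 8x16 geometry is illustrative, and the RS(255,213) encoding itself would wrap around these stages.

```python
# Block interleaving plus CRC-32 tagging; geometry is illustrative.
import zlib

def block_interleave(data: bytes, rows: int, cols: int) -> bytes:
    assert len(data) == rows * cols
    # Write row-wise, read column-wise: a burst in the channel is spread out.
    return bytes(data[r * cols + c] for c in range(cols) for r in range(rows))

def block_deinterleave(data: bytes, rows: int, cols: int) -> bytes:
    return block_interleave(data, cols, rows)    # inverse swaps the roles

payload = bytes(range(128))                      # an 8 x 16 block
tagged = payload + zlib.crc32(payload).to_bytes(4, "big")   # append CRC-32
sent = block_interleave(payload, rows=8, cols=16)
back = block_deinterleave(sent, rows=8, cols=16)
assert back == payload                           # round trip is lossless
print(zlib.crc32(back) == int.from_bytes(tagged[-4:], "big"))  # True
```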

Relevance:

100.00%

Abstract:

This study examined the perceptual attunement of relatively skilled individuals to the physical properties of striking implements in the sport of cricket. We also sought to assess whether using bats with different physical properties influenced performance of a specific striking action: the front foot straight drive. Eleven skilled male cricketers (mean age = 16.6 ± 0.3 years) from an elite school cricket development programme consented to participate in the study. Whilst blindfolded, participants wielded six bats exhibiting different mass and moment of inertia (MOI) characteristics and were asked to identify the three bats they most preferred for hitting a ball to a maximum distance with a front foot straight drive (a common shot in cricket). Next, participants attempted to hit balls projected from a ball machine using each of the six bat configurations, to enable kinematic analysis of front foot straight drive performance with each implement. Results revealed that, on first choice, the two bats with the smallest mass and MOI values (1 and 2) were most preferred, by almost two-thirds (63.7%) of the participants. Kinematic analysis of movement patterns revealed that bat velocity, step length and bat-ball contact position measures differed significantly between bats. The data revealed how skilled youth cricketers were attuned to the different bat characteristics and harnessed movement system degeneracy to perform this complex interceptive action.