76 results for Source codes
Abstract:
This paper derives approximations allowing the estimation of outage probability for standard irregular LDPC codes and full-diversity Root-LDPC codes used over nonergodic block-fading channels. Two separate approaches are discussed: a numerical approximation, obtained by curve fitting, for both code ensembles, and an analytical approximation for Root-LDPC codes, obtained under the assumption that the slope of the iterative threshold curve of a given code ensemble matches the slope of the outage capacity curve in the high-SNR regime.
Abstract:
This paper presents our investigation of the iterative decoding performance of some sparse-graph codes on block-fading Rayleigh channels. The considered code ensembles are standard LDPC codes and Root-LDPC codes, first proposed in and shown to attain full transmission diversity. We study the iterative threshold performance of these codes as a function of the fading gains of the transmission channel and propose a numerical approximation of the iterative threshold versus fading gains, for both LDPC and Root-LDPC codes. Also, we show analytically that, in the case of 2 fading blocks, the iterative threshold root of Root-LDPC codes is proportional to (α1 α2)⁻¹, where α1 and α2 are the corresponding fading gains. From this result, the full-diversity property of Root-LDPC codes immediately follows.
Abstract:
The topic of this study is to increase the theoretical and empirical understanding of the open source business strategy in the embedded systems domain by investigating open source business models, challenges, resources, and operational and dynamic capabilities.
Abstract:
We first establish that policymakers on the Bank of England's Monetary Policy Committee choose lower interest rates with experience. We then reject increasing confidence in private information or learning about the structure of the macroeconomy as explanations for this shift. Instead, a model in which voters signal their hawkishness to observers better fits the data. The motivation for signalling is consistent with wanting to control inflation expectations, but not with career concerns or pleasing colleagues. There is also no evidence of capture by industry. The paper suggests that policy-motivated reputation building may be important for explaining dynamics in experts' policy choices.
Abstract:
The COMPTEL unidentified source GRO J1411-64 was observed by INTEGRAL, and its central part also by XMM-Newton. The data analysis shows no hint of new detections at hard X-rays. The upper limits in flux presented herein constrain the energy spectrum of whatever was producing GRO J1411-64, imposing, in the framework of earlier COMPTEL observations, the existence of a peak in power output located somewhere between 300-700 keV for the so-called low state. The Circinus Galaxy is the only source detected within the 4$\sigma$ location error of GRO J1411-64, but it can be safely excluded as a possible counterpart: the extrapolation of its energy spectrum is well below that of GRO J1411-64 at MeV energies. 22 significant sources (likelihood $> 10$) were extracted and analyzed from XMM-Newton data. Only one of these sources, XMMU J141255.6-635932, is spectrally compatible with GRO J1411-64, although the fact that the soft X-ray observations do not cover the full extent of the COMPTEL source position uncertainty makes an association hard to quantify and thus risky. The unique peak of the power output at high energies (hard X-rays and gamma-rays) resembles that found in the SEDs of blazars or microquasars. However, an analysis using a microquasar model, consisting of a magnetized conical jet filled with relativistic electrons which radiate through synchrotron emission and inverse Compton scattering with star, disk, corona and synchrotron photons, shows that it is hard to comply with all observational constraints. This and the non-detection at hard X-rays introduce an a posteriori question mark over the physical reality of this source, which is discussed in some detail.
Abstract:
Frequently the choice of a library management program is conditioned by social, economic and/or political factors that result in the selection of a system that is not altogether suitable for the library's needs, characteristics and functions. Open source software is quickly becoming a preferred solution, owing to the freedom to copy, modify and distribute it and the freedom from contracts, as well as to greater opportunities for interoperability with other applications. These new trends regarding open source software in libraries are also reflected in LIS studies, as evidenced by the different courses addressing automation programs and repository management, including the Linux/GNU operating system, among others. The combination of the needs of the centres and the new trends in open source software is the focus of a virtual laboratory for the use of open source software for library applications. It was the result of a project, whose aim was to make a useful contribution to the library community, carried out by a group of professors of the School of Library and Information Science of the University of Barcelona, together with a group of students, members of a Working Group on Open Source Software for Information Professionals of the Professional Library Association of Catalonia.
Abstract:
We present an electroluminescence (EL) study of Si-rich silicon oxide (SRSO) LEDs with and without Er3+ ions under two polarization schemes: direct current (DC) and pulsed voltage (PV). The power efficiency of the devices and their main optical limitations are presented. We show that under the PV polarization scheme the devices achieve performance one order of magnitude better than under DC. Time-resolved measurements show that this enhancement is achieved only for active layers whose annealing temperature is high enough (>1000 °C) for silicon nanocrystal (Si-nc) formation. The system has been modeled with rate equations, and excitation cross-sections for both Si-nc and Er3+ ions have been extracted.
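The rate-equation modeling mentioned above can be illustrated with a minimal two-level sketch (all constants below are hypothetical; the actual model in the paper couples the Si-nc and Er3+ populations and is not reproduced here):

```python
# Minimal two-level rate-equation sketch (illustrative only):
#   dN_exc/dt = sigma * phi * (N - N_exc) - N_exc / tau
# where sigma is the excitation cross-section, phi the photon/carrier flux,
# and tau the emitter lifetime. Steady state: N_exc/N = s / (1 + s), s = sigma*phi*tau.

def simulate(sigma, phi, tau, n_total, dt, steps):
    """Forward-Euler integration of the excited population."""
    n_exc = 0.0
    for _ in range(steps):
        dn = sigma * phi * (n_total - n_exc) - n_exc / tau
        n_exc += dn * dt
    return n_exc

def steady_state_fraction(sigma, phi, tau):
    s = sigma * phi * tau
    return s / (1.0 + s)

if __name__ == "__main__":
    sigma, phi, tau, n = 1e-16, 1e18, 1e-4, 1.0   # hypothetical values
    n_exc = simulate(sigma, phi, tau, n, dt=1e-6, steps=5000)
    print(n_exc, steady_state_fraction(sigma, phi, tau))
```

Fitting measured rise and decay transients against such equations is what allows an effective excitation cross-section to be extracted.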
Abstract:
We present a numerical method for generating vortex rings in Bose-Einstein condensates confined in axially symmetric traps. The vortex ring is generated using the line-source approximation for the vorticity, i.e., the curl of the superfluid velocity field is different from zero only on a circumference of a given radius located on a plane perpendicular to the symmetry axis and coaxial with it. The particle density is obtained by solving a modified Gross-Pitaevskii equation that incorporates the effect of the velocity field. We discuss the appearance of density profiles, the vortex core structure, and the vortex nucleation energy, i.e., the energy difference between vortical and ground-state configurations. This is used to present a qualitative description of the vortex dynamics.
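One standard way to obtain such a modified Gross-Pitaevskii equation (a sketch in our notation; the paper's exact form may differ) is a Madelung splitting of the order parameter with the prescribed vortex phase, which adds a kinetic term built from the imposed velocity field:

```latex
% Sketch: write the order parameter with a prescribed phase S(\mathbf{r}),
\Psi(\mathbf{r}) = \phi(\mathbf{r})\, e^{iS(\mathbf{r})},
\qquad
\mathbf{v}(\mathbf{r}) = \frac{\hbar}{m}\,\nabla S ,
% then the real part of the stationary GPE gives, for real \phi,
\left[-\frac{\hbar^{2}}{2m}\nabla^{2} + V_{\mathrm{trap}}(\mathbf{r})
      + g\,\phi^{2} + \tfrac{1}{2}\, m\,\mathbf{v}^{2}\right]\phi
  = \mu\,\phi ,
% while the imaginary part yields the continuity condition
\nabla\!\cdot\!\left(\phi^{2}\,\mathbf{v}\right) = 0 .
```

The line-source approximation then fixes v as the field of a circular vortex filament, and φ is solved for the particle density.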
Abstract:
An instrument designed to measure the thermal conductivity of consolidated rocks, dry or saturated, using a transient method is presented. The instrument measures relative values of the thermal conductivity and needs calibration to obtain absolute values. The device can be used as a heat pulse line source and as a continuous heat line source. Two parameters to determine thermal conductivity are proposed: TMAX, in heat pulse line-source mode, and SLOPE, in continuous heat line-source mode. Its performance is better, and the operation simpler, in heat pulse line-source mode, with a measuring time of 170 s and a reproducibility better than 2.5%. The sample preparation is very simple in both modes. The performance has been tested with a set of ten rocks with thermal conductivity values between 1.4 and 5.2 W m⁻¹ K⁻¹, which covers the usual range for consolidated rocks.
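The SLOPE parameter follows from standard continuous line-source theory: at large times the temperature rise grows as T(t) ≈ (q / 4πλ) ln t + C, so the slope of T versus ln t yields λ = q / (4π · SLOPE). A minimal sketch with hypothetical values (the real instrument is calibrated, not absolute):

```python
import math

# Continuous line-source sketch: recover thermal conductivity lam from the
# slope of temperature vs ln(time). q is the heat input per unit length (W/m).

def fit_slope(ts, temps):
    """Least-squares slope of temps against ln(ts)."""
    xs = [math.log(t) for t in ts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(temps) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, temps))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

def conductivity_from_slope(q, slope):
    return q / (4.0 * math.pi * slope)

if __name__ == "__main__":
    lam_true, q = 2.5, 10.0                        # W/(m K), W/m (hypothetical)
    ts = [10 + 2 * i for i in range(80)]           # seconds
    temps = [q / (4 * math.pi * lam_true) * math.log(t) + 20.0 for t in ts]
    lam = conductivity_from_slope(q, fit_slope(ts, temps))
    print(round(lam, 3))   # recovers 2.5
```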
Abstract:
The aim of this article is to show that, despite the evident idealization of Greece and of Platonic love in Victorian and Edwardian England, both also reveal their limits. To make this clear, the author refers constantly to the underlying Greek texts, such as Plato's Symposium and Phaedrus, and perhaps even Plutarch's Eroticus, in search of a Classical Tradition that is highly significant for understanding the England of the beginning of the twentieth century.
Abstract:
We present a heuristic method for learning error correcting output codes matrices based on a hierarchical partition of the class space that maximizes a discriminative criterion. To achieve this goal, the optimal codeword separation is sacrificed in favor of a maximum class discrimination in the partitions. The creation of the hierarchical partition set is performed using a binary tree. As a result, a compact matrix with high discrimination power is obtained. Our method is validated using the UCI database and applied to a real problem, the classification of traffic sign images.
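The mapping from a hierarchical partition of the class space to an ECOC matrix can be sketched as follows. The discriminative splitting criterion of the paper is not reproduced; the binary tree below is a fixed, hypothetical partition, and each internal node contributes one column (+1 for its left subset, -1 for its right subset, 0 for classes outside the node):

```python
# Sketch: turn a binary-tree partition of the class set into ECOC columns.
# A node is either a class label (leaf) or a (left, right) pair.

def flatten(node):
    if not isinstance(node, tuple):
        return [node]
    return flatten(node[0]) + flatten(node[1])

def tree_to_ecoc(node, classes, columns):
    """Append one +1/-1/0 column per internal node of the tree."""
    if not isinstance(node, tuple):
        return
    left, right = node
    col = {c: 0 for c in classes}
    for c in flatten(left):
        col[c] = +1
    for c in flatten(right):
        col[c] = -1
    columns.append([col[c] for c in classes])
    tree_to_ecoc(left, classes, columns)
    tree_to_ecoc(right, classes, columns)

if __name__ == "__main__":
    tree = ((0, 1), (2, 3))          # 4 classes, 3 internal nodes
    classes = flatten(tree)
    cols = []
    tree_to_ecoc(tree, classes, cols)
    # N classes -> N-1 columns: a compact matrix, as in the abstract
    print(list(zip(*cols)))          # one codeword (row) per class
```

With N classes the tree yields only N-1 binary problems, which is what makes the resulting matrix compact.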
Abstract:
A common way to model multiclass classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multiclass problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest code. One of the main requirements of the ECOC design is that the base classifier be capable of splitting each subgroup of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions. Furthermore, nonlinear classifiers also fail to manage some types of surfaces. In this paper, we present a novel strategy to model multiclass classification problems using subclass information in the ECOC framework. Complex problems are solved by splitting the original set of classes into subclasses and embedding the binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when the class overlap or the distribution of the training objects conceals the decision boundaries of the base classifier. The results are even more significant when one has a sufficiently large training size.
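The closest-code decision rule described above can be sketched with a minimal Hamming-style decoder (zero entries, i.e. classes not used by a binary problem, are ignored, a common ECOC convention; the codewords below are hypothetical):

```python
# ECOC decoding sketch: each class owns a codeword over {+1, -1, 0}; the
# binary classifiers produce an output vector, and the predicted class is
# the one whose codeword is closest to that vector.

def ecoc_decode(output, codewords):
    """Return the label whose codeword is closest to the classifier outputs."""
    def dist(code):
        # Hamming-style distance over positions the class participates in.
        return sum(0.5 * abs(o - c) for o, c in zip(output, code) if c != 0)
    return min(codewords, key=lambda label: dist(codewords[label]))

if __name__ == "__main__":
    codewords = {"A": [+1, +1, 0], "B": [+1, -1, 0], "C": [-1, 0, +1]}
    print(ecoc_decode([+1, -1, -1], codewords))   # -> "B"
```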
Abstract:
In October 1998, Hurricane Mitch triggered numerous landslides (mainly debris flows) in Honduras and Nicaragua, resulting in a high death toll and in considerable damage to property. The potential application of relatively simple and affordable spatial prediction models for landslide hazard mapping in developing countries was studied. Our attention was focused on a region in NW Nicaragua, one of the most severely hit places during the Mitch event. A landslide map was obtained at 1:10 000 scale in a Geographic Information System (GIS) environment from the interpretation of aerial photographs and detailed field work. In this map the terrain failure zones were distinguished from the areas within the reach of the mobilized materials. A Digital Elevation Model (DEM) with a pixel size of 20 m × 20 m was also employed in the study area. A comparative analysis was carried out between the terrain failures caused by Hurricane Mitch and a selection of 4 terrain factors extracted from the DEM which contributed to the terrain instability. Land propensity to failure was determined with the aid of a bivariate analysis and GIS tools in a terrain failure susceptibility map. In order to estimate the areas that could be affected by the path or deposition of the mobilized materials, we considered the fact that under intense rainfall events debris flows tend to travel long distances following the maximum slope and merging with the drainage network. Using the TauDEM extension for ArcGIS software we automatically generated flow lines following the maximum slope in the DEM, starting from the areas prone to failure in the terrain failure susceptibility map. The areas crossed by the flow lines from each terrain failure susceptibility class correspond to the runout susceptibility classes represented in a runout susceptibility map. The study of terrain failure and runout susceptibility enabled us to obtain a spatial prediction for landslides, which could contribute to landslide risk mitigation.
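The flow-line idea used above can be sketched with a simplified D8-style steepest-descent trace (toy DEM; lowest-neighbour rule rather than TauDEM's actual slope computation with diagonal distances):

```python
# Sketch: from a starting cell, repeatedly step to the lowest of the 8
# neighbours until no neighbour is lower, tracing the path mobilized
# material would follow down the maximum slope.

def flow_line(dem, start):
    """Trace a steepest-descent path over a 2-D elevation grid."""
    rows, cols = len(dem), len(dem[0])
    path = [start]
    r, c = start
    while True:
        best, best_z = None, dem[r][c]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    if dem[rr][cc] < best_z:
                        best, best_z = (rr, cc), dem[rr][cc]
        if best is None:           # local minimum: deposition zone
            return path
        r, c = best
        path.append(best)

if __name__ == "__main__":
    dem = [
        [9, 8, 7],
        [8, 6, 5],
        [7, 5, 3],
    ]
    print(flow_line(dem, (0, 0)))  # -> [(0, 0), (1, 1), (2, 2)]
```

The cells visited by such paths, started from every failure-prone cell, are the ones that end up in the runout susceptibility classes.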