950 results for Covered interest rate parity


Relevance:

30.00%

Publisher:

Abstract:

A series of ice cores from sites with different snow-accumulation rates across Law Dome, East Antarctica, was investigated for methanesulphonic acid (MSA) movement. The precipitation at these sites (up to 35 km apart) is influenced by the same air masses, the principal difference being the accumulation rate. At the low-accumulation-rate W20k site (0.17 m ice equivalent), MSA was completely relocated from the summer to the winter layer. Moderate movement was observed at the intermediate-accumulation-rate site (0.7 m ice equivalent), Dome Summit South (DSS), while there was no evidence of movement at the high-accumulation-rate DE08 site (1.4 m ice equivalent). The main DSS record of MSA covered the epoch AD 1727-2000 and was used to investigate temporal post-depositional changes. Co-deposition of MSA and sea-salt ions was observed in the surface layers, outside of the main summer MSA peak, which complicates interpretation of these peaks as evidence of movement in deeper layers. A seasonal study of the 273-year DSS record revealed MSA migration predominantly from summer into autumn (in the up-core direction), but this migration was suppressed during the Tambora (1815) and unknown (1809) volcanic eruption period, and enhanced during an epoch (1770-1800) with high summer nitrate levels. A complex interaction between the gradients in nss-sulphate, nitrate and sea salts (which are influenced by accumulation rate) is believed to control the rate and extent of movement of MSA.

Relevance:

30.00%

Publisher:

Abstract:

The distribution of denitrification was investigated in the hypolimnion of the east and west lobes of permanently ice-covered Lake Bonney, Taylor Valley, Antarctica. Anomalously high concentrations of dissolved inorganic nitrogen (DIN; nitrate, nitrite, ammonium and nitrous oxide) in the oxygen-depleted hypolimnion of the east lobe of the lake implied that denitrification is or was active in the west, but not in the east, lobe. While previous investigations reported no detectable denitrification in the east lobe, we measured active denitrification in samples from both the east and west lobes. In the west lobe, measured denitrification rates exhibited a maximum at the depth of the chemocline, and denitrification was not detectable in either the oxic surface waters or the deep water where nitrate was absent. In the east lobe, denitrification was detected below the chemocline, at the depths where ammonium, nitrate, nitrite and nitrous oxide are all present at anomalously high levels. Trace metal availability was manipulated in incubation experiments in order to determine whether trace metal toxicity in the east lobe could explain the difference in nitrogen cycling between the two lobes. There were no consistent stimulatory effects of metal chelators or nutrient addition on the rate of denitrification in either lobe, so the mechanisms underlying the unusual N cycle of the east lobe remain unknown. We conclude that all the ingredients necessary to allow denitrification to occur are present in the east lobe. However, even though denitrification could be detected under certain conditions in incubations, it is inhibited under the in situ conditions of the lake.

Relevance:

30.00%

Publisher:

Abstract:

STUDY QUESTION Does intrauterine application of diluted seminal plasma (SP) at the time of ovum pick-up improve the pregnancy rate by ≥14% in IVF treatment? SUMMARY ANSWER Intrauterine instillation of diluted SP at the time of ovum pick-up is unlikely to increase the pregnancy rate by ≥14% in IVF. WHAT IS KNOWN ALREADY SP modulates endometrial function, and sexual intercourse around the time of embryo transfer has been suggested to increase the likelihood of pregnancy. A previous randomized double-blind pilot study demonstrated a strong trend towards increased pregnancy rates following the intracervical application of undiluted SP. As this study was not conclusive and as the finding could have been confounded by sexual intercourse, the intrauterine application of diluted SP was investigated in the present trial. STUDY DESIGN, SIZE, DURATION A single-centre, prospective, double-blind, placebo-controlled, randomized, superiority trial on women undergoing IVF was conducted from April 2007 until February 2012 at the University Department of Gynaecological Endocrinology and Reproductive Medicine, Heidelberg, Germany. PARTICIPANTS/MATERIALS, SETTING, METHODS The study was powered to detect a 14% increase in the clinical pregnancy rate, and two sequential tests were planned using the Pocock spending function. At the first interim analysis, 279 women had been randomly assigned to intrauterine diluted SP (20% SP in saline, from the patient's partner) (n = 138) or placebo (n = 141) at the time of ovum pick-up. MAIN RESULTS AND THE ROLE OF CHANCE The clinical pregnancy rate per randomized patient was 37/138 (26.8%) in the SP group and 41/141 (29.1%) in the placebo group (difference: -2.3%, 95% confidence interval of the difference: -12.7 to +8.2%; P = 0.69). The live birth rate per randomized patient was 28/138 (20.3%) in the SP group and 33/141 (23.4%) in the placebo group (difference: -3.1%, 95% confidence interval of the difference: -12.7 to +6.6%; P = 0.56). It was decided to terminate the trial for futility at the first interim analysis, at a conditional power of 62%. LIMITATIONS, REASONS FOR CAUTION The confidence interval of the difference remains wide, so clinically relevant differences cannot reliably be excluded on the basis of this single study. WIDER IMPLICATIONS OF THE FINDINGS The results of this study cast doubt on the validity of the concept that SP increases endometrial receptivity and thus implantation in humans. STUDY FUNDING/COMPETING INTEREST(S) Funding was provided by the department's own research facilities. TRIAL REGISTRATION NUMBER DRKS00004615.
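For readers who want to re-derive the reported effect sizes, a minimal sketch follows (assuming a Wald normal-approximation interval for the difference of two proportions, which the abstract does not specify); it approximately reproduces the clinical pregnancy figures:

```python
from math import sqrt

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Wald CI for the difference of two independent proportions (p1 - p2)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Clinical pregnancy: 37/138 (SP group) vs 41/141 (placebo group)
diff, lo, hi = risk_difference_ci(37, 138, 41, 141)
print(f"{diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")
# ~ -2.3% (95% CI -12.8% to +8.3%), close to the reported -2.3% (-12.7 to +8.2%)
```

The slight mismatch in the interval bounds is expected if the trial used a different interval construction than the Wald approximation assumed here.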

Relevance:

30.00%

Publisher:

Abstract:

This paper shows that countries characterized by a financial accelerator mechanism may reverse the usual finding of the literature: flexible exchange rate regimes do a worse job of insulating open economies from external shocks. I obtain this result with a calibrated small open economy model that endogenizes foreign interest rates by linking them to the banking sector's foreign currency leverage. This relationship makes exchange rate policy more important than under the usual exogeneity assumption. I find empirical support for this prediction using the Local Projections method. Finally, a second-order approximation to the model yields larger welfare losses under flexible regimes.
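The Local Projections check mentioned above can be sketched as one regression per horizon. This is a hedged, generic illustration with hypothetical variable names, not the paper's actual specification:

```python
import numpy as np
import statsmodels.api as sm

def local_projection_irf(y, shock, controls, max_h=8):
    """Jorda-style local projections: one OLS regression per horizon h.
    y, shock: 1-D arrays of length T; controls: (T, k) array.
    Returns the estimated response of y at each horizon to the shock at t."""
    irf = []
    T = len(y)
    for h in range(max_h + 1):
        yh = y[h:T]                                  # y_{t+h}
        X = np.column_stack([shock[:T - h], controls[:T - h]])
        X = sm.add_constant(X)
        fit = sm.OLS(yh, X).fit(cov_type="HAC", cov_kwds={"maxlags": max(h, 1)})
        irf.append(fit.params[1])                    # coefficient on the shock
    return np.array(irf)
```

The coefficient path across horizons traces out the impulse response of the outcome to the identified external shock.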

Relevance:

30.00%

Publisher:

Abstract:

Understanding the effects of off-balance-sheet transactions on interest and exchange rate exposures has become more important for emerging market countries that are experiencing remarkable growth in derivatives markets. Using firm-level data, we report a significant fall in exposure over the past 10 years and relate this to higher derivatives market participation. Our methodology follows a three-stage approach: first, we measure foreign exchange exposures using the Adler-Dumas (1984) model. Next, we follow an indirect approach to infer derivatives market participation at the firm level. Finally, we study the relationship between exchange rate exposure and derivatives market participation. Our results show that foreign exchange exposure is negatively related to derivatives market participation, and they support the hedging explanation of the exchange rate exposure puzzle. This decline is especially salient in the financial sector, for larger firms, and over longer time periods. Results are robust to using different exchange rates, a GARCH-SVAR approach to measure exchange rate exposure, and different return horizons.
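As a hedged illustration of the first stage, exposure in the spirit of the Adler-Dumas (1984) model can be estimated as the slope of a firm's stock return on the exchange rate return; the optional market-return control follows the common Jorion-type augmentation and is an assumption, since the paper's exact specification is not given in this abstract:

```python
import numpy as np
import statsmodels.api as sm

def fx_exposure(stock_returns, fx_returns, market_returns=None):
    """Estimate a firm's foreign exchange exposure as the coefficient on the
    exchange rate return; optionally control for the market return.
    Inputs are equal-length 1-D arrays of period returns."""
    cols = [fx_returns] if market_returns is None else [fx_returns, market_returns]
    X = sm.add_constant(np.column_stack(cols))
    res = sm.OLS(stock_returns, X).fit()
    return res.params[1], res.bse[1]   # exposure coefficient and its std. error
```

A fall in the absolute value of this coefficient over time is what the abstract refers to as a decline in exposure.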

Relevance:

30.00%

Publisher:

Abstract:

Many patients with anxiety and depression initially seek treatment from their primary care physicians. Changes in insurance coverage and current mental parity laws make reimbursement for services a problem. This has led to a coding dilemma for physicians seeking payment for their services. This study seeks to determine, first, the frequency with which primary care physicians use alternative coding and, second, whether physicians would change their coding practices if reimbursement were assured through changes in mental parity laws. A mail survey was sent to 260 randomly selected primary care physicians (family practice, internal medicine, and general practice) who are members of the Harris County Medical Society. The survey evaluated the physicians' demographics, the number of patients with psychiatric disorders seen by primary care physicians, the frequency with which physicians used alternative coding and, if mental parity laws changed, the rate at which physicians would use a psychiatric illness diagnosis as the primary diagnostic code. The overall response rate was 23%. Only 47 of the 59 physicians who responded qualified for the study; of those, 45% used a psychiatric disorder to diagnose patients with a primary psychiatric disorder, 47% used a somatic/symptom disorder, and 8% used a medical diagnosis. Of the physicians who would not use a psychiatric diagnosis as a primary ICD-9 code, 88% were afraid of not being reimbursed and 12% were worried about stigma or jeopardizing insurability. If payment were assured using a psychiatric diagnostic code, 81% of physicians would use a psychiatric diagnosis as the primary diagnostic code. However, 19% would still use an alternative diagnostic code for fear of stigmatizing patients and/or jeopardizing their insurability. Although the sample size of the study design was adequate, our survey did not have an ideal response rate, and no significant correlation was observed. Nevertheless, it is evident that reimbursement for mental illness continues to be a problem for primary care physicians. The reform of mental parity laws is necessary to ensure that patients receive mental health services and that primary care physicians are reimbursed. Despite the possibility of improved mental parity legislation, some physicians are still hesitant to assign patients a mental illness diagnosis because of the associated stigma, which still plays a role in today's society.

Relevance:

30.00%

Publisher:

Abstract:

Sediments were sampled and oxygen profiles of the water column were determined in the Indian Ocean off west and south Indonesia in order to obtain information on the production, transformation, and accumulation of organic matter (OM). The stable carbon isotope composition (δ13Corg) in combination with C/N ratios indicates the almost exclusively marine origin of sedimentary organic matter in the entire study area. Maximum concentrations of organic carbon (Corg) and nitrogen (N) of 3.0% and 0.31%, respectively, were observed in the northern Mentawai Basin and in the Savu and Lombok basins. Minimum δ15N values of 3.7 per mil were measured in the northern Mentawai Basin, whereas they varied around 5.4 per mil at stations outside this region. Minimum bottom water oxygen concentrations of 1.1 mL L-1, corresponding to an oxygen saturation of 16.1%, indicate reduced ventilation of bottom water in the northern Mentawai Basin. This low bottom water oxygen reduces organic matter decomposition, as demonstrated by the almost unaltered isotopic composition of nitrogen during early diagenesis. Maximum Corg accumulation rates (CARs) were measured in the Lombok (10.4 g C m-2 yr-1) and northern Mentawai (5.2 g C m-2 yr-1) basins. Upwelling-induced high productivity is responsible for the high CAR off East Java and in the Lombok and Savu basins, while better OM preservation caused by reduced ventilation contributes to the high CAR observed in the northern Mentawai Basin. The interplay between primary production, remineralisation, and organic carbon burial determines the regional heterogeneity. CAR in the Indian Ocean upwelling region off Indonesia is lower than in the Peru and Chile upwellings, but of the same order of magnitude as in the Arabian Sea, Benguela, and Gulf of California upwellings, and corresponds to 0.1-7.1% of the global ocean carbon burial. This demonstrates the relevance of the Indian Ocean margin off Indonesia for global OM burial.
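For readers unfamiliar with the quantity, a carbon accumulation rate is commonly derived from organic carbon content, dry bulk density and sedimentation rate. The sketch below uses hypothetical input values (not those of the study) to show the standard unit conversion to g C m-2 yr-1:

```python
def carbon_accumulation_rate(corg_percent, dry_bulk_density, sed_rate):
    """Organic carbon accumulation rate in g C m^-2 yr^-1.

    corg_percent     : organic carbon content of the sediment (weight %)
    dry_bulk_density : dry bulk density (g cm^-3)
    sed_rate         : linear sedimentation rate (cm yr^-1)
    """
    # g C cm^-2 yr^-1, then scaled by 10^4 cm^2 per m^2
    return (corg_percent / 100.0) * dry_bulk_density * sed_rate * 1e4

# Hypothetical example: 2% Corg, 0.8 g cm^-3, 0.065 cm yr^-1 -> ~10.4 g C m^-2 yr^-1
print(carbon_accumulation_rate(2.0, 0.8, 0.065))
```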

Relevance:

30.00%

Publisher:

Abstract:

We quantified postdepositional losses of methane sulfonate (MSA-), nitrate, and chloride at the European Project for Ice Coring in Antarctica (EPICA) drilling site in Dronning Maud Land (DML) (75°S, 0°E). Analyses of four intermediate-depth firn cores and 13 snow pits were considered. We found that about 26 ± 13% of the once-deposited nitrate and typically 51 ± 20% of MSA- were lost, while for chloride no significant depletion could be observed in firn older than one year. Assuming a first-order exponential decay, the characteristic e-folding time is 6.4 ± 3 years for MSA- and 19 ± 6 years for nitrate. It turns out that for nitrate and MSA- the typical mean concentrations representative of the last 100 years were reached after 5.4 and 6.5 years, respectively, indicating that below a depth of around 1.2-1.4 m postdepositional losses can be neglected. In the area of investigation, only MSA- concentrations and postdepositional losses showed a distinct dependence on snow accumulation rate. Consequently, MSA- concentrations archived at this site should be significantly dependent on the variability of annual snow accumulation, and we recommend a corresponding correction. With a simple approach, we estimated the partial pressures of the free acids MSA, HNO3, and HCl on the basis of Henry's law, assuming that ionic impurities of the bulk ice matrix are localized in a quasi-brine layer (QBL). In contrast to the measurements, this approach predicts a nearly complete loss of MSA-, NO3-, and Cl-.
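The first-order decay assumption maps directly onto remaining-fraction estimates. A minimal sketch (tracking only the losable excess above the long-term mean, which is an illustrative assumption) shows how the reported e-folding times translate into fractions remaining after a given number of years:

```python
import numpy as np

def remaining_fraction(t_years, efolding_years):
    """Fraction of the post-depositional (losable) excess still present after
    t_years, for a first-order decay with the given e-folding time."""
    return np.exp(-np.asarray(t_years, dtype=float) / efolding_years)

# e-folding times reported for this site: ~6.4 yr (MSA-) and ~19 yr (nitrate)
for tau, name in [(6.4, "MSA-"), (19.0, "nitrate")]:
    print(name, {t: round(float(remaining_fraction(t, tau)), 2) for t in (1, 5, 10, 20)})
```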

Relevance:

30.00%

Publisher:

Abstract:

Fine-grained sediment depocenters on continental shelves are of increasing scientific interest since they sensitively record environmental changes. A north-south elongated mud depocenter extends along the Senegalese coast in a mid-shelf position. Shallow-acoustic profiling was carried out to determine the extent, geometry and internal structures of this sedimentary body. In addition, four sediment cores were retrieved with the main aim of identifying how paleoclimatic signals and coastal changes have controlled the formation of this mud depocenter. A general paleoclimatic pattern in terms of fluvial input appears to be recorded in this depositional archive. Intervals characterized by high terrigenous input, high sedimentation rates and fine grain sizes occur roughly contemporaneously in all cores and are interpreted as corresponding to intensified river discharge related to more humid conditions in the hinterland. From 2750 to 1900 and from 1000 to 700 cal a BP, wetter conditions are recorded off Senegal, an observation in accordance with other records from NW Africa. Nevertheless, the three employed proxies (sedimentation rate, grain size and elemental distribution) do not always display consistent inter-core patterns. Major differences between the individual core records are attributed to sediment remobilization linked to local hydrographic variations as well as reorganizations of the coastal system. The Senegal mud belt is a layered, inhomogeneous sedimentary body deposited on an irregular erosive surface. The early Holocene deceleration in the rate of sea-level rise could have enabled initial mud deposition on the shelf. These favorable conditions for mud deposition occurred coevally with a humid period over NW Africa and, thus, high river discharge. Sedimentation started preferentially in the northern areas of the mud belt. During the mid-Holocene, a marine incursion led to the formation of an embayment. Afterwards, sedimentation in the north was interrupted in association with a remarkable southward shift in the location of the active depocenter, as reflected by the sedimentary architecture and confirmed by radiocarbon dates. These sub-recent shifts in depocenter location were caused by migrations of the Senegal River mouth. During late Holocene times, the weakening of river discharge allowed the longshore currents to build up a chain of beach barriers, which forced the river mouth to shift southwards.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a measurement of the charged-current interaction rate of the electron neutrino component of the beam above 1.5 GeV, using the large fiducial mass of the T2K π0 detector. The predominant portion of the νe flux (∼85%) at these energies comes from kaon decays. The measured ratio of the observed beam interaction rate to the predicted rate in the detector with the water targets filled is 0.89 ± 0.08 (stat.) ± 0.11 (sys.), and with the water targets emptied it is 0.90 ± 0.09 (stat.) ± 0.13 (sys.). The ratio obtained for the interactions on water only, from an event subtraction method, is 0.87 ± 0.33 (stat.) ± 0.21 (sys.). This is the first measurement of the interaction rate of electron neutrinos on water, which is of particular interest to experiments with water Cherenkov detectors.
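The water-only result comes from subtracting the water-emptied from the water-filled configuration, which is why its statistical error is noticeably larger than either individual ratio. A generic quadrature-propagation sketch illustrates the effect; the numbers below are placeholders, not the T2K event counts:

```python
from math import sqrt

def subtracted_ratio(obs_in, err_in, obs_out, err_out, pred_water):
    """Ratio of (observed water-in minus water-out events) to the predicted
    number of interactions on water, with statistical errors added in quadrature."""
    n_water = obs_in - obs_out
    err = sqrt(err_in**2 + err_out**2)
    return n_water / pred_water, err / pred_water

# Placeholder numbers only: the relative error of the subtracted sample exceeds
# the relative error of either configuration on its own.
ratio, stat = subtracted_ratio(900, 30, 450, 21, 520)
print(f"{ratio:.2f} +/- {stat:.2f} (stat.)")
```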

Relevance:

30.00%

Publisher:

Abstract:

Abstract: Temperature is a first-class design concern in modern integrated circuits. The significant increase in power densities associated with recent technology generations has led to the appearance of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight against these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip based on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based on a mechanism that obtains a pulse whose width depends on the variation of leakage currents with temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm2, and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works at the time it was first published and, at the time of publication of this thesis, they still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity; even without calibration it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations of recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inference technique is proposed. In this case, we also rely on the thermal dependencies of leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measurements, in one of which we alter a characteristic of the discharging transistor: the gate voltage.
This ratio proves to be very robust against process variations and displays more than sufficient linearity with temperature: a 1.17 °C 3σ error considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique raises several issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. To perform the time-to-digital conversion, we employ the same digitization structure as the former sensor. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm2; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new quality metrics apart from just the number of sensors; we consider the power consumption, the sampling frequency, the interconnection costs, and the possibility of choosing among different types of monitors. The model feeds a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their positions and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. Compared to previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient from the area and power perspectives. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses; this level applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller's viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and directly yields an ordered list of values from the maximum to the minimum, as illustrated in the sketch below. If the scheme is applied to monitors that employ time-to-digital conversion, digitization resources can be shared, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented; they significantly outperform previous works in terms of area and, especially, power consumption.
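The single-wire, time-domain readout can be illustrated with a toy model; the linear delay mapping and parameter values below are illustrative assumptions, not the thesis' actual circuit behavior:

```python
def time_domain_readout(temperatures, t_max=100.0, full_scale=(0.0, 120.0)):
    """Toy model of a time-domain signaling readout: each monitor asserts the
    shared wire after a delay that decreases with its reading, so the controller
    receives the values already ordered from maximum to minimum."""
    lo, hi = full_scale
    events = []
    for sensor_id, temp in enumerate(temperatures):
        delay = t_max * (hi - temp) / (hi - lo)   # hotter sensors fire earlier
        events.append((delay, sensor_id, temp))
    return [(sid, t) for _, sid, t in sorted(events)]

# Example: four on-chip monitors; the first event identifies the hottest spot
print(time_domain_readout([62.5, 85.0, 71.2, 90.3]))
# -> [(3, 90.3), (1, 85.0), (2, 71.2), (0, 62.5)]
```

Because only one event per monitor travels on the wire and the controller often needs just the first few (extreme) values, switching activity and digitization effort drop compared with sending full digital words from every sensor.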

Relevance:

30.00%

Publisher:

Abstract:

Information reconciliation is a crucial procedure in the classical post-processing of quantum key distribution (QKD). Poor reconciliation efficiency, revealing more information than strictly needed, may compromise the maximum attainable distance, while poor performance of the algorithm limits the practical throughput in a QKD device. Historically, reconciliation has mainly been done using procedures with close to minimal information disclosure but heavy interactivity, like Cascade, or using less efficient but also less interactive (just one message is exchanged) procedures, like the ones based on low-density parity-check (LDPC) codes. The price to pay in the LDPC case is that good efficiency is only attained for very long codes and in a very narrow range centered around the quantum bit error rate (QBER) that the code was designed to reconcile, thus forcing the use of several codes if a broad range of QBER needs to be catered for. Real-world implementations of these methods are thus very demanding, either on computational or communication resources or both, to the extent that the latest generation of GHz-clocked QKD systems are finding a bottleneck in the classical part. In order to produce compact, high-performance and reliable QKD systems, it would be highly desirable to remove these problems. Here we analyse the use of short-length LDPC codes in the information reconciliation context using a low-interactivity, blind protocol that avoids an a priori error rate estimation. We demonstrate that LDPC codes of 2 × 10³ bits in length are suitable for blind reconciliation. Such codes are of high interest in practice, since they can be used for hardware implementations with very high throughput.
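In LDPC-based reconciliation, Alice typically discloses the syndrome of her string and Bob decodes his correlated string towards that syndrome. The toy sketch below shows the syndrome step together with a naive hard-decision bit-flipping loop; real systems, including the blind protocol studied here, use belief-propagation decoding, so this simplified decoder is only illustrative:

```python
import numpy as np

def syndrome(H, x):
    """Syndrome of bit string x for a binary parity-check matrix H (mod-2).
    H is an (m, n) array of 0/1 ints; x is a length-n array of 0/1 ints."""
    return H.dot(x) % 2

def bitflip_reconcile(H, y, s_alice, max_iter=50):
    """Toy syndrome decoder: repeatedly flip the bit involved in the most
    unsatisfied checks until Bob's string y matches Alice's syndrome s_alice."""
    y = y.copy()
    for _ in range(max_iter):
        unsatisfied = (syndrome(H, y) + s_alice) % 2
        if not unsatisfied.any():
            return y                       # reconciled
        votes = H.T.dot(unsatisfied)       # per-bit count of failing checks
        y[np.argmax(votes)] ^= 1
    return y

# Usage idea: Alice sends s = syndrome(H, x); Bob runs bitflip_reconcile(H, y, s).
```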

Relevance:

30.00%

Publisher:

Abstract:

Puncturing is a well-known coding technique widely used for constructing rate-compatible codes. In this paper, we consider the problem of puncturing low-density parity-check codes and propose a new algorithm for intentional puncturing. The algorithm is based on the puncturing of untainted symbols, i.e. nodes with no punctured symbols within their neighboring set. It is shown that the proposed algorithm performs better than previous proposals for a range of coding rates and small proportions of punctured symbols.
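A minimal sketch of the untainted-symbol idea follows; the greedy loop and random tie-breaking are assumptions made for illustration, since the paper's exact selection rule is not given in this abstract:

```python
import numpy as np

def intentional_puncturing(H, num_punctures, rng=None):
    """Greedily puncture only 'untainted' symbols: a symbol is untainted if none
    of the checks it participates in already touches a punctured symbol,
    i.e. no punctured symbol lies in its neighboring set."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = H.shape
    punctured = []
    tainted_checks = np.zeros(m, dtype=bool)
    for _ in range(num_punctures):
        candidates = [v for v in range(n)
                      if v not in punctured
                      and not tainted_checks[H[:, v] == 1].any()]
        if not candidates:
            break                          # no untainted symbol left
        v = int(rng.choice(candidates))
        punctured.append(v)
        tainted_checks |= (H[:, v] == 1)   # its checks become tainted
    return punctured
```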

Relevance:

30.00%

Publisher:

Abstract:

The postprocessing or secret-key distillation process in quantum key distribution (QKD) mainly involves two well-known procedures: information reconciliation and privacy amplification. Information or key reconciliation has customarily been studied in terms of efficiency. During this step, some information needs to be disclosed in order to reconcile discrepancies in the exchanged keys. The leakage of information is lower bounded by a theoretical limit and is usually parameterized by the reconciliation efficiency (or inefficiency), i.e. the ratio of the information disclosed to the Shannon limit. Most techniques for reconciling errors in QKD try to optimize this parameter. For instance, the well-known Cascade protocol (probably the most widely used procedure for reconciling errors in QKD) was recently shown to have an average efficiency of 1.05, at the cost of high interactivity (number of exchanged messages). Modern coding techniques, such as rate-adaptive low-density parity-check (LDPC) codes, have also been shown to achieve similar efficiency values while exchanging only one message, or even better values with little interactivity and shorter block-length codes.
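The efficiency parameter can be made concrete with a small sketch (binary symmetric channel assumption, illustrative numbers): the information leaked during reconciliation is f · n · h(QBER), where h is the binary entropy and f = 1 corresponds to the Shannon limit:

```python
from math import log2

def binary_entropy(p):
    """Binary entropy h(p) in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def leakage_bits(n, qber, efficiency):
    """Information disclosed when reconciling an n-bit key at the given QBER,
    for a protocol with reconciliation efficiency f >= 1 (f = 1 is the Shannon
    limit; ~1.05 is the average value reported for Cascade)."""
    return efficiency * n * binary_entropy(qber)

# Example: 10^5-bit sifted key at 2% QBER
print(leakage_bits(1e5, 0.02, 1.00))   # Shannon limit: ~14144 bits
print(leakage_bits(1e5, 0.02, 1.05))   # Cascade-like:  ~14852 bits
```

Every extra disclosed bit must later be removed during privacy amplification, which is why lowering f while keeping interactivity and block length small is the central trade-off discussed here.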

Relevance:

30.00%

Publisher:

Abstract:

ABSTRACT Will renewable energy sources one day supply all of the world's energy needs? Some argue yes, while others say no. However, in some regions of the world, electricity production from renewable energy sources is already at a promising stage of development at which its generation costs compete with those of conventional electricity sources, i.e., grid parity. This achievement has been underpinned by increases in technology efficiency, reductions in production costs and, above all, years of policy interventions providing financial support. The diffusion of solar photovoltaic (PV) systems in Germany is an important frontrunner case in point. Germany is not only the top country in terms of installed PV system capacity worldwide but also one of the pioneer countries where grid parity has recently been achieved. However, there might be a cloud on the horizon. The diffusion rate has started to decline in many regions.
In addition, local solar firms – which are known to be important drivers of diffusion – have started to face difficulties in running their businesses. These developments raise some important questions: Is this a temporary decline in diffusion? Will adopters continue to install PV systems? What about the business models of the local solar firms? Based on the case of PV systems in Germany, through a multi-level analysis and two complementary literature reviews, this PhD Dissertation extends the debate by providing a wealth of empirical detail in an area of limited contextual knowledge. The first analysis is based on the adopter perspective, which explores the "micro" level and the social process underlying the adoption of PV systems. The second is a firm-level perspective, which explores the business models of firms and their driving roles in the diffusion of PV systems. The third is a regional perspective, which explores the "meso" level, i.e., the social process underlying the adoption of PV systems and its modeling techniques. The results include implications for both scholars and policymakers, not only about renewable energy innovations at grid parity, but also, in an inductive manner, about policy-driven environmental innovations that achieve cost competitiveness.