82 results for Andreas Wohlfahrt
Abstract:
Energy in today's short-range wireless communication is mostly spent on the analog and digital hardware rather than on radiated power. Hence, purely information-theoretic considerations fail to achieve the lowest energy per information bit, and the optimization process must carefully consider the overall transceiver. In this paper, we propose to perform cross-layer optimization, based on an energy-aware rate adaptation scheme combined with a physical layer that is able to properly adjust its processing effort to the data rate and the channel conditions to minimize the energy consumption per information bit. This energy-proportional behavior is enabled by extending the classical system modes with additional configuration parameters at the various layers. Fine-grained models of the power consumption of the hardware are developed to provide awareness of the physical-layer capabilities to the medium access control layer. The joint application of the proposed energy-aware rate adaptation and modifications to the physical layer of an IEEE 802.11n system improves energy efficiency (averaged over many noise and channel realizations) in all considered scenarios by up to 44%.
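As a hedged illustration of the rate-adaptation idea sketched above (the power and packet-success models below are made-up placeholders, not the fine-grained hardware models developed in the paper), the following Python snippet picks the transceiver configuration that minimizes energy per successfully delivered information bit:

```python
# Minimal sketch of energy-aware rate adaptation (hypothetical models).
# For each candidate configuration we estimate total transceiver power and
# effective throughput under the current channel, then pick the configuration
# with the lowest energy per information bit.

from dataclasses import dataclass

@dataclass
class Config:
    name: str
    phy_rate_mbps: float   # nominal PHY rate
    power_mw: float        # total transceiver power (analog + digital) in this mode

def success_prob(cfg: Config, snr_db: float) -> float:
    """Hypothetical packet success model: higher rates need more SNR."""
    margin = snr_db - 0.5 * cfg.phy_rate_mbps
    return max(0.01, min(0.99, 0.5 + 0.05 * margin))

def energy_per_bit_nj(cfg: Config, snr_db: float) -> float:
    goodput_mbps = cfg.phy_rate_mbps * success_prob(cfg, snr_db)
    return cfg.power_mw / goodput_mbps  # mW / Mbps = nJ per bit

def best_config(configs, snr_db):
    return min(configs, key=lambda c: energy_per_bit_nj(c, snr_db))

if __name__ == "__main__":
    configs = [Config("MCS0", 6.5, 180.0), Config("MCS3", 26.0, 220.0),
               Config("MCS7", 65.0, 310.0)]
    for snr in (5.0, 15.0, 30.0):
        c = best_config(configs, snr)
        print(f"SNR {snr:4.1f} dB -> {c.name}, "
              f"{energy_per_bit_nj(c, snr):.2f} nJ/bit")
```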
Abstract:
Embedded memories account for a large fraction of the overall silicon area and power consumption in modern SoCs. While embedded memories are typically realized with SRAM, alternative solutions, such as embedded dynamic memories (eDRAM), can provide higher density and/or reduced power consumption. One major challenge that impedes the widespread adoption of eDRAM is the need for frequent refreshes, which potentially reduce the availability of the memory in periods of high activity and also consume a significant amount of power. Reducing the refresh rate can lower this power overhead, but if refreshes are not performed in a timely manner, some cells may lose their content, potentially resulting in memory errors. In this paper, we consider extending the refresh period of gain-cell based dynamic memories beyond the worst-case point of failure, assuming that the resulting errors can be tolerated when the use cases lie in the domain of inherently error-resilient applications. For example, we observe that for various data mining applications, a large number of memory failures can be accepted with tolerable imprecision in output quality. In particular, our results indicate that by allowing as many as 177 errors in a 16 kB memory, the maximum loss in output quality is 11%. We use this failure limit to study the impact of relaxing reliability constraints on memory availability and retention power for different technologies.
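A minimal sketch of the refresh-period/error-budget tradeoff discussed above, assuming a purely hypothetical log-normal retention-time distribution (the paper characterizes the actual, technology-dependent distributions): given a tolerated number of failing cells, it searches for the longest refresh period whose expected failure count stays within that budget.

```python
# Sketch: pick the longest refresh period such that the expected number of
# cells with retention time shorter than the period stays below a tolerated
# error budget. The log-normal retention model here is purely illustrative.

import math

MEM_BITS = 16 * 1024 * 8            # 16 kB memory
MU, SIGMA = math.log(200e-6), 0.6   # hypothetical retention stats (median 200 us)

def fail_fraction(refresh_period_s: float) -> float:
    """P(retention time < refresh period) under the log-normal model."""
    z = (math.log(refresh_period_s) - MU) / SIGMA
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def max_refresh_period(tolerated_failures: int) -> float:
    lo, hi = 1e-6, 10e-3
    for _ in range(60):              # bisection on the monotone failure count
        mid = math.sqrt(lo * hi)
        if fail_fraction(mid) * MEM_BITS <= tolerated_failures:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    for budget in (1, 177, 1000):
        t = max_refresh_period(budget)
        print(f"<= {budget:5d} failing cells -> refresh every {t*1e6:7.1f} us")
```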
Abstract:
Defects in primary cilium biogenesis underlie the ciliopathies, a growing group of genetic disorders. We describe a whole-genome siRNA-based reverse genetics screen for defects in biogenesis and/or maintenance of the primary cilium, obtaining a global resource. We identify 112 candidate ciliogenesis and ciliopathy genes, including 44 components of the ubiquitin-proteasome system, 12 G-protein-coupled receptors, and 3 pre-mRNA processing factors (PRPF6, PRPF8 and PRPF31) mutated in autosomal dominant retinitis pigmentosa. The PRPFs localize to the connecting cilium, and PRPF8- and PRPF31-mutated cells have ciliary defects. Combining the screen with exome sequencing data identified recessive mutations in PIBF1, also known as CEP90, and C21orf2, also known as LRRC76, as causes of the ciliopathies Joubert and Jeune syndromes. Biochemical approaches place C21orf2 within key ciliopathy-associated protein modules, offering an explanation for the skeletal and retinal involvement observed in individuals with C21orf2 variants. Our global, unbiased approaches provide insights into ciliogenesis complexity and identify roles for unanticipated pathways in human genetic disease.
Abstract:
The formation rate of university spin-out firms has increased markedly over the past decade. While this is seen as an important channel for the commercialisation of academic research, concerns have centred on high failure rates and no-to-low growth among those that survive compared to other new technology-based firms. Universities have responded to this by investing in incubators to assist spin-outs in overcoming their liability of newness. Yet how effective are incubators in supporting these firms? Here we examine this in terms of the structural networks that spin-out firms form, the role of the incubator in this, and the effect of this on the spin-out process.
Abstract:
Cytokine secretion and degranulation represent key components of CD8(+) T-cell cytotoxicity. While transcriptional blockade of IFN-γ and inhibition of degranulation by TGF-β are well established, we wondered whether TGF-β could also induce immune-regulatory miRNAs in human CD8(+) T cells. We used miRNA microarrays and high-throughput sequencing in combination with qRT-PCR and found that TGF-β promotes expression of the miR-23a cluster in human CD8(+) T cells. Likewise, TGF-β up-regulated expression of the cluster in CD8(+) T cells from wild-type mice, but not in cells from mice with tissue-specific expression of a dominant-negative TGF-β type II receptor. Reporter gene assays including site mutations confirmed that miR-23a specifically targets the 3'UTR of CD107a/LAMP1 mRNA, whereas the other miRNAs expressed in this cluster, namely miR-27a and miR-24, target the 3'UTR of IFN-γ mRNA. Upon modulation of the miR-23a cluster by the respective miRNA antagomirs and mimics, we observed significant changes in IFN-γ expression, but only slight effects on CD107a/LAMP1 expression. Still, overexpression of the cluster attenuated the cytotoxic activity of antigen-specific CD8(+) T cells. These functional data thus reveal that the miR-23a cluster not only is induced by TGF-β, but also exerts a suppressive effect on CD8(+) T-cell effector functions, even in the absence of TGF-β signaling.
Disseminated tumor cells and their prognostic significance in nonmetastatic prostate cancer patients
Abstract:
Detection of pretreatment disseminated tumor cells (pre-DTCs), reflecting their homing to bone marrow (BM), in prostate cancer (PCa) might improve the current model to predict recurrence or survival in men with nonmetastatic disease irrespective of primary treatment. Thereby, pre-DTCs may serve as an early prognostic biomarker. Post-treatment DTC (post-DTC) findings may supply the clinician with additional predictive information about the possible course of PCa. The aim was to assess the prognostic impact of DTCs in BM aspirates sampled before initiation of primary therapy (pre-DTC) and at least 2 years afterwards (post-DTC) relative to established prognostic factors and survival in patients with PCa. Available BM of 129 long-term follow-up patients with T1-3N0M0 PCa was assessed, in addition to 100 BM samples of those in whom a pretreatment BM had been sampled. Patients received either combined therapy [n = 81 (63%)], radiotherapy (RT) with different durations of hormone treatment (HT), or monotherapy with RT or HT alone [n = 48 (37%)], adapted to the criteria of the SPCG-7 trial. Mononuclear cells were deposited on slides according to the cytospin methodology, and DTCs were identified by immunocytochemistry using the pancytokeratin antibodies AE1/AE3. The median age of the men at diagnosis was 64.5 years (range 49.5-73.4 years). The median long-term follow-up from first BM sampling to last observation was 11 years. Among categorized clinically relevant factors in PCa, only pre-DTC status emerged as a statistically independent parameter for survival in the multivariate analysis. Pre-DTCs homing to BM are significantly associated with clinically relevant outcome, independent of the patient's treatment, in men diagnosed with nonmetastatic PCa.
Abstract:
Climate model projections suggest widespread drying in the Mediterranean Basin and wetting in Fennoscandia in the coming decades, largely as a consequence of greenhouse gas forcing of climate. To place these and other “Old World” climate projections into historical perspective based on more complete estimates of natural hydroclimatic variability, we have developed the “Old World Drought Atlas” (OWDA), a set of year-to-year maps of tree-ring reconstructed summer wetness and dryness over Europe and the Mediterranean Basin during the Common Era. The OWDA matches historical accounts of severe drought and wetness with a spatial completeness not previously available. In addition, megadroughts reconstructed over north-central Europe in the 11th and mid-15th centuries reinforce other evidence from North America and Asia that droughts were more severe, extensive, and prolonged over Northern Hemisphere land areas before the 20th century, with an inadequate understanding of their causes. The OWDA provides new data to determine the causes of Old World drought and wetness and to attribute past climate variability to forced and/or internal variability.
Abstract:
Current variation-aware design methodologies, tuned for worst-case scenarios, are becoming increasingly pessimistic from the perspective of power and performance. A good example of such pessimism is setting the refresh rate of DRAMs according to the worst-case access statistics, resulting in very frequent refresh cycles, which are responsible for the majority of the standby power consumption of these memories. However, such a high refresh rate may not be required, either due to the extremely low probability of the actual occurrence of such a worst case, or due to the inherent error-resilient nature of many applications that can tolerate a certain number of potential failures. In this paper, we exploit and quantify the possibilities that exist in dynamic memory design by shifting to the so-called approximate computing paradigm in order to save power and enhance yield at no cost. The statistical characteristics of the retention time in dynamic memories were revealed by studying a fabricated 2 kb CMOS-compatible embedded DRAM (eDRAM) memory array based on gain cells. Measurements show that up to 73% of the retention power can be saved by altering the refresh time and setting it such that a small number of failures is allowed. We show that these savings can be further increased by utilizing known circuit techniques, such as body biasing, which can help not only in extending, but also in favorably shaping, the retention time distribution. Our approach is one of the first attempts to assess the data integrity and energy tradeoffs achievable in eDRAMs for use in error-resilient applications, and can prove helpful in the anticipated shift to approximate computing.
Abstract:
Static timing analysis provides the basis for setting the clock period of a microprocessor core, based on its worst-case critical path. However, depending on the design, this critical path is not always excited, and therefore dynamic timing margins exist that can theoretically be exploited for the benefit of higher speed or lower power consumption (through voltage scaling). This paper introduces predictive instruction-based dynamic clock adjustment as a technique to trim dynamic timing margins in pipelined microprocessors. To this end, we exploit the different timing requirements of individual instructions during the dynamically varying program execution flow, without the need for complex circuit-level measures to detect and correct timing violations. We provide a design flow to extract the dynamic timing information for the design using post-layout dynamic timing analysis, and we integrate the results into a custom cycle-accurate simulator. This simulator allows annotation of individual instructions with their impact on timing (in each pipeline stage) and rapidly derives the overall code execution time for complex benchmarks. The design methodology is illustrated at the microarchitecture level, demonstrating the performance and power gains possible on a 6-stage OpenRISC in-order general-purpose processor core in a 28 nm CMOS technology. We show that employing instruction-dependent dynamic clock adjustment leads, on average, to an increase in operating speed by 38% or to a reduction in power consumption by 24%, compared to traditional synchronous clocking, which at all times has to respect the worst-case timing identified through static timing analysis.
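To make the accounting behind instruction-dependent clock adjustment concrete, the sketch below uses invented per-instruction delays rather than the post-layout timing data of the 28 nm OpenRISC design; each cycle is clocked at the slowest delay among the instructions currently in flight, while the baseline always pays the global worst case.

```python
# Sketch of instruction-dependent dynamic clock adjustment on an in-order
# pipeline. Per cycle, the clock period is the maximum delay over the
# instructions currently occupying the pipeline stages; the baseline clock
# always uses the global worst-case delay. Delay numbers are illustrative.

STAGES = 6

# Hypothetical worst-case delay (ns) per instruction class.
STAGE_DELAY = {"alu": 0.9, "load": 1.25, "store": 1.1, "branch": 1.0, "nop": 0.6}
WORST_CASE = max(STAGE_DELAY.values())

def execution_time_ns(program):
    """Cycle-accurate time for a simple in-order pipeline, no stalls modeled."""
    pipeline = ["nop"] * STAGES
    dynamic, static = 0.0, 0.0
    total_cycles = len(program) + STAGES        # fill + drain
    for cycle in range(total_cycles):
        incoming = program[cycle] if cycle < len(program) else "nop"
        pipeline = [incoming] + pipeline[:-1]   # advance the pipeline
        dynamic += max(STAGE_DELAY[i] for i in pipeline)
        static += WORST_CASE
    return dynamic, static

if __name__ == "__main__":
    program = ["alu", "alu", "load", "alu", "branch", "store"] * 100
    dyn, stat = execution_time_ns(program)
    print(f"dynamic clocking: {dyn:8.1f} ns, static worst-case: {stat:8.1f} ns "
          f"({100 * (1 - dyn / stat):.1f}% faster)")
```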
Abstract:
In this paper, we introduce a statistical data-correction framework that aims at improving DSP system performance in the presence of unreliable memories. The proposed signal processing framework implements best-effort error mitigation for signals that are corrupted by defects in unreliable storage arrays, using a statistical correction function extracted from the signal statistics, a data-corruption model, and an application-specific cost function. An application example from communication systems demonstrates the efficacy of the proposed approach.
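As a self-contained illustration of such a statistical correction function (not the paper's actual formulation), the sketch below combines a hypothetical signal prior, an independent bit-flip corruption model, and a squared-error cost, for which the best-effort estimate of a corrupted word is its posterior mean.

```python
# Sketch of statistical data correction for an unreliable memory.
# Given a prior distribution over stored values, a per-bit flip probability,
# and a squared-error cost, the read-out word is replaced by the posterior
# mean E[x | y], which minimizes the expected cost. All numbers illustrative.

import numpy as np

BITS = 8
P_FLIP = 0.02                      # per-bit defect/corruption probability

values = np.arange(2 ** BITS)
prior = np.exp(-0.5 * ((values - 128) / 20.0) ** 2)   # hypothetical signal prior
prior /= prior.sum()

def likelihood(y: int) -> np.ndarray:
    """P(read y | stored x) for every x under independent bit flips."""
    hamming = np.array([bin(int(x) ^ y).count("1") for x in values])
    return (P_FLIP ** hamming) * ((1 - P_FLIP) ** (BITS - hamming))

def correct(y: int) -> float:
    """Posterior-mean estimate of the stored value given the corrupted read y."""
    post = likelihood(y) * prior
    post /= post.sum()
    return float(np.dot(post, values))

if __name__ == "__main__":
    stored = 130
    corrupted = stored ^ 0b1000_0000          # MSB flip: raw error of 128
    print(f"raw read      : {corrupted}")
    print(f"corrected read: {correct(corrupted):.1f} (stored {stored})")
```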
Abstract:
The worsening of process variations and the consequent increased spreads in circuit performance and power consumption hinder the satisfaction of the targeted budgets and lead to yield loss. Corner-based design and the adoption of design guardbands might limit the yield loss. However, in many cases such methods cannot capture the real effects, which may be far less severe than predicted, leading to increasingly pessimistic designs. The situation is even more severe in memories, which consist of substantially different individual building blocks; this further complicates the accurate analysis of the impact of variations at the architecture level, leaving many potential issues uncovered and opportunities unexploited. In this paper, we develop a framework for capturing non-trivial statistical interactions among all the components of a memory/cache. The developed tool is able to find the optimum memory/cache configuration under various constraints, allowing designers to make the right choices early in the design cycle and consequently improve performance, energy, and especially yield. Our results indicate that considering the architectural interactions between the memory components allows the pessimistic access times predicted by existing techniques to be relaxed.
Abstract:
The area and power consumption of low-density parity-check (LDPC) decoders are typically dominated by embedded memories. To alleviate such high memory costs, this paper exploits the fact that all internal memories of an LDPC decoder are frequently updated with new data. These unique memory access statistics are taken advantage of by replacing all static standard-cell based memories (SCMs) of a prior-art LDPC decoder implementation with dynamic SCMs (D-SCMs), which are designed to retain data just long enough to guarantee reliable operation. The use of D-SCMs leads to a 44% reduction in the silicon area of the LDPC decoder compared to the use of static SCMs. The low-power LDPC decoder architecture with refresh-free D-SCMs was implemented in a 90 nm CMOS process, and silicon measurements show full functionality and an information bit throughput of up to 600 Mbps (as required by the IEEE 802.11n standard).
Abstract:
In this paper, we investigate the impact of faulty memory bit-cells on the performance of LDPC and Turbo channel decoders, based on realistic memory failure models. Our study examines the inherent error resilience of such codes to potential memory faults affecting the decoding process. We develop two mitigation mechanisms that reduce the impact of memory faults rather than correcting every single error. We show that protection of only a few bit-cells is sufficient to deal with high defect rates. In addition, we show how the use of repair iterations specifically helps mitigate the impact of faults that occur inside the decoder itself.
Abstract:
Inherently error-resilient applications in areas such as signal processing, machine learning, and data analytics provide opportunities for relaxing reliability requirements, and thereby reducing the overhead incurred by conventional error correction schemes. In this paper, we exploit the tolerable imprecision of such applications by designing an energy-efficient fault-mitigation scheme for unreliable data memories that meets a target yield. The proposed approach uses a bit-shuffling mechanism to isolate faults into bit locations of lower significance. This skews the bit-error distribution towards the low-order bits, substantially limiting the output error magnitude. By controlling the granularity of the shuffling, the proposed technique enables trading off quality for power, area, and timing overhead. Compared to error-correction codes, this can reduce the overhead by as much as 83% in read power, 77% in read access time, and 89% in area, when applied to various data mining applications in a 28 nm process technology.
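A minimal sketch of the bit-shuffling idea, using a hypothetical stuck-at fault map and ignoring the granularity control described above: bits of each word are permuted so that the known-faulty cells hold the least significant bits, which bounds the numeric error a fault can introduce.

```python
# Sketch of fault-aware bit shuffling for an unreliable data memory.
# A per-word permutation maps logical bit positions to physical cells so that
# known-faulty cells end up holding the least significant bits; any stuck-at
# fault then corrupts only low-order bits. Fault map and word size illustrative.

WORD_BITS = 16

def build_permutation(faulty_cells):
    """Physical cell index for each logical bit (logical bit 0 = LSB)."""
    healthy = [c for c in range(WORD_BITS) if c not in faulty_cells]
    # LSBs go to faulty cells first, remaining bits to healthy cells.
    return list(faulty_cells) + healthy

def write_word(value, perm, stuck_at):
    """Store value through the permutation into cells with stuck-at faults."""
    cells = [0] * WORD_BITS
    for logical in range(WORD_BITS):
        bit = (value >> logical) & 1
        phys = perm[logical]
        cells[phys] = stuck_at.get(phys, bit)   # faulty cell ignores the write
    return cells

def read_word(cells, perm):
    return sum(cells[perm[logical]] << logical for logical in range(WORD_BITS))

if __name__ == "__main__":
    stuck_at = {3: 1, 11: 1}                    # hypothetical stuck-at faults
    shuffled = build_permutation(sorted(stuck_at))
    identity = list(range(WORD_BITS))
    value = 0b0011_0100_1010_0110
    for name, perm in (("no shuffling", identity), ("bit shuffling", shuffled)):
        got = read_word(write_word(value, perm, stuck_at), perm)
        print(f"{name:13s}: wrote {value}, read {got}, error {abs(value - got)}")
```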
Abstract:
We consider the problem of linking web search queries to entities from a knowledge base such as Wikipedia. Such linking enables converting a user’s web search session to a footprint in the knowledge base that can be used to enrich the user profile. Traditional methods for entity linking have been directed towards finding entity mentions in text documents such as news reports, each of which is possibly linked to multiple entities, enabling the use of measures like entity set coherence. Since web search queries are very small text fragments, such criteria, which rely on the existence of a multitude of mentions, do not work well on them. We propose a three-phase method for linking web search queries to Wikipedia entities. The first phase performs IR-style scoring of entities against the search query to narrow down to a subset of entities, which are expanded using hyperlink information in the second phase to a larger set. Lastly, we use a graph traversal approach to identify the top entities to link the query to. Through an empirical evaluation on real-world web search queries, we illustrate that our methods significantly enhance the linking accuracy over state-of-the-art methods.
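A schematic sketch of the three-phase pipeline described above, with placeholder lexical scoring, a toy link graph, and a personalized-PageRank-style traversal standing in for the actual retrieval and graph models used in the paper:

```python
# Schematic sketch of the three-phase query-to-entity linking pipeline:
#   1) IR-style scoring narrows the knowledge base to a seed set,
#   2) hyperlink expansion grows the seed set with linked entities,
#   3) a graph walk (PageRank-style) over the expanded set picks the top links.
# The scoring function, link graph, and damping factor are illustrative.

from collections import defaultdict

def ir_score(query_terms, entity, descriptions):
    """Toy lexical overlap score between the query and an entity description."""
    terms = set(descriptions[entity].lower().split())
    return len(set(query_terms) & terms)

def phase1_seed(query, descriptions, k=3):
    terms = query.lower().split()
    ranked = sorted(descriptions, key=lambda e: ir_score(terms, e, descriptions),
                    reverse=True)
    return ranked[:k]

def phase2_expand(seeds, links):
    expanded = set(seeds)
    for e in seeds:
        expanded.update(links.get(e, []))
    return expanded

def phase3_rank(candidates, seeds, links, iters=20, damping=0.85):
    """Personalized PageRank over the candidate subgraph, restarted at seeds."""
    rank = {e: (1.0 / len(seeds) if e in seeds else 0.0) for e in candidates}
    for _ in range(iters):
        new = defaultdict(float)
        for e in candidates:
            out = [t for t in links.get(e, []) if t in candidates]
            for t in out:
                new[t] += damping * rank[e] / len(out)
        for s in seeds:
            new[s] += (1.0 - damping) / len(seeds)
        rank = {e: new[e] for e in candidates}
    return sorted(candidates, key=lambda e: rank[e], reverse=True)

if __name__ == "__main__":
    descriptions = {"Python_(language)": "python programming language",
                    "Python_(snake)": "python large snake species",
                    "Guido_van_Rossum": "creator of the python programming language"}
    links = {"Python_(language)": ["Guido_van_Rossum"],
             "Guido_van_Rossum": ["Python_(language)"]}
    seeds = phase1_seed("python language", descriptions, k=2)
    candidates = phase2_expand(seeds, links)
    top = phase3_rank(candidates, seeds, links)
    print("linked entities:", top[:2])
```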