Abstract:
Sphere Decoding (SD) is a highly effective detection technique for Multiple-Input Multiple-Output (MIMO) wireless communications receivers, offering quasi-optimal accuracy with relatively low computational complexity compared to the ideal ML detector. Despite this, the computational demands of even low-complexity SD variants, such as Fixed Complexity SD (FSD), remain such that implementation on modern software-defined network equipment is highly challenging, and indeed real-time solutions for MIMO systems such as 4x4 16-QAM 802.11n are unreported. This paper overcomes this barrier. By exploiting large-scale networks of fine-grained software-programmable processors on Field Programmable Gate Array (FPGA), a series of unique SD implementations is presented, culminating in the only single-chip, real-time quasi-optimal SD for 4x4 16-QAM 802.11n MIMO. Furthermore, it demonstrates that the high-performance software-defined architectures which enable these implementations exhibit cost comparable to dedicated circuit architectures.
Abstract:
Groundwater drawn from fluvioglacial sand and gravel aquifers forms the principal source of drinking water in many parts of central Western Europe. High population densities and widespread organic agriculture in these same areas constitute hazards that may impact the microbiological quality of many potable supplies. Tracer testing comparing two similarly sized bacteria (E. coli and P. putida) and the smaller bacteriophage (H40/1) with the response of a non-reactive solute tracer (uranine) at the decametre scale revealed that all tracers broke through up to 100 times more quickly than anticipated using conventional rules of thumb. All microbiological tracer responses were less disperse than the solute, although bacterial peak relative concentrations consistently exceeded those of the solute tracer at one sampling location, reflecting exclusion processes influencing microbiological tracer migration. Relative recoveries of H40/1 and E. coli proved consistent at both monitoring wells, while responses of H40/1 and P. putida differed. Examination of exposures of the upper reaches of the aquifer in nearby sand and gravel quarries revealed the aquifer to consist of laterally extensive layers of open-framework (OW) gravel enveloped in finer-grained gravelly sand. Granulometric analysis of these deposits suggested that the OW gravel was up to two orders of magnitude more permeable than the surrounding deposits, giving rise to the preferential flow paths. By contrast, fine-grained lenses of silty sand within the OW gravels are suspected to play an important role in the exclusion processes that permit solutes to access them but exclude larger microorganisms.
Abstract:
The use of efficient synchronization mechanisms is crucial for implementing fine-grained parallel programs on modern shared-cache multi-core architectures. In this paper we study this problem by considering Single-Producer/Single-Consumer (SPSC) coordination using unbounded queues. A novel unbounded SPSC algorithm capable of reducing synchronization latency and speeding up Producer-Consumer coordination is presented. The algorithm has been extensively tested on a shared-cache multi-core platform and a sketch proof of correctness is presented. The queues proposed have been used as basic building blocks to implement the FastFlow parallel framework, which has been demonstrated to offer very good performance for fine-grain parallel applications. © 2012 Springer-Verlag.
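The key property that makes SPSC coordination cheap is that each queue index has exactly one writer: the producer owns the tail, the consumer owns the head, so no lock is needed. The sketch below illustrates that idea with a minimal bounded ring buffer; it is not the paper's unbounded lock-free algorithm (FastFlow obtains unboundedness by chaining bounded buffers of roughly this kind), and all names here are illustrative.

```python
class SPSCQueue:
    """Minimal bounded single-producer/single-consumer ring buffer.

    Illustrative sketch only: `head` is written only by the consumer
    and `tail` only by the producer, which is the core SPSC idea that
    removes the need for locks.
    """

    def __init__(self, capacity):
        self.buf = [None] * (capacity + 1)  # one slot kept empty to tell full from empty
        self.head = 0  # next slot to read (consumer-owned)
        self.tail = 0  # next slot to write (producer-owned)

    def push(self, item):
        """Called by the producer thread only. Returns False when full."""
        nxt = (self.tail + 1) % len(self.buf)
        if nxt == self.head:
            return False  # queue full
        self.buf[self.tail] = item
        self.tail = nxt
        return True

    def pop(self):
        """Called by the consumer thread only. Returns None when empty."""
        if self.head == self.tail:
            return None  # queue empty
        item = self.buf[self.head]
        self.head = (self.head + 1) % len(self.buf)
        return item
```

In a real shared-cache implementation the two indices would additionally be kept on separate cache lines to avoid false sharing; that detail is omitted here.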
Abstract:
Drill cores from the inner-alpine valley terrace of Unterangerberg, located in the Eastern Alps of Austria, offer first insights into a Pleistocene sedimentary record that had previously been inaccessible. The succession comprises diamict, gravel, sand, lignite and thick, fine-grained sediments. Additionally, cataclastic deposits originating from two paleo-landslide events are present. Multi-proxy analyses including sedimentological and palynological investigations as well as radiocarbon and luminescence data record the onset of the last glacial period (Würmian) at Unterangerberg at ~120-110 ka. This first time period, correlated to MIS 5d, was characterised by strong fluvial aggradation under cold climatic conditions, with only sparse vegetation cover. Furthermore, two large and quasi-synchronous landslide events occurred during this time interval. No record of the first Early Würmian interstadial (MIS 5c) is preserved. During the second Early Würmian interstadial (MIS 5a), the local vegetation was characterised by a boreal forest dominated by Picea, with few thermophilous elements. The subsequent collapse of the vegetation is recorded by sediments dated to ~70-60 ka (i.e. MIS 4), with very low pollen concentrations and the potential presence of permafrost. Climatic conditions improved again between ~55 and 45 ka (MIS 3) and cold-adapted trees re-appeared during interstadials, forming an open forest vegetation. MIS 3 stadials were shorter and less severe than MIS 4 at Unterangerberg, and vegetation during these cold phases was mainly composed of shrubs, herbs and grasses, similar to what is known from today's alpine timberline. The Unterangerberg record ended at ~45 ka and/or was truncated by ice during the Last Glacial Maximum. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
The commonly used British Standard constant head triaxial permeability test for fine-grained soils is relatively time-consuming. A reduction in the time required for soil permeability testing would provide potential cost savings to the construction industry, particularly in the construction quality assurance of landfill clay liners. The purpose of this paper is to evaluate an alternative approach to measuring the permeability of fine-grained soils that benefits from accelerated time scaling of seepage flow when specimens are tested under the elevated-gravity conditions provided by a centrifuge. As part of the investigation, an apparatus was designed and produced to measure water flow through soil samples under elevated gravitational acceleration using a small desktop laboratory centrifuge. A membrane was used to hydrostatically confine the test sample. A miniature data acquisition system was designed and incorporated in the apparatus to monitor and record changes in head and flow throughout the tests. Under enhanced gravity in the centrifuge, the flow through the sample was under 'variable head' conditions, as opposed to the 'constant head' conditions of the classic constant head permeability tests conducted at 1 g. A mathematical model was developed for analysis of Darcy's coefficient of permeability under elevated gravitational acceleration and verified using the results obtained. The test data compare well with results for analogous samples obtained using the classical British Standard constant head permeability tests.
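The 'variable head' analysis referred to above can be sketched with the standard falling-head relation, k = (aL)/(A Δt) ln(h1/h2), together with the assumption that at an elevated g-level of N the hydraulic gradient, and hence the apparent permeability, is amplified N times. This is a simplified stand-in for the authors' mathematical model, not a reproduction of it, and the parameter names are hypothetical:

```python
import math

def falling_head_k(a, A, L, h1, h2, dt, N=1.0):
    """Darcy coefficient of permeability from a falling-head test.

    a  : cross-sectional area of the standpipe (m^2)
    A  : cross-sectional area of the specimen (m^2)
    L  : specimen length (m)
    h1 : head at the start of the timed interval (m)
    h2 : head at the end of the timed interval (m)
    dt : elapsed time (s)
    N  : centrifuge g-level (N = 1 for a bench-top test)

    Simplifying assumption: under N x g the hydraulic gradient is
    amplified N times, so the permeability referred to normal
    gravity is the apparent value divided by N.
    """
    k_apparent = (a * L) / (A * dt) * math.log(h1 / h2)
    return k_apparent / N
```

For example, a bench-top run and a 50 g centrifuge run over the same head drop and time interval differ by exactly the factor N under this assumption, which is what yields the accelerated time scaling.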
Abstract:
Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not to any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing and with distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon. © 2010 Elsevier Inc.
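The basic shape of single-trial category decoding can be illustrated with a toy nearest-centroid classifier on synthetic two-class "trials". This is only a minimal sketch of the decoding idea on fabricated data, not the paper's advanced data-mining pipeline, and the feature dimensions, class offset and labels are all invented for illustration:

```python
import random

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def nearest_centroid_label(x, centroids):
    """Assign x to the class whose centroid is closest (squared Euclidean)."""
    def d2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: d2(x, centroids[label]))

random.seed(0)

def trial(mean):
    # Synthetic "single trial": 20 noisy features around a class mean.
    return [random.gauss(mean, 1.0) for _ in range(20)]

# Two classes whose means differ, mimicking a decodable category effect.
train = {"mammal": [trial(0.0) for _ in range(50)],
         "tool":   [trial(0.8) for _ in range(50)]}
cents = {label: centroid(vs) for label, vs in train.items()}

test = [("mammal", trial(0.0)) for _ in range(50)] + \
       [("tool", trial(0.8)) for _ in range(50)]
acc = sum(nearest_centroid_label(x, cents) == y for y, x in test) / len(test)
```

With this separation the held-out accuracy lands well above the 50% chance level, which is the sense in which "category could be detected in single trials" above.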
Abstract:
Arcellacea (testate lobose amoebae) are important lacustrine environmental indicators that have been used in paleoclimatic reconstructions, in assessing the effectiveness of mine tailings pond reclamation projects, and in studying the effects of land use change in rural, industrial and urban settings. Recognition of ecophenotypically significant infra-specific 'strains' within arcellacean assemblages has the potential to enhance the utility of the group in characterizing contemporary and paleoenvironments. We present a novel approach which employs statistical tools to investigate the environmental and taxonomic significance of proposed strains. We test this approach on two identified strains: Difflugia protaeiformis Lamarck strain 'acuminata' (DPA), characterized by fine-grained agglutination, and Difflugia protaeiformis Lamarck strain 'claviformis' (DPC), characterized by coarse-grained agglutination. Redundancy analysis indicated that both organisms are associated with similar environmental variables. No relationship was observed between substrate particle size and abundance of DPC, indicating that DPC has a size preference for xenosomes during test construction. Thus DPC should not be designated as a distinct strain but rather forms a species complex with DPA. This study elucidates the need to justify the designation of strains based on their autecology in addition to morphological stability.
Abstract:
The Ziegler Reservoir fossil site near Snowmass Village, Colorado, provides a unique opportunity to reconstruct high-altitude paleoenvironmental conditions in the Rocky Mountains during the last interglacial period. We used four different techniques to establish a chronological framework for the site. Radiocarbon dating of lake organics, bone collagen and shell carbonate, together with in situ cosmogenic 10Be and 26Al ages on a boulder on the crest of a moraine that impounded the lake, suggests that the ages of the sediments that hosted the fossils are between ~140 ka and >45 ka. Uranium-series ages of vertebrate remains generally fall within these bounds, but extremely low uranium concentrations and evidence of open-system behavior limit their utility. Optically stimulated luminescence (OSL) ages (n = 18) obtained from fine-grained quartz maintain stratigraphic order, were replicable, and provide reliable ages for the lake sediments. Analysis of the equivalent dose (De) dispersion of the OSL samples showed that the sediments were fully bleached prior to deposition, and the low scatter suggests that eolian processes were likely the dominant transport mechanism for fine-grained sediments into the lake. The resulting ages show that the fossil-bearing sediments span the latest part of marine isotope stage (MIS) 6, all of MIS 5 and MIS 4, and the earliest part of MIS 3.
Abstract:
Polar codes are one of the most recent advancements in coding theory and they have attracted significant interest. While they are provably capacity achieving over various channels, they have seen limited practical applications. Unfortunately, the successive nature of successive cancellation based decoders hinders fine-grained adaptation of the decoding complexity to design constraints and operating conditions. In this paper, we propose a systematic method for enabling complexity-performance trade-offs by constructing polar codes based on an optimization problem which minimizes the complexity under a suitably defined mutual information based performance constraint. Moreover, a low-complexity greedy algorithm is proposed in order to solve the optimization problem efficiently for very large code lengths.
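The construction problem described above, choosing which bit-channels carry information, can be made concrete with a standard toy setting: on a binary erasure channel the reliability of each polarized bit-channel is given exactly by the Bhattacharyya recursion Z(W-) = 2Z - Z^2, Z(W+) = Z^2. The greedy picker below is an illustrative stand-in for the paper's optimization (it ranks purely by reliability with a hypothetical per-channel cost as tie-breaker), not the authors' actual algorithm:

```python
def bec_bhattacharyya(n_levels, z0=0.5):
    """Bhattacharyya parameters of the 2**n_levels polarized bit-channels
    of a binary erasure channel with erasure probability z0."""
    z = [z0]
    for _ in range(n_levels):
        nxt = []
        for zi in z:
            nxt.append(2 * zi - zi * zi)  # "minus" (degraded) channel
            nxt.append(zi * zi)           # "plus" (upgraded) channel
        z = nxt
    return z

def greedy_information_set(n_levels, k, cost=None):
    """Hypothetical greedy construction: keep the k most reliable
    bit-channels (lowest Bhattacharyya parameter), breaking ties
    toward lower decoding 'cost'. Returns sorted channel indices."""
    z = bec_bhattacharyya(n_levels)
    if cost is None:
        cost = [0] * len(z)  # uniform cost by default
    order = sorted(range(len(z)), key=lambda i: (z[i], cost[i]))
    return sorted(order[:k])
```

For a length-8 code on BEC(0.5) with k = 4 this recovers the classic information set {3, 5, 6, 7}; the paper's contribution is to replace the pure reliability criterion with a complexity objective under a mutual-information constraint.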
Abstract:
The authors present a VLSI circuit for implementing wave digital filter (WDF) two-port adaptors. Considerable speedups over conventional designs have been obtained using fine-grained pipelining. This has been achieved through the use of most-significant-bit (MSB) first carry-save arithmetic, which allows systems to be designed in which the latency L is small and independent of either coefficient or input data wordlength. L is determined by the online delay associated with the computation required at each node in the circuit (in this case a multiply/add plus two separate additions). This in turn means that pipelining can be used to considerably enhance the sampling rate of a recursive digital filter. The level of pipelining which will offer enhancement is determined by L and is fine-grained rather than bit-level. In the case of the circuit considered, L = 3. For this reason pipeline delays (half latches) have been introduced between every two rows of cells to produce a system with a once-per-cycle sample rate.
Abstract:
Multivariate classification techniques have proven to be powerful tools for distinguishing experimental conditions in single sessions of functional magnetic resonance imaging (fMRI) data, but they suffer a considerable penalty in classification accuracy when applied across sessions or participants, calling into question the degree to which fine-grained encodings are shared across subjects. Here, we introduce joint learning techniques, where feature selection is carried out using a held-out subset of a target dataset before training a linear classifier on a source dataset. Single trials of functional MRI data from a covert property generation task are classified with regularized regression techniques to predict the semantic class of stimuli. With our selection techniques (joint ranking feature selection (JRFS) and disjoint feature selection (DJFS)), classification performance during cross-session prediction improved greatly relative to feature selection on the source session data only. Compared with JRFS, DJFS showed significant improvements for cross-participant classification, and when groupwise training was used, DJFS approached the accuracies seen for prediction across different sessions from the same participant. Comparing several feature selection strategies, we found that a simple univariate ANOVA selection technique or a minimal searchlight (one voxel in size) is appropriate, compared with larger searchlights.
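The univariate ANOVA selection mentioned above amounts to scoring each feature (voxel) independently by its one-way F statistic across classes and keeping the top-scoring ones. The sketch below shows that ranking step on toy data; it is a generic illustration of ANOVA-based selection, not the paper's JRFS/DJFS procedures, and the helper names are invented:

```python
def f_score(groups):
    """One-way ANOVA F statistic for a single feature.
    `groups` is a list of per-class value lists for that feature."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    # between-class variance (mean square between)
    between = sum(len(g) * (m - grand) ** 2
                  for g, m in zip(groups, means)) / (len(groups) - 1)
    # within-class variance (mean square within)
    within = sum(sum((v - m) ** 2 for v in g)
                 for g, m in zip(groups, means)) / (len(all_vals) - len(groups))
    return between / within

def select_features(X_by_class, k):
    """Rank features by F score and keep the indices of the top k.
    X_by_class maps a class label to its list of feature vectors."""
    n_feat = len(next(iter(X_by_class.values()))[0])
    scores = []
    for j in range(n_feat):
        groups = [[x[j] for x in xs] for xs in X_by_class.values()]
        scores.append((f_score(groups), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]
```

In the joint-learning setting of the paper, the point is which data this scoring runs on (a held-out subset of the target dataset) rather than the scoring itself.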
Abstract:
Lower Cretaceous meandering and braided fluvial sandstones of the Nubian Formation form some of the most important subsurface reservoir rocks in the Sirt Basin, north-central Libya. Mineralogical, petrographical and geochemical analyses of sandstone samples from well BB6-59, Sarir oilfield, indicate that the meandering fluvial sandstones are fine- to very fine-grained subarkosic arenites (av. Q91F5L4), and that the braided fluvial sandstones are medium- to very coarse-grained quartz arenites (av. Q96F3L1). The reservoir qualities of these sandstones were modified during both eodiagenesis (ca. <70°C; <2 km) and mesodiagenesis (ca. >70°C; >2 km). Reservoir quality evolution was controlled primarily by the dissolution and kaolinitization of feldspars, micas and mud intraclasts during eodiagenesis, and by the amount and thickness of grain-coating clays, chemical compaction and quartz overgrowths during mesodiagenesis. Dissolution and kaolinitization of feldspars, micas and mud intraclasts created intercrystalline micro-porosity and mouldic macro-porosity and permeability during eodiagenesis; these processes were more widespread in braided fluvial than in meandering fluvial sandstones because the greater depositional porosity and permeability of the braided fluvial sandstones enhanced the percolation of meteoric waters. The development of only limited quartz overgrowths in the braided fluvial sandstones, in which quartz grains are coated by thick illite layers, retained high porosity and permeability (12-23% and 30-600 mD). By contrast, the meandering fluvial sandstones underwent porosity loss as a result of quartz overgrowth development on quartz grains which lack grain-coating illite or carry only thin, incomplete coatings (2-15% and 0-0.1 mD). Further loss of porosity in the meandering fluvial sandstones occurred as a result of chemical compaction (pressure dissolution) induced by the occurrence of micas along grain contacts.
Other diagenetic alterations, such as the growth of pyrite, siderite and dolomite/ankerite, and albitization, had little impact on reservoir quality. The albitization of feldspars may have had a minor positive influence on reservoir quality through the creation of intercrystalline micro-porosity between albite crystals. The results of this study show that diagenetic modifications of the braided and meandering fluvial sandstones in the Nubian Formation, and the resulting changes in reservoir quality, are closely linked to depositional porosity and permeability. They are also linked to the thickness of grain-coating infiltrated clays and to variations in detrital composition, particularly the amounts of mud intraclasts, feldspars and mica grains, as well as to climatic conditions.
Abstract:
The design of a high-performance IIR (infinite impulse response) digital filter is described. The chip architecture operates on 11-b parallel, two's complement input data with a 12-b parallel two's complement coefficient to produce a 14-b two's complement output. The chip is implemented in 1.5-µm, double-layer-metal CMOS technology, consumes 0.5 W, and can operate up to 15 Msample/s. The main component of the system is a fine-grained systolic array that internally is based on a signed binary number representation (SBNR). Issues addressed include testing, clock distribution, and circuitry for conversion between two's complement and SBNR.
Abstract:
Software-programmable 'soft' processors have shown tremendous potential for efficient realisation of high-performance signal processing operations on Field Programmable Gate Array (FPGA), whilst lowering the design burden by avoiding the need to design fine-grained custom circuit architectures. However, the complex data access patterns, high memory bandwidth and computational requirements of sliding window applications, such as Motion Estimation (ME) and Matrix Multiplication (MM), lead to low-performance, inefficient soft processor realisations. This paper resolves this issue, showing how, by adding support for block data addressing and accelerators for high-performance loop execution, performance and resource efficiency over four times better than current best-in-class metrics can be achieved. In addition, it demonstrates the first recorded real-time soft ME realisation for H.263 systems.
Abstract:
We study the computational complexity of finding maximum a posteriori configurations in Bayesian networks whose probabilities are specified by logical formulas. This approach leads to a fine-grained study in which local information such as context-sensitive independence and determinism can be considered. It also allows us to characterize more precisely the jump from tractability to NP-hardness and beyond, and to consider the complexity introduced by evidence alone.
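The problem being analyzed, finding a maximum a posteriori (MAP) configuration, can be pinned down on a tiny example: fix the observed evidence and maximize the joint probability over the remaining variables. The brute-force enumeration below is only meant to define the problem concretely; the network, its two variables and all probabilities are hypothetical, and the paper's subject is the complexity of this task, not this naive algorithm:

```python
from itertools import product

# Hypothetical two-node network: Rain -> WetGrass.
# Probabilities are invented for illustration only.
p_rain = {True: 0.2, False: 0.8}
p_wet_given_rain = {True:  {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    """Joint probability P(Rain=rain, Wet=wet) via the chain rule."""
    return p_rain[rain] * p_wet_given_rain[rain][wet]

def map_config(evidence):
    """Brute-force MAP: maximize the joint probability over all
    assignments consistent with the observed evidence."""
    best, best_p = None, -1.0
    for rain, wet in product([True, False], repeat=2):
        assignment = {"rain": rain, "wet": wet}
        if any(assignment[k] != v for k, v in evidence.items()):
            continue  # inconsistent with evidence
        p = joint(rain, wet)
        if p > best_p:
            best, best_p = assignment, p
    return best, best_p
```

Here `map_config({"wet": True})` compares P(rain, wet) = 0.18 against P(no rain, wet) = 0.16 and returns the rainy configuration; enumeration is exponential in the number of unobserved variables, which is exactly why the complexity jumps studied in the paper matter.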