891 results for fine grained ground mass


Relevance:

100.00%

Publisher:

Abstract:

Photoionization cross-sections out of the fine-structure levels (2s²2p⁴ ³P₂,₀,₁) of the O-like Fe ion Fe XIX have been reinvestigated. Data for photoionization out of each of these fine-structure levels have been obtained, where the calculations have been performed with and without the inclusion of radiation damping on the resonance structure in order to assess the importance of this process. Recombination rate coefficients are determined using the Milne relation for the case of an electron recombining with N-like Fe ions (Fe XX) in the ground state to form O-like Fe (Fe XIX) in each of the fine-structure ground-state levels. Recombination rates are presented over the temperature range 4.0 ≤ log Tₑ ≤ 7.0, of importance to the modelling of X-ray emission plasmas.
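
For reference, the Milne relation invoked above connects photoionization and photorecombination cross-sections by detailed balance; a standard textbook form, with conventional statistical weights (given here only as background, not quoted from the abstract), is

    \sigma_{\mathrm{RR}}(v) = \frac{g_n}{2\,g_{+}} \, \frac{(h\nu)^2}{m_e^2 c^2 v^2} \, \sigma_{\mathrm{PI}}(\nu),
    \qquad
    \alpha_{\mathrm{RR}}(T_e) = \int_0^{\infty} v \, \sigma_{\mathrm{RR}}(v) \, f(v; T_e) \, \mathrm{d}v

where g_n and g_+ are the statistical weights of the recombined level and the recombining ion, the photon and electron energies are related by hν = ½ m_e v² + I_n, and the rate coefficient follows by averaging over a Maxwellian electron velocity distribution f(v; T_e).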

Relevance:

100.00%

Publisher:

Abstract:

High-speed field-programmable gate array (FPGA) implementations of an adaptive least mean square (LMS) filter, with application in an electronic support measures (ESM) digital receiver, are presented. They employ "fine-grained" pipelining, i.e., pipelining within the processor, which results in increased output latency when used in the recursive LMS system. The major challenge is therefore to maintain a low-latency output whilst increasing the number of pipeline stages in the filter for higher speeds. Using the delayed LMS (DLMS) algorithm, fine-grained pipelined FPGA implementations using both the direct form (DF) and the transposed form (TF) are considered and compared. It is shown that the direct-form LMS filter utilizes the FPGA resources more efficiently, thereby allowing a 120 MHz sampling rate.
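
The delayed LMS idea referenced above updates the filter weights with an error that is several samples old, so the update loop can tolerate pipeline latency. A minimal sketch, in which the filter length, step size, delay and variable names are illustrative rather than taken from the paper:

    import numpy as np

    def dlms_filter(x, d, num_taps=16, mu=0.01, delay=4):
        """Delayed LMS: the coefficient update uses an error (and input
        vector) that is `delay` samples old, mimicking the latency
        introduced by fine-grained pipelining."""
        w = np.zeros(num_taps)
        y = np.zeros(len(x))
        e = np.zeros(len(x))
        for n in range(num_taps, len(x)):
            u = x[n - num_taps + 1:n + 1][::-1]          # current input vector
            y[n] = w @ u
            e[n] = d[n] - y[n]
            if n - delay >= num_taps:                    # delayed weight update
                u_d = x[n - delay - num_taps + 1:n - delay + 1][::-1]
                w = w + mu * e[n - delay] * u_d
        return y, e, w

With delay = 0 this reduces to the conventional LMS recursion; the transposed-form variant discussed in the paper rearranges the same computation but is not shown here.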

Relevance:

100.00%

Publisher:

Abstract:

Many genetic studies have demonstrated an association between the 7-repeat (7r) allele of a 48-base-pair variable number of tandem repeats (VNTR) in exon 3 of the DRD4 gene and the phenotype of attention deficit hyperactivity disorder (ADHD). Previous studies have shown inconsistent associations between the 7r allele and neurocognitive performance in children with ADHD. We investigated the performance of 128 children with and without ADHD on the Fixed and Random versions of the Sustained Attention to Response Task (SART). We employed time-series analyses of reaction time (RT) data to allow a fine-grained analysis of RT variability, a candidate endophenotype for ADHD. Children were grouped into either the 7r-present group (possessing at least one copy of the 7r allele) or the 7r-absent group. The ADHD group made significantly more commission errors and was significantly more variable in RT in terms of fast moment-to-moment variability than the control group, but no effect of genotype was found on these measures. Children with ADHD without the 7r allele made significantly more omission errors, were significantly more variable in the slow frequency domain and showed less sensitivity to the signal (d') than children with ADHD with the 7r allele and control children with or without the 7r allele. These results highlight the utility of time-series analyses of reaction time data for delineating the neuropsychological deficits associated with ADHD and the DRD4 VNTR. Absence of the 7-repeat allele in children with ADHD is associated with a neurocognitive profile of drifting sustained attention that gives rise to variable and inconsistent performance. © 2008 Wiley-Liss, Inc.
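
One simple way to separate the slow, drifting variability from the fast moment-to-moment variability described above is a spectral split of the trial-by-trial RT series. The sketch below is purely illustrative of that idea: the split frequency, sampling assumptions and function name are placeholders and do not reproduce the study's analysis pipeline.

    import numpy as np

    def rt_variability_bands(rt_series, trial_rate_hz=1.0, split_hz=0.05):
        """Split the power of a (mean-removed) reaction-time series into a
        slow band (drifting attention) and a fast band (moment-to-moment
        variability) using an FFT over trials."""
        rt = np.asarray(rt_series, dtype=float)
        rt = rt - rt.mean()
        freqs = np.fft.rfftfreq(len(rt), d=1.0 / trial_rate_hz)
        power = np.abs(np.fft.rfft(rt)) ** 2
        slow = power[(freqs > 0) & (freqs <= split_hz)].sum()
        fast = power[freqs > split_hz].sum()
        return slow, fast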

Relevance:

100.00%

Publisher:

Abstract:

The research reported here is based on the standard laboratory experiments routinely performed to measure various geotechnical parameters. These experiments require consolidation of fine-grained samples in triaxial or stress path apparatus. The time required for consolidation depends on the permeability of the soil and the length of the drainage path, and is often of the order of several weeks in large clay-dominated samples. Long testing periods can be problematic, as they can delay decisions on design and construction methods. Accelerating the consolidation process requires a reduction in the effective drainage length, and this is usually achieved by placing filter drains around the sample. The purpose of the research reported in this paper is to assess whether these filter drains work effectively and, if not, to determine what modifications to them are needed. The findings show that use of a double filter reduces the consolidation time several-fold.
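
For context, in classical Terzaghi one-dimensional consolidation theory the time to reach a given degree of consolidation grows with the square of the drainage path length (a standard relation, given here only as background, not quoted from the paper):

    t = \frac{T_v \, H_{dr}^2}{c_v}

where T_v is the dimensionless time factor for the chosen degree of consolidation, H_dr is the length of the longest drainage path and c_v is the coefficient of consolidation. Halving the effective drainage length, for example by adding side drains, therefore reduces the consolidation time by roughly a factor of four.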

Relevance:

100.00%

Publisher:

Abstract:

Sphere Decoding (SD) is a highly effective detection technique for Multiple-Input Multiple-Output (MIMO) wireless communications receivers, offering quasi-optimal accuracy with relatively low computational complexity as compared to the ideal maximum likelihood (ML) detector. Despite this, the computational demands of even low-complexity SD variants, such as Fixed Complexity SD (FSD), remain such that implementation on modern software-defined network equipment is a highly challenging process, and indeed real-time solutions for MIMO systems such as 4×4 16-QAM 802.11n are unreported. This paper overcomes this barrier. By exploiting large-scale networks of fine-grained software-programmable processors on Field Programmable Gate Array (FPGA), a series of unique SD implementations is presented, culminating in the only single-chip, real-time, quasi-optimal SD for 4×4 16-QAM 802.11n MIMO. Furthermore, it is demonstrated that the high-performance software-defined architectures which enable these implementations exhibit cost comparable to dedicated circuit architectures.
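
A heavily simplified software sketch of the fixed-complexity idea follows: after a QR decomposition, one layer is fully enumerated while the remaining layers are detected by simple slicing, so the number of candidate paths is fixed in advance. The constellation scaling, function names and the omission of the FSD channel-ordering step are simplifications, not details taken from the paper.

    import itertools
    import numpy as np

    # 16-QAM constellation (unnormalised, for illustration only)
    QAM16 = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])

    def fsd_detect(H, y, constellation=QAM16, n_full=1):
        """Fixed-complexity style detection: full expansion of the top
        `n_full` layers, hard (sliced) decisions on all other layers."""
        n_tx = H.shape[1]
        Q, R = np.linalg.qr(H)
        z = Q.conj().T @ y
        best_s, best_metric = None, np.inf
        for top in itertools.product(constellation, repeat=n_full):
            s = np.zeros(n_tx, dtype=complex)
            s[n_tx - n_full:] = top
            # successive interference cancellation on the remaining layers
            for i in range(n_tx - n_full - 1, -1, -1):
                est = (z[i] - R[i, i + 1:] @ s[i + 1:]) / R[i, i]
                s[i] = constellation[np.argmin(np.abs(constellation - est))]
            metric = np.linalg.norm(z - R @ s) ** 2
            if metric < best_metric:
                best_metric, best_s = metric, s
        return best_s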

Relevance:

100.00%

Publisher:

Abstract:

Groundwater drawn from fluvioglacial sand and gravel aquifers forms the principal source of drinking water in many parts of central Western Europe. High population densities and widespread organic agriculture in these same areas constitute hazards that may impact the microbiological quality of many potable supplies. Tracer testing comparing two similarly sized bacteria (E. coli and P. putida) and the smaller bacteriophage (H40/1) with the response of a non-reactive solute tracer (uranine) at the decametre scale revealed that all tracers broke through up to 100 times more quickly than anticipated using conventional rules of thumb. All microbiological tracer responses were less disperse than the solute, although bacterial peak relative concentrations consistently exceeded those of the solute tracer at one sampling location, reflecting exclusion processes influencing microbiological tracer migration. Relative recoveries of H40/1 and E. coli proved consistent at both monitoring wells, while responses of H40/1 and P. putida differed. Examination of exposures of the upper reaches of the aquifer in nearby sand and gravel quarries revealed the aquifer to consist of laterally extensive layers of open-framework (OW) gravel enveloped in finer-grained gravelly sand. Granulometric analysis of these deposits suggested that the OW gravel is up to two orders of magnitude more permeable than the surrounding deposits, giving rise to the preferential flow paths. By contrast, fine-grained lenses of silty sand within the OW gravels are suspected to play an important role in the exclusion processes, permitting solutes to access them while excluding the larger microorganisms.
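
As background on granulometric permeability estimates of the kind mentioned above (the abstract does not state which empirical relation was used), one widely quoted rule is Hazen's formula:

    K \approx C \, d_{10}^{2}

with K the hydraulic conductivity (cm/s), d_10 the effective grain size (the diameter at which 10% of the sample by weight is finer, in mm) and C an empirical coefficient of order 100. Because K scales with the square of d_10, a roughly tenfold difference in characteristic grain size between open-framework gravel and gravelly sand is enough to produce the two-orders-of-magnitude permeability contrast described.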

Relevance:

100.00%

Publisher:

Abstract:

The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
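
A toy illustration of the redundant carry-save representation mentioned above, in which intermediate results are held as separate sum and carry words so that no carry has to ripple across the full word between additions. This is a generic sketch of the arithmetic idea, not the msb-first circuit structure used in the chips described:

    def carry_save_add(a, b, c):
        """Compress three integers into a redundant (sum, carry) pair:
        each bit position is handled independently (a 3:2 compressor
        per bit), so no carry propagates across the word."""
        partial_sum = a ^ b ^ c
        shifted_carry = ((a & b) | (a & c) | (b & c)) << 1
        return partial_sum, shifted_carry

    def carry_save_accumulate(values):
        """Accumulate many integers in carry-save form; the redundant
        result is resolved with a single carry-propagate addition."""
        s, c = 0, 0
        for v in values:
            s, c = carry_save_add(s, c, v)
        return s + c

    # Example: carry_save_accumulate([5, 9, 12, 7]) == 33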

Relevance:

100.00%

Publisher:

Abstract:

The use of efficient synchronization mechanisms is crucial for implementing fine-grained parallel programs on modern shared-cache multi-core architectures. In this paper we study this problem by considering Single-Producer/Single-Consumer (SPSC) coordination using unbounded queues. A novel unbounded SPSC algorithm capable of reducing the raw synchronization latency and speeding up producer-consumer coordination is presented. The algorithm has been extensively tested on a shared-cache multi-core platform and a sketch proof of correctness is presented. The queues proposed have been used as basic building blocks to implement the FastFlow parallel framework, which has been demonstrated to offer very good performance for fine-grain parallel applications. © 2012 Springer-Verlag.
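
For orientation, the classic single-producer/single-consumer ring buffer on which such designs build needs no lock because the producer writes only the tail index and the consumer writes only the head index. The sketch below is illustrative only: it is bounded, written in Python, and does not reproduce the unbounded, cache-aware C++ queues used in FastFlow.

    class SPSCQueue:
        """Minimal Lamport-style single-producer/single-consumer ring
        buffer: exactly one thread may call push() and exactly one
        thread may call pop(), so no lock is required."""

        def __init__(self, capacity=1024):
            self.buf = [None] * capacity
            self.capacity = capacity
            self.head = 0   # next slot to read  (written only by the consumer)
            self.tail = 0   # next slot to write (written only by the producer)

        def push(self, item):
            nxt = (self.tail + 1) % self.capacity
            if nxt == self.head:            # queue full
                return False
            self.buf[self.tail] = item
            self.tail = nxt                 # publish the item last
            return True

        def pop(self):
            if self.head == self.tail:      # queue empty
                return None
            item = self.buf[self.head]
            self.head = (self.head + 1) % self.capacity
            return item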

Relevance:

100.00%

Publisher:

Abstract:

Drill cores from the inner-alpine valley terrace of Unterangerberg, located in the Eastern Alps of Austria, offer first insights into a Pleistocene sedimentary record that was not previously accessible. The succession comprises diamict, gravel, sand, lignite and thick, fine-grained sediments. Additionally, cataclastic deposits originating from two paleo-landslide events are present. Multi-proxy analyses, including sedimentological and palynological investigations as well as radiocarbon and luminescence data, record the onset of the last glacial period (Würmian) at Unterangerberg at ~120-110 ka. This first time period, correlated to MIS 5d, was characterised by strong fluvial aggradation under cold climatic conditions, with only sparse vegetation cover. Furthermore, two large and quasi-synchronous landslide events occurred during this time interval. No record of the first Early Würmian interstadial (MIS 5c) is preserved. During the second Early Würmian interstadial (MIS 5a), the local vegetation was characterised by a boreal forest dominated by Picea, with few thermophilous elements. The subsequent collapse of the vegetation is recorded by sediments dated to ~70-60 ka (i.e. MIS 4), with very low pollen concentrations and the potential presence of permafrost. Climatic conditions improved again between ~55 and 45 ka (MIS 3) and cold-adapted trees re-appeared during interstadials, forming an open forest vegetation. MIS 3 stadials were shorter and less severe than MIS 4 at Unterangerberg, and vegetation during these cold phases was mainly composed of shrubs, herbs and grasses, similar to what is known from today's alpine timberline. The Unterangerberg record ended at ~45 ka and/or was truncated by ice during the Last Glacial Maximum. © 2013 Elsevier Ltd. All rights reserved.

Relevance:

100.00%

Publisher:

Abstract:

The commonly used British Standard (BS) constant head triaxial permeability test for fine-grained soils is known to have a relatively long test duration. Consequently, a reduction in the required testing time offers potential cost savings to the construction industry, specifically during construction quality assurance (CQA) of landfill mineral liners. The purpose of this article is to investigate and evaluate alternative short-duration testing methods for measuring the permeability of fine-grained soils.

As part of the investigation, the feasibility of an existing short-duration permeability test, known as the Accelerated Permeability (AP) test, was assessed and compared with permeability measured using the British Standard (BS) method and the Ramp Accelerated Permeability (RAP) test. Four different fine-grained materials, covering a range of physical properties, were compacted at various moisture contents to produce analogous samples for testing using the three different methodologies. Fabric analysis was carried out on specimens derived from post-test samples using Mercury Intrusion Porosimetry (MIP) and Scanning Electron Microscopy (SEM) to assess the effects of testing methodology on soil structure. Results showed that AP testing in general under-predicts the permeability values derived from the BS test because of large changes in soil structure caused by the AP test methodology, a finding supported by the MIP and SEM observations. RAP testing in general provides an improvement on the AP test but still under-predicts permeability values. The potential savings in test duration are shown to be relatively minimal for both the AP and RAP tests.

Relevance:

100.00%

Publisher:

Abstract:

The commonly used British Standard constant head triaxial permeability test for fine-grained soils is relatively time consuming. A reduction in the required time for soil permeability testing would provide potential cost savings to the construction industry, particularly in the construction quality assurance of landfill clay liners. The purpose of this paper is to evaluate an alternative approach to measuring the permeability of fine-grained soils that benefits from the accelerated time scaling of seepage flow when specimens are tested under the elevated gravity conditions provided by a centrifuge. As part of the investigation, an apparatus was designed and produced to measure water flow through soil samples under elevated gravitational acceleration using a small desktop laboratory centrifuge. A membrane was used to hydrostatically confine the test sample. A miniature data acquisition system was designed and incorporated in the apparatus to monitor and record changes in head and flow throughout the tests. Under enhanced gravity in the centrifuge, the flow through the sample occurred under 'variable head' conditions, as opposed to the 'constant head' conditions of the classic constant head permeability tests conducted at 1 g. A mathematical model was developed for the analysis of Darcy's coefficient of permeability under conditions of elevated gravitational acceleration and verified using the results obtained. The test data compare well with the results obtained on analogous samples using the classical British Standard constant head permeability tests.
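
For orientation, a standard falling-head (variable head) analysis, combined with a first-order gravity scaling, illustrates the kind of relation involved; this is a generic sketch, not the specific model derived in the paper:

    k = \frac{a L}{A \, (t_2 - t_1)} \ln\!\left(\frac{h_1}{h_2}\right)

where a is the cross-sectional area of the standpipe, A and L are the area and length of the specimen, and h_1 and h_2 are the heads at times t_1 and t_2. Because hydraulic conductivity is proportional to the driving body force, spinning the specimen at N g makes the head fall roughly N times faster, so the equivalent 1 g coefficient is obtained by dividing the value measured in the centrifuge by N.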

Relevance:

100.00%

Publisher:

Abstract:

Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not to any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing and with distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon. © 2010 Elsevier Inc.
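
A minimal sketch of this kind of single-trial decoding is given below. It is illustrative only: the feature extraction (simple flattening), the linear classifier, the array shapes and the mammal/tool label coding are placeholders and do not reproduce the authors' data-mining pipeline.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import LinearSVC

    def decode_category(epochs, labels, n_folds=5):
        """epochs: array (n_trials, n_channels, n_times) of EEG data;
        labels: 0 = mammal, 1 = tool. Returns the cross-validated
        accuracy of predicting the category of individual trials."""
        X = epochs.reshape(len(epochs), -1)     # flatten channel x time features
        clf = make_pipeline(StandardScaler(), LinearSVC())
        return cross_val_score(clf, X, labels, cv=n_folds).mean()

    # Toy example with random data (accuracy should hover around chance, 0.5)
    rng = np.random.default_rng(0)
    print(decode_category(rng.normal(size=(80, 32, 128)), rng.integers(0, 2, 80)))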

Relevance:

100.00%

Publisher:

Abstract:

Arcellacea (testate lobose amoebae) are important lacustrine environmental indicators that have been used in paleoclimatic reconstructions, in assessing the effectiveness of mine tailings pond reclamation projects, and in studying the effects of land-use change in rural, industrial and urban settings. Recognition of ecophenotypically significant infra-specific 'strains' within arcellacean assemblages has the potential to enhance the utility of the group in characterizing contemporary and paleoenvironments. We present a novel approach which employs statistical tools to investigate the environmental and taxonomic significance of proposed strains. We test this approach on two identified strains: Difflugia protaeiformis Lamarck strain 'acuminata' (DPA), characterized by fine-grained agglutination, and Difflugia protaeiformis Lamarck strain 'claviformis' (DPC), characterized by coarse-grained agglutination. Redundancy analysis indicated that both organisms are associated with similar environmental variables. No relationship was observed between substrate particle size and the abundance of DPC, indicating that DPC has a size preference for xenosomes during test construction. Thus DPC should not be designated as a distinct strain but rather forms a species complex with DPA. This study underlines the need to justify the designation of strains on the basis of their autecology in addition to morphological stability.

Relevance:

100.00%

Publisher:

Abstract:

The Ziegler Reservoir fossil site near Snowmass Village, Colorado, provides a unique opportunity to reconstruct high-altitude paleoenvironmental conditions in the Rocky Mountains during the last interglacial period. We used four different techniques to establish a chronological framework for the site. Radiocarbon dating of lake organics, bone collagen and shell carbonate, together with in situ cosmogenic ¹⁰Be and ²⁶Al ages on a boulder on the crest of a moraine that impounded the lake, suggests that the ages of the sediments that hosted the fossils lie between ~140 ka and >45 ka. Uranium-series ages of vertebrate remains generally fall within these bounds, but extremely low uranium concentrations and evidence of open-system behavior limit their utility. Optically stimulated luminescence (OSL) ages (n = 18) obtained from fine-grained quartz maintained stratigraphic order, were replicable and provide reliable ages for the lake sediments. Analysis of the equivalent dose (Dₑ) dispersion of the OSL samples showed that the sediments were fully bleached prior to deposition, and the low scatter suggests that eolian processes were likely the dominant transport mechanism for fine-grained sediments into the lake. The resulting ages show that the fossil-bearing sediments span the latest part of marine isotope stage (MIS) 6, all of MIS 5 and MIS 4, and the earliest part of MIS 3.
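
For orientation, an OSL age follows from the equivalent dose and the environmental dose rate through the standard relation (given here only as background, not quoted from the abstract):

    \text{Age} = \frac{D_e}{\dot{D}}

where D_e (Gy) is the equivalent dose recovered from the quartz grains and Ḋ (Gy/ka) is the environmental dose rate; the dispersion of D_e across aliquots is what indicates whether the grains were fully bleached before burial.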

Relevance:

100.00%

Publisher:

Abstract:

Polar codes are one of the most recent advancements in coding theory and have attracted significant interest. While they are provably capacity-achieving over various channels, they have seen limited practical application. Unfortunately, the sequential nature of successive cancellation based decoders hinders fine-grained adaptation of the decoding complexity to design constraints and operating conditions. In this paper, we propose a systematic method for enabling complexity-performance trade-offs by constructing polar codes based on an optimization problem which minimizes the complexity under a suitably defined mutual-information-based performance constraint. Moreover, a low-complexity greedy algorithm is proposed in order to solve the optimization problem efficiently for very large code lengths.
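
For context, the sketch below shows a conventional Bhattacharyya-parameter construction of a polar code together with a greedy selection of the most reliable bit-channels. It is a generic illustration of how code construction can be posed as a selection problem; the complexity-constrained optimization and the specific greedy algorithm proposed in the paper are not reproduced here.

    import numpy as np

    def bhattacharyya_parameters(n_levels, z0=0.5):
        """Track the Bhattacharyya parameter of every synthesized
        bit-channel through n_levels of polarization (exact for the
        binary erasure channel with erasure probability z0)."""
        z = np.array([z0])
        for _ in range(n_levels):
            new = np.empty(2 * len(z))
            new[0::2] = 2 * z - z ** 2    # degraded ('minus') channel
            new[1::2] = z ** 2            # upgraded ('plus') channel
            z = new
        return z                          # length 2 ** n_levels

    def greedy_information_set(z, k):
        """Greedily pick the k most reliable bit-channels (smallest Z)
        as the information set; the remaining indices are frozen."""
        return np.sort(np.argsort(z)[:k])

    # Example: a length-1024, rate-1/2 code designed for a BEC(0.5)
    z = bhattacharyya_parameters(10)
    info_set = greedy_information_set(z, 512)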