859 results for ultra-fine grained titanium
Abstract:
The British Standard constant head triaxial permeability (BS) test commonly used for permeability testing of fine-grained soils is known to have a relatively long test duration. Consequently, a reduction in the time required for permeability testing offers potential cost savings to the construction industry, specifically for use during Construction Quality Assurance (CQA) of landfill mineral liners. The purpose of this article is to investigate and evaluate alternative short-duration testing methods for measuring the permeability of fine-grained soils.
As part of the investigation, the feasibility of an existing short-duration permeability test, known as the Accelerated Permeability (AP) test, was assessed and compared with permeability measured using the British Standard (BS) and Ramp Accelerated Permeability (RAP) methods. Four fine-grained materials with a variety of physical properties were compacted at various moisture contents to produce analogous samples for testing using the three different methodologies. Fabric analysis was carried out on specimens derived from post-test samples using Mercury Intrusion Porosimetry (MIP) and Scanning Electron Microscopy (SEM) to assess the effects of testing methodology on soil structure. Results showed that AP testing in general under-predicts permeability values derived from the BS test, owing to large changes in soil structure caused by the AP test methodology, which is validated by the MIP and SEM observations. RAP testing in general provides an improvement over the AP test but still under-predicts permeability values. The potential savings in test duration are shown to be relatively minimal for both the AP and RAP tests.
Abstract:
The commonly used British Standard constant head triaxial permeability test for fine-grained soils is relatively time consuming. A reduction in the required time for soil permeability testing would provide potential cost savings to the construction industry, particularly in the construction quality assurance of landfill clay liners. The purpose of this paper is to evaluate an alternative approach to measuring the permeability of fine-grained soils, which benefits from accelerated time scaling of seepage flow when specimens are tested under the elevated gravity conditions provided by a centrifuge. As part of the investigation, an apparatus was designed and produced to measure water flow through soil samples under elevated gravitational acceleration using a small desktop laboratory centrifuge. A membrane was used to hydrostatically confine the test sample. A miniature data acquisition system was designed and incorporated in the apparatus to monitor and record changes in head and flow throughout the tests. Under enhanced gravity in the centrifuge, the flow through the sample was under ‘variable head' conditions, as opposed to the ‘constant head' conditions of the classic constant head permeability tests conducted at 1 g. A mathematical model was developed for the analysis of Darcy's coefficient of permeability under conditions of elevated gravitational acceleration and was verified using the results obtained. The test data compare well with results on analogous samples obtained using the classical British Standard constant head permeability tests.
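The variable-head analysis described in this abstract can be illustrated with a small sketch. This is not the authors' actual mathematical model; it is a minimal illustration assuming the standard falling-head formula and simple linear scaling of hydraulic conductivity with gravitational acceleration. All function names and numerical values are hypothetical.

```python
import math

def falling_head_k(a, A, L, t, h1, h2):
    """Darcy coefficient of permeability (m/s) from a falling-head test.

    a  - cross-sectional area of the standpipe (m^2)
    A  - cross-sectional area of the soil sample (m^2)
    L  - length of the sample (m)
    t  - elapsed time (s) over which the head fell from h1 to h2 (m)
    """
    return (a * L) / (A * t) * math.log(h1 / h2)

def equivalent_1g_k(k_at_Ng, N):
    """Scale a coefficient measured at N g back to its 1 g equivalent.

    Hydraulic conductivity is proportional to the driving body force,
    so seepage at N g proceeds roughly N times faster for the same soil.
    """
    return k_at_Ng / N

# Hypothetical centrifuge test at 100 g:
k_100g = falling_head_k(a=1e-4, A=8e-3, L=0.1, t=600.0, h1=0.9, h2=0.6)
k_1g = equivalent_1g_k(k_100g, N=100)
```

The N-fold speed-up of seepage under enhanced gravity is what shortens the test duration relative to the 1 g constant head procedure.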
Abstract:
Achieving a clearer picture of categorial distinctions in the brain is essential for our understanding of the conceptual lexicon, but much more fine-grained investigations are required in order for this evidence to contribute to lexical research. Here we present a collection of advanced data-mining techniques that allows the category of individual concepts to be decoded from single trials of EEG data. Neural activity was recorded while participants silently named images of mammals and tools, and category could be detected in single trials with an accuracy well above chance, both when considering data from single participants, and when group-training across participants. By aggregating across all trials, single concepts could be correctly assigned to their category with an accuracy of 98%. The pattern of classifications made by the algorithm confirmed that the neural patterns identified are due to conceptual category, and not any of a series of processing-related confounds. The time intervals, frequency bands and scalp locations that proved most informative for prediction permit physiological interpretation: the widespread activation shortly after appearance of the stimulus (from 100 ms) is consistent both with accounts of multi-pass processing, and distributed representations of categories. These methods provide an alternative to fMRI for fine-grained, large-scale investigations of the conceptual lexicon. © 2010 Elsevier Inc.
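The decoding pipeline this abstract describes can be sketched in miniature: classify each single trial, then aggregate predictions per concept by majority vote. The stand-in below uses a simple nearest-centroid classifier on synthetic data, far simpler than the paper's advanced data-mining techniques; all names and data are hypothetical.

```python
import random

def centroid(trials):
    """Mean feature vector of a list of trials."""
    n = len(trials[0])
    return [sum(t[i] for t in trials) / len(trials) for i in range(n)]

def dist2(u, v):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(u, v))

def classify_trial(trial, centroids):
    """Label of the nearest class centroid."""
    return min(centroids, key=lambda lab: dist2(trial, centroids[lab]))

def classify_concept(trials, centroids):
    """Aggregate single-trial predictions for one concept by majority vote."""
    votes = [classify_trial(t, centroids) for t in trials]
    return max(set(votes), key=votes.count)

# Synthetic "EEG" features: mammal trials centred near +1, tool trials near -1.
rng = random.Random(0)
def make_trials(mean, n=20, dim=16):
    return [[mean + rng.gauss(0, 1.0) for _ in range(dim)] for _ in range(n)]

train = {"mammal": make_trials(1.0), "tool": make_trials(-1.0)}
cents = {lab: centroid(tr) for lab, tr in train.items()}
```

Even when individual trials are noisy, aggregating votes across trials drives per-concept accuracy far above single-trial accuracy, mirroring the 98% aggregate figure reported above.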
Abstract:
Arcellacea (testate lobose amoebae) are important lacustrine environmental indicators that have been used in paleoclimatic reconstructions, in assessing the effectiveness of mine tailings pond reclamation projects, and for studying the effects of land use change in rural, industrial and urban settings. Recognition of ecophenotypically significant infra-specific ‘strains’ within arcellacean assemblages has the potential to enhance the utility of the group in characterizing contemporary and paleoenvironments. We present a novel approach which employs statistical tools to investigate the environmental and taxonomic significance of proposed strains. We test this approach on two identified strains: Difflugia protaeiformis Lamarck strain ‘acuminata’ (DPA), characterized by fine-grained agglutination, and Difflugia protaeiformis Lamarck strain ‘claviformis’ (DPC), characterized by coarse-grained agglutination. Redundancy analysis indicated that both organisms are associated with similar environmental variables. No relationship was observed between substrate particle size and abundance of DPC, indicating that DPC has a size preference for xenosomes during test construction. Thus DPC should not be designated as a distinct strain but should rather be considered to form a species complex with DPA. This study highlights the need to justify the designation of strains based on their autecology in addition to morphological stability.
Abstract:
The Ziegler Reservoir fossil site near Snowmass Village, Colorado, provides a unique opportunity to reconstruct high-altitude paleoenvironmental conditions in the Rocky Mountains during the last interglacial period. We used four different techniques to establish a chronological framework for the site. Radiocarbon dating of lake organics, bone collagen, and shell carbonate, and in situ cosmogenic ¹⁰Be and ²⁶Al ages on a boulder on the crest of a moraine that impounded the lake suggest that the ages of the sediments that hosted the fossils are between ~140 ka and >45 ka. Uranium-series ages of vertebrate remains generally fall within these bounds, but extremely low uranium concentrations and evidence of open-system behavior limit their utility. Optically stimulated luminescence (OSL) ages (n = 18) obtained from fine-grained quartz maintain stratigraphic order, were replicable, and provide reliable ages for the lake sediments. Analysis of the equivalent dose (Dₑ) dispersion of the OSL samples showed that the sediments were fully bleached prior to deposition, and low scatter suggests that eolian processes were likely the dominant transport mechanism for fine-grained sediments into the lake. The resulting ages show that the fossil-bearing sediments span the latest part of marine isotope stage (MIS) 6, all of MIS 5 and MIS 4, and the earliest part of MIS 3.
Abstract:
Polar codes are one of the most recent advancements in coding theory and have attracted significant interest. While they are provably capacity-achieving over various channels, they have seen limited practical application. Unfortunately, the successive nature of successive cancellation based decoders hinders fine-grained adaptation of the decoding complexity to design constraints and operating conditions. In this paper, we propose a systematic method for enabling complexity-performance trade-offs by constructing polar codes based on an optimization problem which minimizes complexity under a suitably defined mutual information based performance constraint. Moreover, a low-complexity greedy algorithm is proposed to solve the optimization problem efficiently for very large code lengths.
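The paper's construction optimizes complexity under a mutual information constraint; as a simpler, standard stand-in for how polar code construction and greedy channel selection work, the sketch below tracks Bhattacharyya parameters over a binary erasure channel (BEC) and greedily picks the most reliable synthetic channels. This is the textbook BEC construction, not the authors' algorithm.

```python
def bec_bhattacharyya(n_levels, erasure_prob):
    """Bhattacharyya parameters of the 2**n_levels synthetic channels
    obtained by polarizing a BEC(erasure_prob)."""
    z = [erasure_prob]
    for _ in range(n_levels):
        # each channel splits into a degraded (-) and an upgraded (+) child
        z = [val for zi in z for val in (2 * zi - zi * zi, zi * zi)]
    return z

def greedy_select(z, k):
    """Indices of the k most reliable channels (smallest Z), sorted."""
    return sorted(sorted(range(len(z)), key=lambda i: z[i])[:k])

# Rate-1/2 polar code of length 8 over BEC(0.5):
info_set = greedy_select(bec_bhattacharyya(3, 0.5), 4)
```

For BEC(0.5) and length 8 this recovers the familiar information set {3, 5, 6, 7}; a complexity-aware construction would replace the reliability criterion with the paper's mutual information based constraint.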
Abstract:
The authors present a VLSI circuit for implementing wave digital filter (WDF) two-port adaptors. Considerable speedups over conventional designs have been obtained using fine-grained pipelining. This has been achieved through the use of most significant bit (MSB) first carry-save arithmetic, which allows systems to be designed in which the latency L is small and independent of both coefficient and input data wordlengths. L is determined by the online delay associated with the computation required at each node in the circuit (in this case a multiply/add plus two separate additions). This in turn means that pipelining can be used to considerably enhance the sampling rate of a recursive digital filter. The level of pipelining which offers enhancement is determined by L and is fine-grained rather than bit-level. In the circuit considered, L = 3. For this reason, pipeline delays (half latches) have been introduced between every two rows of cells to produce a system with a once-per-cycle sample rate.
Abstract:
Multivariate classification techniques have proven to be powerful tools for distinguishing experimental conditions in single sessions of functional magnetic resonance imaging (fMRI) data. But they suffer a considerable penalty in classification accuracy when applied across sessions or participants, calling into question the degree to which fine-grained encodings are shared across subjects. Here, we introduce joint learning techniques, in which feature selection is carried out using a held-out subset of a target dataset before a linear classifier is trained on a source dataset. Single trials of functional MRI data from a covert property generation task are classified with regularized regression techniques to predict the semantic class of stimuli. With our selection techniques (joint ranking feature selection (JRFS) and disjoint feature selection (DJFS)), classification performance during cross-session prediction improved greatly relative to feature selection on the source session data only. Compared with JRFS, DJFS showed significant improvements for cross-participant classification, and when using groupwise training, DJFS approached the accuracies seen for prediction across different sessions from the same participant. Comparing several feature selection strategies, we found that a simple univariate ANOVA selection technique or a minimal searchlight (one voxel in size) is appropriate, compared with larger searchlights.
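The simple univariate ANOVA selection the authors found adequate can be sketched as a one-way F statistic computed per voxel (feature), keeping the features whose class means differ most relative to within-class variance. This is an illustration of the generic technique on hypothetical data, not the paper's pipeline.

```python
def anova_f(groups):
    """One-way ANOVA F statistic for a single feature.

    groups: one list of feature values per class.
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def select_top_features(feature_groups, k):
    """Indices of the k features with the largest F statistics.

    feature_groups: for each feature, one list of values per class.
    """
    scores = [anova_f(groups) for groups in feature_groups]
    return sorted(sorted(range(len(scores)), key=lambda i: -scores[i])[:k])
```

In a cross-session setting, scoring features on a held-out subset of the target session (rather than the source session only) is what distinguishes the joint selection schemes described above.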
Abstract:
Lower Cretaceous meandering and braided fluvial sandstones of the Nubian Formation form some of the most important subsurface reservoir rocks in the Sirt Basin, north-central Libya. Mineralogical, petrographical and geochemical analyses of sandstone samples from well BB6-59, Sarir oilfield, indicate that the meandering fluvial sandstones are fine- to very fine-grained subarkosic arenites (av. Q₉₁F₅L₄), and that the braided fluvial sandstones are medium- to very coarse-grained quartz arenites (av. Q₉₆F₃L₁). The reservoir qualities of these sandstones were modified during both eodiagenesis (ca. <70°C; <2 km) and mesodiagenesis (ca. >70°C; >2 km). Reservoir quality evolution was controlled primarily by the dissolution and kaolinitization of feldspars, micas and mud intraclasts during eodiagenesis, and by the amount and thickness of grain-coating clays, chemical compaction and quartz overgrowths during mesodiagenesis. However, dissolution and kaolinitization of feldspars, micas and mud intraclasts resulted in the creation of intercrystalline micro- and mouldic macro-porosity and permeability during eodiagenesis, which were more widespread in braided fluvial than in meandering fluvial sandstones. This was because of the greater depositional porosity and permeability of the braided fluvial sandstones, which enhanced percolation of meteoric waters. The development of only limited quartz overgrowths in the braided fluvial sandstones, in which quartz grains are coated by thick illite layers, retained high porosity and permeability (12-23% and 30-600 mD). By contrast, the meandering fluvial sandstones underwent porosity loss as a result of quartz overgrowth development on quartz grains which lack, or have thin and incomplete, grain-coating illite (2-15% and 0-0.1 mD). Further loss of porosity in the meandering fluvial sandstones occurred as a result of chemical compaction (pressure dissolution) induced by the occurrence of micas along grain contacts.
Other diagenetic alterations, such as the growth of pyrite, siderite and dolomite/ankerite, and albitization, had little impact on reservoir quality. The albitization of feldspars may have had a minor positive influence on reservoir quality through the creation of intercrystalline micro-porosity between albite crystals. The results of this study show that diagenetic modifications of the braided and meandering fluvial sandstones in the Nubian Formation, and the resulting changes in reservoir quality, are closely linked to depositional porosity and permeability. They are also linked to the thickness of grain-coating infiltrated clays and to variations in detrital composition, particularly the amounts of mud intraclasts, feldspars and mica grains, as well as to climatic conditions.
Abstract:
Energy in today's short-range wireless communication is mostly spent on the analog and digital hardware rather than on radiated power. Hence, purely information-theoretic considerations fail to achieve the lowest energy per information bit, and the optimization process must carefully consider the overall transceiver. In this paper, we propose to perform cross-layer optimization based on an energy-aware rate adaptation scheme combined with a physical layer that is able to adjust its processing effort to the data rate and the channel conditions in order to minimize the energy consumption per information bit. This energy-proportional behavior is enabled by extending the classical system modes with additional configuration parameters at the various layers. Fine-grained models of the power consumption of the hardware are developed to provide awareness of the physical layer capabilities to the medium access control layer. The joint application of the proposed energy-aware rate adaptation and modifications to the physical layer of an IEEE 802.11n system improves energy efficiency (averaged over many noise and channel realizations) in all considered scenarios by up to 44%.
Abstract:
The design of a high-performance IIR (infinite impulse response) digital filter is described. The chip architecture operates on 11-b parallel, two's complement input data with a 12-b parallel two's complement coefficient to produce a 14-b two's complement output. The chip is implemented in 1.5-µm, double-layer-metal CMOS technology, consumes 0.5 W, and can operate up to 15 Msample/s. The main component of the system is a fine-grained systolic array that internally is based on a signed binary number representation (SBNR). Issues addressed include testing, clock distribution, and circuitry for conversion between two's complement and SBNR.
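The signed binary number representation (SBNR) mentioned above is a redundant signed-digit representation, with digits drawn from {-1, 0, +1}, that enables MSB-first/online arithmetic. As an illustration of the idea (not of the chip's conversion circuitry), the sketch below converts an integer to non-adjacent form, a canonical signed-digit encoding, and evaluates a signed-digit string back to an integer.

```python
def to_naf(n):
    """Non-adjacent form of an integer: signed digits in {-1, 0, 1},
    least-significant first, with no two adjacent non-zero digits."""
    digits = []
    while n != 0:
        if n % 2:
            d = 2 - (n % 4)   # +1 or -1, chosen so that (n - d) % 4 == 0
            n -= d
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

def sbnr_value(digits):
    """Integer value of a signed-digit string (least-significant first)."""
    return sum(d * (1 << i) for i, d in enumerate(digits))
```

For example, 7 becomes the digit string (-1, 0, 0, 1), i.e. 8 - 1; the redundancy is what lets signed-digit adders avoid long carry chains.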
Abstract:
Samples of fine-grained channel bed sediment and overbank floodplain deposits were collected along the main channels of the Rivers Aire (and its main tributary, the River Calder) and Swale, in Yorkshire, UK, in order to investigate downstream changes in the storage and deposition of heavy metals (Cr, Cu, Pb, Zn), total P and the sum of selected PCB congeners, and to estimate the total storage of these contaminants within the main channels and floodplains of these river systems. Downstream trends in the contaminant content of the <63 μm fraction of channel bed and floodplain sediment in the study rivers are controlled mainly by the location of the main sources of the contaminants, which varies between rivers. In the Rivers Aire and Calder, the contaminant content of the <63 μm fraction of channel bed and floodplain sediment generally increases in a downstream direction, reflecting the location of the main urban and industrialized areas in the middle and lower parts of the basin. In the River Swale, the concentrations of most of the contaminants examined are approximately constant along the length of the river, due to the relatively unpolluted nature of this river. However, the Pb and Zn content of fine channel bed sediment decreases downstream, due to the location of historic metal mines in the headwaters of this river, and the effect of downstream dilution with uncontaminated sediment. The magnitude and spatial variation of contaminant storage and deposition on channel beds and floodplains are also controlled by the amount of <63 μm sediment stored on the channel bed and deposited on the floodplain during overbank events. Consequently, contaminant deposition and storage are strongly influenced by the surface area of the floodplain and channel bed. Contaminant storage on the channel beds of the study rivers is, therefore, generally greatest in the middle and lower reaches of the rivers, since channel width increases downstream. 
Comparisons of the estimates of total storage of specific contaminants on the channel beds of the main channel systems of the study rivers with the annual contaminant flux at the catchment outlets indicate that channel storage represents <3% of the outlet flux and is, therefore, of limited importance in regulating that flux. Similar comparisons between the annual deposition flux of specific contaminants to the floodplains of the study rivers and the annual contaminant flux at the catchment outlet, emphasise the potential importance of floodplain deposition as a conveyance loss. In the case of the River Aire the floodplain deposition flux is equivalent to between ca. 2% (PCBs) and 36% (Pb) of the outlet flux. With the exception of PCBs, for which the value is ≅0, the equivalent values for the River Swale range between 18% (P) and 95% (Pb). The study emphasises that knowledge of the fine-grained sediment delivery system operating in a river basin is an essential prerequisite for understanding the transport and storage of sediment-associated contaminants in river systems and that conveyance losses associated with floodplain deposition exert an important control on downstream contaminant fluxes and the fate of such contaminants. © 2003 Elsevier Science Ltd. All rights reserved.
Abstract:
Field programmable gate array (FPGA) devices boast abundant resources with which custom accelerator components for signal, image and data processing may be realised; however, realising high-performance, low-cost accelerators currently demands manual register transfer level design. Software-programmable 'soft' processors have been proposed as a way to reduce this design burden, but they are unable to support performance and cost comparable to custom circuits. This paper proposes a new soft processing approach for FPGA which promises to overcome this barrier. A high-performance, fine-grained streaming processor, known as a Streaming Accelerator Element, is proposed which realises accelerators as large-scale custom multicore networks. By adopting a streaming execution approach with advanced program control and memory addressing capabilities, typical program inefficiencies can be almost completely eliminated, enabling performance and cost which are unprecedented amongst software-programmable solutions. When used to realise accelerators for fast Fourier transform, motion estimation, matrix multiplication and Sobel edge detection, the proposed architecture is shown to enable real-time operation with performance and cost comparable to hand-crafted custom circuit accelerators and up to two orders of magnitude beyond existing soft processors.
Abstract:
Software-programmable 'soft' processors have shown tremendous potential for efficient realisation of high-performance signal processing operations on Field Programmable Gate Array (FPGA), whilst lowering the design burden by avoiding the need to design fine-grained custom circuit architectures. However, the complex data access patterns, high memory bandwidth and computational requirements of sliding window applications, such as Motion Estimation (ME) and Matrix Multiplication (MM), lead to low-performance, inefficient soft processor realisations. This paper resolves this issue, showing how, by adding support for block data addressing and accelerators for high-performance loop execution, performance and resource efficiency over four times better than current best-in-class metrics can be achieved. In addition, it demonstrates the first recorded real-time soft ME realisation for H.263 systems.
Abstract:
We study the computational complexity of finding maximum a posteriori (MAP) configurations in Bayesian networks whose probabilities are specified by logical formulas. This approach leads to a fine-grained study in which local information such as context-sensitive independence and determinism can be considered. It also allows us to characterize more precisely the jump from tractability to NP-hardness and beyond, and to consider the complexity introduced by evidence alone.
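The MAP problem studied here can be illustrated by exhaustive enumeration over a toy two-variable network with evidence. The network, its probabilities, and the factor encoding are hypothetical; real instances are exactly where the structural properties the paper analyses (determinism, context-sensitive independence) decide tractability, since brute force is exponential in the number of free variables.

```python
from itertools import product

def map_assignment(variables, factors, evidence):
    """Brute-force MAP: the completion of `evidence` over binary
    `variables` that maximizes the product of all `factors`."""
    free = [v for v in variables if v not in evidence]
    best, best_p = None, -1.0
    for vals in product((0, 1), repeat=len(free)):
        assign = dict(evidence)
        assign.update(zip(free, vals))
        p = 1.0
        for f in factors:
            p *= f(assign)
        if p > best_p:
            best, best_p = assign, p
    return best, best_p

# Toy network A -> B with P(A=1)=0.3, P(B=1|A=1)=0.9, P(B=1|A=0)=0.2
def prior_a(s):
    return 0.3 if s["A"] else 0.7

def cond_b(s):
    return (0.9 if s["A"] else 0.2) if s["B"] else (0.1 if s["A"] else 0.8)
```

Given evidence B = 1, the enumeration compares P(A=1, B=1) = 0.27 against P(A=0, B=1) = 0.14 and returns A = 1.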