38 results for fine-grained control

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 100.00%

Abstract:

Densely deployed WiFi networks will play a crucial role in providing the capacity for next-generation mobile internet. However, owing to increasing interference, overlapping channels and degraded throughput efficiency, dense deployment alone does not guarantee higher throughput. An emerging challenge is how to utilize scarce spectrum resources efficiently by matching physical-layer resources to traffic demand. Here, access control allocation strategies play a pivotal role but remain too coarse-grained. As a solution, this research proposes a flexible framework for fine-grained channel width adaptation and multi-channel access in WiFi networks. The approach, named SFCA (Sub-carrier Fine-grained Channel Access), adopts DOFDM (Discontinuous Orthogonal Frequency Division Multiplexing) at the PHY layer. It allocates frequency resources at sub-carrier granularity, which facilitates channel width adaptation for multi-channel access and thus brings more flexibility and higher frequency efficiency. The MAC layer uses a frequency-time domain backoff scheme, which combines the popular time-domain BEB scheme with a frequency-domain backoff to decrease access collisions, resulting in a higher access probability for contending nodes. SFCA is compared with FICA (an established access scheme) and shows significantly better performance. Finally, we present results for next-generation 802.11ac WiFi networks.
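
As an illustration of the combined frequency-time backoff idea described above, the following sketch pairs a random sub-carrier-group choice with a standard binary exponential backoff in time; the group count, window sizes and toy contention round are hypothetical and not taken from SFCA.

```python
import random

NUM_SUBCARRIER_GROUPS = 16    # hypothetical number of contendable sub-carrier groups
CW_MIN, CW_MAX = 16, 1024     # BEB-style contention window bounds (assumed)

def contend(collisions):
    """Return the (sub-carrier group, backoff slot) chosen by one contending node."""
    # Frequency-domain backoff: pick a sub-carrier group uniformly at random, so two
    # nodes collide only if they choose both the same group and the same time slot.
    group = random.randrange(NUM_SUBCARRIER_GROUPS)
    # Time-domain BEB: the contention window doubles after each collision, capped at CW_MAX.
    cw = min(CW_MIN * (2 ** collisions), CW_MAX)
    slot = random.randrange(cw)
    return group, slot

# Toy contention round: a node wins if no other node picked the same (group, slot) pair.
choices = {node: contend(collisions=0) for node in range(8)}
winners = [node for node, pick in choices.items()
           if list(choices.values()).count(pick) == 1]
print("access granted to nodes:", winners)
```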

Relevance: 90.00%

Abstract:

Many genetic studies have demonstrated an association between the 7-repeat (7r) allele of a 48-base-pair variable number of tandem repeats (VNTR) in exon 3 of the DRD4 gene and the phenotype of attention deficit hyperactivity disorder (ADHD). Previous studies have shown inconsistent associations between the 7r allele and neurocognitive performance in children with ADHD. We investigated the performance of 128 children with and without ADHD on the Fixed and Random versions of the Sustained Attention to Response Task (SART). We employed time-series analyses of reaction-time data to allow a fine-grained analysis of reaction-time variability, a candidate endophenotype for ADHD. Children were grouped into either the 7r-present group (possessing at least one copy of the 7r allele) or the 7r-absent group. The ADHD group made significantly more commission errors and was significantly more variable in reaction time, in terms of fast moment-to-moment variability, than the control group, but no effect of genotype was found on these measures. Children with ADHD without the 7r allele made significantly more omission errors, were significantly more variable in the slow frequency domain and showed less sensitivity to the signal (d') than children with ADHD who carried the 7r allele and control children with or without the 7r. These results highlight the utility of time-series analyses of reaction-time data for delineating the neuropsychological deficits associated with ADHD and the DRD4 VNTR. Absence of the 7-repeat allele in children with ADHD is associated with a neurocognitive profile of drifting sustained attention that gives rise to variable and inconsistent performance. (c) 2008 Wiley-Liss, Inc.
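
A minimal sketch of the kind of time-series decomposition described above, separating fast moment-to-moment variability from slow-frequency variability in a reaction-time series; the simulated data, the 0.05 cycles-per-trial cut-off and the summary measures are illustrative assumptions, not the study's actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated reaction-time series (ms): white trial-to-trial noise plus a slow drift.
rt = 450 + 40 * rng.standard_normal(360) + 60 * np.sin(np.arange(360) / 30.0)

# Fast moment-to-moment variability: mean absolute difference between successive trials.
fast_var = np.mean(np.abs(np.diff(rt)))

# Slow-frequency variability: share of spectral power below an assumed cut-off.
spectrum = np.abs(np.fft.rfft(rt - rt.mean())) ** 2
freqs = np.fft.rfftfreq(len(rt), d=1.0)              # frequency in cycles per trial
slow_share = spectrum[(freqs > 0) & (freqs < 0.05)].sum() / spectrum[freqs > 0].sum()

print(f"fast moment-to-moment variability: {fast_var:.1f} ms")
print(f"share of variance in the slow band: {slow_share:.2f}")
```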

Relevance: 90.00%

Abstract:

Index properties such as the liquid limit and plastic limit are widely used to evaluate certain geotechnical parameters of fine-grained soils. Measurement of the liquid limit is a mechanical process, and the possibility of errors occurring during measurement is not significant. However, this is not the case for plastic limit testing, even though the current method of measurement is embraced by many standards around the world. The method in question relies on a fairly crude procedure known widely as the 'thread rolling' test, which has been the subject of much criticism in recent years. It is essential that a new, more reliable method of measuring the plastic limit is developed using a mechanical process that is both consistent and easily reproducible. The work reported in this paper concerns the development of a new device to measure the plastic limit, based on the existing falling-cone apparatus. The force required for the test is equivalent to the application of a 54 N fast-static load acting on the existing cone used in liquid limit measurements. The test is complete when the water content of the soil specimen allows the cone to achieve a penetration of 20 mm. The new technique was used to measure the plastic limit of 16 different clays from around the world. The plastic limit measured using the new method identified reasonably well the water content at which the soil changes phase from the plastic to the semi-solid state. Further evaluation was undertaken by conducting plastic limit tests using the new method on selected samples and comparing the results with values reported by local site investigation laboratories. Again, reasonable agreement was found.
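
A small worked sketch of how a fall-cone plastic limit could be read off test data: interpolate water content against cone penetration and take the value at the 20 mm target. The penetration and water-content readings are invented for illustration and are not data from the paper.

```python
import numpy as np

# Hypothetical fall-cone readings: penetration increases with water content.
penetration_mm = np.array([14.0, 17.5, 19.2, 21.4, 23.0])
water_content_pct = np.array([18.1, 19.6, 20.4, 21.5, 22.3])

# Interpolate to find the water content giving the 20 mm target penetration.
plastic_limit = np.interp(20.0, penetration_mm, water_content_pct)
print(f"plastic limit ≈ {plastic_limit:.1f}% water content at 20 mm penetration")
```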

Relevance: 90.00%

Abstract:

The initial part of this paper reviews the early challenges (c. 1980) in achieving real-time silicon implementations of DSP computations. In particular, it discusses research on application-specific architectures, including bit-level systolic circuits, that led to important advances in achieving the DSP performance levels then required. These were many orders of magnitude greater than those achievable using programmable (including early DSP) processors, and were demonstrated through the design of commercial digital correlator and digital filter chips. As is discussed, an important challenge was the application of these concepts to recursive computations, as occur, for example, in Infinite Impulse Response (IIR) filters. An important breakthrough was to show how fine-grained pipelining can be used if arithmetic is performed most significant bit (msb) first. This can be achieved using redundant number systems, including carry-save arithmetic. This research and its practical benefits were again demonstrated through a number of novel IIR filter chip designs which, at the time, exhibited performance much greater than previous solutions. The architectural insights gained, coupled with the regular nature of many DSP and video processing computations, also provided the foundation for new methods for the rapid design and synthesis of complex DSP System-on-Chip (SoC) Intellectual Property (IP) cores. This included the creation of a wide portfolio of commercial SoC video compression cores (MPEG2, MPEG4, H.264) for very high performance applications ranging from cell phones to High Definition TV (HDTV). The work provided the foundation for systematic methodologies, tools and design flows, including high-level design optimizations based on "algorithmic engineering", and also led to the creation of the Abhainn tool environment for the design of complex heterogeneous DSP platforms comprising processors and multiple FPGAs. The paper concludes with a discussion of the problems faced by designers in developing complex DSP systems using current SoC technology. © 2007 Springer Science+Business Media, LLC.
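
The msb-first pipelining mentioned above relies on redundant arithmetic such as carry-save form; the sketch below shows the basic idea of a carry-save (3:2) reduction, in which three operands are combined without propagating a carry chain. It is a generic illustration, not the chip designs described in the paper.

```python
def carry_save_add(a, b, c, width=8):
    """3:2 compressor: reduce a + b + c to a (sum word, carry word) pair, bit by bit."""
    sum_word, carry_word = 0, 0
    for i in range(width):
        ai, bi, ci = (a >> i) & 1, (b >> i) & 1, (c >> i) & 1
        sum_word |= (ai ^ bi ^ ci) << i                               # per-bit sum, no carry chain
        carry_word |= ((ai & bi) | (ai & ci) | (bi & ci)) << (i + 1)  # carries shifted left
    return sum_word, carry_word

a, b, c = 0x3A, 0x5C, 0x0F
s, cy = carry_save_add(a, b, c)
assert s + cy == a + b + c      # carries are only resolved by a single final adder
print(f"sum word = {s:#x}, carry word = {cy:#x}, total = {s + cy:#x}")
```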

Relevance: 90.00%

Abstract:

The commonly used British Standard constant-head triaxial permeability (BS) test for fine-grained soils is known to have a relatively long test duration. Consequently, a reduction in the time required for permeability testing offers potential cost savings to the construction industry (specifically, for use during Construction Quality Assurance (CQA) of landfill mineral liners). The purpose of this article is to investigate and evaluate alternative short-duration testing methods for measuring the permeability of fine-grained soils.

As part of the investigation, the feasibility of an existing short-duration permeability test, known as the Accelerated Permeability (AP) test, was assessed and compared with permeability measured using the British Standard (BS) method and the Ramp Accelerated Permeability (RAP) test. Four different fine-grained materials with a variety of physical properties were compacted at various moisture contents to produce analogous samples for testing using the three different methodologies. Fabric analysis was carried out on specimens derived from post-test samples using Mercury Intrusion Porosimetry (MIP) and Scanning Electron Microscopy (SEM) to assess the effects of testing methodology on soil structure. Results showed that AP testing in general under-predicts permeability values derived from the BS test, owing to large changes in soil structure caused by the AP test methodology, which is also confirmed by the MIP and SEM observations. RAP testing in general provides an improvement over the AP test but still under-predicts permeability values. The potential savings in test duration are shown to be relatively minimal for both the AP and RAP tests.
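
For context, the quantity all three tests estimate is the Darcy coefficient of permeability; a minimal worked example under assumed sample dimensions and readings is sketched below (it is not data from the study).

```python
import math

# Assumed sample geometry and test readings (illustrative only).
diameter = 0.100           # m, sample diameter
length = 0.100             # m, sample height (flow path length)
head = 5.0                 # m, applied constant head of water
volume = 2.0e-6            # m^3 of water collected
duration = 24 * 3600.0     # s (one day)

area = math.pi * diameter ** 2 / 4.0
flow_rate = volume / duration              # m^3/s
k = flow_rate * length / (area * head)     # Darcy's law: k = Q L / (A h)
print(f"coefficient of permeability k ≈ {k:.2e} m/s")
```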

Relevance: 90.00%

Abstract:

Energy in today's short-range wireless communication is mostly spent on the analog and digital hardware rather than on radiated power. Hence, purely information-theoretic considerations fail to achieve the lowest energy per information bit, and the optimization process must carefully consider the overall transceiver. In this paper, we propose to perform cross-layer optimization, based on an energy-aware rate adaptation scheme combined with a physical layer that is able to adjust its processing effort to the data rate and the channel conditions in order to minimize the energy consumption per information bit. This energy-proportional behavior is enabled by extending the classical system modes with additional configuration parameters at the various layers. Fine-grained models of the power consumption of the hardware are developed to provide awareness of the physical-layer capabilities to the medium access control layer. The joint application of the proposed energy-aware rate adaptation and modifications to the physical layer of an IEEE 802.11n system improves energy efficiency (averaged over many noise and channel realizations) in all considered scenarios, by up to 44%.
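
A hedged sketch of the rate-adaptation idea: pick the PHY configuration that minimises energy per successfully delivered bit under a toy model in which circuit power grows with processing effort. All rates, powers and success probabilities below are invented, not the paper's measured models.

```python
def energy_per_bit(rate_mbps, circuit_power_mw, tx_power_mw, success_prob):
    """Joules per successfully delivered information bit for one configuration."""
    goodput_bps = rate_mbps * 1e6 * success_prob
    total_power_w = (circuit_power_mw + tx_power_mw) / 1e3
    return total_power_w / goodput_bps

# (rate in Mbit/s, circuit power in mW, assumed frame success probability at the current SNR)
configs = [(6.5, 180, 0.99), (26.0, 260, 0.95), (65.0, 420, 0.80), (130.0, 640, 0.40)]
TX_POWER_MW = 100.0

best = min(configs, key=lambda c: energy_per_bit(c[0], c[1], TX_POWER_MW, c[2]))
print(f"most energy-efficient mode: {best[0]} Mbit/s at "
      f"{1e9 * energy_per_bit(best[0], best[1], TX_POWER_MW, best[2]):.1f} nJ/bit")
```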

Relevance: 90.00%

Abstract:

Energy efficiency is an essential requirement for all contemporary computing systems. We thus need tools to measure the energy consumption of computing systems and to understand how workloads affect it. Significant recent research effort has targeted direct power measurements on production computing systems using on-board sensors or external instruments. These direct methods have in turn guided studies of software techniques to reduce energy consumption via workload allocation and scaling. Unfortunately, direct energy measurements are hampered by the low sampling frequency of power sensors. The coarse granularity of power sensing limits our understanding of how power is allocated in systems and our ability to optimize energy efficiency via workload allocation.
We present ALEA, a tool to measure power and energy consumption at the granularity of basic blocks, using a probabilistic approach. ALEA provides fine-grained energy profiling via statistical sampling, which overcomes the limitations of power sensing instruments. Compared to state-of-the-art energy measurement tools, ALEA provides finer granularity without sacrificing accuracy. ALEA achieves low overhead energy measurements with mean error rates between 1.4% and 3.5% in 14 sequential and parallel benchmarks tested on both Intel and ARM platforms. The sampling method caps execution time overhead at approximately 1%. ALEA is thus suitable for online energy monitoring and optimization. Finally, ALEA is a user-space tool with a portable, machine-independent sampling method. We demonstrate two use cases of ALEA, where we reduce the energy consumption of a k-means computational kernel by 37% and an ocean modelling code by 33%, compared to high-performance execution baselines, by varying the power optimization strategy between basic blocks.
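
The following is a minimal sketch of probabilistic, sampling-based energy attribution in the spirit described above, not ALEA's actual implementation: sample which basic block is executing at random instants and split the coarsely measured total energy in proportion to the sample counts. Block names, time shares and the energy figure are hypothetical.

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical ground truth: fraction of execution time spent in each basic block.
time_share = {"bb_loop": 0.62, "bb_reduce": 0.30, "bb_io": 0.08}
total_energy_j = 12.5          # energy reported by a coarse whole-run power sensor

# Each "sample" records which basic block was executing at a random instant.
samples = Counter(random.choices(list(time_share), weights=time_share.values(), k=5000))
total_samples = sum(samples.values())
for block, count in samples.most_common():
    share = count / total_samples
    print(f"{block:10s} ~{share:5.1%} of samples -> {share * total_energy_j:5.2f} J attributed")
```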

Relevance: 90.00%

Abstract:

Samples of fine-grained channel bed sediment and overbank floodplain deposits were collected along the main channels of the Rivers Aire (and its main tributary, the River Calder) and Swale, in Yorkshire, UK, in order to investigate downstream changes in the storage and deposition of heavy metals (Cr, Cu, Pb, Zn), total P and the sum of selected PCB congeners, and to estimate the total storage of these contaminants within the main channels and floodplains of these river systems. Downstream trends in the contaminant content of the <63 μm fraction of channel bed and floodplain sediment in the study rivers are controlled mainly by the location of the main sources of the contaminants, which varies between rivers. In the Rivers Aire and Calder, the contaminant content of the <63 μm fraction of channel bed and floodplain sediment generally increases in a downstream direction, reflecting the location of the main urban and industrialized areas in the middle and lower parts of the basin. In the River Swale, the concentrations of most of the contaminants examined are approximately constant along the length of the river, due to the relatively unpolluted nature of this river. However, the Pb and Zn content of fine channel bed sediment decreases downstream, due to the location of historic metal mines in the headwaters of this river, and the effect of downstream dilution with uncontaminated sediment. The magnitude and spatial variation of contaminant storage and deposition on channel beds and floodplains are also controlled by the amount of <63 μm sediment stored on the channel bed and deposited on the floodplain during overbank events. Consequently, contaminant deposition and storage are strongly influenced by the surface area of the floodplain and channel bed. Contaminant storage on the channel beds of the study rivers is, therefore, generally greatest in the middle and lower reaches of the rivers, since channel width increases downstream. Comparisons of the estimates of total storage of specific contaminants on the channel beds of the main channel systems of the study rivers with the annual contaminant flux at the catchment outlets indicate that channel storage represents <3% of the outlet flux and is, therefore, of limited importance in regulating that flux. Similar comparisons between the annual deposition flux of specific contaminants to the floodplains of the study rivers and the annual contaminant flux at the catchment outlet emphasise the potential importance of floodplain deposition as a conveyance loss. In the case of the River Aire, the floodplain deposition flux is equivalent to between ca. 2% (PCBs) and 36% (Pb) of the outlet flux. With the exception of PCBs, for which the value is ≅0, the equivalent values for the River Swale range between 18% (P) and 95% (Pb). The study emphasises that knowledge of the fine-grained sediment delivery system operating in a river basin is an essential prerequisite for understanding the transport and storage of sediment-associated contaminants in river systems, and that conveyance losses associated with floodplain deposition exert an important control on downstream contaminant fluxes and the fate of such contaminants. © 2003 Elsevier Science Ltd. All rights reserved.

Relevance: 90.00%

Abstract:

Field-programmable gate array (FPGA) devices boast abundant resources with which custom accelerator components for signal, image and data processing may be realised; however, realising high-performance, low-cost accelerators currently demands manual register transfer level design. Software-programmable 'soft' processors have been proposed as a way to reduce this design burden, but they are unable to support performance and cost comparable to custom circuits. This paper proposes a new soft processing approach for FPGA which promises to overcome this barrier. A high-performance, fine-grained streaming processor, known as a Streaming Accelerator Element, is proposed which realises accelerators as large-scale custom multicore networks. By adopting a streaming execution approach with advanced program control and memory addressing capabilities, typical program inefficiencies can be almost completely eliminated, enabling performance and cost which are unprecedented amongst software-programmable solutions. When used to realise accelerators for fast Fourier transform, motion estimation, matrix multiplication and Sobel edge detection, the proposed architecture is shown to enable real-time operation with performance and cost comparable to hand-crafted custom circuit accelerators and up to two orders of magnitude beyond existing soft processors.
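
Below is a purely behavioural sketch of the streaming execution idea, assuming a chain of small stream operators in place of load/store-driven processing; it models a windowed multiply-accumulate and is not the proposed Streaming Accelerator Element architecture.

```python
def stream(values):
    """Source stage: emit raw data items one at a time."""
    for v in values:
        yield v

def mac(stream_a, stream_b, taps):
    """Multiply-accumulate over fixed-size windows of two input streams."""
    window = []
    for a, b in zip(stream_a, stream_b):
        window.append(a * b)
        if len(window) == taps:
            yield sum(window)      # one result per full window; stages run in lock-step
            window.clear()

a = stream([1, 2, 3, 4, 5, 6, 7, 8])
b = stream([8, 7, 6, 5, 4, 3, 2, 1])
print(list(mac(a, b, taps=4)))     # two windowed dot products
```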

Relevance: 90.00%

Abstract:

The British Standard constant-head triaxial test for measuring the permeability of fine-grained soils takes a relatively long time. A quicker test could provide savings to the construction industry, particularly for checking the quality of landfill clay liners. An accelerated permeability test has been developed, but the method often underestimates permeability values owing to structural changes in the soil sample. This paper reports on an investigation into the accelerated test to discover whether these changes can be limited by using a revised procedure. The accelerated test is assessed and compared with the standard test and a ramp-accelerated permeability test. Four different fine-grained materials are compacted at various water contents to produce analogous samples for testing using the three different methods. Fabric analysis is carried out on specimens derived from post-test samples using mercury intrusion porosimetry and scanning electron microscopy to assess the effects of testing on soil structure. The results show that accelerated testing in general underestimates permeability compared with values derived from the standard test, owing to changes in soil structure caused by testing. The ramp-accelerated test is shown to provide an improvement in terms of these structural changes.

Relevance: 80.00%

Abstract:

High-speed field-programmable gate array (FPGA) implementations of an adaptive least mean square (LMS) filter, with application in an electronic support measures (ESM) digital receiver, are presented. They employ "fine-grained" pipelining, i.e., pipelining within the processor, which results in increased output latency when used in the recursive LMS system. The major challenge is therefore to maintain a low-latency output whilst increasing the number of pipeline stages in the filter for higher speeds. Using the delayed LMS (DLMS) algorithm, fine-grained pipelined FPGA implementations using both the direct form (DF) and the transposed form (TF) are considered and compared. It is shown that the direct-form LMS filter utilizes the FPGA resources more efficiently, thereby allowing a 120 MHz sampling rate.
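
A short sketch of the delayed LMS (DLMS) update referred to above, in which the weight update uses the error and input from a few samples earlier so that pipeline latency can be tolerated; the filter length, step size, delay and test signal are illustrative assumptions, not the paper's ESM receiver design.

```python
import numpy as np

def dlms(x, d, num_taps=8, mu=0.01, delay=4):
    """Adapt filter weights using the error and input from `delay` samples ago."""
    w = np.zeros(num_taps)
    e = np.zeros(len(x))
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]       # current regressor [x[n], ..., x[n-7]]
        e[n] = d[n] - w @ xn
        m = n - delay                              # delayed error/input pair for the update
        if m >= num_taps - 1:
            xm = x[m - num_taps + 1:m + 1][::-1]
            w += mu * e[m] * xm
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
h = np.array([0.6, -0.3, 0.2, 0.1, 0.05, 0.0, 0.0, 0.0])   # assumed unknown system
d = np.convolve(x, h)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print("identified taps:", np.round(dlms(x, d), 2))          # converges towards h
```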

Relevance: 80.00%

Abstract:

In the absence of a firm link between individual meteorites and their asteroidal parent bodies, asteroids are typically characterized only by their light reflection properties, and grouped accordingly into classes. On 6 October 2008, a small asteroid was discovered with a flat reflectance spectrum in the 554-995 nm wavelength range, and designated 2008 TC3 (refs 4-6). It subsequently hit the Earth. Because it exploded at 37 km altitude, no macroscopic fragments were expected to survive. Here we report that a dedicated search along the approach trajectory recovered 47 meteorites, fragments of a single body named Almahata Sitta, with a total mass of 3.95 kg. Analysis of one of these meteorites shows it to be an achondrite, a polymict ureilite, anomalous in its class: ultra-fine-grained and porous, with large carbonaceous grains. The combined asteroid and meteorite reflectance spectra identify the asteroid as F class, now firmly linked to dark carbon-rich anomalous ureilites, a material so fragile it was not previously represented in meteorite collections.

Relevance: 80.00%

Abstract:

The research reported here is based on the standard laboratory experiments routinely performed in order to measure various geotechnical parameters. These experiments require consolidation of fine-grained samples in triaxial or stress path apparatus. The time required for consolidation depends on the permeability of the soil and the length of the drainage path, and is often of the order of several weeks in large clay-dominated samples. Long testing periods can be problematic, as they can delay decisions on design and construction methods. Accelerating the consolidation process requires a reduction in the effective drainage length, and this is usually achieved by placing filter drains around the sample. The purpose of the research reported in this paper is to assess whether these filter drains work effectively and, if not, to determine what modifications to them are needed. The findings show that use of a double filter reduces the consolidation time severalfold.
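
For intuition on why shortening the drainage path matters, the sketch below applies the standard Terzaghi one-dimensional consolidation relation t = T_v·H_d²/c_v (a textbook relation, not a result from the paper) with an assumed coefficient of consolidation: halving the effective drainage length cuts the consolidation time roughly fourfold.

```python
T_V_90 = 0.848              # Terzaghi time factor for 90% consolidation
c_v = 1.0e-8                # m^2/s, coefficient of consolidation (assumed for a clay)
sample_height = 0.10        # m, triaxial sample

cases = [("single (one-end) drainage",                           sample_height),
         ("two-way (top and bottom) drainage",                   sample_height / 2),
         ("with additional filter drains (assumed shorter path)", sample_height / 4)]

for label, drainage_path in cases:
    t_days = T_V_90 * drainage_path ** 2 / c_v / 86400.0
    print(f"{label:55s} t90 ≈ {t_days:5.1f} days")
```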