827 results for Multiple-scale processing
Abstract:
The vegetation of Europe has undergone substantial changes during the course of the Holocene epoch, resulting from range expansion of plants following climate amelioration, competition between taxa and disturbance through anthropogenic activities. Much of the detail of this pattern is understood from
decades of pollen analytical work across Europe, and this understanding has been used to address questions relating to vegetation-climate feedback, biogeography and human impact. Recent advances in modelling the relationship between pollen and vegetation now make it possible to transform pollen
proportions into estimates of vegetation cover at both regional and local spatial scales, using the Landscape Reconstruction Algorithm (LRA), i.e. the REVEALS (Regional Estimates of VEgetation Abundance from Large Sites) and the LOVE (LOcal VEgetation) models. This paper presents the compilation and analysis of 73 pollen stratigraphies from the British Isles, to assess the application of the LRA and describe the pattern of landscape/woodland openness (i.e. the cover of low herb and bushy vegetation) through the Holocene. The results show that multiple small sites can be used as an effective replacement for a single large site for the reconstruction of regional vegetation cover. The REVEALS vegetation estimates imply that the British Isles had a greater degree of landscape/woodland openness at the regional scale than areas on the European mainland. There is considerable spatial bias in the British Isles dataset towards wetland areas and uplands, which may explain the higher estimates of landscape openness compared with Europe. Where multiple estimates of regional vegetation are available from within the same region, inter-regional differences are greater than intra-regional differences, supporting the use of the REVEALS model for the estimation of regional vegetation from pollen data.
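To make the transformation concrete, here is a minimal sketch of the core REVEALS correction: pollen counts are down-weighted by each taxon's relative pollen productivity and renormalised to estimate regional cover. The full model also applies a taxon-specific dispersal-deposition weighting, omitted here, and the taxa and productivity values below are placeholders rather than published estimates.

```python
# Minimal sketch of the core REVEALS correction: pollen counts are
# down-weighted by each taxon's relative pollen productivity (PPE) and
# renormalised to estimate regional vegetation cover. The full model also
# weights by a taxon-specific dispersal-deposition factor, omitted here.
# PPE values below are placeholders, not published estimates.

pollen_counts = {"Quercus": 420, "Betula": 310, "Poaceae": 150}
ppe = {"Quercus": 5.8, "Betula": 3.1, "Poaceae": 1.0}  # relative to Poaceae

corrected = {t: n / ppe[t] for t, n in pollen_counts.items()}
total = sum(corrected.values())
cover = {t: v / total for t, v in corrected.items()}

for taxon, frac in cover.items():
    print(f"{taxon}: {frac:.1%} estimated regional cover")
```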
Abstract:
The paper presents IPPro, a high-performance, scalable soft-core processor targeted at image processing applications. It is based on the Xilinx DSP48E1 architecture, implemented on the Zynq Field Programmable Gate Array, and is a scalar 16-bit RISC processor that operates at 526 MHz, giving 526 MIPS of performance. Each IPPro core uses 1 DSP48, 1 Block RAM and 330 Kintex-7 slice registers, making the processor as compact as possible whilst maintaining flexibility and programmability. A key aspect of the approach is reducing application design time and implementation effort by using multiple IPPro processors in a SIMD mode. For different applications, this allows us to exploit different levels of parallelism and mapping for the specified processing architecture with the supported instruction set. In this context, a Traffic Sign Recognition (TSR) algorithm has been prototyped on a Zedboard, with the colour and morphology operations accelerated using multiple IPPros. Simulation and experimental results demonstrate that the processing platform achieves speedups of 15 and 33 times for colour filtering and morphology operations respectively, with reduced design effort and time.
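As an illustration of the kind of data-parallel kernel the paper maps across IPPro cores in SIMD mode, the sketch below implements a 3x3 binary erosion (one of the accelerated morphology operations) in plain NumPy; the comments indicate how rows would be partitioned across cores, and none of this reflects the IPPro instruction set itself.

```python
import numpy as np

# Illustrative sketch of a 3x3 binary erosion, the kind of per-pixel
# morphology kernel the paper accelerates by replicating IPPro cores in
# SIMD mode. This is plain NumPy, not the IPPro instruction set.

def erode3x3(img: np.ndarray) -> np.ndarray:
    padded = np.pad(img, 1, mode="edge")
    out = np.ones_like(img)
    # A pixel survives only if its whole 3x3 neighbourhood is set.
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out

img = (np.random.rand(64, 64) > 0.3).astype(np.uint8)
eroded = erode3x3(img)   # in SIMD mode each core would erode one image
                         # stripe (plus a one-row halo) independently
```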
Abstract:
Ubiquitous parallel computing aims to make parallel programming accessible to a wide variety of programming areas using deterministic and scale-free programming models built on a task abstraction. However, it remains hard to reconcile these attributes with pipeline parallelism, where the number of pipeline stages is typically hard-coded in the program and defines the degree of parallelism.
This paper introduces hyperqueues, a programming abstraction that enables the construction of deterministic and scale-free pipeline parallel programs. Hyperqueues extend the concept of Cilk++ hyperobjects to provide thread-local views on a shared data structure. While hyperobjects are organized around private local views, hyperqueues require shared concurrent views on the underlying data structure. We define the semantics of hyperqueues and describe their implementation in a work-stealing scheduler. We demonstrate scalable performance on pipeline-parallel PARSEC benchmarks and find that hyperqueues provide comparable or up to 30% better performance than POSIX threads and Intel's Threading Building Blocks. The latter are highly tuned to the number of available processing cores, while programs using hyperqueues are scale-free.
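For contrast with the hyperqueue abstraction, the sketch below shows the conventional hard-coded pipeline style the paper argues against: each stage is a thread joined to the next by an explicit queue, so the number of stages, and hence the degree of parallelism, is fixed in the program text. This is illustrative Python, not the Cilk++-based implementation described in the paper.

```python
import threading
import queue

# A conventional hard-coded pipeline (the style hyperqueues replace): each
# stage is a thread joined to the next by an explicit queue, so the degree
# of parallelism is fixed in the program text. Hyperqueues instead expose
# views on the queue to however many workers the scheduler provides.

q1, q2 = queue.Queue(), queue.Queue()
SENTINEL = object()

def produce():
    for i in range(10):
        q1.put(i)
    q1.put(SENTINEL)

def transform():
    while (item := q1.get()) is not SENTINEL:
        q2.put(item * item)
    q2.put(SENTINEL)

def consume():
    while (item := q2.get()) is not SENTINEL:
        print(item)

threads = [threading.Thread(target=f) for f in (produce, transform, consume)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```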
Abstract:
Humans typically make several rapid eye movements (saccades) per second. It is thought that visual working memory can retain and spatially integrate three to four objects or features across each saccade, but little is known about this neural mechanism. Previously we showed that transcranial magnetic stimulation (TMS) to the posterior parietal cortex and frontal eye fields degrades trans-saccadic memory of multiple object features (Prime, Vesia, & Crawford, 2008, Journal of Neuroscience, 28(27), 6938-6949; Prime, Vesia, & Crawford, 2010, Cerebral Cortex, 20(4), 759-772). Here, we used a similar protocol to investigate whether dorsolateral prefrontal cortex (DLPFC), an area involved in spatial working memory, is also involved in trans-saccadic memory. Subjects were required to report changes in stimulus orientation with (saccade task) or without (fixation task) an eye movement in the intervening memory interval. We applied single-pulse TMS to left and right DLPFC during the memory delay, timed at three intervals to arrive approximately 100 ms before, 100 ms after, or at saccade onset. In the fixation task, left DLPFC TMS produced inconsistent results, whereas right DLPFC TMS disrupted performance at all three intervals (significantly for presaccadic TMS). In contrast, in the saccade task, TMS consistently facilitated performance (significantly for left DLPFC/perisaccadic TMS and right DLPFC/postsaccadic TMS), suggesting a disinhibition of trans-saccadic processing. These results are consistent with a neural circuit for trans-saccadic memory that overlaps and interacts with, but is partially separate from, the circuit for visual working memory during sustained fixation.
Abstract:
In this paper we investigate the first- and second-order characteristics of the received signal at the output of hypothetical selection, equal gain and maximal ratio combiners which utilize spatially separated antennas at the base station. Considering a range of human body movements, we model the small-scale fading characteristics of the signal using diversity-specific analytical equations which take into account the number of available signal branches at the receiver. It is shown that these equations provide an excellent fit to the measured channel data. Furthermore, for many hypothetical diversity receiver configurations, the Nakagami-m parameter was found to be close to 1.
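For reference, the sketch below evaluates the textbook output SNRs of the three hypothetical combiners from per-branch instantaneous SNRs; these are standard results under equal branch noise, not the paper's diversity-specific fading equations.

```python
import numpy as np

# Textbook output SNRs for the three hypothetical combiners considered in
# the paper, given per-branch instantaneous SNRs and equal branch noise
# (standard results; the paper's fading equations are not reproduced here).

def selection_combining(snr):          # pick the strongest branch
    return np.max(snr)

def equal_gain_combining(snr):         # co-phased, equally weighted branches
    return np.sum(np.sqrt(snr)) ** 2 / len(snr)

def maximal_ratio_combining(snr):      # branches weighted by their own SNR
    return np.sum(snr)

branch_snr = np.array([1.8, 0.6, 2.4])   # linear scale, three antennas
for combiner in (selection_combining, equal_gain_combining,
                 maximal_ratio_combining):
    print(combiner.__name__, f"{combiner(branch_snr):.2f}")
```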
Abstract:
Massive multiple-input multiple-output (MIMO) systems are cellular networks where the base stations (BSs) are equipped with unconventionally many antennas, deployed on co-located or distributed arrays. Huge spatial degrees-of-freedom are achieved by coherent processing over these massive arrays, which provide strong signal gains, resilience to imperfect channel knowledge, and low interference. This comes at the price of more infrastructure; the hardware cost and circuit power consumption scale linearly/affinely with the number of BS antennas N. Hence, the key to cost-efficient deployment of large arrays is low-cost antenna branches with low circuit power, in contrast to today's conventional expensive and power-hungry BS antenna branches. Such low-cost transceivers are prone to hardware imperfections, but it has been conjectured that the huge degrees-of-freedom would bring robustness to such imperfections. We prove this claim for a generalized uplink system with multiplicative phase drifts, additive distortion noise, and noise amplification. Specifically, we derive closed-form expressions for the user rates and a scaling law that shows how fast the hardware imperfections can increase with N while maintaining high rates. The connection between this scaling law and the power consumption of different transceiver circuits is rigorously exemplified. This reveals that one can make the circuit power increase as √N, instead of linearly, by careful circuit-aware system design.
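A toy Monte Carlo can illustrate the intuition behind such a scaling law (this is not the paper's closed-form analysis): with maximum-ratio combining over N antennas, the coherent gain grows with N, so distortion whose variance grows like √N is still overcome, whereas distortion growing linearly in N is not.

```python
import numpy as np

# Toy illustration of the scaling-law intuition, not the paper's
# closed-form rates: with MRC over N antennas the coherent signal power
# grows like N^2, so a distortion variance growing like sqrt(N) is still
# swamped, whereas one growing like N keeps the SINR flat.

rng = np.random.default_rng(0)
for N in (10, 100, 1000):
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
    signal_power = np.abs(np.vdot(h, h)) ** 2          # ~ N^2
    for label, kappa in (("sqrt(N) distortion", np.sqrt(N)),
                         ("linear-N distortion", N)):
        # noise-plus-distortion power seen after MRC combining
        sinr = signal_power / (np.linalg.norm(h) ** 2 * (1 + kappa))
        print(N, label, f"SINR ~ {10 * np.log10(sinr):.1f} dB")
```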
Abstract:
We consider a multipair decode-and-forward relay channel, where multiple sources simultaneously transmit their signals to multiple destinations with the help of a full-duplex relay station. We assume that the relay station is equipped with massive arrays, while all sources and destinations have a single antenna. The relay station uses channel estimates obtained from received pilots and zero-forcing (ZF) or maximum-ratio combining/maximum-ratio transmission (MRC/MRT) to process the signals. To significantly reduce the loop interference effect, we propose two techniques: i) using a massive receive antenna array; or ii) using a massive transmit antenna array together with very low transmit power at the relay station. We derive an exact achievable rate in closed form for MRC/MRT processing and an analytical approximation of the achievable rate for ZF processing. This approximation is very tight, especially for a large number of relay station antennas. These closed-form expressions enable us to determine the regions where the full-duplex mode outperforms the half-duplex mode, as well as to design an optimal power allocation scheme. This optimal power allocation scheme aims to maximize the energy efficiency for a given sum spectral efficiency and under peak power constraints at the relay station and sources. Numerical results verify the effectiveness of the optimal power allocation scheme. Furthermore, we show that, by doubling the number of transmit/receive antennas at the relay station, the transmit power of each source and of the relay station can be reduced by 1.5 dB if the pilot power is equal to the signal power, and by 3 dB if the pilot power is kept fixed, while maintaining a given quality of service.
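The sketch below illustrates the MRC/MRT linear processing at the relay under idealised assumptions (perfect channel knowledge, no noise or loop interference): the received streams are separated by combining with the conjugate source-to-relay channels, then beamformed towards the destinations with the conjugate relay-to-destination channels.

```python
import numpy as np

# Sketch of the relay's MRC/MRT linear processing, idealised: perfect
# channel knowledge, no noise, no loop interference. Receive-combine with
# the conjugate source->relay channels (MRC), then precode the decoded
# symbols with the conjugate relay->destination channels (MRT).

rng = np.random.default_rng(1)
K, Nrx, Ntx = 4, 64, 64                 # user pairs, relay array sizes
G_sr = rng.standard_normal((Nrx, K)) + 1j * rng.standard_normal((Nrx, K))
G_rd = rng.standard_normal((Ntx, K)) + 1j * rng.standard_normal((Ntx, K))

x = rng.standard_normal(K) + 1j * rng.standard_normal(K)  # source symbols
y_relay = G_sr @ x                                        # noiseless uplink

s_hat = G_sr.conj().T @ y_relay                 # MRC separates the K streams
s_hat /= np.linalg.norm(G_sr, axis=0) ** 2      # per-stream scaling; with
                                                # massive Nrx, s_hat ~ x
t = G_rd.conj() @ s_hat                         # MRT beams each stream to
                                                # its destination
```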
Abstract:
When implementing autonomic management of multiple non-functional concerns, a trade-off must be found between the ability to develop management of the individual concerns independently (following the separation-of-concerns principle) and the detection and resolution of conflicts that may arise when combining the independently developed management code. Here we discuss strategies to establish this trade-off and introduce a model-checking-based methodology aimed at simplifying the discovery and handling of conflicts arising from the deployment, within the same parallel application, of independently developed management policies. Preliminary results are shown demonstrating the feasibility of the approach.
Abstract:
Shoeprint evidence collected from crime scenes can play an important role in forensic investigations. Usually, the analysis of shoeprints is carried out manually and is based on human expertise and knowledge. As well as being error prone, such a manual process can also be time consuming, affecting the usability and suitability of shoeprint evidence in a court of law. Thus, an automatic system for the classification and retrieval of shoeprints has the potential to be a valuable tool. This paper presents a solution for the automatic retrieval of shoeprints which is considerably more robust than existing solutions in the presence of geometric distortions such as rotation and scale. It addresses the issue of classifying partial shoeprints in the presence of rotation, scale and noise distortions and relies on the use of two local point-of-interest detectors whose matching scores are combined. In this work, multiscale Harris and Hessian detectors are used to select corners and blob-like structures in a scale-space representation for scale invariance, while the Scale Invariant Feature Transform (SIFT) descriptor is employed to achieve rotation invariance. The proposed technique is based on combining the matching scores of the two detectors at the score level. Our evaluation has shown that it outperforms both individual detectors in most of our extended experiments when retrieving partial shoeprints with geometric distortions, and is clearly better than similar work published in the literature. We also demonstrate improved performance in the face of wear and tear. In fact, whilst the proposed work outperforms similar algorithms in the literature, it is shown that achieving good retrieval performance is not constrained by acquiring a full print from a scene of crime: a partial print can still be used to attain retrieval results comparable to those obtained using the full print. This gives crime investigators more flexibility in choosing the parts of a print to search for in a database of footwear.
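A hedged sketch of the score-level fusion idea follows, using OpenCV: keypoints from a corner detector and a blob-style detector are both described with SIFT, matched, and the two normalised matching scores combined. Harris corners stand in for the multiscale Harris detector and SIFT's own DoG keypoints for the Hessian blob detector; the weights and thresholds are arbitrary choices, not the paper's.

```python
import cv2

# Sketch of the score-level fusion: keypoints from two detectors are both
# described with SIFT and the two normalised matching scores are combined.
# Harris corners stand in for the multiscale Harris detector; SIFT's DoG
# keypoints stand in for the Hessian blob detector. Weights and thresholds
# are arbitrary. Inputs are expected to be 8-bit grayscale images.

sift = cv2.SIFT_create()

def harris_keypoints(gray, size=8.0):
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                      qualityLevel=0.01, minDistance=5)
    if corners is None:
        return []
    return [cv2.KeyPoint(float(x), float(y), size) for x, y in corners[:, 0]]

def match_score(des_q, des_r):
    if des_q is None or des_r is None:
        return 0.0
    matches = cv2.BFMatcher().knnMatch(des_q, des_r, k=2)
    good = [m for m in matches
            if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
    return len(good) / max(len(matches), 1)   # normalised to [0, 1]

def combined_score(query, ref, w=0.5):
    detectors = (harris_keypoints, lambda g: sift.detect(g, None))
    scores = []
    for detect in detectors:
        _, dq = sift.compute(query, detect(query))
        _, dr = sift.compute(ref, detect(ref))
        scores.append(match_score(dq, dr))
    return w * scores[0] + (1 - w) * scores[1]   # score-level fusion
```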
Abstract:
Graphene, due to its outstanding properties, has become the topic of much research activity in recent years. Much of that work has been on a laboratory scale; however, if we are to introduce graphene into real product applications, it is necessary to examine how the material behaves under industrial processing conditions. In this paper the melt processing of polyamide 6/graphene nanoplatelet composites via twin screw extrusion is investigated and structure-property relationships are examined for mechanical and electrical properties. Graphene nanoplatelets (GNPs) with two aspect ratios (700 and 1000) were used in order to examine the influence of particle dimensions on composite properties. It was found that the introduction of GNPs had a nucleating effect on polyamide 6 (PA6) crystallization and substantially increased crystallinity, by up to 120% for a 20 wt% loading in PA6. A small increase in crystallinity was observed when the extruder screw speed increased from 50 rpm to 200 rpm, which could be attributed to better dispersion and more nucleation sites for crystallization. A maximum enhancement of 412% in Young's modulus was achieved at 20 wt% loading of GNPs. This is the highest enhancement in modulus reported to date for a melt-mixed thermoplastic/GNP composite. A further result of importance here is that the modulus continued to increase as the loading of GNPs increased, even at 20 wt% loading, and the results are in excellent agreement with theoretical predictions for modulus enhancement. Electrical percolation was achieved between 10 and 15 wt% loading for both aspect ratios of GNPs, with an increase in conductivity of approximately 6 orders of magnitude compared to the unfilled PA6.
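The abstract does not name the micromechanical model used for the theoretical predictions; the Halpin-Tsai equations are a common choice for platelet-filled composites and illustrate why a higher aspect ratio raises the predicted modulus. The sketch below uses nominal moduli and volume fraction, not the paper's measured values.

```python
# The abstract does not name its micromechanical model; the Halpin-Tsai
# equations are a common choice for platelet fillers and show why a higher
# aspect ratio raises the predicted composite modulus. All inputs here are
# nominal illustrations, not the paper's measured values.

def halpin_tsai(Em, Ef, vf, aspect_ratio):
    """Longitudinal modulus of an aligned platelet-filled composite (GPa)."""
    zeta = 2 * aspect_ratio                 # shape factor for platelets
    eta = (Ef / Em - 1) / (Ef / Em + zeta)
    return Em * (1 + zeta * eta * vf) / (1 - eta * vf)

Em, Ef = 2.7, 1000.0                        # nominal PA6 and GNP moduli, GPa
for ar in (700, 1000):
    print(ar, f"{halpin_tsai(Em, Ef, vf=0.1, aspect_ratio=ar):.1f} GPa")
```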
A sting in the spit: widespread cross-infection of multiple RNA viruses across wild and managed bees
Abstract:
Declining populations of bee pollinators are a cause of concern, with major repercussions for biodiversity loss and food security. RNA viruses associated with honeybees represent a potential threat to other insect pollinators, but the extent of this threat is poorly understood. This study aims to attain a detailed understanding of the current and ongoing risk of emerging infectious disease (EID) transmission between managed and wild pollinator species across a wide range of RNA viruses. Within a structured large-scale national survey across 26 independent sites, we quantify the prevalence and pathogen loads of multiple RNA viruses in co-occurring managed honeybee (Apis mellifera) and wild bumblebee (Bombus spp.) populations. We then construct models that compare virus prevalence between wild and managed pollinators. Multiple RNA viruses associated with honeybees are widespread in sympatric wild bumblebee populations. Virus prevalence in honeybees is a significant predictor of virus prevalence in bumblebees, but we remain cautious in speculating over the principal direction of pathogen transmission. We demonstrate species-specific differences in prevalence, indicating significant variation in disease susceptibility or tolerance. Pathogen loads within individual bumblebees may be high and, in the case of at least one RNA virus, prevalence is higher in wild bumblebees than in managed honeybee populations. Our findings indicate widespread transmission of RNA viruses between managed and wild bee pollinators, pointing to an interconnected network of potential disease pressures within and among pollinator species. In the context of the biodiversity crisis, our study emphasizes the importance of targeting a wide range of pathogens and defining host associations when considering potential drivers of population decline.
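One natural way to "construct models that compare virus prevalence" is a binomial GLM with honeybee prevalence as a predictor of bumblebee infection; the sketch below shows that formulation with statsmodels, using entirely hypothetical column names and data, since the abstract does not specify the model structure.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hedged sketch of one plausible formulation of the prevalence comparison:
# a binomial GLM with site-level honeybee prevalence predicting infection
# status in sympatric bumblebees. Column names and values are hypothetical;
# the abstract does not give the paper's actual model structure.

df = pd.DataFrame({
    "infected":            [1, 0, 1, 0, 0, 1, 1, 0],   # bumblebee status
    "honeybee_prevalence": [.6, .1, .5, .7, .2, .1, .8, .3],
    "species":             ["terrestris"] * 4 + ["lapidarius"] * 4,
})

model = smf.glm("infected ~ honeybee_prevalence + species",
                data=df, family=sm.families.Binomial()).fit()
print(model.summary())
```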
Abstract:
This paper describes large-scale tests conducted on a novel unglazed solar air collector system. The proposed system, referred to as a back-pass solar collector (BPSC), has on-site installation and aesthetic advantages over conventional unglazed transpired solar collectors (UTSC) as it is fully integrated within a standard insulated wall panel. This paper presents the results obtained from monitoring a BPSC wall panel over one year. Measurements of temperature, wind velocity and solar irradiance were taken at multiple air mass flow rates. It is shown that the length of the collector cavities has a direct impact on the efficiency of the system. It is also shown that, beyond a height-to-flow ratio of 0.023 m/(m³/hr/m²), no additional heat output is obtained by increasing the collector height for the experimental setup in this study, although these figures would differ if the experimental setup or test environment (e.g. location and climate) changed. An equation for predicting the temperature rise of the BPSC is proposed.
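From the monitored quantities (air mass flow rate, temperature rise and solar irradiance), the instantaneous thermal efficiency of an air collector follows the standard definition sketched below; the input values are nominal examples, not the study's measurements.

```python
# Standard instantaneous thermal efficiency of a solar air collector from
# the quantities monitored in the study. The numbers below are nominal
# examples, not the paper's measurements.

CP_AIR = 1005.0            # J/(kg K), specific heat of air

def collector_efficiency(m_dot, t_out, t_in, irradiance, area):
    """eta = m_dot * cp * (T_out - T_in) / (G * A)."""
    q_useful = m_dot * CP_AIR * (t_out - t_in)   # useful heat output, W
    return q_useful / (irradiance * area)        # fraction of incident solar

eta = collector_efficiency(m_dot=0.05, t_out=38.0, t_in=20.0,
                           irradiance=800.0, area=2.5)
print(f"efficiency = {eta:.2f}")   # ~0.45 for these nominal inputs
```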
Abstract:
BACKGROUND: While the discovery of new drugs is a complex, lengthy and costly process, identifying new uses for existing drugs is a cost-effective approach to therapeutic discovery. Connectivity mapping integrates gene expression profiling with advanced algorithms to connect genes, diseases and small-molecule compounds, and has been applied in a large number of studies to identify potential drugs, particularly to facilitate drug repurposing. Colorectal cancer (CRC) is a commonly diagnosed cancer with high mortality rates, presenting a worldwide health problem. With the advancement of high-throughput omics technologies, a number of large-scale gene expression profiling studies have been conducted on CRCs, providing multiple datasets in gene expression data repositories. In this work, we systematically apply gene expression connectivity mapping to multiple CRC datasets to identify candidate therapeutics for this disease.
RESULTS: We developed a robust method to compile a combined gene signature for colorectal cancer across multiple datasets. Connectivity mapping analysis with this signature of 148 genes identified 10 candidate compounds, including irinotecan and etoposide, which are chemotherapy drugs currently used to treat CRCs. These results indicate that we have discovered high-quality connections between the CRC disease state and the candidate compounds, and that the gene signature we created may be used as a potential therapeutic target in treating the disease. The method we propose is highly effective in generating a quality gene signature from multiple datasets; the publication of the combined CRC gene signature and the list of candidate compounds from this work will benefit both the cancer and systems biology research communities for further development and investigation.
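For readers unfamiliar with connectivity mapping, the sketch below shows a deliberately simplified connectivity score: genes are ranked by a compound's expression changes and the score contrasts where the up- and down-regulated signature genes fall in that ranking. Published methods use KS-like enrichment statistics; this mean-rank version only conveys the idea, and the gene names are illustrative.

```python
import numpy as np

# Simplified connectivity score in the spirit of the original Connectivity
# Map: genes are ranked by a compound's expression changes, and the score
# contrasts where the up- and down-regulated signature genes fall in the
# ranking. Published methods use KS-like enrichment statistics; this
# mean-rank version only illustrates the idea.

def connectivity_score(ranked_genes, up_sig, down_sig):
    rank = {g: i for i, g in enumerate(ranked_genes)}   # 0 = most up-regulated
    n = len(ranked_genes)
    up = np.mean([rank[g] for g in up_sig if g in rank]) / n
    down = np.mean([rank[g] for g in down_sig if g in rank]) / n
    return down - up    # positive: the compound mimics the disease signature

ranked = ["TP53", "MYC", "EGFR", "CDK1", "PTEN", "RB1"]   # illustrative only
print(connectivity_score(ranked, up_sig={"MYC", "CDK1"},
                         down_sig={"PTEN", "RB1"}))
```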
Abstract:
Field programmable gate array devices boast abundant resources with which custom accelerator components for signal, image and data processing may be realised; however, realising high-performance, low-cost accelerators currently demands manual register transfer level design. Software-programmable ’soft’ processors have been proposed as a way to reduce this design burden, but they are unable to support performance and cost comparable to custom circuits. This paper proposes a new soft processing approach for FPGA which promises to overcome this barrier. A high-performance, fine-grained streaming processor, known as a Streaming Accelerator Element (SAE), is proposed which realises accelerators as large-scale custom multicore networks. By adopting a streaming execution approach with advanced program control and memory addressing capabilities, typical program inefficiencies can be almost completely eliminated, enabling performance and cost which are unprecedented amongst software-programmable solutions. When used to realise accelerators for fast Fourier transform, motion estimation, matrix multiplication and Sobel edge detection, the proposed architecture is shown to enable real-time performance, with performance and cost comparable to hand-crafted custom circuit accelerators and up to two orders of magnitude beyond existing soft processors.
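To convey the streaming execution style the architecture targets, the sketch below runs Sobel edge detection (one of the benchmarked accelerators) over a row stream with only a three-row line buffer and no random access to the frame; it is a pure-Python illustration, not the SAE instruction set or its multicore network.

```python
import numpy as np

# Sketch of the streaming execution style the architecture targets: pixels
# arrive as a row stream and the Sobel stage keeps only a 3-row line
# buffer, never random-accessing the frame. A pure-Python illustration,
# not the SAE instruction set or its multicore network.

GX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
GY = GX.T

def sobel_stream(rows):
    buf = []
    for row in rows:                 # one row of pixels per "clock"
        buf.append(row)
        if len(buf) > 3:
            buf.pop(0)               # line buffer holds only 3 rows
        if len(buf) == 3:
            window = np.stack(buf)   # 3 x W working window
            width = window.shape[1]
            out = np.zeros(width - 2)
            for x in range(width - 2):
                patch = window[:, x : x + 3]
                out[x] = np.hypot((GX * patch).sum(), (GY * patch).sum())
            yield out                # one output row streams downstream

frame = np.random.rand(8, 8)
edges = np.vstack(list(sobel_stream(frame)))   # (6, 6) gradient magnitudes
```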