19 results for Application specific instruction-set processor
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. Spatially nonpredictive cues preceded search arrays that included a color-defined target. For target-color singleton cues, behavioral spatial cueing effects were accompanied by cue-induced N2pc components, indicative of attentional capture. These effects were only minimally attenuated for nonsingleton target-color cues, underlining the dominance of top-down task set over salience in attentional capture. Nontarget-color singleton cues triggered no N2pc, but instead an anterior N2 component indicative of top-down inhibition. In Experiment 2, inverted behavioral cueing effects of these cues were accompanied by a delayed N2pc to targets at cued locations, suggesting that perceptually salient but task-irrelevant visual events trigger location-specific inhibition mechanisms that can delay subsequent target selection.
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically-based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, this is done over both land and ocean using night-time (infrared) imagery. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 87% and 48% for ocean and land, respectively, using the Bayesian technique, compared to 74% and 39%, respectively, for the threshold-based techniques associated with the validation dataset.
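The core of the technique is a per-pixel application of Bayes' theorem, combining the likelihood of the observation under clear and cloudy hypotheses with a prior probability. Below is a minimal sketch in Python, assuming Gaussian likelihoods centred on the NWP-simulated clear-sky value; the distributions, offsets and prior are illustrative assumptions, not the values used in the paper:

```python
import numpy as np
from scipy.stats import norm

def clear_probability(obs_bt, sim_bt, sigma_clear=1.0, sigma_cloudy=10.0,
                      prior_clear=0.7):
    """Per-pixel probability that a pixel is clear, via Bayes' theorem.

    obs_bt : observed brightness temperature (K)
    sim_bt : NWP-simulated clear-sky brightness temperature (K)
    The Gaussian likelihoods and numerical values are illustrative only.
    """
    like_clear = norm.pdf(obs_bt, loc=sim_bt, scale=sigma_clear)
    like_cloudy = norm.pdf(obs_bt, loc=sim_bt - 15.0, scale=sigma_cloudy)
    evidence = like_clear * prior_clear + like_cloudy * (1.0 - prior_clear)
    return like_clear * prior_clear / evidence

# Application-specific cloud mask: threshold the posterior probability.
obs = np.array([288.0, 272.0, 290.0])   # observed brightness temperatures
sim = np.array([289.0, 290.0, 289.5])   # NWP-simulated clear-sky values
cloud_mask = clear_probability(obs, sim) < 0.5   # True = flagged cloudy
```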
Abstract:
Numerical Weather Prediction (NWP) fields are used to assist the detection of cloud in satellite imagery. Simulated observations based on NWP are used within a framework based on Bayes' theorem to calculate a physically-based probability of each pixel within an imaged scene being clear or cloudy. Different thresholds can be set on the probabilities to create application-specific cloud masks. Here, the technique is shown to be suitable for daytime applications over land and sea, using visible and near-infrared imagery, in addition to thermal infrared. We use a validation dataset of difficult cloud detection targets for the Spinning Enhanced Visible and Infrared Imager (SEVIRI), achieving true skill scores of 89% and 73% for ocean and land, respectively, using the Bayesian technique, compared to 90% and 70%, respectively, for the threshold-based techniques associated with the validation dataset.
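Both studies report skill as a true skill score. A short sketch of the standard Hanssen–Kuipers definition (hit rate minus false-alarm rate) evaluated over a set of validation pixels; the variable names are illustrative:

```python
import numpy as np

def true_skill_score(predicted_cloudy, observed_cloudy):
    """Hanssen-Kuipers true skill score: hit rate minus false-alarm rate.

    Both inputs are boolean arrays over the validation pixels.
    """
    p = np.asarray(predicted_cloudy, dtype=bool)
    o = np.asarray(observed_cloudy, dtype=bool)
    hits = np.sum(p & o)
    misses = np.sum(~p & o)
    false_alarms = np.sum(p & ~o)
    correct_negatives = np.sum(~p & ~o)
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negatives)
    return hit_rate - false_alarm_rate
```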
Abstract:
Purpose: Vergence and accommodation studies often use adult participants with experience of vision science. Reports of infant and clinical responses are generally more variable and of lower gain, with the implication that differences lie in immaturity or sub-optimal clinical characteristics but expert/naïve differences are rarely considered or quantified. Methods: Sixteen undergraduates, naïve to vision science, were individually matched by age, visual acuity, refractive error, heterophoria, stereoacuity and near point of accommodation to second- and third-year orthoptics and optometry undergraduates (‘experts’). Accommodation and vergence responses were assessed to targets moving between 33 cm, 50 cm, 1 m and 2 m using a haploscopic device incorporating a PlusoptiX SO4 autorefractor. Disparity, blur and looming cues were separately available or minimised in all combinations. Instruction set was minimal. Results: In all cases, vergence and accommodation response slopes (gain) were steeper and closer to 1.0 in the expert group (p = 0.001), with the largest expert/naïve differences for both vergence and accommodation being for near targets (p = 0.012). For vergence, the differences between expert and naïve response slopes increased with increasingly open-loop targets (linear trend p = 0.025). Although we predicted that proximal cues would drive additional response in the experts, the proximity-only cue was the only condition that showed no statistical effect of experience. Conclusions: Expert observers provide more accurate responses to near target demand than closely matched naïve observers. We suggest that attention, practice, voluntary and proprioceptive effects may enhance responses in experienced participants when compared to a more typical general population. Differences between adult reports and the developmental and clinical literature may partially reflect expert/naïve effects, as well as developmental change. If developmental and clinical studies are to be compared to adult normative data, uninstructed naïve adult data should be used.
Abstract:
Grid workflow authoring tools are typically specific to particular workflow engines built into Grid middleware, or are application specific and are designed to interact with specific software implementations. g-Eclipse is a middleware independent Grid workbench that aims to provide a unified abstraction of the Grid and includes a Grid workflow builder to allow users to author and deploy workflows to the Grid. This paper describes the g-Eclipse Workflow Builder and its implementations for two Grid middlewares, gLite and GRIA, and a case study utilizing the Workflow Builder in a Grid user's scientific workflow deployment.
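For readers unfamiliar with Grid workflows, a workflow is essentially a directed acyclic graph of jobs connected by dependencies, which the builder lets users author and then deploy. The following sketch is a generic illustration of such a structure and a dependency-respecting submission order; it is not the g-Eclipse Workflow Builder's actual data model or API:

```python
from dataclasses import dataclass, field

@dataclass
class Job:
    """A single workflow node: an executable plus the jobs it depends on."""
    name: str
    executable: str
    depends_on: list["Job"] = field(default_factory=list)

def submission_order(jobs: list[Job]) -> list[Job]:
    """Topologically order jobs so every dependency is submitted first."""
    ordered, seen = [], set()

    def visit(job: Job) -> None:
        if job.name in seen:
            return
        seen.add(job.name)
        for dep in job.depends_on:
            visit(dep)
        ordered.append(job)

    for job in jobs:
        visit(job)
    return ordered

stage_in = Job("stage-in", "copy_inputs.sh")
simulate = Job("simulate", "run_model.sh", [stage_in])
stage_out = Job("stage-out", "collect_results.sh", [simulate])
print([j.name for j in submission_order([stage_out])])
```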
Abstract:
We used a battery of biomarkers in fish to study the effects of the extensive dredging in Göteborg harbor situated at the river Göta älv estuary, Sweden. Eelpout (Zoarces viviparus) were sampled along a gradient into Göteborg harbor, both before and during the dredging. Biomarker responses in the eelpout before the dredging already indicated that fish in Göteborg harbor are chronically affected by pollutants under normal conditions compared to those in a reference area. However, the results during the dredging activities clearly show that fish were even more affected by remobilized pollutants. Elevated ethoxyresorufin-O-deethylase activities and cytochrome P4501A levels indicated exposure to polycyclic aromatic hydrocarbons. Elevated metallothionein gene expression indicated an increase in metal exposure. An increase in general cell toxicity, measured as a decrease in lysosomal membrane stability, as well as effects on the immune system also could be observed in eelpout sampled during the dredging. The results also suggest that dredging activities in the Göta älv estuary can affect larger parts of the Swedish western coast than originally anticipated. The present study demonstrates that the application of a set of biomarkers is a useful approach in monitoring the impact of anthropogenic activities on aquatic environments.
Abstract:
Purpose – To describe some research done, as part of an EPSRC funded project, to assist engineers working together on collaborative tasks. Design/methodology/approach – Distributed finite state modelling and agent techniques are used successfully in a new hybrid self-organising decision making system applied to collaborative work support. For the particular application, analysis of the tasks involved has been performed and these tasks are modelled. The system then employs a novel generic agent model, where task and domain knowledge are isolated from the support system, which provides relevant information to the engineers. Findings – The method is applied in the despatch of transmission commands within the control room of The National Grid Company Plc (NGC) – tasks are completed significantly faster when the system is utilised. Research limitations/implications – The paper describes a generic approach and it would be interesting to investigate how well it works in other applications. Practical implications – Although only one application has been studied, the methodology could equally be applied to a general class of cooperative work environments. Originality/value – One key part of the work is the novel generic agent model that enables the task and domain knowledge, which are application specific, to be isolated from the support system, and hence allows the method to be applied in other domains.
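The key design idea reported is the separation of application-specific task and domain knowledge from a generic agent core, so the support system can be reused in other cooperative-work domains. A minimal Python sketch of that separation, with hypothetical class and method names rather than the system's actual implementation:

```python
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Application-specific task/domain knowledge, kept outside the agent core."""

    @abstractmethod
    def relevant_information(self, task_state: dict) -> list[str]:
        ...

class GenericAgent:
    """Generic support agent: domain specifics are injected, so the same
    agent code can be applied to other cooperative work environments."""

    def __init__(self, knowledge: DomainKnowledge):
        self.knowledge = knowledge

    def support(self, task_state: dict) -> list[str]:
        return self.knowledge.relevant_information(task_state)

class TransmissionDespatchKnowledge(DomainKnowledge):
    """Hypothetical plug-in for a control-room command-despatch task."""

    def relevant_information(self, task_state: dict) -> list[str]:
        return [f"Outstanding command: {c}" for c in task_state.get("commands", [])]

agent = GenericAgent(TransmissionDespatchKnowledge())
print(agent.support({"commands": ["switch circuit X", "isolate feeder Y"]}))
```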
Abstract:
We describe a high-level design method to synthesize multi-phase regular arrays. The method is based on deriving component designs using classical regular (or systolic) array synthesis techniques and composing these separately evolved component designs into a unified global design. Similarity transformations are applied to component designs in the composition stage in order to align dataflow between the phases of the computations. Three transformations are considered: rotation, reflection and translation. The technique is aimed at the design of hardware components for high-throughput embedded systems applications, and we demonstrate this by deriving a multi-phase regular array for the 2-D DCT algorithm, which is widely used in many video communications applications.
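The 2-D DCT illustrates the multi-phase structure: it can be computed as two composed 1-D DCT phases (a row transform followed by a column transform), with the composition stage aligning the dataflow between them. The sketch below shows this software analogue only, as a point of reference; the paper derives a regular-array hardware realisation, not this code:

```python
import numpy as np
from scipy.fft import dct

def dct_2d(block: np.ndarray) -> np.ndarray:
    """2-D DCT expressed as two composed 1-D phases: transform the rows,
    then transform the columns of the intermediate result."""
    rows = dct(block, type=2, norm='ortho', axis=1)   # phase 1: row transform
    return dct(rows, type=2, norm='ortho', axis=0)    # phase 2: column transform

block = np.arange(64, dtype=float).reshape(8, 8)      # an 8x8 image block
coeffs = dct_2d(block)
```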
Abstract:
Simulating spiking neural networks is of great interest to scientists wanting to model the functioning of the brain. However, large-scale models are expensive to simulate due to the number and interconnectedness of neurons in the brain. Furthermore, where such simulations are used in an embodied setting, the simulation must be real-time in order to be useful. In this paper we present NeMo, a platform for such simulations which achieves high performance through the use of highly parallel commodity hardware in the form of graphics processing units (GPUs). NeMo makes use of the Izhikevich neuron model which provides a range of realistic spiking dynamics while being computationally efficient. Our GPU kernel can deliver up to 400 million spikes per second. This corresponds to a real-time simulation of around 40 000 neurons under biologically plausible conditions with 1000 synapses per neuron and a mean firing rate of 10 Hz.
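The Izhikevich model referred to here reduces each neuron to two coupled equations with a reset rule, which is why it is cheap enough to simulate tens of thousands of neurons in real time. A minimal single-neuron forward-Euler sketch using the standard published regular-spiking parameters; this is an illustration of the model only, not NeMo's GPU kernel:

```python
import numpy as np

def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=1.0):
    """One forward-Euler step of the Izhikevich neuron model.

    v : membrane potential (mV), u : recovery variable, I : input current.
    Parameters a, b, c, d are the standard regular-spiking defaults.
    """
    v = v + dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u = u + dt * a * (b * v - u)
    fired = v >= 30.0
    if fired:                      # spike: reset v and bump the recovery variable
        v, u = c, u + d
    return v, u, fired

# Simulate 1000 ms of a single neuron driven by a noisy input current.
rng = np.random.default_rng(1)
v, u, spikes = -65.0, -13.0, []
for t in range(1000):
    v, u, fired = izhikevich_step(v, u, I=5.0 + rng.normal())
    if fired:
        spikes.append(t)
print(f"{len(spikes)} spikes in 1 s")
```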
Abstract:
The complexity of current and emerging high performance architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven performance modelling approach is outlined that is appropriate for modern multicore architectures. The approach is demonstrated by constructing a model of a simple shallow water code on a Cray XE6 system, from application-specific benchmarks that illustrate precisely how architectural characteristics impact performance. The model is found to recreate observed scaling behaviour up to 16K cores, and is used to predict optimal rank-core affinity strategies, exemplifying the type of problem such a model can be used for.
Abstract:
Purpose The relative efficiency of different eye exercise regimes is unclear, and in particular the influences of practice, placebo and the amount of effort required are rarely considered. This study measured conventional clinical measures after different regimes in typical young adults. Methods 156 asymptomatic young adults were directed to carry out eye exercises 3 times daily for two weeks. Exercises were directed at improving blur responses (accommodation), disparity responses (convergence), both in a naturalistic relationship, convergence in excess of accommodation, accommodation in excess of convergence, and a placebo regime. They were compared to two control groups, neither of which was given exercises, but the second of which was asked to make maximum effort during the second testing. Results Instruction set and participant effort were more effective than many exercises. Convergence exercises independent of accommodation were the most effective treatment, followed by accommodation exercises, and both regimes resulted in changes in both vergence and accommodation test responses. Exercises targeting convergence and accommodation working together were less effective than those where they were separated. Accommodation measures were prone to large instruction/effort effects and monocular accommodation facility was subject to large practice effects. Conclusions Separating convergence and accommodation exercises seemed more effective than exercising both systems concurrently, which suggests that stimulation of accommodation and convergence may act in an additive fashion to aid responses. Instruction/effort effects are large and should be carefully controlled if claims for the efficacy of any exercise regime are to be made.
Abstract:
Much has been written about where the boundaries of the firm are drawn, but little about what occurs at the boundaries themselves. When a firm subcontracts, does it inform its suppliers fully of what it requires, or is it willing to accept what they have available? In practice firms often engage in a dialogue, or conversation, with their suppliers, in which at first they set out their general requirements, and only when the supplier reports back on how these can be met are their more specific requirements set out. This paper models such conversations as a rational response to communication costs. The model is used to examine the impact of new information technology, such as CAD/CAM, on the conduct of subcontracting. It can also be used to examine its impact on the marketing activities of firms. The technique of analysis, which is based on the economic theory of teams, has more general applications too. It can be used to model all the forms of dialogue involved in the processes of coordination both within and between firms.
A benchmark-driven modelling approach for evaluating deployment choices on a multi-core architecture
Abstract:
The complexity of current and emerging architectures provides users with options about how best to use the available resources, but makes predicting performance challenging. In this work a benchmark-driven model is developed for a simple shallow water code on a Cray XE6 system, to explore how deployment choices such as domain decomposition and core affinity affect performance. The resource sharing present in modern multi-core architectures adds various levels of heterogeneity to the system. Shared resources often include cache, memory, network controllers and in some cases floating point units (as in the AMD Bulldozer), which means that access time depends on the mapping of application tasks and on a core's location within the system. Heterogeneity further increases with the use of hardware accelerators such as GPUs and the Intel Xeon Phi, where many specialist cores are attached to general-purpose cores. This trend for shared resources and non-uniform cores is expected to continue into the exascale era. The complexity of these systems means that various runtime scenarios are possible, and it has been found that under-populating nodes, altering the domain decomposition and non-standard task-to-core mappings can dramatically alter performance. Finding this out, however, is often a process of trial and error. To better inform this process, a performance model was developed for a simple regular grid-based kernel code, shallow. The code comprises two distinct types of work, loop-based array updates and nearest-neighbour halo-exchanges. Separate performance models were developed for each part, both based on a similar methodology. Application-specific benchmarks were run to measure performance for different problem sizes under different execution scenarios. These results were then fed into a performance model that derives resource usage for a given deployment scenario, with interpolation between results as necessary.
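A schematic of how such a model can be assembled: per-step runtime is treated as compute time plus halo-exchange time, each interpolated from application-specific benchmark measurements taken under the deployment scenario of interest. The function names, the additive compute-plus-communication split and the numbers below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def predict_runtime(local_cells, halo_cells, compute_bench, halo_bench):
    """Predicted per-step runtime for one deployment scenario: loop-based
    array-update time plus halo-exchange time, each interpolated from
    benchmark measurements (sizes, times) taken under that scenario."""
    compute_time = np.interp(local_cells, *compute_bench)
    halo_time = np.interp(halo_cells, *halo_bench)
    return compute_time + halo_time

# Hypothetical benchmark data: times (s) measured at a few problem sizes.
compute_bench = (np.array([1e4, 1e5, 1e6]), np.array([0.002, 0.021, 0.22]))
halo_bench = (np.array([400, 1300, 4000]), np.array([1.0e-4, 2.5e-4, 7.0e-4]))

print(predict_runtime(250_000, 2_000, compute_bench, halo_bench))
```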
Abstract:
The skill of numerical Lagrangian drifter trajectories in three numerical models is assessed by comparing these numerically obtained paths to the trajectories of drifting buoys in the real ocean. The skill assessment is performed using the two-sample Kolmogorov–Smirnov statistical test. To demonstrate the assessment procedure, it is applied to three different models of the Agulhas region. The test can either be performed using crossing positions of one-dimensional sections in order to test model performance in specific locations, or using the total two-dimensional data set of trajectories. The test yields four quantities: a binary decision of model skill, a confidence level which can be used as a measure of goodness-of-fit of the model, a test statistic which can be used to determine the sensitivity of the confidence level, and cumulative distribution functions that aid in the qualitative analysis. The ordering of models by their confidence levels is the same as the ordering based on the qualitative analysis, which suggests that the method is suited for model validation. Only one of the three models, a 1/10° two-way nested regional ocean model, might have skill in the Agulhas region. The other two models, a 1/2° global model and a 1/8° assimilative model, might have skill only on some sections in the region.
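The two-sample Kolmogorov–Smirnov test compares the empirical distributions of modelled and observed quantities, such as crossing positions on a one-dimensional section. A minimal sketch using synthetic data; the section, sample sizes and significance level are illustrative, not those of the study:

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic crossing latitudes of drifters on a hypothetical section.
rng = np.random.default_rng(0)
observed_crossings = rng.normal(loc=-34.0, scale=1.5, size=120)   # real buoys
modelled_crossings = rng.normal(loc=-34.5, scale=1.8, size=200)   # numerical drifters

statistic, p_value = ks_2samp(observed_crossings, modelled_crossings)
# Binary skill decision: fail to reject the null hypothesis that both samples
# come from the same distribution at the chosen significance level.
has_skill = p_value > 0.05
print(statistic, p_value, has_skill)
```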
Abstract:
Establishing the mechanisms by which microbes interact with their environment, including eukaryotic hosts, is a major challenge that is essential for the economic utilisation of microbes and their products. Techniques for determining global gene expression profiles of microbes, such as microarray analyses, are often hampered by methodological restraints, particularly the recovery of bacterial transcripts (RNA) from complex mixtures and rapid degradation of RNA. A pioneering technology that avoids this problem is In Vivo Expression Technology (IVET). IVET is a 'promoter-trapping' methodology that can be used to capture nearly all bacterial promoters (genes) upregulated during a microbe-environment interaction. IVET is especially useful because there is virtually no limit to the type of environment used (examples to date include soil, oomycete, a host plant or animal) to select for active microbial promoters. Furthermore, IVET provides a powerful method to identify genes that are often overlooked during genomic annotation, and has proven to be a flexible technology that can provide even more information than identification of gene expression profiles. A derivative of IVET, termed resolvase-IVET (RIVET), can be used to provide spatio-temporal information about environment-specific gene expression. More recently, niche-specific genes captured during an IVET screen have been exploited to identify the regulatory mechanisms controlling their expression. Overall, IVET and its various spin-offs have proven to be a valuable and robust set of tools for analysing microbial gene expression in complex environments and providing new targets for biotechnological development.