910 results for Run-Time
Abstract:
A rapid liquid chromatographic-tandem mass spectrometric (LC-MS/MS) multi-residue method for the simultaneous quantitation and identification of sixteen synthetic growth promoters and bisphenol A in bovine milk has been developed and validated. Sample preparation was straightforward, efficient and economically advantageous. Milk was extracted with acetonitrile followed by phase separation with NaCl. After centrifugation, the extract was purified by dispersive solid-phase extraction with C18 sorbent material. The compounds were analysed by reversed-phase LC-MS/MS using both positive and negative ionization, operated in multiple reaction monitoring (MRM) mode and acquiring two diagnostic product ions from each of the chosen precursor ions for unambiguous confirmation. Total chromatographic run time was less than 10 min for each sample. The method was validated at a level of 1 μg L⁻¹. A wide variety of deuterated internal standards were used to improve method performance. The accuracy and precision of the method were satisfactory for all analytes. The confirmatory quantitative LC-MS/MS method was validated according to Commission Decision 2002/657/EC. The decision limit (CCα) and the detection capability (CCβ) were found to be below the chosen validation level of 1 μg L⁻¹ for all compounds.
Abstract:
The most promising way to maintain reliable data transfer across the rapidly fluctuating channels used by next-generation multiple-input multiple-output communication schemes is to exploit run-time variable modulation and antenna configurations. This demands that the baseband signal-processing architectures employed in the communication terminals provide low cost and high performance with run-time reconfigurability. We present a softcore-processor-based solution to this issue and show, for the first time, that such programmable architectures can enable real-time data operation for cutting-edge standards such as 802.11n; furthermore, by exploiting deep processing pipelines and interleaved task execution, the cost and performance of these architectures are shown to be on a par with traditional dedicated-circuit-based solutions. We believe this to be the first programmable architecture to achieve this, and the combination of implementation efficiency and programmability makes this implementation style the most promising approach for hosting such dynamic architectures.
Abstract:
The potential use of negative electrospray ionisation mass spectrometry (ESI-MS) in the characterisation of the three polyacetylenes common in carrots (Daucus carota) has been assessed. The MS scans have demonstrated that the polyacetylenes undergo a modest degree of in-source decomposition in the negative ionisation mode, while the positive ionisation mode has shown predominantly sodiated ions and no [M+H]+ ions. Tandem mass spectrometric (MS/MS) studies have shown that the polyacetylenes follow two distinct fragmentation pathways: one that involves cleavage of the C3-C4 bond and the other cleavage of the C7-C8 bond. The cleavage of the C7-C8 bond generated product ions at m/z 105.0 for falcarinol, m/z 105.0/107.0 for falcarindiol and m/z 147.0/149.1 for falcarindiol-3-acetate. In addition to these product ions, the transitions m/z 243.2 → 187.1 (falcarinol), m/z 259.2 → 203.1 (falcarindiol) and m/z 301.2 → 255.2/203.1 (falcarindiol-3-acetate), mostly from the C3-C4 bond cleavage, can form the basis of multiple reaction monitoring (MRM) quantitative methods, which are poorly represented in the literature. The MS³ experimental data confirmed a less pronounced homolytic cleavage site at the C11-C12 bond in the falcarinol-type polyacetylenes. The optimised liquid chromatography (LC)/MS conditions achieved baseline chromatographic separation of the three polyacetylenes investigated within a 40 min total run time.
Abstract:
Motivation: We study a stochastic method for approximating the set of local minima in partial RNA folding landscapes associated with a bounded-distance neighbourhood of folding conformations. The conformations are limited to RNA secondary structures without pseudoknots. The method aims at exploring partial energy landscapes pL induced by folding simulations and their underlying neighbourhood relations. It combines an approximation of the number of local optima devised by Garnier and Kallel (2002) with a run-time estimation for identifying sets of local optima established by Reeves and Eremeev (2004).
Results: The method is tested on nine sequences of length between 50 nt and 400 nt, which allows us to compare the results with data generated by RNAsubopt and subsequent barrier tree calculations. On the nine sequences, the method captures on average 92% of local minima with settings designed for a target of 95%. The run-time of the heuristic can be estimated by O(n²·D·ν·ln ν), where n is the sequence length, ν is the number of local minima in the partial landscape pL under consideration and D is the maximum number of steepest descent steps in attraction basins associated with pL.
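As a rough illustration of the sampling scheme the abstract describes, the sketch below repeatedly descends from random start states and stops once a window of consecutive samples yields no new minimum. The `energy`, `neighbours` and stopping-window choices are hypothetical stand-ins, not the paper's RNA secondary-structure model or its Garnier-Kallel/Reeves-Eremeev estimators:

```python
import random

def steepest_descent(state, energy, neighbours):
    """Follow steepest descent to the local minimum of the basin containing `state`."""
    while True:
        candidates = neighbours(state)
        if not candidates:
            return state
        best = min(candidates, key=energy)
        if energy(best) >= energy(state):
            return state
        state = best

def sample_local_minima(random_state, energy, neighbours, window=200):
    """Descend from random starts until `window` consecutive samples rediscover
    already-known minima (a crude stand-in for the paper's stopping criteria)."""
    minima, misses = set(), 0
    while misses < window:
        m = steepest_descent(random_state(), energy, neighbours)
        if m in minima:
            misses += 1
        else:
            minima.add(m)
            misses = 0
    return minima

# Toy integer landscape standing in for RNA conformations: energy x % 10,
# neighbours x +/- 1; the ten multiples of 10 are the local minima.
found = sample_local_minima(
    random_state=lambda: random.randrange(100),
    energy=lambda x: x % 10,
    neighbours=lambda x: [y for y in (x - 1, x + 1) if 0 <= y < 100],
)
print(sorted(found))
```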
Abstract:
We propose a data-flow-based run-time system as an efficient tool for supporting execution of parallel code on heterogeneous architectures hosting both multicore CPUs and GPUs. We discuss how the proposed run-time system may be the target of both structured parallel applications developed using algorithmic skeletons/parallel design patterns and also more "domain-specific" programming models. Experimental results demonstrating the feasibility of the approach are presented.
Abstract:
For the first time, a simple and validated reversed-phase liquid chromatography (RP-LC) method with fluorescence detection has been developed for the simultaneous analysis of glutamate (Glu), γ-aminobutyric acid (GABA), glycine (Gly) and taurine (Tau) in Wistar and tremor rat brain synaptosomes. The samples were separated on a C18 analytical column with gradient elution of methanol and 0.1 mol L⁻¹ potassium acetate at a flow rate of 1 mL min⁻¹. Total run time was approximately 25 min. All calibration curves exhibited good linearity (r² > 0.999) within the test ranges. The reproducibility was estimated by intra- and inter-day assays, and RSD values were less than 2.48%. The recoveries were between 96.32 and 105.21%. The method was successfully applied to the quantification of amino acids in Wistar and tremor rat brain synaptosomes. Through this protocol, the levels of Glu in hippocampal and prefrontal cortical synaptosomes of tremor rats were found to be significantly higher than those of adult Wistar rats, whereas significantly decreased concentrations of GABA and Gly were observed in the hippocampal region of tremor rats, with no evident difference in the prefrontal cortex between experimental and control groups. In addition, our studies also showed a marked elevation of Tau in tremor rat hippocampal synaptosomes, although there was no pronounced difference in the prefrontal cortical region between Wistar and tremor rats.
Abstract:
Call control features (e.g., call-divert, voice-mail) are primitive options to which users can subscribe off-line to personalise their service. The configuration of a feature subscription involves choosing and sequencing features from a catalogue and is subject to constraints that prevent undesirable feature interactions at run-time. When the subscription requested by a user is inconsistent, one problem is to find an optimal relaxation; this is a generalisation of the feedback vertex set problem on directed graphs and is thus NP-hard. We present several constraint programming formulations of the problem, as well as formulations using partial weighted maximum Boolean satisfiability and mixed integer linear programming. We study all these formulations by experimentally comparing them on a variety of randomly generated instances of the feature subscription problem.
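To make the optimisation problem concrete: an optimal relaxation keeps a maximum-weight subset of the requested features whose precedence constraints still admit a consistent ordering, i.e. form an acyclic graph. The brute-force sketch below illustrates the problem statement only (hypothetical `features`/`precedences` encoding; the paper's CP, MaxSAT and MILP formulations are what make realistic instances tractable):

```python
from itertools import combinations

def is_acyclic(nodes, edges):
    """Kahn-style check that the precedence edges restricted to `nodes` form a DAG."""
    nodes = set(nodes)
    indeg = {n: 0 for n in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            indeg[v] += 1
    queue = [n for n in nodes if indeg[n] == 0]
    seen = 0
    while queue:
        n = queue.pop()
        seen += 1
        for u, v in edges:
            if u == n and v in nodes:
                indeg[v] -= 1
                if indeg[v] == 0:
                    queue.append(v)
    return seen == len(nodes)

def optimal_relaxation(features, weight, precedences):
    """Maximum-weight acyclic subset of `features` by exhaustive search
    (exponential; fine only for toy catalogues)."""
    best, best_w = frozenset(), 0
    for r in range(len(features), 0, -1):
        for subset in combinations(features, r):
            w = sum(weight[f] for f in subset)
            if w > best_w and is_acyclic(subset, precedences):
                best, best_w = frozenset(subset), w
    return best

# Two features whose required orderings contradict each other: drop the lighter one.
features = ["divert", "vmail", "screen"]
weight = {"divert": 2, "vmail": 1, "screen": 1}
precedences = [("divert", "vmail"), ("vmail", "divert")]
print(sorted(optimal_relaxation(features, weight, precedences)))  # ['divert', 'screen']
```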
Abstract:
In this paper, a new reconfigurable multi-standard architecture is introduced for integer-pixel motion estimation, and a standard-cell-based chip design study is presented. The architecture has been designed to cover most of the common block-based video compression standards, including MPEG-2, MPEG-4, H.263, H.264, AVS and WMV-9. It exhibits simple control, high throughput and relatively low hardware cost, and is highly competitive when compared with existing designs for specific video standards. It can also, through the use of control signals, be dynamically reconfigured at run-time to accommodate different system constraints, such as the trade-off between power dissipation and video quality. The computational rates achieved make the circuit suitable for high-end video processing applications. Silicon design studies indicate that circuits based on this approach incur only a relatively small penalty in terms of power dissipation and silicon area when compared with implementations for specific standards.
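For context, the kernel such architectures accelerate is the sum-of-absolute-differences (SAD) search over candidate displacements. A plain full-search sketch in software (frames as 2-D lists; the block size, search radius and test pattern are illustrative choices, not taken from the paper):

```python
import random

def sad(cur, ref, bx, by, dx, dy, n=16):
    """Sum of absolute differences between the n x n block at (bx, by) in the
    current frame and the block displaced by (dx, dy) in the reference frame."""
    return sum(abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
               for j in range(n) for i in range(n))

def full_search(cur, ref, bx, by, radius=8, n=16):
    """Exhaustive integer-pixel search: return the motion vector minimising SAD."""
    h, w = len(ref), len(ref[0])
    candidates = [(dx, dy)
                  for dy in range(-radius, radius + 1)
                  for dx in range(-radius, radius + 1)
                  if 0 <= bx + dx and bx + dx + n <= w
                  and 0 <= by + dy and by + dy + n <= h]
    return min(candidates, key=lambda d: sad(cur, ref, bx, by, d[0], d[1], n))

random.seed(0)
ref = [[random.randrange(256) for _ in range(32)] for _ in range(32)]
cur = [row[2:] + row[:2] for row in ref]   # the reference shifted left by 2 pixels
print(full_search(cur, ref, bx=8, by=8, radius=4, n=8))  # -> (2, 0)
```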
Abstract:
Data flow techniques have been around since the early '70s, when they were used in compilers for sequential languages. Shortly after their introduction they were also considered as a possible model for parallel computing, although the impact here was limited. Recently, however, data flow has been identified as a candidate for efficient implementation of various programming models on multi-core architectures. In most cases, however, the burden of determining data flow "macro" instructions is left to the programmer, while the compiler/run-time system manages only the efficient scheduling of these instructions. We discuss a structured parallel programming approach supporting automatic compilation of programs to macro data flow and we show experimental results demonstrating the feasibility of the approach and the efficiency of the resulting "object" code on different classes of state-of-the-art multi-core architectures. The experimental results use different base mechanisms to implement the macro data flow run-time support, from plain pthreads with condition variables to more modern and effective lock- and fence-free parallel frameworks. Experimental results comparing the efficiency of the proposed approach with that achieved using other, more classical, parallel frameworks are also presented.
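The firing rule at the heart of a macro data flow interpreter is simple: an instruction becomes executable as soon as all of its input tokens are present. A deliberately sequential sketch of that rule (hypothetical graph encoding; a real run-time support would dispatch fireable instructions to worker threads):

```python
def run_macro_dataflow(instructions, initial_tokens):
    """instructions: {name: (function, [input token names], output token name)}.
    Repeatedly fire every instruction whose input tokens are all available."""
    tokens = dict(initial_tokens)
    pending = dict(instructions)
    while pending:
        fireable = [name for name, (_, ins, _) in pending.items()
                    if all(i in tokens for i in ins)]
        if not fireable:
            raise RuntimeError("deadlock: no instruction is fireable")
        for name in fireable:
            fn, ins, out = pending.pop(name)
            tokens[out] = fn(*(tokens[i] for i in ins))
    return tokens

# Toy macro data flow graph computing (a + b) * (a - b).
result = run_macro_dataflow(
    {"add": (lambda x, y: x + y, ["a", "b"], "s"),
     "sub": (lambda x, y: x - y, ["a", "b"], "d"),
     "mul": (lambda x, y: x * y, ["s", "d"], "p")},
    {"a": 7, "b": 3},
)
print(result["p"])  # 40
```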
Abstract:
Multi-core and many-core platforms are becoming increasingly heterogeneous and asymmetric. This significantly increases the porting and tuning effort required for parallel codes, which in turn often leads to a growing gap between peak machine power and actual application performance. In this work, a first step toward the automated optimization of high-level skeleton-based parallel code is discussed. The paper presents an abstract annotation model for skeleton programs aimed at formally describing suitable mappings of parallel activities onto a high-level platform representation. The derived mapping and scheduling strategies are used to generate optimized run-time code.
Abstract:
Technical market indicators are tools used by technical analysts to understand trends in trading markets. Technical (market) indicators are often calculated in real-time, as trading progresses. This paper presents a mathematically founded framework for calculating technical indicators. Our framework consists of a domain-specific language for the unambiguous specification of technical indicators, and a run-time system based on Click, for computing the indicators. We argue that our solution enhances the ease of programming by aligning our domain-specific language with the mathematical description of technical indicators, and that it enables executing programs in kernel space for decreased latency, without exposing the system to users' programming errors.
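As an example of the kind of indicator such a run-time computes incrementally (not the paper's DSL or its Click-based implementation), here is a streaming exponential moving average, which updates in constant time per trade:

```python
class EMA:
    """Streaming exponential moving average: ema = alpha * price + (1 - alpha) * ema."""
    def __init__(self, period):
        self.alpha = 2.0 / (period + 1)   # conventional smoothing factor
        self.value = None

    def update(self, price):
        if self.value is None:
            self.value = price            # seed with the first observation
        else:
            self.value += self.alpha * (price - self.value)
        return self.value

ema = EMA(period=10)
for price in [101.2, 101.5, 100.9, 101.8]:
    print(round(ema.update(price), 4))
```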
Abstract:
In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS) and run-time system, should identify the critical tasks and ensure correct operation of such tasks by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation of the overall system. The run-time system identifies the critical/less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly-accurate/approximate operation by tuning their voltage/frequency. Units that execute less significant operations can operate at voltages lower than what is required for correct operation and consume less power, since such tasks, unlike the critical ones, do not always need to be exact. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture level techniques.
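A toy rendering of the placement policy sketched above, with task criticality supplied by (hypothetical) directives and cores reduced to a reliability flag and a relative energy cost; a real system would also fold in the SeA unit's error-rate feedback:

```python
from dataclasses import dataclass

@dataclass
class Core:
    name: str
    reliable: bool      # tuned (voltage/frequency) for always-correct operation
    energy_cost: float  # relative energy per unit of work

def schedule(tasks, cores):
    """Map critical tasks onto reliable cores and the rest onto the cheapest
    cores, round-robin within each pool. tasks: list of (name, is_critical)."""
    reliable = [c for c in cores if c.reliable]
    cheap = sorted((c for c in cores if not c.reliable),
                   key=lambda c: c.energy_cost) or reliable
    placement, counts = {}, {"hi": 0, "lo": 0}
    for name, critical in tasks:
        pool, key = (reliable, "hi") if critical else (cheap, "lo")
        placement[name] = pool[counts[key] % len(pool)].name
        counts[key] += 1
    return placement

cores = [Core("big0", True, 1.0), Core("little0", False, 0.4),
         Core("little1", False, 0.4)]
tasks = [("decode_header", True), ("enhance_pixels", False), ("log_stats", False)]
print(schedule(tasks, cores))  # critical task on big0, the rest on little cores
```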
Abstract:
Heterogeneous computing technologies, such as multi-core CPUs, GPUs and FPGAs, can provide significant performance improvements. However, developing applications for these technologies often results in coupling applications to specific devices, typically through the use of proprietary tools. This paper presents SHEPARD, a compile-time and run-time framework that decouples application development from the target platform and enables run-time allocation of tasks to heterogeneous computing devices. Through the use of special annotated functions, called managed tasks, SHEPARD approximates a task's performance on available devices and, coupled with an approximation of current device demand, decides which device can satisfy the task with the lowest overall execution time. Experiments using a task-parallel application, based on an in-memory database, demonstrate the opportunity for automatic run-time task allocation to achieve speed-up over a static allocation to a single specific device.
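The allocation decision described above reduces to an argmin over estimated completion times. A minimal sketch, assuming per-device runtime estimates and queued demand are available (hypothetical inputs; SHEPARD's actual interfaces and cost model are not given in the abstract):

```python
def pick_device(task, devices, est_runtime, queued_work):
    """Choose the device with the lowest estimated completion time for `task`:
    current queued demand on the device plus the task's estimated runtime there."""
    return min(devices,
               key=lambda d: queued_work[d] + est_runtime[(task, d)])

devices = ["cpu", "gpu", "fpga"]
est_runtime = {("scan", "cpu"): 8.0, ("scan", "gpu"): 2.0, ("scan", "fpga"): 3.0}
queued_work = {"cpu": 1.0, "gpu": 9.0, "fpga": 0.5}
print(pick_device("scan", devices, est_runtime, queued_work))  # -> "fpga"
```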
Abstract:
One of the outstanding issues in parallel computing is the selection of task granularity. This work proposes a solution to the task granularity problem by lowering the overhead of the task scheduler and as such supporting very fine-grain tasks. Using a combination of static (compile-time) scheduling and dynamic (run-time) scheduling, we aim to make scheduling decisions as fast as with static scheduling while retaining the dynamic load-balancing properties of fully dynamic scheduling. We present an example application and discuss the requirements on the compiler and run-time system needed to realize hybrid static/dynamic scheduling.
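One plausible reading of hybrid static/dynamic scheduling, sketched below: the compiler fixes a default queue per worker, so the common case needs no scheduling decision at run time, while an idle worker steals from the fullest queue for load balance. This is an illustrative guess at the mechanism, sequentially simulated; the paper's actual scheme may differ:

```python
from collections import deque

def hybrid_run(static_plan, workers):
    """static_plan: {worker: [task functions]} fixed at compile time.
    Each worker drains its own queue; when empty, it steals from the fullest."""
    queues = {w: deque(static_plan.get(w, [])) for w in workers}
    done = []
    while any(queues.values()):
        for w in workers:                                # round-robin simulation
            if queues[w]:
                done.append(queues[w].popleft()())       # fast path: static queue
            else:
                victim = max(queues, key=lambda v: len(queues[v]))
                if queues[victim]:
                    done.append(queues[victim].pop()())  # dynamic load balancing
    return done

plan = {"w0": [lambda: "t0", lambda: "t1", lambda: "t2"], "w1": [lambda: "t3"]}
print(hybrid_run(plan, ["w0", "w1"]))  # ['t0', 't3', 't1', 't2']
```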
Abstract:
Today there is a growing interest in the integration of health monitoring applications in portable devices, necessitating the development of methods that improve the energy efficiency of such systems. In this paper, we present a systematic approach that enables energy-quality trade-offs in spectral analysis systems for bio-signals, which are useful in monitoring various health conditions such as those associated with heart rate. To enable such trade-offs, the processed signals are expressed initially in a basis in which significant components that carry most of the relevant information can be easily distinguished from the parts that influence the output to a lesser extent. Such a classification allows the pruning of operations associated with the less significant signal components, leading to power savings with minor quality loss, since only less useful parts are pruned under the given requirements. To exploit the attributes of the modified spectral analysis system, thresholding rules are determined and adopted at design- and run-time, allowing the static or dynamic pruning of less useful operations based on the accuracy and energy requirements. The proposed algorithm is implemented on a typical sensor node simulator, and results show up to 82% energy savings when static pruning is combined with voltage and frequency scaling, compared with the conventional algorithm in which such trade-offs were not available. In addition, experiments with numerous cardiac samples from various patients show that these energy savings come with a 4.9% average accuracy loss, which does not affect the system's ability to detect sinus arrhythmia, which was used as a test case.
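The static-pruning idea can be emulated in a few lines: transform, keep only the most significant coefficients, and reconstruct. The sketch uses numpy's FFT as a generic stand-in for the paper's spectral analysis, and the keep-fraction threshold is an arbitrary illustrative parameter, not the paper's thresholding rule:

```python
import numpy as np

def pruned_spectrum(signal, keep_fraction=0.2):
    """Zero out all but the largest-magnitude spectral coefficients.
    In hardware, operations tied to the pruned coefficients would simply not
    execute (the source of the energy saving); here we only emulate the effect."""
    spectrum = np.fft.rfft(signal)
    k = max(1, int(keep_fraction * spectrum.size))
    keep = np.argsort(np.abs(spectrum))[-k:]   # indices of the k largest magnitudes
    pruned = np.zeros_like(spectrum)
    pruned[keep] = spectrum[keep]
    return pruned

# A clean tone plus mild noise, standing in for a quasi-periodic bio-signal.
t = np.linspace(0.0, 1.0, 256, endpoint=False)
sig = np.sin(2 * np.pi * 3 * t) + 0.05 * np.random.randn(t.size)
approx = np.fft.irfft(pruned_spectrum(sig), n=t.size)
print(np.max(np.abs(approx - sig)))  # error stays small relative to the signal
```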