56 results for Run-Time Code Generation, Programming Languages, Object-Oriented Programming
Abstract:
In this paper, a new reconfigurable multi-standard architecture is introduced for integer-pixel motion estimation, and a standard-cell based chip design study is presented. The architecture has been designed to cover most of the common block-based video compression standards, including MPEG-2, MPEG-4, H.263, H.264, AVS and WMV-9. It exhibits simpler control, high throughput and relatively low hardware cost, and is highly competitive when compared with existing designs for specific video standards. It can also, through the use of control signals, be dynamically reconfigured at run-time to accommodate different system constraints, such as the trade-off between power dissipation and video quality. The computational rates achieved make the circuit suitable for high-end video processing applications. Silicon design studies indicate that circuits based on this approach incur only a relatively small penalty in terms of power dissipation and silicon area when compared with implementations for specific standards.
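The core operation such an architecture accelerates is block matching: for each block of the current frame, search a window in the reference frame for the displacement that minimises a cost such as the sum of absolute differences (SAD). A rough software sketch of that search follows; the block size, search range and frame contents are illustrative assumptions, not details from the paper:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_motion_vector(ref, cur, bx, by, bsize, search):
    """Exhaustively search a +/-search window in the reference frame `ref`
    for the displacement minimising SAD against the current block of `cur`
    at (bx, by). Returns ((dx, dy), cost)."""
    cur_block = [row[bx:bx + bsize] for row in cur[by:by + bsize]]
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = by + dy, bx + dx
            # Skip candidate blocks that fall outside the reference frame.
            if ry < 0 or rx < 0 or ry + bsize > len(ref) or rx + bsize > len(ref[0]):
                continue
            cand = [row[rx:rx + bsize] for row in ref[ry:ry + bsize]]
            cost = sad(cur_block, cand)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost
```

Dedicated hardware parallelises the SAD inner loop across many candidate displacements at once; the sketch only illustrates the arithmetic being parallelised.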
Abstract:
Concern for NGO accountability has intensified in recent years, following the growth in the size of NGOs and their power to influence global politics and curb the excesses of globalization. Questions have been raised about whether the sector embraces the same standards of accountability that it demands from government and business. The objective of this paper is to examine one aspect of NGO accountability: its discharge through annual reporting. Using Habermas’ (1984; 1987) theory of communicative action, and specifically its validity claims, the research investigates whether NGOs use their annual reporting process to account to the host societies in which they operate or to steer stakeholder actions toward their own self-interests. The results of the study indicate that efforts by organizations to account are characterized by communicative action through the provision of truthful disclosures, generally appropriate to the discharge of accountability and presented in a manner intended to improve their understandability. At the same time, however, some organizations exhibit strategically oriented behaviours in which the disclosure content is guided by the opportunity to present the organization in a particular light, and a lack of rhetorical authenticity is apparent. The latter findings cast doubt on the ethical inspiration of NGOs and the values they demand from business communities, and questions arise as to why such practices exist and what lessons can be learnt from them.
Abstract:
Heterogeneous computing technologies, such as multi-core CPUs, GPUs and FPGAs, can provide significant performance improvements. However, developing applications for these technologies often results in coupling applications to specific devices, typically through the use of proprietary tools. This paper presents SHEPARD, a compile-time and run-time framework that decouples application development from the target platform and enables run-time allocation of tasks to heterogeneous computing devices. Through the use of special annotated functions, called managed tasks, SHEPARD approximates a task's performance on the available devices and, coupled with an approximation of current device demand, decides which device can satisfy the task with the lowest overall execution time. Experiments using a task-parallel application, based on an in-memory database, demonstrate the opportunity for automatic run-time task allocation to achieve speed-up over a static allocation to a single specific device. © 2014 IEEE.
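The allocation rule described — estimate a task's execution time per device, add the current demand on each device, and pick the minimum — can be sketched as follows. The cost and load figures are hypothetical illustrations, not SHEPARD's actual API:

```python
def pick_device(task_cost, device_load):
    """Pick the device whose queued demand plus the task's estimated
    execution time on that device is lowest (hypothetical SHEPARD-like rule).

    task_cost:   {device: estimated ms for this task on that device}
    device_load: {device: ms of work already queued on that device}
    """
    return min(task_cost, key=lambda dev: device_load[dev] + task_cost[dev])
```

Note that the rule can pick a device that is slower in isolation when the nominally fastest device is already heavily loaded, which is exactly the behaviour the paper exploits.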
Abstract:
One of the outstanding issues in parallel computing is the selection of task granularity. This work proposes a solution to the task granularity problem by lowering the overhead of the task scheduler and as such supporting very fine-grain tasks. Using a combination of static (compile-time) scheduling and dynamic (run-time) scheduling, we aim to make scheduling decisions as fast as with static scheduling while retaining the dynamic load-balancing properties of fully dynamic scheduling. We present an example application and discuss the requirements on the compiler and runtime system to realize hybrid static/dynamic scheduling.
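One way to picture hybrid static/dynamic scheduling is a simulation in which each worker first drains a task queue fixed at compile time (cheap, no scheduling decisions at run time) and, only when idle, steals work from the most loaded peer (dynamic load balancing). This is a hypothetical illustration, not the paper's scheduler:

```python
from collections import deque

def hybrid_run(static_plan, costs):
    """Simulate workers that drain their statically assigned queues and
    steal from the most loaded peer when idle. Returns per-worker finish
    times. `static_plan` maps worker -> list of task names; `costs` maps
    task name -> duration."""
    queues = {w: deque(tasks) for w, tasks in static_plan.items()}
    clock = {w: 0 for w in static_plan}
    while any(queues.values()):
        # Advance the worker that becomes free earliest.
        w = min(clock, key=clock.get)
        if queues[w]:
            task = queues[w].popleft()          # static assignment: no decision cost
        else:
            victim = max(queues, key=lambda v: len(queues[v]))
            if not queues[victim]:
                break                           # nothing left to steal
            task = queues[victim].pop()         # dynamic rebalancing on idleness
        clock[w] += costs[task]
    return clock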
Abstract:
Identifying the responsibilities of classes during the object-oriented software design phase is a crucial task. This paper proposes an approach for producing high-quality and robust behavioural diagrams (e.g. sequence diagrams) through Class Responsibility Assignment (CRA). GRASP, the General Responsibility Assignment Software Pattern (or Principle), was used to direct the CRA process when deriving behavioural diagrams. A set of tools to support CRA was developed to provide designers and developers with a cognitive toolkit that can be used when analysing and designing object-oriented software. The tool developed is called Use Case Specification to Sequence Diagrams (UC2SD). UC2SD uses a new approach for developing Unified Modelling Language (UML) software designs from natural language, making use of a meta-domain oriented ontology, well-established software design principles and established Natural Language Processing (NLP) tools. UC2SD generates well-formed UML sequence diagrams as output.
Abstract:
Today there is a growing interest in the integration of health monitoring applications in portable devices, necessitating the development of methods that improve the energy efficiency of such systems. In this paper, we present a systematic approach that enables energy-quality trade-offs in spectral analysis systems for bio-signals, which are useful in monitoring various health conditions such as those associated with the heart rate. To enable such trade-offs, the processed signals are expressed initially in a basis in which significant components that carry most of the relevant information can be easily distinguished from the parts that influence the output to a lesser extent. Such a classification allows the pruning of operations associated with the less significant signal components, leading to power savings with minor quality loss since only less useful parts are pruned under the given requirements. To exploit the attributes of the modified spectral analysis system, thresholding rules are determined and adopted at design- and run-time, allowing the static or dynamic pruning of less useful operations based on the accuracy and energy requirements. The proposed algorithm is implemented on a typical sensor node simulator, and results show up to 82% energy savings when static pruning is combined with voltage and frequency scaling, compared to the conventional algorithm in which such trade-offs were not available. In addition, experiments with numerous cardiac samples from various patients show that such energy savings come with a 4.9% average accuracy loss, which does not affect the system's ability to detect sinus arrhythmia, which was used as a test case.
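The pruning idea can be illustrated with a direct DFT in which only the largest-magnitude coefficients are retained and the rest are zeroed, mimicking the skipping of operations tied to less significant spectral components. The transform choice, ranking rule and keep fraction below are illustrative assumptions, not the paper's exact thresholding scheme:

```python
import cmath

def pruned_spectrum(signal, keep_fraction):
    """Compute a direct DFT, then zero (prune) all but the largest-magnitude
    coefficients. In a real system the pruned coefficients would simply
    never be computed, which is where the energy savings come from."""
    n = len(signal)
    spec = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]
    keep = max(1, int(n * keep_fraction))
    ranked = sorted(range(n), key=lambda k: -abs(spec[k]))
    significant = set(ranked[:keep])
    return [spec[k] if k in significant else 0j for k in range(n)]
```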
Abstract:
To intercept a moving object, one needs to be in the right place at the right time. In order to do this, it is necessary to pick up and use perceptual information that specifies the time to arrival of an object at an interception point. In the present study, we examined the ability to intercept a laterally moving virtual sound object by controlling the displacement of a sliding handle and tested whether and how the interaural time difference (ITD) could be the main source of perceptual information for successfully intercepting the virtual object. The results revealed that in order to accomplish the task, one might need to vary the duration of the movement, control the hand velocity and time to reach the peak velocity (speed coupling), while the adjustment of movement initiation did not facilitate performance. Furthermore, the overall performance was more successful when subjects employed a time-to-contact (tau) coupling strategy. This result shows that prospective information is available in sound for guiding goal-directed actions.
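The tau variable referred to here is, to first order, the remaining gap to the interception point divided by the rate at which it closes; a minimal sketch of that quantity, illustrative only and not the study's full coupling model:

```python
def time_to_contact(distance, closing_speed):
    """First-order time-to-contact (tau): remaining gap divided by its
    closure rate. A non-positive closing speed means the gap never closes."""
    if closing_speed <= 0:
        return float("inf")
    return distance / closing_speed
```

Tau coupling, as used in the study, means regulating the hand's movement so that its own time-to-contact stays in a constant ratio to the sound object's time-to-contact.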
Abstract:
A single-step lateral flow immunoassay (LFIA) was developed and validated for the rapid screening of paralytic shellfish toxins (PSTs) from a variety of shellfish species, at concentrations relevant to regulatory limits of 800 μg STX-diHCl equivalents/kg shellfish meat. A simple aqueous extraction protocol was performed within several minutes from sample homogenate. The qualitative result was generated after a 5 min run time using a portable reader which removed subjectivity from data interpretation. The test was designed to generate noncompliant results with samples containing approximately 800 μg of STX-diHCl/kg. The cross-reactivities in relation to STX, expressed as mean ± SD, were as follows: NEO: 128.9% ± 29%; GTX1&4: 5.7% ± 1.5%; GTX2&3: 23.4% ± 10.4%; dcSTX: 55.6% ± 10.9%; dcNEO: 28.0% ± 8.9%; dcGTX2&3: 8.3% ± 2.7%; C1&C2: 3.1% ± 1.2%; GTX5: 23.3% ± 14.4% (n = 5 LFIA lots). There were no indications of matrix effects from the different samples evaluated (mussels, scallops, oysters, clams, cockles) nor interference from other shellfish toxins (domoic acid, okadaic acid group). Naturally contaminated sample evaluations showed no false negative results were generated from a variety of different samples and profiles (n = 23), in comparison to reference methods (MBA method 959.08, LC-FD method 2005.06). External laboratory evaluations of naturally contaminated samples (n = 39) indicated good correlation with reference methods (MBA, LC-FD). This is the first LFIA which has been shown, through rigorous validation, to have the ability to detect most major PSTs in a reliable manner and will be a huge benefit to both industry and regulators, who need to perform rapid and reliable testing to ensure shellfish are safe to eat.
Abstract:
Flow processing is a fundamental element of stateful traffic classification and has been recognized as an essential factor for delivering today’s application-aware network operations and security services. The basic function within a flow processing engine is to search and maintain a flow table: create new flow entries if no entry matches, and associate each entry with flow states and actions for future queries. Network state information on a per-flow basis must be managed in an efficient way to enable Ethernet frame transmissions at 40 Gbit/s (Gbps) and, in the near future, 100 Gbps. This paper presents a hardware solution for flow state management that implements large-scale flow tables on popular computer memories using DDR3 SDRAMs. Working with a dedicated flow lookup table at over 90 million lookups per second, the proposed system is able to manage 512-bit state information at run time.
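The table operations described — look up a flow key, create an entry on a miss, and keep per-flow state for later queries — can be sketched in software as follows. The key layout and state fields are assumptions for illustration; the paper's design is a DDR3-based hardware engine, not a Python dictionary:

```python
def flow_key(src_ip, dst_ip, src_port, dst_port, proto):
    """Canonical 5-tuple key identifying a flow (illustrative layout)."""
    return (src_ip, dst_ip, src_port, dst_port, proto)

class FlowTable:
    """Software analogue of a flow-state engine: look up a flow, create an
    entry on a miss, and maintain per-flow state for future queries."""

    def __init__(self):
        self._table = {}

    def lookup_or_create(self, key, initial_state=None):
        """Return (entry, created) where `created` is True on a table miss."""
        entry = self._table.get(key)
        created = entry is None
        if created:
            entry = {"state": initial_state, "packets": 0}
            self._table[key] = entry
        entry["packets"] += 1   # per-flow statistics updated on every hit
        return entry, created
```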
Abstract:
Power capping is a fundamental method for reducing the energy consumption of a wide range of modern computing environments, ranging from mobile embedded systems to datacentres. Unfortunately, maximising performance and system efficiency under static power caps remains challenging, while maximising performance under dynamic power caps has been largely unexplored. We present an adaptive power capping method that reduces the power consumption and maximises the performance of heterogeneous SoCs for mobile and server platforms. Our technique combines power capping with coordinated DVFS, data partitioning and core allocation on a heterogeneous SoC with ARM processors and FPGA resources. We design our framework as a run-time system based on OpenMP and OpenCL to utilise the heterogeneous resources. We evaluate it through five data-parallel benchmarks on a Xilinx SoC that allows full voltage and frequency control. Our experiments show a significant performance boost of 30% under dynamic power caps with concurrent execution on ARM and FPGA, compared with a naive approach that uses the resources separately.
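The run-time decision under a power cap can be pictured as choosing, among known operating points (a DVFS setting plus a data partitioning and core allocation), the feasible one with the highest throughput. The operating points below are invented for illustration, not measurements from the paper:

```python
def best_config(configs, power_cap):
    """Among operating points with known power draw and throughput, return
    the highest-throughput one that respects the current power cap, or
    None if the cap rules out every point."""
    feasible = [c for c in configs if c["power"] <= power_cap]
    if not feasible:
        return None
    return max(feasible, key=lambda c: c["throughput"])
```

When the cap moves at run time, the same selection is simply re-run against the new cap, which is the essence of adapting to dynamic power caps.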