18 results for Implementation cost
in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast
Abstract:
Realising memory-intensive applications such as image and video processing on FPGA requires the creation of complex, multi-level memory hierarchies to achieve real-time performance; however, commercial High Level Synthesis tools are unable to automatically derive such structures and hence cannot meet the demanding bandwidth and capacity constraints of these applications. Current approaches to this problem derive either single-level memory structures or very deep, highly inefficient hierarchies, leading in either case to high implementation cost, low performance, or both. This paper presents an enhancement to an existing MC-HLS synthesis approach which solves this problem: it exploits and eliminates data duplication at multiple levels of the generated hierarchy, reducing the number of levels and ultimately yielding higher performance, lower cost implementations. When applied to the synthesis of C-based Motion Estimation, Matrix Multiplication and Sobel Edge Detection applications, this enables reductions in Block RAM and Look Up Table (LUT) cost of up to 25%, whilst simultaneously increasing throughput.
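The duplicate-elimination idea can be shown with a toy model (this is an illustration only, not the MC-HLS algorithm, whose analysis operates on the synthesised buffer hardware): if each hierarchy level is modelled as the set of array elements it buffers, a level that merely duplicates its neighbour adds capacity and latency without adding reuse, and can be collapsed away.

```python
def collapse_duplicate_levels(levels):
    """Drop hierarchy levels whose buffered element set duplicates the
    previous (closer-to-compute) level. Each level is a list of the
    array elements it buffers -- a deliberately simplified model."""
    kept = [levels[0]]
    for level in levels[1:]:
        if set(level) != set(kept[-1]):
            kept.append(level)
    return kept

# Three-level hierarchy in which level 1 duplicates level 0.
hierarchy = [["a0", "a1"], ["a0", "a1"], ["a1", "a2", "a3"]]
print(collapse_duplicate_levels(hierarchy))  # [['a0', 'a1'], ['a1', 'a2', 'a3']]
```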
Abstract:
Caches hide the growing latency of accesses to the main memory from the processor by storing the most recently used data on-chip. To limit the search time through the caches, they are organized in a direct mapped or set-associative way. Such an organization introduces many conflict misses that hamper performance. This paper studies randomizing set index functions, a technique to place the data in the cache in such a way that conflict misses are avoided. The performance of such a randomized cache strongly depends on the randomization function. This paper discusses a methodology to generate randomization functions that perform well over a broad range of benchmarks. The methodology uses profiling information to predict the conflict miss rate of randomization functions. Then, using this information, a search algorithm finds the best randomization function. Due to implementation issues, it is preferable to use a randomization function that is extremely simple and can be evaluated in little time. For these reasons, we use randomization functions where each randomized address bit is computed as the XOR of a subset of the original address bits. These functions are chosen such that they operate on as few address bits as possible and have few inputs to each XOR. This paper shows that to index a 2^m-set cache, it suffices to randomize m+2 or m+3 address bits and to limit the number of inputs to each XOR to 2 bits to obtain the full potential of randomization. Furthermore, it is shown that the randomization function that we generate for one set of benchmarks also works well for an entirely different set of benchmarks. Using the described methodology, it is possible to reduce the implementation cost of randomization functions with only an insignificant loss in conflict reduction.
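Such XOR-based index functions are simple enough to sketch directly in software. The bit subsets below are illustrative assumptions, not the functions the paper's search algorithm generates:

```python
def xor_index(address: int, bit_subsets) -> int:
    """Randomized set index: bit i of the index is the XOR of the
    address bits listed in bit_subsets[i]."""
    index = 0
    for i, subset in enumerate(bit_subsets):
        bit = 0
        for b in subset:
            bit ^= (address >> b) & 1   # XOR the selected address bits
        index |= bit << i
    return index

# A 2^4-set cache (m = 4): each index bit XORs at most 2 of the
# m + 2 = 6 low-order line-address bits, as the paper recommends.
subsets = [[0, 4], [1, 5], [2, 4], [3, 5]]
print(xor_index(0b101101, subsets))  # 7
```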
Abstract:
The paper describes the design and implementation of a novel low-cost virtual rugby decision-making interactive for use in a visitor centre. Original laboratory-based experimental work on decision making in rugby, using a virtual reality headset [1], is adapted for use in a public visitor centre, with consideration given to usability, costs, practicality, and health and safety. Movement of professional rugby players was captured and animated within a virtually recreated stadium. Users then interact with these virtual representations via a low-cost sensor (Microsoft Kinect) to attempt to block them. Retaining the principles of perception and action, egocentric viewpoint, immersion, sense of presence, representative design and game design, the system delivers an engaging and effective interactive that illustrates the underlying scientific principles of deceptive movement. User testing highlighted the need for usability, system robustness, fair and accurate scoring, an appropriate level of difficulty and enjoyment.
Abstract:
The design and implementation of a programmable cyclic redundancy check (CRC) computation circuit architecture, suitable for deployment in network-related systems-on-chip (SoCs), is presented. The architecture has been designed to be field-reprogrammable so that it is fully flexible in terms of the polynomial deployed and the input port width. The circuit includes an embedded configuration controller with a low reconfiguration time and hardware cost. The circuit has been synthesised and mapped to 130-nm UMC standard cell [application-specific integrated circuit (ASIC)] technology and is capable of supporting line speeds of 5 Gb/s. © 2006 IEEE.
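The programmable-polynomial idea can be sketched in software as a bit-serial CRC in which both the generator polynomial and the register width are runtime parameters, mirroring the reprogrammability of the circuit (the actual hardware processes a whole input word per cycle; parameter names here are illustrative):

```python
def crc(data: bytes, poly: int, width: int, init: int = 0) -> int:
    """Bit-serial CRC with a runtime-programmable generator polynomial.
    `poly` omits the leading x^width term; MSB-first, no reflection."""
    mask = (1 << width) - 1
    reg = init & mask
    for byte in data:
        for i in range(7, -1, -1):  # consume input bits MSB first
            feedback = ((reg >> (width - 1)) ^ (byte >> i)) & 1
            reg = (reg << 1) & mask
            if feedback:
                reg ^= poly
    return reg

# CRC-16/CCITT-FALSE check value over the standard test string.
print(hex(crc(b"123456789", poly=0x1021, width=16, init=0xFFFF)))  # 0x29b1
```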
Abstract:
Continuing achievements in hardware technology are bringing ubiquitous computing closer to reality. The notion of a connected, interactive and autonomous environment is common to all sensor networks, biosystems and radio frequency identification (RFID) devices, and the emergence of significant deployments and sophisticated applications can be expected. However, as more information is collected and transmitted, security issues will become vital for such a fully connected environment. In this study the authors consider adding security features to low-cost devices such as RFID tags. In particular, the authors consider the implementation of a digital signature architecture that can be used for device authentication, to prevent tag cloning, and for data authentication to prevent transmission forgery. The scheme is built around the signature variant of the cryptoGPS identification scheme and the SHA-1 hash function. When implemented on 130 nm CMOS the full design uses 7494 gates and consumes 4.72 µW of power, making it smaller and more power efficient than previous low-cost digital signature designs. The study also presents a low-cost SHA-1 hardware architecture which is the smallest standardised hash function design to date.
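The SHA-1 component of the scheme can be illustrated in software using Python's standard hashlib (the cryptoGPS arithmetic and the compact hardware datapath are omitted; this shows only the standardised hash function's behaviour):

```python
import hashlib

def digest_tag(message: bytes) -> str:
    """Hash a message with SHA-1, as used for the hash component of the
    signature scheme; in the paper this is computed by a dedicated
    low-cost hardware architecture rather than software."""
    return hashlib.sha1(message).hexdigest()

# FIPS 180 test vector for SHA-1.
print(digest_tag(b"abc"))  # a9993e364706816aba3e25717850c26c9cd0d89d
```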
Abstract:
Integrated organisational IT systems, such as enterprise resource planning (ERP), supply chain management (SCM) and digital manufacturing (DM), have promised and delivered substantial performance benefits to many adopting firms. However, implementations of such systems have tended to be problematic. ERP projects, in particular, are prone to cost and time overruns, not delivering anticipated benefits and often being abandoned before completion. While research has developed around IT implementation, this has focused mainly on standalone (or discrete), as opposed to integrated, IT systems. Within this literature, organisational (i.e., structural and cultural) characteristics have been found to influence implementation success. The key aims of this research are (a) to investigate the role of organisational characteristics in determining IT implementation success; (b) to determine whether their influence differs for integrated IT and discrete IT projects; and (c) to develop specific guidelines for managers of integrated IT implementations. An in-depth comparative case study of two IT projects was conducted within a major aerospace manufacturing company.
Abstract:
The Water Framework Directive (WFD) has initiated a shift towards a targeted approach to implementation through its focus on river basin districts as management units and the natural ecological characteristics of waterbodies. Due to its role in eutrophication, phosphorus (P) has received considerable attention, resulting in a significant body of research, which now forms the evidence base for the programme of measures (POMs) adopted in WFD River Basin Management Plans (RBMP). Targeting POMs at critical source areas (CSAs) of P could significantly improve the environmental efficiency and cost effectiveness of proposed mitigation strategies. This paper summarises the progress made towards targeting mitigation measures at CSAs in Irish catchments. A review of current research highlights that knowledge related to P export at field scale is relatively comprehensive; however, the availability of site-specific data and tools limits widespread identification of CSAs at this scale. Increasing complexity of hydrological processes at larger scales limits accurate identification of CSAs at catchment scale. Implementation of a tiered approach, using catchment-scale tools in conjunction with field-by-field surveys, could decrease uncertainty and provide a more practical and cost-effective method of delineating CSAs in a range of catchments. Despite scientific and practical uncertainties, development of a tiered CSA-based approach to assist in the development of supplementary measures would provide a means of developing catchment-specific and cost-effective programmes of measures for diffuse P. The paper presents a conceptual framework for such an approach, which would have particular relevance for the development of supplementary measures in High Status Waterbodies (HSW). The cost and resources necessary for implementation are justified based on HSWs’ value as undisturbed reference condition ecosystems.
Abstract:
In the pursuit of producing high quality, low-cost composite aircraft structures, out-of-autoclave manufacturing processes for textile reinforcements are being simulated with increasing accuracy. This paper focuses on the continuum-based, finite element modelling of textile composites as they deform during the draping process. A non-orthogonal constitutive model tracks yarn orientations within a material subroutine developed for Abaqus/Explicit, resulting in the realistic determination of fabric shearing and material draw-in. Supplementary material characterisation was experimentally performed in order to define the tensile and non-linear shear behaviour accurately. The validity of the finite element model has been studied through comparison with similar research in the field and the experimental lay-up of carbon fibre textile reinforcement over a tool with double curvature geometry, showing good agreement.
Abstract:
The purpose of this research is to identify and assess the opportunities and challenges of implementing a Site Waste Management Plan (SWMP) on projects irrespective of size. In the UK, construction and demolition waste accounts for a third of all UK waste. There are a number of factors that influence the implementation of SWMPs. In order to identify and analyse these factors, 4 unstructured interviews were carried out and a sample of 56 participants completed a questionnaire survey. The scope of the study was limited to UK construction industry professionals. The analysis revealed that more needs to be done if the industry is to meet government targets for reducing construction-related waste going to landfill. In addition, although an SWMP may not yet be legally required on all construction projects, clients and contractors need to recognise the benefits of adopting one: cutting costs and implementing best practice. Implementing an SWMP not only helps to achieve this but also yields significant cost savings on projects and benefits the environment. This study presents evidence that contractors need to do more to reduce waste and draws a clear link between waste reduction and the implementation of SWMPs. The findings are useful in the ongoing efforts to encourage the industry to find smarter, more efficient and less damaging ways to operate.
Abstract:
The aim of this paper is to demonstrate the applicability and effectiveness of a computationally demanding stereo matching algorithm on different low-cost, low-complexity embedded devices, focusing on the analysis of timing and image quality performance. Various optimizations have been implemented to allow its deployment on specific hardware architectures while decreasing memory and processing time requirements: (1) reduction of color channel information and resolution for input images; (2) low-level software optimizations such as parallel computation, replacement of function calls or loop unrolling; (3) reduction of redundant data structures and internal data representation. The feasibility of a stereovision system on a low-cost platform is evaluated using standard datasets and images taken from Infra-Red (IR) cameras. Analysis of the resulting disparity map accuracy with respect to a full-size dataset is performed, as well as testing of suboptimal solutions.
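The core disparity computation can be sketched with a minimal sum-of-absolute-differences (SAD) block matcher on 1-D scanlines; this is a generic illustration of stereo block matching, not the specific algorithm or optimizations the paper evaluates:

```python
def sad_disparity(left, right, max_disp, half_win=1):
    """Per-pixel disparity for a pair of 1-D grayscale scanlines using
    sum-of-absolute-differences block matching."""
    width = len(left)
    disp = [0] * width
    for x in range(width):
        best_cost, best_d = float("inf"), 0
        for d in range(min(max_disp, x) + 1):     # candidate shifts
            cost = 0
            for k in range(-half_win, half_win + 1):
                xl = min(max(x + k, 0), width - 1)      # clamp at borders
                xr = min(max(x + k - d, 0), width - 1)
                cost += abs(left[xl] - right[xr])
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp

# The bright pixel in `left` sits two positions right of its match.
left  = [0, 0, 0, 0, 0, 9, 0, 0]
right = [0, 0, 0, 9, 0, 0, 0, 0]
print(sad_disparity(left, right, max_disp=3)[5])  # 2
```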
Abstract:
Multiple Table Lookup architectures in Software Defined Networking (SDN) open the door for exciting new network applications. The development of the OpenFlow protocol supported the SDN paradigm. However, the first version of the OpenFlow protocol specified a single table lookup model with the associated constraints in flow entry numbers and search capabilities. With the introduction of multiple table lookup in OpenFlow v1.1, flexible and efficient search to support SDN application innovation became possible. However, implementation of multiple table lookup in hardware to meet high performance requirements is non-trivial. One possible approach involves the use of multi-dimensional lookup algorithms. A high lookup performance can be achieved by using embedded memory for flow entry storage. A detailed study of OpenFlow flow filters for multi-dimensional lookup is presented in this paper. Based on a proposed multiple table lookup architecture, the memory consumption and update performance using parallel single field searches are evaluated. The results demonstrate an efficient multi-table lookup implementation with minimum memory usage.
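The multiple-table model can be sketched as an ordered pipeline in which each table matches a single header field and either emits an action or passes the packet to the next table. Field names and actions below are illustrative, not OpenFlow wire-protocol values:

```python
def pipeline_lookup(tables, packet):
    """Walk an ordered list of (field, entries) single-field tables.
    Each entry maps a field value to an action; the special action
    "goto" forwards the packet to the next table, and a miss drops it."""
    for field, entries in tables:
        action = entries.get(packet.get(field), "drop")
        if action != "goto":
            return action
    return "drop"

# Two-stage lookup: destination IP first, then TCP destination port.
tables = [
    ("ip_dst",    {"10.0.0.1": "goto"}),
    ("tcp_dport", {80: "output:1", 443: "output:2"}),
]
print(pipeline_lookup(tables, {"ip_dst": "10.0.0.1", "tcp_dport": 80}))  # output:1
print(pipeline_lookup(tables, {"ip_dst": "10.9.9.9", "tcp_dport": 80}))  # drop
```

Splitting the match across per-field tables is what lets a hardware design run parallel single-field searches and keep flow-entry storage in embedded memory, as the paper evaluates.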
Abstract:
PURPOSE: To assess the impact of community outreach and the availability of low-cost surgeries [500 Renminbi (RMB) or 65 United States dollars (US$) per surgery] on the willingness to pay for cataract surgery among male and female rural-dwelling Chinese.METHODS: Cross-sectional willingness-to-pay surveys were conducted at the initiation of a cataract outreach programme in June 2001 and then again in July 2006. Respondents underwent visual acuity testing and provided socio-demographic data.RESULTS: In 2001 and 2006, 325 and 303 subjects, respectively, were interviewed. On average, the 2006 sample subjects were of similar age, more likely to be female (p < 0.01) and illiterate (p < 0.01), and less likely to come from a household with annual income of less than US$789 (62% vs. 87%, p < 0.01). Familiarity with cataract surgery increased from 21.2% to 44.4% over the 5 years for male subjects (p < 0.01) and from 15.8% to 44.4% among females (p < 0.01). The proportion of respondents willing to pay at least 500 RMB for surgery increased from 67% to 88% (p < 0.01) among male subjects and from 50% to 91% (p < 0.01) among females.CONCLUSIONS: Five years of access to free cataract testing and low-cost surgery programmes appears to have improved familiarity with cataract surgery and increased the willingness to pay at least 500 RMB (US$65) for it in this rural population. Elderly women are now as likely as men to be willing to pay at least 500 RMB, reversing gender differences present 5 years ago.
Abstract:
Introduction
Standard treatment for neovascular age-related macular degeneration (nAMD) is intravitreal injections of anti-VEGF drugs. Following multiple injections, nAMD lesions often become quiescent but there is a high risk of reactivation, and regular review by hospital ophthalmologists is the norm. The present trial examines the feasibility of community optometrists making lesion reactivation decisions.
Methods
The Effectiveness of Community vs Hospital Eye Service (ECHoES) trial is a virtual trial; lesion reactivation decisions were made about vignettes that comprised clinical data, colour fundus photographs, and optical coherence tomograms displayed on a web-based platform. Participants were either hospital ophthalmologists or community optometrists. All participants were provided with webinar training on the disease, its management, and assessment of the retinal imaging outputs. In a balanced design, 96 participants each assessed 42 vignettes; a total of 288 vignettes were assessed seven times by each professional group. The primary outcome is a participant's judgement of lesion reactivation compared with a reference standard. Secondary outcomes are the frequency of sight-threatening errors; judgements about specific lesion components; participant-rated confidence in their decisions about the primary outcome; and cost effectiveness of follow-up by optometrists rather than ophthalmologists.
Discussion
This trial addresses an important question for the NHS, namely whether, with appropriate training, community optometrists can make retreatment decisions for patients with nAMD to the same standard as hospital ophthalmologists. The trial employed a novel approach as participation was entirely through a web-based application; the trial required very few resources compared with those that would have been needed for a conventional randomised controlled clinical trial.
Abstract:
The present work presents a feasibility analysis of a cogeneration plant for a food processing facility, with the aim of decreasing the cost of energy supply. The monthly electricity and heat consumption profiles are analysed in order to understand the demand patterns, as well as the costs of the current electricity and gas supply. Then, a detailed thermodynamic model of the cogeneration cycle is implemented and the investment costs are linked to the thermodynamic variables by means of cost functions. The optimal electrical power of the co-generator is determined with reference to various investment indexes. The analysis highlights that the optimal size varies according to the chosen indicator; it therefore cannot be established uniquely, but depends on the financial/economic strategy of the company as expressed through the chosen investment index.
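The point that the optimal size depends on the chosen investment index can be reproduced with a toy numerical example (all candidate capacities, costs and savings below are invented for illustration, not the paper's data):

```python
def npv(capex, annual_saving, rate=0.08, years=10):
    """Net present value of an investment with constant yearly savings."""
    return -capex + sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

sizes   = [100, 200, 300]                              # candidate capacities, kWe
capex   = {100: 150_000, 200: 260_000, 300: 360_000}   # assumed cost function, EUR
savings = {100: 30_000, 200: 50_000, 300: 62_000}      # assumed yearly savings, EUR

best_by_npv     = max(sizes, key=lambda s: npv(capex[s], savings[s]))
best_by_payback = min(sizes, key=lambda s: capex[s] / savings[s])
print(best_by_npv, best_by_payback)  # 200 100 -- the two indexes disagree
```

Here maximising NPV favours the mid-size unit, while minimising the simple payback period favours the smallest one, which is exactly why the choice must be driven by the company's financial strategy.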
Abstract:
This paper presents a study on the implementation of Real-Time Pricing (RTP) based Demand Side Management (DSM) of water pumping at a clean water pumping station in Northern Ireland, with the intention of minimising electricity costs and maximising the usage of electricity from wind generation. A Genetic Algorithm (GA) was used to create pumping schedules based on system constraints and electricity tariff scenarios. Implementation of this method would allow the water network operator to make significant savings on electricity costs while also helping to mitigate the variability of wind generation.
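A minimal sketch of the GA approach, under simplified assumptions (a single tariff vector, a fixed number of required pumping hours as the only system constraint, and illustrative GA parameters; the study's real constraints and wind-tracking objective are richer):

```python
import random

def ga_schedule(tariff, pump_hours, pop_size=60, gens=200, seed=1):
    """Evolve an on/off pump schedule, one gene per hour, that runs the
    pump for `pump_hours` hours while minimising tariff cost."""
    rng = random.Random(seed)
    hours = len(tariff)

    def cost(s):
        # Tariff cost plus a heavy penalty for missing the required hours.
        return sum(t for t, on in zip(tariff, s) if on) + 100 * abs(sum(s) - pump_hours)

    pop = [[rng.randint(0, 1) for _ in range(hours)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=cost)                      # elitist: keep the best half
        survivors = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, hours)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # mutation: flip one hour
                child[rng.randrange(hours)] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

tariff = [5, 5, 1, 1, 1, 5, 5, 5]   # toy 8-hour tariff, p/kWh
best = ga_schedule(tariff, pump_hours=3)
print(best)
```

With this tariff the GA should concentrate the three pumping hours in the cheap mid-period slots, the same behaviour that shifts real pumping load towards low-price, high-wind periods.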