12 results for Special purpose vehicles.
Abstract:
Public-private partnerships (PPPs) are an established model used by most governments internationally to provide infrastructure-based services using private finance. Typically, the public authority signs a contract with a special purpose vehicle (SPV) which, because of the holistic nature of PPP, in turn sub-contracts the finance, design, construction, maintenance and soft services to companies that are often related to its shareholders. There is thus a considerable network of linked organisations that together procure and provide the PPP project. While there is a growing body of research examining PPP projects, much of it is interview or case study based, so the evidence is drawn from a small number of interviews or cases in specific sectors. It also focuses on the public sector procurer and the private sector contractor within this network of organisations. Although it has been recognised that the perceptions of financiers may differ from those of other key PPP players, there is much less research focusing on the financiers themselves. In this paper we report the results of a postal questionnaire survey administered to 109 providers of senior debt and equity, which achieved a response rate of just under 40%. We supplement these findings with a small number of illustrative quotes from interviewees, where each cited quote represents a commonly held view. We used SPSS and NVivo to analyse the data. The findings show that, when assessing PPPs, financiers perceive a very wide range of risks as important, and that it matters to them that many of these risks are either insured or allocated to sub-contractors. When considering participation in PPPs, financiers agree that working with familiar partners on familiar projects and in familiar sectors is important, which may raise barriers to entry and undermine competitive processes.
Abstract:
Hardware designers and engineers typically need to explore a multi-parametric design space in order to find the best configuration for their designs, using simulations that can take weeks to months to complete. For example, designers of special purpose chips need to explore parameters such as the optimal bitwidth and data representation. This is the case for the development of complex algorithms such as the Low-Density Parity-Check (LDPC) decoders used in modern communication systems. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to graphics processing units (GPUs) and field-programmable gate arrays (FPGAs). Depending on the simulation requirements, the ideal architecture can vary. In this paper we propose a new design flow based on OpenCL, a unified multiplatform programming model, which accelerates LDPC decoding simulations, thereby significantly reducing architectural exploration and design time. OpenCL-based parallel kernels are used without modifications or code tuning on multicore CPUs, GPUs and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, to map the simulations onto FPGAs. To the best of our knowledge, this is the first time that a single, unmodified OpenCL code has been used to target these three different platforms. We show that, depending on the design parameters to be explored in the simulation and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, providing different acceleration factors. For example, although simulations can typically execute more than 3x faster on FPGAs than on GPUs, the overhead of circuit synthesis often outweighs the benefits of FPGA-accelerated execution.
Abstract:
The design cycle for complex special-purpose computing systems is extremely costly and time-consuming. It involves a multi-parametric design space exploration for optimization, followed by design verification. Designers of special purpose VLSI implementations often need to explore parameters, such as the optimal bitwidth and data representation, through time-consuming Monte Carlo simulations. A prominent example of this simulation-based exploration process is the design of decoders for error correcting systems, such as the Low-Density Parity-Check (LDPC) codes adopted by modern communication standards, which involves thousands of Monte Carlo runs for each design point. Currently, high-performance computing offers a wide set of acceleration options that range from multicore CPUs to Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). Exploiting diverse target architectures typically means developing multiple code versions, often using distinct programming paradigms. In this context, we evaluate the concept of retargeting a single OpenCL program to multiple platforms, thereby significantly reducing design time. A single OpenCL-based parallel kernel is used without modifications or code tuning on multicore CPUs, GPUs, and FPGAs. We use SOpenCL (Silicon to OpenCL), a tool that automatically converts OpenCL kernels to RTL, in order to introduce FPGAs as a potential platform for efficiently executing simulations coded in OpenCL. We use LDPC decoding simulations as a case study. Experimental results were obtained by testing a variety of regular and irregular LDPC codes, ranging from short/medium-length (e.g., 8,000-bit) codes to long (e.g., 64,800-bit) DVB-S2 codes. We observe that, depending on the design parameters to be simulated and on the dimension and phase of the design, the GPU or the FPGA may suit different purposes more conveniently, thus providing different acceleration factors over conventional multicore CPUs.
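As an indication of the kind of computation being retargeted, here is a minimal NumPy sketch of the min-sum check-node update at the heart of LDPC decoding, the operation an OpenCL kernel would parallelise across check nodes and codewords; the function and its interface are our illustration, not the paper's code:

    import numpy as np

    def check_node_update(msgs):
        # msgs: incoming variable-to-check messages (LLRs) for one check node.
        # Outgoing message on each edge: product of the signs of the OTHER
        # messages times the minimum magnitude of the OTHER messages.
        msgs = np.asarray(msgs, dtype=float)
        signs = np.sign(msgs)
        signs[signs == 0] = 1.0                 # treat zero LLRs as positive
        total_sign = signs.prod()
        mags = np.abs(msgs)
        order = np.argsort(mags)
        min1, min2 = mags[order[0]], mags[order[1]]
        out = np.full(msgs.shape, min1)
        out[order[0]] = min2                    # exclude each edge's own message
        return (total_sign * signs) * out       # total_sign * sign_i = sign of the others

A Monte Carlo design-point simulation repeats this update, together with the variable-node update, over millions of noisy codewords, which is what makes acceleration worthwhile.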
Abstract:
Institutions involved in the provision of tertiary education across Europe are feeling the pinch. European universities and other higher education (HE) institutions must operate in a climate where the pressure of government spending cuts (Garben, 2012) stands in stark juxtaposition to the EU's strategy to drive forward and maintain growth in student numbers in the sector (Eurostat, 2015).
In order to remain competitive, universities and HE institutions are making ever-greater use of electronic assessment (E-Assessment) systems (Chatzigavriil et al., 2015; Ferrell, 2012). These systems are attractive primarily because they offer a cost-effective and scalable approach to assessment. In addition to scalability, they also offer reliability, consistency and impartiality; furthermore, from the perspective of students they are popular chiefly because they can offer instant feedback (Walet, 2012).
There are disadvantages, though.
First, feedback is often returned to a student immediately on completion of their assessment. It is possible to disable the instant feedback option (as is often the case during an end-of-semester exam period, when assessment scores must be ratified before release); however, this tends to be a global 'all on' or 'all off' configuration option, controlled centrally rather than configurable on a per-assessment basis.
If a formative in-term assessment is to be taken by multiple groups of students, each at different times, this restriction means that the answers to each question will be disclosed to the first group of students undertaking the assessment. As soon as the answers are released "into the wild", the academic integrity of the assessment is lost for subsequent student groups.
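One possible remedy, shown here as a minimal sketch only (the Assessment class and its field names are our own illustration, not an existing system's API), is to attach a release time to each assessment instead of relying on a central switch:

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    @dataclass
    class Assessment:
        name: str
        feedback_release: Optional[datetime]  # None = feedback withheld until set

        def feedback_visible(self, now: datetime) -> bool:
            # Feedback is shown only once this assessment's own release time
            # has passed, so groups sitting it later are not compromised.
            return self.feedback_release is not None and now >= self.feedback_release

Setting the release time to just after the final group's sitting preserves the integrity of the questions while still automating feedback delivery.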
Second, the style of feedback provided to a student for each question is often limited to a simple 'correct' or 'incorrect' indicator. While this type of feedback has its place, it often does not give a student enough insight to improve their understanding of a topic when they answer a question incorrectly.
Most E-Assessment systems boast a wide range of question types, including Multiple Choice, Multiple Response, Free Text Entry/Text Matching and Numerical questions. The design of these question types is often quite restrictive and formulaic, which has a knock-on effect on the quality of feedback that can be provided in each case.
Multiple Choice Questions (MCQs) are the most prevalent, as they are the most prescriptive and therefore the most straightforward to mark consistently. They are also the most amenable to feedback, since meaningful, relevant feedback can easily be attached to each possible option.
Text matching questions tend to be more problematic because of their free text entry nature. Common misspellings or case-sensitivity errors can often be accounted for by the software, but such matching is by no means foolproof: it is very difficult to predict in advance the full range of variations on an answer that a manual marker of a paper-based equivalent of the same question would consider worthy of marks.
Numerical questions are similarly restricted. An answer can be checked for accuracy, or for whether it falls within a certain range of the correct answer, but unless the system is a special-purpose mathematical E-Assessment system it is unlikely to have computational capability and so cannot, for example, award the "method marks" commonly given in paper-based marking.
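To make these two limitations concrete, here is a minimal sketch of the typical automated checks (the function names and tolerance value are illustrative, not drawn from any particular system):

    def text_match(answer, accepted):
        # Case- and whitespace-insensitive match against a fixed list of
        # accepted spellings: only variants predicted in advance score marks.
        return answer.strip().lower() in (a.strip().lower() for a in accepted)

    def numeric_match(answer, correct, tol=0.01):
        # Accept an answer within a relative tolerance of the correct value;
        # a sound method with an arithmetic slip still scores zero.
        return abs(answer - correct) <= tol * abs(correct)

Neither check can inspect the working that produced the answer, which is precisely what method marks reward in paper-based marking.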
From a pedagogical perspective, the importance of providing useful formative feedback to students at a point in their learning when they can benefit from it and put it to use cannot be overstated (Grieve et al., 2015; Ferrell, 2012).
In this work, we propose a number of software-based solutions that overcome the limitations and inflexibilities of existing E-Assessment systems.
Abstract:
Reliability has emerged as a critical design constraint, especially in memories. Designers go to great lengths to guarantee fault-free operation of the underlying silicon by adopting redundancy-based techniques, which essentially try to detect and correct every single error. However, such techniques come at the cost of large area, power and performance overheads, leading many researchers to doubt their efficiency, especially for error-resilient systems where 100% accuracy is not always required. In this paper, we present an alternative method focusing on confining the output error induced by any reliability issues. Focusing on memory faults, rather than correcting every single error, the proposed method exploits the statistical characteristics of the target application and replaces any erroneous data with the best available estimate of that data. To realize the proposed method, a RISC processor is augmented with custom instructions and special-purpose functional units. We apply the method to the proposed enhanced processor by studying the statistical characteristics of the various algorithms involved in a popular multimedia application. Our experimental results show that, in contrast to state-of-the-art fault tolerance approaches, we are able to reduce runtime and area overhead by 71.3% and 83.3%, respectively.
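In software terms, the error-confinement idea might look like the following minimal sketch (in the paper it is realised in hardware via custom instructions; the outlier threshold and the use of a profiled mean as the estimate are our illustrative assumptions):

    import numpy as np

    def confine_errors(block, profiled_mean, profiled_std, k=4.0):
        # block: values read from an unreliable memory region.
        # profiled_mean/profiled_std: statistics of this data stream, measured
        # offline for the target application (e.g., pixel luminance values).
        block = np.asarray(block, dtype=float)
        faulty = np.abs(block - profiled_mean) > k * profiled_std  # implausible reads
        block[faulty] = profiled_mean   # replace with the best available estimate
        return block

The error is thus bounded rather than eliminated, which is acceptable for error-resilient applications such as multimedia.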
Abstract:
This paper employs a unique decentralised cooperative control method to realise a formation-based collision avoidance strategy for a group of autonomous vehicles. In this approach, the vehicles' roles in the formation and their alert and danger areas are first defined, and formation-based intra-group and external collision avoidance methods are then proposed to translate the collision avoidance problem into a formation stability problem. The extension–decomposition–aggregation formation control method is next employed to stabilise the original and modified formations while manoeuvring, and thereby solve their collision avoidance problem indirectly. A simulation study verifies the feasibility and effectiveness of the intra-group and external collision avoidance strategy. It is demonstrated that both the formation control and collision avoidance problems can be solved simultaneously if the stability of the expanded formation, including external obstacles, can be satisfied.
Abstract:
The technical challenges in the design and programming of signal processors for multimedia communication are discussed. The development of terminal equipment to meet such demand presents a significant technical challenge, considering that it is highly desirable for the equipment to be cost-effective, power-efficient, versatile, and extensible for future upgrades. The main challenges in the design and programming of signal processors for multimedia communication are general-purpose signal processor design, application-specific signal processor design, operating systems and programming support, and application programming. The size of the FFT is programmable so that it can be used for various OFDM-based communication systems, such as digital audio broadcasting (DAB), digital video broadcasting-terrestrial (DVB-T) and digital video broadcasting-handheld (DVB-H). The clustered architecture design and distributed ping-pong register files in the PAC DSP raise new challenges for code generation.
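As a concrete illustration of such programmability, the FFT lengths required by these standards (2048 points for DAB transmission mode I; 2K/8K modes for DVB-T; DVB-H adding a 4K mode) could be selected from a small configuration table; the sketch below is ours, not the processor's actual interface:

    # OFDM FFT lengths per broadcast standard (illustrative table only).
    FFT_SIZES = {
        "DAB":   (2048,),               # transmission mode I
        "DVB-T": (2048, 8192),          # 2K and 8K modes
        "DVB-H": (2048, 4096, 8192),    # adds a 4K mode
    }

    def configure_fft(standard, mode=0):
        # Return the length a programmable-size FFT unit would be set to.
        return FFT_SIZES[standard][mode]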
Abstract:
Electric vehicles (EVs) offer great potential for moving away from fossil fuel dependency in transport once some of the technical barriers related to battery reliability and grid integration are resolved. The European Union has set a target of a 10% reduction in greenhouse gas emissions by 2020 relative to 2005 levels; this target is binding on all European Union member states. If the electric vehicle issues are overcome, the challenge is then to use as much renewable energy as possible to meet this target. In this paper, the impact of electric vehicle charging in the all-Ireland single wholesale electricity market after the 2020 deadline passes is investigated using a power system dispatch model. For the purposes of this work it is assumed that the 10% electric vehicle target in the Republic of Ireland is not achieved, but that 8% is instead reached by 2025, given the slow market uptake of electric vehicles. Our experimental study shows that the increasing penetration of EVs could contribute to the EU's and the Irish government's emissions reduction targets regardless of the charging scenario. Furthermore, among the various charging scenarios, off-peak charging is the best approach, contributing 2.07% towards the target of a 10% reduction in greenhouse gas emissions by 2025.
Abstract:
The transport sector is considered to be one of the sectors most dependent on fossil fuels. Meeting ecological, social and economic demands throughout the sector has become increasingly important in recent times. One passenger vehicle with a more environmentally friendly propulsion system is the hybrid electric vehicle: combining an internal combustion engine with an electric motor offers the potential to reduce carbon dioxide emissions. The overall objective of this research is to provide an appraisal of the use of a micro gas turbine as the range extender in a plug-in hybrid electric vehicle. In this application the gas turbine can always operate at its most efficient operating point, as its only requirement is to recharge the battery, which makes it highly suitable for this purpose. Gas turbines offer many benefits over the internal combustion engines traditionally used in this role: a high power-to-weight ratio, multi-fuel capability and relatively low emission levels due to continuous combustion.
Abstract:
One of the main purposes of building a battery model is monitoring and control during battery charging/discharging, as well as estimating key quantities such as the state of charge for electric vehicles. However, a model based on the electrochemical reactions within the battery is highly complex and difficult to compute using conventional approaches. Radial basis function (RBF) neural networks have been widely used to model complex systems for estimation and control purposes, while the optimization of both the linear and non-linear parameters in the RBF model remains a key issue. A recently proposed meta-heuristic algorithm, Teaching-Learning-Based Optimization (TLBO), requires no preset algorithm-specific parameters and performs well in non-linear optimization. In this paper, a novel self-learning TLBO-based RBF model is proposed for modelling electric vehicle batteries. The approach has been applied to two battery testing data sets and compared with other RBF-based battery models; the training and validation results confirm the efficacy of the proposed method.
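For concreteness, here is a minimal sketch of the RBF structure whose parameters such an optimiser would tune (Gaussian basis functions assumed; the variable names are ours, and the TLBO search itself is not shown):

    import numpy as np

    def rbf_predict(x, centres, widths, weights):
        # y(x) = sum_j w_j * exp(-||x - c_j||^2 / (2 * s_j^2))
        # x: (n_samples, n_features) inputs, e.g. current/voltage/temperature;
        # centres: (n_centres, n_features); widths, weights: (n_centres,).
        # TLBO would tune the centres and widths (the non-linear parameters);
        # the linear weights then follow from least squares against measured
        # state-of-charge data.
        x = np.atleast_2d(x)
        d2 = ((x[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        phi = np.exp(-d2 / (2.0 * widths ** 2))
        return phi @ weights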
Abstract:
Simulation is a well-established and effective approach to the development of fuel-efficient and low-emissions vehicles in both on-highway and off-highway applications.
The simulation of on-highway automotive vehicles is widely reported in the literature, whereas research relating to non-automotive and off-highway vehicles is relatively sparse. This review paper focuses on the challenges of simulating such vehicles and discusses the differences in the approach to drive cycle testing and the experimental validation of vehicle simulations. In particular, an inner-city diesel-electric hybrid bus and an internal combustion engine (ICE) powered forklift truck are used as case studies.
Computer prediction of fuel consumption and emissions of automotive vehicles on standardised drive cycles is well-established and commercial software packages such as AVL CRUISE have been specifically developed for this purpose. The vehicles considered in this review paper present new challenges from both the simulation and drive-cycle testing perspectives. For example, in the case of the forklift truck, the drive cycles involve reversing elements, variable mass, lifting operations, and do not specify a precise velocity-time profile. In particular, the difficulties associated with the prediction of productivity, i.e. the maximum rate of completing a series of defined operations, are discussed. In the case of the hybrid bus, the standardised drive cycles are unrepresentative of real-life use and alternative approaches are required in the development of efficient and low-emission vehicles.
Two simulation approaches are reviewed: the adaptation of a standard automotive vehicle simulation package and the development of bespoke models using packages such as MATLAB/Simulink.
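To give a flavour of the bespoke-model route, here is a minimal backward-facing longitudinal model of the kind such tools implement (all parameter values are illustrative; real models add gearbox losses, engine maps and, for the forklift case, lifting loads and variable mass):

    import numpy as np

    def cycle_fuel_energy(v, dt, mass=12000.0, crr=0.008, cda=5.0,
                          rho=1.2, eta=0.35):
        # v: velocity trace over the drive cycle (m/s); dt: timestep (s).
        # Tractive force = inertia + rolling resistance + aerodynamic drag;
        # braking power is discarded (no regeneration), as in a conventional
        # vehicle, and fuel energy follows from one average efficiency.
        v = np.asarray(v, dtype=float)
        a = np.gradient(v, dt)
        force = mass * a + crr * mass * 9.81 + 0.5 * rho * cda * v ** 2
        power = np.maximum(force * v, 0.0)
        return power.sum() * dt / eta   # fuel energy over the cycle (J)

A hybrid variant would recover part of the negative power into a battery model, which is where the bus case study departs from this conventional baseline.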