Abstract:
A generic architecture for implementing a QR array processor in silicon is presented. This improves on previous research by considerably simplifying the derivation of timing schedules for a QR system implemented as a folded linear array, where account has to be taken of processor cell latency and timing at the detailed circuit level. The architecture and scheduling derived have been used to create a generator for the rapid design of System-on-a-Chip (SoC) cores for QR decomposition. This is demonstrated through the design of a single-chip architecture implementing an adaptive beamformer for radar applications. Published in IEEE Transactions on Circuits and Systems Part II: Analog and Digital Signal Processing, April 2003 (not Express Briefs; Parts I and II of the journal have since been reorganised into Regular Papers and Express Briefs).
Abstract:
In this paper, we present a random iterative graph-based hyper-heuristic to produce a collection of heuristic sequences that construct solutions of different quality. These heuristic sequences can be seen as dynamic hybridisations of different graph colouring heuristics that construct solutions step by step. Based on these sequences, we statistically analyse the way in which graph colouring heuristics are automatically hybridised. This, to our knowledge, represents a new direction in hyper-heuristic research. It is observed that spending the search effort on hybridising Largest Weighted Degree with Saturation Degree at the early stage of solution construction tends to generate high-quality solutions. Based on these observations, an iterative hybrid approach is developed to adaptively hybridise these two graph colouring heuristics at different stages of solution construction. The overall aim here is to automate the heuristic design process, which draws upon an emerging research theme on developing computer methods to design and adapt heuristics automatically. Experimental results on benchmark exam timetabling and graph colouring problems demonstrate the effectiveness and generality of this adaptive hybrid approach compared with previous methods on automatically generating and adapting heuristics. Indeed, we also show that the approach is competitive with state-of-the-art human-produced methods.
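To make the hybridisation concrete, here is a minimal sketch, not the authors' implementation: the switch point, unit conflict weights and greedy colour assignment are all illustrative assumptions. It applies Largest Weighted Degree early in the construction and Saturation Degree thereafter, echoing the observation above.

```python
# Vertices = exams, edges = conflicts; weights default to 1 (assumption).

def saturation_degree(v, colours, adj):
    # Number of distinct colours already used by neighbours of v.
    return len({colours[u] for u in adj[v] if colours[u] is not None})

def largest_weighted_degree(v, adj, weights):
    # Sum of conflict weights on edges incident to v.
    return sum(weights.get((v, u), weights.get((u, v), 1)) for u in adj[v])

def hybrid_construct(adj, weights, switch_point=0.3):
    """Colour vertices one at a time: Largest Weighted Degree early in the
    construction, Saturation Degree afterwards (switch point is illustrative)."""
    n = len(adj)
    colours = {v: None for v in adj}
    for step in range(n):
        uncoloured = [v for v in adj if colours[v] is None]
        if step < switch_point * n:
            key = lambda v: largest_weighted_degree(v, adj, weights)
        else:
            key = lambda v: saturation_degree(v, colours, adj)
        v = max(uncoloured, key=key)
        used = {colours[u] for u in adj[v] if colours[u] is not None}
        colours[v] = next(c for c in range(n) if c not in used)  # greedy
    return colours

# Tiny example: a 4-cycle needs only 2 colours.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(hybrid_construct(adj, weights={}))
```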
Abstract:
This paper presents the design of a novel single-chip adaptive beamformer capable of performing 50 GFLOPS (giga floating-point operations per second). The core processor is a QR array implemented on a fully efficient linear systolic architecture, derived using a mapping that allows individual processors for boundary and internal cell operations. In addition, the paper highlights a number of rapid design techniques that have been used to realise this system. These include an architecture synthesis tool for quickly developing the circuit architecture and the utilisation of a library of parameterisable silicon intellectual property (IP) cores to rapidly develop detailed silicon designs.
Abstract:
A silicon implementation of the Approximate Rotations algorithm, capable of carrying the computational load of algorithms such as QRD and SVD within the real-time realisation of applications such as adaptive beamforming, is described. A modification to the original Approximate Rotations algorithm that simplifies the method of optimal angle selection is proposed. Analysis shows that fewer iterations of the Approximate Rotations algorithm are required, compared with the conventional CORDIC algorithm, to achieve similar degrees of accuracy. The silicon design studies undertaken provide direct practical evidence of the superior performance of the Approximate Rotations algorithm, which requires approximately 40% of the total computation time of the conventional CORDIC algorithm for a similar silicon area cost. © 2004 IEEE.
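For intuition about the baseline being improved upon, the following is a hedged sketch of a conventional CORDIC rotation built from shift-and-add micro-rotations; the paper's modified optimal-angle selection rule is not reproduced here, and the iteration count is illustrative.

```python
import math

def cordic_rotate(x, y, theta, iterations=16):
    """Rotate (x, y) by theta using only shift-and-add style micro-rotations
    through the fixed angles atan(2^-i), as in conventional CORDIC."""
    angle = 0.0
    # Aggregate scale factor for the chosen number of iterations.
    k = math.prod(1 / math.sqrt(1 + 2 ** (-2 * i)) for i in range(iterations))
    for i in range(iterations):
        d = 1.0 if angle < theta else -1.0            # steer toward theta
        x, y = x - d * y * 2 ** -i, y + d * x * 2 ** -i
        angle += d * math.atan(2 ** -i)
    return x * k, y * k

print(cordic_rotate(1.0, 0.0, math.pi / 4))           # ~ (0.7071, 0.7071)
```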
Abstract:
This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components should match heterogeneous architectures, defined in terms of CPU/GPU combinations, for example. In order to achieve this, the ParaPhrase approach will map components at link time to the available hardware, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs, etc. In this way, we aim to develop a new approach to programming that will be able to produce software that can adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in terms of reducing sharing at a high level of abstraction, and so in reducing or even eliminating the costs that are usually associated with cache management, locking, and synchronisation. © 2013 Springer-Verlag Berlin Heidelberg.
Abstract:
In this paper, we propose a system-level design approach considering voltage over-scaling (VOS) that achieves error resiliency using unequal error protection of different computation elements, while incurring only minor quality degradation. Depending on user specifications and the severity of process variations/channel noise, the degree of VOS in each block of the system is adaptively tuned to ensure minimum system power while providing "just-the-right" amount of quality and robustness. This is achieved by taking into consideration system-level interactions and ensuring that, under any change of operating conditions, only the "less crucial" computations, which contribute less to block/system output quality, are affected. The design methodology, applied to a DCT/IDCT system, shows large power benefits (up to 69%) at reasonable image quality while tolerating errors induced by varying operating conditions (VOS, process variations, channel noise). Interestingly, the proposed IDCT scheme conceals channel noise at scaled voltages. ©2009 IEEE.
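As a toy illustration of unequal error protection (my assumptions, not the paper's circuit-level error model), the sketch below confines VOS-induced bit flips to the low-order bits of the less crucial, high-frequency DCT coefficients while the crucial ones stay fully protected:

```python
import random

random.seed(0)  # reproducible demo

def vos_error(coef, bit_error_rate, protected):
    """Flip random low-order bits of an 8-bit coefficient unless protected."""
    if protected:
        return coef                       # crucial computation: full voltage
    value = int(coef) & 0xFF
    for bit in range(4):                  # only the 4 least significant bits
        if random.random() < bit_error_rate:
            value ^= 1 << bit
    return value

# Zig-zag ordered DCT magnitudes (illustrative); protect the first two.
block = [200, 60, 25, 10, 4, 2, 1, 0]
noisy = [vos_error(c, 0.2, protected=(i < 2)) for i, c in enumerate(block)]
print(noisy)
```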
Abstract:
In the development of aerospace composite components, designing for reduced manufacturing layup cost and structural complexity is increasingly important. While the advantage of composite materials is the ability to tailor designs to various structural loads for minimum mass, the challenge is obtaining a design that is manufacturable and minimizes local ply incompatibility. The focus of the presented research is understanding how the relationships between mass, manufacturability and design complexity, under realistic loads and design requirements, can be affected by enforcing ply continuity in the design process. Presented are a series of sizing case studies on an upper wing cover, designed using conventional analyses and the tabular laminate design process. Introducing skin ply continuity constraints can generate skin designs with minimal ply discontinuities, fewer ply drops and larger ply areas than designs not constrained for continuity. However, the reduced design freedom associated with the addition of these constraints results in a weight penalty over the total wing cover. Perhaps more interestingly, when considering manual hand layup, the reduced design complexity does not translate into a reduced recurring manufacturing cost. In contrast, heavier wing cover designs appear to take more time to lay up regardless of the laminate design complexity. © 2012 AIAA.
Abstract:
The design and VLSI implementation of two key components of the class-IV partial response maximum likelihood (PR-IV) channel, the adaptive filter and the Viterbi decoder, are described. These blocks are implemented using parameterised VHDL modules from a library of common digital signal processing (DSP) and arithmetic functions. Design studies, based on 0.6 micron 3.3 V standard cell processes, indicate that worst-case sampling rates of 49 mega-samples per second are achievable for this system, with proportionally higher sampling rates for full-custom designs and smaller-dimension processes. Significant increases in the sampling rate, from 49 MHz to approximately 180 MHz, can be achieved by operating four filter modules in parallel, and this implementation has 50% lower power consumption than a pipelined filter operating at the same speed.
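The four-module parallelisation can be pictured with the following rough sketch (a software analogy, not the paper's VLSI design): each of four filter instances produces every fourth output, so each module runs at a quarter of the overall sample rate.

```python
def fir(x, h, n):
    """Output of an FIR filter with taps h at time index n."""
    return sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)

def parallel_fir(x, h, modules=4):
    """Module m computes outputs m, m+4, m+8, ... independently;
    in hardware the four modules would run concurrently."""
    out = [0.0] * len(x)
    for m in range(modules):
        for n in range(m, len(x), modules):
            out[n] = fir(x, h, n)
    return out

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
h = [0.5, 0.5]                       # simple averaging taps (illustrative)
print(parallel_fir(x, h))
```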
Abstract:
We propose a mixed cost-function adaptive initialization algorithm for the time domain equalizer (TEQ) in a discrete multitone (DMT)-based asymmetric digital subscriber line. Using our approach, a higher convergence rate than that of the commonly used least-mean-square algorithm is obtained, whilst attaining bit rates close to those of the optimum maximum shortening SNR design and the upper-bound SNR. Furthermore, our proposed method outperforms the minimum mean-squared error design for a range of TEQ filter lengths. The improved performance outweighs the small increase in computational complexity required. A block variant of our proposed algorithm is also presented to overcome the increased latency imposed on the feedback path of the adaptive system.
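For reference, the commonly used least-mean-square baseline mentioned above can be sketched as follows; the paper's mixed cost function and block variant are not reproduced, and the channel, filter length and step size here are illustrative.

```python
import random

def lms_train(x, d, taps=8, mu=0.01):
    """Adapt FIR weights w so that the filter output tracks the desired d."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]       # x[n], x[n-1], ...
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[n] - y                               # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w

# Identify a simple two-tap channel from its input/output signals.
random.seed(0)
h = [1.0, 0.5]
x = [random.gauss(0, 1) for _ in range(2000)]
d = [0.0] + [h[0] * x[n] + h[1] * x[n - 1] for n in range(1, len(x))]
print([round(wi, 2) for wi in lms_train(x, d)])    # ~ [1.0, 0.5, 0, ...]
```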
Abstract:
BACKGROUND: Long-term hormone therapy has been the standard of care for advanced prostate cancer since the 1940s. STAMPEDE is a randomised controlled trial using a multiarm, multistage platform design. It recruits men with high-risk, locally advanced, metastatic or recurrent prostate cancer who are starting first-line long-term hormone therapy. We report primary survival results for three research comparisons testing the addition of zoledronic acid, docetaxel, or their combination to standard of care versus standard of care alone.
METHODS: Standard of care was hormone therapy for at least 2 years; radiotherapy was encouraged for men with N0M0 disease to November, 2011, then mandated; radiotherapy was optional for men with node-positive non-metastatic (N+M0) disease. Stratified randomisation (via minimisation) allocated men 2:1:1:1 to standard of care only (SOC-only; control), standard of care plus zoledronic acid (SOC + ZA), standard of care plus docetaxel (SOC + Doc), or standard of care with both zoledronic acid and docetaxel (SOC + ZA + Doc). Zoledronic acid (4 mg) was given for six 3-weekly cycles, then 4-weekly until 2 years, and docetaxel (75 mg/m(2)) for six 3-weekly cycles with prednisolone 10 mg daily. There was no blinding to treatment allocation. The primary outcome measure was overall survival. Pairwise comparisons of research versus control had 90% power at 2·5% one-sided α for hazard ratio (HR) 0·75, requiring roughly 400 control arm deaths. Statistical analyses were undertaken with standard log-rank-type methods for time-to-event data, with hazard ratios (HRs) and 95% CIs derived from adjusted Cox models. This trial is registered at ClinicalTrials.gov (NCT00268476) and ControlledTrials.com (ISRCTN78818544).
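For orientation, the stated event requirement can be roughly reproduced with Schoenfeld's textbook approximation for log-rank comparisons; this is a back-of-envelope sketch, not the trial's actual design calculation, with the 2:1 control-to-research allocation taken from the randomisation ratio above.

```latex
% Deaths needed across both arms of one pairwise comparison, with
% allocation fraction \pi = 2/3 to control, 90% power, one-sided
% \alpha = 0.025, and target HR = 0.75:
\[
  d = \frac{(z_{1-\alpha} + z_{1-\beta})^2}{\pi(1-\pi)\,(\ln \mathrm{HR})^2}
    = \frac{(1.96 + 1.28)^2}{\tfrac{2}{3}\cdot\tfrac{1}{3}\,(\ln 0.75)^2}
    \approx 570,
\]
% of which roughly two thirds accrue in the twice-as-large control arm,
% broadly consistent with the "roughly 400 control arm deaths" above.
```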
FINDINGS: 2962 men were randomly assigned to four groups between Oct 5, 2005, and March 31, 2013. Median age was 65 years (IQR 60-71). 1817 (61%) men had M+ disease, 448 (15%) had N+/X M0, and 697 (24%) had N0M0. 165 (6%) men were previously treated with local therapy, and median prostate-specific antigen was 65 ng/mL (IQR 23-184). Median follow-up was 43 months (IQR 30-60). There were 415 deaths in the control group (347 [84%] prostate cancer). Median overall survival was 71 months (IQR 32 to not reached) for SOC-only, not reached (32 to not reached) for SOC + ZA (HR 0·94, 95% CI 0·79-1·11; p=0·450), 81 months (41 to not reached) for SOC + Doc (0·78, 0·66-0·93; p=0·006), and 76 months (39 to not reached) for SOC + ZA + Doc (0·82, 0·69-0·97; p=0·022). There was no evidence of heterogeneity in treatment effect (for any of the treatments) across prespecified subsets. Grade 3-5 adverse events were reported for 399 (32%) patients receiving SOC, 197 (32%) receiving SOC + ZA, 288 (52%) receiving SOC + Doc, and 269 (52%) receiving SOC + ZA + Doc.
INTERPRETATION: Zoledronic acid showed no evidence of survival improvement and should not be part of standard of care for this population. Docetaxel chemotherapy, given at the time of long-term hormone therapy initiation, showed evidence of improved survival accompanied by an increase in adverse events. Docetaxel treatment should become part of standard of care for adequately fit men commencing long-term hormone therapy.
FUNDING: Cancer Research UK, Medical Research Council, Novartis, Sanofi-Aventis, Pfizer, Janssen, Astellas, NIHR Clinical Research Network, Swiss Group for Clinical Cancer Research.
Abstract:
The integration of an ever-growing proportion of large-scale distributed renewable generation has increased the probability of maloperation of traditional RoCoF and vector shift relays. With reduced inertia due to non-synchronous penetration in a power grid, system-wide disturbances have forced the utility industry to design advanced protection schemes to prevent system degradation and avoid cascading outages leading to widespread blackouts. This paper explores a novel adaptive nonlinear approach to islanding detection, based on wide-area phase angle measurements. This is challenging, since the voltage phase angles from different locations exhibit not only strongly nonlinear but also time-varying characteristics. An adaptive nonlinear technique, moving window kernel principal component analysis, is proposed to model the time-varying and nonlinear trends in the voltage phase angle data. The effectiveness of the technique is exemplified using both DIgSILENT simulated cases and real test cases recorded from the Great Britain and Ireland power systems by the OpenPMU project.
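A compact sketch of the moving window kernel PCA idea is given below; it is illustrative only (kernel width, window length, component count and alarm threshold are all assumptions, and the paper's actual detector is not reproduced). A kernel PCA model is refitted on a sliding window of phase angle vectors, and a reconstruction-error statistic flags abrupt departures such as an islanding event.

```python
import numpy as np

def rbf(A, B, gamma=2.0):
    # Radial basis function kernel matrix between row sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_spe(window, x, n_comp=3, gamma=2.0):
    """Squared prediction error of x under kernel PCA fitted on `window`."""
    n = len(window)
    K = rbf(window, window, gamma)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one        # centred Gram matrix
    vals, vecs = np.linalg.eigh(Kc)                   # ascending eigenvalues
    vals, vecs = vals[-n_comp:], vecs[:, -n_comp:]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # normalised loadings
    k = rbf(x[None, :], window, gamma).ravel()
    kc = k - k.mean() - K.mean(axis=0) + K.mean()     # centre the new point
    scores = alphas.T @ kc
    ktt = 1.0 - 2.0 * k.mean() + K.mean()             # centred k(x, x)
    return float(ktt - scores @ scores)

rng = np.random.default_rng(1)
angles = np.cumsum(rng.normal(0, 0.01, (300, 4)), axis=0)  # slow ambient drift
angles[250:] += 0.8                                        # islanding-like step
w = 100
spe = [kpca_spe(angles[t - w:t], angles[t]) for t in range(w, 300)]
alarm = w + int(np.argmax(np.array(spe) > 5 * np.median(spe[:100])))
print("first alarm at t =", alarm)
```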
Abstract:
Institutions involved in the provision of tertiary education across Europe are feeling the pinch. European universities, and other higher education (HE) institutions, must operate in a climate where the pressure of government spending cuts (Garben, 2012) is in stark juxtaposition to the EU’s strategy to drive forward and maintain growth in student numbers in the sector (Eurostat, 2015).
In order to remain competitive, universities and HE institutions are making ever-greater use of electronic assessment (E-Assessment) systems (Chatzigavriil et al., 2015; Ferrell, 2012). These systems are attractive primarily because they offer a cost-effective and scalable approach to assessment. In addition to scalability, they also offer reliability, consistency and impartiality; furthermore, from the perspective of a student, they are popular because they can offer instant feedback (Walet, 2012).
There are disadvantages, though.
First, feedback is often returned to a student immediately on completion of their assessment. While it is possible to disable the instant feedback option (as is often the case during an end-of-semester exam period, when assessment scores must be ratified before release), this tends to be a global ‘all on’ or ‘all off’ configuration option, controlled centrally rather than configurable on a per-assessment basis.
If a formative in-term assessment is to be taken by multiple groups of students, each at different times, this restriction means that answers to each question will be disclosed to the first group of students undertaking the assessment. As soon as the answers are released “into the wild”, the academic integrity of the assessment is lost for subsequent student groups.
Second, the style of feedback provided to a student for each question is often limited to a simple ‘correct’ or ‘incorrect’ indicator. While this type of feedback has its place, it often does not provide a student with enough insight to improve their understanding of a topic that they did not answer correctly.
Most E-Assessment systems boast a wide range of question types including Multiple Choice, Multiple Response, Free Text Entry/Text Matching and Numerical questions. The design of these types of questions is often quite restrictive and formulaic, which has a knock-on effect on the quality of feedback that can be provided in each case.
Multiple Choice Questions (MCQs) are most prevalent as they are the most prescriptive and therefore the most straightforward to mark consistently. They are also the question type most amenable to feedback, since meaningful, relevant feedback can easily be attached to each possible option a student might choose.
Text matching questions tend to be more problematic due to their free-text-entry nature. Common misspellings or case-sensitivity errors can often be accounted for by the software, but such checks are by no means foolproof: it is very difficult to predict in advance the full range of variations on an answer that a manual marker of a paper-based equivalent of the same question would consider worthy of marks.
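A hedged sketch of this kind of tolerant matching, with illustrative normalisation and similarity threshold:

```python
from difflib import SequenceMatcher

def matches(candidate, accepted, threshold=0.9):
    """True if candidate is 'close enough' to any accepted answer."""
    c = " ".join(candidate.lower().split())        # fold case and whitespace
    return any(
        SequenceMatcher(None, c, " ".join(a.lower().split())).ratio() >= threshold
        for a in accepted
    )

print(matches("Photosynthesis", ["photosynthesis"]))   # True
print(matches("photosinthesis", ["photosynthesis"]))   # True (common typo)
print(matches("respiration", ["photosynthesis"]))      # False
```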
Numerical questions are similarly restricted. An answer can be checked for accuracy, or for whether it falls within a certain range of the correct answer, but unless it is a special-purpose mathematical E-Assessment system, the system is unlikely to have computational capability and so cannot, for example, award the “method marks” commonly given in paper-based marking.
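A minimal sketch of the range-based numerical checking described above, with illustrative tolerances:

```python
def check_numeric(answer, correct, abs_tol=0.0, rel_tol=0.01):
    """Accept answers within abs_tol, or within rel_tol of the correct value."""
    return abs(answer - correct) <= max(abs_tol, rel_tol * abs(correct))

print(check_numeric(9.79, 9.81))    # True: within 1% of the correct value
print(check_numeric(9.2, 9.81))     # False
```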
From a pedagogical perspective, the importance of providing useful formative feedback to students, at a point in their learning when they can benefit from it and put it to use, cannot be overstated (Grieve et al., 2015; Ferrell, 2012).
In this work, we propose a number of software-based solutions that overcome the limitations and inflexibilities of existing E-Assessment systems.