985 results for adaptive cost
Abstract:
The coefficients of an echo canceller with a near-end section and a far-end section are usually updated with the same updating scheme, such as the LMS algorithm. A novel scheme is proposed for echo cancellation that is based on the minimisation of two different cost functions, i.e. one for the near-end section and a different one for the far-end section. The approach considered leads to a substantial improvement in performance over the LMS algorithm when it is applied to both sections of the echo canceller. The convergence properties of the algorithm are derived. The proposed scheme is also shown to be robust to noise variations. Simulation results confirm the superior performance of the new algorithm.
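For illustration only, a minimal sketch of section-specific updates is given below. The abstract does not name the two cost functions, so the far-end section is assumed to use the LMS rule (mean-square cost) and the near-end section the least-mean-fourth rule; the helper name update_two_cost_canceller is hypothetical.

    import numpy as np

    def update_two_cost_canceller(w_near, w_far, x_near, x_far, d, mu_near, mu_far):
        # Shared residual echo after both sections have filtered their inputs.
        e = d - (w_near @ x_near + w_far @ x_far)
        # Far-end section: LMS step, i.e. a stochastic-gradient step on e^2.
        w_far = w_far + mu_far * e * x_far
        # Near-end section: least-mean-fourth step, i.e. a stochastic-gradient
        # step on e^4, used here purely as an example of a different cost.
        w_near = w_near + mu_near * (e ** 3) * x_near
        return w_near, w_far, e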
Abstract:
This paper describes the ParaPhrase project, a new 3-year targeted research project funded under EU Framework 7 Objective 3.4 (Computer Systems), starting in October 2011. ParaPhrase aims to follow a new approach to introducing parallelism using advanced refactoring techniques coupled with high-level parallel design patterns. The refactoring approach will use these design patterns to restructure programs defined as networks of software components into other forms that are more suited to parallel execution. The programmer will be aided by high-level cost information that will be integrated into the refactoring tools. The implementation of these patterns will then use a well-understood algorithmic skeleton approach to achieve good parallelism. A key ParaPhrase design goal is that parallel components are intended to match heterogeneous architectures, defined in terms of CPU/GPU combinations, for example. In order to achieve this, the ParaPhrase approach will map components at link time to the available hardware, and will then re-map them during program execution, taking account of multiple applications, changes in hardware resource availability, the desire to reduce communication costs etc. In this way, we aim to develop a new approach to programming that will be able to produce software that can adapt to dynamic changes in the system environment. Moreover, by using a strong component basis for parallelism, we can achieve potentially significant gains in terms of reducing sharing at a high level of abstraction, and so in reducing or even eliminating the costs that are usually associated with cache management, locking, and synchronisation. © 2013 Springer-Verlag Berlin Heidelberg.
Abstract:
In this paper, we present a hybrid mixed cost-function adaptive initialization algorithm for the time domain equalizer in a discrete multitone (DMT)-based asymmetric digital subscriber loop. Using our approach, a higher convergence rate than that of the commonly used least-mean square algorithm is obtained, whilst attaining bit rates close to the optimum maximum shortening SNR and the upper bound SNR. Moreover, our proposed method outperforms the minimum mean-squared error design for a range of TEQ filter lengths.
Abstract:
Institutions involved in the provision of tertiary education across Europe are feeling the pinch. European universities, and other higher education (HE) institutions, must operate in a climate where the pressure of government spending cuts (Garben, 2012) is in stark juxtaposition to the EU’s strategy to drive forward and maintain a growth of student numbers in the sector (eurostat, 2015).
In order to remain competitive, universities and HE institutions are making ever-greater use of electronic assessment (E-Assessment) systems (Chatzigavriil et al., 2015; Ferrell, 2012). These systems are attractive primarily because they offer a cost-effective and scalable approach to assessment. In addition to scalability, they also offer reliability, consistency and impartiality; furthermore, from the perspective of a student they are popular above all because they can offer instant feedback (Walet, 2012).
There are disadvantages, though.
First, feedback is often returned to a student immediately on completion of their assessment. While it is possible to disable the instant feedback option (this is often done during an end-of-semester exam period, when assessment scores must be ratified before release), this option tends to be a global ‘all on’ or ‘all off’ configuration, controlled centrally rather than configurable on a per-assessment basis.
If a formative in-term assessment is to be taken by multiple groups of students, each at different times, this restriction means that answers to each question will be disclosed to the first group of students undertaking the assessment. As soon as the answers are released “into the wild”, the academic integrity of the assessment is lost for subsequent student groups.
Second, the style of feedback provided to a student for each question is often limited to a simple ‘correct’ or ‘incorrect’ indicator. While this type of feedback has its place, it often does not provide a student with enough insight to improve their understanding of a topic that they did not answer correctly.
Most E-Assessment systems boast a wide range of question types including Multiple Choice, Multiple Response, Free Text Entry/Text Matching and Numerical questions. The design of these types of questions is often quite restrictive and formulaic, which has a knock-on effect on the quality of feedback that can be provided in each case.
Multiple Choice Questions (MCQs) are most prevalent as they are the most prescriptive and therefore the most straightforward to mark consistently. They are also the most amenable question type, allowing easy provision of meaningful, relevant feedback for each possible outcome chosen.
Text matching questions tend to be more problematic due to their free-text-entry nature. Common misspellings or case-sensitivity errors can often be accounted for by the software, but such handling is by no means foolproof, as it is very difficult to predict in advance the range of possible variations on an answer that would be considered worthy of marks by a manual marker of a paper-based equivalent of the same question.
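As a rough sketch of the kind of handling described above (the helper name mark_text_answer and the idea of a tutor-supplied variant list are assumptions, not features of any particular system):

    import difflib

    def mark_text_answer(answer, accepted_variants, threshold=0.9):
        # Normalise case and surrounding whitespace, then accept either an exact
        # match against a tutor-supplied variant or a close fuzzy match, which
        # absorbs some common misspellings. The variant list itself still has to
        # be predicted in advance, which is precisely the difficulty noted above.
        norm = answer.strip().lower()
        for variant in accepted_variants:
            v = variant.strip().lower()
            if norm == v or difflib.SequenceMatcher(None, norm, v).ratio() >= threshold:
                return True
        return False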
Numerical questions are similarly restricted. An answer can be checked for accuracy, or for whether it is within a certain range of the correct answer, but unless it is a special-purpose mathematical E-Assessment system, the system is unlikely to have computational capability and so cannot, for example, account for “method marks”, which are commonly awarded in paper-based marking.
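A minimal sketch of the tolerance check such a system can realistically perform (mark_numerical_answer and its parameters are hypothetical):

    def mark_numerical_answer(answer, correct, abs_tol=0.0, rel_tol=0.0):
        # Accept the answer if it is exact or lies within an absolute or relative
        # tolerance of the correct value. Without computational capability this is
        # about the limit of automatic numerical marking; partial "method marks"
        # remain out of reach.
        return abs(answer - correct) <= max(abs_tol, rel_tol * abs(correct))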
From a pedagogical perspective, the importance of providing useful formative feedback to students at a point in their learning when they can benefit from it and put it to use must not be understated (Grieve et al., 2015; Ferrell, 2012).
In this work, we propose a number of software-based solutions that overcome the limitations and inflexibility of existing E-Assessment systems.
Abstract:
Distributed Embedded Systems (DES) are nowadays widespread across many areas, from industrial automation, automobiles and aircraft to energy distribution and environmental protection. These systems are essentially characterised by the distributed integration of autonomous but cooperating embedded applications, exploiting potential advantages in terms of modularity, ease of maintenance, installation costs and fault tolerance, among others. However, the operational environment in which these systems are deployed can impose strict timing constraints, requiring the underlying communication system to transmit messages with temporal guarantees. Moreover, DES exhibit growing complexity, since they integrate increasingly heterogeneous subsystems, both in the traffic they generate and in their timing requirements. In particular, these subsystems operate sporadically, i.e. they undergo operational changes in response to external stimuli. They also reconfigure themselves dynamically as their requirements are updated, and they have to deal with a variable number of requests from other subsystems. The level of resource utilisation may therefore vary, and static allocation policies become very inefficient. Consequently, a communication system is needed that can effectively support dynamic reconfiguration and adaptation. Switched Ethernet technology has emerged as a solid solution for providing real-time communication in DES, as demonstrated by the number of real-time protocols developed over the last decade. However, none of the existing protocols combines the features needed to provide efficient bandwidth utilisation while meeting the requirements imposed by DES, namely robust traffic control and policing together with support for dynamic reconfiguration and adaptation, without compromising real-time guarantees. This dissertation defends the thesis that, by enhancing Ethernet switches with reconfiguration and traffic isolation mechanisms, it is possible to support critical real-time applications that adapt to the environment in which they operate. In particular, it is shown that component-based design techniques, supported by hierarchical scheduling of traffic servers, can be integrated into Ethernet switches to achieve the desired properties. As supporting work, a solution is also provided for instantiating a reconfigurable hierarchy of traffic servers inside the switch, together with the analysis appropriate to the scheduling model. The latter provides an upper bound on the response time that packets may experience inside the traffic servers, based solely on knowledge of a given server and of the current hierarchy, i.e. without knowledge of the specifics of the traffic inside the other servers. Finally, within the HaRTES project, a prototype Ethernet switch was built, based on the Flexible Time-Triggered paradigm, which allows a flexible combination of a synchronous phase for the traffic controlled by the switch and an asynchronous phase that implements the hierarchical server structure described above.
Furthermore, the various practical experiments carried out made it possible to validate the desired properties and, consequently, the thesis underlying this dissertation.
Abstract:
The proliferation of wireless sensor networks across a large spectrum of applications has been spurred by rapid advances in MEMS (micro-electro-mechanical systems) based sensor technology, coupled with low-power, low-cost digital signal processors and radio frequency circuits. A sensor network is composed of thousands of low-cost, portable devices with substantial sensing, computing and wireless communication capabilities. This large collection of tiny sensors can form a robust distributed system for data computing and communication, enabling automated information gathering and distributed sensing. The main attractive feature is that such a sensor network can be deployed in remote areas. Since each sensor node is battery powered, all the sensor nodes should collaborate to form a fault-tolerant network so as to provide efficient utilization of precious network resources such as the wireless channel, memory and battery capacity. The most crucial constraint is energy consumption, which has become the prime challenge in the design of long-lived sensor nodes.
Abstract:
Short-term load forecasting is one of the key inputs to optimizing the management of a power system. Almost 60-65% of the revenue expenditure of a distribution company goes to power purchase, and the cost of power depends on its source. Hence any optimization strategy involves optimizing the scheduling of power from the various sources. As the scheduling involves many technical and commercial considerations and constraints, the efficiency of scheduling depends on the accuracy of the load forecast. Load forecasting is a topic much visited in the research world, and a number of papers using different techniques have already been presented. The accuracy of the forecast for the purpose of merit-order dispatch decisions depends on the extent of the permissible variation in generation limits. For a system with a low load factor, the peak and the off-peak trough are prominent, and the forecast should identify these points with greater accuracy rather than merely minimizing the error in the energy content. In this paper an attempt is made to apply an Artificial Neural Network (ANN) with a supervised learning based approach to short-term load forecasting for a power system with a comparatively low load factor. Such power systems are usual in tropical areas with a concentrated rainy season for a considerable period of the year.
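The abstract does not specify the network architecture or input features, so the following is only a sketch under assumed choices: a small feed-forward regressor trained on lagged-load and calendar features with scikit-learn.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def train_load_forecaster(load_history, hours, day_types):
        # Assumed feature set: previous-hour load, same-hour load on the previous
        # day, hour of day and day type; the paper's actual inputs and network
        # size are not given in the abstract.
        X, y = [], []
        for t in range(24, len(load_history)):
            X.append([load_history[t - 1], load_history[t - 24], hours[t], day_types[t]])
            y.append(load_history[t])
        model = MLPRegressor(hidden_layer_sizes=(24,), max_iter=2000, random_state=0)
        model.fit(np.array(X), np.array(y))  # supervised learning on the historical load
        return model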
Abstract:
Facing the twin threats of climate change and water crises, poor communities now encounter ever more severe challenges in ensuring agricultural productivity and food security. Communities hence have to manage these challenges by adopting a comprehensive approach that not only enhances water resource management, but also adapts agricultural activities to climate variability. Implemented by the Global Environment Facility’s Small Grants Programme, the Community Water Initiative (CWI) has adopted a distinctive approach to support demand-driven, innovative, low-cost and community-based water resource management for food security. Experience from CWI shows that a comprehensive, locally adapted approach that integrates water resource management, poverty reduction, climate adaptation and community empowerment provides a good model for sustainable development in poor rural areas.
Abstract:
Planning a project with proper consideration of all necessary factors, and managing a project to ensure its successful implementation, face many challenges. The initial stage of planning a project for bidding is costly, time consuming and usually yields poor accuracy in cost and effort predictions. On the other hand, detailed information on previous projects may be buried in piles of archived documents, making it increasingly difficult to learn from previous experience. Project portfolios have been brought into this field with the aim of improving information sharing and management among different projects. However, the amount of information that can be shared is still limited to generic information. In this paper, we report a recently developed software system, COBRA, which automatically generates a project plan with effort estimation of time and cost based on data collected from previously completed projects. To maximise data sharing and management among different projects, we propose a method of using product based planning from the PRINCE2 methodology. (Automated Project Information Sharing and Management System - COBRA) Keywords: project management, product based planning, best practice, PRINCE2
Abstract:
The shallow water equations are solved using a mesh of polygons on the sphere, which adapts infrequently to the predicted future solution. Infrequent mesh adaptation reduces the cost of adaptation and load-balancing and will thus allow for more accurate mapping on adaptation. We simulate the growth of a barotropically unstable jet adapting the mesh every 12 h. Using an adaptation criterion based largely on the gradient of the vorticity leads to a mesh with around 20 per cent of the cells of a uniform mesh that gives equivalent results. This is a similar proportion to previous studies of the same test case with mesh adaptation every 1–20 min. The prediction of the mesh density involves solving the shallow water equations on a coarse mesh in advance of the locally refined mesh in order to estimate where features requiring higher resolution will grow, decay or move to. The adaptation criterion consists of two parts: that resolved on the coarse mesh, and that which is not resolved and so is passively advected on the coarse mesh. This combination leads to a balance between resolving features controlled by the large-scale dynamics and maintaining fine-scale features.
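As a toy illustration of a gradient-of-vorticity criterion (on a structured grid rather than the paper's polygonal mesh on the sphere, and without the resolved/passively advected split described above):

    import numpy as np

    def flag_cells_for_refinement(vorticity, dx, dy, fraction=0.2):
        # Compute the magnitude of the vorticity gradient and flag the cells whose
        # indicator value lies in the top `fraction` of the field, mimicking a mesh
        # that concentrates roughly 20 per cent of the cells where they matter.
        dzeta_dy, dzeta_dx = np.gradient(vorticity, dy, dx)
        indicator = np.hypot(dzeta_dx, dzeta_dy)
        threshold = np.quantile(indicator, 1.0 - fraction)
        return indicator >= threshold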
Abstract:
This letter proposes a subspace-based blind adaptive channel estimation algorithm for dual-rate quasi-synchronous DS/CDMA systems, which can operate in the low-rate (LR) or high-rate (HR) mode. Simulation results show that the proposed blind adaptive algorithm in the LR mode performs better than in the HR mode, at the cost of increased computational complexity.
Abstract:
Our digital universe is rapidly expanding: more and more daily activities are digitally recorded, data arrives in streams, needs to be analyzed in real time and may evolve over time. In the last decade many adaptive learning algorithms and prediction systems, which can automatically update themselves with new incoming data, have been developed. The majority of those algorithms focus on improving predictive performance and assume that a model update is always desired, as soon as possible and as frequently as possible. In this study we consider a potential model update as an investment decision which, as in the financial markets, should be taken only if a certain return on investment is expected. We introduce and motivate a new research problem for data streams: cost-sensitive adaptation. We propose a reference framework for analyzing adaptation strategies in terms of costs and benefits. Our framework allows us to characterize and decompose the costs of model updates, and to assess and interpret the gains in performance due to model adaptation for a given learning algorithm on a given prediction task. Our proof-of-concept experiment demonstrates how the framework can aid in analyzing and managing adaptation decisions in the chemical industry.
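A minimal sketch of the investment-style decision the framework motivates; the quantities and the helper should_update_model are placeholders, since the paper defines how costs and benefits are decomposed and estimated for a given task.

    def should_update_model(expected_error_reduction, value_per_unit_error,
                            update_cost, min_roi=1.0):
        # Treat a model update as an investment: estimate the benefit of the
        # expected performance gain, compare it with the cost of retraining and
        # redeployment, and update only if the return clears a chosen threshold.
        expected_benefit = expected_error_reduction * value_per_unit_error
        return expected_benefit >= min_roi * update_cost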
Abstract:
This paper describes a fast and reliable method for redistributing a computational mesh in three dimensions which can generate a complex three-dimensional mesh without any problems due to mesh tangling. The method relies on a three-dimensional implementation of the parabolic Monge–Ampère (PMA) technique for finding an optimally transported mesh. The method for implementing PMA is described in detail and applied to both static and dynamic mesh redistribution problems, studying both the convergence and the computational cost of the algorithm. The algorithm is applied to a series of problems of increasing complexity. In particular, very regular meshes are generated to resolve real meteorological features (derived from a weather forecasting model covering the UK area) in grids with over 2×10⁷ degrees of freedom. The PMA method computes these grids in times commensurate with those required for operational weather forecasting.
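For context, the optimal-transport mesh condition that the parabolic Monge–Ampère approach relaxes towards can be written (in a common form; the paper's particular scaling and smoothing are not given in the abstract) as:

    m(\nabla\phi)\,\det\left(D^{2}\phi\right) = \theta, \qquad \mathbf{x} = \nabla\phi,

where \phi is a convex mesh potential on the computational domain, m is the monitor function that concentrates resolution, and \theta is a normalisation constant; PMA marches a parabolic relaxation of this equation in pseudo-time until a steady, optimally transported mesh is reached.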
Abstract:
We present an efficient numerical methodology for the 3D computation of incompressible multi-phase flows described by conservative phase-field models. We focus here on the case of density-matched fluids with different viscosity (Model H). The numerical method employs adaptive mesh refinement (AMR) in concert with an efficient semi-implicit time discretization strategy and a linear, multi-level multigrid to relax high-order stability constraints and to capture the flow's disparate scales at optimal cost. Only five linear solvers are needed per time-step. Moreover, all the adaptive methodology is constructed from scratch to allow a systematic investigation of the key aspects of AMR in a conservative, phase-field setting. We validate the method and demonstrate its capabilities and efficacy with important examples of drop deformation, Kelvin-Helmholtz instability, and flow-induced drop coalescence. © 2010 Elsevier Inc. All rights reserved.