994 results for Adaptive reuse
Abstract:
This paper proposes a novel approach to video deblocking that performs perceptually adaptive bilateral filtering by considering color, intensity, and motion features in a holistic manner. The method is based on the bilateral filter, an effective smoothing filter that preserves edges. The bilateral filter parameters are adaptive, so the filter avoids over-blurring texture regions while eliminating blocking artefacts in smooth regions and areas of slow-motion content. This is achieved by using a saliency map to control the strength of the filter at each point in the image according to its perceptual importance. Experimental results demonstrate that the proposed algorithm is effective in deblocking highly compressed video sequences while avoiding over-blurring of edges and textures in salient regions of the image.
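The core idea, filter strength modulated per pixel by a saliency map, can be illustrated with a minimal sketch that blends a strong and a weak bilateral filter according to saliency; the sigma values and the linear blending rule below are illustrative assumptions, not the parameters or the exact formulation used in the paper.

```python
import cv2
import numpy as np

def saliency_adaptive_deblock(frame, saliency):
    """Blend a strong and a weak bilateral filter per pixel.

    frame    : uint8 BGR image (one decoded video frame).
    saliency : float32 map in [0, 1], 1 = perceptually important.
    The sigma values and the blending rule are illustrative choices,
    not the parameters used in the paper.
    """
    # Strong smoothing removes blocking artefacts in flat, non-salient regions.
    strong = cv2.bilateralFilter(frame, d=9, sigmaColor=75, sigmaSpace=75)
    # Weak smoothing preserves texture and edges in salient regions.
    weak = cv2.bilateralFilter(frame, d=5, sigmaColor=15, sigmaSpace=15)
    w = saliency[..., None]  # broadcast the weight over colour channels
    out = w * weak.astype(np.float32) + (1.0 - w) * strong.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```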
Abstract:
In July 2010, China announced the “National Plan for Medium and Long-term Education Reform and Development (2010-2020)” (PRC 2010). The Plan calls for an education system that promotes an integrated development which harnesses everyone’s talent; combines learning and thinking; unifies knowledge and practice; allows teachers to teach according to individuals’ needs; and reforms education quality evaluation and personnel evaluation systems, focusing on performance including character, knowledge, ability and other factors. This paper discusses the design and implementation of a Professional Learning Program (PLP) undertaken by 432 primary, middle and high school teachers in China. The aim of this initiative was to develop adaptive expertise in using technology that facilitated innovative science and technology teaching and learning as envisaged by the Chinese Ministry of Education’s (2010-2020) education reforms. Key principles derived from the literature on professional learning and scaffolding of learning informed the design of the PLP. The analysis of data revealed that the participants had made substantial progress towards the development of adaptive expertise. This was manifested not only in advances in the participants’ repertoires of Subject Matter Knowledge and Pedagogical Content Knowledge but also in changes to their levels of confidence and identities as teachers. It was found that over time the participants had coalesced into a professional learning community that readily engaged in the sharing, peer review, reuse and adaptation, and collaborative design of innovative science and technology learning and assessment activities.
Abstract:
This paper explores the use of subarrays as array elements. Benefits of such a concept include improved gain in any direction without significantly increasing the overall size of the array, and enhanced pattern control. The architecture of an array of subarrays is discussed via a systems approach. Individual system designs are explored in further detail, and proof of principle is illustrated through a manufactured example.
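The overall radiation pattern of an array of subarrays follows from pattern multiplication: the subarray pattern times the array factor of the subarray centres. The sketch below illustrates this for an assumed uniform linear geometry (4-element subarrays at half-wavelength spacing, 8 subarrays used as outer elements); the geometry and element counts are illustrative, not taken from the paper.

```python
import numpy as np

def uniform_array_factor(n_elems, spacing_wl, theta, steer_theta=0.0):
    """Array factor magnitude of an n-element uniform linear array.

    spacing_wl : element spacing in wavelengths
    theta      : observation angles in radians (broadside = 0)
    """
    k = 2 * np.pi  # wavenumber expressed per wavelength
    n = np.arange(n_elems)[:, None]
    phase = k * spacing_wl * n * (np.sin(theta) - np.sin(steer_theta))
    return np.abs(np.exp(1j * phase).sum(axis=0))

theta = np.linspace(-np.pi / 2, np.pi / 2, 721)
# A 4-element subarray, and 8 such subarrays used as "elements" of an outer array.
sub = uniform_array_factor(4, 0.5, theta)        # subarray pattern
outer = uniform_array_factor(8, 4 * 0.5, theta)  # subarray centres 2 wavelengths apart
total = sub * outer                              # pattern multiplication
```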
Abstract:
The mean shift tracker has achieved great success in visual object tracking owing to its efficiency and nonparametric nature. However, it is still difficult for the tracker to handle scale changes of the object. In this paper, we combine a scale-adaptive approach with the mean shift tracker. First, the target in the current frame is located by the mean shift tracker. Then, a feature point matching procedure is employed to obtain matched pairs of feature points between the target regions in the current frame and the previous frame; we employ the FAST-9 corner detector and the HOG descriptor for feature matching. Finally, from the acquired matched pairs, the affine transformation between the target regions in the two frames is solved to obtain the current scale of the target. Experimental results show that the proposed tracker gives satisfying results when the scale of the target changes, while maintaining good efficiency.
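The general recipe in the abstract (detect corners, match descriptors, fit an affine transform, read off its scale) can be sketched with OpenCV as below; ORB keypoints and descriptors stand in for the FAST-9 + HOG features used in the paper, purely to keep the snippet self-contained.

```python
import cv2
import numpy as np

def estimate_scale_change(prev_roi, curr_roi):
    """Estimate the target's scale change between two grayscale target regions.

    ORB features stand in for the paper's FAST-9 corners + HOG descriptors;
    the overall detect-match-fit pipeline is the same.
    """
    orb = cv2.ORB_create(nfeatures=500)
    kp1, des1 = orb.detectAndCompute(prev_roi, None)
    kp2, des2 = orb.detectAndCompute(curr_roi, None)
    if des1 is None or des2 is None:
        return 1.0  # fall back to an unchanged scale

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < 3:
        return 1.0

    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Fit a similarity transform (rotation + uniform scale + translation) robustly.
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return 1.0
    return float(np.sqrt(np.linalg.det(M[:, :2])))  # uniform scale factor
```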
Abstract:
Mobile ad-hoc networks (MANETs) have recently drawn significant research attention since they offer unique benefits and versatility with respect to bandwidth spatial reuse, intrinsic fault tolerance, and low-cost rapid deployment. This paper addresses the issue of delay-sensitive real-time data transport in this type of network, for which an effective QoS mechanism is required for the speedy transport of real-time data. QoS in MANETs is an open problem: various QoS measures are incorporated in the upper layers of the network, but only a few techniques address QoS in the MAC layer, and most existing MAC-layer QoS techniques target infrastructure-based wireless networks. The goal and the challenge is to achieve QoS delivery and priority access for real-time traffic in an ad-hoc wireless environment, while maintaining a democratic allocation of resources. We propose a MAC layer protocol called the "FCP based FAMA protocol", which allocates channel resources to the nodes that need them most, in a more democratic way, by examining the requirements, malicious behavior and genuineness of each request. We have simulated both FAMA and FCP based FAMA and tested them under various MANET conditions. Simulation results clearly show improved channel utilization and reduced delay in the latter case. Our new protocol outperforms other QoS-aware MAC layer protocols.
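The idea of granting channel access by weighing a node's need against the genuineness of its request can be sketched with a toy scoring function; the fields and the scoring formula below are invented for illustration and are not the FCP mechanism described in the paper.

```python
from dataclasses import dataclass

@dataclass
class ChannelRequest:
    node_id: str
    is_realtime: bool      # delay-sensitive traffic gets priority
    queue_delay_ms: float  # how long the flow has already waited
    trust: float           # 0..1, low for nodes flagged as misbehaving

def arbitrate(requests, slots):
    """Toy arbitration: grant `slots` channel accesses per round.

    Illustrates weighing need against genuineness only; the scoring
    rule is invented for this example and is not the paper's scheme.
    """
    def score(r):
        urgency = (2.0 if r.is_realtime else 1.0) * r.queue_delay_ms
        return urgency * r.trust   # distrusted nodes are deprioritised
    ranked = sorted(requests, key=score, reverse=True)
    return [r.node_id for r in ranked[:slots]]
```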
Abstract:
Singular Value Decomposition (SVD) is a key linear-algebra operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensional datasets that must be processed quickly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement through numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture follows a data-transfer scheme, is scalable, and is based on the interconnection of simple rotation units, which allows a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
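A rotation-based SVD of the kind that maps naturally onto interconnected rotation units can be sketched with a one-sided Jacobi iteration; this is a generic software illustration and assumes nothing about the paper's FPGA datapath or its adaptive rotation evaluator.

```python
import numpy as np

def one_sided_jacobi_svd(A, sweeps=30, tol=1e-12):
    """One-sided Jacobi SVD built entirely from plane rotations.

    Generic rotation-based scheme for illustration only; not the
    paper's hardware architecture.
    """
    A = A.astype(float).copy()
    n = A.shape[1]
    V = np.eye(n)
    for _ in range(sweeps):
        converged = True
        for p in range(n - 1):
            for q in range(p + 1, n):
                alpha = A[:, p] @ A[:, p]
                beta = A[:, q] @ A[:, q]
                gamma = A[:, p] @ A[:, q]
                if abs(gamma) > tol * np.sqrt(alpha * beta):
                    converged = False
                    zeta = (beta - alpha) / (2.0 * gamma)
                    sgn = np.sign(zeta) if zeta != 0 else 1.0
                    t = sgn / (abs(zeta) + np.sqrt(1.0 + zeta * zeta))
                    c = 1.0 / np.sqrt(1.0 + t * t)
                    s = c * t
                    R = np.array([[c, s], [-s, c]])
                    # Apply the plane rotation to columns p and q of A and V.
                    A[:, [p, q]] = A[:, [p, q]] @ R
                    V[:, [p, q]] = V[:, [p, q]] @ R
        if converged:
            break
    sigma = np.linalg.norm(A, axis=0)          # singular values (unsorted)
    U = A / np.where(sigma > 0, sigma, 1.0)    # left singular vectors
    return U, sigma, V
```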
Abstract:
Video compression techniques enable adaptive media streaming over heterogeneous links to end-devices. Scalable Video Coding (SVC) and Multiple Description Coding (MDC) are well-known video compression techniques with distinct characteristics in terms of bandwidth efficiency and resilience to packet loss. In this paper, we present Scalable Description Coding (SDC), a technique that balances the trade-off between bandwidth efficiency and error resilience without sacrificing user-perceived quality. Additionally, we propose a scheme that combines network coding with SDC to further improve error resilience. SDC yields upwards of 25% bandwidth savings over MDC. Moreover, our scheme sustains higher quality for longer durations even at high packet loss rates.
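The resilience benefit of coding across descriptions can be seen with a toy XOR parity example: with two descriptions and one coded repair packet, any two of the three packets suffice to recover both descriptions. This is only a minimal illustration of the coding idea, not the SDC or network-coding scheme proposed in the paper.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Two equal-length "descriptions" of the same frame (toy payloads).
d1 = b"description-one!"
d2 = b"description-two!"
parity = xor_bytes(d1, d2)          # a simple coded repair packet

# Any two of the three packets are enough to recover both descriptions:
recovered_d2 = xor_bytes(d1, parity)
assert recovered_d2 == d2
```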
Abstract:
This document is a survey of the research area of User Modeling (UM) for the specific field of Adaptive Learning. The aims of this document are: to define what a User Model is; to present existing and well-known User Models; to analyze the existing standards related to UM; and to compare existing systems. In the scientific area of User Modeling, numerous research projects and developed systems already seem to promise good results, but further experimentation and implementation are still necessary to draw conclusions about the utility of UM; that is, the experimentation and implementation of these systems are still too scarce to determine the utility of some of the referred applications. At present, Student Modeling research is moving towards making it possible to reuse a student model in different systems. Standards are increasingly relevant for this purpose, allowing systems to communicate and to share data, components and structures at the syntactic and semantic levels, even if most of them still only allow syntactic integration.
Abstract:
Self-adaptive Software (SaS) presents specific characteristics compared to traditional software, as it makes it possible for adaptations to be incorporated at runtime. These adaptations, when performed manually, normally become an onerous, error-prone activity. In this scenario, automated approaches have been proposed to support such adaptations; however, the development of SaS is not a trivial task. In parallel, reference architectures are reusable artifacts that aggregate knowledge of the architectures of software systems in specific domains. They have facilitated the development, standardization, and evolution of systems in those domains. In spite of their relevance, reference architectures that could support a more systematic development of SaS are not yet found in the SaS domain. In this context, the main contribution of this paper is a reference architecture based on reflection for SaS, named RA4SaS (Reference Architecture for SaS). Its main purpose is to support the development of SaS that adapts at runtime. To show the viability of this reference architecture, a case study is presented. The results indicate that RA4SaS shows good potential to contribute efficiently to the area of SaS.
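The reflective style of runtime adaptation that such a reference architecture targets can be illustrated with a minimal sketch in which an adaptation layer inspects and replaces a component by name at runtime; the classes and method names are hypothetical and do not reproduce RA4SaS.

```python
import importlib

class LoggingHandler:
    def handle(self, request):
        return f"logged: {request}"

class CachingHandler:
    def handle(self, request):
        return f"cached: {request}"

class AdaptiveSystem:
    """Minimal reflection-style adaptation loop (illustrative only)."""
    def __init__(self, component):
        self.component = component

    def introspect(self):
        # Reflection: inspect which component class is currently deployed.
        return type(self.component).__name__

    def adapt(self, module_name, class_name):
        # Locate the replacement class by name and swap it in at runtime.
        module = importlib.import_module(module_name)
        self.component = getattr(module, class_name)()

system = AdaptiveSystem(LoggingHandler())
print(system.introspect(), system.component.handle("req-1"))
system.adapt(__name__, "CachingHandler")   # runtime adaptation
print(system.introspect(), system.component.handle("req-2"))
```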
Abstract:
Modelling architectural information is particularly important because of the acknowledged crucial role of software architecture in raising the level of abstraction during development. In the MDE area, the level of abstraction of models has frequently been tied to low-level design concepts. However, model-driven techniques can be further exploited to model software artefacts that take into account the architecture of the system and its changes in response to variations in the environment. In this paper, we propose model-driven techniques and dynamic variability as concepts for modelling the dynamic fluctuation of the environment and its impact on the architecture. Using mappings from the models to the implementation, generative techniques allow the (semi-)automatic generation of artefacts, making the process more efficient and promoting software reuse. The automatic generation of configurations and reconfigurations from models provides the basis for safer execution. The architectural perspective offered by the models shifts focus away from implementation details to the whole view of the system and its runtime changes, promoting high-level analysis. © 2009 Springer Berlin Heidelberg.
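A minimal sketch of deriving a concrete configuration from a variability model under a changing environment is shown below; the model contents and the context fields are invented for illustration and do not reflect the authors' metamodels or generators.

```python
# Toy "model" of dynamic variability: which architectural variant to use
# under which environmental condition (illustrative contents only).
variability_model = {
    "bandwidth_low":  {"video_codec": "h264_baseline", "cache": "aggressive"},
    "bandwidth_high": {"video_codec": "h264_high",     "cache": "minimal"},
}

def generate_configuration(context: dict) -> dict:
    """Derive a concrete runtime configuration from the variability model."""
    mode = "bandwidth_low" if context["bandwidth_kbps"] < 1000 else "bandwidth_high"
    return {"mode": mode, **variability_model[mode]}

print(generate_configuration({"bandwidth_kbps": 450}))
```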
Abstract:
One of the major drawbacks for mobile nodes in wireless networks is power management. Our goal is to evaluate the performance of a power control scheme used to reduce network congestion, improve quality of service and avoid collisions in vehicular networks and road-safety applications. Power control (PC) is important for improving spatial reuse and increasing network capacity in mobile wireless communications. In this simulation we evaluated the performance of existing rate algorithms compared with the context-aware rate selection algorithm (ACARS), and examined how ACARS can be applied to road safety, network control and power management. Results show that ACARS is able to minimize the total transmit power in the presence of propagation processes and vehicle mobility, by adapting to fast-varying channel conditions using the path-loss exponent values appropriate for that environment, as given in the network simulation parameters. Our results show that ACARS is a very robust algorithm which performs well under the propagation effects to which every transmitted signal in mobile networks is exposed. © 2013 IEEE.
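How the path-loss exponent enters transmit-power adaptation can be illustrated with the standard log-distance path-loss model; the constants below are textbook-style assumptions and are not taken from the ACARS evaluation.

```python
import math

def min_tx_power_dbm(distance_m, pathloss_exp, rx_sensitivity_dbm=-85.0,
                     ref_loss_db=40.0, ref_dist_m=1.0, margin_db=5.0):
    """Minimum transmit power under a log-distance path-loss model.

    PL(d) = PL(d0) + 10 * n * log10(d / d0)
    Shows how the path-loss exponent n drives power adaptation; the
    constants are illustrative, not taken from the paper's simulation.
    """
    path_loss = ref_loss_db + 10.0 * pathloss_exp * math.log10(distance_m / ref_dist_m)
    return rx_sensitivity_dbm + path_loss + margin_db

# A vehicle 120 m away in an urban setting (n ~ 3.0) needs more power
# than one 40 m away in near-free-space conditions (n ~ 2.0).
print(min_tx_power_dbm(120, 3.0), min_tx_power_dbm(40, 2.0))
```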
Abstract:
A parallel method for the dynamic partitioning of unstructured meshes is outlined. The method includes diffusive load-balancing techniques and an iterative optimisation technique, known as relative gain optimisation, which both balances the workload and attempts to minimise the interprocessor communication overhead. It can also optionally include a multilevel strategy. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of a quality equivalent to or higher than static partitioners (which do not reuse the existing partition), and much more rapidly. Perhaps more importantly, the algorithm results in only a small fraction of the data migration incurred by the static partitioners.
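The diffusive part of the approach can be illustrated with a first-order diffusion iteration on a small processor graph; this sketch shows only the generic load-balancing idea and does not model the relative gain optimisation or the communication-cost term.

```python
import numpy as np

def diffusion_balance(load, adjacency, alpha=0.25, iters=50):
    """First-order diffusive load balancing on a processor graph.

    load      : work units currently held by each processor
    adjacency : symmetric 0/1 matrix of the processor communication graph
    Each iteration a processor sends alpha * (own load - neighbour load)
    to every lighter neighbour; total load is conserved.
    """
    load = load.astype(float).copy()
    for _ in range(iters):
        flow = alpha * (load[:, None] - load[None, :]) * adjacency
        load = load - flow.sum(axis=1)   # net outgoing flow of each processor
    return load

procs = np.array([40.0, 10.0, 10.0, 20.0])
ring = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
print(diffusion_balance(procs, ring))    # approaches 20 on every processor
```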
Abstract:
This chapter describes a parallel optimization technique that incorporates a distributed load-balancing algorithm and provides an extremely fast solution to the problem of load-balancing adaptive unstructured meshes. Moreover, a parallel graph contraction technique can be employed to enhance partition quality, and the resulting strategy outperforms or matches results from existing state-of-the-art static mesh partitioning algorithms. The strategy can also be applied to static partitioning problems. Dynamic procedures have been found to be much faster than static techniques, to provide partitions of similar or higher quality and, in comparison, to involve the migration of only a fraction of the data. The method employs a new iterative optimization technique that balances the workload and attempts to minimize the interprocessor communication overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of a quality equivalent to or higher than static partitioners (which do not reuse the existing partition), and much more quickly. The dynamic evolution of load has three major influences on possible partitioning techniques: cost, reuse, and parallelism. The unstructured mesh may be modified every few time-steps, so the load-balancing must have a low cost relative to that of the solution algorithm in between remeshing.
Abstract:
A parallel method for dynamic partitioning of unstructured meshes is described. The method employs a new iterative optimisation technique which both balances the workload and attempts to minimise the interprocessor communication overhead. Experiments on a series of adaptively refined meshes indicate that the algorithm provides partitions of a quality equivalent to or higher than static partitioners (which do not reuse the existing partition), and much more quickly. Perhaps more importantly, the algorithm results in only a small fraction of the data migration incurred by the static partitioners.
Abstract:
A comprehensive user model, built by monitoring a user's current use of applications, can be an excellent starting point for building adaptive user-centred applications. The BaranC framework monitors all user interaction with a digital device (e.g. a smartphone) and also collects all available context data (such as from sensors in the device itself, in a smart watch, or in smart appliances) in order to build a full model of the user's application behaviour. The model built from the collected data, called the UDI (User Digital Imprint), is further augmented by analysis services, for example a service that produces activity profiles from smartphone sensor data. The enhanced UDI model can then be the basis for building an appropriate adaptive application that is user-centred, as it is based on an individual user model. Because BaranC supports continuous user monitoring, an application can adapt dynamically in real time to the current context (e.g. time, location or activity). Furthermore, since BaranC continuously augments the user model with more monitored data, the user model changes over time, and the adaptive application can adapt gradually to changing user behaviour patterns. BaranC has been implemented as a service-oriented framework in which the collection of data for the UDI and all sharing of UDI data are kept strictly under the user's control. In addition, being service-oriented allows its monitoring and analysis services to be easily used (with the user's permission) by third parties in order to provide third-party adaptive assistant services. An example third-party service demonstrator, built on top of BaranC, proactively assists a user by dynamically predicting, based on the current context, which apps and contacts the user is likely to need. BaranC introduces an innovative user-controlled unified service model for monitoring and using personal digital activity data in order to provide adaptive user-centred applications. This aims to improve on the current situation, where the diversity of adaptive applications results in a proliferation of applications monitoring and using personal data, leading to a lack of clarity, a dispersal of data, and a diminution of user control.
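A toy stand-in for the UDI idea, aggregating monitored interaction events per context and predicting likely apps by frequency, is sketched below; the data structure and the prediction heuristic are illustrative assumptions, not BaranC's actual model or analysis services.

```python
from collections import Counter, defaultdict
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    app: str
    hour: int        # hour of day when the app was used
    location: str    # coarse context label, e.g. "home", "office"

class UserDigitalImprint:
    """Toy stand-in for a UDI-style user model (illustrative only)."""
    def __init__(self):
        self._by_context = defaultdict(Counter)

    def record(self, event: InteractionEvent):
        # Aggregate monitored interaction events per (hour, location) context.
        self._by_context[(event.hour, event.location)][event.app] += 1

    def predict_apps(self, hour: int, location: str, k: int = 3):
        # Predict the k apps most frequently used in this context.
        return [app for app, _ in self._by_context[(hour, location)].most_common(k)]

udi = UserDigitalImprint()
udi.record(InteractionEvent("maps", 8, "commute"))
udi.record(InteractionEvent("podcasts", 8, "commute"))
udi.record(InteractionEvent("maps", 8, "commute"))
print(udi.predict_apps(hour=8, location="commute"))   # ['maps', 'podcasts']
```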