927 results for adaptive systems
Abstract:
We propose a robust adaptive time synchronization and frequency offset estimation method for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems by applying electrical dispersion pre-compensation (pre-EDC) to the pilot symbol. This technique effectively eliminates the timing error due to fiber chromatic dispersion, thus significantly increasing the accuracy of the frequency offset estimation process and improving the overall system performance. In addition, a simple pilot symbol design is proposed for full-range frequency offset estimation. This pilot symbol can also carry useful data, effectively reducing the time-synchronization overhead by a factor of 2.
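The abstract does not give the estimator itself, but frequency offset estimation from a pilot symbol is commonly done with a repeated-half autocorrelation (Schmidl and Cox style). The sketch below is a minimal illustration of that classical estimator, not the paper's exact method; the pilot structure, sampling rate, and offset value are assumptions, and the pre-EDC step is assumed to have been applied to the pilot before transmission.

```python
import numpy as np

def estimate_cfo(rx_pilot, fs):
    """Estimate carrier frequency offset (Hz) from a pilot built as two
    identical halves, using the phase of their correlation."""
    N = len(rx_pilot) // 2
    first, second = rx_pilot[:N], rx_pilot[N:2 * N]
    # The correlation phase grows linearly with the offset over N / fs seconds.
    corr = np.vdot(first, second)            # sum(conj(first) * second)
    return np.angle(corr) * fs / (2 * np.pi * N)

# Illustrative check with a synthetic pilot and a hypothetical 100 kHz offset.
fs = 10e9                                    # assumed sampling rate
N = 512
half = np.exp(2j * np.pi * np.random.rand(N))
pilot = np.concatenate([half, half])         # two identical halves
rx = pilot * np.exp(2j * np.pi * 100e3 * np.arange(2 * N) / fs)
print(round(estimate_cfo(rx, fs)))           # ~100000
```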
Abstract:
ACM Computing Classification System (1998): K.3.1, K.3.2.
Abstract:
Agents inhabiting large-scale environments are faced with the problem of generating maps by which they can navigate. One solution to this problem is to use probabilistic roadmaps, which rely on selecting and connecting a set of points that describe the interconnectivity of free space. However, the time required to generate these maps can be prohibitive, and agents do not typically know the environment in advance. In this paper we show that the optimal combination of the different point selection methods used to create the map depends on the environment; no single point selection method dominates. This motivates a novel self-adaptive approach in which an agent combines several point selection methods. The success rate of our approach is comparable to the state of the art, while the generation cost is substantially reduced. Self-adaptation therefore enables a more efficient use of the agent's resources. Results are presented both for a set of archetypal scenarios and for large-scale virtual environments based in Second Life, representing real locations in London.
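One simple way to realize such a self-adaptive combination is to weight each point selection method by its recent success at producing roadmap nodes that actually connect. The sketch below is purely illustrative; the sampler names, the success criterion, and the smoothing constants are assumptions, not the paper's actual mechanism.

```python
import random

class AdaptiveSamplerMix:
    """Pick among point-selection methods with probability proportional
    to each method's observed success rate (with smoothing)."""

    def __init__(self, samplers):
        self.samplers = samplers                    # name -> callable returning a point
        self.success = {name: 1.0 for name in samplers}
        self.trials = {name: 2.0 for name in samplers}

    def sample(self):
        weights = [self.success[n] / self.trials[n] for n in self.samplers]
        name = random.choices(list(self.samplers), weights=weights)[0]
        return name, self.samplers[name]()

    def report(self, name, connected):
        # 'connected' means the point was added to the roadmap and linked
        # to at least one existing node.
        self.trials[name] += 1.0
        if connected:
            self.success[name] += 1.0

# Hypothetical usage with two toy samplers over a unit square.
mix = AdaptiveSamplerMix({
    "uniform": lambda: (random.random(), random.random()),
    "gaussian": lambda: (random.gauss(0.5, 0.1), random.gauss(0.5, 0.1)),
})
name, point = mix.sample()
mix.report(name, connected=True)
```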
Abstract:
Heterogeneous multi-core FPGAs contain different types of cores, which can improve efficiency when paired with an effective online task scheduler. However, it is not easy to find the right cores for tasks when there are multiple objectives or dozens of cores, and inappropriate scheduling may cause hot spots that decrease the reliability of the chip. Given that, our research builds a simulation platform to evaluate various scheduling algorithms on a variety of architectures. On this platform, we provide an online scheduler based on a multi-objective evolutionary algorithm (EA). Comparing the EA with current algorithms such as Predictive Dynamic Thermal Management (PDTM) and Adaptive Temperature Threshold Dynamic Thermal Management (ATDTM), we identify several drawbacks in previous work. First, current algorithms are overly dependent on manually set constant parameters. Second, those algorithms neglect optimization for heterogeneous architectures. Third, they use single-objective methods, or use a linear weighting method to convert a multi-objective optimization into a single-objective one. Unlike other algorithms, the EA is adaptive and does not require resetting parameters when workloads switch from one to another. The EA also improves performance on heterogeneous architectures, and an efficient Pareto front can be obtained for multiple objectives.
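The core idea that distinguishes the EA from linear weighting is Pareto dominance: candidate schedules are kept if no other candidate is at least as good on every objective and strictly better on one. The sketch below shows that dominance check over two hypothetical objectives (peak temperature and makespan); the candidate schedules and values are illustrative, not results from the platform described in the abstract.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep only non-dominated candidates.
    Each candidate is (assignment_label, (peak_temp, makespan))."""
    return [c for c in candidates
            if not any(dominates(other[1], c[1])
                       for other in candidates if other is not c)]

# Hypothetical task-to-core assignments: (label, (peak temperature C, makespan ms)).
candidates = [
    ("A", (78.0, 12.0)),
    ("B", (85.0, 9.0)),
    ("C", (90.0, 13.0)),   # dominated by A (cooler and faster)
]
print([c[0] for c in pareto_front(candidates)])   # ['A', 'B']
```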
Abstract:
Adaptability of distributed object-oriented enterprise frameworks in multimedia technology is critical for system evolution. Today, building adaptive services is a complex task due to the lack of adequate framework support in distributed computing systems. In this paper, we propose a Metalevel Component-Based Framework which uses distributed computing design patterns as components to develop an adaptable pattern-oriented framework for distributed computing applications. We describe our approach of combining a meta-architecture with a pattern-oriented framework, resulting in an adaptable framework which provides a mechanism to facilitate system evolution. This approach resolves the problem of dynamic adaptation in the framework, which is encountered in most distributed multimedia applications. The proposed pattern-oriented framework architecture is able to dynamically incorporate new design patterns that address issues in the domain of distributed computing, and these patterns can be woven together to shape the framework in the future. © 2011 Springer Science+Business Media B.V.
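The general mechanism behind a meta-level architecture is indirection: base-level objects delegate behaviour to a component that the meta-level can replace while the system runs. The toy sketch below illustrates only that general idea; the class and strategy names are hypothetical and do not reflect the framework's actual API or pattern catalogue.

```python
class Metalevel:
    """Base-level clients call dispatch(); the meta-level can swap the
    underlying pattern implementation at run time without touching clients."""

    def __init__(self, strategy):
        self._strategy = strategy

    def adapt(self, new_strategy):
        # Dynamic adaptation: weave in a different pattern implementation.
        self._strategy = new_strategy

    def dispatch(self, request):
        return self._strategy(request)

# Hypothetical strategies standing in for distributed-computing patterns.
unicast = lambda req: f"sent {req} to one replica"
broadcast = lambda req: f"sent {req} to all replicas"

service = Metalevel(unicast)
print(service.dispatch("frame-42"))
service.adapt(broadcast)            # adapt the running framework
print(service.dispatch("frame-43"))
```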
Abstract:
The main challenges of multimedia data retrieval lie in the effective mapping between low-level features and high-level concepts, and in individual users' subjective perceptions of multimedia content. The objective of this dissertation is to develop an integrated multimedia indexing and retrieval framework that bridges the gap between semantic concepts and low-level features. To achieve this goal, a set of core techniques has been developed, including image segmentation, content-based image retrieval, object tracking, video indexing, and video event detection. These core techniques are integrated in a systematic way to enable semantic search for images/videos, and can be tailored to solve problems in other multimedia-related domains. In image retrieval, two new methods of bridging the semantic gap are proposed: (1) for general content-based image retrieval, a stochastic mechanism is utilized to enable the long-term learning of high-level concepts from a set of training data, such as user access frequencies and access patterns of images; (2) in addition to whole-image retrieval, a novel multiple instance learning framework is proposed for object-based image retrieval, by which a user is allowed to more effectively search for images that contain multiple objects of interest. An enhanced image segmentation algorithm is developed to extract object information from images. This segmentation algorithm is further used in video indexing and retrieval, where a robust video shot/scene segmentation method is developed based on low-level visual feature comparison, object tracking, and audio analysis. Based on shot boundaries, a novel data mining framework is further proposed to detect events in soccer videos, fully utilizing the multi-modality features and object information obtained through video shot/scene detection. Another contribution of this dissertation is the potential of the above techniques to be tailored and applied to other multimedia applications. This is demonstrated by their utilization in traffic video surveillance applications. The enhanced image segmentation algorithm, coupled with an adaptive background learning algorithm, improves the performance of vehicle identification. A sophisticated object tracking algorithm is proposed to track individual vehicles, while the spatial and temporal relationships of vehicle objects are modeled by an abstract semantic model.
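To make the long-term learning idea concrete, the sketch below shows one plausible way access frequencies could be accumulated per concept and blended with a low-level feature similarity when re-ranking results. This is an illustration under stated assumptions (the blending weight, the scoring formula, and all identifiers are hypothetical), not the dissertation's stochastic mechanism.

```python
from collections import defaultdict

class AccessLearner:
    """Accumulate which images users access for a query concept and blend
    that long-term evidence with a low-level feature similarity score."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha                          # weight of long-term evidence (assumed)
        self.counts = defaultdict(lambda: defaultdict(int))

    def record_access(self, concept, image_id):
        self.counts[concept][image_id] += 1

    def score(self, concept, image_id, feature_similarity):
        total = sum(self.counts[concept].values()) or 1
        long_term = self.counts[concept][image_id] / total
        return (1 - self.alpha) * feature_similarity + self.alpha * long_term

learner = AccessLearner()
learner.record_access("sunset", "img_001")
learner.record_access("sunset", "img_001")
learner.record_access("sunset", "img_007")
print(learner.score("sunset", "img_001", feature_similarity=0.60))  # boosted by access history
```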
Abstract:
Optimization of adaptive traffic signal timing is one of the most complex problems in traffic control systems. This dissertation presents a new method that applies a parallel genetic algorithm (PGA) to optimize adaptive traffic signal control in the presence of transit signal priority (TSP). The method can optimize the phase plan, cycle length, and green splits at isolated intersections with consideration for the performance of both transit and general vehicles. Unlike a simple genetic algorithm (GA), the PGA can provide the better and faster solutions needed for real-time optimization of adaptive traffic signal control. An important component of the proposed method is a microscopic delay estimation model designed specifically to optimize adaptive traffic signals with TSP. Macroscopic delay models such as the Highway Capacity Manual (HCM) delay model are unable to accurately consider the effect of phase combination and phase sequence in delay calculations. In addition, because the number of phases and the phase sequence of an adaptive traffic signal may vary from cycle to cycle, the phase splits cannot be optimized when the phase sequence is also a decision variable. A "flex-phase" concept is introduced in the proposed microscopic delay estimation model to overcome these limitations. The performance of the PGA was first evaluated against the simple GA. The results show that the PGA achieved both faster convergence and lower delay under both under-saturated and over-saturated traffic conditions. A VISSIM simulation testbed was then developed to evaluate the performance of the proposed PGA-based adaptive traffic signal control with TSP. The simulation results show that the PGA-based optimizer for adaptive TSP outperformed fully actuated NEMA control in all test cases. The results also show that the PGA-based optimizer was able to produce TSP timing plans that benefit transit vehicles while minimizing the impact of TSP on general vehicles. The VISSIM testbed developed in this research provides a powerful tool to design and evaluate different TSP strategies under both actuated and adaptive signal control.
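As a rough illustration of the kind of optimization involved, the sketch below uses a plain GA to split the effective green time of a fixed cycle between two phases, scoring candidates with Webster's uniform-delay term as a crude stand-in for the dissertation's microscopic delay model. All traffic parameters are hypothetical, the cycle length is held fixed, TSP is ignored, and the parallel evaluation of fitness is omitted for brevity.

```python
import random

CYCLE = 90.0                       # fixed cycle length (s), assumed
LOST_TIME = 8.0                    # total lost time per cycle (s), assumed
FLOWS = [900.0, 400.0]             # demand per phase (veh/h), assumed
SAT_FLOW = 1800.0                  # saturation flow per phase (veh/h), assumed

def delay(greens):
    """Flow-weighted average uniform delay (Webster's first term)."""
    total, vehicles = 0.0, 0.0
    for g, v in zip(greens, FLOWS):
        lam = g / CYCLE
        x = min(v / (SAT_FLOW * lam), 0.98)        # degree of saturation, capped
        d = CYCLE * (1 - lam) ** 2 / (2 * (1 - lam * x))
        total += d * v
        vehicles += v
    return total / vehicles

def random_splits():
    g1 = random.uniform(10.0, CYCLE - LOST_TIME - 10.0)
    return [g1, CYCLE - LOST_TIME - g1]

def ga(pop_size=30, generations=40):
    pop = [random_splits() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=delay)
        survivors = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(survivors)):
            a, b = random.sample(survivors, 2)
            g1 = (a[0] + b[0]) / 2 + random.gauss(0.0, 2.0)   # crossover + mutation
            g1 = min(max(g1, 10.0), CYCLE - LOST_TIME - 10.0)
            children.append([g1, CYCLE - LOST_TIME - g1])
        pop = survivors + children
    return min(pop, key=delay)

print(ga())   # green splits (s); the heavier movement gets more green
```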
Abstract:
This dissertation develops a process improvement method for service operations based on the Theory of Constraints (TOC), a management philosophy that has been shown to be effective in manufacturing for decreasing work-in-process (WIP) and improving throughput. While TOC has enjoyed much attention and success in the manufacturing arena, its application to services in general has been limited. The contribution to industry and knowledge is a method for improving global performance measures based on TOC principles. The method proposed in this dissertation is tested using discrete-event simulation of the service-factory scenario of airline turnaround operations. To evaluate the method, a simulation model of aircraft turn operations of a U.S.-based carrier was built and validated using actual data from airline operations. The model was then adjusted to reflect an application of the Theory of Constraints for determining how to deploy the scarce resource of ramp workers. The results indicate that, given slight modifications to TOC terminology and the development of a method for constraint identification, the Theory of Constraints can be applied with success to services. Bottlenecks in services must be defined as those processes for which the process rate and the amount of work remaining are such that completing the process will not be possible without an increase in the process rate; the bottleneck ratio is used to determine to what degree a process is a constraint. Simulation results also suggest that redefining performance measures to reflect a global business perspective, namely reducing costs related to specific flights rather than the local-optimum operational approach of turning all aircraft quickly, results in significant savings to the company. Simulated savings in the airline's annual operating costs equaled 30% of the current potential expenses for misconnecting passengers, with only a modest increase in worker utilization achieved through a more efficient heuristic of deploying workers to the highest-priority tasks. This dissertation contributes to the literature on service operations by describing a dynamic, adaptive dispatch approach, based on the Theory of Constraints, for managing service factory operations similar to airline turnaround operations.
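A small numerical sketch of how a bottleneck ratio might be computed for turnaround processes follows. It assumes the ratio compares the time a process needs at its current rate against the time remaining before departure; this formula and all the process figures are illustrative assumptions, not necessarily the dissertation's exact definition.

```python
def bottleneck_ratio(work_remaining, process_rate, time_remaining):
    """Ratio > 1 means the process cannot finish in the time remaining
    without an increase in its process rate, i.e. it is a constraint."""
    time_needed = work_remaining / process_rate
    return time_needed / time_remaining

# Hypothetical turnaround processes for one aircraft, 30 minutes to departure.
processes = {
    "baggage unload/load": bottleneck_ratio(120, 3.0, 30),    # bags, bags/min
    "cabin cleaning":      bottleneck_ratio(25, 1.0, 30),     # rows, rows/min
    "fuelling":            bottleneck_ratio(9000, 400.0, 30), # litres, L/min
}
for name, ratio in sorted(processes.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {ratio:.2f}" + ("  <- constraint" if ratio > 1 else ""))
```

Under these assumed numbers, only baggage handling has a ratio above 1, so ramp workers would be dispatched there first.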
Abstract:
As users continually request additional functionality, software systems will continue to grow in complexity, as well as in their susceptibility to failures. Particularly for sensitive systems requiring higher levels of reliability, faulty system modules may increase development and maintenance costs. Hence, identifying them early would support the development of reliable systems through improved scheduling and quality control. Research effort to predict software modules likely to contain faults has, as a consequence, been substantial. Although a wide range of fault prediction models have been proposed, we remain far from having reliable tools that can be widely applied to real industrial systems. For projects with known fault histories, numerous research studies show that statistical models can provide reasonable estimates at predicting faulty modules using software metrics. However, as context-specific metrics differ from project to project, prediction across projects is difficult to achieve: models obtained from one project's experience are ineffective at predicting fault-prone modules when applied to other projects. Hence, taking full advantage of the existing work in the software development community has been substantially limited. As a step towards solving this problem, in this dissertation we propose a fault prediction approach that exploits existing prediction models, adapting them to improve their ability to predict faulty system modules across different software projects.
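One common way to adapt a fault-prediction model across projects is to standardize each project's metrics within that project before training and prediction, so the model learns relative rather than project-specific absolute values. The sketch below shows that generic idea; the metrics, data, and use of logistic regression are assumptions for illustration, not the dissertation's specific approach.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def standardize(metrics):
    """Z-score each metric within its own project."""
    metrics = np.asarray(metrics, dtype=float)
    return (metrics - metrics.mean(axis=0)) / (metrics.std(axis=0) + 1e-9)

# Hypothetical module metrics: [lines of code, cyclomatic complexity, churn].
project_a = [[120, 4, 10], [900, 25, 80], [300, 9, 15], [1500, 40, 200], [60, 2, 3]]
faults_a  = [0, 1, 0, 1, 0]                  # known fault history of project A

project_b = [[40, 3, 5], [700, 30, 90], [250, 8, 12]]   # new project, no history

model = LogisticRegression()
model.fit(standardize(project_a), faults_a)

# Standardize project B with its *own* statistics before predicting.
print(model.predict_proba(standardize(project_b))[:, 1])  # fault-proneness per module
```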
Abstract:
While robots gradually become part of our daily lives, they already play vital roles in many critical operations. Some of these critical tasks include surgeries, battlefield operations, and tasks that take place in hazardous environments or distant locations such as space missions. In most of these tasks, remotely controlled robots are used instead of autonomous robots. This special area of robotics is called teleoperation. Teleoperation systems must be reliable when used in critical tasks; hence, all of the subsystems must be dependable even under a subsystem or communication line failure. These systems are categorized as unilateral or bilateral teleoperation. A special type of bilateral teleoperation is force-reflecting teleoperation, which is further investigated as limited- and unlimited-workspace teleoperation. The teleoperation systems configured in this study are tested both in numerical simulations and in experiments. A new method, Virtual Rapid Robot Prototyping, is introduced to create system models rapidly and accurately. This method is then extended to configure experimental setups with actual master systems working with system models of the slave robots, accompanied by virtual reality screens, as well as with the actual slaves. Fault-tolerant design and modeling of the master and slave systems are also addressed at different levels to prevent subsystem failure. Teleoperation controllers are designed to compensate for instabilities due to communication time delays. Modifications to the existing controllers are proposed to configure a controller that remains reliable under communication line failures. Position/force controllers are also introduced for master and/or slave robots. Controller architecture changes are then discussed in order to make these controllers dependable even in systems experiencing communication problems. The customary and proposed controllers for teleoperation systems are tested in numerical simulations on single- and multi-DOF teleoperation systems. Experimental studies are then conducted on seven different systems, covering both limited- and unlimited-workspace teleoperation, to verify and improve the simulation studies. Experiments with the proposed controllers were successful relative to the customary controllers. Overall, by employing the fault-tolerance features and the proposed controllers, it is possible to design and configure a more reliable teleoperation system, which allows these systems to be used in a wider range of critical missions.
Abstract:
Adaptation is an important requirement for mobile applications due to the varying levels of resource availability that characterize mobile environments. However, without proper control, multiple applications can each adapt independently in response to a range of different adaptive stimuli, causing conflicts or sub-optimal performance. In this thesis we present a framework that enables multiple adaptation mechanisms to coexist on one platform. The key component of this framework is the 'Policy Server', which holds all the system policies and governs the rules for adaptation. We also simulated our framework and subjected it to various adaptation scenarios to demonstrate the working of the system as a whole. The simulation shows that our framework enables seamless adaptation of multiple applications.
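The sketch below illustrates the general shape of such a policy server: applications ask permission before adapting, and a single arbiter applies a system-wide rule so independent adaptations do not conflict. The class name, the single bandwidth policy, and the application names are hypothetical, not the thesis's actual design.

```python
class PolicyServer:
    """Central arbiter: applications request adaptations, and the server
    grants or denies them according to a system-wide policy."""

    def __init__(self, bandwidth_kbps):
        self.available = bandwidth_kbps
        self.granted = {}                      # app -> bandwidth currently granted

    def request_adaptation(self, app, wanted_kbps):
        in_use = self.granted.get(app, 0)
        extra = wanted_kbps - in_use
        if extra <= self.available:            # policy: never oversubscribe the link
            self.available -= extra
            self.granted[app] = wanted_kbps
            return True
        return False

server = PolicyServer(bandwidth_kbps=1000)
print(server.request_adaptation("video_player", 800))   # True
print(server.request_adaptation("file_sync", 400))      # False: must stay degraded
print(server.request_adaptation("video_player", 500))   # True: frees 300 kbps
print(server.request_adaptation("file_sync", 400))      # True now
```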
Abstract:
Acknowledgements: This study was made possible by partial financial support from the following Brazilian government agencies: CNPq, CAPES, and FAPESP (2011/19296-1 and 2015/07311-7). We also wish to thank the Newton Fund and COFAP.
Abstract:
This paper examines the benefits and limitations of content distribution using Forward Error Correction (FEC) in conjunction with the Transmission Control Protocol (TCP). FEC can be used to reduce the number of retransmissions that would usually result from a lost packet, greatly reducing the requirement for TCP to deal with losses. There is, however, a side-effect to using FEC as a countermeasure to packet loss: an additional bandwidth requirement. For applications such as real-time video conferencing, delay must be kept to a minimum and retransmissions are certainly not desirable, so a balance between additional bandwidth and delay due to retransmissions must be struck. Our results show that, when packet loss occurs, data throughput can be significantly improved by combining FEC with TCP, compared to relying solely on TCP for retransmissions. Furthermore, a case study applies this result to demonstrate the achievable improvements in the quality of streaming video perceived by end users.
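The simplest FEC scheme that captures this bandwidth/retransmission trade-off is one XOR parity packet per block of k data packets: any single lost packet in the block can be rebuilt without a retransmission, at a bandwidth overhead of 1/k. The sketch below illustrates that scheme only; the paper's actual codes and parameters may differ.

```python
def xor_parity(packets):
    """Build one parity packet as the byte-wise XOR of k equal-length packets."""
    parity = bytearray(len(packets[0]))
    for pkt in packets:
        for i, b in enumerate(pkt):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Recover a single missing packet (marked None) using the parity packet."""
    missing = [i for i, pkt in enumerate(received) if pkt is None]
    if len(missing) != 1:
        return received                       # 0 lost: nothing to do; >1 lost: FEC fails
    rebuilt = xor_parity([p for p in received if p is not None] + [parity])
    received[missing[0]] = rebuilt
    return received

# Hypothetical block of k = 4 fixed-size packets; packet 2 is lost in transit.
block = [bytes([i] * 8) for i in range(4)]
parity = xor_parity(block)                    # 25% bandwidth overhead for k = 4
received = [block[0], block[1], None, block[3]]
print(recover(received, parity) == block)     # True: no retransmission needed
```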
Abstract:
Due to the huge popularity of portable terminals based on wireless LANs and the increasing demand for multimedia services from these terminals, earlier structures and protocols are insufficient to cover the requirements of emerging networks and communications. Most research in this field seeks more efficient ways to optimize wireless LAN quality with respect to the requirements of multimedia services. Our work investigates the effects of modulation modes at the physical layer, retry limits at the MAC layer, and packet sizes at the application layer on the quality of media packet transmission. The interrelation among these parameters, from which a cross-layer design can be extracted, is discussed as well. We show how these parameters from different layers jointly contribute to the performance of service delivery by the network. The results obtained could form a basis for independent optimization in each layer (an adaptive approach) or optimization of a set of parameters from different layers (a cross-layer approach). Our simulation model is implemented in the NS-2 simulator; throughput and delay (latency) of packet transmission are the quantities used in our assessment. © 2010 IEEE.
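A cross-layer study of this kind amounts to sweeping the three parameters jointly and scoring each combination on throughput and delay. The sketch below does that with a deliberately toy analytical model; the real evaluation in the paper uses NS-2, so the rates, loss probabilities, and scoring rule here are purely illustrative assumptions.

```python
import itertools

# Hypothetical modulation modes: name -> (PHY rate in Mb/s, per-attempt loss probability).
MODULATIONS = {"BPSK": (6.0, 0.02), "QPSK": (12.0, 0.05), "64QAM": (54.0, 0.20)}
RETRY_LIMITS = [1, 3, 7]
PACKET_SIZES = [256, 512, 1024]               # bytes at the application layer

def evaluate(rate_mbps, loss, retries, size):
    """Toy model: expected delivery probability, mean delay, and goodput."""
    p_deliver = 1 - loss ** (retries + 1)                    # success within retry budget
    tx_time_ms = size * 8 / (rate_mbps * 1000)               # single-attempt airtime
    expected_attempts = (1 - loss ** (retries + 1)) / (1 - loss)
    delay_ms = tx_time_ms * expected_attempts
    goodput_mbps = rate_mbps * p_deliver * 0.7               # crude MAC-overhead factor
    return goodput_mbps, delay_ms

best = max(
    ((mod, r, s) + evaluate(*MODULATIONS[mod], r, s)
     for mod, r, s in itertools.product(MODULATIONS, RETRY_LIMITS, PACKET_SIZES)),
    key=lambda row: row[3] - row[4],          # favour goodput, penalize delay (toy score)
)
print(best)   # (modulation, retry limit, packet size, goodput Mb/s, delay ms)
```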