839 results for Multi processor systems
Abstract:
Prediction of queue waiting times of jobs submitted to production parallel batch systems is important to provide overall estimates to users and can also help meta-schedulers make scheduling decisions. In this work, we have developed a framework for predicting ranges of queue waiting times for jobs by employing multi-class classification of similar jobs in history. Our hierarchical prediction strategy first predicts the point wait time of a job using a dynamic k-Nearest Neighbor (kNN) method. It then performs a multi-class classification using Support Vector Machines (SVMs) among all the classes of the jobs. The probabilities given by the SVM for the class predicted using kNN and its neighboring classes are used to provide a set of ranges of predicted wait times with probabilities. We have used these predictions and probabilities in a meta-scheduling strategy that distributes jobs to different queues/sites in a multi-queue/grid environment to minimize the wait times of the jobs. Experiments with different production supercomputer job traces show that our prediction strategies give correct predictions for about 77-87% of the jobs, and improve accuracy by about 12% compared to the next best existing method. Experiments with our meta-scheduling strategy using different production and synthetic job traces for various system sizes, partitioning schemes, and workloads show that it performs much better than existing scheduling policies, reducing the overall average queue waiting time of the jobs by about 47%.
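As a rough illustration of this kind of hierarchical scheme (a sketch only, not the authors' implementation), the code below predicts a point wait time with kNN and then reads SVM class probabilities over wait-time ranges for the kNN-predicted class and its neighbors. The job features, wait-time bin edges, and synthetic data are hypothetical.

```python
# Illustrative sketch of hierarchical wait-time range prediction (kNN point estimate
# followed by SVM class probabilities). Features and bin edges are assumed examples.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((500, 3))                   # e.g. requested cores, walltime, queue load
wait = rng.gamma(2.0, 600.0, size=500)     # synthetic wait times in seconds

bins = np.array([0, 600, 3600, 14400, np.inf])   # wait-time ranges (classes)
y_class = np.digitize(wait, bins) - 1

knn = KNeighborsRegressor(n_neighbors=5).fit(X, wait)   # point prediction
svm = SVC(probability=True).fit(X, y_class)             # class probabilities

def predict_ranges(x):
    point = knn.predict([x])[0]
    c = int(np.digitize(point, bins) - 1)               # class of the kNN prediction
    proba = svm.predict_proba([x])[0]
    classes = list(svm.classes_)
    ranges = []
    for k in (c - 1, c, c + 1):                         # predicted class and neighbors
        if k in classes:
            ranges.append(((bins[k], bins[k + 1]), proba[classes.index(k)]))
    return point, ranges

print(predict_ranges(X[0]))
```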
Abstract:
We present a localization system that targets rapid deployment of stationary wireless sensor networks (WSNs). The system uses a particle filter to fuse measurements from multiple localization modalities, such as RF ranging, neighbor information, or maps, to obtain position estimates with higher accuracy than that of the individual modalities. The system isolates the different modalities into separate components that can be included or excluded independently to tailor the system to a specific scenario. We show that position estimates can be improved with our system by combining multiple modalities. We evaluate the performance of the system in both indoor and outdoor environments using combinations of five different modalities. Using two anchor nodes as reference points and combining all five modalities, we obtain RMS (root mean square) estimation errors of approximately 2.5 m in both cases, while using the components individually results in errors in the range of 3.5 to 9 m.
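A minimal sketch of the general particle-filter fusion idea (not this system's code): particles representing position hypotheses are reweighted by range measurements to two anchors. The anchor positions, noise level, and particle count are assumptions for illustration.

```python
# Minimal particle-filter fusion of range measurements (illustrative only; anchor
# positions, noise levels, and particle count are assumed, not from the paper).
import numpy as np

rng = np.random.default_rng(1)
anchors = np.array([[0.0, 0.0], [10.0, 0.0]])    # two reference (anchor) nodes
true_pos = np.array([4.0, 3.0])
sigma = 1.0                                       # ranging noise std-dev (m)

particles = rng.uniform(-5, 15, size=(2000, 2))   # initial position hypotheses
weights = np.full(len(particles), 1.0 / len(particles))

for a in anchors:                                 # fuse one ranging measurement per anchor
    measured = np.linalg.norm(true_pos - a) + rng.normal(0, sigma)
    predicted = np.linalg.norm(particles - a, axis=1)
    weights *= np.exp(-0.5 * ((measured - predicted) / sigma) ** 2)
    weights /= weights.sum()

# Resample and report the mean of the resampled particles as the position estimate.
idx = rng.choice(len(particles), size=len(particles), p=weights)
estimate = particles[idx].mean(axis=0)
print("estimated position:", estimate)
```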
Abstract:
Practical orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE), exploit multi-user diversity using very limited feedback. The best-m feedback scheme is one such limited feedback scheme, in which users report only the gains of their m best subchannels (SCs) and their indices. While the scheme has been extensively studied and adopted in standards such as LTE, an analysis of its throughput for the practically important case in which the SCs are correlated has received less attention. We derive new closed-form expressions for the throughput when the SC gains of a user are uniformly correlated. We analyze the performance of the greedy but unfair frequency-domain scheduler and the fair round-robin scheduler for the general case in which the users see statistically non-identical SCs. An asymptotic analysis is then developed to gain further insights. The analysis and extensive numerical results bring out how correlation reduces throughput.
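For intuition, a toy Monte Carlo sketch (not the paper's closed-form analysis) of the best-m feedback mechanism: each user reports only its m strongest subchannel gains and their indices, and a greedy frequency-domain scheduler assigns each subchannel to the user with the largest reported gain. The uniform-correlation model and all parameter values are assumptions.

```python
# Toy Monte Carlo sketch of best-m feedback with a greedy frequency-domain scheduler.
# The uniform-correlation Rayleigh model and all parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_users, n_sc, m, rho, trials = 4, 12, 3, 0.5, 2000

total_rate = 0.0
for _ in range(trials):
    # Uniformly correlated Rayleigh subchannel gains per user (common + independent parts).
    common = rng.normal(size=(n_users, 1, 2))
    indep = rng.normal(size=(n_users, n_sc, 2))
    h = np.sqrt(rho) * common + np.sqrt(1 - rho) * indep
    gains = (h ** 2).sum(axis=2) / 2.0               # exponential SC power gains

    # Each user feeds back only its m best subchannels (gain + index).
    reported = np.full_like(gains, -np.inf)
    for u in range(n_users):
        best = np.argsort(gains[u])[-m:]
        reported[u, best] = gains[u, best]

    # Greedy scheduler: each SC goes to the user with the largest reported gain on it.
    for sc in range(n_sc):
        if np.isfinite(reported[:, sc]).any():
            u = np.argmax(reported[:, sc])
            total_rate += np.log2(1 + gains[u, sc])

print("average sum throughput per trial (bits/s/Hz):", total_rate / trials)
```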
Abstract:
Optimal control of traffic lights at junctions, or traffic signal control (TSC), is essential for reducing the average delay experienced by road users amidst the rapid increase in vehicle usage. In this paper, we formulate the TSC problem as a discounted cost Markov decision process (MDP) and apply multi-agent reinforcement learning (MARL) algorithms to obtain dynamic TSC policies. We model each traffic signal junction as an independent agent. An agent decides the signal duration of its phases in a round-robin (RR) manner using multi-agent Q-learning with either ε-greedy or UCB [3] based exploration strategies. It updates its Q-factors based on the cost feedback signal received from its neighbouring agents. This feedback signal can be easily constructed and is shown to be effective in minimizing the average delay of the vehicles in the network. We show through simulations in VISSIM that our algorithms perform significantly better than both the standard fixed signal timing (FST) algorithm and the saturation balancing (SAT) algorithm [15] on two real road networks.
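A minimal single-agent sketch of the kind of cost-minimizing Q-learning update with ε-greedy exploration used per junction (the multi-agent coordination, state encoding, and cost signal here are hypothetical placeholders, not the paper's definitions):

```python
# Illustrative per-junction Q-learning sketch with epsilon-greedy exploration.
# States, actions (green-time choices), and the cost feedback are hypothetical.
import random
from collections import defaultdict

actions = [10, 20, 30]            # candidate green durations (s) for the current phase
alpha, gamma, eps = 0.1, 0.9, 0.1
Q = defaultdict(lambda: {a: 0.0 for a in actions})

def choose_action(state):
    # epsilon-greedy exploration over green-time choices
    if random.random() < eps:
        return random.choice(actions)
    return min(Q[state], key=Q[state].get)   # lowest expected cost

def update(state, action, cost, next_state):
    # Q-learning update driven by a cost signal (e.g. queue lengths from neighbours)
    target = cost + gamma * min(Q[next_state].values())
    Q[state][action] += alpha * (target - Q[state][action])

# Toy usage: states are coarse congestion levels of the junction's approaches.
s = ("low", "high")
a = choose_action(s)
update(s, a, cost=42.0, next_state=("high", "high"))
print(Q[s][a])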
Abstract:
This paper reports a multi-scale study of the damage evolution process and rupture of gabbro under uniaxial compression using several experimental techniques, including an MTS810 testing machine, the white digital speckle correlation method, and the acoustic emission technique. In particular, the synchronization of the three experimental systems is realized to study the relationship between deformation and damage at multiple scales. It is found that there is a significant correlation between damage evolution at small and large length scales and rupture at the sample scale; in particular, the process displays critical sensitivity at multiple scales and trans-scale fluctuations.
Abstract:
The optimal bounded control of quasi-integrable Hamiltonian systems with wide-band random excitation for minimizing their first-passage failure is investigated. First, a stochastic averaging method for multi-degree-of-freedom (MDOF) strongly nonlinear quasi-integrable Hamiltonian systems with wide-band stationary random excitations using generalized harmonic functions is proposed. Then, the dynamical programming equations and their associated boundary and final-time conditions for the control problems of maximizing reliability and maximizing mean first-passage time are formulated based on the averaged Itô equations by applying the dynamical programming principle. The optimal control law is derived from the dynamical programming equations and the control constraints. The relationship between the dynamical programming equations and the backward Kolmogorov equation for the conditional reliability function and the Pontryagin equation for the conditional mean first-passage time of the optimally controlled system is discussed. Finally, the conditional reliability function, the conditional probability density, and the mean of the first-passage time of an optimally controlled system are obtained by solving the backward Kolmogorov equation and the Pontryagin equation. The application of the proposed procedure and the effectiveness of the control strategy are illustrated with an example.
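For orientation only (the paper's exact formulation is not reproduced here), the averaged Itô equations for the first integrals $H_r$ and the dynamical programming equation for the reliability-maximization problem typically take the generic form

$$ dH_r = \Big[\, m_r(\mathbf{H}) + \Big\langle \frac{\partial H_r}{\partial p_i}\, u_i \Big\rangle \Big]\, dt + \sigma_{rk}(\mathbf{H})\, dB_k(t), \qquad r = 1, \ldots, n, $$

$$ \frac{\partial V}{\partial t} + \sup_{\mathbf{u}} \left\{ \sum_r \Big[\, m_r(\mathbf{H}) + \Big\langle \frac{\partial H_r}{\partial p_i}\, u_i \Big\rangle \Big] \frac{\partial V}{\partial H_r} \right\} + \frac{1}{2} \sum_{r,s} b_{rs}(\mathbf{H})\, \frac{\partial^2 V}{\partial H_r \partial H_s} = 0, $$

where $V(\mathbf{H}, t)$ is the value function, $b_{rs} = \sigma_{rk}\sigma_{sk}$, $\langle \cdot \rangle$ denotes time averaging, and the drift $m_r$, diffusion $\sigma_{rk}$, and bound on the control $\mathbf{u}$ are system-specific.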
Abstract:
The separation of independent sources from mixed observed data is a fundamental and challenging problem. In many practical situations, observations may be modelled as linear mixtures of a number of source signals, i.e. a linear multi-input multi-output system. A typical example is speech recordings made in an acoustic environment in the presence of background noise and/or competing speakers. Other examples include EEG signals, passive sonar applications, and cross-talk in data communications. In this paper, we propose iterative algorithms to solve the n × n linear time-invariant system under two different constraints. Some existing solutions for 2 × 2 systems are reviewed and compared.
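The paper treats the general n × n linear time-invariant case; the sketch below only illustrates the simpler instantaneous (memoryless) mixture variant with a standard natural-gradient ICA iteration, not the paper's algorithms, and all parameters are assumed.

```python
# Illustrative natural-gradient ICA iteration for an instantaneous n x n mixture.
# This sketches only the memoryless special case, not the paper's LTI algorithms.
import numpy as np

rng = np.random.default_rng(3)
n, T = 3, 5000
S = rng.laplace(size=(n, T))       # super-Gaussian sources (e.g. speech-like)
A = rng.normal(size=(n, n))        # unknown mixing matrix
X = A @ S                          # observed mixtures

W = np.eye(n)                      # separating-matrix estimate
mu = 0.01
for _ in range(200):
    Y = W @ X
    g = np.tanh(Y)                 # score function suited to super-Gaussian sources
    # Natural-gradient update: W <- W + mu * (I - E[g(y) y^T]) W
    W += mu * (np.eye(n) - (g @ Y.T) / T) @ W

print("global matrix W @ A (ideally close to a scaled permutation):")
print(np.round(W @ A, 2))
```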
Abstract:
The Protein-Chip, a micro-assay for the determination of protein interactions and for the analysis, identification, and purification of proteins, has large potential applications. The Optical Protein-Chip can detect multiple protein interactions and the bio-activities of multiple molecules directly and simultaneously, with no labeling. The chip is a small matrix on a solid substrate containing multiple micro-areas prepared by microfabrication, using photolithography or soft lithography for surface patterning, and processed with surface modifications, including physical, chemical, and bio-chemical modifications. Ligand immobilization, such as protein immobilization, and especially oriented immobilization with low steric hindrance and high bio-specific binding activity between ligand and receptor, is used to form a sensing surface. Each area of the pattern corresponds to only one bioactivity. The intervals between the areas are non-bioactive and optically extinctive. The affinity between proteins is used to realize label-free micro-assays for protein identification and the determination of protein interactions. The sampling of the chip is non-disturbing and is performed with imaging ellipsometry and image processing against a database of proteins.
Abstract:
This short communication presents our recent studies implementing numerical simulations of multi-phase flows on top-ranked supercomputer systems with distributed-memory architectures. The numerical model is designed to make full use of the capacity of the hardware. Satisfactory scalability, in terms of both the parallel speed-up rate and the size of the problem, has been obtained on two highly ranked systems with massively parallel processors: the Earth Simulator (Earth Simulator Research Center, Yokohama, Kanagawa, Japan) and the TSUBAME (Tokyo Institute of Technology, Tokyo, Japan) supercomputers.
Abstract:
In use since ancient Greece and now widespread in most countries of the world, the traditional paper-ballot voting system has several security problems, such as difficulties in preventing voter coercion, vote selling, and fraudulent voter impersonation, in addition to usability problems that lead to ballot-filling errors and a slow tallying process that can take days. Moreover, the traditional system does not provide a vote receipt that would allow voters to verify that their vote was correctly counted in the tally. It was initially believed that computerizing the voting system would solve all the problems of the traditional system. However, after its deployment in some countries, electronic voting proved unable to provide irrefutable guarantees that it had not been the target of fraudulent alterations during its development or operation. The poor reputation of electronic systems is mainly associated with the lack of transparency of processes that, for the most part, neither materialize the vote, checked by the voter for manual-counting purposes, nor generate evidence (a receipt) that the voter's vote was correctly counted. The goal of this work is to propose an electronic voting architecture that securely integrates voter anonymity and authenticity with the confidentiality and integrity of the vote and the system. The system increases the usability of the paper-based "Three Ballots" ("Três Cédulas") voting scheme by implementing it computationally. The scheme lends greater credibility to the voting system through vote materialization and receipts, and through resistance to coercion and vote selling. Using asymmetric cryptography and classical computational security, together with an efficient auditing system, the proposal guarantees security and transparency in the processes involved. The modular architecture distributes responsibility among its entities, adding robustness and enabling large-scale elections. A system prototype developed using web services and the Election Markup Language shows the feasibility of the proposal.
Abstract:
The smart grid is a highly complex system that is being formed from the traditional power grid by adding new and sophisticated communication and control devices. This will enable the integration of new elements for distributed power generation and increasingly automated operation, both for utility actions and for customers. To model such systems, a bottom-up method is followed, using only a few basic elements structured into two layers: a physical layer for electrical power transmission and a logical layer for element communication. A simple case study is presented to analyse the possibilities for simulation. It shows a microgrid model with dynamic load management and an integrated approach that can process both electrical and communication flows.
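A minimal sketch of the two-layer idea (not the paper's model): a physical layer tracks the power balance while a logical layer exchanges messages for dynamic load management. Class names, message types, and values are assumptions for illustration.

```python
# Minimal two-layer microgrid sketch: physical power balance plus a logical
# messaging layer for load management. Names and values are assumed examples.
class Node:
    def __init__(self, name, generation_kw=0.0, load_kw=0.0, sheddable_kw=0.0):
        self.name = name
        self.generation_kw = generation_kw
        self.load_kw = load_kw
        self.sheddable_kw = sheddable_kw      # load that can be dropped on request

    def handle_message(self, msg):            # logical (communication) layer
        if msg == "shed_load":
            self.load_kw -= self.sheddable_kw
            self.sheddable_kw = 0.0

class Microgrid:
    def __init__(self, nodes):
        self.nodes = nodes

    def power_balance_kw(self):               # physical (electrical) layer
        return sum(n.generation_kw - n.load_kw for n in self.nodes)

    def step(self):
        # Dynamic load management: if demand exceeds generation, ask nodes to shed load.
        if self.power_balance_kw() < 0:
            for n in self.nodes:
                n.handle_message("shed_load")
        return self.power_balance_kw()

grid = Microgrid([Node("pv", generation_kw=5.0),
                  Node("house", load_kw=4.0, sheddable_kw=1.0),
                  Node("ev", load_kw=3.0, sheddable_kw=3.0)])
print(grid.step())   # power balance after one load-management step
```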