967 results for Distributed virtual machine
Abstract:
We present a system for dynamic network resource configuration in environments with bandwidth reservation. The proposed system is completely distributed and automates the mechanisms for adapting the logical network to the offered load. The system can dynamically manage a logical network such as a virtual path network in ATM or a label switched path network in MPLS or GMPLS. The system design and implementation are based on a multi-agent system (MAS) that makes the decisions of when and how to change a logical path. Despite the lack of a centralised global network view, results show that the MAS manages the network resources effectively, reducing the connection blocking probability and therefore achieving better utilisation of network resources. We also include details of its architecture and implementation.
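The abstract does not detail the agents' decision rule, but the idea of local, distributed re-dimensioning can be illustrated with a small sketch. The following Python fragment is purely hypothetical (class and threshold names are invented, not the authors' implementation): a per-path agent performs local admission control, monitors its blocking ratio, and periodically decides whether to grow or shrink the bandwidth reserved on the logical path.

# Illustrative sketch (not the authors' implementation): a per-path agent that
# adapts the bandwidth reserved on a logical path from observed blocking.
class PathAgent:
    def __init__(self, reserved_bw, step=2.0, high_block=0.05, low_util=0.6):
        self.reserved_bw = reserved_bw   # bandwidth currently reserved (Mb/s)
        self.used_bw = 0.0               # bandwidth currently in use
        self.offered = 0                 # connection attempts in this period
        self.blocked = 0                 # rejected attempts in this period
        self.step = step                 # re-dimensioning step
        self.high_block = high_block     # blocking ratio that triggers growth
        self.low_util = low_util         # utilisation below which we shrink

    def connection_request(self, bw):
        """Local CAC: accept the connection if it fits in the reserved path."""
        self.offered += 1
        if self.used_bw + bw <= self.reserved_bw:
            self.used_bw += bw
            return True
        self.blocked += 1
        return False

    def connection_release(self, bw):
        self.used_bw = max(0.0, self.used_bw - bw)

    def periodic_decision(self):
        """Decide when and how to re-dimension the logical path."""
        blocking = self.blocked / self.offered if self.offered else 0.0
        if blocking > self.high_block:
            self.reserved_bw += self.step          # ask for more capacity
        elif self.used_bw < self.low_util * self.reserved_bw:
            self.reserved_bw = max(self.used_bw, self.reserved_bw - self.step)
        self.offered = self.blocked = 0            # start a new observation period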
Abstract:
Because running a large ATM network at full strength would be too costly a way to apply our ideas about network management, i.e., dynamic virtual path (VP) management and fault restoration, we developed a distributed simulation platform for performing our experiments. The platform also had to be capable of other sorts of tests, such as connection admission control (CAC) algorithms, routing algorithms, and accounting and charging methods. It was conceived as a very simple, event-oriented and scalable simulation. The main goal was the simulation of a working ATM backbone network with a potentially large number of nodes (hundreds). As research into control algorithms and low-level, or rather cell-level, methods was beyond the scope of this study, the simulation took place at the connection level, i.e., there was no real traffic of cells. The simulated network behaved like a real network, accepting and rejecting connections, and could be managed with standard tools, such as SNMP ones, or with experimental tools using the node API.
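A connection-level, event-oriented simulation of the kind described can be sketched in a few lines. The fragment below is an illustrative assumption of how such a simulator might look (a single link, Poisson arrivals, exponential holding times), not the platform itself:

# Minimal connection-level, event-driven simulation sketch (illustrative only):
# calls arrive on a link, a simple CAC accepts them if capacity remains,
# and no cell-level traffic is ever generated.
import heapq, random

def simulate(capacity=100.0, rate=1.0, mean_hold=10.0, bw=2.0, horizon=10_000.0):
    events = [(random.expovariate(rate), "arrival")]
    used, accepted, blocked = 0.0, 0, 0
    while events:
        t, kind = heapq.heappop(events)
        if t > horizon:
            break
        if kind == "arrival":
            if used + bw <= capacity:              # CAC decision
                used += bw
                accepted += 1
                heapq.heappush(events, (t + random.expovariate(1 / mean_hold), "release"))
            else:
                blocked += 1
            heapq.heappush(events, (t + random.expovariate(rate), "arrival"))
        else:                                      # connection release
            used -= bw
    return accepted, blocked

if __name__ == "__main__":
    acc, blk = simulate()
    print(f"accepted={acc} blocked={blk} blocking={blk / (acc + blk):.3f}")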
Abstract:
We present a system for dynamic network resource configuration in environments with bandwidth reservation and path restoration mechanisms. Our focus is on the dynamic bandwidth management results, although the main goal of the system is the integration of the different mechanisms that manage the reserved paths (bandwidth, restoration, and spare capacity planning). The objective is to avoid conflicts between these mechanisms. The system is able to dynamically manage a logical network such as a virtual path network in ATM or a label switched path network in MPLS. The system has been designed to be modular in the sense that it can be activated or deactivated, and it can be applied only in a sub-network. The system design and implementation are based on a multi-agent system (MAS). We also include details of its architecture and implementation.
Abstract:
Through this study, we will measure how collective MPI operations behave in virtual and physical clusters, and their impact on application performance. As we stated before, we will use Weather Research and Forecasting (WRF) simulations as a test case.
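As an illustration of the kind of measurement implied here, the following mpi4py microbenchmark times MPI_Allreduce for several message sizes; it is a generic sketch that could be run on both a virtual and a physical cluster, and it is not taken from the study (the WRF workload itself is not involved).

# Illustrative microbenchmark (assumes mpi4py is installed): time MPI_Allreduce
# for growing message sizes, e.g. to compare a virtual and a physical cluster.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

for n in (1_024, 65_536, 1_048_576):               # elements per message
    send = np.ones(n, dtype="d")
    recv = np.empty_like(send)
    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(50):
        comm.Allreduce(send, recv, op=MPI.SUM)
    elapsed = (MPI.Wtime() - t0) / 50
    worst = comm.reduce(elapsed, op=MPI.MAX, root=0)  # slowest rank dominates
    if rank == 0:
        print(f"{n:>9d} doubles: {worst * 1e6:8.1f} us per Allreduce")

Run with, e.g., mpirun -np 8 python allreduce_bench.py on each cluster and compare the reported per-call times.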
Abstract:
Although cross-sectional diffusion tensor imaging (DTI) studies have revealed significant white matter changes in mild cognitive impairment (MCI), the utility of this technique in predicting further cognitive decline is debated. Thirty-five healthy controls (HC) and 67 MCI subjects with DTI baseline data were neuropsychologically assessed at one year. Among them, 40 were stable (sMCI; 9 single-domain amnestic, 7 single-domain frontal, 24 multiple-domain) and 27 progressive (pMCI; 7 single-domain amnestic, 4 single-domain frontal, 16 multiple-domain). Fractional anisotropy (FA) and longitudinal, radial, and mean diffusivity were measured using Tract-Based Spatial Statistics. Statistics included group comparisons and individual classification of MCI cases using support vector machines (SVM). FA was significantly higher in HC compared to MCI in a distributed network including the ventral part of the corpus callosum and right temporal and frontal pathways. There were no significant group-level differences between sMCI and pMCI or between MCI subtypes after correction for multiple comparisons. However, SVM analysis allowed an individual classification with accuracies up to 91.4% (HC versus MCI) and 98.4% (sMCI versus pMCI). When considering the MCI subgroups separately, the minimum SVM classification accuracy for stable versus progressive cognitive decline was 97.5%, in the multiple-domain MCI group. SVM analysis of DTI data provided highly accurate individual classification of stable versus progressive MCI regardless of MCI subtype, indicating that this method may become an easily applicable tool for early individual detection of MCI subjects evolving to dementia.
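For readers unfamiliar with the classification step, a minimal scikit-learn sketch of SVM-based individual classification with leave-one-out cross-validation is given below; the data are synthetic placeholders and the pipeline is not the one used in the study.

# Illustrative sketch only (not the study's pipeline): SVM classification of
# subjects from voxel-wise FA features with leave-one-out cross-validation.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Placeholder data: rows are subjects, columns are skeletonised FA voxels,
# labels are 0 = stable MCI, 1 = progressive MCI.
rng = np.random.default_rng(0)
X = rng.normal(size=(67, 5000))
y = rng.integers(0, 2, size=67)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"leave-one-out accuracy: {acc:.3f}")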
Abstract:
The book presents the state of the art in machine learning algorithms (artificial neural networks of different architectures, support vector machines, etc.) as applied to the classification and mapping of spatially distributed environmental data. Basic geostatistical algorithms are presented as well. New trends in machine learning and their application to spatial data are given, and real case studies based on environmental and pollution data are carried out. The book provides a CD-ROM with the Machine Learning Office software, including sample data sets, which will allow both students and researchers to put the concepts rapidly into practice.
Abstract:
BACKGROUND: Over the last decade, several European HIV observational databases have accumulated a substantial number of resistance test results and developed large sample repositories. There is a need to link these efforts together. We describe here the development of a novel tool that allows these databases to be linked in a distributed fashion, in which control and data remain with the cohorts rather than being merged in the classic way. METHODS: As proof of concept we entered two basic queries into the tool: available resistance tests and available samples. We asked for patients still alive after 1998-01-01 and between 180 and 195 cm in height, and how many samples or resistance tests would be available for these patients. The queries were uploaded with the tool to a central web server, from which each participating cohort downloaded the queries with the tool and ran them against its database. The numbers gathered were then submitted back to the server, and we could accumulate the number of available samples and resistance tests. RESULTS: We obtained the following results from the cohorts on available samples/resistance tests: EuResist: not available/11,194; EuroSIDA: 20,716/1,992; ICONA: 3,751/500; Rega: 302/302; SHCS: 53,783/1,485. In total, 78,552 samples and 15,473 resistance tests were available among these five cohorts. Once these data items have been identified, it is trivial to generate lists of relevant samples that would be useful for ultra-deep sequencing in addition to the already available resistance tests. Soon the tool will include small analysis packages that allow each cohort to pull a report on its cohort profile and also survey emerging resistance trends in its own cohort. CONCLUSIONS: We plan to provide this tool to all cohorts within the Collaborative HIV and Anti-HIV Drug Resistance Network (CHAIN) and will provide it free of charge to others for any non-commercial use. The potential of this tool is to ease collaborations, that is, in projects requiring data, to speed up the identification of novel resistance mutations by increasing the number of observations across multiple cohorts instead of waiting for single cohorts or studies to reach the critical number needed to address such issues.
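The workflow described (a central server distributes a query, each cohort executes it locally and returns only aggregate counts) can be sketched as follows. Table names, the SQL text and the SQLite backend are all invented for illustration; the actual tool and its query format are not reproduced here.

# Illustrative sketch of the distributed-query idea (names and endpoints are
# invented): each cohort runs the query locally and returns only counts, so
# patient-level data never leave the cohort.
import sqlite3

QUERY = """
    SELECT COUNT(*) FROM samples s JOIN patients p ON p.id = s.patient_id
    WHERE p.alive_after >= '1998-01-01' AND p.height_cm BETWEEN 180 AND 195
"""

def run_local_query(db_path):
    """Executed at the cohort: only the aggregate count is reported back."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute(QUERY).fetchone()[0]

def aggregate(counts):
    """Executed at the central server: sum the per-cohort counts."""
    return sum(c for c in counts if c is not None)

# e.g. aggregate(run_local_query(path) for path in cohort_databases)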
Abstract:
Cooperative transmission can be seen as a "virtual" MIMO system, where the multiple transmit antennas are in fact implemented in a distributed manner by the antennas at both the source and the relay terminal. Depending on the system design, diversity/multiplexing gains are achievable. This design involves the definition of the type of retransmission (incremental redundancy, repetition coding), the design of the distributed space-time codes, the error-correcting scheme, the operation of the relay (decode&forward or amplify&forward), and the number of antennas at each terminal. The proposed schemes are evaluated under different conditions in combination with forward error correcting codes (FEC), both for linear and near-optimum (sphere decoder) receivers, for their possible implementation in downlink high-speed packet services of cellular networks. Results show the benefits of coded cooperation over direct transmission in terms of increased throughput. It is shown that multiplexing gains are observed even if the mobile station features a single antenna, provided that cell-wide reuse of the relay radio resource is possible.
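A toy example of one of the ingredients mentioned, decode-and-forward with repetition coding and combining at the destination, is sketched below; it assumes BPSK over flat Rayleigh fading and an error-free source-relay link, and it is not the coded-cooperation scheme evaluated in the paper.

# Illustrative sketch (not the paper's scheme): decode-and-forward relaying with
# repetition coding, BPSK over Rayleigh fading, maximal-ratio combining at the
# destination, compared against direct transmission.
import numpy as np

rng = np.random.default_rng(1)
n, snr = 100_000, 10 ** (8 / 10)                   # bits, 8 dB SNR

bits = rng.integers(0, 2, n)
x = 2 * bits - 1                                   # BPSK symbols

def rayleigh_awgn(x, snr):
    h = (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape)) / np.sqrt(2)
    noise = (rng.normal(size=x.shape) + 1j * rng.normal(size=x.shape)) / np.sqrt(2 * snr)
    return h, h * x + noise

# Source-destination and relay-destination links (the relay is assumed to have
# decoded the source perfectly, for simplicity).
h_sd, y_sd = rayleigh_awgn(x, snr)
h_rd, y_rd = rayleigh_awgn(x, snr)

direct = (np.real(np.conj(h_sd) * y_sd) > 0).astype(int)
mrc = (np.real(np.conj(h_sd) * y_sd + np.conj(h_rd) * y_rd) > 0).astype(int)

print("direct BER:", np.mean(direct != bits))
print("cooperative (MRC) BER:", np.mean(mrc != bits))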
Abstract:
This work analyses the effect of the process on paper machine stability. Two modern newsprint machines were analysed and, on that basis, physics-based simulation models of both processes were built with the APROS Paper simulation software. The aim of the work is to determine how the processes of these machines differ from each other and to assess how the observed differences affect process stability. The work examines the attenuation of periodic disturbances in the process. Pure white noise was used as the excitation in the simulations, allowing the attenuation of periodic disturbances of different frequencies to be analysed. The disturbance responses of the processes are presented in the frequency domain. The largest differences between the processes were found in the wire pit and its mixing dynamics. The behaviour of the conventional wire pit was found to resemble ideal mixers connected in series, whereas the smaller-volume flume was found to behave almost like a pure transport delay. Although the differences in process volume and in the wire pit mixing dynamics were very clear, only a marginal difference between the processes was observed in the attenuation of periodic disturbances, because the differences in wire retention levels had the greatest influence on the simulation results. The paper machine operating at a lower wire retention was found to attenuate process disturbances more effectively. At the same retention level, the smaller-volume process was found to attenuate slow process disturbances marginally better. For one of the studied paper machines, the effect of a change in wire section dewatering on wire retention and paper composition was simulated, and the performance of the wire retention control was also evaluated. Dewatering at the wire section blade shoe was found to cause significant consistency and retention disturbances if the solids flow removed by it were to double. Wire retention control was found to prevent disturbances from circulating in the process, but to transfer them directly into the web. Retention control was not, however, found to be a direct source of disturbances.
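The analysis principle (white-noise excitation, attenuation read off in the frequency domain) can be illustrated with a simple stand-in model; the sketch below uses a first-order ideal mixer in place of the wire pit and is not an APROS model.

# Illustrative sketch of the analysis idea (the APROS models are not reproduced):
# drive a simple mixing-volume model with white noise and estimate how strongly
# periodic disturbances of each frequency are attenuated.
import numpy as np
from scipy import signal

fs = 1.0                                           # one sample per second
t = np.arange(0, 20_000, 1 / fs)
u = np.random.default_rng(0).normal(size=t.size)   # white-noise excitation

# Ideal mixer (first-order lag) with a 60 s time constant, standing in for a
# wire pit volume; a flume would behave more like a pure transport delay.
tau = 60.0
mixer = signal.TransferFunction([1.0], [tau, 1.0])
_, y, _ = signal.lsim(mixer, U=u, T=t)

# Attenuation vs frequency from the estimated transfer function |H| = |Pyu| / Puu.
f, Puu = signal.welch(u, fs=fs, nperseg=4096)
_, Pyu = signal.csd(y, u, fs=fs, nperseg=4096)
attenuation_db = 20 * np.log10(np.abs(Pyu) / Puu)
print(attenuation_db[:5])                          # low-frequency gain ≈ 0 dB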
Abstract:
Priorities for museums are changing. The mission of the new museology is to turn museums into places for enjoyment and learning, which means they have to carry out financial management very similar to that of a social enterprise competing in the leisure sector. Over time, museums have had to establish and apply the criteria necessary for survival, paving the way for other public institutions to be more open in their efforts to communicate and disseminate their heritage. We can already begin to speak of some commonly accepted conclusions about visitor behaviour, which are necessary for planning future exhibitions that see learning as a constructive process, collections as objects with meaning, and the exhibitions themselves as communication media that should transform the way the viewer thinks and that are at the service of the message itself. The Internet seems to be an effective medium for achieving these objectives, since it is able (a) to adapt to the interests and intellectual characteristics of a diverse public; (b) to rediscover the meanings of objects and gain sociocultural recognition of their value through its interactive potential; and (c) to make use of attractive and stimulating elements so that everyone can enjoy them. For this purpose, it is essential to ask ourselves the following questions: what criteria should a virtual museum follow in order to optimise the dissemination of its heritage? What elements encourage users to stay on a website and make virtual visits that they find satisfying? What role does the usability of the application play in all this?
Abstract:
So that the radius, and thus the non-uniform structure of the teeth and of the other electrical and magnetic parts of the machine, may be taken into consideration, the calculation of an axial flux permanent magnet machine is conventionally done by means of 3D FEM methods. This calculation procedure, however, requires a lot of time and computer resources. This study proves that analytical methods can also be applied to perform the calculation successfully. The procedure of the analytical calculation can be summarized in the following steps: first the magnet is divided into slices, the calculation is carried out for each section individually, and then the partial results are combined into the final result. It is obvious that using this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including its vibration. The motor geometry, the materials, and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size or the number of pole pairs, nor on many other factors. The calculation steps and the number of different sections of the magnet are selectable, but the calculation time depends strongly on these choices. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air-gap depend on the geometry and material of the magnetic circuit, on the length of the air-gap, and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area. The calculation is simple and fast if the shape of the magnet is a square and has no skew in relation to the stator slots. With a more complicated magnet shape, the calculation has to be done in several sections. It is clear that with an increasing number of sections the result becomes more accurate. In a radial flux motor, all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these contributions. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air-gap, magnet and the slot, is modelled with a reluctance net that considers the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be performed in order to get the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. The flux linkage is the part of the flux that is created by the permanent magnets and the stator currents and that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Due to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, even though in some cases semi-magnetic slot wedges are used. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces in the rotor. This phenomenon is called cogging.
The flux in the slot opening area on the different sides of the opening, and in the different slot openings, is not equal, and so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of the opening is the component to the left of the geometrical centre of the slot. This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque, the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high derivative component is also spread equally along the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways. All the magnets may have a different deflection from the symmetrical centre point, or they can, for example, be shifted in pairs. There are some factors that limit the deflection. The first is that the magnets cannot overlap; the magnet shape and its relative width compared to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor. If the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thus reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance net is made for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs, in the calculation model from one edge to the other, is not correct. If this fact were to be considered in multi-pole-pair machines, it would mean that all the poles, in other words the whole machine, should be modelled in the reluctance net. The error resulting from this incorrect assumption is, nevertheless, irrelevant.
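The slicing idea for the axial flux geometry, where each radial section contributes force at its own radius and the torque is the sum over sections, can be illustrated with a crude sizing-style sketch; the reluctance network, saturation iteration and cogging calculation are deliberately left out, and all numbers are made up.

# Illustrative sketch of the slicing idea only (no reluctance network or
# saturation iteration): each radial section of an axial flux machine produces
# tangential force at its own radius, and the torque is the sum over sections.
import numpy as np

def axial_flux_torque(r_in, r_out, n_sections, B_gap, A_linear):
    """Crude sizing-style estimate: tangential stress ~ B_gap * A_linear,
    force = stress * section area, torque = force * section radius."""
    edges = np.linspace(r_in, r_out, n_sections + 1)
    torque = 0.0
    for r1, r2 in zip(edges[:-1], edges[1:]):
        r_mid = 0.5 * (r1 + r2)                    # radius of this section
        area = np.pi * (r2**2 - r1**2)             # annular section area
        force = B_gap * A_linear * area            # tangential force on the section
        torque += force * r_mid                    # each section contributes at its own radius
    return torque

# Example: 0.05-0.15 m active radii, 20 sections, 0.8 T air-gap flux density,
# 30 kA/m linear current density (all values are made up for illustration).
print(f"torque ≈ {axial_flux_torque(0.05, 0.15, 20, 0.8, 30_000):.1f} N·m")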
Abstract:
Network virtualisation is gaining considerable attention as a solution to the ossification of the Internet. However, the success of network virtualisation will depend in part on how efficiently the virtual networks utilise substrate network resources. In this paper, we propose a machine learning-based approach to virtual network resource management. We propose to model the substrate network as a decentralised system and introduce a learning algorithm in each substrate node and substrate link, providing self-organisation capabilities. We propose a multi-agent learning algorithm that carries out the substrate network resource management in a coordinated and decentralised way. The task of these agents is to use evaluative feedback to learn an optimal policy so as to dynamically allocate network resources to virtual nodes and links. The agents ensure that while the virtual networks have the resources they need at any given time, only the required resources are reserved for this purpose. Simulations show that our dynamic approach significantly improves the virtual network acceptance ratio and the maximum number of accepted virtual network requests at any time, while ensuring that virtual network quality-of-service requirements such as packet drop rate and virtual link delay are not affected.
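The evaluative-feedback idea can be illustrated with a toy bandit-style learner for a single substrate link; the reward shape, actions and parameters below are invented for illustration and do not reproduce the paper's multi-agent algorithm.

# Illustrative sketch (not the paper's algorithm): a bandit-style learning agent
# for one substrate link that learns what fraction of the requested bandwidth to
# keep reserved, using evaluative feedback that penalises both over-provisioning
# and degraded virtual-link quality of service.
import random

ACTIONS = [0.6, 0.8, 1.0]          # fraction of the requested bandwidth to reserve

class LinkAgent:
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}
        self.alpha, self.epsilon = alpha, epsilon

    def choose(self):
        if random.random() < self.epsilon:                 # explore
            return random.choice(ACTIONS)
        return max(self.q, key=self.q.get)                 # exploit

    def learn(self, action, reward):
        self.q[action] += self.alpha * (reward - self.q[action])

def reward(reserved_fraction, demand_fraction):
    """Higher reward for reserving just enough: penalise unused reservation
    and, more strongly, any demand that exceeds the reservation (drops)."""
    unused = max(0.0, reserved_fraction - demand_fraction)
    dropped = max(0.0, demand_fraction - reserved_fraction)
    return 1.0 - unused - 5.0 * dropped

agent = LinkAgent()
for _ in range(5000):
    a = agent.choose()
    demand = random.uniform(0.4, 0.9)                      # fluctuating actual load
    agent.learn(a, reward(a, demand))
print("learned reservation fraction:", max(agent.q, key=agent.q.get))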
Abstract:
Commercially available haptic interfaces are usable for many purposes. However, as generic devices they are not the most suitable for controlling heavy-duty mobile working machines such as mining machines, container handling equipment and excavators. Alternative mechanical constructions for a haptic controller are presented and analysed. A virtual reality environment (VRE) was built to test the proposed haptic controller mechanisms. Verification of an electric motor emulating a hydraulic pump in the electro-hydraulic system of a mobile working machine is carried out. A real-time simulator using multi-body dynamics based software with a hardware-in-the-loop (HIL) setup was used for the tests. Recommendations for further development of the haptic controller and the emulator electric motor are given.
Abstract:
The brain-computer interface (BCI) decodes the electrical signals of the brain recorded by electroencephalography and transforms them into commands to control a device or a piece of software. A limited number of mental tasks have been detected and classified by different research groups. Other types of control, for example the execution of a real or imagined foot movement, can modify the brain waves of the motor cortex. We used a BCI to determine whether we could classify forward versus backward walking-type navigation, in real time and offline, using different methods. Ten healthy people participated in the BCI experiment in a virtual tunnel. The experiment was divided into two sessions (48 min each), each comprising 320 trials. Subjects were asked to imagine moving forward or backward in the virtual tunnel, in random order, according to a command written on the screen. The trials were conducted with feedback. Three electrodes were mounted on the scalp over the motor cortex. During the first session, classification of the two tasks (forward and backward navigation) was performed with band power, time-frequency representation, autoregressive model and β-rhythm asymmetry ratio methods, using linear discriminant analysis and SVM classifiers. Thresholds were computed offline to form control signals, which were used in real time during the second session to initiate, from the user's brain waves, movement of the virtual tunnel in the requested direction. After 96 min of training, the online band-power biofeedback method reached an average classification accuracy of 76%, and offline classification with the asymmetry ratio and band power reached a classification accuracy of about 80%.
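A minimal sketch of the band-power pipeline (β-band power per electrode, linear discriminant analysis classifier) is given below with synthetic data; it is not the study's processing chain, and the accuracy printed on random data is at chance level.

# Illustrative sketch of the band-power approach (synthetic data, not the
# study's recordings): beta-band power per electrode and per trial, fed to a
# linear discriminant analysis classifier.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 250                                            # sampling rate (Hz)
rng = np.random.default_rng(0)
epochs = rng.normal(size=(320, 3, 2 * fs))          # 320 trials, 3 electrodes, 2 s
labels = rng.integers(0, 2, size=320)               # 0 = forward, 1 = backward

def beta_band_power(epoch):
    """Mean power in the 13-30 Hz band for each electrode of one epoch."""
    f, psd = welch(epoch, fs=fs, nperseg=fs)
    band = (f >= 13) & (f <= 30)
    return psd[:, band].mean(axis=1)

features = np.array([beta_band_power(e) for e in epochs])   # shape (320, 3)
acc = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=10).mean()
print(f"10-fold accuracy on synthetic data: {acc:.2f}")      # ~chance here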