681 results for cluster computing
Abstract:
Simulation has traditionally been used for analyzing the behavior of complex real-world problems. Even though only some features of a problem are modeled, simulation time tends to become quite high even for common simulation problems. Parallel and distributed simulation is a viable technique for accelerating simulations. The success of parallel simulation depends heavily on the combination of the simulation application, the algorithm, and the environment. In this thesis a conservative parallel simulation algorithm is applied to the simulation of a cellular network application in a distributed workstation environment. The thesis presents a distributed simulation environment, Diworse, which is based on the use of networked workstations. The distributed environment is considered especially hard for conservative simulation algorithms due to the high cost of communication. In this thesis, however, the distributed environment is shown to be a viable alternative if the amount of communication is kept reasonable. The novel ideas of multiple message simulation and channel reduction enable efficient use of this environment for the simulation of a cellular network application. The distribution of the simulation is based on a modification of the well-known Chandy-Misra deadlock avoidance algorithm with null messages. The basic Chandy-Misra algorithm is modified with the null message cancellation and multiple message simulation techniques. The modifications reduce the number of null messages and the time required for their execution, thus reducing the overall simulation time. Null message cancellation reduces the processing time of null messages, as an arriving null message cancels other unprocessed null messages. Multiple message simulation forms groups of messages, simulating several messages before releasing the newly created messages.
If the message population in the simulation is sufficient, no additional delay is caused by this operation. A new technique for taking the simulation application into account is also presented. Performance is improved by establishing a neighborhood for the simulation elements. The neighborhood concept is based on a channel reduction technique, where the properties of the application exclusively determine which connections are necessary when a certain accuracy of simulation results is required. Distributed simulation is also analyzed in order to find out the effect of the different elements in the implemented simulation environment. This analysis is performed using critical path analysis, which allows determination of a lower bound for the simulation time. In this thesis critical times are computed for sequential and parallel traces. The analysis based on sequential traces reveals the parallel properties of the application, whereas the analysis based on parallel traces reveals the properties of the environment and the distribution.
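The conservative null-message mechanism described above can be sketched in miniature. The event structure and process below are illustrative assumptions, not the Diworse implementation; the sketch only shows the core rule that an arriving null message cancels queued unprocessed null messages, and that events up to the channel clock are safe to simulate.

```python
import heapq

class LogicalProcess:
    """Toy logical process with one input channel (illustrative only)."""

    def __init__(self, name):
        self.name = name
        self.queue = []            # heap of (timestamp, is_null, payload)
        self.channel_clock = 0     # latest timestamp seen on the channel

    def receive(self, timestamp, is_null, payload=None):
        if is_null:
            # Null-message cancellation: an arriving null message makes
            # all queued, not-yet-processed null messages obsolete.
            self.queue = [m for m in self.queue if not m[1]]
            heapq.heapify(self.queue)
        heapq.heappush(self.queue, (timestamp, is_null, payload))
        self.channel_clock = max(self.channel_clock, timestamp)

    def safe_events(self):
        # Conservative rule: only events with timestamp <= channel clock
        # are safe, since no earlier message can still arrive.
        safe = []
        while self.queue and self.queue[0][0] <= self.channel_clock:
            ts, is_null, payload = heapq.heappop(self.queue)
            if not is_null:
                safe.append((ts, payload))
        return safe

lp = LogicalProcess("cell_A")
lp.receive(3, False, "call_setup")
lp.receive(4, True)           # null message: "nothing arrives before t=4"
lp.receive(9, True)           # cancels the queued t=4 null message
lp.receive(7, False, "handover")
print(lp.safe_events())       # [(3, 'call_setup'), (7, 'handover')]
```

Because the second null message advances the channel clock to 9, both real events become safe in one step instead of two, which is exactly how cancellation cuts null-message processing time.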
Abstract:
Fast atom bombardment mass spectrometry (FABMS) has been used to study a large number of cationic phosphine-containing transition-metal-gold clusters, which ranged in mass from 1000 to 4000. Many of these clusters had been previously characterized and were examined in order to test the usefulness of the FABMS technique. The results showed that FABMS is excellent at giving the correct molecular formula and, when combined with NMR, IR, and microanalysis, gives a reliable characterization for cationic clusters [1]. Recently, FABMS has become one of the techniques employed routinely in cluster characterization [2,3] and is also an effective tool for the structure analysis of large biomolecules [4]. Some results in the present work reinforce the importance of these data in the characterization of clusters in the absence of crystals of sufficient quality for X-ray analysis.
Abstract:
Cluster analysis is a multivariate technique that seeks to group elements or variables so as to achieve maximum homogeneity within each group and the greatest difference between groups, using a hierarchical structure to decide which hierarchical level is the most appropriate for establishing the classification. SPSS offers three types of cluster analysis: hierarchical, two-step, and K-means. We apply the hierarchical method, as the most suitable one for determining the optimal number of clusters in the data and their contents, to our practical case.
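The hierarchical agglomeration described above can be sketched as follows. This is a minimal pure-Python single-linkage routine on toy data, shown as an illustration of the method family rather than of SPSS's actual procedure; the points and distance function are assumptions.

```python
import math

def dist(a, b):
    # Euclidean distance between two 2-D points.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def single_linkage(points, n_clusters):
    # Start with every point in its own cluster, then repeatedly merge
    # the two closest clusters until n_clusters remain. Cutting the
    # hierarchy at different levels gives different cluster counts.
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))
    return clusters

points = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.3),   # group near the origin
          (3.0, 3.1), (3.2, 2.9), (2.9, 3.0)]   # group near (3, 3)
print([sorted(c) for c in single_linkage(points, 2)])
```

Requesting two clusters recovers the two homogeneous groups; requesting more would cut the hierarchy at a lower level, which is the decision the abstract describes.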
Abstract:
An efficient approach for organizing large ad hoc networks is to divide the nodes into multiple clusters and designate, for each cluster, a clusterhead which is responsible for holding intercluster control information. The role of a clusterhead entails rights and duties. On the one hand, it has a dominant position in front of the others, because it manages the connectivity and has access to other nodes' sensitive information. On the other hand, the clusterhead role also has some associated costs. Hence, in order to prevent malicious nodes from taking control of the group in a fraudulent way and to avoid selfish attacks from suitable nodes, the clusterhead needs to be elected in a secure way. In this paper we present a novel solution that guarantees the clusterhead is elected in a cheat-proof manner.
Abstract:
The [Ru3O(Ac)6(py)2(CH3OH)]+ cluster provides an effective electrocatalytic species for the oxidation of methanol under mild conditions. This complex exhibits characteristic electrochemical waves at -1.02, 0.15 and 1.18 V, associated with the successive Ru3(III,II,II)/Ru3(III,III,II)/Ru3(III,III,III)/Ru3(IV,III,III) redox couples, respectively. Above 1.7 V, the formation of two Ru(IV) centers enhances the 2-electron oxidation of the methanol ligand, yielding formaldehyde, in agreement with the theoretical evolution of the HOMO levels as a function of the oxidation states. This work illustrates an important strategy for improving the efficiency of oxidation catalysis: using a multicentered redox catalyst and accessing its multiple higher oxidation states.
Abstract:
Book review
Abstract:
Companies in industry and trade increasingly choose to let a logistics company handle large parts of their logistics processes. The logistics companies, in turn, hand over the execution of individual services, such as different types of transport, to various partners within the industry. This thesis studies how logistics companies proceed when choosing which of their partners to engage in the execution of a logistics service package, a work process here called activation. The focus is on the content of the activation and the factors that influence how it is carried out and which partners are engaged. The work builds on the network approach to the study of business relationships in industrial markets. The activation process is perceived as a rather ordinary, routine activity within the company, but it can also be expected to affect how the company's network of partners develops over time, in that some relationships are strengthened while others are weakened. The empirical study involved 29 logistics companies in the Åbo (Turku) region, which, on the basis of a discussion guide, were asked to describe how they proceed at activation.
Abstract:
This study aimed to verify the influence of partial dehydration of "Niagara Rosada" grape clusters on the physicochemical quality of the pre-fermentation must. In Brazil, during the winemaking process it is often necessary to adjust the grape must when the physicochemical characteristics of the raw material are insufficient to produce wines in accordance with the Brazilian legislation for classification of beverages, which establishes a minimum alcohol content of 8.6 % for the beverage to be considered wine. Given that reducing the water content of grape berries concentrates the chemical compounds present in their composition, especially the total soluble solids, treatments were formed by the combination of two temperatures (T1: 37.1 ºC and T2: 22.9 ºC) and two air speeds (S1: 1.79 m s-1 and S2: 3.21 m s-1), plus a control (T0) that did not undergo the dehydration treatment. Analyses were performed of pH; total titratable acidity (TTA), in mEq L-1; total soluble solids (TSS), in ºBrix; water content on a dry basis; and concentration of phenolic compounds (CPC), in mg of gallic acid per 100 g of must. The mean comparison test identified statistically significant modifications relevant to the adaptation of must for winemaking purposes: the treatment at 22.9 ºC with an air speed of 1.79 m s-1 showed the largest increase in the concentration of total soluble solids, followed by the second-best result for the concentration of phenolic compounds.
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor (indeed, a memory resistor) whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors.
The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
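The device behaviour summarized above, a resistance programmed by applied voltage and retained without power, can be sketched with the classic HP linear ion-drift model. The parameter values and time step below are illustrative assumptions, not figures from this thesis, and real devices are considerably more complex.

```python
# Sketch of the HP linear ion-drift memristor model (assumed parameters).
R_ON, R_OFF = 100.0, 16000.0   # bounding resistances (ohms)
D = 10e-9                      # device thickness (m)
MU = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)
DT = 1e-3                      # Euler time step (s), chosen for illustration

def simulate(voltage, steps, x=0.1):
    """Euler-integrate the normalized state x = w/D under a constant bias."""
    for _ in range(steps):
        m = R_ON * x + R_OFF * (1.0 - x)     # memristance at this state
        i = voltage / m                      # current through the device
        x += MU * R_ON / (D * D) * i * DT    # linear drift of the state
        x = min(max(x, 0.0), 1.0)            # state is physically bounded
    return x, R_ON * x + R_OFF * (1.0 - x)

x1, m1 = simulate(1.0, 2000)        # positive bias drives resistance down
x0, m0 = simulate(0.0, 2000, x1)    # zero bias: the state (memory) persists
print(m1, x0 == x1)
```

The first run programs the device toward its low-resistance bound; the second run shows the nonvolatility that makes in-memory processing attractive: with no applied voltage, the state does not change.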
Abstract:
As fabrication technologies advance, ever more transistors fit on an IC. More complex circuits allow more computations to be performed per unit of time. As circuit activity increases, so does energy consumption, which in turn increases the heat produced by the chip. Excessive heat limits circuit operation, so techniques are needed to reduce the energy consumption of circuits. A new research area is small devices that monitor, for example, the human body, buildings, or bridges. Such devices must have low energy consumption so that they can operate for long periods without recharging their batteries. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate circuits at a lower supply voltage than the one the manufacturer originally designed them for. This slows down and impairs circuit operation, but if reduced computational performance and reliability can be tolerated, savings in energy consumption can be achieved. This thesis examines Near-Threshold Computing from several perspectives: first on the basis of prior studies in the literature, and then by investigating the application of the technique in two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at Near-Threshold Computing voltages can be taken as representative of a large share of the area and energy consumption of a typical IC. The case studies use a 130 nm technology, and real products of the fabrication process are modeled by running numerous Monte Carlo simulations.
This inexpensive technology, combined with Near-Threshold Computing, makes it possible to manufacture low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing reduces the energy consumption of circuits significantly. On the other hand, circuit speed decreases, and the commonly used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and upsizing the transistors in memory cells are shown to be effective countermeasures against the drawbacks of Near-Threshold Computing. The results provide guidance for low-energy IC design on whether to use the nominal supply voltage or to lower it, in which case the resulting slowdown and less reliable behaviour must be taken into account.
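The energy/speed trade-off behind near-threshold operation can be sketched with two textbook relations: dynamic energy per operation scales as C·V², while gate delay grows roughly as V/(V - Vth)^alpha (the alpha-power law). All constants below are illustrative assumptions, not values from this thesis's 130 nm process.

```python
# Why lowering the supply voltage saves energy but costs speed (sketch).
C = 1e-15        # switched capacitance per operation (F), assumed
VTH = 0.4        # threshold voltage (V), assumed
ALPHA = 1.3      # velocity-saturation exponent, assumed

def energy(v):
    # Dynamic switching energy per operation: E = C * V^2.
    return C * v * v

def relative_delay(v, v_nom=1.2):
    # Alpha-power law gate delay, normalized to the nominal supply.
    d = lambda u: u / (u - VTH) ** ALPHA
    return d(v) / d(v_nom)

v_nom, v_ntc = 1.2, 0.5          # nominal vs near-threshold supply
print(f"energy ratio: {energy(v_ntc) / energy(v_nom):.2f}")   # 0.17
print(f"delay ratio:  {relative_delay(v_ntc):.1f}")           # 6.2
```

Under these assumptions, dropping the supply from 1.2 V to 0.5 V cuts dynamic energy per operation to about a sixth while making each gate several times slower, which mirrors the trade-off the thesis quantifies with circuit simulation.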