681 results for computing cluster
Abstract:
The [Ru3O(Ac)6(py)2(CH3OH)]+ cluster provides an effective electrocatalytic species for the oxidation of methanol under mild conditions. This complex exhibits characteristic electrochemical waves at -1.02, 0.15 and 1.18 V, associated with the successive Ru3(III,II,II)/Ru3(III,III,II)/Ru3(III,III,III)/Ru3(IV,III,III) redox couples, respectively. Above 1.7 V, formation of two Ru(IV) centers enhances the 2-electron oxidation of the methanol ligand, yielding formaldehyde, in agreement with the theoretical evolution of the HOMO levels as a function of the oxidation states. This work illustrates an important strategy for improving the efficiency of oxidation catalysis: using a multicentered redox catalyst and accessing its multiple higher oxidation states.
Abstract:
Book review
Abstract:
Companies in industry and trade increasingly choose to let a logistics company handle large parts of their logistics processes. The logistics companies, in turn, hand over the execution of individual services, such as different types of transport, to various partners within the industry. The thesis studies how logistics companies proceed when choosing which of their partners to engage in delivering a logistics service package, a work process here called activation. The focus is on the content of the activation and the factors that influence how it is carried out and which partners end up being engaged. The work builds on the network approach to the study of business relationships in industrial markets. The activation process is perceived as a rather ordinary, routine activity within the company, but it can also be expected to influence how the company's network of partners develops over time, as some relationships are strengthened while others are weakened. The empirical study involved 29 logistics companies in the Åbo region which, based on a discussion guide, described how they proceed when activating partners.
Abstract:
This study aimed to verify the influence of partial dehydration of "Niagara Rosada" grape clusters on the physicochemical quality of the pre-fermentation must. In Brazil, during winemaking it is often necessary to adjust the grape must when the physicochemical characteristics of the raw material are insufficient to produce wines in accordance with the Brazilian legislation for the classification of beverages, which establishes a minimum alcohol content of 8.6% for a beverage to be considered wine. Given that reducing the water content of grape berries concentrates the chemical compounds present in their composition, especially the total soluble solids, the treatments were formed by the combination of two temperatures (T1: 37.1 ºC and T2: 22.9 ºC) and two air speeds (S1: 1.79 m s-1 and S2: 3.21 m s-1), plus a control (T0) that did not undergo dehydration. Analyses of pH, Total Titratable Acidity (TTA, in mEq L-1), Total Soluble Solids (TSS, in ºBrix), water content on a dry basis and Concentration of Phenolic Compounds (CPC, in mg of gallic acid per 100 g of must) were performed. The mean comparison test identified statistically significant changes relevant to the adaptation of must for winemaking purposes, with the treatment at 22.9 ºC and an air speed of 1.79 m s-1 showing the largest increase in the concentration of total soluble solids, followed by the second best result for the concentration of phenolic compounds.
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor — indeed, memory resistor — whose resistance, or memristance to be precise, is changed by applying a voltage across, or current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods, and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
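To make the voltage-controlled resistance described above concrete, the following is a minimal sketch of the HP linear-drift memristor model (the 2008 HP Labs device), driven by a sinusoidal voltage. All parameter values are illustrative assumptions rather than figures from the thesis; plotting the resulting current against the voltage would show the characteristic pinched hysteresis loop.

```python
# Minimal sketch of the HP linear-drift memristor model; parameters are assumed.
import numpy as np

R_ON, R_OFF = 100.0, 16e3        # low/high resistance limits (ohm), assumed
D = 10e-9                        # device thickness (m), assumed
MU_V = 1e-14                     # dopant mobility (m^2 V^-1 s^-1), assumed
dt = 1e-4                        # integration time step (s)
t = np.arange(0.0, 2.0, dt)      # two periods of a 1 Hz drive
v = np.sin(2 * np.pi * 1.0 * t)  # applied voltage (V)

x = 0.1                          # normalised state variable w/D, in [0, 1]
i = np.zeros_like(t)
for k, vk in enumerate(v):
    m = R_ON * x + R_OFF * (1.0 - x)        # instantaneous memristance
    i[k] = vk / m                           # current through the device
    x += MU_V * R_ON / D**2 * i[k] * dt     # linear drift of the doped region
    x = min(max(x, 0.0), 1.0)               # keep the state within the device

# Plotting i against v gives the pinched hysteresis loop that identifies a
# memristor: its resistance depends on the history of the applied signal.
```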
Abstract:
As manufacturing technologies advance, ever more transistors fit on an IC. More complex circuits make it possible to perform more computations per unit of time. As circuit activity increases, so does energy consumption, which in turn increases the heat generated by the chip. Excessive heat limits circuit operation, so techniques are needed to reduce the energy consumption of circuits. A new research topic is small devices that monitor, for example, the human body, buildings or bridges. Such devices must have low energy consumption so that they can operate for long periods without recharging their batteries. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate circuits at a lower supply voltage than the one the manufacturer originally designed them for. This slows down and impairs circuit operation, but if the application can accept lower computing performance and reduced reliability, savings in energy consumption can be achieved. This thesis examines Near-Threshold Computing from several perspectives: first on the basis of earlier studies found in the literature, and then by studying the application of Near-Threshold Computing through two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at Near-Threshold Computing voltages can be taken to give a representative picture of a large share of the area and energy consumption of a typical IC. The case studies use a 130 nm technology, and real products of the fabrication process are modelled by running a large number of Monte Carlo simulations. This inexpensive-to-manufacture technology, combined with Near-Threshold Computing, makes it possible to produce low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing reduces the energy consumption of circuits significantly. On the other hand, circuit speed decreases, and the commonly used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and larger transistors in memory cells are shown to be effective countermeasures against the drawbacks of Near-Threshold Computing. The results provide guidance for low-energy IC design on whether to use the nominal supply voltage or to lower it, in which case the resulting slowdown and less reliable behaviour must be taken into account.
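A first-order sketch of the trade-off described above, assuming a simple C·Vdd² dynamic-energy model and the alpha-power delay law; the voltages, capacitance and exponent are illustrative assumptions (roughly 130 nm class), not values from the thesis.

```python
# First-order energy/delay sketch for near-threshold operation; all numbers assumed.
C_EFF = 1e-12      # effective switched capacitance per operation (F), assumed
V_TH = 0.35        # threshold voltage (V), assumed
ALPHA = 1.3        # velocity-saturation exponent, assumed

def dynamic_energy(vdd):
    """Dynamic switching energy per operation, E = C * Vdd^2."""
    return C_EFF * vdd ** 2

def relative_delay(vdd):
    """Gate delay up to a constant factor (alpha-power law)."""
    return vdd / (vdd - V_TH) ** ALPHA

v_nom, v_ntc = 1.2, 0.5          # nominal vs. near-threshold supply (V), assumed
e_ratio = dynamic_energy(v_ntc) / dynamic_energy(v_nom)
d_ratio = relative_delay(v_ntc) / relative_delay(v_nom)
print(f"energy per op: {e_ratio:.2f}x of nominal")   # ~0.17x
print(f"gate delay:    {d_ratio:.1f}x of nominal")   # ~4x slower with these parameters
```

The calculation illustrates the abstract's point: energy per operation drops sharply with the supply voltage, but the circuit slows down considerably, so the lower operating point only pays off when the application tolerates the reduced performance.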
Abstract:
Bovine tuberculosis (BTB) is a disease caused by infection with Mycobacterium bovis that affects humans and several mammalian species. BTB is of great importance because of the economic losses it causes in infected regions and its impact on public health. An epidemiological survey was carried out in the State of Bahia between 2008 and 2010 with the objective of estimating the prevalence and describing the spatio-temporal distribution of the disease. The state was stratified into four regions, each with homogeneous epidemiological and demographic characteristics representative of forms of livestock production. A total of 18,810 head of cattle over 2 years of age were sampled on 1,350 farms. The comparative cervical test was applied to each selected animal, and animals were considered positive if they reacted positively or were inconclusive twice. Latitude and longitude were recorded for each sampled farm with a Global Positioning System (GPS) receiver. The Cuzick-and-Edwards test and the spatial scan statistic were used to identify any spatial clustering of BTB. The herd prevalence in Bahia, indicating the proportion of infected farms, was 1.6% (95% CI: 1.0% - 2.69% per region). No significant evidence (P<0.05) of spatial aggregation or clustering was detected, possibly due to the low prevalence of the disease. These results suggest that BTB has low prevalence in the State of Bahia and that, under these epidemiological conditions, the infected herds found cannot be explained by spatially structured factors.
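For illustration, a hedged sketch of the Cuzick-and-Edwards k-nearest-neighbour test mentioned above, applied to toy data: the statistic counts, over all case farms, how many of each case's k nearest neighbours are also cases, and significance is assessed by permuting the case/control labels over the fixed farm locations. The coordinates and prevalence below are synthetic, not the survey data.

```python
# Cuzick-and-Edwards k-NN clustering test on synthetic farm data (illustrative only).
import numpy as np

def k_nearest_neighbours(coords, k):
    """Indices of the k nearest neighbours of each farm (excluding itself)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def cuzick_edwards_stat(nn, is_case):
    """T_k: over all case farms, count neighbours that are also cases."""
    return int(np.sum(is_case[nn[is_case]]))

def permutation_p_value(coords, is_case, k=3, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    nn = k_nearest_neighbours(coords, k)          # the geometry is fixed
    observed = cuzick_edwards_stat(nn, is_case)
    null = [cuzick_edwards_stat(nn, rng.permutation(is_case))
            for _ in range(n_perm)]
    p = (1 + sum(t >= observed for t in null)) / (n_perm + 1)
    return observed, p

# Toy example: 200 farm locations with ~1.6% positive herds, echoing the survey's prevalence.
rng = np.random.default_rng(1)
coords = rng.uniform(size=(200, 2))               # stand-in for GPS positions
is_case = rng.random(200) < 0.016
print(permutation_p_value(coords, is_case, k=3))
```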
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proved to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend for future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach where agents are on-chip components which monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes. A Honeycomb NoC architecture is proposed in this thesis with turn-model based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
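As an illustration of the agent-based run-time control idea, the sketch below shows a per-region agent that monitors utilisation, selects a DVFS operating point and power-gates the region when it stays idle. The operating points, thresholds and epoch-based policy are assumptions made for this example, not the design proposed in the thesis.

```python
# Sketch of an on-chip agent choosing DVFS levels and power gating for one region.
from dataclasses import dataclass
from typing import Optional, Tuple

# Assumed (voltage V, frequency MHz) operating points, highest-performance first.
DVFS_LEVELS = [(1.1, 800), (0.9, 500), (0.8, 300)]

@dataclass
class RegionAgent:
    level: int = 0            # index into DVFS_LEVELS
    idle_epochs: int = 0
    gated: bool = False

    def update(self, utilisation: float) -> Optional[Tuple[float, int]]:
        """Pick the next (V, f) point for this region; None means power-gated."""
        if utilisation < 0.05:
            self.idle_epochs += 1
            if self.idle_epochs >= 3:         # gate after three idle epochs
                self.gated = True
                return None
        else:
            self.idle_epochs, self.gated = 0, False
            if utilisation > 0.8 and self.level > 0:
                self.level -= 1               # scale up under heavy traffic
            elif utilisation < 0.3 and self.level < len(DVFS_LEVELS) - 1:
                self.level += 1               # scale down under light traffic
        return DVFS_LEVELS[self.level]

agent = RegionAgent()
for u in (0.9, 0.6, 0.2, 0.02, 0.01, 0.0):    # sample utilisation trace
    print(u, agent.update(u))
```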
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems, and the impact of cloud computing technology on the healthcare industry. The review shows that if problems related to data security are solved, cloud computing can positively transform healthcare institutions by benefiting the healthcare IT infrastructure as well as improving healthcare services. Therefore, this thesis explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders determine their direction when deciding to adopt cloud technology in their information systems.
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another. It is a compute-intensive operation, so transcoding a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see whether the current computing resources are adequate for the video requests; therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, computing resources are more expensive than storage resources. Therefore, to avoid repeating transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is also not cost-efficient, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them. This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of the proposed strategies is performed using a message passing interface based video transcoder, which uses a coarse-grain parallel processing approach where the video is segmented at the group of pictures level.
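To illustrate the computation versus storage trade-off, the sketch below keeps a transcoded video in the repository only while the expected cost of re-transcoding it on demand exceeds the cost of storing it for the next period. The prices, the one-month horizon and the linear cost model are illustrative assumptions, not the thesis's strategy or actual Amazon EC2/S3 pricing.

```python
# Sketch of a keep-or-delete decision for a transcoded video; prices are assumed.
def keep_in_storage(expected_requests_per_month: float,
                    transcode_cpu_hours: float,
                    size_gb: float,
                    compute_price_per_hour: float = 0.10,     # assumed $/CPU-hour
                    storage_price_per_gb_month: float = 0.02  # assumed $/GB-month
                    ) -> bool:
    """True if storing the video for the next month is cheaper than re-transcoding on demand."""
    retranscode_cost = expected_requests_per_month * transcode_cpu_hours * compute_price_per_hour
    storage_cost = size_gb * storage_price_per_gb_month
    return storage_cost <= retranscode_cost

# A popular video is worth keeping; a rarely requested one is cheaper to redo on demand.
print(keep_in_storage(expected_requests_per_month=20, transcode_cpu_hours=0.5, size_gb=2.0))    # True
print(keep_in_storage(expected_requests_per_month=0.01, transcode_cpu_hours=0.5, size_gb=2.0))  # False
```

In practice the access rate would be estimated from the video's recent popularity, which is what makes the strategy store popular transcoded videos longer than rarely accessed ones.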