854 results for cloud computing voip reti opennebula ruby
Abstract:
The spread of sociocultural approaches and critical literacy studies, which offer a holistic perspective on communicative skills, has reached our country at a time when the most common environment for writing is the internet, and information technologies have transformed writing with new channels, genres, forms of preparation and languages. Thanks to these changes, educational programmes now include new concepts of literacy related to knowledge and the use of digital environments. This paper explores the impact of introducing these new communicative environments into the teaching of written expression at secondary level and puts forward some ideas for linking learning how to write to present-day communicative contexts and established practices. Without forgetting the achievements of recent decades, we need to make a series of changes that bring new learned writing practices into the classroom and leave behind others we had championed as necessary when the goal was to move beyond exclusively linguistic or grammatical approaches.
Abstract:
A simple, sensitive and selective cloud point extraction procedure is described for the preconcentration and atomic absorption spectrometric determination of Zn2+ and Cd2+ ions in water and biological samples, after complexation with 3,3',3'',3'''-tetraindolyl (terephthaloyl) dimethane (TTDM) in basic medium, using Triton X-114 as the nonionic surfactant. Detection limits of 3.0 and 2.0 µg L-1 and quantification limits of 10.0 and 7.0 µg L-1 were obtained for Zn2+ and Cd2+ ions, respectively. The relative standard deviations were 2.9 and 3.3, and the enrichment factors 23.9 and 25.6, for Zn2+ and Cd2+ ions, respectively. The method enabled the determination of low levels of Zn2+ and Cd2+ ions in urine, blood serum and water samples.
Abstract:
A new cloud point extraction (CPE) method was developed for the separation and preconcentration of copper(II) prior to spectrophotometric analysis. For this purpose, 1-(2,4-dimethylphenyl)azonaphthalen-2-ol (Sudan II) was used as the chelating agent, and the solution pH was adjusted to 10.0 with borate buffer. Polyethylene glycol tert-octylphenyl ether (Triton X-114) was used as the extracting agent in the presence of sodium dodecyl sulphate (SDS). After phase separation based on the cloud point of the mixture, the surfactant-rich phase was diluted with acetone, and the enriched analyte was determined spectrophotometrically at 537 nm. The variables affecting CPE efficiency were optimized. The calibration curve was linear within the range 0.285-20 µg L-1, with a detection limit of 0.085 µg L-1. The method was successfully applied to the quantification of copper in different beverage samples.
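As an illustration of how such a calibration curve is typically used for quantification, here is a minimal sketch in Python; the standard concentrations, absorbances and the unknown reading below are invented for demonstration and are not data from the study above.

```python
# Hypothetical spectrophotometric calibration sketch
# (all values invented; not data from the study above).
import numpy as np

# Absorbance at 537 nm for copper standards within the
# reported linear range (0.285-20 ug/L).
conc = np.array([0.5, 2.0, 5.0, 10.0, 20.0])      # ug/L
absorbance = np.array([0.021, 0.079, 0.196, 0.388, 0.771])

# Least-squares fit: A = m*C + b
m, b = np.polyfit(conc, absorbance, 1)
r = np.corrcoef(conc, absorbance)[0, 1]
print(f"slope={m:.4f}, intercept={b:.4f}, r={r:.4f}")

# Quantify an unknown beverage extract from its absorbance.
a_unknown = 0.150
c_unknown = (a_unknown - b) / m
print(f"Cu concentration: {c_unknown:.2f} ug/L")
```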
Abstract:
A new analytical approach was developed involving cloud point extraction (CPE) and spectrofluorimetric determination of triamterene (TM) in biological fluids. A urine or plasma sample was prepared and adjusted to pH 7, and TM was then quickly extracted by CPE with 0.05% (w/v) Triton X-114 as the extractant. The main factors affecting the extraction efficiency (the pH of the sample, the Triton X-114 concentration, the addition of salt, the extraction time and temperature, and the centrifugation time and speed) were studied and optimized. The method gave calibration curves for TM with good linearity and correlation coefficients (r) higher than 0.99. The method showed good precision and accuracy, with intra- and inter-assay precisions of less than 8.50% at all concentrations. Standard addition recovery tests were carried out, and the recoveries ranged from 94.7% to 114%. The limits of detection and quantification were 3.90 and 11.7 µg L-1, respectively, for urine, and 5.80 and 18.0 µg L-1, respectively, for plasma. The newly developed, environmentally friendly method was successfully used to extract and determine TM in human urine samples.
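The standard addition recovery figure reported above is conventionally computed as (spiked result - unspiked result) / amount added x 100. A minimal sketch with invented numbers:

```python
# Hypothetical standard-addition recovery check
# (values invented; not data from the study above).
def recovery_percent(found_spiked, found_unspiked, added):
    """Recovery (%) = (spiked result - unspiked result) / amount added * 100."""
    return (found_spiked - found_unspiked) / added * 100.0

# A urine sample measured before and after spiking with 50 ug/L of TM.
print(recovery_percent(found_spiked=61.2, found_unspiked=12.8, added=50.0))
# -> 96.8, i.e. within the 94.7-114% range reported above
```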
Abstract:
In this study, a procedure was developed for the cloud point extraction of Pd(II) and Rh(III) ions from aqueous solution using Span 80 (a non-ionic surfactant) prior to their determination by flame atomic absorption spectroscopy. The method is based on the extraction of Pd(II) and Rh(III) ions at pH 10 using Span 80 with no chelating agent. We investigated the effect of various parameters on the recovery of the analyte ions, including pH, equilibration temperature and time, Span 80 concentration, and ionic strength. Under the best experimental conditions, the limits of detection based on 3Sb for Pd(II) and Rh(III) ions were 1.3 and 1.2 ng mL-1, respectively. Seven replicate determinations of a mixture of 0.5 µg mL-1 palladium and rhodium ions gave mean absorbances of 0.058 and 0.053, with relative standard deviations of 1.8 and 1.6%, respectively. The developed method was successfully applied to the extraction and determination of palladium and rhodium ions in road dust and standard samples, and satisfactory results were obtained.
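The 3Sb criterion mentioned above is the common three-sigma convention: the detection limit is three times the standard deviation of replicate blank measurements divided by the calibration slope. A minimal sketch, with invented blank readings and slope:

```python
# Minimal sketch of the 3Sb (three-sigma) detection-limit convention
# (blank readings and slope are invented, not data from the study above).
import statistics

blank_absorbances = [0.0021, 0.0019, 0.0024, 0.0018, 0.0022,
                     0.0020, 0.0023, 0.0019, 0.0021, 0.0020]
sb = statistics.stdev(blank_absorbances)   # standard deviation of the blank
slope = 0.00045                            # calibration slope, abs per ng/mL

lod = 3 * sb / slope                       # LOD = 3 * Sb / m
print(f"LOD = {lod:.2f} ng/mL")            # same order as the limits above
```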
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor, indeed a memory resistor, whose resistance, or memristance to be precise, is changed by applying a voltage across, or a current through, the device. Memristive computing is a new area of research, and many of its fundamental questions remain open. For example, it is still unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited so that memristors perform computation in a natural way, instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications; in particular, cellular neural networks and associative memory architectures are proposed as applications that benefit significantly from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors. The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intra-chip communication can be naturally implemented by a memristive crossbar structure.
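As a concrete illustration of the voltage-driven memristance described above, the following is a minimal simulation sketch of the HP-style linear ion drift model (a textbook simplification with illustrative parameter values, not one of the device models developed in the thesis):

```python
# Minimal sketch of the HP-style linear ion drift memristor model
# (a textbook simplification; parameter values are illustrative).
import math

R_ON, R_OFF = 100.0, 16e3   # ohm: resistance when fully doped / undoped
D = 10e-9                   # m: device thickness
MU_V = 1e-14                # m^2/(V*s): dopant mobility

w = 0.5 * D                 # initial width of the doped region
dt = 1e-5                   # s: integration time step

for step in range(20000):                        # one 5 Hz period (0.2 s)
    t = step * dt
    v = 2.0 * math.sin(2 * math.pi * 5 * t)      # sinusoidal drive voltage
    m = R_ON * (w / D) + R_OFF * (1 - w / D)     # memristance M(w)
    i = v / m                                    # instantaneous current
    w += MU_V * (R_ON / D) * i * dt              # linear ion drift: dw/dt
    w = min(max(w, 0.0), D)                      # clamp state at boundaries
    if step % 4000 == 0:
        print(f"t={t:.3f} s  v={v:+.2f} V  M={m / 1e3:.2f} kohm")
```

The printout shows the device resistance drifting with the applied signal and persisting between drive cycles, which is the nonvolatile, state-dependent behaviour that memristive logic and analog arithmetic build on.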
Abstract:
Ill., 10 x 24 cm
Abstract:
As manufacturing technologies advance, ever more transistors can be fitted onto IC chips. More complex circuits make it possible to perform more computations per unit of time. As circuit activity increases, so does energy consumption, which in turn increases the heat generated by the chip. Excessive heat limits circuit operation, so techniques are needed to reduce the energy consumption of circuits. A new research focus is small devices that monitor, for example, the human body, buildings or bridges. Such devices must have low energy consumption so that they can operate for long periods without battery recharging. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate circuits at a lower supply voltage than the one the manufacturer originally designed for them. This slows down and impairs circuit operation; however, if lower computing performance and reduced operational reliability can be accepted, savings in energy consumption can be achieved. This thesis examines Near-Threshold Computing from several perspectives: first on the basis of earlier studies found in the literature, and then by investigating the application of Near-Threshold Computing through two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at Near-Threshold Computing voltages can be taken to give a comprehensive picture of a large share of the area and energy consumption of a typical IC. The case studies use a 130 nm technology and model real outcomes of the circuit manufacturing process by running numerous Monte Carlo simulations. This technology, inexpensive to manufacture in, combined with Near-Threshold Computing makes it possible to produce low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing reduces the energy consumption of circuits significantly. On the other hand, circuit speed decreases, and the commonly used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and larger transistors in memory cells are shown to be effective countermeasures against the drawbacks of Near-Threshold Computing. The results provide grounds, in low-energy IC design, for deciding whether to use the nominal supply voltage or to lower it, in which case the slowdown and less reliable behaviour of the circuit must be taken into account.
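The trade-off the thesis investigates can be sketched to first order: dynamic energy per operation scales roughly as C*Vdd^2, while gate delay grows steeply as the supply approaches the threshold voltage (the alpha-power law). The sketch below uses illustrative parameter values, not results from the thesis:

```python
# First-order sketch of the near-threshold trade-off: quadratic energy
# savings versus steeply growing delay. All numbers are illustrative.
C_EFF = 1.0    # normalized switched capacitance
V_TH = 0.35    # V: assumed threshold voltage
ALPHA = 1.5    # velocity-saturation exponent (typically 1.2-2.0)

def dyn_energy(vdd):
    return C_EFF * vdd ** 2                 # E ~ C * Vdd^2

def gate_delay(vdd):
    return vdd / (vdd - V_TH) ** ALPHA      # alpha-power law, normalized

NOMINAL = 1.2  # V: assumed nominal supply for the process
for vdd in (1.2, 0.9, 0.6, 0.45):
    e = dyn_energy(vdd) / dyn_energy(NOMINAL)
    d = gate_delay(vdd) / gate_delay(NOMINAL)
    print(f"Vdd={vdd:.2f} V  energy x{e:.2f}  delay x{d:.1f}")
```

At the near-threshold point (0.45 V here), the model gives roughly a 7x reduction in dynamic energy per operation at the cost of an order-of-magnitude slowdown, which mirrors the qualitative conclusion stated above.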
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, the Network-on-Chip (NoC) has proved to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits, including higher integration density, smaller footprint, heterogeneous integration, etc. Moreover, 3D technology can significantly improve network communication and effectively avoid long wirings, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage, operating frequency, etc. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, one of the key factors for NoCs, are also explored for energy saving purposes. A Honeycomb NoC architecture with turn-model based deadlock-free routing algorithms is proposed in this thesis. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
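As a software-level illustration of the agent concept described above, the sketch below maps a monitored utilization level to a discrete voltage/frequency operating point, with power gating for idle regions. The operating points, thresholds and names (OperatingPoint, DVFS_TABLE) are invented for illustration; the thesis implements such agents as on-chip hardware, not Python:

```python
# Hypothetical sketch of a per-region DVFS/power-gating agent policy
# (operating points and thresholds are invented for illustration).
from dataclasses import dataclass

@dataclass
class OperatingPoint:
    vdd: float   # V
    freq: float  # MHz

# Discrete DVFS table; the (0, 0) entry stands in for power gating.
DVFS_TABLE = [
    OperatingPoint(0.0, 0.0),      # power-gated (idle region)
    OperatingPoint(0.8, 200.0),
    OperatingPoint(1.0, 400.0),
    OperatingPoint(1.2, 600.0),
]

def select_point(utilization: float) -> OperatingPoint:
    """Map the monitored utilization (0..1) to an operating point."""
    if utilization == 0.0:
        return DVFS_TABLE[0]
    if utilization < 0.4:
        return DVFS_TABLE[1]
    if utilization < 0.8:
        return DVFS_TABLE[2]
    return DVFS_TABLE[3]

for u in (0.0, 0.25, 0.6, 0.95):
    p = select_point(u)
    print(f"util={u:.2f} -> Vdd={p.vdd} V, f={p.freq} MHz")
```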
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
This study examines supply chain management problems in practice and the reduction of perceived demand information distortion (the bullwhip effect) with an interfirm information system, delivered as a cloud service to a company operating in the telecommunications industry. The purpose is to shed light on whether, in practice, the interfirm information system has an impact on the performance of the supply chain and, in particular, on reducing the bullwhip effect. In addition, a holistic case study of the global telecommunications company's supply chain is presented, together with the challenges it is facing, and some measures to improve the situation are proposed. The theoretical part covers the supply chain and its management, as well as ways of improving its efficiency, introducing the relevant theories and related previous research. In addition, the study presents performance metrics for detecting and tracking the bullwhip effect. The theoretical part ends by presenting the cloud-based business intelligence framework used as the background of this study. The research strategy is a qualitative case study supported by quantitative data collected from the telecommunications company's databases. Qualitative data were gathered mainly through two open interviews and e-mail exchanges during the development project. Other material was collected from the company during the project, and the company's web site was also used as a source. The data were collected into a dedicated case study database in order to increase reliability. The results show that the bullwhip effect can be reduced with the interfirm information system and with the use of the CPFR and S&OP models, in particular by combining them into integrated business planning. According to this study, however, the interfirm information system does not solve all of the supply chain and effectiveness related problems, because the company's processes and human activities also have a major impact.
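One widely used metric for detecting the bullwhip effect mentioned above is the ratio of the variance of orders placed upstream to the variance of end-customer demand; a ratio above 1 indicates amplification. A minimal sketch with invented series:

```python
# A common bullwhip-effect metric: the ratio of the variance of orders
# placed upstream to the variance of end-customer demand. A ratio above 1
# indicates demand distortion. The series below are invented.
import statistics

demand = [100, 104, 98, 102, 99, 101, 103, 97]   # end-customer demand
orders = [100, 112, 90, 110, 92, 108, 111, 88]   # orders to the supplier

bullwhip = statistics.variance(orders) / statistics.variance(demand)
print(f"bullwhip ratio = {bullwhip:.2f}")        # ~17 here: strong amplification
```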
Abstract:
The aim of this thesis is to examine how the development of electronic financial administration has affected auditing and how this is reflected in professional journal articles from 2003 to 2013. As secondary objectives, the study examines which benefits and challenges brought by the development of electronic financial administration have been identified, from the auditing point of view, in the Finnish and international professional journal articles reviewed. This is a qualitative study, and content analysis is used as the research methodology. As a result of the development of computer-assisted audit techniques, these techniques can be taken towards continuous auditing. Laptop computers and cloud services have made auditing more independent of time and place. XBRL has improved the comparability, reliability and accuracy of data. The challenges include the need to develop auditors' IT skills and the compatibility between the client's and the audit firm's information systems. Even good software can leave room for fraud, in which case new innovative techniques are needed to detect it. The reliability of the empirical part of the thesis rests on the perspectives of the authors of the journal articles. The future development of the auditing field depends not only on evolving technology but also on attitudes.