802 results for invoice computing
Abstract:
In general terms, Energy Efficiency can be defined as reducing energy consumption while maintaining the same energy services, without diminishing our comfort or quality of life, protecting the environment, securing the energy supply and fostering sustainable behaviour in its use. The main objective of this work is to reduce the energy consumption and the contracted power term of the Universitat de Vic by applying a savings programme with corrective measures in the operation of its facilities and spaces. To reach this objective, a detailed study was first carried out to gather all the information needed to apply corrective measures to the largest pocket of consumption. Once identified, the viability of investing in the most efficient corrective measures was assessed in order to make the best use of the resources allocated. The study was carried out in building F of the Campus Miramarges, following the indications of Arnau Bardolet (Head of Maintenance at UVic). The building comprises a mezzanine, a ground floor and four upper floors. The measuring equipment used for the study was a Circutor AR5-L series analyser; these are programmable devices that measure, calculate and store in memory the main electrical parameters of three-phase networks. Complementary future projects could include installing sensors, installing TCP/IP converter modules, exploiting the existing intranet, and building a SCADA system with a control and management synoptic operated from a single workstation. Such an application would display on a PC screen the state of all controlled elements via a synoptic (switching classroom lighting and sockets on and off manually, the state of classroom lighting and sockets, instantaneous and accumulated energy consumption, the state of the corridors, among others) and would allow the data collected in the database to be analysed.
Each space would have its own specific automatic operating logic. Among the most relevant conclusions of this work: the contracted power on the bill can be reduced, since the power actually consumed is below it; there are no penalties on the bill for reactive power, as the power factor compensator works correctly; the start time of daily energy consumption can be moved, since it does not correspond to teaching activity; the voltage and frequency values are within normal limits; and the harmonics are at the maximum threshold. From these conclusions, the most important corrective measures are: switching to LED technology, automatically timing the switching on and off of fluorescent lighting and classroom IT equipment according to the teaching calendar, and installing motion sensors with light-level detection in the corridors. All the conclusions drawn in this work can be applied to every building of the university, provided that an individual study of each one is first carried out following the same criteria, so as to optimise the investment.
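As a rough illustration of the first conclusion, a minimal sketch of how measured peak demand can justify lowering the contracted power term. The demand figures, current contract value and 10 % safety margin are hypothetical; the real study relies on AR5-L measurements:

```python
# Hypothetical sketch: deciding whether the contracted power term can be lowered.
# All figures and the 10% safety margin are assumptions, not values from the study.

def suggest_contracted_power(peak_demands_kw, margin=0.10):
    """Suggest a contracted power term: the measured peak plus a safety margin."""
    peak = max(peak_demands_kw)
    return round(peak * (1 + margin), 1)

# Monthly maximum demands (kW) as a network analyser such as the AR5-L would record.
monthly_peaks = [118.4, 121.0, 97.5, 88.2, 110.3, 125.6]
contracted = 160.0  # currently contracted power term, kW (assumed figure)

suggested = suggest_contracted_power(monthly_peaks)
if suggested < contracted:
    print(f"Contracted power could be lowered from {contracted} kW to ~{suggested} kW")
```

The margin guards against a peak slightly above the observation window; a real decision would also account for tariff penalty rules for exceeding the contracted term.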
Abstract:
Memristive computing refers to the utilization of the memristor, the fourth fundamental passive circuit element, in computational tasks. The existence of the memristor was theoretically predicted in 1971 by Leon O. Chua, but experimentally validated only in 2008 by HP Labs. A memristor is essentially a nonvolatile nanoscale programmable resistor — indeed, a memory resistor — whose resistance, or memristance to be precise, is changed by applying a voltage across, or current through, the device. Memristive computing is a new area of research, and many of its fundamental questions still remain open. For example, it is yet unclear which applications would benefit the most from the inherent nonlinear dynamics of memristors. In any case, these dynamics should be exploited to allow memristors to perform computation in a natural way instead of attempting to emulate existing technologies such as CMOS logic. Examples of such methods of computation presented in this thesis are memristive stateful logic operations, memristive multiplication based on the translinear principle, and the exploitation of nonlinear dynamics to construct chaotic memristive circuits. This thesis considers memristive computing at various levels of abstraction. The first part of the thesis analyses the physical properties and the current-voltage behaviour of a single device. The middle part presents memristor programming methods, and describes microcircuits for logic and analog operations. The final chapters discuss memristive computing in large-scale applications. In particular, cellular neural networks and associative memory architectures are proposed as applications that significantly benefit from memristive implementation. The work presents several new results on memristor modeling and programming, memristive logic, analog arithmetic operations on memristors, and applications of memristors.
The main conclusion of this thesis is that memristive computing will be advantageous in large-scale, highly parallel mixed-mode processing architectures. This can be justified by the following two arguments. First, since processing can be performed directly within memristive memory architectures, the required circuitry, processing time, and possibly also power consumption can be reduced compared to a conventional CMOS implementation. Second, intrachip communication can be naturally implemented by a memristive crossbar structure.
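As a hedged illustration of the single-device behaviour analysed in the first part of the thesis, here is a minimal sketch of the HP Labs linear ion drift memristor model. The parameter values are illustrative assumptions, not fitted device data:

```python
import math

# Minimal sketch of the linear ion drift memristor model (HP Labs, 2008).
# R_ON/R_OFF, thickness D and mobility MU_V below are illustrative assumptions.

R_ON, R_OFF = 100.0, 16e3   # resistance of fully doped / undoped device (ohms)
D = 10e-9                    # device thickness (m)
MU_V = 1e-14                 # dopant drift mobility (m^2 s^-1 V^-1)

def simulate(voltage, t_end=1.0, steps=10000, w0=0.1):
    """Integrate the doped-region width w under a drive; return memristance samples."""
    w = w0 * D
    dt = t_end / steps
    memristance = []
    for i in range(steps):
        v = voltage(i * dt)
        m = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance M(w)
        i_cur = v / m                               # current through the device
        w += MU_V * (R_ON / D) * i_cur * dt         # linear ion drift: dw/dt = mu*R_ON*i/D
        w = min(max(w, 0.0), D)                     # keep the state inside the device
        memristance.append(m)
    return memristance

ms = simulate(lambda t: math.sin(2 * math.pi * t))
# The memristance swings with the applied waveform but stays within [R_ON, R_OFF],
# which is the nonvolatile programmable-resistor behaviour described above.
```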
Abstract:
As manufacturing technologies advance, ever more transistors fit on an IC. More complex circuits make it possible to perform more computations per unit of time. As circuit activity increases, so does energy consumption, which in turn increases the heat produced by the chip. Excessive heat limits circuit operation. Techniques are therefore needed to reduce circuit energy consumption. A new research topic is small devices that monitor, for example, the human body, buildings or bridges. Such devices must consume little energy so that they can operate for long periods without recharging their batteries. Near-Threshold Computing is a technique that aims to reduce the energy consumption of integrated circuits. The principle is to operate circuits at a lower supply voltage than the one the manufacturer originally designed them for. This slows down and degrades circuit operation; if, however, the application can accept lower computing performance and reduced reliability, savings in energy consumption can be achieved. This thesis examines Near-Threshold Computing from several perspectives: first on the basis of previous studies found in the literature, and then by applying the technique in two case studies. The case studies examine an FO4 inverter and a 6T SRAM cell by means of circuit simulations. The behaviour of these components at near-threshold voltages can be taken to give a representative picture of a large share of the area and energy consumption of a typical IC. The case studies use a 130 nm technology, and real products of the fabrication process are modelled by running numerous Monte Carlo simulations.
This inexpensive technology, combined with Near-Threshold Computing, makes it possible to manufacture low-energy circuits at a reasonable price. The results of this thesis show that Near-Threshold Computing reduces circuit energy consumption significantly. On the other hand, circuit speed degrades, and the commonly used 6T SRAM memory cell becomes unreliable. Longer paths in logic circuits and upsizing the transistors in memory cells are shown to be effective countermeasures against the drawbacks of Near-Threshold Computing. The results provide grounds for deciding, in low-energy IC design, whether to use the normal supply voltage or to lower it, in which case the slower and less reliable behaviour of the circuit must be taken into account.
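The core trade-off can be illustrated with first-order formulas: dynamic energy scales as C·Vdd², while gate delay grows roughly as Vdd/(Vdd − Vth)^α near threshold (the alpha-power law). A minimal sketch, using assumed parameter values rather than the 130 nm figures from the case studies:

```python
# First-order sketch of the energy/delay trade-off behind Near-Threshold Computing.
# E_dyn ~ C * Vdd^2; gate delay ~ Vdd / (Vdd - Vth)^alpha (alpha-power law).
# The capacitance, threshold voltage and alpha are illustrative assumptions.

C = 1e-12      # switched capacitance per operation, farads (assumed)
VTH = 0.35     # threshold voltage, volts (assumed)
ALPHA = 1.3    # velocity-saturation exponent (assumed)

def dynamic_energy(vdd):
    """Dynamic switching energy per operation, joules."""
    return C * vdd ** 2

def relative_delay(vdd, vdd_nom=1.2):
    """Gate delay at vdd, normalised to the nominal supply voltage."""
    delay = lambda v: v / (v - VTH) ** ALPHA
    return delay(vdd) / delay(vdd_nom)

# Lowering the supply from a nominal 1.2 V to a near-threshold 0.5 V:
saving = 1 - dynamic_energy(0.5) / dynamic_energy(1.2)   # large dynamic-energy saving
slowdown = relative_delay(0.5)                            # but the circuit runs slower
```

With these assumed values the dynamic energy drops by roughly four fifths while gates become several times slower, mirroring the thesis's conclusion that the saving comes at the cost of speed and reliability.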
Abstract:
In accordance with Moore's law, the increasing number of on-chip integrated transistors has enabled modern computing platforms with not only higher processing power but also more affordable prices. As a result, these platforms, including portable devices, workstations and data centres, are becoming an inevitable part of human society. However, with the demand for portability and the rising cost of power, energy efficiency has emerged as a major concern for modern computing platforms. As the complexity of on-chip systems increases, Network-on-Chip (NoC) has proven to be an efficient communication architecture which can further improve system performance and scalability while reducing the design cost. Therefore, in this thesis, we study and propose energy optimization approaches based on the NoC architecture, with special focus on the following aspects. As the architectural trend of future computing platforms, 3D systems have many benefits including higher integration density, smaller footprint and heterogeneous integration. Moreover, 3D technology can significantly improve network communication and effectively avoid long wires, and therefore provide higher system performance and energy efficiency. Given the dynamic nature of on-chip communication in large-scale NoC-based systems, run-time system optimization is of crucial importance in order to achieve higher system reliability and, essentially, energy efficiency. In this thesis, we propose an agent-based system design approach in which agents are on-chip components that monitor and control system parameters such as supply voltage and operating frequency. With this approach, we have analysed the implementation alternatives for dynamic voltage and frequency scaling and power gating techniques at different granularities, which reduce both dynamic and leakage energy consumption. Topologies, being one of the key factors for NoCs, are also explored for energy-saving purposes.
A Honeycomb NoC architecture is proposed in this thesis with turn-model-based deadlock-free routing algorithms. Our analysis and simulation-based evaluation show that Honeycomb NoCs outperform their Mesh-based counterparts in terms of network cost, system performance and energy efficiency.
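A minimal sketch of the agent-based DVFS idea described above: a per-region agent maps observed link utilisation to a voltage/frequency operating point. The operating points and thresholds are illustrative assumptions, not values from the thesis:

```python
# Hedged sketch of agent-based dynamic voltage and frequency scaling for a NoC
# region. The V/f table and utilisation thresholds are illustrative assumptions.

OPERATING_POINTS = [  # (supply voltage V, clock frequency MHz), lowest to highest
    (0.8, 200),
    (1.0, 500),
    (1.2, 1000),
]

def select_operating_point(utilisation):
    """Map observed NoC link utilisation (0..1) to a V/f operating point."""
    if utilisation < 0.3:
        return OPERATING_POINTS[0]   # light traffic: scale down to cut energy
    if utilisation < 0.7:
        return OPERATING_POINTS[1]   # moderate traffic: middle operating point
    return OPERATING_POINTS[2]       # congested: run at full voltage and speed
```

In the thesis's terms, the agent is an on-chip component; lowering both voltage and frequency together reduces dynamic energy, and power gating an idle region (not shown) would cut leakage as well.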
Abstract:
This thesis discusses the opportunities and challenges of cloud computing technology in healthcare information systems by reviewing the existing literature on cloud computing and healthcare information systems and on the impact of cloud computing on the healthcare industry. The review shows that if the problems related to data security are solved, cloud computing will positively transform healthcare institutions, benefiting the healthcare IT infrastructure as well as improving healthcare services. This thesis therefore explores the opportunities and challenges associated with cloud computing in the context of Finland, in order to help healthcare organizations and stakeholders set their direction when deciding to adopt cloud technology in their information systems.
Abstract:
Video transcoding refers to the process of converting a digital video from one format into another format. It is a compute-intensive operation. Therefore, transcoding of a large number of simultaneous video streams requires a large amount of computing resources. Moreover, to handle different load conditions in a cost-efficient manner, the video transcoding service should be dynamically scalable. Infrastructure as a Service (IaaS) Clouds currently offer computing resources, such as virtual machines, under the pay-per-use business model. Thus the IaaS Clouds can be leveraged to provide a cost-efficient, dynamically scalable video transcoding service. To use computing resources efficiently in a cloud computing environment, cost-efficient virtual machine provisioning is required to avoid over-utilization and under-utilization of virtual machines. This thesis presents proactive virtual machine resource allocation and de-allocation algorithms for video transcoding in cloud computing. Since users' requests for videos may change at different times, a check is required to see if the current computing resources are adequate for the video requests. Therefore, work on admission control is also provided. In addition to admission control, temporal resolution reduction is used to avoid jitter in a video. Furthermore, in a cloud computing environment such as Amazon EC2, the computing resources are more expensive than the storage resources. Therefore, to avoid repetition of transcoding operations, a transcoded video needs to be stored for a certain time. Storing all videos for the same amount of time is not cost-efficient either, because popular transcoded videos have a high access rate while unpopular transcoded videos are rarely accessed. This thesis provides a cost-efficient computation and storage trade-off strategy, which stores videos in the video repository as long as it is cost-efficient to store them.
This thesis also proposes video segmentation strategies for bit rate reduction and spatial resolution reduction video transcoding. The evaluation of the proposed strategies is performed using a message passing interface based video transcoder, which uses a coarse-grained parallel processing approach where video is segmented at the group-of-pictures level.
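The computation/storage trade-off can be sketched as a simple per-video decision: keep a transcoded video cached while the expected cost of re-transcoding it exceeds the cost of storing it. The prices and popularity figures below are assumptions for illustration, not Amazon EC2 rates or the thesis's actual strategy parameters:

```python
# Sketch of a computation vs. storage trade-off: cache a transcoded video while
# the expected re-transcoding cost exceeds the storage cost. All prices and the
# simple per-month popularity model are illustrative assumptions.

def keep_in_storage(access_rate_per_month, transcode_cost, size_gb,
                    storage_price_per_gb_month=0.10):
    """True if caching the video for the next month is cheaper than re-transcoding."""
    storage_cost = size_gb * storage_price_per_gb_month
    expected_transcode_cost = access_rate_per_month * transcode_cost
    return expected_transcode_cost > storage_cost

# A popular video is worth keeping; a rarely watched one is cheaper to re-transcode:
keep_in_storage(50, 0.05, 2.0)   # expected 2.50 vs 0.20 storage -> keep
keep_in_storage(1, 0.05, 2.0)    # expected 0.05 vs 0.20 storage -> drop
```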
Abstract:
Smartphones have become part and parcel of our lives, with mobility providing freedom from the constraints of time and space. In addition, the number of smartphones produced each year is skyrocketing. However, this has also created discrepancies, or fragmentation, among devices and operating systems, which in turn has made it exceedingly hard for developers to deliver hundreds of similarly featured applications in various versions for market consumption. This thesis is an attempt to investigate whether cloud-based mobile development platforms can mitigate and eventually eliminate fragmentation challenges. During this research, we selected and analyzed the most popular cloud-based development platforms and tested their integrated cloud features. This research showed that cloud-based mobile development platforms may be able to reduce mobile fragmentation and enable a single codebase to deliver a mobile application for different platforms.
Abstract:
The aim of this master’s thesis is to research and analyze how purchase invoice processing can be automated and streamlined in a system renewal project. The impacts of workflow automation on invoice handling are studied in terms of time, cost and quality. Purchase invoice processing has a lot of potential for automation because of its labor-intensive and repetitive nature. As a case study combining both qualitative and quantitative methods, the topic is approached from a business process management point of view. The current process was first explored through interviews and workshop meetings to create a holistic understanding of the process at hand. Requirements for process streamlining were then researched, focusing on specified vendors and their purchase invoices, which helped to identify the critical factors for successful invoice automation. To optimize the flow from invoice receipt to approval for payment, the invoice receiving process was outsourced and the automation functionalities of the new system were utilized in invoice handling. The quality of invoice data and the need for simple, structured purchase order (PO) invoices were emphasized in the system testing phase. Hence, consolidated invoices containing references to multiple PO or blanket release numbers should be simplified in order to use automated PO matching. With non-PO invoices, it is important to receive the buyer reference details in an applicable invoice data field so that automation rules can be created to route invoices to a review and approval flow. At the beginning of the project, invoice processing was seen as ineffective both time- and cost-wise, and it required a lot of manual labor to carry out all tasks. Based on the testing results, it was estimated that over half of the invoices could be automated within a year after system implementation. Processing times could be reduced remarkably, which would then result in savings of up to 40 % in annual processing costs.
Due to several advancements in the purchase invoice process, business process quality could also be perceived as improved.
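A minimal sketch of the automated PO matching rule described above: an invoice is routed straight to approval when its PO reference is found and the totals agree within a tolerance; otherwise it goes to manual review. The field names, figures and tolerance are hypothetical, not the case system's actual configuration:

```python
# Illustrative sketch of automated PO matching for purchase invoices.
# PO data, field names and the 1% matching tolerance are assumptions.

purchase_orders = {"PO-1001": 1250.00, "PO-1002": 480.50}  # PO number -> ordered total

def route_invoice(invoice, tolerance=0.01):
    """Return 'auto-approve' or 'manual review' for a purchase invoice."""
    po_total = purchase_orders.get(invoice.get("po_number"))
    if po_total is None:
        return "manual review"            # non-PO invoice or unknown reference
    if abs(invoice["total"] - po_total) <= tolerance * po_total:
        return "auto-approve"             # PO match within tolerance
    return "manual review"                # price or quantity mismatch

route_invoice({"po_number": "PO-1001", "total": 1250.00})  # matched -> auto-approve
route_invoice({"po_number": None, "total": 99.00})         # non-PO -> manual review
```

This mirrors the abstract's point that consolidated invoices referencing multiple POs defeat such a rule, and that non-PO invoices need a buyer reference field for routing instead.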
Abstract:
Smart home implementation in residential buildings promises to optimize energy usage and save a significant amount of energy simply through a better understanding of the user's energy usage profile. Apart from the energy optimisation prospects of this technology, it also aims to guarantee occupants a significant amount of comfort and remote control over home appliances, both at home and from remote locations. However, a smart home investment, just like any other kind of investment, requires an adequate measurement and justification of the economic gains it could offer before its realization. These economic gains may differ between occupants due to their inherent behaviours and tendencies. It is therefore pertinent to investigate the various behaviours and tendencies of occupants in different domains of interest, and to measure the value of the energy savings accrued by smart home implementations in these domains, in order to justify such economic gains. This thesis investigates two domains of interest (the rented apartment and the owned apartment) for two behavioural profiles (Finnish and German), obtained from observation and corroborated by interviews, to measure the payback time and Return on Investment (ROI) of their smart home implementations. Similar measures are also obtained for an identified Australian use case. The research findings reveal that building automation under the Finnish behavioural profile appears to offer a better ROI and payback time for smart home implementations.
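The payback-time and ROI measures used above reduce to simple arithmetic; a minimal sketch with assumed figures rather than the thesis's measured values:

```python
# Minimal sketch of the payback-time and ROI calculation used to compare smart
# home investments. The investment and savings figures are assumptions.

def payback_years(investment, annual_savings):
    """Years until cumulative savings cover the initial outlay."""
    return investment / annual_savings

def roi(investment, annual_savings, horizon_years):
    """Return on investment over a fixed horizon, as a fraction of the outlay."""
    return (annual_savings * horizon_years - investment) / investment

# An assumed 4000 EUR installation saving 500 EUR per year:
payback_years(4000, 500)   # 8.0 years to break even
roi(4000, 500, 15)         # 0.875, i.e. 87.5% return over a 15-year horizon
```

A comparison across behavioural profiles, as in the thesis, would plug different annual-savings estimates into the same two formulas.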
Abstract:
This master’s thesis studies the case company’s current purchase invoice process and the challenges related to it. Like most other master’s theses, this study consists of both a theoretical and an empirical part. The purpose of this work is to combine the theoretical and empirical parts so that the theoretical part brings value to the empirical case study. The case company’s main business is frequency converters, covering both low voltage AC and DC drives and medium voltage AC drives, which are used across all industries and applications. The main focus of this study is on modelling the current invoice process. When the existing process is modelled with discipline and care, its current challenges can be understood better. The empirical study relies heavily on interviews and existing, yet fragmented, data. This, along with the author’s own calculations and analysis, forms the foundation for the empirical part of this master’s thesis.
Abstract:
Power consumption is still an issue today in wearable computing applications. The aim of the present paper is to raise awareness of the power consumption of wearable computing devices in specific scenarios, so as to be able in the future to design energy-efficient wireless sensors for context recognition in wearable computing applications. The approach is based on a hardware study. The objective of this paper is to analyze and compare the total power consumption of three representative wearable computing devices in realistic scenarios such as Display, Speaker, Camera and microphone, Transfer by Wi-Fi, Monitoring outdoor physical activity, and Pedometer. A scenario-based energy model is also developed. The Samsung Galaxy Nexus I9250 smartphone, the Vuzix M100 Smart Glasses and the SimValley Smartwatch AW-420.RX are the three devices representative of their form factors. The power consumption is measured using PowerTutor, an Android energy profiler application with a logging option; since some of its parameters are unknown, its readings are adjusted against a USB power meter. The results show that screen size is the main parameter influencing power consumption. The power consumption for an identical scenario varies between the wearable devices, meaning that other components, parameters or processes might affect the power consumption, and further study is needed to explain these variations. This paper also shows that different inputs (a touchscreen is more efficient than button controls) and outputs (the speaker is more efficient than the display) impact the energy consumption in different ways. This paper gives recommendations for reducing energy consumption in healthcare wearable computing applications using the energy model.
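A scenario-based energy model of the kind developed in the paper can be sketched as a sum of per-component power draws multiplied by active time. The component power figures below are assumptions for illustration, not the paper's measured values:

```python
# Sketch of a scenario-based energy model: total scenario energy is the sum over
# active components of (average power draw) x (active time). The milliwatt
# figures are illustrative assumptions, not measurements from the paper.

SCENARIO_POWER_MW = {   # assumed average component power, milliwatts
    "display": 400,
    "cpu": 150,
    "wifi": 250,
    "speaker": 90,
}

def scenario_energy_mj(active_components):
    """Energy in millijoules for a scenario given (component, active seconds) pairs."""
    return sum(SCENARIO_POWER_MW[c] * t for c, t in active_components)

# A 'Transfer by Wi-Fi' style scenario: screen and radio on for 30 s, CPU busy 10 s.
scenario_energy_mj([("display", 30), ("wifi", 30), ("cpu", 10)])  # 21000 mJ
```

Comparing the same scenario across devices then amounts to swapping in each device's own power table, which is where the paper's observed variations would show up.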
Abstract:
The manufacturing industry has always faced the challenge of improving production efficiency, product quality and innovation ability while struggling to adopt cost-effective manufacturing systems. In recent years, cloud computing has been emerging as one of the major enablers for the manufacturing industry. By combining cloud computing and other advanced manufacturing technologies, such as the Internet of Things, service-oriented architecture (SOA), networked manufacturing (NM) and manufacturing grid (MGrid), with existing manufacturing models and enterprise information technologies, a new paradigm called cloud manufacturing has been proposed in the recent literature. This study presents the concepts and ideas of cloud computing and cloud manufacturing. The concept, architecture, core enabling technologies and typical characteristics of cloud manufacturing are discussed, as well as the difference and relationship between cloud computing and cloud manufacturing. The research is based on mixed qualitative and quantitative methods and a case study. The case is a prototype cloud manufacturing solution: a software platform developed in cooperation between ATR Soft Oy and the SW Company China office. This study tries to understand the practical impacts and challenges that derive from cloud manufacturing. The main conclusion of this study is that cloud manufacturing is an approach to achieve the transformation from traditional production-oriented manufacturing to next-generation service-oriented manufacturing. Many manufacturing enterprises are already using a form of cloud computing in their existing network infrastructure to increase the flexibility of their supply chains and reduce resource consumption, and the study finds that the shift from cloud computing to cloud manufacturing is feasible. Meanwhile, the study points out that the related theory, methodology and applications of cloud manufacturing systems are far from mature; it is still an open field where many new technologies need to be studied.
Abstract:
My bachelor's thesis examines Invoice trading and the advantages it offers compared with conventional tax-free sales. In the Invoice refund system, a customer residing outside the EU receives the VAT refund from the same shop on their next visit. The VAT refund must, however, be claimed within six months of the purchase. In conventional tax-free sales, the customer receives the VAT refund at the border when leaving Finland. With Invoice, the customer gets a larger share of the tax back than in conventional tax-free sales, but receiving the refund takes longer, because with Invoice the refund can only be obtained from the same shop where the purchases were made. My thesis examines the topic from the merchant's point of view. For the merchant, a particular advantage of Invoice is "hooking" customers: since refunds must always be claimed from the same shop where the products were bought, the same customers often return to the same shop on their next trips to Finland to collect their refunds. This often also brings the shops new regular customers. On the other hand, the extra work and costs that the use of Invoice may cause for merchants and checkout staff must also be taken into account. Paying refunds back to customers and handling customs-stamped receipts takes more time at the checkout than usual and may require additional staff. The study is qualitative, and interviews were used as the research method. The interviewees are merchants from the South-East Finland region. My aim was to assemble as diverse a group of interviewees as possible, including clothing and leisure shops as well as general and grocery stores. As theoretical sources I used books borrowed from the university library, articles from the LUT databases and the Edilex database, and documents and online publications of the Finnish Tax Administration.
In addition, I have also drawn on current news items and articles from various local newspapers and magazines. In the conclusions of my thesis I found that it is most advantageous for the merchant to use both Invoice and the traditional tax-free system based on refund operators at the same time. This gives the shops the widest possible customer base. Shopping tourists who visit Finland frequently generally prefer Invoice because of the full VAT refund, whereas the refund operators charge their own service fee on the refund paid to the customer. For those who visit Finland less often, the refund operators' services are more convenient, since the refund is received at the border on leaving the country and there is no need to return to the same shop within six months. Another advantage of the refund operators over Invoice is the ease of the transaction: shopping tourists who visit several shops receive the VAT refunds for all the purchases made on their trip from a single point, instead of collecting them from each shop separately.
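The refund arithmetic behind this comparison can be sketched as follows, assuming the 24 % Finnish standard VAT rate; the refund operator's service fee is a hypothetical figure, as actual fees vary by operator and purchase amount:

```python
# Sketch comparing the two VAT refund routes discussed above. A 24% VAT rate is
# assumed; the 35% refund-operator service fee is a hypothetical illustration.

VAT_RATE = 0.24

def vat_portion(gross_price):
    """VAT included in a gross (VAT-inclusive) price."""
    return gross_price * VAT_RATE / (1 + VAT_RATE)

def invoice_refund(gross_price):
    """Invoice system: the full VAT back, but only at the same shop, later."""
    return vat_portion(gross_price)

def tax_free_refund(gross_price, service_fee_rate=0.35):
    """Refund operator: immediate refund at the border, minus a service fee."""
    return vat_portion(gross_price) * (1 - service_fee_rate)

# On a 124 EUR purchase the VAT portion is 24 EUR:
invoice_refund(124.0)    # 24.0 EUR back on the next visit to the shop
tax_free_refund(124.0)   # ~15.6 EUR back immediately when leaving Finland
```

The gap between the two figures is exactly the "larger share of the tax back" that makes Invoice attractive to frequent visitors, traded against the wait and the return trip.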
Abstract:
Variations in different types of genomes have been found to be responsible for a large degree of physical diversity, such as appearance and susceptibility to disease. Identification of genomic variations is difficult and can be facilitated through computational analysis of DNA sequences. Newly available technologies are able to sequence billions of DNA base pairs relatively quickly. These sequences can be used to identify variations within their specific genome but must first be mapped to a reference sequence. In order to align these sequences to a reference sequence, we require mapping algorithms that make use of approximate string matching and string indexing methods. To date, few mapping algorithms have been tailored to handle the massive amounts of output generated by newly available sequencing technologies. In order to handle this large amount of data, we modified the popular mapping software BWA to run in parallel using OpenMPI. Parallel BWA matches the efficiency of multithreaded BWA functions while providing efficient parallelism for BWA functions that do not currently support multithreading. Parallel BWA shows significant wall time speedup in comparison to multithreaded BWA on high-performance computing clusters, and will thus facilitate the analysis of genome sequencing data.
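The data-parallel idea behind Parallel BWA can be sketched in a few lines: split the read set into chunks and align them concurrently, one chunk per worker. The real tool distributes chunks across MPI processes running BWA's C code; this toy sketch stands in a trivial placeholder "aligner" and a thread pool for illustration:

```python
# Toy sketch of the data-parallel pattern behind Parallel BWA: partition reads
# into chunks and align the chunks concurrently. align_chunk is a placeholder;
# the real aligner is BWA's C code, and the workers are MPI processes.
from concurrent.futures import ThreadPoolExecutor

def align_chunk(reads):
    """Placeholder for aligning one chunk of reads against the reference."""
    return [f"{read}:mapped" for read in reads]

def parallel_align(reads, workers=4):
    # Split the read set into roughly equal chunks, one per worker (like MPI ranks).
    chunks = [reads[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(align_chunk, chunks)
    # Gather per-chunk results back into one alignment list.
    return [hit for chunk in results for hit in chunk]

hits = parallel_align([f"read{i}" for i in range(8)], workers=2)
```

Because each read aligns independently, the problem is embarrassingly parallel; the interesting engineering in Parallel BWA is sharing the reference index and gathering output efficiently across ranks, which this sketch omits.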