794 results for BIM, Building Information Modeling, Cloud Computing, CAD, FM, GIS


Relevance:

100.00%

Publisher:

Abstract:

The purpose of this Master's thesis is to optimize the calculation of customers' electricity bills by means of distributed computing. As smart, remotely read energy meters arrive in every household, energy companies are obliged to calculate customers' electricity bills on the basis of hourly metering data. The growing amount of data also increases the number of computation tasks required. The thesis evaluates alternatives for implementing distributed computing and takes a closer look at the possibilities of cloud computing. In addition, simulations were run to assess the differences between parallel and sequential computation. To support the correct calculation of electricity bills, a measurement-tree algorithm was developed.
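To illustrate the parallel-versus-sequential comparison described above, here is a minimal Python sketch that bills a set of customers from hourly metering data both ways. The pricing model, data sizes and names are invented for illustration; this is not the thesis's measurement-tree algorithm.

```python
# Illustrative sketch: billing each customer from hourly metering data,
# sequentially and in parallel over worker processes. All names and the
# pricing model are assumptions for illustration.
import random
import time
from multiprocessing import Pool

HOURS = 24 * 30  # one month of hourly readings

def make_customer(seed):
    rng = random.Random(seed)
    readings = [rng.uniform(0.1, 3.0) for _ in range(HOURS)]   # kWh per hour
    prices = [rng.uniform(0.05, 0.20) for _ in range(HOURS)]   # EUR per kWh
    return readings, prices

def bill(customer):
    """One customer's bill: hourly consumption times the hourly price."""
    readings, prices = customer
    return sum(r * p for r, p in zip(readings, prices))

if __name__ == "__main__":
    customers = [make_customer(i) for i in range(2000)]

    t0 = time.perf_counter()
    sequential = [bill(c) for c in customers]
    t1 = time.perf_counter()

    with Pool() as pool:                      # parallel over worker processes
        parallel = pool.map(bill, customers)
    t2 = time.perf_counter()

    assert sequential == parallel             # same order, identical results
    print(f"sequential: {t1 - t0:.2f}s, parallel: {t2 - t1:.2f}s")
```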

Relevance:

100.00%

Publisher:

Abstract:

This thesis examines ERP systems delivered as a cloud service. The research method is a literature review. The aim is to give the reader a picture of what the concept of ERP as a cloud service means and how it differs from a traditional ERP system, and to analyse the benefits and drawbacks of both solutions. ERP as a cloud service refers to a purchased service that is administered and maintained by an external company. Use of the system is paid for with a fixed monthly fee plus a fee based on the number of users. A company using a cloud ERP service does not need to invest in hardware. A cloud ERP service can be implemented in a private cloud reserved for the company or in a public cloud. ERP implemented as a cloud service makes cost estimation easier in the short term because of its payment structure. The small initial investment encourages the adoption of cloud ERP, especially in small companies. Large companies, however, favour traditional ERP solutions because of their better customizability. The majority of ERP systems are traditional solutions. Companies regard information security risks as the greatest challenge of cloud-based ERP. A cloud ERP service makes a company more dependent on its service provider.
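The payment structure described above (a fixed monthly fee plus a per-user fee, against a large up-front investment for a traditional system) can be made concrete with a simple cost comparison. The sketch below uses invented figures purely for illustration; none of the numbers come from the thesis.

```python
# Illustrative cost comparison between cloud ERP (fixed + per-user monthly
# fee, no hardware investment) and a traditional on-premise ERP (large
# up-front investment, smaller running costs). All figures are invented.
def cloud_erp_cost(months, users, fixed_fee=500.0, per_user_fee=40.0):
    return months * (fixed_fee + per_user_fee * users)

def on_premise_cost(months, initial_investment=100_000.0, monthly_upkeep=1_000.0):
    return initial_investment + months * monthly_upkeep

if __name__ == "__main__":
    # The low initial outlay makes cloud ERP cheaper early on, which is why
    # it appeals especially to small companies.
    for years in (1, 3, 5):
        months = 12 * years
        print(f"{years} yr: cloud {cloud_erp_cost(months, users=25):>10,.0f}  "
              f"on-premise {on_premise_cost(months):>10,.0f}")
```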

Relevance:

100.00%

Publisher:

Abstract:

The aim of this thesis was to study how a building information model can be exploited in a construction company's production planning and construction phases, and what such use requires of the information content of the building information model. A further aim was to identify production control "bottlenecks" at which operations could be made more efficient with the help of building information models. The theoretical background of the work is the role of intangible assets in creating competitive advantage for a company, knowledge management, and the use of technology in knowledge management. The thesis examined the management of construction production, the use of building information modelling in construction, and how the usability of building information models can be ensured at a general level. The study also looked at the target company's production control and the current state of building information model use during the construction phase. As a result, it can be stated that the basic precondition for exploiting building information models in the production planning and construction phases is the correctness of the models and the consistency of the information content between the models and the traditional design documents. In addition, planned working practices, functioning information distribution channels and the ability to exploit information and communication technology are needed. On the basis of this study, building information models do not appear to have created a need for new kinds of roles in the production organization. Building information models are believed to have positive effects on change management during the construction phase. The members of the production organization had positive expectations of exploiting building information models in production control. The models are expected to support the comprehension of the wholes formed by separate plans, the cooperation of the construction-phase parties and the coordination of works, as well as the planning of logistics and the observation of its effects.

Relevance:

100.00%

Publisher:

Abstract:

One of the main challenges in Software Engineering is to cope with the transition from an industry based on software as a product to software as a service. The field of Software Engineering should provide the necessary methods and tools to develop and deploy new cost-efficient and scalable digital services. In this thesis, we focus on deployment platforms to ensure cost-efficient scalability of multi-tier web applications and of an on-demand video transcoding service under different types of load conditions.

Infrastructure as a Service (IaaS) clouds provide Virtual Machines (VMs) under the pay-per-use business model. Dynamically provisioning VMs on demand allows service providers to cope with fluctuations in the number of service users. However, VM provisioning must be done carefully, because over-provisioning results in an increased operational cost, while under-provisioning leads to a subpar service. Therefore, our main focus in this thesis is on cost-efficient VM provisioning for multi-tier web applications and on-demand video transcoding. Moreover, to prevent provisioned VMs from becoming overloaded, we augment VM provisioning with an admission control mechanism. Similarly, to ensure efficient use of provisioned VMs, web applications on under-utilized VMs are consolidated periodically. Thus, the main problem that we address is cost-efficient VM provisioning augmented with server consolidation and admission control on the provisioned VMs. We seek solutions for two types of applications: multi-tier web applications that follow the request-response paradigm, and on-demand video transcoding that is based on video streams with soft real-time constraints.

Our first contribution is a cost-efficient VM provisioning approach for multi-tier web applications. The proposed approach comprises two sub-approaches: a reactive VM provisioning approach called ARVUE, and a hybrid reactive-proactive VM provisioning approach called Cost-efficient Resource Allocation for Multiple web applications with Proactive scaling. Our second contribution is a prediction-based VM provisioning approach for on-demand video transcoding in the cloud. Moreover, to prevent virtualized servers from becoming overloaded, the proposed VM provisioning approaches are augmented with admission control approaches. Therefore, our third contribution is a session-based admission control approach for multi-tier web applications called adaptive Admission Control for Virtualized Application Servers. Similarly, the fourth contribution in this thesis is a stream-based admission control and scheduling approach for on-demand video transcoding called Stream-Based Admission Control and Scheduling. Our fifth contribution is a computation and storage trade-off strategy for cost-efficient video transcoding in cloud computing. Finally, the sixth and last contribution is a web application consolidation approach, which uses Ant Colony System to minimize the under-utilization of virtualized application servers.
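As a rough illustration of reactive, threshold-based VM provisioning with admission control, consider the following sketch. It is a generic stand-in for the idea, not the thesis's ARVUE algorithm; all thresholds and names are invented assumptions.

```python
# Illustrative reactive, threshold-based VM provisioning loop with a simple
# admission-control check. A generic sketch of the idea, not the thesis's
# ARVUE algorithm; all thresholds and names are invented.
class ReactiveProvisioner:
    def __init__(self, min_vms=1, max_vms=20,
                 scale_up_at=0.75, scale_down_at=0.30):
        self.vms = min_vms
        self.min_vms, self.max_vms = min_vms, max_vms
        self.scale_up_at, self.scale_down_at = scale_up_at, scale_down_at

    def step(self, avg_utilization):
        """React to observed load: scale out when hot (avoid a subpar
        service), scale in when cold (avoid paying for idle VMs)."""
        if avg_utilization > self.scale_up_at and self.vms < self.max_vms:
            self.vms += 1
        elif avg_utilization < self.scale_down_at and self.vms > self.min_vms:
            self.vms -= 1
        return self.vms

    def admit(self, avg_utilization):
        """Admission control: refuse new sessions only when the servers are
        saturated and no headroom is left to scale out."""
        return avg_utilization < 0.95 or self.vms < self.max_vms

if __name__ == "__main__":
    provisioner = ReactiveProvisioner()
    for load in [0.50, 0.80, 0.90, 0.85, 0.40, 0.20]:
        vms = provisioner.step(load)
        print(f"load {load:.2f} -> {vms} VM(s), admitting={provisioner.admit(load)}")
```

A purely reactive policy like this lags behind sudden load spikes, which is what motivates the hybrid reactive-proactive and prediction-based approaches the abstract describes.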

Relevance:

100.00%

Publisher:

Abstract:

The research in this Master's thesis project concerns Big Data transfer over a parallel data link, and my main objective is to assist the Saint-Petersburg National Research University ITMO research team in accomplishing this project and to apply Green IT methods to the data transfer system. The goal of the team is to transfer Big Data over parallel data links using an SDN OpenFlow approach. My task as a team member was to compare existing data transfer applications in order to determine which achieves the highest data transfer speed under which circumstances, and to explain the reasons. In this thesis, a comparison between five different utilities was carried out: Fast Data Transfer (FDT), BBCP, BBFTP, GridFTP, and FTS3. A number of scripts were developed that create random binary data (incompressible, to ensure a fair comparison between utilities), execute the utilities with specified parameters, record log files, results and system parameters, and plot graphs to compare the results. Transferring such enormous volumes of data can take a long time, and hence the need arises to reduce energy consumption and make the transfers greener. As part of the Green IT approach, our team used a Cloud Computing infrastructure called OpenStack: it is more efficient to allocate a specific amount of hardware resources to test different scenarios than to use all the resources of our testbed. Testing our implementation on the OpenStack infrastructure showed that the virtual channel carries no other traffic, so we can achieve the highest possible throughput. With the final results in hand, we are in a position to identify which utilities produce faster data transfer in different scenarios with specific TCP parameters, and to use them on real network data links.
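One of the scripts described above generates random binary data so that no utility gains an unfair advantage from compression. A minimal sketch of that step might look as follows; the file name and size are arbitrary choices, not values from the thesis.

```python
# Illustrative sketch of generating incompressible test files from random
# bytes, so no transfer utility benefits from compressing the payload.
import os

def make_test_file(path, size_bytes, chunk=1 << 20):
    """Write size_bytes of cryptographically random (hence incompressible)
    data to path, one chunk at a time to keep memory use bounded."""
    with open(path, "wb") as f:
        remaining = size_bytes
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))   # random bytes do not compress
            remaining -= n

if __name__ == "__main__":
    make_test_file("testdata.bin", 100 * (1 << 20))  # 100 MiB test file
```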

Relevance:

100.00%

Publisher:

Abstract:

TransRed.com is a road freight exchange service through which users update the supply and demand of the service in real time (via the internet or telephone), and through which the availability of cargo or of members' trucks can be consulted through remote assistance. The service is split in two: TransRed Empresarial (TRE) and TransRed Independiente (TRI); the users of these services will be road freight transport companies and independent carriers. No company in Colombia currently offers this service, and there is a latent need for it, an opportunity that TransRed.com wants to seize by innovating in the sector. The potential market comprises 1,849 road freight transport companies and 47,000 independent carriers, which could represent total revenues of more than 55,000 million pesos at a price per account with high market acceptance of 80,000 pesos per month for TRE accounts and 60,000 pesos per month for TRI accounts. The sales objective set out in this project is 504 TRE accounts and 2,020 TRI accounts by the third year, representing 27% of the TRE market and 4% of the TRI market. Given that the acceptance level of the service is between 70% and 80%, the sales objective is to be reached with a sales force distributed across the four main areas of concentration of our target market (Bogotá, Antioquia, Valle and Atlántico). The financial results of the project show that, by meeting the stated objectives and with the cost structure designed, the project achieves an ROI of 54.33% in the third year, a payback period of one year and two months, and a net profit margin of 24.47% in the third year. This service will allow its users to optimize their operations through the use of cutting-edge technology, improving the competitiveness of the sector and integrating the different actors involved in it.

Relevance:

100.00%

Publisher:

Abstract:

Animation on Green IT: a container EdShare to be used to assemble the resources developed by the team for coursework 2. The metadata will be updated by the group, and the description modified when the resource set has been completed. The resource has been set up with an instructional readme file, which each group will replace with a new readme file. Each group needs to update this description. Further instructions are in readme.txt.

Relevance:

100.00%

Publisher:

Abstract:

These slides give the instructions for completing the COMP1214 team project on cloud and mobile computing.

Relevance:

100.00%

Publisher:

Abstract:

In the second decade of the twenty-first century we are witnessing a complete transformation in the teaching and learning of foreign languages in general, and of Spanish in particular. In this new context, new technologies applied to education play a very important role. This universe is known as Web 2.0. The advantages and disadvantages of using these technologies in the language classroom are presented, focusing on four digital tools that offer the most possibilities for energizing Spanish classes: blogs, wikis, podcasting, and Google Docs or cloud computing.

Relevance:

100.00%

Publisher:

Abstract:

A full assessment of para-virtualization is important, because without knowledge about the various overheads, users cannot tell whether using virtualization is a good idea or not. In this paper we are interested in assessing the overheads of running various benchmarks on bare metal as well as under para-virtualization. The idea is to see what the overheads of para-virtualization are, as well as the overheads of turning on monitoring and logging. The knowledge gained from assessing various benchmarks on these different systems will help a range of users understand the use of virtualization systems.

In this paper we assess the overheads of using Xen, VMware, KVM and Citrix (see Table 1). These virtualization systems are used extensively by cloud users. We use various Netlib benchmarks, which have been developed by the University of Tennessee at Knoxville (UTK) and Oak Ridge National Laboratory (ORNL). To assess these virtualization systems, we run the benchmarks on bare metal, then under para-virtualization, and finally with monitoring and logging turned on. The latter is important, as users are interested in the Service Level Agreements (SLAs) used by Cloud providers, and logging is a means of assessing the services bought and used from commercial providers. We assess the virtualization systems on three different systems: the Thamesblue supercomputer, the Hactar cluster and an IBM JS20 blade server (see Table 2), all of which are servers available at the University of Reading.

A functional virtualization system is multi-layered and is driven by its privileged components. Virtualization systems can host multiple guest operating systems, each running in its own domain, and the system schedules virtual CPUs and memory within each Virtual Machine (VM) to make the best use of the available resources; the guest operating system schedules each application accordingly. Virtualization can be deployed as full virtualization or para-virtualization. Full virtualization provides a total abstraction of the underlying physical system and creates a new virtual system in which guest operating systems can run. No modifications are needed in the guest OS or applications: the guest OS and applications are not aware of the virtualized environment and run normally. Para-virtualization requires modification of the guest operating systems that run on the virtual machines: these guest operating systems are aware that they are running on a virtual machine, and provide near-native performance. Both para-virtualization and full virtualization can be deployed across various virtualized systems. Para-virtualization is an OS-assisted virtualization, in which some modifications are made to the guest operating system to enable better performance. In this kind of virtualization, the guest operating system is aware that it is running on virtualized hardware rather than on bare hardware, and its device drivers coordinate with the device drivers of the host operating system, reducing the performance overheads. The use of para-virtualization [0] is intended to avoid the bottleneck associated with the slow hardware interrupts that exist when full virtualization is employed.

It has been shown [0] that para-virtualization does not impose a significant performance overhead in high performance computing, and this in turn has implications for the use of cloud computing for hosting HPC applications. The apparent improvement in virtualization has led us to formulate the hypothesis that certain classes of HPC applications should be able to execute in a cloud environment with minimal performance degradation. To support this hypothesis, it is first necessary to define exactly what is meant by a "class" of application, and secondly to observe application performance both within a virtual machine and when executing on bare hardware. A further potential complication is the need for Cloud service providers to support Service Level Agreements (SLAs), so that system utilisation can be audited.
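A minimal harness for this kind of comparison simply times the same benchmark command on each system under test (bare metal, para-virtualized, and with logging enabled). The sketch below is illustrative; the benchmark command is a placeholder, not the paper's actual Netlib invocation.

```python
# Illustrative harness for timing the same benchmark on different systems
# (bare metal, para-virtualized VM, VM with logging enabled). The command
# and labels are placeholders, not the paper's actual benchmark setup.
import subprocess
import time

def time_benchmark(label, cmd, runs=3):
    """Run cmd several times on one system and report wall-clock timings;
    comparing the results across systems exposes the virtualization overhead."""
    timings = []
    for _ in range(runs):
        t0 = time.perf_counter()
        subprocess.run(cmd, check=True, capture_output=True)
        timings.append(time.perf_counter() - t0)
    print(f"{label}: min {min(timings):.3f}s, "
          f"mean {sum(timings) / len(timings):.3f}s")
    return timings

if __name__ == "__main__":
    # Invoke with the same placeholder command on each system under test.
    time_benchmark("bare-metal", ["./linpack_benchmark"])
```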

Relevance:

100.00%

Publisher:

Abstract:

A major infrastructure project is used to investigate the role of digital objects in the coordination of engineering design work. From a practice-based perspective, research emphasizes objects as important in enabling cooperative knowledge work and knowledge sharing. The term ‘boundary object’ has become used in the analysis of mutual and reciprocal knowledge sharing around physical and digital objects. The aim is to extend this work by analysing the introduction of an extranet into the public–private partnership project used to construct a new motorway. Multiple categories of digital objects are mobilized in coordination across heterogeneous, cross-organizational groups. The main findings are that digital objects provide mechanisms for accountability and control, as well as for mutual and reciprocal knowledge sharing; and that different types of objects are nested, forming a digital infrastructure for project delivery. Reconceptualizing boundary objects as a digital infrastructure for delivery has practical implications for management practices on large projects and for the use of digital tools, such as building information models, in construction. It provides a starting point for future research into the changing nature of digitally enabled coordination in project-based work.

Relevance:

100.00%

Publisher:

Abstract:

Advances in hardware and software technology enable us to collect, store and distribute data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
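As a small illustration of the data-parallel style the chapter surveys, the sketch below counts item frequencies (a building block of pattern extraction) across partitions of a dataset and then merges the local results. It is a generic example, not code from the chapter.

```python
# Illustrative data-parallel sketch: counting item frequencies across
# partitions of a large dataset, then merging the local counts. A stand-in
# for the chapter's parallel/distributed approaches, not its code.
from collections import Counter
from multiprocessing import Pool

def mine_partition(transactions):
    """Local step: count item occurrences within one data partition."""
    counts = Counter()
    for items in transactions:
        counts.update(items)
    return counts

def mine_parallel(partitions):
    """Global step: mine partitions in parallel, then merge local results."""
    with Pool() as pool:
        local_counts = pool.map(mine_partition, partitions)
    total = Counter()
    for c in local_counts:
        total.update(c)
    return total

if __name__ == "__main__":
    data = [[("bread", "milk"), ("bread", "butter")],
            [("milk", "butter"), ("bread", "milk", "butter")]]
    print(mine_parallel(data).most_common(3))
```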

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces an architecture for identification and modelling in real time at a copper mine, using new technologies such as M2M and cloud computing, with a server in the cloud and an Android client inside the mine. The proposed design brings about pervasive mining: a system with wider coverage, higher communication efficiency, better fault-tolerance, and anytime-anywhere availability. This solution was designed for a plant inside the mine which cannot tolerate interruption, and for which in-situ, real-time identification is an essential part of the system, used to control aspects such as instability by adjusting the corresponding parameters without stopping the process.
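A client along the lines described (a device inside the mine reporting to a cloud server) might post readings as in the following sketch. The endpoint, payload fields and retry policy are assumptions for illustration, not the paper's protocol, and the sketch is written in Python rather than as an Android app.

```python
# Illustrative sketch of an M2M-style client inside the mine posting sensor
# readings to a cloud server over HTTP. Endpoint, payload fields and retry
# policy are invented for illustration.
import json
import time
import urllib.request

CLOUD_ENDPOINT = "https://example.com/mine/readings"  # placeholder URL

def post_reading(sensor_id, value, retries=3):
    """Send one reading; retry briefly, since the plant cannot tolerate
    losing identification data during short network interruptions."""
    payload = json.dumps({"sensor": sensor_id, "value": value,
                          "ts": time.time()}).encode()
    req = urllib.request.Request(CLOUD_ENDPOINT, data=payload,
                                 headers={"Content-Type": "application/json"})
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(req, timeout=5) as resp:
                return resp.status == 200
        except OSError:
            time.sleep(2 ** attempt)  # back off and retry
    return False
```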

Relevance:

100.00%

Publisher:

Abstract:

Body Sensor Networks (BSNs) have recently been introduced for the remote monitoring of human activities in a broad range of application domains, such as health care, emergency management, fitness and behaviour surveillance. BSNs can be deployed in a community of people and can generate large amounts of contextual data that require a scalable approach to storage, processing and analysis. Cloud computing can provide a flexible storage and processing infrastructure to perform both online and offline analysis of the data streams generated in BSNs. This paper proposes BodyCloud, a SaaS approach for community BSNs that supports the development and deployment of Cloud-assisted BSN applications. BodyCloud is a multi-tier application-level architecture that integrates a Cloud computing platform with BSN data-stream middleware. BodyCloud provides programming abstractions that allow the rapid development of community BSN applications. This work describes the general architecture of the proposed approach and presents a case study on the real-time monitoring and analysis of the cardiac data streams of many individuals.
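As an illustration of the kind of online analysis such a system targets, the sketch below applies a sliding-window average with a simple alert rule to a heart-rate stream. The window size and thresholds are invented, and this is not BodyCloud's code.

```python
# Illustrative online analysis step for a cardiac data stream: a
# sliding-window average of heart-rate samples with a simple alert rule.
# Window size and thresholds are invented for illustration.
from collections import deque

class HeartRateMonitor:
    def __init__(self, window=30, low=40, high=140):
        self.samples = deque(maxlen=window)  # last N samples from the stream
        self.low, self.high = low, high

    def push(self, bpm):
        """Ingest one sample; return an alert message if the windowed
        average drifts outside the configured bounds, else None."""
        self.samples.append(bpm)
        avg = sum(self.samples) / len(self.samples)
        if avg < self.low or avg > self.high:
            return f"ALERT: average heart rate {avg:.0f} bpm"
        return None

if __name__ == "__main__":
    monitor = HeartRateMonitor()
    for bpm in [72, 75, 74, 180, 185, 190, 188, 192, 195, 200]:
        alert = monitor.push(bpm)
        if alert:
            print(alert)
```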

Relevance:

100.00%

Publisher:

Abstract:

Much has been written about where the boundaries of the firm are drawn, but little about what occurs at the boundaries themselves. When a firm subcontracts, does it inform its suppliers fully of what it requires, or is it willing to accept what they have available? In practice firms often engage in a dialogue, or conversation, with their suppliers, in which at first they set out their general requirements, and only when the supplier reports back on how these can be met are their more specific requirements set out. This paper models such conversations as a rational response to communication costs. The model is used to examine the impact of new information technology, such as CAD/CAM, on the conduct of subcontracting. It can also be used to examine its impact on the marketing activities of firms. The technique of analysis, which is based on the economic theory of teams, has more general applications too. It can be used to model all the forms of dialogue involved in the processes of coordination both within and between firms.