804 results for Cloud Computing, Software-as-a-Service (SaaS), SaaS Multi-Tenant, Windows Azure



Relevance:

100.00%

Publisher:

Abstract:

Winner of best paper award.

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing offers the massive scalability and elasticity required by many scientific and commercial applications. Combining the computational and data handling capabilities of clouds with parallel processing also has the potential to tackle Big Data problems efficiently. Science gateway frameworks and workflow systems enable application developers to implement complex applications and make these available to end-users via simple graphical user interfaces. The integration of such frameworks with Big Data processing tools on the cloud opens new opportunities for application developers. This paper investigates how workflow systems and science gateways can be extended with Big Data processing capabilities. A generic approach based on infrastructure-aware workflows is suggested, and a proof of concept is implemented based on the WS-PGRADE/gUSE science gateway framework and its integration with the Hadoop parallel data processing solution, based on the MapReduce paradigm, in the cloud. The provided analysis demonstrates that the described methods for integrating Big Data processing with workflows and science gateways work well in different cloud infrastructures and application scenarios, and can be used to create massively parallel applications for scientific analysis of Big Data.
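
To make the MapReduce paradigm mentioned above concrete, here is the canonical word-count job against the standard Hadoop Java API; this is a generic illustration, not code from the WS-PGRADE/gUSE integration itself:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Map phase: emit (token, 1) for every token in the input split.
  public static class TokenMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();
    @Override
    protected void map(Object key, Text value, Context ctx)
        throws IOException, InterruptedException {
      for (String tok : value.toString().split("\\s+")) {
        if (!tok.isEmpty()) { word.set(tok); ctx.write(word, ONE); }
      }
    }
  }

  // Reduce phase: sum the counts gathered for each token.
  public static class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> vals, Context ctx)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : vals) sum += v.get();
      ctx.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenMapper.class);
    job.setCombinerClass(SumReducer.class);
    job.setReducerClass(SumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```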

Relevance:

100.00%

Publisher:

Abstract:

PURPOSE: Radiation therapy is used to treat cancer using carefully designed plans that maximize the radiation dose delivered to the target and minimize damage to healthy tissue, with the dose administered over multiple occasions. Creating treatment plans is a laborious process and presents an obstacle to more frequent replanning, which remains an unsolved problem. However, in between new plans being created, the patient's anatomy can change due to multiple factors, including reduction in tumor size and loss of weight, which results in poorer patient outcomes. Cloud computing is a newer technology that is slowly being adopted for medical applications, with promising results. The objective of this work was to design and build a system that could analyze a database of previously created treatment plans, stored with their associated anatomical information in studies, to find the one with the anatomy most similar to that of a new patient. The analyses would be performed in parallel on the cloud to decrease the computation time of finding this plan. METHODS: The system used SlicerRT, a radiation therapy toolkit for the open-source platform 3D Slicer, for its tools to perform the similarity analysis algorithm. Amazon Web Services was used for the cloud instances on which the analyses were performed, as well as for storage of the radiation therapy studies and messaging between the instances and a master local computer. A module was built in SlicerRT to provide the user with an interface to direct the system on the cloud and to perform other related tasks. RESULTS: The cloud-based system outperformed previous methods of conducting the similarity analyses in terms of time, analyzing 100 studies in approximately 13 minutes while producing the same similarity values as those methods. It also scaled to larger numbers of studies in the database with a small increase in computation time of just over 2 minutes. CONCLUSION: This system successfully analyzes a large database of radiation therapy studies and finds the one most similar to a new patient, which represents a potential step forward in achieving feasible adaptive radiation therapy replanning.
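
At its core, the similarity search is an embarrassingly parallel scan over the stored studies. A minimal sketch of that pattern, using a local Java thread pool in place of the AWS instances and a hypothetical similarity function in place of the SlicerRT algorithm, might look like:

```java
import java.util.*;
import java.util.concurrent.*;

public class SimilaritySearch {
  // Hypothetical stand-in for the SlicerRT anatomy-comparison algorithm.
  static double similarity(String studyId, String newPatientId) {
    return new Random(studyId.hashCode() ^ newPatientId.hashCode()).nextDouble();
  }

  public static void main(String[] args) throws Exception {
    List<String> studies = List.of("study-001", "study-002", "study-003");
    String newPatient = "patient-X";
    // In the paper's system these workers are cloud instances, not local threads.
    ExecutorService pool = Executors.newFixedThreadPool(4);

    // Fan out: one similarity analysis per stored study.
    Map<String, Future<Double>> scores = new HashMap<>();
    for (String s : studies) {
      scores.put(s, pool.submit(() -> similarity(s, newPatient)));
    }

    // Fan in: collect results and keep the best-matching study.
    String best = null;
    double bestScore = Double.NEGATIVE_INFINITY;
    for (Map.Entry<String, Future<Double>> e : scores.entrySet()) {
      double v = e.getValue().get(); // block until this analysis finishes
      if (v > bestScore) { bestScore = v; best = e.getKey(); }
    }
    pool.shutdown();
    System.out.printf("most similar study: %s (score %.3f)%n", best, bestScore);
  }
}
```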

Relevance:

100.00%

Publisher:

Abstract:

ICT is inseparable from on-site museography and indispensable in fixed and mobile networked museography. In too many cases, technological prostheses have been installed to give the cultural space a veneer of modernity, forgetting that technology should serve the content, remaining invisible and seamlessly interwoven with traditional museography. Mobile interfaces can merge the on-site and online museum and accompany people beyond the physical space. That fusion must be built on a narrative database open to tangible and intangible works from other museums, so that the limitations of the physical museum are not carried over into the virtual one. In the on-site museum, immersive hypermedia installations that enable innovative cultural experiences make sense. Interactivity (virtual relations) must coexist with interaction (physical and personal relations) and serve all people, starting from the premise that every one of us has limitations. Working across disciplines helps us understand the museum better and put it at the service of people.

Relevance:

100.00%

Publisher:

Abstract:

How can applications be deployed on the cloud to achieve maximum performance? This question is challenging to address given the wide variety of cloud Virtual Machines (VMs) with different performance capabilities. The research reported in this paper addresses the question by proposing a six-step benchmarking methodology in which a user provides a set of weights indicating how important memory, local communication, computation, and storage related operations are to an application. The user can provide either four abstract weights or eight fine-grained weights, based on their knowledge of the application. The weights, along with benchmarking data collected from the cloud, are used to generate two rankings: one based only on the performance of the VMs, and one that takes both performance and cost into account. The rankings are validated on three case-study applications using two validation techniques. The case studies on a set of experimental VMs highlight that maximum performance is achieved by the three top-ranked VMs, and that maximum performance in a cost-effective manner is achieved by at least one of the top three VMs produced by the methodology.
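
As a rough illustration of the scoring step, a weighted aggregate over the four abstract benchmark groups, plus a cost-adjusted variant, could be computed as follows; the VM names, scores, and prices are invented, and the paper's actual methodology involves more steps than this:

```java
import java.util.*;

public class VmRanking {
  // Normalized benchmark scores per group (higher is better); values invented.
  record Vm(String name, double memory, double localComm,
            double compute, double storage, double pricePerHour) {}

  public static void main(String[] args) {
    List<Vm> vms = List.of(
        new Vm("vm.small",  0.4, 0.5, 0.3, 0.6, 0.10),
        new Vm("vm.medium", 0.7, 0.6, 0.6, 0.7, 0.25),
        new Vm("vm.large",  0.9, 0.8, 0.9, 0.8, 0.60));

    // Four abstract user weights: memory, local communication, computation, storage.
    double wMem = 0.5, wComm = 0.1, wComp = 0.3, wStor = 0.1;

    // Ranking 1: performance only (negated so higher scores sort first).
    Comparator<Vm> byPerf = Comparator.comparingDouble(
        v -> -(wMem * v.memory() + wComm * v.localComm()
             + wComp * v.compute() + wStor * v.storage()));
    // Ranking 2: performance per unit of cost.
    Comparator<Vm> byPerfPerCost = Comparator.comparingDouble(
        v -> -((wMem * v.memory() + wComm * v.localComm()
              + wComp * v.compute() + wStor * v.storage()) / v.pricePerHour()));

    System.out.println("performance ranking:      "
        + vms.stream().sorted(byPerf).map(Vm::name).toList());
    System.out.println("performance/cost ranking: "
        + vms.stream().sorted(byPerfPerCost).map(Vm::name).toList());
  }
}
```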

Relevance:

100.00%

Publisher:

Abstract:

The great success of cloud computing, and the speed with which cloud innovations have since found their way into practice, open up new competitive opportunities for industry. Of particular importance is the ability to adapt and execute cloud-based business processes dynamically, as a direct reaction to a customer order. This applies especially to cooperative, cross-company applications composed of multiple IT services from different partners. This article presents a concept and an architecture for a central cloud platform for configuring, executing, and monitoring collaborative logistics processes. On this platform, business processes can be modeled and parameterized with respect to their privacy properties. The individual process elements are linked to IT services that may, for example, run on external cloud platforms. A particular focus of the paper is the specification, enforcement, and monitoring of privacy requirements.
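
A minimal data model for such privacy-parameterized process elements might look like the sketch below; the types, privacy levels, and endpoints are invented for illustration and are not the platform's actual configuration model:

```java
import java.util.List;

public class ProcessModel {
  enum PrivacyLevel { PUBLIC, PARTNERS_ONLY, CONFIDENTIAL }

  // One step of a collaborative logistics process, bound to an external IT service.
  record ProcessElement(String name, String serviceEndpoint, PrivacyLevel privacy) {}

  record LogisticsProcess(String id, List<ProcessElement> elements) {
    // Monitoring hook: which elements may a given viewer see?
    List<ProcessElement> visibleTo(boolean isPartner) {
      return elements.stream()
          .filter(e -> e.privacy() == PrivacyLevel.PUBLIC
                    || (e.privacy() == PrivacyLevel.PARTNERS_ONLY && isPartner))
          .toList();
    }
  }

  public static void main(String[] args) {
    LogisticsProcess p = new LogisticsProcess("order-4711", List.of(
        new ProcessElement("pickup",  "https://carrier.example/api", PrivacyLevel.PUBLIC),
        new ProcessElement("customs", "https://broker.example/api",  PrivacyLevel.PARTNERS_ONLY),
        new ProcessElement("invoice", "https://erp.example/api",     PrivacyLevel.CONFIDENTIAL)));
    System.out.println("visible to partners: " + p.visibleTo(true));
  }
}
```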

Relevance:

100.00%

Publisher:

Abstract:

Development of the Internet of Services will be hampered by heterogeneous Internet-of-Things infrastructures: inconsistencies in communicating with participating objects, in connectivity between them, in topology definition and data transfer, and in access to cloud computing for data storage. Our proposed solutions apply to a random-topology scenario and allow multi-operational sensor networks to be established out of single networks, and single-service networks to be established with the participation of multiple networks, so that virtual links can be created and resources shared. The designed layers are context-aware, application-oriented, and capable of representing physical objects to a management system, along with discovery of services. The reliability issue is addressed by deploying the IETF-supported IEEE 802.15.4 network model for low-rate wireless personal area networks. The flow-sensor achieved better results than the typical sensor in terms of reachability, throughput, energy consumption, and diversity gain, and performance can be improved further by allowing the maximum number of multicast groups.
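
The idea of one physical node serving several logical service networks through virtual links can be sketched as follows; the types and group names are invented, and the paper's design of course sits on an IEEE 802.15.4 stack rather than plain Java objects:

```java
import java.util.*;

public class MultiOperationalNode {
  // One physical flow-sensor exposed to several logical service networks.
  record Sensor(String id, Set<String> multicastGroups) {}

  public static void main(String[] args) {
    // Virtual links: the same node participates in two service networks at once.
    Sensor flowSensor = new Sensor("node-17",
        Set.of("metering-service", "leak-detection-service"));

    // Build the membership view a management system would hold.
    Map<String, List<String>> groups = new HashMap<>();
    for (String g : flowSensor.multicastGroups()) {
      groups.computeIfAbsent(g, k -> new ArrayList<>()).add(flowSensor.id());
    }

    // A reading is published once and fanned out to every subscribed service.
    groups.forEach((service, members) ->
        System.out.println(service + " <- reading from " + members));
  }
}
```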

Relevance:

100.00%

Publisher:

Abstract:

With the growing complexity of IT infrastructures and the pervasiveness of Internet of Things (IoT) scenarios, the need emerges for new computational models based on autonomous entities capable of achieving high-level goals by interacting with one another, supported by infrastructures such as Fog Computing, for proximity to the data sources, and Cloud Computing, to offer complex back-end analytic services able to deliver results to millions of users. These new scenarios force us to rethink how software is designed and developed from an agile perspective. The activities of development teams (Dev) should be tightly coupled with those of the teams supporting the Cloud (Ops), following new methodologies known today as DevOps. However, given the lack of adequate abstractions at the programming-language level, IoT developers are often driven towards bottom-up development approaches that frequently prove inadequate for the complexity of applications in this domain and the heterogeneity of the software components that make them up. Since the monolithic applications of the past appear hard to scale and manage in a multi-tenant Cloud environment, many consider it necessary to adopt a new architectural style, in which an application is seen as a composition of microservices, each dedicated to a specific application capability and each under the responsibility of a small team of developers, from problem analysis to deployment and management. Since there is not yet a single, shared definition of microservices or of the other concepts emerging from the IoT and the Cloud, nor are there specialized languages for this domain, defining custom metamodels coupled with automatic generation of the glue code towards the infrastructure could help a development team raise the level of abstraction, encapsulating the implementation details in a company software factory. Through software production systems based on Model Driven Software Development (MDSD), the currently missing top-down approach can be recovered, allowing attention to focus on the business logic of the applications. The thesis presents an example of this possible approach, starting from the idea that an IoT application is first and foremost a distributed software system in which the interaction between active components (modeled as actors) plays a fundamental role.
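
A minimal illustration of the actor idea the thesis ends on, in plain Java rather than any specific actor framework or the thesis' own MDSD toolchain, might look like:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class MiniActor {
  // A minimal actor: a private mailbox drained by a dedicated thread.
  static class Actor {
    private final BlockingQueue<String> mailbox = new LinkedBlockingQueue<>();
    private final String name;

    Actor(String name) {
      this.name = name;
      Thread t = new Thread(this::run);
      t.setDaemon(true);
      t.start();
    }

    // Asynchronous, non-blocking message send.
    void tell(String msg) { mailbox.add(msg); }

    private void run() {
      try {
        while (true) {
          String msg = mailbox.take(); // one message at a time: no shared state
          System.out.println(name + " handled: " + msg);
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    }
  }

  public static void main(String[] args) throws InterruptedException {
    Actor device = new Actor("temperature-sensor");
    device.tell("read");
    device.tell("report");
    Thread.sleep(100); // let the daemon thread drain the mailbox before exit
  }
}
```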

Relevance:

100.00%

Publisher:

Abstract:

The work developed stems from the creation, during an internship, of a small database, built from the initial data gathering through the selection of relevant information and its subsequent archiving. The goal of this thesis is to broaden a basic knowledge of the world of information from a management point of view. Indeed, in today's scenario, studying the customer through relevant information of various kinds is one of the fundamental skills in management engineering. The study method is based on understanding the different types of data present in the corporate world and, consequently, their relationship with the web and above all with the most modern and most widely used archiving methods, adopted today both by companies and by private users: cloud platforms. The thesis is divided into three distinct but closely connected topics: the first part deals with how the most basic information should be collected and analyzed; the central section addresses the key theme of the internet as a means of storage, no longer merely a platform for searching data; and the final chapter clarifies the concept of cloud computing, convenient, fast, and efficient, considered for some years now the meeting point between the first two topics. Specifically, some real-world applications of the cloud by companies such as Amazon, Google, and Facebook are presented, multinationals that today have managed to turn the archiving and manipulation of data for industrial purposes into one of their sources of revenue. The result is an overview of how information works and how it is used, starting from the most trivial datum and arriving at the shared databases used, if not outright controlled, by the most renowned national and international companies.
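
The pipeline the thesis describes, from raw data through the selection of relevant information to its archiving, can be sketched in a few lines; the customer records, fields, and the local CSV target below are all invented stand-ins (the thesis itself targets cloud platforms):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class CustomerArchive {
  // Raw record: keep only the fields judged relevant, then archive.
  record Customer(String name, String email, int ordersLastYear) {}

  public static void main(String[] args) throws IOException {
    List<Customer> raw = List.of(
        new Customer("Acme Srl", "info@acme.example", 12),
        new Customer("Idle Spa", "hello@idle.example", 0));

    // Selection step: archive only customers with recent activity.
    List<String> rows = raw.stream()
        .filter(c -> c.ordersLastYear() > 0)
        .map(c -> c.name() + "," + c.email() + "," + c.ordersLastYear())
        .toList();

    // Archiving step: a local CSV file stands in for a cloud store here.
    Files.write(Path.of("customers.csv"), rows);
    System.out.println("archived " + rows.size() + " record(s)");
  }
}
```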

Relevance:

100.00%

Publisher:

Abstract:

This paper deals with the combination of OSGi and cloud computing, two technologies that are both rooted in distributed computing. It discusses how different approaches from different institutions work, and compares these approaches with one another.
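
For readers unfamiliar with OSGi: its unit of modularity is the bundle, whose lifecycle is driven through a BundleActivator. A minimal bundle that publishes a service into the OSGi service registry might look like this; the Greeter service is invented for illustration:

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

public class Activator implements BundleActivator {
  // Invented service interface, for illustration only.
  public interface Greeter { String greet(String who); }

  private ServiceRegistration<Greeter> registration;

  @Override
  public void start(BundleContext context) {
    // Publish an implementation into the OSGi service registry; other
    // bundles can discover and use it dynamically, which is what makes
    // OSGi attractive for modular, distributed (and cloud-hosted) systems.
    registration = context.registerService(
        Greeter.class, who -> "hello, " + who, null); // no service properties
  }

  @Override
  public void stop(BundleContext context) {
    registration.unregister(); // the service disappears when the bundle stops
  }
}
```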

Relevance:

100.00%

Publisher:

Abstract:

The evolution and maturation of Cloud Computing created an opportunity for the emergence of new Cloud applications. High-Performance Computing (HPC), a class of complex problem solving, arises as a new business consumer by taking advantage of what the Cloud offers and leaving behind expensive datacenter management and difficult grid development. Now in an advanced phase of maturity, today's Cloud has discarded many of its drawbacks, becoming more and more efficient and widespread. Performance enhancements, price drops due to massification, and customizable on-demand services have drawn heightened attention from other markets. HPC, despite being a very well-established field, traditionally has a narrow deployment frontier and runs on dedicated datacenters or large computing grids. The main problem with this common placement is the initial cost and the inability to fully use the resources, which not all research labs can afford. The main objective of this work was to investigate new technical solutions to allow the deployment of HPC applications on the Cloud, with particular emphasis on private on-premise resources, the lower end of the chain, which reduces costs. The work includes many experiments and analyses to identify obstacles and technology limitations. The feasibility of the objective was tested with new modeling, a new architecture, and the migration of several applications. The final application integrates a simplified incorporation of both public and private Cloud resources, as well as the scheduling, deployment, and management of HPC applications. It uses a well-defined user-role strategy, based on federated authentication, and a seamless procedure for daily usage with a balance of low cost and performance.
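
One ingredient of such a system, choosing where to place an HPC job across private on-premise and public cloud resources, can be sketched as a cheapest-first placement; the pool names, capacities, and prices below are invented:

```java
import java.util.*;

public class HybridScheduler {
  record Pool(String name, int freeCores, double costPerCoreHour) {}

  // Pick pools cheapest-first (private on-premise first), bursting to the
  // public cloud only when local capacity is exhausted.
  static Map<String, Integer> place(int coresNeeded, List<Pool> pools) {
    Map<String, Integer> plan = new LinkedHashMap<>();
    List<Pool> byCost = pools.stream()
        .sorted(Comparator.comparingDouble(Pool::costPerCoreHour)).toList();
    for (Pool p : byCost) {
      if (coresNeeded == 0) break;
      int take = Math.min(coresNeeded, p.freeCores());
      if (take > 0) { plan.put(p.name(), take); coresNeeded -= take; }
    }
    if (coresNeeded > 0) throw new IllegalStateException("not enough capacity");
    return plan;
  }

  public static void main(String[] args) {
    List<Pool> pools = List.of(
        new Pool("on-premise", 64, 0.00),
        new Pool("public-cloud", 1024, 0.05));
    System.out.println(place(200, pools)); // {on-premise=64, public-cloud=136}
  }
}
```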