929 results for Cloud storage services
Abstract:
The lives of humans and most living beings depend on sensation and perception for assessing the surrounding world. Sensory organs acquire a variety of stimuli that are interpreted and integrated in the brain for immediate use or stored in memory for later recall. Among other reasoning tasks, a person has to decide what to do with the available information. Emotions classify collected information, assigning a personal meaning to objects, events and individuals, and form part of our identity. Emotions play a decisive role in cognitive processes such as reasoning, decision making and memory by assigning relevance to collected information. Access to pervasive computing devices, empowered by the ability to sense and perceive the world, provides new ways of acquiring and integrating information. But before data can be assessed for its usefulness, systems must capture it and ensure that it is properly managed for the range of possible goals. Portable and wearable devices are now able to gather and store information from the environment and from our bodies, using cloud-based services and Internet connections. One identified problem is that systems are limited in handling sensory data when compared with our own sensory capabilities. Another is the lack of interoperability between humans and devices, since devices do not properly understand human emotional states and needs. Addressing these problems motivates the present research work. The mission assumed here is to bring sensory and physiological data into a Framework able to manage the collected data in support of human cognitive functions, backed by a new data model. By learning from selected human functional and behavioural models and reasoning over the collected data, the Framework aims to evaluate a person's emotional state in order to empower human-centric applications, along with the capability of storing episodic information about a person's life, with physiological indicators of emotional state, for use by new-generation applications.
Abstract:
This Master's thesis studied the use of Microsoft cloud services for managing the end-point devices and the information of a small or medium-sized enterprise (SME). The target company was an SME that has been using Microsoft cloud services for several years. Owing to the rapid update pace of cloud services, the newest features have largely gone unused. The study was limited to the cases defined in a requirements specification carried out at the target company, for which answers were sought using the current features of Microsoft cloud services. The research was conducted by reviewing the literature on the fundamentals of the cloud computing model and on the cloud services offered by Microsoft. The practical part of the work was carried out with several different end-point devices in a test environment built on Office 365 and Windows Intune, extended with additional capabilities enabled by Microsoft Azure. Cloud providers compete constantly for users' trust, and since the NSA surveillance case the services have seen many improvements. The study found in particular that data protection practices and management features have advanced in recent years as a result of the competition between cloud providers. The new features of Microsoft cloud services offer SMEs effective device and information management solutions that until now have been available only in large enterprise environments.
Abstract:
This thesis focuses on the private membership test (PMT) problem and presents three single-server protocols to solve it. In the presented solutions, a client can perform an inclusion test for some record x in a server's database without revealing the record. Moreover, after executing the protocols, the contents of the server's database remain secret. In each solution, a different cryptographic protocol is used to construct a privacy-preserving variant of a Bloom filter. The three suggested solutions differ slightly from each other, both from a privacy perspective and in terms of complexity. Their use cases are therefore different, and it is impossible to choose one that is clearly the best of the three. We present software implementations of the three protocols, described with pseudocode. The performance of our implementation is measured on a real-world scenario. This thesis is a spin-off from the Academy of Finland research project "Cloud Security Services".
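The abstract does not reproduce the protocols themselves, but all three build on the Bloom filter. For orientation only, below is a minimal sketch of a plain (non-private) Bloom filter membership test in Python; the parameters and hashing scheme are illustrative choices, not those of the thesis, and the privacy-preserving layer added by the cryptographic protocols is omitted.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array."""
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m)          # one byte per bit, for simplicity

    def _positions(self, item):
        # Derive k positions from SHA-256 digests of (salt || item).
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        # True means "possibly in the set" (false positives are possible);
        # False means "definitely not in the set".
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("record-42")
print("record-42" in bf)   # True
print("record-99" in bf)   # False (with high probability)
```

A query answers either "possibly in the set" or "definitely not in the set"; the protocols described in the thesis wrap such a structure so that neither the queried record nor the filter contents are revealed.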
Abstract:
The popularity of the MapReduce programming model has increased the research community's interest in improving it. Among the possible directions, fault tolerance, and concretely the failure detection issue, appears crucial, yet it has not so far reached a satisfying level. Motivated by this, I decided to devote my main research during this period to a prototype system architecture for a MapReduce framework with a new failure detection service, comprising both an analytical (theoretical) part and an implementation part. I am confident that this work can lead the way to further contributions on failure detection in NoSQL application frameworks and in cloud storage systems in general.
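The abstract does not describe the failure detection service itself. Purely as an illustration of the kind of component involved, here is a minimal timeout-based heartbeat detector of the sort commonly used in MapReduce-style frameworks; the class name, timeout value, and worker identifiers are hypothetical and not taken from the thesis.

```python
import time

class HeartbeatFailureDetector:
    """Illustrative timeout-based detector: a worker is suspected of having
    failed if no heartbeat arrives within `timeout` seconds."""
    def __init__(self, timeout=10.0):
        self.timeout = timeout
        self.last_seen = {}               # worker id -> time of last heartbeat

    def heartbeat(self, worker_id):
        self.last_seen[worker_id] = time.monotonic()

    def suspected(self):
        now = time.monotonic()
        return [w for w, t in self.last_seen.items() if now - t > self.timeout]

fd = HeartbeatFailureDetector(timeout=2.0)
fd.heartbeat("worker-1")
time.sleep(2.5)
print(fd.suspected())                      # ['worker-1']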
Abstract:
One of the main challenges facing next generation Cloud platform services is the need to simultaneously achieve ease of programming, consistency, and high scalability. Big Data applications have so far focused on batch processing. The next step for Big Data is to move to the online world. This shift will raise the requirements for transactional guarantees. CumuloNimbo is a new EC-funded project led by Universidad Politécnica de Madrid (UPM) that addresses these issues via a highly scalable multi-tier transactional platform as a service (PaaS) that bridges the gap between OLTP and Big Data applications.
Abstract:
Sound pressure measurement is an extremely important process in the field of acoustic engineering, with applications in numerous subfields such as building acoustics and noise control, where it is necessary to measure sound pressure accurately in very diverse (and sometimes adverse) conditions. On the other hand, the growing ubiquity of mobile devices such as smartphones and tablets, which combine processing power, connectivity, interactivity and an intuitive interface in a small size, makes it possible to use these devices as low-cost, quality measurement systems. This Project aims to use the input/output capabilities of iOS-based mobile devices, in particular the iPhone, together with their processing power, wireless connectivity and geolocation features, to implement an acoustic measurement system that rivals the performance of existing sound level meters. SonoPhone allows, with the addition of an adequate measurement microphone, measurements to be carried out in compliance with current technical standards, as well as programming, configuring, storing and transmitting the results. These measurements are geolocated using the integrated GPS and can be transmitted effortlessly to remote cloud storage. The application is structured in a modular fashion: a data acquisition module reads the signal from the microphone, while a back-end module carries out the necessary processing. Other modules allow the device to be calibrated and control the configuration of the measurement and its storage or transmission. A graphical user interface (GUI) gives visual feedback on the measurement in progress and provides the user with real-time control over the measurement parameters. In addition to implementing the application, a laboratory test was carried out to determine whether the iPhone hardware allows the whole system to comply with international standards for sound level meters.
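The abstract leaves the back-end processing unspecified. As a minimal illustration of the core quantity a sound level meter reports, the sketch below computes the sound pressure level L_p = 20 log10(p_rms / p_0), with p_0 = 20 µPa, assuming the input samples have already been calibrated to pascals (the calibration step itself is device-specific and handled elsewhere in the application).

```python
import math

P_REF = 20e-6  # reference sound pressure in pascals (20 µPa)

def sound_pressure_level(samples_pa):
    """Sound pressure level in dB SPL for a block of samples already
    calibrated to pascals (the calibration itself is not shown here)."""
    rms = math.sqrt(sum(s * s for s in samples_pa) / len(samples_pa))
    return 20.0 * math.log10(rms / P_REF)

# A 1 kHz tone at 1 Pa RMS should measure close to 94 dB SPL.
samples = [math.sqrt(2) * math.sin(2 * math.pi * 1000 * n / 48000) for n in range(4800)]
print(round(sound_pressure_level(samples), 1))   # ≈ 94.0
```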
Abstract:
This document describes in detail the work performed to achieve all the objectives set for this Final Degree Project, whose ultimate goal is the development of a configurable management and administration dashboard for OpenStack instances. OpenStack is a free and open-source platform used as an Infrastructure as a Service (IaaS) solution both in public clouds, which charge for usage time or for the resources consumed, and in private clouds for exclusive use within a company's environment. The OpenStack project started as a collaboration between NASA and Rackspace, and nowadays it is maintained by the most powerful companies in the technology sector through the OpenStack Foundation. The OpenStack platform provides access to its services through a Command Line Interface (CLI), a RESTful API and a web interface in the form of a dashboard. The latter is offered through a service called Horizon. This service provides a graphical interface to access, manage and automate cloud-based services. Horizon's dashboard presents some problems: it only supports configuration through Python code, which leaves the user with no configuration capabilities and forces the administrator to interact directly with the code; and it has no support for multiple regions, which would allow a user to distribute resources across data centers in different locations as best suits them. This Final Degree Project, which is the initial stage of the FI-Dash project, aims to solve these problems by developing a catalog of widgets for the WireCloud platform that give the user all the features offered by Horizon while adding configuration capabilities and features not present in Horizon, such as support for multiple regions. As a prelude to the development of the widget catalog, a study was conducted of the technologies and services offered by OpenStack, as well as of the tools that might be needed to carry out the work. The development process was split into phases matching the different components of the dashboard, each of them managing a different kind of resource. The remaining phases covered the full integration of the dashboard into the WireCloud platform and the design of a usable and attractive graphical interface.
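Multi-region support of the kind described ultimately relies on the region information exposed by OpenStack's Identity service (Keystone). As a rough sketch only, and not the FI-Dash implementation, the snippet below lists region identifiers through the Keystone v3 API; the endpoint URL and token are placeholders, and in a WireCloud widget these would normally come from the platform's preferences or identity integration rather than being hard-coded.

```python
import requests

# Hypothetical endpoint and token; replace with values from the deployment.
KEYSTONE_URL = "https://keystone.example.org:5000/v3"
TOKEN = "gAAAA...example-token"

def list_regions():
    """Return the region ids known to the Identity service (Keystone v3)."""
    resp = requests.get(f"{KEYSTONE_URL}/regions",
                        headers={"X-Auth-Token": TOKEN},
                        timeout=10)
    resp.raise_for_status()
    return [region["id"] for region in resp.json()["regions"]]

if __name__ == "__main__":
    print(list_regions())   # e.g. ['RegionOne', 'RegionTwo']
```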
Abstract:
This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, freeing them from the burden of having to deploy and manage these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) An in-depth analysis of the execution characteristics of the target applications when run in virtualized environments. (2) A performance prediction methodology applicable to the target environment. (3) A scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality of service guarantees. In the process of addressing these pertinent issues for our target user base (i.e. medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to that of two baseline scheduling solutions, and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
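The abstract names a scheduling algorithm with deadline guarantees but does not spell it out. As an illustration only, the sketch below shows a simple earliest-deadline-first admission step that uses predicted execution times; it is not necessarily the algorithm of the dissertation, and the job fields and values are hypothetical.

```python
import heapq

def edf_dispatch(jobs, capacity):
    """Illustrative earliest-deadline-first dispatch: `jobs` is a list of
    dicts with hypothetical fields 'id', 'predicted_runtime', 'deadline';
    `capacity` is the number of VMs free right now."""
    queue = [(j["deadline"], j["id"], j) for j in jobs]
    heapq.heapify(queue)                     # tightest deadline first
    scheduled, now = [], 0.0
    while queue and capacity > 0:
        _, _, job = heapq.heappop(queue)
        if now + job["predicted_runtime"] <= job["deadline"]:
            scheduled.append(job["id"])      # admit only if the prediction fits
            capacity -= 1
    return scheduled

jobs = [{"id": "mri-seg-1", "predicted_runtime": 40, "deadline": 60},
        {"id": "mri-seg-2", "predicted_runtime": 90, "deadline": 80}]
print(edf_dispatch(jobs, capacity=2))        # ['mri-seg-1']
```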
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, Artificial Neural Network and Support Vector Machine, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm which maximizes the SLA-generated revenue for a data center.
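As a sketch of the modeling step the thesis describes with Support Vector Machines, the snippet below fits scikit-learn's SVR to map resource allocations to a performance metric. The features, training values, and hyperparameters are hypothetical; the thesis's own feature set and modeling optimizations are not reproduced here.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training data: each row is a resource allocation
# [cpu_cap_pct, memory_mb]; y is the measured mean response time (ms).
X = np.array([[25, 512], [50, 512], [50, 1024], [75, 1024], [100, 2048]])
y = np.array([240.0, 150.0, 120.0, 95.0, 70.0])

# Scaling matters for SVR; an RBF kernel can capture the non-linear
# relationship between allocations and application performance.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100.0, epsilon=5.0))
model.fit(X, y)

# Predict performance for a candidate VM size before committing to it.
print(model.predict([[60, 1024]]))   # predicted response time in ms
```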
Abstract:
This work presents an application of a hybrid Fuzzy-ELECTRE-TOPSIS multicriteria approach to a cloud computing service selection problem. The research was exploratory, using a case study based on the actual requirements of professionals in the field of cloud computing. The results were obtained by conducting an experiment aligned with the case study and using the distinct profiles of three decision makers; the Fuzzy-TOPSIS and Fuzzy-ELECTRE-TOPSIS methods were applied and their results compared. The solution incorporates fuzzy set theory so that it can handle imprecise or subjective information, thus facilitating the interpretation of the decision makers' judgments in the decision-making process. The results show that both methods were able to rank the alternatives of the problem as expected, but the Fuzzy-ELECTRE-TOPSIS method was able to attenuate the compensatory character of the Fuzzy-TOPSIS method, resulting in a different ranking of alternatives. This attenuation of the compensatory character stood out positively when ranking the alternatives, because it prioritized more balanced alternatives than the Fuzzy-TOPSIS method, a factor that proved important in the validation of the case study, since for composing a mix of services, balanced alternatives form a more consistent mix when working under constraints.
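For readers unfamiliar with the underlying method, the sketch below implements crisp (non-fuzzy) TOPSIS, the ranking core that the thesis extends with fuzzy sets and ELECTRE-style outranking; the decision matrix, weights, and criteria are hypothetical.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS ranking. `matrix`: alternatives x criteria scores,
    `weights`: criterion weights summing to 1, `benefit`: True where
    larger is better, False for cost criteria."""
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalization
    v = norm * np.asarray(weights)                # weighted normalized matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_plus = np.linalg.norm(v - ideal, axis=1)    # distance to ideal solution
    d_minus = np.linalg.norm(v - anti, axis=1)    # distance to anti-ideal
    return d_minus / (d_plus + d_minus)           # closeness coefficient

# Hypothetical cloud services scored on cost (lower is better),
# availability and performance (higher is better).
scores = [[0.10, 99.9, 70], [0.15, 99.99, 85], [0.08, 99.5, 60]]
cc = topsis(scores, weights=[0.4, 0.3, 0.3], benefit=[False, True, True])
print(np.argsort(cc)[::-1])   # indices of services, best first
```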
Abstract:
The purpose of this work is to answer the question "what is the strategic interest for companies in the Aveiro district in internationalizing to the Portuguese-speaking African countries (PALOP) and/or Brazil - Ceará?". The object of study arose from a curricular internship at AIDA - Associação Industrial do Distrito de Aveiro - and led to a review of the literature on strategy and internationalization, as well as to field work (6 interviews with AIDA staff), providing the conceptual and empirical components. It was found that the sector of activity is fundamental to the companies' success in trade missions. In particular, many companies in the metalworking sector, which tend to turn to the PALOP and/or Brazil - Ceará markets, achieved success in many cases - that is, closing deals with new clients and/or direct investment in those countries. The contribution of this work also lies in the perspective, resulting from a survey carried out during the internship at AIDA's External Relations Office, that even though there is a window of opportunity for some companies in those markets (PALOP and Brazil - Ceará), for these companies to truly succeed other ways of doing business could be put into practice, namely strategic alliances between small and medium-sized enterprises (SMEs) in similar sectors, at the local level (Portugal), in order to compete internationally with the respective market leaders. It is therefore suggested to strive for competitiveness not only in the PALOP but also in developed markets such as Germany, the United States of America and/or the Scandinavian countries, since only with demanding clients and the pressure of strong competitors can developed industries be created, able to compete at the highest level and for the best clients, with purchasing power and sources of innovation (Porter, 1990). These lessons sometimes seem forgotten, but according to Gibbs (2007) one of the purposes of research is also to recall what has been forgotten and/or ignored. The interviews contributed by providing an understanding of the reasons why Portuguese companies choose these markets, the reasons for success or failure in the PALOP and/or Brazil - Ceará, the investment and effort by Portuguese non-governmental entities in internationalizing metalworking companies, the strengths and weaknesses of the trade missions, the aspects that make AIDA an agent of change, and the areas in which AIDA could act more diligently. Recommendations to the association are also suggested, among others: including the cultural aspects of each country in the market studies, not only on the PALOP and Brazil - Ceará but also in market studies of the Aveiro district and of Portugal, for promotion to potential importers; process improvement, by implementing knowledge management/sharing software for the various business opportunities, streamlining the process of establishing interest in doing business within the scope of the EEN (Enterprise Europe Network); intervention in the IAPMEI platform by qualified IT staff; and storing data in cloud storage - a Dropbox-like service - so as to make better use of the time spent and to keep the folders accessed via the intranet smaller.
Abstract:
The modern industrial environment is populated by a myriad of intelligent devices that collaborate to accomplish the numerous business processes in place at production sites. The close collaboration between humans and work machines poses interesting new challenges that industry must overcome in order to implement the new digital policies demanded by the industrial transition. The Industry 5.0 movement is a companion revolution to the previous Industry 4.0, and it relies on three characteristics that any industrial sector should have and pursue: human centrality, resilience, and sustainability. The fifth industrial revolution cannot be realized without starting from the implementation of Industry 4.0-enabled platforms. The common feature in the development of this kind of platform is the need to integrate the Information and Operational layers. Our thesis work focuses on the implementation of a platform addressing all the digitization features foreseen by the fourth industrial revolution, making the IT/OT convergence inside production plants an improvement rather than a risk. Furthermore, we added modular features to our platform that enable the Industry 5.0 vision. We supported human centrality through mobile crowdsensing techniques, and reliability and sustainability through pluggable cloud computing services combined with data coming from the crowd. We achieved important and encouraging results in all the domains in which we conducted our experiments. Our IT/OT convergence-enabled platform exhibits the performance needed to satisfy the strict requirements of production sites. The multi-layer capability of the framework enables the exploitation of data that does not come strictly from work machines, allowing a closer interaction between the company, its employees, and its customers.
Abstract:
Dissertation submitted to obtain the degree of Master in Informatics Engineering