837 results for cloud, disembodied, embodied, coordinazione, PaaS, OPaaS
Abstract:
Brown dwarfs and giant gas extrasolar planets have cold atmospheres with rich chemical compositions from which mineral cloud particles form. Their properties, like particle sizes and material composition, vary with height, and the mineral cloud particles are charged due to triboelectric processes in such dynamic atmospheres. The dynamics of the atmospheric gas is driven by the irradiating host star and/or by the rotation of the objects, which changes during their lifetime. Thermal gas ionisation in these ultra-cool but dense atmospheres allows electrostatic interactions and magnetic coupling of a substantial atmosphere volume. Combined with a strong magnetic field, a chromosphere and aurorae might form, as suggested by radio and X-ray observations of brown dwarfs. Non-equilibrium processes like cosmic ray ionisation and discharge processes in clouds will increase the local pool of free electrons in the gas. Cosmic rays and lightning discharges also alter the composition of the local atmospheric gas such that tracer molecules might be identified. Cosmic rays affect the atmosphere through air showers in a certain volume, which was modelled with a 3D Monte Carlo radiative transfer code to visualise their spatial extent. Given a certain degree of thermal ionisation of the atmospheric gas, we suggest that electron attachment to charge mineral cloud particles is too inefficient to cause an electrostatic disruption of the cloud particles. Cloud particles will therefore not be destroyed by Coulomb explosion for the local temperatures in the collisionally dominated brown dwarf and giant gas planet atmospheres. However, the cloud particles are destroyed electrostatically in regions with strong gas ionisation. The potential size of such cloud holes would, however, be too small, and they might occur too far inside the cloud, to mimic the effect of, e.g., magnetic field induced star spots.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully.
I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
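The progressive-sampling workflow described above can be sketched in a few lines: fix one shuffle order up front (giving repeatable semantics), then emit a refined estimate after each progressively larger sample. This is a minimal single-machine illustration of the idea, not the NOW! framework or its API:

```python
import random

def progressive_mean(data, batch_sizes, seed=0):
    """Yield successively refined estimates of the mean of `data`
    from progressively larger random samples (a sketch of the
    progressive-sampling idea, not the NOW! implementation)."""
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)  # one fixed sample order => repeatable semantics
    taken, total = 0, 0.0
    for size in batch_sizes:
        for x in shuffled[taken:size]:  # reuse work done on earlier samples
            total += x
        taken = size
        yield taken, total / taken  # early, progressively refined answer

values = list(range(1, 1001))  # true mean = 500.5
for n, est in progressive_mean(values, [10, 100, 1000]):
    print(n, round(est, 1))
```

Because each batch extends the previous sample instead of resampling from scratch, no work is repeated across samples, which is the cost saving the text argues for.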
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them onto distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
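The neighborhood-centric model can be illustrated with a toy sketch: extract the k-hop subgraph around each vertex and hand it to a per-subgraph analysis function. This is a single-machine illustration of the programming model, not NSCALE's distributed implementation:

```python
from collections import defaultdict

def ego_subgraphs(edges, hops=1):
    """Extract the k-hop neighborhood subgraph around every vertex
    (a sketch of the neighborhood-centric model, not NSCALE itself)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    for center in sorted(adj):
        frontier, nodes = {center}, {center}
        for _ in range(hops):  # breadth-first expansion to `hops` levels
            frontier = {w for n in frontier for w in adj[n]} - nodes
            nodes |= frontier
        # keep only edges whose two endpoints lie inside the neighborhood
        sub = [(u, v) for u, v in edges if u in nodes and v in nodes]
        yield center, sub

for center, sub in ego_subgraphs([(1, 2), (2, 3), (3, 4)]):
    print(center, len(sub))  # a user program (e.g. motif counting) runs here
```

A user program then operates on each yielded subgraph as a whole, rather than on the state of a single vertex as in vertex-centric frameworks.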
Abstract:
In this thesis, tool support is addressed for the combined disciplines of model-based testing and performance testing. Model-based testing (MBT) utilizes abstract behavioral models to automate test generation, thus decreasing the time and cost of test creation. MBT is a functional testing technique, thereby focusing on output, behavior, and functionality. Performance testing, however, is non-functional and is concerned with responsiveness and stability under various load conditions. MBPeT (Model-Based Performance evaluation Tool) is one such tool, which utilizes probabilistic models, representing dynamic real-world user behavior patterns, to generate synthetic workload against a System Under Test (SUT) and in turn carry out performance analysis based on key performance indicators (KPIs). Developed at Åbo Akademi University, the MBPeT tool currently comprises a downloadable command-line based tool as well as a graphical user interface. The goal of this thesis project is two-fold: 1) to extend the existing MBPeT tool by deploying it as a web-based application, thereby removing the requirement of local installation, and 2) to design a user interface for this web application which will add new user interaction paradigms to the existing feature set of the tool. All phases of the MBPeT process will be realized via this single web deployment location, including probabilistic model creation, test configuration, test session execution against a SUT with real-time monitoring of user-configurable metrics, and final test report generation and display. This web application (MBPeT Dashboard) is implemented in the Java programming language on top of the Vaadin framework for rich internet application development. The Vaadin framework handles the complicated web communication processes and front-end technologies, freeing developers to implement the business logic as well as the user interface in pure Java.
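The core idea of a probabilistic user model can be sketched as a weighted random walk over user actions, emitting one synthetic request per visited state. The states, action names and transition probabilities below are invented for illustration and are not MBPeT's model format:

```python
import random

# Hypothetical user model: each state maps to (next_action, probability)
# pairs.  Outgoing probabilities per state sum to 1.
MODEL = {
    "browse": [("browse", 0.5), ("search", 0.3), ("exit", 0.2)],
    "search": [("browse", 0.6), ("exit", 0.4)],
}

def simulate_session(model, start="browse", seed=1, max_steps=50):
    """Walk the model from `start`, recording one synthetic action
    per state until the virtual user exits."""
    rng = random.Random(seed)
    state, trace = start, []
    for _ in range(max_steps):
        trace.append(state)  # in a load tool, fire the request here
        actions, weights = zip(*model[state])
        state = rng.choices(actions, weights=weights)[0]
        if state == "exit":
            break
    return trace

print(simulate_session(MODEL))
```

Running many such walks concurrently, each with its own seed, yields the kind of synthetic workload against which KPIs can then be measured.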
A number of experiments are run in a case study environment to validate the functionality of the newly developed Dashboard application as well as the scalability of the implemented solution in handling multiple concurrent users. The results support a successful solution with regard to the functional and performance criteria defined, while improvements and optimizations are suggested to increase both of these factors.
Abstract:
This thesis presents an analysis of the largest catalog to date of infrared spectra of massive young stellar objects in the Large Magellanic Cloud. Evidenced by their very different spectral features, the luminous objects span a range of evolutionary states, from those most embedded in their natal molecular material to those that have dissipated and ionized their surroundings to form compact HII regions and photodissociation regions. We quantify the contributions of the various spectral features using the statistical method of principal component analysis. Using this analysis, we classify the YSO spectra into several distinct groups based upon their dominant spectral features: silicate absorption (S Group), silicate absorption and fine-structure line emission (SE), polycyclic aromatic hydrocarbon (PAH) emission (P Group), PAH and fine-structure line emission (PE), and only fine-structure line emission (E). Based upon the relative numbers of sources in each category, we are able to estimate the amount of time massive YSOs spend in each evolutionary stage. We find that approximately 50% of the sources have ionic fine-structure lines, indicating that a compact HII region forms about half-way through the YSO lifetime probed in our study. Of the 277 YSOs we collected spectra for, 41 have ice absorption features, indicating they are surrounded by cold ice-bearing dust particles. We have decomposed the shape of the ice features to probe the composition and thermal history of the ice. We find that most of the CO2 ice is embedded in a polar ice matrix that has been thermally processed by the embedded YSO. The amount of thermal processing may be correlated with the luminosity of the YSO. Using the Australia Telescope Compact Array, we imaged the dense gas around a subsample of our sources in the HII complexes N44, N105, N113, and N159, using HCO+ and HCN as dense gas tracers.
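The group assignment described above can be sketched as a simple decision rule over detected features; detecting the features themselves (e.g. from the principal component amplitudes) is beyond this illustration, and the boolean interface below is a hypothetical simplification:

```python
def classify_yso(silicate_abs, pah_emission, fine_structure):
    """Map detected spectral features to the S/SE/P/PE/E groups
    described in the text (a sketch, not the thesis pipeline)."""
    if silicate_abs:
        return "SE" if fine_structure else "S"
    if pah_emission:
        return "PE" if fine_structure else "P"
    return "E" if fine_structure else "unclassified"

# Silicate absorption together with fine-structure lines -> SE group
print(classify_yso(True, False, True))
```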
We find that the molecular material in star forming environments is highly clumpy, with clumps that range from subparsec to ~2 parsecs in size and with masses between 10^2 and 10^4 solar masses. We find that there are varying levels of star formation in the clumps, with the lower-mass clumps tending to be without massive YSOs. These YSO-less clumps could either represent an earlier evolutionary stage than the more massive YSO-bearing ones, or clumps that will never form a massive star. Clumps with massive YSOs at their centers have masses larger than those with massive YSOs at their edges, and we suggest that the difference is evolutionary: edge-YSO clumps are more advanced than those with YSOs at their centers. Clumps with YSOs at their edges may have had a significant fraction of their mass disrupted or destroyed by the forming massive star. We find that the strength of the silicate absorption feature seen in YSO IR spectra is well-correlated with the on-source HCO+ and HCN flux densities, such that the strength of the feature is indicative of the embeddedness of the YSO. We estimate that ~40% of the entire spectral sample has strong silicate absorption features, implying that the YSOs are embedded in circumstellar material for about 40% of the time probed in our study.
Abstract:
Aim of the study: To introduce and describe FlorNExT®, a free cloud computing application to estimate growth and yield of maritime pine (Pinus pinaster Ait.) even-aged stands in the Northeast of Portugal (NE Portugal). Area of study: NE Portugal. Material and methods: FlorNExT® implements a dynamic growth and yield modelling framework which integrates transition functions for dominant height (site index curves) and basal area, as well as output functions for tree and stand volume, biomass, and carbon content. Main results: FlorNExT® is freely available from any device with an Internet connection at: http://flornext.esa.ipb.pt/. Research highlights: This application has been designed to make it possible for any stakeholder to easily estimate standing volume, biomass, and carbon content in maritime pine stands from stand data, as well as to estimate growth and yield based on four stand variables: age, density, dominant height, and basal area. FlorNExT® allows planning thinning treatments. FlorNExT® is a fundamental tool to support forest mobilization at local and regional scales in NE Portugal.
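The transition-function idea behind such a modelling framework can be sketched with an anamorphic dominant-height projection; the functional form and the rate parameter below are illustrative placeholders, not the fitted maritime-pine equations used by FlorNExT®:

```python
import math

def project_dominant_height(h1, t1, t2, k=0.05):
    """Project dominant height from stand age t1 to age t2 using a
    simple anamorphic growth curve:
        H2 = H1 * (1 - e^(-k*t2)) / (1 - e^(-k*t1))
    The curve shape and k are placeholder assumptions, not the
    site-index curves fitted for NE Portugal."""
    return h1 * (1 - math.exp(-k * t2)) / (1 - math.exp(-k * t1))

# A stand 12 m tall at age 20, projected to age 40
print(round(project_dominant_height(12.0, 20, 40), 2))
```

A transition function of this kind maps a state variable (dominant height) from one age to another; analogous functions for basal area, plus output functions for volume, biomass and carbon, complete such a framework.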
Abstract:
Dissertation presented to the Instituto Politécnico de Castelo Branco in fulfilment of the requirements for the degree of Master in Software Development and Interactive Systems, carried out under the scientific supervision of Doctor Fernando Reinaldo Silva Garcia Ribeiro and Doctor José Carlos Meireles Monteiro Metrôlho, Adjunct Professors of the Technical-Scientific Unit of Informatics of the Escola Superior de Tecnologia of the Instituto Politécnico de Castelo Branco.
Abstract:
Photovoltaic systems are emerging renewable energy sources that generate electricity from solar radiation. Monitoring stand-alone photovoltaic systems provides the information their owners need to maintain, operate and control these systems, reducing operating costs and avoiding unwanted interruptions in the electricity supply of isolated areas. This article proposes the development of a platform for monitoring stand-alone photovoltaic systems in Ecuador, with the fundamental objective of developing a scalable solution based on the use of free software, on low-consumption sensors, and on web services in the 'Software as a Service' (SaaS) modality for processing, managing and publishing the recorded information, and on the creation of an innovative solar photovoltaic control centre in Ecuador.
Abstract:
Current trends in broadband mobile networks point towards placing different capabilities at the edge of the mobile network in a centralised way. On one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, all those cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on the placement of processing and storage capabilities close to the remote units, which is especially well suited for the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and commonly used for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.
Abstract:
Currently, the use of Cloud Computing is increasing and there are many providers offering services that make use of this technology. One of them is Amazon Web Services, which, through its Amazon EC2 service, offers different types of instances that we can use according to our needs. The AWS business model is based on pay-per-use, that is, we only pay for the time the instances are used. In this work, an application is implemented on Amazon EC2 whose objective is to extract, from different information sources, the sales data of publishers and bookshops in Spain. These data are processed, loaded into a database, and used to generate statistical reports that will help clients make better decisions. Because the application processes a large amount of data, we propose the development and validation of a model that allows us to obtain an optimal execution on Amazon EC2. This model takes into account the execution time, the cost of use, and a cost/performance metric. Additionally, Docker container technology is used to carry out a specific case of the application's deployment.
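The cost/performance comparison underlying such a model can be sketched as follows. The instance names, runtimes and hourly prices below are invented example numbers, not AWS figures, and the combined metric (cost multiplied by time, where lower is better) is one simple choice among several:

```python
def cost_performance(runtime_h, price_per_h):
    """Return (cost, cost*time) for one candidate configuration.
    Cost follows the pay-as-you-go model: hours used * hourly price."""
    cost = runtime_h * price_per_h
    return cost, cost * runtime_h  # penalises both expense and slowness

# Hypothetical candidates: (runtime in hours, $/hour)
candidates = {"small": (10.0, 0.10), "large": (4.0, 0.40)}

for name, (t, p) in sorted(candidates.items()):
    cost, metric = cost_performance(t, p)
    print(name, round(cost, 2), round(metric, 2))
```

In this toy example the larger instance costs more in absolute terms but wins on the combined metric, which is the kind of trade-off such a model is meant to expose.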
Abstract:
This article addresses the main technical aspects that facilitate the use and growth of cloud computing, which go hand in hand with the emergence of more and better services on the Internet and the technological development of broadband. Finally, we examine the impact of cloud computing technologies on the automation of information units.
Abstract:
This thesis examines the philosophical foundations of Canadian democratic institutions and analyses how their actual design contributes to achieving them. To move from theory to practice, democracy must be institutionalised. Institutions are not mere constraints on government action; they embody democratic norms. Contemporary democratic theories, however, are often abstract and disembodied. While they study the normative foundations of democracy in general, they rarely reflect on the mechanisms that would achieve the democratic ideal. Conversely, political science attempts to map the entire institutional landscape surrounding state action. But the political-science approach has a major weakness: it offers no epistemological or moral justification of democratic institutions. This dichotomy between principles and institutions is misleading. The principles of liberal democracy are embodied in institutions. By focusing on the philosophical foundations of democratic and liberal institutions, this thesis revives a long tradition running from Aristotle to John Stuart Mill and bringing together thinkers such as Montesquieu and James Madison. Currently, academic research still turns away from institutional questions, on the pretext that they are not philosophical enough. Institutional design, however, is a philosophical question. This thesis proposes improvements so that democratic institutions may fulfil their philosophical role more adequately. Medically assisted suicide is used as an example of the influence of institutions on democracy.