792 results for cloud-based computing


Relevance:

30.00%

Publisher:

Abstract:

In Performance-Based Earthquake Engineering (PBEE), evaluating the seismic performance (or seismic risk) of a structure at a designed site has gained major attention, especially in the past decade. One of the objectives in PBEE is to quantify the seismic reliability of a structure (due to future random earthquakes) at a site. For that purpose, Probabilistic Seismic Demand Analysis (PSDA) is utilized as a tool to estimate the Mean Annual Frequency (MAF) of exceeding a specified value of a structural Engineering Demand Parameter (EDP). This dissertation focuses mainly on applying the average of a number of spectral acceleration ordinates over an interval of periods, Sa,avg(T1,…,Tn), as a scalar ground motion Intensity Measure (IM) when assessing the seismic performance of inelastic structures. Since the interval of periods over which Sa,avg is computed reflects the degree of influence of higher vibration modes on the inelastic response, it is appropriate to speak of improved IMs. The results using these improved IMs are compared with conventional elastic-based scalar IMs (e.g., pseudo-spectral acceleration, Sa(T1), or peak ground acceleration, PGA) and with an advanced inelastic-based scalar IM (i.e., inelastic spectral displacement, Sdi). The advantages of applying improved IMs are: (i) "computability" of the seismic hazard according to traditional Probabilistic Seismic Hazard Analysis (PSHA), because ground motion prediction models are already available for Sa(Ti), and hence it is possible to employ existing models to assess hazard in terms of Sa,avg; and (ii) "efficiency", i.e., smaller variability of the structural response, which was minimized in order to identify the optimal period range over which to compute Sa,avg. Further work is needed to also assess the desirable properties of "sufficiency" and "scaling robustness", which are not addressed in this dissertation.
However, for ordinary records (i.e., with no pulse-like effects), using the improved IMs is found to be more accurate than using the elastic- and inelastic-based IMs. For structural demands that are dominated by the first mode of vibration, the advantage of using Sa,avg over the conventionally used Sa(T1) and the advanced Sdi can be negligible. For structural demands with significant higher-mode contribution, an improved scalar IM that incorporates higher modes needs to be utilized. In order to fully understand the influence of the IM on the seismic risk, a simplified closed-form expression for the probability of exceeding a limit-state capacity was chosen as a reliability measure under seismic excitations and implemented for Reinforced Concrete (RC) frame structures. This closed-form expression is particularly useful for the seismic assessment and design of structures, taking into account the uncertainty in the generic variables, structural "demand" and "capacity", as well as the uncertainty in the seismic excitations. The assumed framework employs nonlinear Incremental Dynamic Analysis (IDA) procedures in order to estimate the variability in the response of the structure (demand) to seismic excitations, conditioned on the IM. The estimate of the seismic risk obtained with the simplified closed-form expression is affected by the choice of IM: the final seismic risk is not constant across IMs, although it remains within the same order of magnitude. Possible reasons concern the assumed nonlinear model, or the insufficiency of the selected IM. Since it is impossible to state what the "real" probability of exceeding a limit state is by looking at the total risk alone, the only way forward is the optimization of the desirable properties of an IM.
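Sa,avg is commonly defined as the geometric mean of pseudo-spectral acceleration ordinates sampled at n periods in the chosen interval. As a hedged illustration (the period grid, the toy spectrum, and the n = 10 sampling below are hypothetical, not the dissertation's actual data), the computation can be sketched as:

```python
import numpy as np

def sa_avg(periods, sa_spectrum, t_low, t_high, n=10):
    """Average spectral acceleration Sa,avg(T1,...,Tn): geometric mean of
    pseudo-spectral accelerations sampled at n periods in [t_low, t_high].
    The response spectrum is assumed given on the discrete `periods` grid."""
    ts = np.linspace(t_low, t_high, n)
    sa = np.interp(ts, periods, sa_spectrum)    # interpolate the response spectrum
    return float(np.exp(np.mean(np.log(sa))))   # geometric mean

# toy spectrum: a plateau followed by a roughly 1/T decay (illustrative only)
T = np.linspace(0.05, 4.0, 200)
Sa = np.where(T < 0.5, 1.0, 0.5 / T)
print(sa_avg(T, Sa, 0.5, 2.0))
```

The lower and upper bounds of the period interval are exactly the quantities the dissertation tunes to capture higher-mode (and period-elongation) effects.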

Relevance:

30.00%

Publisher:

Abstract:

Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow using satellite-based passive microwave radiometers, is currently an unsolved problem. The challenge results from the large variability of the microwave emissivity spectra of snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work focuses on the investigation of a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units AMSU-A and AMSU-B (on NOAA-16) and the infrared spectrometer MODIS (on Aqua) have been co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products have been used as the truth to calibrate and validate all the proposed algorithms. The methodological approach followed can be summarised in two steps. In a first step, an empirical search for a threshold aimed at discriminating the no-snow case was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce appropriate results, a more statistically sound approach was attempted. Two different techniques, which allow the probability above and below a Brightness Temperature (BT) threshold to be computed, have been applied to the available data. The first technique is based on a logistic distribution to represent the probability of snow given the predictors. The second technique, termed the Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique that requires no hypothesis on the shape of the probabilistic model (such as, for instance, the logistic one), but only the estimation of the BT thresholds.
The results obtained show that both proposed methods are able to discriminate snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
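The first (logistic) technique can be sketched as follows. The two channels and the coefficient values are purely illustrative assumptions, not the calibrated values obtained from the CloudSat co-location:

```python
import numpy as np

def p_snow(bt, beta0, beta):
    """Logistic model: P(snow | BT) = 1 / (1 + exp(-(beta0 + beta . BT))).
    bt: vector of brightness temperatures (K) for the chosen channels."""
    z = beta0 + np.dot(beta, bt)
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical coefficients for two microwave channels (illustrative only);
# negative weights encode "colder scene -> more likely snowing"
beta0, beta = 40.0, np.array([-0.10, -0.05])
print(p_snow(np.array([250.0, 240.0]), beta0, beta))
```

In practice the coefficients would be fitted against the CloudSat snowfall flag used as ground truth.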

Relevance:

30.00%

Publisher:

Abstract:

A prevalent claim is that we are in a knowledge economy. When we talk about the knowledge economy, we generally mean a "knowledge-based economy", indicating the use of knowledge and technologies to produce economic benefits. Knowledge is thus both a tool and a raw material (people's skills) for producing some kind of product or service. In this kind of environment, economic organization is undergoing several changes. For example, authority relations are less important, legal and ownership-based definitions of the boundaries of the firm are becoming irrelevant, and there are only few constraints on the set of coordination mechanisms. Hence, what characterises a knowledge economy is the growing importance of human capital in productive processes (Foss, 2005) and the increasing knowledge intensity of jobs (Hodgson, 1999). Economic processes are also highly intertwined with social processes: they are likely to be informal and reciprocal rather than formal and negotiated. Another important point is the problem of the division of labor: as economic activity becomes mainly intellectual and requires the integration of specific and idiosyncratic skills, the task of dividing the job and assigning it to the most appropriate individuals becomes arduous, a "supervisory problem" (Hodgson, 1999) emerges, and traditional hierarchical control may prove increasingly ineffective. Not only does the specificity of know-how make it awkward to monitor the execution of tasks; more importantly, top-down integration of skills may be difficult because 'the nominal supervisors will not know the best way of doing the job – or even the precise purpose of the specialist job itself – and the worker will know better' (Hodgson, 1999). We therefore expect the organization of the economic activity of specialists to be, at least partially, self-organized. The aim of this thesis is to bridge studies from computer science, and in particular from Peer-to-Peer (P2P) networks, to organization theories.
We think that the P2P paradigm fits well with organization problems arising in all those situations in which a central authority is not possible. We believe that P2P networks show a number of characteristics similar to firms working in a knowledge-based economy, and hence that the methodology used for studying P2P networks can be applied to organization studies. The three main characteristics we think P2P networks have in common with firms involved in the knowledge economy are:
- Decentralization: in a pure P2P system every peer is an equal participant; there is no central authority governing the actions of the single peers.
- Cost of ownership: P2P computing implies shared ownership, reducing the cost of owning the systems and the content, and the cost of maintaining them.
- Self-organization: the process in a system leading to the emergence of global order within the system, without the presence of another system dictating this order.
These characteristics are also present in the kind of firm that we try to address, and that is why we have carried the techniques we adopted for studies in computer science (Marcozzi et al., 2005; Hales et al., 2007 [39]) over to management science.

Relevance:

30.00%

Publisher:

Abstract:

This thesis is set in the context of Cloud Computing, a model enabling convenient, on-demand, shared network access to diverse computational resources, such as processing power or mass storage. The goal of this work is the realization of a private Cloud for service provisioning, based on a P2P architecture. The thesis studies the case of a P2P system at the infrastructure level (IaaS) and proposes the realization of a prototype able to support a basic set of APIs. Gossip protocols are used to build the fundamental services.
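As a minimal illustration of the gossip style of protocol mentioned above (a generic averaging-gossip sketch, not the prototype's actual code), each peer repeatedly exchanges state with a randomly chosen partner, and a global aggregate emerges without any central coordinator:

```python
import random

def gossip_round(values, rng):
    """One synchronous round of averaging gossip: each peer contacts a
    random partner and both adopt the average of their two values."""
    for i in range(len(values)):
        j = rng.randrange(len(values))
        avg = (values[i] + values[j]) / 2.0
        values[i] = values[j] = avg

rng = random.Random(42)
vals = [float(v) for v in range(10)]   # initial local values; global mean is 4.5
for _ in range(50):
    gossip_round(vals, rng)
print(vals[0])
```

Pairwise averaging conserves the sum, so every peer converges to the global mean; fundamental IaaS services such as membership and aggregation can be built on this same pattern.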

Relevance:

30.00%

Publisher:

Abstract:

The main aims of my PhD research work have been the investigation of the redox, photophysical and electronic properties of carbon nanotubes (CNT) and of their possible uses as functional substrates for the (electro)catalytic production of oxygen and as molecular connectors for quantum-dot molecular automata. While many and diverse applications of CNT have long been proposed in electronics, in the sensor and biosensor field, and as structural reinforcement in composite materials, the study of their properties as individual species has long been a challenging task. CNT are in fact virtually insoluble in any solvent and, for years, most of the studies have been carried out on bulk samples (bundles). In Chapter 2 an appropriate description of carbon nanotubes is reported, covering their production methods and the functionalization strategies for their solubilization. In Chapter 3 an extensive voltammetric and vis-NIR spectroelectrochemical investigation of true solutions of unfunctionalized individual single-wall CNT (SWNT) is reported, which permitted the standard electrochemical potentials of reduction and oxidation to be determined, for the first time, as a function of the tube diameter for a large number of semiconducting SWNTs. We also established the Fermi energy and the exciton binding energy for individual tubes in solution and, from the linear correlation found between the potentials and the optical transition energies, we were able to calculate the redox potentials of SWNTs that are insufficiently abundant or absent in the samples. In Chapter 4 we report on very efficient and stable nano-structured oxygen-evolving anodes (OEA) that were obtained by the assembly of an oxygen-evolving polyoxometalate cluster (a totally inorganic ruthenium catalyst) with a conducting bed of multi-walled carbon nanotubes (MWCNT).
Here, MWCNT were effectively used as carriers of the polyoxometalate for the electrocatalytic production of oxygen and turned out to greatly increase both the efficiency and the stability of the device, avoiding the release of the catalyst. Our bioinspired electrode addresses the major challenge of artificial photosynthesis, i.e. efficient water oxidation, taking us closer to the day when we might power the planet with carbon-free fuels. In Chapter 5 a study is reported on surface-active chiral bis-ferrocenes conveniently designed to act as prototypical units for molecular computing devices. Preliminary electrochemical studies in a liquid environment demonstrated the capability of such molecules to enter three distinguishable oxidation states. The introduction of side chains made it possible to organize them in the form of self-assembled monolayers (SAM) onto a surface and to study their molecular and redox properties on solid substrates. Electrochemical studies on SAMs of these molecules confirmed their propensity to undergo fast (Nernstian) electron transfer processes generating, in the positive potential region, either the fully oxidized Fc+-Fc+ or the partly oxidized Fc+-Fc species. Finally, in Chapter 6 we report on a preliminary electrochemical study of graphene solutions prepared according to an original procedure recently described in the literature. Graphene is the newest member of the carbon nanomaterial family and is certainly bound to be among the most promising materials for the next nanoelectronics generation.
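The extrapolation from the linear correlation between redox potentials and optical transition energies can be sketched with a simple least-squares fit. The data points below are hypothetical placeholders, not the measured values from Chapter 3:

```python
import numpy as np

# hypothetical (optical transition energy [eV], oxidation potential [V]) pairs
e_opt = np.array([0.90, 1.00, 1.10, 1.25, 1.40])
e_ox  = np.array([0.45, 0.50, 0.55, 0.62, 0.70])

slope, intercept = np.polyfit(e_opt, e_ox, 1)  # least-squares straight line

def predict_ox(e):
    """Estimate the oxidation potential of a tube whose transition energy
    is known but whose redox potential could not be measured directly."""
    return slope * e + intercept

print(predict_ox(1.50))
```

This is the sense in which the correlation lets one assign redox potentials to SWNT species that are too scarce in the samples to be measured.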

Relevance:

30.00%

Publisher:

Abstract:

Cloud computing is probably the most debated topic in today's Information and Communication Technology (ICT) world. The spread of this new way of conceiving the delivery of IT services is the evolution of a series of technologies that are revolutionizing the way organizations build their IT infrastructures. The advantages deriving from the use of Cloud Computing infrastructures include greater control over services, over the cost structure, and over the assets employed. Costs are proportional to the actual use of the services (pay-per-use), thus avoiding waste and making the sourcing process more efficient. Several companies have already begun to try some cloud services, and many others are considering starting down a similar path. The first organization to provide a cloud computing platform was Amazon, with its Elastic Compute Cloud (EC2). In July 2010 OpenStack was born, an open-source project created by merging code developed by the NASA government agency [10] and by the US hosting company Rackspace. The resulting software performs the same functions as Amazon's but, unlike it, was released under the Apache license, hence with no restrictions on use or implementation. Today the OpenStack project counts numerous partner companies, including Dell, HP, IBM, Cisco, and Microsoft. The objective of this work is to understand and analyze how the OpenStack software works. The main aim is to become familiar with its various components and to understand how they interact with one another, in order to build cloud infrastructures of the Infrastructure as a Service (IaaS) type. The topics are organized in the following chapters.
The first chapter introduces the definition of cloud computing, discussing its main characteristics, and then describes the various service and deployment models, outlining the resulting advantages and disadvantages. The second chapter deals with one of the technologies employed for building cloud computing infrastructures, virtualization, covering its various forms and types. The third chapter analyzes and describes in detail how the OpenStack project works: for each software component, the architecture is illustrated, accompanied by diagrams, together with the corresponding mechanism. The fourth chapter covers the installation and configuration of the software, and reports some tests carried out on the machine on which the software was installed. Finally, the fifth chapter draws the conclusions, with considerations on the objectives achieved and on the characteristics of the software under examination.

Relevance:

30.00%

Publisher:

Abstract:

Electronic applications are nowadays converging under the umbrella of the cloud computing vision. The future ecosystem of information and communication technology is going to integrate clouds of portable clients and embedded devices exchanging information, through the internet layer, with processing clusters of servers, data-centers and high-performance computing systems. Yet even as the whole of society waits to embrace this revolution, there is a backside to the story. Portable devices require batteries to work far from power plugs, and their storage capacity does not scale as the increasing power requirement does. At the other end, processing clusters such as data-centers and server farms are built upon the integration of thousands of multiprocessors, for each of which the technology scaling of the last decade has produced a dramatic increase in power density, with significant spatial and temporal variability. This leads to power and temperature hot-spots, which may cause non-uniform ageing and accelerated chip failure. Moreover, all the heat removed from the silicon translates into high cooling costs, and the trend in the ICT carbon footprint shows that the run-time power consumption of the whole spectrum of devices accounts for a significant slice of the world's carbon emissions. This thesis work embraces the full ICT ecosystem and its dynamic power consumption concerns by describing a set of new and promising system-level resource management techniques to reduce the power consumption and related issues for two corner cases: mobile devices and high-performance computing.

Relevance:

30.00%

Publisher:

Abstract:

A numerical model for studying the influence of deep convective cloud systems on photochemistry was developed, based on a non-hydrostatic meteorological model and chemistry from a global chemistry transport model. The transport of trace gases, the scavenging of soluble trace gases, and the influence of lightning-produced nitrogen oxides (NOx = NO + NO2) on the local ozone-related photochemistry were investigated in a multi-day case study for an oceanic region located in the tropical western Pacific. Model runs considering the influence of large-scale flows, previously neglected in multi-day cloud-resolving and single-column model studies of tracer transport, showed that the influence of the mesoscale subsidence (between clouds) on trace gas transport was considerably overestimated in those studies. The simulated vertical transport and scavenging of highly soluble tracers were found to depend on the initial profiles, reconciling contrasting results from two previous studies. The influence of the modeled uptake of trace gases by hydrometeors in the liquid and ice phases was studied in some detail for a small number of atmospheric trace gases, and novel aspects concerning the role of the retention coefficient (i.e. the fraction of a dissolved trace gas that is retained in the ice phase upon freezing) in the vertical transport of highly soluble gases were illuminated. Including lightning NOx production inside a 500 km 2-D model domain was found to be important for the NOx budget and caused small to moderate changes in the domain-averaged ozone concentrations. A number of sensitivity studies showed that the fraction of lightning-associated NOx lost through photochemical reactions in the vicinity of the lightning source was considerable, but depended strongly on assumptions about the magnitude and altitude of the lightning NOx source.
In contrast to a suggestion from an earlier study, it was argued that the near-zero upper-tropospheric ozone mixing ratios observed close to the study region were most probably not caused by the formation of NO associated with lightning. Instead, it was argued, in agreement with suggestions from other studies, that the deep convective transport of ozone-poor air masses from the relatively unpolluted marine boundary layer, air masses which had most likely been advected horizontally over relatively large distances (both before and after encountering deep convection), probably played a role. In particular, it was suggested that the ozone profiles observed during CEPEX (Central Equatorial Pacific Experiment) were strongly influenced by the deep convection and the larger-scale flow associated with the intra-seasonal oscillation.
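The role of the retention coefficient can be illustrated with a minimal partitioning calculation (a schematic sketch, not the model's actual microphysics): when a droplet freezes, a fraction r of the dissolved trace-gas burden stays in the ice and is carried with it, while the remainder is released back to the gas phase.

```python
def freeze_partition(c_liquid, retention):
    """Partition a dissolved trace-gas burden when a droplet freezes:
    a fraction `retention` (r) stays in the ice phase, the rest is
    released to the gas phase."""
    if not 0.0 <= retention <= 1.0:
        raise ValueError("retention coefficient must lie in [0, 1]")
    ice = retention * c_liquid
    released = c_liquid - ice
    return ice, released

# e.g. a highly soluble gas, 1.0 units dissolved, hypothetical r = 0.6
ice, gas = freeze_partition(1.0, 0.6)
print(ice, gas)
```

With r = 1 a highly soluble gas stays in the precipitating/lofted ice; with r = 0 it is fully returned to the gas phase at the freezing level, which is why r strongly shapes the simulated vertical transport.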

Relevance:

30.00%

Publisher:

Abstract:

In recent years, Web applications have been playing an increasingly important role in everyone's life. While until a few years ago we were used to working almost exclusively with "native" applications, executed entirely on our personal computer, today many users use their various devices almost exclusively to access Web applications. Web applications have made it possible to create the so-called social networks, such as Facebook, which is enjoying enormous success worldwide and has revolutionized the way many people communicate. Moreover, many traditional applications, such as office suites, have been turned into Web applications such as Google Docs, which add, for example, the possibility of having several people work on the same document at the same time. Web applications are thus assuming an ever more important role, and consequently it is becoming essential to be able to create Web applications capable of competing with native applications, i.e. able to carry out all the tasks that have traditionally been performed by computers. This thesis therefore analyzes the various ways in which Web applications can be improved, both in terms of the functions they can perform and in terms of scalability. Since modern Web applications increasingly need to perform computations concurrently and in a distributed way, we analyze a computational model particularly well suited to designing this kind of software: the Actor model. We then examine, as a case study of a framework for building advanced Web applications, the Play framework: it is based on the Akka Actor-programming platform and makes it possible to build extremely powerful and scalable Web applications in a simple way.
Since modern Web applications must be designed from the outset to meet certain scalability and fault-tolerance requirements, we address the problem of building Web applications ready to run on Cloud Computing platforms. In particular, we show how to publish a Web application based on the Play framework on Heroku, a PaaS Cloud Computing service.
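Play builds on Akka's Scala/Java actors; purely as an illustration of the Actor model idea (an asynchronous mailbox processed by one thread at a time, so the actor's private state needs no locks), a minimal sketch in Python might look like:

```python
import queue
import threading

class Actor:
    """Minimal actor: a mailbox plus one thread that processes messages
    sequentially, so the actor's state is never accessed concurrently."""
    def __init__(self):
        self.mailbox = queue.Queue()
        self.count = 0                 # private state, touched only by the actor thread
        self.done = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)          # asynchronous, non-blocking send

    def _run(self):
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                self.done.set()
                return
            self.count += 1            # sequential processing: no data races

counter = Actor()
for _ in range(100):
    counter.send("inc")
counter.send("stop")
counter.done.wait()
print(counter.count)  # prints 100
```

Akka adds supervision, location transparency and clustering on top of this basic message-passing discipline, which is what makes actor-based Web applications scale out naturally.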

Relevance:

30.00%

Publisher:

Abstract:

The first chapter studies the new technology of Cloud Computing, providing a simple analysis of its main characteristics, the actors involved, and the related deployment models and services offered. The second chapter introduces the notion of coordination as a service, discussing the abstractions that make up its logical architecture. It then considers the TuCSoN coordination model, defining what is meant by node, agent, tuple centre, and agent coordination context, and analyzes the coordination language through which they interact. In the third chapter, the previously acquired notions of TuCSoN are revised and extended in the context of Cloud Computing, and an abstract model and a possible architecture of TuCSoN in the Cloud are provided. Aspects of a possible service of this kind in a pay-per-use scenario are also analyzed. Finally, the fourth and last chapter develops a case study in which an interface for the current TuCSoN CLI was implemented as an applet, which was then deployed to the Cloud through the Cloudify PaaS platform.
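TuCSoN's tuple centres extend Linda-style tuple spaces, whose core coordination primitives (out to insert, rd to read, in to consume) can be sketched as follows. This toy version is a plain tuple space, without the ReSpecT programmability of real TuCSoN tuple centres:

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple space: out() inserts a tuple, in_() blocks
    until a tuple matching the template exists and removes it."""
    def __init__(self):
        self.tuples = []
        self.cond = threading.Condition()

    def out(self, tup):
        with self.cond:
            self.tuples.append(tup)
            self.cond.notify_all()     # wake agents blocked on in_()

    def _match(self, template):
        for t in self.tuples:
            if len(t) == len(template) and all(
                    p is None or p == v for p, v in zip(template, t)):
                return t
        return None

    def in_(self, template):
        with self.cond:
            while (t := self._match(template)) is None:
                self.cond.wait()       # suspend until a matching out() occurs
            self.tuples.remove(t)
            return t

space = TupleSpace()
space.out(("temperature", 21))
print(space.in_(("temperature", None)))  # → ('temperature', 21)
```

Agents coordinate only through the shared space (generative communication), which is what makes the model a natural fit for a coordination-as-a-service offering in the Cloud.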

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to explore the new dualism between situated computing and computing as a purely immaterial service, observed in the strengthening of two apparently antithetical paradigms such as Cloud Computing and Pervasive Computing. The goal is thus to show that the two paradigms are complementary, and possibly to develop a model and a methodological approach for distributed systems that suitably exploits the characteristics of both. To this end, the TuCSoN model with the ReSpecT language is used as a case study, suitably combining Situated ReSpecT with the Coordination as a Service (CaaS) model expressed by TuCSoN on Cloud.

Relevance:

30.00%

Publisher:

Abstract:

This doctoral thesis is devoted to the study of the causal effects of maternal smoking on the delivery cost. The economic consequences of smoking in pregnancy have been studied fairly extensively in the USA, while very little is known in the European context. To identify the causal relation between different maternal smoking statuses and the delivery cost in the Emilia-Romagna region, two distinct methods were used. The first, geometric multidimensional, is mainly based on a multivariate approach and involves computing and testing the global imbalance, classifying cases in order to generate well-matched comparison groups, and then computing treatment effects. The second, structural modelling, refers to a general methodological account of model-building and model-testing. The main idea of this approach is to decompose the global mechanism into sub-mechanisms through a recursive decomposition of a multivariate distribution.
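The recursive decomposition idea can be illustrated on a toy joint distribution: the chain rule splits the global mechanism P(x, y) into the sub-mechanisms P(x) and P(y | x), which recompose it exactly. The numbers below are arbitrary, for illustration only:

```python
from itertools import product

# toy joint distribution P(x, y) over two binary variables
joint = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}

# recursive decomposition: P(x, y) = P(x) * P(y | x)
p_x = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
p_y_given_x = {(x, y): joint[(x, y)] / p_x[x]
               for x, y in product((0, 1), repeat=2)}

# the sub-mechanisms recompose the global mechanism exactly
for x, y in product((0, 1), repeat=2):
    assert abs(joint[(x, y)] - p_x[x] * p_y_given_x[(x, y)]) < 1e-12
print("decomposition verified")
```

In the thesis the same logic is applied to a larger multivariate distribution, with each conditional factor read as a sub-mechanism of the global one.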

Relevance:

30.00%

Publisher:

Abstract:

In distributed systems like clouds or service oriented frameworks, applications are typically assembled by deploying and connecting a large number of heterogeneous software components, spanning from fine-grained packages to coarse-grained complex services. The complexity of such systems requires a rich set of techniques and tools to support the automation of their deployment process. By relying on a formal model of components, a technique is devised for computing the sequence of actions allowing the deployment of a desired configuration. An efficient algorithm, working in polynomial time, is described and proven to be sound and complete. Finally, a prototype tool implementing the proposed algorithm has been developed. Experimental results support the adoption of this novel approach in real life scenarios.
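The thesis's planning algorithm works on a richer component model, but its core ingredient, namely ordering deployment actions so that every dependency is satisfied, in polynomial time, while detecting unsatisfiable (cyclic) configurations, can be sketched with Kahn's topological sort. The component names below are hypothetical:

```python
from collections import deque

def deployment_plan(requires):
    """Kahn's topological sort: `requires[c]` lists the components that
    must be deployed before c. Returns a valid deployment sequence, or
    None if the dependencies are cyclic (configuration not achievable)."""
    indeg = {c: len(deps) for c, deps in requires.items()}
    dependents = {c: [] for c in requires}
    for c, deps in requires.items():
        for d in deps:
            dependents[d].append(c)
    ready = deque(c for c, n in indeg.items() if n == 0)
    plan = []
    while ready:
        c = ready.popleft()
        plan.append(c)
        for e in dependents[c]:       # c is deployed: its dependents get closer to ready
            indeg[e] -= 1
            if indeg[e] == 0:
                ready.append(e)
    return plan if len(plan) == len(requires) else None

cfg = {"db": [], "backend": ["db"], "lb": ["backend"], "monitor": ["db", "backend"]}
print(deployment_plan(cfg))  # → ['db', 'backend', 'lb', 'monitor']
```

Each component and dependency is visited once, so the sketch runs in time linear in the size of the configuration, consistent with the polynomial bound claimed for the full algorithm.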

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with heterogeneous architectures in standard workstations. Heterogeneous architectures represent an appealing alternative to traditional supercomputers because they are based on commodity components fabricated in large quantities; hence their price-performance ratio is unparalleled in the world of high-performance computing (HPC). In particular, different aspects related to the performance and power consumption of heterogeneous architectures have been explored. The thesis initially focuses on an efficient implementation of a parallel application in which the execution time is dominated by a high number of floating-point instructions. It then touches on the central problem of efficient management of power peaks in heterogeneous computing systems. Finally, it discusses a memory-bounded problem, where the execution time is dominated by memory latency. Specifically, the following main contributions have been carried out. First, a novel framework for the design and analysis of solar fields for Central Receiver Systems (CRS) has been developed. The implementation, based on a desktop workstation equipped with multiple Graphics Processing Units (GPUs), is motivated by the need for an accurate and fast simulation environment for studying mirror imperfections and non-planar geometries. Secondly, a power-aware scheduling algorithm for heterogeneous CPU-GPU architectures, based on an efficient distribution of the computing workload to the resources, has been realized. The scheduler manages the resources of several computing nodes with a view to reducing the peak power. This work makes two main contributions: the approach reduces the supply cost due to high peak power while having negligible impact on the parallelism of the computational nodes; moreover, the developed model allows designers to increase the number of cores without increasing the capacity of the power supply unit.
Finally, an implementation for efficient graph exploration on reconfigurable architectures is presented. The purpose is to accelerate graph exploration, reducing the number of random memory accesses.
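Graph exploration is typically formulated as a level-synchronous breadth-first search, whose irregular neighbor-list reads are precisely the random memory accesses that dominate execution time. A generic sketch (not the thesis's reconfigurable-hardware implementation) is:

```python
def bfs_levels(adj, source):
    """Level-synchronous BFS: expand the whole frontier at each step.
    Returns the distance (level) of every reachable vertex from `source`.
    The irregular reads of adj[u] are the random memory accesses that
    dominate graph-exploration runtime on conventional memory systems."""
    level = {source: 0}
    frontier = [source]
    d = 0
    while frontier:
        d += 1
        next_frontier = []
        for u in frontier:
            for v in adj[u]:           # irregular, data-dependent access pattern
                if v not in level:
                    level[v] = d
                    next_frontier.append(v)
        frontier = next_frontier
    return level

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(bfs_levels(adj, 0))  # → {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

Hardware accelerators for graph exploration typically attack exactly this loop, reorganizing frontier expansion so that fewer of these reads hit slow, random locations in memory.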