919 results for Drop on Demand


Relevance:

80.00%

Publisher:

Abstract:

The term "cloud" originates from the telecommunications world, when providers began to offer services based on virtual private networks (VPNs) for data communication. Cloud computing concerns computation, software, data access, and storage services delivered in such a way that the end user has no knowledge of the physical location of the data or of the configuration of the system on which it resides. Cloud computing is a recent trend in IT that moves computation and data away from desktops and laptops into large data centers. The NIST definition states that cloud computing is a model enabling on-demand network access to a shared pool of computing resources that can be rapidly provisioned and released with minimal management effort and minimal interaction with the service provider. With the large-scale spread of the Internet worldwide, applications can now be delivered as services over the Internet; as a result, the overall cost of these services is reduced. The main goal of cloud computing is to make better use of distributed resources, combining them to achieve higher throughput and to solve large-scale computational problems. Companies that rely on cloud services save on infrastructure costs and on the maintenance of computing resources, since they transfer these responsibilities to the provider; in this way, they can focus exclusively on their core business. As cloud computing becomes more popular, concerns are being raised about the security problems introduced by this new model. The characteristics of this deployment model differ widely from those of traditional architectures, and traditional security mechanisms turn out to be inefficient or useless.
Cloud computing offers many benefits but is also more vulnerable to threats. There are many challenges and risks in cloud computing that increase the threat of data compromise. These concerns make companies reluctant to adopt cloud computing solutions, slowing their spread. In recent years much effort has gone into research on the security of cloud environments, the classification of threats, and risk analysis; unfortunately, cloud problems span several levels and no single solution exists. After a brief introduction to cloud computing in general, the goal of this thesis is to provide an overview of the main vulnerabilities of the cloud model based on its characteristics, and then to perform a risk analysis of cloud adoption from the customer's point of view. By weighing risks against opportunities in this way, a customer can decide whether to adopt a cloud solution. Finally, a framework is presented that addresses one particular problem: malicious traffic on the cloud network. The thesis is structured as follows: the first chapter gives an overview of cloud computing, highlighting its characteristics, architecture, service models, deployment models, and open issues. The second chapter introduces computer security in general and then focuses on security in the cloud computing model. The vulnerabilities arising from the technologies and characteristics that define the cloud are considered, followed by a risk analysis. The risks are of different kinds, from purely technological ones to those arising from legal or administrative issues, up to risks that are not cloud-specific but still affect it.
For each risk, the assets affected in case of attack are listed and a risk level is assigned, ranging from low to very high. Each risk must be weighed against the opportunities offered by the aspect from which it arises. The last chapter presents a framework for protecting the cloud's internal network by deploying an Intrusion Detection System with pattern recognition and anomaly detection.
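The anomaly-detection side of such an IDS can be illustrated with a minimal sketch (this is not the thesis's framework; the traffic figures, the baseline, and the 3-sigma threshold are invented for illustration): flag traffic samples whose volume deviates from a learned baseline by more than k standard deviations.

```python
import statistics

def find_anomalies(samples, baseline, k=3.0):
    """Flag samples deviating more than k standard deviations from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return [s for s in samples if abs(s - mean) > k * stdev]

# Baseline of packets-per-second observed during normal operation (illustrative values).
baseline = [100, 105, 98, 102, 99, 101, 103, 97]
# New observations; 450 stands for a sudden traffic spike.
observed = [104, 99, 450, 101]
print(find_anomalies(observed, baseline))  # -> [450]
```

A real deployment would pair this statistical test with the pattern-recognition (signature-matching) component mentioned above, since each technique catches attacks the other misses.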

Relevance:

80.00%

Publisher:

Abstract:

This thesis deals with the synthesis and conformational analysis of hybrid foldamers containing the 4-carboxyoxazolidin-2-one unit or related molecules, in which an imide-type function is obtained by coupling the nitrogen of the heterocycle with the carboxylic acid moiety of the next unit. The imide group is characterized by a nitrogen atom connected to an endocyclic and an exocyclic carbonyl, which always tend to adopt the trans conformation. As a consequence of this locally constrained disposition, these imide-type oligomers are forced to fold into ordered conformations. The synthetic approach is highly tunable, with endless variations, so simply by changing the design and the synthesis, a wide variety of foldamers with the required properties may be prepared “on demand”. Thus a wide variety of unusual secondary structures and interesting supramolecular materials may be obtained with hybrid foldamers. The solid-state behaviour of some of these compounds has been analyzed in detail, showing the formation of different kinds of supramolecular materials that may be used for several applications. A winning example is the production of bolaamphiphilic gelators that may also be doped with small amounts of dansyl-containing compounds, needed to show cellular uptake into IGROV-1 cells by confocal laser scanning microscopy. These gels are readily internalized by cells and are biologically inactive, making them very good candidates in the promising field of drug delivery. In the last part of the thesis, particular attention was directed to the search for new scaffolds that behave as constrained amino acid mimetics, showing that tetramic acid derivatives could be good candidates for the synthesis and application of molecules having an ordered secondary structure.

Relevance:

80.00%

Publisher:

Abstract:

Transarterial chemoembolization (TACE) is one of the most widely used locoregional treatments for hepatocellular carcinoma (HCC). To date, however, several important controversies about its use remain unresolved. This thesis analyzes some of the main subjects of debate: (1) the indication for treatment, (2) multiple treatments and retreatment schedules, and (3) the treatment of patients eligible for liver transplantation. To this end, three studies addressing these topics are reported. TACE is commonly performed in patients outside the recommendations of the guidelines, including patients with a single nodule, patients with portal vein thrombosis, and patients with impaired performance status (PS). Study 1 showed that TACE can be considered a valid therapeutic option in patients with a single HCC who are not eligible for curative treatments, and that non-neoplastic portal vein thrombosis and a mild impairment of performance status (PS-1), presumably related to cirrhosis, have no impact on post-treatment survival. Multiple chemoembolization treatments are frequently performed, but no optimal number of TACE retreatments has been established to date. Study 2 showed that TACE performed “on demand” can be effectively repeated in patients without functional decompensation who are not eligible for curative treatments, even though only a small proportion of selected patients can undergo multiple treatment cycles. TACE is frequently used in patients on the waiting list for liver transplantation, but there is no evidence of the efficacy of repeated treatments in these patients. Study 3 showed that the number of TACE treatments is not significantly associated with tumor necrosis, recurrence, or post-transplant survival.
A waiting time before transplantation of ≤6 months, on the other hand, proved to be an independent predictor of recurrence, reflecting the possibly greater tumor aggressiveness in this class of patients.

Relevance:

80.00%

Publisher:

Abstract:

During the last years great effort has been devoted to the fabrication of superhydrophobic surfaces because of their self-cleaning properties. A water drop on a superhydrophobic surface rolls off even at inclinations of only a few degrees while taking up contaminants encountered on its way.
Superhydrophobic, self-cleaning coatings are desirable for convenient and cost-effective maintenance of a variety of surfaces. Ideally, such coatings should be easy to make and apply, mechanically resistant, and long-term stable. None of the existing methods has yet mastered the challenge of meeting all of these criteria.
Superhydrophobicity is associated with surface roughness. The lotus leaf, with its dual-scale roughness, is one of the most efficient examples of a superhydrophobic surface. This thesis proposes a novel technique for preparing superhydrophobic surfaces that introduces the two length scales of roughness by growing silica particles (~100 nm in diameter) onto micrometer-sized polystyrene particles using the well-established Stöber synthesis. Mechanical resistance is conferred to the resulting “raspberries” by synthesizing a thin silica shell on their surface. Besides being easy to make and handle, these particles offer possibilities for improving suitability for technical applications: since they disperse in water, multilayers can be prepared on substrates by simple drop casting, even on surfaces with grooves and slots. The solution to the main problem, stabilizing the multilayer, also lies in the design of the particles: the shells, although mechanically stable, are porous enough to allow leakage of polystyrene from the core. Under tetrahydrofuran vapor, polystyrene bridges form between the particles that render the multilayer film stable.
Multilayers are good candidates for designing surfaces whose roughness is preserved after a scratch. If the top-most layer is removed, the roughness can still be ensured by the underlying layer. After hydrophobization by chemical vapor deposition (CVD) of a semi-fluorinated silane, the surfaces are superhydrophobic with a tilting angle of a few degrees.

Relevance:

80.00%

Publisher:

Abstract:

Chapter 1 studies how consumers’ switching costs affect the pricing and profits of firms competing in two-sided markets, such as Apple and Google in the smartphone market. When two-sided markets are dynamic rather than merely static, I show that switching costs lower the first-period price if network externalities are strong, in contrast to what has been found in one-sided markets. If network externalities are weak and consumers are more patient than the platforms, however, switching costs soften price competition in the initial period. Moreover, an increase in switching costs on one side decreases the first-period price on the other side. Chapter 2 examines firms’ incentives to invest in local and flexible resources when demand is uncertain and correlated. I find that the market power of the monopolist providing flexible resources distorts investment incentives, while competition mitigates the distortion. The extent of the improvement depends critically on demand correlation and the cost of capacity: under the social optimum and under monopoly, the relationship between investment and correlation is positive if the flexible resource is cheap and negative if it is costly; under duopoly, the relationship is positive. The analysis also sheds light on some policy discussions in markets such as cloud computing. Chapter 3 develops a theory of sequential investments in cybersecurity. The regulator can use safety standards and liability rules to increase security. I show that the joint use of an optimal standard and a full liability rule leads to underinvestment ex ante and overinvestment ex post. Switching to a partial liability rule instead can correct the inefficiencies. This suggests that to improve security, the regulator should encourage not only firms but also consumers to invest in security.

Relevance:

80.00%

Publisher:

Abstract:

The cannabinoid type 1 (CB1) receptor is involved in a plethora of physiological functions and is heterogeneously expressed in different neuronal populations. Several conditional loss-of-function studies revealed distinct effects of CB1 receptor signaling in glutamatergic and GABAergic neurons, respectively. To gain a comprehensive picture of CB1 receptor-mediated effects, the present study aimed at developing a gain-of-function approach that complements conditional loss-of-function studies. To this end, adeno-associated virus (AAV)-mediated gene delivery and Cre-mediated recombination were combined into an innovative method that ensures region- and cell type-specific transgene expression in the brain. This method was used to overexpress the CB1 receptor in glutamatergic pyramidal neurons of the mouse hippocampus. Enhanced CB1 receptor activity at glutamatergic terminals impaired hippocampus-dependent memory performance. On the other hand, elevated CB1 receptor levels provided increased protection against kainic acid-induced seizures and against excitotoxic neuronal cell death. This finding indicates that the CB1 receptor on hippocampal glutamatergic terminals acts as a molecular gatekeeper controlling excessive neuronal network activity. Hence, the CB1 receptor on glutamatergic hippocampal neurons may represent a target for novel agents to restrain excitotoxic events and to treat neurodegenerative diseases. Endocannabinoid-synthesizing and -degrading enzymes tightly regulate endocannabinoid signaling and thus represent a promising therapeutic target. To further elucidate the precise function of the 2-AG-degrading enzyme monoacylglycerol lipase (MAGL), MAGL was overexpressed specifically in hippocampal pyramidal neurons. This genetic modification resulted in highly increased MAGL activity accompanied by a 50% decrease in 2-AG levels, without affecting the content of arachidonic acid and anandamide.
Elevated MAGL protein levels at glutamatergic terminals eliminated depolarization-induced suppression of excitation (DSE), while depolarization-induced suppression of inhibition (DSI) was unchanged. This result indicates that the on-demand availability of the endocannabinoid 2-AG is crucial for short-term plasticity at glutamatergic synapses in the hippocampus. Mice overexpressing MAGL exhibited elevated corticosterone levels under basal conditions and an increase in anxiety-like behavior but, surprisingly, showed no changes in aversive memory formation or in seizure susceptibility. This finding suggests that 2-AG-mediated hippocampal DSE is essential for adapting to aversive situations, but is not required to form aversive memories or to protect against kainic acid-induced seizures. Thus, specific inhibition of MAGL expressed in hippocampal pyramidal neurons may represent a potential treatment strategy for anxiety and stress disorders. Finally, the method of AAV-mediated cell type-specific transgene expression was advanced to allow drug-inducible and reversible transgene expression. To this end, elements of the tetracycline-controlled gene expression system were incorporated into our “conditional” AAV vector. This approach showed that transgene expression is switched on after drug application and that background activity in the uninduced state was detectable only in scattered cells of the hippocampus. Thus, this AAV vector will prove useful for future research applications and gene therapy approaches.

Relevance:

80.00%

Publisher:

Abstract:

Cloud services are becoming ever more important in everyone's life. Cloud storage? Web mail? We don't need to work in big IT companies to be surrounded by cloud services. Another thing growing in importance, or that should at least be considered ever more important, is privacy. The more we rely on services about which we know close to nothing, the more we should worry about our privacy. In this work, I analyze a prototype software system based on a peer-to-peer architecture for offering cloud services, to see whether it is possible to make it completely anonymous: not only will its users be anonymous, but the peers composing it will not know each other's real identities. To make this possible, I use anonymizing networks such as Tor. I start by studying the state of the art of cloud computing, looking at some real examples, and then analyze the architecture of the prototype, highlighting the differences between its distributed nature and the somewhat centralized solutions offered by the well-known vendors. After that, I go as deep as possible into the working principles of anonymizing networks, because they are not something that can simply be 'applied' mindlessly: some de-anonymization techniques are very subtle, so things must be studied carefully. I then implement the required changes and test the new anonymized prototype to see how its performance differs from that of the standard one. The prototype is run on many machines, orchestrated by a tester script that automatically starts and stops them and performs all the required API calls. As for where to find all these machines, I make use of Amazon EC2 and its on-demand instances.
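The control flow of such a tester script can be sketched as follows. This is only an illustration of the start/measure/stop pattern: the stub lambdas stand in for the real EC2 instance management and prototype API calls (which the abstract does not detail), and all names here are hypothetical.

```python
import time

def run_benchmark(start, stop, api_calls):
    """Start the machines, time every API call, then always stop the machines."""
    start()
    results = []
    try:
        for name, call in api_calls:
            t0 = time.perf_counter()
            call()
            results.append((name, time.perf_counter() - t0))
    finally:
        stop()  # always release the (paid) on-demand instances
    return results

# Stub hooks standing in for EC2 instance management and prototype API calls.
log = []
timings = run_benchmark(
    start=lambda: log.append("instances started"),
    stop=lambda: log.append("instances stopped"),
    api_calls=[("upload", lambda: log.append("upload called")),
               ("download", lambda: log.append("download called"))],
)
print(log)
# -> ['instances started', 'upload called', 'download called', 'instances stopped']
```

Running the same call list against the standard and the Tor-anonymized prototype and comparing the collected timings is exactly the performance comparison the abstract describes.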

Relevance:

80.00%

Publisher:

Abstract:

Our challenge was to develop a device that could combine different functionalities, from telepresence to on-demand data viewing, and bring innovation to the current landscape. We therefore decided to create a device able to perform inspection and monitoring tasks, focusing during the implementation on a few possible fields of use. The system we built is open-source, modular, and dynamic, able to meet different needs and easily adaptable. The prototype we designed can communicate with a smartphone, through which it is driven by the primary user, and can transmit over the network the data collected by its various integrated sensors. The generated information can be managed through an online platform: the device uses the Cloud to store the data over time, making it potentially accessible to anyone. For the hardware configuration we used the Pi2Go kit-board and the Raspberry Pi platform, to which we added a camera and some proximity, temperature and humidity, and distance sensors. The result is the InspectorPi prototype, a vehicle remotely controlled from a mobile device, able to explore hostile environments presenting physical or environmental difficulties.
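The sensor-to-Cloud path described above can be sketched as a minimal telemetry record builder. The abstract does not specify the prototype's data format, so the device identifier, field names, and readings below are invented for illustration; a real deployment would POST such records to the online platform.

```python
import json
import time

def make_telemetry(device_id, readings):
    """Package a set of sensor readings as a JSON record for cloud upload."""
    return json.dumps({
        "device": device_id,       # hypothetical device identifier
        "timestamp": int(time.time()),
        "readings": readings,      # sensor name -> latest value
    })

# Illustrative values standing in for real temperature/humidity/distance reads.
payload = make_telemetry("inspectorpi-01", {
    "temperature_c": 21.5,
    "humidity_pct": 48.0,
    "distance_cm": 73.2,
})
record = json.loads(payload)
print(record["device"], sorted(record["readings"]))
```

Keeping each record self-describing (device, timestamp, named readings) is what makes the historical Cloud storage queryable by anyone the data is shared with.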

Relevance:

80.00%

Publisher:

Abstract:

Data gathering, either for event recognition or for monitoring applications, is the primary intention of sensor network deployments. In many cases, data is acquired periodically and autonomously, and simply logged onto secondary storage (e.g. flash memory), either for delayed offline analysis or for on-demand burst transfer. Moreover, operational data such as connectivity information and node and network state is typically kept as well. Naturally, measurement and/or connectivity logging comes at a cost, and the space for doing so is limited. Finding a good representative model for the data and providing clever coding of the information, i.e. data compression, may be a means of putting the available space to its best use. In this paper, we explore the design space of data compression for wireless sensor and mesh networks by profiling common, publicly available algorithms. Several goals, such as low overhead in terms of memory use and compression time as well as a decent compression ratio, have to be well balanced in order to find a simple yet effective compression scheme.
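The kind of profiling described above can be sketched as a toy comparison (this does not reproduce the paper's algorithm set or workloads; it uses general-purpose codecs from the Python standard library on synthetic periodic readings as stand-ins):

```python
import bz2
import lzma
import time
import zlib

# Synthetic "sensor log": periodic readings are highly repetitive, like real logs.
data = b"".join(b"t=%03d temp=21.%d\n" % (i, i % 4) for i in range(2000))

def profile(name, compress):
    """Measure compression ratio and wall-clock time for one codec."""
    t0 = time.perf_counter()
    out = compress(data)
    elapsed = time.perf_counter() - t0
    return name, len(data) / len(out), elapsed

results = [profile("zlib", zlib.compress),
           profile("bz2", bz2.compress),
           profile("lzma", lzma.compress)]
for name, ratio, elapsed in results:
    print(f"{name}: ratio {ratio:.1f}x in {elapsed * 1000:.1f} ms")
```

On a sensor node the trade-off is the same but much tighter: the codec's RAM footprint and CPU time compete directly with the sensing application, which is why the paper weighs ratio against memory and time rather than ratio alone.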

Relevance:

80.00%

Publisher:

Abstract:

Using multicast communication in Wireless Sensor Networks (WSNs) is an efficient way to disseminate the same data from one sender to multiple receivers, e.g., transmitting code updates to a group of sensor nodes. Due to the nature of code-update traffic, a multicast protocol has to support bulky traffic and end-to-end reliability. We are interested in an energy-efficient multicast protocol because of the limited resources of wireless sensor nodes. Current data dissemination schemes do not fulfill the above requirements. To close the gap, we designed and implemented the SNOMC (Sensor Node Overlay Multicast) protocol. It is an overlay multicast protocol that supports reliable, time-efficient, and energy-efficient dissemination of bulky data from one sender to many receivers. To ensure end-to-end reliability, SNOMC uses a NACK-based reliability mechanism with different caching strategies.
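The NACK-based reliability idea can be illustrated with a minimal simulation (the chunking, sequence numbers, and lossy first round below are invented for illustration; SNOMC's real packet format, timers, and caching strategies are more involved): the receiver names the sequence numbers it is missing, and only those are retransmitted.

```python
def transmit(chunks, drop):
    """Deliver every chunk except those whose sequence number is in `drop`."""
    return {seq: data for seq, data in enumerate(chunks) if seq not in drop}

def nack_loop(chunks, drop_first_round):
    """Run NACK rounds until the receiver holds every chunk of the bulk transfer."""
    received = transmit(chunks, drop_first_round)
    rounds = 0
    while len(received) < len(chunks):
        # Receiver lists the missing sequence numbers in a NACK...
        nack = [seq for seq in range(len(chunks)) if seq not in received]
        # ...and the sender (or a caching intermediate node) resends only those.
        for seq in nack:
            received[seq] = chunks[seq]
        rounds += 1
    return b"".join(received[seq] for seq in sorted(received)), rounds

chunks = [b"code-", b"update-", b"block-", b"7!"]
data, rounds = nack_loop(chunks, drop_first_round={1, 3})
print(data, rounds)  # -> b'code-update-block-7!' 1
```

Negative acknowledgements suit bulky sensor-network traffic because, unlike per-packet ACKs, they cost transmissions (and thus energy) only when something is actually lost; caching NACKed chunks at intermediate nodes shortens the retransmission path further.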
