884 resultados para cloud-based applications


Relevância: 80.00%

Resumo:

This work proposes a solution comprising an automatic cloud-based speech recognition system. Thus, there is no need for a recognizer running on the client machine itself, since it is available over the Internet. Besides cloud-based automatic speech recognition, the other strand of this work is high availability. This topic matters because the server environment where the cloud recognition is to run cannot become unavailable to the user. Of the various aspects that require robustness, such as the Internet connection itself, the scope of this work was defined as the free software that allows companies to increase the availability of their services. Among the results achieved, and for the simulated conditions, the cloud speech recognizer developed by the group was shown to reach performance close to Google's.
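High availability of the recognition servers, as discussed above, is typically achieved with heartbeat-monitored failover. A minimal sketch of the idea, with invented node names, timestamps and timeout (this is not the specific free software evaluated in the work):

```python
# Toy active/passive failover selection: the first node with a fresh
# heartbeat is elected active. Names and thresholds are illustrative only.

def elect_active(nodes, heartbeats, now, timeout=5.0):
    """Return the first node whose last heartbeat is within `timeout` seconds."""
    for node in nodes:
        if now - heartbeats.get(node, float("-inf")) <= timeout:
            return node
    return None  # no node alive

nodes = ["asr-primary", "asr-backup"]
heartbeats = {"asr-primary": 90.0, "asr-backup": 99.0}

# Primary's heartbeat is 10 s old (stale), so the backup takes over.
print(elect_active(nodes, heartbeats, now=100.0))   # → asr-backup
```

Real deployments delegate this logic to cluster managers rather than reimplementing it, but the election rule is the core of the mechanism.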

Relevância: 80.00%

Resumo:

Scintillations are rapid fluctuations in the phase and amplitude of transionospheric radio signals which are caused by small-scale plasma density irregularities in the ionosphere. In the case of Global Navigation Satellite System (GNSS) receivers, scintillation can cause cycle slips, degrade the positioning accuracy and, when severe enough, can even lead to a complete loss of signal lock. Thus, the required levels of availability, accuracy, integrity and reliability for GNSS applications may not be met during scintillation occurrence; this poses a major threat to a large number of modern-day GNSS-based applications. The whole of Latin America, Brazil in particular, is located in one of the regions most affected by scintillations. These effects will be exacerbated during solar maxima, the next of which is predicted for 2013. This paper presents initial results from research aimed at tackling ionospheric scintillation effects for GNSS users in Latin America. This research is part of the CIGALA (Concept for Ionospheric Scintillation Mitigation for Professional GNSS in Latin America) project, co-funded by the EC Seventh Framework Programme and supervised by the GNSS Supervisory Authority (GSA), which aims to develop and test ionospheric scintillation countermeasures to be implemented in multi-frequency, multi-constellation GNSS receivers.

Relevância: 80.00%

Resumo:

This work, entitled Websislapam: People Rating System Based on Web Technologies, allows the creation of questionnaires and the organization of the entities and people who participate in evaluations. Entities collect data from people with the help of resources that reduce typing mistakes. Websislapam maintains a database and provides graphical reports that enable the analysis of those evaluated. The system was developed with Web technologies such as PHP, JavaScript and CSS, following the object-oriented programming paradigm and using the MySQL DBMS. As a theoretical basis, research was carried out in the areas of database systems, Web technologies and Web engineering, covering the evaluation process, Web-based systems and applications, Web engineering and database systems. The technologies applied in the implementation of Websislapam are described; a separate chapter presents the main features and artifacts used in its development, and a case study demonstrates the practical use of the system.

Relevância: 80.00%

Resumo:

[EN] The use of new technologies to step up the interaction between humans and machines is the main proof that faces are important in videos. We therefore propose a novel face video database for the development, testing and verification of algorithms related to face-based applications and to facial recognition applications. In addition to facial expression videos, the database includes body videos. The videos are taken by three different cameras, working in real time, without varying illumination conditions.

Relevância: 80.00%

Resumo:

Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology can provide the required level of uncoupling among system components. This is the main motivation behind current research trends in the area of coordination middleware to exploit tuple-based coordination models in the engineering of complex software systems, since they intrinsically provide coordinated components with communication uncoupling (further details can be found in the references therein). An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most of the activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated.
Handling knowledge in tuple-based systems induces problems in terms of syntax (e.g., two tuples containing the same data may not match due to differences in the tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because of a different syntax adopted). Until now, the problem has been faced by exploiting tuple-based coordination within a middleware for knowledge-intensive environments, e.g., experiments with tuple-based coordination within a Semantic Web middleware (analogous approaches are surveyed in the references). However, such solutions appear to be designed to tackle the design of coordination for specific application contexts like the Semantic Web and Semantic Web Services, and they result in a rather involved extension of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space, where the behaviour of the tuple space can be programmed so as to react to interaction events. By encapsulating coordination laws within coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, although supporting semantic reasoning, keeps the simplicity of tuples and tuple matching as far as possible. By encapsulating the semantic representation of the domain of discourse within coordination media, semantic tuple centres promote semantic uncoupling among coordinated components.
The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres seem to be suitable as coordination media.
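The programmable tuple space ("tuple centre") abstraction described above can be sketched in a few lines. This is a hypothetical toy illustration, not the thesis's actual implementation: a Linda-like space whose insertions trigger programmed reactions.

```python
# Minimal sketch of a tuple centre: a Linda-style tuple space extended
# with reactions fired on each insertion ("out") event.

class TupleCentre:
    def __init__(self):
        self.tuples = []
        self.reactions = []          # coordination laws fired on "out"

    def add_reaction(self, reaction):
        """Register a coordination law triggered on tuple insertion."""
        self.reactions.append(reaction)

    def out(self, tup):
        """Insert a tuple, then fire the programmed reactions."""
        self.tuples.append(tup)
        for reaction in self.reactions:
            reaction(self, tup)

    def rd(self, template):
        """Non-destructive associative read: None fields are wildcards."""
        for tup in self.tuples:
            if len(tup) == len(template) and all(
                t is None or t == v for t, v in zip(template, tup)
            ):
                return tup
        return None

# Example reaction: keep a running count of "reading" tuples.
def count_readings(centre, tup):
    if tup[0] == "reading":
        old = centre.rd(("count", None))
        if old:
            centre.tuples.remove(old)
            centre.tuples.append(("count", old[1] + 1))
        else:
            centre.tuples.append(("count", 1))

tc = TupleCentre()
tc.add_reaction(count_readings)
tc.out(("reading", 21.5))
tc.out(("reading", 22.0))
print(tc.rd(("count", None)))   # → ('count', 2)
```

The point of the model is visible even in this sketch: the counting law lives inside the medium, so the components doing `out` stay uncoupled from it.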

Relevância: 80.00%

Resumo:

Bioinformatics has, in the last few decades, played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem of knowing as much as possible about its coding regions is crucial. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. The present thesis describes a non-hierarchical sequence clustering method for automatic large-scale protein annotation, called “The Bologna Annotation Resource Plus” (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) within clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer the three-dimensional structure (when a template is available), by way of cluster-specific HMM profiles that can be used to calculate reliable template-to-target alignments even for distantly related proteins (sequence identity < 30%). Other BAR+-based applications developed during my doctorate include the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment, BAR+ placed among the ten most accurate methods. At present, BAR+ is freely available as a web server for functional and structural protein sequence annotation at http://bar.biocomp.unibo.it/bar2.0.
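The transfer-by-homology idea within a cluster can be sketched roughly as follows. The simple support-fraction threshold here stands in for BAR+'s actual statistical validation, which the abstract does not detail; the cluster and terms are invented for illustration.

```python
# Sketch of annotation transfer inside a sequence cluster: a GO/Pfam term
# is transferred to all members only if enough annotated members carry it.

from collections import Counter

def transfer_annotations(cluster_terms, min_support=0.5):
    """cluster_terms: list of per-sequence term sets (empty = unannotated).
    Return the terms supported by at least `min_support` of annotated members."""
    annotated = [terms for terms in cluster_terms if terms]
    if not annotated:
        return set()
    counts = Counter(t for terms in annotated for t in set(terms))
    return {t for t, c in counts.items() if c / len(annotated) >= min_support}

# Cluster of 4 sequences: 3 annotated, 1 unannotated.
cluster = [
    {"GO:0005524", "PF00005"},   # ATP binding, ABC transporter domain
    {"GO:0005524"},
    {"GO:0005524", "PF00005"},
    set(),                       # unannotated member inherits the result
]
print(sorted(transfer_annotations(cluster)))
# → ['GO:0005524', 'PF00005']
```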

Relevância: 80.00%

Resumo:

Plasmons are electromagnetic modes in metallic structures in which the quasi-free electrons of the metal oscillate collectively. During the last decade the field of plasmonics underwent rapid development, based on steady progress in nanostructuring methods and spectroscopic techniques, which made systematic single-object studies of well-defined nanostructures possible. Besides a radiative enhancement of the optical scattering intensity in the far field, the excitation of plasmons results in a non-radiative enhancement of the field strength in the immediate vicinity of the metallic structure (near field), caused by the coherent accumulation of charge at the metal surface. The optical near field is therefore a key quantity both for the fundamental understanding of the action and interaction of plasmons and for the optimization of plasmon-based applications. The great challenge lies in the difficulty of experimental access to the near field, which has hindered the development of a fundamental understanding of it.

In this work, photoemission electron microscopy (PEEM) and microspectroscopy were used to determine, with spatial resolution, the properties of near-field-induced electron emission. The electrodynamic properties of the investigated systems were additionally determined with numerical calculations based on the finite integration technique and compared with the experimental results.

Ag disks with a diameter of 1 µm and a height of 50 nm were excited with fs laser radiation at a wavelength of 400 nm under various polarization states. The lateral distribution of the electrons emitted via a 2PPE process was recorded with the PEEM.

Comparison with the numerical calculations shows that the near field develops differently at different locations on the metallic structure. In particular, at the edge of the disk a near field with a finite z-component is induced under s-polarized excitation (vanishing vertical component of the electric field), whereas at the center of the disk the near field is always proportional to the incident electric field.

Furthermore, the near field of optically excited, strongly coupled plasmons was investigated spectrally (750-850 nm) for the first time and compared with the corresponding far-field spectra of identical nano-objects, obtained by measuring the spectral scattering characteristics of the individual objects with a dark-field confocal microscope. Au nanoparticles at sub-nanometer distance from an Au film (nanoparticle on plane, NPOP) served as a model system of strongly coupled plasmons. With this combination of complementary methods, the spectral separation of radiative and non-radiative modes of strongly coupled plasmons was demonstrated for the first time. This is of particular relevance for applications, since pure near-field modes have a long lifetime due to their suppressed radiative decay, so that their enhancement effect can be exploited for a particularly long time. The causes of the differences in the spectral behavior of far and near field were identified by numerical calculations: the near field of non-spherical NPOPs is strongly position-dependent owing to the complex oscillatory motion of the electrons within the gap between particle and film, and the near field of strongly coupled plasmons reacts much more sensitively to structural defects of the resonator than the far-field response. Finally, the electron emission mechanism was identified as an optical field emission process. To describe it, the Fowler-Nordheim theory of static field emission was modified for the case of harmonically oscillating fields.

Relevância: 80.00%

Resumo:

This thesis aims to provide an up-to-date analysis of the recent evolution of Cloud Computing and of the new architectural models supporting the continuously growing demand for computing, storage and network resources inside data centers, and then moves on to an experimental phase of single and concurrent live migrations of virtual machines, studying their performance in terms of application and network resources within the open-source virtualization platform QEMU-KVM, today at the basis of cloud-based systems such as OpenStack. The first chapter surveys the state of the art of Cloud Computing, its current limits and the prospects offered by a Cloud Federation model in the immediate future. The second chapter discusses in detail live migration techniques, a recent reference point for the international scientific community, and their possible optimizations in inter- and intra-data-center scenarios, with the intent of laying the theoretical basis for the in-depth study of the actual implementation of the migration process on the QEMU-KVM platform, which is addressed in the third chapter. In particular, the latter describes the architectural and operating principles of the hypervisor and defines the design model and the algorithm underlying the migration process. Finally, the fourth chapter presents the work carried out and the configuration and design choices made to create a testbed environment suitable for studying concurrent live migration sessions, and discusses the results of the performance measurements and the system behavior observed during the experiments.
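The iterative pre-copy phase that dominates live migration can be modeled very simply: memory pages are copied while the VM keeps running and dirtying pages, until the remaining dirty set is small enough for a short stop-and-copy. A toy model under invented page counts and thresholds (not QEMU-KVM's actual convergence logic):

```python
# Toy model of iterative pre-copy: each round sends the pages left over
# from the previous round; meanwhile the guest dirties new pages at
# `dirty_rate`. Convergence is reached when few pages remain.

def precopy_rounds(total_pages, dirty_rate, bandwidth, stop_threshold=64,
                   max_rounds=30):
    """Return the number of pre-copy rounds before stop-and-copy."""
    to_send = total_pages
    for rounds in range(1, max_rounds + 1):
        seconds = to_send / bandwidth           # time to send this round
        to_send = int(seconds * dirty_rate)     # pages dirtied meanwhile
        if to_send <= stop_threshold:
            return rounds                       # converged: short downtime
    return max_rounds                           # forced stop-and-copy

# 100k pages, 1k pages/s dirtied, 20k pages/s link capacity:
print(precopy_rounds(100_000, 1_000, 20_000))   # → 3
```

The model also shows why migration fails to converge when `dirty_rate` approaches `bandwidth`, which is what motivates the optimizations discussed in the second chapter.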

Relevância: 80.00%

Resumo:

Second-factor authentication systems allow better protection of users' digital identities. This work describes the analysis and design of the multi-factor authentication solution to be integrated into the University's systems, concluding with the development of the integration module between the Web Single Sign-On authentication service of the University of Bologna (ADFS 3.0) and the platform chosen as provider (Time4ID). The integration was carried out by programming an Authentication Provider, consisting of an integration library written in C#, able to interface with the cloud-based platform that verifies the second factor.
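A common way a second factor is verified is the time-based one-time password scheme (TOTP, RFC 6238). The sketch below is illustrative only: the abstract does not state which protocol Time4ID uses, and the shared secret here is invented.

```python
# Minimal TOTP (RFC 6238) verification sketch: HMAC-SHA1 over the current
# 30-second time step, dynamically truncated to a 6-digit code.

import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep=30, digits=6, at=None):
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return f"{code % 10 ** digits:0{digits}d}"

def verify(secret: bytes, submitted: str, at=None):
    """Constant-time comparison of the submitted code with the expected one."""
    return hmac.compare_digest(totp(secret, at=at), submitted)

secret = b"shared-secret"                            # invented example secret
code = totp(secret, at=59)                           # fixed timestamp: deterministic
print(verify(secret, code, at=59))                   # → True
```

Production verifiers also accept codes from adjacent time steps to tolerate clock drift; that windowing is omitted here for brevity.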

Relevância: 80.00%

Resumo:

Construction of a federated service landscape in the Ruhr region based on SAML, with the goal of enabling organization-spanning use of web-based IT services.

Relevância: 80.00%

Resumo:

The development of broadband Internet connections has fostered new audiovisual media services and opened new possibilities for accessing broadcasts. The Internet retransmission case of TVCatchup before the CJEU was the first case concerning new technologies in the light of Art. 3(1) of the Information Society Directive. On the other side of the Atlantic, the Aereo case reached the U.S. Supreme Court and challenged the interpretation of public performance rights. In both cases the recipients of the services could receive broadcast programs in ways alternative to traditional broadcasting channels, including terrestrial broadcasting or cable transmission. The Aereo case raised the debate on the possible impact of the interpretation of copyright law in the context of the development of new technologies, particularly cloud-based services, and it is interesting to see whether any similar problems occur in the EU. The "umbrella" in the title refers to Art. 8 WCT, which covers digital and Internet transmission and constitutes the background for the EU and the U.S. legal solutions. The article argues that no international standard for the qualification of the discussed services exists.

Relevância: 80.00%

Resumo:

Recently the telecommunications industry has benefited from infrastructure sharing, one of the most fundamental enablers of cloud computing, leading to the emergence of the Mobile Virtual Network Operator (MVNO) concept. The most momentous aims of this approach are the support of on-demand provisioning and the elasticity of virtualized mobile network components, based on data traffic load. To realize this, during operation and management procedures the virtualized services need to be triggered in order to scale up/down or scale out/in an instance. In this paper we propose an architecture called MOBaaS (Mobility and Bandwidth Availability Prediction as a Service), comprising two algorithms that predict user mobility and network link bandwidth availability; it can be implemented in a cloud-based mobile network structure and used as a support service by any other virtualized mobile network service. MOBaaS can provide prediction information in order to generate the triggers required for the on-demand deployment, provisioning and disposal of virtualized network components. This information can be used for self-adaptation procedures and for optimal network function configuration during run-time operation as well. Through preliminary experiments with a prototype implementation on the OpenStack platform, we evaluated and confirmed the feasibility and effectiveness of the prediction algorithms and the proposed architecture.
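The way a prediction service can drive scale-out/in triggers is easy to sketch: compare predicted load against current capacity and emit the corresponding action. The utilisation thresholds below are hypothetical, not MOBaaS's actual policy.

```python
# Sketch of a threshold-based scaling trigger driven by predicted load
# for a virtualized network component. Thresholds are illustrative only.

def scaling_trigger(predicted_load, capacity_per_instance, instances,
                    high=0.8, low=0.3):
    """Return 'scale-out', 'scale-in', or 'steady' for the predicted load."""
    utilisation = predicted_load / (capacity_per_instance * instances)
    if utilisation > high:
        return "scale-out"                       # deploy another instance
    if utilisation < low and instances > 1:
        return "scale-in"                        # dispose of an instance
    return "steady"

print(scaling_trigger(900, 500, 2))   # 0.9 utilisation → scale-out
print(scaling_trigger(200, 500, 2))   # 0.2 utilisation → scale-in
```

Because the load here is *predicted* rather than observed, the trigger can fire before congestion occurs, which is the point of prediction-as-a-service.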

Relevância: 80.00%

Resumo:

Chromatin, composed of repeating nucleosome units, is the genetic polymer of life. To aid in DNA compaction and organized storage, the double helix wraps around a core complex of histone proteins to form the nucleosome, and is therefore no longer freely accessible to cellular proteins for the processes of transcription, replication and DNA repair. Over the course of evolution, DNA-based applications have developed routes to access DNA bound up in chromatin, and further, have actually utilized the chromatin structure to create another level of complexity and information storage. The histone molecules that DNA surrounds have free-floating tails that extend out of the nucleosome. These tails are post-translationally modified to create docking sites for the proteins involved in transcription, replication and repair, thus providing one prominent way that specific genomic sequences are accessed and manipulated. Adding another degree of information storage, histone tail-modifications paint the genome in precise manners to influence a state of transcriptional activity or repression, to generate euchromatin, containing gene-dense regions, or heterochromatin, containing repeat sequences and low-density gene regions. The work presented here is the study of histone tail modifications, how they are written and how they are read, divided into two projects. Both begin with protein microarray experiments where we discover the protein domains that can bind modified histone tails, and how multiple tail modifications can influence this binding. Project one then looks deeper into the enzymes that lay down the tail modifications. Specifically, we studied histone-tail arginine methylation by PRMT6. We found that methylation of a specific histone residue by PRMT6, arginine 2 of H3, can antagonize the binding of protein domains to the H3 tail and therefore affect transcription of genes regulated by the H3-tail binding proteins. 
Project two focuses on a protein we identified to bind modified histone tails, PHF20, and was an endeavor to discover the biological role of this protein. Thus, in total, we are looking at a complete process: (1) histone tail modification by an enzyme (here, PRMT6), (2) how this and other modifications are bound by conserved protein domains, and (3) by using PHF20 as an example, the functional outcome of binding through investigating the biological role of a chromatin reader.

Relevância: 80.00%

Resumo:

The coagulation of milk is the fundamental process in cheese-making, based on gel formation as a consequence of physicochemical changes taking place in the casein micelles; monitoring the whole process of milk curd formation is a constant preoccupation for dairy researchers and cheese companies (Lagaude et al., 2004). In addition to advances in composition-based applications of near-infrared spectroscopy (NIRS), innovative uses of this technology are pursuing dynamic applications that show promise, especially in regard to tracking a sample in situ during food processing (Bock and Connelly, 2008). In this vein the literature describes cheese-making applications of NIRS for curd cutting time determination, which conclude that NIRS would be a suitable method for monitoring milk coagulation, as shown e.g. in the works published by Fagan et al. (Fagan et al., 2008; Fagan et al., 2007), based on the use of the commercial CoAguLite probe (with an LED at 880 nm and a photodetector for light reflectance detection).

Relevância: 80.00%

Resumo:

Some recent proposals of web-based applications are oriented to providing advanced search services through virtual shops. Within this context, this paper proposes an advanced type of software application that simulates how a sales assistant dialogues with a consumer to dynamically configure a product according to particular needs. The paper presents the general knowledge model that uses artificial intelligence and knowledge-based techniques to simulate the configuration process. Finally, the paper illustrates the description with an example of an application in the field of photography equipment.
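One dialogue step of such a configuration process can be sketched as filtering a catalogue against the consumer's stated needs. The catalogue entries, attributes and rules below are invented for illustration and are far simpler than the knowledge model the paper presents.

```python
# Toy configurator step in the spirit of the sales-assistant dialogue:
# each stated need prunes the candidate products. Data is invented.

CATALOGUE = [
    {"model": "compact-A", "type": "compact", "zoom": 5,  "price": 250},
    {"model": "dslr-B",    "type": "dslr",    "zoom": 10, "price": 900},
    {"model": "dslr-C",    "type": "dslr",    "zoom": 18, "price": 1400},
]

def configure(needs):
    """Filter the catalogue by the consumer's needs (attribute -> requirement),
    mimicking one question-and-answer step of the dialogue."""
    matches = CATALOGUE
    if "budget" in needs:
        matches = [p for p in matches if p["price"] <= needs["budget"]]
    if "min_zoom" in needs:
        matches = [p for p in matches if p["zoom"] >= needs["min_zoom"]]
    return [p["model"] for p in matches]

print(configure({"budget": 1000, "min_zoom": 8}))   # → ['dslr-B']
```

A knowledge-based system would derive such constraints from a declarative model and ask follow-up questions when several candidates remain, rather than hard-coding the filters.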