796 results for Web-based applications
Abstract:
Food webs have been used to understand the trophic relationships among organisms within an ecosystem; however, the extent to which sampling efficiency affects food web responses remains poorly understood. Moreover, long-term sampling data are lacking for many insect groups, especially regarding the interactions between herbivores and their host plants. In the first chapter, I describe a source food web based on the plant Senegalia tenuifolia by identifying the associated insect species and the interactions among them and with this host plant. Furthermore, I check the robustness of the data from each trophic level and propose a cost-efficient methodology. The results from this chapter show that the collected dataset and the methodology presented are a good tool for sampling most of the insect richness of a source food web. In total, the food web comprises 27 species belonging to four trophic levels. In the second chapter, I demonstrate the temporal variation in species richness and abundance at each trophic level, as well as the relationships among distinct trophic levels. Moreover, I investigate the diversity patterns of the second and third trophic levels by assessing the contribution of the alpha- and beta-diversity components across the years. This chapter shows that in our system parasitoid abundance is regulated by herbivore abundance, and that the species richness and abundances of the trophic levels vary over time. It also shows that alpha-diversity was the component that contributed most to herbivore species diversity (2nd trophic level), while the relative contributions of alpha- and beta-diversity changed over the years for parasitoid diversity (3rd level). Overall, this dissertation describes a source food web and brings insights into food web challenges related to the sampling effort needed to gather enough species from all trophic levels. It also discusses the relationships among communities associated with distinct trophic levels, their temporal variation, and their diversity patterns. Finally, this dissertation contributes to the world food web database and to the understanding of the interactions among trophic levels and of each trophic level's patterns across time and space.
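For reference, a common way to formalize the alpha/beta decomposition mentioned above is additive diversity partitioning (Lande 1996); the dissertation's exact formulation is not given in this abstract:

```latex
% Additive diversity partitioning: regional (gamma) diversity equals the
% mean within-sample (alpha) diversity plus the among-sample (beta) component.
\gamma = \bar{\alpha} + \beta
\qquad\Longrightarrow\qquad
\beta = \gamma - \frac{1}{N}\sum_{i=1}^{N} \alpha_i
```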
Abstract:
Graduate Program in Computer Science - IBILCE
Abstract:
Owing to their unique characteristics, optical sensor networks have found application in many fields, such as civil engineering, geotechnical engineering, aeronautics, energy, and the oil & gas industry. Monitoring solutions based on this technology have proven particularly cost-effective and can be applied to large structures, where hundreds of sensors must be deployed for long-term measurements of different mechanical and physical parameters. Sensors based on fiber Bragg gratings (FBGs) are the solution most commonly used in Structural Health Monitoring (SHM), and the measurements are performed by special instruments known as optical interrogators. Ever higher acquisition rates have become possible with recent optical interrogators, giving rise to a large volume of data whose handling, storage, management, and visualization may require special software applications. This work presents two real-time software applications developed for these purposes: Interrogator Abstraction (InterAB) and Web-based System (WbS). The innovations of this work include the integration, synchronization, independence, security, real-time processing and visualization, and data persistence provided by the two applications working together. Results obtained during tests in the laboratory and in a real environment demonstrated the efficiency, robustness, and flexibility of this software for different types of sensors and optical interrogators, guaranteeing atomicity, consistency, isolation, and durability of the data persisted by InterAB and presented by WbS.
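As background, a minimal Python sketch of the standard first-order FBG strain conversion that any such acquisition pipeline applies; the constants are typical silica-fiber textbook values, not parameters of InterAB or WbS:

```python
# Hedged sketch: convert a Bragg-wavelength shift to strain.
# The photoelastic coefficient is a typical silica-fiber textbook value,
# not an InterAB/WbS parameter.
PHOTOELASTIC_COEFF = 0.22  # p_e for silica fiber (approximate)

def wavelength_shift_to_strain(lambda_base_nm: float, lambda_meas_nm: float) -> float:
    """Return strain (dimensionless) from a measured FBG wavelength shift.

    Uses the first-order relation delta_lambda / lambda = (1 - p_e) * strain,
    valid for small, temperature-compensated shifts.
    """
    delta = lambda_meas_nm - lambda_base_nm
    return delta / (lambda_base_nm * (1.0 - PHOTOELASTIC_COEFF))

# Example: a 1550.00 -> 1550.12 nm shift corresponds to roughly 99 microstrain.
strain = wavelength_shift_to_strain(1550.00, 1550.12)
print(f"{strain * 1e6:.1f} microstrain")
```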
Abstract:
This Final Degree Project is a service based on web technologies (PHP, HTML5, CSS, jQuery, and AJAX). Its main objective is to provide a service for creating and managing minutes for the Ayuntamiento de Las Palmas de Gran Canaria. To this end, it consists of two main modules, one for "creating minutes" and another for "editing minutes". The application comprises two parts. The first part, developed by me, began with all the meetings with the staff of the Ayuntamiento de Las Palmas de Gran Canaria needed to understand their requirements and how to address them as a developer. Second, I was responsible for the design and structure of the website: generating the various HTML files, interconnecting them, and passing parameters between them with the relevant tools (jQuery, AJAX), as well as writing all the JavaScript the site needed. This part also includes a search module and a module for displaying finished minutes. The search module contains a form with a search field; it looks for matches in all the files generated by the application and displays a link to open each matching file in the browser. As an additional contribution, I was also responsible for configuring and generating the database tables required for the application to work.
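A minimal sketch of the search module's logic as described, written in Python for illustration; the directory layout and file naming are hypothetical, and the thesis's actual PHP implementation is not reproduced:

```python
# Hedged sketch of the search module: scan the generated minutes files for a
# query string and return matches with a browser-openable link. The directory
# layout and file extension are hypothetical, not the thesis's PHP code.
from pathlib import Path

def search_minutes(directory: str, query: str) -> list[dict]:
    results = []
    for path in Path(directory).glob("*.html"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        if query.lower() in text.lower():
            # resolve() makes the path absolute so it can become a file:// URI
            results.append({"file": path.name, "href": path.resolve().as_uri()})
    return results

for hit in search_minutes("actas/", "presupuesto"):
    print(hit["file"], "->", hit["href"])
```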
Abstract:
This thesis deals with context-aware services, smart environments, context management, and solutions for device and service interoperability. Multi-vendor devices offer an increasing number of services and end-user applications that base their value on the ability to exploit information originating from the surrounding environment by means of a growing number of embedded sensors, e.g. GPS, compass, RFID readers, cameras, and so on. However, such devices are usually unable to exchange information because they lack a shared data storage and common information-exchange methods. A large number of standards and domain-specific building blocks are available and heavily used in today's products. However, relying on these ready-to-use modules is not without problems: the integration and cooperation of different kinds of modules can be daunting because of growing complexity and dependency. In such scenarios it is attractive to have an infrastructure that makes the coexistence of multi-vendor devices easy, while enabling low-cost development and smooth access to services. This sort of technology glue should reduce both software and hardware integration costs by removing the burden of interoperability, leading to faster and simpler design, development, and deployment of cross-domain applications. This thesis focuses mainly on software architectures supporting context-aware service providers, especially on the following subjects: user-preference-based service adaptation; context management; content management; information interoperability; multi-vendor device interoperability; and communication and connectivity interoperability. Experimental activities were carried out in several domains, including cultural heritage and indoor and personal smart spaces, all of which are considered significant test-beds in context-aware computing. The work evolved within European and national projects: on the European side, I carried out my research within EPOCH, the FP6 Network of Excellence on "Processing Open Cultural Heritage", and within SOFIA, an ARTEMIS JU project on embedded systems. I worked in cooperation with several international institutions, including the University of Kent, VTT (the Technical Research Centre of Finland), and Eurotech. On the national side, I contributed to a one-to-one research contract between ARCES and Telecom Italia. The first part of the thesis covers the problem statement and related work, addressing interoperability issues and the related architectural components. The second part covers specific architectures and frameworks: MobiComp, a context management framework that I used in cultural heritage applications; CAB, a context-, preference-, and profile-based application broker that I designed within the EPOCH Network of Excellence; M3, a Semantic Web based information-sharing infrastructure for smart spaces designed by Nokia within the European project SOFIA; NoTA, a service- and transport-independent connectivity framework; and OSGi, the well-known Java-based service support framework. The final section is dedicated to the middleware, tools, and software agents developed during my doctorate to support context-aware services in smart environments.
Abstract:
Two of the main features of today's complex software systems, such as pervasive computing systems and Internet-based applications, are distribution and openness. Distribution revolves around three orthogonal dimensions: (i) distribution of control: systems are characterised by several independent computational entities and devices, each representing an autonomous and proactive locus of control; (ii) spatial distribution: entities and devices are physically distributed and connected in a global (such as the Internet) or local network; and (iii) temporal distribution: interacting system components come and go over time, and are not required to be available for interaction at the same time. Openness deals with the heterogeneity and dynamism of system components: complex computational systems are open to the integration of diverse components, heterogeneous in terms of architecture and technology, and are dynamic since they allow components to be updated, added, or removed while the system is running. The engineering of open and distributed computational systems mandates the adoption of a software infrastructure whose underlying model and technology provide the required level of uncoupling among system components. This is the main motivation behind current research trends in coordination middleware that exploit tuple-based coordination models in the engineering of complex software systems, since such models intrinsically provide coordinated components with communication uncoupling (further details in the references therein). An additional daunting challenge for tuple-based models comes from knowledge-intensive application scenarios, namely scenarios where most activities are based on knowledge in some form, and where knowledge becomes the prominent means by which systems get coordinated. Handling knowledge in tuple-based systems induces problems of syntax (e.g., two tuples containing the same data may not match due to differences in tuple structure) and, mostly, of semantics (e.g., two tuples representing the same information may not match because they adopt different syntaxes). Until now, the problem has been faced by exploiting tuple-based coordination within middleware for knowledge-intensive environments, e.g. experiments with tuple-based coordination within a Semantic Web middleware (analogous approaches are surveyed in the references). However, such solutions appear designed to tackle coordination for specific application contexts, like the Semantic Web and Semantic Web Services, and they result in rather involved extensions of the tuple space model. The main goal of this thesis was to conceive a more general approach to semantic coordination. In particular, the model and technology of semantic tuple centres were developed. The tuple centre model is adopted as the main coordination abstraction to manage system interactions. A tuple centre can be seen as a programmable tuple space, i.e. an extension of a Linda tuple space whose behaviour can be programmed so as to react to interaction events. By encapsulating coordination laws within the coordination media, tuple centres promote coordination uncoupling among coordinated components. The tuple centre model was then semantically enriched: a main design choice in this work was not to completely redesign the existing syntactic tuple space model, but rather to provide a smooth extension that, while supporting semantic reasoning, keeps tuples and tuple matching as simple as possible. By encapsulating the semantic representation of the domain of discourse within the coordination media, semantic tuple centres promote semantic uncoupling among coordinated components. The main contributions of the thesis are: (i) the design of the semantic tuple centre model; (ii) the implementation and evaluation of the model on top of an existing coordination infrastructure; (iii) a view of the application scenarios in which semantic tuple centres appear suitable as coordination media.
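To make the baseline concrete, a toy Python sketch of Linda's tuple-space primitives (out, rd, in), which the tuple centre model extends with programmable reactions and, in this thesis, semantic matching; this is illustrative only, not the thesis's infrastructure:

```python
# Illustrative toy tuple space with Linda-style primitives. Tuple centres
# extend this baseline with programmable reactions to interaction events;
# semantic tuple centres further add semantic matching. Not the thesis's code.
import threading

class TupleSpace:
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, tup):  # insert a tuple
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, template, tup):
        # Purely syntactic matching: None is a wildcard, other fields must be equal.
        return len(template) == len(tup) and all(
            t is None or t == v for t, v in zip(template, tup))

    def rd(self, template):  # blocking, non-destructive read
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        return tup
                self._cond.wait()

    def in_(self, template):  # blocking, destructive take
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(template, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

ts = TupleSpace()
ts.out(("temperature", "room1", 21.5))
print(ts.rd(("temperature", "room1", None)))  # matches via the wildcard field
```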
Abstract:
This thesis presents a web-based system for configuring three-dimensional mechanical models. The software is built on a multi-tier architecture. The back-end exposes RESTful services that allow querying a database containing the model registry and interacting with the SolidWorks 3D CAD. The front-end consists of two HTML pages designed as SPAs (Single Page Applications), one for the administrator and one for the end user; they are responsible for the asynchronous calls to the services, the automatic updating of the interface, and the interaction with three-dimensional images.
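A hedged sketch of the kind of RESTful endpoint such a back-end might expose, with Flask standing in for the actual stack; the routes and the in-memory model registry are hypothetical, and the SolidWorks interaction is omitted:

```python
# Hedged sketch of a RESTful model-registry endpoint. Flask, the routes, and
# the in-memory registry are illustrative stand-ins, not the thesis's stack.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical model registry (the "anagrafica dei modelli").
MODELS = {
    1: {"id": 1, "name": "bracket", "parameters": ["width", "height"]},
    2: {"id": 2, "name": "flange", "parameters": ["diameter", "bolt_count"]},
}

@app.get("/api/models")
def list_models():
    """Return the catalogue that the SPA queries asynchronously."""
    return jsonify(list(MODELS.values()))

@app.get("/api/models/<int:model_id>")
def get_model(model_id: int):
    model = MODELS.get(model_id)
    return (jsonify(model), 200) if model else (jsonify(error="not found"), 404)

if __name__ == "__main__":
    app.run(debug=True)
```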
Abstract:
The commissioning company wants to broaden access to its database, both from outside and from within, not only for a single administrator but also for all the agents distributed across the country and all office employees. It also wants an email notification system that gives every registered user (agent) up-to-date, real-time information about their customers. The company is Tropical Lane S.p.A., and it wants this project to make information more accessible to its agents, distributed across the various Italian regions, directly from their own mobile devices, such as smartphones or tablets, without having to continually phone the company, where a secretary is employed solely for this task. The aim is to speed up data access and free up a resource for other internal duties. It was therefore decided to build a website, based on Active Server Pages technology and interfaced with the company database, that is reachable from outside the company network and displays the data organized so as to be easily understood and viewed by everyone. The site was built using Notepad++, both for the ASP pages and for the few static HTML pages. The scripts were written in JavaScript, compatible with the specifications of the most widespread browsers. The site is backed by a single database previously built with Microsoft SQL and integrated with it.
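A hedged sketch of the notification idea, with Python and smtplib standing in for the ASP implementation; the table and column names are hypothetical:

```python
# Hedged sketch of the email-notification idea: collect each agent's updated
# customers and send a digest. Python/smtplib stand in for the ASP
# implementation; the table and column names are hypothetical.
import smtplib
import sqlite3
from email.message import EmailMessage

def send_agent_digests(db_path: str, smtp_host: str) -> None:
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT agent_email, customer_name FROM customer_updates"
    ).fetchall()
    digests: dict[str, list[str]] = {}
    for agent_email, customer in rows:
        digests.setdefault(agent_email, []).append(customer)
    with smtplib.SMTP(smtp_host) as smtp:
        for agent_email, customers in digests.items():
            msg = EmailMessage()
            msg["Subject"] = "Customer updates"
            msg["From"] = "noreply@example.com"
            msg["To"] = agent_email
            msg.set_content("Updated customers:\n" + "\n".join(customers))
            smtp.send_message(msg)
```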
Abstract:
Bioinformatics has, in the last few decades, played a fundamental role in making sense of the huge amount of data produced. Once the complete sequence of a genome has been obtained, the major problem of learning as much as possible about its coding regions becomes crucial. Protein sequence annotation is challenging and, due to the size of the problem, only computational approaches can provide a feasible solution. As recently pointed out by the Critical Assessment of Function Annotations (CAFA), the most accurate methods are those based on the transfer-by-homology approach, and the most incisive contribution is given by cross-genome comparisons. This thesis describes a non-hierarchical sequence clustering method for large-scale automatic protein annotation, called "The Bologna Annotation Resource Plus" (BAR+). The method is based on an all-against-all alignment of more than 13 million protein sequences, characterized by a very stringent metric. BAR+ can safely transfer functional features (Gene Ontology and Pfam terms) within clusters by means of a statistical validation, even in the case of multi-domain proteins. Within BAR+ clusters it is also possible to transfer three-dimensional structure (when a template is available). This is made possible by cluster-specific HMM profiles, which can be used to compute reliable template-to-target alignments even for distantly related proteins (sequence identity < 30%). Other BAR+-based applications were developed during my doctorate, including the prediction of magnesium-binding sites in human proteins, the classification of the ABC transporter superfamily, and the functional prediction (GO terms) of the CAFA targets. Remarkably, in the CAFA assessment BAR+ placed among the ten most accurate methods. At present, as a web server for functional and structural protein sequence annotation, BAR+ is freely available at http://bar.biocomp.unibo.it/bar2.0.
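A toy sketch of the transfer-by-homology idea within a cluster; BAR+'s statistical validation is reduced here to a simple frequency threshold, and the data structures are illustrative:

```python
# Toy sketch of transfer-by-homology inside a sequence cluster: annotate
# unlabelled members with GO terms that are sufficiently frequent among the
# labelled members. BAR+'s statistical validation is simplified here to a
# frequency threshold; the data structures are illustrative.
from collections import Counter

def transfer_annotations(cluster: dict[str, set[str]],
                         min_frequency: float = 0.7) -> set[str]:
    """cluster maps sequence id -> set of GO terms (empty set = unannotated)."""
    annotated = [terms for terms in cluster.values() if terms]
    counts = Counter(term for terms in annotated for term in terms)
    n = len(annotated)
    return {term for term, c in counts.items() if c / n >= min_frequency}

cluster = {
    "seqA": {"GO:0016787", "GO:0005737"},
    "seqB": {"GO:0016787"},
    "seqC": set(),  # unannotated target
}
print(transfer_annotations(cluster))  # {'GO:0016787'}
```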
Abstract:
This thesis covers the implementation and evaluation of a web-based simulator whose nodes are connected via Web Real Time Communication (WebRTC); its efficiency is tested by simulating a simple mobility model. The main concepts of simulation and of WebRTC are presented, providing the background needed to understand the text and the design and implementation choices. The thesis concludes with a series of comparative tests of the application, highlighting the strengths and weaknesses of this alternative approach to simulation.
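A hedged sketch of a random-waypoint step, the kind of simple mobility model such a simulator might execute per tick; the parameters are illustrative, and the WebRTC transport between nodes is not shown:

```python
# Hedged sketch of a random-waypoint mobility step; parameters are
# illustrative and the WebRTC signalling between nodes is not shown.
import math
import random

class Node:
    def __init__(self, area: float = 100.0, speed: float = 1.5):
        self.area, self.speed = area, speed
        self.x, self.y = random.uniform(0, area), random.uniform(0, area)
        self._pick_waypoint()

    def _pick_waypoint(self):
        self.wx = random.uniform(0, self.area)
        self.wy = random.uniform(0, self.area)

    def step(self, dt: float = 1.0):
        dx, dy = self.wx - self.x, self.wy - self.y
        dist = math.hypot(dx, dy)
        if dist < self.speed * dt:  # waypoint reached: choose a new one
            self.x, self.y = self.wx, self.wy
            self._pick_waypoint()
        else:                        # move toward the current waypoint
            self.x += self.speed * dt * dx / dist
            self.y += self.speed * dt * dy / dist

nodes = [Node() for _ in range(10)]
for _ in range(60):                  # one simulated minute, 1 s per tick
    for n in nodes:
        n.step()
```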
Abstract:
This thesis is the result of the development of a web application for analysing and presenting data on the Italian real estate market, carried out at the company behind the property portal at www.affitto.it. The company commissioned a software system that builds a historical record and describes the trend of the national real estate market. This thesis presents the software development process that led to the product, a web-based application implemented with technologies such as PHP, HTML, MySQL, and CSS.
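A hedged sketch of the trend aggregation behind such a historical view, using pandas; the CSV source and column names are hypothetical stand-ins for the PHP/MySQL implementation:

```python
# Hedged sketch of the market-trend aggregation behind the historical view:
# average asking price per month. pandas, the CSV source, and the column
# names are hypothetical stand-ins for the PHP/MySQL implementation.
import pandas as pd

listings = pd.read_csv("listings.csv", parse_dates=["listed_on"])
monthly = (listings
           .set_index("listed_on")
           .groupby(pd.Grouper(freq="MS"))["asking_price"]  # month-start bins
           .mean()
           .rename("avg_asking_price"))
print(monthly.tail(12))  # last year of the series
```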
Abstract:
Routine bridge inspections require labor-intensive and highly subjective visual interpretation to determine bridge deck surface condition. Light Detection and Ranging (LiDAR), a relatively new class of survey instrument, has become a popular and increasingly used technology for providing as-built and inventory data in civil applications. While an increasing number of private and governmental agencies possess terrestrial and mobile LiDAR systems, an understanding of the technology's capabilities and potential applications continues to evolve. LiDAR is a line-of-sight instrument and, as such, care must be taken when establishing scan locations and resolution to allow the capture of data at a resolution adequate for defining the features that contribute to the analysis of bridge deck surface condition. Information such as the location, area, and volume of spalling on deck surfaces, undersides, and support columns can be derived from properly collected LiDAR point clouds. These point clouds contain information that can provide quantitative surface condition data, resulting in more accurate structural health monitoring. LiDAR scans were collected at three study bridges, each displaying a varying degree of degradation. A variety of commercially available analysis tools, along with an independently developed algorithm written in ArcGIS Python (ArcPy), were used to locate and quantify surface defects such as the location, volume, and area of spalls. The results were displayed visually and numerically in a user-friendly web-based decision support tool integrating prior bridge condition metrics for comparison. LiDAR data processing procedures are discussed, along with the strengths and limitations of point clouds for defining features useful for assessing bridge deck condition. Point cloud density and incidence angle are two attributes that must be managed carefully to ensure the data collected are of high quality and useful for bridge condition evaluation. When collected properly, LiDAR data can be analyzed to provide a useful data set from which to derive bridge deck condition information.
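One plausible way to flag spall candidates in a deck point cloud, sketched with NumPy: fit a least-squares reference plane and threshold the depth below it; the thresholds are illustrative, and this is not the thesis's ArcPy algorithm:

```python
# Hedged sketch of spall-candidate detection in a bridge-deck point cloud:
# fit a least-squares reference plane, then flag points more than a depth
# threshold below it. Thresholds are illustrative; not the thesis's algorithm.
import numpy as np

def spall_candidates(points: np.ndarray, depth_threshold: float = 0.02):
    """points: (N, 3) array of x, y, z in metres; returns a boolean mask."""
    # Solve z = a*x + b*y + c in the least-squares sense.
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return residuals < -depth_threshold  # points well below the deck plane

# Synthetic example: a flat deck with a 5 cm deep patch of 50 points.
rng = np.random.default_rng(0)
pts = rng.uniform(0, 10, size=(1000, 3))
pts[:, 2] = 0.001 * rng.standard_normal(1000)
pts[:50, 2] -= 0.05
print(spall_candidates(pts).sum())  # ~50 flagged points
```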
Abstract:
We describe the use of log file analysis to investigate whether the use of CSCL applications corresponds to their didactical purposes. As an example, we examine the use of the web-based system CommSy as software support for project-oriented university courses. We present two findings: (1) we suggest measures to shape the context of CSCL applications and to support their initial and continuous use; (2) we show how log files can be used to analyze how, when, and by whom a CSCL system is used, and thus help to validate further empirical findings. However, log file analyses can only be interpreted reasonably when additional data about the context of use are available.
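A minimal sketch of the kind of aggregation such a log file analysis performs, counting events per user and per hour; the log format below is an assumption, not CommSy's actual layout:

```python
# Hedged sketch of usage analysis from a web-server-style access log:
# count events per user and per hour of day. The log format is an assumed
# example, not CommSy's actual log layout.
import re
from collections import Counter

# Assumed format: "user [YYYY-MM-DD HH:MM:SS] action"
LINE = re.compile(
    r"^(?P<user>\S+) \[(?P<date>\d{4}-\d{2}-\d{2}) "
    r"(?P<hour>\d{2}):\d{2}:\d{2}\] (?P<action>.+)$")

def usage_counts(log_lines):
    by_user, by_hour = Counter(), Counter()
    for line in log_lines:
        m = LINE.match(line)
        if m:
            by_user[m["user"]] += 1
            by_hour[m["hour"]] += 1
    return by_user, by_hour

sample = [
    "alice [2004-05-17 09:15:02] GET /project/42",
    "bob [2004-05-17 10:01:44] POST /material/7",
]
print(usage_counts(sample))
```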
Abstract:
The aim of the web-based course "Advertising Psychology – The Blog Seminar" was to offer a contemporary teaching design using typical Web 2.0 features such as comments, discussions, and social media integration (including Facebook and Twitter support), since these are now a common part of students' everyday life. This weblog (blog)-based seminar for Advertising Psychology was set up to make the course accessible to students from different campuses in the Ruhr metropolitan area. The technical side, built on the open-source content management system Drupal 6.0, and the didactical course structure, based on Merrill's five first principles of instruction, are introduced. To date, the blog seminar has been conducted three times with a total of 84 participants, who were asked to rate the course according to the benefits of its different didactical elements and with regard to Kirkpatrick's levels-of-evaluation model. This model covers (a) reactions, such as reported enjoyment, perceived usefulness, and perceived difficulty, and (b) effects on learning, measured through the subjectively reported increase in knowledge and attitude towards the seminar. Overall, the blog seminar was evaluated very positively and can be considered to support the achievement of the learning objectives. However, a successful blended learning approach should always be tailored to the learning contents and the environment.