917 results for Web Log Data
Abstract:
Much has been said about the technological revolution and the constant appearance of new Web applications, with new features intended to make users' work easier. But do these applications guarantee that the data they transmit is handled and sent over secure channels (protocols)? What guarantee does the user have that, even when an application uses a channel that provides data privacy and integrity, the application does not contain some vulnerability that puts the user's sensitive information at risk? Software that has not been properly tested, combined with a lack of security awareness on the part of those responsible for software development, leads to more vulnerabilities and thus multiplies the number of potential victims. Combined with the disinhibition that a feeling of invisibility can cause, this leads to complacency and, consequently, to a growing number of victims of computer attacks. Users often do not know exactly what they should protect themselves from, since the trust they place in the software does not lead them to assume that their data is at risk. In this context, historical data on vulnerabilities in the SSL/TLS protocols was collected in order to understand their impact and assess the degree of risk. In addition, a significant number of Portuguese domains were evaluated to determine whether they are affected by a specific SSL/TLS vulnerability.
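As a rough illustration of this kind of per-domain assessment, the sketch below probes which SSL/TLS protocol versions a server accepts, using Python's standard ssl module and a hypothetical host name; the thesis's actual scanning methodology and target list are not reproduced here.

```python
import socket
import ssl

# Hypothetical target; the domains evaluated in the thesis are not listed here.
HOST = "example.pt"
PORT = 443

# Protocol versions to probe; acceptance of older versions is a common
# indicator used in SSL/TLS configuration assessments.
VERSIONS = {
    "TLSv1.0": ssl.TLSVersion.TLSv1,
    "TLSv1.1": ssl.TLSVersion.TLSv1_1,
    "TLSv1.2": ssl.TLSVersion.TLSv1_2,
    "TLSv1.3": ssl.TLSVersion.TLSv1_3,
}

for name, version in VERSIONS.items():
    ctx = ssl.create_default_context()
    # Pin both ends of the allowed range so only this version is offered.
    ctx.minimum_version = version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST):
                print(f"{HOST} accepts {name}")
    except (ssl.SSLError, OSError):
        print(f"{HOST} rejects {name} (or the connection failed)")
```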
Semantic web approach for dealing with administrative boundary revisions: a case study of Dhaka City
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted to obtain the Master's Degree in Informatics Engineering.
Abstract:
Dissertation submitted to obtain the Master's Degree in Informatics Engineering.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted to obtain the Master's Degree in Informatics Engineering.
Abstract:
Companies are increasingly dependent on distributed, web-based software systems to support their businesses. This increases the need to maintain and extend those systems with up-to-date features. The development process to introduce new features therefore needs to be swift and agile, and the supporting software evolution process needs to be safe, fast, and efficient. However, this is usually a difficult and challenging task for a developer, given the limited support offered by programming environments, frameworks, and database management systems. Changes needed at the code level, in the database model, and in the actual data contained in the database must be planned and developed together and executed in a synchronized way. Even under a careful development discipline, the impact of changing an application's data model is hard to predict. The lifetime of an application comprises changes and updates designed and tested against data that is usually far from the real production data. Hand-coding DDL and DML SQL scripts to update the database schema and data is thus the usual (and hard) approach taken by developers. Such a manual approach is error-prone and disconnected from the real data in production, because developers may not know the exact impact of their changes. This work aims to improve the maintenance process in the context of the Agile Platform by OutSystems. Our goal is to design and implement new data-model evolution features that ensure safe support for change and a sound migration process. Our solution includes impact analysis mechanisms targeting both the data model and the data itself. This provides developers with a safe, simple, and guided evolution process.
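To make concrete the kind of synchronized DDL/DML change the abstract refers to, here is a minimal, hypothetical migration sketch using Python's built-in sqlite3 module; the table names and the schema change are invented for illustration, and this is the manual style of script the work aims to improve upon, not the OutSystems mechanism itself.

```python
import sqlite3

# Hypothetical migration: move the free-text "address" column of an existing
# "customer" table into a separate "customer_address" table, so the schema
# change (DDL) and the data migration (DML) are applied together.
conn = sqlite3.connect("app.db")
try:
    with conn:  # commits on success, rolls back on error
        conn.execute("BEGIN")  # one explicit transaction for DDL + DML
        conn.execute(
            "CREATE TABLE customer_address ("
            "customer_id INTEGER REFERENCES customer(id), address TEXT)"
        )
        # DML: copy existing data into the new structure.
        conn.execute(
            "INSERT INTO customer_address (customer_id, address) "
            "SELECT id, address FROM customer WHERE address IS NOT NULL"
        )
        # DDL: drop the now-redundant column (requires SQLite >= 3.35).
        conn.execute("ALTER TABLE customer DROP COLUMN address")
finally:
    conn.close()
```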
Abstract:
Stratigraphic columns (SC) are the most useful and common way to represent field descriptions (e.g., grain size, thickness of rock packages, and fossil and lithological components) of rock sequences and well logs. In these representations the width of the SC varies according to grain size (i.e., the wider the stratum, the coarser the rock (Miall 1990; Tucker 2011)), and the thickness of each layer is represented on the vertical axis of the diagram. Typically these representations are drawn 'manually' using vector graphics editors (e.g., Adobe Illustrator®, CorelDRAW®, Inkscape). Various software packages can now plot SCs automatically, but there are no versatile open-source tools, and it remains very difficult both to store and to analyse stratigraphic information. This document presents Stratigraphic Data Analysis in R (SDAR), an analytical package designed both to plot and to facilitate the analysis of stratigraphic data in R (R Core Team 2014). SDAR uses simple stratigraphic data and takes advantage of the flexible plotting tools available in R to produce detailed SCs. The main benefits of SDAR are that it: (i) generates accurate and complete SC plots including multiple features (e.g., sedimentary structures, samples, fossil content, color, structural data, contacts between beds); (ii) is developed in a free software environment for statistical computing and graphics; (iii) runs on a wide variety of platforms (i.e., UNIX, Windows, and macOS); (iv) exposes both plotting and analysis functions directly on R's command-line interface (CLI), which enables users to integrate SDAR's functions with the many other add-on packages available for R from the Comprehensive R Archive Network (CRAN).
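A minimal sketch of the plotting convention described above (bar width encoding grain size, layer thickness on the vertical axis), using matplotlib and made-up layer data; it illustrates the convention only and is not SDAR.

```python
import matplotlib.pyplot as plt

# Made-up layers: (name, thickness in m, grain-size rank: higher = coarser).
layers = [
    ("mudstone", 2.0, 1),
    ("siltstone", 1.5, 2),
    ("fine sandstone", 3.0, 3),
    ("conglomerate", 1.0, 5),
]

fig, ax = plt.subplots(figsize=(3, 6))
base = 0.0
for name, thickness, grain in layers:
    # Width encodes grain size; height encodes layer thickness.
    ax.bar(x=0, height=thickness, width=grain, bottom=base, align="edge",
           edgecolor="black")
    ax.text(grain + 0.2, base + thickness / 2, name, va="center")
    base += thickness

ax.set_ylabel("thickness (m)")
ax.set_xlabel("grain size (wider = coarser)")
ax.set_xlim(0, 7)
plt.tight_layout()
plt.show()
```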
Abstract:
Crisis-affected communities and international aid organizations are becoming increasingly digital as a consequence of the popularity of geotechnology. The humanitarian sector has changed in profound ways by adopting new technical approaches to obtain information from areas with difficult geographical or political access. Since 2011, Turkey has been hosting a growing number of Syrian refugees along its southeastern region. The Turkish policy of hosting them in camps, and the difficulties imposed by local governors on international aid groups' field expeditions, led such organizations to investigate and adopt other approaches to obtain the information they need. They intensified their use of remote sensing. However, the majority of studies have used very high-resolution satellite imagery (VHRSI). The study area is extensive and the temporal resolution of VHRSI is low, so relying on these sensors alone for the whole area is infeasible. This research investigates the potential of mid-resolution imagery (here, only Landsat) to obtain information from a region in crisis (here, southeastern Turkey) through a new web-based platform called Google Earth Engine (GEE). It is also intended to verify the current reliability of GEE, since its Application Programming Interface (API) is still in beta. The findings show that the basic functions are trustworthy. Results indicate that Landsat can clearly recognize spectral change only for the first settlement; the ongoing modifications vary for each case. Overall, Landsat showed significant limitations; further investigation is needed, but it may be used, with restrictions, in support of VHRSI.
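A minimal sketch of the kind of Google Earth Engine query the abstract alludes to, written with the Earth Engine Python API; the Landsat collection ID, the dates, and the example point are assumptions for illustration, not the thesis's actual study sites or parameters.

```python
import ee

ee.Initialize()  # assumes Earth Engine credentials are already configured

# Arbitrary example point in southeastern Turkey; not one of the thesis's sites.
point = ee.Geometry.Point(37.0, 36.8)

# Assumed collection ID for Landsat 8 top-of-atmosphere reflectance.
collection = (ee.ImageCollection("LANDSAT/LC08/C02/T1_TOA")
              .filterBounds(point)
              .filterDate("2013-06-01", "2013-09-01"))

# Median composite and NDVI (NIR = B5, red = B4 for Landsat 8).
ndvi = collection.median().normalizedDifference(["B5", "B4"])

# Mean NDVI within a 1 km buffer around the point, at 30 m scale.
mean_ndvi = ndvi.reduceRegion(
    reducer=ee.Reducer.mean(),
    geometry=point.buffer(1000),
    scale=30,
).getInfo()
print(mean_ndvi)
```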
Abstract:
In the last few years we have observed exponential growth in information systems, and parking information is one more example. Reliable and up-to-date information on parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already started to be applied; San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration into a web application, where all kinds of users will be able to see the current parking status as well as future status according to the predictions of the parking model. The source of the data is ancillary in this work, but it still needs to be understood in order to understand parking behaviour. There are many modelling techniques used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work the author describes the techniques best suited to the problem, analyses the results and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking status behaviour and, with this knowledge, can predict future status values for a given date. The data comes from Smart Park Ontinyent and consists of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed for the model implementations. The first test used a boosting ensemble classifier, applied over a set of decision trees created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work reports error measurements that indicate how reliable the predictions are. The second test used the tbats seasonal exponential smoothing model for function fitting. Finally, a third model was tried that combines the previous two, to see the result of that combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies in the predictions for the three models, respectively; for a car park of 47 places, this means roughly a 10% average error in parking slot predictions. This result could be even better with more data available. In order to make this kind of information visible and reachable by everyone with an Internet-connected device, a web application was built. Besides displaying the data, this application also offers different functions to improve the task of searching for parking. The new functions, apart from parking prediction, are:
- Distances from the user's location: provides the distances from the user's current location to the different car parks in the city.
- Geocoding: the service for matching a literal description or an address to a concrete location.
- Geolocation: the service for positioning the user.
- Parking list panel: not a service or a function, but a better visualization and handling of the information.
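As an analogous (not identical) illustration of the seasonal forecasting approach described above, the sketch below fits a Holt-Winters exponential smoothing model from statsmodels to made-up hourly occupancy data for a hypothetical 47-space car park; the thesis itself used C5.0 boosting and a tbats model.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Made-up hourly occupancy for a hypothetical 47-space car park, with a
# daily (24-hour) seasonal pattern plus noise.
rng = np.random.default_rng(0)
hours = pd.date_range("2015-01-01", periods=24 * 28, freq="h")
daily = 20 + 15 * np.sin(2 * np.pi * hours.hour / 24)
occupied = np.clip(daily + rng.normal(0, 3, len(hours)), 0, 47)
series = pd.Series(occupied, index=hours)

# Additive Holt-Winters model with a 24-hour seasonal period
# (tbats, used in the thesis, additionally handles multiple seasonalities).
model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=24).fit()

# Forecast the next 24 hours and report vacancies (capacity - occupancy).
forecast = model.forecast(24)
vacancies = 47 - forecast
print(vacancies.round(1))
```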
Abstract:
Recently, there has been a growing interest in the field of metabolomics, materialized by a remarkable growth in experimental techniques, available data and related biological applications. Indeed, techniques such as Nuclear Magnetic Resonance, Gas or Liquid Chromatography, Mass Spectrometry, and Infrared and UV-visible spectroscopies have provided extensive datasets that can help in tasks such as biological and biomedical discovery, biotechnology and drug development. However, as happens with other omics data, the analysis of metabolomics datasets poses multiple challenges, both in terms of methodologies and in the development of appropriate computational tools. Indeed, none of the available software tools addresses the multiplicity of existing techniques and data analysis tasks. In this work, we make available a novel R package, named specmine, which provides a set of methods for metabolomics data analysis, including data loading in different formats, pre-processing, metabolite identification, univariate and multivariate data analysis, machine learning, and feature selection. Importantly, the implemented methods provide adequate support for the analysis of data from diverse experimental techniques, integrating a large set of functions from several R packages in a powerful yet simple-to-use environment. The package, already available on CRAN, is accompanied by a web site where users can deposit datasets, scripts and analysis reports to be shared with the community, promoting the efficient sharing of metabolomics data analysis pipelines.
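As a rough analogue of the analysis steps listed above (pre-processing, multivariate analysis, univariate feature selection), here is a small sketch using scikit-learn on a made-up metabolomics-like matrix; it does not use specmine and only mirrors the general workflow.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.preprocessing import StandardScaler

# Made-up metabolomics-like matrix: 40 samples x 200 features (e.g. peak
# intensities), split into two sample groups.
rng = np.random.default_rng(1)
X = rng.lognormal(mean=0.0, sigma=1.0, size=(40, 200))
y = np.repeat([0, 1], 20)

# Pre-processing: log transform and autoscaling (common metabolomics steps).
X_scaled = StandardScaler().fit_transform(np.log1p(X))

# Multivariate analysis: project onto the first two principal components.
scores = PCA(n_components=2).fit_transform(X_scaled)
print("PCA scores shape:", scores.shape)

# Univariate feature selection: keep the 10 features most associated with
# the group labels according to an ANOVA F-test.
selector = SelectKBest(f_classif, k=10).fit(X_scaled, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```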
Abstract:
Ground-based measurements of atmospheric parameters in Tbilisi during the same period, provided by the Mikheil Nodia Institute of Geophysics, were used as calibration data. Monthly averaging, pre-processing, analysis and visualization of the satellite data were performed using the Giovanni web-based application. Maps of trends and periodic components of atmospheric aerosol optical thickness and ozone concentration over the study area were calculated.
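A minimal sketch of separating a monthly series into trend and periodic components, as described above, using statsmodels' seasonal_decompose on made-up aerosol optical thickness values rather than the Giovanni data.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Made-up monthly aerosol optical thickness (AOT) series with a slight
# upward trend and an annual cycle; stands in for the satellite data.
rng = np.random.default_rng(2)
months = pd.date_range("2005-01-01", periods=120, freq="MS")
aot = (0.2 + 0.0005 * np.arange(120)
       + 0.05 * np.sin(2 * np.pi * months.month / 12)
       + rng.normal(0, 0.01, 120))
series = pd.Series(aot, index=months)

# Additive decomposition with a 12-month period: trend + seasonal + residual.
result = seasonal_decompose(series, model="additive", period=12)
print(result.trend.dropna().head())
print(result.seasonal.head(12))
```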
Abstract:
We present experimental and theoretical analyses of the data requirements of haplotype inference algorithms. Our experiments cover a broad range of problem sizes under two standard models of tree distribution and were designed to yield statistically robust results despite the size of the sample space. Our results validate Gusfield's conjecture that a population size of n log n is required to give (with high probability) sufficient information to deduce the n haplotypes and their complete evolutionary history. The experimental results motivated a complementary theoretical analysis of bounds on the population size. We also analyze the population size required to deduce some fixed fraction of the evolutionary history of a set of n haplotypes and establish linear bounds on the required sample size; these linear bounds are also shown theoretically.
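Restating the quoted growth rates compactly (a paraphrase of the abstract above, with m denoting the sampled population size and n the number of haplotypes):

\[
  m_{\text{full history}} \;\sim\; n \log n,
  \qquad
  m_{\text{fixed fraction of history}} \;\sim\; n .
\]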
Abstract:
Study carried out during a stay at the Politecnico di Milano, Italy, between January and June 2006. One of the main goals of Software Engineering is to automate the software development process as much as possible, reducing costs through the automatic generation of software from its specification. To achieve this, among other things, the problem of efficient constraint checking must be solved, since constraints are a fundamental part of a software specification. This is precisely the scope of an ongoing thesis that will present a method which any code-generation tool can integrate in order to obtain an efficient implementation of integrity constraints. In the current phase of the project, work has been done to validate the thesis method, optimizing it for the specific case of web applications and extending it to also handle workflow-based applications. Regarding the optimization of the method for web applications, a set of parameters has been defined that allows the implementation of the method to be configured according to the specific performance needs of each particular web application. Regarding workflows (increasingly popular and used as a high-level definition of the applications to be developed), the types of constraints they entail have been studied, as well as how the thesis method can then be applied to those constraints so that workflow-based applications can also be generated efficiently.
Abstract:
This report describes the process of building a news web application. The application is divided into three areas: one where users with administration permissions can post news items to be viewed by everyone; another, accessible to registered users, which allows news from other servers to be viewed through the RSS data format; and a third, administrative area for adding new news items, modifying existing ones, or adding new web pages that contain news. Registered users can select the newspapers from which they will receive information, as well as specify which topics they prefer in the news search.