917 results for Web Log Data
Abstract:
Over recent years, databases have become an extremely important resource for biomedical research. Immunology research is increasingly dependent on access to extensive biological databases to extract existing information, plan experiments, and analyse experimental results. This review describes 15 immunological databases that have appeared over the last 30 years. In addition, important issues regarding database design and the potential for misuse of information contained within these databases are discussed. Access pointers are provided for the major immunological databases and for a number of other immunological resources accessible over the World Wide Web (WWW).
Abstract:
Spatial data is now used extensively in the Web environment, providing online customized maps and supporting map-based applications. The full potential of Web-based spatial applications, however, has yet to be achieved because of performance issues related to the large size and high complexity of spatial data. In this paper, we introduce a multiresolution approach to spatial data management and query processing in which the database server can choose spatial data at the right resolution level for different Web applications. One highly desirable property of the proposed approach is that server-side processing cost and network traffic are reduced when the level of resolution required by an application is low. Another advantage is that our approach pushes complex multiresolution structures and algorithms into the spatial database engine, so the developer of spatial Web applications need not be concerned with this complexity. This paper explains the basic idea, technical feasibility and applications of multiresolution spatial databases.
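As a rough illustration of the server-side idea (the data structures, tolerance values and names below are purely hypothetical, not taken from the paper), the server might keep each dataset pre-generalised at a few resolution levels and pick the coarsest level whose simplification tolerance is still below what one display pixel covers:

    // Hypothetical sketch: tolerance values and level ids are illustrative only.
    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    final class MultiResolutionStore {
        // simplification tolerance (metres) -> stored resolution level
        private final NavigableMap<Double, Integer> levelByTolerance = new TreeMap<>(Map.of(
                0.5, 4,    // finest level
                2.0, 3,
                8.0, 2,
                32.0, 1)); // coarsest level

        /** Coarsest level whose tolerance stays below the ground size of one display pixel. */
        int levelFor(double metresPerPixel) {
            Map.Entry<Double, Integer> e = levelByTolerance.floorEntry(metresPerPixel);
            return (e != null) ? e.getValue() : levelByTolerance.firstEntry().getValue();
        }
    }

For example, a request at roughly 10 m per pixel would be served from level 2 rather than from the full-resolution geometry, reducing both server-side processing and data transfer.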
Abstract:
This dissertation analyses Brazilian and international scientific and technological production in the field of Civil Engineering by means of bibliometric indicators. Civil Engineering was chosen because of its relevance to the country's economic development; in absolute and relative terms, however, it is among the technologically most backward sectors of the economy. Bibliometrics is a discipline of multidisciplinary reach that studies the use and the quantitative aspects of recorded scientific production. Indicators of scientific production are analysed in several fields of knowledge, both for planning and implementing public policies in various sectors and for giving the scientific community a better understanding of the system in which it operates. The methodology used in this descriptive, exploratory study was documentary and bibliometric analysis, based on data from scientific publications, from 1970 to 2012, and technological publications, from 2001 to 2012, in Civil Engineering, indexed in the Science Citation Index Expanded (SCI), Social Science Citation Index (SSCI), Conference Proceedings Citation Index (CPCI) and Derwent Innovations Index (DII) databases, which make up the multidisciplinary Web of Science (WoS) database. The information was qualified and quantified with the help of the bibliometric software VantagePoint®. The results confirmed the low number of scientific and technological publications in Civil Engineering by authors affiliated with Brazilian teaching and research institutions when compared with industrialised countries. There is a set of strong constraints, related to systemic and cultural factors, that go beyond the decision-making power and influence of academia, hindering and limiting the dissemination of Brazilian research and patents. The possibility of analysing indicators of scientific and technological production in Civil Engineering contributes to the creation of policies that, if used by funding agencies, can support better-grounded investments by governments and by the private sector, as is already done in other industrial sectors.
Abstract:
Today, information overload and the lack of systems for locating employees with the right knowledge or skills are common challenges in large organisations. They force knowledge workers to re-invent the wheel and make it hard to retrieve information from both internal and external resources. In addition, information changes dynamically, and ownership of data is moving from corporations to individuals. At the same time, a set of web-based tools may bring about major progress in the way people collaborate and share their knowledge. This article analyses the impact of ‘Web 2.0’ on organisational knowledge strategies. A comprehensive literature review presents the academic background, followed by a review of current ‘Web 2.0’ technologies and an assessment of their strengths and weaknesses. As the framework of this study is oriented towards business applications, the characteristics of the segments and tools involved are reviewed from an organisational point of view. Moreover, the ‘Enterprise 2.0’ paradigm implies not only tools but also changes in the way people collaborate and in the way work is done (processes), and it has an impact on other technologies. Finally, gaps in the literature in this area are outlined.
Abstract:
Fluorescent protein microscopy imaging is nowadays one of the most important tools in biomedical research. However, the resulting images present a low signal-to-noise ratio and an intensity decay over time due to the photobleaching effect. This phenomenon is a consequence of the decrease in the radiation emission efficiency of the tagging protein, which occurs because the fluorophore permanently loses its ability to fluoresce, owing to photochemical reactions induced by the incident light. The Poisson multiplicative noise that corrupts these images, together with the quality degradation caused by photobleaching, makes long-term biological observation very difficult. This paper describes a denoising algorithm for Poisson data in which the photobleaching effect is explicitly taken into account. The algorithm is designed in a Bayesian framework where the data fidelity term models the Poisson noise generation process as well as the exponential intensity decay caused by photobleaching. The prior term is built from Gibbs priors with log-Euclidean potential functions, suitable for coping with the positivity-constrained nature of the parameters to be estimated. Monte Carlo tests with synthetic data are presented to characterize the performance of the algorithm, and one example with real data illustrates its application.
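As a rough sketch of the kind of criterion described above (the exact formulation in the paper may differ, and all symbols here are illustrative), a MAP estimate under a Poisson observation model with exponential photobleaching decay could be written as

    \hat{x} = \arg\min_{x > 0} \; \sum_{i,t} \left[ x_i e^{-\lambda t} - y_{i,t} \log\!\left( x_i e^{-\lambda t} \right) \right] \; + \; \beta \sum_{(i,j) \in \mathcal{N}} \phi\!\left( \log x_i - \log x_j \right)

where y_{i,t} is the observed count at pixel i and frame t, x_i the underlying intensity, and \lambda an assumed decay rate. The first term is the negative Poisson log-likelihood with constant terms dropped; the second is a Gibbs prior whose potential \phi acts on differences of log-intensities over a neighbourhood system \mathcal{N}, the log-Euclidean device that keeps the estimates positive.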
Abstract:
The emergence of new business models, namely partnerships between organizations, and the possibility for companies to enrich their information with data already available on the web, especially on the semantic web, have highlighted problems that exist in databases, particularly problems related to data quality. Poor data can result in a loss of competitiveness for the organizations holding them and may even lead to their disappearance, since many decision-making processes are based on these data. For this reason, data cleaning is essential. Current approaches to these problems are closely tied to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand the data, i.e., associated semantics are needed. The solution presented in this paper uses ontologies (i) to specify data cleaning operations and (ii) to resolve the semantic heterogeneity of data stored in different sources. With cleaning operations defined at the conceptual level and mappings between domain ontologies and an ontology derived from a database, the operations can be instantiated and proposed to the expert/specialist for execution over that database, thus making them interoperable across repositories.
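A minimal sketch of this idea, under assumed names and a single invented rule shape (none of this is the paper's actual model or SQL dialect): a cleaning operation is stated against an ontology property and only becomes executable once a property-to-column mapping for a concrete database is supplied.

    // Hypothetical sketch: type names, the rule shape and the SQL dialect are illustrative only.
    import java.util.Map;

    final class OntologyCleaning {

        /** Conceptual-level operation: values of an ontology property must match a pattern. */
        record FormatRule(String ontologyProperty, String regex) {}

        /**
         * Instantiate the rule over one concrete database through a
         * property -> "table.column" mapping, producing SQL that flags
         * (rather than silently fixes) the violating rows for the expert to review.
         */
        static String toSql(FormatRule rule, Map<String, String> propertyToColumn) {
            String target = propertyToColumn.get(rule.ontologyProperty);
            if (target == null) {
                throw new IllegalArgumentException("unmapped property: " + rule.ontologyProperty);
            }
            String[] tc = target.split("\\.");
            return "SELECT * FROM " + tc[0]
                 + " WHERE " + tc[1] + " NOT REGEXP '" + rule.regex + "'";
        }
    }

The same conceptual rule, paired with a different mapping, can then be reused over any other repository, which is the interoperability the abstract refers to.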
Abstract:
Master's degree in Electrical and Computer Engineering.
Abstract:
Given the constant evolution of the Internet, its use has become almost mandatory. Through the web it is possible to check bank statements, shop in distant countries and pay for services without leaving home, among many other things; there are countless ways to use this network. As the Internet became so useful and so close to people, users also began to acquire more computing knowledge. The Internet also hosts several guides for illicit intrusion into systems, as well as manuals for other criminal practices. This kind of information, combined with users' growing computing skills, has changed the current paradigms of computer security. Nowadays, computer security is less concerned with hardware; the main goals are safeguarding data and ensuring the continuity of services. This is fundamentally due to organisations' dependence on their digital data and, increasingly, on the services they make available online. Since both the threats and the assets to be protected have changed, security mechanisms must change as well. It becomes necessary to know the attacker, in order to anticipate their motivation and likely targets. In this context, we proposed the deployment of systems that record illicit access attempts at five higher education institutions, followed by analysis of the collected information with data mining techniques. This approach is rarely used for this purpose in research, so it was necessary to look for analogies with other application areas in order to gather documentation relevant to its implementation. The resulting solution proved effective and led to the development of an application that fuses the logs of the Honeyd and Snort applications (and is also responsible for their treatment, preparation and publication as a Comma Separated Values (CSV) file), adding knowledge about what can be obtained statistically and revealing useful, previously unknown characteristics of the attackers. This knowledge can be used by a system administrator to improve the performance of security mechanisms such as firewalls and Intrusion Detection Systems (IDS).
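A minimal sketch of the fusion step, under the assumption that each Honeyd entry and each Snort alert has already been parsed into a common record (the field names and column layout below are illustrative, not the application's actual schema):

    // Hypothetical sketch: record fields and CSV columns are illustrative only.
    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.time.Instant;
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    final class LogFusion {

        /** One event from either tool, after parsing (the parsing itself is omitted here). */
        record Event(Instant when, String sourceTool, String srcIp, int dstPort, String detail) {}

        /** Merge both streams onto a single timeline and write the CSV used by the mining step. */
        static void writeCsv(List<Event> honeydEvents, List<Event> snortAlerts, Path out) throws IOException {
            List<Event> all = new ArrayList<>(honeydEvents);
            all.addAll(snortAlerts);
            all.sort(Comparator.comparing(Event::when));
            try (PrintWriter w = new PrintWriter(Files.newBufferedWriter(out))) {
                w.println("timestamp,tool,src_ip,dst_port,detail");
                for (Event e : all) {
                    w.printf("%s,%s,%s,%d,%s%n",
                             e.when(), e.sourceTool(), e.srcIp(), e.dstPort(),
                             e.detail().replace(',', ';')); // crude escaping, enough for a sketch
                }
            }
        }
    }

The single, time-ordered CSV is what makes the subsequent statistical and data mining analysis straightforward, since events from both sensors can be studied on one timeline.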
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Informatics Engineering.
Abstract:
With the advent of wearable sensing and mobile technologies, biosignals are used in a growing number of application areas, leading to the collection of large volumes of data. One of the difficulties in dealing with these data sets, and in developing automated machine learning systems that use them as input, is the lack of reliable ground-truth information. In this paper we present a new web-based platform for the visualization, retrieval and annotation of biosignals by non-technical users, aimed at improving the process of ground-truth collection for biomedical applications. Moreover, a novel extendable and scalable data representation model and persistency framework is presented. The results of the experimental evaluation with prospective users further confirmed the potential of the presented framework.
Abstract:
Constrained and unconstrained Nonlinear Optimization Problems appear in many engineering areas. In some of these cases derivative-based optimization methods cannot be used, because the objective function is unknown, too complex or non-smooth, and Direct Search Methods may be the most suitable alternative. An Application Programming Interface (API) including some of these methods was implemented using Java technology. This API can be accessed either by applications running on the same computer where it is installed or remotely, through a LAN or the Internet, using web services. From the engineering point of view, the information needed from the API is the solution to the provided problem. From the point of view of researchers in optimization methods, however, the solution alone is not enough: additional information about the iterative process is also useful, such as the number of iterations, the value of the solution at each iteration and the stopping criteria. This paper presents the features added to the API that allow users to access this iterative-process data.
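A minimal sketch of what such iterative-process data might look like on the client side (the type and method names below are hypothetical, not the API's actual interface):

    // Hypothetical sketch: names are illustrative only.
    import java.util.List;

    interface DirectSearchSolver {

        Result minimize(ObjectiveFunction f, double[] initialGuess);

        @FunctionalInterface
        interface ObjectiveFunction {
            double valueAt(double[] x);
        }

        /** Snapshot of one iteration of the search. */
        record Iteration(int index, double[] point, double objectiveValue) {}

        /** Final solution plus the trace that method researchers can inspect. */
        record Result(double[] solution,
                      double objectiveValue,
                      List<Iteration> trace,        // value of the solution at each iteration
                      String stoppingCriterion) {}  // which criterion terminated the run
    }

An engineering client would read only the final solution, while a methods researcher could iterate over the trace to study how the search converged and which stopping criterion ended the run.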
Abstract:
Introduction: the multimodality environment requires a greater understanding of the imaging technologies used, of their limitations, and of how best to interpret the results, together with dose optimization, the introduction of new techniques, and attention to current and best practice. Incidental findings in the low-dose CT images obtained as part of the hybrid imaging process are an increasing phenomenon with advancing CT technology, giving rise to ethical and medico-legal dilemmas; understanding the limitations of these procedures is important when reporting images and recommending follow-up. A free-response observer performance study was used to evaluate lesion detection in low-dose CT images obtained during attenuation correction acquisitions for myocardial perfusion imaging on two hybrid imaging systems.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Dissertation presented as a partial requirement for obtaining the degree of Master in Geographic Information Science and Systems.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies