948 results for portale, monitoring, web usage mining


Relevance:

20.00%

Publisher:

Abstract:

Internship report submitted in fulfilment of the requirements for the Master's degree in Novos Media e Práticas Web

Relevance:

20.00%

Publisher:

Abstract:

Kosovo declared its independence on 17 February 2008 and is currently the second youngest country in the world, surpassed only by South Sudan in 2011. After the break-up of the former Yugoslavia and all the military conflicts that followed, the history of Kosovo and its origins is perhaps the one that still leaves the most questions unanswered. The web documentary "Quem és tu, Kosovo?" ("Who are you, Kosovo?") is a documentary project that collects life stories, in which the central point of each interview is the search for a parallel between the identity of the country's citizens and that of the country itself. This project, and the work carried out by the journalist in the country, is intended as a pilot experiment to be presented to the population of Kosovo and beyond, with the goal of building an archive of the country's stories and broadening the diversity of identity perspectives.

Relevance:

20.00%

Publisher:

Abstract:

The processes of mobilizing land for infrastructure in the public and private domains are governed by their own legal frameworks and are systematically confronted with the impoverished national situation as regards cadastral identification and regularization, which leads to major inefficiencies, sometimes with a very negative impact on overall effectiveness. This project report describes the Ferbritas Cadastre Information System (FBSIC) project and tools, which, in conjunction with other applications, allow managing the entire life cycle of land acquisition and cadastre, including support for field activities with the integration of information collected in the field, the development of multi-criteria analysis information, the monitoring of all information during the exploration stage, and the automated generation of outputs. The benefits are evident at the level of operational efficiency, including tools that enable process integration and the standardization of procedures, facilitate analysis and quality control, and maximize performance in the acquisition, maintenance and management of cadastral and expropriation information (expropriation projects). The implemented system therefore achieves levels of robustness, comprehensiveness, openness, scalability and reliability suitable for a structural platform. The resulting solution, FBSIC, is a fit-for-purpose cadastre information system rooted in the field of railway infrastructures. The integrating nature of FBSIC makes it possible to meet present needs and scale to future services; to collect, maintain, manage and share all information on one common platform and transform it into knowledge; to interoperate with other platforms; and to increase the accuracy and productivity of business processes related to land property management.

Relevance:

20.00%

Publisher:

Abstract:

During drilling operations, cuttings are produced downhole and must be removed to avoid problems that can lead to Non-Productive Time (NPT). Most stuck-pipe events, and the consequent loss of the Bottom-Hole Assembly (BHA), are related to poor hole cleaning. Many parameters help determine hole-cleaning conditions, but a proper selection of the key parameters facilitates the monitoring of hole-cleaning conditions and interventions. The aim of hole-cleaning monitoring is to keep track of borehole conditions, including hole-cleaning efficiency and wellbore-stability issues, during drilling operations. Adequate hole cleaning is one of the main concerns in underbalanced drilling operations, especially for directional and horizontal wells. This dissertation addresses some hole-cleaning fundamentals which serve as the basis for recommended practice during drilling operations: understanding how parameters such as flow rate, rotations per minute (RPM), rate of penetration (ROP) and mud weight can improve hole-cleaning performance, and how Equivalent Circulating Density (ECD), torque and drag (T&D) and the cuttings volume returning from downhole indicate how clean and stable the well is. In the case study, hole-cleaning performance, i.e. cuttings-volume-removal monitoring, is based on real-time measurements of the cuttings volume removed from downhole at a given time, taking into account flow rate, RPM, ROP and drilling-fluid (mud) properties; these measurements are then plotted and compared against the expected drilled volume. ECD monitoring indicates hole-stability conditions, while T&D and cuttings-volume monitoring indicate how clean the well is. T&D modeling software provides theoretically calculated T&D trends, which are plotted and compared with the real-time measurements, using the measured hookloads to back-calculate friction factors along the wellbore.
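The ECD mentioned above combines the static mud weight with the dynamic annular pressure loss. As an illustration only (not taken from the dissertation), a minimal sketch in field units (ppg, psi, ft):

```python
def equivalent_circulating_density(mud_weight_ppg, annular_loss_psi, tvd_ft):
    """ECD in ppg: static mud weight plus the annular friction
    pressure loss converted to an equivalent density. 0.052 is the
    field-units conversion factor (psi/ft per ppg)."""
    return mud_weight_ppg + annular_loss_psi / (0.052 * tvd_ft)

# Example: 10 ppg mud with 200 psi annular loss at 8000 ft TVD
ecd = equivalent_circulating_density(10.0, 200.0, 8000.0)
```

A rising ECD trend at constant flow rate and ROP can signal cuttings loading of the annulus, which is why ECD is monitored as a hole-cleaning indicator.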

Relevance:

20.00%

Publisher:

Abstract:

The benefits of long-term monitoring have drawn considerable attention in healthcare. Since the acquired data provide an important source of information to clinicians and researchers, long-term monitoring studies have become frequent. However, long-term monitoring can produce massive datasets, which makes the analysis of the acquired biosignals a challenge. Visualization, a key step in signal analysis, then faces several limitations, and the handling of annotations, on which some machine learning algorithms depend, turns out to be a complex task. To overcome these problems, a novel web-based application for fast and user-friendly biosignal visualization and annotation was developed. This was made possible through the study and implementation of a visualization model. Its main process, the visualization process, comprised the definition of the domain problem, the abstraction design, the development of a multilevel visualization, and the study and choice of the visualization techniques that best communicate the information carried by the data. A second process targeted the visual encoding variables. Finally, improved interaction and exploration techniques were implemented, among which annotation handling stands out. Three case studies are presented and discussed, and a usability study supports the reliability of the implemented work.
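A common technique behind multilevel visualization of long recordings, shown here only as a generic illustration and not necessarily as the thesis's implementation, is min-max downsampling, which keeps signal extremes visible at coarse zoom levels:

```python
def minmax_downsample(signal, bucket_size):
    """Reduce a long signal to per-bucket (min, max) pairs so that
    peaks and troughs survive aggressive downsampling when
    rendering an overview level of a multilevel visualization."""
    out = []
    for i in range(0, len(signal), bucket_size):
        bucket = signal[i:i + bucket_size]
        out.append((min(bucket), max(bucket)))
    return out

overview = minmax_downsample([0, 5, -3, 2, 9, 1, -1, 4], 4)
```

Zooming in simply switches to a finer bucket size (or the raw samples), so each zoom level reads from a precomputed pyramid rather than the full dataset.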

Relevance:

20.00%

Publisher:

Abstract:

ABSTRACT - Continuous exposure to chemical substances has consequences for human health, some of which are not yet fully established. Occupational toxicology is an interdisciplinary field that draws on occupational hygiene and medicine, epidemiology and toxicology; its main objective is to prevent adverse effects arising from the occupational environment, and one of its principal roles is to provide as much data as possible to further the understanding of potential health effects. Lead is a cumulative toxicant whose health effects are mainly systemic, i.e. the toxic effect manifests at sites distant from the initial contact and results essentially from chronic exposures over more or less prolonged periods (months to years). It can interact with different organs and tissues, binding to molecules and cellular constituents. Since lead has no physiological function, its presence in the human body produces a series of harmful effects affecting several organs and systems, namely the hematopoietic system, the nervous system, the kidney, the reproductive system, the cardiovascular system, the endocrine system and the immune system. Lead's interference with the functioning of some biological systems results in a set of fundamental alterations in membrane transport processes, in the structural and functional integrity of enzymes and in several metabolic pathways, in particular oxidative phosphorylation and heme synthesis; the first biochemical effects of lead are detected at blood lead levels below 10 μg/dL. The hygiene and safety measures currently in force in developed countries ensure that cases of severe poisoning are increasingly rare.
However, the risk of occupational exposure exists in every activity involving lead-containing materials, such as mining, primary and secondary smelting, lead-acid battery production, the production of glass with lead pigments, automotive repair welding and firing-range instruction. Since 2006, lead has been classified by the International Agency for Research on Cancer (IARC) as a Group 2A carcinogen (probably carcinogenic to humans). It is therefore considered that lead unequivocally induces cancer in experimental animals but that, although there is strong evidence that the mechanisms mediating the carcinogenesis of these compounds also operate in humans, the available data cannot yet establish that relationship. This study aimed to contribute to the knowledge of lead toxicity through the study of lead exposure and of the influence of individual susceptibility (in industries without significant co-exposure to other known or suspected carcinogens). The case was studied through a multiple approach relating different types of biomarkers, since biological monitoring integrates all possible routes of entry into the body (beyond the respiratory route), possible exposures outside the strictly occupational context, and a series of intrinsic individual factors (related to lifestyle, physiology and behaviour).
Since co-exposure to other compounds with genotoxic and carcinogenic properties is difficult to circumvent when assessing the genotoxic potential of lead in occupationally or environmentally exposed populations, this study has the advantage of having been carried out in populations without known co-exposure to other such substances, making it possible to draw conclusions about the effects of exposure to lead alone in the human population and helping to explain some of the apparent inconsistencies and contradictions between different studies on this topic. The exposure indicators used were: indicators of internal dose (blood lead and zinc protoporphyrin (PPZ) levels), indicators of adverse effects on heme and of genotoxic effects (ALAD activity, the comet assay and TCR mutation) and indicators of susceptibility (ALAD and VDR genetic polymorphisms), through a statistical approach of direct comparison of predefined population subgroups and the application of a multiple regression model. The study revealed that blood lead levels in the Portuguese population have fallen significantly over the last 10 years, both in the occupationally exposed population and in the general population, and that the presence of the B-B genotype (of the VDR gene) is predictive of blood lead variations when compared with the most frequent genotype in the population, B-b; in contrast, the b-b genotype does not appear to influence any of the markers studied. Regarding genotoxic effects, it was concluded that they did not manifest in the studied population, leading to the conclusion that, at the exposure levels studied, lead cannot induce this type of effect per se and reinforcing the hypothesis, already raised by other authors, that lead's mechanism of genotoxicity is essentially the promotion of genotoxic processes triggered by other agents.
Studies of genotoxic effects and oxidative stress designed to compare groups of workers exposed only to lead with groups of workers with the same level of lead exposure but co-exposed to other recognized carcinogens could help to increase the knowledge of this effect of lead on human health.

Relevance:

20.00%

Publisher:

Abstract:

Hybrid knowledge bases combine ontologies with non-monotonic rules, joining the best of open-world ontologies and closed-world rules. Ontologies provide a good mechanism for sharing knowledge on the Web that can be understood by both humans and machines; rules, on the other hand, can be used, e.g., to encode legal regulations or to map between sources of information. Given the dynamics present on the Web today, it is important for these hybrid knowledge bases to capture those dynamics and adapt themselves accordingly. To achieve that, it is necessary to create mechanisms capable of monitoring the information flow on the Web. To date, there are no mechanisms that allow monitoring events and performing modifications of hybrid knowledge bases autonomously. The goal of this thesis is therefore to create a system that combines hybrid knowledge bases with reactive rules, aiming to monitor events and perform actions over a knowledge base. To achieve this goal, a reactive system for the Semantic Web is developed in a logic-programming-based approach, accompanied by a language for heterogeneous rule-base evolution based on the RIF Production Rule Dialect, a standard for exchanging rules over the Web.
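The reactive-rule idea can be sketched, in a hypothetical and much-simplified form (this is not the RIF-PRD-based language the thesis develops), as an event-condition-action loop over a knowledge base:

```python
def run_eca(rules, events, kb):
    """Tiny event-condition-action engine: for each incoming event,
    every rule whose trigger matches and whose condition holds
    against the knowledge base fires an action that updates the kb."""
    for event in events:
        for trigger, condition, action in rules:
            if trigger == event["type"] and condition(kb, event):
                action(kb, event)
    return kb

# Hypothetical rule: alert when a monitored price drops below a threshold.
rules = [
    ("price_update",
     lambda kb, e: e["price"] < kb.get("threshold", 0),
     lambda kb, e: kb.setdefault("alerts", []).append(e["item"])),
]
kb = {"threshold": 100}
kb = run_eca(rules,
             [{"type": "price_update", "item": "x", "price": 80},
              {"type": "price_update", "item": "y", "price": 120}],
             kb)
```

In a real system the knowledge base would be a hybrid one (ontology plus rules) and the actions would be knowledge-base update operations, but the monitor-check-act cycle is the same.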

Relevance:

20.00%

Publisher:

Abstract:

Complex systems, i.e. systems composed of a large set of elements interacting in a non-linear way, are found all around us. In recent decades, different approaches have been proposed toward their understanding, one of the most interesting being the complex-network perspective. This legacy of the 18th-century mathematical concepts proposed by Leonhard Euler remains current, and increasingly relevant to real-world problems. In recent years, it has been demonstrated that network-based representations can yield relevant knowledge about complex systems. In spite of that, several problems have been detected, mainly related to the degree of subjectivity involved in the creation and evaluation of such network structures. In this thesis, we propose addressing these problems by means of different data mining techniques, thus obtaining a novel hybrid approach intermingling complex networks and data mining. Results indicate that such techniques can be effectively used to i) enable the creation of novel network representations, ii) reduce the dimensionality of the analyzed systems by pre-selecting the most important elements, iii) describe complex networks, and iv) assist in the analysis of different network topologies. The soundness of the approach is validated through different validation cases drawn from actual biomedical problems, e.g. the diagnosis of cancer from tissue analysis, or the study of the dynamics of the brain under different neurological disorders.
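One standard way to turn raw measurements into a network representation and then pre-select the most important elements, given here only as an illustration with made-up data rather than the thesis's pipeline, is a correlation network with degree-based node selection:

```python
import statistics

def pearson(a, b):
    """Pearson correlation of two equal-length series."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def correlation_network(series, threshold):
    """Link elements whose absolute pairwise correlation exceeds a
    threshold -- a common way to build a complex-network
    representation from raw data."""
    nodes = list(series)
    edges = {n: set() for n in nodes}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if abs(pearson(series[u], series[v])) >= threshold:
                edges[u].add(v)
                edges[v].add(u)
    return edges

data = {
    "a": [1, 2, 3, 4],
    "b": [2, 4, 6, 8],   # perfectly correlated with "a"
    "c": [4, 3, 2, 1],   # perfectly anti-correlated with "a"
    "d": [1, 3, 2, 3],   # only weakly related to the others
}
net = correlation_network(data, 0.95)
# Dimensionality reduction: keep only the highest-degree nodes
hubs = sorted(net, key=lambda n: len(net[n]), reverse=True)[:3]
```

The choice of threshold is exactly the kind of subjective decision the thesis points at; data mining techniques can help select it less arbitrarily.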

Relevance:

20.00%

Publisher:

Abstract:

Despite recent progress in robotics, autonomous robots still have too many limitations to reliably help people with disabilities. On the other hand, animals, and especially dogs, have already demonstrated great skill in assisting people in many daily situations. However, dogs also have their own set of limitations: for example, they need to rest periodically and to stay healthy (physically and psychologically), and it is difficult to control them remotely. This project aims to "augment" the assistance dog by developing a system that compensates for some of the dog's weaknesses through a robotic device mounted on the dog's harness. This specific study, part of the COCHISE project, focuses on the development of a system for monitoring the dog's activity and physiological parameters.

Relevance:

20.00%

Publisher:

Abstract:

The growing computational power of mobile devices and increasing browser efficiency encourage the construction of faster, more fluid Web applications through the asynchronous exchange of data rather than complete HTML pages. The OutSystems Platform is a development environment for the rapid, validated construction of Web applications, integrating user-interface construction, application logic and the data model in a single language. The platform's standard client-server interaction model follows the full request-response cycle, although asynchronous applications can be implemented explicitly. In this work we present a separation model, based on static analysis over an application's definition, between the data presented in the pages generated by the platform and the code corresponding to their structure and presentation. This approach enables the automatic and transparent generation of faster, more fluid user interfaces from the model of an OutSystems application. The model, together with the static analysis, makes it possible to identify the minimal subset of data that must be transmitted over the network to execute a given feature on the server, and to isolate code execution on the client. Using this approach yields a very significant reduction in data transmission and possibly a reduction in server processing load, since Web page generation is delegated to the client, which becomes able to execute code. The model is defined over a language, inspired by the OutSystems platform language, for which a code generator was implemented. In this context, a domain-specific language creates an abstraction layer between the definition of an application's model and the generated code, making the creation of client-side templates and of the code executed on client and server transparent.
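Identifying the minimal data subset a server-side feature needs amounts to a reachability computation over a static dependency graph. The sketch below uses a hypothetical screen definition and is not OutSystems' actual representation:

```python
def minimal_inputs(dependencies, feature):
    """Given a dependency graph mapping each computed value to the
    values it reads, return the minimal set of leaf data fields the
    feature transitively depends on (and hence must be sent)."""
    needed, stack = set(), [feature]
    while stack:
        node = stack.pop()
        deps = dependencies.get(node)
        if deps is None:          # a leaf: an actual data field
            needed.add(node)
        else:
            stack.extend(deps)
    return needed

# Hypothetical screen: "SaveOrder" needs only the fields reachable
# from it, not the whole page state.
deps = {
    "SaveOrder": ["Total", "CustomerId"],
    "Total": ["Price", "Quantity"],
}
fields = minimal_inputs(deps, "SaveOrder")
```

Running the same analysis per feature lets the generator ship a lean payload for each server round trip instead of re-posting the entire page.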

Relevance:

20.00%

Publisher:

Abstract:

Ontologies formalized by means of Description Logics (DLs) and rules in the form of Logic Programs (LPs) are two prominent formalisms in the field of Knowledge Representation and Reasoning. While DLs adhere to the Open World Assumption and are suited for taxonomic reasoning, LPs implement reasoning under the Closed World Assumption, so that default knowledge can be expressed. However, for many applications it is useful to have a means that allows reasoning over an open domain and expressing rules with exceptions at the same time. Hybrid MKNF knowledge bases make such a means available by formalizing DLs and LPs in a common logic, the Logic of Minimal Knowledge and Negation as Failure (MKNF). Since rules and ontologies are used in open environments such as the Semantic Web, inconsistencies cannot always be avoided. This poses a problem due to the Principle of Explosion, which holds in classical logics. Paraconsistent logics offer a solution to this issue by assigning meaningful models even to contradictory sets of formulas. Consequently, paraconsistent semantics for DLs and LPs have been investigated intensively. Our goal is to apply the paraconsistent approach to the combination of DLs and LPs in hybrid MKNF knowledge bases. In this thesis, a new six-valued semantics for hybrid MKNF knowledge bases is introduced, extending the three-valued approach by Knorr et al., which is based on the well-founded semantics for logic programs. Additionally, a procedural way of computing paraconsistent well-founded models for hybrid MKNF knowledge bases by means of an alternating fixpoint construction is presented, and it is proven that the algorithm is sound and complete w.r.t. the model-theoretic characterization of the semantics. Moreover, it is shown that the new semantics is faithful w.r.t. well-studied paraconsistent semantics for DLs and LPs, respectively, and maintains the efficiency of the approach it extends.
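The alternating fixpoint construction can be illustrated, for plain normal logic programs rather than full hybrid MKNF knowledge bases, by a small sketch (illustrative only):

```python
def least_model(rules):
    """Least model of a negation-free program; each rule is a
    (head, positive_body) pair. Apply rules to a fixpoint."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return frozenset(model)

def gl_reduct(program, interp):
    """Gelfond-Lifschitz reduct: delete rules whose negative body
    intersects the interpretation, keep the positive part of the rest."""
    return [(h, pos) for h, pos, neg in program
            if not (set(neg) & interp)]

def well_founded(program):
    """Alternating fixpoint: gamma is antitone, so gamma∘gamma is
    monotone; its least fixpoint yields the true atoms and its
    greatest fixpoint the non-false atoms of the well-founded model."""
    def gamma(interp):
        return least_model(gl_reduct(program, interp))
    lo, hi = frozenset(), gamma(frozenset())
    while True:
        new_lo = gamma(hi)
        new_hi = gamma(new_lo)
        if (new_lo, new_hi) == (lo, hi):
            return lo, hi   # (true atoms, non-false atoms)
        lo, hi = new_lo, new_hi

# p and q attack each other (undefined); r is a fact; s holds because
# t can never be derived. Rules are (head, pos_body, neg_body).
program = [("p", (), ("q",)),
           ("q", (), ("p",)),
           ("r", (), ()),
           ("s", (), ("t",))]
true_atoms, nonfalse = well_founded(program)
```

Atoms in `true_atoms` are true, atoms outside `nonfalse` are false, and the rest (here `p` and `q`) are undefined; the paraconsistent six-valued extension refines this picture with additional truth values for contradictory information.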

Relevance:

20.00%

Publisher:

Abstract:

In the recent past, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are becoming increasingly geolocated. Huge amounts of data have formed what is called "Big Data", and scientists still do not know how to deal with it completely. Different data mining tools are used to try to extract useful information from this Big Data. In our study, we also deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, which means that they have an exact location on the Earth's surface according to a certain spatial reference system. Using data mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information can be accurate. Finally, we compared different data mining methods in order to distinguish which one performs best on this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
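Memory Based Reasoning is essentially k-nearest-neighbour classification over stored examples. A minimal sketch with invented tags and labels (not the study's actual data or feature encoding):

```python
from collections import Counter

def knn_classify(train, tags, k=3):
    """MBR in its simplest form: find the k training examples whose
    tag sets are most similar (Jaccard similarity) to the query and
    take a majority vote over their land-use labels."""
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0
    ranked = sorted(train, key=lambda ex: jaccard(ex[0], tags),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

train = [
    (["beach", "sea", "sand"], "coastal"),
    (["waves", "sea", "surf"], "coastal"),
    (["forest", "trees", "hiking"], "forest"),
    (["pine", "trees", "trail"], "forest"),
]
label = knn_classify(train, ["sea", "sand", "sun"], k=3)
```

Because MBR keeps the raw examples around instead of fitting a compact model, it copes well with sparse, noisy text features such as user-supplied photo tags.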

Relevance:

20.00%

Publisher:

Abstract:

Crisis-affected communities and global organizations for international aid are becoming increasingly digital as a consequence of the popularity of geotechnology. The humanitarian sector has changed in profound ways by adopting new technical approaches to obtain information from areas with difficult geographical or political access. Since 2011, Turkey has been hosting a growing number of Syrian refugees along its southeastern region. The Turkish policy of hosting them in camps, and the obstacles local authorities place in the way of international aid groups' information-gathering expeditions, led such organizations to investigate and adopt another approach to obtaining the information they need: they intensified their use of remote sensing. However, most studies have used very-high-resolution satellite imagery (VHRSI). The study area is extensive and the temporal resolution of VHRSI is low, so relying on these sensors alone is infeasible for the whole area. This research investigates the potential of mid-resolution imagery (here, only Landsat) for obtaining information from a region in crisis (here, southeastern Turkey) through a new web-based platform called Google Earth Engine (GEE). It is also intended to verify GEE's current reliability, since its Application Programming Interface (API) is still in beta. The findings show that the basic functions are trustworthy. Results indicate that Landsat can clearly recognize spectral change only for the first settlement; the ongoing modifications vary from case to case. Overall, Landsat showed strong limitations, but it deserves further investigation and may be used, with restrictions, in support of VHRSI.
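A typical mid-resolution change-detection rule of the kind Landsat supports is a shift in a spectral index such as NDVI between two acquisition dates. The sketch below is generic, uses invented reflectance values, and does not call the Google Earth Engine API:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def changed_pixels(red_t0, nir_t0, red_t1, nir_t1, threshold=0.2):
    """Flag pixels whose NDVI shifted by more than a threshold
    between two dates -- a crude per-pixel change-detection rule."""
    flags = []
    for r0, n0, r1, n1 in zip(red_t0, nir_t0, red_t1, nir_t1):
        flags.append(abs(ndvi(n1, r1) - ndvi(n0, r0)) > threshold)
    return flags

# Toy 4-pixel scene: vegetation in pixel 0 was cleared (NDVI drop)
flags = changed_pixels(
    red_t0=[0.10, 0.30, 0.25, 0.08],
    nir_t0=[0.50, 0.35, 0.30, 0.45],
    red_t1=[0.30, 0.30, 0.26, 0.09],
    nir_t1=[0.32, 0.34, 0.31, 0.44],
)
```

At Landsat's 30 m pixel size, a refugee camp changes the signal only when it covers many pixels, which is consistent with the finding that only the first, largest settlement was clearly detectable.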

Relevance:

20.00%

Publisher:

Abstract:

The reduction of greenhouse gas emissions is one of the big global challenges for the coming decades, owing to their severe impact on the atmosphere, which leads to changes in the climate and other environmental factors. One of the main sources of greenhouse gases is energy consumption; therefore a number of initiatives and calls for awareness and sustainability in energy use have been issued among different types of institutions and organizations. In 2007 the European Council adopted energy and climate-change objectives targeting a 20% improvement by 2020, and all European countries are required to use energy more efficiently. Several steps can be taken toward energy reduction: understanding building behaviour over time, revealing the factors that influence consumption, applying the right measures for reduction and sustainability, visualizing the hidden connection between the impact of our daily habits and the natural world, and promoting a more sustainable life. Researchers have suggested that feedback visualization can effectively encourage conservation, with energy reduction rates of 18%. Researchers have also contributed to identifying a set of factors that are very likely to influence consumption, such as occupancy level, occupant behaviour, environmental conditions, the building's thermal envelope and climate zone. Nowadays, the amount of energy consumed on university campuses is huge, and great effort is needed to meet the reduction requested by the European Council, as well as to cut costs. Thus, the present study was performed on university buildings as a use case to: a) investigate the most dynamic factors influencing energy consumption on campus; b) implement prediction models for electricity consumption using different techniques, such as traditional regression and alternative machine learning techniques; and c) assist energy management by providing real-time energy feedback and visualization on campus for more awareness and better decision making. The methodology is applied to the use case of University Jaume I (UJI), located in Castellon, Spain.
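The traditional regression approach, in its simplest single-predictor form, is ordinary least squares; the data below are invented purely for illustration:

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x: the closed-form
    estimate of intercept a and slope b from paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

# Hypothetical daily records: cooling-degree value vs kWh consumed
degrees = [2.0, 4.0, 6.0, 8.0]
kwh = [210.0, 230.0, 250.0, 270.0]
a, b = fit_line(degrees, kwh)
predicted = a + b * 10.0   # forecast for a hotter day
```

A real campus model would use several predictors (occupancy, weather, calendar) and could swap this closed form for a machine learning regressor, but the fit-then-predict workflow is the same.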

Relevance:

20.00%

Publisher:

Abstract:

In the last few years we have observed an exponential increase in information systems, and parking information is one more example. Obtaining reliable and up-to-date information on parking-slot availability is very important to the goal of traffic reduction, and parking-slot prediction is a new topic that has already started to be applied: San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking-slot prediction and their integration into a web application, where all kinds of users can see the current parking status as well as future status according to the model's predictions. The source of the data is ancillary to this work, but it still needs to be understood in order to understand parking behaviour. There are many modelling techniques used for this purpose, such as time-series analysis, decision trees, neural networks and clustering. In this work, the author evaluates the techniques best suited to the task, analyzes the results, and points out the advantages and disadvantages of each. The model learns the periodic and seasonal patterns of parking-status behaviour, and with this knowledge it can predict future status values for a given date. The data come from Smart Park Ontinyent and consist of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed before the models could be implemented. The first test used a boosting ensemble classifier over a set of decision trees, created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work reports measurement errors that indicate how reliable the predictions are.
The second test used the TBATS seasonal exponential smoothing model. Finally, a model combining the previous two was tried, to see the result of the combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies for the three models respectively; for a car park of 47 spaces, this corresponds to roughly a 10% average error in parking-slot prediction. The results could be even better with longer data series available. In order to make this kind of information visible and reachable by anyone with an internet-connected device, a web application was built. Besides displaying the data, the application also offers different functions to make the search for parking easier. The new functions, apart from parking prediction, are:
- Park distances from the user's location: all distances from the user's current location to the different car parks in the city.
- Geocoding: matching a textual description or an address to a concrete location.
- Geolocation: positioning the user.
- Parking list panel: neither a service nor a function, just a better visualization and better handling of the information.
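The exponential smoothing family used in the second test can be illustrated by its simplest member, simple exponential smoothing (TBATS itself is far richer, adding trend and multiple seasonalities); the occupancy series below is invented:

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: each new level is a weighted
    blend of the latest observation and the previous level; the
    one-step-ahead forecast is the final level."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

def mean_absolute_error(actual, predicted):
    """Average absolute deviation -- the kind of error measure
    behind the reported 6.2 / 6.6 / 5.4 vacancy averages."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

occupancy = [30, 32, 31, 33, 34, 33]   # occupied spaces per interval
forecast = ses_forecast(occupancy, alpha=0.5)
```

Evaluating forecasts with a held-out tail of the series and `mean_absolute_error` mirrors how the three models in the thesis were compared.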