932 results for Information dispersal algorithm


Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted to obtain the degree of Doctor in Environmental Engineering

Relevance:

20.00%

Publisher:

Abstract:

The processes of mobilizing land for infrastructure in the public and private domains follow their own legal frameworks and are systematically confronted with the poor national situation regarding cadastral identification and regularization, which leads to major inefficiencies, sometimes with a very negative impact on overall effectiveness. This project report describes the Ferbritas Cadastre Information System (FBSIC) project and tools, which, in conjunction with other applications, allow managing the entire life-cycle of land acquisition and cadastre, including support for field activities with the integration of information collected in the field, the development of multi-criteria analysis information, the monitoring of all information in the exploration stage, and the automated generation of outputs. The benefits are evident at the level of operational efficiency, including tools that enable process integration and standardization of procedures, facilitate analysis and quality control, and maximize performance in the acquisition, maintenance and management of cadastre and expropriation information (expropriation projects). The implemented system therefore achieves levels of robustness, comprehensiveness, openness, scalability and reliability suitable for a structural platform. The resultant solution, FBSIC, is a fit-for-purpose cadastre information system rooted in the field of railway infrastructure. The integrating nature of FBSIC allows it to meet present needs and scale to future services; to collect, maintain, manage and share all information on one common platform and transform it into knowledge; to interface with other platforms; and to increase the accuracy and productivity of business processes related to land property management.

Relevance:

20.00%

Publisher:

Abstract:

The goal of this thesis is the study of a tool that can help analysts find sequential patterns, with a focus on financial markets. A study will be made of how new and relevant knowledge can be mined from real-life information, potentially giving investors, market analysts and economists a new basis for informed decisions. The Ramex Forum algorithm will be used as the basis for the tool, due to its ability to find sequential patterns in financial data. To adapt it further to the needs of the thesis, relevant improvements to the algorithm will be studied. Another important aspect of this algorithm is the way it displays the patterns found: even with good results, it is difficult to find relevant patterns among all the studied samples without a proper result visualization component. As such, different combinations of parameterizations and ways to visualize data will be evaluated, and their influence on the analysis of those patterns will be discussed. In order to properly evaluate the utility of this tool, case studies will be performed as a final test. Real information will be used to produce results, and those will be evaluated with regard to their accuracy, interest and relevance.
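
A minimal sketch of the general idea behind transition-based sequential pattern mining of this kind, assuming market events have already been discretised (e.g. daily up/down movements per asset); the encoding and the strongest-predecessor rule below are illustrative assumptions, not the actual Ramex Forum procedure.

```python
# Toy transition-graph miner in the spirit of sequential-pattern algorithms
# such as Ramex Forum. Illustrative only, not the published algorithm.
from collections import defaultdict

def build_transition_weights(sequences):
    """Accumulate a weight for every pair of consecutive events."""
    weights = defaultdict(float)
    for seq in sequences:
        for src, dst in zip(seq, seq[1:]):
            weights[(src, dst)] += 1.0
    return weights

def strongest_predecessor_forest(weights):
    """For every event keep only its heaviest incoming transition,
    yielding a forest of dominant sequential patterns."""
    best = {}
    for (src, dst), w in weights.items():
        if dst not in best or w > best[dst][1]:
            best[dst] = (src, w)
    return best

if __name__ == "__main__":
    # Each sequence is a list of discretised market events (hypothetical names).
    sequences = [
        ["AAPL_up", "MSFT_up", "GOOG_up"],
        ["AAPL_up", "MSFT_up", "AMZN_down"],
        ["MSFT_up", "GOOG_up"],
    ]
    weights = build_transition_weights(sequences)
    for dst, (src, w) in strongest_predecessor_forest(weights).items():
        print(f"{src} -> {dst} (weight {w})")
```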

Relevance:

20.00%

Publisher:

Abstract:

Cloud computing has been one of the most important topics in Information Technology; it aims to assure scalable and reliable on-demand services over the Internet. Expanding the application scope of cloud services requires cooperation between clouds from different providers that have heterogeneous functionalities. This collaboration between different cloud vendors can provide better Quality of Service (QoS) at a lower price. However, current cloud systems have been developed without concern for seamless cloud interconnection and do not support the inter-cloud interoperability needed for collaboration between cloud service providers. Hence, this PhD work addresses the interoperability issue between cloud providers as a challenging research objective. This thesis proposes a new framework that supports inter-cloud interoperability in a heterogeneous cloud computing resource environment, with the goal of dispatching the workload to the most effective clouds available at runtime. Analysing different methodologies that have been applied to resolve various problem scenarios related to interoperability led us to adopt Model Driven Architecture (MDA) and Service Oriented Architecture (SOA) methods as appropriate approaches for our inter-cloud framework. Moreover, since distributing operations in a cloud-based environment is a nondeterministic polynomial time (NP-complete) problem, a Genetic Algorithm (GA) based job scheduler is proposed as part of the interoperability framework, offering workload migration with the best performance at the least cost. A new Agent Based Simulation (ABS) approach is proposed to model the inter-cloud environment with three types of agents: Cloud Subscriber agents, Cloud Provider agents, and Job agents. The ABS model is used to evaluate the proposed framework.
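
A minimal sketch of a GA-based workload dispatcher of the kind described above, under an assumed job-to-cloud cost matrix; the fitness function, operators and parameters are illustrative, not the thesis' actual scheduler.

```python
import random

COST = [  # hypothetical cost of running job i on cloud j
    [4.0, 2.5, 3.0],
    [1.5, 3.5, 2.0],
    [3.0, 1.0, 2.5],
    [2.0, 2.0, 4.0],
]
N_JOBS, N_CLOUDS = len(COST), len(COST[0])

def fitness(assign):              # lower total cost is better
    return sum(COST[j][c] for j, c in enumerate(assign))

def tournament(pop, k=3):
    return min(random.sample(pop, k), key=fitness)

def crossover(a, b):
    cut = random.randrange(1, N_JOBS)
    return a[:cut] + b[cut:]

def mutate(assign, rate=0.1):
    return [random.randrange(N_CLOUDS) if random.random() < rate else c
            for c in assign]

def evolve(generations=100, pop_size=20):
    pop = [[random.randrange(N_CLOUDS) for _ in range(N_JOBS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return min(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("assignment:", best, "cost:", fitness(best))
```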

Relevance:

20.00%

Publisher:

Abstract:

Transport is an essential sector in modern societies. It connects economic sectors and industries. Besides its contribution to economic development and social interconnection, it also causes adverse impacts on the environment and results in health hazards. Transport is a major source of ground-level air pollution, especially in urban areas, and therefore contributes to health problems such as cardiovascular and respiratory diseases, cancer, and physical injuries. This thesis presents the results of a health risk assessment that quantifies the mortality and the diseases associated with particulate matter pollution resulting from urban road transport in Hai Phong City, Vietnam. The focus is on the integration of modelling and GIS approaches in the exposure analysis to increase the accuracy of the assessment and to produce timely and consistent assessment results. Modelling was used to estimate traffic conditions and concentrations of particulate matter based on geo-referenced data. A simplified health risk assessment was also done for Ha Noi based on monitoring data, allowing a comparison of the results between the two cases. The results of the case studies show that a health risk assessment based on modelling data can provide much more detailed results and allows assessing the health impacts of different mobility development options at the micro level. The use of modelling and GIS as a common platform for the integration of different assessments (environmental, health, socio-economic, etc.) provides various strengths, especially in capitalising on the available data stored in different units and forms, and allows handling large amounts of data. From a decision-making point of view, the use of models and GIS in a health risk assessment can reduce the processing/waiting time while providing a view at different scales, from the micro scale (sections of a city) to the macro scale. It also helps visualise the links between air quality and health outcomes, which is useful when discussing different development options. However, a number of improvements can be made to further advance the integration. An improved data integration programme will facilitate the application of integrated models in policy-making. Data from mobility surveys and environmental monitoring and measuring must be standardised and legalised. Various traffic models, together with emission and dispersion models, should be tested, and more attention should be given to their uncertainty and sensitivity.
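
A minimal sketch of the concentration-response step that such health risk assessments typically rely on, using a log-linear relative risk function; the coefficient, baseline mortality and population figures below are illustrative assumptions, not Hai Phong data.

```python
import math

def relative_risk(concentration, baseline_conc, beta):
    """Log-linear concentration-response function: RR = exp(beta * dC)."""
    return math.exp(beta * max(concentration - baseline_conc, 0.0))

def attributable_deaths(concentration, baseline_conc, beta,
                        baseline_mortality_rate, population):
    """Attributable fraction AF = (RR - 1) / RR, scaled by baseline deaths."""
    rr = relative_risk(concentration, baseline_conc, beta)
    attributable_fraction = (rr - 1.0) / rr
    return attributable_fraction * baseline_mortality_rate * population

if __name__ == "__main__":
    # Hypothetical inputs: annual mean PM10 of 80 ug/m3 against a 20 ug/m3
    # reference level, for a district of 100,000 inhabitants.
    deaths = attributable_deaths(concentration=80.0, baseline_conc=20.0,
                                 beta=0.0008, baseline_mortality_rate=0.006,
                                 population=100_000)
    print(f"Estimated attributable deaths: {deaths:.1f}")
```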

Relevance:

20.00%

Publisher:

Abstract:

The particular characteristics and affordances of technologies play a significant role in human experience by defining the realm of possibilities available to individuals and societies. Some technological configurations, such as the Internet, facilitate peer-to-peer communication and participatory behaviors. Others, like television broadcasting, tend to encourage centralization of creative processes and unidirectional communication. In other instances still, the affordances of technologies can be further constrained by social practices. That is the case, for example, of radio which, although technically allowing peer-to-peer communication, has effectively been converted into a broadcast medium through the legislation of the airwaves. How technologies acquire particular properties, meanings and uses, and who is involved in those decisions, are the broader questions explored here. Although a long line of thought maintains that technologies evolve according to the logic of scientific rationality, recent studies have demonstrated that technologies are, in fact, primarily shaped by social forces in specific historical contexts. In this view, adopted here, there is no one best way to design a technological artifact or system; the selection between alternative designs, which determine the affordances of each technology, is made by social actors according to their particular values, assumptions and goals. Thus, the arrangement of technical elements in any technological artifact is configured to conform to the views and interests of those involved in its development. Understanding how technologies assume particular shapes, who is involved in these decisions and how, in turn, they foster particular behaviors and modes of organization but not others, requires understanding the contexts in which they are developed. It is argued here that, throughout the last century, two distinct approaches to the development and dissemination of technologies have coexisted. In each of these models, based on fundamentally different ethoi, technologies are developed through different processes and by different participants, and therefore tend to assume different shapes and offer different possibilities. In the first of these approaches, the dominant model in Western societies, technologies are typically developed by firms, manufactured in large factories, and subsequently disseminated to the rest of the population for consumption. In this centralized model, the role of users is limited to selecting from the alternatives presented by professional producers. Thus, according to this approach, the technologies that are now so deeply woven into human experience are primarily shaped by a relatively small number of producers. In recent years, however, three interconnected communities of interest, the maker, hackerspace, and open source hardware communities, have increasingly challenged this dominant model by enacting an alternative approach in which technologies are both individually transformed and collectively shaped. Through an in-depth analysis of these phenomena, their practices and ethos, it is argued here that the distributed approach practiced by these communities offers a practical path towards a democratization of the technosphere by: 1) demystifying technologies, 2) providing the public with the tools and knowledge necessary to understand and shape technologies, and 3) encouraging citizen participation in the development of technologies.

Relevance:

20.00%

Publisher:

Abstract:

In the recent past, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to input data into the web, and these data are increasingly geolocated. Large amounts of data formed what is now called "Big Data", and scientists still do not entirely know how to deal with it. Different Data Mining tools are used to try to extract useful information from this Big Data. In our study, we also deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, which means that they have an exact location on the Earth's surface according to a certain spatial reference system. Using Data Mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information can be accurate. Finally, we compared different Data Mining methods in order to determine which one performs best for this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
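
A minimal sketch of tag-based land use classification with a memory-based (k-nearest-neighbour) learner, assuming scikit-learn is available; the tags and labels below are toy examples, not the Panoramio dataset used in the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical photo tags and their land use labels.
train_tags = [
    "beach sand sea sunset",
    "office towers downtown traffic",
    "corn field tractor harvest",
    "apartment blocks playground street",
]
train_labels = ["recreation", "commercial", "agriculture", "residential"]

# TF-IDF turns the tag strings into vectors; the 1-NN classifier is the
# memory-based reasoning step (classify by the most similar stored example).
model = make_pipeline(TfidfVectorizer(), KNeighborsClassifier(n_neighbors=1))
model.fit(train_tags, train_labels)

print(model.predict(["harvest wheat field"]))   # expected: agriculture
```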

Relevance:

20.00%

Publisher:

Abstract:

Nowadays a large percentage of the population, especially young users, are smartphone users, and there is a lot of information to be provided within applications. Information provision should be done carefully and accurately; otherwise information overload will occur and the user will discard the app providing it. Mobile devices are becoming smarter and provide many ways to filter information. However, there are also alternatives to improve information provision from the side of the application. Some examples are taking into account the local time, considering the battery level before performing an action, and checking the user's location to send personalized information attached to that location. SmartCampus and SmartCities are becoming a reality and have more and more data integrated every day. With all this amount of data, it is crucial to decide when and where the user is going to receive a notification with new information. Geofencing is a technique that allows applications to deliver information in a more useful way, at the right time and in the right place. It consists of geofences, physical regions delimited by boundaries, and devices that are eligible to receive the information assigned to each geofence. When a device crosses one of these geofences, an alert with the associated information is pushed to it.
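
A minimal sketch of circular geofence checking with the haversine distance, as one simple way to decide whether a device is inside a geofence; the geofence coordinates, radii and messages are hypothetical.

```python
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

GEOFENCES = [  # (name, centre_lat, centre_lon, radius_m, message)
    ("Library", 38.7369, -9.1427, 150, "The library closes at 20:00 today."),
    ("Canteen", 38.7361, -9.1380, 100, "Lunch menu: grilled fish."),
]

def notifications_for(lat, lon):
    """Return the messages of every geofence the device is currently inside."""
    return [msg for name, clat, clon, r, msg in GEOFENCES
            if haversine_m(lat, lon, clat, clon) <= r]

print(notifications_for(38.7370, -9.1429))
```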

Relevance:

20.00%

Publisher:

Abstract:

In the last few years we have observed an exponential increase in information systems, and parking information is one more example. Obtaining reliable and up-to-date information on parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already started to be applied; San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration in a web application, where all kinds of users will be able to know the current parking status and also future status according to the parking model predictions. The source of the data is ancillary to this work, but it still needs to be understood in order to understand parking behaviour. There are many modelling techniques used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work, the author presents the techniques best suited to this problem, analyses the results and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking status behaviour and, with this knowledge, can predict future status values given a date. The data used comes from Smart Park Ontinyent and consists of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed for the model implementations. The first test used a boosting ensemble classifier, employed over a set of decision trees created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object (see the sketch after this abstract). In addition to the predictions, this work reports measurement errors that indicate how reliable the outcome predictions are. The second test used the TBATS seasonal exponential smoothing model. Finally, as a last test, a model combining the previous two was tried, to see the result of this combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies for the three models respectively. For a car park with 47 places this means roughly a 10% average error in parking slot predictions. This result could be even better with a longer data series available. In order to make this kind of information visible and reachable by everyone with an internet-connected device, a web application was built. Besides displaying the data, this application also offers different functions to improve the task of searching for parking. The new functions, apart from parking prediction, were:
- Distances to car parks: the distances from the user's current location to the different car parks in the city.
- Geocoding: the service for matching a literal description or an address to a concrete location.
- Geolocation: the service for positioning the user.
- Parking list panel: neither a service nor a function, just a better visualization and handling of the information.
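
A minimal sketch of the tree-ensemble test mentioned above, assuming scikit-learn; C5.0 itself is a separate implementation, so a gradient-boosted tree ensemble stands in for it here, and the synthetic occupancy data is illustrative rather than the Smart Park Ontinyent dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
hours = rng.integers(0, 24, size=500)
weekdays = rng.integers(0, 7, size=500)
# Synthetic "free slots out of 47" with a daily pattern plus noise.
vacancies = np.clip(25 + 15 * np.cos((hours - 4) / 24 * 2 * np.pi)
                    - 3 * (weekdays < 5) + rng.normal(0, 3, 500), 0, 47)

# Train a boosted tree ensemble on calendar features, then measure the
# average error in predicted vacancies on held-out samples.
X = np.column_stack([hours, weekdays])
model = GradientBoostingRegressor().fit(X[:400], vacancies[:400])
pred = model.predict(X[400:])
print("MAE (vacancies):", round(mean_absolute_error(vacancies[400:], pred), 2))
```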

Relevance:

20.00%

Publisher:

Abstract:

Ion Mobility Spectrometry coupled with Multi Capillary Columns (MCC-IMS) is a fast analytical technique working at atmospheric pressure with high sensitivity and selectivity, making it suitable for the analysis of complex biological matrices. MCC-IMS analysis generates its information through a 3D spectrum whose peaks correspond to each of the substances detected, providing quantitative and qualitative information. Sometimes the peaks of different substances overlap, making the quantification of the substances present in the biological matrices difficult. In the present work we use the peaks of isoprene and acetone as a model for this problem: these two volatile organic compounds (VOCs), when detected by MCC-IMS, produce two overlapping peaks. This work proposes an algorithm to identify and quantify these two peaks. The algorithm uses image processing techniques to treat the spectra and to detect the positions of the peaks, and then fits the data to a custom model in order to separate them. Once the peaks are separated, it calculates the contribution of each peak to the data.
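
A minimal sketch of the peak separation step, fitting a sum of two 2D Gaussians with scipy and computing each peak's contribution; the synthetic grid below stands in for a real MCC-IMS spectrum, and the custom model used in the thesis may differ.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy):
    """Single 2D Gaussian peak."""
    x, y = coords
    return amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                          + (y - y0) ** 2 / (2 * sy ** 2)))

def two_peaks(coords, a1, x1, y1, sx1, sy1, a2, x2, y2, sx2, sy2):
    return (gauss2d(coords, a1, x1, y1, sx1, sy1)
            + gauss2d(coords, a2, x2, y2, sx2, sy2))

# Synthetic overlapping peaks on a small grid, plus a little noise.
x, y = np.meshgrid(np.linspace(0, 10, 80), np.linspace(0, 10, 80))
truth = two_peaks((x, y), 1.0, 4.0, 5.0, 1.0, 1.0, 0.6, 6.0, 5.5, 1.2, 0.9)
data = truth + np.random.default_rng(1).normal(0, 0.01, truth.shape)

coords = (x.ravel(), y.ravel())
p0 = [1, 4, 5, 1, 1, 0.5, 6, 6, 1, 1]                 # initial guesses
popt, _ = curve_fit(two_peaks, coords, data.ravel(), p0=p0)

# Contribution of each separated peak to the total signal.
peak1 = gauss2d(coords, *popt[:5]).sum()
peak2 = gauss2d(coords, *popt[5:]).sum()
print("contribution of peak 1:", round(peak1 / (peak1 + peak2), 2))
```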

Relevance:

20.00%

Publisher:

Abstract:

This work project focuses on developing new approaches to enhance Portuguese exports towards a defined German industry sector within the information technology and electronics fields. First and foremost, information was collected and a set of interviews with experts and top managers was performed in order to assess the demand of the German market while identifying compatible Portuguese supply capabilities. Among the main findings, Industry 4.0 presents itself as a valuable opportunity in the German market for Portuguese medium-sized companies with embedded systems expertise serving machinery and equipment companies. To achieve the purpose of the work project, an embedded systems platform targeting machinery and equipment companies was suggested, and several recommendations on how to implement it were developed. An alternative approach for this platform within the German market was also considered, namely the eHealth sector, with the purpose of enhancing current healthcare service provision.

Relevance:

20.00%

Publisher:

Abstract:

The aim of this work project is to analyze the current algorithm used by EDP to estimate their clients' electrical energy consumption, create a new algorithm and compare the advantages and disadvantages of both. This new algorithm differs from the current one in that it incorporates some effects of temperature variations. The results of the comparison show that the new algorithm with temperature variables performed better than the same algorithm without temperature variables, although there is still potential for further improvements to the current algorithm if the prediction model is estimated using a sample of daily data, which is the case for the current EDP algorithm.
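
A minimal sketch of comparing a consumption model with and without temperature terms using ordinary least squares; the synthetic daily data and the functional form are illustrative assumptions, not EDP's client data or their actual estimation algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
days = np.arange(365)
temperature = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, 365)
# Synthetic consumption: a base load plus extra heating demand on cold days.
consumption = 10 + 0.8 * np.maximum(18 - temperature, 0) + rng.normal(0, 1, 365)

def ols_mae(X, y):
    """Fit y ~ X by least squares and return the in-sample mean absolute error."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return np.mean(np.abs(y - X1 @ beta))

without_temp = ols_mae(np.column_stack([days]), consumption)
with_temp = ols_mae(np.column_stack([days, temperature,
                                     np.maximum(18 - temperature, 0)]),
                    consumption)
print(f"MAE without temperature: {without_temp:.2f}")
print(f"MAE with temperature:    {with_temp:.2f}")
```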

Relevance:

20.00%

Publisher:

Abstract:

INTRODUCTION: We present a review of injuries in humans caused by aquatic animals in Brazil using the Information System for Notifiable Diseases [Sistema de Informação de Agravos de Notificação (SINAN)] database. METHODS: A descriptive and retrospective epidemiological study was conducted from 2007 to 2013. RESULTS: A total of 4,118 accidents were recorded. Of these accidents, 88.7% (3,651) were caused by venomous species, and 11.3% (467) were caused by poisonous, traumatic or unidentified aquatic animals. Most of the events were injuries by stingrays (69%) and jellyfish (13.1%). The North region was responsible for the majority of reports (66.2%), with a significant emphasis on accidents caused by freshwater stingrays (92.2%, or 2,317 cases). In the South region, the region with the second highest number of records (15.7%), jellyfish caused the majority of accidents (83.7%, or 452 cases). The Northeast region, with 12.5% of the records, was notable because almost all accidents were caused by toadfish (95.6%, or 174 cases). CONCLUSIONS: Although a comparison of different databases has not been performed, the data presented in this study, compared to local and regional surveys, raise the hypothesis of underreporting of accidents. As SINAN is the official system for the notification of accidents caused by venomous animals in Brazil, it is imperative that its operation be reviewed and improved, given that effective measures to prevent accidents caused by venomous animals depend on a reliable database and the ability to accurately report the true conditions.

Relevance:

20.00%

Publisher:

Abstract:

Combinatorial optimization problems occur in a wide variety of contexts and are generally NP-hard. At a corporate level, solving these problems is of great importance since they contribute to the optimization of operational costs. In this thesis we propose to solve the Public Transport Bus Assignment problem considering a heterogeneous fleet and line exchanges, a variant of the Multi-Depot Vehicle Scheduling Problem in which additional constraints are enforced to model a real-life scenario. The number of constraints involved and the large number of variables make it impracticable to solve to optimality using complete search techniques. Therefore, we explore metaheuristics, which sacrifice optimality to produce solutions in feasible time. More concretely, we focus on the development of algorithms based on a sophisticated metaheuristic, Ant Colony Optimization (ACO), which relies on a stochastic learning mechanism. For complex problems with a considerable number of constraints, sophisticated metaheuristics may fail to produce quality solutions in a reasonable amount of time. Thus, we developed parallel shared-memory (SM) synchronous ACO algorithms; however, synchronism gives rise to the straggler problem. Therefore, we proposed three SM asynchronous algorithms that break the original algorithm semantics and differ in the degree of concurrency allowed while manipulating the learned information. Our results show that our sequential ACO algorithms produced better solutions than a Restarts metaheuristic, that the ACO algorithms were able to learn, and that better solutions were achieved by increasing the amount of cooperation (number of search agents). Regarding the parallel algorithms, our asynchronous ACO algorithms outperformed the synchronous ones in terms of speedup and solution quality, achieving speedups of 17.6x. The cooperation scheme imposed by asynchronism also achieved a better learning rate than the original one.
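
A minimal sketch of a basic sequential ACO loop for an assignment-type problem; the cost matrix, parameters and pheromone update below are illustrative and far simpler than the thesis' bus-assignment formulation or its parallel variants.

```python
import random

COST = [[4, 2, 3], [1, 3, 2], [3, 1, 2], [2, 2, 4]]   # hypothetical trip x bus costs
N_TRIPS, N_BUSES = len(COST), len(COST[0])
ALPHA, BETA, RHO, Q = 1.0, 2.0, 0.1, 10.0             # pheromone/heuristic weights

def construct(pheromone):
    """Each ant picks a bus for every trip, biased by pheromone and 1/cost."""
    solution = []
    for t in range(N_TRIPS):
        weights = [pheromone[t][b] ** ALPHA * (1.0 / COST[t][b]) ** BETA
                   for b in range(N_BUSES)]
        solution.append(random.choices(range(N_BUSES), weights=weights)[0])
    return solution

def cost(solution):
    return sum(COST[t][b] for t, b in enumerate(solution))

def aco(iterations=200, n_ants=10):
    pheromone = [[1.0] * N_BUSES for _ in range(N_TRIPS)]
    best = None
    for _ in range(iterations):
        ants = [construct(pheromone) for _ in range(n_ants)]
        best = min(ants + ([best] if best else []), key=cost)
        for row in pheromone:                       # evaporation
            row[:] = [p * (1 - RHO) for p in row]
        for t, b in enumerate(best):                # deposit on the best solution
            pheromone[t][b] += Q / cost(best)
    return best

if __name__ == "__main__":
    best = aco()
    print("assignment:", best, "cost:", cost(best))
```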