979 results for Information Acquisition


Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

20.00%

Publisher:

Abstract:

Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.

Relevance:

20.00%

Publisher:

Abstract:

A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics.

Relevance:

20.00%

Publisher:

Abstract:

Methicillin-resistant Staphylococcus aureus (MRSA), a human pathogen confined to hospitals (HA-MRSA) for over 30 years, has in the last two decades been emerging worldwide as a leading cause of severe infections in healthy individuals in the community (CA-MRSA). Despite its clinical significance, at the beginning of our studies no information existed on the prevalence and population structure of CA-MRSA in Portugal. Moreover, it remained to be clarified how CA-MRSA emerged in our country. In particular, it was not known whether CA-MRSA emerged locally by acquisition of the staphylococcal cassette chromosome mec (SCCmec) by methicillin-susceptible S. aureus (MSSA) established in the community, whether it was imported from abroad, or whether it had escaped from the hospital. (...)

Relevance:

20.00%

Publisher:

Abstract:

Transport is an essential sector in modern societies: it connects economic sectors and industries. Next to its contribution to economic development and social interconnection, it also causes adverse impacts on the environment and results in health hazards. Transport is a major source of ground-level air pollution, especially in urban areas, and therefore contributes to health problems such as cardiovascular and respiratory diseases, cancer, and physical injuries. This thesis presents the results of a health risk assessment that quantifies the mortality and the diseases associated with particulate matter pollution resulting from urban road transport in Hai Phong City, Vietnam. The focus is on the integration of modelling and GIS approaches in the exposure analysis, to increase the accuracy of the assessment and to produce timely and consistent results. Modelling was used to estimate traffic conditions and concentrations of particulate matter based on geo-referenced data. A simplified health risk assessment was also carried out for Ha Noi based on monitoring data, allowing a comparison of the results between the two cases. The case studies show that a health risk assessment based on modelled data can provide much more detailed results and allows assessing the health impacts of different mobility development options at the micro level. The use of modelling and GIS as a common platform for the integration of different assessments (environmental, health, socio-economic, etc.) provides various strengths, especially in capitalising on data stored in different units and forms, and allows handling large amounts of data. From a decision-making point of view, the use of models and GIS in a health risk assessment can reduce the processing and waiting time while providing a view at different scales, from the micro scale (sections of a city) to the macro scale. It also helps visualise the links between air quality and health outcomes, which is useful when discussing different development options. However, a number of improvements can be made to further advance the integration. An improved programme for data integration will facilitate the application of integrated models in policy-making. Data from mobility surveys, environmental monitoring and measurement must be standardised and legalised. Various traffic models, together with emission and dispersion models, should be tested, and more attention should be given to their uncertainty and sensitivity.
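The mortality attributable to an excess pollutant concentration is commonly estimated with a log-linear concentration-response function. The thesis does not publish its coefficients, so the sketch below uses purely hypothetical values (`beta`, the excess concentration and the baseline death count are all illustrative):

```python
import math

def attributable_deaths(beta, delta_c, baseline_deaths):
    """Attributable mortality from a log-linear concentration-response
    function: RR = exp(beta * delta_c), AF = (RR - 1) / RR."""
    rr = math.exp(beta * delta_c)   # relative risk at the excess concentration
    af = (rr - 1.0) / rr            # population attributable fraction
    return af * baseline_deaths

# All numbers below are illustrative, not the thesis's values:
# beta = 0.0008 per ug/m3, a 30 ug/m3 excess of PM10 over the reference
# level, and 5,000 baseline deaths in the exposed population.
print(attributable_deaths(0.0008, 30.0, 5000.0))
```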

Relevance:

20.00%

Publisher:

Abstract:

The particular characteristics and affordances of technologies play a significant role in human experience by defining the realm of possibilities available to individuals and societies. Some technological configurations, such as the Internet, facilitate peer-to-peer communication and participatory behaviors. Others, like television broadcasting, tend to encourage centralization of creative processes and unidirectional communication. In other instances still, the affordances of technologies can be further constrained by social practices. That is the case, for example, of radio which, although technically allowing peer-to-peer communication, has effectively been converted into a broadcast medium through the legislation of the airwaves. How technologies acquire particular properties, meanings and uses, and who is involved in those decisions are the broader questions explored here. Although a long line of thought maintains that technologies evolve according to the logic of scientific rationality, recent studies demonstrated that technologies are, in fact, primarily shaped by social forces in specific historical contexts. In this view, adopted here, there is no one best way to design a technological artifact or system; the selection between alternative designs—which determine the affordances of each technology—is made by social actors according to their particular values, assumptions and goals. Thus, the arrangement of technical elements in any technological artifact is configured to conform to the views and interests of those involved in its development. Understanding how technologies assume particular shapes, who is involved in these decisions and how, in turn, they propitiate particular behaviors and modes of organization but not others, requires understanding the contexts in which they are developed. It is argued here that, throughout the last century, two distinct approaches to the development and dissemination of technologies have coexisted. 
In each of these models, based on fundamentally different ethoi, technologies are developed through different processes and by different participants, and therefore tend to assume different shapes and offer different possibilities. In the first approach, the dominant model in Western societies, technologies are typically developed by firms, manufactured in large factories, and subsequently disseminated to the rest of the population for consumption. In this centralized model, the role of users is limited to selecting from the alternatives presented by professional producers. Thus, according to this approach, the technologies that are now so deeply woven into human experience are primarily shaped by a relatively small number of producers. In recent years, however, three interconnected communities (the makers, hackerspaces, and open source hardware groups) have increasingly challenged this dominant model by enacting an alternative approach in which technologies are both individually transformed and collectively shaped. Through an in-depth analysis of these phenomena, their practices and ethos, it is argued here that the distributed approach practiced by these communities offers a practical path towards a democratization of the technosphere by: 1) demystifying technologies, 2) providing the public with the tools and knowledge necessary to understand and shape technologies, and 3) encouraging citizen participation in the development of technologies.

Relevance:

20.00%

Publisher:

Abstract:

In the recent past, hardly anyone could have predicted this course of GIS development. GIS is moving from the desktop to the cloud. Web 2.0 enabled people to put data on the web, and these data are increasingly geolocated. Such large volumes of data have formed what is called "Big Data", which science is still learning how to handle. Different data mining tools are used to try to extract useful information from this Big Data. In our study, we deal with one part of these data: User Generated Geographic Content (UGGC). The Panoramio initiative allows people to upload photos and describe them with tags. These photos are geolocated, which means that they have an exact location on the Earth's surface according to a certain spatial reference system. Using data mining tools, we try to answer whether it is possible to extract land use information from Panoramio photo tags, and to what extent this information is accurate. Finally, we compared different data mining methods in order to distinguish which one performs best on this kind of data, which is text. Our answers are quite encouraging: with more than 70% accuracy, we showed that extracting land use information is possible to some extent. We also found the Memory Based Reasoning (MBR) method to be the most suitable for this kind of data in all cases.
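The abstract does not include the implementation, but Memory Based Reasoning is essentially nearest-neighbour classification over stored examples. A minimal sketch of the idea applied to photo tags, using Jaccard similarity between tag sets and a similarity-weighted vote (the tags and land-use labels are invented):

```python
from collections import Counter

def jaccard(a, b):
    """Similarity between two tag sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def classify_mbr(tags, training, k=3):
    """Memory-based reasoning: label a photo's tag set by a
    similarity-weighted vote among the k most similar stored examples."""
    ranked = sorted(training, key=lambda ex: jaccard(tags, ex[0]), reverse=True)
    votes = Counter()
    for ex_tags, label in ranked[:k]:
        votes[label] += jaccard(tags, ex_tags)
    return votes.most_common(1)[0][0]

# Invented training data: (photo tags, land-use label) pairs.
train = [
    (["beach", "sea", "sand"], "coastal"),
    (["harbour", "boats", "sea"], "coastal"),
    (["street", "traffic", "buildings"], "urban"),
    (["skyline", "downtown", "night"], "urban"),
    (["forest", "trail", "trees"], "natural"),
    (["lake", "trees", "hiking"], "natural"),
]
print(classify_mbr(["sea", "sand", "sunset"], train))  # labelled "coastal"
```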

Relevance:

20.00%

Publisher:

Abstract:

Nowadays a large share of the population, especially young people, are smartphone users, and a great deal of information can be provided within applications. Information provision should be done carefully and should be accurate; otherwise an overload of information will be produced, and the user will discard the app that provides it. Mobile devices are becoming smarter and provide many ways to filter information. However, there are also alternatives to improve information provision on the side of the application. Some examples are taking the local time into account, considering the battery level before performing an action, and checking the user's location in order to send personalized information attached to that location. SmartCampus and SmartCities are becoming a reality and have more and more data integrated every day. With this amount of data it is crucial to decide when and where the user is going to receive a notification with new information. Geofencing is a technique that allows applications to deliver information in a more useful way, at the right time and in the right place. It consists of geofences, physical regions delimited by boundaries, and devices that are eligible to receive the information assigned to each geofence. When a device crosses one of these geofences, an alert with the information is pushed to the mobile device.
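A minimal sketch of the geofencing idea described above, assuming circular geofences defined by a centre, a radius and a notification message (the coordinates and messages below are invented, not from the thesis):

```python
import math

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def check_geofences(lat, lon, fences):
    """Return the messages of every circular geofence the device is inside."""
    return [msg for (flat, flon, radius_m, msg) in fences
            if haversine_m(lat, lon, flat, flon) <= radius_m]

# Invented campus geofences: (lat, lon, radius in metres, notification).
fences = [
    (38.7223, -9.1393, 200.0, "Welcome to the library"),
    (38.7300, -9.1500, 150.0, "Cafeteria menu of the day"),
]
print(check_geofences(38.7224, -9.1390, fences))
```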

Relevance:

20.00%

Publisher:

Abstract:

In the last few years we have observed an exponential increase in information systems, and parking information is one more example. Reliable and up-to-date information on parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already started to be applied; San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration in a web application, where all kinds of users will be able to see the current parking status as well as future status according to the model's predictions. The source of the data is ancillary to this work, but it still needs to be understood in order to understand parking behaviour. There are many modelling techniques used for this purpose, such as time series analysis, decision trees, neural networks and clustering. In this work, the author applies the most suitable of these techniques, analyses the results and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking status, and with this knowledge it can predict future status values for a given date. The data come from the Smart Park Ontinyent initiative and consist of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed for the model implementations. The first test used a boosting ensemble classifier, employed over a set of decision trees created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work provides error measurements that indicate how reliable the predictions are.
The second test used the TBATS seasonal exponential smoothing model. The last test tried a model that combines the previous two, to see the result of the combination. The results were quite good for all of them, with average errors of 6.2, 6.6 and 5.4 vacancies for the three models respectively; for a car park of 47 places this means roughly a 10% average error in parking slot predictions. This result could be even better with a longer data series. In order to make this kind of information visible and reachable by everyone with an internet-connected device, a web application was built. Besides displaying the data, this application also offers different functions to improve the task of searching for parking. The new functions, apart from parking prediction, were:
- Park distances from the user's location: provides the distances from the user's current location to the different car parks in the city.
- Geocoding: the service for matching a literal description or an address to a concrete location.
- Geolocation: the service for positioning the user.
- Parking list panel: not a service or a function, just a better visualization and handling of the information.
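The TBATS and boosting models themselves are beyond a short example, but the core idea of learning a periodic occupancy pattern and predicting from it can be sketched with simple seasonal means (the hourly history below is invented):

```python
from collections import defaultdict

def fit_seasonal_means(history, period=24):
    """Learn the periodic pattern of free parking slots: the mean number
    of vacancies at each phase of the cycle (here, hour of day)."""
    buckets = defaultdict(list)
    for t, vacancies in history:
        buckets[t % period].append(vacancies)
    return {phase: sum(v) / len(v) for phase, v in buckets.items()}

def predict(model, t, period=24):
    """Predict the vacancies at time t from the learned seasonal means."""
    return model[t % period]

# Invented hourly history: (hour index, free slots out of 47).
history = [(0, 40), (8, 5), (12, 10), (24, 42), (32, 7), (36, 12)]
model = fit_seasonal_means(history)
print(predict(model, 48))  # hour 48 has phase 0: the mean of 40 and 42
```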

Relevance:

20.00%

Publisher:

Abstract:

Since the invention of photography, humans have been using images to capture, store and analyse the events they are interested in. With the developments in this field, assisted by better computers, it is possible to use image processing technology as an accurate method of analysis and measurement. Image processing's principal qualities are flexibility, adaptability and the ability to easily and quickly process large amounts of information. Successful applications can be seen in several areas of human life, such as biomedicine, industry, surveillance, the military and mapping; indeed, several Nobel prizes are related to imaging. The accurate measurement of deformations, displacements, strain fields and surface defects is challenging in many material tests in Civil Engineering, because traditionally these measurements require complex and expensive equipment as well as time-consuming calibration. Image processing can be an inexpensive and effective tool for load displacement measurements: using an adequate image acquisition system and taking advantage of the computational power of modern computers, it is possible to measure very small displacements with high precision. Several commercial software packages are already on the market, but at high cost. In this work, block-matching algorithms are used in order to compare the results from image processing with the data obtained with physical transducers during laboratory load tests. To test the proposed solutions, several load tests were carried out in partnership with researchers from the Civil Engineering Department at Universidade Nova de Lisboa (UNL).
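A block-matching algorithm of the kind mentioned above slides a reference block over a search window and keeps the displacement that minimises a dissimilarity measure, here the sum of absolute differences. A minimal sketch on synthetic frames (not the software used in the thesis):

```python
def sad(a, b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(x - y) for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def block(img, top, left, size):
    """Extract a size x size block with its top-left corner at (top, left)."""
    return [row[left:left + size] for row in img[top:top + size]]

def match_block(ref, cur, top, left, size, search=2):
    """Find the displacement (dy, dx) of a reference block in the current
    frame by exhaustive search within +/- search pixels, minimising SAD."""
    template = block(ref, top, left, size)
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ty, tx = top + dy, left + dx
            if 0 <= ty and 0 <= tx and ty + size <= len(cur) and tx + size <= len(cur[0]):
                cost = sad(template, block(cur, ty, tx, size))
                if best is None or cost < best[0]:
                    best = (cost, dy, dx)
    return best[1], best[2]

# Synthetic 6x6 frames: a bright 2x2 marker shifted one pixel down and right.
ref = [[0] * 6 for _ in range(6)]
cur = [[0] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        ref[y][x] = 255
        cur[y + 1][x + 1] = 255

print(match_block(ref, cur, 2, 2, 2))  # recovered displacement (dy, dx)
```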

Relevance:

20.00%

Publisher:

Abstract:

ABSTRACT - Quality and safety are essential pillars of modern health systems. The first step in monitoring and evaluating them is knowing the reality of the adverse events that affect patients. Several methodologies exist for measuring adverse events. Clinical record review, although the gold standard, does not allow, unlike the analysis of administrative data, a comprehensive assessment of inpatient episodes. The latter methodology, based on data routinely collected in many countries, including Portugal, nevertheless has several limitations, for which solutions have been introduced, such as flagging the moment at which a diagnosis was acquired; because this flag was introduced only recently, it was not used in this work. In a hospital of the National Health Service in Portugal, analysis of administrative data found an incidence of 2.5% of adverse events in surgical inpatient episodes. The association of age, male sex and urgent admission with their occurrence was confirmed. Patients who suffered an adverse event had a considerably higher probability of death (odds ratio 12.2) and average lengths of stay prolonged by about twenty days. These figures, however, are not adjusted for the patients' risk or for the interventions they underwent. If the lengths of stay in the Portuguese DRG (GDH) tables are considered, the prolongation of stay is 8.4 days. The assessment of additional costs, based on the additional inpatient days, is conditioned by the methodological issue reported above, with estimated implications of 1.1% to 8.8% of inpatient days and costs of 1,000,000 to 8,600,000 euros. In Portugal, systematic monitoring of the occurrence of adverse events, and consequently of their implications for patient health and financial costs, is not yet a reality.
The implementation of the "present on admission" code will allow the next step in the use of administrative data to understand the phenomenon of adverse events.
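As a small illustration of the odds-ratio measure used in the abstract above, here is the standard 2x2-table computation (the counts are invented and are not the study's data):

```python
def odds_ratio(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Odds ratio from a 2x2 table: (a/b) / (c/d), where a and b are the
    outcomes among the exposed and c and d among the unexposed."""
    return (exposed_cases / exposed_noncases) / (unexposed_cases / unexposed_noncases)

# Invented 2x2 table: deaths vs. survivors among surgical episodes with
# and without an adverse event (NOT the study's data).
print(odds_ratio(28, 200, 110, 9690))
```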

Relevance:

20.00%

Publisher:

Abstract:

The Work Project I present focuses on the analysis of L’Oréal’s acquisition policy, trying to determine whether the M&A deals it has led over the last 14 years have succeeded in creating value. By replicating the model proposed by Todd Hazelkorn, Marc Zenner and Anil Shivdasani in their paper “Creating Value with Mergers and Acquisitions”, I analyzed the 29 M&A deals that L’Oréal has led worldwide, identifying the common factors that explain the success of such transactions. Further, I focused on The Body Shop case study, a highly criticized and controversial acquisition that has proved to be profitable and able to create value.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this work project is to analyze the acquisition of WhatsApp by Facebook, which occurred on 19 February 2014. The main aim of the research is to understand whether the price tag of $19 billion paid by Mark Zuckerberg was fair. From the reaction of Facebook’s EPS in the key days after the purchase, a balanced assessment of the acquisition was obtained and discussed. The results suggest that the price tag could be reasonable. However, taking into account the industry in which the two companies operate, where competition is quite intense, Facebook should assess this deal in a longer-term perspective.

Relevance:

20.00%

Publisher:

Abstract:

UNL - NSBE

Relevance:

20.00%

Publisher:

Abstract:

This work project focuses on developing new approaches to enhance Portuguese exports towards a defined German industry sector within the information technology and electronics fields. First and foremost, information was collected and a set of interviews with experts and top managers was performed in order to understand the demand of the German market while identifying compatible Portuguese supply capabilities. Among the main findings, Industry 4.0 presents itself as a valuable opportunity in the German market for Portuguese medium-sized companies with embedded systems expertise targeting machinery and equipment companies. To achieve the purpose of the work project, an embedded systems platform targeting machinery and equipment companies was suggested, and several recommendations on how to implement it were developed. An alternative approach for this platform within the German market was also considered, namely the eHealth sector, with the purpose of enhancing current healthcare service provision.