95 results for Knowledge Discovery Database
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
ABSTRACT: Schizophrenia, with its disabling features, has been placed in the top ten causes of global burden of disease and is associated with a long-term decline in functional ability. General practitioners not only have an important role in treating patients with an established diagnosis of schizophrenia, but can also contribute significantly by identifying people in the early stages of psychosis, as they are the first medical help available and the duration of untreated psychosis is a good indicator of a patient's prognosis. This cross-sectional survey, conducted at the clinics of general practitioners, was designed to assess the knowledge and practices of general practitioners in Peshawar on the diagnosis and treatment of schizophrenia. A semi-structured questionnaire was used to assess their knowledge and practices regarding schizophrenia. Knowledge and practice were then categorized as good or poor based on the responses to the questions of the administered questionnaire. Overall, the results showed that the knowledge and practices of general practitioners of district Peshawar regarding schizophrenia were poor, which may be responsible for delayed diagnosis, inadequate treatment, and poor prognosis.
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
The present article is based on the report for the Doctoral Conference of the PhD programme in Technology Assessment, held at the FCT-UNL Campus, Monte de Caparica, on July 9th, 2012. The PhD thesis is supervised by Prof. Cristina Sousa (ISCTE-IUL) and co-supervised by Prof. José Cardoso e Cunha (FCT-UNL).
Abstract:
A Work Project, presented as part of the requirements for the Award of a Master's Degree in Management from the NOVA – School of Business and Economics
Abstract:
Dissertation submitted to obtain the Master's Degree in Industrial Engineering and Management
Abstract:
Dissertation to obtain the Master's Degree in Biomedical Engineering
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation submitted to obtain the Master's Degree in Electrical and Computer Engineering
Abstract:
This research aims to provide a better understanding of how firms stimulate knowledge sharing through the use of collaboration tools, in particular Emergent Social Software Platforms (ESSPs). It focuses on the distinctive applications of ESSPs and on the initiatives that contribute to maximizing their advantages. In the first part of the research, I itemized all types of existing collaboration tools and classified them into categories according to their capabilities, objectives, and capacity to promote knowledge sharing. In the second part, based on an exploratory case study at Cisco Systems, I identified the main applications of an existing enterprise social software platform named Webex Social. By combining a qualitative and quantitative approach, and by combining data collected from survey results with the analysis of the company's documents, I expect to maximize the outcome of this investigation and reduce the risk of bias. Although effects cannot be universalized based on a single case study, some utilization patterns were identified from the data collected and potential trends in managing knowledge were observed. The results of the research also enabled the identification of most of the constraints experienced by the users of the firm's social software platform. Ultimately, this research should provide a preliminary framework for firms planning to create or implement a social software platform, and for firms willing to increase adoption levels and promote the overall participation of users. It highlights the common traps that developers should avoid when designing a social software platform, and the capabilities that it should inherently carry to support an effective knowledge management strategy.
Abstract:
Epistemology in the philosophy of mind is a difficult endeavor. Those who believe that our phenomenal life is different from other domains suggest that self-knowledge about phenomenal properties is certain and therefore privileged. Usually, this so-called privileged access is explained by the idea that we have direct access to our phenomenal life. This means that, in contrast to perceptual knowledge, self-knowledge is non-inferential. It is widely believed that this kind of directness involves two different senses: an epistemic sense and a metaphysical sense. Proponents of this view often claim that this is due to the fact that we are acquainted with our current experiences. The acquaintance thesis, therefore, is the backbone of the justification of privileged access. Unfortunately, the whole approach has a profound flaw. For the thesis to work, acquaintance has to be a genuine explanation. Since it is usually assumed that any knowledge relation between judgments and the corresponding objects is merely causal and contingent (e.g. in perception), the proponent of the privileged access view needs to show that acquaintance can do the job. In this thesis, however, I claim that the latter cannot be done. Based on considerations introduced by Levine, I conclude that this approach involves either the introduction of ontologically independent properties or a rather obscure knowledge relation. A proper explanation, however, cannot employ either of the two options. The acquaintance thesis is, therefore, bound to fail. Since the privileged access intuition seems to be vital to epistemology within the philosophy of mind, I explore alternative justifications. After discussing a number of options, I focus on the so-called revelation thesis. This approach states that by simply having an experience with phenomenal properties, one is in a position to know the essence of those phenomenal properties.
I will argue that, after finding a solution for the controversial essence claim, this thesis is a successful replacement explanation which maintains all the virtues of the acquaintance account without introducing ontologically independent properties or an obscure knowledge relation. The overall solution consists in qualifying the essence claim in the relevant sense, leaving us with an appropriate ontology for phenomenal properties. On the one hand, this avoids employing mysterious independent properties, since this ontological view is physicalist in nature. On the other hand, this approach has the right kind of structure to explain privileged self-knowledge of our phenomenal life. My final conclusion is that the privileged access intuition is in fact veridical. It cannot, however, be justified by the popular acquaintance approach, but is instead explained by the controversial revelation thesis.
Abstract:
Hybrid knowledge bases are knowledge bases that combine ontologies with non-monotonic rules, allowing one to join the best of both open-world ontologies and closed-world rules. Ontologies provide a good mechanism for sharing knowledge on the Web that can be understood by both humans and machines; rules, on the other hand, can be used, e.g., to encode legal regulations or to map between sources of information. Taking into account the dynamics present today on the Web, it is important for these hybrid knowledge bases to capture all these dynamics and thus adapt themselves. To achieve that, it is necessary to create mechanisms capable of monitoring the information flow present on the Web. To date, there are no mechanisms that allow monitoring events and performing modifications of hybrid knowledge bases autonomously. The goal of this thesis is therefore to create a system that combines these hybrid knowledge bases with reactive rules, aiming to monitor events and perform actions over a knowledge base. To achieve this goal, a reactive system for the Semantic Web is developed in a logic-programming-based approach, accompanied by a language for heterogeneous rule base evolution based on the RIF Production Rule Dialect, a standard for exchanging rules over the Web.
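A minimal sketch may help illustrate the event-condition-action (ECA) style of reactive rule this abstract describes: rules that monitor incoming events and update a knowledge base. The dict-based triple store, the rule, and all names below are illustrative toys, not the thesis's actual RIF-PRD-based system.

```python
# Toy event-condition-action (ECA) reactive rule over a tiny knowledge base.
# All identifiers here are hypothetical, chosen only for illustration.

# Knowledge base as a dict of (subject, predicate) -> object triples.
kb = {("sensor", "s1", "state"): "ok"}

def rule_raise_alarm(event, kb):
    """On a 'fault' event, if the sensor's state is 'ok', set it to 'alarm'."""
    if event.get("type") == "fault":                  # event part
        key = ("sensor", event["sensor"], "state")
        if kb.get(key) == "ok":                       # condition part
            kb[key] = "alarm"                         # action part
            return True
    return False

rules = [rule_raise_alarm]

def dispatch(event, kb, rules):
    """Fire every rule whose event and condition match; return the names fired."""
    return [r.__name__ for r in rules if r(event, kb)]

print(dispatch({"type": "fault", "sensor": "s1"}, kb, rules))
# The kb entry for s1 is now "alarm"; a second identical event fires nothing,
# because the condition (state == "ok") no longer holds.
```

In RIF-PRD terms, the `if` tests play the role of the rule's condition and the assignment plays the role of its action; a real system would of course match patterns over the knowledge base rather than hard-code keys.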
Abstract:
The corporate world is becoming more and more competitive. This leads organisations to adapt to this reality by adopting more efficient processes, which result in a decrease in cost as well as an increase in product quality. One of these processes consists in making proposals to clients, which necessarily include a cost estimation of the project. This estimation is the main focus of this project. In particular, one of the goals is to evaluate which estimation models best fit the Altran Portugal software factory, the organization where the fieldwork of this thesis will be carried out. There is no broad agreement about which type of estimation model is most suitable for software projects. In contexts where there is plenty of objective information available to be used as input to an estimation model, model-based methods usually yield better results than expert judgement. However, what happens more frequently is not having this volume and quality of information, which has a negative impact on the performance of model-based methods, favouring the use of expert judgement. In practice, most organisations use expert judgement, making themselves dependent on the expert. A common problem is that the performance of an expert's estimation depends on their previous experience with identical projects. This means that when new types of projects arrive, the estimation will have an unpredictable accuracy. Moreover, different experts will make different estimates, based on their individual experience. As a result, the company will not directly build a continuously growing body of knowledge about how the estimate should be carried out. Estimation models depend on the input information collected from previous projects, the size of the project database, and the resources available. Altran currently does not store the input information from previous projects in a systematic way. It has a small project database and a team of experts.
Our work is targeted at companies that operate in similar contexts. We start by gathering information from the organisation in order to identify which estimation approaches can be applied considering the organization's context. A gap analysis is used to understand what type of information the company would have to collect so that other approaches would become available. Based on our assessment, in our opinion, expert judgement is the most adequate approach for Altran Portugal in the current context. We analysed past development and evolution projects from Altran Portugal and assessed their estimates. This resulted in the identification of common estimation deviations, errors, and patterns, which led to the proposal of metrics to help estimators produce estimates that leverage past projects' quantitative and qualitative information in a convenient way. This dissertation aims to contribute to more realistic estimates by identifying shortcomings in the current estimation process and supporting the self-improvement of the process, by gathering as much relevant information as possible from each finished project.
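One family of model-based approaches the abstract contrasts with expert judgement is estimation by analogy: estimating a new project's effort from its most similar past projects. The sketch below is a generic toy illustration of that idea, not the metrics proposed in the thesis; the features, weights, and data are all invented for the example.

```python
# Toy analogy-based (k-nearest-neighbour) effort estimation.
# Features, weighting, and numbers are illustrative assumptions only.
import math

past_projects = [
    # size in (hypothetical) use-case points, team size, effort in person-days
    {"size": 120, "team": 5, "effort": 300},
    {"size": 80,  "team": 3, "effort": 180},
    {"size": 200, "team": 8, "effort": 560},
]

def distance(a, b):
    """Euclidean distance; team size is scaled so both features matter."""
    return math.hypot(a["size"] - b["size"], 10 * (a["team"] - b["team"]))

def estimate(new_project, history, k=2):
    """Average the recorded effort of the k most similar past projects."""
    nearest = sorted(history, key=lambda p: distance(new_project, p))[:k]
    return sum(p["effort"] for p in nearest) / len(nearest)

print(estimate({"size": 100, "team": 4}, past_projects))  # prints 240.0
```

The approach only works as well as the project database behind it, which is exactly the gap the abstract identifies: without systematically stored data from past projects, such model-based methods have nothing to draw on.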
Abstract:
This research seeks to design and implement a WebGIS application allowing high school students to work with information related to the disciplinary competencies of the competency-based teaching model in Mexico. This paradigm assumes that knowledge is acquired through the application of new technologies and linked to students' everyday life situations. The WebGIS provides access to maps regarding natural risks in Mexico, e.g. volcanism, seismic activity, or hurricanes; the prototype's user interface was designed with special emphasis on the needs of high school students.
Abstract:
In the last few years, we have observed an exponential increase in information systems, and parking information is one more example. Obtaining reliable and up-to-date information on parking slot availability is very important for the goal of traffic reduction, and parking slot prediction is a new topic that has already started to be applied: San Francisco in the United States and Santander in Spain are examples of projects carried out to obtain this kind of information. The aim of this thesis is the study and evaluation of methodologies for parking slot prediction and their integration in a web application, where all kinds of users will be able to know the current parking status and also future status according to parking model predictions. The source of the data is ancillary to this work, but it still needs to be understood in order to understand parking behaviour. There are many modelling techniques used for this purpose, such as time series analysis, decision trees, neural networks, and clustering. In this work, the author applies the most suitable techniques for this task, analyses the results, and points out the advantages and disadvantages of each one. The model learns the periodic and seasonal patterns of the parking status behaviour, and with this knowledge it can predict future status values given a date. The data come from Smart Park Ontinyent and consist of parking occupancy status together with timestamps, stored in a database. After data acquisition, data analysis and pre-processing were needed before implementing the models. The first test was done with a boosting ensemble classifier, employed over a set of decision trees created with the C5.0 algorithm from a set of training samples, to assign a prediction value to each object. In addition to the predictions, this work provides error measurements that indicate the reliability of the predictions.
The second test used the TBATS model, a function-fitting seasonal exponential smoothing approach. Finally, as a last test, a combination of the previous two models was tried. The results were quite good for all of them, with average errors of 6.2, 6.6, and 5.4 vacancies for the three models respectively; for a car park of 47 places, this means roughly a 10% average error in parking slot predictions. This result could be even better with longer data series available. In order to make this kind of information visible and reachable by everyone with an internet-connected device, a web application was built. Besides displaying the data, this application also offers different functions to improve the task of searching for parking. The new functions, apart from parking prediction, are:
- Park distances from the user's location: provides the distances from the user's current location to the different car parks in the city.
- Geocoding: the service for matching a literal description or an address to a concrete location.
- Geolocation: the service for positioning the user.
- Parking list panel: neither a service nor a function, but a better visualization and handling of the information.
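To make the seasonal-prediction idea concrete, here is a minimal toy sketch of predicting parking availability from a timestamped history, assuming hourly samples and a daily (24-step) cycle. The simple per-slot exponential smoothing below is only a stand-in for the TBATS and boosted-tree models the thesis actually evaluates; the data and parameters are invented.

```python
# Toy seasonal exponential smoothing for parking vacancy prediction.
# A simplified stand-in for the thesis's TBATS model; data are synthetic.

def seasonal_smooth(history, season=24, alpha=0.3):
    """Exponentially smoothed value for each of the `season` time slots."""
    slots = [None] * season
    for i, value in enumerate(history):
        s = i % season
        if slots[s] is None:
            slots[s] = float(value)          # first observation seeds the slot
        else:
            slots[s] = alpha * value + (1 - alpha) * slots[s]
    return slots

def predict(history, steps_ahead, season=24):
    """Predict the value `steps_ahead` samples after the end of the history."""
    slots = seasonal_smooth(history, season)
    return slots[(len(history) + steps_ahead - 1) % season]

# Synthetic data: one week of hourly free-slot counts with a daily pattern
# (full at night, busy during working hours), for a 47-place car park.
day = [40, 42, 45, 45, 44, 38, 25, 10, 5, 4, 6, 8,
       10, 9, 8, 7, 9, 12, 18, 25, 30, 34, 37, 39]
history = day * 7

# Predict the vacancy count one hour after the last observation.
print(round(predict(history, 1), 1))  # prints 40.0
```

Because every day in this synthetic history is identical, the model simply reproduces the daily pattern; on real occupancy data the smoothing constant trades responsiveness to recent days against stability, which is the kind of tuning the evaluated models handle internally.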