954 results for Model information
Abstract:
For e-government to truly exist, it is crucial to provide public information and documentation and to make its access simple for citizens. A portion, not necessarily small, of these documents is unstructured natural-language text, which current search systems are generally unable to cope with and handle effectively. In principle, therefore, access to these contents can be improved by systems that process natural language and create structured information, particularly when supported by semantics. To put this thesis to the test, the work was developed in three major phases: (1) design of a conceptual model that integrates the creation of structured information and its provision to various actors, in line with the vision of e-government 2.0; (2) definition and development of a prototype instantiating the key modules of this conceptual model, including ontology-based information extraction supported by examples of relevant information, knowledge management, and access based on natural language; (3) assessment of the usability and acceptability of querying information through the prototype, and in consequence through the conceptual model, by users in a realistic scenario that included comparison with existing forms of access. In addition to this evaluation, and at a level more related to technology assessment than to the model, the performance of the subsystem responsible for information extraction was also evaluated. The evaluation results show that the proposed model was perceived as more effective and more useful than the alternatives. Together with the prototype's information-extraction performance, which is comparable to the state of the art, these results demonstrate the feasibility and the advantages, with current technology, of using natural language processing and the integration of semantic information to improve access to unstructured natural-language contents. The conceptual model and the prototype demonstrator are intended to contribute to the future existence of more sophisticated search systems that are also better suited to e-government. Transparency in governance, active citizenship, and greater agility in interactions with the public administration, among other goals, require that citizens and businesses have quick and easy access to official information, even when it was originally created in natural language.
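To make the pattern of phase (2) concrete, here is a minimal sketch of ontology-backed storage queried through a thin natural-language layer, using rdflib; the namespace, class names and keyword-to-SPARQL mapping are hypothetical illustrations, not the thesis prototype.

```python
# Hypothetical sketch: structured facts in an RDF graph, queried through a
# naive keyword-to-SPARQL layer standing in for real natural-language access.
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/egov#")  # hypothetical ontology namespace

g = Graph()
g.add((EX.permit42, RDF.type, EX.BuildingPermit))
g.add((EX.permit42, EX.issuedBy, Literal("Municipality of Faro")))

def find_permits(keyword: str):
    """Very naive 'natural language' access: match a keyword against issuers."""
    q = """
        SELECT ?doc ?issuer WHERE {
            ?doc a ex:BuildingPermit ;
                 ex:issuedBy ?issuer .
            FILTER(CONTAINS(LCASE(STR(?issuer)), LCASE(STR(?kw))))
        }
    """
    return list(g.query(q, initNs={"ex": EX},
                        initBindings={"kw": Literal(keyword)}))

print(find_permits("faro"))
```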
Abstract:
The rapid evolution and proliferation of a world-wide computerized network, the Internet, resulted in an overwhelming and constantly growing amount of publicly available data and information, a fact that was also verified in biomedicine. However, the lack of structure of textual data inhibits its direct processing by computational solutions. Information extraction is the task of text mining that intends to automatically collect information from unstructured text data sources. The goal of the work described in this thesis was to build innovative solutions for biomedical information extraction from scientific literature, through the development of simple software artifacts for developers and biocurators, delivering more accurate, usable and faster results. We started by tackling named entity recognition, a crucial initial task, with the development of Gimli, a machine-learning-based solution that follows an incremental approach to optimize the extracted linguistic characteristics for each concept type. Afterwards, Totum was built to harmonize concept names provided by heterogeneous systems, delivering a robust solution with improved performance results. This approach takes advantage of heterogeneous corpora to deliver cross-corpus harmonization that is not constrained to specific characteristics. Since previous solutions do not provide links to knowledge bases, Neji was built to streamline the development of complex and custom solutions for biomedical concept name recognition and normalization. This was achieved through a modular and flexible framework focused on speed and performance, integrating a large number of processing modules optimized for the biomedical domain. To offer on-demand heterogeneous biomedical concept identification, we developed BeCAS, a web application, service and widget. We also tackled relation mining by developing TrigNER, a machine-learning-based solution for biomedical event trigger recognition, which applies an automatic algorithm to obtain the best linguistic features and model parameters for each event type. Finally, in order to assist biocurators, Egas was developed to support rapid, interactive and real-time collaborative curation of biomedical documents, through manual and automatic in-line annotation of concepts and relations. Overall, the research work presented in this thesis contributed to more accurate updating of current biomedical knowledge bases, towards improved hypothesis generation and knowledge discovery.
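As an illustration of the machine-learning approach behind tools such as Gimli, the following is a minimal sketch of CRF-based named entity recognition with hand-crafted token features; the feature set and toy data are assumptions for illustration, not Gimli's actual (Java-based) pipeline.

```python
# Sketch of CRF-based biomedical NER with simple linguistic token features.
import sklearn_crfsuite

def token_features(sent, i):
    w = sent[i]
    return {
        "lower": w.lower(),
        "is_upper": w.isupper(),
        "has_digit": any(c.isdigit() for c in w),
        "suffix3": w[-3:],
        "prev": sent[i - 1].lower() if i > 0 else "<BOS>",
        "next": sent[i + 1].lower() if i < len(sent) - 1 else "<EOS>",
    }

# One toy sentence with BIO labels for a single concept type (gene/protein).
sents = [["BRCA1", "regulates", "DNA", "repair", "."]]
labels = [["B-GENE", "O", "O", "O", "O"]]

X = [[token_features(s, i) for i in range(len(s))] for s in sents]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1, max_iterations=50)
crf.fit(X, labels)
print(crf.predict(X))
```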
Abstract:
The nitrate and urban waste water directives have raised the need for a better understanding of coastal systems in the European Union. Incorrect application of these directives can lead to important ecological or social penalties. In this paper the problem is addressed for the Ria Formosa coastal lagoon. Ria Formosa hosts a Natural Park, important ports of the southern Portuguese coast and significant bivalve aquaculture activity. Four major urban waste water treatment plants discharging into the lagoon are considered in this study. Their treatment levels must be selected based on detailed information from a monitoring program and on a good knowledge of the processes determining the fate of the material discharged into the lagoon. In this paper the results of a monitoring program and simulations using a coupled hydrodynamic and water quality / ecological model, MOHID, are used to characterise the system and to understand the processes in Ria Formosa. It is shown that the water residence time in most of the lagoon is quite short, of the order of days, but it can be longer in the upper parts of the channels where land-generated water is discharged. The main supply of nutrients to the lagoon comes from the open sea rather than from the urban discharges. For this reason the characteristics and behaviour of the lagoon as a whole contrast with the behaviour of the upper reaches of the channels, where the influence of the waste water treatment plants is high. In this system bottom mineralization was found to be an important mechanism, and the inclusion of that process in the model was essential to obtain good results.
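For intuition about residence times "of the order of days", a back-of-envelope tidal-prism flushing estimate can be computed as below; all numbers are placeholders, not Ria Formosa measurements or model output.

```python
# Order-of-magnitude tidal-prism flushing time. All values are placeholders.
LAGOON_VOLUME_M3 = 1.0e8   # hypothetical mean lagoon volume
TIDAL_PRISM_M3 = 5.0e7     # hypothetical volume exchanged per tidal cycle
TIDAL_PERIOD_H = 12.42     # M2 semidiurnal tidal period
RETURN_FLOW = 0.5          # assumed fraction of ebb water returning on flood

effective_exchange_m3 = TIDAL_PRISM_M3 * (1.0 - RETURN_FLOW)
cycles_to_flush = LAGOON_VOLUME_M3 / effective_exchange_m3
flushing_time_days = cycles_to_flush * TIDAL_PERIOD_H / 24.0
print(f"Flushing time ~ {flushing_time_days:.1f} days")  # ~2 days here
```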
Abstract:
Saliency maps determine the likelihood that we focus on interesting areas of scenes or images. These maps can be built using several low-level image features, one of which, colour, has particular relevance. In this paper we present a new computational model, based only on colour features, which provides a sound basis for saliency maps for static images and video, plus region segregation and cues for local gist vision.
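A minimal sketch of a colour-only saliency computation in this spirit, using centre-surround differences on colour-opponent channels; the channel definitions and scales are illustrative assumptions, not the authors' specific model.

```python
# Colour-only saliency sketch: centre-surround differences on red-green and
# blue-yellow opponent channels (illustrative scales).
import numpy as np
from scipy.ndimage import gaussian_filter

def colour_saliency(rgb):
    """rgb: float array (H, W, 3) in [0, 1]; returns a saliency map (H, W)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                    # red-green opponent channel
    by = b - (r + g) / 2.0        # blue-yellow opponent channel
    saliency = np.zeros(rgb.shape[:2])
    for centre, surround in [(1, 4), (2, 8)]:  # two centre-surround scales
        for channel in (rg, by):
            saliency += np.abs(gaussian_filter(channel, centre)
                               - gaussian_filter(channel, surround))
    return saliency / (saliency.max() + 1e-12)
```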
Abstract:
We are developing a frontend that is based on the image representation in the visual cortex and on plausible processing schemes. This frontend consists of multiscale line/edge and keypoint (vertex) detection, using models of simple, complex and end-stopped cells. It is now being extended with a new disparity model. Assuming that there is no neural inverse-tangent operator, we do not exploit Gabor phase information. Instead, we directly use simple-cell (Gabor) responses at positions where lines and edges are detected.
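The following sketch shows the basic building block assumed here: a quadrature pair of Gabor filters as even/odd simple cells, combined into a complex-cell energy response without recovering phase; the parameters are illustrative.

```python
# Quadrature Gabor pair as even/odd simple cells, combined into a
# complex-cell energy response; no phase (inverse tangent) is computed.
import numpy as np

def gabor_pair(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    even = envelope * np.cos(2 * np.pi * xr / wavelength)  # symmetric cell
    odd = envelope * np.sin(2 * np.pi * xr / wavelength)   # antisymmetric cell
    return even, odd

def complex_cell_energy(patch, even, odd):
    re, ro = np.sum(patch * even), np.sum(patch * odd)
    return re**2 + ro**2  # phase-invariant energy response
```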
Abstract:
Master's dissertation, Water and Coastal Management, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2010
Abstract:
A biological disparity energy model can estimate local depth information by using a population of V1 complex cells. Instead of applying an analytical model which explicitly involves cell parameters like spatial frequency, orientation, binocular phase and position difference, we developed a model which only involves the cells’ responses, such that disparity can be extracted from a population code, using only a set of cells previously trained with random-dot stereograms of uniform disparity. Despite good results in smooth regions, the model needs complementary processing, notably at depth transitions. We therefore introduce a new model to extract disparity at keypoints such as edge junctions, line endings and points with large curvature. Responses of end-stopped cells serve to detect keypoints, and those of simple cells are used to detect the orientations of their underlying line and edge structures. Annotated keypoints are then used in the left-right matching process, with a hierarchical, multi-scale tree structure and a saliency map to segregate disparity. By combining both models we can (re)define depth transitions and regions where the disparity energy model is less accurate.
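A minimal sketch of the classical binocular disparity-energy computation that such a model builds on, pairing with the quadrature Gabor sketch above; the population coding and training described in the abstract are omitted, so this is only the underlying energy model, not the authors' full method.

```python
# Binocular disparity energy for candidate position shifts; pairs with the
# gabor_pair sketch above. Population training is omitted.
import numpy as np

def disparity_energy(left_patch, right_strip, even, odd, shifts):
    """left_patch: (h, w); right_strip: (h, >= w + max shift).
    Returns one energy value per candidate disparity in `shifts`."""
    h, w = even.shape
    le = np.sum(left_patch * even)   # left even simple-cell response
    lo = np.sum(left_patch * odd)    # left odd simple-cell response
    energies = []
    for d in shifts:
        right_patch = right_strip[:h, d:d + w]
        re = np.sum(right_patch * even)
        ro = np.sum(right_patch * odd)
        energies.append((le + re) ** 2 + (lo + ro) ** 2)  # binocular energy
    return np.array(energies)  # argmax over shifts -> local disparity estimate
```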
Abstract:
Disparity energy models (DEMs) estimate local depth information on the basis of V1 complex cells. Our recent DEM (Martins et al., 2011, ISSPIT, 261-266) employs a population code. Once the population's cells have been trained with random-dot stereograms, it is applied at all retinotopic positions in the visual field. Despite producing good results in textured regions, the model needs to be made more precise, especially at depth transitions.
Abstract:
Modelling species distributions with presence data from atlases, museum collections and databases is challenging. In this paper, we compare seven procedures to generate pseudo-absence data, which in turn are used to fit GLM logistic-regression models when reliable absence data are not available. We use pseudo-absences selected randomly or by means of presence-only methods (ENFA and MDE) to model the distribution of a threatened endemic Iberian moth species (Graellsia isabelae). The results show that the pseudo-absence selection method greatly influences the percentage of explained variability, the scores of the accuracy measures and, most importantly, the degree of constraint in the estimated distribution. As we extract pseudo-absences from environmental regions further from the optimum established by the presence data, the models generated obtain better accuracy scores and over-prediction increases. When variables other than environmental ones influence the distribution of the species (i.e., a non-equilibrium state) and precise information on absences is non-existent, the random selection of pseudo-absences, or their selection from environmental localities similar to those of the species presence data, generates the most constrained predictive distribution maps, because pseudo-absences can be located within environmentally suitable areas. This study shows that if we do not have reliable absence data, the method of pseudo-absence selection strongly conditions the obtained model, generating different model predictions along the gradient between potential and realized distributions.
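As a concrete illustration of the simplest variant discussed, random pseudo-absence selection followed by a logistic GLM, here is a sketch with synthetic environmental variables; the data, sample sizes and settings are assumptions, not the study's setup.

```python
# Random pseudo-absence selection + logistic GLM on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

presences = rng.normal(1.0, 0.5, size=(200, 2))    # env. values at presences
background = rng.normal(0.0, 1.0, size=(5000, 2))  # env. values, whole study area

idx = rng.choice(len(background), size=200, replace=False)
pseudo_absences = background[idx]                  # random pseudo-absences

X = np.vstack([presences, pseudo_absences])
y = np.concatenate([np.ones(200), np.zeros(200)])

glm = LogisticRegression().fit(X, y)
suitability = glm.predict_proba(background)[:, 1]  # predicted suitability map
```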
Abstract:
This article presents the experience of a rehabilitation program that undertook the challenge of reorganizing its services to address accessibility issues and improve service quality. The context in which the reorganization process occurred is described, along with the relevant literature justifying the need for a new service delivery model and a historical perspective on the planning, implementation, and evaluation phases of the process. For the planning phase, the constitution of the working committee, the data collected, and the information found in the literature are presented. Apollo, the new service delivery model, is then described along with each of its components (e.g., community, group, and individual interventions). Actions taken and lessons learnt during the implementation of each component are presented. We hope that by sharing our experiences we can help others make informed decisions about service reorganization to improve the quality of services provided to children with disabilities, their families, and their communities.
Abstract:
There is a general consensus that new service delivery models are needed for children with developmental coordination disorder (DCD). Emerging principles to guide service delivery include the use of graduated levels of intensity and evidence-based services that focus on function and participation. Interdisciplinary, community-based service delivery models based on best-practice principles are needed. In this case report, we propose the Apollo model as an example of an innovative service delivery model for children with DCD. We describe the context that led to the creation of a program for children with DCD, describe the service delivery model and services, and share lessons learned through implementation. The Apollo model has five components: first contact, service delivery coordination, and community, group and individual interventions. This model guided the development of a streamlined set of services offered to children with DCD, including early intake to share educational information with families, community interventions, interdisciplinary and occupational therapy groups, and individual interventions. Following implementation of the Apollo model, waiting times decreased and the number of children receiving services increased, without compromising service quality. Lessons learned are shared to facilitate the development of other practice models to support children with DCD.
Abstract:
The Montado ecosystem in the Alentejo region, in the south of Portugal, exhibits enormous agro-ecological and economic heterogeneity. Homogeneous sub-units were defined within this heterogeneous ecosystem, but only partial statistical information about the allocation of soil to agro-forestry activities is available for them. The paper proposes to recover the unknown soil allocation in each homogeneous sub-unit by disaggregating a complete data set for the Montado ecosystem area using the incomplete information at sub-unit level. The methodological framework is based on a Generalized Maximum Entropy approach, developed in three steps: the specification of an r-order Markov process, the estimation of aggregate transition probabilities, and the disaggregation of the data to recover the unknown soil allocation in each homogeneous sub-unit. The quality of the results is evaluated using the predicted absolute deviation (PAD) and the Disaggregation Information Gain (DIG), and shows very acceptable estimation errors.
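A toy sketch of the maximum-entropy disaggregation idea: recover sub-unit activity shares that maximize entropy while reproducing a known regional aggregate. The r-order Markov machinery of the paper is omitted, and all data are synthetic placeholders.

```python
# Maximum-entropy recovery of sub-unit shares under an aggregate constraint.
import numpy as np
from scipy.optimize import minimize

n_units, n_uses = 3, 2
areas = np.array([100.0, 200.0, 300.0])    # sub-unit areas (ha), synthetic
aggregate = np.array([0.4, 0.6])           # known regional allocation shares

def neg_entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return np.sum(p * np.log(p))            # minimizing this maximizes entropy

constraints = [
    # shares in each sub-unit sum to one
    *({"type": "eq",
       "fun": lambda p, i=i: p.reshape(n_units, n_uses)[i].sum() - 1.0}
      for i in range(n_units)),
    # area-weighted shares reproduce the regional aggregate
    {"type": "eq",
     "fun": lambda p: areas @ p.reshape(n_units, n_uses) / areas.sum() - aggregate},
]

p0 = np.full(n_units * n_uses, 1.0 / n_uses)
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(0.0, 1.0)] * (n_units * n_uses), constraints=constraints)
print(res.x.reshape(n_units, n_uses))       # recovered sub-unit allocations
```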
Abstract:
Doctoral thesis, Marine Sciences (Marine Ecosystem Processes), Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2012
Abstract:
Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
High concentration levels of Ganoderma spp. spores were observed in Worcester, UK, during 2006–2010. These basidiospores are known to cause sensitization due to their allergen content and their small dimensions, which enable them to penetrate the lower part of the human respiratory tract. Establishing a link between occurring symptoms of sensitization and Ganoderma spp. and other basidiospores is challenging due to the lack of information regarding spore concentrations in the air. Hence, aerobiological monitoring should be conducted and, if possible, extended with the construction of forecast models. The daily mean concentration of allergenic Ganoderma spp. spores in the atmosphere of Worcester was measured using a 7-day volumetric spore sampler through five consecutive years. The relationships between the presence of spores in the air and weather parameters were examined. Forecast models were constructed for Ganoderma spp. spores using advanced statistical techniques, i.e. multivariate regression trees and artificial neural networks. Dew point temperature, along with maximum temperature, was the most important factor influencing the presence of spores in the air of Worcester. Based on these two major factors and several others of lesser importance, thresholds were established for certain levels of fungal spore concentration, i.e. low (0–49 s m−3), moderate (50–99 s m−3), high (100–149 s m−3) and very high (≥150 s m−3).
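To illustrate how a tree-based model can map the two dominant weather factors onto these concentration bands, here is a sketch with synthetic data; it is not the study's fitted model, and the toy response function is an assumption.

```python
# Decision tree mapping dew point and maximum temperature to the four
# concentration bands above; data are synthetic, not the Worcester series.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
dew_point = rng.uniform(0, 20, 500)              # daily dew point (C)
max_temp = rng.uniform(5, 30, 500)               # daily maximum temperature (C)
spores = 8 * dew_point + 2 * max_temp + rng.normal(0, 15, 500)  # toy response

bands = np.digitize(spores, [50, 100, 150])      # 0 low .. 3 very high
X = np.column_stack([dew_point, max_temp])

tree = DecisionTreeClassifier(max_depth=3).fit(X, bands)
print(tree.predict([[12.0, 22.0]]))              # band forecast, hypothetical day
```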