983 results for Content areas articulation
Abstract:
In order to decrease the risk of severe wildfire, prescribed fire has recently been adopted in Portugal and elsewhere in the Mediterranean as a major tool for reducing the fuel load, instead of manual or mechanical removal of vegetation. There has been some research into its impact on soils in shrublands and grasslands, but to date little research has been conducted in forested areas in the region. As a result, the impact of prescribed fire on the physico-chemical characteristics of forest soils has been assumed to be minimal, but this has not been demonstrated. In this study, we present the results of a detailed pre- and post-fire monitoring campaign of soil properties in a long-unburnt P. pinaster plantation in NW Portugal. The soil characteristics examined were pH, total porosity, bulk density, moisture content, organic matter content and litter/ash quantity. The results show that there was no significant impact on the measured soil properties, the only effect being confined to minor changes in the upper 1 cm of soil. We conclude that, provided the fire is carried out according to strict guidelines, a minimal impact on soil properties can be expected in P. pinaster forest.
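The abstract does not name the statistical test used to compare pre- and post-fire measurements; as a minimal sketch, a paired comparison of one soil property per plot could look like the following (the values are illustrative placeholders, not the study's data):

```python
# Minimal sketch: paired pre/post-fire comparison of one soil property.
# Values are illustrative placeholders, not measurements from the study.
from scipy import stats

ph_pre  = [4.1, 4.3, 4.0, 4.2, 4.4]   # hypothetical plot-level pH before burning
ph_post = [4.2, 4.3, 4.1, 4.2, 4.5]   # hypothetical pH after burning, same plots

t_stat, p_value = stats.ttest_rel(ph_pre, ph_post)  # paired t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A large p-value would be consistent with "no significant impact".
```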
Abstract:
In the current context of serious climate change, where the increased frequency of some extreme events can raise the number of periods prone to high-intensity forest fires, the National Forest Authority regularly implements, in several Portuguese forest areas, a set of measures to control the amount of available fuel mass (PNDFCI, 2008). In the present work we present a preliminary analysis of the consequences of prescribed fire measures, used to control fuel mass, for soil recovery, in particular in terms of water retention capacity, organic matter content, pH and iron content. This work is part of a larger study (Meira-Castro, 2009(a); Meira-Castro, 2009(b)). Following the established practice of organizing the collected data into multidimensional matrices of n columns (variables under analysis) by p rows (areas sampled at different depths), and considering the quantitative nature of the data in this study, we chose a methodological approach based on multivariate statistical analysis, in particular Principal Component Analysis (PCA) (Góis, 2004). The experiments were carried out on a soil cover over a natural site of andalusite schist in Gramelas, Caminha, NW Portugal, which had remained free of prescribed burning for four years and was subjected to prescribed fire in March 2008. The soil samples were collected from five different plots at six different time periods. The adopted methodological approach allowed us to identify the most relevant relational structures within the n variables, within the p samples, and in both sets simultaneously (Garcia-Pereira, 1990). Consequently, and in addition to the traditional outputs produced by PCA, we analyzed the influence of both sampling depth and geomorphological environment on the behaviour of all variables involved.
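A minimal sketch of the PCA step described above, assuming the data are arranged with soil variables as columns and samples (plot at a given depth) as rows; the column meanings and values are illustrative, not the study's data:

```python
# Minimal PCA sketch for an n-variable by p-sample soil data matrix.
# Column meanings and values are illustrative placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Rows: sampled areas at different depths; columns: variables under analysis
# (water retention %, organic matter %, pH, Fe mg/kg).
X = np.array([
    [32.1, 5.2, 4.1, 120.0],
    [30.4, 4.8, 4.3,  95.0],
    [28.9, 4.5, 4.2, 110.0],
    [35.0, 6.1, 4.0, 130.0],
    [31.2, 5.0, 4.4, 101.0],
])

X_std = StandardScaler().fit_transform(X)   # standardize so each variable weighs equally
pca = PCA(n_components=2).fit(X_std)

print("explained variance ratio:", pca.explained_variance_ratio_)
print("loadings (components x variables):\n", pca.components_)
scores = pca.transform(X_std)               # sample coordinates on the principal components
```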
Abstract:
The aim of this work was to assess the influence of meteorological conditions on the dispersion of particulate matter from an industrial zone into urban and suburban areas. The particulate matter concentration was related to the most important meteorological variables, such as wind direction, velocity and frequency. A coal-fired power plant, with two stacks 225 m high, was considered to be the main emission source. The midpoint between the two stacks was taken as the centre of two concentric circles with 6 and 20 km radii delimiting the sampling area. About 40 sampling collectors were placed within this area. Meteorological data were obtained from a portable meteorological station placed approximately 1.7 km SE of the stacks. Additional data were obtained from the electrical company that runs the coal power plant; these data cover the years from 2006 to the present. A detailed statistical analysis was performed to identify the most frequent meteorological conditions, mainly concerning wind speed and direction. This analysis revealed that the most frequent winds blow from the northwest and north, and that the strongest winds blow from the northwest. Particulate matter deposition was measured in two sampling campaigns, carried out in summer and in spring. For the first campaign the monthly average deposition flux was 1.90 g/m² and for the second campaign it was 0.79 g/m². Wind dispersion occurred predominantly from north to south, away from the nearest residential area, located about 6 km northwest of the stacks. Nevertheless, the highest deposition fluxes occurred in the NW/N and NE/E quadrants. This study considered only the contribution of particulate matter from coal combustion; however, other sources, such as road traffic, may also be present. Additional chemical analyses and microanalysis are needed to link the sources to the deposition flux levels.
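A minimal sketch of the wind-frequency analysis described above, assuming a series of wind direction and speed records binned into eight compass sectors; the records below are illustrative, not the station's data:

```python
# Minimal sketch: frequency of wind-direction sectors and mean speed per sector.
# The records are illustrative placeholders, not the station's data.
import numpy as np

directions_deg = np.array([310, 320, 350, 10, 300, 330, 45, 315])   # hypothetical wind direction (deg)
speeds_ms      = np.array([5.1, 6.0, 3.2, 2.8, 7.4, 6.5, 2.0, 5.9]) # hypothetical wind speed (m/s)

sectors = ["N", "NE", "E", "SE", "S", "SW", "W", "NW"]
# Each 45-degree sector is centred on its compass direction (N spans 337.5-22.5 degrees).
idx = ((directions_deg + 22.5) % 360 // 45).astype(int)

for s, name in enumerate(sectors):
    mask = idx == s
    if mask.any():
        freq = mask.mean() * 100
        print(f"{name}: {freq:.0f}% of records, mean speed {speeds_ms[mask].mean():.1f} m/s")
```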
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the master's degree in Strategic Management of Public Relations.
Abstract:
It is very difficult to make paleoclimatic correlations between continental and marine areas, but it is possible with biostratigraphic data. Reliable correlations can be made only between broad periods: between 3.5 and 3 Ma, around 2.4 Ma, until 1.6 Ma and after 1.6 Ma. The arid Mediterranean phases led to the disappearance of the European Villafranchian fauna (1.0 Ma).
Abstract:
To meet the increasing demands of complex inter-organizational processes and the demand for continuous innovation and internationalization, new forms of organisation are being adopted, fostering more intensive collaboration processes and the sharing of resources, in what can be called collaborative networks (Camarinha-Matos, 2006:03). Information and knowledge are crucial resources in collaborative networks, and their management is a fundamental process to optimize. Knowledge organisation and collaboration systems are thus important instruments for the success of collaborative networks of organisations, and have been researched in the last decade in the areas of computer science, information science, management sciences, terminology and linguistics. Nevertheless, research in this area has not given much attention to multilingual contexts of collaboration, which pose specific and challenging problems. It is clear that access to and representation of knowledge will happen more and more in multilingual settings, which implies overcoming the difficulties inherent to the presence of multiple languages, through processes such as the localization of ontologies. Although localization, like other processes that involve multilingualism, is a rather well-developed practice, with methodologies and tools fruitfully employed by the language industry in the development and adaptation of multilingual content, it has not yet been sufficiently explored as an element of support for the development of knowledge representations - in particular ontologies - expressed in more than one language. Multilingual knowledge representation is thus an open research area calling for cross-contributions from knowledge engineering, terminology, ontology engineering, cognitive sciences, computational linguistics, natural language processing, and management sciences. This workshop brought together researchers interested in multilingual knowledge representation, in a multidisciplinary environment, to debate the possibilities of cross-fertilization between these fields as applied to contexts where multilingualism continuously creates new and demanding challenges for current knowledge representation methods and techniques. Six papers dealing with different approaches to multilingual knowledge representation are presented, most of them describing tools, approaches and results obtained in ongoing projects. In the first paper, Andrés Domínguez Burgos, Koen Kerremans and Rita Temmerman present a software module that is part of a workbench for terminological and ontological mining: Termontospider, a wiki crawler that aims to optimally traverse Wikipedia in search of domain-specific texts for extracting terminological and ontological information. The crawler is part of a tool suite for automatically developing multilingual termontological databases, i.e. ontologically underpinned multilingual terminological databases. The authors describe the basic principles behind the crawler and summarize the research setting in which the tool is currently being tested. In the second paper, Fumiko Kano presents a comparison of four feature-based similarity measures derived from cognitive sciences.
The purpose of the comparative analysis presented by the author is to identify the potentially most effective model for mapping independent ontologies in a culturally influenced domain. To this end, datasets based on standardized, pre-defined feature dimensions and values, obtainable from the UNESCO Institute for Statistics (UIS), have been used to compare the similarity measures on objectively developed data. According to the author, the results demonstrate that the Bayesian Model of Generalization provides the most effective cognitive model for identifying the most similar corresponding concepts for a targeted socio-cultural community. In another presentation, Thierry Declerck, Hans-Ulrich Krieger and Dagmar Gromann present ongoing work and propose an approach to the automatic extraction of information from multilingual financial Web resources, to provide candidate terms for building ontology elements or instances of ontology concepts. The authors present an approach complementary to the direct localization/translation of ontology labels: acquiring terminologies by accessing and harvesting the multilingual Web presences of structured information providers in the field of finance. This leads to the detection of candidate terms in various multilingual sources in the financial domain that can be used not only as labels of ontology classes and properties but also for the possible generation of (multilingual) domain ontologies themselves. In the next paper, Manuel Silva, António Lucas Soares and Rute Costa claim that, despite the availability of tools, resources and techniques aimed at the construction of ontological artifacts, developing a shared conceptualization of a given reality still raises questions about the principles and methods that support the initial phases of conceptualization. These questions become, according to the authors, more complex when the conceptualization occurs in a multilingual setting. To tackle these issues the authors present a collaborative platform - conceptME - where terminological and knowledge representation processes support domain experts throughout a conceptualization framework, allowing the inclusion of multilingual data as a way to promote knowledge sharing, enhance conceptualization and support a multilingual ontology specification. In another presentation, Frieda Steurs and Hendrik J. Kockaert present TermWise, a large project dealing with legal terminology and phraseology for the Belgian public services, i.e. the translation office of the ministry of justice. The project aims at developing an advanced tool that includes expert knowledge in the algorithms extracting specialized language from textual data (legal documents), and whose outcome is a knowledge database of Dutch/French equivalents for legal concepts, enriched with the phraseology related to the terms under discussion. Finally, Deborah Grbac, Luca Losito, Andrea Sada and Paolo Sirito report on the preliminary results of a pilot project currently ongoing at the UCSC Central Library, where they propose to adapt, for subject librarians employed in large and multilingual academic institutions, the model used by translators working within European Union institutions.
The authors use User Experience (UX) analysis to provide subject librarians with visual support, by means of “ontology tables” depicting conceptual links and connections of words with concepts, presented according to their semantic and linguistic meaning. The organizers hope that the selection of papers presented here will be of interest to a broad audience and will be a starting point for further discussion and cooperation.
Abstract:
The need for better adaptation of networks to transported flows has led to research on new approaches such as content aware networks and network aware applications. In parallel, recent developments of multimedia and content oriented services and applications such as IPTV, video streaming, video on demand, and Internet TV reinforced interest in multicast technologies. IP multicast has not been widely deployed due to interdomain and QoS support problems; therefore, alternative solutions have been investigated. This article proposes a management driven hybrid multicast solution that is multi-domain and media oriented, and combines overlay multicast, IP multicast, and P2P. The architecture is developed in a content aware network and network aware application environment, based on light network virtualization. The multicast trees can be seen as parallel virtual content aware networks, spanning a single or multiple IP domains, customized to the type of content to be transported while fulfilling the quality of service requirements of the service provider.
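The abstract describes the architecture only at a high level; as a rough, assumed sketch, one virtual content-aware multicast tree per content type, spanning one or more IP domains, could be represented as follows (all class and field names are hypothetical illustrations, not the paper's design):

```python
# Rough sketch: one virtual content-aware multicast tree per content type,
# spanning one or more IP domains. All names and fields are hypothetical
# illustrations of the described architecture, not the paper's design.
from dataclasses import dataclass, field
from typing import List

@dataclass
class DomainSegment:
    domain: str        # IP domain the tree segment crosses
    technology: str    # "ip-multicast", "overlay" or "p2p"

@dataclass
class VirtualMulticastTree:
    content_type: str                              # e.g. "iptv", "vod", "streaming"
    qos_class: str                                 # service-provider QoS requirement
    segments: List[DomainSegment] = field(default_factory=list)

tree = VirtualMulticastTree(
    content_type="iptv",
    qos_class="low-latency",
    segments=[
        DomainSegment("domainA.example", "ip-multicast"),
        DomainSegment("domainB.example", "overlay"),
    ],
)
print(tree)
```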
Abstract:
The clinical content of administrative databases includes, among others, patient demographic characteristics and codes for diagnoses and procedures. The data in these databases is standardized, clearly defined, readily available, less expensive than data collected by other means, and normally covers hospitalizations in entire geographic areas. Although with some limitations, this data is often used to evaluate the quality of healthcare. Under these circumstances, the quality of the data, for instance its errors or completeness, is of central importance and should never be ignored. Both the minimization of data quality problems and a deep knowledge of the data (e.g., how to select a patient group) are important for users to trust and correctly interpret results. In this paper we present, discuss and give some recommendations for some problems found in these administrative databases. We also present a simple tool that can be used to screen the quality of data through the use of domain-specific data quality indicators. These indicators can significantly contribute to better data, to steps towards a continuous increase in data quality and, certainly, to better-informed decision-making.
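The abstract does not list the indicators themselves; as a minimal sketch, one domain-specific data quality indicator over an administrative hospitalization table could be computed as below (the field names and the completeness rule are assumptions, not the paper's tool):

```python
# Minimal sketch: screening one domain-specific data quality indicator.
# Field names and the rule (every record should carry a principal diagnosis
# code) are assumptions for illustration, not the paper's tool.
records = [
    {"episode_id": 1, "sex": "F", "principal_diagnosis": "I21.0"},
    {"episode_id": 2, "sex": "M", "principal_diagnosis": ""},
    {"episode_id": 3, "sex": "",  "principal_diagnosis": "J18.9"},
]

def completeness(rows, field):
    """Share of records with a non-empty value for `field`."""
    filled = sum(1 for r in rows if r.get(field))
    return filled / len(rows)

for field in ("sex", "principal_diagnosis"):
    print(f"{field}: {completeness(records, field):.0%} complete")
```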
Abstract:
Phenylketonuria is an inborn error of metabolism involving, in most cases, deficient activity of phenylalanine hydroxylase. Neonatal diagnosis and a prompt special diet (low-phenylalanine and natural-protein-restricted diets) are essential to the treatment. The lack of data concerning the phenylalanine content of processed foodstuffs is an additional limitation for an already very restrictive diet. Our goals were to quantify protein (Kjeldahl method) and the content of 18 amino acids (HPLC/fluorescence) in 16 dishes specifically conceived for phenylketonuric patients, and to compare the most relevant results with those of several international food composition databases. As might be expected, all the meals contained low protein levels (0.67–3.15 g/100 g), with the highest ones occurring in boiled rice and potatoes. These foods also contained the highest amounts of phenylalanine (158.51 and 62.65 mg/100 g, respectively). In contrast to the other amino acids, it was possible to predict phenylalanine content based on protein alone. Slight deviations were observed when comparing our results with the different food composition databases.
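A minimal sketch of the kind of protein-to-phenylalanine prediction mentioned above, fitting a simple linear model to measured (protein, Phe) pairs; the sample values are illustrative placeholders, not the study's measurements:

```python
# Minimal sketch: predicting phenylalanine content from protein content alone.
# The (protein, Phe) pairs below are illustrative, not the study's data.
import numpy as np

protein_g_per_100g = np.array([0.67, 1.10, 1.80, 2.40, 3.15])
phe_mg_per_100g    = np.array([30.0, 52.0, 85.0, 118.0, 158.5])

# Least-squares fit of Phe = a * protein + b.
a, b = np.polyfit(protein_g_per_100g, phe_mg_per_100g, 1)
print(f"Phe ~ {a:.1f} * protein + {b:.1f} (mg/100 g)")

new_protein = 2.0
print(f"predicted Phe for {new_protein} g protein/100 g: {a * new_protein + b:.0f} mg/100 g")
```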
Abstract:
As e-learning gradually evolved, many specialized and disparate systems appeared to fulfil the needs of teachers and students, such as repositories of learning objects, authoring tools, intelligent tutors and automatic evaluators. This heterogeneity raises interoperability issues, giving the standardization of content an important role in e-learning. This article presents a survey of current e-learning content aggregation standards, focusing on their internal organization and packaging. This study is part of an effort to choose the most suitable specifications and standards for an e-learning framework called Ensemble, defined as a conceptual tool to organize a network of e-learning systems and services for domains with complex evaluation.
Abstract:
This article evaluates the sustainability and economic potential of microalgae grown in brewery wastewater for biodiesel and biomass production. Three sustainability indicators and two economic indicators were considered in the evaluation, within a life cycle perspective. The most efficient process units were selected for the production system. Results show that harvesting and oil separation are the main process bottlenecks. Microalgae with higher lipid content and productivity are desirable for biodiesel production, although comparable to other biofuel feedstocks in terms of sustainability. However, improvements are still needed to reach the performance level of fossil diesel. Profitability reaches a limit for larger cultivation areas, and is higher when the extracted biomass is sold together with the microalgae oil, in which case the influence of lipid content and areal productivity is smaller. The oil and/or biomass prices calculated to ensure that the process is economically sound are still very high compared with other fuel options, especially biodiesel.
Abstract:
Master's dissertation presented to the Instituto Superior de Contabilidade e Administração do Porto for the degree of Master in Digital Marketing, under the supervision of Paulo Gonçalves, MSc, and Madalena Vilas Boas, PhD. This version does not include the criticisms and suggestions of the jury members.
Abstract:
Hydatid disease in tropical areas poses a serious diagnostic problem due to the high frequency of cross-reactivity with other endemic helminthic infections. The enzyme-linked immunosorbent assay (ELISA) and the double diffusion arc 5 test showed sensitivities of 73% and 57% and specificities of 84-95% and 100%, respectively. However, the specificity of ELISA was greatly increased by using ovine serum and phosphorylcholine in the diluent buffer. The hydatid antigen obtained from ovine cyst fluid showed three main protein bands of 64, 58 and 30 kDa on SDS-PAGE and immunoblotting. Sera from patients with onchocerciasis, cysticercosis, toxocariasis and Strongyloides infection cross-reacted with the 64 and 58 kDa bands by immunoblotting. However, none of the analyzed sera recognized the 30 kDa band, which seems to be specific in this assay. Immunoblotting showed a sensitivity of 80% and a specificity of 100% when used to recognize the 30 kDa band.
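As a minimal sketch of how the reported sensitivity and specificity figures are derived from test outcomes, assuming counts of true/false positives and negatives (the counts below are illustrative, not the study's data):

```python
# Minimal sketch: sensitivity and specificity from a 2x2 confusion table.
# Counts are illustrative placeholders, not the study's data.
def sensitivity(tp, fn):
    """Proportion of infected patients the test detects."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of non-infected patients the test correctly rules out."""
    return tn / (tn + fp)

tp, fn = 44, 16   # hypothetical hydatid patients: detected vs. missed
tn, fp = 57, 3    # hypothetical controls: negative vs. cross-reacting

print(f"sensitivity: {sensitivity(tp, fn):.0%}")
print(f"specificity: {specificity(tn, fp):.0%}")
```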
Abstract:
Master's in Informatics Engineering - Area of Specialization in Knowledge and Decision Technologies