895 results for Measurement based model identification
Abstract:
Nowadays communication is shifting from a centralized scenario, in which media like newspapers, radio, and TV programs produce information and people are just consumers, to a completely different decentralized scenario, in which everyone is potentially an information producer through social networks, blogs, and forums that allow real-time worldwide information exchange. As a result of their widespread diffusion, these new instruments have started playing an important socio-economic role. They are the most used communication media and, as a consequence, they constitute the main source of information that enterprises, political parties, and other organizations can rely on. Analyzing data stored in servers all over the world is feasible by means of Text Mining techniques like Sentiment Analysis, which aims to extract opinions from huge amounts of unstructured text. This could help determine, for instance, user satisfaction with products, services, politicians, and so on. In this context, this dissertation presents new Document Sentiment Classification methods based on the mathematical theory of Markov Chains. All of these approaches rely on a Markov Chain based model that is language independent and whose key features are simplicity and generality, which make it attractive compared with previous, more sophisticated techniques. Every technique discussed has been tested in both Single-Domain and Cross-Domain Sentiment Classification settings, comparing its performance with that of two previous works. The analysis shows that some of the examined algorithms produce results comparable with the best methods in the literature, for both single-domain and cross-domain tasks, in 2-class (i.e., positive and negative) Document Sentiment Classification. However, there is still room for improvement: this work also indicates how performance could be enhanced, namely, a good novel feature selection process would be enough to outperform the state of the art. Furthermore, since some of the proposed approaches show promising results in 2-class Single-Domain Sentiment Classification, future work will also validate these results in tasks with more than 2 classes.
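Illustrative only: a minimal Python sketch of a Markov-chain document sentiment classifier in the spirit described above (one smoothed word-transition model per polarity class; a document is assigned the class under which its word sequence has the higher log-likelihood). The class name, smoothing scheme, and toy data are assumptions for the example, not the dissertation's actual algorithm.

```python
from collections import defaultdict
import math

class MarkovSentimentClassifier:
    """Per-class first-order Markov chain over word sequences (illustrative sketch)."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha   # Laplace smoothing constant
        self.counts = {}     # class -> {prev_word -> {word -> count}}
        self.vocab = set()

    def fit(self, docs, labels):
        for words, label in zip(docs, labels):
            trans = self.counts.setdefault(label, defaultdict(lambda: defaultdict(int)))
            for prev, curr in zip(words, words[1:]):
                trans[prev][curr] += 1
                self.vocab.update((prev, curr))

    def _log_likelihood(self, words, label):
        trans, v = self.counts[label], len(self.vocab)
        ll = 0.0
        for prev, curr in zip(words, words[1:]):
            row = trans.get(prev, {})
            total = sum(row.values())
            # Smoothed transition probability; unseen pairs keep a small mass.
            ll += math.log((row.get(curr, 0) + self.alpha) / (total + self.alpha * v))
        return ll

    def predict(self, words):
        return max(self.counts, key=lambda label: self._log_likelihood(words, label))

# Toy usage: two tiny training documents, one per polarity.
clf = MarkovSentimentClassifier()
clf.fit([["great", "product", "works", "well"],
         ["terrible", "product", "broke", "fast"]], ["pos", "neg"])
print(clf.predict(["works", "well"]))  # -> 'pos'
```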
Abstract:
Background: Urinary tract infections (UTI) are frequent in outpatients. Fast pathogen identification is mandatory for shortening the time of discomfort and preventing serious complications. Urine culture needs up to 48 hours for pathogen identification; consequently, the initial antibiotic regimen is empirical. Aim: To evaluate the feasibility of qualitative urine pathogen identification by a commercially available real-time PCR blood pathogen test (SeptiFast®) and to compare the results with dipslide and microbiological culture. Design of study: Pilot study with prospectively collected urine samples. Setting: University hospital. Methods: 82 prospectively collected urine samples from 81 patients with suspected UTI were included. Dipslide urine culture was followed by microbiological pathogen identification in dipslide-positive samples. In parallel, qualitative DNA-based pathogen identification (SeptiFast®) was performed in all samples. Results: 61 samples were SeptiFast® positive, whereas 67 samples were dipslide culture positive. The inter-methodological concordance of positive and negative findings in the gram-positive, gram-negative and fungi sectors was 371/410 (90%), 477/492 (97%) and 238/246 (97%), respectively. Sensitivity and specificity of the SeptiFast® test for the detection of an infection were 0.82 and 0.60, respectively. SeptiFast® pathogen identifications were available at least 43 hours before culture results. Conclusion: The SeptiFast® platform identified bacterial DNA in urine specimens considerably faster than conventional culture. For UTI diagnosis, its sensitivity and specificity are limited by its present qualitative setup, which does not allow pathogen quantification. Future quantitative assays may hold promise for PCR-based UTI pathogen identification as a supplement to conventional culture methods.
Abstract:
Despite numerous studies of nitrogen cycling in forest ecosystems, many uncertainties remain, especially regarding longer-term nitrogen accumulation. To help fill this gap, the dynamic process-based model TRACE, which can simulate 15N tracer redistribution in forest ecosystems, was used to study N cycling processes in a mountain spruce forest at the northern edge of the Alps in Switzerland (Alptal, SZ). Most modeling analyses of N cycling and C-N interactions have very limited ability to determine whether the process interactions are captured correctly. Because the interactions in such a system are complex, it is possible to get whole-system C and N cycling right in a model without really knowing whether the way the model combines fine-scale interactions to derive whole-system cycling is correct. With its ability to simulate 15N tracer redistribution in ecosystem compartments, TRACE provides a very powerful tool for validating the fine-scale processes captured by the model. We first adapted the model to the new site (Alptal, Switzerland; long-term low-dose N-amendment experiment) by including a new algorithm for preferential water flow and by parameterizing differences in drivers such as climate, N deposition and initial site conditions. After calibrating key rates such as NPP and SOM turnover, we simulated patterns of 15N redistribution and compared them against 15N field observations from a large-scale labeling experiment. The comparison of 15N field data with the modeled redistribution of the tracer in the soil horizons and vegetation compartments shows that the majority of fine-scale processes are captured satisfactorily. In particular, the model is able to reproduce the fact that the largest part of the N deposition is immobilized in the soil. The discrepancies in 15N recovery in the LF and M soil horizons can be explained by the application method of the tracer and by the retention of the applied tracer by the well-developed moss layer, which is not considered in the model. Discrepancies in the dynamics of foliage and litterfall 15N recovery were also observed and are related to the longevity of the needles in our mountain forest. As a next step, we will use the final Alptal version of the model to calculate the effects of climate change (temperature, CO2) and N deposition on ecosystem C sequestration in this regionally representative Norway spruce (Picea abies) stand.
Abstract:
OBJECTIVES: Donation after circulatory declaration of death (DCDD) could significantly increase the number of cardiac grafts available for transplantation. Graft evaluation is particularly important in the setting of DCDD given that conditions of cardio-circulatory arrest and warm ischaemia differ, leading to variable tissue injury. The aim of this study was to identify, at the time of heart procurement, means to predict contractile recovery following cardioplegic storage and reperfusion using an isolated rat heart model. Identification of reliable approaches to evaluate cardiac grafts is key in the development of protocols for heart transplantation with DCDD. METHODS: Hearts isolated from anaesthetized male Wistar rats (n = 34) were exposed to various perfusion protocols. To simulate DCDD conditions, rats were exsanguinated and maintained at 37°C for 15-25 min (warm ischaemia). Isolated hearts were perfused with modified Krebs-Henseleit buffer for 10 min (unloaded), arrested with cardioplegia, stored for 3 h at 4°C and then reperfused for 120 min (unloaded for 60 min, then loaded for 60 min). Left ventricular (LV) function was assessed using an intraventricular micro-tip pressure catheter. Statistical significance was determined using the non-parametric Spearman rho correlation analysis. RESULTS: After 120 min of reperfusion, recovery of LV work measured as the developed pressure (DP)-heart rate (HR) product ranged from 0 to 15 ± 6.1 mmHg·beats·min⁻¹·10⁻³ following warm ischaemia of 15-25 min. Several haemodynamic parameters measured during early, unloaded perfusion at the time of heart procurement, including HR and the peak systolic pressure-HR product, correlated significantly with contractile recovery after cardioplegic storage and 120 min of reperfusion (P < 0.001). Coronary flow, oxygen consumption and lactate dehydrogenase release also correlated significantly with contractile recovery following cardioplegic storage and 120 min of reperfusion (P < 0.05). CONCLUSIONS: Haemodynamic and biochemical parameters measured at the time of organ procurement could serve as predictive indicators of contractile recovery. We believe that evaluation of graft suitability is feasible prior to transplantation with DCDD and may, consequently, increase donor heart availability.
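As a pointer to the kind of analysis named above, here is a minimal Python sketch of a Spearman rho correlation between a procurement-time parameter and contractile recovery; the variable names and values are invented for illustration, not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical procurement-time heart rates (beats/min) and contractile
# recovery after 120 min of reperfusion (% of baseline); illustrative only.
heart_rate = [45, 62, 58, 71, 50, 66, 74, 53]
recovery   = [10, 48, 35, 70, 18, 55, 80, 25]

rho, p_value = spearmanr(heart_rate, recovery)  # rank-based, non-parametric
print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")
```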
Abstract:
Large power transformers, an aging and vulnerable part of our energy infrastructure, sit at choke points in the grid and are key to its reliability and security. Damage or destruction due to vandalism, misoperation, or other unexpected events is of great concern, given replacement costs upward of $2M and lead times of 12 months. Transient overvoltages can cause great damage, and there is much interest in improving computer simulation models to correctly predict and avoid the consequences. EMTP (the Electromagnetic Transients Program) has been developed for computer simulation of power system transients, and component models for most equipment have been developed and benchmarked. Power transformers would appear to be simple. However, due to their nonlinear and frequency-dependent behavior, they can be among the most complex system components to model. It is imperative that the applied models be appropriate for the range of frequencies and excitation levels that the system experiences. In this sense, transformer modeling is not yet a mature field, and newer, improved models must be made available. In this work, improved topologically correct duality-based models are developed for three-phase autotransformers having five-legged, three-legged, and shell-form cores. The main problem in the implementation of detailed models is the lack of complete and reliable data, as no international standard suggests how to measure and calculate parameters. Therefore, parameter estimation methods are developed here to determine the parameters of a given model in cases where the available information is incomplete. The transformer nameplate data are required, and relative physical dimensions of the core are estimated. The models include a separate representation of each segment of the core, including core hysteresis, the λ-i saturation characteristic, capacitive effects, and frequency dependence of winding resistance and core loss. Steady-state excitation, de-energization and re-energization transients are simulated and compared with an earlier-developed BCTRAN-based model. Black-start energization cases are also simulated as a means of model evaluation and compared with actual event records. The simulated results using the model developed here are reasonable and more accurate than those of the BCTRAN-based model. Simulation accuracy depends on the accuracy of the equipment model and its parameters. This work is significant in that it advances existing parameter estimation methods in cases where the available data and measurements are incomplete. The accuracy of EMTP simulation for power systems including three-phase autotransformers is thus enhanced. Theoretical results obtained from this work provide a sound foundation for the development of transformer parameter estimation methods using engineering optimization. In addition, it should be possible to refine which information and measurement data are necessary for complete duality-based transformer models. To further refine and develop the models and the parameter estimation methods developed here, iterative full-scale laboratory tests using high-voltage, high-power three-phase transformers would be helpful.
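As a small illustration of the parameter estimation task described above, the sketch below fits a simple two-parameter Frolich-type λ-i saturation curve to a handful of excitation-test points. The functional form, names, and data are assumptions chosen for the example; they are not the duality-based models or estimation methods developed in this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def frolich(i, a, b):
    """Two-parameter Frolich-type saturation curve: lambda = i / (a + b*i)."""
    return i / (a + b * i)

# Hypothetical excitation-test points: peak current (A) vs. flux linkage (Wb-turns).
i_meas   = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
lam_meas = np.array([0.55, 0.85, 1.15, 1.35, 1.47])

(a, b), _ = curve_fit(frolich, i_meas, lam_meas, p0=[1.0, 0.5])
print(f"a = {a:.3f}, b = {b:.3f}; asymptotic flux linkage ~ {1 / b:.2f} Wb-turns")
```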
Abstract:
Context. The abundance of deuterium in the interstellar gas in front of the Sun gives insight into the processes of filtration of neutral interstellar species through the heliospheric interface and, potentially, into the chemical evolution of the Galactic gas. Aims: We investigate the possibility of detecting neutral interstellar deuterium at 1 AU from the Sun by direct sampling with the Interstellar Boundary Explorer (IBEX). Methods: Using both previous and the most recent determinations of the flow parameters of neutral gas in the local interstellar cloud (LIC) and an observation-based model of solar radiation pressure and ionization in the heliosphere, we simulated the flux of neutral interstellar D at IBEX for the actual measurement conditions. We assessed the number of interstellar D atom counts expected during the first three years of IBEX operation. We also simulated the observations expected during an epoch of high solar activity. In addition, we calculated the expected counts of D atoms sputtered from the thin terrestrial water layer covering the IBEX-Lo conversion surface by neutral interstellar He atoms. Results: Most D counts registered by IBEX-Lo are expected to come from the water layer, exceeding the interstellar signal by two orders of magnitude. However, the sputtering should stop once the Earth leaves the portion of the orbit traversed by interstellar He atoms. We identify seasons during the year when mostly genuine interstellar D atoms are expected in the signal. During the first three years of IBEX operations, about 2 detectable interstellar D atoms are expected. This number is comparable to the expected number of sputtered D atoms registered during the same time intervals. Conclusions: The most favorable conditions for the detection occur during low solar activity, in an interval including March and April each year. The detection chances could be improved by extending the instrument duty cycle, for example by making observations in the special deuterium mode of IBEX-Lo.
Abstract:
The induction of late long-term potentiation (L-LTP) involves complex interactions among second-messenger cascades. To gain insight into these interactions, a mathematical model was developed for L-LTP induction in the CA1 region of the hippocampus. The differential equation-based model represents actions of protein kinase A (PKA), MAP kinase (MAPK), and CaM kinase II (CaMKII) in the vicinity of the synapse, and activation of transcription by CaM kinase IV (CaMKIV) and MAPK. L-LTP is represented by increases in a synaptic weight. Simulations suggest that steep, supralinear stimulus-response relationships between stimuli (e.g., elevations in [Ca²⁺]) and kinase activation are essential for translating brief stimuli into long-lasting gene activation and synaptic weight increases. Convergence of multiple kinase activities to induce L-LTP helps to generate a threshold whereby the amount of L-LTP varies steeply with the number of brief (tetanic) electrical stimuli. The model simulates tetanic, theta-burst, pairing-induced, and chemical L-LTP, as well as L-LTP due to synaptic tagging. The model also simulates inhibition of L-LTP by inhibition of MAPK, CaMKII, PKA, or CaMKIV. The model predicts results of experiments to delineate mechanisms underlying L-LTP induction and expression. For example, the cAMP antagonist Rp-cAMPS, which inhibits L-LTP induction, is predicted to inhibit ERK activation. The model also appears useful for clarifying similarities and differences between hippocampal L-LTP and long-term synaptic strengthening in other systems.
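A minimal sketch of the modeling idea, assuming a single lumped kinase with a steep Hill-type (supralinear) activation that drives a slowly decaying synaptic-weight variable. This is an illustrative reduction, not the published model's equations or parameters.

```python
from scipy.integrate import solve_ivp

def lltp_rhs(t, y, ca):
    """Lumped kinase k activated supralinearly by Ca; weight w driven by k."""
    k, w = y
    c = ca(t)
    activation = c**4 / (c**4 + 1.0)  # Hill coefficient 4: steep, supralinear
    dk = 5.0 * activation - k         # fast kinase activation and decay
    dw = 0.1 * k - 1e-4 * (w - 1.0)   # weight rises with k, decays very slowly
    return [dk, dw]

# Brief "tetanus": calcium elevated for 1 s, near-baseline otherwise.
ca = lambda t: 2.0 if t < 1.0 else 0.1

sol = solve_ivp(lltp_rhs, [0.0, 600.0], [0.0, 1.0], args=(ca,), max_step=0.5)
print(f"synaptic weight after 10 min: {sol.y[1, -1]:.2f}")  # > 1 indicates potentiation
```

Because the Hill activation is steep, the brief calcium pulse produces a lasting weight increase while the low baseline calcium contributes almost nothing, which is the threshold behavior the abstract describes.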
Abstract:
In situ diffusion experiments are performed in geological formations at underground research laboratories to overcome the limitations of laboratory diffusion experiments and to investigate scale effects. Tracer concentrations are monitored at the injection interval during the experiment (dilution data) and measured from host rock samples around the injection interval at the end of the experiment (overcoring data). Diffusion and sorption parameters are derived by inverse numerical modeling of the measured tracer data. The identifiability and the uncertainties of tritium and ²²Na⁺ diffusion and sorption parameters are studied here by synthetic experiments having the same characteristics as the in situ diffusion and retention (DR) experiment performed on Opalinus Clay. Contrary to previous identifiability analyses of in situ diffusion experiments, which used either dilution or overcoring data at approximate locations, our analysis of parameter identifiability relies simultaneously on dilution and overcoring data, accounts for the actual position of the overcoring samples in the claystone, uses realistic values of the standard deviation of the measurement errors, relies on model identification criteria to select the most appropriate hypothesis about the existence of a borehole disturbed zone, and addresses the effect of errors in the location of the sampling profiles. The simultaneous use of dilution and overcoring data provides accurate parameter estimates in the presence of measurement errors, allows identification of the right hypothesis about the borehole disturbed zone, and diminishes other model uncertainties such as those caused by errors in the volume of the circulation system and in the effective diffusion coefficient of the filter. Proper interpretation of the experiment requires the right hypothesis about the borehole disturbed zone; a wrong assumption leads to large estimation errors, and the use of model identification criteria helps in selecting the best model. Small errors in the depth of the overcoring samples lead to large parameter estimation errors; therefore, attention should be paid to minimizing errors in positioning the depth of the samples. The results of the identifiability analysis do not depend on the particular realization of random numbers.
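The model-identification-criteria step can be illustrated with a generic Python sketch: two competing models are fitted to noisy synthetic observations and the Akaike Information Criterion (AIC) selects between them. The models and data below are stand-ins for the idea, not the DR experiment's transport model.

```python
import numpy as np
from scipy.optimize import curve_fit

def aic(n, rss, k):
    """AIC for least-squares fits: n points, residual sum of squares rss, k parameters."""
    return n * np.log(rss / n) + 2 * k

def model_simple(t, a, tau):
    return a * np.exp(-t / tau)

def model_disturbed(t, a, tau1, f, tau2):
    # Extra fast compartment mimicking, e.g., a borehole disturbed zone.
    return a * ((1 - f) * np.exp(-t / tau1) + f * np.exp(-t / tau2))

rng = np.random.default_rng(0)
t = np.linspace(0.1, 100.0, 40)
obs = model_disturbed(t, 1.0, 30.0, 0.3, 3.0) + rng.normal(0.0, 0.01, t.size)

for name, model, p0 in [("simple", model_simple, [1.0, 20.0]),
                        ("disturbed", model_disturbed, [1.0, 20.0, 0.5, 1.0])]:
    popt, _ = curve_fit(model, t, obs, p0=p0, maxfev=10000)
    rss = float(np.sum((obs - model(t, *popt)) ** 2))
    print(f"{name}: AIC = {aic(t.size, rss, len(popt)):.1f}")
# The lower AIC should select the two-compartment (disturbed-zone) model.
```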
Abstract:
PURPOSE: To reliably determine the amplitude of the transmit radiofrequency (B1+) field in moving organs like the liver and heart, where most current techniques are usually not feasible. METHODS: A B1+ field measurement based on the Bloch-Siegert shift induced by a pair of Fermi pulses in a double-triggered modified Point RESolved Spectroscopy (PRESS) sequence with motion-compensated crusher gradients was developed. Performance of the sequence was tested in moving phantoms and in the muscle, liver, and heart of six healthy volunteers each, using different arrangements of transmit/receive coils. RESULTS: B1+ determination in a moving phantom was almost independent of the type and amplitude of the motion and agreed well with theory. In vivo, repeated measurements led to very small coefficients of variance (CV) if the amplitude of the Fermi pulse was chosen above an appropriate level (CV in muscle 0.6%, liver 1.6%, and heart 2.3% with moderate amplitude of the Fermi pulses, and 1.2% with stronger Fermi pulses). CONCLUSION: The proposed sequence provides a very robust determination of B1+ in a single voxel even under challenging conditions (transmission with a surface coil, or measurements in the heart without breath-hold).
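For context, the standard Bloch-Siegert B1+ mapping relation from the general literature (not the specifics of this sequence): an off-resonance pulse at frequency offset ω_off ≫ γB1 adds a phase shift proportional to B1 squared,

```latex
\phi_{BS} \,=\, \int_0^{T} \frac{\left(\gamma B_1^{+}(t)\right)^{2}}{2\,\omega_{\mathrm{off}}(t)}\,dt
\qquad\Rightarrow\qquad
B_{1,\mathrm{peak}}^{+} \,=\, \sqrt{\phi_{BS}/K_{BS}},
```

where K_BS is a constant fixed by the pulse shape and offset; acquiring the phase at ±ω_off and taking the difference doubles the sensitivity and cancels phase contributions unrelated to B1.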
Abstract:
This study provides a review of the current alcoholism planning process of the Houston-Galveston Area Council, an agency carrying out planning for a thirteen-county region surrounding Houston, Texas. The four central groups involved in this planning are identified, and the role that each plays and how it affects the planning outcomes is discussed. The most substantive outcome of the Houston-Galveston Area Council's alcoholism planning, the Regional Alcoholism/Alcohol Abuse Plan, is examined. Many of the shortcomings of the data provided, and the lack of other data necessary for planning, are noted. A problem-oriented planning model is presented as an alternative to the Houston-Galveston Area Council's current service-oriented approach to alcoholism planning. The five primary phases of the model (identification of the problem, statement of objectives, selection of alternative programs, implementation, and evaluation) are presented, and an overview of the tasks involved in applying this model to alcoholism planning is offered. A specific aspect of the model, the use of problem status indicators, is explored using cirrhosis and suicide mortality data. A review of the literature suggests that, based on five criteria (availability, subgroup identification, validity, reliability, and sensitivity), both suicide and cirrhosis are suitable as indicators of the alcohol problem when combined with other indicators. Cirrhosis and suicide mortality data are examined for the thirteen-county Houston-Galveston Region for the years 1969 through 1976. Data limitations preclude definite conclusions concerning the alcohol problem in the region. Three hypotheses about the nature of the regional alcohol problem are presented. First, there appears to be no linear trend in the number of alcoholics at risk of suicide and cirrhosis mortality. Second, the number of alcoholics in the metropolitan areas seems to be greater than the number in rural areas. Third, the number of male alcoholics at risk of cirrhosis and suicide mortality is greater than the number of female alcoholics.
Abstract:
Measurement of the absorbed dose from ionizing radiation in medical applications is an essential component of providing safe and reproducible patient care. There is a wide variety of tools available for measuring radiation dose; this work focuses on the characterization of two common solid-state dosimeters in medical applications: thermoluminescent dosimeters (TLD) and optically stimulated luminescent dosimeters (OSLD). There were two main objectives to this work. The first objective was to evaluate the energy dependence of TLD and OSLD for non-reference measurement conditions in a radiotherapy environment. The second objective was to fully characterize the OSLD nanoDot in a CT environment and to provide validated calibration procedures for CT dose measurement using OSLD. Current protocols for dose measurement using TLD and OSLD generally assume a constant photon energy spectrum within a nominal beam energy, regardless of measurement location, tissue composition, or changes in beam parameters. Variations in the energy spectrum of therapeutic photon beams may affect the response of TLD and OSLD and could thereby result in an incorrect measure of dose unless these differences are accounted for. In this work, we used a Monte Carlo based model to simulate variations in the photon energy spectra of a Varian 6 MV beam, and then evaluated the impact of the perturbations in energy spectra on the response of both TLD and OSLD using Burlin cavity theory. Energy response correction factors were determined for a range of conditions and compared to measured correction factors with good agreement. When using OSLD for dose measurement in a diagnostic imaging environment, photon energy spectra are often referenced to a therapy-energy or orthovoltage photon beam, commonly 250 kVp, Co-60, or even 6 MV, where the spectra are substantially different. Appropriate calibration techniques specifically for the OSLD nanoDot in a CT environment have not been presented in the literature; furthermore, the dependence of the energy response on the calibration energy has not been emphasized. The results of this work include detailed calibration procedures for CT dosimetry using OSLD, and a full characterization of this dosimetry system in a low-dose, low-energy setting.
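For reference, the Burlin general cavity relation invoked above (standard theory rather than a result of this work) interpolates the cavity-to-medium dose ratio between the small-cavity and large-cavity limits:

```latex
f \,=\, \frac{\bar{D}_{\mathrm{cav}}}{D_{\mathrm{med}}}
  \,=\, d\,\bar{s}_{\mathrm{cav,med}} \,+\, (1-d)\,\left(\bar{\mu}_{en}/\rho\right)_{\mathrm{cav,med}},
```

where d → 1 recovers the Bragg-Gray mass stopping-power ratio and d → 0 the mass energy-absorption coefficient ratio.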
Abstract:
A time series of fCO2, SST, and fluorescence data was collected between 1995 and 1997 by a CARIOCA buoy moored at the DyFAMed station (Dynamique des Flux Atmosphériques en Méditerranée) located in the northwestern Mediterranean Sea. On seasonal timescales, the spring phytoplankton bloom decreases the surface water fCO2 to approximately 290 µatm, followed by summer heating and a strong increase in fCO2 to a maximum of approximately 510 µatm. While the ΔfCO2 shows strong variations on seasonal timescales, the annual average air-sea disequilibrium is only 2 µatm. Temperature-normalized fCO2 shows a continued decrease in dissolved CO2 throughout the summer and fall at a rate of approximately 0.6 µatm/d. The calculated annual air-sea CO2 transfer rate is −0.10 to −0.15 mol CO2 m⁻² yr⁻¹, with these low values reflecting the relatively weak wind speed regime and small annual air-sea fCO2 disequilibrium. Extrapolating this rate over the whole Mediterranean Sea would lead to a flux of approximately −3 × 10¹² to −4.5 × 10¹² g C/yr, in good agreement with other estimates. An analysis of the effects of sampling frequency on annual air-sea CO2 flux estimates showed that monthly sampling is adequate to resolve the annual CO2 flux to within approximately ±10-18% at this site. Annual flux estimates made using temperature-derived fCO2 based on the measured fCO2-SST correlations agree with measurement-based calculations to within ±7-10% (depending on the gas transfer parameterization used), and suggest that annual CO2 flux estimates may be reasonably well predicted in this region from satellite- or model-derived SST and wind speed information.
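The basin-wide extrapolation can be checked with one line of arithmetic, assuming a Mediterranean surface area of roughly 2.5 × 10¹² m² (an approximate standard value):

```latex
0.10\text{--}0.15\ \mathrm{mol\,CO_2\,m^{-2}\,yr^{-1}}
\times 12\ \mathrm{g\,C\,mol^{-1}}
\times 2.5\times 10^{12}\ \mathrm{m^{2}}
\;\approx\; 3\text{--}4.5\times 10^{12}\ \mathrm{g\,C\,yr^{-1}}.
```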
Abstract:
Using a new Admittance-based model for electrical noise able to handle Fluctuations and Dissipations of electrical energy, we explain the phase noise of oscillators that use feedback around L-C resonators. We show that Fluctuations produce the Line Broadening of their output spectrum around its mean frequency f0 and that the Pedestal of phase noise far from f0 comes from Dissipations modified by the feedback electronics. The charge noise power 4FkT/R C²/s that disturbs the otherwise periodic fluctuation of charge these oscillators aim to sustain in their L-C-R resonator is what creates their phase noise, proportional to Leeson's noise figure F and to the charge noise power 4kT/R C²/s of their capacitance C that today's modelling would consider as the current noise density in A²/Hz of their resistance R. Linked with this (A²/Hz ↔ C²/s) equivalence, R becomes a random series in time of discrete chances to Dissipate energy in Thermal Equilibrium (TE), giving a similar series of discrete Conversions of electrical energy into heat when the resonator is out of TE due to the Signal power it handles. Therefore, phase noise reflects the way oscillators sense thermal exchanges of energy with their environment.
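The unit equivalence invoked above follows from a one-line dimensional identity,

```latex
\mathrm{A^{2}/Hz} \,=\, \left(\mathrm{C/s}\right)^{2}\cdot\mathrm{s} \,=\, \mathrm{C^{2}/s},
```

so a current noise density 4kT/R expressed in A²/Hz can equally be read as a rate of charge-noise power in C²/s.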
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
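As a toy illustration of what a POS tagger produces, here is a hedged Python sketch using NLTK's off-the-shelf tagger; it assumes NLTK and its tokenizer and tagger data packages are installed, and the exact tags shown are indicative rather than guaranteed.

```python
import nltk  # assumes NLTK plus its tokenizer and perceptron-tagger data are installed

tokens = nltk.word_tokenize("Linguistic annotation tools are important assets.")
print(nltk.pos_tag(tokens))
# Expected output (Penn Treebank tags), roughly:
# [('Linguistic', 'JJ'), ('annotation', 'NN'), ('tools', 'NNS'),
#  ('are', 'VBP'), ('important', 'JJ'), ('assets', 'NNS'), ('.', '.')]
```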
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
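A rough worked example of this error propagation, with hypothetical accuracy figures: if a POS tagger is correct 90% of the time and a sense tagger essentially requires a correct POS tag, then even a sense tagger that is 95% accurate given correct input yields roughly

```latex
P(\text{sense correct}) \approx 0.90 \times 0.95 \approx 0.86,
```

so the upstream tool's error rate effectively caps the downstream tool's quality.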
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the problems and limitations of linguistic annotation tools mentioned above.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based