985 results for Semantic features matrix
Abstract:
Erosive demineralisation causes characteristic histological features. In enamel, mineral is dissolved from the surface, resulting in a roughened structure similar to an etching pattern. If the acid impact continues, the initial surface mineral loss turns into bulk tissue loss, and with time a visible defect can develop. The microhardness of the remaining surface is reduced, increasing the susceptibility to physical wear. The histology of eroded dentine is much more complex, because the mineral component of the tissue is dissolved by acids whereas the organic part remains. At least in experimental erosion, a distinct zone of demineralised organic material develops, the thickness of which depends on the acid impact. This structure is important for many aspects, e.g. the progression rate or the interaction with active agents and physical impacts, and needs to be considered when quantifying mineral loss. The histology of experimental erosion is increasingly well understood, but there is a lack of knowledge about the histology of in vivo lesions. For enamel erosion, it is reasonable to assume that the principal features are similar, but the fate of the demineralised dentine matrix in the oral cavity is unclear. As dentine lesions normally appear hard clinically, it can be assumed that the matrix is degraded by the variety of enzymes present in the oral cavity. Erosive tooth wear may also lead to the formation of reactionary or reparative dentine.
Abstract:
The aim of this study was to investigate whether there is a correlation between the expression of four matrix metalloproteinases (MMPs): MMP-2, MMP-7, MMP-9 and MMP-13, and the TNM (tumour-node-metastasis) stage of oral squamous cell carcinoma (OSCC), and to explore the implication of these MMPs in OSCC dissemination. Samples from 61 patients diagnosed with oropharyngeal tumours were studied by immunohistochemistry against MMP-2, MMP-7, MMP-9 and MMP-13; the assessment of immunoreactivity was semi-quantitative. The results showed that MMP-2 and MMP-9 had similar expression patterns in the tumour cells, with no changes in immunoreactivity during tumour progression. MMP-9 always had the highest expression, whereas that of MMP-2 was moderate. MMP-7 showed a significant decrease in expression levels during tumour evolution. MMP-13 had constant expression levels within stages T2 and T3, but showed a remarkable decline in immunoreactivity in stage T4. No significant differences in MMP immunoreactivity between tumour cells and stroma were observed. Although strong evidence for the application of MMPs as reliable predictive markers for node metastasis was not acquired, the application of MMPs as prognostic indicators for the malignancy potential of OSCC might be considered in every case of tumour examination, and we believe that combining patients' MMP expression intensity with clinical features may improve diagnosis and prognosis.
Abstract:
Seizure freedom in patients suffering from pharmacoresistant epilepsies is still not achieved in 20–30% of all cases. Hence, current therapies need to be improved, based on a more complete understanding of ictogenesis. In this respect, the analysis of functional networks derived from intracranial electroencephalographic (iEEG) data has recently become a standard tool. Functional networks, however, are purely descriptive models and are thus conceptually unable to predict fundamental features of iEEG time series, e.g., in the context of therapeutic brain stimulation. In this paper we present first steps towards overcoming the limitations of functional network analysis by showing that its results are implied by a simple predictive model of time-sliced iEEG time series. More specifically, we learn distinct graphical models (so-called Chow–Liu (CL) trees) as models for the spatial dependencies between iEEG signals. Bayesian inference is then applied to the CL trees, allowing for an analytic derivation/prediction of functional networks based on thresholding of the absolute-value Pearson correlation coefficient (CC) matrix. Using various measures, the networks thus obtained are then compared to those derived in the classical way from the empirical CC matrix. In the high-threshold limit we find (a) an excellent agreement between the two networks and (b) key features of periictal networks as previously reported in the literature. Apart from functional networks, both matrices are also compared element-wise, showing that the CL approach leads to a sparse representation by setting small correlations to values close to zero while preserving the larger ones. Overall, this paper shows the validity of CL trees as simple, spatially predictive models for periictal iEEG data. Moreover, we suggest straightforward generalizations of the CL approach for modelling the temporal features of iEEG signals as well.
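The thresholding step described in the abstract can be illustrated with a short sketch (synthetic data, illustrative only; this is not the authors' implementation). The |CC| matrix is thresholded to obtain the functional network, and a maximum spanning tree over the same weights stands in for the Chow–Liu skeleton, since for Gaussian variables mutual information grows monotonically with |r|:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for time-sliced multichannel iEEG: 6 channels, 500 samples.
signals = rng.standard_normal((6, 500))
signals[1] += 0.8 * signals[0]   # induce a strong pairwise dependency
signals[3] += 0.5 * signals[2]

# Empirical absolute-value Pearson correlation coefficient (CC) matrix.
cc = np.abs(np.corrcoef(signals))

# Classical functional network: threshold the |CC| matrix.
threshold = 0.4
adjacency = (cc >= threshold).astype(int)
np.fill_diagonal(adjacency, 0)

def chow_liu_edges(weight):
    """Maximum spanning tree over pairwise weights (Prim's algorithm).
    For Gaussian variables mutual information grows monotonically with |r|,
    so |CC| serves here as a proxy edge weight for the Chow-Liu skeleton."""
    n = weight.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = max(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: weight[e])
        edges.append(best)
        in_tree.add(best[1])
    return edges

tree = chow_liu_edges(cc)
```

In the high-threshold limit, edges surviving the |CC| threshold tend to coincide with the strongest branches of the tree, which is the agreement the paper reports.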
Abstract:
BACKGROUND The process of neurite outgrowth is the initial step in producing the neuronal processes that wire the brain. Current models of neurite outgrowth have been derived from classic two-dimensional (2D) cell culture systems, which do not recapitulate the topographical cues present in the extracellular matrix (ECM) in vivo. Here, we explore how ECM nanotopography influences neurite outgrowth. METHODOLOGY/PRINCIPAL FINDINGS We show that, when the ECM protein laminin is presented on a line pattern with nanometric size features, it leads to orientation of neurite outgrowth along the line pattern. This is coupled with a robust increase in neurite length. The sensing mechanism that allows neurite orientation operates through a highly stereotypical growth cone behavior involving two filopodia populations. Non-aligned filopodia on the distal part of the growth cone scan the pattern in a lateral back-and-forth motion and are highly unstable. Filopodia at the growth cone tip align with the line substrate, are stabilized by an F-actin-rich cytoskeleton and enable steady neurite extension. This stabilization event most likely occurs by integration of signals emanating from non-aligned and aligned filopodia, which sense different extents of adhesion surface on the line pattern. In contrast, on the 2D substrate only unstable filopodia are observed at the growth cone, leading to frequent neurite collapse events and less efficient outgrowth. CONCLUSIONS/SIGNIFICANCE We propose that a constant crosstalk between the two filopodia populations allows stochastic sensing of nanotopographical ECM cues, leading to oriented and steady neurite outgrowth. Our work provides insight into how neuronal growth cones can sense geometric ECM cues, which has not been accessible previously using routine 2D culture systems.
Abstract:
This poster raises the issue of a research work oriented to the storage, retrieval, representation and analysis of dynamic GI, taking into account the semantic, temporal and spatiotemporal components. We intend to define a set of methods, rules and restrictions for the adequate integration of these components into the primary elements of GI: theme, location and time [1]. We intend to establish and incorporate three new structures (layers) into the core of data storage by using mark-up languages: a semantic-temporal structure, a geosemantic structure, and an incremental spatiotemporal structure. The ultimate objective is the modelling and representation of the dynamic nature of geographic features, establishing mechanisms to store geometries enriched with a temporal structure (regardless of space) and a set of semantic descriptors detailing and clarifying the nature of the represented features and their temporality. Thus, data would be provided with the capability of pinpointing and expressing their own basic and temporal characteristics, enabling them to interact with each other according to their context and to the time and meaning relationships that could eventually be established.
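To make the three-layer idea concrete, here is a hypothetical mark-up sketch built with Python's standard ElementTree; every element and attribute name is invented for illustration and is not taken from the poster:

```python
import xml.etree.ElementTree as ET

# Hypothetical encoding of one geographic feature with the three proposed
# layers: semantic-temporal, geosemantic, and incremental spatiotemporal.
feature = ET.Element("feature", id="road-42", theme="transport")

sem_temp = ET.SubElement(feature, "semanticTemporalLayer")
state = ET.SubElement(sem_temp, "state",
                      valid_from="2001-01-01", valid_to="2005-06-30")
state.text = "under construction"

geo_sem = ET.SubElement(feature, "geosemanticLayer")
ET.SubElement(geo_sem, "descriptor").text = "two-lane rural road"

spatio_temp = ET.SubElement(feature, "incrementalSpatiotemporalLayer")
delta = ET.SubElement(spatio_temp, "geometryDelta", at="2005-07-01")
delta.text = "extended northern segment"

xml_text = ET.tostring(feature, encoding="unicode")
print(xml_text)
```

The point of the sketch is only that temporal validity and semantic descriptors travel with the feature itself, so later queries can relate features by time and meaning without consulting the geometry.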
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its best-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
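As an illustration of what such a POS-tagging module does, here is a deliberately minimal lookup-based tagger in Python; the tagset and lexicon are invented for this sketch, and real modules use trained statistical or neural taggers:

```python
# Toy lexicon mapping word forms to invented POS tags (illustrative only).
LEXICON = {
    "the": "DET", "cat": "NOUN", "sat": "VERB", "on": "ADP", "mat": "NOUN",
}

def pos_tag(tokens):
    # Unknown words fall back to NOUN, a common default heuristic;
    # real taggers instead use context and trained models.
    return [(tok, LEXICON.get(tok.lower(), "NOUN")) for tok in tokens]

print(pos_tag("The cat sat on the mat".split()))
```

Even this toy version shows the interface most applications rely on: tokens in, (token, category) pairs out, which downstream modules such as parsers or sense taggers then consume.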
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
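One simple way to combine same-level annotations from several tools, as suggested above, is a token-wise majority vote. The following sketch assumes the tag sequences are already aligned token by token; it is one possible combination scheme, not the model proposed in this work:

```python
from collections import Counter

def combine_annotations(tag_sequences):
    """Token-wise majority vote over aligned tag sequences, one per tool.
    Ties are broken by first occurrence in Counter.most_common order."""
    return [Counter(tags).most_common(1)[0][0]
            for tags in zip(*tag_sequences)]

tool_a = ["DET", "NOUN", "VERB"]
tool_b = ["DET", "VERB", "VERB"]   # one disagreement on the second token
tool_c = ["DET", "NOUN", "VERB"]
print(combine_annotations([tool_a, tool_b, tool_c]))
```

This only works if the tools share an annotation schema, which is precisely the limitation stated in (3): with ad hoc schemas, the votes cannot even be compared.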
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Semantic Sensor Web infrastructures use ontology-based models to represent the data that they manage; however, up to now, these ontological models do not allow representing all the characteristics of distributed, heterogeneous, and web-accessible sensor data. This paper describes a core ontological model for Semantic Sensor Web infrastructures that covers these characteristics and that has been built with a focus on reusability. This ontological model is composed of different modules that deal, on the one hand, with infrastructure data and, on the other hand, with data from a specific domain, that is, the coastal flood emergency planning domain. The paper also presents a set of guidelines, followed during the ontological model development, to satisfy a common set of requirements related to modelling domain-specific features of interest and properties. In addition, the paper includes the results obtained after an exhaustive evaluation of the developed ontologies along different aspects (i.e., vocabulary, syntax, structure, semantics, representation, and context).
Abstract:
Among the main features expected of the Smart City, one should be an improved energy management system, in order to benefit from a healthier relation with the environment, minimize energy expenses, and offer dynamic market opportunities. A Smart Grid seems like a very suitable infrastructure for this objective, as it guarantees a two-way information flow that will provide the means for energy management enhancement. However, to obtain all the required information, another entity must manage all the devices required to gather the data. What is more, this entity must consider the lifespan of the devices within the Smart Grid (when they are turned on and off, or when new appliances are added), along with the services that the devices are able to provide. This paper puts forward SMArc, an acronym for semantic middleware architecture, as a middleware proposal for the Smart Grid, so as to process the collected data and use it to insulate applications from the complexity of the metering facilities, guaranteeing that any change happening at these lower levels will be reflected in future actions in the system.