984 results for Computational-linguistic domain
Abstract:
A minimalist representation of protein structures using a Go-like potential for interactions is implemented to investigate the mechanisms of domain swapping in p13suc1, a protein that exists in two native conformations: a monomer and a domain-swapped dimer formed by the exchange of a beta-strand. Inspired by experimental studies showing a similarity between the transition states for folding of the monomer and the dimer, in this study we account for this similarity in molecular terms. When intermediates are populated in the simulations, formation of a domain-swapped dimer initiates from the ensemble of unfolded monomers, consistent with the fact that dimer formation occurs at the folding/unfolding temperature of the monomer (T_f). It is also shown that transitions leading to a dimer involve two intermediates, one dimeric and the other monomeric; the latter is much more populated than the former. However, at temperatures lower than T_f, the population of intermediates decreases. It is argued that the two folded forms may coexist in the absence of intermediates at a temperature much lower than T_f. The computational simulations reveal a "lock-and-dock" mechanism for domain swapping of p13suc1. To proceed along the route toward dimer formation, the folding of unstructured monomers must be retarded by first locking one of the free ends of each chain. The other free termini can then follow and dock at particular regions where most intrachain contacts are formed, thus defining the transition states of the dimer. The simulations also showed that decreasing the maximum distance between monomers increased their stability, which is explained by confinement arguments. Although the simulations are based on models extracted from the native structures of the monomer and the dimer of p13suc1, the mechanism of the domain-swapping process may be general rather than specific to p13suc1.
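As an illustration of the kind of energy function such a minimalist model uses (the 12-10 contact form, the energy scale eps and the repulsive radius sigma below are assumptions for illustration, not the authors' parameterisation), a Go-like pairwise potential can be sketched as:

```python
import numpy as np

def go_pair_energy(r, r_native, eps=1.0, is_native=True):
    """Minimal sketch of a Go-like pair potential (hypothetical parameters).

    Native contacts feel a 12-10 well with its minimum at the native
    separation r_native; non-native pairs feel only short-range repulsion.
    """
    if is_native:
        s = r_native / r
        return eps * (5.0 * s**12 - 6.0 * s**10)  # minimum value -eps at r = r_native
    sigma = 4.0  # assumed hard-core radius (arbitrary units)
    return eps * (sigma / r)**12  # purely repulsive for non-native pairs

# The total chain energy would sum this over all residue pairs, with is_native
# determined by the contact map of the monomer or of the swapped dimer.
```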
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and integration of several linguistic tools into an appropriate software architecture could most likely solve the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
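As a hypothetical illustration of how annotations produced for a common level could be combined to correct individual errors, the sketch below majority-votes the per-token outputs of several POS taggers (the tagger outputs and tagset are invented):

```python
from collections import Counter

def combine_annotations(*tag_sequences):
    """Majority vote over per-token tags proposed by several taggers."""
    combined = []
    for token_tags in zip(*tag_sequences):
        tag, _count = Counter(token_tags).most_common(1)[0]
        combined.append(tag)
    return combined

tagger_a = ["DET", "NOUN", "VERB"]
tagger_b = ["DET", "VERB", "VERB"]  # one erroneous tag
tagger_c = ["DET", "NOUN", "VERB"]
print(combine_annotations(tagger_a, tagger_b, tagger_c))  # ['DET', 'NOUN', 'VERB']
```

In practice, such a combination would also need confidence weighting and a mapping between tagsets, which is precisely where the ad hoc annotation schemas of limitation (3) get in the way.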
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower, i.e. morphosyntactic, level) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Abstract not available
Abstract:
This research work analyses techniques for implementing a cell-centred finite-volume time-domain (ccFV-TD) computational methodology for the purpose of studying microwave heating. Various state-of-the-art spatial and temporal discretisation methods employed to solve Maxwell's equations on multidimensional structured grid networks are investigated, and the dispersive and dissipative errors inherent in those techniques examined. Both staggered and unstaggered grid approaches are considered. Upwind schemes using a Riemann solver and intensity vector splitting are studied and evaluated. Staggered and unstaggered Leapfrog and Runge-Kutta time integration methods are analysed in terms of phase and amplitude error to identify which method is the most accurate and efficient for simulating microwave heating processes. The implementation and migration of typical electromagnetic boundary conditions from staggered-in-space to cell-centred approaches is also considered. In particular, an existing perfectly matched layer absorbing boundary methodology is adapted to formulate a new cell-centred boundary implementation for the ccFV-TD solvers. Finally, for microwave heating purposes, a comparison of analytical and numerical results for standard case studies in rectangular waveguides allows the accuracy of the developed methods to be assessed.
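As a minimal sketch of the staggered leapfrog time integration analysed here (shown for the 1D Maxwell curl equations in normalized units on a staggered grid, not the cell-centred FV-TD discretisation itself; all sizes and constants are illustrative):

```python
import numpy as np

# 1D vacuum Maxwell solver: E and H staggered in space and time (leapfrog).
nx, nsteps = 200, 500
c, dx = 1.0, 1.0
dt = 0.5 * dx / c  # Courant number 0.5 keeps the explicit scheme stable

Ey = np.zeros(nx)       # electric field at integer grid points
Hz = np.zeros(nx - 1)   # magnetic field at half-integer points

for n in range(nsteps):
    Hz += (dt / dx) * (Ey[1:] - Ey[:-1])            # advance H by a half step
    Ey[1:-1] += (dt / dx) * (Hz[1:] - Hz[:-1])      # then E, completing the leapfrog
    Ey[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
```

The phase and amplitude errors of schemes of this kind are exactly what the dispersion and dissipation analysis in the study quantifies.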
Abstract:
Since manually constructing domain-specific sentiment lexicons is extremely time-consuming, and may not even be feasible for domains where linguistic expertise is not available, research on the automatic construction of domain-specific sentiment lexicons has become a hot topic in recent years. The main contribution of this paper is the illustration of a novel semi-supervised learning method which exploits both term-to-term and document-to-term relations hidden in a corpus for the construction of domain-specific sentiment lexicons. More specifically, the proposed two-pass pseudo-labeling method combines shallow linguistic parsing and corpus-based statistical learning to make domain-specific sentiment extraction scalable with respect to the sheer volume of opinionated documents archived on the Internet these days. Another novelty of the proposed method is that it can utilize the readily available user-contributed labels of opinionated documents (e.g., the user ratings of product reviews) to bootstrap the performance of sentiment lexicon construction. Our experiments show that the proposed method can generate high-quality domain-specific sentiment lexicons, as directly assessed by human experts. Moreover, the system-generated domain-specific sentiment lexicons improve polarity prediction at the document level by 2.18% when compared to other well-known baseline methods. Our research opens the door to the development of practical and scalable methods for domain-specific sentiment analysis.
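The sketch below is not the paper's two-pass method, but illustrates the document-to-term relation it exploits: candidate terms inherit polarity from the user-contributed ratings of the documents in which they occur (function names and data are hypothetical):

```python
from collections import defaultdict

def score_terms(docs):
    """docs: list of (tokens, rating) pairs, rating scaled to [-1, +1].

    Returns a term -> polarity map: the average rating of the
    documents each term appears in.
    """
    total = defaultdict(float)
    count = defaultdict(int)
    for tokens, rating in docs:
        for term in set(tokens):  # count each term once per document
            total[term] += rating
            count[term] += 1
    return {t: total[t] / count[t] for t in total}

docs = [(["battery", "lasts", "great"], +1.0),
        (["battery", "died", "awful"], -1.0),
        (["screen", "great"], +1.0)]
lexicon = score_terms(docs)  # e.g. great -> +1.0, awful -> -1.0, battery -> 0.0
```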
Abstract:
Generic sentiment lexicons have been widely used for sentiment analysis these days. However, manually constructing sentiment lexicons is very time-consuming and may not be feasible for certain application domains where annotation expertise is not available. One contribution of this paper is the development of a statistical-learning-based computational method for the automatic construction of domain-specific sentiment lexicons to enhance cross-domain sentiment analysis. Our initial experiments show that the proposed methodology can automatically generate domain-specific sentiment lexicons which contribute to improving the effectiveness of opinion retrieval at the document level. Another contribution of our work is that we show the feasibility of applying a sentiment metric, derived from the automatically constructed sentiment lexicons, to predict product sales of certain product categories. Our research contributes to the development of more effective sentiment analysis systems that extract business intelligence from the numerous opinionated expressions posted to the Web.
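As a hypothetical sketch of such a lexicon-derived sentiment metric, a document can be scored by the normalised difference between its positive and negative lexicon hits:

```python
def sentiment_metric(tokens, lexicon, threshold=0.3):
    """Score a document in [-1, +1] from a term -> polarity lexicon.

    The threshold (an assumption) filters out weakly polarised terms.
    """
    pos = sum(1 for t in tokens if lexicon.get(t, 0.0) > threshold)
    neg = sum(1 for t in tokens if lexicon.get(t, 0.0) < -threshold)
    return 0.0 if pos + neg == 0 else (pos - neg) / (pos + neg)
```

Aggregated over the reviews of a product, a metric of this kind is the sort of signal that could be correlated with sales, as investigated here.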
Abstract:
Flow-induced shear stress plays an important role in regulating cell growth and distribution in scaffolds. This study sought to correlate wall shear stress and chondrocyte activity for the engineering design of micro-porous osteochondral grafts, based on the hypothesis that it is possible to capture and discriminate between the transmitted force and the cell response at the inner irregularities. Unlike common tissue engineering therapies with perfusion bioreactors, in which flow-mediated stress is the controlling parameter, this work assigned the associated stress as a function of porosity to influence in vitro proliferation of chondrocytes. The D-optimality criterion was used to accommodate three pore characteristics for appraisal in a mixed-level fractional design of experiment (DOE): pore size (4 levels), distribution pattern (2 levels) and density (3 levels). Micro-porous scaffolds (n=12) were fabricated according to the DOE using rapid prototyping of an acrylic-based bio-photopolymer. Computational fluid dynamics (CFD) models were created correspondingly and used with an idealised boundary condition and a Newtonian fluid domain to simulate the dynamic microenvironment inside the pores. In vitro conditions were reproduced for the 3D-printed constructs, seeded with high pellet densities of human chondrocytes and cultured for 72 hours. The results showed that cell proliferation was significantly different between the constructs (p<0.05). An inlet fluid velocity of 3×10⁻² mm s⁻¹ and an average shear stress of 5.65×10⁻² Pa corresponded with increased cell proliferation for scaffolds with smaller pores in a hexagonal pattern and lower densities. Although the analytical solution of a Poiseuille flow inside the pores was found insufficient to describe the flow profile, probably due to turbulence induced by the outside flow, it showed that the shear stress would increase with cell growth and decrease with pore size. This correlation provides a basis for determining the relation between the induced stress and chondrocyte activity so as to optimise the microfabrication of engineered cartilaginous constructs.
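For reference, the analytical Poiseuille solution mentioned above gives, for a cylindrical pore of radius $R$ carrying Newtonian fluid of viscosity $\mu$ at mean velocity $\bar{v}$:

$$u(r) = 2\bar{v}\left(1 - \frac{r^{2}}{R^{2}}\right), \qquad \tau_w = \mu\left|\frac{du}{dr}\right|_{r=R} = \frac{4\mu\bar{v}}{R},$$

so the wall shear stress scales inversely with pore radius, consistent with the reported trend of shear stress decreasing with pore size and increasing as cell growth narrows the effective lumen.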
Abstract:
A sub-domain smoothed Galerkin method is proposed to integrate the advantages of the mesh-free Galerkin method and the FEM. Arbitrarily shaped sub-domains are predefined in the problem domain with mesh-free nodes. In each sub-domain, based on the mesh-free Galerkin weak formulation, the local discrete equation can be obtained by using moving Kriging interpolation, in a manner similar to the discretisation of high-order finite elements. The strain smoothing technique is subsequently applied to the nodal integration of each sub-domain by dividing the sub-domain into several smoothing cells. Moreover, condensation of DOFs can be introduced into the local discrete equations to improve computational efficiency. The global governing equations of the present method are obtained, following the scheme of the FEM, by assembling all local discrete equations of the sub-domains. The mesh-free properties of the Galerkin method are retained in each sub-domain. Several 2D elastic problems have been solved with this newly proposed method to validate its computational performance. These numerical examples show that the sub-domain smoothed Galerkin method is a robust technique for solving solid mechanics problems, characterised by high computational efficiency, good accuracy and good convergence.
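The strain smoothing step can be written compactly: over a smoothing cell $\Omega_c$ of area $A_c$ with boundary $\Gamma_c$ and outward normal $\mathbf{n}$, the smoothed strain is the cell average of the strain, which the divergence theorem turns into a boundary integral of the displacement field $\mathbf{u}$:

$$\tilde{\varepsilon}_{ij} = \frac{1}{A_c}\int_{\Omega_c} \varepsilon_{ij}\, d\Omega = \frac{1}{2A_c}\oint_{\Gamma_c} \left(u_i n_j + u_j n_i\right) d\Gamma,$$

so no derivatives of the (moving Kriging) shape functions need to be evaluated inside the cell.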
Abstract:
Unsaturated water flow in soil is commonly modelled using Richards' equation, which requires the hydraulic properties of the soil (e.g., porosity, hydraulic conductivity, etc.) to be characterised. Naturally occurring soils, however, are heterogeneous in nature; that is, they are composed of a number of interwoven homogeneous soils, each with its own set of hydraulic properties. When the length scale of these soil heterogeneities is small, numerical solution of Richards' equation is computationally impractical due to the immense effort and refinement required to mesh the actual heterogeneous geometry. A classic way forward is to use a macroscopic model, where the heterogeneous medium is replaced with a fictitious homogeneous medium that attempts to give the average flow behaviour at the macroscopic scale (i.e., at a scale much larger than that of the heterogeneities). Using homogenisation theory, a macroscopic equation can be derived that takes the form of Richards' equation with effective parameters. A disadvantage of the macroscopic approach, however, is that it fails when the assumption of local equilibrium does not hold. This limitation has seen the introduction of two-scale models that include, at each point in the macroscopic domain, an additional flow equation at the scale of the heterogeneities (the microscopic scale). This report outlines a well-known two-scale model and contributes to the literature a number of important advances in its numerical implementation. These include the use of an unstructured control volume finite element method and image-based meshing techniques, which allow irregular micro-scale geometries to be treated, and the use of an exponential time integration scheme that permits both scales to be resolved simultaneously in a completely coupled manner. Numerical comparisons against a classical macroscopic model confirm that only the two-scale model correctly captures the important features of the flow for a range of parameter values.
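For reference, Richards' equation in its mixed form reads

$$\frac{\partial \theta(h)}{\partial t} = \nabla\cdot\left(K(h)\,\nabla(h + z)\right),$$

where $\theta$ is the volumetric water content, $h$ the pressure head, $K(h)$ the hydraulic conductivity and $z$ the vertical coordinate; the heterogeneity enters through the spatial variation of the constitutive relations $\theta(h)$ and $K(h)$.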
Abstract:
Molecular biology is a scientific discipline whose character has changed fundamentally over the past decade: it now relies on large-scale datasets, both public and locally generated, and on their computational analysis and annotation. Undergraduate education of biologists must increasingly couple this domain context with a data-driven computational scientific method. Yet modern programming and scripting languages and rich computational environments such as R and MATLAB present significant barriers to those with limited exposure to computer science, and may require substantial tutorial assistance over an extended period if progress is to be made. In this paper we report our experience of undergraduate bioinformatics education using the familiar, ubiquitous spreadsheet environment of Microsoft Excel. We describe a configurable extension called QUT.Bio.Excel, a custom ribbon supporting a rich set of data sources, external tools and interactive processing within the spreadsheet, and a range of problems that demonstrate its utility and its success in addressing the needs of students over their studies.
Abstract:
Iterative computational models have been used to investigate the regulation of bone fracture healing by local mechanical conditions. Although their predictions replicate some mechanical responses and histological features, they do not typically reproduce the predominantly radial hard callus growth pattern observed in larger mammals. We hypothesised that this discrepancy results from an artefact of the models’ initial geometry. Using axisymmetric finite element models, we demonstrated that pre-defining a field of soft tissue in which callus may develop introduces high deviatoric strains in the periosteal region adjacent to the fracture. These bone-inhibiting strains are not present when the initial soft tissue is confined to a thin periosteal layer. As observed in previous healing models, tissue differentiation algorithms regulated by deviatoric strain predicted hard callus forming remotely and growing towards the fracture. While dilatational strain regulation allowed early bone formation closer to the fracture, hard callus still formed initially over a broad area, rather than expanding over time. Modelling callus growth from a thin periosteal layer successfully predicted the initiation of hard callus growth close to the fracture site. However, these models were still susceptible to elevated deviatoric strains in the soft tissues at the edge of the hard callus. Our study highlights the importance of the initial soft tissue geometry used for finite element models of fracture healing. If this cannot be defined accurately, alternative mechanisms for the prediction of early callus development should be investigated.
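As a hypothetical sketch of the two regulating stimuli discussed above, the dilatational (volumetric) and deviatoric (octahedral shear) strains can be computed from the principal values of the small-strain tensor:

```python
import numpy as np

def strain_stimuli(strain):
    """strain: symmetric 3x3 small-strain tensor at an integration point.

    Returns (dilatational, octahedral_shear): the volumetric strain and the
    octahedral shear strain commonly used in mechanoregulation algorithms.
    """
    e1, e2, e3 = np.linalg.eigvalsh(strain)  # principal strains
    dilatational = e1 + e2 + e3              # trace of the tensor
    octahedral_shear = (2.0 / 3.0) * np.sqrt(
        (e1 - e2) ** 2 + (e2 - e3) ** 2 + (e3 - e1) ** 2)
    return dilatational, octahedral_shear
```

In the models described here, elevated deviatoric strain inhibits bone formation, which is why the artificially strained periosteal region adjacent to the fracture suppressed the expected radial hard callus growth.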