973 results for "Semantic domain"
Abstract:
A DS-CDMA (Direct-Sequence Code-Division Multiple Access) system achieves maximum spectral efficiency when it is fully loaded (i.e., the number of users equals the spreading factor) and the transmitted signals have bandwidth equal to the chip rate. However, implementation constraints force us to employ signals with higher bandwidth, decreasing the system's spectral efficiency. In this paper we consider prefix-assisted DS-CDMA systems whose bandwidth can be significantly above the chip rate. To allow high spectral efficiency we consider highly overloaded systems where the number of users can be twice the spreading factor or even more. To cope with the resulting strong interference levels we present an iterative frequency-domain receiver that takes full advantage of the total bandwidth of the transmitted signals. Our performance results show that the proposed receiver can have excellent performance, even for highly overloaded systems. Moreover, the overall system performance can be close to the maximum theoretical spectral efficiency, even with transmitted signals whose bandwidth is significantly above the chip rate.
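As a minimal sketch of the direct-sequence spreading that underlies such a system (a generic fully loaded toy example, not the paper's prefix-assisted receiver), orthogonal chip sequences let a simple correlator separate all users; overloading adds users beyond the spreading factor and breaks this orthogonality, which is why stronger interference cancellation is then needed:

```python
import numpy as np

# Toy fully loaded DS-CDMA link: each user's BPSK symbol is multiplied by a
# length-N chip sequence; the receiver correlates the superimposed chip
# stream with each user's sequence. Values below are invented for
# illustration only.
N = 4  # spreading factor
codes = np.array([[1,  1,  1,  1],
                  [1, -1,  1, -1],
                  [1,  1, -1, -1],
                  [1, -1, -1,  1]])   # orthogonal Walsh-Hadamard codes

symbols = np.array([1, -1, 1, -1])   # one BPSK symbol per user (4 users = N)
tx = codes.T @ symbols               # chip-rate sum of all users' signals
rx = codes @ tx / N                  # correlate: despread symbols match the sent ones
print(rx)
```

With the number of users equal to the spreading factor the correlator recovers every symbol exactly; with more than N users no such orthogonal code set exists, motivating the iterative interference-cancelling receivers discussed in the abstract.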
Abstract:
The goal of this work is to develop a concept for representing the Personennamendatei (PND, the German name authority file) in the languages Resource Description Framework (RDF), Resource Description Framework Schema Language (RDFS) and Web Ontology Language (OWL). Following the premise of the Semantic Web that data should be represented and stored in a form that is both human-readable and machine-processable, a structure for personal data is created. The starting point is the existing data and structure situation in the Pica format. Beyond that, the extensibility and adaptability of the model with regard to future applications and structural changes, possibly not yet foreseeable at present, must be guaranteed. The modelling is oriented towards existing standards such as Dublin Core, Friend Of A Friend (FOAF), Functional Requirements for Bibliographic Records (FRBR), Functional Requirements for Authority Data (FRAD) and Resource Description and Access (RDA).
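A minimal sketch of the kind of RDF/FOAF description such a model produces for one person record. The record URI and serialisation helper are invented for illustration and are not the actual PND vocabulary:

```python
# Hypothetical RDF description of a person authority record, FOAF-style.
# The example.org URI is invented; only the FOAF and RDF namespaces are real.
FOAF = "http://xmlns.com/foaf/0.1/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"

triples = [
    ("http://example.org/pnd/118540238", RDF_TYPE, FOAF + "Person"),
    ("http://example.org/pnd/118540238", FOAF + "name", "Johann Wolfgang von Goethe"),
]

def to_ntriples(triples):
    """Serialise (subject, predicate, object) tuples as N-Triples lines."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

Such a triple structure is what makes the authority data both human-readable and machine-processable, the premise stated in the abstract.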
Abstract:
Doctoral thesis, Informatics (Bioinformatics), Universidade de Lisboa, Faculdade de Ciências, 2014
Abstract:
Doctoral thesis, Informatics (Computer Science), Universidade de Lisboa, Faculdade de Ciências, 2015
Abstract:
Cost-effective semantic description and annotation of shared knowledge resources has always been of great importance for digital libraries and for large-scale information systems in general. With the emergence of the Social Web and Web 2.0 technologies, more effective semantic description and annotation of digital library contents, e.g., through folksonomies, is envisioned to take place in collaborative and personalised environments. However, there is a lack of foundational, mathematically rigorous approaches to the contextualised management and retrieval of semantic annotations as they evolve and as users and user communities diversify. In this paper, we propose an ontological foundation for semantic annotations of digital libraries in terms of flexonomies. The proposed theoretical model relies on a high-dimensional space with algebraic operators for contextualised access to semantic tags and annotations. The proposed algebraic operators are adaptations of the set-theoretic operators of database theory: selection, projection, difference, intersection and union. To this extent, the proposed model is meant to lay the ontological foundation for a Digital Library 2.0 project in terms of geometric spaces rather than (description) logic based formalisms, as a more efficient and scalable solution to the semantic annotation problem at large scale.
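A minimal sketch of what database-style operators over annotations can look like. The representation (annotations as (resource, tag, user) triples) and the operator names are invented to illustrate the idea, not the paper's actual algebra:

```python
# Each semantic annotation is modelled as a (resource, tag, user) triple;
# selection and projection then mirror their database-theory counterparts,
# while Python's set type supplies union, intersection and difference.
annotations = {
    ("doc1", "ontology", "alice"),
    ("doc1", "semantics", "bob"),
    ("doc2", "ontology", "alice"),
}

def select(anns, pred):
    """Selection: keep only annotations satisfying a predicate."""
    return {a for a in anns if pred(a)}

def project(anns, idx):
    """Projection: keep one component (0=resource, 1=tag, 2=user)."""
    return {a[idx] for a in anns}

by_alice = select(annotations, lambda a: a[2] == "alice")
tags = project(by_alice, 1)   # the tags one user contributed
print(tags)
```

Because the operators compose, contextualised views (one user, one community, one time slice) fall out of the same small algebra, which is the scalability argument the abstract makes against heavier logic-based formalisms.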
Abstract:
In the context of monolingual and bilingual retrieval, Simple Knowledge Organisation System (SKOS) datasets can play a dual role: as knowledge bases for semantic annotations and as language-independent resources for translation. Since no formal evaluations of these aspects exist for datasets in SKOS format, we describe a case study on the use of the Thesaurus for the Social Sciences in SKOS format in a retrieval setup based on the CLEF 2004-2006 Domain-Specific Track topics, documents and relevance assessments. The results show a mixed picture, with significant system-level improvements in mean average precision in the bilingual runs. Our experiments set a new and improved baseline for using SKOS-based datasets with the GIRT collection and are an example of component-based evaluation.
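The dual role described above can be sketched as follows: a SKOS concept carries preferred labels in several languages, so the same dataset that supplies annotation vocabulary doubles as a translation table. The concepts and labels below are invented, not taken from the Thesaurus for the Social Sciences:

```python
# Toy SKOS fragment: concept URI -> language-tagged prefLabels.
skos_labels = {
    "concept/123": {"en": "unemployment", "de": "Arbeitslosigkeit"},
    "concept/456": {"en": "migration", "de": "Migration"},
}

def translate_query(term, src, dst):
    """Bilingual retrieval step: map a query term to its target-language
    prefLabel via the language-independent concept it denotes."""
    for labels in skos_labels.values():
        if labels.get(src) == term:
            return labels.get(dst)
    return None  # no matching concept: caller falls back to the raw term

print(translate_query("unemployment", "en", "de"))
```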
Abstract:
A retrieval model describes the transformation of a query into a set of documents. The question is: what drives this transformation? For semantic information retrieval models, this transformation is driven by the content and structure of the semantic models. In this case, Knowledge Organization Systems (KOSs) are the semantic models that encode the meaning employed for monolingual and cross-language retrieval. The focus of this research is the relationship between these meaning representations and their role and potential in augmenting the effectiveness of existing retrieval models. The proposed approach is unique in explicitly interpreting a semantic reference as a pointer to a concept in the semantic model that activates all its linked neighboring concepts. It is the formalization of the information retrieval model and the integration of knowledge resources from the Linguistic Linked Open Data cloud that distinguish it from other approaches. Preprocessing the semantic model using Formal Concept Analysis enables the extraction of conceptual spaces (formal contexts) that are based on sub-graphs of the original structure of the semantic model. The types of conceptual spaces built in this case are limited to the KOS structural relations relevant to retrieval: exact match, broader, narrower, and related. They capture the definitional and relational aspects of the concepts in the semantic model. Each formal context is also assigned an operational role in the flow of processes of the retrieval system, enabling a clear path towards implementations of monolingual and cross-lingual systems. By following this model's theoretical description in constructing a retrieval system, evaluation has shown statistically significant improvements in both monolingual and bilingual settings when no methods for query expansion were used. The test suite was run on the Cross-Language Evaluation Forum Domain-Specific 2004-2006 collection with additional extensions to match the specifics of this model.
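The activation idea can be sketched directly: a semantic reference points to a concept, and the concept switches on its neighbors along the retrieval-relevant KOS relations. The tiny graph below is invented for illustration, not drawn from the actual KOS used in the thesis:

```python
# Toy KOS fragment: concept -> neighbours grouped by structural relation.
kos = {
    "textiles": {"broader": ["materials"],
                 "narrower": ["wool", "silk"],
                 "related": ["dress"]},
    "wool": {"broader": ["textiles"], "narrower": [], "related": []},
}

def activate(concept, relations=("broader", "narrower", "related")):
    """Return the referenced concept plus every neighbour linked by the
    given relations -- the one-step activation described in the abstract."""
    activated = {concept}
    for rel in relations:
        activated.update(kos.get(concept, {}).get(rel, []))
    return activated

print(activate("textiles"))
```

Restricting `relations` to a subset corresponds to building one of the formal contexts (e.g., only `broader`) and assigning it a distinct operational role in the retrieval flow.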
Abstract:
AMPA receptors are tetrameric glutamate-gated ion channels that mediate fast synaptic neurotransmission in the mammalian brain. Their subunits contain a two-lobed N-terminal domain (NTD) that comprises over 40% of the mature polypeptide. The NTD is not obligatory for the assembly of tetrameric receptors, and its functional role is still unclear. By analyzing full-length and NTD-deleted GluA1-4 AMPA receptors expressed in HEK 293 cells, we found that the removal of the NTD leads to a significant reduction in receptor transport to the plasma membrane, a higher steady-state-to-peak current ratio of glutamate responses, and strongly increased sensitivity to glutamate toxicity in cell culture. Further analyses showed that NTD-deleted receptors display both a slower onset of desensitization and a faster recovery from desensitization of agonist responses. Our results indicate that the NTD promotes the biosynthetic maturation of AMPA receptors and, for membrane-expressed channels, enhances the stability of the desensitized state. Moreover, these findings suggest that interactions of the NTD with extracellular/synaptic ligands may be able to fine-tune AMPA receptor-mediated responses, in analogy with the allosteric regulatory role demonstrated for the NTD of NMDA receptors.
Abstract:
This paper seeks to discover in what sense we can classify vocabulary items as technical terms in the later medieval period. In order to arrive at a principled categorization of technicality, distribution is taken as a diagnostic factor: vocabulary shared across the widest range of text types may be assumed to be prototypical for the semantic field but also the most general, and therefore the least technical, since lexical items derive at least part of their meaning from context, a wider range of contexts implying a wider range of senses. A further way of addressing the question of technicality is tested through the classification of the lexis into semantic hierarchies: in the terms of componential analysis, having more components of meaning places a term lower in the semantic hierarchy and flags it as having greater specificity of sense, and thus as more technical. The various text types are interrogated through comparison of the number of levels in their hierarchies and the number of lexical items at each level within the hierarchies. Focusing on the vocabulary of a single semantic field, DRESS AND TEXTILES, this paper investigates how four medieval text types (wills, sumptuary laws, petitions, and romances) employ technical terminology in establishing the conventions of their genres.
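The hierarchy-based measure can be made concrete in a few lines: depth below the top of a semantic hierarchy stands in for the number of meaning components, and hence for technicality. The DRESS AND TEXTILES fragment below is invented for illustration, not the paper's data:

```python
# Toy semantic hierarchy: term -> its immediate hypernym (None = top level).
hierarchy = {
    "garment": None,        # most general, fewest components of meaning
    "headgear": "garment",
    "hood": "headgear",
    "liripipe": "hood",     # the long tail of a hood: highly specific
}

def depth(term):
    """Levels below the top of the hierarchy: a proxy for specificity of
    sense, and so for technicality in the componential-analysis view."""
    d = 0
    while hierarchy[term] is not None:
        term = hierarchy[term]
        d += 1
    return d

print(depth("garment"), depth("liripipe"))
```

Comparing text types then reduces to comparing how many hierarchy levels each uses and how many lexical items sit at each level.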
Abstract:
The emergence of new business models, namely the establishment of partnerships between organizations and the opportunity for companies to enrich their information with existing data on the web, especially the semantic web, has drawn attention to problems in databases, particularly those related to data quality. Poor data can cause the organizations holding them to lose competitiveness and may even lead to their disappearance, since many of their decision-making processes rely on these data. For this reason, data cleaning is essential. Current approaches to these problems are closely tied to database schemas and specific domains. For data cleaning to be usable across different repositories, computer systems must be able to understand the data, i.e., an associated semantics is needed. The solution presented in this paper uses ontologies: (i) to specify data cleaning operations and (ii) to resolve the semantic heterogeneity of data stored in different sources. With data cleaning operations defined at a conceptual level and mappings between domain ontologies and an ontology derived from a database, the operations may be instantiated and proposed to the expert/specialist for execution over that database, thus enabling interoperability.
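A minimal sketch of the instantiation step the abstract describes: a cleaning operation specified against a domain-ontology concept is rewritten into a source-level operation through a concept-to-column mapping. All names and mappings below are invented for illustration:

```python
# Hypothetical mapping between an ontology concept and the table/column it
# corresponds to in one particular database.
mapping = {"Person.name": ("clients", "cli_name")}

def instantiate(operation, concept):
    """Rewrite a conceptual cleaning operation into a concrete one that can
    be proposed to the expert for execution over this specific source."""
    table, column = mapping[concept]
    return {"operation": operation, "table": table, "column": column}

plan = instantiate("trim_whitespace", "Person.name")
print(plan)
```

Because only the mapping is source-specific, the same conceptual operation can be instantiated over any repository for which a mapping exists, which is the interoperability argument of the paper.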
Abstract:
The selection of resources systems plays an important role in the integration of Distributed/Agile/Virtual Enterprises (D/A/V Es). However, resources systems selection remains a difficult problem in a D/A/VE, as this paper points out. Broadly, the selection problem has been approached from different angles, giving rise to different kinds of models and algorithms to solve it. To support the development of an intelligent, flexible web prototype tool (broker tool) that integrates all the selection-model activities and tools and can adapt to each D/A/VE project or instance (the major goal of our final project), this paper presents a formulation of one kind of resources selection problem and the limitations of the algorithms proposed to solve it. We formulate a particular case of the problem as an integer program, solve it using the simplex and branch-and-bound algorithms, and identify their performance limitations (in terms of processing time) based on simulation results. These limitations depend on the number of processing tasks and on the number of pre-selected resources per processing task, and they define the domain of applicability of the algorithms for the problem studied. The limitations detected show the need for other kinds of algorithms (approximate-solution algorithms) outside the domain of applicability found for the simulated algorithms. For a broker tool, however, knowledge of the algorithms' limitations is very important, so that the most suitable algorithm guaranteeing good performance can be developed and selected based on the features of the problem.
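A toy version of the combinatorial structure behind such a formulation (costs and names invented; the paper itself uses simplex and branch-and-bound, not exhaustive search): each processing task must be assigned one of its pre-selected resources at minimum total cost, and enumerating every combination makes plain why running time grows with the number of tasks and of pre-selected resources per task.

```python
from itertools import product

# Invented instance: task -> {pre-selected resource: cost}.
costs = {
    "task1": {"resA": 3, "resB": 5},
    "task2": {"resC": 2, "resD": 4},
}

def best_assignment(costs):
    """Enumerate every task->resource combination and keep the cheapest.
    The search space has prod(len(costs[t])) points, so exact enumeration
    explodes as tasks or pre-selected resources per task grow."""
    tasks = list(costs)
    best, best_cost = None, float("inf")
    for choice in product(*(costs[t] for t in tasks)):
        c = sum(costs[t][r] for t, r in zip(tasks, choice))
        if c < best_cost:
            best, best_cost = dict(zip(tasks, choice)), c
    return best, best_cost

print(best_assignment(costs))
```

Exact methods such as branch-and-bound prune this space but ultimately face the same growth, which is what delimits their domain of applicability and motivates approximate algorithms outside it.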
Abstract:
In this paper we propose the use of least-squares based methods for obtaining digital rational approximations (IIR filters) to fractional-order integrators and differentiators of type s^α, α ∈ R. The adoption of the Padé, Prony and Shanks techniques is suggested. These techniques are usually applied in the modelling of deterministic signals. They yield suboptimal solutions to the problem that only require solving a set of linear equations. The results reveal that the least-squares approach gives similar or superior approximations in comparison with other widely used methods. Their effectiveness is illustrated, both in the time and frequency domains, as well as in the fractional differintegration of some standard time-domain functions.
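As a sketch of one of the named techniques, here is a generic Prony-style least-squares fit of a rational model B(z)/A(z) to an impulse response, implemented from the standard linear-prediction equations rather than taken from the paper. The target below is a known first-order filter, so the fit recovers its coefficients exactly; applied to the impulse response of a fractional differintegrator s^α, the same procedure yields the suboptimal least-squares approximation the abstract describes:

```python
import numpy as np

def impulse_response(b, a, N):
    """First N samples of the impulse response of B(z)/A(z), with a[0] == 1."""
    h = np.zeros(N)
    for k in range(N):
        h[k] = b[k] if k < len(b) else 0.0
        h[k] -= sum(a[i] * h[k - i] for i in range(1, min(k, len(a) - 1) + 1))
    return h

def prony(h, m, n):
    """Prony's method: the denominator comes from the linear-prediction
    equations sum_i a_i h[k-i] = 0 for k > m (solved by least squares),
    the numerator from convolving the denominator with h for k <= m."""
    rows = [[h[k - i] for i in range(1, n + 1)] for k in range(m + 1, len(h))]
    rhs = [-h[k] for k in range(m + 1, len(h))]
    a_tail, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    a = np.concatenate([[1.0], a_tail])
    b = [h[k] + sum(a[i] * h[k - i] for i in range(1, min(k, n) + 1))
         for k in range(m + 1)]
    return np.array(b), a

h = impulse_response([1.0, 0.5], [1.0, -0.9], 50)   # known filter's response
b, a = prony(h, 1, 1)
print(b, a)   # recovers the original coefficients [1.0, 0.5] and [1.0, -0.9]
```

Only a linear system is solved, which is exactly the computational advantage claimed for these methods over iterative optimal designs.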