946 results for VOCABULARY
Abstract:
In Philo's treatises, the concept politès ("citizen") plays an essential role in understanding the historical configuration and functioning of the Jewish community within Alexandrian society, because it reveals not only the social position the Jews occupied in the city but also the relationship between the Jews and the other population groups that shared the same social space. Our study therefore analyses the civic use of politès in the Life of Moses and in the historical treatises Against Flaccus and Embassy to Gaius, in connection with the vocabulary and the historical events that fall within the scope of its interpretation.
Abstract:
Leopoldo Alas, "Clarín", frequently draws on the Greco-Latin classics, often with ironic or critical intent, exposing the ignorance or misuse of the Greco-Roman tradition in his day. I examine several aspects of that tradition in La Regenta (1884-85), organizing the material into sections: observations on knowledge or ignorance of the Greek and Latin languages; allusions to authors or works of Greek literature; direct references to authors or works of Latin literature; Greek and Roman historical figures; notes on Greco-Roman culture; and remarks on vocabulary of Greek or Latin origin.
Abstract:
Its use of imagery places Petronius' Satyricon at a point of intersection (canón, cardo) between the preceding Greek and Latin literature and the later compositions that were central to the rise of modern prose fiction in the Renaissance. Visual and colour imagery in Petronius contributes markedly to that position, while a study of the corresponding vocabulary, based especially on the humanist editions of the work, can undoubtedly shed light on textual issues still unresolved in this masterpiece of Latin literature.
Abstract:
This paper reports preliminary results from a broader study of university students' performance and self-perception in reading and writing. Here we present the results for verbal comprehension, given its relationship with reading comprehension. The latter is a critical aspect, particularly in the construction of the situation model, which is achieved by integrating the information provided by the text with relevant prior knowledge. A random pilot sample of 60 third-year students of both sexes, with a mean age of 23, was selected and given the Vocabulary, Information and Analogies subtests of the WAIS III in group sessions. The results on the Verbal Comprehension Index show measures of central tendency similar to those of the standardization sample and a smaller dispersion, with some individual scores at the lower bound of the average range. Observations are made on performance in the university population and on certain patterns in the types of errors in the responses, which merit further consideration. In principle, they indicate absent or insufficient information, conceptual confusion over some relatively common terms, and difficulties in concept formation, a situation that must be addressed given its implications for successful learning from texts.
Abstract:
This paper examines the reading and writing performance of students in the final year of the former "polimodal" level at a school in the Province of Buenos Aires, and its relationship with vocabulary level and school achievement. Its ultimate purpose was to identify strengths and weaknesses in these domains in order to implement appropriate intervention strategies. Using a cross-sectional panel design, the performance of 20 students was assessed with specific instruments that characterize achievement in reading, writing and vocabulary. The overall grade average and the grades in individual subjects were taken as indicators of school achievement. The results were analysed with descriptive and inferential statistics, complemented by qualitative analysis of some of the students' work. In general, difficulties in reading and writing were observed in a significant percentage of students, in that their performance did not match the levels expected at this stage of their schooling. Considerable within-group variability also appeared. The average grade in the subject "Lengua" correlates significantly with the scores for vocabulary, narrative story writing and the overall grade average. Between the variables examined and the average grades in other subjects, only moderate correlations were obtained, a circumstance that calls for fuller elucidation. Based on these findings, intervention strategies are proposed, intended not only for the students examined but also for use at different school levels, to support teaching and learning processes.
Abstract:
In its construction, structure, vocabulary and content, the monologue that opens Act IV of La Celestina, "Agora que voy sola...", reflects important elements, devices and aspects of the work. In it we find questions fundamental to understanding the sense and meaning of many of the actions, both of the go-between and of the characters who will steer the development of the work.
Abstract:
To deliver sample estimates with the probability foundation needed to generalize from the sampled data subset to the whole target population, probability sampling strategies are required to satisfy three necessary, but not sufficient, conditions: (i) all inclusion probabilities must be greater than zero in the target population to be sampled. If some sampling units have an inclusion probability of zero, then a map accuracy assessment does not represent the entire target region depicted in the map to be assessed. (ii) The inclusion probabilities must be (a) knowable for nonsampled units and (b) known for those units selected in the sample: since the inclusion probability determines the weight attached to each sampling unit in the accuracy estimation formulas, if the inclusion probabilities are unknown, so are the estimation weights. This original work presents a novel (to the best of these authors' knowledge, the first) probability sampling protocol for quality assessment and comparison of thematic maps generated from spaceborne/airborne Very High Resolution (VHR) images, where: (I) an original Categorical Variable Pair Similarity Index (CVPSI, proposed in two different formulations) is estimated as a fuzzy degree of match between a reference and a test semantic vocabulary, which may not coincide, and (II) both symbolic pixel-based thematic quality indicators (TQIs) and sub-symbolic object-based spatial quality indicators (SQIs) are estimated with a degree of uncertainty in measurement, in compliance with the well-known Quality Assurance Framework for Earth Observation (QA4EO) guidelines. Like a decision tree, any protocol (guidelines for best practice) comprises a set of rules, equivalent to structural knowledge, and an order of presentation of the rule set, known as procedural knowledge. The combination of these two levels of knowledge makes an original protocol worth more than the sum of its parts.
The several degrees of novelty of the proposed probability sampling protocol are highlighted in this paper, at the levels of understanding of both structural and procedural knowledge, in comparison with related multi-disciplinary works selected from the existing literature. In the experimental session the proposed protocol is tested for accuracy validation of preliminary classification maps automatically generated by the Satellite Image Automatic Mapper™ (SIAM™) software product from two WorldView-2 images and one QuickBird-2 image provided by DigitalGlobe for testing purposes. In these experiments, the collected TQIs and SQIs are statistically valid, statistically significant, consistent across maps, and in agreement with theoretical expectations, visual (qualitative) evidence and quantitative quality indexes of operativeness (OQIs) claimed for SIAM™ by related papers. As a subsidiary conclusion, the statistically consistent and statistically significant accuracy validation of the SIAM™ pre-classification maps proposed in this contribution, together with the OQIs claimed for SIAM™ by related works, makes the operational (automatic, accurate, near real-time, robust, scalable) SIAM™ software product eligible for opening up new inter-disciplinary research and market opportunities in accordance with the visionary goal of the Global Earth Observation System of Systems (GEOSS) initiative and the QA4EO international guidelines.
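Condition (ii) above can be made concrete with a standard design-based estimator. The sketch below is not the protocol from this paper, just a minimal Horvitz-Thompson-style illustration of why every inclusion probability must be known and strictly positive; the sample values are invented.

```python
# Minimal illustration (not the paper's protocol): the Horvitz-Thompson
# estimator weights each sampled unit by the inverse of its inclusion
# probability, so a zero or unknown probability makes the estimate undefined.

def horvitz_thompson_total(sample):
    """Estimate a population total from (value, inclusion_probability) pairs."""
    total = 0.0
    for value, pi in sample:
        if pi <= 0.0:
            raise ValueError("every sampled unit needs inclusion probability > 0")
        total += value / pi  # estimation weight = 1 / inclusion probability
    return total

# Toy sample: three map cells (e.g. 1.0 = correctly classified, 0.0 = not),
# each with a known, design-given inclusion probability.
sample = [(1.0, 0.5), (0.0, 0.25), (1.0, 0.25)]
estimate = horvitz_thompson_total(sample)
```

Units with inclusion probability zero cannot even appear in the sum, which is exactly why, under condition (i), such units leave part of the mapped region unrepresented in the assessment.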
Abstract:
This paper describes a preprocessing module for improving the performance of a Spanish to Spanish Sign Language (Lengua de Signos Española: LSE) translation system when dealing with sparse training data. This preprocessing module replaces Spanish words with associated tags. The list of Spanish words (vocabulary) and associated tags used by this module is computed automatically by considering those signs that show the highest probability of being the translation of each Spanish word. This automatic tag extraction has been compared to a manual strategy, achieving almost the same improvement. In this analysis, several alternatives for dealing with non-relevant words have been studied. Non-relevant words are Spanish words not assigned to any sign. The preprocessing module has been incorporated into two well-known statistical translation architectures: a phrase-based system and a Statistical Finite State Transducer (SFST). This system has been developed for a specific application domain: the renewal of Identity Documents and Driver's Licenses. In order to evaluate the system, a parallel corpus made up of 4080 Spanish sentences and their LSE translations has been used. The evaluation results revealed a significant performance improvement when including this preprocessing module. In the phrase-based system, the proposed module has given rise to an increase in BLEU (Bilingual Evaluation Understudy) from 73.8% to 81.0% and an increase in the human evaluation score from 0.64 to 0.83. In the case of the SFST, BLEU increased from 70.6% to 78.4% and the human evaluation score from 0.65 to 0.82.
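The tag-replacement idea described above can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation: the word-to-sign probability table, the tag names, and the handling of non-relevant words are all invented for the example.

```python
# Hypothetical sketch of the described preprocessing step: replace each Spanish
# word with the tag of the sign it most probably translates to. The probability
# table below is invented; the real one is computed from aligned training data.

def build_tag_map(word_sign_probs):
    """For every word, keep the sign tag with the highest translation probability."""
    return {word: max(signs, key=signs.get) for word, signs in word_sign_probs.items()}

def tag_sentence(sentence, tag_map, non_relevant_tag=None):
    """Replace words by tags; non-relevant words (no sign) are dropped here."""
    out = []
    for word in sentence.split():
        tag = tag_map.get(word, non_relevant_tag)
        if tag is not None:
            out.append(tag)
    return out

# Toy probabilities (illustrative only, from the ID-renewal domain).
probs = {"renovar": {"RENEW": 0.9, "CHANGE": 0.1},
         "carnet": {"CARD": 0.8, "DOCUMENT": 0.2}}
tags = tag_sentence("quiero renovar el carnet", build_tag_map(probs))
```

Dropping non-relevant words, as done here via `non_relevant_tag=None`, is only one of the alternatives the paper says it studied; keeping them under a placeholder tag would be another.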
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly its best-known applications are the various tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower, i.e. morphosyntactic, level) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
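The compounding of errors just described can be illustrated with a back-of-the-envelope bound (an assumption-level sketch, not a result from this work): if a higher-level annotator can only be correct when the lower-level annotation it consumes is correct, the stage accuracies multiply.

```python
# Illustrative upper bound, not a measurement from this work: assuming a
# sense tagger can only be right when the POS tag it receives is right,
# the end-to-end accuracy of the pipeline is at most the product of the
# per-stage accuracies.

def pipeline_accuracy(stage_accuracies):
    """Upper bound on end-to-end accuracy of a chain of dependent annotators."""
    acc = 1.0
    for a in stage_accuracies:
        acc *= a
    return acc

# A 90%-accurate POS tagger feeding an 85%-accurate sense tagger bounds
# the pipeline at 0.90 * 0.85 = 76.5%.
bound = pipeline_accuracy([0.90, 0.85])
```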
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have so far been successfully applied to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
The uptake of Linked Data (LD) has promoted the proliferation of datasets and their associated ontologies for describing different domains. According to LD principles, developers should reuse as many available terms as possible to describe their data. Importing ontologies or referring to their terms' URIs are the two main ways to reuse knowledge from available ontologies. In this paper, we have analyzed 18589 terms appearing within 196 ontologies included in the Linked Open Vocabularies (LOV) registry with the aim of understanding the current state of ontology reuse in the LD context. In order to characterize the landscape of ontology reuse in this context, we have extracted statistics about currently reused elements, calculated ratios for reuse, and drawn graphs about imports and references between ontologies.
Keywords: ontology, vocabulary, reuse, linked data, ontology import
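A reuse statistic of the kind described can be sketched by classifying each term URI by namespace. This is a hypothetical illustration only; the namespaces, the term list, and the particular ratio definition are assumptions, not the figures computed over the LOV registry.

```python
# Hypothetical sketch (not the paper's method): treat a term as "reused" when
# its URI lies outside the ontology's own namespace, and report the fraction
# of such terms. All URIs below are invented or well-known vocabulary terms.

def reuse_ratio(term_uris, local_namespace):
    """Fraction of terms coming from namespaces other than the ontology's own."""
    if not term_uris:
        return 0.0
    reused = sum(1 for uri in term_uris if not uri.startswith(local_namespace))
    return reused / len(term_uris)

terms = [
    "http://example.org/onto#Sensor",                # locally defined
    "http://xmlns.com/foaf/0.1/Agent",               # reused from FOAF
    "http://www.w3.org/2003/01/geo/wgs84_pos#lat",   # reused from the WGS84 vocabulary
    "http://example.org/onto#Observation",           # locally defined
]
ratio = reuse_ratio(terms, "http://example.org/onto#")
```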
Abstract:
Since its introduction in the 1950s, the graphical representation of the composite cross-section, with an upper concrete slab and a steel girder beneath it, was a symbol that reached far beyond theory and found wide currency in the practice of composite bridge construction. Since the 1970s and 1980s, this image has lost its symbolic status, owing to new and freer ways of combining concrete and steel. In Germany and Spain, double composite action with lower concrete slabs was introduced in regions of negative bending moment; in France, steel girders are also embedded in prestressed concrete cross-sections. Both approaches have contributed to the present freedom with which steel and concrete can be combined in composite construction. On the development of sections in composite bridges. A comprehensive theory of composite construction was established in Germany by Sattler in 1953. Alongside the analytical solution, the theoretical image of the composite section, with an upper concrete slab and a lower metallic structure, took shape. Theory and graphical representation came to be known together in Europe. This figure was repeated in all theoretical and academic publications, becoming an authentic icon of the composite section. Its translation to the bridge deck in flexion was obvious: the upper slab forms the tread platform, while the metallic structure hangs below. Nevertheless, in continuous decks the section is far from optimal in zones of negative bending moment. But the graphical representation of the theory was not superseded immediately; that came about only after a process in which several European countries played an active role and in which different mechanisms of technology transfer developed.
One approach to this development is "double composite action", with a lower concrete slab in areas of negative bending moment. The first accomplishments, a bridge in Orasje built in 1968 with a 134 m span, as well as the publications on the system proposed by Fabrizio de Miranda in 1971, were neither widely adopted nor continued. Spanish bridges by Fernández Ordóñez and Martínez Calzón used double composite action for the first time in 1979. The German firm Leonhardt, Andrä und Partner has used it since the end of the 1980s to solve bridges of great span. Once the technology became well known thanks to the ASCE International Congress and the Spanish international meetings organised by the Colegio de Ingenieros de Caminos, double composite action was integrated into the structural vocabulary everywhere. In France the approach was different. What Michel Virlogeux calls the "double floor composite section" was reached as an evolution of prestressed concrete bridges. In a widely known experimental process, external prestressing allows weight reduction by diminishing the thickness of the concrete webs. The following step, in the 1980s, was the replacement of the webs by metallic elements: stiffened plates, trusses or folded plates. A direct result of this development is the Bras de la Plaine Bridge on Réunion Island, completed in 2001 with a 280 m span. Both approaches have contributed to the freedom of design in composite construction in steel and concrete today.
Abstract:
In this paper we investigate whether conventional text categorization methods may suffice to infer different verbal intelligence levels. This research goal relies on the hypothesis that the vocabulary speakers make use of reflects their verbal intelligence. Automatic verbal intelligence estimation of users in a spoken language dialog system may be useful when defining an optimal dialog strategy, by improving its adaptation capabilities. The work is based on a corpus containing descriptions (i.e. monologs) of a short film by test persons with different educational backgrounds, together with the speakers' verbal intelligence scores. First, a one-way analysis of variance was performed to compare the monologs with the film transcription and to demonstrate that there are differences in the vocabulary used by test persons with different verbal intelligence levels. Then, for the classification task, the monologs were represented as feature vectors using the classical TF–IDF weighting scheme. The Naive Bayes, k-nearest neighbors and Rocchio classifiers were tested. In this paper we describe and compare these classification approaches, define the optimal classification parameters and discuss the classification results obtained.
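The classification setup described (TF-IDF vectors plus a Rocchio, i.e. nearest-centroid, classifier) can be sketched in a few lines. This toy version is not the authors' code: the tokenized "monologs", the class labels, and the particular TF-IDF variant are all invented for illustration.

```python
import math
from collections import Counter

# Assumption-level sketch of TF-IDF document vectors plus a Rocchio
# (nearest-centroid) classifier, as named in the abstract. Toy data only.

def tfidf_vectors(docs):
    """Represent each tokenized document as a sparse TF-IDF dict."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u.get(t, 0.0) * w for t, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rocchio_predict(train_vecs, labels, query_vec):
    """Assign the label of the most similar class centroid (Rocchio)."""
    centroids = {}
    for vec, label in zip(train_vecs, labels):
        c = centroids.setdefault(label, Counter())
        for t, w in vec.items():
            c[t] += w
    return max(centroids, key=lambda lab: cosine(centroids[lab], query_vec))

# Invented toy "monologs" with hypothetical verbal-intelligence labels.
docs = [["film", "about", "a", "dog"], ["film", "about", "cats"],
        ["verbal", "iq", "test"], ["iq", "score", "test"]]
labels = ["low", "low", "high", "high"]
vecs = tfidf_vectors(docs)
pred = rocchio_predict(vecs, labels, vecs[0])  # classify a known monolog
```

The centroids here are unnormalized sums of the class's TF-IDF vectors; real Rocchio variants typically average and may subtract off other-class centroids, a detail this sketch omits.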
Abstract:
This document presents an innovative formal educational initiative aimed at enhancing the development of engineering students' specific competences while studying the Project Management (PM) subject. The framework of the experience combines (1) theoretical concepts, (2) the development of a real-case project carried out by multidisciplinary groups from three different universities, (3) the use of web 2.0 software tools, and (4) group and individual assignments for students who play different roles (project managers and team members). Under this scenario, the study focuses on monitoring the communication competence in the ever-growing PM virtual environment. Factors such as body language, technical means, stage, and PM-specific vocabulary, among others, have been considered in order to assess the students' performance on this issue. As a main contribution, the paper introduces an ad hoc rubric that, based on previous investigations, has been adapted and tested for the first time in this new and specific context. Additionally, the research conducted has provided some interesting findings that suggest further actions to improve and better define future rubrics, oriented to communication or even other competences. As regards the PM subject specifically, it has been detected that students playing the role of Project Manager strengthen their competences more than those playing the role of Team Member. It has also been detected that students have more difficulty assimilating concepts related to risk and quality management, whereas concepts related to the scope, time and cost knowledge areas are assimilated better.
Abstract:
In this paper the authors present an approach for the semantic annotation of RESTful services in the geospatial domain. Their approach automates some stages of the annotation process, by using a combination of resources and services: a cross-domain knowledge base like DBpedia, two domain ontologies like GeoNames and the WGS84 vocabulary, and suggestion and synonym services. The authors’ approach has been successfully evaluated with a set of geospatial RESTful services obtained from ProgrammableWeb.com, where geospatial services account for a third of the total amount of services available in this registry.
Abstract:
Semantic Sensor Web infrastructures use ontology-based models to represent the data that they manage; however, up to now, these ontological models do not allow representing all the characteristics of distributed, heterogeneous, and web-accessible sensor data. This paper describes a core ontological model for Semantic Sensor Web infrastructures that covers these characteristics and that has been built with a focus on reusability. This ontological model is composed of different modules that deal, on the one hand, with infrastructure data and, on the other hand, with data from a specific domain, that is, the coastal flood emergency planning domain. The paper also presents a set of guidelines, followed during the ontological model development, to satisfy a common set of requirements related to modelling domain-specific features of interest and properties. In addition, the paper includes the results obtained after an exhaustive evaluation of the developed ontologies along different aspects (i.e., vocabulary, syntax, structure, semantics, representation, and context).