954 results for "Model of the semantic fields"
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The main goal of this work is to sketch how language is used in mathematics classrooms. Specifically, we try to understand how teachers use language in order to share meanings with their students. We first present our main intentions, summarizing some studies that are close to our purposes. The two theoretical frameworks that support our study – the Model of Semantic Fields and the Wittgensteinian "language games" – are then presented and discussed with regard to their similarities and distinctions. Our empirical data consist of classroom activities that were recorded and edited into "clips". These clips were transcribed, and our analysis was based on the transcriptions. Data analysis – developed according to our theoretical framework – allowed us to construct the so-called "events" and then to comment on some understandings of how language can be used in mathematics classrooms.
Abstract:
The erosion depth profile of planar targets in balanced and unbalanced magnetron cathodes with cylindrical symmetry is measured along the target radius. The magnetic fields have rotational symmetry. The horizontal and vertical components of the magnetic field B are measured at points above the cathode target at z = 2 × 10⁻³ m. The experimental data reveal that the target erosion depth profile is a function of the angle θ made by B with the horizontal line defined by z = 2 × 10⁻³ m. To explain this dependence, a simplified model of the discharge is developed. Within the scope of the model, the path lengths of the secondary electrons in the pre-sheath region are calculated by analytical integration of the Lorentz differential equations. Weighting these lengths with the distribution law of the mean free path of the secondary electrons, we estimate the densities of the ionizing events over the cathode and the relative flux of the sputtered atoms. The expression so deduced correlates, for the first time, the erosion depth profile of the target with the angle θ. The model fits the experimental target erosion depth profiles reasonably well, confirming that ionization occurs mainly in the pre-sheath zone.
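A minimal sketch of the kind of calculation this abstract describes: numerically integrating the Lorentz equation of motion for a secondary electron above the cathode to obtain its path length. The field values, time span and starting point below are illustrative placeholders, not the paper's parameters (the paper integrates analytically).

```python
# Sketch: path length of a secondary electron in static E and B fields.
import numpy as np
from scipy.integrate import solve_ivp

Q_E, M_E = -1.602e-19, 9.109e-31  # electron charge (C) and mass (kg)

def fields(r):
    """Illustrative static fields at position r = (x, y, z) in metres."""
    E = np.array([0.0, 0.0, -1.0e4])   # V/m, sheath-like field toward target
    B = np.array([0.02, 0.0, 0.01])    # T, oblique magnetron-like field
    return E, B

def lorentz(t, y):
    r, v = y[:3], y[3:]
    E, B = fields(r)
    a = (Q_E / M_E) * (E + np.cross(v, B))
    return np.concatenate([v, a])

y0 = np.array([0.0, 0.0, 2e-3, 0.0, 0.0, 0.0])  # start 2 mm above the target
sol = solve_ivp(lorentz, (0.0, 2e-8), y0, max_step=1e-11)

# Path length travelled, to be weighted later by the mean-free-path law
dr = np.diff(sol.y[:3], axis=1)
path_length = np.linalg.norm(dr, axis=0).sum()
print(f"path length: {path_length:.3e} m")
```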
Abstract:
The choice of industrial development options and the corresponding allocation of research funds are becoming increasingly difficult because of rising R&D costs and pressure for shorter development periods. Forecasting research progress relies on analysing publication activity in the field of interest and the dynamics of its change. Moreover, the allocation of funds is hindered by the exponential growth in the number of publications and patents. Thematic clusters are becoming harder to identify, and their evolution harder to follow. Existing approaches to structuring a research field and identifying its development are very limited: they do not identify thematic clusters with adequate precision, and the trends they identify are often ambiguous. There is therefore a clear need for methods and tools capable of identifying developing fields of research. The main objective of this thesis is to develop tools and methods that help identify promising research topics in the field of separation processes. Two structuring methods and three approaches for identifying development trends are proposed. The proposed methods have been applied to the analysis of research on distillation and filtration. The results show that the developed methods are universal and can be used to study various fields of research. The identified thematic clusters and the forecasted trends of their development were confirmed in almost all tested cases, which supports the universality of the proposed methods. The results allow for the identification of fast-growing scientific fields as well as topics characterized by stagnant or diminishing research activity.
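For illustration, one common way to implement the two ingredients this abstract combines (thematic cluster identification and publication-activity trends) is sketched below with TF-IDF vectors, k-means clustering and a linear trend per cluster. This is a generic approach, not necessarily the thesis' own method, and the toy records are invented.

```python
# Sketch: cluster a toy publication corpus and fit a growth trend per cluster.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# toy records: (year, abstract text); a real corpus would hold thousands
records = [
    (2001, "reactive distillation column design"),
    (2003, "membrane filtration fouling model"),
    (2004, "distillation energy integration"),
    (2006, "ceramic membrane filtration of suspensions"),
    (2008, "dividing wall distillation column control"),
    (2009, "membrane filtration backwash optimisation"),
]
years = np.array([y for y, _ in records])
texts = [t for _, t in records]

X = TfidfVectorizer(stop_words="english").fit_transform(texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

for c in range(2):
    ys = years[labels == c]
    yr, cnt = np.unique(ys, return_counts=True)
    slope = np.polyfit(yr, cnt, 1)[0] if len(yr) > 1 else 0.0
    print(f"cluster {c}: n={len(ys)}, trend slope={slope:+.3f} papers/year")
```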
Abstract:
With many operational centers moving toward models with grid lengths of order 1 km for routine weather forecasting, this paper presents a systematic investigation of the properties of high-resolution versions of the Met Office Unified Model for short-range forecasting of convective rainfall events. The authors describe a suite of configurations of the Met Office Unified Model running with grid lengths of 12, 4, and 1 km and analyze results from these models for a number of convective cases from the summers of 2003, 2004, and 2005. The analysis includes subjective evaluation of the rainfall fields, comparisons of rainfall amounts, initiation, and cell statistics, and a scale-selective verification technique. It is shown that the 4- and 1-km-gridlength models often give more realistic-looking precipitation fields because convection is represented explicitly rather than parameterized. However, the 4-km model suffers from overly large convective cells and delayed initiation because the grid length is too long to reproduce the convection explicitly. These problems are not as evident in the 1-km model, although it does produce too many small cells in some situations. Both the 4- and 1-km models suffer from poor representation at the start of the forecast, in the period when the high-resolution detail is spinning up from the lower-resolution (12-km) starting data. A scale-selective precipitation verification technique implies that at later times in the forecasts (after the spinup period) the 1-km model performs better than the 12- and 4-km models for lower rainfall thresholds. For higher thresholds the 4-km model scores almost as well as the 1-km model, and both do better than the 12-km model.
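The abstract does not name its scale-selective verification technique; the fractions skill score (FSS) sketched below is a widely used neighbourhood-based score of this class and shows how skill can be assessed as a function of spatial scale and rain threshold. The random fields are placeholders for gridded forecast and observed rainfall.

```python
# Sketch: fractions skill score over increasing neighbourhood sizes.
import numpy as np
from scipy.ndimage import uniform_filter

def fss(forecast, observed, threshold, scale):
    """FSS for one rain threshold and one neighbourhood size (grid points)."""
    fbin = (forecast >= threshold).astype(float)
    obin = (observed >= threshold).astype(float)
    # fractional coverage within a scale x scale neighbourhood
    ffrac = uniform_filter(fbin, size=scale, mode="constant")
    ofrac = uniform_filter(obin, size=scale, mode="constant")
    mse = np.mean((ffrac - ofrac) ** 2)
    mse_ref = np.mean(ffrac ** 2) + np.mean(ofrac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan

rng = np.random.default_rng(0)
fcst = rng.gamma(0.5, 2.0, (100, 100))   # placeholder rainfall fields (mm/h)
obs = rng.gamma(0.5, 2.0, (100, 100))
for scale in (1, 5, 25):
    print(f"scale={scale:>2} grid points: FSS={fss(fcst, obs, 1.0, scale):.3f}")
```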
Abstract:
The polynyas of the Laptev Sea are regions of particular interest because of the strong formation of Arctic sea ice there. In order to simulate the polynya dynamics and to quantify ice production, we apply the Finite Element Sea-Ice Ocean Model (FESOM). In previous simulations, FESOM was forced with daily atmospheric data from the NCEP (National Centers for Environmental Prediction) reanalysis 1. For the periods 1 April to 9 May 2008 and 1 January to 8 February 2009 we examine the impact of different forcing data: daily and 6-hourly NCEP reanalysis 1 (1.875° x 1.875°), 6-hourly NCEP reanalysis 2 (1.875° x 1.875°), 6-hourly analyses from the GME (Global Model of the German Weather Service) (0.5° x 0.5°) and high-resolution hourly COSMO (Consortium for Small-Scale Modeling) data (5 km x 5 km). In all FESOM simulations except those forced with 6-hourly and daily NCEP 1 data, the openings and closings of the polynyas are simulated in general agreement with satellite products. Over the fast-ice area the wind fields of all atmospheric data sets are similar and close to in situ measurements. Over the polynya areas, however, the forcing data differ strongly with respect to air temperature and turbulent heat flux. These differences have a strong impact on sea-ice production rates: depending on the forcing fields, polynya ice production ranges from 1.4 km³ to 7.8 km³ during 1 April to 9 May 2008 and from 25.7 km³ to 66.2 km³ during 1 January to 8 February 2009. Atmospheric forcing data with high spatial and temporal resolution that account for the presence of the polynyas are therefore needed to reduce the uncertainty in quantifying ice production in polynyas.
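As a rough illustration of why the turbulent heat flux matters so much here, the sketch below converts a net heat loss over open water directly into an ice production volume, assuming all heat loss goes into freezing. The numbers are illustrative placeholders, not FESOM output.

```python
# Sketch: polynya ice production from a net surface heat loss.
RHO_ICE = 910.0     # kg m^-3, sea-ice density
L_FUSION = 3.34e5   # J kg^-1, latent heat of fusion

def ice_production_km3(q_net_wm2, area_km2, days):
    """Ice volume (km^3) produced by a heat loss q_net (W m^-2, positive
    upward) over an open-water area (km^2) sustained for a number of days."""
    area_m2 = area_km2 * 1e6
    seconds = days * 86400.0
    volume_m3 = q_net_wm2 * area_m2 * seconds / (RHO_ICE * L_FUSION)
    return volume_m3 / 1e9

# illustrative numbers only: 400 W m^-2 over 2000 km^2 for 39 days
print(f"{ice_production_km3(400.0, 2000.0, 39.0):.1f} km^3")
```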
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR STRENGTHS AND SHORTCOMINGS
Computational Linguistics is by now a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this field). Its best-known applications are probably the various tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps less well known but on which most other applications of Computational Linguistics are built. These include POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging; for unrestricted, general texts this error rate ranges from 10 up to 50 percent of the annotated units.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and integration of several linguistic tools within an appropriate software architecture could solve the limitation stated in (1). Integrating several linguistic annotation tools and making them interoperate could also mitigate the limitation stated in (2); in that case, however, all these tools would have to produce annotations for a common level, which would then be combined in order to correct their corresponding errors and inaccuracies. Yet the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
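As a toy illustration of the combination idea for limitation (2), the sketch below merges the per-token outputs of several (hypothetical) POS taggers annotating the same level by majority vote, so that one tool's errors can be outvoted by the others.

```python
# Sketch: majority-vote combination of annotations from several taggers.
from collections import Counter

def merge_annotations(*tag_sequences):
    """Majority vote per token over equally long tag sequences."""
    merged = []
    for token_tags in zip(*tag_sequences):
        tag, _ = Counter(token_tags).most_common(1)[0]
        merged.append(tag)
    return merged

# three hypothetical POS taggers disagreeing on the third token
tagger_a = ["DT", "NN", "VBZ"]
tagger_b = ["DT", "NN", "NNS"]   # erroneous third tag
tagger_c = ["DT", "NN", "VBZ"]
print(merge_annotations(tagger_a, tagger_b, tagger_c))  # ['DT', 'NN', 'VBZ']
```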
In addition, most high-level annotation tools rely on other, lower-level annotation tools and on their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower, i.e. morphosyntactic, level) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by a higher-level one, the errors and inaccuracies of the former should be minimised in advance; otherwise, they will be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). As stated above, (ii) is a type of interoperability problem, and ontologies (Gruber, 1993; Borst, 1997) have so far been successfully applied to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
To summarise, the main aim of the present work was to combine these hitherto separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, where possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Research is presented on the semantic structure of 15 emotion terms as measured by judged-similarity tasks for monolingual English-speaking and monolingual and bilingual Japanese subjects. A major question is the relative explanatory power of a single shared model for English and Japanese versus culture-specific models for each language. The data support a shared model for the semantic structure of emotion terms even though some robust and significant differences are found between English and Japanese structures. The Japanese bilingual subjects use a model more like English when performing tasks in English than when performing the same task in Japanese.
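A standard way to recover a spatial semantic structure from judged-similarity data of this kind is multidimensional scaling; the sketch below applies it to a small, invented dissimilarity matrix. The paper's exact scaling procedure is not stated here, so this is only a generic illustration.

```python
# Sketch: multidimensional scaling of judged dissimilarities among terms.
import numpy as np
from sklearn.manifold import MDS

terms = ["happy", "sad", "angry", "afraid"]
# symmetric judged dissimilarities (0 = identical); values are invented
D = np.array([
    [0.0, 0.9, 0.8, 0.7],
    [0.9, 0.0, 0.5, 0.4],
    [0.8, 0.5, 0.0, 0.6],
    [0.7, 0.4, 0.6, 0.0],
])
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(D)
for term, (x, y) in zip(terms, coords):
    print(f"{term:>7}: ({x:+.2f}, {y:+.2f})")
```

Structures fitted separately to two groups of judges (for instance, English and Japanese speakers) can then be compared to decide between a shared model and culture-specific models.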
Abstract:
In modern magnetic resonance imaging, both patients and health care workers are exposed to strong, non-uniform static magnetic fields inside and outside of the scanner, in which body movement may induce electric currents in tissues that could potentially be harmful. This paper presents theoretical investigations into the spatial distribution of the induced E-fields in a tissue-equivalent human model moving at various positions around the magnet. The numerical calculations are based on an efficient, quasi-static, finite-difference scheme. Three-dimensional field profiles from an actively shielded 4 T magnet system are used, and the body model is projected through the field profile with normalized velocity. The simulations show that it is possible to induce E-fields/currents near the level of physiological significance under some circumstances, and they provide insight into the spatial characteristics of the induced fields. The methodology presented herein can be extrapolated to very high field strengths for evaluating the effects of motion at a variety of field strengths and velocities. (C) 2004 Elsevier Ltd. All rights reserved.
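The physics behind such simulations can be illustrated in one line: for a body moving with velocity v through a static field B, the motional electric field in the body frame is E = v × B. The paper solves the full quasi-static finite-difference problem on a tissue model instead; this is only the zeroth-order estimate.

```python
# Sketch: motional electric field E = v x B for movement in a static field.
import numpy as np

def motional_e_field(v, B):
    """E = v x B, with v in m/s and B in tesla, returning V/m."""
    return np.cross(v, B)

v = np.array([0.5, 0.0, 0.0])   # walking-speed motion along x
B = np.array([0.0, 0.0, 4.0])   # 4 T axial field, as in the paper's magnet
print(motional_e_field(v, B), "V/m")  # [ 0. -2.  0.] V/m
```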
Abstract:
In this study we examine the spectral and morphometric properties of the four important lunar mare dome fields near Cauchy, Arago, Hortensius, and Milichius. We utilize Clementine UV-VIS multispectral data to examine the soil composition of the mare domes, while employing telescopic CCD imagery to compute digital elevation maps in order to determine their morphometric properties, especially flank slope, height, and edifice volume. After reviewing previous attempts to determine topographic data for lunar domes, we propose an image-based 3D reconstruction approach based on a combination of photoclinometry and shape from shading. Accordingly, we devise a classification scheme for lunar mare domes based on a principal component analysis of the determined spectral and morphometric features. For the effusive mare domes of the examined fields we establish four classes, two of which are further divided into two subclasses each, where each class represents a distinct combination of spectral and morphometric dome properties. As a general trend, shallow and steep domes formed out of low-TiO2 basalts are observed in the Hortensius and Milichius dome fields, while the domes near Cauchy and Arago, which consist of high-TiO2 basalts, are all very shallow. The intrusive domes of our data set cover a wide continuous range of spectral and morphometric quantities, generally characterized by larger diameters and shallower flank slopes than the effusive domes. A comparison with effusive and intrusive mare domes in other lunar regions, highland domes, and lunar cones has shown that the examined four mare dome fields display such a richness in spectral properties and 3D dome shape that the established representation remains valid in a more global context. Furthermore, we estimate the physical parameters of dome formation for the examined domes based on a rheologic model. Each class of effusive domes defined in terms of spectral and morphometric properties is characterized by its specific range of values for lava viscosity, effusion rate, and duration of the effusion process. For our data set we report lava viscosities between about 10² and 10⁸ Pa s, effusion rates between 25 and 600 m³ s⁻¹, and durations of the effusion process between three weeks and 18 years. Lava viscosity decreases with increasing R415/R750 spectral ratio and thus TiO2 content; however, the correlation is not strong, implying an important influence of further parameters, such as effusion temperature, on lava viscosity.
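For illustration, the classification step described above (principal component analysis over combined spectral and morphometric dome features, after standardisation) might look like the following sketch; the feature values are invented placeholders, not the paper's measurements.

```python
# Sketch: PCA over standardised spectral + morphometric dome features.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

# columns: R415/R750 ratio, flank slope (deg), height (m), volume (km^3)
domes = np.array([
    [0.62, 0.6,  60, 1.1],   # very shallow, high-TiO2 (Cauchy/Arago-like)
    [0.60, 0.9,  90, 1.8],
    [0.55, 2.5, 180, 3.0],   # steeper, low-TiO2 (Hortensius-like)
    [0.54, 3.1, 230, 4.2],
    [0.56, 1.4, 120, 2.2],
])
scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(domes))
for i, (p1, p2) in enumerate(scores):
    print(f"dome {i}: PC1={p1:+.2f}, PC2={p2:+.2f}")
```

Classes would then be drawn as groupings in this principal-component space.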
Abstract:
The St. Lawrence Island polynya (SLIP) is a commonly occurring winter phenomenon in the Bering Sea, in which dense saline water produced during new ice formation is thought to flow northward through the Bering Strait to help maintain the Arctic Ocean halocline. Winter darkness and inclement weather conditions have made continuous in situ and remote observation of this polynya difficult. However, imagery acquired from the European Space Agency ERS-1 Synthetic Aperture Radar (SAR) has allowed observation of the St. Lawrence Island polynya using both the imagery and derived ice displacement products. With the development of ARCSyM, a high resolution regional model of the Arctic atmosphere/sea ice system, simulation of the SLIP in a climate model is now possible. Intercomparisons between remotely sensed products and simulations can lead to additional insight into the SLIP formation process. Low resolution SAR, SSM/I and AVHRR infrared imagery for the St. Lawrence Island region are compared with the results of a model simulation for the period of 24-27 February 1992. The imagery illustrates a polynya event (polynya opening). With the northerly winds strong and consistent over several days, the coupled model captures the SLIP event with moderate accuracy. However, the introduction of a stability dependent atmosphere-ice drag coefficient, which allows feedbacks between atmospheric stability, open water, and air-ice drag, produces a more accurate simulation of the SLIP in comparison to satellite imagery. Model experiments show that the polynya event is forced primarily by changes in atmospheric circulation followed by persistent favorable conditions: ocean surface currents are found to have a small but positive impact on the simulation which is enhanced when wind forcing is weak or variable.
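A highly simplified sketch of what a stability-dependent air-ice drag coefficient can look like is given below: a neutral coefficient damped under stable stratification and enhanced under unstable stratification via the bulk Richardson number. The functional form and constants are generic placeholders, not the ARCSyM scheme.

```python
# Sketch: bulk-Richardson-number adjustment of a neutral drag coefficient.
def drag_coefficient(cd_neutral, ri_bulk, beta=5.0):
    if ri_bulk >= 0.0:  # stable stratification: suppress turbulent exchange
        return cd_neutral / (1.0 + beta * ri_bulk) ** 2
    return cd_neutral * (1.0 - beta * ri_bulk) ** 0.5  # unstable: enhance

CDN = 1.5e-3  # typical neutral 10 m drag coefficient over sea ice
for ri in (-0.5, 0.0, 0.2):
    print(f"Ri_b={ri:+.1f}: Cd={drag_coefficient(CDN, ri):.2e}")
```

The feedback described in the abstract arises because open water destabilises the near-surface air, raising the drag and hence the ice export that keeps the polynya open.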
Abstract:
Loss of magnetic medium solids from dense medium circuits is a substantial contributor to operating cost. Much of this loss is by way of wet drum magnetic separator effluent. A model of the separator would be useful for process design, optimisation and control. A review of the literature established that although various rules of thumb exist, largely based on empirical or anecdotal evidence, there is no model of magnetics recovery in a wet drum magnetic separator which includes as inputs all significant machine and operating variables. A series of trials, in both factorial experiments and in single variable experiments, was therefore carried out using a purpose-built rig which featured a small industrial scale (700 mm lip length, 900 mm diameter) wet drum magnetic separator. A substantial data set of 191 trials was generated in the work. The results of the factorial experiments were used to identify the variables having a significant effect on magnetics recovery. Observations carried out as an adjunct to this work, as well as magnetic theory, suggest that the capture of magnetic particles in the wet drum magnetic separator is a flocculation process. Such a process should be defined by a flocculation rate and a flocculation time, the latter being determined by the volumetric flowrate and the volume within the separation zone. A model based on this concept and containing adjustable parameters was developed. This model was then fitted to a randomly chosen 80% of the data, and validated by application to the remaining 20%. The model is shown to provide a satisfactory fit to the data over three orders of magnitude of magnetics loss. (C) 2003 Elsevier Science B.V. All rights reserved.
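A sketch of the fitting strategy the abstract describes is given below. The model form, recovery = 1 − exp(−k·V/Q), i.e. a flocculation rate k applied over the residence time V/Q in the separation zone, is a guess consistent with the description rather than the published model, and the data are synthetic.

```python
# Sketch: fit a flocculation-style recovery model on 80% of the trials,
# then validate on the held-out 20%.
import numpy as np
from scipy.optimize import curve_fit

def recovery_model(flowrate, k, zone_volume):
    residence_time = zone_volume / flowrate       # flocculation time
    return 1.0 - np.exp(-k * residence_time)      # flocculation rate k

rng = np.random.default_rng(1)
q = rng.uniform(0.002, 0.02, 191)                 # m^3/s, one per trial
rec = recovery_model(q, 12.0, 0.004) + rng.normal(0, 0.002, q.size)

idx = rng.permutation(q.size)
train, test = idx[:153], idx[153:]                # ~80% / 20% split
popt, _ = curve_fit(recovery_model, q[train], rec[train], p0=[10.0, 0.003])
rmse = np.sqrt(np.mean((recovery_model(q[test], *popt) - rec[test]) ** 2))
print(f"k={popt[0]:.2f} 1/s, V_zone={popt[1]*1e3:.2f} L, "
      f"validation RMSE={rmse:.4f}")
```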
Abstract:
The IEEE 802.15.4 protocol can support time-sensitive Wireless Sensor Network (WSN) applications thanks to its Guaranteed Time Slot (GTS) Medium Access Control mechanism. Several analytical and simulation models of the IEEE 802.15.4 protocol have recently been proposed. Nevertheless, currently available simulation models for this protocol are inaccurate and incomplete; in particular, they do not support the GTS mechanism. In this paper, we propose an accurate OPNET simulation model, with a focus on the implementation of the GTS mechanism. This work is motivated by the need to validate the previously proposed Network Calculus based analytical model of the GTS mechanism and to compare the performance evaluation of the protocol as given by the two alternative approaches. Our contribution is therefore an accurate OPNET model of the IEEE 802.15.4 protocol. Additionally, and perhaps more importantly, based on the simulation model we propose a novel methodology for tuning the protocol parameters so that better protocol performance can be guaranteed, both in terms of maximizing the throughput of the allocated GTS and of minimizing frame delay.
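For context, the basic GTS timing arithmetic in IEEE 802.15.4 (2.4 GHz PHY, 250 kb/s, 62500 symbols/s) is easy to reproduce: the superframe duration follows from the standard constants and the superframe order SO, and a GTS occupies an integer number of the 16 superframe slots. The sketch below is generic arithmetic from the standard, not the OPNET model.

```python
# Sketch: IEEE 802.15.4 superframe and GTS slot durations vs superframe order.
A_BASE_SLOT_DURATION = 60     # symbols (standard constant)
A_NUM_SUPERFRAME_SLOTS = 16   # slots per superframe (standard constant)
SYMBOL_RATE = 62500           # symbols per second at the 2.4 GHz PHY

def superframe_duration_s(so):
    symbols = A_BASE_SLOT_DURATION * A_NUM_SUPERFRAME_SLOTS * (2 ** so)
    return symbols / SYMBOL_RATE

for so in range(5):
    sd = superframe_duration_s(so)
    print(f"SO={so}: superframe={sd*1e3:7.2f} ms, "
          f"one GTS slot={sd/16*1e3:6.2f} ms")
```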
Abstract:
AIMS: While successful termination by pacing of organized atrial tachycardias has been observed in patients, single site rapid pacing has not yet led to conclusive results for the termination of atrial fibrillation (AF). The purpose of this study was to evaluate a novel atrial septal pacing algorithm for the termination of AF in a biophysical model of the human atria. METHODS AND RESULTS: Sustained AF was generated in a model based on human magnetic resonance images and membrane kinetics. Rapid pacing was applied from the septal area following a dual-stage scheme: (i) rapid pacing for 10-30 s at pacing intervals 62-70% of AF cycle length (AFCL), (ii) slow pacing for 1.5 s at 180% AFCL, initiated by a single stimulus at 130% AFCL. Atrial fibrillation termination success rates were computed. A mean success rate for AF termination of 10.2% was obtained for rapid septal pacing only. The addition of the slow pacing phase increased this rate to 20.2%. At an optimal pacing cycle length (64% AFCL) up to 29% of AF termination was observed. CONCLUSION: The proposed septal pacing algorithm could suppress AF reentries in a more robust way than classical single site rapid pacing. Experimental studies are now needed to determine whether similar termination mechanisms and rates can be observed in animals or humans, and in which types of AF this pacing strategy might be most effective.
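An illustrative reconstruction (not the authors' implementation) of the dual-stage stimulus schedule described above: a rapid phase at a fraction of the AF cycle length (AFCL), a single transition stimulus at 130% AFCL, then a short slow phase at 180% AFCL.

```python
# Sketch: stimulus times for the dual-stage septal pacing scheme.
def pacing_schedule(afcl_ms, rapid_fraction=0.64, rapid_duration_ms=20_000,
                    slow_duration_ms=1_500):
    """Return stimulus times (ms); defaults use the 64% AFCL rapid interval
    reported as optimal and a 20 s rapid phase within the 10-30 s range."""
    times, t = [], 0.0
    rapid_interval = rapid_fraction * afcl_ms
    while t < rapid_duration_ms:              # stage 1: rapid pacing
        times.append(t)
        t += rapid_interval
    t = times[-1] + 1.30 * afcl_ms            # single stimulus at 130% AFCL
    times.append(t)
    end_slow = t + slow_duration_ms
    while t + 1.80 * afcl_ms <= end_slow:     # stage 2: slow pacing
        t += 1.80 * afcl_ms
        times.append(t)
    return times

stimuli = pacing_schedule(afcl_ms=180.0)      # illustrative AFCL of 180 ms
print(f"{len(stimuli)} stimuli over {stimuli[-1]/1000:.2f} s")
```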
Abstract:
This paper outlines the approach that the WHO's Family of International Classifications (WHO-FIC) network is undertaking to create ICD-11. We also outline the more focused work of the Quality and Safety Topic Advisory Group, whose activities include the following: (i) cataloguing existing ICD-9 and ICD-10 quality and safety indicators; (ii) reviewing ICD morbidity coding rules for main condition, diagnosis timing, number of diagnosis fields and diagnosis clustering; (iii) substantially restructuring the health-care-related injury concepts coded in ICD-10 chapters 19/20; (iv) mapping ICD-11 quality and safety concepts to the information model of the WHO's International Classification for Patient Safety and the AHRQ Common Formats; (v) reviewing vertical chapter content in all chapters of the ICD-11 beta version; and (vi) downstream field testing of ICD-11 prior to its official 2015 release. The transition from ICD-10 to ICD-11 promises to produce an enhanced classification with better potential to capture important concepts relevant to measuring health system safety and quality, an important use case for the classification.