867 results for Model of the semantic fields
Abstract:
Peer reviewed
Abstract:
Perceived discrimination is associated with increased engagement in unhealthy behaviors. We propose an identity-based pathway to explain this link. Drawing on an identity-based motivation model of health behaviors (Oyserman, Fryberg, & Yoder, 2007), we propose that perceptions of discrimination lead individuals to engage in ingroup-prototypical behaviors in the service of validating their identity and creating a sense of ingroup belonging. To the extent that people perceive unhealthy behaviors as ingroup-prototypical, perceived discrimination may thus increase motivation to engage in unhealthy behaviors. We describe our theoretical model and two studies that demonstrate initial support for some paths in this model. In Study 1, African American participants who reflected on racial discrimination were more likely to endorse unhealthy ingroup-prototypical behavior as self-characteristic than those who reflected on a neutral event. In Study 2, among African American participants who perceived unhealthy behaviors to be ingroup-prototypical, discrimination predicted greater endorsement of unhealthy behaviors as self-characteristic as compared to a control condition. These effects held both with and without controlling for body mass index (BMI) and income. Broader implications of this model for how discrimination adversely affects health-related decisions are discussed.
Abstract:
RBC donor (copy 2): Ernest Haywood Collection.
URIs and Intertextuality: Incumbent Philosophical Commitments in the Development of the Semantic Web
Abstract:
Examines two commitments inherent in Resource Description Framework (RDF): intertextuality and rationalism. After introducing how rationalism has been studied in knowledge organization, this paper then introduces the concept of bracketed-rationalism. This paper closes with a discussion of ramifications of intertextuality and bracketed rationalism on evaluation of RDF.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The main goal of this work is to sketch how language is used in mathematics classrooms. We specifically try to understand how teachers use language in order to share meanings with their students. We initially present our main intentions, summarizing some studies that are close to our purposes. The two theoretical frameworks which support our study – the Model of Semantic Fields and the Wittgensteinian "games of language" – are then presented and discussed with respect to their similarities and distinctions. Our empirical data consist of classroom activities that were recorded and turned into "clips". These clips were transcribed and our analysis was based on the transcriptions. Data analysis – developed according to our theoretical framework – allowed us to build the so-called "events" and then comment on some understandings of how language can be used in mathematics classrooms.
Abstract:
In this paper, a spiking neural network (SNN) architecture to simulate the sound localization ability of the mammalian auditory pathways using the interaural intensity difference cue is presented. The lateral superior olive was the inspiration for the architecture, which required the integration of an auditory periphery (cochlea) model and a model of the medial nucleus of the trapezoid body. The SNN uses leaky integrate-and-fire excitatory and inhibitory spiking neurons, facilitating synapses and receptive fields. Experimentally derived head-related transfer function (HRTF) acoustical data from adult domestic cats were employed to train and validate the localization ability of the architecture; training used a supervised learning algorithm, the remote supervision method, to determine the azimuthal angles. The experimental results demonstrate that the architecture performs best when it is localizing high-frequency sound data, in agreement with the biology, and also shows a high degree of robustness when the HRTF acoustical data is corrupted by noise.
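The leaky integrate-and-fire dynamics underlying such SNN architectures can be illustrated with a minimal sketch (all parameter values and the function name below are assumptions chosen for illustration, not those of the paper):

```python
import numpy as np

def lif_spike_train(input_current, dt=1e-4, tau_m=0.02,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0, r_m=1.0):
    """Simulate a single leaky integrate-and-fire neuron.

    Membrane dynamics: dv/dt = (-(v - v_rest) + r_m * I(t)) / tau_m,
    integrated with forward Euler; a spike time is recorded and the
    membrane reset to v_reset whenever v crosses v_thresh.
    """
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_in) / tau_m
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant supra-threshold current produces regular, periodic spiking.
spikes = lif_spike_train(np.full(1000, 2.0))
```

Because the input is constant and the reset is deterministic, the inter-spike intervals are identical; in the full architecture such neurons would be driven by cochlea-model outputs instead of a constant current.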
Abstract:
With many operational centers moving toward order 1-km-gridlength models for routine weather forecasting, this paper presents a systematic investigation of the properties of high-resolution versions of the Met Office Unified Model for short-range forecasting of convective rainfall events. The authors describe a suite of configurations of the Met Office Unified Model running with grid lengths of 12, 4, and 1 km and analyze results from these models for a number of convective cases from the summers of 2003, 2004, and 2005. The analysis includes subjective evaluation of the rainfall fields and comparisons of rainfall amounts, initiation, cell statistics, and a scale-selective verification technique. It is shown that the 4- and 1-km-gridlength models often give more realistic-looking precipitation fields because convection is represented explicitly rather than parameterized. However, the 4-km model representation suffers from large convective cells and delayed initiation because the grid length is too long to correctly reproduce the convection explicitly. These problems are not as evident in the 1-km model, although it does suffer from too many small cells in some situations. Both the 4- and 1-km models suffer from poor representation at the start of the forecast in the period when the high-resolution detail is spinning up from the lower-resolution (12 km) starting data used. A scale-selective precipitation verification technique implies that for later times in the forecasts (after the spinup period) the 1-km model performs better than the 12- and 4-km models for lower rainfall thresholds. For higher thresholds the 4-km model scores almost as well as the 1-km model, and both do better than the 12-km model.
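A widely used scale-selective verification measure for precipitation fields is the fractions skill score, which compares fractional threshold-exceedance coverage over neighbourhoods of increasing size. The sketch below illustrates that general neighbourhood approach; it is an assumption about the method family, not necessarily the exact technique used in the paper:

```python
import numpy as np

def fractions_skill_score(forecast, observed, threshold, window):
    """Fractions skill score over (window x window) neighbourhoods.

    Both fields are converted to binary exceedance masks, averaged
    over each neighbourhood to give fractional coverage, and compared
    via FSS = 1 - MSE(f, o) / (mean(f^2) + mean(o^2)), so that
    1 is a perfect match and 0 is no skill.
    """
    def fractions(field):
        mask = (field >= threshold).astype(float)
        # box filter via 2-D cumulative sums (integral image)
        pad = np.zeros((mask.shape[0] + 1, mask.shape[1] + 1))
        pad[1:, 1:] = mask.cumsum(0).cumsum(1)
        n = window
        out = pad[n:, n:] - pad[:-n, n:] - pad[n:, :-n] + pad[:-n, :-n]
        return out / n**2
    f, o = fractions(forecast), fractions(observed)
    mse = np.mean((f - o) ** 2)
    mse_ref = np.mean(f ** 2) + np.mean(o ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else 1.0
```

Evaluating the score across a range of window sizes shows at which spatial scales a high-resolution forecast becomes skillful, which is why such measures favor neither overly smooth nor slightly displaced sharp features.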
Abstract:
The polynyas of the Laptev Sea are regions of particular interest due to the strong formation of Arctic sea-ice. In order to simulate the polynya dynamics and to quantify ice production, we apply the Finite Element Sea-Ice Ocean Model FESOM. In previous simulations FESOM has been forced with daily atmospheric NCEP (National Centers for Environmental Prediction) reanalysis 1 data. For the periods 1 April to 9 May 2008 and 1 January to 8 February 2009 we examine the impact of different forcing data: daily and 6-hourly NCEP reanalyses 1 (1.875° x 1.875°), 6-hourly NCEP reanalyses 2 (1.875° x 1.875°), 6-hourly analyses from the GME (Global Model of the German Weather Service) (0.5° x 0.5°) and high-resolution hourly COSMO (Consortium for Small-Scale Modeling) data (5 km x 5 km). In all FESOM simulations, except for those with 6-hourly and daily NCEP 1 data, the openings and closings of polynyas are simulated in broad agreement with satellite products. Over the fast-ice area the wind fields of all atmospheric data sets are similar and close to in situ measurements. Over the polynya areas, however, there are strong differences between the forcing data with respect to air temperature and turbulent heat flux. These differences have a strong impact on sea-ice production rates. Depending on the forcing fields, polynya ice production ranges from 1.4 km³ to 7.8 km³ during 1 April to 9 May 2008 and from 25.7 km³ to 66.2 km³ during 1 January to 8 February 2009. Therefore, atmospheric forcing data with high spatial and temporal resolution which account for the presence of the polynyas are needed to reduce the uncertainty in quantifying ice production in polynyas.
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Its best-known applications are probably the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
Research is presented on the semantic structure of 15 emotion terms as measured by judged-similarity tasks for monolingual English-speaking and monolingual and bilingual Japanese subjects. A major question is the relative explanatory power of a single shared model for English and Japanese versus culture-specific models for each language. The data support a shared model for the semantic structure of emotion terms even though some robust and significant differences are found between English and Japanese structures. The Japanese bilingual subjects use a model more like English when performing tasks in English than when performing the same task in Japanese.
Abstract:
In modern magnetic resonance imaging, both patients and health care workers are exposed to strong, non-uniform static magnetic fields inside and outside of the scanner, in which body movement may induce electric currents in tissues that could be potentially harmful. This paper presents theoretical investigations into the spatial distribution of induced E-fields in a tissue-equivalent human model when moving at various positions around the magnet. The numerical calculations are based on an efficient, quasi-static, finite-difference scheme. Three-dimensional field profiles from an actively shielded 4 T magnet system are used, and the body model is projected through the field profile with normalized velocity. The simulation shows that it is possible to induce E-fields/currents near the level of physiological significance under some circumstances and provides insight into the spatial characteristics of the induced fields. The methodology presented herein can be extrapolated to very high field strengths for the evaluation of the effects of motion at a variety of field strengths and velocities.
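An order-of-magnitude feel for motion-induced E-fields can be obtained from Faraday's law alone; the sketch below is a back-of-the-envelope illustration, not the paper's quasi-static finite-difference scheme, and the example numbers (velocity, fringe-field gradient, loop radius) are assumptions:

```python
def induced_e_field(velocity, field_gradient, loop_radius):
    """Estimate the induced electric field (V/m) for a circular
    conducting loop of radius loop_radius (m) moving at velocity (m/s)
    through a static-field gradient field_gradient (T/m).

    Faraday's law for a spatially uniform dB/dt over the loop gives
    E = (r / 2) * dB/dt, with dB/dt = v * dB/dz for rigid motion
    along the gradient direction.
    """
    db_dt = velocity * field_gradient
    return loop_radius / 2.0 * db_dt

# e.g. walking at 1 m/s through a 2 T/m fringe-field gradient,
# with a body cross-section radius of ~0.15 m:
e = induced_e_field(1.0, 2.0, 0.15)  # 0.15 V/m
```

Such a uniform-field estimate ignores tissue heterogeneity and geometry, which is precisely what the full finite-difference calculation in the paper resolves.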
Abstract:
In this study we examine the spectral and morphometric properties of the four important lunar mare dome fields near Cauchy, Arago, Hortensius, and Milichius. We utilize Clementine UV-vis multispectral data to examine the soil composition of the mare domes while employing telescopic CCD imagery to compute digital elevation maps in order to determine their morphometric properties, especially flank slope, height, and edifice volume. After reviewing previous attempts to determine topographic data for lunar domes, we propose an image-based 3D reconstruction approach which is based on a combination of photoclinometry and shape from shading. Accordingly, we devise a classification scheme for lunar mare domes which is based on a principal component analysis of the determined spectral and morphometric features. For the effusive mare domes of the examined fields we establish four classes, two of which are further divided into two subclasses, respectively, where each class represents distinct combinations of spectral and morphometric dome properties. As a general trend, shallow and steep domes formed out of low-TiO2 basalts are observed in the Hortensius and Milichius dome fields, while the domes near Cauchy and Arago that consist of high-TiO2 basalts are all very shallow. The intrusive domes of our data set cover a wide continuous range of spectral and morphometric quantities, generally characterized by larger diameters and shallower flank slopes than effusive domes. A comparison to effusive and intrusive mare domes in other lunar regions, highland domes, and lunar cones has shown that the examined four mare dome fields display such a richness in spectral properties and 3D dome shape that the established representation remains valid in a more global context. Furthermore, we estimate the physical parameters of dome formation for the examined domes based on a rheologic model.
Each class of effusive domes defined in terms of spectral and morphometric properties is characterized by its specific range of values for lava viscosity, effusion rate, and duration of the effusion process. For our data set we report lava viscosities between about 10² and 10⁸ Pa s, effusion rates between 25 and 600 m³ s⁻¹, and durations of the effusion process between three weeks and 18 years. Lava viscosity decreases with increasing R415/R750 spectral ratio and thus TiO2 content; however, the correlation is not strong, implying an important influence of further parameters like effusion temperature on lava viscosity.
Abstract:
Acid hydrolysis is a popular pretreatment for removing hemicellulose from lignocelluloses in order to produce a digestible substrate for enzymatic saccharification. In this work, a novel model for the dilute acid hydrolysis of hemicellulose within sugarcane bagasse is presented and calibrated against experimental oligomer profiles. The efficacy of mathematical models as hydrolysis yield predictors and as vehicles for investigating the mechanisms of acid hydrolysis is also examined. Experimental xylose, oligomer (degree of polymerisation 2 to 6) and furfural yield profiles were obtained for bagasse under dilute acid hydrolysis conditions at temperatures ranging from 110 °C to 170 °C. Population balance kinetics, diffusion and porosity evolution were incorporated into a mathematical model of the acid hydrolysis of sugarcane bagasse. This model was able to produce a good fit to experimental xylose yield data with only three unknown kinetic parameters ka, kb and kd. However, fitting this same model to an expanded data set of oligomeric and furfural yield profiles did not successfully reproduce the experimental results. It was found that a "hard-to-hydrolyse" parameter, α, was required in the model to ensure reproducibility of the experimental oligomer profiles at 110 °C, 125 °C and 140 °C. The parameters obtained through the fitting exercises at lower temperatures were able to be used to predict the oligomer profiles at 155 °C and 170 °C with promising results. The interpretation of kinetic parameters obtained by fitting a model to only a single set of data may be ambiguous. Although these parameters may correctly reproduce the data, they may not be indicative of the actual rate parameters, unless some care has been taken to ensure that the model describes the true mechanisms of acid hydrolysis. It is possible to challenge the robustness of the model by expanding the experimental data set and hence limiting the parameter space for the fitting parameters.
The novel combination of "hard-to-hydrolyse" and population balance dynamics in the model presented here appears to stand up to such rigorous fitting constraints.
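The idea of coupling first-order hydrolysis kinetics with a "hard-to-hydrolyse" fraction can be sketched as a simplified two-step model. This is a minimal illustration with assumed rate constants and an assumed slowdown factor for the hard fraction; the paper's full model additionally includes population balance kinetics, diffusion and porosity evolution:

```python
import numpy as np

def hydrolysis_profile(t, ka, kd, alpha):
    """Simplified two-step first-order hydrolysis:

        hemicellulose --ka--> xylose --kd--> furfural,

    where a fraction alpha of the hemicellulose is 'hard-to-hydrolyse'
    and reacts at ka/10 (the factor 10 is an assumption for this sketch).
    Returns the xylose yield fraction at times t, using the analytical
    solution of the linear ODE pair for each sub-population
    (valid when kd differs from both hydrolysis rates).
    """
    def xylose(k):
        # Solution of h' = -k h, x' = k h - kd x with h(0)=1, x(0)=0.
        return k / (kd - k) * (np.exp(-k * t) - np.exp(-kd * t))
    easy = (1 - alpha) * xylose(ka)
    hard = alpha * xylose(ka / 10.0)
    return easy + hard

# Xylose rises, peaks, then degrades to furfural at long times.
t = np.linspace(0.0, 10.0, 101)
profile = hydrolysis_profile(t, ka=1.0, kd=0.3, alpha=0.2)
```

Fitting only the xylose peak of such a curve is exactly the ambiguity the paper warns about: several (ka, kd, α) combinations can reproduce one data set, which is why the expanded oligomer and furfural profiles are needed to constrain the parameters.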
Abstract:
Hydrographic observations were taken along two coastal sections and one open ocean section in the Bay of Bengal during the 1999 southwest monsoon, as a part of the Bay of Bengal Monsoon Experiment (BOBMEX). The coastal section in the northwestern Bay of Bengal, which was occupied twice, captured a freshwater plume in its two stages: first when the plume was restricted to the coastal region although separated from the coast, and then when the plume spread offshore. Below the freshwater layer there were indications of an undercurrent. The coastal section in the southern Bay of Bengal was marked by intense coastal upwelling in a 50 km wide band. In regions under the influence of the freshwater plume, the mixed layer was considerably thinner and occasionally led to the formation of a temperature inversion. The mixed layer and isothermal layer were of similar depth for most of the profiles within and outside the freshwater plume, and temperature below the mixed layer decreased rapidly to the top of the seasonal thermocline. There was no barrier layer even in regions well under the influence of the freshwater plume. The freshwater plume in the open Bay of Bengal does not advect to the south of 16 degrees N during the southwest monsoon. A model of the Indian Ocean, forced by heat, momentum and freshwater fluxes for the year 1999, reproduces the freshwater plume in the Bay of Bengal reasonably well. Model currents as well as the surface circulation calculated as the sum of geostrophic and Ekman drift show a southeastward North Bay Monsoon Current (NBMC) across the Bay, which forms the southern arm of a cyclonic gyre. The NBMC separates the very low salinity waters of the northern Bay from the higher salinities in the south and thus plays an important role in the regulation of near surface stratification.