878 results for "sets of words"
Abstract:
To investigate the perception of emotional facial expressions, researchers rely on shared sets of photos or videos, most often generated by actor portrayals. The drawback of such standardized material is a lack of flexibility and controllability, as it does not allow the systematic parametric manipulation of specific features of facial expressions on the one hand, and of more general properties of the facial identity (age, ethnicity, gender) on the other. To remedy this problem, we developed FACSGen: a novel tool that allows the creation of realistic synthetic 3D facial stimuli, both static and dynamic, based on the Facial Action Coding System. FACSGen provides researchers with total control over facial action units and the corresponding informational cues in 3D synthetic faces. We present four studies validating both the software and the general methodology of systematically generating controlled facial expression patterns for stimulus presentation.
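Purely as an illustration of the kind of parametric control this abstract describes, the sketch below uses entirely hypothetical names (FACSGen's real interface is not documented here) to show how a dynamic action-unit pattern might be specified:

```python
from dataclasses import dataclass

@dataclass
class AUCurve:
    """One action unit's intensity profile over a dynamic stimulus (hypothetical)."""
    au: int           # FACS action unit number, e.g., 12 = lip corner puller
    intensity: float  # peak intensity, 0.0-1.0
    onset_ms: int     # time at which the AU starts to rise
    apex_ms: int      # time at which peak intensity is reached
    offset_ms: int    # time at which the AU returns to neutral

# A parametric "happiness" pattern: AU6 (cheek raiser) + AU12 (lip corner puller),
# applied to a synthetic identity whose age/ethnicity/gender are set independently.
expression = [
    AUCurve(au=6,  intensity=0.8, onset_ms=0,   apex_ms=400, offset_ms=1200),
    AUCurve(au=12, intensity=1.0, onset_ms=100, apex_ms=500, offset_ms=1200),
]
```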
Abstract:
An important goal in computational neuroanatomy is the complete and accurate simulation of neuronal morphology. We are developing computational tools to model three-dimensional dendritic structures based on sets of stochastic rules. This paper reports an extensive, quantitative anatomical characterization of simulated motoneurons and Purkinje cells. We used several local and global algorithms implemented in the L-Neuron and ArborVitae programs to generate sets of virtual neurons. Parameter statistics for all algorithms were measured from experimental data, thus providing a compact and consistent description of these morphological classes. We compared the emergent anatomical features of each group of virtual neurons with those of the experimental database to gain insight into the plausibility of the model assumptions, potential improvements to the algorithms, and non-trivial relations among morphological parameters. Algorithms mainly based on local constraints (e.g., branch diameter) were successful in reproducing many morphological properties of both motoneurons and Purkinje cells (e.g., total length, asymmetry, number of bifurcations). The addition of global constraints (e.g., trophic factors) improved the angle-dependent emergent characteristics (average Euclidean distance from the soma to the dendritic terminations, dendritic spread). Virtual neurons systematically displayed greater anatomical variability than real cells, suggesting the need for additional constraints in the models. For several emergent anatomical properties, a specific algorithm reproduced the experimental statistics better than the others did. However, relative performances were often reversed for different anatomical properties and/or morphological classes. Thus, combining the strengths of alternative generative models could lead to comprehensive algorithms for the complete and accurate simulation of dendritic morphology.
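To make the comparison step concrete, here is a minimal sketch, with illustrative synthetic data, of how one emergent feature of virtual and real neurons might be compared statistically (the paper's actual tests are not specified in the abstract):

```python
import numpy as np
from scipy.stats import ks_2samp

def compare_emergent_feature(real_values, virtual_values, name):
    """Two-sample comparison of one emergent morphometric (e.g., total length)."""
    stat, p = ks_2samp(real_values, virtual_values)
    print(f"{name}: KS={stat:.3f}, p={p:.3f}, "
          f"real CV={np.std(real_values)/np.mean(real_values):.2f}, "
          f"virtual CV={np.std(virtual_values)/np.mean(virtual_values):.2f}")

# Illustrative only: virtual neurons with the right mean but inflated
# variability, as the abstract reports.
rng = np.random.default_rng(0)
real = rng.normal(10_000, 1_000, 50)      # total dendritic length (um)
virtual = rng.normal(10_000, 2_500, 200)
compare_emergent_feature(real, virtual, "total length")
```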
Abstract:
It is generally assumed that the variability of neuronal morphology has an important effect on both the connectivity and the activity of the nervous system, but this effect has not been thoroughly investigated. Neuroanatomical archives represent a crucial tool to explore structure–function relationships in the brain. We are developing computational tools to describe, generate, store and render large sets of three-dimensional neuronal structures in a format that is compact, quantitative, accurate and readily accessible to the neuroscientist. Single-cell neuroanatomy can be characterized quantitatively at several levels. In computer-aided neuronal tracing files, a dendritic tree is described as a series of cylinders, each represented by diameter, spatial coordinates and the connectivity to other cylinders in the tree. This ‘Cartesian’ description constitutes a completely accurate mapping of dendritic morphology, but it bears little intuitive information for the neuroscientist. In contrast, a classical neuroanatomical analysis characterizes neuronal dendrites on the basis of the statistical distributions of morphological parameters, e.g. maximum branching order or bifurcation asymmetry. This description is intuitively more accessible, but it only yields information on the collective anatomy of a group of dendrites, i.e. it is not complete enough to provide a precise ‘blueprint’ of the original data. We are adopting a third, intermediate level of description, which consists of the algorithmic generation of neuronal structures within a certain morphological class based on a set of ‘fundamental’, measured parameters. This description is as intuitive as a classical neuroanatomical analysis (parameters have an intuitive interpretation), and as complete as a Cartesian file (the algorithms generate and display complete neurons). The advantages of the algorithmic description of neuronal structure are immense. If an algorithm can measure the values of a handful of parameters from an experimental database and generate virtual neurons whose anatomy is statistically indistinguishable from that of their real counterparts, a great deal of data compression and amplification can be achieved. Data compression results from the quantitative and complete description of thousands of neurons with a handful of statistical distributions of parameters. Data amplification is possible because, from a set of experimental neurons, many more virtual analogues can be generated. This approach could allow one, in principle, to create and store a neuroanatomical database containing data for an entire human brain in a personal computer. We are using two programs, L-NEURON and ARBORVITAE, to investigate systematically the potential of several different algorithms for the generation of virtual neurons. Using these programs, we have generated anatomically plausible virtual neurons for several morphological classes, including guinea pig cerebellar Purkinje cells and cat spinal cord motor neurons. These virtual neurons are stored in an online electronic archive of dendritic morphology. This process highlights the potential and the limitations of the ‘computational neuroanatomy’ strategy for neuroscience databases.
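As a caricature of this algorithmic level of description, the following sketch grows a dendritic tree from sampled ‘fundamental’ parameters using purely local rules; the distributions and the Rall-style diameter rule are illustrative assumptions, not L-NEURON's actual rule set:

```python
import random

def grow_dendrite(diameter, depth=0, max_depth=12):
    """Recursively grow a binary dendritic tree from sampled parameters.

    Local rules only: segment length is drawn from a distribution, and the
    decision to bifurcate or terminate depends on the current branch diameter.
    """
    length = random.gammavariate(2.0, 15.0)      # sampled segment length (um)
    segment = {"diam": diameter, "len": length, "children": []}
    p_branch = min(0.9, diameter / 2.0)          # thicker branches bifurcate more
    if depth < max_depth and random.random() < p_branch:
        # Rall-style diameter split: d_parent^e = d1^e + d2^e, here with e = 1.5.
        ratio = random.uniform(0.4, 0.6)
        d1 = diameter * ratio ** (1 / 1.5)
        d2 = diameter * (1 - ratio) ** (1 / 1.5)
        segment["children"] = [grow_dendrite(d1, depth + 1, max_depth),
                               grow_dendrite(d2, depth + 1, max_depth)]
    return segment

tree = grow_dendrite(diameter=3.0)
```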
Abstract:
Automatic keyword or keyphrase extraction is concerned with assigning keyphrases to documents based on words from within the document. Previous studies have shown that in a significant number of cases author-supplied keywords are not appropriate for the document to which they are attached. This can be either because they represent what the author believes the paper is about, not what it actually is, or because they include keyphrases that are more classificatory than explanatory, e.g., “University of Poppleton” instead of “Knowledge Discovery in Databases”. Thus, there is a need for a system that can generate an appropriate and diverse range of keyphrases that reflect the document. This paper proposes a solution that examines the synonyms of words and phrases in the document to find the underlying themes, and presents these as appropriate keyphrases. The primary method explores taking n-grams of the source document phrases and examining the synonyms of these, while the secondary considers grouping outputs by their synonyms. The experiments undertaken show that the primary method produces good results and that the secondary method produces both good results and potential for future work.
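A minimal sketch of the primary method as described, using WordNet (via NLTK) as a stand-in thesaurus; the function and scoring scheme are assumptions for illustration, not the paper's implementation:

```python
from collections import Counter
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def candidate_themes(tokens, n=2):
    """Score n-grams of document tokens by how often their synonyms recur."""
    ngrams = [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    theme_counts = Counter()
    for gram in ngrams:
        # WordNet stores multiword entries with underscores.
        for synset in wn.synsets(gram.replace(" ", "_")):
            for lemma in synset.lemma_names():
                theme_counts[lemma.replace("_", " ")] += 1
    return theme_counts.most_common(10)
```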
Abstract:
Portfolio managers who follow a top-down approach to fund management need to know, when developing a pan-European investment strategy, which factors are most important in driving property returns, so that they can concentrate their management and research efforts accordingly. To address this issue, this paper examines the relative importance of country, sector and regional effects in determining property returns across Europe, using the largest database of individual property returns currently available. Using annual data over the period 1996 to 2002 for a sample of over 25,000 properties, the results show that country-specific effects dominate sector-specific factors, which in turn dominate region-specific factors. This is true even for different sub-sets of countries and sectors. In other words, real estate returns are mainly determined by local (country-specific) conditions and are only mildly affected by general European factors. Thus, for institutional investors contemplating investment into Europe, the first level of analysis must be an examination of the individual countries, followed by the prospects of the property sectors within the country, and then an assessment of the differences in expected performance between the main city and the rest of the country.
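One standard way to operationalize such a decomposition is a dummy-variable regression in the spirit of Heston and Rouwenhorst; the sketch below, on synthetic data, is an assumption about the general approach rather than the paper's exact specification:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative data: individual property returns tagged by country and sector.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "country": rng.choice(["UK", "FR", "DE", "NL"], n),
    "sector": rng.choice(["office", "retail", "industrial"], n),
})
df["ret"] = rng.normal(0.07, 0.05, n)

def r2_of(cols):
    """R^2 of a dummy-variable regression of returns on the given effects."""
    X = sm.add_constant(pd.get_dummies(df[cols], drop_first=True, dtype=float))
    return sm.OLS(df["ret"], X).fit().rsquared

# Country effects "dominate" sector effects if they explain more of the
# cross-sectional variation in returns.
print("country R^2:", r2_of(["country"]))
print("sector  R^2:", r2_of(["sector"]))
```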
Abstract:
The close relationship between children’s vocabulary size and their later academic success has led researchers to explore how vocabulary development might be promoted during the early school years. We describe a study that explored the effectiveness of naturalistic classroom storytelling as an instrument for teaching new vocabulary to six- to nine-year-old children. We examined whether learning was facilitated by encountering new words in single versus multiple story contexts, or by the provision of age-appropriate definitions of words as they were encountered. Results showed that encountering words in stories on three occasions led to significant gains in word knowledge in children of all ages and abilities, and that learning was further enhanced across the board when teachers elaborated on the new words’ meanings by providing dictionary definitions. Our findings clarify how classroom storytelling activities can be a highly effective means of promoting vocabulary development.
Abstract:
This investigation moves beyond traditional studies of word reading to identify how the production complexity of words affects reading accuracy in an individual with deep dyslexia (JO). We examined JO’s ability to read words aloud while manipulating both the production complexity of the words and the semantic context. The classification of words as either phonetically simple or complex was based on the Index of Phonetic Complexity. The semantic context was varied using a semantic blocking paradigm (i.e., semantically blocked and unblocked conditions). In the semantically blocked condition, words were grouped by semantic category (e.g., table, sit, seat, couch), whereas in the unblocked condition the same words were presented in a random order. JO’s performance on reading aloud was also compared to her performance on a repetition task using the same items. Results revealed a strong interaction between word complexity and semantic blocking for reading aloud but not for repetition. JO produced the greatest number of errors for phonetically complex words in the semantically blocked condition. This interaction suggests that semantic processes are constrained by output production processes, and that these constraints are exaggerated when responses are derived from visual rather than auditory targets. This complex relationship between orthographic, semantic, and phonetic processes highlights the need for word recognition models to explicitly account for production processes.
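The blocking manipulation itself is straightforward to express; a minimal sketch, assuming simple word lists per semantic category:

```python
import random

def make_presentation_lists(categories):
    """Build blocked vs unblocked reading lists from the same items.

    categories: dict mapping a semantic category to its words,
    e.g., {"sitting": ["table", "sit", "seat", "couch"], ...}
    """
    blocked = [w for cat_words in categories.values() for w in cat_words]
    unblocked = blocked[:]       # identical items...
    random.shuffle(unblocked)    # ...in random order, breaking category grouping
    return blocked, unblocked
```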
Abstract:
We review the proposal of the International Committee for Weights and Measures (Comité International des Poids et Mesures, CIPM), currently being considered by the General Conference on Weights and Measures (Conférence Générale des Poids et Mesures, CGPM), to revise the International System of Units (Le Système International d’Unités, SI). The proposal includes new definitions for four of the seven base units of the SI, and a new form of words to present the definitions of all the units. The objective of the proposed changes is to adopt definitions referenced to constants of nature, taken in the widest sense, so that the definitions may be based on what are believed to be true invariants. In particular, whereas in the current SI the kilogram, ampere, kelvin and mole are linked to exact numerical values of the mass of the international prototype of the kilogram, the magnetic constant (permeability of vacuum), the triple-point temperature of water and the molar mass of carbon-12, respectively, in the new SI these units are linked to exact numerical values of the Planck constant, the elementary charge, the Boltzmann constant and the Avogadro constant, respectively. The new wording expresses the definitions in a simple and unambiguous manner without the need for the distinction between base and derived units. The importance of relations among the fundamental constants to the definitions, and the importance of establishing a mise en pratique for the realization of each definition, are also discussed.
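For illustration, these are the exact numerical values eventually fixed when the revised SI was adopted (26th CGPM, 2018; in force 2019). The abstract above predates that vote, so the figures are shown here only to make the mechanism concrete:

```python
# Each base unit is defined by fixing one constant's numerical value exactly.
h   = 6.62607015e-34     # Planck constant, J s      -> defines the kilogram
e   = 1.602176634e-19    # elementary charge, C      -> defines the ampere
k   = 1.380649e-23       # Boltzmann constant, J/K   -> defines the kelvin
N_A = 6.02214076e23      # Avogadro constant, 1/mol  -> defines the mole
```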
Abstract:
In this paper, sequential importance sampling is used to assess the impact of observations on an ensemble prediction of the decadal path transitions of the Kuroshio Extension (KE). This particle filtering approach gives access to the probability density of the state vector, which allows us to determine the predictive power — an entropy-based measure — of the ensemble prediction. The proposed set-up makes use of an ensemble that, at each time, samples the climatological probability distribution. Then, in a post-processing step, the impact of different sets of observations is measured by the increase in predictive power of the ensemble over the climatological signal during a one-year period. The method is applied in an identical-twin experiment for the Kuroshio Extension using a reduced-gravity shallow water model. We investigate the impact of assimilating velocity observations from different locations during the elongated and the contracted meandering states of the KE. Optimal observation locations correspond to regions with strong potential vorticity gradients. For the elongated state, the optimal location is in the first meander of the KE. During the contracted state of the KE, it is located south of Japan, where the Kuroshio separates from the coast.
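A generic sketch of the post-processing step, assuming Gaussian observation errors; the exact entropy formulation used in the paper may differ:

```python
import numpy as np

def importance_weights(innovations, obs_error_var):
    """Gaussian likelihood weights from observation-minus-member innovations.

    innovations: array of shape (n_members, n_obs).
    """
    logw = -0.5 * np.sum(innovations**2, axis=1) / obs_error_var
    w = np.exp(logw - logw.max())   # subtract the max for numerical stability
    return w / w.sum()

def predictive_power(weights):
    """Entropy deficit of the weighted ensemble relative to a uniform
    (climatological) ensemble: 0 = no information gained, 1 = fully determined."""
    w = np.clip(weights, 1e-12, None)
    return 1.0 - (-np.sum(w * np.log(w))) / np.log(len(w))
```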
Abstract:
Background: Jargon aphasia with neologisms (i.e., novel nonword utterances) is a challenging language disorder that lacks a definitive theoretical description as well as clear treatment recommendations (Marshall, 2006). Aim: The aims of this two-part investigation were to determine the source of neologisms in an individual with jargon aphasia (FF), to identify potential facilitatory semantic and/or phonological cuing effects in picture naming, and to determine whether the timing of the cues relative to the target picture mediated the cuing advantage. Methods and Procedures: FF’s underlying linguistic deficits were determined using several cognitive and linguistic tests. A series of computerized naming experiments using a modified version of the 175-item Philadelphia Naming Test (Roach, Schwartz, Martin, Grewal, & Brecher, 1996) manipulated the cue type (semantic versus phonological) and relatedness (related versus unrelated). In a follow-up experiment, the relative timing of phonological cues was manipulated to test the effect of timing on the cuing advantage. The accuracy of naming responses and error patterns were analyzed. Outcome and Results: FF’s performance on the linguistic and cognitive test battery revealed a severe naming impairment with relatively spared word and nonword repetition, auditory comprehension of words and monitoring, and fairly well preserved semantic abilities. This performance profile was used to evaluate various explanations for neologisms, including a loss of phonological codes, monitoring failure, and impairments in the semantic system. The primary locus of his deficit appears to involve the connection from semantics to phonology, specifically when word production involves accessing phonological forms following semantic access. FF showed a significant cuing advantage only for phonological cues in picture naming, particularly when the cue preceded or coincided with the onset of the target picture. Conclusions: When integrated with previous findings, the results from this study suggest that the core deficit of this and at least some other individuals with jargon aphasia lies in the connection from semantics to phonology. The facilitative advantage of phonological cues could potentially be exploited in future clinical and research studies to test the effectiveness of these cues for enhancing naming performance in individuals like FF.
Abstract:
In 2003 the European Commission started using Impact Assessment (IA) as the main empirical basis for its major policy proposals. The aim was to systematically assess, ex ante, the economic, social and environmental impacts of EU policy proposals. In parallel, research proliferated in search of theoretical grounds for IAs and in an attempt to evaluate empirically the performance of the first sets of IAs produced by the European Commission. This paper combines conceptual and evaluative studies carried out in the first five years of EU IAs. It concludes that the great discrepancy between rationale and practice calls for a different theoretical focus and a greater emphasis on empirically evaluating crucial risk-economics aspects of IAs, such as the value of statistical life, the price of carbon, and the integration of macroeconomic modelling and scenario analysis.
Abstract:
Hocaoglu MB, Gaffan EA, Ho AK. The Huntington's disease health-related quality of life questionnaire: a disease-specific measure of health-related quality of life. Huntington's disease (HD) is a genetic neurodegenerative disorder characterized by motor, cognitive and psychiatric disturbances, yet there is no disease-specific patient-reported outcome measure of health-related quality of life for these patients. Our aim was to develop and validate such an instrument, the Huntington's Disease health-related Quality of Life questionnaire (HDQoL), to capture the true impact of living with this disease. Semi-structured interviews were conducted with the full spectrum of people living with HD to form a pool of items, which were then examined in a larger sample prior to data-driven item reduction. We provide the statistical basis for the extraction of three different sets of scales from the HDQoL, and present validation and psychometric data on these scales using a sample of 152 participants living with HD. These new patient-derived scales provide promising patient-reported outcome measures for HD.
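The abstract does not name the statistical procedure behind the scale extraction; as one plausible illustration only, a data-driven three-factor item reduction might look like this (hypothetical data and thresholds):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

# Hypothetical item pool: 152 respondents x 40 Likert-type items, matching the
# sample size reported above; the paper's actual procedure may differ.
responses = np.random.default_rng(0).integers(0, 5, size=(152, 40)).astype(float)
fa = FactorAnalysis(n_components=3, random_state=0).fit(responses)
loadings = fa.components_.T                   # items x factors
keep = np.abs(loadings).max(axis=1) >= 0.4    # retain items loading >= .4 somewhere
print(f"{keep.sum()} of {len(keep)} items retained across 3 scales")
```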
Abstract:
Automatic keyword or keyphrase extraction is concerned with assigning keyphrases to documents based on words from within the document. Previous studies have shown that in a significant number of cases author-supplied keywords are not appropriate for the document to which they are attached. This can be either because they represent what the author believes a paper is about, not what it actually is, or because they include keyphrases that are more classificatory than explanatory, e.g., “University of Poppleton” instead of “Knowledge Discovery in Databases”. Thus, there is a need for a system that can generate an appropriate and diverse range of keyphrases that reflect the document. This paper proposes two possible solutions that examine the synonyms of words and phrases in the document to find the underlying themes, and present these as appropriate keyphrases. Using three different freely available thesauri, the work undertaken examines two different methods of producing keywords and compares the outcomes across multiple strands in the timeline. The primary method explores taking n-grams of the source document phrases and examining the synonyms of these, while the secondary considers grouping outputs by their synonyms. The experiments undertaken show that the primary method produces good results and that the secondary method produces both good results and potential for future work. In addition, the different qualities of the thesauri are examined, and it is concluded that the more entries a thesaurus has, the better it is likely to perform; the age of the thesaurus and the size of each entry do not correlate with performance.
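A minimal sketch of the secondary method (grouping candidate keyphrases that share a thesaurus sense), again with WordNet standing in for the paper's thesauri; the grouping rule is an illustrative assumption:

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def group_by_synonyms(keyphrases):
    """Merge candidate keyphrases that share at least one WordNet synset."""
    groups = []
    for phrase in keyphrases:
        synsets = set(wn.synsets(phrase.replace(" ", "_")))
        for group in groups:
            if synsets & group["synsets"]:   # shared sense -> same underlying theme
                group["phrases"].append(phrase)
                group["synsets"] |= synsets
                break
        else:
            groups.append({"phrases": [phrase], "synsets": synsets})
    return [g["phrases"] for g in groups]
```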
Abstract:
Remotely sensed, multiannual data sets of shortwave radiative surface fluxes are now available for assimilation into land surface schemes (LSSs) of climate and/or numerical weather prediction models. The RAMI4PILPS suite of virtual experiments assesses the accuracy and consistency of the radiative transfer formulations that provide the magnitudes of absorbed, reflected, and transmitted shortwave radiative fluxes in LSSs. RAMI4PILPS evaluates models under perfectly controlled experimental conditions in order to eliminate uncertainties arising from an incomplete or erroneous knowledge of the structural, spectral and illumination-related canopy characteristics typical of model comparisons with in situ observations. More specifically, the shortwave radiation is separated into a visible and a near-infrared spectral region, and the quality of the simulated radiative fluxes is evaluated by direct comparison with a 3-D Monte Carlo reference model identified during the third phase of the Radiation transfer Model Intercomparison (RAMI) exercise. The RAMI4PILPS setup thus makes it possible to focus in particular on the numerical accuracy of shortwave radiative transfer formulations and to pinpoint areas where future model improvements should concentrate. The impact of increasing degrees of structural and spectral subgrid variability on the simulated fluxes is documented, and the relevance of any resulting biases with respect to gross primary production estimates and shortwave radiative forcings due to snow and fire events is investigated.
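A sketch of the kind of direct flux comparison described, assuming each model reports absorbed/reflected/transmitted fractions per experiment (the dictionary layout is an illustrative assumption):

```python
import numpy as np

def evaluate_fluxes(lss, reference):
    """Compare an LSS's shortwave flux fractions against a 3-D Monte Carlo
    reference over a set of RAMI4PILPS-style experiments."""
    for name in ("absorbed", "reflected", "transmitted"):
        diff = lss[name] - reference[name]
        print(f"{name:12s} bias={diff.mean():+.4f}  "
              f"rmse={np.sqrt((diff**2).mean()):.4f}")
    # Closure check: the three fractions should sum to 1 (energy conservation).
    closure = lss["absorbed"] + lss["reflected"] + lss["transmitted"]
    print(f"max |closure - 1| = {np.abs(closure - 1).max():.2e}")
```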
Abstract:
This study compares two sets of measurements of the composition of bulk precipitation and throughfall at a site in southern England, taken 20 years apart. During this time, SO2 emissions from the UK fell by 82%, NOx emissions by 35% and NH3 emissions by 7%. These reductions were partly reflected in bulk precipitation, with deposition reductions of 56% in SO4, 38% in NO3, 32% in NH4, and 73% in H+. In throughfall under Scots pine, the effects were more dramatic, with an 89% reduction in SO4 deposition and a 98% reduction in H+ deposition. The mean pH under these trees increased from 2.85 to 4.30. Nitrate and ammonium deposition in throughfall increased slightly, however. In the earlier period, the Scots pines were unable to neutralise the high flux of acidity associated with sulphur deposition, even though this was not a highly polluted part of the UK, and deciduous trees (oak and birch) were only able to neutralise it in summer when the leaves were present. In the later period, the sulphur flux had fallen to the point where the acidity could be neutralised by all species; the neutralisation mechanism is thus likely to be largely leaching of base cations and buffering substances from the foliage. The high fluxes are partly due to the fact that these are 60- to 80-year-old trees growing in an open forest structure. The increase in NO3 and NH4 in throughfall in spite of decreased deposition is likely due to a decrease in foliar uptake, perhaps reflecting the increasing nitrogen saturation of the catchment soils. These changes may increase the rate of soil microbial activity as nitrogen increases and acidity declines, with consequent effects on the water quality of the catchment drainage stream.
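A quick arithmetic check shows that the reported pH change and the H+ deposition reduction are mutually consistent:

```python
# Concentration alone falls by a factor of 10**(4.30 - 2.85) ~ 28, i.e. ~96.5%;
# deposition also reflects throughfall volume, hence the slightly larger 98%.
h_before = 10 ** -2.85   # mol/L at pH 2.85
h_after = 10 ** -4.30    # mol/L at pH 4.30
print(f"H+ concentration reduction: {1 - h_after / h_before:.1%}")  # ~96.5%
```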