957 results for Computational Lexical Semantics


Relevance: 30.00%

Abstract:

Background: The computational grammatical complexity (CGC) hypothesis claims that children with G(rammatical)-specific language impairment (G-SLI) have a domain-specific deficit in the computational system affecting syntactic dependencies involving 'movement'. One type of such syntactic dependency is the filler-gap dependency. In contrast, the Generalized Slowing Hypothesis claims that SLI children have a domain-general deficit affecting processing speed and capacity. Aims: To test these contrasting accounts of SLI, we investigate the processing of syntactic (filler-gap) dependencies in wh-questions. Methods & Procedures: Fourteen G-SLI children aged 10;2-17;2, 14 age-matched and 17 vocabulary-matched controls were studied using the cross-modal picture-priming paradigm. Outcomes & Results: G-SLI children's processing speed was significantly slower than that of the age controls, but not of the younger vocabulary controls. The G-SLI children and vocabulary controls did not differ on memory span. However, the typically developing and G-SLI children showed qualitatively different processing patterns. The age and vocabulary controls showed priming at the gap, indicating that they process wh-questions through syntactic filler-gap dependencies. In contrast, G-SLI children showed priming only at the verb. Conclusions: The findings indicate that G-SLI children fail to reliably establish a syntactic filler-gap dependency and instead interpret wh-questions via lexical thematic information. These data challenge the Generalized Slowing Hypothesis account, but support the CGC hypothesis, according to which G-SLI children have a particular deficit in the computational system affecting syntactic dependencies involving 'movement'. As effective remediation often depends on aetiological insight, the discovery of the nature of the syntactic deficit, alongside a possible compensatory use of semantics to facilitate sentence processing, can be used to direct therapy.
However, the therapeutic strategy to be used, and whether similar strengths and weaknesses within the language system are found in other SLI subgroups, are empirical issues that warrant further research.

Relevance: 30.00%

Abstract:

Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. “moo”, “splash”). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage may be due to a stronger connection between semantic information and phonological forms.

Relevance: 30.00%

Abstract:

The subgradient optimization method is a simple, flexible iterative algorithm for linear programming. It is much simpler than Newton's method, applies to a wider variety of problems, and converges even when the objective function is non-differentiable. Since an efficient algorithm should not only produce a good solution but also take less computing time, a simpler algorithm that still delivers high-quality solutions is preferable. In this study a series of step-size parameters in the subgradient equation is examined. Performance is compared on a general piecewise function and on a specific p-median problem. We examine how solution quality changes under five forms of the step-size parameter.
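The update rule being tuned here can be illustrated with a minimal sketch: x_{k+1} = x_k - t_k g_k, where g_k is any subgradient at x_k and t_k is the step-size parameter. The test function, step-size rules and iteration count below are illustrative choices, not the ones evaluated in the study:

```python
def subgradient_min(f, subgrad, x0, step, iters=500):
    """Minimize f by the subgradient update x <- x - t_k * g_k,
    tracking the best point seen (subgradient steps are not monotone)."""
    x, best_x, best_f = x0, x0, f(x0)
    for k in range(1, iters + 1):
        x = x - step(k) * subgrad(x)
        if f(x) < best_f:
            best_x, best_f = x, f(x)
    return best_x, best_f

# A simple non-differentiable test function: f(x) = |x - 3|
f = lambda x: abs(x - 3.0)
sg = lambda x: 1.0 if x > 3.0 else (-1.0 if x < 3.0 else 0.0)

# Two classic step-size rules: constant, and diminishing t_k = 1/k
const_best = subgradient_min(f, sg, 0.0, lambda k: 0.1)
dimin_best = subgradient_min(f, sg, 0.0, lambda k: 1.0 / k)
```

With a constant step the iterates oscillate around the minimizer at a fixed amplitude, while a diminishing step converges but more slowly; this trade-off is exactly what different step-size rules are compared on.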

Relevance: 30.00%

Abstract:

This paper analyzes some forms of linguistic manipulation in Japanese newspapers when reporting on North Korea and its nuclear tests. The focus lies on lexical ambiguity in headlines and journalists' voices in the body of the articles, which result in manipulation of readers' minds. The study is based on a corpus of nine articles from two of Japan's largest newspapers, Yomiuri Online and Asahi Shimbun Digital. The linguistic phenomena that contribute to manipulation are divided by short-term-memory impact or long-term-memory impact, and examples are discussed under each category. The main results of the study are that headlines in Japanese newspapers do not make use of an ambiguous, double-grounded structure. However, the articles are filled with explicit and implied attitudes, as well as attributed material from people of high social status, which suggests that manipulation of long-term memory is a tool used in Japanese media.

Relevance: 30.00%

Abstract:

One of the first questions to consider when designing a new roll forming line is the number of forming steps required to produce a profile. The number depends on the material properties, the cross-section geometry and the tolerance requirements, but the tool designer also wants to minimize the number of forming steps in order to reduce the investment cost for the customer. There are several computer-aided engineering systems on the market that can assist the tool design process. These include more or less simple formulas to predict deformation during forming, as well as the number of forming steps. In recent years it has also become possible to use finite element analysis in the design of roll forming processes. The objective of the work presented in this thesis was to answer the following question: how should the roll forming process be designed for complex geometries and/or high-strength steels? The work approach included literature studies as well as experimental and modelling work. The experimental part gave direct insight into the process and was also used to develop and validate models of it. Starting with simple geometries and standard steels, the work progressed to more complex profiles of variable depth and width, made of high-strength steels. The results obtained are published in seven papers appended to this thesis. In the first study (see paper 1) a finite element model for investigating the roll forming of a U-profile was built. It was used to investigate the effect on longitudinal peak membrane strain and deformation length when the yield strength increases; see papers 2 and 3. The simulations showed that the peak strain decreases whereas the deformation length increases as the yield strength increases. The studies described in papers 4 and 5 measured roll load, roll torque, springback and strain history during the U-profile forming process. The measurement results were used to validate the finite element model of paper 1.
The results presented in paper 6 show that the formability of stainless steel (e.g. AISI 301), which in the cold-rolled condition has a large martensite fraction, can be substantially increased by heating the bending zone. The heated area then becomes austenitic and ductile before the roll forming. Thanks to the phenomenon of strain-induced martensite formation, the steel regains its martensite content and strength during the subsequent plastic straining. Finally, a new tooling concept for profiles with variable cross-sections is presented in paper 7. The overall conclusions of the present work are that it is possible today to successfully produce profiles of complex geometry (3D roll forming) in high-strength steels, and that finite element simulation can be a useful tool in the design of the roll forming process.

Relevance: 30.00%

Abstract:

In this paper, we propose a new method for solving large-scale p-median problem instances based on real data. We compare different approaches in terms of runtime, memory footprint and quality of the solutions obtained. In order to test the different methods on real data, we introduce a new benchmark for the p-median problem based on real Swedish data. Because of the size of the problem addressed, up to 1938 candidate nodes, a number of algorithms, both exact and heuristic, are considered. We also propose an improved hybrid version of a genetic algorithm called impGA. Experiments show that impGA performs as well as other methods on the standard set of medium-size problems taken from Beasley's benchmark, and produces comparatively good results in terms of quality, runtime and memory footprint on our specific benchmark based on real Swedish data.
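As a hedged illustration of the objective these algorithms optimize (this is the generic p-median cost plus a textbook greedy construction heuristic, not the impGA implementation), the problem can be sketched as:

```python
def pmedian_cost(dist, medians):
    """p-median objective: total distance from every node to its
    nearest chosen median (unit demands assumed)."""
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

def greedy_pmedian(dist, p):
    """Greedy construction: repeatedly add the candidate facility
    that reduces the total cost the most."""
    chosen = []
    for _ in range(p):
        best = min((c for c in range(len(dist)) if c not in chosen),
                   key=lambda c: pmedian_cost(dist, chosen + [c]))
        chosen.append(best)
    return chosen

# Toy symmetric distance matrix with two natural clusters {0, 1} and {2, 3}
D = [[0, 1, 9, 9],
     [1, 0, 9, 9],
     [9, 9, 0, 1],
     [9, 9, 1, 0]]
sol = greedy_pmedian(D, 2)  # picks one median per cluster
```

Heuristics such as genetic algorithms search over the same candidate-subset space but evaluate this cost function for whole populations of candidate solutions.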

Relevance: 30.00%

Abstract:

Innovative media management, annotation, delivery, and navigation services will enrich online shopping, help-desk services, and anytime-anywhere training over wireless devices. However, the semantic gap between the rich meaning users intend when they query and browse media and the shallowness of the content descriptions that can actually be computed undermines today's automatic content-annotation systems. To address this problem, an approach is advocated that departs markedly from existing methods based on detecting and annotating low-level audio-visual features.

Relevance: 30.00%

Abstract:

Herrera and Martínez initiated a 2-tuple fuzzy linguistic representation model for computing with words. Wang and Hao further developed a new 2-tuple fuzzy linguistic representation model to deal with linguistic term sets that are not uniformly and symmetrically distributed. This study proposes another linguistic computational model based on 2-tuples and intervals, which we call an interval version of the 2-tuple fuzzy linguistic representation model. The proposed model comprises three steps: 1) an interval numerical scale; 2) computation based on interval numbers; and 3) a generalized inverse operation of the interval numerical scale. The first step transforms linguistic terms into interval numbers, on which the second step operates, yielding an interval number as output. Finally, this number is mapped into the interval of linguistic 2-tuples by the generalized inverse operation. This study also generalizes the numerical scale approach, presented in the Wang and Hao model, to set the interval numerical scale, by considering the context where the semantics of linguistic terms are defined by interval type-2 fuzzy sets (IT2 FSs). In order to compare the proposed model with the existing linguistic computational model based on IT2 FSs, we have conducted extensive simulations. The simulations demonstrate that the results obtained by our proposal are consistent (in some sense) with the results of the linguistic computational model based on IT2 FSs in a vast majority of cases.
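The underlying Herrera-Martínez 2-tuple machinery (the symbolic translation Δ and its inverse Δ⁻¹) can be sketched minimally, assuming a scale of terms s0..sg indexed by integers; the aggregation example below is purely illustrative:

```python
def to_2tuple(beta):
    """Herrera-Martinez Delta: map a value beta in [0, g] to a 2-tuple
    (term index, symbolic translation alpha), alpha in [-0.5, 0.5)
    (note: Python's round() uses banker's rounding at exact .5 ties)."""
    i = round(beta)
    return i, beta - i

def from_2tuple(i, alpha):
    """Delta inverse: recover the numeric value beta = i + alpha."""
    return i + alpha

# Aggregating three assessments on a five-term scale s0..s4 by arithmetic mean
betas = [2, 3, 3]                     # term indices of s2, s3, s3
mean_beta = sum(betas) / len(betas)   # 2.666...
term, alpha = to_2tuple(mean_beta)    # s3 with a negative symbolic translation
```

The interval version proposed in the study replaces the single value beta with an interval number at each of these steps, which is what the generalized inverse operation then maps back to 2-tuples.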

Relevance: 30.00%

Abstract:

Schistosomiasis is still an endemic disease in many regions, with 250 million people infected with Schistosoma and about 500,000 deaths per year. Praziquantel (PZQ) is the drug of choice for schistosomiasis treatment; however, it is classified as Class II in the Biopharmaceutics Classification System, as its low solubility hinders its performance in biological systems. The use of cyclodextrins is a useful tool to increase the solubility and bioavailability of drugs. The aim of this work was to prepare an inclusion compound of PZQ and methyl-beta-cyclodextrin (MeCD), perform its physico-chemical characterization, and explore its in vitro cytotoxicity. SEM showed a change in the morphological characteristics of PZQ:MeCD crystals, and IR data supported this finding, with changes after interaction with MeCD including effects on the C-H of the aromatic ring, observed at 758 cm⁻¹. Differential scanning calorimetry measurements revealed that complexation occurred in a 1:1 molar ratio, as evidenced by the lack of a PZQ transition temperature after inclusion into the MeCD cavity. In solution, the PZQ UV spectrum profile in the presence of MeCD was comparable to the PZQ spectrum in a hydrophobic solvent. Phase solubility diagrams showed a 5.5-fold increase in PZQ solubility and were indicative of a type A_L isotherm, which was used to determine an association constant (K_a) of 140.8 M⁻¹. No cytotoxicity of the PZQ:MeCD inclusion compound was observed in tests using 3T3 cells. The results suggest that the association of PZQ with MeCD could be a good alternative for the treatment of schistosomiasis.
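For an A_L-type diagram, the association constant is conventionally obtained from the Higuchi-Connors relation K_a = slope / (S0 · (1 − slope)), where S0 is the intrinsic solubility of the drug. A sketch with purely hypothetical slope and solubility values, not the paper's measured data:

```python
def higuchi_connors_ka(slope, s0):
    """1:1 association constant from the linear slope of an A_L-type
    phase-solubility diagram and the intrinsic solubility s0 (in M):
    K_a = slope / (s0 * (1 - slope))."""
    if not 0.0 < slope < 1.0:
        raise ValueError("A_L analysis assumes 0 < slope < 1")
    return slope / (s0 * (1.0 - slope))

# Hypothetical illustrative values (NOT the paper's measured data)
ka = higuchi_connors_ka(slope=0.12, s0=1.0e-3)  # about 136 M^-1
```

A slope below 1 is what makes the diagram A_L-type (linear, 1:1 stoichiometry); slopes at or above 1 would indicate higher-order complexes and invalidate this formula.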

Relevance: 30.00%

Abstract:

This paper proposes the application of computational intelligence techniques to assist with complex problems concerning lightning in transformers. In order to estimate the currents related to lightning in a transformer, a neural tool is presented. ATP (the Alternative Transients Program) generated the training vectors. The input variables used in the Artificial Neural Networks (ANNs) were the wave-front time, the wave-tail time and the voltage variation rate; the output variable is the maximum current in the secondary of the transformer. These parameters can define the behavior and severity of lightning. Based on these concepts and on the results obtained, it can be verified that the overvoltages at the secondary of the transformer are affected by the discharge waveform in a similar way to the primary side. By using the tool developed, the high-voltage process in distribution transformers can be mapped and estimated with more precision, aiding the transformer design process, reducing empiricism and evaluation errors, and contributing to a lower transformer failure rate. (C) 2011 Elsevier Ltd. All rights reserved.
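The kind of mapping such a network learns can be illustrated with a toy surrogate (a minimal one-hidden-layer network fitted to a synthetic input-output relation; this is not the authors' ATP-trained model, and the synthetic target function is an assumption for demonstration only):

```python
import math, random

def train_mlp(data, hidden=4, lr=0.1, epochs=2000, seed=0):
    """One-hidden-layer MLP (tanh hidden units, linear output) trained
    by per-sample gradient descent on squared error.
    data: list of (input_vector, target) pairs."""
    rnd = random.Random(seed)
    n_in = len(data[0][0])
    W1 = [[rnd.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [rnd.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0

    def forward(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return h, sum(w * hi for w, hi in zip(W2, h)) + b2

    for _ in range(epochs):
        for x, y in data:
            h, out = forward(x)
            err = out - y                             # d(0.5*err^2)/d(out)
            for j in range(hidden):
                gh = err * W2[j] * (1.0 - h[j] ** 2)  # hidden pre-activation grad
                W2[j] -= lr * err * h[j]
                b1[j] -= lr * gh
                for i in range(n_in):
                    W1[j][i] -= lr * gh * x[i]
            b2 -= lr * err

    return lambda x: forward(x)[1]

# Toy surrogate: "peak current" rises with voltage rate, falls with front time
data = [([t, r], 2.0 * r - 0.5 * t)
        for t in (0.2, 0.5, 1.0) for r in (0.1, 0.5, 0.9)]
model = train_mlp(data)
```

The real tool plays the same role at scale: once trained on simulated waveform parameters, the network interpolates peak secondary currents far faster than re-running a transient simulation for each case.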

Relevance: 30.00%

Abstract:

Classical Monte Carlo simulations were carried out in the NPT ensemble at 25°C and 1 atm, aiming to investigate the ability of the TIP4P water model [Jorgensen, Chandrasekhar, Madura, Impey and Klein; J. Chem. Phys., 79 (1983) 926] to reproduce the newest structural picture of liquid water. The results were compared with recent neutron diffraction data [Soper, Bruni and Ricci; J. Chem. Phys., 106 (1997) 247]. The influence of the computational conditions on the thermodynamic and structural results obtained with this model was also analyzed. The findings were compared with the original ones from Jorgensen et al. [above-cited reference plus Mol. Phys., 56 (1985) 1381]. We note that the thermodynamic results depend on the boundary conditions used, whereas the usual radial distribution functions g_OO(r) and g_OH(r) do not.
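The acceptance rule that NPT-ensemble Monte Carlo moves of this kind rely on can be sketched as follows (the generic textbook Metropolis criterion for a volume move, not the authors' code; names and values are illustrative):

```python
import math, random

def npt_accept(dU, P, V_old, V_new, N, kT, rng=random):
    """Metropolis test for an NPT-ensemble volume move: accept with
    probability min(1, exp(-(dU + P*dV - N*kT*ln(V_new/V_old)) / kT)),
    where dU is the change in potential energy and N the particle count."""
    dV = V_new - V_old
    arg = -(dU + P * dV - N * kT * math.log(V_new / V_old)) / kT
    return arg >= 0.0 or rng.random() < math.exp(arg)

# A downhill move (energy drops, volume unchanged) is always accepted
accepted = npt_accept(dU=-1.0, P=1.0, V_old=1.0, V_new=1.0, N=10, kT=1.0)
```

The N·kT·ln(V'/V) term is what distinguishes the NPT criterion from plain NVT Metropolis sampling; it accounts for the rescaling of particle coordinates when the box volume changes.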

Relevance: 30.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 30.00%

Abstract:

This work presents an analysis of the wavelet-Galerkin method for one-dimensional elastoplastic-damage problems. A time-stepping algorithm for non-linear dynamics is presented. Numerical treatment of the constitutive models is carried out by means of a return-mapping algorithm. For spatial discretization, the wavelet-Galerkin method can be used instead of the standard finite element method. This approach makes it possible to localize singularities. The discrete formulation developed can be applied to the simulation of one-dimensional problems for elastic-plastic-damage models. (C) 2007 Elsevier B.V. All rights reserved.
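The return-mapping step mentioned above can be sketched for the simplest one-dimensional case, linear isotropic hardening without damage (an illustrative reduction, not the thesis' elastoplastic-damage model): an elastic trial stress is computed, and if it violates the yield condition it is projected back onto the yield surface.

```python
def return_mapping(eps, eps_p, alpha, E, H, sigma_y):
    """One step of the 1D return-mapping algorithm with linear isotropic
    hardening: elastic trial stress, then plastic correction if the trial
    state lies outside the yield surface.  Returns (stress, eps_p, alpha)."""
    sigma_tr = E * (eps - eps_p)                  # elastic trial stress
    f_tr = abs(sigma_tr) - (sigma_y + H * alpha)  # trial yield function
    if f_tr <= 0.0:
        return sigma_tr, eps_p, alpha             # elastic step, trial accepted
    dgamma = f_tr / (E + H)                       # plastic multiplier
    sign = 1.0 if sigma_tr > 0.0 else -1.0
    sigma = sigma_tr - E * dgamma * sign          # project back to yield surface
    return sigma, eps_p + dgamma * sign, alpha + dgamma

# Load beyond yield (illustrative values in MPa and strain):
# E = 200 GPa, H = 10 GPa, sigma_y = 250 MPa, total strain 0.2%
s, ep, a = return_mapping(eps=0.002, eps_p=0.0, alpha=0.0,
                          E=200e3, H=10e3, sigma_y=250.0)
```

After the correction, the stress sits exactly on the updated yield surface, |sigma| = sigma_y + H·alpha, which is the consistency condition the return mapping enforces.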