961 results for General-purpose computing on graphics processing units (GPGPU)


Relevance: 100.00%

Abstract:

The experience of learning and using a second language (L2) has been shown to affect the grey matter (GM) structure of the brain. Importantly, GM density in several cortical and subcortical areas has been shown to be related to performance in L2 tasks. Here we show that bilingualism can lead to increased GM volume in the cerebellum, a structure that has been related to the processing of grammatical rules. Additionally, the cerebellar GM volume of highly proficient L2 speakers correlates with their performance in a task tapping grammatical processing in an L2, demonstrating the importance of the cerebellum for the establishment and use of grammatical rules in an L2.

Relevance: 100.00%

Abstract:

During April and May 2010 the ash cloud from the eruption of the Icelandic volcano Eyjafjallajökull caused widespread disruption to aviation over northern Europe. The location and impact of the eruption led to a wealth of observations of the ash cloud being obtained, which can be used to assess modelling of the long-range transport of ash in the troposphere. The UK FAAM (Facility for Airborne Atmospheric Measurements) BAe-146-301 research aircraft overflew the ash cloud on a number of days during May. The aircraft carries a downward-looking lidar which detected the ash layer through the backscatter of the laser light. In this study ash concentrations derived from the lidar are compared with simulations of the ash cloud made with NAME (Numerical Atmospheric-dispersion Modelling Environment), a general-purpose atmospheric transport and dispersion model. The simulated ash clouds are compared to the lidar data to determine how well NAME simulates the horizontal and vertical structure of the ash clouds. Comparison between the ash concentrations derived from the lidar and those from NAME is used to estimate the fraction of the total emitted tephra that is transported over long distances. In making these comparisons, possible position errors in the simulated ash clouds are identified and accounted for. The ash layers seen by the lidar in this study were thin, with typical depths of 550–750 m. The vertical structure of the ash cloud simulated by NAME was generally consistent with the observed ash layers, although the simulated layers identified with observed ash layers were about twice the depth of the observed ones. The structure of the simulated ash clouds was sensitive to the assumed profile of ash emissions. In terms of horizontal and vertical structure, the best results were obtained by assuming that the emission occurred at the top of the eruption plume, consistent with the observed structure of eruption plumes. However, early in the period, when the intensity of the eruption was low, assuming that the emission of ash was uniform with height gave better guidance on the horizontal and vertical structure of the ash cloud. Comparison of the lidar concentrations with those from NAME shows that 2–5% of the total mass erupted by the volcano remained in the ash cloud over the United Kingdom.

Relevance: 100.00%

Abstract:

This paper presents a software-based study of a hardware-based non-sorting median calculation method on a set of integer numbers. The method divides the binary representation of each integer element in the set into bit slices in order to find the element located in the middle position. The method exhibits a linear complexity order, and our analysis shows that the best execution-time performance is obtained when 4-bit slices are used for 8-bit and 16-bit integers, for almost any data set size. Results suggest that a software implementation of the bit-slice method for median calculation outperforms sorting-based methods, with the improvement increasing for larger data set sizes. For data set sizes of N > 5, our simulations show an improvement of at least 40%.
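
The abstract does not reproduce the paper's hardware mapping; the sketch below is a minimal software rendering of the general idea, assuming an MSB-first radix selection in which each pass counts candidates per 4-bit slice and keeps only the bin containing the median rank. The names `bitslice_median`, `width`, and `slice_bits` are illustrative, not the authors'.

```python
def bitslice_median(values, width=16, slice_bits=4):
    """Select the lower median without sorting, one bit slice at a time.

    Assumes non-negative integers that fit in `width` bits. Each pass costs
    O(len(candidates)) and there are width/slice_bits passes, so the total
    cost is linear in the data set size.
    """
    k = (len(values) - 1) // 2          # rank of the lower median
    candidates = list(values)
    mask = (1 << slice_bits) - 1
    for shift in range(width - slice_bits, -1, -slice_bits):
        counts = [0] * (1 << slice_bits)
        for v in candidates:
            counts[(v >> shift) & mask] += 1
        total = 0
        for b, c in enumerate(counts):  # locate the bin holding rank k
            if total + c > k:
                k -= total              # rank within the chosen bin
                candidates = [v for v in candidates if (v >> shift) & mask == b]
                break
            total += c
    return candidates[0]

print(bitslice_median([7, 1, 900, 42, 13]))  # 13
```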

Relevance: 100.00%

Abstract:

The ability to match individual patients to tailored treatments has the potential to greatly improve outcomes for individuals suffering from major depression. In particular, while the vast majority of antidepressant treatments affect serotonin, noradrenaline, or a combination of these two neurotransmitters, it is not known whether there are particular patients or symptom profiles that respond preferentially to the potentiation of serotonin over noradrenaline or vice versa. Experimental medicine models suggest that the primary mode of action of these treatments may be to remediate negative biases in emotional processing. Such models may provide a useful framework for interrogating the specific actions of antidepressants. Here, we therefore review evidence from studies examining the effects of drugs which potentiate serotonin, noradrenaline or a combination of both neurotransmitters on emotional processing. These results suggest that antidepressants targeting serotonin and noradrenaline may have some specific actions on emotion and reward processing which could be used to improve the tailoring of treatment or to understand the effects of dual-reuptake inhibition. Specifically, serotonin may be particularly important in alleviating distress symptoms, while noradrenaline may be especially relevant to anhedonia. The data reviewed here also suggest that noradrenergic-based treatments may have earlier effects on emotional memory than those which affect serotonin.

Relevance: 100.00%

Abstract:

IEEE 754 floating-point arithmetic is widely used in modern, general-purpose computers. It is based on real arithmetic and is made total by adding a positive and a negative infinity, a negative zero, and many Not-a-Number (NaN) states. Transreal arithmetic is total. It also has a positive and a negative infinity but no negative zero, and it has a single, unordered number, nullity. Modifying the IEEE arithmetic so that it uses transreal arithmetic has a number of advantages. It removes one redundant binade from IEEE floating-point objects, doubling the numerical precision of the arithmetic. It removes eight redundant, relational, floating-point operations and removes the redundant total-order operation. It replaces the non-reflexive, floating-point, equality operator with a reflexive equality operator, and it indicates that some of the exceptions may be removed as redundant, subject to issues of backward compatibility and transient future compatibility as programmers migrate to the transreal paradigm.
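
The IEEE behaviours that transreal arithmetic removes are easy to observe directly; the snippet below is a small illustration using Python's binary64 floats, not anything from the paper.

```python
import struct

nan = float("nan")
print(nan == nan)                      # False: IEEE equality is not reflexive on NaN
print(0.0 == -0.0)                     # True, yet the two zeros differ at the bit level:
print(struct.pack(">d", 0.0).hex())    # 0000000000000000
print(struct.pack(">d", -0.0).hex())   # 8000000000000000
# Transreal arithmetic drops the negative zero and replaces the many NaN
# payloads with a single unordered number, nullity.
```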

Relevance: 100.00%

Abstract:

Body size affects nearly all aspects of organismal biology, so it is important to understand the constraints and dynamics of body size evolution. Despite empirical work on the macroevolution and macroecology of minimum and maximum size, there is little general quantitative theory on rates and limits of body size evolution. We present a general theory that integrates individual productivity, the lifestyle component of the slow–fast life-history continuum, and the allometric scaling of generation time to predict a clade's evolutionary rate and asymptotic maximum body size, and the shape of macroevolutionary trajectories during diversifying phases of size evolution. We evaluate this theory using data on the evolution of clade maximum body sizes in mammals during the Cenozoic. As predicted, clade evolutionary rates and asymptotic maximum sizes are larger in more productive clades (e.g. baleen whales), which represent the fast end of the slow–fast lifestyle continuum, and smaller in less productive clades (e.g. primates). The allometric scaling exponent for generation time fundamentally alters the shape of evolutionary trajectories, so allometric effects should be accounted for in models of phenotypic evolution and interpretations of macroevolutionary body size patterns. This work highlights the intimate interplay between the macroecological and macroevolutionary dynamics underlying the generation and maintenance of morphological diversity.
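
The abstract does not give the scaling law's form or exponent; as a sketch of the kind of allometry involved, generation time is conventionally modelled as a power law of body mass, with quarter-power scaling often assumed in metabolic theory:

```latex
% Illustrative quarter-power allometry for generation time; the paper's
% fitted exponent may differ.
\[
  T(M) = T_0\, M^{\gamma}, \qquad \gamma \approx \tfrac{1}{4},
\]
% so generations per unit absolute time scale as M^{-\gamma}, slowing
% size evolution in clock time as clades approach large sizes.
```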

Relevance: 100.00%

Abstract:

We present a general approach based on nonequilibrium thermodynamics for bridging the gap between a well-defined microscopic model and the macroscopic rheology of particle-stabilised interfaces. Our approach is illustrated by starting with a microscopic model of hard ellipsoids confined to a planar surface, which is intended to simply represent a particle-stabilised fluid–fluid interface. More complex microscopic models can be readily handled using the methods outlined in this paper. From the aforementioned microscopic starting point, we obtain the macroscopic, constitutive equations using a combination of systematic coarse-graining, computer experiments and Hamiltonian dynamics. Exemplary numerical solutions of the constitutive equations are given for a variety of experimentally relevant flow situations to explore the rheological behaviour of our model. In particular, we calculate the shear and dilatational moduli of the interface over a wide range of surface coverages, ranging from the dilute isotropic regime, to the concentrated nematic regime.
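
The constitutive equations themselves are beyond the abstract; as a generic illustration of the quantities being computed, the sketch below extracts storage and loss shear moduli from a simulated oscillatory stress response by first-harmonic Fourier projection. This is standard linear rheology, not the authors' model, and all names are illustrative.

```python
import numpy as np

def dynamic_moduli(t, gamma0, omega, stress):
    """Storage (G') and loss (G'') moduli from the stress response to an
    imposed strain gamma(t) = gamma0*sin(omega*t), by projecting onto the
    in-phase and out-of-phase first harmonics. Assumes uniform sampling
    over an integer number of periods."""
    g_storage = 2 * np.mean(stress * np.sin(omega * t)) / gamma0
    g_loss = 2 * np.mean(stress * np.cos(omega * t)) / gamma0
    return g_storage, g_loss

# Synthetic check: a response built with G' = 2.0 and G'' = 0.5 (arbitrary units)
omega, gamma0 = 5.0, 0.01
t = np.linspace(0, 10 * 2 * np.pi / omega, 4000, endpoint=False)  # 10 periods
stress = gamma0 * (2.0 * np.sin(omega * t) + 0.5 * np.cos(omega * t))
print(dynamic_moduli(t, gamma0, omega, stress))  # ~= (2.0, 0.5)
```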

Relevance: 100.00%

Abstract:

While there has been a fair amount of research investigating children’s syntactic processing during spoken language comprehension, and a wealth of research examining adults’ syntactic processing during reading, as yet very little research has focused on syntactic processing during text reading in children. In two experiments, children and adults read sentences containing a temporary syntactic ambiguity while their eye movements were monitored. In Experiment 1, participants read sentences such as, ‘The boy poked the elephant with the long stick/trunk from outside the cage’ in which the attachment of a prepositional phrase was manipulated. In Experiment 2, participants read sentences such as, ‘I think I’ll wear the new skirt I bought tomorrow/yesterday. It’s really nice’ in which the attachment of an adverbial phrase was manipulated. Results showed that adults and children exhibited similar processing preferences, but that children were delayed relative to adults in their detection of initial syntactic misanalysis. It is concluded that children and adults have the same sentence-parsing mechanism in place, but that it operates with a slightly different time course. In addition, the data support the hypothesis that the visual processing system develops at a different rate than the linguistic processing system in children.

Relevance: 100.00%

Abstract:

The present article examines production and on-line processing of definite articles in Turkish-speaking sequential bilingual children acquiring English and Dutch as second languages (L2) in the UK and in the Netherlands, respectively. Thirty-nine 6–8-year-old L2 children and 48 monolingual (L1) age-matched children participated in two separate studies examining the production of definite articles in English and Dutch in conditions manipulating semantic context, that is, the anaphoric and the bridging contexts. Sensitivity to article omission was examined in the same groups of children using an on-line processing task involving article use in the same semantic contexts as in the production task. The results indicate that both L2 children and L1 controls are less accurate when definiteness is established by keeping track of the discourse referents (anaphoric) than when it is established via world knowledge (bridging). Moreover, despite variable production, all groups of children were sensitive to the omission of definite articles in the on-line comprehension task. This suggests that the errors of omission are not due to the lack of abstract syntactic representations, but could result from processes implicated in the spell-out of definite articles. The findings are in line with the idea that variable production in child L2 learners does not necessarily indicate lack of abstract representations (Haznedar and Schwartz, 1997).

Relevance: 100.00%

Abstract:

Empirical mode decomposition (EMD) is a data-driven method used to decompose data into oscillatory components. This paper examines to what extent the EMD algorithm is sensitive to the numerical format of the data. Two key issues with EMD are its stability and computational speed. This paper shows that, for a given signal, there is no significant difference between results obtained with single-precision (binary32) and double-precision (binary64) floating-point arithmetic. This implies that there is no benefit in increasing floating-point precision when performing EMD on devices optimised for the single-precision format, such as graphics processing units (GPUs).
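
The precision comparison can be approximated with a self-contained toy: run one EMD sifting step in each format and measure the disagreement. The sketch below uses linear envelopes (standard EMD uses cubic splines) and illustrates the comparison only; it is not the paper's implementation.

```python
import numpy as np

def sift_once(x, dtype):
    """One EMD sifting step: subtract the mean of the upper and lower
    envelopes. Linear envelopes here; standard EMD uses cubic splines."""
    x = x.astype(dtype)
    i = np.arange(1, len(x) - 1)
    maxima = i[(x[i] > x[i - 1]) & (x[i] > x[i + 1])]
    minima = i[(x[i] < x[i - 1]) & (x[i] < x[i + 1])]
    grid = np.arange(len(x))
    upper = np.interp(grid, maxima, x[maxima]).astype(dtype)
    lower = np.interp(grid, minima, x[minima]).astype(dtype)
    return x - (upper + lower) / dtype(2)

t = np.linspace(0, 1, 4096)
sig = np.sin(2 * np.pi * 3 * t) + 0.3 * np.sin(2 * np.pi * 50 * t)
h32 = sift_once(sig, np.float32).astype(np.float64)
h64 = sift_once(sig, np.float64)
print(np.max(np.abs(h32 - h64)))  # roughly 1e-7: at the binary32 noise floor
```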

Relevance: 100.00%

Abstract:

Existing research on the legitimacy of the UN Security Council is, for the most part, conceptual or theoretical, as scholars tend to make legitimacy assessments with reference to objective standards. Whether UN member states perceive the Security Council as legitimate or illegitimate has yet to be investigated systematically; nor do we know whether states care primarily about the Council's compliance with its legal mandate, its procedures, or its effectiveness. To address this gap, our article analyzes evaluative statements made by states in UN General Assembly debates on the Security Council for the period 1991–2009. In making such statements, states confer legitimacy on the Council or withhold legitimacy from it. We conclude the following: First, the Security Council suffers from a legitimacy deficit, because negative evaluations of the Council by UN member states far outweigh positive ones. Nevertheless, the Council does not find itself in an intractable legitimacy crisis, because it still enjoys a rudimentary degree of legitimacy. Second, the Council's legitimacy deficit results primarily from states' concerns regarding the body's procedural shortcomings; misgivings about its performance rank second. Whether or not the Council complies with its legal mandate has attracted hardly any attention at all.

Relevance: 100.00%

Abstract:

This article reports on a study investigating the relative influence of the first and dominant language on L2 and L3 morpho-lexical processing. A lexical decision task compared the responses to English NV-er compounds (e.g., taxi driver) and non-compounds provided by a group of native speakers and three groups of learners at various levels of English proficiency: L1 Spanish-L2 English sequential bilinguals and two groups of early Spanish-Basque bilinguals with English as their L3. Crucially, the two trilingual groups differed in their first and dominant language (i.e., L1 Spanish-L2 Basque vs. L1 Basque-L2 Spanish). Our materials exploit an (a)symmetry between these languages: while Basque and English pattern together in the basic structure of (productive) NV-er compounds, Spanish presents a construction that differs in directionality as well as inflection of the verbal element (V[3SG] + N). Results show between and within group differences in accuracy and response times that may be ascribable to two factors besides proficiency: the number of languages spoken by a given participant and their dominant language. An examination of response bias reveals an influence of the participants' first and dominant language on the processing of NV-er compounds. Our data suggest that morphological information in the nonnative lexicon may extend beyond morphemic structure and that, similarly to bilingualism, there are costs to sequential multilingualism in lexical retrieval.

Relevance: 100.00%

Abstract:

A novel cryptography method based on the Lorenz attractor chaotic system is presented. The proposed algorithm is secure and fast, making it practical for general use. We introduce the chaotic operation mode, which provides an interaction among the password, the message and a chaotic system. It ensures that the algorithm yields a secure codification even if the nature of the chaotic system is known. The algorithm has been implemented in two versions: one sequential and slow, the other parallel and fast. Our algorithm assures the integrity of the ciphertext (we know if it has been altered, which traditional algorithms do not assure) and consequently its authenticity. Numerical experiments are presented and discussed, showing the behavior of the method in terms of security and performance. The fast version of the algorithm has performance comparable to AES, a cipher in widespread commercial use today, but it is more secure, which makes it immediately suitable for general-purpose cryptography applications. An internet page has been set up that enables readers to test the algorithm and to try to break the cipher.
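
The abstract does not specify the paper's algorithm; purely as a toy illustration of how a password-seeded chaotic trajectory can drive a stream cipher, here is a minimal sketch using an Euler-integrated Lorenz system. Every name and constant is illustrative, and the construction is not secure as written.

```python
import hashlib

def lorenz_keystream(password: str, n: int, dt: float = 0.01) -> bytes:
    """Toy keystream from a Lorenz trajectory seeded by the password.
    Illustrative only: not the paper's algorithm, and not secure as-is."""
    seed = hashlib.sha256(password.encode()).digest()
    x, y, z = (1.0 + b / 256 for b in seed[:3])   # password -> initial state
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0      # classic chaotic parameters

    def step(x, y, z):                             # one explicit-Euler step
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    for _ in range(500):                           # burn-in onto the attractor
        x, y, z = step(x, y, z)
    out = bytearray()
    while len(out) < n:
        x, y, z = step(x, y, z)
        out.append(int(abs(x) * 1e6) & 0xFF)       # low-order digits of state
    return bytes(out)

msg = b"attack at dawn"
ks = lorenz_keystream("correct horse", len(msg))
cipher = bytes(m ^ k for m, k in zip(msg, ks))
print(cipher.hex())
print(bytes(c ^ k for c, k in zip(cipher, ks)))    # XOR again decrypts
```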

Relevance: 100.00%

Abstract:

This essay is an idiographic study of high school students' ability to apply source criticism when using the Internet, and of their attitudes towards this medium. It is a qualitative study whose main purpose is to find out how the students reflect on the problems facing them when searching for facts and information on the internet. The focus group is six 18-year-old high school students in a mid-sized town in central Sweden who are about to finish the social science programme. The study was conducted through one-on-one interviews with the students. The analysis shows that even if the students do not necessarily use a premeditated method when searching for information on the internet, they do have a basic understanding of the matter, especially concerning the nature of a source: who published it and why. Not all students made thorough comparisons with established media such as TV or books, but the analysis made clear that they more or less deliberately regarded the established media as more trustworthy in general. Individuals publishing on the internet, such as bloggers, and sites like Wikipedia are viewed with the utmost skepticism, while public institutions such as universities and public-service TV are generally trusted to be honest and objective, even when publishing on the internet.

Relevance: 100.00%

Abstract:

Using a physically based model, the microstructural evolution of Nb-microalloyed steels during rolling in SSAB Tunnplåt's hot strip mill was modeled. The model describes the evolution of dislocation density, the creation and diffusion of vacancies, dynamic and static recovery through climb and glide, subgrain formation and growth, dynamic and static recrystallization, and grain growth. The model also describes the dissolution and precipitation of particles, and the impeding effect of solute drag and particles on grain growth and recrystallization is accounted for. During hot strip rolling of Nb steels, Nb in solid solution retards recrystallization through solute drag, and at lower temperatures strain-induced precipitation of Nb(C,N) may occur, which effectively retards recrystallization. The flow stress behavior during hot rolling was calculated, with mean flow stress values obtained both from the model and from measured mill data. The model showed that solute drag has an essential effect on recrystallization during hot rolling of Nb steels.
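
The abstract does not give the model's equations; as a sketch of the standard physically based ingredient such models build on, the snippet below integrates a generic Kocks-Mecking dislocation-density evolution law and converts it to flow stress via the Taylor equation. All parameter values are illustrative, not the paper's.

```python
import numpy as np

def flow_stress_curve(strain, rho0=1e12, k1=1e8, k2=10.0,
                      alpha=0.3, M=3.06, G=81e9, b=2.5e-10, sigma0=50e6):
    """Generic Kocks-Mecking law: d(rho)/d(eps) = k1*sqrt(rho) - k2*rho
    (storage vs dynamic recovery), with Taylor flow stress
    sigma = sigma0 + alpha*M*G*b*sqrt(rho). Illustrative parameters only."""
    rho, sigma = rho0, []
    d_eps = np.diff(strain, prepend=strain[0])
    for de in d_eps:
        rho += (k1 * np.sqrt(rho) - k2 * rho) * de
        sigma.append(sigma0 + alpha * M * G * b * np.sqrt(rho))
    return np.array(sigma)

eps = np.linspace(0.0, 0.5, 200)
mfs = flow_stress_curve(eps)
print(f"{mfs[-1] / 1e6:.0f} MPa")  # saturates as recovery balances storage
```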