964 results for implicit categorization


Relevância:

10.00%

Publicador:

Resumo:

This article summarizes current concepts of working memory with regard to its role in emotional coping strategies. In particular, it focuses on the fact that the limited capacity of working memory to process currently relevant information can be turned into an advantage when the individual is occupied with dealing with unpleasant emotion. Based on a phenomenon known as dual-task interference (DTI), this emotion can be chased away by intense arousal due to clearly identifiable external stressors. Thus, risk perception might be used as a 'DTI inductor' that allows avoidance of unpleasant emotion. Successful mastery of risk adds a highly relevant dopaminergic component to the overall experience. The resulting mechanism of implicit learning may contribute to the development of a behavioural addiction. Beyond its putative effects in the development of a behavioural addiction, the use of DTI might be of more general interest for clinical practice, especially in the field of psychotherapy. © 2013 S. Karger AG, Basel.

Relevância:

10.00%

Publicador:

Resumo:

In this work, the valuation methodology for a compound option written on a down-and-out call option, developed by Ericsson and Reneby (2003), is applied to derive a credit risk model. The firm is assumed to have a debt structure with two maturity dates, and the credit event takes place when the firm's asset value falls below a determined level called the barrier. An empirical application of the model to 105 firms of the Spanish continuous market is carried out. For each firm, its value at the date of analysis, its volatility and the critical value are obtained, and from these the short- and long-term default probabilities, as well as the probability implicit in the two previous ones, are deduced. The results are compared with those obtained from the Geske model (1977).
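The structural logic of the abstract (default occurs when firm value crosses a threshold) can be illustrated with a much simpler Merton-style calculation. This is a hedged sketch only: it uses a plain terminal-value threshold rather than the Ericsson and Reneby down-and-out compound-option machinery, and the numbers are hypothetical, not the paper's Spanish-market estimates.

```python
from math import log, sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def merton_default_probability(V: float, B: float, sigma: float,
                               r: float, T: float) -> float:
    """Risk-neutral probability that a lognormally evolving firm
    value V ends below the default threshold B at horizon T.
    V: current firm value, B: default barrier/debt level,
    sigma: asset volatility, r: drift, T: horizon in years."""
    d2 = (log(V / B) + (r - 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    return norm_cdf(-d2)

# Illustrative inputs (hypothetical, not from the paper's 105 firms):
pd_short = merton_default_probability(V=100.0, B=60.0, sigma=0.25, r=0.03, T=1.0)
pd_long  = merton_default_probability(V=100.0, B=60.0, sigma=0.25, r=0.03, T=5.0)
```

With these inputs the long-horizon probability exceeds the short-horizon one, mirroring the paper's distinction between short- and long-term default probabilities.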

Relevância:

10.00%

Publicador:

Resumo:

Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated; they also make implicit or explicit causal claims based on these models. When can these claims be made? We answer this question by first discussing design and estimation conditions under which model estimates can be interpreted, using the randomized experiment as the gold standard. We show how endogeneity--which includes omitted variables, omitted selection, simultaneity, common methods bias, and measurement error--renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or when causal interpretation is confounded, including fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social sciences discipline by reviewing a representative sample of 110 articles on leadership published in the previous 10 years in top-tier journals. Our key finding is that researchers fail to address at least 66% and up to 90% of design and estimation conditions that make causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.
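Of the estimation strategies the abstract lists, difference-in-differences is the easiest to show in miniature: the treated group's change over time, minus the control group's change, nets out common time trends under the parallel-trends assumption. A minimal sketch with hypothetical outcome data (not from the reviewed articles):

```python
from statistics import mean

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Canonical 2x2 difference-in-differences estimate:
    (treated change) minus (control change). Valid as a causal
    estimate only under the parallel-trends assumption."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical data: both groups drift upward by ~2 over time;
# the treated group gains an extra ~3 after the intervention.
effect = diff_in_diff(
    treat_pre=[10, 11, 9],  treat_post=[15, 16, 14],
    ctrl_pre=[10, 10, 10],  ctrl_post=[12, 12, 12],
)
```

Here the common drift of 2 is differenced away and only the extra treated-group gain of 3 remains.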

Relevância:

10.00%

Publicador:

Resumo:

The current economic situation and the long-term budgetary outlook have prompted a debate about the appropriateness of the Budgetary Stability Law (Ley de Estabilidad Presupuestaria). Using generational accounting, this paper assesses the sustainability of Spanish fiscal policy by extending the time horizon beyond the business cycle to take in the effects of the demographic cycle. The results show that, although the fiscal consolidation process has markedly improved the financial position of the public administrations (AA.PP.), a substantial implicit debt is still being passed on to the future.

Relevância:

10.00%

Publicador:

Resumo:

This article describes the long-lasting psychological after-effects of a traumatic experience. There is growing knowledge of the biomedical underpinnings of these phenomena: the underlying mechanisms belong to an implicit learning process whereby the victim remains under the influence of painful past experiences. One of these mechanisms concerns the development of a traumatic bonding which noticeably impedes the establishment of interpersonal relationships. The other mechanism, called "contextualisation deficit", is the difficulty of adjusting a person's emotional and behavioural reactivity to the context of present-day life. This capacity of a traumatic experience to become incrusted long-term in a human being's mind, and to haunt the victim with various forms of psychological and physical suffering, can be compared with the presence of a tumour or an abscess in somatic medicine. Thus, severe drug addiction can be conceptualised as a disorder in which the patient tries - in most cases ineffectively - to soothe the pain of today's world in connection with the trauma of the past. In conclusion, this article urges the development of psychiatric care programmes which operate at the centre of the suffering encountered by these patients, as a complement to already well-established offers such as harm reduction, substitution therapy and social support.

Relevância:

10.00%

Publicador:

Resumo:

Self-categorization theory is a social psychology theory dealing with the relation between the individual and the group. It explains group behaviour through the conception of oneself and others as members of social categories, and through the attribution of those categories' prototypical characteristics to individuals. Hence, it is a theory of the individual that is meant to explain collective phenomena. Situations involving a large number of non-trivially interacting individuals typically generate complex collective behaviours, which are difficult to anticipate on the basis of individual behaviour. Computer simulation of such systems is a reliable way of systematically exploring the dynamics of collective behaviour as a function of individual specifications. In this thesis, we present a formal model of a part of self-categorization theory named the metacontrast principle. Given the distribution of a set of individuals on one or several comparison dimensions, the model generates categories and their associated prototypes. We show that the model behaves coherently with respect to the theory and is able to replicate experimental data concerning various group phenomena, for example polarization. Moreover, it makes it possible to describe systematically the predictions of the theory from which it is derived, especially in previously unexamined situations. At the collective level, several dynamics can be observed, among them convergence towards consensus, towards fragmentation, or towards the emergence of extreme attitudes. We also study the effect of the social network on the dynamics and show that, except for the convergence speed, which rises as the mean distances on the network decrease, the observed convergence types depend little on the chosen network. We further note that individuals located at the border of groups (whether in the social network or spatially) have a decisive influence on the outcome of the dynamics. In addition, the model can be used as an automatic classification algorithm. It identifies prototypes around which groups are built. Prototypes are positioned so as to accentuate groups' typical characteristics and are not necessarily central. Finally, if we consider the set of pixels of an image as individuals in a three-dimensional colour space, the model provides a filter that reduces noise, helps detect objects, and simulates perception biases such as chromatic induction.
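The metacontrast principle the thesis formalizes can be sketched in a few lines for the one-dimensional case: a position is prototypical of its group to the extent that its mean distance to outgroup members exceeds its mean distance to fellow ingroup members. This is a hedged illustration with hypothetical positions, not the thesis's actual model.

```python
def metacontrast_ratio(item, ingroup, outgroup):
    """Meta-contrast ratio of one position on a 1-D comparative
    dimension: mean distance to outgroup members divided by mean
    distance to the other ingroup members. Higher values mean the
    position is more prototypical of the ingroup."""
    others = [g for g in ingroup if g != item]
    inter = sum(abs(item - o) for o in outgroup) / len(outgroup)
    intra = sum(abs(item - g) for g in others) / len(others)
    return inter / intra

# Hypothetical positions on one comparative dimension:
group_a = [1.0, 2.0, 3.0]
group_b = [8.0, 9.0, 10.0]
ratios = {x: metacontrast_ratio(x, group_a, group_b) for x in group_a}
prototype = max(ratios, key=ratios.get)
```

With a distant outgroup, as here, the most prototypical member is near the group's centre; with skewed configurations the prototype shifts away from the outgroup, which is how the model captures polarization.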

Relevância:

10.00%

Publicador:

Resumo:

The difficulties arising in the calculation of the nuclear curvature energy are analyzed in detail, especially with reference to relativistic models. It is underlined that the implicit curvature dependence of the quantal wave functions is directly accessible only in a semiclassical framework. It is shown that, in the relativistic models as well, quantal and semiclassical calculations of the curvature energy are in good agreement.

Relevância:

10.00%

Publicador:

Resumo:

The ability to discriminate conspecific vocalizations is observed across species and early during development. However, its neurophysiologic mechanism remains controversial, particularly regarding whether it involves specialized processes with dedicated neural machinery. We identified spatiotemporal brain mechanisms for conspecific vocalization discrimination in humans by applying electrical neuroimaging analyses to auditory evoked potentials (AEPs) in response to acoustically and psychophysically controlled nonverbal human and animal vocalizations, as well as sounds of man-made objects. AEP strength modulations in the absence of topographic modulations are suggestive of statistically indistinguishable brain networks. First, responses to human versus animal vocalizations were significantly stronger but topographically indistinguishable, starting at 169-219 ms after stimulus onset and within regions of the right superior temporal sulcus and superior temporal gyrus. This effect correlated with another AEP strength modulation occurring at 291-357 ms that was localized within the left inferior prefrontal and precentral gyri. Temporally segregated and spatially distributed stages of vocalization discrimination are thus functionally coupled and demonstrate how conventional views of functional specialization must incorporate network dynamics. Second, vocalization discrimination is not subject to facilitated processing in time, but instead lags more general categorization by approximately 100 ms, indicative of hierarchical processing during object discrimination. Third, although differences between human and animal vocalizations persisted when analyses were performed at a single-object level or extended to include additional (man-made) sound categories, at no latency were responses to human vocalizations stronger than those to all other categories. Vocalization discrimination transpires at times synchronous with those of face discrimination but is not functionally specialized.

Relevância:

10.00%

Publicador:

Resumo:

The aim of this article is to show how, despite the evident idealization of Greece and of Platonic love throughout Victorian-Edwardian England, both also reveal their limits. To make this clear, the author refers constantly to the underlying Greek texts, such as Plato's Symposium and Phaedrus, and perhaps even to Plutarch's Eroticus, in search of a Classical Tradition that is highly significant for understanding the England of the beginning of the twentieth century.

Relevância:

10.00%

Publicador:

Resumo:

The multiscale finite-volume (MSFV) method is designed to reduce the computational cost of elliptic and parabolic problems with highly heterogeneous anisotropic coefficients. The reduction is achieved by splitting the original global problem into a set of local problems (with approximate local boundary conditions) coupled by a coarse global problem. It has been shown recently that the numerical errors in MSFV results can be reduced systematically with an iterative procedure that provides a conservative velocity field after any iteration step. The iterative MSFV (i-MSFV) method can be obtained with an improved (smoothed) multiscale solution to enhance the localization conditions, with a Krylov-subspace method [e.g., the generalized-minimal-residual (GMRES) algorithm] preconditioned by the MSFV system, or with a combination of both. In a multiphase-flow system, a balance between accuracy and computational efficiency should be achieved by finding the minimum number of i-MSFV iterations (on pressure) necessary to achieve the desired accuracy in the saturation solution. In this work, we extend the i-MSFV method to sequential implicit simulation of time-dependent problems. To control the error of the coupled saturation/pressure system, we analyze the transport error caused by an approximate velocity field. We then propose an error-control strategy based on the residual of the pressure equation. At the beginning of the simulation, the pressure solution is iterated until a specified accuracy is achieved. To minimize the number of iterations in a multiphase-flow problem, the solution at the previous timestep is used to improve the localization assumption at the current timestep. Additional iterations are used only when the residual becomes larger than a specified threshold value. Numerical results show that only a few iterations on average are necessary to improve the MSFV results significantly, even for very challenging problems. Therefore, the proposed adaptive strategy yields efficient and accurate simulation of multiphase flow in heterogeneous porous media.
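The error-control idea (iterate the pressure solve only until a residual tolerance is met, and warm-start from the previous timestep's solution) can be illustrated on a toy problem. The sketch below uses plain Jacobi iteration on a 1-D Poisson equation as a stand-in for the far more elaborate i-MSFV pressure iteration; the successive-iterate difference serves as a simple proxy for the true residual.

```python
def solve_pressure(rhs, p0, tol=1e-8, max_iter=10_000):
    """Jacobi iteration for the 1-D Poisson problem -p'' = rhs on a
    unit interval with p = 0 boundaries. Iterates only until the
    change between sweeps (a residual proxy) drops below `tol`, and
    warm-starts from `p0`, e.g. the previous timestep's pressure.
    Returns (solution, iterations used)."""
    n = len(rhs)
    h2 = (1.0 / (n + 1)) ** 2
    p = list(p0)
    for it in range(max_iter):
        new = [((p[i - 1] if i > 0 else 0.0)
                + (p[i + 1] if i < n - 1 else 0.0)
                + h2 * rhs[i]) / 2.0 for i in range(n)]
        residual = max(abs(a - b) for a, b in zip(new, p))
        p = new
        if residual < tol:
            return p, it + 1
    return p, max_iter

rhs = [1.0] * 31
p_cold, n_cold = solve_pressure(rhs, p0=[0.0] * 31)   # cold start
p_warm, n_warm = solve_pressure(rhs, p0=p_cold)       # warm start from a converged field
```

As in the adaptive strategy described above, reusing the previous solution cuts the iteration count drastically: the warm start converges almost immediately, while the cold start needs many sweeps.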

Relevância:

10.00%

Publicador:

Resumo:

Advanced Kernel Methods for Remote Sensing Image Classification. Devis Tuia, Institut de Géomatique et d'Analyse du Risque, September 2009. Abstract: The technical developments of recent years have brought the quantity and quality of digital information to an unprecedented level, as enormous archives of satellite images are available to users. However, even if these advances open more and more possibilities for the use of digital imagery, they also raise several problems of storage and processing. The latter is considered in this thesis: the processing of very high spatial and spectral resolution images is treated with approaches based on data-driven algorithms relying on kernel methods. In particular, the problem of image classification, i.e. the categorization of an image's pixels into a reduced number of classes reflecting spectral and contextual properties, is studied through the different models presented. The accent is put on algorithmic efficiency and the simplicity of the proposed approaches, to avoid overly complex models that users would not adopt. The major challenge of the thesis is to remain close to concrete remote sensing problems without losing methodological interest from the machine learning viewpoint: in this sense, this work aims at building a bridge between the machine learning and remote sensing communities, and all the models proposed have been developed keeping in mind the need for such a synergy. Four models are proposed. First, an adaptive model learning the relevant image features is proposed to solve the problem of high dimensionality and collinearity of the image features; this model automatically provides an accurate classifier and a ranking of the relevance of the single features. The scarcity and unreliability of labeled information are the common root of the second and third models: when confronted with such problems, the user can either construct the labeled set iteratively by direct interaction with the machine or use the unlabeled data to increase the robustness and quality of the data description. Both solutions have been explored, resulting in two methodological contributions based respectively on active learning and semi-supervised learning. Finally, the more theoretical issue of structured outputs is considered in the last model, which, by integrating output similarity into a model, opens new challenges and opportunities for remote sensing image processing.
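The interaction loop behind the active-learning contribution (the user labels the samples the machine is least sure about) can be sketched with uncertainty sampling, its simplest instance. The one-dimensional classifier below is a hypothetical stand-in, not the thesis's kernel model.

```python
def pick_query(unlabeled, predict_proba):
    """Uncertainty sampling: return the unlabeled sample whose
    predicted class-1 probability is closest to 0.5, i.e. the one
    the current classifier is least sure about. This sample is then
    shown to the user (the 'oracle') for labeling."""
    return min(unlabeled, key=lambda x: abs(predict_proba(x) - 0.5))

# Hypothetical 1-D classifier: class-1 probability rises with x,
# with the decision boundary around x = 4.
proba = lambda x: min(1.0, max(0.0, (x - 2.0) / 4.0))

pool = [0.5, 1.5, 3.9, 6.0, 7.5]   # unlabeled candidates
query = pick_query(pool, proba)     # the sample nearest the boundary
```

Samples far from the boundary are confidently classified and add little; querying near the boundary is what makes each user interaction count, which is the point of building the labeled set iteratively.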

Relevância:

10.00%

Publicador:

Resumo:

The purpose of this article is to address a currently much-debated issue: the effects of age on second language learning. To do so, we contrast data collected by our research team from over one thousand seven hundred young and adult learners with four popular beliefs or generalizations which, while deeply rooted in this society, are not always corroborated by our data. Two of these generalizations about second language acquisition (languages spoken in the social context) seem to be widely accepted: (a) older children, adolescents and adults are quicker and more efficient at the first stages of learning than younger learners; (b) in a natural context, children with an early start are more likely to attain higher levels of proficiency. However, in the context of foreign language acquisition, the context in which we collect our data, this second generalization is difficult to verify owing to the low number of instructional hours (a maximum of some 800 hours) and the lower levels of language exposure provided. The design of our research project has allowed us to study differences observed with respect to the age of onset (ranging from 2 to 18+), but in this article we focus on students who began English instruction at the age of 8 (LOGSE educational system) and those who began at the age of 11 (EGB). We have collected data from both groups after a period of 200 (Time 1) and 416 instructional hours (Time 2), and we are currently collecting data after a period of 726 instructional hours (Time 3). We have designed and administered a variety of tests: tests of English production and reception, both oral and written, within both academic and communicatively oriented approaches; tests of the learners' L1 (Spanish and Catalan); and a questionnaire eliciting personal and sociolinguistic information. The questions we address and the relevant empirical evidence are as follows.

1. "For young children, learning languages is a game. They enjoy it more than adults." Our data demonstrate that the situation is not quite so. Firstly, at the levels of both primary and secondary education (ranging from 70.5% of 11-year-olds to 89% of 14-year-olds), students have a positive attitude towards learning English. Secondly, there is a difference between the two groups with respect to the factors they cite as responsible for their motivation to learn English: the younger students cite intrinsic factors, such as the games they play, the methodology used and the teacher, whereas the older students cite extrinsic factors, such as the role of their knowledge of English in the achievement of their future professional goals.

2. "Young children have more resources to learn languages." Here our data suggest just the opposite. The ability to employ learning strategies (actions or steps used) increases with age. Older learners' strategies are more varied and cognitively more complex. In contrast, younger learners depend more on their interlocutor and on external resources, and therefore show a lower level of autonomy in their learning.

3. "Young children don't talk much but understand a lot." This third generalization does seem to be confirmed, at least to a certain extent, by our data on differences due to the age factor in productive use of the target language. As seen above, the comparably slower progress of the younger learners is confirmed. Our analysis of interpersonal receptive abilities also demonstrates the advantage of the older learners. Nevertheless, with respect to passive receptive activities (for example, simple recognition of words or sentences), no great differences are observed. Statistical analyses suggest that in this test, in contrast to the others analyzed, the dominance of the subjects' L1s (reflecting a cognitive capacity that grows with age) has no significant influence on the learning process.

4. "The sooner they begin, the better their results will be in written language." This is not completely confirmed by our research either. First of all, we observe that certain compensatory strategies disappear only with age, not with the number of instructional hours. Secondly, given an identical number of instructional hours, the older subjects obtain better results. With respect to our analysis of data from subjects of the same age (12 years old) but with a different number of instructional hours (200 and 416 respectively, as they began at the ages of 11 and 8), we observe that those who began earlier excel only in the area of lexical fluency.

In conclusion, the superior rate of progress of the older learners appears to be due to their higher level of cognitive development, a factor which allows them to benefit more from formal or explicit instruction in the school context. Younger learners, however, do not benefit here from the quantity and quality of linguistic exposure typical of a natural acquisition context, in which they would be able to make use of implicit learning abilities. It seems clear, then, that the initiative in this country to begin foreign language instruction earlier will have positive effects only if it occurs in combination with either higher levels of exposure to the foreign language or, alternatively, its use as the language of instruction in other areas of the curriculum.

Relevância:

10.00%

Publicador:

Resumo:

In this work, the valuation methodology for a compound option written on a down-and-out call option, developed by Ericsson and Reneby (2003), is applied to derive a credit risk model. The firm is assumed to have a debt structure with two maturity dates, and the credit event takes place when the firm's asset value falls below a determined level called the barrier. An empirical application of the model to 105 firms of the Spanish continuous market is carried out. For each firm, its value at the date of analysis, its volatility and the critical value are obtained, and from these the short- and long-term default probabilities, as well as the probability implicit in the two previous ones, are deduced. The results are compared with those obtained from the Geske model (1977).

Relevância:

10.00%

Publicador:

Resumo:

"IT'S THE ECONOMY STUPID", BUT CHARISMA MATTERS TOO: A DUAL PROCESS MODEL OF PRESIDENTIAL ELECTION OUTCOMES. ABSTRACT Because charisma is assumed to be an important determinant of effective leadership, the extent to which a presidential nominee is more charismatic than his opponent should be an important determinant of voter choices. We computed a composite measure of the rhetorical richness of acceptances speeches given by U.S. presidential candidates at their national party convention. We added this marker of charisma to Ray C. Fair's presidential vote-share equation (1978; 2009). We theorized that voters decide using psychological attribution (i.e., due to macroeconomics and incumbency) as well as inferential processes (i.e., due to leader charismatic behavior) when voting. Controlling for the macro-level variables and incumbency in the Fair model, our results indicated that difference between nominees' charisma is a significant determinant of electoral success, particularly in close elections. This extended model significantly improves the precision of the Fair model and correctly predicts 23 out of the last 24 U.S. presidential elections. Paper 2: IT CEO LEADERSHIP, CORPORATE SOCIAL AND FINANCIAL PERFORMANCE. ABSTRACT We investigated whether CEO leadership predicted corporate financial performance (CFP) and corporate social performance (CSP). Using longitudinal data on 258 CEOs from 117 firms across 19 countries and 10 industry sectors, we found that determinants of CEO leadership (i.e., implicit motives) significantly predicted both CFP and CSP. As expected, the most consistent positive predictor was Responsibility Disposition when interacting with n (need for) Power. n Achievement and n Affiliation were generally negatively related or unrelated to outcomes. CSP was positively related to accounting measures of CFP. Our findings suggest that executive leader characteristics have important consequences for corporate level outcomes. Paper 3. 
PUNISHING THE POWERFUL: ATTRIBUTIONS OF BLAME AND LEADERSHIP ABSTRACT We propose that individuals are more lenient in attributing blame to leaders than to nonleaders. We advance a motivational explanation building on the perspective of punishment and on system justification theory. We conducted two scenario experiments which supported our proposition. In study 1, wrongdoer leader status was negatively related to blame and the perceived seriousness of the wrongdoing. In study 2, controlling for the Big-Five personality factor and individual differences in moral evaluation (i.e., moral foundations), wrongdoer leader status was negatively related with desired severity of punishment, and fair punishments were perceived as more just for non-leaders than for leaders.
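The dual-process claim of the first paper (an attributional, Fair-equation component driven by the economy and incumbency, plus an inferential component driven by the charisma difference between nominees) can be written as a toy linear model. All coefficients below are invented for illustration; they are not the paper's estimates.

```python
def predicted_vote_share(econ_growth, incumbent_running, charisma_diff,
                         b0=48.0, b_econ=0.7, b_inc=2.0, b_char=1.5):
    """Toy dual-process vote-share model (hypothetical coefficients):
    an attributional part (intercept + economy + incumbency, echoing
    the Fair equation) plus an inferential part (the charisma
    difference between the nominees)."""
    return (b0
            + b_econ * econ_growth
            + b_inc * (1.0 if incumbent_running else 0.0)
            + b_char * charisma_diff)

# In a close election the attributional terms leave the race near
# 50-50, so the charisma term can tip the predicted outcome:
base          = predicted_vote_share(econ_growth=1.0, incumbent_running=False,
                                     charisma_diff=0.0)
with_charisma = predicted_vote_share(econ_growth=1.0, incumbent_running=False,
                                     charisma_diff=1.0)
```

With these invented numbers the charisma advantage moves the prediction from just under to just over 50%, which is the sense in which the abstract says charisma matters "particularly in close elections".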

Relevância:

10.00%

Publicador:

Resumo:

The current economic situation and the long-term budgetary outlook have prompted a debate about the appropriateness of the Budgetary Stability Law (Ley de Estabilidad Presupuestaria). Using generational accounting, this paper assesses the sustainability of Spanish fiscal policy by extending the time horizon beyond the business cycle to take in the effects of the demographic cycle. The results show that, although the fiscal consolidation process has markedly improved the financial position of the public administrations (AA.PP.), a substantial implicit debt is still being passed on to the future.