973 results for Individual ability


Relevance: 30.00%

Abstract:

Dual-section variable frequency microwave systems enable rapid, controllable heating of materials within an individual surface mount component in a chip-on-board assembly. The ability to process devices individually allows components with disparate processing requirements to be mounted on the same assembly. The temperature profile induced by the microwave system can be specifically tailored to the needs of the component, allowing the degree of cure to be optimised whilst minimising thermomechanical stresses. This paper presents a review of dual-section microwave technology and its application to the curing of thermosetting polymer materials in microelectronics applications. Curing processes using both conventional and microwave technologies are assessed and compared. Results indicate that dual-section microwave systems are able to cure individual surface mount packages in a significantly shorter time, at the expense of an increase in thermomechanical stresses and a greater variation in degree of cure.
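
The trade-off the review describes, shorter cure time under a tailored temperature profile, can be illustrated with a toy cure-kinetics integration. The sketch below assumes an nth-order cure rate law (a common textbook model, not necessarily the one used in the paper); every parameter value and both temperature profiles are invented.

```python
# Hypothetical sketch: nth-order cure kinetics under two temperature
# profiles. The rate law and all parameter values are illustrative
# assumptions, not taken from the paper.
import numpy as np

R = 8.314   # gas constant, J/(mol*K)
A = 1.0e7   # pre-exponential factor, 1/s (assumed)
Ea = 70e3   # activation energy, J/mol (assumed)
n = 1.5     # reaction order (assumed)

def cure(T_of_t, t_end, dt=0.1):
    """Integrate d(alpha)/dt = A*exp(-Ea/RT)*(1-alpha)^n with Euler steps."""
    alpha, t = 0.0, 0.0
    while t < t_end:
        T = T_of_t(t)
        alpha += dt * A * np.exp(-Ea / (R * T)) * (1.0 - alpha) ** n
        alpha = min(alpha, 1.0)
        t += dt
    return alpha

# Conventional oven: whole board held at 150 C for 30 min.
oven = lambda t: 423.15
# Microwave: rapid localized ramp to 180 C, held for only 5 min (~40 K/s ramp, assumed).
microwave = lambda t: min(453.15, 293.15 + 40.0 * t)

print(f"oven 30 min:     alpha = {cure(oven, 1800):.3f}")
print(f"microwave 5 min: alpha = {cure(microwave, 300):.3f}")
```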

Relevance: 30.00%

Abstract:

Raman spectroscopy has been used for the first time to predict the FA composition of unextracted adipose tissue of pork, beef, lamb, and chicken. It was found that the bulk unsaturation parameters could be predicted successfully (R² = 0.97, root mean square error of prediction (RMSEP) = 4.6% of 4σ), with cis unsaturation, which accounted for the majority of the unsaturation, giving similar correlations. The combined abundance of all measured PUFA (≥2 double bonds per chain) was also well predicted, with R² = 0.97 and RMSEP = 4.0% of 4σ. Trans unsaturation was not as well modeled (R² = 0.52, RMSEP = 18% of 4σ); this reduced prediction ability can be attributed to the low levels of trans FA found in adipose tissue (0.035 times the cis unsaturation level). For the individual FA, the average partial least squares (PLS) regression coefficient of the 18 most abundant FA (relative abundances ranging from 0.1 to 38.6% of the total FA content) was R² = 0.73; the average RMSEP = 11.9% of 4σ. Regression coefficients and prediction errors for the five most abundant FA were all better than the average value (in some cases as low as RMSEP = 4.7% of 4σ). Cross-correlation between the abundances of the minor FA and the more abundant acids could be determined by principal component analysis, and the resulting groups of correlated compounds were also well predicted using PLS. The accuracy of the prediction of individual FA was at least as good as that of other spectroscopic methods, and the extremely straightforward sampling method meant that very rapid analysis of samples at ambient temperature was easily achieved. This work shows that Raman profiling of hundreds of samples per day is easily achievable with an automated sampling system.
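
For readers unfamiliar with the chemometric workflow, the following is a minimal sketch of PLS regression from spectra to a fatty-acid abundance, scored with RMSEP as a percentage of 4σ as quoted above. It uses synthetic stand-in data and scikit-learn; none of the paper's spectra or reference values are reproduced.

```python
# Minimal PLS-regression sketch on synthetic stand-in "spectra".
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 120, 600
X = rng.normal(size=(n_samples, n_wavenumbers))                # stand-in spectra
true_loadings = rng.normal(size=n_wavenumbers)
y = X @ true_loadings + rng.normal(scale=0.5, size=n_samples)  # stand-in FA level

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

rmsep = np.sqrt(np.mean((y_te - y_hat) ** 2))
r2 = pls.score(X_te, y_te)
# The paper quotes RMSEP as a percentage of 4*sigma of the reference values:
rmsep_pct_4sigma = 100 * rmsep / (4 * y_te.std())
print(f"R^2 = {r2:.2f}, RMSEP = {rmsep_pct_4sigma:.1f}% of 4 sigma")
```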

Relevance: 30.00%

Abstract:

The arithmetical performance of typically achieving 5- to 7-year-olds (N = 29) was measured at four 6-month intervals. The same seven tasks were used at each time point: exact calculation, story problems, approximate arithmetic, place value, calculation principles, forced retrieval, and written problems. Although group analysis showed mostly linear growth over the 18-month period, analysis of individual differences revealed a much more complex picture. Some children exhibited marked variation in performance across the seven tasks, including evidence of difficulty in some cases. Individual growth patterns also showed differences in developmental trajectories between children on each task and within children across tasks. The findings support the idea of the componential nature of arithmetical ability and underscore the need for further longitudinal research on typically achieving children and for careful consideration of individual differences.

Relevance: 30.00%

Abstract:

In this paper I examine the scope of publicly available information on the religious composition of employees in private-sector companies in Northern Ireland. I highlight the unavailability of certain types of monitoring data and the impact of data aggregation at company rather than site level. Both oversights lead to underestimates of the extent of workplace segregation in Northern Ireland. The ability to provide more coherent data on workplace segregation, by religion, in Northern Ireland is crucial for advancing equality and other social-justice agendas. I argue that more accurate monitoring of the religious composition of workplaces is part of an overall need to develop a spatial approach in which the importance of ethnically territorialised spaces in the reproduction of ethnosectarian disputation is understood.

Relevance: 30.00%

Abstract:

Many assemblages contain numerous rare species, which can show large increases in abundance, while common species can become rare. Recent calls for experimental tests of the causes and consequences of rarity prompted us to investigate competition between co-existing rare and common species of intertidal gastropods. In various combinations, we increased densities of rare gastropod species to match those of common species in order to evaluate the effects of intra- and interspecific competition on growth and survival of naturally rare or naturally common species at small and large densities. Rarity per se did not cause responses of rare species to differ from those of common species. Rare species did not respond to the abundances of other rare species, nor did they show consistently different responses from those of common species. Instead, individual species responded differently to different densities, regardless of whether they were naturally rare or abundant. This type of experimental evidence is important for predicting the effects of increased environmental variability on rare as opposed to abundant species and therefore, ultimately, on the structure of diverse assemblages.



Relevance: 30.00%

Abstract:

Cross-sectional and longitudinal data consistently indicate that mathematical difficulties are more prevalent in older than in younger children (e.g. Department of Education, 2011). Children's trajectories can take a variety of shapes, such as linear, flat, curvilinear and uneven, and shape has been found to vary within children and across tasks (Jordan et al., 2009). There has been an increase in the use of statistical methods specifically designed to study development, and this has greatly improved our understanding of children's mathematical development. However, the effects of many cognitive and social variables (e.g. working memory and verbal ability) on mathematical development remain unclear. Greater consistency between studies is likely to be achieved by adopting a componential approach to the study of mathematics, rather than treating mathematics as a unitary concept.

Relevance: 30.00%

Abstract:

The miscibility of monoethanolamine (MEA) in five superbase ionic liquids (ILs), namely trihexyl-tetradecylphosphonium benzotriazolide ([P66614][Bentriz]), trihexyl-tetradecylphosphonium benzimidazolide ([P66614][Benzim]), trihexyl-tetradecylphosphonium 1,2,3-triazolide ([P66614][123Triz]), trihexyl-tetradecylphosphonium 1,2,4-triazolide ([P66614][124Triz]), and trihexyl-tetradecylphosphonium imidazolide ([P66614][Im]), was determined at 295.15 K using 1H NMR spectroscopy. The solubility of carbon dioxide (CO2) in equimolar (IL + MEA) mixtures was then studied experimentally using a gravimetric technique at 295.15 K and 0.1 MPa. The effect of MEA on the CO2 capture ability of these ILs was investigated, together with the viscosity of these systems in the presence and absence of CO2, to evaluate their practical application in CO2 capture processes. The effect of the presence of MEA on the rate of CO2 uptake was also studied. The study showed that MEA can enhance CO2 absorption above the ideal values in the case of [P66614][123Triz] and [P66614][Bentriz], while in the other systems the mixtures behave ideally. A comparison of the effect of MEA addition with that of water addition to these superbase ILs showed similar trends in each case for the individual ILs studied.
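
The phrase "enhance CO2 absorption above the ideal values" can be read as a comparison against ideal mixing, where the mixture's uptake is the mole-fraction-weighted mean of the pure-component uptakes. A back-of-envelope sketch, with all numbers hypothetical rather than the paper's measurements:

```python
# Ideal-mixing baseline for CO2 uptake of an equimolar (IL + MEA) mixture.
x_il, x_mea = 0.5, 0.5
uptake_il = 0.95    # mol CO2 per mol of pure IL (hypothetical)
uptake_mea = 0.50   # mol CO2 per mol of pure MEA (hypothetical)
ideal = x_il * uptake_il + x_mea * uptake_mea

measured = 0.85     # mol CO2 per mol of mixture (hypothetical)
enhancement = measured - ideal
print(f"ideal = {ideal:.3f}, measured = {measured:.3f}, "
      f"enhancement = {enhancement:+.3f} mol CO2 / mol")
```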

Relevance: 30.00%

Abstract:

It is now well established that some patients who are diagnosed as being in a vegetative state or a minimally conscious state show reliable signs of volition that may only be detected by measuring neural responses. A pertinent question is whether these patients are also capable of logical thought. Here, we validate an fMRI paradigm that can detect the neural fingerprint of reasoning processes and, moreover, can confirm whether a participant derives logical answers. We demonstrate the efficacy of this approach in a physically non-communicative patient who had been shown to engage in mental imagery in response to simple auditory instructions. Our results demonstrate that this individual retains a remarkable capacity for higher cognition, engaging in the reasoning task and deducing logical answers. We suggest that this approach is suitable for detecting residual reasoning ability using neural responses and could readily be adapted to assess other aspects of cognition.

Relevance: 30.00%

Abstract:

Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices, in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey of bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity to account for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function). If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly, in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, that cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, an equilibrium exists in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout. Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decides alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory for the bigger party. The smaller party may also win, provided its relative size is not too small; a larger share of self-delusional voters in the minority party decreases this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006), and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
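
To make the chapter-2 mechanism concrete, below is a minimal numerical sketch of the asymmetric equilibrium in a linear symmetric Cournot duopoly with a thinking cost: each firm either pays k and best-responds, or plays a default quantity q0 for free. All parameter values, including q0, are invented for illustration and are not taken from the thesis.

```python
# Demand P = a - b*(q1 + q2), constant unit cost c (all values assumed).
a, b, c, q0 = 10.0, 1.0, 2.0, 1.0

def profit(qi, qj):
    return (a - b * (qi + qj) - c) * qi

def best_response(qj):
    return max(0.0, (a - c - b * qj) / (2 * b))

def asymmetric_equilibrium(k):
    """Check if 'firm 1 thinks, firm 2 plays default q0' is an equilibrium."""
    q1 = best_response(q0)
    # Firm 1: paying the thinking cost beats staying at the default.
    thinker_ok = profit(q1, q0) - k >= profit(q0, q0)
    # Firm 2: best-responding would not cover the thinking cost.
    defaulter_ok = profit(q0, q1) >= profit(best_response(q1), q1) - k
    return thinker_ok and defaulter_ok

# Only the intermediate thinking cost supports the asymmetric equilibrium.
for k in (0.1, 2.0, 8.0):
    print(f"k = {k}: (think, default) equilibrium? {asymmetric_equilibrium(k)}")
```

With these numbers, k = 0.1 fails (the defaulter would rather think too), k = 8.0 fails (thinking never pays), and k = 2.0 supports the asymmetric outcome, mirroring the "intermediate levels of thinking cost" result described above.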

Relevance: 30.00%

Abstract:

…years 8 months) and 24 older (M = 7 years 4 months) children. A Monitoring Process Model (MPM) was developed and tested in order to ascertain at which component process of the MPM age differences would emerge. The MPM had four components: (1) assessment; (2) evaluation; (3) planning; and (4) behavioural control. The MPM was assessed directly using a referential communication task in which the children were asked to make a series of five Lego buildings (a baseline condition and one building for each MPM component). Children listened to instructions from one experimenter while a second experimenter in the room (a confederate) interjected varying levels of verbal feedback in order to assist the children and control the component of the MPM. This design allowed us to determine at which "stage" of processing children would most likely have difficulty monitoring themselves in this social-cognitive task. Developmental differences were observed for the evaluation, planning and behavioural control components, suggesting that older children were able to be more successful with the more explicit metacomponents. Interestingly, however, there was no age difference in terms of Lego task success in the baseline condition, suggesting that without the intervention of the confederate younger children monitored the task about as well as older children. This pattern of results indicates that the younger children were disrupted by the feedback rather than helped. On the other hand, the older children were able to incorporate the feedback offered by the confederate into a plan of action. Another aim of this study was to assess similar processing components to those investigated by the MPM Lego task in a more naturalistic observation. Together, the use of the Lego task (a social-cognitive task) and the naturalistic social interaction allowed for the appraisal of cross-domain continuities and discontinuities in monitoring behaviours. In this vein, analyses were undertaken in order to ascertain whether or not successful performance in the MPM Lego task would predict cross-domain competence in the more naturalistic social interchange. Indeed, success in the two latter components of the MPM (planning and behavioural control) was related to overall competence in the naturalistic task. However, this cross-domain prediction was not evident for all levels of the naturalistic interchange, suggesting that the nature of the feedback a child receives is an important determinant of response competency. Individual difference measures reflecting the children's general cognitive capacity (Working Memory and Digit Span) and verbal ability (vocabulary) were also taken in an effort to account for more variance in the prediction of task success. However, these individual difference measures did not serve to enhance the prediction of task performance in either the Lego task or the naturalistic task. Similarly, parental responses to questionnaires pertaining to their child's temperament and social experience also failed to increase prediction of task performance. On-line measures of the children's engagement, positive affect and anxiety also failed to predict competence ratings.

Relevance: 30.00%

Abstract:

Traditional psychometric theory and practice classify people according to broad ability dimensions but do not examine how these mental processes occur. Hunt and Lansman (1975) proposed a 'distributed memory' model of cognitive processes with emphasis on how to describe individual differences, based on the assumption that each individual possesses the same components. It is in the quality of these components that individual differences arise. Carroll (1974) expands Hunt's model to include a production system (after Newell and Simon, 1973) and a response system. He developed a framework of factor analytic (FA) factors for the purpose of describing how individual differences may arise from them. This scheme is to be used in the analysis of psychometric tests. Recent advances in the field of information processing are examined and include: 1) Hunt's development of differences between subjects designated as high or low verbal, 2) Miller's pursuit of the magic number seven, plus or minus two, 3) Ferguson's examination of transfer and abilities and 4) Brown's discoveries concerning strategy teaching and retardates. In order to examine possible sources of individual differences arising from cognitive tasks, traditional psychometric tests were searched for a suitable perceptual task which could be varied slightly and administered to gauge learning effects produced by controlling independent variables. It also had to be suitable for analysis using Carroll's framework. The Coding Task (a symbol substitution test) found in the Performance Scale of the WISC was chosen. Two experiments were devised to test the following hypotheses. 1) High verbals should be able to complete significantly more items on the Symbol Substitution Task than low verbals (Hunt & Lansman, 1975). 2) Having previous practice on a task, where strategies involved in the task may be identified, increases the amount of output on a similar task (Carroll, 1974). 3) There should be a substantial decrease in the amount of output as the load on STM is increased (Miller, 1956). 4) Repeated measures should produce an increase in output over trials, and where individual differences in previously acquired abilities are involved, these should differentiate individuals over trials (Ferguson, 1956). 5) Teaching slow learners a rehearsal strategy would improve their learning such that it would resemble that of normals on the same task (Brown, 1974). In the first experiment 60 subjects were divided into high and low verbal, further divided randomly into a practice group and a nonpractice group. Five subjects in each group were assigned randomly to work on a five-, seven- or nine-digit code throughout the experiment. The practice group was given three trials of two minutes each on the practice code (designed to eliminate transfer effects due to symbol similarity) and then three trials of two minutes each on the actual SST task. The nonpractice group was given three trials of two minutes each on the same actual SST task. Results were analyzed using a four-way analysis of variance. In the second experiment 18 slow learners were divided randomly into two groups: one group receiving planned strategy practice, the other receiving random practice. Both groups worked on the actual code to be used later in the actual task. Within each group subjects were randomly assigned to work on a five-, seven- or nine-digit code throughout. Both practice and actual tests consisted of three trials of two minutes each.
Results were analyzed using a three-way analysis of variance. It was found in the first experiment that 1) high or low verbal ability by itself did not produce significantly different results; however, when in interaction with the other independent variables, a difference in performance was noted. 2) The previous-practice variable was significant over all segments of the experiment: those who received previous practice were able to score significantly higher than those without it. 3) Increasing the size of the load on STM severely restricts performance. 4) The effect of repeated trials proved to be beneficial; generally, gains were made on each successive trial within each group. 5) In the second experiment, slow learners who were allowed to practice randomly performed better on the actual task than subjects who were taught the code by means of a planned strategy. Upon analysis using the Carroll scheme, individual differences were noted in the ability to develop strategies of storing, searching and retrieving items from STM, and in adopting necessary rehearsals for retention in STM. While these strategies may benefit some, it was found that for others they may be harmful. Temporal aspects and perceptual speed were also found to be sources of variance within individuals. Generally it was found that the largest single factor influencing learning on this task was the repeated measures. What enables gains to be made varies with individuals. There are environmental factors, specific abilities, strategy development, previous learning, amount of load on STM, and perceptual and temporal parameters which influence learning, and these have serious implications for educational programs.
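
As a rough illustration of the kind of factorial analysis described above, here is a hedged sketch of a between-subjects ANOVA on synthetic symbol-substitution scores (verbal ability x practice x STM load; the repeated-trials factor is omitted for brevity). The data, effect sizes, and column names are invented; only the 2 x 2 x 3 design with five subjects per cell mirrors the text.

```python
# Factorial ANOVA sketch with statsmodels on simulated data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
design = [(v, p, l) for v in ("high", "low")
                    for p in ("practice", "none")
                    for l in (5, 7, 9)
                    for _ in range(5)]            # 5 subjects per cell = 60 total
df = pd.DataFrame(design, columns=["verbal", "practice", "load"])
# Simulated scores: output falls with STM load, rises with prior practice.
df["output"] = (40 - 2.5 * df["load"]
                + 5.0 * (df["practice"] == "practice")
                + rng.normal(scale=3.0, size=len(df)))

model = smf.ols("output ~ C(verbal) * C(practice) * C(load)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```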

Relevance: 30.00%

Abstract:

This study was undertaken to examine traditional forms of literacy and the newest form of literacy: technology. Students who have trouble reading traditional forms of literacy tend to have lower self-esteem. This research intended to explore whether students with reading difficulties and, therefore, lower self-esteem could use Social Networking Technologies, including text messaging, Facebook, email, blogging, MySpace, or Twitter, to help improve their self-esteem, in a field where spelling mistakes and grammatical errors are commonplace, if not encouraged. A collective case study was undertaken based on surveys, individual interviews, and gathered documents from 3 students 9-13 years old. The data collected in this study were analyzed and interpreted using qualitative methods. These cases were individually examined for themes, which were then analyzed across the cases to examine points of convergence and divergence in the data. The research found that students with reading difficulties do not necessarily have poor self-esteem, as prior research has suggested (Carr, Borkowski, & Maxwell, 1991; Feiler & Logan, 2007; Meece, Wigfield, & Eccles, 1990; Pintrich & DeGroot, 1990; Pintrich & Garcia, 1991). All of the participants who had reading difficulties were found, both through interviews and the CFSEI-3 self-esteem test (Battle, 2002), to have average self-esteem, although their parents all stated that their child felt poorly about their academic abilities. The research also found that using Social Networking Technologies helped improve the self-esteem of the majority of the participants, both socially and academically.

Relevance: 30.00%

Abstract:

Photosynthesis is a process in which electromagnetic radiation is converted into chemical energy. Photosystems capture photons with chromophores and transfer their energy to reaction centers, using chromophores as a medium. In the reaction center, the excitation energy is used to perform chemical reactions. Knowledge of chromophore site energies is crucial to the understanding of excitation energy transfer pathways in photosystems, and the ability to compute the site energies in a fast and accurate manner is mandatory for investigating how protein dynamics affect the site energies and, ultimately, energy pathways over time. In this work we developed two software frameworks designed to optimize the calculation of chromophore site energies within a protein environment: the first for performing quantum mechanical energy optimizations on molecules, and the second for computing site energies of chromophores in a fast and accurate manner using the polarizability embedding method. The two frameworks allow for the fast and accurate calculation of chromophore site energies within proteins, ultimately allowing the effect of protein dynamics on energy pathways to be studied. We use these frameworks to compute the site energies of the eight chromophores in the reaction center of photosystem II (PSII) using a 1.9 Å resolution X-ray structure of PSII. We compare our results to conflicting experimental data obtained from both isolated intact PSII core preparations and the minimal reaction center preparation of PSII, and find our work more supportive of the former.
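
As a rough indication of how an environment shifts a chromophore's site energy, the sketch below evaluates a first-order electrostatic shift from fixed point charges. This is a deliberate simplification: the frameworks described above use the polarizability embedding method, which additionally includes an induced (polarizable) environment response. All charges and coordinates here are invented (atomic units throughout).

```python
# First-order electrostatic site-energy shift from fixed point charges:
# Delta E = sum_a dq_a * sum_e q_e / |r_a - r_e|  (atomic units).
import numpy as np

# Ground-to-excited-state difference charges on chromophore atoms (hypothetical).
dq = np.array([0.10, -0.10, 0.05, -0.05])
r_chromo = np.array([[0.0, 0.0, 0.0],
                     [1.5, 0.0, 0.0],
                     [0.0, 1.5, 0.0],
                     [1.5, 1.5, 0.0]])

# Protein environment represented as fixed point charges (hypothetical).
q_env = np.array([0.4, -0.4])
r_env = np.array([[5.0, 0.0, 0.0],
                  [5.0, 1.5, 0.0]])

# Pairwise distances between chromophore atoms and environment charges.
dist = np.linalg.norm(r_chromo[:, None, :] - r_env[None, :, :], axis=-1)
shift_hartree = np.sum(dq[:, None] * q_env[None, :] / dist)
print(f"site-energy shift ~ {shift_hartree * 27.2114:.4f} eV")
```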

Relevance: 30.00%

Abstract:

This case study, comprising three articles, examines the various sources of explanation for the gender pay gap among professors at a large Canadian research university. The first article analyses gender differences in "market" premiums using data from a survey of professors conducted in 2002. A correspondence analysis yields a two-factor solution in which the second factor clearly opposes professors who received a premium to those who did not. Gender is strongly associated with this factor, with the category "woman" falling on the side of the axis associated with the absence of market premiums. Logistic regression results confirm that sector of activity, frequency of research contracts, the importance attached to salary, and rank combined with seniority are related to the presence of market premiums, as proposed by the hypotheses. However, even after controlling for these relationships, women remain nearly three times less likely to have been awarded market premiums than their male counterparts. Overall, the results suggest that, in a context where salaries are set by collective agreement, the re-individualisation of the pay determination process, in particular the award of market premiums to university professors, can favour the reappearance of gender pay gaps. The second article draws on administrative data covering the years 1997 to 2006. It analyses the respective contributions of four pay components to the gender pay gap: base salary, access to the rank of full professor, access to market premiums and Canada Research Chairs, and the amounts received. The components vary in their degree of formalisation, which makes it possible to test the hypothesis that the size of the gender pay gap varies with the degree of formalisation of the pay components. We also determine the extent to which the gender gap on the various pay components varies with the relative representation of women professors within units. The results show that the size of the gender differences varies with the degree of formalisation of pay practices. Moreover, after controls, pay is lower in units where women are strongly represented. The final article examines the mechanisms that may lead to a gender gap in access to market premiums among the institution's professors. The processes by which these salary supplements are awarded are examined through interviews with 17 administrators at all hierarchical levels of the institution and across a variety of academic units. The results suggest that gender differences may be linked to specific features of the award process and to an unequal distribution of premiums across units with high female representation. Overall, the results show that the gender pay gap among professors at this university is not fully explained by differences in the individual characteristics of men and women.
The analysis reveals that the gap lies in gender differences in access to market premiums and Canada Research Chairs and, to a lesser extent, to the rank of full professor. No difference is observed in base salary or in the amounts of the salary premiums received, whether these are "market" premiums or are attached to a Canada Research Chair. Moreover, pay is found to be lower in the units where women are best represented. The observed gender-differentiated access to market premiums may be linked to certain organisational processes that reduce the likelihood of awards going to women. Women may be particularly disadvantaged in this award system, for several reasons. Gender differences in individuals' willingness or ability to negotiate their salary are evoked and assumed by some administrators. Limited access to information about the premium policy may reduce the probability that women seek these salary supplements. Unit heads, who are mostly men, may be biased in favour of male professors in their evaluations if they tend to favour those who resemble them. It is also possible that the heads of units where women are best represented did not receive information about market premiums, or that disciplinary traditions made them reluctant to request premiums.
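
A hedged sketch of the kind of logistic regression reported in the first article: modeling the probability of receiving a market premium from gender and the covariates named above. The data are simulated and all variable names and coefficients are placeholders, not the study's survey measures or estimates.

```python
# Logistic regression sketch with statsmodels on simulated survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "applied_sector": rng.integers(0, 2, n),
    "contracts": rng.poisson(2.0, n),       # research contracts held
    "seniority": rng.integers(1, 30, n),    # years at rank
})
# Simulate premium receipt with a gender penalty built in (hypothetical).
logit_p = (-2.0 + 0.8 * df["applied_sector"] + 0.3 * df["contracts"]
           + 0.05 * df["seniority"] - 1.0 * df["female"])
df["premium"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("premium ~ female + applied_sector + contracts + seniority",
                data=df).fit(disp=0)
print(np.exp(fit.params))   # odds ratios; a female OR well below 1 mirrors the finding
```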