933 results for Normative intuition
Abstract:
Decisions about noisy stimuli require evidence integration over time. Traditionally, evidence integration and decision making are described as a one-stage process: a decision is made when evidence for the presence of a stimulus crosses a threshold. Here, we show that one-stage models cannot explain psychophysical experiments on feature fusion, where two visual stimuli are presented in rapid succession. Paradoxically, the second stimulus biases decisions more strongly than the first one, contrary to predictions of one-stage models and intuition. We present a two-stage model where sensory information is integrated and buffered before it is fed into a drift diffusion process. The model is tested in a series of psychophysical experiments and explains both accuracy and reaction time distributions. © 2012 Rüter et al.
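The two-stage idea described above can be sketched numerically: sensory input is first integrated into a leaky buffer, and the buffered read-out then sets the drift of a diffusion-to-bound process. This is an illustrative toy, not the authors' fitted model; the frame duration, leak time constant, gain, and bound are all assumptions chosen for the demonstration.

```python
import numpy as np

def leaky_buffer(stimulus, dt=0.001, tau=0.05):
    """Stage 1: integrate the stimulus trace into a leaky sensory buffer."""
    y = 0.0
    for s in stimulus:
        y += dt * (-y / tau + s)
    return y

def ddm_choice(drift, rng, dt=0.001, noise=1.0, bound=1.0, max_t=3.0):
    """Stage 2: drift diffusion to one of two bounds; returns +1 or -1."""
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return 1 if x > 0 else -1

# Feature fusion: two brief opposing frames; frame 1 votes +1, frame 2 votes -1.
dt, n = 0.001, 40                                  # 40 ms per frame
stimulus = np.concatenate([np.full(n, +1.0), np.full(n, -1.0)])

drift = leaky_buffer(stimulus, dt=dt)              # buffer read out after frame 2
rng = np.random.default_rng(0)
choices = [ddm_choice(50 * drift, rng) for _ in range(500)]
# The leak discounts the earlier frame, so the buffered evidence (and hence
# most choices) follows the *second* frame, as in the feature-fusion paradox.
```

The key mechanism is only the exponential leak: evidence arriving earlier decays by exp(-Δ/τ) before read-out, so the later frame dominates the buffered total.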
Abstract:
Deciding whether a set of objects are the same or different is a cornerstone of perception and cognition. Surprisingly, no principled quantitative model of sameness judgment exists. We tested whether human sameness judgment under sensory noise can be modeled as a form of probabilistically optimal inference. An optimal observer would compare the reliability-weighted variance of the sensory measurements with a set size-dependent criterion. We conducted two experiments, in which we varied set size and individual stimulus reliabilities. We found that the optimal-observer model accurately describes human behavior, outperforms plausible alternatives in a rigorous model comparison, and accounts for three key findings in the animal cognition literature. Our results provide a normative footing for the study of sameness judgment and indicate that the notion of perception as near-optimal inference extends to abstract relations.
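The optimal-observer rule described above can be sketched directly; the criterion value and per-item noise levels below are illustrative assumptions, not the fitted set-size-dependent criterion from the paper.

```python
import numpy as np

def weighted_same_judgment(measurements, sigmas, criterion):
    """Report 'same' iff the reliability-weighted variance of the noisy
    measurements falls below a criterion (for the optimal observer the
    criterion depends on set size; here it is a free parameter)."""
    m = np.asarray(measurements, dtype=float)
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2   # reliability = 1/variance
    mean = np.sum(w * m) / np.sum(w)                 # reliability-weighted mean
    var = np.sum(w * (m - mean) ** 2) / np.sum(w)    # reliability-weighted variance
    return var < criterion

sigmas = [2.0, 5.0, 2.0, 5.0]   # mixed per-item reliabilities, as in the experiments
```

With identical measurements the weighted variance is zero ("same"); widely spread measurements such as `[0, 30, 60, 90]` far exceed any moderate criterion ("different").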
Abstract:
This paper presents an analysis of the slow-peaking phenomenon, a pitfall of low-gain designs that imposes basic limitations on large regions of attraction in nonlinear control systems. The phenomenon is best understood on a chain of integrators perturbed by a vector field u·p(x, u) that satisfies p(x, 0) = 0. Because small controls (or low-gain designs) are sufficient to stabilize the unperturbed chain of integrators, it may seem that smaller controls, which attenuate the perturbation u·p(x, u) in a large compact set, can be employed to achieve larger regions of attraction. This intuition is false, however, and peaking may cause a loss of global controllability unless severe growth restrictions are imposed on p(x, u). These growth restrictions are expressed as a higher-order condition with respect to a particular weighted dilation related to the peaking exponents of the nominal system. When this higher-order condition is satisfied, an explicit control law is derived that achieves global asymptotic stability of x = 0. This stabilization result is extended to more general cascade nonlinear systems in which the perturbation p(x, v)·v, with v = (ξ, u)ᵀ, contains the state ξ and the control u of a stabilizable subsystem ξ̇ = a(ξ, u). As an illustration, a control law is derived that achieves global stabilization of the frictionless ball-and-beam model.
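The flavour of slow peaking can be seen already on the unperturbed double integrator; the sketch below is a toy illustration of the trade-off, not the paper's general construction. With the low-gain law u = -ε²x₁ - 2εx₂ (closed-loop poles at -ε, twice), the trajectory from x(0) = (0, 1) is x₁(t) = t·e^(-εt), which transiently peaks to 1/(eε) at t = 1/ε: shrinking the gain enlarges the transient peak without bound.

```python
import math

# Double integrator x1' = x2, x2' = u with the low-gain feedback
# u = -eps**2 * x1 - 2*eps * x2. From x(0) = (0, 1) the closed form is
# x1(t) = t * exp(-eps * t); the state converges, but its transient peak
# 1/(e*eps), reached at t = 1/eps, blows up as the gain eps is lowered.

def peak_x1(eps, t_max=200.0, steps=200_000):
    """Maximum of x1(t) = t*exp(-eps*t) over a dense time grid."""
    best = 0.0
    for k in range(steps + 1):
        t = t_max * k / steps
        best = max(best, t * math.exp(-eps * t))
    return best

p_small_gain = peak_x1(0.05)   # smaller gain -> larger peak (about 7.36)
p_large_gain = peak_x1(0.5)    # larger gain  -> smaller peak (about 0.74)
```

This is exactly why "use an even smaller control" fails as a route to larger regions of attraction: the slow transient itself grows as the gain shrinks.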
Abstract:
This paper reports on fuel design optimization of a PWR operating in a self-sustainable Th-233U fuel cycle. A Monte Carlo simulated annealing method was used to identify the fuel assembly configuration with the most attractive breeding performance. Previous studies showed that breeding may be achieved by employing a heterogeneous seed-blanket fuel geometry. The arrangement of seed and blanket pins within the assemblies may be determined by varying the design parameters on the basis of the basic reactor physics phenomena that affect breeding. However, the number of free parameters may still be prohibitively large for a systematic exploration of the design space for the optimal solution. Therefore, the Monte Carlo annealing algorithm for neutronic optimization is applied to identify the most favorable design. The objective of simulated annealing optimization is to find a set of design parameters that maximizes a given performance function (such as the relative period of net breeding) under specified constraints (such as fuel cycle length). The first objective of the study was to demonstrate that the simulated annealing algorithm leads to the same fuel pin arrangement obtained in the previous studies, which used only basic physics phenomena as guidance for optimization. In the second part of this work, the simulated annealing method was used to optimize the fuel pin arrangement in a much larger fuel assembly, where basic physics intuition does not yield a clearly optimal configuration. The simulated annealing method was found to be very efficient in selecting the optimal design in both cases. In the future, this method will be used to optimize fuel assembly designs with a larger number of free parameters in order to determine the most favorable trade-off between breeding performance and core average power density.
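The annealing loop itself can be sketched with a toy stand-in objective, counting seed/blanket interfaces along a 1D row of pins, in place of the actual neutronic performance function; swap moves preserve the seed-pin inventory, loosely mimicking a fixed-loading constraint. Everything here (objective, cooling schedule, move set) is an illustrative assumption, not the authors' code.

```python
import math
import random

def interfaces(pins):
    """Toy performance function: number of seed/blanket neighbour pairs
    (a crude proxy for the heterogeneity said to favour breeding)."""
    return sum(a != b for a, b in zip(pins, pins[1:]))

def anneal(pins, rng, steps=20_000, t_hot=2.0, t_cold=0.01):
    """Simulated annealing with pin-swap moves and geometric cooling."""
    cur = list(pins)
    best = list(cur)
    for k in range(steps):
        temp = t_hot * (t_cold / t_hot) ** (k / steps)   # geometric cooling
        i, j = rng.sample(range(len(cur)), 2)            # swap keeps seed count fixed
        cand = list(cur)
        cand[i], cand[j] = cand[j], cand[i]
        delta = interfaces(cand) - interfaces(cur)
        if delta >= 0 or rng.random() < math.exp(delta / temp):
            cur = cand                                   # accept uphill, or downhill w.p. e^(delta/T)
            if interfaces(cur) > interfaces(best):
                best = list(cur)
    return best

start = [1] * 5 + [0] * 5                # 5 "seed" and 5 "blanket" pins in a row
best = anneal(start, random.Random(7))   # drifts toward the alternating arrangement
```

For this toy objective the optimum is the alternating row (9 interfaces), which basic intuition predicts; the point of the method is that the same loop works when no such intuition is available.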
Abstract:
This paper provides an introduction to the topic of optimization on manifolds. The approach taken uses the language of differential geometry; however, we choose to emphasise the intuition behind the concepts and the structures that are important in generating practical numerical algorithms, rather than the technical details of the formulation. A number of algorithms can be applied to solve such problems, and we discuss steepest descent and Newton's method in some detail, as well as referencing the more important of the other approaches. There is a wide range of potential applications that we are aware of; we briefly discuss these applications, and explain one or two in more detail. © 2010 Springer-Verlag Berlin Heidelberg.
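A minimal instance of the steepest-descent idea on a manifold, assuming the standard project-step-retract recipe on the unit sphere for the Rayleigh quotient f(x) = xᵀAx (here run as ascent); the matrix and step size are illustrative, not taken from the paper.

```python
import numpy as np

def sphere_steepest_ascent(A, x0, step=0.1, iters=500):
    """Steepest ascent of f(x) = x^T A x on the unit sphere: project the
    Euclidean gradient onto the tangent space at x, take a step, then
    retract back onto the sphere by normalisation."""
    x = x0 / np.linalg.norm(x0)
    for _ in range(iters):
        g = 2.0 * A @ x                # Euclidean gradient of x^T A x
        g_tan = g - (x @ g) * x        # projection onto the tangent space at x
        x = x + step * g_tan           # step along the tangent direction
        x = x / np.linalg.norm(x)      # retraction (renormalisation)
    return x

A = np.diag([3.0, 1.0, 0.5])
x = sphere_steepest_ascent(A, np.array([1.0, 1.0, 1.0]))
# x approaches the dominant eigenvector (here +/- e1), maximising x^T A x.
```

The two manifold-specific ingredients the paper's algorithms formalise are visible here: the tangent-space projection (replacing the Euclidean gradient) and the retraction (replacing the Euclidean update).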
Abstract:
Contaminated land remediation has traditionally been viewed as a sustainable practice because it reduces urban sprawl and mitigates risks to human beings and the environment. However, in the emerging green and sustainable remediation (GSR) movement, remediation practitioners have increasingly recognized that remediation operations have their own environmental footprint. GSR calls for sustainable behaviour in the remediation industry, for which a series of white papers and guidance documents have been published by various government agencies and professional organizations. However, the relationship between the adoption of such sustainable behaviour and its underlying driving forces has not been studied. This study aims to contribute to sustainability science by rendering a better understanding of what drives organizational behaviour in adopting sustainable practices. Factor analysis (FA) and structural equation modelling (SEM) were used to investigate the relationship between sustainable practices and the key factors driving these behaviour changes in the remediation field. A conceptual model of sustainability in the environmental remediation industry was developed on the basis of stakeholder and institutional theories. The FA classified sustainability considerations, institutional promoting and impeding forces, and stakeholders' influence. Subsequently, the SEM showed that institutional promoting forces had significant positive effects on the adoption of sustainability measures, and institutional impeding forces had significant negative effects. Stakeholder influences were found to have only a marginal direct effect on the adoption of sustainability; however, they exert a significant influence on institutional promoting forces, thus rendering a high total effect (i.e. direct effect plus indirect effect) on the adoption of sustainability.
This study suggests that sustainable remediation represents an advanced sustainable practice, which may only be fully endorsed by both internal and external stakeholders after its regulatory, normative and cognitive components are institutionalized. © 2014 Elsevier Ltd. All rights reserved.
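The distinction the study draws between a marginal direct effect and a high total effect is simple path-model arithmetic: the total effect is the direct path plus the product of coefficients along the indirect route. The coefficients below are made up for illustration only, not the study's estimates.

```python
# Hypothetical path coefficients (illustrative, not the study's estimates):
direct = 0.10               # stakeholder influence -> adoption (marginal direct path)
path_a = 0.60               # stakeholder influence -> institutional promoting forces
path_b = 0.50               # promoting forces -> adoption
indirect = path_a * path_b  # indirect effect routed through promoting forces
total = direct + indirect   # small direct effect, yet a high total effect
```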
Abstract:
When cooled or compressed sufficiently rapidly, a liquid vitrifies into a glassy amorphous state. Vitrification in a dense liquid is associated with jamming of the particles. For hard spheres, the density and degree of order in the final structure depend on the compression rate: simple intuition suggests, and previous computer simulation demonstrates, that slower compression results in states that are both denser and more ordered. In this work, we use the Lubachevsky-Stillinger algorithm to generate a sequence of structurally arrested hard-sphere states by varying the compression rate.
Abstract:
Allen's theory of time is highly regarded for being intuitive and easy to understand, but it has shortcomings, such as its inability to handle continuously changing events. This paper proposes a more general temporal-theory framework that extends Allen's theory. The framework has the following features: (1) it subsumes Allen's theory; (2) time intervals can be constructed from time points, and both intervals and points can be handled; (3) it provides a constraint propagation algorithm based on a 2D graphical representation of sets of temporal elements, which is both efficient and easy to visualize.
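Allen's theory, which this abstract builds on, classifies every pair of intervals into one of 13 basic relations. A minimal sketch of that classification from interval endpoints (the function name, tuple convention, and `i`-suffix naming for inverses are my own):

```python
def allen_relation(a, b):
    """Classify the Allen relation between intervals a = (a1, a2) and
    b = (b1, b2), assuming a1 < a2 and b1 < b2. Returns one of the 13
    basic relations; inverse relations carry an 'i' suffix."""
    a1, a2 = a
    b1, b2 = b
    if a2 < b1:
        return "before"
    if b2 < a1:
        return "beforei"
    if a2 == b1:
        return "meets"
    if b2 == a1:
        return "meetsi"
    if a1 == b1 and a2 == b2:
        return "equal"
    if a1 == b1:                       # same start, different end
        return "starts" if a2 < b2 else "startsi"
    if a2 == b2:                       # same end, different start
        return "finishes" if a1 > b1 else "finishesi"
    if b1 < a1 and a2 < b2:            # a strictly inside b
        return "during"
    if a1 < b1 and b2 < a2:            # b strictly inside a
        return "duringi"
    return "overlaps" if a1 < b1 else "overlapsi"
```

The paper's extension, relations involving time points and point-constructed intervals, goes beyond this interval-only case.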
Abstract:
The nature of the distinction between conscious and unconscious knowledge is a core issue in the implicit learning field. Furthermore, the phenomenological experience associated with having knowledge is central to the conscious or unconscious status of that knowledge. Consistently, Dienes and Scott (2005) measured the conscious or unconscious status of structural knowledge using subjective measures. Believing that one is purely guessing when in fact one knows indicates unconscious knowledge. But unconscious structural knowledge can also be associated with feelings of intuition or familiarity. In this thesis, we explored whether phenomenological feelings, like familiarity, associated with unconscious structural knowledge could be used, paradoxically, to exert conscious control over the use of the knowledge, and whether people could acquire knowledge of repetition structures. We also investigated the neural correlates of the awareness of knowing, as measured phenomenologically. In study one, subjects were trained on two grammars and then asked to endorse strings from only one of the grammars. Subjects also rated how familiar each string felt and reported whether or not they had used familiarity to make their grammaticality judgment. We found subjects could endorse the strings of just one grammar and ignore the strings from the other. Importantly, when subjects said they were using familiarity, the rated familiarity for test strings consistent with their chosen grammar was greater than that for strings from the other grammar. Familiarity, subjectively defined, is sensitive to intentions and can play a key role in strategic control. In study two, we manipulated the structural characteristics of strings and explored whether participants could learn repetition structures in the grammatical strings. We measured phenomenology again, along with ERPs.
Deviant letters of ungrammatical strings violating the repetition structure elicited the N2 component; we took this to be an indication of knowledge, whether conscious or not. Strings which were attributed to conscious categories (rules and recollection) rather than phenomenology associated with unconscious structural knowledge (guessing, intuition and familiarity) elicited the P300 component. Different waveforms provided evidence for the neural correlates of different phenomenologies associated with knowledge of an artificial grammar.
Abstract:
The research objectives were to investigate the psychological structure of employees' organizational commitments (OCs) and its antecedents, and to examine the relative effects of employees' OCs on their performance. In order to uncover the nature of OCs in depth, standard methods such as in-depth interviews, focus groups, semi-open questionnaires, and standard questionnaires were employed. In the data analysis, common statistical methods, such as multivariate analysis of variance, cross-table analysis, and factor analysis, were used, as well as more advanced ones, such as confirmatory factor analysis and the path analysis of SEM. The paper comprises six chapters: 1) In the first chapter, previous empirical studies that examined the structures, antecedents, correlates, and/or consequences of organizational commitment in China and Western countries were summarized. This summary covers most of the respected researchers' work in this field, such as H. S. Becker, B. Buchanan, L. W. Porter, G. Ritzer, H. M. Trice, J. A. Alutto, L. G. Hrebiniak, R. T. Mowday, J. P. Meyer, N. J. Allen, G. W. McGee, R. C. Ford, R. Eisenberger, etc. Then three theoretical hypotheses were put forward, as follows: ① in China, OCs should be multidimensional psychological structures, meaning there should exist more than one type of OC; ② there should be different antecedents to different OCs; ③ employees with different types of OC should perform differently in their work. Finally, the theoretical and practical significance was discussed. 2) In the second chapter, great effort was devoted to investigating the OC types. Firstly, in-depth interviews with managers and employees, semi-open questionnaires, and other methods were used in the pilot research to gather qualitative material. Then an OC questionnaire was designed to obtain quantitative data in about 20 enterprises, including state-owned, collective-owned, wholly foreign-funded, and joint-venture firms.
During the revision of this questionnaire, about 5000 respondents were surveyed. After factor analysis, the data showed that there are five types of OC in China, which were respectively named Affective Commitment, Normative Commitment, Ideal Commitment, Economic Commitment, and Choice Commitment. Thirdly, confirmatory factor analysis was used to successfully confirm this five-factor model. Finally, Cronbach's α and test-retest correlations indicate that the questionnaire is reliable. Since the factor analysis results had shown its construct validity, a simple criterion-related validity study was conducted. 3) In order to investigate the correlations between the different OCs and employee performance, and the different antecedents of OC, five other questionnaires (the Employee Satisfaction Questionnaire, Perceived Organizational Support Questionnaire, Social Exchange Questionnaire, Altruism Scale, and Leader Confidence Scale) were revised in the third chapter. 4) In the fourth chapter, numerous correlation and cross-table analyses were conducted to examine the relationship between the different OCs and 10 performance variables, indicating that employees with different OCs show different performance on those variables, such as altruism. 5) In the fifth chapter, correlation analysis, multivariate analysis of variance, and the path analysis of SEM were used to investigate the antecedents of OC. A satisfactory model showing the relationship between OCs and their antecedents was confirmed. 6) In the last chapter, all of the research on OC, and its limitations, were summarized.
Abstract:
This research studied the self-efficacy and job mechanisms of insurance salesmen in China using in-depth interviews, focus groups, semi-open questionnaires, and standard questionnaires. About 1300 respondents were surveyed. Factor analysis, correlation analysis, regression analysis, and structural equation modelling were used for data analysis. Four conclusions were drawn. First, the self-efficacy of insurance salesmen in China consists of eight factors: interview skills, manner, persistence, control of emotion, plans and comments, mastery of knowledge, intuition and judgement, and preparation. Second, the relationships between self-efficacy and other job variables, such as achievement motivation, work incentive, coping strategy, view of ability, performance, goal setting, colleague relationships, the way of feedback from leaders, job satisfaction, and exertion, were tested, and all the correlations were significant. Third, regression analysis was used to test the relationship between self-efficacy and the antecedent variables. The result was that four antecedent variables entered the equation (p < .05): self-oriented achievement motivation, stability of emotion, performance, and colleague relationships. Finally, verified by path analysis, the research posits a comprehensive job model for insurance salesmen, in which self-efficacy is the most important factor. On the one hand, self-efficacy has dominant effects on the consequent variables, such as mastery goal, performance-approach goal, job satisfaction, exertion, and coping strategy; on the other hand, self-efficacy was found to mediate the relationship between the antecedent variables and the consequent variables.
Abstract:
Problem solving is one of the basic processes of human cognition, and heuristic strategy is the key to human problem solving; hence, studies of heuristic strategy are of great importance in cognitive psychology. Current studies on heuristics in problem solving may be summarized under the following headings: the nature and structure of heuristics, problem structure and representation, expert knowledge and expert intuition, the nature and role of imagery, and social cognition and social learning. The present study deals with the nature and structure of heuristics. The Solitaire problem was used in our experiments. Both the traditional experimental method and computer simulation were used to study the nature and structure of heuristics. Through a series of experiments, the knowledge involved in Solitaire problem solving was summarized and its metastrategy worked out; the metastrategy was then tested by computer simulation and experimental verification.
Abstract:
Most knowledge representation languages are based on classes and taxonomic relationships between classes. Taxonomic hierarchies without defaults or exceptions are semantically equivalent to a collection of formulas in first order predicate calculus. Although designers of knowledge representation languages often express an intuitive feeling that there must be some advantage to representing facts as taxonomic relationships rather than first order formulas, there are few, if any, technical results supporting this intuition. We attempt to remedy this situation by presenting a taxonomic syntax for first order predicate calculus and a series of theorems that support the claim that taxonomic syntax is superior to classical syntax.
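The equivalence noted above, taxonomic links as universally quantified conditionals, can be illustrated with a toy subsumption check: each link C ⊑ D abbreviates ∀x. C(x) → D(x), so subsumption queries reduce to reachability in the class graph. The class names and dictionary encoding below are illustrative, not the paper's formal taxonomic syntax.

```python
def subsumes(links, sub, sup):
    """Does every instance of `sub` count as a `sup` under the taxonomy?
    `links` maps each class to its direct superclasses; the query is a
    simple reachability search over those links."""
    frontier, seen = [sub], {sub}
    while frontier:
        cls = frontier.pop()
        if cls == sup:
            return True
        for parent in links.get(cls, ()):
            if parent not in seen:
                seen.add(parent)
                frontier.append(parent)
    return False

# Each link abbreviates a first-order formula, e.g. forall x. dog(x) -> mammal(x).
taxonomy = {"dog": ["mammal"], "mammal": ["animal"], "robot": ["artifact"]}
```

The computational point of the paper is that inference over this restricted taxonomic form can be cheaper than inference over the equivalent unrestricted first-order formulas.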
Abstract:
The digital divide has been, at least until very recently, a major theme in policy as well as interdisciplinary academic circles across the world, as well as at a collective global level, as attested by the World Summit on the Information Society. Numerous research papers and volumes have attempted to conceptualise the digital divide and to offer reasoned prescriptive and normative responses. What has been lacking in many of these studies, it is submitted, is a rigorous negotiation of moral and political philosophy, the result being a failure to situate the digital divide - or rather, more widely, information imbalances - in a holistic understanding of social structures of power and wealth. In practice, prescriptive offerings have been little more than philanthropic in tendency, whether private or corporate philanthropy. Instead, a theory of distributive justice is required, one that recovers the tradition of emancipatory, democratic struggle. This much has been said before. What is new here, however, is that the paper suggests a specific formula, the Rawls-Tawney theorem, as a solution at the level of analytical moral-political philosophy. Building on the work of John Rawls and R. H. Tawney, this avoids both the Charybdis of Marxism and the Scylla of liberalism. It delineates some of the details of the meaning of social justice in the information age. Promulgating a conception of isonomia, which while egalitarian eschews arithmetic equality (the equality of misery), the paper hopes to contribute to the emerging ideal of communicative justice in the media-saturated, post-industrial epoch.
Abstract:
In this thesis I study how the legal system reacts (or ought to react) to unforeseen circumstances that interfere with the functioning of long term contractual relationships. More precisely, I investigate whether mandatory renegotiation would be an appropriate tool to guarantee the flexibility that long-term relationships require. Furthermore, after having analyzed the instruments that our legal system offers to preserve long-term contractual relationships, I explore the solutions adopted by other legal systems. This comparative analysis helps to formulate normative proposals to improve the functioning of our legal system.