990 results for low level


Relevance: 60.00%

Abstract:

Construal Level Theory (CLT) is a social-cognitive theory whose central claim is that the greater the temporal distance, the more people rely on high-level construals, and the more those high-level construals shape judgment, decision making, and related cognitive processes; low-level construals show the mirror pattern. Over the past ten years the theory has developed rapidly, and consumer-behavior research has paid close attention to this trend: in 2007, half of the papers in the Journal of Consumer Psychology focused on construal topics and called for further research in the area. Our research therefore addresses the application of Construal Level Theory to consumer behavior. The first part comprises three experiments designed to examine questions raised by the prior literature, focusing on research methods, operational definitions, and the distinction between construal-level factors and construal-level events. The results showed that the effect of construal level may be a continuous variable, with a continuous trend as time passes. Temporal distance may influence construal level not as a semantic (verbal) cue but as an internal one, since the same results were obtained even when no verbal information about temporal distance was given. The second part focuses on a simplified model of sequential decision making, involving two successive decisions, to detect whether and how the first decision affects the second. The first temporal distance was manipulated through the decision situation itself, and the second through the interval between the two decision situations. Although our design could not capture the detailed weight changes among construal-level factors, a logical hypothesis can be formed on a "one down, the other up" basis.
The results showed that construal-level factors can affect memory: the more strongly the factors operated, the larger the resulting error. The first and second temporal distances affected the second decision interactively, supporting the hypothesis. Overall, the research explores the application of Construal Level Theory in consumer-behavior research. The conclusions suggest that different product and consumption periods may require different marketing strategies from a long-term perspective, and that temporal distance should be controlled for in marketing analysis even when no explicit time information appears in the materials.

Relevance: 60.00%

Abstract:

Color has an unresolved role in the rapid processing of natural scenes, and temporal changes in the color effect may partly account for the ongoing debate. In addition, the distinction between localized and unlocalized information has not been addressed directly in previous color studies. Here we present two experiments that investigate whether color contributes to the categorization of briefly flashed natural images, and whether its contribution is mediated by time and by low-level information. By controlling the interval between target and mask stimuli, Experiment 1 tested the hypothesis that color facilitates the early stage of scene perception and that the effect decays in later processing. Experiment 2 examined how randomizing local phase information influenced color's advantage over grayscale. Together, the results suggest that color does enhance natural scene categorization at short exposure times. Furthermore, the effect of color was stable between 12 and 120 ms and was not accounted for by structures organized by localized information. We therefore conclude that color contributes throughout rapid scene categorization and does not depend on localized information. The present study thus fills a gap in previous research, and its results contribute to a deeper understanding of the role of color in natural scene perception.

Relevance: 60.00%

Abstract:

Since the mid-1980s, the mechanisms of transfer of training between cognitive subskills that rest on the same body of declarative knowledge have received considerable attention. The dominant account is the theory of common elements (Singley & Anderson, 1989), which predicts little or no transfer between subskills within the same domain when knowledge is used in different ways, even though the subskills may rest on a common body of declarative knowledge. This idea is termed the "principle of use specificity of knowledge" (Anderson, 1987). Although the principle has gained empirical support in domains such as elementary geometry (Neves & Anderson, 1981) and computer programming (McKendree & Anderson, 1987), it has been challenged by research (Pennington et al., 1991; 1995) that found substantially larger amounts of transfer between subskills that rest on shared declarative knowledge but share few procedures (production rules). Pennington et al. (1995) provided evidence that these larger amounts of transfer are due to the elaboration of declarative knowledge. Our research tests these two competing explanations by examining transfer between two subskills within the domains of elementary geometry and elementary algebra, and the influence of learning method ("learning from examples" versus "learning from declarative text") and subject ability (high, middle, low) on the amount of transfer. In elementary geometry, the two subskills of "generating proofs" (GP) and "explaining proofs" (EP), which rest on the declarative knowledge of "theorems on the properties of parallelograms," share few procedures. In elementary algebra, the two subskills of "calculation" (C) and "simplification" (S), which rest on the declarative knowledge of "multiplication of radicals," share more procedures. The results demonstrate the following: 1.
Within elementary geometry, although little transfer was found between GP and EP across all subjects, different patterns emerged when subject ability was considered. For high-ability subjects, significant positive transfer was found from EP to GP, while little transfer was found in the opposite direction (i.e., from GP to EP). For low-ability subjects, significant positive transfer was found from EP to GP, while significant negative transfer was found in the opposite direction. For middle-ability subjects, little transfer was found between the two subskills. 2. Within elementary algebra, significant positive transfer was found from S to C, while significant negative transfer was found in the opposite direction (i.e., from C to S), across all subjects. The same pattern occurred for middle- and low-ability subjects; for high-ability subjects, no transfer was found between the two subskills. 3. In both domains, learning method had little influence on transfer of training between subskills. These results cannot be attributed either to common procedures or to elaboration of declarative knowledge alone. A synthetic account is needed, one that takes three elements into consideration: (1) the relations between the procedures of the subskills; (2) the elaboration of declarative knowledge; and (3) the elaboration of procedural knowledge. Setting aside individual subject factors, transfer of training between subskills can be predicted and explained by analyzing the relations between the procedures of the two subskills.
When particular subjects are considered, however, the explanation must also include the subjects' elaboration of declarative and procedural knowledge, especially the influence of that elaboration on performing the other subskill. The finding that learning method had little influence on transfer can be explained by the fact that the two methods did not affect the level of declarative knowledge. Protocol analysis provided evidence supporting these hypotheses. We conclude that, in order to explain the mechanisms of transfer of training between cognitive subskills resting on the same body of declarative knowledge, three elements must be considered together: (1) the relations between the procedures of the subskills; (2) the elaboration of declarative knowledge; and (3) the elaboration of procedural knowledge.

Relevance: 60.00%

Abstract:

In this study, bibliometric methods were used to investigate 2,274 papers on child developmental and educational psychology published during the ten years from 1979 to 1988 in 14 psychological journals and 97 other scientific journals. The quantitative and qualitative analyses yielded the following results: 1. The years 1979-1988 were a period of rapid development and prosperity for child developmental and educational psychology in China, during which more papers were published and more fields covered than in the previous thirty years. The number of publications increased to a peak in 1983 and 1984 and declined from 1985 onward. This trend was found to result from a decrease in popular-science introductions to psychology, reflecting a surge of public interest in psychology that appeared in 1983 and began to cool in 1985. Meanwhile, the number of research reports increased steadily until 1987 and decreased markedly in 1988, especially in the fields of cognitive and social development. Several explanations are possible: Piagetian studies are becoming fewer, and the weakening of Piaget's influence may predict a period of standstill in developmental psychology in China; as research becomes more difficult, researchers have grown more cautious in writing up their reports; and cuts in funding and staff may also account for the smaller number of publications in 1988. As these factors persist, the number of papers is not expected to increase in the near future. The field of thinking and memory is closely connected with artificial intelligence, and the decline in these two fields should be taken seriously. 2. The types of research work were classified according to how their problems were raised.
The trends show that deepening studies, which represent a comparatively higher level of exploration, are becoming fewer, while repeated studies and creative studies are becoming more numerous over the years; this fact deserves further analysis. Considerable progress can be seen in research methods. The methods currently used are mainly experiments, psychological measurement and assessment, and theoretical reasoning, with a rapid increase in research using scales. The Wechsler Intelligence Scale for Children, the Binet Scale, and the Bayley Scale have been revised and standardized, and Chinese researchers have also developed several good scales of their own, some of which are valuable and need to be standardized. In the papers investigated, the amount of citation is significantly lower than the world average, as well as the average for Chinese scientific literature as a whole. Most cited papers are in Chinese or English, and only a small proportion were published within the most recent five years; the renewal of the cited literature remained at a low level throughout the decade. The scale of the work is reflected in the total number of subjects used over the ten years: 362,665. Many studies concentrated on the 4-16 age range. Compared with the previous thirty years, the age range was much enlarged, with a good number of studies on the preschool, school, and adolescent periods. The study of infants aged 0-3 has so far been a weak point and is a field to which Chinese developmental psychologists should pay more attention. Progress in the use of statistics is one of the most evident aspects of the development of research in child developmental and educational psychology. One tendency to be aware of and avoid is putting the cart before the horse: seeking ever more sophisticated statistical methods while neglecting the meaning of the research problems. 3.
Citation analysis was used to identify scholars with great influence in the field of child developmental and educational psychology. Among the frequently cited and well-known scholars, 31 are Chinese researchers and 12 are Western psychologists. The authoritative journal for child developmental and educational psychology is Acta Psychologica Sinica.

Relevance: 60.00%

Abstract:

In low-level vision, representations of scene properties such as shape and albedo are very high dimensional, as they must describe complicated structures. The approach proposed here is to let the image itself bear as much of the representational burden as possible. In many situations, scene and image are closely related and a functional relationship can be found between them: the scene information is represented with reference to the image, and the functional specifies how to translate the image into the associated scene. We illustrate the use of this representation for encoding shape information, and show that it has appealing properties such as locality and slow variation across space and scale. These properties provide a way of improving shape estimates coming from other sources of information, such as stereo.

Relevance: 60.00%

Abstract:

The goal of low-level vision is to estimate an underlying scene from an observed image. Real-world scenes (e.g., albedos or shapes) can be very complex, conventionally requiring high-dimensional representations that are hard to estimate and store. We propose a low-dimensional representation, called a scene recipe, that relies on the image itself to describe complex scene configurations. Shape recipes are an example: these are the regression coefficients that predict bandpassed shape from bandpassed image data. We describe the benefits of this representation and show two uses illustrating its properties: (1) we improve stereo shape estimates by learning shape recipes at low resolution and applying them at full resolution; (2) shape recipes implicitly contain information about lighting and materials, which we exploit for material segmentation.
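As a rough sketch of the idea (not the authors' actual implementation — the linear per-band form and the function names here are illustrative assumptions), a shape recipe for one subband can be modeled as a least-squares fit that predicts the bandpassed shape from the bandpassed image; only the tiny coefficient pair, not the shape itself, needs to be stored:

```python
import numpy as np

def learn_recipe(band_image, band_shape):
    """Fit recipe coefficients (a, b) so that band_shape ≈ a*band_image + b.
    The coefficient pair, not the high-dimensional shape, is the representation."""
    a, b = np.polyfit(band_image.ravel(), band_shape.ravel(), 1)
    return a, b

def apply_recipe(recipe, band_image):
    """Reconstruct the bandpassed shape from the image band using the recipe."""
    a, b = recipe
    return a * band_image + b
```

Because a recipe is just a few numbers per subband, it can be learned from a low-resolution stereo estimate and then applied to the corresponding full-resolution image band, which is the spirit of use (1) above.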

Relevance: 60.00%

Abstract:

This article describes a model for including scene/context priors in attention guidance. In the proposed scheme, visual context information is made available early in the visual processing chain, modulating the saliency of image regions and providing an efficient shortcut for object detection and recognition. The scene is represented by a low-dimensional global description obtained from low-level features. These global scene features are then used to predict the probability that the target object is present in the scene, along with its likely location and scale, before the image is explored.
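A minimal sketch of the modulation step, under simplifying assumptions of my own (the prior is reduced to a Gaussian over image location, and the combination is a simple pointwise product; the actual model predicts location and scale from learned global features):

```python
import numpy as np

def gaussian_location_prior(h, w, mu_y, mu_x, sigma):
    """Toy context prior: global scene features are assumed to have
    predicted an expected target position (mu_y, mu_x) with spread sigma."""
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((ys - mu_y) ** 2 + (xs - mu_x) ** 2) / (2 * sigma ** 2))

def contextual_saliency(saliency, prior):
    """Modulate a bottom-up saliency map by the context prior and
    renormalize, concentrating attention where the target is expected."""
    s = saliency * prior
    return s / (s.sum() + 1e-12)
```

Even with uniform bottom-up saliency, the modulated map peaks where the context prior places the target, which is the shortcut the abstract describes.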

Relevance: 60.00%

Abstract:

This thesis examines the problem of an autonomous agent learning a causal world model of its environment. Previous approaches to learning causal world models have concentrated on environments that are too "easy" (deterministic finite state machines) or too "hard" (containing much hidden state). We describe a new domain for learning: environments with manifest causal structure. In such environments the agent has an abundance of perceptions of its environment; specifically, it perceives almost all the relevant information it needs to understand the environment. Many environments of interest have manifest causal structure, and we show that an agent can learn the manifest aspects of these environments quickly using straightforward learning techniques. We present a new algorithm to learn a rule-based causal world model from observations in the environment. The learning algorithm includes (1) a low-level rule-learning algorithm that converges on a good set of specific rules, (2) a concept-learning algorithm that learns concepts by finding completely correlated perceptions, and (3) an algorithm that learns general rules. In addition, this thesis examines the problem of finding a good expert from a sequence of experts. Each expert has an "error rate"; we wish to find an expert with a low error rate, but each expert's error rate and the distribution of error rates are unknown. A new expert-finding algorithm is presented, and an upper bound on the expected error rate of the chosen expert is derived.
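The expert-finding setting can be sketched as follows; this is only an illustration of the problem, not the thesis's algorithm (the sample size and acceptance threshold below are arbitrary, whereas the thesis derives bounds for them):

```python
import random

def observe_error(p, rng):
    """One evaluation of an expert whose unknown true error rate is p:
    returns True when the expert errs on this example."""
    return rng.random() < p

def find_good_expert(error_rates, trials, threshold, rng=None):
    """Scan a sequence of experts (abstracted here by their true error
    rates); test each on `trials` examples and return the index of the
    first whose empirical error rate falls below `threshold`."""
    rng = rng or random.Random(0)
    for i, p in enumerate(error_rates):
        errors = sum(observe_error(p, rng) for _ in range(trials))
        if errors / trials < threshold:
            return i
    return None
```

The difficulty the thesis addresses is choosing `trials` and `threshold` without knowing the error-rate distribution, while still bounding the expected error rate of the expert returned.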

Relevance: 60.00%

Abstract:

We constructed a parallelizing compiler that uses partial evaluation to obtain efficient parallel object code from very high-level, data-independent source programs. On several important scientific applications, the compiler attains parallel performance equal to or better than the best observed results from manual restructuring of the code. This is the first attempt to capitalize on partial evaluation's ability to expose low-level parallelism. New static scheduling techniques are used to exploit the fine-grained parallelism of the computations. The compiler maps the computation graph resulting from partial evaluation onto the Supercomputer Toolkit, an eight-processor VLIW parallel computer.
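The core mechanism can be sketched in a few lines (the names and the toy operator set are my own; the real compiler targets the Supercomputer Toolkit): running a data-independent program on symbolic inputs partially evaluates it into a flat dataflow graph, whose independent nodes are exactly the fine-grained parallelism a static scheduler can exploit.

```python
def trace(expr_fn, n_inputs):
    """Partially evaluate a data-independent program: run it on symbolic
    inputs and record every arithmetic operation as a dataflow node.
    Node k has id n_inputs + k and is stored as (op, left_id, right_id)."""
    graph = []

    class Sym:
        def __init__(self, ident):
            self.ident = ident
        def _emit(self, op, other):
            graph.append((op, self.ident, other.ident))
            return Sym(n_inputs + len(graph) - 1)
        def __add__(self, other):
            return self._emit('+', other)
        def __mul__(self, other):
            return self._emit('*', other)

    result = expr_fn(*[Sym(i) for i in range(n_inputs)])
    return graph, result.ident
```

For `lambda a, b, c: a * b + b * c` the trace is `[('*', 0, 1), ('*', 1, 2), ('+', 3, 4)]`: the two multiplies have no dependency between them and can be scheduled on different processors.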

Relevance: 60.00%

Abstract:

Conventional parallel computer architectures do not provide support for non-uniformly distributed objects. In this thesis, I introduce sparsely faceted arrays (SFAs), a new low-level mechanism for naming regions of memory, or facets, on different processors in a distributed, shared memory parallel processing system. Sparsely faceted arrays address the disconnect between the global distributed arrays provided by conventional architectures (e.g. the Cray T3 series), and the requirements of high-level parallel programming methods that wish to use objects that are distributed over only a subset of processing elements. A sparsely faceted array names a virtual globally-distributed array, but actual facets are lazily allocated. By providing simple semantics and making efficient use of memory, SFAs enable efficient implementation of a variety of non-uniformly distributed data structures and related algorithms. I present example applications which use SFAs, and describe and evaluate simple hardware mechanisms for implementing SFAs. Keeping track of which nodes have allocated facets for a particular SFA is an important task that suggests the need for automatic memory management, including garbage collection. To address this need, I first argue that conventional tracing techniques such as mark/sweep and copying GC are inherently unscalable in parallel systems. I then present a parallel memory-management strategy, based on reference-counting, that is capable of garbage collecting sparsely faceted arrays. I also discuss opportunities for hardware support of this garbage collection strategy. I have implemented a high-level hardware/OS simulator featuring hardware support for sparsely faceted arrays and automatic garbage collection. I describe the simulator and outline a few of the numerous details associated with a "real" implementation of SFAs and SFA-aware garbage collection. Simulation results are used throughout this thesis in the evaluation of hardware support mechanisms.
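The lazy-allocation semantics of an SFA can be sketched as follows (a software illustration with invented names; the thesis describes hardware mechanisms, and real facets are regions of node-local memory):

```python
class SparselyFacetedArray:
    """A virtual array distributed over n_nodes processing elements.
    A node's facet (its local region of memory) is allocated only on
    first touch, so only participating nodes consume memory."""

    def __init__(self, n_nodes, facet_size):
        self.n_nodes = n_nodes
        self.facet_size = facet_size
        self.facets = {}  # node id -> lazily allocated local storage

    def facet(self, node):
        if not 0 <= node < self.n_nodes:
            raise IndexError(node)
        # Lazy allocation: the virtual array names all nodes,
        # but storage appears only where it is actually used.
        return self.facets.setdefault(node, [0] * self.facet_size)

    def allocated_nodes(self):
        """The set of nodes a collector must visit to reclaim this SFA."""
        return set(self.facets)
```

The `allocated_nodes` bookkeeping is the part that motivates the garbage-collection discussion: reclaiming an SFA requires knowing which nodes ever allocated a facet for it.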

Relevance: 60.00%

Abstract:

CO hydrogenation to light alkenes was carried out over manganese-promoted iron catalysts prepared by coprecipitation and sol-gel techniques. Adding manganese in the range of 1-4 mol% by coprecipitation notably improved the percentage of C2=~C4= alkenes in the products, but was not as effective when the sol-gel method was employed. XRD and H2-TPR measurements showed that the catalyst samples giving high C2=~C4= yields consisted of ultrafine particles of pure α-(Fe1-xMnx)2O3 and were highly effective in lowering the reduction temperature of the iron oxide. Furthermore, these samples displayed a deep extent of carburization and surface behavior different from the others in temperature-programmed surface carburization (TPSC) tests; this different surface behavior was considered closely related to the evolution of surface oxygen. It is also suggested that, for the catalysts with high C2=~C4= yields, the turnover rate of the active sites could be kept relatively high owing to the improved reducing and carburizing capabilities. Consequently, a large number of sites would be available for CO adsorption/dissociation, together with an enhanced carburizing environment on the catalyst surface, so that hydrogenation could be suppressed to a relatively low level. As a result, the percentage of light alkenes in the products could be raised.

Relevance: 60.00%

Abstract:

Objective: The aim of this study was to quantify mast cells at different time intervals after partial Achilles tendon rupture in rats treated with low-level laser therapy (LLLT). Background data: There is a high incidence of lesions and ruptures of the Achilles tendon, which can take weeks or even months to heal completely. As mast cells assist in the repair phase of healing, and LLLT has favorable effects on this tissue-repair process, studying the effect of this modality on the number of mast cells in the ruptured tendon is relevant. Methods: Sixty Wistar rats were subjected to partial Achilles tendon rupture by direct trauma and randomized into 10 groups: groups treated with an 80 mW gallium-aluminum-arsenide infrared diode laser (continuous wave, 2.8 W/cm2 power density, 40 J/cm2 energy density, 1.12 J total energy), and simulation (sham) groups. Both were subdivided according to the histological assessment period of the sample (6 h, 12 h, 24 h, 2 days, or 3 days after rupture) for quantification of mast cells in the Achilles tendon. Results: The LLLT group presented a greater number of mast cells at 6 h, 12 h, 24 h, 2 days, and 3 days after rupture than the simulation groups, but differences between assessment periods were detected only in the simulation group. Conclusions: LLLT was shown to increase the number of mast cells at all assessment periods compared with the simulation groups.

Relevance: 60.00%

Abstract:

Security policies are increasingly being implemented by organisations. Policies are mapped to device configurations to enforce them, a task typically performed manually by network administrators. The development and management of these enforcement policies is a difficult and error-prone task. This thesis describes the development and evaluation of an off-line firewall policy parser and validation tool. It provides the system administrator with a textual interface and the vendor-specific low-level languages they trust and are familiar with, together with the support of an off-line compiler tool. The tool was created using Microsoft C#.NET and the Microsoft Visual Studio Integrated Development Environment (IDE), which provided an object-oriented environment for building a flexible and extensible system, as well as simple Web and Windows prototyping facilities for creating GUI front-end applications for testing and evaluation. A CLI is provided for more experienced users, but the tool is also designed to be easily integrated into GUI-based applications for non-expert users. The system was evaluated from a custom-built GUI application that can create test firewall rule sets containing synthetic rules, so as to supply a variety of experimental conditions and record various performance metrics. The validation tool was designed pragmatically, with the needs of the network administrator in mind. Modularity was important because of the fast-changing nature of the network device languages being processed; an object-oriented approach was taken for maximum changeability and extensibility, and a flexible tool was developed to serve the possible needs of different types of users. System administrators want low-level, CLI-based tools that they can trust and use easily from scripting languages, whereas inexperienced users may prefer a more abstract, high-level GUI or wizard with an easier learning process.
Built around these ideas, the tool proved to be a usable and complementary addition to the many network policy-based systems currently available. It has a flexible design and comprehensive functionality, in contrast to some other tools that work across multiple vendor languages but do not implement a deep range of options for any of them. It complements existing systems, such as policy compliance tools and abstract policy analysis systems. Its validation algorithms were evaluated for both completeness and performance, and the tool was found to process large firewall policies correctly in just a few seconds. A framework for a policy-based management system, with which the tool would integrate, is also proposed; it is based around a vendor-independent XML repository of device configurations that could bring together existing policy management and analysis systems.
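One check such a validator typically runs is shadowing detection. The sketch below is language-neutral and uses an invented rule model (the actual tool parses vendor-specific languages and covers many more fields): a rule is shadowed when an earlier rule with the opposite action already matches all of its traffic.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    action: str    # 'permit' or 'deny'
    src: tuple     # source as (ipv4_address_as_int, prefix_length)
    dport: range   # destination port range

def _covers(a, b):
    """True when rule a's source network and port range contain rule b's."""
    net_a, len_a = a.src
    net_b, len_b = b.src
    if len_a > len_b:                       # a's prefix must be no longer
        return False
    shift = 32 - len_a
    if (net_a >> shift) != (net_b >> shift):
        return False                        # different networks
    return (b.dport.start >= a.dport.start and
            b.dport.stop <= a.dport.stop)

def shadowed(rules):
    """Indices of rules that can never fire: an earlier rule with the
    opposite action already covers all of their traffic."""
    return sorted({j for j, r in enumerate(rules)
                   for i in range(j)
                   if rules[i].action != r.action and _covers(rules[i], r)})
```

For example, a `permit 10.10.0.0/16 port 80` rule placed after a `deny 10.0.0.0/8 any-port` rule is flagged as unreachable; such ordering anomalies are exactly what an off-line validator can surface before the configuration is deployed.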

Relevance: 60.00%

Abstract:

Chungui Lu, Olga A. Koroleva, John F. Farrar, Joe Gallagher, Chris J. Pollock, and A. Deri Tomos (2002). Rubisco small subunit, chlorophyll a/b-binding protein and sucrose:fructan-6-fructosyl transferase gene expression and sugar status in single barley leaf cells in situ. Cell type specificity and induction by light. Plant Physiology, 130(3), pp. 1335-1348. Sponsorship: BBSRC. RAE2008.

Relevance: 60.00%

Abstract:

The Spodoptera exigua nucleopolyhedrovirus (SeMNPV) is a natural pathogen of larval populations of S. exigua and is the basis of a bioinsecticide marketed in Spain for the biological control of this pest in pepper crops. Recent studies have shown that transmission of the virus to the offspring (vertical transmission) occurs frequently and could be a desirable trait for field applications. The present work discusses the advisability of using a mixture of two genotypes, SeAl1 (vertical transmission) and SeG25 (horizontal transmission), in particular proportions, in order to improve on the characteristics each exhibits separately and thus exploit both transmission routes. The pathogenicity (LC50) of genotype SeG25, and of any of the mixtures containing 25, 50, or 75% of it, was higher than that of isolate SeAl1. However, no significant differences in virulence (mean time to death) or productivity (OBs/larva) were observed between the genotypes or their mixtures. The capacity of each genotype and its mixtures to produce covert infections was also evaluated by subjecting S. exigua larvae to sublethal infections with the virus. Transcripts of the early viral gene ie0 were detected by RT-PCR in adults surviving infections caused by genotype SeG25 and by all the mixtures. Two other viral genes expressed at early and late stages of baculovirus infection (DNA polymerase and polyhedrin) were also tested, but no transcripts were detected in any case.