55 results for Level of processing
Abstract:
Three experiments examined the extent to which attitudes following majority and minority influence are resistant to counter-persuasion. In Experiment 1, participants’ attitudes were measured after being exposed to two messages arguing opposite positions (an initial pro-attitudinal message and a subsequent, counter-attitudinal counter-message). Attitudes following minority endorsement of the initial message were more resistant to a (second) counter-message than attitudes following majority endorsement of the initial message. Experiment 2 replicated this finding when the message direction was reversed (counter-attitudinal initial message and pro-attitudinal counter-message) and showed that the level of message elaboration mediated the amount of attitude resistance. Experiment 3 included conditions where participants received only the counter-message and showed that minority-source participants had resisted the second message (counter-message) rather than being influenced by it. These results show that minority influence induces systematic processing of its arguments, which leads to attitudes that are resistant to counter-persuasion.
Abstract:
The effects of attentional modulation on activity within the human visual cortex were investigated using magnetoencephalography. Chromatic sinusoidal stimuli were used to evoke activity from the occipital cortex, with attention directed either toward or away from the stimulus using a bar-orientation judgment task. For five observers, global magnetic field power was plotted as a function of time from stimulus onset. The major peak of each function occurred at about 120 ms latency and was well modeled by a current dipole near the calcarine sulcus. Independent component analysis (ICA) on the non-averaged data for each observer also revealed one component of calcarine origin, the location of which matched that of the dipolar source determined from the averaged data. For two observers, ICA revealed a second component near the parieto-occipital sulcus. Although no effects of attention were evident using standard averaging procedures, time-varying spectral analyses of single trials revealed that the main effect of attention was to alter the level of oscillatory activity. Most notably, a sustained increase in alpha-band (7-12 Hz) activity of both calcarine and parieto-occipital origin was evident. In addition, calcarine activity in the range of 13-21 Hz was enhanced, while calcarine activity in the range of 5-6 Hz was reduced. Our results are consistent with the hypothesis that attentional modulation affects neural processing within the calcarine and parieto-occipital cortex by altering the amplitude of alpha-band activity and other natural brain rhythms. © 2003 Elsevier Inc. All rights reserved.
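The contrast drawn above, where attention effects are invisible in standard averaging but emerge in single-trial spectral analyses, can be illustrated with a minimal simulation (a hedged sketch, not the authors' pipeline; the trial count, sampling rate, and 10 Hz alpha frequency are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250                       # assumed sampling rate (Hz)
t = np.arange(0, 1.0, 1 / fs)
n_trials = 200

# Each trial carries a 10 Hz (alpha-band) oscillation whose phase is
# random from trial to trial, i.e. induced rather than phase-locked.
phases = rng.uniform(0, 2 * np.pi, n_trials)
trials = np.array([np.sin(2 * np.pi * 10 * t + p) for p in phases])

# Conventional averaging cancels the non-phase-locked oscillation ...
evoked = trials.mean(axis=0)

# ... whereas averaging single-trial power preserves it.
power_per_trial = (trials ** 2).mean(axis=1)  # mean-squared amplitude
induced_power = power_per_trial.mean()

print(round(induced_power, 3))  # → 0.5, the power of a unit sinusoid
print(np.abs(evoked).max())     # near zero: the average washes out
```

This is the standard rationale for time-varying spectral analysis of non-averaged data: oscillatory amplitude changes survive squaring before averaging but not averaging before squaring.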
Abstract:
An increasing number of neuroimaging studies are concerned with the identification of interactions or statistical dependencies between brain areas. Dependencies between the activities of different brain regions can be quantified with functional connectivity measures such as the cross-correlation coefficient. An important factor limiting the accuracy of such measures is the amount of empirical data available. For event-related protocols, the amount of data also affects the temporal resolution of the analysis. We use analytical expressions to calculate the amount of empirical data needed to establish whether a certain level of dependency is significant when the time series are autocorrelated, as is the case for biological signals. These analytical results are then contrasted with estimates from simulations based on real data recorded with magnetoencephalography during a resting-state paradigm and during the presentation of visual stimuli. Results indicate that, for broadband signals, 50-100 s of data is required to detect a true underlying cross-correlation coefficient of 0.05. This corresponds to a resolution of a few hundred milliseconds for typical event-related recordings. The required time window increases for narrow band signals as frequency decreases. For instance, approximately 3 times as much data is necessary for signals in the alpha band. Important implications can be derived for the design and interpretation of experiments to characterize weak interactions, which are potentially important for brain processing.
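The qualitative point above, that autocorrelation inflates the data needed to detect a weak correlation, can be sketched numerically. This is an illustrative calculation using Bartlett's variance approximation for two independent AR(1) series, not the paper's exact analytical expressions; the function name, the AR(1) model, the 5% two-sided threshold, and the sampling rate are all assumptions:

```python
import math

def required_samples(rho, phi1, phi2, z=1.96):
    """Samples needed for a true correlation `rho` between two AR(1)
    signals (lag-1 coefficients phi1, phi2) to exceed a two-sided 5%
    significance threshold, using Bartlett's approximation:
        var(r) ~= (1 / N) * (1 + phi1 * phi2) / (1 - phi1 * phi2)
    """
    inflation = (1 + phi1 * phi2) / (1 - phi1 * phi2)
    return math.ceil((z / rho) ** 2 * inflation)

# White (uncorrelated) samples: N = (1.96 / 0.05)^2 ~= 1537
n_white = required_samples(0.05, 0.0, 0.0)
# Strongly autocorrelated, narrow-band-like signals need far more data
n_ar = required_samples(0.05, 0.9, 0.9)
print(n_white)  # → 1537
print(n_ar, n_ar / n_white)
```

Under these toy assumptions the autocorrelated case requires roughly an order of magnitude more samples, consistent with the abstract's observation that the required window grows for narrow-band signals.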
Abstract:
Serial and parallel interconnection of photonic devices is integral to the construction of any all-optical data processing system. This thesis presents results from a series of experiments centering on the use of the nonlinear-optical loop mirror (NOLM) switch in architectures for the manipulation and generation of ultrashort pulses. Detailed analysis of soliton switching in a single NOLM and a cascade of two NOLMs is performed, focusing on the primary limitations to device operation, the effect of cascading on the amplitude response, and the impact of switching on the characteristics of incident pulses. By using relatively long input pulses, device failure due to stimulated Raman generation is postponed to demonstrate multiple-peaked switching for the first time. It is found that while cascading leads to a sharpening of the overall switching characteristic, pulse spectral and temporal integrity is not significantly degraded, and emerging pulses retain their essential soliton character. In addition, by including an asymmetrically placed in-fibre Bragg reflector as a wavelength-selective loss element in the basic NOLM configuration, both soliton self-switching and dual-wavelength control-pulse switching are spectrally quantised. Results are presented from a novel dual-wavelength laser configuration generating pulse trains with an ultra-low rms inter-pulse-stream timing jitter level of 630 fs, enabling application in ultrafast switching environments at data rates as high as 130 Gbit/s. In addition, the fibre NOLM is included in architectures for all-optical memory, demonstrating storage and logical inversion of a 0.5 kByte random data sequence; and ultrafast phase-locking of a gain-switched distributed feedback laser at 1.062 GHz, the fourteenth harmonic of the system baseband frequency.
The stringent requirements for environmental robustness of these architectures highlight the primary weaknesses of the NOLM in its fibre form and recommendations to overcome its inherent drawbacks are presented.
Abstract:
We conducted a detailed study of a case of linguistic talent in the context of autism spectrum disorder, specifically Asperger syndrome. I.A. displays language strengths at the level of morphology and syntax. Yet, despite this grammar advantage, processing of figurative language and inferencing based on context presents a problem for him. The morphology advantage for I.A. is consistent with the weak central coherence (WCC) account of autism. From this account, the presence of a local processing bias is evident in the ways in which autistic individuals solve common problems, such as assessing similarities between objects and finding common patterns, and may therefore provide an advantage in some cognitive tasks compared to typical individuals. We extend the WCC account to language and provide evidence for a connection between the local processing bias and the acquisition of morphology and grammar.
Abstract:
Metallocene catalyzed linear low density polyethylene (m-LLDPE) is a new generation of olefin copolymer. Based on the more recently developed metallocene-type catalysts, m-LLDPE can be synthesized with exactly controlled short chain branches and stereo-regular microstructure. The unique properties of these polymers have led to their applications in many areas. As a result, it is important to have a good understanding of the oxidation mechanism of m-LLDPE during melt processing in order to develop more effective stabilisation systems and continue to increase the performance of the material. The primary objectives of this work were, firstly, to investigate the oxidative degradation mechanisms of m-LLDPE polymers having different comonomer (1-octene) content during melt processing and, secondly, to examine the effectiveness of some commercial antioxidants on the stabilisation of m-LLDPE melt. A Ziegler-polymerized LLDPE (z-LLDPE) based on the same comonomer was chosen and processed under the same conditions for comparison with the metallocene polymers. The LLDPE polymers were processed using an internal mixer (torque rheometer, TR) and a co-rotating twin-screw extruder (TSE). The effects of processing variables (time, temperature) on the rheological (MI, MWD, rheometry) and molecular (unsaturation type and content, carbonyl compounds, chain branching) characteristics of the processed polymers were examined. It was found that the catalyst type (metallocene or Ziegler) and comonomer content of the polymers have a great impact on their oxidative degradation behavior (crosslinking or chain scission) during melt processing. The metallocene polymers mainly underwent chain scission at lower temperature (<220°C) but crosslinking became predominant at higher temperature for both TR and TSE processed polymers. Generally, the higher the comonomer content of the m-LLDPE, the greater the extent of chain scission that can be expected.
In contrast, crosslinking reactions were shown to be always dominant in the case of the Ziegler LLDPE. Furthermore, it is clear that the molecular weight distribution (MWD) of all LLDPE became broader after processing and tended generally to be broader at elevated temperatures and with more extrusion passes. So, it can be concluded that crosslinking and chain scission are temperature dependent and occur simultaneously as competing reactions during melt processing. Vinyl is considered to be the most important unsaturated group leading to polymer crosslinking, as its concentration in all the LLDPE decreased after processing. Carbonyl compounds were produced during LLDPE melt processing, and ketones were shown to be the most important carbonyl-containing products in all processed polymers. The carbonyl concentration generally increased with temperature and extrusion passes, and the higher carbonyl content formed in processed z-LLDPE and in m-LLDPE polymers having higher comonomer content indicates their higher susceptibility to oxidative degradation. Hindered phenol and lactone antioxidants were shown to be effective in the stabilization of m-LLDPE melt when used singly in TSE extrusion. The combination of hindered phenol and phosphite has a synergistic effect on m-LLDPE stabilization, and the phenol-phosphite-lactone mixture imparted the polymers with good stability during extrusion, especially for m-LLDPE with higher comonomer content.
Abstract:
Interpolated data are an important part of the environmental information exchange, as many variables can only be measured at discrete sampling locations. Spatial interpolation is a complex operation that has traditionally required expert treatment, making automation a serious challenge. This paper presents a few lessons learnt from INTAMAP, a project that is developing an interoperable web processing service (WPS) for the automatic interpolation of environmental data using advanced geostatistics, adopting a Service Oriented Architecture (SOA). The “rainbow box” approach we followed provides access to the functionality at a whole range of different levels. We show here how the integration of open standards, open source and powerful statistical processing capabilities allows us to automate a complex process while offering users a level of access and control that best suits their requirements. This facilitates benchmarking exercises as well as the regular reporting of environmental information without requiring remote users to have specialized skills in geostatistics.
The effective use of implicit parallelism through the use of an object-oriented programming language
Abstract:
This thesis explores translating well-written sequential programs in a subset of the Eiffel programming language - without syntactic or semantic extensions - into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical self-contained model of concurrency which enables a simplified second model for implementing the compiling process. There is a further presentation of principles that, if followed, maximise the potential levels of parallelism. Model of Concurrency. The concurrency model is designed to be a straightforward target onto which sequential programs can be mapped, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronization of objects. Further, the model is sufficiently complete that a compiler can be, and has been, practically built. Model of Compilation. The compilation model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented are based upon information hiding, sharing and containment of objects and the dividing up of methods on the basis of a command/query division. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles naturally arise from good programming practice. Summary. In summary this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e.
no parallel primitives are added, and the parallel program is modelled to execute with equivalent semantics to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
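The recursive-descent style of processing with abstract syntax trees mentioned in the compilation model can be sketched in miniature. This is an illustrative toy fragment for a two-level expression grammar, not the thesis's Eiffel compiler; all names are hypothetical:

```python
# Toy recursive-descent parser building an abstract syntax tree (AST)
# for expressions like "1+2*3": one parsing function per grammar rule.
def parse(text):
    tokens = list(text.replace(" ", ""))
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def expr():            # expr := term ('+' term)*
        nonlocal pos
        node = term()
        while peek() == "+":
            pos += 1
            node = ("+", node, term())
        return node

    def term():            # term := factor ('*' factor)*
        nonlocal pos
        node = factor()
        while peek() == "*":
            pos += 1
            node = ("*", node, factor())
        return node

    def factor():          # factor := digit
        nonlocal pos
        tok = tokens[pos]
        pos += 1
        return ("num", int(tok))

    return expr()

print(parse("1+2*3"))
# → ('+', ('num', 1), ('*', ('num', 2), ('num', 3)))
```

Placing `term` below `expr` in the call chain encodes operator precedence directly in the grammar rules, which is what makes this style a convenient front end for semantic analysis over the resulting tree.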
Abstract:
The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. Over the recent years Gaussian processes and Bayesian neural networks have come to the fore and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly-decreasing behaviour of the curves and we investigate the asymptotic behaviour of the upper bounds. The effect of the noise and the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. 
Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. In the thesis we also present an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov Chain Monte Carlo method and the evidence framework; the neural networks have been trained on the task of labelling segmented outdoor images.
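The distinction between a diagonal and a general distance matrix in the covariance function can be illustrated with a small sketch (an assumed RBF form and hypothetical names, not the thesis's exact setup). A rank-deficient general matrix makes the kernel depend only on a few linear combinations of the inputs, which is the sense in which it can uncover the effective dimensionality of the problem:

```python
import numpy as np

def rbf_kernel(X1, X2, M):
    """RBF covariance k(x, x') = exp(-0.5 (x - x')^T M (x - x'))
    with a positive semi-definite distance matrix M.  A diagonal M
    gives conventional axis-aligned lengthscales; a general M can
    also capture linear mixing of the input variables."""
    diff = X1[:, None, :] - X2[None, :, :]
    d2 = np.einsum("ijk,kl,ijl->ij", diff, M, diff)
    return np.exp(-0.5 * d2)

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))

# A rank-1 M: the covariance only 'sees' the single linear
# combination w.x of the three inputs (effective dimension 1).
w = np.array([[1.0], [2.0], [-1.0]])
M_general = w @ w.T
K = rbf_kernel(X, X, M_general)

# Two points with equal w.x are indistinguishable to the kernel:
x_a = np.array([[1.0, 0.0, 0.0]])   # w.x = 1
x_b = np.array([[0.0, 0.5, 0.0]])   # w.x = 1
print(rbf_kernel(x_a, x_b, M_general))  # → [[1.]]
```

A diagonal M with three free lengthscales cannot represent this structure unless the relevant directions happen to align with the input axes, which is the intuition behind the reported superiority of the general matrix.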
Abstract:
The proliferation of data throughout the strategic, tactical and operational areas within many organisations has provided a need for the decision maker to be presented with structured information that is appropriate for achieving allocated tasks. However, despite this abundance of data, managers at all levels in the organisation commonly encounter a condition of ‘information overload’, which results in a paucity of the correct information. Specifically, this thesis will focus upon the tactical domain within the organisation and the information needs of management who reside at this level. In doing so, it will argue that the link between decision making at the tactical level in the organisation, and low-level transaction processing data, should be through a common object model that uses a framework based upon knowledge leveraged from co-ordination theory. In order to achieve this, the Co-ordinated Business Object Model (CBOM) was created. Detailing a two-tier framework, the first tier models data based upon four interactive object models, namely, processes, activities, resources and actors. The second tier analyses the data captured by the four object models, and returns information that can be used to support tactical decision making. In addition, the Co-ordinated Business Object Support System (CBOSS) is a prototype tool that has been developed both to support the CBOM implementation and to demonstrate the functionality of the CBOM as a modelling approach for supporting tactical management decision making. Through a graphical user interface, the system allows the user to create and explore alternative implementations of an identified tactical-level process. In order to validate the CBOM, three verification tests have been completed. The results provide evidence that the CBOM framework helps bridge the gap between low level transaction data, and the information that is used to support tactical level decision making.
Abstract:
Exporting is one of the main ways in which organizations internationalize. With the more turbulent, heterogeneous, sophisticated and less familiar export environment, the organizational learning ability of the exporting organization may become its only source of sustainable competitive advantage. However, achieving a competitive level of learning is not easy. Companies must be able to find ways to improve their learning capability by enhancing the different aspects of the learning process. One of these is export memory. Building from an export information processing framework this research work particularly focuses on the quality of export memory, its determinants, its subsequent use in decision-making, and its ultimate relationship with export performance. Within export memory use, four export memory use dimensions have been discovered: instrumental, conceptual, legitimizing and manipulating. Results from the qualitative study based on the data from a mail survey with 354 responses reveal that the development of export memory quality is positively related with quality of export information acquisition, the quality of export information interpretation, export coordination, and integration of the information into the organizational system. Several company and environmental factors have also been examined in terms of their relationship with export memory use. The two factors found to be significantly related to the extent of export memory use are acquisition of export information quality and export memory quality. The results reveal that export memory quality is positively related to the extent of export memory use which in turn was found to be positively related to export performance. Furthermore, results of the study show that there is only one aspect of export memory use that significantly affects export performance – the extent of export memory use. 
This finding could mean that there is no particular type of export memory use favored since the choice of the type of use is situation specific. Additional results reveal that environmental turbulence and export memory overload have moderating effects on the relationship between export memory use and export performance.
Abstract:
The recent explosive growth in advanced manufacturing technology (AMT) and continued development of sophisticated information technologies (IT) is expected to have a profound effect on the way we design and operate manufacturing businesses. Furthermore, the escalating capital requirements associated with these developments have significantly increased the level of risk associated with initial design, ongoing development and operation. This dissertation has examined the integration of two key sub-elements of the Computer Integrated Manufacturing (CIM) system, namely the manufacturing facility and the production control system. This research has concentrated on the interactions between production control (MRP) and an AMT based production facility. The disappointing performance of such systems has been discussed in the context of a number of potential technological and performance incompatibilities between these two elements. It was argued that the design and selection of operating policies for both is the key to successful integration. Furthermore, policy decisions are shown to play an important role in matching the performance of the total system to the demands of the marketplace. It is demonstrated that a holistic approach to policy design must be adopted if successful integration is to be achieved. It is shown that the complexity of the issues resulting from such an approach required the formulation of a structured design methodology. Such a methodology was subsequently developed and discussed. This combined a first principles approach to the behaviour of system elements with the specification of a detailed holistic model for use in the policy design environment. The methodology aimed to make full use of the `low inertia' characteristics of AMT, whilst adopting a JIT configuration of MRP and re-coupling the total system to the market demands. 
This dissertation discussed the application of the methodology to an industrial case study and the subsequent design of operational policies. Consequently, a novel approach to production control resulted, a central feature of which was a move toward reduced manual intervention in the MRP processing and scheduling logic, with increased human involvement and motivation in the management of work-flow on the shopfloor. Experimental results indicated that significant performance advantages would result from the adoption of the recommended policy set.
Abstract:
The thesis is concerned with relationships between profit, technology and environmental change. Existing work has concentrated on only a few questions, treated at either micro or macro levels of analysis. And there has been something of an impasse since the neoclassical and neomarxist approaches are either in direct conflict (macro level), or hardly interact (micro level). The aim of the thesis was to bypass this impasse by starting to develop a meso level of analysis that focusses on issues largely ignored in the traditional approaches - on questions about distribution. The first questions looked at were descriptive - what were the patterns of distribution over time of the variability in types and rates of environmental change, and in particular, was there any evidence of periodization? Two case studies were used to examine these issues. The first looked at environmental change in the iron and steel industry since 1700, and the second studied pollution in five industries in the basic processing sector. It was established that environmental change has been markedly periodized, with an apparently fairly regular `cycle length' of about fifty years. The second questions considered were explanatory - whether and how this periodization could be accounted for by reference to variations in aspects of profitability and technical change. In the iron and steel industry, it was found that diffusion rates and the rate and nature of innovation were periodized on the same pattern as was environmental change. And the same sort of variation was also present in the realm of profits, as evidenced by cyclical changes in output growth. Simple theoretical accounts could be given for all the empirically demonstrable links, and it was suggested that the most useful models at this meso level of analysis are provided by structural change models of economic development.
Abstract:
Attention defines our mental ability to select and respond to stimuli, internal or external, on the basis of behavioural goals in the presence of competing, behaviourally irrelevant, stimuli. The frontal and parietal cortices are generally agreed to be involved with attentional processing, in what is termed the 'fronto-parietal' network. The left parietal cortex has been seen as the site for temporal attentional processing, whereas the right parietal cortex has been seen as the site for spatial attentional processing. There is much debate about when the modulation of the primary visual cortex occurs, whether it is modulated in the feedforward sweep of processing or modulated by feedback projections from extrastriate and higher cortical areas. MEG and psychophysical measurements were used to look at spatially selective covert attention. Dual-task and cue-based paradigms were used. It was found that the posterior parietal cortex (PPC), in particular the SPL and IPL, was the main site of activation during these experiments, and that the left parietal lobe was activated more strongly than the right parietal lobe throughout. The levels of activation in both parietal and occipital areas were modulated in accordance with attentional demands. It is likely that spatially selective covert attention is dominated by the left parietal lobe, and that this takes the form of the proposed sensory-perceptual lateralization within the parietal lobes. Another form of lateralization is proposed, termed the motor-processing lateralization, the side of dominance being determined by handedness, being reversed in left- relative to right-handers. In terms of the modulation of the primary visual cortex, it was found that it is unlikely that V1 is modulated initially; rather the modulation takes the form of feedback from higher extrastriate and parietal areas. 
This fits with the commonly accepted idea of preattentive visual processing, which in itself precludes initial modulation of V1.
Abstract:
Despite the difficulties that we have regarding the use of English in tertiary education in Turkey, we argue that it is necessary for those involved to study in the medium of English. Furthermore, significant advances have been made on this front. These efforts have been for the most part language-oriented, but also include research into needs analysis and the pedagogy of team-teaching. Considering the current situation at this level of education, however, there still seems to be more to do. And the question is, what more can we do? What further contribution can we make? Or, how can we take this process further? The purpose of the study reported here is to respond to this last question. We test the proposition that it is possible to take this process further by investigating the efficient management of transition from Turkish-medium to English-medium at the tertiary level of education in Turkey. Beyond what is achieved by only the language orientation of the EAP approach, and moving conceptually deeper than what has been achieved by the team-teaching approach, the research undertaken for the purpose of this study focuses on the idea of the discourse community that people want to belong to. It then pursues an adaptation of the essentially psycho-social approach of apprenticeship, as people become aspirants and apprentices to that discourse community. In this thesis, the researcher recognises that she cannot follow all the way through to the full implementation of her ideas in a fully-taught course. She is not in a position to change the education system. What she does here is to introduce a concept and sample its effects in terms of motivation, and thereby of integration and of success, for individuals and groups of learners. 
Evaluation is provided by acquiring both qualitative and quantitative data concerning mature members' perceptions of apprenticed-neophytes functioning as members in the new community, apprenticed-neophytes' perceptions of their own membership and of the preparation process undertaken, and the comparison of these neophytes' performance with that of other neophytes in the community. The data obtained provide strong evidence in support of the potential usefulness of this apprenticeship model towards the declared purpose of improving the English-medium tertiary education of Turkish students in their chosen fields of study.