864 results for Representation of polynomials
Abstract:
Inelastic neutron scattering (INS) spectroscopy has been used to observe and characterise hydrogen on the carbon component of a Pt/C catalyst. INS provides the complete vibrational spectrum of coronene, regarded as a molecular model of a graphite layer. The vibrational modes are assigned with the aid of ab initio density functional theory calculations, and the INS spectra are modelled with the a-CLIMAX program. A spectrum from which the H modes of coronene have been computationally suppressed, a carbon-only coronene spectrum, is a better representation of the spectrum of a graphite layer than is coronene itself. Dihydrogen dosing of a Pt/C catalyst caused amplification of the surface modes of carbon, an effect described as H riding on carbon. From the enhancement of the low-energy carbon modes (100-600 cm(-1)) it is concluded that spillover hydrogen becomes attached to dangling bonds at the edges of graphitic regions of the carbon support. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
We report variational calculations of rovibrational energies of CH4 using the code MULTIMODE and an ab initio force field of Schwenke and Partridge. The systematic convergence of the energies with respect to the level of mode coupling is presented. Converged vibrational energies calculated using the five-mode representation of the potential are compared with previous benchmark calculations based on Radau coordinates using this force field, for zero total angular momentum and for J = 1. Very good agreement with the previous benchmark calculations is found. (c) 2006 Elsevier B.V. All rights reserved.
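For readers unfamiliar with the n-mode scheme: MULTIMODE truncates the potential as a hierarchy of mode couplings, and a five-mode representation retains terms coupling at most five normal coordinates at once. A schematic of the standard expansion (textbook notation, not reproduced from the paper):

V(Q_1,\dots,Q_N) \;\approx\; \sum_i V^{(1)}_i(Q_i) \;+\; \sum_{i<j} V^{(2)}_{ij}(Q_i,Q_j) \;+\; \dots \;+\; \sum_{i_1<\dots<i_5} V^{(5)}_{i_1\dots i_5}(Q_{i_1},\dots,Q_{i_5})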
Abstract:
Some of the most pressing problems currently facing chemical education throughout the world are rehearsed. It is suggested that if the notion of "context" is to be used as the basis for addressing these problems, it must enable a number of challenges to be met. Four generic models of "context" are identified that are currently used, or that may be used in some form, within chemical education as the basis for curriculum design. It is suggested that a model based on physical settings, together with their cultural justifications, and taught with a socio-cultural perspective on learning, is likely to meet those challenges most fully. A number of reasons are suggested why the relative efficacies of these four models cannot be evaluated from the existing research literature. Finally, an established model for representing the development of curricula is used to discuss the development and evaluation of context-based chemical curricula.
Abstract:
The development and performance of a three-stage tubular model of the large human intestine is outlined. Each stage comprises a membrane fermenter in which flow of an aqueous polyethylene glycol solution on the outside of the tubular membrane is used to control the removal of water and metabolites (principally short-chain fatty acids) from, and thus the pH of, the flowing contents on the fermenter side. The three-stage system gave a fair representation of conditions in the human gut. Numbers of the main bacterial groups were consistently higher than in an existing three-chemostat gut model system, suggesting the advantages of the new design in providing an environment for bacterial growth that represents the actual colonic microflora. Concentrations of short-chain fatty acids and pH levels throughout the system were similar to those associated with the corresponding sections of the human colon. The model was able to achieve considerable water transfer across the membrane, although the values were not as high as those in the colon. The model thus goes some way towards a realistic simulation of the colon, although it makes no pretence of simulating the pulsating nature of the real flow. The flow conditions in each section are characterized by low Reynolds numbers; mixing due to Taylor dispersion is significant, and the implications of Taylor mixing and biofilm development for the stability of the system, that is, its ability to operate without washout, are briefly analysed and discussed. It is concluded that both phenomena are important for stabilizing the model and the human colon.
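For reference, the Taylor dispersion invoked in the stability analysis is conventionally quantified by the Taylor-Aris result for laminar flow in a tube of radius a, with mean axial velocity \bar{u} and molecular diffusivity D (a standard result, not taken from the paper):

D_{\mathrm{eff}} = D + \frac{a^2 \bar{u}^2}{48\,D}, \qquad \mathrm{Re} = \frac{\rho\,\bar{u}\,(2a)}{\mu}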
Abstract:
Using an immersive virtual reality system, we measured the ability of observers to detect the rotation of an object when its movement was yoked to the observer's own translation. Most subjects had a large bias such that a static object appeared to rotate away from them as they moved. Thresholds for detecting target rotation were similar to those for an equivalent speed discrimination task carried out by static observers, suggesting that visual discrimination is the predominant limiting factor in detecting target rotation. Adding a stable visual reference frame almost eliminated the bias. Varying the viewing distance of the target had little effect, consistent with observers underestimating distance walked. However, accuracy of walking to a briefly presented visual target was high and not consistent with an underestimation of distance walked. We discuss implications for theories of a task-independent representation of visual space. © 2005 Elsevier Ltd. All rights reserved.
Abstract:
Do we view the world differently if it is described to us in figurative rather than literal terms? An answer to this question would reveal something about both the conceptual representation of figurative language and the scope of top-down influences on scene perception. Previous work has shown that participants will look longer at a path region of a picture when it is described with a type of figurative language called fictive motion (The road goes through the desert) rather than without it (The road is in the desert). The current experiment provided evidence that such fictive motion descriptions affect eye movements by evoking mental representations of motion. If participants heard contextual information that would hinder actual motion, it influenced how they viewed a picture when it was described with fictive motion. Inspection times and eye movements scanning along the path increased during fictive motion descriptions when the terrain was first described as difficult (The desert is hilly) as compared to easy (The desert is flat); there were no such effects for descriptions without fictive motion. It is argued that fictive motion evokes a mental simulation of motion that is immediately integrated with visual processing, and hence that figurative language can have a distinct effect on perception. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
In young adults information designated for future enactment is more readily accessible from memory than information not intended for enactment (e.g. Goschke & Kuhl, 1993). We examined whether this advantage for to-be-enacted material is reduced in older adults and thus whether attenuated action accessibility could underlie age-associated declines in prospective remembering. Young and older adults showed an equivalent increase in accessibility (faster recognition latencies) for to-be-enacted items over items intended for verbal report. Both age groups also showed increased accessibility for actions performed at encoding compared with verbally encoded items. Moreover, these effects were non-additive, suggesting similarities in the representation of completed and to-be-completed actions.
Abstract:
In studies of prospective memory, recall of the content of delayed intentions is normally excellent, probably because they contain actions that have to be enacted at a later time. Action words encoded for later enactment are more accessible from memory than those encoded for later verbal report [Freeman, J.E., and Ellis, J.A. 2003a. The representation of delayed intentions: A prospective subject-performed task? Journal of Experimental Psychology: Learning, Memory, and Cognition, 29, 976-992.]. As this higher accessibility is lost when the intended actions have to be enacted during encoding, or when a motor interference task is introduced concurrently with intention encoding, Freeman and Ellis suggested that the advantage of to-be-enacted actions is due to additional preparatory motor operations during encoding. Accordingly, in an fMRI study with 10 healthy young participants, we investigated whether motor brain regions are differentially activated during verbal encoding of actions for later enactment with the right hand, in contrast to verbal encoding of actions for later verbal report. We included an additional condition of verbal encoding of abstract verbs for later verbal report, to investigate whether the semantic motor information inherent in action verbs, in contrast to abstract verbs, activates motor brain regions different from those involved in the verbal encoding of actions for later enactment. Differential activation for the verbal encoding of to-be-enacted actions, in contrast to to-be-reported actions, was found in brain regions known to be involved in covert motor preparation for hand movements, i.e. the postcentral gyrus, the precuneus, the dorsal and ventral premotor cortex, the posterior middle temporal gyrus and the inferior parietal lobule. There was no overlap between these brain regions and those differentially activated during the verbal encoding of actions, in contrast to abstract verbs, for later verbal report. Consequently, the results of this fMRI study suggest the presence of preparatory motor operations during the encoding of delayed intentions requiring a future motor response, which cannot be attributed to semantic information inherent in action verbs. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
This paper offers general guidelines for the development of effective visual languages, that is, languages for constructing diagrams that can be readily interpreted and manipulated by the human reader. We use these guidelines first to examine classical AND/OR trees as a representation of logical proofs, and second to design and evaluate a visual language for representing proofs in LofA: a Logic of Dependability Arguments, for which we provide a brief motivation and overview.
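As a minimal illustration of the first representation examined, here is a sketch of an AND/OR proof tree and its evaluation; the names and the Python rendering are ours, not the paper's:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    claim: str
    kind: str = "leaf"          # "and": all children must hold; "or": any one suffices
    children: List["Node"] = field(default_factory=list)

def holds(node: Node, facts: set) -> bool:
    """Evaluate the proof tree against a set of established facts."""
    if node.kind == "leaf":
        return node.claim in facts
    results = (holds(c, facts) for c in node.children)
    return all(results) if node.kind == "and" else any(results)

# Example: the goal needs lemma1 and either alternative a or b.
proof = Node("goal", "and", [Node("lemma1"), Node("alt", "or", [Node("a"), Node("b")])])
print(holds(proof, {"lemma1", "b"}))   # True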
Abstract:
The SPE taxonomy of evolving software systems, first proposed by Lehman in 1980, is re-examined in this work. The primary concepts of software evolution are related to generic theories of evolution, particularly Dawkins' concept of a replicator, to the hermeneutic tradition in philosophy and to Kuhn's concept of paradigm. These concepts provide the foundations that are needed for understanding the phenomenon of software evolution and for refining the definitions of the SPE categories. In particular, this work argues that a software system should be defined as of type P if its controlling stakeholders have made a strategic decision that the system must comply with a single paradigm in its representation of domain knowledge. The proposed refinement of SPE is expected to provide a more productive basis for developing testable hypotheses and models about possible differences in the evolution of E- and P-type systems than is provided by the original scheme. Copyright (C) 2005 John Wiley & Sons, Ltd.
Abstract:
This work compares and contrasts results of classifying time-domain ECG signals with pathological conditions taken from the MIT-BIH arrhythmia database. Linear discriminant analysis and a multi-layer perceptron were used as classifiers. The neural network was trained by two different methods, namely back-propagation and a genetic algorithm. Converting the time-domain signal into the wavelet domain reduced the dimensionality of the problem at least 10-fold. This was achieved using wavelets from the db6 family as well as adaptive wavelets generated using two different strategies. The wavelet transforms used in this study were limited to two decomposition levels. A neural network with evolved weights proved to be the best classifier, with a maximum of 99.6% accuracy when optimised wavelet-transform ECG data was presented to its input and 95.9% accuracy when the signals presented to its input were decomposed using db6 wavelets. The linear discriminant analysis achieved a maximum classification accuracy of 95.7% when presented with optimised, and 95.5% with db6, wavelet coefficients. It is shown that the much simpler signal representation of a few wavelet coefficients obtained through an optimised discrete wavelet transform considerably facilitates the task of classifying non-stationary time-variant signals. In addition, the results indicate that wavelet optimisation may improve the classification ability of a neural network. (c) 2005 Elsevier B.V. All rights reserved.
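A minimal sketch of the db6 baseline pipeline described above, assuming the PyWavelets and scikit-learn packages; the beat segments here are random placeholders, and the adaptive-wavelet optimisation and GA-trained perceptron are not reproduced:

import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
beats = rng.standard_normal((500, 360))   # placeholder ECG segments (e.g. 1 s at 360 Hz)
labels = rng.integers(0, 2, size=500)     # placeholder diagnostic class labels

def dwt_features(beat):
    # Two decomposition levels, as in the study; keeping only the level-2
    # approximation coefficients gives the reduced representation.
    return pywt.wavedec(beat, "db6", level=2)[0]

X = np.array([dwt_features(b) for b in beats])
clf = LinearDiscriminantAnalysis().fit(X, labels)
print(clf.score(X, labels))               # training accuracy on placeholder data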
Abstract:
Transient neural assemblies mediated by synchrony in particular frequency ranges are thought to underlie cognition. We propose a new approach to their detection, using empirical mode decomposition (EMD), a data-driven approach removing the need for arbitrary bandpass filter cut-offs. Phase locking is sought between modes. We explore the features of EMD, including making a quantitative assessment of its ability to preserve phase content of signals, and proceed to develop a statistical framework with which to assess synchrony episodes. Furthermore, we propose a new approach to ensure signal decomposition using EMD. We adapt the Hilbert spectrum to a time-frequency representation of phase locking and are able to locate synchrony successfully in time and frequency between synthetic signals reminiscent of EEG. We compare our approach, which we call EMD phase locking analysis (EMDPL) with existing methods and show it to offer improved time-frequency localisation of synchrony.
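A minimal sketch of the phase-locking step, assuming the intrinsic mode functions have already been obtained from some EMD implementation (e.g. the PyEMD package); the statistical framework for assessing synchrony episodes is not reproduced:

import numpy as np
from scipy.signal import hilbert

def phase_locking_value(imf_a, imf_b, win=256):
    # Instantaneous phases via the analytic signal.
    phase_a = np.angle(hilbert(imf_a))
    phase_b = np.angle(hilbert(imf_b))
    # Sliding-window phase-locking value: 1 = perfect locking, 0 = none.
    dphi = np.exp(1j * (phase_a - phase_b))
    kernel = np.ones(win) / win
    return np.abs(np.convolve(dphi, kernel, mode="same"))

# Two synthetic 10 Hz modes, phase-locked only for the first 5 s.
t = np.arange(0, 10, 1e-3)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + np.where(t < 5, 0.3, 2 * np.pi * 3 * t))
plv = phase_locking_value(a, b)           # high for t < 5 s, low afterwards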
Abstract:
This paper is concerned with the uniformization of a system of affine recurrence equations. This transformation is used in the design (or compilation) of highly parallel embedded systems (VLSI systolic arrays, signal processing filters, etc.). In this paper, we present and implement an automatic system to achieve uniformization of systems of affine recurrence equations. We unify the results from many earlier papers, develop some theoretical extensions, and then propose effective uniformization algorithms. Our results can be used in any high-level synthesis tool based on polyhedral representation of nested loop computations.
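A textbook illustration of the transformation (our example, not one from the paper): a broadcast of X(i) along the j axis is an affine dependence; uniformization replaces it by a pipelined copy variable so that every dependence vector becomes constant:

\text{affine:}\quad Y(i,j) = f\bigl(Y(i,j-1),\, X(i)\bigr)

\text{uniform:}\quad \hat{X}(i,0) = X(i),\qquad \hat{X}(i,j) = \hat{X}(i,j-1),\qquad Y(i,j) = f\bigl(Y(i,j-1),\, \hat{X}(i,j)\bigr)

After the rewrite, all dependencies are translations by the fixed vector (0,1), the uniform form required for systolic-array mapping.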
Abstract:
In this paper the meteorological processes responsible for transporting tracer during the second ETEX (European Tracer EXperiment) release are determined using the UK Met Office Unified Model (UM). The UM-predicted distribution of tracer is also compared with observations from the ETEX campaign. The dominant meteorological process is a warm conveyor belt which transports large amounts of tracer away from the surface up to a height of 4 km over a 36 h period. Convection is also an important process, transporting tracer to heights of up to 8 km. Potential sources of error when using an operational numerical weather prediction model to forecast air quality are also investigated, including model dynamics, model resolution and model physics. In the UM a semi-Lagrangian monotonic advection scheme is used with cubic polynomial interpolation. This can predict unrealistic negative values of tracer which are subsequently set to zero, and hence results in an overprediction of tracer concentrations. In order to conserve mass in the UM tracer simulations it was necessary to include a flux-corrected transport method. Model resolution can also affect the accuracy of predicted tracer distributions. Low-resolution simulations (50 km grid length) were unable to resolve a change in wind direction observed during ETEX 2, which led to an error in the transport direction and hence in the tracer distribution. High-resolution simulations (12 km grid length) captured the change in wind direction and hence produced a tracer distribution that compared better with the observations. The representation of convective mixing was found to have a large effect on the vertical transport of tracer: turning off the convective mixing parameterisation in the UM significantly reduced the vertical transport of tracer. Finally, air quality forecasts were found to be sensitive to the timing of synoptic-scale features. Errors of only 1 h in the position of the cold front relative to the tracer release location resulted in changes in the predicted tracer concentrations that were of the same order of magnitude as the absolute tracer concentrations.
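A one-dimensional sketch of the advection issue described above, assuming SciPy: cubic interpolation at the departure points overshoots near sharp gradients, and clipping the resulting negatives adds spurious tracer mass unless a flux-corrected step restores conservation (illustrative only, not the UM scheme):

import numpy as np
from scipy.ndimage import map_coordinates

nx, courant = 200, 0.5                        # grid points; u*dt/dx
q = np.where(np.abs(np.arange(nx) - 60) < 6, 1.0, 0.0)   # sharp tracer blob

for _ in range(40):
    depart = (np.arange(nx) - courant) % nx   # departure points, periodic domain
    q = map_coordinates(q, [depart], order=3, mode="grid-wrap")

print("min tracer:", q.min())                 # < 0: cubic overshoot at sharp edges
clipped = np.clip(q, 0.0, None)               # the monotonic "set to zero" fix-up
print("mass added by clipping:", clipped.sum() - q.sum())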
Abstract:
We analyze the publicly released outputs of the simulations performed by climate models (CMs) in preindustrial (PI) and Special Report on Emissions Scenarios A1B (SRESA1B) conditions. In the PI simulations, most CMs feature biases of the order of 1 W m⁻² for the net global and the net atmospheric, oceanic, and land energy balances. This does not result from transient effects but depends on the imperfect closure of the energy cycle in the fluid components and on inconsistencies over land. Thus, the planetary emission temperature is underestimated, which may explain the CMs' cold bias. In the PI scenario, CMs agree on the meridional atmospheric enthalpy transport's peak location (around 40°N/S), while discrepancies of ∼20% exist on the intensity. Disagreements on the oceanic transport peaks' location and intensity amount to ∼10° and ∼50%, respectively. In the SRESA1B runs, the atmospheric transport's peak shifts poleward, and its intensity increases up to ∼10% in both hemispheres. In most CMs, the Northern Hemispheric oceanic transport decreases, and the peaks shift equatorward in both hemispheres. The Bjerknes compensation mechanism is active both on climatological and interannual time scales. The total meridional transport peaks around 35° in both hemispheres and scenarios, whereas disagreements on the intensity reach ∼20%. With increased CO₂ concentration, the total transport increases up to ∼10%, thus contributing to polar amplification of global warming. Advances are needed for achieving a self-consistent representation of climate as a nonequilibrium thermodynamical system. This is crucial for improving the CMs' skill in representing past and future climate changes.
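For context, the meridional transports discussed here are those implied by the steady-state energy budget: the total northward transport across latitude φ is the area integral of the zonal-mean net downward top-of-atmosphere flux over the region to the south (a standard diagnostic, not reproduced from the paper), with R the Earth's radius:

T(\phi) = 2\pi R^2 \int_{-\pi/2}^{\phi} \overline{F_{\mathrm{net}}}(\phi')\,\cos\phi'\;\mathrm{d}\phi'

Note that a nonzero global imbalance, such as the ∼1 W m⁻² biases reported above, makes this integral inconsistent between the two poles unless the residual is removed first, which is one practical consequence of the imperfect closure of the energy cycle.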