942 results for Time-memory attacks
Abstract:
Computational simulations of the title reaction are presented, covering a temperature range from 300 to 2000 K. At lower temperatures we find that initial formation of the cyclopropene complex by addition of methylene to acetylene is irreversible, as is the stabilisation process via collisional energy transfer. Product branching between propargyl and the stable isomers is predicted at 300 K as a function of pressure for the first time. At intermediate temperatures (1200 K), complex temporal evolution involving multiple steady states begins to emerge. At high temperatures (2000 K), the timescale for subsequent unimolecular decay of thermalized intermediates begins to impinge on the timescale for reaction of methylene, such that the rate of formation of the propargyl product does not admit a simple analysis in terms of a single time-independent rate constant until the methylene supply becomes depleted. Likewise, at elevated temperatures the thermalized intermediates cannot be regarded as irreversible product channels. Our solution algorithm involves spectral propagation of a symmetrised version of the discretized master equation matrix, and is implemented in a high-precision environment that makes hitherto unachievable low-temperature modelling a reality.
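The symmetrise-then-eigendecompose propagation idea lends itself to a compact illustration. The following is a minimal sketch on a toy three-state rate matrix obeying detailed balance; the matrix, populations and function names are illustrative stand-ins, not the authors' code or data.

```python
# Minimal sketch: spectral propagation of a symmetrised master equation.
# The toy 3-state rate matrix below stands in for the discretized
# energy-grained master equation described in the abstract.
import numpy as np

f = np.array([0.7, 0.2, 0.1])            # equilibrium (Boltzmann) populations
K = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 2.0],
              [0.5, 2.0, 0.0]])           # symmetric "intrinsic" rates

# Rate matrix M with detailed balance: M[i,j]*f[j] == M[j,i]*f[i].
M = K * f[:, None]
M -= np.diag(M.sum(axis=0))               # columns sum to zero (conservation)

# Similarity transform S = F^(-1/2) M F^(1/2) is symmetric, so a stable
# symmetric eigensolver can be used -- the key numerical point.
s = np.sqrt(f)
S = M * (s[None, :] / s[:, None])
lam, U = np.linalg.eigh(S)                # real eigenvalues, orthogonal U

def propagate(p0, t):
    """Population vector at time t via the spectral expansion."""
    c = U.T @ (p0 / s)
    return s * (U @ (np.exp(lam * t) * c))

p0 = np.array([1.0, 0.0, 0.0])            # all population in one state
print(propagate(p0, 5.0))                 # relaxes toward f as t grows
```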
Abstract:
A new method is presented for determining an accurate eigendecomposition of difficult low-temperature unimolecular master equation problems. Based on a generalisation of the Nesbet method, the new method is capable of achieving complete spectral resolution of the master equation matrix with relative accuracy in the eigenvectors. The method is applied to a test case: the decomposition of ethane at 300 K from a microcanonical initial population, with energy transfer modelled by both Ergodic Collision Theory and the exponential-down model. It is demonstrated that quadruple-precision (16-byte) arithmetic is required irrespective of the eigensolution method used.
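The precision requirement is easy to appreciate with a toy example: when the eigenvalues of the symmetrised matrix span many orders of magnitude, the small (slow, chemically relevant) ones are lost to round-off in double precision. Below is a minimal sketch using mpmath as a stand-in for a quadruple-precision environment; the matrix is contrived for illustration and is not the paper's.

```python
# Minimal sketch: extended-precision eigendecomposition of a stiff
# symmetric matrix, mimicking (not reproducing) the low-temperature
# master-equation setting where double precision fails.
import mpmath as mp

mp.mp.dps = 34                          # ~ quadruple precision (16-byte reals)

# Contrived symmetric matrix with a huge spread of scales.
n = 4
A = mp.zeros(n, n)
for i in range(n):
    A[i, i] = -mp.mpf(10) ** (-5 * i)   # scales 1, 1e-5, 1e-10, 1e-15
    if i + 1 < n:
        A[i, i + 1] = A[i + 1, i] = mp.mpf(10) ** (-5 * i - 8)

E, Q = mp.eigsy(A)                      # symmetric eigensolver at 34 digits
for k in range(n):
    print(mp.nstr(E[k], 10))            # tiny eigenvalues survive intact here
```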
Abstract:
There is overwhelming evidence for the existence of substantial genetic influences on individual differences in general and specific cognitive abilities, especially in adults. The actual localization and identification of genes underlying variation in cognitive abilities and intelligence has only just started, however. Successes are currently limited to neurological mutations with rather severe cognitive effects. The current approaches to tracing genes responsible for variation in the normal range of cognitive ability consist of large-scale linkage and association studies. These are hampered by the usual problems of low statistical power to detect quantitative trait loci (QTLs) of small effect. One strategy to boost the power of genomic searches is to employ endophenotypes of cognition derived from the booming field of cognitive neuroscience. This special issue of Behavior Genetics reports on one of the first genome-wide association studies for general IQ. A second paper summarizes candidate genes for cognition, based on animal studies. A series of papers then introduces two additional levels of analysis in the "black box" between genes and cognitive ability: (1) behavioral measures of information-processing speed (inspection time, reaction time, rapid naming) and working memory capacity (performance on single or dual tasks of verbal and spatio-visual working memory), and (2) electrophysiologically derived measures of brain function (e.g., event-related potentials). The obvious way to assess the reliability and validity of these endophenotypes, and their usefulness in the search for cognitive ability genes, is through the examination of their genetic architecture in twin family studies. Papers in this special issue show that much of the association between intelligence and speed of information processing/brain function is due to a common gene or set of genes, and thereby demonstrate the usefulness of considering these measures in gene-hunting studies for IQ.
Abstract:
A multidisciplinary collaborative study examining cognition in a large sample of twins is outlined. A common experimental protocol and design is used in The Netherlands, Australia and Japan to measure cognitive ability using traditional IQ measures (i.e., psychometric IQ), processing speed (e.g., reaction time [RT] and inspection time [IT]), and working memory (e.g., spatial span, delayed response [DR] performance). The main aim is to investigate the genetic covariation among these cognitive phenotypes in order to use the correlated biological markers in future linkage and association analyses to detect quantitative trait loci (QTLs). We outline the study and methodology, and report results from our preliminary analyses examining the heritability of the processing speed and working memory indices and their phenotypic correlations with IQ. Heritability of Full Scale IQ was 87% in the Netherlands, 83% in Australia, and 71% in Japan. Heritability estimates for the processing speed and working memory indices ranged from 33% to 64%. Associations of IQ with RT and IT (−0.28 to −0.36) replicated previous findings, with those of higher cognitive ability showing faster speed of processing. Similarly, significant correlations were found between IQ and the spatial span working memory task (storage [0.31], executive processing [0.37]) and the DR working memory task (0.25), with those of higher cognitive ability showing better memory performance. These analyses establish the heritability of the processing speed and working memory measures to be used in our collaborative twin study of cognition, and support the finding that individual differences in processing speed and working memory may underlie individual differences in psychometric IQ.
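For readers unfamiliar with how twin correlations translate into heritability estimates, Falconer's classic formula gives a back-of-envelope version. The study itself would use formal model fitting (structural equation modelling), and the correlations below are hypothetical, chosen only to land near the ~87% reported for the Dutch sample.

```python
# Back-of-envelope illustration (not the study's actual analysis):
# Falconer's formula estimates heritability from MZ and DZ twin
# correlations: h2 = 2*(r_MZ - r_DZ), c2 = 2*r_DZ - r_MZ, e2 = 1 - r_MZ.
def falconer(r_mz: float, r_dz: float) -> dict:
    h2 = 2 * (r_mz - r_dz)          # additive genetic share of variance
    c2 = 2 * r_dz - r_mz            # shared-environment share
    e2 = 1 - r_mz                   # unique environment + measurement error
    return {"h2": h2, "c2": c2, "e2": e2}

# Hypothetical correlations yielding h2 = 0.86.
print(falconer(r_mz=0.88, r_dz=0.45))
```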
Abstract:
Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches.
Abstract:
Nineteen persons with Parkinson's disease (PD) and 19 matched control participants completed a battery of online lexical decision tasks designed to isolate the automatic and attentional aspects of semantic activation within the semantic priming paradigm. Results highlighted key processing abnormalities in PD. Specifically, persons with PD exhibited a delayed time course of semantic activation. In addition, the results suggest that the participants with PD were unable to implicitly process prime information and, therefore, failed to engage strategic processing mechanisms in response to manipulations of the relatedness proportion. Results are discussed in terms of the 'Gain/Decay' hypothesis (Milberg, McGlinchey-Berroth, Duncan, & Higgins, 1999) and the dopaminergic modulation of signal-to-noise ratios in semantic networks.
Abstract:
Research on the stability of flavours during high-temperature extrusion cooking is reviewed, and the important factors that affect flavour and aroma retention during extrusion are outlined. A substantial proportion of the flavour volatiles incorporated prior to extrusion is normally lost during expansion because of steam distillation. A general practice has therefore been to introduce a flavour mix after the extrusion process. This extra operation requires a binding agent (normally oil), and may also result in a non-uniform distribution of the flavour and low oxidative stability of the flavours exposed on the surface. The importance of encapsulated flavours, particularly the beta-cyclodextrin-flavour complex, is therefore highlighted in this paper.
Abstract:
It is not possible to make measurements of the phase of an optical mode using linear optics without introducing an extra phase uncertainty. This extra phase variance is quite large for heterodyne measurements; however, it is possible to reduce it to the theoretical limit of log n̄/(4n̄²) using adaptive measurements. These measurements are quite sensitive to experimental inaccuracies, especially time delays and inefficient detectors. Here it is shown that the minimum introduced phase variance when there is a time delay of τ is τ/(8n̄). This result is verified numerically, showing that the phase variance introduced approaches this limit for most of the adaptive schemes using the best final phase estimate. The main exception is the adaptive mark II scheme with simplified feedback, which is extremely sensitive to time delays. The extra phase variance due to time delays is also considered for the mark I case with simplified feedback, verifying the τ/2 result obtained by Wiseman and Killip both by a more rigorous analytic technique and numerically.
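To make the quoted scalings concrete, the snippet below simply evaluates the two expressions stated in the abstract for a few photon numbers n̄, and reports the delay τ beyond which the delay-induced variance τ/(8n̄) exceeds the adaptive limit log n̄/(4n̄²). No new physics is introduced; the crossover τ ≈ 2 log n̄/n̄ follows by equating the two expressions.

```python
# Numeric illustration of the two expressions quoted in the abstract:
# the adaptive limit log(nbar)/(4 nbar^2) on introduced phase variance,
# versus the extra variance tau/(8 nbar) caused by a feedback delay tau.
import math

for nbar in (10.0, 100.0, 1000.0):
    v_adaptive = math.log(nbar) / (4 * nbar**2)
    tau_cross = 2 * math.log(nbar) / nbar    # where tau/(8 nbar) == v_adaptive
    print(f"nbar={nbar:7.0f}  adaptive limit={v_adaptive:.2e}  "
          f"delay dominates for tau > {tau_cross:.2e}")
```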
Abstract:
Mass balance calculations were performed to model the effect of solution treatment time on A356 and A357 alloy microstructures. Image analysis and electron probe microanalysis were used to characterise microstructures and confirm model predictions. In as-cast microstructures, up to 8 times more Mg is tied up in the π-phase than in Mg2Si. The dissolution of π is accompanied by a corresponding increase in the amount of β-phase. This causes the rate of π dissolution to be limited by the rate of β formation. It is predicted that solution treatments of the order of tens of minutes at 540 °C produce near-maximum T6 yield strengths, and that Mg contents in excess of 0.52 wt% have no advantage.
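The coupling between π dissolution and β formation follows from iron conservation, which a toy mass balance makes explicit. The sketch below assumes the phase stoichiometries commonly quoted for Al-Si-Mg castings (π ≈ Al8FeMg3Si6, β ≈ Al5FeSi); it is an illustration of the argument, not the paper's model.

```python
# Toy mass balance (illustrative only). Assumed stoichiometries:
#   pi-phase   ~ Al8FeMg3Si6  (Fe- and Mg-bearing)
#   beta-phase ~ Al5FeSi      (Fe-bearing, Mg-free)
# Each mole of pi that dissolves frees 1 mol Fe, which must be taken up
# by newly formed beta -- hence pi dissolution is rate-limited by beta
# formation, as the abstract argues. The freed Mg (3 mol per mol pi)
# becomes available for solid solution and Mg2Si.
def dissolve_pi(n_pi_dissolved: float) -> dict:
    n_beta_formed = n_pi_dissolved        # Fe conservation: 1 Fe per f.u.
    n_mg_released = 3 * n_pi_dissolved    # 3 Mg per formula unit of pi
    return {"beta_formed_mol": n_beta_formed,
            "Mg_released_mol": n_mg_released}

print(dissolve_pi(0.01))    # hypothetical 0.01 mol of pi dissolving
```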
Abstract:
A data warehouse is a data repository that collects and maintains a large amount of data from multiple distributed, autonomous and possibly heterogeneous data sources. Often the data is stored in the form of materialized views in order to provide fast access to the integrated data. One of the most important decisions in designing a data warehouse is the selection of views to materialize. The objective is to select an appropriate set of views that minimizes the total query response time, subject to the constraint that the total maintenance time for these materialized views is within a given bound. This view selection problem differs fundamentally from the view selection problem under a disk space constraint. In this paper the view selection problem under the maintenance time constraint is investigated, and two efficient heuristic algorithms for it are proposed. The key to devising the proposed algorithms is to define good heuristic functions and to reduce the problem to well-solved optimization problems, so that an approximate solution of the known optimization problem yields a feasible solution of the original problem.
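The paper's two algorithms are not reproduced here, but a generic benefit-density greedy conveys the flavour of heuristic view selection under a maintenance-time budget. All names and numbers below are hypothetical.

```python
# Generic greedy sketch of view selection under a maintenance-time
# constraint (illustrative; not the paper's algorithms). Each view has a
# query-time benefit if materialized and a maintenance cost; we pick the
# best benefit-per-unit-maintenance while the budget allows, knapsack-style.
from dataclasses import dataclass

@dataclass
class View:
    name: str
    benefit: float      # reduction in total query response time
    maint_cost: float   # time to keep the materialized view up to date

def select_views(views: list[View], maint_budget: float) -> list[View]:
    chosen, used = [], 0.0
    # Rank by benefit density; real systems must also handle interacting
    # benefits (one view can subsume another's queries).
    for v in sorted(views, key=lambda v: v.benefit / v.maint_cost,
                    reverse=True):
        if used + v.maint_cost <= maint_budget:
            chosen.append(v)
            used += v.maint_cost
    return chosen

views = [View("sales_by_region", 120.0, 30.0),
         View("sales_by_month", 90.0, 40.0),
         View("top_customers", 50.0, 10.0)]
print([v.name for v in select_views(views, maint_budget=45.0)])
# -> ['top_customers', 'sales_by_region']
```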