974 results for Main Memory
Abstract:
PURPOSE Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care. METHODS Eleven nominal group interviews of patients and primary health care professionals were held in Auckland, New Zealand, during late 2007. Group members reported and helped to classify types of potential error by patients. We synthesized the ideas that emerged from the nominal groups into a taxonomy of patient error. RESULTS Our taxonomy is a 3-level system encompassing 70 potential types of patient error. The first level classifies 8 categories of error into 2 main groups: action errors and mental errors. The action errors, which result in part or whole from patient behavior, are attendance errors, assertion errors, and adherence errors. The mental errors, which are errors in patient thought processes, comprise memory errors, mindfulness errors, misjudgments, and—more distally—knowledge deficits and attitudes not conducive to health. CONCLUSION The taxonomy is an early attempt to understand and recognize how patients may err and what clinicians should aim to influence so they can help patients act safely. This approach begins to balance perspectives on error but requires further research. There is a need to move beyond seeing patient, clinician, and system errors as separate categories of error. An important next step may be research that attempts to understand how patients, clinicians, and systems interact to cocreate and reduce errors.
Abstract:
Stream ciphers are common cryptographic algorithms used to protect the confidentiality of frame-based communications such as mobile phone conversations and Internet traffic. Stream ciphers are well suited to encrypting these types of traffic, as they can encrypt them quickly and securely and have low error propagation. The main objective of this thesis is to determine whether structural features of keystream generators affect the security provided by stream ciphers. These structural features pertain to the state-update and output functions used in keystream generators. Using linear sequences as keystream to encrypt messages is known to be insecure, so modern keystream generators use nonlinear sequences as keystream. The nonlinearity can be introduced through a keystream generator's state-update function, its output function, or both. The first contribution of this thesis relates to nonlinear sequences produced by the well-known Trivium stream cipher. Trivium is one of the stream ciphers selected in the final portfolio of eSTREAM, a multi-year European project run by the ECRYPT network. Trivium's structural simplicity makes it a popular cipher to cryptanalyse, but to date there are no attacks in the public literature that are faster than exhaustive keysearch. Algebraic analyses are performed on the Trivium stream cipher, which uses a nonlinear state-update function and a linear output function to produce keystream. Two algebraic investigations are performed: an examination of the sliding property in the initialisation process, and algebraic analyses of Trivium-like stream ciphers using a combination of the algebraic techniques previously applied separately by Berbain et al. and Raddum. For certain iterations of Trivium's state-update function, we examine the sets of slid pairs, looking particularly to form chains of slid pairs. No chains exist for a small number of iterations. This has implications for the period of keystreams produced by Trivium.
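To make the structure concrete, here is a minimal bit-level sketch of Trivium keystream generation following the public eSTREAM specification (variable names are ours, and the code has not been checked against the official test vectors):

```python
def trivium_keystream(key, iv, nbits):
    """Generate nbits of Trivium keystream from an 80-bit key and 80-bit IV
    (lists of 0/1 bits), per the public eSTREAM specification."""
    s = [0] * 288
    s[0:80] = key[:80]            # s1..s80   <- key
    s[93:173] = iv[:80]           # s94..s173 <- IV
    s[285] = s[286] = s[287] = 1  # s286..s288 <- 1
    out = []
    for i in range(4 * 288 + nbits):   # 4 full cycles of warm-up, no output
        t1 = s[65] ^ s[92]
        t2 = s[161] ^ s[176]
        t3 = s[242] ^ s[287]
        if i >= 4 * 288:
            out.append(t1 ^ t2 ^ t3)   # linear output function
        t1 ^= (s[90] & s[91]) ^ s[170]   # nonlinear state-update terms
        t2 ^= (s[174] & s[175]) ^ s[263]
        t3 ^= (s[285] & s[286]) ^ s[68]
        # rotate the three registers, inserting the feedback bits
        s = [t3] + s[0:92] + [t1] + s[93:176] + [t2] + s[177:287]
    return out
```

Note how the AND gates in the three feedback taps are the cipher's only source of nonlinearity, while the output is a plain XOR of six stages — exactly the nonlinear state-update / linear output split described above.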
Secondly, using our combination of the methods of Berbain et al. and Raddum, we analysed Trivium-like ciphers and improved on previous analyses with regard to forming systems of equations for these ciphers. Using these new systems of equations, we were able to successfully recover the initial state of Bivium-A. The attack complexities for Bivium-B and Trivium were, however, worse than exhaustive keysearch. We also show that the selection of stages used as input to the output function, and the sizes of the registers used in the construction of the system of equations, affect the success of the attack. The second contribution of this thesis is the examination of state convergence. State convergence is an undesirable characteristic in keystream generators for stream ciphers, as it implies that the effective session key size of the stream cipher is smaller than the designers intended. We identify methods which can be used to detect state convergence. As a case study, the Mixer stream cipher, which uses nonlinear state-update and output functions to produce keystream, is analysed. Mixer is found to suffer from state convergence because the state-update function used in its initialisation process is not one-to-one. A discussion of several other stream ciphers which are known to suffer from state convergence is given. From our analysis of these stream ciphers, three mechanisms which can cause state convergence are identified. The effect state convergence can have on stream cipher cryptanalysis is examined. We show that state convergence can have a positive effect if the goal of the attacker is to recover the initial state of the keystream generator. The third contribution of this thesis is the examination of the distributions of bit patterns in the sequences produced by nonlinear filter generators (NLFGs) and linearly filtered nonlinear feedback shift registers.
We show that the selection of stages used as input to a keystream generator's output function can affect the distribution of bit patterns in the sequences produced by these keystream generators, and that the effect differs for nonlinear filter generators and linearly filtered nonlinear feedback shift registers. In the case of NLFGs, the keystream sequences produced when the output function takes inputs from consecutive register stages are less uniform than sequences produced by NLFGs whose output functions take inputs from unevenly spaced register stages. The opposite is true for keystream sequences produced by linearly filtered nonlinear feedback shift registers.
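The tap-position effect described for NLFGs can be explored with a toy experiment. The generator below is a generic illustration (an 8-stage LFSR with the primitive feedback polynomial x^8 + x^4 + x^3 + x^2 + 1 and an arbitrary nonlinear filter), not any cipher analysed in the thesis:

```python
from collections import Counter

def nlfg_keystream(fb_taps, filter_taps, f, state, n):
    """Nonlinear filter generator: a binary LFSR clocked n times, with a
    nonlinear Boolean filter f applied to selected stages each clock."""
    state = list(state)
    out = []
    for _ in range(n):
        out.append(f(*(state[i] for i in filter_taps)))
        fb = 0
        for t in fb_taps:
            fb ^= state[t]
        state = state[1:] + [fb]   # shift left, feedback bit enters last stage
    return out

def pattern_counts(bits, k=3):
    """Count overlapping k-bit patterns in the sequence."""
    return Counter(tuple(bits[i:i + k]) for i in range(len(bits) - k + 1))

# LFSR recurrence from x^8 + x^4 + x^3 + x^2 + 1 (primitive over GF(2))
FB = [0, 2, 3, 4]
f = lambda a, b, c: a ^ (b & c)        # toy nonlinear filter

# same LFSR and filter, consecutive vs. unevenly spaced filter taps
ks_consec = nlfg_keystream(FB, (0, 1, 2), f, [1] + [0] * 7, 2000)
ks_spread = nlfg_keystream(FB, (0, 3, 6), f, [1] + [0] * 7, 2000)
```

Comparing `pattern_counts(ks_consec)` with `pattern_counts(ks_spread)` shows how the pattern distribution depends on tap spacing, which is the kind of effect the thesis quantifies.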
Abstract:
Shaft fracture at an early stage of operation is a common problem for a certain type of wind turbine. To determine the cause of shaft failure, a series of experimental tests was conducted to evaluate the chemical composition and mechanical properties. A detailed analysis covering the macroscopic features and the microstructure of the shaft material was also performed to gain in-depth knowledge of the cause of fracture. The experimental tests and analysis show no significant differences between the material properties of the main shaft and the standard EN 10083-3:2006. The results show that stress concentration on the shaft surface close to the critical section, due to rubbing of the annular ring, coupled with the high stress concentration caused by the change in the inner diameter of the main shaft, are the main reasons for the fracture of the main shaft. In addition, inhomogeneity of the main shaft microstructure accelerates the fracture process. A theoretical calculation of the equivalent stress at the end of the shaft was also performed, which demonstrates that cracks can easily occur under the action of impact loads. The contribution of this paper is to provide a reference for fracture analysis of similar main shafts of wind turbines.
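For context on an "equivalent stress" calculation of the kind mentioned, a textbook von Mises combination of nominal bending and torsional stresses on a solid circular shaft is sketched below; this is a generic illustration, not the paper's actual calculation, and the Kt factors are illustrative stress-concentration multipliers:

```python
import math

def von_mises_shaft(m_bend, t_torque, d, kt_bend=1.0, kt_tors=1.0):
    """Von Mises equivalent stress for a solid circular shaft of diameter d
    under bending moment m_bend and torque t_torque (consistent SI units),
    scaled by illustrative stress-concentration factors kt_bend / kt_tors."""
    sigma = kt_bend * 32.0 * m_bend / (math.pi * d ** 3)   # bending stress
    tau = kt_tors * 16.0 * t_torque / (math.pi * d ** 3)   # torsional shear
    return math.sqrt(sigma ** 2 + 3.0 * tau ** 2)
```

A higher Kt at a diameter change or rubbing mark raises the equivalent stress linearly, which is why geometric stress raisers at the critical section matter so much.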
Abstract:
A key question in neuroscience is how memory is selectively allocated to neural networks in the brain. This question remains a significant research challenge, in both rodent models and humans alike, because of the inherent difficulty in tracking and deciphering the large, high-dimensional neuronal ensembles that support memory (i.e., the engram). In a previous study we showed that consolidation of a new fear memory is allocated to a common topography of amygdala neurons. When a consolidated memory is retrieved, it may enter a labile state, requiring reconsolidation for it to persist. What is not known is whether the original spatial allocation of a consolidated memory changes during reconsolidation. Knowledge about the spatial allocation of a memory, during consolidation and reconsolidation, provides fundamental insight into its core physical structure (i.e., the engram). Using design-based stereology, we operationally define reconsolidation by showing a nearly identical quantity of neurons in the dorsolateral amygdala (LAd) that expressed a plasticity-related protein, phosphorylated mitogen-activated protein kinase, following both memory acquisition and retrieval. Next, we confirm that Pavlovian fear conditioning recruits a stable, topographically organized population of activated neurons in the LAd. When the stored fear memory was briefly reactivated in the presence of the relevant conditioned stimulus, a similar topography of activated neurons was uncovered. In addition, we found evidence for activated neurons allocated to new regions of the LAd. These findings provide the first insight into the spatial allocation of a fear engram in the LAd during its consolidation and reconsolidation phases.
Abstract:
Pavlovian fear conditioning is a robust technique for examining behavioral and cellular components of fear learning and memory. In fear conditioning, the subject learns to associate a previously neutral stimulus with an inherently noxious co-stimulus. The learned association is reflected in the subject's behavior upon subsequent re-exposure to the previously neutral stimulus or the training environment. Using fear conditioning, investigators can obtain a large amount of data that describe multiple aspects of learning and memory. In a single test, researchers can evaluate functional integrity in fear circuitry, which is both well characterized and highly conserved across species. Additionally, the availability of sensitive and reliable automated scoring software makes fear conditioning amenable to high-throughput experimentation in the rodent model; thus, this model of learning and memory is particularly useful for pharmacological and toxicological screening. Due to the conserved nature of fear circuitry across species, data from Pavlovian fear conditioning are highly translatable to human models. We describe the equipment and techniques needed to perform and analyze conditioned fear data. We provide two examples of fear conditioning experiments, one in rats and one in mice, and the types of data that can be collected in a single experiment. © 2012 Springer Science+Business Media, LLC.
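Automated freezing scoring of the kind mentioned typically thresholds a per-frame motion index and counts only immobility bouts above a minimum duration. A simplified sketch follows; the threshold, frame rate and bout length are illustrative parameters, not tied to any specific scoring software:

```python
def percent_freezing(motion_index, threshold, fps, min_bout_s=1.0):
    """Score freezing as runs of sub-threshold motion lasting at least
    min_bout_s seconds; return the percentage of frames spent freezing."""
    min_frames = int(min_bout_s * fps)
    frozen = [m < threshold for m in motion_index]
    total, run = 0, 0
    for f in frozen + [False]:     # sentinel flushes the final run
        if f:
            run += 1
        else:
            if run >= min_frames:  # keep only bouts long enough to count
                total += run
            run = 0
    return 100.0 * total / len(motion_index)
```

For example, a 60-frame trace at 30 fps that is immobile for its first second scores 50% freezing.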
Abstract:
Pavlovian fear conditioning, also known as classical fear conditioning, is an important model in the study of the neurobiology of normal and pathological fear. Progress in the neurobiology of Pavlovian fear also enhances our understanding of disorders such as posttraumatic stress disorder (PTSD) and aids the development of effective treatment strategies. Here we describe how Pavlovian fear conditioning is a key tool for understanding both the neurobiology of fear and the mechanisms underlying variations in fear memory strength observed across different phenotypes. First, we discuss how Pavlovian fear models aspects of PTSD. Second, we describe the neural circuits of Pavlovian fear and the molecular mechanisms within these circuits that regulate fear memory. Finally, we show how fear memory strength is heritable, and describe genes that are specifically linked both to changes in Pavlovian fear behavior and to its underlying neural circuitry. These emerging data begin to define the essential genes, cells and circuits that contribute to normal and pathological fear.
Abstract:
This thesis is a study of how the contents of volatile memory on the Windows operating system can be better understood and utilised for the purposes of digital forensic investigations. It proposes several techniques to improve the analysis of memory, with a focus on improving the detection of unknown code such as malware. These contributions allow the creation of a more complete reconstruction of the state of a computer at acquisition time, including whether or not the computer has been infected by malicious code.
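One common building block in memory analysis of the kind this thesis discusses is scanning a raw image for executable headers. The minimal sketch below, which looks for DOS/PE signatures, is illustrative only and not the thesis's method:

```python
def find_pe_candidates(dump, max_hits=100):
    """Scan a raw memory image (bytes) for the 'MZ' DOS magic and check
    that the e_lfanew field points at a 'PE\\0\\0' signature in the buffer."""
    hits = []
    off = dump.find(b"MZ")
    while off != -1 and len(hits) < max_hits:
        if off + 0x40 <= len(dump):
            # e_lfanew: little-endian dword at offset 0x3C of the DOS header
            e_lfanew = int.from_bytes(dump[off + 0x3C:off + 0x40], "little")
            pe = off + e_lfanew
            if 0 < e_lfanew < 0x1000 and dump[pe:pe + 4] == b"PE\x00\x00":
                hits.append(off)
        off = dump.find(b"MZ", off + 1)
    return hits
```

Real tools additionally reconstruct page mappings and look for headerless or deliberately malformed code, which is where detection of unknown malware becomes hard.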
Abstract:
The study of memory in most behavioral paradigms, including emotional memory paradigms, has focused on the feed-forward components that underlie Hebb's first postulate, associative synaptic plasticity. Hebb's second postulate argues that activated ensembles of neurons reverberate in order to provide temporal coordination of different neural signals, and thereby facilitate coincidence detection. Recent evidence from our groups has suggested that the lateral amygdala (LA) contains recurrent microcircuits and that these may reverberate. Additionally, this reverberant activity is precisely timed, with latencies that would facilitate coincidence detection between cortical and subcortical afferents to the LA. Thus, recent data at the microcircuit level in the amygdala provide some physiological evidence in support of the second Hebbian postulate.
Abstract:
Distributed generation (DG) resources are commonly used in electric power systems to reduce line losses, one of the benefits of DG, in radial distribution systems. Studies have shown the importance of appropriate selection of the location and size of DG units. This paper proposes an analytical method for solving the optimal distributed generation placement (ODGP) problem to minimize line losses in radial distribution systems, using a loss sensitivity factor (LSF) based on the bus-injection to branch-current (BIBC) matrix. The proposed method is formulated and tested on 12-bus and 34-bus radial distribution systems. The classical grid-search algorithm based on successive load flows is employed to validate the results. The main advantages of the proposed method over conventional methods are its robustness and the fact that it does not need to calculate and invert large admittance or Jacobian matrices. The simulation time and the amount of computer memory required for processing data, especially for large systems, therefore decrease.
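The BIBC idea can be sketched simply: in a radial feeder, each branch current is the sum of the load-current injections at every bus downstream of it, so branch currents and I²R losses follow directly from the injections without inverting an admittance matrix. The toy version below uses real-valued currents and illustrative names; an actual load flow works with complex currents and iterates with bus voltages:

```python
def bibc_losses(parent, r, i_inj):
    """Radial-feeder branch currents and total I^2 R loss via the BIBC idea.
    Bus 0 is the source; for each bus i >= 1, parent[i] is its upstream bus,
    r[i] is the resistance of the branch feeding it, and i_inj[i] is the
    load current drawn at it (real-valued for this sketch)."""
    n = len(parent) - 1               # parent[0] is unused
    i_branch = [0.0] * (n + 1)
    for k in range(1, n + 1):         # push bus k's current up its path
        node = k
        while node != 0:
            i_branch[node] += i_inj[k]
            node = parent[node]
    loss = sum(r[b] * i_branch[b] ** 2 for b in range(1, n + 1))
    return i_branch, loss
```

A loss-sensitivity screen of the kind the paper describes can then compare the loss with and without a candidate DG injection at each bus.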
Abstract:
We revisit the venerable question of access credentials management, which concerns the techniques that we, humans with limited memory, must employ to safeguard our various access keys and tokens in a connected world. Although many existing solutions can be employed to protect a long secret using a short password, those solutions typically require certain assumptions on the distribution of the secret and/or the password, and are helpful against only a subset of the possible attackers. After briefly reviewing a variety of approaches, we propose a user-centric comprehensive model to capture the possible threats posed by online and offline attackers, from the outside and the inside, against the security of both the plaintext and the password. We then propose a few very simple protocols, adapted in particular from the Ford-Kaliski server-assisted password generator and the Boldyreva unique blind signature, that provide the best protection against all kinds of threats, for all distributions of secrets. We also quantify the concrete security of our approach in terms of online and offline password guesses made by outsiders and insiders, in the random-oracle model. The main contribution of this paper lies not in the technical novelty of the proposed solution, but in the identification of the problem and its model. Our results have an immediate and practical application for the real world: they show how to implement single-sign-on stateless roaming authentication for the Internet, in an ad hoc, user-driven fashion that requires no change to protocols or infrastructure.
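The Ford-Kaliski idea of a server-assisted password generator can be illustrated with a toy blinded-exponentiation exchange: the client blinds a hash of the password, the server exponentiates with its secret key, and the client unblinds to obtain a password-derived secret that the server never learns. The sketch below uses a deliberately tiny safe-prime group and illustrative function names; it is not the paper's protocol:

```python
import hashlib
import secrets

# Toy safe-prime group p = 2q + 1 (tiny on purpose); real deployments use
# large standardized groups or elliptic curves.
P, Q = 1019, 509

def hash_to_group(pw: bytes) -> int:
    """Hash the password into the order-q subgroup of quadratic residues."""
    h = int.from_bytes(hashlib.sha256(pw).digest(), "big") % P
    return pow(h, 2, P) or 4           # squaring lands in the subgroup

def client_blind(pw: bytes):
    """Client blinds h = H(pw) as h^r so the server learns nothing about pw."""
    r = secrets.randbelow(Q - 1) + 1
    return r, pow(hash_to_group(pw), r, P)

def server_evaluate(blinded: int, k: int) -> int:
    """Server applies its long-term secret exponent k."""
    return pow(blinded, k, P)

def client_unblind(resp: int, r: int) -> bytes:
    """Client removes the blinding: ((h^r)^k)^(r^-1 mod q) = h^k."""
    hk = pow(resp, pow(r, -1, Q), P)
    return hashlib.sha256(str(hk).encode()).digest()
```

Because the unblinded value h^k depends only on the password and the server key, the same key material is recovered on every run, from any device, without the server storing per-user state — the "stateless roaming" property.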
Abstract:
Physical design objects such as sketches, drawings, collages, storyboards and models play an important role in supporting communication and coordination in design studios. CAM (Cooperative Artefact Memory) is a mobile-tagging based messaging system that allows designers to collaboratively store relevant information onto their design objects in the form of messages, annotations and external web links. We studied the use of CAM in a Product Design studio over three weeks, involving three different design teams. In this paper, we briefly describe CAM and show how it serves as 'object memory'.
Abstract:
Here, we investigate the genetic basis of human memory in healthy individuals and the potential role of two polymorphisms previously implicated in memory function. We have explored aspects of retrospective and prospective memory, including semantic, short-term, working and long-term memory, in conjunction with brain-derived neurotrophic factor (BDNF) and tumor necrosis factor-alpha (TNF-alpha). Memory scores for healthy individuals in the population were obtained for each memory type, and the population was genotyped via restriction fragment length polymorphism for the BDNF rs6265 (Val66Met) SNP and via pyrosequencing for the TNF-alpha rs113325588 SNP. Using univariate ANOVA, we observed a significant association of the BDNF polymorphism with visual and spatial memory retention and a significant association of the TNF-alpha polymorphism with spatial memory retention. In addition, a significant interactive effect between the BDNF and TNF-alpha polymorphisms was observed in spatial memory retention. In practice, visual memory involves spatial information and the two memory systems work together; however, our data demonstrate that individuals with the Val/Val BDNF genotype have poorer visual memory but higher spatial memory retention, indicating a level of interaction between TNF-alpha and BDNF in spatial memory retention. This is the first study to use genetic analysis to determine the interaction between BDNF and TNF-alpha in relation to memory in normal adults, and it provides important information regarding the effect of genetic determinants and gene interactions on human memory.
Abstract:
This paper addresses the problem of joint identification of infinite-frequency added mass and fluid-memory models of marine structures from finite-frequency data. This problem is relevant for cases where the code used to compute the hydrodynamic coefficients of the marine structure does not give the infinite-frequency added mass. This case is typical of codes based on 2D potential theory, since most 3D potential-theory codes solve the boundary-value problem associated with the infinite frequency. The method proposed in this paper presents a simpler alternative to other methods previously presented in the literature. The advantage of the proposed method is that the same identification procedure can be used to identify the fluid-memory models with or without access to the infinite-frequency added mass coefficient; it therefore provides an extension that puts the two identification problems into the same framework. The method also exploits the constraints related to relative degree and low-frequency asymptotic values of the hydrodynamic coefficients derived from the physics of the problem, which are used as prior information to refine the obtained models.
Abstract:
The dynamics describing the motion response of a marine structure in waves can be represented within a linear framework by the Cummins Equation. This equation contains a convolution term that represents the component of the radiation forces associated with fluid memory effects. Several methods have been proposed in the literature for the identification of parametric models to approximate and replace this convolution term. This replacement can facilitate the model implementation in simulators and the analysis of motion control designs. Some of the reported identification methods consider the problem in the time domain while other methods consider the problem in the frequency domain. This paper compares the application of these identification methods. The comparison is based not only on the quality of the estimated models, but also on the ease of implementation, ease of use, and the flexibility of the identification method to incorporate prior information related to the model being identified. To illustrate the main points arising from the comparison, a particular example based on the coupled vertical motion of a modern containership vessel is presented.
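Before being replaced by an identified parametric model, the fluid-memory convolution term in the Cummins equation is often evaluated by direct quadrature. A trapezoidal-rule sketch with illustrative names (kernel samples K(j·dt) and velocity samples on the same uniform grid):

```python
def radiation_force(kernel, xdot, dt):
    """Trapezoidal discretization of the Cummins-equation memory term
        mu(t_i) = integral_0^{t_i} K(t_i - tau) * xdot(tau) dtau,
    where kernel[j] = K(j*dt) and xdot[j] is the velocity at t_j."""
    n = len(xdot)
    mu = [0.0] * n                 # mu(0) = 0: empty integration interval
    for i in range(1, n):
        acc = 0.5 * (kernel[i] * xdot[0] + kernel[0] * xdot[i])  # end points
        for j in range(1, i):
            acc += kernel[i - j] * xdot[j]                       # interior
        mu[i] = acc * dt
    return mu
```

The O(n²) cost of this direct evaluation is one motivation for the parametric (state-space or transfer-function) replacements that the compared identification methods produce.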