868 results for Novel of memory
Abstract:
Very large spatially-referenced datasets, for example those derived from satellite-based sensors which sample across the globe or from large monitoring networks of individual sensors, are becoming increasingly common and more widely available for use in environmental decision making. In large or dense sensor networks, huge quantities of data can be collected over small time periods. In many applications the generation of maps, or predictions at specific locations, from the data in (near) real-time is crucial. Geostatistical operations such as interpolation are vital in this map-generation process and, in emergency situations, the resulting predictions need to be available almost instantly, so that decision makers can make informed decisions and define risk and evacuation zones. It is also helpful when analysing data in less time-critical applications, for example when interacting directly with the data for exploratory analysis, that the algorithms are responsive within a reasonable time frame. Performing geostatistical analysis on such large spatial datasets can present a number of problems, particularly in the case where maximum likelihood methods are used. Although the storage requirements scale only linearly with the number of observations in the dataset, the computational complexity in terms of memory and speed scales quadratically and cubically, respectively. Most modern commodity hardware has at least 2 processor cores, if not more. Other mechanisms for allowing parallel computation, such as Grid based systems, are also becoming increasingly commonly available. However, currently there seems to be little interest in exploiting this extra processing power within the context of geostatistics. In this paper we review the existing parallel approaches for geostatistics. By recognising that different natural parallelisms exist and can be exploited depending on whether the dataset is sparsely or densely sampled with respect to the range of variation, we introduce two contrasting novel implementations of parallel algorithms based on approximating the data likelihood, extending the methods of Vecchia [1988] and Tresp [2000]. Using parallel maximum likelihood variogram estimation and parallel prediction algorithms we show that computational time can be significantly reduced. We demonstrate this with both sparsely sampled data and densely sampled data on a variety of architectures ranging from the common dual-core processor, found in many modern desktop computers, to large multi-node supercomputers. To highlight the strengths and weaknesses of the different methods we employ synthetic data sets and go on to show how the methods allow maximum likelihood based inference on the exhaustive Walker Lake data set.
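The cubic cost referred to above comes from factorising the n-by-n covariance matrix in the Gaussian log-likelihood; a Vecchia-style approximation replaces the joint density with a product of small conditional densities, and because those terms are independent of one another the likelihood evaluation becomes parallelisable. The following Python fragment is only a minimal sketch of that idea, not the authors' implementation: the exponential covariance model, the choice of the previous m points as conditioning sets, and the use of a process pool are assumptions made purely for illustration.

    import numpy as np
    from multiprocessing import Pool

    def exp_cov(xa, xb, sill=1.0, rng=0.3, nugget=1e-6):
        """Exponential covariance between two sets of 2-D locations."""
        d = np.linalg.norm(xa[:, None, :] - xb[None, :, :], axis=-1)
        c = sill * np.exp(-d / rng)
        if xa is xb:
            c = c + nugget * np.eye(len(xa))
        return c

    def conditional_term(args):
        """log p(y_i | y_neighbours) for one observation under a Gaussian model."""
        xi, yi, Xn, yn = args
        if len(yn) == 0:
            v = exp_cov(xi[None, :], xi[None, :])[0, 0]
            return -0.5 * (np.log(2 * np.pi * v) + yi ** 2 / v)
        Knn = exp_cov(Xn, Xn)
        kin = exp_cov(xi[None, :], Xn)[0]
        w = np.linalg.solve(Knn, kin)
        mu = w @ yn
        v = exp_cov(xi[None, :], xi[None, :])[0, 0] - w @ kin
        return -0.5 * (np.log(2 * np.pi * v) + (yi - mu) ** 2 / v)

    def vecchia_loglik(X, y, m=10, processes=4):
        """Approximate log-likelihood as a sum of small conditional terms,
        evaluated in parallel instead of one O(n^3) factorisation."""
        tasks = []
        for i in range(len(y)):
            nb = list(range(max(0, i - m), i))  # previous m points as the conditioning set
            tasks.append((X[i], y[i], X[nb], y[nb]))
        with Pool(processes) as pool:
            return sum(pool.map(conditional_term, tasks))

    if __name__ == "__main__":
        gen = np.random.default_rng(0)
        X = gen.random((500, 2))
        y = gen.standard_normal(500)
        print(vecchia_loglik(X, y))

A maximum likelihood variogram fit would wrap such an evaluation in a numerical optimiser over the covariance parameters (sill, range, nugget).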
Abstract:
The 1980s have seen spectacular advances in our understanding of the molecular bases of neurobiology. Biological membranes, channel proteins, cytoskeletal elements, and neuroactive peptides have all been illuminated by the molecular approach. The operation of synapses can be seen to be far more subtle and complex than has previously been imagined, and the development of the brain and physical basis of memory have both been illuminated by this new understanding. In addition, some of the ways in which the brain may go wrong can be traced to malfunction at the molecular level. This study attempts a synthesis of this new knowledge, to provide an indication of how an understanding at the molecular level can help towards a theory of the brain in health and disease. The text will be of benefit to undergraduate students of biochemistry, medical science, pharmacy, pharmacology and general biology.
Abstract:
This thesis describes the design and implementation of a new dynamic simulator called DASP. It is a computer program package written in standard Fortran 77 for the dynamic analysis and simulation of chemical plants. Its main uses include the investigation of a plant's response to disturbances, the determination of the optimal ranges and sensitivities of controller settings and the simulation of the startup and shutdown of chemical plants. The design and structure of the program and a number of features incorporated into it combine to make DASP an effective tool for dynamic simulation. It is an equation-oriented dynamic simulator, but the model equations describing the user's problem are generated from an in-built model equation library. A combination of the structuring of the model subroutines, the concept of a unit module, and the use of the connection matrix of the problem given by the user has been exploited to achieve this objective. The Executive program has a structure similar to that of a CSSL-type simulator. DASP solves a system of differential equations coupled to nonlinear algebraic equations using an advanced mixed equation solver. The strategy used in formulating the model equations makes it possible to obtain the steady state solution of the problem using the same model equations. DASP can handle state and time events in an efficient way, including the modification of the flowsheet. DASP is highly portable, as has been demonstrated by running it on a number of computers with only trivial modifications. The program runs on a microcomputer with 640 kByte of memory. It is a semi-interactive program, with the bulk of the input data given in pre-prepared data files and communication with the user via an interactive terminal. Using the features built into the package, the user can view or modify the values of any input data, variables and parameters in the model, and modify the structure of the flowsheet of the problem during a simulation session. The program has been demonstrated and verified using a number of example problems.
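DASP itself is a Fortran 77 package and its solver is not described in the abstract, so the fragment below is only a toy illustration, in Python, of the kind of mixed system an equation-oriented simulator integrates: one differential equation (a tank level) coupled to one nonlinear algebraic equation (the outflow), advanced with an implicit Euler step and a nonlinear solve at each step. The tank model, parameter values and use of scipy are assumptions for illustration, not DASP's equations or algorithm.

    import numpy as np
    from scipy.optimize import fsolve

    # Toy semi-explicit DAE: tank level h (differential) coupled to outflow q (algebraic)
    #   dh/dt = (F_in - q) / A        differential equation
    #   0     = q - k * sqrt(h)       algebraic equation
    A, k, F_in = 2.0, 1.5, 1.0

    def residual(z, h_old, dt):
        """Implicit-Euler residuals of the coupled system for one time step."""
        h, q = z
        return [h - h_old - dt * (F_in - q) / A,   # discretised ODE
                q - k * np.sqrt(max(h, 0.0))]      # algebraic constraint

    def simulate(h0=0.1, dt=0.05, t_end=10.0):
        t, h, q = 0.0, h0, k * np.sqrt(h0)
        history = [(t, h, q)]
        while t < t_end:
            h, q = fsolve(residual, x0=[h, q], args=(h, dt))  # solve both unknowns together
            t += dt
            history.append((t, h, q))
        return history

    if __name__ == "__main__":
        for t, h, q in simulate()[::40]:
            print(f"t={t:5.2f}  h={h:.4f}  q={q:.4f}")

Replacing the discretised ODE residual by its steady-state form (F_in - q = 0) and solving the same system once mirrors how the abstract describes reusing the model equations to obtain the steady-state solution.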
Abstract:
The English writing system is notoriously irregular in its orthography at the phonemic level. It was therefore proposed that focusing beginner-spellers’ attention on sound-letter relations at the sub-syllabic level might improve spelling performance. This hypothesis was tested in Experiments 1 and 2 using a ‘clue word’ paradigm to investigate the effect of analogy teaching intervention / non-intervention on the spelling performance of an experimental group and controls. The results overall showed the intervention to be effective in improving spelling, and this effect to be enduring. Experiment 3 demonstrated a greater application of analogy in spelling, when clue words, which participants used in analogy to spell test words, remained in view during testing. A series of regression analyses, with spelling entered as the criterion variable and age, analogy and phonological plausibility (PP) as predictors, showed both analogy and PP to be highly predictive of spelling. Experiment 4 showed that children could use analogy to improve their spelling, even without intervention, by comparing their performance in spelling words presented in analogous categories or in random lists. Consideration of children’s patterns of analogy use at different points of development showed three age groups to use similar patterns of analogy, but contrasting analogy patterns for spelling different words. This challenges stage theories of analogy use in literacy. Overall the most salient units used in analogy were the rime and, to a slightly lesser degree, the onset-vowel and vowel. Finally, Experiment 5 showed analogy and phonology to be fairly equally influential in spelling, but analogy to be more influential than phonology in reading. Five separate experiments therefore found analogy to be highly influential in spelling. Experiment 5 also considered the role of memory and attention in literacy attainment. The important implications of this research are that analogy, rather than purely phonics-based strategy, is instrumental in correct spelling in English.
Abstract:
A paradox of memory research is that repeated checking results in a decrease in memory certainty, memory vividness and confidence [van den Hout, M. A., & Kindt, M. (2003a). Phenomenological validity of an OCD-memory model and the remember/know distinction. Behaviour Research and Therapy, 41, 369–378; van den Hout, M. A., & Kindt, M. (2003b). Repeated checking causes memory distrust. Behaviour Research and Therapy, 41, 301–316]. Although these findings have been mainly attributed to changes in episodic long-term memory, it has been suggested [Shimamura, A. P. (2000). Toward a cognitive neuroscience of metacognition. Consciousness and Cognition, 9, 313–323] that representations in working memory could already suffer from detrimental checking. In two experiments we set out to test this hypothesis by employing a delayed-match-to-sample working memory task. Letters had to be remembered in their correct locations, a task that was designed to engage the episodic short-term buffer of working memory [Baddeley, A. D. (2000). The episodic buffer: a new component in working memory? Trends in Cognitive Sciences, 4, 417–423]. Of most importance, we introduced an intermediate distractor question that was prone to induce frustrating and unnecessary checking on trials where no correct answer was possible. Reaction times and confidence ratings on the actual memory test of these trials confirmed the success of this manipulation. Most importantly, high checkers [cf. VOCI; Thordarson, D. S., Radomsky, A. S., Rachman, S., Shafran, R, Sawchuk, C. N., & Hakstian, A. R. (2004). The Vancouver obsessional compulsive inventory (VOCI). Behaviour Research and Therapy, 42(11), 1289–1314] were less accurate than low checkers when frustrating checking was induced, especially if the experimental context actually emphasized the irrelevance of the misleading question. The clinical relevance of this result was substantiated by means of an extreme groups comparison across the two studies. The findings are discussed in the context of detrimental checking and lack of distractor inhibition as a way of weakening fragile bindings within the episodic short-term buffer of Baddeley's (2000) model. Clinical implications, limitations and future research are considered.
Abstract:
Compulsive checking is known to influence memory, yet there is little consideration of checking as a cognitive style within the typical population. We employed a working memory task where letters had to be remembered in their locations. The key experimental manipulation was to induce repeated checking after encoding by asking about a stimulus that had not been presented. We recorded the effect that such misleading probes had on a subsequent memory test. Participants drawn from the typical population but who scored highly on a checking-scale had poorer memory and less confidence than low scoring individuals. While thoroughness is regarded as a quality, our results indicate that a cognitive style that favours repeated checking does not always lead to the best performance as it can undermine the authenticity of memory traces. This may affect various aspects of everyday life including the work environment and we discuss its implications and possible counter-measures. Copyright © 2010 John Wiley & Sons, Ltd.
Abstract:
Efficiency of mutual funds (MFs) is one of the issues that has attracted many investors in countries with advanced financial markets for many years. Because MF efficiency needs to be studied frequently over short-term periods, investors need a method that offers not only high accuracy but also high speed. Data envelopment analysis (DEA) is proven to be one of the most widely used methods in the measurement of the efficiency and productivity of decision making units (DMUs). For a large dataset with many inputs/outputs, DEA would require huge computer resources in terms of memory and CPU time. This paper uses neural network back-propagation DEA in the measurement of mutual fund efficiency and shows that the proposed method's requirements for computer memory and CPU time are far less than those of conventional DEA methods; it can therefore be a useful tool in measuring the efficiency of a large set of MFs. Copyright © 2014 Inderscience Enterprises Ltd.
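The conventional DEA computation that the abstract contrasts against solves one linear programme per decision-making unit (DMU), which is where the memory and CPU cost for large fund datasets comes from. As a point of reference only, the sketch below sets up the standard input-oriented CCR multiplier model with scipy's linear-programming routine; the toy fund data and the function name are invented for the example and are not taken from the paper.

    import numpy as np
    from scipy.optimize import linprog

    def ccr_efficiency(X, Y, o):
        """Input-oriented CCR multiplier model for DMU o.
        X: (n_dmu, n_inputs), Y: (n_dmu, n_outputs); returns an efficiency in (0, 1]."""
        n, m = X.shape
        s = Y.shape[1]
        # decision variables: output weights u (length s), then input weights v (length m)
        c = np.concatenate([-Y[o], np.zeros(m)])             # maximise u . y_o
        A_ub = np.hstack([Y, -X])                            # u . y_j - v . x_j <= 0 for all j
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v . x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m), method="highs")
        return -res.fun

    if __name__ == "__main__":
        # toy fund data: inputs = [expense ratio, risk], output = [return]
        X = np.array([[1.2, 0.8], [0.9, 1.1], [1.5, 0.6], [1.0, 1.0]])
        Y = np.array([[5.0], [4.2], [6.1], [5.5]])
        for o in range(len(X)):
            print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")

For thousands of funds, this loop of one LP per DMU is exactly the cost that a trained back-propagation network is intended to avoid.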
Abstract:
Data envelopment analysis (DEA) is one of the most widely used methods for measuring the efficiency and productivity of decision-making units (DMUs). The need for huge computer resources in terms of memory and CPU time in DEA is inevitable for a large-scale data set, especially one with negative measures. In recent years, a wide range of studies has been conducted in the area of combined artificial neural network and DEA methods. In this study, a supervised feed-forward neural network is proposed to evaluate the efficiency and productivity of large-scale data sets with negative values, in contrast to the corresponding DEA method. Results indicate that the proposed network has some computational advantages over the corresponding DEA models; therefore, it can be considered a useful tool for measuring the efficiency of DMUs with (large-scale) negative data.
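The surrogate idea described above, common to this and the previous abstract, is to train a supervised feed-forward network on efficiency scores so that new or very large sets of DMUs can be scored with a single forward pass instead of one optimisation per unit. The fragment below is a generic sketch of that workflow using scikit-learn; the synthetic features, the stand-in scoring rule and the network size are assumptions, and the paper's specific treatment of negative data is not reproduced.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor

    gen = np.random.default_rng(1)

    # Synthetic stand-in for DEA results: each row holds a DMU's inputs and outputs,
    # the target is an efficiency score that would normally come from solving one
    # linear programme per DMU (e.g. a CCR model like the one sketched earlier).
    features = gen.random((2000, 6))
    scores = np.clip(features[:, 3:].mean(axis=1) / (features[:, :3].mean(axis=1) + 0.5), 0.0, 1.0)

    X_tr, X_te, y_tr, y_te = train_test_split(features, scores, test_size=0.25, random_state=0)

    # Small feed-forward network trained with back-propagation as a surrogate for the
    # per-DMU optimisation; scoring new DMUs then costs a single forward pass.
    net = MLPRegressor(hidden_layer_sizes=(32, 16), activation="relu",
                       max_iter=2000, random_state=0)
    net.fit(X_tr, y_tr)
    print("held-out R^2:", round(net.score(X_te, y_te), 3))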
Abstract:
Recognition of object contours in an image as sequences of digital straight segments and/or digital curve arcs is considered in this article. Definitions of digital straight segments and of digital curve arcs are proposed. Methods and programs to recognize the object contours are presented. The algorithm to recognize the digital straight segments is formulated in terms of growing pyramidal networks, taking into account the conceptual model of memory and identification (Rabinovich [4]).
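The paper's own recognition algorithm uses growing pyramidal networks and is not reproduced here. As a small, hedged illustration of the kind of object a digital-straightness test operates on, the fragment below encodes an 8-connected pixel path as a Freeman chain code and checks two classical necessary conditions for digital straightness (at most two direction codes, adjacent on the direction circle, with one of them occurring only in runs of length one); the uniform-spacing condition needed for an exact test is deliberately omitted.

    # 8-connected Freeman directions: code i corresponds to the (dx, dy) offset below
    FREEMAN = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
               (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

    def chain_code(pixels):
        """Freeman chain code of an 8-connected pixel path [(x, y), ...]."""
        return [FREEMAN[(x2 - x1, y2 - y1)]
                for (x1, y1), (x2, y2) in zip(pixels, pixels[1:])]

    def passes_freeman_necessary_conditions(code):
        """Necessary (not sufficient) conditions for digital straightness:
        at most two direction codes, adjacent on the 8-direction circle, with one
        of them occurring only in runs of length one.  The uniform-spacing
        condition required for a complete test is deliberately omitted here."""
        dirs = sorted(set(code))
        if len(dirs) == 1:
            return True
        if len(dirs) != 2 or (dirs[1] - dirs[0]) % 8 not in (1, 7):
            return False
        for d in dirs:
            marked = "".join("x" if c == d else "." for c in code)
            runs = [len(r) for r in marked.split(".") if r]
            if all(n == 1 for n in runs):
                return True
        return False

    if __name__ == "__main__":
        # staircase path approximating a line of slope 1/2
        path = [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
        print(passes_freeman_necessary_conditions(chain_code(path)))  # True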
Abstract:
Hristina Kostadinova, Krasimir Yordzhev - The article discusses the representation of an arbitrary binary matrix by means of a sequence of non-negative integers. Some advantages and disadvantages of this representation as an alternative to the standard, generally accepted representation by a two-dimensional array are examined. It is shown that representing binary matrices with ordered n-tuples of natural numbers leads to faster algorithms and to a substantial saving of operating memory. The apparatus of object-oriented programming with the syntax and semantics of the C++ language is used.
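The study itself uses C++; to keep the examples in this listing in a single language, the sketch below shows the same representational idea in Python: each row of a binary matrix is stored as one non-negative integer whose bits are the row's entries, so row-level operations become single integer operations rather than loops over a two-dimensional array. The class and method names are invented for illustration.

    class BitRowMatrix:
        """Binary matrix stored as one non-negative integer per row: bit j of
        rows[i] holds element (i, j), as an alternative to a 2-D array of 0/1."""

        def __init__(self, rows, n_cols):
            self.n_cols = n_cols
            self.rows = [sum(bit << j for j, bit in enumerate(r)) for r in rows]

        def get(self, i, j):
            return (self.rows[i] >> j) & 1

        def row_and(self, i, k):
            """Element-wise AND of two rows in a single integer operation."""
            return self.rows[i] & self.rows[k]

        def row_weight(self, i):
            """Number of ones in row i."""
            return bin(self.rows[i]).count("1")

        def to_lists(self):
            return [[(r >> j) & 1 for j in range(self.n_cols)] for r in self.rows]

    if __name__ == "__main__":
        m = BitRowMatrix([[1, 0, 1, 1], [0, 1, 1, 0]], n_cols=4)
        print(m.get(0, 2), m.row_weight(0), bin(m.row_and(0, 1)))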
Abstract:
THE YOUTH MOVEMENT NASHI (OURS) WAS FOUNDED IN THE SPRING of 2005 against the backdrop of Ukraine’s ‘Orange Revolution’. Its aim was to stabilise Russia’s political system and take back the streets from opposition demonstrators. Personally loyal to Putin and taking its ideological orientation from Surkov’s concept of ‘sovereign democracy’, Nashi has sought to turn the tide on ‘defeatism’ and develop Russian youth into a patriotic new elite that ‘believes in the future of Russia’ (p. 15). Combining a wealth of empirical detail and the application of insights from discourse theory, Ivo Mijnssen analyses the organisation’s development between 2005 and 2012. His analysis focuses on three key moments—the organisation’s foundation, the apogee of its mobilisation around the Bronze Soldier dispute with Estonia, and the 2010 Seliger youth camp—to help understand Nashi’s organisation, purpose and ideational outlook as well as the limitations and challenges it faces. As such, the book is insightful both for those with an interest in post-Soviet Russian youth culture, and for scholars seeking a rounded understanding of the Kremlin’s initiatives to return a sense of identity and purpose to Russian national life. The first chapter, ‘Background and Context’, outlines the conceptual toolkit provided by Ernesto Laclau and Chantal Mouffe to help make sense of developments on the terrain of identity politics. In their terms, since the collapse of the Soviet Union, Russia has experienced acute dislocation of its identity. With the tangible loss of great power status, Russian realities have become unfixed from a discourse enabling national life to be constructed, albeit inherently contingently, as meaningful. The lack of a Gramscian hegemonic discourse to provide a unifying national idea was securitised as an existential threat demanding special measures. Accordingly, the identification of those who are ‘not Us’ has been a recurrent theme of Nashi’s discourse and activity. With the victory in World War II held up as a foundational moment, a constitutive other is found in the notion of ‘unusual fascists’. This notion includes not just neo-Nazis, but reflects a chain of equivalence that expands to include a range of perceived enemies of Putin’s consolidation project such as oligarchs and pro-Western liberals. The empirical background is provided by the second chapter, ‘Russia’s Youth, the Orange Revolution, and Nashi’, which traces the emergence of Nashi amid the climate of political instability of 2004 and 2005. A particularly noteworthy aspect of Mijnssen’s work is the inclusion of citations from his interviews with Nashi commissars, the youth movement’s cadres. Although relatively few in number, such insider conversations provide insight into the ethos of Nashi’s organisation and the outlook of those who have pledged their involvement. Besides the discussion of Nashi’s manifesto, the reader thus gains insight into the motivations of some participants and behind-the-scenes details of Nashi’s activities in response to the perceived threat of anti-government protests. The third chapter, ‘Nashi’s Bronze Soldier’, charts Nashi’s role in elevating the removal of a World War II monument from downtown Tallinn into an international dispute over the interpretation of history.
The events subsequent to this securitisation of memory are charted in detail, concluding that Nashi’s activities were ultimately unsuccessful as their demands received little official support. The fourth chapter, ‘Seliger: The Foundry of Modernisation’, presents a distinctive feature of Mijnssen’s study, namely his ethnographic account as a participant observer in the Youth International Forum at Seliger. In the early years of the camp (2005–2007), Russian participants received extensive training, including master classes in ‘methods of forestalling mass unrest’ (p. 131), and the camp served to foster a sense of group identity and purpose among activists. After 2009 the event was no longer officially run as a Nashi camp, and its role became that of a forum for the exchange of ideas about innovation, although camp spirit remained a central feature. In 2010 the camp welcomed international attendees for the first time. As one of about 700 international participants in that year, the author provides a fascinating account based on fieldwork diaries. Despite the polemical nature of the topic, Mijnssen’s analysis remains even-handed, exemplified in his balanced assessment of the Seliger experience. While he details the frustrations and disappointments of the international participants with regard to the unaccustomed strict camp discipline, organisational and communication failures, and the controlled format of many discussions, he does not neglect to note the camp’s successes in generating a gratifying collective dynamic between the participants, even among the international attendees who spent only a week there. In addition to the useful bibliography, the book is back-ended by two appendices, which provide the reader with important Russian-language primary source materials. The first is Nashi’s ‘Unusual Fascism’ (Neobyknovennyi fashizm) brochure, and the second is the booklet entitled ‘Some Uncomfortable Questions to the Russian Authorities’ (Neskol’ko neudobnykh voprosov rossiiskoi vlasti) which was provided to the Seliger 2010 instructors to guide them in responding to probing questions from foreign participants. Given that these are not readily publicly available even now, they constitute a useful resource from the historical perspective.
Abstract:
Nonbelieved memories (NBMs) highlight the independence between metamemorial judgments that contribute to the experience of remembering. Initial definitions of NBMs portrayed them as involving the withdrawal of autobiographical belief despite sustained recollection. While people rate belief for their NBMs as weaker than recollection, the average difference is too small to support the idea that belief is completely withdrawn in all cases. Furthermore, ratings vary considerably across NBMs. In two studies, we reanalyzed reports from prior studies to examine whether NBM reports reflect a single category or multiple sub-categories using cluster analytic methods. In Study 1, we identified three sub-types of NBMs. In Study 2 we incorporated the concept of belief in accuracy, and found that two of the clusters from Study 1 split into two clusters apiece. Higher ratings of recollection than belief in occurrence characterized all clusters, which were differentiated by the degree of difference between these variables. In both studies the clusters were differentiated by a number of memory characteristic ratings and by reasons reported as leading to the alteration of belief. Implications for understanding the remembering of past events and predicting the creation of NBMs are discussed.
Abstract:
This dissertation explored memory conformity effects on people who interacted with a confederate and on bystanders to that interaction. Two studies were carried out. Study 1 was conducted in the field. A male confederate approached a group of people at the beach and had a brief interaction. About a minute later a research assistant approached the group and administered a target-absent lineup to each person in the group. Analyses revealed that memory conformity occurred during the lineup task. Bystanders were twice as likely to conform as those who interacted with the confederate. Study 2 was carried out in a laboratory under controlled conditions. Participants were exposed to two events during their time in the laboratory. In one event, participants were shown a brief video with no determinate roles assigned. In the other event participants were randomly assigned to interact with a confederate (actor condition) or to witness that interaction (bystander condition). Participants were given memory tests on both events to understand the effects of participant role (actor vs. bystander) on memory conformity. Participants answered second to all questions, following a confederate acting as a participant, who disseminated misinformation on critical questions. Analyses revealed no significant differences in memory conformity between actors and bystanders during the movie memory task. However, differences were found for the interaction memory task such that bystanders conformed more than actors on two of four critical questions. Bystanders also conformed more than actors during a lineup identification task. The results of these studies suggest that the role a person plays in an interaction affects how susceptible they are to information from a co-witness. Theoretical and applied implications are discussed. First, the results are explained through the use of two models of memory. Second, recommendations are made for forensic investigators.
Abstract:
Developing analytical models that can accurately describe behaviors of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet-scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research and not network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems.

First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments. However, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers and the user-facing concerns of configuring and interacting with large-scale network models.

Second, this work deals with reducing memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments.

Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real-time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation, but as real systems are included in the network model, it also increases the confidence level of network simulation. This work presents a scalable and flexible framework to integrate real-world applications with real-time simulation.
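The abstract does not spell out the mechanism behind exploiting structural duplications, so the fragment below is only a generic, flyweight-style sketch of the idea: structurally identical node configurations (for instance, a router template repeated tens of thousands of times) are interned once and shared by reference, instead of being copied into every simulated node. All class and field names are invented for illustration and do not come from the dissertation.

    class NodeConfig:
        """Immutable, shareable part of a node's state (link parameters,
        protocol stack description, routing table template, ...)."""
        __slots__ = ("bandwidth", "delay", "stack")

        def __init__(self, bandwidth, delay, stack):
            self.bandwidth, self.delay, self.stack = bandwidth, delay, stack

    _config_pool = {}

    def intern_config(bandwidth, delay, stack):
        """Return a shared NodeConfig; structurally identical requests reuse one object."""
        key = (bandwidth, delay, stack)
        if key not in _config_pool:
            _config_pool[key] = NodeConfig(bandwidth, delay, stack)
        return _config_pool[key]

    class Node:
        """Per-node state is only an id plus a reference to the shared configuration."""
        __slots__ = ("node_id", "config")

        def __init__(self, node_id, config):
            self.node_id, self.config = node_id, config

    if __name__ == "__main__":
        # 100,000 access routers sharing just three distinct configurations
        templates = [("1Gbps", 0.5, ("ip", "tcp")), ("10Gbps", 0.1, ("ip", "tcp")),
                     ("100Mbps", 2.0, ("ip", "udp"))]
        nodes = [Node(i, intern_config(*templates[i % 3])) for i in range(100_000)]
        print("nodes:", len(nodes), "distinct shared configs:", len(_config_pool))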
Abstract:
Hardware/software (HW/SW) cosimulation integrates software simulation and hardware simulation simultaneously. Usually, an HW/SW co-simulation platform is used to ease debugging and verification of very large-scale integration (VLSI) designs. To accelerate the computation of the gesture recognition technique, an HW/SW implementation using field programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are: (1) a novel design of the memory controller in the Verilog Hardware Description Language (Verilog HDL) to reduce memory consumption and the load on the processor; and (2) the testing part of the neural network algorithm is hardwired to improve speed and performance. American Sign Language gesture recognition is chosen to verify the performance of the approach, and several experiments were carried out on four databases of gestures (alphabet signs A to Z). (3) The major benefit of this design is that it takes only a few milliseconds to recognize a hand gesture, which makes it computationally more efficient.
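The hardwired "testing part" of the network is described only at the level of the abstract, so the fragment below is merely a software analogue, in Python, of what such a fixed datapath computes: a fully connected layer evaluated entirely in fixed-point integer arithmetic, as an FPGA multiply-accumulate stage would. The Q8.8 format, the 64-element feature vector and the 26 output classes (alphabet signs A to Z) are assumptions for illustration, not details of the Verilog design.

    import numpy as np

    FRAC_BITS = 8            # Q8.8 fixed point: 8 integer bits, 8 fractional bits
    SCALE = 1 << FRAC_BITS

    def to_fixed(x):
        """Quantise floating-point values to Q8.8 integers."""
        return np.round(np.asarray(x) * SCALE).astype(np.int32)

    def fixed_dense_relu(x_fx, w_fx, b_fx):
        """One fully connected layer with ReLU, entirely in integer arithmetic,
        mimicking a hardwired multiply-accumulate datapath."""
        acc = x_fx @ w_fx.T                  # products are in Q16.16
        acc = (acc >> FRAC_BITS) + b_fx      # rescale back to Q8.8 and add the bias
        return np.maximum(acc, 0)

    if __name__ == "__main__":
        gen = np.random.default_rng(0)
        features = gen.random(64)                        # a preprocessed gesture feature vector
        weights = gen.standard_normal((26, 64)) * 0.1    # 26 classes, alphabet signs A to Z
        biases = gen.standard_normal(26) * 0.1
        scores = fixed_dense_relu(to_fixed(features), to_fixed(weights), to_fixed(biases))
        print("predicted letter:", chr(ord("A") + int(np.argmax(scores))))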