944 results for ONLY-MEMORY DISC
Abstract:
A compact direct digital frequency synthesizer (DDFS) for system-on-chip implementation of a high-precision rubidium atomic frequency standard is developed. For small chip size and low power consumption, the phase-to-sine mapping data is compressed using a sine symmetry technique, a sine-phase difference technique, a quad-line approximation technique, and a quantization and error read-only memory (QE-ROM) technique. The ROM size is reduced by 98% using these techniques. A compact DDFS chip with 32-bit phase storage depth and a 10-bit on-chip digital-to-analog converter has been successfully implemented in a standard 0.35 μm CMOS process. The core area of the DDFS is 1.6 mm^2. It consumes 167 mW at 3.3 V, and its spurious-free dynamic range is 61 dB.
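The abstract names the compression techniques but gives no implementation detail. As a rough illustration of the first of them, the Python sketch below shows how sine symmetry lets the phase-to-amplitude mapper store only one quadrant of the sine wave, with the other three quadrants recovered by mirroring the lookup address and negating the output. The ROM address width, tuning word, and mid-rise indexing are assumptions for illustration; the remaining techniques (sine-phase difference, quad-line approximation, QE-ROM) are not modelled here.

```python
import numpy as np

PHASE_BITS = 32          # phase accumulator width (as in the abstract)
ADDR_BITS = 12           # illustrative ROM address width (assumption)
AMP_BITS = 10            # DAC resolution (as in the abstract)

# Quarter-wave table: store only the first quadrant of the sine.
QUARTER = 1 << (ADDR_BITS - 2)
amp_max = (1 << (AMP_BITS - 1)) - 1
quarter_lut = np.round(
    amp_max * np.sin(np.pi / 2 * (np.arange(QUARTER) + 0.5) / QUARTER)
).astype(int)

def sine_from_phase(phase):
    """Map a PHASE_BITS phase word to a signed AMP_BITS sample
    using only the quarter-wave LUT (sine symmetry)."""
    addr = phase >> (PHASE_BITS - ADDR_BITS)      # truncate the phase word
    quadrant = addr >> (ADDR_BITS - 2)            # top two address bits
    index = addr & (QUARTER - 1)
    if quadrant in (1, 3):                        # 2nd/4th quadrant: mirror the address
        index = QUARTER - 1 - index
    sample = quarter_lut[index]
    return -sample if quadrant >= 2 else sample   # 3rd/4th quadrant: negate the output

# Phase accumulator loop: the frequency word sets the output frequency.
freq_word = 0x0147AE14   # illustrative tuning word (assumption)
phase = 0
samples = []
for _ in range(1024):
    samples.append(sine_from_phase(phase))
    phase = (phase + freq_word) & ((1 << PHASE_BITS) - 1)
```

Storing one quadrant alone cuts the lookup table to a quarter of its full size; the 98% reduction reported in the abstract comes from combining all four techniques.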
Abstract:
Graduate program in Education - FFC
Abstract:
Graduate program in Agronomy (Energy in Agriculture) - FCA
Abstract:
Graduate program in Education - IBRC
Abstract:
STUDY DESIGN: The structural integrity of the nucleus pulposus (NP) of intervertebral discs was targeted by enzyme-specific degradations to correlate their effects to the magnetic resonance (MR) signal. OBJECTIVE: To develop quantitative MR imaging as an accurate and noninvasive diagnostic tool to better understand and treat disc degeneration. SUMMARY OF BACKGROUND DATA: Quantitative MR analysis has been previously shown to reflect not only the disc matrix composition, but also the structural integrity of the disc matrix. Further work is required to identify the contribution of the structural integrity versus the matrix composition to the MR signal. METHODS: The bovine coccygeal NPs were injected with either enzyme or buffer, incubated at 37 degrees C as static, unloaded and closed 3-disc segments, and analyzed by a 1.5-Tesla MR scanner to measure MR parameters. RESULTS: Collagenase degradation of the NP significantly decreased the relaxation times, slightly decreased the magnetization transfer ratio, and slightly increased the apparent diffusion coefficient. Targeting the proteoglycan and/or hyaluronan integrity by trypsin and hyaluronidase did not significantly affect the MR parameters, except for an increase in the apparent diffusion coefficient of the disc after trypsin treatment. CONCLUSIONS: Our results demonstrate that changes in the structural integrity of matrix proteins can be assessed by quantitative MR.
Abstract:
Numerous observations in clinical and preclinical studies indicate that the developing brain is particularly sensitive to the pernicious effects of lead (Pb). However, the effect of gestation-only Pb exposure on cognitive functions at maturation has not been studied. We investigated the potential effects of three levels of Pb exposure (low, middle, and high Pb: 0.03%, 0.09%, and 0.27% lead acetate-containing diets) during the gestational period on the spatial memory of young adult offspring using Morris water maze spatial learning and fixed location/visible platform tasks. Our results revealed that all three levels of Pb exposure significantly impaired memory retrieval in male offspring, but only female offspring at the low level of Pb exposure showed impairment of memory retrieval. These impairments were not due to gross disturbances in motor performance or vision, because these animals performed the fixed location/visible platform task as well as controls, indicating that specific aspects of spatial learning/memory were impaired. These results suggest that exposure to Pb during the gestational period is sufficient to cause long-term learning/memory deficits in young adult offspring.
Abstract:
Money’s ability to enhance memory has received increased attention in recent research. However, previous studies have not directly addressed the time-dependent nature of monetary effects on memory, which are suggested to exist by research in cognitive neuroscience, and the possible detrimental effects of monetary rewards on learning interesting material, as indicated by studies in motivational psychology. By utilizing a trivia question paradigm, the current study incorporated these perspectives and examined the effect of monetary rewards on immediate and delayed memory performance for answers to uninteresting and interesting questions. Results showed that monetary rewards promote memory performance only after a delay. In addition, the memory enhancement effect of monetary rewards was only observed for uninteresting questions. These results are consistent with both the hippocampus-dependent memory consolidation model of reward learning and previous findings documenting the ineffectiveness of monetary rewards on tasks that have intrinsic value.
Memory suppression can help people “unlearn” behavioral responses—but only for nonemotional memories
Abstract:
When encountering reminders of memories that we prefer not to think about, we often try to exclude those memories from awareness. Past studies have revealed that such suppression attempts can reduce the subsequent recollection of unwanted memories. In the present study, we examined whether the inhibitory effects extend even to associated behavioral responses. Participants learned cue–target pairs for emotional and nonemotional targets and were additionally trained in behavioral responses for each cue. Afterward, they were shown the cues and instructed either to think or to avoid thinking about the targets without performing any behaviors. In a final test phase, behavioral performance was tested for all of the cues. When the targets were neutral, participants’ attempts to avoid retrieval reduced accuracy and increased reaction times in generating behavioral responses associated with cues. By contrast, behavioral performance was not affected by suppression attempts when the targets were emotional. These results indicate that controlling unwanted recollection is powerful enough to inhibit associated behavioral responses—but only for nonemotional memories.
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for the problems of memory detection and of modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We will take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities.

The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of the Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. Equations of this type are used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We will pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models will be employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA will be provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on second-order moments, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
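The thesis abstract describes MF-DFA only at a high level (scaling of the q-th-order moments, generalising DFA beyond q = 2). As a sketch of the idea, the Python fragment below implements a simplified version of the procedure: build the profile, detrend it segment by segment with a low-order polynomial fit, form the q-th-order fluctuation function, and read the generalised Hurst exponents h(q) from log-log slopes. It segments the profile from one end only and follows the common textbook conventions rather than anything taken from the thesis; the function name and the white-noise check are illustrative.

```python
import numpy as np

def mfdfa(x, scales, q_list, order=1):
    """Minimal multifractal detrended fluctuation analysis (MF-DFA).

    Returns the generalised Hurst exponents h(q), estimated as the
    slopes of log F_q(s) versus log s; `order` is the degree of the
    detrending polynomial fitted in each segment.
    """
    x = np.asarray(x, dtype=float)
    profile = np.cumsum(x - x.mean())            # step 1: the profile
    n = len(profile)
    fq = np.empty((len(q_list), len(scales)))

    for j, s in enumerate(scales):
        n_seg = n // s
        var = []
        # steps 2-3: split the profile into segments and detrend each one
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, order)
            var.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        var = np.array(var)
        # step 4: q-th-order fluctuation function
        for i, q in enumerate(q_list):
            if q == 0:
                fq[i, j] = np.exp(0.5 * np.mean(np.log(var)))
            else:
                fq[i, j] = np.mean(var ** (q / 2.0)) ** (1.0 / q)

    # step 5: h(q) is the log-log slope of F_q(s) against s
    log_s = np.log(scales)
    return [np.polyfit(log_s, np.log(fq[i]), 1)[0] for i in range(len(q_list))]

# Usage on simulated white noise (h(2) should come out near 0.5):
rng = np.random.default_rng(0)
h = mfdfa(rng.standard_normal(10000),
          scales=[16, 32, 64, 128, 256, 512],
          q_list=[-2, 2],
          order=1)
```

For uncorrelated noise h(2) is close to 0.5, while h(2) > 0.5 would indicate long memory of the kind the thesis tests for; dependence of h(q) on q signals multifractality.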
Abstract:
Institutions of public memory are increasingly undertaking co-creative media initiatives in which community members create content with the support of institutional expertise and resources. This paper discusses one such initiative: the State Library of Queensland’s ‘Responses to the Apology’, which used a collaborative digital storytelling methodology to co-produce seven short videos capturing individual responses to Prime Minister Kevin Rudd’s 2008 ‘Apology to Australia’s Indigenous Peoples’. In examining this program, we are interested not only in the juxtaposition of ‘ordinary’ responses to an ‘official’ event, but also in how the production and display of these stories might demonstrate a larger mediatisation of public memory.
Abstract:
A century ago, as the Western world embarked on a period of traumatic change, the visual realism of photography and documentary film brought print and radio news to life. The vision that these new mediums threw into stark relief was one of intense social and political upheaval: the birth of modernity fired and tempered in the crucible of the Great War. As millions died in this fiery chamber and the influenza pandemic that followed, lines of empires staggered to their fall, and new geo-political boundaries were scored in the raw, red flesh of Europe. The decade of 1910 to 1919 also heralded a prolific period of artistic experimentation. It marked the beginning of the social and artistic age of modernity and, with it, the nascent beginnings of a new art form: film. We still live in the shadow of this violent, traumatic and fertile age; haunted by the ghosts of Flanders and Gallipoli and its ripples of innovation and creativity. Something happened here, but to understand how and why is not easy; for the documentary images we carry with us in our collective cultural memory have become what Baudrillard refers to as simulacra. Detached from their referents, they have become referents themselves, to underscore other, grand narratives in television and Hollywood films. The personal histories of the individuals they represent so graphically–and their hope, love and loss–are folded into a national story that serves, like war memorials and national holidays, to buttress social myths and values. And, as filmic images cross-pollinate, with each iteration offering a new catharsis, events that must have been terrifying or wondrous are abstracted. In this paper we first discuss this transformation through reference to theories of documentary and memory–this will form a conceptual framework for a subsequent discussion of the short film Anmer. Produced by the first author in 2010, Anmer is a visual essay on documentary, simulacra and the symbolic narratives of history. Its form, structure and aesthetic speak of the confluence of documentary, history, memory and dream. Located in the first decade of the twentieth century, its non-linear narratives of personal tragedy and poetic dreamscapes are an evocative reminder of the distance between intimate experience, grand narratives, and the mythologies of popular films. This transformation of documentary sources not only played out in the processes of the film’s production, but also came to form its theme.
Abstract:
The generation of a correlation matrix from a large set of long gene sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. The generation is not only computationally intensive but also requires significant memory resources as, typically, few gene sequences can be simultaneously stored in primary memory. The standard practice in such computation is to use frequent input/output (I/O) operations. Therefore, minimizing the number of these operations will yield much faster run-times. This paper develops an approach for the faster and scalable computing of large-size correlation matrices through the full use of available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on different computing platforms with different amounts of memory and can be applied to different problems with different correlation matrix sizes. The significant performance improvement of the approach over the existing approaches is demonstrated through benchmark examples.
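The abstract states the strategy (use the available memory fully, reduce I/O operations) but not the algorithm itself, so the following Python sketch shows one generic way such a computation can be tiled: rows are read from disk in blocks that fit in memory, each pair of blocks contributes one tile of the Pearson correlation matrix, and symmetry fills the lower triangle. The `load_block` callback, the block size, and the choice of Pearson correlation are assumptions for illustration, not the paper's method.

```python
import numpy as np

def blocked_correlation(load_block, n_items, block_size):
    """Compute an n_items x n_items correlation matrix when only
    about block_size rows fit in memory at once.

    load_block(start, stop) must return a (stop - start) x m array of
    numeric feature vectors read from disk; it stands in for whatever
    I/O routine the application uses (an assumption, not the paper's API).
    """
    corr = np.empty((n_items, n_items))
    for i0 in range(0, n_items, block_size):
        i1 = min(i0 + block_size, n_items)
        a = load_block(i0, i1)                        # one read per row block
        a = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
        for j0 in range(i0, n_items, block_size):     # upper-triangular block pairs only
            j1 = min(j0 + block_size, n_items)
            if j0 == i0:
                b = a
            else:
                b = load_block(j0, j1)
                b = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
            tile = a @ b.T / a.shape[1]               # Pearson correlations for this tile
            corr[i0:i1, j0:j1] = tile
            corr[j0:j1, i0:i1] = tile.T               # fill the mirror tile by symmetry
    return corr
```

Because only the upper-triangular block pairs are visited, each row block is read roughly (n_blocks + 1) / 2 times on average rather than once per pairwise comparison, which is where the I/O saving comes from in this kind of scheme.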
Abstract:
Business process models have traditionally been an effective way of examining business practices to identify areas for improvement. While common information gathering approaches are generally efficacious, they can be quite time consuming and have the risk of developing inaccuracies when information is forgotten or incorrectly interpreted by analysts. In this study, the potential of a role-playing approach for process elicitation and specification has been examined. This method allows stakeholders to enter a virtual world and role-play actions as they would in reality. As actions are completed, a model is automatically developed, removing the need for stakeholders to learn and understand a modelling grammar. Empirical data obtained in this study suggests that this approach may not only improve both the number of individual process task steps remembered and the correctness of task ordering, but also provide a reduction in the time required for stakeholders to model a process view.
Abstract:
BACKGROUND: Tilted disc syndrome (TDS) is associated with characteristic ocular findings. The purpose of this study was to evaluate the ocular, refractive, and biometric characteristics in patients with TDS. METHODS: This case-control study included 41 eyes of 25 patients who had established TDS and 40 eyes of 20 healthy control subjects. All participants underwent a complete ocular examination, including refraction and analysis using Fourier transformation, slit-lamp biomicroscopy, pachymetry, keratometry, and ocular biometry. Corneal topography examinations were performed in the syndrome group only. RESULTS: There were no significant differences in spherical equivalent (P = 0.13) or total astigmatism (P = 0.37) between the groups. However, mean best spectacle-corrected visual acuity (logMAR) was significantly worse in TDS patients (P = 0.003). Lenticular astigmatism was greater in the syndrome group, whereas the corneal component was greater in controls (P = 0.059 and P = 0.028, respectively). The measured biometric features were the same in both groups, except for lens thickness and the lens-axial length factor, which were greater in the TDS group (P = 0.007 and P = 0.055, respectively). CONCLUSIONS: Clinically significant lenticular astigmatism, more oblique corneal astigmatism, and thicker lenses were characteristic findings in patients with TDS.
Abstract:
Purpose: To evaluate the ocular, refractive, and biometric characteristics in patients with tilted disc syndrome (TDS). Methods: This case-control study comprised 41 eyes of 25 patients with established TDS and 40 eyes of 20 age- and sex-matched healthy control subjects. All had a complete ocular examination including refraction and analysis using Fourier transformation, slit-lamp biomicroscopy, pachymetry, keratometry, and ocular biometry. Corneal topography examinations were performed in the syndrome group only. Results: There were no significant differences in spherical equivalent (p = 0.334) or total astigmatism (p = 0.246) between the groups. However, mean best spectacle-corrected visual acuity was significantly worse in TDS patients (p < 0.001). Lenticular astigmatism was significantly greater in the syndrome group, while the corneal component was greater in the controls (p = 0.004 and p = 0.002, respectively). The measured biometric features were the same in both groups, except for lens thickness, relative lens position, and the lens-axial length factor, which were greater in the TDS group (p = 0.002, p = 0.015, and p = 0.025, respectively). Conclusions: Clinically significant lenticular astigmatism, more oblique corneal astigmatism, and thicker lenses were characteristic findings in patients with TDS.