936 results for static random access memory


Relevance: 30.00%

Abstract:

Following study, participants received 2 tests. The 1st was a recognition test; the 2nd was designed to tap recollection. The objective was to examine performance on Test 1 conditional on Test 2 performance. In Experiment 1, contrary to process-dissociation assumptions, exclusion errors better predicted subsequent recollection than did inclusion errors. In Experiments 2 and 3, with alternate questions posed on Test 2, words having high estimates of recollection with one question had high estimates of familiarity with the other question. Results supported the following: (a) the 2-test procedure has considerable potential for elucidating the relationship between recollection and familiarity; (b) there is substantial evidence for dependency between such processes when estimates are obtained using the process dissociation and remember-know procedures; and (c) order of information access appears to depend on the question posed to the memory system.

Relevance: 30.00%

Abstract:

This paper explores the potential for the RAMpage memory hierarchy to use a microkernel with a small memory footprint, in a specialized cache-speed static RAM (tightly-coupled memory, TCM). Dreamy memory is DRAM kept in low-power mode unless referenced. Simulations show that a small microkernel suits RAMpage well, in that it achieves significantly better speed and energy gains than a standard hierarchy does from adding TCM. RAMpage, in its best 128KB L2 case, gained 11% speed using TCM and reduced energy by 14%; the equivalent conventional-hierarchy gains were under 1%. While a 1MB L2 was significantly faster than the lower-energy configurations with the smaller L2, the larger SRAM's energy does not justify the speed gain. Using a 128KB L2 cache in a conventional architecture resulted in a best-case overall run time of 2.58s, compared with the best dreamy-mode run time (RAMpage without context switches on misses) of 3.34s, a speed penalty of 29%. Energy in the fastest 128KB L2 case was 2.18J vs. 1.50J, a reduction of 31%. The same RAMpage configuration without dreamy mode took 2.83s as simulated and used 2.39J, an acceptable trade-off (penalty under 10%) for being able to switch easily to a lower-energy mode.
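As a quick check on the arithmetic quoted above, the following sketch recomputes the three trade-offs from the run times and energies given in the abstract. The figures are the abstract's; the helper functions and names are illustrative, not from the paper.

```python
# Recompute the speed/energy trade-offs quoted in the abstract.
# All figures (seconds, joules) are taken directly from the text above;
# function and variable names are illustrative, not from the paper.

def penalty(slow: float, fast: float) -> float:
    """Relative penalty of `slow` over the `fast` baseline."""
    return (slow - fast) / fast

def reduction(high: float, low: float) -> float:
    """Relative reduction from `high` down to `low`."""
    return (high - low) / high

conventional_time, dreamy_time = 2.58, 3.34      # seconds
conventional_energy, dreamy_energy = 2.18, 1.50  # joules
rampage_time = 2.83                              # RAMpage, dreamy mode off

print(f"dreamy speed penalty: {penalty(dreamy_time, conventional_time):.0%}")        # ~29%
print(f"dreamy energy saving: {reduction(conventional_energy, dreamy_energy):.0%}")  # ~31%
print(f"non-dreamy RAMpage penalty: {penalty(rampage_time, conventional_time):.0%}") # ~10%
```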

Relevance: 30.00%

Abstract:

Public statues that commemorate the lives and achievements of athletes are pervasive and influential forms of social memory in Western societies. Despite this important nexus between cultural practice and history making, there is a relative void of critical studies of statuary dedicated to athletes. This article will attempt to contribute to a broader understanding in this area by considering a bronze statue of Duke Paoa Kahanamoku, the Hawaiian Olympian, swimmer and surfer, at Waikīkī, Hawaii. This prominent monument demonstrates the processes of remembering and forgetting that are integral to acts of social memory. In this case, Kahanamoku's identity as a surfer is foregrounded over his legacy as a swimmer. The distillation and use of Kahanamoku's memory in this representation is enmeshed in deeper cultural forces about Hawaii's identity. Competing meanings of the statue's symbolism indicate its role as a 'hollow icon', and illustrate the way that apparently static objects representing the sporting past are in fact objects of the present.

Relevance: 30.00%

Abstract:

In this thesis work we develop a new generative model of social networks belonging to the family of Time Varying Networks. Correctly modelling the mechanisms shaping the growth of a network and the dynamics of edge activation and inactivation is of central importance in network science. Indeed, by means of generative models that mimic the real-world dynamics of contacts in social networks it is possible to forecast the outcome of an epidemic process, optimize an immunization campaign, or optimally spread information among individuals. This task can now be tackled by taking advantage of the recent availability of large-scale, high-quality and time-resolved datasets. This wealth of digital data has allowed us to deepen our understanding of the structure and properties of many real-world networks. Moreover, the empirical evidence of a temporal dimension in networks prompted a switch of paradigm from a static representation of graphs to a time-varying one. In this work we exploit the Activity-Driven paradigm (a modelling tool belonging to the family of Time-Varying Networks) to develop a general dynamical model that encodes two fundamental mechanisms shaping a social network's topology and its temporal structure: social capital allocation and burstiness. The former accounts for the fact that individuals do not invest their time and social interactions at random, but rather allocate them toward already known nodes of the network. The latter accounts for the heavy-tailed distributions of inter-event times in social networks. We then empirically measure the properties of these two mechanisms in seven real-world datasets, develop a data-driven model, and solve it analytically. We check the results against numerical simulations and test our predictions on real-world datasets, finding good agreement between the two. Moreover, we find and characterize a non-trivial interplay between burstiness and social capital allocation in the parameter phase space. Finally, we present a novel approach to the development of a complete generative model of Time-Varying Networks. This model is inspired by Kauffman's adjacent-possible theory and is based on a generalized version of Pólya's urn. Remarkably, most of the complex and heterogeneous features of real-world social networks are naturally reproduced by this dynamical model, together with many higher-order topological properties (clustering coefficient, community structure, etc.).
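The two mechanisms described lend themselves to a compact simulation. Below is a minimal sketch of an activity-driven temporal network with a reinforcement rule standing in for social capital allocation, in the spirit of the activity-driven-with-memory literature: with probability c/(c+k) an active node contacts a random node, otherwise it returns to one of its k known neighbours. The activity distribution, the reinforcement form, and all parameter values are illustrative assumptions, not the thesis's exact model.

```python
import random
from collections import defaultdict

# Minimal activity-driven temporal network with a simple reinforcement
# ("social capital allocation") rule. The c/(c+k) exploration probability
# follows the activity-driven-with-memory literature; the exact forms and
# parameters used in the thesis may differ.

N, STEPS, M_LINKS, C = 1000, 200, 2, 1.0

# Heavy-tailed activities, rescaled into (0, 1].
activities = [min(1.0, random.paretovariate(2.1) * 0.01) for _ in range(N)]
neighbors = defaultdict(set)  # cumulative ego networks

for t in range(STEPS):
    for i in range(N):
        if random.random() > activities[i]:
            continue  # node i is inactive at this time step
        for _ in range(M_LINKS):
            k = len(neighbors[i])
            # With prob C/(C+k) contact a random node, else reinforce an old tie.
            if neighbors[i] and random.random() > C / (C + k):
                j = random.choice(tuple(neighbors[i]))
            else:
                j = random.randrange(N)
                while j == i:
                    j = random.randrange(N)
            neighbors[i].add(j)
            neighbors[j].add(i)

degrees = sorted((len(v) for v in neighbors.values()), reverse=True)
print("top cumulative degrees:", degrees[:10])
```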

Relevance: 30.00%

Abstract:

Cascaded multilevel inverter-based Static Var Generators (SVGs) are FACTS devices introduced for active and reactive power flow control. They eliminate the need for zigzag transformers and give a fast response. However, with regard to their application to flicker reduction with Electric Arc Furnaces (EAFs), existing multilevel inverter-based SVGs suffer from the following disadvantages: (1) to control the reactive power, an off-line calculation of the Modulation Index (MI) is required to adjust the SVG output voltage, which slows down the transient response to changes in reactive power; and (2) random active power exchange may unbalance the d.c.-link (HBI) capacitor voltages when reactive power control is performed by adjusting the power angle δ alone. To resolve these problems, a mathematical model of an 11-level cascaded SVG was developed. A new control strategy involving both the MI and the power angle δ is proposed. A selective harmonic elimination method (SHEM) is used for the switching-pattern calculations. To shorten the response time and simplify the control system, feed-forward neural networks are used for on-line computation of the switching patterns instead of look-up tables. The proposed controller updates the MI and switching patterns once each line-cycle according to the sampled reactive power Qs. Meanwhile, the residual reactive power not covered by the MI, together with the reactive power variations during the line-cycle, is continuously compensated by adjusting the power angle δ. The scheme senses both variables, MI and δ, and acts through the inverter switching angles θi. As a result, the proposed SVG is expected to give a faster and more accurate response than present designs allow. In support of the proposal, a mathematical model for reactive power distribution and a sensitivity matrix for voltage regulation assessment are given, and MATLAB simulation results are provided to validate the proposed schemes. The performance with non-linear time-varying loads is analysed, with reference to a general review of flicker, methods for measuring flicker due to arc furnaces, and means for its mitigation.
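The selective-harmonic-elimination step can be made concrete. The sketch below assumes an 11-level cascaded inverter (five H-bridges, hence five switching angles per quarter-cycle) and solves the standard SHE equations: set the fundamental to the requested MI and null the 5th, 7th, 11th and 13th harmonics. A generic numerical solver stands in for the feed-forward neural network described above; this is the textbook formulation, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import fsolve

# Standard selective-harmonic-elimination equations for an 11-level
# cascaded inverter: five angles null the 5th/7th/11th/13th harmonics
# while setting the fundamental to the requested modulation index MI.

def she_equations(theta, mi):
    eqs = [np.cos(theta).sum() - 5.0 * mi]   # fundamental amplitude = MI
    for n in (5, 7, 11, 13):                 # harmonics to eliminate
        eqs.append(np.cos(n * theta).sum())
    return eqs

def solve_angles(mi, guess=None):
    if guess is None:
        guess = np.linspace(0.1, 1.4, 5)     # radians, within 0 < theta < pi/2
    theta = fsolve(she_equations, guess, args=(mi,))
    return np.sort(theta)

angles = solve_angles(mi=0.8)
print("switching angles (deg):", np.degrees(angles).round(2))
```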

Relevance: 30.00%

Abstract:

Over recent years, evidence has been accumulating in favour of the importance of long-term information as a variable which can affect the success of short-term recall. Lexicality, word frequency, imagery and meaning have all been shown to augment short-term recall performance. Two competing theories as to the causes of this long-term memory influence are outlined and tested in this thesis. The first approach is the order-encoding account, which ascribes the effect to the usage of resources at encoding, hypothesising that word lists which require less effort to process will benefit from increased levels of order encoding, in turn enhancing recall success. The alternative view, trace redintegration theory, suggests that order is automatically encoded phonologically, and that long-term information can only influence the interpretation of the resultant memory trace. The free recall experiments reported here attempted to determine the importance of order encoding as a facilitatory framework and to determine the locus of the effects of long-term information in free recall. Experiments 1 and 2 examined the effects of word frequency and semantic categorisation over a filled delay, and Experiments 3 and 4 did the same for immediate recall. Free recall was improved by both long-term factors tested. Order information was not used over a short filled delay, but was evident in immediate recall. Furthermore, both long-term factors increased the amount of order information retained. Experiment 5 induced an order-encoding effect over a filled delay, leaving a picture of short-term processes which are closely associated with long-term processes, and which fit conceptions of short-term memory as part of language processing rather better than either the encoding or the retrieval-based models. Experiments 6 and 7 aimed to determine to what extent phonological processes were responsible for the pattern of results observed. Articulatory suppression affected the encoding of order information where speech rate had no direct influence, suggesting that ease of lexical access is the most important factor in the influence of long-term memory on immediate recall tasks. The evidence presented in this thesis does not offer complete support for either the retrieval-based account or the order-encoding account of long-term influence. Instead, the evidence sits best with models based upon language processing. The path urged for future research is to find ways in which this diffuse model can be better specified, and which can take account of the versatility of the human brain.

Relevance: 30.00%

Abstract:

This thesis includes analysis of disordered spin ensembles corresponding to Exact Cover, a multi-access channel problem, and composite models combining sparse and dense interactions. The satisfiability problem in Exact Cover is addressed using a statistical analysis of a simple branch-and-bound algorithm. The algorithm can be formulated in the large-system limit as a branching process, for which critical properties can be analysed. Far from the critical point a set of differential equations may be used to model the process, and these are solved by numerical integration and exact bounding methods. The multi-access channel problem is formulated as an equilibrium statistical physics problem for the case of bit transmission on a channel with power control and synchronisation. A sparse code division multiple access method is considered, and the optimal detection properties are examined in the typical case by use of the replica method and compared to the detection performance achieved by iterative decoding methods. These sparse codes are found to exhibit phenomena closely resembling those of the well-understood dense codes. The composite model is introduced as an abstraction of canonical sparse and dense disordered spin models. The model includes couplings due to both dense and sparse topologies simultaneously. The new type of code is shown to outperform sparse and dense codes in some regimes, both in optimal performance and in the performance achieved by iterative detection methods in finite systems.
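To illustrate the branching-process formulation, here is a minimal sketch assuming a Galton-Watson process with Poisson offspring: subcritical searches die out quickly, while near the critical offspring mean the tree size develops the heavy-tailed fluctuations whose critical properties the thesis analyses. This is a generic illustration, not the thesis's branch-and-bound code.

```python
import math
import random

# Galton-Watson branching process with Poisson(mean) offspring:
# subcritical (mean < 1) trees die out quickly, while at the critical
# point mean = 1 the total tree size develops heavy-tailed fluctuations.

def poisson(mean: float) -> int:
    """Knuth's method; adequate for small means."""
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def tree_size(mean: float, cap: int = 10**6) -> int:
    """Total number of nodes generated before extinction (or the cap)."""
    alive, total = 1, 1
    while alive and total < cap:
        alive = sum(poisson(mean) for _ in range(alive))
        total += alive
    return total

for mean in (0.5, 0.9, 1.0):
    sizes = sorted(tree_size(mean) for _ in range(2000))
    print(f"offspring mean {mean}: median size {sizes[1000]}, max {sizes[-1]}")
```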

Relevance: 30.00%

Abstract:

This paper addresses the effectiveness of physical-layer network coding (PNC) for improving the throughput of multi-hop multicast in random wireless ad hoc networks (WAHNs). We prove that the per-session throughput order with PNC is tightly bounded as Θ((√m · n R(n))^(-1)) if m = O(R^(-2)(n)), where n is the total number of nodes, R(n) is the communication range, and m is the number of destinations for each multicast session. We also show that the per-session throughput order with PNC is tightly bounded as Θ(n^(-1)) when m = Ω(R^(-2)(n)). These results imply that PNC cannot improve the throughput order of multicast in random WAHNs, contrary to the intuition that PNC may improve the throughput order because it allows simultaneous signal access and combination.
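A small numeric sketch of the two regimes follows, assuming the common connectivity-preserving range R(n) = √(log n / n); it only illustrates how the two bounds scale, with all constants ignored.

```python
import math

# Evaluate the two per-session throughput orders (up to constants),
# assuming the usual connectivity range R(n) = sqrt(log n / n).
# Purely illustrative of the scaling behaviour.

def per_session_order(n: int, m: int) -> float:
    r = math.sqrt(math.log(n) / n)
    if m <= 1 / r**2:                      # m = O(R^-2(n)) regime
        return 1.0 / (math.sqrt(m) * n * r)
    return 1.0 / n                         # m = Omega(R^-2(n)) regime

for n in (10**3, 10**5):
    for m in (2, 64, n // 2):
        print(f"n={n:>6}, m={m:>6}: order ~ {per_session_order(n, m):.2e}")
```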

Relevance: 30.00%

Abstract:

We suggest a model for data losses in a single node (memory buffer) of a packet-switched network (like the Internet) which reduces to one-dimensional discrete random walks with unusual boundary conditions. By construction, the model has critical behavior with a sharp transition from exponentially small to finite losses with increasing data arrival rate. We show that for a finite-capacity buffer at the critical point the loss rate exhibits strong fluctuations and non-Markovian power-law correlations in time, in spite of the Markovian character of the data arrival process.
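A minimal sketch of such a node: buffer occupancy performs a discrete random walk, gaining a packet with probability p and serving one otherwise, with arrivals to a full buffer counted as losses. Here p = 1/2 plays the role of the critical arrival rate; the simple boundary rules below are our choice, not necessarily the paper's "unusual" ones.

```python
import random

# Buffer occupancy as a 1-D discrete random walk: +1 packet arrival with
# probability p, -1 departure otherwise; arrivals to a full buffer are
# lost. The rules at 0 and at capacity are the simplest possible choice.

def loss_rate(p: float, capacity: int = 100, steps: int = 200_000) -> float:
    occupancy, losses = 0, 0
    for _ in range(steps):
        if random.random() < p:        # packet arrival
            if occupancy == capacity:
                losses += 1            # buffer full: packet dropped
            else:
                occupancy += 1
        elif occupancy > 0:            # packet departure
            occupancy -= 1
    return losses / steps

for p in (0.40, 0.50, 0.55):
    print(f"arrival rate p={p}: loss rate ~ {loss_rate(p):.4f}")
```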

Relevance: 30.00%

Abstract:

Access control (AC) limits access to the resources of a system to authorized entities only. Given that information systems today are increasingly interconnected, AC is extremely important. The implementation of an AC service is a complicated task, yet the requirements on an AC service vary widely. Accordingly, the design of an AC service should be flexible and extensible in order to save development effort and time. Unfortunately, with conventional object-oriented techniques, when an extension has not been anticipated at design time, the modification incurred by the extension is often invasive. Invasive changes destroy design modularity, further deteriorate design extensibility and, even worse, reduce product reliability. A concern is crosscutting if it spans multiple object-oriented classes. It was identified that invasive changes were due to the crosscutting nature of most unplanned extensions. To overcome this problem, an aspect-oriented design approach for AC services was proposed, as aspect-oriented techniques can effectively encapsulate crosscutting concerns. The proposed approach was applied to develop an AC framework that supports the role-based access control model. In the framework, the core role-based access control mechanism is given in an object-oriented design, while each extension is captured as an aspect. The resulting framework is well-modularized, flexible and, most importantly, supports noninvasive adaptation. In addition, a process to formalize the aspect-oriented design was described, the purpose being to provide high assurance for AC services. Object-Z was used to specify the static structure, and Predicate/Transition nets were used to model the dynamic behavior. Object-Z was extended to facilitate specification in an aspect-oriented style. The process of formal modeling helps designers to enhance their understanding of the design and hence to detect problems. Furthermore, the specification can be mathematically verified, providing confidence that the design is correct. It was illustrated through an example that the model was ready for formal analysis.
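To make the crosscutting idea concrete, here is a minimal sketch in Python (rather than an AspectJ-style language): a role check is written once as a decorator and woven onto business methods without touching their bodies. All names are illustrative, not from the dissertation's framework.

```python
from functools import wraps

# A role check written once and "woven" onto business methods via a
# decorator -- a lightweight stand-in for an aspect capturing the
# crosscutting access-control concern. All names are illustrative.

class AccessDenied(Exception):
    pass

def requires_role(role):
    def aspect(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if role not in user.get("roles", ()):   # advice before the call
                raise AccessDenied(f"{user['name']} lacks role {role!r}")
            return func(user, *args, **kwargs)
        return wrapper
    return aspect

@requires_role("admin")
def delete_record(user, record_id):
    # Core business logic stays free of access-control code.
    return f"record {record_id} deleted by {user['name']}"

print(delete_record({"name": "ana", "roles": ["admin"]}, 42))
```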

Relevance: 30.00%

Abstract:

Structural vibration control is of great importance. Current active and passive vibration control strategies usually employ individual elements to fulfill this task, such as viscoelastic patches for providing damping, transducers for picking up signals, and actuators for inputting actuating forces. The goal of this dissertation work is to design, manufacture, investigate and apply a new type of multifunctional composite material for structural vibration control. This new composite, which is based on multi-walled carbon nanotube (MWCNT) film, can potentially function as a free-layer damping treatment and a strain sensor simultaneously; that is, the new material integrates the transducer and the damping patch into one element. The multifunctional composite was prepared by sandwiching the MWCNT film between two adhesive layers. Static sensing tests indicated that the MWCNT film sensor's resistance changes almost linearly with the applied load, with sensitivity factors comparable to those of foil strain gauges. Dynamic tests indicated that the MWCNT film sensor can outperform the foil strain gauge at high frequencies. Temperature tests indicated the MWCNT sensor had good stability over the range 237-363 K. The Young's modulus and shear modulus of the MWCNT film composite were obtained by nanoindentation and direct shear tests, respectively. A free-vibration damping test indicated that the MWCNT composite sensor can also provide good damping without adding excessive weight to the base structure. A new model for sandwich structural vibration control was then proposed: a cantilever beam covered with the MWCNT composite on top and one layer of shape memory alloy (SMA) on the bottom illustrates this concept. The MWCNT composite simultaneously serves as free-layer damping and strain sensor, and the SMA acts as actuator. A simple on-off controller was designed to regulate the temperature of the SMA, and thereby the SMA recovery stress input and the system stiffness. Both free and forced vibrations were analyzed. Simulation work showed that this new configuration for sandwich structural vibration control was successful, especially for low-frequency systems.
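A minimal sketch of the on-off control loop described, assuming a first-order thermal model for the SMA layer and a hysteresis band around the setpoint; all constants and the plant model are illustrative assumptions, not values from the dissertation.

```python
# On-off (bang-bang) control of SMA temperature with a hysteresis band,
# against a first-order thermal model. All constants are illustrative.

SETPOINT, BAND = 70.0, 2.0                 # deg C target, hysteresis half-width
AMBIENT, TAU, HEAT_RATE = 25.0, 8.0, 12.0  # thermal model parameters
DT, STEPS = 0.05, 2000                     # step size (s), number of steps

temp, heater_on = AMBIENT, False
for step in range(STEPS):
    # Switch only at the edges of the band to avoid relay chatter.
    if temp < SETPOINT - BAND:
        heater_on = True
    elif temp > SETPOINT + BAND:
        heater_on = False
    # First-order plant: Newtonian cooling plus heater input when on.
    dtemp = (AMBIENT - temp) / TAU + (HEAT_RATE if heater_on else 0.0)
    temp += dtemp * DT
    if step % 400 == 0:
        state = "on" if heater_on else "off"
        print(f"t={step * DT:6.1f}s  T={temp:6.2f}C  heater={state}")
```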

Relevance: 30.00%

Abstract:

The development of 3G (third-generation telecommunication) value-added services brings higher requirements for Quality of Service (QoS). Wideband Code Division Multiple Access (WCDMA) is one of the three 3G standards, and enhancement of QoS for the WCDMA Core Network (CN) is becoming more and more important for users and carriers. This dissertation focuses on enhancing QoS for the WCDMA CN; the purpose is to realize the DiffServ (Differentiated Services) model of QoS for the WCDMA CN. Based on the parallelism characteristic of Network Processors (NPs), the NP programming model is classified as Pool of Threads (POTs) or Hyper Task Chaining (HTC). In this study, an integrated programming model that combines both models was designed. This model is highly efficient and flexible, and also solves the problems of sharing conflicts and packet ordering. We used it as the programming model to realize DiffServ QoS for the WCDMA CN. The realization mechanism of the DiffServ model mainly consists of buffer management, packet scheduling, and packet classification algorithms based on NPs. First, we proposed an adaptive buffer management algorithm called Packet Adaptive Fair Dropping (PAFD), which takes both fairness and throughput into consideration and has smooth service curves. Then, an improved packet scheduling algorithm called Priority-based Weighted Fair Queuing (PWFQ) was introduced to ensure the fairness of packet scheduling and reduce the queueing time of data packets, while keeping delay and jitter within a small range. Thirdly, a multi-dimensional packet classification algorithm called Classification Based on Network Processors (CBNPs) was designed; it effectively reduces memory accesses and storage space, and offers lower time and space complexity. Lastly, an integrated hardware and software system implementing the DiffServ model of QoS for the WCDMA CN was proposed and implemented on the NP IXP2400. According to the corresponding experimental results, the proposed system significantly enhanced QoS for the WCDMA CN: it markedly improves response-time consistency, display distortion and sound-image synchronization, and thus increases network efficiency and conserves network resources.
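For intuition about the scheduling step, here is a minimal sketch of plain weighted fair queuing using virtual finish times; the dissertation's PWFQ layers priority handling on top of this idea, so treat the code as a generic illustration rather than the proposed algorithm.

```python
import heapq
from itertools import count

# Plain weighted fair queuing via virtual finish times. PWFQ adds
# priority handling on top of this idea; this sketch is generic.

class WFQScheduler:
    def __init__(self, weights):
        self.weights = weights                    # class name -> weight
        self.finish = {c: 0.0 for c in weights}   # last virtual finish time
        self.queue = []                           # (finish, seq, class, size)
        self.seq = count()                        # tie-breaker for heap order

    def enqueue(self, cls, size, now=0.0):
        start = max(now, self.finish[cls])
        self.finish[cls] = start + size / self.weights[cls]
        heapq.heappush(self.queue, (self.finish[cls], next(self.seq), cls, size))

    def dequeue(self):
        _finish, _, cls, size = heapq.heappop(self.queue)
        return cls, size

sched = WFQScheduler({"voice": 4.0, "video": 2.0, "data": 1.0})
for _ in range(6):
    sched.enqueue("data", 1500)
    sched.enqueue("voice", 200)
    sched.enqueue("video", 1000)
order = [sched.dequeue()[0] for _ in range(18)]
print(order)  # voice clears first: small packets and the highest weight
```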

Relevance: 30.00%

Abstract:

Compressional- and shear-wave velocity logs (Vp and Vs, respectively) that were run to a sub-basement depth of 1013 m (1287.5 m sub-bottom) in Hole 504B suggest the presence of Layer 2A and document the presence of Layers 2B and 2C on the Costa Rica Rift. Layer 2A extends from the mudline to 225 m sub-basement and is characterized by compressional-wave velocities of 4.0 km/s or less. Layer 2B extends from 225 to 900 m and may be divided into two intervals: an upper interval from 225 to 600 m in which Vp decreases slowly from 5.0 to 4.8 km/s, and a lower interval from 600 to about 900 m in which Vp increases slowly to 6.0 km/s. In Layer 2C, which was logged for about 100 m to a depth of 1 km, Vp and Vs appear to be constant at 6.0 and 3.2 km/s, respectively. This velocity structure is consistent with, but more detailed than, the structure determined by the oblique seismic experiment in the same hole. Since laboratory measurements of the compressional- and shear-wave velocities of samples from Hole 504B at P_confining = P_differential average 6.0 and 3.2 km/s, respectively, and show only slight increases with depth, we conclude that the velocity structure of Layer 2 is controlled almost entirely by variations in porosity and that the crack porosity of Layer 2C approaches zero. A comparison between the compressional-wave velocities determined by logging and the formation porosities calculated from the results of the large-scale resistivity experiment using Archie's Law suggests that the velocity-porosity relation derived by Hyndman et al. (1984) for laboratory samples serves as an upper bound for Vp, and the noninteractive relation derived by Toksöz et al. (1976) for cracks with an aspect ratio α = 1/32 serves as a lower bound.
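The porosity step can be illustrated directly. Below is a minimal sketch assuming the common Archie form Rt = a·Rw·φ^(-m), inverted for porosity; the coefficient values are generic, not those used in the Hole 504B analysis.

```python
# Porosity from formation resistivity via Archie's Law:
#   Rt = a * Rw * phi**(-m)  =>  phi = (a * Rw / Rt)**(1/m)
# Coefficients below are generic illustrative values, not those
# used in the Hole 504B analysis.

A_TORTUOSITY = 1.0   # Archie tortuosity factor a
M_CEMENTATION = 2.0  # cementation exponent m
RW_OHM_M = 0.20      # pore-fluid (seawater) resistivity, ohm-m

def archie_porosity(rt_ohm_m: float) -> float:
    return (A_TORTUOSITY * RW_OHM_M / rt_ohm_m) ** (1.0 / M_CEMENTATION)

for rt in (5.0, 50.0, 500.0):   # formation resistivities down the hole
    print(f"Rt = {rt:6.1f} ohm-m  ->  porosity ~ {archie_porosity(rt):.3f}")
```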

Relevance: 30.00%

Abstract:

This dissertation explores the complex interactions between organizational structure and the environment. In Chapter 1, I investigate the effect of financial development on the formation of European corporate groups. Since cross-country regressions are hard to interpret in a causal sense, we exploit exogenous industry measures to investigate a specific channel through which financial development may affect group affiliation: internal capital markets. Using a comprehensive firm-level dataset on European corporate groups in 15 countries, we find that countries with less developed financial markets have a higher percentage of group affiliates in more capital intensive industries. This relationship is more pronounced for young and small firms and for affiliates of large and diversified groups. Our findings are consistent with the view that internal capital markets may, under some conditions, be more efficient than prevailing external markets, and that this may drive group affiliation even in developed economies. In Chapter 2, I bridge current streams of innovation research to explore the interplay between R&D, external knowledge, and organizational structure: three elements of a firm's innovation strategy which we argue should logically be studied together. Using within-firm patent assignment patterns, we develop a novel measure of structure for a large sample of American firms. We find that centralized firms invest more in research and patent more per R&D dollar than decentralized firms. Both types access technology via mergers and acquisitions, but their acquisitions differ in terms of frequency, size, and integration. Consistent with our framework, their sources of value creation differ: while centralized firms derive more value from internal R&D, decentralized firms rely more on external knowledge. We discuss how these findings should stimulate more integrative work on theories of innovation. In Chapter 3, I use novel data on 1,265 newly-public firms to show that innovative firms exposed to environments with lower M&A activity just after their initial public offering (IPO) adapt by engaging in fewer technological acquisitions and more internal research. However, this adaptive response becomes inertial shortly after IPO and persists well into maturity. This study advances our understanding of how the environment shapes heterogeneity and capabilities through its impact on firm structure. I discuss how my results can help bridge inertial versus adaptive perspectives in the study of organizations, by documenting an instance when the two interact.

Relevance: 30.00%

Abstract:

Two novel studies examining the capacity and characteristics of working memory for object weights, experienced through lifting, were completed. Both studies employed visually identical objects of varying weight and focused on memories linking object locations and weights. Whereas numerous studies have examined the capacity of visual working memory, the capacity of the sensorimotor memory involved in motor control and object manipulation has not yet been explored. In addition to assessing working memory for object weights using an explicit perceptual test, we also assessed memory for weight using an implicit measure based on motor performance. The vertical load forces (LF) and horizontal grip forces (GF) applied during lifts, measured from force sensors embedded in the object handles, were used to assess participants' ability to predict object weights. In Experiment 1, participants were presented with sets of 3, 4, 5, 7 or 9 objects. They lifted each object in the set and then repeated this procedure 10 times, with the objects lifted either in a fixed or a random order. Sensorimotor memory was examined by assessing, as a function of object set size, how lifting forces changed across successive lifts of a given object. The results indicated that force scaling for weight improved across repeated lifts, and was better for smaller set sizes than for larger ones, with the latter effect being clearest when objects were lifted in a random order. In general, however, the observed force scaling was poor. In Experiment 2, working memory was examined in two ways: by determining participants' ability to detect a change in the weight of one of 3 to 6 objects lifted twice, and by simultaneously measuring the fingertip forces applied when lifting the objects. The results showed that, even when presented with 6 objects, participants were extremely accurate in explicitly detecting which object changed weight. In addition, force scaling for object weight, which was generally quite weak, was similar across set sizes. Thus, a capacity limit of less than 6 was not found for either the explicit or the implicit measure.