946 results for Memory-based
Abstract:
The behavior of a semiconductor optical amplifier (SOA)-based nonlinear loop mirror with feedback has been investigated as a potential device for all-optical signal processing. In the feedback device, input signal pulses (ones) are injected into the loop, and amplified reflected pulses are fed back into the loop as switching pulses. The feedback device has two stable modes of operation - block mode, where alternating blocks of ones and zeros are observed, and spontaneous clock division mode, where halving of the input repetition rate is achieved. Improved models of the feedback device have been developed to study its performance in different operating conditions. The feedback device could be optimized to give a choice of either of the two stable modes by shifting the arrival time of the switching pulses at the SOA. Theoretically, it was found possible to operate the device at only tens of fJ switching pulse energies if the SOA is biased to produce very high gain in the presence of internal loss. The clock division regime arises from the combination of incomplete SOA gain recovery and memory of the startup sequence that is provided by the feedback. Clock division requires a sufficiently high differential phase shift per unit differential gain, which is related to the SOA linewidth enhancement factor.
Abstract:
GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs efficiently on a single PC. GraphChi can execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. With its novel parallel sliding windows (PSW) technique for loading subgraphs from disk into memory for vertex and edge updates, it achieves data-processing performance close to, and sometimes better than, that of mainstream distributed graph engines. The GraphChi authors noted that memory is not utilized effectively with large datasets, which leads to suboptimal computation performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory during the whole computing process. Part-in-memory mode was implemented with only about 40 additional lines of code on top of the original GraphChi engine. Extensive experiments were performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory-management approach reduces the GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, a larger portion of data pinned in memory does not always lead to better performance when the whole dataset cannot fit in memory: there exists an optimal portion of data to keep in memory to achieve the best computational performance.
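The Part-in-memory idea can be sketched as follows. This is an illustrative Python stand-in, not the GraphChi implementation: the `PartInMemoryStore` class, `_load` stub and shard names are our own assumptions, showing only the core bookkeeping of pinning a fixed fraction of shards in RAM while streaming the rest from disk on every pass.

```python
class PartInMemoryStore:
    """Pin a fixed fraction of graph shards in memory for the whole run."""

    def __init__(self, shard_files, pin_fraction=0.5):
        n_pinned = int(len(shard_files) * pin_fraction)
        # Shards pinned once at startup and reused across all iterations.
        self.pinned = {f: self._load(f) for f in shard_files[:n_pinned]}
        # Remaining shards are re-read from disk on every pass.
        self.disk_resident = shard_files[n_pinned:]

    def _load(self, f):
        # Stand-in for reading a shard (an edge sub-list) from disk.
        return list(range(3))

    def shards(self):
        # Pinned shards come straight from memory; the rest incur disk I/O.
        for data in self.pinned.values():
            yield data
        for f in self.disk_resident:
            yield self._load(f)

store = PartInMemoryStore(["s0", "s1", "s2", "s3"], pin_fraction=0.5)
assert len(store.pinned) == 2 and len(store.disk_resident) == 2
```

The abstract's observation that more pinning is not always better would correspond here to choosing `pin_fraction` so that the pinned shards do not crowd out the working memory the update pass itself needs.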
Abstract:
This article demonstrates the use of embedded fibre Bragg gratings as a vector bending sensor to monitor two-dimensional shape deformation of a shape memory polymer plate. The plate was made from thermal-responsive epoxy-based shape memory polymer materials, and the two fibre Bragg grating sensors were embedded orthogonally, one in the top and the other in the bottom layer of the plate, in order to measure the strain distribution in the longitudinal and transverse directions separately and also to provide a temperature reference. When the shape memory polymer plate was bent at different angles, the Bragg wavelengths of the embedded fibre Bragg gratings showed a red-shift of 50 pm/° caused by the bend-induced tensile strain on the plate surface. The finite element method was used to analyse the stress distribution over the whole shape recovery process. The strain transfer rate between the shape memory polymer and the optical fibre was also calculated by the finite element method and determined experimentally; it was around 0.25. During the experiment, the embedded fibre Bragg gratings showed very high temperature sensitivity due to the high thermal expansion coefficient of the shape memory polymer: around 108.24 pm/°C below the glass transition temperature (Tg) and 47.29 pm/°C above Tg. Therefore, the orthogonal arrangement of the two fibre Bragg grating sensors provides a temperature compensation function, as one of the gratings measures only temperature while the other is subjected to the directional deformation. © The Author(s) 2013.
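The temperature compensation described above is simple arithmetic, sketched below with the sensitivities reported in the abstract (50 pm per degree of bend, 108.24 pm/°C below Tg). The assumption that both gratings share the same temperature sensitivity, and all function names, are ours.

```python
K_BEND = 50.0    # pm per degree of bending angle (from the abstract)
K_TEMP = 108.24  # pm/°C below Tg (from the abstract)

def bend_angle(shift_bend_fbg_pm, shift_temp_fbg_pm):
    """Subtract the temperature-only grating's shift, then convert to angle."""
    strain_shift = shift_bend_fbg_pm - shift_temp_fbg_pm
    return strain_shift / K_BEND

# Example: 10 °C of warming plus a 20° bend on the strain-sensing grating.
dT = 10.0
shift_temp_only = K_TEMP * dT                 # temperature-reference FBG
shift_bend = K_BEND * 20.0 + K_TEMP * dT      # bend-sensing FBG
assert abs(bend_angle(shift_bend, shift_temp_only) - 20.0) < 1e-9
```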
Abstract:
We describe an approach for recovering the plaintext in block ciphers that have a design structure similar to the Data Encryption Standard but improperly constructed S-boxes. Experiments with a backtracking search algorithm performing this kind of attack against modified DES/Triple-DES in ECB mode show that the unknown plaintext can be recovered with a small amount of uncertainty, and that the algorithm is highly efficient in both time and memory for plaintext sources with relatively low entropy. Our investigations demonstrate once again that modifications yielding S-boxes which still satisfy some design criteria may lead to very weak ciphers. ACM Computing Classification System (1998): E.3, I.2.7, I.2.8.
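The search structure of such an attack can be illustrated with a toy example. This is not the authors' algorithm: the "cipher" below is a deliberately weak per-letter substitution (not DES), and the low-entropy plaintext source is modelled by a set of valid word prefixes; all names are hypothetical.

```python
# Low-entropy source model: only prefixes of "memory" are valid plaintexts.
PREFIXES = {"", "m", "me", "mem", "memo", "memor", "memory"}
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def enc(ch):
    # Trivially weak "S-box": a Caesar shift by 3, known to the attacker.
    return chr((ord(ch) - 97 + 3) % 26 + 97)

def backtrack(ct, partial=""):
    """Extend the plaintext one character at a time, pruning candidates
    that either leave the source model or fail to encrypt to the ciphertext."""
    if len(partial) == len(ct):
        return partial
    for ch in ALPHABET:
        cand = partial + ch
        if cand in PREFIXES and enc(ch) == ct[len(partial)]:
            result = backtrack(ct, cand)
            if result is not None:
                return result
    return None

ciphertext = "".join(enc(c) for c in "memory")
assert backtrack(ciphertext) == "memory"
```

The pruning by the source model is what makes the search cheap for low-entropy plaintexts, mirroring the time/memory efficiency claimed in the abstract.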
Abstract:
The integrability of the nonlinear Schrödinger equation (NLSE) by the inverse scattering transform, shown in a seminal work [1], gave an interesting opportunity to treat the corresponding nonlinear channel similarly to a linear one by using the nonlinear Fourier transform. The integrability of the NLSE underlies the old idea of eigenvalue communications [2], which was resurrected in recent works [3-7]. In [6, 7] a new method for coherent optical transmission employing the continuous nonlinear spectral data, nonlinear inverse synthesis, was introduced. It assumes the modulation and detection of data directly using the continuous part of the nonlinear spectrum associated with an integrable transmission channel (here, the NLSE). Although such a transmission method is inherently free from nonlinear impairments, the noisy signal corruptions arising from amplifier spontaneous emission inevitably degrade the optical system performance. We study the properties of the noise-corrupted channel model in the nonlinear spectral domain attributed to the NLSE. We derive the general stochastic equations governing the signal evolution inside the nonlinear spectral domain and elucidate the properties of the emerging nonlinear spectral noise using well-established methods of perturbation theory based on the inverse scattering transform [8]. It is shown that in the presence of small noise the communication channel in the nonlinear domain is an additive Gaussian channel with memory and a signal-dependent correlation matrix. We demonstrate that the effective spectral noise acquires "colouring": its autocorrelation function becomes slowly decaying and non-diagonal as a function of "frequencies", and the noise loses its circular symmetry, becoming elliptically polarized. We then derive a lower bound on the spectral efficiency for such a channel.
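For reference, the noise-perturbed NLSE channel model referred to above can be written, in standard dimensionless form (our notation, a sketch only), as

```latex
i\,\frac{\partial q}{\partial z} + \frac{1}{2}\,\frac{\partial^2 q}{\partial t^2} + |q|^2 q = n(t, z),
```

where $q(t,z)$ is the complex signal envelope, $z$ is the propagation distance, $t$ is the retarded time, and $n(t,z)$ models the amplifier spontaneous emission as an additive, delta-correlated Gaussian noise term. Setting $n = 0$ recovers the integrable channel treated by the inverse scattering transform.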
Our main result is that by using nonlinear spectral techniques one can significantly increase the achievable spectral efficiency compared to the currently available methods [9]. REFERENCES 1. Zakharov, V. E. and A. B. Shabat, Sov. Phys. JETP, Vol. 34, 62-69, 1972. 2. Hasegawa, A. and T. Nyu, J. Lightwave Technol., Vol. 11, 395-399, 1993. 3. Yousefi, M. I. and F. R. Kschischang, IEEE Trans. Inf. Theory, Vol. 60, 4312-4328, 2014. 4. Yousefi, M. I. and F. R. Kschischang, IEEE Trans. Inf. Theory, Vol. 60, 4329-4345, 2014. 5. Yousefi, M. I. and F. R. Kschischang, IEEE Trans. Inf. Theory, Vol. 60, 4346-4369, 2014. 6. Prilepsky, J. E., S. A. Derevyanko, K. J. Blow, I. Gabitov, and S. K. Turitsyn, Phys. Rev. Lett., Vol. 113, 013901, 2014. 7. Le, S. T., J. E. Prilepsky, and S. K. Turitsyn, Opt. Express, Vol. 22, 26720-26741, 2014. 8. Kaup, D. J. and A. C. Newell, Proc. R. Soc. Lond. A, Vol. 361, 413-446, 1978. 9. Essiambre, R.-J., G. Kramer, P. J. Winzer, G. J. Foschini, and B. Goebel, J. Lightwave Technol., Vol. 28, 662-701, 2010.
Abstract:
Shape memory alloys are a special class of metals that can undergo large deformation and yet recover their original shape through the mechanism of phase transformations. However, when they experience plastic slip, their ability to recover their original shape is reduced. This is due to the presence of dislocations generated by plastic flow, which interfere with shape recovery through the shape memory effect and the superelastic effect. A one-dimensional model that captures the coupling between the shape memory effect, the superelastic effect and plastic deformation is introduced. The shape memory alloy is assumed to have only three phases: austenite, positive-variant martensite and negative-variant martensite. If the SMA flows plastically, each phase exhibits a dislocation field that permanently prevents a portion of it from being transformed back to the other phases; hence less of the phase is available for subsequent phase transformations. A constitutive model was developed to depict this phenomenon and to simulate the effect of plasticity on both the shape memory effect and the superelastic effect in shape memory alloys. In addition, experimental tests were conducted to characterize the phenomenon in shape memory wire and superelastic wire. The constitutive model was then implemented within a finite element context as a UMAT (User MATerial subroutine) for the commercial finite element package ABAQUS. The model is phenomenological in nature and is based on the construction of a stress-temperature phase diagram. The model has been shown to be capable of capturing the qualitative and quantitative aspects of the coupling between plasticity and the shape memory effect, and between plasticity and the superelastic effect, within acceptable limits. As a verification case, a simple truss structure was built, tested, and then simulated using the FEA constitutive model. The results were found to be close to the experimental data.
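The bookkeeping at the heart of this idea can be sketched numerically: plastic slip "locks" part of each phase fraction so that it can no longer transform. The class, fraction values and locking rule below are illustrative assumptions, not the paper's constitutive model.

```python
class PhaseFractions:
    """Track austenite (A) and martensite variants (M+, M-); fractions sum to 1."""

    def __init__(self):
        self.xi = {"A": 1.0, "M+": 0.0, "M-": 0.0}
        self.locked = {"A": 0.0, "M+": 0.0, "M-": 0.0}

    def transform(self, src, dst, amount):
        # Only the unlocked portion of a phase may transform.
        available = self.xi[src] - self.locked[src]
        moved = min(amount, available)
        self.xi[src] -= moved
        self.xi[dst] += moved
        return moved

    def plastic_slip(self, phase, locked_amount):
        # Dislocations pin part of the current phase permanently.
        self.locked[phase] = min(self.xi[phase],
                                 self.locked[phase] + locked_amount)

p = PhaseFractions()
p.transform("A", "M+", 0.6)        # stress-induced transformation
p.plastic_slip("M+", 0.2)          # plastic flow locks 0.2 of M+
back = p.transform("M+", "A", 1.0) # on heating, only the unlocked part reverts
assert abs(back - 0.4) < 1e-12
assert abs(p.xi["M+"] - 0.2) < 1e-12
```

The residual `M+` fraction here is the sketch's analogue of the degraded shape recovery described in the abstract.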
Abstract:
Wireless sensor networks are emerging as effective tools in the gathering and dissemination of data. They can be applied in many fields including health, environmental monitoring, home automation and the military. As with all other computing systems, it is necessary to include security features so that security-sensitive data traversing the network is protected. However, traditional security techniques cannot be applied to wireless sensor networks, due to the constraints of battery power, memory, and the computational capacities of the miniature wireless sensor nodes. To address this need, it becomes necessary to develop new lightweight security protocols. This dissertation focuses on designing a suite of lightweight trust-based security mechanisms and a cooperation enforcement protocol for wireless sensor networks. It presents a trust-based cluster head election mechanism used to elect new cluster heads. This solution prevents a major security breach against the routing protocol, namely the election of malicious or compromised cluster heads. The dissertation also describes a location-aware, trust-based, compromised-node detection and isolation mechanism. Both of these mechanisms rely on the ability of a node to monitor its neighbors. Using neighbor-monitoring techniques, nodes are able to determine their neighbors’ reputation and trust level through probabilistic modeling. The mechanisms were designed to mitigate internal attacks within wireless sensor networks, and the feasibility of the approach is demonstrated through extensive simulations. The dissertation also addresses non-cooperation problems in multi-user wireless sensor networks: a scalable lightweight enforcement algorithm using evolutionary game theory is designed, and its effectiveness is validated through mathematical analysis and simulation.
This research has advanced the knowledge of wireless sensor network security and cooperation by developing new techniques based on mathematical models. By doing this, we have enabled others to build on our work towards the creation of highly trusted wireless sensor networks. This would facilitate its full utilization in many fields ranging from civilian to military applications.
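One common way to realize probabilistic trust from neighbor monitoring, of the kind described above, is a Beta reputation model; the dissertation's exact model may differ, and everything below is an illustrative sketch. Each observed cooperation or misbehaviour updates a Beta(alpha, beta) posterior, and a neighbor's trust is the posterior mean.

```python
class NeighborTrust:
    """Beta-reputation trust for one monitored neighbor (illustrative)."""

    def __init__(self):
        self.alpha = 1.0  # prior pseudo-count of cooperative events
        self.beta = 1.0   # prior pseudo-count of misbehaviour events

    def observe(self, cooperated):
        if cooperated:
            self.alpha += 1
        else:
            self.beta += 1

    def trust(self):
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)

t = NeighborTrust()
for _ in range(8):
    t.observe(True)
t.observe(False)
# 8 cooperations, 1 defection on a uniform prior -> trust = 9/11
assert abs(t.trust() - 9 / 11) < 1e-12
```

A cluster-head election of the kind described above could then simply exclude candidates whose trust falls below a threshold.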
Abstract:
Structural vibration control is of great importance. Current active and passive vibration control strategies usually employ individual elements to fulfill this task, such as viscoelastic patches for providing damping, transducers for picking up signals, and actuators for inputting actuating forces. The goal of this dissertation is to design, manufacture, investigate and apply a new type of multifunctional composite material for structural vibration control. This new composite, based on multi-walled carbon nanotube (MWCNT) film, can potentially function as a free-layer damping treatment and a strain sensor simultaneously; that is, it integrates the transducer and the damping patch into one element. The multifunctional composite was prepared by sandwiching the MWCNT film between two adhesive layers. A static sensing test indicated that the MWCNT film sensor's resistance changes almost linearly with the applied load, with sensitivity factors comparable to those of foil strain gauges. A dynamic test indicated that the MWCNT film sensor can outperform the foil strain gauge in high frequency ranges. A temperature test indicated that the MWCNT sensor had good temperature stability over the range 237 K-363 K. The Young's modulus and shear modulus of the MWCNT film composite were obtained by a nanoindentation test and a direct shear test, respectively. A free-vibration damping test indicated that the MWCNT composite sensor can also provide good damping without adding excessive weight to the base structure. A new model for sandwich structural vibration control was then proposed. In this configuration, a cantilever beam covered with the MWCNT composite on top and one layer of shape memory alloy (SMA) on the bottom is used to illustrate the concept: the MWCNT composite simultaneously serves as the free-layer damping and strain sensor, and the SMA acts as the actuator.
A simple on-off controller was designed to regulate the temperature of the SMA, and thereby the SMA recovery stress input and the system stiffness. Both free and forced vibrations were analyzed. Simulation work showed that this new configuration for sandwich structural vibration control was successful, especially for low-frequency systems.
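An on-off (bang-bang) temperature controller of the kind mentioned above can be sketched as follows; the setpoint, hysteresis band and crude thermal plant are illustrative assumptions, not the dissertation's values.

```python
SETPOINT = 70.0    # target SMA temperature, degrees C (assumed)
HYSTERESIS = 2.0   # deadband to avoid relay chatter (assumed)

def on_off(temp, heater_on):
    """Heat below the band, stop above it, hold state inside the deadband."""
    if temp < SETPOINT - HYSTERESIS:
        return True
    if temp > SETPOINT + HYSTERESIS:
        return False
    return heater_on

# Crude thermal plant: heating adds 1.5 degrees/step, ambient losses 0.5.
temp, heater = 25.0, False
for _ in range(200):
    heater = on_off(temp, heater)
    temp += (1.5 if heater else 0.0) - 0.5

# The temperature settles into a limit cycle around the setpoint.
assert 66.0 <= temp <= 73.0
```

Holding the SMA near the setpoint in this way regulates its recovery stress and hence the stiffness the actuator contributes to the sandwich structure.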
Abstract:
Electrical energy is an essential resource for the modern world. Unfortunately, its price has almost doubled in the last decade, and energy production is currently one of the primary sources of pollution. These concerns are becoming more important in data-centers: as more computational power is required to serve hundreds of millions of users, bigger data-centers are becoming necessary, resulting in higher electrical energy consumption. Of all the energy used in data-centers, including power distribution units, lights, and cooling, computer hardware consumes as much as 80%. Consequently, there is an opportunity to make data-centers more energy efficient by designing systems with a lower energy footprint. Consuming less energy is critical not only in data-centers; it also matters in mobile devices, where battery-based energy is a scarce resource. Reducing the energy consumption of these devices will allow them to last longer and recharge less frequently. Saving energy in computer systems is a challenging problem, because improving a system's energy efficiency usually comes at the cost of compromises in other areas such as performance or reliability. In the case of secondary storage, for example, spinning down the disks to save energy can incur high latencies if they are accessed while in this state. The challenge is to increase energy efficiency while keeping the system as reliable and responsive as before. This thesis tackles the problem of improving energy efficiency in existing systems while reducing the impact on performance.
First, we propose a new technique to achieve fine-grained energy proportionality in multi-disk systems. Second, we design and implement an energy-efficient cache system using flash memory that increases disk idleness to save energy. Finally, we identify and explore solutions for the page fetch-before-update problem in caching systems that can (a) better control I/O traffic to secondary storage and (b) provide critical performance improvements for energy-efficient systems.
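The second contribution's core idea can be sketched in a few lines: serve hits from a flash cache so the disk stays idle long enough to be spun down. The spin-down threshold, the class and all names below are illustrative assumptions, not the thesis design.

```python
SPIN_DOWN_AFTER = 3  # consecutive cache hits before the disk spins down (assumed)

class FlashCachedDisk:
    """Toy model: a flash cache absorbs reads so the disk can idle."""

    def __init__(self):
        self.cache = {}
        self.idle_streak = 0
        self.disk_spinning = True

    def read(self, block):
        if block in self.cache:
            self.idle_streak += 1
            if self.idle_streak >= SPIN_DOWN_AFTER:
                self.disk_spinning = False  # disk idle long enough: save energy
            return self.cache[block]
        # Miss: the disk must spin up, paying a latency/energy penalty.
        self.disk_spinning = True
        self.idle_streak = 0
        self.cache[block] = "data:%s" % block
        return self.cache[block]

d = FlashCachedDisk()
d.read(1); d.read(1); d.read(1); d.read(1)
assert d.disk_spinning is False   # repeated hits let the disk spin down
d.read(2)                          # a miss forces a spin-up
assert d.disk_spinning is True
```

The tension the thesis addresses is visible even here: a larger cache keeps the disk down longer, but every miss pays the spin-up latency.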
Abstract:
The move from Standard Definition (SD) to High Definition (HD) represents a sixfold increase in the data that needs to be processed. With expanding resolutions and evolving compression, there is a need for high performance with flexible architectures that allow quick upgradability. Technology is advancing in image display resolution, compression techniques, and video intelligence. Software implementations of these systems can attain accuracy, with tradeoffs among processing performance (achieving specified frame rates on large image data sets), power, and cost constraints. New architectures are needed to keep pace with the fast innovations in video and imaging. This dissertation contains dedicated hardware implementations of the pixel-rate and frame-rate processes on a Field Programmable Gate Array (FPGA) to achieve real-time performance. The contributions of the dissertation are as follows. (1) We develop a target detection system by applying a novel running average mean threshold (RAMT) approach to globalize the threshold required for background subtraction. This approach adapts the threshold automatically to different environments (indoor and outdoor) and different targets (humans and vehicles). For low power consumption and better performance, we design the complete system on an FPGA. (2) We introduce a safe distance factor and develop an algorithm for detecting occlusion occurrence during target tracking. A novel mean threshold is calculated by motion-position analysis. (3) A new strategy for gesture recognition is developed using Combinational Neural Networks (CNN) based on a tree structure. The method is analyzed on American Sign Language (ASL) gestures. We introduce a novel points-of-interest approach to reduce the feature vector size and a gradient threshold approach for accurate classification.
(4) We design a gesture recognition system using a hardware/software co-simulation neural network for the high speed and low memory storage requirements provided by the FPGA. We develop an innovative maximum-distance algorithm which uses only 0.39% of the image as the feature vector to train and test the system design. The gesture sets involved in different applications may vary; therefore, it is essential to keep the feature vector as small as possible while maintaining the same accuracy and performance.
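Running-average background subtraction with a globally adapted threshold, in the spirit of the RAMT approach in contribution (1), can be sketched as below. The learning rate and the exact thresholding rule are our assumptions; the dissertation's formulation is not reproduced here. Frames are flattened lists of grey-level pixel values.

```python
ALPHA = 0.1  # background learning rate (assumed)

def update_background(bg, frame):
    """Exponential running average of the background, pixel by pixel."""
    return [(1 - ALPHA) * b + ALPHA * f for b, f in zip(bg, frame)]

def detect(bg, frame):
    """Global threshold from the mean absolute difference, so it adapts
    automatically to different scenes (indoor/outdoor illumination)."""
    diffs = [abs(f - b) for f, b in zip(frame, bg)]
    thresh = sum(diffs) / len(diffs)
    return [1 if d > thresh else 0 for d in diffs]

bg = [10.0, 10.0, 10.0, 10.0]
frame = [10.0, 11.0, 60.0, 10.0]   # one bright "target" pixel
mask = detect(bg, frame)
assert mask == [0, 0, 1, 0]        # only the target pixel exceeds the mean diff
bg = update_background(bg, frame)  # background slowly absorbs the scene
```

On an FPGA, both loops map naturally to per-pixel pipelines, which is what makes the approach attractive for the real-time, low-power goals stated above.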
Abstract:
Hardware/software (HW/SW) co-simulation integrates software simulation and hardware simulation, running them simultaneously. A HW/SW co-simulation platform is usually used to ease debugging and verification of very large-scale integration (VLSI) designs. To accelerate the computation of a gesture recognition technique, a HW/SW implementation using field programmable gate array (FPGA) technology is presented in this paper. The major contributions of this work are: (1) a novel design of a memory controller in the Verilog Hardware Description Language (Verilog HDL) to reduce memory consumption and the load on the processor; (2) the testing part of the neural network algorithm is hardwired to improve speed and performance; and (3) the design takes only a few milliseconds to recognize a hand gesture, which makes it computationally more efficient. American Sign Language gesture recognition is chosen to verify the performance of the approach, with several experiments carried out on four databases of gestures (alphabet signs A to Z).
Abstract:
Attention-deficit hyperactivity disorder (ADHD) is the most prevalent and impairing neurodevelopmental disorder, with worldwide prevalence estimates of 5.29%. ADHD is clinically characterized by hyperactivity-impulsivity and inattention, with neuropsychological deficits in executive functions, attention, working memory and inhibition. These cognitive processes rely on prefrontal cortex function; cognitive training programs enhance the performance of ADHD participants, supporting the idea of neuronal plasticity. Here we propose the development of an online puzzle-game-based assessment and training tool in which participants must deduce the 'winning symbol' out of N distracters. To increase the ecological validity of assessments, strategically triggered Twitter/Facebook notifications will challenge the ability to ignore distracters. In the UK, the significant cost of the disorder to health, social and education services stands at £23m a year. Thus the potential impact of neuropsychological assessment and training to improve our understanding of the pathophysiology of ADHD, and hence our treatment interventions and patient outcomes, cannot be overstated.
Abstract:
Practice-oriented film education aimed at children has been hailed for various reasons: at a personal level, as a means of providing tools for self-expression and for developing creativity and communication skills; and at a social level, because it is argued that children must now become competent producers, in addition to critical consumers, of audiovisual content so they can take part in the global public sphere that is arguably emerging. This chapter discusses how the challenges posed by introducing children to filmmaking (i.e. digital video) are being met at three civil associations in Mexico: La Matatena AC, which seeks to enrich children's lives by means of the aesthetic experience filmmaking can bring them; Comunicaciòn Comunitaria, concerned with the impact filmmaking can have on the community, preserving cultural memory and enabling participation; and Juguemos a Grabar, with a focus on urban regeneration through the cultural industries.
Abstract:
Encryption and integrity trees guard against physical attacks, but harm performance. Prior academic work has speculated around the latency of integrity verification, but has done so in an insecure manner; no industrial implementation of a secure processor has included speculation. This work presents PoisonIvy, a mechanism which speculatively uses data before its integrity has been verified, while preserving security and closing address-based side-channels. PoisonIvy reduces performance overheads from 40% to 20% for memory-intensive workloads, and down to 1.8% on average.