22 results for On-chip debug
Abstract:
Advancements in IC processing technology have driven the innovation and growth in the consumer electronics sector and the evolution of the IT infrastructure supporting this exponential growth. One of the most difficult obstacles to this growth is the removal of the large amount of heat generated by the processing and communicating nodes of the system. Technology scaling and the resulting increase in power density have a direct effect on the rise in on-chip temperature. This has increased cooling budgets and affects both the lifetime reliability and the performance of the system. Reducing on-chip temperatures has therefore become a major design concern for modern microprocessors. This dissertation addresses thermal challenges at different levels for both 2D planar and 3D stacked systems. It proposes a self-timed thermal monitoring strategy based on the liberal use of on-chip thermal sensors, employing noise-variation-tolerant, leakage-current-based thermal sensing for monitoring purposes. To study thermal management issues from the early design stages, accurate thermal modeling and analysis at design time is essential. In this regard, the spatial temperature profile of global Cu nanowires for on-chip interconnects has been analyzed. The dissertation also presents a 3D thermal model of a multicore system to investigate the effects of hotspots and of the placement of silicon die layers on the thermal performance of a modern flip-chip package. For a 3D stacked system, the primary design goal is to maximise performance within the given power and thermal envelopes. Hence, a thermally efficient routing strategy for 3D NoC-Bus hybrid architectures has been proposed to mitigate on-chip temperatures by herding most of the switching activity to the die closest to the heat sink. Finally, an exploration of various thermal-aware placement approaches for both 2D and 3D stacked systems is presented. Various thermal models have been developed and thermal control metrics extracted. An efficient thermal-aware application mapping algorithm for a 2D NoC is presented; the proposed mapping algorithm has been shown to reduce the effective chip area subjected to high temperatures when compared with the state of the art.
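The mapping algorithm itself is not given in the abstract; as a minimal sketch of the general idea of thermal-aware application mapping on a 2D NoC, the snippet below greedily places the most active tasks on the coolest mesh tiles. The task names, activity figures, and tile temperatures are hypothetical placeholders, not data from the dissertation.

```python
# Illustrative greedy thermal-aware mapping for a 2D NoC mesh (assumed inputs).
def thermal_aware_map(task_activity, tile_temperature):
    """Map tasks to tiles, placing the most active tasks on the coolest tiles.

    task_activity: dict task -> estimated switching activity (arbitrary units)
    tile_temperature: dict (x, y) -> estimated steady-state temperature (deg C)
    Returns a dict task -> (x, y).
    """
    tasks = sorted(task_activity, key=task_activity.get, reverse=True)
    tiles = sorted(tile_temperature, key=tile_temperature.get)
    return dict(zip(tasks, tiles))

if __name__ == "__main__":
    activity = {"fft": 0.9, "viterbi": 0.7, "io": 0.2, "ctrl": 0.1}        # hypothetical
    temperature = {(0, 0): 74.0, (0, 1): 68.5, (1, 0): 71.2, (1, 1): 65.0}  # hypothetical
    print(thermal_aware_map(activity, temperature))
    # {'fft': (1, 1), 'viterbi': (0, 1), 'io': (1, 0), 'ctrl': (0, 0)}
```

A real mapper would also weigh communication distance between tasks and lateral heat flow between neighbouring tiles; this sketch shows only the temperature-driven placement step.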
Abstract:
This thesis presents a novel design paradigm, called Virtual Runtime Application Partitions (VRAP), to judiciously utilize on-chip resources. As the dark silicon era approaches, where power considerations will allow only a fraction of the chip to be powered on, judicious resource management will become a key consideration in future designs. Most works on resource management treat only the physical components (i.e. computation, communication, and memory blocks) as resources and manipulate the component-to-application mapping to optimize various parameters (e.g. energy efficiency). To further enhance the optimization potential, we propose to manipulate, in addition to the physical resources, abstract resources (i.e. the voltage/frequency operating point, the fault-tolerance strength, the degree of parallelism, and the configuration architecture). The proposed framework (i.e. VRAP) encapsulates methods, algorithms, and hardware blocks to provide each application with the abstract resources tailored to its needs. To test the efficacy of this concept, we have developed three distinct self-adaptive environments: (i) a Private Operating Environment (POE), (ii) a Private Reliability Environment (PRE), and (iii) a Private Configuration Environment (PCE), which collectively ensure that each application meets its deadlines using minimal platform resources. In this work, several novel architectural enhancements, algorithms, and policies are presented to realize the virtual runtime application partitions efficiently. Considering future design trends, we have chosen Coarse-Grained Reconfigurable Architectures (CGRAs) and Networks-on-Chip (NoCs) to test the feasibility of our approach; specifically, the Dynamically Reconfigurable Resource Array (DRRA) and McNoC serve as the representative CGRA and NoC platforms. The proposed techniques are compared and evaluated using a variety of quantitative experiments. Synthesis and simulation results demonstrate that VRAP significantly enhances energy and power efficiency compared to the state of the art.
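As one concrete illustration of treating the voltage/frequency operating point as an abstract resource (the POE idea above), the sketch below picks, per application, the lowest-power operating point that still meets the deadline. The operating-point table and workload figures are hypothetical and do not reproduce the thesis' DRRA/McNoC policies.

```python
# Hypothetical table of (voltage V, frequency MHz, relative power), sorted by power.
OPERATING_POINTS = [
    (0.8, 200, 1.0),
    (0.9, 400, 2.3),
    (1.0, 600, 4.1),
    (1.1, 800, 6.5),
]

def pick_operating_point(cycles, deadline_ms):
    """Return the lowest-power point whose frequency still meets the deadline."""
    for volt, freq_mhz, power in OPERATING_POINTS:
        runtime_ms = cycles / (freq_mhz * 1e3)   # freq_mhz MHz = freq_mhz*1e3 cycles per ms
        if runtime_ms <= deadline_ms:
            return volt, freq_mhz, power
    return OPERATING_POINTS[-1]                  # best effort: fastest point

print(pick_operating_point(cycles=3_000_000, deadline_ms=10.0))
# (0.9, 400, 2.3): 3e6 cycles need 7.5 ms at 400 MHz, whereas 200 MHz would take 15 ms
```

In the same spirit, the other private environments (PRE, PCE) would tune the fault-tolerance strength and the configuration architecture with analogous per-application policies.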
Abstract:
Due to various advantages such as flexibility, scalability and updatability, software-intensive systems are increasingly embedded in everyday life. The constantly growing number of functions executed by these systems requires a high level of performance from the underlying platform. The main approach to increasing performance has been raising the operating frequency of a chip. However, this has led to the problem of power dissipation, which has shifted the focus of research to parallel and distributed computing. Parallel many-core platforms can provide the required level of computational power along with low power consumption. On the one hand, this enables parallel execution of highly intensive applications; with their computational power, these platforms are likely to be used in various application domains, from home electronics (e.g., video processing) to complex critical control systems. On the other hand, the utilization of the resources has to be efficient in terms of performance and power consumption. However, the high level of on-chip integration increases the probability of various faults and the creation of hotspots leading to thermal problems. Additionally, radiation, which is frequent in space but becomes an issue also at ground level, can cause transient faults. This can eventually induce faulty execution of applications. Therefore, it is crucial to develop methods that enable efficient as well as resilient execution of applications. The main objective of the thesis is to propose an approach to designing agent-based systems for many-core platforms in a rigorous manner. When designing such a system, we explore and integrate various dynamic reconfiguration mechanisms into the agents' functionality. The use of these mechanisms enhances the resilience of the underlying platform whilst maintaining performance at an acceptable level. The design of the system proceeds according to a formal refinement approach, which allows us to ensure correct behaviour of the system with respect to postulated properties. To enable analysis of the proposed system in terms of area overhead as well as performance, we explore an approach where the developed rigorous models are transformed into a high-level implementation language. Specifically, we investigate methods for deriving fault-free implementations from these models into, e.g., a hardware description language, namely VHDL.
Abstract:
In modern society, personal health is a very important issue for everyone. With the development of science and technology, new body-health monitoring devices and technologies will play a key role in daily medical activities. This paper focuses on advancing the design of a wearable vital sign system. A vital sign monitoring system has been proposed and designed. The detection system is composed of a signal collecting subsystem, a signal processing subsystem, a short-range wireless communication subsystem and a user interface subsystem. The signal collecting subsystem is composed of a light source and a photodiode: after light of two different wavelengths is emitted, the photodiode collects the light reflected by human body tissue. The signal processing subsystem is based on the AFE4490 analog front end and peripheral circuits; in this stage the collected analog signal is filtered and converted into a digital signal. After processing, the signal is transmitted over SPI to the short-range wireless communication subsystem, which is based on the Bluetooth 4.0 protocol and the ultra-low-power nRF51822 System on Chip (SoC). Finally, the signal is transmitted to the user end. After proposing and building the system, the paper focuses on the key component of the system, namely the photodetector. Based on a study of perovskite materials, a low-temperature-processed photodetector has been proposed, designed and investigated. The device consists of a light-absorbing layer, an electron-transporting/hole-blocking layer, a hole-transporting/electron-blocking layer, a conductive substrate layer and a metal electrode layer. The light-absorbing layer, fabricated from perovskite materials, is the key part of the device. Upon absorbing light, electron-hole pairs are produced in this layer and, driven by the energy-level differences, the electrons and holes are transported to the metal electrode and the conductive substrate electrode through the electron-transporting and hole-transporting layers respectively, producing the response current. Based on this structure, the fabrication procedure includes substrate cleaning; PEDOT:PSS layer preparation; perovskite layer preparation; PCBM layer preparation; and C60, BCP, and Ag electrode layer preparation. After device fabrication, a series of morphological characterizations and performance tests were carried out, including film-forming quality inspection, response current versus light wavelength analysis, linearity and response time measurements, and other optical and electrical property tests. The results show that the film is uniform; the device produces a clear response current for incident light with wavelengths from 350 nm to 800 nm, and the response current varies with the wavelength. When the wavelength is kept constant, there is a good linear relationship between the intensity of the response current and the power of the incident light, so the device can be used as a photodetector to collect light information. During changes in the light signal, the response time of the device is several microseconds, which is acceptable for a photodetector in our system.
The testing results show that the device has good electronic and optical properties, the fabrication procedure is repeatable, and the device properties are uniform, which demonstrates that the fabrication method and procedure can be used to build the photodetector for our wearable system. Based on these results, the paper concludes that the fabricated photodetector can be integrated on a flexible substrate and is suitable for the proposed monitoring system, thus making progress on wearable monitoring systems and devices. Finally, future prospects for both system design and device design and fabrication are proposed.
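The abstract describes the two-wavelength reflective sensing chain (light source, photodetector, AFE4490, SPI, nRF51822) but not the downstream computation. A common way to turn such red/infrared PPG samples into a blood-oxygen estimate is the ratio-of-ratios method, sketched below with synthetic waveforms; the linear calibration coefficients are illustrative placeholders, not values from the thesis.

```python
import numpy as np

def spo2_ratio_of_ratios(red, infrared):
    """Estimate SpO2 (%) from red and infrared PPG sample windows."""
    red = np.asarray(red, dtype=float)
    ir = np.asarray(infrared, dtype=float)
    # AC ~ pulsatile peak-to-peak amplitude, DC ~ mean level of each channel.
    r = ((red.max() - red.min()) / red.mean()) / ((ir.max() - ir.min()) / ir.mean())
    return 110.0 - 25.0 * r      # hypothetical linear calibration curve

# Synthetic test: 1% modulation depth on red, 2% on infrared, 1.2 Hz pulse.
t = np.linspace(0.0, 1.0, 250)
red = 1.0 + 0.01 * np.sin(2 * np.pi * 1.2 * t)
ir = 1.0 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
print(round(spo2_ratio_of_ratios(red, ir), 1))   # ~97.5 with these inputs
```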
Abstract:
This work is based on the utilisation of sawdust and wood chip screenings for different purposes. Substantial amounts of these byproducts are readily available in the Finnish forest industry. A black liquor impregnation study showed that sawdust-like wood material behaves differently from normal chips. Furthermore, fractionation and removal of the smallest size fractions did not have a significant effect on the impregnation of sawdust-like wood material. Sawdust kraft cooking equipped with an impregnation stage increases the cooking yield and decreases the lignin content of the produced pulp. Impregnation also increases the viscosity of the pulp and decreases chlorine dioxide consumption in bleaching. In addition, impregnation improves certain pulp properties after refining. Hydrotropic extraction showed that more lignin can be extracted from hardwood than from softwood. However, particle size had a major influence on lignin extraction: it was possible to extract more lignin from spruce sawdust than from spruce chips. Wood chip screenings are usually combusted to generate energy; they can also be used in the production of kraft pulp, ethanol and chemicals. It is not economical to produce ethanol from wood chip screenings because of the expensive wood material. Instead, they should be used for the production of steam and energy, kraft pulp and higher value-added chemicals. Bleached sawdust kraft pulp can be used to replace softwood kraft pulp in mechanical-pulp-based papers because it can improve certain physical properties. It is economically more feasible to use bleached sawdust kraft pulp instead of softwood kraft pulp, especially when the reinforcement power requirement is moderate.
Abstract:
Condition monitoring of electric motors has been researched widely for several decades. Research and development at universities and in industry has provided means for predictive condition monitoring, and many devices and systems have been developed and are widely used in industry, transportation and civil engineering. In addition, many methods have been developed and reported in the scientific literature to improve existing methods for the automatic analysis of faults. These methods, however, are not widely used as part of condition monitoring systems. The main reasons are, firstly, that many methods are presented in scientific papers without their performance being evaluated in different conditions and, secondly, that the methods include parameters so case-specific that implementing a system using such methods would be far from straightforward. In this thesis, some of these methods are evaluated theoretically and tested with simulations and with a drive in a laboratory, and a new automatic analysis method for bearing fault detection is introduced. In the first part of this work, the generation of the bearing-fault-originated signal is explained and its influence on the stator current is estimated both qualitatively and quantitatively. The feasibility of the stator current measurement as a bearing fault indicator is experimentally verified with a running 15 kW induction motor. The second part of the work concentrates on bearing fault analysis using the vibration measurement signal. The performance of a micromachined silicon accelerometer chip in conjunction with envelope spectrum analysis of a cyclic bearing fault is experimentally tested. Furthermore, different methods for creating feature extractors for bearing fault classification are researched, and an automatic fault classifier using multivariate statistical discrimination and fuzzy logic is introduced. It is often important that the on-line condition monitoring system is integrated with the industrial communications infrastructure. Two types of sensor solutions are tested in the thesis: the first is a sensor with calculation capacity, for example for the production of the envelope spectra; the other collects the measurement data in memory so that another device can read the data via a field bus. The data communications requirements depend strongly on the type of sensor solution selected: if the data is already analysed in the sensor, communications are needed only for the results, whereas otherwise all measurement data must be transferred. The classification method can be complex if the data is analysed at a management-level computer, but if the analysis is made in the sensor itself, it must be simple due to the restricted calculation and memory capacity.
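Envelope spectrum analysis, the vibration-based technique mentioned above, can be illustrated with a short sketch: a resonance excited by repetitive bearing impacts is demodulated with the Hilbert transform, and the envelope spectrum is inspected for a peak at the fault characteristic frequency. The sampling rate, fault frequency and resonance below are invented for illustration and are not the thesis' measurement setup or classifier.

```python
import numpy as np
from scipy.signal import hilbert

fs = 20_000                                  # sampling rate, Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
bpfo = 87.0                                  # hypothetical outer-race fault frequency, Hz
resonance = 3_000.0                          # structural resonance excited by the impacts, Hz
# Amplitude-modulated resonance as a crude stand-in for repetitive impacts, plus noise.
x = (1 + np.cos(2 * np.pi * bpfo * t)) * np.sin(2 * np.pi * resonance * t)
x += 0.1 * np.random.randn(t.size)

envelope = np.abs(hilbert(x))                # demodulate the band-limited impact train
envelope -= envelope.mean()                  # remove the DC component
spectrum = np.abs(np.fft.rfft(envelope))
freqs = np.fft.rfftfreq(envelope.size, 1 / fs)

print(f"dominant envelope frequency: {freqs[spectrum.argmax()]:.1f} Hz")
# expected to land at (or very near) the 87 Hz fault frequency
```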
Abstract:
The objective of this work was to study the effects of partial removal of wood hemicelluloses on the properties of kraft pulp. The work was conducted by extracting hemicelluloses (1) by a softwood chip pretreatment process prior to kraft pulping, (2) by alkaline extraction from bleached birch kraft pulp, and (3) by enzymatic treatment, xylanase treatment in particular, of bleached birch kraft pulp. The qualitative and quantitative changes in fibers and paper properties were evaluated. In addition, the applicability of the extraction concepts and of hemicellulose-extracted birch kraft pulp as a raw material in papermaking was evaluated in a pilot-scale papermaking environment. The results showed that each examined hemicellulose extraction method has characteristic effects on fiber properties, seen as differences in both the physical and chemical nature of the fibers. A prehydrolysis process prior to kraft pulping offered reductions in cooking time and bleaching chemical consumption and produced fibers with low hemicellulose content that are more susceptible to mechanically induced damage and dislocations. Softwood chip pretreatment for hemicellulose recovery prior to cooking, whether acidic or alkaline, had an impact on the physical properties of the non-refined and refined pulp; in addition, all pretreated pulps exhibited a slower beating response than the unhydrolyzed reference pulp. Both alkaline extraction and enzymatic (xylanase) treatment of bleached birch kraft pulp fibers resulted in very selective hemicellulose removal, particularly of xylan. Furthermore, these two hemicellulose-extracted birch kraft pulps were used in a pilot-scale papermaking environment to evaluate the upscalability of the extraction concepts. Pilot paper machine trials revealed that a certain amount of alkaline-extracted birch kraft pulp, with a 24.9% reduction in the total amount of xylan, could be used in the papermaking stock as a mixture with non-extracted pulp when producing 75 g/m2 paper. For xylanase-treated fibers there were no reductions in the mechanical properties of the 180 g/m2 paper produced compared to paper made from the control pulp, although the xylanase-treated pulp contained 14.2% less total xylan than the control birch kraft pulp. This work emphasized the importance of the hemicellulose extraction method in providing new solutions for creating functional fibers and a valuable hemicellulose co-product stream. The hemicellulose removal concept therefore plays an important role in the integrated forest biorefinery scenario, where the target is the co-production of hemicellulose-extracted pulp and hemicellulose-based chemicals or fuels.