826 results for Stack Overflow


Relevance: 10.00%

Publisher:

Abstract:

This project addresses the unreliability of operating system code, in particular in device drivers. Device driver software is the interface between the operating system and the device's hardware. Device drivers are written in low-level code, making them difficult to understand. Almost all device drivers are written in the programming language C, which allows direct manipulation of memory. Because of the complexity of this manual handling of memory, most mistakes in operating systems occur in device driver code. The programming language Clay can be used to check device driver code at compile time. Clay does most of its error checking statically to minimize the overhead of run-time checks and remain competitive with C's performance. The Clay compiler can detect many more kinds of errors than the C compiler, such as buffer overflows, kernel stack overflows, NULL pointer uses, uses of freed memory, and aliasing errors. Clay code that compiles successfully is guaranteed to run without failing on errors that Clay can detect. Even though C is unsafe, most device drivers are currently written in it. Not only are device drivers the part of the operating system most likely to fail, they are also the largest part of the operating system. As rewriting every existing device driver in Clay by hand would be impractical, this thesis is part of a project to automate the translation of existing drivers from C to Clay. Although C and Clay both allow low-level manipulation of data and fill the same niche for developing low-level code, they have different syntax, type systems, and paradigms. This paper explores how C can be translated into Clay. It identifies which parts of C device drivers cannot be translated into Clay and what information drivers in Clay will require that C cannot provide. It also explains how these translations occur by describing how each C structure is represented in the compiler and how these structures are changed to represent a Clay structure.
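
As a minimal sketch of the structure-to-structure rewrite described above, the following Python fragment (hypothetical node names, far simpler than the thesis compiler's actual representations) re-represents a C pointer parameter as a Clay-style pointer annotated with the bounds information that C cannot express:

    from dataclasses import dataclass

    @dataclass
    class CPointer:          # C: "int *buf" carries no bounds information
        pointee: str

    @dataclass
    class ClayPointer:       # Clay-style: pointer paired with a static bound
        pointee: str
        bound: str           # name of the length value checked at compile time

    def translate_param(param, length_param):
        """Rewrite a C pointer parameter into a bounded Clay pointer.

        The extra length_param is exactly the kind of information a
        Clay driver requires that the original C code cannot provide."""
        if isinstance(param, CPointer):
            return ClayPointer(pointee=param.pointee, bound=length_param)
        return param  # non-pointer parameters translate unchanged

    print(translate_param(CPointer("int"), "buf_len"))
    # ClayPointer(pointee='int', bound='buf_len')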

Relevance: 10.00%

Publisher:

Abstract:

Content Addressable Memory (CAM) is a special type of Complementary Metal-Oxide-Semiconductor (CMOS) storage element that allows a parallel search operation on a memory stack, in addition to the read and write operations offered by a conventional SRAM storage array. In practice, it is often desirable to store a "don't care" state for faster search operations. However, commercially available CAM chips are forced to achieve this functionality by including two binary memory storage elements per CAM cell, which wastes precious area and power. This research presents a novel CAM circuit that achieves the "don't care" functionality with a single ternary memory storage element. Using the recent development of multiple-voltage-threshold (MVT) CMOS transistors, the functionality of the proposed circuit is validated, and characteristics for performance, power consumption, noise immunity, and silicon area are presented. This work presents the following contributions to the field of CAM and ternary-valued logic:
• We present a novel Simple Ternary Inverter (STI) transistor geometry scheme for achieving ternary-valued functionality in existing SOI-CMOS 0.18µm processes.
• We present a novel Ternary Content Addressable Memory based on Three-Valued Logic (3CAM) as a single-storage-element CAM cell with "don't care" functionality.
• We explore the application of macro partitioning schemes to our proposed 3CAM array to observe the benefits and tradeoffs of architecture design in the context of power, delay, and area.
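
The "don't care" matching can be illustrated with a behavioral sketch (a software analogy only, not the proposed MVT circuit): each stored cell holds 0, 1, or X, and a search key matches an entry when every non-X cell equals the corresponding key bit.

    def cam_search(entries, key):
        """Return the indices of all entries matching the search key.

        entries: strings over {'0', '1', 'X'}; 'X' matches either bit.
        key:     string over {'0', '1'}.
        Hardware performs this comparison on all entries in parallel."""
        def matches(entry):
            return all(e == 'X' or e == k for e, k in zip(entry, key))
        return [i for i, entry in enumerate(entries) if matches(entry)]

    # Entry 1 stores a don't-care in its last bit, so it matches both keys.
    table = ['1010', '101X', '0110']
    print(cam_search(table, '1011'))   # [1]
    print(cam_search(table, '1010'))   # [0, 1]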

Relevance: 10.00%

Publisher:

Abstract:

A new image-guided microscope using augmented reality overlays has been developed. Unlike other systems, the novelty of our design lies in mounting a precise, compact, low-cost tracker directly on the microscope to track the motion of the surgical tools and the patient. Correctly scaled cut-views of the pre-operative computed tomography (CT) stack can be displayed on the overlay, orthogonal to the optical view or even along the direction of a clinical tool. Moreover, the system can manage three-dimensional models of tumours or bone structures and allows interaction with them using virtual tools, showing trajectories and distances. The mean error of the overlay was 0.7 mm. Clinical accuracy tests showed results of 1.1-1.8 mm.

Relevance: 10.00%

Publisher:

Abstract:

With the development of micro systems, there is an increasing demand for integrable porous materials. In addition to conventional applications such as filtration, wicking, and insulating, many new micro devices, including micro reactors, sensors, actuators, and optical components, can benefit from porous materials. Conventional porous materials, such as ceramics and polymers, however, cannot meet the challenges posed by micro systems, due to their incompatibility with standard micro-fabrication processes. In an effort to produce porous materials that can be used in micro systems, porous silicon (PS) generated by anodization of single crystalline silicon has been investigated. In this work, the PS formation process has been extensively studied and characterized as a function of substrate type, crystal orientation, doping concentration, current density, and surfactant concentration and type. Anodization conditions have been optimized for producing very thick porous silicon layers with uniform pore size and for obtaining ideal pore morphologies. Three different types of porous silicon materials have been successfully produced: meso porous silicon, macro porous silicon with straight pores, and macro porous silicon with tortuous pores. Regular pore arrays with controllable pore size in the range of 2µm to 6µm have been demonstrated as well. Localized PS formation has been achieved by using an oxide/nitride/polysilicon stack as masking material, which can withstand anodization in hydrofluoric acid for up to twenty hours. A special etching cell with electrolytic liquid backside contact, along with two process flows, has been developed to enable the fabrication of thick macro porous silicon membranes with through-wafer pores. For device assembly, Si-Au and In-Au bonding technologies have been developed. A very low bonding temperature (~200 degrees C) and thick/soft bonding layers (~6µm) have been achieved with the In-Au bonding technology, which is able to compensate for the potentially rough surface of the porous silicon sample without introducing significant thermal stress. The application of the porous silicon material in micro systems has been demonstrated in a micro gas chromatograph system by two indispensable components: an integrated vapor source and an inlet filter, in which porous silicon performs the basic functions of porous media: wicking and filtration. By utilizing a macro porous silicon wick, the calibration vapor source was able to produce uniform and repeatable vapor generation for n-decane, with less than 0.1% variation over 9 hours and less than 0.5% variation in rate over 7 days. With engineered porous silicon membranes, the inlet filter was able to achieve depth filtration with nearly 100% collection efficiency for particles larger than 0.3µm in diameter, a low pressure drop of 523 Pa at a 20 sccm flow rate, and a filter capacity of 500 µg/cm².

Relevance: 10.00%

Publisher:

Abstract:

The alveolated structure of the pulmonary acinus plays a vital role in gas exchange function. Three-dimensional (3D) analysis of the parenchymal region is fundamental to understanding this structure-function relationship, but only a limited number of attempts have been made in the past because of technical limitations. In this study, we developed a new image processing methodology based on finite element (FE) analysis for accurate 3D structural reconstruction of the gas exchange regions of the lung. Stereologically well characterized rat lung samples (Pediatr Res 53: 72-80, 2003) were imaged using high-resolution synchrotron radiation-based X-ray tomographic microscopy. A stack of 1,024 images (each slice: 1024 x 1024 pixels) with a resolution of 1.4 µm³ per voxel was generated. For the development of the FE algorithm, regions of interest (ROI) containing approximately 7.5 million voxels were extracted as a working subunit. 3D FEs were created overlaying the voxel map using a grid-based hexahedral algorithm. A proper threshold value for appropriate segmentation was iteratively determined to match the calculated volume density of tissue to the stereologically determined value (Pediatr Res 53: 72-80, 2003). The resulting 3D FEs are ready to be used for 3D structural analysis as well as for subsequent FE computational analyses such as fluid dynamics and skeletonization.
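
The iterative threshold selection can be sketched as a simple bisection (synthetic data; the bisection strategy is an assumption, as the study only states that the threshold was tuned until the segmented tissue volume density matched the stereological value):

    import numpy as np

    def find_threshold(volume, target_density, lo=0.0, hi=1.0, tol=1e-4):
        """Bisect on the gray-value threshold until the segmented tissue
        fraction matches the stereologically determined volume density."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            density = np.mean(volume >= mid)  # fraction of voxels labelled tissue
            if density > target_density:
                lo = mid    # too much tissue: raise the threshold
            else:
                hi = mid    # too little tissue: lower the threshold
        return 0.5 * (lo + hi)

    rng = np.random.default_rng(0)
    roi = rng.random((64, 64, 64))        # stand-in for a CT sub-volume
    print(find_threshold(roi, target_density=0.12))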

Relevance: 10.00%

Publisher:

Abstract:

In this project we developed conductive thermoplastic resins by adding varying amounts of three different carbon fillers: carbon black (CB), synthetic graphite (SG), and multi-walled carbon nanotubes (CNT) to a polypropylene matrix for application as fuel cell bipolar plates. This component of fuel cells provides mechanical support to the stack, circulates the gases that participate in the electrochemical reaction within the fuel cell, and allows for removal of the excess heat from the system. The materials fabricated in this work were tested to determine their mechanical and thermal properties. These materials were produced by adding varying amounts of single carbon fillers to a polypropylene matrix (2.5 to 15 wt.% Ketjenblack EC-600 JD carbon black, 10 to 80 wt.% Asbury Carbon's Thermocarb TC-300 synthetic graphite, and 2.5 to 15 wt.% of Hyperion Catalysis International's FIBRIL™ multi-walled carbon nanotubes). In addition, composite materials containing combinations of these three fillers were produced. The thermal conductivity results showed an increase in both through-plane and in-plane thermal conductivities, with the largest increase observed for synthetic graphite. The Department of Energy (DOE) had previously set a thermal conductivity goal of 20 W/m·K, which was surpassed by formulations containing 75 wt.% and 80 wt.% SG, yielding in-plane thermal conductivity values of 24.4 W/m·K and 33.6 W/m·K, respectively. In addition, composites containing 2.5 wt.% CB, 65 wt.% SG, and 6 wt.% CNT in PP had an in-plane thermal conductivity of 37 W/m·K. Flexural and tensile tests were conducted. All composite formulations exceeded the flexural strength target of 25 MPa set by DOE. The tensile and flexural moduli of the composites increased with higher concentrations of carbon fillers. Carbon black and synthetic graphite caused a decrease in the tensile and flexural strengths of the composites, whereas carbon nanotubes increased both. Mathematical models were applied to estimate the through-plane and in-plane thermal conductivities of single- and multiple-filler formulations, and the tensile modulus of single-filler formulations. For thermal conductivity, Nielsen's model yielded accurate values when compared to experimental results obtained through the Flash method. For the prediction of tensile modulus, Nielsen's model yielded the smallest error between predicted and experimental values.

The second part of this project consisted of the development of a curriculum in Fuel Cell and Hydrogen Technologies to address educational barriers identified by the Department of Energy. Through the creation of new courses and enterprise programs in the areas of fuel cells and the use of hydrogen as an energy carrier, we introduced engineering students to the new technologies, policies, and challenges of this alternative energy. Feedback provided by students participating in these courses and enterprise programs indicates positive acceptance of the different educational tools. Results from a survey administered to students after participating in these courses showed an increase in knowledge and awareness of energy fundamentals, which indicates that the modules developed in this project are effective in introducing students to alternative energy sources.
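
For reference, a common textbook form of Nielsen's model for composite thermal conductivity is sketched below (the shape factor A and maximum packing fraction used here are illustrative placeholders, not the values fitted in this work):

    def nielsen_conductivity(k_matrix, k_filler, phi, A=1.5, phi_max=0.637):
        """Lewis-Nielsen estimate of composite thermal conductivity (W/m·K).

        k_matrix, k_filler: conductivities of the polymer and the filler
        phi:     filler volume fraction
        A:       shape factor, larger for high-aspect-ratio fillers
        phi_max: maximum packing fraction of the filler"""
        ratio = k_filler / k_matrix
        B = (ratio - 1.0) / (ratio + A)
        psi = 1.0 + ((1.0 - phi_max) / phi_max ** 2) * phi
        return k_matrix * (1.0 + A * B * phi) / (1.0 - B * psi * phi)

    # Illustrative numbers only: a polypropylene matrix with a graphite-like filler.
    print(nielsen_conductivity(k_matrix=0.25, k_filler=300.0, phi=0.55))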

Relevance: 10.00%

Publisher:

Abstract:

This thesis presents a methodology for measuring thermal properties in situ, with a special focus on obtaining properties of layered stack-ups commonly used in armored vehicle components. The technique involves attaching a thermal source to the surface of a component, measuring the heat flux transferred between the source and the component, and measuring the surface temperature response. The material properties of the component can subsequently be determined from measurement of the transient heat flux and temperature response at the surface alone. Experiments involving multilayered specimens show that the surface temperature response to a sinusoidal heat flux forcing function is also sinusoidal. A frequency domain analysis shows that sinusoidal thermal excitation produces a gain and phase shift behavior typical of linear systems. Additionally, this analysis shows that the material properties of sub-surface layers affect the frequency response function at the surface of a particular stack-up. The methodology involves coupling a thermal simulation tool with an optimization algorithm to determine the material properties from temperature and heat flux measurement data. Use of a sinusoidal forcing function not only provides a mechanism to perform the frequency domain analysis described above, but sinusoids also have the practical benefit of reducing the need for instrumentation of the backside of the component. Heat losses can be minimized by alternately injecting and extracting heat on the front surface, as long as sufficiently high frequencies are used.
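
The frequency-domain step can be sketched in a few lines: given a measured sinusoidal heat flux q(t) and the surface temperature response T(t), the gain and phase shift at the forcing frequency follow from the ratio of their Fourier components (the synthetic signals below are assumptions for illustration):

    import numpy as np

    fs, f0 = 100.0, 0.5                  # sample rate (Hz), forcing frequency (Hz)
    t = np.arange(0.0, 60.0, 1.0 / fs)
    q = np.sin(2 * np.pi * f0 * t)                     # heat flux forcing
    T = 0.8 * np.sin(2 * np.pi * f0 * t - 0.6) + 20.0  # temperature response

    Q, Tf = np.fft.rfft(q), np.fft.rfft(T - T.mean())
    k = np.argmin(np.abs(np.fft.rfftfreq(len(t), 1.0 / fs) - f0))
    H = Tf[k] / Q[k]                     # frequency response function at f0
    print(abs(H), np.angle(H))           # gain ~0.8, phase shift ~-0.6 rad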

Relevance: 10.00%

Publisher:

Abstract:

We used the Green's functions from auto-correlations and cross-correlations of seismic ambient noise to monitor temporal velocity changes in the subsurface at Villarrica volcano in the Southern Andes of Chile. Campaigns were conducted from March to October 2010 and February to April 2011 with 8 broadband and 6 short-period stations, respectively. We prepared the data by removing the instrument response, normalizing with a root-mean-square method, whitening the spectra, and filtering from 1 to 10 Hz. This frequency band was chosen based on the relatively high background noise level in that range. Hour-long auto- and cross-correlations were computed and the Green's functions stacked by day and over the total time. To track the temporal velocity changes, we stretched a 24-hour moving window of correlation functions from 90% to 110% of the original and cross-correlated it with the total stack. All of the stations' auto-correlations detected what is interpreted as an increase in velocity in 2010, with an average increase of 0.13%. Cross-correlations from station V01, near the summit, to the other stations show comparable changes that are also interpreted as increases in velocity. We attribute this change to the closing of cracks in the subsurface, due either to seasonal snow loading or to regional tectonics. In addition to the common increase in velocity across the stations, there are excursions in velocity of the same order lasting several days. Their amplitude decreases as the station's distance from the vent increases, suggesting these excursions may be attributed to changes within the volcanic edifice. In at least two occurrences the amplitudes at stations V06 and V07, the stations farthest from the vent, are smaller. Similar short temporal excursions were seen in the auto-correlations from 2011; however, there was little to no increase in the overall velocity.
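
The stretching measurement can be sketched as follows (numpy only; array names are assumptions, and the sign convention relating the stretching factor to dv/v varies in the literature): each moving-window correlation function is resampled onto a stretched time axis and correlated against the total stack, and the best-fitting factor tracks the velocity change.

    import numpy as np

    def best_stretch(reference, current, t, factors=np.linspace(0.90, 1.10, 201)):
        """Return the stretching factor maximizing correlation with the stack."""
        ref = (reference - reference.mean()) / reference.std()
        best_s, best_cc = 1.0, -np.inf
        for s in factors:
            stretched = np.interp(t, s * t, current)   # resample current onto s*t
            cur = (stretched - stretched.mean()) / stretched.std()
            cc = np.mean(ref * cur)
            if cc > best_cc:
                best_s, best_cc = s, cc
        return best_s

    t = np.linspace(0.0, 10.0, 2000)
    ref = np.sin(2 * np.pi * t) * np.exp(-0.2 * t)          # total stack
    cur = np.sin(2 * np.pi * 1.02 * t) * np.exp(-0.2 * t)   # medium ~2% faster
    print(best_stretch(ref, cur, t))                        # ~1.02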

Relevance: 10.00%

Publisher:

Abstract:

Back-in-time debuggers are extremely useful tools for identifying the causes of bugs, as they allow us to inspect the past states of objects no longer present in the current execution stack. Unfortunately, the "omniscient" approaches that try to remember all previous states are impractical because they either consume too much space or are far too slow. Several approaches rely on heuristics to limit these penalties, but they ultimately end up throwing out too much relevant information. In this paper we propose a practical approach to back-in-time debugging that attempts to keep track of only the relevant past data. In contrast to other approaches, we keep object history information together with the regular objects in the application memory. Although seemingly counter-intuitive, this approach has the effect that past data that is not reachable from current application objects (and hence no longer relevant) is automatically garbage collected. In this paper we describe the technical details of our approach, and we present benchmarks demonstrating that memory consumption stays within practical bounds. Furthermore, since our approach works at the virtual machine level, the performance penalty is significantly lower than that of other approaches.
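
The key idea can be sketched in plain Python (a stand-in for the VM-level mechanism, not the actual implementation): each object carries a reference to its previous state, so the moment the current object becomes unreachable, its entire history chain becomes garbage-collectable with it.

    class HistoryCell:
        """A value cell that remembers its past states as a linked chain."""
        def __init__(self, value, previous=None):
            self.value = value
            self.previous = previous   # older states hang off the live object

        def updated(self, new_value):
            return HistoryCell(new_value, previous=self)

        def past_values(self):
            node, states = self, []
            while node is not None:
                states.append(node.value)
                node = node.previous
            return states

    cell = HistoryCell(1).updated(2).updated(3)
    print(cell.past_values())   # [3, 2, 1]
    del cell  # dropping the last reference lets the whole chain be collected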

Relevance: 10.00%

Publisher:

Abstract:

Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. However, rather often errors occur due to code that has executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even back-in-time debuggers do not help answer the question, "Where did this object come from?" The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control flow views and object flows. Compass enables a developer to effectively navigate from an object contributing to an error back-in-time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.

Relevance: 10.00%

Publisher:

Abstract:

Wireless Mesh Networks (WMN) have proven to be a key technology for increased network coverage of Internet infrastructures. The development process for new protocols and architectures in the area of WMN is typically split into evaluation by network simulation and testing of a prototype in a test-bed. Testing a prototype in a real test-bed is time-consuming and expensive. Uncontrollable external interference can occur, which makes debugging difficult. Moreover, the test-bed usually supports only a limited number of test topologies. Finally, mobility tests are impractical. Therefore, we propose VirtualMesh as a new testing architecture which can be used before going to a real test-bed. It provides instruments to test the real communication software, including the network stack, inside a controlled environment. VirtualMesh is implemented by capturing real traffic through a virtual interface at the mesh nodes. The traffic is then redirected to the network simulator OMNeT++. In our experiments, VirtualMesh has proven to be scalable and introduces moderate delays. Therefore, it is suitable for predeployment testing of communication software for WMNs.
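
The capture-and-redirect idea can be sketched as follows (Linux-only, requires root; the simulator endpoint is hypothetical, as the abstract does not describe VirtualMesh's actual wire protocol to OMNeT++): frames the unmodified network stack writes to a virtual TAP interface are read back and relayed to the simulator over TCP.

    import fcntl, socket, struct

    TUNSETIFF, IFF_TAP, IFF_NO_PI = 0x400454CA, 0x0002, 0x1000

    # Attach to a TAP device so the real network stack sends its frames to us.
    tap = open('/dev/net/tun', 'r+b', buffering=0)
    fcntl.ioctl(tap, TUNSETIFF, struct.pack('16sH', b'vmesh0', IFF_TAP | IFF_NO_PI))

    # Hypothetical endpoint standing in for the OMNeT++ side of the architecture.
    sim = socket.create_connection(('127.0.0.1', 4242))
    while True:
        frame = tap.read(2048)                              # frame from real stack
        sim.sendall(struct.pack('!H', len(frame)) + frame)  # length-prefixed relay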

Relevance: 10.00%

Publisher:

Abstract:

Development of a federated service landscape in the Ruhr region based on SAML, with the goal of enabling cross-organizational use of web-based IT services.

Relevance: 10.00%

Publisher:

Abstract:

The paper presents a link layer stack for wireless sensor networks, which consists of the Burst-aware Energy-efficient Adaptive Medium access control (BEAM) protocol and the Hop-to-Hop Reliability (H2HR) protocol. BEAM can operate with short beacons to announce data transmissions or can include data within the beacons. Duty cycles can be adapted by a traffic prediction mechanism that indicates pending packets destined for a node and estimates its wake-up times. H2HR takes advantage of information provided by BEAM, such as neighbour information and transmission information, to perform per-hop congestion control. We justify the design decisions with measurements in a real-world wireless sensor network testbed and compare the performance with other link layer protocols.
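
The beacon policy can be sketched as follows (thresholds and field names are assumptions, not BEAM's actual parameters): small payloads ride inside the beacon itself, larger ones are only announced, and the sleep interval shrinks when the traffic predictor expects pending packets.

    def next_beacon(pending, beacon_payload_limit=32):
        """Decide whether data rides inside the beacon or is only announced."""
        if pending and len(pending[0]) <= beacon_payload_limit:
            return {'type': 'beacon+data', 'data': pending.pop(0)}
        return {'type': 'beacon', 'announced': len(pending)}

    def next_sleep_interval(predicted_packets, base_ms=500, min_ms=50):
        """Shorten the duty cycle when the predictor expects traffic."""
        return max(min_ms, base_ms // (1 + predicted_packets))

    queue = [b'short reading', b'x' * 100]
    print(next_beacon(queue))       # small payload: embedded in the beacon
    print(next_beacon(queue))       # too large: announced only
    print(next_sleep_interval(3))   # 125 ms instead of 500 ms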

Relevance: 10.00%

Publisher:

Abstract:

This paper is a summary of the main contributions of the PhD thesis published in [1]. The main research contributions of the thesis are driven by the research question of how to design simple, yet efficient and robust, run-time adaptive resource allocation schemes within the communication stack of Wireless Sensor Network (WSN) nodes. The thesis addresses several problem domains with contributions on different layers of the WSN communication stack. The main contributions can be summarized as follows: First, a novel run-time adaptive MAC protocol is introduced, which stepwise allocates the power-hungry radio interface in an on-demand manner when the encountered traffic load requires it. Second, the thesis outlines a methodology for robust, reliable, and accurate software-based energy estimation, calculated at network run-time on the sensor node itself. Third, the thesis evaluates several Forward Error Correction (FEC) strategies to adaptively allocate the correctional power of Error Correcting Codes (ECCs) to cope with temporally and spatially variable bit error rates. Fourth, in the context of TCP-based communications in WSNs, the thesis evaluates distributed caching and local retransmission strategies to overcome the performance-degrading effects of packet corruption and transmission failures when transmitting data over multiple hops. The performance of all developed protocols is evaluated on a self-developed real-world WSN testbed and achieves superior performance over selected existing approaches, especially where traffic load and channel conditions are subject to rapid variations over time.
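
Software-based energy estimation of this kind typically rests on state-based accounting, sketched below (the current draws are illustrative datasheet-style figures, not the values calibrated in the thesis): the node tallies the time its radio spends in each state and multiplies by the per-state current draw.

    class EnergyEstimator:
        """On-node energy bookkeeping: E = V * sum(I_state * t_state)."""
        CURRENT = {'sleep': 20e-6, 'rx': 19e-3, 'tx': 17e-3}  # amperes, illustrative

        def __init__(self, voltage=3.0):
            self.voltage = voltage
            self.seconds = {state: 0.0 for state in self.CURRENT}

        def log(self, state, duration_s):
            self.seconds[state] += duration_s

        def energy_joules(self):
            return self.voltage * sum(self.CURRENT[s] * t
                                      for s, t in self.seconds.items())

    e = EnergyEstimator()
    e.log('sleep', 9.0); e.log('rx', 0.8); e.log('tx', 0.2)
    print(e.energy_joules())   # ~0.056 J over this ten-second window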

Relevance: 10.00%

Publisher:

Abstract:

Rock magnetic, biochemical and inorganic records of the sediment cores PG1351 and Lz1024 from Lake El'gygytgyn, Chukotka peninsula, Far East Russian Arctic, were subjected to a hierarchical agglomerative cluster analysis in order to refine and extend the pattern of climate modes as defined by Melles et al. (2007). Cluster analysis of the data obtained from both cores yielded similar results, differentiating clearly between the four climate modes: warm, peak warm, cold and dry, and cold and moist. In addition, two transitional phases were identified, representing the early stages of a cold phase and slightly colder conditions during a warm phase. The statistical approach can thus be used to resolve gradual changes in the sedimentary units, as an indicator of available oxygen in the hypolimnion, in greater detail. Based upon cluster analyses of core Lz1024, the published succession of climate modes in core PG1351, covering the last 250 ka, was modified and extended back to 350 ka. Comparison with the marine oxygen isotope (δ18O) stack LR04 (Lisiecki and Raymo, 2005) and the summer insolation at 67.5° N, alongside the extended Lake El'gygytgyn parameter records of magnetic susceptibility (κLF), total organic carbon content (TOC) and the chemical index of alteration (CIA; Minyuk et al., 2007), revealed that all stages back to marine isotope stage (MIS) 10 and most of the substages are clearly reflected in the pattern derived from the cluster analysis.
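
The statistical step can be sketched with SciPy (the proxy matrix below stands in for the measured κLF, TOC and CIA records, and Ward linkage is an assumption, since the abstract does not name the linkage criterion):

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    rng = np.random.default_rng(1)
    proxies = rng.normal(size=(300, 3))  # rows: samples down-core; cols: proxies
    proxies = (proxies - proxies.mean(0)) / proxies.std(0)  # standardize

    tree = linkage(proxies, method='ward')  # hierarchical agglomerative clustering
    modes = fcluster(tree, t=6, criterion='maxclust')  # e.g. 4 modes + 2 transitions
    print(np.bincount(modes)[1:])  # number of samples assigned to each mode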