895 results for Other Computer Engineering
Abstract:
"Supported in part by National Science Foundation under Grant No. NSF-GP-7634."
Abstract:
"To be presented at the First Annual IEEE Computer Conference, Chicago, Illinois, September 6-8, 1967."
Abstract:
Thesis (Master's)--University of Washington, 2016-06
Abstract:
Electron backscatter diffraction has been applied to polycrystalline diamond films grown by microwave plasma-assisted chemical vapour deposition on silicon substrates, in order to map the individual diamond grains, the grain boundaries, and the crystal orientation of discrete crystallites. The nucleation rate and orientation are strongly affected by applying a voltage bias to the substrate to influence and enhance nucleation, the bias-enhanced nucleation process. In this work, the diamond surface is mapped using electron backscatter diffraction, then a layer a few microns thick is ion-milled away to expose a lower layer for analysis, and so on. This permits a three-dimensional reconstruction of the film texture.
Abstract:
The authors present a super-fast scanning (SFS) technique for phased array weather radar applications. The fast scanning feature of the SFS technique is described and its drawbacks are identified. Techniques which combat these drawbacks are also presented. A concept design phased array radar system (CDPAR) is used as a benchmark to compare the performance of a conventional scanning phased array radar system with the SFS technique. It is shown that the SFS technique, in association with suitable waveform processing, can realise four times the scanning speed of the conventional phased array benchmark while achieving similar accuracy.
Abstract:
Adsorption of argon and nitrogen at their respective boiling points in cylindrical pores of MCM-41-type silica-like adsorbents is studied by means of a non-local density functional theory (NLDFT) modified to deal with amorphous solids. By matching the theoretical pore filling pressure versus pore diameter against the experimental data, we conclude that the adsorption branch (rather than desorption) corresponds to the true thermodynamic equilibrium. On this basis, we derive the optimal solid–fluid molecular parameters for the amorphous silica–Ar and amorphous silica–N2 systems, and at the same time we can reliably derive the specific surface area of non-porous and mesoporous silica-like adsorbents, without recourse to the BET method. This method is then logically extended to describe the local adsorption isotherms of argon and nitrogen in silica-like pores, which are then used as the basis (kernel) to determine the pore size distribution. We test this with a number of adsorption isotherms on MCM-41 samples, and the results are quite realistic and in excellent agreement with the XRD results, justifying the approach adopted in this paper.
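The kernel-based determination of the pore size distribution (PSD) described above amounts to inverting an adsorption integral equation. A minimal sketch follows, in which a synthetic kernel stands in for the actual NLDFT local isotherms and a generic projected-gradient non-negative least-squares solver replaces the authors' fitting procedure; all numbers are illustrative.

```python
import numpy as np

# Illustrative sketch only: a synthetic kernel stands in for the NLDFT
# local isotherms; the solver is generic, not the authors' procedure.

pressures = np.linspace(0.01, 0.99, 40)    # relative pressures p/p0
pore_widths = np.linspace(1.0, 10.0, 20)   # pore diameters (arbitrary units)

def local_isotherm(p, w):
    """Toy local isotherm: a step at a filling pressure that grows with pore size."""
    p_fill = w / (w + 10.0)
    return 1.0 / (1.0 + np.exp(-50.0 * (p - p_fill)))

# Kernel matrix: rows are pressure points, columns are pore sizes
K = np.array([[local_isotherm(p, w) for w in pore_widths] for p in pressures])

# Synthetic "experimental" isotherm generated from a known bimodal PSD
true_psd = np.exp(-0.5 * ((pore_widths - 3.0) / 0.5) ** 2) \
    + 0.5 * np.exp(-0.5 * ((pore_widths - 7.0) / 0.8) ** 2)
isotherm = K @ true_psd

def nnls_pg(K, y, iters=20000):
    """Projected gradient for min 0.5 * ||K f - y||^2 subject to f >= 0."""
    step = 1.0 / np.linalg.norm(K, 2) ** 2   # 1 / Lipschitz constant of gradient
    f = np.zeros(K.shape[1])
    for _ in range(iters):
        f = np.maximum(0.0, f - step * (K.T @ (K @ f - y)))
    return f

psd = nnls_pg(K, isotherm)  # recovered pore size distribution
```

The non-negativity constraint is what makes the ill-posed inversion stable enough to be usable; in practice regularization is usually added on top.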
Abstract:
Adsorption of pure nitrogen, argon, acetone, chloroform and an acetone-chloroform mixture on graphitized thermal carbon black is considered at sub-critical conditions by means of the molecular layer structure theory (MLST). In the present version of the MLST an adsorbed fluid is treated as a sequence of 2D molecular layers, whose Helmholtz free energies are obtained directly from the analysis of experimental adsorption isotherms of the pure components. The interaction of the nearest layers is accounted for in the framework of a mean-field approximation. This approach allows quantitative correlation of experimental nitrogen and argon adsorption isotherms both in the monolayer region and in the range of multi-layer coverage up to 10 molecular layers. In the case of acetone and chloroform the approach also leads to excellent quantitative correlation of the adsorption isotherms, while molecular approaches such as the non-local density functional theory (NLDFT) fail to describe those isotherms. We extend our new method to calculate the Helmholtz free energy of an adsorbed mixture using a simple mixing rule, and this allows us to predict mixture adsorption isotherms from pure-component adsorption isotherms. The approach, which accounts for the difference in composition between molecular layers, is tested against experimental data for acetone-chloroform mixture (a non-ideal mixture) adsorption on graphitized thermal carbon black at 50 degrees C. (C) 2005 Elsevier Ltd. All rights reserved.
Abstract:
The microstructural variation of Norit RI Extra activated carbon, progressively heated at 1373 K for various periods of time, was explored in terms of pore size and pore wall thickness distributions determined by argon adsorption at 87 K, using both an infinite and a finite wall thickness model. The latter approach has recently been developed in our laboratory and has been applied to several virgin carbons. The current results show significant variations in the small pore size region (< 7 angstrom) in association with strong growth of thick walls having at least three carbon sheets, as a result of heat treatment. In particular, shrinkage of the smallest pores due to strong interaction between their opposite walls, as well as smoothening of carbon wall surfaces due to an increased degree of graphitization under thermal treatment, has been found. Further, the results of the pore wall thickness distribution are well corroborated by X-ray diffraction. The results of the pore size and pore wall thickness distributions are also shown to be consistent with transmission electron microscopy analyses. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
The recently described process of simultaneous nitrification, denitrification and phosphorus removal (SNDPR) has great potential to save capital and operating costs for wastewater treatment plants. However, the presence of glycogen-accumulating organisms (GAOs) and the accumulation of nitrous oxide (N2O) can severely compromise the advantages of this process. In this study, these two issues were investigated using a lab-scale sequencing batch reactor performing SNDPR over a 5-month period. The reactor was highly enriched in polyphosphate-accumulating organisms (PAOs) and GAOs, which together represented around 70% of the total microbial community. PAOs were the dominant population at all times and their abundance increased, while the GAO population decreased over the study period. Anoxic batch tests demonstrated that GAOs rather than denitrifying PAOs were responsible for denitrification. N2O accumulated from denitrification, and more than half of the nitrogen supplied in a reactor cycle was released into the atmosphere as N2O. After mixing SNDPR sludge with other denitrifying sludge, N2O present in the bulk liquid was reduced immediately if external carbon was added. We therefore suggest that the N2O accumulation observed in the SNDPR reactor is an artefact of the low microbial diversity promoted by the use of synthetic wastewater with only a single carbon source. (C) 2005 Elsevier B.V. All rights reserved.
Abstract:
We calculate tangential momentum coefficients for the exchange of momentum between molecules in transport and the internal surface of a membrane pore, modelled as a simple atomic structure. We introduce a local specular reflection (LSR) hypothesis, which states that impinging molecules undergo mirror-like reflection in a plane tangent to a surface atom at the point of impact. As a consequence, the component of the velocity parallel to the direction of flow will (in general) change on impact. The overall effect is a loss of tangential momentum, since more is lost in the upstream direction than is gained in the downstream direction. The loss of tangential momentum is greater when the size ratio of fluid to solid atoms is small, allowing more steeply inclined impact planes to become accessible to the fluid-phase molecules. (c) 2005 Elsevier B.V. All rights reserved.
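The local specular reflection rule has a compact vector form: the impinging velocity is mirrored about the impact plane, v' = v − 2(v·n)n, where n is the unit normal of the tangent plane at the point of impact. A minimal sketch (the geometry and numbers are illustrative, not taken from the paper):

```python
import numpy as np

def specular_reflect(v, n):
    """Mirror-like reflection of velocity v in a plane with unit normal n."""
    n = n / np.linalg.norm(n)
    return v - 2.0 * np.dot(v, n) * n

v_in = np.array([1.0, 0.0, -1.0])   # flow along x, impact from above

# Flat wall (normal along z): tangential (x) momentum is conserved
v_flat = specular_reflect(v_in, np.array([0.0, 0.0, 1.0]))

# Inclined impact plane on a surface atom: tangential momentum is lost
tilt = np.deg2rad(20.0)
n_tilted = np.array([-np.sin(tilt), 0.0, np.cos(tilt)])
v_tilted = specular_reflect(v_in, n_tilted)
```

Speed is conserved in both cases; only the inclined plane changes the streamwise component, which is the mechanism behind the tangential momentum loss described above.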
Abstract:
We propose a novel interpretation and usage of neural networks (NNs) in modeling physiological signals, which are allowed to be nonlinear and/or nonstationary. The method consists of training an NN for the k-step prediction of a physiological signal, and then examining the connection-weight space (CWS) of the NN to extract information about the signal generator mechanism. We define a novel feature, the Normalized Vector Separation (gamma_ij), to measure the separation of two arbitrary states i and j in the CWS and use it to track the state changes of the generating system. The performance of the method is examined on synthetic signals and clinical EEG. Synthetic data indicate that gamma_ij can track the system down to an SNR of 3.5 dB. Clinical data obtained from three patients undergoing carotid endarterectomy showed that the EEG could be modeled (within a root-mean-squared error of 0.01) by the proposed method, and the blood perfusion state of the brain could be monitored via gamma_ij, with small NNs having no more than 21 connection weights altogether.
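The abstract does not give the exact definition of the Normalized Vector Separation gamma_ij; one plausible form is the distance between two connection-weight vectors normalized by the sum of their magnitudes, sketched below with hypothetical weight vectors:

```python
import numpy as np

def normalized_vector_separation(w_i, w_j):
    """One plausible normalized separation of two NN weight states (hypothetical form)."""
    return np.linalg.norm(w_i - w_j) / (np.linalg.norm(w_i) + np.linalg.norm(w_j))

# Hypothetical connection-weight vectors for two system states i and j,
# e.g. snapshots of a small k-step-prediction NN trained on each state
w_state_i = np.array([0.5, -0.2, 0.8, 0.1])
w_state_j = np.array([0.6, -0.1, 0.7, 0.3])

gamma_ij = normalized_vector_separation(w_state_i, w_state_j)
```

By the triangle inequality this quantity lies in [0, 1], which makes it convenient for tracking gradual state changes of the underlying system.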
Abstract:
Choice of the operational frequency is one of the most critical parts of any radar design process. The parameters of radars for buried object detection (BOD) are very sensitive to both the carrier frequency and the ranging signal bandwidth. Such radars operate in a specific propagation environment with strong frequency-dependent attenuation and, as a result, short operational range. This fact dictates some features of the radar's parameters: a wideband signal to provide high range resolution (fractions of a meter) and a low carrier frequency (tens or hundreds of megahertz) for deeper penetration. The requirements of a wideband ranging signal and a low carrier frequency are partly contradictory. As a result, low-frequency (LF) ultrawideband (UWB) signals are used. The major goal of this paper is to examine the influence of the frequency band choice on radar performance and to develop relevant methodologies for BOD radar design and optimization. Highly efficient continuous wave (CW) signals with stepped frequency (SF) modulation are considered; however, the main conclusions can be applied to any kind of ranging signal.
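The trade-off the abstract describes follows from two standard relations for stepped-frequency ranging: down-range resolution is set by the total swept bandwidth, and the unambiguous range by the frequency step. A sketch with illustrative numbers (not taken from the paper), including the slower in-soil propagation velocity:

```python
# Illustrative stepped-frequency (SF) design relations for a BOD radar.
C = 3.0e8  # free-space speed of light, m/s

def range_resolution(bandwidth_hz, rel_permittivity=1.0):
    """Down-range resolution dR = v / (2 B), with v = c / sqrt(eps_r) in soil."""
    v = C / rel_permittivity ** 0.5
    return v / (2.0 * bandwidth_hz)

def unambiguous_range(freq_step_hz, rel_permittivity=1.0):
    """Maximum unambiguous range R_u = v / (2 df) for frequency step df."""
    v = C / rel_permittivity ** 0.5
    return v / (2.0 * freq_step_hz)

# Example: a 400 MHz total sweep in 4 MHz steps, in a soil with eps_r = 4
dr = range_resolution(400e6, rel_permittivity=4.0)   # 0.1875 m
ru = unambiguous_range(4e6, rel_permittivity=4.0)    # 18.75 m
```

Widening the total bandwidth sharpens resolution while attenuation grows with frequency, which is exactly the contradiction the paper sets out to optimize.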
Abstract:
Buffered crossbar switches have recently attracted considerable attention as the next generation of high-speed interconnects. They are a special type of crossbar switch with an exclusive buffer at each crosspoint of the crossbar. They demonstrate unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable that each crosspoint have only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling in buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be of arbitrarily small size. Firstly, we introduced a hybrid scheme called the Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best-effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable-length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA have a high probability of avoiding the more time-consuming centralized scheduling process, and thus make fast scheduling decisions. Secondly, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed-performance traffic with explicit flow rates. FASS reduces the required crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity.
Thirdly, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best-effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on queue length information, and thus helps the crossbar switch to be more work-conserving. The feasibility and stability of QLP were proved for both uniform and non-uniform traffic distributions. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best-effort traffic in a crossbar without speedup.
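The queue-length-proportional idea can be sketched as follows; the scaling rule used here (dividing by the largest input or output demand) is just one illustrative way to enforce feasibility, not the dissertation's actual QLP algorithm:

```python
def qlp_allocate(queue_lengths, line_rate=1.0):
    """Allocate bandwidth in proportion to VOQ lengths (illustrative sketch).

    queue_lengths[i][j] is the queue length at input i for output j.
    The result is feasible: no input (row) or output (column) exceeds line_rate.
    """
    n = len(queue_lengths)
    rates = [[float(q) for q in row] for row in queue_lengths]
    row_sums = [sum(row) for row in rates]
    col_sums = [sum(rates[i][j] for i in range(n)) for j in range(n)]
    scale = max(row_sums + col_sums) or 1.0  # avoid division by zero
    return [[line_rate * r / scale for r in row] for row in rates]

# Two inputs, two outputs: input 0 has a long queue for output 0
rates = qlp_allocate([[4, 0], [2, 2]])
```

Longer queues receive proportionally more bandwidth, which is what pushes the switch toward work-conserving behavior.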
Abstract:
Since multimedia data, such as images and videos, are far more expressive and informative than ordinary text-based data, people find them more attractive for communication and expression. Additionally, with the rising popularity of social networking tools such as Facebook and Twitter, multimedia information retrieval can no longer be considered a solitary task. Rather, people constantly collaborate with one another while searching and retrieving information. But the very source of multimedia data's popularity, the huge amount and variety of information a single data object can carry, makes its management a challenging task. Multimedia data is commonly represented as multidimensional feature vectors and carries high-level semantic information. These two characteristics make it very different from traditional alpha-numeric data. Thus, trying to manage it with frameworks and rationales designed for primitive alpha-numeric data is inefficient. An index structure is the backbone of any database management system, and it has been seen that the index structures present in existing relational database management frameworks cannot handle multimedia data effectively. Thus, in this dissertation, a generalized multidimensional index structure is proposed which accommodates the atypical multidimensional representation and the semantic information carried by different multimedia data seamlessly within one single framework. Additionally, the dissertation investigates the evolving relationships among multimedia data in a collaborative environment and how such information can help to customize the design of the proposed index structure when it is used to manage multimedia data in a shared environment. Extensive experiments were conducted to demonstrate the usability and better performance of the proposed framework over current state-of-the-art approaches.
Abstract:
The development of 3G (third-generation telecommunication) value-added services brings higher requirements for Quality of Service (QoS). Wideband Code Division Multiple Access (WCDMA) is one of the three 3G standards, and enhancement of QoS for the WCDMA Core Network (CN) is becoming more and more important for users and carriers. This dissertation focuses on enhancement of QoS for the WCDMA CN. The purpose is to realize the DiffServ (Differentiated Services) model of QoS for the WCDMA CN. Based on the parallelism of Network Processors (NPs), NP programming models are classified as Pool of Threads (POTs) and Hyper Task Chaining (HTC). In this study, an integrated programming model that combines the two was designed. This model is highly efficient and flexible, and also solves the problems of sharing conflicts and packet ordering. We used it as the programming model to realize DiffServ QoS for the WCDMA CN. The realization of the DiffServ model mainly consists of buffer management, packet scheduling and packet classification algorithms based on NPs. First, we proposed an adaptive buffer management algorithm called Packet Adaptive Fair Dropping (PAFD), which takes both fairness and throughput into consideration and has smooth service curves. Then, an improved packet scheduling algorithm called Priority-based Weighted Fair Queuing (PWFQ) was introduced to ensure the fairness of packet scheduling and reduce the queuing time of data packets; at the same time, the delay and jitter are kept within a small range. Thirdly, a multi-dimensional packet classification algorithm called Classification Based on Network Processors (CBNPs) was designed. It effectively reduces memory accesses and storage space, and offers lower time and space complexity. Lastly, an integrated hardware and software system implementing the DiffServ model of QoS for the WCDMA CN was proposed. It was implemented on the NP IXP2400.
According to the experimental results, the proposed system significantly enhanced QoS for the WCDMA CN. It markedly improves response-time consistency, display distortion and sound-image synchronization, and thus increases network efficiency and saves network resources.
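For reference, the fairness mechanism underlying schemes like PWFQ can be illustrated with a plain weighted-fair-queuing sketch based on virtual finish times; the priority term that PWFQ adds is not reproduced here, and the simplification below assumes all packets are backlogged from time zero:

```python
import heapq

def wfq_order(packets, weights):
    """Serve packets in order of weighted virtual finish times.

    packets: list of (flow_id, length); weights: flow_id -> weight.
    Assumes all packets are present at time zero (a simplification).
    """
    finish = {flow: 0.0 for flow in weights}
    heap = []
    for seq, (flow, length) in enumerate(packets):
        finish[flow] += length / weights[flow]           # virtual finish time
        heapq.heappush(heap, (finish[flow], seq, flow))  # seq breaks ties FIFO
    order = []
    while heap:
        _, _, flow = heapq.heappop(heap)
        order.append(flow)
    return order

# Flow "a" has twice the weight of flow "b"; all packets are 100 bytes
order = wfq_order([("a", 100), ("b", 100), ("a", 100)], {"a": 2.0, "b": 1.0})
```

Flow "a", with twice the weight, finishes its first packet in half the virtual time, so it is served before "b" despite equal packet lengths; that weight-proportional sharing is the fairness property PWFQ builds on.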