959 results for Engineering, Electronics and Electrical | Physics, Condensed Matter
Abstract:
Magnetic resonance imaging (MRI), with its exquisite soft tissue contrast, is an ideal modality for investigating spinal cord pathology. While conventional MRI techniques are very sensitive to spinal cord pathology, their specificity is somewhat limited. Diffusion MRI is an advanced technique that is a sensitive and specific indicator of the integrity of white matter tracts; it has been shown to detect early ischemic changes in white matter while conventional imaging demonstrates no change. By acquiring the complete apparent diffusion tensor (ADT), tissue diffusion properties can be expressed in terms of quantitative, rotationally invariant parameters.

Systematic study of spinal cord injury (SCI) in vivo requires controlled animal models such as the popular rat model. To date, studies of spinal cord using ADT imaging have been performed exclusively in fixed, excised spinal cords, introducing inevitable artifacts and sacrificing the benefits of MRI's noninvasive nature. In vivo imaging reflects the actual in vivo tissue properties and allows each animal to be imaged at multiple time points, greatly reducing the number of animals required to achieve statistical significance. Because the spinal cord is very small, the available signal-to-noise ratio (SNR) is very low. Prior spin-echo-based ADT studies of rat spinal cord have relied on high magnetic field strengths and long imaging times, on the order of 10 hours, for adequate SNR. Such long imaging times are incompatible with in vivo imaging and are not relevant for imaging the early phases following SCI. Echo planar imaging (EPI) is one of the fastest imaging methods and is popular for diffusion imaging. However, EPI further lowers the image SNR and is very sensitive to small imperfections in the magnetic field, such as those introduced by the bony spine. Additionally, the small field-of-view (FOV) needed for spinal cord imaging requires large imaging gradients, which generate EPI artifacts. The addition of diffusion gradients introduces yet further artifacts.

This work develops a method for rapid EPI-based in vivo diffusion imaging of rat spinal cord. The method involves improving the SNR using an implantable coil, reducing magnetic field inhomogeneities by means of an autoshim, and correcting EPI artifacts by post-processing. New EPI artifacts due to diffusion gradients are described, and post-processing correction techniques are developed.

These techniques were used to obtain rotationally invariant diffusion parameters from 9 animals in vivo, and were validated against the gold-standard, but slow, spin-echo-based diffusion sequence. These are the first reported measurements of the ADT in spinal cord in vivo.

Many of the techniques described are equally applicable to imaging of human spinal cord. We anticipate that these techniques will aid in evaluating and optimizing potential therapies, and will lead to improved patient care.
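Rotationally invariant diffusion parameters of the kind mentioned above are scalar functions of the tensor's eigenvalues, such as the mean diffusivity and the fractional anisotropy. A minimal sketch in NumPy (illustrative only, not the dissertation's actual processing pipeline), assuming a symmetric 3x3 tensor:

```python
import numpy as np

def invariant_params(D):
    """Rotationally invariant parameters from a symmetric 3x3 diffusion tensor."""
    lam = np.linalg.eigvalsh(D)      # eigenvalues do not depend on orientation
    md = lam.mean()                  # mean diffusivity = trace(D) / 3
    # fractional anisotropy: normalized spread of the eigenvalues
    fa = np.sqrt(1.5 * np.sum((lam - md) ** 2) / np.sum(lam ** 2))
    return md, fa

# isotropic tensor (free diffusion): FA should be zero
D_iso = np.eye(3) * 0.7e-3
md, fa = invariant_params(D_iso)
```

Because the eigenvalues are unchanged by rotation of the coordinate frame, these parameters do not depend on the cord's orientation relative to the gradient axes.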
Abstract:
Three new technologies have been brought together to develop a miniaturized radiation monitoring system. The research involved (1) investigation of a new HgI2 detector, (2) VHDL modeling, (3) FPGA implementation, and (4) in-circuit verification. The components used included an EG&G HgI2 crystal manufactured at zero gravity, Viewlogic's VHDL and synthesis tools, Xilinx's technology library and FPGA implementation tool, and a high-density device (XC4003A). The results show (1) reduced cycle time between design and hardware implementation; (2) unlimited redesign and reimplementation using static RAM technology; (3) customer-based design, verification, and system construction; and (4) suitability for intelligent systems. These advantages surpass conventional chip design technologies and methods in ease of use, cycle time, and price for medium-sized VLSI applications. It is also expected that the density of these devices will improve radically in the near future.
Abstract:
In 1972 the ionized cluster beam (ICB) deposition technique was introduced as a new method for thin film deposition. At that time the use of clusters was postulated to enhance film nucleation and adatom surface mobility, resulting in high quality films. Although a few researchers reported singly ionized clusters containing 10^2 to 10^3 atoms, others were unable to repeat their work. The consensus now is that film effects in the early investigations were due to self-ion bombardment rather than clusters. Subsequently, in recent work (early 1992), synthesis of large clusters of zinc without the use of a carrier gas was demonstrated by Gspann and repeated in our laboratory. Clusters resulted from very significant changes in two source parameters: crucible pressure was increased from the earlier 2 Torr to several thousand Torr, and a converging-diverging nozzle 18 mm long and 0.4 mm in diameter at the throat was used in place of the 1 mm x 1 mm nozzle used in the early work. While this is practical for zinc and other high vapor pressure materials, it remains impractical for many materials of industrial interest such as gold, silver, and aluminum. The work presented here describes results using gold and silver at pressures of around 1 and 50 Torr in order to study the effect of the pressure and nozzle shape. Significant numbers of large clusters were not detected. Deposited films were studied by atomic force microscopy (AFM) for roughness analysis, and by X-ray diffraction.

Nanometer-size islands of zinc deposited on flat silicon substrates by ICB were also studied by atomic force microscopy, and the number of atoms/cm^2 was calculated and compared to data from Rutherford backscattering spectrometry (RBS). To improve the agreement between the AFM and RBS data, convolution and deconvolution algorithms were implemented to study and simulate the interaction between tip and sample in atomic force microscopy. The deconvolution algorithm takes into account the physical volume occupied by the tip, resulting in an image that is a more accurate representation of the surface.

One method increasingly used to study the deposited films, both during and after growth, is ellipsometry. Ellipsometry is a surface analytical technique used to determine the optical properties and thickness of thin films; in situ measurements can be made through the windows of a deposition chamber. A method was developed for determining the optical properties of a film that is sensitive only to the growing film and accommodates underlying interfacial layers, multiple unknown underlayers, and other unknown substrates. This method is carried out by making an initial ellipsometry measurement well past the real interface and by defining a virtual interface in the vicinity of this measurement.
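Tip-sample interaction in AFM is commonly modeled with gray-scale morphological operations: dilating the surface by the tip shape simulates imaging, and eroding the image by the tip performs the deconvolution, recovering an upper bound on the true surface. A minimal 1-D sketch of this idea (the dissertation's algorithms operate on full surface data and are not reproduced here):

```python
import numpy as np

def dilate(surface, tip):
    """Simulated AFM trace: gray-scale dilation of the surface by the tip."""
    n, h = len(surface), len(tip) // 2
    img = np.full(n, -np.inf)
    for j, t in enumerate(tip):
        for i in range(n):
            k = i - (j - h)          # surface point the tip flank touches
            if 0 <= k < n:
                img[i] = max(img[i], surface[k] + t)
    return img

def erode(image, tip):
    """Tip deconvolution: gray-scale erosion of the image by the tip."""
    n, h = len(image), len(tip) // 2
    est = np.full(n, np.inf)
    for j, t in enumerate(tip):
        for i in range(n):
            k = i + (j - h)
            if 0 <= k < n:
                est[i] = min(est[i], image[k] - t)
    return est

# sharp spike on a flat surface; tip heights are relative to its apex
surface = np.zeros(21)
surface[10] = 5.0
tip = np.array([-2.0, -0.5, 0.0, -0.5, -2.0])
reconstructed = erode(dilate(surface, tip), tip)
```

Erosion of the dilated image yields the morphological closing of the surface: the tightest reconstruction consistent with the tip geometry. Features narrower than the tip cannot be fully recovered, which is exactly the tip-volume effect the deconvolution accounts for.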
Abstract:
The objectives of this research are to analyze and develop a modified Principal Component Analysis (PCA) and to develop a two-dimensional PCA with applications in image processing. PCA is a classical multivariate technique whose mathematical treatment is based purely on the eigensystem of positive-definite symmetric matrices. Its main function is to statistically transform a set of correlated variables into a new set of uncorrelated variables over R^n while retaining most of the variation present in the original variables.

The variances of the Principal Components (PCs) obtained from the modified PCA form a correlation matrix of the original variables. The decomposition of this correlation matrix into a diagonal matrix produces an orthonormal basis that can be used to linearly transform the given PCs; it is this linear transformation that reproduces the original variables. The two-dimensional PCA can be devised as two successive applications of one-dimensional PCA. It can be shown that, for an m x n matrix, the PCs obtained from the two-dimensional PCA are the singular values of that matrix.

In this research, several applications of PCA to image analysis are developed: edge detection, feature extraction, and multi-resolution PCA decomposition and reconstruction.
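The link between the two-dimensional PCA and singular values can be checked numerically: the eigenvalues of AA^T, the symmetric matrix a row-wise (uncentered) PCA diagonalizes, are the squared singular values of A. A brief NumPy sketch of classical PCA plus this check (illustrative; the dissertation's modified PCA is not reproduced here):

```python
import numpy as np

def pca(X):
    """Classical PCA via the eigensystem of the sample covariance matrix.
    Returns variances (descending) and the orthonormal basis of PCs."""
    Xc = X - X.mean(axis=0)              # center each variable
    C = Xc.T @ Xc / (len(X) - 1)         # symmetric positive semi-definite
    var, B = np.linalg.eigh(C)           # eigh returns ascending eigenvalues
    return var[::-1], B[:, ::-1]

# 2-D PCA vs. SVD: eigenvalues of A A^T equal squared singular values of A
A = np.random.default_rng(1).normal(size=(5, 4))
s = np.linalg.svd(A, compute_uv=False)            # 4 singular values
lam = np.linalg.eigvalsh(A @ A.T)[::-1][:4]       # top 4 eigenvalues
```

The eigendecomposition route and the SVD route agree up to floating-point error, which is the identity the abstract's claim rests on.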
Abstract:
Recent advances in telecommunications technologies have transformed the modes of learning and teaching. One potentially vital component in the equation will be remote education or remote learning: the ability to compress time and space between teachers and students through the judicious application of technology. The purpose of this thesis is to develop a Remote Learning and Laboratory Center (RLLC) over the Internet and ISDN, which provides education and access to resources, with audio, video, and data, to those living in remote areas, children in hospitals, and traveling families.

The RLLC is not restricted to traditional educational settings such as universities or colleges; it can be very useful for companies to train their engineers over networks. This capability will facilitate the best use of scarce, high-quality educational resources, bring equity of service to students, and help industry train its engineers. The RLLC over the Internet and ISDN has been described in detail and implemented successfully. For the Remote Laboratory, the experimental procedure has been demonstrated on a reprogrammable CPLD design using an ISR Kit.
Abstract:
Small errors can prove catastrophic. Our purpose is to remark that a very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance: small differences in the initial conditions produce very great ones in the final phenomena, and a small error in the former produces an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that a pair of test conditions defines a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device under test; the actual test defines how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it puts those inputs as close as possible to the actual switching critical points and guarantees that the device will meet its input-output specifications.

Prediction becomes impossible by the classical analytical methods bounded by Newton and Euclid. We have found that nonlinear dynamic behavior is the natural state of all circuits and devices, and that opportunities exist for effective error detection in a nonlinear dynamics and chaos environment.

Today, a set of linear limits is established around every aspect of a digital or analog circuit, and devices that fail the corresponding tests are considered bad. That deterministic chaos occurs in circuits is a fact, not a possibility, as confirmed by this Ph.D. research. In standard linear informational methodologies, this chaotic data is usually considered undesirable, and practitioners are trained to seek a more regular stream of output data.

This Ph.D. research explored the possibility of taking the foundation of a well-known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error-detection instrument able to bring together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical system status determination.
Abstract:
This dissertation describes the research carried out in developing an MPS (Multipurpose Portable System), which consists of an instrument and many accessories. The instrument is portable, hand-held, and rechargeable-battery operated, and it measures temperature, absorbance, and concentration of samples using optical principles. The system also performs auxiliary functions such as incubation and mixing, and can be used in environmental, industrial, and medical applications.

The research emphasis is on system modularity, easy configuration, accuracy of measurements, power management schemes, reliability, low cost, computer interfacing, and networking. The instrument can send its data to a computer for analysis and presentation, or to a printer.

This dissertation presents a full working system. This involved integration of hardware, firmware for the micro-controller written in assembly language, software written in C, and other application modules.

The instrument contains the optics, transimpedance amplifiers, voltage-to-frequency converters, LCD display, lamp driver, battery charger, battery manager, timer, interface port, and micro-controller.

The accessories are a printer, a data acquisition adapter (to transfer the measurements to a computer via the printer port and expand the analog/digital conversion capability), a car plug adapter, and an AC transformer. The system has been fully evaluated for fault tolerance, and these schemes are also presented.
Abstract:
A novel thermal management technology for advanced ceramic microelectronic packages has been developed, incorporating miniature heat pipes embedded in the ceramic substrate. The heat pipes use an axially grooved wick structure and water as the working fluid. Prototype substrate/heat-pipe systems were fabricated using high-temperature co-fired ceramic (alumina). The heat pipes were nominally 81 mm in length, 10 mm in width, and 4 mm in height, and were charged with approximately 50–80 μL of water. Platinum thick-film heaters were fabricated on the surface of the substrate to simulate heat-dissipating electronic components. Several thermocouples were affixed to the substrate to monitor temperature, and one end of the substrate was affixed to a heat sink maintained at constant temperature. The prototypes were tested and shown to operate successfully and reliably with thermal loads over 20 W, with thermal input from single and multiple sources along the surface of the substrate. Temperature distributions are discussed for the various configurations, and the effective thermal resistance of the substrate/heat-pipe system is calculated. Finite element analysis was used to support the experimental findings and to better understand the sources of the system's thermal resistance.
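The effective thermal resistance reported for such a system is conventionally the source-to-sink temperature difference divided by the dissipated power. A one-line sketch with hypothetical numbers (not measured values from this work):

```python
def thermal_resistance(t_source, t_sink, power):
    """Effective thermal resistance in K/W: R = (T_source - T_sink) / Q."""
    return (t_source - t_sink) / power

# hypothetical illustration: 25 K rise at a 20 W load -> 1.25 K/W
r = thermal_resistance(t_source=85.0, t_sink=60.0, power=20.0)
```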
Abstract:
This dissertation develops a new mathematical approach that overcomes the effect of a data processing phenomenon known as “histogram binning,” inherent to flow cytometry data. A real-time procedure is introduced to prove the effectiveness and fast implementation of this approach on real-world data. The histogram binning effect is a dilemma posed by two seemingly antagonistic developments: (1) flow cytometry data in histogram form is extended in dynamic range to improve its analysis and interpretation, and (2) this dynamic range extension inevitably introduces an unwelcome side effect, the binning effect, which skews the statistics of the data, undermining the accuracy of the analysis and the eventual interpretation of the data.

Researchers in the field have contended with this dilemma for many years, resorting either to hardware approaches, which are rather costly and carry inherent calibration and noise effects, or to software techniques that filter the binning effect but do not successfully preserve the statistical content of the original data.

The mathematical approach introduced in this dissertation is sufficiently novel that a patent application has been filed. The contribution of this dissertation is an incremental scientific innovation based on a mathematical framework that will allow researchers in the field of flow cytometry to improve the interpretation of data, knowing that its statistical meaning has been faithfully preserved for optimized analysis. Furthermore, the same mathematical foundation provides a proof of the origin of this inherent artifact.

These results are unique in that new mathematical derivations are established to define and solve the critical problem of the binning effect faced at the experimental assessment level, providing a data platform that preserves its statistical content.

In addition, a novel method for accumulating the log-transformed data was developed. This new method uses the properties of the transformation of statistical distributions to accumulate the output histogram in a non-integer, multi-channel fashion. Although the mathematics of this new mapping technique seems intricate, the concise nature of the derivations allows for an implementation procedure that lends itself to real-time execution using lookup tables, a task that is also introduced in this dissertation.
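A lookup-table scheme of this general kind can be sketched as follows: each linear acquisition channel is mapped once, ahead of time, to a fractional position on the log axis, and each event's count is then split between the two adjacent log bins so that no probability mass is lost. This is an illustrative stand-in for the dissertation's method, with hypothetical channel counts and decade settings:

```python
import numpy as np

def build_lut(n_channels, n_bins, decades=4.0):
    """Precompute each linear channel's fractional position on the log axis."""
    ch = np.arange(1, n_channels + 1, dtype=float)
    pos = (np.log10(ch / ch[-1]) / decades + 1.0) * (n_bins - 1)
    pos = np.clip(pos, 0.0, n_bins - 1.0)
    lo = np.minimum(np.floor(pos).astype(int), n_bins - 2)
    return lo, pos - lo          # integer bin and fractional remainder

def log_histogram(events, n_channels=1024, n_bins=256):
    """Accumulate linear-channel events into log bins, splitting each count
    between two adjacent bins (non-integer, multi-channel accumulation)."""
    lo, frac = build_lut(n_channels, n_bins)
    hist = np.zeros(n_bins)
    for e in events:             # e is a 1-based linear channel number
        i = lo[e - 1]
        hist[i] += 1.0 - frac[e - 1]
        hist[i + 1] += frac[e - 1]
    return hist
```

Because the fractional split conserves each event's unit weight, the total count, and hence the normalization of the distribution, survives the log transformation, unlike naive integer rebinning.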
Abstract:
The contributions of this dissertation are the development of two new, interrelated approaches to video data compression: (1) a level-refined motion estimation and subband compensation method for effective motion estimation and motion compensation, and (2) a shift-invariant sub-decimation decomposition method to overcome the deficiency of the decimation process in estimating motion, which stems from the shift-variant property of the wavelet transform.

The enormous volume of data generated by digital video creates an intense need for efficient video compression techniques to conserve storage space and minimize bandwidth utilization. The main idea of video compression is to reduce the interpixel redundancies within and between video frames by applying motion estimation and motion compensation (ME/MC) in combination with spatial transform coding. To locate the global minimum of the matching criterion function at reasonable cost, hierarchical motion estimation with coarse-to-fine resolution refinement using the discrete wavelet transform is applied, owing to its intrinsic multiresolution and scalability.

Because most of the energy is concentrated in the low-resolution subbands and decreases in the high-resolution subbands, a new approach called the level-refined motion estimation and subband compensation (LRSC) method is proposed. It identifies the possible intrablocks in the subbands for lower-entropy coding while keeping the low computational load of level-refined motion estimation, thereby achieving both temporal compression quality and computational simplicity.

Since circular convolution is applied in the wavelet transform to obtain the decomposed subframes without coefficient expansion, a symmetric-extended wavelet transform is designed for the finite-length frame signals to allow more accurate motion estimation without discontinuous boundary distortions.

Although wavelet-transformed coefficients still contain spatial domain information, motion estimation in the wavelet domain is not as straightforward as in the spatial domain because of the shift variance of the decimation process in the wavelet transform. A new approach called the sub-decimation decomposition method is proposed, which maintains the motion consistency between the original frame and the decomposed subframes, thereby improving wavelet-domain video compression through shift-invariant motion estimation and compensation.
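The coarse-to-fine idea behind level-refined motion estimation can be illustrated with a two-level block search: a full search at half resolution supplies a motion-vector prediction, which a small search at full resolution then refines. A simplified sketch using plain decimation and SAD matching (the dissertation's subband and LRSC machinery is not reproduced here):

```python
import numpy as np

def sad(a, b):
    """Sum of absolute differences: the block-matching criterion."""
    return np.abs(a.astype(float) - b.astype(float)).sum()

def block_search(ref, cur, top, left, bs, center, radius):
    """Full search around a predicted displacement; returns the best (dy, dx)."""
    block = cur[top:top + bs, left:left + bs]
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(center[0] - radius, center[0] + radius + 1):
        for dx in range(center[1] - radius, center[1] + radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y and 0 <= x and y + bs <= ref.shape[0] and x + bs <= ref.shape[1]:
                cost = sad(ref[y:y + bs, x:x + bs], block)
                if cost < best_cost:
                    best_cost, best_mv = cost, (dy, dx)
    return best_mv

def coarse_to_fine_mv(ref, cur, top, left, bs=8):
    """Level-refined search: estimate at half resolution, refine at full."""
    half = lambda im: im[::2, ::2]                  # crude decimation
    cdy, cdx = block_search(half(ref), half(cur),
                            top // 2, left // 2, bs // 2, (0, 0), 4)
    return block_search(ref, cur, top, left, bs, (2 * cdy, 2 * cdx), 1)
```

The coarse search covers a wide range cheaply on quarter-size data; the fine search only examines a 3x3 neighborhood of the scaled-up prediction, which is the source of the computational savings.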
Abstract:
The strong couplings between different degrees of freedom are believed to be responsible for the novel and complex phenomena discovered in transition metal oxides (TMOs). This physical complexity is directly responsible for their tunability. Creating surfaces and interfaces adds a 'man-made' twist for approaching the quantum phenomena of correlated materials.

This dissertation focuses on the structural and electronic properties near the surface of three prototype TMO compounds, studied with three complementary techniques: scanning tunneling microscopy, angle-resolved photoelectron spectroscopy, and low-energy electron diffraction. Particular emphasis is placed on the effects of broken symmetry and of imperfections, such as defects, on the coupling between the charge and lattice degrees of freedom.

Ca1.5Sr0.5RuO4 is a layered ruthenate with a square lattice, located at the boundary of magnetic/orbital instability in Ca2-xSrxRuO4. The substitution of Sr2+ with Ca2+ causes RuO6 rotation, which narrows the dxy band width and changes the Fermi surface topology. In particular, the γ (dxy) Fermi surface sheet is hole-like in Ca1.5Sr0.5RuO4, in contrast to electron-like in Sr2RuO4, showing a strong charge-lattice coupling.

Na0.75CoO2 is a layered cobaltite with a triangular lattice exhibiting extraordinary thermoelectric properties. A well-ordered CoO2-terminated surface with a random Na distribution was observed; however, the lattice constants of the surface are smaller than those in the bulk. The surface density of states (DOS) shows a strong temperature dependence. In particular, an unusual shift of the DOS minimum occurs below 230 K, clearly indicating a local charging effect on the surface.

Cd2Re2O7 is the first known pyrochlore oxide superconductor (Tc ∼ 1 K). It exhibits an unusual second-order phase transition at TS1 = 200 K and a controversial first-order transition at TS2 = 120 K. While bulk properties display large anomalies at TS1 but rather subtle and sample-dependent changes at TS2, the surface DOS near EF shows no change at TS1 but a substantial increase below TS2, a complete reversal of the signatures of the transitions. We argue that crystal imperfections, mainly defects, which are considerably enhanced at the surface, cause the transition at TS2.
Abstract:
Today's wireless networks rely mostly on infrastructural support for their operation. As the concept of ubiquitous computing grows more popular, research on infrastructureless networks has been expanding rapidly. However, such networks face serious security challenges when deployed. This dissertation focuses on designing a secure routing solution and trust model for these infrastructureless networks.

The dissertation presents a trusted routing protocol that is capable of finding a secure end-to-end route in the presence of malicious nodes acting either independently or in collusion. The solution protects the network from active internal attacks, known to be the most severe type of attack in ad hoc applications. Route discovery is based on the trust levels of the nodes, which must be computed dynamically to reflect malicious behavior in the network. Accordingly, we developed a trust computational model, in conjunction with the secure routing protocol, that analyzes the different malicious behaviors and quantifies them within the model itself. Our work is the first step toward protecting an ad hoc network from colluding internal attacks. To demonstrate the feasibility of the approach, extensive simulation was carried out to evaluate the protocol's efficiency and its scalability with both network size and mobility.

This research lays the foundation for developing a variety of techniques that will permit people to justifiably trust ad hoc networks to perform critical functions and to process sensitive information without depending on any infrastructural support, and hence will broaden the use of ad hoc applications in both military and civilian domains.
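The interplay between dynamic trust computation and trust-based route discovery can be sketched with two small pieces: an exponentially weighted trust update driven by observed behavior, and route selection by the weakest node on each candidate path. This is an illustrative simplification; the dissertation's model quantifies several distinct malicious behaviors, not just packet dropping:

```python
def update_trust(trust, forwarded, alpha=0.1):
    """Exponentially weighted trust update from one observed forwarding event.
    Repeated misbehavior (forwarded=False) steadily drives trust toward 0."""
    return (1 - alpha) * trust + alpha * (1.0 if forwarded else 0.0)

def path_trust(node_trust, path):
    """A route is only as trustworthy as its least trusted node."""
    return min(node_trust[n] for n in path)

def best_route(node_trust, routes):
    """Trust-based route discovery: pick the candidate path with highest trust."""
    return max(routes, key=lambda p: path_trust(node_trust, p))
```

Using the minimum over the path (rather than, say, the product) captures the fact that a single colluding insider on an otherwise honest route compromises the whole route.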
Abstract:
Series Micro-Electro-Mechanical System (MEMS) switches based on a superconductor are utilized to switch between two bandpass hairpin filters with bandwidths of 365 MHz and nominal center frequencies of 2.1 GHz and 2.6 GHz. This was accomplished with four switches actuated in pairs, one pair at a time: when one pair was actuated, the first bandpass filter was coupled to the input and output ports; when the other pair was actuated, the second bandpass filter was coupled to the input and output ports. The device is made of a YBa2Cu3O7 thin film deposited on a 20 mm x 20 mm LaAlO3 substrate by pulsed laser deposition. BaTiO3 deposited by RF magnetron sputtering is utilized as the insulation layer at the switching points of contact. The results demonstrate a well-performing device that switches at 68 V at a temperature of 40 K for the 2.1 GHz filter, and at 75 V at a temperature of 30 K for the 2.6 GHz hairpin filter.
Abstract:
This dissertation establishes the foundation for a new 3-D visual interface integrating Magnetic Resonance Imaging (MRI) with Diffusion Tensor Imaging (DTI). Such an interface is critical for understanding brain dynamics and for providing more accurate diagnosis of key brain dysfunctions in terms of neuronal connectivity.

This work involved two research fronts: (1) the development of new image processing and visualization techniques to accurately establish the relational positioning of neuronal fiber tracts and key landmarks in 3-D brain atlases, and (2) the need to address the computational requirements so that processing time falls within the practical bounds of clinical settings. The system was evaluated using data from thirty patients and volunteers of the Brain Institute at Miami Children's Hospital.

Innovative visualization mechanisms allow, for the first time, white matter fiber tracts to be displayed alongside key anatomical structures within accurately registered 3-D semi-transparent images of the brain.

The segmentation algorithm is based on the calculation of mathematically tuned thresholds and region-detection modules. Its uniqueness lies in its ability to perform fast and accurate segmentation of the ventricles: in contrast to manual selection of the ventricles, which averaged over 12 minutes, the segmentation algorithm averaged less than 10 seconds.

The registration algorithm searches and compares MR with DT images of the same subject, with derived correlation measures quantifying the resulting accuracy. Overall, the images were 27% more correlated after registration, while an average of only 1.5 seconds was needed to execute the processes of registration, interpolation, and re-slicing of the images, simultaneously and in all the given dimensions.

This interface was fully embedded into a fiber-tracking software system to establish an optimal research environment. The resulting highly integrated 3-D visualization system has reached a practical level that makes it ready for clinical deployment.
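A derived correlation measure of the kind used to quantify registration accuracy can be sketched as a normalized cross-correlation between the two images: a rigid misalignment lowers it, and a correct registration restores it. This is illustrative only; the dissertation's specific measure and transform model are not reproduced here:

```python
import numpy as np

def correlation(a, b):
    """Normalized cross-correlation between two same-sized images.
    Returns 1.0 when the images are identical up to a linear intensity map."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float((a * b).mean())
```

Comparing this score before and after alignment gives a single scalar summary of registration quality, in the spirit of the "27% more correlated" figure reported above.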
Abstract:
Biometrics is a field of study that pursues the association of a person's identity with his or her physiological or behavioral characteristics.

As one aspect of biometrics, face recognition has attracted special attention because it is a natural and noninvasive means of identifying individuals. Most previous studies in face recognition are based on two-dimensional (2D) intensity images. Face recognition based on 2D intensity images, however, is sensitive to changes in illumination and subject orientation, which affect the recognition results. With the development of three-dimensional (3D) scanners, 3D face recognition is being explored as an alternative to the traditional 2D methods.

This dissertation proposes a method in which the expression and the identity of a face are determined in an integrated fashion from 3D scans. In this framework, a front-end expression recognition module sorts each incoming 3D face according to the expression detected in the scan. Scans with neutral expressions are processed by a corresponding 3D neutral face recognition module; alternatively, if a scan displays a non-neutral expression, e.g., a smile, it is routed to an appropriate specialized recognition module for smiling face recognition.

The expression recognition method proposed in this dissertation is innovative in that it uses information from 3D scans to perform the classification task. A smiling face recognition module was developed, based on statistical modeling of the variance between faces with a neutral expression and faces with a smiling expression.

The proposed expression and face recognition framework was tested with a database containing 120 3D scans from 30 subjects (half neutral faces and half smiling faces). The proposed framework achieves a recognition rate 10% higher than attempting identification with only the neutral face recognition module.
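The framework's control flow, expression classification followed by dispatch to a specialized identity module, can be sketched as a simple router. The classifier and module names here are placeholders for illustration, not the dissertation's actual components:

```python
def recognize(scan, classify_expression, modules):
    """Route a 3-D scan to the identity module matching its detected expression,
    falling back to the neutral module for unrecognized expressions."""
    expression = classify_expression(scan)
    return modules.get(expression, modules['neutral'])(scan)

# stub classifier and identity modules, purely for illustration
modules = {
    'neutral': lambda s: 'id-by-neutral-module',
    'smiling': lambda s: 'id-by-smiling-module',
}
result = recognize({'expr': 'smiling'}, lambda s: s['expr'], modules)
```

Adding a recognition module for another expression only requires registering one more entry in the dispatch table, which is the extensibility argument for the front-end/specialized-module split.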