74 results for Electrical engineering|Electromagnetics|Energy


Relevance: 100.00%

Abstract:

Recently, wireless network technology has grown at such a pace that scientific research has become a practical reality in a very short time span. Mobile wireless communications have witnessed the adoption of several generations, each complementing and improving on the previous one. One mobile system that features high data rates and an open network architecture is 4G. Currently, the research community and industry in the field of wireless networks are working on possible choices for solutions in the 4G system. 4G is a collection of technologies and standards that will allow a range of ubiquitous computing and wireless communication architectures. The researcher considers the ability to guarantee reliable communications at rates from 100 Mbps on high-mobility links up to 1 Gbps for low-mobility users, together with high spectral efficiency, to be one of the most important characteristics of future 4G mobile systems. In mobile wireless communication networks, coverage of large geographical areas is an important factor. In 4G systems, a hybrid satellite/terrestrial network is crucial to providing users with coverage wherever needed. Subscribers thus require a reliable satellite link to access their services when they are in remote locations where a terrestrial infrastructure is unavailable, and so they must rely upon satellite coverage. A good modulation and access technique is also required in order to transmit high data rates over satellite links to mobile users. This technique must adapt to the characteristics of the satellite channel and also use the allocated bandwidth efficiently. Satellite links are fading channels when used by mobile users. Measures designed to mitigate these fading environments make use of: (1) spatial diversity (a two-receive-antenna configuration); (2) time diversity (channel interleaver/spreading techniques); and (3) upper-layer FEC. The author proposes the use of OFDM (Orthogonal Frequency Division Multiplexing) for the satellite link, increasing the time diversity. This technique will allow for an increase in the data rate, as primarily required by multimedia applications, and will also make optimal use of the available bandwidth. In addition, this dissertation addresses the use of cooperative satellite communications for hybrid satellite/terrestrial networks. By using this technique, the satellite coverage can be extended to areas where there is no direct link to the satellite. For this purpose, a good channel model is necessary.
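
The abstract proposes OFDM over the satellite link; the sketch below shows, under simplified assumptions, how one OFDM symbol is built from QPSK data with an IFFT and a cyclic prefix. The subcarrier count, prefix length, and mapping are illustrative choices, not the dissertation's satellite air-interface parameters.

```python
import numpy as np

def ofdm_modulate(qpsk_symbols, n_subcarriers=64, cp_len=16):
    """Map one block of QPSK symbols onto OFDM subcarriers via an IFFT and
    prepend a cyclic prefix (illustrative parameters, not the dissertation's
    satellite air interface)."""
    assert len(qpsk_symbols) == n_subcarriers
    time_signal = np.fft.ifft(qpsk_symbols) * np.sqrt(n_subcarriers)
    cyclic_prefix = time_signal[-cp_len:]          # guard interval against multipath echoes
    return np.concatenate([cyclic_prefix, time_signal])

# Example: one OFDM symbol carrying random QPSK data
rng = np.random.default_rng(0)
bits = rng.integers(0, 2, size=(64, 2))
qpsk = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)
tx_symbol = ofdm_modulate(qpsk)
print(tx_symbol.shape)   # (80,) = 64 samples + 16-sample cyclic prefix
```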

Relevance: 100.00%

Abstract:

Microstructure manipulation is a fundamental process in the study of biology and medicine, as well as in advancing micro- and nano-system applications. Manipulation of microstructures has been achieved through various microgripper devices developed recently, which have led to advances in micromachine assembly and single-cell manipulation, among others. Only two kinds of integrated feedback have been demonstrated so far: force sensing and optical binary feedback. As a result, the physical, mechanical, optical, and chemical information about the microstructure under study must be extracted from macroscopic instrumentation, such as confocal fluorescence microscopy and Raman spectroscopy. In this research work, novel Micro-Opto-Electro-Mechanical-System (MOEMS) microgrippers are presented. These devices utilize flexible optical waveguides as gripping arms, which provide the physical means for grasping a microobject while simultaneously enabling light to be delivered and collected. This unique capability allows extensive optical characterization of the structure being held, such as transmission, reflection, or fluorescence. The microgrippers require external actuation, which was accomplished by two methods: initially with a micrometer screw, and later with a piezoelectric actuator. Thanks to a novel actuation mechanism, the "fishbone", the gripping facets remain parallel to within 1 degree. The design, simulation, fabrication, and characterization are systematically presented. The devices' mechanical operation was verified by means of 3D finite element analysis simulations. The optical performance and losses were simulated by the 3D-to-2D effective index finite-difference time-domain (FDTD) method as well as the 3D beam propagation method (3D-BPM). The microgrippers were designed to manipulate structures from submicron dimensions up to approximately 100 μm. The devices were implemented in SU-8 due to its suitable optical and mechanical properties. This work demonstrates two practical applications: the manipulation of single SKOV-3 human ovarian carcinoma cells, and the detection and identification of microparts tagged with a fluorescent "barcode" implemented with quantum dots. The novel devices presented open up new possibilities in the field of micromanipulation at the microscale, scalable to the nano-domain.
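
Since the abstract cites FDTD simulation of the waveguide arms, the following is a minimal, generic 1-D FDTD sketch of time-domain field propagation. It is not the 3D-to-2D effective-index model or BPM solver used in the work; the grid size, update factor, and source are arbitrary illustrative values.

```python
import numpy as np

# Minimal 1-D FDTD sketch (free space, Yee update), illustrating the kind of
# time-domain field simulation mentioned in the abstract; it is not the
# 3D-to-2D effective-index model used in the dissertation.
n_cells, n_steps = 200, 400
ez = np.zeros(n_cells)   # electric field
hy = np.zeros(n_cells)   # magnetic field
src = 40                 # source cell index

for t in range(n_steps):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])              # H update (Courant factor 0.5)
    ez[1:]  += 0.5 * (hy[1:] - hy[:-1])              # E update
    ez[src] += np.exp(-((t - 60) / 15.0) ** 2)       # Gaussian pulse source

print(f"peak |Ez| after {n_steps} steps: {np.abs(ez).max():.3f}")
```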

Relevance: 100.00%

Abstract:

Digital systems can generate left and right audio channels that create the effect of virtual sound source placement (spatialization) by processing an audio signal through pairs of Head-Related Transfer Functions (HRTFs) or, equivalently, Head-Related Impulse Responses (HRIRs). The spatialization effect is better when individually measured HRTFs or HRIRs are used than when generic ones (e.g., from a mannequin) are used. However, the measurement process is not available to the majority of users. There is ongoing interest in finding mechanisms to customize HRTFs or HRIRs to a specific user, in order to achieve an improved spatialization effect for that subject. Unfortunately, the current models used for HRTFs and HRIRs contain over a hundred parameters, and none of those parameters can be easily related to the characteristics of the subject. This dissertation proposes an alternative model for the representation of HRTFs, which contains at most 30 parameters, all of which have a defined functional significance. It also presents methods to obtain the values of the parameters in the model that make it approximately equivalent to an individually measured HRTF. This conversion is achieved by the systematic deconstruction of HRIR sequences through an augmented version of the Hankel Total Least Squares (HTLS) decomposition approach. An average 95% match (fit) was observed between the original HRIRs and those reconstructed from the Damped and Delayed Sinusoids (DDSs) found by the decomposition process, for ipsilateral source locations. The dissertation also introduces and evaluates an HRIR customization procedure, based on a multilinear model implemented through a 3-mode tensor, for mapping anatomical data from the subjects to the HRIR sequences at different sound source locations. This model uses the Higher-Order Singular Value Decomposition (HOSVD) method to represent the HRIRs and is capable of generating customized HRIRs from easily attainable anatomical measurements of a new intended user of the system. Listening tests were performed to compare the spatialization performance of customized, generic, and individually measured HRIRs when they are used for synthesized spatial audio. Statistical analysis of the results confirms that the type of HRIRs used for spatialization is a significant factor in the spatialization success, with the customized HRIRs yielding better results than generic HRIRs.
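
The HTLS decomposition described above models each HRIR as a sum of damped and delayed sinusoids (DDSs). The sketch below synthesizes such a sum so the building blocks are concrete; the amplitudes, damping factors, frequencies, and delays are made-up illustrative values, not parameters fitted to any measured HRIR.

```python
import numpy as np

def dds(n_samples, amplitude, damping, freq, phase, delay, fs=44100.0):
    """One damped and delayed sinusoid (DDS): zero before `delay` samples, an
    exponentially decaying sinusoid afterwards. Parameter values here are
    illustrative, not fitted to any measured HRIR."""
    n = np.arange(n_samples)
    active = (n >= delay).astype(float)
    t = (n - delay) / fs
    return active * amplitude * np.exp(-damping * t) * np.cos(2 * np.pi * freq * t + phase)

# An HRIR-like sequence approximated as a small sum of DDS components
hrir = sum(dds(256, a, d, f, p, k) for a, d, f, p, k in
           [(1.0, 3000.0, 4000.0, 0.0, 10),
            (0.4, 2000.0, 9000.0, 1.2, 18),
            (0.2, 1500.0, 13000.0, -0.7, 25)])
print(hrir.shape, float(np.max(np.abs(hrir))))
```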

Relevance: 100.00%

Abstract:

Buffered crossbar switches have recently attracted considerable attention as the next generation of high-speed interconnects. They are a special type of crossbar switch with a dedicated buffer at each crosspoint of the crossbar. They demonstrate unique advantages over traditional unbuffered crossbar switches, such as high throughput, low latency, and asynchronous packet scheduling. However, since crosspoint buffers are expensive on-chip memories, it is desirable that each crosspoint have only a small buffer. This dissertation proposes a series of practical algorithms and techniques for efficient packet scheduling in buffered crossbar switches. To reduce the hardware cost of such switches and make them scalable, we considered partially buffered crossbars, whose crosspoint buffers can be of arbitrarily small size. Firstly, we introduced a hybrid scheme called the Packet-mode Asynchronous Scheduling Algorithm (PASA) to schedule best-effort traffic. PASA combines the features of both distributed and centralized scheduling algorithms and can directly handle variable-length packets without Segmentation And Reassembly (SAR). We showed by theoretical analysis that it achieves 100% throughput for any admissible traffic in a crossbar with a speedup of two. Moreover, outputs in PASA are highly likely to avoid the more time-consuming centralized scheduling process and can thus make fast scheduling decisions. Secondly, we proposed the Fair Asynchronous Segment Scheduling (FASS) algorithm to handle guaranteed-performance traffic with explicit flow rates. FASS reduces the crosspoint buffer size by dividing packets into shorter segments before transmission. It also provides tight constant performance guarantees by emulating the ideal Generalized Processor Sharing (GPS) model. Furthermore, FASS requires no speedup for the crossbar, lowering the hardware cost and improving the switch capacity. Thirdly, we presented a bandwidth allocation scheme called Queue Length Proportional (QLP) to apply FASS to best-effort traffic. QLP dynamically obtains a feasible bandwidth allocation matrix based on the queue length information, and thus helps the crossbar switch to be more work-conserving. The feasibility and stability of QLP were proved for both uniform and non-uniform traffic distributions. Hence, based on the bandwidth allocation of QLP, FASS can also achieve 100% throughput for best-effort traffic in a crossbar without speedup.
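
As a rough illustration of a queue-length-proportional bandwidth allocation, the sketch below splits each input's rate in proportion to its queue lengths and then scales down any overloaded output so the matrix stays feasible. This is a simplified stand-in for QLP, not the dissertation's exact algorithm or its stability proof.

```python
import numpy as np

def qlp_like_allocation(queue_lengths):
    """Toy queue-length-proportional allocation: each input splits its line rate
    in proportion to its queue lengths, then any output whose allocated rate
    exceeds 1 is scaled back down. A simplified stand-in for QLP."""
    q = queue_lengths.astype(float)
    rates = q / np.maximum(q.sum(axis=1, keepdims=True), 1e-12)   # per-input proportional split
    overload = rates.sum(axis=0, keepdims=True)                    # per-output load
    return rates / np.maximum(overload, 1.0)                       # keep outputs feasible

q = np.array([[30, 10,  0],
              [ 5, 20, 25],
              [ 0, 15, 40]])
alloc = qlp_like_allocation(q)
print(alloc.round(3))
print("input loads :", alloc.sum(axis=1).round(3))   # each <= 1
print("output loads:", alloc.sum(axis=0).round(3))   # each <= 1
```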

Relevance: 100.00%

Abstract:

This dissertation introduces a novel automated book reader as an assistive technology tool for persons with blindness. The literature shows extensive work in the area of optical character recognition, but the current methodologies available for the automated reading of books or bound volumes remain inadequate and are severely constrained during document scanning or image acquisition processes. The goal of the book reader design is to automate and simplify the task of reading a book while providing a user-friendly environment with a realistic but affordable system design. This design responds to the main concerns of (a) providing a method of image acquisition that maintains the integrity of the source; (b) overcoming optical character recognition errors created by inherent imaging issues such as curvature effects and barrel distortion; and (c) determining a suitable method for accurate recognition of characters that yields an interface able to read from any open book with a reading accuracy nearing 98%. The initial aim of this research endeavor is the development of an assistive technology tool to help persons with blindness read books and other bound volumes. Its secondary and broader aim is to provide, in this design, an ideal platform for the digitization of bound documentation, in line with the mission of the Open Content Alliance (OCA), a nonprofit alliance aimed at making reading materials available in digital form. The theoretical perspective of this research relates to the mathematical developments made to resolve both the inherent distortions due to the properties of the camera lens and the anticipated distortions of the changing page curvature as one leafs through the book. This is evidenced by a significant increase in the character recognition rate and a high-accuracy read-out through text-to-speech processing. This reasonably priced interface, with its high-performance results and its compatibility with any computer or laptop through universal serial bus connectors, greatly extends the prospects for universal accessibility to documentation.
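
One of the imaging issues named above is barrel distortion of the camera lens. The sketch below shows a generic correction step using OpenCV's undistort; the camera matrix and distortion coefficients are placeholder values standing in for a real calibration, and this is not the dissertation's specific distortion model.

```python
import numpy as np
import cv2

# Illustrative correction of barrel distortion prior to OCR. The camera matrix
# and distortion coefficients are placeholder values; a real book reader would
# obtain them from a calibration step, not from these assumed numbers.
h, w = 480, 640
page = np.full((h, w, 3), 255, dtype=np.uint8)          # stand-in for a captured page
cv2.putText(page, "sample text line", (60, 240),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)

camera_matrix = np.array([[800.0, 0.0, w / 2],
                          [0.0, 800.0, h / 2],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])    # negative k1 models barrel distortion

undistorted = cv2.undistort(page, camera_matrix, dist_coeffs)
print(undistorted.shape)                                 # same size, distortion compensated
```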

Relevance: 100.00%

Abstract:

This dissertation develops an innovative approach towards less-constrained iris biometrics. Two major contributions are made in this research endeavor: (1) the design of an award-winning segmentation algorithm for the less-constrained environment, where images are acquired from subjects on the move under visible lighting conditions; and (2) the development of a pioneering iris biometrics method coupling segmentation and recognition of the iris based on video of moving persons under different acquisition scenarios. The first part of the dissertation introduces a robust and fast segmentation approach using still images contained in the UBIRIS (version 2) noisy iris database. The results show an accuracy estimated at 98% when using 500 randomly selected images from the UBIRIS.v2 partial database, and at 97% in the Noisy Iris Challenge Evaluation (NICE.I), an international competition involving 97 participants from 35 countries, which ranked this research group in sixth position. This accuracy is achieved with a processing speed nearing real time. The second part of this dissertation presents an innovative segmentation and recognition approach using video-based iris images. Following the segmentation stage, which delineates the iris region through a novel segmentation strategy, pioneering experiments on the recognition stage of less-constrained video iris biometrics were carried out. In video-based, less-constrained iris recognition, the test or subject iris videos/images and the enrolled iris images are acquired with different acquisition systems. In the matching step, verification/identification was accomplished by comparing the similarity distance between the encoded signatures of the test images and each signature in the dataset derived from the enrolled iris images. With the improvements gained, the results proved to be highly accurate under the more challenging unconstrained environment. This has led to a false acceptance rate (FAR) of 0% and a false rejection rate (FRR) of 17.64% for 85 tested users with 305 test images from the video, which shows great promise and strong practical implications for iris biometrics research and system design.
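
The matching step compares a similarity distance between encoded signatures. As a concrete illustration, the sketch below computes a normalized Hamming distance between binary iris codes with occlusion masks, a common choice for iris matching; the dissertation's exact encoding and distance measure are not specified here, so the function and data are assumptions.

```python
import numpy as np

def iris_distance(code_a, code_b, mask_a, mask_b):
    """Normalized Hamming distance between two binary iris codes, counting only
    bits that are valid (unoccluded) in both masks. A common iris similarity
    distance; the dissertation's exact encoding and measure may differ."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        return 1.0                              # nothing comparable: worst score
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / valid.sum()

rng = np.random.default_rng(1)
code_enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)
code_test = code_enrolled.copy()
code_test[:200] ^= 1                            # flip some bits to mimic noise
mask = np.ones(2048, dtype=np.uint8)
print(iris_distance(code_enrolled, code_test, mask, mask))   # ~0.098
```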

Relevance: 100.00%

Abstract:

Since the Morris worm was released in 1988, Internet worms have continued to be one of the top security threats. For example, the Conficker worm infected 9 to 15 million machines in early 2009 and shut down the services of some critical government and medical networks. Moreover, it constructed a massive peer-to-peer (P2P) botnet. Botnets are zombie networks controlled by attackers to launch coordinated attacks. In recent years, botnets have become the number one threat to the Internet. The objective of this research is to characterize the spatial-temporal infection structures of Internet worms and to apply the observations to study P2P-based botnets formed by worm infection. First, we infer temporal characteristics of the Internet worm infection structure, i.e., the host infection time and the worm infection sequence, and thus pinpoint patient zero or the initially infected hosts. Specifically, we apply statistical estimation techniques to Darknet observations. We show analytically and empirically that our proposed estimators can significantly improve the inference accuracy. Second, we reveal two key spatial characteristics of the Internet worm infection structure, i.e., the number of children and the generation of the underlying tree topology formed by worm infection. Specifically, we apply probabilistic modeling methods and a sequential growth model. We show analytically and empirically that the number of children asymptotically follows a geometric distribution with parameter 0.5, and the generation follows closely a Poisson distribution. Finally, we evaluate bot detection strategies and the effects of user defenses in P2P-based botnets formed by worm infection. Specifically, we apply the observations on the number of children and demonstrate analytically and empirically that targeted detection focusing on the nodes with the largest number of children is an efficient way to expose bots. However, we also point out that future botnets may self-stop scanning to weaken targeted detection, without greatly slowing down the speed of worm infection. We then extend the worm spatial infection structure and show empirically that user defenses, e.g., patching or cleaning, can significantly reduce the robustness and effectiveness of P2P-based botnets. In response, we evaluate a simple measure that future botnets could adopt to enhance topology robustness through worm re-infection.
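
A toy simulation can make the children-count result above tangible: if each newly infected host attaches to a uniformly random already-infected host (a simple sequential growth model), the empirical distribution of the number of children approaches geometric with parameter 0.5. This is a generic illustration, not the dissertation's estimators or Darknet measurements.

```python
import numpy as np
from collections import Counter

# Toy sequential-growth model of a worm infection tree: each newly infected
# host attaches to a uniformly chosen, already-infected host. This is a
# generic illustration of the tree statistics discussed, not the dissertation's
# estimators or measured Darknet data.
rng = np.random.default_rng(7)
n_hosts = 100_000
children = np.zeros(n_hosts, dtype=int)

for new_host in range(1, n_hosts):
    parent = rng.integers(0, new_host)   # uniform over hosts infected so far
    children[parent] += 1

dist = Counter(children.tolist())
for k in range(5):
    print(f"P(#children = {k}) ~ {dist.get(k, 0) / n_hosts:.3f}"
          f"  (geometric(0.5): {0.5 ** (k + 1):.3f})")
```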

Relevance: 100.00%

Abstract:

Today, most conventional surveillance networks are based on analog systems, which impose many constraints, such as manpower and high-bandwidth requirements. These constraints have become a barrier to the development of modern surveillance networks. This dissertation describes a digital surveillance network architecture based on the H.264 coding/decoding (CODEC) System-on-a-Chip (SoC) platform. The proposed digital surveillance network architecture includes three major layers: the software layer, the hardware layer, and the network layer. The following outlines the contributions to the proposed digital surveillance network architecture. (1) We implement an object recognition system and an object categorization system on the software layer by applying several Digital Image Processing (DIP) algorithms. (2) For a better compression ratio and higher-quality video transfer, we implement two new modules on the hardware layer of the H.264 CODEC core, i.e., the background elimination module and the Directional Discrete Cosine Transform (DDCT) module. (3) Furthermore, we introduce a Digital Signal Processor (DSP) sub-system on the main bus of the H.264 SoC platform as the major hardware support system for our software architecture. Thus we combine the software and hardware platforms into an intelligent surveillance node. Lab results show that the proposed surveillance node can dramatically save network resources such as bandwidth and storage capacity.
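
The background elimination module mentioned above is a hardware block inside the H.264 core; as a purely conceptual software sketch, the following keeps a running-average background model and flags pixels that deviate from it. Thresholds and test data are made up, and the actual hardware design may differ substantially.

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: slowly blend each new frame into the
    background estimate."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    """Pixels differing from the background by more than `threshold` grey
    levels are treated as foreground (moving objects)."""
    return np.abs(frame - background) > threshold

# Conceptual software sketch of background elimination; the dissertation's
# module is a hardware block inside the H.264 CODEC core, so details differ.
rng = np.random.default_rng(3)
bg = rng.normal(120, 5, (48, 64))                 # static scene
frame = bg + rng.normal(0, 2, bg.shape)
frame[20:30, 30:40] += 80                         # a bright moving object
mask = foreground_mask(update_background(bg, frame), frame)
print("foreground pixels:", int(mask.sum()))
```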

Relevance: 100.00%

Abstract:

This dissertation introduces a new system for handwritten text recognition based on an improved neural network design. Most existing neural networks treat the mean squared error as the standard error function. The system proposed in this dissertation utilizes the mean quartic error function, whose third and fourth derivatives are non-zero. Consequently, many improvements to the training methods were achieved. The training results are carefully assessed before and after the update. To evaluate the performance of a training system, three essential factors must be considered, listed here from highest to lowest priority: (1) the error rate on the testing set; (2) the processing time needed to recognize a segmented character; and (3) the total training time and, subsequently, the total testing time. It is observed that bounded training methods accelerate the training process, while semi-third-order training methods, next-minimal training methods, and preprocessing operations reduce the error rate on the testing set. Empirical observations suggest that two different combinations of training methods are needed for lower-case and upper-case character recognition. Since character segmentation is required for word and sentence recognition, this dissertation also provides an effective rule-based segmentation method, which differs from the conventional adaptive segmentation methods. Dictionary-based correction is utilized to correct mistakes resulting from the recognition and segmentation phases. The integration of the segmentation methods with the handwritten character recognition algorithm yielded an accuracy of 92% for lower-case characters and 97% for upper-case characters. The testing database consists of 20,000 handwritten characters, with 10,000 for each case. Recognizing 10,000 handwritten characters in the testing phase required 8.5 seconds of processing time.
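
To make the error function concrete, the sketch below compares the usual mean squared error with a mean quartic error and its gradient with respect to the network outputs. It only illustrates the loss itself, assuming the straightforward definition MQE = mean((output - target)^4); it is not the dissertation's full training procedure.

```python
import numpy as np

def mean_quartic_error(outputs, targets):
    """Mean quartic error: average of (output - target)^4."""
    return np.mean((outputs - targets) ** 4)

def mean_quartic_error_grad(outputs, targets):
    """Gradient of the mean quartic error with respect to the outputs."""
    return 4.0 * (outputs - targets) ** 3 / outputs.size

# Compared with the usual mean squared error, large errors are penalized much
# more heavily, which changes how gradient-based training behaves.
outputs = np.array([0.9, 0.2, 0.7])
targets = np.array([1.0, 0.0, 1.0])
print("MSE :", np.mean((outputs - targets) ** 2))
print("MQE :", mean_quartic_error(outputs, targets))
print("grad:", mean_quartic_error_grad(outputs, targets))
```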

Relevance: 100.00%

Abstract:

An iterative travel time forecasting scheme, named the Advanced Multilane Prediction based Real-time Fastest Path (AMPRFP) algorithm, is presented in this dissertation. This scheme is derived from the conventional kernel-estimator-based prediction model by associating the real-time nonlinear impacts caused by neighboring arcs' traffic patterns with the historical traffic behaviors. The AMPRFP algorithm is evaluated by predicting the travel time of congested arcs in the urban area of the city of Jacksonville. Experimental results illustrate that the proposed scheme is able to significantly reduce both the relative mean error (RME) and the root-mean-squared error (RMSE) of the predicted travel time. To obtain high-quality real-time traffic information, which is essential to the performance of the AMPRFP algorithm, a data cleaning scheme enhanced empirical learning (DCSEEL) algorithm is also introduced. This novel method investigates the correlation between distance and direction in the geometrical map, which is not considered in existing fingerprint localization methods. Specifically, empirical learning methods are applied to minimize the error that exists in the estimated distance. A direction filter is developed to clean joints that have a negative influence on the localization accuracy. Synthetic experiments in urban, suburban, and rural environments are designed to evaluate the performance of the DCSEEL algorithm in determining the cellular probe's position. The results show that the cellular probe's localization accuracy can be notably improved by the DCSEEL algorithm. Additionally, a new fast correlation technique is developed to overcome the time-efficiency problem of the existing correlation-algorithm-based floating car data (FCD) technique. The matching process is transformed into a one-dimensional (1-D) curve matching problem, and the Fast Normalized Cross-Correlation (FNCC) algorithm is introduced to supersede the Pearson product-moment correlation coefficient (PMCC) algorithm in order to meet the real-time requirement of the FCD method. The fast correlation technique shows a significant improvement in reducing the computational cost without affecting the accuracy of the matching process.
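
The FCD matching step above reduces to 1-D curve matching via normalized cross-correlation. The sketch below is a plain (non-fast) normalized cross-correlation of a template against a reference curve, shown only to illustrate the matching criterion; the dissertation's FNCC uses a faster formulation, and the test curves here are synthetic.

```python
import numpy as np

def normalized_xcorr_1d(reference, template):
    """Slide `template` along `reference` and return the normalized
    cross-correlation at each offset (values in [-1, 1])."""
    m = len(template)
    t = template - template.mean()
    t_norm = np.sqrt(np.sum(t ** 2))
    scores = []
    for start in range(len(reference) - m + 1):
        window = reference[start:start + m]
        w = window - window.mean()
        denom = np.sqrt(np.sum(w ** 2)) * t_norm
        scores.append(np.sum(w * t) / denom if denom > 0 else 0.0)
    return np.array(scores)

# Synthetic 1-D curve matching example; the dissertation's FNCC implementation
# uses a faster formulation to meet real-time requirements.
x = np.linspace(0, 10, 500)
reference = np.sin(x) + 0.05 * np.random.default_rng(2).normal(size=x.size)
template = np.sin(x[200:260])                     # a segment of the same curve
scores = normalized_xcorr_1d(reference, template)
print("best match at offset", int(scores.argmax()))  # expected near 200
```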

Relevance: 100.00%

Abstract:

Polynomial phase modulated (PPM) signals have been shown to provide improved error rate performance with respect to conventional modulation formats under additive white Gaussian noise and fading channels in single-input single-output (SISO) communication systems. In this dissertation, systems with two and four transmit antennas using PPM signals are presented. In both cases we employed full-rate space-time block codes in order to take advantage of the multipath channel. For two transmit antennas, we used the orthogonal space-time block code (OSTBC) proposed by Alamouti and performed symbol-wise decoding by estimating the phase coefficients of the PPM signal using three different methods: maximum likelihood (ML), sub-optimal ML (S-ML), and the high-order ambiguity function (HAF). In the case of four transmit antennas, we used the full-rate quasi-OSTBC (QOSTBC) proposed by Jafarkhani. However, in order to ensure the best error rate performance, PPM signals were selected so as to maximize the QOSTBC's minimum coding gain distance (CGD). Since this method does not always provide a unique solution, an additional criterion known as the maximum channel interference coefficient (CIC) was proposed. Through Monte Carlo simulations it was shown that by using QOSTBCs along with properly selected PPM constellations based on the CGD and CIC criteria, full diversity in flat fading channels, and thus low BER at high signal-to-noise ratios (SNR), can be ensured. Lastly, the performance of symbol-wise decoding for QOSTBCs was evaluated. In this case, a quasi-zero-forcing method was used to decouple the received signal, and it was shown that although this technique reduces the decoding complexity of the system, there is a penalty to be paid in terms of error rate performance at high SNRs.
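
For the two-antenna case, the abstract relies on Alamouti's OSTBC. The sketch below shows the standard Alamouti encoding and linear combining for one receive antenna with ordinary complex symbols; it assumes a noise-free flat-fading channel and does not use the PPM constellations or the CGD/CIC selection described above.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """Alamouti OSTBC: over two symbol periods, antenna 1 sends (s1, -s2*)
    and antenna 2 sends (s2, s1*)."""
    return np.array([[s1, -np.conj(s2)],
                     [s2,  np.conj(s1)]])

def alamouti_combine(r1, r2, h1, h2):
    """Linear combining for one receive antenna with channel gains h1, h2
    (assumed constant over the two symbol periods)."""
    s1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
    s2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
    return s1_hat, s2_hat

# Generic Alamouti example with QPSK-like symbols (not the dissertation's PPM signals)
rng = np.random.default_rng(5)
s1, s2 = (1 + 1j) / np.sqrt(2), (1 - 1j) / np.sqrt(2)
h1, h2 = rng.normal(size=2) + 1j * rng.normal(size=2)   # flat-fading gains
tx = alamouti_encode(s1, s2)
r1 = h1 * tx[0, 0] + h2 * tx[1, 0]                      # received, period 1 (noise-free)
r2 = h1 * tx[0, 1] + h2 * tx[1, 1]                      # received, period 2
s1_hat, s2_hat = alamouti_combine(r1, r2, h1, h2)
gain = abs(h1) ** 2 + abs(h2) ** 2
print(np.allclose(s1_hat / gain, s1), np.allclose(s2_hat / gain, s2))   # True True
```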

Relevance: 100.00%

Abstract:

The purpose of this research is to develop design considerations for environmental monitoring platforms for the detection of hazardous materials using System-on-a-Chip (SoC) design. The design considerations focus on improving key areas such as: (1) sampling methodology; (2) context awareness; and (3) sensor placement. These design considerations for environmental monitoring platforms using wireless sensor networks (WSN) are applied to the detection of methylmercury (MeHg) and of the environmental parameters affecting its formation (methylation) and degradation (demethylation). The sampling methodology investigates a proof of concept for the monitoring of MeHg using three primary components: (1) chemical derivatization; (2) preconcentration using the purge-and-trap (P&T) method; and (3) sensing using Quartz Crystal Microbalance (QCM) sensors. This study focuses on the measurement of inorganic mercury (Hg) (e.g., Hg2+) and applies the lessons learned to organic Hg (e.g., MeHg) detection. Context awareness of a WSN and its sampling strategies are enhanced by using spatial analysis techniques, namely geostatistical analysis (i.e., classical variography and ordinary point kriging), to help predict the phenomena of interest at unmonitored locations (i.e., locations without sensors). This aids in making more informed decisions on control of the WSN (e.g., communications strategy, power management, resource allocation, sampling rate and strategy, etc.). This methodology improves the precision of control by adding potentially significant information about unmonitored locations. Two types of sensors are investigated in this study for near-optimal placement in a WSN: (1) environmental sensors (e.g., humidity, moisture, temperature, etc.) and (2) visual (e.g., camera) sensors. The near-optimal placement of environmental sensors is found using a strategy that minimizes the variance of the spatial analysis based on randomly chosen points representing the sensor locations. Spatial analysis is performed using geostatistical techniques, and optimization is carried out with Monte Carlo analysis. Visual sensor placement is accomplished for omnidirectional cameras operating in a WSN using an optimal placement metric (OPM), which is calculated for each grid point based on line-of-sight (LOS) in a defined number of directions where known obstacles are taken into consideration. Optimal areas for camera placement are determined as the areas generating the largest OPMs. Statistical analysis is performed using Monte Carlo analysis with varying numbers of obstacles and cameras in a defined space.
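
As a toy illustration of a line-of-sight-based placement metric for omnidirectional cameras, the sketch below scores every free grid cell by the number of directions with an unobstructed view up to a range limit and picks the highest-scoring cell. The grid, obstacle layout, and scoring rule are hypothetical stand-ins for the OPM defined in the dissertation.

```python
import numpy as np

def placement_scores(obstacles, n_directions=16, max_range=10):
    """Toy stand-in for the optimal placement metric (OPM): for every free grid
    cell, count the directions with an unobstructed line of sight across
    `max_range` cells while staying inside the grid. Grid, obstacles, and
    parameters are hypothetical."""
    rows, cols = obstacles.shape
    angles = np.linspace(0, 2 * np.pi, n_directions, endpoint=False)
    scores = np.zeros_like(obstacles, dtype=float)
    for r in range(rows):
        for c in range(cols):
            if obstacles[r, c]:
                continue
            for a in angles:
                for step in range(1, max_range + 1):
                    rr = int(round(r + step * np.sin(a)))
                    cc = int(round(c + step * np.cos(a)))
                    if not (0 <= rr < rows and 0 <= cc < cols) or obstacles[rr, cc]:
                        break
                else:
                    scores[r, c] += 1        # full range reached in this direction
    return scores

grid = np.zeros((20, 20), dtype=bool)
grid[8:12, 8:12] = True                      # a square obstacle
scores = placement_scores(grid)
best = np.unravel_index(scores.argmax(), scores.shape)
print("best camera cell:", best, "score:", scores[best])
```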

Relevance: 100.00%

Abstract:

Various nondestructive testing (NDT) technologies for construction and performance monitoring have been studied for decades. Recently, the rapid evolution of wireless sensor network (WSN) technologies has enabled the development of sensors that can be embedded in concrete to monitor the structural health of infrastructure. Such sensors can be buried inside concrete, where they can collect and report valuable volumetric data related to the health of a structure during and/or after construction. A wireless embedded sensor monitoring system is also a promising solution for decreasing the high installation and maintenance costs of conventional wire-based monitoring systems. Wireless monitoring sensors need to operate for a long time; however, sensor batteries have a finite lifetime. Therefore, in order to enable a long operational life for wireless sensors, novel wireless powering methods, which can charge the sensors' rechargeable batteries wirelessly, need to be developed. The optimization of RF wireless powering of sensors embedded in concrete is studied here. First, our analytical results focus on calculating the transmission loss and propagation loss of electromagnetic waves penetrating into plain concrete under different humidity conditions and at various frequencies. This analysis leads to the identification of an optimum frequency range within 20–80 MHz, which is validated through full-wave electromagnetic simulations. Second, the effects of various reinforcement bar configurations on the efficiency of wireless powering are investigated. Specifically, the effects of the following factors are studied: rebar type, rebar period, rebar radius, depth inside concrete, and offset placement. This analysis leads to the identification of the 902–928 MHz ISM band as the optimum power transmission frequency range for sensors embedded in reinforced concrete, since antennas working in this band are less sensitive to the effects of varying humidity as well as rebar configurations. Finally, optimized rectennas are designed for receiving and/or harvesting power in order to charge the rechargeable batteries of the embedded sensors. Such optimized wireless powering systems exhibit significantly higher efficiencies than conventional RF wireless powering systems for sensors embedded in plain or reinforced concrete.
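
To illustrate the kind of propagation-loss calculation described above, the sketch below evaluates the plane-wave attenuation constant of a lossy dielectric from an assumed relative permittivity and loss tangent and converts it to dB of loss through a slab. The material parameters and frequencies are placeholders, not the dissertation's measured concrete properties at specific humidity levels.

```python
import numpy as np

# Plane-wave attenuation in a lossy dielectric, as a rough stand-in for the
# propagation-loss analysis in concrete. The relative permittivity and loss
# tangent below are illustrative placeholders, not the dissertation's measured
# concrete parameters at specific humidity levels.
eps0 = 8.854e-12          # F/m
mu0 = 4e-7 * np.pi        # H/m
eps_r = 6.0               # assumed real relative permittivity of concrete
loss_tangent = 0.2        # assumed loss tangent (humidity dependent)
thickness = 0.10          # 10 cm slab of concrete

for f in [20e6, 80e6, 915e6, 2.45e9]:
    omega = 2 * np.pi * f
    alpha = omega * np.sqrt(mu0 * eps0 * eps_r / 2.0 *
                            (np.sqrt(1.0 + loss_tangent ** 2) - 1.0))   # Np/m
    loss_db = 8.686 * alpha * thickness
    print(f"{f/1e6:8.1f} MHz: loss through {thickness*100:.0f} cm ~ {loss_db:5.2f} dB")
```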

Relevance: 100.00%

Abstract:

This dissertation develops an image processing framework with unique feature extraction and similarity measurements for human face recognition in the thermal mid-wave infrared portion of the electromagnetic spectrum. The goal of this research is to design specialized algorithms that extract facial vasculature information, create a thermal facial signature, and identify the individual. The objective is to use such findings in support of a biometrics system for human identification with a high degree of accuracy and a high degree of reliability. This last assertion rests on the minimal to nonexistent risk of alteration of the intrinsic physiological characteristics seen through thermal infrared imaging. The proposed thermal facial signature recognition is fully integrated and consolidates the main and critical steps of feature extraction, registration, matching through similarity measures, and validation through testing of our algorithm on a database, referred to as C-X1, provided by the Computer Vision Research Laboratory at the University of Notre Dame. Feature extraction was accomplished by first registering the infrared images to a reference image using the Functional MRI of the Brain (FMRIB) Linear Image Registration Tool (FLIRT), modified to suit thermal infrared images. This was followed by segmentation of the facial region using an advanced localized contouring algorithm applied to anisotropically diffused thermal images. Thermal feature extraction from facial images was attained by performing morphological operations such as opening and top-hat segmentation to yield thermal signatures for each subject. Four thermal images taken over a period of six months were used to generate thermal signatures and a thermal template for each subject; the thermal template contains only the most prevalent and consistent features. Finally, a similarity measure technique was used to match signatures to templates, and Principal Component Analysis (PCA) was used to validate the results of the matching process. Thirteen subjects were used for testing the developed technique on an in-house thermal imaging system. Matching using a Euclidean-based similarity measure showed 88% accuracy in the case of skeletonized signatures and templates, and 90% accuracy for anisotropically diffused signatures and templates. We also employed the Manhattan-based similarity measure and obtained an accuracy of 90.39% for skeletonized and diffused templates and signatures. It was found that an average 18.9% improvement in the similarity measure was obtained when using diffused templates. The Euclidean- and Manhattan-based similarity measures were also applied to skeletonized signatures and templates of 25 subjects in the C-X1 database. The highly accurate results obtained in the matching process, along with the generalized design process, clearly demonstrate the ability of the thermal infrared system to be used with other thermal imaging based systems and related databases. A novel user-initialized registration of thermal facial images has been successfully implemented. Furthermore, the novel approach to developing a thermal signature template using four images taken at various times ensured that unforeseen changes in the vasculature did not affect the biometric matching process, as it relied on consistent thermal features.
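
The Euclidean- and Manhattan-based similarity measures mentioned above are simple to state; the sketch below computes both between a toy thermal signature and a template. The synthetic images stand in for the morphological signatures produced by the real pipeline, and the matching rule (smallest distance wins) is only an assumption for illustration.

```python
import numpy as np

def euclidean_distance(signature, template):
    """Euclidean distance between a thermal signature and a template."""
    return float(np.sqrt(np.sum((signature - template) ** 2)))

def manhattan_distance(signature, template):
    """Manhattan (city-block) distance between a signature and a template."""
    return float(np.sum(np.abs(signature - template)))

# Toy signature/template images; real signatures come from the morphological
# feature-extraction pipeline described in the abstract.
rng = np.random.default_rng(4)
template = (rng.random((64, 64)) > 0.9).astype(float)   # sparse vascular map
signature = template.copy()
noise = rng.random(template.shape) > 0.98
signature[noise] = 1.0 - signature[noise]                # flip a few pixels

print("Euclidean:", euclidean_distance(signature, template))
print("Manhattan:", manhattan_distance(signature, template))
# The template with the smallest distance to a signature would be declared the match.
```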

Relevance: 100.00%

Abstract:

This dissertation presents and evaluates a methodology for scheduling medical application workloads in virtualized computing environments. Such environments are being widely adopted by providers of "cloud computing" services. In the context of provisioning resources for medical applications, such environments allow users to deploy applications on distributed computing resources while keeping their data secure. Furthermore, higher-level services that further abstract the infrastructure-related issues can be built on top of such infrastructures. For example, a medical imaging service can allow medical professionals to process their data in the cloud, relieving them of the burden of deploying and managing these resources themselves. In this work, we focus on issues related to scheduling scientific workloads in virtualized environments. We build upon the knowledge base of traditional parallel job scheduling to address the specific case of medical applications while harnessing the benefits afforded by virtualization technology. To this end, we provide the following contributions: (1) an in-depth analysis of the execution characteristics of the target applications when run in virtualized environments; (2) a performance prediction methodology applicable to the target environment; and (3) a scheduling algorithm that harnesses application knowledge and virtualization-related benefits to provide strong scheduling performance and quality-of-service guarantees. In the process of addressing these pertinent issues for our target user base (i.e., medical professionals and researchers), we provide insight that benefits a large community of scientific application users in industry and academia. Our execution time prediction and scheduling methodologies are implemented and evaluated on a real system running popular scientific applications. We find that we are able to predict the execution time of a number of these applications with an average error of 15%. Our scheduling methodology, which is tested with medical image processing workloads, is compared to two baseline scheduling solutions, and we find that it outperforms them in terms of both the number of jobs processed and resource utilization by 20–30%, without violating any deadlines. We conclude that our solution is a viable approach to supporting the computational needs of medical users, even if the cloud computing paradigm is not widely adopted in its current form.
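
As a rough sketch of how predicted execution times and deadlines can drive placement onto virtual machines, the following greedy earliest-deadline-first scheduler assigns each job to the VM that frees up first and skips jobs whose deadline would be missed. The job names, runtimes, and policy are hypothetical; this is not the dissertation's scheduling algorithm.

```python
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    predicted_runtime: float   # from the execution-time prediction step
    deadline: float            # seconds from now

def schedule(jobs, n_vms):
    """Toy deadline-aware scheduler: order jobs by earliest deadline and place
    each on the virtual machine that becomes free first, skipping jobs whose
    deadline would be violated. Illustrative sketch only."""
    vm_free_at = [0.0] * n_vms
    placements = []
    for job in sorted(jobs, key=lambda j: j.deadline):
        vm = min(range(n_vms), key=lambda i: vm_free_at[i])
        finish = vm_free_at[vm] + job.predicted_runtime
        if finish <= job.deadline:
            vm_free_at[vm] = finish
            placements.append((job.name, vm, finish))
    return placements

jobs = [Job("mri_segmentation", 120.0, 400.0),
        Job("ct_reconstruction", 300.0, 350.0),
        Job("dose_simulation", 200.0, 900.0)]
for name, vm, finish in schedule(jobs, n_vms=2):
    print(f"{name}: VM {vm}, finishes at t={finish:.0f}s")
```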