25 results for generative and performative modeling
Abstract:
We organized an international campaign to observe the blazar 0716+714 in the optical band. The observations took place from February 24, 2009 to February 26, 2009. The global campaign was carried out by observers from more than sixteen countries and resulted in an extended light curve nearly seventy-eight hours long. The analysis and modeling of this light curve form the main work of this dissertation project. In the first part of this work, we present the time series and noise analyses of the data. The time series analysis uses discrete Fourier transform and wavelet analysis routines to search for periods in the light curve. We then present results of the noise analysis, which is based on the idea that each microvariability curve is the realization of the same underlying stochastic noise processes in the blazar jet. Neither recurring periods nor random noise can successfully explain the observed optical fluctuations. Hence, in the second part, we propose and develop a new model to account for the microvariability we see in blazar 0716+714. We propose that the microvariability is due to emission from turbulent regions in the jet that are energized by the passage of relativistic shocks. Each turbulent cell produces a pulse of emission that, when convolved with the pulses from other cells, yields the observed light curve. We use the model to obtain estimates of the physical parameters of the emission regions in the jet.
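As a rough illustration of the period-search step, the sketch below computes a discrete Fourier transform power spectrum of an evenly sampled light curve and reports the strongest candidate period. The cadence, helper name, and synthetic data are assumptions for illustration only, not the campaign data or the dissertation's actual routines.

```python
import numpy as np

def dft_period_search(times_hr, flux, max_period_hr=40.0):
    """Search an evenly sampled light curve for candidate periods
    via the discrete Fourier transform (hypothetical helper)."""
    flux = flux - np.mean(flux)                 # remove the DC term
    dt = times_hr[1] - times_hr[0]              # assumes uniform sampling
    power = np.abs(np.fft.rfft(flux)) ** 2      # one-sided power spectrum
    freqs = np.fft.rfftfreq(len(flux), d=dt)    # cycles per hour
    mask = freqs > 1.0 / max_period_hr          # skip periods too long for the window
    peak = freqs[mask][np.argmax(power[mask])]
    return 1.0 / peak                           # best candidate period in hours

# Synthetic stand-in for a ~78-hour curve at an assumed 3-minute cadence
t = np.arange(0.0, 78.0, 0.05)
f = 1.0 + 0.02 * np.sin(2 * np.pi * t / 6.0) + 0.01 * np.random.randn(t.size)
print(f"candidate period: {dft_period_search(t, f):.2f} h")
```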
Abstract:
The rapid growth of virtualized data centers and cloud hosting services is making the management of physical resources such as CPU, memory, and I/O bandwidth in data center servers increasingly important. Server management now involves dealing with multiple dissimilar applications with varying Service Level Agreements (SLAs) and multiple resource dimensions. The multiplicity and diversity of resources and applications are rendering administrative tasks more complex and challenging. This thesis aimed to develop a framework and techniques that would help substantially reduce data center management complexity. We specifically addressed two crucial data center operations. First, we precisely estimated the capacity requirements of client virtual machines (VMs) when renting server space in a cloud environment. Second, we proposed a systematic process to efficiently allocate physical resources to hosted VMs in a data center. To realize these dual objectives, accurately capturing the effects of resource allocations on application performance is vital. The benefits of accurate application performance modeling are manifold. Cloud users can size their VMs appropriately and pay only for the resources that they need; service providers can also offer a new charging model based on the VMs' performance instead of their configured sizes. As a result, clients will pay exactly for the performance they actually experience, while administrators will be able to maximize their total revenue by utilizing application performance models and SLAs. This thesis made the following contributions. First, we identified resource control parameters crucial for distributing physical resources and characterizing contention for virtualized applications in a shared hosting environment. Second, we explored several modeling techniques and confirmed the suitability of two machine learning tools, the Artificial Neural Network and the Support Vector Machine, for accurately modeling the performance of virtualized applications. Moreover, we suggested and evaluated modeling optimizations necessary to improve prediction accuracy when using these tools. Third, we presented an approach to optimal VM sizing that employs the performance models we created. Finally, we proposed a revenue-driven resource allocation algorithm that maximizes the SLA-generated revenue for a data center.
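As a minimal sketch of how a performance model of this kind might be trained and queried, the snippet below fits a Support Vector Machine regressor to hypothetical (CPU cap, memory, I/O bandwidth) allocations and measured response times. The feature set, numbers, and scikit-learn pipeline are illustrative assumptions, not the thesis's actual models or data.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Hypothetical training data: rows are (cpu_cap, memory_mb, io_bandwidth_mbps)
# allocations; targets are measured application response times in ms.
X = np.array([[0.25, 512, 10], [0.50, 1024, 20], [0.75, 2048, 40], [1.00, 4096, 80]])
y = np.array([850.0, 410.0, 230.0, 190.0])

# Feature scaling is one of the optimizations that typically improves
# prediction accuracy for kernel-based learners such as the SVM.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X, y)

# Predict performance for a candidate VM size before committing resources to it.
candidate = np.array([[0.60, 1536, 30]])
print("predicted response time (ms):", model.predict(candidate)[0])
```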
Abstract:
Recreational abuse of the drugs cocaine, methamphetamine, and morphine continues to be prevalent in the United States of America and around the world. While numerous methods of detection exist for each drug, they are generally limited by the lifetime of the parent drug and its metabolites in the body. However, covalent modifications of endogenous proteins by these drugs of abuse may act as biomarkers of exposure and allow the detection windows for these drugs to be extended beyond the lifetime of the parent molecules or metabolites in the free fraction. Additionally, the existence of covalently bound molecules arising from drug ingestion can offer insight into the downstream toxicities associated with each of these drugs. This research investigated the metabolism of cocaine, methamphetamine, and morphine in common in vitro assay systems, specifically focusing on the generation of reactive intermediates and metabolites that have the potential to form covalent protein adducts. Results demonstrated the formation of covalent adduction products between biological cysteine thiols and reactive moieties on cocaine and morphine metabolites. Rigorous mass spectrometric analysis in conjunction with in vitro metabolic activation, pharmacogenetic reaction phenotyping, and computational modeling was used to characterize the structures and mechanisms of formation of each resultant thiol adduction product. For cocaine, the data collected demonstrated the formation of adduction products from a reactive arene epoxide intermediate, designating a novel metabolic pathway for cocaine. In the case of morphine, the data expanded on known adduct-forming pathways using sensitive and selective analysis techniques, following the known reactive metabolite, morphinone, and a proposed novel metabolite, morphine quinone methide. The data collected in this study describe novel metabolic events for multiple important drugs of abuse, culminating in detection methods and mechanistic descriptors useful to both medical and forensic investigators when examining the toxicology associated with cocaine, methamphetamine, and morphine.
Abstract:
With the progress of computer technology, computers are expected to be more intelligent in their interaction with humans, presenting information according to the user's psychological and physiological characteristics. However, computer users with visual problems may encounter difficulties in perceiving icons, menus, and other graphical information displayed on the screen, limiting the efficiency of their interaction with computers. In this dissertation, a personalized and dynamic image precompensation method was developed to improve the visual performance of computer users with ocular aberrations. The precompensation was applied to the graphical targets before presenting them on the screen, aiming to counteract the visual blurring caused by the ocular aberration of the user's eye. A complete and systematic modeling approach to describe the retinal image formation for the computer user was presented, taking advantage of modeling tools such as Zernike polynomials, the wavefront aberration, the Point Spread Function, and the Modulation Transfer Function. The ocular aberration of the computer user was first measured with a wavefront aberrometer, serving as a reference for the precompensation model. The dynamic precompensation was then generated from the aberration resized according to the pupil diameter monitored in real time. The potential visual benefit of the dynamic precompensation method was explored through software simulation, using aberration data from a real human subject. An "artificial eye" experiment was conducted by simulating the human eye with a high-definition camera, providing an objective evaluation of the image quality after precompensation. In addition, an empirical evaluation with 20 human participants was designed and implemented, involving image recognition tests performed under a more realistic viewing environment of computer use. The statistical analysis of the empirical experiment confirmed the effectiveness of the dynamic precompensation method, showing a significant improvement in recognition accuracy. The merit and necessity of the dynamic precompensation were also substantiated by comparing it with static precompensation. The visual benefit of the dynamic precompensation was further confirmed by the subjective assessments collected from the evaluation participants.
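The retinal image formation chain described above can be illustrated with a standard Fourier-optics relation: the Point Spread Function is the squared magnitude of the Fourier transform of the pupil function built from the wavefront aberration. The sketch below is a minimal, assumed implementation with an illustrative Zernike defocus term; it is not the dissertation's precompensation code, and all numbers are placeholders.

```python
import numpy as np

def psf_from_wavefront(wavefront_um, pupil_mask, wavelength_um=0.55):
    """PSF from a wavefront aberration map (Fourier-optics relation):
    squared magnitude of the FFT of the complex pupil function."""
    phase = 2 * np.pi * wavefront_um / wavelength_um
    pupil = pupil_mask * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil))) ** 2
    return psf / psf.sum()                          # normalize to unit energy

# Illustrative example: pure Zernike defocus over a circular pupil.
n = 256
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
rho2 = x**2 + y**2
mask = (rho2 <= 1.0).astype(float)                  # unit-radius pupil
defocus = 0.25 * np.sqrt(3) * (2 * rho2 - 1)        # Z(2,0), assumed 0.25 um coefficient
psf = psf_from_wavefront(defocus * mask, mask)

# A precompensated image could then be obtained by dividing the image spectrum
# by the optical transfer function (FFT of the PSF), with suitable regularization.
```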
Abstract:
Two suites of intermediate-felsic plutonic rocks were recovered by dredges RD63 and RD64 (R/V KK81-06-26) from the northern wall of the Mariana trench near Guam, in the southern part of the Izu-Bonin-Mariana (IBM) island arc system. The locations of the dredges are significant, as the area contains volcanic rocks (forearc basalts and boninites) that have been pivotal in explaining the processes that occur when one lithospheric plate initially begins to subduct beneath another. The plutonic rocks have been classified based on petrologic and geochemical analyses, which provide insight into their origin and evolution in the context of the surrounding Mariana trench. Based on whole-rock geochemistry, these rocks (SiO2: 49-78 wt%) have island arc trace element signatures (Ba, Sr, and Rb enrichment, negative Nb-Ta anomalies, U/Th enrichment), consistent with the adjacent IBM volcanics. Depletion of the rare earth elements (REEs) relative to primitive mantle and excess Zr and Hf compared to the middle REEs indicate that the source of the plutonic rocks is similar to boninites and transitional boninites. Early IBM volcanic rocks define isotopic fields (Sr, Pb, Nd, and Hf isotopes) that represent different aspects of the subduction process (e.g., sediment influence, mantle provenance). The southern Mariana plutonic rocks overlap these fields, but show a clear distinction between RD63 and RD64. Modeling of the REEs, Zr, and Hf shows that the plutonic suites formed via melting of boninite crust or by crystallization from a boninite-like magma rather than from other sources found in the IBM system. The data presented support the hypothesis that the plutonic rocks from RD63 and RD64 are products of subduction initiation and are likely pieces of middle crust in the forearc exposed at the surface by faulting and serpentine mud volcanoes. Their existence shows that intermediate-felsic crust may form very early in the history of an intra-oceanic island arc system. Plutonic rocks with similar formation histories may exist in obducted suprasubduction zone ophiolites and would be evidence that intermediate-felsic forearc plutonics are eventually accreted to the continents.
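One common way such trace element modeling is carried out is with the modal batch-melting equation C_L = C_0 / (D + F(1 - D)). The sketch below applies that standard relation with purely hypothetical source concentrations and partition coefficients; it is not the modeling or the data used for the RD63/RD64 samples.

```python
import numpy as np

def batch_melt(c0, D, F):
    """Modal batch melting: liquid concentration from source concentration c0,
    bulk partition coefficient D, and melt fraction F."""
    return c0 / (D + F * (1.0 - D))

# Illustrative values only (not measurements from the dredged samples):
elements = ["La", "Sm", "Zr", "Hf", "Yb"]
c0 = np.array([0.20, 0.15, 4.0, 0.12, 0.17])   # hypothetical depleted-source ppm
D  = np.array([0.01, 0.05, 0.10, 0.10, 0.09])  # hypothetical bulk partition coefficients

for F in (0.05, 0.15, 0.25):                   # trial melt fractions
    liquid = batch_melt(c0, D, F)
    print(F, dict(zip(elements, np.round(liquid, 2))))
```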
Abstract:
Aluminum oxide (Al2O3, or alumina) is a conventional ceramic known for applications such as wear-resistant coatings, thermal liners, heaters, crucibles, and dielectric systems. However, applications of Al2O3 are limited owing to its inherent brittleness. Owing to their excellent mechanical properties and bending strength, carbon nanotubes (CNTs) are an ideal reinforcement for the Al2O3 matrix to improve its fracture toughness. The role of CNT dispersion in the fracture toughening of plasma-sprayed Al2O3-CNT nanocomposite coatings is discussed in the current work. Pretreatment of the powder feedstock is required to disperse CNTs in the matrix. Four coatings, namely spray-dried Al2O3 (A-SD), Al2O3 blended with 4 wt.% CNT (A4C-B), composite spray-dried Al2O3-4 wt.% CNT (A4C-SD), and composite spray-dried Al2O3-8 wt.% CNT (A8C-SD), are synthesized by plasma spraying. Owing to the extreme temperatures and velocities involved in the plasma spraying of ceramics, retention of CNTs in the resulting coatings necessitates optimizing the plasma processing parameters using an in-flight particle diagnostic sensor. A bimodal microstructure was obtained in the matrix, consisting of a fully melted and resolidified structure and a solid-state sintered structure. CNTs are retained in both the fully melted and the solid-state sintered regions of the processed coatings. The fracture toughness of the A-SD, A4C-B, A4C-SD, and A8C-SD coatings was 3.22, 3.86, 4.60, and 5.04 MPa m^1/2, respectively. This corresponds to fracture toughness improvements over the A-SD coating of 20% (A4C-B) and 43% (A4C-SD) attributable to CNT dispersion, and a further increase from 43% (A4C-SD) to 57% (A8C-SD) attributable to the higher CNT content. Reinforcement by CNTs is described in terms of bridging, anchoring, hook formation, impact alignment, fusion with splats, and mesh formation. The Al2O3/CNT interface is critical in assisting stress transfer and utilizing the excellent mechanical properties of the CNTs. Mathematical and computational modeling based on ab initio principles is applied to understand the wetting behavior at the Al2O3/CNT interface. Contrasting storage moduli were obtained by nanoindentation (~210, 250, 250-350, and 325-420 GPa in the A-SD, A4C-B, A4C-SD, and A8C-SD coatings, respectively), depicting the toughening associated with CNT content and dispersion.
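The quoted toughness improvements follow directly from the measured values; the short calculation below simply reproduces them relative to the A-SD baseline.

```python
# Fracture toughness values from the abstract (MPa m^1/2)
toughness = {"A-SD": 3.22, "A4C-B": 3.86, "A4C-SD": 4.60, "A8C-SD": 5.04}
baseline = toughness["A-SD"]

for coating, k in toughness.items():
    improvement = 100.0 * (k - baseline) / baseline
    print(f"{coating}: {k:.2f} MPa m^1/2  (+{improvement:.0f}% vs A-SD)")
# Reproduces the ~20%, ~43%, and ~57% improvements quoted for the
# CNT-blended and composite spray-dried coatings.
```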