962 results for Plasma applications
Abstract:
A switching control strategy is proposed for a single-inductor current-fed push-pull converter with a secondary-side active voltage doubler rectifier or a voltage rectifier, used in photovoltaic (PV) grid interfacing. The proposed switching control strategy allows the primary-side power switches to be turned on and off with zero-voltage and zero-current switching. The operation of the push-pull converter is analyzed for two modes of operation. The feasibility of the proposed switching control strategy is validated using simulation and experimental results.
Abstract:
This thesis proposes a novel gate drive circuit to improve the switching performance of MOSFET power switches in power electronic converters. The proposed topology exploits the cascode configuration, allowing the minimisation of switching losses in the presence of practical circuit constraints, which enables efficiency and power density improvements. Switching characteristics of the new topology are investigated and key mechanisms that control the switching process are identified. Unique analysis tools and techniques are also developed to demonstrate the application of the cascode gate drive circuit for switching performance optimisation.
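As a rough illustration of the loss mechanism that the proposed gate drive targets, the textbook hard-switching estimate E_sw ≈ 0.5·V_ds·I_d·(t_rise + t_fall) per cycle can be sketched in Python. The numbers below are arbitrary examples and the formula is a first-order approximation, not the analysis developed in the thesis.

# First-order hard-switching loss estimate (illustrative values only).
def switching_loss_w(v_ds: float, i_d: float, t_rise_s: float,
                     t_fall_s: float, f_sw_hz: float) -> float:
    """Approximate average switching loss of a hard-switched MOSFET."""
    energy_per_cycle = 0.5 * v_ds * i_d * (t_rise_s + t_fall_s)
    return energy_per_cycle * f_sw_hz

# e.g. 400 V, 10 A, 20 ns rise, 30 ns fall, 100 kHz -> about 10 W
print(switching_loss_w(400.0, 10.0, 20e-9, 30e-9, 100e3))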
Abstract:
Introduced in this paper is a Bayesian model for isolating the resonant frequency from combustion chamber resonance. The model focuses on characterising the initial rise in the resonant frequency in order to investigate the rise of in-cylinder bulk temperature associated with combustion. By resolving the model parameters, it is possible to determine: the start of pre-mixed combustion, the start of diffusion combustion, the initial resonant frequency, the resonant frequency as a function of crank angle, the in-cylinder bulk temperature as a function of crank angle and the trapped mass as a function of crank angle. The Bayesian method allows individual cycles to be examined without cycle-averaging, enabling inter-cycle variability studies. Results are shown for a turbo-charged, common-rail compression ignition engine run at 2000 rpm and full load.
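The link between resonant frequency and in-cylinder bulk temperature that this abstract relies on is commonly expressed through the cylindrical acoustic-mode relation f = α·c/(π·B) with c = √(γRT). The short Python sketch below illustrates that relation only; it is not the paper's Bayesian model, and the bore, gas properties and example frequency are assumed values.

import math

ALPHA_10 = 1.841   # Bessel constant of the first circumferential acoustic mode
GAMMA = 1.33       # ratio of specific heats assumed for hot combustion gases
R_GAS = 287.0      # specific gas constant, J/(kg K) (air approximation)

def bulk_temperature_from_resonance(freq_hz: float, bore_m: float) -> float:
    """Estimate in-cylinder bulk temperature from the (1,0) resonant frequency.

    Uses f = alpha * c / (pi * B) with c = sqrt(gamma * R * T), hence
    T = (pi * B * f / alpha)**2 / (gamma * R).
    """
    speed_of_sound = math.pi * bore_m * freq_hz / ALPHA_10
    return speed_of_sound ** 2 / (GAMMA * R_GAS)

# e.g. a 5 kHz resonance in a 105 mm bore gives a bulk temperature around 2100 K
print(f"{bulk_temperature_from_resonance(5000.0, 0.105):.0f} K")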
Abstract:
Non-thermal plasma (NTP) has been introduced over the past several years as a promising method for nitrogen oxide (NOx) removal. The intent, when using NTP, is to transfer the input electrical energy selectively to the electrons rather than expending it in heating the entire gas stream; the energetic electrons generate free radicals through collisions and promote the desired chemical changes in the exhaust gases. The generated active species react with the pollutant molecules and decompose them. This paper reviews and summarizes relevant literature regarding various aspects of the application of NTP technology to NOx removal from exhaust gases. A comprehensive description of the available scientific literature on NOx removal using NTP technology is presented, including various types of NTP, e.g. dielectric barrier discharge, corona discharge and electron beam. Furthermore, the combination of NTP with catalysts and adsorbents for better NOx removal efficiency is presented in detail. The removal of NOx from both simulated gases and real diesel engine exhaust is also considered in this review paper. As NTP is a new technique and is not yet commercialized, more studies need to be performed in this field.
Abstract:
There is an increasing need in biology and clinical medicine to robustly and reliably measure tens-to-hundreds of peptides and proteins in clinical and biological samples with high sensitivity, specificity, reproducibility and repeatability. Previously, we demonstrated that LC-MRM-MS with isotope dilution has suitable performance for quantitative measurements of small numbers of relatively abundant proteins in human plasma, and that the resulting assays can be transferred across laboratories while maintaining high reproducibility and quantitative precision. Here we significantly extend that earlier work, demonstrating that 11 laboratories using 14 LC-MS systems can develop, determine analytical figures of merit, and apply highly multiplexed MRM-MS assays targeting 125 peptides derived from 27 cancer-relevant proteins and 7 control proteins to precisely and reproducibly measure the analytes in human plasma. To ensure consistent generation of high quality data we incorporated a system suitability protocol (SSP) into our experimental design. The SSP enabled real-time monitoring of LC-MRM-MS performance during assay development and implementation, facilitating early detection and correction of chromatographic and instrumental problems. Low to sub-nanogram/mL sensitivity for proteins in plasma was achieved by one-step immunoaffinity depletion of 14 abundant plasma proteins prior to analysis. Median intra- and inter-laboratory reproducibility was <20%, sufficient for most biological studies and candidate protein biomarker verification. Digestion recovery of peptides was assessed and quantitative accuracy improved using heavy isotope labeled versions of the proteins as internal standards. Using the highly multiplexed assay, participating laboratories were able to precisely and reproducibly determine the levels of a series of analytes in blinded samples used to simulate an inter-laboratory clinical study of patient samples. Our study further establishes that LC-MRM-MS using stable isotope dilution, with appropriate attention to analytical validation and appropriate quality control measures, enables sensitive, specific, reproducible and quantitative measurements of proteins and peptides in complex biological matrices such as plasma.
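As background to the isotope-dilution quantification described above, the generic Python sketch below shows how an endogenous ("light") peptide concentration is obtained from the light/heavy peak-area ratio against a spiked heavy-labeled internal standard, and how a coefficient of variation is computed for reproducibility. Function names, spike level and peak areas are illustrative, not values from the study.

from statistics import mean, stdev

def light_concentration(light_area: float, heavy_area: float,
                        heavy_spike_ng_per_ml: float) -> float:
    """Endogenous peptide concentration from the light/heavy area ratio."""
    return (light_area / heavy_area) * heavy_spike_ng_per_ml

def cv_percent(replicates) -> float:
    """Coefficient of variation (%), as used for intra/inter-lab reproducibility."""
    return 100.0 * stdev(replicates) / mean(replicates)

# Three replicate measurements of one peptide transition: (light area, heavy area)
areas = [(8.1e5, 4.0e5), (7.9e5, 4.1e5), (8.4e5, 3.9e5)]
concs = [light_concentration(l, h, heavy_spike_ng_per_ml=25.0) for l, h in areas]
print(concs, f"CV = {cv_percent(concs):.1f}%")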
Abstract:
This paper discusses the issue of sensing and control for stabilizing a swinging load. Our work has focused in particular on the dragline as used for overburden stripping in open-pit coal mining, but many of the principles would also be applicable to construction cranes. Results obtained from experimental work on a full-scale production dragline are presented.
Abstract:
This thesis investigates the design of motivating and engaging software experiences. In particular it examines the use of video game elements in non-game contexts, known as gamification, and how to effectively design gamification experiences for smartphone applications. The original contribution of this thesis is a novel framework for designing gamification, derived from an iterative process of evaluating gamified prototypes. The outcomes of this research can help us to better understand the impact of gamification in today's society and how it can be used to design more effective software.
Abstract:
Many complex aeronautical design problems can be formulated with efficient multi-objective evolutionary optimization methods and game strategies. This book describes the role of advanced innovative evolution tools in finding the solution, or the set of solutions, of single- or multi-disciplinary optimization problems. These tools use the concepts of multi-population, asynchronous parallelization and hierarchical topology, which allow different models (precise, intermediate and approximate) to be used, with each node, belonging to a different hierarchical layer, handled by a different Evolutionary Algorithm. The efficiency of evolutionary algorithms for both single- and multi-objective optimization problems is significantly improved by the coupling of EAs with games, and in particular by a new dynamic methodology named “Hybridized Nash-Pareto games”. Multi-objective optimization techniques and robust design problems taking uncertainties into account are introduced and explained in detail. Several applications dealing with civil aircraft and UAV/UCAV systems are implemented numerically and discussed. Applications of increasing optimization complexity are presented, as well as two hands-on test-case problems. These examples focus on aeronautical applications and will be useful to practitioners in the laboratory or in industrial design environments. The evolutionary methods coupled with games presented in this volume can be applied to other areas, including surface and marine transport, structures, biomedical engineering, renewable energy and environmental problems.
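A minimal Python sketch of the Pareto-dominance test underlying the multi-objective evolutionary methods discussed here is given below (minimisation assumed). This is generic textbook logic, not the book's hybridized Nash-Pareto game machinery.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a population's objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (3.0, 3.0) is dominated by (2.0, 2.0); the other three points form the front
print(pareto_front([(1.0, 4.0), (2.0, 2.0), (3.0, 3.0), (4.0, 1.0)]))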
Abstract:
This study investigates the effect of non-thermal plasma technology on the abatement of particulate matter (PM) from actual diesel exhaust. Ozone (O3) strongly promotes PM oxidation, the main product of which is carbon dioxide (CO2). PM oxidation into the less harmful product (CO2) is the main objective, while the correlation between PM, O3 and CO2 is also considered. A dielectric barrier discharge reactor driven by pulsed power technology has been designed to produce plasma inside the diesel exhaust. To characterise the system under varied conditions, a range of applied voltages from 11 kVpp to 21 kVpp at repetition rates of 2.5, 5, 7.5 and 10 kHz has been experimentally investigated. The results show that by increasing the applied voltage and repetition rate, higher discharge power and CO2 dissociation can be achieved. A PM removal efficiency of more than 50% has been achieved during the experiments, and high concentrations of ozone, of the order of a few hundred ppm, have been observed at high discharge powers. Furthermore, the time dependence of O3, CO2 and PM concentrations at different plasma states has been analysed. Based on this analysis, an inverse relationship between ozone concentration and PM removal has been found, and the role of ozone in PM removal in plasma treatment of diesel exhaust has been highlighted.
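Two of the quantities reported above can be illustrated with a short, generic Python sketch: discharge power estimated from a pulse's voltage and current waveforms, and PM removal efficiency. The trapezoidal integration and the example numbers are assumptions for illustration, not the authors' measurement procedure.

def discharge_power_w(t_s, v_volts, i_amps, rep_rate_hz):
    """Energy of one pulse (trapezoidal integral of v*i over time) times repetition rate."""
    p = [v * i for v, i in zip(v_volts, i_amps)]
    energy_j = sum(0.5 * (p[k] + p[k + 1]) * (t_s[k + 1] - t_s[k])
                   for k in range(len(t_s) - 1))
    return energy_j * rep_rate_hz

def removal_efficiency_pct(pm_in_mg_m3, pm_out_mg_m3):
    """PM removal efficiency in percent."""
    return 100.0 * (pm_in_mg_m3 - pm_out_mg_m3) / pm_in_mg_m3

# e.g. inlet 60 mg/m3 reduced to 28 mg/m3 -> about 53 % removal (cf. the >50 % reported)
print(removal_efficiency_pct(60.0, 28.0))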
Abstract:
This work addresses fundamental issues in the mathematical modelling of the diffusive motion of particles in biological and physiological settings. New mathematical results are proved and implemented in computer models for the colonisation of the embryonic gut by neural cells and the propagation of electrical waves in the heart, offering new insights into the relationships between structure and function. In particular, the thesis focuses on the use of non-local differential operators of non-integer order to capture the main features of diffusion processes occurring in complex spatial structures characterised by high levels of heterogeneity.
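A representative model of this kind is the space-fractional diffusion equation, written below (in LaTeX) as a standard formulation from the literature rather than the exact operators used in the thesis:

% Space-fractional diffusion with a non-local operator of non-integer order alpha
\[
  \frac{\partial u(\mathbf{x},t)}{\partial t}
    = -K_\alpha \, (-\Delta)^{\alpha/2}\, u(\mathbf{x},t),
  \qquad 1 < \alpha \le 2 .
\]

Here the fractional Laplacian (-\Delta)^{\alpha/2} is non-local; \alpha = 2 recovers classical Fickian diffusion, while smaller \alpha yields the anomalous transport associated with highly heterogeneous structures.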
Abstract:
Affect is an important feature of multimedia content and conveys valuable information for multimedia indexing and retrieval. Most existing studies of affective content analysis are limited to low-level features or mid-level representations, and are generally criticized for their incapacity to bridge the gap between low-level features and high-level human affective perception. The facial expressions of subjects in images carry important semantic information that can substantially influence human affective perception, but they have seldom been investigated for affective classification of facial images towards practical applications. This paper presents an automatic image emotion detector (IED) for affective classification of practical (or non-laboratory) data using facial expressions, where many “real-world” challenges are present, including pose, illumination and size variations. The proposed method is novel, with its framework designed specifically to overcome these challenges using multi-view versions of face and fiducial point detectors, and a combination of point-based texture and geometry. Performance comparisons across several key parameters of the relevant algorithms are conducted to explore the optimum parameters for high accuracy and fast computation. A comprehensive set of experiments with existing and new datasets shows that the method is effective despite pose variations, fast, appropriate for large-scale data, and as accurate as methods with state-of-the-art performance on laboratory-based data. The proposed method was also applied to affective classification of images from the British Broadcasting Corporation (BBC) in a task typical of a practical application, providing some valuable insights.
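The kind of pipeline described above can be sketched generically in Python: geometric features derived from detected fiducial points are combined with point-based texture descriptors and fed to an ordinary SVM. The feature definitions, synthetic data and classifier choice are illustrative assumptions, not the paper's detectors or models.

import numpy as np
from sklearn.svm import SVC

def geometry_features(points):
    """Pairwise distances between fiducial points, normalised by the largest distance."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    iu = np.triu_indices(len(points), k=1)
    return d[iu] / (d.max() + 1e-9)

def build_feature_vector(points, texture):
    """Concatenate geometric and point-based texture descriptors."""
    return np.concatenate([geometry_features(points), np.ravel(texture)])

# Synthetic stand-ins for detected fiducial points and local texture descriptors
rng = np.random.default_rng(0)
faces = [(rng.normal(size=(5, 2)), rng.normal(size=8)) for _ in range(40)]
X = np.stack([build_feature_vector(p, t) for p, t in faces])
y = np.array([0, 1] * 20)                 # placeholder affect labels
clf = SVC(kernel="rbf").fit(X, y)         # any off-the-shelf classifier would do
print(clf.predict(X[:3]))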
Abstract:
The ability to estimate the expected Remaining Useful Life (RUL) is critical to reducing maintenance costs, operational downtime and safety hazards. In most industries, reliability analysis is based on Reliability Centred Maintenance (RCM) and lifetime distribution models. In these models, the lifetime of an asset is estimated using failure time data; however, statistically sufficient failure time data are often difficult to obtain in practice due to fixed time-based replacement and the small population of identical assets. When condition indicator data are available in addition to failure time data, one alternative to the traditional reliability models is Condition-Based Maintenance (CBM). Covariate-based hazard modelling is one of the CBM approaches. A number of covariate-based hazard models exist; however, little work has been done to evaluate their performance in asset life prediction across various condition indicators and levels of data availability. This paper reviews two covariate-based hazard models, the Proportional Hazard Model (PHM) and the Proportional Covariate Model (PCM). To assess the performance of these models, the expected RUL is compared to the actual RUL. The outcomes demonstrate that both models achieve convincingly good results in RUL prediction; however, PCM has a smaller absolute prediction error. In addition, PHM shows an over-smoothing tendency compared to PCM when the condition data change suddenly. Moreover, the case studies show that PCM is not biased in the case of small sample sizes.
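A minimal Python sketch of a Proportional Hazard Model with a Weibull baseline, and of the expected RUL obtained from it by numerical integration, is given below. The covariate is held constant and all parameter values are illustrative; this is not the paper's PCM or its case-study data.

import math

def hazard(t, shape=2.5, scale=1000.0, beta=0.02, covariate=10.0):
    """PHM hazard h(t|z) = h0(t) * exp(beta * z) with a Weibull baseline h0."""
    h0 = (shape / scale) * (t / scale) ** (shape - 1.0)
    return h0 * math.exp(beta * covariate)

def survival(t, dt=5.0, **kw):
    """S(t) = exp(-cumulative hazard from 0 to t), by midpoint quadrature."""
    cum = sum(hazard((k + 0.5) * dt, **kw) * dt for k in range(int(t / dt)))
    return math.exp(-cum)

def expected_rul(t_now, horizon=5000.0, dt=5.0, **kw):
    """Mean residual life: integral of S(u) beyond t_now, divided by S(t_now)."""
    steps = int((horizon - t_now) / dt)
    area = sum(survival(t_now + (k + 0.5) * dt, dt, **kw) * dt for k in range(steps))
    return area / survival(t_now, dt, **kw)

# Expected remaining life of an asset that has survived 400 hours (illustrative units)
print(f"Expected RUL at t = 400 h: {expected_rul(400.0):.0f} h")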
Abstract:
The proliferation of the web presents an unsolved problem of automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine using efficient algorithms and compressed document representations. It is applied to two web-scale crawls covering tens of terabytes. ClueWeb09 and ClueWeb12 contain 500 million and 733 million web pages respectively and were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine-grained clustering has not been previously demonstrated. Previous approaches clustered a sample, which limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection in clustering and produces several orders of magnitude more clusters than existing algorithms. Fine-grained clustering is necessary for meaningful clustering in massive collections, where the number of distinct topics grows linearly with collection size. These fine-grained clusters show improved cluster quality when assessed with two novel evaluations using ad hoc search relevance judgments and spam classifications for external validation. These evaluations solve the problem of assessing the quality of clusters where categorical labeling is unavailable or infeasible.
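One ingredient the abstract relies on, clustering over compressed binary document signatures with a k-means-style assignment step under Hamming distance, can be sketched in a few lines of Python. This is a generic illustration, not the EM-tree algorithm itself.

import random

def hamming(a, b):
    """Hamming distance between two bit signatures stored as Python integers."""
    return bin(a ^ b).count("1")

def assign(signatures, centroids):
    """Assign each signature to its nearest centroid (one k-means-style pass)."""
    return [min(range(len(centroids)), key=lambda k: hamming(s, centroids[k]))
            for s in signatures]

def update(signatures, labels, k, bits):
    """New centroid for each cluster: per-bit majority vote over its members."""
    new = []
    for c in range(k):
        members = [s for s, lab in zip(signatures, labels) if lab == c]
        if not members:
            new.append(random.getrandbits(bits))
            continue
        centroid = 0
        for b in range(bits):
            if 2 * sum((s >> b) & 1 for s in members) > len(members):
                centroid |= 1 << b
        new.append(centroid)
    return new

# Toy run: 1000 random 64-bit signatures, 4 clusters, a few refinement passes
sigs = [random.getrandbits(64) for _ in range(1000)]
cents = random.sample(sigs, 4)
for _ in range(5):
    labels = assign(sigs, cents)
    cents = update(sigs, labels, k=4, bits=64)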
Abstract:
This project constructed virtual plant leaf surfaces from digitised data sets for use in droplet spray models. Digitisation techniques for obtaining data sets for cotton, Chenopodium and wheat leaves are discussed and novel algorithms for the reconstruction of the leaves from these three plant species are developed. The reconstructed leaf surfaces are included in agricultural droplet spray models to investigate the effect of the nozzle and spray formulation combination on the proportion of spray retained by the plant. A numerical study of the post-impaction motion of large droplets that have formed on the leaf surface is also considered.
Abstract:
The world is facing an energy crisis due to exponential population growth and the limited availability of fossil fuels. Carbon, one of the most abundant materials found on earth, and its allotropes have been proposed in this project for novel energy generation and storage devices. This study investigated the synthesis and properties of these carbon nanomaterials for applications in organic solar cells and supercapacitors.