952 results for "single channel algorithm"


Relevance: 30.00%

Abstract:

Renewable or sustainable energy (SE) sources have attracted the attention of many countries because the power generated is environmentally friendly and the sources are not subject to the instability of price and availability. This dissertation presents new trends in the DC-AC converters (inverters) used in renewable energy sources, particularly in photovoltaic (PV) energy systems. A review of the existing technologies is performed for both single-phase and three-phase systems, and the pros and cons of the best candidates are investigated. In many modern energy conversion systems, a DC voltage, provided by an SE source or energy storage device, must be boosted and converted to an AC voltage with a fixed amplitude and frequency. A novel switching pattern based on the concept of the conventional space-vector pulse-width-modulation (SVPWM) technique is developed for single-stage boost inverters using the current source inverter (CSI) topology. The six main switching states and two zero states of conventional SVPWM, in which three switches conduct at any given instant, are modified here into three charging states and six discharging states with only two switches conducting at any given instant. The charging states are necessary to boost the DC input voltage. It is demonstrated that the CSI topology, in conjunction with the developed switching pattern, can provide the required residential AC voltage from the low DC voltage of a single PV panel at its rated power, for both linear and nonlinear loads. In a micro-grid, control of active and reactive power, and consequently voltage regulation, is one of the main requirements. Therefore, the capability of the single-stage boost inverter to control active power and provide reactive power is investigated. It is demonstrated that the injected active and reactive power can be independently controlled through two modulation indices introduced in the proposed switching algorithm. The system is capable of injecting a desired level of reactive power, while maximum power point tracking (MPPT) dictates the desired active power. The developed switching pattern is experimentally verified on a laboratory-scale three-phase 200 W boost inverter for both grid-connected and stand-alone operation, and the results are presented.
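
As background to the abstract above, the sketch below shows the dwell-time calculation of a conventional SVPWM scheme, which is the starting point that the thesis's modified pattern (three charging states plus six discharging states) builds on. The function name, the normalization of the modulation index, and the example values are illustrative assumptions, not the thesis's implementation.

```python
import numpy as np

def svpwm_dwell_times(m, theta, T_s):
    """Dwell times for conventional SVPWM within one switching period T_s.

    m     : modulation index (0..1, under a common normalization)
    theta : reference-vector angle within the current 60-degree sector [rad]
    T_s   : switching period [s]
    Returns (T1, T2, T0): time on the two adjacent active vectors and on the
    zero states (the states the thesis replaces with charging states).
    """
    T1 = m * T_s * np.sin(np.pi / 3 - theta)
    T2 = m * T_s * np.sin(theta)
    T0 = T_s - T1 - T2          # remaining time spent on zero states
    return T1, T2, T0

# Example: m = 0.8, 30 degrees into the sector, 100 microsecond period
print(svpwm_dwell_times(0.8, np.pi / 6, 100e-6))
```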

Relevance: 30.00%

Abstract:

Many classical as well as modern optimization techniques exist. One modern method, belonging to the field of swarm intelligence, is ant colony optimization. This relatively new optimization concept uses artificial ants and is inspired by the way real ants search for food. In this thesis, a novel ant colony optimization technique for continuous domains was developed. The goal was to provide improvements in computing time and robustness compared to other optimization algorithms. Optimization function spaces can have extreme topologies and are therefore difficult to optimize. The proposed method effectively searched the domain and solved difficult single-objective optimization problems. The developed algorithm was run on numerous classic test cases for both single- and multi-objective problems. The results demonstrate that the method is robust and stable and that the number of objective function evaluations is comparable to other optimization algorithms.
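
For readers unfamiliar with continuous-domain ant colony optimization, here is a minimal sketch in the spirit of the well-known ACO_R variant (Gaussian sampling around an archive of good solutions). It only illustrates the general idea; the thesis's algorithm, parameter choices, and settings are not reproduced here.

```python
import numpy as np

def aco_continuous(f, bounds, n_ants=20, archive_size=10, q=0.1, xi=0.85,
                   iters=200, seed=0):
    """Minimal continuous ant colony optimization (in the spirit of ACO_R).

    f      : objective to minimize, f(x) with x of shape (dim,)
    bounds : sequence of (low, high) per dimension
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = len(bounds)
    # Initialise the solution archive with random points
    archive = rng.uniform(bounds[:, 0], bounds[:, 1], size=(archive_size, dim))
    fitness = np.apply_along_axis(f, 1, archive)

    for _ in range(iters):
        order = np.argsort(fitness)
        archive, fitness = archive[order], fitness[order]
        # Rank-based weights: better archive members guide more ants
        ranks = np.arange(archive_size)
        w = np.exp(-ranks**2 / (2 * (q * archive_size) ** 2))
        w /= w.sum()
        for _ in range(n_ants):
            k = rng.choice(archive_size, p=w)            # pick a guiding solution
            # Spread per dimension from mean distance to the rest of the archive
            sigma = xi * np.mean(np.abs(archive - archive[k]), axis=0)
            x = np.clip(rng.normal(archive[k], sigma + 1e-12),
                        bounds[:, 0], bounds[:, 1])
            fx = f(x)
            if fx < fitness[-1]:                         # replace worst member
                archive[-1], fitness[-1] = x, fx
    best = np.argmin(fitness)
    return archive[best], fitness[best]

# Example on the sphere function
print(aco_continuous(lambda x: np.sum(x**2), [(-5, 5)] * 3))
```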

Relevance: 30.00%

Abstract:

Respiratory gating in lung PET imaging to compensate for respiratory motion artifacts is a current research issue with broad potential impact on the quantitation, diagnosis, and clinical management of lung tumors. However, PET images collected in discrete gated bins can be significantly affected by noise, because each bin contains fewer activity counts unless the total PET acquisition time is prolonged; gating methods should therefore be combined with image-based motion correction and registration methods. The aim of this study was to develop and validate a fast and practical solution to the problem of respiratory motion for the detection and accurate quantitation of lung tumors in PET images. This included: (1) developing a computer-assisted algorithm for PET/CT images that automatically segments lung regions in CT images and identifies and localizes lung tumors in PET images; and (2) developing and comparing registration algorithms that process the information from the entire respiratory cycle and integrate the tumor in the different gated bins into a single reference bin. Four registration/integration algorithms (centroid-based, intensity-based, rigid-body, and optical-flow registration) were compared, as well as two registration schemes (a direct scheme and a successive scheme). Validation was performed with the computerized 4D NCAT phantom and with a dynamic lung-chest phantom imaged on a GE PET/CT system. Iterations were conducted for simulated tumors of different sizes and for different noise levels. Static tumors without respiratory motion were used as the gold standard; quantitative results were compared with respect to tumor activity concentration, cross-correlation coefficient, relative noise level, and computation time. After correction, tumor activity values and tumor volumes were closer to those of the static (gold standard) tumors, and higher correlation values and lower noise were achieved. With this method, the compromise between a short PET scan time and reduced image noise can be achieved, while quantification and clinical analysis become fast and precise.
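
To illustrate the simplest of the compared approaches, the sketch below shows a centroid-based registration/integration step: each gated bin is shifted so that its lesion centroid coincides with that of the reference bin before summation. The segmentation threshold and the function interface are illustrative assumptions, not the study's implementation.

```python
import numpy as np
from scipy import ndimage

def centroid_integrate(gated_bins, reference_index=0, threshold=0.5):
    """Centroid-based alignment of gated PET bins into one reference bin (a sketch).

    gated_bins : list of 3-D numpy arrays, one per respiratory gate
    threshold  : fraction of each bin's maximum used to segment the hot lesion
    Returns the sum of all bins after shifting each lesion centroid onto the
    centroid of the reference bin.
    """
    ref = gated_bins[reference_index]
    ref_centroid = np.array(ndimage.center_of_mass(ref > threshold * ref.max()))
    integrated = np.zeros_like(ref, dtype=float)
    for b in gated_bins:
        c = np.array(ndimage.center_of_mass(b > threshold * b.max()))
        # Translate this bin so its lesion centroid matches the reference
        integrated += ndimage.shift(b.astype(float), ref_centroid - c, order=1)
    return integrated
```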

Relevance: 30.00%

Abstract:

Purpose: Computed tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as magnetic resonance imaging (MRI), CT offers fast acquisition, higher spatial resolution, and a higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented on a gray scale of independent values in Hounsfield units (HU), where higher HU values represent higher density. High-density materials such as metal tend to erroneously increase the HU values around them because of reconstruction software limitations. This problem of erroneously increased HU values in the presence of metal is referred to as metal artefact. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of clinically relevant metal objects. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes; however, very limited information is available about the effect of artefact correction on dose calculation accuracy. This research study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.

Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), while the Dual-Energy imaging method was developed at Duke University. All three approaches were applied in this research for dosimetric evaluation of CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes; two sets of plans, multi-arc and single-arc, were designed with the Volumetric Modulated Arc Therapy (VMAT) technique to avoid or minimize the influence of high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning target volume (PTV) were compared, and the homogeneity index (HI) was calculated.
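
The dose metrics referred to above can be computed in a few lines. The sketch below assumes one common definition of the homogeneity index, (Dmax − Dmin)/Dprescribed; the study may use a different formulation, and the example numbers are hypothetical.

```python
def percent_error(calculated_dose, reference_dose):
    """Percent error between a calculated dose and a reference (measured) dose."""
    return abs(calculated_dose - reference_dose) / reference_dose * 100.0

def homogeneity_index(d_max, d_min, d_prescribed):
    """One common homogeneity index definition: (Dmax - Dmin) / Dprescribed.
    Values closer to zero indicate a more homogeneous PTV dose."""
    return (d_max - d_min) / d_prescribed

# Example with hypothetical per-fraction doses in Gy (not data from the study)
print(percent_error(2.05, 2.00))          # 2.5 percent
print(homogeneity_index(2.10, 1.96, 2.00))  # 0.07
```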

Results: (1) Without the GSI-based MAR application, the percent error between the mean dose and the absolute dose ranged from 3.4% to 5.7% per fraction. With the GSI-based MAR algorithm, the error decreased to 0.09-2.3% per fraction, a difference of 1.7-4.2% per fraction between plans with and without the algorithm. (2) Differences of 0.1-3.2% were observed for the maximum dose, 1.5-10.4% for the minimum dose, and 1.4-1.7% for the mean dose. Homogeneity indices (HI) of 0.065-0.068 for the Dual-Energy method and 0.063-0.141 for the projection-based MAR algorithm were also calculated.

Conclusion: (1) Without the GSI-based MAR algorithm, the percent error may be as high as 5.7%, which undermines the goal of radiation therapy to deliver precise treatment; the GSI-based MAR algorithm is therefore desirable because of its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviations were evident in the maximum and minimum doses. The HI for the Dual-Energy method nearly reached the ideal value of zero. In conclusion, for images with metal artefacts, the Dual-Energy method gave better dose calculation accuracy to the planning target volume (PTV) than images with or without the GE MAR algorithm.

Relevance: 30.00%

Abstract:

Optical control of interactions in ultracold gases opens new fields of research by creating "designer" interactions with high spatial and temporal resolution. However, previous optical methods using single optical fields generally suffer from atom loss due to spontaneous scattering. This thesis reports new optical methods that employ two optical fields to control interactions in ultracold gases while suppressing spontaneous scattering by quantum interference. In this dissertation, I discuss the experimental demonstration of two-optical-field methods to control narrow and broad magnetic Feshbach resonances in an ultracold gas of ⁶Li atoms. The narrow Feshbach resonance is shifted by 30 times its width and atom loss is suppressed by destructive quantum interference. Near the broad Feshbach resonance, the spontaneous lifetime of the atoms is increased from 0.5 ms for single-field methods to 400 ms using our two-optical-field method. Furthermore, I report on a new theoretical model, the continuum-dressed state model, that calculates the optically induced scattering phase shift for both the broad and narrow Feshbach resonances by treating them in a unified manner. The continuum-dressed state model fits the experimental data in both shape and magnitude using only one free parameter. Using the continuum-dressed state model, I illustrate the advantages of our two-optical-field method over single-field optical methods.

Relevance: 30.00%

Abstract:

The effectiveness of an optimization algorithm can be reduced to its ability to navigate an objective function’s topology. Hybrid optimization algorithms combine various optimization algorithms using a single meta-heuristic so that the hybrid algorithm is more robust, computationally efficient, and/or accurate than the individual algorithms it is made of. This thesis proposes a novel meta-heuristic that uses search vectors to select the constituent algorithm that is appropriate for a given objective function. The hybrid is shown to perform competitively against several existing hybrid and non-hybrid optimization algorithms over a set of three hundred test cases. This thesis also proposes a general framework for evaluating the effectiveness of hybrid optimization algorithms. Finally, this thesis presents an improved Method of Characteristics Code with novel boundary conditions, which better characterizes pipelines than previous codes. This code is coupled with the hybrid optimization algorithm in order to optimize the operation of real-world piston pumps.
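
The thesis's meta-heuristic selects among constituent algorithms using search vectors; as a rough illustration of the general idea of adaptive algorithm selection (not the thesis's method), the toy sketch below credits each constituent optimizer by its recent improvement and samples the next algorithm proportionally.

```python
import numpy as np

def hybrid_minimize(f, x0, optimizers, rounds=20, seed=0):
    """Toy hybrid meta-heuristic: at each round, favour the constituent
    optimizer that has recently produced the largest improvement.

    optimizers : list of callables step(f, x) -> new candidate point
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    credit = np.ones(len(optimizers))            # running score per algorithm
    for _ in range(rounds):
        p = credit / credit.sum()
        i = rng.choice(len(optimizers), p=p)     # pick an algorithm proportionally
        x_new = optimizers[i](f, x)
        fx_new = f(x_new)
        credit[i] = 0.9 * credit[i] + max(fx - fx_new, 0.0)  # decay + reward
        if fx_new < fx:
            x, fx = x_new, fx_new
    return x, fx

# Two toy "constituent algorithms": a random perturbation and a coordinate step
perturb = lambda f, x: x + np.random.normal(0.0, 0.1, size=x.shape)
coord   = lambda f, x: x - 0.1 * np.sign(x)
print(hybrid_minimize(lambda v: np.sum(v**2), [2.0, -1.5], [perturb, coord]))
```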

Relevance: 30.00%

Abstract:

Thick Holocene sedimentary sections (>45 m) cored in the Palmer Deep by the United States Antarctic Program (USAP) and during Ocean Drilling Program (ODP) Leg 178 provide the first opportunity to examine past geomagnetic field behavior at high southern latitudes. After removal of a low-coercivity drilling overprint the sediments display a stable, single-component remanent magnetization. Two short cores that recovered the uppermost 2.6 m of sediment have inclinations that fluctuate about the present day inclination (-57°) measured at Faraday Station, and several features with wavelengths of 10 to 20 cm appear to be correlative. However, shipboard measurements of inclination fluctuations on split-core samples from three holes drilled at ODP Site 1098 do not correlate well with each other, even though the intensity and susceptibility data correlate very well and the overall mean inclination for cores from each hole is consistent with the expected geocentric axial dipole (GAD) inclination. The correlation is improved dramatically by using inclinations measured on u-channels taken from the pristine center of a split core. Consequently, the anomalous directions and the resulting poor between-hole correlation of inclinations obtained from shipboard data can be attributed to coring-induced deformation, which is common on the outer edge of ODP piston cores, and/or measurement artifacts in the split-core data. Our preferred inclination record is thus derived from u-channel results. The upper ~25 m represents continuous sedimentation over the past 9000 yr, with an average sedimentation rate exceeding 250 cm/kyr (0.25 cm/yr). Given that remanence measurements on u-channels average over an interval <7 cm long, we obtained independent measurements of the paleo-geomagnetic field that average over only ~30 yr. This high-resolution record is characterized by an inclination that fluctuates within +/-15° of the current GAD inclination.
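
The ~30 yr temporal averaging quoted above follows directly from the sedimentation rate and the u-channel response width:

```python
# Temporal averaging implied by the u-channel measurement window (a simple check)
sedimentation_rate_cm_per_kyr = 250.0   # from the record: >250 cm/kyr
window_cm = 7.0                         # u-channel response width (<7 cm)

years_averaged = window_cm / sedimentation_rate_cm_per_kyr * 1000.0
print(years_averaged)                   # ~28 yr, consistent with the ~30 yr quoted
```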

Relevance: 30.00%

Abstract:

Multi-frequency eddy current (EC) inspection with a transmit-receive probe (two horizontally offset coils) is used to monitor the pressure tube (PT) to calandria tube (CT) gap of CANDU® fuel channels. Accurate gap measurements are crucial to ensure fitness for service; however, variations in probe liftoff, PT electrical resistivity, and PT wall thickness can generate systematic measurement errors. Validated mathematical models of the EC probe are very useful for data interpretation and may improve the gap measurement under inspection conditions where these parameters vary. As a first step, exact solutions were developed for the electromagnetic response of a transmit-receive coil pair situated above two parallel plates separated by an air gap, and this model was validated against experimental data with flat-plate samples. Finite element method models revealed that this geometrical approximation could not accurately match experimental data from real tubes, so analytical solutions for the probe in a double-walled pipe (the CANDU® fuel channel geometry) were generated using the second-order vector potential (SOVP) formalism. All electromagnetic coupling coefficients arising from the probe and the layered conductors were determined and substituted into Kirchhoff's circuit equations to calculate the pickup coil signal. The flat-plate model was used as the basis for an inverse algorithm (IA) that simultaneously extracts the relevant experimental parameters from EC data. The IA was validated over a large range of second-layer plate resistivities (1.7 to 174 µΩ·cm), plate wall thicknesses (~1 to 4.9 mm), probe liftoffs (~2 mm to 8 mm), and plate-to-plate gaps (~0 mm to 13 mm). The IA achieved a relative error of less than 6% for the extracted flat-plate resistivity and an accuracy of ±0.1 mm for the liftoff measurement. It achieved a plate-gap measurement accuracy of better than ±0.7 mm over probe liftoffs of ~2.4 mm to 7.5 mm, and ±0.3 mm at nominal liftoff (2.42 ± 0.05 mm), providing confidence in the general validity of the algorithm. This demonstrates the potential of using an analytical model to extract variable parameters that may affect the gap measurement accuracy.
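
A generic sketch of how such an inverse algorithm can be set up is shown below: a nonlinear least-squares fit of a forward model to the measured multi-frequency probe response. The forward model here is a user-supplied placeholder, not the analytical flat-plate or SOVP model of the thesis.

```python
import numpy as np
from scipy.optimize import least_squares

def invert_ec_signal(measured, forward_model, initial_guess, bounds):
    """Generic inverse-algorithm sketch: fit model parameters (e.g. resistivity,
    liftoff, wall thickness, gap) so that a forward model reproduces the
    measured multi-frequency eddy-current signal.

    measured      : complex pickup-coil voltages at the inspection frequencies
    forward_model : callable params -> predicted complex voltages (user supplied)
    bounds        : (lower_bounds, upper_bounds) arrays for the parameters
    """
    def residuals(params):
        predicted = forward_model(params)
        # Split complex residuals into real/imaginary parts for the solver
        return np.concatenate([(predicted - measured).real,
                               (predicted - measured).imag])

    result = least_squares(residuals, initial_guess, bounds=bounds)
    return result.x
```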

Relevance: 30.00%

Abstract:

We consider linear precoder design for an underlay cognitive radio multiple-input multiple-output broadcast channel, where the secondary system, consisting of a secondary base station (BS) and a group of secondary users (SUs), is allowed to share the same spectrum with the primary system. All transceivers are equipped with multiple antennas, and each antenna has its own maximum power constraint. Assuming the zero-forcing method is used to eliminate multiuser interference, we study the sum-rate maximization problem for the secondary system subject to both per-antenna power constraints at the secondary BS and interference power constraints at the primary users. The problem of interest differs from those studied previously, which often assumed a sum power constraint and/or a single antenna at the primary receivers, or at both the primary and secondary receivers. To develop an efficient numerical algorithm, we first invoke the rank relaxation method to transform the considered problem into a convex-concave problem based on a downlink-uplink result. We then propose a barrier interior-point method to solve the resulting saddle-point problem. In particular, in each iteration of the proposed method we find the Newton step by solving a system of discrete-time Sylvester equations, which significantly reduces the complexity compared to the conventional method. Simulation results demonstrate the fast convergence and effectiveness of the proposed algorithm.
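
As a small aside on the linear-algebra kernel mentioned above, the sketch below solves a single discrete-time (Stein) Sylvester equation X = A X B + Q by Kronecker vectorization. It is only a generic illustration of the equation type, suitable for small matrices; it is not the paper's Newton-step computation.

```python
import numpy as np

def solve_discrete_sylvester(A, B, Q):
    """Solve the discrete-time (Stein) Sylvester equation  X = A X B + Q
    by Kronecker vectorization, using vec(A X B) = (B^T kron A) vec(X)."""
    n, m = Q.shape
    M = np.eye(n * m) - np.kron(B.T, A)
    x = np.linalg.solve(M, Q.reshape(-1, order="F"))   # column-major vec()
    return x.reshape(n, m, order="F")

# Quick self-check with random matrices
rng = np.random.default_rng(1)
A = 0.5 * rng.standard_normal((3, 3))
B = 0.5 * rng.standard_normal((4, 4))
Q = rng.standard_normal((3, 4))
X = solve_discrete_sylvester(A, B, Q)
print(np.allclose(A @ X @ B + Q, X))   # True
```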

Relevance: 30.00%

Abstract:

Over the last few decades, work on infrared sensor applications has advanced considerably worldwide. A persistent difficulty, however, is that objects are not always clear enough, or cannot always be easily distinguished, in the image acquired for the observed scene. Infrared image enhancement has played an important role in the development of infrared computer vision, image processing, and non-destructive testing, among other fields. This thesis addresses infrared image enhancement from two angles: the processing of a single infrared image in the hybrid space-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be regarded as a continuation of the single infrared image enhancement model: it combines infrared and visible images into a single image that represents and enhances all the useful information and features of the source images, since a single image cannot contain all the relevant or available information because of the limitations of any single imaging sensor. We review the development of infrared image enhancement techniques, then focus on single infrared image enhancement and propose a hybrid-domain enhancement scheme with an improved fuzzy threshold evaluation method, which yields higher image quality and improves human visual perception. The infrared-visible fusion techniques are built on an accurate registration of the source images acquired by the different sensors. The SURF-RANSAC algorithm is used for registration throughout this work, which produces very accurately registered images and additional benefits for the fusion processing. For infrared-visible image fusion, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as a reference for the subsequent fusion approaches. A joint fusion approach involving the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which leads to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach employing compressed sensing (CS) and total variation (TV) to sparsely sample the coefficients and accurately reconstruct the fused coefficients is proposed, which obtains much better fusion results through pre-enhancement of the infrared image and by reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, leading to better results more quickly and efficiently.
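
Since NSCT implementations are not part of standard Python libraries, the sketch below illustrates the same fusion principle with an ordinary 2-D wavelet transform (PyWavelets): average the low-frequency approximations and keep the stronger detail coefficient from either source image. It is a simplified stand-in for the thesis's NSCT-based schemes, with assumed function names and parameters.

```python
import numpy as np
import pywt

def fuse_wavelet(ir_img, vis_img, wavelet="db2", levels=3):
    """Illustrative multiresolution fusion of an infrared and a visible image
    (same size, already registered). Fusion rule: average the approximation
    coefficients, keep the larger-magnitude detail coefficient per position."""
    c_ir = pywt.wavedec2(ir_img.astype(float), wavelet, level=levels)
    c_vis = pywt.wavedec2(vis_img.astype(float), wavelet, level=levels)

    fused = [(c_ir[0] + c_vis[0]) / 2.0]                 # approximation: average
    for d_ir, d_vis in zip(c_ir[1:], c_vis[1:]):         # detail bands per level
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d_ir, d_vis)))
    return pywt.waverec2(fused, wavelet)
```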

Relevance: 30.00%

Abstract:

The evolution of wireless communication systems leads to Dynamic Spectrum Allocation for Cognitive Radio, which requires reliable spectrum sensing techniques. Among the spectrum sensing methods proposed in the literature, those that exploit cyclostationary characteristics of radio signals are particularly suitable for communication environments with low signal-to-noise ratios, or with non-stationary noise. However, such methods have high computational complexity that directly raises the power consumption of devices which often have very stringent low-power requirements. We propose a strategy for cyclostationary spectrum sensing with reduced energy consumption. This strategy is based on the principle that p processors working at slower frequencies consume less power than a single processor for the same execution time. We devise a strict relation between the energy savings and common parallel system metrics. The results of simulations show that our strategy promises very significant savings in actual devices.
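
The principle invoked above can be made concrete with a standard CMOS power model: if dynamic power scales roughly as f³ (voltage scaled with frequency) and the work parallelizes perfectly, then p processors running at f/p finish in the same time while using about 1/p² of the energy. The exponent and the perfect-parallelization assumption are idealizations for illustration, not the paper's exact model.

```python
def parallel_energy_ratio(p, alpha=3.0):
    """Energy of p processors at frequency f/p relative to one processor at f,
    for the same total execution time.

    Assumes dynamic power scales as f**alpha (alpha ~ 3 when supply voltage is
    scaled with frequency) and perfectly parallelizable work.
    """
    return p * (1.0 / p) ** alpha        # = p**(1 - alpha), i.e. 1/p**2 for alpha=3

for p in (1, 2, 4, 8):
    print(p, parallel_energy_ratio(p))   # 1.0, 0.25, 0.0625, ~0.016
```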

Relevance: 30.00%

Abstract:

The mammalian binaural cue of interaural time difference (ITD) and cross-correlation have long been used to determine the point of origin of a sound source. The ITD can be defined as the difference between the times at which a sound from a single location arrives at each individual ear [1]. From this time difference, the brain can calculate the angle of the sound source in relation to the head [2]. Cross-correlation compares the similarity of the two channels of a binaural waveform, producing the time lag or offset required for both channels to be in phase with one another. This offset corresponds to the maximum value produced by the cross-correlation function and can be used to determine the ITD and thus the azimuthal angle θ of the original sound source. However, in indoor environments, cross-correlation is known to have problems with sound reflections and reverberation. It also has difficulty localising short-term complex noises that occur within a longer-duration waveform, i.e. in the presence of background noise: because the cross-correlation algorithm processes the entire waveform, the short-term complex noise can effectively be ignored. This paper presents a thresholding technique that improves the localisation of short-term complex sounds in the midst of background noise. To evaluate the technique, twenty-five sounds consisting of hand-claps, finger-clicks, and speech were recorded in a dynamic and echoic environment. The proposed technique was compared to the regular cross-correlation function on the same waveforms, averaging the azimuthal angles determined for each individual sample. The sound localisation performance over all twenty-five samples was 44% with standard cross-correlation and 84% with the cross-correlation technique with thresholding. These results show that the proposed technique is very successful for localising short-term complex sounds in the midst of background noise in a dynamic and echoic indoor environment.
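
A minimal version of the cross-correlation localisation step (without the paper's thresholding) might look like the sketch below; the sensor spacing, the far-field geometry, and the function interface are assumptions for illustration.

```python
import numpy as np

def azimuth_from_itd(left, right, fs, mic_distance=0.15, speed_of_sound=343.0):
    """Estimate the azimuth of a sound source from a two-channel recording
    using cross-correlation (a basic sketch, without thresholding).

    left, right  : 1-D arrays, the two microphone/ear channels
    fs           : sampling rate [Hz]
    mic_distance : spacing between the two sensors [m] (assumed value)
    """
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)      # lag (samples) of best alignment
    itd = lag / fs                                # interaural time difference [s]
    # Far-field approximation: path difference = d * sin(theta)
    s = np.clip(itd * speed_of_sound / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s))
```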

Relevance: 30.00%

Abstract:

In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single-image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes (“weak” and “strong”) according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average, and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24, and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency while preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from the largest to the smallest window size) of 71% for strong fronts and 120% for weak fronts. Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction through preliminary smoothing of the data considerably improves frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those from 1 km data (using a standard median 3 × 3 convolution) in terms of detectability, length, and location. This method using 4 km data is easily applicable to large regions or at the global scale, with far fewer constraints on data manipulation and processing time relative to 1 km data.
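
The preliminary smoothing and the gradient-based weak/strong classification described above can be sketched as follows; the filter-to-kernel mapping and the function names are illustrative assumptions, not the SIED/CMW implementation itself.

```python
import numpy as np
from scipy import ndimage

def preliminary_smooth(sst, kernel=5, filter_type="median"):
    """Preliminary smoothing of a sea surface temperature field before front
    detection, with the filter families tested in the study.

    sst    : 2-D array of SST values (NaN-free for this simple sketch)
    kernel : odd kernel size in pixels, e.g. 3, 5, 7 or 9
    """
    if filter_type == "median":
        return ndimage.median_filter(sst, size=kernel)
    if filter_type == "average":
        return ndimage.uniform_filter(sst, size=kernel)
    if filter_type == "gaussian":
        return ndimage.gaussian_filter(sst, sigma=kernel / 4.0)  # rough mapping
    raise ValueError("unknown filter type")

def thermal_gradient(sst_smoothed, pixel_km=4.0):
    """Gradient magnitude (degrees C per km), usable to class fronts as weak
    or strong against chosen thresholds."""
    gy, gx = np.gradient(sst_smoothed, pixel_km)
    return np.hypot(gx, gy)
```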
