Abstract:
In recent years, wireless communication infrastructures have been widely deployed for both personal and business applications. The IEEE 802.11 series of Wireless Local Area Network (WLAN) standards attracts a great deal of attention due to its low cost and high data rates. Wireless ad hoc networks that use IEEE 802.11 standards are a focus of recent network research, and designing appropriate Media Access Control (MAC) layer protocols is one of the key issues for such networks.
Existing wireless applications typically use omni-directional antennas, whose gain is the same in all directions. Due to the nature of the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 standards, only one of the one-hop neighbors can send data at a time. Nodes other than the sender and the receiver must be either idle or listening; otherwise collisions can occur. The downside of omni-directionality is that the spatial reuse ratio is low and the capacity of the network is considerably limited.
Directional antennas have therefore been introduced to improve spatial reuse. A directional antenna offers the following benefits: it can improve transport capacity by reducing the interference of its directional main lobe; it can increase coverage range due to a higher SINR (Signal-to-Interference-plus-Noise Ratio), i.e., better connectivity can be achieved at the same power consumption; and power usage can be reduced, i.e., a transmitter can lower its power consumption for the same coverage.
To exploit the advantages of directional antennas, we propose a relay-enabled MAC protocol. Two relay nodes are chosen to forward data when the channel condition of the direct link from the sender to the receiver is poor. The two relay nodes can transfer data at the same time, and a pipelined data transmission can be achieved by using directional antennas.
The throughput can be improved significantly by introducing the relay-enabled MAC protocol.
Despite these strong points, directional antennas also have some clear drawbacks, such as the hidden terminal and deafness problems and the requirement of maintaining location information for each node. Therefore, an omni-directional antenna should be used in some situations. The combined use of omni-directional and directional antennas leads to the problem of configuring heterogeneous antennas, i.e., given a network topology and a traffic pattern, we need to find a trade-off between using omni-directional and directional antennas to obtain better network performance for this configuration.
Directly and mathematically establishing the relationship between the network performance and the antenna configuration is extremely difficult, if not intractable. Therefore, in this research, we propose several clustering-based methods to obtain approximate solutions for the heterogeneous antenna configuration problem, which can improve network performance significantly.
Our proposed methods consist of two steps. The first step (clustering links) clusters the links into different groups based on a matrix-based system model; after clustering, the links in the same group have similar neighborhood nodes and will use the same type of antenna. The second step (labeling links) decides the type of antenna for each group: some groups of links will use directional antennas and others will adopt omni-directional antennas. Experiments are conducted to compare the proposed methods with existing methods, and the results demonstrate that our clustering-based methods can improve network performance significantly.
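The two-step procedure described above (cluster links by neighborhood similarity, then label each group with an antenna type) can be sketched roughly as follows. This is an illustrative sketch, not the authors' algorithm: the binary neighborhood matrix, the k-means-style grouping, and the interference-based labeling rule are all assumptions.

```python
import numpy as np

def cluster_links(neighborhood, n_groups=2, iters=50, seed=0):
    """Step 1: group links whose rows of a link-neighborhood matrix are
    similar (simple k-means; the thesis's matrix model is assumed, not known)."""
    rng = np.random.default_rng(seed)
    centers = neighborhood[rng.choice(len(neighborhood), n_groups, replace=False)]
    for _ in range(iters):
        # assign each link to the nearest group center
        d = np.linalg.norm(neighborhood[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # recompute each center as the mean of its assigned links
        for g in range(n_groups):
            if np.any(labels == g):
                centers[g] = neighborhood[labels == g].mean(axis=0)
    return labels

def label_groups(labels, interference):
    """Step 2: assign an antenna type per group; a hypothetical rule that
    gives high-interference groups directional antennas."""
    antenna = {}
    for g in set(labels.tolist()):
        mean_intf = interference[labels == g].mean()
        antenna[g] = "directional" if mean_intf > interference.mean() else "omni"
    return antenna

# toy example: 6 links described by binary neighborhood-overlap rows
nbr = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [1, 0, 0, 0],
                [0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 0, 1]], float)
intf = np.array([0.9, 0.8, 0.7, 0.2, 0.1, 0.2])
labels = cluster_links(nbr, n_groups=2)
print(label_groups(labels, intf))
```

Links with identical neighborhoods always land in the same group, so they end up using the same antenna type, mirroring the clustering step described in the abstract.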
Abstract:
Recently, wireless network technology has grown at such a pace that scientific research becomes practical reality in a very short time span. One mobile system that features high data rates and an open network architecture is 4G. Currently, the research community and industry in the field of wireless networks are working on possible choices of solutions for the 4G system. One of the most important characteristics of future 4G mobile systems is the ability to guarantee reliable communications at high data rates, in addition to high efficiency in spectrum usage. In mobile wireless communication networks, one important factor is the coverage of large geographical areas. In 4G systems, a hybrid satellite/terrestrial network is crucial to providing users with coverage wherever needed, so subscribers require a reliable satellite link to access their services in remote locations where a terrestrial infrastructure is unavailable. The results show that a good modulation and access technique is also required in order to transmit high data rates over satellite links to mobile users. The dissertation proposes the use of OFDM (Orthogonal Frequency Division Multiplexing) for the satellite link, increasing the time diversity. This technique allows for an increase of the data rate, as primarily required by multimedia applications, and also makes optimal use of the available bandwidth. In addition, this dissertation approaches the use of cooperative satellite communications for hybrid satellite/terrestrial networks: by using this technique, the satellite coverage can be extended to areas where there is no direct link to the satellite. The issue of cooperative satellite communications is addressed through a new algorithm that forwards the received data from the fixed node to the mobile node. This algorithm is efficient because it avoids unnecessary transmissions, basing its decisions on signal-to-noise ratio (SNR) measurements.
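The forwarding decision described above can be sketched as a simple SNR-threshold rule. The abstract does not specify the algorithm's internals, so the threshold value and the decision structure below are assumptions for illustration.

```python
def should_forward(direct_snr_db, relay_snr_db, threshold_db=6.0):
    """Decide whether the fixed node should relay data to the mobile node.
    Forward only when the mobile's direct satellite link is below a usable
    SNR and the fixed-to-mobile relay link is good enough; this avoids
    unnecessary transmissions (threshold_db is a hypothetical value)."""
    direct_ok = direct_snr_db >= threshold_db
    relay_ok = relay_snr_db >= threshold_db
    if direct_ok:
        return False      # mobile can decode the satellite signal directly
    return relay_ok       # relay only if the terrestrial hop is usable

# the mobile is shadowed from the satellite but near the fixed node:
print(should_forward(direct_snr_db=2.0, relay_snr_db=15.0))  # True
# both links are poor: no transmission is made at all
print(should_forward(direct_snr_db=2.0, relay_snr_db=3.0))   # False
```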
Abstract:
A number of studies in the areas of Biomedical Engineering and Health Sciences have employed machine learning tools to develop methods capable of identifying patterns in different sets of data. Despite its eradication in many countries of the developed world, Hansen’s disease still affects a large part of the population in countries such as India and Brazil. In this context, this research proposes to develop a method that makes it possible to understand how Hansen’s disease affects facial muscles. Using surface electromyography, a system was adapted to capture signals from the largest possible number of facial muscles. We first surveyed the literature to learn how researchers around the globe have been working with diseases that affect the peripheral nervous system and how electromyography has contributed to the understanding of these diseases. From these data, a protocol was proposed to collect facial surface electromyographic (sEMG) signals with a high signal-to-noise ratio. After collecting the signals, we looked for a method that would enable the visualization of this information in a way that guarantees that the method used presented satisfactory results. After verifying the method's efficiency, we sought to understand which information could be extracted from the electromyographic signal representing the collected data. Since studies demonstrating which information could contribute to a better understanding of this pathology were not found in the literature, parameters of amplitude, frequency and entropy were extracted from the signal, and feature selection was performed to identify the features that best distinguish a healthy individual from a pathological one.
Next, we sought to identify the classifier that best discriminates individuals from different groups, as well as the set of parameters of this classifier that yields the best outcome. The protocol proposed in this study and the adaptation with disposable electrodes available on the market proved effective and capable of being used in other studies that intend to collect facial electromyography data. The feature selection algorithm also showed that not all of the features extracted from the signal are significant for classification, with some more relevant than others. The Support Vector Machine (SVM) classifier proved efficient when an adequate kernel function was matched to the muscle from which information was to be extracted: each investigated muscle presented different results when the classifier used linear, radial and polynomial kernel functions. Even though we have focused on Hansen’s disease, the method applied here can be used to study facial electromyography in other pathologies.
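The amplitude, frequency and entropy parameters mentioned above can be illustrated with common sEMG features. The thesis's exact definitions are not given in the abstract, so RMS, median frequency, and histogram entropy are assumed stand-ins.

```python
import numpy as np

def semg_features(signal, fs):
    """Extract one amplitude, one frequency, and one entropy feature from an
    sEMG window (assumed stand-ins for the thesis's unspecified parameters)."""
    # amplitude: root-mean-square of the window
    rms = np.sqrt(np.mean(signal**2))
    # frequency: median frequency of the power spectrum
    spectrum = np.abs(np.fft.rfft(signal))**2
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    cum = np.cumsum(spectrum)
    median_freq = freqs[np.searchsorted(cum, cum[-1] / 2)]
    # entropy: Shannon entropy of the normalized amplitude histogram
    hist, _ = np.histogram(signal, bins=32)
    p = hist[hist > 0] / hist.sum()
    entropy = -np.sum(p * np.log2(p))
    return {"rms": rms, "median_freq": median_freq, "entropy": entropy}

# toy window: a 100 Hz tone with additive noise, sampled at 1 kHz
fs = 1000
t = np.arange(fs) / fs
rng = np.random.default_rng(1)
window = np.sin(2 * np.pi * 100 * t) + 0.1 * rng.standard_normal(fs)
feats = semg_features(window, fs)
print(feats)
```

Feature vectors of this kind, computed per muscle, would then feed the feature selection and SVM classification stages the abstract describes.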
Abstract:
In this work, mathematical solutions were developed, taking maximum permissible intensity values as parameters, for the interference analysis of electric and magnetic fields, and two virtual computer systems supporting the CDMA and WCDMA families of technologies were produced. For the first family, computational resources were developed to solve electric and magnetic field calculations and power densities at radio base stations using CDMA technology in the 800 MHz band, taking into account the permissible values referenced by the International Commission on Non-Ionizing Radiation Protection (ICNIRP). The first family is divided into two calculation segments carried out in virtual operation. The first segment computes the interference field radiated by the base station from input information such as radio channel power, antenna gain, number of radio channels, operating frequency, cable losses, directional attenuation, minimum distance, and reflections. This computing system allows the following calculated values to be obtained quickly, without the need to deploy measurement instruments: effective radiated power; sector power density; electric field in the sector; magnetic field in the sector; magnetic flux density; and the point of maximum permissible exposure for electric field and power density. The results are shown in charts for clarity of viewing the power density in the sector, as well as for defining the coverage area. The computer module also includes specification folders for the antennas, cables and towers used in cellular telephony from the following manufacturers: RFS World, Andrew, Kathrein and BRASILSAT. Many Internet links are provided to supplement the specifications of cables, antennas, etc. The second segment of the first family works with more variables, seeking to perform calculations quickly and safely to assist in obtaining the radio signal loss produced by the base station (ERB).
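The far-field quantities listed above (effective radiated power, power density, electric and magnetic field) have standard free-space forms, sketched below with hypothetical inputs. A real ERB assessment would also account for reflections and antenna patterns, as the text notes.

```python
import math

def erb_far_field(tx_power_w, antenna_gain_dbi, cable_loss_db, distance_m):
    """Free-space estimates of the quantities the first segment computes:
    EIRP, power density S = EIRP/(4*pi*d^2), and E = sqrt(30*EIRP)/d."""
    eirp_w = tx_power_w * 10**((antenna_gain_dbi - cable_loss_db) / 10)
    s_w_m2 = eirp_w / (4 * math.pi * distance_m**2)   # power density, W/m^2
    e_v_m = math.sqrt(30 * eirp_w) / distance_m       # electric field, V/m
    h_a_m = e_v_m / 377.0                             # H field via free-space impedance
    return {"eirp_w": eirp_w, "s_w_m2": s_w_m2, "e_v_m": e_v_m, "h_a_m": h_a_m}

# hypothetical 800 MHz sector: 20 W per channel, 15 dBi antenna, 3 dB cable loss
fields = erb_far_field(tx_power_w=20.0, antenna_gain_dbi=15.0,
                       cable_loss_db=3.0, distance_m=50.0)
print(fields)
# ICNIRP general-public reference level at 800 MHz is f/200 = 4 W/m^2
print("below ICNIRP public limit:", fields["s_w_m2"] < 800 / 200)
```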
This module displays screens representing two propagation systems, denominated "A" and "B". With propagation "A", radio signal attenuation calculations are obtained for urban, dense urban, suburban, and open rural area models. The reflection calculations include the reflection coefficients, the standing wave ratio, the return loss, and the reflected power ratio, as well as the signal loss due to impedance mismatch. With propagation "B", radio signal losses are obtained for line-of-sight and non-line-of-sight surveys, along with the effective area, the power density, the received power, the coverage radius, the conversion levels, and the conversion gain of radiant systems. The second family of the virtual computing system consists of 7 modules, of which 5 are geared towards the design of WCDMA technology and 2 towards the calculation of telephone traffic serving CDMA and WCDMA. It includes a portfolio of the radiant systems used on site. In virtual operation, module 1 computes: frequency reuse distance, channel capacity with and without noise, Doppler frequency, modulation rate and channel efficiency. Module 2 computes the cell area, thermal noise, noise power (dB), noise figure, signal-to-noise ratio, and bit power (dBm). Module 3 calculates: breakpoint, processing gain (dB), loss in the BTS space, noise power (W), chip period and frequency reuse factor. Module 4 scales effective radiated power, sectorization gain, voice activity and load effect. Module 5 calculates processing gain (Hz/bps), bit time, and bit energy (Ws). Module 6 deals with the first part of telephone traffic and scales: traffic volume, occupancy intensity, average occupancy time, traffic intensity, completed calls, and congestion. Module 7 deals with the second part of telephone traffic and allows calculating completed and uncompleted calls in the busy hour.
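A few of the module quantities above have textbook closed forms. The sketch below computes processing gain (chip rate over bit rate), bit time and bit energy (Eb = S·Tb), and traffic intensity in erlangs; all inputs are hypothetical.

```python
import math

# processing gain of a spread-spectrum link (module 3/5 quantities)
chip_rate = 3.84e6           # WCDMA chip rate, chips/s
bit_rate = 12.2e3            # hypothetical voice service bit rate, bits/s
pg = chip_rate / bit_rate                # dimensionless (Hz/bps)
pg_db = 10 * math.log10(pg)              # processing gain in dB

# bit time and bit energy (module 5): Eb = S * Tb for received power S
rx_power_w = 1e-12
bit_time_s = 1 / bit_rate
bit_energy_ws = rx_power_w * bit_time_s

# traffic intensity in erlangs (module 6): calls/hour * mean holding time
calls_per_hour = 120
mean_hold_s = 90
traffic_erl = calls_per_hour * mean_hold_s / 3600

print(round(pg_db, 1), bit_energy_ws, traffic_erl)
```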
Field tests of mobile network performance were performed for the calculation of data relating to: CINP, CPI, RSRP, RSRQ, EARFCN, Drop Call, Block Call, Pilot, Data BLER, RSCP, Short Call, Long Call and Data Call; and ECIO for Short Call, Long Call and Data Call Throughput. Surveys of the electric and magnetic fields at an ERB were also conducted, seeking to observe the degree of exposure to non-ionizing radiation to which the general public and occupational personnel are exposed. The results were compared with the permissible values for health endorsed by the ICNIRP and CENELEC.
Abstract:
Compensation of the detrimental impacts of nonlinearity on long-haul wavelength-division-multiplexed system performance is discussed, and the differences between transmitter, receiver and in-line compensation are analyzed. We demonstrate that ideal compensation of nonlinear noise could result in an increase in the signal-to-noise ratio (measured in dB) of 50%, and that reaches may be more than doubled for higher-order modulation formats. The influence of parametric noise amplification is discussed in detail, showing how an increased number of optical phase conjugators may further increase the received signal-to-noise ratio. Finally, the impact of practical real-world system imperfections, such as polarization mode dispersion, is outlined.
Abstract:
This work looks at the effect on mid-gap interface state defect density estimates for In0.53Ga0.47As semiconductor capacitors when different AC voltage amplitudes are selected for a fixed voltage bias step size (100 mV) during room temperature only electrical characterization. Results are presented for Au/Ni/Al2O3/In0.53Ga0.47As/InP metal–oxide–semiconductor capacitors with (1) n-type and p-type semiconductors, (2) different Al2O3 thicknesses, (3) different In0.53Ga0.47As surface passivation concentrations of ammonium sulphide, and (4) different transfer times to the atomic layer deposition chamber after passivation treatment on the semiconductor surface—thereby demonstrating a cross-section of device characteristics. The authors set out to determine the importance of the AC voltage amplitude selection on the interface state defect density extractions and whether this selection has a combined effect with the oxide capacitance. These capacitors are prototypical of the type of gate oxide material stacks that could form equivalent metal–oxide–semiconductor field-effect transistors beyond the 32 nm technology node. The authors do not attempt to achieve the best scaled equivalent oxide thickness in this work, as our focus is on accurately extracting device properties that will allow the investigation and reduction of interface state defect densities at the high-k/III–V semiconductor interface. The operating voltage for future devices will be reduced, potentially leading to an associated reduction in the AC voltage amplitude, which will force a decrease in the signal-to-noise ratio of electrical responses and could therefore result in less accurate impedance measurements. 
A concern thus arises regarding the accuracy of the electrical property extractions using such impedance measurements for future devices, particularly in relation to the mid-gap interface state defect density estimated from the conductance method and from the combined high–low frequency capacitance–voltage method. The authors apply a fixed voltage step of 100 mV for all voltage sweep measurements at each AC frequency. Each of these measurements is repeated 15 times for the equidistant AC voltage amplitudes between 10 mV and 150 mV. This provides the desired AC voltage amplitude to step size ratios from 1:10 to 3:2. Our results indicate that, although the selection of the oxide capacitance is important both to the success and accuracy of the extraction method, the mid-gap interface state defect density extractions are not overly sensitive to the AC voltage amplitude employed regardless of what oxide capacitance is used in the extractions, particularly in the range from 50% below the voltage sweep step size to 50% above it. Therefore, the use of larger AC voltage amplitudes in this range to achieve a better signal-to-noise ratio during impedance measurements for future low operating voltage devices will not distort the extracted interface state defect density.
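The amplitude-to-step ratios quoted above follow directly from the stated sweep settings; a quick enumeration (values taken from the text) confirms the 1:10 to 3:2 range.

```python
from fractions import Fraction

step_mv = 100                        # fixed voltage sweep step size
amplitudes_mv = range(10, 151, 10)   # 15 equidistant AC amplitudes, 10-150 mV
ratios = [Fraction(a, step_mv) for a in amplitudes_mv]
print(len(ratios), ratios[0], ratios[-1])  # 15 1/10 3/2
```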
Abstract:
X-ray computed tomography (CT) imaging constitutes one of the most widely used diagnostic tools in radiology today, with nearly 85 million CT examinations performed in the U.S. in 2011. CT imparts a relatively high amount of radiation dose to the patient compared to other x-ray imaging modalities, and as a result of this fact, coupled with its popularity, CT is currently the single largest source of medical radiation exposure to the U.S. population. For this reason, there is a critical need to optimize CT examinations such that the dose is minimized while the quality of the CT images is not degraded. This optimization can be difficult to achieve due to the relationship between dose and image quality. All things being held equal, reducing the dose degrades image quality and can impact the diagnostic value of the CT examination.
A recent push from the medical and scientific community towards using lower doses has spawned new dose reduction technologies such as automatic exposure control (i.e., tube current modulation) and iterative reconstruction algorithms. In theory, these technologies could allow for scanning at reduced doses while maintaining the image quality of the exam at an acceptable level. Therefore, there is a scientific need to establish the dose reduction potential of these new technologies in an objective and rigorous manner. Establishing these dose reduction potentials requires precise and clinically relevant metrics of CT image quality, as well as practical and efficient methodologies to measure such metrics on real CT systems. The currently established methodologies for assessing CT image quality are not appropriate to assess modern CT scanners that have implemented those aforementioned dose reduction technologies.
Thus the purpose of this doctoral project was to develop, assess, and implement new phantoms, image quality metrics, analysis techniques, and modeling tools that are appropriate for image quality assessment of modern clinical CT systems. The project developed image quality assessment methods in the context of three distinct paradigms: (a) uniform phantoms, (b) textured phantoms, and (c) clinical images.
The work in this dissertation used the “task-based” definition of image quality. That is, image quality was broadly defined as the effectiveness by which an image can be used for its intended task. Under this definition, any assessment of image quality requires three components: (1) a well-defined imaging task (e.g., detection of subtle lesions), (2) an “observer” to perform the task (e.g., a radiologist or a detection algorithm), and (3) a way to measure the observer’s performance in completing the task at hand (e.g., detection sensitivity/specificity).
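This task-based definition can be made concrete with a simple mathematical observer. The sketch below implements a non-prewhitening matched-filter detectability index on toy images; the signal profile, noise model, and sample counts are all hypothetical.

```python
import numpy as np

def npw_dprime(signal, noisy_present, noisy_absent):
    """Non-prewhitening matched filter: use the known signal as the template,
    score each image by its inner product with the template, and compute the
    detectability index d' from the two score distributions."""
    t = signal.ravel()
    scores_p = noisy_present.reshape(len(noisy_present), -1) @ t
    scores_a = noisy_absent.reshape(len(noisy_absent), -1) @ t
    return (scores_p.mean() - scores_a.mean()) / np.sqrt(
        0.5 * (scores_p.var() + scores_a.var()))

# toy task: detect a Gaussian blob in white noise (parameters are made up)
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 32)
blob = 2.0 * np.exp(-(x[:, None]**2 + x[None, :]**2) / 0.05)
noise = lambda n: rng.standard_normal((n, 32, 32))
d = npw_dprime(blob, blob + noise(500), noise(500))
print(round(d, 1))
```

Here the three components are explicit: the task (blob detection), the observer (the matched filter), and the performance measure (d').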
First, this task-based image quality paradigm was implemented using a novel multi-sized phantom platform (with uniform background) developed specifically to assess modern CT systems (Mercury Phantom, v3.0, Duke University). A comprehensive evaluation was performed on a state-of-the-art CT system (SOMATOM Definition Force, Siemens Healthcare) in terms of noise, resolution, and detectability as a function of patient size, dose, tube energy (i.e., kVp), automatic exposure control, and reconstruction algorithm (i.e., Filtered Back-Projection – FBP vs Advanced Modeled Iterative Reconstruction – ADMIRE). A mathematical observer model (i.e., computer detection algorithm) was implemented and used as the basis of image quality comparisons. It was found that image quality increased with increasing dose and decreasing phantom size. The CT system exhibited nonlinear noise and resolution properties, especially at very low doses, large phantom sizes, and for low-contrast objects. Objective image quality metrics generally increased with increasing dose and ADMIRE strength, and with decreasing phantom size. The ADMIRE algorithm could offer comparable image quality at reduced doses or improved image quality at the same dose (increase in detectability index by up to 163% depending on iterative strength). The use of automatic exposure control resulted in more consistent image quality with changing phantom size.
Based on those results, the dose reduction potential of ADMIRE was further assessed specifically for the task of detecting small (<=6 mm) low-contrast (<=20 HU) lesions. A new low-contrast detectability phantom (with uniform background) was designed and fabricated using a multi-material 3D printer. The phantom was imaged at multiple dose levels and images were reconstructed with FBP and ADMIRE. Human perception experiments were performed to measure the detection accuracy from FBP and ADMIRE images. It was found that ADMIRE had equivalent performance to FBP at 56% less dose.
Using the same image data as the previous study, a number of different mathematical observer models were implemented to assess which models would result in image quality metrics that best correlated with human detection performance. The models included naïve simple metrics of image quality such as contrast-to-noise ratio (CNR) and more sophisticated observer models such as the non-prewhitening matched filter observer model family and the channelized Hotelling observer model family. It was found that non-prewhitening matched filter observers and the channelized Hotelling observers both correlated strongly with human performance. Conversely, CNR was found to not correlate strongly with human performance, especially when comparing different reconstruction algorithms.
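CNR, the simplest metric in the comparison above, is worth stating explicitly given its weak correlation with human performance; the ROI values below are hypothetical.

```python
import numpy as np

def cnr(lesion_roi, background_roi):
    """Contrast-to-noise ratio: |mean difference| over background noise.
    A single scalar that ignores spatial frequency content, which is one
    reason it can fail to track human detection performance across
    reconstruction algorithms."""
    return abs(lesion_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(0)
background = 40 + 10 * rng.standard_normal(1000)  # HU samples, hypothetical
lesion = background[:200] + 20                    # +20 HU lesion contrast
print(round(cnr(lesion, background), 2))
```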
The uniform background phantoms used in the previous studies provided a good first-order approximation of image quality. However, due to their simplicity and the complexity of iterative reconstruction algorithms, it is possible that such phantoms are not fully adequate to assess the clinical impact of iterative algorithms, because patient images obviously do not have smooth uniform backgrounds. To test this hypothesis, two textured phantoms (classified as gross texture and fine texture) and a uniform phantom of similar size were built and imaged on a SOMATOM Flash scanner (Siemens Healthcare). Images were reconstructed using FBP and Sinogram Affirmed Iterative Reconstruction (SAFIRE). Using an image subtraction technique, quantum noise was measured in all images of each phantom. It was found that in FBP, the noise was independent of the background (textured vs uniform). However, for SAFIRE, noise increased by up to 44% in the textured phantoms compared to the uniform phantom. As a result, the noise reduction from SAFIRE was found to be up to 66% in the uniform phantom but as low as 29% in the textured phantoms. Based on this result, it is clear that further investigation was needed to understand the impact that background texture has on image quality when iterative reconstruction algorithms are used.
To further investigate this phenomenon with more realistic textures, two anthropomorphic textured phantoms were designed to mimic lung vasculature and fatty soft tissue texture. The phantoms (along with a corresponding uniform phantom) were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Scans were repeated a total of 50 times in order to obtain ensemble statistics of the noise. A novel method of estimating the noise power spectrum (NPS) from irregularly shaped ROIs was developed. It was found that SAFIRE images had highly locally non-stationary noise patterns, with pixels near edges having higher noise than pixels in more uniform regions. Compared to FBP, SAFIRE images had 60% less noise on average in uniform regions; for edge pixels, noise was between 20% higher and 40% lower. The noise texture (i.e., NPS) was also highly dependent on the background texture for SAFIRE. Therefore, it was concluded that quantum noise properties in uniform phantoms are not representative of those in patients for iterative reconstruction algorithms, and texture should be considered when assessing the image quality of iterative algorithms.
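The regular-ROI version of NPS estimation from repeated scans can be sketched as follows (the thesis's irregular-ROI method is not detailed in the abstract): subtract the ensemble-mean image to isolate noise, then average squared DFT magnitudes.

```python
import numpy as np

def nps_2d(repeats, pixel_mm=0.5):
    """Estimate the 2D noise power spectrum from repeated scans of the same
    object: subtract the ensemble-mean image to isolate noise, then average
    |DFT|^2 over realizations (standard regular-ROI estimator)."""
    noise = repeats - repeats.mean(axis=0, keepdims=True)
    n_rep, ny, nx = noise.shape
    dft2 = np.abs(np.fft.fft2(noise, axes=(1, 2)))**2
    # normalization: pixel area over pixel count, averaged over repeats
    return dft2.mean(axis=0) * pixel_mm**2 / (ny * nx)

# 50 repeats of a flat object with sigma = 20 HU white noise (toy scans)
rng = np.random.default_rng(0)
scans = 100 + 20 * rng.standard_normal((50, 64, 64))
nps = nps_2d(scans)
# sanity check: integrating the NPS over frequency recovers the variance
variance = nps.sum() / ((64 * 0.5) * (64 * 0.5))
print(round(variance, 1))  # close to sigma^2 = 400
```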
To move beyond assessing noise properties in textured phantoms towards assessing detectability, a series of new phantoms were designed specifically to measure low-contrast detectability in the presence of background texture. The textures used were optimized to match the texture in liver regions of actual patient CT images using a genetic algorithm, and the so-called “Clustered Lumpy Background” texture synthesis framework was used to generate the modeled texture. Three textured phantoms and a corresponding uniform phantom were fabricated with a multi-material 3D printer and imaged on the SOMATOM Flash scanner. Images were reconstructed with FBP and SAFIRE and analyzed using a multi-slice channelized Hotelling observer to measure detectability and the dose reduction potential of SAFIRE based on the uniform and textured phantoms. It was found that at the same dose, the improvement in detectability from SAFIRE (compared to FBP) was higher when measured in a uniform phantom compared to textured phantoms.
The final trajectory of this project aimed at developing methods to mathematically model lesions, as a means to help assess image quality directly from patient images. The mathematical modeling framework is first presented. The models describe a lesion’s morphology in terms of size, shape, contrast, and edge profile as an analytical equation. The models can be voxelized and inserted into patient images to create so-called “hybrid” images. These hybrid images can then be used to assess detectability or estimability with the advantage that the ground truth of the lesion morphology and location is known exactly. Based on this framework, a series of liver lesions, lung nodules, and kidney stones were modeled based on images of real lesions. The lesion models were virtually inserted into patient images to create a database of hybrid images to go along with the original database of real lesion images. ROI images from each database were assessed by radiologists in a blinded fashion to determine the realism of the hybrid images. It was found that the radiologists could not readily distinguish between real and virtual lesion images (area under the ROC curve was 0.55). This study provided evidence that the proposed mathematical lesion modeling framework could produce reasonably realistic lesion images.
Based on that result, two studies were conducted which demonstrated the utility of the lesion models. The first study used the modeling framework as a measurement tool to determine how dose and reconstruction algorithm affected the quantitative analysis of liver lesions, lung nodules, and renal stones in terms of their size, shape, attenuation, edge profile, and texture features. The same database of real lesion images used in the previous study was used for this study. That database contained images of the same patient at 2 dose levels (50% and 100%) along with 3 reconstruction algorithms from a GE 750HD CT system (GE Healthcare). The algorithms in question were FBP, Adaptive Statistical Iterative Reconstruction (ASiR), and Model-Based Iterative Reconstruction (MBIR). A total of 23 quantitative features were extracted from the lesions under each condition. It was found that both dose and reconstruction algorithm had a statistically significant effect on the feature measurements. In particular, radiation dose affected five, three, and four of the 23 features (related to lesion size, conspicuity, and pixel-value distribution) for liver lesions, lung nodules, and renal stones, respectively. MBIR significantly affected 9, 11, and 15 of the 23 features (including size, attenuation, and texture features) for liver lesions, lung nodules, and renal stones, respectively. Lesion texture was not significantly affected by radiation dose.
The second study demonstrating the utility of the lesion modeling framework focused on assessing the detectability of very low-contrast liver lesions in abdominal imaging. Specifically, detectability was assessed as a function of dose and reconstruction algorithm. As part of a parallel clinical trial, images from 21 patients were collected at 6 dose levels per patient on a SOMATOM Flash scanner. Subtle liver lesion models (contrast = -15 HU) were inserted into the raw projection data from the patient scans. The projections were then reconstructed with FBP and SAFIRE (strength 5); lesion-less images were also reconstructed. Noise, contrast, CNR, and the detectability index of an observer model (non-prewhitening matched filter) were assessed. It was found that SAFIRE reduced noise by 52%, reduced contrast by 12%, increased CNR by 87%, and increased detectability index by 65% compared to FBP. Further, a 2AFC human perception experiment was performed to assess the dose reduction potential of SAFIRE, which was found to be 22% compared to the standard-of-care dose.
In conclusion, this dissertation provides to the scientific community a series of new methodologies, phantoms, analysis techniques, and modeling tools that can be used to rigorously assess image quality from modern CT systems. Specifically, methods to properly evaluate iterative reconstruction have been developed and are expected to aid in the safe clinical implementation of dose reduction technologies.
Abstract:
Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electrophysiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electrophysiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.
This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs are relatively slow compared to other commercial assistive communication devices, which limits their adoption by the target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.
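The need for repeated measurements comes from coherent averaging: averaging N single-trial epochs leaves the time-locked ERP intact while attenuating background EEG noise by roughly the square root of N. A minimal simulation, using a made-up P300 template and noise level, illustrates this trade-off between accuracy and speed:

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up P300 template: a positive deflection peaking ~300 ms post-stimulus
t = np.linspace(0.0, 0.8, 200)                 # time in seconds
erp = 5.0 * np.exp(-((t - 0.3) / 0.05) ** 2)   # amplitude in microvolts

def average_epochs(n_epochs, noise_std=8.0):
    """Average n noisy single-trial epochs (ERP + background EEG noise)."""
    epochs = erp + rng.normal(0.0, noise_std, size=(n_epochs, t.size))
    return epochs.mean(axis=0)

def peak_snr(x):
    """Crude SNR estimate: peak amplitude over pre-stimulus baseline noise."""
    return x.max() / x[t < 0.15].std()

# SNR grows roughly with sqrt(number of averaged trials) -- the reason
# P300 spellers trade speed for accuracy
print(peak_snr(average_epochs(4)), peak_snr(average_epochs(64)))
```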
In this work, it is hypothesised that building adaptive capabilities into the BCI framework can give the system the flexibility to improve performance by adjusting its parameters in response to changing user inputs. The research addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation, and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events, and the parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed to maximise the information content presented to the user by tuning paradigm parameters that positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of the stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed adaptive strategies in the three areas can significantly improve BCI communication rates, and that the proposed performance prediction method provides a reliable means to pre-assess BCI performance without extensive online testing.
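One common way to exploit language information in target character estimation is a Bayesian update that fuses a language-model prior over characters with classifier evidence from the EEG. Everything below (the five-letter alphabet, the prior, the likelihoods) is a made-up toy instance of that general idea, not the dissertation's actual algorithm:

```python
import numpy as np

# Toy alphabet and a made-up language-model prior P(next character)
letters = list("ABCDE")
prior = np.array([0.40, 0.10, 0.25, 0.15, 0.10])

def update_posterior(prior, likelihoods):
    """One Bayesian update: posterior ∝ prior × P(EEG scores | character)."""
    post = prior * likelihoods
    return post / post.sum()

# Made-up classifier likelihoods from one stimulus sequence; the strong
# response to the third item suggests the target is 'C'
likelihoods = np.array([0.2, 0.1, 0.9, 0.2, 0.1])

posterior = update_posterior(prior, likelihoods)
best = letters[int(np.argmax(posterior))]
print(best, posterior.round(3))   # -> C, despite 'A' having the larger prior
```

In a dynamic-data-collection setting, the same posterior can be updated after each stimulus sequence and the speller can stop early once the maximum posterior exceeds a confidence threshold.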
Abstract:
Purpose: Computed Tomography (CT) is one of the standard diagnostic imaging modalities for the evaluation of a patient's medical condition. In comparison to other imaging modalities such as Magnetic Resonance Imaging (MRI), CT offers fast acquisition, higher spatial resolution, and higher contrast-to-noise ratio (CNR) for bony structures. CT images are presented on a gray scale of values in Hounsfield units (HU), where higher HU values correspond to higher-density materials. High-density materials, such as metal, tend to erroneously increase the HU values around them due to reconstruction software limitations. This corruption of HU values in the presence of metal is referred to as metal artefacts. Hip prostheses, dental fillings, aneurysm clips, and spinal clips are a few examples of clinically relevant metal objects. These implants create artefacts such as beam hardening and photon starvation that distort CT images and degrade image quality. This is of great significance because the distortions may cause improper evaluation of images and inaccurate dose calculation in the treatment planning system. Different algorithms are being developed to reduce these artefacts and improve image quality for both diagnostic and therapeutic purposes. However, very limited information is available about the effect of artefact correction on dose calculation accuracy. This study evaluates the dosimetric effect of metal artefact reduction algorithms on CT images with severe artefacts, using the Gemstone Spectral Imaging (GSI)-based MAR algorithm, the projection-based Metal Artefact Reduction (MAR) algorithm, and the Dual-Energy method.
Materials and Methods: The Gemstone Spectral Imaging (GSI)-based and SMART Metal Artefact Reduction (MAR) algorithms are metal artefact reduction protocols embedded in two different CT scanner models by General Electric (GE), while the Dual-Energy imaging method was developed at Duke University. All three approaches were applied for dosimetric evaluation on CT images with severe metal artefacts. The first part of the research used a water phantom with four iodine syringes. Two sets of plans, multi-arc and single-arc, were designed using the Volumetric Modulated Arc Therapy (VMAT) technique to avoid or minimize influences from high-density objects. The second part of the research used the projection-based MAR algorithm and the Dual-Energy method. Calculated doses (mean, minimum, and maximum) to the planning target volume (PTV) were compared, and the homogeneity index (HI) was calculated.
Results: (1) Without the GSI-based MAR algorithm, the percent error between the mean dose and the measured absolute dose ranged from 3.4% to 5.7% per fraction. With the GSI-based MAR algorithm, the error decreased to 0.09%-2.3% per fraction, a difference of 1.7%-4.2% per fraction between the corrected and uncorrected cases. (2) Differences of 0.1%-3.2% were observed for the maximum dose values, 1.5%-10.4% for the minimum dose, and 1.4%-1.7% for the mean dose. Homogeneity indices (HI) of 0.068-0.065 for the Dual-Energy method and 0.063-0.141 for the projection-based MAR algorithm were also calculated.
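For reference, the two summary quantities reported above can be computed as follows. The dose values here are invented for illustration, and HI is shown under one common definition, (Dmax - Dmin) / Dmean, which may differ from the exact formula used in the study:

```python
def homogeneity_index(d_max, d_min, d_mean):
    """One common homogeneity index: HI = (Dmax - Dmin) / Dmean.
    Lower values indicate a more homogeneous dose to the target."""
    return (d_max - d_min) / d_mean

def percent_error(calculated, measured):
    """Percent error between a calculated dose and the measured absolute dose."""
    return abs(calculated - measured) / measured * 100.0

# Made-up single-fraction doses in Gy, chosen to land in the reported ranges
print(homogeneity_index(2.10, 1.96, 2.00))   # 0.07
print(percent_error(2.07, 2.00))             # 3.5 (%)
```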
Conclusion: (1) The percent error without the GSI-based MAR algorithm may be as high as 5.7%, which undermines the goal of radiation therapy to deliver precise treatment. The GSI-based MAR algorithm is therefore desirable for its better dose calculation accuracy. (2) Based on direct numerical observation, there was no apparent deviation between the mean doses of the different techniques, but deviation was evident in the maximum and minimum doses. The HI for the Dual-Energy method nearly achieved the desirable null value. In conclusion, the Dual-Energy method gave better dose calculation accuracy to the planning target volume (PTV) for images with metal artefacts than the GE MAR algorithm or no correction.
Abstract:
We propose cyclic prefix single carrier full-duplex transmission in amplify-and-forward cooperative spectrum sharing networks to achieve multipath diversity and full-duplex spectral efficiency. Integrating full-duplex transmission into cooperative spectrum sharing systems raises two intrinsic problems: 1) residual loop interference occurs between the transmit and receive antennas at the secondary relays, and 2) the primary users simultaneously suffer interference from the secondary source (SS) and the secondary relays (SRs). Thus, examining the effects of residual loop interference under a peak interference power constraint at the primary users and maximum transmit power constraints at the SS and the SRs is a particularly challenging problem in frequency selective fading channels. To this end, we derive and quantitatively compare lower bounds on the outage probability, and the corresponding asymptotic outage probability, for the max-min relay selection, partial relay selection, and maximum interference relay selection policies in frequency selective fading channels. To facilitate comparison, we provide the corresponding analysis for half-duplex transmission. Our results reveal two complementary regions, termed the signal-to-noise ratio (SNR) dominant region and the residual loop interference dominant region: multipath and spatial diversity are achievable only in the SNR dominant region, whereas the diversity gain collapses to zero in the residual loop interference dominant region.
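The two regions can be reproduced with a toy Monte Carlo experiment. The model below is deliberately simplified to flat Rayleigh fading with a single loop-interference term per relay (the paper's analysis is for frequency selective channels), so it only illustrates the qualitative behaviour:

```python
import numpy as np

rng = np.random.default_rng(2)

def outage_maxmin(snr_db, loop_db, n_relays=3, rate=1.0, trials=100_000):
    """Monte Carlo outage probability for max-min full-duplex relay
    selection over flat Rayleigh fading. Residual loop interference
    degrades the first hop of every relay."""
    snr = 10 ** (snr_db / 10)
    loop = 10 ** (loop_db / 10)
    g1 = rng.exponential(size=(trials, n_relays))   # S -> R_k channel gains
    g2 = rng.exponential(size=(trials, n_relays))   # R_k -> D channel gains
    h = rng.exponential(size=(trials, n_relays))    # loop channel gains
    sinr1 = snr * g1 / (1 + loop * snr * h / snr)   # first-hop SINR
    e2e = np.minimum(sinr1, snr * g2).max(axis=1)   # max-min selection
    return float(np.mean(e2e < 2 ** rate - 1))

# Weak loop interference: outage driven by SNR (SNR dominant region);
# strong loop interference: outage floors (interference dominant region)
print(outage_maxmin(20, -20), outage_maxmin(20, 10))
```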
Abstract:
We investigate the secrecy performance of dual-hop amplify-and-forward (AF) multi-antenna relaying systems over Rayleigh fading channels, taking into account the direct link between the source and destination. In order to exploit the available direct link and the multiple antennas for secrecy improvement, different linear processing schemes at the relay and different diversity combining techniques at the destination are proposed, namely, 1) zero-forcing/maximal ratio combining (ZF/MRC), 2) ZF/selection combining (ZF/SC), 3) maximal ratio transmission/MRC (MRT/MRC), and 4) MRT/selection combining (MRT/SC). For all these schemes, we present new closed-form approximations for the secrecy outage probability. Moreover, we investigate a benchmark scheme, cooperative jamming/ZF (CJ/ZF), for which the secrecy outage probability is obtained in exact closed form. In addition, we present asymptotic secrecy outage expressions for all the proposed schemes in the high signal-to-noise ratio (SNR) regime, in order to characterize key design parameters such as secrecy diversity order and secrecy array gain. The outcomes of this paper can be summarized as follows: a) MRT/MRC and MRT/SC achieve a full diversity order of M + 1, ZF/MRC and ZF/SC achieve a diversity order of M, while CJ/ZF only achieves unit diversity order, where M is the number of antennas at the relay. b) ZF/MRC (ZF/SC) outperforms the corresponding MRT/MRC (MRT/SC) in the low SNR regime, but becomes inferior to it in the high SNR regime. c) All of the proposed schemes tend to outperform CJ/ZF with a moderate number of antennas, and linear processing schemes with MRC attain better performance than those with SC.
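The diversity-order claim in a) can be checked empirically with a heavily simplified Monte Carlo sketch: MRC over the direct link plus M relay-antenna branches is modeled as a sum of M + 1 i.i.d. exponential gains (a Gamma variate), while the eavesdropper sees a single Rayleigh link. This ignores the AF noise amplification in the actual system model:

```python
import numpy as np

rng = np.random.default_rng(3)

def secrecy_outage(snr_db, m_antennas, rs=0.5, snr_eve_db=5, trials=200_000):
    """Monte Carlo sketch of secrecy outage for an MRT/MRC-style scheme.
    Outage occurs when the secrecy capacity falls below the target rate rs."""
    snr_d = 10 ** (snr_db / 10)
    snr_e = 10 ** (snr_eve_db / 10)
    # MRC over M+1 i.i.d. Rayleigh branches -> Gamma(M+1, 1) power gain
    gamma_d = snr_d * rng.gamma(shape=m_antennas + 1, size=trials)
    gamma_e = snr_e * rng.exponential(size=trials)
    cs = np.log2(1 + gamma_d) - np.log2(1 + gamma_e)
    return float(np.mean(cs < rs))

# More relay antennas -> lower secrecy outage (higher diversity order M+1)
print(secrecy_outage(15, 1), secrecy_outage(15, 4))
```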
Abstract:
This paper considers a wirelessly powered wiretap channel, where an energy-constrained multi-antenna information source, powered by a dedicated power beacon, communicates with a legitimate user in the presence of a passive eavesdropper. Based on a simple time-switching protocol where power transfer and information transmission are separated in time, we investigate two popular multi-antenna transmission schemes at the information source, namely maximum ratio transmission (MRT) and transmit antenna selection (TAS). Closed-form expressions are derived for the achievable secrecy outage probability and average secrecy rate for both schemes. In addition, simple approximations are obtained in the high signal-to-noise ratio (SNR) regime. Our results demonstrate that by exploiting full knowledge of the channel state information (CSI), we can achieve better secrecy performance; e.g., with full CSI of the main channel, the system can achieve substantial secrecy diversity gain. On the other hand, without the CSI of the main channel, no diversity gain can be attained. Moreover, we show that the additional level of randomness induced by wireless power transfer does not affect the secrecy performance in the high SNR regime. Finally, our theoretical claims are validated by the numerical results.
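A toy simulation of the time-switching protocol conveys the MRT-versus-TAS comparison. All parameter values are invented, and the eavesdropper model is simplified: under MRT the beam is matched to the user, so the eavesdropper's effective gain is taken as an independent Rayleigh gain, while under TAS the eavesdropper sees the selected antenna's channel:

```python
import numpy as np

rng = np.random.default_rng(4)

def avg_secrecy_rate(scheme, n_ant=4, alpha=0.3, eta=0.5, p_beacon=100.0,
                     trials=100_000):
    """Monte Carlo sketch: the source harvests energy from the beacon for a
    fraction alpha of each block, then transmits with MRT or TAS."""
    g = rng.exponential(size=trials)                  # beacon -> source gain
    p_tx = eta * p_beacon * alpha * g / (1 - alpha)   # harvested tx power
    h = rng.exponential(size=(trials, n_ant))         # source -> user gains
    he = rng.exponential(size=(trials, n_ant))        # source -> eavesdropper
    if scheme == "MRT":
        gain_u = h.sum(axis=1)                        # full beamforming gain
        gain_e = he[:, 0]                             # unmatched beam at Eve
    else:                                             # TAS: best single antenna
        idx = h.argmax(axis=1)
        gain_u = h[np.arange(trials), idx]
        gain_e = he[np.arange(trials), idx]
    cs = np.log2(1 + p_tx * gain_u) - np.log2(1 + p_tx * gain_e)
    return float(np.maximum(cs, 0).mean())

# MRT exploits full main-channel CSI, so it outperforms TAS on average
print(avg_secrecy_rate("MRT"), avg_secrecy_rate("TAS"))
```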
Abstract:
This paper presents an analytical performance investigation of both beamforming (BF) and interference cancellation (IC) strategies for a device-to-device (D2D) communication system underlaying a cellular network with an M-antenna base station (BS). We first derive new closed-form expressions for the ergodic achievable rate for BF and IC precoding strategies with quantized channel state information (CSI) as well as perfect CSI. Then, novel lower and upper bounds are derived which hold for an arbitrary number of antennas and are shown to be tight against Monte Carlo results. Based on these results, we examine in detail three important special cases: high signal-to-noise ratio (SNR), weak interference between the cellular link and the D2D link, and a BS equipped with a large number of antennas. We also derive asymptotic expressions for the ergodic achievable rate for these scenarios. From these results, we obtain valuable insights into the impact of system parameters such as the number of antennas, the SNR, and the interference on each link. In particular, we show that an irreducible saturation point exists in the high SNR regime, while the ergodic rate under the IC strategy is verified to be always better than that under the BF strategy. We also reveal that the ergodic achievable rate under perfect CSI scales as log2 M, whilst it reaches a ceiling with quantized CSI.
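The saturation point under quantized CSI can be illustrated with a toy Monte Carlo sketch. The single-interferer model and the RVQ-style quantization loss 2^(-B/(M-1)) below are simplifying assumptions, not the paper's exact system model:

```python
import numpy as np

rng = np.random.default_rng(5)

def ergodic_rate(snr_db, quantized=True, bits=6, m=8, trials=50_000):
    """Monte Carlo sketch: with quantized CSI, imperfect interference
    cancellation leaves residual interference that scales with the transmit
    power, so the ergodic rate saturates at high SNR."""
    snr = 10 ** (snr_db / 10)
    s = rng.exponential(size=trials)   # desired-link channel gain
    i = rng.exponential(size=trials)   # interfering-link channel gain
    # RVQ-style residual after IC: fraction ~2^(-B/(M-1)) of the interference
    leak = 2.0 ** (-bits / (m - 1)) if quantized else 0.0
    sinr = snr * s / (1 + leak * snr * i)
    return float(np.mean(np.log2(1 + sinr)))

# Perfect CSI keeps growing with SNR; quantized CSI hits a ceiling
for p in (10, 30, 50):
    print(p, round(ergodic_rate(p, quantized=False), 2),
             round(ergodic_rate(p, quantized=True), 2))
```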
Abstract:
We investigate the achievable sum rate and energy efficiency of zero-forcing precoded downlink massive multiple-input multiple-output systems in Ricean fading channels. A simple and accurate approximation of the average sum rate is presented, which is valid for a system with arbitrary-rank channel means. Based on this expression, the optimal power allocation strategy maximizing the average sum rate is derived. Moreover, considering a general power consumption model, the energy efficiency of the system with rank-1 channel means is characterized. Specifically, the impact of key system parameters, such as the number of users N, the number of BS antennas M, the Ricean factor K, and the signal-to-noise ratio (SNR) ρ, is studied, and closed-form expressions for the optimal ρ and M maximizing the energy efficiency are derived. Our findings show that the optimal power allocation scheme follows the water-filling principle and can substantially enhance the average sum rate in the presence of a strong line-of-sight effect in the low SNR regime. In addition, we demonstrate that the Ricean factor K has a significant impact on the optimal values of M, N, and ρ.
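The water-filling principle mentioned in the findings can be sketched directly. This is the textbook algorithm for parallel Gaussian channels, not the paper's Ricean-specific derivation; `gains` are assumed effective channel gains (channel power over noise power):

```python
import numpy as np

def water_filling(gains, p_total):
    """Classic water-filling: allocate power p_k = mu - 1/g_k to channels
    above the water level mu, subject to sum(p_k) = p_total."""
    gains = np.asarray(gains, dtype=float)
    order = np.argsort(gains)[::-1]        # strongest channels first
    g = gains[order]
    p = np.zeros_like(g)
    for k in range(len(g), 0, -1):
        mu = (p_total + np.sum(1.0 / g[:k])) / k   # candidate water level
        alloc = mu - 1.0 / g[:k]
        if alloc[-1] >= 0:                 # weakest active channel still gets power
            p[:k] = alloc
            break
    out = np.zeros_like(p)
    out[order] = p                         # undo the sort
    return out

# Strong channels get more power; very weak channels may get none
p = water_filling([2.0, 1.0, 0.1], 1.0)
print(p.round(3), p.sum())   # -> [0.75 0.25 0.  ] 1.0
```

This also matches the low-SNR intuition in the abstract: with a small power budget, almost all power goes to the strongest (line-of-sight-dominated) directions.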