958 results for Electrical engineering|Electromagnetics|Energy


Relevance:

100.00%

Publisher:

Abstract:

Every space launch increases the overall amount of space debris, yet satellites have limited awareness of nearby objects that might pose a collision hazard. This work develops astrometric, radiometric, and thermal models for the study of space debris in low-Earth orbit, and proposes analysis methods that provide increased Local Area Awareness for satellites in low-Earth and geostationary orbit. Local Area Awareness is defined as the ability to detect, characterize, and extract useful information regarding resident space objects as they move through the space environment surrounding a spacecraft. The study of space debris is of critical importance to all space-faring nations. Characterization efforts are proposed using long-wave infrared sensors for space-based observations of debris objects in low-Earth orbit. Long-wave infrared sensors are commercially available and do not require solar illumination of the target, since the received signal is temperature dependent. Characterizing debris objects through passive imaging techniques allows further studies into the origin, specifications, and future trajectory of debris objects. Conclusions are drawn regarding this thermal analysis as a function of debris orbit, geometry, orientation with respect to time, and material properties. Development of a thermal model permits the characterization of debris objects based upon their received long-wave infrared signals; information regarding the material type, size, and tumble rate of the observed debris objects is extracted. This investigation proposes the use of long-wave infrared radiometric models of typical debris to develop techniques for the detection and characterization of debris objects via signal analysis of unresolved imagery. Knowledge regarding the orbital type and semi-major axis of the observed debris object is extracted via astrometric analysis, which may aid in constraining the admissible region for the initial orbit determination process. The resulting orbital information is then fused with the radiometric characterization analysis, enabling further characterization of the observed debris object. This fused analysis, yielding orbital, material, and thermal properties, significantly increases a satellite's Local Area Awareness via an intimate understanding of the debris environment surrounding the spacecraft.
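
The temperature dependence of the long-wave infrared signal can be illustrated with a simple graybody calculation. The sketch below is illustrative only, with assumed emissivity and debris temperatures rather than values from the dissertation's radiometric model; it integrates the Planck spectral radiance over an 8-14 µm band.

```python
import numpy as np

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance of a blackbody [W m^-2 sr^-1 m^-1]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = np.expm1(H * C / (wavelength_m * KB * temp_k))
    return a / b

def lwir_band_radiance(temp_k, emissivity=0.9, band=(8e-6, 14e-6), n=500):
    """Graybody radiance integrated over the LWIR band [W m^-2 sr^-1]."""
    wl = np.linspace(band[0], band[1], n)
    return emissivity * np.trapz(planck_radiance(wl, temp_k), wl)

# A cold fragment in eclipse versus warmer, sunlit/Earthshine-heated debris:
for t in (180.0, 250.0, 300.0):            # representative temperatures [K]
    print(f"T = {t:5.1f} K  ->  L_band = {lwir_band_radiance(t):.2f} W/m^2/sr")
```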

Relevance:

100.00%

Publisher:

Abstract:

The promise of Wireless Sensor Networks (WSNs) is the autonomous collaboration of a collection of sensors to accomplish specific goals that a single sensor cannot achieve. Sensor networking serves a range of applications by providing raw data as the foundation for further analysis and action. Imprecision in the collected data can badly mislead the decision-making process of sensor-based applications, resulting in ineffectiveness or outright failure to meet the application objectives. Because inherent WSN characteristics commonly corrupt raw sensor readings, many research efforts attempt to improve the accuracy of this corrupted, or "dirty", sensor data; the dirty data need to be cleaned or corrected. However, existing data-cleaning solutions restrict themselves to static WSNs, in which deployed sensors rarely move during operation. Many emerging applications relying on WSNs need sensor mobility to enhance application efficiency and usage flexibility, so the locations of deployed sensors are dynamic and each sensor functions and contributes its resources independently; sensors mounted on vehicles for monitoring traffic conditions are one prospective example. Sensor mobility makes the network topology and the correlations among sensor streams transient. Because they are based on static relationships among sensors, the existing methods for cleaning sensor data in static WSNs are invalid in such mobile scenarios; a data-cleaning solution that considers sensor movement is therefore needed. This dissertation aims to improve the quality of sensor data by considering the consequences of the various trajectory relationships of autonomous mobile sensors in the system. First, we address the dynamic network topology due to sensor mobility. The concept of a virtual sensor is presented and used for spatio-temporal selection of neighboring sensors to help clean sensor data streams; this is one of the first methods to clean data in mobile sensor environments. We also study the mobility patterns of moving sensors relative to the boundaries of sub-areas of interest, and develop a belief-based analysis to determine reliable sets of neighboring sensors that improve cleaning performance, especially when node density is relatively low. Finally, we design a novel sketch-based technique to clean data from internal sensors, where spatio-temporal relationships among sensors cannot provide data correlations among sensor streams.
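
A minimal sketch of the spatio-temporal neighbor-selection idea is given below; it is not the dissertation's virtual-sensor or belief-based algorithm, and the neighborhood radius, time window, and distance weighting are assumptions chosen for illustration.

```python
import numpy as np

def clean_reading(target_pos, target_val, neighbors,
                  radius=50.0, time_window=5.0, now=0.0):
    """Replace a suspect reading with a distance-weighted average of readings
    from spatio-temporally close mobile neighbors.

    neighbors: list of dicts with keys 'pos' (x, y), 't', and 'val'.
    """
    weights, values = [], []
    for nb in neighbors:
        d = np.hypot(nb['pos'][0] - target_pos[0], nb['pos'][1] - target_pos[1])
        if d <= radius and abs(now - nb['t']) <= time_window:
            weights.append(1.0 / (d + 1e-6))    # closer neighbors count more
            values.append(nb['val'])
    if not weights:                              # no usable neighbors: keep raw value
        return target_val
    return float(np.average(values, weights=weights))

# Example: a vehicle-mounted sensor reports 85.0 while nearby sensors disagree.
neighbors = [{'pos': (10, 0),  't': -1.0, 'val': 23.5},
             {'pos': (30, 20), 't': -2.0, 'val': 24.1},
             {'pos': (200, 0), 't':  0.0, 'val': 30.0}]   # too far away, ignored
print(clean_reading((0, 0), 85.0, neighbors, now=0.0))
```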

Relevance:

100.00%

Publisher:

Abstract:

Low-frequency electromagnetic compatibility (EMC) is an increasingly important aspect of the design of practical systems, needed to ensure the functional safety and reliability of complex products. The opportunities for using numerical techniques to predict and analyze a system's EMC are therefore of considerable interest in many industries. As the first phase of the study, a proper model including all the details of each component was required, so advances in EMC modeling were surveyed and classified into analytical and numerical models. Finite element (FE) modeling coupled with the distributed network method was selected to model the converter's components and obtain the frequency-domain behavioral model of the converter. The method can reveal the behavior of parasitic elements and higher-order resonances, which are critical in studying EMI problems. For the EMC and signature studies of machine drives, equivalent source modeling was investigated. Considering the details of the multi-machine environment, including actual machine models, innovations in equivalent source modeling were introduced that decrease the simulation time dramatically. Several models were designed in this study; the voltage-current cube model and the wire model gave the best results. A GA-based PSO method was used as the optimization process. Superposition and suppression of the fields when coupling the components were also studied and verified. The simulation time of the equivalent model is 80-100 times shorter than that of the detailed model, and all tests were verified experimentally. As an application of the EMC and signature study, fault diagnosis and condition monitoring of an induction motor drive were developed using radiated fields. In addition to experimental tests, 3-D FE analysis was coupled with circuit-based software to implement the incipient fault cases. Identification was implemented using an ANN for seventy different fault cases, and the simulation results were verified experimentally. Finally, identification of the types of power components was implemented. The results show that it is possible to identify the type of component, as well as the faulty component, by comparing the amplitudes of their stray-field harmonics. Identification using stray fields is nondestructive and can be used for setups that cannot go offline and be dismantled.
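
As a rough illustration of the optimization step, the sketch below implements a plain particle swarm optimizer (not the GA-hybridized variant used in the study) applied to a toy objective standing in for the equivalent-source fitting error.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer for a vector objective f."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)].copy()                    # global best
    return g, f(g)

# Toy objective standing in for the equivalent-source fitting error.
best, err = pso_minimize(lambda p: np.sum((p - [1.0, -2.0, 0.5]) ** 2),
                         bounds=[(-5, 5)] * 3)
print(best, err)
```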

Relevance:

100.00%

Publisher:

Abstract:

Online Social Network (OSN) services provided by Internet companies bring people together to chat, share information, and enjoy it. Meanwhile, huge amounts of data are generated by those services (which can be regarded as social media) every day, every hour, even every minute and second. Researchers are interested in analyzing OSN data, extracting interesting patterns from it, and applying those patterns to real-world applications. However, due to the large scale of OSN data, it is difficult to analyze effectively. This dissertation focuses on applying data mining and information retrieval techniques to mine two key components of social media data: users and user-generated content. Specifically, it addresses three problems related to social media users and content: (1) how does one organize the users and the content? (2) how does one summarize the textual content so that users do not have to read every post to capture the general idea? (3) how does one identify influential users in social media to benefit other applications, e.g., marketing campaigns? The contributions of this dissertation are briefly summarized as follows. (1) It provides a comprehensive and versatile data mining framework for analyzing users and user-generated content from social media. (2) It designs a hierarchical co-clustering algorithm to organize the users and content. (3) It proposes multi-document summarization methods to extract core information from social network content. (4) It introduces three important dimensions of social influence, and a dynamic influence model for identifying influential users.
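
As one concrete example of extractive multi-document summarization (a generic centroid-based technique, not the methods proposed in this dissertation), the sketch below scores posts by their TF-IDF similarity to the collection centroid and returns the top-ranked ones.

```python
import re
from collections import Counter
import numpy as np

def summarize(posts, k=2):
    """Pick the k posts closest to the TF-IDF centroid of a post collection."""
    docs = [re.findall(r"[a-z']+", p.lower()) for p in posts]
    vocab = sorted({w for d in docs for w in d})
    idx = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for i, d in enumerate(docs):
        for w, c in Counter(d).items():
            tf[i, idx[w]] = c
    df = (tf > 0).sum(axis=0)
    tfidf = tf * np.log((1 + len(docs)) / (1 + df))          # smoothed IDF weighting
    tfidf /= np.linalg.norm(tfidf, axis=1, keepdims=True) + 1e-12
    centroid = tfidf.mean(axis=0)
    scores = tfidf @ centroid                                 # cosine similarity to centroid
    return [posts[i] for i in np.argsort(scores)[::-1][:k]]

posts = ["The new phone camera is amazing in low light.",
         "Battery life on the new phone is disappointing.",
         "Low light photos from the phone camera look great.",
         "I went for a walk today."]
print(summarize(posts, k=2))
```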

Relevance:

100.00%

Publisher:

Abstract:

Over the last decade, advances and innovations from silicon photonics technology have been observed in the telecommunications and computing industries. This technology, which employs silicon as an optical medium, relies on current CMOS microelectronics fabrication processes to enable medium-scale integration of many nanophotonic devices into photonic integrated circuits. Other fields of research, such as optical sensor processing, can also benefit from silicon photonics technology, especially for sensors in which the physical measurement is wavelength-encoded. In this work, we present the design and application of a thermally tuned silicon photonic device as an optical sensor interrogator. The main device is a micro-ring resonator filter 10 μm in diameter. A photonic design toolkit was developed based on open-source software from the research community, and with these tools it was possible to estimate the resonance and spectral characteristics of the filter. From the obtained design parameters, a 7.8 × 3.8 mm optical chip was fabricated using standard micro-photonics techniques. To tune a ring resonance, nichrome micro-heaters were fabricated on top of the device. The fabricated devices were systematically characterized and their tuning responses determined. From the measurements, a ring resonator with a free spectral range of 18.4 nm and a bandwidth of 0.14 nm was obtained, and with just 5 mA it was possible to tune the device resonance by up to 3 nm. To apply the device as a sensor interrogator, a wavelength-estimation model based on measuring the time interval between peaks was developed, and simulations were carried out to assess its performance. To test the technique, an experiment using a fiber Bragg grating (FBG) optical sensor was set up; estimates of the sensor's wavelength shift due to axial strain agreed within 22 pm with measurements from a spectrum analyzer. The results imply that signals from FBG sensors can be processed with good accuracy using a micro-ring device, with the advantages of compact size, scalability, and versatility. The system also has further applications, such as processing optical wavelength shifts from integrated photonic sensors and tracking resonances from laser sources.
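
The relationship between the ring geometry and the reported spectral figures can be sanity-checked with the standard free-spectral-range formula; the group index below is an assumed value, not one taken from the dissertation.

```python
import numpy as np

def ring_fsr_nm(wavelength_nm, group_index, diameter_um):
    """Free spectral range of a ring resonator: FSR = lambda^2 / (n_g * L)."""
    circumference_nm = np.pi * diameter_um * 1e3
    return wavelength_nm**2 / (group_index * circumference_nm)

# Assumed values: 1550 nm operation, n_g ~ 4.3 for a silicon wire waveguide.
fsr = ring_fsr_nm(1550.0, 4.3, 10.0)
print(f"FSR ~ {fsr:.1f} nm")             # on the order of the reported 18.4 nm

# Finesse and Q implied by the reported FSR and linewidth.
fsr_meas, bw_meas = 18.4, 0.14            # nm, from the abstract
print(f"Finesse ~ {fsr_meas / bw_meas:.0f},  Q ~ {1550.0 / bw_meas:.0f}")
```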

Relevance:

100.00%

Publisher:

Abstract:

Artificial lift is one of several techniques applied to oil production; it uses equipment to reduce the bottom-hole pressure, providing a pressure differential that increases flow. The choice of artificial lift method depends on a detailed analysis of several factors, such as the initial installation cost, maintenance, and the conditions of the producing field. The Electrical Submersible Pumping (ESP) method is quite efficient when the objective is to produce high liquid flow rates in both onshore and offshore environments, under adverse temperature conditions and in the presence of viscous fluids. By definition, ESP is an artificial lift method in which a subsurface electric motor transforms electrical energy into mechanical energy to drive a multistage centrifugal pump, each stage composed of a rotating impeller (rotor) and a stationary diffuser (stator). The pump converts the mechanical energy of the motor into kinetic energy in the form of velocity, which pushes the fluid to the surface. The objective of this work is to implement the flexible-polyhedron optimization method, known as the Modified Simplex Method (MSM), to study the influence of modifying the inlet and outlet parameters of the centrifugal pump impeller channel in an ESP system. By varying the angular parameters of the pump within the optimization method, the simulations yielded optimized values of head (lift height), loss-free efficiency, and power.
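
A minimal sketch of the flexible-polyhedron (Nelder-Mead) optimization step is shown below using SciPy; the head model is a toy quadratic stand-in for the pump simulations, and the blade-angle names and optimum are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

def negative_head(angles):
    """Toy stand-in for the simulated pump head as a function of the impeller
    inlet and outlet blade angles (degrees). The real objective would come
    from the pump performance simulations."""
    beta1, beta2 = angles
    head = 120.0 - 0.05 * (beta1 - 22.0) ** 2 - 0.08 * (beta2 - 35.0) ** 2
    return -head                        # minimize the negative to maximize head

result = minimize(negative_head, x0=[18.0, 30.0], method='Nelder-Mead',
                  options={'xatol': 1e-3, 'fatol': 1e-3})
print("optimal angles:", result.x, " head:", -result.fun)
```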

Relevance:

100.00%

Publisher:

Abstract:

The unprecedented and relentless growth in the electronics industry is feeding the demand for integrated circuits (ICs) with increasing functionality and performance at minimum cost and power consumption. As predicted by Moore's law, ICs are being aggressively scaled to meet this demand. While the continuous scaling of process technology is reducing gate delays, the performance of ICs is being increasingly dominated by interconnect delays. In an effort to improve submicrometer interconnect performance, to increase packing density, and to reduce chip area and power consumption, the semiconductor industry is focusing on three-dimensional (3D) integration. However, volume production and commercial exploitation of 3D integration are not feasible yet due to significant technical hurdles.

At the present time, interposer-based 2.5D integration is emerging as a precursor to stacked 3D integration. All the dies and the interposer in a 2.5D IC must be adequately tested for product qualification. However, since the structure of 2.5D ICs differs from that of traditional 2D ICs, new challenges have emerged: (1) pre-bond interposer testing, (2) lack of test access, (3) limited ability for at-speed testing, (4) high-density I/O ports and interconnects, (5) a reduced number of test pins, and (6) high power consumption. This research targets these challenges, and effective solutions have been developed to test both the dies and the interposer.

The dissertation first introduces the basic concepts of 3D ICs and 2.5D ICs. Prior work on testing of 2.5D ICs is studied. An efficient method is presented to locate defects in a passive interposer before stacking. The proposed test architecture uses e-fuses that can be programmed to connect or disconnect functional paths inside the interposer. The concept of a die footprint is utilized for interconnect testing, and the overall assembly and test flow is described. Moreover, the concept of weighted critical area is defined and utilized to reduce test time. In order to fully determine the location of each e-fuse and the order of functional interconnects in a test path, we also present a test-path design algorithm. The proposed algorithm can generate all test paths for interconnect testing.

In order to test for opens, shorts, and interconnect delay defects in the interposer, a test architecture is proposed that is fully compatible with the IEEE 1149.1 standard and relies on an enhancement of the standard test access port (TAP) controller. To reduce test cost, a test-path design and scheduling technique is also presented that minimizes a composite cost function based on test time and the design-for-test (DfT) overhead in terms of additional through silicon vias (TSVs) and micro-bumps needed for test access. The locations of the dies on the interposer are taken into consideration in order to determine the order of dies in a test path.

To address the scenario of high density of I/O ports and interconnects, an efficient built-in self-test (BIST) technique is presented that targets the dies and the interposer interconnects. The proposed BIST architecture can be enabled by the standard TAP controller in the IEEE 1149.1 standard. The area overhead introduced by this BIST architecture is negligible; it includes two simple BIST controllers, a linear-feedback-shift-register (LFSR), a multiple-input-signature-register (MISR), and some extensions to the boundary-scan cells in the dies on the interposer. With these extensions, all boundary-scan cells can be used for self-configuration and self-diagnosis during interconnect testing. To reduce the overall test cost, a test scheduling and optimization technique under power constraints is described.
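
To make the roles of the LFSR and MISR concrete, the toy sketch below generates pseudo-random patterns with a small LFSR and compacts circuit responses into a signature with a MISR; the widths, tap positions, and fault model are illustrative assumptions, not the dissertation's hardware design.

```python
def lfsr_stream(seed, taps, width, n):
    """Fibonacci LFSR: yields n pseudo-random test patterns of `width` bits."""
    state = seed
    for _ in range(n):
        yield state
        fb = 0
        for t in taps:                       # XOR of the tap positions
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << width) - 1)

def misr_compact(responses, taps, width):
    """Multiple-input signature register: folds responses into one signature."""
    sig = 0
    for r in responses:
        fb = 0
        for t in taps:
            fb ^= (sig >> t) & 1
        sig = (((sig << 1) | fb) ^ r) & ((1 << width) - 1)
    return sig

patterns = list(lfsr_stream(seed=0b1011, taps=(3, 2), width=4, n=8))
good_responses = [p ^ 0b0110 for p in patterns]        # toy fault-free circuit
bad_responses = [r ^ (1 if i == 5 else 0)              # single-bit error injected
                 for i, r in enumerate(good_responses)]
print(misr_compact(good_responses, (3, 2), 4),
      misr_compact(bad_responses, (3, 2), 4))          # signatures differ
```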

In order to accomplish testing with a small number of test pins, the dissertation presents two efficient ExTest scheduling strategies that implement interconnect testing between tiles inside a system-on-chip (SoC) die on the interposer while satisfying the practical constraint that the number of required test pins cannot exceed the number of available pins at the chip level. The tiles in the SoC are divided into groups based on the manner in which they are interconnected. To minimize test time, two optimization solutions are introduced: the first minimizes the number of input test pins, and the second minimizes the number of output test pins. In addition, two subgroup configuration methods are proposed to generate subgroups inside each test group.

Finally, the dissertation presents a programmable method for shift-clock stagger assignment to reduce power supply noise during SoC die testing in 2.5D ICs. An SoC die in a 2.5D IC is typically composed of several blocks, and two neighboring blocks that share the same power rails should not toggle at the same time during shift. Therefore, the proposed programmable method does not assign the same stagger value to neighboring blocks. The positions of all blocks are first analyzed and the shared boundary length between blocks is calculated. Based on the positional relationships between the blocks, a mathematical model is presented to derive optimal results for small-to-medium-sized problems; for larger designs, a heuristic algorithm is proposed and evaluated.
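
A greedy sketch of the neighboring-block constraint is shown below; it is only an illustration of the idea (neighbors sharing a power rail receive different stagger values) and does not reproduce the dissertation's mathematical model or heuristic.

```python
def assign_staggers(blocks, adjacency, n_staggers=4):
    """Greedy heuristic: give each block a shift-clock stagger value that
    differs from the values already assigned to its neighbors (blocks that
    share a power rail / boundary)."""
    assignment = {}
    for blk in blocks:
        used = {assignment[nb] for nb in adjacency.get(blk, ()) if nb in assignment}
        free = [s for s in range(n_staggers) if s not in used]
        assignment[blk] = free[0] if free else 0    # fallback if neighbors exhaust values
    return assignment

# Toy floorplan; blocks processed in descending shared-boundary order (assumed).
adjacency = {'A': {'B', 'C'}, 'B': {'A', 'C'}, 'C': {'A', 'B', 'D'}, 'D': {'C'}}
print(assign_staggers(['C', 'A', 'B', 'D'], adjacency))
```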

In summary, the dissertation targets important design and optimization problems related to testing of interposer-based 2.5D ICs. The proposed research has led to theoretical insights, experiment results, and a set of test and design-for-test methods to make testing effective and feasible from a cost perspective.

Relevance:

100.00%

Publisher:

Abstract:

This dissertation studies coding strategies for computational imaging that overcome the limitations of conventional sensing techniques. The information capacity of conventional sensing is limited by the physical properties of the optics, such as aperture size, detector pixels, quantum efficiency, and sampling rate. These parameters determine the spatial, depth, spectral, temporal, and polarization sensitivity of each imager; increasing sensitivity in any one dimension can significantly compromise the others.

This research implements various coding strategies for optical multidimensional imaging and acoustic sensing in order to extend their sensing abilities. The proposed coding strategies combine hardware modification and signal processing to extract additional bandwidth and sensitivity from conventional sensors. We discuss the hardware architecture, compression strategies, sensing-process modeling, and reconstruction algorithm of each sensing system.

Optical multidimensional imaging measures three or more dimensions of the optical signal. Traditional multidimensional imagers acquire the extra dimensions at the cost of degraded temporal or spatial resolution. Compressive multidimensional imaging instead multiplexes the transverse spatial, spectral, temporal, and polarization information onto a two-dimensional (2D) detector. The corresponding spectral, temporal, and polarization coding strategies adapt optics, electronic devices, and designed modulation techniques for multiplexed measurement. This computational imaging technique provides multispectral, temporal super-resolution, and polarization imaging abilities with minimal loss in spatial resolution and noise level, while maintaining or gaining temporal resolution. The experimental results show that appropriate coding strategies can improve sensing capacity by hundreds of times.
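
The multiplex-then-reconstruct idea can be illustrated with a generic compressed-sensing toy: a random sensing matrix mixes many spectral channels into a few detector measurements, and a sparse recovery algorithm (here, orthogonal matching pursuit) reconstructs the underlying signal. This is not the dissertation's coding or reconstruction algorithm; the sensing matrix, sparsity level, and signal are assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))    # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n_channels, n_meas = 64, 24                    # spectral channels vs. detector reads
A = rng.normal(size=(n_meas, n_channels)) / np.sqrt(n_meas)   # random multiplexing
x_true = np.zeros(n_channels)
x_true[[5, 30, 47]] = [1.0, 0.6, 0.3]          # a 3-line toy spectrum
y = A @ x_true                                 # few multiplexed measurements
x_hat = omp(A, y, k=3)
print(np.flatnonzero(x_hat), np.round(x_hat[np.flatnonzero(x_hat)], 3))
```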

The human auditory system has an astonishing ability to localize, track, and filter selected sound sources or information in a noisy environment. Accomplishing the same task with engineering usually requires multiple detectors, advanced computational algorithms, or artificial intelligence systems. Compressive acoustic sensing incorporates acoustic metamaterials into compressive sensing theory to emulate the abilities of sound localization and selective attention. This research investigates and optimizes the sensing capacity and the spatial sensitivity of the acoustic sensor. The well-modeled acoustic sensor can localize multiple speakers in both stationary and dynamic auditory scenes, and can distinguish mixed conversations from independent sources with a high audio recognition rate.

Relevance:

100.00%

Publisher:

Abstract:

Brain-computer interfaces (BCI) have the potential to restore communication or control abilities in individuals with severe neuromuscular limitations, such as those with amyotrophic lateral sclerosis (ALS). The role of a BCI is to extract and decode relevant information that conveys a user's intent directly from brain electro-physiological signals and translate this information into executable commands to control external devices. However, the BCI decision-making process is error-prone due to noisy electro-physiological data, representing the classic problem of efficiently transmitting and receiving information via a noisy communication channel.

This research focuses on P300-based BCIs which rely predominantly on event-related potentials (ERP) that are elicited as a function of a user's uncertainty regarding stimulus events, in either an acoustic or a visual oddball recognition task. The P300-based BCI system enables users to communicate messages from a set of choices by selecting a target character or icon that conveys a desired intent or action. P300-based BCIs have been widely researched as a communication alternative, especially in individuals with ALS who represent a target BCI user population. For the P300-based BCI, repeated data measurements are required to enhance the low signal-to-noise ratio of the elicited ERPs embedded in electroencephalography (EEG) data, in order to improve the accuracy of the target character estimation process. As a result, BCIs have relatively slower speeds when compared to other commercial assistive communication devices, and this limits BCI adoption by their target user population. The goal of this research is to develop algorithms that take into account the physical limitations of the target BCI population to improve the efficiency of ERP-based spellers for real-world communication.

In this work, it is hypothesised that building adaptive capabilities into the BCI framework can potentially give the BCI system the flexibility to improve performance by adjusting system parameters in response to changing user inputs. The research in this work addresses three potential areas for improvement within the P300 speller framework: information optimisation, target character estimation and error correction. The visual interface and its operation control the method by which the ERPs are elicited through the presentation of stimulus events. The parameters of the stimulus presentation paradigm can be modified to modulate and enhance the elicited ERPs. A new stimulus presentation paradigm is developed in order to maximise the information content that is presented to the user by tuning stimulus paradigm parameters to positively affect performance. Internally, the BCI system determines the amount of data to collect and the method by which these data are processed to estimate the user's target character. Algorithms that exploit language information are developed to enhance the target character estimation process and to correct erroneous BCI selections. In addition, a new model-based method to predict BCI performance is developed, an approach which is independent of stimulus presentation paradigm and accounts for dynamic data collection. The studies presented in this work provide evidence that the proposed methods for incorporating adaptive strategies in the three areas have the potential to significantly improve BCI communication rates, and the proposed method for predicting BCI performance provides a reliable means to pre-assess BCI performance without extensive online testing.
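
A minimal sketch of the adaptive data-collection idea is given below: classifier scores from each stimulus flash update a posterior over candidate characters that starts from a language-model prior, and collection stops once the posterior is confident. The Gaussian score model, its parameters, and the stopping threshold are assumptions for illustration, not the algorithms developed in this work.

```python
import numpy as np
from scipy.stats import norm

def p300_select(prior, flash_groups, scores, mu1=1.0, mu0=0.0, sigma=1.0,
                threshold=0.95, max_flashes=200):
    """Bayesian evidence accumulation with dynamic stopping (illustrative).

    prior:        language-model probabilities over the candidate characters
    flash_groups: per flash, the set of character indices illuminated
    scores:       per flash, the classifier score for that flash
    """
    post = np.array(prior, dtype=float)
    for i, (group, s) in enumerate(zip(flash_groups, scores)):
        like = np.full(len(post), norm.pdf(s, mu0, sigma))   # char not flashed
        like[list(group)] = norm.pdf(s, mu1, sigma)          # char flashed
        post *= like
        post /= post.sum()
        if post.max() >= threshold or i + 1 >= max_flashes:
            break
    return int(post.argmax()), post.max(), i + 1

# Toy run with 4 candidates; character 2 is the attended target.
rng = np.random.default_rng(0)
groups = [{0, 1}, {2, 3}, {0, 2}, {1, 3}] * 5
scores = [rng.normal(1.0 if 2 in g else 0.0, 1.0) for g in groups]
print(p300_select([0.25] * 4, groups, scores))
```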

Relevance:

100.00%

Publisher:

Abstract:

Human motion monitoring is an important function in numerous applications. In this dissertation, two systems for monitoring motions of multiple human targets in wide-area indoor environments are discussed, both of which use radio frequency (RF) signals to detect, localize, and classify different types of human motion. In the first system, a coherent monostatic multiple-input multiple-output (MIMO) array is used, and a joint spatial-temporal adaptive processing method is developed to resolve micro-Doppler signatures at each location in a wide-area for motion mapping. The downranges are obtained by estimating time-delays from the targets, and the crossranges are obtained by coherently filtering array spatial signals. Motion classification is then applied to each target based on micro-Doppler analysis. In the second system, multiple noncoherent multistatic transmitters (Tx's) and receivers (Rx's) are distributed in a wide-area, and motion mapping is achieved by noncoherently combining bistatic range profiles from multiple Tx-Rx pairs. Also, motion classification is applied to each target by noncoherently combining bistatic micro-Doppler signatures from multiple Tx-Rx pairs. For both systems, simulation and real data results are shown to demonstrate the ability of the proposed methods for monitoring patient repositioning activities for pressure ulcer prevention.
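
As an illustration of how a micro-Doppler signature is formed (not the joint spatial-temporal adaptive processing developed here), the sketch below simulates a slow-time return with a bulk Doppler line plus a sinusoidally modulated limb component and computes its short-time Fourier transform.

```python
import numpy as np
from scipy.signal import stft

# Simulated slow-time radar return: a body Doppler line plus a sinusoidally
# modulated micro-Doppler component from periodic limb motion (assumed values).
fs = 1000.0                                    # slow-time sampling rate [Hz]
t = np.arange(0, 2.0, 1 / fs)
body = np.exp(1j * 2 * np.pi * 60.0 * t)                          # 60 Hz bulk Doppler
limb = 0.5 * np.exp(1j * (2 * np.pi * 60.0 * t
                          + 40.0 * np.sin(2 * np.pi * 1.5 * t)))  # 1.5 Hz swing
x = body + limb + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))

# The short-time Fourier transform gives the time-frequency micro-Doppler
# signature that a motion classifier would operate on.
f, tt, Z = stft(x, fs=fs, nperseg=128, noverlap=96, return_onesided=False)
spectrogram = np.abs(Z)
print(spectrogram.shape)    # (frequency bins, time frames)
```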

Relevance:

100.00%

Publisher:

Abstract:

Although trapped-ion technology is well suited for quantum information science, scalability of the system remains one of the main challenges. One challenge associated with scaling the ion-trap quantum computer is the ability to individually manipulate an increasing number of qubits. Using micro-mirrors fabricated with micro-electromechanical systems (MEMS) technology, laser beams are focused on individual ions in a linear chain and the focal point is steered in two dimensions. Multiple single-qubit gates are demonstrated on trapped 171Yb+ qubits and the gate performance is characterized using quantum state tomography. The system features negligible crosstalk to neighboring ions (< 3×10⁻⁴) and switching speeds comparable to typical single-qubit gate times (< 2 µs). In a separate experiment, photons scattered from the 171Yb+ ion are coupled into an optical fiber with 63% efficiency using a high numerical aperture lens (0.6 NA). The coupled photons are directed to superconducting nanowire single-photon detectors (SNSPD), which provide a higher detector efficiency (69%) than traditional photomultiplier tubes (35%). The total system photon collection efficiency is increased from 2.2% to 3.4%, which allows for fast state detection of the qubit. For a detection beam intensity of 11 mW/cm², the average detection time is 23.7 µs with 99.885(7)% detection fidelity. The technologies demonstrated in this thesis can be integrated to form a single quantum register with all of the necessary resources to perform local gates as well as high-fidelity readout, and to provide a photon link to other systems.
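
The link between collection efficiency, detection time, and fidelity can be illustrated with a simple threshold-based Poisson model of photon counting; the bright- and dark-state count rates below are assumed round numbers, not measured values from the experiment.

```python
from scipy.stats import poisson

def detection_fidelity(rate_bright, rate_dark, t_detect, threshold):
    """State-discrimination fidelity for threshold-based photon counting.

    rate_bright / rate_dark: mean photon count rates (counts/us) for the
    bright and dark qubit states; t_detect: detection window (us).
    """
    mu_b, mu_d = rate_bright * t_detect, rate_dark * t_detect
    p_bright_correct = 1.0 - poisson.cdf(threshold - 1, mu_b)   # counts >= threshold
    p_dark_correct = poisson.cdf(threshold - 1, mu_d)           # counts <  threshold
    return 0.5 * (p_bright_correct + p_dark_correct)

# Assumed rates: ~0.5 counts/us for the bright state, ~0.005 for the dark state.
for t in (5.0, 10.0, 23.7):
    best = max(range(1, 15),
               key=lambda th: detection_fidelity(0.5, 0.005, t, th))
    f = detection_fidelity(0.5, 0.005, t, best)
    print(f"t = {t:5.1f} us  threshold = {best}  fidelity = {f:.5f}")
```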

Relevance:

100.00%

Publisher:

Abstract:

Purpose: The purpose of this work was to investigate the breast dose saving potential of a breast positioning technique (BP) for thoracic CT examinations with organ-based tube current modulation (OTCM).

Methods: The study included 13 female patient models (XCAT, age range: 27-65 y, weight range: 52-105.8 kg). Each model was modified to simulate three breast sizes in the standard supine geometry. The modeled breasts were further deformed, emulating a BP that constrains the breasts within the 120° anterior tube current (mA) reduction zone. The tube current of the CT examination was modeled using an attenuation-based program, which reduces the tube current to 20% in the anterior region with a corresponding increase in the posterior region. A validated Monte Carlo program was used to estimate organ doses with a typical clinical system (SOMATOM Definition Flash, Siemens Healthcare). The simulated organ doses, and organ doses normalized by CTDIvol, were compared between attenuation-based tube current modulation (ATCM), OTCM, and OTCM with BP (OTCMBP).
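
A simplified sketch of the organ-based tube-current profile is given below; it assumes the total mAs per rotation is preserved when the anterior 120° zone is reduced to 20%, which is a simplifying assumption for illustration rather than the behavior of the clinical implementation.

```python
import numpy as np

def otcm_profile(base_ma, n_angles=360, anterior_deg=120.0, reduction=0.2):
    """Organ-based tube current profile: reduce mA to `reduction` of nominal
    over the anterior zone and raise the remaining angles so the total mAs
    over one rotation is preserved (a simplifying assumption)."""
    angles = np.arange(n_angles) * 360.0 / n_angles          # 0 deg = anterior
    half = anterior_deg / 2.0
    anterior = (angles <= half) | (angles >= 360.0 - half)
    ma = np.full(n_angles, float(base_ma))
    ma[anterior] *= reduction
    deficit = base_ma * n_angles - ma.sum()                  # mAs removed anteriorly
    ma[~anterior] += deficit / (~anterior).sum()             # redistribute posteriorly
    return angles, ma

angles, ma = otcm_profile(200.0)
print(f"anterior mA: {ma[0]:.0f}, posterior mA: {ma[180]:.0f}, "
      f"mean mA preserved: {np.isclose(ma.mean(), 200.0)}")
```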

Results: On average, compared to ATCM, OTCM reduced the breast dose by 19.3±4.5%, whereas OTCMBP reduced breast dose by 36.6±6.9% (an additional 21.3±7.3%). The dose saving of OTCMBP was more significant for larger breasts (on average 32, 38, and 44% reduction for 0.5, 1.5, and 2.5 kg breasts, respectively). Compared to ATCM, OTCMBP also reduced thymus and heart dose by 12.1 ± 6.3% and 13.1 ± 5.4%, respectively.

Conclusions: In thoracic CT examinations, OTCM with a breast positioning technique can markedly reduce unnecessary exposure to the radiosensitive organs in the anterior chest wall, specifically breast tissue. The breast dose reduction is more notable for women with larger breasts.

Relevance:

100.00%

Publisher:

Abstract:

This thesis demonstrates a new way to achieve sparse biological sample detection using magnetic bead manipulation on a digital microfluidic device. Sparse sample detection is made possible through two steps: sparse sample capture and fluorescent signal detection. In the first step, the immunological reaction between antibody and antigen binds target cells to antibody-coated magnetic beads, achieving sample capture. In the second step, detection is achieved via fluorescent signal measurement together with magnetic bead manipulation. Across these two steps, three functions must work together: magnetic bead manipulation, fluorescent signal measurement, and immunological binding. The first function, magnetic bead manipulation, uses current-carrying wires embedded in the actuation electrode of an electrowetting-on-dielectric (EWD) device. The current-wire structure serves as a micro-electromagnet capable of segregating and separating magnetic beads. The device achieves high segregation efficiency when the wire spacing is 50 µm, and it can also separate two kinds of magnetic beads within a 65 µm distance. The device ensures that magnetic bead manipulation and the EWD function can operate simultaneously without introducing additional steps in the fabrication process. Half-circle-shaped current wires were designed in later devices to concentrate magnetic beads and thereby increase the SNR of sample detection. The second function is immunological binding. Immunological reaction kits were selected to ensure compatibility between the target cells, the magnetic bead function, and the EWD function; the choice of magnetic bead ensures the binding efficiency and survivability of the target cells. The magnetic bead selection and binding mechanism used in this work can be applied to a wide variety of samples by simply switching the type of antibody. The last function is fluorescent measurement. Fluorescent measurement of sparse samples is made possible by using fluorescent stains and a method to increase the SNR; the improved SNR is achieved by concentrating the target cells and reducing the sensing area. The theoretical detection limit of the entire sparse sample detection system is as low as 1 colony-forming unit per mL (CFU/mL).
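
A rough sketch of why concentrating the captured cells into a smaller sensing area improves SNR is shown below, using a shot-noise-limited model with assumed signal and background levels; it is illustrative only and not the thesis's measurement model.

```python
import numpy as np

def fluorescence_snr(n_cells, signal_per_cell, bg_per_um2, area_um2):
    """Shot-noise-limited SNR for fluorescence detection over a sensing area.
    Background counts scale with area, so concentrating the labeled cells into
    a smaller area keeps the signal while cutting the background."""
    signal = n_cells * signal_per_cell
    background = bg_per_um2 * area_um2
    return signal / np.sqrt(signal + background)

# Same 10 captured cells, detected over the full droplet vs. a concentrated spot.
for area in (1e6, 1e4):                        # sensing areas in um^2 (assumed)
    print(f"area = {area:>8.0f} um^2  SNR = "
          f"{fluorescence_snr(10, 500.0, 0.5, area):.1f}")
```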

Relevance:

100.00%

Publisher:

Abstract:

Current state-of-the-art techniques for landmine detection in ground penetrating radar (GPR) use statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to 2-D GPR data, producing superior landmine detection results. The research goes on to develop a physics-based GPR augmentation method motivated by current advances in visual object detection; this GPR-specific augmentation is used to mitigate issues caused by insufficient training sets, and the work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, this work introduces convolutional neural networks as a method to learn feature-extraction parameters; these learned convolutional features outperform hand-designed features in GPR detection tasks. The methods developed and presented here, both borrowed from and motivated by the substantial body of work in visual image processing, show an improvement in overall detection performance and introduce a way to improve the robustness of statistical classification.
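
To make the convolutional feature-extraction step concrete, the sketch below applies a single hand-set kernel (standing in for learned filters), a ReLU, and max pooling to a toy B-scan containing a hyperbolic response; it is not the network or data used in this research.

```python
import numpy as np

def conv2d(img, kern):
    """Valid-mode 2-D cross-correlation (the conv-layer primitive)."""
    kh, kw = kern.shape
    H, W = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kern)
    return out

def feature_map(gpr_slice, kern, pool=2):
    """One convolutional feature map: conv -> ReLU -> max pooling."""
    act = np.maximum(conv2d(gpr_slice, kern), 0.0)
    H, W = act.shape[0] // pool, act.shape[1] // pool
    act = act[:H * pool, :W * pool].reshape(H, pool, W, pool)
    return act.max(axis=(1, 3))

# Toy B-scan: depth x scan-position image with a shallow hyperbolic response.
x = np.arange(64) - 32
depth = (8.0 + 0.02 * x**2).astype(int)            # hyperbola apex at the center
bscan = np.random.randn(64, 64) * 0.1
bscan[np.clip(depth, 0, 63), np.arange(64)] += 1.0

kern = np.array([[-1.0, -1.0, -1.0],               # a learned kernel would replace
                 [ 2.0,  2.0,  2.0],               # this hand-set edge detector
                 [-1.0, -1.0, -1.0]])
print(feature_map(bscan, kern).shape)
```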

Relevance:

100.00%

Publisher:

Abstract:

Multi-output Gaussian processes provide a convenient framework for multi-task problems. An illustrative and motivating example of a multi-task problem is multi-region electrophysiological time-series data, where experimentalists are interested in both power and phase coherence between channels. Recently, the spectral mixture (SM) kernel was proposed to model the spectral density of a single task in a Gaussian process framework. This work develops a novel covariance kernel for multiple outputs, called the cross-spectral mixture (CSM) kernel. This new, flexible kernel represents both the power and phase relationship between multiple observation channels. The expressive capabilities of the CSM kernel are demonstrated through implementation of 1) a Bayesian hidden Markov model, where the emission distribution is a multi-output Gaussian process with a CSM covariance kernel, and 2) a Gaussian process factor analysis model, where factor scores represent the utilization of cross-spectral neural circuits. Results are presented for measured multi-region electrophysiological data.
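
A simplified illustration of the spectral-mixture idea extended to two channels is sketched below: each mixture component contributes a Gaussian envelope in lag and a cosine at its mean frequency, with a channel-dependent phase offset. This is an illustrative phase-shifted spectral-mixture form, not the CSM kernel's exact construction, and the frequencies and phase lag are assumptions.

```python
import numpy as np

def sm_cross_kernel(t1, t2, weights, means, variances, phase1=0.0, phase2=0.0):
    """Illustrative cross-covariance between two channels built from a
    spectral-mixture form: each component is a Gaussian envelope in lag times
    a cosine at its mean frequency, with a channel phase offset."""
    tau = t1[:, None] - t2[None, :]
    k = np.zeros_like(tau)
    for w, mu, v in zip(weights, means, variances):
        k += w * np.exp(-2 * np.pi**2 * tau**2 * v) \
               * np.cos(2 * np.pi * mu * tau + (phase1 - phase2))
    return k

t = np.linspace(0, 1, 200)
# One component: an 8 Hz rhythm shared by two regions with a 30 degree phase lag.
K_auto  = sm_cross_kernel(t, t, [1.0], [8.0], [4.0])
K_cross = sm_cross_kernel(t, t, [1.0], [8.0], [4.0],
                          phase1=0.0, phase2=np.deg2rad(30.0))
print(K_auto.shape, K_auto[0, 0], K_cross[0, 0])
```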