995 results for Filters methods


Relevance:

30.00%

Abstract:

BACKGROUND: When anticoagulation is contraindicated or ineffective, optional vena cava filters can be used to prevent pulmonary embolism. These devices can be removed within a defined period of time or can remain in the vena cava permanently. METHODS: The status of optional vena cava filters was studied by a review of the relevant literature found in a selective Medline search from 2000 to 2008, including a Cochrane review and published guidelines. RESULTS: Optional vena cava filters can be removed up to 20 weeks or even longer after insertion (depending on the filter model) in a minor interventional radiological procedure, provided that therapeutic anticoagulation has been achieved or the patient is no longer at risk for venous thromboembolism. Current studies show comparable results for optional and permanent filters, but no prospective studies comparing the two filter types have yet been published. CONCLUSIONS: Optional vena cava filters are an important addition to the management of venous thromboembolic disease. As only limited data are available to date, their use should be considered on an individual case basis.

Relevance:

30.00%

Abstract:

BACKGROUND Precise detection of volume change allows better estimation of the biological behavior of lung nodules. Postprocessing tools with automated detection, segmentation, and volumetric analysis of lung nodules may expedite radiological workflows and give additional confidence to radiologists. PURPOSE To compare two postprocessing software algorithms (LMS Lung, Median Technologies; LungCARE®, Siemens) for CT volumetric measurement, and to analyze the effect of soft (B30) and hard (B70) reconstruction filters on automated volume measurement. MATERIAL AND METHODS Between January 2010 and April 2010, 45 patients with a total of 113 pulmonary nodules were included. The CT examination was performed on a 64-row multidetector CT scanner (Somatom Sensation, Siemens, Erlangen, Germany) with the following parameters: collimation, 24 x 1.2 mm; pitch, 1.15; voltage, 120 kVp; reference tube current-time product, 100 mAs. Automated volumetric measurement of each lung nodule was performed with the two postprocessing algorithms on images reconstructed with the two filters (B30 and B70). The average relative volume measurement difference (VME%) and the limits of agreement between the two methods were used for comparison. RESULTS With soft reconstruction filters, the LMS system produced mean nodule volumes that were 34.1% larger (P < 0.0001) than those produced by the LungCARE® system. The VME% was 42.2%, with limits of agreement between -53.9% and 138.4%. Volumes measured with the soft filter (B30) were significantly larger than with the hard filter (B70): by 11.2% for LMS and by 1.6% for LungCARE® (both P < 0.05). LMS measured greater volumes than LungCARE® with both filters: by 13.6% for the soft and 3.8% for the hard filter (P < 0.01 and P > 0.05, respectively). CONCLUSION There is substantial inter-software (LMS/LungCARE®) as well as intra-software (B30/B70) variability in lung nodule volume measurement; it is therefore mandatory to use the same software and the same reconstruction filter throughout the follow-up of lung nodule volume.
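For readers who want to reproduce this kind of agreement analysis, a minimal sketch follows. The abstract does not give the exact formula behind VME%, so the per-pair relative-difference convention, the Bland-Altman limits, and all example volumes below are assumptions, not study data.

```python
import numpy as np

def volume_agreement(v_a, v_b):
    """Mean relative volume difference (%) and Bland-Altman 95%
    limits of agreement between paired nodule volume measurements."""
    v_a = np.asarray(v_a, dtype=float)
    v_b = np.asarray(v_b, dtype=float)
    # Relative difference of each pair, in percent of the pair mean
    rel_diff = 100.0 * (v_a - v_b) / ((v_a + v_b) / 2.0)
    mean_diff = rel_diff.mean()
    sd_diff = rel_diff.std(ddof=1)
    limits = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)
    return mean_diff, limits

# Hypothetical paired volumes (mm^3) from the two software packages
lms      = [120.0, 250.0,  80.0, 400.0, 150.0]
lungcare = [ 95.0, 210.0,  60.0, 350.0, 100.0]
vme, (lo, hi) = volume_agreement(lms, lungcare)
print(f"VME% = {vme:.1f}%, limits of agreement [{lo:.1f}%, {hi:.1f}%]")
```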

Relevance:

30.00%

Abstract:

AMS-14C applications often require the analysis of small samples. Such is the case for atmospheric aerosols, where frequently only a small amount of sample is available. The Ion Beam Physics group at ETH Zurich has designed an Automated Graphitization Equipment (AGE III) for routine graphite production for AMS analysis from organic samples of approximately 1 mg. In this study, we explore the potential use of the AGE III for graphitization of particulate carbon collected on quartz filters. In order to test the methodology, samples of reference materials and blanks of different sizes were prepared in the AGE III and the graphite was analyzed in a MICADAS AMS (ETH) system. The graphite samples prepared in the AGE III showed recovery yields higher than 80% and reproducible 14C values for masses ranging from 50 to 300 µg. Reproducible radiocarbon values were also obtained for small aerosol filter samples graphitized in the AGE III. As a case study, the tested methodology was applied to PM10 samples collected in two urban cities in Mexico in order to compare the source apportionment of biomass and fossil fuel combustion. The 14C data showed that carbonaceous aerosols from Mexico City have a much lower biogenic signature than those from the smaller city of Cuernavaca.
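As a rough sketch of how such a source apportionment can be derived from 14C data (the reference value for contemporary biomass and the example fractions modern below are assumptions, not values from the study):

```python
def source_apportionment(f_modern, f_bio_ref=1.06):
    """Split carbonaceous aerosol carbon into biogenic and fossil
    fractions from a measured 14C fraction modern (fM).
    Fossil carbon carries no 14C (fM = 0); contemporary biomass
    carries fM near f_bio_ref (a commonly assumed reference value)."""
    f_bio = f_modern / f_bio_ref      # biogenic fraction
    f_fossil = 1.0 - f_bio            # fossil-fuel fraction
    return f_bio, f_fossil

# Hypothetical PM10 measurements
for city, fm in [("Mexico City", 0.45), ("Cuernavaca", 0.70)]:
    f_bio, f_fossil = source_apportionment(fm)
    print(f"{city}: biogenic {f_bio:.0%}, fossil {f_fossil:.0%}")
```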

Relevance:

30.00%

Abstract:

Context. We interpret multicolor data from the OSIRIS NAC for the remote-sensing exploration of comet 67P/Churyumov-Gerasimenko. Aims. We determine the most meaningful definition of color maps for the characterization of surface variegation with the filters available on the OSIRIS NAC. Methods. We analyzed laboratory spectra of selected minerals and olivine-pyroxene mixtures as seen through the OSIRIS NAC filters, using spectral methods from the literature: reflectance ratios, minimum band wavelength, spectral slopes, band tilt, band curvature, and visible tilt. Results. We emphasize the importance of reflectance ratios, and particularly the relation of visible tilt vs. band tilt. This technique provides a reliable diagnostic of the presence of silicates. Color maps constructed from red-green-blue colors defined with the green, orange, red, IR, and Fe2O3 filters allow us to define regions that may significantly differ in composition.
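The abstract does not reproduce the paper's exact definitions of these indicators, but a generic sketch of reflectance ratios and filter-to-filter spectral slopes, with hypothetical reflectances and nominal placeholder wavelengths, might look like this:

```python
def reflectance_ratio(r_a, r_b):
    """Ratio of reflectances measured through two filters."""
    return r_a / r_b

def spectral_slope(r1, lam1, r2, lam2):
    """Spectral slope in %/100 nm between two filters, normalized
    to the shorter-wavelength reflectance (one common convention)."""
    return 1e4 * (r2 - r1) / (r1 * (lam2 - lam1))

# Hypothetical reflectances of one pixel through three filters;
# wavelengths (nm) are nominal placeholders, not instrument values
r_green, r_orange, r_ir = 0.040, 0.055, 0.068
lam_green, lam_orange, lam_ir = 535.7, 649.2, 882.1

visible_tilt = spectral_slope(r_green, lam_green, r_orange, lam_orange)
band_tilt = spectral_slope(r_orange, lam_orange, r_ir, lam_ir)
print(f"visible tilt {visible_tilt:.1f} %/100 nm, "
      f"band tilt {band_tilt:.1f} %/100 nm, "
      f"green/orange ratio {reflectance_ratio(r_green, r_orange):.2f}")
```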

Relevance:

30.00%

Abstract:

In hostile environments such as scientific facilities where ionising radiation is a dominant hazard, reducing human interventions by increasing robotic operations is desirable. CERN, the European Organization for Nuclear Research, has around 50 km of underground scientific facilities, where wireless mobile robots could help in the operation of the accelerator complex, for example by conducting remote inspections and radiation surveys in different areas. The main challenges to be considered here are not only that the robots should be able to cover long distances and operate for relatively long periods, but also the underground tunnel environment, the possible presence of electromagnetic fields, radiation effects, and the fact that the robots must in no way interrupt the operation of the accelerators. A reliable and robust wireless communication system is essential for the successful execution of such robotic missions, and for avoiding situations in which a robot must be recovered manually after running out of energy or losing its communication link.

The goal of this thesis is to provide the means to reduce the risk of mission failure and to maximise the mission capabilities of wireless mobile robots with finite energy storage operating in a radiation environment with non-line-of-sight (NLOS) communications, by employing enhanced wireless communication methods. Towards this goal, the following research objectives are addressed: predicting the communication range before and during robotic missions, and optimising and enhancing the wireless communication quality of mobile robots by exploiting robot mobility and employing a multi-robot network. The thesis provides introductory information on the infrastructures where mobile robots will need to operate, the tasks to be carried out by mobile robots, and the problems encountered in these environments.
The research work carried out to improve wireless communication comprises an introduction to the relevant radio signal propagation theory and technology, followed by the research itself in the following stages: an analysis of the wireless communication requirements of mobile robots for different tasks in a selection of CERN facilities; predictions of energy and communication autonomy (in terms of distance and time) to reduce the risk of energy- and communication-related failures during missions; autonomous navigation of a mobile robot to find zone(s) of maximum radio signal strength to improve the communication coverage area; and autonomous navigation of one or more mobile robots acting as mobile wireless relay (repeater) points in order to provide a tethered wireless connection to a teleoperated mobile robot carrying out inspection or radiation monitoring activities in a challenging radio environment. The specific contributions of this thesis are outlined below.

The first set of contributions comprises novel methods for predicting the energy autonomy and communication range before and after deployment of the mobile robots in the intended environments. This is important in order to provide situational awareness and avoid mission failures. Energy consumption is predicted using power consumption models of the different components in a mobile robot; this energy prediction model paves the way for choosing energy-efficient wireless communication strategies. Communication range prediction is performed using radio signal propagation models together with radio signal strength (RSS) filtering and estimation techniques based on Kalman filters and Gaussian process models (a simplified Kalman filtering sketch appears at the end of this abstract).

The second set of contributions comprises methods to optimise wireless communication quality using novel spatial-sampling-based techniques that are robust to sensing and radio field noise and provide redundancy. Central finite difference (CFD) methods are employed to determine the 2-D RSS gradients, and robot mobility is used to optimise the communication quality and the network throughput (a minimal sketch of this gradient step appears below). This method is also validated with a case study involving haptic teleoperation of wireless mobile robots, in which an operator at a remote location can smoothly navigate a mobile robot through an environment with weak wireless signals.

The third contribution is a robust stochastic position optimisation algorithm for multiple autonomous relay robots, which are used for wireless tethering of radio signals and thereby enhance wireless communication quality. All the proposed methods and algorithms are verified and validated using simulations and field experiments with a variety of mobile robots available at CERN. In summary, this thesis offers novel methods, and demonstrates their use, to predict energy autonomy and wireless communication range, optimise robot positions to improve communication quality, and enhance the communication range and wireless network qualities of mobile robots for applications in hostile environments such as scientific facilities emitting ionising radiation. In simpler terms, this thesis develops a set of tools for improving, easing and making safer robotic missions in hostile environments.
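To make the spatial-sampling idea above concrete, here is a minimal sketch of a central-finite-difference RSS gradient estimate driving a simple gradient ascent. The sampling step, the toy RSS field, and all names are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

def rss_gradient_cfd(rss_at, pos, h=0.5):
    """Estimate the 2-D gradient of radio signal strength (RSS) at
    a robot position using central finite differences: the robot
    samples RSS at +/- h metres along x and y."""
    x, y = pos
    dx = (rss_at(x + h, y) - rss_at(x - h, y)) / (2.0 * h)
    dy = (rss_at(x, y + h) - rss_at(x, y - h)) / (2.0 * h)
    return np.array([dx, dy])

def climb_rss(rss_at, pos, step=1.0, iters=20):
    """Move the robot uphill in RSS via normalized gradient ascent."""
    pos = np.asarray(pos, dtype=float)
    for _ in range(iters):
        g = rss_gradient_cfd(rss_at, pos)
        if np.linalg.norm(g) < 1e-6:
            break
        pos = pos + step * g / np.linalg.norm(g)
    return pos

# Toy RSS field (dBm) peaking at (10, 5), e.g. near an access point
rss = lambda x, y: -40.0 - 0.5 * ((x - 10.0) ** 2 + (y - 5.0) ** 2)
print(climb_rss(rss, (0.0, 0.0)))  # converges near (10, 5)
```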
This thesis demonstrates, both in theory and in experiments, that mobile robots can improve wireless communication quality by exploiting their mobility to dynamically optimise their positions and maintain connectivity even when the radio environment has non-line-of-sight characteristics. The methods developed in this thesis are well suited for easy integration in mobile robots and can be applied directly at the application layer of the wireless network. The proposed methods outperformed other comparable state-of-the-art methods.
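As an illustration of the RSS filtering component described in this abstract, here is a minimal scalar Kalman filter for smoothing noisy RSS readings. This is a rough sketch under assumptions: the thesis combines such filtering with propagation and Gaussian process models, which are omitted, and the noise variances and simulated trace are invented.

```python
import numpy as np

def kalman_rss(measurements, q=0.5, r=4.0):
    """Scalar Kalman filter smoothing noisy RSS readings (dBm).
    State: true RSS, assumed slowly varying (random walk with
    process variance q); r is the measurement noise variance."""
    x, p = measurements[0], 1.0          # initial state and covariance
    estimates = []
    for z in measurements:
        p = p + q                        # predict (random-walk model)
        k = p / (p + r)                  # Kalman gain
        x = x + k * (z - x)              # update with measurement z
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

# Hypothetical noisy RSS trace: robot moving away from an access point
rng = np.random.default_rng(0)
true_rss = np.linspace(-50.0, -75.0, 50)
noisy = true_rss + rng.normal(0.0, 2.0, 50)  # shadowing/fading noise
print(kalman_rss(noisy)[-5:])
```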

Relevance:

30.00%

Abstract:

The work presented in this doctoral thesis centres on the development of innovative analytical strategies based on sensor technology and on mass spectrometry techniques, applied to the biological field and to food safety.

The first chapter addresses methodological and applicative aspects of sensor-based procedures for the identification and determination of biomarkers associated with celiac disease. In this context, two immunosensors were developed, one with piezoelectric and one with amperometric transduction, for the detection of the anti-tissue-transglutaminase antibodies associated with this disease. The innovation of these devices lies in the immobilization of the tTG enzyme in its open conformation (Open-tTG), which has been shown to be the conformation mainly involved in pathogenesis. On the basis of the results obtained, both systems proved to be a valid alternative to the screening tests currently in use for the diagnosis of celiac disease. Staying within the context of celiac disease, further research in this doctoral thesis concerned the development of reliable methods for the control of "gluten-free" products.

The second chapter covers the development of a mass spectrometry method and of a competitive immunosensor for the detection of prolamins in "gluten-free" foods. An LC-ESI-MS/MS method was developed, based on targeted analysis with selected reaction monitoring signal acquisition, for the identification of gluten in various cereals potentially toxic to celiac patients. In addition, work focused on a competitive immunosensor for the detection of gliadin as a rapid screening method for flours. Both systems were optimized using rice flour spiked with gliadin, avenins, hordeins and secalins for the LC-MS/MS system, and with gliadin alone for the sensor. Finally, the analytical systems were validated by analyzing both raw materials (flours) and finished foods (biscuits, pasta, bread, etc.). The approach developed in mass spectrometry opens the way to a multiplexed screening test for assessing the safety of products declared "gluten-free", while further studies will be needed to find extraction conditions compatible with the competitive immunoassay, which for now is applicable only to the analysis of flours extracted with ethanol.

The third chapter of this thesis concerns the development of new methods for the detection of HPV, Chlamydia and Gonorrhoeae in biological fluids. Paper strips were chosen as the substrate because they offer a valid detection platform, with advantages in terms of low cost, the possibility of building portable devices, and visual readout of the result without the need for instrumentation. The methodology developed is very simple, requires no complex instrumentation, and is based on isothermal rolling-circle amplification (RCA) of the target. Of fundamental importance, moreover, is the use of coloured nanoparticles which, having been functionalized with a DNA sequence complementary to the amplified target produced by the RCA, allow its detection by the naked eye using paper filters.
These strips were tested on real samples, allowing discrimination between positive and negative samples within 10-15 minutes, and opening a new route toward tests highly competitive with those currently on the market.
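As an illustration of how the calibration of a competitive immunoassay such as the gliadin sensor is commonly handled (the four-parameter logistic model and all numbers below are assumptions, not data from the thesis):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a = max signal (no analyte),
    d = min signal, c = IC50, b = slope. In a competitive
    immunoassay the signal decreases as analyte increases."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical calibration: gliadin standards (ppm) vs sensor signal
conc = np.array([0.5, 1.0, 5.0, 10.0, 50.0, 100.0])
signal = np.array([0.95, 0.90, 0.70, 0.55, 0.25, 0.15])
popt, _ = curve_fit(four_pl, conc, signal, p0=[1.0, 1.0, 10.0, 0.1])

# Interpolate an unknown sample's concentration from its signal
a, b, c, d = popt
s = 0.60
unknown = c * ((a - d) / (s - d) - 1.0) ** (1.0 / b)
print(f"estimated gliadin: {unknown:.1f} ppm")
```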

Relevance:

30.00%

Abstract:

Purpose: To investigate the effects of light filters on reading speed in normal vision and in low vision due to age-related macular degeneration (AMD). Methods: Reading speed was determined for 12 subjects with normal vision and 12 subjects with non-exudative AMD using stationary lowercase nonsensical print in Times Roman font and four light filters: a yellow Corning Photochromic Filter (CPF) 450, a grey neutral density (ND) filter, an individual filter obtained using the Intuitive Colorimeter®, and a clear filter. Results: There was no statistically significant effect of light filter on reading speed for the normal subjects. The AMD group demonstrated a statistically significant 5% average improvement in reading speed with the CPF450 compared with the other filters, although some AMD subjects had improvements of 10-15%. Conclusions: Light filters obtained using the Intuitive Colorimeter® performed poorly compared with the CPF450, ND and clear filters for both study groups. For the AMD group, average reading speed was statistically greater with the CPF450 than with the other filters; however, it is questionable whether the improvement (5%) would be clinically significant. As some of the subjects with AMD had greater improvements with the CPF450, we advocate clinical assessment of light filters using existing protocols on an individual basis. © 2004 The College of Optometrists.

Relevance:

30.00%

Abstract:

Background: The aim was to investigate the visual effect of coloured filters, compared with transmission-matched neutral density filters, in patients with dry age-related macular degeneration. Methods: Visual acuity (VA, logMAR), contrast sensitivity (Pelli-Robson) and colour vision (D15) were recorded for 39 patients (average age 79.1 ± 7.2 years) with age-related macular degeneration, both in the presence and absence of glare from a fluorescent source. Patients then chose their preferred coloured filter and a matched neutral density transmission filter (NoIR). Visual function tests were repeated with the chosen filters, both in the presence and absence of glare from the fluorescent source. Patients trialled the two filters for two weeks each, in random order. Following the trial of each filter, a telephone questionnaire was completed. Results: VA and contrast sensitivity were unaffected by the coloured filters but were reduced through the neutral density filters (p < 0.01). VA and contrast sensitivity were reduced by similar amounts following the introduction of the glare source, both in the presence and absence of filters (p < 0.001). Colour vision error scores were increased following the introduction of a neutral density filter (from 177.6 ± 60.2 to 251.9 ± 115.2) and still further through coloured filters (275.1 ± 50.8; p < 0.001). In the absence of any filter, colour vision error scores increased by 29.1 ± 55.60 units in the presence of glare (F(2,107) = 3.9, p = 0.02); however, there was little change in colour vision error scores in the presence of glare with either the neutral density or the coloured filters. Questionnaires indicated that patients tended to gain more benefit from the coloured filters. Conclusions: Coloured filters had minimal impact on VA and contrast sensitivity in patients with age-related macular degeneration; however, they caused a small reduction in objective colour vision, although this was not registered subjectively by patients. Patients indicated that they received more benefit from the coloured filters than from the neutral density filters. © 2013 The Authors © 2013 Optometrists Association Australia.

Relevance:

30.00%

Abstract:

Purpose - The aim of the study was to determine the effect of optimal spectral filters on reading performance following stroke. Methods - Seventeen stroke subjects, aged 43-85 years, were compared with an age-matched control group (n = 17). Subjects undertook the Wilkins Rate of Reading Test on three occasions: (i) using an optimally selected spectral filter; (ii) after subjects were randomly assigned to two groups, Group 1 using an optimal filter and Group 2 a grey filter, for two weeks; and (iii) after the groups were crossed over, Group 1 using a grey filter and Group 2 an optimal filter, for a further two weeks. The grey filter had similar photopic reflectance to the optimal filters and was intended as a surrogate for a placebo. An increase in reading speed of >5% was considered clinically relevant. Results - Initial use of a spectral filter in the stroke cohort increased reading speed by ~8% and almost halved error scores, findings not replicated in controls. Prolonged use of an optimal spectral filter increased reading speed by >9% for stroke subjects, with errors more than halved. When the same subjects switched to using a grey filter, reading speed reduced by ~4%. The second group of stroke subjects, who used a grey filter first, showed a decrease in reading speed of ~3%, followed by an increase of ~4% with an optimal filter, with error scores almost halving. Conclusions - The present study has shown that spectral filters can immediately improve reading speed and accuracy following stroke, whereas prolonged use does not increase these benefits significantly. © 2013 Spanish General Council of Optometry.

Relevance:

30.00%

Abstract:

Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, which is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple averaging filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for signal-dependent noise (AAS, BM3Dc, HHM, TLS) and for independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise, as well as real clinical fluoroscopic images, were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. The performance of the algorithms was evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restoration than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively. © 2012 Elsevier Ltd. All rights reserved.
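A minimal sketch of the noise model discussed above: simulated Poisson quantum noise, a white-compression gray-level transform, and a PSNR check. The parameter values, the toy image, and the helper names are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def add_poisson_noise(img, photons_at_white=50.0):
    """Simulate quantum noise: scale image (0..1) to an expected
    photon count, draw Poisson samples, rescale. A lower
    photons_at_white means a lower dose and stronger noise."""
    rng = np.random.default_rng(0)
    lam = np.clip(img, 0.0, 1.0) * photons_at_white
    return rng.poisson(lam).astype(float) / photons_at_white

def white_compression(img, gamma=0.5):
    """Gray-level transform that compresses bright values
    (and with them their larger Poisson noise variance)."""
    return np.clip(img, 0.0, 1.0) ** gamma

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB, for a peak value of 1.0."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(1.0 / mse)

# Toy test image: smooth ramp with a bright disc
y, x = np.mgrid[0:128, 0:128]
img = 0.2 + 0.3 * x / 127.0
img[(x - 64) ** 2 + (y - 64) ** 2 < 400] = 0.9

noisy = add_poisson_noise(img, photons_at_white=30.0)
print("PSNR before transform:", round(psnr(img, noisy), 1), "dB")
print("PSNR after white compression:",
      round(psnr(white_compression(img), white_compression(noisy)), 1), "dB")
```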

Relevance:

30.00%

Abstract:

PURPOSE: The objective of this study was to evaluate, by halometry and under low illumination conditions, the effects of short-wavelength light absorbance filters on visual discrimination capacity in retinitis pigmentosa patients. METHODS: This was an observational, prospective, analytic, cross-sectional study of 109 eyes of 57 retinitis pigmentosa patients with visual acuity better than 1.25 logMAR. The visual disturbance index (VDI) was determined using the software Halo 1.0, with and without the interposition of filters that totally or partially absorb short-wavelength light between 380 and 500 nm. RESULTS: A statistically significant reduction in the VDI values determined using filters which absorb short-wavelength light was observed (p < 0.0001). The established VDIs in patients with VA logMAR <0.4 were 0.30 ± 0.05 (95% CI, 0.26–0.36) for the lens alone, 0.20 ± 0.04 (95% CI, 0.16–0.24) with the filter that completely absorbs wavelengths shorter than 450 nm, and 0.24 ± 0.04 (95% CI, 0.20–0.28) with the filter that partially absorbs wavelengths shorter than 450 nm, which implies a 20 to 33% increase in visual discrimination capacity. In addition, a decrease in VDI in at least one eye was observed in more than 90% of patients when using a filter. CONCLUSIONS: Short-wavelength light absorbance filters increase visual discrimination capacity under low illumination conditions in retinitis pigmentosa patients. The use of such filters constitutes a suitable method for improving visual quality affected by intraocular light-related visual disturbances under low illumination in this group of patients. © 2016 American Academy of Optometry
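The quoted 20 to 33% increase follows directly from the reported VDI means, taking the relative reduction in VDI as the gain in discrimination capacity (a small sketch of that arithmetic, using only the values above):

```python
# Relative improvement implied by the reported VDI values
# (lower VDI = better visual discrimination)
vdi_no_filter = 0.30
for label, vdi in [("total <450 nm absorption", 0.20),
                   ("partial <450 nm absorption", 0.24)]:
    gain = (vdi_no_filter - vdi) / vdi_no_filter
    print(f"{label}: {gain:.0%} reduction in VDI")
# -> 33% and 20%, matching the 20 to 33% range quoted in the abstract
```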

Relevance:

30.00%

Abstract:

This thesis introduces two related lines of study on the classification of hyperspectral images with nonlinear methods. First, it describes a quantitative and systematic evaluation, by the author, of each major component in a pipeline for classifying hyperspectral images (HSI) developed earlier in a joint collaboration [23]. The pipeline, with its novel use of nonlinear classification methods, has advanced beyond the state of the art in classification accuracy on commonly used benchmark HSI data [6], [13]. More importantly, it provides a clutter map with respect to a predetermined set of classes, addressing real application situations in which image pixels do not necessarily fall into the predetermined set of classes to be identified, detected, or classified.

The particular components evaluated are: a) band selection with band-wise entropy spread; b) feature transformation with spatial filters and spectral expansion with derivatives; c) graph spectral transformation via locally linear embedding for dimension reduction; and d) statistical ensemble for clutter detection. The quantitative evaluation of the pipeline verifies that these components are indispensable to high-accuracy classification.
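As a rough sketch of two of these components, here is a toy band selection by band-wise entropy followed by locally linear embedding. The median entropy threshold, the random stand-in data, and the LLE parameters are assumptions for illustration, not the pipeline's actual criteria.

```python
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

def band_entropies(cube, n_bins=64):
    """Shannon entropy of each spectral band of an HSI cube
    (rows x cols x bands); bands whose histogram entropy is low
    carry little information and are candidates for removal."""
    ent = np.empty(cube.shape[2])
    for i in range(cube.shape[2]):
        counts, _ = np.histogram(cube[:, :, i], bins=n_bins)
        p = counts / counts.sum()
        p = p[p > 0]
        ent[i] = -(p * np.log2(p)).sum()
    return ent

# Toy cube: 32 x 32 pixels, 40 bands (random stand-in for real HSI)
rng = np.random.default_rng(1)
cube = rng.random((32, 32, 40))

ent = band_entropies(cube)
keep = ent >= np.median(ent)   # simple threshold, not the thesis criterion
pixels = cube[:, :, keep].reshape(-1, int(keep.sum()))

# Nonlinear dimension reduction (component c of the pipeline)
lle = LocallyLinearEmbedding(n_neighbors=10, n_components=3)
embedded = lle.fit_transform(pixels)
print(embedded.shape)  # (1024, 3)
```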

Secondly, the work extends the HSI classification pipeline from a single HSI data cube to multiple HSI data cubes. Each cube exhibits feature variation and is to be classified into one of multiple classes. The main challenge is deriving the cube-wise classification from the pixel-wise classification. The thesis presents an initial attempt to address this challenge and discusses the potential for further improvement.
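The thesis only states this challenge; as a purely illustrative baseline, one naive aggregation is a majority vote over confidently classified pixels, abstaining when clutter dominates. All names and data below are assumptions, not the thesis method.

```python
import numpy as np

def cube_label(pixel_labels, clutter=-1, min_coverage=0.5):
    """Derive a cube-wise label from pixel-wise labels by majority
    vote over non-clutter pixels; abstain (return clutter) when
    too few pixels received a confident class."""
    labels = np.asarray(pixel_labels).ravel()
    valid = labels[labels != clutter]
    if valid.size < min_coverage * labels.size:
        return clutter
    classes, counts = np.unique(valid, return_counts=True)
    return int(classes[np.argmax(counts)])

# Hypothetical pixel-wise output for one cube (3 classes plus clutter)
rng = np.random.default_rng(2)
pix = rng.choice([0, 1, 2, -1], size=(32, 32), p=[0.5, 0.2, 0.1, 0.2])
print(cube_label(pix))  # majority class, here 0
```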