917 results for post-processing method


Relevance:

90.00%

Publisher:

Abstract:

This work develops a methodology and technology for fabricating and characterising optical interference filters with a variable passband [C.M. da Silva, 2010] and a variable cut-off band. The filters consist of multilayer dielectric thin-film reflectors interleaved with non-planar Fabry-Perot cavities of linearly varying thickness, whose maximum spectral transmittance shifts linearly with position, that is, a Variable Interference Filter (FIV). The method opens broad new possibilities for fabricating variable interference filters: linear or in other desired shapes, with variable cut-off wavelength (long- or short-pass), and variable neutral-density filters via metal deposition, in addition to applications in the promising new research area of non-uniform thin-film deposition. The first stage of this development was the study of dielectric interference filter theory in order to design and build a conventional single-wavelength bandpass filter with homogeneous layers. The next stage, building on established thin-film optics, was to show that a linearly inclined thickness profile of the cavity between the Bragg reflectors is the key parameter for producing the spatial shift of the spectral transmittance, enabling special techniques to obtain wide band variation in a single filter. Analytical modelling and a thickness-tolerance analysis of the deposited films were required to select the strategy of selective "masking" of the evaporated material in the e-beam chamber, with the goal of obtaining a linear variable spectral filter with the desired characteristics.
Accordingly, to meet the design requirements, an e-beam evaporator had to be adapted to receive a specially designed mechanical shutter that reconciles the parameters of conventional deposition techniques with the goal of an inclined profile; this profile was predicted in simulations used to adjust and calibrate the shutter geometry so that the deposited film has the intended thickness, shape and placement. At the end of these stages of analytical modelling, simulation and iterative refinement, the design parameters for a specified FIV (Variable Interference Filter) were determined. Many applications are emerging from FIVs: multi-, hyper- and ultraspectral devices for remote sensing and environmental analysis, lab-on-chip systems, biosensors, chip-sized detectors, on-chip fluorescence spectrophotometry, wavelength-shift detectors, interrogation systems, spectral imaging systems, microspectrophotometers, and more. Within the scope of this work, a reference study is made of the use of the variable interference filter (FIV) as a wavelength-scanning detector in biological and chemical sensors compatible with CMOS post-processing. A basic system is proposed to demonstrate the possibilities of the FIV and to serve as a basis for exploratory studies of its many potential uses, some of which are mentioned throughout this work: an FIV mounted on an optical sensor array connected to an electronic module that measures the intensity of the incident radiation and the absorption bands of molecules present in the detection chamber of a dedicated microfluidic channel system, forming a data acquisition and storage system (DAS).
The resulting prototype can analyse chemical or biological fluids, and its results can be compared against those obtained with certified equipment in current use.
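The peak-shift mechanism behind the FIV can be sketched numerically with the standard characteristic-matrix model of a thin-film stack. The design below is a generic illustration, not the thesis's actual filter: quarter-wave mirrors around a half-wave Fabry-Perot spacer, with all indices, layer counts and the cavity-scaling values assumed.

```python
import numpy as np

# Characteristic-matrix model at normal incidence. Scaling the cavity
# thickness (as a wedge-shaped cavity does versus position) shifts the
# passband peak, which is the operating principle of a linear variable filter.
LAM0 = 550.0            # design wavelength, nm (assumed)
N_H, N_L = 2.35, 1.46   # high/low indices, e.g. TiO2 / SiO2 (assumed)
N_INC, N_SUB = 1.0, 1.52

def layer_matrix(n, d, lam):
    delta = 2 * np.pi * n * d / lam
    return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                     [1j * n * np.sin(delta), np.cos(delta)]])

def transmittance(layers, lam):
    M = np.eye(2, dtype=complex)
    for n, d in layers:
        M = M @ layer_matrix(n, d, lam)
    B, C = M @ np.array([1.0, N_SUB])
    r = (N_INC * B - C) / (N_INC * B + C)
    return 1.0 - abs(r) ** 2            # lossless stack: T = 1 - R

def fp_filter(cavity_scale):
    qw = lambda n: (n, LAM0 / (4 * n))              # quarter-wave layer
    mirror = [qw(N_H), qw(N_L), qw(N_H), qw(N_L), qw(N_H)]
    cavity = (N_L, cavity_scale * LAM0 / (2 * N_L))  # scaled half-wave spacer
    return mirror + [cavity] + mirror[::-1]

def peak_wavelength(cavity_scale):
    lams = np.linspace(500.0, 630.0, 1301)
    T = np.array([transmittance(fp_filter(s), lam)
                  for s, lam in ((cavity_scale, l) for l in lams)])
    return lams[np.argmax(T)]

# Three positions along a linearly thickened cavity -> three peak wavelengths
peaks = [peak_wavelength(s) for s in (1.0, 1.05, 1.1)]
```

The monotonic increase of `peaks` with the cavity-scale factor is the spatial transmittance shift the abstract describes.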

Relevance:

90.00%

Publisher:

Abstract:

Modified oligonucleotides containing a sulphur group have been useful tools for studies of carcinogenesis, of protein and nucleic acid structures and functions, and of protein-nucleic acid interactions, and for antisense modulation of gene expression. One successful example has been the synthesis and study of oligodeoxynucleotides containing 6-thio-2'-deoxyguanosine. 6-Thio-2'-deoxyguanosine was first discovered as a metabolite of 6-mercaptopurine (6-MP); later, it was applied as a drug to treat leukaemia. In the course of research on its toxicity, a method was developed that uses the sulphur group as a versatile position for post-synthetic modification. The advantage of post-synthetic modification lies in its convenience: synthesis of oligomers with normal sequences has become routine work in most laboratories, whereas the design and synthesis of a proper phosphoramidite monomer for a new modified nucleoside remain difficult tasks even for a skilful chemist. The alternative (post-synthetic) method was invented to overcome these difficulties. It is achieved by incorporating into the oligomers versatile nucleotides carrying a leaving group that is sufficiently stable to withstand the conditions of synthesis but can be substituted by nucleophiles after synthesis, producing a series of oligomers, each containing a different modified base. In the current project, a phosphoramidite monomer with 6-thioguanine was successfully synthesised and incorporated into RNA. A deprotection procedure specific to RNA was designed for oligomers containing 6-thioguanosine. The results were validated by various methods (UV, HPLC, enzymatic digestion). Pioneering work on the utilization of the versatile sulphur group for post-synthetic modification was also tested. Post-synthetic modification was also carried out on DNA with 6-thio-2'-deoxyguanosine.
Electrophilic reagents with various functional groups (aliphatic, aromatic, fluorescent) and bi-functional groups were attached to the oligomers.

Relevance:

90.00%

Publisher:

Abstract:

The principal theme of this thesis is the advancement and expansion of ophthalmic research via collaboration between professional engineers and professional optometrists. The aim has been to develop new and novel approaches and solutions to contemporary problems in the field. The work is subdivided into three areas of investigation: 1) high-technology systems, 2) modification of current systems to increase functionality, and 3) development of smaller, more portable and cost-effective systems. High-technology systems: a novel high-speed optical coherence tomography (OCT) system with integrated simultaneous high-speed photography was developed, achieving better operational speed than is currently available commercially. The mechanical design of the system featured a novel 8-axis alignment system. A full set of capture, analysis, and post-processing software was developed, providing custom analysis systems for ophthalmic OCT imaging and expanding the current capabilities of the technology. A large clinical trial was undertaken to test the dynamics of contact lens edge interaction with the cornea in vivo. The interaction between lens edge design, lens base curvature, post-insertion times and edge positions was investigated. A novel method for the correction of optical distortion when assessing lens indentation was also demonstrated. Modification of current systems: a commercial autorefractor, the WAM-5500, was modified with the addition of extra hardware and a custom software and firmware solution to produce a system capable of measuring dynamic accommodative response to various stimuli in real time. A novel software package to control the data capture process was developed, allowing real-time monitoring of data by the practitioner and adding considerable functionality to the instrument beyond the standard system.
The device was used to assess the differences in accommodative response between subjects who had worn UV-blocking contact lenses for 5 years and a control group that had not worn UV-blocking lenses. While the standard static measurement of accommodation showed no differences between the two groups, the UV-blocking group did show faster accommodative rise and fall times, demonstrating the benefits of the modification of this commercially available instrumentation. Portable and cost-effective systems: a new instrument was developed to expand the capability of the now defunct Keeler Tearscope. A device was developed that provided a similar capability in allowing observation of the mires reflected from the tear film surface, but with the added advantage of being able to record the observations. The device was tested comparatively with the Tearscope and other tear film break-up techniques, demonstrating its potential. In conclusion: this work has demonstrated the advantages of interdisciplinary research between engineering and ophthalmic research, providing new and novel instrumented solutions as well as adding to the sum of scientific understanding in the ophthalmic field.

Relevance:

90.00%

Publisher:

Abstract:

This thesis describes advances in the characterisation, calibration and data processing of optical coherence tomography (OCT) systems. Femtosecond (fs) laser inscription was used to produce OCT-phantoms. Transparent materials are generally inert to infrared radiation, but with fs lasers material modification occurs via non-linear processes when the highly focused light source interacts with the material. This modification is confined to the focal volume and is highly reproducible. In order to select the best inscription parameters, combinations of inscription parameters were tested using three fs laser systems, with different operating properties, on a variety of materials. This facilitated an understanding of the key characteristics of the produced structures, with the aim of producing viable OCT-phantoms. Finally, OCT-phantoms were successfully designed and fabricated in fused silica, and their use to characterise many properties (resolution, distortion, sensitivity decay, scan linearity) of an OCT system was demonstrated. Quantitative methods were developed to support the characterisation of an OCT system collecting images from phantoms and also to improve the quality of the OCT images. The characterisation methods include the measurement of the spatially variant resolution (point spread function (PSF) and modulation transfer function (MTF)), sensitivity and distortion. Processing of OCT data is computationally intensive: standard central processing unit (CPU) based processing can take several minutes to a few hours for acquired data, making data processing a significant bottleneck. An alternative is expensive hardware-based processing such as field programmable gate arrays (FPGAs); more recently, however, graphics processing unit (GPU) based methods have been developed to minimize data processing and rendering time.
These include standard processing methods, a set of algorithms that process the raw interference data recorded by the detector and generate A-scans. The work presented here describes accelerated data processing and post-processing techniques for OCT systems. The GPU-based processing developed during the PhD was later implemented in a custom-built Fourier domain optical coherence tomography (FD-OCT) system, which currently processes and renders data in real time; its processing throughput is currently limited by the camera capture rate. OCT-phantoms have been heavily used for the qualitative characterization and adjustment/fine-tuning of the operating conditions of the OCT system, and investigations are under way to characterize OCT systems using our phantoms. The work presented in this thesis demonstrates several novel techniques for fabricating OCT-phantoms and for accelerating OCT data processing using GPUs. In the process of developing the phantoms and quantitative methods, a thorough understanding and practical knowledge of OCT and fs laser processing systems was developed. This understanding led to several novel pieces of research that are not only relevant to OCT but have broader importance; for example, an extensive understanding of the properties of fs-inscribed structures will be useful in other photonic applications such as the fabrication of phase masks, waveguides and microfluidic channels, and the acceleration of data processing with GPUs is also useful in other fields.
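The raw-interference-to-A-scan step mentioned above can be illustrated with a toy spectral-domain example. The reflector depth, source envelope and sampling below are invented for illustration; a real pipeline would also resample the detector spectrum to be linear in wavenumber before the FFT.

```python
import numpy as np

# Synthetic spectral interferogram for a single reflector, then the minimal
# chain: background subtraction, window, inverse FFT, magnitude -> A-scan.
N = 2048
k = np.linspace(2 * np.pi / 900e-9, 2 * np.pi / 800e-9, N)  # wavenumber grid
z0 = 150e-6                                  # reflector path difference (assumed)
source = np.exp(-((k - k.mean()) / (0.2 * (k.max() - k.min()))) ** 2)
raw = source * (1.0 + 0.5 * np.cos(2 * k * z0))   # what the detector records

ac = raw - source                            # remove the (known) background
ascan = np.abs(np.fft.ifft(ac * np.hanning(N)))
depth = np.arange(N // 2) * np.pi / (k.max() - k.min())   # depth axis, metres
z_est = depth[np.argmax(ascan[:N // 2])]     # recovered reflector depth
```

The recovered peak lands within one depth bin (about 3.6 µm here) of the true reflector position.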

Relevance:

90.00%

Publisher:

Abstract:

Background Monocytes are implicated in the initiation and progression of the atherosclerotic plaque, contributing to its instability and rupture. Although peripheral monocytosis has been related to poor clinical outcome post ST-elevation myocardial infarction (STEMI), only scarce information is available on the mechanisms of this association. Tumour necrosis factor alpha (TNFα) is a key cytokine in the acute-phase inflammatory response, and it is predominantly produced by inflammatory macrophages. Little is known about the association of TNFα with circulating monocyte subpopulations post STEMI. Method A total of 142 STEMI patients (mean age 62±13 years; 72% male) treated with percutaneous revascularization were recruited, with blood samples obtained within the first 24 hours from onset and on days 10-14. Peripheral blood monocyte subpopulations were enumerated and characterized using flow cytometry after staining for CD14, CD16 and CCR2 and were defined as: CD14++CD16-CCR2+ (Mon1), CD14++CD16+CCR2+ (Mon2) and CD14+CD16++CCR2- (Mon3) cells. Plasma levels of TNFα were measured by enzyme-linked immunosorbent assay (ELISA, Peprotec system, UK). Major adverse cardiac events (MACE), defined as recurrent STEMI, new diagnosis of heart failure and death, were recorded at follow-up (mean 164±134 days). Results TNFα levels were significantly higher 24 hours post STEMI compared to day 14 (paired t-test, p<0.001), with day 1 levels weakly correlated with total monocyte count as well as Mon1 (Spearman's correlation, r=0.19, p=0.02 and r=0.22, p=0.01, respectively). There was no correlation between TNFα and the Mon2 or Mon3 subpopulations. TNFα levels were significantly higher in patients with a recorded MACE (n=28, Mann-Whitney test, p<0.001) (figure 1).

Relevance:

90.00%

Publisher:

Abstract:

In this work we introduce the periodic nonlinear Fourier transform (PNFT) and propose a proof-of-concept communication system based on it, using a simple waveform with known nonlinear spectrum (NS). We study the performance of the transmission system based on PNFT processing, addressing the bit-error rate (BER) as a function of propagation distance, and show the benefits of this approach. By analysing our simulation results for a system with lumped amplification, we demonstrate the potential of the new processing method.
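The fibre channel underlying NFT/PNFT studies is the nonlinear Schrödinger equation (NLSE). A minimal split-step Fourier propagation check, not the paper's actual transmission setup and with illustrative grid and step sizes, can be written as follows; the fundamental soliton sech(t) should propagate with an invariant envelope, which gives the integrator a built-in self-test.

```python
import numpy as np

# Symmetric split-step Fourier solver for the normalized focusing NLSE
#   i u_z + (1/2) u_tt + |u|^2 u = 0.
N, T = 1024, 40.0
t = np.linspace(-T / 2, T / 2, N, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(N, d=T / N)
u = 1.0 / np.cosh(t)                      # fundamental soliton input
p0 = np.sum(np.abs(u) ** 2)               # discrete power (conserved)

dz, steps = 0.01, 500                     # propagate to z = 5
half_disp = np.exp(-0.5j * w ** 2 * (dz / 2))   # half dispersion step
for _ in range(steps):
    u = np.fft.ifft(half_disp * np.fft.fft(u))
    u = u * np.exp(1j * np.abs(u) ** 2 * dz)    # nonlinear phase rotation
    u = np.fft.ifft(half_disp * np.fft.fft(u))

peak = np.max(np.abs(u))                  # should stay ~1 for the soliton
p1 = np.sum(np.abs(u) ** 2)               # power after propagation
```

Each sub-step is a pure phase multiplication in one domain, so the discrete power is conserved to machine precision, a useful sanity check before adding loss and lumped amplification.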

Relevance:

90.00%

Publisher:

Abstract:

Much of the bridge stock on major transport links in North America and Europe was constructed in the 1950s and 1960s and has since deteriorated or is carrying loads far in excess of the original design loads. Structural Health Monitoring (SHM) systems can provide valuable information on bridge capacity, but the application of such systems is currently limited by access and bridge type. This paper investigates the use of computer vision systems for SHM. A series of field tests was carried out to test the accuracy of displacement measurements using contactless methods. A video image of each test was processed using a modified version of the optical flow tracking method to track displacement. These results were validated against an established measurement method using linear variable differential transformers (LVDTs). The displacements calculated by the algorithm compared accurately with the validation measurements, agreeing within 2% of the verified LVDT measurements; a number of post-processing methods were then applied to attempt to reduce this error.
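The paper's tracking uses a modified optical-flow method on real footage; as a simpler self-contained illustration of contactless displacement measurement, phase correlation recovers the integer-pixel shift between two frames. The frames below are synthetic stand-ins, not the field-test video.

```python
import numpy as np

def phase_correlation_shift(a, b):
    """Shift (dy, dx) such that b equals a displaced (circularly) by (dy, dx)."""
    R = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
    R /= np.abs(R) + 1e-12                # keep only the phase (cross-power spectrum)
    corr = np.abs(np.fft.ifft2(R))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap FFT indices to signed shifts
    dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
    dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
    return int(dy), int(dx)

rng = np.random.default_rng(0)
frame0 = rng.random((64, 64))                    # stand-in for a bridge frame
frame1 = np.roll(frame0, (3, -5), axis=(0, 1))   # "bridge" moved 3 px down, 5 px left
shift = phase_correlation_shift(frame0, frame1)
```

Real SHM pipelines refine such estimates to sub-pixel precision and convert pixels to millimetres via the camera calibration, which is where the ~2% agreement with the LVDTs is assessed.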

Relevance:

90.00%

Publisher:

Abstract:

There have been over 3000 bridge weigh-in-motion (B-WIM) installations in 25 countries worldwide; this has led to vast improvements in the post-processing of B-WIM systems since their introduction in the 1970s. This paper introduces a new low-power B-WIM system using fibre optic sensors (FOS). The system consisted of a series of FOS attached to the soffit of an existing integral bridge with a single span of 19 m. The site selection criteria and the full installation process are detailed in the paper. A calibration method using live traffic at the bridge site was adopted, and based on this calibration the accuracy of the system was determined.

Relevance:

90.00%

Publisher:

Abstract:

We consider time-dependent convection-diffusion-reaction equations in time-dependent domains, where the motion of the domain boundary is known. The temporal evolution of the domain is handled by the ALE formulation, which remedies the drawbacks of the classical Eulerian and Lagrangian viewpoints. The position of the boundary and its velocity are extended into the interior of the domain in such a way that strong mesh deformations are prevented. As higher-order time discretizations, continuous Galerkin-Petrov methods (cGP) and discontinuous Galerkin methods (dG) are applied to problems in time-dependent domains. Furthermore, the C¹-continuous Galerkin-Petrov method and the C⁰-continuous Galerkin method are presented; their solutions can be obtained, also in time-dependent domains, from the solution of the cGP problem or the dG problem by a simple, unified post-processing. For problems on fixed domains with convection and reaction terms that are constant in time, stability results and optimal error estimates for the post-processed solutions of the cGP and dG methods are given. For time-dependent convection-diffusion-reaction equations in time-dependent domains, we present conservative and non-conservative formulations, with particular attention to the treatment of the time derivative and the mesh velocity. Stability and optimal error estimates for the conservative and non-conservative formulations semi-discretized in time are presented. Finally, the fully discretized problem is considered, where a finite element method is used for the spatial discretization of the convection-diffusion-reaction equations in time-dependent domains within the ALE framework; in addition, a local projection stabilization (LPS) is employed to account for the dominance of convection.
It is further investigated numerically how the approximation of the domain velocity affects the accuracy of the time discretization methods.
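As a small illustration of the order of accuracy at stake in such time discretizations: for the linear test equation u' = λu, the cGP(1) method with exact quadrature reduces to the trapezoidal (Crank-Nicolson) update, which is second-order accurate. The sketch below verifies the convergence order numerically; it is a stand-in for the error estimates discussed above, not the thesis's actual experiments.

```python
import numpy as np

def cgp1_linear(lam, u0, T, n):
    # cGP(1) on u' = lam*u with exact quadrature:
    #   (u_{k+1} - u_k)/dt = lam*(u_k + u_{k+1})/2
    dt = T / n
    growth = (1 + lam * dt / 2) / (1 - lam * dt / 2)
    u = u0
    for _ in range(n):
        u *= growth
    return u

lam, T = -1.0, 1.0
exact = np.exp(lam * T)
e_coarse = abs(cgp1_linear(lam, 1.0, T, 20) - exact)
e_fine = abs(cgp1_linear(lam, 1.0, T, 40) - exact)
observed_order = np.log2(e_coarse / e_fine)   # ~2 for a second-order scheme
```

Halving the step size reduces the error by a factor of about four, confirming second order; the post-processed C¹/C⁰ solutions mentioned above raise this order further.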

Relevance:

90.00%

Publisher:

Abstract:

The purpose of this bachelor's thesis was to investigate the criticality of the fillet weld root. The topic arose from observations made during tests on structural hollow sections. The thesis reviews the design methods for fillet welds and the background research on how they are applied in practice to high-strength steels. The research methods used, and how methodological triangulation was achieved, are presented. The research question was whether the strength design of the welds is adequate. The investigations were carried out on statically loaded fillet welds. A laboratory test specimen and an FEM model of the fillet-welded pieces were produced, and their results were compared. In the laboratory test, DIC (digital image correlation) measurement was used, which allowed post-processing and the extraction of the desired data points. In the calculations, the largest stress concentrations occurred at the weld, but in the tensile test the specimen failed at the fusion line and at the weld toe of the loading-lug attachment weld. At this point the material model was judged insufficient, because no parameters had been defined for the heat-affected zone.

Relevance:

90.00%

Publisher:

Abstract:

One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "Super-Earths", planets with sizes intermediate between those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets ranging from smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we do not know the planets orbiting the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in those directions, through two instrumentation projects.

The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets, considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall, we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory with a dedicated observing program is up to the task of discovering such planets.
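The sensitivity requirements behind this analysis are set by the stellar reflex velocity. A quick calculation of the radial-velocity semi-amplitude K from the standard two-body formula (constants rounded; not tied to the thesis's specific survey models) recovers the familiar benchmarks:

```python
import math

# K = (2*pi*G/P)^(1/3) * m_p*sin(i) / (m_star + m_p)^(2/3) / sqrt(1 - e^2)
G = 6.674e-11                  # m^3 kg^-1 s^-2
M_SUN, M_JUP, M_EARTH = 1.989e30, 1.898e27, 5.972e24
YEAR = 3.156e7                 # seconds

def rv_semi_amplitude(P_years, m_planet, m_star, e=0.0, sin_i=1.0):
    """Stellar reflex radial-velocity semi-amplitude in m/s."""
    P = P_years * YEAR
    return ((2 * math.pi * G / P) ** (1 / 3) * m_planet * sin_i
            / (m_star + m_planet) ** (2 / 3) / math.sqrt(1 - e ** 2))

K_jupiter = rv_semi_amplitude(11.86, M_JUP, M_SUN)   # ~12.5 m/s
K_earth = rv_semi_amplitude(1.0, M_EARTH, M_SUN)     # ~0.09 m/s
```

Jupiter pulls the Sun around at roughly 12.5 m/s, comfortably within reach of a modest dedicated spectrograph, while an Earth analog at ~9 cm/s sits below typical instrumental noise floors, which is why those floors dominate the habitable-zone trade-offs above.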

We present just such an observatory in the second chapter, called the "MINiature Exoplanet Radial Velocity Array," or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system etendue, as well as keeping costs and time to deployment down. We present calculations of the expected planet yield, and data showing the system performance from our testing and development of the system at Caltech's campus. We also present the motivation, design, and performance of a fiber coupling system for the array, critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, operational at Mt. Hopkins observatory in Arizona.

The second part of this thesis concerns a very different method of planet detection, direct imaging, which involves discovery and characterization of planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study their atmospheres, formation histories, and compositions. Direct imaging is extremely challenging, as it requires a high performance adaptive optics system to unblur the point-spread function of the parent star through the atmosphere, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common path "speckle" aberrations that can overwhelm any planetary companions.

To this end, we present the "Stellar Double Coronagraph," or SDC, a flexible coronagraphic platform for use with the 200" Hale telescope. It has two focal and pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromedae, which had been previously hypothesized but never seen.

A further chapter presents a wavefront control code developed for the instrument, using the technique of "speckle nulling," which can remove optical aberrations from the system using the deformable mirror of the adaptive optics system. This code allows for improved contrast and inner working angles, and was written in a modular style so as to be portable to other high contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments on the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in less than ten iterations.
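The measurement problem at the heart of speckle nulling is that the camera only sees intensity, so the complex speckle field must be inferred by probing with the deformable mirror. The single-speckle toy model below uses a generic four-phase pairwise-probe scheme; it is illustrative and not necessarily the exact algorithm implemented in the instrument's code.

```python
import numpy as np

# Unknown complex speckle field E; the DM injects probes A*exp(i*theta).
rng = np.random.default_rng(0)
E = 0.7 * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi))   # unknown (arbitrary value)
A = 0.5                                                # known probe amplitude

def intensity(field):
    return np.abs(field) ** 2

# I(theta) = |E|^2 + A^2 + 2A*(Re(E)*cos(theta) + Im(E)*sin(theta)),
# so differencing opposite probes isolates the real and imaginary parts.
thetas = (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)
I = [intensity(E + A * np.exp(1j * th)) for th in thetas]
E_est = (I[0] - I[2]) / (4 * A) + 1j * (I[1] - I[3]) / (4 * A)

before = intensity(E)            # speckle intensity before correction
after = intensity(E - E_est)     # DM commands the anti-speckle -E_est
```

In this noiseless scalar model the estimate is exact and the speckle cancels completely; in practice photon noise and DM quantization limit each iteration, which is why the real loop converges over several iterations rather than one.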

One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting for optical aberrations in the focal plane. Finally, we derive how it is possible to use the same hardware to distinguish companions from speckle errors using the principles of optical coherence. We present results observing the brown dwarf HD 49197b, demonstrating the ability to detect it despite it being buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.

Relevance:

90.00%

Publisher:

Abstract:

Turbulent plasmas inside tokamaks are modeled and studied using guiding center theory, applied to charged test particles, in a Hamiltonian framework. The equations of motion for the guiding center dynamics, under the conditions of a constant and uniform magnetic field and a turbulent electrostatic field, are derived by averaging over the fast gyroangle, to first and second order in the guiding center potential, using invertible changes of coordinates such as Lie transforms. The equations of motion are then made dimensionless, exploiting the temporal and spatial periodicities of the model chosen for the electrostatic potential. They are implemented numerically in Python, using the fast Fourier transform and its inverse. Improvements to the original Python scripts are made, notably the introduction of a power-law curve fit to account for anomalous diffusion, the possibility of integrating the equations in two steps to save computational time by removing trapped trajectories, and the implementation of multicolored stroboscopic plots to distinguish between trapped and untrapped guiding centers. Post-processing of the results is done in MATLAB. The values and ranges of the parameters chosen for the simulations are selected on the basis of numerous simulations used as feedback tools; in particular, a recurring value for the threshold used to detect trapped trajectories is evidenced. The effects of the Larmor radius, the amplitude of the guiding center potential and the intensity of its second-order term are studied by analyzing the diffusive regimes, the stroboscopic plots and the shape of the guiding center potentials. The main result is the identification of cases of anomalous diffusion depending on the values of the parameters (mostly the Larmor radius). The transitions between diffusive regimes are identified, the presence of highways for the super-diffusive trajectories is unveiled, and the influence of the charge on these transitions from diffusive to ballistic behavior is analyzed.
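The power-law fit used to classify diffusive regimes amounts to a linear regression of the mean-squared displacement in log-log space, MSD(t) ~ D·t^α, with α ≈ 1 for normal diffusion, α > 1 super-diffusive (ballistic as α → 2) and α < 1 sub-diffusive. A minimal sketch on synthetic data (the exponent and noise level are assumed, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)
alpha_true = 1.5                        # assumed super-diffusive exponent
t = np.arange(1.0, 201.0)
msd = 0.8 * t ** alpha_true * (1.0 + 0.02 * rng.standard_normal(t.size))

# Fit log(MSD) = alpha*log(t) + log(D): the slope is the diffusion exponent
alpha_fit, log_D = np.polyfit(np.log(t), np.log(msd), 1)
regime = "super-diffusive" if alpha_fit > 1.1 else "normal/sub-diffusive"
```

Applied to ensembles of guiding-center trajectories, the fitted exponent is what locates the transitions between diffusive regimes mentioned above.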

Relevance:

90.00%

Publisher:

Abstract:

In recent years, developed countries have turned their attention to clean and renewable energy, such as wind energy and wave energy, which can be converted into electrical power. Companies and academic groups worldwide are investigating several wave energy concepts today. Accordingly, this thesis studies the numerical simulation of the dynamic response of wave energy converters (WECs) subjected to ocean waves. The study considers a two-body point absorber (2BPA) and an oscillating surge wave energy converter (OSWEC). The first aim is to mesh the bodies of these WECs and calculate their hydrostatic properties using the axiMesh.m and Mesh.m functions provided by NEMOH. The second aim is to calculate the first-order hydrodynamic coefficients of the WECs using the NEMOH BEM solver and to study the ability of this method to eliminate irregular frequencies. The third aim is to generate a *.h5 file for the 2BPA and OSWEC devices, in which all the hydrodynamic data are included; BEMIO, a pre- and post-processing tool developed for WEC-Sim, is used in this study to create the *.h5 files. The final goal is to run the Wave Energy Converter SIMulator (WEC-Sim) to simulate the dynamic responses of the WECs studied in this thesis and estimate their power performance at different sites in the Mediterranean Sea and the North Sea. The hydrodynamic data obtained by the NEMOH BEM solver for the 2BPA and OSWEC devices are imported into WEC-Sim using BEMIO. Lastly, the power matrices and annual energy production (AEP) of the WECs are estimated for sites in the Sea of Sicily, Sea of Sardinia, Adriatic Sea, Tyrrhenian Sea, and the North Sea. Overall, NEMOH and WEC-Sim remain among the most practical tools for estimating the power generation of WECs numerically.

Relevance:

80.00%

Publisher:

Abstract:

Diabetic retinopathy (DR) is a complication of diabetes that can lead to blindness if not detected early. Automated screening algorithms have the potential to improve identification of patients who need further medical attention. However, the identification of lesions must be accurate to be useful for clinical application. The bag-of-visual-words (BoVW) algorithm employs a maximum-margin classifier in a flexible framework that is able to detect the most common DR-related lesions, such as microaneurysms, cotton-wool spots and hard exudates. BoVW bypasses the need for pre- and post-processing of the retinographic images, as well as the need for specific ad hoc techniques for the identification of each type of lesion. An extensive evaluation of the BoVW model was performed using three large retinographic datasets (DR1, DR2 and Messidor) with different resolutions, collected by different healthcare personnel. The results demonstrate that the BoVW classification approach can identify different lesions within an image without having to utilize a different algorithm for each lesion, reducing processing time and providing a more flexible diagnostic system. Our BoVW scheme is based on sparse low-level feature detection with a Speeded-Up Robust Features (SURF) local descriptor, and on mid-level features based on semi-soft coding with max pooling. The best BoVW representation for retinal image classification achieved an area under the receiver operating characteristic curve (AUC-ROC) of 97.8% (exudates) and 93.5% (red lesions), applying a cross-dataset validation protocol. In assessing the accuracy of detecting cases that require referral within one year, the sparse extraction technique associated with semi-soft coding and max pooling obtained an AUC of 94.2 ± 2.0%, outperforming current methods.
These results indicate that, for retinal image classification tasks in clinical practice, BoVW equals and in some instances surpasses the results obtained using dense detection (widely believed to be the best choice in many vision problems) for the low-level descriptors.
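The BoVW pipeline (local descriptors, codebook, coding, pooling) can be sketched in outline. Everything below is an illustrative stand-in, not the paper's implementation: random vectors replace SURF descriptors, plain k-means replaces the dictionary-learning step, and a Gaussian similarity stands in for the semi-soft coding.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    # Plain k-means to learn a codebook of "visual words"
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def bovw_feature(descriptors, codebook):
    # Soft similarity of each descriptor to each word, then max pooling
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    codes = np.exp(-d2)
    return codes.max(axis=0)     # one fixed-length vector per image

train_descriptors = rng.standard_normal((500, 64))   # pooled SURF-like, 64-D
codebook = kmeans(train_descriptors, k=32)
image_feature = bovw_feature(rng.standard_normal((80, 64)), codebook)
```

The pooled `image_feature` (one entry per visual word, regardless of how many descriptors the image yields) is what feeds the maximum-margin classifier in the framework described above.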

Relevance:

80.00%

Publisher:

Abstract:

Low-density nanostructured foams are often limited in applications by their low mechanical and thermal stability. Here we report an approach to building the structural units of three-dimensional (3D) foams using hybrid two-dimensional (2D) atomic layers made of stacked graphene oxide layers reinforced with conformal hexagonal boron nitride (h-BN) platelets. The ultra-low-density (1/400 the density of graphite) 3D porous structures are scalably synthesized using a solution-processing method. A layered 3D foam structure forms due to the presence of h-BN, and significant improvements in mechanical properties are observed for the hybrid foam structures, over a range of temperatures, compared with pristine graphene oxide or reduced graphene oxide foams. It is found that domains of h-BN layers on the graphene oxide framework help to reinforce the 2D structural units, providing the observed improvement in the mechanical integrity of the 3D foam structure.