948 results for Frequency Modulated Signals, Parameter Estimation, Signal-to-Noise-Ratio, Simulations


Relevance: 100.00%

Abstract:

Swallowing dynamics involves the coordination and interaction of several muscles and nerves that allow correct food transport from mouth to stomach without laryngotracheal penetration or aspiration. Clinical swallowing assessment depends on the evaluator's knowledge of the anatomic structures and neurophysiological processes involved in swallowing. Any alteration in these steps is termed oropharyngeal dysphagia, which may have many causes, such as neurological or mechanical disorders. Videofluoroscopy of swallowing is presently considered the best exam to objectively assess the dynamics of swallowing, but it must be conducted under certain restrictions, owing to the patient's exposure to radiation, which limits periodic repetition for monitoring swallowing therapy. Another method, called cervical auscultation, is a promising new diagnostic tool for the assessment of swallowing disorders. The potential to diagnose dysphagia noninvasively by assessing the sounds of swallowing is a highly attractive option for the dysphagia clinician. Even so, the captured sound contains noise, which can hamper the evaluator's decision. Accordingly, the present paper proposes the use of a filter to improve the quality of the audible sound and facilitate interpretation of the examination. The wavelet denoising approach is used to decompose the noisy signal. The signal-to-noise ratio was evaluated to quantify the results of the proposed methodology. (C) 2007 Elsevier Ltd. All rights reserved.
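As a hedged illustration of the approach (a sketch, not the paper's implementation; the wavelet family, threshold rule, and synthetic signal are all assumptions), wavelet denoising with soft thresholding and an SNR check can be prototyped with PyWavelets:

```python
# A minimal sketch: wavelet denoising of a noisy burst-like signal plus SNR
# evaluation, using PyWavelets. Wavelet and threshold rule are assumptions.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db8", level=5):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    # Universal threshold estimated from the finest detail level (Donoho-Johnstone).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def snr_db(clean, estimate):
    noise = clean - estimate
    return 10 * np.log10(np.sum(clean**2) / np.sum(noise**2))

# Synthetic example: a burst-like "swallow sound" buried in white noise.
t = np.linspace(0, 1, 8000)
clean = np.exp(-((t - 0.5) ** 2) / 0.002) * np.sin(2 * np.pi * 400 * t)
noisy = clean + 0.2 * np.random.randn(t.size)
den = wavelet_denoise(noisy)
print(f"SNR before: {snr_db(clean, noisy):.1f} dB, after: {snr_db(clean, den):.1f} dB")
```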

Relevance: 100.00%

Abstract:

Absorbance detection in capillary electrophoresis (CE) offers excellent mass sensitivity, but poor concentration detection limits owing to the very small injection volumes (normally 1 to 10 nL). This aspect can be a limiting factor in the applicability of CE/UV to detect species at trace levels, particularly pesticide residues. In the present work, the optical path length of an on-column detection cell was increased through a proper connection of the column (75 µm i.d.) to a capillary detection cell of 180 µm optical path length in order to improve detectability. It is shown that the cell with an extended optical path length yields a significant gain in signal-to-noise ratio. The effect of the increase in optical path length was evaluated for six pesticides, namely carbendazim, thiabendazole, imazalil, procymidone, triadimefon, and prochloraz. The resulting optical enhancement of the detection cell provided detection limits of ca. 0.3 µg/mL for the studied compounds, thus enabling residue analysis by CE/UV.
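For context, the gain follows from standard Beer-Lambert reasoning (a worked aside, not spelled out in the abstract): absorbance scales linearly with the optical path length, so extending it from the 75 µm column bore to a 180 µm cell predicts roughly a 2.4-fold signal gain.

```latex
A = \varepsilon\, l\, c
\qquad\Rightarrow\qquad
\frac{A_{\text{cell}}}{A_{\text{column}}}
  = \frac{l_{\text{cell}}}{l_{\text{column}}}
  = \frac{180\ \mu\text{m}}{75\ \mu\text{m}}
  \approx 2.4
```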

Relevance: 100.00%

Abstract:

Recently, genetically encoded optical indicators have emerged as noninvasive tools of high spatial and temporal resolution used to monitor the activity of individual neurons and specific neuronal populations. The increasing number of new optogenetic indicators, together with the absence of comparisons under identical conditions, has made it difficult to choose the most appropriate protein for a given experimental design. Therefore, the purpose of our study was to compare three recently developed reporter proteins: the calcium indicators GCaMP3 and R-GECO1, and the voltage indicator VSFP butterfly1.2. These probes were expressed in cultured hippocampal neurons, which were subjected to patch-clamp recordings and optical imaging. The three groups (each expressing one protein) exhibited similar values of membrane potential (in mV, GCaMP3: -56 ±8.0; R-GECO1: -57 ±2.5; VSFP: -60 ±3.9, p = 0.86); however, the group of neurons expressing VSFP showed a lower average input resistance than the other groups (in MΩ, GCaMP3: 161 ±18.3; R-GECO1: 128 ±15.3; VSFP: 94 ±14.0, p = 0.02). Each neuron was subjected to current injections at different frequencies (10 Hz, 5 Hz, 3 Hz, 1.5 Hz, and 0.7 Hz) and its fluorescence responses were recorded over time. In our study, only 26.7% (4/15) of the neurons expressing VSFP showed a detectable fluorescence signal in response to action potentials (APs). The average signal-to-noise ratio (SNR) obtained in response to five spikes (at 10 Hz) was small (1.3 ± 0.21); however, the rapid kinetics of the VSFP allowed discrimination of APs as individual peaks, with detection of 53% of the evoked APs. Frequencies below 5 Hz and subthreshold signals were undetectable due to high noise. In contrast, the calcium indicators showed the greatest change in fluorescence under the same protocol (five APs at 10 Hz). Among the GCaMP3-expressing neurons, 80% (8/10) exhibited a signal, with an average SNR of 21 ±6.69 (soma), while for the R-GECO1 neurons, 50% (2/4) had a signal, with a mean SNR of 52 ±19.7 (soma). For protocols at 10 Hz, 54% of the evoked APs were detected with GCaMP3 and 85% with R-GECO1. APs were detectable at all the analyzed frequencies, and fluorescence signals were detected from subthreshold depolarizations as well. Because GCaMP3 was the most likely to yield a fluorescence signal, and with high SNR, some experiments were performed only with this probe. We demonstrate that GCaMP3 is effective in detecting synaptic inputs (involving Ca2+ influx) with high spatial and temporal resolution. Differences were also observed between the SNR values resulting from evoked APs and those from spontaneous APs. In recordings of groups of cells, GCaMP3 showed clear discrimination between activated and silent cells, revealing itself as a potential tool in studies of neuronal synchronization. Thus, our results indicate that the presently available calcium indicators allow detailed studies of neuronal communication, ranging from individual dendritic spines to the investigation of synchrony events in genetically defined neuronal networks. In contrast, studies employing VSFPs represent a promising technology for monitoring neural activity and, although still to be improved, they may become more appropriate than calcium indicators, since neurons operate on a time scale faster than calcium events can follow.
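A minimal sketch of the SNR figure quoted above (hypothetical trace, frame rate, and transient shape; not the study's analysis pipeline): ΔF/F and a peak-over-baseline-noise SNR.

```python
# A minimal sketch: dF/F and a peak-based SNR for a fluorescence response
# to a burst of APs. All numbers are illustrative assumptions.
import numpy as np

def dff(trace, baseline_idx):
    f0 = trace[baseline_idx].mean()
    return (trace - f0) / f0

def peak_snr(dff_trace, baseline_idx):
    # SNR as peak response over the standard deviation of the baseline.
    return dff_trace.max() / dff_trace[baseline_idx].std()

fs = 100.0                                   # imaging rate, frames/s (assumed)
t = np.arange(0, 5, 1 / fs)
trace = 1000 + 5 * np.random.randn(t.size)   # baseline fluorescence + noise
trace[t > 1] += 200 * np.exp(-(t[t > 1] - 1) / 0.8)  # Ca-indicator-like transient

base = t < 1
resp = dff(trace, base)
print(f"peak dF/F = {resp.max():.2f}, SNR = {peak_snr(resp, base):.1f}")
```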

Relevance: 100.00%

Abstract:

The multipath effect degrades differential and relative positioning, even over short baselines. It is therefore necessary to detect this effect, to quantify the error it causes and, above all, to remove it. This paper analyzes and compares several components useful for detecting multipath: the signal-to-noise ratio (SNR); the MP1 and MP2 values produced by the TEQC software, which indicate the multipath level on the L1 and L2 carriers; the repeatability of multipath on consecutive days; and the elevation angle and azimuth of the satellites. For this purpose, an experiment was carried out comparing these components in the presence and absence of reflecting objects that cause multipath. Clear multipath repeatability appears not only in the residuals but also in the SNR, MP1, and MP2 measures, with correlations reaching up to 99%. To reduce at least the high-frequency multipath effect, Multi-Resolution Analysis using wavelets is applied to the double-difference (DD) measurements. Statistical tests indicate improved results and, above all, greater reliability in the ambiguity resolution, with improvements of up to 49% in the ratio test relative to not applying the proposed method.
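The day-to-day repeatability exploited above comes from the GPS constellation geometry repeating roughly one sidereal day later (about 236 s short of 24 h). A minimal sketch (synthetic residuals; the sampling rate and shift handling are assumptions) of checking that repeatability by correlating sidereal-aligned residuals:

```python
# A minimal sketch: day-to-day multipath repeatability measured by correlating
# DD residuals of consecutive days after the ~236 s sidereal shift.
import numpy as np

fs = 1.0                          # 1 Hz residual series (assumed)
shift = 236                       # GPS geometry repeats ~236 s earlier each day
n = 6000
t = np.arange(n) / fs

multipath = np.sin(2 * np.pi * t / 300.0) * np.exp(-t / 4000.0)  # slow MP-like term
day1 = multipath + 0.005 * np.random.randn(n)
day2 = np.roll(multipath, -shift) + 0.005 * np.random.randn(n)   # same MP, shifted

# Re-align day 2 by the sidereal offset, then measure the correlation
# (edges trimmed to avoid the wrap-around introduced by np.roll).
aligned = np.roll(day2, shift)
r = np.corrcoef(day1[shift:-shift], aligned[shift:-shift])[0, 1]
print(f"day-to-day correlation after sidereal alignment: {r:.2f}")
```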

Relevance: 100.00%

Abstract:

The aim of this study was to validate Alternate Current Biosusceptometry (ACB) for monitoring gastric contractions in rats. In vitro data were obtained to establish the relationship between the ACB and strain-gauge (SG) signal amplitudes. In vivo experiments were performed on rats with magnetic markers and SGs previously implanted under the gastric serosa. The effects of the prandial state on gastric motility profiles were obtained. The correlation between in vitro signal amplitudes was strong (R = 0.989). The temporal cross-correlation between the ACB and SG signal amplitudes was higher in the postprandial than in the fasting state. Irregular signal profiles, low contraction amplitudes, and smaller signal-to-noise ratios explained the poor correlation for fasting-state recordings. The contraction frequencies using ACB were 0.068 ± 0.007 Hz (postprandial) and 0.058 ± 0.007 Hz (fasting), and those using SG were 0.066 ± 0.006 Hz (postprandial) and 0.059 ± 0.008 Hz (fasting) (P < 0.003). When a magnetic tracer was ingested, there was a strong correlation and a small phase difference between the techniques. We conclude that ACB provides an accurate and sensitive technique for studies of GI motility in the rat. © 2010 IEEE.
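A minimal sketch (synthetic traces and an assumed sampling rate; not the paper's processing) of recovering the dominant contraction frequency from ACB and SG recordings with a Welch periodogram:

```python
# A minimal sketch: estimating the dominant gastric contraction frequency
# from two synthetic recordings via the Welch power spectral density.
import numpy as np
from scipy.signal import welch

fs = 10.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 600, 1 / fs)               # 10 min recording
acb = np.sin(2 * np.pi * 0.068 * t) + 0.5 * np.random.randn(t.size)
sg = np.sin(2 * np.pi * 0.068 * t + 0.4) + 0.5 * np.random.randn(t.size)

def dominant_freq(x, fs):
    f, pxx = welch(x, fs=fs, nperseg=4096)
    band = (f > 0.01) & (f < 0.5)            # plausible band for rat stomach
    return f[band][np.argmax(pxx[band])]

print(f"ACB: {dominant_freq(acb, fs):.3f} Hz, SG: {dominant_freq(sg, fs):.3f} Hz")
```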

Relevance: 100.00%

Abstract:

The CRS seismic stacking method simulates ZO (zero-offset) seismic sections from multi-coverage data, independently of the macro-velocity model. For 2-D media, the stacking traveltime function depends on three parameters, namely: the emergence angle of the normal reflection ray (relative to the surface normal) and the curvatures of the wavefronts associated with the hypothetical waves known as NIP and Normal. CRS stacking consists of summing the amplitudes of the seismic traces in multi-coverage data along the surface defined by the CRS stacking traveltime function that best fits the data. The CRS stacking result is assigned to points of a pre-defined grid in the ZO section, thereby simulating a ZO seismic section. This means that, for each point of the ZO section, the triplet of optimal parameters producing maximum coherence among the seismic reflection events must be estimated. This thesis presents formulas for the 2-D CRS method and for the NMO velocity that take the topography of the measurement surface into account. The algorithm is based on a three-step strategy for optimizing the parameters of the CRS formula: 1) search for two parameters, the emergence angle and the NIP-wave curvature, using global optimization; 2) search for one parameter, the N-wave curvature, using global optimization; and 3) search for all three parameters using local optimization to refine the parameters estimated in the previous steps. The Simulated Annealing (SA) algorithm is used in the first and second steps, and the Variable Metric (VM) algorithm in the third. For a measurement surface with smooth topographic variations, the curvature of this surface was incorporated into the 2-D CRS stacking algorithm, with application to synthetic data. The result was a simulated ZO section of high quality when compared with the ZO section obtained by forward modeling, with a high signal-to-noise ratio, along with the estimate of the traveltime-parameter triplet. A sensitivity analysis of the new CRS traveltime function with respect to the curvature of the measurement surface was carried out. The results showed that the CRS traveltime function is most sensitive at midpoints far from the central point and at large offsets. The NMO velocity expressions presented were applied to estimate the velocities and depths of the reflectors for a 2-D model with smooth topography. For the inversion of these reflector velocities and depths, a Dix-type inversion algorithm was considered. The NMO velocity for a curved measurement surface allows these reflector velocities and depths to be estimated much better than NMO velocities referred to planar surfaces. An approach to CRS stacking in the 3-D case is also presented, in which the traveltime function depends on eight parameters. Five search strategies for these parameters are discussed. The combination of two of these strategies (the three-traveltime-approximation strategy and the arbitrary-configuration-and-curvature strategy) was successfully applied to 3-D CRS stacking of synthetic and real data.
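For reference, a commonly cited hyperbolic form of the 2-D CRS stacking traveltime for a planar measurement surface is given below (the thesis generalizes this to curved topography); here x_0 is the central point, x_m the midpoint, h the half-offset, α the emergence angle, v_0 the near-surface velocity, and R_NIP, R_N the radii of curvature of the NIP and Normal waves:

```latex
t^2(x_m, h) =
\left[\, t_0 + \frac{2\sin\alpha}{v_0}\,(x_m - x_0) \right]^2
+ \frac{2\, t_0 \cos^2\alpha}{v_0}
  \left[ \frac{(x_m - x_0)^2}{R_N} + \frac{h^2}{R_{\mathrm{NIP}}} \right]
```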

Relevance: 100.00%

Abstract:

This thesis describes the application of time-series analysis to well logs. The technique makes it possible to evaluate log repeatability and vertical resolution, and to determine the ideal sampling interval and logging speed for different logs. A comparison among three wells, based on the same type of log, is also made. To this end, in the processing sequence used, the data were kept in a single domain whenever the total number of samples (N) per log did not exceed 2048. From these data, the mean value of the samples and any algebraic polynomial trend embedded in them were first removed. The following were then applied, in this order: a cosine taper, a high-pass filter, a Hanning window, the computation of the coherence function, the phase spectrum, the signal-to-noise ratio, and the power spectra of the signal and of the noise. For the coherence function, the 50%, 90% and 95% confidence levels had to be computed. The first level was needed to determine the vertical resolution of some logs; the others provide information on where those levels lie relative to the computed coherence. The phase spectrum was computed to obtain an additional piece of information about the logs under analysis, namely, whether or not there is a relative depth shift between the main section and the repeat section. The signal-to-noise ratio was computed to allow comparison, as an evaluation criterion for the various log types, with the coherence and the power-spectra computations. The signal and noise power spectra were computed as one more parameter for evaluating the repeat section since, in principle, the signal and noise power spectra of the repeat section should equal the corresponding spectra of the main section. The data used in applying the proposed methodology were provided by PETROBRÁS and come from four wells of the onshore Potiguar Basin. For reasons of corporate confidentiality, the wells were identified as wells A, B, C and D. The evaluation of repeatability among different log types indicates that, for well A, the micro-spherical log (MSFL) repeats better than the neutron porosity log (CNL), which in turn repeats better than the standard gamma-ray log (GR). For the logs of well D, reducing the logging speed from 550 m/h to 275 m/h is advantageous only for the neutron porosity log. The 920 m/h logging speed used to obtain the logs of well C is entirely inadequate for the resistivity logs (MSFL, ILD and ILM). Reducing the sampling interval from 0.20 m to 0.0508 m for the gamma-ray and neutron porosity logs, and to 0.0254 m for the density log, gives good results when applied to well D. The vertical-resolution computation indicates, for the neutron porosity log, a qualitative superiority over the standard gamma-ray log, both from well A. For well C, the micro-spherical log shows a vertical resolution of the same order of magnitude as that of the gamma-ray log of well B, which further underscores the inadequacy of the logging speed used in well C.
For well D, the vertical-resolution computation indicates a qualitative superiority of the high-resolution density log over the high-resolution gamma-ray log. The comparison among wells A, B and D, carried out through their standard neutron porosity logs, shows that the presence of random noise is, in general, directly linked to formation porosity: higher porosity indicates more noise and, consequently, a qualitative degradation of the log obtained. The phase-spectrum analysis of each log indicates a depth shift between the main and repeat sections of all logs of well C, which was later confirmed by overlaying the sections.
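A minimal sketch (synthetic logs; the sampling interval and the SNR relation are assumptions) of the coherence-based repeatability check between a main and a repeat section, using SciPy. For two noisy copies of a common signal with equal, independent noise, the magnitude-squared coherence Cxy relates to the SNR via sqrt(Cxy) = SNR / (1 + SNR).

```python
# A minimal sketch: magnitude-squared coherence between a main and a repeat
# log section, as used to assess repeatability and a frequency-resolved SNR.
import numpy as np
from scipy.signal import coherence, detrend

fs = 1 / 0.20                         # samples per metre (0.20 m sampling interval)
depth = np.arange(2048) / fs
signal = np.sin(2 * np.pi * 0.05 * depth)            # stand-in formation signal
main = signal + 0.3 * np.random.randn(depth.size)    # main pass
repeat = signal + 0.3 * np.random.randn(depth.size)  # repeat pass

# Remove mean/trend, then estimate coherence with a Hanning window.
f, Cxy = coherence(detrend(main), detrend(repeat), fs=fs, window="hann",
                   nperseg=256)

g = np.sqrt(Cxy)                       # gamma = SNR / (1 + SNR)
snr = g / np.clip(1 - g, 1e-12, None)  # frequency-resolved SNR estimate
print(f"peak coherence {Cxy.max():.2f}, implied SNR {snr.max():.1f}")
```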

Relevance: 100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 100.00%

Abstract:

The best way to detect breast cancer is screening mammography. Mammography units are dedicated systems that require rigorous quality control in order to produce good-quality images and detect the disease early. Digital equipment is relatively new on the market, and there is no national regulation for quality control covering the several types of digital detectors. This study set out to compare the quality-control test manuals provided by two manufacturers of digital mammography equipment, and to compare both against the European guidelines for quality assurance in breast cancer screening and diagnosis (2006). The units studied were the Senographe 2000D from General Electric (GE) and the Hologic Selenia Lorad; both are digital mammography units, the GE unit using an indirect digital detector system and the Hologic unit a direct one. Physical image parameters were studied, such as spatial resolution, contrast resolution, noise, signal-to-noise ratio, contrast-to-noise ratio, and the modulation transfer function. A study of the importance of quality control and of the requirements for implementing a Quality Assurance Program was then carried out. Data were collected to compare the manuals, checking which tests each manufacturer prescribes and the minimum frequency at which they should be performed. The tests were performed by different methodologies and the results compared. The examined tests were: breast entrance skin dose, mean glandular dose, contrast-to-noise ratio, signal-to-noise ratio, automatic exposure control and automatic density control, modulation transfer function, equipment resolution, homogeneity, and ghosting.
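A minimal sketch (simulated phantom image and hypothetical ROIs; not either manufacturer's prescribed procedure) of the ROI-based SNR and CNR figures used in such tests:

```python
# A minimal sketch: ROI-based SNR and CNR from a simulated phantom image
# containing a contrast disc over a uniform background.
import numpy as np

rng = np.random.default_rng(0)
background = 500 + 20 * rng.standard_normal((200, 200))
disc = background.copy()
disc[80:120, 80:120] += 60                   # contrast object

roi_bg = background[20:60, 20:60]            # background ROI
roi_obj = disc[85:115, 85:115]               # object ROI

snr = roi_bg.mean() / roi_bg.std()
cnr = (roi_obj.mean() - roi_bg.mean()) / roi_bg.std()
print(f"SNR = {snr:.1f}, CNR = {cnr:.1f}")
```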

Relevance: 100.00%

Abstract:

This paper presents a theoretical model for estimating the power, the optical signal-to-noise ratio, and the number of carriers generated in a comb generator, taking as reference the minimum optical signal-to-noise ratio at the receiver input for a given fiber link. Based on the recirculating frequency-shifting technique, the generator relies on coherent and orthogonal multi-carriers (Coherent-WDM) produced from a single laser source (seed) for feeding high-capacity (above 100 Gb/s) systems. The theoretical model has been validated by an experimental demonstration, in which 23 comb lines with an optical signal-to-noise ratio ranging from 25 to 33 dB were obtained in a spectral window of ~3.5 nm.
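As a loose point of reference (a textbook link-budget rule of thumb in a 0.1 nm reference bandwidth, not the paper's comb model), the OSNR available at the receiver after N amplified spans can be estimated as

```latex
\mathrm{OSNR}_{\mathrm{dB}} \approx 58 + P_{\mathrm{out}} - \alpha L - \mathrm{NF} - 10\log_{10} N_{\mathrm{spans}}
```

where P_out is the per-carrier launch power in dBm, αL the span loss in dB, and NF the amplifier noise figure in dB; comparing this against each comb line's OSNR indicates how many carriers remain usable on a given link.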

Relevance: 100.00%

Abstract:

To refine electroretinographic (ERG) recording methods for the analysis of low retinal potentials under scotopic conditions in advanced retinal degenerative diseases. Standard Ganzfeld ERG equipment (Diagnosys LLC, Cambridge, UK) was used in 27 healthy volunteers (mean age 28 ± 8.5 SD years) to define the stimulation protocol. The protocol was then applied in clinical routine, and 992 recordings were obtained from patients (mean age 40.6 ± 18.3 years) over a period of 5 years. A blue stimulus with a flicker frequency of 9 Hz was specified under scotopic conditions to preferentially record rod-driven responses. A range of stimulus strengths (0.0000012–6.32 scot. cd·s/m² and 6–14 ms flash duration) was tested for maximal amplitudes and interference between rods and cones. Results were analyzed by standard Fourier transformation and assessment of the signal-to-noise ratio. Optimized stimulus parameters were found to be a time-integrated luminance of 0.012 scot. cd·s/m², using a blue (470 nm) flash of 10 ms duration at a repetition frequency of 9 Hz. Characteristic stimulus strength versus amplitude curves and tests with red or green stimuli suggest a predominant rod-system response. The 9 Hz response was statistically distinguishable from noise in 38% of patients with otherwise non-recordable rod responses according to International Society for Clinical Electrophysiology of Vision standards. Thus, we believe this protocol can be used to record ERG potentials in patients with advanced retinal diseases and in the evaluation of potential treatments for these patients. Its ease of implementation in clinical routine and its observer-independent statistical evaluation may further facilitate its adoption.
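A minimal sketch (synthetic sweep; the sampling rate and choice of noise bins are assumptions) of the Fourier-based detection described above: compare the spectral amplitude at the 9 Hz stimulus frequency against neighbouring noise bins.

```python
# A minimal sketch: Fourier detection of a 9 Hz flicker ERG response,
# comparing the amplitude at the stimulus frequency with nearby bins.
import numpy as np

fs = 1000.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 4, 1 / fs)                  # 4 s sweep
resp = 0.5 * np.sin(2 * np.pi * 9 * t)       # microvolt-scale 9 Hz component
rec = resp + 2.0 * np.random.randn(t.size)   # noisy recording

spec = np.abs(np.fft.rfft(rec)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

i9 = np.argmin(np.abs(freqs - 9.0))
# Noise estimate: mean amplitude at neighbouring bins, excluding 9 Hz itself.
neigh = np.r_[i9 - 4:i9 - 1, i9 + 2:i9 + 5]
snr = spec[i9] / spec[neigh].mean()
print(f"amplitude at 9 Hz: {spec[i9]:.3f}, SNR: {snr:.1f}")
```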

Relevance: 100.00%

Abstract:

We present a new method to quantify substructures in clusters of galaxies, based on the analysis of the intensity of structures. This analysis is performed on a residual image obtained by subtracting from the X-ray image a surface brightness model, fitted with a two-dimensional analytical model (beta-model or Sersic profile) with elliptical symmetry. Our method is applied to 34 clusters observed by the Chandra X-ray Observatory in the redshift range z ∈ [0.02, 0.2] and with a signal-to-noise ratio (S/N) greater than 100. We present the calibration of the method and the relations between the substructure level and physical quantities such as mass, X-ray luminosity, temperature, and cluster redshift. We use our method to separate the clusters into two sub-samples of high and low substructure levels. We conclude, using Monte Carlo simulations, that the method recovers the true amount of substructure very well for clusters with small angular core radii (with respect to the whole image size) and good-S/N observations. We find no evidence of correlation between the substructure level and physical properties of the clusters such as gas temperature, X-ray luminosity, and redshift; however, the analysis suggests a trend between the substructure level and cluster mass. The scaling relations for the two sub-samples (high- and low-substructure-level clusters) are different (they present an offset, i.e., at fixed mass or temperature, low-substructure clusters tend to be more X-ray luminous). This is an important result for cosmological tests that use the mass-luminosity relation to obtain the cluster mass function, since these rely on the assumption that clusters do not present different scaling relations according to their dynamical state.
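A minimal sketch (synthetic circular beta-model image; the real method fits an elliptical beta-model or Sersic profile and calibrates the statistic) of the fit-subtract-quantify idea:

```python
# A minimal sketch: quantify substructure as the residual left after
# subtracting a best-fit 2-D beta-model from a mock X-ray image.
import numpy as np
from scipy.optimize import curve_fit

def beta_model(coords, s0, rc, beta):
    x, y = coords
    r2 = x**2 + y**2
    return s0 * (1 + r2 / rc**2) ** (0.5 - 3 * beta)

n = 128
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2].astype(float)
true = beta_model((xx, yy), 100.0, 10.0, 0.7)
blob = 15.0 * np.exp(-(((xx - 25) ** 2 + (yy - 10) ** 2) / 50.0))  # substructure
img = np.random.poisson(true + blob).astype(float)

p, _ = curve_fit(beta_model, (xx.ravel(), yy.ravel()), img.ravel(),
                 p0=(80.0, 8.0, 0.6))
resid = img - beta_model((xx, yy), *p)
level = np.abs(resid).sum() / img.sum()      # crude substructure fraction
print(f"fit: S0={p[0]:.1f}, rc={p[1]:.1f}, beta={p[2]:.2f}; level={level:.3f}")
```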

Relevance: 100.00%

Abstract:

Most biological systems are formed by component parts that are to some degree interrelated. Groups of parts that are more associated among themselves and are relatively autonomous from others are called modules. One consequence of modularity is that biological systems usually present an unequal distribution of genetic variation among traits. Estimating the covariance matrix that describes these systems is a difficult problem, owing to factors such as small sample sizes and measurement error. We show that this problem is exacerbated whenever matrix inversion is required, as in the reconstruction of directional selection. We explore the consequences of varying degrees of modularity and signal-to-noise ratio on selection reconstruction. We then present and test the efficiency of available methods for controlling noise in matrix estimates. In our simulations, controlling matrices for noise vastly improves the reconstruction of selection gradients. We also apply a selection-gradient reconstruction analysis to a New World monkey skull database to illustrate the impact of noise on such analyses. Noise-controlled estimates yield far more plausible interpretations, in full agreement with previous results.
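A minimal sketch (generic ridge-style shrinkage; the paper's specific noise-control methods may differ) of why inversion amplifies matrix noise when reconstructing selection gradients from beta = P^{-1} s, and how shrinkage helps:

```python
# A minimal sketch: noise in an estimated covariance matrix P is amplified
# by the inversion in beta = P^{-1} s; simple shrinkage controls the noise.
import numpy as np

rng = np.random.default_rng(1)
p = 20
A = rng.standard_normal((p, p))
P_true = A @ A.T / p + np.eye(p)             # true trait covariance
beta_true = rng.standard_normal(p)           # true selection gradient
s = P_true @ beta_true                       # selection differential

# Sample covariance from few observations: a noisy, ill-conditioned estimate.
X = rng.multivariate_normal(np.zeros(p), P_true, size=30)
P_hat = np.cov(X, rowvar=False)

def reconstruct(P, s, shrink=0.0):
    # Shrink toward the scaled identity before inverting.
    target = np.trace(P) / P.shape[0] * np.eye(P.shape[0])
    return np.linalg.solve((1 - shrink) * P + shrink * target, s)

for lam in (0.0, 0.3):
    err = np.linalg.norm(reconstruct(P_hat, s, lam) - beta_true)
    print(f"shrinkage={lam}: reconstruction error {err:.2f}")
```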

Relevance: 100.00%

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying its compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the States Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test. In fact, high-quality seismological systems are thought to be capable of detecting and locating very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test. This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information, namely the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we used cross-correlation among digital waveforms to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between waveforms of a pair of events at the same station, at the global scale, and on the similarity between waveforms of the same event at two different sensors of the tripartite array, at the local scale. After preliminary simulation-based tests of the reliability of our location techniques, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. Initially, the algorithm was applied to the differences among the original arrival times of the P phases, without cross-correlation. We found that the considerable geometrical spreading noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference) was considerably reduced by the application of our technique.
This is what we expected, since the methodology was applied to a sequence of events for which a real closeness among the hypocenters can be assumed, as they belong to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed, or at least reduced. The introduction of cross-correlation did not bring evident improvements to our results: the two sets of locations (with and without the application of the cross-correlation technique) are very similar to each other. This suggests that the use of cross-correlation did not substantially improve the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on travel times. As a further explanation for the modest contribution of the cross-correlation, it should be noted that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller), and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area. At the local scale, in addition to the cross-correlation, we performed a signal interpolation to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results point to the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly under bad SNR conditions). Another remarkable feature of our procedure is that it does not require long processing times, so the user can immediately check the results. During a field survey, this feature makes a quasi-real-time check possible, allowing immediate optimization of the array geometry if suggested by the results at an early stage.
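A minimal sketch (synthetic waveforms, assumed sampling rate) of the cross-correlation measurement of a differential time between two similar signals, with parabolic peak interpolation for sub-sample precision, in the spirit of the procedure described above:

```python
# A minimal sketch: differential arrival time between two similar waveforms
# via cross-correlation, refined with parabolic sub-sample interpolation.
import numpy as np

fs = 100.0                                    # samples per second (assumed)
t = np.arange(0, 10, 1 / fs)
wavelet = np.exp(-((t - 3) ** 2) / 0.05) * np.sin(2 * np.pi * 5 * t)
true_shift = 0.137                            # seconds
a = wavelet + 0.05 * np.random.randn(t.size)
b = np.interp(t - true_shift, t, wavelet) + 0.05 * np.random.randn(t.size)

cc = np.correlate(a, b, mode="full")
k = np.argmax(cc)
# Parabolic interpolation around the peak for a sub-sample lag estimate.
y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
lag = (k + frac - (t.size - 1)) / fs
print(f"estimated shift: {-lag:.3f} s (true {true_shift} s)")
```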

Relevance: 100.00%

Abstract:

Hybrid technologies, thanks to the convergence of integrated microelectronic devices and a new class of microfluidic structures, could open new perspectives on the way nanoscale events are discovered, monitored, and controlled. The key point of this thesis is to evaluate the impact of such an approach on ion-channel High Throughput Screening (HTS) platforms. This approach offers promising opportunities for the development of new classes of sensitive, reliable, and cheap sensors. There are numerous advantages to embedding microelectronic readout structures tightly coupled to the sensing elements. On the one hand, the signal-to-noise ratio is increased as a result of scaling; on the other, readout miniaturization allows the organization of sensors into arrays, increasing the capability of the platform in terms of the number of acquired data points, as required by the HTS approach, to improve sensing accuracy and reliability. However, accurate interface design is required to establish efficient communication between ionic-based and electronic-based signals. The work in this thesis presents a first example of a complete parallel readout system with single-ion-channel resolution, using a compact and scalable hybrid architecture suitable for interfacing to large arrays of sensors, ensuring simultaneous signal recording and smart control of the signal-to-noise ratio and bandwidth trade-off. More specifically, an array of microfluidic polymer structures, hosting artificial lipid-bilayer blocks in which single ion-channel pores are embedded, is coupled with an array of ultra-low-noise current amplifiers for signal amplification and data processing. As a working demonstration, the platform was used to acquire the ultra-small currents produced by single non-covalent binding events between alpha-hemolysin pores and beta-cyclodextrin molecules in artificial lipid membranes.
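A minimal sketch (synthetic current trace with illustrative levels; not the platform's firmware or analysis chain) of detecting single-channel blockade events, such as beta-cyclodextrin dwells in an alpha-hemolysin pore, by simple threshold crossing:

```python
# A minimal sketch: threshold detection of single-channel blockade events
# in a synthetic current trace with open and blocked conductance levels.
import numpy as np

fs = 10_000.0                                  # sampling rate, Hz (assumed)
t = np.arange(0, 2, 1 / fs)
open_level, blocked_level = 100.0, 35.0        # pA (illustrative values)
current = np.full(t.size, open_level)
for start in (0.30, 0.85, 1.40):               # three binding events
    idx = (t > start) & (t < start + 0.05)
    current[idx] = blocked_level
current += 3.0 * np.random.randn(t.size)       # amplifier + membrane noise

# Detect events as excursions below the midpoint between the two levels.
below = current < (open_level + blocked_level) / 2
edges = np.flatnonzero(np.diff(below.astype(int)))
starts, ends = edges[::2], edges[1::2]
for s0, e0 in zip(starts, ends):
    print(f"event: t={t[s0]:.3f}s, dwell={(e0 - s0) / fs * 1e3:.1f} ms")
```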