253 results for deconvolution


Relevance: 10.00%

Abstract:

The Tara Oceans Expedition (2009-2013) sampled the world oceans on board a 36 m long schooner, collecting environmental data and organisms from viruses to planktonic metazoans for later analyses using modern sequencing and state-of-the-art imaging technologies. Tara Oceans data are particularly suited to study the genetic, morphological and functional diversity of plankton. The present data set provides continuous measurements made with an Aquatic Laser Fluorescence Analyzer (ALFA) (Chekalyuk et al., 2014), connected in-line to the Tara flow-through system during 2013. The ALFA instrument provides dual-wavelength excitation (405 and 514 nm) of laser-stimulated emission (LSE) for spectral and temporal analysis. It offers in vivo fluorescence assessments of phytoplankton pigments, biomass, photosynthetic yield (Fv/Fm), phycobiliprotein (PBP)-containing phytoplankton groups, and chromophoric dissolved organic matter (CDOM) (Chekalyuk and Hafez, 2008; 2013a). Spectral deconvolution (SDC) is used to assess the overlapping spectral bands of aquatic fluorescence constituents and water Raman scattering (R). The Fv/Fm measurements are spectrally corrected for the non-chlorophyll fluorescence background produced by CDOM and other constituents (Chekalyuk and Hafez, 2008). The sensor was cleaned weekly following the manufacturer-recommended protocol.
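
The spectral deconvolution (SDC) step can be pictured as linear unmixing: the measured emission spectrum is fit as a non-negative combination of reference constituent bands. A minimal sketch follows; the band centers, widths, and amplitudes are illustrative assumptions, not the ALFA calibration.

```python
# Minimal sketch of spectral deconvolution (SDC) by non-negative least squares:
# a measured emission spectrum is modeled as a non-negative mix of reference
# component bands (e.g., chlorophyll, phycobiliproteins, CDOM, water Raman).
# Component shapes and centers here are illustrative, not the ALFA calibration.
import numpy as np
from scipy.optimize import nnls

wl = np.linspace(420, 750, 331)                      # emission wavelengths (nm)

def band(center, width):
    """Gaussian reference band normalized to unit peak."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Columns = reference spectra of the overlapping constituents (assumed shapes).
A = np.column_stack([
    band(470, 12),   # water Raman scattering (405 nm excitation)
    band(550, 60),   # CDOM (broad, featureless emission)
    band(580, 15),   # phycoerythrin-like PBP band
    band(685, 12),   # chlorophyll-a fluorescence
])

# Synthetic "measured" spectrum: a known mix plus noise, for demonstration.
true_amps = np.array([0.8, 1.5, 0.4, 2.0])
measured = A @ true_amps + 0.02 * np.random.default_rng(0).standard_normal(wl.size)

# Non-negative least squares recovers the contribution of each constituent.
amps, residual_norm = nnls(A, measured)
print("recovered amplitudes:", np.round(amps, 2))
```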

Relevance: 10.00%

Abstract:

Five synthetic combinatorial libraries of 2,080 components each were screened as mixtures for inhibition of DNA binding to two transcription factors. Rapid, solution-phase synthesis coupled to a gel-shift assay led to the identification of two compounds active at a 5- to 10-μM concentration level. The likely mode of inhibition is intercalation between DNA base pairs. The efficient deconvolution through sublibrary synthesis augurs well for the use of large mixtures of small, nonpeptide molecules in biological screens.

Relevance: 10.00%

Abstract:

Mechanisms of bacterial pathogenesis have become an increasingly important subject as pathogens have become increasingly resistant to current antibiotics. The adhesion of microorganisms to the surface of host tissue is often a first step in pathogenesis and is a plausible target for new antiinfective agents. Examination of bacterial adhesion has been difficult both because it is polyvalent and because bacterial adhesins often recognize more than one type of cell-surface molecule. This paper describes an experimental procedure that measures the forces of adhesion resulting from the interaction of uropathogenic Escherichia coli with molecularly well-defined models of cellular surfaces. This procedure uses self-assembled monolayers (SAMs) to model the surface of epithelial cells and optical tweezers to manipulate the bacteria. Optical tweezers orient the bacteria relative to the surface and, thus, limit the number of points of attachment (that is, the valency of attachment). Using this combination, it was possible to quantify the force required to break a single interaction between pilus and mannose groups linked to the SAM. These results demonstrate the deconvolution and characterization of complicated events in microbial adhesion in terms of specific molecular interactions. They also suggest that the combination of optical tweezers and appropriately functionalized SAMs is a uniquely synergistic system with which to study polyvalent adhesion of bacteria to biologically relevant surfaces and with which to screen for inhibitors of this adhesion.

Relevance: 10.00%

Abstract:

Light microscopy of thick biological samples, such as tissues, is often limited by aberrations caused by refractive index variations within the sample itself. This problem is particularly severe for live imaging, a field of great current excitement due to the development of inherently fluorescent proteins. We describe a method of removing such aberrations computationally by mapping the refractive index of the sample using differential interference contrast microscopy, modeling the aberrations by ray tracing through this index map, and using space-variant deconvolution to remove aberrations. This approach will open possibilities to study weakly labeled molecules in difficult-to-image live specimens.
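
A minimal sketch of space-variant deconvolution by tiling: the image is divided into blocks, each block is deconvolved (here with Richardson-Lucy) using a locally appropriate PSF, and the blocks are reassembled. In the method described above the local PSFs come from ray tracing through the DIC-derived index map; here they are placeholder Gaussians whose width grows with depth, and a practical implementation would blend overlapping tiles to avoid seams.

```python
# Sketch of space-variant deconvolution by tiling: split the image into blocks,
# deconvolve each block (Richardson-Lucy) with a locally appropriate PSF, and
# reassemble. The depth-dependent Gaussian PSFs are placeholders for PSFs that
# would be obtained by ray tracing through a refractive-index map.
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(sigma, size=15):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(img, psf, n_iter=25):
    """Basic Richardson-Lucy deconvolution of one block."""
    est = np.full_like(img, img.mean())
    psf_flip = psf[::-1, ::-1]
    for _ in range(n_iter):
        blur = fftconvolve(est, psf, mode="same") + 1e-12
        est = est * fftconvolve(img / blur, psf_flip, mode="same")
    return est

def space_variant_deconvolve(img, psf_for_tile, tile=64):
    """Deconvolve each tile with its own PSF (a full method would blend overlaps)."""
    out = np.zeros_like(img)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            block = img[i:i + tile, j:j + tile]
            out[i:i + tile, j:j + tile] = richardson_lucy(block, psf_for_tile(i, j))
    return out

# Synthetic example: point sources blurred more strongly in the lower half.
rng = np.random.default_rng(1)
truth = np.zeros((128, 128))
truth[rng.integers(10, 118, 30), rng.integers(10, 118, 30)] = 100.0
blurred = np.clip(np.vstack([
    fftconvolve(truth[i:i + 64], gaussian_psf(1.0 + i / 64), mode="same")
    for i in (0, 64)
]), 0, None)
restored = space_variant_deconvolve(blurred, lambda i, j: gaussian_psf(1.0 + i / 64))
```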

Relevance: 10.00%

Abstract:

Cardiac muscle contraction is triggered by a small and brief Ca2+ entry across the t-tubular membranes, which is believed to be locally amplified by release of Ca2+ from the adjacent junctional sarcoplasmic reticulum (SR). As Ca2+ diffusion is thought to be markedly attenuated in cells, it has been predicted that significant intrasarcomeric [Ca2+] gradients should exist during activation. To directly test for this, we measured [Ca2+] distribution in single cardiac myocytes using fluorescent [Ca2+] indicators and high-speed, three-dimensional digital imaging microscopy and image deconvolution techniques. Steep cytosolic [Ca2+] gradients from the t-tubule region to the center of the sarcomere developed during the first 15 ms of systole. The steepness of these [Ca2+] gradients varied with treatments that altered Ca2+ release from internal stores. Electron probe microanalysis revealed a loss of Ca2+ from the junctional SR and an accumulation, principally in the A-band, during activation. We propose that the prolonged existence of [Ca2+] gradients within the sarcomere reflects the relatively long period of Ca2+ release from the SR, the localization of Ca2+ binding sites and Ca2+ sinks remote from sites of release, and diffusion limitations within the sarcomere. The large [Ca2+] transient near the t-tubular/junctional SR membranes is postulated to explain numerous features of excitation-contraction coupling in cardiac muscle.

Relevance: 10.00%

Abstract:

An EPR "spectroscopic ruler" was developed using a series of alpha-helical polypeptides, each modified with two nitroxide spin labels. The EPR line broadening due to electron-electron dipolar interactions in the frozen state was determined using the Fourier deconvolution method. These dipolar spectra were then used to estimate the distances between the two nitroxides separated by 8-25 A. Results agreed well with a simple alpha-helical model. The standard deviation from the model system was 0.9 A in the range of 8-25 A. This technique is applicable to complex systems such as membrane receptors and channels, which are difficult to access with high-resolution NMR or x-ray crystallography, and is expected to be particularly useful for systems for which optical methods are hampered by the presence of light-interfering membranes or chromophores.

Relevance: 10.00%

Abstract:

A concept termed liquid-phase combinatorial synthesis (LPCS) is described. The central feature of this methodology is that it combines the advantages that classic organic synthesis in solution offers with those that solid-phase synthesis can provide, through the application of a linear homogeneous polymer. To validate this concept two libraries were prepared, one of peptide and the second of nonpeptide origin. The peptide-based library was synthesized by a recursive deconvolution strategy [Erb, E., Janda, K. D. & Brenner, S. (1994) Proc. Natl. Acad. Sci. USA 91, 11422-11426] and several ligands were found within this library to bind a monoclonal antibody elicited against beta-endorphin. The non-peptide molecules synthesized were arylsulfonamides, a class of compounds of known clinical bactericidal efficacy. The results indicate that the reaction scope of LPCS should be general, and its value to multiple, high-throughput screening assays could be of particular merit, since multimilligram quantities of each library member can readily be attained.
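
The recursive deconvolution strategy can be pictured as a round-by-round search: each round screens pools in which one additional position is defined, and the most active pool fixes that position's building block. The sketch below reduces the chemistry (saved partial pools, resynthesis, real assays) to a toy search with a stand-in scoring function.

```python
# Toy sketch of recursive deconvolution of a mixture library: each round screens
# pools in which one more position is fixed (keeping the best choices from the
# earlier rounds), and the most active pool determines that position's building
# block. The "assay" is a stand-in scoring function, not real screening data.
from itertools import product

BUILDING_BLOCKS = "ACDEFGHIKLMNPQRSTVWY"   # e.g., amino acids
TARGET = "FGHK"                            # hidden most-active sequence (toy)

def assay(pool):
    """Stand-in for a biological assay: score of the best member of the pool."""
    return max(sum(a == b for a, b in zip(member, TARGET)) for member in pool)

def recursive_deconvolution(length=4):
    fixed = ""
    for position in range(length):
        best_block, best_score = None, -1
        for block in BUILDING_BLOCKS:
            # Pool = fixed prefix + this block + all combinations at the
            # remaining randomized positions.
            tails = product(BUILDING_BLOCKS, repeat=length - position - 1)
            pool = [fixed + block + "".join(t) for t in tails]
            score = assay(pool)
            if score > best_score:
                best_block, best_score = block, score
        fixed += best_block
    return fixed

print(recursive_deconvolution())   # -> "FGHK" for this toy scoring function
```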

Relevance: 10.00%

Abstract:

The objectives of this research dissertation were to develop and present novel analytical methods for the quantification of surface binding interactions between aqueous nanoparticles and water-soluble organic solutes. Quantification of nanoparticle surface interactions is presented in this work as association constants describing how strongly the solutes interact with the nanoparticle surface. By understanding these nanoparticle-solute interactions, in part through association constants, the scientific community will better understand how organic drugs and nanomaterials interact in the environment, as well as their eventual environmental fate. The biological community and the pharmaceutical and consumer-product industries also have vested interests in nanoparticle-drug interactions, both for nanoparticle toxicity research and for using nanomaterials as drug-delivery vesicles. The novel analytical methods presented, applied to nanoparticle surface association chemistry, may prove useful in helping the scientific community understand the risks, benefits, and opportunities of nanoparticles. The methods were developed using a model nanoparticle, Laponite-RD (LRD), chosen because of its small size (25 nm in diameter). Caffeine, oxytetracycline (OTC), and quinine were selected as model solutes because of their environmental importance and because their chemical properties can be exploited in the system. All of these chemicals are found in the environment; thus, how they interact with nanoparticles and are transported through the environment is important. The analytical methods use wide-bore hydrodynamic chromatography to induce a partial hydrodynamic separation between the nanoparticles and dissolved solutes. Deconvolution techniques then yield separate elution profiles for the nanoparticle and the organic solute, and a mass-balance approach gives association constants between LRD, the model nanoparticle, and the organic solutes. These findings are the first of their kind for LRD and nanoclays in dilute dispersions.
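
A minimal sketch of the data-analysis step described above: the partially separated detector trace is fit as the sum of two elution peaks (solute travelling with the nanoparticles versus free solute), and a mass balance converts the peak areas into an apparent association constant. The Gaussian peak shapes, concentrations, and 1:1 binding model are illustrative assumptions, not the dissertation's calibration.

```python
# Sketch: resolve a partially separated hydrodynamic-chromatography trace into
# a nanoparticle-bound peak and a free-solute peak by fitting two Gaussians,
# then use a mass balance to estimate the bound fraction and an apparent
# association constant. Peak shapes, concentrations, and the 1:1 binding model
# are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, t1, w1, a2, t2, w2):
    return (a1 * np.exp(-0.5 * ((t - t1) / w1) ** 2)
            + a2 * np.exp(-0.5 * ((t - t2) / w2) ** 2))

t = np.linspace(0, 20, 400)                         # elution time (min)
rng = np.random.default_rng(0)
trace = two_gaussians(t, 1.0, 7.0, 0.8, 0.6, 9.5, 1.2) + 0.01 * rng.standard_normal(t.size)

p0 = [1, 7, 1, 0.5, 10, 1]                          # rough initial guesses
popt, _ = curve_fit(two_gaussians, t, trace, p0=p0)
a1, t1, w1, a2, t2, w2 = popt

# Peak areas are taken as proportional to bound and free solute, assuming
# equal detector response for both populations.
area_bound = a1 * w1 * np.sqrt(2 * np.pi)
area_free = a2 * w2 * np.sqrt(2 * np.pi)

total_solute = 1e-5                                 # mol/L added solute (assumed)
site_conc = 1e-4                                    # mol/L surface sites (assumed)
bound = total_solute * area_bound / (area_bound + area_free)
free = total_solute - bound
K_assoc = bound / (free * (site_conc - bound))      # apparent 1:1 association constant
print(f"K_assoc = {K_assoc:.2e} L/mol")
```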

Relevance: 10.00%

Abstract:

Hydroxychloroquine (HCQ) is an antimalarial drug that is also used as a second-line treatment for rheumatoid arthritis (RA). Clinically, the use of HCQ is characterized by a long delay in the onset of action, and withdrawal of treatment is more often a result of inefficacy than of toxicity. The slow onset of action can be attributed to the pharmacokinetics (PK) of HCQ, and wide interpatient variability is evident. Tentative relationships between concentration and effect have been made, but to date no population PK model has been developed for HCQ. This study aimed to develop a population PK model including an estimation of the oral bioavailability of HCQ. In addition, the effects of coadministration of methotrexate on the PK of HCQ were examined. Hydroxychloroquine blood concentration data were combined from previous pharmacokinetic studies in patients with rheumatoid arthritis; the data cohort comprised 123 patients from four previously published studies. Two groups of patients were included: 74 received hydroxychloroquine (HCQ) alone, and 49 received HCQ and methotrexate (MTX). All data analyses were carried out using the NONMEM program. A one-compartment PK model was supported, rather than the three-compartment model published previously, probably because of the clustering of concentrations taken at the end of a dosing interval. The population estimate of bioavailability, 0.75 (0.07), n = 9, was consistent with literature values. The mean parameter values from the final model were CL = 9.9 ± 0.4 L/h, V = 605 ± 91 L, ka = 0.77 ± 0.22 h⁻¹, and tlag = 0.44 ± 0.02 h. Clearance was not affected by the presence of MTX and, hence, steady-state drug concentrations and maintenance dosage requirements were similar. A population PK model was successfully developed for HCQ.
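
The final model corresponds to a standard one-compartment disposition with first-order absorption and a lag time; a minimal sketch of the implied concentration-time curve is given below using the population estimates quoted above. The 400 mg single dose is an assumption for illustration, and reading the reported rate constant as the absorption rate constant ka is my interpretation of the garbled source; this is not the original NONMEM analysis.

```python
# One-compartment model with first-order absorption and a lag time, using the
# population estimates quoted above (CL = 9.9 L/h, V = 605 L, ka = 0.77 1/h,
# tlag = 0.44 h, F = 0.75). The 400 mg single oral dose is an assumed example;
# this is the standard closed-form equation, not the original NONMEM code.
import numpy as np

CL, V, ka, tlag, F = 9.9, 605.0, 0.77, 0.44, 0.75    # population estimates
dose_mg = 400.0                                       # assumed single oral dose
ke = CL / V                                           # elimination rate constant (1/h)

def concentration(t_hours):
    """Blood concentration (mg/L) after a single oral dose."""
    t = np.clip(np.asarray(t_hours, dtype=float) - tlag, 0.0, None)
    return (F * dose_mg * ka) / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

times = np.array([1.0, 2, 4, 8, 24, 72])
for t, c in zip(times, concentration(times)):
    print(f"t = {t:5.1f} h   C = {1000 * c:6.1f} ug/L")
```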

Relevance: 10.00%

Abstract:

We discuss the construction of a photometric redshift catalogue of luminous red galaxies (LRGs) from the Sloan Digital Sky Survey (SDSS), emphasizing the principal steps necessary for constructing such a catalogue: (i) photometrically selecting the sample, (ii) measuring photometric redshifts and their error distributions, and (iii) estimating the true redshift distribution. We compare two photometric redshift algorithms for these data and find that they give comparable results. Calibrating against the SDSS and SDSS-2dF (Two Degree Field) spectroscopic surveys, we find that the photometric redshift accuracy is σ ∼ 0.03 for redshifts less than 0.55 and worsens at higher redshift (∼0.06 for z < 0.7). These errors are caused by photometric scatter, as well as systematic errors in the templates, filter curves and photometric zero-points. We also parametrize the photometric redshift error distribution with a sum of Gaussians and use this model to deconvolve the errors from the measured photometric redshift distribution to estimate the true redshift distribution. We pay special attention to the stability of this deconvolution, regularizing the method with a prior on the smoothness of the true redshift distribution. The methods that we develop are applicable to general photometric redshift surveys.
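
A schematic of the regularized deconvolution described above: the observed photometric redshift histogram is modeled as the true redshift distribution convolved with the photo-z error distribution, and the true distribution is recovered by least squares with a smoothness penalty. A single Gaussian error kernel stands in for the paper's sum-of-Gaussians error model, and the binning and synthetic N(z) are illustrative.

```python
# Schematic of the deconvolution step: model the observed photo-z histogram as
# the true N(z) convolved with the photo-z error distribution, and recover N(z)
# by non-negative least squares with a second-difference smoothness penalty.
# A single Gaussian error kernel of width 0.03 stands in for the calibrated
# sum-of-Gaussians error model; the binning is illustrative.
import numpy as np
from scipy.optimize import nnls

z = np.arange(0.2, 0.71, 0.01)                       # redshift bins
nbin = z.size

# Error matrix: column j = probability that a galaxy at true z_j lands in each observed bin.
sigma = 0.03
E = np.exp(-0.5 * ((z[:, None] - z[None, :]) / sigma) ** 2)
E /= E.sum(axis=0)

# Synthetic "true" N(z) and its error-smeared, noisy observed counterpart.
n_true = np.exp(-0.5 * ((z - 0.45) / 0.08) ** 2)
n_obs = E @ n_true + 0.02 * np.random.default_rng(0).standard_normal(nbin)

# Smoothness prior: penalize the second differences of the recovered N(z).
lam = 0.5
D2 = np.diff(np.eye(nbin), n=2, axis=0)              # (nbin-2, nbin) second-difference operator
A = np.vstack([E, lam * D2])
b = np.concatenate([n_obs, np.zeros(nbin - 2)])

n_rec, _ = nnls(A, b)                                # regularized, non-negative deconvolution
```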

Relevance: 10.00%

Abstract:

The objective of this study was to compare the in vitro dissolution profile of a new rapidly absorbed paracetamol tablet containing sodium bicarbonate (PS) with that of a conventional paracetamol tablet (P), and to relate these by deconvolution and mapping to in vivo release. The dissolution methods used include the standard procedure described in the USP monograph for paracetamol tablets, employing buffer at pH 5.8 or 0.05 M HCl at stirrer speeds between 10 and 50 rpm. The mapping process was developed and implemented in Microsoft Excel® worksheets that iteratively calculated the optimal values of the scale and shape factors which linked in vivo time to in vitro time. The in vitro-in vivo correlation (IVIVC) was carried out simultaneously for both formulations to produce common mapping factors. The USP method, using buffer at pH 5.8, demonstrated no difference between the two products. However, using an acidic medium the rate of dissolution of P, but not of PS, decreased with decreasing stirrer speed. A significant correlation (r = 0.773; p < 0.00001) was established between in vivo release and in vitro dissolution using the profiles obtained with 0.05 M HCl and a stirrer speed of 30 rpm. The scale factor for optimal simultaneous IVIVC in the fasting state was 2.54 and the shape factor was 0.16; the corresponding values for mapping in the fed state were 3.37 and 0.13 (implying a larger in vitro-in vivo time difference but a reduced shape difference in the fed state). The current IVIVC explains, in part, the observed in vivo variability of the two products. The approach to mapping may also be extended to different batches of these products, to predict the impact of any changes in in vitro dissolution on in vivo release and plasma drug concentration-time profiles.
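
A sketch of the time-mapping idea: scale and shape factors link in vivo time to in vitro time, and they are optimized simultaneously for both formulations so that the mapped in vitro dissolution best matches the deconvolved in vivo release. The power-law form of the mapping and the Weibull profiles below are assumptions for illustration only; the abstract does not give the worksheet formulas.

```python
# Sketch of IVIVC time mapping: find common scale (a) and shape (b) factors so
# that in vitro dissolution, read at mapped times, matches in vivo release for
# both formulations simultaneously. The power-law mapping t -> a * t**b and the
# Weibull profiles are illustrative assumptions, not the original worksheets.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0.1, 8, 40)                                 # time (h)

def weibull(t, scale, shape):
    return 1 - np.exp(-(t / scale) ** shape)

# Assumed dissolution (in vitro) and absorption (in vivo, from deconvolution)
# profiles for the two formulations, P and PS.
vitro = {"P": weibull(t, 1.0, 1.2), "PS": weibull(t, 0.5, 1.5)}
vivo = {"P": weibull(t, 2.4, 1.1), "PS": weibull(t, 1.3, 1.4)}

def objective(params):
    a, b = params
    sse = 0.0
    for form in ("P", "PS"):                                # common mapping factors
        mapped_t = a * t ** b                               # in vivo time -> in vitro time
        predicted = np.interp(mapped_t, t, vitro[form])     # mapped in vitro release
        sse += np.sum((predicted - vivo[form]) ** 2)
    return sse

result = minimize(objective, x0=[1.0, 1.0], method="Nelder-Mead")
scale_factor, shape_factor = result.x
print(f"scale = {scale_factor:.2f}, shape = {shape_factor:.2f}")
```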

Relevance: 10.00%

Abstract:

In 1972 the ionized cluster beam (ICB) deposition technique was introduced as a new method for thin film deposition. At that time the use of clusters was postulated to be able to enhance film nucleation and adatom surface mobility, resulting in high quality films. Although a few researchers reported singly ionized clusters containing 10²-10³ atoms, others were unable to repeat their work. The consensus now is that film effects in the early investigations were due to self-ion bombardment rather than clusters. Subsequently in recent work (early 1992) synthesis of large clusters of zinc without the use of a carrier gas was demonstrated by Gspann and repeated in our laboratory. Clusters resulted from very significant changes in two source parameters. Crucible pressure was increased from the earlier 2 Torr to several thousand Torr and a converging-diverging nozzle 18 mm long and 0.4 mm in diameter at the throat was used in place of the 1 mm x 1 mm nozzle used in the early work. While this is practical for zinc and other high vapor pressure materials it remains impractical for many materials of industrial interest such as gold, silver, and aluminum. The work presented here describes results using gold and silver at pressures of around 1 and 50 Torr in order to study the effect of the pressure and nozzle shape. Significant numbers of large clusters were not detected. Deposited films were studied by atomic force microscopy (AFM) for roughness analysis, and by X-ray diffraction. Nanometer-size islands of zinc deposited on flat silicon substrates by ICB were also studied by atomic force microscopy, and the number of atoms/cm² was calculated and compared to data from Rutherford backscattering spectrometry (RBS). To improve the agreement between data from AFM and RBS, convolution and deconvolution algorithms were implemented to study and simulate the interaction between tip and sample in atomic force microscopy. The deconvolution algorithm takes into account the physical volume occupied by the tip, resulting in an image that is a more accurate representation of the surface. One method increasingly used to study the deposited films, both during the growth process and afterwards, is ellipsometry. Ellipsometry is a surface analytical technique used to determine the optical properties and thickness of thin films. In situ measurements can be made through the windows of a deposition chamber. A method for determining the optical properties of a film that is sensitive only to the growing film and accommodates underlying interfacial layers, multiple unknown underlayers, and other unknown substrates was developed. This method is carried out by making an initial ellipsometry measurement well past the real interface and by defining a virtual interface in the vicinity of this measurement.
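
The AFM tip-sample interaction mentioned above is often treated morphologically: the recorded image is approximately the true surface dilated by the (reflected) tip, and eroding the image with the tip shape gives an estimate of the surface. The sketch below illustrates this with a paraboloidal tip and synthetic islands; it is not the dissertation's specific algorithm.

```python
# Sketch of AFM tip convolution/deconvolution in the morphological picture:
# the recorded image is approximately the surface dilated by the (reflected)
# tip, and eroding the image with the tip shape gives an upper-bound estimate
# of the true surface. The paraboloidal tip and synthetic zinc-like islands
# are illustrative assumptions.
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

# Synthetic surface: a few hemispherical islands on a flat substrate (heights in nm).
n = 256
yy, xx = np.mgrid[0:n, 0:n]
surface = np.zeros((n, n))
for cx, cy, r in [(60, 80, 20), (150, 150, 30), (200, 60, 15)]:
    d2 = (xx - cx) ** 2 + (yy - cy) ** 2
    surface = np.maximum(surface, np.sqrt(np.clip(r**2 - d2, 0, None)))

# Paraboloidal tip with radius of curvature R, sampled on a small grid (nm).
R, half = 25.0, 20
ty, tx = np.mgrid[-half:half + 1, -half:half + 1]
tip = -(tx**2 + ty**2) / (2 * R)          # apex at 0, opening downward

# Imaging = dilation of the surface by the tip; reconstruction = erosion.
image = grey_dilation(surface, structure=tip)
reconstructed = grey_erosion(image, structure=tip)

print("max tip broadening (nm):", round((image - surface).max(), 2))
print("residual after deconvolution (nm):", round(np.abs(reconstructed - surface).max(), 2))
```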

Relevance: 10.00%

Abstract:

Tumor functional volume (FV) and its mean activity concentration (mAC) are quantities derived from positron emission tomography (PET). They are used to estimate radiation dose for therapy, to evaluate disease progression, and as prognostic indicators for predicting outcome. PET images have low resolution and high noise and are affected by the partial volume effect (PVE). Manually segmenting each tumor is cumbersome and hard to reproduce. To solve these problems I developed the iterative deconvolution thresholding segmentation (IDTS) algorithm, which segments the tumor, measures the FV, corrects for the PVE, and calculates the mAC. The algorithm corrects for the PVE without the need to estimate the camera's point spread function (PSF) and does not require optimization for a specific camera. The algorithm was tested in physical phantom studies, where hollow spheres (0.5-16 ml) were used to represent tumors with a homogeneous activity distribution, and on irregularly shaped tumors with a heterogeneous activity profile acquired using physical and simulated phantoms. The physical phantom studies were performed with different signal-to-background ratios (SBR) and with different acquisition times (1-5 min). The algorithm was applied to ten clinical data sets, and the results were compared with manual segmentation and with fixed-percentage thresholding methods (T50 and T60), in which 50% and 60% of the maximum intensity, respectively, are used as the threshold. The average errors in the FV and mAC calculations were 30% and -35% for the 0.5 ml tumor and about 5% for the 16 ml tumor. The overall FV error was about 10% for heterogeneous tumors in the physical and simulated phantom data. The FV and mAC errors for the clinical images, compared with manual segmentation, were around -17% and 15%, respectively. In summary, the algorithm has the potential to be applied to data acquired from different cameras, as it does not depend on knowing the camera's PSF, and it can improve dose estimation and treatment planning.
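
A simplified sketch of the iterative-thresholding loop at the heart of an IDTS-like approach: the delineation threshold depends on the source-to-background ratio, which in turn depends on the current segmentation, so the two are updated alternately until the functional volume converges. The threshold-versus-SBR relation below is a generic placeholder, not the dissertation's calibrated curve, and the deconvolution and partial-volume-correction steps are omitted.

```python
# Simplified sketch of an iterative thresholding segmentation loop for PET:
# the threshold depends on the source-to-background ratio (SBR), which depends
# on the current segmentation, so both are updated until the functional volume
# (FV) converges. The threshold-vs-SBR relation is a generic placeholder, and
# the deconvolution / partial-volume correction steps of IDTS are omitted.
import numpy as np

def threshold_fraction(sbr):
    """Placeholder adaptive threshold: ~0.4 of max at high SBR, higher at low SBR."""
    return 0.4 + 0.6 / max(sbr, 1.001)

def iterative_threshold_segmentation(img, voxel_ml, max_iter=50, tol=1e-3):
    background = np.percentile(img, 50)              # initial background estimate
    mask = img > 0.5 * img.max()                     # initial 50%-of-max contour
    fv = mask.sum() * voxel_ml
    for _ in range(max_iter):
        sbr = img[mask].mean() / max(background, 1e-9)
        thr = background + threshold_fraction(sbr) * (img.max() - background)
        mask_new = img > thr
        background = img[~mask_new].mean()           # refine background outside tumor
        fv_new = mask_new.sum() * voxel_ml
        converged = abs(fv_new - fv) / max(fv, 1e-9) < tol
        mask, fv = mask_new, fv_new
        if converged:
            break
    return mask, fv, img[mask].mean()                # segmentation, FV (ml), mAC

# Synthetic example: a hot sphere on a warm, noisy background (4 mm voxels).
rng = np.random.default_rng(0)
img = rng.normal(1.0, 0.05, size=(40, 40, 40))
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
img[(xx - 20) ** 2 + (yy - 20) ** 2 + (zz - 20) ** 2 < 5 ** 2] += 4.0
mask, fv, mac = iterative_threshold_segmentation(img, voxel_ml=0.064)
print(f"FV = {fv:.1f} ml, mAC = {mac:.2f} (arbitrary units)")
```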

Relevance: 10.00%

Abstract:

How do the magnetic fields of massive stars evolve over time? Are their gyrochronological ages consistent with ages inferred from evolutionary tracks? Why do most stars predicted to host Centrifugal Magnetospheres (CMs) display no H$\alpha$ emission? Does plasma escape from CMs via centrifugal breakout events, or by a steady-state leakage mechanism? This thesis investigates these questions via a population study with a sample of 51 magnetic early B-type stars. The longitudinal magnetic field $\langle B_z \rangle$ was measured from Least Squares Deconvolution profiles extracted from high-resolution spectropolarimetric data. New rotational periods $P_{\rm rot}$ were determined for 15 stars from $\langle B_z \rangle$, leaving only 3 stars for which $P_{\rm rot}$ is unknown. Projected rotational velocities $v \sin i$ were measured from multiple spectral lines. Effective temperatures and surface gravities were measured via ionization balances and line profile fitting of H Balmer lines. Fundamental physical parameters, $\langle B_z \rangle$, $v \sin i$, and $P_{\rm rot}$ were then used to determine radii, masses, ages, dipole oblique rotator model, stellar wind, magnetospheric, and spindown parameters using a Monte Carlo approach that self-consistently calculates all parameters while accounting for all available constraints on stellar properties. Dipole magnetic field strengths $B_{\rm d}$ follow a log-normal distribution similar to that of Ap stars, and decline over time in a fashion consistent with the expected conservation of fossil magnetic flux. $P_{\rm rot}$ increases with fractional main sequence age, mass, and $B_{\rm d}$, as expected from magnetospheric braking. However, comparison of evolutionary track ages to maximum spindown ages $t_{\rm S,max}$ shows that initial rotation fractions may be far below critical for stars with $M_*>10 M_\odot$. Computing $t_{\rm S,max}$ with different mass-loss prescriptions indicates that the mass-loss rates of B-type stars are likely much lower than expected from extrapolation from O-type stars. Stars with H$\alpha$ in emission and absorption occupy distinct regions in the updated rotation-magnetic confinement diagram: H$\alpha$-bright stars are found to be younger, more rapidly rotating, and more strongly magnetized than the general population. Emission strength is sensitive both to the volume of the CM and to the mass-loss rate, favouring leakage over centrifugal breakout.
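
Least Squares Deconvolution itself is a linear inverse problem: the spectrum is modeled as a line mask of known positions and weights convolved with a single mean profile $Z$, which is recovered as $Z = (M^T S^2 M)^{-1} M^T S^2 y$ with $S$ the diagonal matrix of inverse noise. The sketch below uses a synthetic line list, weights, and profile, not the thesis data.

```python
# Sketch of Least Squares Deconvolution (LSD): the observed spectrum is modeled
# as a line mask (known line positions and weights) convolved with one common
# mean profile Z, recovered by weighted least squares,
#   Z = (M^T S^2 M)^(-1) M^T S^2 y,
# where S holds the inverse noise. The line list, weights, and profile below
# are synthetic placeholders.
import numpy as np

n_pix, n_vel = 4000, 41
velocity_grid = np.arange(n_vel) - n_vel // 2            # mean-profile velocity bins

rng = np.random.default_rng(2)
line_centers = rng.integers(50, n_pix - 50, size=300)    # line positions (pixel units)
line_weights = rng.uniform(0.2, 1.0, size=300)           # e.g., depth * Lande factor * wavelength

# Mask matrix M: each line contributes a copy of the mean profile, scaled by its weight.
M = np.zeros((n_pix, n_vel))
for center, weight in zip(line_centers, line_weights):
    for k, dv in enumerate(velocity_grid):
        M[center + dv, k] += weight

# Synthetic observation: a Gaussian mean profile, blended lines, plus noise.
z_true = -0.1 * np.exp(-0.5 * (velocity_grid / 6.0) ** 2)
noise_sigma = 0.01
y = M @ z_true + noise_sigma * rng.standard_normal(n_pix)

# Weighted least-squares solution for the mean profile.
S2 = np.full(n_pix, 1.0 / noise_sigma**2)                # inverse variances
A = M.T @ (S2[:, None] * M)
b = M.T @ (S2 * y)
z_lsd = np.linalg.solve(A, b)
```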