964 results for q-Fourier Transform


Relevance:

100.00%

Publisher:

Abstract:

Rapid monitoring of the response to treatment in cancer patients is essential to predict the outcome of the therapeutic regimen early in the course of treatment. Conventional methods are laborious, time-consuming, subjective and unable to study different biomolecules and their interactions simultaneously. Since the mechanisms of cancer and its response to therapy depend on molecular interactions rather than on single biomolecules, an assay capable of studying molecular interactions as a whole is preferred. Fourier transform infrared (FTIR) spectroscopy has become a popular technique in the field of cancer therapy owing to its ability to elucidate molecular interactions. The aim of this study was to explore the utility of the FTIR technique, together with multivariate analysis, to determine whether the method has the resolution to identify differences in the mechanism of therapeutic response. To this end, we used a mouse xenograft model of retinoblastoma and nanoparticle-mediated targeted therapy. The results indicate that the mechanism underlying the response differed between the treated and untreated groups, as elucidated by the unique spectral signatures generated by each group. The study establishes the efficiency of the non-invasive, label-free and rapid FTIR method in assessing the interactions of nanoparticles with cellular macromolecules for monitoring the response to cancer therapeutics.
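The pairing of FTIR spectra with multivariate analysis described above commonly uses principal component analysis. The following is a minimal hedged sketch, using synthetic spectra rather than any data from the study, of how PCA can separate two spectral groups that differ by a single absorbance band:

```python
import numpy as np

# Hypothetical illustration (not data from the study): PCA separating two
# synthetic FTIR-like spectral groups that differ by one absorbance band.
rng = np.random.default_rng(0)
wavenumbers = np.linspace(900, 1800, 200)            # fingerprint region, cm^-1

base = np.exp(-((wavenumbers - 1650) / 40.0) ** 2)   # shared amide-I-like band
extra = 0.3 * np.exp(-((wavenumbers - 1080) / 30.0) ** 2)
spectra = np.vstack([
    base + 0.05 * rng.standard_normal((10, 200)),            # "untreated"
    base + extra + 0.05 * rng.standard_normal((10, 200)),    # "treated"
])

# PCA via SVD on mean-centred spectra; project onto the first component.
centred = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
scores = centred @ vt[0]

# The two groups separate along PC1, which tracks the added 1080 cm^-1 band.
separation = abs(scores[:10].mean() - scores[10:].mean())
print(separation)
```

The shared amide-like band is removed by mean-centring, so the first principal component aligns with the group-specific band, which is the kind of "unique spectral signature" the abstract refers to.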

Relevance:

100.00%

Publisher:

Abstract:

Mathematics Subject Classification: 33C10, 33D60, 26D15, 33D05, 33D15, 33D90

Relevance:

100.00%

Publisher:

Abstract:

The electromagnetic nucleon form factors are fundamental quantities closely related to the electromagnetic structure of the nucleons. The dependence of the electric and magnetic Sachs form factors G_E and G_M on Q^2, the negative square of the four-momentum transfer in the electromagnetic scattering process, is directly related via the Fourier transform to the spatial charge and current distributions in the nucleons. Precise measurements of the form factors over a wide Q^2 range are therefore needed for a quantitative understanding of nucleon structure.

Since there are no free neutron targets, measuring the neutron form factors is difficult compared with measurements on the proton. As a consequence, the available neutron form factor data are considerably less accurate than those of the proton, and the covered Q^2 range is smaller. The electric Sachs form factor of the neutron, G_E^n, is particularly hard to measure because, owing to the vanishing net charge of the neutron, it is very small relative to the other nucleon form factors. G_E^n characterizes the charge distribution of the electrically neutral neutron and is therefore especially sensitive to its internal structure.

In the work presented here, G_E^n was determined from beam-helicity asymmetries in quasi-elastic scattering vec{3He}(vec{e}, e'n)pp at a momentum transfer of Q^2 = 1.58 (GeV/c)^2. The measurement took place in Mainz at the Mainz Microtron electron accelerator facility within the A1 collaboration in the summer of 2008.

Longitudinally polarized electrons with an energy of 1.508 GeV were scattered off a polarized ^3He gas target, which served as an effective polarized neutron target. The scattered electrons were detected in coincidence with the knocked-out neutrons; the electrons were registered in a magnetic spectrometer, while detecting the neutrons in a matrix of plastic scintillators suppressed the contribution of quasi-elastic scattering off the proton.

With the target polarization oriented in the scattering plane and perpendicular to the momentum transfer, cross-section asymmetries with respect to the electron helicity are sensitive to G_E^n / G_M^n; measuring them allows G_E^n to be determined, since the magnetic form factor G_M^n is known with comparatively high precision. Additional asymmetry measurements with the polarization oriented parallel to the momentum transfer were used to reduce systematic errors.

The measurement, including statistical (stat) and systematic (sys) errors, yielded G_E^n = 0.0244 +/- 0.0057_stat +/- 0.0016_sys.

Relevance:

90.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet transform in order to save on computation, transmission and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach.

In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.

To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature-extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local dominant ridge directions. The proposed ridge-extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge-extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
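The generalized Gaussian modelling step can be sketched compactly. The thesis formulates a least-squares estimator for the shape parameter; the snippet below instead uses the simpler, standard moment-matching approach (the ratio of the mean absolute value to the root-mean-square is a monotone function of the shape, solvable by bisection). The Laplacian test data are synthetic stand-ins for real wavelet coefficients.

```python
import math
import numpy as np

def ggd_ratio(beta):
    """E|X| / sqrt(E X^2) for a zero-mean generalized Gaussian with shape beta."""
    return math.exp(math.lgamma(2.0 / beta)
                    - 0.5 * (math.lgamma(1.0 / beta) + math.lgamma(3.0 / beta)))

def estimate_shape(coeffs, lo=0.1, hi=5.0, iters=60):
    """Bisect on the monotone ratio function to recover the shape parameter."""
    target = np.mean(np.abs(coeffs)) / np.sqrt(np.mean(coeffs ** 2))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:   # ratio increases with beta
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Laplacian samples are a generalized Gaussian with beta = 1, so the
# estimate should come out near 1.0.
rng = np.random.default_rng(1)
sample = rng.laplace(size=100_000)
print(estimate_shape(sample))
```

For beta = 2 the ratio is sqrt(2/pi) (Gaussian) and for beta = 1 it is 1/sqrt(2) (Laplacian), which is why matching the sample ratio pins down the shape.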

Relevance:

90.00%

Publisher:

Abstract:

Denatured tissues can provide a unique biological environment for regenerative medicine applications only if minimal disruption of their microarchitecture is achieved during the decellularization process. The goal is to keep the structural integrity of such constructs as functional as the tissues from which they were derived. In this work, cartilage-on-bone laminates were decellularized using enzymatic, non-ionic and ionic protocols. The work investigated the effects of the decellularization process on the microarchitecture of the cartilaginous extracellular matrix (ECM), determining the extent to which each process deteriorated the structural organization of the network. High-resolution microscopy was used to capture cross-sectional images of samples before and after treatment. The variation in microarchitecture was then analysed using a well-defined fast Fourier transform image-processing algorithm. Statistical analysis of the results revealed that the alterations produced by the aforementioned protocols differed significantly (p < 0.05). Ranked by their effectiveness in disrupting ECM integrity, the treatments were ordered as: Trypsin > SDS > Triton X-100.
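The kind of FFT-based microarchitecture analysis mentioned above can be illustrated with a toy example. This is a minimal sketch, not the paper's algorithm: a synthetic oriented stripe pattern stands in for a micrograph, and the peak of the 2-D power spectrum recovers the dominant spatial frequency and orientation.

```python
import numpy as np

# Synthetic "micrograph": a stripe pattern with a known orientation,
# standing in for aligned ECM fibres. Not the paper's data or method.
n = 128
y, x = np.mgrid[0:n, 0:n]
image = np.sin(2 * np.pi * (3 * x + 10 * y) / n)

# Power spectral density with the zero-frequency bin shifted to the centre.
psd = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
psd[n // 2, n // 2] = 0.0                     # suppress any residual mean

# The spectral peak gives the dominant spatial frequency pair, and hence
# the stripe orientation (the sign pair is ambiguous for a real image).
ky, kx = np.unravel_index(np.argmax(psd), psd.shape)
fy, fx = ky - n // 2, kx - n // 2
print(fx, fy)
```

Quantifying how such spectral peaks spread out after each treatment is one simple way an FFT analysis can score loss of structural organization.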

Relevance:

90.00%

Publisher:

Abstract:

In this paper we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one pre-processes the source image and template/model with a bank of filters (e.g. oriented edges, Gabor, etc.) as: (i) it can handle substantial illumination variations, (ii) the inefficient pre-processing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, (iii) unlike traditional LK the computational cost is invariant to the number of filters and as a result far more efficient, and (iv) this approach can be extended to the inverse compositional form of the LK algorithm where nearly all steps (including Fourier transform and filter bank pre-processing) can be pre-computed leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to non-rigid object alignment tasks that are considered extensions of the LK algorithm such as those found in Active Appearance Models (AAMs).
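The equivalence that the FLK formulation exploits, namely that filtering and then comparing in the spatial domain equals a diagonally weighted comparison in the Fourier domain, follows from Parseval's theorem. Below is a minimal 1-D sketch with synthetic signals and a single smoothing filter (not the authors' implementation, which uses 2-D images and full filter banks):

```python
import numpy as np

# Parseval identity behind the FLK weighting: SSD between two filtered
# signals == diagonally weighted SSD between their Fourier transforms.
rng = np.random.default_rng(0)
n = 64
src, tmpl = rng.standard_normal(n), rng.standard_normal(n)
filt = np.zeros(n)
filt[:3] = [0.25, 0.5, 0.25]                     # simple smoothing kernel

# Spatial domain: circularly filter both signals, then take the SSD.
F = np.fft.fft(filt)
a = np.fft.ifft(np.fft.fft(src) * F).real
b = np.fft.ifft(np.fft.fft(tmpl) * F).real
ssd_spatial = np.sum((a - b) ** 2)

# Fourier domain: |F|^2 acts as a sparse diagonal weighting matrix that
# absorbs the filtering step entirely.
diff = np.fft.fft(src) - np.fft.fft(tmpl)
ssd_fourier = np.sum(np.abs(F) ** 2 * np.abs(diff) ** 2) / n

print(np.allclose(ssd_spatial, ssd_fourier))
```

With a bank of filters, the weights from each filter simply sum into one diagonal matrix, which is why the cost becomes invariant to the number of filters.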

Relevance:

90.00%

Publisher:

Abstract:

It is commonly believed that in order to synthesize high-quality hydrogenated amorphous silicon carbide (a-Si1-xCx:H) films at competitive deposition rates it is necessary to operate plasma discharges at high power regimes and with heavy hydrogen dilution. Here we report on the fabrication of hydrogenated amorphous silicon carbide films with different carbon contents x (ranging from 0.09 to 0.71) at high deposition rates using inductively coupled plasma (ICP) chemical vapour deposition with no hydrogen dilution and at relatively low power densities (∼0.025 W cm^-3) compared with existing reports. The film growth rate R_d peaks at x = 0.09 and x = 0.71, where it equals 18 nm min^-1 and 17 nm min^-1, respectively, which is higher than in other existing reports on the fabrication of a-Si1-xCx:H films. The extra carbon atoms in carbon-rich a-Si1-xCx:H samples are incorporated via diamond-like sp^3 C-C bonding, as deduced from Fourier transform infrared absorption and Raman spectroscopy analyses. The specimens feature a large optical band gap, with a maximum of 3.74 eV obtained at x = 0.71. All the a-Si1-xCx:H samples exhibit low-temperature (77 K) photoluminescence (PL), whereas only the carbon-rich samples (x ≥ 0.55) exhibit room-temperature (300 K) PL. Such behaviour is explained by the static disorder model. The high film quality in our work can be attributed to the high efficiency of the custom-designed ICP reactor in creating the reactive radical species required for film growth. This technique can be used for a broader range of material systems where precise compositional control is required. © 2008 IOP Publishing Ltd.

Relevance:

90.00%

Publisher:

Abstract:

Nanocrystalline silicon carbide (nc-SiC) films are prepared by low-frequency inductively coupled plasma chemical vapor deposition from the feedstock gases silane and methane, diluted with hydrogen, at a substrate temperature of 500 °C. The effect of different hydrogen dilution ratios X [hydrogen flow (sccm) / silane + methane flow (sccm)] on the growth of nc-SiC films is investigated by X-ray diffraction, scanning electron microscopy, Fourier transform infrared (FTIR) spectroscopy, and X-ray photoelectron spectroscopy (XPS). At a low hydrogen dilution ratio X, cubic silicon carbide is the main crystal phase, whereas at a high hydrogen dilution ratio X, hexagonal silicon carbide is the main crystal phase. The SiC crystal-phase transformation may be explained by the different surface mobility of the reactive Si-based and C-based radicals deposited at different hydrogen dilution ratios X. The FTIR and XPS analyses show that Si-C bonds are the main bonds in the films and that the elemental composition of the SiC is nearly stoichiometric, with an almost equal share of silicon and carbon atoms.

Relevance:

90.00%

Publisher:

Abstract:

Silylated kaolinites were synthesized at 80°C without the use of inert-gas protection. The method presented started with mechanical grinding of kaolinite, followed by grafting with 3-aminopropyltriethoxysilane (APTES). The mechanical grinding treatment destroyed the ordered sheets of kaolinite, formed fine fragments and generated broken bonds (undercoordinated metal ions). These broken bonds served as new sites for condensation with APTES. Fourier transform infrared spectroscopy (FTIR) confirmed the presence of –CH2 groups from APTES. 29Si cross-polarization magic-angle spinning nuclear magnetic resonance spectroscopy (29Si CP/MAS NMR) showed that the principal bonding mechanism between APTES and kaolinite fitted a tridentate silylation model (T3), with a chemical shift at 66.7 ppm. The silane loadings of the silylated samples were estimated from the mass loss obtained from TG-DTG curves. The results showed that the kaolinite ground for 6 hours could be grafted with the most APTES (7.0%) using cyclohexane as the solvent. The amount of APTES loaded in the silylated samples obtained in different solvents decreased in the order: nonpolar solvent > polar solvent with low dielectric constant (toluene) > polar solvent with high dielectric constant (ethanol).

Relevance:

90.00%

Publisher:

Abstract:

This paper investigates several competing procedures for computing the prices of vanilla European options, such as puts, calls and binaries, in which the underlying model has a characteristic function that is known in semi-closed form. The algorithms investigated here are the half-range Fourier cosine series, the half-range Fourier sine series and the full-range Fourier series. Their performance is assessed in simulation experiments in which an analytical solution is available, and also for a simple affine model of stochastic volatility in which there is no closed-form solution. The results suggest that the half-range sine series approximation is the least effective of the three proposed algorithms. It is rather more difficult to distinguish between the performance of the half-range cosine series and the full-range Fourier series. However, there are two clear differences. First, when the interval over which the density is approximated is relatively large, the full-range Fourier series is at least as good as the half-range Fourier cosine series, and outperforms the latter in pricing out-of-the-money call options, in particular those with maturities of three months or less. Second, the computational time required by the half-range Fourier cosine series is uniformly longer than that required by the full-range Fourier series for an interval of fixed length. Taken together, these two conclusions make a case for pricing options using a full-range Fourier series rather than a half-range Fourier cosine series when a large number of options must be priced in as short a time as possible.
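The core of the half-range cosine approach is recovering a density on a truncated interval directly from the characteristic function. A minimal sketch (standard normal test case with a known characteristic function, not the paper's pricing code):

```python
import numpy as np

# Half-range Fourier cosine recovery of a density from its characteristic
# function, illustrated on the standard normal so the result can be
# checked against the known density.
a, b, n_terms = -10.0, 10.0, 64
phi = lambda u: np.exp(-0.5 * u ** 2)            # N(0,1) characteristic function

k = np.arange(n_terms)
u = k * np.pi / (b - a)
# Cosine coefficients F_k ~ (2/(b-a)) Re{ phi(u_k) exp(-i u_k a) };
# the k = 0 term enters the series with weight 1/2.
coeffs = (2.0 / (b - a)) * np.real(phi(u) * np.exp(-1j * u * a))
coeffs[0] *= 0.5

x = np.linspace(-3, 3, 7)
density = np.cos(np.outer(x - a, u)) @ coeffs    # reconstructed f(x)

exact = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
max_err = float(np.max(np.abs(density - exact)))
print(max_err)
```

Option prices then follow by integrating the payoff against this series term by term; the trade-offs between this and the full-range series are exactly what the paper's experiments measure.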

Relevance:

90.00%

Publisher:

Abstract:

The breakdown of the usual method of Fourier transforms in the problem of an external line crack in a thin infinite elastic plate is discovered and the correct solution of this problem is derived using the concept of a generalised Fourier transform of a type discussed first by Golecki [1] in connection with Flamant's problem.

Relevance:

90.00%

Publisher:

Abstract:

We study the Segal-Bargmann transform on M(2). The range of this transform is characterized as a weighted Bergman space. In a similar fashion Poisson integrals are investigated. Using a Gutzmer-type formula we characterize the range as a class of functions extending holomorphically to an appropriate domain in the complexification of M(2). We also prove a Paley-Wiener theorem for the inverse Fourier transform.

Relevance:

90.00%

Publisher:

Abstract:

Using path integrals, we derive an exact expression, valid at all times t, for the distribution P(Q,t) of the heat fluctuations Q of a Brownian particle trapped in a stationary harmonic well. We find that P(Q,t) can be expressed in terms of a modified Bessel function of zeroth order that, in the limit t → infinity, exactly recovers the heat distribution function obtained recently by Imparato et al. [Phys. Rev. E 76, 050101(R) (2007)] from the approximate solution to a Fokker-Planck equation. This long-time result is in very good agreement with experimental measurements carried out by the same group on the heat effects produced by single micron-sized polystyrene beads in a stationary optical trap. An earlier exact calculation of the heat distribution function of a trapped particle moving at a constant speed v was carried out by van Zon and Cohen [Phys. Rev. E 69, 056121 (2004)]; however, that calculation does not provide an expression for P(Q,t) itself, but only its Fourier transform (which cannot be analytically inverted), nor can it be used to obtain P(Q,t) for the case v = 0.

Relevance:

90.00%

Publisher:

Abstract:

A complete solution to the fundamental problem of delineation of an ECG signal into its component waves by filtering the discrete Fourier transform of the signal is presented. The set of samples in a component wave is transformed into a complex sequence with a distinct frequency band. The filter characteristics are determined from the time signal itself. Multiplication of the transformed signal with a complex sinusoidal function allows the use of a bank of low-pass filters for the delineation of all component waves. Data from about 300 beats have been analysed and the results are highly satisfactory both qualitatively and quantitatively.
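The shift-and-low-pass idea described above can be sketched generically. Assumptions: a synthetic two-tone signal rather than a real ECG, and an ideal DFT-domain low-pass filter; this is an illustration of the demodulation principle, not the paper's delineation algorithm.

```python
import numpy as np

# Multiplying a signal by a complex sinusoid translates a band of interest
# to baseband, where a single low-pass filter can extract it.
fs, n = 500, 1000                                # sample rate (Hz), samples
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 40 * t)

f0 = 40.0                                        # band to extract, Hz
shifted = signal * np.exp(-2j * np.pi * f0 * t)  # move 40 Hz down to 0 Hz

# Ideal low-pass at 2 Hz applied in the DFT domain.
spectrum = np.fft.fft(shifted)
freqs = np.fft.fftfreq(n, d=1 / fs)
spectrum[np.abs(freqs) > 2.0] = 0.0
baseband = np.fft.ifft(spectrum)

# Shifting back up recovers the 40 Hz component (factor 2 restores the
# amplitude split between positive and negative frequencies).
recovered = 2.0 * np.real(baseband * np.exp(2j * np.pi * f0 * t))
err = float(np.max(np.abs(recovered - np.sin(2 * np.pi * 40 * t))))
print(err)
```

With one low-pass filter reused at several shift frequencies, each component wave's band can be isolated in turn, which is the structure of the filter bank the paper describes.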