974 results for High-Order Accurate Scheme


Relevance:

30.00%

Publisher:

Abstract:

Pan-viral DNA array (PVDA) and high-throughput sequencing (HTS) are useful tools for identifying novel viruses of emerging diseases. However, both techniques have difficulty identifying viruses in clinical samples because of the host genomic nucleic acid content (hg/cont). Both propidium monoazide (PMA) and ethidium bromide monoazide (EMA) have the capacity to bind free DNA/RNA but are cell-membrane-impermeable; thus, neither can bind protected nucleic acid such as viral genomes within intact virions. EMA/PMA-modified genetic material cannot be amplified by enzymes. In order to assess the potential of EMA/PMA to lower the amount of amplifiable hg/cont in samples and improve virus detection, serum and lung tissue homogenates were spiked with porcine reproductive and respiratory syndrome virus (PRRSV) and processed with EMA/PMA. In addition, PRRSV RT-qPCR-positive clinical samples were also tested. EMA/PMA treatments significantly decreased amplifiable hg/cont and significantly increased the number of PVDA-positive probes and their signal intensity compared to untreated spiked lung samples. EMA/PMA treatments also increased the sensitivity of HTS by increasing the number of specific PRRSV reads and the PRRSV coverage percentage. Interestingly, EMA/PMA treatments significantly increased the sensitivity of PVDA and HTS in two out of three clinical tissue samples. Thus, EMA/PMA treatments offer a new approach to lower the amplifiable hg/cont in clinical samples and increase the success of PVDA and HTS in identifying viruses.

Relevance:

30.00%

Publisher:

Abstract:

Objective: The main objective of this thesis was the implementation and validation of the feasibility, in humans, of the paradigm of modulation of the acoustic startle reflex by a short silent gap (GPIAS), for use as an objective measure of tinnitus. To this end, three experiments were conducted. Experiment 1 aimed to validate the inhibition of the startle reflex by a short silent gap in normal-hearing human participants (without tinnitus) during the presentation of background noise centred on high and on low frequencies, in order to determine the optimal parameters of the paradigm. Experiment 2 aimed to validate the accuracy and reliability of a psychoacoustic method for characterising tinnitus (intensity and frequency matching). Finally, Experiment 3 aimed to apply the startle-reflex paradigm for objectifying tinnitus to participants with chronic tinnitus, using the techniques developed in Experiments 1 and 2. Methodology: Experiment 1 included 157 participants, each tested in one of the silent-gap duration conditions (5, 25, 50, 100, 200 ms) and in one of two paradigms (silent gap within the background noise or following it), using background noise in high and in low frequencies. Experiment 2 included two groups of participants with tinnitus, one of musicians (n=16) and one without musical experience (n=16), as well as a group of simulators without tinnitus (n=18). All were assessed on their ability to match the frequency and intensity of their tinnitus, and the measures were repeated in a subgroup of participants several weeks later. Experiment 3 included 15 participants with tinnitus and 17 controls assessed with the paradigm of startle-reflex inhibition by a short silent gap (GPIAS). The psychoacoustic parameters of the tinnitus were also measured. All measures were repeated several months later in a subgroup of participants. Results: Experiment 1: the paradigm of inhibition of the acoustic startle reflex by a short silent gap is applicable to normal-hearing humans. Experiment 2: computerised psychoacoustic measures of tinnitus, including frequency and intensity matching, are accurate and reliable measures of the tinnitus percept. Experiment 3: an inhibition deficit in the GPIAS paradigm was found in the group of participants with tinnitus for both high- and low-frequency background noise, at test and at retest. The frequency-matching measures revealed tinnitus with a predominant frequency of about 16,000 Hz in most participants. Discussion: It is possible to apply the paradigm of inhibition of the acoustic startle reflex by a short silent gap to human participants with tinnitus, as it is used in animal research to "objectify" the presence of tinnitus. However, the measured inhibition deficit is not specific to the tinnitus frequency when validated against the psychoacoustic matching data. Our results raise questions about the original interpretation of the paradigm for detecting the presence of tinnitus in animals.

Relevance:

30.00%

Publisher:

Abstract:

In order to characterise the laser-ablation process from high-Tc superconductors, the time evolution of the plasma produced from a GdBa2Cu3O7 superconducting sample by a Q-switched Nd:YAG laser has been studied using spectroscopic and ion-probe techniques. A fairly large delay is observed in the onset of emission from oxide species compared with that from atoms and ions of the constituent elements present in the plasma, and emission from oxides and ions decays faster than that from neutral atoms. These observations support the view that oxides are not produced directly from the target, but are formed by recombination as the plasma cools. Plasma parameters such as temperature and velocity are also evaluated.

Relevance:

30.00%

Publisher:

Abstract:

The wavelength dependence of saturable absorption (SA) and reverse saturable absorption (RSA) of zinc phthalocyanine was studied using 10 Hz, 8 ns pulses from a tunable laser in the wavelength range of 520–686 nm, which includes the rising edge of the Q band in the electronic absorption spectrum. The nonlinear response is wavelength dependent, and switching from RSA to SA is observed as the excitation wavelength moves from the low-absorption window to the higher-absorption region near the Q band. The response changes back from SA to RSA on moving further towards the infrared. Values of the imaginary part of the third-order susceptibility are calculated for various wavelengths in this range. This study is important for identifying the spectral range over which the nonlinear material acts as an RSA-based optical limiter.


Relevance:

30.00%

Publisher:

Abstract:

Ferrofluids belonging to the series NixFe1-xFe2O4 were synthesised by two different procedures: one by standard co-precipitation techniques, the other by co-precipitation for particle synthesis followed by dispersion aided by high-energy ball milling, with a view to understanding the effect of strain and size anisotropy on the magneto-optical properties of ferrofluids. Birefringence measurements were carried out using a standard ellipsometer. The birefringence signal obtained for the chemically synthesised samples was satisfactorily fitted to the standard second Langevin function. The ball-milled ferrofluids deviated from this fit, and their birefringence was enhanced by an order of magnitude. This large enhancement in birefringence cannot be attributed to an increase in grain size, considering that the grain sizes of the samples synthesised by both routes are comparable; instead, it can be attributed to the lattice-strain-induced shape anisotropy (oblateness) arising from the high-energy ball-milling process. Thus magneto-optical (MO) signals can be tuned by the ball-milling process, which can find potential applications.
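
The field dependence mentioned here is conventionally modelled with the second Langevin function, dn(H) = dn_sat * L2(xi) with xi = mu*H/kT; the following is a minimal NumPy sketch of that standard functional form, not a fit to this paper's data:

```python
import numpy as np

def langevin(x):
    """First Langevin function L(x) = coth(x) - 1/x."""
    x = np.asarray(x, dtype=float)
    return 1.0 / np.tanh(x) - 1.0 / x

def second_langevin(x):
    """Second Langevin function L2(x) = 1 - 3*L(x)/x; the birefringence
    of a dilute ferrofluid is modelled as dn(H) = dn_sat * L2(mu*H/kT)."""
    x = np.asarray(x, dtype=float)
    return 1.0 - 3.0 * langevin(x) / x

# L2 rises from ~0 at weak fields towards 1 at magnetic saturation.
weak, strong = second_langevin(0.05), second_langevin(50.0)
```

In a fit, dn_sat and the magnetic moment mu are the free parameters; a systematic deviation of the data from this curve (as reported for the ball-milled samples) signals physics beyond the simple rigid-dipole model.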

Relevance:

30.00%

Publisher:

Abstract:

Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms that address significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins; the number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes, columns represent experimental conditions (different tissue types or time points), and the entries are real values. Through the analysis of gene expression data it is possible to determine behavioural patterns of genes, such as the similarity of their behaviour, the nature of their interactions, their respective contributions to the same pathways, and so on. Genes participating in the same biological process exhibit similar expression patterns. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data; to overcome the problems associated with clustering, biclustering was introduced. Biclustering refers to the simultaneous clustering of both rows and columns of a data matrix.
Clustering is a global model, whereas biclustering is a local model. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise. It is therefore necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous as in the original matrix, and biclusters are not disjoint. Computation of biclusters is costly, because all combinations of rows and columns must be considered in order to find all biclusters: the search space for the biclustering problem is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n is more than 3000. The biclustering problem is NP-hard. Biclustering is a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering: ten algorithms are developed for the identification of coherent biclusters from gene expression data. All of these algorithms use a measure called the mean squared residue to search for biclusters. The objective is to identify biclusters of maximum size with mean squared residue below a given threshold.
All of these algorithms begin the search from tightly co-regulated submatrices called seeds, generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint-based, greedy and metaheuristic. Constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. In the metaheuristic approaches, Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) are used for the identification of biclusters. These algorithms are applied to the Yeast and Lymphoma datasets. Biologically relevant and statistically significant biclusters are identified by all of these algorithms and validated against the Gene Ontology database. All of these algorithms are compared with other biclustering algorithms, and the algorithms developed in this work overcome some of the problems associated with existing ones. With some of the algorithms developed in this work, biclusters with very high row variance, higher than that of any other algorithm using the mean squared residue, are identified from both the Yeast and Lymphoma datasets. Such biclusters, which reflect significant changes in expression level, are highly relevant biologically.
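
The mean squared residue score used throughout can be sketched in a few lines; this is a minimal NumPy version of the standard Cheng-Church definition, not the thesis's own implementation:

```python
import numpy as np

def mean_squared_residue(sub):
    """Cheng-Church mean squared residue of a bicluster (rows = genes,
    columns = conditions); a low MSR indicates coherent expression."""
    row_means = sub.mean(axis=1, keepdims=True)
    col_means = sub.mean(axis=0, keepdims=True)
    residue = sub - row_means - col_means + sub.mean()
    return float((residue ** 2).mean())

# A perfectly additive bicluster (each row a shifted copy of the same
# profile) has MSR ~ 0, so it passes any positive threshold.
coherent = np.array([[1.0, 2.0, 3.0],
                     [2.0, 3.0, 4.0],
                     [5.0, 6.0, 7.0]])
```

A greedy search in the Cheng-Church style repeatedly removes the row or column contributing most to the residue until the score drops below the chosen threshold.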

Relevance:

30.00%

Publisher:

Abstract:

This thesis is an outcome of investigations carried out on the development of an Artificial Neural Network (ANN) model to implement the 2-D DFT at high speed. A new definition of the 2-D DFT relation is presented. This new definition enables the DFT computation to be organized in stages involving only real additions, except at the final stage of computation; the number of stages is always fixed at 4. Two different strategies are proposed: 1) a visual representation of 2-D DFT coefficients, and 2) a neural network approach. The visual representation scheme can be used to compute, analyze and manipulate 2-D signals such as images in the frequency domain in terms of symbols derived from the 2x2 DFT, which in turn can be represented in terms of real data. This approach can help analyze signals in the frequency domain even without computing the DFT coefficients. A hierarchical neural network model is developed to implement the 2-D DFT. Presently, this model is capable of implementing the 2-D DFT for orders N such that ((N))4 = 2, i.e., N mod 4 = 2. The model can be developed into one that implements the 2-D DFT for any order N up to a maximum set by hardware constraints. The reported method shows potential for implementing the 2-D DFT in hardware as a VLSI/ASIC.
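
The thesis's real-addition formulation is not reproduced here, but the staged structure it exploits rests on the separability of the 2-D DFT into 1-D transforms over each axis; a NumPy sketch of that decomposition:

```python
import numpy as np

def dft2_by_stages(x):
    """2-D DFT computed as 1-D DFTs over columns, then over rows:
    the separability that staged/hierarchical implementations exploit."""
    return np.fft.fft(np.fft.fft(x, axis=0), axis=1)

x = np.arange(16.0).reshape(4, 4)
```

Because the two 1-D passes commute, either ordering yields the same result as a direct 2-D transform, which is what makes hierarchical hardware pipelines possible.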

Relevance:

30.00%

Publisher:

Abstract:

A thunderstorm, resulting from vigorous convective activity, is one of the most spectacular weather phenomena in the atmosphere. A common feature of the weather during the pre-monsoon season over the Indo-Gangetic Plain and northeast India is the outburst of severe local convective storms, commonly known as 'Nor'westers' (as they move from northwest to southeast). These severe thunderstorms, associated with thunder, squall lines, lightning and hail, cause extensive agricultural losses, damage to structures and loss of life. In this paper, sensitivity experiments have been conducted with the Non-hydrostatic Mesoscale Model (NMM) to test the impact of three microphysical schemes in capturing the severe thunderstorm event that occurred over Kolkata on 15 May 2009. The results show that the WRF-NMM model with the Ferrier microphysical scheme reproduces the cloud and precipitation processes more realistically than the other schemes. We have also attempted to diagnose four severe thunderstorms that occurred during the pre-monsoon seasons of 2006, 2007 and 2008 through the simulated radar reflectivity fields from the NMM model with the Ferrier microphysics scheme, and validated the model results against Kolkata Doppler Weather Radar (DWR) observations. The composite radar reflectivity simulated by the WRF-NMM model clearly shows the severe thunderstorm movement seen in the DWR imagery, but fails to capture the observed intensity. These analyses demonstrate the capability of the high-resolution WRF-NMM model in simulating severe thunderstorm events and indicate that the 3 km model improves upon current abilities in simulating severe thunderstorms over the east Indian region.


Relevance:

30.00%

Publisher:

Abstract:

Over-sampling sigma-delta analogue-to-digital converters (ADCs) are one of the key building blocks of state-of-the-art wireless transceivers. In sigma-delta modulator design, the scaling coefficients determine the overall signal-to-noise ratio, so selecting the optimum coefficient values is very important. To this end, this paper addresses the design of a fourth-order multi-bit sigma-delta modulator for a Wireless Local Area Network (WLAN) receiver with a feed-forward path, in which the optimum coefficients are selected using a genetic algorithm (GA)-based search method. In particular, the proposed converter makes use of a low-distortion swing-suppression SDM architecture, which is highly suitable for low oversampling ratios, to attain high linearity over a wide bandwidth. The focus of this paper is the identification of the best coefficients for the proposed topology, as well as the optimization of a set of system parameters, in order to achieve the desired signal-to-noise ratio. The GA-based search engine is a stochastic search method that can find the optimum solution within the given constraints.
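
The coefficient search described here can be illustrated with a generic real-coded GA; the fitness function below is a toy stand-in for the modulator's SNR, and the coefficient count and bounds are assumptions for illustration, not taken from the paper:

```python
import random

def ga_search(fitness, bounds, pop_size=30, generations=60,
              mutation=0.1, seed=1):
    """Minimal real-coded genetic algorithm: keep the better half
    (elitism), fill the rest with uniform-crossover children, and
    apply clamped Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            for i, (lo, hi) in enumerate(bounds):
                if rng.random() < mutation:
                    child[i] = min(hi, max(lo, child[i] +
                                           rng.gauss(0.0, 0.1 * (hi - lo))))
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Toy stand-in for the modulator's SNR as a function of four scaling
# coefficients (peak at 0.5, 0.4, 0.3, 0.2) -- purely illustrative.
TARGET = (0.5, 0.4, 0.3, 0.2)
best = ga_search(lambda c: -sum((ci - ti) ** 2 for ci, ti in zip(c, TARGET)),
                 bounds=[(0.0, 1.0)] * 4)
```

In a real design flow the fitness would come from a behavioural simulation of the modulator (SNR over the signal band), which is exactly where a stochastic search pays off: the objective is expensive and non-differentiable.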

Relevance:

30.00%

Publisher:

Abstract:

As technologies for the fabrication of high-quality microarrays advance rapidly, quantification of microarray data becomes a major task. Gridding is the first step in the analysis of microarray images, locating the subarrays and the individual spots within each subarray. For accurate gridding of high-density microarray images in the presence of contamination and background noise, precise calculation of parameters is essential. This paper presents an accurate, fully automatic gridding method for locating subarrays and individual spots using the intensity projection profile of the most suitable subimage. The method processes the image without any user intervention and, unlike many other commercial and academic packages, does not demand any input parameters. According to the results obtained, the accuracy of our algorithm is between 95% and 100% for microarray images with a coefficient of variation less than two. Experimental results show that the method is capable of gridding microarray images with irregular spots, varying surface intensity distribution, and more than 50% contamination.
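
The projection-profile idea can be sketched simply: sum the image along one axis and place grid lines at the minima that separate successive spot peaks. A minimal NumPy illustration on a synthetic image (the paper's subimage selection and parameter estimation are not reproduced):

```python
import numpy as np

def projection_grid(image, axis=1):
    """Gridding from an intensity projection profile: sum the image
    along `axis` and place grid lines at the local minima that
    separate successive spot peaks."""
    profile = image.sum(axis=axis)
    cuts = [i for i in range(1, len(profile) - 1)
            if profile[i] <= profile[i - 1] and profile[i] <= profile[i + 1]]
    return profile, cuts

# Synthetic image: two bright spot rows (2-3 and 6-7) with dark gaps.
img = np.zeros((10, 10))
img[2:4, :] = 1.0
img[6:8, :] = 1.0
profile, cuts = projection_grid(img, axis=1)
```

On real images the profile would first be smoothed, and the same procedure applied along the other axis yields the column grid.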

Relevance:

30.00%

Publisher:

Abstract:

Low-grade and high-grade gliomas are tumors that originate in the glial cells. The main challenges in brain tumor diagnosis are whether a tumor is benign or malignant, primary or metastatic, and low or high grade. From a patient's MRI alone, a radiologist cannot differentiate a low-grade from a high-grade glioma, because the two are almost visually similar; biopsy confirms a diagnosis of low grade versus high-grade, infiltrative features. In this paper, textural descriptions of grade I and grade III glioma are extracted using first-order statistics and the Gray Level Co-occurrence Matrix (GLCM) method. Textural features are extracted from 16x16 subimages of the segmented Region of Interest (ROI). In the proposed method, first-order statistical features such as contrast, intensity, entropy, kurtosis and spectral energy, together with the extracted GLCM features, showed promising results. The ranges of these first-order statistics and GLCM-based features are highly discriminant between grade I and grade III. This study provides statistical textural information on grade I and grade III glioma that is very useful for further classification and analysis, and thus assists the radiologist to a great extent.
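
A co-occurrence matrix and one Haralick feature can be computed in a few lines; this is a generic NumPy sketch of the GLCM definition (the displacement, number of gray levels and feature set chosen here are illustrative, not the paper's):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray Level Co-occurrence Matrix for one displacement (dx, dy),
    normalised so the entries sum to 1; img holds integer gray levels."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Haralick contrast: sum over i, j of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float((p * (i - j) ** 2).sum())

# Four uniform 2x2 blocks: co-occurrences are mostly on the diagonal.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
p = glcm(img, dx=1, dy=0)
```

Other Haralick features (energy, homogeneity, correlation) are computed from the same normalised matrix, and discriminative power comes from comparing their ranges across tumor grades.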

Relevance:

30.00%

Publisher:

Abstract:

In natural languages with a high degree of word-order freedom, syntactic phenomena like dependencies (subordinations) or valencies do not depend on the word order (or on the individual positions of the individual words). This means that some permutations of sentences of these languages are, in some important sense, syntactically equivalent. Here we study this phenomenon in a formal way. Various types of j-monotonicity for restarting automata can serve as parameters for the degree of word-order freedom and for the complexity of word order in sentences (languages). We combine two types of parameters on computations of restarting automata: 1. the degree of j-monotonicity, and 2. the number of rewrites per cycle. We study these notions formally in order to obtain an adequate tool for modelling and comparing formal descriptions of (natural) languages with different degrees of word-order freedom and word-order complexity.

Relevance:

30.00%

Publisher:

Abstract:

Information display technology is a rapidly growing research and development field. Using state-of-the-art technology, optical resolution can be increased dramatically by the organic light-emitting diode, since the light-emitting layer is very thin: under 100 nm. The main question is what pixel size is technologically achievable. The next generation of displays will consider three-dimensional image display. In 2D, one considers vertical and horizontal resolution; in 3D or holographic images there is another dimension: depth. The major requirement is high resolution in the horizontal dimension, in order to sustain the third dimension using special lenticular glass or barrier masks that present separate views to each eye. A high-resolution 3D display offers hundreds of different views of objects or landscapes. OLEDs have the potential to be a key technology for information displays in the future. The display technology presented in this work promises to bring bright-colour 3D flat-panel displays into use in a unique way. Unlike the conventional TFT matrix, OLED displays have constant brightness and colour, independent of the viewing angle, i.e. the observer's position in front of the screen. A sandwich (just 0.1 micron thick) of organic thin films between two conductors makes an OLED device. These special materials are named electroluminescent organic semiconductors (or organic photoconductors (OPC)). When electrical current is applied, a bright light is emitted (electrophosphorescence) from the formed organic light-emitting diode. Usually an ITO layer is used as the transparent electrode of an OLED. Displays of this type were the first to reach volume manufacture, and only a few products are available in the market at present. The key challenges that OLED technology faces in these application areas are: producing high-quality white light; achieving low manufacturing costs; and increasing efficiency and lifetime at high brightness. Looking towards the future, by combining OLEDs with specially constructed surface lenses and proper image-management software, it will be possible to achieve 3D images.