Abstract:
Background and Objective: Oral submucous fibrosis, a collagen disorder, has been attributed to arecoline present in the saliva of betel quid chewers. However, the molecular basis of the action of arecoline in the pathogenesis of oral submucous fibrosis is poorly understood. The aim of our study was to elucidate the mechanism underlying the action of arecoline on gene expression in oral fibroblasts. Material and Methods: Human keratinocytes (HaCaT cells) and primary human gingival fibroblasts were treated with arecoline in combination with various pathway inhibitors, and the expression of transforming growth factor-beta isoform genes and of collagen isoforms was assessed using reverse transcription polymerase chain reaction analysis. Results: We observed induction of transforming growth factor-beta2 by arecoline in HaCaT cells; this induction was caused by activation of the M3 muscarinic acetylcholine receptor via calcium induction and the protein kinase C pathway. Most importantly, we showed that transforming growth factor-beta2 was significantly overexpressed in oral submucous fibrosis tissues (p = 0.008), with a median of 2.13 (n = 21) compared with 0.75 (n = 18) in normal buccal mucosal tissues. Furthermore, arecoline down-regulated the expression of collagens 1A1 and 3A1 in human primary gingival fibroblasts; however, these collagens were induced by arecoline in the presence of spent medium of cultured human keratinocytes. Treatment with a transforming growth factor-beta blocker, transforming growth factor-beta1 latency-associated peptide, reversed this up-regulation of collagen, suggesting a role for profibrotic cytokines, such as transforming growth factor-beta, in the induction of collagens. Conclusion: Taken together, our data highlight the importance of arecoline-induced epithelial changes in the pathogenesis of oral submucous fibrosis.
Abstract:
Extensive measurements of aerosol radiative and microphysical properties were made at an island location, Minicoy (8.3° N, 73.04° E) in the southern Arabian Sea. A large variability in aerosol characteristics, associated with changes in air mass and precipitation characteristics, was observed. Six distinct transport pathways were identified on the basis of cluster analysis. The Indo-Gangetic Plain, along with the northern Arabian Sea and west Asia (NWA), was identified as the region with the highest potential for aerosol mass loading at the island; this estimate is based on concentration-weighted trajectory analysis as well as cluster analysis. Dust transport from the NWA region was found to make a substantial contribution to the supermicron mass fraction. The observed black carbon mass mixing ratios were lower than in previous measurements over this region. Consequently, the atmospheric radiative forcing efficiency was low, in the range 10-28 W m⁻².
Abstract:
Context-sensitive pointer analyses based on Whaley and Lam's bddbddb system have been shown to scale to large Java programs. We provide a technique to incorporate flow sensitivity for Java fields into one such analysis and obtain an escape analysis based on it. First, we express an intraprocedural field-flow-sensitive analysis, using Fink et al.'s Heap Array SSA form, in Datalog. We then extend this analysis interprocedurally by introducing two new φ functions for Heap Array SSA form and adding the deduction rules corresponding to them. Adding a few more rules gives us an escape analysis. We describe two types of field flow sensitivity: partial (PFFS) and full (FFFS), the former without strong updates to fields and the latter with strong updates. We compare these analyses with two different (field-flow-insensitive) versions of the Whaley-Lam analysis: one flow sensitive for locals (FS) and the other flow insensitive for locals (FIS). We have implemented this analysis on the bddbddb system, using the SOOT open-source framework as a front end, and have run it on a set of 15 Java programs. Our experimental results show that the time taken by our field-flow-sensitive analyses is comparable to that of the field-flow-insensitive versions, and is considerably better in some cases. Our PFFS analysis achieves average reductions of about 23% and 30% in the size of the points-to sets at load and store statements respectively, and discovers 71% more "caller-captured" objects than FIS.
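For flavour, the inclusion-based core of such an analysis can be written as a Datalog-like fixpoint over relations. The sketch below is a toy, field-flow-insensitive, Andersen-style analysis in Python; it ignores Heap Array SSA, context sensitivity and escape analysis, and all relation and variable names are invented for illustration.

```python
# Toy Andersen-style inclusion-based points-to analysis as a fixpoint over
# Datalog-like relations. Illustrative only; not the paper's bddbddb encoding.

def points_to(new, assign, store, load):
    pts = {}     # variable -> set of abstract objects
    heap = {}    # (object, field) -> set of abstract objects

    def extend(rel, key, objs):
        # add objs to rel[key]; report whether anything changed
        cur = rel.setdefault(key, set())
        before = len(cur)
        cur |= objs
        return len(cur) != before

    for x, o in new:                                  # x = new O
        pts.setdefault(x, set()).add(o)
    changed = True
    while changed:                                    # iterate rules to fixpoint
        changed = False
        for x, y in assign:                           # x = y
            changed |= extend(pts, x, pts.get(y, set()))
        for x, f, y in store:                         # x.f = y
            for o in pts.get(x, set()):
                changed |= extend(heap, (o, f), pts.get(y, set()))
        for x, y, f in load:                          # x = y.f
            for o in pts.get(y, set()):
                changed |= extend(pts, x, heap.get((o, f), set()))
    return pts

# Example: p = new A; q = new B; p.f = q; r = p.f  =>  r points to B
print(points_to(new=[("p", "A1"), ("q", "B1")], assign=[],
                store=[("p", "f", "q")], load=[("r", "p", "f")]))
```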
Abstract:
The pattern of expression of the genes involved in the utilization of aryl beta-glucosides such as arbutin and salicin is different in the genus Shigella compared to Escherichia coli. The results presented here indicate that the homologue of the cryptic bgl operon of E. coli is conserved in Shigella sonnei and is the primary system involved in beta-glucoside utilization in this organism. The organization of the bgl genes in S. sonnei is similar to that of E. coli; however, there are three major differences in their pattern of expression. (i) The bglB gene, encoding phospho-beta-glucosidase B, is insertionally inactivated in S. sonnei. As a result, mutational activation of the silent bgl promoter confers an arbutin-positive (Arb+) phenotype on the cells in a single step, whereas acquiring a salicin-positive (Sal+) phenotype additionally requires reversion or suppression of the bglB mutation. (ii) Unlike in E. coli, the majority of the activating mutations (conferring the Arb+ phenotype) map within the unlinked hns locus, whereas activation of the E. coli bgl operon under the same conditions is predominantly due to insertions within the bglR locus. (iii) Although the bgl promoter is silent in the wild-type strain of S. sonnei (as in E. coli), transcriptional and functional analyses indicated a higher basal level of transcription of the downstream genes. This was correlated with a 1 bp deletion within the putative Rho-independent terminator present in the leader sequence preceding the homologue of the bglG gene. The possible evolutionary implications of these differences for the maintenance of the genes in the cryptic state are discussed.
Abstract:
Recently reported experimental results on the sensitivity of Lau-fringe rotation to the spatial coherence of the source are theoretically analyzed and explained on the basis of coherence theory. A theoretical plot of the rotation angle required for the Lau fringes to vanish is obtained as a function of the coherence length of the illumination used in the Lau experiment. The theoretical results compare well with the experimental observations. The analysis, together with the experiment, could form the basis of a simple method for measuring the coherence length of the illumination in a plane.
Abstract:
To deal with the atmospheric turbulence that limits the resolution of long-exposure images obtained with large ground-based telescopes, a simplified model of a speckle pattern is presented that reduces the complexity of calculating field correlations of very high order. Focal-plane correlations are used instead of correlations in the spatial frequency domain. General triple correlations for a point source and for a binary are calculated, and it is shown that they are not a strong function of the binary separation. For binary separations close to the diffraction limit of the telescope, the genuine triple correlation technique ensures a better signal-to-noise ratio (SNR) than the near-axis Knox-Thompson technique. The simplifications allow a complete analysis of the noise properties at all light levels.
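As a concrete illustration, a focal-plane triple correlation of a stack of speckle frames can be estimated as below. This is a generic, textbook-style estimator assuming periodic image edges; it is not the paper's simplified model, and all names are illustrative.

```python
import numpy as np

def triple_correlation(frames, d1, d2):
    """Estimate the focal-plane triple correlation <I(x) I(x+d1) I(x+d2)>,
    averaged over position x and over speckle frames.

    frames: iterable of 2-D intensity arrays; d1, d2: pixel-lag tuples (dy, dx).
    Generic estimator for illustration only.
    """
    acc, n = 0.0, 0
    for img in frames:
        shifted1 = np.roll(img, d1, axis=(0, 1))   # I(x + d1), periodic edges
        shifted2 = np.roll(img, d2, axis=(0, 1))   # I(x + d2)
        acc += np.mean(img * shifted1 * shifted2)
        n += 1
    return acc / n

# Example on synthetic frames (stand-ins for short-exposure speckle images).
rng = np.random.default_rng(0)
frames = [rng.exponential(size=(64, 64)) for _ in range(10)]
print(triple_correlation(frames, (0, 3), (2, 1)))
```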
Abstract:
For the specific case of binary stars, this paper presents signal-to-noise ratio (SNR) calculations for the detection of the parity (the side on which the brighter component lies) of the binary using the double correlation method, a focal-plane version of the well-known Knox-Thompson method used in speckle interferometry. It is shown that the SNR for parity detection using double correlation depends linearly on the binary separation; this new result was entirely missed by previous analytical calculations, which dealt with a point source. It is concluded that, for magnitudes relevant to present-day speckle interferometry and for binary separations close to the diffraction limit, speckle masking has the better SNR for parity detection.
Abstract:
We propose a novel formulation of points-to analysis as a system of linear equations. With this, the efficiency of points-to analysis can be significantly improved by leveraging advances in solution procedures for systems of linear equations. However, such a formulation is non-trivial: it is made challenging by multiple pointer indirections, address-of operators and multiple assignments to the same variable, and the problem is exacerbated by the need to keep the transformed equations linear. Despite this, we successfully model all pointer operations, proposing a novel inclusion-based, context-sensitive points-to analysis algorithm based on prime factorization. Experimental evaluation on SPEC 2000 benchmarks and two large open-source programs shows that our approach is competitive with state-of-the-art algorithms. With an average memory requirement of a mere 21 MB, our context-sensitive points-to analysis algorithm analyzes each benchmark in 55 seconds on average.
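One way to picture a prime-factorization encoding of points-to sets (a toy reconstruction of the general idea, not the paper's actual algorithm): assign each abstract object a distinct prime, represent a points-to set by the product of its members' primes, implement set union as the least common multiple, and test membership by divisibility.

```python
# Toy sketch: points-to sets as products of distinct primes.
# Illustrative only; not the paper's algorithm or its handling of
# indirections, address-of operators or context sensitivity.
from math import gcd

PRIMES = iter([2, 3, 5, 7, 11, 13, 17, 19, 23, 29])   # extend for more objects
obj_prime = {}

def prime_of(obj):
    # assign a fresh prime to each abstract object on first use
    return obj_prime.setdefault(obj, next(PRIMES))

def singleton(obj):          # p = &a  =>  pts(p) = {a}
    return prime_of(obj)

def union(s, t):             # p = q   =>  pts(p) |= pts(q), via lcm
    return s * t // gcd(s, t)

def contains(s, obj):        # is a in pts(p)?  divisibility test
    return s % prime_of(obj) == 0

# Example: p = &a; q = &b; p = q  gives pts(p) = {a, b}
p = singleton("a")
q = singleton("b")
p = union(p, q)
assert contains(p, "a") and contains(p, "b")
```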
Abstract:
Source localization algorithms in earlier works mostly used non-planar arrays. However, in scenarios such as human-computer or human-television communication, the microphones need to be placed on the computer monitor or the television front panel, i.e., planar arrays must be used. The algorithm proposed in [1] is a linear closed-form source localization algorithm (LCF algorithm) based on time differences of arrival (TDOAs) obtained from the data collected by the microphones; it assumes non-planar arrays. In the current work, the LCF algorithm is applied to planar arrays. The relationship between the error in the source location estimate and the perturbation in the TDOAs is derived using first-order perturbation analysis and validated using simulations. Since erroneous TDOAs perturb both the coefficient matrix and the data matrix used for obtaining the source location, a total least squares (TLS) solution for source localization is also proposed. A sensitivity analysis of the source localization algorithm for planar and non-planar arrays is carried out by introducing perturbations in the TDOAs and the microphone locations. It is shown that, for the same perturbation in the TDOAs or microphone locations, the error in the source location estimate is smaller when a planar array is used instead of the particular non-planar array considered. The location of the reference microphone is shown to be important for obtaining an accurate source location estimate with the LCF algorithm.
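For reference, the total least squares step for a generic linear localization system A x ≈ b (where both A and b are built from noisy TDOAs) can be sketched via the SVD of the augmented matrix. This is the standard TLS recipe, not the specific LCF system of [1]; the matrices below are placeholders.

```python
import numpy as np

def tls_solve(A, b):
    """Total least squares solution of A @ x ~= b.

    Standard TLS via the SVD of the augmented matrix [A | b]: take the right
    singular vector for the smallest singular value and rescale so its last
    component is -1. A and b stand in for the LCF coefficient and data
    matrices built from noisy TDOAs.
    """
    Ab = np.hstack([A, b.reshape(-1, 1)])
    _, _, Vt = np.linalg.svd(Ab)
    v = Vt[-1]                      # singular vector of the smallest singular value
    return -v[:-1] / v[-1]          # TLS estimate of x

# Example: a toy system perturbed in both the coefficient and data matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
print(tls_solve(A + 1e-3 * rng.standard_normal(A.shape),
                b + 1e-3 * rng.standard_normal(b.shape)))
```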
Abstract:
The effect of using a spatially smoothed forward-backward covariance matrix on the performance of weighted eigen-based state-space methods/ESPRIT and weighted MUSIC for direction-of-arrival (DOA) estimation is analyzed. Expressions for the mean-squared error in the estimates of the signal zeros and in the DOA estimates, along with some general properties of the estimates and optimal weighting matrices, are derived. A key result is that optimally weighted MUSIC and optimally weighted state-space methods/ESPRIT have identical asymptotic performance. Moreover, by properly choosing the number of subarrays, the performance of unweighted state-space methods can be significantly improved. It is also shown that the mean-squared error in the DOA estimates is independent of the exact distribution of the source amplitudes. This results in a unified framework for dealing with DOA estimation using a uniformly spaced linear sensor array and with time-series frequency estimation problems.
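To make the construction concrete, here is a textbook sketch of a forward-backward, spatially smoothed covariance estimate followed by a MUSIC pseudo-spectrum for a half-wavelength uniform linear array. Parameter names and the array model are illustrative assumptions, not the paper's derivation.

```python
import numpy as np

def fb_smoothed_covariance(X, subarray_len):
    """Forward-backward spatially smoothed covariance estimate.

    X: snapshots from an M-element uniform linear array, shape (M, N).
    Textbook construction, given as an illustrative sketch.
    """
    M, N = X.shape
    R = X @ X.conj().T / N
    J = np.eye(M)[::-1]                    # exchange matrix
    R_fb = 0.5 * (R + J @ R.conj() @ J)    # forward-backward averaging
    L = subarray_len
    K = M - L + 1                          # number of overlapping subarrays
    return sum(R_fb[k:k + L, k:k + L] for k in range(K)) / K

def music_spectrum(R, n_sources, angles_deg):
    """MUSIC pseudo-spectrum over candidate DOAs for a half-wavelength ULA."""
    L = R.shape[0]
    _, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, :L - n_sources]           # noise-subspace eigenvectors
    th = np.deg2rad(np.asarray(angles_deg))
    k = np.arange(L)[:, None]
    A = np.exp(1j * np.pi * k * np.sin(th))   # steering matrix
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
```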
Abstract:
Reactions of the hexachlorocyclodiphosphazane [MeNPCl3]2 with primary aromatic amines afforded the bisphosphinimine hydrochlorides [(RNH)2(RN)PN(Me)P(NHMe)(NHR)2]+Cl- (R = Ph 1, C6H4Me-4 2 or C6H4OMe-4 3). Dehydrochlorination of 2 and 3 by methanolic KOH yielded the highly basic bisphosphinimines [(RNH)2(RN)PN(Me)P(NMe)(NHR)2] (R = C6H4Me-4 4 or C6H4OMe-4 5). Compounds 1-5 have been characterised by elemental analysis and IR and NMR (H-1, C-13, P-31) spectroscopy. The structure of 2 has been confirmed by single-crystal X-ray diffraction. The short P-N bond lengths and the conformations of the PN units can be explained on the basis of cumulative negative hyperconjugative interactions between nitrogen lone pairs and adjacent P-N sigma* orbitals. Ab initio calculations on the model phosphinimine (H2N)3P=NH and its protonated form suggest that (amino)phosphinimines would be stronger bases than many organic bases such as guanidine.
Abstract:
An analysis of the primary degradation products of the widely used commercial polysulfide polymer Thiokol LP-33 by direct pyrolysis-mass spectrometry (DP-MS) is reported. Degradation proceeds through a radical process involving random cleavage of a C-O bond of the formal group, followed by backbiting to form the cyclic products.
Abstract:
A new postcracking formulation for concrete, along with both implicit and explicit layering procedures, is used in the analysis of reinforced-concrete (RC) flexural and torsional elements. The postcracking formulation accounts for tension stiffening in concrete along the rebar directions, compression softening in cracked concrete based on either stresses or strains, and aggregate interlock based on crack-confining normal stresses. Transverse shear stresses computed using the layering procedures are included in the material model considerations, permitting the development of inclined cracks through the RC cross section. Examples are presented of a beam analyzed by both layering techniques, a torsional element, and a column-slab connection region analyzed by the implicit layering procedure. The study highlights the primary advantages and disadvantages of each layering approach, identifying the classes of problems for which either procedure is more suitable.
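As a minimal illustration of what an explicit layering procedure computes, the sketch below integrates axial force and bending moment over horizontal layers of a rectangular section under a plane-sections strain profile. The one-dimensional material law is a placeholder; the paper's constitutive features (tension stiffening, compression softening, aggregate interlock, transverse shear) are not modelled here.

```python
import numpy as np

def layered_section_response(eps_top, phi, h, b, n_layers, sigma_of_eps):
    """Explicit-layer integration of axial force and moment for a rectangular
    section of depth h and width b. Generic sketch of the layering idea only.

    eps_top: strain at the top fibre; phi: curvature;
    sigma_of_eps: 1-D material law mapping strain to stress (a placeholder).
    """
    y = (np.arange(n_layers) + 0.5) * h / n_layers   # layer mid-depths from top
    eps = eps_top + phi * y                          # plane-sections strain profile
    sig = np.vectorize(sigma_of_eps)(eps)            # layer stresses from the law
    dA = b * h / n_layers                            # layer area
    N = np.sum(sig) * dA                             # axial force
    M = np.sum(sig * (h / 2 - y)) * dA               # moment about mid-depth
    return N, M

# Example with a crude no-tension, linear-in-compression law (placeholder only).
sigma = lambda e: 30e3 * e if e < 0 else 0.0
print(layered_section_response(-1e-3, 5e-6, h=400, b=250,
                               n_layers=20, sigma_of_eps=sigma))
```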
Abstract:
An intelligent computer-aided defect analysis (ICADA) system, based on artificial intelligence techniques, has been developed to identify the design, process or material parameters that could be responsible for the occurrence of defective castings in a manufacturing campaign. The data on defective castings for a particular time frame, which is an input to the ICADA system, was analysed. It was observed that a large proportion, 50-80%, of all the defective castings produced in a foundry have two, three or four types of defects occurring above a threshold proportion of about 10%, while a large number of defect types are either not found at all or found in a very small proportion, below a threshold of 2%. An important feature of the ICADA system is the recognition of this pattern in the analysis. Thirty casting defect types, each with a large number of causes (between 50 and 70), as identified in the AFS analysis of casting defects, the standard reference source for the casting process, constituted the foundation for building the knowledge base. The scientific rationale underlying the formation of a defect during the casting process was identified and 38 metacauses were coded. Process, material and design parameters that contribute to the metacauses were systematically examined, and 112 were identified as rootcauses. The interconnections between defects, metacauses and rootcauses were represented as a three-tier structured graph, and uncertainty in the occurrence of events such as defects, metacauses and rootcauses was handled by Bayesian analysis. A hill-climbing search technique with forward reasoning was employed to recognize one or several rootcauses.
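A toy sketch of the three-tier reasoning follows: defects are linked to rootcauses through metacauses, each rootcause is scored by a Bayes-style update with a noisy-OR likelihood over the connecting paths, and the highest-scoring causes are returned. All tables, probabilities and names are invented for illustration; the actual ICADA knowledge base (30 defects, 38 metacauses, 112 rootcauses) and its hill-climbing control are not reproduced.

```python
# Invented three-tier knowledge fragment: defect -> metacause -> rootcause.
defect_given_meta = {                # P(defect | metacause active)
    "blowhole":  {"gas_entrapment": 0.8, "poor_venting": 0.6},
    "shrinkage": {"poor_feeding": 0.7},
}
meta_given_root = {                  # P(metacause | rootcause present)
    "gas_entrapment": {"high_moisture_sand": 0.7, "low_permeability": 0.5},
    "poor_venting":   {"low_permeability": 0.8},
    "poor_feeding":   {"undersized_riser": 0.9},
}

def likelihood(defect, root):
    """P(defect | rootcause) via a noisy-OR over all connecting metacauses."""
    p_miss = 1.0
    for meta, p_dm in defect_given_meta.get(defect, {}).items():
        p_mr = meta_given_root.get(meta, {}).get(root, 0.0)
        p_miss *= 1.0 - p_dm * p_mr
    return 1.0 - p_miss

def rank_rootcauses(observed_defects, prior=0.1):
    """Unnormalized posterior-style score per rootcause, highest first."""
    roots = {r for m in meta_given_root.values() for r in m}
    scores = {}
    for r in roots:
        s = prior
        for d in observed_defects:
            s *= likelihood(d, r)
        scores[r] = s
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(rank_rootcauses({"blowhole"}))
```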
Abstract:
This paper presents a new approach to steady-state power flow analysis for multiterminal DC-AC systems. A flexible and practical choice of per-unit system is used to formulate the DC network and converter equations. A converter is represented by a Norton equivalent: a current source in parallel with the commutation resistance. Unlike in previous literature, the DC network equations are used to derive the controller equations for the DC system from a subset of specifications. The specifications considered are the current or power at all terminals except the slack terminal, where the DC voltage is specified. The control equations are solved by Newton's method, using the current injections at the converter terminals as state variables. Further, a systematic approach to the handling of constraints is proposed, identifying priorities for rescheduling the specified variables. The methodology is illustrated with the example of a five-terminal DC system.
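The solution strategy can be sketched on a toy three-terminal network: voltages follow from the network equations V = R I, mismatch equations encode the per-terminal specifications (slack voltage, power, or current), and Newton's method iterates on the current injections. All network data and specified values below are invented; the paper's per-unit formulation and constraint handling are not reproduced.

```python
import numpy as np

# Toy Newton power flow for a three-terminal DC network, with converter
# current injections as the state variables. Illustrative data only.
R = np.array([[ 2.0, -1.0, -1.0],     # hypothetical network matrix:
              [-1.0,  2.0,  0.0],     # V = R @ I maps injections to voltages
              [-1.0,  0.0,  2.0]])

spec = [("V", 1.00),                  # terminal 0: slack, DC voltage specified
        ("P", 0.50),                  # terminal 1: power specified
        ("I", -0.40)]                 # terminal 2: current specified

def mismatch(I):
    V = R @ I
    f = np.empty(len(spec))
    for k, (kind, val) in enumerate(spec):
        if kind == "V":   f[k] = V[k] - val          # slack voltage equation
        elif kind == "P": f[k] = V[k] * I[k] - val   # power specification
        else:             f[k] = I[k] - val          # current specification
    return f

def newton(I0, tol=1e-10, max_iter=20):
    I = I0.copy()
    for _ in range(max_iter):
        f = mismatch(I)
        if np.max(np.abs(f)) < tol:
            break
        J = np.empty((len(f), len(I)))               # numerical Jacobian
        for j in range(len(I)):
            dI = np.zeros(len(I))
            dI[j] = 1e-7
            J[:, j] = (mismatch(I + dI) - f) / 1e-7
        I -= np.linalg.solve(J, f)                   # Newton update
    return I, R @ I

I, V = newton(np.array([0.5, 0.5, -0.4]))
print("injections:", I, "voltages:", V)
```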