990 results for Stars: emission-line
Abstract:
The blue emission of ethyl-hexyl substituted polyfluorene (PF2/6) films is accompanied by a low-energy green emission peak around 500 nm in an inert atmosphere. The intensity of this 500 nm peak is larger in electroluminescence (EL) than in photoluminescence (PL) measurements. Furthermore, the green emission intensity is reduced dramatically in the presence of molecular oxygen. To understand this, we have modeled various nonradiative processes by time-dependent quantum many-body methods. These are (i) intersystem crossing, to study conversion of excited singlets to triplets leading to phosphorescence emission, (ii) the electron-hole recombination (e-hR) process in the presence of a paramagnetic impurity, to follow the yield of triplets in a polyene system doped with a paramagnetic metal atom, and (iii) quenching of excited triplet states in the presence of oxygen molecules, to understand the low intensity of EL emission in ambient atmosphere compared with that in a nitrogen atmosphere. We have employed the Pariser-Parr-Pople Hamiltonian to model the molecules and have invoked electron-electron repulsions beyond the zero differential overlap approximation while treating interactions between the organic molecule and the rest of the system. Our time-evolution methods show that there is a large cross section for triplet formation in the e-hR process in the presence of a paramagnetic impurity with degenerate orbitals. The triplet yield through the e-hR process far exceeds that of the intersystem crossing pathway, clearly explaining the larger intensity of the 500 nm peak in EL compared to PL measurements. We have also modeled the triplet quenching process by a paramagnetic oxygen molecule, which shows a sizable quenching cross section, especially for systems of large size. These studies show that the most probable origin of the experimentally observed low-energy EL emission is the triplets.
Abstract:
Cracks in civil structures can cause premature failure through material degradation, with both financial and environmental consequences. This thesis reports an effective Acoustic Emission (AE) technique to assess the severity of crack propagation in steel structures. The outcome of this work confirms that a combination of AE parametric analysis and signal processing techniques can be used to evaluate crack propagation under different loading configurations. The technique has potential application in assessing and monitoring the condition of civil structures.
Abstract:
Globally, lung cancer accounts for approximately 20% of all cancer-related deaths. Five-year survival is poor, and rates have remained unchanged for the past four decades. There is an urgent need to identify markers of lung carcinogenesis and new targets for therapy. Given the recent successes of immune modulators in cancer therapy and the improved understanding of immune evasion by tumours, we sought to determine the carcinogenic impact of chronic TNF-α and IL-1β exposure in a normal bronchial epithelial cell line model. Following three months of culture in a chronic inflammatory environment under conditions of normoxia and hypoxia (0.5% oxygen), normal cells developed a number of key genotypic and phenotypic alterations. Important cellular features such as the proliferative, adhesive and invasive capacity of the normal cells were significantly amplified. In addition, gene expression profiles were altered in pathways associated with apoptosis, angiogenesis and invasion. The data generated in this study provide support that TNF-α, IL-1β and hypoxia promote a neoplastic phenotype in normal bronchial epithelial cells. In turn, these mediators may be of benefit for biomarker and/or immune-therapy target studies. This project provides an important inflammatory in vitro model for further immuno-oncology studies in the lung cancer setting.
Abstract:
Localised prostate cancer is a heterogeneous disease, and a multi-modal approach is required to accurately diagnose and stage the disease. Whilst the use of magnetic resonance imaging (MRI) has become more common, small-volume and multi-focal disease are often difficult to characterise. Prostate specific membrane antigen (PSMA) is a cell surface protein expressed in nearly all prostate cancer cells; its expression is significantly higher in high-grade prostate cancer cells. In this study, we compare multi-parametric (MP) MRI and 68-Gallium-PSMA (68Ga-PSMA) PET with whole-mount pathology of the prostate to evaluate their applicability in detecting and locating tumour foci in patients with localised prostate cancer.
Abstract:
Carbon fiber reinforced polymer (CFRP) composite specimens with different thicknesses, geometries, and stacking sequences were subjected to fatigue spectrum loading in stages. Another set of specimens was subjected to static compression load. On-line Acoustic Emission (AE) monitoring was carried out during these tests. Two artificial neural networks, a Kohonen self-organizing feature map (KSOM) and a multi-layer perceptron (MLP), have been developed for AE signal analysis. AE signals from specimens were clustered using the unsupervised-learning KSOM. These clusters were correlated to failure modes using available a priori information such as AE signal amplitude distributions, time of occurrence of signals, ultrasonic imaging, design of the laminates (stacking sequences, orientation of fibers), and AE parametric plots. Thereafter, AE signals generated from the rest of the specimens were classified by the supervised-learning MLP. The network developed is made suitable for on-line monitoring of AE signals in the presence of noise, and can be used for detection and identification of failure modes and their growth. The results indicate that the characteristics of AE signals from different failure modes in CFRP remain largely unaffected by the type of load, fiber orientation, and stacking sequence, as they are representative of the underlying failure phenomena. The type of loading affects only the extent of damage sustained before the specimens fail, and hence the number of AE signals during the test. The artificial neural networks (ANN) developed, and the methods and procedures adopted, show significant success in AE signal characterization in a noisy environment (detection and identification of failure modes and their growth).
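The unsupervised clustering stage can be illustrated with a minimal sketch: a 1-D Kohonen self-organizing map trained on synthetic two-feature AE data (amplitude and duration, with invented values). This is not the authors' network, only a toy instance of the KSOM technique the abstract names.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic AE features [amplitude (dB), duration (us)] for two failure
# modes; the values are invented for illustration only.
mode_a = rng.normal([45.0, 50.0], [3.0, 10.0], size=(100, 2))
mode_b = rng.normal([80.0, 200.0], [3.0, 20.0], size=(100, 2))
X = np.vstack([mode_a, mode_b])
X = (X - X.mean(axis=0)) / X.std(axis=0)            # normalize features

# Minimal 1-D Kohonen SOM with 4 nodes
n_nodes = 4
W = rng.normal(size=(n_nodes, 2))
n_epochs = 50
for epoch in range(n_epochs):
    lr = 0.5 * (1 - epoch / n_epochs)               # decaying learning rate
    sigma = max(1.0 * (1 - epoch / n_epochs), 0.3)  # neighborhood width
    for x in rng.permutation(X):
        bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
        dist = np.abs(np.arange(n_nodes) - bmu)         # grid distance to BMU
        h = np.exp(-dist**2 / (2 * sigma**2))           # neighborhood function
        W += lr * h[:, None] * (x - W)

# Cluster label = index of each signal's best-matching node
labels = np.argmin(np.linalg.norm(X[:, None] - W[None], axis=2), axis=1)
```

In the study, clusters found this way were then matched to failure modes using a priori information, and a supervised MLP classified subsequent signals.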
Abstract:
Fluctuation of field emission in carbon nanotubes (CNTs) is not desirable in many applications, the design of biomedical x-ray devices being one of them. In these applications, it is of great importance to have precise control of electron beams over multiple spatio-temporal scales. In this paper, a new design is proposed to optimize the field emission performance of CNT arrays. A diode configuration is used for analysis, where arrays of CNTs act as the cathode. The results indicate that the linear height distribution of CNTs proposed in this study shows more stable performance than the conventionally used uniform distribution.
Abstract:
In document images, we often find printed lines overlapping with handwritten elements, especially in the case of signatures. Typical examples of such images are bank cheques and payment slips. Although the detection and removal of horizontal lines has been addressed, the restoration of the handwritten area after removal of the lines remains a problem of interest. In this paper, we propose a method for line removal and restoration of the erased areas of the handwritten elements. A subjective evaluation of the results has been conducted to analyze the effectiveness of the proposed method. The results are promising, with an accuracy of 86.33%. The entire process takes less than half a second per document image on a Pentium IV PC (2.4 GHz, 512 MB RAM).
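The paper's own method is not reproduced here; as a generic sketch of the line-removal-and-restoration idea, the code below erases long horizontal ink runs from a binary image and re-fills an erased pixel wherever a handwritten stroke bridges the line from above and below. The run-length threshold and the demo image are invented for illustration.

```python
import numpy as np

def remove_horizontal_lines(img, min_run=20):
    """img: 2-D binary array, 1 = ink.  Erase long horizontal runs
    (printed lines), then restore pixels where a stroke crosses them."""
    out = img.copy()
    h, w = img.shape
    line_mask = np.zeros_like(img, dtype=bool)
    for y in range(h):
        run = 0
        for x in range(w + 1):
            if x < w and img[y, x]:
                run += 1
            else:
                if run >= min_run:          # long run -> printed line
                    line_mask[y, x - run:x] = True
                run = 0
    out[line_mask] = 0                      # remove the printed lines
    # Restoration: re-fill an erased pixel if ink remains directly
    # above and below it (a handwritten stroke crossing the line)
    ys, xs = np.nonzero(line_mask)
    for y, x in zip(ys, xs):
        if 0 < y < h - 1 and out[y - 1, x] and out[y + 1, x]:
            out[y, x] = 1
    return out

# Demo: a printed line crossed by a handwritten vertical stroke
img = np.zeros((10, 40), dtype=int)
img[5, :] = 1          # printed horizontal line
img[2:9, 10] = 1       # handwritten stroke crossing the line
restored = remove_horizontal_lines(img)
```

After processing, the printed line is gone but the crossing stroke remains continuous, which is the restoration behaviour the abstract describes.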
Abstract:
Caveolae have been linked to diverse cellular functions and to many disease states. In this study we have used zebrafish to examine the role of caveolin-1 and caveolae during early embryonic development. During development, expression is apparent in a number of tissues including Kupffer's vesicle, tailbud, intersomite boundaries, heart, branchial arches, pronephric ducts and periderm. Particularly strong expression is observed in the sensory organs of the lateral line, the neuromasts, and in the notochord, where it overlaps with expression of caveolin-3. Morpholino-mediated downregulation of Cav1α caused a dramatic inhibition of neuromast formation. Detailed ultrastructural analysis, including electron tomography of the notochord, revealed that the central regions of the notochord have the highest density of caveolae of any embryonic tissue, comparable to the highest densities observed in any vertebrate tissue. In addition, Cav1α downregulation caused disruption of the notochord, an effect that was enhanced further by Cav3 knockdown. These results indicate an essential role for caveolin and caveolae in this vital structural and signalling component of the embryo.
Abstract:
The photoluminescence (PL) of a series of (GeS2)80(Ga2S3)20 glasses doped with different amounts of Er (0.17, 0.35, 0.52, 1.05 and 1.39 at.%) has been studied at 77 and 4.2 K. The influence of temperature on the emission cross-section of the PL bands at ~1540, 980 and 820 nm under host excitation has been determined. A quenching effect on the host photoluminescence has been established from the compositional dependence of the PL intensity. It has been found that the present Er3+-doped Ge-S-Ga glasses possess PL lifetime values of about 3.25 ms.
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rate. We find that the double sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from a very high false detection rate of 1.15 times the actual error rate. We also find that the alternative approaches, triple sampling and the integrate-and-sample (I&S) method, can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method, and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires detection of small voltage swings.
Abstract:
The problem of automatic melody line identification in a MIDI file plays an important role in taking query-by-humming (QBH) systems to the next level. We present a novel algorithm to identify the melody line in a polyphonic MIDI file. A note pruning and track/channel ranking method is used to identify the melody line. We use results from musicology to derive simple heuristics for the note pruning stage, which improves the robustness of the algorithm by discarding "spurious" notes. A ranking based on the melodic information in each track/channel enables us to choose the melody line accurately. Our algorithm makes no assumptions about MIDI performer-specific parameters, is simple, and achieves an accuracy of 97% in identifying the melody line correctly. The algorithm is currently being used in a QBH system built in our lab.
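The prune-then-rank idea can be sketched as follows. The concrete heuristics here — a minimum note duration, a plausible pitch range, and a mean-pitch-plus-variety score — are invented for the sketch, not taken from the paper:

```python
# Each track is a list of (pitch, onset_ms, duration_ms) note events.
def prune_notes(track, min_dur=60, lo=40, hi=96):
    """Drop 'spurious' notes: too short, or outside a plausible melody range."""
    return [(p, on, d) for (p, on, d) in track if d >= min_dur and lo <= p <= hi]

def melodic_score(track):
    """Score a pruned track by simple melodic cues: mean pitch and pitch variety."""
    if not track:
        return float('-inf')
    pitches = [p for p, _, _ in track]
    return sum(pitches) / len(pitches) + 2 * len(set(pitches))

def pick_melody(tracks):
    """Return the index of the track with the highest melodic score."""
    pruned = [prune_notes(t) for t in tracks]
    return max(range(len(pruned)), key=lambda i: melodic_score(pruned[i]))

# Toy example: a melody track, a bass track, and percussion-like short hits
tracks = [
    [(60, 0, 200), (62, 200, 200), (64, 400, 200), (65, 600, 200),
     (67, 800, 200), (69, 1000, 200), (71, 1200, 200), (72, 1400, 200)],
    [(36, 0, 400), (36, 400, 400), (43, 800, 400), (36, 1200, 400)],
    [(42, 0, 30), (42, 50, 30), (42, 100, 30)],
]
best = pick_melody(tracks)   # index of the identified melody track
```

The pruning stage removes the short percussion-like hits and the out-of-range bass notes before ranking, which is what makes the subsequent track/channel ranking reliable.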
Abstract:
Interstellar clouds are not featureless, but show quite complex internal structures of filaments and clumps when observed with high enough resolution. These structures have been generated by 1) turbulent motions driven mainly by supernovae, 2) magnetic fields working on the ions and, through neutral-ion collisions, on neutral gas as well, and 3) self-gravity pulling a dense clump together to form a new star. The study of the cloud structure gives us information on the relative importance of each of these mechanisms, and helps us to gain a better understanding of the details of the star formation process. Interstellar dust is often used as a tracer for the interstellar gas which forms the bulk of the interstellar matter. Some of the methods that are used to derive the column density are summarized in this thesis. A new method, which uses the scattered light to map the column density in large fields with high spatial resolution, is introduced. This thesis also takes a look at the grain alignment with respect to the magnetic fields. The aligned grains give rise to the polarization of starlight and dust emission, thus revealing the magnetic field. The alignment mechanisms have been debated for the last half century. The strongest candidate at present is the radiative torques mechanism. In the first four papers included in this thesis, the scattered light method of column density estimation is formulated, tested in simulations, and finally used to obtain a column density map from observations. They demonstrate that the scattered light method is a very useful and reliable tool in column density estimation, and is able to provide higher resolution than the near-infrared color excess method. These two methods are complementary. The derived column density maps are also used to gain information on the dust emissivity within the observed cloud. 
The two final papers present simulations of polarized thermal dust emission assuming that the alignment happens by the radiative torques mechanism. We show that the radiative torques can explain the observed decline of the polarization degree towards dense cores. Furthermore, the results indicate that the dense cores themselves might not contribute significantly to the polarized signal, and hence one needs to be careful when interpreting the observations and deriving the magnetic field.
Abstract:
Emissions of coal combustion fly ash through full-scale electrostatic precipitators (ESP) were studied under different coal combustion and operating conditions. Sub-micron fly-ash aerosol emissions from a power plant boiler and the ESP were determined, and from these the aerosol penetration, based on electrical mobility measurements, giving an estimate of the size and the maximum extent to which small particles can escape. The experiments indicate a maximum penetration of 4% to 20% of the small particles when counted on a number basis instead of the normally used mass basis, while the ESP simultaneously operates at nearly 100% collection efficiency on a mass basis. Although the size range as such appears independent of the coal, the boiler, and even the device used for emission control, the maximum penetration level on a number basis depends on the ESP operating parameters. The measured emissions were stable during stable boiler operation for a given fired coal, and the emissions for different coals each appeared distinct, indicating that the sub-micron size distribution of the fly ash could be used as a specific characteristic for recognition, for instance for authenticity, given an indication of known stable operation. Consequently, the results on the emissions suggest an optimum particle size range for environmental monitoring with respect to the probability of finding traces in the samples. The current work also embodies an authentication system for aerosol samples, for post-inspection from any macroscopic sample piece. The system can comprise newly introduced devices, for mutually independent use or for use in combination with each other, arranged to extend the sampling operation length and/or the tag selection diversity. The tag for the samples can be based on naturally occurring measures and/or added measures of authenticity in a suitable combination.
The method has applications not only in military contexts but also in civil industries. Alternatively to samples, the system can be applied to ink for banknote printing or other papers of monetary value, and to filter manufacturing for marking fibrous filters.
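The gap between number-basis penetration and mass-basis collection efficiency follows directly from the d³ weighting of particle mass. A toy calculation with invented size-distribution values makes the point:

```python
import numpy as np

# Invented size bins (um) and upstream/downstream number concentrations
# (1/cm^3) across an ESP; values are illustrative only.
d     = np.array([0.05, 0.1, 0.3, 1.0, 5.0, 20.0])
n_in  = np.array([1e7, 5e6, 1e6, 1e5, 1e4, 1e3])
n_out = np.array([1e6, 5e5, 5e4, 1e3, 1e1, 1e-1])

# Number-basis penetration: most particles are small, so it stays large.
pen_number = n_out.sum() / n_in.sum()

# Mass-basis penetration: mass scales as d**3 (constant density), so the
# few large particles dominate, and the ESP looks nearly 100% efficient.
pen_mass = (n_out * d**3).sum() / (n_in * d**3).sum()
```

With these numbers, roughly 10% of the particles penetrate by count while well under 1% of the mass does — the same contrast the measurements above report.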
Abstract:
Acoustic emission (AE) energy, instead of amplitude, associated with each event is used to estimate the fracture process zone (FPZ) size. A steep increase in the cumulative AE energy of the events with respect to time is correlated with the formation of the FPZ. Based on the AE energy released during these events and the locations of the events, the FPZ size is obtained. The size-independent fracture energy is computed using the expressions given in the boundary effect model by the least squares method, since an over-determined system of equations is obtained when data from several specimens are used. As an alternative to the least squares method, a different method is suggested in which the transition ligament length, measured from the plot of histograms of AE events over the un-cracked ligament, is used directly to obtain the size-independent fracture energy. The fracture energy thus calculated appears to be size-independent.
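The over-determined least-squares step can be sketched as follows, assuming a common boundary effect model form Gf = GF·(1 − al/(2L)) with ligament length L, which is linear in the unknowns p1 = GF and p2 = GF·al. The specimen data are synthetic, generated from assumed "true" parameters so the fit visibly recovers them; the actual expressions and data of the study may differ.

```python
import numpy as np

# Synthetic specimens: ligament lengths L (mm) and measured fracture
# energies Gf (N/m) generated from assumed true parameters.
# Assumed model form:  Gf = GF * (1 - al / (2 * L))  for L >= al.
GF_true, al_true = 130.0, 20.0
L  = np.array([40.0, 60.0, 80.0, 120.0, 160.0])
Gf = GF_true * (1.0 - al_true / (2.0 * L))

# Linear in p1 = GF and p2 = GF * al:   Gf_i = p1 - p2 / (2 * L_i)
# One equation per specimen -> over-determined system, solved by
# least squares.
A = np.column_stack([np.ones_like(L), -1.0 / (2.0 * L)])
(p1, p2), *_ = np.linalg.lstsq(A, Gf, rcond=None)

GF = p1          # size-independent fracture energy
al = p2 / p1     # transition ligament length
```

The alternative method mentioned above bypasses this fit by reading the transition ligament length directly from the AE event histogram over the un-cracked ligament.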
Abstract:
Einstein's general relativity is a classical theory of gravitation: it is a postulate on the coupling between the four-dimensional, continuous spacetime and the matter fields in the universe, and it yields their dynamical evolution. It is believed that general relativity must be replaced by a quantum theory of gravity, at least at the extremely high energies of the early universe and in regions of strong spacetime curvature, such as black holes. Various attempts to quantize gravity, including conceptually new models such as string theory, have suggested that modifications to general relativity might show up even at lower energy scales. On the other hand, the late-time acceleration of the expansion of the universe, known as the dark energy problem, might also originate from new gravitational physics. Thus, although there has been no direct experimental evidence contradicting general relativity so far - on the contrary, it has passed a variety of observational tests - it is a question worth asking: why should the effective theory of gravity be of the exact form of general relativity? If general relativity is modified, how do the predictions of the theory change? Furthermore, how far can we go with the changes before we are faced with contradictions with experiment? Along with the changes, could there be new phenomena which we could measure to find hints of the form of the quantum theory of gravity? This thesis is on a class of modified gravity theories called f(R) models, and in particular on the effects of changing the theory of gravity on stellar solutions. It is discussed how experimental constraints from measurements in the Solar System restrict the form of f(R) theories. Moreover, it is shown that models which do not differ from general relativity at the weak-field scale of the Solar System can produce very different predictions for dense stars such as neutron stars.
Due to the nature of f(R) models, the role of the independent connection of spacetime is emphasized throughout the thesis.
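For context, the field equations that replace Einstein's equations in f(R) gravity are standard textbook results, not specific to this thesis. Varying the action with respect to the metric gives:

```latex
% Metric variation of  S = \frac{1}{2\kappa}\int d^4x\,\sqrt{-g}\,f(R) + S_m :
f'(R)\,R_{\mu\nu} - \tfrac{1}{2}f(R)\,g_{\mu\nu}
  - \left(\nabla_\mu\nabla_\nu - g_{\mu\nu}\Box\right) f'(R)
  = \kappa\,T_{\mu\nu}

% Setting f(R) = R gives f'(R) = 1, the derivative terms vanish, and
% Einstein's equations are recovered.

% Palatini variation: the connection \Gamma is treated as independent of
% the metric, and \mathcal{R}_{\mu\nu} is built from \Gamma alone:
f'(\mathcal{R})\,\mathcal{R}_{\mu\nu} - \tfrac{1}{2}f(\mathcal{R})\,g_{\mu\nu}
  = \kappa\,T_{\mu\nu},
\qquad
\nabla^{\Gamma}_{\lambda}\!\left(\sqrt{-g}\,f'(\mathcal{R})\,g^{\mu\nu}\right) = 0
```

The contrast between the two variational principles is why the role of the independent connection is emphasized throughout the thesis: the metric and Palatini versions of the same f(R) are genuinely different theories with different predictions.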