968 results for decomposition of gauge field
Abstract:
The thesis is divided into four chapters: introduction, experimental, results and discussion of the free ligands, and results and discussion of the complexes. The first chapter is a general introduction to the study of solid-state reactions. The second chapter is devoted to the materials and experimental methods used to carry out the experiments. The third chapter is concerned with the characterisation of the free ligands (picolinic acid, nicotinic acid, and isonicotinic acid) by elemental analysis, IR spectra, X-ray diffraction, and mass spectra. Additionally, the thermal behaviour of the free ligands in air has been studied by means of thermogravimetry (TG), derivative thermogravimetry (DTG), and differential scanning calorimetry (DSC) measurements. The thermal decomposition behaviour of the three free ligands was not identical. Finally, a computer program was used for the kinetic evaluation of non-isothermal DSC data according to composite and single heating rate methods, in comparison with the methods of Ozawa and Kissinger. The most probable reaction mechanism for the free ligands was the Avrami-Erofeev equation (A), which describes a solid-state nucleation-growth mechanism. The activation parameters of the decomposition reaction of the free ligands were calculated, and the results of the different methods of data analysis were compared and discussed. The fourth and final chapter deals with the preparation of cobalt, nickel, and copper complexes with mono-pyridine carboxylic acids in aqueous solution. The prepared complexes have been characterised by elemental analysis, IR spectra, X-ray diffraction, magnetic moments, and electronic spectra. The stoichiometry of these compounds was ML2·x(H2O) (where M = metal ion, L = organic ligand, and x = number of water molecules).
The environments of cobalt, nickel, and copper nicotinates and of cobalt and nickel picolinates were octahedral, whereas the environment of copper picolinate [Cu(PA)2] was tetragonal. The cobalt, nickel, and copper isonicotinates, however, had polymeric octahedral structures. The morphological changes that occurred throughout the decomposition were followed by SEM observation. The thermal behaviour of the prepared complexes in air was studied by TG, DTG, and DSC measurements. During the degradation of the hydrated complexes, the water of crystallisation was lost in one or two steps, followed by loss of the organic ligands, leaving the metal oxides. Comparison of the DTG temperatures of the first and second dehydration steps suggested that the water of crystallisation was more strongly bonded to the anion in the Ni(II) complexes than in the Co(II) and Cu(II) complexes. The intermediate products of decomposition were not identified. The most probable reaction mechanism for the prepared complexes was also the Avrami-Erofeev equation (A), characteristic of a solid-state nucleation-growth mechanism. The temperature dependence of the direct-current conductivity was determined for cobalt, nickel, and copper isonicotinates, and the activation energy (ΔE) was calculated. The temperature and frequency dependence of the conductivity, the frequency dependence of the dielectric constant, and the dielectric loss of nickel isonicotinate were determined using alternating current. The value of the s parameter and the density of states [N(Ef)] were calculated. Keywords: thermal decomposition, kinetics, electrical conduction, pyridine monocarboxylic acid, complex, transition metal complex.
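The Kissinger analysis mentioned above rests on a simple linear relation: plotting ln(β/Tp²) against 1/Tp across several heating rates gives a line of slope -Ea/R. A minimal sketch of this fit; the heating rates and DSC peak temperatures below are hypothetical, not the thesis's data:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kissinger_activation_energy(beta, Tp):
    """Estimate the activation energy from DSC peak temperatures Tp (K)
    measured at several heating rates beta (K/min), via the Kissinger
    plot ln(beta/Tp^2) vs 1/Tp, whose slope is -Ea/R."""
    x = 1.0 / np.asarray(Tp, dtype=float)
    y = np.log(np.asarray(beta, dtype=float) / np.asarray(Tp, dtype=float) ** 2)
    slope, intercept = np.polyfit(x, y, 1)  # linear fit of the Kissinger plot
    return -slope * R                        # Ea in J/mol

# Hypothetical peak temperatures at four heating rates (illustrative only)
beta = [5, 10, 15, 20]             # K/min
Tp = [505.0, 512.0, 516.5, 520.0]  # K
print(kissinger_activation_energy(beta, Tp) / 1000, "kJ/mol")  # ~194 for these made-up numbers
```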
Abstract:
The first part of the thesis compares Roth's method with other methods, in particular the method of separation of variables and the finite cosine transform method, for solving certain elliptic partial differential equations arising in practice. In particular, we consider the solution of steady-state problems associated with insulated conductors in rectangular slots. Roth's method has two main disadvantages: the slow rate of convergence of the double Fourier series, and the restrictive form of the allowable boundary conditions. A combined Roth-separation of variables method is derived to remove the restrictions on the form of the boundary conditions, and various Chebyshev approximations are used to try to improve the rate of convergence of the series. All the techniques are then applied to the Neumann problem arising from balanced rectangular windings in a transformer window. Roth's method is then extended to deal with problems other than those resulting from static fields. First, we consider a rectangular insulated conductor in a rectangular slot when the current is varying sinusoidally with time. An approximate method is also developed and compared with the exact method. The approximation is then used to consider the problem of an insulated conductor in a slot facing an air gap. We also consider the exact method applied to the determination of the eddy-current loss produced in an isolated rectangular conductor by a transverse magnetic field varying sinusoidally with time. The results obtained using Roth's method are critically compared with those obtained by other authors using different methods. The final part of the thesis investigates further the application of Chebyshev methods to the solution of elliptic partial differential equations, an area where Chebyshev approximations have rarely been used. A Poisson equation with a polynomial term is treated first, followed by a slot problem in cylindrical geometry.
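Roth's method rests on expanding the solution of the elliptic equation as a double Fourier series over the rectangle, which is also the source of the slow convergence noted above. A minimal sketch for the model problem -∇²u = 1 on the unit square with zero boundary values (not the thesis's actual slot geometry):

```python
import numpy as np

def poisson_double_series(x, y, a=1.0, b=1.0, nmax=49):
    """Solve -u_xx - u_yy = 1 on (0,a)x(0,b) with u = 0 on the boundary
    by a double Fourier (sine) series -- the expansion underlying Roth's
    method.  The double series converges slowly, which is one of the
    drawbacks discussed in the text."""
    u = 0.0
    for m in range(1, nmax + 1, 2):        # only odd terms survive for f = 1
        for n in range(1, nmax + 1, 2):
            fmn = 16.0 / (np.pi ** 2 * m * n)                 # Fourier coefficient of f = 1
            lam = (m * np.pi / a) ** 2 + (n * np.pi / b) ** 2  # eigenvalue of -Laplacian
            u += fmn / lam * np.sin(m * np.pi * x / a) * np.sin(n * np.pi * y / b)
    return u

# value at the centre of the unit square; the exact value is about 0.07367
print(poisson_double_series(0.5, 0.5))
```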
Abstract:
The study developed statistical techniques to evaluate visual field progression for use with the Humphrey Field Analyzer (HFA). The long-term fluctuation (LF) was evaluated in stable glaucoma. The magnitude of both LF components showed little relationship with MD, CPSD and SF. An algorithm was proposed for determining the clinical necessity of a confirmatory follow-up examination. The between-examination variability was determined for the HFA Standard and FASTPAC algorithms in glaucoma. FASTPAC exhibited greater between-examination variability than the Standard algorithm across the range of sensitivities and with increasing eccentricity. The difference in variability between the algorithms had minimal clinical significance. The effect of repositioning the baseline in the Glaucoma Change Probability Analysis (GCPA) was evaluated. The global baseline of the GCPA limited the detection of progressive change at a single stimulus location. A new technique was developed: pointwise univariate linear regression (ULR) of absolute sensitivity, and of pattern deviation, against time to follow-up. In each case, pointwise ULR was more sensitive to localised progressive changes in sensitivity than ULR of MD alone. Small changes in sensitivity were more readily determined by pointwise ULR than by the GCPA. A comparison between the outcomes of pointwise ULR for all fields and for the last six fields revealed linear and curvilinear declines in absolute sensitivity and pattern deviation. A method for delineating progressive loss in glaucoma, based upon the error in the forecasted sensitivity of a multivariate model, was developed. Multivariate forecasting exhibited little agreement with GCPA in glaucoma but showed promise for monitoring visual field progression in OHT patients. The recovery of sensitivity in optic neuritis over time was modelled with a cumulative Gaussian function. The rate and level of recovery were greater in the peripheral than the central field.
Probability models to forecast the field of recovery were proposed.
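The pointwise ULR technique described above fits an ordinary least-squares line to the sensitivity at each stimulus location separately, so a localised decline appears as a negative slope at that location even when the global trend is flat. A minimal sketch with hypothetical follow-up data:

```python
import numpy as np

def pointwise_ulr(times, fields):
    """Pointwise univariate linear regression (ULR): regress sensitivity
    against time-to-follow-up separately at every stimulus location.
    `fields` has shape (n_visits, n_locations); returns the slope
    (dB/year) and intercept for each location."""
    t = np.asarray(times, dtype=float)
    Y = np.asarray(fields, dtype=float)
    X = np.column_stack([t, np.ones_like(t)])           # design matrix [t, 1]
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)        # fits all locations at once
    slopes, intercepts = coef
    return slopes, intercepts

# Hypothetical series: 5 visits over 4 years at 3 locations;
# locations 0 and 2 are stable, location 1 declines 1 dB/year
times = [0, 1, 2, 3, 4]
fields = [[30, 28, 25],
          [30, 27, 25],
          [30, 26, 25],
          [30, 25, 25],
          [30, 24, 25]]
slopes, _ = pointwise_ulr(times, fields)
print(slopes)  # approximately [0, -1, 0]
```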
Abstract:
Background - An evaluation of standard automated perimetry (SAP) and short-wavelength automated perimetry (SWAP) for the central 10–2 visual field test procedure in patients with age-related macular degeneration (AMD) is presented, in order to determine methods of quantifying the central sensitivity loss in patients at various stages of AMD. Methods - 10–2 SAP and SWAP Humphrey visual fields and stereoscopic fundus photographs were collected in 27 eyes of 27 patients with AMD and 22 eyes of 22 normal subjects. Results - Mean Deviation and Pattern Standard Deviation (PSD) varied significantly with stage of disease in SAP (both p<0.001) and SWAP (both p<0.001), but post hoc analysis revealed overlap of functional values among stages. In SWAP, indices of focal loss were more sensitive in detecting differences between AMD and normal eyes. SWAP defects were greater in depth and area than those in SAP. Central sensitivity (within 1°) changed by -3.9 and -4.9 dB per stage in SAP and SWAP, respectively. Based on defect maps, an AMD Severity Index was derived. Conclusions - Global indices of focal loss were more sensitive in detecting early-stage AMD from normal. The SWAP sensitivity decline with advancing stage of AMD was greater than in SAP. A new AMD Severity Index quantifies visual field defects on a continuous scale. Although not all patients are suitable for SWAP examinations, SWAP is of value as a tool in research studies of visual loss in AMD.
Abstract:
Purpose: To determine methods of quantifying the sensitivity loss in the central 10° visual field in a cross-section of patients at various stages of age-related macular degeneration (AMD). Methods: Standard and short-wavelength automated perimetry (SAP and SWAP) visual fields were collected using program 10-2 of the Humphrey Field Analyzer, in 44 eyes of 27 patients with AMD and 41 eyes of 22 normal subjects. Stereoscopic fundus photographs were graded by two independent observers and the stage of disease determined. Global indices were compared for their ability to delineate the normal visual field from early stages of AMD and to differentiate between stages. Results: Mean Deviation (MD) and Pattern Standard Deviation (PSD) varied significantly with stage of disease in SAP (both p<0.001) and SWAP (both p<0.001), but post-hoc analysis revealed overlap of functional values between stages. The global indices of focal loss, PSD and local spatial variability (LSV), were the most sensitive in detecting differences between normal subjects and early-stage AMD patients, in SAP and SWAP respectively. Overall, defects were confined to the central 5°. SWAP defects were consistently greater in depth and area than those in SAP. The region of the 10° field most vulnerable to sensitivity loss with increasing stage of AMD was the central 1°, in which the sensitivity decline was -4.8 dB per stage in SAP and -4.9 dB per stage in SWAP. Based on the pattern deviation defect maps, a severity index of AMD visual field loss was derived. Threshold variability was considerably increased in late-stage AMD eyes. Conclusions: Global indices of focal loss were more sensitive in the detection of early-stage AMD from normal. The sensitivity decline with advancing stage of AMD was greater in SWAP than in SAP; however, the trend was not strong across all stages of disease. The less commonly used index, LSV, represents a relatively unmanipulated statistical summary measure of focal loss.
A new severity index is described which is sensitive to visual field change in AMD, measures visual field defects on a continuous scale and may serve as a useful measure of functional change in AMD in longitudinal studies. Keywords: visual fields • age-related macular degeneration • perimetry
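The distinction drawn above between overall and focal loss reflects how the two global indices are built: MD averages the deviations from age-matched normal values, while PSD measures their spread. The sketch below uses unweighted simplifications of the HFA formulas (the real indices weight each location by its normal variance) and hypothetical 10-2 data:

```python
import numpy as np

def global_indices(measured, normal):
    """Unweighted simplifications of the HFA global indices: Mean
    Deviation (MD) summarises overall loss, Pattern Standard Deviation
    (PSD) summarises focal (non-uniform) loss.  The real HFA formulas
    weight each location by its normal inter-subject variance; those
    weights are omitted here for illustration."""
    td = np.asarray(measured, float) - np.asarray(normal, float)  # total deviation
    md = td.mean()
    psd = td.std(ddof=1)  # spread of deviations about the mean -> focal loss
    return md, psd

# Hypothetical example: a uniform 2 dB loss (pure MD shift) versus a
# deep focal defect with the same mean loss
normal = np.full(68, 32.0)                 # 68 locations in the 10-2 pattern
uniform = normal - 2.0
focal = normal.copy(); focal[:8] -= 17.0   # 8 locations lose 17 dB
print(global_indices(uniform, normal))     # MD = -2, PSD = 0
print(global_indices(focal, normal))       # MD = -2, PSD large
```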
Abstract:
Visual field assessment is a core component of glaucoma diagnosis and monitoring, and Standard Automated Perimetry (SAP) is still considered the gold standard of visual field assessment. Although SAP is a subjective assessment with many pitfalls, it remains in constant use in the diagnosis of visual field loss in glaucoma. The multifocal visual evoked potential (mfVEP) is a newly introduced method for objective visual field assessment. Several analysis protocols have been tested to identify early visual field loss in glaucoma patients using the mfVEP technique; some were successful in detecting field defects comparable to standard SAP visual field assessment, while others were less informative and required further adjustment and research. In this study, we implemented a novel analysis approach and evaluated its validity and whether it could be used effectively for the early detection of visual field defects in glaucoma. OBJECTIVES: The purpose of this study is to examine the effectiveness of a new analysis method for the multifocal visual evoked potential (mfVEP) when used for the objective assessment of the visual field in glaucoma patients, compared to the gold standard technique. METHODS: Three groups were tested in this study: normal controls (38 eyes), glaucoma patients (36 eyes) and glaucoma suspect patients (38 eyes). All subjects underwent two standard Humphrey visual field (HFA 24-2) tests and a single mfVEP test in one session. The mfVEP results were analysed using the new protocol, the Hemifield Sector Analysis (HSA) protocol; the HFA results were analysed using the standard grading system. RESULTS: Analysis of the mfVEP results showed a statistically significant difference between the three groups in the mean signal-to-noise ratio (SNR) (ANOVA, p<0.001 with a 95% CI).
The differences between superior and inferior hemifield sectors were statistically significant for all 11/11 sectors in the glaucoma group (t-test, p<0.001), partially significant in the glaucoma suspect group (5/11 sectors; t-test, p<0.01), and not significant for most sectors in the normal group (only 1/11 sectors was significant; t-test, p<0.9). The sensitivity and specificity of the HSA protocol in detecting glaucoma were 97% and 86% respectively, and for glaucoma suspects 89% and 79%. DISCUSSION: The results showed that the new analysis protocol was able to confirm existing field defects detected by the standard HFA and to differentiate between the three study groups, with a clear distinction between normal subjects and patients with suspected glaucoma; the distinction between normal subjects and glaucoma patients was especially clear and significant. CONCLUSION: The new HSA protocol used in mfVEP testing can detect glaucomatous visual field defects in both glaucoma and glaucoma suspect patients. This protocol provides information about focal visual field differences across the horizontal midline, which can be used to differentiate between glaucoma patients and normal subjects. The sensitivity and specificity of the mfVEP test were very promising and correlated with other anatomical changes associated with glaucomatous field loss.
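The core of the HSA protocol is a comparison of mfVEP responses in mirrored sectors above and below the horizontal midline. A minimal sketch of that comparison as a paired t-statistic; the sector SNR values below are hypothetical, not the study's data:

```python
import numpy as np

def paired_t(upper, lower):
    """Paired t-statistic for SNR values in mirrored sectors above and
    below the horizontal midline -- the comparison at the heart of the
    Hemifield Sector Analysis (HSA).  p-values would come from the
    t distribution with n - 1 degrees of freedom."""
    d = np.asarray(upper, float) - np.asarray(lower, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Hypothetical SNR values for 11 mirrored sector pairs in one eye:
# a glaucomatous eye with depressed superior-field responses
upper = [1.1, 1.0, 1.2, 0.9, 1.1, 1.0, 1.2, 1.1, 0.9, 1.0, 1.1]
lower = [2.0, 2.1, 1.9, 2.2, 2.0, 2.1, 1.8, 2.0, 2.2, 1.9, 2.0]
print(paired_t(upper, lower))  # strongly negative -> asymmetry across the midline
```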
Abstract:
Productivity at the macro level is a complex concept but also arguably the most appropriate measure of economic welfare. Currently, limited research is available on the various approaches that can be used to measure it, and especially on the relative accuracy of those approaches. This thesis has two main objectives: firstly, to detail some of the most common productivity measurement approaches and assess their accuracy under a number of conditions; and secondly, to present an up-to-date application of productivity measurement and provide some guidance on selecting between sometimes conflicting productivity estimates. With regard to the first objective, the thesis provides a discussion of the issues specific to macro-level productivity measurement and of the strengths and weaknesses of the three main types of approaches available, namely index-number approaches (represented by growth accounting), non-parametric distance functions (DEA-based Malmquist indices) and parametric production functions (COLS- and SFA-based Malmquist indices). The accuracy of these approaches is assessed through simulation analysis, which yielded some interesting findings. Probably the most important were that deterministic approaches are quite accurate even when the data are moderately noisy; that no approach was accurate when noise was more extensive; that functional form misspecification has a severe negative effect on the accuracy of the parametric approaches; and that increased volatility in inputs and prices from one period to the next adversely affects all approaches examined. The application was based on the EU KLEMS (2008) dataset and revealed that the different approaches do in fact produce different productivity change estimates, at least for some of the countries assessed.
To assist researchers in selecting between conflicting estimates, a new three-step selection framework is proposed, based on the findings of the simulation analyses and on established diagnostics/indicators. An application of this framework is also provided, based on the EU KLEMS dataset.
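Of the approaches discussed, the index-number (growth accounting) approach is the most direct: TFP growth is measured as the Solow residual, output growth minus income-share-weighted input growth. A minimal sketch with hypothetical growth rates:

```python
def solow_residual(dlnY, dlnK, dlnL, alpha):
    """Growth-accounting (index-number) estimate of TFP growth: the
    Solow residual, i.e. output growth minus income-share-weighted
    input growth.  alpha is capital's income share; labour receives
    the remaining 1 - alpha."""
    return dlnY - alpha * dlnK - (1 - alpha) * dlnL

# Hypothetical annual log growth rates for one country (illustrative only)
print(solow_residual(0.03, 0.04, 0.01, alpha=0.33))  # ~0.0101, i.e. about 1% TFP growth
```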
Abstract:
The appealing feature of the arbitrage-free Nelson-Siegel model of the yield curve is the ability to capture movements in the yield curve through readily interpretable shifts in its level, slope or curvature, all within a dynamic arbitrage-free framework. To ensure that the level, slope and curvature factors evolve so as not to admit arbitrage, the model introduces a yield-adjustment term. This paper shows how the yield-adjustment term can also be decomposed into the familiar level, slope and curvature elements plus some additional readily interpretable shape adjustments. This means that, even in an arbitrage-free setting, it continues to be possible to interpret movements in the yield curve in terms of level, slope and curvature influences. © 2014 Taylor & Francis.
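The factor structure described above can be made concrete: in the Nelson-Siegel model, yields are linear in the level, slope and curvature factors through maturity-dependent loadings, and the arbitrage-free variant adds the yield-adjustment term on top of this. A minimal sketch of the factor loadings; the decay parameter λ and factor values below are illustrative:

```python
import numpy as np

def nelson_siegel(tau, level, slope, curvature, lam=0.5):
    """Nelson-Siegel yield curve: the yield at maturity tau is a linear
    combination of level, slope and curvature factors.  The arbitrage-free
    version adds a factor-independent yield-adjustment term on top of
    this, which the paper decomposes into the same shape elements."""
    x = lam * tau
    slope_load = (1 - np.exp(-x)) / x       # loading on the slope factor
    curv_load = slope_load - np.exp(-x)     # loading on the curvature factor
    return level + slope * slope_load + curvature * curv_load

# Illustrative upward-sloping curve: short yields near level+slope,
# long yields converging to the level factor
maturities = np.array([0.25, 1, 2, 5, 10, 30])
print(nelson_siegel(maturities, level=0.04, slope=-0.02, curvature=0.01))
```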
Abstract:
After exogenously cueing attention to a peripheral location, the return of attention and response to the location can be inhibited. We demonstrate that these inhibitory mechanisms of attention can be associated with objects and can be automatically and implicitly retrieved over relatively long periods. Furthermore, we also show that when face stimuli are associated with inhibition, the effect is more robust for faces presented in the left visual field. This effect can be even more spatially specific, where most robust inhibition is obtained for faces presented in the upper as compared to the lower visual field. Finally, it is revealed that the inhibition is associated with an object’s identity, as inhibition moves with an object to a new location; and that the retrieved inhibition is only transiently present after retrieval.
Abstract:
The microstructural stability of aluminide diffusion coatings, prepared by means of a two-stage pack-aluminization treatment on single-crystal nickel-base superalloy substrates, is considered in this article. Edge-on specimens of the coated superalloy are studied using transmission electron microscopy (TEM). The effects of coating thickness and post-coating heat treatment (duration, temperature, and atmosphere) on coating microstructure are examined. The article discusses the partial transformation of the matrix of the coating, from a B2-type phase (nominally NiAl) to an L12 phase (nominally Ni3(Al, Ti)), during exposure at temperatures of 850 °C and 950 °C in air and in vacuum for up to 138 hours. Three possible processes that can account for decomposition of the coating matrix are investigated, namely, interdiffusion between the coating and the substrate, oxidation of the coating surface, and aging of the coating. Of these processes, aging of the coating is shown to be the predominant factor in the coating transformation under the conditions considered. © 1992 The Minerals, Metals and Materials Society, and ASM International.
Abstract:
A hard combinatorial problem with useful applications in the design of discrete devices is investigated: the two-block decomposition of a partial Boolean function. The key task is to find a weak partition of the set of arguments under which the considered function can be decomposed. Solving this task is substantially speeded up by first discovering traces of the sought-for partition. Efficient combinatorial operations are used for this, based on the parallel execution of operations over adjacent units in the Boolean space.
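For a fully specified function, the classical (Ashenhurst) test for a two-block decomposition f(X) = h(g(X1), X2) counts the distinct columns of the partition matrix; the partial-function case studied in the paper replaces equality of columns with compatibility over don't-cares, which is where the combinatorial hardness lies. A minimal sketch of the fully specified test; the example function is illustrative:

```python
from itertools import product

def column_multiplicity(f, bound_vars, free_vars, n):
    """Ashenhurst-style test for a two-block decomposition
    f(X) = h(g(bound), free): build the partition matrix whose columns
    are indexed by assignments to the bound set and count the distinct
    columns.  A fully specified f admits the decomposition (with a
    single-output g) iff the count is at most 2.  For *partial*
    functions, 'distinct' must be replaced by pairwise incompatibility
    over don't-cares, which is what makes the problem hard."""
    cols = set()
    for bound in product([0, 1], repeat=len(bound_vars)):
        col = []
        for free in product([0, 1], repeat=len(free_vars)):
            x = [0] * n
            for v, bit in zip(bound_vars, bound):
                x[v] = bit
            for v, bit in zip(free_vars, free):
                x[v] = bit
            col.append(f(tuple(x)))
        cols.add(tuple(col))
    return len(cols)

# Example: f = (x0 AND x1) XOR x2 decomposes with bound set {x0, x1}
f = lambda x: (x[0] & x[1]) ^ x[2]
print(column_multiplicity(f, [0, 1], [2], 3))  # 2 -> decomposable
```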
Abstract:
This paper is motivated by the recent debate on the existence and scale of China's 'Guo Jin Min Tui' (GJMT) phenomenon, often translated as 'the state sector advances and the private sector retreats'. We argue that the profound implication of an advancing state sector is not the expansion of state ownership in the economy per se, but the likely retardation of the development of the already financially constrained private sector, and the resulting questions about the sustainability of China's already weakening economic growth. Drawing on recent methodological advances, we provide a critical analysis of the contributions of the state and non-state sectors to aggregate Total Factor Productivity and its growth over the period 1998-2007, in order to verify the existence of GJMT and its possible impact on Chinese economic growth. Overall, we find strong and consistent evidence of systematic and worsening resource misallocation within the state sector and/or between the state and private sectors over time. This suggests that non-market forces allow resources to be driven away from their competitive market allocation and towards the inefficient state sector. Crown Copyright © 2014.