23 results for Secondary analysis
in Aston University Research Archive
Abstract:
We undertook a secondary analysis of in-depth interviews with white (n = 32) and Pakistani and Indian (n = 32) respondents who had type 2 diabetes, which explored their perceptions and understandings of disease causation. We observed subtle, but important, differences in the ways in which these respondent groups attributed responsibility and blame for developing the disease. Whereas Pakistani and Indian respondents tended to externalise responsibility, highlighting their life circumstances in general and/or their experiences of migrating to Britain in accounting for their diabetes (or the behaviours they saw as giving rise to it), white respondents, by contrast, tended to emphasise the role of their own lifestyle 'choices' and 'personal failings'. In seeking to understand these differences, we argue for a conceptual and analytical approach which embraces both micro- (i.e. everyday) and macro- (i.e. cultural) contextual factors and experiences. In so doing, we provide a critique of social scientific studies of lay accounts/understandings of health and illness. We suggest that greater attention needs to be paid to the research encounter (that is, to who is looking at whom and in what circumstances) to understand the different kinds of contexts researchers have highlighted in presenting and interpreting their data. © 2007 Foundation for the Sociology of Health & Illness/Blackwell Publishing Ltd.
Abstract:
The density of axons in the optic nerve, olfactory tract and corpus callosum was quantified in non-demented elderly subjects and in Alzheimer’s disease (AD) using an image analysis system. In each fibre tract, there was significant reduction in the density of axons in AD compared with non-demented subjects, the greatest reductions being observed in the olfactory tract and corpus callosum. Axonal loss in the optic nerve and olfactory tract was mainly of axons with smaller myelinated cross-sectional areas. In the corpus callosum, a reduction in the number of ‘thin’ and ‘thick’ fibres was observed in AD, but there was a proportionally greater loss of the ‘thick’ fibres. The data suggest significant degeneration of white matter fibre tracts in AD involving the smaller axons in the two sensory nerves and both large and small axons in the corpus callosum. Loss of axons in AD could reflect an associated white matter disorder and/or be secondary to neuronal degeneration.
Abstract:
Recent discussion of the knowledge-based economy draws increasing attention to the role that the creation and management of knowledge plays in economic development. Development of human capital, the principal mechanism for knowledge creation and management, becomes a central issue for policy-makers and practitioners at the regional, as well as national, level. Facing competition both within and across nations, regional policy-makers view human capital development as a key to strengthening the positions of their economies in the global market. Against this background, the aim of this study is to go some way towards answering the question of whether, and how, investment in education and vocational training at regional level provides these territorial units with comparative advantages. The study reviews literature in economics and economic geography on economic growth (Chapter 2). In growth model literature, human capital has gained increased recognition as a key production factor along with physical capital and labour. Although leaving technical progress as an exogenous factor, neoclassical Solow-Swan models have improved their estimates through the inclusion of human capital. In contrast, endogenous growth models place investment in research at centre stage in accounting for technical progress. As a result, they often focus upon research workers, who embody high-order human capital, as a key variable in their framework. An issue of discussion is how human capital facilitates economic growth: is it the level of its stock or its accumulation that influences the rate of growth? In addition, these economic models are criticised in economic geography literature for their failure to consider spatial aspects of economic development, and particularly for their lack of attention to tacit knowledge and urban environments that facilitate the exchange of such knowledge. 
Our empirical analysis of European regions (Chapter 3) shows that investment by individuals in human capital formation has distinct patterns. Those regions with a higher level of investment in tertiary education tend to have a larger concentration of information and communication technology (ICT) sectors (including provision of ICT services and manufacture of ICT devices and equipment) and research functions. Not surprisingly, regions with major metropolitan areas where higher education institutions are located show a high enrolment rate for tertiary education, suggesting a possible link to the demand from high-order corporate functions located there. Furthermore, the rate of human capital development (at the level of vocational type of upper secondary education) appears to have significant association with the level of entrepreneurship in emerging industries such as ICT-related services and ICT manufacturing, whereas such association is not found with traditional manufacturing industries. In general, a high level of investment by individuals in tertiary education is found in those regions that accommodate high-tech industries and high-order corporate functions such as research and development (R&D). These functions are supported through the urban infrastructure and public science base, facilitating exchange of tacit knowledge. They also enjoy a low unemployment rate. However, the existing stock of human and physical capital in those regions with a high level of urban infrastructure does not lead to a high rate of economic growth. Our empirical analysis demonstrates that the rate of economic growth is determined by the accumulation of human and physical capital, not by level of their existing stocks. We found no significant effects of scale that would favour those regions with a larger stock of human capital. 
The primary policy implication of our study is that, in order to facilitate economic growth, education and training need to supply human capital at a faster pace than simply replenishing it as it disappears from the labour market. Given the significant impact of high-order human capital (such as business R&D staff in our case study) as well as the increasingly fast pace of technological change that makes human capital obsolete, a concerted effort needs to be made to facilitate its continuous development.
Abstract:
Using an event study approach, this article reports evidence that the UK Treasury bond market displayed anomalous pricing behaviour in the secondary market both immediately before and after auctions of seasoned bonds. Using a benchmark return derived from the behaviour of the underlying yield curve, the market offered statistically and economically significant excess returns, around the auctions held between 1992 and 2004. A cross-sectional analysis of the cumulative excess returns shows that the excess demand at the auctions is a key determinant of this excess return.
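The core calculation in such an event study is the cumulative excess return of the bond over its yield-curve benchmark across an event window around the auction. A minimal sketch (the function name and the return figures are hypothetical, not taken from the article):

```python
import numpy as np

def cumulative_excess_returns(bond_returns, benchmark_returns):
    """Cumulative excess return of a bond over a benchmark
    across an event window (e.g. days around an auction)."""
    excess = np.asarray(bond_returns) - np.asarray(benchmark_returns)
    return excess.cumsum()

# Hypothetical daily returns (%) over a 5-day window around an auction
bond = [0.10, 0.05, -0.02, 0.08, 0.04]
bench = [0.06, 0.04, 0.01, 0.03, 0.02]
car = cumulative_excess_returns(bond, bench)  # cumulative abnormal return path
```

The cross-sectional step described in the abstract would then regress the final value of `car` for each auction on candidate determinants such as excess demand (bid-to-cover).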
Abstract:
OBJECTIVES: To assess whether blood pressure control in primary care could be improved with the use of patient held targets and self monitoring in a practice setting, and to assess the impact of these on health behaviours, anxiety, prescribed antihypertensive drugs, patients' preferences, and costs. DESIGN: Randomised controlled trial. SETTING: Eight general practices in south Birmingham. PARTICIPANTS: 441 people receiving treatment in primary care for hypertension but not controlled below the target of < 140/85 mm Hg. INTERVENTIONS: Patients in the intervention group received treatment targets along with facilities to measure their own blood pressure at their general practice; they were also asked to visit their general practitioner or practice nurse if their blood pressure was repeatedly above the target level. Patients in the control group received usual care (blood pressure monitoring by their practice). MAIN OUTCOME MEASURES: Primary outcome: change in systolic blood pressure at six months and one year in both intervention and control groups. Secondary outcomes: change in health behaviours, anxiety, prescribed antihypertensive drugs, patients' preferences of method of blood pressure monitoring, and costs. RESULTS: 400 (91%) patients attended follow up at one year. Systolic blood pressure in the intervention group had significantly reduced after six months (mean difference 4.3 mm Hg (95% confidence interval 0.8 mm Hg to 7.9 mm Hg)) but not after one year (mean difference 2.7 mm Hg (- 1.2 mm Hg to 6.6 mm Hg)). No overall difference was found in diastolic blood pressure, anxiety, health behaviours, or number of prescribed drugs. Patients who self monitored lost more weight than controls (as evidenced by a drop in body mass index), rated self monitoring above monitoring by a doctor or nurse, and consulted less often. 
Overall, self monitoring did not cost significantly more than usual care (251 pounds sterling (437 dollars; 364 euros) (95% confidence interval 233 pounds sterling to 275 pounds sterling) versus 240 pounds sterling (217 pounds sterling to 263 pounds sterling)). CONCLUSIONS: Practice based self monitoring resulted in small but significant improvements of blood pressure at six months, which were not sustained after a year. Self monitoring was well received by patients, anxiety did not increase, and there was no appreciable additional cost. Practice based self monitoring is feasible and results in blood pressure control that is similar to that in usual care.
Abstract:
A study has been made of the coalescence of secondary dispersions in beds of woven meshes. The variables investigated were superficial velocity, bed depth, mesh geometry and fibre material; the effects of presoaking the bed in the dispersed phase before operation were also considered. Equipment was designed to generate a 0.1% phase ratio toluene in water dispersion whose mean drop size was determined using a Coulter Counter. The coalesced drops were sized by photography and a novel holographic technique was developed to evaluate the mean diameter of the effluent secondary drops. Previous models describing single phase flow in porous media are reviewed, and the experimental data obtained in this study were found to be best represented by Keller's equation, which is based on a physical model similar to the internal structure of the meshes. Statistical analysis of two phase data produced a correlation, for each mesh tested, relating the pressure drop to superficial velocity and bed depth. The flow parameter evaluated from the single phase model is incorporated into a theoretical comparison of drop capture mechanisms, which indicated that direct and indirect interception are predominant. The resulting equation for drop capture efficiency is used to predict the initial, local drop capture rate in a coalescer. A mathematical description of the saturation profiles was formulated and verified by average saturation data. Based on the Blake-Kozeny equation, an expression is derived analytically to predict the two phase pressure drop using the parameters which characterise the saturation profiles. By specifying the local saturation at the inlet face for a given velocity, good agreement between experimental pressure drop data and the model predictions was obtained.
Abstract:
The organic matter in five oil shales (three from the Kimmeridge Clay sequence, one from the Oxford Clay sequence and one from the Julia Creek deposits in Australia) has been isolated by acid demineralisation, separated into kerogens and bitumens by solvent extraction and then characterised in some detail by chromatographic, spectroscopic and degradative techniques. Kerogens cannot be characterised as easily as bitumens because of their insolubility, and hence before any detailed molecular information can be obtained from them they must be degraded into lower molecular weight, more soluble components. Unfortunately, the determination of kerogen structures has all too often involved degradations that were far too harsh and which led to the destruction of much of the structural information. For this reason a number of milder, more selective degradative procedures have been tested and used to probe the structure of kerogens. These are:
1. Lithium aluminium hydride reduction. This procedure is commonly used to remove pyrite from kerogens and it may also increase their solubility by reduction of labile functional groups. Although reduction of the kerogens was confirmed, increases in solubility were correlated with pyrite content and not kerogen reduction.
2. O-methylation in the presence of a phase transfer catalyst. By the removal of hydrogen bond interactions via O-methylation, it was possible to determine the contribution of such secondary interactions to the insolubility of the kerogens. Problems were encountered with the use of the phase transfer catalyst.
3. Stepwise alkaline potassium permanganate oxidation. Significant kerogen dissolution was achieved using this procedure but uncontrolled oxidation of initial oxidation products proved to be a problem. A comparison with the peroxytrifluoroacetic acid oxidation of these kerogens was made.
4. Peroxytrifluoroacetic acid oxidation. This was used because it preferentially degrades aromatic rings whilst leaving any benzylic positions intact. Considerable conversion of the kerogens into soluble products was achieved with this procedure.
At all stages of degradation the products were fully characterised where possible using a variety of techniques including elemental analysis, solution state 1H and 13C nuclear magnetic resonance, solid state 13C nuclear magnetic resonance, gel-permeation chromatography, gas chromatography-mass spectrometry, Fourier transform infra-red spectroscopy and some ultraviolet-visible spectroscopy.
Abstract:
Finite element analysis is a useful tool in understanding how the accommodation system of the eye works. Further to simpler FEA models that have been used hitherto, this paper describes a sensitivity study which aims to understand which parameters of the crystalline lens are key to developing an accurate model of the accommodation system. A number of lens models were created, allowing the mechanical properties, internal structure and outer geometry to be varied. These models were then spun about their axes, and the deformations determined. The results showed the mechanical properties are the critical parameters, with the internal structure secondary. Further research is needed to fully understand how the internal structure and properties interact to affect lens deformation.
Abstract:
The literature relating to haze formation, methods of separation, coalescence mechanisms, and models by which droplets <100 μm are collected, coalesced and transferred has been reviewed with particular reference to particulate bed coalescers. The separation of secondary oil-water dispersions was studied experimentally using packed beds of monosized glass ballotini particles. The variables investigated were superficial velocity, bed depth, particle size, and the phase ratio and drop size distribution of the inlet secondary dispersion. A modified pump loop was used to generate secondary dispersions of toluene or Clairsol 350 in water with phase ratios between 0.5-6.0 v/v%. Inlet drop size distributions were determined using a Malvern Particle Size Analyser; effluent, coalesced droplets were sized by photography. Single phase flow pressure drop data were correlated by means of a Carman-Kozeny type equation. Correlations were obtained relating single and two phase pressure drops, as (ΔP2/μc)/(ΔP1/μd) = kp U^a L^b dc^c dp^d Cin^e. A flow equation was derived to correlate the two phase pressure drop data as ΔP2/(ρc U^2) = 8.64*10^7 [dc/D]^-0.27 [L/D]^0.71 [dp/D]^-0.17 [NRe]^1.5 [e1]^-0.14 [Cin]^0.26. In a comparison between functions to characterise the inlet drop size distributions, a modification of the Weibull function provided the best fit of the experimental data. The general mean drop diameter was correlated by d_qp^(q-p) = d_fr^(q-p) · α^((p-q)/β) · Γ((q-3)/β + 1) / Γ((p-3)/β + 1). The measured and predicted mean inlet drop diameters agreed within ±15%. Secondary dispersion separation depends largely upon drop capture within a bed. A theoretical analysis of drop capture mechanisms in this work indicated that indirect interception and London-van der Waal's mechanisms predominate. 
Mathematical models of dispersed phase concentration in the bed were developed by considering drop motion to be analogous to molecular diffusion. The number of possible channels in a bed was predicted from a model in which the pores comprised randomly-interconnected passage-ways between adjacent packing elements and axial flow occurred in cylinders on an equilateral triangular pitch. An expression was derived for the length of service channels in a queuing system, leading to the prediction of filter coefficients. The insight provided into the mechanisms of drop collection and travel, and the correlations of operating parameters, should assist the design of industrial particulate bed coalescers.
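The general mean drop diameter used in correlations of this kind has the standard definition d_qp = (Σ n·d^q / Σ n·d^p)^(1/(q-p)), with q = 3, p = 2 giving the Sauter mean. A quick sketch of this definition (the drop sizes below are hypothetical, not data from the thesis):

```python
import numpy as np

def general_mean_diameter(d, q, p, n=None):
    """General mean drop diameter d_qp = (sum(n*d^q) / sum(n*d^p))^(1/(q-p))."""
    d = np.asarray(d, dtype=float)
    n = np.ones_like(d) if n is None else np.asarray(n, dtype=float)
    return ((n * d**q).sum() / (n * d**p).sum()) ** (1.0 / (q - p))

# Sauter mean diameter d_32 of a hypothetical inlet dispersion (µm)
drops = [12.0, 18.0, 25.0, 40.0]
d32 = general_mean_diameter(drops, q=3, p=2)
```

The Sauter mean weights larger drops heavily, which is why it is the conventional choice for interfacial-area-dominated processes such as coalescence.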
Abstract:
This work concerns the development of a proton induced X-ray emission (PIXE) analysis system and a multi-sample scattering chamber facility. The characteristics of the beam pulsing system and its counting rate capabilities were evaluated by observing the ion-induced X-ray emission from pure thick copper targets, with and without beam pulsing operation. The characteristic X-rays were detected with a high resolution Si(Li) detector coupled to a multi-channel analyser. The removal of the pile-up continuum by the use of the on-demand beam pulsing is clearly demonstrated in this work. This new on-demand pulsing system, with its counting rate capability of 25, 18 and 10 kPPS corresponding to main amplifier time constants of 2, 4 and 8 µsec respectively, enables thick targets to be analysed more readily. Reproducibility of the on-demand beam pulsing system operation was checked by repeated measurements of the system throughput curves, with and without beam pulsing. The reproducibility of the analysis performed using this system was also checked by repeated measurements of the intensity ratios from a number of standard binary alloys during the experimental work. A computer programme has been developed to calculate the X-ray yields from thick targets bombarded by protons, taking into account the secondary X-ray yield produced by characteristic X-ray fluorescence when the characteristic X-ray energy of one element lies above the absorption edge energy of the other element present in the target. This effect was studied on metallic binary alloys such as Fe/Ni and Cr/Fe. The quantitative analysis of Fe/Ni and Cr/Fe alloy samples to determine their elemental composition, taking this enhancement into account, has been demonstrated in this work. Furthermore, the usefulness of the Rutherford backscattering (RBS) technique to obtain the depth profiles of the elements in the upper micron of the sample is discussed.
Abstract:
A re-examination of fundamental concepts and a formal structuring of the waveform analysis problem is presented in Part I: e.g. the nature of frequency is examined, and a novel alternative to the classical methods of detection is proposed and implemented which has the advantage of speed and independence from amplitude. Waveform analysis provides the link between Parts I and II. Part II is devoted to Human Factors and the Adaptive Task Technique. The historical, technical and intellectual development of the technique is traced in a review which examines the evidence of its advantages relative to non-adaptive fixed task methods of training, skill assessment and man-machine optimisation. A second review examines research evidence on the effect of vibration on manual control ability. Findings are presented in terms of percentage increment or decrement in performance relative to performance without vibration in the range 0-0.6 Rms 'g'. Primary task performance was found to vary by as much as 90% between tasks at the same Rms 'g'. Differences in task difficulty accounted for this difference. Within tasks, vibration-added-difficulty accounted for the effects of vibration intensity. Secondary tasks were found to be largely insensitive to vibration, except secondaries which involved fine manual adjustment of minor controls. Three experiments are reported next in which an adaptive technique was used to measure the percentage task difficulty added by vertical random and sinusoidal vibration to a 'Critical' Compensatory Tracking task. At vibration intensities between 0-0.09 Rms 'g' it was found that random vibration added (24.5 x Rms 'g')/7.4 x 100% to the difficulty of the control task. An equivalence relationship between random and sinusoidal vibration effects was established based upon added task difficulty. 
Waveform Analyses which were applied to the experimental data served to validate Phase Plane analysis and uncovered the development of a control and possibly a vibration isolation strategy. The submission ends with an appraisal of subjects mentioned in the thesis title.
Abstract:
Objective. Using an image analysis system to determine whether there is loss of axons in the olfactory tract (OT) in Alzheimer's disease (AD). Design. A retrospective neuropathological study. Patients. Nine control patients and eight clinically and pathologically verified AD cases. Measurements and Results. There was a reduction in axon density in AD compared with control subjects in the central and peripheral regions of the tract. Axonal loss was mainly of axons with smaller (<2.99 µm2) myelinated cross-sectional areas. Conclusions. The data suggest significant degeneration of axons within the OT involving the smaller sized axons. Loss of axons in the OT is likely to be secondary to pathological changes originating within the parahippocampal gyrus rather than to a pathogen spreading into the brain via the olfactory pathways.
Abstract:
Purpose: The purpose of this paper is to focus on investigating and benchmarking green operations initiatives in the automotive industry documented in the environmental reports of selected companies. The investigation roadmaps the main environmental initiatives taken by the world's three major car manufacturers and benchmarks them against each other. The categorisation of green operations initiatives that is provided in the paper can also help companies in other sectors to evaluate their green practices. Design/methodology/approach: The first part of the paper is based on existing literature on the topic of green and sustainable operations and the "unsustainable" context of automotive production. The second part relates to the roadmap and benchmarking of green operations initiatives based on an analysis of secondary data from the automotive industry. Findings: The findings show that the world's three major car manufacturers are pursuing various environmental initiatives involving the following green operations practices: green buildings, eco-design, green supply chains, green manufacturing, reverse logistics and innovation. Research limitations/implications: The limitations of this paper start from its selection of the companies, which was made using production volume and country of origin as the principal criteria. There is ample evidence that other, smaller, companies are pursuing more sophisticated and original environmental initiatives. Also, there might be a gap between what companies say they do in their environmental reports and what they actually do. Practical implications: This paper helps practitioners in the automotive industry to benchmark themselves against the major volume manufacturers in three different continents. 
Practitioners from other industries will also find it valuable to discover how the automotive industry is pursuing environmental initiatives beyond manufacturing, in addition to the green operations practices that broadly cover all the activities of the operations function. Originality/value: The originality of the paper is in its up-to-date analysis of environmental reports of automotive companies. The paper offers value for researchers and practitioners due to its contribution to the green operations literature. For instance, the inclusion of green buildings as part of green operations practices has so far been neglected by most researchers and authors in the field of green and sustainable operations. © Emerald Group Publishing Limited.
Abstract:
A series of N1-benzylidene pyridine-2-carboxamidrazone anti-tuberculosis compounds has been evaluated for their cytotoxicity using human mononuclear leucocytes (MNL) as target cells. All eight compounds were significantly more toxic than dimethyl sulphoxide control and isoniazid (INH), with the exception of a 4-methoxy-3-(2-phenylethyloxy) derivative, which was not significantly different in toxicity compared with INH. The most toxic agent was an ethoxy derivative, followed by 3-nitro, 4-methoxy, dimethylpropyl, 4-methylbenzyloxy, 3-methoxy-4-(-2-phenylethyloxy) and 4-benzyloxy in rank order. In comparison with the effect of selected carboxamidrazone agents on cells alone, the presence of either N-acetyl cysteine (NAC) or glutathione (GSH) caused a significant reduction in the toxicity of INH, as well as that of the 4-benzyloxy derivative, although both increased the toxicity of a 4-N,N-dimethylamino-1-naphthylidene and a 2-t-butylthio derivative. The derivatives from this and three previous studies were subjected to computational analysis in order to derive equations designed to establish quantitative structure activity relationships for these agents. Twenty-five compounds were thus resolved into two groups (1 and 2), which on analysis yielded equations with r2 values in the range 0.65-0.92. Group 1 shares a common mode of toxicity related to hydrophobicity, where cytotoxicity peaked at logP of 3.2, while Group 2 toxicity was strongly related to ionisation potential. The presence of thiols such as NAC and GSH both promoted and attenuated toxicity in selected compounds from Group 1, suggesting that secondary mechanisms of toxicity were operating. These studies will facilitate the design of future low toxicity high activity anti-tubercular carboxamidrazone agents. © 2003 Elsevier Science B.V. All rights reserved.
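A parabolic QSAR of the kind described for Group 1, where toxicity rises with hydrophobicity up to a peak logP and then falls, can be illustrated by fitting a quadratic to logP/toxicity pairs and locating its vertex. The data points below are invented purely for illustration; they are not the study's measurements:

```python
import numpy as np

# Hypothetical logP vs cytotoxicity-index pairs (illustrative only)
logp = np.array([1.0, 2.0, 3.0, 3.5, 4.5, 5.5])
tox = np.array([0.30, 0.62, 0.88, 0.90, 0.70, 0.35])

# Least-squares quadratic fit: tox = a*logP^2 + b*logP + c
a, b, c = np.polyfit(logp, tox, 2)

# For a downward-opening parabola (a < 0), toxicity peaks at logP = -b/(2a)
peak_logp = -b / (2 * a)
```

Locating the vertex this way is how a "cytotoxicity peaked at logP of 3.2" conclusion is typically read off a fitted parabolic QSAR equation.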
Abstract:
G-protein coupled receptors (GPCRs) constitute the largest class of membrane proteins and are a major drug target. A serious obstacle to studying GPCR structure/function characteristics is the requirement to extract the receptors from their native environment in the plasma membrane, coupled with the inherent instability of GPCRs in the detergents required for their solubilization. In the present study, we report the first solubilization and purification of a functional GPCR [human adenosine A