38 results for Frontal Analysis Continuous Capillary Electrophoresis
in Aston University Research Archive
Abstract:
We propose a fibre-based approach for the generation of optical frequency combs (OFCs) aimed at the calibration of astronomical spectrographs in the low- and medium-resolution range. This approach comprises two steps: in the first step, an appropriate state of optical pulses is generated, which is subsequently moulded in the second step to deliver the desired OFC. More precisely, the first step is realised by injecting two continuous-wave (CW) lasers into a conventional single-mode fibre, whereas the second step generates a broad OFC by using the optical solitons generated in step one as the initial condition. We investigate the conversion of a bichromatic input wave produced by the two initial CW lasers into a train of optical solitons, which takes place in the fibre used in step one. In particular, we are interested in the soliton content of the pulses created in this fibre. For that, we study different initial conditions (a single cosine hump, an Akhmediev breather, and a deeply modulated bichromatic wave) by means of soliton radiation beat analysis and compare the results to draw conclusions about the soliton content of the state generated in the first step. In the case of a deeply modulated bichromatic wave, we observed the formation of a collective soliton crystal at low input powers and the appearance of separated solitons at high input powers. An intermediate state showing the features of both the soliton crystal and the separated solitons turned out to be most suitable for the generation of OFCs for the calibration of astronomical spectrographs.
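As a rough illustration of the kind of pulse-propagation modelling involved, the split-step Fourier method is the standard numerical scheme for the nonlinear Schrödinger equation governing such fibres. The sketch below works in normalised units with an illustrative cosine (bichromatic-style) input; the grid, step size, and dispersion sign are assumptions for demonstration, not values from the paper.

```python
import numpy as np

# periodic time window, as in normalised fibre-NLSE simulations
n, T = 256, 20.0
t = np.linspace(-T / 2, T / 2, n, endpoint=False)
u0 = np.cos(2 * np.pi * t / T)              # deeply modulated bichromatic-style input

dz, steps = 1e-3, 200
w = 2 * np.pi * np.fft.fftfreq(n, d=T / n)  # angular frequencies of the grid
half_lin = np.exp(0.5j * w**2 * (dz / 2))   # linear (anomalous-dispersion) half-step

u = u0.astype(complex)
for _ in range(steps):
    u = np.fft.ifft(half_lin * np.fft.fft(u))  # linear half-step in Fourier space
    u *= np.exp(1j * np.abs(u)**2 * dz)        # Kerr nonlinearity, full step
    u = np.fft.ifft(half_lin * np.fft.fft(u))  # second linear half-step

# the scheme preserves pulse energy (L2 norm) up to round-off
print(np.linalg.norm(u), np.linalg.norm(u0))
```

Since both sub-steps multiply by unit-modulus factors, the L2 norm is conserved to round-off accuracy, a convenient sanity check for such simulations.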
Abstract:
Purpose: The purpose of this paper is to present the application of logical framework analysis (LFA) for implementing continuous quality improvement (CQI) across multiple settings in a tertiary care hospital. Design/methodology/approach: This study adopts a multiple case study approach. LFA is implemented within three diverse settings, namely, intensive care unit, surgical ward, and acute in-patient psychiatric ward. First, problem trees are developed in order to determine the root causes of quality issues, specific to the three settings. Second, objective trees are formed suggesting solutions to the quality issues. Third, a project plan template using the logical framework (LOGFRAME) is created for each setting. Findings: This study shows substantial improvement in quality across the three settings. LFA proved to be effective in analysing quality issues and suggesting improvement measures objectively. Research limitations/implications: This paper applies LFA in specific, albeit diverse, settings in one hospital. For validation purposes, it would be ideal to analyse other settings within the same hospital, as well as in several hospitals. It also adopts a bottom-up approach, which could be triangulated with other sources of data. Practical implications: LFA enables top management to obtain an integrated view of performance. It also provides a basis for further quantitative research on quality management through the identification of key performance indicators, and facilitates the development of a business case for improvement. Originality/value: LFA is a novel approach for the implementation of CQI programs. Although LFA has been used extensively for project development to source funds from development banks, its application in quality improvement within healthcare projects is scant.
Abstract:
Very little is known about the neural structures involved in the perception of realistic dynamic facial expressions. In the present study, a unique set of naturalistic dynamic facial emotional expressions was created. Through fMRI and connectivity analysis, a dynamic face perception network was identified, which is demonstrated to extend Haxby et al.'s [Haxby, J. V., Hoffman, E. A., & Gobbini, M. I. The distributed human neural system for face perception. Trends in Cognitive Science, 4, 223–233, 2000] distributed neural system for face perception. This network includes early visual regions, such as the inferior occipital gyrus, which is identified as insensitive to motion or affect but sensitive to the visual stimulus, the STS, identified as specifically sensitive to motion, and the amygdala, recruited to process affect. Measures of effective connectivity between these regions revealed that dynamic facial stimuli were associated with specific increases in connectivity between early visual regions, such as the inferior occipital gyrus and the STS, along with coupling between the STS and the amygdala, as well as the inferior frontal gyrus. These findings support the presence of a distributed network of cortical regions that mediate the perception of different dynamic facial expressions.
Abstract:
A cluster analysis was performed on 78 cases of Alzheimer's disease (AD) to identify possible pathological subtypes of the disease. Data on 47 neuropathological variables, including features of the gross brain and the density and distribution of senile plaques (SP) and neurofibrillary tangles (NFT), were used to describe each case. Cluster analysis is a multivariate statistical method which groups together AD cases with the most similar neuropathological characteristics. The majority of cases (83%) were clustered into five such groups. The analysis suggested that an initial division of the 78 cases could be made into two major groups: (1) a large group (68%) in which the distribution of SP and NFT was restricted to a relatively small number of brain regions, and (2) a smaller group (15%) in which the lesions were more widely disseminated throughout the neocortex. Each of these groups could be subdivided on the degree of capillary amyloid angiopathy (CAA) present. In addition, those cases with a restricted development of SP/NFT and CAA could be divided further into an early and a late onset form. Familial AD cases did not cluster as a separate group but were either distributed between four of the five groups or were cases with unique combinations of pathological features not closely related to any of the groups. It was concluded that multivariate statistical methods may be of value in the classification of AD into subtypes. © 1994 Springer-Verlag.
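For readers unfamiliar with the method, a minimal sketch of such a cluster analysis might look like the following. It uses agglomerative Ward linkage on simulated case profiles; the case count, variables, and group separation are illustrative assumptions, not the study's data.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# 20 hypothetical cases x 5 standardised neuropathological variables
# (e.g. SP density, NFT density, CAA grade), simulated as two groups:
cases = np.vstack([
    rng.normal(0.0, 1.0, (10, 5)),   # "restricted" pathology profile
    rng.normal(3.0, 1.0, (10, 5)),   # "disseminated" pathology profile
])

# Ward linkage merges the cases with the most similar profiles first
Z = linkage(cases, method="ward")
# cut the dendrogram into two clusters, mirroring the initial major division
labels = fcluster(Z, t=2, criterion="maxclust")
print(labels)
```

Cutting the same dendrogram at five clusters would correspond to the five-group solution described in the abstract.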
Inventory parameter management and focused continuous improvement for repetitive batch manufacturers
Abstract:
This thesis proposes a methodology to assist repetitive batch manufacturers in the adoption of certain aspects of the Lean Production principles. The methodology concentrates on the reduction of inventory through the setting of appropriate batch sizes, taking account of the effect of sequence-dependent set-ups and the identification and elimination of bottlenecks. It uses a simple Pareto and modified EBQ-based analysis technique to allocate items to period order day classes based on a combination of each item's annual usage value and set-up cost. The period order day classes to which the items are allocated are determined by the constraint limits in the three measured dimensions: capacity, administration and finance. The methodology overcomes the limitations associated with MRP in the area of sequence-dependent set-ups, and provides a simple way of setting planning parameters that takes this effect into account by concentrating on the reduction of inventory through the systematic identification and elimination of bottlenecks via set-up reduction processes, so allowing batch sizes to be reduced. It aims to help traditional repetitive batch manufacturers on a route to continuous improvement by: highlighting those areas where change would bring the greatest benefits; modelling the effect of proposed changes; quantifying the benefits that could be gained through implementing the proposed changes; and simplifying the effort required to perform the modelling process. It concentrates on increasing flexibility through managed inventory reduction by rationally decreasing batch sizes, taking account of sequence-dependent set-ups and the identification and elimination of bottlenecks. This was achieved through the development of a software modelling tool, and validated through a case study approach.
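The classical EBQ calculation underlying such a batch-sizing analysis can be sketched as follows. The item data, costs, and helper name `ebq` are hypothetical illustrations, not part of the thesis's modelling tool, and the thesis's modified EBQ would additionally account for sequence-dependent set-ups.

```python
import math

def ebq(annual_demand, setup_cost, holding_cost_per_unit):
    """Classical economic batch quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * setup_cost / holding_cost_per_unit)

# hypothetical items: (annual demand, set-up cost, unit holding cost per year)
items = {
    "A": (5000, 40.0, 2.0),   # high annual usage value
    "B": (800, 40.0, 2.0),    # medium
    "C": (120, 40.0, 2.0),    # low
}
batch = {name: round(ebq(*data)) for name, data in items.items()}
print(batch)
```

The Pareto step in the methodology would then rank items by annual usage value before assigning them to period order day classes within the capacity, administration and finance constraint limits.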
Abstract:
Three hypotheses have been proposed to explain neuropathological heterogeneity in Alzheimer's disease (AD): the presence of distinct subtypes ('subtype hypothesis'), variation in the stage of the disease ('phase hypothesis') and variation in the origin and progression of the disease ('compensation hypothesis'). To test these hypotheses, variation in the distribution and severity of senile plaques (SP) and neurofibrillary tangles (NFT) was studied in 80 cases of AD using principal components analysis (PCA). Principal components analysis using the cases as variables (Q-type analysis) suggested that individual differences between patients were continuously distributed rather than the cases being clustered into distinct subtypes. In addition, PCA using the abundances of SP and NFT as variables (R-type analysis) suggested that variations in the presence and abundance of lesions in the frontal and occipital lobes, the cingulate gyrus and the posterior parahippocampal gyrus were the most important sources of heterogeneity consistent with the presence of different stages of the disease. In addition, in a subgroup of patients, individual differences were related to apolipoprotein E (ApoE) genotype, the presence and severity of SP in the frontal and occipital cortex being significantly increased in patients expressing apolipoprotein (Apo)E allele ε4. It was concluded that some of the neuropathological heterogeneity in our AD cases may be consistent with the 'phase hypothesis'. A major factor determining this variation in late-onset cases was ApoE genotype with accelerated rates of spread of the pathology in patients expressing allele ε4.
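A minimal sketch of the R-type versus Q-type distinction used above (variables as columns versus cases as columns) might look like the following; the data are simulated, not the study's 80 AD cases.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 6))          # 80 hypothetical cases x 6 lesion variables

def pca(data):
    # centre the columns, then take eigenvectors of the covariance matrix
    centred = data - data.mean(axis=0)
    cov = np.cov(centred, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]   # largest variance component first
    return evals[order], evecs[:, order]

# R-type analysis: the lesion abundances are the variables (columns)
r_evals, _ = pca(X)
# Q-type analysis: the cases are treated as the variables, so transpose
q_evals, _ = pca(X.T)

print(r_evals[:2], q_evals[:2])
```

In a Q-type analysis, cases clustering tightly on the leading components would suggest distinct subtypes, whereas a continuous spread of case loadings, as reported above, argues against them.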
Abstract:
Since the original Data Envelopment Analysis (DEA) study by Charnes et al. [Measuring the efficiency of decision-making units. European Journal of Operational Research 1978;2(6):429–44], there has been rapid and continuous growth in the field. As a result, a considerable amount of published research has appeared, with a significant portion focused on DEA applications of efficiency and productivity in both public and private sector activities. While several bibliographic collections have been reported, a comprehensive listing and analysis of DEA research covering its first 30 years of history is not available. This paper thus presents an extensive, if not nearly complete, listing of DEA research covering theoretical developments as well as “real-world” applications from inception to the year 2007. A listing of the most utilized/relevant journals, a keyword analysis, and selected statistics are presented.
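As an illustration of the CCR model that underpins much of this literature, the input-oriented efficiency of a decision-making unit can be computed as a small linear programme. The four single-input, single-output units below are hypothetical, and `scipy.optimize.linprog` stands in for the dedicated DEA software typically used.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (m inputs x n units), Y: (s outputs x n units).
    Decision variables: [theta, lambda_1, ..., lambda_n]."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.zeros(1 + n)
    c[0] = 1.0                                   # minimise theta
    # inputs:  X @ lam <= theta * X[:, o]   ->  X @ lam - theta * x_o <= 0
    A_in = np.hstack([-X[:, [o]], X])
    b_in = np.zeros(m)
    # outputs: Y @ lam >= Y[:, o]           ->  -Y @ lam <= -y_o
    A_out = np.hstack([np.zeros((s, 1)), -Y])
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# four hypothetical decision-making units, one input, one output
X = np.array([[2.0, 3.0, 4.0, 5.0]])
Y = np.array([[2.0, 4.5, 4.0, 5.0]])
effs = [round(ccr_efficiency(X, Y, o), 3) for o in range(4)]
print(effs)
```

Unit 2, with the best output-to-input ratio, comes out efficient (score 1), and the others are scored relative to it, which is the essence of the Charnes et al. formulation.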
Abstract:
With the advent of globalisation, companies all around the world must improve their performance in order to survive. The threats come from everywhere and in different forms, such as low-cost products, high-quality products, new technologies, and new products. Different companies in different countries use various techniques and quality criteria items to strive for excellence. Continuous improvement techniques are used to enable companies to improve their operations. Companies therefore use techniques such as TQM, Kaizen, Six Sigma, and Lean Manufacturing, and quality award criteria items such as Customer Focus, Human Resources, Information & Analysis, and Process Management. The purpose of this paper is to compare the use of these techniques and criteria items in two countries, Mexico and the United Kingdom, which differ in culture and industrial structure. In terms of the use of continuous improvement tools and techniques, Mexico formally started to deal with continuous improvement by creating its National Quality Award soon after the Americans began the Malcolm Baldrige National Quality Award. The United Kingdom formally started by using the European Quality Award (EQA), later modified and renamed the EFQM Excellence Model. The methodology used in this study was to undertake a literature review of the subject matter and to study some general applications around the world. A questionnaire survey was then designed and undertaken based on the same scale, about the same sample size, and about the same industrial sector within the two countries. The survey presents a brief definition of each of the constructs to facilitate understanding of the questions. The analysis of the data was then conducted with the assistance of a statistical software package. The survey results indicate both similarities and differences in the strengths and weaknesses of the companies in the two countries.
One outcome of the analysis is that it enables the companies to use the results to benchmark themselves and thus act to reinforce their strengths and to reduce their weaknesses.
Abstract:
Recent discussion of the knowledge-based economy draws increasing attention to the role that the creation and management of knowledge plays in economic development. Development of human capital, the principal mechanism for knowledge creation and management, becomes a central issue for policy-makers and practitioners at the regional, as well as national, level. Facing competition both within and across nations, regional policy-makers view human capital development as a key to strengthening the positions of their economies in the global market. Against this background, the aim of this study is to go some way towards answering the question of whether, and how, investment in education and vocational training at regional level provides these territorial units with comparative advantages. The study reviews literature in economics and economic geography on economic growth (Chapter 2). In growth model literature, human capital has gained increased recognition as a key production factor along with physical capital and labour. Although leaving technical progress as an exogenous factor, neoclassical Solow-Swan models have improved their estimates through the inclusion of human capital. In contrast, endogenous growth models place investment in research at centre stage in accounting for technical progress. As a result, they often focus upon research workers, who embody high-order human capital, as a key variable in their framework. An issue of discussion is how human capital facilitates economic growth: is it the level of its stock or its accumulation that influences the rate of growth? In addition, these economic models are criticised in economic geography literature for their failure to consider spatial aspects of economic development, and particularly for their lack of attention to tacit knowledge and urban environments that facilitate the exchange of such knowledge.
Our empirical analysis of European regions (Chapter 3) shows that investment by individuals in human capital formation has distinct patterns. Those regions with a higher level of investment in tertiary education tend to have a larger concentration of information and communication technology (ICT) sectors (including provision of ICT services and manufacture of ICT devices and equipment) and research functions. Not surprisingly, regions with major metropolitan areas where higher education institutions are located show a high enrolment rate for tertiary education, suggesting a possible link to the demand from high-order corporate functions located there. Furthermore, the rate of human capital development (at the level of vocational type of upper secondary education) appears to have significant association with the level of entrepreneurship in emerging industries such as ICT-related services and ICT manufacturing, whereas such association is not found with traditional manufacturing industries. In general, a high level of investment by individuals in tertiary education is found in those regions that accommodate high-tech industries and high-order corporate functions such as research and development (R&D). These functions are supported through the urban infrastructure and public science base, facilitating exchange of tacit knowledge. They also enjoy a low unemployment rate. However, the existing stock of human and physical capital in those regions with a high level of urban infrastructure does not lead to a high rate of economic growth. Our empirical analysis demonstrates that the rate of economic growth is determined by the accumulation of human and physical capital, not by level of their existing stocks. We found no significant effects of scale that would favour those regions with a larger stock of human capital. 
The primary policy implication of our study is that, in order to facilitate economic growth, education and training need to supply human capital at a faster pace than simply replenishing it as it disappears from the labour market. Given the significant impact of high-order human capital (such as business R&D staff in our case study) as well as the increasingly fast pace of technological change that makes human capital obsolete, a concerted effort needs to be made to facilitate its continuous development.
Abstract:
This paper examines the impact of innovation on the performance of US business service firms. We distinguish between different levels of innovation (new-to-market and new-to-firm) in our analysis, and allow explicitly for sample selection issues. Reflecting the literature, which highlights the importance of external interaction in service innovation, we pay particular attention to the role of external innovation linkages and their effect on business performance. We find that the presence of service innovation and its extent has a consistently positive effect on growth, but no effect on productivity. There is evidence that the growth effect of innovation can be attributed, at least in part, to the external linkages maintained by innovators in the process of innovation. External linkages have an overwhelmingly positive effect on (innovator) firm performance, regardless of whether innovation is measured as a discrete or continuous variable, and regardless of the level of innovation considered.
Abstract:
Ten cases of neuronal intermediate filament inclusion disease (NIFID) were studied quantitatively. The α-internexin positive neurofilament inclusions (NI) were most abundant in the motor cortex and CA sectors of the hippocampus. The densities of the NI and the swollen achromatic neurons (SN) were similar in laminae II/III and V/VI but glial cell density was greater in V/VI. The density of the NI was positively correlated with the SN and the glial cells. Principal components analysis (PCA) suggested that PC1 was associated with variation in neuronal loss in the frontal/temporal lobes and PC2 with neuronal loss in the frontal lobe and NI density in the parahippocampal gyrus. The data suggest: 1) frontal and temporal lobe degeneration in NIFID is associated with the widespread formation of NI and SN, 2) NI and SN affect cortical laminae II/III and V/VI, 3) the NI and SN affect closely related neuronal populations, and 4) variations in neuronal loss and in the density of NI were the most important sources of pathological heterogeneity. © Springer-Verlag 2005.
Abstract:
Respiration is a complex activity. If the relationship between all neurological and skeletomuscular interactions were perfectly understood, an accurate dynamic model of the respiratory system could be developed and the interaction between different inputs and outputs could be investigated in a straightforward fashion. Unfortunately, this is not the case and does not appear to be viable at this time. In addition, the provision of appropriate sensor signals for such a model would be a considerably invasive task. Useful quantitative information with respect to respiratory performance can be gained from non-invasive monitoring of chest and abdomen motion. Currently available devices are not well suited to spirometric measurement for ambulatory monitoring. A sensor matrix measurement technique is investigated to identify suitable sensing elements on which to base an upper-body surface measurement device that monitors respiration. This thesis is divided into two main areas of investigation: model-based and geometrically based surface plethysmography. In the first instance, Chapter 2 deals with an array of tactile sensors used as a progression of existing and previously investigated volumetric measurement schemes based on models of respiration. Chapter 3 details a non-model-based geometrical approach to surface (and hence volumetric) profile measurement. Later sections of the thesis concentrate upon the development of a functioning prototype sensor array. To broaden the application area, the study has been conducted as it would be for a generically configured sensor array. In experimental form, the system's performance on group estimation compares favourably with that of existing systems on volumetric performance. In addition, it provides continuous transient measurement of respiratory motion within an acceptable accuracy using approximately 20 sensing elements.
Because of the potential size and complexity of the system, it is possible to deploy it as a fully mobile ambulatory monitoring device, which may be used outside the laboratory. It provides a means by which to isolate coupled physiological functions and thus allows individual contributions to be analysed separately, facilitating greater understanding of respiratory physiology and diagnostic capabilities. The outcome of the study is the basis for a three-dimensional surface contour sensing system that is suitable for respiratory function monitoring and has the prospect, with future development, of being incorporated into a garment-based clinical tool.
Abstract:
Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace- and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is very often difficult. In this thesis, methods for solving some of these queueing problems by use of digital signal processing techniques are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
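One standard numerical route for such inversions samples the transform on a circle inside its region of convergence and applies a discrete Fourier transform (a Cauchy-integral method). The sketch below is illustrative only: it inverts the simple M/M/1 queue-length generating function, whose coefficients are known in closed form, rather than the transform derived in the thesis.

```python
import numpy as np

def invert_pgf(P, n_terms, N=1024, r=0.9):
    """Numerically recover p_0..p_{n_terms-1} from a probability
    generating function P(z): sample P on a circle of radius r and
    apply a discrete Fourier transform (Cauchy-integral inversion)."""
    z = r * np.exp(2j * np.pi * np.arange(N) / N)
    coeffs = np.fft.fft(P(z)).real / N         # DFT picks out p_n * r**n
    return coeffs[:n_terms] / r ** np.arange(n_terms)

# M/M/1 queue-length distribution: P(z) = (1 - rho) / (1 - rho * z),
# whose exact coefficients are the geometric terms (1 - rho) * rho**n
rho = 0.6
p = invert_pgf(lambda z: (1 - rho) / (1 - rho * z), 5)
exact = (1 - rho) * rho ** np.arange(5)
print(np.abs(p - exact).max())
```

Choosing r < 1 controls the aliasing error (which decays like (rho * r)**N here), the same accuracy trade-off that makes inversion of more complicated queueing transforms delicate.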
Abstract:
The densities of Pick bodies (PB), Pick cells (PC), senile plaques (SP) and neurofibrillary tangles (NFT) in the frontal and temporal lobe were determined in ten patients diagnosed with Pick's disease (PD). The density of PB was significantly higher in the dentate gyrus granule cells compared with the cortex and the CA sectors of the hippocampus. Within the hippocampus, the highest densities of PB were observed in sector CA1. PC were absent in the dentate gyrus and no significant differences in PC density were observed in the remaining brain regions. With the exception of two patients, the densities of SP and NFT were low with no significant differences in mean densities between cortical regions. In the hippocampus, the density of NFT was greatest in sector CA1. PB and PC densities were positively correlated in the frontal cortex but no correlations were observed between the PD and AD lesions. A principal components analysis (PCA) of the neuropathological variables suggested that variations in the densities of SP in the frontal cortex, temporal cortex and hippocampus were the most important sources of heterogeneity within the patient group. Variations in the densities of PB and NFT in the temporal cortex and hippocampus were of secondary importance. In addition, the PCA suggested that two of the ten patients were atypical. One patient had a higher than average density of SP and one familial patient had a higher density of NFT but few SP.