24 results for Simplified and advanced calculation methods
in Aston University Research Archive
Abstract:
Sol-gel-synthesized bioactive glasses may be formed via a hydrolysis-condensation reaction, with silica introduced in the form of tetraethyl orthosilicate (TEOS) and calcium typically added in the form of calcium nitrate. The synthesis reaction proceeds in an aqueous environment; the resultant gel is dried before stabilization by heat treatment. These materials, being amorphous, are complex at the level of their atomic-scale structure, and their bulk properties may only be properly understood on the basis of that structural insight. Thus, a full understanding of their structure-property relationship may only be achieved through the application of a coherent suite of leading-edge experimental probes, coupled with the cogent use of advanced computer simulation methods. Using as an exemplar a calcia-silica sol-gel glass of the kind developed by Larry Hench, to whose memory this paper is dedicated, we illustrate the successful use of high-energy X-ray and neutron scattering (diffraction) methods, magic-angle spinning solid-state NMR, and molecular dynamics simulation as components of a powerful methodology for the study of amorphous materials.
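For context, the hydrolysis-condensation route mentioned above follows standard TEOS sol-gel chemistry; the reactions below are textbook forms, not taken from the paper itself:

$$\mathrm{Si(OC_2H_5)_4 + 4\,H_2O \;\rightarrow\; Si(OH)_4 + 4\,C_2H_5OH}$$
$$\mathrm{{\equiv}Si{-}OH + HO{-}Si{\equiv} \;\rightarrow\; {\equiv}Si{-}O{-}Si{\equiv} + H_2O}$$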
Abstract:
The use of Diagnosis Related Groups (DRGs) as a mechanism for hospital financing is a currently debated topic in Portugal. The DRG system was scheduled to be initiated by the Health Ministry of Portugal on January 1, 1990 as an instrument for the allocation of public hospital budgets funded by the National Health Service (NHS), and as a method of payment for other third-party payers (e.g., Public Employees (ADSE), private insurers, etc.). Based on experience from other countries such as the United States, it was expected that implementation of this system would result in more efficient hospital resource utilisation and a more equitable distribution of hospital budgets. However, in order to minimise the potentially adverse financial impact on hospitals, the Portuguese Health Ministry decided to phase in the use of the DRG system for budget allocation gradually, by using blended hospital-specific and national DRG case-mix rates. Since implementation in 1990, the percentage of each hospital's budget based on hospital-specific costs was to decrease, while the percentage based on DRG case-mix was to increase. This was scheduled to continue until 1995, when the plan called for allocating yearly budgets on a 50% national and 50% hospital-specific cost basis. While all other non-NHS third-party payers are currently paying based on DRGs, the adoption of DRG case-mix as a National Health Service budget-setting tool has been slower than anticipated. There is now some argument in both the political and academic communities as to the appropriateness of DRGs as a budget-setting criterion as well as to their impact on hospital efficiency in Portugal. This paper uses a two-stage procedure to assess the impact of actual DRG payment on the productivity (through its components, i.e., technological change and technical efficiency change) of diagnostic technology in Portuguese hospitals during the years 1992–1994, using both parametric and non-parametric frontier models. We find evidence that the DRG payment system does appear to have had a positive impact on the productivity and technical efficiency of some commonly employed diagnostic technologies in Portugal during this time span.
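The productivity decomposition referred to above is conventionally expressed as a Malmquist index; the abstract does not name the index, so the standard output-oriented formulation below is an assumption about the paper's setup:

$$M_0 = \underbrace{\frac{D_0^{t+1}(x^{t+1},y^{t+1})}{D_0^{t}(x^{t},y^{t})}}_{\text{technical efficiency change}} \times \underbrace{\left[\frac{D_0^{t}(x^{t+1},y^{t+1})}{D_0^{t+1}(x^{t+1},y^{t+1})}\cdot\frac{D_0^{t}(x^{t},y^{t})}{D_0^{t+1}(x^{t},y^{t})}\right]^{1/2}}_{\text{technological change}}$$

where $D_0^t$ denotes the output distance function measured against the period-$t$ frontier.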
Abstract:
This thesis presents a theoretical investigation of applications of the Raman effect in optical fibre communication, as well as the design and optimisation of various Raman-based devices and transmission schemes. The techniques used are mainly based on numerical modelling. The results presented in this thesis are divided into three main parts. First, novel designs of Raman fibre lasers (RFLs) based on phosphosilicate-core fibre are analysed and optimised for efficiency using a discrete power balance model. The designs include a two-stage RFL based on phosphosilicate-core fibre for telecommunication applications, a composite RFL for the 1.6 μm spectral window, and a multiple-output-wavelength RFL intended as a compact pump source for flat-gain Raman amplifiers. The use of phosphosilicate-core fibre is proven to effectively reduce the design complexity and hence leads to better efficiency, stability and potentially lower cost. Second, a generalised Raman amplified gain model approach based on power balance analysis and direct numerical simulation is developed. The approach can be used to effectively simulate optical transmission systems with distributed Raman amplification. Last, the potential employment of a hybrid amplification scheme, which is a combination of a distributed Raman amplifier and an Erbium-doped amplifier, is investigated using the generalised Raman amplified gain model. The analysis focuses on the use of the scheme to upgrade a standard fibre network to a 40 Gb/s system.
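As an illustration of the power-balance modelling mentioned above, here is a minimal sketch of a co-propagating two-wave (single pump, single signal) Raman model; the two-wave form, parameter values and fibre length are illustrative assumptions, not the thesis's actual model:

```python
# Minimal sketch of a co-propagating two-wave Raman power-balance model.
# Parameter values, the two-wave form, and the fibre length are illustrative
# assumptions, not taken from the thesis.
import numpy as np
from scipy.integrate import solve_ivp

alpha_p = 0.069             # pump attenuation, 1/km (~0.3 dB/km, assumed)
alpha_s = 0.046             # signal attenuation, 1/km (~0.2 dB/km, assumed)
g_R = 0.7                   # Raman gain efficiency, 1/(W km), assumed
nu_ratio = 1550.0 / 1455.0  # photon-energy ratio nu_p/nu_s = lambda_s/lambda_p

def power_balance(z, P):
    Pp, Ps = P
    dPp = -alpha_p * Pp - nu_ratio * g_R * Pp * Ps  # pump loss + depletion
    dPs = -alpha_s * Ps + g_R * Pp * Ps             # signal loss + Raman gain
    return [dPp, dPs]

L = 40.0                                                # span length, km
sol = solve_ivp(power_balance, (0.0, L), [0.5, 1e-4])   # 500 mW pump, 0.1 mW signal
Ps_out = sol.y[1, -1]
Ps_passive = 1e-4 * np.exp(-alpha_s * L)                # signal without pumping
print(f"On/off Raman gain over {L:.0f} km: "
      f"{10 * np.log10(Ps_out / Ps_passive):.1f} dB")
```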
Abstract:
The available literature concerning dextransucrase and dextran production and purification has been reviewed, along with the reaction mechanisms of the enzyme. A discussion of basic fermentation theory is included, together with a brief description of bioreactor hydrodynamics and general biotechnology. The various fermenters used in this research work are described in detail, along with the various experimental techniques employed. The micro-organism Leuconostoc mesenteroides NRRL B512 (F) secretes dextransucrase in the presence of an inducer, sucrose, this being the only known inducer of the enzyme. Dextransucrase is a growth-related product, and a series of fed-batch fermentations have been carried out to extend the exponential growth phase of the organism. These experiments were carried out in a number of different-sized vessels, ranging from 2.5 to 1,000 litres. Using a 16 litre vessel, dextransucrase activities in excess of 450 DSU/cm³ (21.67 U/cm³) have been obtained under non-aerated conditions. It has also been possible to achieve 442 DSU/cm³ (21.28 U/cm³) using the 1,000 litre vessel, although this has not been done consistently. A 1 litre and a 2.5 litre vessel were used for the continuous fermentations of dextransucrase. The 2.5 litre vessel was a very sophisticated MBR MiniBioreactor and was used for the majority of continuous fermentations carried out. An enzyme activity of approximately 108 DSU/cm³ (5.20 U/cm³) was achieved at a dilution rate of 0.50 h⁻¹, which corresponds to the maximum growth rate of the cells under the process conditions. A number of continuous fermentations were operated for prolonged periods of time, with experimental run-times of up to 389 h being recorded without any incidence of contamination. The phenomenon of enzyme enhancement of up to 100% on hold-up was also noted during these fermentations, with dextransucrase of activity 89.7 DSU/cm³ (4.32 U/cm³) being boosted to 155.7 DSU/cm³ (7.50 U/cm³) following 24 hours of hold-up. These findings support the recommendation that a second reactor be placed in series with the existing vessel.
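The link drawn above between dilution rate and maximum growth rate follows from the standard chemostat biomass balance (a textbook result, not specific to this thesis): at steady state,

$$\frac{dX}{dt} = (\mu - D)\,X = 0 \;\Rightarrow\; \mu = D, \qquad \mu = \mu_{\max}\,\frac{S}{K_s + S},$$

so operating at $D = 0.50\ \mathrm{h^{-1}} \approx \mu_{\max}$ places the culture at the edge of washout, consistent with the maximum-growth-rate interpretation given above.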
Abstract:
The atomic-scale structure of sodium borophosphates made by the sol-gel method is compared with that of analogues made by the melt-quench method. It is found that although the sol-gel-generated materials have a higher tendency towards crystallization, they nevertheless show a qualitatively similar crystallization trend with composition to their melt-quench analogues; the progressive introduction of boron oxide into the phosphate network initially inhibits, then promotes, crystallization. At the composition associated with the most stable amorphous sodium borophosphate (20 mol% boron oxide), it is found that the atomic-scale structure of the sol-gel-synthesized network glass is almost identical to that of the corresponding melt-quenched one.
Abstract:
The research presented in this paper is part of an ongoing investigation into how best to support meaningful lab-based usability evaluations of mobile technologies. In particular, we report on a comparative study of (a) a standard paper prototype of a mobile application, used to perform an early-phase seated (static) usability evaluation, and (b) a pseudo-paper prototype created from the paper prototype, used to perform an early-phase, contextually-relevant, mobile usability evaluation. We draw some initial conclusions regarding whether it is worth the added effort of conducting a usability evaluation of a pseudo-paper prototype in a contextually-relevant setting during early-phase user interface development.
Abstract:
Remote sensing data are routinely used in ecology to investigate the relationship between landscape pattern, as characterised by land use and land cover maps, and ecological processes. Multiple factors related to the representation of geographic phenomena have been shown to affect the characterisation of landscape pattern, resulting in spatial uncertainty. This study statistically investigated the effect of the interaction between landscape spatial pattern and geospatial processing methods, unlike most papers, which consider the effect of each factor in isolation only. This is important since the data used to calculate landscape metrics typically undergo a series of data abstraction processing tasks which are rarely performed in isolation. The geospatial processing methods tested were the aggregation method and the choice of pixel size used to aggregate data. These were compared with two components of landscape pattern: spatial heterogeneity and the proportion of land-cover class area. The interactions and their effect on the final land-cover map were described using landscape metrics to measure landscape pattern and classification accuracy (response variables). All landscape metrics and classification accuracy were shown to be affected both by landscape pattern and by processing methods. Large variability in the response of those variables, and interactions between the explanatory variables, were observed. However, even though interactions occurred, this only affected the magnitude of the difference in landscape metric values. Thus, provided that the same processing methods are used, landscapes should retain their ranking when their landscape metrics are compared. For example, highly fragmented landscapes will always have larger values for the landscape metric "number of patches" than less fragmented landscapes. But the magnitude of difference between the landscapes may change, and therefore absolute values of landscape metrics may need to be interpreted with caution. The explanatory variables which had the largest effects were spatial heterogeneity and pixel size; these tended to result in large main effects and large interactions. The high variability in the response variables and the interaction of the explanatory variables indicate that it would be difficult to make generalisations about the impact of processing on landscape pattern, as only two processing methods were tested and it is likely that untested processing methods will result in even greater spatial uncertainty. © 2013 Elsevier B.V.
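As an illustration of the "number of patches" metric cited above, a minimal sketch using connected-component labelling on a toy categorical raster (the array and the four-connectivity choice are assumptions for illustration, not the paper's actual processing chain):

```python
# Minimal sketch: counting "number of patches" for one land-cover class
# via connected-component labelling (toy raster; values are assumptions).
import numpy as np
from scipy import ndimage

landcover = np.array([
    [1, 1, 0, 2],
    [1, 0, 0, 2],
    [0, 0, 1, 1],
    [2, 2, 1, 1],
])

mask = landcover == 1                    # pixels of the class of interest
labels, n_patches = ndimage.label(mask)  # default: 4-connectivity (rook's case)
print(n_patches)                         # -> 2 distinct patches of class 1
```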
Abstract:
Biological experiments often produce enormous amounts of data, which are usually analyzed by data clustering. Cluster analysis refers to statistical methods that are used to assign data with similar properties into several smaller, more meaningful groups. Two commonly used clustering techniques are introduced in the following section: principal component analysis (PCA) and hierarchical clustering. PCA calculates the variance between variables and groups them into a few uncorrelated groups, or principal components (PCs), that are orthogonal to each other. Hierarchical clustering is carried out by separating data into many clusters and merging similar clusters together. Here, we use the example of human leukocyte antigen (HLA) supertype classification to demonstrate the use of the two methods. Two programs, Generating Optimal Linear Partial Least Square Estimations (GOLPE) and Sybyl, are used for PCA and hierarchical clustering, respectively. However, the reader should bear in mind that these methods have been incorporated into other software as well, such as SIMCA, statistiXL, and R.
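The abstract names GOLPE and Sybyl, but the same two analyses can be sketched with widely available libraries; the snippet below is a generic illustration on random toy data, not the HLA dataset:

```python
# Minimal sketch of the two analyses described above: PCA followed by
# agglomerative hierarchical clustering (random toy data, not the HLA set).
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 10))           # 40 samples, 10 variables (toy data)

# PCA: project the data onto a few orthogonal, uncorrelated components.
pcs = PCA(n_components=3).fit_transform(X)

# Hierarchical clustering: repeatedly merge the most similar clusters.
Z = linkage(pcs, method="average")      # agglomerative, average linkage
groups = fcluster(Z, t=4, criterion="maxclust")  # cut the tree into 4 groups
print(groups)
```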
Abstract:
The traffic carried by core optical networks grows at a steady but remarkable pace of 30-40% year-over-year. Optical transmission and networking advancements continue to satisfy the traffic requirements by delivering the content over the network infrastructure in a cost- and energy-efficient manner. Such core optical networks serve the information traffic demands in a dynamic way, in response to shifting traffic demands, both temporally (day/night) and spatially (business district/residential). However, as we approach the fundamental spectral efficiency limits of single-mode fibers, the scientific community has recently been pursuing the development of an innovative, all-optical network architecture introducing the spatial degree of freedom when designing/operating future transport networks. Space-division multiplexing, through the use of bundled single-mode fibers, multi-core fibers and/or few-mode fibers, can offer up to a 100-fold capacity increase in future optical networks. The EU INSPACE project is working on the development of a complete spatial-spectral flexible optical networking solution, offering the network the ultra-high capacity, flexibility and energy efficiency required to meet the challenge of delivering the exponentially growing traffic demands of the internet over the next twenty years. In this paper we present the motivation and main research activities of the INSPACE consortium towards the realization of the overall project solution. © 2014 SPIE.
Abstract:
Adaptive critic methods share common roots as generalizations of dynamic programming for neural reinforcement learning approaches. Since they approximate the dynamic programming solutions, they are potentially suitable for learning in noisy, nonlinear and nonstationary environments. In this study, a novel probabilistic dual heuristic programming (DHP) based adaptive critic controller is proposed. In contrast to current approaches, the proposed probabilistic DHP adaptive critic method takes the uncertainties of the forward model and inverse controller into consideration. It is therefore suitable for deterministic and stochastic control problems characterized by functional uncertainty. The theoretical development of the proposed method is validated by analytically evaluating the correct value of the cost function which satisfies the Bellman equation in a linear quadratic control problem. The target value of the critic network is then calculated and shown to be equal to the analytically derived correct value.
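For the linear quadratic benchmark mentioned above, the analytic reference is standard; the discrete-time form below is an assumption about the paper's exact setup. With dynamics $x_{k+1} = A x_k + B u_k$ and stage cost $x_k^\top Q x_k + u_k^\top R u_k$, the Bellman equation is solved by a quadratic value function:

$$V(x_k) = x_k^\top P x_k, \qquad P = Q + A^\top P A - A^\top P B\,(R + B^\top P B)^{-1} B^\top P A,$$

and a DHP critic approximates its gradient (the costate), $\lambda(x_k) = \partial V/\partial x_k = 2 P x_k$, which is the kind of analytically derived target value referred to above.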
Abstract:
The work underlying this thesis focused on the exploitation and investigation of photosensitivity mechanisms in optical fibres and planar waveguides for the fabrication of advanced integrated optical devices for telecoms and sensing applications. One major aim is the improvement of grating fabrication specifications by introducing new writing techniques and using advanced characterisation methods for grating testing. For the first time, the polarisation control method for advanced grating fabrication has been successfully converted to apodised planar waveguide fabrication, and the development of a holographic method for the inscription of chirped gratings at arbitrary wavelength is presented. The latter resulted in the fabrication of gratings for pulse-width suppression and wavelength selection in diode lasers. In co-operation with research partners, a number of samples were tested using optical frequency domain and optical low coherence reflectometry for better insight into the limitations of grating writing techniques. Using a variety of different fabrication methods, custom apodised and chirped fibre Bragg gratings were written for use as filter elements for multiplexer-demultiplexer devices, as well as for short-pulse generation and wavelength selection in telecommunication transmission systems. Long-period grating based devices in standard, speciality and tapered fibres are presented, showing great potential for multi-parameter sensing. One particular focus is the development of vectorial curvature and refractive index sensors with potential for medical, chemical and biological sensing. In addition, the design of an optically tunable Mach-Zehnder based multiwavelength filter is introduced. The discovery of a Type IA grating type through overexposure of hydrogen-loaded standard and Boron-Germanium co-doped fibres strengthened the assumption that UV photosensitivity is a highly non-linear process. Gratings of this type show a significantly lower thermal sensitivity compared with standard gratings, which makes them useful for sensing applications. An Oxford Lasers copper-vapour laser operating at 255 nm in pulsed mode was used for their inscription, in contrast to previous work using CW Argon-ion lasers, contributing to differences in the processes of the photorefractive index change.
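For orientation, all of the Bragg gratings discussed above reflect at the wavelength set by the textbook Bragg condition (included here for context, not taken from the thesis):

$$\lambda_B = 2\, n_{\mathrm{eff}}\, \Lambda,$$

where $\Lambda$ is the grating period and $n_{\mathrm{eff}}$ the effective mode index; chirping $\Lambda(z)$ along the fibre, as in the chirped gratings above, spreads the reflection band across wavelength.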
Abstract:
OBJECTIVE: To assess the effect of using different risk calculation tools on how general practitioners and practice nurses evaluate the risk of coronary heart disease with clinical data routinely available in patients' records. DESIGN: Subjective estimates of the risk of coronary heart disease and the results of four different methods of risk calculation were compared with each other and with a reference standard calculated with the Framingham equation; calculations were based on a sample of patients' records, randomly selected from groups at risk of coronary heart disease. SETTING: General practices in central England. PARTICIPANTS: 18 general practitioners and 18 practice nurses. MAIN OUTCOME MEASURES: Agreement of the results of risk estimation and risk calculation with the reference calculation; agreement of general practitioners with practice nurses; sensitivity and specificity of the different methods of risk calculation in detecting patients at high or low risk of coronary heart disease. RESULTS: Only a minority of patients' records contained all of the risk factors required for the formal calculation of the risk of coronary heart disease (concentrations of high density lipoprotein (HDL) cholesterol were present in only 21%). Agreement of risk calculations with the reference standard was moderate (kappa = 0.33-0.65 for practice nurses and kappa = 0.33-0.65 for general practitioners, depending on the calculation tool), showing a trend towards underestimation of risk. Moderate agreement was seen between the risks calculated by general practitioners and practice nurses for the same patients (kappa = 0.47-0.58). The British charts gave the most sensitive results for risk of coronary heart disease (practice nurses 79%, general practitioners 80%); they also gave the most specific results for practice nurses (100%), whereas the Sheffield table was the most specific method for general practitioners (89%). CONCLUSIONS: Routine calculation of the risk of coronary heart disease in primary care is hampered by the poor availability of data on risk factors. General practitioners and practice nurses are able to evaluate the risk of coronary heart disease with only moderate accuracy. Data about risk factors need to be collected systematically, to allow the use of the most appropriate calculation tools.
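The agreement statistic reported above is Cohen's kappa; a minimal sketch of its calculation on toy ratings (not the study's data) is below:

```python
# Minimal sketch: Cohen's kappa, the chance-corrected agreement statistic
# reported in the abstract above (toy ratings, not the study's data).
from sklearn.metrics import cohen_kappa_score

gp    = ["high", "low", "high", "low", "high", "low", "low",  "high"]
nurse = ["high", "low", "low",  "low", "high", "low", "high", "high"]

print(cohen_kappa_score(gp, nurse))  # -> 0.5 (moderate agreement)
```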
Abstract:
Purpose: To determine the critical fitting characteristics of modern soft contact lens fits and, from this, to devise a simplified recording scheme. Methods: Ten subjects (aged 28.1 ± 7.4 years) wore eight different modern soft contact lenses. Video of blink (central and up-gaze), excursion lag (up, down, right and left gaze), push-up movement, centration and coverage was captured and analysed. Results: Lens centration was on average close to the corneal centre. Movement on blink was significantly smaller in up-gaze than in primary gaze (p<0.001). Lag was greatest in down-gaze and least in up-gaze (p<0.001). Push-up test recovery speed was 1.32 ± 0.73 mm/s. Overall lens movement was determined best by assessing horizontal lag, movement on blink in up-gaze, and push-up recovery speed. Steeper lens base-curves did not have a significant effect on lens fit characteristics. Contact lens material did influence lens fit characteristics, particularly silicone hydrogels, which generally had lower centration and a faster push-up recovery speed than HEMA lenses (p<0.05). Conclusion: Lag on vertical gaze and movement on blink in primary gaze generally provide little extra information on overall lens movement compared with horizontal lag, movement on blink in up-gaze and push-up recovery speed; they can therefore be excluded from a simplified recording scheme. A simplified and comprehensive soft contact lens fit recording system could consist of cross-hairs indicating the centre of the cornea; a circle to indicate the lens centration; a mark on the relevant position of the circle to indicate any limbal incursion; a grade ('B') below for movement with blink in up-gaze; a grade ('L') to the side for horizontal lag; and a grade above ('P') for the assessed push-up recovery speed.
Abstract:
The timeline imposed by recent worldwide chemical legislation is not amenable to conventional in vivo toxicity testing, requiring the development of rapid, economical in vitro screening strategies which have acceptable predictive capacities. When acquiring regulatory neurotoxicity data, the ability to distinguish whether a toxic agent affects neurons and/or astrocytes is essential. This study evaluated neurofilament (NF) and glial fibrillary acidic protein (GFAP) directed single-cell (S-C) ELISA and flow cytometry as methods for distinguishing cell-specific cytoskeletal responses, using the established human NT2 neuronal/astrocytic (NT2.N/A) co-culture model and a range of neurotoxic (acrylamide, atropine, caffeine, chloroquine, nicotine) and non-neurotoxic (chloramphenicol, rifampicin, verapamil) test chemicals. NF and GFAP directed flow cytometry was able to identify several of the test chemicals as being specifically neurotoxic (chloroquine, nicotine) or astrocytoxic (atropine, chloramphenicol) via quantification of cell death in the NT2.N/A model at cytotoxic concentrations using the resazurin cytotoxicity assay. Those neurotoxicants with low associated cytotoxicity are the most significant in terms of potential hazard to the human nervous system. The NF and GFAP directed S-C ELISA data predominantly demonstrated that the known neurotoxicants only affect the neuronal and/or astrocytic cytoskeleton in the NT2.N/A cell model at concentrations below those affecting cell viability. This report concluded that NF and GFAP directed S-C ELISA and flow cytometric methods may prove to be valuable additions to an in vitro screening strategy for differentiating cytotoxicity from specific neuronal and/or astrocytic toxicity. Further work using the NT2.N/A model and a broader array of toxicants is appropriate in order to confirm the applicability of these methods.