25 results for convergence of numerical methods
in Aston University Research Archive
Abstract:
Due to copyright restrictions, only available for consultation at Aston University Library and Information Services with prior arrangement.
Abstract:
We demonstrate an accurate BER estimation method for QPSK CO-OFDM transmission based on the probability density function of the received QPSK symbols. Using a 112 Gb/s QPSK CO-OFDM transmission as an example, we show that this method offers the most accurate estimate of the system's performance in comparison with other known approaches.
Abstract:
Purpose – To propose and investigate a stable numerical procedure for the reconstruction of the velocity of a viscous incompressible fluid flow in linear hydrodynamics from knowledge of the velocity and fluid stress force given on a part of the boundary of a bounded domain. Design/methodology/approach – Earlier works have addressed a similar problem for the stationary case (time-independent fluid flow). Extending these ideas, a procedure is proposed and investigated for the time-dependent case as well. Findings – The paper presents a novel variational method for the Cauchy problem. It proves convergence and also proposes a new boundary element method. Research limitations/implications – The fluid flow domain is limited to annular domains; this restriction can be removed by undertaking analyses in appropriate weighted spaces to incorporate singularities that can occur on general bounded domains. Future work involves numerical investigations and consideration of Oseen-type flow. A challenging problem is to consider the non-linear Navier-Stokes equations. Practical implications – Fluid flow problems where data are known only on a part of the boundary occur in a range of engineering situations such as colloidal suspensions and the swimming of microorganisms. For example, the solution domain can be the region between two spheres where only the outer sphere is accessible for measurements. Originality/value – A novel variational method for the Cauchy problem is proposed which preserves the unsteady Stokes operator; convergence is proved and, using recent results for the fundamental solution of the unsteady Stokes system, a new boundary element method for this system is also proposed.
Abstract:
In this work, we introduce the periodic nonlinear Fourier transform (PNFT) method as an alternative and efficacious tool for compensation of nonlinear transmission effects in optical fiber links. In Part I, we introduce the algorithmic platform of the technique, describing in detail the direct and inverse PNFT operations, also known as the inverse scattering transform for the periodic (in the time variable) nonlinear Schrödinger equation (NLSE). We pay special attention to explaining the potential advantages of PNFT-based processing over the previously studied nonlinear Fourier transform (NFT) based methods. Further, we elucidate the issue of numerical PNFT computation: we compare the performance of four known numerical methods applicable to the calculation of the nonlinear spectral data (the direct PNFT), in particular taking the main spectrum (utilized further in Part II for modulation and transmission) associated with some simple example waveforms as the quality indicator for each method. We show that the Ablowitz-Ladik discretization approach for the direct PNFT provides the best performance in terms of accuracy and computation time.
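The role of the main spectrum can be made concrete with a toy direct-PNFT step: build the one-period monodromy matrix from Ablowitz-Ladik transfer matrices and evaluate the Floquet discriminant, whose values of ±1 locate main-spectrum points. This is a minimal sketch assuming the standard focusing-NLSE sign convention and a simple one-step transfer matrix; it is not the paper's implementation.

```python
import cmath

def _mul(a, b):
    """2x2 complex matrix product a @ b."""
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def floquet_discriminant(q_samples, h, lam):
    """Half-trace of the Ablowitz-Ladik monodromy matrix over one period of
    the waveform q (sampled with spacing h) at spectral parameter lam.
    Main-spectrum points are the lam where this equals +1 or -1."""
    z = cmath.exp(-1j * lam * h)
    m = [[1.0 + 0j, 0j], [0j, 1.0 + 0j]]   # identity; accumulates the product
    for q in q_samples:
        Q = h * q                          # AL discretization of the potential
        m = _mul([[z, Q], [-Q.conjugate(), 1 / z]], m)
    return (m[0][0] + m[1][1]) / 2
```

For the zero waveform the monodromy is diagonal with entries z^N and z^-N, so the discriminant reduces to cos(lam * N * h), a handy sanity check for any implementation.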
Abstract:
This retrospective study was designed to investigate the factors that influence performance in examinations composed of multiple-choice questions (MCQs), short-answer questions (SAQs), and essay questions in an undergraduate population. Final-year optometry degree examination marks were analyzed for two separate cohorts. Direct comparison found that students performed better in MCQs than in essays. However, forward stepwise regression analysis of module marks against the overall score showed that MCQs were the least influential, and the essay or SAQ mark was a more reliable predictor of overall grade. This has implications for examination design.
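The kind of result reported here, where the component with the highest raw marks is nonetheless the weakest predictor, is easy to reproduce with a toy forward-selection step. The greedy residual-based procedure and the invented marks below are illustrative assumptions, not the study's data or its exact stepwise algorithm.

```python
def r_squared(x, y):
    """Squared Pearson correlation of predictor x with outcome y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

def forward_select(predictors, y):
    """Greedy forward selection: repeatedly pick the predictor with the
    highest univariate R^2 against the current residual, fit a simple
    regression on it, and carry the residual forward."""
    order, resid, names = [], list(y), list(predictors)
    while names:
        best = max(names, key=lambda k: r_squared(predictors[k], resid))
        x = predictors[best]
        mx, mr = sum(x) / len(x), sum(resid) / len(resid)
        b = sum((a - mx) * (c - mr) for a, c in zip(x, resid)) \
            / sum((a - mx) ** 2 for a in x)
        a0 = mr - b * mx
        resid = [c - (a0 + b * a) for a, c in zip(x, resid)]
        order.append(best)
        names.remove(best)
    return order

# Hypothetical module marks: MCQ marks are high but nearly constant,
# so they carry little information about the overall grade.
marks = {"mcq": [70, 70, 71, 70, 70], "essay": [50, 60, 70, 80, 90]}
overall = [0.8 * e + 0.2 * m for e, m in zip(marks["essay"], marks["mcq"])]
order = forward_select(marks, overall)   # essay enters first
```

The MCQ marks have the highest mean yet almost no variance, so they explain almost none of the spread in the overall score, mirroring the abstract's finding.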
Abstract:
On July 17, 1990, President George Bush issued “Proclamation #6158”, which boldly declared that the following ten years would be called the “Decade of the Brain” (Bush, 1990). Accordingly, the research mandates of US federal biomedical institutions were redirected towards the study of the brain in general and cognitive neuroscience specifically. In 2008, one of the greatest legacies of this “Decade of the Brain” is the impressive array of techniques that can be used to study cortical activity. We now stand at a juncture where cognitive function can be mapped in the time, space and frequency domains, as and when such activity occurs. These advanced techniques have led to discoveries in many fields of research and clinical science, including psychology and psychiatry. Unfortunately, neuroscientific techniques have yet to be enthusiastically adopted by the social sciences. Market researchers, as specialized social scientists, have an unparalleled opportunity to adopt cognitive neuroscientific techniques and significantly redefine the field, possibly even causing substantial dislocations in business models. Following from this is a significant opportunity for more commercially-oriented researchers to employ such techniques in their own offerings. This report examines the feasibility of these techniques.
Abstract:
This is a study of heat transfer in a lift-off furnace which is employed in the batch annealing of a stack of coils of steel strip. The objective of the project is to investigate the various factors which govern the furnace design and the heat transfer resistances, so as to reduce the time of the annealing cycle and hence minimize the operating costs. The work involved mathematical modelling of the patterns of gas flow and the modes of heat transfer: heat conduction in the steel coils; convective heat transfer in the plates separating the coils in the stack and in other parts of the furnace; and radiative and convective heat transfer in the furnace, using the long furnace model. An important part of the project is the development of numerical methods and computations to solve the transient models. A limited number of temperature measurements was available from experiments on a test coil in an industrial furnace, and the mathematical model agreed well with these data. The model has been used to show the following characteristics of annealing furnaces, and to suggest further developments which would lead to significant savings:
- The location of the limiting temperature in a coil is nearer to the hollow core than to the outer periphery.
- Thermal expansion of the steel tends to open the coils, reduces their thermal conductivity in the radial direction, and hence prolongs the annealing cycle. Increasing the tension in the coils and/or heating from the core would overcome this heat transfer resistance.
- The shape and dimensions of the convective channels in the plates have a significant effect on heat convection in the stack. An optimal channel design is shown to have a width-to-height ratio of 9.
- Increasing the cooling rate, by using a fluidized bed instead of the normal shell-and-tube exchanger, would shorten the cooling time by about 15%, but increase the temperature differential in the stack.
- For a specific charge weight, a stack of different-sized coils will have a shorter annealing cycle than one of equally-sized coils, provided that production constraints allow the stacking order to be optimal.
- Recycling hot flue gases to the firing zone of the furnace would decrease the thermal efficiency by up to 30% but decrease the heating time by about 26%.
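A transient model of the kind described, with conduction through the coil in the radial direction, can be sketched with one explicit finite-difference step. The geometry, diffusivity and boundary handling below are illustrative assumptions, not the thesis's model.

```python
def step_radial_heat(T, r, dr, dt, alpha):
    """One explicit finite-difference step of 1-D transient radial conduction,
    dT/dt = alpha * (d2T/dr2 + (1/r) * dT/dr), with fixed end temperatures.
    Stable for dt <= dr**2 / (2 * alpha)."""
    new = list(T)
    for i in range(1, len(T) - 1):
        d2 = (T[i + 1] - 2 * T[i] + T[i - 1]) / dr ** 2
        d1 = (T[i + 1] - T[i - 1]) / (2 * dr)
        new[i] = T[i] + dt * alpha * (d2 + d1 / r[i])
    return new

# Hypothetical coil: 0.25 m core radius, 0.75 m outer radius, 11 nodes
dr, alpha = 0.05, 1e-5                  # m; m^2/s (assumed effective value)
r = [0.25 + dr * i for i in range(11)]
T = [700.0] + [500.0] * 10              # heated from the hollow core
T1 = step_radial_heat(T, r, dr, 0.1, alpha)
```

Heating from the core, as the abstract suggests, shows up immediately: after one step only the node adjacent to the core has warmed.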
Abstract:
Two contrasting multivariate statistical methods, viz. principal components analysis (PCA) and cluster analysis, were applied to the study of neuropathological variations between cases of Alzheimer's disease (AD). To compare the two methods, 78 cases of AD were analyzed, each characterised by measurements of 47 neuropathological variables. Both methods of analysis revealed significant variations between AD cases. These variations were related primarily to differences in the distribution and abundance of senile plaques (SP) and neurofibrillary tangles (NFT) in the brain. Cluster analysis classified the majority of AD cases into five groups which could represent subtypes of AD. However, PCA suggested that variation between cases was more continuous, with no distinct subtypes. Hence, PCA may be a more appropriate method than cluster analysis in the study of neuropathological variations between AD cases.
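The PCA half of such a comparison can be sketched with a pure-Python power iteration for the first principal component, which is what reveals continuous rather than clustered variation. The synthetic two-variable data below are an assumption, standing in for the 47 neuropathological variables.

```python
import math

def first_pc(data, iters=200):
    """First principal component (unit vector) of row-wise data,
    computed by power iteration on the sample covariance matrix."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(x[i][a] * x[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = math.sqrt(sum(c * c for c in w))
        v = [c / norm for c in w]
    return v

# Toy data: nearly all variance lies along the first variable, so the
# leading component aligns with it; real AD data would mix SP and NFT measures.
pts = [[float(t), 0.01 * t] for t in range(-5, 6)]
pc1 = first_pc(pts)
```

Projecting cases onto `pc1` gives the continuous ordering that PCA exposes, in contrast to the discrete group labels a cluster analysis would assign.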
Abstract:
This thesis is an exploration of the organisation and functioning of the human visual system using the non-invasive functional imaging modality magnetoencephalography (MEG). Chapters one and two provide an introduction to the human visual system and magnetoencephalographic methodologies. These chapters subsequently describe the methods by which MEG can be used to measure neuronal activity from the visual cortex. Chapter three describes the development and implementation of novel analytical tools, including beamforming-based analyses, spectrographic movies and an optimisation of group imaging methods. Chapter four focuses on the use of established and contemporary analytical tools in the investigation of visual function, beginning with an investigation of visually evoked and induced responses, covering visual evoked potentials (VEPs) and event-related synchronisation/desynchronisation (ERS/ERD). Chapter five describes the employment of novel methods in the investigation of cortical contrast response and demonstrates distinct contrast response functions in striate and extra-striate regions of visual cortex. Chapter six uses synthetic aperture magnetometry (SAM) to investigate the phenomenon of visual cortical gamma oscillations in response to various visual stimuli, concluding that pattern is central to their generation and that their amplitude increases linearly as a function of stimulus contrast, consistent with results from invasive electrode studies in the macaque monkey. Chapter seven describes the use of driven visual stimuli and tuned SAM methods in a pilot study of retinotopic mapping using MEG, finding that activity in the primary visual cortex can be distinguished in four quadrants and two eccentricities of the visual field.
Chapter eight is a novel implementation of the SAM beamforming method in the investigation of a subject with migraine visual aura; the method reveals desynchronisation of the alpha and gamma frequency bands in occipital and temporal regions contralateral to the observed visual abnormalities. The final chapter is a summary of the main conclusions and suggested further work.
Abstract:
Objective: Qualitative research is increasingly valued as part of the evidence for policy and practice, but how it should be appraised is contested. Various appraisal methods, including checklists and other structured approaches, have been proposed but rarely evaluated. We aimed to compare three methods for appraising qualitative research papers that were candidates for inclusion in a systematic review of evidence on support for breast-feeding. Method: A sample of 12 research papers on support for breast-feeding was appraised by six qualitative reviewers using three appraisal methods: unprompted judgement, based on expert opinion; a UK Cabinet Office quality framework; and CASP, a Critical Appraisal Skills Programme tool. Following appraisal, papers were assigned to 1 of 5 categories, which were dichotomized to indicate whether or not papers should be included in a systematic review. Patterns of agreement in the categorization of papers were assessed quantitatively using κ statistics, and qualitatively using cross-case analysis. Results: Agreement in categorizing papers across the three methods was slight (κ = 0.13; 95% CI 0.06-0.24). Structured approaches did not appear to yield higher agreement than unprompted judgement. Qualitative analysis revealed reviewers' dilemmas in deciding between the potential impact of findings and the quality of the research execution or reporting practice. Structured instruments appeared to make reviewers more explicit about the reasons for their judgements. Conclusions: Structured approaches may not produce greater consistency of judgements about whether to include qualitative papers in a systematic review. Future research should address how appraisals of qualitative research should be incorporated into systematic reviews.
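For the agreement statistic, a pairwise Cohen's κ is the basic building block (the study's three-method comparison would need a multi-rater generalization such as Fleiss' κ). The include/exclude labels below are invented for illustration.

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters'
    categorical judgements over the same items."""
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n)        # chance agreement
             for c in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

# Two hypothetical reviewers dichotomizing papers as include/exclude
a = ["in", "in", "out", "out", "in", "out"]
b = ["in", "out", "out", "out", "in", "in"]
k = cohens_kappa(a, b)
```

With 4 of 6 matches but a 0.5 chance-agreement rate, κ comes out at 1/3, which under the usual rules of thumb is only "fair" agreement despite 67% raw agreement; this is why the paper reports κ rather than raw percentages.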
Abstract:
Damage to insulation materials located near to a primary circuit coolant leak may compromise the operation of the emergency core cooling system (ECCS). Insulation material in the form of mineral wool fiber agglomerates (MWFA) may be transported to the containment sump strainers, where they may block or penetrate the strainers. Though the impact of the MWFA on the pressure drop across the strainers is minimal, corrosion products formed over time may also accumulate in the fiber cakes on the strainers, which can lead to a significant increase in the strainer pressure drop and result in cavitation in the ECCS. An experimental and theoretical study performed by the Helmholtz-Zentrum Dresden-Rossendorf and the Hochschule Zittau/Görlitz is investigating the phenomena that may be observed in the containment vessel during a primary circuit coolant leak. The study entails the generation of fiber agglomerates, the determination of their transport properties in single- and multi-effect experiments, and the long-term effect that corrosion and erosion of the containment internals by the coolant has on the strainer pressure drop. The focus of this paper is on the verification and validation of numerical models that can predict the transport of the MWFA. The MWFA are represented by a number of pseudo-continuous dispersed phases of spherical wetted agglomerates. The size, the density, the relative viscosity of the fluid-fiber agglomerate mixture and the turbulent dispersion all affect how the fiber agglomerates are transported. In the cases described here, the size is kept constant while the density is modified; this affects both the terminal velocity and the volume fraction of the dispersed phases. Note that the relative viscosity is only significant at high concentrations.
Three single-effect experiments were used to provide validation data on the transport of the fiber agglomerates under conditions of sedimentation in a quiescent fluid, sedimentation in a horizontal flow and suspension in a horizontal flow. The experiments were performed in a rectangular column for the quiescent fluid and in a racetrack-type channel that provided a near-uniform horizontal flow. The numerical models of sedimentation in the column and in the racetrack channel found that the sedimentation characteristics are consistent with the experiments. For channel suspension, the heavier fibers tend to accumulate at the channel base even at high velocities, while the lighter phases are more likely to be transported around the channel.
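The terminal velocity that, together with size and density, controls the sedimentation behaviour can be sketched with the Stokes settling law for a small wetted sphere. The property values below are assumptions for illustration, not the study's calibrated phase densities.

```python
def terminal_velocity(d, rho_p, rho_f, mu, g=9.81):
    """Stokes-regime terminal settling velocity (m/s) of a sphere of
    diameter d (m) and density rho_p (kg/m^3) in a fluid of density
    rho_f and dynamic viscosity mu (Pa*s).
    Valid only for particle Reynolds numbers below about 1."""
    return g * d ** 2 * (rho_p - rho_f) / (18 * mu)

# Hypothetical wetted MWFA agglomerate: 100 um, slightly denser than water
v = terminal_velocity(1e-4, 1100.0, 1000.0, 1e-3)   # ~0.5 mm/s
re = v * 1e-4 * 1000.0 / 1e-3                       # Reynolds check, << 1
```

Because settling velocity scales linearly with the density difference, varying the phase density while holding size fixed, as in the paper, directly tunes which phases deposit and which stay suspended.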
Abstract:
There are several methods of providing series compensation for transmission lines using power electronic switches. Four methods of series compensation have been examined in this thesis: the thyristor controlled series capacitor, a voltage sourced inverter series compensator using a capacitor as the series element, a current sourced inverter series compensator, and a voltage sourced inverter using an inductor as the series element. All the compensators examined will provide a continuously variable series voltage which is controlled by the switching of the electronic switches. Two of the circuits, the thyristor controlled series capacitor and the current sourced inverter series compensator, will offer both capacitive and inductive compensation. The other two will produce either capacitive or inductive series compensation. The thyristor controlled series capacitor offers the widest range of series compensation. However, there is a band of unavailable compensation between 0 and 1 pu capacitive compensation, and, compared to the other compensators examined, the harmonic content of the compensating voltage is quite high. An algebraic analysis showed that there is more than one state in which the thyristor controlled series capacitor can operate; one of these states has the undesirable effect of introducing large losses. The voltage sourced inverter series compensator using a capacitor as the series element will provide only capacitive compensation. It uses two capacitors, which increases the cost of the compensator significantly above the other three. This circuit has the advantage of very low harmonic distortion. The current sourced inverter series compensator will provide both capacitive and inductive series compensation. The harmonic content of its compensating voltage is second only to that of the voltage sourced inverter series compensator using a capacitor as the series element.
The voltage sourced inverter series compensator using an inductor as the series element will only provide inductive compensation, and it is the least expensive compensator examined. Unfortunately, the harmonics introduced by this circuit are considerable.
Abstract:
How are innovative new business models established if organizations constantly compare themselves against existing criteria and expectations? The objective is to address this question from the perspective of innovators and their ability to redefine established expectations and evaluation criteria. The research questions ask whether there are discernible patterns of discursive action through which innovators theorize institutional change and what role such theorizations play in mobilizing support and realizing change projects. These questions are investigated through a case study on a critical area of enterprise computing software, Java application servers. In the present case, business practices and models were already well established among incumbents, with critical market areas allocated to a few dominant firms. Fringe players started experimenting with a new business approach of selling services around freely available open-source application servers. While most new players struggled, one new entrant succeeded in leading incumbents to adopt and compete on the new model. The case demonstrates that innovative and substantially new models and practices are established in organizational fields when innovators are able to redefine expectations and evaluation criteria within an organizational field. The study addresses the theoretical paradox of embedded agency: actors who are embedded in prevailing institutional logics and structures find it hard to perceive potentially disruptive opportunities that fall outside existing ways of doing things. Changing prevailing institutional logics and structures requires strategic and institutional work aimed at overcoming barriers to innovation. The study addresses this problem through the lens of (new) institutional theory. A discourse methodology traces the process through which innovators were able to establish a new social and business model in the field.