32 results for quantitative method
in Aston University Research Archive
Abstract:
Few works address methodological issues of how to conduct strategy-as-practice research, and even fewer focus on how to analyse the resulting data in ways that illuminate strategy as an everyday, social practice. We address this gap by proposing a quantitative method for analysing observational data, which can complement more traditional qualitative methodologies. We propose that rigorous but context-sensitive coding of transcripts can render everyday practice analysable statistically. Such statistical analysis provides a means for analytically representing patterns and shifts within the mundane, repetitive elements through which practice is accomplished. We call this approach the Event Database (EDB); it consists of five basic coding categories that help us capture the stream of practice. Indexing codes categorise the data in order to give context and offer basic information about the event under discussion; they are descriptive codes that allow us to catalogue and classify events according to their assigned characteristics. Content codes concern the qualitative nature of the event, its essence: a description that helps to inform judgements about the phenomenon. Nature codes distinguish between discursive and tangible events, acknowledging that some events differ qualitatively from others. Type codes are abstracted from the data to help us classify events based on their description or nature; this involves significantly more judgement than the indexing codes but is consequently also more meaningful. Dynamics codes capture some of the movement or fluidity of events, letting us follow the flow of activity over time.
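As an illustration only (not code from the paper), the five coding categories might be represented as a simple record per observed event, which then makes frequency counts and shift analyses straightforward; all field names and values below are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class EDBEvent:
    """One coded event in an Event Database (EDB)-style scheme.

    The five fields mirror the coding categories described in the abstract;
    the concrete names and values are illustrative, not the authors' codebook.
    """
    index: dict        # indexing codes: descriptive context (meeting, speaker, date, ...)
    content: str       # content code: qualitative description, the essence of the event
    nature: str        # nature code: 'discursive' or 'tangible'
    event_type: str    # type code: analyst-abstracted classification of the event
    dynamics: str      # dynamics code: movement/flow of the event over time

events = [
    EDBEvent({"meeting": "board_01", "speaker": "CEO"},
             "Proposal to revisit the regional expansion plan",
             "discursive", "agenda_setting", "initiated"),
    EDBEvent({"meeting": "board_02", "speaker": "CFO"},
             "Circulation of revised budget figures",
             "tangible", "resourcing", "recurring"),
]

# The coded stream can then be summarised statistically, e.g. by type frequency.
print(Counter(e.event_type for e in events))
```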
Abstract:
Water-based latices, used in the production of internal liners for beer/beverage cans, were investigated using a number of analytical techniques. The epoxy-graft-acrylic polymers used to prepare the latices, and the films produced from those latices, were also examined. It was confirmed that acrylic polymer preferentially grafts onto higher molecular weight portions of the epoxy polymer. The amount of epoxy remaining ungrafted was determined to be 80%, a figure higher than previously thought. Molecular weight distribution studies were carried out on the epoxy and epoxy-g-acrylic resins. A quantitative method for determining copolymer composition using GPC was evaluated, and the GPC method was also used to determine polymer composition as a function of molecular weight. IR spectroscopy was used to determine the total level of acrylic modification of the polymers, and NMR was used to determine the level of grafting. Particle size determinations were carried out using transmission electron microscopy and dynamic light scattering. The level of stabilising amine greatly affected the viscosity of the latex, the particle size and the amount of soluble polymer, but the core particle size, as determined using TEM, was unaffected. NMR spectroscopy of the latices produced spectra only from the solvents and amine modifiers; using solid-state CP/MAS and freezing techniques, spectra from the epoxy component could be observed. FT-IR spectra of the latices were obtained after spectral subtraction of water. The only differences between the spectra of the latices and those of the dry film were due to the presence of the solvents in the former. A distinctive morphology was observed in the films produced from the latices, suggesting that the micelle structure of the latex survives the film-forming process. If insufficient acrylic is present, large epoxy domains are produced, which give rise to poor film characteristics. Casting the polymers from organic solutions failed to produce a similar morphology.
Abstract:
This thesis examines the innovative performance of 206 U.S. business service firms. Undeniably, a need exists for better comprehension of the service sector of developed economies. This research takes a unique view by applying a synthesis approach to studying innovation and attempts to build upon a proposed strategic innovation paradigm. A quantitative method is utilised via questionnaire in which all major types of innovation are examined, including product and service, organisational, and technology-driven innovations. Essential ideas for this conceptual framework encapsulate a new mode of understanding service innovation. The structure of the analysis encompasses the likelihood of innovation and the extent of innovation, while also attempting to shed light on the factors which determine the impact of innovation on performance among service firms. What differentiates this research is its focus on customer-driven service firms in addition to other external linkages. A synopsis of the findings suggests that external linkages, particularly with customers, suppliers and strategic alliances or joint ventures, significantly affect innovation performance with regard to the introduction of new services. Service firms which incorporate formal and informal R&D experience significant increases in the extent of new-to-market and new-to-firm innovations. Additionally, the results show that customer-driven service firms experience greater productivity and growth. Furthermore, the findings suggest that external linkages assist service firm performance.
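To make the described analysis structure concrete, the sketch below shows only the first stage (likelihood of innovation) as a simple logit on synthetic survey data; the variable names, the use of statsmodels, and all numbers are assumptions for illustration, not the thesis's specification.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 206  # number of firms in the study; the data here are synthetic

# Hypothetical explanatory variables: formal R&D, customer linkages, alliances (0/1 dummies)
X = rng.integers(0, 2, size=(n, 3)).astype(float)
X = sm.add_constant(X)
innovated = rng.integers(0, 2, size=n)  # 1 = firm introduced a new service

# Stage 1: likelihood of innovation modelled as a binary-response (logit) regression
stage1 = sm.Logit(innovated, X).fit(disp=False)
print(stage1.params)
```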
Abstract:
Product reliability and its environmental performance have become critical elements within a product's specification and design. To obtain a high level of confidence in the reliability of the design, it is customary to test the design under realistic conditions in a laboratory. The objective of the work is to examine the feasibility of designing mechanical test rigs which exhibit prescribed dynamical characteristics. The design under test is then attached to the rig and excitation is applied, with the rig transmitting representative vibration levels into the product. The philosophical considerations made at the outset of the project are discussed, as they form the basis for the resulting design methodologies. An attempt is made to identify the parameters of a test rig directly from the spatial model derived during the system identification process. It is shown to be impossible to identify a feasible test rig design using this technique. A finite dimensional optimal design methodology is therefore developed which identifies the parameters of a discrete spring/mass system that is dynamically similar to a point coordinate on a continuous structure. This design methodology is incorporated within another procedure which derives a structure comprising a continuous element and a discrete system. This methodology is used to obtain point coordinate similarity for two planes of motion, and is validated by experimental tests. A limitation of this approach is that multi-coordinate similarity cannot be achieved, due to an interaction between the discrete system and the continuous element at points away from the coordinate of interest. During the work the importance of the continuous element is highlighted, and a design methodology is developed for continuous structures. This methodology is based upon distributed parameter optimal design techniques and allows an initial poor design estimate to be moved in a feasible direction towards an acceptable design solution. Cumulative damage theory is used to provide a quantitative method of assessing the quality of dynamic similarity. It is shown that the combination of modal analysis techniques and cumulative damage theory provides a feasible design synthesis methodology for representative test rigs.
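As a hedged illustration of how cumulative damage theory can score dynamic similarity, the sketch below applies the Palmgren-Miner rule to two hypothetical stress histograms (product on the real structure versus product on the test rig); the S-N curve constants and cycle counts are invented, not data from the thesis.

```python
import numpy as np

def miner_damage(stress_amplitudes, cycle_counts, sn_C=1e12, sn_m=3.0):
    """Palmgren-Miner cumulative damage D = sum(n_i / N_i),
    with an assumed Basquin-type S-N curve N_i = C * S_i**(-m)."""
    s = np.asarray(stress_amplitudes, dtype=float)
    n = np.asarray(cycle_counts, dtype=float)
    n_allowable = sn_C * s ** (-sn_m)
    return float(np.sum(n / n_allowable))

# Hypothetical stress histograms seen by the product on the real structure
# and on the candidate test rig, for the same block of service cycles.
d_structure = miner_damage([80.0, 120.0, 150.0], [5e4, 1e4, 2e3])
d_rig = miner_damage([85.0, 115.0, 145.0], [5e4, 1e4, 2e3])
print(f"damage ratio rig/structure = {d_rig / d_structure:.2f}")  # close to 1.0 suggests good dynamic similarity
```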
Abstract:
In the UK, Open Learning has been used in industrial training for at least the last decade. Trainers and Open Learning practitioners have been concerned about the quality of the products and services being delivered. The argument put forward in this thesis is that there is ambiguity amongst industrialists over the meanings of 'Open Learning' and 'Quality in Open Learning'. For clarity, a new definition of Open Learning is proposed which challenges the traditional learner-centred approach favoured by educationalists. It introduces the concept that there are benefits afforded to the trainer/employer/teacher as well as to the learner. This enables a focussed view of what quality in Open Learning really means. Having discussed these issues, a new quantitative method of evaluating Open Learning is proposed, based upon an assessment of the degree of compliance with which products meet Parts 1 & 2 of the Open Learning Code of Practice. The vehicle for these research studies has been a commercial contract, commissioned by the Training Agency for the Engineering Industry Training Board (EITB), to examine the quality of Open Learning products supplied to the engineering industry. A major part of this research has been the application of the evaluation technique to a range of 67 Open Learning products (in eight subject areas). The findings were that good quality products can be found right across the price range, as can average and poor quality ones. The study also shows quite convincingly that there are good quality products to be found at less than £50. Finally, the majority (24 out of 34) of the good quality products were text based.
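A minimal sketch of a compliance-style score of the kind described (degree of compliance with a code of practice); the checklist items and the equal weighting are hypothetical, not the actual clauses of the Open Learning Code of Practice.

```python
def compliance_score(responses):
    """Return the fraction of checklist criteria met.

    `responses` maps a criterion id to True/False. The criterion ids here are
    invented placeholders, not clauses from the Code of Practice.
    """
    if not responses:
        return 0.0
    return sum(bool(v) for v in responses.values()) / len(responses)

product_a = {"clear_objectives": True, "self_assessment": True, "tutor_support": False}
print(f"degree of compliance: {compliance_score(product_a):.0%}")
```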
Abstract:
The fracture properties of a series of alloys containing 15% chromium and 0.8 to 3.4% carbon are investigated using plane strain fracture toughness testing techniques. The object of the work is to apply a quantitative method of measuring toughness to abrasion-resistant materials, which have previously been assessed on an empirical basis, and to examine the relationship between microstructure and K1c in an attempt to improve the toughness of inherently brittle materials. A review of the relevant literature includes discussion of the background to the alloy series under investigation, a survey of the development of fracture mechanics, and the emergence of K1c as a toughness parameter. Metallurgical variables such as composition, heat treatment, grain size and hot working are studied to relate microstructure to toughness, and fractographic evidence is used to substantiate the findings. The results are applied to a model correlating ductile fracture with plastic strain instability and the nucleation of voids. Strain-induced martensite formation in austenitic structures is analysed in terms of the plastic energy dissipation mechanisms operating at the crack tip. Emphasis is placed on the lower carbon alloys in the series, and a composition is put forward to optimise wear resistance and toughness. The properties of established competitive materials are compared with those of the proposed alloy on a toughness and cost basis.
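For readers unfamiliar with the toughness parameter involved, the sketch below evaluates the standard stress-intensity relation K_I = Y * sigma * sqrt(pi * a), with fracture expected when K_I reaches the material's plane strain fracture toughness K1c; the geometry factor and the numbers are illustrative, not data from the thesis.

```python
import math

def stress_intensity(sigma_mpa, crack_length_m, Y=1.12):
    """K_I = Y * sigma * sqrt(pi * a), returned in MPa*sqrt(m).
    Y = 1.12 is the usual single edge crack geometry factor; values are illustrative."""
    return Y * sigma_mpa * math.sqrt(math.pi * crack_length_m)

# Fracture is predicted when K_I reaches the measured K1c of the alloy.
print(f"K_I = {stress_intensity(200.0, 0.002):.1f} MPa*sqrt(m)")
```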
Abstract:
Identification of epitopes capable of binding multiple HLA types will significantly rationalise the development of epitope-based vaccines. A quantitative method assessing the contribution of each amino acid at each position was applied to over 500 nonamer peptides binding to 5 MHC alleles (A*0201, A*0202, A*0203, A*0206 and A*6802), which together define the HLA-A2-like supertype. FXIGXI (L)IFV was identified as a supermotif for the A2 supertype based on the contributions of the common preferred amino acids at each of the nine positions. The results indicate that HLA-A*6802 is an intermediate allele standing between the A2 and A3 supertypes: at anchor position 2 it is closer to A3, and at anchor position 9 it is nearer to A2. Models are available free online at http://www.jenner.ac.uk/MHCPred and can be used for binding affinity prediction.
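The position-by-position contribution idea can be sketched as an additive model in which a peptide's predicted affinity is a constant plus the summed contributions of the amino acid found at each of the nine positions; the coefficient table, constant and peptide below are invented placeholders, not MHCPred coefficients.

```python
# Additive model: predicted affinity = constant + sum over the nine positions of the
# contribution of the amino acid at that position. All values are hypothetical.
CONTRIB = {
    (2, "L"): 0.45, (2, "I"): 0.30, (2, "V"): 0.20,   # anchor position 2
    (9, "V"): 0.40, (9, "I"): 0.25, (9, "L"): 0.20,   # anchor position 9
}
CONSTANT = 5.0  # assumed baseline pIC50

def predict_affinity(nonamer):
    assert len(nonamer) == 9, "the model applies to nonamer peptides"
    return CONSTANT + sum(CONTRIB.get((pos, aa), 0.0)
                          for pos, aa in enumerate(nonamer, start=1))

print(predict_affinity("FLIGHTVAV"))   # hypothetical nonamer
```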
Abstract:
Purpose: The human retinal vasculature has been demonstrated to exhibit fractal, or statistically self-similar, properties. Fractal analysis offers a simple quantitative method to characterise the complexity of the branching vessel network in the retina. Several methods have been proposed to quantify the fractal properties of the retina. Methods: Twenty-five healthy volunteers underwent retinal photography, retinal oximetry and ocular biometry. A robust method to evaluate the fractal properties of the retinal vessels is proposed; it consists of manual vessel segmentation and box counting of 50-degree retinal photographs centred on the fovea. Results: Data are presented on the associations between the fractal properties of the retinal vessels and various functional properties of the retina. Conclusion: Fractal properties of the retina could offer a promising tool to assess the risk and prognostic factors that define retinal disease. Outstanding work includes adopting a standardised protocol for assessing the fractal properties of the retina and further demonstrating their association with disease processes.
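A minimal box-counting sketch of the kind of fractal analysis described: count occupied boxes of decreasing size over a binary vessel mask and take the slope of log N(s) against log(1/s); the random mask below stands in for a manually segmented 50-degree photograph.

```python
import numpy as np

def box_count_dimension(mask, box_sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a 2-D binary mask by box counting."""
    counts = []
    for s in box_sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]                  # crop so boxes tile exactly
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())              # boxes containing any vessel pixel
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# Hypothetical binary vessel mask in place of a segmented fundus photograph.
rng = np.random.default_rng(1)
mask = rng.random((256, 256)) > 0.7
print(f"estimated fractal dimension: {box_count_dimension(mask):.2f}")
```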
Abstract:
This paper introduces a quantitative method for identifying newly emerging word forms in large time-stamped corpora of natural language and then describes an analysis of lexical emergence in American social media that applies this method to a multi-billion-word corpus of Tweets collected between October 2013 and November 2014. In total, 29 emerging word forms, representing various semantic classes, grammatical parts of speech and word-formation processes, were identified through this analysis. These 29 forms are then examined from various perspectives in order to begin to better understand the process of lexical emergence.
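One plausible, hypothetical reading of the detection step is a frequency-growth filter over time-binned counts of each word form, sketched below; the thresholds and token counts are invented, not the paper's criteria.

```python
def emerging_forms(monthly_counts, min_growth=10.0, min_late_count=100):
    """Flag word forms whose frequency rises sharply across a time-stamped corpus.

    `monthly_counts[form]` is a list of per-month token counts in time order.
    The growth ratio and frequency thresholds are illustrative only.
    """
    flagged = []
    for form, counts in monthly_counts.items():
        half = len(counts) // 2
        early = sum(counts[:half]) + 1      # +1 avoids division by zero for brand-new forms
        late = sum(counts[half:])
        if late >= min_late_count and late / early >= min_growth:
            flagged.append(form)
    return flagged

counts = {
    "hypothetical_new_form": [0, 1, 2, 40, 90, 160],   # rises from near zero
    "coffee": [500, 480, 510, 495, 505, 490],          # stable, high-frequency form
}
print(emerging_forms(counts))   # -> ['hypothetical_new_form']
```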
Abstract:
A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. Candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively according to both the training and testing samples. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given with confidence intervals derived from the predictive probability distribution, which helps to evaluate the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
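To illustrate the idea of a regression model that returns a predictive distribution for elemental concentration, the sketch below uses scikit-learn's BayesianRidge as a simple Bayesian stand-in for the paper's RVM; the line intensities and concentrations are synthetic, not the certified steel data.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge

rng = np.random.default_rng(2)

# Synthetic training data: intensities of a few selected analytical lines (features)
# against certified Cr concentrations (targets); all values are invented.
X_train = rng.random((23, 4))                       # 23 standard samples, 4 analytical lines
true_w = np.array([2.0, 0.5, 1.0, 0.2])
y_train = X_train @ true_w + rng.normal(0.0, 0.05, size=23)

model = BayesianRidge().fit(X_train, y_train)

X_new = rng.random((3, 4))                          # spectra of unknown samples
mean, std = model.predict(X_new, return_std=True)   # predictive mean and standard deviation
for m, s in zip(mean, std):
    print(f"predicted concentration: {m:.2f} +/- {1.96 * s:.2f} (~95% interval)")
```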
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information on spectral data within the normal distribution is retained in the regression model while information on outliers is restrained or removed. Copper concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function offered a better overall balance of model robustness and convergence speed than the four known weighting functions.
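The segmented weighting idea, keeping residuals that look normally distributed while down-weighting or removing outliers, can be sketched as a piecewise weight on robustly standardized residuals; the breakpoints below are assumed, not the paper's values.

```python
import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0):
    """Piecewise (segmented) weights on robustly standardized residuals:
    1 inside c1, a linear taper between c1 and c2, and 0 beyond c2.
    The breakpoints are illustrative, not the paper's values."""
    res = np.asarray(residuals, dtype=float)
    scale = 1.4826 * np.median(np.abs(res - np.median(res))) + 1e-12   # robust (MAD) scale
    r = np.abs(res) / scale
    w = np.ones_like(r)
    taper = (r > c1) & (r <= c2)
    w[taper] = (c2 - r[taper]) / (c2 - c1)
    w[r > c2] = 0.0
    return w

shots = np.array([0.10, -0.20, 0.05, 3.50, -0.15])   # residuals from five laser shots, one gross outlier
print(segmented_weights(shots))                       # the outlier receives weight 0, the rest weight 1
```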
Abstract:
The thesis presents a two-dimensional Risk Assessment Method (RAM) in which the assessment of risk to groundwater resources incorporates both the quantification of the probability of the occurrence of contaminant source terms and the assessment of the resultant impacts. The approach emphasizes the need for a greater dependency on the potential pollution sources, rather than the traditional approach where assessment is based mainly on the intrinsic geo-hydrologic parameters. The risk is calculated using Monte Carlo simulation, whereby random pollution events are generated according to the same distribution as historically occurring events or to an a priori probability distribution. Integrated mathematical models then simulate contaminant concentrations at predefined monitoring points within the aquifer. The spatial and temporal distributions of the concentrations are calculated from repeated realisations, and the number of times a user-defined concentration magnitude is exceeded is quantified as the risk. The method was set up by integrating MODFLOW-2000, MT3DMS and a FORTRAN-coded risk model, and automated using a DOS batch processing file. GIS software was employed in producing the input files and for the presentation of the results. The functionalities of the method, as well as its sensitivities to model grid size, contaminant loading rates, length of stress periods, and the historical frequencies of occurrence of pollution events, were evaluated using hypothetical scenarios and a case study. Chloride-related pollution sources were compiled and used as indicative potential contaminant sources for the case study. At any active model cell, if a randomly generated number is less than the probability of pollution occurrence, the risk model generates a synthetic contaminant source term as an input to the transport model. The results of the applications of the method are presented in the form of tables, graphs and spatial maps. Varying the model grid size indicates no significant effect on the simulated groundwater head. The simulated frequency of daily occurrence of pollution incidents is also independent of the model dimensions. However, the simulated total contaminant mass generated within the aquifer, and the associated volumetric numerical error, appear to increase with increasing grid size. The contaminant plume also migrates faster with coarse grid sizes than with finer ones. The number of daily contaminant source terms generated, and consequently the total mass of contaminant within the aquifer, increases in non-linear proportion to the increasing frequency of occurrence of pollution events. The risk of pollution from a number of sources all occurring by chance together was evaluated and presented quantitatively as risk maps. This capability to combine the risk to a groundwater feature from numerous potential sources of pollution proved to be a major asset of the method and a significant advantage over contemporary risk and vulnerability methods.
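The core Monte Carlo loop described (a random draw against the probability of occurrence generates a source term, repeated realisations are run, and exceedances of a user-defined concentration are counted as risk) can be sketched as below; the MODFLOW-2000/MT3DMS transport step is replaced by a trivial stand-in and every number is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)

def toy_transport(source_mass):
    """Stand-in for the MODFLOW-2000/MT3DMS transport simulation: maps a daily
    contaminant source term to a concentration increment at a monitoring point."""
    return 0.01 * source_mass

def exceedance_risk(p_event=0.05, days=365, n_realisations=1000,
                    source_mass=50.0, threshold=4.0):
    """Fraction of realisations in which the user-defined concentration is exceeded."""
    exceeded = 0
    for _ in range(n_realisations):
        conc = 0.0
        for _ in range(days):
            if rng.random() < p_event:      # random draw against the probability of a pollution event
                conc += toy_transport(source_mass)
            conc *= 0.99                    # crude decay/dilution between days
            if conc > threshold:
                exceeded += 1
                break
    return exceeded / n_realisations

print(f"simulated risk of exceeding the threshold: {exceedance_risk():.2f}")
```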
Abstract:
Objective: To quantify the neuronal and glial cell pathology in the hippocampus and parahippocampal gyrus (PHG) of 8 cases of progressive supranuclear palsy (PSP). Material: tau-immunolabeled sections of the temporal lobe of 8 diagnosed cases of PSP. Method: The densities of lesions were measured in the PHG, the CA sectors of the hippocampus and the dentate gyrus (DG), and studied using spatial pattern analysis. Results: Neurofibrillary tangles (NFT) and abnormally enlarged neurons (EN) were most frequent in the PHG and in sector CA1 of the hippocampus, oligodendroglial inclusions (“coiled bodies”) (GI) in the PHG, subiculum, and sectors CA1 and CA2, and neuritic plaques (NP) in sectors CA2 and CA4. The DG was the least affected region. Vacuolation and GI were observed in the alveus. No tufted astrocytes (TA) were observed. Pathological changes exhibited clustering, with the clusters of lesions often regularly distributed parallel to the tissue boundary. There was a positive correlation between the degree of vacuolation in the alveus and the densities of NFT in CA1 and of GI in CA1 and CA2. Conclusion: The pathology most significantly affected the output pathways of the hippocampus, lesions were topographically distributed, and hippocampal pathology may be one factor contributing to cognitive decline in PSP.
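One common form of spatial pattern analysis for lesion counts, offered here purely as a hypothetical illustration rather than the study's protocol, is the variance-to-mean (index of dispersion) ratio computed after pooling adjacent sample fields; the counts below are invented.

```python
import numpy as np

def variance_mean_ratio(counts, block_size):
    """Index of dispersion (variance/mean) of lesion counts after pooling
    `block_size` adjacent sample fields; values > 1 suggest clustering,
    ~1 a random pattern, < 1 a regular pattern."""
    c = np.asarray(counts, dtype=float)
    c = c[: len(c) - len(c) % block_size].reshape(-1, block_size).sum(axis=1)
    return c.var(ddof=1) / c.mean()

lesion_counts = [3, 5, 0, 1, 8, 9, 1, 0, 4, 6, 7, 0, 2, 1, 9, 8]  # hypothetical NFT counts per field
for size in (1, 2, 4):
    print(f"fields pooled: {size}, V/M = {variance_mean_ratio(lesion_counts, size):.2f}")
```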
Abstract:
A dry matrix application for matrix-assisted laser desorption/ionization mass spectrometry imaging (MALDI MSI) was used to profile the distribution of 4-bromophenyl-1,4-diazabicyclo(3.2.2)nonane-4-carboxylate, monohydrochloride (BDNC, SSR180711) in rat brain tissue sections. Matrix application involved applying layers of finely ground, dry alpha-cyano-4-hydroxycinnamic acid (CHCA) to the surface of tissue sections thaw-mounted onto MALDI targets. It was not possible to detect the drug when the matrix was applied in a standard aqueous-organic solvent solution. The drug was detected at higher concentrations in specific regions of the brain, particularly the white matter of the cerebellum. Pseudo-multiple reaction monitoring imaging was used to validate that the observed distribution corresponded to the target compound. The semiquantitative data obtained from signal intensities in the imaging were confirmed by laser microdissection of specific regions of the brain, directed by the imaging, followed by hydrophilic interaction chromatography in combination with a quantitative high-resolution mass spectrometry method. This study illustrates that a dry matrix coating is a valuable and complementary matrix application method for the analysis of small polar drugs and metabolites, and that it can be used for semiquantitative analysis.