23 results for Quantitative fit analysis


Relevance: 40.00%

Abstract:

Sponsorship fit is frequently mentioned and empirically examined as a success factor in sponsorship. While sponsorship fit has been considered a determinant of sponsorship success, little is known about its antecedents. In the present paper, individual and firm-level antecedents of sponsorship fit are examined in a single hierarchical linear model. Results show that sponsorship fit is influenced by the perception of benefits, the firm's regional identification, its sincerity, its relatedness to the sponsored activity, and its dominance. At the partnership level, contract length contributes to sponsorship fit, while contract value is found to be unrelated.
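
As a minimal sketch of the kind of hierarchical linear model described above, the snippet below fits a two-level (sponsorship-within-firm) mixed model with statsmodels; every variable name, coefficient, and sample size is hypothetical, not taken from the paper.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_firms, per_firm = 40, 5                     # hypothetical sample layout
n = n_firms * per_firm
df = pd.DataFrame({
    "firm_id": np.repeat(np.arange(n_firms), per_firm),
    "perceived_benefits": rng.normal(size=n),
    "sincerity": rng.normal(size=n),
    "contract_length": rng.integers(1, 6, size=n).astype(float),
    "contract_value": rng.normal(size=n),
})
# Synthetic outcome with a firm-level random intercept baked in.
firm_effect = rng.normal(scale=0.5, size=n_firms)[df["firm_id"]]
df["fit"] = (0.4 * df["perceived_benefits"] + 0.3 * df["sincerity"]
             + 0.2 * df["contract_length"] + firm_effect
             + rng.normal(scale=0.3, size=n))

# Random intercept per firm captures firm-level clustering; fixed effects
# estimate each antecedent's contribution to sponsorship fit.
model = smf.mixedlm("fit ~ perceived_benefits + sincerity"
                    " + contract_length + contract_value",
                    data=df, groups=df["firm_id"])
print(model.fit().summary())
```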

Relevance: 40.00%

Abstract:

Objective: To study the density and cross-sectional area of axons in the optic nerve in elderly control subjects and in cases of Alzheimer's disease (AD) using an image analysis system. Methods: Sections of optic nerve from control and AD patients were stained with toluidine blue to reveal axon profiles. Results: The density of axons was reduced in both the central and peripheral portions of the optic nerve in AD compared with control patients. Analysis of axons with different cross-sectional areas suggested a specific loss of the smaller axons in AD, i.e., those with areas less than 1.99 μm². An analysis of axons >11 μm² in cross-sectional area suggested no specific loss of the larger axons in this group of patients. Conclusions: The data suggest that image analysis provides an accurate and reproducible method of quantifying axons in the optic nerve. In addition, the data suggest that axons are lost throughout the optic nerve, with a specific loss of the smaller axons. Loss of the smaller axons may explain the deficits in color vision observed in a significant proportion of patients with AD.
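
A toy numpy sketch of the quantification step described above: binning measured axon cross-sectional areas against the <1.99 μm² and >11 μm² thresholds and converting a count into a density. The areas and field size are invented for illustration.

```python
import numpy as np

# Hypothetical axon cross-sectional areas (μm²) from one sampling field.
areas = np.array([0.8, 1.2, 3.5, 0.6, 14.2, 2.1, 1.7, 9.8, 0.9, 12.5])
field_area_um2 = 2500.0                     # hypothetical field size, μm²

density_per_mm2 = len(areas) / field_area_um2 * 1e6
n_small = int(np.sum(areas < 1.99))         # smaller axons (< 1.99 μm²)
n_large = int(np.sum(areas > 11.0))         # larger axons (> 11 μm²)
print(f"{density_per_mm2:.0f} axons/mm²; small: {n_small}, large: {n_large}")
```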

Relevance: 40.00%

Abstract:

The use of quantitative methods has become increasingly important in the study of neuropathology, especially in neurodegenerative disease. Disorders such as Alzheimer's disease (AD) and the frontotemporal dementias (FTD) are characterized by the formation of discrete, microscopic, pathological lesions which play an important role in pathological diagnosis. This chapter reviews the advantages and limitations of the different methods of quantifying pathological lesions in histological sections, including estimates of density, frequency, coverage, and the use of semi-quantitative scores. The sampling strategies by which these quantitative measures can be obtained from histological sections, including plot or quadrat sampling, transect sampling, and point-quarter sampling, are described. In addition, the statistical methods commonly used to analyse quantitative data in neuropathology, including analysis of variance (ANOVA), polynomial curve fitting, multiple regression, classification trees, and principal components analysis (PCA), are discussed. These methods are illustrated with reference to quantitative studies of a variety of neurodegenerative disorders.
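
A brief sketch of two of the methods the chapter reviews: lesion density estimated by plot (quadrat) sampling, followed by a one-way ANOVA across regions. The lesion counts, region names, and quadrat size are hypothetical.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical lesion counts per quadrat (plot sampling) in three regions.
region_a = np.array([12, 15, 9, 14, 11])
region_b = np.array([22, 19, 25, 21, 24])
region_c = np.array([5, 7, 4, 6, 8])

quadrat_area_mm2 = 0.25                     # hypothetical quadrat size
for name, counts in [("A", region_a), ("B", region_b), ("C", region_c)]:
    print(f"region {name}: {counts.mean() / quadrat_area_mm2:.1f} lesions/mm²")

# One-way ANOVA: do mean lesion densities differ between regions?
f_stat, p_val = f_oneway(region_a, region_b, region_c)
print(f"F = {f_stat:.2f}, p = {p_val:.4g}")
```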

Relevance: 40.00%

Abstract:

We have developed a new technique for extracting histological parameters from multi-spectral images of the ocular fundus. The new method uses a Monte Carlo simulation of the reflectance of the fundus to model how the spectral reflectance of the tissue varies with differing tissue histology. The model is parameterised by the concentrations of the five main absorbers found in the fundus: retinal haemoglobins, choroidal haemoglobins, choroidal melanin, RPE melanin and macular pigment. These parameters are shown to give rise to distinct variations in the tissue colouration. We use the results of the Monte Carlo simulations to construct an inverse model which maps tissue colouration onto the model parameters. This allows the concentration and distribution of the five main absorbers to be determined from suitable multi-spectral images. We propose the use of "image quotients" to allow this information to be extracted from uncalibrated image data. The filters used to acquire the images are selected to ensure a one-to-one mapping between model parameters and image quotients. To recover five model parameters uniquely, images must be acquired in six distinct spectral bands. Theoretical investigations suggest that retinal haemoglobins and macular pigment can be recovered with RMS errors of less than 10%. We present parametric maps showing the variation of these parameters across the posterior pole of the fundus. The results are in agreement with known tissue histology for normal healthy subjects. We also present an early result which suggests that, with further development, the technique could be used to successfully detect retinal haemorrhages.
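
The "image quotient" idea can be illustrated with a short numpy sketch: per-pixel ratios of spectral bands cancel a common multiplicative illumination/gain factor, which is why they can be formed from uncalibrated data. The band images below are random placeholders, and the adjacent-band pairing is one plausible choice, not necessarily the filter selection used in the paper.

```python
import numpy as np

# Placeholder stack of six co-registered spectral-band images, shape (H, W, 6).
rng = np.random.default_rng(1)
bands = rng.uniform(0.1, 1.0, size=(64, 64, 6))

# A per-pixel multiplicative gain (e.g. uneven illumination) cancels in
# the ratio, so quotients can be formed from uncalibrated images.
gain = rng.uniform(0.5, 1.5, size=(64, 64, 1))
quotients = (bands * gain)[..., 1:] / (bands * gain)[..., :-1]

# Six bands give five independent quotients — one per model parameter.
print(quotients.shape)                      # (64, 64, 5)
assert np.allclose(quotients, bands[..., 1:] / bands[..., :-1])
```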

Relevance: 40.00%

Abstract:

A new LIBS quantitative analysis method based on adaptive analytical-line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward to overcome the drawback of heavy dependence on a priori knowledge. Candidate analytical lines are selected automatically based on the built-in characteristics of the spectral lines, such as spectral intensity, wavelength, and width at half height. The analytical lines used as input variables of the regression model are then determined adaptively according to the samples used for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of a confidence interval over a probability distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks, and the standard support vector machine.
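
scikit-learn has no RVM implementation, so the sketch below substitutes ARDRegression, a closely related sparse Bayesian regressor, to illustrate how models of this family return predictions with an uncertainty estimate. The training data are synthetic stand-ins for analytical-line intensities and certified concentrations.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

# Synthetic stand-in data: intensities of four selected analytical lines
# (columns) for 23 certified samples (rows) vs. known Cr concentrations.
rng = np.random.default_rng(2)
X_train = rng.uniform(0.0, 1.0, size=(23, 4))
y_train = X_train @ np.array([2.0, 0.5, 1.2, 0.1]) + rng.normal(0, 0.05, 23)

# Sparse Bayesian regression: a stand-in for the paper's RVM.
model = ARDRegression().fit(X_train, y_train)

# The predictive distribution yields a mean and standard deviation, from
# which a confidence interval on each predicted concentration follows.
X_new = rng.uniform(0.0, 1.0, size=(3, 4))
mean, std = model.predict(X_new, return_std=True)
for m, s in zip(mean, std):
    print(f"predicted concentration: {m:.3f} ± {1.96 * s:.3f} (95% CI)")
```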

Relevance: 40.00%

Abstract:

Data fluctuation across multiple measurements in Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on a Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or standardizing the spectra over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation based on the statistical distribution of the measured spectral data. Through the improved segmented weighting function, information from spectral data within the normal distribution is retained in the regression model while information from outliers is down-weighted or removed. Copper concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained with the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM), and WLS-SVM. The improved weighting function was also shown to have better overall performance in model robustness and convergence speed than four existing weighting functions.
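
The numpy sketch below shows the general mechanics of a weighted LS-SVM: solve the standard LS-SVM dual system, derive robust weights from the residual distribution, and re-solve. It uses the classic Suykens-style three-segment weighting rather than the paper's improved segmented function, and all data and hyperparameters are invented.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian RBF kernel matrix between row-sample matrices A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def ls_svm_solve(K, y, gamma, v):
    # LS-SVM dual system: [[0, 1ᵀ], [1, K + diag(1/(γ·v))]] [b; α] = [0; y]
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * v))
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                  # bias b, dual coefficients α

rng = np.random.default_rng(3)
X = rng.uniform(0.0, 1.0, size=(16, 3))     # invented line intensities
y = X @ np.array([1.5, 0.8, 0.3]) + rng.normal(0, 0.02, 16)
y[4] += 0.5                                 # one outlier measurement

K, gamma = rbf_kernel(X, X), 100.0
b, alpha = ls_svm_solve(K, y, gamma, np.ones(16))   # unweighted first pass

# Robust weights from the residual distribution (Suykens-style segments);
# the paper's improved segmented function refines this step.
e = alpha / gamma                           # LS-SVM residuals e_i = α_i/γ
s = 1.483 * np.median(np.abs(e - np.median(e)))     # robust scale (MAD)
r = np.abs(e / s)
v = np.where(r <= 2.5, 1.0, np.where(r <= 3.0, (3.0 - r) / 0.5, 1e-4))
b, alpha = ls_svm_solve(K, y, gamma, v)     # weighted re-fit
print(np.round(K @ alpha + b, 3))           # in-sample predictions
```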

Relevance: 40.00%

Abstract:

Cloud computing is a new technological paradigm offering computing infrastructure, software, and platforms as a pay-as-you-go, subscription-based service. Many potential customers of cloud services require essential cost assessments to be undertaken before transitioning to the cloud. Current assessment techniques are imprecise because they rely on simplified specifications of resource requirements that fail to account for probabilistic variations in usage. In this paper, we address these problems and propose a new probabilistic pattern modelling (PPM) approach to cloud costing and resource usage verification. Our approach is based on a concise expression of probabilistic resource usage patterns translated into Markov decision processes (MDPs). Key costing and usage queries are identified, expressed in a probabilistic variant of temporal logic, and evaluated to a high degree of precision using quantitative verification techniques. The PPM cost assessment approach has been implemented as a Java library and validated with a case study and scalability experiments. © 2012 Springer-Verlag Berlin Heidelberg.
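
As a toy illustration of the MDP side of this approach, the sketch below computes the minimal expected cumulative cost of a two-state usage pattern by value iteration; quantitative verification tools such as PRISM solve the same fixed point for queries expressed in temporal logic. All states, actions, costs, and probabilities are invented.

```python
# Toy MDP for a resource-usage pattern: run until the workload is "done".
# Each action maps to (step_cost, [(prob, next_state), ...]).
mdp = {
    "running": {
        "small_vm": (1.0, [(0.75, "running"), (0.25, "done")]),
        "large_vm": (3.0, [(0.40, "running"), (0.60, "done")]),
    },
    "done": {},                             # terminal: nothing left to pay
}

# Value iteration for the minimal expected cumulative cost to completion;
# probabilistic model checkers solve the same Bellman fixed point.
V = {s: 0.0 for s in mdp}
for _ in range(200):
    for s, actions in mdp.items():
        if actions:
            V[s] = min(cost + sum(p * V[t] for p, t in trans)
                       for cost, trans in actions.values())
print(V)   # V["running"] converges to 4.0 (always choosing small_vm)
```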

Relevance: 40.00%

Abstract:

Quantitative analysis of solid-state processes from isothermal microcalorimetric data is straightforward if data for the total process have been recorded, and problematic (the more likely case) when they have not. Data are usually plotted as a function of the fraction reacted (α); for calorimetric data, this requires knowledge of the total heat change (Q) upon completion of the process. Determining Q is difficult when the process is fast (initial data missing) or slow (final data missing). Here we introduce several mathematical methods that allow Q to be calculated directly from a selection of data points when only partial data are present, based on analysis with the Pérez-Maqueda model. All the methods additionally allow direct determination of the reaction mechanism descriptors m and n, and from these the rate constant k. The validity of the methods is tested with simulated calorimetric data, and we introduce a graphical method for generating solid-state power-time data. The methods are then applied to the crystallization of indomethacin from a glass. All methods correctly recovered the total reaction enthalpy (16.6 J) and suggested that the crystallization followed an Avrami model. The rate constants for crystallization were determined to be 3.98 × 10⁻⁶, 4.13 × 10⁻⁶, and 3.98 × 10⁻⁶ s⁻¹ with methods 1, 2, and 3, respectively. © 2010 American Chemical Society.
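
A minimal scipy sketch of the underlying idea, using the Avrami form that the indomethacin data followed: with α(t) = 1 − exp(−(kt)ⁿ), the calorimetric power is Φ(t) = Q·dα/dt, so Q, k, and n can be fitted directly from a partial power-time curve. This is an ordinary nonlinear least-squares fit, not the paper's Pérez-Maqueda-based methods, and the simulated data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami_power(t, Q, k, n):
    # Power for Avrami kinetics: Φ(t) = Q·dα/dt with α(t) = 1 − exp(−(k·t)**n),
    # so dα/dt = n·kⁿ·t^(n−1)·exp(−(k·t)ⁿ).
    return Q * n * k**n * t**(n - 1) * np.exp(-(k * t)**n)

# Simulated partial power-time data (Q = 16.6 J, k = 4e-6 1/s, n = 3);
# the early part of the process is deliberately missing.
t = np.linspace(5e4, 4e5, 200)              # time window, s
rng = np.random.default_rng(4)
power = avrami_power(t, 16.6, 4.0e-6, 3.0) + rng.normal(0, 1e-8, t.size)

# Fitting Q, k and n from the partial curve recovers the total heat
# without data for the complete process.
popt, _ = curve_fit(avrami_power, t, power, p0=(10.0, 2e-6, 2.5))
print(f"Q = {popt[0]:.2f} J, k = {popt[1]:.2e} 1/s, n = {popt[2]:.2f}")
```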