16 results for Statistical modeling technique

in Aston University Research Archive


Relevance:

80.00%

Publisher:

Abstract:

This paper reports on the results of research into the connections between transaction attributes and buyer-supplier relationships (BSRs) in advanced manufacturing technology (AMT) acquisition and implementation. The investigation began by examining the impact of different patterns of BSR on the performance of the AMT acquisition. To understand these phenomena, the study drew upon and integrated the literatures of transaction cost economics, BSRs, and AMT, and used this as the basis for a theoretical framework and hypothesis development. This framework was then empirically tested using data gathered through a questionnaire survey of 147 companies and analyzed using a structural equation modeling technique. The results of the analysis indicated that the higher the level of technological specificity and uncertainty, the more likely firms are to engage in a stronger relationship with technology suppliers. However, the complexity of the technology being implemented was associated with BSR only indirectly, through its association with the level of uncertainty (which has a direct impact upon BSR). The analysis also provided strong support for the premise that developing a strong BSR can lead to improved performance in acquiring and implementing AMT. Implications of the study are offered for both academic and practitioner audiences.
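As a minimal illustration of the mediation structure reported here (complexity affecting BSR only through uncertainty), the following sketch estimates direct and indirect path coefficients on simulated data. The variable names and data-generating process are assumptions for illustration, not the study's data or its full structural equation model.

```python
# Minimal path-analysis sketch (not the authors' SEM): estimates the
# direct and indirect (via uncertainty) effects of technology complexity
# on buyer-supplier relationship (BSR) strength, using simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 147  # sample size matching the survey in the abstract

complexity = rng.normal(size=n)
specificity = rng.normal(size=n)
# Hypothetical data-generating process: complexity raises uncertainty,
# and uncertainty (plus specificity) drives BSR strength.
uncertainty = 0.6 * complexity + rng.normal(scale=0.8, size=n)
bsr = 0.5 * uncertainty + 0.4 * specificity + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Least-squares slopes of y on the given predictors (with intercept)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

(a,) = ols(uncertainty, complexity)  # path: complexity -> uncertainty
b, c_direct, d = ols(bsr, uncertainty, complexity, specificity)

print(f"indirect effect of complexity on BSR (a*b): {a * b:.3f}")
print(f"direct effect of complexity on BSR:         {c_direct:.3f}")  # ~0 here
print(f"effect of specificity on BSR:               {d:.3f}")
```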

Relevance:

80.00%

Publisher:

Abstract:

The research set out to test three main hypotheses derived from a summary of literature relevant to the use of audiometry in industry. These hypotheses were: (1) performing audiometry increases the probability that hearing protectors, once issued, will be worn; (2) audiometry is considered by workers to be evidence of their employer's concern for their welfare; (3) audiometry is associated with common law claims by workers against employers for alleged occupational deafness. Six subsidiary hypotheses were also developed. Four methods of data collection were used: (1) attitude questionnaires were administered to samples of workers drawn from an industrial company performing audiometry and two industrial companies not performing audiometry; (2) a postal questionnaire was sent out to industrial medical officers; (3) surveys were undertaken to assess the proportion of the workforce in each of eight industrial companies that was wearing personal hearing protectors that had been provided; (4) structured interviews were carried out with relevant management level personnel in each of five industrial companies. Factor analysis was the main statistical analytic technique used. The data supported all three main hypotheses. Audiometry was also examined as an example of medical screening procedure. It was argued that the validation of medical screening procedures requires the satisfaction of attitudinal or motivational validation criteria in addition to the biological and economic criteria currently used. It was concluded that industrial audiometry failed to satisfy such attitudinal or motivational criteria and so should not be part of a programme of screening for occupational deafness. It was also concluded that industrial audiometry may be useful in creating awareness, amongst workers, of occupational deafness. It was argued that the only profitable approach to investigating the role of audiometry in preventing occupational deafness is to study the attitudes and perceptions of everyone involved.
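A minimal sketch of the factor-analysis step follows, applied to simulated questionnaire items; the two-factor structure and item counts are assumptions for illustration, not the study's data.

```python
# Exploratory factor analysis of attitude-questionnaire items
# (illustrative only; the item data here are simulated).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
n_workers, n_items = 300, 8

# Simulate two latent attitude factors loading on eight questionnaire items.
latent = rng.normal(size=(n_workers, 2))
loadings = rng.uniform(0.4, 0.9, size=(2, n_items))
items = latent @ loadings + rng.normal(scale=0.5, size=(n_workers, n_items))

fa = FactorAnalysis(n_components=2, random_state=0).fit(items)
print("estimated loadings (factors x items):")
print(np.round(fa.components_, 2))
```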

Relevance:

80.00%

Publisher:

Abstract:

Markets are dynamic by nature, and marketing efforts can be directed to stimulate, reduce, or utilize these dynamics. The field of marketing dynamics aims at modeling the effects of marketing actions and policies on short-term performance (“lift”) and on long-term performance (“base”). One of the core questions within this field is: “How do marketing efforts affect outcome metrics such as revenues, profits, or shareholder value over time?” Developments in statistical modeling and new data sources allow marketing scientists to provide increasingly comprehensive answers to this question. We present an outlook on developments in modeling marketing dynamics and specify research directions.
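A standard way to make the lift/base distinction concrete is a Koyck/adstock carryover model, in which the immediate response to a marketing pulse is the lift and the geometric carryover determines the long-run base contribution. The sketch below is a textbook construction with assumed parameter values, not a model from this paper.

```python
# Toy Koyck/adstock model: the short-term "lift" of a marketing pulse is
# beta, while the long-run cumulative effect approaches beta / (1 - lam)
# as the horizon grows. Illustrative values only.
import numpy as np

lam, beta = 0.7, 2.0          # carryover rate and immediate effect (assumed)
x = np.zeros(20); x[2] = 1.0  # a single unit ad pulse in week 2

adstock = np.zeros_like(x)
for t in range(len(x)):
    adstock[t] = x[t] + (lam * adstock[t - 1] if t > 0 else 0.0)

sales_effect = beta * adstock
print(f"short-term lift in pulse week: {sales_effect[2]:.2f}")
print(f"long-run total effect:         {sales_effect.sum():.2f}")
print(f"theoretical beta/(1-lam):      {beta / (1 - lam):.2f}")
```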

Relevance:

80.00%

Publisher:

Abstract:

Building on the evolution of brands and the importance of self-expression, Aaker (1997) developed the brand personality framework as a means to understand brand-consumer relationships. The brand personality framework captures the core values and characteristics described in human personality research in an attempt to humanize brands. Although influential across many streams of brand personality research, the current conceptualization of brand personality offers only a positively framed approach. To date, no research, whether conceptual or empirical, has thoroughly incorporated factors reflective of Negative Brand Personality, despite the fact that almost all personality researchers agree that factors akin to Extraversion (positive) and Neuroticism (negative) should both appear in a comprehensive personality scale to accommodate consumers' expressions. As a result, the study of brand personality is only half complete, since the current research trend is to position brand personality under brand image. Moreover, with the brand personality concept being confused with brand identity at the empirical stage, factors reflective of Negative Brand Personality have been neglected. Accordingly, this thesis extends the current conceptualization of brand personality by demarcating the existing typologies of desirable brand personality and incorporating characteristics reflective of consumers' discrepant self-meaning, to provide a more complete understanding of brand personality. It is not enough to interpret negative factors as the mere absence of positive factors: negative factors reflect consumers' anxious and frustrated feelings. This thesis therefore contributes to the conceptualization of brand personality by, first, presenting a conceptual definition of Negative Brand Personality in order to provide a theoretical basis for developing a Negative Brand Personality scale; second, identifying what constitutes Negative Brand Personality and to what extent consumers' cognitive dissonance explains its nature; and, third, ascertaining the impact Negative Brand Personality has on attitudinal constructs, namely Negative Attitude, Detachment, Brand Loyalty, and Satisfaction, which have been shown to predict behaviors such as choice and (re-)purchasing. To deliver on these three contributions, two comprehensive studies were conducted to (a) develop a valid, parsimonious, yet relatively short measure of Negative Brand Personality and (b) ascertain how the Negative Brand Personality measure behaves within a network of related constructs. The mixed-methods approach, grounded in theoretical and empirical development, provides evidence that there are four factors to Negative Brand Personality and, tested using a structural equation modeling technique, that these are influenced by Brand Confusion, Price Unfairness, Self-Incongruence, and Corporate Hypocrisy. Negative Brand Personality factors mainly determined consumers' Negative Attitudes and Brand Detachment. The research contributes to the brand personality literature by showing how engaging in a brand-consumer conversation can reduce consumers' cognitive strain and so improve the consumer-brand relationship. The study concludes with a discussion of the theoretical and practical implications of the findings, its limitations, and potential directions for future research.
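For the scale-development step, a conventional internal-consistency check is Cronbach's alpha. The sketch below computes it on simulated ratings; the item count and data are hypothetical, not the thesis's measure.

```python
# Cronbach's alpha, a standard internal-consistency statistic used when
# developing scales such as the Negative Brand Personality measure
# described above (data and item count here are simulated).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x scale-items matrix of ratings."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
trait = rng.normal(size=(200, 1))                       # one latent factor
ratings = trait + rng.normal(scale=0.7, size=(200, 6))  # six correlated items
print(f"alpha = {cronbach_alpha(ratings):.2f}")         # high -> consistent scale
```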

Relevance:

50.00%

Publisher:

Abstract:

The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r2 and SEE ranged between 0.98 and 0.995, and between 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
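The cross-validated statistics quoted above (q2, SEP) can be computed for any PLS model as sketched below on simulated binding data. This illustrates only the leave-one-out evaluation procedure, not the authors' ISC-PLS implementation.

```python
# Leave-one-out cross-validation statistics (q2, SEP) for a PLS model,
# computed on simulated data; descriptors stand in for amino-acid terms.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 20))                # e.g. amino-acid indicator terms
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.3, size=60)

pls = PLSRegression(n_components=3)
y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()

press = np.sum((y - y_loo) ** 2)             # predictive residual sum of squares
tss = np.sum((y - y.mean()) ** 2)
q2 = 1 - press / tss
sep = np.sqrt(press / len(y))                # standard error of prediction
print(f"q2 = {q2:.3f}, SEP = {sep:.3f}")
```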

Relevance:

40.00%

Publisher:

Abstract:

For the first time, we report full numerical NLSE-based modeling of the generation properties of a random distributed feedback (DFB) fiber laser based on Rayleigh scattering. The model, which takes the random backscattering into account only via its average strength, describes well the power and spectral properties of random DFB fiber lasers. The influence of dispersion and nonlinearity on spectral and statistical properties is investigated. Evidence of non-Gaussian intensity statistics is found. © 2013 Optical Society of America.
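The numerical backbone of such NLSE-based modeling is typically a split-step Fourier integrator. Below is a minimal sketch for the bare scalar NLSE only; the grid and fiber parameters are assumed for illustration, and the paper's full laser model additionally includes Raman gain and distributed Rayleigh feedback, which are omitted here.

```python
# Minimal split-step Fourier propagator for the scalar NLSE
#   dA/dz = -i*(beta2/2)*d^2A/dt^2 + i*gamma*|A|^2*A
# Generic numerical core only; gain and random backscattering are omitted.
import numpy as np

rng = np.random.default_rng(9)
nt, t_span = 1024, 100.0                  # grid points, time window (ps)
t = np.linspace(-t_span / 2, t_span / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t_span / nt)  # angular frequencies

beta2, gamma = -20e-3, 1.3e-3             # ps^2/m, 1/(W*m); assumed values
dz, n_steps = 1.0, 1000                   # step size (m), number of steps

A = np.exp(-t**2 / 8) * (1 + 0.01 * rng.normal(size=nt))  # noisy pulse seed
half_disp = np.exp(1j * (beta2 / 2) * w**2 * dz / 2)      # linear half-step

for _ in range(n_steps):
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # dispersion, half step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)      # nonlinearity, full step
    A = np.fft.ifft(half_disp * np.fft.fft(A))       # dispersion, half step

print(f"output peak power: {np.max(np.abs(A))**2:.3f} (norm. units)")
```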

Relevance:

40.00%

Publisher:

Abstract:

A set of 38 epitopes and 183 non-epitopes, which bind to alleles of the HLA-A3 supertype, was subjected to a combination of comparative molecular similarity indices analysis (CoMSIA) and soft independent modeling of class analogy (SIMCA). During the process of T cell recognition, T cell receptors (TCR) interact with the central section of the bound nonamer peptide; thus only positions 4-8 were considered in the study. The derived model distinguished 82% of the epitopes and 73% of the non-epitopes after cross-validation in five groups. The overall preference from the model is for polar amino acids with high electron density and the ability to form hydrogen bonds. These so-called “aggressive” amino acids are flanked by small-sized residues, which enable such residues to protrude from the binding cleft and take an active role in TCR-mediated T cell recognition. Combinations of “aggressive” and “passive” amino acids in the middle part of epitopes constitute a putative TCR binding motif.
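A minimal sketch of the SIMCA idea follows: fit a separate PCA model per class and assign new samples to the class whose subspace reconstructs them with the smaller residual. The descriptors and class structure are simulated for illustration, not the study's peptide data.

```python
# SIMCA-style classification: one PCA model per class (epitope vs.
# non-epitope), assignment by reconstruction residual. Data simulated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Two classes living near different low-dimensional subspaces.
A = rng.normal(size=(80, 3)) @ rng.normal(size=(3, 10))
B = rng.normal(size=(80, 3)) @ rng.normal(size=(3, 10)) + 1.5

models = {name: PCA(n_components=3).fit(data)
          for name, data in {"epitope": A, "non-epitope": B}.items()}

def residual(model: PCA, x: np.ndarray) -> float:
    """Reconstruction error of x under one class's PCA model."""
    x_hat = model.inverse_transform(model.transform(x[None, :]))
    return float(np.sum((x - x_hat[0]) ** 2))

x_new = A[0] + rng.normal(scale=0.1, size=10)
scores = {name: residual(m, x_new) for name, m in models.items()}
print("assigned class:", min(scores, key=scores.get), scores)
```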

Relevance:

30.00%

Publisher:

Abstract:

Receptor activity modifying proteins (RAMPs) are a family of single-pass transmembrane proteins that dimerize with G-protein-coupled receptors. They may alter the ligand recognition properties of the receptors (particularly for the calcitonin receptor-like receptor, CLR). Very little structural information is available about RAMPs. Here, an ab initio model has been generated for the extracellular domain of RAMP1. The disulfide bond arrangement (Cys27-Cys82, Cys40-Cys72, and Cys57-Cys104) was determined by site-directed mutagenesis. The secondary structure (α-helices from residues 29-51, 60-80, and 87-100) was established from a consensus of predictive routines. Using these constraints, an assemblage of 25,000 structures was constructed and these were ranked using an all-atom statistical potential. The best 1000 conformations were energy minimized. The lowest scoring model was refined by molecular dynamics simulation. To validate our strategy, the same methods were applied to three proteins of known structure; PDB:1HP8, PDB:1V54 chain H (residues 21-85), and PDB:1T0P. When compared to the crystal structures, the models had root-mean-square deviations of 3.8 Å, 4.1 Å, and 4.0 Å, respectively. The model of RAMP1 suggested that Phe93, Tyr100, and Phe101 form a binding interface for CLR, whereas Trp74 and Phe92 may interact with ligands that bind to the CLR/RAMP1 heterodimer. © 2006 by the Biophysical Society.
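Model-versus-crystal deviations like the 3.8-4.1 Å figures above are conventionally computed by Kabsch superposition followed by RMSD. Below is a self-contained sketch on random stand-in coordinates, not the paper's structures.

```python
# Kabsch superposition and RMSD between two point sets (e.g. CA atoms).
import numpy as np

def kabsch_rmsd(P: np.ndarray, Q: np.ndarray) -> float:
    """RMSD of point sets P, Q (n x 3) after optimal rigid superposition."""
    P = P - P.mean(axis=0)
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)
    d = np.sign(np.linalg.det(V @ Wt))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = V @ D @ Wt                       # optimal rotation (row-vector form)
    return float(np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1))))

rng = np.random.default_rng(5)
model = rng.normal(size=(90, 3))
theta = np.pi / 6                        # rotate the "crystal" copy by 30 deg
Rz = np.array([[np.cos(theta), -np.sin(theta), 0],
               [np.sin(theta),  np.cos(theta), 0],
               [0, 0, 1]])
crystal = model @ Rz + rng.normal(scale=0.5, size=(90, 3))
print(f"RMSD = {kabsch_rmsd(model, crystal):.2f}")
```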

Relevance:

30.00%

Publisher:

Abstract:

Modern digital communication systems achieve reliable transmission by employing error-correcting techniques that add redundancy. Low-density parity-check (LDPC) codes work along the principles of the Hamming code, but their parity-check matrix is very sparse and multiple errors can be corrected. The sparseness of the matrix allows the decoding process to be carried out by probability propagation methods similar to those employed in turbo codes. The relation between spin systems in statistical physics and digital error-correcting codes is based on the existence of a simple isomorphism between the additive Boolean group and the multiplicative binary group. Shannon proved general results on the natural limits of compression and error correction by setting up the framework known as information theory. Error-correcting codes are based on mapping the original space of words onto a higher-dimensional space in such a way that the typical distance between encoded words increases.
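As a concrete miniature of parity-check decoding, the sketch below implements the classical Hamming (7,4) code, whose single-error syndrome decoding LDPC codes generalize with much sparser matrices. The example codeword and error position are arbitrary.

```python
# Hamming (7,4) syndrome decoding: the syndrome of a one-bit error reads
# off the error position directly, because column i of H is i in binary.
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

codeword = np.array([1, 0, 1, 1, 0, 1, 0])   # a valid codeword: H c = 0 mod 2
assert not (H @ codeword % 2).any()

received = codeword.copy()
received[4] ^= 1                             # flip one bit (channel error)

syndrome = H @ received % 2                  # equals the flipped column of H
pos = int("".join(map(str, syndrome)), 2) - 1
received[pos] ^= 1                           # correct the flipped bit
print("decoded ok:", np.array_equal(received, codeword))
```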

Relevance:

30.00%

Publisher:

Abstract:

We investigate the use of Gallager's low-density parity-check (LDPC) codes in a degraded broadcast channel, one of the fundamental models in network information theory. Combining linear codes is a standard technique in practical network communication schemes and is known to provide better performance than simple time-sharing methods when algebraic codes are used. The statistical-physics-based analysis shows that the practical performance of the suggested method, achieved by employing the belief propagation algorithm, is superior to that of LDPC-based time-sharing codes, while the best performance, when received transmissions are optimally decoded, is bounded by the time-sharing limit.
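For intuition about the time-sharing baseline mentioned above, the following sketch computes the rate pairs achievable by simply alternating between the two users, modeled here (as an assumption, not from the paper) as two binary symmetric channels of different noise levels.

```python
# Time-sharing baseline for a two-user broadcast setting: user 1 is
# served a fraction t of the time at its own channel capacity, user 2
# the remaining 1-t. Channel parameters are illustrative.
import numpy as np

def h2(p: float) -> float:
    """Binary entropy in bits."""
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

p_good, p_bad = 0.05, 0.15          # the degraded user sees a noisier channel
c1, c2 = 1 - h2(p_good), 1 - h2(p_bad)

for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"t = {t:.2f}: R1 = {t * c1:.3f}, R2 = {(1 - t) * c2:.3f} bits/use")
```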

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE. A methodology for noninvasively characterizing the three-dimensional (3-D) shape of the complete human eye is not currently available for research into ocular diseases that have a structural substrate, such as myopia. A novel application of a magnetic resonance imaging (MRI) acquisition and analysis technique is presented that, for the first time, allows the 3-D shape of the eye to be investigated fully. METHODS. The technique involves the acquisition of a T2-weighted MRI, which is optimized to reveal the fluid-filled chambers of the eye. Automatic segmentation and meshing algorithms generate a 3-D surface model, which can be shaded with morphologic parameters such as distance from the posterior corneal pole and deviation from sphericity. Full details of the method are illustrated with data from 14 eyes of seven individuals. The spatial accuracy of the calculated models is demonstrated by comparing the MRI-derived axial lengths with values measured in the same eyes using interferometry. RESULTS. The color-coded eye models showed substantial variation in the absolute size of the 14 eyes. Variations in the sphericity of the eyes were also evident, with some appearing approximately spherical, others clearly oblate, and one slightly prolate. Nasal-temporal asymmetries were noted in some subjects. CONCLUSIONS. The MRI acquisition and analysis technique allows a novel way of examining 3-D ocular shape. The ability to stratify and analyze eye shape, ocular volume, and sphericity will further extend the understanding of which specific biometric parameters predispose emmetropic children subsequently to develop myopia. Copyright © Association for Research in Vision and Ophthalmology.
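As an illustration of one way a "deviation from sphericity" map can be computed, the sketch below fits a least-squares sphere to synthetic surface points of a slightly oblate globe. This is a generic construction with assumed dimensions, not the paper's segmentation pipeline or its MRI data.

```python
# Least-squares sphere fit to surface points; signed radial deviation from
# the fitted sphere quantifies departure from sphericity. Synthetic data.
import numpy as np

rng = np.random.default_rng(6)
u = rng.uniform(0, 2 * np.pi, 2000)
v = np.arccos(rng.uniform(-1, 1, 2000))
pts = np.column_stack([12 * np.sin(v) * np.cos(u),   # semi-axes in mm
                       12 * np.sin(v) * np.sin(u),
                       11 * np.cos(v)])               # shorter axial axis

# ||x||^2 = 2 c.x + (r^2 - ||c||^2) is linear in (c, r^2 - ||c||^2).
A = np.column_stack([2 * pts, np.ones(len(pts))])
b = np.sum(pts**2, axis=1)
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
center = sol[:3]
r = np.sqrt(sol[3] + np.sum(center ** 2))

dev = np.linalg.norm(pts - center, axis=1) - r        # signed deviation
print(f"fitted radius {r:.2f} mm; deviation range "
      f"[{dev.min():.2f}, {dev.max():.2f}] mm")
```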

Relevance:

30.00%

Publisher:

Abstract:

The topic of this thesis is the development of knowledge based statistical software. The shortcomings of conventional statistical packages are discussed to illustrate the need to develop software which is able to exhibit a greater degree of statistical expertise, thereby reducing the misuse of statistical methods by those not well versed in the art of statistical analysis. Some of the issues involved in the development of knowledge based software are presented and a review is given of some of the systems that have been developed so far. The majority of these have moved away from conventional architectures by adopting what can be termed an expert systems approach. The thesis then proposes an approach which is based upon the concept of semantic modelling. By representing some of the semantic meaning of data, it is conceived that a system could examine a request to apply a statistical technique and check whether the use of the chosen technique was semantically sound, i.e., whether the results obtained would be meaningful. Current systems, in contrast, can only perform what can be considered as syntactic checks. The prototype system that has been implemented to explore the feasibility of such an approach is presented; it has been designed as an enhanced variant of a conventional-style statistical package. This involved developing a semantic data model to represent some of the statistically relevant knowledge about data and identifying sets of requirements that should be met for the application of the statistical techniques to be valid. Those areas of statistics covered in the prototype are measures of association and tests of location.
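A sketch of a "semantic" validity check of the kind the thesis proposes follows: the system refuses a technique whose assumptions the data's measurement level cannot meet. The technique names, levels, and rules here are illustrative, not the prototype's actual design.

```python
# Semantic (not merely syntactic) checking: reject a statistical request
# when a variable's measurement level cannot support the technique.
from enum import Enum

class Level(Enum):
    NOMINAL = 1
    ORDINAL = 2
    INTERVAL = 3

# Minimum measurement level each technique meaningfully requires
# (illustrative rules only).
REQUIRES = {"pearson_correlation": Level.INTERVAL,
            "spearman_correlation": Level.ORDINAL,
            "chi_square_association": Level.NOMINAL}

def check_request(technique: str, levels: list[Level]) -> None:
    needed = REQUIRES[technique]
    for lvl in levels:
        if lvl.value < needed.value:
            raise ValueError(f"{technique} is not meaningful for "
                             f"{lvl.name.lower()} data")
    print(f"{technique}: semantically sound for the given variables")

check_request("spearman_correlation", [Level.ORDINAL, Level.INTERVAL])
try:
    check_request("pearson_correlation", [Level.NOMINAL, Level.INTERVAL])
except ValueError as err:
    print("rejected:", err)
```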

Relevance:

30.00%

Publisher:

Abstract:

Inference and optimisation of real-valued edge variables in sparse graphs are studied using tree-based Bethe-approximation optimisation algorithms. Equilibrium states of general energy functions involving a large set of real edge variables that interact at the network nodes are obtained for networks in various cases. These include different cost functions, connectivity values, constraints on the edge bandwidth, and the case of multiclass optimisation.

Relevance:

30.00%

Publisher:

Abstract:

Microfluidics has recently emerged as a new method of manufacturing liposomes, allowing reproducible mixing in milliseconds on the nanoliter scale. Here we investigate microfluidics-based manufacturing of liposomes. The aim of these studies was to assess the parameters in a microfluidic process by varying the total flow rate (TFR) and the flow rate ratio (FRR) of the solvent and aqueous phases. Design of experiments and multivariate data analysis were used to increase process understanding and to develop predictive and correlative models. A high FRR led to the bottom-up synthesis of liposomes and correlated strongly with vesicle size, demonstrating the ability to control liposome size in-process; liposomes of 50 nm were reproducibly manufactured. Furthermore, we demonstrate the potential of high-throughput manufacturing of liposomes using microfluidics, with a four-fold increase in the volumetric flow rate maintaining liposome characteristics. The efficacy of these liposomes was demonstrated in transfection studies and modelled using predictive modelling. Mathematical modelling identified FRR as the key variable in the microfluidic process, with the highest impact on liposome size, polydispersity, and transfection efficiency. This study demonstrates microfluidics as a robust and high-throughput method for the scalable and highly reproducible manufacture of size-controlled liposomes. Furthermore, the application of statistically based process control increases understanding and allows for the generation of a design space for controlled particle characteristics.
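A two-factor full-factorial design-of-experiments sketch for such a process follows: fit a linear model of vesicle size in FRR and TFR and compare effect sizes. All levels, responses, and coefficients are simulated stand-ins, not the study's measurements.

```python
# Full-factorial DoE over FRR and TFR with a first-order response model.
import itertools
import numpy as np

frr_levels = [1.0, 3.0, 5.0]       # aqueous:solvent flow rate ratio (assumed)
tfr_levels = [2.0, 6.0, 10.0]      # total flow rate, mL/min (assumed)
runs = np.array(list(itertools.product(frr_levels, tfr_levels)))

rng = np.random.default_rng(7)
# Hypothetical response: size shrinks with FRR, weak TFR effect, noise.
size = (120 - 14 * runs[:, 0] - 0.5 * runs[:, 1]
        + rng.normal(scale=3, size=len(runs)))

X = np.column_stack([np.ones(len(runs)), runs])
coef, *_ = np.linalg.lstsq(X, size, rcond=None)
print(f"size ~= {coef[0]:.1f} + ({coef[1]:.1f})*FRR + ({coef[2]:.1f})*TFR nm")
# The much larger FRR coefficient mirrors its role as the key variable.
```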

Relevance:

30.00%

Publisher:

Abstract:

Recently, the temporal and statistical properties of quasi-CW fiber lasers have attracted great attention. In particular, the properties of Raman fiber lasers (RFLs) have been studied both numerically and experimentally [1,2]. Experimental investigation is more challenging, as the full generation optical bandwidth (typically hundreds of GHz for RFLs) is much broader than the real-time bandwidth of oscilloscopes (up to 60 GHz for the newest models). Experimentally measured time dynamics are therefore heavily bandwidth-averaged and do not provide precise information about the overall statistical properties. To overcome this, one can use the spectral filtering technique to study temporal and statistical properties within an optical bandwidth comparable to the measurement bandwidth [3], or use indirect measurements [4]. Ytterbium-doped fiber lasers (YDFLs) are more suitable for experimental investigation, as their generation spectrum is typically ten times narrower. Moreover, ultra-narrow-band generation has recently been demonstrated in a YDFL [5], which in principle makes it possible to measure time dynamics and statistics in real time using conventional oscilloscopes. © 2013 IEEE.
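The bandwidth-averaging problem described above can be illustrated numerically: low-pass filtering a fluctuating intensity trace pulls its measured statistics toward the mean. The sketch below uses a standard quasi-CW toy model (a complex Gaussian field, hence exponentially distributed instantaneous intensity); the trace length and filter fractions are assumed, not taken from the references.

```python
# Bandwidth averaging demo: restricting the measurement bandwidth
# suppresses the apparent intensity fluctuations of a quasi-CW source.
import numpy as np

rng = np.random.default_rng(8)
n = 2**16
# Complex Gaussian field -> exponentially distributed intensity.
field = (rng.normal(size=n) + 1j * rng.normal(size=n)) / np.sqrt(2)
intensity = np.abs(field) ** 2

def lowpass(x: np.ndarray, keep_fraction: float) -> np.ndarray:
    """Keep only the lowest `keep_fraction` of the spectrum (ideal filter)."""
    X = np.fft.rfft(x)
    cut = int(len(X) * keep_fraction)
    X[cut:] = 0
    return np.fft.irfft(X, n=len(x))

for frac in (1.0, 0.1, 0.01):        # 1.0 = full measurement bandwidth
    y = intensity if frac == 1.0 else lowpass(intensity, frac)
    print(f"bandwidth fraction {frac:>4}: std/mean = {y.std() / y.mean():.2f}")
```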