967 results for Partial Credit Model
Abstract:
The development of new products in today's marketing environment is generally accepted as a requirement for the continual growth and prosperity of organisations. The literature is consequently rich with information on the development of various aspects of goods products. In the case of service industries, it can be argued that new service product development is of at least equal importance as it is to organisations that produce tangible goods products. Unlike the new goods product literature, the literature on service marketing practices, and in particular new service product development, is relatively sparse. The main purpose of this thesis is to examine a number of aspects of new service product development practice with respect to financial services and, specifically, credit card financial services. The empirical investigation utilises both a case study and a survey approach to examine aspects of new service product development industry practice relating specifically to gaps and deficiencies in the literature with respect to the financial service industry. The findings of the empirical work are subsequently examined in terms of how they provide guidance and support for a new normative model of new service product development. The study examines the UK credit card financial service product sector as an industry case study and perspective. The findings of the field work reveal that the new service product development process is still evolving, and that in the case of credit card financial services it can be seen as a well-structured and well-documented process. New product development can also be seen as an incremental, complex, interactive and continuous process which has been applied in a variety of ways. A number of inferences are subsequently presented.
Abstract:
Semantic Web Services, one of the most significant research areas within the Semantic Web vision, have attracted increasing attention from both the research community and industry. The Web Service Modelling Ontology (WSMO) has been proposed as an enabling framework for the total/partial automation of the tasks (e.g., discovery, selection, composition, mediation, execution, monitoring, etc.) involved in both intra- and inter-enterprise integration of Web services. To support the standardisation and tool support of WSMO, a formal model of the language is highly desirable. As several variants of WSMO have been proposed by the WSMO community, and are still under development, the syntax and semantics of WSMO should be formally defined to facilitate easy reuse and future development. In this paper, we present a formal Object-Z model of WSMO, in which different aspects of the language are precisely defined within one unified framework. This model not only provides an unambiguous reference that can be used to develop tools and facilitate future development, but, as demonstrated in this paper, can also be used to identify and eliminate errors present in existing documentation.
Abstract:
This paper investigates the impact of HRM systems on organisational performance in a sample of 178 Greek manufacturing organisations. The results show strong support for the ‘universalistic’ model, highlighting that both resource-development and reward-relations systems are positively related with organisational performance. The results also show weak and partial support for the ‘contingency model’, i.e., resource-development and reward-relations systems are contingent on the business strategies of quality, innovation, and cost in determining organisational efficiency. The study concludes that the universalistic and contingency perspectives are not necessarily mutually exclusive but, on the contrary, are in some cases complementary.
Abstract:
We uncover high persistence in credit spread series that can obscure the relationship between the theoretical determinants of credit risk and observed credit spreads. We use a Markov-switching model, which also captures the stability (low-frequency changes) of credit ratings, to show why credit spreads may continue to respond to past levels of credit risk, even though the state of the economy has changed. A bivariate model of credit spreads and either macroeconomic activity or equity market volatility detects large and significant correlations that are consistent with theory but have not been observed in previous studies. © 2010 Nova Science Publishers, Inc. All rights reserved.
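The regime persistence at the heart of this abstract can be illustrated with a minimal simulation. The sketch below is a generic two-state Markov chain with a high stay probability, not the authors' estimated model; the state labels, the `p_stay` value, and the seed are illustrative assumptions.

```python
import random

def simulate_regimes(n, p_stay=0.95, seed=0):
    """Simulate a two-state Markov chain with high persistence.

    State 0 = low-spread regime, state 1 = high-spread regime.
    p_stay is the probability of remaining in the current state,
    so the chain changes regime only rarely (low-frequency switching).
    """
    rng = random.Random(seed)
    state = 0
    path = []
    for _ in range(n):
        path.append(state)
        if rng.random() > p_stay:  # rare regime switch
            state = 1 - state
    return path

path = simulate_regimes(1000)
# Count how often the chain actually changes regime.
switches = sum(path[i] != path[i + 1] for i in range(len(path) - 1))
```

With `p_stay = 0.95`, switches occur on roughly 5% of steps, so long stretches of the series sit in one regime even after the underlying economy has moved on, which is the persistence effect the abstract describes.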
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide−MHC binding affinity. The ISC−PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide−MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and included peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited a satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms - q2, SEP, and NC - ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r2 and SEE ranged between 0.98 and 0.995 and 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. 
The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
Abstract:
Motivation: The immunogenicity of peptides depends on their ability to bind to MHC molecules. MHC binding affinity prediction methods can save significant amounts of experimental work. The class II MHC binding site is open at both ends, making epitope prediction difficult because of the multiple binding ability of long peptides. Results: An iterative self-consistent partial least squares (PLS)-based additive method was applied to a set of 66 peptides no longer than 16 amino acids, binding to DRB1*0401. A regression equation containing the quantitative contributions of the amino acids at each of the nine positions was generated. Its predictability was tested using two external test sets, which gave r_pred = 0.593 and r_pred = 0.655, respectively. Furthermore, it was benchmarked using 25 known T-cell epitopes restricted by DRB1*0401, and we compared our results with four other online predictive methods. The additive method showed the best result, finding 24 of the 25 T-cell epitopes. Availability: Peptides used in the study are available from http://www.jenner.ac.uk/JenPep. The PLS method is available commercially in the SYBYL molecular modelling software package. The final model for affinity prediction of peptides binding to the DRB1*0401 molecule is available at http://www.jenner.ac.uk/MHCPred. Models developed for DRB1*0101 and DRB1*0701 are also available in MHCPred.
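The core of the additive method is a regression equation in which predicted binding affinity is the sum of position-specific amino-acid contributions. The sketch below shows only that scoring step; the coefficient values in `contributions` are hypothetical placeholders, not coefficients from the published model.

```python
# Hypothetical sketch of additive scoring: predicted affinity is the sum
# of per-position amino-acid contributions. The coefficients below are
# invented for illustration and do not come from MHCPred.
contributions = {
    (1, "Y"): 0.42, (1, "F"): 0.31,
    (2, "A"): -0.10, (2, "V"): 0.05,
    (3, "K"): 0.22, (3, "R"): 0.18,
}

def additive_score(peptide, table, default=0.0):
    """Sum the per-position contributions for a peptide (1-indexed positions).

    Amino acids with no entry in the table contribute `default`.
    """
    return sum(table.get((i, aa), default)
               for i, aa in enumerate(peptide, start=1))

score = additive_score("YAK", contributions)  # 0.42 - 0.10 + 0.22
```

In the published method the coefficients are fitted by PLS over a training set of binders, with the iterative self-consistent step choosing which subsequence of each long peptide the equation is applied to.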
Abstract:
A new LIBS quantitative analysis method based on analytical line adaptive selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines which will be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentration results are given in the form of a confidence interval of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples have been carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness compared with the methods based on partial least squares regression, artificial neural networks and the standard support vector machine.
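The line-selection step described above amounts to filtering candidate spectral lines on built-in characteristics such as intensity and width at half height. The sketch below shows that filtering idea only; the line records, field names, and threshold values are illustrative assumptions, not criteria taken from the paper.

```python
# Hypothetical sketch of adaptive analytical-line selection: keep only
# candidate lines that are strong enough and not excessively broadened.
# The thresholds and line records are illustrative, not the paper's values.
lines = [
    {"wavelength": 425.43, "intensity": 1800.0, "fwhm": 0.08},
    {"wavelength": 427.48, "intensity": 120.0,  "fwhm": 0.07},  # too weak
    {"wavelength": 428.97, "intensity": 2500.0, "fwhm": 0.45},  # too broad
    {"wavelength": 520.84, "intensity": 950.0,  "fwhm": 0.10},
]

def select_lines(candidates, min_intensity=500.0, max_fwhm=0.3):
    """Keep lines above an intensity floor and below a width ceiling."""
    return [ln for ln in candidates
            if ln["intensity"] >= min_intensity and ln["fwhm"] <= max_fwhm]

selected = select_lines(lines)
# selected wavelengths: 425.43 and 520.84
```

The surviving line intensities would then serve as the input variables of the regression model, which is what replaces hand-picked analytical lines chosen from a priori knowledge.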
Abstract:
Let v be an array. The range query problem concerns the design of data structures for implementing the following operations. The operation update(j,x) has the effect v_j ← v_j + x, and the query operation retrieve(i,j) returns the partial sum v_i + ... + v_j. These tasks are to be performed on-line. We define an algebraic model, based on the use of matrices, for the study of the problem. In this paper we also establish a lower bound for the sum of the average complexity of both kinds of operations, and demonstrate that this lower bound is near optimal in terms of asymptotic complexity.
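The update/retrieve operations above have a classic concrete realisation. The sketch below is a standard Fenwick (binary indexed) tree, which supports both operations in O(log n) time; it is offered only as a point of reference for the operations being modelled, not as the paper's matrix-based model or its lower-bound argument.

```python
class FenwickTree:
    """Binary indexed tree supporting update(j, x) and retrieve(i, j),
    the partial sum v_i + ... + v_j, each in O(log n) time.
    """

    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)  # 1-indexed internal array

    def update(self, j, x):
        """v_j <- v_j + x (positions are 1-indexed)."""
        while j <= self.n:
            self.tree[j] += x
            j += j & -j  # step to the next node covering position j

    def _prefix(self, j):
        """Return v_1 + ... + v_j."""
        s = 0
        while j > 0:
            s += self.tree[j]
            j -= j & -j  # strip the lowest set bit
        return s

    def retrieve(self, i, j):
        """Return the partial sum v_i + ... + v_j."""
        return self._prefix(j) - self._prefix(i - 1)

ft = FenwickTree(8)
for pos, val in [(1, 5), (3, 2), (8, 7)]:
    ft.update(pos, val)
total = ft.retrieve(1, 8)   # 14
middle = ft.retrieve(2, 7)  # 2
```

Both operations cost O(log n), so the sum of their average complexities is O(log n) per operation, which is the kind of quantity the paper's lower bound addresses.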
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
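The improved segmented weighting function described above down-weights measurements according to the size of their standardized residuals. The sketch below is a generic segmented (Hampel-style) weighting of that kind, commonly used in weighted LS-SVM schemes; the cut-off constants `c1` and `c2` are illustrative assumptions, not the paper's values.

```python
def segmented_weight(residual, scale, c1=2.5, c2=3.0):
    """Segmented (Hampel-style) weighting of a standardized residual.

    Full weight inside c1 scale units, linear decay between c1 and c2,
    and zero weight beyond c2, so gross outliers are removed from the
    regression while near-normal data are retained. The cut-offs c1 and
    c2 are illustrative, not values from the paper.
    """
    z = abs(residual) / scale
    if z <= c1:
        return 1.0
    if z <= c2:
        return (c2 - z) / (c2 - c1)
    return 0.0

weights = [segmented_weight(r, 1.0) for r in [0.5, -2.0, 2.75, 4.0]]
# -> full weight, full weight, half weight, zero weight
```

Re-fitting the regression with these weights keeps the information from spectra whose residuals look normally distributed while restraining or removing outliers, which is the robustness mechanism the abstract describes.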
Abstract:
With the determination of the principal parameters of production and pollution abatement technologies, this paper quantifies abatement and external costs at the social optimum and analyses the dynamic relationship between technological development and the above-mentioned costs. Through a partial analysis of the parameters, the paper presents the impacts on the level of pollution and external costs of extensive and intensive environmental protection, of changes in market demand and product fees, and of technological development that is not oriented towards environmental protection. Parametrical cost calculation makes it possible to draw up two useful rules of thumb in connection with the rate of government interventions. The paradox of technological development aiming at intensive environmental protection also becomes apparent.
Abstract:
The purpose of this study was to evaluate the effectiveness of the mechanical engineering technology curriculum at the junior college in Taiwan by using the CIPP evaluation model. The study concerned the areas of the curriculum, curriculum materials, individualized instruction, support services, teaching effectiveness, student achievement, and job performance. A descriptive survey method was used, with questionnaires for data collection from faculty, students, graduates, and employers.
All categories of respondents tended to agree that the curriculum provides appropriate occupational knowledge and skills. Students, graduates, and faculty tended to be satisfied with the curriculum; faculty tended to be satisfied with student achievement; graduates tended to be satisfied with their job preparation; and employers were most satisfied with graduates' job performance.
Conclusions were drawn in terms of the context, input, process, and product of the CIPP model. In the Context area: students were dissatisfied with curriculum flexibility with respect to student characteristics. Graduates were dissatisfied with the curriculum design for students' adaptability to new economic and industrial conditions, with practicum flexibility with respect to student characteristics, and with course overlap. Both students and graduates were dissatisfied with practicum credit hours. Both faculty and students were dissatisfied with the number of required courses.
In the Input area: students, faculty, and graduates perceived audiovisual and manipulative aids positively. Faculty and students perceived CAI implementation positively. Students perceived textbooks negatively.
In the Process area: faculty, students, and graduates perceived all support services negatively. Faculty tended to perceive negatively the ratios of graduates who enter advanced study and related occupations, and who passed the professional skills certification. Students tended to perceive teaching effectiveness negatively in terms of instructional strategies, the quality of instruction, overall suitability, and receptiveness. Graduates also identified the instructional strategies as a negative perception. Faculty and students perceived curriculum objectives and the practicum negatively. Both faculty and students felt that instructors should be more interested in making the courses a useful learning experience.
In the Product area: employers were satisfied with graduates' academic preparation and job performance, adaptability, punctuality, and their ability to communicate, cooperate, and meet organisational needs. Graduates were weak in terms of equipment familiarity and supervisory ability.
In sum, the curriculum of the five-year mechanical engineering technology programs of the junior college in Taiwan has served adequately up to this time in preparing a work force to enter industry. It is now time to look toward the future and adapt the curriculum and instruction to the future needs of this high-tech society.
Abstract:
The relationship between noun incorporation (NI) and the agreement alternations that occur in such contexts (NI Transitivity Alternations) remains inadequately understood. Three interpretations of these alternations (Baker, Aranovich & Golluscio 2005; Mithun 1984; Rosen 1989) are shown to be undermined by foundational or mechanical issues. I propose a syntactic model, adopting Branigan's (2011) interpretation of NI as the result of “provocative” feature valuation, which triggers generation of a copy of the object that subsequently merges inside the verb. Provocation triggers a reflexive Refine operation that deletes duplicate features from chains, making them interpretable for Transfer. NI Transitivity Alternations result from variant deletion preferences exhibited during Refine. I argue that the NI contexts discussed (Generic NI, Partial NI and Double Object NI) result from different restrictions on phonetic and semantic identity in chain formation. This provides us with a consistent definition of NI Transitivity Alternations across contexts, as well as a new typology that distinguishes NI contexts, rather than incorporating languages.
Abstract:
This study was supported by a Wellcome Trust-NIH PhD Studentship to SB, WDF and NV. Grant number 098252/Z/12/Z. SB, CHC and WDF are supported by the Intramural Research Program, NCI, NIH. NHG and WL are supported by the Intramural Research Program, NIA, NIH.
Abstract:
General note: Title and date provided by Bettye Lane.
Abstract:
The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval $[0,1]$ with dependence on a single parameter, $\lambda$. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on $\lambda$ and the behavior of the initial data around $1$. The second scaling leads to a measure-valued Fleming-Viot process, an infinite-dimensional stochastic process that is frequently associated with population genetics.