936 results for MODEL ANALYSIS
Abstract:
Integer-valued data envelopment analysis (DEA) with alternative returns-to-scale technologies was recently introduced and developed by Kuosmanen and Kazemi Matin. The proportionality assumption of their "natural augmentability" axiom in constant and nondecreasing returns-to-scale technologies makes it possible to achieve feasible decision-making units (DMUs) of arbitrarily large size. In many real-world applications such production plans cannot be achieved, since some of the input and output variables are bounded above. In this paper, we extend the axiomatic foundation of integer-valued DEA models to include bounded output variables. Several model variants are obtained by introducing a new "boundedness" axiom over the selected output variables. A mixed integer linear programming (MILP) formulation is also introduced for computing efficiency scores in the associated production set. © 2011 The Authors. International Transactions in Operational Research © 2011 International Federation of Operational Research Societies.
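The MILP itself is specific to the paper, but the underlying envelopment problem can be illustrated with a toy continuous relaxation. The sketch below (assuming SciPy is available; the data and the function name are invented for illustration) computes an input-oriented CCR efficiency score; the integer-valued variant would additionally restrict the reference point to integer values and impose upper bounds on the bounded outputs.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(x, y, o):
    """Input-oriented CCR efficiency of DMU o (continuous CRS relaxation).

    x, y: 1-D arrays of single-input / single-output levels per DMU.
    Decision variables: [theta, lambda_1..lambda_n]; minimise theta s.t.
    sum(lam * x) <= theta * x_o  and  sum(lam * y) >= y_o, lam >= 0.
    """
    n = len(x)
    c = np.zeros(n + 1)
    c[0] = 1.0                                           # minimise theta
    A_ub = np.vstack([np.concatenate(([-x[o]], x)),      # input constraint
                      np.concatenate(([0.0], -y))])      # output constraint, flipped
    b_ub = np.array([0.0, -y[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0]

x = np.array([2.0, 4.0, 3.0])   # toy input levels for three DMUs
y = np.array([2.0, 3.0, 1.0])   # toy output levels
scores = [ccr_efficiency(x, y, o) for o in range(3)]
```

DMU A (ratio 1.0) spans the CRS frontier, so the other two units are projected onto its ray.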
Abstract:
This thesis provides a set of tools for managing uncertainty in Web-based models and workflows. To support the use of these tools, the thesis first provides a framework for exposing models through Web services. An introduction to uncertainty management, Web service interfaces, and workflow standards and technologies is given, with a particular focus on the geospatial domain. An existing specification for exposing geospatial models and processes, the Web Processing Service (WPS), is critically reviewed. A processing service framework is presented as a solution to usability issues with the WPS standard. The framework implements support for the Simple Object Access Protocol (SOAP), the Web Service Description Language (WSDL) and JavaScript Object Notation (JSON), allowing models to be consumed by a variety of tools and software. Strategies for communicating with models from Web service interfaces are discussed, demonstrating the difficulty of exposing existing models on the Web. The thesis then reviews existing mechanisms for uncertainty management, with an emphasis on emulator methods for building efficient statistical surrogate models. A tool is developed to solve accessibility issues with such methods, by providing a Web-based user interface and backend to ease the process of building and integrating emulators. These tools, plus the processing service framework, are applied to a real case study as part of the UncertWeb project. The usability of the framework is demonstrated with the implementation of a Web-based workflow for predicting future crop yields in the UK, which also demonstrates the abilities of the tools for emulator building and integration. Future directions for the development of the tools are discussed.
Abstract:
In this paper we present the design and analysis of an intonation model for text-to-speech (TTS) synthesis applications using a combination of Relational Tree (RT) and Fuzzy Logic (FL) technologies. The model is demonstrated using the Standard Yorùbá (SY) language. In the proposed intonation model, phonological information extracted from text is converted into an RT. An RT is a sophisticated data structure that symbolically represents the peaks and valleys, as well as the spatial structure, of a waveform in the form of a tree. An initial approximation to the RT, called the Skeletal Tree (ST), is first generated algorithmically. The exact numerical values of the peaks and valleys on the ST are then computed using FL. Quantitative analysis of the results gives RMSEs of 0.56 and 0.71 for peaks and valleys, respectively. Mean Opinion Scores (MOS) of 9.5 and 6.8, on a scale of 1-10, were obtained for intelligibility and naturalness, respectively.
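As an illustration of the fuzzy-logic step, here is a minimal sketch (not the authors' rule base; the membership sets, firing strengths and frequency ranges are invented) of how rule outputs can be clipped, aggregated, and centroid-defuzzified into a numeric peak value:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    return np.clip(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0, 1.0)

f0 = np.linspace(80.0, 300.0, 2201)          # candidate peak F0 values (Hz)
low  = tri(f0,  80.0, 120.0, 160.0)          # output sets: "low", "mid", "high" F0
mid  = tri(f0, 140.0, 190.0, 240.0)
high = tri(f0, 220.0, 260.0, 300.0)

def peak_value(w_low, w_mid, w_high):
    """Clip each output set at its rule's firing strength, aggregate with max,
    then defuzzify with the centroid of the aggregated set."""
    agg = np.maximum.reduce([np.minimum(low, w_low),
                             np.minimum(mid, w_mid),
                             np.minimum(high, w_high)])
    return float((agg * f0).sum() / agg.sum())

stressed   = peak_value(0.1, 0.3, 0.9)   # hypothetical strengths: stressed syllable
unstressed = peak_value(0.9, 0.3, 0.1)   # hypothetical strengths: unstressed syllable
```

A rule base favouring the "high" set pushes the defuzzified peak upward, which is the behaviour a stress-sensitive intonation model needs.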
Abstract:
A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward in order to overcome the drawback of high dependency on a priori knowledge. The candidate analytical lines are automatically selected based on the built-in characteristics of the spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines to be used as input variables of the regression model are determined adaptively according to the samples for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given in the form of confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and better modeling robustness than methods based on partial least squares regression, artificial neural networks and standard support vector machines.
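Scikit-learn has no RVM implementation, but its `ARDRegression` uses the same automatic-relevance-determination prior (over raw features rather than a kernel basis) and likewise returns a predictive distribution. The sketch below (synthetic stand-in data, not LIBS spectra; the calibration coefficients are invented) shows how such a model yields confidence intervals for predicted concentrations:

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(60, 3))        # stand-in: intensities of 3 analytical lines
true_w = np.array([0.50, 1.20, -0.30])      # hypothetical calibration coefficients
y = X @ true_w + rng.normal(0.0, 0.05, 60)  # stand-in: certified concentrations

# Train on 40 "certified standard samples", predict the remaining 20.
model = ARDRegression().fit(X[:40], y[:40])
mean, std = model.predict(X[40:], return_std=True)
lower, upper = mean - 1.96 * std, mean + 1.96 * std   # 95% predictive interval
```

The per-sample `std` is what turns a point prediction into the confidence interval described in the abstract.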
Abstract:
* The research was supported in part by the INTAS project 04-77-7173, http://www.intas.be
Abstract:
Transmembrane proteins play crucial roles in many important physiological processes. The intracellular domain of membrane proteins is key to their function, interacting with a wide variety of cytosolic proteins. It is therefore important to examine these interactions. A recently developed method to study them, based on the use of liposomes as a model membrane, involves the covalent coupling of the cytoplasmic domains of membrane proteins to the liposome membrane. This allows for the analysis of interaction partners that require both protein and membrane lipid binding. This thesis further establishes the liposome recruitment system and utilises it to examine the intracellular interactome of the amyloid precursor protein (APP), best known for its proteolytic cleavage that results in the production and accumulation of amyloid beta fragments, the main constituent of amyloid plaques in Alzheimer’s disease pathology. Despite this, the physiological function of APP remains largely unclear. Through the use of the proteo-liposome recruitment system, two novel interactions of APP’s intracellular domain (AICD) are examined with a view to gaining greater insight into APP’s physiological function. One of these novel interactions is between AICD and the mTOR complex, a serine/threonine protein kinase that integrates signals from nutrients and growth factors. The kinase domain of mTOR directly binds to AICD, and the N-terminal amino acids of AICD are crucial for this interaction. The second novel interaction is between AICD and the endosomal PIKfyve complex, a lipid kinase involved in the production of phosphatidylinositol-3,5-bisphosphate (PI(3,5)P2) from phosphatidylinositol-3-phosphate, which has a role in controlling endosome dynamics. The scaffold protein Vac14 of the PIKfyve complex binds directly to AICD, and the C-terminus of AICD is important for its interaction with the PIKfyve complex.
Using a recently developed intracellular PI(3,5)P2 probe, it is shown that APP controls the formation of PI(3,5)P2-positive vesicular structures and that the PIKfyve complex is involved in the trafficking and degradation of APP. Both of these novel APP interactors have important implications for both APP function and Alzheimer’s disease. The proteo-liposome recruitment method is further validated through its use to examine the recruitment and assembly of the AP-2/clathrin coat from purified components onto two membrane proteins containing different sorting motifs. Taken together, this thesis highlights the proteo-liposome recruitment system as a valuable tool for studying the intracellular interactome of membrane proteins. It allows the protein to be mimicked in its native configuration, thereby identifying weaker interactions that are not detected by more conventional methods, as well as interactions that are mediated by membrane phospholipids.
Abstract:
We develop, implement and study a new Bayesian spatial mixture model (BSMM). The proposed BSMM allows for spatial structure in the binary activation indicators through a latent thresholded Gaussian Markov random field. We develop a Gibbs (MCMC) sampler to perform posterior inference on the model parameters, which then allows us to assess the posterior probability of activation for each voxel. One purpose of this article is to compare the HJ model and the BSMM in terms of receiver operating characteristic (ROC) curves. We also consider the accuracy of the spatial mixture model and the BSMM for estimating the size of the activation region, in terms of bias, variance and mean squared error. We perform a simulation study to examine the aforementioned characteristics under a variety of configurations of the spatial mixture model and the BSMM, both as the size of the region changes and as the magnitude of activation changes.
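The two central ingredients, binary activation indicators generated by thresholding a latent Gaussian field, and an ROC comparison of competing models, can be illustrated in a few lines. This toy omits the spatial Markov structure and the Gibbs sampler entirely, and the two score vectors are invented stand-ins for each model's posterior evidence:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
z = rng.normal(size=32 * 32)                    # latent Gaussian value per voxel
active = (z > 1.0).astype(int)                  # thresholding -> binary activation

# Hypothetical per-voxel evidence from two models, one noisier than the other.
score_a = active + rng.normal(0.0, 0.3, active.size)
score_b = active + rng.normal(0.0, 0.8, active.size)

auc_a = roc_auc_score(active, score_a)          # area under the ROC curve
auc_b = roc_auc_score(active, score_b)
```

Comparing areas under the ROC curves is exactly the kind of summary the article uses to rank the HJ model against the BSMM.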
Abstract:
Through a lumped-parameter modelling approach, a dynamical model has been developed that can reproduce the motion of the muscles of a human body standing in different postures during Whole Body Vibration (WBV) treatment. The key parameters, associated with the dynamics of the motion of the muscles of the lower limbs, have been identified starting from accelerometer measurements. The developed model can be usefully applied to the optimization of WBV treatments so that they effectively enhance muscle activation. © 2013 IEEE.
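A single-degree-of-freedom lumped-parameter model under base excitation is the simplest instance of this approach. The sketch below (parameter values are illustrative, not identified from accelerometer data) integrates the equation of motion for a mass on a vibrating platform and recovers the analytic base-excitation transmissibility:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lumped model: m x'' + c (x' - y') + k (x - y) = 0,
# with platform motion y(t) = A sin(w t).
m, k, zeta = 1.0, 1.0, 0.2
c = 2.0 * zeta * np.sqrt(k * m)          # damping from the damping ratio
A, w = 1.0, 2.0                          # excitation amplitude and frequency (r = 2)

def rhs(t, s):
    x, v = s
    y, yd = A * np.sin(w * t), A * w * np.cos(w * t)
    return [v, (-c * (v - yd) - k * (x - y)) / m]

t_tail = np.linspace(150.0, 200.0, 4000)     # sample after transients die out
sol = solve_ivp(rhs, (0.0, 200.0), [0.0, 0.0], t_eval=t_tail,
                rtol=1e-9, atol=1e-12)
amp = np.abs(sol.y[0]).max()                 # steady-state displacement amplitude

r = w / np.sqrt(k / m)                       # frequency ratio
T = np.sqrt((1 + (2 * zeta * r) ** 2)
            / ((1 - r ** 2) ** 2 + (2 * zeta * r) ** 2))   # analytic transmissibility
```

With identified (rather than invented) m, c, k per posture, the same integration lets one search for the platform frequency that maximises muscle excitation.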
Abstract:
Data fluctuation in multiple measurements of Laser Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance the analysis accuracy is to improve the quality and consistency of the emission signal, such as by averaging the spectral signals or spectrum standardization over a number of laser shots. The proposed method focuses more on how to enhance the robustness of the quantitative analysis regression model. The proposed RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of measured spectral data. Through the improved segmented weighting function, the information on the spectral data in the normal distribution will be retained in the regression model while the information on the outliers will be restrained or removed. Copper elemental concentration analysis experiments of 16 certified standard brass samples were carried out. The average value of relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness compared with the quantitative analysis methods based on Partial Least Squares (PLS) regression, standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function had better comprehensive performance in model robustness and convergence speed, compared with the four known weighting functions.
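The segmented weighting idea can be sketched as follows (the thresholds c1, c2 and the MAD-based robust scale follow the common WLS-SVM convention; this is not the paper's exact improved function). Residuals near the centre of the distribution keep full weight, mild outliers are down-weighted linearly, and gross outliers are effectively removed:

```python
import numpy as np

def segmented_weights(residuals, c1=2.5, c2=3.0):
    """Piecewise weighting of regression residuals: weight 1 within c1 robust
    standard deviations, linear decay up to c2, near-zero beyond (outliers)."""
    s = 1.483 * np.median(np.abs(residuals - np.median(residuals)))  # MAD scale
    z = np.abs(residuals) / s
    return np.where(z <= c1, 1.0,
           np.where(z <= c2, (c2 - z) / (c2 - c1), 1e-4))

rng = np.random.default_rng(0)
r = rng.normal(0.0, 1.0, 200)            # stand-in residuals from a first LS fit
r[:3] = [8.0, -9.0, 10.0]                # inject gross outliers (bad laser shots)
w = segmented_weights(r)
```

In an iteratively reweighted scheme, these weights rescale each sample's error term before the model is refit, so anomalous shots stop distorting the calibration.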
Abstract:
2000 Mathematics Subject Classification: 62J12, 62K15, 91B42, 62H99.
Abstract:
Diabetes patients may face an unhealthy life, long-term treatment and chronic complications. Decreasing the hospitalization rate is a crucial problem for health care centers. This study combines the bagging method, with decision trees as base classifiers, and cost-sensitive analysis for the purpose of classifying diabetes patients. Real patient data collected from a regional hospital in Thailand were analyzed. The relevant factors were selected and used to construct base-classifier decision tree models to classify diabetes and non-diabetes patients. The bagging method was then applied to improve accuracy. Finally, asymmetric classification cost matrices were used to provide alternative models for diabetes data analysis.
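A minimal scikit-learn sketch of this pipeline (entirely synthetic data standing in for the hospital records; the 5:1 penalty is an invented example of an asymmetric cost matrix, expressed here via class weights):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the patient data (imbalanced: ~30% positive class).
X, y = make_classification(n_samples=600, n_features=8, weights=[0.7, 0.3],
                           class_sep=1.5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Base learner penalises missing a diabetes case 5x more than a false alarm.
base = DecisionTreeClassifier(class_weight={0: 1, 1: 5}, random_state=0)
model = BaggingClassifier(base, n_estimators=50, random_state=0).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

Varying the cost ratio in `class_weight` produces the family of alternative models the study describes, each trading false alarms against missed cases differently.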
Abstract:
The extant literature on workplace coaching is characterised by a lack of theoretical and empirical understanding regarding the effectiveness of coaching as a learning and development tool; the types of outcomes one can expect from coaching; the tools that can be used to measure coaching outcomes; the underlying processes that explain why and how coaching works; and the factors that may impact on coaching effectiveness. This thesis sought to address these substantial gaps in the literature with three linked studies. Firstly, a meta-analysis of workplace coaching effectiveness (k = 17), synthesizing the existing research, was presented. A framework of coaching outcomes was developed and utilised to code the studies. Analysis indicated that coaching had positive effects on all outcomes. Next, the framework of outcomes was utilised as the deductive starting point for the development of a scale measuring perceived coaching effectiveness. Utilising a multi-stage approach (n = 201), the analysis indicated that perceived coaching effectiveness may be organised into a six-factor structure: career clarity; team performance; work well-being; performance; planning and organizing; and personal effectiveness and adaptability. The final study was a longitudinal field experiment to test a theoretical model of individual differences and coaching effectiveness developed in this thesis. An organizational sample of 84 employees each participated in a coaching intervention, completed self-report surveys, and had their job performance rated by peers, direct reports and supervisors (a total of 352 employees provided data on participant performance). The results demonstrate that, compared to a control group, the coaching intervention generated a number of positive outcomes. The analysis indicated that coachees’ enthusiasm, intellect and orderliness influenced the impact of coaching on outcomes.
Mediation analysis suggested that mastery goal orientation, performance goal orientation and approach motivation, in the form of behavioural activation system (BAS) drive, were significant mediators between personality and outcomes. Overall, the findings of this thesis make an original contribution to the understanding of the types of outcomes that can be expected from coaching, and the magnitude of the impact coaching has on outcomes. The thesis also provides a tool for reliably measuring coaching effectiveness and a theoretical model for understanding the influence of coachee individual differences on coaching outcomes.
Abstract:
In product reviews, the distribution of polarity ratings over reviews written by different users, or over reviews evaluating different products, is often skewed in the real world. As such, incorporating user and product information would be helpful for the task of sentiment classification of reviews. However, existing approaches ignore the temporal nature of reviews posted by the same user or evaluating the same product. We argue that the temporal relations of reviews might be potentially useful for learning user and product embeddings, and we thus propose employing a sequence model to embed these temporal relations into user and product representations so as to improve the performance of document-level sentiment analysis. Specifically, we first learn a distributed representation of each review with a one-dimensional convolutional neural network. Then, taking these representations as pretrained vectors, we use a recurrent neural network with gated recurrent units to learn distributed representations of users and products. Finally, we feed the user, product and review representations into a machine learning classifier for sentiment classification. Our approach has been evaluated on three large-scale review datasets from IMDB and Yelp. Experimental results show that: (1) sequence modeling for the purposes of distributed user and product representation learning can improve the performance of document-level sentiment classification; (2) the proposed approach achieves state-of-the-art results on these benchmark datasets.
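The representation pipeline (convolve each review, then run a GRU over the time-ordered review vectors) can be sketched in plain NumPy. All dimensions and weights below are random placeholders, not trained parameters, and the GRU biases are omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)
EMB, FILT, HID, K = 16, 12, 10, 3    # embedding, filter, hidden sizes; window width

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def review_vector(tokens, W, b):
    """1-D convolution over token embeddings, tanh, then max-over-time pooling."""
    windows = np.stack([np.tanh(np.tensordot(tokens[i:i + K], W,
                                             axes=([0, 1], [0, 1])) + b)
                        for i in range(len(tokens) - K + 1)])
    return windows.max(axis=0)                       # (FILT,) review representation

def gru_encode(seq, P):
    """Run a GRU over time-ordered review vectors; the final hidden state is
    the user (or product) representation."""
    h = np.zeros(HID)
    for x in seq:
        z = sigmoid(P['Wz'] @ x + P['Uz'] @ h)       # update gate
        r = sigmoid(P['Wr'] @ x + P['Ur'] @ h)       # reset gate
        h_tilde = np.tanh(P['Wh'] @ x + P['Uh'] @ (r * h))
        h = (1.0 - z) * h + z * h_tilde
    return h

W_conv = rng.normal(0, 0.1, (K, EMB, FILT))
b_conv = np.zeros(FILT)
P = {k: rng.normal(0, 0.1, (HID, FILT if k[0] == 'W' else HID))
     for k in ['Wz', 'Uz', 'Wr', 'Ur', 'Wh', 'Uh']}

# Three reviews by one user, oldest first, as random token-embedding matrices.
reviews = [rng.normal(size=(L, EMB)) for L in (7, 5, 9)]
user_rep = gru_encode([review_vector(rv, W_conv, b_conv) for rv in reviews], P)
```

Feeding reviews oldest-first is what lets the recurrent state accumulate the temporal drift in a user's rating behaviour that the paper argues is informative.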
Abstract:
This study was an evaluation of a Field Project Model Curriculum and its impact on achievement, attitude toward science, attitude toward the environment, self-concept, and academic self-concept with at-risk eleventh and twelfth grade students. One hundred eight students were pretested and posttested on the Piers-Harris Children's Self-Concept Scale, PHCSC (1985); the Self-Concept as a Learner Scale, SCAL (1978); the Marine Science Test, MST (1987); the Science Attitude Inventory, SAI (1970); and the Environmental Attitude Scale, EAS (1972). Using a stratified random design, three groups of students were randomly assigned according to sex and stanine level, to three treatment groups. Group one received the field project method, group two received the field study method, and group three received the field trip method. All three groups followed the marine biology course content as specified by Florida Student Performance Objectives and Frameworks. The intervention occurred for ten months with each group participating in outside-of-classroom activities on a trimonthly basis. Analysis of covariance procedures were used to determine treatment effects. F-ratios, p-levels and t-tests at p < .0062 (.05/8) indicated that a significant difference existed among the three treatment groups. Findings indicated that groups one and two were significantly different from group three with group one displaying significantly higher results than group two. There were no significant differences between males and females in performance on the five dependent variables. The tenets underlying environmental education are congruent with the recommendations toward the reform of science education. These include a value analysis approach, inquiry methods, and critical thinking strategies that are applied to environmental issues.
Abstract:
The purpose of this study is to produce a model to be used by state regulating agencies to assess demand for subacute care. In accomplishing this goal, the study refines the definition of subacute care, demonstrates a method for bed need assessment, and measures the effectiveness of this new level of care. This was the largest study of subacute care to date. Research focused on 19 subacute units in 16 states, each of which provides high-intensity rehabilitative and/or restorative care carried out in a high-tech unit. Each of the facilities was based in a nursing home, but utilized separate staff, equipment, and services. Because these facilities are under local control, it was possible to study regional differences in subacute care demand. Using this data, a model for predicting demand for subacute care services was created, building on earlier models submitted by John Whitman for the American Hospital Association and Robin E. MacStravic. The Broderick model uses the "bootstrapping" method and takes advantage of high technology: computers and software, databases in business and government, publicly available databases from providers or commercial vendors, professional organizations, and other information sources. Using newly available sources of information, this new model addresses the problems and needs of health care planners as they approach the challenges of the 21st century.