880 results for Evaluation models
Abstract:
In this paper, we explore the idea of social role theory (SRT) and propose a novel regularized topic model that incorporates SRT into the generative process of social media content. We assume that a user can play multiple social roles, each of which serves to fulfil different duties and is associated with a role-driven distribution over latent topics. In particular, we focus on social roles corresponding to the most common social activities on social networks. Our model is instantiated on microblogs, namely Twitter, and community question answering (cQA), namely Yahoo! Answers: social roles on Twitter include "originators" and "propagators", while roles on cQA are "askers" and "answerers". Both explicit and implicit interactions between users are taken into account and modeled as regularization factors. To evaluate the performance of the proposed method, we conducted extensive experiments on two Twitter datasets and two cQA datasets. We also consider multi-role modeling for scientific papers, where an author's research expertise area is treated as a social role, and present a novel application that detects users' research interests through topical keyword labeling based on the results of the multi-role model. The evaluation results show the feasibility and effectiveness of our model.
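As a toy illustration of the kind of role-driven generative process this abstract describes, the sketch below samples words for a user whose content mixes two social roles. All dimensions, names, and the sampling scheme are illustrative assumptions; the paper's actual model additionally regularizes the topic model with explicit and implicit user interactions, which is not reproduced here.

```python
# Toy sketch of a role-driven generative process for user content.
# Every dimension and distribution below is an illustrative assumption,
# not the paper's actual specification.
import numpy as np

rng = np.random.default_rng(0)

n_roles, n_topics, vocab_size = 2, 10, 500      # e.g., "originator"/"propagator"
role_topic = rng.dirichlet(np.ones(n_topics), size=n_roles)     # per-role topic dist.
topic_word = rng.dirichlet(np.ones(vocab_size), size=n_topics)  # per-topic word dist.

def generate_post(user_role_mix, n_words=20):
    """Generate one post (word ids) for a user with a mixture over social roles."""
    words = []
    for _ in range(n_words):
        role = rng.choice(n_roles, p=user_role_mix)       # pick a role for this word
        topic = rng.choice(n_topics, p=role_topic[role])  # the role drives the topic
        words.append(rng.choice(vocab_size, p=topic_word[topic]))
    return words

# A user who mostly acts as an "originator" but sometimes "propagates".
print(generate_post(user_role_mix=np.array([0.7, 0.3])))
```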
Abstract:
Three human astroglioma lines, U251-MG, U373-MG and CCF-STTG1, have been evaluated further as possible models for astrocytotoxicity (GFAP and IL-6 release). The effects of bacterial lipopolysaccharide (LPS), chloroquine diphosphate (CQD) and acrylamide were studied on GFAP expression, and LPS, CQD, ethanol, trimethyltin chloride (TMTC) and acrylamide were examined for effects on interleukin-6 (IL-6) release in the U373-MG line only. At 4 h, LPS elevated GFAP (17.0 ± 5.0%, P < 0.05) above control in the U251-MG cell line only. CQD over 4 h in the U251-MG line increased GFAP-IR to 20.3 ± 4.2% and 21.1 ± 4.1% above control levels at 0.1 µM (P < 0.05) and 1 µM (P < 0.05), respectively. CQD was associated with decreases in MTT turnover, particularly after 24 h of incubation. In the U373-MG line, LPS (0.5 µg/ml) increased IL-6 expression 640% above control (P < 0.001), whilst CQD (100 µM), ethanol (10 mM) and TMTC (1 µM) also increased IL-6. It is possible that batteries of human astrocytic glioma cell lines may be applicable to the sensitive evaluation of toxicants acting on astrogliotic expression markers such as GFAP and IL-6.
Abstract:
The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes depends on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications both in the accurate prediction of class II epitopes and in the manipulation of affinity for heteroclitic and competitor peptides. It is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek), with peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity, but once the set of selected peptide subsequences had converged, the final models exhibited satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r2 and SEE ranged between 0.98 and 0.995 and between 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
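As a rough illustration of the leave-one-out statistics quoted above (q2, SEP, and the number of components NC), the sketch below computes them for a plain PLS regression on synthetic data. It deliberately omits the iterative self-consistent subsequence selection that defines ISC-PLS, and scikit-learn's PLS stands in for the commercial SYBYL implementation.

```python
# Leave-one-out q2 and SEP for a plain PLS regression on synthetic data.
# This is only a sketch of the cross-validation statistics, not the
# full ISC-PLS procedure described in the abstract.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))                   # e.g., amino-acid indicator features
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=60)  # synthetic affinities

n_components = 3                                # "NC" in the abstract
preds = np.empty_like(y)
for train, test in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=n_components)
    pls.fit(X[train], y[train])
    preds[test] = pls.predict(X[test]).ravel()

press = ((y - preds) ** 2).sum()                # predictive residual sum of squares
ss = ((y - y.mean()) ** 2).sum()
q2 = 1 - press / ss                             # cross-validated q2
sep = np.sqrt(press / len(y))                   # standard error of prediction
print(f"q2 = {q2:.3f}, SEP = {sep:.3f}")
```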
Abstract:
Data envelopment analysis (DEA) is among the most widely used methods for measuring the efficiency and productivity of decision-making units (DMUs). For large-scale data sets, especially those containing negative measures, DEA inevitably demands substantial computing resources in terms of memory and CPU time. In recent years, a wide range of studies has been conducted on combined artificial neural network and DEA methods. In this study, a supervised feed-forward neural network is proposed to evaluate the efficiency and productivity of large-scale data sets with negative values, in contrast to the corresponding DEA method. Results indicate that the proposed network has computational advantages over the corresponding DEA models; it can therefore be considered a useful tool for measuring the efficiency of DMUs with (large-scale) negative data.
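A minimal sketch of the idea, under stated assumptions: a feed-forward network is trained to map DMU measures (which may be negative) to efficiency scores, so that scoring a large data set no longer requires solving one DEA linear program per DMU. The "true" scores below are synthetic placeholders for scores that would come from a negative-data DEA model on a training subset, and the network architecture is arbitrary.

```python
# Approximate DEA efficiency scores with a supervised feed-forward network.
# The target scores below are synthetic stand-ins for scores obtained by
# running a negative-data DEA model on a training subset.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n_dmus = 5000
X = rng.normal(size=(n_dmus, 6))          # input/output measures, may be negative
true_eff = 1 / (1 + np.exp(-X @ rng.normal(size=6)))  # placeholder efficiency in (0, 1)

X_tr, X_te, y_tr, y_te = train_test_split(X, true_eff, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)                        # learn the measures -> efficiency mapping
print("held-out R^2:", round(net.score(X_te, y_te), 3))
```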
Abstract:
The finding that Pareto distributions adequately model Internet packet interarrival times has motivated the proposal of methods to evaluate steady-state performance measures of Pareto/D/1/k queues. Some limited analytical derivations for such queue models have been proposed in the literature, but their solutions often pose a great mathematical challenge. To overcome these limitations, simulation tools that can deal with general queueing systems must be developed. Despite certain limitations, simulation algorithms provide a mechanism for obtaining insight and good numerical approximations to queue parameters. In this work, we give an overview of some of these methods and compare them with our simulation approach, which is suited to solving queues with generalized Pareto interarrival time distributions. The paper discusses the properties and use of the Pareto distribution. We propose a real-time trace simulation model for estimating the steady-state probability (showing the tail-raising effect), the loss probability, and the delay of the Pareto/D/1/k queue, and make a comparison with M/D/1/k. Background on Internet traffic helps ensure the evaluation is done correctly. This model can be used to study long-tailed queueing systems. We close the paper with some general comments and thoughts about future work.
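The following is a minimal trace-driven simulation in the spirit of the approach described: a single-server queue with deterministic service, finite capacity k, and generalized Pareto interarrival times, used to estimate the loss probability. All parameter values are illustrative, not those of the paper.

```python
# Illustrative simulation of a Pareto/D/1/k queue: generalized-Pareto
# interarrivals, deterministic service, finite capacity k. The quantity
# estimated is the loss (blocking) probability; parameters are arbitrary.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(42)
shape, scale = 0.4, 1.0        # heavy-tailed interarrivals (illustrative values)
service, k = 0.8, 10           # deterministic service time; capacity incl. in service

n_arrivals = 200_000
interarrivals = genpareto.rvs(shape, scale=scale, size=n_arrivals, random_state=rng)
arrivals = np.cumsum(interarrivals)

departures = []                # departure times of customers currently in system
lost = 0
for t in arrivals:
    departures = [d for d in departures if d > t]       # purge finished customers
    if len(departures) >= k:
        lost += 1                                       # arrival blocked: buffer full
        continue
    start = max(t, departures[-1]) if departures else t  # FIFO: wait behind last departure
    departures.append(start + service)

print("estimated loss probability:", lost / n_arrivals)
```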
Abstract:
This thesis describes research that has developed the principles of a modelling tool for the analytical evaluation of a manufacturing strategy. The appropriate process of manufacturing strategy formulation is based on mental synthesis, with formal planning processes supporting this role. Inherent to such processes is a stage where the effects of alternative strategies on the performance of a manufacturing system must be evaluated so that a choice of preferred strategy can be made. Invariably this evaluation is carried out by practitioners applying mechanisms of judgement, bargaining and analysis. This thesis makes a significant and original contribution to the provision of analytical support for practitioners in this role. The research programme commences by defining the requirements of analytical strategy evaluation from the perspective of practitioners. A broad taxonomy of models has been used to identify a set of potentially suitable techniques for the strategy evaluation task. Then, where possible, unsuitable modelling techniques have been identified on the basis of evidence in the literature and discarded from this set. The remaining modelling techniques have been critically appraised by testing representative contemporary modelling tools in an industrially based experimentation programme. The results show that individual modelling techniques exhibit various limitations in the strategy evaluation role, though some combinations do appear to provide the necessary functionality. On the basis of this comprehensive and in-depth knowledge, a modelling tool has been specifically designed for this task. Further experimental testing has then been conducted to verify the principles of this modelling tool. This research has bridged the fields of manufacturing strategy formulation and manufacturing systems modelling and makes two contributions to knowledge. Firstly, a comprehensive and in-depth platform of knowledge has been established about modelling techniques in manufacturing strategy evaluation. Secondly, the principles of a tool that supports this role have been formed and verified.
The Long-Term Impact of Business Support? - Exploring the Role of Evaluation Timing Using Micro Data
Abstract:
The original contribution of this work is threefold. Firstly, this thesis develops a critical perspective on current evaluation practice for business support, with a focus on the timing of evaluation. The general time frame applied in business support policy evaluation is limited to one to two, seldom three, years post-intervention. This is despite calls for long-term impact studies by various authors concerned about time lags before effects are fully realised. This desire for long-term evaluation runs counter to the requirements of policy-makers and funders, who seek quick results. Moreover, current ‘best practice’ frameworks do not refer to timing or its implications, and data availability limits the ability to undertake long-term evaluation. Secondly, this thesis provides methodological value for follow-up and similar studies by linking scheme-beneficiary data with official performance datasets, so that data availability problems are avoided through the use of secondary data. Thirdly, this thesis builds the evidence base through a longitudinal impact study of small business support in England covering seven years of post-intervention data. This illustrates the variability of results across different evaluation periods, and the value of using multiple years of data for a robust understanding of support impact. For survival, the impact of assistance is found to be immediate but limited. Concerning growth, significant impact centres on a two to three year period post-intervention for the linear selection and quantile regression models: positive for employment and turnover, negative for productivity. Attribution of impact may present a problem for subsequent periods. The results clearly support the argument for the use of longitudinal data and analysis, and for a greater appreciation by evaluators of the time factor. This analysis recommends a time frame of four to five years post-intervention for the evaluation of soft business support.
Abstract:
The nation's freeway systems are becoming increasingly congested. A major contributor to traffic congestion on freeways is traffic incidents: non-recurring events, such as accidents or stranded vehicles, that cause a temporary reduction in roadway capacity and that can account for as much as 60 percent of all traffic congestion on freeways. One major freeway incident management strategy involves diverting traffic away from incident locations by relaying timely information through Intelligent Transportation Systems (ITS) devices such as dynamic message signs or real-time traveler information systems. The decision to divert traffic depends foremost on the expected duration of an incident, which is difficult to predict. In addition, the duration of an incident is affected by many contributing factors; determining and understanding these factors can help in identifying and developing better strategies to reduce incident durations and alleviate traffic congestion. A number of research studies have attempted to develop models to predict incident durations, yet with limited success. This dissertation attempts to improve on this previous effort by applying data mining techniques to a comprehensive incident database maintained by the District 4 ITS Office of the Florida Department of Transportation (FDOT). Two categories of incident duration prediction models were developed: "offline" models designed for use in the performance evaluation of incident management programs, and "online" models for real-time prediction of incident duration to aid decision making about traffic diversion during an ongoing incident. Multiple data mining techniques were applied and evaluated: multiple linear regression analysis and a decision-tree-based method for the offline models, and a rule-based method and a tree algorithm called M5P for the online models. The results show that the models can in general achieve high prediction accuracy within acceptable time intervals of the actual durations. The research also identifies some new contributing factors that have not been examined in past studies. As part of the research effort, software code was developed to implement the models in the existing software system of District 4 FDOT for actual applications.
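To make the modeling approach concrete, the sketch below fits a regression tree to predict incident duration from a handful of features. M5P itself is a Weka model-tree algorithm; scikit-learn's CART regressor stands in for it here, and every feature name and value is a hypothetical placeholder rather than a field of the FDOT database.

```python
# Regression-tree sketch for incident duration prediction. All feature
# names and the synthetic duration function are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(7)
n = 1000
data = pd.DataFrame({
    "lanes_blocked": rng.integers(0, 4, n),
    "is_peak_hour": rng.integers(0, 2, n),
    "involves_truck": rng.integers(0, 2, n),
    "responders_on_scene": rng.integers(1, 6, n),
})
# Synthetic duration (minutes): a noisy function of the features.
duration = (15 + 20 * data["lanes_blocked"] + 25 * data["involves_truck"]
            + 10 * data["is_peak_hour"] + rng.normal(0, 8, n))

tree = DecisionTreeRegressor(max_depth=4, min_samples_leaf=25, random_state=0)
tree.fit(data, duration)
print("predicted duration (min):",
      tree.predict(pd.DataFrame({"lanes_blocked": [2], "is_peak_hour": [1],
                                 "involves_truck": [1], "responders_on_scene": [3]})))
```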
Abstract:
A novel trileaflet polymer valve is a composite design combining a biostable polymer, poly(styrene-isobutylene-styrene) (SIBS), with a reinforcing polyethylene terephthalate (PET) fabric. Surface roughness and hydrophilicity vary with fabrication method and influence leaflet biocompatibility. The purpose of this study was to investigate the biocompatibility of this composite material using both small animal (nonfunctional mode) and large animal (functional mode) models. Composite samples were manufactured using dip coating and solvent casting with different coating thicknesses (25 μm and 50 μm). Each sample's surface was characterized through qualitative SEM observation and quantitative surface roughness analysis. A novel rat abdominal aorta model was developed to test the composite samples under pulsatile flow conditions similar to their intended use, and the tissue response was characterized by histological examination. Among the samples tested, the 25 μm solvent-cast sample exhibited the smoothest surface and the best biocompatibility in terms of tissue encapsulation thickness, and solvent casting was therefore chosen as the fabrication method for the SIBS valve. Phosphocholine was used to create a hydrophilic surface on selected composite samples, which resulted in improved blood compatibility. Four SIBS valves (two with phosphocholine modification) were implanted into sheep. Echocardiography, blood chemistry, and systemic pathology were conducted to evaluate the valves' performance and biocompatibility. No adverse response was identified following implantation. The average survival time was 76 days, and one sheep with a phosphocholine-modified valve passed the FDA minimum requirement of 140 days, with approximately 20 million cycles of valve activity. The explanted valves were examined with the aid of a dissection microscope and evaluated via histology, SEM and X-ray. Surface cracks and calcified tissue deposition were found on the leaflets. In conclusion, we demonstrated the applicability of a new rat abdominal aorta model for biocompatibility assessment of polymeric materials. A smooth and complete coating surface is essential for the biocompatibility of the PET/SIBS composite, and surface modification using phosphocholine improves blood compatibility. Extrinsic calcification was identified on the leaflets and was associated with regions of surface cracks.
Abstract:
Ensuring the correctness of software has been a major motivation of software research, constituting a Grand Challenge. Because of its impact on the final implementation, one critical aspect of software is its architectural design: by guaranteeing a correct architectural design, major and costly flaws can be caught early in the development cycle. Software architecture design has received much attention in recent years, and several methods, techniques and tools have been developed. However, there is still more to be done, such as providing adequate formal analysis of software architectures. To this end, a framework to ensure system dependability from design to implementation has been developed at FIU (Florida International University). This framework is based on SAM (Software Architecture Model), an ADL (Architecture Description Language) that allows hierarchical compositions of components and connectors, defines an architectural modeling language for the behavior of components and connectors, and provides a specification language for behavioral properties. The behavioral model of a SAM model is expressed in the form of Petri nets, and the properties in first-order linear temporal logic. This dissertation presents a formal verification and testing approach to guarantee the correctness of software architectures expressed in SAM. For formal verification, the technique applied was model checking, and the model checker of choice was Spin: a SAM model is formally translated into a model in the input language of Spin and verified for correctness with respect to temporal properties. For testing, an approach for SAM architectures was defined that includes the evaluation of test cases based on Petri net testing theory, to be used in the testing process at the design level; the information at the design level is additionally used to derive test cases for the implementation level. Finally, a modeling and analysis tool (SAM tool) was implemented to support the design and analysis of SAM models. The results show the applicability of the approach to testing and verification of SAM models with the aid of the SAM tool.
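As a toy illustration of the Petri-net style of analysis described above, the sketch below exhaustively explores the reachable markings of a two-transition net and checks a simple safety property. This is a from-scratch example, not SAM's actual translation to Spin's input language or its linear temporal logic machinery.

```python
# Toy Petri-net reachability check: explore all reachable markings of a
# small net and test a safety property ("the channel never holds more
# than one message"). Net structure and property are invented examples.
from collections import deque

# Transitions as (consume, produce) maps over place names.
transitions = {
    "send":    ({"ready": 1}, {"in_channel": 1}),
    "receive": ({"in_channel": 1}, {"delivered": 1}),
}
initial = {"ready": 1, "in_channel": 0, "delivered": 0}

def enabled(marking, consume):
    return all(marking[p] >= n for p, n in consume.items())

def fire(marking, consume, produce):
    m = dict(marking)
    for p, n in consume.items():
        m[p] -= n
    for p, n in produce.items():
        m[p] += n
    return m

# Breadth-first search over reachable markings.
seen, queue, violation = set(), deque([initial]), None
while queue:
    m = queue.popleft()
    key = tuple(sorted(m.items()))
    if key in seen:
        continue
    seen.add(key)
    if m["in_channel"] > 1:      # safety property violated in this marking
        violation = m
        break
    for consume, produce in transitions.values():
        if enabled(m, consume):
            queue.append(fire(m, consume, produce))

print("reachable markings:", len(seen), "| violation:", violation)
```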
Abstract:
The purpose of the present dissertation was to evaluate the internal validity of symptoms of four common anxiety disorders included in the Diagnostic and Statistical Manual of Mental Disorders, fourth edition, text revision (DSM-IV-TR; American Psychiatric Association, 2000), namely separation anxiety disorder (SAD), social phobia (SOP), specific phobia (SP), and generalized anxiety disorder (GAD), in a sample of 625 youths (ages 6 to 17 years) referred to an anxiety disorders clinic and 479 parents. Confirmatory factor analyses (CFAs) were conducted on the dichotomous items of the SAD, SOP, SP, and GAD sections of the youth and parent versions of the Anxiety Disorders Interview Schedule for DSM-IV (ADIS-IV: C/P; Silverman & Albano, 1996) to test and compare a number of factor models, including a factor model based on the DSM. Contrary to predictions, the CFAs showed that a correlated model with five factors (SAD, SOP, SP, GAD worry, and GAD somatic distress) provided the best fit to both the youth data and the parent data. Multiple-group CFAs supported the metric invariance of the correlated five-factor model across boys and girls. The present study's findings thus support the internal validity of DSM-IV SAD, SOP, and SP, but raise doubt regarding the internal validity of GAD.
Abstract:
Despite considerable progress in developing and testing psychosocial treatments to reduce youth anxiety disorders, much remains to be learned about the relation between anxiety symptom reduction and change in youth functional impairment. The specific aims of this dissertation were thus to examine: (1) the relation between different levels of anxiety and youth functional impairment ratings; (2) the incremental validity of the Children's Global Assessment Scale (CGAS); (3) the mediating role of anxiety symptom reduction in youth functional impairment ratings; (4) the directionality of change between anxiety symptom reduction and youth functional impairment; (5) the moderating effects of youth age, sex, and ethnicity on the mediated relation between youth anxiety symptom reduction and change in functional impairment; and (6) agreement (or lack thereof) between youths and their parents in their views of change in youth functional impairment vis-à-vis anxiety symptom reduction. The results were analyzed using an archival data set acquired from 183 youths and their mothers. Research questions were tested using SPSS and structural equation modeling techniques in Mplus. The results supported the efficacy of psychosocial treatments in reducing the severity of youth anxiety symptoms and the associated functional impairment. Moreover, at posttreatment, youths who scored either low or medium on anxiety scored significantly lower on impairment than youths who scored high on anxiety. Incremental validity of the CGAS was also found across all assessment points and informants in the sample. In addition, the results indicated a mediating role of anxiety symptom reduction with respect to change in youth functional impairment at posttest, regardless of the youth's age, sex, and ethnicity. No significant findings were observed with regard to bidirectionality or informant disagreement in the relation between anxiety symptom reduction and change in functional impairment. The study's main contributions and potential implications at the theoretical, empirical, and clinical levels are discussed, with emphasis on the need to enhance existing evidence-based treatments and to develop innovative treatment models that not only reduce youth symptoms (such as anxiety) but also evoke genuine and palpable improvements in the lives of youths and their families.
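To make the mediation logic concrete (treatment reduces anxiety symptoms, which in turn changes functional impairment), the sketch below estimates a product-of-coefficients indirect effect with a percentile bootstrap confidence interval on synthetic data. It is a stand-in illustration only; the dissertation's analyses used SPSS and SEM in Mplus.

```python
# Product-of-coefficients mediation with a percentile bootstrap CI on
# synthetic data: treatment -> anxiety reduction (path a) -> impairment
# change (path b), controlling for the direct effect of treatment.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 183
treatment = rng.integers(0, 2, n).astype(float)
anxiety_reduction = 0.6 * treatment + rng.normal(size=n)                  # path a
impairment_change = (0.5 * anxiety_reduction + 0.1 * treatment
                     + rng.normal(size=n))                                # path b + direct c'

def indirect_effect(t, m, y):
    a = sm.OLS(m, sm.add_constant(t)).fit().params[1]                     # t -> mediator
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, t]))).fit().params[1]  # mediator -> outcome
    return a * b

boot = [indirect_effect(treatment[idx], anxiety_reduction[idx], impairment_change[idx])
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo_ci, hi_ci = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect: {indirect_effect(treatment, anxiety_reduction, impairment_change):.3f}")
print(f"95% bootstrap CI: [{lo_ci:.3f}, {hi_ci:.3f}]")
```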
Abstract:
Buildings and other infrastructure located in the coastal regions of the US have a higher level of wind vulnerability. Reducing the increasing property losses and casualties associated with severe windstorms has been the central research focus of the wind engineering community. The present wind engineering toolbox consists of building codes and standards, laboratory experiments, and field measurements. The American Society of Civil Engineers (ASCE) 7 standard provides wind loads only for buildings with common shapes; for complex cases it refers to physical modeling. Although that option can be economically viable for large projects, it is not cost-effective for low-rise residential houses. To circumvent these limitations, a numerical approach based on the techniques of Computational Fluid Dynamics (CFD) has been developed. Recent advances in computing technology and significant developments in turbulence modeling are making numerical evaluation of wind effects a more affordable approach. The present study targeted cases that are not addressed by the standards, including wind loads on complex roofs of low-rise buildings, the aerodynamics of tall buildings, and the effects of complex surrounding buildings. Among all the turbulence models investigated, the large eddy simulation (LES) model performed best in predicting wind loads. Applying a spatially evolving, time-dependent wind velocity field with the relevant turbulence structures at the inlet boundaries was found to be essential. All results were compared with and validated against experimental data. The study also revealed CFD's unique flow visualization and aerodynamic data generation capabilities, along with a better understanding of the complex three-dimensional aerodynamics of wind-structure interactions. With proper modeling that realistically represents the actual turbulent atmospheric boundary layer flow, CFD can offer an economical alternative to the existing wind engineering tools. CFD's easy accessibility is expected to transform the practice of structural design for wind, resulting in more wind-resilient and sustainable systems by encouraging optimal aerodynamic and sustainable structural/building design. This method will thus help ensure public safety and reduce economic losses due to wind perils.