947 results for modeling and model calibration
Abstract:
The Virtual Learning Environment (VLE) is one of the fastest growing areas in educational technology research and development. In order to achieve learning effectiveness, ideal VLEs should be able to identify learning needs and customize solutions, with or without an instructor to supplement instruction. They are called Personalized VLEs (PVLEs). In order for PVLEs to succeed, comprehensive conceptual models corresponding to PVLEs are essential. Such conceptual modeling development is important because it facilitates early detection and correction of system development errors. Therefore, in order to capture PVLE knowledge explicitly, this paper focuses on the development of conceptual models for PVLEs, including models of knowledge primitives in terms of learner, curriculum, and situational models, models of VLEs in general pedagogical bases, and particularly, the definition of the ontology of PVLEs on the constructivist pedagogical principle. Based on those comprehensive conceptual models, a prototype multiagent-based PVLE has been implemented. A field experiment was conducted to investigate learning achievements by comparing personalized and non-personalized systems. The result indicates that the PVLE we developed under our comprehensive ontology successfully provides significant learning achievements. These comprehensive models also provide a solid knowledge representation framework for PVLE development practice, guiding the analysis, design, and development of PVLEs. (c) 2005 Elsevier Ltd. All rights reserved.
A simulation model of cereal-legume intercropping systems for semi-arid regions I. Model development
Abstract:
Cereal-legume intercropping plays an important role in subsistence food production in developing countries, especially in situations of limited water resources. Crop simulation can be used to assess risk for intercrop productivity over time and space. In this study, a simple model for intercropping was developed for cereal and legume growth and yield, under semi-arid conditions. The model is based on radiation interception and use, and incorporates a water stress factor. Total dry matter and yield are functions of photosynthetically active radiation (PAR), the fraction of radiation intercepted and radiation use efficiency (RUE). One of two PAR sub-models was used to estimate PAR from solar radiation; either PAR is 50% of solar radiation or the ratio of PAR to solar radiation (PAR/SR) is a function of the clearness index (K-T). The fraction of radiation intercepted was calculated either based on Beer's Law with crop extinction coefficients (K) from field experiments or from previous reports. RUE was calculated as a function of available soil water to a depth of 900 mm (ASW). Either the soil water balance method or the decay curve approach was used to determine ASW. Thus, two alternatives for each of three factors, i.e., PAR/SR, K and ASW, were considered, giving eight possible models (two alternatives for each of three factors, 2^3 = 8 combinations). The model calibration and validation were carried out with maize-bean intercropping systems using data collected in a semi-arid region (Bloemfontein, Free State, South Africa) during seven growing seasons (1996/1997-2002/2003). The combination of PAR estimated from the clearness index, a crop extinction coefficient from the field experiment and the decay curve model gave the most reasonable and acceptable result. The intercrop model developed in this study is simple, so this modelling approach can be employed to develop other cereal-legume intercrop models for semi-arid regions. (c) 2004 Elsevier B.V. All rights reserved.
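The core structure of the abstract above (dry matter = PAR × fraction intercepted × water-limited RUE, with Beer's Law for interception) can be sketched as follows. This is a minimal illustration, not the paper's calibrated model: the PAR/SR coefficients, the linear water-stress scaling of RUE, and all function names are assumptions for demonstration.

```python
import math

def fraction_intercepted(k, lai):
    """Beer's Law: fraction of incident PAR intercepted by a canopy
    with extinction coefficient k and leaf area index lai."""
    return 1.0 - math.exp(-k * lai)

def daily_dry_matter(solar_rad, clearness_index, k, lai, rue_max, asw_frac):
    """Daily dry-matter increment (g m^-2) for one component crop.

    solar_rad       : daily solar radiation (MJ m^-2)
    clearness_index : K_T, used in an illustrative linear PAR/SR ratio
    k, lai          : extinction coefficient and leaf area index
    rue_max         : potential radiation use efficiency (g MJ^-1)
    asw_frac        : available soil water as a fraction of capacity (0-1)
    """
    # PAR/SR as a function of the clearness index; the coefficients here
    # are illustrative placeholders, not the paper's fitted values.
    par = solar_rad * (0.65 - 0.25 * clearness_index)
    f_int = fraction_intercepted(k, lai)
    rue = rue_max * asw_frac  # water-stress-scaled RUE
    return par * f_int * rue

growth = daily_dry_matter(20.0, 0.5, 0.6, 3.0, 3.0, 0.8)
```

Summing such daily increments over a season, separately for the cereal and legume components, yields the total dry matter from which grain yield is derived.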
Abstract:
Parasite resistance to antimalarial drugs is a serious threat to human health, and novel agents that act on enzymes essential for parasite metabolism, such as proteases, are attractive targets for drug development. Recent studies have shown that clinically utilized human immunodeficiency virus (HIV) protease inhibitors can inhibit the in vitro growth of Plasmodium falciparum at or below concentrations found in human plasma after oral drug administration. The most potent in vitro antimalarial effects have been obtained for parasites treated with saquinavir, ritonavir, or lopinavir, findings confirmed in this study for a genetically distinct P. falciparum line (3D7). To investigate the potential in vivo activity of antiretroviral protease inhibitors (ARPIs) against malaria, we examined the effect of ARPI combinations in a murine model of malaria. In mice infected with Plasmodium chabaudi AS and treated orally with ritonavir-saquinavir or ritonavir-lopinavir, a delay in patency and a significant attenuation of parasitemia were observed. Using modeling and ligand docking studies we examined putative ligand binding sites of ARPIs in aspartyl proteases of P. falciparum (plasmepsins II and IV) and P. chabaudi (plasmepsin) and found that these in silico analyses support the antimalarial activity hypothesized to be mediated through inhibition of these enzymes. In addition, in vitro enzyme assays demonstrated that P. falciparum plasmepsins II and IV are both inhibited by the ARPIs saquinavir, ritonavir, and lopinavir. The combined results suggest that ARPIs have useful antimalarial activity that may be especially relevant in geographical regions where HIV and P. falciparum infections are both endemic.
Abstract:
This paper presents a formal but practical approach for defining and using design patterns. Initially we formalize the concepts commonly used in defining design patterns using Object-Z. We also formalize consistency constraints that must be satisfied when a pattern is deployed in a design model. Then we implement the pattern modeling language and its consistency constraints using an existing modeling framework, EMF, and incorporate the implementation as plug-ins to the Eclipse modeling environment. While the language is defined formally in terms of Object-Z definitions, the language is implemented in a practical environment. Using the plug-ins, users can develop precise pattern descriptions without knowing the underlying formalism, and can use the tool to check the validity of the pattern descriptions and pattern usage in design models. In this work, formalism brings precision to the pattern language definition and its implementation brings practicability to our pattern-based modeling approach.
Abstract:
In this paper, we present a framework for pattern-based model evolution approaches in the MDA context. In the framework, users define patterns using a pattern modeling language that is designed to describe software design patterns, and they can use the patterns as rules to evolve their model. In the framework, design model evolution takes place via two steps. The first step is a binding process of selecting a pattern and defining where and how to apply the pattern in the model. The second step is an automatic model transformation that actually evolves the model according to the binding information and the pattern rule. The pattern modeling language is defined in terms of a MOF-based role metamodel, implemented using an existing modeling framework, EMF, and incorporated as a plugin to the Eclipse modeling environment. The model evolution process is also implemented as an Eclipse plugin. With these two plugins, we provide an integrated framework in which pattern definition, pattern validation, and pattern-based model evolution all take place in a single modeling environment.
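The two-step evolution described above can be illustrated with a toy sketch (not the EMF/Eclipse implementation): step 1 binds pattern roles to model elements and validates the binding; step 2 transforms the model according to the binding and a simplified pattern rule. The Observer rule, the dictionary model representation, and all names here are illustrative assumptions.

```python
def bind(pattern_roles, model, binding):
    """Step 1: check that every pattern role is bound to an element
    that actually exists in the design model."""
    missing = [r for r in pattern_roles if r not in binding]
    if missing:
        raise ValueError(f"unbound roles: {missing}")
    unknown = [e for e in binding.values() if e not in model["classes"]]
    if unknown:
        raise ValueError(f"binding targets missing from model: {unknown}")
    return binding

def apply_observer(model, binding):
    """Step 2: evolve the model by adding the Subject->Observer
    association (a deliberately simplified pattern rule)."""
    evolved = {"classes": list(model["classes"]),
               "associations": list(model["associations"])}
    evolved["associations"].append(
        (binding["Subject"], binding["Observer"], "notifies"))
    return evolved

model = {"classes": ["Sensor", "Display"], "associations": []}
binding = bind(["Subject", "Observer"], model,
               {"Subject": "Sensor", "Observer": "Display"})
evolved = apply_observer(model, binding)
```

The separation mirrors the framework's design: the binding is declarative and checkable before any transformation runs, so an invalid pattern application is rejected without touching the model.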
Abstract:
In a deregulated electricity market, modeling and forecasting the spot price present a number of challenges. By applying wavelet and support vector machine techniques, a new time series model for short-term electricity price forecasting has been developed in this paper. The model employs both historical price and other important information, such as load capacity and weather (temperature), to forecast the price one or more time steps ahead. The developed model has been evaluated with actual data from the Australian National Electricity Market. The simulation results demonstrate that the model is capable of forecasting the electricity price with reasonable accuracy.
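The wavelet stage of such a model separates a spiky price series into a smooth trend and detail bands before regression. A minimal sketch using the Haar wavelet is shown below; the abstract does not specify the wavelet family or decomposition depth actually used, so both are assumptions here.

```python
def haar_step(signal):
    """One level of the (unnormalized) Haar transform: returns
    (approximation, detail) coefficients for an even-length sequence."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return approx, detail

def decompose(prices, levels):
    """Multi-level decomposition: a smooth trend plus detail bands,
    which could then be fed (with load and temperature inputs) to a
    regressor such as a support vector machine."""
    details = []
    approx = list(prices)
    for _ in range(levels):
        approx, d = haar_step(approx)
        details.append(d)
    return approx, details

# Illustrative half-hourly spot prices ($/MWh) with one spike
prices = [30.0, 32.0, 45.0, 41.0, 28.0, 27.0, 90.0, 35.0]
trend, details = decompose(prices, 2)
```

The price spike at 90.0 shows up as a large detail coefficient (27.5) rather than distorting the trend, which is the point of forecasting the bands separately.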
Abstract:
Component-based development (CBD) has become an important emerging topic in the software engineering field. It promises long-sought-after benefits such as increased software reuse, reduced development time to market and, hence, reduced software production cost. Despite the huge potential, the lack of reasoning support and development environments for component modeling and verification may hinder its adoption. Methods and tools that can support component model analysis are highly appreciated by industry. Such tool support should be fully automated as well as efficient. At the same time, the reasoning tool should scale up well, as it may need to handle hundreds or even thousands of components in a modern software system. Furthermore, a distributed environment that can effectively manage and compose components is also desirable. In this paper, we present an approach to the modeling and verification of a newly proposed component model using Semantic Web languages and their reasoning tools. We use the Web Ontology Language and the Semantic Web Rule Language to precisely capture the inter-relationships and constraints among the entities in a component model. Semantic Web reasoning tools are deployed to perform automated analysis of the component models. Moreover, we also propose a service-oriented architecture (SOA)-based Semantic Web environment for CBD. The adoption of Semantic Web services and SOA makes our component environment more reusable, scalable, dynamic and adaptive.
Abstract:
Adjuvants are substances that enhance immune responses and thus improve the efficacy of vaccination. Few adjuvants are available for use in humans, and the one that is most commonly used (alum) often induces suboptimal immunity for protection against many pathogens. There is thus an obvious need to develop new and improved adjuvants. We have therefore taken an approach to adjuvant discovery that uses in silico modeling and structure-based drug-design. As proof-of-principle we chose to target the interaction of the chemokines CCL22 and CCL17 with their receptor CCR4. CCR4 was posited as an adjuvant target based on its expression on CD4(+)CD25(+) regulatory T cells (Tregs), which negatively regulate immune responses induced by dendritic cells (DC), whereas CCL17 and CCL22 are chemotactic agents produced by DC, which are crucial in promoting contact between DC and CCR4(+) T cells. Molecules identified by virtual screening and molecular docking as CCR4 antagonists were able to block CCL22- and CCL17-mediated recruitment of human Tregs and Th2 cells. Furthermore, CCR4 antagonists enhanced DC-mediated human CD4(+) T cell proliferation in an in vitro immune response model and amplified cellular and humoral immune responses in vivo in experimental models when injected in combination with either Modified Vaccinia Ankara expressing Ag85A from Mycobacterium tuberculosis (MVA85A) or recombinant hepatitis B virus surface antigen (rHBsAg) vaccines. The significant adjuvant activity observed provides good evidence supporting our hypothesis that CCR4 is a viable target for rational adjuvant design.
Abstract:
We discuss aggregation of data from neuropsychological patients and the process of evaluating models using data from a series of patients. We argue that aggregation can be misleading but not aggregating can also result in information loss. The basis for combining data needs to be theoretically defined, and the particular method of aggregation depends on the theoretical question and characteristics of the data. We present examples, often drawn from our own research, to illustrate these points. We also argue that statistical models and formal methods of model selection are a useful way to test theoretical accounts using data from several patients in multiple-case studies or case series. Statistical models can often measure fit in a way that explicitly captures what a theory allows; the parameter values that result from model fitting often measure theoretically important dimensions and can lead to more constrained theories or new predictions; and model selection allows the strength of evidence for models to be quantified without forcing this into the artificial binary choice that characterizes hypothesis testing methods. Methods that aggregate and then formally model patient data, however, are not automatically preferred to other methods. Which method is preferred depends on the question to be addressed, characteristics of the data, and practical issues like availability of suitable patients, but case series, multiple-case studies, single-case studies, statistical models, and process models should be complementary methods when guided by theory development.
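One way to quantify strength of evidence without a binary accept/reject decision, as argued above, is through information criteria and their relative weights. The sketch below uses the Akaike information criterion (AIC) and Akaike weights; the log-likelihoods and parameter counts are hypothetical, standing in for two cognitive models fitted to a patient series.

```python
import math

def aic(log_likelihood, n_params):
    """Akaike information criterion: 2k - 2 ln L. Lower is better;
    the 2k term penalizes model complexity."""
    return 2 * n_params - 2 * log_likelihood

def akaike_weights(aics):
    """Graded relative evidence for each model, summing to 1 --
    an alternative to a binary hypothesis-testing verdict."""
    best = min(aics)
    rel = [math.exp(-(a - best) / 2) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical fits: model A (4 parameters) vs model B (7 parameters)
weights = akaike_weights([aic(-120.3, 4), aic(-118.9, 7)])
```

Here model B fits slightly better but pays for its extra parameters, so the weights favor model A without declaring model B "rejected", which is exactly the graded comparison the passage advocates.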
Abstract:
Cell-wall components (cellulose, hemicellulose (oat spelt xylan), lignin (Organosolv)), and model compounds (levoglucosan (an intermediate product of cellulose decomposition) and chlorogenic acid (structurally similar to lignin polymer units)) have been investigated to probe in detail the influence of potassium on their pyrolysis behaviours as well as their uncatalysed decomposition reactions. Cellulose and lignin were pretreated with hydrochloric acid to remove salts and metals, and this demineralized sample was impregnated with 1% of potassium as potassium acetate. Levoglucosan, xylan and chlorogenic acid were mixed with CH3COOK to introduce 1% K. Characterisation was performed using thermogravimetric analysis (TGA) and differential thermal analysis (DTA). In addition to the TGA pyrolysis, pyrolysis-gas chromatography-mass spectrometry (PY-GC-MS) analysis was introduced to examine reaction products. Potassium-catalysed pyrolysis has a huge influence on the char formation stage and increases the char yields considerably (from 7.7% for raw cellulose to 27.7% for potassium-impregnated cellulose; from 5.7% for raw levoglucosan to 20.8% for levoglucosan with CH3COOK added). Major changes in the pyrolytic decomposition pathways were observed for cellulose, levoglucosan and chlorogenic acid. The results for cellulose and levoglucosan are consistent with a base-catalysed route in the presence of the potassium salt which promotes complete decomposition of glucosidic units by a heterolytic mechanism and favours their direct depolymerization and fragmentation to low molecular weight components (e.g. acetic acid, formic acid, glyoxal, hydroxyacetaldehyde and acetol). Base-catalysed polymerization reactions increase the char yield. The effect of potassium on lignin pyrolysis is very significant: the temperature of maximum conversion shifts to lower temperature by 70 K and catalysed polymerization reactions increase the char yield from 37% to 51%.
A similar trend is observed for the model compound, chlorogenic acid. The addition of potassium does not produce a dramatic change in the tar product distribution, although its addition to chlorogenic acid promoted the generation of cyclohexane and phenol derivatives. Postulated thermal decomposition schemes for chlorogenic acid are presented. © 2008 Elsevier B.V. All rights reserved.
Abstract:
Purpose: Our study explores the mediating role of discrete emotions in the relationships between employee perceptions of distributive and procedural injustice, regarding an annual salary raise, and counterproductive work behaviors (CWBs). Design/Methodology/Approach: Survey data were provided by 508 individuals from telecom and IT companies in Pakistan. Confirmatory factor analysis, structural equation modeling, and bootstrapping were used to test our hypothesized model. Findings: We found a good fit between the data and our tested model. As predicted, anger (and not sadness) was positively related to aggressive CWBs (abuse against others and production deviance) and fully mediated the relationship between perceived distributive injustice and these CWBs. Against predictions, however, neither sadness nor anger was significantly related to employee withdrawal. Implications: Our findings provide organizations with an insight into the emotional consequences of unfair HR policies, and the potential implications for CWBs. Such knowledge may help employers to develop training and counseling interventions that support the effective management of emotions at work. Our findings are particularly salient for national and multinational organizations in Pakistan. Originality/Value: This is one of the first studies to provide empirical support for the relationships between in/justice, discrete emotions and CWBs in a non-Western (Pakistani) context. Our study also provides new evidence for the differential effects of outward/inward emotions on aggressive/passive CWBs. © 2012 Springer Science+Business Media, LLC.
Abstract:
This article characterizes key weaknesses in the ability of current digital libraries to support scholarly inquiry, and as a way to address these, proposes computational services grounded in semiformal models of the naturalistic argumentation commonly found in research literatures. It is argued that a design priority is to balance formal expressiveness with usability, making it critical to coevolve the modeling scheme with appropriate user interfaces for argument construction and analysis. We specify the requirements for an argument modeling scheme for use by untrained researchers and describe the resulting ontology, contrasting it with other domain modeling and semantic web approaches, before discussing passive and intelligent user interfaces designed to support analysts in the construction, navigation, and analysis of scholarly argument structures in a Web-based environment. © 2007 Wiley Periodicals, Inc. Int J Int Syst 22: 17–47, 2007.
Abstract:
* The research work reviewed in this paper has been carried out in the context of the Russian Foundation for Basic Research funded project "Adaptable Intelligent Interfaces Research and Development for Distance Learning Systems" (grant N 02-01-81019). The authors wish to acknowledge the co-operation with the Byelorussian partners of this project.
Abstract:
Formal grammars can be used for describing complex repeatable structures such as DNA sequences. In this paper, we describe the structural composition of DNA sequences using a context-free stochastic L-grammar. L-grammars are a special class of parallel grammars that can model the growth of living organisms, e.g. plant development, and the morphology of a variety of organisms. We believe that parallel grammars can also be used for modeling genetic mechanisms and sequences such as promoters. Promoters are short regulatory DNA sequences located upstream of a gene. Detection of promoters in DNA sequences is important for successful gene prediction. Promoters can be recognized by certain patterns that are conserved within a species, but there are many exceptions, which makes promoter recognition a complex problem. We replace the problem of promoter recognition by induction of context-free stochastic L-grammar rules, which are later used for the structural analysis of promoter sequences. L-grammar rules are derived automatically from the drosophila and vertebrate promoter datasets using a genetic programming technique and their fitness is evaluated using a Support Vector Machine (SVM) classifier. The artificial promoter sequences generated using the derived L-grammar rules are analyzed and compared with natural promoter sequences.
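The defining property of an L-grammar, as described above, is that rewriting rules are applied to every symbol in parallel, and in the stochastic variant each rule fires with a probability. A minimal generator is sketched below; the DNA-alphabet rules and their probabilities are hypothetical stand-ins, since the real rules would be induced from promoter data by genetic programming as the paper describes.

```python
import random

def lsystem_step(string, rules, rng):
    """Apply stochastic rules to every symbol in parallel (the defining
    property of L-grammars, unlike sequential rewriting)."""
    out = []
    for sym in string:
        options = rules.get(sym)
        if options is None:
            out.append(sym)  # terminal symbol: copied unchanged
        else:
            expansions, probs = zip(*options)
            out.append(rng.choices(expansions, weights=probs)[0])
    return "".join(out)

def generate(axiom, rules, steps, seed=0):
    """Derive a sequence by repeated parallel rewriting from the axiom."""
    rng = random.Random(seed)
    s = axiom
    for _ in range(steps):
        s = lsystem_step(s, rules, rng)
    return s

# Hypothetical stochastic rules: S expands to a TATA-box-like or
# CAAT-box-like motif; A optionally extends.
rules = {"S": [("TATA", 0.7), ("CAAT", 0.3)],
         "A": [("AT", 0.5), ("A", 0.5)]}
seq = generate("S", rules, 2, seed=1)
```

Generated sequences like `seq` can then be compared against natural promoters, or the rule probabilities scored with a classifier, which is the role the SVM plays in the fitness evaluation described above.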