991 results for Implicit model
Abstract:
Software engineering researchers are challenged to provide increasingly more powerful levels of abstraction to address the rising complexity inherent in software solutions. One new development paradigm that places models as the abstraction at the forefront of the development process is Model-Driven Software Development (MDSD). MDSD considers models as first-class artifacts, extending the capability for engineers to use concepts from the problem domain of discourse to specify apropos solutions. A key component in MDSD is domain-specific modeling languages (DSMLs), which are languages with focused expressiveness targeting a specific taxonomy of problems. The de facto approach is to first transform DSML models to an intermediate artifact in a high-level language (HLL), e.g., Java or C++, then execute that resulting code. Our research group has developed a class of DSMLs, referred to as interpreted DSMLs (i-DSMLs), where models are directly interpreted by a specialized execution engine with semantics based on model changes at runtime. This execution engine uses a layered architecture and is referred to as a domain-specific virtual machine (DSVM). As the domain-specific model being executed descends the layers of the DSVM, the semantic gap between the user-defined model and the services provided by the underlying infrastructure is closed. The focus of this research is the synthesis engine, the layer in the DSVM which transforms i-DSML models into executable scripts for the next lower layer to process. The appeal of an i-DSML is constrained because it possesses unique semantics contained within the DSVM. Existing DSVMs for i-DSMLs exhibit tight coupling between the implicit model of execution and the semantics of the domain, making it difficult to develop DSVMs for new i-DSMLs without a significant investment in resources. At the onset of this research, only one i-DSML had been created for the user-centric communication domain using the aforementioned approach. This i-DSML is the Communication Modeling Language (CML) and its DSVM is the Communication Virtual Machine (CVM). A major problem with the CVM's synthesis engine is that the domain-specific knowledge (DSK) and the model of execution (MoE) are tightly interwoven; consequently, subsequent DSVMs would need to be developed from inception with no reuse of expertise. This dissertation investigates how to decouple the DSK from the MoE and subsequently produce a generic model of execution (GMoE) from the remaining application logic. This GMoE can be reused to instantiate synthesis engines for DSVMs in other domains. The generalized approach to developing the model synthesis component of i-DSML interpreters utilizes a reusable framework loosely coupled to the DSK through swappable framework extensions. This approach involves first creating an i-DSML and its DSVM for a second domain, demand-side smart grid, or microgrid, energy management, and designing the synthesis engine so that the DSK and MoE are easily decoupled. To validate the utility of the approach, the synthesis engines (SEs) are instantiated using the GMoE and the DSKs of the two aforementioned domains, and an empirical study is performed to support our claim of reduced development effort.
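The decoupling described above might be sketched, purely as an illustration, as a generic synthesis-engine framework parameterized by a swappable domain-specific knowledge extension. The interface, class names and the microgrid rule below are hypothetical, not the dissertation's actual design.

```python
# Illustrative sketch only: a generic model of execution (GMoE) that delegates
# all domain-specific decisions to a pluggable DSK extension. Interface and
# class names are hypothetical, not the CVM/dissertation design.
from abc import ABC, abstractmethod

class DomainKnowledge(ABC):
    """Swappable framework extension holding the domain-specific knowledge."""
    @abstractmethod
    def interpret_change(self, model_delta: dict) -> list[str]:
        """Map a runtime model change to domain-level operations."""

class GenericSynthesisEngine:
    """Reusable model of execution: detects model changes and synthesizes
    scripts for the next lower DSVM layer, independent of any domain."""
    def __init__(self, dsk: DomainKnowledge):
        self.dsk = dsk
        self.current_model: dict = {}

    def on_model_update(self, new_model: dict) -> list[str]:
        delta = {k: v for k, v in new_model.items()
                 if self.current_model.get(k) != v}   # model-change semantics
        self.current_model = new_model
        return self.dsk.interpret_change(delta)       # domain logic stays in the DSK

class MicrogridDSK(DomainKnowledge):
    """Toy DSK for a demand-side energy-management domain."""
    def interpret_change(self, model_delta: dict) -> list[str]:
        return [f"setLoad({dev}, {state})" for dev, state in model_delta.items()]

engine = GenericSynthesisEngine(MicrogridDSK())
print(engine.on_model_update({"heater": "off", "battery": "charge"}))
```

Instantiating a synthesis engine for another domain would then amount to supplying a different DomainKnowledge extension while reusing the engine unchanged, which is the reuse claim the empirical study evaluates.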
Abstract:
Using a numerical implicit model for root water extraction by a single root in a symmetric radial flow problem, based on the Richards equation and the combined convection-dispersion equation, we investigated some aspects of the response of root water uptake to combined water and osmotic stress. The model implicitly incorporates the effect of simultaneous pressure head and osmotic head on root water uptake, and does not require additional assumptions (additive or multiplicative) to derive the combined effect of water and salt stress. Simulation results showed that relative transpiration equals relative matric flux potential, which is defined as the matric flux potential calculated with an osmotic pressure head-dependent lower bound of integration, divided by the matric flux potential at the onset of limiting hydraulic conditions. In the falling rate phase, the osmotic head near the root surface was shown to increase in time due to decreasing root water extraction rates, causing a more gradual decline of relative transpiration than with water stress alone. Results furthermore show that osmotic stress effects on uptake depend on pressure head or water content, allowing a refinement of the approach in which fixed reduction factors based on the electrical conductivity of the saturated soil solution extract are used. One of the consequences is that osmotic stress is predicted to occur in situations not predicted by the saturation extract analysis approach. It is also shown that this way of combining salinity and water as stressors yields results that are different from a purely multiplicative approach. An analytical steady state solution is presented to calculate the solute content at the root surface, and compared with the outputs of the numerical model. Using the analytical solution, a method has been developed to estimate relative transpiration as a function of system parameters, which are often already used in vadose zone models: potential transpiration rate, root length density, minimum root surface pressure head, and soil θ(h) and K(h) functions.
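The central relationship reported above can be written compactly. The notation below is an assumed reading of the abstract, with K(h) the unsaturated hydraulic conductivity:

\[
M^{*}(h) = \int_{h_{0}(h_{osm})}^{h} K(h')\,\mathrm{d}h',
\qquad
T_{rel} = \frac{T_{act}}{T_{pot}} = \frac{M^{*}(h)}{M_{lim}},
\]

where \(h_{0}(h_{osm})\) is the osmotic pressure head-dependent lower bound of integration and \(M_{lim}\) is the matric flux potential at the onset of limiting hydraulic conditions, so that \(T_{rel}=1\) under non-limiting conditions and declines in the falling rate phase.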
Abstract:
The interaction mean free path between neutrons and TRISO particles is simulated using scripts written in MATLAB, to address the increasing error observed at high packing factors in the reactor physics code Serpent. Neutron movement is tracked both in an unbounded and in a bounded space. Depending on the program, their track is calculated either directly and linearly, using the position vectors of the neutrons and the surface equations of all the fuel particles; by dividing the space into multiple subspaces, each of which contains a fraction of the total number of particles, and choosing the particles from those subspaces through which the neutron passes; or by choosing the particles that lie within an infinite cylinder formed around the movement axis of the neutron. The estimate for the mean free path from the current analytical model utilized by Serpent, based on an exponential distribution, is used as a reference result. The results from the implicit model in Serpent imply an overly long mean free path at high packing factors. The obtained results support this observation by producing, at a packing factor of 17%, an approximately 2.46% shorter mean free path compared to the reference model. This is further supported by the packing factor experienced by the neutron, the simulation of which resulted in a 17.29% packing factor. It was also observed that neutrons leaving from the surfaces of the fuel particles, in contrast to those starting inside the moderator, do not follow the exponential distribution. The current model, as it is, is thus not valid for determining the free path lengths of the neutrons.
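A rough sketch of the geometric tracking described above (direct use of particle surface equations, with the infinite-cylinder test as a pre-selection step) might look as follows. The particle radius, box size, particle count and the absence of an overlap check are toy assumptions, not the thesis's MATLAB implementation.

```python
# Illustrative sketch (not the thesis MATLAB scripts): trace neutrons against
# randomly placed spherical particles and record the geometric free path.
# Particle radius, box size and counts are made-up demonstration parameters.
import numpy as np

rng = np.random.default_rng(0)

def nearest_sphere_hit(origin, direction, centers, radius):
    """Distance along `direction` to the first sphere surface, or inf."""
    oc = centers - origin                           # vectors from origin to centers
    t_mid = oc @ direction                          # projection on the flight axis
    d2 = np.einsum('ij,ij->i', oc, oc) - t_mid**2   # squared miss distance
    hit = (d2 <= radius**2) & (t_mid > 0.0)         # cylinder test: candidates ahead
    if not hit.any():
        return np.inf
    t_hit = t_mid[hit] - np.sqrt(radius**2 - d2[hit])  # entry point on each sphere
    return np.min(t_hit[t_hit > 0.0], initial=np.inf)

# Random particle cloud (no overlap check) at a nominal packing factor.
box, radius, n_particles = 10.0, 0.25, 4000
centers = rng.uniform(0.0, box, size=(n_particles, 3))

# Track isotropic neutrons started inside the moderator.
paths = []
for _ in range(2000):
    origin = rng.uniform(0.0, box, size=3)
    direction = rng.normal(size=3)
    direction /= np.linalg.norm(direction)
    paths.append(nearest_sphere_hit(origin, direction, centers, radius))

paths = np.array(paths)
print("mean geometric free path:", paths[np.isfinite(paths)].mean())
```

Comparing the sampled geometric free paths against the analytic exponential estimate is the kind of check the abstract describes; neutrons started on particle surfaces would be seeded at the sphere boundaries instead of uniformly in the box.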
Abstract:
This paper describes a general, trainable architecture for object detection that has previously been applied to face and people detection, with a new application to car detection in static images. Our technique is a learning based approach that uses a set of labeled training data from which an implicit model of an object class -- here, cars -- is learned. Instead of pixel representations that may be noisy and therefore not provide a compact representation for learning, our training images are transformed from pixel space to that of Haar wavelets that respond to local, oriented, multiscale intensity differences. These feature vectors are then used to train a support vector machine classifier. The detection of cars in images is an important step in applications such as traffic monitoring, driver assistance systems, and surveillance, among others. We show several examples of car detection on out-of-sample images and show an ROC curve that highlights the performance of our system.
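The pipeline in this abstract (pixels to Haar wavelet responses to a support vector machine) can be sketched roughly as below. The transform here is a plain 2x2 Haar decomposition, the data are random placeholders, and the polynomial kernel is only a common choice, not necessarily the one used in the paper.

```python
# Minimal sketch of the described learning pipeline, not the authors'
# implementation: images are mapped from pixel space to oriented, multiscale
# Haar-wavelet responses and an SVM is trained on the feature vectors.
import numpy as np
from sklearn.svm import SVC

def haar_features(img, levels=3):
    """Oriented, multiscale intensity differences via a 2x2 Haar transform."""
    img = img.astype(float)
    feats = []
    for _ in range(levels):
        a = img[0::2, 0::2]; b = img[0::2, 1::2]
        c = img[1::2, 0::2]; d = img[1::2, 1::2]
        feats += [(a - b + c - d).ravel(),   # vertical detail
                  (a + b - c - d).ravel(),   # horizontal detail
                  (a - b - c + d).ravel()]   # diagonal detail
        img = (a + b + c + d) / 4.0          # low-pass image for the next scale
    return np.concatenate(feats)

# Hypothetical training set: 128x128 grayscale crops, 1 = car, 0 = background.
X_imgs = np.random.rand(40, 128, 128)
y = np.random.randint(0, 2, size=40)

X = np.array([haar_features(im) for im in X_imgs])
clf = SVC(kernel='poly', degree=2).fit(X, y)   # polynomial kernel: one common choice
print(clf.predict(X[:5]))
```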
Abstract:
Valuation is the process of estimating price. The methods used to determine value attempt to model the thought processes of the market and thus estimate price by reference to observed historic data. This can be done using either an explicit model, that models the worth calculation of the most likely bidder, or an implicit model, that uses historic data suitably adjusted as a short cut to determine value by reference to previous similar sales. The former is generally referred to as the Discounted Cash Flow (DCF) model and the latter as the capitalisation (or All Risk Yield) model. However, regardless of the technique used, the valuation will be affected by uncertainties: uncertainty in the comparable data available; uncertainty in the current and future market conditions; and uncertainty in the specific inputs for the subject property. These input uncertainties will translate into an uncertainty with the output figure, the estimate of price. In a previous paper, we have considered the way in which uncertainty is allowed for in the capitalisation model in the UK. In this paper, we extend the analysis to look at the way in which uncertainty can be incorporated into the explicit DCF model. This is done by recognising that the input variables are uncertain and will have a probability distribution pertaining to each of them. Thus, by utilising a probability-based valuation model (using Crystal Ball) it is possible to incorporate uncertainty into the analysis and address the shortcomings of the current model. Although the capitalisation model is discussed, the paper concentrates upon the application of Crystal Ball to the Discounted Cash Flow approach.
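The idea behind a probability-based DCF can be illustrated with a small Monte Carlo sketch. Crystal Ball itself is a spreadsheet add-in; the distributions and parameter values below are hypothetical assumptions, not figures from the paper.

```python
# Monte Carlo DCF sketch: uncertain inputs propagate to a distribution of
# price estimates rather than a single point figure. All numbers are toy values.
import numpy as np

rng = np.random.default_rng(1)
n_trials, years = 10_000, 10

rent = rng.normal(100_000, 10_000, (n_trials, years))    # uncertain annual rent
growth = rng.normal(0.02, 0.01, n_trials)                # uncertain rental growth
discount = rng.triangular(0.06, 0.08, 0.10, n_trials)    # uncertain discount rate
exit_yield = rng.triangular(0.05, 0.06, 0.08, n_trials)  # uncertain exit yield

t = np.arange(1, years + 1)
cash_flows = rent * (1 + growth[:, None]) ** (t - 1)
pv_income = (cash_flows / (1 + discount[:, None]) ** t).sum(axis=1)
terminal = cash_flows[:, -1] / exit_yield / (1 + discount) ** years
values = pv_income + terminal

# Report the distribution of the output figure.
print("mean", values.mean(), "5th-95th percentile", np.percentile(values, [5, 95]))
```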
Abstract:
This paper estimates the implicit model, especially the roles of size asymmetries and firm numbers, used by the European Commission to identify mergers with coordinated effects. This subset of cases offers an opportunity to shed empirical light on the conditions where a Competition Authority believes tacit collusion is most likely to arise. We find that, for the Commission, tacit collusion is a rare phenomenon, largely confined to markets of two, more or less symmetric, players. This is consistent with recent experimental literature, but contrasts with the facts on ‘hard-core’ collusion in which firm numbers and asymmetries are often much larger.
Abstract:
This paper compares two linear programming (LP) models for shift scheduling in services where homogeneously-skilled employees are available at limited times. Although both models are based on set covering approaches, one explicitly matches employees to shifts, while the other imposes this matching implicitly. Each model is used in three forms—one with complete, another with very limited meal break placement flexibility, and a third without meal breaks—to provide initial schedules to a completion/improvement heuristic. The term completion/improvement heuristic is used to describe a construction/improvement heuristic operating on a starting schedule. On 80 test problems varying widely in scheduling flexibility, employee staffing requirements, and employee availability characteristics, all six LP-based procedures generated lower cost schedules than a comparison from-scratch construction/improvement heuristic. This heuristic, which perpetually maintains an explicit matching of employees to shifts, consists of three phases which add, drop, and modify shifts. In terms of schedule cost, schedule generation time, and model size, the procedures based on the implicit model performed better, as a group, than those based on the explicit model. The LP model with complete break placement flexibility and implicitly matching employees to shifts generated schedules costing 6.7% less than those developed by the from-scratch heuristic.
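For orientation, a generic set-covering shift-scheduling LP of the kind both models build on can be written as follows. The symbols, and the reading of the explicit/implicit distinction, are schematic rather than the paper's exact formulations:

\[
\min \sum_{j \in J} c_j x_j
\quad \text{subject to} \quad
\sum_{j \in J} a_{tj}\, x_j \;\ge\; r_t \;\; \forall t \in T,
\qquad x_j \ge 0,
\]

where \(x_j\) counts the employees assigned to shift \(j\), \(a_{tj}=1\) if shift \(j\) covers planning period \(t\), and \(r_t\) is the staffing requirement in period \(t\). On this reading, the explicit model would add individual employee-to-shift assignment variables to respect limited availability, whereas the implicit model enforces availability through aggregate constraints on the \(x_j\), which keeps the model smaller.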
Abstract:
Traditional cutoff regularization schemes of the Nambu-Jona-Lasinio model limit the applicability of the model to energy-momentum scales much below the value of the regularizing cutoff. In particular, the model cannot be used to study quark matter with Fermi momenta larger than the cutoff. In the present work, an extension of the model to high temperatures and densities recently proposed by Casalbuoni, Gatto, Nardulli, and Ruggieri is used in connection with an implicit regularization scheme. This is done by making use of scaling relations of the divergent one-loop integrals that relate these integrals at different energy-momentum scales. Fixing the pion decay constant at the chiral symmetry breaking scale in the vacuum, the scaling relations predict a running coupling constant that decreases as the regularization scale increases, implementing in a schematic way the property of asymptotic freedom of quantum chromodynamics. If the regularization scale is allowed to increase with density and temperature, the coupling will decrease with density and temperature, extending in this way the applicability of the model to high densities and temperatures. These results are obtained without specifying an explicit regularization. As an illustration of the formalism, numerical results are obtained for the finite density and finite temperature quark condensate and applied to the problem of color superconductivity at high quark densities and finite temperature.
Abstract:
A boundary element method (BEM) formulation to predict the behavior of solids exhibiting displacement (strong) discontinuity is presented. In this formulation, the effects of the displacement jump of a discontinuity interface embedded in an internal cell are reproduced by an equivalent strain field over the cell. To compute the stresses, this equivalent strain field is assumed as the inelastic part of the total strain. As a consequence, the non-linear BEM integral equations that result from the proposed approach are similar to those of the implicit BEM based on initial strains. Since discontinuity interfaces can be introduced inside the cell independently of the cell boundaries, the proposed BEM formulation, combined with a tracking scheme to trace the discontinuity path during the analysis, allows for arbitrary discontinuity propagation using a fixed mesh. A simple technique to track the crack path is outlined. This technique is based on the construction of a polygonal line formed by segments inside the cells, in which the assumed failure criterion is reached. Two experimental concrete fracture tests were analyzed to assess the performance of the proposed formulation.
Abstract:
In the protein folding problem, solvent-mediated forces are commonly represented by intra-chain pairwise contact energy. Although this approximation has proven to be useful in several circumstances, it is limited in some other aspects of the problem. Here we show that it is possible to achieve two models to represent the chain-solvent system, one of them with implicit and the other with explicit solvent, such that both reproduce the same thermodynamic results. Firstly, lattice models, treated by analytical methods, were used to show that the implicit and explicit representation of solvent effects can be energetically equivalent only if local solvent properties are time and spatially invariant. Next, applying the same reasoning used for the lattice models, two inter-consistent Monte Carlo off-lattice models for implicit and explicit solvent are constructed, where now, in the latter, the solvent properties are allowed to fluctuate. Then, it is shown that the chain configurational evolution as well as the globule equilibrium conformation are significantly distinct for implicit and explicit solvent systems. Actually, strongly contrasting with the implicit solvent version, the explicit solvent model predicts: (i) a malleable globule, in agreement with the estimated large protein-volume fluctuations; (ii) thermal conformational stability, resembling the conformational heat resistance of globular proteins, in which radii of gyration are practically insensitive to thermal effects over a relatively wide range of temperatures; and (iii) smaller radii of gyration at higher temperatures, indicating that the chain conformational entropy in the unfolded state is significantly smaller than that estimated from random coil configurations. Finally, we comment on the meaning of these results with respect to the understanding of the folding process. (C) 2009 Elsevier B.V. All rights reserved.
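As a schematic of the contrast drawn above, a single Metropolis Monte Carlo move can be scored either with an implicit-solvent pairwise contact energy or with an energy that also sums over explicit solvent particles. All parameters, cutoffs and move sizes below are toy assumptions, not the paper's models.

```python
# Schematic Metropolis step contrasting implicit and explicit solvent energies.
import numpy as np

rng = np.random.default_rng(2)
kT, cutoff, eps_cc, eps_cs = 1.0, 1.2, -1.0, -0.3   # toy parameters

def implicit_energy(chain):
    d = np.linalg.norm(chain[:, None] - chain[None, :], axis=-1)
    contacts = (d < cutoff) & ~np.eye(len(chain), dtype=bool)
    return eps_cc * contacts.sum() / 2               # intra-chain pairwise contacts

def explicit_energy(chain, solvent):
    d = np.linalg.norm(chain[:, None] - solvent[None, :], axis=-1)
    return implicit_energy(chain) + eps_cs * (d < cutoff).sum()

def metropolis_step(chain, solvent, energy):
    trial = chain.copy()
    i = rng.integers(len(chain))
    trial[i] += rng.normal(scale=0.1, size=3)        # displace one monomer
    dE = energy(trial, solvent) - energy(chain, solvent)
    return trial if rng.random() < np.exp(-dE / kT) else chain

chain = rng.normal(size=(20, 3))
solvent = rng.normal(scale=3.0, size=(200, 3))
for _ in range(1000):                                # explicit-solvent run
    chain = metropolis_step(chain, solvent, explicit_energy)
# An implicit-solvent run would instead pass: lambda c, s: implicit_energy(c)
```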
Abstract:
The dispersion model with mixed boundary conditions uses a single parameter, the dispersion number, to describe the hepatic elimination of xenobiotics and endogenous substances. An implicit a priori assumption of the model is that the transit time density of intravascular indicators is approximated by an inverse Gaussian distribution. This approximation is limited in that the model poorly describes the tail part of the hepatic outflow curves of vascular indicators. A sum of two inverse Gaussian functions is proposed as an alternative, more flexible empirical model for transit time densities of vascular references. This model suggests that a more accurate description of the tail portion of vascular reference curves yields an elimination rate constant (or intrinsic clearance) which is 40% less than predicted by the dispersion model with mixed boundary conditions. The results emphasize the need to accurately describe outflow curves in using them as a basis for determining pharmacokinetic parameters using hepatic elimination models. (C) 1997 Society for Mathematical Biology.
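In the standard parameterization (assumed here; the paper may use a different one), the inverse Gaussian transit time density and the proposed two-component sum read:

\[
f_{IG}(t;\mu,\lambda) = \sqrt{\frac{\lambda}{2\pi t^{3}}}\,
\exp\!\left[-\frac{\lambda (t-\mu)^{2}}{2\mu^{2} t}\right],
\qquad
f(t) = p\, f_{IG}(t;\mu_1,\lambda_1) + (1-p)\, f_{IG}(t;\mu_2,\lambda_2),
\]

with \(0 \le p \le 1\); the second, slower component supplies the extra flexibility needed to capture the tail of the hepatic outflow curves of vascular references.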
Abstract:
The IWA Anaerobic Digestion Modelling Task Group was established in 1997 at the 8th World Congress on Anaerobic Digestion (Sendai, Japan) with the goal of developing a generalised anaerobic digestion model. The structured model includes multiple steps describing biochemical as well as physicochemical processes. The biochemical steps include disintegration from homogeneous particulates to carbohydrates, proteins and lipids; extracellular hydrolysis of these particulate substrates to sugars, amino acids, and long chain fatty acids (LCFA), respectively; acidogenesis from sugars and amino acids to volatile fatty acids (VFAs) and hydrogen; acetogenesis of LCFA and VFAs to acetate; and separate methanogenesis steps from acetate and hydrogen/CO2. The physico-chemical equations describe ion association and dissociation, and gas-liquid transfer. Implemented as a differential and algebraic equation (DAE) set, there are 26 dynamic state concentration variables, and 8 implicit algebraic variables per reactor vessel or element. Implemented as differential equations (DE) only, there are 32 dynamic concentration state variables.
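To illustrate the kind of implicit algebraic variables referred to (the notation here is schematic, not the task group's exact equation set), each acid-base pair contributes an ionized-species relation, and a charge balance is solved implicitly for the hydrogen ion concentration alongside the differential mass balances:

\[
S_{A^-} = \frac{K_{a,A}\, S_{A,\mathrm{tot}}}{K_{a,A} + S_{H^+}},
\qquad
\sum_{\text{cations}} S_{c^+} + S_{H^+} \;=\; \sum_{\text{anions}} S_{a^-} + \frac{K_w}{S_{H^+}},
\]

which is why the DAE implementation carries fewer dynamic states than the pure DE form: the ionized species and pH are recovered algebraically at each step rather than integrated.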
Abstract:
Two studies investigated interactions between health providers and patients, using Semin and Fiedler's linguistic category model. In Study 1 the linguistic category model was used to examine perceptions of the levels of linguistic intergroup bias in descriptions of conversations with health professionals in hospitals. Results indicated a favourable linguistic bias toward health professionals in satisfactory conversations but low levels of linguistic intergroup bias in unsatisfactory conversations. In Study 2, the language of patients and health professionals in videotaped interactions was examined for levels of linguistic intergroup bias. Interpersonally salient interactions showed less linguistic intergroup bias than did intergroup ones. Results also indicate that health professionals have high levels of control in all types of medical encounters with patients. Nevertheless, the extent to which patients are able to interact with health professionals as individuals, rather than only as professionals, is a key determinant of satisfaction with the interaction.
Abstract:
The research presented in this paper proposes a novel quantitative model for decomposing and assessing the Value for the Customer. The proposed approach builds on the different dimensions of the Value Network analysis proposed by Verna Allee, having as background the concept of Value for the Customer proposed by Woodall. In this context, the Value for the Customer is modelled as a relationship established between the exchanged deliverables and a combination of tangible and intangible assets projected into their endogenous or exogenous dimensions. The Value Network Analysis of the deliverables exchange enables an in-depth understanding of this frontier and the implicit modelling of co-creation scenarios. The proposed Conceptual Model for Decomposing Value for the Customer combines several concepts: from the marketing area, the concept of Value for the Customer; from the area of intellectual capital, the concept of Value Network Analysis; from the collaborative networks area, the perspective of the enterprise life cycle and the endogenous and exogenous perspectives; finally, the proposed model is supported by a mathematical formal description that stems from the area of Multi-Criteria Decision Making. The whole concept is illustrated in the context of a case study of an enterprise in the footwear industry (Pontechem). The merits of this approach seem evident from the contact with Pontechem, as it provides a structured approach for enterprises to assess the adequacy of their value proposition to the client/customer needs and how these relate to their endogenous and/or exogenous tangible or intangible assets. The proposed model, as a tool, may therefore be a useful instrument in supporting the commercialisation of new products and/or services.