956 results for Constraint based modeling


Relevance:

90.00%

Publisher:

Abstract:

EuroPES 2009

Relevance:

90.00%

Publisher:

Abstract:

Many testing methods are based on program paths, and a well-known problem with them is that some paths are infeasible. To decide the feasibility of a path, one may solve the set of constraints it induces. In this paper, we describe constraint-based tools that can be used for this purpose. They accept constraints expressed in a natural form, which may involve variables of different types such as integers, Booleans, reals and fixed-size arrays. The constraint solver is an extension of a Boolean satisfiability checker that makes use of a linear programming package. The solving algorithm is described, and examples are given to illustrate the use of the tools. The feasibility of many paths from the testing literature can be decided in a reasonable amount of time.
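A minimal sketch of the idea, using the Z3 SMT solver as a stand-in for the paper's own SAT/LP-based tools: the branch conditions along a path are conjoined and checked for satisfiability, and an unsatisfiable conjunction means the path is infeasible.

```python
# Stand-in for the paper's solver: the Z3 SMT solver. The path below
# takes the true branch of "if x > 10" and then the true branch of
# "if x < 5"; the conjunction is unsatisfiable, so the path is infeasible.
from z3 import Int, Solver, sat

x = Int("x")
s = Solver()
s.add(x > 10)  # condition on the first branch of the path
s.add(x < 5)   # condition on the second branch

if s.check() == sat:
    print("feasible, e.g.", s.model())
else:
    print("infeasible path")
```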

Relevance:

90.00%

Publisher:

Abstract:

This report describes MM, a computer program that can model a variety of mechanical and fluid systems. Given a system's structure and qualitative behavior, MM searches for models using an energy-based modeling framework. MM uses general facts about physical systems to relate behavioral and model properties. These facts enable a more focused search for models than would be obtained by mere comparison of desired and predicted behaviors. When these facts do not apply, MM uses behavior-constrained qualitative simulation to verify candidate models efficiently. MM can also design experiments to distinguish among multiple candidate models.

Relevance:

90.00%

Publisher:

Abstract:

The constraint paradigm is a model of computation in which values are deduced whenever possible, under the limitation that deductions be local in a certain sense. One may visualize a constraint 'program' as a network of devices connected by wires. Data values may flow along the wires, and computation is performed by the devices. A device computes using only locally available information (with a few exceptions), and places newly derived values on other, locally attached wires. In this way computed values are propagated. An advantage of the constraint paradigm (not unique to it) is that a single relationship can be used in more than one direction. The connections to a device are not labeled as inputs and outputs; a device will compute with whatever values are available, and produce as many new values as it can. General theorem provers are capable of such behavior, but tend to suffer from combinatorial explosion; it is not usually useful to derive all the possible consequences of a set of hypotheses. The constraint paradigm places a certain kind of limitation on the deduction process. The limitations imposed by the constraint paradigm are not the only ones possible. It is argued, however, that they are restrictive enough to forestall combinatorial explosion in many interesting computational situations, yet permissive enough to allow useful computations in practical situations. Moreover, the paradigm is intuitive: it is easy to visualize the computational effects of these particular limitations, and the paradigm is a natural way of expressing programs for certain applications, in particular relationships arising in computer-aided design. A number of implementations of constraint-based programming languages are presented. A progression of ever more powerful languages is described, complete implementations are presented, and design difficulties and alternatives are discussed. The goal approached, though not quite reached, is a complete programming system which will implicitly support the constraint paradigm to the same extent that LISP, say, supports automatic storage management.
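A minimal Python sketch of the paradigm as described (class and function names are illustrative, not from the thesis's languages): wires carry values, a device computes in whichever direction it has enough information, and newly set values propagate to attached devices.

```python
# Wires carry values; devices compute locally and propagate results.
class Wire:
    def __init__(self):
        self.value, self.devices = None, []

    def set(self, value):
        if self.value is None:           # real systems also check for contradictions
            self.value = value
            for d in self.devices:       # propagate to attached devices
                d.update()

class Adder:
    """Enforces a + b = s in any direction, given any two known values."""
    def __init__(self, a, b, s):
        self.a, self.b, self.s = a, b, s
        for w in (a, b, s):
            w.devices.append(self)

    def update(self):
        a, b, s = self.a.value, self.b.value, self.s.value
        if a is not None and b is not None: self.s.set(a + b)
        elif a is not None and s is not None: self.b.set(s - a)
        elif b is not None and s is not None: self.a.set(s - b)

# The relationship a + b = s used "backwards": knowing b and s deduces a.
a, b, s = Wire(), Wire(), Wire()
Adder(a, b, s)
b.set(3); s.set(10)
print(a.value)  # 7
```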

Relevance:

90.00%

Publisher:

Abstract:

Nonrigid motion can be described as morphing or blending between extremal shapes, e.g., heart motion can be described as transitioning between the systole and diastole states. Using physically-based modeling techniques, shape similarity can be measured in terms of forces and strain. This provides a physically-based coordinate system in which motion is characterized in terms of physical similarity to a set of extremal shapes. Having such a low-dimensional characterization of nonrigid motion allows for the recognition and the comparison of different types of nonrigid motion.
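A hedged illustration of the coordinate-system idea, on stand-in data: here the blend weights of the extremal shapes are recovered by ordinary least squares, whereas the paper measures similarity via physically-based forces and strain.

```python
# Recover the blend weights that express an observed shape in the
# low-dimensional coordinate system spanned by extremal shapes.
import numpy as np

rng = np.random.default_rng(0)
systole = rng.normal(size=30)             # extremal shape 1 (stand-in data)
diastole = rng.normal(size=30)            # extremal shape 2
observed = 0.7 * systole + 0.3 * diastole # a frame of the nonrigid motion

B = np.column_stack([systole, diastole])  # basis of extremal shapes
w, *_ = np.linalg.lstsq(B, observed, rcond=None)
print(w)                                  # ~ [0.7, 0.3]: the motion coordinates
```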

Relevance:

90.00%

Publisher:

Abstract:

In this paper, a method for modeling diffusive boundaries in finite difference time domain (FDTD) room acoustics simulations with the use of impedance filters is presented. The proposed technique is based on the concept of phase grating diffusers, and realized by designing boundary impedance filters from normal-incidence reflection filters with added delay. These added delays, which correspond to the diffuser well depths, are varied across the boundary surface, and implemented using Thiran allpass filters. The proposed method for simulating sound scattering is suitable for modeling high frequency diffusion caused by small variations in surface roughness and, more generally, diffusers characterized by narrow wells with infinitely thin separators. This concept is also applicable to other wave-based modeling techniques. The approach is validated by comparing numerical results for Schroeder diffusers to measured data. In addition, it is proposed that irregular surfaces be modeled by shaping them with Brownian noise, giving good control over the sound scattering properties of the simulated boundary through two parameters, namely the spectral density exponent and the maximum well depth.
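A sketch of the Thiran allpass design used to realize the added fractional delays; the formula below is the standard textbook one, and the function name and test values are illustrative.

```python
# Thiran allpass: denominator coefficients a[0..N] of an order-N filter
# approximating a delay of d samples (d should be close to N). The
# numerator is the reversed denominator:
#   H(z) = (a[N] + a[N-1] z^-1 + ... + a[0] z^-N) / (a[0] + ... + a[N] z^-N)
from math import comb

def thiran(N, d):
    a = [1.0]
    for k in range(1, N + 1):
        prod = 1.0
        for n in range(N + 1):
            prod *= (d - N + n) / (d - N + k + n)
        a.append((-1) ** k * comb(N, k) * prod)
    return a

# First-order filter for a 1.4-sample delay; matches a1 = (1-d)/(1+d).
print(thiran(1, 1.4))  # [1.0, -0.1666...]
```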

Relevance:

90.00%

Publisher:

Abstract:

Structure-based modeling methods have been used to design a series of disubstituted triazole-linked acridine compounds with selectivity for human telomeric quadruplex DNAs. A focused library of these compounds was prepared using click chemistry, and the selectivity concept was validated against two promoter quadruplexes from the c-kit gene with known molecular structures, as well as with duplex DNA, using a FRET-based melting method. Lead compounds were found to have reduced effects on the thermal stability of the c-kit quadruplexes and duplex DNA structures. These effects were further explored with a series of competition experiments, which confirmed that binding to duplex DNA is very low even at high duplex:telomeric quadruplex ratios. Selectivity for the c-kit quadruplexes is more complex, with some evidence of their stabilization at increasing excess over human telomeric quadruplex DNA. Selectivity is a result of the dimensions of the triazole-acridine compounds, in particular the separation of the two alkyl-amino terminal groups. Both lead compounds also have selective inhibitory effects on the proliferation of cancer cell lines compared to a normal cell line, and one has been shown to inhibit the activity of the telomerase enzyme, which is selectively expressed in tumor cells, where it plays a role in maintaining telomere integrity and cellular immortalization.

Relevance:

90.00%

Publisher:

Abstract:

Existing parking simulations, like most simulations, are intended to provide insight into a system or to make predictions. The knowledge they have provided has built up over the years, and several research works have devised detailed parking system models. This thesis work describes the use of an agent-based parking simulation as part of the development of a larger parking system. It focuses more on flexibility than on fidelity, showing the case where it is relevant for a parking simulation to consume dynamically changing GIS data from external, online sources and how to address this case. The simulation generates the parking occupancy information that sensing technologies should eventually produce and supplies it to the larger parking system. It is built as a Java application based on the MASON toolkit and consumes GIS data from an ArcGis Server. The application context of the implemented parking simulation is a university campus with free, on-street parking places.
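A minimal stand-in for the occupancy-generating loop (the actual application is a Java/MASON program fed by ArcGIS Server data; the arrival and departure rates below are made up):

```python
# Toy agent loop producing the occupancy signal a sensing layer would emit.
import random

random.seed(1)
SPOTS = 50
occupied = [False] * SPOTS

def step(arrival_p=0.3, departure_p=0.1):
    """One tick: parked cars may leave; one car may arrive and take a free spot."""
    for i in range(SPOTS):
        if occupied[i] and random.random() < departure_p:
            occupied[i] = False
    if random.random() < arrival_p:
        free = [i for i in range(SPOTS) if not occupied[i]]
        if free:
            occupied[random.choice(free)] = True

for t in range(100):
    step()
print("occupancy:", sum(occupied) / SPOTS)
```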

Relevance:

90.00%

Publisher:

Abstract:

Computational biology is the research area that contributes to the analysis of biological data through the development of algorithms that address significant research problems. The data from molecular biology include DNA, RNA, protein and gene expression data. Gene expression data provide the expression level of genes under different conditions. Gene expression is the process of transcribing the DNA sequence of a gene into mRNA sequences, which in turn are later translated into proteins; the number of copies of mRNA produced is called the expression level of a gene. Gene expression data are organized in the form of a matrix: rows represent genes, columns represent experimental conditions (different tissue types or time points), and entries are real values. Through the analysis of gene expression data it is possible to determine behavioral patterns of genes, such as the similarity of their behavior, the nature of their interactions and their respective contributions to the same pathways. Genes participating in the same biological process exhibit similar expression patterns. These patterns have immense relevance and application in bioinformatics and clinical research; in the medical domain they aid more accurate diagnosis, prognosis, treatment planning, drug discovery and protein network analysis. To identify such patterns from gene expression data, data mining techniques are essential. Clustering is an important data mining technique for the analysis of gene expression data. To overcome the problems associated with clustering, biclustering was introduced: the simultaneous clustering of both rows and columns of a data matrix. Clustering is a global model, whereas biclustering is a local one. Discovering local expression patterns is essential for identifying many genetic pathways that are not apparent otherwise, so it is necessary to move beyond the clustering paradigm towards approaches capable of discovering local patterns in gene expression data. A bicluster is a submatrix of the gene expression data matrix; its rows and columns need not be contiguous, and biclusters are not disjoint. Computing biclusters is costly because all combinations of rows and columns must be considered: the search space is 2^(m+n), where m and n are the numbers of genes and conditions respectively, and usually m+n exceeds 3000. The biclustering problem is NP-hard. Biclustering is nonetheless a powerful analytical tool for the biologist. The research reported in this thesis addresses the problem of biclustering. Ten algorithms are developed for the identification of coherent biclusters from gene expression data, all of which use a measure called the mean squared residue to search for biclusters. The objective is to identify biclusters of maximum size with a mean squared residue lower than a given threshold.
All these algorithms begin the search from tightly coregulated submatrices called seeds, generated by the K-Means clustering algorithm. The algorithms developed can be classified as constraint based, greedy and metaheuristic. The constraint-based algorithms use one or more constraints, namely the MSR threshold and the MSR difference threshold. The greedy approach makes a locally optimal choice at each stage with the objective of finding the global optimum. The metaheuristic approaches use Particle Swarm Optimization (PSO) and variants of the Greedy Randomized Adaptive Search Procedure (GRASP) for the identification of biclusters. These algorithms are implemented on the Yeast and Lymphoma datasets; all of them identify biologically relevant and statistically significant biclusters, which are validated against the Gene Ontology database. All these algorithms are compared with other biclustering algorithms and overcome some of the problems associated with existing algorithms. With some of the algorithms developed in this work, biclusters with very high row variance, higher than that of any other algorithm using mean squared residue, are identified from both the Yeast and Lymphoma datasets. Such biclusters, which capture significant changes in expression level, are highly relevant biologically.
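The mean squared residue of a bicluster with row set I and column set J is the average of (a_ij - a_iJ - a_Ij + a_IJ)^2, where a_iJ, a_Ij and a_IJ are the row, column and overall means of the submatrix. A minimal NumPy sketch of the measure (the thesis's ten search algorithms are not reproduced here):

```python
# Mean squared residue (Cheng and Church's coherence measure) of a
# candidate bicluster; zero means a perfectly additive (coherent) pattern.
import numpy as np

def msr(A, rows, cols):
    S = A[np.ix_(rows, cols)]
    row_means = S.mean(axis=1, keepdims=True)  # a_iJ
    col_means = S.mean(axis=0, keepdims=True)  # a_Ij
    overall = S.mean()                         # a_IJ
    residue = S - row_means - col_means + overall
    return (residue ** 2).mean()

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 3.0, 4.0],
              [5.0, 6.0, 7.0]])
print(msr(A, [0, 1, 2], [0, 1, 2]))  # 0.0: rows are shifted copies (coherent)
```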

Relevance:

90.00%

Publisher:

Abstract:

Agent-based simulation is a rapidly developing area in artificial intelligence, and simulation studies are used extensively in different areas of disaster management. This work studies an agent-based evacuation simulation built to handle various evacuation behaviors. Various emergent behaviors of agents are addressed, dynamic grouping behaviors are studied, and collision detection and obstacle avoidance are incorporated in the approach. Evacuation is studied with single and multiple exits, and efficiency is measured in terms of evacuation rate, collision rate, etc. NetLogo is the tool used, which supports efficient modeling of evacuation scenarios.
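A toy version of the single-exit versus multiple-exit experiment (the thesis uses NetLogo with collision handling; this stand-in omits collisions and only illustrates measuring evacuation rate):

```python
# Agents on a 1-D corridor walk at unit speed toward their nearest exit;
# the evacuation rate is the fraction that gets out within the time limit.
import random

random.seed(0)

def evacuate(exits, n_agents=200, width=100, ticks=60):
    agents = [random.uniform(0, width) for _ in range(n_agents)]
    out = 0
    for _ in range(ticks):
        remaining = []
        for x in agents:
            target = min(exits, key=lambda e: abs(e - x))  # nearest exit
            x += 1.0 if target > x else -1.0               # unit speed
            if abs(target - x) < 1.0:
                out += 1                                   # evacuated
            else:
                remaining.append(x)
        agents = remaining
    return out / n_agents

print("single exit:", evacuate([0]))        # ~0.6: far agents don't make it
print("two exits:  ", evacuate([0, 100]))   # ~1.0: worst-case distance halved
```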

Relevance:

90.00%

Publisher:

Abstract:

This research project seeks to use an agent-based computational system to measure an organization's brand perception in a heterogeneous population. It is expected to provide information that allows solutions to be offered to an organization regarding the behavior of its consumers and the associated brand perception. The purpose of this system is to model the perception-reasoning-action process, simulating reasoning as the result of an accumulation of perceptions that lead to the consumer's actions. This outcome defines the consumer's acceptance or rejection of the company's brand. An information-gathering process was carried out on a specific organization in the field of marketing. After compiling and processing the information obtained from the company, the analysis of brand perception is applied through simulation processes. The results of the experiment are delivered to the organization in a report with marketing-level conclusions and recommendations for improving consumers' brand perception.
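A hedged sketch of a perception-reasoning-action consumer agent; the accumulation rule and thresholds below are illustrative assumptions, not the project's calibrated model:

```python
# Each perception nudges an internal attitude (reasoning as accumulated
# evidence); the accumulated attitude drives the brand-related action.
class Consumer:
    def __init__(self):
        self.attitude = 0.0

    def perceive(self, stimulus):        # perception: a signed impression
        self.attitude += stimulus        # reasoning: accumulate evidence

    def act(self, threshold=1.0):        # action: accept or reject the brand
        if self.attitude >= threshold:
            return "accept"
        if self.attitude <= -threshold:
            return "reject"
        return "undecided"

c = Consumer()
for s in (0.4, 0.5, -0.2, 0.6):          # a stream of brand experiences
    c.perceive(s)
print(c.act())                           # accept (attitude = 1.3)
```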

Relevance:

90.00%

Publisher:

Abstract:

In today's hyperconnected, dynamic world laden with uncertainty, conventional analytic methods and models are showing their limitations. Organizations therefore require useful tools that employ information technology and computational simulation models as mechanisms for decision making and problem solving. One of the most recent, powerful and promising is agent-based modeling and simulation (ABMS). Many organizations, including consulting firms, use this technique to understand phenomena, evaluate strategies and solve problems of various kinds. Nevertheless, there is (to our knowledge) no survey of the state of the art on ABMS and its application to organizational research. It should also be noted that, being new, the subject has not been widely disseminated and developed in Latin America. Consequently, this project aims to produce a state-of-the-art review of ABMS and its impact on organizational research.

Relevance:

90.00%

Publisher:

Abstract:

This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bezier-Bernstein polynomial functions. The algorithm is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n, and it also introduces univariate Bezier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bezier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on the additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bezier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
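A sketch of the univariate Bezier-Bernstein basis underlying the network: B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i) is nonnegative and the basis sums to one (partition of unity), which is what allows basis functions to be read as fuzzy membership functions.

```python
# Univariate Bernstein basis functions on [0, 1].
from math import comb

def bernstein(i, n, x):
    return comb(n, i) * x**i * (1 - x)**(n - i)

x = 0.3
values = [bernstein(i, 3, x) for i in range(4)]
print(values, sum(values))  # all nonnegative; the sum is 1.0
```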

Relevance:

90.00%

Publisher:

Abstract:

Current European Union regulatory risk assessment allows application of pesticides provided that recovery of nontarget arthropods in-crop occurs within a year. Despite the long-established theory of source-sink dynamics, risk assessment ignores depletion of surrounding populations and typical field trials are restricted to plot-scale experiments. In the present study, the authors used agent-based modeling of 2 contrasting invertebrates, a spider and a beetle, to assess how the area of pesticide application and environmental half-life affect the assessment of recovery at the plot scale and impact the population at the landscape scale. Small-scale plot experiments were simulated for pesticides with different application rates and environmental half-lives. The same pesticides were then evaluated at the landscape scale (10 km × 10 km) assuming continuous year-on-year usage. The authors' results show that recovery time estimated from plot experiments is a poor indicator of long-term population impact at the landscape level and that the spatial scale of pesticide application strongly determines population-level impact. This raises serious doubts as to the utility of plot-recovery experiments in pesticide regulatory risk assessment for population-level protection. Predictions from the model are supported by empirical evidence from a series of studies carried out in the decade starting in 1988. The issues raised then can now be addressed using simulation. Prediction of impacts at landscape scales should be more widely used in assessing the risks posed by environmental stressors.
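A toy source-sink sketch of the paper's central point (all rates are illustrative, not from the study): treated plots are refilled by immigration from the surrounding source, so plot-scale figures look mild, while year-on-year treatment of a large fraction of the landscape depletes the source itself.

```python
# Densities are relative to carrying capacity (1.0).
def run(area_treated, years=10, growth=0.05, kill=0.9, mig=0.2):
    plot, source = 1.0, 1.0
    for _ in range(years):
        plot *= (1 - kill)                    # annual application in-crop
        moved = mig * source                  # immigration subsidy into plots
        plot = min(1.0, plot + moved + growth * plot)
        # the subsidy drains the source, scaled by the relative treated area
        source = min(1.0, source - moved * area_treated + growth * source)
    return plot, source

for frac in (0.01, 0.5):
    plot, source = run(frac)
    print(f"{frac:.0%} of landscape treated: plot {plot:.2f}, source {source:.2f}")
# With 1% treated the source stays full; with 50% treated the same
# per-plot regime steadily depletes the landscape-level population.
```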

Relevance:

90.00%

Publisher:

Abstract:

The recognition of behavioural elements in finance has caused major shifts in the analytic framework pertaining to ratio-based modeling of corporate collapse. The modeling approach so far has been based on classical rational theory in behavioural economics, which assumes that the financial ratios (i.e., the predictors of collapse) are static over time. The paper argues that, in the absence of rational economic theory, a static model is flawed, and that a suitable model instead is one that reflects the heuristic behavioural framework which characterises the behavioural attributes of company directors and in turn influences the accounting numbers used in calculating the financial ratios. This calls for a dynamic model: dynamic in the sense that it does not rely on a fixed assortment of financial ratios for signaling corporate collapse over multiple time periods. This paper provides empirical evidence, using a data set of Australian publicly listed companies, to demonstrate that a dynamic model consistently outperforms its static counterpart in signaling the event of collapse. On average, the overall predictive power of the dynamic model is 86.83%, compared to 69.35% for the static model.
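A hedged sketch of the static/dynamic distinction on stand-in data (the paper's models and the Australian data set are not reproduced): a static model keeps the weights fitted in the first period, while a dynamic model re-estimates them each period as the ratio-collapse relationship drifts.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_logit(X, y, steps=500, lr=0.1):
    """Plain gradient-descent logistic regression; returns the weights."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return (((1 / (1 + np.exp(-X @ w))) > 0.5) == y).mean()

# Three reporting periods; the ratio-collapse relationship drifts each period.
periods = []
for _ in range(3):
    X = rng.normal(size=(200, 4))            # four financial ratios per firm
    beta = rng.normal(size=4)                # period-specific signal
    y = (X @ beta + rng.normal(size=200) > 0).astype(float)
    periods.append((X, y))

w_static = fit_logit(*periods[0])            # static: fitted once, reused
for t, (X, y) in enumerate(periods):
    w_dynamic = fit_logit(X, y)              # dynamic: refitted each period
    print(f"period {t}: static {accuracy(w_static, X, y):.2f}, "
          f"dynamic {accuracy(w_dynamic, X, y):.2f}")
```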