43 results for MODEL (Computer program language)
Abstract:
GEODERM, a microcomputer-based solid modeller that incorporates the parametric object model, is discussed. The entity-relationship model, which is used to describe the conceptual schema of the geometric database, is also presented. Three of the four modules of GEODERM that have been implemented are described in some detail: the Solid Definition Language (SDL), the Solid Manipulation Language (SML) and the User-System Interface.
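To make the parametric-object idea concrete, here is a minimal sketch assuming a single box primitive whose geometry is derived entirely from named parameters; the class and method names are illustrative, not GEODERM's actual SDL (the abstract does not show its syntax).

    # Illustrative parametric primitive: geometry follows from named parameters.
    from dataclasses import dataclass

    @dataclass
    class ParametricBlock:
        length: float
        width: float
        height: float

        def volume(self) -> float:
            return self.length * self.width * self.height

        def scale(self, factor: float) -> "ParametricBlock":
            # Editing parameters regenerates the solid, which is the essence
            # of parametric modelling.
            return ParametricBlock(self.length * factor,
                                   self.width * factor,
                                   self.height * factor)

    b = ParametricBlock(2.0, 3.0, 4.0)
    print(b.volume())             # 24.0
    print(b.scale(0.5).volume())  # 3.0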
Abstract:
It is shown that a leaky aquifer model can be used for well-field analysis in hard-rock areas, treating the upper weathered and clayey layers as a composite unconfined aquitard overlying a deeper fractured aquifer. Two long-duration pump test studies in granitic and schist regions of the Vedavati river basin are reported. The validity of the simplifications in the analytical solution is verified by finite difference computations.
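As background, a hedged sketch of the classical leaky-aquifer (Hantush) drawdown solution that such an analysis typically builds on; the numbers below are illustrative, not the Vedavati pump-test data.

    import numpy as np
    from scipy.integrate import quad

    def hantush_W(u, r_over_B):
        # Hantush well function: W(u, r/B) = int_u^inf exp(-y - (r/B)^2/(4y)) / y dy
        integrand = lambda y: np.exp(-y - (r_over_B ** 2) / (4.0 * y)) / y
        value, _ = quad(integrand, u, np.inf)
        return value

    def drawdown(Q, T, S, r, B, t):
        # s = Q / (4 pi T) * W(u, r/B), with u = r^2 S / (4 T t)
        u = r ** 2 * S / (4.0 * T * t)
        return Q / (4.0 * np.pi * T) * hantush_W(u, r / B)

    # Illustrative values: Q in m^3/day, T in m^2/day, r and B in metres, t in days.
    print(drawdown(Q=500.0, T=50.0, S=1e-3, r=30.0, B=200.0, t=1.0))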
Abstract:
An estimate of the irrigation potential over and above the existing utilization was made based on the groundwater potential in the Vedavati river basin. The estimate assumes crops and cropping patterns that follow existing practice in the various taluks of the basin; irrigation potential was estimated talukwise from the available groundwater potential identified in the simulation study. It is estimated that 84,100 hectares of additional land can be brought under irrigation from groundwater in the entire basin.
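The underlying arithmetic can be sketched as: additional irrigable area = available groundwater / seasonal crop water demand. A toy version with invented taluk names and figures (not the study's data):

    crop_demand_m = 0.5  # assumed seasonal crop water requirement, metres of water over the irrigated area

    taluks = {           # assumed available groundwater potential, million cubic metres (MCM)
        "taluk_A": 120.0,
        "taluk_B": 85.0,
    }

    for name, potential_mcm in taluks.items():
        # 1 MCM irrigates (1e6 m^3) / (demand in metres * 1e4 m^2 per hectare) hectares
        hectares = potential_mcm * 1e6 / (crop_demand_m * 1e4)
        print(f"{name}: {hectares:,.0f} ha")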
Abstract:
Extensible Markup Language (XML) has emerged as a medium for interoperability over the Internet. As the number of documents published in the form of XML increases, there is a need for selective dissemination of XML documents based on user interests. In the proposed technique, a combination of Adaptive Genetic Algorithms and a multi-class Support Vector Machine (SVM) is used to learn a user model. Based on feedback from the users, the system automatically adapts to each user's preferences and interests. The user model and a similarity metric are used for selective dissemination of a continuous stream of XML documents. Experimental evaluations performed over a wide range of XML documents indicate that the proposed approach significantly improves the performance of the selective dissemination task with respect to accuracy and efficiency.
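A minimal sketch of the classification side of such a pipeline, using a linear multi-class SVM over text features; the genetic-algorithm adaptation from user feedback is omitted, and the toy documents and labels below are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import SVC

    docs = [
        "<doc><topic>cricket</topic><body>match score innings</body></doc>",
        "<doc><topic>markets</topic><body>stocks bonds trading</body></doc>",
        "<doc><topic>cricket</topic><body>bowler wicket umpire</body></doc>",
        "<doc><topic>markets</topic><body>equity dividend portfolio</body></doc>",
    ]
    labels = ["sports", "finance", "sports", "finance"]  # user interest classes

    model = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
    model.fit(docs, labels)

    # Dissemination decision: route an incoming document to users whose
    # learned interest class it matches.
    print(model.predict(["<doc><body>wicket innings stadium</body></doc>"]))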
Abstract:
The recent spurt of research activity in the Entity-Relationship approach to databases calls for a close scrutiny of the semantics of the underlying Entity-Relationship models, data manipulation languages, data definition languages, etc. For well-known reasons, it is very desirable, and sometimes imperative, to give a formal description of the semantics. In this paper, we consider a specific ER model, the generalized Entity-Relationship model (without attributes on relationships), and give denotational semantics for the model as well as for a simple ER algebra based on it. Our formalism is based on the Vienna Development Method meta-language (VDM). We also discuss the salient features of the given semantics in detail and suggest directions for further work.
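The flavour of giving denotational semantics to an ER algebra can be suggested by mapping entity sets to sets and relationship sets to sets of pairs, with each algebra operation denoting a set transformer; the example below is an illustration in Python, not the paper's VDM formalism.

    employees = {"alice", "bob", "carol"}           # an entity set
    works_in = {("alice", "dev"), ("bob", "dev"),   # a relationship set
                ("carol", "ops")}

    def select_related(rel, dept):
        # [[ employees related to dept via works_in ]] = { e | (e, d) in rel, d = dept }
        return {e for (e, d) in rel if d == dept}

    print(select_related(works_in, "dev"))  # {'alice', 'bob'}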
Abstract:
The method of structured programming, or program development using a top-down, stepwise refinement technique, provides a systematic approach to the development of programs of considerable complexity. The aim of this paper is to present the philosophy of structured programming through a case study of a non-numeric programming task: converting a well-formed formula in first-order logic into prenex normal form. The program has been coded in the programming language PASCAL and implemented on a DEC-10 system; it has about 500 lines of code and comprises 11 procedures.
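A compact sketch of the prenexing step (in Python rather than the paper's PASCAL), assuming bound variables are already distinct so no renaming is needed; the formula encoding is ad hoc:

    # Encoding: ('forall', v, body), ('exists', v, body), ('and', p, q),
    # ('not', p), or an atom given as a string.
    def prenex(f):
        if isinstance(f, str):                       # atom
            return [], f
        op = f[0]
        if op in ("forall", "exists"):
            qs, body = prenex(f[2])
            return [(op, f[1])] + qs, body
        if op == "not":
            qs, body = prenex(f[1])
            # Each quantifier dualizes as it moves out through a negation.
            flip = {"forall": "exists", "exists": "forall"}
            return [(flip[q], v) for q, v in qs], ("not", body)
        if op == "and":
            qs1, b1 = prenex(f[1])
            qs2, b2 = prenex(f[2])
            return qs1 + qs2, ("and", b1, b2)
        raise ValueError(op)

    f = ("and", ("forall", "x", "P(x)"), ("not", ("exists", "y", "Q(y)")))
    print(prenex(f))
    # ([('forall', 'x'), ('forall', 'y')], ('and', 'P(x)', ('not', 'Q(y)')))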
Abstract:
Memory models of shared-memory concurrent programs define the values a read of a shared memory location is allowed to see. Such memory models are typically weaker than the intuitive sequential-consistency semantics in order to allow efficient execution. In this paper, we present WOMM (an abbreviation for Weak Operational Memory Model), which formally unifies two sources of weak behavior in hardware memory models: reordering of instructions and weakly consistent memory. We show that a large number of optimizations are allowed by WOMM. We also show that WOMM is weaker than a number of hardware memory models; consequently, if a program behaves correctly under WOMM, it will be correct with respect to those hardware memory models. Hence, WOMM can be used as a formally specified abstraction of the hardware memory models. Moreover, unlike most weak memory models, WOMM is described using operational semantics, making it easy to integrate into a model checker for concurrent programs. We further show that WOMM has an important property: it provides sequential-consistency semantics for data-race-free programs.
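The classic store-buffering litmus test illustrates the kind of weak behaviour such a model must admit (this enumeration is a generic illustration, not WOMM's formal semantics): thread 1 runs x=1; r1=y and thread 2 runs y=1; r2=x. Under sequential consistency r1=r2=0 is impossible; if each thread's store may be delayed past its load, it is allowed.

    from itertools import permutations

    def outcomes(reorder):
        t1 = [("w", "x"), ("r", "y", "r1")]
        t2 = [("w", "y"), ("r", "x", "r2")]
        # Model store->load reordering by also allowing each thread's
        # swapped program order.
        orders = [(a, b) for a in ((t1, t1[::-1]) if reorder else (t1,))
                         for b in ((t2, t2[::-1]) if reorder else (t2,))]
        results = set()
        for a, b in orders:
            ops = [("A", op) for op in a] + [("B", op) for op in b]
            for sched in permutations(ops):
                # Keep only interleavings preserving each thread's order.
                if [op for t, op in sched if t == "A"] != a: continue
                if [op for t, op in sched if t == "B"] != b: continue
                mem, regs = {"x": 0, "y": 0}, {}
                for _, op in sched:
                    if op[0] == "w": mem[op[1]] = 1
                    else: regs[op[2]] = mem[op[1]]
                results.add((regs["r1"], regs["r2"]))
        return results

    print(sorted(outcomes(reorder=False)))  # (0, 0) absent under SC
    print(sorted(outcomes(reorder=True)))   # (0, 0) allowed with reordering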
Abstract:
The Java Memory Model (JMM) provides a semantics of Java multithreading for any implementation platform. The JMM is defined in a declarative fashion, with an allowed program execution being defined in terms of the existence of "commit sequences" (roughly, the order in which actions in the execution are committed). In this work, we develop OpMM, an operational under-approximation of the JMM. The immediate motivation of this work lies in integrating a formal specification of the JMM with software model checkers. We show how our operational memory model description can be integrated into a Java Path Finder (JPF) style model checker for Java programs.
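Why an operational description helps a model checker: the checker can simply enumerate successor states by executing one enabled action at a time. A generic skeleton of that exploration loop (not JPF's or OpMM's actual interface):

    def explore(state, threads, seen=None):
        # state: hashable snapshot; each thread is a function from state to a
        # list of successor states (empty when done or blocked).
        seen = set() if seen is None else seen
        if state in seen:
            return
        seen.add(state)
        successors = [s for step in threads for s in step(state)]
        if not successors:
            print("terminal state:", state)  # assertions would be checked here
        for s in successors:
            explore(s, threads, seen)

    # Toy program: two threads each increment a shared counter once.
    # state = (counter, thread1_done, thread2_done)
    t1 = lambda s: [] if s[1] else [(s[0] + 1, True, s[2])]
    t2 = lambda s: [] if s[2] else [(s[0] + 1, s[1], True)]
    explore((0, False, False), [t1, t2])  # every interleaving ends with counter == 2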
Abstract:
Knowledge of a program's worst-case execution time (WCET) is essential in validating real-time systems and helps in effective scheduling. One popular approach used in industry is to measure the execution time of program components on the target architecture and combine the measurements using static analysis of the program. Measurements need to be taken in the least intrusive way in order to avoid affecting the accuracy of the estimated WCET. Several programs exhibit phase behavior, wherein the program's dynamic execution is observed to be composed of phases; each phase, being distinct from the others, exhibits homogeneous behavior with respect to cycles per instruction (CPI), data cache misses, etc. In this paper, we show that phase behavior has important implications for timing analysis. We make use of the homogeneity of a phase to reduce instrumentation overhead while ensuring that the accuracy of the WCET is not largely affected. We propose a model for estimating WCET using static worst-case instruction counts of individual phases and a function of the measured average CPI. We describe a WCET analyzer built on this model which targets two different architectures. The WCET analyzer is observed to give safe estimates for most benchmarks considered in this paper, and the tightness of the WCET estimates is observed to improve for most benchmarks compared to Chronos, a well-known static WCET analyzer.
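In its simplest form, such a model reduces to summing per-phase products of a static worst-case instruction count and a measured average CPI. The sketch below uses that plain product form and invented phase data, since the abstract only says "a function of measured average CPI".

    clock_hz = 1e9  # assumed 1 GHz target clock

    phases = [
        # (worst-case instruction count from static analysis, measured average CPI)
        (2_000_000, 1.4),
        (5_000_000, 0.9),
        (1_000_000, 2.1),
    ]

    wcet_cycles = sum(count * cpi for count, cpi in phases)
    print(f"estimated WCET: {wcet_cycles:,.0f} cycles "
          f"= {wcet_cycles / clock_hz * 1e3:.2f} ms")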
Abstract:
Polyhedral techniques for program transformation are now used in several proprietary and open-source compilers. However, most of the research on polyhedral compilation has focused on imperative languages such as C, where the computation is specified in terms of statements with zero or more nested loops and other control structures around them. Graphical dataflow languages, where there is no notion of statements or a schedule specifying their relative execution order, have so far not been studied using a powerful transformation or optimization approach. The execution semantics and referential transparency of dataflow languages impose a different set of challenges. In this paper, we attempt to bridge this gap by presenting techniques that can be used to extract a polyhedral representation from dataflow programs and to synthesize them from their equivalent polyhedral representation. We then describe PolyGLoT, a framework for automatic transformation of dataflow programs, which we built using our techniques together with popular research tools such as Clan and Pluto. For the purpose of experimental evaluation, we used our tools to compile LabVIEW, one of the most widely used dataflow programming languages. Results show that dataflow programs transformed using our framework are able to outperform those compiled otherwise by up to a factor of seventeen, with a mean speed-up of 2.30x, while running on an 8-core Intel system.
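The polyhedral view itself is easy to illustrate: the iteration domain of a loop nest is a set of integer points, and a transformation is just a new schedule over the same domain. A toy loop interchange follows (real tools such as Clan and Pluto operate on C or, per this paper, on dataflow graphs):

    N, M = 4, 3
    # Domain { [i, j] : 0 <= i < N, 0 <= j < M } as explicit integer points.
    domain = [(i, j) for i in range(N) for j in range(M)]

    original     = sorted(domain, key=lambda p: (p[0], p[1]))  # schedule (i, j)
    interchanged = sorted(domain, key=lambda p: (p[1], p[0]))  # schedule (j, i)

    # Both schedules execute the same iterations; only the order changes,
    # which is legal here because the iterations are independent.
    assert set(original) == set(interchanged)
    print(interchanged[:5])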
Abstract:
We address the problem of passive eavesdroppers in multi-hop wireless networks using the technique of friendly jamming. The network is assumed to employ Decode-and-Forward (DF) relaying. Assuming the availability of perfect channel state information (CSI) for legitimate nodes and eavesdroppers, we consider a scheduling and power allocation (PA) problem for a multiple-source multiple-sink scenario, so that eavesdroppers are jammed and source-destination throughput targets are met while minimizing the overall transmitted power. We propose activation sets (AS-es) for scheduling and formulate an optimization problem for PA. Several methods for finding AS-es are discussed and compared. We present an approximate linear program for the original nonlinear, non-convex PA optimization problem, and argue that under certain conditions both formulations produce identical results. In the absence of the eavesdroppers' CSI, we utilize the notion of a Vulnerability Region (VR) and formulate an optimization problem with the objective of minimizing the VR. Our results show that the proposed solution can achieve power-efficient operation while defeating eavesdroppers and achieving the desired source-destination throughputs simultaneously.
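A heavily simplified sketch of the linear-programming idea: minimize total transmit power subject to a linearized link-quality constraint at the legitimate receiver and a minimum jamming power at the eavesdropper. The gains and thresholds below are invented, and the paper's actual formulation additionally handles scheduling over activation sets.

    from scipy.optimize import linprog

    # Variables: x = [p_source, p_jammer]; objective: minimize total power.
    c = [1.0, 1.0]

    # g_sd * p_source >= rate_threshold  (linearized legitimate-link quality)
    # g_je * p_jammer >= jam_threshold   (eavesdropper stays jammed)
    # linprog expects A_ub @ x <= b_ub, so the >= constraints are negated.
    g_sd, g_je = 0.8, 0.3
    A_ub = [[-g_sd, 0.0], [0.0, -g_je]]
    b_ub = [-2.0, -0.6]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 10), (0, 10)])
    print(res.x)  # [2.5, 2.0]: p_source = 2.0/0.8, p_jammer = 0.6/0.3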
Abstract:
Identifying translations from comparable corpora is a well-known problem with several applications, e.g. dictionary creation in resource-scarce languages. Scarcity of high-quality corpora, especially in Indian languages, makes this problem hard; e.g. state-of-the-art techniques achieve a mean reciprocal rank (MRR) of 0.66 for English-Italian, but a mere 0.187 for Telugu-Kannada. Comparable corpora exist in many Indian languages paired with other "auxiliary" languages. We observe that translations have many topically related words in common in the auxiliary language. To model this, we define the notion of a translingual theme, a set of topically related words from auxiliary-language corpora, and present a probabilistic framework for translation induction. Extensive experiments on 35 comparable corpora using English and French as auxiliary languages show that this approach can yield dramatic improvements in performance (e.g. MRR improves by 124% to 0.419 for Telugu-Kannada). A user study on WikiTSu, a system for cross-lingual Wikipedia title suggestion that uses our approach, shows a 20% improvement in the quality of suggested titles.
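Mean reciprocal rank, the metric quoted above, averages 1/rank of the first correct answer over queries. A toy computation (the word pairs are only for illustration):

    def mrr(ranked_lists, gold):
        total = 0.0
        for query, candidates in ranked_lists.items():
            for rank, cand in enumerate(candidates, start=1):
                if cand == gold[query]:
                    total += 1.0 / rank
                    break
        return total / len(ranked_lists)

    ranked = {"mane":  ["house", "home", "hut"],   # toy Kannada -> English rankings
              "neeru": ["fire", "water", "rain"]}
    gold   = {"mane": "house", "neeru": "water"}
    print(mrr(ranked, gold))  # (1/1 + 1/2) / 2 = 0.75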