57 results for GENESIS (Computer system)


Relevance:

30.00%

Publisher:

Abstract:

A number of companies are trying to migrate large monolithic software systems to Service Oriented Architectures. A common approach to do this is to first identify and describe desired services (i.e., create a model), and then to locate portions of code within the existing system that implement the described services. In this paper we describe a detailed case study we undertook to match a model to an open-source business application. We describe the systematic methodology we used, the results of the exercise, as well as several observations that throw light on the nature of this problem. We also suggest and validate heuristics that are likely to be useful in partially automating the process of matching service descriptions to implementations.
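
As an illustration of the kind of heuristic that could support such partial automation, the sketch below scores candidate code elements against a service description by token overlap between the description and the identifier names. The function names, the Jaccard scoring and the example identifiers are assumptions made here for illustration, not taken from the case study.

    import re

    def tokens(name):
        """Split an identifier or free-text description into lowercase word tokens."""
        return {t.lower() for t in re.findall(r"[A-Za-z][a-z]*", name)}

    def similarity(service_desc, code_name):
        """Jaccard overlap between description tokens and identifier tokens."""
        a, b = tokens(service_desc), tokens(code_name)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    def rank_candidates(service_desc, code_names, top_n=5):
        """Rank code elements by how well their names match the service description."""
        ranked = sorted(code_names, key=lambda n: similarity(service_desc, n), reverse=True)
        return ranked[:top_n]

    # Example: rank classes of a business application against one described service.
    print(rank_candidates("Create customer invoice",
                          ["InvoiceFactory", "CustomerRepository", "ShippingCalculator"]))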

Relevance:

30.00%

Publisher:

Abstract:

Indian logic has a long history. It roughly covers the domains of two of the six schools (darsanas) of Indian philosophy, namely Nyaya and Vaisesika. The generally accepted definition of Indian logic over the ages is the science which ascertains valid knowledge either by means of the six senses or by means of the five members of the syllogism; in other words, perception and inference constitute the subject matter of logic. The science of logic evolved in India through three ages: the ancient, the medieval and the modern, spanning almost thirty centuries. Over the past three decades, advances in Computer Science, and in Artificial Intelligence in particular, have drawn researchers in these areas to the basic problems of language, logic and cognition. In the 1980s, Artificial Intelligence evolved into knowledge-based and intelligent system design, and the knowledge base and inference engine became standard subsystems of an intelligent system. One of the important issues in the design of such systems is acquiring knowledge from humans who are experts in a branch of learning (such as medicine or law) and transferring that knowledge to a computing system. The second important issue is validating the knowledge base of the system, i.e. ensuring that the knowledge is complete and consistent. It is in this context that a comparative study of Indian logic with recent theories of logic, language and knowledge engineering will help the computer scientist understand the deeper implications of the terms and concepts currently in use and under development.

Relevance:

30.00%

Publisher:

Abstract:

In the past two decades RNase A has been the focus of diverse investigations aimed at understanding the nature of substrate binding and the mechanism of enzyme action. Although this system is reasonably well characterized with respect to some of the binding sites, the interactions at the second base-binding (B2) site remain insufficiently detailed. Further, the nature of the ligand-protein interaction is generally elucidated from studies of RNase A-substrate analog complexes (mainly with the help of X-ray crystallography), so the atomic-level interactions arising from actual substrates are inferred indirectly. In the present paper, the dinucleotide substrate UpA is fitted into the active site of RNase A. Several possible substrate conformations are investigated and the binding modes are selected based on contact criteria. The RNase A-UpA complexes thus identified are energy minimized in coordinate space and are analysed in terms of conformations, energetics and interactions. The ligand conformations best suited for binding to RNase A are identified from experimentally known interactions and from the energetics. The changes associated with the protein backbone and side chains upon binding of UpA, in general and at the binding sites in particular, are described. Further, the detailed interactions between UpA and RNase A are characterized in terms of hydrogen bonds and energetics. This extensive study has helped in interpreting the diverse results obtained from a number of experiments and in evaluating the extent of the changes the protein and the substrate undergo in order to maximize their interactions.
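
A minimal sketch of such a contact criterion is given below: a docked conformation is retained only if at least one ligand atom lies within a cutoff distance of the active-site atoms. The cutoff value, array shapes and random coordinates are illustrative assumptions, not values from the study.

    import numpy as np

    CUTOFF = 4.0  # Angstrom; assumed contact distance, chosen only for illustration

    def min_distance(ligand_xyz, site_xyz):
        """Smallest interatomic distance between ligand and active-site atoms."""
        diffs = ligand_xyz[:, None, :] - site_xyz[None, :, :]
        return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())

    def passes_contact_criterion(ligand_xyz, site_xyz, cutoff=CUTOFF):
        return min_distance(ligand_xyz, site_xyz) <= cutoff

    # Example: screen randomly placed conformers against a mock active site.
    site = np.random.rand(20, 3) * 10.0
    conformers = [np.random.rand(40, 3) * 10.0 for _ in range(100)]
    kept = [c for c in conformers if passes_contact_criterion(c, site)]
    print(len(kept), "of", len(conformers), "conformers retained")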

Relevance:

30.00%

Publisher:

Abstract:

An escape mechanism in a bistable system driven by colored noise of large but finite correlation time (tau) is analyzed. It is shown that the fluctuating potential theory [Phys. Rev. A 38, 3749 (1988)] becomes invalid in a region around the inflection points of the bistable potential, so that this theory underestimates the mean first passage time at finite tau. It is shown that transitions at large but finite tau are caused by noise spikes, with edges rising and falling exponentially in a time of O(tau). Simulation of the dynamics of the bistable system driven by noise spikes of this nature clearly reveals the physical mechanism behind the transition.
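
A simple numerical illustration of such dynamics, assuming the standard quartic bistable potential V(x) = -x^2/2 + x^4/4 and exponentially correlated (Ornstein-Uhlenbeck) noise, is sketched below; the parameters are arbitrary and the scheme is not the simulation used in the paper.

    import numpy as np

    def simulate(tau=10.0, D=0.05, dt=1e-3, steps=500_000, seed=0):
        """Overdamped motion in V(x) = -x^2/2 + x^4/4 driven by OU noise of correlation time tau."""
        rng = np.random.default_rng(seed)
        x, eta = -1.0, 0.0                      # start in the left well with zero noise
        traj = np.empty(steps)
        for i in range(steps):
            x += (x - x**3 + eta) * dt          # force term: -dV/dx plus the colored noise
            # OU update: <eta(t) eta(t')> = (D/tau) exp(-|t - t'| / tau)
            eta += (-eta / tau) * dt + np.sqrt(2.0 * D * dt) / tau * rng.standard_normal()
            traj[i] = x
        return traj

    traj = simulate()
    crossings = int(np.count_nonzero(np.diff(np.sign(traj)) != 0))
    print("barrier crossings observed:", crossings)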

Relevance:

30.00%

Publisher:

Abstract:

Management of large projects, especially those with a major R&D component and those requiring knowledge from diverse specialised and sophisticated fields, may be classified as a semi-structured problem. In such problems there is some knowledge about the nature of the work involved, but there are also uncertainties associated with emerging technologies. In order to draw up a plan and schedule of activities for such a large and complex project, the project manager faces a host of complex decisions, such as when to start an activity and how long it is likely to continue. An Intelligent Decision Support System (IDSS) that aids the manager in making these decisions and in drawing up a feasible schedule of activities, while taking resource and time constraints into consideration, will have a considerable impact on the efficient management of the project. This report discusses the design of an IDSS that supports the project from the planning phase through the scheduling phase. The IDSS uses a new project scheduling tool, the Project Influence Graph (PIG).

Relevance:

30.00%

Publisher:

Abstract:

An intelligent computer aided defect analysis (ICADA) system, based on artificial intelligence techniques, has been developed to identify the design, process or material parameters which could be responsible for the occurrence of defective castings in a manufacturing campaign. The data on defective castings for a particular time frame, which is an input to the ICADA system, has been analysed. It was observed that a large proportion, i.e. 50-80%, of all the defective castings produced in a foundry have two, three or four types of defects occurring above a threshold proportion, say 10%. Also, a large number of defect types are either not found at all or found in a very small proportion, below a threshold of 2%. An important feature of the ICADA system is the recognition of this pattern in the analysis. Thirty casting defect types, each with a large number of causes numbering between 50 and 70, as identified in the AFS analysis of casting defects (the standard reference source for the casting process), constituted the foundation for building the knowledge base. The scientific rationale underlying the formation of a defect during the casting process was identified and 38 metacauses were coded. Process, material and design parameters which contribute to the metacauses were systematically examined and 112 were identified as root causes. The interconnections between defects, metacauses and root causes were represented as a three-tier structured graph, and the uncertainty in the occurrence of events such as defects, metacauses and root causes was handled by Bayesian analysis. A hill-climbing search technique, associated with forward reasoning, was employed to recognize one or several root causes.
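
As a rough illustration of the three-tier representation (defects, metacauses, root causes) and the Bayesian-style propagation of uncertainty, a small sketch follows; the defect names, conditional weights and the simple additive scoring are hypothetical and stand in for the actual AFS-derived knowledge base and the hill-climbing search used by ICADA.

    # Assumed conditional weights: P(metacause | defect) and P(root cause | metacause).
    DEFECT_TO_META = {
        "blowhole":  {"gas_entrapment": 0.7, "poor_venting": 0.3},
        "shrinkage": {"improper_feeding": 0.8, "high_pouring_temperature": 0.2},
    }
    META_TO_ROOT = {
        "gas_entrapment":           {"high_moisture_in_sand": 0.6, "low_sand_permeability": 0.4},
        "poor_venting":             {"inadequate_vents": 1.0},
        "improper_feeding":         {"undersized_riser": 0.7, "poor_gating_design": 0.3},
        "high_pouring_temperature": {"furnace_control_error": 1.0},
    }

    def score_root_causes(observed_defects):
        """Propagate observed defect proportions down the three-tier graph to root causes."""
        scores = {}
        for defect, proportion in observed_defects.items():
            for meta, p_meta in DEFECT_TO_META.get(defect, {}).items():
                for root, p_root in META_TO_ROOT.get(meta, {}).items():
                    scores[root] = scores.get(root, 0.0) + proportion * p_meta * p_root
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    # Example: proportions of defective castings by defect type in one campaign.
    print(score_root_causes({"blowhole": 0.45, "shrinkage": 0.25}))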

Relevance:

30.00%

Publisher:

Abstract:

We address the optimal control problem of a very general stochastic hybrid system with both autonomous and impulsive jumps. The planning horizon is infinite and we use the discounted-cost criterion for performance evaluation. Under certain assumptions, we show the existence of an optimal control. We then derive the quasivariational inequalities satisfied by the value function and establish well-posedness. Finally, we prove the usual verification theorem of dynamic programming.
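
For concreteness, a standard form of the infinite-horizon discounted-cost criterion and the associated value function is written out below; the notation is generic and not taken from the paper.

    % Discounted-cost criterion over an infinite horizon (generic notation):
    % X_t is the hybrid state under control u, c is a running cost and alpha > 0 the discount rate.
    J(x, u) = \mathbb{E}_x^{u}\left[ \int_0^{\infty} e^{-\alpha t}\, c(X_t, u_t)\, dt \right],
    \qquad
    V(x) = \inf_{u} J(x, u).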

Relevance:

30.00%

Publisher:

Abstract:

Sensor network nodes exhibit characteristics of both embedded systems and general-purpose systems. A sensor network operating system is a kind of embedded operating system, but unlike a typical embedded operating system it may not be real-time, and it must operate under tight memory and energy constraints. Most sensor network operating systems are based on an event-driven approach, which is efficient in terms of time and space and does not require a separate stack for each execution context. With this model, however, it is difficult to implement long-running tasks such as cryptographic operations. A thread-based computation requires a separate stack for each execution context, and is less efficient in terms of time and space. In this paper, we propose a thread-based execution model that uses only a fixed number of stacks: the number of stacks at each priority level is fixed. This minimizes the stack requirement of a multithreading environment and at the same time provides ease of programming. We give an implementation of this model in Contiki OS by completely separating the thread implementation from the protothread implementation. We have tested our OS by implementing a clock synchronization protocol with it.
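
The sketch below illustrates the scheduling idea in Python (Contiki itself is written in C): each priority level owns a fixed number of stacks, so at most that many threads per level can be resident at once, and the remaining threads wait until a stack is released. The class and method names are invented for illustration.

    from collections import deque

    class FixedStackScheduler:
        """Toy model of a run queue with a fixed number of stacks per priority level."""

        def __init__(self, stacks_per_priority):
            # e.g. {0: 1, 1: 2} -> one stack at priority 0, two stacks at priority 1
            self.free_stacks = dict(stacks_per_priority)
            self.waiting = {p: deque() for p in stacks_per_priority}
            self.running = []

        def submit(self, priority, task):
            if self.free_stacks[priority] > 0:
                self.free_stacks[priority] -= 1
                self.running.append((priority, task))
            else:
                self.waiting[priority].append(task)      # no free stack: the task waits

        def finish(self, priority, task):
            self.running.remove((priority, task))
            if self.waiting[priority]:
                # hand the freed stack straight to the next waiting task at this priority
                self.running.append((priority, self.waiting[priority].popleft()))
            else:
                self.free_stacks[priority] += 1

    sched = FixedStackScheduler({0: 1, 1: 2})
    sched.submit(1, "crypto_operation")
    sched.submit(1, "sensor_read")
    sched.submit(1, "radio_tx")        # must wait: only two stacks exist at priority 1
    print(sched.running, list(sched.waiting[1]))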

Relevance:

30.00%

Publisher:

Abstract:

Over the past few years, studies of cultured neuronal networks have opened up avenues for understanding the ion channels, receptor molecules, and synaptic plasticity that may form the basis of learning and memory. Hippocampal neurons from rats are dissociated and cultured on a surface containing a grid of 64 electrodes. The signals from these 64 electrodes are acquired using a fast data acquisition system, MED64 (Alpha MED Sciences, Japan), at a sampling rate of 20 K samples per second with a precision of 16 bits per sample. A few minutes of acquired data runs into a few hundred megabytes. The data processing for neural analysis is highly compute-intensive because the volume of data is huge. The major processing requirements are noise removal, pattern recovery, pattern matching, clustering and so on. In order to interface a neuronal colony to the physical world, these computations need to be performed in real time. A single processor, such as a desktop computer, may not be adequate to meet these computational requirements. Parallel computing is a method used to satisfy the real-time computational requirements of a neuronal system that interacts with an external world, while increasing the flexibility and scalability of the application. In this work, we developed a parallel neuronal system using a multi-node digital signal processing system. With 8 processors, the system is able to compute and map incoming signals, segmented over a period of 200 ms, into an action in a trained cluster system in real time.
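
A minimal sketch of this kind of parallel segment processing is given below, using Python multiprocessing as a stand-in for the multi-node DSP system; the spike-detection threshold, channel grouping and random data are assumptions for illustration only.

    import numpy as np
    from multiprocessing import Pool

    FS = 20_000                     # 20 K samples per second
    SEGMENT = int(0.2 * FS)         # one 200 ms window

    def detect_spikes(channel):
        """Toy spike detection: threshold at 4x a median-based noise estimate."""
        noise = np.median(np.abs(channel)) / 0.6745
        return np.nonzero(np.abs(channel) > 4 * noise)[0]

    def process_chunk(args):
        worker_id, data = args
        return worker_id, [detect_spikes(data[ch]) for ch in range(data.shape[0])]

    if __name__ == "__main__":
        segment = np.random.randn(64, SEGMENT)                        # stand-in for acquired data
        chunks = [(i, segment[i * 8:(i + 1) * 8]) for i in range(8)]  # 8 channels per worker
        with Pool(processes=8) as pool:
            results = pool.map(process_chunk, chunks)
        print("spike counts per worker:",
              [sum(len(s) for s in spikes) for _, spikes in results])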

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a low-cost, high-resolution retinal image acquisition system for the human eye. The images acquired by a CMOS image sensor are communicated through the Universal Serial Bus (USB) interface to a personal computer for viewing and further processing. The image acquisition time was estimated to be 2.5 seconds. The system can also be used in telemedicine applications.

Relevance:

30.00%

Publisher:

Abstract:

Electricity appears to be the energy carrier of choice for modern economies, since growth in electricity demand has outpaced growth in the demand for fuels. To make accurate and efficient decisions in electricity distribution, a decision maker (DM) requires sector-wise and location-wise electricity consumption information to predict the requirement for electricity. In this regard, an interactive computer-based Decision Support System (DSS) has been developed to compile, analyse and present the data at disaggregated levels for regional energy planning. This helps in providing the precise information needed to make timely decisions related to transmission and distribution planning, leading to increased efficiency and productivity. This paper discusses the design and implementation of a DSS which facilitates analysis of the consumption of electricity at various hierarchical levels (division, taluk, sub-division, feeder) for selected periods. The DSS is validated with data from the transmission and distribution systems of Kolar district in Karnataka State, India.
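
The kind of hierarchical roll-up such a DSS presents can be sketched as below; the records, column names and the single aggregation function are hypothetical and do not reflect the actual Kolar dataset or the implemented system.

    import pandas as pd

    records = pd.DataFrame([
        # division, sub_division, feeder, sector, month, consumption_kwh
        ("Kolar", "SD-1", "F-101", "domestic",   "2003-01", 12000),
        ("Kolar", "SD-1", "F-101", "irrigation", "2003-01", 30000),
        ("Kolar", "SD-2", "F-205", "industrial", "2003-01", 45000),
    ], columns=["division", "sub_division", "feeder", "sector", "month", "consumption_kwh"])

    def consumption_by(level, period):
        """Total consumption per unit at the chosen hierarchical level, for one period."""
        selected = records[records["month"] == period]
        return selected.groupby([level, "sector"])["consumption_kwh"].sum()

    print(consumption_by("sub_division", "2003-01"))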

Relevance:

30.00%

Publisher:

Abstract:

Most modern distance relays are designed to avoid overreaching due to the transient d.c. component of the fault current, whereas a more likely source of transients in e.h.v. systems is the oscillatory discharge of the system charging current into the fault. Until now, attempts have not been made to reproduce these transients in the laboratory. This paper describes an analogue simulation and an accurate digital simulation of these harmonic transients. The dynamic behaviour of a typical polarised mho-type relay is analysed, and results are presented. The paper also advocates the use of active filters to filter the harmonics associated with e.h.v. systems and thereby improve the speed of response and accuracy of the protective relays.
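
A toy digital illustration of such a transient, a damped high-frequency oscillation superimposed on the 50 Hz fundamental, together with a crude low-pass filter standing in for the active filtering the paper advocates, is sketched below; all frequencies, decay constants and amplitudes are assumed values, not results from the paper.

    import numpy as np

    FS = 10_000                          # samples per second
    t = np.arange(0, 0.1, 1 / FS)        # 100 ms window

    fundamental = np.sin(2 * np.pi * 50 * t)
    # Assumed oscillatory discharge: 600 Hz component decaying with a 10 ms time constant.
    transient = 0.4 * np.exp(-t / 0.01) * np.sin(2 * np.pi * 600 * t)
    fault_current = fundamental + transient

    # Crude moving-average low-pass (first null near 600 Hz) as a placeholder
    # for the active filtering discussed above.
    window = int(FS / 600)
    filtered = np.convolve(fault_current, np.ones(window) / window, mode="same")
    print("peak before filtering:", round(fault_current.max(), 3),
          "after:", round(filtered.max(), 3))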