18 results for Functions graph
at Cochin University of Science
Abstract:
In this paper, we study the domination number, the global domination number, the cographic domination number, the global cographic domination number and the independent domination number of all the graph products which are non-complete extended p-sums (NEPS) of two graphs.
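As a concrete illustration of the objects involved, the minimal Python sketch below builds the NEPS of two graphs under the standard definition (vertices are pairs; two pairs are adjacent when, for some basis tuple, the coordinates marked 0 agree and the coordinates marked 1 are adjacent) and computes the domination number by brute force. The function names and the example basis are illustrative and not taken from the paper.

    from itertools import combinations, product

    def neps(G1, G2, basis):
        """Non-complete extended p-sum of two graphs given as adjacency-set dicts.

        (x1, x2) ~ (y1, y2) iff some (b1, b2) in `basis` has x_i = y_i where
        b_i = 0 and x_i adjacent to y_i where b_i = 1.
        """
        V = list(product(G1, G2))
        adj = {v: set() for v in V}
        for (x1, x2), (y1, y2) in combinations(V, 2):
            for b1, b2 in basis:
                ok1 = (x1 == y1) if b1 == 0 else (y1 in G1[x1])
                ok2 = (x2 == y2) if b2 == 0 else (y2 in G2[x2])
                if ok1 and ok2:
                    adj[(x1, x2)].add((y1, y2))
                    adj[(y1, y2)].add((x1, x2))
                    break
        return adj

    def domination_number(adj):
        """Smallest k such that some k-set dominates every vertex (brute force)."""
        V = list(adj)
        for k in range(1, len(V) + 1):
            for D in combinations(V, k):
                covered = set(D).union(*(adj[v] for v in D))
                if len(covered) == len(V):
                    return k
        return 0

    # Example: the basis {(0,1), (1,0)} gives the Cartesian product, so
    # NEPS(P3, K2) is the 3 x 2 grid, whose domination number is 2.
    P3 = {0: {1}, 1: {0, 2}, 2: {1}}
    K2 = {'a': {'b'}, 'b': {'a'}}
    H = neps(P3, K2, basis={(0, 1), (1, 0)})
    print(domination_number(H))    # 2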
Abstract:
We define a new graph operator called the P3 intersection graph, P3(G), the intersection graph of all induced 3-paths in G. A characterization of graphs G for which P3(G) is bipartite is given. Forbidden subgraph characterizations for P3(G) being chordal, H-free or complete are also obtained. For integers a and b with a > 1 and b > a - 1, it is shown that there exists a graph G such that χ(G) = a and χ(P3(G)) = b, where χ is the chromatic number. For the domination number γ, we construct graphs G such that γ(G) = a and γ(P3(G)) = b for any two positive integers a > 1 and b. Similar constructions for the independence number, and relations for the radius and diameter, are also discussed.
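A minimal sketch of the operator follows, assuming an induced 3-path is an induced path on three vertices and that two such paths are adjacent in P3(G) when their vertex sets intersect; the function names are illustrative, not from the paper.

    from itertools import combinations

    def induced_p3s(G):
        """All induced paths on 3 vertices of G (adjacency-set dict):
        triples with exactly two of the three possible edges present."""
        paths = []
        for a, b, c in combinations(G, 3):
            edges = (b in G[a]) + (c in G[a]) + (c in G[b])
            if edges == 2:
                paths.append(frozenset((a, b, c)))
        return paths

    def p3_intersection_graph(G):
        """Intersection graph of the induced 3-paths of G: two paths are
        adjacent when their vertex sets intersect (assumed definition)."""
        P = induced_p3s(G)
        adj = {p: set() for p in P}
        for p, q in combinations(P, 2):
            if p & q:
                adj[p].add(q)
                adj[q].add(p)
        return adj

    # Example: C4 has four induced P3s, any two of which meet, so P3(C4) = K4.
    C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    H = p3_intersection_graph(C4)
    print(len(H), all(len(n) == 3 for n in H.values()))   # 4 True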
Abstract:
The edge C4 graph E4(G) of a graph G has all the edges of G as its vertices; two vertices of E4(G) are adjacent if their corresponding edges in G are either incident or are opposite edges of some C4. In this paper, characterizations of E4(G) being connected, complete, bipartite, a tree, etc. are given. We also prove that E4(G) has no forbidden subgraph characterization. Some dynamical behaviour, such as convergence, mortality and touching number, is also studied.
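The construction can be made concrete in a few lines. The sketch below follows the definition above, reading "opposite edges of some C4" as two vertex-disjoint edges of G joined by two further edges of G into a 4-cycle; the function name is illustrative.

    from itertools import combinations

    def edge_c4_graph(G):
        """Edge C4 graph E4(G): vertices are the edges of G; two edges are
        adjacent if they share an endpoint or are opposite edges of a
        common 4-cycle of G."""
        edges = {frozenset((u, v)) for u in G for v in G[u]}
        adj = {e: set() for e in edges}
        for e, f in combinations(edges, 2):
            if e & f:                      # incident edges
                link = True
            else:                          # opposite edges of some C4?
                (a, b), (c, d) = tuple(e), tuple(f)
                link = (c in G[a] and d in G[b]) or (d in G[a] and c in G[b])
            if link:
                adj[e].add(f)
                adj[f].add(e)
        return adj

    # Example: in C4 itself every pair of edges is incident or opposite,
    # so E4(C4) = K4.
    C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    H = edge_c4_graph(C4)
    print(len(H), all(len(n) == 3 for n in H.values()))   # 4 True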
Abstract:
The paper deals with two graph operators, the Gallai graph and the anti-Gallai graph. We prove the existence of a finite family of forbidden subgraphs for the Gallai graphs and the anti-Gallai graphs to be H-free, for any finite graph H. The case of complement reducible graphs (cographs) is discussed in detail. Some relations between the chromatic number, the radius and the diameter of a graph and those of its Gallai and anti-Gallai graphs are also obtained.
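The abstract does not restate the operators, so the sketch below assumes the standard definitions: both graphs have E(G) as vertex set, and two incident edges of G are adjacent in the anti-Gallai graph when they span a triangle of G, and in the Gallai graph when they do not. The function name is illustrative.

    from itertools import combinations

    def gallai_and_anti_gallai(G):
        """Gallai and anti-Gallai graphs of G (assumed standard definitions)."""
        edges = {frozenset((u, v)) for u in G for v in G[u]}
        gallai = {e: set() for e in edges}
        anti = {e: set() for e in edges}
        for e, f in combinations(edges, 2):
            if not (e & f):
                continue                    # only incident edges qualify
            a, b = tuple(e ^ f)             # the two non-shared endpoints
            target = anti if b in G[a] else gallai
            target[e].add(f)
            target[f].add(e)
        return gallai, anti

    # Example: for K3 every pair of edges spans a triangle, so the anti-Gallai
    # graph is K3 and the Gallai graph has no edges.
    K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
    gal, anti = gallai_and_anti_gallai(K3)
    print(sum(map(len, gal.values())) // 2, sum(map(len, anti.values())) // 2)  # 0 3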
Abstract:
Department of Biotechnology, Cochin University of Science and Technology
Abstract:
The brain, with its highly complex structure made up of simple units, interconnected information pathways and specialized functions, has always been an object of mystery and scientific fascination for physiologists and neuroscientists, and lately for mathematicians and physicists. Biophysicists are engaged in building the bridge between the biological and physical sciences, guided by a conviction that natural scenarios that appear extraordinarily complex may be tackled by applying principles from the realm of the physical sciences. In a similar vein, this report aims to describe how nerve cells execute transmission of signals, how these are put together, and how out of this integration higher functions emerge and get reflected in the electrical signals produced in the brain. Viewing the EEG signal through the looking glass of nonlinear theory, the dynamics of the underlying complex system, the brain, is inferred, and significant implications of the findings are explored.
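As an indication of what nonlinear analysis of an EEG signal typically involves, the sketch below estimates the correlation sum of a delay-embedded time series (the Grassberger-Procaccia construction). This is a common tool in such studies, not necessarily the specific method of this report, and the toy signal merely stands in for an EEG trace.

    import numpy as np

    def delay_embed(x, dim, tau):
        """Delay-coordinate embedding of a scalar time series x."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    def correlation_sum(Y, r):
        """Fraction of point pairs of the embedded trajectory closer than r."""
        d = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
        iu = np.triu_indices(len(Y), k=1)
        return np.mean(d[iu] < r)

    # Toy quasi-periodic signal; for real data, the slope of log C(r) against
    # log r over small r estimates the correlation dimension of the dynamics.
    t = np.linspace(0, 40 * np.pi, 1000)
    x = np.sin(t) + 0.5 * np.sin(np.sqrt(2) * t)
    Y = delay_embed(x, dim=3, tau=10)
    for r in (0.05, 0.1, 0.2, 0.4):
        print(r, correlation_sum(Y, r))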
Abstract:
Department of Mathematics, Cochin University of Science and Technology
Abstract:
Department of Mathematics, Cochin University of Science and Technology
Abstract:
Department of Statistics, Cochin University of Science and Technology
Abstract:
Department of Mathematics, Cochin University of Science and Technology
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design as well as its hardware/software development can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools running on a host machine for the validation and optimization of embedded system code, which can help meet all of these goals. This can significantly improve software quality and is still a challenging field. This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making it more effective at early detection of software bugs that are otherwise hard to detect, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults as well as optimize the code, and thus improve both the debugging process and the quality of the code. Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually in all possible execution paths of the application programs.
An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on an optimum allocation of data to banked memory, resulting in a minimum number of bank-switching instructions in the embedded system software. A relation matrix and a state transition diagram, formed for the active memory bank state transitions corresponding to each bank selection instruction, are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces the state space created, contributing to improved model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features in developing embedded systems.
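To make the bank-switching idea concrete, the following deliberately simplified, hypothetical sketch tracks the active memory bank along a straight-line sequence of symbolic instructions and flags bank selections that leave the state unchanged. The dissertation's tool works on real PIC16F87X machine code and full control flow graphs; this is only an illustration of the underlying state-tracking idea, with made-up instruction tuples.

    def redundant_bank_switches(instructions):
        """Flag bank-select instructions that leave the active bank unchanged.

        `instructions` is a straight-line list of (mnemonic, operand) pairs;
        'BANKSEL n' stands in for the RP0/RP1 bit manipulation a PIC compiler
        would emit. Illustrative simplification only, not the actual tool.
        """
        redundant = []
        active_bank = None                       # unknown on entry
        for idx, (op, arg) in enumerate(instructions):
            if op == "BANKSEL":
                if arg == active_bank:
                    redundant.append(idx)        # switch to already-active bank
                active_bank = arg
        return redundant

    program = [
        ("BANKSEL", 1), ("MOVWF", "TRISB"),
        ("BANKSEL", 1), ("MOVWF", "TRISC"),      # redundant: bank 1 already active
        ("BANKSEL", 0), ("MOVWF", "PORTB"),
    ]
    print(redundant_bank_switches(program))      # [2]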
Abstract:
This paper describes JERIM-320, a new 320-bit hash function used for ensuring message integrity, and details a comparison with popular hash functions of similar design. JERIM-320 and FORK-256 operate on four parallel lines of message processing, while RIPEMD-320 operates on two parallel lines. Popular hash functions like MD5 and SHA-1 use serial successive iteration in their compression functions and hence are less secure. The parallel branches help JERIM-320 achieve a higher level of security using multiple iterations and processing of the message blocks. The focus of this work is to prove the ability of JERIM-320 to ensure the integrity of messages to a higher degree, to suit fast-growing Internet applications.
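The sketch below is not the JERIM-320 algorithm, whose compression function is not specified in this abstract; it is only a toy illustration of the structural contrast being drawn, in which four independent branches process each message block and their outputs are combined into a single chaining value. The constants and the use of SHA-256 as a stand-in branch function are invented for the illustration.

    import hashlib

    def four_branch_digest(message: bytes, block_size: int = 64) -> bytes:
        """Toy four-branch parallel structure (NOT JERIM-320 or FORK-256).

        Each block is fed to four independent branches; their outputs are
        XOR-combined into the chaining value.
        """
        state = bytes(32)
        salts = [b"\x01", b"\x02", b"\x03", b"\x04"]   # made-up branch constants
        for i in range(0, len(message), block_size):
            block = message[i : i + block_size].ljust(block_size, b"\x00")
            branches = [hashlib.sha256(s + state + block).digest() for s in salts]
            state = bytes(a ^ b ^ c ^ d for a, b, c, d in zip(*branches))
        return state

    print(four_branch_digest(b"integrity check").hex())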
Abstract:
The median (antimedian) set of a profile π = (u_1, ..., u_k) of vertices of a graph G is the set of vertices x that minimize (maximize) the remoteness Σ_i d(x, u_i). Two algorithms for median graphs G of complexity O(n · idim(G)) are designed, where n is the order and idim(G) the isometric dimension of G. The first algorithm computes median sets of profiles and will in practice often be faster than the second algorithm, which in addition computes antimedian sets and remoteness functions and works in all partial cubes.
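By way of illustration, the sketch below computes median and antimedian sets directly from the definition, using BFS distances and the remoteness sum. This is a brute-force reading of the definition, not the O(n · idim(G)) algorithms of the paper, and the hypercube example and function names are chosen only for illustration.

    from collections import deque

    def bfs_distances(G, source):
        """Single-source shortest-path distances in an unweighted graph."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in G[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def median_and_antimedian(G, profile):
        """Median and antimedian sets of a profile, straight from the definition."""
        dist = {u: bfs_distances(G, u) for u in set(profile)}
        remoteness = {x: sum(dist[u][x] for u in profile) for x in G}
        lo, hi = min(remoteness.values()), max(remoteness.values())
        median = {x for x, r in remoteness.items() if r == lo}
        antimedian = {x for x, r in remoteness.items() if r == hi}
        return median, antimedian

    # Example: the 3-cube Q3 (a median graph) with a profile of three vertices;
    # the median is the coordinatewise majority vertex 001.
    Q3 = {v: {v ^ 1, v ^ 2, v ^ 4} for v in range(8)}
    profile = (0b000, 0b011, 0b101)
    print(median_and_antimedian(Q3, profile))    # ({1}, {6})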