961 results for Enterprise network agreement (ENA)
Abstract:
Alternating differential scanning calorimetry (ADSC) studies were undertaken to investigate the effect of Tl addition on the thermal properties of As30Te70-xTlx (6 ≤ x ≤ 22 at.%) glasses. These include parameters such as the glass-transition temperature (Tg), the change in specific heat capacity (ΔCp), and the relaxation enthalpy (ΔHNR) at the glass transition. It was found that Tg of the glasses decreased with the addition of Tl, in contrast to the dependence of Tg of As-Te glasses on the addition of Al and In. The change in heat capacity ΔCp through the glass transition was also found to decrease with increasing Tl content. The addition of Tl to the As-Te matrix may lead to a breaking of As-Te chains and the formation of Tl⁺Te⁻ AsTe2/2 dipoles. There was no significant dependence of the change in relaxation enthalpy through the glass transition on composition.
Abstract:
For active contour modeling (ACM), we propose a novel self-organizing map (SOM)-based approach, called the batch-SOM (BSOM), that attempts to integrate the advantages of SOM- and snake-based ACMs in order to extract the desired contours from images. We employ feature points, in the form of an edge map (as obtained from a standard edge-detection operation), to guide the contour (as in the case of SOM-based ACMs), along with the gradient and intensity variations in a local region to ensure that the contour does not "leak" into the object boundary in the case of faulty feature points (weak or broken edges). In contrast with the snake-based ACMs, however, we do not use an explicit energy functional (based on gradient or intensity) for controlling the contour movement. We extend the BSOM to handle extraction of the contours of multiple objects by splitting a single contour into as many subcontours as there are objects in the image. The BSOM and its extended version are tested on synthetic binary and gray-level images with both single and multiple objects. We also demonstrate the efficacy of the BSOM on images of objects having both convex and nonconvex boundaries. The results demonstrate the superiority of the BSOM over the other approaches considered. Finally, we analyze the limitations of the BSOM.
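The following is a minimal sketch of the kind of batch, SOM-style contour update the abstract describes: every edge-map feature point selects its closest contour node as the winner and pulls a small neighbourhood of nodes along the contour toward it, with all moves applied in batch. The function name, parameters, and neighbourhood form are illustrative assumptions, not the paper's BSOM (which additionally uses local gradient and intensity variations to prevent leaking).

```python
import numpy as np

def som_contour_step(contour, edge_points, sigma=2.0, lr=0.2):
    """One batch update of a closed contour toward edge-map feature points.

    contour:     (N, 2) array of contour node coordinates
    edge_points: (M, 2) array of feature points from an edge detector
    sigma:       neighbourhood width along the contour (in node indices)
    lr:          fraction of the way each node moves toward its batch target
    """
    n = len(contour)
    accum = np.zeros_like(contour, dtype=float)
    weight = np.zeros(n)
    idx = np.arange(n)
    for p in edge_points:
        # Winner: the contour node closest to this feature point.
        w = np.argmin(np.linalg.norm(contour - p, axis=1))
        # Neighbourhood function over the circular chain of contour nodes.
        ring = np.minimum(np.abs(idx - w), n - np.abs(idx - w))
        h = np.exp(-(ring ** 2) / (2 * sigma ** 2))
        accum += h[:, None] * p
        weight += h
    mask = weight > 0
    target = contour.astype(float).copy()
    target[mask] = accum[mask] / weight[mask][:, None]
    # Batch move: nodes shift partway toward the weighted mean of the
    # feature points that selected them (directly or via the neighbourhood).
    return contour + lr * (target - contour)
```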
Abstract:
The problem of denoising damage indicator signals for improved operational health monitoring of systems is addressed by applying soft computing methods to design filters. Since measured data in operational settings are contaminated with noise and outliers, pattern recognition algorithms for fault detection and isolation can give false alarms. A direct approach to improving fault detection and isolation is to remove noise and outliers from the time series of measured data or damage indicators before performing fault detection and isolation. Many popular signal-processing approaches do not work well with damage indicator signals, which can contain sudden changes due to abrupt faults and non-Gaussian outliers. Signal-processing algorithms based on radial basis function (RBF) neural networks and weighted recursive median (WRM) filters are explored for denoising simulated time series. The RBF neural network filter is developed using a K-means clustering algorithm and is much less computationally expensive to develop than feedforward neural networks trained using backpropagation. The nonlinear multimodal integer-programming problem of selecting optimal integer weights of the WRM filter is solved using a genetic algorithm. Numerical results are obtained for helicopter rotor structural damage indicators based on simulated frequencies. The test signals consider low-order polynomial growth of damage indicators with time, to simulate gradual or incipient faults, and step changes in the signal, to simulate abrupt faults. Noise and outliers are added to the test signals. The WRM and RBF filters result in noise reductions of 54-71% and 59-73%, respectively, for the test signals considered in this study. Their performance is much better than that of the moving-average FIR filter, which causes significant feature distortion and has poor outlier-removal capabilities. These results show the potential of soft computing methods for specific signal-processing applications.
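As a concrete illustration of one of the two filters, the sketch below implements a weighted recursive median filter: integer weights replicate the previous outputs and the current inputs before the median is taken. The weights and window sizes here are hand-picked placeholders; in the paper they are chosen by a genetic algorithm.

```python
import numpy as np

def weighted_recursive_median(x, w_out, w_in):
    """Weighted recursive median filter.

    x:      input signal (1-D array)
    w_out:  integer weights for previous outputs y[n-M], ..., y[n-1]
    w_in:   integer weights for inputs x[n], ..., x[n+K]
    A weight of k means the sample is replicated k times before the median.
    """
    x = np.asarray(x, dtype=float)
    M, K = len(w_out), len(w_in) - 1
    y = x.copy()
    for n in range(M, len(x) - K):
        window = []
        for j, w in enumerate(w_out):           # previous outputs (recursive part)
            window += [y[n - M + j]] * int(w)
        for j, w in enumerate(w_in):            # current and "future" inputs
            window += [x[n + j]] * int(w)
        y[n] = np.median(window)
    return y

# Example: denoise a step signal (abrupt fault) with added noise and outliers.
sig = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.randn(100)
sig[[20, 70]] += 3.0                            # non-Gaussian outliers
clean = weighted_recursive_median(sig, w_out=[1, 2], w_in=[3, 2, 1])
```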
Abstract:
In this work, we introduce convolutional codes for network-error correction in the context of coherent network coding. We give a construction of convolutional codes that correct a given set of error patterns, as long as consecutive errors are separated by a certain interval. We also give some bounds on the field size and the number of errors that can be corrected in a certain interval. Compared to previous network-error correction schemes, using convolutional codes is seen to offer advantages in field size and decoding technique. Some examples are discussed that illustrate the various situations that arise in this context.
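For readers unfamiliar with convolutional codes, the sketch below shows a generic rate-1/2 binary convolutional encoder (the classic generators 7 and 5 in octal). It only illustrates the class of codes being adapted; it is not the paper's network-error-correcting construction, and the generators are assumptions chosen for illustration.

```python
def conv_encode(bits, g1=0b111, g2=0b101):
    """Rate-1/2 binary convolutional encoder with generator masks g1, g2.

    Each input bit shifts into the encoder state and produces two output
    bits, one per generator polynomial (parity of the masked state bits).
    Tail zeros are appended to flush the encoder memory.
    """
    K = max(g1.bit_length(), g2.bit_length())       # constraint length
    state = 0
    out = []
    for b in list(bits) + [0] * (K - 1):
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)   # parity w.r.t. generator 1
        out.append(bin(state & g2).count("1") % 2)   # parity w.r.t. generator 2
    return out

# Example: 4 message bits plus 2 tail bits give 12 coded bits.
codeword = conv_encode([1, 0, 1, 1])
```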
Abstract:
This paper proposes a Single Network Adaptive Critic (SNAC) based Power System Stabilizer (PSS) for enhancing the small-signal stability of power systems over a wide range of operating conditions. SNAC uses only a single critic neural network instead of the action-critic dual network architecture of typical adaptive critic designs. SNAC eliminates the iterative training loops between the action and critic networks and greatly simplifies the training procedure. The performance of the proposed PSS has been tested on a single-machine infinite-bus test system for various system and loading conditions. The proposed stabilizer, which is relatively easy to synthesize, consistently outperformed stabilizers based on conventional lead-lag and linear quadratic regulator designs.
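A toy sketch of the SNAC idea is given below for a linear-quadratic surrogate problem: a single critic maps the state x_k to the costate lambda_{k+1}, the control follows from the optimality condition, and the critic is refined with targets from the costate equation, with no separate action network. The plant matrices and the linear critic are illustrative assumptions, not the single-machine infinite-bus model or the neural network used in the paper.

```python
import numpy as np

# Toy SNAC iteration on a linear-quadratic surrogate (illustrative matrices).
A = np.array([[1.0, 0.1],
              [-0.2, 0.9]])        # discrete-time plant
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

W = np.zeros((2, 2))               # single critic: lambda_{k+1} ~ W @ x_k

for _ in range(500):
    X = np.random.uniform(-1, 1, size=(2, 100))      # sampled states x_k
    lam1 = W @ X                                     # critic output lambda_{k+1}
    U = -np.linalg.solve(R, B.T @ lam1)              # control from optimality condition
    X1 = A @ X + B @ U                               # propagate to x_{k+1}
    lam2 = W @ X1                                    # lambda_{k+2} from the same critic
    target = Q @ X1 + A.T @ lam2                     # costate equation gives target lambda_{k+1}
    W = 0.9 * W + 0.1 * (target @ np.linalg.pinv(X)) # relaxed least-squares critic update

# State-feedback gain implied by the trained critic.
K_gain = np.linalg.solve(R, B.T @ W)
```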
Abstract:
Glioblastoma (GBM; grade IV astrocytoma) is a very aggressive form of brain cancer with poor survival and few qualified predictive markers. This study integrates experimentally validated genes that showed specific upregulation in GBM along with their protein-protein interaction information. A system-level analysis was used to construct a GBM-specific network. Computation of topological parameters of the network showed a scale-free pattern and hierarchical organization. From the large network involving 1,447 proteins, we synthesized subnetworks and annotated them with highly enriched biological processes. A careful dissection of the functional modules, important nodes, and their connections identified two novel intermediary molecules, CSK21 and protein phosphatase 1 alpha (PP1A), connecting the two subnetworks CDC2-PTEN-TOP2A-CAV1-P53 and CDC2-CAV1-RB-P53-PTEN, respectively. Real-time quantitative reverse transcription-PCR analysis revealed CSK21 to be moderately upregulated and PP1A to be overexpressed by 20-fold in GBM tumor samples. Immunohistochemical staining revealed nuclear expression of PP1A only in GBM samples. Thus, CSK21 and PP1A, whose functions are intimately associated with cell cycle regulation, might play a key role in gliomagenesis. Cancer Res; 70(16); 6437-47. (C) 2010 AACR.
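A minimal sketch of the topological check mentioned above (a scale-free degree distribution) is given below using networkx; the synthetic Barabási-Albert graph stands in for the 1,447-protein GBM-specific network, which is not reproduced here.

```python
import numpy as np
import networkx as nx

# Synthetic stand-in for a protein-protein interaction network.
G = nx.barabasi_albert_graph(n=1447, m=2, seed=1)

# Scale-free check: the degree distribution P(k) ~ k^(-gamma) appears as a
# roughly straight line on log-log axes.
degrees = np.array([d for _, d in G.degree()])
ks, counts = np.unique(degrees, return_counts=True)
pk = counts / counts.sum()
slope, intercept = np.polyfit(np.log(ks), np.log(pk), 1)
print(f"estimated power-law exponent gamma ~ {-slope:.2f}")

# Hierarchical organization is commonly assessed through the clustering
# coefficient and how it scales with node degree.
clust = nx.clustering(G)
print(f"average clustering coefficient: {np.mean(list(clust.values())):.3f}")
```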
Abstract:
There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that tries to put together some important constraints based on the abstraction for a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them being a new one. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems have been described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (like power, water, messages, goods, etc.), an output, or none. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we should find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration. Stage one calculates the problem variables and stage two the multipliers λ. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It changes the equality constraints so that faster convergence of the iterations can be obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimum communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
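As a small worked illustration of the Lagrange multiplier stage described above, the sketch below solves a tiny flow network with quadratic arc costs and linear nodal flow-balance constraints by assembling and solving the KKT system in one shot. The 4-node, 5-arc network, costs, and injections are illustrative assumptions, not the paper's model (which treats nonlinear networks iteratively).

```python
import numpy as np

# Minimal Lagrange-multiplier sketch for a small flow network:
# minimise a quadratic arc cost subject to nodal flow balance A f = b.
A = np.array([                      # reduced node-arc incidence matrix (one node
    [ 1,  1,  0,  0,  0],           # dropped as reference so the rows are independent)
    [-1,  0,  1,  1,  0],
    [ 0, -1, -1,  0,  1],
])
b = np.array([2.0, 0.0, 0.0])       # net injections at the retained nodes
R = np.diag([1.0, 2.0, 1.0, 3.0, 1.0])   # quadratic cost weights per arc

# Stationarity of L(f, lam) = 0.5 f^T R f + lam^T (A f - b) gives the KKT system:
#   [ R   A^T ] [ f   ]   [ 0 ]
#   [ A   0   ] [ lam ] = [ b ]
n_arcs, n_nodes = A.shape[1], A.shape[0]
KKT = np.block([[R, A.T], [A, np.zeros((n_nodes, n_nodes))]])
rhs = np.concatenate([np.zeros(n_arcs), b])
sol = np.linalg.solve(KKT, rhs)
flows, lam = sol[:n_arcs], sol[n_arcs:]
```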
Abstract:
Frequency response analysis is critical in understanding the steady-state and transient behavior of any electrical network. A network analyzer, or frequency response analyzer, is used to determine the frequency response of an electrical network. This paper deals with the design of an inexpensive, digitally controlled network analyzer. The frequency range of the network analyzer is from 10 Hz to 50 kHz (a suitable range for system studies on most power electronics apparatus). It is composed of a microcontroller (as the central processing unit) and a personal computer (as analyzer and display). The communication between the microcontroller and the personal computer is established through one of the USB ports. The testing and evaluation of the analyzer is done with RC, RLC, and multi-resonant circuits. The design steps, basis of analysis, experimental results, limitations in bandwidth, and possible techniques for improvement in performance are presented.
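The core measurement such an analyzer performs at each swept frequency can be sketched as a single-bin DFT of the excitation and response, from which gain and phase follow. The code below is an illustrative sketch in which an analytic RC low-pass stands in for the device under test; the sampling rate and time window are assumptions, not the instrument's actual design.

```python
import numpy as np

def gain_phase_at(f_test, fs, excitation, response):
    """Estimate gain (dB) and phase (degrees) of a network at one test frequency.

    Correlates both sampled signals against a complex exponential at f_test
    (a single-bin DFT), the basic operation a frequency response analyzer
    performs at every swept frequency.
    """
    n = np.arange(len(excitation))
    ref = np.exp(-2j * np.pi * f_test * n / fs)
    X = np.sum(excitation * ref)          # excitation phasor
    Y = np.sum(response * ref)            # response phasor
    H = Y / X
    return 20 * np.log10(np.abs(H)), np.degrees(np.angle(H))

# Example: a first-order RC low-pass (fc = 1 kHz) sampled at 1 MHz,
# measured over 10 Hz to 50 kHz as in the analyzer described above.
fs, fc = 1_000_000, 1_000.0
t = np.arange(0, 0.2, 1 / fs)
for f in (10, 100, 1_000, 10_000, 50_000):
    u = np.sin(2 * np.pi * f * t)
    # Analytic steady-state response of the RC filter (stands in for a measurement).
    mag = 1 / np.sqrt(1 + (f / fc) ** 2)
    ph = -np.arctan(f / fc)
    y = mag * np.sin(2 * np.pi * f * t + ph)
    print(f, *gain_phase_at(f, fs, u, y))
```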
Abstract:
The research question of this thesis was how knowledge can be managed with information systems. Information systems can support, but not replace, knowledge management. Systems can mainly store epistemic organisational knowledge included in content, and process data and information. Certain value can be achieved by adding communication technology to systems. Not all communication, however, can be managed. A new layer between communication and manageable information was named knowformation. The knowledge management literature was surveyed, together with information species from philosophy, physics, communication theory, and information systems science. Positivism, post-positivism, and critical theory were studied, but knowformation in extended organisational memory seemed to be socially constructed. A memory management model of an extended enterprise (M3.exe) and the knowformation concept were findings from iterative case studies covering data, information, and knowledge management systems. The cases varied from groups towards the extended organisation. Systems were investigated, and administrators, users (knowledge workers), and managers were interviewed. The model building required alternative sets of data, information, and knowledge, instead of using the traditional pyramid. The explicit-tacit dichotomy was also reconsidered. As human knowledge is the final aim of all data and information in the systems, the distinction between management of information and management of people was harmonised. Information systems were classified as the core of organisational memory. The content of the systems is in practice between communication and presentation. Firstly, the epistemic criterion of knowledge is required neither in the knowledge management literature nor of the content of the systems. Secondly, systems deal mostly with containers, and the knowledge management literature with applied knowledge. The construction of reality based on the system content and communication also supports the knowformation concept. Knowformation belongs to the memory management model of an extended enterprise (M3.exe), which is divided into horizontal and vertical key dimensions. Vertically, processes deal with content that can be managed, whereas communication can be supported, mainly by infrastructure. Horizontally, the right-hand side of the model contains systems, and the left-hand side content, which should be independent of each other. A strategy based on the model was defined.
Abstract:
We propose a novel algorithm for the placement of standard cells in VLSI circuits based on an analogy between this problem and neural networks. By employing some of the organising principles of these networks, we have attempted to improve the behaviour of the bipartitioning method proposed by Kernighan and Lin. Our algorithm yields better-quality placements than the above method, and also makes the final placement independent of the initial partition.
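For reference, the baseline being improved upon can be sketched as a Kernighan-Lin style bipartitioning pass: compute each cell's external-minus-internal connection cost and greedily swap the pair with the largest positive gain across the cut. This simplified greedy variant illustrates the baseline only; it is not the proposed neural-network-inspired placement algorithm.

```python
import numpy as np

def kl_pass(adj, part):
    """One simplified Kernighan-Lin style pass: greedily swap cell pairs across the cut.

    adj:  symmetric connectivity matrix between cells
    part: boolean array, True for cells currently placed in partition B
    Returns an improved partition (cut size decreases with every swap).
    """
    part = part.copy()
    improved = True
    while improved:
        improved = False
        # External minus internal connection cost for every cell.
        ext = np.array([adj[i, part != part[i]].sum() for i in range(len(part))])
        intr = np.array([adj[i, part == part[i]].sum() for i in range(len(part))])
        D = ext - intr
        best_gain, best_pair = 0, None
        for a in np.where(~part)[0]:
            for b in np.where(part)[0]:
                gain = D[a] + D[b] - 2 * adj[a, b]   # classic KL swap gain
                if gain > best_gain:
                    best_gain, best_pair = gain, (a, b)
        if best_pair is not None:
            a, b = best_pair
            part[a], part[b] = True, False
            improved = True
    return part
```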
Abstract:
We have compared the total as well as fine-mode aerosol optical depth (τ and τ_fine) retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard Terra and Aqua (2001-2005) with the equivalent parameters derived by the Aerosol Robotic Network (AERONET) at Kanpur (26.45°N, 80.35°E), northern India. The MODIS Collection 005 (C005)-derived τ(0.55 μm) was found to be in good agreement with the AERONET measurements. The τ_fine and η (= τ_fine/τ) were, however, biased significantly low in most matched cases. A new set of retrievals using an absorbing aerosol model (SSA ≈ 0.87) with increased visible surface reflectance provided improved τ and τ_fine at Kanpur. The new derivation of η also compares well qualitatively with an independent set of in situ measurements of accumulation mass fraction over much of southern India. This suggests that, though the MODIS land algorithm has limited information with which to derive the size properties of aerosols over land, more accurate parameterization of aerosol and surface properties within the existing C005 algorithm may improve the accuracy of size-resolved aerosol optical properties. The results presented in this paper indicate that there is a need to reconsider the surface parameterization and assumed aerosol properties in the MODIS C005 algorithm over the Indian region in order to retrieve more accurate aerosol optical and size properties, which are essential to quantify the impact of human-made aerosols on climate.
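The size-resolved quantity at the centre of the comparison is the fine-mode fraction η = τ_fine/τ. The snippet below shows how η and simple bias statistics against AERONET can be computed for matched retrievals; the numbers are placeholders for illustration, not data from the paper.

```python
import numpy as np

# Placeholder matched retrievals (illustrative values only).
tau_modis        = np.array([0.62, 0.48, 0.90, 0.35, 0.71])   # total AOD at 0.55 um
tau_fine_modis   = np.array([0.25, 0.20, 0.41, 0.12, 0.30])
tau_aeronet      = np.array([0.60, 0.50, 0.85, 0.38, 0.70])
tau_fine_aeronet = np.array([0.38, 0.30, 0.55, 0.21, 0.43])

# Fine-mode fraction eta = tau_fine / tau for both data sets.
eta_modis   = tau_fine_modis / tau_modis
eta_aeronet = tau_fine_aeronet / tau_aeronet

# Simple agreement statistics: mean bias and RMS difference.
bias_tau = np.mean(tau_modis - tau_aeronet)
bias_eta = np.mean(eta_modis - eta_aeronet)     # negative => MODIS eta biased low
rmse_tau = np.sqrt(np.mean((tau_modis - tau_aeronet) ** 2))
print(bias_tau, bias_eta, rmse_tau)
```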