935 results for Industrial automation techniques


Relevance: 20.00%

Publisher:

Abstract:

Control centers (CCs) play a very important role in power system operation. An overall view of the system, with information about all existing resources and needs, is implemented through SCADA (supervisory control and data acquisition) and an EMS (energy management system). As advanced technologies have made their way into the utility environment, operators are flooded with huge amounts of data. The last decade has seen extensive application of AI techniques, knowledge-based systems, and artificial neural networks in this area. This paper focuses on the need to develop an intelligent decision support system to assist the operator in making proper decisions. The requirements for realizing such a system for the effective operation and energy management of the southern grid in India are identified. The application of Petri nets leading to a decision support system is illustrated using a 24-bus system that is part of the southern grid.

Relevance: 20.00%

Publisher:

Abstract:

In this paper we show the applicability of Ant Colony Optimisation (ACO) techniques to the pattern classification problem that arises in tool wear monitoring. In an earlier study, artificial neural networks and genetic programming were successfully applied to the tool wear monitoring problem. ACO is a recent addition to evolutionary computation techniques that has gained attention for its ability to extract underlying data relationships and express them in the form of simple rules. Rules are extracted for data classification using a training set of data points. These rules are then applied to the testing/validation set to obtain the classification accuracy. A major attraction of ACO-based classification is the possibility of obtaining expert-system-like rules that the user can subsequently apply directly in his/her application. The classification accuracy obtained with the ACO-based approach is as good as that obtained with other biologically inspired techniques.
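
To make the rule-extraction idea concrete, here is a minimal sketch of pheromone-guided rule discovery in the spirit of Ant-Miner-style ACO classifiers. The toy tool-wear data set, attribute names, and parameters are illustrative assumptions, not the paper's actual data or algorithm.

```python
# Minimal sketch of ACO-style rule discovery for classification.
# The toy data set and all parameters are illustrative assumptions.
import random

# Each sample: (attribute-value dict, class label); labels are hypothetical.
DATA = [
    ({"speed": "high", "feed": "high"}, "worn"),
    ({"speed": "high", "feed": "low"}, "worn"),
    ({"speed": "low", "feed": "high"}, "fresh"),
    ({"speed": "low", "feed": "low"}, "fresh"),
]
TERMS = [("speed", "high"), ("speed", "low"), ("feed", "high"), ("feed", "low")]

def quality(rule, label):
    """Rule quality = sensitivity * specificity on the training set."""
    tp = sum(1 for x, y in DATA if all(x[a] == v for a, v in rule) and y == label)
    fp = sum(1 for x, y in DATA if all(x[a] == v for a, v in rule) and y != label)
    fn = sum(1 for x, y in DATA if not all(x[a] == v for a, v in rule) and y == label)
    tn = len(DATA) - tp - fp - fn
    sens = tp / (tp + fn) if tp + fn else 0.0
    spec = tn / (tn + fp) if tn + fp else 0.0
    return sens * spec

pheromone = {t: 1.0 for t in TERMS}
best_rule, best_q = None, -1.0
for _ in range(50):                        # each iteration = one ant
    rule = []
    for attr in ("speed", "feed"):         # roulette-wheel term choice per attribute
        opts = [t for t in TERMS if t[0] == attr]
        r, acc = random.uniform(0, sum(pheromone[t] for t in opts)), 0.0
        for t in opts:
            acc += pheromone[t]
            if r <= acc:
                rule.append(t)
                break
    # Consequent = majority class among covered training samples.
    covered = [y for x, y in DATA if all(x[a] == v for a, v in rule)]
    label = max(set(covered), key=covered.count) if covered else "worn"
    q = quality(rule, label)
    for t in rule:                         # evaporate and reinforce used terms
        pheromone[t] = 0.9 * pheromone[t] + q
    if q > best_q:
        best_rule, best_q = (rule, label), q

print("IF", " AND ".join(f"{a}={v}" for a, v in best_rule[0]),
      "THEN", best_rule[1], f"(quality={best_q:.2f})")
```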

Relevance: 20.00%

Publisher:

Abstract:

The notion of optimization is inherent in protein design. A long linear chain of twenty types of amino acid residues is known to fold to a 3-D conformation that minimizes the combined inter-residue energy interactions. There are two distinct protein design problems, viz. predicting the folded structure from a given sequence of amino acid monomers (the folding problem) and determining a sequence for a given folded structure (the inverse folding problem). These two problems have much in common with engineering structural analysis and structural optimization problems, respectively. In the folding problem, a protein chain with a given sequence folds to a conformation, called the native state, which has a unique global minimum energy value compared to all other unfolded conformations. This involves a search in the conformation space. It is somewhat akin to the principle of minimum potential energy that determines the deformed static equilibrium configuration of an elastic structure of given topology, shape, and size subjected to certain boundary conditions. In the inverse folding problem, one has to design a sequence with some objectives (having a specific feature in the folded structure, docking with another protein, etc.) and constraints (the sequence being fixed in some portion, a particular composition of amino acid types, etc.) such that the sequence folds to the desired conformation while satisfying the criteria of folding. This requires a search in the sequence space. It is similar to structural optimization in the design-variable space, wherein a certain feature of the structural response is optimized subject to some constraints while satisfying the governing static or dynamic equilibrium equations. Based on this similarity, in this work we apply topology optimization methods to protein design, discuss modeling issues, and present some initial results.
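
As a toy illustration of the folding problem as a search in conformation space, the sketch below enumerates self-avoiding walks of a short chain in the standard 2-D HP lattice model and keeps the minimum-energy conformation. The HP model and the sequence are textbook stand-ins, not the paper's topology-optimization formulation.

```python
# Toy conformation-space search: exhaustively fold a short HP-model
# chain on a 2-D lattice and keep the minimum-energy conformation.
SEQ = "HPHPPHHPH"                     # hypothetical H/P sequence
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def energy(coords):
    """Count -1 for each non-bonded H-H lattice contact."""
    e = 0
    for i in range(len(coords)):
        for j in range(i + 2, len(coords)):       # skip chain neighbours
            if SEQ[i] == SEQ[j] == "H":
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:
                    e -= 1
    return e

def walks(coords):
    """Recursively extend a self-avoiding walk, yielding full chains."""
    if len(coords) == len(SEQ):
        yield list(coords)
        return
    x, y = coords[-1]
    for dx, dy in MOVES:
        nxt = (x + dx, y + dy)
        if nxt not in coords:
            coords.append(nxt)
            yield from walks(coords)
            coords.pop()

best = min(walks([(0, 0), (1, 0)]), key=energy)   # fix first bond to cut symmetry
print("minimum energy:", energy(best))
print("native-like conformation:", best)
```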

Relevance: 20.00%

Publisher:

Abstract:

Image filtering techniques have potential applications in biomedical image processing, such as image restoration and image enhancement. The potential of traditional filters largely depends on a priori knowledge about the type of noise corrupting the image, which makes standard filters application-specific. For example, the well-known median filter and its variants can remove salt-and-pepper (or impulse) noise at low noise levels. Each of these methods has its own advantages and disadvantages. In this paper, we introduce a new finite impulse response (FIR) filter for image restoration in which the filter undergoes a learning procedure. The filter coefficients are adaptively updated based on correlated Hebbian learning. The algorithm exploits inter-pixel correlation in the form of Hebbian learning and hence performs optimal smoothing of noisy images. Applying the proposed filter to images corrupted with Gaussian noise results in restorations that are better in quality than those produced by average and Wiener filters. The restored image is found to be visually appealing and artifact-free.
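
A minimal sketch of the general idea follows, assuming a 3x3 mask, a synthetic test image, and a simple correlation-driven (Hebbian-style) update; the paper's exact correlated Hebbian rule may differ.

```python
# Sketch of an adaptive 3x3 FIR filter whose coefficients are updated
# by a Hebbian-style correlation rule between the filter output and the
# centred neighbourhood. Learning rate and update details are
# illustrative assumptions, not the paper's exact algorithm.
import numpy as np

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))     # synthetic test image
noisy = clean + rng.normal(0.0, 0.1, clean.shape)       # additive Gaussian noise

w = np.full((3, 3), 1.0 / 9.0)                          # start from a mean filter
eta = 0.01                                              # learning rate

restored = noisy.copy()
for _ in range(5):                                      # a few learning sweeps
    for i in range(1, 31):
        for j in range(1, 31):
            patch = noisy[i - 1:i + 2, j - 1:j + 2]
            out = float(np.sum(w * patch))
            # Hebbian step: reinforce weights in proportion to the
            # correlation between the output and each (centred) input.
            w += eta * out * (patch - patch.mean())
            w /= w.sum()                                # keep unity DC gain
            restored[i, j] = out

err = np.mean((restored[1:-1, 1:-1] - clean[1:-1, 1:-1]) ** 2)
print(f"mean squared error after filtering: {err:.5f}")
```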

Relevance: 20.00%

Publisher:

Abstract:

Processor architects face the challenging task of evaluating a large design space consisting of several interacting parameters and optimizations. To assist architects in making crucial design decisions, we build linear regression models that relate processor performance to micro-architecture parameters, using simulation-based experiments. We obtain good approximate models using an iterative process in which Akaike's information criterion is used to extract a good linear model from a small set of simulations, and limited further simulation is guided by the model using D-optimal experimental designs. The iterative process is repeated until the desired error bounds are achieved. We used this procedure to establish the relationship of the CPI performance response to 26 key micro-architectural parameters using a detailed cycle-by-cycle superscalar processor simulator. The resulting models provide a significance ordering on all micro-architectural parameters and their interactions, and explain the performance variations of micro-architectural techniques.
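
To illustrate the model-extraction step, here is a minimal sketch of greedy forward selection of predictors for a linear CPI model by Akaike's information criterion, with synthetic data standing in for simulator runs; the D-optimal design of further experiments is omitted.

```python
# AIC-guided forward selection of micro-architectural predictors for a
# linear CPI model. The synthetic data stands in for simulator runs.
import numpy as np

rng = np.random.default_rng(1)
n, p = 40, 6                                   # 40 runs, 6 candidate parameters
X = rng.uniform(size=(n, p))
cpi = 2.0 + 1.5 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(0.0, 0.05, n)

def aic(cols):
    """AIC of an ordinary-least-squares fit on the given columns."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, cpi, rcond=None)
    rss = float(np.sum((cpi - A @ beta) ** 2))
    return n * np.log(rss / n) + 2 * A.shape[1]

chosen, remaining = [], list(range(p))
current = aic(chosen)
while remaining:
    best = min(remaining, key=lambda c: aic(chosen + [c]))
    if aic(chosen + [best]) >= current:        # no AIC improvement: stop
        break
    chosen.append(best)
    remaining.remove(best)
    current = aic(chosen)

print("selected parameters:", chosen, f"AIC={current:.1f}")
```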

Relevance: 20.00%

Publisher:

Abstract:

Views on industrial service have conceptually progressed from the output of the provider's production process to the result of an interaction process in which the customer is also involved. Although there are attempts to be customer-oriented, especially when the focus is on solutions, an industrial company's offering combining goods and services is inherently seller-oriented. There is, however, a need to go beyond the current literature and company practices. We propose that what is needed is a genuinely customer-based parallel concept to the offering, one that takes the customer's view, and we put forward a new concept labelled customer needing. A needing is based on the customer's mental model of their business and strategies, which affects priorities, decisions, and actions. A needing can be modelled as a configuration of three dimensions containing six functions that create realised value for the customer. These dimensions and functions can be used to describe needings, which represent starting points for sellers' creation of successful offerings. When offerings match needings over time, the seller should have the potential to form and sustain successful buyer relationships.

Relevance: 20.00%

Publisher:

Abstract:

There is urgent interest in marketing to move away from neo-classical value definitions suggesting that value creation is a process of exchanging goods for money. In the present paper, value creation is conceptualized as an integration of two distinct, yet closely coupled, processes. First, actors co-create what this paper calls an underlying basis of value. This is done by interactively re-configuring resources. By relating and combining resources, activity sets, and risks across actor boundaries in novel ways, actors create joint productivity gains, a concept very similar to density (Normann, 2001). Second, actors engage in a process of signification and evaluation. Signification implies co-constructing the meaning and worth of the joint productivity gains co-created through interactive resource re-configuration, as well as sharing those gains through a pricing mechanism as value to the involved actors. The conceptual framework highlights an all-important dynamic associated with 'value creation' and 'value', a dynamic the paper claims has eluded past marketing research. The paper argues that the framework presented here is appropriate for the interactive service perspective, where value and value creation are not objectively given but depend on the power of the involved actors' socially constructed frames to mobilize resources across actor boundaries in ways that 'enhance system well-being' (Vargo et al., 2008). The paper contributes to research on Service Logic, Service-Dominant Logic, and Service Science.

Relevance: 20.00%

Publisher:

Abstract:

An important issue in the design of a distributed computing system (DCS) is the development of a suitable protocol. This paper presents an effort to systematize the protocol design procedure for a DCS. Protocol design and development can be divided into six phases: specification of the DCS, specification of protocol requirements, protocol design, specification and validation of the designed protocol, performance evaluation, and hardware/software implementation. This paper describes techniques for the second and third phases, while the first phase was considered by the authors in their earlier work. Matrix-based and set-theoretic approaches are used for the specification of the DCS and of the protocol requirements. These two formal specification techniques form the basis of a simple and straightforward procedure for designing the protocol. The applicability of the design procedure is illustrated with the example of a computing system encountered on board a spacecraft. A Petri-net-based approach is adopted to model the protocol. The methodology developed in this paper can be used in other DCS applications.
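
To give a flavour of Petri-net protocol modelling, here is a minimal sketch in which places hold tokens and a transition fires when all of its input places are marked; the toy send/ack handshake is illustrative, not the spacecraft protocol considered in the paper.

```python
# Minimal Petri-net simulation: a marking maps places to token counts,
# and a transition fires by consuming its input tokens and producing
# its output tokens. The send/ack handshake below is a toy example.
marking = {"ready": 1, "sent": 0, "acked": 0, "channel_free": 1}

# transition name -> (input places with weights, output places with weights)
TRANSITIONS = {
    "send": ({"ready": 1, "channel_free": 1}, {"sent": 1}),
    "ack":  ({"sent": 1}, {"acked": 1, "channel_free": 1}),
    "next": ({"acked": 1}, {"ready": 1}),
}

def enabled(name):
    pre, _ = TRANSITIONS[name]
    return all(marking[p] >= k for p, k in pre.items())

def fire(name):
    assert enabled(name), f"{name} is not enabled"
    pre, post = TRANSITIONS[name]
    for p, k in pre.items():
        marking[p] -= k
    for p, k in post.items():
        marking[p] += k

for t in ("send", "ack", "next"):        # one full handshake cycle
    fire(t)
    print(f"after {t}: {marking}")
```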

Relevance: 20.00%

Publisher:

Abstract:

Summary: Model-based assessment of the risks to aquatic organisms arising from the industrial handling of chemicals.

Relevance: 20.00%

Publisher:

Abstract:

The use of invariants is an important tool for the analysis of distributed and concurrent systems modeled by Petri nets. For a large practical system, computing the desired invariants with existing techniques is a time-consuming task. This paper proposes a theoretical foundation for the simplified computation of desired invariants. We provide invariant-preserving Petri net reduction rules, followed by conditions for the existence of invariants in various well-structured nets. If an invariant exists, it can be found directly from the net structure using the formulas derived, or by applying existing techniques to the reduced net.
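
The defining property of a place invariant, x^T C = 0 for the place-by-transition incidence matrix C, can be checked directly; the following sketch computes P-invariants for a small illustrative cyclic net (not one of the paper's reduced nets) via the null space of C^T.

```python
# P-invariant computation: a P-invariant is a vector x with x^T C = 0,
# where C is the place-by-transition incidence matrix. The 3-place
# cyclic net below is an illustrative example.
from sympy import Matrix

# Rows: places p1, p2, p3; columns: transitions t1, t2, t3.
# t1 moves a token p1 -> p2, t2 moves p2 -> p3, t3 moves p3 -> p1.
C = Matrix([
    [-1,  0,  1],
    [ 1, -1,  0],
    [ 0,  1, -1],
])

# x^T C = 0  <=>  C^T x = 0, so P-invariants span the null space of C^T.
for v in C.T.nullspace():
    x = v / max(abs(c) for c in v if c != 0)   # scale to small integers
    # Here x = (1, 1, 1): the total token count p1 + p2 + p3 is conserved.
    print("P-invariant:", list(x))
```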

Relevance: 20.00%

Publisher:

Abstract:

In this paper, we use reinforcement learning (RL) as a tool to study price dynamics in an electronic retail market consisting of two competing sellers and price-sensitive, lead-time-sensitive customers. Sellers, offering identical products, compete on price to satisfy stochastically arriving demands (customers), and follow standard inventory control and replenishment policies to manage their inventories. In such a generalized setting, RL techniques have not previously been applied. We consider two representative cases: 1) the no-information case, where none of the sellers has any information about customer queue levels, inventory levels, or prices at the competitors; and 2) the partial-information case, where every seller has information about the customer queue levels and inventory levels of the competitors. Sellers employ automated pricing agents, or pricebots, which use RL-based pricing algorithms to reset prices at random intervals based on factors such as the number of back orders, inventory levels, and replenishment lead times, with the objective of maximizing discounted cumulative profit. In the no-information case, we show that a seller who uses Q-learning outperforms a seller who uses derivative following (DF). In the partial-information case, we model the problem as a Markovian game and use actor-critic-based RL to learn dynamic prices. We believe our approach to solving these problems is a new and promising way of setting dynamic prices in multi-seller environments with stochastic demands, price-sensitive customers, and inventory replenishments.
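
A minimal sketch of the no-information flavour of the problem: a Q-learning seller that sees only its own last price and learns against a randomly pricing rival. The demand model, cost, and parameters are illustrative assumptions, not the paper's market simulation.

```python
# Q-learning pricebot sketch: the state is the seller's own last price,
# the action is the next price. Rival behaviour and demand are toy
# assumptions standing in for the paper's inventory-driven market.
import random

PRICES = [4, 5, 6, 7, 8]
COST = 3.0
EPS, ALPHA, GAMMA = 0.1, 0.2, 0.9

Q = {(s, a): 0.0 for s in PRICES for a in PRICES}

def demand(own, rival):
    """Price-sensitive customers: share falls as own price exceeds rival's."""
    return max(0.0, 10.0 - 1.5 * own + 0.5 * rival)

state = random.choice(PRICES)
for _ in range(20000):
    rival = random.choice(PRICES)                    # rival prices at random
    if random.random() < EPS:                        # epsilon-greedy exploration
        action = random.choice(PRICES)
    else:
        action = max(PRICES, key=lambda a: Q[(state, a)])
    reward = (action - COST) * demand(action, rival)
    nxt = action                                     # next state = price just set
    best_next = max(Q[(nxt, a)] for a in PRICES)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

greedy = {s: max(PRICES, key=lambda a: Q[(s, a)]) for s in PRICES}
print("learned price policy by state:", greedy)
```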

Relevance: 20.00%

Publisher:

Abstract:

Let G = (V, E) be a weighted undirected graph with nonnegative edge weights. An estimate δ̂(u, v) of the actual distance d(u, v) between u, v ∈ V is said to be of stretch t if and only if d(u, v) ≤ δ̂(u, v) ≤ t · d(u, v). Computing all-pairs small-stretch distances efficiently (both in terms of time and space) is a well-studied problem in graph algorithms. We present a simple, novel, and generic scheme for all-pairs approximate shortest paths. Using this scheme and some new ideas and tools, we design faster algorithms for all-pairs t-stretch distances for a whole range of stretch t, and we also answer an open question posed by Thorup and Zwick in their seminal paper [J. ACM, 52 (2005), pp. 1-24].
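
The stretch-t definition is easy to check concretely: compute exact all-pairs distances (here with Floyd-Warshall) and test whether a candidate estimate table satisfies the two-sided bound. The 4-vertex graph and the crude route-through-one-vertex estimate below are toy choices.

```python
# Check the stretch-t definition on a toy graph: exact distances via
# Floyd-Warshall, then verify d(u,v) <= est(u,v) <= t*d(u,v).
INF = float("inf")
n, t = 4, 3.0
edges = [(0, 1, 1.0), (1, 2, 2.0), (2, 3, 1.0), (0, 3, 5.0)]  # (u, v, weight)

d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
for u, v, w in edges:
    d[u][v] = d[v][u] = min(d[u][v], w)
for k in range(n):                       # Floyd-Warshall relaxation
    for i in range(n):
        for j in range(n):
            if d[i][k] + d[k][j] < d[i][j]:
                d[i][j] = d[i][k] + d[k][j]

def is_stretch_t(est):
    return all(d[u][v] <= est[u][v] <= t * d[u][v]
               for u in range(n) for v in range(n))

# A deliberately crude estimate: route every pair through vertex 0.
est = [[d[u][0] + d[0][v] if u != v else 0.0 for v in range(n)] for u in range(n)]
ratios = [est[u][v] / d[u][v] for u in range(n) for v in range(n) if u != v]
print("realized stretch of the estimate:", max(ratios))
print("is a stretch-3 estimate:", is_stretch_t(est))
```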

Relevance: 20.00%

Publisher:

Abstract:

This master's thesis studies how trade liberalization affects firm-level productivity and industrial evolution. To do so, I build a dynamic model that treats firm-level productivity as endogenous, in order to investigate the influence of trade on firm productivity and market structure. In the framework, heterogeneous firms in the same industry operate differently in equilibrium. Specifically, firms are ex ante identical, and heterogeneity arises as an equilibrium outcome. Under monopolistic competition, this type of model yields an industry that is represented not by a steady-state outcome but by an evolution that relies on the decisions made by individual firms. I prove that trade liberalization has a generally positive impact on technology adoption rates and hence increases firm-level productivity. This endogenous technology adoption model also captures a stylized fact: exporting firms are larger and more productive than their non-exporting counterparts in the same sector. I assume that the number of firms is endogenous since, according to the empirical literature, industrial evolution shows considerably different patterns across countries: some industries experience large-scale firm exit during periods of contracting market share, while others display a relatively stable or gradually increasing number of firms. The term "shakeout" describes the dramatic decrease in the number of firms. To explain the causes of shakeouts, I construct a model in which forward-looking firms decide to enter and exit the market on the basis of their state of technology. In equilibrium, firms choose different dates to adopt the innovation, which generates a gradual diffusion process, and it is exactly this gradual diffusion that generates the rapid, large-scale exit phenomenon. Specifically, the model demonstrates a positive feedback between firm exit and adoption: the reduction in the number of firms increases the incentives for the remaining firms to adopt the innovation. Therefore, under complete information, this model not only generates a shakeout but also captures the stability of an industry. However, a solely national view of industrial evolution neglects the importance of international trade in determining the shape of market structure. In particular, I show that higher trade barriers lead to more fragile markets, encouraging over-entry in the initial stage of the industry life cycle and raising the probability of a shakeout. Therefore, more liberalized trade generates a more stable market structure from both national and international viewpoints. The main references are Ederington and McCalman (2008, 2009).

Relevance: 20.00%

Publisher:

Abstract:

High-speed evaluation of a large number of linear, quadratic, and cubic expressions is very important for the modeling and real-time display of objects in computer graphics. Using VLSI techniques, H. Fuchs and his group have actually built chips, called pixel planes, that evaluate linear expressions. In this paper, we describe a topological variant of Fuchs' pixel planes that can evaluate linear, quadratic, cubic, and higher-order polynomials. In our design, we make use of local interconnections only, i.e., interconnections between neighboring processing cells. This leads to the concept of tiling the processing cells for VLSI implementation.
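
One classical way to evaluate a polynomial at every pixel using only neighbour-to-neighbour updates is forward differencing; the sketch below illustrates that locality idea for a cubic along a scanline. It conveys the local-interconnection principle only, and is not Fuchs' pixel-planes circuit or the paper's tiled design.

```python
# Forward differencing: each pixel's value comes from its neighbour's
# value plus constant-step difference updates, i.e. purely local work.
def cubic_scanline(a, b, c, d, n):
    """Evaluate f(x) = a*x^3 + b*x^2 + c*x + d for x = 0..n-1."""
    f = lambda x: ((a * x + b) * x + c) * x + d
    # Initial value and forward differences at x = 0 (step h = 1).
    v = f(0)
    d1 = f(1) - f(0)                        # first difference
    d2 = f(2) - 2 * f(1) + f(0)             # second difference
    d3 = f(3) - 3 * f(2) + 3 * f(1) - f(0)  # third difference (constant = 6a)
    out = []
    for _ in range(n):
        out.append(v)
        v, d1, d2 = v + d1, d1 + d2, d2 + d3   # three local additions per pixel
    return out

vals = cubic_scanline(1, -2, 3, 5, 6)
assert vals == [((x - 2) * x + 3) * x + 5 for x in range(6)]
print(vals)
```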