Abstract:
Research on multiple classifier systems includes the creation of an ensemble of classifiers and the proper combination of their decisions. To combine the decisions given by classifiers, methods based on fixed rules and decision templates are often used; as a result, the influence of and relationships among classifier decisions are often not considered in the combination scheme. In this paper we propose a framework that combines classifiers using a decision graph under a random field model, with a game strategy approach to obtain the final decision. The results of combining Optimum-Path Forest (OPF) classifiers using the proposed model are reported, showing good performance in experiments on simulated and real data sets. These results encourage the use of OPF ensembles and the proposed framework for designing multiple classifier systems. © 2011 Springer-Verlag.
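For contrast with the graph-based combiner proposed above, the fixed combination rules the abstract mentions are simple to state. Below is a minimal Python sketch of two classical fixed rules (sum rule and majority vote), assuming each classifier outputs a vector of class-posterior estimates; the data and names are illustrative, not from the paper:

```python
import numpy as np

def sum_rule(posteriors):
    """Fixed sum rule: add the classifiers' class-posterior vectors
    and pick the class with the largest total."""
    return int(np.argmax(posteriors.sum(axis=0)))

def majority_vote(posteriors):
    """Fixed majority vote: each classifier votes for its argmax class;
    the most-voted class wins."""
    votes = np.argmax(posteriors, axis=1)
    return int(np.bincount(votes).argmax())

# Example: three classifiers, three classes.
p = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.4, 0.4, 0.2]])
print(sum_rule(p), majority_vote(p))  # both pick class 0 here
```

Rules like these treat the classifiers as independent, which is precisely the limitation the decision-graph framework is meant to address.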
Abstract:
This paper presents the results obtained with a business game whose model represents the decision-making process at two moments in the life of an industrial company: the design of the industrial plant and its subsequent management. The game model was conceived so that the player's first decision establishes capacity and other parameters, such as the quantity of each product to produce, marketing expenses, research and development, quality, advertising, salaries, whether purchases will be made in installments or in cash, whether there will be credit sales and how many installments will be allowed, and the number of workers in the assembly area. An experiment was conducted with employees of a Brazilian company. The data obtained indicate that the players lack knowledge, especially in finance. Although these results cannot be generalized, they confirm prior results with undergraduate and graduate students, and they indicate the need for reinforcement of this content at the undergraduate level. © 2012 Springer-Verlag.
Abstract:
Increased accessibility to high-performance computing resources has created a demand for user support through performance evaluation tools such as iSPD (iconic Simulator for Parallel and Distributed systems), a simulator based on iconic modelling for distributed environments such as computer grids. It was developed to make it easier for general users to create their grid models, including allocation and scheduling algorithms. This paper describes how schedulers are managed by iSPD and how users can easily adopt the scheduling policy that best improves the system being simulated. A thorough description of iSPD is given, detailing its scheduler manager. Comparisons between iSPD and Simgrid simulations, including runs of the simulated environment on a real cluster, are also presented. © 2012 IEEE.
Abstract:
Gesture-based applications have particularities, since users interact in a natural way, much as they interact in the non-digital world. Hence, new requirements are imposed on the software design process. This paper presents a software development process model for these applications, covering requirement specification, design, implementation, and testing procedures. The steps and activities of the proposed model were tested through a case study of a puzzle game, in which the puzzle is completed when all pieces of a painting are correctly positioned by drag-and-drop actions performed with the user's hand gestures. The paper also reports the results of applying a heuristic evaluation to this game. © 2012 IEEE.
Abstract:
The use of non-pressure-compensating drip hose for horticultural and annual-cycle fruit crops is growing in Brazil. In this context, the challenge for designers is obtaining longer lateral lines with high uniformity. The objective of this study was to develop a model for designing longer lateral lines using non-pressure-compensating drip hose. Two hypotheses were evaluated with the developed model: a) using two different emitter spacings in the same lateral line allows a longer line; b) longer lateral lines can be obtained with high pressure variation along the line, provided the distribution uniformity remains within acceptable limits. A computer program based on the model was developed in Delphi; it designs level lateral lines using non-pressure-compensating drip hose. The input data are: desired distribution uniformity (DU); initial and final pressure in the lateral line; coefficients of the relationship between emitter discharge and pressure head; hose internal diameter; pipe cross-sectional area at the dripper; and roughness coefficient for the Hazen-Williams equation. The program can calculate the lateral line length in three ways: selecting two emitter spacings and defining the exchange point; using two pre-established emitter spacings and calculating the length of each section; or using a single emitter spacing. Results showed that using two sections with different dripper spacings did not allow a longer line, but it did yield better uniformity than a lateral line with a single emitter spacing. Adopting two spacings increased the flow rate per meter in the final section, which represented approximately 80% of the lateral line's total length, and this justifies their use. The software achieved DU above 90% with a pressure head variation of 40% when two emitter spacings were used. The developed model and software proved to be accurate, easy to use, and useful for lateral line design with non-pressure-compensating drip hose.
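To illustrate the kind of computation such a design model performs, the sketch below marches along a level lateral line from the closed distal end, accumulating emitter discharge (q = k·h^x) and Hazen-Williams friction loss segment by segment, and then computes the lowest-quarter DU. All numerical values (k, x, diameter, spacing, inlet head) are illustrative placeholders, and the actual program, written in Delphi, additionally handles two emitter spacings and the exchange point:

```python
def emitter_flow(h, k=2.0e-7, x=0.5):
    """Emitter discharge (m^3/s) from pressure head h (m): q = k * h**x."""
    return k * h ** x

def hazen_williams_loss(q, length, d, c=140.0):
    """Head loss (m) over one segment, Hazen-Williams equation in SI form."""
    return 10.67 * length * q ** 1.852 / (c ** 1.852 * d ** 4.87)

def lateral_profile(n_emitters, spacing, h_end, d=0.016):
    """March upstream from the closed distal end: at each emitter, add its
    discharge to the line flow and the segment friction loss to the head."""
    h, q_line, flows = h_end, 0.0, []
    for _ in range(n_emitters):
        flows.append(emitter_flow(h))
        q_line += flows[-1]
        h += hazen_williams_loss(q_line, spacing, d)
    return flows

flows = lateral_profile(n_emitters=200, spacing=0.5, h_end=8.0)
low_quarter = sorted(flows)[: len(flows) // 4]
du = 100.0 * (sum(low_quarter) / len(low_quarter)) / (sum(flows) / len(flows))
print(f"DU = {du:.1f}%")
```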
Abstract:
The transcription process is crucial to life, and the enzyme RNA polymerase (RNAP) is the major component of the transcription machinery. The development of single-molecule techniques, such as magnetic and optical tweezers, atomic-force microscopy, and single-molecule fluorescence, has increased our understanding of the transcription process and complements traditional biochemical studies. Based on these studies, theoretical models have been proposed to explain and predict the kinetics of the RNAP during polymerization, highlighting the results achieved by models based on the thermodynamic stability of the transcription elongation complex. However, experiments showed that if more than one RNAP initiates from the same promoter, the transcription behavior changes slightly and new phenomena are observed. We proposed and implemented a theoretical model that considers collisions between RNAPs and predicts their cooperative behavior during multi-round transcription, generalizing the stochastic sequence-dependent model of Bai et al. In our approach, collisions between elongating enzymes modify their transcription rate values. We performed the simulations in Mathematica® and compared the results of single- and multiple-molecule transcription with experimental results and other theoretical models. Our multi-round approach recovers several expected behaviors, showing that transcription of the studied sequences can be accelerated by up to 48% when collisions are allowed: dwell times on pause sites are reduced, as is the distance that RNAPs backtrack from backtracking sites. © 2013 Costa et al.
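The collision mechanism can be pictured with a toy lattice simulation. The sketch below implements only hard-core exclusion between consecutive polymerases, whereas the model described above is sequence-dependent and modifies transcription rate values on collision; the footprint, step probability, and template length are illustrative assumptions:

```python
import random

FOOTPRINT = 35   # nt occluded by one RNAP (illustrative)
STEP_P = 0.3     # per-tick forward-step probability (illustrative)
LENGTH = 2000    # template length in nt (illustrative)

def simulate(n_rnap, loading_gap=50, ticks=20000):
    """Toy multi-RNAP elongation: a polymerase may step forward only if
    the polymerase ahead of it is more than FOOTPRINT nt away."""
    pos = [-i * loading_gap for i in range(n_rnap)]  # leading RNAP first
    for _ in range(ticks):
        for i, p in enumerate(pos):
            ahead = pos[i - 1] if i > 0 else None
            blocked = ahead is not None and ahead - p <= FOOTPRINT
            if not blocked and random.random() < STEP_P:
                pos[i] = min(p + 1, LENGTH)
    return pos

random.seed(1)
print(simulate(n_rnap=3))  # trailing RNAPs queue behind the leader
```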
Abstract:
This paper presents a methodology for modeling high-intensity discharge lamps based on artificial neural networks. The methodology provides a model able to represent the device operating at distribution-system frequency when facing events related to power quality. With the aid of a data acquisition system monitoring the laboratory experiment, and using MATLAB® software, data were obtained for the training of two neural networks. These neural networks, working together, were able to represent the behavior of a discharge lamp with high fidelity. The excellent performance of these models allowed the simulation of a group of lamps in a distribution system with shorter simulation times than mathematical models, which justifies applying this family of load models in electric power systems. The representation of the device under power quality disturbances also proved to be a useful tool for more complex studies in distribution systems. © 2013 Brazilian Society for Automatics - SBA.
Abstract:
In this paper, a hybrid heuristic methodology that employs fuzzy logic for solving the AC transmission network expansion planning (AC-TEP) problem is presented. An enhanced constructive heuristic algorithm is proposed to obtain high-quality solutions for this complicated problem while considering contingencies. To indicate the severity of a contingency, two performance indices, namely the line flow performance index and the voltage performance index, are calculated. An interior point method is applied as a nonlinear programming solver to handle the resulting nonconvex optimization problems, with an objective function that includes the costs of the new transmission lines as well as the real power losses. The performance of the proposed method is examined by applying it to the well-known Garver system for different cases. The simulation studies and result analysis demonstrate that the proposed method provides a promising way to find an optimal plan. Obtaining the best quality solution shows the capability and viability of the proposed algorithm for AC-TEP. © TÜBİTAK.
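For reference, one common textbook form of the two contingency-severity indices is sketched below; the weights and exponents are illustrative, and the paper's exact definitions may differ:

```latex
% Line flow performance index (overloads) and voltage performance index
% (voltage deviations), in one common form:
\[
  \mathrm{PI}_{P} = \sum_{l \in \mathcal{L}} \frac{w_l}{2n}
      \left( \frac{P_l}{P_l^{\max}} \right)^{2n},
  \qquad
  \mathrm{PI}_{V} = \sum_{b \in \mathcal{B}} \frac{w_b}{2m}
      \left( \frac{\lvert V_b \rvert - V_b^{\mathrm{sp}}}{\Delta V_b^{\lim}} \right)^{2m}
\]
% Here P_l is the post-contingency flow on line l, P_l^max its rating,
% and V_b^sp the specified bus voltage; both indices grow sharply as a
% limit is approached or violated.
```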
Abstract:
This work presents a numerical model to simulate refrigerant flow through capillary tubes, commonly used as expansion devices in refrigeration systems. The flow is divided into a single-phase region, where the refrigerant is in the subcooled liquid state, and a two-phase flow region. The capillary tube is considered straight and horizontal, and the flow is taken as one-dimensional and adiabatic. Steady-state conditions are assumed, and metastable flow phenomena are neglected. The two-fluid model, which considers hydrodynamic and thermal non-equilibrium between the liquid and vapor phases, is applied to the two-phase flow region. Comparisons are made with experimental measurements of the mass flow rate and pressure distribution along two capillary tubes working with refrigerant R-134a under different operating conditions. The results indicate that the present model provides a better estimation than the commonly employed homogeneous model. Some computational results for the quality, void fraction, velocities, and temperatures of each phase are presented and discussed.
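As a minimal illustration of the single-phase part of such a model, the sketch below marches along the subcooled liquid region with a constant friction factor until the local pressure falls to the saturation pressure; the two-phase two-fluid region is not shown, and all property values are placeholders rather than R-134a data:

```python
# Illustrative placeholders, not R-134a property data.
RHO = 1200.0   # liquid density, kg/m^3
F = 0.02       # Darcy friction factor, assumed constant
D = 1.1e-3     # tube inner diameter, m
G = 4000.0     # mass flux, kg/(m^2 s)

def single_phase_length(p_in, p_sat, dz=1e-3):
    """Length (m) of the subcooled liquid region: integrate the constant
    frictional pressure gradient until saturation pressure is reached."""
    dp_dz = F * G * G / (2.0 * RHO * D)   # Pa/m
    p, z = p_in, 0.0
    while p > p_sat:
        p -= dp_dz * dz
        z += dz
    return z

print(f"{single_phase_length(p_in=9.0e5, p_sat=6.0e5):.2f} m")
```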
Abstract:
Modeling is a necessary step in performing a finite element analysis. Different methods of model construction are reported in the literature, such as Bio-CAD modeling. The purpose of this study was to evaluate and apply two Bio-CAD modeling methods for a human edentulous hemi-mandible in finite element analysis. A stereolithographic model was reconstructed from CT scans of a dried human skull. Two modeling methods were performed: an STL conversion approach combined with STL simplification (Model 1) and a reverse engineering approach (Model 2). In the finite element analysis, the action of the lateral pterygoid muscle was used as the loading condition to assess total displacement (D), equivalent von Mises stress (VM), and maximum principal stress (MP). The two models differed in geometry with respect to the number of surfaces (1834 for Model 1; 282 for Model 2), and differences were also observed in the finite element meshes (30428 nodes/16683 elements for Model 1; 15801 nodes/8410 elements for Model 2). The D, VM, and MP stress areas showed similar distributions in the two models, but the maximum and minimum values differed: D ranged from 0 to 0.511 mm (Model 1) and from 0 to 0.544 mm (Model 2); VM stress from 6.36E-04 to 11.4 MPa (Model 1) and from 2.15E-04 to 14.7 MPa (Model 2); and MP stress from -1.43 to 9.14 MPa (Model 1) and from -1.2 to 11.6 MPa (Model 2). Of the two Bio-CAD modeling methods, reverse engineering provided a better anatomical representation than the STL conversion approach. The models differed in finite element mesh, total displacement, and stress distribution.
Abstract:
We investigate the problem of waveband switching (WBS) in a wavelength-division multiplexing (WDM) mesh network with dynamic traffic requests. To solve the WBS problem in a homogeneous dynamic WBS network, where every node is a multi-granular optical cross-connect (MG-OXC), we construct an auxiliary graph. Based on the auxiliary graph, we develop two heuristic on-line WBS algorithms with different grouping policies, namely the wavelength-first WBS algorithm based on the auxiliary graph (WFAUG) and the waveband-first WBS algorithm based on the auxiliary graph (BFAUG). Our results show that the WFAUG algorithm outperforms the BFAUG algorithm.
Abstract:
An analytical model for Virtual Topology Reconfiguration (VTR) in optical networks is developed. It is aimed at optical networks with a circuit-based data plane and an IP-like control plane. By identifying and analyzing the important factors impacting network performance due to VTR operations on both planes, we can compare the benefits and penalties of different VTR algorithms and policies. The best VTR scenario can then be adaptively chosen from a set of such algorithms and policies according to real-time network conditions. For this purpose, a cost model integrating all these factors is created to provide a comparison criterion independent of any specific VTR algorithm or policy. A case study based on simulation experiments illustrates the application of our models.
Abstract:
The emerging Cyber-Physical Systems (CPSs) are envisioned to integrate computation, communication, and control with the physical world. CPSs therefore require close interactions between the cyber and physical worlds in both time and space. These interactions are usually governed by events, which occur in the physical world and should autonomously be reflected in the cyber-world, and by actions, which are taken by the CPS as a result of event detection and certain decision mechanisms. Both event detection and action decision operations must be performed accurately and in a timely manner to guarantee temporal and spatial correctness. This calls for a flexible architecture and a task representation framework with which to analyze CPS operations. In this paper, we explore the temporal and spatial properties of events, define a novel CPS architecture, and develop a layered spatiotemporal event model for CPS. An event is represented as a function of attribute-based, temporal, and spatial event conditions, and logical operators are used to combine the different types of event conditions to capture composite events. To the best of our knowledge, this is the first event model that captures the heterogeneous characteristics of CPS for formal temporal and spatial analysis.
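The composite-event idea, event conditions as predicates joined by logical operators, can be sketched compactly. The following Python is an illustrative reading of the model with hypothetical names, not the paper's formal notation:

```python
from dataclasses import dataclass

@dataclass
class Event:
    attrs: dict   # attribute readings, e.g. {"temp": 81.0}
    t: float      # occurrence time (s)
    x: float      # position coordinates
    y: float

# Each condition is a predicate Event -> bool.
def attribute(name, pred):
    return lambda e: name in e.attrs and pred(e.attrs[name])

def within_time(t0, t1):
    return lambda e: t0 <= e.t <= t1

def within_region(x0, y0, r):
    return lambda e: (e.x - x0) ** 2 + (e.y - y0) ** 2 <= r ** 2

# Logical operators combine conditions into composite-event detectors.
def AND(*conds):
    return lambda e: all(c(e) for c in conds)

def OR(*conds):
    return lambda e: any(c(e) for c in conds)

# Composite event: high temperature, inside a zone, during a time window.
hot_in_zone = AND(attribute("temp", lambda v: v > 80.0),
                  within_region(0.0, 0.0, 10.0),
                  within_time(0.0, 60.0))
print(hot_in_zone(Event({"temp": 81.0}, t=12.0, x=3.0, y=4.0)))  # True
```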
Abstract:
Time correlation functions of current fluctuations were calculated by molecular dynamics (MD) simulations in order to investigate sound waves of high wavevectors in the glass-forming liquid Ca(NO3)2·4H2O. Dispersion curves, ω(k), were obtained for longitudinal (LA) and transverse acoustic (TA) modes, and also for longitudinal optic (LO) modes. Spectra of LA modes calculated by MD simulations were modeled by a viscoelastic model within the memory function framework. The viscoelastic model is used to rationalize the change of slope taking place at k ≈ 0.3 Å⁻¹ in the ω(k) curve of acoustic modes. For still larger wavevectors, mixing of acoustic and optic modes is observed. Partial time correlation functions of longitudinal mass currents were calculated separately for the ions and the water molecules. The wavevector dependence of excitation energies of the corresponding partial LA modes indicates the coexistence of a relatively stiff subsystem made of cations and anions, and a softer subsystem made of water molecules. © 2012 American Institute of Physics. http://dx.doi.org/10.1063/1.4751548
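For reference, the dispersion curves described here are conventionally read off from the spectra of current correlation functions; a sketch of the usual definitions (normalization conventions vary) is:

```latex
% Longitudinal mass current for wavevector k and its time correlation:
\[
  j^{L}(\mathbf{k},t) = \sum_{i} \bigl( \mathbf{v}_i(t) \cdot \hat{\mathbf{k}} \bigr)\,
      e^{\, i\,\mathbf{k}\cdot\mathbf{r}_i(t)},
  \qquad
  C_{L}(k,t) = \frac{1}{N} \bigl\langle\, j^{L}(\mathbf{k},t)\,
      j^{L}(-\mathbf{k},0) \,\bigr\rangle
\]
% Each point omega(k) of a dispersion curve is the peak frequency of
% C_L(k, omega), the time Fourier transform of C_L(k, t).
```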
Abstract:
This work addresses robust model predictive control (MPC) of systems with model uncertainty, considering the case of zone control of multivariable stable systems with multiple time delays. The usual approach to this kind of problem is to include a non-linear cost constraint in the control problem; the control action is then obtained at each sampling time as the solution to a non-linear programming (NLP) problem, which for high-order systems can be computationally expensive. Here, the robust MPC problem is formulated as a linear matrix inequality (LMI) problem that can be solved in real time with a fraction of the computational effort. The proposed approach is compared with conventional robust MPC and tested through the simulation of a reactor system from the process industry.
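To give a flavor of the LMI machinery, the sketch below solves a small LMI feasibility problem with cvxpy: finding a common quadratic Lyapunov function for the vertices of an uncertain discrete-time model. This is not the paper's zone-control formulation; the matrices and tolerance are illustrative:

```python
import cvxpy as cp
import numpy as np

# Illustrative vertices of an uncertain (polytopic) discrete-time model.
A1 = np.array([[0.9, 0.2], [0.0, 0.8]])
A2 = np.array([[0.8, 0.3], [0.1, 0.7]])

# Find P > 0 with Ai' P Ai - P < 0 for every vertex: a common quadratic
# Lyapunov function certifying robust stability.
P = cp.Variable((2, 2), symmetric=True)
eps = 1e-6
cons = [P >> eps * np.eye(2)]
for A in (A1, A2):
    cons.append(A.T @ P @ A - P << -eps * np.eye(2))

prob = cp.Problem(cp.Minimize(0), cons)
prob.solve(solver=cp.SCS)
print(prob.status, np.round(P.value, 3))
```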