182 results for Network structure
Abstract:
This study analyzed the inter-individual variability of the temporal structure of basketball throwing. Ten experienced male basketball players were filmed and a number of kinematic movement parameters were analyzed. A biomechanical model provided the relative timing of the shoulder, elbow and wrist joint movements. Inter-individual variability was analyzed using the sequencing and relative timing of ten phases of the throw. To compare the variability of the movement phases between subjects, a discriminant analysis and an ANOVA were applied, with the Tukey test used to determine where differences occurred. The significance level was p = 0.05. Inter-individual variability was explained by three concomitant factors: (a) a precision control strategy, (b) a velocity control strategy and (c) intrinsic characteristics of the subjects. Therefore, although some actions are common to the basketball throwing pattern, each performer demonstrated particular and individual characteristics.
Abstract:
Motivation: Understanding the patterns of association between polymorphisms at different loci in a population (linkage disequilibrium, LD) is of fundamental importance in various genetic studies. Many coefficients have been proposed for measuring the degree of LD, but they provide only a static view of the current LD structure. Generative models (GMs) were proposed to go beyond these measures, giving not only a description of the actual LD structure but also a tool for understanding the process that generated it. GMs based on coalescent theory have been the most appealing because they link LD to evolutionary factors. Nevertheless, inference and parameter estimation for such models are still computationally challenging. Results: We present a more practical method to build GMs that describe LD. The method is based on learning weighted Bayesian network structures from haplotype data, extracting equivalence structure classes and using them to model LD. Results obtained on public data from the HapMap database show that the method is a promising tool for modeling LD. The associations represented by the learned models are correlated with the traditional LD measure D′. The method was able to represent LD blocks found by standard tools, and both the granularity of the association blocks and the readability of the models can be controlled. The results suggest that the causality information gained by our method can be useful for assessing how conserved the genetic markers are and for guiding the selection of a subset of representative markers.
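As a rough illustration of the idea of learning weighted associations between marker loci from haplotype data, the sketch below scores pairs of loci by empirical mutual information and keeps the strongly associated pairs as candidate edges. This is a hypothetical simplification: the paper learns weighted Bayesian network structures, whereas this uses plain pairwise mutual information as the edge weight, and the threshold value is arbitrary.

```python
import math
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(x)
    px, py, pxy = {}, {}, {}
    for a, b in zip(x, y):
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
    mi = 0.0
    for (a, b), c in pxy.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), written with raw counts
        mi += (c / n) * math.log(c * n / (px[a] * py[b]))
    return mi

def associated_pairs(haplotypes, threshold=0.05):
    """Return locus pairs whose mutual information exceeds the threshold.

    haplotypes: list of equal-length allele strings, one per individual.
    """
    m = len(haplotypes[0])
    cols = [[h[i] for h in haplotypes] for i in range(m)]
    return [(i, j) for i, j in combinations(range(m), 2)
            if mutual_information(cols[i], cols[j]) > threshold]
```

Two loci in perfect LD give a high score, while independent loci score zero, so such weights could serve as input for a structure-learning step.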
Abstract:
Power loss reduction in distribution systems (DSs) is a nonlinear, multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. For large-scale networks, the usual formulation of these problems involves thousands of constraint equations. Node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the solution simpler. In addition, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. Combining the multiobjective EA with NDE (MEAN) yields the proposed approach for solving DS problems on large-scale networks. Simulation results show that MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN exhibits sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks at relatively low running time.
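To make concrete the kind of radiality constraint that NDE sidesteps by construction, the sketch below shows the check a naive encoding would have to run on every candidate configuration: the closed switches must form a spanning tree (exactly n-1 branches, no loops, all buses reachable). This is an illustrative union-find check, not the NDE representation itself.

```python
def is_radial(n_buses, closed_switches):
    """Check that the closed switches form a radial (spanning-tree) topology:
    exactly n-1 branches connecting all buses with no loops."""
    if len(closed_switches) != n_buses - 1:
        return False
    parent = list(range(n_buses))

    def find(u):
        # union-find root lookup with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for a, b in closed_switches:
        ra, rb = find(a), find(b)
        if ra == rb:
            return False  # closing this switch would create a loop
        parent[ra] = rb
    return True
```

NDE avoids paying this cost per candidate because every individual it encodes is a tree by construction.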
Abstract:
We propose a connection admission control (CAC) scheme to monitor the traffic in a multi-rate WDM optical network. The CAC searches for the shortest path connecting the source and destination nodes, assigns wavelengths with enough bandwidth to serve the requests, supervises the traffic at the most requested nodes and, if needed, activates a reserved wavelength to release bandwidth according to traffic demand. We used a scale-free network topology, which includes highly connected nodes (hubs), to enhance the monitoring procedure. Numerical results obtained from computational simulations show improved network performance evaluated in terms of blocking probability.
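The admission step described above (shortest path, then wavelength assignment, else block) can be sketched as follows. This is a minimal illustration assuming unit-capacity wavelengths and first-fit assignment; the function and parameter names are hypothetical, and the reserved-wavelength activation and traffic supervision of the actual CAC are omitted.

```python
from heapq import heappush, heappop

def shortest_path(graph, src, dst):
    """Dijkstra over an adjacency dict {node: {neighbor: cost}}."""
    dist, prev, heap = {src: 0}, {}, [(0, src)]
    while heap:
        d, u = heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, u = [dst], dst
    while u != src:
        u = prev[u]
        path.append(u)
    return path[::-1]

def admit(graph, usage, wavelengths, src, dst):
    """First-fit wavelength assignment along the shortest path.
    Returns (path, wavelength) on success, or None if the request is blocked."""
    path = shortest_path(graph, src, dst)
    if path is None:
        return None
    links = list(zip(path, path[1:]))
    for wl in range(wavelengths):
        if all(wl not in usage.get(link, set()) for link in links):
            for link in links:
                usage.setdefault(link, set()).add(wl)
            return path, wl
    return None  # no wavelength is free on every link: request blocked
```

Counting how often `admit` returns None over a stream of random requests gives the blocking probability used as the performance metric.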
Abstract:
This letter shows that the matrix obtained via triangular factorization of the Jacobian matrix can be used for redundancy and observability analysis of metering systems composed of PMU measurements and conventional measurements (power and voltage magnitude measurements). Observability analysis and restoration are carried out during the triangular factorization of the Jacobian matrix, and the redundancy analysis is performed by exploring the structure of the resulting matrix. As a consequence, the matrix can be used for metering system planning considering conventional and PMU measurements. These features are outlined and illustrated by numerical examples.
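The core mechanism, detecting unobservable states as zero pivots during triangular factorization of the measurement Jacobian, can be illustrated with a small row-reduction sketch. This is a generic Gaussian-elimination illustration under the assumption of a dense real Jacobian; the letter's actual factorization and restoration procedure may differ.

```python
def observable_states(H, tol=1e-9):
    """Row-reduce the measurement Jacobian H (list of rows).
    Columns that never yield a pivot correspond to unobservable states."""
    rows = [r[:] for r in H]
    n = len(H[0])
    pivot_row = 0
    observable = []
    for col in range(n):
        # look for a row with a usable (nonzero) pivot in this column
        sel = None
        for r in range(pivot_row, len(rows)):
            if abs(rows[r][col]) > tol:
                sel = r
                break
        if sel is None:
            continue  # zero pivot: state `col` is not observable
        rows[pivot_row], rows[sel] = rows[sel], rows[pivot_row]
        observable.append(col)
        p = rows[pivot_row][col]
        for r in range(pivot_row + 1, len(rows)):
            f = rows[r][col] / p
            rows[r] = [a - f * b for a, b in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    return observable
```

For example, a linearly dependent set of branch-flow rows leaves one state column without a pivot, flagging it as unobservable.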
Abstract:
This paper analyses an optical network architecture composed of nodes equipped with multi-granular optical cross-connects (MG-OXCs) in addition to the usual optical cross-connects (OXCs). Selected network nodes can then perform both waveband and traffic grooming operations, and our goal is to assess the improvement in network performance brought by these additional capabilities. Specifically, the influence of the MG-OXC multi-granularity on the blocking probability is evaluated for 16 classes of service over a network based on the NSFNet topology. A bandwidth-capacity fairness mechanism is also added to the connection admission control to manage the blocking probabilities of all kinds of bandwidth requirements. Comprehensive computational simulations are carried out to compare eight distinct node architectures, showing that an adequate combination of waveband and single-wavelength ports on the MG-OXCs and OXCs allows more efficient operation of a WDM optical network carrying multi-rate traffic.
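As background for blocking-probability figures like the ones evaluated here, the classical Erlang B formula gives the blocking probability of a single link with a given number of channels under Poisson traffic. This is a textbook baseline, not the paper's multi-class simulation model, and it ignores multi-granularity and grooming.

```python
def erlang_b(traffic, channels):
    """Erlang B blocking probability for `traffic` erlangs offered to
    `channels` servers, via the stable recursion
    B(0) = 1,  B(k) = A*B(k-1) / (k + A*B(k-1))."""
    b = 1.0
    for k in range(1, channels + 1):
        b = traffic * b / (k + traffic * b)
    return b
```

Simulated per-class blocking can be sanity-checked against this formula in the single-rate, single-link special case.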
Abstract:
The advantages offered by the LED (Light Emitting Diode) have led to the quick and extensive adoption of this device as a replacement for incandescent lights. In such applications, however, the relationship between the design variables and the desired effect or result is very complex and difficult to model using conventional techniques. This paper develops a technique based on artificial neural networks that makes it possible to obtain the luminous intensity values of brake lights built with SMD (Surface Mounted Device) LEDs from design data. The technique can be used to design any automotive device that uses groups of SMD LEDs. Results from industrial applications using SMD LEDs are presented to validate the proposed technique.
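The kind of regressor involved, a small network mapping design variables to a predicted luminous intensity, can be sketched as a one-hidden-layer forward pass. The architecture, activation and input variables here are illustrative assumptions; the paper does not specify them in this abstract.

```python
import math

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """Forward pass of a one-hidden-layer network with tanh units and a
    linear output: the sort of small regressor that could map design
    variables (e.g. LED count, spacing, drive current) to an intensity."""
    hidden = [math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out
```

Training such a network on measured intensity data (weights fitted by backpropagation) is what replaces the intractable analytical model.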
Abstract:
This paper develops H(infinity) control designs based on neural networks for fully actuated and underactuated cooperative manipulators. The neural networks proposed in this paper adapt only the uncertain dynamics of the robot manipulators, working as a complement to the nominal model. The H(infinity) performance index includes the position errors as well as the squeeze force errors between the manipulator end-effectors and the object, representing a complete disturbance rejection scenario. For the underactuated case, the squeeze force control problem is more difficult to solve owing to the loss of some degrees of manipulator actuation. Results obtained with an actual cooperative manipulator, which can operate as either a fully actuated or an underactuated manipulator, are presented. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Five vegetable oils (canola, soybean, corn, cottonseed and sunflower) were characterized with respect to their composition, by gas chromatography, and their viscosity. The compositions of the vegetable oils suggest substantially different propensities for oxidation, following the order canola < corn < cottonseed < sunflower ≈ soybean. Viscosities at 40 °C and 100 °C and viscosity index (VI) values were determined for the vegetable oils and for two petroleum oil quenchants: Microtemp 157 (a conventional slow oil) and Microtemp 153B (an accelerated, or fast, oil). The kinematic viscosities of the different vegetable and petroleum oils at 40 °C were similar. The VI values for the different vegetable oils were very close, varying between 209 and 220, and were all much higher than the VI values obtained for Microtemp 157 (96) and Microtemp 153B (121). These data indicate that the viscosities of these vegetable oils are substantially less sensitive to temperature variation than those of the paraffinic-oil-based Microtemp 157 and Microtemp 153B. Although these data suggest that any of the vegetable oils evaluated could be blended with minimal impact on viscosity, the oxidative stability would surely be substantially affected. Cooling curve analysis was performed on these vegetable oils at 60 °C under non-agitated conditions, and the results were compared with cooling curves obtained under the same conditions for Microtemp 157 and Microtemp 153B. The cooling profiles of the different vegetable oils were similar, as expected from the VI values. However, no boiling was observed with any of the vegetable oils: heat transfer occurs only by convection, since there is no full-film boiling or nucleate boiling process as typically observed for petroleum oil quenchants, including those of this study. Therefore, high-temperature cooling is considerably faster for vegetable oils as a class. The cooling properties obtained suggest that vegetable oils would be especially suitable for quenching low-hardenability steels such as carbon steels.
Abstract:
Considering the increasing popularity of network-based control systems and the widespread adoption of IP networks (such as the Internet), this paper studies the influence of network quality of service (QoS) parameters on quality of control parameters. An example control loop is implemented using two LonWorks networks (CEA-709.1) interconnected by an emulated IP network, in which important QoS parameters such as delay and delay jitter can be completely controlled. Mathematical definitions are provided according to the literature, and the results of the network-based control loop experiment are presented and discussed.
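One standard mathematical definition of delay jitter from the literature is the smoothed interarrival jitter of RFC 3550 (RTP), sketched below. Whether the paper uses this particular definition is an assumption; it is shown here because it is the common reference formulation for IP networks.

```python
def interarrival_jitter(send_times, recv_times):
    """Smoothed interarrival jitter estimate as in RFC 3550:
    J := J + (|D| - J)/16, where D is the difference between the
    transit times of consecutive packets."""
    j = 0.0
    prev_transit = None
    for s, r in zip(send_times, recv_times):
        transit = r - s
        if prev_transit is not None:
            d = abs(transit - prev_transit)
            j += (d - j) / 16.0
        prev_transit = transit
    return j
```

A perfectly constant network delay yields zero jitter, while varying transit times drive the estimate up, which is exactly the disturbance a networked control loop must tolerate.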
Abstract:
In this paper, a computational implementation of an evolutionary algorithm (EA) is presented for tackling the problem of reconfiguring radial distribution systems. The developed module considers power quality indices such as long-duration interruptions and customer process disruptions due to voltage sags, using the Monte Carlo simulation method. Power quality costs are modeled in the mathematical problem formulation and added to the cost of network losses. The proposed EA codification uses a decimal representation. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are analyzed here. Several selection procedures are examined: tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and the feasibility of network operation when exchanging genetic material. The topologies of the initial population are randomly produced so that radial configurations are generated through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
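The random generation of radial initial topologies via a Prim-style tree growth can be sketched as follows: starting from one bus, repeatedly close a randomly chosen branch that connects a reached bus to an unreached one, so every individual produced is a spanning tree. This is a minimal illustration assuming a connected branch list; the paper's actual population-generation details may differ.

```python
import random

def random_radial_config(n_buses, branches, seed=None):
    """Grow a random spanning tree Prim-style over the candidate branches.
    Returns the n-1 branches to close for a radial configuration."""
    rng = random.Random(seed)
    reached = {0}          # start growth from bus 0
    closed = []
    while len(reached) < n_buses:
        # branches with exactly one endpoint already reached
        frontier = [(a, b) for a, b in branches
                    if (a in reached) != (b in reached)]
        a, b = rng.choice(frontier)
        reached.update((a, b))
        closed.append((a, b))
    return closed
```

Repeating this with different random seeds yields a diverse initial population in which radiality holds by construction.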
Abstract:
Austenitic stainless steels cannot be conventionally surface treated at temperatures close to 550 °C owing to intense precipitation of nitrides or carbides. Plasma carburizing allows carbon to be introduced into the steel at temperatures below 500 °C without carbide precipitation. Plasma carburizing of AISI 316L was carried out at 480 °C and 400 °C for 20 h, using CH(4) as the carbon carrier gas. The results show that carbon expanded austenite (gamma(c)) was formed on the surface: 20 μm deep after the 480 °C treatment and 8 μm deep after the 400 °C treatment. XRD results showed that the austenitic FCC lattice parameter increases from 0.358 nm to 0.363 nm for the 400 °C treatment and to 0.369 nm for the 480 °C treatment, giving an estimate of circa 10 at.% carbon content for the latter. Lattice distortion resulting from the expansion, together with the associated compressive residual stresses, increases the surface hardness to 1040 HV(0.025). Micro-scale tensile tests conducted on specimens prepared under the conditions selected above indicated that the damage imposed on the expanded austenite layer was related more to each separate grain than to the overall macro-scale stresses imposed by the tensile test. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
There are several ways to model a building and its heat gains from external as well as internal sources in order to evaluate proper operation, audit retrofit actions, and forecast energy consumption. Different techniques, varying from simple regression to models based on physical principles, can be used for simulation. A common hypothesis for all these models is that the input variables should be based on realistic data when available; otherwise, the evaluated energy consumption might be highly under- or overestimated. In this paper, a simple model based on an artificial neural network (ANN) is compared with a model based on physical principles (EnergyPlus) as an auditing and predicting tool for forecasting building energy consumption. The Administration Building of the University of Sao Paulo is used as a case study. The building energy consumption profiles were collected, as well as the campus meteorological data. Results show that both models are suitable for energy consumption forecasting. Additionally, a parametric analysis is carried out for the considered building in EnergyPlus to evaluate the influence on the forecast of several parameters, such as the occupation profile and the weather data. (C) 2008 Elsevier B.V. All rights reserved.
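The "simple regression" end of the modeling spectrum mentioned above can be made concrete with an ordinary least-squares fit of consumption against one driver variable. The choice of outdoor temperature as the driver is an illustrative assumption, not a model from the paper.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x, e.g. daily energy
    consumption (y) against outdoor temperature (x).
    Returns the intercept a and slope b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

def predict(a, b, x):
    """Forecast consumption at driver value x from the fitted line."""
    return a + b * x
```

Such a baseline is useful mainly as a lower bound: both the ANN and the EnergyPlus model must beat it to justify their extra complexity.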
Abstract:
A study on the use of artificial intelligence (AI) techniques for the modelling and subsequent control of an electric resistance spot welding (ERSW) process is presented. The ERSW process is characterized by the coupling of thermal, electrical, mechanical, and metallurgical phenomena. For this reason, early attempts to model it with established computational methods, such as finite differences, finite elements, and finite volumes, required simplifications that either took the resulting model far from reality or made it computationally too costly for use in a real-time control system. The authors have therefore developed an ERSW controller that uses fuzzy logic to adjust the energy transferred to the weld nugget. The proposed control strategies differ in the speed with which they reach convergence. Moreover, their application to the quality control of spot welds through artificial neural networks (ANNs) is discussed.
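A fuzzy adjustment of weld energy of the kind described can be sketched as a minimal single-input Mamdani-style controller: fuzzify the tracking error with triangular membership functions, fire three rules, and defuzzify with a weighted average of output singletons. The input variable, membership ranges, and output values here are all illustrative assumptions, not the authors' rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function rising on [a, b], falling on [b, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def energy_adjustment(error):
    """Fuzzy rules on a normalized nugget-energy error:
    error negative -> raise energy, near zero -> hold, positive -> lower it.
    Defuzzified as the weighted average of output singletons."""
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    # output singletons: relative change to the delivered energy
    weights = [(mu_neg, +0.5), (mu_zero, 0.0), (mu_pos, -0.5)]
    total = sum(mu for mu, _ in weights)
    if total == 0.0:
        return 0.0  # error outside all membership supports: no adjustment
    return sum(mu * out for mu, out in weights) / total
```

The shape and overlap of the membership functions are exactly the tuning knobs that make one strategy converge faster than another.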
Abstract:
Ti(6)Al(4)V thin films were grown by magnetron sputtering on a conventional austenitic stainless steel. Five deposition conditions, varying both the deposition chamber pressure and the plasma power, were studied. Highly textured thin films were obtained, their crystallite size (C) 2008 Elsevier Ltd. All rights reserved.