978 results for input parameter value recommendation


Relevance:

100.00%

Publisher:

Abstract:

Throughout the industrial processes of sheet metal manufacturing and refining, shear cutting is widely used for its speed and cost advantages over competing cutting methods. Industrial shears may include some force measurement possibilities, but the measured force is most likely influenced by friction losses between the shear tool and the point of measurement, and in general does not reflect the actual force applied to the sheet. Well-defined shears and accurate measurements of force and shear tool position are important for understanding the influence of shear parameters. Accurate experimental data are also necessary for the calibration of numerical shear models. Here, a dedicated laboratory set-up with well-defined geometry and movement in the shear, and high measurability in terms of force and geometry, is designed, built and verified. Parameters important to the shear process are studied with perturbation analysis techniques, and requirements on input parameter accuracy are formulated to meet experimental output demands. Input parameters in shearing are mostly geometric, but also include material properties and contact conditions. Based on the accuracy requirements, a symmetric experiment with internal balancing of forces is constructed to avoid guides and the corresponding friction losses. Finally, the experimental procedure is validated through shearing of a medium-grade steel. With the obtained experimental set-up performance, force changes resulting from changes in the studied input parameters are distinguishable down to a level of 1%.
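As an illustration of the perturbation analysis described above, a first-order error propagation turns assumed input tolerances into a floor on detectable force changes. All sensitivities and tolerances in this sketch are invented placeholders, not values from the paper:

```python
# First-order perturbation analysis: propagate input-parameter tolerances
# to an expected relative change in shear force. All sensitivities and
# tolerances below are illustrative placeholders, not measured values.

# Hypothetical relative sensitivities: % force change per 1% input change.
sensitivities = {
    "clearance": 0.30,
    "sheet_thickness": 1.80,   # thickness dominating the force level (assumed)
    "tool_edge_radius": 0.10,
}

# Assumed achievable relative tolerances of the laboratory set-up.
tolerances = {
    "clearance": 0.005,        # 0.5%
    "sheet_thickness": 0.002,  # 0.2%
    "tool_edge_radius": 0.010, # 1.0%
}

# Worst-case linear error propagation: |dF/F| <= sum_i |S_i| * |dx_i/x_i|
force_uncertainty = sum(
    abs(s) * tolerances[name] for name, s in sensitivities.items()
)
print(f"Expected relative force uncertainty: {force_uncertainty:.2%}")
# Force changes are only distinguishable if they exceed this floor,
# e.g. the 1% level reported for the experimental set-up.
```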

Relevance:

100.00%

Publisher:

Abstract:

In this paper, by using the Poincaré compactification in R^3 we make a global analysis of the Lorenz system, including a complete description of its dynamical behavior on the sphere at infinity. Combining analytical and numerical techniques, we show that for the parameter value b = 0 the system presents an infinite set of singularly degenerate heteroclinic cycles, which consist of invariant sets formed by a line of equilibria together with heteroclinic orbits connecting two of the equilibria. The dynamical consequences of the existence of such cycles are discussed. In particular, a possibly new mechanism behind the creation of Lorenz-like chaotic attractors is proposed, consisting of the change in the stability index of the saddle at the origin as the parameter b crosses the null value. Based on the knowledge of this mechanism, we have numerically found chaotic attractors for the Lorenz system in the case of small b > 0, that is, near the singularly degenerate heteroclinic cycles.
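A minimal numerical sketch of the regime described, integrating the Lorenz equations with a small b > 0 near the degenerate case b = 0; the remaining parameter values (sigma = 10, r = 28) are the classical ones and are assumptions here, not taken from the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Lorenz system: x' = sigma*(y - x), y' = r*x - y - x*z, z' = x*y - b*z
def lorenz(t, u, sigma, r, b):
    x, y, z = u
    return [sigma * (y - x), r * x - y - x * z, x * y - b * z]

sigma, r, b = 10.0, 28.0, 0.05  # small b > 0, near the degenerate case b = 0

sol = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.0],
                args=(sigma, r, b), dense_output=True, rtol=1e-8)

# Discard the transient and inspect the long-time behaviour; for small
# b > 0 the trajectory settles onto a Lorenz-like chaotic attractor.
t = np.linspace(100.0, 200.0, 20000)
x, y, z = sol.sol(t)
print("x range:", x.min(), x.max())
print("z range:", z.min(), z.max())
```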

Relevance:

100.00%

Publisher:

Abstract:

The paper describes a novel neural model for electrical load forecasting in transformers. The network acts as an identifier of structural features of the forecasting process, so that output parameters can be estimated and generalized from an input parameter set. The model was trained and assessed with load data from a Brazilian electric utility, taking into account time, current, voltage, and active power in the three phases of the system. The results obtained in the simulations show that the developed technique can be used as an alternative tool for the planning of electric power systems.
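A hedged sketch of this kind of neural identifier, using a small scikit-learn multilayer perceptron on synthetic stand-in data; the input quantities follow the abstract, but the data, architecture and library choice are assumptions:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-ins for the measured quantities named in the abstract:
# time of day plus per-phase current and voltage; the target is the
# transformer load (active power). Real utility data would replace this.
n = 2000
hour = rng.uniform(0, 24, (n, 1))
current = rng.uniform(50, 400, (n, 3))   # phase currents [A]
voltage = rng.normal(220, 5, (n, 3))     # phase voltages [V]
X = np.hstack([hour, current, voltage])
power_factor = 0.92
y = (current * voltage).sum(axis=1) * power_factor + rng.normal(0, 500, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small multilayer perceptron as the structural identifier.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
print(f"R^2 on held-out data: {model.score(X_test, y_test):.3f}")
```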

Relevance:

100.00%

Publisher:

Abstract:

The accurate identification of the nitrogen content in plants is extremely important, since it involves economic aspects and environmental impacts. Several experimental tests have been carried out to obtain characteristics and parameters associated with the health and growth of plants. The identification of nitrogen content in plants involves many nonlinear parameters and complex mathematical models. This paper describes a novel approach for the identification of nitrogen content through the SPAD index using artificial neural networks (ANN). The network acts as an identifier of relationships among crop varieties, fertilizer treatments, type of leaf, and the nitrogen content in the plants (the target), so that nitrogen content can be estimated and generalized from an input parameter set. This approach can form the basis for the development of an accurate real-time system to predict nitrogen content in plants.
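Since crop variety, fertilizer treatment and leaf type are categorical inputs, the input parameter set needs an encoding step before it reaches the network. A minimal sketch with invented category labels and data (the encoding pipeline and all values are assumptions, not the authors' setup):

```python
import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline

# Made-up records (variety, fertilizer treatment, leaf type, SPAD reading);
# the column names mirror the inputs in the abstract, the values are invented.
df = pd.DataFrame({
    "variety":   ["A", "B", "A", "C", "B", "C"] * 50,
    "treatment": ["N0", "N50", "N100", "N50", "N0", "N100"] * 50,
    "leaf":      ["young", "mature"] * 150,
    "spad":      np.random.default_rng(1).uniform(25, 55, 300),
})
# Invented ground truth: N content rises with SPAD, shifted by the treatment.
dose = df["treatment"].map({"N0": 0.0, "N50": 0.3, "N100": 0.6})
y = 0.05 * df["spad"] + dose + np.random.default_rng(2).normal(0, 0.1, 300)

# One-hot encode the categorical inputs, scale the numeric SPAD reading.
encode = ColumnTransformer([
    ("cats", OneHotEncoder(), ["variety", "treatment", "leaf"]),
    ("num", StandardScaler(), ["spad"]),
])
model = make_pipeline(
    encode,
    MLPRegressor(hidden_layer_sizes=(12,), max_iter=3000, random_state=0),
)
model.fit(df, y)
print("Predicted N content:", model.predict(df.head(3)))
```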

Relevance:

100.00%

Publisher:

Abstract:

The accurate identification of the features of dynamical grounding systems is extremely important for defining the operational safety and proper functioning of electric power systems. Several experimental tests and theoretical investigations have been carried out to obtain characteristics and parameters associated with grounding techniques. Grounding systems involve many nonlinear parameters. This paper describes a novel approach for mapping the characteristics of dynamical grounding systems using artificial neural networks. The network acts as an identifier of structural features of the grounding processes, so that output parameters can be estimated and generalized from an input parameter set. The results obtained by the network are compared with other approaches also used to model grounding systems.
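One classical nonlinearity such an identifier must capture is the drop in grounding resistance under impulse currents caused by soil ionization. The sketch below uses a common engineering approximation, R(i) = R0 / sqrt(1 + i/Ig); the choice of formula and all numbers are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def dynamic_grounding_resistance(i, r0, ig):
    """Impulse grounding resistance with soil ionization.

    A widely used engineering approximation: R(i) = R0 / sqrt(1 + i/Ig),
    where R0 is the low-current resistance and Ig the critical current
    at which soil ionization becomes significant.
    """
    return r0 / np.sqrt(1.0 + i / ig)

# Illustrative values: a 10-ohm electrode with a 30 kA critical current.
currents = np.array([1e3, 5e3, 10e3, 30e3, 100e3])  # [A]
for i, r in zip(currents, dynamic_grounding_resistance(currents, 10.0, 30e3)):
    print(f"I = {i/1e3:6.1f} kA  ->  R = {r:5.2f} ohm")
# Input/output pairs (i, R) like these are the kind of nonlinear
# relationship the neural identifier in the paper generalizes from.
```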

Relevance:

100.00%

Publisher:

Abstract:

The accurate identification of the nitrogen content in crop plants is extremely important, since it involves economic aspects and environmental impacts. Several experimental tests have been carried out to obtain characteristics and parameters associated with the health and growth of plants. Nitrogen content identification involves many nonlinear parameters and complex mathematical models. This paper describes a novel approach for the identification of nitrogen content through the spectral reflectance of plant leaves using artificial neural networks. The network acts as an identifier of relationships among soil pH, fertilizer treatment, spectral reflectance, and the nitrogen content in the plants, so that nitrogen content can be estimated and generalized from an input parameter set. This approach can form the basis for the development of an accurate real-time nitrogen applicator.
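Spectral reflectance typically enters such a network either as raw band values or as normalized-difference indices derived from them. A minimal sketch with invented readings and wavelengths (the index and all values are assumptions):

```python
import numpy as np

def normalized_difference(r_a, r_b):
    """Generic normalized-difference index, (a - b) / (a + b)."""
    return (r_a - r_b) / (r_a + r_b)

# Invented leaf reflectance readings at two bands; nitrogen-sensitive
# indices typically contrast a near-infrared band with a red band.
r_nir = np.array([0.48, 0.52, 0.45])   # ~800 nm (illustrative)
r_red = np.array([0.08, 0.05, 0.11])   # ~670 nm (illustrative)

index = normalized_difference(r_nir, r_red)
print("Reflectance index per leaf:", index)
# Each index value would join soil pH and fertilizer treatment in the
# input parameter set from which the network estimates nitrogen content.
```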

Relevance:

100.00%

Publisher:

Abstract:

The energy flow, dE/dη, is studied at large pseudorapidities in proton-proton collisions at the LHC, for centre-of-mass energies of 0.9 and 7 TeV. The measurements are made using the CMS detector in the pseudorapidity range 3.15 < |η| < 4.9, for both minimum-bias events and events with at least two high-momentum jets. The data are compared to various pp Monte Carlo event generators whose theoretical models and input parameter values are sensitive to the energy-flow measurements. Inclusion of multiple-parton interactions in the Monte Carlo event generators is found to improve the description of the energy-flow measurements.
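As a sketch of how such an energy-flow observable is formed, the snippet below computes η = -ln tan(θ/2) for toy particles and sums their energy in η bins over the quoted acceptance; the toy distributions are assumptions, not CMS data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy final-state particles: polar angle theta and energy E (illustrative).
theta = rng.uniform(0.015, 0.09, 5000)   # forward region [rad]
energy = rng.exponential(20.0, 5000)     # [GeV]

eta = -np.log(np.tan(theta / 2.0))       # pseudorapidity

# Energy flow: sum the energy in eta bins and divide by the bin width,
# restricted to the analysis acceptance 3.15 < eta < 4.9.
bins = np.linspace(3.15, 4.9, 8)
e_sum, edges = np.histogram(eta, bins=bins, weights=energy)
de_deta = e_sum / np.diff(edges)

for lo, hi, v in zip(edges[:-1], edges[1:], de_deta):
    print(f"{lo:4.2f} < eta < {hi:4.2f}: dE/deta = {v:8.1f} GeV")
```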

Relevance:

100.00%

Publisher:

Abstract:

The paper describes a novel neural model to estimate electrical losses in transformers during the manufacturing phase. The network acts as an identifier of structural features of the electrical loss process, so that output parameters can be estimated and generalized from an input parameter set. The model was trained and assessed with experimental data taking into account core losses, copper losses, resistance, current, and temperature. The results obtained in the simulations show that the developed technique can be used as an alternative tool to make the analysis of electrical losses in distribution transformers more appropriate with regard to the manufacturing process. Thus, this research has led to an improvement in the rational use of energy.
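The loss quantities named above have simple physical anchors: copper losses scale as I²R, while core losses are approximately load-independent. A sketch of that baseline decomposition with illustrative figures (the constant-core-loss simplification and all numbers are assumptions):

```python
def transformer_losses(i_load, r_winding, p_core):
    """Total transformer losses split into copper and core contributions.

    Copper (load) losses: P_cu = I^2 * R. Core (no-load) losses P_core are
    treated as constant here. All values below are illustrative.
    """
    p_cu = i_load**2 * r_winding
    return p_cu, p_core, p_cu + p_core

# Illustrative distribution-transformer figures: 0.04-ohm winding
# resistance and 300 W of core losses.
for i in (50.0, 100.0, 150.0):                       # load current [A]
    p_cu, p_fe, total = transformer_losses(i, 0.04, 300.0)
    print(f"I = {i:5.1f} A: copper = {p_cu:7.1f} W, "
          f"core = {p_fe:5.1f} W, total = {total:7.1f} W")
# The neural model generalizes this mapping directly from measured data,
# without assuming the constant-core-loss simplification made above.
```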

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a novel approach for mapping lightning models using artificial neural networks. The networks act as identifiers of structural features of the lightning models, so that output parameters can be estimated and generalized from an input parameter set. Simulation examples are presented to validate the proposed approach. More specifically, the neural networks are used to compute the electric field intensity and the critical disruptive voltage, taking into account several atmospheric and structural factors, such as pressure, temperature, humidity, distance between phases, height of bus bars, and waveforms. A comparative analysis with other approaches is also provided to illustrate this new methodology.
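For one of the computed quantities, the critical disruptive voltage, a classical closed form exists (Peek's formula), in which the atmospheric factors from the abstract enter through the air density factor. A sketch with textbook constants and illustrative line geometry; the paper's networks learn this dependence from data rather than from the formula:

```python
import math

def air_density_factor(pressure_cmhg, temp_c):
    """delta = 3.92 * b / (273 + t), with b in cmHg and t in Celsius."""
    return 3.92 * pressure_cmhg / (273.0 + temp_c)

def critical_disruptive_voltage(r_cm, spacing_cm, pressure_cmhg, temp_c,
                                m0=0.85, g0=21.1):
    """Peek's formula for the disruptive critical voltage (kV rms,
    phase to neutral): V = m0 * g0 * delta * r * ln(D / r).

    m0 is the conductor surface irregularity factor and g0 the disruptive
    gradient of air in kV/cm; the geometry and conditions used below
    are illustrative.
    """
    delta = air_density_factor(pressure_cmhg, temp_c)
    return m0 * g0 * delta * r_cm * math.log(spacing_cm / r_cm)

v = critical_disruptive_voltage(r_cm=1.0, spacing_cm=300.0,
                                pressure_cmhg=74.0, temp_c=25.0)
print(f"Critical disruptive voltage: {v:.1f} kV (phase to neutral)")
```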

Relevance:

100.00%

Publisher:

Abstract:

The Capacitated Arc Routing Problem (CARP) is a well-known NP-hard combinatorial optimization problem in which, given an undirected graph, the objective is to find a minimum-cost set of tours servicing a subset of required edges under vehicle capacity constraints. The CARP has numerous applications, such as street sweeping, garbage collection, mail delivery, school bus routing, and meter reading. A Greedy Randomized Adaptive Search Procedure (GRASP) with Path-Relinking (PR) is proposed and compared with other successful CARP metaheuristics. Features of this GRASP with PR include (i) reactive parameter tuning, in which the parameter value is stochastically selected, biased in favor of those values which have historically produced the best solutions on average; (ii) a statistical filter, which discards initial solutions if they are unlikely to improve the incumbent best solution; (iii) infeasible local search, in which high-quality, though infeasible, solutions are used to explore the boundary between the feasible and infeasible regions of the solution space; (iv) evolutionary PR, a recent trend in which the pool of elite solutions is progressively improved by successive relinking of pairs of elite solutions. Computational tests were conducted on a set of 81 instances, and the results reveal that the GRASP is very competitive, achieving the best overall deviation from lower bounds and the highest number of best solutions found.
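Feature (i), reactive parameter tuning, can be sketched as follows: candidate parameter values are drawn with probabilities proportional to the average quality of the solutions each value has produced so far. This is the standard reactive-GRASP scheme; the scoring rule and values below are illustrative, not necessarily the authors' exact implementation:

```python
import random

class ReactiveParameter:
    """Reactive selection of a GRASP parameter value.

    Candidate values are drawn with probability proportional to the
    historical average quality of the solutions each value produced.
    For minimization we score a value by 1 / (average cost), so cheaper
    solutions make their parameter value more likely to be reselected.
    """

    def __init__(self, candidates):
        self.candidates = list(candidates)
        self.cost_sum = {a: 0.0 for a in self.candidates}
        self.count = {a: 0 for a in self.candidates}

    def select(self):
        # Until every value has been tried at least once, sample uniformly.
        untried = [a for a in self.candidates if self.count[a] == 0]
        if untried:
            return random.choice(untried)
        weights = [self.count[a] / self.cost_sum[a] for a in self.candidates]
        return random.choices(self.candidates, weights=weights, k=1)[0]

    def record(self, value, cost):
        self.cost_sum[value] += cost
        self.count[value] += 1

# Usage inside a GRASP loop (construction and local search are
# problem-specific and replaced here by a stand-in objective).
alpha_picker = ReactiveParameter([0.1, 0.3, 0.5, 0.7, 0.9])
for it in range(100):
    alpha = alpha_picker.select()
    cost = 1000.0 / (1.0 + alpha) + random.gauss(0, 10)  # stand-in cost
    alpha_picker.record(alpha, cost)
print({a: alpha_picker.count[a] for a in alpha_picker.candidates})
```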

Relevance:

100.00%

Publisher:

Abstract:

Graduate Program in Agronomy (Energy in Agriculture) - FCA

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Recent Salmonella outbreaks have prompted the need for new processing options for peanut products. Traditional heating kill-steps have been shown to be ineffective in lipid-rich matrices such as peanut products. High pressure processing is one such option for peanut sauce, because the sauce has a high water activity, which has proved to be a large contributing factor in microbial lethality under high pressure processing. Four different formulations of peanut sauce were inoculated with a five-strain Salmonella cocktail and high pressure processed. The results indicate that increasing the pressure or the hold time increases the log10 reductions. The Weibull model was fitted to each kill curve, with the b and n values significantly optimized for each curve (p-value < 0.05). Most curves had an n parameter value less than 1, indicating that the population underwent a dramatic initial reduction but tailed off as time increased, leaving a small resistant population. Analysis of variance of the b and n parameters shows more significant differences between b parameters than between n parameters, meaning that most treatments showed a similar tailing effect but differed in the overall scale of reduction. Comparisons between peanut sauce formulations at the same pressure treatments indicate that increasing the amount of organic peanut butter within the sauce formulation decreases the log10 reductions. This could be due to a protective effect from the lipids in the peanut butter, or to other factors such as nutrient availability or water activity. Sauces pressurized at lower temperatures had decreased log10 reductions, indicating that cooler temperatures offered some protective effect. Log10 reductions exceeded 5 logs, indicating that high pressure processing may be a suitable option as a kill-step for Salmonella in industrial processing of peanut sauces. Future research should include high pressure processing of other peanut products with high water activities, such as sauces and syrups, as well as research to determine the effects of water activity and lipid composition within a food matrix such as peanut sauce.
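The model referred to is the Weibull inactivation model, log10(N0/N) = b * t^n, where n < 1 produces exactly the fast-then-tailing curves described. A fitting sketch on invented survivor data (the times and reductions are made up for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_reduction(t, b, n):
    """Weibull inactivation model: log10(N0/N) = b * t^n.

    n < 1 gives a fast initial kill that tails off, matching the
    resistant subpopulation behaviour described in the abstract.
    """
    return b * t**n

# Invented hold times [min] and log10 reductions for one pressure treatment.
t = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 10.0])
log_red = np.array([1.5, 2.0, 2.7, 3.7, 4.5, 5.6])

(b, n), _ = curve_fit(weibull_log_reduction, t, log_red, p0=(1.0, 1.0))
print(f"b = {b:.2f}, n = {n:.2f}")   # expect n < 1 for tailing curves
```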

Relevance:

100.00%

Publisher:

Abstract:

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, but their use in clustering has not yet been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. Attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, because their performance is compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying several conditions (e.g., the kind of margins (distinct, overlapping and nested) and the value of the dependence parameter), and the results are evaluated by means of different measures of performance. In light of the simulation results and of the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' in brief) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the written R functions with their output are given. The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and is compared with model-based clustering using different measures of performance, such as the percentage of well-identified numbers of clusters and the percentage of non-rejection of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Many peculiar characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
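A minimal sketch of the ingredient the CoClust builds on: scoring pseudo-observations with a copula log-likelihood and maximizing it over the dependence parameter. The bivariate Gaussian copula below is one convenient choice; the CoClust itself supports other families and embeds this criterion in an iterative allocation procedure:

```python
import numpy as np
from scipy import stats

def gaussian_copula_loglik(u, v, rho):
    """Log-likelihood of a bivariate Gaussian copula at pseudo-observations.

    u, v must lie in (0, 1); rho is the copula dependence parameter.
    """
    x, y = stats.norm.ppf(u), stats.norm.ppf(v)
    return np.sum(
        -0.5 * np.log(1 - rho**2)
        - (rho**2 * (x**2 + y**2) - 2 * rho * x * y) / (2 * (1 - rho**2))
    )

rng = np.random.default_rng(0)

# Simulate one dependent "cluster": a Gaussian copula sample with rho = 0.8.
rho_true = 0.8
z = rng.multivariate_normal([0, 0], [[1, rho_true], [rho_true, 1]], size=500)
u, v = stats.norm.cdf(z[:, 0]), stats.norm.cdf(z[:, 1])

# Profile the copula log-likelihood over rho, as an allocation criterion
# would: the maximizing rho recovers the dependence used to generate data.
grid = np.linspace(-0.95, 0.95, 191)
best = grid[np.argmax([gaussian_copula_loglik(u, v, r) for r in grid])]
print(f"rho maximizing the copula log-likelihood: {best:.2f}")
```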

Relevance:

100.00%

Publisher:

Abstract:

The astrophysical r-process (rapid neutron capture) is responsible for the nucleosynthesis of a large number of elements heavier than iron. Within the waiting-point model, the abundance distribution of the elements can be described by three nuclear and three stellar input parameters. For a given set of stellar parameters, the neutron separation energy (Sn) defines the r-process path. The beta-decay half-life (T1/2) of the nuclei on the r-process path determines the progenitor abundance and, once the neutron-emission probability (Pn) is taken into account, also the final abundance distribution. Of particular importance are the neutron-rich waiting-point isotopes; for example, the N=82 isotopes are responsible for the solar A~130 abundance peak. This work is concerned with the identification and the study of the decay properties of neutron-rich isotopes of manganese (A=61 to 69) and cadmium (A=130 to 132). Producing and detecting neutron-rich nuclides is a complicated and time-consuming process, but a successful one nonetheless. The main problem in this kind of experiment is the high isobaric background. For this reason, specially developed excitation schemes for manganese and cadmium were employed to ionize the desired isotopes in a chemically selective way by laser resonance ionization. At CERN/ISOLDE it was possible to determine new half-lives and Pn values in the mass range from 60Mn to 69Mn. In addition, partial decay schemes could be established for the first time for 64Mn and 66Mn. Some of the results turned out to be quite surprising, since they were not predicted by the QRPA model. Comparative studies of the overall trend of the level systematics of the even-even nuclei of Fe (Z=26), Zn (Z=30), Ge (Z=32), Cr (Z=24) and Ni (Z=28) demonstrated the disappearance of the spherical N=40 subshell and the existence of a new region of significant deformation, presumably centred at 64Cr. Likewise, studies of the level systematics of Cd (Z=48) and the comparison with Pd (Z=46), Xe (Z=54), Te (Z=52) and Sn (Z=50) show first indications of a shell quenching at N=82. The measurement of the half-life of 130Cd was improved, and the half-lives of 131Cd and 132Cd were determined for the first time. The new data can only be explained if forbidden transitions are included in the QRPA calculation; it is not sufficient to perform the calculation for pure Gamow-Teller decay.
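Under the waiting-point approximation, the statement that Sn defines the r-process path follows from (n,γ)⇄(γ,n) equilibrium, where the Saha equation fixes the abundance ratio of neighbouring isotopes. A sketch with partition functions set to 1 and illustrative stellar conditions (the specific temperature and neutron density are assumptions):

```python
import numpy as np

# Physical constants (cgs), with Sn and kT expressed in MeV.
HBAR = 1.0546e-27      # erg s
K_B = 1.3807e-16       # erg / K
M_U = 1.6605e-24       # g
MEV_PER_T9 = 0.08617   # kT in MeV at T = 1e9 K

def abundance_ratio(sn_mev, t9, n_n):
    """(n,gamma)<->(gamma,n) equilibrium ratio N(Z, A+1) / N(Z, A).

    Saha equation with the partition functions and the ((A+1)/A)^(3/2)
    factor set to 1: ratio = n_n * (2*pi*hbar^2 / (m_u*k*T))^(3/2)
    * exp(Sn / kT).
    """
    T = t9 * 1e9
    lam3 = (2.0 * np.pi * HBAR**2 / (M_U * K_B * T)) ** 1.5  # cm^3
    kt_mev = MEV_PER_T9 * t9
    return n_n * lam3 * np.exp(sn_mev / kt_mev)

# Illustrative stellar conditions: T9 = 1.35, n_n = 1e24 neutrons / cm^3.
# The r-process path runs where the ratio crosses 1, i.e. where Sn
# matches the stellar conditions (here around 2-3 MeV).
for sn in (1.0, 2.0, 2.7, 3.0, 4.0):
    ratio = abundance_ratio(sn, 1.35, 1e24)
    print(f"Sn = {sn:3.1f} MeV -> N(A+1)/N(A) = {ratio:9.2e}")
```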