942 results for Parallel programming model


Relevance: 30.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)


Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)


In this paper, a mathematical model that combines lot-sizing and cutting-stock problems, applied to the furniture industry, is presented. The model considers the usual decisions of lot-sizing problems as well as operational decisions related to programming the cutting machine. Two sets of a priori generated cutting patterns are used: industry cutting patterns and a class of n-group cutting patterns. A strategy to improve the utilization of the cutting machine is also tested. An optimization package was used to solve the model, and the computational results, obtained with real data from a furniture factory, show that a small subset of n-group cutting patterns provides good results and that cutting-machine utilization can be improved by the proposed strategy.
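The n-group patterns in the abstract above are generated a priori; as a much simpler, hypothetical illustration of how cutting patterns can be built from item demands, the sketch below uses a greedy first-fit-decreasing heuristic (not the paper's method, and with no lot-sizing dimension):

```python
from collections import Counter

def first_fit_patterns(demands, board_width):
    """Greedily pack demanded item widths onto boards (first-fit decreasing).

    Returns a list of cutting patterns, one Counter per opened board,
    mapping item width -> number of pieces cut from that board.
    """
    pieces = sorted(
        (w for w, qty in demands.items() for _ in range(qty)), reverse=True
    )
    patterns = []   # one Counter per opened board
    leftovers = []  # remaining usable width on each opened board
    for w in pieces:
        # place the piece on the first board where it still fits
        for i, rest in enumerate(leftovers):
            if w <= rest:
                patterns[i][w] += 1
                leftovers[i] -= w
                break
        else:
            # no board fits: open a new one
            patterns.append(Counter({w: 1}))
            leftovers.append(board_width - w)
    return patterns

# demand: 4 pieces of width 5 and 3 pieces of width 3, boards of width 10
patterns = first_fit_patterns({5: 4, 3: 3}, 10)
print(len(patterns))  # number of boards used
```

A real n-group pattern set would further restrict each pattern to n groups of identical strips, which is what makes the cutting machine easier to program.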


Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)


A new mixed-integer linear programming (MILP) model is proposed to represent the plug-in electric vehicle (PEV) charging coordination problem in electrical distribution systems. The proposed model defines, for each division of the considered time period, the optimal charging schedule that minimizes the total energy costs. Moreover, a priority charging criterion is taken into account. The steady-state operation of the electrical distribution system, as well as the charging of the PEV batteries, is mathematically represented; furthermore, constraints on voltage, current, and power generation limits are included. The proposed mathematical model was applied to an electrical distribution system used in the specialized literature, and the results show that the model can be used to solve the PEV charging problem.
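The paper's MILP optimizes charging jointly with network operation; the sketch below only illustrates the scheduling idea with an invented greedy rule (priority order, cheapest slots first, a single feeder power limit), not the proposed model:

```python
def schedule_charging(vehicles, prices, p_limit, dt=1.0):
    """Greedy sketch: satisfy each vehicle's energy need in the cheapest
    time slots first, honouring a shared feeder power limit per slot.

    vehicles: list of (name, energy_needed_kwh, max_rate_kw, priority)
    prices:   energy price per time slot
    Returns ({name: [kW drawn in each slot]}, total_cost).
    """
    n = len(prices)
    slot_load = [0.0] * n  # total power already committed per slot
    plan = {}
    cost = 0.0
    # serve high-priority vehicles first
    for name, need, rate, _ in sorted(vehicles, key=lambda v: -v[3]):
        profile = [0.0] * n
        # fill cheapest slots first
        for t in sorted(range(n), key=lambda t: prices[t]):
            if need <= 1e-9:
                break
            p = min(rate, p_limit - slot_load[t], need / dt)
            if p > 0:
                profile[t] = p
                slot_load[t] += p
                need -= p * dt
                cost += p * dt * prices[t]
        plan[name] = profile
    return plan, cost
```

For example, two vehicles each needing 10 kWh at up to 7 kW, with a 10 kW feeder limit, end up sharing the two cheap slots, the higher-priority one taking the larger share.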


The purpose of this study was to compare linear and nonlinear programming models for feed formulation for maximum profit, considering the real variation in the prices of corn, soybean meal, and broilers from January 2008 to October 2009 in São Paulo State, Brazil. For the nonlinear formulation model, the following price scenarios were considered: (a) the minimum broiler price and the maximum corn and soybean meal prices of the period; (b) the mean broiler, corn, and soybean meal prices of the period; and (c) the maximum broiler price and the minimum corn and soybean meal prices of the period. For the linear formulation model, only the corn and soybean meal prices were considered. The Practical Program for Feed Formulation 2.0 was used to establish the diets. A total of 300 Cobb male chicks were randomly assigned to the 4 dietary treatments, with 5 replicate pens of 15 chicks each. The birds were fed a starter diet until 21 d and a grower diet from 22 to 42 d of age, with ad libitum access to feed and water, on a floor with wood shavings as litter. The broilers were raised in an environmentally controlled house. Body weight, body weight gain, feed intake, feed conversion ratio, and profitability (related to the price variation of broilers and ingredients) were obtained at 42 d of age. Broilers fed the diet formulated with the linear model presented the lowest feed intake and feed conversion ratio compared with broilers fed diets from the nonlinear formulation models. There were no significant differences in body weight and body weight gain among the treatments. Nevertheless, the profitability of the diets from the nonlinear model was significantly higher than that of the linear formulation model whenever corn and soybean meal prices were near or below their average values for the studied period, for any broiler price.
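A toy numerical sketch of the two objectives (least-cost versus profit-maximizing blending) is given below; the ingredient compositions, the protein requirement, and the weight-gain response are all invented for illustration and bear no relation to the study's actual diets:

```python
def best_blend(corn_price, soy_price, broiler_price, objective="profit"):
    """Toy comparison of linear (least-cost) vs profit-oriented blending.

    The diet is a corn/soy-meal mix (fractions summing to 1) that must
    supply at least 20% crude protein (corn ~8.5%, soy meal ~45% --
    rough, illustrative values). For 'profit', revenue uses a made-up
    linear response of weight gain to dietary protein.
    Returns (score, corn_fraction, soy_fraction).
    """
    best = None
    for i in range(0, 101):  # brute-force 1% grid over corn fraction
        corn = i / 100.0
        soy = 1.0 - corn
        protein = 0.085 * corn + 0.45 * soy
        if protein < 0.20:
            continue  # infeasible: protein requirement violated
        feed_cost = corn_price * corn + soy_price * soy
        if objective == "cost":
            score = -feed_cost  # least-cost: maximize negative cost
        else:
            gain = 2.0 + 1.5 * (protein - 0.20)  # invented response
            score = broiler_price * gain - feed_cost
        if best is None or score > best[0]:
            best = (score, corn, soy)
    return best
```

With cheap corn, the least-cost model pushes corn inclusion to the protein limit, while a high broiler price makes the profit model pay for extra protein, mirroring the study's qualitative finding.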


In this article, we introduce two new variants of the Assembly Line Worker Assignment and Balancing Problem (ALWABP) that allow parallelization of, and collaboration between, heterogeneous workers. These new approaches add complexity to the line design and assignment process but also provide greater flexibility, which may be particularly useful in practical situations where the aim is to progressively integrate slow or limited workers into conventional assembly lines. We present linear models and heuristic procedures for these two new problems. Computational results show the efficiency of the proposed approaches and the efficacy of the studied layouts in different situations. (C) 2012 Elsevier B.V. All rights reserved.
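As a hedged illustration of worker heterogeneity only (it ignores precedence constraints, stations, and the paper's collaboration variants), a greedy load-balancing heuristic over worker-dependent task times might look like:

```python
def greedy_assign(task_times):
    """Greedy sketch for assigning tasks to heterogeneous workers
    (a much-simplified relative of ALWABP heuristics).

    task_times[w][t] = time worker w needs for task t (None = incapable).
    Workers operate in parallel on disjoint task sets, so the cycle time
    is the maximum worker load. Tasks go, hardest first, to the worker
    whose resulting load is smallest.
    Returns (assignment list per worker, cycle time).
    """
    n_workers = len(task_times)
    n_tasks = len(task_times[0])
    loads = [0.0] * n_workers
    assignment = [[] for _ in range(n_workers)]
    # order tasks by their best capable time, longest first
    order = sorted(
        range(n_tasks),
        key=lambda t: -min(task_times[w][t] for w in range(n_workers)
                           if task_times[w][t] is not None),
    )
    for t in order:
        w = min(
            (w for w in range(n_workers) if task_times[w][t] is not None),
            key=lambda w: loads[w] + task_times[w][t],
        )
        assignment[w].append(t)
        loads[w] += task_times[w][t]
    return assignment, max(loads)
```

A `None` entry models a limited worker who cannot perform a task at all, the situation the new variants are designed to accommodate.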


This work addresses the problem of robust model predictive control (MPC) of systems with model uncertainty, considering the case of zone control of multivariable stable systems with multiple time delays. The usual way to deal with this kind of problem is to include a non-linear cost constraint in the control problem; the control action is then obtained at each sampling time as the solution to a non-linear programming (NLP) problem, which can be computationally expensive for high-order systems. Here, the robust MPC problem is formulated as a linear matrix inequality (LMI) problem that can be solved in real time with a fraction of the computational effort. The proposed approach is compared with conventional robust MPC and tested through the simulation of a reactor system from the process industry.
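The LMI formulation itself is beyond a short snippet, but the zone-control idea (the controller acts only when the prediction leaves a target zone) can be sketched for a scalar plant with a closed-form one-step objective; this is an invented miniature, not the paper's robust controller:

```python
def zone_mpc_step(y, y_low, y_high, a, b, weight=0.1):
    """One step of a toy zone-control MPC for the scalar plant
    y[k+1] = a*y[k] + b*u[k].

    If the one-step free response stays inside the zone, apply no
    control; otherwise steer toward the nearest zone boundary while
    penalising control effort:
        min_u (a*y + b*u - target)**2 + weight*u**2
    whose closed-form minimizer is u = b*(target - a*y) / (b**2 + weight).
    """
    y_free = a * y  # one-step-ahead prediction with u = 0
    if y_low <= y_free <= y_high:
        return 0.0  # inside the zone: do nothing
    target = y_low if y_free < y_low else y_high
    return b * (target - a * y) / (b * b + weight)
```

The real problem replaces this scalar algebra with a multivariable, time-delayed, uncertain model, which is what makes the NLP expensive and the LMI reformulation attractive.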


Objectives. The C-Factor has been used widely to rationalize the changes in shrinkage stress occurring at tooth/resin-composite interfaces. Experimentally, such stresses have been measured in a uniaxial direction between opposed parallel walls; the situation of adjoining cavity walls has been neglected. The aim was to investigate the hypothesis that, within stylized model rectangular cavities of constant volume and wall thickness, the interfacial shrinkage stress at the adjoining cavity walls increases steadily as the C-Factor increases. Methods. Eight 3D-FEM restored Class I 'rectangular cavity' models were created with MSC.PATRAN/MSC.Marc (r2-2005) and subjected to 1% shrinkage, while keeping both the volume (20 mm³) and the wall thickness (2 mm) constant but varying the C-Factor (1.9-13.5). An adhesive contact between the composite and the teeth was incorporated. Polymerization shrinkage was simulated by analogy with thermal contraction. Principal stresses and strains were calculated. Peak values of maximum principal (MP) and maximum shear (MS) stresses from the different walls were displayed graphically as a function of C-Factor, and the association between stress peaks and C-Factor was evaluated by the Pearson correlation. Results. The hypothesis was rejected: there was no clear increase of stress peaks with C-Factor. The stress peaks, particularly those expressed as MP and MS, varied only slightly with increasing C-Factor. Lower stress peaks were present at the pulpal floor than at the axial walls. In general, MP and MS were similar when the axial wall dimensions were similar. The Pearson coefficient only expressed associations for the maximum principal stress at the ZX wall and the Z axis. Significance. Increasing the C-Factor did not increase the calculated stress peaks in model rectangular Class I cavity walls. (C) 2011 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.
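The C-Factor itself is elementary to compute for such stylized cavities: it is the ratio of bonded to unbonded surface area. A minimal sketch, assuming a rectangular Class I cavity with five bonded walls and a free top opening:

```python
def c_factor(length, width, depth):
    """Configuration factor of a rectangular Class I cavity:
    bonded surface area / unbonded (free) surface area.

    Bonded surfaces: the floor plus the four cavity walls;
    the single free surface is the top opening.
    """
    bonded = length * width + 2 * depth * (length + width)
    free = length * width
    return bonded / free
```

A cubic cavity gives the familiar value of 5 (five bonded walls against one free surface), while shallow, wide cavities of the same volume give much lower values, which is how the study varied C-Factor at constant volume.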


We study a strongly interacting "quantum dot 1" and a weakly interacting "dot 2" connected in parallel to metallic leads. Gate voltages can drive the system between Kondo-quenched and non-Kondo free-moment phases separated by Kosterlitz-Thouless quantum phase transitions. Away from the immediate vicinity of the quantum phase transitions, the physical properties retain signatures of first-order transitions found previously to arise when dot 2 is strictly noninteracting. As interactions in dot 2 become stronger relative to the dot-lead coupling, the free moment in the non-Kondo phase evolves smoothly from an isolated spin-one-half in dot 1 to a many-body doublet arising from the incomplete Kondo compensation by the leads of a combined dot spin-one. These limits, which feature very different spin correlations between dot and lead electrons, can be distinguished by weak-bias conductance measurements performed at finite temperatures.


Ischemia/reperfusion (I/R) injury remains a major cause of graft dysfunction, impacting short- and long-term follow-up. Hyperbaric oxygen (HBO) therapy, through plasma oxygen transport, is currently used as an alternative treatment for ischemic tissues. The aim of this study was to analyze the effects of HBO in a kidney I/R injury model in rats, in reducing the harmful effects of I/R. The renal I/R model was obtained by occluding both renal pedicles with nontraumatic vascular clamps for 45 minutes, followed by 48 hours of reperfusion. HBO therapy was delivered in a hyperbaric chamber (2.5 atmospheres absolute); animals underwent two 60-minute sessions, at 6 hours and 20 hours after the initiation of reperfusion. Male Wistar rats (n = 38) were randomized into four groups: Sham, sham-operated rats; Sham+HBO, sham-operated rats exposed to HBO; I/R, animals submitted to I/R; and I/R+HBO, I/R rats exposed to HBO. Blood, urine, and kidney tissue were collected for biochemical, histologic, and immunohistochemical analyses. The histopathological evaluation of the ischemic injury used a grading scale of 0 to 4. HBO attenuated renal dysfunction after ischemia, characterized by a significant decrease in blood urea nitrogen (BUN), serum creatinine, and proteinuria in the I/R+HBO group compared with I/R alone. In parallel, tubular function was improved, resulting in significantly lower fractional excretions of sodium and potassium. Kidney sections from the I/R+HBO group showed significantly lower acute kidney injury scores compared with the I/R group. HBO treatment significantly diminished proliferative activity in I/R (P < .05). There was no significant difference in macrophage infiltration or heme oxygenase-1 expression. In conclusion, HBO attenuated renal dysfunction in a kidney I/R injury model, with decreases in BUN, serum creatinine, proteinuria, and fractional excretion of sodium and potassium, associated with reduced histological damage.


Abstract Background An important challenge for transcript counting methods such as Serial Analysis of Gene Expression (SAGE), "Digital Northern" or Massively Parallel Signature Sequencing (MPSS) is to carry out statistical analyses that account for the within-class variability, i.e., variability due to the intrinsic biological differences among sampled individuals of the same class, and not only variability due to technical sampling error. Results We introduce a Bayesian model that accounts for the within-class variability by means of a mixture distribution. We show that the previously available approaches of aggregation in pools ("pseudo-libraries") and the Beta-Binomial model are particular cases of the mixture model. We illustrate our method with a brain tumor vs. normal comparison using SAGE data from public databases, showing examples of tags regarded as differentially expressed with high significance if the within-class variability is ignored, but clearly not so significant if one accounts for it. Conclusion Using available information about biological replicates, one can transform a list of candidate transcripts showing differential expression into a more reliable one. Our method is freely available, under the GPL/GNU copyleft, through a user-friendly web-based online tool or as R language scripts at a supplemental website.
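The Beta-Binomial model mentioned above can be written directly with log-gamma functions. The sketch below (parameter values are illustrative only) shows how it places far more mass in the tails than a binomial with the same mean, which is why ignoring within-class variability overstates significance:

```python
from math import lgamma, exp

def betabinom_pmf(k, n, a, b):
    """Beta-binomial PMF via log-gamma: a binomial whose success
    probability is itself Beta(a, b)-distributed, capturing the extra
    between-individual (within-class) variability."""
    return exp(
        lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
        + lgamma(k + a) + lgamma(n - k + b) - lgamma(n + a + b)
        + lgamma(a + b) - lgamma(a) - lgamma(b)
    )

def binom_pmf(k, n, p):
    """Plain binomial PMF, for comparison (same mean when p = a/(a+b))."""
    return exp(
        lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
    ) * p**k * (1 - p)**(n - k)
```

With a = 2, b = 3 the beta-binomial has the same mean as a binomial with p = 0.4, yet an extreme count such as k = n is orders of magnitude more probable under it, so a tag can look "significant" under the binomial while being unremarkable once overdispersion is modelled.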


Abstract Background Over the last years, a number of researchers have investigated how to improve the reuse of crosscutting concerns. New possibilities have emerged with the advent of aspect-oriented programming, and many frameworks have been designed around the abstractions provided by this new paradigm. We call this type of framework a Crosscutting Framework (CF), as it usually encapsulates a generic and abstract design of one crosscutting concern. However, most of the proposed CFs employ white-box strategies in their reuse process, requiring mainly two technical skills: (i) knowing syntax details of the programming language employed to build the framework and (ii) being aware of the architectural details of the CF and its internal nomenclature. Another problem is that the reuse process can only be initiated once the development process reaches the implementation phase, preventing it from starting earlier. Method In order to solve these problems, we present in this paper a model-based approach for reusing CFs which shields application engineers from technical details, letting them concentrate on what the framework really needs from the application under development. To support our approach, two models are proposed: the Reuse Requirements Model (RRM) and the Reuse Model (RM). The former is used to describe the framework structure, and the latter is in charge of supporting the reuse process. As soon as the application engineer has filled in the RM, the reuse code can be generated automatically. Results We also present the results of two comparative experiments using two versions of a Persistence CF: the original one, whose reuse process is based on writing code, and the new one, which is model-based. The first experiment evaluated productivity during the reuse process, and the second evaluated the effort of maintaining applications developed with both CF versions. The results show an improvement of 97% in productivity; however, little difference was perceived regarding the effort of maintaining the applications. Conclusion By using the approach presented herein, it was possible to conclude that: (i) the instantiation of CFs can be automated, and (ii) developer productivity is improved when a model-based instantiation approach is used.
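A minimal, hypothetical sketch of the idea of generating reuse code from a filled-in Reuse Model (all class, field, and aspect names below are invented; the actual RM/RRM notation is not shown in the abstract):

```python
def generate_reuse_code(reuse_model):
    """Hypothetical RM-driven code generation: the application engineer
    fills in a small 'Reuse Model' (here just a dict) and the binding
    boilerplate is emitted, instead of being written by hand in the
    framework's internal nomenclature."""
    lines = [f"public aspect {reuse_model['aspect_name']} extends PersistenceCF {{"]
    for entity, fields in reuse_model["persistent_entities"].items():
        joined = ", ".join(fields)
        lines.append(f"    declare @type: {entity}: @Persistent({joined});")
    lines.append("}")
    return "\n".join(lines)

# the engineer only supplies domain knowledge, no CF internals
rm = {
    "aspect_name": "ShopPersistence",
    "persistent_entities": {"Customer": ["id", "name"], "Order": ["id"]},
}
print(generate_reuse_code(rm))
```

The point of the sketch is the division of labour: the dict is all the engineer touches, while knowledge of the CF's syntax and architecture lives only in the generator.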


Parallel kinematic structures are considered very adequate architectures for positioning and orienting the tools of robotic mechanisms. However, developing dynamic models for this kind of system is sometimes a difficult task: the direct application of traditional robotics methods for modelling and analysing such systems usually does not lead to efficient and systematic algorithms. This work addresses this issue by presenting a modular approach to generating the dynamic model and showing how, through some convenient modifications, these methods can be made more applicable to parallel structures as well. Kane's formulation for obtaining the dynamic equations is shown to be one of the easiest ways to deal with redundant coordinates and kinematic constraints, so that a suitable choice of a set of coordinates allows the remainder of the modelling procedure to be computer-aided. The advantages of this approach are discussed in the modelling of a 3-dof parallel asymmetric mechanism.
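For a single-dof example, Kane's formulation collapses to one equation; the sketch below hard-codes the resulting pendulum dynamics (the symbolic derivation is summarized in the docstring) and is only the skeleton of the procedure, without the constraint handling a parallel mechanism would need:

```python
from math import sin, cos

def simulate_pendulum(q0, u0, g=9.81, l=1.0, dt=1e-4, steps=10000):
    """For a planar point-mass pendulum with generalized speed u = q',
    Kane's equation F + F* = 0 reads
        -m*g*l*sin(q) - m*l**2*u' = 0,
    i.e. u' = -(g/l)*sin(q). Integrated below with semi-implicit Euler,
    which keeps the energy error bounded. Returns (q, u) after
    steps*dt seconds.
    """
    q, u = q0, u0
    for _ in range(steps):
        u += dt * (-(g / l) * sin(q))  # generalized speed update
        q += dt * u                    # kinematic equation q' = u
    return q, u
```

For a 3-dof parallel mechanism the same recipe applies, but with several generalized speeds plus the loop-closure constraint equations that the paper's modular approach is designed to organize.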


The accuracy and performance of current variational optical flow methods have increased considerably in recent years, but the complexity of these techniques is high and their implementation requires care. The aim of this work is to present a comprehensible implementation of recent variational optical flow methods. We start with an energy model that relies on brightness and gradient constancy terms and a flow-based smoothness term. We minimize this energy model and derive an efficient implicit numerical scheme. In the experimental results, we evaluate the accuracy and performance of this implementation on the Middlebury benchmark database and show that it is competitive with current methods in the literature. To increase performance, we use a simple strategy to parallelize the execution on multi-core processors.
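The brightness-constancy term that the energy model starts from can be illustrated in one dimension: assuming a single global displacement, no smoothness term is needed, and the linearized term yields a least-squares estimate. A toy sketch, not the paper's implementation:

```python
from math import sin

def estimate_shift(f, g):
    """Least-squares estimate of a small constant displacement v between
    two 1-D signals, from the linearised brightness-constancy residual
        f'(x)*v + (g(x) - f(x)) ~= 0
    at every sample (the per-pixel data term of the variational models,
    minus the smoothness term, since v is shared by all samples)."""
    num = den = 0.0
    for i in range(1, len(f) - 1):
        fx = 0.5 * (f[i + 1] - f[i - 1])  # central-difference derivative
        ft = g[i] - f[i]                  # temporal difference
        num += -fx * ft
        den += fx * fx
    return num / den if den else 0.0

f = [sin(0.1 * i) for i in range(100)]
g = [sin(0.1 * (i - 0.3)) for i in range(100)]  # f shifted by 0.3 samples
v = estimate_shift(f, g)
```

The full variational methods replace the single global `v` with a dense flow field, which is exactly why they need the flow-based smoothness term and the implicit scheme described above.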