38 results for attori, concorrenza, COOP, Akka, benchmark


Relevance:

10.00%

Publisher:

Abstract:

A procedure to evaluate mine rehabilitation practices during the operational phase was developed and validated. It is based on a comparison of actually observed or documented practices with internationally recommended best practices (BP). A set of 150 BP statements was derived from international guides in order to establish the benchmark. The statements are arranged in six rehabilitation programs under three categories: (1) planning, (2) operational and (3) management, corresponding to the adoption of the plan-do-check-act management systems model for mine rehabilitation. The procedure consists of (i) performing technical inspections guided by a series of field forms containing BP statements; (ii) classifying the evidence into five categories; and (iii) calculating conformity indexes and levels. For testing and calibration purposes, the procedure was applied to nine limestone quarries and conformity indexes were calculated for the rehabilitation programs in each quarry. Most quarries featured poor planning practices; operational practices reached high conformity levels in 50% of the cases, and management practices scored moderate conformity. Despite all quarries being ISO 14001 certified, their management systems pay little attention to issues pertaining to land rehabilitation and biodiversity. The best results were achieved by a quarry whose expansion was recently submitted to the environmental impact assessment process, suggesting that public scrutiny may play a positive role in enhancing rehabilitation practices. Conformity indexes and levels can be used to chart the evolution of rehabilitation practices at regular intervals, to establish corporate goals and to communicate with stakeholders. (C) 2010 Elsevier Ltd. All rights reserved.
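
A minimal sketch of step (iii), assuming a simple weighted scoring scheme; the five evidence categories and the weights below are illustrative guesses, since the abstract does not give the paper's actual scoring rules.

```python
# Illustrative conformity index over best-practice (BP) statements.
# Category names and weights are assumptions for illustration only.

EVIDENCE_SCORE = {
    "full conformity": 1.0,      # hypothetical weights
    "partial conformity": 0.5,
    "non-conformity": 0.0,
    "not verifiable": None,      # excluded from the index
    "not applicable": None,
}

def conformity_index(classified_statements):
    """Mean score over the BP statements that could actually be assessed."""
    scores = [EVIDENCE_SCORE[c] for c in classified_statements]
    scores = [s for s in scores if s is not None]
    return sum(scores) / len(scores) if scores else float("nan")

# One rehabilitation program at one quarry: evidence class per BP statement.
program = ["full conformity", "partial conformity", "non-conformity",
           "full conformity", "not applicable"]
print(f"conformity index: {conformity_index(program):.2f}")  # 0.62
```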

Relevance:

10.00%

Publisher:

Abstract:

The computational design of a composite where the properties of its constituents change gradually within a unit cell can be successfully achieved by means of a material design method that combines topology optimization with homogenization. This is an iterative numerical method, which leads to changes in the composite material unit cell until desired properties (or performance) are obtained. This method has been applied to several types of materials in the last few years. In this work, the objective is to extend the material design method to obtain functionally graded material architectures, i.e. materials that are graded at the local level (e.g. microstructural level). Consistent with this goal, a continuum distribution of the design variable inside the finite element domain is considered to represent a fully continuous material variation during the design process. Thus, the topology optimization naturally leads to a smoothly graded material system. To illustrate the theoretical and numerical approaches, numerical examples are provided. The homogenization method is verified by considering one-dimensional material gradation profiles for which analytical solutions for the effective elastic properties are available. The verification of the homogenization method is extended to two dimensions considering a trigonometric material gradation, and a material variation with discontinuous derivatives. These are also used as benchmark examples to verify the optimization method for functionally graded material cell design. Finally, the influence of material gradation on extreme materials is investigated, which includes materials with near-zero shear modulus, and materials with negative Poisson's ratio.
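
The 1D verification described above can be illustrated with a short sketch: for a material graded along one direction and loaded in series, the effective modulus is the harmonic mean of the point-wise modulus, which can be checked against a closed-form answer. The exponential gradation profile below is an assumed example, not necessarily one of the paper's benchmark profiles.

```python
# Harmonic-mean homogenization of a 1D graded material, checked against the
# analytical result for an assumed exponential stiffness profile E(x).
import numpy as np

E0, beta = 1.0, 2.0
x = np.linspace(0.0, 1.0, 100_001)
E = E0 * np.exp(beta * x)              # graded stiffness profile E(x)

# Numerical homogenization: harmonic mean of E over the unit cell.
E_eff_num = 1.0 / np.mean(1.0 / E)

# Closed form: 1 / (int_0^1 exp(-beta*x)/E0 dx) = beta*E0 / (1 - exp(-beta))
E_eff_ana = beta * E0 / (1.0 - np.exp(-beta))

print(f"numerical  : {E_eff_num:.6f}")
print(f"analytical : {E_eff_ana:.6f}")  # the two agree closely
```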

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the minimization of the mean absolute deviation from a common due date in a two-machine flowshop scheduling problem. We present heuristics that use an algorithm, based on proposed properties, which obtains an optimal schedule for a given job sequence. A new set of benchmark problems is presented with the purpose of evaluating the heuristics. Computational experiments show that the developed heuristics outperform results found in the literature for problems of up to 500 jobs. (C) 2007 Elsevier Ltd. All rights reserved.
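
As a sketch of the objective involved (not the paper's optimal-timing algorithm), the following computes the mean absolute deviation of completion times from a common due date for one fixed job sequence in a two-machine flowshop, scheduled without inserted idle time; the instance data are made up.

```python
# Mean absolute deviation (MAD) from a common due date in a two-machine
# flowshop, for a fixed job sequence with no inserted idle time.
def flowshop_mad(p1, p2, due_date):
    """p1, p2: processing times on machines 1 and 2, in sequence order."""
    c1 = c2 = 0.0
    completions = []
    for a, b in zip(p1, p2):
        c1 += a                    # machine 1 finishes this job
        c2 = max(c1, c2) + b       # machine 2 starts when both are ready
        completions.append(c2)
    return sum(abs(c - due_date) for c in completions) / len(completions)

p1 = [3, 2, 4, 1]                  # hypothetical instance
p2 = [2, 5, 1, 3]
print(flowshop_mad(p1, p2, due_date=10))   # completions 5,10,11,14 -> MAD 2.5
```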

Relevance:

10.00%

Publisher:

Abstract:

In this paper, we consider a real-life heterogeneous fleet vehicle routing problem with time windows and split deliveries that arises in a major Brazilian retail group. A single depot serves 519 of the group's stores, distributed across 11 Brazilian states. To find good solutions to this problem, we propose heuristics for constructing initial solutions and a scatter search (SS) approach. The resulting solutions are then compared with the routes actually covered by the company. Our results show that the total distribution cost can be reduced significantly when such methods are used. Experimental testing with benchmark instances is used to assess the merit of our proposed procedure. (C) 2008 Published by Elsevier B.V.
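
A minimal sketch of how a candidate solution to such a problem might be costed: each route pays its vehicle type's fixed cost plus a per-distance running cost. The vehicle types, cost figures and coordinates are illustrative assumptions, not the company's data.

```python
# Costing one candidate solution of a heterogeneous-fleet VRP.
import math

VEHICLE = {"small": (100.0, 1.0), "large": (180.0, 1.4)}  # (fixed, cost/km)

def route_length(depot, stops):
    """Depot -> stops in order -> back to depot, Euclidean distances."""
    pts = [depot] + stops + [depot]
    return sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))

def solution_cost(depot, routes):
    total = 0.0
    for vtype, stops in routes:
        fixed, per_km = VEHICLE[vtype]
        total += fixed + per_km * route_length(depot, stops)
    return total

depot = (0.0, 0.0)
routes = [("small", [(10.0, 0.0), (10.0, 5.0)]),   # hypothetical routes
          ("large", [(-8.0, 6.0)])]
print(f"total distribution cost: {solution_cost(depot, routes):.1f}")
```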

Relevance:

10.00%

Publisher:

Abstract:

Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions: the optimal number of hub nodes, their locations and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP), hub nodes have no capacity constraints and non-hub nodes must be assigned to exactly one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic as well as a two-stage integrated tabu search heuristic to solve this problem. With the multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the location and allocation parts of the problem. Computational experiments using typical benchmark problems (Civil Aeronautics Board (CAB) and Australian Post (AP) data sets) as well as new and modified instances show that our approaches consistently return the optimal or best-known results in very short CPU times, thus allowing the possibility of efficiently solving larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances of up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs. Published by Elsevier Ltd.
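
The USAHLP objective that such heuristics evaluate can be sketched as follows, assuming the usual single-allocation cost structure with a discount factor alpha on inter-hub links; the toy distances, flows and parameters below are stand-ins in the spirit of the CAB/AP instances, not actual benchmark data.

```python
# Evaluating one candidate USAHLP solution: single allocation of every node
# to a hub, discounted inter-hub transport, fixed cost per opened hub.
import itertools

def usahlp_cost(dist, flow, alloc, hubs, alpha, fixed_cost):
    n = len(dist)
    transport = sum(
        flow[i][j] * (dist[i][alloc[i]]                  # collection to hub
                      + alpha * dist[alloc[i]][alloc[j]] # discounted hub-hub
                      + dist[alloc[j]][j])               # distribution
        for i, j in itertools.product(range(n), range(n)))
    return transport + fixed_cost * len(hubs)

dist = [[0, 4, 7, 9], [4, 0, 5, 6], [7, 5, 0, 3], [9, 6, 3, 0]]
flow = [[0, 2, 1, 1], [2, 0, 2, 1], [1, 2, 0, 3], [1, 1, 3, 0]]
alloc = {0: 0, 1: 0, 2: 2, 3: 2}   # nodes 0,1 -> hub 0; nodes 2,3 -> hub 2
print(usahlp_cost(dist, flow, alloc, hubs={0, 2}, alpha=0.6, fixed_cost=20.0))
```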

Relevance:

10.00%

Publisher:

Abstract:

At present, the cement industry generates approximately 5% of the world's anthropogenic CO₂ emissions. This share is expected to increase, since demand for cement-based products is forecast to multiply by a factor of 2.5 within the next 40 years, and the traditional strategies to mitigate emissions, focused on the production of cement, will not be capable of compensating for such growth. Therefore, additional mitigation strategies are needed, including an increase in the efficiency of cement use. This paper proposes indicators for measuring cement use efficiency, presents a benchmark based on literature data and discusses potential gains in efficiency. The binder intensity (bi) index measures the amount of binder (kg m⁻³) necessary to deliver 1 MPa of mechanical strength, and consequently expresses the efficiency of using binder materials. The CO₂ intensity index (ci) allows the global warming potential of concrete formulations to be estimated. Research benchmarks show that bi ≈ 5 kg m⁻³ MPa⁻¹ is feasible and has already been achieved for concretes above 50 MPa. However, concretes with lower compressive strengths have binder intensities varying between 10 and 20 kg m⁻³ MPa⁻¹. These values can be a result of the minimum cement content established in many standards and reveal a significant potential for performance gains. In addition, combinations of low bi and ci are shown to be feasible. (c) 2010 Elsevier Ltd. All rights reserved.
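
A worked example of the two indices as defined in the abstract; the concrete mix and the emission factor per kilogram of binder are illustrative assumptions.

```python
# Binder intensity bi = binder content / compressive strength, and CO2
# intensity ci = embodied CO2 / compressive strength, per the abstract.
def binder_intensity(binder_kg_per_m3, strength_mpa):
    return binder_kg_per_m3 / strength_mpa

def co2_intensity(binder_kg_per_m3, strength_mpa, kg_co2_per_kg_binder):
    return binder_kg_per_m3 * kg_co2_per_kg_binder / strength_mpa

# A hypothetical 50 MPa concrete with 280 kg/m3 of binder and an assumed
# emission factor of 0.8 kg CO2 per kg of binder:
bi = binder_intensity(280.0, 50.0)
ci = co2_intensity(280.0, 50.0, kg_co2_per_kg_binder=0.8)
print(f"bi = {bi:.1f} kg/m3 per MPa, ci = {ci:.1f} kg CO2/m3 per MPa")  # 5.6, 4.5
```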

Relevance:

10.00%

Publisher:

Abstract:

This paper addresses the single machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most of the previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a comparative computational study with 280 problems involving different due date scenarios. In addition, the values of optimal solutions for small problems from a known benchmark are provided.
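
A sketch of the objective being minimized: total earliness plus tardiness around a common due date d for jobs run back-to-back on a single machine. The processing times and penalty weights are made-up illustration values.

```python
# Total weighted earliness/tardiness around a common due date d.
def earliness_tardiness(proc_times, d, alpha=1.0, beta=1.0, start=0.0):
    """Total penalty for jobs run back-to-back from `start` in given order."""
    t, total = start, 0.0
    for p in proc_times:
        t += p                                          # completion time
        total += alpha * max(d - t, 0) + beta * max(t - d, 0)
    return total

jobs = [4, 2, 5, 3]                  # hypothetical processing times
print(earliness_tardiness(jobs, d=9))   # completions 4,6,11,14 -> 5+3+2+5 = 15
```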

Relevance:

10.00%

Publisher:

Abstract:

Leaf wetness duration (LWD) models based on empirical approaches offer practical advantages over physically based models in agricultural applications, but their spatial portability is questionable because they may be biased toward the climatic conditions under which they were developed. In our study, the spatial portability of three LWD models with empirical characteristics - an RH threshold model, a decision tree model with wind speed correction, and a fuzzy logic model - was evaluated using weather data collected in Brazil, Canada, Costa Rica, Italy and the USA. The fuzzy logic model was more accurate than the other models in estimating LWD measured by painted leaf wetness sensors. The fraction of correct estimates was greater for the fuzzy logic model (0.87) than for the other models (0.85-0.86) across the 28 sites where painted sensors were installed, and the degree-of-agreement κ statistic between model and painted sensors was greater for the fuzzy logic model (0.71) than for the other models (0.64-0.66). Values of the κ statistic for the fuzzy logic model were also less variable across sites than those of the other models. When model estimates were compared with measurements from unpainted leaf wetness sensors, the fuzzy logic model had a lower mean absolute error (2.5 h day⁻¹) than the other models (2.6-2.7 h day⁻¹) after being calibrated for the unpainted sensors. The results suggest that the fuzzy logic model has greater spatial portability than the other models evaluated and merits further validation in comparison with physical models under a wider range of climatic conditions. (C) 2010 Elsevier B.V. All rights reserved.
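
The simplest of the three models, the RH threshold model, can be sketched in a few lines: a leaf is counted as wet in any hour whose relative humidity reaches a threshold (90% is a commonly used value, assumed here; the hourly readings are invented).

```python
# RH threshold model for leaf wetness duration (LWD): an hour counts as wet
# when relative humidity is at or above the threshold.
def lwd_rh_threshold(hourly_rh, threshold=90.0):
    """Leaf wetness duration in hours for one day of hourly RH values (%)."""
    return sum(1 for rh in hourly_rh if rh >= threshold)

day = [62, 70, 78, 85, 91, 95, 97, 96, 92, 88, 80, 72,
       65, 60, 58, 57, 60, 66, 74, 83, 90, 93, 95, 94]   # invented readings
print(f"estimated LWD: {lwd_rh_threshold(day)} h")       # 9 h wet
```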

Relevance:

10.00%

Publisher:

Abstract:

This article attempts to elucidate one of the mechanisms that link trade barriers, in the form of port costs, to subsequent growth and regional inequality. Prior attention has focused on inland or link costs, but port costs can be considered a further barrier to enhancing trade liberalization and growth. In contrast to a highway link, congestion at a port may have severe impacts that are spread over space and time, whereas highway link congestion may be resolved within several hours. Since a port is part of the transportation network, any congestion or disruption is likely to ripple throughout the hinterland. In this sense, it is important to properly model the role that nodal components play in the context of spatial models and international trade. In this article, a spatial computable general equilibrium (CGE) model integrated with a transport network system is presented to simulate the impacts of increases in port efficiency in Brazil. The roles of ports of entry and ports of exit are explicitly considered to grasp the holistic picture of an integrated interregional system. Measures of efficiency for different port locations are incorporated in the calibration of the model and used as the benchmark in our simulations. Three scenarios are evaluated: (1) an overall increase in port efficiency in Brazil to achieve international standards; (2) efficiency gains associated with decentralization of port management in Brazil; and (3) regionally differentiated increases in port efficiency to reach the boundary of the national efficiency frontier.

Relevance:

10.00%

Publisher:

Abstract:

This paper develops a multi-regional general equilibrium model for climate policy analysis based on the latest version of the MIT Emissions Prediction and Policy Analysis (EPPA) model. We develop two versions so that we can solve the model either as a fully inter-temporal optimization problem (forward-looking, perfect foresight) or recursively. The standard EPPA model on which these models are based is solved recursively, and it is necessary to simplify some aspects of it to make an inter-temporal solution possible. The forward-looking capability allows one to better address economic and policy issues such as borrowing and banking of GHG allowances, efficiency implications of environmental tax recycling, endogenous depletion of fossil resources, international capital flows, and optimal emissions abatement paths, among others. To evaluate the solution approaches, we benchmark each version to the same macroeconomic path, and then compare the behavior of the two versions under a climate policy that restricts greenhouse gas emissions. We find that the energy sector and CO₂ price behavior are similar in both versions (in the recursive version of the model we enforce the inter-temporal theoretical efficiency result that abatement through time should be allocated such that the CO₂ price rises at the interest rate). The main difference that arises is that the macroeconomic costs are substantially lower in the forward-looking version of the model, since it allows consumption shifting as an additional avenue of adjustment to the policy. On the other hand, the simplifications required for solving the model as an optimization problem, such as dropping the full vintaging of the capital stock and fewer explicit technological options, likely have effects on the results. Moreover, inter-temporal optimization with perfect foresight poorly represents the real economy, where agents face high levels of uncertainty that likely lead to higher costs than if they knew the future with certainty. We conclude that while the forward-looking model has value for some problems, the recursive model produces similar behavior in the energy sector and provides greater flexibility in the details of the system that can be represented. (C) 2009 Elsevier B.V. All rights reserved.
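
The efficiency condition mentioned parenthetically above can be illustrated with a one-liner: along an efficient abatement path the CO₂ price grows geometrically at the interest rate, p_t = p_0(1 + r)^t. The starting price and rate below are arbitrary example numbers, not EPPA outputs.

```python
# CO2 price rising at the interest rate along an efficient abatement path.
p0, r = 25.0, 0.04          # $/tCO2 in year 0 and a 4% interest rate (assumed)

for t in range(0, 21, 5):
    print(f"year {t:2d}: CO2 price = {p0 * (1 + r) ** t:6.2f} $/tCO2")
# year 20: ~54.78 $/tCO2; banking/borrowing of allowances enforces this path.
```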

Relevance:

10.00%

Publisher:

Abstract:

Telemedicine might increase the speed of diagnosis for leprosy and reduce the development of disabilities. We compared the accuracy of diagnosis made by telemedicine with that made by in-person examination. The cases were patients with suspected leprosy at eight public health clinics in outlying areas of the city of Sao Paulo. The case history and clinical examination data, and at least two clinical images for each patient, were stored in a web-based system developed for teledermatology. After the examination at the public clinic, patients attended a teaching hospital for an in-person examination. The benchmark was the clinical examination by two dermatologists at the university hospital. From August 2005 to April 2006, 142 suspected cases of leprosy were forwarded to the website by the doctors at the clinics. Of these, 36 cases were excluded. There was overall agreement in the diagnosis of leprosy in 74% of the 106 remaining cases. The sensitivity was 78% and the specificity was 31%. Although the specificity was low, the study suggests that telemedicine may be a useful low-cost method for obtaining second opinions in programmes to control leprosy.
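
How such summary rates fall out of a 2x2 confusion matrix; since the abstract reports only the rates, the cell counts below are back-solved approximations chosen to lie close to the reported 74% agreement, 78% sensitivity and 31% specificity over 106 cases.

```python
# Confusion-matrix arithmetic behind the reported accuracy figures.
# Cell counts are approximate reconstructions, not the study's actual data.
tp, fn = 76, 21      # leprosy confirmed in person: telemedicine right / wrong
tn, fp = 3, 6        # leprosy ruled out in person: telemedicine right / wrong

total = tp + fn + tn + fp
print(f"cases:       {total}")                       # 106
print(f"agreement:   {(tp + tn) / total:.1%}")       # ~74%
print(f"sensitivity: {tp / (tp + fn):.1%}")          # ~78%
print(f"specificity: {tn / (tn + fp):.1%}")          # ~33%, near the reported 31%
```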

Relevance:

10.00%

Publisher:

Abstract:

The seasonal distribution of Lutzomyia longipalpis was studied in two forested and five domiciliary areas of the urban area of Campo Grande, MS, from December 2003 to November 2005. Weekly captures were carried out with CDC light traps positioned at ground level and in the canopy inside a residual forest, at the edge (ground level) of a woodland, and in at least one of the following peridomiciliary ecotopes: a cultivated area, a chicken coop, a pigsty, a kennel, a goat and sheep shelter, and an intradomicile. A total of 9519 sand flies were collected, 2666 during the first year and 6853 during the second. L. longipalpis was found throughout the 2-year period, presenting smaller peaks at intervals of 2-3 months and two greater peaks, the first in February and the second in April 2005, soon after periods of heavy rain. Only in one of the woodlands was a significant negative correlation (p < 0.05) observed during the first year between the number of insects and the climatic factors (temperature, RHA and rain). In the domiciliary areas, some positive correlations (p < 0.05) with one or more climatic factors occurred in four domiciles; however, the species shows a clear tendency toward greater frequency in the rainy season (72%) than in the dry season (28%). Thus, we recommend an intensification of the VL control measures applied in Campo Grande, MS, during the rainy season, with a view to reducing the risk of transmission of the disease. (C) 2007 Elsevier B.V. All rights reserved.

Relevance:

10.00%

Publisher:

Abstract:

We present the discovery of a wide (67 AU) substellar companion to the nearby (21 pc) young solar-metallicity M1 dwarf CD-35 2722, a member of the ≈100 Myr AB Doradus association. Two epochs of astrometry from the NICI Planet-Finding Campaign confirm that CD-35 2722 B is physically associated with the primary star. Near-IR spectra indicate a spectral type of L4 ± 1 with a moderately low surface gravity, making it one of the coolest young companions found to date. The absorption lines and near-IR continuum shape of CD-35 2722 B agree especially well with those of the dusty field L4.5 dwarf 2MASS J22244381-0158521, while the near-IR colors and absolute magnitudes match those of the 5 Myr old L4 planetary-mass companion 1RXS J160929.1-210524 b. Overall, CD-35 2722 B appears to be an intermediate-age benchmark for L dwarfs, with a less peaked H-band continuum than the youngest objects and near-IR absorption lines comparable to those of field objects. We fit Ames-Dusty model atmospheres to the near-IR spectra and find T_eff = 1700-1900 K and log(g) = 4.5 ± 0.5. The spectra also show that the radial velocities of components A and B agree to within ±10 km s⁻¹, further confirming their physical association. Using the age and bolometric luminosity of CD-35 2722 B, we derive a mass of 31 ± 8 M_Jup from the Lyon/Dusty evolutionary models. Altogether, young late-M to mid-L type companions appear to be overluminous for their near-IR spectral type compared with field objects, in contrast to the underluminosity of young late-L and early-T dwarfs.
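
A quick check of the geometry quoted above: by the definition of the parsec, a projected separation in AU divided by the distance in parsecs gives the angular separation in arcseconds.

```python
# Angular separation implied by the abstract's 67 AU at 21 pc.
sep_au, dist_pc = 67.0, 21.0
print(f"angular separation ~ {sep_au / dist_pc:.1f} arcsec")  # ~3.2 arcsec
```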

Relevance:

10.00%

Publisher:

Abstract:

The diffusion of astrophysical magnetic fields in conducting fluids in the presence of turbulence depends on whether magnetic fields can change their topology via reconnection in highly conducting media. Recent progress in understanding fast magnetic reconnection in the presence of turbulence reassures us that the magnetic field behavior in computer simulations and in turbulent astrophysical environments is similar, as far as magnetic reconnection is concerned. This makes it meaningful to perform MHD simulations of turbulent flows in order to understand the diffusion of magnetic fields in astrophysical environments. Our studies of magnetic field diffusion in turbulent media reveal interesting new phenomena. First of all, our three-dimensional MHD simulations initiated with anti-correlated magnetic field and gaseous density exhibit at later times a de-correlation of the magnetic field and density, which corresponds well to observations of the interstellar medium. While earlier studies stressed the role of either ambipolar diffusion or time-dependent turbulent fluctuations in de-correlating magnetic field and density, we obtain the effect of permanent de-correlation with a one-fluid code, i.e., without invoking ambipolar diffusion. In addition, in the presence of gravity and turbulence, our three-dimensional simulations show a decrease of the magnetic flux-to-mass ratio as the gaseous density at the center of the gravitational potential increases. We observe this effect both in situations where we start with equilibrium distributions of gas and magnetic field and when we follow the evolution of collapsing, dynamically unstable configurations. Thus, the process of turbulent magnetic field removal should be applicable both to quasi-static subcritical molecular clouds and cores and to violently collapsing supercritical entities. The increase of the gravitational potential as well as of the magnetization of the gas increases the segregation of the mass and magnetic flux in the saturated final state of the simulations, supporting the notion that the reconnection-enabled diffusivity relaxes the magnetic field + gas system in the gravitational field toward its minimal energy state. This effect is expected to play an important role in star formation, from its initial stages of concentrating interstellar gas to the final stages of accretion onto the forming protostar. In addition, we benchmark our codes by studying heat transfer in magnetized compressible fluids and confirm the high rates of turbulent advection of heat obtained in an earlier study.

Relevance:

10.00%

Publisher:

Abstract:

We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the very traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to show quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique must be used in situations involving the analysis of complex emission regions having more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required in analyzing images constructed from a poorly sampled (u, v) plane.
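
A sketch of the two ingredients named in the abstract: a model image assembled as a sum of elliptical Gaussian components, and the performance function (sum of squared pixel differences) that the cross-entropy search minimizes. The particular parameterization (peak, centre, major/minor widths, position angle) is one common convention, assumed here for illustration.

```python
# Model image as a sum of elliptical Gaussians, plus the squared-residual
# performance function a cross-entropy optimizer would minimize.
import numpy as np

def elliptical_gaussian(x, y, peak, x0, y0, sig_maj, sig_min, theta):
    ct, st = np.cos(theta), np.sin(theta)
    u = (x - x0) * ct + (y - y0) * st       # rotate into the ellipse frame
    v = -(x - x0) * st + (y - y0) * ct
    return peak * np.exp(-0.5 * ((u / sig_maj) ** 2 + (v / sig_min) ** 2))

def model_image(shape, components):
    y, x = np.mgrid[0:shape[0], 0:shape[1]]
    return sum(elliptical_gaussian(x, y, *c) for c in components)

def performance(observed, components):
    """Sum of squared differences between observed and model images."""
    return np.sum((observed - model_image(observed.shape, components)) ** 2)

# Synthetic "observed" benchmark jet: three Gaussian sources plus mild noise.
truth = [(1.0, 20, 32, 4, 2, 0.5),
         (0.6, 32, 32, 3, 2, 0.3),
         (0.3, 45, 30, 5, 3, 0.1)]
rng = np.random.default_rng(0)
obs = model_image((64, 64), truth) + rng.normal(0, 0.01, (64, 64))
print(f"residual at the true parameters: {performance(obs, truth):.2f}")
```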