961 results for Chance-constrained model
Abstract:
The execution of a project requires resources that are generally scarce. Classical approaches to resource allocation assume that the usage of these resources by an individual project activity is constant during the execution of that activity; in practice, however, the project manager may vary resource usage over time within prescribed bounds. This variation gives rise to the project scheduling problem which consists in allocating the scarce resources to the project activities over time such that the project duration is minimized, the total number of resource units allocated equals the prescribed work content of each activity, and various work-content-related constraints are met. We formulate this problem for the first time as a mixed-integer linear program. Our computational results for a standard test set from the literature indicate that this model outperforms the state-of-the-art solution methods for this problem.
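A minimal time-indexed sketch of a work-content formulation of this kind, written with the open-source PuLP modeller, is given below. The activity data, the per-period usage bounds, and the deliberately simplified constraint set (no precedence or non-preemption constraints) are illustrative assumptions, not the authors' exact model.

```python
# Hypothetical sketch of a work-content project scheduling MILP (not the paper's model).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, LpInteger

T = range(12)                                    # discrete time periods
acts = {"A": {"work": 6, "min": 1, "max": 3},    # work content and per-period usage bounds
        "B": {"work": 4, "min": 1, "max": 2}}
capacity = 4                                     # renewable resource capacity per period

m = LpProblem("work_content_scheduling", LpMinimize)
# r[a][t]: resource units allocated to activity a in period t (0 when inactive)
r = {a: [LpVariable(f"r_{a}_{t}", 0, acts[a]["max"], LpInteger) for t in T] for a in acts}
# x[a][t]: 1 if activity a is in progress during period t
x = {a: [LpVariable(f"x_{a}_{t}", cat=LpBinary) for t in T] for a in acts}
Cmax = LpVariable("Cmax", 0, len(T))

m += Cmax                                             # minimise project duration
for a in acts:
    m += lpSum(r[a]) == acts[a]["work"]               # allocated units equal the work content
    for t in T:
        m += r[a][t] <= acts[a]["max"] * x[a][t]      # usage only while active, within bounds
        m += r[a][t] >= acts[a]["min"] * x[a][t]
        m += Cmax >= (t + 1) * x[a][t]                # makespan covers every active period
for t in T:
    m += lpSum(r[a][t] for a in acts) <= capacity     # per-period resource capacity

m.solve()
```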
Abstract:
The bedrock topography beneath the Quaternary cover provides an important archive for the identification of erosional processes during past glaciations. Here, we combined stratigraphic investigations of more than 40,000 boreholes with published data to generate a bedrock topography model for the entire plateau north of the Swiss Alps including the valleys within the mountain belt. We compared the bedrock map with data about the pattern of the erosional resistance of Alpine rocks to identify the controls of the lithologic architecture on the location of overdeepenings. We additionally used the bedrock topography map as a basis to calculate the erosional potential of the Alpine glaciers, which was related to the thickness of the LGM ice. We used these calculations to interpret how glaciers, with support from subglacial meltwater under pressure, might have shaped the bedrock topography of the Alps. We found that the erosional resistance of the bedrock lithology mainly explains where overdeepenings in the Alpine valleys and the plateau occur. In particular, in the Alpine valleys, the locations of overdeepenings largely overlap with areas where the underlying bedrock has a low erosional resistance, or where it was shattered by faults. We also found that the assignment of two end-member scenarios of erosion, related to glacial abrasion/plucking in the Alpine valleys, and dissection by subglacial meltwater in the plateau, may be adequate to explain the pattern of overdeepenings in the Alpine realm. This most likely points to the topographic controls on glacial scouring. In the Alps, the flow of the LGM and previous glaciers was constrained by valley flanks, while ice flow was mostly divergent on the plateau where valley borders are absent. We suggest that these differences in landscape conditioning might have contributed to the contrasts in the formation of overdeepenings in the Alpine valleys and the plateau.
Abstract:
Radiocarbon production, solar activity, total solar irradiance (TSI) and solar-induced climate change are reconstructed for the Holocene (10 to 0 kyr BP), and TSI is predicted for the next centuries. The IntCal09/SHCal04 radiocarbon and ice core CO2 records, reconstructions of the geomagnetic dipole, and instrumental data of solar activity are applied in the Bern3D-LPJ, a fully featured Earth system model of intermediate complexity including a 3-D dynamic ocean, ocean sediments, and a dynamic vegetation model, and in formulations linking radiocarbon production, the solar modulation potential, and TSI. Uncertainties are assessed using Monte Carlo simulations and bounding scenarios. Transient climate simulations span the past 21 thousand years, thereby considering the time lags and uncertainties associated with the last glacial termination. Our carbon-cycle-based modern estimate of radiocarbon production of 1.7 atoms cm⁻² s⁻¹ is lower than previously reported for the cosmogenic nuclide production model by Masarik and Beer (2009) and is more in line with Kovaltsov et al. (2012). In contrast to earlier studies, periods of high solar activity were quite common not only in recent millennia, but throughout the Holocene. Notable deviations compared to earlier reconstructions are also found on decadal to centennial timescales. We show that earlier Holocene reconstructions, not accounting for the interhemispheric gradients in radiocarbon, are biased low. Solar activity is higher than the modern average (650 MeV) during 28% of the time, but the absolute values remain weakly constrained due to uncertainties in the normalisation of the solar modulation to instrumental data. A recently published solar activity–TSI relationship yields small changes in Holocene TSI of the order of 1 W m⁻², with a Maunder Minimum irradiance reduction of 0.85 ± 0.16 W m⁻². Related solar-induced variations in global mean surface air temperature are simulated to be within 0.1 K. Autoregressive modelling suggests a declining trend of solar activity in the 21st century towards average Holocene conditions.
Abstract:
Air was sampled from the porous firn layer at the NEEM site in Northern Greenland. We use an ensemble of ten reference tracers of known atmospheric history to characterise the transport properties of the site. By analysing uncertainties in both data and the reference gas atmospheric histories, we can objectively assign weights to each of the gases used for the depth-diffusivity reconstruction. We define an objective root mean square criterion that is minimised in the model tuning procedure. Each tracer constrains the firn profile differently through its unique atmospheric history and free air diffusivity, making our multiple-tracer characterisation method a clear improvement over the commonly used single-tracer tuning. Six firn air transport models are tuned to the NEEM site; all models successfully reproduce the data within a 1σ Gaussian distribution. A comparison between two replicate boreholes drilled 64 m apart shows differences in measured mixing ratio profiles that exceed the experimental error. We find evidence that diffusivity does not vanish completely in the lock-in zone, as is commonly assumed. The ice age − gas age difference (Δage) at the firn-ice transition is calculated to be 182 (+3/−9) yr. We further present the first intercomparison study of firn air models, where we introduce diagnostic scenarios designed to probe specific aspects of the model physics. Our results show that there are major differences in the way the models handle advective transport. Furthermore, diffusive fractionation of isotopes in the firn is poorly constrained by the models, which has consequences for attempts to reconstruct the isotopic composition of trace gases back in time using firn air and ice core records.
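As a rough illustration of the kind of weighted root-mean-square tuning criterion described above, a sketch could take the following form. The forward firn transport model is only a placeholder, and the uncertainty weighting is an assumption rather than the exact criterion used in the paper.

```python
# Hypothetical weighted RMS mismatch between modelled and measured tracer profiles.
import numpy as np
from scipy.optimize import minimize

def rms_criterion(params, tracers, model):
    """params: diffusivity-profile parameters to tune;
    tracers: list of dicts with 'name', 'measured' and 'sigma' arrays;
    model: placeholder forward model returning a modelled profile."""
    total, n = 0.0, 0
    for tr in tracers:
        modelled = model(params, tr["name"])                 # forward-model the tracer
        resid = (modelled - tr["measured"]) / tr["sigma"]    # weight by combined uncertainty
        total += np.sum(resid ** 2)
        n += resid.size
    return np.sqrt(total / n)

# Tuning would then minimise this criterion, e.g.:
# result = minimize(rms_criterion, x0, args=(tracers, firn_model), method="Nelder-Mead")
```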
Abstract:
A state-of-the-art inverse model, CarbonTracker Data Assimilation Shell (CTDAS), was used to optimize estimates of methane (CH4) surface fluxes using atmospheric observations of CH4 as a constraint. The model consists of the latest version of the TM5 atmospheric chemistry-transport model and an ensemble Kalman filter based data assimilation system. The model was constrained by atmospheric methane surface concentrations, obtained from the World Data Centre for Greenhouse Gases (WDCGG). Prior methane emissions were specified for five sources: biosphere, anthropogenic, fire, termites and ocean, of which biosphere and anthropogenic emissions were optimized. Atmospheric CH4 mole fractions for 2007 from northern Finland calculated from prior and optimized emissions were compared with observations. It was found that the root mean squared errors of the posterior estimates were more than halved. Furthermore, inclusion of NOAA observations of CH4 from weekly discrete air samples collected at Pallas improved agreement between posterior CH4 mole fraction estimates and continuous observations, and resulted in reduced optimized biosphere emissions and uncertainties in northern Finland.
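For readers unfamiliar with ensemble Kalman filter based assimilation, a generic and deliberately simplified update step is sketched below. It is not the CTDAS implementation; a production system would add perturbed observations or a square-root scheme, localization, and covariance inflation.

```python
# Generic ensemble Kalman filter update for flux scaling factors (illustrative only).
import numpy as np

def enkf_update(X, y, H, R):
    """X: (n_state, n_members) prior ensemble of flux scaling factors,
    y: (n_obs,) observed mole fractions, H: (n_obs, n_state) observation operator,
    R: (n_obs, n_obs) observation error covariance."""
    n = X.shape[1]
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                        # ensemble anomalies
    HA = H @ A                                        # anomalies in observation space
    P_hh = HA @ HA.T / (n - 1) + R                    # innovation covariance
    K = (A @ HA.T / (n - 1)) @ np.linalg.inv(P_hh)    # Kalman gain
    innov = y[:, None] - H @ X                        # per-member innovations
    return X + K @ innov                              # posterior ensemble
```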
Abstract:
This paper clarifies the relationship between an injurer's wealth level and his care choice by highlighting the distinction between monetary and non-monetary care. When care is non-monetary, wealth-constrained injurers generally take less than optimal care, and care is increasing in their wealth level under both strict liability and negligence. In contrast, when care is monetary, injurers may take too much or too little care under strict liability, and care is not strictly increasing in injurer wealth. Under negligence, the relationship between injurer care and wealth is similar in the two formulations. However, when litigation costs are added to the model, the relationship between injurer care and wealth becomes non-monotonic under both liability rules.
Abstract:
Bromoform (CHBr3) is an important precursor of atmospheric reactive bromine species that are involved in ozone depletion in the troposphere and stratosphere. In the open ocean, bromoform production is linked to phytoplankton that contain the enzyme bromoperoxidase. Coastal sources of bromoform are higher than open ocean sources. However, open ocean emissions are important because the transfer of tracers to higher altitudes, i.e. into the ozone layer, strongly depends on the location of emissions. For example, emissions in the tropics are more rapidly transported into the upper atmosphere than emissions from higher latitudes. Global spatio-temporal features of bromoform emissions are poorly constrained. Here, a global three-dimensional ocean biogeochemistry model (MPIOM-HAMOCC) is used to simulate bromoform cycling in the ocean and emissions into the atmosphere, using recently published data of global atmospheric concentrations (Ziska et al., 2013) as upper boundary conditions. Our simulated surface concentrations of CHBr3 match the observations well. Simulated global annual emissions based on monthly mean model output are lower than previous estimates, including the estimate by Ziska et al. (2013), because the gas exchange reverses when less bromoform is produced in non-blooming seasons. This is the case for higher latitudes, i.e. the polar regions and the northern North Atlantic. Further model experiments show that future model studies may need to distinguish different bromoform-producing phytoplankton species, and reveal that the transport of CHBr3 from the coast considerably alters open ocean bromoform concentrations, in particular in the northern sub-polar and polar regions.
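The sign reversal mentioned above follows directly from the standard bulk air-sea gas-exchange formulation. The sketch below uses invented numbers for the transfer velocity, concentrations and Henry's law constant, purely to illustrate how the flux changes direction when surface-water bromoform drops below the atmospheric equilibrium value.

```python
# Bulk air-sea flux: F = k_w * (C_w - C_a / H); positive = outgassing to the atmosphere.
def air_sea_flux(k_w, c_water, c_air, henry_dimensionless):
    return k_w * (c_water - c_air / henry_dimensionless)

# Illustrative numbers only (not model output):
k_w = 2.0e-5     # gas transfer velocity, m s^-1 (assumed)
c_air = 4.0      # atmospheric concentration in equivalent units (assumed)
H = 0.2          # dimensionless Henry's law constant (assumed)

print(air_sea_flux(k_w, c_water=30.0, c_air=c_air, henry_dimensionless=H))  # bloom season: > 0, outgassing
print(air_sea_flux(k_w, c_water=10.0, c_air=c_air, henry_dimensionless=H))  # non-bloom season: < 0, uptake
```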
Abstract:
Received signal strength-based localization systems usually rely on a calibration process that aims at characterizing the propagation channel. However, due to changing environmental dynamics, the behavior of the channel may change after some time, and thus recalibration is necessary to maintain the positioning accuracy. This paper proposes a dynamic calibration method to initially calibrate and subsequently update the parameters of the propagation channel model using a Least Mean Squares approach. The method assumes that each anchor node in the localization infrastructure is characterized by its own propagation channel model. In practice, a set of sniffers is used to collect RSS samples, which are used to automatically calibrate each channel model by iteratively minimizing the positioning error. The proposed method is validated through numerical simulation, showing that the positioning error of the mobile nodes is effectively reduced. Furthermore, the method has a very low computational cost; therefore it can be used in real-time operation on resource-constrained wireless nodes.
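A hedged sketch of the per-anchor calibration idea is given below, using a log-distance path-loss model whose two parameters are updated by Least Mean Squares from sniffer samples. The parameter names, initial values and step size are assumptions for illustration, not the paper's exact algorithm.

```python
# Per-anchor LMS calibration of a log-distance path-loss model (illustrative sketch).
import numpy as np

class AnchorChannelModel:
    def __init__(self, p0=-40.0, n=2.5, mu=0.01):
        self.p0 = p0      # RSS at the reference distance (dBm), initial guess
        self.n = n        # path-loss exponent, initial guess
        self.mu = mu      # LMS step size

    def predict(self, d):
        """Predicted RSS (dBm) at distance d (same units as the reference distance)."""
        return self.p0 - 10.0 * self.n * np.log10(d)

    def lms_step(self, d, rss_measured):
        """One LMS update from a sniffer sample taken at known distance d."""
        err = rss_measured - self.predict(d)              # prediction error (dB)
        # Gradient of the prediction w.r.t. (p0, n) is (1, -10*log10(d)).
        self.p0 += self.mu * err * 1.0
        self.n += self.mu * err * (-10.0 * np.log10(d))
        return err

# Usage sketch: model = AnchorChannelModel(); model.lms_step(d=12.0, rss_measured=-67.0)
```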
Abstract:
Adaptive agents use feedback as a key strategy to cope with uncertainty and change in their environments. The information fed back from the sensorimotor loop into the control subsystem can be used to change four different elements of the controller: parameters associated to the control model, the control model itself, the functional organization of the agent and the functional realization of the agent. There are many change alternatives and hence the complexity of the agent's space of potential configurations is daunting. The only viable alternative for space- and time-constrained agents (in practical, economical, evolutionary terms) is to achieve a reduction of the dimensionality of this configuration space. Emotions play a critical role in this reduction. The reduction is achieved by functionalization, interface minimization and by patterning, i.e. by selection among a predefined set of organizational configurations. This analysis lets us state how autonomy emerges from the integration of cognitive, emotional and autonomic systems in strict functional terms: autonomy is achieved by the closure of functional dependency. Emotion-based morphofunctional systems are able to exhibit complex adaptation patterns at a reduced cognitive cost. In this article we show a general model of how emotion supports functional adaptation and how emotional biological systems operate following this theoretical model. We also show how this model is applicable to the construction of a wide spectrum of artificial systems.
Abstract:
Systems used for localizing targets such as goods, individuals, or animals commonly rely on operational means to meet the final application demands. However, what would happen if some of those means were powered up randomly by harvesting systems? And what if the devices not randomly powered had their duty cycles restricted? Under what conditions would such an operation be tolerable in localization services? What if the references provided by nodes in a tracking problem were distorted? Moreover, there is an underlying topic common to the previous questions regarding the transfer of conceptual models to reality in field tests: what challenges are faced when deploying a localization network that integrates energy harvesting modules? The application scenario of the system studied is a traditional herding environment of semi-domesticated reindeer (Rangifer tarandus tarandus) in northern Scandinavia. In these conditions, information on the approximate locations of reindeer is as important as environmental preservation. Herders also need cost-effective devices capable of operating unattended in sometimes extreme weather conditions. The analyses developed are valuable not only for the specific application environment presented, but also because they may serve as an approach to the performance of navigation systems in the absence of reasonably accurate references such as those of the Global Positioning System (GPS). A number of energy-harvesting solutions, such as thermal and radio-frequency harvesting, do not commonly provide power beyond one milliwatt. When they do, battery buffers may be needed (as happens with solar energy), which may raise costs and make systems more dependent on environmental temperatures. In general, given our problem, a harvesting system is needed that is capable of providing energy bursts of at least some milliwatts. Many works on localization assume that devices can determine unknown locations using range-based techniques or fingerprinting, capabilities which cannot be assumed in the approach considered herein. The system presented is akin to range-free techniques, but goes to the extent of considering very low node densities: most range-free techniques are, therefore, not applicable. Animal localization, in particular, is usually supported by accurate devices such as GPS collars, which deplete their batteries in, at most, a few days. Such short-life solutions are not particularly desirable in the framework considered. In tracking, the challenge typically addressed is to attain high precision levels using complex, reliable hardware and thorough processing techniques. One of the challenges in this Thesis is the use of equipment with only part of its facilities in permanent operation, which may yield high input noise levels in the form of distorted reference points. The solution presented integrates a kinetic harvesting module in some nodes, which are expected to be a majority in the network. These modules are capable of providing power bursts of some milliwatts, which suffice to meet node energy demands. The use of harvesting modules under the aforementioned conditions makes the system less dependent on environmental temperatures, as no batteries are used in nodes with harvesters; this may also be an advantage in economic terms. There is a second kind of node. These nodes are battery powered (without kinetic energy harvesters) and are, therefore, dependent on temperature and battery replacements.
In addition, their operation is constrained by duty cycles in order to extend node lifetime and, consequently, their autonomy. There is, in turn, a third type of node (hotspots), which can be static or mobile. Hotspots are also battery-powered and are used to retrieve information from the network so that it can be presented to users. The system's operational chain starts at the kinetically powered nodes broadcasting their own identifier. If an identifier is received at a battery-powered node, the latter stores it in its records. Later, when the recording node meets a hotspot, its full record of detections is transferred to the hotspot. Every detection record comprises, at least, a node identifier and the position read from the battery-operated node's GPS module prior to the detection. The characteristics of the system presented give the aforementioned operation certain particularities, which are also studied. First, identifier transmissions are random, as they depend on movements at the kinetic modules (reindeer movements in our application); not every movement suffices, since it must overcome a certain energy threshold. Second, identifier transmissions may not be heard unless there is a battery-powered node in the surroundings. Third, battery-powered nodes do not continuously poll their GPS module, so localization errors rise even further; recall that this behavior is tied to the aforementioned power-saving policies that extend node lifetime. Last, some time elapses between the instant a random identifier transmission is detected and the moment the user becomes aware of the detection: it takes some time to find a hotspot. Tracking is posed as a problem with a single kinetically powered target and a population of battery-operated nodes, with higher node densities than in the localization problem. Since the latter provide their approximate positions as reference locations, the study again focuses on assessing the impact of such distorted references on performance. Unlike in localization, distance-estimation capabilities based on signal parameters are assumed in this problem. Three variants of the Kalman filter family are applied in this context: the regular Kalman filter, the alpha-beta filter, and the unscented Kalman filter. The study enclosed hereafter comprises both field tests and simulations. Field tests were used mainly to assess the challenges related to power supply and operation in extreme conditions, as well as to model nodes and some aspects of their operation in the application scenario. These models are the basis of the simulations developed later. The overall system performance is analyzed according to three metrics: number of detections per kinetic node, accuracy, and latency. The links between these metrics and the operational conditions are also discussed and characterized statistically. Subsequently, this statistical characterization is used to forecast performance figures given specific operational parameters. In tracking, also studied via simulations, nonlinear relationships are found between accuracy and both the duty cycles and the cluster sizes of battery-operated nodes. The solution presented may be more complex in terms of network structure than existing solutions based on GPS collars. However, its main gain lies in taking advantage of users' error tolerance to reduce costs and to become more environmentally friendly by diminishing the potential number of batteries that can be lost.
Whether it is applicable or not ultimately depends on the conditions and requirements imposed by users' needs and operational environments, which is, as explained above, one of the topics of this Thesis.
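Of the three filter variants mentioned in the abstract above, the alpha-beta filter is the simplest. A one-dimensional sketch is given below; the gain values and the constant-velocity setting are chosen purely for illustration and are not taken from the Thesis.

```python
# Illustrative one-dimensional alpha-beta tracking filter.
class AlphaBetaFilter:
    def __init__(self, alpha=0.85, beta=0.005, dt=1.0):
        self.alpha, self.beta, self.dt = alpha, beta, dt
        self.x, self.v = 0.0, 0.0        # estimated position and velocity

    def update(self, z):
        """Fuse one (possibly noisy, distorted-reference) position measurement z."""
        x_pred = self.x + self.v * self.dt            # predict with a constant-velocity model
        r = z - x_pred                                # innovation
        self.x = x_pred + self.alpha * r              # correct position
        self.v = self.v + (self.beta / self.dt) * r   # correct velocity
        return self.x
```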
Abstract:
We recently have shown that selective growth of transplanted normal hepatocytes can be achieved in a setting of cell cycle block of endogenous parenchymal cells. Thus, massive proliferation of donor-derived normal hepatocytes was observed in the liver of rats previously given retrorsine (RS), a naturally occurring alkaloid that blocks proliferation of resident liver cells. In the present study, the fate of nodular hepatocytes transplanted into RS-treated or normal syngeneic recipients was followed. The dipeptidyl peptidase type IV-deficient (DPPIV−) rat model for hepatocyte transplantation was used to distinguish donor-derived cells from recipient cells. Hepatocyte nodules were chemically induced in Fischer 344, DPPIV+ rats; livers were then perfused and larger (>5 mm) nodules were separated from surrounding tissue. Cells isolated from either tissue were then injected into normal or RS-treated DPPIV− recipients. One month after transplantation, grossly visible nodules (2–3 mm) were seen in RS-treated recipients transplanted with nodular cells. They grew rapidly, occupying 80–90% of the host liver at 2 months, and progressed to hepatocellular carcinoma within 4 months. By contrast, no liver nodules developed within 6 months when nodular hepatocytes were injected into the liver of recipients not exposed to RS, although small clusters of donor-derived cells were present in these animals. Taken together, these results directly point to a fundamental role played by the host environment in modulating the growth and the progression rate of altered cells during carcinogenesis. In particular, they indicate that conditions associated with growth constraint of the host tissue can drive tumor progression in vivo.
Abstract:
The Huangtupo landslide is one of the largest in the Three Gorges region, China. The county-seat town of Badong, located on the south shore between the Xiling and Wu gorges of the Yangtze River, was moved to this unstable slope prior to the construction of the Three Gorges Project, since the new Three Gorges reservoir completely submerged the location of the old city. The instability of the slope is affecting the new town by causing residential safety problems. The Huangtupo landslide provides scientists with an opportunity to understand landslide response to fluctuating river water levels and heavy rainfall episodes, which is essential for deciding upon appropriate remediation measures. Interferometric Synthetic Aperture Radar (InSAR) techniques provide a very useful tool for the study of superficial and spatially variable displacement phenomena. In this paper, three sets of radar data have been processed to investigate the Huangtupo landslide. Results show that maximum displacements are affecting the northwest zone of the slope corresponding to Riverside slumping mass I#. The other main landslide bodies (i.e. Riverside slumping mass II#, the Substation landslide and the Garden Spot landslide) exhibit a stable behaviour in agreement with in situ data, although some active areas have been recognized at the foot of the Substation landslide and the Garden Spot landslide. InSAR has allowed us to study the kinematic behaviour of the landslide and to identify its active boundaries. Furthermore, the analysis of the InSAR displacement time-series has helped to recognize the different displacement patterns on the slope and their relationships with various triggering factors. For those persistent scatterers that exhibit long-term displacements, the displacement time-series can be decomposed into a creep model (controlled by geological conditions) and a superimposed recoverable term (dependent on external factors), which appears closely correlated with reservoir water level changes close to the river's edge. These results, combined with in situ data, provide a comprehensive analysis of the Huangtupo landslide, which is essential for its management.
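A simplified stand-in for the decomposition described above can be written as an ordinary least-squares fit of a linear creep trend plus a term proportional to the reservoir water-level record. The design matrix and variable names below are assumptions for illustration, not the paper's processing chain.

```python
# Illustrative decomposition of an InSAR displacement time-series.
import numpy as np

def decompose(t, displacement, water_level):
    """t: time (years), displacement: LOS displacement (mm),
    water_level: reservoir water-level anomaly (m); all 1-D NumPy arrays."""
    # Columns: linear creep trend, water-level-dependent recoverable term, offset.
    G = np.column_stack([t, water_level, np.ones_like(t)])
    coeffs, *_ = np.linalg.lstsq(G, displacement, rcond=None)
    creep_rate, level_coeff, offset = coeffs
    return creep_rate, level_coeff, offset
```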
Abstract:
Context. Classical supergiant X-ray binaries (SGXBs) and supergiant fast X-ray transients (SFXTs) are two types of high-mass X-ray binaries (HMXBs) that present similar donors but, at the same time, show very different behavior in the X-rays. The reason for this dichotomy of wind-fed HMXBs is still a matter of debate. Among the several explanations that have been proposed, some invoke specific stellar wind properties of the donor stars. Only dedicated empirical analyses of the donors' stellar winds can provide the information required to adequately test these theories. However, such analyses are scarce. Aims. To close this gap, we perform a comparative analysis of the optical companion in two important systems: IGR J17544-2619 (SFXT) and Vela X-1 (SGXB). We analyze the spectra of each star in detail and derive their stellar and wind properties. As a next step, we compare the wind parameters, giving us an excellent chance of recognizing key differences between donor winds in SFXTs and SGXBs. Methods. We use archival infrared, optical and ultraviolet observations, and analyze them with the non-local thermodynamic equilibrium (NLTE) Potsdam Wolf-Rayet model atmosphere code. We derive the physical properties of the stars and their stellar winds, accounting for the influence of X-rays on the stellar winds. Results. We find that the stellar parameters derived from the analysis generally agree well with the spectral types of the two donors: O9I (IGR J17544-2619) and B0.5Iae (Vela X-1). The distances to the sources have been revised and also agree well with the estimates already available in the literature. In IGR J17544-2619 we are able to narrow the uncertainty to d = 3.0 ± 0.2 kpc. From the stellar radius of the donor and its X-ray behavior, the eccentricity of IGR J17544-2619 is constrained to e < 0.25. The derived chemical abundances point to a certain degree of mixing during the lifetime of the donors. An important difference between the stellar winds of the two stars is their terminal velocities (ν∞ = 1500 km s⁻¹ in IGR J17544-2619 and ν∞ = 700 km s⁻¹ in Vela X-1), which have important consequences for the X-ray luminosity of these sources. Conclusions. The donors of IGR J17544-2619 and Vela X-1 have similar spectral types as well as similar parameters that physically characterize them and their spectra. In addition, the orbital parameters of the systems are similar too, with a nearly circular orbit and short orbital period. However, they show moderate differences in their stellar wind velocity and in the spin period of their neutron star, which has a strong impact on the X-ray luminosity of the sources. This specific combination of wind speed and pulsar spin favors an accretion regime with a persistently high luminosity in Vela X-1, while it favors an inhibiting accretion mechanism in IGR J17544-2619. Our study demonstrates that the relative wind velocity is critical for class determination in HMXBs hosting a supergiant donor, given that it may shift the accretion mechanism from direct accretion to propeller regimes when combined with other parameters.
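To see why a factor of roughly two in terminal wind velocity matters so much, a back-of-the-envelope Bondi-Hoyle-Lyttleton estimate scales the wind-capture rate as the inverse cube of the relative wind speed. The snippet below is only an order-of-magnitude sketch under assumed values (it ignores orbital velocity and local density differences, and is not the analysis in the paper).

```python
# Order-of-magnitude Bondi-Hoyle-Lyttleton wind-capture scaling (illustrative only).
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_ns = 1.4 * 1.989e30    # assumed neutron-star mass, kg

def bhl_capture_rate(rho, v_rel):
    """Wind-capture rate ~ 4*pi*G^2*M^2*rho / v_rel^3 (rho in kg m^-3, v_rel in m s^-1)."""
    return 4.0 * math.pi * G**2 * M_ns**2 * rho / v_rel**3

rho = 1.0e-12            # assumed local wind density, kg m^-3
ratio = bhl_capture_rate(rho, 700e3) / bhl_capture_rate(rho, 1500e3)
print(f"capture-rate ratio (slow wind / fast wind) ~ {ratio:.0f}")   # ~ (1500/700)^3, about 10
```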
Abstract:
STUDY HYPOTHESIS Using optimized conditions, primary trophoblast cells isolated from human term placenta can develop a confluent monolayer in vitro, which morphologically and functionally resembles the microvilli structure found in vivo. STUDY FINDING We report the successful establishment of a confluent human primary trophoblast monolayer using pre-coated polycarbonate inserts, where the integrity and functionality were validated by cell morphology, biophysical features, cellular marker expression and secretion, and asymmetric glucose transport. WHAT IS KNOWN ALREADY Human trophoblast cells form the initial barrier between maternal and fetal blood to regulate materno-fetal exchange processes. Although the method for isolating pure human cytotrophoblast cells was developed almost 30 years ago, a functional in vitro model with primary trophoblasts forming a confluent monolayer is still lacking. STUDY DESIGN, SAMPLES/MATERIALS, METHODS Human term cytotrophoblasts were isolated by enzymatic digestion and density gradient separation. The purity of the primary cells was evaluated by flow cytometry using the trophoblast-specific marker cytokeratin 7, and vimentin as an indicator for potentially contaminating cells. We screened different coating matrices for high cell viability to optimize the growth conditions for primary trophoblasts on polycarbonate inserts. During culture, cell confluency and polarity were monitored daily by determining transepithelial electrical resistance (TEER) and permeability properties of fluorescent dyes. The time course of syncytia-related gene expression and hCG secretion during syncytialization was assessed by quantitative RT-PCR and enzyme-linked immunosorbent assay, respectively. The morphology of cultured trophoblasts after 5 days was determined by light microscopy, scanning electron microscopy (SEM) and transmission electron microscopy (TEM). Membrane markers were visualized using confocal microscopy. Additionally, glucose transport studies were performed on the polarized trophoblasts in the same system. MAIN RESULTS AND THE ROLE OF CHANCE During the 5-day culture, the highly pure trophoblasts were cultured on inserts coated with reconstituted basement membrane matrix. They exhibited a confluent polarized monolayer, with a modest TEER and a size-dependent apparent permeability coefficient (Papp) to fluorescently labeled compounds (MW ∼400-70 000 Da). The syncytialization progress was characterized by gradually increasing mRNA levels of fusogen genes and elevated hCG secretion. SEM analyses confirmed a confluent trophoblast layer with numerous microvilli, and TEM revealed a monolayer with tight junctions. Immunocytochemistry on the confluent trophoblasts showed positivity for the cell-cell adhesion molecule E-cadherin, the tight junction protein 1 (ZO-1) and the membrane proteins ATP-binding cassette transporter A1 (ABCA1) and glucose transporter 1 (GLUT1). Applying this model to study the bidirectional transport of a non-metabolizable glucose derivative indicated a carrier-mediated placental glucose transport mechanism with asymmetric kinetics. LIMITATIONS, REASONS FOR CAUTION The current study focused only on primary trophoblast cells isolated from healthy placentas delivered at term. It remains to be evaluated whether this system can be extended to pathological trophoblasts isolated from diverse gestational diseases.
WIDER IMPLICATIONS OF THE FINDINGS These findings confirmed the physiological properties of the newly developed human trophoblast barrier, which can be applied to study the exchange of endobiotics and xenobiotics between the maternal and fetal compartments, as well as intracellular metabolism, paracellular contributions and regulatory mechanisms influencing the vectorial transport of molecules. LARGE-SCALE DATA Not applicable. STUDY FUNDING AND COMPETING INTERESTS This study was supported by the Swiss National Center of Competence in Research, NCCR TransCure, University of Bern, Switzerland, and the Swiss National Science Foundation (grant no. 310030_149958, C.A.). All authors declare that their participation in the study did not involve actual or potential conflicts of interest.
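For context, the apparent permeability coefficient (Papp) mentioned in the abstract above is conventionally computed for transwell systems as Papp = (dQ/dt) / (A · C0). The snippet below applies this standard formula to invented numbers, purely as a worked illustration; it is not data from the study.

```python
# Standard apparent permeability calculation for a transwell insert (illustrative values).
def apparent_permeability(dq_dt, area_cm2, c0):
    """Papp = (dQ/dt) / (A * C0), returned in cm/s.
    dq_dt: amount transported per second (e.g. nmol/s)
    area_cm2: insert membrane area (cm^2)
    c0: initial donor concentration in matching units (e.g. nmol/cm^3)."""
    return dq_dt / (area_cm2 * c0)

# Invented example values:
papp = apparent_permeability(dq_dt=2.0e-4, area_cm2=1.12, c0=100.0)
print(f"Papp = {papp:.2e} cm/s")   # ~1.79e-06 cm/s for these made-up numbers
```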
Abstract:
When studying genotype × environment interaction in multi-environment trials, plant breeders and geneticists often consider one of the effects, environments or genotypes, to be fixed and the other to be random. However, there are two main formulations for variance component estimation for the mixed model situation, referred to as the unconstrained-parameters (UP) and constrained-parameters (CP) formulations. These formulations give different estimates of genetic correlation and heritability as well as different tests of significance for the random effects factor. The definition of main effects and interactions and the consequences of such definitions should be clearly understood, and the selected formulation should be consistent for both fixed and random effects. A discussion of the practical outcomes of using the two formulations in the analysis of balanced data from multi-environment trials is presented. It is recommended that the CP formulation be used because of the meaning of its parameters and the corresponding variance components. When managed (fixed) environments are considered, users will have more confidence in prediction for them but will not be overconfident in prediction in the target (random) environments. Genetic gain (predicted response to selection in the target environments from the managed environments) is independent of formulation.
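For orientation, the entry-mean heritability that both formulations feed into can be written in the usual way for a balanced trial with e environments and r replicates; which variance components enter it, and how the genotype-by-environment component is defined, is exactly where the UP and CP formulations differ. The expression below is the standard textbook form, not a result specific to this paper.

```latex
% Entry-mean heritability across e environments with r replicates; the definitions of
% \sigma^2_G and \sigma^2_{GE} depend on whether the UP or CP formulation is used.
H^2 = \frac{\sigma^2_G}{\sigma^2_G + \frac{\sigma^2_{GE}}{e} + \frac{\sigma^2_{\varepsilon}}{re}}
```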