72 results for Mixed proportional hazards model
Abstract:
An idealized equilibrium model for the undisturbed partly cloudy boundary layer (BL) is used as a framework to explore the coupling of the energy, water, and carbon cycles over land in midlatitudes, and to show the sensitivity to the clear‐sky shortwave flux, the midtropospheric temperature, moisture, CO2, and subsidence. The changes in the surface fluxes, the BL equilibrium, and cloud cover are shown for a warmer, doubled CO2 climate. Reduced stomatal conductance in a simple vegetation model amplifies the background 2 K ocean temperature rise to an (unrealistically large) 6 K increase in near‐surface temperature over land, with a corresponding drop of near‐surface relative humidity of about 19%, and a rise of cloud base of about 70 hPa. Cloud changes depend strongly on changes of mean subsidence, but evaporative fraction (EF) decreases. EF is almost uniquely related to mixed layer (ML) depth, independent of the background forcing climate. This suggests that it might be possible to infer EF for heterogeneous landscapes from ML depth. The asymmetry of increased evaporation over the oceans and reduced transpiration over land increases in a warmer, doubled CO2 climate.
Abstract:
A novel analytical model for mixed-phase, unblocked and unseeded orographic precipitation with embedded convection is developed and evaluated. The model takes an idealised background flow and terrain geometry, and calculates the area-averaged precipitation rate and other microphysical quantities. The results provide insight into key physical processes, including cloud condensation, vapour deposition, evaporation, sublimation, as well as precipitation formation and sedimentation (fallout). To account for embedded convection in nominally stratiform clouds, diagnostics for purely convective and purely stratiform clouds are calculated independently and combined using weighting functions based on relevant dynamical and microphysical time scales. An in-depth description of the model is presented, as well as a quantitative assessment of its performance against idealised, convection-permitting numerical simulations with a sophisticated microphysics parameterisation. The model is found to accurately reproduce the simulation diagnostics over most of the parameter space considered.
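The convective–stratiform blending step can be illustrated with a minimal sketch. The weighting function below is a hypothetical smooth blend based on two time scales, not the paper's actual formulation; the names and the functional form are placeholders:

```python
def blended_precip(P_conv, P_strat, tau_dyn, tau_micro):
    """Hypothetical weighting of purely convective and purely stratiform
    precipitation diagnostics: convection receives more weight when the
    microphysical time scale is long relative to the dynamical one."""
    w = tau_micro / (tau_dyn + tau_micro)  # w -> 1 favours the convective limit
    return w * P_conv + (1.0 - w) * P_strat
```

With equal time scales the two limits are simply averaged; as one time scale dominates, the blend collapses smoothly to the corresponding pure diagnostic.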
Abstract:
In order to examine metacognitive accuracy (i.e., the relationship between metacognitive judgment and memory performance), researchers often rely on by-participant analysis, in which metacognitive accuracy (e.g., resolution, as measured by the gamma coefficient or signal detection measures) is computed for each participant and the computed values are entered into group-level statistical tests such as the t-test. In the current work, we argue that by-participant analysis, regardless of the accuracy measure used, produces a substantial inflation of Type-1 error rates when a random item effect is present. A mixed-effects model is proposed as a way to effectively address the issue, and our simulation studies examining Type-1 error rates indeed showed superior performance of the mixed-effects model analysis as compared to the conventional by-participant analysis. We also present real-data applications to illustrate further strengths of the mixed-effects model analysis. Our findings imply that caution is needed when using by-participant analysis, and we recommend the mixed-effects model analysis instead.
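The inflation mechanism can be sketched in a small simulation (a hedged illustration, not the authors' code): when a random item effect drives both judgments and memory, a by-participant resolution statistic tested against zero looks "significant" even though no participant-level metacognitive signal was built in. Pearson correlation stands in for the gamma coefficient here for simplicity:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subj, n_items = 30, 40
# Items vary in memorability; this random item effect drives BOTH the
# metacognitive judgments and the memory outcomes, but no genuine
# participant-level resolution is built in.
item_eff = rng.normal(0.0, 1.0, n_items)
resolution = []
for _ in range(n_subj):
    judgment = item_eff + rng.normal(0.0, 1.0, n_items)
    memory = item_eff + rng.normal(0.0, 1.0, n_items)
    resolution.append(stats.pearsonr(judgment, memory)[0])
# Conventional by-participant analysis: one-sample t-test of the
# per-participant resolution values against zero.
t_val, p_val = stats.ttest_1samp(resolution, 0.0)
```

With the seed above, the p value falls well below .05, an item-driven effect that a mixed-effects model with a random item term would absorb.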
Abstract:
Well-resolved air–sea interactions are simulated in a new ocean mixed-layer coupled configuration of the Met Office Unified Model (MetUM-GOML), comprising the MetUM coupled to the Multi-Column K Profile Parameterization ocean (MC-KPP). This is the first globally coupled system that provides a vertically resolved, high near-surface-resolution ocean at computational cost comparable to running in atmosphere-only mode. As well as being computationally inexpensive, this modelling framework is adaptable – the independent MC-KPP columns can be applied selectively in space and time – and controllable – by using temperature and salinity corrections the model can be constrained to any ocean state. The framework provides a powerful research tool for process-based studies of the impact of air–sea interactions in the global climate system. MetUM simulations have been performed which separate the impact of introducing interannual variability in sea surface temperatures (SSTs) from the impact of having atmosphere–ocean feedbacks. The representation of key aspects of tropical and extratropical variability is used to assess the performance of these simulations. Coupling the MetUM to MC-KPP is shown, for example, to reduce tropical precipitation biases, improve the propagation of, and spectral power associated with, the Madden–Julian Oscillation, and produce closer-to-observed patterns of springtime blocking activity over the Euro-Atlantic region.
Abstract:
Genome-wide association studies (GWAS) have been widely used in the genetic dissection of complex traits. However, common methods are all based on a fixed-SNP-effect mixed linear model (MLM) and single-marker analysis, such as efficient mixed model analysis (EMMA). These methods require Bonferroni correction for multiple tests, which is often too conservative when the number of markers is extremely large. To address this concern, we proposed a random-SNP-effect MLM (RMLM) and a multi-locus RMLM (MRMLM) for GWAS. The RMLM simply treats the SNP effect as random, but it allows a modified Bonferroni correction to be used to calculate the threshold p value for significance tests. The MRMLM is a multi-locus model including markers selected from the RMLM method with a less stringent selection criterion. Due to the multi-locus nature, no multiple-test correction is needed. Simulation studies show that the MRMLM is more powerful in QTN detection and more accurate in QTN effect estimation than the RMLM, which in turn is more powerful and accurate than the EMMA. To demonstrate the new methods, we analyzed six flowering-time-related traits in Arabidopsis thaliana and detected more genes than previously reported using the EMMA. Therefore, the MRMLM provides an alternative for multi-locus GWAS.
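For context, the standard Bonferroni baseline that the modified correction relaxes is simply the significance level divided by the number of tests; the abstract does not give the modified formula, so only the conventional threshold is sketched here:

```python
def bonferroni_threshold(alpha=0.05, n_markers=10**6):
    """Per-test significance threshold under standard Bonferroni correction.
    At GWAS scale (~1e6 markers) this yields the familiar 5e-8 cut-off,
    which becomes very stringent as the marker count grows."""
    return alpha / n_markers
```

The RMLM's less stringent threshold would replace this constant divisor with a milder correction, at the cost of treating SNP effects as random.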
Abstract:
The role of the local atmospheric forcing on the ocean mixed layer depth (MLD) over the global oceans is studied using ocean reanalysis data products and a single-column ocean model coupled to an atmospheric general circulation model. The focus of this study is on how the annual mean and the seasonal cycle of the MLD relate to various forcing characteristics in different parts of the world's ocean, and how anomalous variations in the monthly mean MLD relate to anomalous atmospheric forcings. By analysing both ocean reanalysis data and the single-column ocean model, regions with different dominant forcings and different mean and variability characteristics of the MLD can be identified. Many of the global oceans' MLD characteristics appear to be directly linked to different atmospheric forcing characteristics at different locations. Here, heating and wind stress are identified as the main drivers; in some, mostly coastal, regions the atmospheric salinity forcing also contributes. The annual mean MLD is more closely related to the annual mean wind stress, and the MLD seasonality is more closely related to the seasonality in heating. The single-column ocean model, however, also shows that the MLD characteristics over most global ocean regions, and in particular the tropics and subtropics, cannot be maintained by local atmospheric forcings alone, but are also a result of ocean dynamics that are not simulated in a single-column ocean model. Thus, lateral ocean dynamics are essential for correctly simulating the observed MLD.
Abstract:
The constant-density Charney model describes the simplest unstable basic state with a planetary-vorticity gradient, which is uniform and positive, and baroclinicity that is manifest as a negative contribution to the potential-vorticity (PV) gradient at the ground and positive vertical wind shear. Together, these ingredients satisfy the necessary conditions for baroclinic instability. In Part I it was shown how baroclinic growth on a general zonal basic state can be viewed as the interaction of pairs of ‘counter-propagating Rossby waves’ (CRWs) that can be constructed from a growing normal mode and its decaying complex conjugate. In this paper the normal-mode solutions for the Charney model are studied from the CRW perspective.
Clear parallels can be drawn between the most unstable modes of the Charney model and the Eady model, in which the CRWs can be derived independently of the normal modes. However, the dispersion curves for the two models are very different; the Eady model has a short-wave cut-off, while the Charney model is unstable at short wavelengths. Beyond its maximum growth rate the Charney model has a neutral point at finite wavelength (r=1). Thereafter follows a succession of unstable branches, each with weaker growth than the last, separated by neutral points at integer r—the so-called ‘Green branches’. A separate branch of westward-propagating neutral modes also originates from each neutral point. By approximating the lower CRW as a Rossby edge wave and the upper CRW structure as a single PV peak with a spread proportional to the Rossby scale height, the main features of the ‘Charney branch’ (0
Abstract:
Models of the dynamics of nitrogen in soil (soil-N) can be used to aid the fertilizer management of a crop. The predictions of soil-N models can be validated by comparison with observed data. Validation generally involves calculating non-spatial statistics of the observations and predictions, such as their means, their mean squared-difference, and their correlation. However, when the model predictions are spatially distributed across a landscape the model requires validation with spatial statistics. There are three reasons for this: (i) the model may be more or less successful at reproducing the variance of the observations at different spatial scales; (ii) the correlation of the predictions with the observations may be different at different spatial scales; (iii) the spatial pattern of model error may be informative. In this study we used a model, parameterized with spatially variable input information about the soil, to predict the mineral-N content of soil in an arable field, and compared the results with observed data. We validated the performance of the N model spatially with a linear mixed model of the observations and model predictions, estimated by residual maximum likelihood. This novel approach allowed us to describe the joint variation of the observations and predictions as: (i) independent random variation that occurred at a fine spatial scale; (ii) correlated random variation that occurred at a coarse spatial scale; (iii) systematic variation associated with a spatial trend. The linear mixed model revealed that, in general, the performance of the N model changed depending on the spatial scale of interest. At the scales associated with random variation, the N model underestimated the variance of the observations, and the predictions were correlated poorly with the observations. At the scale of the trend, the predictions and observations shared a common surface. 
The spatial pattern of the error of the N model suggested that the observations were affected by the local soil condition, but this was not accounted for by the N model. In summary, the N model would be well suited to field-scale management of soil nitrogen, but poorly suited to management at finer spatial scales. This information was not apparent from a non-spatial validation.
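The conventional non-spatial validation statistics that the spatial approach moves beyond can be sketched directly (a minimal illustration, not the study's code):

```python
import numpy as np

def nonspatial_validation(obs, pred):
    """Means, mean squared difference, and correlation of observed versus
    predicted values: the usual non-spatial validation statistics, which
    carry no information about the spatial scale of model error."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    return {
        "mean_obs": obs.mean(),
        "mean_pred": pred.mean(),
        "msd": np.mean((obs - pred) ** 2),      # mean squared difference
        "corr": np.corrcoef(obs, pred)[0, 1],   # Pearson correlation
    }
```

A spatial validation would instead partition the joint variation of `obs` and `pred` by scale, as the linear mixed model estimated by residual maximum likelihood does in the study.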
Abstract:
A one-dimensional water column model using the Mellor and Yamada level 2.5 parameterization of vertical turbulent fluxes is presented. The model equations are discretized with a mixed finite element scheme. Details of the finite element discrete equations are given and adaptive mesh refinement strategies are presented. The refinement criterion is an "a posteriori" error estimator based on stratification, shear and distance to surface. The model performances are assessed by studying the stress driven penetration of a turbulent layer into a stratified fluid. This example illustrates the ability of the presented model to follow some internal structures of the flow and paves the way for truly generalized vertical coordinates.
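A refinement indicator in the spirit described (stratification, shear, and distance to the surface) might look like the sketch below; the weights and functional form are illustrative placeholders, not the paper's estimator:

```python
def refinement_indicator(dT_dz, dU_dz, depth,
                         w_strat=1.0, w_shear=1.0, w_surf=1.0):
    """Hypothetical a-posteriori refinement indicator for a 1-D water-column
    mesh: large where stratification (dT_dz) or shear (dU_dz) is strong,
    or near the surface. depth is positive downward (m); the weights are
    illustrative placeholders."""
    return (w_strat * abs(dT_dz)
            + w_shear * abs(dU_dz)
            + w_surf / (1.0 + depth))
```

Cells whose indicator exceeds a chosen threshold would be flagged for refinement, concentrating resolution where the flow structure demands it.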
Abstract:
Ice clouds are an important yet largely unvalidated component of weather forecasting and climate models, but radar offers the potential to provide the necessary data to evaluate them. First, coordinated aircraft in situ measurements and scans by a 3-GHz radar are presented, demonstrating that, for stratiform midlatitude ice clouds, radar reflectivity in the Rayleigh-scattering regime may be reliably calculated from aircraft size spectra if the "Brown and Francis" mass–size relationship is used. The comparisons spanned radar reflectivity values from −15 to +20 dBZ, ice water contents (IWCs) from 0.01 to 0.4 g m−3, and median volumetric diameters between 0.2 and 3 mm. In mixed-phase conditions the agreement is much poorer because of the higher-density ice particles present. A large midlatitude aircraft dataset is then used to derive expressions that relate radar reflectivity and temperature to ice water content and visible extinction coefficient. The analysis is an advance over previous work in several ways: the retrievals vary smoothly with both input parameters, different relationships are derived for the common radar frequencies of 3, 35, and 94 GHz, and the problem of retrieving the long-term mean and the horizontal variance of ice cloud parameters is considered separately. It is shown that the dependence on temperature arises because of the temperature dependence of the number concentration "intercept parameter" rather than of mean particle size. A comparison is presented of ice water content derived from scanning 3-GHz radar with the values held in the Met Office mesoscale forecast model, for eight precipitating cases spanning 39 h over southern England. It is found that the model predicted mean IWC to within 10% of the observations at temperatures between −30° and −10°C, but tended to underestimate it by around a factor of 2 at colder temperatures.
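Deriving bulk quantities from aircraft size spectra with a power-law mass–size relation m(D) = aD^b follows the pattern below; the coefficients shown are placeholders, not the Brown and Francis values:

```python
import numpy as np

def iwc_from_spectrum(D, N, dD, a=0.02, b=1.9):
    """Ice water content (kg m^-3) summed from a binned size spectrum.
    D:  particle maximum dimensions at bin centres (m)
    N:  number density distribution in each bin (m^-4)
    dD: bin widths (m)
    a, b: placeholder power-law mass-size coefficients (SI units)."""
    D = np.asarray(D, dtype=float)
    mass = a * D ** b  # particle mass in each bin (kg)
    return float(np.sum(mass * np.asarray(N, float) * np.asarray(dD, float)))
```

Higher moments of the same spectrum (weighted by an equivalent-sphere sixth power rather than mass) yield the Rayleigh-regime reflectivity that the paper compares against the radar.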
Abstract:
Model catalysts of Pd nanoparticles and films on TiO2(110) were fabricated by metal vapour deposition (MVD). Molecular beam measurements show that the particles are active for CO adsorption, with a global sticking probability of 0.25, but that they are deactivated by annealing above 600 K, an effect indicative of strong metal–support interaction (SMSI). The Pd nanoparticles are single crystals oriented with their (111) plane parallel to the surface plane of the titania. Analysis of the surface by atomic-resolution STM shows that new structures have formed at the surface of the Pd nanoparticles and films after annealing above 800 K. There are only two structures: a zigzag arrangement and a much more complex "pinwheel" structure. The former has a unit cell containing 7 atoms, and the latter a bigger unit cell containing 25 atoms. These new structures are due to an overlayer of titania that has appeared on the surface of the Pd nanoparticles after annealing, and it is proposed that the surface layer that causes the SMSI effect is a mixed alloy of Pd and Ti, with only two discrete ratios of atoms: Pd/Ti of 1:1 (pinwheel) and 1:2 (zigzag). We propose that it is these structures that cause the SMSI effect.
Abstract:
In this study, the processes affecting sea surface temperature variability over the 1992–98 period, encompassing the very strong 1997–98 El Niño event, are analyzed. A tropical Pacific Ocean general circulation model, forced by a combination of weekly ERS1–2 and TAO wind stresses, and climatological heat and freshwater fluxes, is first validated against observations. The model reproduces the main features of the tropical Pacific mean state, despite a weaker than observed thermal stratification, a 0.1 m s−1 too strong (weak) South Equatorial Current (North Equatorial Countercurrent), and a slight underestimate of the Equatorial Undercurrent. Good agreement is found between the model dynamic height and TOPEX/Poseidon sea level variability, with correlation/rms differences of 0.80/4.7 cm on average in the 10°N–10°S band. The model sea surface temperature variability is a bit weak, but reproduces the main features of interannual variability during the 1992–98 period. The model compares well with the TAO current variability at the equator, with correlation/rms differences of 0.81/0.23 m s−1 for surface currents. The model therefore reproduces well the observed interannual variability, with wind stress as the only interannually varying forcing. This good agreement with observations provides confidence in the comprehensive three-dimensional circulation and thermal structure of the model. A close examination of mixed layer heat balance is thus undertaken, contrasting the mean seasonal cycle of the 1993–96 period and the 1997–98 El Niño. In the eastern Pacific, cooling by exchanges with the subsurface (vertical advection, mixing, and entrainment), the atmospheric forcing, and the eddies (mainly the tropical instability waves) are the three main contributors to the heat budget. In the central–western Pacific, the zonal advection by low-frequency currents becomes the main contributor. 
Westerly wind bursts (in December 1996 and March and June 1997) were found to play a decisive role in the onset of the 1997–98 El Niño. They contributed to the early warming in the eastern Pacific because the downwelling Kelvin waves that they excited diminished subsurface cooling there. But it is mainly through eastward advection of the warm pool that they generated temperature anomalies in the central Pacific. The end of El Niño can be linked to the large-scale easterly anomalies that developed in the western Pacific and spread eastward, from the end of 1997 onward. In the far-western Pacific, because of the shallower than normal thermocline, these easterlies cooled the SST by vertical processes. In the central Pacific, easterlies pushed the warm pool back to the west. In the east, they led to a shallower thermocline, which ultimately allowed subsurface cooling to resume and to quickly cool the surface layer.
Abstract:
Canopy interception of incident precipitation is a critical component of the forest water balance during each of the four seasons. Models have been developed to predict precipitation interception from standard meteorological variables because of acknowledged difficulty in extrapolating direct measurements of interception loss from forest to forest. No known study has compared and validated canopy interception models for a leafless deciduous forest stand in the eastern United States. Interception measurements from an experimental plot in a leafless deciduous forest in northeastern Maryland (39°42'N, 75°5'W) for 11 rainstorms in winter and early spring 2004/05 were compared to predictions from three models. The Mulder model maintains a moist canopy between storms. The Gash model requires few input variables and is formulated for a sparse canopy. The WiMo model optimizes the canopy storage capacity for the maximum wind speed during each storm. All models showed marked underestimates and overestimates for individual storms when the measured ratio of interception to gross precipitation was far more or less, respectively, than the specified fraction of canopy cover. The models predicted the percentage of total gross precipitation (PG) intercepted to within the probable standard error (8.1%) of the measured value: the Mulder model overestimated the measured value by 0.1% of PG; the WiMo model underestimated by 0.6% of PG; and the Gash model underestimated by 1.1% of PG. The WiMo model’s advantage over the Gash model indicates that the canopy storage capacity increases logarithmically with the maximum wind speed. This study has demonstrated that dormant-season precipitation interception in a leafless deciduous forest may be satisfactorily predicted by existing canopy interception models.