888 results for "Two-stage stochastic model"


Relevance: 100.00%

Abstract:

Advances in computational biology have made simultaneous monitoring of thousands of features possible. High-throughput technologies not only bring a much richer information context in which to study various aspects of gene function, but they also present the challenge of analyzing data with a large number of covariates and few samples. As an integral part of machine learning, classification of samples into two or more categories is almost always of interest to scientists. In this paper, we address the question of classification in this setting by extending partial least squares (PLS), a popular dimension-reduction tool in chemometrics, to the generalized linear regression setting, building on the Iteratively ReWeighted Partial Least Squares (IRWPLS) approach of Marx (1996). We compare our results with two-stage PLS (Nguyen and Rocke, 2002a, 2002b) and other classifiers. We show that by phrasing the problem in a generalized linear model setting and by applying bias correction to the likelihood to avoid (quasi-)separation, we often obtain lower classification error rates.
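
As a rough illustration of the approach, not the authors' implementation, the sketch below runs iteratively reweighted least squares for a logistic model but replaces the weighted least-squares update with a PLS regression on the working response. The unweighted PLS step is a simplification of Marx's weighted variant, and the component count and iteration limit are illustrative assumptions.

```python
# Minimal IRWPLS-style sketch for two-class data (y in {0, 1}).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def irwpls_fit(X, y, n_components=3, n_iter=10):
    eta = np.zeros(len(y))                        # linear predictor
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-eta))           # logistic mean
        w = np.clip(mu * (1.0 - mu), 1e-6, None)  # IRLS weights
        z = eta + (y - mu) / w                    # working response
        pls = PLSRegression(n_components=n_components)
        pls.fit(X, z)                             # PLS step replaces WLS
        eta = pls.predict(X).ravel()
    return pls

# Usage: p = 1 / (1 + np.exp(-irwpls_fit(Xtr, ytr).predict(Xte).ravel()))
```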

Relevance: 100.00%

Abstract:

Multi-site time series studies of air pollution and mortality and morbidity have figured prominently in the literature as comprehensive approaches for estimating acute effects of air pollution on health. Hierarchical models are generally used to combine site-specific information and to estimate pooled air pollution effects, taking into account both within-site statistical uncertainty and across-site heterogeneity. Within a site, characteristics of time series data on air pollution and health (small pollution effects, missing data, highly correlated predictors, non-linear confounding, etc.) make modelling all sources of uncertainty challenging. One potential consequence is underestimation of the statistical variance of the site-specific effects to be combined. In this paper we investigate the impact of variance underestimation on the pooled relative rate estimate. We focus on two-stage normal-normal hierarchical models and on underestimation of the statistical variance at the first stage. Through mathematical considerations and simulation studies, we found that variance underestimation does not affect the pooled estimate substantially. However, some sensitivity of the pooled estimate to variance underestimation is observed when the number of sites is small and the underestimation is severe. These simulation results are applicable to any two-stage normal-normal hierarchical model for combining site-specific results, and they can be easily extended to more general hierarchical formulations. We also examined the impact of variance underestimation on the national average relative rate estimate from the National Morbidity Mortality Air Pollution Study, and we found that variance underestimation of as much as 40% has little effect on the national average.
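
For concreteness, here is a minimal sketch of the second-stage pooling at issue, assuming a method-of-moments (DerSimonian-Laird) estimate of the between-site variance, which may differ from the estimator used in the paper. Deflating the first-stage variances mimics the underestimation being studied.

```python
# Two-stage normal-normal pooling of site-specific estimates beta_i with
# first-stage variances v_i; tau2 is the between-site variance.
import numpy as np

def pool(beta, v):
    w = 1.0 / v                                   # fixed-effect weights
    q = np.sum(w * (beta - np.average(beta, weights=w)) ** 2)
    k = len(beta)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_star = 1.0 / (v + tau2)                     # total-variance weights
    return np.average(beta, weights=w_star), 1.0 / w_star.sum()

rng = np.random.default_rng(0)
beta = rng.normal(0.5, 0.2, size=20)              # toy site-specific effects
v = np.full(20, 0.05)                             # true first-stage variances
print(pool(beta, v))                              # correctly stated variances
print(pool(beta, 0.6 * v))                        # 40% underestimated variances
```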

Relevance: 100.00%

Abstract:

In this paper, we develop Bayesian hierarchical distributed lag models for estimating associations between daily variations in summer ozone levels and daily variations in cardiovascular and respiratory (CVDRESP) mortality counts for 19 large U.S. cities included in the National Morbidity Mortality Air Pollution Study (NMMAPS) for the period 1987-1994. At the first stage, we define a semi-parametric distributed lag Poisson regression model to estimate city-specific relative rates of CVDRESP mortality associated with short-term exposure to summer ozone. At the second stage, we specify a class of distributions for the true city-specific relative rates to estimate an overall effect, taking into account the variability within and across cities. We perform the calculations with respect to several random effects distributions (normal, Student-t, and mixture of normals), thus relaxing the common assumption of a two-stage normal-normal hierarchical model. We assess the sensitivity of the results to: 1) the lag structure for ozone exposure; 2) the degree of adjustment for long-term trends; 3) the inclusion of other pollutants in the model; 4) heat waves; 5) the random effects distributions; and 6) the prior hyperparameters. On average across cities, we found that a 10 ppb increase in summer ozone level on every day of the previous week is associated with a 1.25% increase in CVDRESP mortality (95% posterior region: 0.47, 2.03). The relative rate estimates are also positive and statistically significant at lags 0, 1, and 2. We found that associations between summer ozone and CVDRESP mortality are sensitive to the confounding adjustment for PM_10, but are robust to: 1) the adjustment for long-term trends and for other gaseous pollutants (NO_2, SO_2, and CO); 2) the distributional assumptions at the second stage of the hierarchical model; and 3) the prior distributions on all unknown parameters. Bayesian hierarchical distributed lag models and their application to the NMMAPS data allow us to estimate an acute health effect associated with exposure to ambient air pollution over the last few days, on average across several locations. The application of these methods and the systematic assessment of the sensitivity of the findings to model assumptions provide important epidemiological evidence for future air quality regulations.
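
A toy sketch of the first-stage model follows, assuming an unconstrained distributed lag over the previous week and a crude harmonic adjustment for seasonality; NMMAPS analyses use richer smooth functions of time and additional confounders, and all variable names here are illustrative.

```python
# Poisson regression of daily mortality on the previous week's ozone.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_dlag(deaths, ozone, max_lag=6):
    X = pd.DataFrame({f"o3_lag{l}": pd.Series(ozone).shift(l)
                      for l in range(max_lag + 1)})
    t = np.arange(len(X))
    X["sin_t"] = np.sin(2 * np.pi * t / 365.25)   # crude seasonal control
    X["cos_t"] = np.cos(2 * np.pi * t / 365.25)
    X = sm.add_constant(X).dropna()
    fit = sm.GLM(np.asarray(deaths)[X.index], X,
                 family=sm.families.Poisson()).fit()
    beta_sum = fit.params[[f"o3_lag{l}" for l in range(max_lag + 1)]].sum()
    # Percent increase for a 10 ppb rise on every day of the previous week
    return 100 * (np.exp(10 * beta_sum) - 1)
```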

Relevance: 100.00%

Abstract:

RATIONALE AND OBJECTIVES: A feasibility study on measuring kidney perfusion with a contrast-free magnetic resonance (MR) imaging technique is presented. MATERIALS AND METHODS: A flow-sensitive alternating inversion recovery (FAIR) prepared true fast imaging with steady-state precession (TrueFISP) arterial spin labeling sequence was used on a 3.0-T MR scanner. The basis for quantification is a two-compartment exchange model proposed by Parkes, which corrects for several assumptions made in standard single-compartment models. RESULTS: Eleven healthy volunteers (mean age, 42.3 years; range, 24-55 years) were examined. The calculated mean renal blood flow values for the exchange model (109 +/- 5 [medulla] and 245 +/- 11 [cortex] ml/min per 100 g) are in good agreement with the literature. Most importantly, the two-compartment exchange model exhibits a stabilizing effect on the evaluation of perfusion values when the finite permeability of the vessel wall and the venous outflow (fast solution) are considered: the corresponding values for the one-compartment standard model were 93 +/- 18 (medulla) and 208 +/- 37 (cortex) ml/min per 100 g. CONCLUSION: This improvement will increase the accuracy of contrast-free imaging of kidney perfusion in the treatment of renovascular disease.
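
For orientation only: FAIR ASL perfusion is commonly quantified with the single-compartment expression sketched below; the Parkes two-compartment model used in the study additionally accounts for finite vessel-wall permeability and venous outflow, which this sketch does not reproduce. All parameter values are illustrative assumptions.

```python
# Standard single-compartment (pulsed ASL) perfusion estimate.
import numpy as np

def perfusion_1c(dM, M0, TI, T1b=1.65, lam=0.9, alpha=0.95):
    """f in ml/min per 100 g from the control-label difference dM.

    M0: equilibrium magnetization; TI: inversion time (s);
    T1b: T1 of arterial blood at 3 T (s); lam: blood-tissue partition
    coefficient (ml/g); alpha: inversion (labeling) efficiency.
    """
    f_ml_per_g_s = lam * dM / (2 * alpha * M0 * TI * np.exp(-TI / T1b))
    return f_ml_per_g_s * 6000.0   # ml/g/s -> ml/min per 100 g
```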

Relevance: 100.00%

Abstract:

Metals price risk management is a key issue in metal markets because of the uncertainty of commodity price fluctuations, exchange rates, and interest rate changes, and because of the huge price risk faced by both metal producers and consumers. It is therefore a concern for all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and derivatives such as futures contracts, swaps, and options. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risks. Although derivatives have existed in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors, and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project describes basic derivatives and risk management strategies; it also discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives. In this part, the DerivaGem option-pricing software is applied to Asian call and put options on London Metal Exchange (LME) copper, because it is important to understand how Asian options are valued and to compare theoretical option values with observed market values. Predicting future trends in copper prices is essential to managing market price risk successfully, so the third part discusses econometric commodity models. Based on this literature review, the fourth part of the project reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, this part shows how LME copper prices can be explained by means of a simultaneous-equation structural model (estimated by two-stage least squares regression) connecting supply and demand variables. The estimated simultaneous econometric model for the copper industry is:

$$
\begin{cases}
Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, GDP_t^{1.7151}\, e^{0.0158\, IP_t} \\
Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{OIL,t}^{-0.1559}\, USDI_t^{1.2432}\, LIBOR_{t-6}^{-0.0561} \\
Q_t^D = Q_t^S
\end{cases}
$$

with the implied reduced-form price equation

$$
P_{t-1}^{CU} = e^{-2.5165}\, GDP_t^{2.1910}\, e^{0.0202\, IP_t}\, T_t^{-0.1799}\, P_{OIL,t}^{0.1991}\, USDI_t^{-1.5881}\, LIBOR_{t-6}^{0.0717}
$$

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively. P_{t-1} is the lagged price of copper, which is the focus of the analysis in this part. GDP_t is world gross domestic product at time t, representing aggregate economic activity; in addition, industrial production is considered, so global industrial production growth, denoted IP_t, is included in the model.
T_t is a time variable, which serves as a useful proxy for technological change. The price of oil at time t, denoted P_{OIL,t}, is a proxy for the cost of energy in producing copper. USDI_t is the U.S. dollar index at time t, an important variable for explaining copper supply and copper prices. Finally, LIBOR_{t-6} is the 6-month-lagged one-year London Interbank Offered Rate. Although the model can be applied to other base-metal industries, omitted exogenous variables, such as the price of a substitute or a combined substitute-price variable, have not been considered in this study. Based on this econometric model, and using a Monte Carlo simulation analysis, the probabilities that the monthly average copper prices in 2006 and 2007 will exceed a given option strike price are estimated. The final part evaluates risk management strategies, including option strategies, metal swaps, and simple options, in relation to the simulation results. The basic option strategies, such as bull spreads, bear spreads, and butterfly spreads, created using both call and put options for 2006 and 2007, are evaluated, and each risk management strategy for 2006 and 2007 is analyzed on the basis of the day's market data and the price prediction model. Applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
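
The two-stage least squares step can be sketched directly, here hand-rolled for the log-linear demand equation: the endogenous lagged price is first projected on the exogenous and supply-side variables, and the fitted values then replace it in the demand regression. The variable mapping in the usage comment is an assumption for illustration; the project's actual data handling is not reproduced.

```python
# Hand-rolled 2SLS: y = [X_endog, X_exog] @ beta + e, with instruments Z.
import numpy as np

def tsls(y, X_endog, X_exog, Z):
    W = np.column_stack([Z, X_exog, np.ones(len(y))])   # stage-1 regressors
    X_hat = W @ np.linalg.lstsq(W, X_endog, rcond=None)[0]
    X2 = np.column_stack([X_hat, X_exog, np.ones(len(y))])
    return np.linalg.lstsq(X2, y, rcond=None)[0]        # stage-2 OLS

# Demand in logs: ln Q_t = b0 + b1 ln P_{t-1} + b2 ln GDP_t + b3 IP_t,
# instrumenting ln P_{t-1} with the supply-side variables:
# beta = tsls(lnQ, lnP_lag1[:, None],
#             np.column_stack([lnGDP, IP]),
#             np.column_stack([T, lnPoil, lnUSDI, libor_lag6]))
```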

Relevance: 100.00%

Abstract:

Heroin prices reflect supply and demand, and, as in any other market, profits motivate participation. The intent of this research is to examine how changes in Afghan opium production, driven by political conflict and government policies, affect Europe's heroin market. Whether the Taliban remain in power or a new Afghan government is formed, the changes will affect the heroin market in Europe to some degree. In the heroin market, the degree of change depends on many socioeconomic forces, such as law enforcement, corruption, and proximity to Afghanistan. An econometric model that examines the magnitude of these socioeconomic effects has not previously been applied to the Afghan heroin trade. This research uses a two-stage least squares econometric model to estimate the supply and demand of heroin in 36 countries from the Middle East to Western Europe in 2008. Applying the two-stage least squares model to the heroin market in Europe, the study attempts to predict the socioeconomic consequences of Afghan opium production.

Relevance: 100.00%

Abstract:

An experimental setup was designed to visualize water percolation inside the porous transport layer (PTL) of proton exchange membrane (PEM) fuel cells and to identify the relevant characterization parameters. In parallel with the observation of the water movement, the injection pressure (the pressure required to transport water through the PTL) was measured. A new scaling for drainage in porous media is proposed, based on the ratio between the input and dissipated energies during percolation. A proportional dependency was obtained between this energy ratio and a non-dimensional time, and the relationship does not depend on the flow regime (stable displacement or capillary fingering). Experimental results show that the proportionality differs between PTL samples from different manufacturers; identifying it therefore provides a unique characterization of PTLs with respect to water transport. This scaling is relevant to porous media flows ranging far beyond fuel cells. In parallel with the experimental analysis, a two-dimensional numerical model was developed to simulate the phenomena observed in the experiments. The stochastic nature of the pore size distribution and the roles of PTL wettability and morphology in water transport were analyzed. The effect of a second porous layer placed between the porous transport layer and the catalyst layer, called the microporous layer (MPL), was also studied. The presence of the MPL was found to significantly reduce the water content in the PTL by enhancing finger formation. Moreover, the presence of small defects (cracks) within the MPL was shown to enhance water management. Finally, a corroboration of the numerical simulation was carried out: a three-dimensional version of the network model was developed mimicking the experimental conditions, and the morphology and wettability of the PTL were tuned to the experimental data using the new energy scaling of drainage in porous media. Once the fit between numerical and experimental data is obtained, the computational PTL structure can be used in simulations under conditions representative of fuel cell operation.
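
The flavor of the stochastic network model can be conveyed in a few lines: the sketch below runs 2D invasion percolation on a lattice of sites with random entry pressures standing in for the pore-size distribution. Water invades from the bottom edge, always breaching the accessible site with the lowest entry pressure. This is an illustrative toy, not the thesis model; the lattice size and pressure distribution are assumptions.

```python
# 2D invasion percolation: a toy stand-in for drainage through a PTL.
import heapq
import numpy as np

def invade(nx=60, ny=60, seed=0):
    rng = np.random.default_rng(seed)
    p_entry = rng.random((ny, nx))                 # random entry pressures
    invaded = np.zeros((ny, nx), dtype=bool)
    frontier = [(p_entry[0, j], 0, j) for j in range(nx)]  # bottom inlet
    heapq.heapify(frontier)
    while frontier:
        p, i, j = heapq.heappop(frontier)
        if invaded[i, j]:
            continue
        invaded[i, j] = True
        if i == ny - 1:                            # breakthrough at the top
            break
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ny and 0 <= b < nx and not invaded[a, b]:
                heapq.heappush(frontier, (p_entry[a, b], a, b))
    return invaded

print(invade().mean())   # water saturation at breakthrough (fingering)
```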

Relevance: 100.00%

Abstract:

A fundamental combustion model for spark-ignition engines is studied in this report. The model is implemented in SIMULINK to simulate engine outputs (mass fraction burned and in-cylinder pressure) under various engine operating conditions. The combustion model includes turbulent propagation and eddy burning processes based on the literature [1]. These processes are simulated with a zero-dimensional method, and the flame is assumed to be spherical. To predict pressure, temperature, and other in-cylinder variables, a two-zone thermodynamic model is used. The predictions of this model match engine test data well over a range of engine speeds, loads, spark ignition timings, and air-fuel mass ratios. The developed model is used to study cyclic variation and combustion stability under lean (or dilute) combustion conditions. Several variation sources are introduced into the combustion model to reproduce the engine behaviour observed in experimental data, and the relationship between combustion stability and the amount of introduced variation is analyzed at various levels of lean combustion.
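
As a point of reference, zero-dimensional burn curves of this kind are often summarized with the Wiebe function; the sketch below is that common stand-in, not the report's turbulent entrainment and eddy-burning model, and the parameter values are illustrative assumptions.

```python
# Wiebe-function mass fraction burned vs. crank angle (degrees).
import numpy as np

def wiebe_mfb(theta, theta0=-10.0, dtheta=50.0, a=5.0, m=2.0):
    """theta0: start of combustion; dtheta: burn duration; a, m: shape."""
    x = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1))

print(np.round(wiebe_mfb(np.linspace(-20, 60, 9)), 3))  # S-shaped profile
```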

Relevance: 100.00%

Abstract:

In all European Union countries, chemical residues are required to be routinely monitored in meat. Good farming and veterinary practice can prevent the contamination of meat with pharmaceutical substances, resulting in a low detection rate of drug residues under random sampling. An alternative approach is to target-monitor farms suspected of treating their animals with antimicrobials. The objective of this project was to assess, using a stochastic model, the efficiency of these two sampling strategies. The model integrated data on Swiss livestock as well as expert opinion and results from studies conducted in Switzerland. Risk-based sampling showed an increase in detection efficiency of up to 100%, depending on the prevalence of contaminated herds. Sensitivity analysis of this model showed the importance of the accuracy of prior assumptions when conducting risk-based sampling. The resources gained by changing from random to risk-based sampling should be transferred to improving the quality of prior information.
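
A toy Monte Carlo version of the comparison is sketched below: herds are contaminated with some prevalence, and risk-based sampling concentrates the sampling budget on the highest-suspicion herds, with suspicion only partially correlated with the truth. All numbers are illustrative assumptions, not the study's Swiss parameters.

```python
# Random vs. risk-based sampling: number of detected contaminated herds.
import numpy as np

rng = np.random.default_rng(1)

def detections(n_herds=10_000, n_samples=500, prev=0.02, risk_based=False):
    contaminated = rng.random(n_herds) < prev
    if risk_based:
        # Imperfect prior information: suspicion correlates with truth
        score = 0.5 * contaminated + rng.random(n_herds)
        sampled = np.argsort(-score)[:n_samples]
    else:
        sampled = rng.choice(n_herds, n_samples, replace=False)
    return int(contaminated[sampled].sum())

print(detections(risk_based=False), detections(risk_based=True))
```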

Relevance: 100.00%

Abstract:

In this article, we perform an extensive study of flavor observables in a two-Higgs-doublet model with generic Yukawa structure (of type III). This model is interesting not only because it is the decoupling limit of the minimal supersymmetric standard model, but also because of its rich flavor phenomenology, which allows for sizable effects not only in flavor-changing neutral-current (FCNC) processes but also in tauonic B decays. We examine the possible effects in flavor physics and constrain the model both from tree-level processes and from loop observables. The free parameters of the model are the heavy Higgs mass, tan β (the ratio of vacuum expectation values), and the "nonholomorphic" Yukawa couplings ε^f_ij (f = u, d, ℓ). In our analysis we constrain the elements ε^f_ij in various ways. In a first step, we give order-of-magnitude constraints on ε^f_ij from 't Hooft's naturalness criterion, finding that all ε^f_ij must be rather small unless the third generation is involved. In a second step, we constrain the Yukawa structure of the type-III two-Higgs-doublet model from tree-level FCNC processes (B_s,d → μ+μ−, K_L → μ+μ−, D̄0 → μ+μ−, ΔF = 2 processes, τ− → μ−μ+μ−, τ− → e−μ+μ− and μ− → e−e+e−) and observe that all flavor off-diagonal elements of these couplings, except ε^u_32,31 and ε^u_23,13, must be very small in order to satisfy the current experimental bounds. In a third step, we consider Higgs-mediated loop contributions to FCNC processes (b → s(d)γ, B_s,d mixing, K-K̄ mixing and μ → eγ), finding that ε^u_13 and ε^u_23 must also be very small, while the bounds on ε^u_31 and ε^u_32 are especially weak. Furthermore, considering the constraints from electric dipole moments, we obtain constraints on some of the parameters ε^{u,ℓ}_ij. Taking into account the constraints from FCNC processes, we study the size of possible effects in the tauonic B decays (B → τν, B → Dτν and B → D*τν) as well as in D_(s) → τν, D_(s) → μν, K(π) → eν, K(π) → μν and τ → K(π)ν, which are all sensitive to tree-level charged-Higgs exchange. Interestingly, the unconstrained ε^u_32,31 are just the elements which directly enter the branching ratios for B → τν, B → Dτν and B → D*τν. We show that they can explain the deviations from the SM predictions in these processes without fine-tuning; furthermore, B → τν, B → Dτν and B → D*τν can even be explained simultaneously. Finally, we give upper limits on the branching ratios of the lepton-flavor-violating neutral B meson decays (B_s,d → μe, B_s,d → τe and B_s,d → τμ) and correlate the radiative lepton decays (τ → μγ, τ → eγ and μ → eγ) to the corresponding neutral-current lepton decays (τ− → μ−μ+μ−, τ− → e−μ+μ− and μ− → e−e+e−). A detailed appendix contains all relevant information for the considered processes for general scalar-fermion-fermion couplings.
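
For orientation, the type-III Yukawa structure the abstract refers to can be written schematically as below; conventions for the doublets and the placement of the nonholomorphic couplings vary between papers, so this is a sketch of the structure rather than the authors' exact Lagrangian.

```latex
% Schematic type-III two-Higgs-doublet Yukawa sector: each fermion type
% couples to both doublets; the epsilon terms are the "nonholomorphic"
% couplings that vanish in type II (the tree-level MSSM).
\mathcal{L}_Y \supset
  -\bar{Q}_{fL}\left(Y^{d}_{fi}H_d + \epsilon^{d}_{fi}H_u\right)d_{iR}
  -\bar{Q}_{fL}\left(Y^{u}_{fi}H_u + \epsilon^{u}_{fi}H_d\right)u_{iR}
  -\bar{L}_{fL}\left(Y^{\ell}_{fi}H_d + \epsilon^{\ell}_{fi}H_u\right)\ell_{iR}
  + \mathrm{h.c.}
```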

Relevance: 100.00%

Abstract:

Localized short-echo-time ¹H MR spectra of human brain contain contributions from many low-molecular-weight metabolites as well as baseline contributions from macromolecules. Two approaches to modeling such spectra are compared, and the data acquisition sequence, optimized for reproducibility, is presented. Modeling relies on prior-knowledge constraints and linear combination of metabolite spectra. We investigated what can be gained by basis parameterization, i.e., describing basis spectra as sums of parametric lineshapes. The effects of basis composition and of adding experimentally measured macromolecular baselines were also investigated. Both fitting methods yielded quantitatively similar values, model deviations, error estimates, and reproducibility in the evaluation of 64 spectra of human gray and white matter from 40 subjects. Major advantages of parameterized basis functions are the possibilities to evaluate fitting parameters separately, to treat subgroup spectra as independent moieties, and to incorporate deviations from straightforward metabolite models. Most of the 22 basis metabolites used may provide meaningful data when comparing patient cohorts; in individual spectra, sums of closely related metabolites are often more meaningful. Inclusion of a macromolecular basis component leads to relatively small, but significantly different, tissue content values for most metabolites, and provides a means to quantitate baseline contributions that may contain crucial clinical information.
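
A minimal sketch of the linear-combination step is given below: the observed spectrum is fit as a non-negative mixture of metabolite basis spectra plus a measured macromolecular baseline. Real fitting adds lineshape, phase, and frequency-shift parameters that are omitted here; the toy data are illustrative.

```python
# Linear-combination modeling via non-negative least squares.
import numpy as np
from scipy.optimize import nnls

def lcm_fit(spectrum, basis, mm_baseline):
    A = np.column_stack([basis, mm_baseline])      # columns = components
    coefs, misfit = nnls(A, spectrum)
    return coefs[:-1], coefs[-1], misfit           # metabolites, MM, misfit

rng = np.random.default_rng(0)
basis = rng.random((512, 3))                       # 3 toy basis spectra
mm = rng.random(512)                               # toy macromolecule baseline
obs = basis @ np.array([1.0, 0.5, 2.0]) + 0.3 * mm
print(lcm_fit(obs + 0.01 * rng.standard_normal(512), basis, mm)[0])
```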

Relevance: 100.00%

Abstract:

Understanding Nanog's Role in Cancer Biology. Mark Daniel Badeaux; supervisory professor: Dean Tang, PhD. The cancer stem cell model holds that tumor heterogeneity and population-level immortality are driven by a subset of cells within the tumor, termed cancer stem cells. Like embryonic or somatic stem cells, cancer stem cells are believed to possess self-renewal capacity and the ability to give rise to many varieties of daughter cell. Because of cancer's implied connections to authentic stem cells, we screened a variety of prostate cancer cell lines and primary tumors to determine whether any notable 'stemness' genes were expressed in malignant growths. We found a promising lead in Nanog, a central figure in maintaining embryonic stem cell pluripotency, and through a variety of experiments in which we diminished Nanog expression, we found that it may play a significant role in prostate cancer development. We then created a transgenic mouse model in which we targeted Nanog expression to keratin 14-expressing cells in order to assess its potential contribution to tumorigenesis. We found a variety of developmental abnormalities and altered differentiation patterns in our model, but, much to our chagrin, we observed neither spontaneous tumor formation nor premalignant changes in these mice; instead, we surprisingly found that high levels of Nanog expression inhibited tumor formation in a two-stage skin carcinogenesis model. We also noted a depletion of skin stem cell populations, which underlies the wound-healing defect these mice also harbor. Gene expression analysis shows a reduction in c-Jun and Bmp5, two genes whose loss inhibits skin tumor development and reduces stem cell counts, respectively. As we further explored Nanog's activity in prostate cancer, it became apparent that the protein often was not expressed. Emboldened by the competing endogenous RNA (ceRNA) hypothesis, we identified the Nanog 3'UTR as a regulator of the tumor-suppressive microRNA 128a (miR-128a), whose authentic targets include known oncogenes such as Bmi1. Future work will involve discerning the instances in which Nanog mRNA is the biologically relevant molecule, as well as identifying additional mRNA species that may serve solely as a molecular sink for miR-128a.

Relevance: 100.00%

Abstract:

Tyrosine hydroxylase (TH), the initial and rate-limiting enzyme in the catecholamine biosynthetic pathway, is phosphorylated on multiple serine residues by multiple protein kinases. Although it has been demonstrated that many protein kinases are capable of phosphorylating and activating TH in vitro, it is less clear which protein kinases participate in the physiological regulation of catecholamine synthesis in situ. These studies were designed to determine whether protein kinase C (PK-C) plays such a regulatory role. Stimulation of intact bovine adrenal chromaffin cells with phorbol esters results in stimulation of catecholamine synthesis and in tyrosine hydroxylase phosphorylation and activation. These responses are both time- and concentration-dependent, and are specific to those phorbol ester analogues which activate PK-C. RP-HPLC analysis of TH tryptic phosphopeptides indicates that PK-C phosphorylates TH on three putative sites. One of these (peptide 6) is the same site phosphorylated by both cAMP-dependent protein kinase (PK-A) and calcium/calmodulin-dependent protein kinase (CaM-K). However, two of these sites (peptides 4 and 7) are unique and, to date, have not been shown to be phosphorylated by any other protein kinase. These peptides correspond to those which are phosphorylated with a slow time course in response to stimulation of chromaffin cells with the natural agonist acetylcholine. The activation of TH produced by PK-C is most closely correlated with the phosphorylation of peptide 6, but, as is evident from pH profiles of tyrosine hydroxylase activity, phosphorylation of peptides 4 and 7 affects the expression of the activation produced by phosphorylation of peptide 6. These data support a role for PK-C in the control of TH activity and suggest a two-stage model for the physiological regulation of catecholamine synthesis by phosphorylation in response to cholinergic stimulation: an initial fast response, which appears to be mediated by CaM-K, and a slower, sustained response, which appears to be mediated by PK-C. In addition, the multiple-site phosphorylation of TH provides a mechanism whereby catecholamine synthesis can be regulated by multiple protein kinases, allowing the convergence of multiple, diverse physiological and biochemical signals.

Relevance: 100.00%

Abstract:

Background: Non-alcoholic fatty liver disease (NAFLD) is the most common chronic liver disorder in industrialized countries, yet its pathophysiology is incompletely understood. Small-molecule metabolite screens may offer new insights into disease mechanisms and reveal new treatment targets. Methods: Discovery (N = 33) and replication (N = 66) sets of liver biopsies spanning the range from normal liver histology to non-alcoholic steatohepatitis (NASH) were ascertained, ensuring rapid freezing within 30 s in all patients. 252 metabolites were assessed using GC/MS. Replicated metabolites were evaluated in a murine high-fat-diet model of NAFLD. Results: In the two-stage metabolic screen, hydroquinone (HQ, p_combined = 3.0 × 10^-4) and nicotinic acid (NA, p_combined = 3.9 × 10^-9) were inversely correlated with histological NAFLD severity. A murine high-fat-diet model of NAFLD demonstrated a protective effect of these two substances: supplementation with 1% HQ reduced only liver steatosis, whereas 0.6% NA reduced both liver fat content and serum transaminase levels and induced a complex regulatory network of genes linked to NAFLD pathogenesis in a global expression pathway analysis. Human nutritional intake of NA equivalents was also consistent with a protective effect of NA against NASH progression. Conclusion: This first small-molecule screen of human liver tissue identified two replicated protective metabolites. Either the use of NA or the targeting of its regulatory pathways might be explored to treat or prevent human NAFLD.
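
The two-stage statistics can be sketched as follows, under the assumption of a rank correlation with histological severity in each set and a Fisher combination of the two p-values; the paper's exact test and combination rule are not specified here.

```python
# Discovery + replication screen for one metabolite.
import numpy as np
from scipy.stats import spearmanr, combine_pvalues

def two_stage_p(sev_disc, metab_disc, sev_repl, metab_repl):
    p1 = spearmanr(sev_disc, metab_disc).pvalue    # discovery (N = 33)
    p2 = spearmanr(sev_repl, metab_repl).pvalue    # replication (N = 66)
    return combine_pvalues([p1, p2], method="fisher")[1]
```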

Relevance: 100.00%

Abstract:

In the southern part of Korup National Park, Cameroon, the mast-fruiting tree Microberlinia bisulcata occurs as a codominant in groves of ectomycorrhizal Caesalpiniaceae within a mosaic of otherwise species-rich lowland rain forest. To estimate the carbon and nutrients invested in reproduction during a mast fruiting event, and the consequent seed and seedling survival, three related field studies were carried out in 1995, providing a complete seed and seedling budget for the cohort. Seed production was estimated by counting woody pods on the forest floor. Trees produced on average 26,000 (range 0-92,000) seeds per tree, with a dry mass of 16.6 kg/tree; the seeds were contained in woody pods of mass 307 kg/tree. Dry mass production of pods and seeds was 1034 kg/ha, equivalent to over half (55%) of the annual leaf litterfall for this species, and contained 13% of the nitrogen and 21% of the phosphorus in annual leaf litterfall. Seed and young-seedling mortality was investigated with open quadrats, and with cages to exclude vertebrate predators, at two distances from the parent tree. Of the seeds on the forest floor, 84% disappeared in the first 6 wk after dispersal: 26.5% to likely vertebrate removal, 36% to rotting, and 21.5% to other causes. Vertebrate predation was greater close to the stem than 5 m beyond the crown (41% vs. 12% of seeds disappearing), where the seed shadow was less dense. Previous studies have demonstrated an association between mast years at Korup and high dry-season radiation before flowering, and have shown lower leaf-litterfall phosphorus concentrations following mast fruiting. The emerging hypothesis is that mast fruiting is primarily imposed by energy limitation on fruit production, with phosphorus supply and vertebrate predation as regulating factors. Recording the survival of naturally regenerating M. bisulcata seedlings (6-wk stage) showed that 21% of seedlings survived to 31 mo. A simple three-stage recruitment model was constructed. Mortality rates were initially high and peaked again in each of the next two dry seasons, with smaller peaks in the two intervening wet seasons, the latter coinciding with annual troughs in radiation. The very poor recruitment of M. bisulcata trees in Korup, demonstrated in previous investigations, appears to be due not to a limitation in seed or young-seedling supply, but rather to factors operating at the established-seedling stage.
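
Chaining the reported stage-specific fractions gives a back-of-the-envelope version of the recruitment budget; treating "seeds still present at 6 wk" as stage-one survival is a simplifying assumption.

```python
# Per-tree seedling recruitment from the reported stage survival rates.
seeds_per_tree = 26_000
s1 = 1 - 0.84     # seeds remaining 6 wk after dispersal
s2 = 0.21         # 6-wk seedlings surviving to 31 mo
survivors = seeds_per_tree * s1 * s2
print(f"{survivors:.0f} seedlings/tree at 31 mo "
      f"({100 * s1 * s2:.1f}% of seeds)")   # ~874 (~3.4%)
```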