909 results for Artisanal and Small-Scale Mining
Abstract:
Human activities that modify land cover can alter the structure and biogeochemistry of small streams, but these effects are poorly known over large regions of the humid tropics where rates of forest clearing are high. We examined how conversion of Amazon lowland tropical forest to cattle pasture influenced the physical and chemical structure, organic matter stocks and N cycling of small streams. We combined a regional ground survey of small streams with an intensive study of nutrient cycling using 15N additions in three representative streams: a second-order forest stream, a second-order pasture stream and a third-order pasture stream. These three streams were within several km of each other and on similar soils. Replacement of forest with pasture decreased stream habitat complexity by changing streams from run and pool channels with forest leaf detritus (50% cover) to grass-filled (63% cover) channels with runs of slow-moving water. In the survey, pasture streams consistently had lower concentrations of dissolved oxygen and nitrate (NO3-) compared with similar-sized forest streams. Stable isotope additions revealed that the second-order pasture stream had a shorter NH4+ uptake length, higher uptake rates into organic matter components and a shorter 15NH4+ residence time than the second-order forest stream or the third-order pasture stream. Nitrification was significant in the forest stream (19% of the added 15NH4+) but not in the second-order (0%) or third-order (6%) pasture streams. The forest stream retained 7% of added 15N in organic matter compartments and exported 53% (15NH4+ = 34%; 15NO3- = 19%). In contrast, the second-order pasture stream retained 75% of added 15N, predominantly in grasses (69%), and exported only 4% as 15NH4+. 
The fate of tracer 15N in the third-order pasture stream more closely resembled that in the forest stream, with 5% of added 15N retained and 26% exported (15NH4+ = 9%; 15NO3- = 6%). These findings indicate that the widespread infilling by grass of small streams in areas deforested for pasture greatly increases the retention of inorganic N in first- and second-order streams, which make up roughly three-fourths of total stream channel length in Amazon basin watersheds. The importance of this phenomenon and its effect on N transport to larger rivers across the larger areas of the Amazon Basin will depend on better evaluation of both the extent and the scale at which stream infilling by grass occurs, but our analysis suggests the phenomenon is widespread.
Abstract:
Introducing a pharmaceutical product on the market involves several stages of research. The scale-up stage integrates the previous phases of development. This phase is extremely important, since many process limitations that do not appear on the small scale become significant on transposition to a large one. Since the scientific literature presents only a few reports on the characterization of emulsified systems during scale-up, this work aimed at evaluating the physical properties of non-ionic and anionic emulsions during their manufacturing phases: laboratory stage and scale-up. Prototype non-ionic (glyceryl monostearate) and anionic (potassium cetyl phosphate) emulsified systems had their physical properties evaluated by determination of droplet size (D[4,3], μm) and rheological profile. Transposition occurred from a 500 g batch to a 50,000 g one. Semi-industrial manufacturing involved distinct conditions of agitation and homogenization intensity. Comparing the non-ionic and anionic systems, anionic emulsifiers generated systems with smaller droplet size and higher viscosity at laboratory scale. In addition, for the concentrations tested, increasing the glyceryl monostearate emulsifier content provided formulations with better physical characteristics. For systems with potassium cetyl phosphate, droplet size increased with emulsifier concentration, suggesting inadequate stability. Scale-up provoked more significant alterations in the rheological profile and droplet size of the anionic systems than of the non-ionic ones. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Effluent water from shrimp ponds typically contains elevated concentrations of dissolved nutrients and suspended particulates compared to influent water. Attempts to improve effluent water quality using filter-feeding bivalves and macroalgae to reduce nutrients have previously been hampered by the high concentration of clay particles typically found in untreated pond effluent. These particles inhibit feeding in bivalves and reduce photosynthesis in macroalgae by increasing effluent turbidity. In a small-scale laboratory study, the effectiveness of a three-stage effluent treatment system was investigated. In the first stage, reduction in particle concentration occurred through natural sedimentation. In the second stage, filtration by the Sydney rock oyster, Saccostrea commercialis (Iredale and Roughley), further reduced the concentration of suspended particulates, including inorganic particles, phytoplankton, bacteria, and their associated nutrients. In the final stage, the macroalga Gracilaria edulis (Gmelin) Silva absorbed dissolved nutrients. Pond effluent was collected from a commercial shrimp farm, taken to an indoor culture facility and left to settle for 24 h. Subsamples of water were then transferred into laboratory tanks stocked with oysters and maintained for 24 h, and then transferred to tanks containing macroalgae for another 24 h. Total suspended solids (TSS), chlorophyll a, total nitrogen (N), total phosphorus (P), NH4+, NO3-, PO43-, and bacterial numbers were compared before and after each treatment at: 0 h (initial); 24 h (after sedimentation); 48 h (after oyster filtration); and 72 h (after macroalgal absorption). The combined effect of the sequential treatments resulted in significant reductions in the concentrations of all parameters measured. High rates of nutrient regeneration were observed in the control tanks, which did not contain oysters or macroalgae. 
Conversely, significant reductions in nutrients and suspended particulates after sedimentation and biological treatment were observed. Overall, improvements in water quality (final percentage of the initial concentration) were as follows: TSS (12%); total N (28%); total P (14%); NH4+ (76%); NO3- (30%); PO43- (35%); bacteria (30%); and chlorophyll a (0.7%). Despite the probability of considerable differences in sedimentation, filtration and nutrient uptake rates when scaled to farm size, these results demonstrate that integrated treatment has the potential to significantly improve the water quality of shrimp farm effluent. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is here learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that may typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns. This is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of 137Cs activity from measurements taken in the Briansk region following the Chernobyl accident.
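The mixture-of-scales idea can be sketched in a few lines. The following is a minimal stand-in that uses kernel ridge regression with a fixed weighted sum of two RBF kernels rather than the paper's full multi-scale SVR machinery; the length scales, weights, regularization and toy data are all illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf(X, Y, length_scale):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def fit_multi_scale(X, y, scales=(0.05, 0.5), weights=(0.5, 0.5), reg=1e-3):
    """Kernel ridge regression with a short- and a large-scale RBF kernel,
    a simplified stand-in for multi-scale SVR."""
    K = sum(w * rbf(X, X, s) for w, s in zip(weights, scales))
    alpha = np.linalg.solve(K + reg * np.eye(len(X)), y)

    def predict(Xq):
        Kq = sum(w * rbf(Xq, X, s) for w, s in zip(weights, scales))
        return Kq @ alpha

    return predict

# Toy 1-D field: a large-scale trend plus a narrow local anomaly.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(80, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.5 * np.exp(-((X[:, 0] - 0.5) / 0.05) ** 2)

predict = fit_multi_scale(X, y)
train_err = np.abs(predict(X) - y).mean()
```

With a single large length scale, the narrow anomaly around x = 0.5 would be smoothed away; the short-scale kernel lets the model capture it while the large-scale kernel carries the trend.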
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. 
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the thus inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. 
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
Abstract:
Plants are sessile organisms, often characterized by limited dispersal. Seeds and pollen are the critical stages for gene flow. Here we investigate spatial genetic structure, gene dispersal and the relative contribution of pollen vs. seed in the movement of genes in a stable metapopulation of the white campion Silene latifolia within its native range. This short-lived perennial plant is dioecious, has gravity-dispersed seeds and moth-mediated pollination. Direct measures of pollen dispersal suggested that large populations receive more pollen than small isolated populations and that most gene flow occurs within tens of meters. However, these studies were performed in the newly colonized range (North America), where the specialist pollinator is absent. In the native range (Europe), gene dispersal could fall on a different spatial scale. We genotyped 258 individuals from large and small (15) subpopulations along a 60 km, elongated metapopulation in Europe using six highly variable microsatellite markers, two X-linked and four autosomal. We found substantial genetic differentiation among subpopulations (global FST = 0.11) and a general pattern of isolation by distance over the whole sampled area. Spatial autocorrelation revealed high relatedness among neighboring individuals over hundreds of meters. Estimates of gene dispersal revealed gene flow at the scale of tens of meters (5-30 m), similar to the newly colonized range. Contrary to expectations, estimates of dispersal based on X-linked and autosomal markers showed very similar ranges, suggesting similar levels of pollen and seed dispersal. This may be explained by stochastic events of extensive seed dispersal in this area and limited pollen dispersal.
Abstract:
This thesis is entitled 'Entrepreneurship and Motivation in the Small Business Sector of Kerala: A Study of the Rubber Products Manufacturing Industry'. The rubber-based industry in Kerala was established only in the first half of the 20th century, and the number of licensed manufacturers in the State has increased substantially over the years, particularly in the post-independence period: from 54 rubber manufacturing units in 1965-66, the number of licensed rubber-based industrial units increased to 1,300 units in 2001-02. In 2001-02 Kerala occupied the primary position in the number of rubber goods manufacturers in the country. As per the latest report of the Third All India Census of Small Scale Industries 2001-02, Kerala has the third largest number of registered small-scale units in the country, after Tamil Nadu and Uttar Pradesh. This study of entrepreneurship in the small-scale rubber goods manufacturing industry in Kerala compares a cross-section of successful and unsuccessful entrepreneurs with respect to socio-economic characteristics and motivational dynamics. Based on a sample survey of 120 entrepreneurs of the Kottayam and Ernakulam districts, successful and unsuccessful entrepreneurs were selected using multiple criteria. The study provides guidelines for the development of entrepreneurship in Kerala. The results of the socio-economic survey support the hypothesis that successful entrepreneurs differ from unsuccessful entrepreneurs with respect to education, social contacts, initial investment, sales turnover, profits, capital employed, personal income, and number of employees. Successful entrepreneurs were found to be self-starters, adopted far more technological changes than unsuccessful entrepreneurs, and were more innovative: 31.50 percent of successful entrepreneurs reported innovations in business, against 8.50 percent of unsuccessful entrepreneurs.
Abstract:
The study covers fishing capture technology innovation, which includes the catching of aquatic animals using any kind of gear technique operated from a vessel. Utilization of fishing techniques varies depending upon the type of fishery, and can range from a basic, small hook connected to a line to huge and complex midwater trawls or seines operated by large fishing vessels. The size and autonomy of a fishing vessel are largely determined by its ability to handle, process and store fish in good condition on board, and thus these two characteristics have been greatly influenced by the introduction and utilization of ice and refrigeration machinery. Other technological developments, especially hydraulic hauling machinery, fish-finding electronics and synthetic twines, have also had a major impact on the efficiency and profitability of fishing vessels. A wide variety of fishing gears and practices, ranging from small-scale artisanal to advanced mechanised systems, are used for fish capture in Kerala. The most important among these fishing gears are trawls, seines, lines, gillnets, entangling nets and traps. The modern sector was introduced in 1953 in the Neendakara-Shakthikulangara region under the initiative of the Indo-Norwegian Project (INP). The novel facilities introduced in the fishing industry by the Indo-Norwegian Project were mechanically operated new boats with new fishing nets. Soon after mechanization, the motorization programme gained momentum in Kerala, especially in the Alleppey, Ernakulam and Kollam districts.
Abstract:
An unbalanced nested sampling design was used to investigate the spatial scale of soil and herbicide interactions at the field scale. A hierarchical analysis of variance based on residual maximum likelihood (REML) was used to analyse the data and provide a first estimate of the variogram. Soil samples were taken at 108 locations at a range of separating distances in a 9 ha field to explore small- and medium-scale spatial variation. Soil organic matter content, pH, particle size distribution, microbial biomass and the degradation and sorption of the herbicide isoproturon were determined for each soil sample. A large proportion of the spatial variation in isoproturon degradation and sorption occurred at sampling intervals of less than 60 m; however, the sampling design did not resolve the variation present at scales greater than this. A sampling interval of 20-25 m should ensure that the main spatial structures are identified for isoproturon degradation rate and sorption without too great a loss of information in this field.
Abstract:
In real-world applications, sequential algorithms for data mining and data exploration are often unsuitable for datasets of enormous size, high dimensionality and complex data structure. Grid computing promises unprecedented opportunities for unlimited computing and storage resources. In this context there is a need to develop high-performance distributed data mining algorithms. However, the computational complexity of the problem and the large amount of data to be explored often make the design of large-scale applications particularly challenging. In this paper we present the first distributed formulation of a frequent subgraph mining algorithm for discriminative fragments of molecular compounds. Two distributed approaches have been developed and compared on the well-known National Cancer Institute's HIV-screening dataset. We present experimental results on a small-scale computing environment.
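The discriminative-fragment idea behind such an algorithm can be illustrated with a toy map-reduce-style sketch: each worker counts candidate fragments on its own partition of the molecules, a merge step combines the local counts, and fragments frequent in active compounds but rare in inactive ones are kept. Real subgraph enumeration is replaced here by sorted tuples of bond labels, and the data and threshold are invented for illustration only.

```python
from collections import Counter
from itertools import combinations

def local_counts(molecules):
    """Map step: one worker counts candidate fragments on its partition.
    A 'fragment' here is just a sorted tuple of bond labels, a toy
    stand-in for real subgraph enumeration."""
    counts = Counter()
    for bonds in molecules:
        for r in range(1, len(bonds) + 1):
            for frag in combinations(sorted(bonds), r):
                counts[frag] += 1
    return counts

def merge(counters):
    """Reduce step: merge per-worker counts, as a master node would."""
    total = Counter()
    for c in counters:
        total.update(c)
    return total

def discriminative(active_counters, inactive_counters, min_diff=2):
    """Keep fragments clearly more frequent in actives than in inactives."""
    act, inact = merge(active_counters), merge(inactive_counters)
    return {f for f in act if act[f] - inact.get(f, 0) >= min_diff}

# Toy molecules as sets of bond labels, split across two 'workers'.
actives = [{"C-N", "C-O"}, {"C-N", "C-C"}, {"C-N"}]
inactives = [{"C-C"}, {"C-O"}]
active_parts = [local_counts(actives[:2]), local_counts(actives[2:])]
inactive_parts = [local_counts(inactives)]
hits = discriminative(active_parts, inactive_parts)
```

Only the small count tables cross worker boundaries, which is what makes the distributed formulation attractive for large molecular datasets.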
Abstract:
1 Adaptation of plant populations to local environments has been shown in many species but local adaptation is not always apparent and spatial scales of differentiation are not well known. In a reciprocal transplant experiment we tested whether: (i) three widespread grassland species are locally adapted at a European scale; (ii) detection of local adaptation depends on competition with the local plant community; and (iii) local differentiation between neighbouring populations from contrasting habitats can be stronger than differentiation at a European scale. 2 Seeds of Holcus lanatus, Lotus corniculatus and Plantago lanceolata from a Swiss, Czech and UK population were sown in a reciprocal transplant experiment at fields that exhibit environmental conditions similar to the source sites. Seedling emergence, survival, growth and reproduction were recorded for two consecutive years. 3 The effect of competition was tested by comparing individuals in weeded monocultures with plants sown together with species from the local grassland community. To compare large-scale vs. small-scale differentiation, a neighbouring population from a contrasting habitat (wet-dry contrast) was compared with the 'home' and 'foreign' populations. 4 In P. lanceolata and H. lanatus, a significant home-site advantage was detected in fitness-related traits, thus indicating local adaptation. In L. corniculatus, an overall superiority of one provenance was found. 5 The detection of local adaptation depended on competition with the local plant community. In the absence of competition the home-site advantage was underestimated in P. lanceolata and overestimated in H. lanatus. 6 A significant population differentiation between contrasting local habitats was found. In some traits, this small-scale differentiation was greater than the large-scale differentiation between countries. 
7 Our results indicate that local adaptation in real plant communities cannot necessarily be predicted from plants grown in weeded monocultures and that tests on the relationship between fitness and geographical distance have to account for habitat-dependent small-scale differentiation. Considering the strong small-scale differentiation, a local provenance from a different habitat may not be the best choice in ecological restoration if distant populations from a more similar habitat are available.
Abstract:
The large scale urban consumption of energy (LUCY) model simulates all components of anthropogenic heat flux (QF) from the global to the individual city scale at 2.5 × 2.5 arc-minute resolution. This includes a database of different working patterns and public holidays, vehicle use and energy consumption in each country. The databases can be edited to include specific diurnal and seasonal vehicle and energy consumption patterns, local holidays and flows of people within a city. If better information about individual cities becomes available within this (open-source) database, the accuracy of this model can only improve, providing the community with data from the global scale down to the individual city scale in the future. The results show that QF varied widely through the year, through the day, and between countries and urban areas. An assessment of the estimated heat emissions revealed that they are reasonably close to those produced by a global model and a number of small-scale city models, so results from LUCY can be used with a degree of confidence. From LUCY, the global mean urban QF has a diurnal range of 0.7–3.6 W m−2, and is greater on weekdays than at weekends. Heat release from buildings is the largest contributor (89–96%) to heat emissions globally. Differences between months are greatest in the middle of the day (up to 1 W m−2 at 1 pm). December to February, the coldest months in the Northern Hemisphere, have the highest heat emissions; July and August are at the higher end, and the least QF is emitted in May. The highest individual grid-cell heat fluxes in urban areas (in W m−2) were located in New York (577), Paris (261.5), Tokyo (178), San Francisco (173.6), Vancouver (119) and London (106.7). Copyright © 2010 Royal Meteorological Society
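The structure of such an anthropogenic heat flux estimate, building, vehicle and metabolic terms each modulated by diurnal and weekday profiles, can be sketched as below. All base fluxes and profile shapes are invented placeholders for illustration, not LUCY's actual country databases.

```python
import math

def anthropogenic_heat_flux(hour, is_weekday,
                            building_base=20.0, traffic_base=5.0,
                            metabolic_base=2.0):
    """Toy per-grid-cell QF (W m-2): sum of building, vehicle and
    metabolic heat, each scaled by a crude diurnal profile. The base
    values and profile shapes are illustrative assumptions only."""
    if 6 <= hour <= 18:
        # daytime: rises from 0.5 at 06:00 to 1.0 at noon, back down by 18:00
        diurnal = 0.5 + 0.5 * math.sin(math.pi * (hour - 6) / 12.0)
    else:
        diurnal = 0.3  # night-time floor
    traffic = traffic_base * (1.0 if is_weekday else 0.6)  # weekend reduction
    return (building_base + traffic) * diurnal + metabolic_base

qf_noon_weekday = anthropogenic_heat_flux(12, True)
qf_noon_weekend = anthropogenic_heat_flux(12, False)
qf_night = anthropogenic_heat_flux(3, True)
```

This reproduces the qualitative behaviour reported for LUCY: midday peaks, weekday values above weekend values, and a building term dominating the total.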
Abstract:
Exascale systems are the next frontier in high-performance computing and are expected to deliver a performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions. Non-uniform data distributions can either be found in real-world distributed applications or induced by means of multidimensional binary search trees. The analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
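The communication pattern underlying parallel k-means, each worker reducing its partition to small per-cluster statistics so that only those statistics are exchanged per iteration, can be sketched in a single-process simulation. The dynamic-group collective described in the abstract is replaced here by a plain merge over all workers, and the data, k and tolerance are illustrative choices.

```python
import numpy as np

def parallel_kmeans(partitions, k, iters=50, tol=1e-4):
    """Simulated communication-efficient parallel k-means: each 'worker'
    condenses its partition into per-cluster (sum, count) statistics,
    which are the only values that would cross the network."""
    data = np.vstack(partitions)
    # Deterministic spread-out initialisation for a stable toy demo.
    centroids = data[:: max(1, len(data) // k)][:k].copy()
    for _ in range(iters):
        sums = np.zeros((k, data.shape[1]))
        counts = np.zeros(k)
        for part in partitions:  # local reduction on each worker
            labels = ((part[:, None, :] - centroids) ** 2).sum(-1).argmin(1)
            for j in range(k):
                members = part[labels == j]
                sums[j] += members.sum(axis=0)
                counts[j] += len(members)
        new = np.where(counts[:, None] > 0,
                       sums / np.maximum(counts[:, None], 1), centroids)
        shift = np.abs(new - centroids).max()
        centroids = new
        if shift < tol:  # stopping / approximation threshold
            break
    return centroids

rng = np.random.default_rng(1)
part_a = rng.normal(0.0, 0.3, size=(50, 2))   # blob near (0, 0)
part_b = rng.normal(5.0, 0.3, size=(50, 2))   # blob near (5, 5)
centroids = parallel_kmeans([part_a, part_b], k=2)
```

Loosening the stopping threshold `tol` trades centroid accuracy for fewer communication rounds, which mirrors the approximation-error extension mentioned in the abstract.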
Abstract:
In this article, we review the state-of-the-art techniques in mining data streams for mobile and ubiquitous environments. We start the review with a concise background of data stream processing, presenting the building blocks for mining data streams. In a wide range of applications, data streams are required to be processed on small ubiquitous devices like smartphones and sensor devices. Mobile and ubiquitous data mining targets these applications with tailored techniques and approaches addressing scarcity of resources and mobility issues. Two categories can be identified for mobile and ubiquitous mining of streaming data: single-node and distributed. This survey will cover both categories. Mining mobile and ubiquitous data requires algorithms with the ability to monitor and adapt the working conditions to the available computational resources. We identify the key characteristics of these algorithms and present illustrative applications. Distributed data stream mining in the mobile environment is then discussed, presenting the Pocket Data Mining framework. Mobility of users stimulates the adoption of context-awareness in this area of research. Context-awareness and collaboration are discussed in the context of Collaborative Data Stream Mining, where agents share knowledge to learn adaptive, accurate models.
Abstract:
The Madden–Julian Oscillation (MJO) is the chief source of tropical intra-seasonal variability, but is simulated poorly by most state-of-the-art GCMs. Common errors include a lack of eastward propagation at the correct frequency and zonal extent, and too small a ratio of eastward- to westward-propagating variability. Here it is shown that HiGEM, a high-resolution GCM, simulates a very realistic MJO with approximately the correct spatial and temporal scale. Many MJO studies in GCMs are limited to diagnostics which average over a latitude band around the equator, allowing an analysis of the MJO's structure in time and longitude only. In this study a wider range of diagnostics is applied. It is argued that such an approach is necessary for a comprehensive analysis of a model's MJO. The standard analysis of Wheeler and Hendon (Mon Wea Rev 132(8):1917–1932, 2004; WH04) is applied to produce composites, which show a realistic spatial structure in the MJO envelopes except for the timing of the peak precipitation in the inter-tropical convergence zone, which bifurcates the MJO signal. Further diagnostics are developed to analyse the MJO's episodic nature and the "MJO inertia" (the tendency to remain in the same WH04 phase from one day to the next). HiGEM favours phases 2, 3, 6 and 7; has too much MJO inertia; and dies out too frequently in phase 3. Recent research has shown that a key feature of the MJO is its interaction with the diurnal cycle over the Maritime Continent. This interaction is present in HiGEM but is unrealistically weak.