164 results for EROSION THRESHOLDS


Relevance:

10.00%

Publisher:

Abstract:

It is commonly assumed that rates of accumulation of organic-rich strata have varied through geologic time, with some periods particularly favourable for the accumulation of petroleum source rocks or coals. A rigorous analysis of the validity of such an assumption must consider the basic fact that sedimentary rocks have been lost through geologic time to erosion and metamorphism; consequently, their present-day global abundance decreases with their geologic age. Measurements of the global abundance of coal-bearing strata suggest that conditions for coal accumulation were exceptionally favourable during the late Carboniferous: strata of this age constitute 21% of the world's coal-bearing strata. Global rates of coal accumulation appear to have been relatively constant since the end of the Carboniferous, with the exception of the Triassic, which contains only 1.75% of the world's coal-bearing strata. Estimates of the global amount of discovered oil by age of the source rock show that 58% of the world's oil has been sourced from Cretaceous or younger strata and 99% from Silurian or younger strata. Although most geologic periods were favourable for oil source-rock accumulation, the mid-Permian to mid-Jurassic appears to have been particularly unfavourable, accounting for less than 2% of the world's oil. Corresponding estimates for discovered natural gas show that 48% of the world's gas has been sourced from Cretaceous or younger strata and 99% from Silurian or younger strata. The Silurian and Late Carboniferous were particularly favourable for gas source-rock accumulation, accounting for 12.9% and 6.9% of the world's gas, respectively. By contrast, Permian and Triassic source rocks account for only 1.7% of the world's natural gas. Rather than invoking global climatic or oceanic events to explain the relative abundance of organic-rich sediments through time, examination of the data suggests the more critical control is tectonic: the majority of coals are associated with foreland basins, and the majority of oil-prone source rocks are associated with rifting. The relative abundance of these basin types through time determines the abundance and location of coals and petroleum source rocks.

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE There has been a dramatic increase in vitamin D testing in Australia in recent years, prompting calls for targeted testing. We sought to develop a model to identify people most at risk of vitamin D deficiency. DESIGN AND PARTICIPANTS This is a cross-sectional study of 644 60- to 84-year-old participants, 95% of whom were Caucasian, who took part in a pilot randomized controlled trial of vitamin D supplementation. MEASUREMENTS Baseline 25(OH)D was measured using the Diasorin Liaison platform. Vitamin D insufficiency and deficiency were defined using 50 and 25 nmol/l as cut-points, respectively. A questionnaire was used to obtain information on demographic characteristics and lifestyle factors. We used multivariate logistic regression to predict low vitamin D and calculated the net benefit of using the model compared with 'test-all' and 'test-none' strategies. RESULTS The mean serum 25(OH)D was 42 (SD 14) nmol/l. Seventy-five per cent of participants were vitamin D insufficient and 10% deficient. Serum 25(OH)D was positively correlated with time outdoors, physical activity, vitamin D intake and ambient UVR, and inversely correlated with age, BMI and poor self-reported health status. These predictors explained approximately 21% of the variance in serum 25(OH)D. The area under the ROC curve predicting vitamin D deficiency was 0.82. Net benefit for the prediction model was higher than that for the 'test-all' strategy at all probability thresholds and higher than the 'test-none' strategy for probabilities up to 60%. CONCLUSION Our model could predict vitamin D deficiency with reasonable accuracy, but it needs to be validated in other populations before being implemented.
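The 'net benefit' comparison follows standard decision-curve analysis and is easy to reproduce. Below is a minimal Python sketch using simulated data and hypothetical predictors (not the study's), with net benefit computed as TP/n - (FP/n) * p_t/(1 - p_t) at threshold probability p_t:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# hypothetical standardized predictors (e.g. age, BMI, time outdoors)
X = rng.normal(size=(644, 3))
y = (X @ [0.8, 0.6, -0.7] + rng.normal(size=644)) > 2.0  # ~10% "deficient"

model = LogisticRegression().fit(X, y)
p = model.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(y, p), 2))

def net_benefit(y_true, p_hat, p_t):
    """Net benefit of testing only those with predicted risk >= p_t."""
    test = p_hat >= p_t
    tp = np.sum(test & y_true) / y_true.size
    fp = np.sum(test & ~y_true) / y_true.size
    return tp - fp * p_t / (1 - p_t)

for p_t in (0.1, 0.3, 0.6):
    # 'test-all' treats everyone as positive; 'test-none' has net benefit 0
    nb_all = y.mean() - (1 - y.mean()) * p_t / (1 - p_t)
    print(f"p_t={p_t}: model={net_benefit(y, p, p_t):.3f}, test-all={nb_all:.3f}")
```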

Relevance:

10.00%

Publisher:

Abstract:

The purpose of the present study was to examine the influence of 3 different high-intensity interval training regimens on the first and second ventilatory thresholds (VT1 and VT2), anaerobic capacity (ANC), and plasma volume (PV) in well-trained endurance cyclists. Before and after 2 and 4 weeks of training, 38 well-trained cyclists (VO2peak = 64.5 ± 5.2 ml·kg⁻¹·min⁻¹) performed (a) a progressive cycle test to measure VO2peak, peak power output (PPO), VT1, and VT2; (b) a time to exhaustion test (Tmax) at their VO2peak power output (Pmax); and (c) a 40-km time-trial (TT40). Subjects were assigned to 1 of 4 training groups (group 1: n = 8, 8 × 60% Tmax at Pmax, 1:2 work-recovery ratio; group 2: n = 9, 8 × 60% Tmax at Pmax, recovery at 65% maximum heart rate; group 3: n = 10, 12 × 30 seconds at 175% PPO, 4.5-minute recovery; control group: n = 11). The TT40 performance, VO2peak, VT1, VT2, and ANC were all significantly increased in groups 1, 2, and 3 (p < 0.05) but not in the control group. However, PV did not change in response to the 4-week training program. Changes in TT40 performance were modestly related to the changes in VO2peak, VT1, VT2, and ANC (r = 0.41, 0.34, 0.42, and 0.40, respectively; all p < 0.05). In conclusion, the improvements in TT40 performance were related to significant increases in VO2peak, VT1, VT2, and ANC but were not accompanied by significant changes in PV. Thus, peripheral adaptations rather than central adaptations are likely responsible for the improved performances witnessed in well-trained endurance athletes following various forms of high-intensity interval training programs.
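As a concrete reading of the group-1 prescription (8 × 60% Tmax at Pmax, 1:2 work-recovery), the short sketch below computes the session structure for an assumed rider; the Tmax and Pmax values are illustrative only, not study data:

```python
# Worked example of the group-1 interval prescription; Tmax and Pmax
# below are assumed values for a single rider, not study data.
t_max = 250.0   # s, time to exhaustion at Pmax
p_max = 340.0   # W, power output at VO2peak

work = 0.60 * t_max        # each work bout: 60% of Tmax, ridden at Pmax
recovery = 2.0 * work      # 1:2 work-recovery ratio
session = 8 * (work + recovery)

print(f"8 x {work:.0f} s @ {p_max:.0f} W with {recovery:.0f} s recovery "
      f"({session / 60:.0f} min total)")
```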

Relevance:

10.00%

Publisher:

Abstract:

Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development of a controlled drug delivery device can be facilitated enormously by mathematical modelling of the drug release mechanisms, directly decreasing the number of necessary experiments. Such modelling is difficult because several mechanisms are involved in the drug release process. The main drug release mechanisms of a controlled release device depend on the device's physicochemical properties and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve solvent penetration into the polymeric device, swelling of the polymer, polymer erosion and drug diffusion out of the device, but all share two common key features. The first is that solvent penetration into the polymer causes the transition of the polymer from a glassy state into a rubbery state. The interface between the two states of the polymer is modelled as a moving boundary whose speed is governed by a kinetic law. The second feature is that drug diffusion only happens in the rubbery region of the polymer, with a nonlinear diffusion coefficient that depends on the concentration of solvent. These models are analysed using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve the models numerically. This numerical scheme is conservative, accurate and easily applied to moving boundary problems, and is thoroughly explained in Section 3.2. The small-time asymptotic analysis in Sections 5.3.1, 6.3.1 and 7.2.1 shows that these models exhibit the non-Fickian behaviour referred to as Case II diffusion, with an initial constant rate of drug release; this is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour identified in the literature. The knowledge obtained from investigating these models can help in developing more complex multi-layered drug delivery devices to achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustained and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem exhibits unphysical singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; the resulting formulation is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the singularities of the classical one-phase Stefan problem at the complete melting time are regularised, and the small-time asymptotic analysis in Section 3.3 shows that the small-time behaviour of the problem with kinetic undercooling differs from that of the classical problem.
In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (without kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension alone exhibits finite-time blow-up. We therefore investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. The investigation of including kinetic undercooling and surface tension in the melting problems reveals more insight into the regularisation of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of melting a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
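To make the front-fixing and kinetic-law machinery concrete, here is a minimal Python sketch of a method-of-lines scheme for a one-phase Stefan problem with kinetic undercooling. The formulation and parameter values are illustrative assumptions, not the thesis's exact model or its conservative scheme from Section 3.2:

```python
import numpy as np

# Illustrative one-phase Stefan problem with kinetic undercooling:
#   u_t = u_xx on 0 < x < s(t),   u(0,t) = 1,
#   Stefan condition:        ds/dt = -u_x(s(t), t)
#   kinetic undercooling:    u(s(t), t) = eps * ds/dt
# Front fixing with xi = x/s(t) maps the domain onto [0, 1]:
#   u_t = u_xixi / s^2 + xi * (ds/dt) / s * u_xi

N = 101
xi = np.linspace(0.0, 1.0, N)
dxi = xi[1] - xi[0]
eps = 0.1                    # kinetic-undercooling parameter (assumed)
s = 0.5                      # initial front position (assumed)
u = 1.0 - xi                 # initial profile consistent with the BCs
dt = 0.25 * (dxi * s) ** 2   # explicit stability: dt <= dxi^2 s^2 / 2
t, t_end = 0.0, 0.1

while t < t_end:
    # second-order one-sided estimate of u_xi at the front (xi = 1)
    ux1 = (3 * u[-1] - 4 * u[-2] + u[-3]) / (2 * dxi)
    sdot = -ux1 / s                           # Stefan condition
    uxx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dxi**2
    ux = (u[2:] - u[:-2]) / (2 * dxi)
    u[1:-1] += dt * (uxx / s**2 + xi[1:-1] * sdot / s * ux)
    u[0] = 1.0                                # fixed heated boundary
    u[-1] = eps * sdot                        # kinetic undercooling at front
    s += dt * sdot
    t += dt

print(f"front position s({t:.2f}) = {s:.4f}")
```

The change of variable ξ = x/s(t) pins the moving front at ξ = 1, so a fixed grid and standard finite differences apply; the kinetic law enters through the boundary value u(1, t) = ε ṡ.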

Relevance:

10.00%

Publisher:

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While the existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded, and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
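The quantization-and-encoding stage described above can be sketched briefly. The snippet below learns per-dimension equal-probability thresholds from training features and binarizes a vector with a 2-bit Gray code; the names and parameters are illustrative rather than any published algorithm's:

```python
import numpy as np

GRAY2 = np.array([[0, 0], [0, 1], [1, 1], [1, 0]], dtype=np.uint8)  # 2-bit Gray code

def learn_thresholds(train_features, levels=4):
    """train_features: (n_samples, n_dims). Equal-probability bin edges per dim."""
    qs = np.linspace(0.0, 1.0, levels + 1)[1:-1]
    return np.quantile(train_features, qs, axis=0)   # shape (levels-1, n_dims)

def robust_hash(features, thresholds):
    """Quantize one feature vector and Gray-encode it to a binary hash."""
    idx = np.sum(features[None, :] > thresholds, axis=0)  # bin index per dim
    return GRAY2[idx].ravel()

rng = np.random.default_rng(42)
train = rng.normal(size=(1000, 16))          # stand-in for extracted features
th = learn_thresholds(train)

x = rng.normal(size=16)
x_noisy = x + rng.normal(0, 0.05, 16)        # minor perceptual change
h1, h2 = robust_hash(x, th), robust_hash(x_noisy, th)
print("hash bits:", h1.size, "Hamming distance:", int(np.sum(h1 != h2)))
```

Gray coding means adjacent quantization bins differ in a single bit, so a small perturbation that crosses one threshold changes the hash by only one bit, which is the robustness property at stake.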

Relevance:

10.00%

Publisher:

Abstract:

The application of different EMS current thresholds to a muscle activates not only the muscle but also peripheral sensory axons that send proprioceptive and pain signals to the cerebral cortex. A 32-channel time-domain fNIRS instrument was employed to map regional cortical activities under varied EMS current intensities applied to the right wrist extensor muscle. Eight healthy volunteers underwent four EMS sessions at different current thresholds based on their individual maximal tolerated intensity (MTI), i.e., 10% < 50% < 100% < over 100% MTI. Time courses of the absolute oxygenated and deoxygenated hemoglobin concentrations, primarily over the bilateral sensorimotor cortical (SMC) regions, were extracted, and cortical activation maps were determined by a general linear model using the NIRS-SPM software. The stimulation-induced wrist extension paradigm significantly increased activation of the contralateral SMC region with increasing EMS intensity, while the ipsilateral SMC region showed no significant changes. This could be due in part to a nociceptive response to the higher EMS current intensities and also to increased sensorimotor integration in these cortical regions.
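The activation mapping amounts to fitting a general linear model per channel, with the stimulation time course convolved with a haemodynamic response function as the regressor. Below is a minimal Python sketch under an assumed sampling rate and block timing, with a simulated HbO signal; the study itself used the NIRS-SPM software rather than this hand-rolled fit:

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Canonical double-gamma haemodynamic response function."""
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

fs = 10.0                                   # Hz, assumed sampling rate
t = np.arange(0.0, 300.0, 1.0 / fs)
stim = ((t % 60.0) < 20.0).astype(float)    # assumed 20 s EMS blocks per 60 s
kernel = hrf(np.arange(0.0, 30.0, 1.0 / fs))
design = np.convolve(stim, kernel)[: t.size]

X = np.column_stack([design, np.ones_like(t)])   # regressor + baseline
rng = np.random.default_rng(0)
y = 0.8 * design + rng.normal(0.0, 0.3, t.size)  # simulated HbO channel

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated activation beta = {beta[0]:.3f}")  # ~0.8 if the model fits
```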

Relevance:

10.00%

Publisher:

Abstract:

On 21 September 1999 Division 152 was inserted into the Income Tax Assessment Act 1997 (ITAA 1997). It was subsequently subject to amendments in 2006. Division 152 contains the small business CGT concessions, which enable eligible small business taxpayers to reduce the amount of tax payable on capital gains arising from certain CGT events (including the sale of the small business itself) that occur after 11:45 am on 21 September 1999. One of the stated principal objectives of the legislation was to provide a concessionary regime for small business owners who did not have the same ability to access the concessionary superannuation regime (particularly the superannuation guarantee charge) generally available to employees. The then Federal Treasurer, Mr Peter Costello, when announcing the introduction of the concessions, specifically stated that the object of Div 152 was to provide "small business people with access to funds for retirement or expansion". The purpose of this project is threefold: first, to assess the extent to which small business taxpayers understand the CGT small business concessions, particularly when considering the sale of their business; second, to determine which of the four small business CGT concessions are being adopted and/or recommended by tax advisors to clients; and third, to determine whether the recent superannuation changes announced by the Federal Government in relation to the capping of the concessional superannuation thresholds have had an impact on the use of the small business retirement concession. It is anticipated that the results of this study will reveal that small business owners are reliant on their tax advisors to explain the operation of Division 152. It is plausible that, given the complexity of the CGT concessions, most small business owners are completely unaware of the four small business CGT concessions contained in Division 152 and do not understand how these concessions apply. Our study will also reveal the extent to which each CGT small business concession has been adopted (and the reasons why). In particular, emphasis will be placed on the adoption of the small business retirement concession contained in Subdivision 152-D (and the specific reasons for its adoption). This study also seeks to understand whether the recent (and impending) changes to the concessional superannuation cap have resulted in the retirement concession being more widely adopted (or recommended) by tax advisors. We expect the results of our study to confirm this to be the case, particularly coupled with the recent economic downturn, which has led to lower superannuation fund balances. By providing accounting firms with this information, small business owners will benefit, becoming better placed to be long-term self-funded retirees, providing not only financial benefits to individuals and the country, but a significant increase in social self-assurance for these members of the community.

Relevance:

10.00%

Publisher:

Abstract:

Big Data is a rising IT trend similar to cloud computing, social networking and ubiquitous computing. Big Data can offer beneficial scenarios in the e-health arena. However, in one such scenario Big Data needs to be kept secure for a long period of time in order to realise its benefits, such as finding cures for infectious diseases, while protecting patient privacy. It is therefore beneficial to be able to analyse Big Data to extract meaningful information while the data is stored securely, so an analysis of various database encryption techniques is essential. In this study, we simulated three technical environments, namely plain-text, Microsoft built-in encryption, and a custom Advanced Encryption Standard (AES) implementation, using a Bucket Index in Data-as-a-Service (DaaS). The results showed that the custom AES-DaaS approach has a faster range-query response time than the Microsoft built-in encryption. Furthermore, while carrying out the scalability test, we found that there are performance thresholds that depend on the physical IT resources. Therefore, for the purpose of efficient Big Data management in e-health, it is worth examining these scalability limits even under a cloud computing environment. In addition, when designing an e-health database, both patient privacy and system performance need to be treated as top priorities.
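The bucket-index idea lends itself to a brief illustration. Below is a minimal Python sketch of range querying over an encrypted store, assuming hypothetical age buckets and records; the `cryptography` package's Fernet (AES-based) construction stands in for the study's custom AES scheme:

```python
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()
f = Fernet(key)

BUCKETS = [(0, 25), (25, 50), (50, 75), (75, 101)]  # hypothetical age buckets

def bucket_of(value):
    return next(i for i, (lo, hi) in enumerate(BUCKETS) if lo <= value < hi)

# the server stores only (bucket label, ciphertext) pairs
db = [(bucket_of(age), f.encrypt(f"{age}:{name}".encode()))
      for age, name in [(34, "patient-a"), (61, "patient-b"), (47, "patient-c")]]

# range query 30..60: the server returns every bucket overlapping the range;
# the client decrypts and discards false positives
lo, hi = 30, 60
wanted = {i for i, (a, b) in enumerate(BUCKETS) if a <= hi and b > lo}
candidates = [f.decrypt(ct).decode() for b, ct in db if b in wanted]
results = [r for r in candidates if lo <= int(r.split(":")[0]) <= hi]
print(results)  # patient-b (61) is fetched as a bucket false positive, then dropped
```

Coarser buckets leak less about the plaintext ordering but return more false positives, so the bucket width is exactly the kind of performance threshold the scalability test probes.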

Relevance:

10.00%

Publisher:

Abstract:

Soil-based emissions of nitrous oxide (N2O), a well-known greenhouse gas, have been associated with changes in soil water-filled pore space (WFPS) and soil temperature in many previous studies. However, it is acknowledged that the environment-N2O relationship is complex and still relatively poorly understood. In this article, we employed a Bayesian model selection approach (reversible jump Markov chain Monte Carlo) to develop a data-informed model of the relationship between daily N2O emissions and daily WFPS and soil temperature measurements between March 2007 and February 2009 from a soil under pasture in Queensland, Australia, taking seasonal factors and time-lagged effects into account. The model indicates a very strong relationship between a hybrid seasonal structure and daily N2O emission, with the latter substantially increased in summer. Given the other variables in the model, daily soil WFPS, lagged by a week, had a negative influence on daily N2O; there was evidence of a nonlinear positive relationship between daily soil WFPS and daily N2O emission; and daily soil temperature tended to have a linear positive relationship with daily N2O emission when daily soil temperature was above a threshold of approximately 19°C. We suggest that this flexible Bayesian modeling approach could facilitate greater understanding of the shape of the covariate-N2O flux relationship and the detection of effect thresholds in the natural temporal variation of environmental variables on N2O emission.
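The covariate structure described above can be sketched compactly. The following Python snippet builds an illustrative design matrix with seasonal terms, a one-week-lagged WFPS term, and a hinge term for temperatures above 19°C, fitted here by ordinary least squares on simulated placeholder data rather than by the paper's reversible-jump MCMC:

```python
import numpy as np

def design_matrix(doy, wfps, temp, lag=7, thresh=19.0):
    """Seasonal terms, week-lagged WFPS, and a temperature hinge above thresh."""
    X = np.column_stack([
        np.sin(2 * np.pi * doy / 365.0),      # hybrid seasonal structure
        np.cos(2 * np.pi * doy / 365.0),
        np.roll(wfps, lag),                   # WFPS lagged by one week
        np.maximum(temp - thresh, 0.0),       # effect only above ~19 C
        np.ones(doy.size),
    ])
    return X[lag:]                            # drop rows whose lag wraps around

rng = np.random.default_rng(0)
doy = np.arange(730, dtype=float) % 365       # two years of daily data
wfps = 40 + 20 * rng.random(730)
temp = 20 + 8 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 2, 730)
n2o = rng.lognormal(mean=0.0, sigma=0.5, size=730)  # placeholder response

X = design_matrix(doy, wfps, temp)
beta, *_ = np.linalg.lstsq(X, np.log(n2o[7:]), rcond=None)
print("fitted coefficients:", np.round(beta, 3))
```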

Relevance:

10.00%

Publisher:

Abstract:

During the current (1995-present) eruptive phase of the Soufrière Hills volcano on Montserrat, voluminous pyroclastic flows entered the sea off the eastern flank of the island, resulting in the deposition of well-defined submarine pyroclastic lobes. Previously reported bathymetric surveys documented the sequential construction of these deposits, but could not image their internal structure, the morphology or extent of their base, or their interaction with the underlying sediments. We show, by combining these bathymetric data with new high-resolution three-dimensional (3D) seismic data, that the sequence of previously detected pyroclastic deposits from different phases of the ongoing eruptive activity is still well preserved. A detailed interpretation of the 3D seismic data reveals the absence of significant (>3 m) basal erosion in the distal extent of the submarine pyroclastic deposits. We also identify a previously unrecognized seismic unit directly beneath the stack of recent lobes. We propose three hypotheses for the origin of this seismic unit, but prefer an interpretation that the deposit is the result of the subaerial flank collapse that formed the English's Crater scarp on the Soufrière Hills volcano. The 1995-present volcanic activity on Montserrat accounts for a significant portion of the sediments on the southeast slope of Montserrat, in places forming deposits that are more than 60 m thick, which implies that the potential for pyroclastic flows to build volcanic island edifices is significant.

Relevance:

10.00%

Publisher:

Abstract:

Pesticides used in agricultural systems must be applied in economically viable and environmentally sensitive ways, and this often requires expensive field trials on spray deposition and retention by plant foliage. Computational models to describe whether a spray droplet sticks (adheres), bounces or shatters on impact, and whether any rebounding parent or shattered daughter droplets are recaptured, would provide an estimate of spray retention and thereby act as a useful guide prior to any field trials. Parameter-driven interactive software has been implemented to enable the end-user to study and visualise droplet interception and impaction on a single, horizontal leaf. Living chenopodium, wheat and cotton leaves have been scanned to capture their surface topography, and realistic virtual leaf surface models have been generated. Individual leaf models have then been subjected to virtual spray droplets and predictions made of droplet interception with the virtual plant leaf. Thereafter, the impaction behaviour of the droplets and the subsequent behaviour of any daughter droplets, up until re-capture, are simulated to give the predicted total spray retention by the leaf. A series of critical thresholds for the stick, bounce and shatter elements of the impaction process has been developed for different combinations of formulation, droplet size and velocity, and leaf surface characteristics to provide this output. The results show that droplet properties, spray formulations and leaf surface characteristics all influence the predicted amount of spray retained on a horizontal leaf surface. Overall, the predicted spray retention increases as formulation surface tension, static contact angle, droplet size and velocity decrease. Predicted retention on cotton is much higher than on chenopodium. The average predicted retention on a single horizontal leaf across all droplet size, velocity and formulation scenarios tested is 18, 30 and 85% for chenopodium, wheat and cotton, respectively.
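Impact-outcome thresholds of this kind are commonly expressed through the droplet Weber number, We = ρv²d/σ, which combines droplet size, velocity and formulation surface tension. The sketch below classifies an impact using hypothetical threshold values, not the calibrated thresholds developed in this work:

```python
RHO = 1000.0  # kg/m^3, density of a water-like spray formulation

def weber(v, d, sigma):
    """Impact Weber number: droplet inertia relative to surface tension."""
    return RHO * v ** 2 * d / sigma

def impact_outcome(v, d, sigma, we_bounce=50.0, we_shatter=600.0):
    we = weber(v, d, sigma)
    if we < we_bounce:
        return "stick"
    if we < we_shatter:
        return "bounce"
    return "shatter"

# 100-micron droplet at 2 m/s, surface tension 0.072 N/m (water-like)
print(impact_outcome(2.0, 100e-6, 0.072))  # We ~ 5.6 -> "stick"
```

This also matches the reported trend: lowering surface tension, droplet size or velocity all lower We, pushing impacts toward the "stick" regime and raising predicted retention.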

Relevance:

10.00%

Publisher:

Abstract:

Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent: it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated by applying it to text-dependent speaker verification, using Hidden Markov Model based digit-dependent speaker models in each stage with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validation of the derived expressions for the error estimates is evaluated on test data. The performance of the sequential method is further demonstrated to depend on the order of the combination of digits (instances) and the nature of the repetitive attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent (such as client-impostor dependent favourable combinations and class-error based threshold estimation) information. The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. The tuning of the parameters, the number of instances and samples, serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and key strokes.
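For the statistically independent case, the flavour of the derived error expressions can be illustrated with one plausible fusion rule: accept a stage if any of its attempts passes, and accept overall only if every stage accepts. This Python sketch uses that rule with assumed base error rates; the dissertation's exact architecture and estimates may differ:

```python
def fused_error_rates(far, frr, n_stages, n_attempts):
    """Independence-case FAR/FRR for OR-of-attempts within AND-of-stages fusion."""
    stage_far = 1 - (1 - far) ** n_attempts   # impostor passes if ANY attempt passes
    stage_frr = frr ** n_attempts             # client rejected only if ALL attempts fail
    total_far = stage_far ** n_stages         # impostor must pass EVERY stage
    total_frr = 1 - (1 - stage_frr) ** n_stages
    return total_far, total_frr

# base classifier with 5% FAR and 5% FRR; 4 digit stages, 2 attempts each
print(fused_error_rates(0.05, 0.05, n_stages=4, n_attempts=2))
# adding stages pushes FAR down; adding attempts per stage pushes FRR down,
# which is the controllable trade-off the architecture exploits
```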

Relevance:

10.00%

Publisher:

Abstract:

Background Single nucleotide polymorphisms (SNPs) rs429358 (ε4) and rs7412 (ε2), both invoking changes in the amino-acid sequence of the apolipoprotein E (APOE) gene, have previously been tested for association with multiple sclerosis (MS) risk. However, none of these studies was sufficiently powered to detect modest effect sizes at acceptable type-I error rates. As both SNPs are only imperfectly captured on commonly used microarray genotyping platforms, their evaluation in the context of genome-wide association studies has been hindered until recently. Methods We genotyped 12 740 subjects hitherto not studied for their APOE status, imputed raw genotype data from 8739 subjects from five independent genome-wide association study datasets using the most recent high-resolution reference panels, and extracted genotype data for 8265 subjects from previous candidate gene assessments. Results Despite sufficient power to detect associations at genome-wide significance thresholds across a range of ORs, our analyses did not support a role of rs429358 or rs7412 in MS susceptibility. This included meta-analyses of the combined data across 13 913 MS cases and 15 831 controls (OR=0.95, p=0.259, and OR=1.07, p=0.0569, for rs429358 and rs7412, respectively). Conclusion Given the large sample size of our analyses, it is unlikely that the two APOE missense SNPs studied here exert any relevant effects on MS susceptibility.

Relevance:

10.00%

Publisher:

Abstract:

Physical and chemical properties of biodiesel are influenced by structural features of the fatty acids, such as chain length, degree of unsaturation and branching of the carbon chain. This study investigated whether microalgal fatty acid profiles are suitable for biodiesel characterization and species selection through Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and Graphical Analysis for Interactive Assistance (GAIA) analysis. Fatty acid methyl ester (FAME) profiles were used to calculate the likely key chemical and physical properties of the biodiesel [cetane number (CN), iodine value (IV), cold filter plugging point, density, kinematic viscosity, higher heating value] of nine microalgal species (this study) and twelve species from the literature, selected for their suitability for cultivation in subtropical climates. An equal-parameter-weighted PROMETHEE-GAIA analysis ranked Nannochloropsis oculata, Extubocellulus sp. and Biddulphia sp. highest; these were the only species meeting the EN14214 and ASTM D6751-02 biodiesel standards, except for the double-bond limit in EN14214. Chlorella vulgaris outranked N. oculata when the twelve literature microalgae were included. Culture growth phase (stationary) and, to a lesser extent, nutrient provision affected the CN and IV values of N. oculata due to lower eicosapentaenoic acid (EPA) contents. Application of a polyunsaturated fatty acid (PUFA) weighting to saturation led to a lower ranking of species exceeding the EN14214 double-bond thresholds. In summary, CN, IV, C18:3 and double-bond limits were the strongest drivers in the equal-parameter-weighted PROMETHEE analysis.
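As an illustration of how such properties follow from a FAME profile, the sketch below estimates saponification value, iodine value and cetane number using widely cited empirical relations (SV = Σ 560·A_i/MW_i, IV = Σ 254·D_i·A_i/MW_i, CN = 46.3 + 5458/SV - 0.225·IV); the example profile is hypothetical, and these need not be the exact equations used in the paper:

```python
# example FAME profile: weight %, molecular weight (g/mol), double bonds
FAME = {
    "C16:0": (30.0, 270.45, 0),
    "C18:1": (45.0, 296.49, 1),
    "C20:5": (25.0, 316.48, 5),  # EPA, prominent in N. oculata
}

sv = sum(560.0 * a / mw for a, mw, d in FAME.values())      # saponification value
iv = sum(254.0 * d * a / mw for a, mw, d in FAME.values())  # iodine value
cn = 46.3 + 5458.0 / sv - 0.225 * iv                        # cetane number

print(f"SV = {sv:.1f}, IV = {iv:.1f} g I2/100 g, CN = {cn:.1f}")
# a high EPA share drives IV up and CN down, illustrating why growth phase
# (which lowers EPA) shifts the predicted biodiesel properties
```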

Relevance:

10.00%

Publisher:

Abstract:

This paper examines the rise in the politicisation of Islam in Malaysia and links it to the othering of the Malaysian Malay. It is my argument that both were "conquering" tools of Malaysia's "Father of Modernisation", Mahathir Mohamad, devised to win the support of the Malay Muslim majority in Malaysia. The many awards bestowed on Mahathir obscure the fact that he was instrumental in the systematic erosion of the power and roles of state institutions, especially at the Federal government level. This includes the significant loss of the independence of the Malaysian judiciary. Whilst per capita income in Malaysia may well have increased eightfold under his 22-year leadership, this paper asks why the Malays remain the largest group among the poor and the more disenfranchised ethnicities in the country, and why Malay and Muslim women have suffered such a rapidly decreasing ability to access justice. This paper examines existing research on the social and political changes Malaysia has experienced with Islamisation and under Mahathir's rule, as well as studies on Malayness, Malay nationalism and Muslim Malay identity formation. The paper elaborates on the othering of a majority people, the Malays in Malaysia, and how this othering has brought forth a fast-growing political power in the name of a supremacist Islam, a puritanical Sunni and Malay Islam. Specific events in the rise and rule of Mahathir as Malaysia's then Prime Minister are reviewed, such as the banning of The Malay Dilemma and the split in the United Malays National Organisation (UMNO) in 1987. Also examined is the varying emphasis between Muslim identity and race, and how, during Mahathir's rule, strong misogynist and patriarchal attitudes took hold in Malay Muslim consciousness, a colonising consciousness that is othering the perceived cultural and genetic "impurities" within the Malay.