14 results for Number systems. Arithmetic teaching. Number systems ancients

in CORA - Cork Open Research Archive - University College Cork - Ireland


Relevance: 70.00%

Abstract:

Power efficiency is one of the most important constraints in the design of embedded systems, since such systems are generally driven by batteries with a limited energy budget or a restricted power supply. In every embedded system, there are one or more processor cores to run the software and interact with the other hardware components of the system. The power consumption of the processor core(s) has an important impact on the total power dissipated in the system. Hence, processor power optimization is crucial in satisfying power consumption constraints and developing low-power embedded systems. A key aspect of research in processor power optimization and management is “power estimation”. Having a fast and accurate method for processor power estimation at design time helps the designer to explore a large space of design possibilities and to make optimal choices for developing a power-efficient processor. Likewise, understanding the power dissipation behaviour of a specific application is key to choosing appropriate algorithms in order to write power-efficient software. Simulation-based methods for measuring processor power achieve very high accuracy, but are available only late in the design process and are often quite slow. Therefore, the need has arisen for faster, higher-level power prediction methods that allow the system designer to explore many alternatives for developing power-efficient hardware and software. The aim of this thesis is to present fast, high-level power models for the prediction of processor power consumption. Power predictability in this work is achieved in two ways: first, using a design method to develop power-predictable circuits; second, analysing the power of the functions in the code which repeat during execution, then building the power model based on the average number of repetitions.
In the first case, a design method called Asynchronous Charge Sharing Logic (ACSL) is used to implement the Arithmetic Logic Unit (ALU) of the 8051 microcontroller. ACSL circuits are power predictable because their power consumption is independent of the input data. Based on this property, a fast prediction method is presented that estimates the power of the ALU by analysing the software program and extracting the number of ALU-related instructions. This method achieves less than 1% error in power estimation and a more than 100-fold speedup in comparison to conventional simulation-based methods. In the second case, an average-case processor energy model is developed for the insertion sort algorithm based on the number of comparisons that take place during execution of the algorithm. The average number of comparisons is calculated using a high-level methodology called MOdular Quantitative Analysis (MOQA). The parameters of the energy model are measured for the LEON3 processor core, but the model is general and can be used for any processor. The model has been validated through power measurement experiments, and offers high accuracy and orders-of-magnitude speedup over simulation-based methods.
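The comparison count that drives such an average-case energy model can be sketched directly. The following Python sketch counts key comparisons in insertion sort, estimates the average over random inputs by simulation (rather than MOQA's analytical approach), and plugs the count into a hypothetical linear energy model; the constants `E_STATIC` and `E_CMP` are illustrative placeholders, not the LEON3 measurements from the thesis.

```python
import random

def insertion_sort_count(a):
    """Sort a copy of `a`, returning (sorted_list, number_of_key_comparisons)."""
    a = list(a)
    comps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            comps += 1                  # one key comparison
            if a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, comps

def avg_comparisons(n, trials=2000, seed=42):
    """Monte-Carlo estimate of the average comparison count on random inputs."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        total += insertion_sort_count(perm)[1]
    return total / trials

# Hypothetical linear energy model: E = E_STATIC + E_CMP * comparisons.
# The constants are illustrative, not measured values.
E_STATIC, E_CMP = 50.0, 0.8
predicted_energy = E_STATIC + E_CMP * avg_comparisons(32)
```

For an already-sorted input this implementation performs n-1 comparisons, and for a reversed input n(n-1)/2; the random-input average lands between the two, near n²/4 for large n.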

Relevance: 60.00%

Abstract:

The work described in this thesis reports the structural changes induced in micelles under a variety of conditions. The micelles of a liquid crystal film and dilute solutions of micelles were subjected to high-pressure CO2 and selected hydrocarbon environments. Using small angle neutron scattering (SANS) techniques, the spacing between liquid crystal micelles was measured in situ. The liquid crystals studied were templated from different surfactants with varying structural characteristics. Micelles of a dilute surfactant solution were also subjected to elevated pressures of varying gas atmospheres. Detailed modelling of the in-situ SANS experiments revealed information on the size and shape of the micelles at a number of different pressures. Also reported in this thesis is the characterisation of mesoporous materials in the confined channels of larger porous materials. Periodic mesoporous organosilicas (PMOs) were synthesised within the channels of anodic alumina membranes (AAMs) under different conditions, including drying rates and precursor concentrations. In-situ small angle X-ray scattering (SAXS) and transmission electron microscopy (TEM) were used to determine the pore morphology of the PMO within the AAM channels. PMO materials were also used as templates in the deposition of gold nanoparticles and subsequently used in the synthesis of germanium nanostructures. Polymer thin films were also employed as templates for the directed deposition of gold nanoparticles, which were again used as seeds for the production of germanium nanostructures. A supercritical CO2 (sc-CO2) technique was successfully used during the production of the germanium nanostructures.

Relevance: 60.00%

Abstract:

Constraint programming has emerged as a successful paradigm for modelling combinatorial problems arising from practical situations. In many of those situations, we are not provided with an immutable set of constraints. Instead, a user will modify his requirements, in an interactive fashion, until he is satisfied with a solution. Examples of such applications include, amongst others, model-based diagnosis, expert systems, and product configurators. The system the user interacts with must be able to assist him by showing the consequences of his requirements. Explanations are the ideal tool for providing this assistance. However, existing notions of explanations fail to provide sufficient information. We define new forms of explanations that aim to be more informative. Even though explanation generation is a very hard task, in the applications we consider we must manage to provide a satisfactory level of interactivity and, therefore, cannot afford long computation times. We introduce the concept of representative sets of relaxations: a compact set of relaxations that shows the user at least one way to satisfy each of his requirements and at least one way to relax them; we present an algorithm that efficiently computes such sets. We introduce the concept of most soluble relaxations, which maximise the number of products they allow. We present algorithms to compute such relaxations in times compatible with interactivity, achieved by indifferently making use of different types of compiled representations. We propose to generalise the concept of prime implicates to constraint problems through the concept of domain consequences, and suggest generating them as a compilation strategy. This sets out a new approach to compilation, and allows explanation-related queries to be addressed efficiently. We define ordered automata to compactly represent large sets of domain consequences, in a way orthogonal to existing compilation techniques that represent large sets of solutions.
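The notion of a "most soluble relaxation" can be made concrete with a toy brute-force sketch. The product catalogue, requirement names, and exhaustive enumeration below are all hypothetical stand-ins; the thesis's algorithms use compiled representations precisely to avoid this kind of enumeration.

```python
from itertools import combinations

# Hypothetical product catalogue: each product is a dict of features.
PRODUCTS = [
    {"colour": "red",  "size": "S", "price": 10},
    {"colour": "red",  "size": "L", "price": 20},
    {"colour": "blue", "size": "L", "price": 15},
]

# User requirements expressed as named predicates over a product.
REQUIREMENTS = {
    "red":   lambda p: p["colour"] == "red",
    "large": lambda p: p["size"] == "L",
    "cheap": lambda p: p["price"] <= 12,
}

def allowed(subset):
    """Products satisfying every requirement in `subset`."""
    return [p for p in PRODUCTS
            if all(REQUIREMENTS[r](p) for r in subset)]

def most_soluble_relaxations(size):
    """Among relaxations keeping `size` of the requirements, return those
    allowing the largest number of products (brute force over subsets)."""
    subsets = list(combinations(sorted(REQUIREMENTS), size))
    best = max(len(allowed(s)) for s in subsets)
    return [set(s) for s in subsets if len(allowed(s)) == best]
```

Here the full requirement set {red, large, cheap} is over-constrained (no product satisfies all three), and the sketch reports which two-requirement relaxations admit the most products.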

Relevance: 60.00%

Abstract:

The technological role of handheld devices is fundamentally changing. Portable computers were traditionally application-specific: they were designed and optimised to deliver a specific task. However, it is now commonly acknowledged that future handheld devices need to be multi-functional and capable of executing a range of high-performance applications. This thesis coins the term pervasive handheld computing systems to refer to this type of mobile device. Portable computers face a number of constraints in trying to meet these objectives. They are physically constrained by their size, their computational power, their memory resources, their power usage, and their networking ability. These constraints challenge pervasive handheld computing systems in achieving their multi-functional and high-performance requirements. This thesis proposes a two-pronged methodology to enable pervasive handheld computing systems to meet their future objectives. The methodology is a fusion of two independent and yet complementary concepts. The first step utilises reconfigurable technology to enhance the physical hardware resources within the environment of a handheld device. This approach recognises that reconfigurable computing has the potential to dynamically increase the system functionality and versatility of a handheld device without major loss in performance. The second step incorporates agent-based middleware protocols that support handheld devices in effectively managing and utilising these reconfigurable hardware resources within their environment. The thesis asserts that the combined characteristics of reconfigurable computing and agent technology can meet the objectives of pervasive handheld computing systems.

Relevance: 60.00%

Abstract:

The electroencephalogram (EEG) is a medical technology used in the monitoring of the brain and in the diagnosis of many neurological illnesses. Although coarse in its precision, the EEG is a non-invasive tool that requires minimal set-up time, and is suitably unobtrusive and mobile to allow continuous monitoring of the patient, either in clinical or domestic environments. Consequently, the EEG is the current tool of choice with which to continuously monitor the brain where temporal resolution, ease of use and mobility are important. Traditionally, EEG data are examined by a trained clinician who identifies neurological events of interest. However, recent advances in signal processing and machine learning techniques have allowed the automated detection of neurological events for many medical applications. In doing so, the burden of work on the clinician has been significantly reduced, improving the response time to illness, and allowing the relevant medical treatment to be administered within minutes rather than hours. However, as typical EEG signals are of the order of microvolts (μV), contamination by signals arising from sources other than the brain is frequent. These extra-cerebral sources, known as artefacts, can significantly distort the EEG signal, making its interpretation difficult, and can dramatically degrade automatic neurological event detection performance. This thesis therefore contributes to the further improvement of automated neurological event detection systems by identifying some of the major obstacles to deploying these EEG systems in ambulatory and clinical environments, so that EEG technologies can emerge from the laboratory towards real-world settings, where they can have a real impact on the lives of patients.
In this context, the thesis tackles three major problems in EEG monitoring, namely: (i) head-movement artefacts in ambulatory EEG, (ii) the high number of false detections in state-of-the-art automated epileptiform activity detection systems, and (iii) false detections in state-of-the-art automated neonatal seizure detection systems. To accomplish this, the thesis employs a wide range of statistical, signal processing and machine learning techniques drawn from mathematics, engineering and computer science. The first body of work outlined in this thesis proposes a system to automatically detect head-movement artefacts in ambulatory EEG, using supervised machine learning classifiers to do so. The resulting head-movement artefact detection system is the first of its kind and offers accurate detection of head-movement artefacts in ambulatory EEG. Subsequently, additional physiological signals, in the form of gyroscopes, are used to detect head movements and, in doing so, bring additional information to the head-movement artefact detection task. A framework for combining EEG and gyroscope signals is then developed, offering improved head-movement artefact detection. The artefact detection methods developed for ambulatory EEG are subsequently adapted for use in an automated epileptiform activity detection system. Information from support vector machine classifiers used to detect epileptiform activity is fused with information from artefact-specific detection classifiers in order to significantly reduce the number of false detections in the epileptiform activity detection system. By this means, epileptiform activity detection that compares favourably with other state-of-the-art systems is achieved.
Finally, the problem of false detections in automated neonatal seizure detection is approached in an alternative manner: blind source separation techniques, complemented with information from additional physiological signals, are used to remove respiration artefact from the EEG. In utilising these methods, some encouraging advances have been made in detecting and removing respiration artefacts from the neonatal EEG, and in doing so the performance of the underlying diagnostic technology is improved, bringing its deployment in the real-world, clinical domain one step closer.
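One simple way to picture this kind of classifier fusion is a veto rule: an epileptiform detection is suppressed whenever a parallel artefact-specific classifier is sufficiently confident that the epoch is contaminated. The sketch below is a hypothetical simplification for illustration, not the fusion method actually developed in the thesis; the thresholds are placeholders.

```python
def fuse_detections(event_probs, artefact_probs, event_thr=0.5, veto_thr=0.8):
    """Per-epoch fusion of two classifier outputs: flag epileptiform activity
    only when the event classifier fires AND the artefact classifier
    does not veto the epoch as contaminated."""
    return [e >= event_thr and a < veto_thr
            for e, a in zip(event_probs, artefact_probs)]
```

With per-epoch probabilities from the two classifiers, epochs where the artefact score exceeds the veto threshold are rejected even when the event classifier is confident, which is the mechanism by which such fusion reduces false detections.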

Relevance: 60.00%

Abstract:

The overall objective of this thesis is to integrate a number of micro/nanotechnologies into integrated cartridge-type systems that implement biochemical protocols. Instrumentation and systems were developed to interface with such cartridge systems: (i) implementing microfluidic handling, (ii) executing thermal control during biochemical protocols and (iii) detecting biomolecules associated with inherited or infectious disease. The first system implements biochemical protocols for DNA extraction, amplification and detection. A digital microfluidic chip (ElectroWetting on Dielectric) manipulated droplets of sample and reagent, implementing sample preparation protocols. The cartridge system also integrated a planar magnetic microcoil device to generate local magnetic field gradients, manipulating magnetic beads. For hybridisation detection, a fluorescence microarray screening for mutations associated with the CFTR gene was printed on a waveguide surface and integrated within the cartridge. A second cartridge system was developed to implement amplification and detection, screening for DNA associated with disease-causing pathogens, e.g. Escherichia coli. This system incorporates (i) elastomeric pinch valves isolating liquids during biochemical protocols and (ii) a silver nanoparticle microarray for fluorescent signal enhancement using localised surface plasmon resonance. The microfluidic structures allowed the sample and reagent to be loaded and moved between chambers, with external heaters implementing the thermal steps for nucleic acid amplification and detection. In a technique allowing probe DNA to be immobilised within a microfluidic system using three-dimensional (3D) hydrogel structures, a prepolymer solution containing probe DNA was formulated and introduced into the microfluidic channel. Photo-polymerisation was undertaken, forming 3D hydrogel structures attached to the microfluidic channel surface.
The prepolymer material, poly(ethylene glycol) (PEG), was used to form hydrogel structures containing probe DNA. This hydrogel formulation process was fast compared to conventional biomolecule immobilisation techniques and was also biocompatible with the immobilised biomolecules, as verified by on-chip hybridisation assays. The process allowed control over hydrogel height at the micron scale.

Relevance: 60.00%

Abstract:

In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media and e-government. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnect. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today’s technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today’s 100Gb/s transceiver. Pushing the bit rate on such links beyond today’s commercially available 100Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques.
These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per channel, which helps to overcome the challenges mentioned above in realising 400Gb/s to 1Tb/s transceivers.
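The throughput gain of PAM-4 over binary (NRZ) signalling comes from packing two bits into each transmitted symbol, conventionally with a Gray mapping so that adjacent amplitude levels differ in only one bit. This toy sketch shows the mapping only; the thesis's contribution is the analogue transmitter/receiver circuitry, not this digital step.

```python
# Gray-coded PAM-4 mapping: two bits per symbol, adjacent levels differ
# in exactly one bit (a conventional mapping, used here for illustration).
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_modulate(bits):
    """Map a bit sequence (even length) to PAM-4 amplitude levels."""
    assert len(bits) % 2 == 0
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

def pam4_demodulate(levels):
    """Recover the bit sequence from noiseless PAM-4 levels."""
    inverse = {v: k for k, v in GRAY_PAM4.items()}
    return [b for lvl in levels for b in inverse[lvl]]
```

A symbol stream at a given baud rate therefore carries twice the bit rate of NRZ at the same baud rate, at the cost of a reduced eye opening per level, which is what motivates the equalisation and pre-emphasis techniques described above.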

Relevance: 60.00%

Abstract:

In the field of embedded systems design, coprocessors play an important role as a component to increase performance. Many embedded systems are built around a small General Purpose Processor (GPP). If the GPP cannot meet the performance requirements for a certain operation, a coprocessor can be included in the design. The GPP can then offload the computationally intensive operation to the coprocessor, thus increasing the performance of the overall system. A common application of coprocessors is the acceleration of cryptographic algorithms. The work presented in this thesis discusses coprocessor architectures for various cryptographic algorithms that are found in many cryptographic protocols. Their performance is then analysed on a Field Programmable Gate Array (FPGA) platform. Firstly, the acceleration of Elliptic Curve Cryptography (ECC) algorithms is investigated through instruction set extension of a GPP. The performance of these algorithms in a full hardware implementation is then investigated, and an architecture for the acceleration of the ECC-based digital signature algorithm is developed. Hash functions are also an important component of a cryptographic system. FPGA implementations of recent hash function designs from the SHA-3 competition are discussed, and a fair comparison methodology for hash functions is presented. Many cryptographic protocols involve the generation of random data, for keys or nonces. This requires a True Random Number Generator (TRNG) to be present in the system. Various TRNG designs are discussed and a secure implementation, including post-processing and failure detection, is introduced. Finally, a coprocessor for the acceleration of operations at the protocol level is discussed, where a novel aspect of the design is the secure method in which private-key data is handled.
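As one concrete example of TRNG post-processing, the classic von Neumann corrector removes bias from a raw entropy-source bit stream by consuming non-overlapping bit pairs: 01 emits 0, 10 emits 1, and 00/11 are discarded. This is the textbook technique, offered for illustration; whether the thesis's secure implementation uses this particular scheme is not stated in the abstract.

```python
def von_neumann_extract(bits):
    """Von Neumann corrector: debias a raw TRNG bit stream.
    Consumes bits in non-overlapping pairs: (0,1) -> 0, (1,0) -> 1,
    equal pairs (0,0)/(1,1) are discarded."""
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out
```

For independent input bits with bias p, both surviving pair patterns occur with probability p(1-p), so the output is unbiased; the price is a variable, reduced output rate, which is why hardware TRNGs pair such conditioning with health/failure tests.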

Relevance: 60.00%

Abstract:

This study selected six geographically similar villages with traditional and alternative cultivation methods (two groups of three: one traditional and two alternative) in two counties of Henan Province, China, a representative area of the Huang-huai-hai Plain representing traditional rural China. Soil heavy metal concentrations, floral and faunal biodiversity, and socio-economic data were recorded. Heavy metal concentrations of surface soils from three sites in each village were analysed using Inductively Coupled Plasma Mass Spectrometry (ICP-MS; chromium, nickel, copper, cadmium and lead) and Atomic Absorption Spectrophotometry (AAS; zinc). The floral biodiversity of four land-use types was recorded following the Braun-Blanquet coverage-abundance method using 0.5×0.5m quadrats. The faunal biodiversity of two representative farmland plots was recorded using 0.3×0.3m quadrats at four 0.1m layers. The socio-economic data were recorded through face-to-face interviews with one hundred randomly selected households at each village. Results demonstrate that the different cultivation methods led to different impacts on the above variables. Traditional cultivation led to lower heavy metal concentrations; both alternative management regimes were associated with massive agrochemical input causing heavy metal pollution in farmlands. Floral distribution was significantly affected by village factors. Diverse cultivation supported high floral biodiversity through multi-scale heterogeneous landscapes containing niches and habitats. Faunal distribution was also significantly affected by village factors nested within soil depth. Different faunal groups responded differently, with Acari being taxonomically diverse and Collembola occurring at high densities. Increases in manual labour and crop number in villages using alternative cultivation may positively affect biodiversity.
The results point to the conservation potential of diverse cultivation methods in traditional rural China and other regions under social and political reforms, where traditional agriculture is changing to unified, large-scale mechanized agriculture. This study serves as a baseline for conservation in small-holding agricultural areas of China, and points to the necessity of further studies at larger and longer scales.
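Quadrat counts such as those described above are typically summarised with a diversity index; the Shannon index is one standard choice (the abstract does not name the specific measure used, so this is an illustrative assumption).

```python
from math import log

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i), where p_i is the
    proportion of individuals belonging to species i."""
    total = sum(counts)
    return -sum((c / total) * log(c / total) for c in counts if c > 0)
```

For k equally abundant species the index equals ln(k), its maximum for k species, while a single-species sample scores 0, so higher values reflect both richness and evenness.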

Relevance: 60.00%

Abstract:

Due to growing concerns regarding anthropogenic interference with the climate system, countries across the world are being challenged to develop effective strategies to mitigate climate change by reducing or preventing greenhouse gas (GHG) emissions. The European Union (EU) is committed to contributing to this challenge by setting a number of climate and energy targets for the years 2020, 2030 and 2050, and then agreeing effort sharing amongst Member States. This thesis focuses on one Member State, Ireland, which faces specific challenges and is not on track to meet the targets agreed to date. Before this work commenced, there were no projections of energy demand or supply for Ireland beyond 2020. This thesis uses techno-economic energy modelling instruments to address this knowledge gap. It builds and compares robust, comprehensive policy scenarios, providing a means of assessing the implications of different future energy and emissions pathways for the Irish economy, Ireland’s energy mix and the environment. A central focus of this thesis is to explore the dynamics of the energy system moving towards a low-carbon economy. The thesis develops an energy systems model (the Irish TIMES model) to assess the implications of a range of energy and climate policy targets and target years. It also compares the results generated from the least-cost scenarios with official projections and target pathways, and provides useful metrics and indications to identify key drivers and to support both policy makers and stakeholders in identifying cost-optimal strategies. The thesis also extends the functionality of energy systems modelling by developing and applying new methodologies to provide additional insights, with a focus on particular issues that emerge from the scenario analysis carried out.
Firstly, the thesis develops a methodology for soft-linking an energy systems model (Irish TIMES) with a power systems model (PLEXOS) to improve the interpretation of the electricity-sector results in the energy system model. The soft-linking enables higher temporal resolution and improved characterisation of power plants and power system operation. Secondly, the thesis develops a methodology for the integration of agriculture and energy systems modelling to enable coherent economy-wide climate mitigation scenario analysis. This provides a very useful starting point for considering the trade-offs between the energy system and agriculture in the context of a low-carbon economy, and for enabling analysis of land-use competition. Three specific time-scale perspectives are examined in this thesis (2020, 2030, 2050), aligning with key policy target time horizons. The results indicate that Ireland’s short-term mandatory emissions reduction target will not be achieved without a significant reassessment of renewable energy policy, and that the current dominant policy focus on wind-generated electricity is misplaced. In the medium to long term, the results suggest that energy efficiency is the first cost-effective measure to deliver emissions reduction; that biomass and biofuels are likely to be the most significant fuel source for Ireland in the context of a low-carbon future, prompting the need for a detailed assessment of possible implications for sustainability and competition with the agri-food sectors; that significant changes are required in infrastructure to deliver deep emissions reductions (to enable the electrification of heat and transport, to accommodate carbon capture and storage (CCS) facilities, and for biofuels); and that competition between energy and agriculture for land use will become a key issue. The purpose of this thesis is to strengthen the evidence base underpinning energy and climate policy decisions in Ireland. The methodology is replicable in other Member States.

Relevance: 60.00%

Abstract:

Potato is the most important food crop after wheat and rice. A changing climate, coupled with heightened consumer awareness of how food is produced and legislative changes governing the usage of agrochemicals, means that alternative, more integrated and sustainable approaches are needed for crop management practices. Bioprospecting in the Central Andean Highlands resulted in the isolation and in vitro screening of 600 bacterial isolates. The best-performing isolates under in vitro conditions were field-trialled in their home countries. Six of the isolates, Pseudomonas sp. R41805 (Bolivia), Pseudomonas palleroniana R43631 (Peru), Bacillus sp. R47065, R47131, Paenibacillus sp. B3a R49541, and Bacillus simplex M3-4 R49538 (Ecuador), showed significant increases in the yield of potato. Using -omic technologies (i.e. volatilomic, transcriptomic, proteomic and metabolomic), the influence of the microbial isolates on plant defence responses was determined. Volatile organic compounds of bacterial isolates were identified using GC/MS. RT-qPCR analysis revealed significant expression of Ethylene Response Factor 3 (ERF3), and the results of this study suggest that the dual inoculation of potato with Pseudomonas sp. R41805 and Rhizophagus irregularis MUCL 41833 may play a part in the activation of the plant defence system via ERF3. The proteomic analysis by 2-DE has shown that priming by Pseudomonas sp. R41805 can induce the expression of proteins related to photosynthesis and protein folding in in vitro potato plantlets. The metabolomics study has shown that the total glycoalkaloid (TGA) content of greenhouse-grown potato tubers following inoculation with Pseudomonas sp. R41805 did not exceed the acceptable safety limit (200 mg kg-1 FW). As a result of this study, a number of bacteria with commercial potential have been identified that may offer sustainable alternatives in both Andean and European agricultural settings.

Relevance: 60.00%

Abstract:

The literature clearly links the quality and capacity of a country’s infrastructure to its economic growth and competitiveness. This thesis analyses the historic national and spatial distribution of investment by the Irish state in its physical networks (water, wastewater and roads) across the 34 local authorities, and examines how Ireland is perceived internationally relative to its economic counterparts. An appraisal of the current status and shortcomings of Ireland’s infrastructure is undertaken using key stakeholders from foreign direct investment companies and national policymakers to identify Ireland’s infrastructural gaps, along with current challenges in how the country is delivering infrastructure. The output of these interviews identified many issues with how infrastructure decision-making is currently undertaken. This led to an evaluation of how other countries inform decision-making, and thus this thesis presents a framework for how and why Ireland should embrace a Systems of Systems (SoS) methodology for infrastructure decision-making going forward. In undertaking this study, a number of other infrastructure challenges were identified: significant political interference in infrastructure decision-making and delivery; the need for a national agency to remove the existing ‘silo’ mentality in infrastructure delivery; and the ways in which tax incentives can interfere with the market, and their significance. The two key infrastructure gaps identified during the interview process were: the need for government intervention in the rollout of sufficient communication capacity, at a competitive cost, outside of Dublin; and the urgent need to address water quality and capacity, with approximately 25% of the population currently being served by water of unacceptable quality. Despite considerable investment in its national infrastructure, Ireland’s infrastructure performance continues to trail behind its economic partners in the Eurozone and OECD.
Ireland is projected to have the highest growth rate in the Eurozone in 2015 and 2016, albeit that it required a bailout in 2010, and, at the time of writing, is beginning to invest in its infrastructure networks again. This thesis proposes the development and implementation of an SoS approach to infrastructure decision-making, based on: existing spatial and capacity data for each of the constituent infrastructure networks; and scenario computation and analysis of alternative drivers, e.g. demographic change, economic variability and demand/capacity constraints. The output of such an analysis would provide valuable evidence upon which policy makers and decision makers alike could rely, which has been lacking in historic investment decisions.

Relevance: 60.00%

Abstract:

Model predictive control (MPC) has often been referred to in the literature as a potential method for more efficient control of building heating systems. Though a significant performance improvement can be achieved with an MPC strategy, the complexity introduced to the commissioning of the system is often prohibitive. Models are required which can capture the thermodynamic properties of the building with sufficient accuracy for meaningful predictions to be made. Furthermore, a large number of tuning weights may need to be determined to achieve a desired performance. For MPC to become a practicable alternative, these issues must be addressed. Acknowledging the impact of the external environment as well as the interaction of occupants on the thermal behaviour of the building, techniques have been developed in this work for deriving building models from data in which large, unmeasured disturbances are present. A spatio-temporal filtering process was introduced to determine estimates of the disturbances from measured data, which were then incorporated with metaheuristic search techniques to derive high-order simulation models capable of replicating the thermal dynamics of a building. While a high-order simulation model allowed control strategies to be analysed and compared, low-order models were required for use within the MPC strategy itself. The disturbance estimation techniques were adapted for use with system-identification methods to derive such models. MPC formulations were then derived to enable a more straightforward commissioning process, and implemented in a validated simulation platform. A prioritised-objective strategy was developed which allowed the tuning parameters typically associated with an MPC cost function to be omitted from the formulation, by separating the conflicting requirements of comfort satisfaction and energy reduction within a lexicographic framework.
The improved ability of the formulation to be set up and reconfigured under faulted conditions was shown.
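The lexicographic (prioritised-objective) idea can be sketched with a toy discrete example: stage one minimises comfort violation, stage two minimises energy use among only the comfort-optimal candidates, so no weighting between the two objectives ever has to be chosen. Every constant below (the first-order room model, setpoint, comfort band, and candidate input sequences) is an illustrative placeholder, not taken from the thesis.

```python
def comfort_violation(temps, setpoint=21.0, band=1.0):
    """Total excursion of room temperature outside the comfort band."""
    return sum(max(0.0, abs(t - setpoint) - band) for t in temps)

def simulate(u_seq, t0=18.0, a=0.9, b=0.5):
    """Toy first-order room model: T[k+1] = a*T[k] + b*u[k]."""
    temps, t = [], t0
    for u in u_seq:
        t = a * t + b * u
        temps.append(t)
    return temps

def lexicographic_mpc(candidates):
    """Stage 1: minimise comfort violation over all candidate input sequences.
    Stage 2: among the comfort-optimal candidates, minimise total energy.
    No tuning weights trade the two objectives against each other."""
    best_v = min(comfort_violation(simulate(u)) for u in candidates)
    feasible = [u for u in candidates
                if comfort_violation(simulate(u)) <= best_v + 1e-9]
    return min(feasible, key=sum)
```

In a real MPC formulation each stage would be a constrained optimisation over a continuous input space rather than a search over a finite candidate list, but the two-stage priority structure is the same.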

Relevance: 60.00%

Abstract:

Background: Diagnostic decision-making is made through a combination of System 1 (intuitive or pattern-recognition) and System 2 (analytic) thinking. The purpose of this study was to use the Cognitive Reflection Test (CRT) to evaluate and compare the level of System 1 and System 2 thinking among medical students in pre-clinical and clinical programs. Methods: The CRT is a three-question test designed to measure the ability of respondents to activate metacognitive processes and switch to System 2 (analytic) thinking where System 1 (intuitive) thinking would lead them astray. Each CRT question has a correct analytical (System 2) answer and an incorrect intuitive (System 1) answer. A group of medical students in Years 2 and 3 (pre-clinical) and Year 4 (in clinical practice) of a 5-year medical degree were studied. Results: Ten percent (13/128) of students gave the intuitive answers to all three questions (suggesting they generally relied on System 1 thinking), while almost half (44%) answered all three correctly (indicating full analytical, System 2 thinking). Only 3-13% gave incorrect answers that were neither the analytical nor the intuitive responses. Non-native English-speaking students (n = 11) had a lower mean number of correct answers than native English speakers (n = 117; 1.0 vs 2.12 respectively; p < 0.01). As students progressed through questions 1 to 3, the percentage of correct System 2 answers increased and the percentage of intuitive answers decreased in both the pre-clinical and clinical students. Conclusions: Up to half of the medical students demonstrated full or partial reliance on System 1 (intuitive) thinking in response to these analytical questions. While no claims can be made from their CRT performance as to their future expertise as clinicians, the test may be used to help students understand the importance of awareness and regulation of their thinking processes in clinical practice.
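The scoring scheme described in the Methods can be made concrete. The item names and numeric answers below are the classic three CRT items from Frederick (2005), each with its analytic answer and intuitive lure; the assumption that the study used these exact items (rather than variants) is mine.

```python
# Classic three-item CRT (Frederick, 2005): analytic answer vs intuitive lure.
CRT_ANSWERS = {
    "bat_and_ball":    {"analytic": 5,  "intuitive": 10},   # cents
    "widget_machines": {"analytic": 5,  "intuitive": 100},  # minutes
    "lily_pads":       {"analytic": 47, "intuitive": 24},   # days
}

def classify(item, response):
    """Label a response as analytic (System 2), intuitive (System 1), or other."""
    key = CRT_ANSWERS[item]
    if response == key["analytic"]:
        return "system2"
    if response == key["intuitive"]:
        return "system1"
    return "other"

def score(responses):
    """Count System-2 (analytic) answers out of the three items."""
    return sum(classify(i, r) == "system2" for i, r in responses.items())
```

A respondent answering 5 cents, 100 minutes and 47 days would thus score 2 out of 3, with the widget item classified as an intuitive (System 1) response.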