851 results for Mathematical ability
Abstract:
The driving task requires sustained attention over prolonged periods and may be performed in highly predictable or repetitive environments. Such conditions can induce drowsiness or hypovigilance and impair the ability to react to critical events. Identifying vigilance decrement in monotonous conditions has been a major subject of research, but no research to date has attempted to predict this vigilance decrement. This pilot study aims to show that vigilance decrements due to monotonous tasks can be predicted through mathematical modelling. A short vigilance task sensitive to brief lapses of vigilance, the Sustained Attention to Response Task (SART), is used to assess participants' performance. This task models the driver's ability to cope with unpredicted events by performing the expected action. A Hidden Markov Model (HMM) is proposed to predict participants' hypovigilance. The driver's vigilance evolution is modelled as a hidden state and is correlated with an observable variable: the participant's reaction times. The experiment shows that the monotony of the task can lead to a substantial vigilance decline in less than five minutes, and that this impairment can be predicted four minutes in advance with 86% accuracy using HMMs. The experiment thus shows that mathematical models such as HMMs can efficiently predict hypovigilance through surrogate measures. The presented model could lead to an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, offering the potential to enhance road safety and prevent road crashes.
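The abstract does not specify how the HMM was implemented, so the following is only a minimal sketch: a two-state Gaussian HMM (alert vs. hypovigilant) fitted to reaction times, using the hmmlearn package. All reaction-time values and model settings below are hypothetical.

```python
# Illustrative sketch only: a two-state HMM with Gaussian emissions over
# reaction times. The study's actual state definitions, emission model and
# training data are not given in the abstract; all values here are invented.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)

# Hypothetical SART reaction times (seconds): fast and stable while alert,
# slower and more variable once vigilance declines.
alert_rts = rng.normal(0.35, 0.05, size=200)
drowsy_rts = rng.normal(0.55, 0.12, size=100)
rts = np.concatenate([alert_rts, drowsy_rts]).reshape(-1, 1)

# Fit a 2-state HMM: hidden state ~ vigilance level, observation ~ reaction time.
model = GaussianHMM(n_components=2, covariance_type="diag",
                    n_iter=100, random_state=0)
model.fit(rts)

# Decode the most likely vigilance state sequence and inspect the transition
# matrix, which governs how far ahead a decline can be anticipated.
states = model.predict(rts)
print("Transition matrix:\n", model.transmat_)
print("State means (s):", model.means_.ravel())
```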
Abstract:
This paper reports on the research and development of an ICT tool to facilitate the learning of ratio and fractions by adult prisoners. The design of the ICT tool was informed by a semiotic framework for mathematical meaning-making, and the tool therefore employed multiple semiotic resources, including topological, typological, and social-actional resources. The results showed that an individual semiotic resource could represent only part of a mathematical concept, while at the same time it might signify something else and create a misconception. When multiple semiotic resources were utilised, the mathematical ideas were learnt better.
Abstract:
We developed orthogonal least-squares techniques for fitting crystalline lens shapes, and used the bootstrap method to determine the uncertainties associated with the estimated vertex radii of curvature and asphericities of five different models. Three existing models were investigated: one that uses two separate conics for the anterior and posterior surfaces, and two whole-lens models based on a modulated hyperbolic cosine function and on a generalized conic function. Two new models were proposed: one that uses two interdependent conics, and a polynomial-based whole-lens model. The models were used to describe the in vitro shape of a data set of twenty human lenses aged 7–82 years. The two-conic-surfaces model (7 mm zone diameter) and the interdependent-surfaces model had significantly lower merit functions than the other three models for this data set, indicating that they most likely describe human lens shape over a wide age range better than the other models (although the two-conic-surfaces model cannot describe the lens equatorial region). Considerable differences were found between some models in the estimates of radii of curvature and surface asphericities. The hyperbolic cosine model and the new polynomial-based whole-lens model had the best precision in determining the radii of curvature and surface asphericities of the five models considered. Most models found a significant increase in anterior, but not posterior, radius of curvature with age. Most models found a wide scatter of asphericities, with the asphericities usually being positive and not significantly related to age. As the interdependent-surfaces model had a lower merit function than the three whole-lens models, there is further scope to develop an accurate model of the complete shape of human lenses of all ages. The results highlight the continued difficulty in selecting an appropriate model for crystalline lens shape.
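As a rough illustration of the kind of fitting and uncertainty estimation described above (not the authors' code), the sketch below fits one surface with the standard conic sag equation z(y) = y² / (R(1 + √(1 − (1+Q)y²/R²))) and bootstraps the fit to estimate the uncertainty in the vertex radius of curvature R and asphericity Q. Ordinary least squares is used for brevity, whereas the study used orthogonal least squares; the lens profile data are hypothetical.

```python
# Minimal sketch: conic-surface fit of a hypothetical lens profile plus a
# non-parametric bootstrap of the fitted R (vertex radius) and Q (asphericity).
import numpy as np
from scipy.optimize import least_squares

def conic_sag(y, R, Q):
    # Sag of a conic surface; the sqrt argument is clamped so the fit stays
    # numerically stable if the optimiser wanders into an invalid region.
    arg = np.clip(1.0 - (1.0 + Q) * y**2 / R**2, 1e-9, None)
    return y**2 / (R * (1.0 + np.sqrt(arg)))

def fit_conic(y, z, R0=10.0, Q0=0.0):
    res = least_squares(lambda p: conic_sag(y, p[0], p[1]) - z, x0=[R0, Q0])
    return res.x  # [R, Q]

# Hypothetical digitised profile of one lens surface (all lengths in mm).
rng = np.random.default_rng(1)
y = np.linspace(-3.5, 3.5, 80)
z = conic_sag(y, R=10.5, Q=-1.0) + rng.normal(0.0, 0.01, y.size)

R_hat, Q_hat = fit_conic(y, z)

# Bootstrap: refit on resampled points to estimate parameter uncertainty.
boot = []
for _ in range(500):
    idx = rng.integers(0, y.size, y.size)
    boot.append(fit_conic(y[idx], z[idx]))
boot = np.array(boot)

print(f"R = {R_hat:.2f} mm (bootstrap SD {boot[:, 0].std():.2f})")
print(f"Q = {Q_hat:.2f} (bootstrap SD {boot[:, 1].std():.2f})")
```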
Abstract:
1. Species' distribution modelling relies on adequate data sets to build reliable statistical models with high predictive ability. However, the money spent collecting empirical data might be better spent on management; a less expensive source of species' distribution information is expert opinion. This study evaluates expert knowledge and its source. In particular, we determine whether models built on expert knowledge apply over multiple regions or only within the region where the knowledge was derived. 2. The case study focuses on the distribution of the brush-tailed rock-wallaby Petrogale penicillata in eastern Australia. We brought together substantial, well-designed field data and knowledge from nine experts across two biogeographically different regions. We used a novel elicitation tool within a geographical information system to collect expert opinions systematically. The tool used an indirect approach to elicitation, asking experts simpler questions about observable rather than abstract quantities, with measures in place to identify uncertainty and offer feedback. Bayesian analysis was used to combine field data and expert knowledge in each region to determine: (i) how expert opinion affected models based on field data and (ii) how similar expert-informed models were within and across regions. 3. The elicitation tool effectively captured the experts' opinions and their uncertainties. Experts were comfortable with the map-based elicitation approach, especially with graphical feedback. Experts tended to predict lower values of species occurrence than the field data. 4. Across experts, consensus on effect sizes occurred for several habitat variables. Expert opinion generally influenced predictions from field data. However, south-east Queensland and north-east New South Wales experts had different opinions on the influence of elevation and geology, with these differences attributable to geological differences between the regions. 5. Synthesis and applications. When formulated as priors in Bayesian analysis, expert opinion is useful for modifying or strengthening patterns exhibited by empirical data sets that are limited in size or scope. Nevertheless, the ability of an expert to extrapolate beyond their region of knowledge may be poor. Hence there is significant merit in obtaining information from local experts when compiling species' distribution models across several regions.
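As a toy illustration of how expert opinion can be formulated as a prior and combined with field data (far simpler than the spatially explicit model used in the study), the sketch below encodes a hypothetical expert estimate of site occupancy as a Beta prior and updates it with hypothetical survey counts via the conjugate Beta-Binomial rule.

```python
# Toy illustration only (not the study's model): an expert's elicited estimate
# of site occupancy in one habitat class is treated as a Beta prior and updated
# with hypothetical field survey data using the conjugate Beta-Binomial rule.
from scipy import stats

# Expert says roughly 30% of sites in this habitat class are occupied, with
# moderate confidence -> Beta(a, b) with mean 0.3 and effective sample size 20.
a_prior, b_prior = 6.0, 14.0

# Hypothetical field survey: 50 sites visited, 8 with rock-wallaby sign.
n_sites, n_occupied = 50, 8

# Conjugate update: posterior is Beta(a + successes, b + failures).
a_post = a_prior + n_occupied
b_post = b_prior + (n_sites - n_occupied)
post = stats.beta(a_post, b_post)

print(f"Prior mean {a_prior / (a_prior + b_prior):.2f}, "
      f"posterior mean {post.mean():.2f}, "
      f"95% credible interval {post.ppf([0.025, 0.975]).round(2)}")
```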
Abstract:
We examined differences in response latencies obtained during a validated video-based hazard perception driving test between three healthy, community-dwelling groups: 22 mid-aged (35-55 years), 34 young-old (65-74 years), and 23 old-old (75-84 years) current drivers, matched for gender, education level, and vocabulary. We found no significant difference in performance between mid-aged and young-old groups, but the old-old group was significantly slower than the other two groups. The differences between the old-old group and the other groups combined were independently mediated by useful field of view (UFOV), contrast sensitivity, and simple reaction time measures. Given that hazard perception latency has been linked with increased crash risk, these results are consistent with the idea that increased crash risk in older adults could be a function of poorer hazard perception, though this decline does not appear to manifest until age 75+ in healthy drivers.
Abstract:
Surveillance for invasive non-indigenous species (NIS) is an integral part of a quarantine system. Estimating the efficiency of a surveillance strategy relies on many uncertain parameters estimated by experts, such as the efficiency of its components in the face of the specific NIS, the ability of the NIS to inhabit different environments, and so on. Because it is important to detect an invasive NIS within a critical period of time, it is crucial that these uncertainties be accounted for in the design of the surveillance system. We formulate a detection model that takes into account, in addition to structured sampling for incursive NIS, incidental detection by untrained workers. We use info-gap theory to satisfice (rather than maximize) the probability of detection, while at the same time maximizing the robustness to uncertainty. We demonstrate the trade-off between robustness to uncertainty and an increase in the required probability of detection. An empirical example based on the detection of Pheidole megacephala on Barrow Island demonstrates the use of info-gap analysis to select a surveillance strategy.
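The sketch below illustrates the info-gap trade-off described above with a deliberately simplified detection model (not the study's): two detection pathways with uncertain nominal efficiencies, where robustness is the largest fractional deviation from those nominal values for which the worst-case detection probability still meets the required level. All efficiencies and counts are hypothetical.

```python
# Illustrative info-gap robustness curve for a simplified surveillance model.
# Two pathways are assumed: structured sampling (traps) and incidental
# detection by untrained workers. All numbers below are hypothetical.
import numpy as np

p_trap, p_incidental = 0.05, 0.01   # nominal per-unit detection efficiencies
n_trap, n_incidental = 100, 400     # numbers of detection "units"

def worst_case_detection(h):
    """Worst-case overall detection probability at uncertainty horizon h."""
    pt = max(p_trap * (1.0 - h), 0.0)
    pi = max(p_incidental * (1.0 - h), 0.0)
    return 1.0 - (1.0 - pt) ** n_trap * (1.0 - pi) ** n_incidental

def robustness(p_required):
    """Largest h at which the worst case still satisfies p_required."""
    hs = np.linspace(0.0, 1.0, 2001)
    ok = [h for h in hs if worst_case_detection(h) >= p_required]
    return ok[-1] if ok else 0.0

# The trade-off: demanding a higher detection probability leaves less robustness.
for pc in (0.80, 0.90, 0.95, 0.99):
    print(f"required P(detect) = {pc:.2f} -> robustness h* = {robustness(pc):.2f}")
```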
Abstract:
Low back pain is an increasing problem in industrialised countries and, although it is a major socio-economic problem in terms of medical costs and lost productivity, relatively little is known about the processes underlying the development of the condition. This is partly due to the complex interactions between bone, muscle, nerves and other soft tissues of the spine, and to the fact that direct observation and/or measurement of the human spine is not possible using non-invasive techniques. Biomechanical models have been used extensively to estimate the forces and moments experienced by the spine. These models provide a means of estimating internal parameters which cannot be measured directly. However, the application of most currently available models is restricted to tasks resembling those for which the model was designed, due to the simplified representation of the anatomy. The aim of this research was to develop a biomechanical model to investigate the changes in forces and moments induced by muscle injury. To accurately simulate muscle injuries, a detailed quasi-static three-dimensional model representing the anatomy of the lumbar spine was developed. This model includes the nine major force-generating muscles of the region (the erector spinae, comprising the longissimus thoracis and iliocostalis lumborum; multifidus; quadratus lumborum; latissimus dorsi; transverse abdominis; internal oblique and external oblique), as well as the thoracolumbar fascia through which the transverse abdominis and parts of the internal oblique and latissimus dorsi muscles attach to the spine. The muscles included in the model are represented by 170 muscle fascicles, each with its own force-generating characteristics and line of action. Particular attention has been paid to ensuring the muscle lines of action are anatomically realistic, particularly for muscles with broad attachments (e.g. the internal and external obliques), muscles which attach to the spine via the thoracolumbar fascia (e.g. transverse abdominis), and muscles whose paths are altered by bony constraints such as the rib cage (e.g. iliocostalis lumborum pars thoracis and parts of the longissimus thoracis pars thoracis). To this end, a separate sub-model, which accounts for the shape of the torso by modelling it as a series of ellipses, was developed to model the lines of action of the oblique muscles. Likewise, a separate sub-model of the thoracolumbar fascia was developed which accounts for the middle and posterior layers of the fascia and ensures that the line of action of the posterior layer is related to the size and shape of the erector spinae muscle. Published muscle activation data are used to enable the model to predict the maximum forces and moments that may be generated by the muscles. These predictions are validated against published experimental studies reporting maximum isometric moments for a variety of exertions. The model performs well for flexion, extension and lateral bend exertions, but underpredicts the axial twist moments that may be developed. This discrepancy is most likely the result of differences between the experimental methodology and the modelled task. The application of the model is illustrated using examples of muscle injuries created by surgical procedures. The three examples represent a posterior surgical approach to the spine, an anterior approach to the spine, and unilateral total hip replacement surgery.
Although the three examples simulate different muscle injuries, all demonstrate the production of significant asymmetrical moments and/or reduced joint compression following surgical intervention. This result has implications for patient rehabilitation and the potential for further injury to the spine. The development and application of the model has highlighted a number of areas where current knowledge is deficient. These include muscle activation levels for tasks in postures other than upright standing, changes in spinal kinematics following surgical procedures such as spinal fusion or fixation, and a general lack of understanding of how the body adjusts to muscle injuries with respect to muscle activation patterns and levels, rate of recovery from temporary injuries and compensatory actions by other muscles. Thus the comprehensive and innovative anatomical model which has been developed not only provides a tool to predict the forces and moments experienced by the intervertebral joints of the spine, but also highlights areas where further clinical research is required.
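To illustrate the torso-ellipse idea mentioned above in the simplest possible terms (this is not the thesis's sub-model), the sketch below places the attachment points of a single hypothetical oblique fascicle on elliptical cross-sections at two levels and computes its line of action and moment about an assumed joint centre; all dimensions and angles are invented for the example.

```python
# Illustrative sketch only: torso cross-sections approximated as ellipses, with
# a single oblique fascicle's attachments constrained to lie on those ellipses.
# All geometry below is hypothetical.
import numpy as np

def point_on_ellipse(a, b, theta, z):
    """Point on an elliptical torso cross-section with semi-axes a, b at height z."""
    return np.array([a * np.cos(theta), b * np.sin(theta), z])

# Hypothetical torso geometry (semi-axes in mm) at two vertebral levels.
lower = dict(a=150.0, b=110.0, z=0.0)     # pelvic level
upper = dict(a=140.0, b=100.0, z=180.0)   # lower rib cage level

# Hypothetical oblique fascicle: lateral-inferior to anterior-superior attachment.
origin = point_on_ellipse(lower["a"], lower["b"], np.deg2rad(150), lower["z"])
insertion = point_on_ellipse(upper["a"], upper["b"], np.deg2rad(60), upper["z"])

# Unit line-of-action vector and the moment per unit muscle force about an
# assumed joint centre (the magnitude of this vector is the moment arm).
unit = (insertion - origin) / np.linalg.norm(insertion - origin)
joint = np.array([0.0, 0.0, 90.0])
moment_per_unit_force = np.cross(origin - joint, unit)

print("line of action (unit vector):", unit.round(3))
print("moment per unit force (mm):", moment_per_unit_force.round(1))
```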
Abstract:
In this article we explore young children's development of mathematical knowledge and reasoning processes as they worked on two modelling problems (the Butter Beans Problem and the Airplane Problem). The problems involve authentic situations that need to be interpreted and described in mathematical ways. Both problems include tables of data, together with background information containing specific criteria to be considered in the solution process. Four classes of third-graders (8 years of age) and their teachers participated in the 6-month program, which included preparatory modelling activities along with professional development for the teachers. In discussing our findings we address: (a) ways in which the children applied their informal, personal knowledge to the problems; (b) how the children interpreted the tables of data, including difficulties they experienced; (c) how the children operated on the data, including aggregating and comparing data and looking for trends and patterns; (d) how the children developed important mathematical ideas; and (e) ways in which the children represented their mathematical understandings.
Abstract:
The main objective of this PhD was to further develop Bayesian spatio-temporal models, specifically the Conditional Autoregressive (CAR) class of models, for the analysis of sparse disease outcomes such as birth defects. The motivation for the thesis arose from problems encountered when analysing a large birth defect registry in New South Wales. The specific components and related research objectives of the thesis were developed from gaps in the literature on current formulations of the CAR model, and from health service planning requirements. The analyses used data from a large probabilistically linked database covering 1990 to 2004, consisting of fields from two separate registries: the Birth Defect Registry (BDR) and the Midwives Data Collection (MDC). The main objective was split into smaller goals. The first goal was to determine how the specification of the neighbourhood weight matrix affects the smoothing properties of the CAR model; this is the focus of chapter 6. The second was to evaluate the usefulness of incorporating a zero-inflated Poisson (ZIP) component as well as a shared-component model for modelling a sparse outcome; this is carried out in chapter 7. The third goal was to identify optimal sampling and sample size schemes for selecting individual-level data for a hybrid ecological spatial model; this is done in chapter 8. Finally, I wanted to bring together the earlier improvements to the CAR model and, along with demographic projections, provide forecasts for birth defects at the Statistical Local Area (SLA) level; chapter 9 describes how this is done. For the first objective, I examined a series of neighbourhood weight matrices and showed how smoothing the relative risk estimates according to similarity in an important covariate (i.e. maternal age) improved the model's ability to recover the underlying risk, compared with the traditional adjacency (specifically the Queen) method of applying weights. Next, to address the sparseness and excess zeros commonly encountered in the analysis of rare outcomes such as birth defects, I compared several models, including an extension of the usual Poisson model to encompass excess zeros in the data. This was achieved via a mixture model, which also encompassed the shared-component model to improve the estimation of sparse counts by borrowing strength through a shared component (e.g. latent risk factors) with a referent outcome (caesarean section was used in this example). Using the Deviance Information Criterion (DIC), I showed that the proposed model performed better than the usual models, but only when both outcomes shared a strong spatial correlation. The next objective involved identifying the optimal sampling and sample size strategy for incorporating individual-level data with areal covariates in a hybrid study design. I performed extensive simulation studies, evaluating thirteen different sampling schemes along with variations in sample size. This was done in the context of an ecological regression model that incorporated spatial correlation in the outcomes, as well as accommodating both individual and areal measures of covariates. Using the average mean squared error (AMSE), I showed that a simple random sample of 20% of the SLAs, followed by selecting all cases in the chosen SLAs along with an equal number of controls, provided the lowest AMSE. The final objective involved combining the improved spatio-temporal CAR model with population (i.e. women) forecasts to provide 30-year annual estimates of birth defects at the SLA level in New South Wales, Australia. The projections were illustrated using sixteen different SLAs, representing the various areal measures of socio-economic status and remoteness. A sensitivity analysis of the assumptions used in the projections was also undertaken. By the end of the thesis, I show how challenges in the spatial analysis of rare diseases such as birth defects can be addressed: by formulating the neighbourhood weight matrix to smooth according to a key covariate (i.e. maternal age), by incorporating a ZIP component to model excess zeros in outcomes, and by borrowing strength from a referent outcome (i.e. caesarean counts). An efficient strategy for sampling individual-level data, and sample size considerations for rare diseases, are also presented. Finally, projections of birth defect categories at the SLA level are made.
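As a minimal illustration of the covariate-weighted neighbourhood matrix idea (not the thesis code), the sketch below builds a CAR weight matrix for five hypothetical areas in which adjacent areas are weighted by similarity in mean maternal age rather than by binary Queen adjacency.

```python
# Illustrative sketch: a CAR neighbourhood weight matrix in which neighbours
# with similar values of a covariate (hypothetical mean maternal age per area)
# receive more weight than the usual binary adjacency weights.
import numpy as np

# Hypothetical adjacency for 5 small areas (1 = share a boundary).
adjacency = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 1, 1],
    [0, 1, 1, 0, 1],
    [0, 0, 1, 1, 0],
], dtype=float)

maternal_age = np.array([27.1, 31.4, 29.8, 33.0, 26.5])  # hypothetical covariate

# Covariate-similarity weights: neighbours with similar maternal age get more
# weight, so relative-risk smoothing borrows strength from "like" areas.
diff = np.abs(maternal_age[:, None] - maternal_age[None, :])
W = adjacency * np.exp(-diff / diff[adjacency > 0].mean())

# Row-standardise, as is conventional for CAR specifications.
W_std = W / W.sum(axis=1, keepdims=True)
print(np.round(W_std, 2))
```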
Abstract:
This thesis presents an investigation into theoretical models for the formation and interaction of nanoparticles. The work includes a literature review of current models followed by five chapters of original research. The thesis has been submitted in partial fulfilment of the requirements for the degree of doctor of philosophy by publication, and each of the five chapters therefore consists of a peer-reviewed journal article. The thesis concludes with a discussion of what has been achieved during the PhD candidature, the potential applications of this research, and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave), and finally on lattice surfaces in the presence of high heat gradients. We describe a number of new models for multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates, and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes chosen to best describe different physical situations, and these processes have been analysed using a number of mathematical methods. The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community about the proposed mechanism of thermal fragmentation, we present compelling evidence in this thesis supporting the currently proposed mechanism and show that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, a method of manipulation using acoustic standing waves is investigated. We analysed the effect of frequency and particle size on the ability of a particle to be manipulated by means of a standing acoustic wave, and we report the existence of a critical frequency for a particular particle size; this frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that at large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation, due to the decreasing size of the boundary layer between acoustic nodes. Our model uses a multiple-time-scale approach to calculate the long-term effects of the standing acoustic field on the particles interacting with the sound. These effects are then combined with the effects of Brownian motion to obtain a complete mathematical description of the particle dynamics in such acoustic fields.
Finally, we develop a numerical routine for the description of "thermal tweezers". Currently the technique of thermal tweezers is predominantly theoretical, although a handful of successful experiments have demonstrated the effect in practice. Thermal tweezers is the name given to the way in which particles can be manipulated on a lattice surface by careful selection of a heat distribution over the surface. Typically, theoretical simulations of the effect are time-consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for simulating particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical calculation of the effective diffusion constant resulting from the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a large quantity of accurate results on a household PC in a matter of hours, which is far better than was previously achievable.
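As a generic illustration of the final step described above (not the thesis's routine, which addresses a lattice surface and a temperature-dependent effective diffusion constant), the sketch below solves a 1-D diffusion equation with a hypothetical spatially varying effective diffusion coefficient using an explicit finite-volume scheme; only mass conservation and the spreading of the density are checked.

```python
# Illustrative 1-D finite-volume sketch: evolve a particle probability density
# under the diffusion equation with a hypothetical spatially varying effective
# diffusion coefficient D(x). Fluxes are evaluated at cell faces and the
# boundaries are reflecting (zero flux).
import numpy as np

L, n = 1.0, 200                        # domain length and number of cells
dx = L / n
x = (np.arange(n) + 0.5) * dx          # cell centres

# Hypothetical effective diffusion constant D(x), standing in for the
# lattice- and temperature-dependent constant described above.
D = 1e-3 * (1.0 + 0.8 * np.sin(2.0 * np.pi * x))
D_face = 0.5 * (D[:-1] + D[1:])        # D interpolated to interior cell faces

p = np.exp(-((x - 0.5) / 0.05) ** 2)   # initial density: a narrow pulse
p /= p.sum() * dx                      # normalise to unit total probability

dt = 0.4 * dx**2 / D.max()             # explicit time-step stability limit

for _ in range(5000):
    flux = -D_face * (p[1:] - p[:-1]) / dx     # Fickian flux at interior faces
    p[1:-1] -= dt * (flux[1:] - flux[:-1]) / dx
    p[0] -= dt * flux[0] / dx                  # zero-flux boundary (left)
    p[-1] += dt * flux[-1] / dx                # zero-flux boundary (right)

print("total probability conserved:", np.isclose(p.sum() * dx, 1.0))
print("peak density after diffusion:", round(float(p.max()), 3))
```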
Abstract:
Alginate microspheres are considered a promising drug-carrier material for bone repair due to their excellent biocompatibility, but their main disadvantages are low drug entrapment efficiency and non-controllable release. The aim of this study was to investigate the effect of incorporating mesoporous bioglass (MBG), non-mesoporous bioglass (BG) or hydroxyapatite (HAp) into alginate microspheres on their drug-loading and release properties. X-ray diffraction (XRD), transmission electron microscopy (TEM), scanning electron microscopy (SEM), Fourier transform infrared spectroscopy (FTIR), and atomic emission spectroscopy (AES) were used to analyse the composition, structure and dissolution of the bioactive inorganic materials and their microspheres. The dexamethasone (DEX) loading and release abilities of the four microsphere types were tested in phosphate-buffered saline (PBS) at varying pH. Results showed that the drug-loading capacity was enhanced by incorporating bioactive inorganic materials into the alginate microspheres, with the MBG/alginate microspheres having the highest drug-loading ability. DEX release from the alginate microspheres correlated with the dissolution of MBG, BG and HAp in PBS, and pH was an effective factor in controlling DEX release: a high pH resulted in greater DEX release, whereas a low pH delayed it. In addition, MBG/alginate, BG/alginate and HAp/alginate microspheres had varying apatite-formation and dissolution abilities, indicating that the composites would behave differently with respect to bioactivity. The study suggests that microspheres made of a composite of bioactive inorganic materials and alginate have a bioactivity and degradation profile that greatly improves their drug delivery capacity, enhancing their potential application as bioactive filler materials for bone tissue regeneration.