932 results for HMM, Nosocomial Pathogens, Genotyping, Statistical Modelling, VRE


Relevance: 20.00%

Abstract:

This article presents one approach to addressing the important issue of interdisciplinarity in the primary school mathematics curriculum, namely, through realistic mathematical modelling problems. Such problems draw upon other disciplines for their contexts and data. The article initially considers the nature of modelling with complex systems and discusses how such experiences differ from existing problem-solving activities in the primary mathematics curriculum. Principles for designing interdisciplinary modelling problems are then addressed, with reference to two mathematical modelling problems: one based in the scientific domain and the other in the literary domain. Examples of the models children have created in solving these problems follow. A reflection on the differences in the diversity and sophistication of these models raises issues regarding the design of interdisciplinary modelling problems. The article concludes with suggested opportunities for generating multidisciplinary projects within the regular mathematics curriculum.

Relevance: 20.00%

Abstract:

The equations governing saltwater intrusion in coastal aquifers are complex. Backward Euler time stepping is often used to advance the solution of these equations in time, which typically requires small time steps to ensure an accurate solution. We show that a method of lines approach incorporating variable-order backward differentiation formulas can greatly improve the efficiency of the time stepping process.
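The efficiency gain can be illustrated with a hedged sketch: SciPy's variable-order BDF integrator against a fixed-step backward Euler scheme on a simple stiff test ODE. The test equation, step size and tolerances below are illustrative assumptions; the actual saltwater-intrusion equations are far more complex.

```python
# Sketch only: a stiff linear test ODE y' = -50*(y - cos(t)) stands in for the
# (much more complex) saltwater intrusion equations.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    return -50.0 * (y - np.cos(t))

def backward_euler(t_span, y0, dt):
    """Fixed-step backward Euler; the implicit update is solved exactly
    because this particular ODE is linear in y."""
    t0, t1 = t_span
    ts = np.arange(t0, t1 + dt / 2, dt)
    ys = [y0]
    for t in ts[1:]:
        # Implicit step: y_new = y_old + dt*(-50*(y_new - cos(t)))
        ys.append((ys[-1] + 50.0 * dt * np.cos(t)) / (1.0 + 50.0 * dt))
    return ts, np.array(ys)

# Variable-order BDF adapts its step size and order, taking large steps
# once the fast transient has decayed.
sol = solve_ivp(rhs, (0.0, 10.0), [0.0], method="BDF", rtol=1e-6, atol=1e-9)
ts_be, ys_be = backward_euler((0.0, 10.0), 0.0, dt=1e-3)

print("BDF steps:           ", sol.t.size)
print("backward Euler steps:", ts_be.size)
```

For stiff problems the adaptive BDF solver typically needs orders of magnitude fewer steps than the fixed small step backward Euler requires for comparable accuracy.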

Relevance: 20.00%

Abstract:

The driving task requires sustained attention during prolonged periods, and can be performed in highly predictable or repetitive environments. Such conditions can induce drowsiness or hypovigilance and impair the ability to react to critical events. Identifying vigilance decrement in monotonous conditions has been a major subject of research, but no research to date has attempted to predict this vigilance decrement. This pilot study aims to show that vigilance decrements due to monotonous tasks can be predicted through mathematical modelling. A short vigilance task sensitive to brief lapses of vigilance, the Sustained Attention to Response Task, is used to assess participants' performance. This task models the driver's ability to cope with unpredicted events by performing the expected action. A Hidden Markov Model (HMM) is proposed to predict participants' hypovigilance. The driver's vigilance evolution is modelled as a hidden state and is correlated with an observable variable: the participant's reaction times. This experiment shows that the monotony of the task can lead to a significant vigilance decline in less than five minutes. This impairment can be predicted four minutes in advance with 86% accuracy using HMMs. The experiment showed that mathematical models such as HMMs can efficiently predict hypovigilance through surrogate measures. The presented model could lead to the development of an in-vehicle device that detects driver hypovigilance in advance and warns the driver accordingly, offering the potential to enhance road safety and prevent road crashes.
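The modelling idea can be sketched with a minimal two-state HMM over hidden vigilance, observing discretized reaction times, with the forward algorithm filtering the current state and the transition matrix propagating it ahead in time. All probabilities below are illustrative assumptions, not the study's fitted parameters.

```python
# Two hidden states (0 = vigilant, 1 = hypovigilant), two observation symbols
# (0 = fast reaction time, 1 = slow reaction time). Numbers are made up.
import numpy as np

A = np.array([[0.95, 0.05],    # state transition probabilities
              [0.10, 0.90]])
B = np.array([[0.8, 0.2],      # P(obs | state): vigilant -> mostly fast RTs
              [0.3, 0.7]])     # hypovigilant -> mostly slow RTs
pi = np.array([0.9, 0.1])      # initial state distribution

def forward_filter(obs):
    """Forward algorithm: P(state_t | obs_1..t), normalized at each step."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

def predict_ahead(alpha, k):
    """Propagate the filtered state distribution k steps into the future."""
    return alpha @ np.linalg.matrix_power(A, k)

obs = [0, 0, 1, 1, 1, 1]            # reaction times slowing down
alpha = forward_filter(obs)
future = predict_ahead(alpha, 4)    # e.g. four time steps ahead
print("P(hypovigilant now):       ", round(alpha[1], 3))
print("P(hypovigilant in 4 steps):", round(future[1], 3))
```

A run of slow reaction times pushes the filtered probability of the hypovigilant state up, and the prediction step is what would let an in-vehicle device warn the driver in advance.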

Relevance: 20.00%

Abstract:

Light Detection and Ranging (LIDAR) has great potential to assist vegetation management in power line corridors by providing more accurate geometric information about the power line assets and vegetation along the corridors. However, the development of algorithms for the automatic processing of LIDAR point cloud data, in particular for feature extraction and classification of raw point cloud data, is still in its infancy. In this paper, we take advantage of LIDAR intensity and classify ground and non-ground points by statistically analyzing the skewness and kurtosis of the intensity data. Moreover, the Hough transform is employed to detect power lines from the filtered object points. The experimental results show the effectiveness of our methods and indicate that better results were obtained using LIDAR intensity data than elevation data.
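One common skewness-based filtering scheme is "skewness balancing": repeatedly strip the highest-valued return until the skewness of the remaining sample drops to zero or below, treating the stripped points as non-ground. Whether this exact variant matches the paper's procedure is an assumption, and the synthetic intensity data below is purely illustrative.

```python
# Hedged sketch of skewness balancing on LIDAR intensity values.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(42)
ground = rng.normal(30.0, 3.0, size=500)    # ground returns: roughly symmetric
objects = rng.normal(60.0, 5.0, size=80)    # object returns: high-intensity tail
intensity = np.concatenate([ground, objects])

def skewness_balance(values):
    """Strip the upper tail until the remaining sample is no longer right-skewed."""
    kept = np.sort(values)
    removed = 0
    while kept.size > 3 and skew(kept) > 0.0:
        kept = kept[:-1]                    # drop the current maximum
        removed += 1
    return kept, removed

kept, removed = skewness_balance(intensity)
print("initial skewness:", round(float(skew(intensity)), 2))
print("initial kurtosis:", round(float(kurtosis(intensity)), 2))
print("points classified as non-ground:", removed)
```

The removed high-intensity points would then be the candidate object set from which lines are extracted, e.g. with a Hough transform.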

Relevance: 20.00%

Abstract:

Human-specific Bacteroides HF183 (HS-HF183), human-specific Enterococcus faecium esp (HS-esp), human-specific adenovirus (HS-AVs) and human-specific polyomavirus (HS-PVs) assays were evaluated in freshwater, seawater and distilled water for their ability to detect fresh sewage. The sewage-spiked water samples were also tested for the concentrations of traditional fecal indicators (i.e., Escherichia coli, enterococci and Clostridium perfringens) and enteric viruses such as enteroviruses (EVs), sapoviruses (SVs) and torquetenoviruses (TVs). The overall host specificity of the HS-HF183 marker in differentiating between humans and other animals was 98%, while the HS-esp, HS-AVs and HS-PVs markers showed 100% host specificity. All the human-specific markers showed >97% sensitivity in detecting human fecal pollution. E. coli, enterococci and C. perfringens were detected up to sewage dilutions of 10^-5, 10^-4 and 10^-3, respectively. HS-esp, HS-AVs, HS-PVs, SVs and TVs were detected up to a sewage dilution of 10^-4, whilst EVs were detected up to a dilution of 10^-5. The ability of the HS-HF183 marker to detect fresh sewage was 3-4 orders of magnitude higher than that of the HS-esp and viral markers. The ability to detect fresh sewage in freshwater, seawater and distilled water matrices was similar for the human-specific bacterial and viral markers. Based on our data, it appears that human-specific molecular markers are sensitive measures of fresh sewage pollution, with the HS-HF183 marker the most sensitive in terms of detecting fresh sewage. However, the presence of the HS-HF183 marker in environmental waters may not necessarily indicate the presence of enteric viruses, because of its high abundance in sewage relative to enteric viruses. More research is required on the persistence of these markers in environmental water samples in relation to traditional fecal indicators and enteric pathogens.

Relevance: 20.00%

Abstract:

This work presents an extended Joint Factor Analysis (JFA) model that includes explicit modelling of unwanted within-session variability. The goals of the proposed extended JFA model are to improve verification performance with short utterances by compensating for the effects of limited or imbalanced phonetic coverage, and to produce a flexible JFA model that is effective over a wide range of utterance lengths without adjustments to model parameters such as retraining session subspaces. Experimental results on the 2006 NIST SRE corpus demonstrate the flexibility of the proposed model, providing competitive results over a wide range of utterance lengths without retraining and also yielding modest improvements in a number of conditions over the current state of the art.
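For readers unfamiliar with JFA, the standard decomposition that this work extends expresses a session's mean supervector as M = m + Vy + Ux + Dz, with speaker factors y, session (channel) factors x and a speaker-specific residual z. The sketch below only shows the shapes involved; the dimensions and random values are toy assumptions.

```python
# Toy illustration of the standard JFA supervector decomposition.
import numpy as np

rng = np.random.default_rng(7)
sv_dim, n_spk_factors, n_sess_factors = 100, 5, 3

m = rng.standard_normal(sv_dim)                    # UBM mean supervector
V = rng.standard_normal((sv_dim, n_spk_factors))   # speaker subspace
U = rng.standard_normal((sv_dim, n_sess_factors))  # session subspace
D = np.diag(rng.standard_normal(sv_dim))           # diagonal residual loading

y = rng.standard_normal(n_spk_factors)             # speaker factors
x = rng.standard_normal(n_sess_factors)            # session (channel) factors
z = rng.standard_normal(sv_dim)                    # speaker-specific residual

M = m + V @ y + U @ x + D @ z                      # session mean supervector
print("supervector shape:", M.shape)
```

The abstract's extension adds explicit within-session variability terms on top of this structure.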

Relevance: 20.00%

Abstract:

Traditionally, conceptual modelling of business processes involves the use of visual grammars for the representation of, amongst other things, activities, choices and events. These grammars, while very useful for experts, are difficult for naive stakeholders to understand. Annotations of such process models have been developed to assist in understanding aspects of these grammars via map-based approaches, and further work has looked at forms of 3D conceptual models. However, no one has sought to embed the conceptual models into a fully featured 3D world, using spatial annotations to explicate the underlying model clearly. In this paper, we present an approach to conceptual process model visualisation that enhances a 3D virtual world with annotations representing process constructs, facilitating insight into the developed model. We then present a prototype implementation of a 3D Virtual BPMN Editor that embeds BPMN process models into a 3D world. We show how this gives extra support for tasks performed by the conceptual modeller, providing better process model communication to stakeholders.

Relevance: 20.00%

Abstract:

Quantitative Microbial Risk Assessment (QMRA) was used to quantify the risk of infection associated with exposure to pathogens from potable and non-potable uses of roof-harvested rainwater in South East Queensland (SEQ). A total of 84 rainwater samples were analysed for the presence of faecal indicators (using culture-based methods) and zoonotic bacterial and protozoan pathogens using binary and quantitative PCR (qPCR). The concentrations of the Salmonella invA and Giardia lamblia β-giardin genes ranged from 65 to 380 genomic units/1000 mL and 9 to 57 genomic units/1000 mL of water, respectively. After converting gene copies to cell/cyst numbers, the risk of infection from G. lamblia and Salmonella spp. associated with the use of rainwater for bi-weekly garden hosing was calculated to be below the threshold value of 1 extra infection per 10,000 persons per year. However, the estimated risk of infection from drinking the rainwater daily was 44-250 (for G. lamblia) and 85-520 (for Salmonella spp.) infections per 10,000 persons per year. Since this health risk seems higher than that expected from the reported incidence of gastroenteritis, the assumptions used to estimate these infection risks are critically discussed. Nevertheless, it would seem prudent to disinfect rainwater for potable use.
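The core QMRA calculation converts a daily ingested dose into daily and annual infection risks, commonly via the exponential dose-response model P_inf = 1 - exp(-r * dose). The sketch below uses a literature value of r for Giardia (~0.0199) and a purely hypothetical daily dose; these are not the study's inputs, just an illustration of the arithmetic.

```python
# Hedged sketch of the exponential dose-response step in a QMRA.
import math

r_giardia = 0.0199      # exponential dose-response parameter for Giardia (literature value)
dose_per_day = 0.001    # cysts ingested per day -- hypothetical figure for illustration

# Daily risk from the exponential model, then annualised over 365 exposures.
p_daily = 1.0 - math.exp(-r_giardia * dose_per_day)
p_annual = 1.0 - (1.0 - p_daily) ** 365

print(f"daily infection risk:  {p_daily:.2e}")
print(f"annual infection risk: {p_annual:.2e}")
print(f"per 10,000 persons/yr: {p_annual * 1e4:.0f}")
```

The annualisation step is why even tiny daily risks can exceed the 1-per-10,000-per-year benchmark for daily drinking exposure while intermittent uses such as garden hosing stay below it.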

Relevance: 20.00%

Abstract:

1. Species' distribution modelling relies on adequate data sets to build reliable statistical models with high predictive ability. However, the money spent collecting empirical data might be better spent on management. A less expensive source of species' distribution information is expert opinion. This study evaluates expert knowledge and its source. In particular, we determine whether models built on expert knowledge apply over multiple regions or only within the region where the knowledge was derived. 2. The case study focuses on the distribution of the brush-tailed rock-wallaby Petrogale penicillata in eastern Australia. From two biogeographically different regions, we brought together substantial, well-designed field data and knowledge from nine experts. We used a novel elicitation tool within a geographical information system to systematically collect expert opinions. The tool utilized an indirect approach to elicitation, asking experts simpler questions about observable rather than abstract quantities, with measures in place to identify uncertainty and offer feedback. Bayesian analysis was used to combine field data and expert knowledge in each region to determine: (i) how expert opinion affected models based on field data and (ii) how similar expert-informed models were within regions and across regions. 3. The elicitation tool effectively captured the experts' opinions and their uncertainties. Experts were comfortable with the map-based elicitation approach used, especially with graphical feedback. Experts tended to predict lower values of species occurrence compared with field data. 4. Across experts, consensus on effect sizes occurred for several habitat variables. Expert opinion generally influenced predictions from field data. However, south-east Queensland and north-east New South Wales experts had different opinions on the influence of elevation and geology, with these differences attributable to geological differences between these regions. 5. 
Synthesis and applications. When formulated as priors in Bayesian analysis, expert opinion is useful for modifying or strengthening patterns exhibited by empirical data sets that are limited in size or scope. Nevertheless, the ability of an expert to extrapolate beyond their region of knowledge may be poor. Hence there is significant merit in obtaining information from local experts when compiling species' distribution models across several regions.
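The "expert opinion as prior" idea can be sketched in its simplest conjugate form: expert belief about occurrence probability encoded as a Beta prior and updated with field presence/absence counts. The study used regression models, so this Beta-Binomial version, along with its prior strength and counts, is an illustrative assumption only.

```python
# Hedged sketch: combining an expert prior with field data via Beta-Binomial
# conjugacy. Expert believes occurrence probability is ~0.6, with confidence
# worth about 20 observations -> Beta(12, 8).
a_prior, b_prior = 12.0, 8.0

presences, absences = 3, 17      # field data: species found at 3 of 20 sites

# Conjugate update: add successes to a, failures to b.
a_post = a_prior + presences
b_post = b_prior + absences

prior_mean = a_prior / (a_prior + b_prior)
data_mean = presences / (presences + absences)
post_mean = a_post / (a_post + b_post)

print(f"prior mean:     {prior_mean:.3f}")
print(f"field estimate: {data_mean:.3f}")
print(f"posterior mean: {post_mean:.3f}")   # pulled between prior and data
```

The posterior mean sits between the expert's belief and the field estimate, which is exactly the "modifying or strengthening patterns in limited data" behaviour the abstract describes.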

Relevance: 20.00%

Abstract:

Expert elicitation is the process of retrieving and quantifying expert knowledge in a particular domain. Such information is of particular value when the empirical data is expensive, limited, or unreliable. This paper describes a new software tool, called Elicitator, which assists in quantifying expert knowledge in a form suitable for use as a prior model in Bayesian regression. Potential environmental domains for applying this elicitation tool include habitat modeling, assessing detectability or eradication, ecological condition assessments, risk analysis, and quantifying inputs to complex models of ecological processes. The tool has been developed to be user-friendly, extensible, and facilitate consistent and repeatable elicitation of expert knowledge across these various domains. We demonstrate its application to elicitation for logistic regression in a geographically based ecological context. The underlying statistical methodology is also novel, utilizing an indirect elicitation approach to target expert knowledge on a case-by-case basis. For several elicitation sites (or cases), experts are asked simply to quantify their estimated ecological response (e.g. probability of presence), and its range of plausible values, after inspecting (habitat) covariates via GIS.
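One simple way to turn an expert's "best estimate plus plausible range" into a prior is method-of-moments encoding into a Beta distribution, treating a quarter of the range as an approximate standard deviation. Elicitator's actual encoding may differ; this is an assumed, simplified version for illustration.

```python
# Hedged sketch: converting an indirect elicitation (best estimate, plausible
# range) for a probability of presence into Beta(a, b) prior parameters.
def beta_from_elicitation(best, low, high):
    """Method-of-moments Beta fit from an elicited mean and plausible range."""
    mean = best
    sd = (high - low) / 4.0            # rough normal-style spread assumption
    var = sd * sd
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common

a, b = beta_from_elicitation(best=0.7, low=0.5, high=0.9)
print(f"elicited prior: Beta({a:.2f}, {b:.2f})")
print(f"prior mean:     {a / (a + b):.2f}")
```

A tighter elicited range yields larger a and b, i.e. a more confident prior, which is how uncertainty expressed by the expert propagates into the regression.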

Relevance: 20.00%

Abstract:

The central aim for the research undertaken in this PhD thesis is the development of a model for simulating water droplet movement on a leaf surface and to compare the model behavior with experimental observations. A series of five papers has been presented to explain systematically the way in which this droplet modelling work has been realised. Knowing the path of the droplet on the leaf surface is important for understanding how a droplet of water, pesticide, or nutrient will be absorbed through the leaf surface. An important aspect of the research is the generation of a leaf surface representation that acts as the foundation of the droplet model. Initially a laser scanner is used to capture the surface characteristics for two types of leaves in the form of a large scattered data set. After the identification of the leaf surface boundary, a set of internal points is chosen over which a triangulation of the surface is constructed. We present a novel hybrid approach for leaf surface fitting on this triangulation that combines Clough-Tocher (CT) and radial basis function (RBF) methods to achieve a surface with a continuously turning normal. The accuracy of the hybrid technique is assessed using numerical experimentation. The hybrid CT-RBF method is shown to give good representations of Frangipani and Anthurium leaves. Such leaf models facilitate an understanding of plant development and permit the modelling of the interaction of plants with their environment. The motion of a droplet traversing this virtual leaf surface is affected by various forces including gravity, friction and resistance between the surface and the droplet. The innovation of our model is the use of thin-film theory in the context of droplet movement to determine the thickness of the droplet as it moves on the surface. Experimental verification shows that the droplet model captures reality quite well and produces realistic droplet motion on the leaf surface. 
Most importantly, we observed that the simulated droplet motion follows the contours of the surface and spreads as a thin film. In the future, the model may be applied to determine the path of a droplet of pesticide along a leaf surface before it falls from or comes to a standstill on the surface. It will also be used to study the paths of many droplets of water or pesticide moving and colliding on the surface.
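The surface-fitting stage can be sketched with a plain radial basis function interpolant on scattered points. The thesis's hybrid Clough-Tocher/RBF scheme is more involved; here a thin-plate-spline RBF on a synthetic "leaf height" function stands in for it, and all data are made up.

```python
# Hedged sketch: fitting a smooth surface z = f(x, y) to scattered points.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(200, 2))       # scattered (x, y) samples
z = np.cos(pts[:, 0]) * np.sin(pts[:, 1])         # synthetic "leaf height"

# Thin-plate-spline RBF gives a smooth surface through the scattered data.
surface = RBFInterpolator(pts, z, kernel="thin_plate_spline")

query = np.array([[0.2, -0.3], [0.0, 0.5]])
z_hat = surface(query)
z_true = np.cos(query[:, 0]) * np.sin(query[:, 1])
print("max abs error at query points:", float(np.max(np.abs(z_hat - z_true))))
```

A droplet simulation then queries such a surface (and its gradients) to resolve the forces acting along the path, which is why a continuously turning normal matters for the hybrid fit.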

Relevance: 20.00%

Abstract:

Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment; that is, q = 2. We also consider the rescaled range R/S analysis and the periodogram method to detect memory in financial time series and compare their results with the MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX) and long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent short-memory and long-memory financial processes as detected in Part I.
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis provides an application of the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second-order moment, seem to underestimate the long memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
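The standard DFA that MF-DFA generalises (the q = 2 case) can be sketched from scratch: integrate the mean-centred series, detrend it linearly in windows of several sizes, and read the scaling exponent off the log-log slope of the fluctuation function. Window sizes and the white-noise test series below are illustrative; for white noise the exponent should come out near 0.5, with persistent long-memory series giving values above 0.5.

```python
# Sketch of standard (q = 2) detrended fluctuation analysis.
import numpy as np

def dfa_exponent(x, scales):
    """Estimate the DFA scaling exponent alpha of series x."""
    profile = np.cumsum(x - np.mean(x))       # integrate the mean-centred series
    flucts = []
    for s in scales:
        n_seg = len(profile) // s
        f2 = []
        for i in range(n_seg):
            seg = profile[i * s:(i + 1) * s]
            t = np.arange(s)
            coeffs = np.polyfit(t, seg, 1)    # local linear detrending
            f2.append(np.mean((seg - np.polyval(coeffs, t)) ** 2))
        flucts.append(np.sqrt(np.mean(f2)))
    # The slope of log F(s) versus log s is the DFA exponent.
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

rng = np.random.default_rng(1)
white = rng.standard_normal(4096)
scales = [16, 32, 64, 128, 256]
alpha = dfa_exponent(white, scales)
print("DFA exponent for white noise:", round(float(alpha), 2))
```

MF-DFA extends this by raising the segment fluctuations to a range of q-th powers (and using higher-order polynomial detrending), so that multifractal series reveal a q-dependent family of exponents.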

Relevance: 20.00%

Abstract:

Background: There are innumerable diabetes studies that have investigated associations between risk factors, protective factors, and health outcomes; however, these individual predictors are part of a complex network of interacting forces. Moreover, there is little awareness about resilience or its importance in chronic disease in adulthood, especially diabetes. Thus, this is the first study to: (1) extensively investigate the relationships among a host of predictors and multiple adaptive outcomes; and (2) conceptualise a resilience model among people with diabetes. Methods: This cross-sectional study was divided into two research studies. Study One was to translate two diabetes-specific instruments (Problem Areas In Diabetes, PAID; Diabetes Coping Measure, DCM) into a Chinese version and to examine their psychometric properties for use in Study Two in a convenience sample of 205 outpatients with type 2 diabetes. In Study Two, an integrated theoretical model is developed and evaluated using the structural equation modelling (SEM) technique. A self-administered questionnaire was completed by 345 people with type 2 diabetes from the endocrine outpatient departments of three hospitals in Taiwan. Results: Confirmatory factor analyses confirmed a one-factor structure of the PAID-C which was similar to the original version of the PAID. Strong content validity of the PAID-C was demonstrated. The PAID-C was associated with HbA1c and diabetes self-care behaviours, confirming satisfactory criterion validity. There was a moderate relationship between the PAID-C and the Perceived Stress Scale, supporting satisfactory convergent validity. The PAID-C also demonstrated satisfactory stability and high internal consistency. A four-factor structure and strong content validity of the DCM-C was confirmed. Criterion validity demonstrated that the DCM-C was significantly associated with HbA1c and diabetes self-care behaviours. 
There was a statistical correlation between the DCM-C and the Revised Ways of Coping Checklist, suggesting satisfactory convergent validity. Test-retest reliability demonstrated satisfactory stability of the DCM-C. The total scale of the DCM-C showed adequate internal consistency. Age, duration of diabetes, diabetes symptoms, diabetes distress, physical activity, coping strategies, and social support were the most consistent factors associated with adaptive outcomes in adults with diabetes. Resilience was positively associated with coping strategies, social support, health-related quality of life, and diabetes self-care behaviours. Results of the structural equation modelling revealed protective factors had a significant direct effect on adaptive outcomes; however, the construct of risk factors was not significantly related to adaptive outcomes. Moreover, resilience can moderate the relationships among protective factors and adaptive outcomes, but there were no interaction effects of risk factors and resilience on adaptive outcomes. Conclusion: This study contributes to an understanding of how risk factors and protective factors work together to influence adaptive outcomes in blood sugar control, health-related quality of life, and diabetes self-care behaviours. Additionally, resilience is a positive personality characteristic and may be importantly involved in the adjustment process among people living with type 2 diabetes.
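The internal-consistency results reported for the translated scales are typically summarised with Cronbach's alpha, which can be computed directly from an item-response matrix. The made-up responses below (five items driven by one latent trait) are an assumption; the real PAID-C/DCM-C data are not reproduced here.

```python
# Hedged sketch: Cronbach's alpha for internal consistency, from scratch.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(3)
latent = rng.normal(0.0, 1.0, size=(200, 1))      # one underlying trait
noise = rng.normal(0.0, 0.7, size=(200, 5))
responses = latent + noise                        # five correlated items
alpha = cronbach_alpha(responses)
print("Cronbach's alpha:", round(float(alpha), 2))
```

Values around 0.9, as here, indicate the high internal consistency that scale-validation studies such as this one aim to demonstrate.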