932 results for HMM, Nosocomial Pathogens, Genotyping, Statistical Modelling, VRE
Abstract:
1. Essential hypertension occurs in people with an underlying genetic predisposition who subject themselves to adverse environmental influences. The number of genes involved is unknown, as is the extent to which each contributes to final blood pressure and the severity of the disease. 2. In the past, studies of potential candidate genes have been performed by association (case-control) analysis of unrelated individuals or linkage (pedigree or sibpair) analysis of families. These studies have resulted in several positive findings but, as one may expect, also an enormous number of negative results. 3. In order to uncover the major genetic loci for essential hypertension, it is proposed that scanning the genome systematically in 100-200 affected sibships should prove successful. 4. This involves genotyping sets of hypertensive sibships to determine their complement of several hundred microsatellite polymorphisms. Those that are highly informative, by having a high heterozygosity, are most suitable. Also, the markers need to be spaced sufficiently evenly across the genome so as to ensure adequate coverage. 5. Tests are performed to determine increased segregation of alleles of each marker with hypertension. The analytical tools involve specialized statistical programs that can detect such differences. Non-parametric multipoint analysis is an appropriate approach. 6. In this way, loci for essential hypertension are beginning to emerge.
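The marker informativeness mentioned in point 4 is commonly quantified by expected heterozygosity, H = 1 - Σ pᵢ². A minimal sketch of that calculation (illustrative only; the function name and example frequencies are not from the abstract):

```python
def expected_heterozygosity(allele_freqs):
    """Expected heterozygosity H = 1 - sum(p_i^2) for one marker's alleles."""
    total = sum(allele_freqs)
    # Normalise, in case frequencies were supplied as raw allele counts.
    probs = [f / total for f in allele_freqs]
    return 1.0 - sum(p * p for p in probs)

# A microsatellite with many evenly distributed alleles is highly informative;
# a nearly monomorphic marker is not.
uninformative = expected_heterozygosity([0.95, 0.05])  # H ~ 0.095
informative = expected_heterozygosity([0.2] * 5)       # H = 0.8
```

Markers with H near 0.8, as in the second example, are the kind described as "highly informative" for a genome scan.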
Abstract:
Travel time prediction has long been a topic of transportation research, but most prediction models in the literature are limited to motorways. Travel time prediction on arterial networks is challenging because it involves traffic signals and significant variability in individual vehicle travel times. The limited availability of traffic data from arterial networks makes travel time prediction even more challenging. Recently, there has been significant interest in exploiting Bluetooth data for travel time estimation. This research analysed real travel time data collected by the Brisbane City Council using Bluetooth technology on arterials. Databases, including experienced average daily travel time, were created and classified over approximately 8 months. Thereafter, based on the data characteristics, Seasonal Auto Regressive Integrated Moving Average (SARIMA) modelling was applied to the database for short-term travel time prediction. The SARIMA model not only takes the previous continuous lags into account, but also uses the values from the same time on previous days for travel time prediction. This is carried out by defining a seasonality coefficient which improves the accuracy of travel time prediction in linear models. The accuracy, robustness and transferability of the model are evaluated by comparing the real and predicted values at three sites within the Brisbane network. The results contain detailed validation for different prediction horizons (5 to 90 minutes). The model performance is evaluated mainly on congested periods and compared to the naive technique of using the historical average.
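The core idea, combining recent autoregressive lags with a same-time-of-previous-day seasonal term, can be sketched as a toy linear predictor. The weights, function name, and the 288-observation day (5-minute sampling) below are illustrative assumptions, not the thesis's fitted SARIMA model:

```python
def predict_travel_time(history, lag_weights, seasonal_weight, season=288):
    """Toy linear predictor: weighted recent lags plus a seasonal lag.

    history: travel times at a fixed sampling interval (newest last);
    with 5-minute samples, one day is 288 observations.
    """
    # Autoregressive part: weight the most recent observations.
    ar_part = sum(w * history[-(i + 1)] for i, w in enumerate(lag_weights))
    # Seasonal part: the value at the same time on the previous day.
    seasonal_part = seasonal_weight * history[-season]
    return ar_part + seasonal_part

# One flat day of 10-minute travel times, then two congested observations.
history = [10.0] * 288 + [12.0, 14.0]
forecast = predict_travel_time(history, [0.5, 0.3], 0.2)
```

A fitted SARIMA model would estimate these weights (and differencing orders) from the data rather than fixing them by hand.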
Abstract:
Dwell time at the busway station has a significant effect on bus capacity and delay. Dwell time has conventionally been estimated using models developed on the basis of field survey data. However, field surveys are resource- and cost-intensive, so dwell time estimation based on limited observations can be somewhat inaccurate. Most public transport systems are now equipped with Automatic Passenger Count (APC) and/or Automatic Fare Collection (AFC) systems. AFC in particular reduces on-board ticketing time and the driver's workload, and ultimately reduces bus dwell time. AFC systems can record all passenger transactions, providing transit agencies with access to vast quantities of data. AFC data provides transaction timestamps; however, this information differs from dwell time because passengers may tag on or tag off at times other than when doors open and close. This research effort contended that models could be developed to reliably estimate dwell time distributions when measured distributions of transaction times are known. Development of the models required calibration and validation using field survey data of actual dwell times, and an appreciation of another component of transaction time, namely bus time in queue. This research develops models for a peak period and an off-peak period at a busway station on the South East Busway (SEB) in Brisbane, Australia.
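The gap between transaction-time span and actual dwell time can be illustrated with a rough proxy: the span of AFC timestamps padded by door-opening and door-closing buffers. The buffer values and function name here are hypothetical placeholders, not the calibrated parameters of the abstract's models:

```python
def estimate_dwell_time(transaction_times, open_buffer=2.0, close_buffer=3.0):
    """Rough dwell-time proxy in seconds.

    transaction_times: AFC tag-on/tag-off timestamps (seconds) for one stop.
    open_buffer/close_buffer: assumed lag between doors opening and the first
    transaction, and between the last transaction and doors closing.
    """
    span = max(transaction_times) - min(transaction_times)
    return open_buffer + span + close_buffer

# Three transactions spread over 10 seconds imply ~15 s of dwell
# under these (hypothetical) buffer assumptions.
dwell = estimate_dwell_time([100.0, 104.0, 110.0])
```

A calibrated model would instead estimate the full dwell-time distribution from the measured transaction-time distribution, as the abstract describes.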
Abstract:
Pesticides used in agricultural systems must be applied in economically viable and environmentally sensitive ways, and this often requires expensive field trials on spray deposition and retention by plant foliage. Computational models to describe whether a spray droplet sticks (adheres), bounces or shatters on impact, and whether any rebounding parent or shatter daughter droplets are recaptured, would provide an estimate of spray retention and thereby act as a useful guide prior to any field trials. Parameter-driven interactive software has been implemented to enable the end-user to study and visualise droplet interception and impaction on a single, horizontal leaf. Living chenopodium, wheat and cotton leaves have been scanned to capture the surface topography, and realistic virtual leaf surface models have been generated. Individual leaf models have then been subjected to virtual spray droplets and predictions made of droplet interception with the virtual plant leaf. Thereafter, the impaction behaviour of the droplets and the subsequent behaviour of any daughter droplets, up until re-capture, are simulated to give the predicted total spray retention by the leaf. A series of critical thresholds for the stick, bounce and shatter elements in the impaction process have been developed for different combinations of formulation, droplet size and velocity, and leaf surface characteristics to provide this output. The results show that droplet properties, spray formulations and leaf surface characteristics all influence the predicted amount of spray retained on a horizontal leaf surface. Overall, the predicted spray retention increases as formulation surface tension, static contact angle, droplet size and velocity decrease. Predicted retention on cotton is much higher than on chenopodium. The average predicted retention on a single horizontal leaf across all droplet size, velocity and formulation scenarios tested is 18, 30 and 85% for chenopodium, wheat and cotton, respectively.
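Stick/bounce/shatter regimes of this kind are often characterised by the droplet Weber number, the ratio of impact inertia to surface tension. A simplified classifier sketch follows; the thresholds are illustrative only, since the actual model's criteria also involve static contact angle and leaf surface characteristics:

```python
def weber_number(density, velocity, diameter, surface_tension):
    """We = rho * v^2 * D / sigma (all SI units), dimensionless."""
    return density * velocity ** 2 * diameter / surface_tension

def impaction_outcome(we, bounce_threshold=5.0, shatter_threshold=80.0):
    """Classify an impact by Weber number against illustrative thresholds."""
    if we < bounce_threshold:
        return "stick"
    if we < shatter_threshold:
        return "bounce"
    return "shatter"

# A 200-micron water droplet at 2 m/s (sigma ~ 0.072 N/m): We ~ 11,
# which falls in the bounce regime under these example thresholds.
we = weber_number(1000.0, 2.0, 200e-6, 0.072)
```

Lowering the surface tension (as adjuvant formulations do) raises We for the same droplet, shifting impacts toward shatter, consistent with the trends the abstract reports.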
Abstract:
High-speed broadband internet access is widely recognised as a catalyst for social and economic development. However, the provision of broadband Internet services with existing solutions to rural populations, scattered over extensive geographical areas, remains both an economic and a technical challenge. As a feasible solution, the Commonwealth Scientific and Industrial Research Organisation (CSIRO) proposed a highly spectrally efficient, innovative and cost-effective fixed wireless broadband access technology, which uses analogue TV frequency spectrum and Multi-User MIMO (MU-MIMO) technology with Orthogonal Frequency Division Multiplexing (OFDM). MIMO systems have emerged as a promising solution to the increasing demand for higher data rates, better quality of service and higher network capacity. However, the performance of MIMO systems can be significantly affected by the type of propagation environment (e.g., indoor, outdoor urban, or outdoor rural) and the operating frequency. For instance, the large spectral efficiencies associated with MIMO systems, which assume the rich scattering of urban environments, may not be achievable in all propagation environments, such as outdoor rural environments, due to lower scatterer densities. Since this is the first time a MU-MIMO-OFDM fixed broadband wireless access solution has been deployed in a rural environment, questions arise from both theoretical and practical standpoints; for example, what capacity gains are available for the proposed solution under realistic rural propagation conditions? Currently, no comprehensive channel measurement and capacity analysis results are available for MU-MIMO-OFDM fixed broadband wireless access systems which employ large-scale multiple antennas at the Access Point (AP) and analogue TV frequency spectrum in rural environments.
Moreover, according to the literature, no deterministic MU-MIMO channel models exist that define rural wireless channels by accounting for terrain effects. This thesis fills the aforementioned knowledge gaps with channel measurements, channel modelling and a comprehensive capacity analysis for MU-MIMO-OFDM fixed wireless broadband access systems in rural environments. For the first time, channel measurements were conducted in a rural farmland near Smithton, Tasmania using CSIRO's broadband wireless access solution. A novel deterministic MU-MIMO-OFDM channel model, which can be used for accurate performance prediction of rural MU-MIMO channels with dominant Line-of-Sight (LoS) paths, was developed under this research. Results show that the proposed solution can achieve 43.7 bits/s/Hz at a Signal-to-Noise Ratio (SNR) of 20 dB in rural environments. Based on channel measurement results, this thesis verifies that the deterministic channel model accurately predicts channel capacity in rural environments with a Root Mean Square (RMS) error of 0.18 bits/s/Hz. Moreover, this study presents a comprehensive capacity analysis of rural MU-MIMO-OFDM channels using experimental, simulated and theoretical models. Based on the validated deterministic model, the effects on channel capacity of different user distribution angles (θ) around the AP were further analysed. For instance, when SNR = 20 dB, the capacity increases from 15.5 bits/s/Hz to 43.7 bits/s/Hz as θ increases from 10° to 360°. Strategies to mitigate these capacity degradation effects by employing a suitable user grouping method are also presented. Outcomes of this thesis have already been used by CSIRO scientists to determine optimum user distribution angles around the AP, and are of great significance for researchers and MU-MIMO-OFDM system developers seeking to understand the advantages and potential capacity gains of MU-MIMO systems in rural environments. Also, the results of this study are useful for further improving the performance of MU-MIMO-OFDM systems in rural environments. Ultimately, this knowledge contribution will be useful in delivering efficient, cost-effective high-speed wireless broadband systems that are tailor-made for rural environments, thus improving the quality of life and economic prosperity of rural populations.
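Capacity figures quoted in bits/s/Hz at a given SNR follow the standard MIMO capacity formula with equal power allocation, C = log2 det(I + (SNR/Nt)·HHᴴ). A small numerical sketch (a textbook formula, not the thesis's deterministic channel model):

```python
import numpy as np

def mimo_capacity(H, snr_linear):
    """MIMO capacity (bits/s/Hz) with equal power allocation.

    H: (Nr x Nt) complex channel matrix; snr_linear: SNR as a linear ratio
    (20 dB -> 100).  C = log2 det(I + (SNR/Nt) * H H^H).
    """
    nr, nt = H.shape
    hh = H @ H.conj().T
    m = np.eye(nr) + (snr_linear / nt) * hh
    # slogdet is numerically safer than det for larger matrices.
    sign, logdet = np.linalg.slogdet(m)
    return float(logdet / np.log(2.0))

# Two perfectly orthogonal spatial streams at 20 dB SNR:
c = mimo_capacity(np.eye(2), 100.0)  # ~11.34 bits/s/Hz
```

With more antennas and well-separated users (larger θ), HHᴴ becomes better conditioned and the determinant, hence capacity, grows, which is the qualitative effect the thesis measures.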
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are dependent: it is therefore difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived from the base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated on text-dependent speaker verification, using Hidden Markov Model based, digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validity of the derived expressions for error estimates is evaluated on test data. The performance of the sequential method is further demonstrated to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping applications. The tuning of the parameters (the number of instances and samples) serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
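Under the statistical-independence assumption mentioned above, the error rates of an AND-fused sequence of stages (an impostor must pass every instance), each allowing OR-fused repeat attempts (a client passes a stage if any sample is accepted), take simple closed forms. A sketch of that independent case only; the correlated-decision extension derived in the dissertation is not reproduced here:

```python
def sequential_fusion_errors(far, frr, n_instances, n_samples):
    """Overall FAR/FRR for AND-fused stages with OR-fused attempts,
    assuming independent decisions with per-attempt rates far and frr."""
    # One stage: an impostor passes if ANY of the attempts passes;
    # a client is rejected only if ALL attempts fail.
    stage_far = 1 - (1 - far) ** n_samples
    stage_frr = frr ** n_samples
    # Whole sequence: an impostor must pass EVERY stage;
    # a client is rejected if ANY stage rejects.
    overall_far = stage_far ** n_instances
    overall_frr = 1 - (1 - stage_frr) ** n_instances
    return overall_far, overall_frr

# Two stages, two attempts each, base rates of 10%:
far2, frr2 = sequential_fusion_errors(0.1, 0.1, 2, 2)
```

Increasing the number of instances drives the false accept rate down while pushing false rejects up, and extra samples do the reverse, which is exactly the controllable trade-off the abstract describes.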
Abstract:
This paper illustrates the use of the finite element (FE) technique to investigate the behaviour of laminated glass (LG) panels under blast loads. Two- and three-dimensional (2D and 3D) modelling approaches available in the LS-DYNA FE code to model LG panels are presented. Results from the FE analysis for mid-span deflection and principal stresses compared well with those from large-deflection plate theory. The FE models are further validated using the results from a free-field blast test on a LG panel. It is evident that both the 2D and 3D LG models predict the experimental results with reasonable accuracy. The 3D LG models give slightly more accurate results but require considerably more computational time than the 2D LG models.
Abstract:
The use of immobilised TiO2 for the purification of polluted water streams introduces the need to evaluate the effect of mechanisms such as the transport of pollutants from the bulk of the liquid to the catalyst surface and the transport phenomena inside the porous film. Experimental results of the effects of film thickness on the observed reaction rate, for both liquid-side and support-side illumination, are here compared with the predictions of a one-dimensional mathematical model of the porous photocatalytic slab. Good agreement was observed between the experimentally obtained photodegradation of phenol and its by-products and the corresponding model predictions. The results have confirmed that an optimal catalyst thickness exists and, for the films employed here, is 5 μm. Furthermore, the modelling results have highlighted that porosity and the intrinsic reaction kinetics are the parameters controlling the photocatalytic activity of the film: the former influences the transport phenomena and light absorption characteristics, while the latter naturally dictates the rate of reaction.
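The existence of an optimal film thickness reflects the classic diffusion-reaction trade-off in porous media. For a first-order reaction in a slab, internal transport limitation is captured by the Thiele modulus and effectiveness factor, a textbook result sketched below (the paper's full model additionally accounts for light absorption and illumination side):

```python
import math

def effectiveness_factor(thickness, rate_const, diffusivity):
    """First-order reaction in a porous slab.

    Thiele modulus phi = L * sqrt(k / De); effectiveness factor
    eta = tanh(phi) / phi.  eta -> 1 for thin films (kinetics-controlled),
    eta -> 1/phi for thick films (diffusion-controlled).
    """
    phi = thickness * math.sqrt(rate_const / diffusivity)
    return math.tanh(phi) / phi if phi > 0 else 1.0

# Thicker films use their interior less effectively:
thin = effectiveness_factor(1e-6, 1.0, 1e-9)   # eta close to 1
thick = effectiveness_factor(50e-6, 1.0, 1e-9)  # eta well below 1
```

Thicker films hold more catalyst but use it less effectively, so an intermediate optimum emerges, as the paper finds experimentally at 5 μm.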
Abstract:
Designing the smart grid requires combining varied models. As their number increases, so does the complexity of the software, and having a well-thought-out software architecture becomes crucial. This paper presents MODAM, a framework designed to combine agent-based models in a flexible and extensible manner, using well-known software engineering design solutions (the OSGi specification [1] and Eclipse plugins [2]). Details on how to build a modular agent-based model for the smart grid are given in this paper, illustrated by an example for a small network.
Abstract:
This thesis developed semi-parametric regression models for estimating the spatio-temporal distribution of outdoor airborne ultrafine particle number concentration (PNC). The models developed incorporate multivariate penalised splines, random walks and autoregressive errors in order to estimate non-linear functions of space, time and other covariates. The models were applied to data from the "Ultrafine Particles from Traffic Emissions and Child" project in Brisbane, Australia, and to longitudinal measurements of air quality in Helsinki, Finland. The spline and random walk aspects of the models reveal how the daily trend in PNC changes over the year in Helsinki, and the similarities and differences in the daily and weekly trends across multiple primary schools in Brisbane. Midday peaks in PNC at Brisbane locations are attributed to new particle formation events at the Port of Brisbane and Brisbane Airport.
A methodology to develop an urban transport disadvantage framework: the case of Brisbane, Australia
Abstract:
Most individuals travel in order to participate in a network of activities which are important for attaining a good standard of living. Because such activities are commonly widely dispersed rather than located locally, regular access to a vehicle is important to avoid exclusion. However, planning transport system provision that can engage members of society in an acceptable degree of activity participation remains a great challenge. The main challenges in most cities of the world are due to significant population growth and rapid urbanisation, which produce increased demand for transport. Keeping pace with these challenges in most urban areas is difficult due to the widening gap between supply of and demand for transport systems, which places the urban population at a transport disadvantage. The key element in mitigating the issue of urban transport disadvantage is to accurately identify the urban transport disadvantaged. Although wide-ranging variables and multi-dimensional methods have been used to identify this group, variables are commonly selected using ad hoc techniques and unsound methods. This poses questions about whether the variables currently used are accurately linked with urban transport disadvantage, and about the effectiveness of current policies. To fill these gaps, the research conducted for this thesis develops an operational urban transport disadvantage framework (UTDAF) based on key statistical urban transport disadvantage variables to accurately identify the urban transport disadvantaged. The thesis develops a methodology, based on qualitative and quantitative statistical approaches, to construct an urban transport disadvantage framework designed to accurately identify urban transport disadvantage. The reliability and applicability of the methodology developed are the prime concerns, rather than the accuracy of the estimations.
Relevant concepts that impact on the identification and measurement of urban transport disadvantage, along with a wide range of urban transport disadvantage variables, were identified through a review of the existing literature. Based on the reviews, a conceptual urban transport disadvantage framework was developed based on causal theory. Variables identified during the literature review were selected and consolidated based on the recommendations of international and local experts during a Delphi study. Following the literature review, the conceptual urban transport disadvantage framework was statistically assessed to identify key variables. Using the statistical outputs, the key variables were weighted and aggregated to form the UTDAF. Before the variables' weights were finalised, they were adjusted based on the results of correlation analysis between the elements forming the framework, to improve the framework's accuracy. The UTDAF was then applied to three contextual conditions to determine its effectiveness in identifying urban transport disadvantage. The framework is likely to be a robust measure for policy makers to justify infrastructure investments and to generate awareness of the issue of urban transport disadvantage.
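The weight-and-aggregate step can be sketched as a weighted composite of min-max normalised indicators. The indicator names and weights below are hypothetical illustrations, not the actual UTDAF variables or weights:

```python
def transport_disadvantage_index(indicators, weights):
    """Weighted composite score per area from raw indicator values.

    indicators: {name: [value per area]}; weights: {name: weight}.
    Each indicator is min-max normalised before weighting, so variables
    on different scales contribute comparably.
    """
    def normalise(values):
        lo, hi = min(values), max(values)
        return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

    norm = {name: normalise(vals) for name, vals in indicators.items()}
    n_areas = len(next(iter(norm.values())))
    return [sum(weights[name] * norm[name][i] for name in norm)
            for i in range(n_areas)]

# Hypothetical indicators for three areas (higher score = more disadvantaged):
scores = transport_disadvantage_index(
    {"no_car_households": [0.0, 1.0, 2.0], "low_income": [2.0, 1.0, 0.0]},
    {"no_car_households": 0.5, "low_income": 0.5},
)
```

In the UTDAF the weights come from the statistical assessment and are then adjusted using the correlation analysis the abstract describes, rather than being assigned by hand.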
Abstract:
With the rapid growth of information on the Web, the study of information searching has attracted increased interest. Information behaviour (IB) researchers and information systems (IS) developers are continuously exploring user-Web search interactions to understand users and to provide assistance with their information searching. In attempting to develop models of IB, several studies have identified various factors that govern users' information searching and information retrieval (IR), such as age, gender, prior knowledge and task complexity. However, how users' contextual factors, such as cognitive styles, affect Web search interactions has not been clearly explained by current models of Web searching and IR. This study explores the influence of users' cognitive styles on their Web search behaviour. The main goal of the study is to enhance Web search models with a better understanding of how these cognitive styles affect Web searching. Modelling Web search behaviour with a greater understanding of users' cognitive styles can help information science researchers and IS designers to bridge the semantic gap between the user and the IS. To achieve the aims of the study, a user study with 50 participants was conducted. The study adopted a mixed-method approach incorporating several data collection strategies to gather a range of qualitative and quantitative data. The study utilised pre-search and post-search questionnaires to collect the participants' demographic information and their level of satisfaction with the search interactions. Riding's (1991) Cognitive Style Analysis (CSA) test was used to assess the participants' cognitive styles. Participants completed three predesigned search tasks, and the complete user-Web search interactions, including think-aloud protocols, were captured using a monitoring program.
Data analysis involved several qualitative and quantitative techniques: the quantitative data gave rise to detailed findings about users' Web searching and cognitive styles, while the qualitative data enriched the findings with illustrative examples. The study results provide valuable insights into Web searching behaviour among users with different cognitive styles. The findings of the study extend our understanding of Web search behaviour and how users search for information on the Web. Three key study findings emerged:
• Users' Web search behaviour was demonstrated through information searching strategies, Web navigation styles, query reformulation behaviour and information processing approaches while performing Web searches. The manner in which these Web search patterns were demonstrated varied among users in different cognitive style groups.
• Users' cognitive styles influenced their information searching strategies, query reformulation behaviour, Web navigational styles and information processing approaches. Users with particular cognitive styles followed certain Web search patterns.
• Fundamental relationships were evident between users' cognitive styles and their Web search behaviours, and these relationships can be illustrated through modelling Web search behaviour.
Two models that depict the associations between Web search interactions, user characteristics and users' cognitive styles were developed. These models provide a greater understanding of Web search behaviour from the user perspective, particularly how users' cognitive styles influence their Web search behaviour. The significance of this research is twofold: it will provide insights for information science researchers, information system designers, academics, educators, trainers and librarians who want to better understand how users with different cognitive styles perform information searching on the Web; at the same time, it will provide assistance and support to users.
The major outcomes of this study are 1) a comprehensive analysis of how users search the Web; 2) extensive discussion on the implications of the models developed in this study for future work; and 3) a theoretical framework to bridge high-level search models and cognitive models.
Abstract:
Fracture healing is a complicated coupling of many processes, yet despite the apparent complexity, fracture repair is usually effective. There is, however, no comprehensive mathematical model addressing the multiple interactions of cells, cytokines and oxygen that includes extra-cellular matrix production and that results in the formation of the early-stage soft callus. This thesis develops a one-dimensional continuum transport model in the context of early fracture healing. Although fracture healing is a complex interplay of many local factors, critical components are identified and used to construct a hypothesis about regulation of the evolution of early callus formation. Multiple cell lines, cellular differentiation, oxygen levels and cytokine concentrations are examined as factors affecting this model of early bone repair. The model presumes diffusive and chemotactic cell migration mechanisms. It is proposed that the initial signalling regime and oxygen availability, arising as consequences of bone fracture, are sufficient to determine the quantity and quality of early soft callus formation. Readily available software and purpose-written algorithms have been used to obtain numerical solutions representative of various initial conditions. These numerical distributions of cellular populations reflect available histology obtained from murine osteotomies. The behaviour of the numerical system in response to differing initial conditions can be described by alternative in vivo healing pathways. An experimental basis, as illustrated in murine fracture histology, has been utilised to validate the mathematical model outcomes. The model developed in this thesis has potential for future extension to incorporate processes leading to woven bone deposition, while maintaining the characteristics that regulate early callus formation.
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges of the airport complex system, where there are multiple stakeholders, differing operational objectives, and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and Exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow, and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points in the inbound process, are found to have an $R^2$ goodness of fit of 0.9994 and 0.9982 respectively over a 10-hour test period. The utility of the framework is demonstrated on a number of usage scenarios, including real-time monitoring and 'what-if' analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
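The Poisson/Exponential properties mentioned correspond to the classic M/M/1 queue, whose steady-state metrics have closed forms. A minimal sketch of the queueing side only (the BN integration in the HQBN framework is not reproduced here):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 metrics: Poisson arrivals, exponential service.

    Returns (rho, L, W): utilisation, mean number in system, and mean
    time in system.  Requires rho = lambda/mu < 1 for stability.
    """
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("queue is unstable (rho >= 1)")
    L = rho / (1 - rho)                     # mean number in system
    W = 1 / (service_rate - arrival_rate)   # mean time in system
    return rho, L, W

# E.g. 2 passengers/min arriving at a desk serving 4/min:
rho, L, W = mm1_metrics(2.0, 4.0)  # 50% utilised, on average 1 in system
```

Note that L = arrival_rate × W here, which is Little's law; in the HQBN framework the arrival and service rates at each processing point would themselves be driven by the BN's causal factors.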