585 results for Deterministic Trend


Relevance:

10.00%

Publisher:

Abstract:

The Upper Roper River is one of Australia's unique tropical rivers, which have been largely untouched by development. The Upper Roper River catchment comprises the sub-catchments of the Waterhouse River and Roper Creek, the two tributaries of the Roper River. There is a complex geological setting with different aquifer types. In this seasonal system, close interaction between surface water and groundwater contributes to both streamflow and sustaining ecosystems. The interaction is highly variable between seasons. A conceptual hydrogeological model was developed to investigate the different hydrological processes and geochemical parameters, and to determine the baseline characteristics of the water resources of this pristine catchment. In the catchment, long-term average rainfall is around 850 mm and is summer dominant, which significantly influences the total hydrological system. The difference between seasons is pronounced, with high rainfall of up to 600 mm/month in the wet season and negligible rainfall in the dry season. Canopy interception significantly reduces the amount of effective rainfall because of the native vegetation cover in the pristine catchment. Evaporation exceeds rainfall for the majority of the year. Due to elevated evaporation and high temperatures in the tropics, at least 600 mm of annual rainfall is required to generate potential recharge. Analysis of trends in 120 years of rainfall data helped define "wet" and "dry" periods: a decreasing trend corresponds to dry periods, and an increasing trend to wet periods. The period from 1900 to 1970 was considered Dry period 1, during which some years had no effective rainfall and, when it did occur, its intensity was around 300 mm. The period 1970–1985 was identified as Wet period 1, when positive effective rainfall occurred in almost every year and its intensity reached up to 700 mm. The period 1985–1995 was Dry period 2, with similar characteristics to Dry period 1. Finally, the last decade was Wet period 2, with effective rainfall intensity up to 800 mm. This decadal variability in rainfall increased or decreased recharge and discharge, improving or reducing surface water and groundwater quantity and quality in the different wet and dry periods. Stream discharge follows the rainfall pattern. In the wet season, the aquifer is replenished, groundwater levels and groundwater discharge are high, and surface runoff is the dominant component of streamflow. The Waterhouse River contributes two thirds and Roper Creek one third of Roper River flow. As the dry season progresses, surface runoff declines and groundwater becomes the main component of streamflow. Flow in the Waterhouse River becomes negligible and the Roper Creek dries up, but the Roper River maintains its flow throughout the year. This is due to groundwater and spring discharge from the highly permeable Tindall Limestone and tufa aquifers. Rainfall seasonality and the lithology of both the catchment and the aquifers are shown to influence water chemistry. In the wet season, dilution of water bodies by rainwater is the main process. In the dry season, when groundwater provides baseflow to the streams, their chemical composition reflects the lithology of the aquifers, in particular the karstic areas. Water chemistry distinguishes four types of aquifer materials, described as alluvium, sandstone, limestone and tufa. Surface water in the headwaters of the Waterhouse River, Roper Creek and their tributaries is fresh and reflects the alluvium and sandstone aquifers.
At and downstream of the confluence of the Roper River, river water chemistry indicates the influence of rainfall dilution in the wet season, and the signature of the Tindall Limestone and tufa aquifers in the dry season. Rainbow Spring on the Waterhouse River and Bitter Spring on the Little Roper River (known as Roper Creek at the headwaters) discharge from the Tindall Limestone. Botanic Walk Spring and Fig Tree Spring discharge into the Roper River from tufa. The source of water was defined based on the chemical composition of the springs, surface water and groundwater. The mechanisms controlling surface water chemistry were examined to determine the dominance of precipitation, evaporation or rock weathering on the chemical composition of the water. Simple water balance models for the catchment have been developed. The important aspects to be considered in water resource planning of this total system are the naturally high salinity in the region, especially in the downstream sections, and how unpredictable climate variation may affect the natural seasonal variability of water volumes and surface-subsurface interaction.
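The abstract does not give the water balance model itself; the following is a minimal monthly water-balance sketch in the spirit described above (canopy interception reduces effective rainfall, evaporation draws down a single soil store, and recharge is generated only once a storage threshold is exceeded). The interception fraction, store capacity and example inputs are illustrative assumptions, not values from the thesis.

```python
# Minimal single-store water balance; all parameter values are assumptions.

def monthly_recharge(rain_mm, pet_mm, interception_frac=0.15,
                     soil_capacity_mm=150.0):
    """Return potential recharge (mm) for each month."""
    storage, recharge = 0.0, []
    for rain, pet in zip(rain_mm, pet_mm):
        effective_rain = rain * (1.0 - interception_frac)   # canopy interception loss
        storage = max(storage + effective_rain - pet, 0.0)  # evaporation draws down the store
        excess = max(storage - soil_capacity_mm, 0.0)       # overflow becomes recharge
        recharge.append(excess)
        storage -= excess
    return recharge

# Wet-season month (600 mm rain) versus dry-season month (no rain):
print(monthly_recharge([600.0, 0.0], [180.0, 200.0]))   # [180.0, 0.0]
```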

Relevance:

10.00%

Publisher:

Abstract:

Bistability arises within a wide range of biological systems, from the λ phage switch in bacteria to cellular signal transduction pathways in mammalian cells. Changes in regulatory mechanisms may result in genetic switching in a bistable system. Recently, more and more experimental evidence in the form of bimodal population distributions has indicated that noise plays a very important role in the switching of bistable systems. Although deterministic models have been used for studying the existence of bistability under various system conditions, these models cannot capture cell-to-cell fluctuations in genetic switching. However, there is a lag in the development of stochastic models for studying the impact of noise in bistable systems because of the lack of detailed knowledge of biochemical reactions, kinetic rates, and molecular numbers. In this work, we develop a previously undescribed general technique for developing quantitative stochastic models of large-scale genetic regulatory networks by introducing Poisson random variables into deterministic models described by ordinary differential equations. Two stochastic models have been proposed for the genetic toggle switch interfaced with either the SOS signaling pathway or a quorum-sensing signaling pathway, and we have successfully reproduced experimental results showing bimodal population distributions. Because the introduced stochastic models are based on widely used ordinary differential equation models, the success of this work suggests that this approach is a very promising one for studying noise in large-scale genetic regulatory networks.
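A minimal sketch of the general idea described above: the deterministic rate terms of an ODE model are replaced with Poisson random increments over a small time step (a tau-leap-style update). The simple birth-death gene-expression model and its rate constants below are illustrative assumptions, not the toggle-switch models of the paper.

```python
# Poisson increments in place of deterministic rate terms; model is an assumption.
import numpy as np

rng = np.random.default_rng(0)

def poisson_step(x, dt, k_prod=10.0, k_deg=0.1):
    """One stochastic step for the ODE dx/dt = k_prod - k_deg * x."""
    births = rng.poisson(k_prod * dt)       # production events during dt
    deaths = rng.poisson(k_deg * x * dt)    # degradation events during dt
    return max(x + births - deaths, 0)

x, dt, trajectory = 0, 0.1, []
for _ in range(5000):
    x = poisson_step(x, dt)
    trajectory.append(x)

# Fluctuates around the deterministic steady state k_prod / k_deg = 100.
print(np.mean(trajectory[2500:]))
```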

Relevance:

10.00%

Publisher:

Abstract:

Discrete stochastic simulations are a powerful tool for understanding the dynamics of chemical kinetics when there are small-to-moderate numbers of certain molecular species. In this paper we introduce delays into the stochastic simulation algorithm, thus mimicking the delays associated with transcription and translation. We then show that this process may explain the observed sustained oscillations in expression levels of hes1 mRNA and Hes1 protein more faithfully than continuous deterministic models.
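The sketch below illustrates the kind of delayed stochastic simulation described above: a transcription-like initiation event schedules its product to appear a fixed delay later, while degradation acts immediately. The single-species model, rate constants and delay value are illustrative assumptions, not the hes1/Hes1 model analysed in the paper.

```python
# Gillespie-style simulation with a queue of delayed completions; model is an assumption.
import heapq
import random

random.seed(1)
k_init, k_deg, tau = 1.0, 0.05, 10.0   # initiation rate, decay rate, fixed delay
t, t_end, x = 0.0, 500.0, 0
pending = []                            # completion times of delayed events

while t < t_end:
    a_total = k_init + k_deg * x                      # total propensity
    dt = random.expovariate(a_total)
    # If a delayed product is due before the next reaction fires, release it first.
    if pending and pending[0] <= t + dt:
        t = heapq.heappop(pending)
        x += 1
        continue
    t += dt
    if random.random() < k_init / a_total:
        heapq.heappush(pending, t + tau)              # product appears after the delay
    else:
        x -= 1                                        # immediate degradation
print(x)
```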

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: There has been some difficulty getting standard laboratory rats to voluntarily consume large amounts of ethanol without the use of initiation procedures. It has previously been shown that standard laboratory rats will voluntarily consume high levels of ethanol if given intermittent access to 20% ethanol in a 2-bottle-choice setting [Wise, Psychopharmacologia 29 (1973), 203]. In this study, we have further characterized this drinking model. METHODS: Ethanol-naïve Long-Evans rats were given intermittent access to 20% ethanol (three 24-hour sessions per week). No sucrose fading was needed and water was always available ad libitum. Ethanol consumption, preference, and long-term drinking behaviors were investigated. Furthermore, to pharmacologically validate the intermittent-access 20% ethanol drinking paradigm, the efficacies of acamprosate and naltrexone in decreasing ethanol consumption were compared with those in groups given continuous access to 10% or 20% ethanol, respectively. Additionally, ethanol consumption was investigated in Wistar and outbred alcohol-preferring (P) rats following intermittent access to 20% ethanol. RESULTS: The intermittent-access 20% ethanol 2-bottle-choice drinking paradigm led standard laboratory rats to escalate their ethanol intake over the first 5 to 6 drinking sessions, reaching stable baseline consumption of high amounts of ethanol (Long-Evans: 5.1 ± 0.6 g/kg/24 h; Wistar: 5.8 ± 0.8 g/kg/24 h). Furthermore, the cycles of excessive drinking and abstinence led to an increase in ethanol preference and increased efficacy of both acamprosate and naltrexone in Long-Evans rats. P rats initiated drinking at a higher level than both Long-Evans and Wistar rats in the intermittent-access 20% ethanol paradigm and showed a trend toward further escalation of ethanol intake over time (mean ethanol intake: 6.3 ± 0.8 g/kg/24 h). CONCLUSION: Standard laboratory rats will voluntarily consume ethanol using the intermittent-access 20% ethanol drinking paradigm without the use of any initiation procedures. This model promises to be a valuable tool in the alcohol research field.

Relevance:

10.00%

Publisher:

Abstract:

A Networked Control System (NCS) is a feedback-driven control system wherein the control loops are closed through a real-time network. Control and feedback signals in an NCS are exchanged among the system's components in the form of information packets via the network. Nowadays, wireless technologies such as IEEE 802.11 are being introduced to modern NCSs as they offer better scalability, larger bandwidth and lower costs. However, this type of network is not designed for NCSs because it introduces a large amount of dropped data, and unpredictable and long transmission latencies, due to the characteristics of wireless channels; these are not acceptable for real-time control systems. Real-time control is a class of time-critical application which requires lossless data transmission, and small and deterministic delays and jitter. For a real-time control system, network-introduced problems may degrade the system's performance significantly or even cause system instability. It is therefore important to develop solutions that satisfy real-time requirements in terms of delays, jitter and data losses, and guarantee high levels of performance for time-critical communications in Wireless Networked Control Systems (WNCSs). To improve or even guarantee real-time performance in wireless control systems, this thesis presents several network layout strategies and a new transport layer protocol. Firstly, the real-time performance, in terms of data transmission delays and reliability, of IEEE 802.11b-based UDP/IP NCSs is evaluated through simulations. After analysis of the simulation results, network layout strategies are presented to achieve relatively small and deterministic network-introduced latencies and to reduce data loss rates. These are effective in providing better network performance without degrading the performance of other services. After the investigation into the layout strategies, the thesis presents a new transport protocol which is more efficient than UDP and TCP for guaranteeing reliable and time-critical communications in WNCSs. From the networking perspective, introducing appropriate communication schemes, modifying existing network protocols and devising new protocols have been the most effective and popular ways to improve or even guarantee real-time performance to a certain extent. Most previously proposed schemes and protocols were designed for real-time multimedia communication and are not suitable for real-time control systems. Therefore, devising a new network protocol that is able to satisfy real-time requirements in WNCSs is the main objective of this research project. The Conditional Retransmission Enabled Transport Protocol (CRETP) is the new network protocol presented in this thesis. Retransmitting unacknowledged data packets is effective in compensating for data losses. However, every data packet in real-time control systems has a deadline, and data is assumed invalid or even harmful once its deadline expires. CRETP performs data retransmission only when the data is still valid, which guarantees data timeliness and saves memory and network resources. A trade-off between delivery reliability, transmission latency and network resources can be achieved by the conditional retransmission mechanism. Evaluation of protocol performance was conducted through extensive simulations, and comparative studies between CRETP, UDP and TCP were also performed. The results showed that CRETP significantly (1) improved the reliability of communication, (2) guaranteed the validity of received data, (3) reduced transmission latency to an acceptable value, and (4) made delays relatively deterministic and predictable. Furthermore, CRETP achieved the best overall performance in the comparative studies, which makes it the most suitable of the three transport protocols for real-time communications in a WNCS.
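The abstract describes the conditional retransmission mechanism only at a high level; the sketch below illustrates the core sender-side idea: an unacknowledged packet is retransmitted only while its deadline lies in the future, otherwise it is dropped. The class, field names, timeout value and deadline encoding are illustrative assumptions, not the CRETP specification.

```python
# Conditional retransmission sketch; names and parameters are assumptions.
import time
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int
    payload: bytes
    deadline: float              # absolute time after which the sample is useless

class ConditionalRetransmitter:
    def __init__(self, send_fn, timeout=0.05):
        self.send_fn = send_fn   # e.g. a wrapper around a UDP socket's sendto
        self.timeout = timeout   # retransmission timeout in seconds
        self.unacked = {}        # seq -> (packet, time of last transmission)

    def send(self, pkt: Packet):
        self.send_fn(pkt)
        self.unacked[pkt.seq] = (pkt, time.monotonic())

    def on_ack(self, seq: int):
        self.unacked.pop(seq, None)      # acknowledged: nothing more to do

    def poll(self):
        """Called periodically: retransmit timed-out packets only if still valid."""
        now = time.monotonic()
        for seq, (pkt, sent_at) in list(self.unacked.items()):
            if now >= pkt.deadline:
                del self.unacked[seq]    # expired: drop instead of resending
            elif now - sent_at >= self.timeout:
                self.send_fn(pkt)        # still valid: conditional retransmission
                self.unacked[seq] = (pkt, now)
```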

Relevance:

10.00%

Publisher:

Abstract:

Aims: The aim of this cross-sectional study is to explore levels of physical activity and sitting behaviour amongst a sample of pregnant Australian women (n = 81), and to investigate whether reported levels of physical activity and/or time spent sitting were associated with depressive symptom scores after controlling for potential covariates. Methods: Study participants were women who attended the antenatal clinic of a large Brisbane maternity hospital between October and November 2006. Data relating to participants' current levels of physical activity, sitting behaviour, depressive symptoms, demographic characteristics and exposure to known risk factors for depression during pregnancy were collected via on-site survey, follow-up telephone interview (approximately one week later) and post-delivery access to participant hospital records. Results: Participants were aged 29.5 (± 5.6) years and mostly partnered (86.4%), with a gross household income above $26,000 per annum (88.9%). Levels of physical activity were generally low, with only 28.4% of participants reporting sufficient total activity and 16% of participants reporting sufficient planned (leisure-time) activity. The sample mean for depressive symptom scores measured by the Hospital Anxiety and Depression Scale (HADS-D) was 6.38 (± 2.55). The mean depressive symptom scores for participants who reported total moderate-to-vigorous activity levels of sufficient, insufficient, and none were 5.43 (± 1.56), 5.82 (± 1.77) and 7.63 (± 3.25), respectively. Hierarchical multivariable linear regression modelling indicated that, after controlling for covariates, a statistically significant difference of 1.09 points was observed between the mean depressive symptom scores of participants who reported sufficient total physical activity and those who reported engaging in no moderate-to-vigorous activity in a typical week (p = 0.05), but this did not meet the criterion for a clinically meaningful difference. Total physical activity contributed 2.2% of the total 30.3% of variance explained by this model. The other main contributors to explained variance in the multivariable regression models were anxiety symptom scores and the number of existing children. Further, a trend was observed between higher levels of planned sitting behaviour and higher depressive symptom scores (p = 0.06); this correlation was not clinically meaningful. Planned sitting contributed 3.2% of the total 31.3% of explained variance. The number of regression covariates and the limited sample size led to a less than ideal ratio of covariates to participants, probably attenuating this relationship. Specific information about the sitting-based activities in which participants engaged may have provided greater insight into the relationship between planned sitting and depressive symptoms, but these data were not captured by the present study. Conclusions: The finding that higher levels of physical activity were associated with lower levels of depressive symptoms is consistent with the existing literature in pregnant women, and with a larger body of evidence from general population samples. Although this result was not considered clinically meaningful, the criterion for a clinically meaningful result was an a priori decision based on quality of life literature in non-pregnant populations and may not truly reflect a difference in symptoms that is meaningful to pregnant women.
Further investigation to establish clinically meaningful criteria for continuous depressive symptom data in pregnant women is required. This result may have implications for prevention and management options for depression during pregnancy. The observed trend between planned sitting and depressive symptom scores is consistent with literature on leisure-time sitting behaviour in general population samples, and suggests that further research in this area, with larger samples of pregnant women and more specific sitting data, is required to explore potential associations between activities such as television viewing and depressive symptoms, as this may be an area of behaviour that is amenable to modification.

Relevance:

10.00%

Publisher:

Abstract:

The New Hebrides Island Arc, an intra-oceanic island chain in the southwest Pacific, is formed by subduction of the Indo-Australian Plate beneath the Pacific Plate. The southern end of the New Hebrides Island Arc is an ideal location to study the magmatic and tectonic interaction of an emerging island arc, as this part of the island chain is less than 3 million years old. A tectonically complex island arc, it exhibits a change in relative subduction rate from ~12 cm/yr to ~6 cm/yr before transitioning to a left-lateral strike-slip zone at its southern end. Two submarine volcanic fields, Gemini-Oscostar and Volsmar, occur at this transition from normal arc subduction to sinistral strike-slip movement. Multi-beam bathymetry and dredge samples collected during the 2004 CoTroVE cruise onboard the RV Southern Surveyor help define the relationship between magmatism and tectonics, and the source for these two submarine volcanic fields. The Gemini-Oscostar volcanic field (GOVF), dominated by northwest-oriented normal faults, has mature polygenetic stratovolcanoes with evidence for explosive subaqueous eruptions, and homogeneous monogenetic scoria cones. The Volsmar volcanic field (VVF), located 30 km south of the GOVF, exhibits a conjugate set of northwest- and east-west-oriented normal faults, with two polygenetic stratovolcanoes and numerous monogenetic scoria cones. A deep-water caldera provides evidence for explosive eruptions at 1500 m below sea level in the VVF. Both volcanic fields are dominated by low-K island arc tholeiites and basaltic andesites, with calc-alkalic andesite and dacite found only in the GOVF. Geochemical signatures of both volcanic fields continue the along-arc trend of decreasing K2O, with both fields being similar to the New Hebrides central chain lavas. Lavas from both fields display a slight depletion in high field strength elements and heavy rare earth elements, and slight enrichments in large-ion lithophile elements and light rare earth elements with respect to N-MORB mantle. Sr and Nd isotope data correlate with heavy rare earth and high field strength element data to show that both fields are derived from depleted mantle. Pb isotopes define Pacific MORB mantle sources and are consistent with isotopic variation along the New Hebrides Island Arc. Pb isotopes show no evidence for sediment contamination; the subduction component enrichment is therefore slab-derived. There is a subtle spatial variation in source chemistry, with enrichment by slab-derived fluids decreasing to the north.

Relevance:

10.00%

Publisher:

Abstract:

A new algorithm for extracting features from images for object recognition is described. The algorithm uses higher order spectra to provide desirable invariance properties, to provide noise immunity, and to incorporate nonlinearity into the feature extraction procedure, thereby allowing the use of simple classifiers. An image can be reduced to a set of 1D functions via the Radon transform, or alternatively, the Fourier transform of each 1D projection can be obtained from a radial slice of the 2D Fourier transform of the image according to the Fourier slice theorem. A triple product of Fourier coefficients, referred to as the deterministic bispectrum, is computed for each 1D function and is integrated along radial lines in bifrequency space. Phases of the integrated bispectra are shown to be translation- and scale-invariant. Rotation invariance is achieved by a regrouping of these invariants at a constant radius followed by a second stage of invariant extraction; rotation invariance is thus converted to translation invariance in the second step. Results using synthetic and actual images show that isolated, compact clusters are formed in feature space. These clusters are linearly separable, indicating that the nonlinearity required in the mapping from the input space to the classification space is incorporated well into the feature extraction stage. The use of higher order spectra results in good noise immunity, as verified with synthetic and real images. Classification of images using the higher order spectra-based algorithm compares favorably to classification using the method of moment invariants.
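A hedged sketch of the bispectral feature described above: for one 1D projection of an image, form the triple product X(f1) X(f2) X*(f1 + f2) and integrate it along radial lines f2 = a · f1 in bifrequency space, keeping the phase of each integral as a feature. The discretisation of the radial lines and the toy input signal are simplifying assumptions, not the paper's exact implementation.

```python
# Integrated-bispectrum phases for a single 1D projection; details are assumptions.
import numpy as np

def integrated_bispectrum_phases(projection, n_lines=16):
    """Return the phases of the bispectrum integrated along radial lines."""
    X = np.fft.rfft(projection - np.mean(projection))
    n = len(X)
    phases = []
    for a in np.linspace(0.05, 1.0, n_lines):   # slope of each radial line
        acc = 0.0 + 0.0j
        for k1 in range(1, n):
            k2 = int(round(a * k1))
            if k2 < 1:
                continue
            if k1 + k2 >= n:
                break
            acc += X[k1] * X[k2] * np.conj(X[k1 + k2])   # deterministic bispectrum
        phases.append(np.angle(acc))
    return np.array(phases)

# Example: features for a single projection (here a toy 1D signal).
signal = np.sin(np.linspace(0.0, 8.0 * np.pi, 256)) + 0.3
print(integrated_bispectrum_phases(signal)[:4])
```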

Relevance:

10.00%

Publisher:

Abstract:

As the graphics race subsides and gamers grow weary of predictable and deterministic game characters, game developers must put aside their "old faithful" finite state machines and look to more advanced techniques that give users the gaming experience they crave. The next industry breakthrough will be with characters that behave realistically and that can learn and adapt, rather than more polygons, higher resolution textures and more frames per second. This paper explores the various artificial intelligence techniques that are currently being used by game developers, as well as techniques that are new to the industry. The techniques covered in this paper are finite state machines, scripting, agents, flocking, fuzzy logic and fuzzy state machines, decision trees, neural networks, genetic algorithms and extensible AI. This paper introduces each of these techniques, explains how they can be applied to games and how commercial games are currently making use of them. Finally, the effectiveness of these techniques and their future role in the industry are evaluated.
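For reference, here is a minimal finite state machine of the kind the paper contrasts with learning-based techniques: a guard character switches between hand-authored states in a fully predictable way. The states, events and transition table are illustrative assumptions, not taken from any particular commercial game.

```python
# Hand-authored, deterministic FSM for a hypothetical guard character.

TRANSITIONS = {
    ("patrol", "sees_player"): "chase",
    ("chase", "lost_player"): "patrol",
    ("chase", "player_in_range"): "attack",
    ("attack", "player_out_of_range"): "chase",
}

def step(state, event):
    """Deterministic transition: the same state and event always give the same result."""
    return TRANSITIONS.get((state, event), state)

state = "patrol"
for event in ["sees_player", "player_in_range", "player_out_of_range", "lost_player"]:
    state = step(state, event)
    print(event, "->", state)
```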

Relevance:

10.00%

Publisher:

Abstract:

Data preprocessing is widely recognized as an important stage in anomaly detection. This paper reviews the data preprocessing techniques used by anomaly-based network intrusion detection systems (NIDS), concentrating on which aspects of the network traffic are analyzed and what feature construction and selection methods have been used. Motivation for the paper comes from the large impact data preprocessing has on the accuracy and capability of anomaly-based NIDS. The review finds that many NIDS limit their view of network traffic to the TCP/IP packet headers. Time-based statistics can be derived from these headers to detect network scans, network worm behavior, and denial-of-service attacks. A number of other NIDS perform deeper inspection of request packets to detect attacks against network services and network applications. More recent approaches analyze full service responses to detect attacks targeting clients. The review covers a wide range of NIDS, highlighting which classes of attack are detectable by each of these approaches. Data preprocessing is found to rely predominantly on expert domain knowledge for identifying the most relevant parts of network traffic and for constructing the initial candidate set of traffic features. On the other hand, automated methods have been widely used for feature extraction to reduce data dimensionality, and for feature selection to find the most relevant subset of features from this candidate set. The review shows a trend toward deeper packet inspection to construct more relevant features through targeted content parsing. These context-sensitive features are required to detect current attacks.
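A hedged sketch of one of the header-derived, time-based statistics mentioned above: counting how many distinct destination ports a source address has touched within a sliding time window, a common indicator of port scanning. The tuple layout, window length and example packets are illustrative assumptions, not features from any specific NIDS surveyed in the paper.

```python
# Sliding-window distinct-port count per source; field layout is an assumption.
from collections import defaultdict

def distinct_port_counts(packets, window=5.0):
    """packets: iterable of (timestamp, src_ip, dst_ip, dst_port) tuples."""
    recent = defaultdict(list)              # src_ip -> [(timestamp, dst_port), ...]
    counts = {}
    for ts, src, _dst, dport in packets:
        recent[src].append((ts, dport))
        # Keep only events that are still inside the window.
        recent[src] = [(t, p) for t, p in recent[src] if ts - t <= window]
        counts[src] = len({p for _, p in recent[src]})
    return counts

pkts = [(0.1 * i, "10.0.0.5", "10.0.0.9", 20 + i) for i in range(10)]
print(distinct_port_counts(pkts))           # {'10.0.0.5': 10} -> suspiciously many ports
```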

Relevance:

10.00%

Publisher:

Abstract:

According to statistics and trend data, women continue to be substantially under-represented in the Australian professoriate, and growth in their representation has been slow despite the plethora of equity programs. While not disputing these facts, we propose that examining gender equity by cohort provides a complementary perspective on the status of gender equity in the professoriate. Based on over 500 survey responses, we detected substantial similarities between women and men who were appointed as professors or associate professors between 2005 and 2008. There were similar proportions of women and men appointed via external or internal processes or by invitation. Additionally, similar proportions of women and men professors expressed a marked preference for research over teaching. Furthermore, the distributions of age at appointment to the professoriate were similar between the genders. However, a notable gender difference was that women were appointed to the professoriate on average 1.9 years later than men. This later appointment provides one reason for the lower representation of women compared to men in the professoriate. It also raises questions about the typical length of time that women and men remain in the (paid) professoriate and the reasons why they might leave it. A further similarity between women and men in this cohort was their identification of motivation and circumstances as key factors in their career orientation. However, substantially more women identified motivation than circumstances, while the situation was reversed for men. The open-ended survey responses also provided confirmation that affirmative action initiatives make a difference to women's careers.

Relevance:

10.00%

Publisher:

Abstract:

Recently, the application of the quasi-steady-state approximation (QSSA) to the stochastic simulation algorithm (SSA) was suggested for the purpose of speeding up stochastic simulations of chemical systems that involve both relatively fast and slow chemical reactions [Rao and Arkin, J. Chem. Phys. 118, 4999 (2003)], and further work has led to the nested and slow-scale SSA. Improved numerical efficiency is obtained by respecting the vastly different time scales characterizing the system and then by advancing only the slow reactions exactly, based on a suitable approximation to the fast reactions. We considerably extend these works by applying the QSSA to numerical methods for the direct solution of the chemical master equation (CME) and, in particular, to the finite state projection algorithm [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)], in conjunction with Krylov methods. In addition, we point out some important connections to the literature on the (deterministic) total QSSA (tQSSA) and place the stochastic analogue of the QSSA within the more general framework of aggregation of Markov processes. We demonstrate the new methods on four examples: Michaelis–Menten enzyme kinetics, double phosphorylation, the Goldbeter–Koshland switch, and the mitogen-activated protein kinase cascade. Overall, we report dramatic improvements by applying the tQSSA to the CME solver.
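A hedged sketch of the finite state projection idea cited above [Munsky and Khammash, J. Chem. Phys. 124, 044104 (2006)]: truncate the state space of the chemical master equation, assemble the generator on that finite set, and propagate the probability vector with a matrix exponential. The birth-death model, rates and truncation size are illustrative assumptions, not the paper's examples.

```python
# Truncated CME generator for a birth-death process; model parameters are assumptions.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

k_prod, k_deg, N = 5.0, 0.5, 60                 # production rate, degradation rate, truncation

states = np.arange(N + 1)
birth = np.full(N + 1, k_prod)                  # propensity of x -> x + 1
death = k_deg * states                          # propensity of x -> x - 1

# Generator A for dp/dt = A p: off-diagonals hold inflow rates, the diagonal the outflow.
A = (diags(birth[:-1], -1) + diags(death[1:], 1)
     - diags(birth * (states < N) + death, 0)).tocsc()

p0 = np.zeros(N + 1)
p0[0] = 1.0                                     # start with zero molecules
p_t = expm_multiply(10.0 * A, p0)               # p(t=10) = exp(A t) p(0)

# Mode sits near the deterministic steady state k_prod / k_deg = 10,
# and the retained probability mass stays close to 1 within the truncation.
print(int(p_t.argmax()), float(p_t.sum()))
```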

Relevance:

10.00%

Publisher:

Abstract:

This thesis investigates profiling and differentiating customers through the use of statistical data mining techniques. The business application of our work centres on examining individuals' seldom studied yet critical consumption behaviour over an extensive time period within the context of the wireless telecommunication industry; consumption behaviour (as opposed to purchasing behaviour) is behaviour that has been performed so frequently that it has become habitual and involves minimal intention or decision making. The key variables investigated are the activity initiation timestamp and cell tower location, as well as the activity type and usage quantity (e.g., a voice call with its duration in seconds); the research focuses on customers' spatial and temporal usage behaviour. The main methodological emphasis is on the development of clustering models based on Gaussian mixture models (GMMs), which are fitted using the recently developed variational Bayesian (VB) method. VB is an efficient deterministic alternative to the popular but computationally demanding Markov chain Monte Carlo (MCMC) methods. The standard VB-GMM algorithm is extended by allowing component splitting, such that it is robust to initial parameter choices and can automatically and efficiently determine the number of components. The new algorithm we propose allows more effective modelling of individuals' highly heterogeneous and spiky spatial usage behaviour, or more generally human mobility patterns; the term spiky describes data patterns with large areas of low probability mixed with small areas of high probability. Customers are then characterised and segmented based on the fitted GMM, which corresponds to how each of them uses the products/services spatially in their daily lives; this essentially reflects their likely lifestyle and occupational traits. Other significant research contributions include fitting GMMs using VB to circular data, i.e., the temporal usage behaviour, and developing clustering algorithms suitable for high-dimensional data based on the use of VB-GMM.
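A hedged sketch of variational Bayesian Gaussian mixture clustering of spatial usage data, using scikit-learn's BayesianGaussianMixture as a stand-in for the extended VB-GMM described above (the component-splitting extension and the circular-data variant are not part of this library class). The synthetic "cell tower" coordinates and all parameter values are illustrative assumptions.

```python
# VB-fitted GMM on synthetic spatial usage data; data and parameters are assumptions.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
home = rng.normal([0.0, 0.0], 0.05, size=(300, 2))    # dense, spiky cluster of activity
work = rng.normal([5.0, 3.0], 0.10, size=(200, 2))
transit = rng.normal([2.5, 1.5], 1.00, size=(50, 2))  # sparse, spread-out usage
X = np.vstack([home, work, transit])

# Over-specify the number of components; the variational prior prunes unused ones.
vb_gmm = BayesianGaussianMixture(n_components=10, weight_concentration_prior=0.01,
                                 max_iter=500, random_state=0).fit(X)
active = vb_gmm.weights_ > 0.01
print("effective components:", int(active.sum()))
print(vb_gmm.means_[active])                          # centres near the home/work/transit areas
```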

Relevance:

10.00%

Publisher:

Abstract:

A rapid increase in research attention to projects that involve outside partners has recently been noted. Our knowledge of such inter-organizational projects, however, is limited. This paper reports large-scale data from a repeated trend survey of 2000 SMEs in 2006 and 2009 that focused on inter-organizational project ventures. Our major findings indicate that the overall prevalence of inter-organizational project ventures remained significant and stable over time, despite the economic crisis. Moreover, we find that these ventures predominantly solve repetitive rather than unique tasks and are embedded in prior relations between the partnering organizations. These findings provide empirical support for recent claims that project management should pay more attention to inter-organizational forms of project organization, and suggest that the archetypal view of projects as being unique in every respect should be reconsidered. Both have important implications for project management, especially in the area of project-based learning.

Relevance:

10.00%

Publisher:

Abstract:

Much current Queensland media rhetoric, government policy and legislation on truancy and youth justice appears to be based on ideas of responsibilisation, that is, of sheeting responsibility for children's behaviour back onto their parents. This article examines the evidence of parental responsibility provisions in juvenile justice and truancy legislation in Queensland and the drivers behind this approach. It considers recent legislative initiatives as part of an international trend toward making parents 'responsible' for the wrongs of their children. It identifies the parental responsibility rhetoric appearing in recent ministerial statements and associated media reports. It then asks: are these legislative provisions being enforced? And if so, are they successful? Are they simply adding to the administrative burdens placed on teachers and schools, and the socioeconomic burdens placed on already disadvantaged parents? Parental responsibility provisions have been discussed at length in the context of juvenile offending, and research suggests that punishing parents for the acts of their children does not decrease delinquency. Finally, the paper asks how, as a society, we intend to evaluate these punitive measures against parents.