284 results for RATE-VARIABILITY
Abstract:
It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. This survey examines existing approaches in this field based on a common set of criteria and illustrates their key concepts using a running example. The analysis shows that existing approaches are characterized by the fact that they extend a conventional process modeling language with constructs that enable it to capture customizable process models. A customizable process model represents a family of process variants in such a way that each variant can be derived by adding or deleting fragments according to configuration parameters or according to a domain model. The survey puts into evidence an abundance of customizable process modeling languages, embodying a diverse set of constructs. In contrast, there is comparatively little tool support for analyzing and constructing customizable process models, as well as a scarcity of empirical evaluations of languages in the field.
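The variant-derivation mechanism described in this abstract can be made concrete with a small, purely illustrative sketch (it does not correspond to any specific language covered by the survey): a customizable process model is represented as a set of process fragments guarded by conditions over configuration parameters, and a variant is derived by keeping only the fragments whose guards hold.

```python
# Minimal sketch of a customizable process model: fragments are kept or
# dropped according to configuration parameters. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

Config = Dict[str, object]

@dataclass
class Fragment:
    name: str
    guard: Callable[[Config], bool]  # fragment is included when guard(config) is True

@dataclass
class CustomizableModel:
    fragments: List[Fragment]

    def derive_variant(self, config: Config) -> List[str]:
        """Derive a process variant by keeping only the enabled fragments."""
        return [f.name for f in self.fragments if f.guard(config)]

# Example: a sales process with an optional credit check and country-specific invoicing.
model = CustomizableModel([
    Fragment("receive order", lambda c: True),
    Fragment("credit check", lambda c: c.get("order_value", 0) > 10_000),
    Fragment("issue invoice (AU GST)", lambda c: c.get("country") == "AU"),
    Fragment("issue invoice (EU VAT)", lambda c: c.get("country") in {"DE", "FR"}),
    Fragment("ship goods", lambda c: True),
])

print(model.derive_variant({"country": "AU", "order_value": 25_000}))
# ['receive order', 'credit check', 'issue invoice (AU GST)', 'ship goods']
```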
Abstract:
This thesis introduced Bayesian statistics as an analysis technique to isolate resonant frequency information in in-cylinder pressure signals taken from internal combustion engines. Applications of these techniques are relevant to engine design (performance and noise), energy conservation (fuel consumption) and alternative fuel evaluation. The use of Bayesian statistics, over traditional techniques, allowed for a more in-depth investigation into previously difficult-to-isolate engine parameters on a cycle-by-cycle basis. Specifically, these techniques facilitated the determination of the start of pre-mixed and diffusion combustion and allowed the in-cylinder temperature profile to be resolved on individual consecutive engine cycles. Dr Bodisco further showed the utility of the Bayesian analysis techniques by applying them to in-cylinder pressure signals taken from a compression ignition engine run with fumigated ethanol.
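As a rough illustration of the kind of inference described above (not the thesis's actual model), the following sketch computes an approximate Bayesian posterior over the frequency of a single resonance buried in noise, using the standard marginal posterior for a sinusoid with unknown amplitude, phase and noise variance. The signal, sampling rate and frequency grid are invented for the example.

```python
# A minimal sketch of Bayesian single-frequency estimation in a noisy signal,
# using the approximate marginal posterior for one sinusoid with unknown
# amplitude, phase and noise variance. Not the thesis's actual model.
import numpy as np

def frequency_posterior(t, d, freqs):
    """Posterior over frequency (normalised on the grid) for a single-sinusoid model."""
    d = d - d.mean()                       # remove DC offset
    N = len(d)
    mean_sq = np.mean(d ** 2)
    log_post = np.empty(len(freqs))
    for k, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        R = np.dot(d, np.cos(w * t))       # projections onto the sinusoid basis
        I = np.dot(d, np.sin(w * t))
        C = (R ** 2 + I ** 2) / N          # Schuster periodogram
        log_post[k] = (2.0 - N) / 2.0 * np.log(1.0 - 2.0 * C / (N * mean_sq))
    post = np.exp(log_post - log_post.max())
    return post / post.sum()               # normalise over the frequency grid

# Synthetic "resonance" buried in noise, standing in for a pressure-trace oscillation.
rng = np.random.default_rng(0)
t = np.arange(0, 0.02, 1.0 / 100_000)      # 20 ms sampled at 100 kHz
signal = 0.5 * np.sin(2 * np.pi * 6_500 * t + 0.3) + rng.normal(0, 1.0, t.size)
freqs = np.linspace(5_000, 8_000, 600)
post = frequency_posterior(t, signal, freqs)
print("posterior mode:", freqs[np.argmax(post)], "Hz")
```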
Abstract:
Purpose: Flat-detector, cone-beam computed tomography (CBCT) has enormous potential to improve the accuracy of treatment delivery in image-guided radiotherapy (IGRT). To assist radiotherapists in interpreting these images, we use a Bayesian statistical model to label each voxel according to its tissue type. Methods: The rich sources of prior information in IGRT are incorporated into a hidden Markov random field (MRF) model of the 3D image lattice. Tissue densities in the reference CT scan are estimated using inverse regression and then rescaled to approximate the corresponding CBCT intensity values. The treatment planning contours are combined with published studies of physiological variability to produce a spatial prior distribution for changes in the size, shape and position of the tumour volume and organs at risk (OAR). The voxel labels are estimated using the iterated conditional modes (ICM) algorithm. Results: The accuracy of the method has been evaluated using 27 CBCT scans of an electron density phantom (CIRS, Inc. model 062). The mean voxel-wise misclassification rate was 6.2%, with a Dice similarity coefficient of 0.73 for liver, muscle, breast and adipose tissue. Conclusions: By incorporating prior information, we are able to successfully segment CBCT images. This could be a viable approach for automated, online image analysis in radiotherapy.
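A simplified sketch of the labelling step follows: iterated conditional modes on a 2D slice with a Gaussian intensity likelihood per tissue class and a Potts smoothness prior. The planning-contour spatial priors and the reference-CT density rescaling described in the abstract are deliberately omitted, so this is only a minimal illustration of ICM, not the published method.

```python
# Simplified sketch of ICM segmentation: Gaussian likelihood per tissue class
# plus a Potts smoothness prior over the 4-neighbourhood of each pixel.
import numpy as np

def icm_segment(image, means, sds, beta=1.5, n_iter=10):
    """Assign each pixel the label minimising -log likelihood + Potts penalty."""
    labels = np.abs(image[..., None] - means).argmin(-1)   # initialise by nearest mean
    nrows, ncols = image.shape
    for _ in range(n_iter):
        for i in range(nrows):
            for j in range(ncols):
                # negative log Gaussian likelihood for every candidate label
                energy = 0.5 * ((image[i, j] - means) / sds) ** 2 + np.log(sds)
                # Potts prior: penalise disagreement with each existing neighbour label
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < nrows and 0 <= nj < ncols:
                        energy += beta * (np.arange(len(means)) != labels[ni, nj])
                labels[i, j] = energy.argmin()
    return labels

# Toy example: two "tissues" with different mean intensities.
rng = np.random.default_rng(1)
truth = np.zeros((40, 40), dtype=int)
truth[:, 20:] = 1
img = rng.normal(loc=np.where(truth == 0, 100.0, 160.0), scale=20.0)
seg = icm_segment(img, means=np.array([100.0, 160.0]), sds=np.array([20.0, 20.0]))
print("misclassification rate:", (seg != truth).mean())
```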
Abstract:
The Australian region spans some 60° of latitude and 50° of longitude and displays considerable regional climate variability both today and during the Late Quaternary. A synthesis of marine and terrestrial climate records, combining findings from the Southern Ocean, temperate, tropical and arid zones, identifies a complex response of climate proxies to a background of changing boundary conditions over the last 35,000 years. Climate drivers include the seasonal timing of insolation, greenhouse gas content of the atmosphere, sea level rise and ocean and atmospheric circulation changes. Our compilation finds few climatic events that could be used to construct a climate event stratigraphy for the entire region, limiting the usefulness of this approach. Instead we have taken a spatial approach, looking to discern the patterns of change across the continent. The data identify the clearest and most synchronous climatic response at the time of the Last Glacial Maximum (LGM) (21 ± 3 ka), with unambiguous cooling recorded in the ocean, and evidence of glaciation in the highlands of tropical New Guinea, southeast Australia and Tasmania. Many terrestrial records suggest drier conditions, but with the timing of inferred snowmelt, and changes to the rainfall/runoff relationships, driving higher river discharge at the LGM. In contrast, the deglaciation is a time of considerable south-east to north-west variation across the region. Warming was underway in all regions by 17 ka. Post-glacial sea level rise and its associated regional impacts have played an important role in determining the magnitude and timing of climate response in the north-west of the continent in contrast to the southern latitudes. No evidence for cooling during the Younger Dryas chronozone is evident in the region, but the Antarctic cold reversal clearly occurs south of Australia. The Holocene period is a time of considerable climate variability associated with an intense monsoon in the tropics early in the Holocene, giving way to a weakened monsoon and an increasingly El Niño-dominated ENSO to the present. The influence of ENSO is evident throughout the southeast of Australia, but not the southwest. This climate history provides a template from which to assess the regionality of climate events across Australia and make comparisons beyond our region.
Abstract:
Soil-based emissions of nitrous oxide (N2O), a well-known greenhouse gas, have been associated with changes in soil water-filled pore space (WFPS) and soil temperature in many previous studies. However, it is acknowledged that the environment-N2O relationship is complex and still relatively poorly understood. In this article, we employed a Bayesian model selection approach (reversible jump Markov chain Monte Carlo) to develop a data-informed model of the relationship between daily N2O emissions and daily WFPS and soil temperature measurements between March 2007 and February 2009 from a soil under pasture in Queensland, Australia, taking seasonal factors and time-lagged effects into account. The model indicates a very strong relationship between a hybrid seasonal structure and daily N2O emission, with the latter substantially increased in summer. Given the other variables in the model, daily soil WFPS, lagged by a week, had a negative influence on daily N2O; there was evidence of a nonlinear positive relationship between daily soil WFPS and daily N2O emission; and daily soil temperature tended to have a linear positive relationship with daily N2O emission when daily soil temperature was above a threshold of approximately 19°C. We suggest that this flexible Bayesian modeling approach could facilitate greater understanding of the shape of the covariate-N2O flux relationship and the detection of effect thresholds in the natural temporal variation of environmental variables on N2O emission.
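To make the reported effects concrete, the sketch below encodes them as regression covariates: a seasonal indicator, WFPS lagged by seven days, and a hinge term for soil temperature above roughly 19°C. It uses ordinary least squares on synthetic data purely for illustration; the article itself uses reversible jump MCMC for model selection, and all values below are invented.

```python
# Sketch only: encode the reported effects (season, 7-day-lagged WFPS, temperature
# hinge above ~19 degC) as covariates and fit by ordinary least squares.
import numpy as np

def design_matrix(wfps, soil_temp, is_summer, lag=7, temp_threshold=19.0):
    """Build covariates for daily N2O: intercept, season, lagged WFPS, temperature hinge."""
    n = len(wfps)
    return np.column_stack([
        np.ones(n - lag),                                   # intercept
        is_summer[lag:].astype(float),                      # seasonal indicator
        wfps[:n - lag],                                     # WFPS lagged by `lag` days
        np.maximum(soil_temp[lag:] - temp_threshold, 0.0),  # linear effect above threshold
    ])

# Synthetic daily series standing in for the pasture-site measurements.
rng = np.random.default_rng(2)
days = 730
wfps = rng.uniform(20, 90, days)                 # water-filled pore space (%)
temp = 15 + 10 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 2, days)
summer = (np.arange(days) % 365) < 90
X = design_matrix(wfps, temp, summer)
true_beta = np.array([5.0, 8.0, -0.05, 1.2])     # illustrative coefficients only
y = X @ true_beta + rng.normal(0, 1.0, len(X))   # pretend daily N2O flux
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated coefficients:", np.round(beta_hat, 3))
```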
Abstract:
Although transit travel time variability is essential for understanding the deterioration of reliability, optimising transit schedules and route choice, it has not attracted enough attention in the literature. This paper proposes public transport-oriented definitions of travel time variability and explores the distributions of public transport travel time using Transit Signal Priority data. First, definitions of public transport travel time variability are established by extending the common definitions of variability in the literature and by using route and service data of public transport vehicles. Second, the paper explores the distribution of public transport travel time. A new approach for analysing the distributions involving all transit vehicles, as well as vehicles from a specific route, is proposed. The lognormal distribution is revealed as the descriptor of public transport travel time from the same route and service. The methods described in this study could be of interest to both traffic managers and transit operators for planning and managing transit systems.
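A small sketch of the distribution-fitting step described above: fit a lognormal to the travel times of a single route/service and report a few common variability descriptors. The data here are synthetic; the paper works with Transit Signal Priority records, and the specific descriptors chosen are illustrative rather than the paper's definitions.

```python
# Sketch: fit a lognormal to one route's travel times and compute simple
# variability descriptors. Data are synthetic, descriptors are illustrative.
import numpy as np
from scipy import stats

def travel_time_variability(travel_times):
    """Fit a lognormal and return a few standard variability measures."""
    shape, loc, scale = stats.lognorm.fit(travel_times, floc=0)  # fix location at zero
    mean = stats.lognorm.mean(shape, loc, scale)
    p95 = stats.lognorm.ppf(0.95, shape, loc, scale)
    return {
        "mean": mean,
        "std": stats.lognorm.std(shape, loc, scale),
        "coef_of_variation": stats.lognorm.std(shape, loc, scale) / mean,
        "buffer_index": (p95 - mean) / mean,   # extra time budgeted for reliability
    }

rng = np.random.default_rng(3)
times = rng.lognormal(mean=np.log(22), sigma=0.18, size=500)  # minutes, one route
for name, value in travel_time_variability(times).items():
    print(f"{name}: {value:.2f}")
```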
Individual variability in compensatory eating following acute exercise in overweight and obese women
Abstract:
Background: While compensatory eating following acute aerobic exercise is highly variable, little is known about the underlying mechanisms that contribute to alterations in exercise-induced eating behaviour. Methods: Overweight and obese women (BMI = 29.6 ± 4.0 kg/m2) performed a bout of cycling individually tailored to expend 400 kcal (EX), or a time-matched no-exercise control condition, in a randomised, counter-balanced order. Sixty minutes after the cessation of exercise, an ad libitum test meal was provided. Substrate oxidation and subjective appetite ratings were measured during exercise/time-matched rest, and during the period between the cessation of exercise and food consumption. Results: While ad libitum energy intake (EI) did not differ between EX and the control condition (666.0 ± 203.9 kcal vs. 664.6 ± 174.4 kcal, respectively; ns), there was marked individual variability in compensatory EI. The difference in EI between EX and the control condition ranged from -234.3 to +278.5 kcal. Carbohydrate oxidation during exercise was positively associated with post-exercise EI, accounting for 37% of the variance in EI (r = 0.57; p = 0.02). Conclusions: These data indicate that the capacity of acute exercise to create a short-term energy deficit in overweight and obese women is highly variable. Furthermore, exercise-induced carbohydrate oxidation can explain part of the variability in acute exercise-induced compensatory eating. Post-exercise compensatory eating could serve as an adaptive response to facilitate the restoration of carbohydrate balance.
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are interdependent: it is difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated for text-dependent speaker verification using hidden Markov model-based, digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validation of the derived expressions for error estimates is carried out on test data. The performance of the sequential method is further demonstrated to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criteria. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error-based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping. The tuning of the parameters (the number of instances and samples) serves both the security and user-convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
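For the statistically independent case mentioned above, the error expressions can be sketched as follows, assuming one plausible combination rule: a stage accepts the claimant if any of its attempts (samples) is accepted, and the cascade accepts only if every stage (instance) accepts. The base error rates below are invented, and the correlated-decision extension developed in the dissertation is not modelled.

```python
# Sketch of the independent-decision case only: per-stage and overall error rates
# for one plausible combination rule (a stage accepts if any attempt passes; the
# cascade accepts only if every stage accepts). Base rates are illustrative.
from math import prod

def stage_errors(far, frr, samples):
    """Per-stage false-accept / false-reject rates with `samples` independent attempts."""
    stage_far = 1.0 - (1.0 - far) ** samples   # impostor accepted if any attempt passes
    stage_frr = frr ** samples                 # client rejected only if all attempts fail
    return stage_far, stage_frr

def cascade_errors(base_rates, samples):
    """Overall rates for a cascade in which every stage must accept the claimant."""
    fars, frrs = zip(*(stage_errors(far, frr, samples) for far, frr in base_rates))
    overall_far = prod(fars)                         # impostor must fool every stage
    overall_frr = 1.0 - prod(1.0 - f for f in frrs)  # client fails if any stage rejects
    return overall_far, overall_frr

# Three digit-classifier stages with (FAR, FRR) base performances (illustrative numbers).
base = [(0.05, 0.08), (0.06, 0.07), (0.04, 0.10)]
for samples in (1, 2, 3):
    far, frr = cascade_errors(base, samples)
    print(f"samples per stage = {samples}: FAR = {far:.5f}, FRR = {frr:.5f}")
```

Adding stages drives the overall false-accept rate down at the cost of more false rejects, while adding attempts per stage pulls in the opposite direction, which is the controlled trade-off the abstract describes.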
Abstract:
Cone-beam computed tomography (CBCT) has enormous potential to improve the accuracy of treatment delivery in image-guided radiotherapy (IGRT). To assist radiotherapists in interpreting these images, we use a Bayesian statistical model to label each voxel according to its tissue type. The rich sources of prior information in IGRT are incorporated into a hidden Markov random field model of the 3D image lattice. Tissue densities in the reference CT scan are estimated using inverse regression and then rescaled to approximate the corresponding CBCT intensity values. The treatment planning contours are combined with published studies of physiological variability to produce a spatial prior distribution for changes in the size, shape and position of the tumour volume and organs at risk. The voxel labels are estimated using iterated conditional modes. The accuracy of the method has been evaluated using 27 CBCT scans of an electron density phantom. The mean voxel-wise misclassification rate was 6.2%, with a Dice similarity coefficient of 0.73 for liver, muscle, breast and adipose tissue. By incorporating prior information, we are able to successfully segment CBCT images. This could be a viable approach for automated, online image analysis in radiotherapy.
Abstract:
The problem of estimating pseudobearing rate information of an airborne target based on measurements from a vision sensor is considered. Novel image speed and heading angle estimators are presented that exploit image morphology, hidden Markov model (HMM) filtering, and relative entropy rate (RER) concepts to allow pseudobearing rate information to be determined before (or whilst) the target track is being estimated from vision information.
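The filtering component can be illustrated with a generic discrete-state HMM filter recursion (predict, update, normalise) over quantised image-plane cells. The image-morphology preprocessing and relative entropy rate estimators of the paper are not reproduced; the transition matrix and likelihoods below are invented for the example.

```python
# Generic discrete HMM filter recursion (predict + update) over quantised target
# states in the image plane. Transition matrix and likelihoods are illustrative.
import numpy as np

def hmm_filter(transition, emission_likelihoods, prior):
    """Return the posterior state distribution after each measurement."""
    belief = prior.copy()
    posteriors = []
    for likelihood in emission_likelihoods:   # one likelihood vector per image frame
        belief = transition.T @ belief        # predict: propagate through the dynamics
        belief = belief * likelihood          # update: weight by the measurement model
        belief = belief / belief.sum()        # normalise
        posteriors.append(belief.copy())
    return np.array(posteriors)

# Toy example: 5 quantised bearing cells, target tends to drift one cell per frame.
n_states = 5
transition = 0.8 * np.eye(n_states) + 0.2 * np.eye(n_states, k=1)
transition[-1, -1] = 1.0                      # last cell is absorbing
transition /= transition.sum(axis=1, keepdims=True)
prior = np.full(n_states, 1.0 / n_states)
# Likelihoods from three frames, each favouring a progressively later cell.
likelihoods = np.array([[0.60, 0.30, 0.05, 0.03, 0.02],
                        [0.10, 0.60, 0.20, 0.05, 0.05],
                        [0.05, 0.10, 0.60, 0.20, 0.05]])
posteriors = hmm_filter(transition, likelihoods, prior)
print("most likely cell per frame:", posteriors.argmax(axis=1))
```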
Abstract:
Development and application of inorganic adsorbent materials have been continuously investigated due to their variability and versatility. This Master's thesis has expanded the knowledge in the field of adsorption targeting radioactive iodine waste and proteins using modified inorganic materials. Industrial treatment of radioactive waste and safe disposal of nuclear waste are constant concerns around the world as applications of radioactive materials develop. To address these problems, laminar titanate with a large surface area (143 m2 g−1) was synthesized from inorganic titanium compounds by hydrothermal reactions at 433 K. Ag2O nanocrystals with particle sizes ranging from 5 to 30 nm were anchored on the titanate lamina surface, which has crystallographic similarity to that of the Ag2O nanocrystals. The deposited Ag2O nanocrystals and the titanate substrate can therefore join at these surfaces, forming a coherent interface. Such coherence between the two phases reduces the overall energy by minimizing surface energy and holds the Ag2O nanocrystals firmly on the outer surface of the titanate structure. The composite was then applied as an efficient adsorbent to remove radioactive iodine from water (one gram of adsorbent can capture up to 3.4 mmol of I− anions), and it can be recovered easily for safe disposal. The structural changes of the titanate lamina and the composite adsorbent were characterized via various techniques. The isotherms and kinetics of iodine adsorption, competitive adsorption and column adsorption were studied to determine the iodine removal abilities of the adsorbent. It is shown that the adsorbent exhibited excellent trapping ability towards iodine in a fixed-bed column despite the presence of competitive ions. Hence, Ag2O-deposited titanate lamina could serve as an effective adsorbent for removing iodine from radioactive waste. Surface hydroxyl groups of inorganic materials are widely exploited for modification purposes, and modification of inorganic materials for biomolecule adsorption can also be achieved. Specifically, γ-Al2O3 nanofibre material is obtained via calcination of a boehmite precursor, which is synthesised by surfactant-directed hydrothermal reactions. These γ-Al2O3 nanofibres possess a large surface area (243 m2 g−1), good stability under extreme chemical conditions, good mechanical strength and abundant surface hydroxyl groups, making them ideal candidates for industrial separation columns. The fibrous morphology of the adsorbent also guarantees facile recovery from aqueous solution by both centrifugation and sedimentation. By chemically bonding dye molecules, the surface charge of γ-Al2O3 is changed with the aim of selectively capturing lysozyme from chicken egg white solution. The highest lysozyme adsorption capacity obtained was around 600 mg/g, and the lysozyme proportion was elevated from around 5% to 69% in chicken egg white solution. Adsorption tests at different solution pH showed that electrostatic force played the key role in the good selectivity and high adsorption rate of the surface-modified γ-Al2O3 nanofibre adsorbents. Overall, surface-modified fibrous γ-Al2O3 could potentially be applied as an efficient adsorbent for capturing various biomolecules.
Abstract:
In this paper we explore the relationship between monthly random breath testing (RBT) rates (per 1,000 licensed drivers) and alcohol-related traffic crash (ARTC) rates over time, across two Australian states: Queensland and Western Australia. We analyse the RBT, ARTC and licensed driver rates across 12 years; however, due to administrative restrictions, we model ARTC rates against RBT rates for the period July 2004 to June 2009. The Queensland data reveal that the monthly ARTC rate is almost flat over the five-year period. Based on the results of the analysis, an average of 5.5 ARTCs per 100,000 licensed drivers is observed across the study period. For the same period, the monthly rate of RBTs per 1,000 licensed drivers is observed to be decreasing across the study, with the results of the analysis revealing no significant variations in the data. The comparison between Western Australia and Queensland shows that Queensland's ARTC monthly percent change (MPC) is 0.014 compared to an MPC of 0.47 for Western Australia. While Queensland maintains a relatively flat ARTC rate, the ARTC rate in Western Australia is increasing. Our analysis reveals an inverse relationship between ARTC and RBT rates: for every 10% increase in the ratio of RBTs to licensed drivers there is a 0.15 decrease in the rate of ARTCs per 100,000 licensed drivers. Moreover, in Western Australia, if the 2011 ratio of 1:2 (RBTs to annual number of licensed drivers) were to double to a ratio of 1:1, we estimate the number of monthly ARTCs would reduce by approximately 15. Based on these findings, we believe that as the number of RBTs conducted increases, the number of drivers willing to risk being detected for drink driving decreases, because the perceived risk of being detected is considered greater. This in turn results in the number of ARTCs diminishing. The results of this study provide an important evidence base for policy decisions on RBT operations.
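For readers unfamiliar with the rate conventions used above, the following trivial sketch shows how the monthly rates are computed from raw counts (RBTs per 1,000 licensed drivers, ARTCs per 100,000 licensed drivers); the counts and driver population are illustrative, not the Queensland or Western Australia figures.

```python
# Sketch of the rate calculations only; all counts below are illustrative.
def rbt_rate(rbt_count, licensed_drivers):
    """Random breath tests per 1,000 licensed drivers for one month."""
    return 1_000.0 * rbt_count / licensed_drivers

def artc_rate(artc_count, licensed_drivers):
    """Alcohol-related traffic crashes per 100,000 licensed drivers for one month."""
    return 100_000.0 * artc_count / licensed_drivers

drivers = 3_400_000                  # illustrative licensed-driver population
print(rbt_rate(250_000, drivers))    # about 73.5 RBTs per 1,000 drivers
print(artc_rate(187, drivers))       # about 5.5 ARTCs per 100,000 drivers
```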
Abstract:
This thesis is a population-based epidemiological study to explore the spatial and temporal patterns of malaria, and to assess the relationship between socio-ecological factors and malaria in Yunnan, China. Geospatial and temporal approaches were applied; the high-risk areas of the disease were identified; and socio-ecological drivers of malaria were assessed. These findings will provide important evidence for the control and prevention of malaria in China and in other countries with a similar situation of endemic malaria.
Abstract:
Purpose: The objectives of this study were to examine the effect of 4-week moderate- and high-intensity interval training (MIIT and HIIT) on fat oxidation and the responses of blood lactate (BLa) and rating of perceived exertion (RPE). Methods: Ten overweight/obese men (age = 29 ± 3.7 years, BMI = 30.7 ± 3.4 kg/m2) participated in a cross-over study of 4-week MIIT and HIIT training. The MIIT training sessions consisted of 5-min cycling stages at mechanical workloads 20% above and 20% below 45% VO2peak. The HIIT sessions consisted of intervals of 30-s work at 90% VO2peak and 30-s rest. Pre- and post-training assessments included VO2max using a graded exercise test (GXT) and fat oxidation using a 45-min constant-load test at 45% VO2max. BLa and RPE were also measured during the constant-load exercise test. Results: There were no significant changes in body composition with either intervention. There were significant increases in fat oxidation after MIIT and HIIT (p ≤ 0.01), with no effect of intensity. BLa during the constant-load exercise test significantly decreased after MIIT and HIIT (p ≤ 0.01), and the difference between MIIT and HIIT was not significant (p = 0.09). RPE decreased significantly more after HIIT than after MIIT (p ≤ 0.05). Conclusion: Interval training can increase fat oxidation with no effect of exercise intensity, but BLa and RPE decreased after HIIT to a greater extent than after MIIT.
Abstract:
Most mathematical models of collective cell spreading make the standard assumption that the cell diffusivity and cell proliferation rate are constants that do not vary across the cell population. Here we present a combined experimental and mathematical modeling study which aims to investigate how differences in the cell diffusivity and cell proliferation rate amongst a population of cells can impact the collective behavior of the population. We present data from a three-dimensional transwell migration assay which suggests that the cell diffusivity of some groups of cells within the population can be as much as three times higher than the cell diffusivity of other groups of cells within the population. Using this information, we explore the consequences of explicitly representing this variability in a mathematical model of a scratch assay where we treat the total population of cells as two, possibly distinct, subpopulations. Our results show that when we make the standard assumption that all cells within the population behave identically we observe the formation of moving fronts of cells where both subpopulations are well-mixed and indistinguishable. In contrast, when we consider the same system where the two subpopulations are distinct, we observe a very different outcome where the spreading population becomes spatially organized with the more motile subpopulation dominating at the leading edge while the less motile subpopulation is practically absent from the leading edge. These modeling predictions are consistent with previous experimental observations and suggest that standard mathematical approaches, where we treat the cell diffusivity and cell proliferation rate as constants, might not be appropriate.
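A sketch of the two-subpopulation idea in one spatial dimension: each subpopulation diffuses with its own diffusivity and proliferates logistically toward a shared carrying capacity, solved with an explicit finite-difference scheme. The parameter values and initial condition are illustrative and are not taken from the paper.

```python
# Sketch: 1D two-subpopulation spreading model with distinct diffusivities and
# logistic growth toward a shared carrying capacity, solved by explicit finite
# differences. Parameters and initial condition are illustrative only.
import numpy as np

def simulate(D1, D2, r1, r2, K=1.0, L=2000.0, nx=401, dt=0.01, t_end=48.0):
    """Return positions x and densities u1, u2 after t_end hours on a domain of length L (microns)."""
    dx = L / (nx - 1)
    x = np.linspace(0, L, nx)
    u1 = np.where(np.abs(x - L / 2) < 100, 0.25, 0.0)   # initial scratch-like strip
    u2 = np.where(np.abs(x - L / 2) < 100, 0.25, 0.0)
    for _ in range(int(t_end / dt)):
        lap1 = (np.roll(u1, -1) - 2 * u1 + np.roll(u1, 1)) / dx ** 2
        lap2 = (np.roll(u2, -1) - 2 * u2 + np.roll(u2, 1)) / dx ** 2
        total = u1 + u2
        u1 = u1 + dt * (D1 * lap1 + r1 * u1 * (1 - total / K))
        u2 = u2 + dt * (D2 * lap2 + r2 * u2 * (1 - total / K))
        u1[0], u1[-1] = u1[1], u1[-2]                    # zero-flux boundaries
        u2[0], u2[-1] = u2[1], u2[-2]
    return x, u1, u2

# A more motile subpopulation (D1) versus one three-fold less motile (D2), in microns^2/hour.
x, u1, u2 = simulate(D1=900.0, D2=300.0, r1=0.05, r2=0.05)
edge = np.argmax((u1 + u2) > 0.01)                       # left edge of the spreading front
print("fraction of motile cells at the leading edge:",
      round(u1[edge] / (u1[edge] + u2[edge]), 2))
```

Running the sketch shows the more motile subpopulation dominating the leading edge while the less motile one lags behind, the qualitative outcome the abstract describes for distinct subpopulations.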