166 results for Observational techniques and algorithms
Abstract:
Using a combination of multivariate statistical techniques and the graphical assessment of major ion ratios, the influences on the hydrochemical variability of coal seam gas (or coal bed methane) groundwaters from several sites in the Surat and Clarence-Moreton basins in Queensland, Australia, were investigated. Several characteristic relationships between major ions were observed: 1) a strong positive linear correlation between the Na/Cl and alkalinity/Cl ratios; 2) an exponentially decaying trend between the Na/Cl and Na/alkalinity ratios; 3) inverse linear relationships between increasing chloride concentrations and decreasing pH for high-salinity groundwaters; and 4) high residual alkalinity for lower-salinity waters, and an inverse relationship between decreasing residual alkalinity and increasing chloride concentrations for more saline waters. The interpretation of the hydrochemical data provides valuable insights into the hydrochemical evolution of coal seam gas (CSG) groundwaters, considering both the source of major ions in coals and the influence of microbial activity. Elevated chloride and sodium concentrations in more saline groundwaters appear to be influenced by organic-bound chlorine held in the coal matrix, a sodium and chloride ion source that has largely been neglected in previous CSG groundwater studies. In contrast, the high concentrations of bicarbonate in low-salinity waters could not be explained, and are possibly associated with a number of different factors such as coal degradation, methanogenic processes, the evolution of high-bicarbonate NaHCO3 water types earlier in the evolutionary pathway, and variability in gas reservoir characteristics. Using recently published data for CSG groundwaters in different basins, the characteristic major ion relationships identified for the new data presented in this study were also observed in other CSG groundwaters from Australia, as well as in the Illinois Basin in the USA.
This observation suggests that where coal maceral content and the dominant methanogenic pathway are similar, and where organic-bound chlorine is relatively abundant, distinct hydrochemical responses may be observed. Comparisons with published data on other NaHCO3 water types in non-CSG environments suggest that the characteristic major ion relationships described here can: i) serve as an indicator of potential CSG groundwaters in certain coal-bearing aquifers that contain methane; and ii) help in developing strategic sampling programmes for CSG exploration and in monitoring potential impacts of CSG activities on groundwater resources.
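As a minimal sketch of the first characteristic relationship above (a strong positive linear correlation between the Na/Cl and alkalinity/Cl ratios), the snippet below computes both ratios and their Pearson correlation. The concentration values are synthetic and purely illustrative, not data from the study.

```python
# Sketch: checking for a positive linear correlation between Na/Cl and
# alkalinity/Cl ratios on synthetic groundwater analyses (values in meq/L,
# invented for illustration only).

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# (Na, Cl, alkalinity) tuples; Na/Cl and alkalinity/Cl rise together here.
samples = [(10.0, 10.0, 5.0), (24.0, 20.0, 13.2),
           (45.0, 30.0, 27.0), (80.0, 40.0, 52.0)]

na_cl  = [na / cl for na, cl, alk in samples]
alk_cl = [alk / cl for na, cl, alk in samples]
r = pearson_r(na_cl, alk_cl)   # close to +1 for these constructed data
```

In real datasets the correlation would be assessed alongside the graphical ratio plots the study describes.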
Abstract:
South Africa is an emerging, industrializing economy which is experiencing remarkable progress. We contend that amidst these developments, the roles of energy, trade openness and financial development are critical. In this article, we revisit the pivotal role of these factors. We use the ARDL bounds [72] and the Bayer and Hanck [11] cointegration techniques, and an extended Cobb–Douglas framework, to examine the long-run association with output per worker over the sample period 1971–2011. The results support a long-run association between output per worker, capital per worker and the shift parameters. The short-run elasticity coefficients are as follows: energy (0.24), trade (0.07), and financial development (−0.03). In the long run, the elasticity coefficients are: trade openness (0.05), energy (0.29), and financial development (−0.04). In both the short run and the long run, we note that the post-2000 period has a marginal positive effect on the economy. The Toda and Yamamoto [91] Granger causality results show a unidirectional causality from capital stock and energy consumption to output, and from capital stock to trade openness; a bidirectional causality between trade openness and output; and an absence (neutrality) of causality between financial development and output, indicating that these two variables evolve independently of each other.
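To make the reported elasticities concrete: in a Cobb–Douglas framework, an elasticity such as the 0.29 long-run figure for energy is the slope coefficient in a log-linear regression. The sketch below recovers an elasticity by ordinary least squares on synthetic, noiseless data constructed to embed 0.29; it is an illustration of the functional form, not the paper's ARDL estimation.

```python
import numpy as np

# Sketch: reading an elasticity off a log-linear Cobb-Douglas relation,
# ln(output) = a + b * ln(energy). Data are synthetic, built so that the
# true elasticity b equals 0.29.

energy = np.array([100.0, 120.0, 150.0, 200.0, 260.0])
a_true, b_true = 1.0, 0.29
output = np.exp(a_true + b_true * np.log(energy))  # noiseless for clarity

X = np.column_stack([np.ones_like(energy), np.log(energy)])
coef, *_ = np.linalg.lstsq(X, np.log(output), rcond=None)
a_hat, b_hat = coef   # b_hat is the estimated elasticity
```

With real data the same slope would be estimated within a cointegration framework rather than plain OLS.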
Abstract:
Phosphorus has a number of indispensable biochemical roles, but its pattern of natural deposition, the low solubility of phosphates and their rapid transformation to insoluble forms commonly make the element the growth-limiting nutrient, particularly in aquatic ecosystems. Phosphorus that reaches water bodies is commonly the main cause of eutrophication, an undesirable process that can severely affect many aquatic biotas around the world. Many management practices have been proposed, but long-term monitoring of phosphorus levels is necessary to ensure that eutrophication does not occur. Passive sampling techniques, developed over the last few decades, offer several advantages over conventional sampling methods, including simpler sampling devices, more cost-effective sampling campaigns, and the ability to provide flow-proportional loads as well as representative average concentrations of phosphorus in the environment. Although some types of passive samplers are commercially available, their use is still scarcely reported in the literature. In Japan, there is limited application of passive sampling techniques to monitor phosphorus, even in agricultural environments. This paper aims to introduce these relatively new P-sampling techniques and their potential for use in environmental monitoring studies.
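The "representative average concentration" an integrative passive sampler provides is conventionally computed as C_TWA = M / (Rs × t), where M is the analyte mass accumulated, Rs the calibrated sampling rate, and t the deployment time. The sketch below applies this standard relation; the sampling rate and mass are hypothetical, since real Rs values are calibrated per sampler and analyte.

```python
# Sketch: the time-weighted-average (TWA) concentration calculation used
# with integrative passive samplers, C_TWA = M / (Rs * t). All numbers
# below are hypothetical.

def twa_concentration(mass_ug, sampling_rate_l_per_day, days):
    """Average phosphorus concentration (ug/L) over the deployment."""
    return mass_ug / (sampling_rate_l_per_day * days)

# e.g. 12 ug of P accumulated over a 30-day deployment at Rs = 0.2 L/day
c_avg = twa_concentration(12.0, 0.2, 30)   # 2.0 ug/L
```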
Abstract:
This thesis investigates the use of fusion techniques and mathematical modelling to increase the robustness of iris recognition systems against iris image quality degradation, pupil size changes and partial occlusion. The proposed techniques improve recognition accuracy and enhance security. They can be further developed for better iris recognition in less constrained environments that do not require user cooperation. A framework to analyse the consistency of different regions of the iris is also developed. This can be applied to improve recognition systems using partial iris images, and cancelable biometric signatures or biometric based cryptography for privacy protection.
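The thesis's specific fusion techniques are not detailed in the abstract; a common building block in iris recognition that the occlusion-robustness discussion rests on is the masked fractional Hamming distance, which compares iris codes only at positions not flagged as occluded. The sketch below is that generic building block, with toy bit vectors, not the thesis's method.

```python
# Sketch: masked fractional Hamming distance between two iris codes,
# ignoring bits flagged invalid (e.g. occluded by eyelids or eyelashes).

def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over positions valid in both codes."""
    valid = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    if not valid:
        return None  # no overlap: comparison is meaningless
    disagree = sum(code_a[i] != code_b[i] for i in valid)
    return disagree / len(valid)

a    = [1, 0, 1, 1, 0, 0, 1, 0]
b    = [1, 0, 0, 1, 0, 1, 1, 0]
mask = [1, 1, 1, 1, 1, 1, 0, 0]   # last two bits occluded in both images
hd = masked_hamming(a, b, mask, mask)   # 2 disagreements over 6 valid bits
```

A smaller distance indicates a likelier match; partial-iris recognition amounts to making this comparison reliable when the mask removes large regions.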
Abstract:
The purpose of this article is to show the applicability and benefits of design-of-experiments techniques as an optimization tool for discrete simulation models. Simulated systems are computational representations of real-life systems; their characteristics include a constant evolution driven by the occurrence of discrete events over time. In this study, a production system designed under the JIT (Just in Time) business philosophy is used; JIT seeks to achieve excellence in organizations through waste reduction in all operational aspects. The most typical tool of JIT systems is KANBAN production control, which seeks to synchronize demand with the flow of materials, minimize work in process, and define production metrics. Using experimental design techniques for stochastic optimization, the impact of the operational factors on the efficiency of the KANBAN / CONWIP simulation model is analyzed. The results show the effectiveness of integrating experimental design techniques and discrete simulation models in the calculation of the operational parameters. Furthermore, the reliability of the resulting methodologies was improved with an additional statistical consideration.
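As a minimal sketch of the experimental-design idea, the snippet below evaluates a 2² factorial design of the kind used to screen simulation factors: each factor is coded at a low (−1) and high (+1) level, and a factor's main effect is the mean response at its high level minus the mean at its low level. The factor names and throughput values are invented, not outputs of the paper's KANBAN/CONWIP model.

```python
# Sketch: main effects from a 2^2 factorial design. Each run is
# (kanban_cards_level, container_size_level, simulated_throughput),
# with levels coded -1/+1 and throughputs hypothetical.

runs = [(-1, -1, 50.0), (+1, -1, 62.0), (-1, +1, 54.0), (+1, +1, 70.0)]

def main_effect(runs, factor_index):
    """Mean response at the factor's high level minus at its low level."""
    hi = [run[2] for run in runs if run[factor_index] == +1]
    lo = [run[2] for run in runs if run[factor_index] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

effect_cards = main_effect(runs, 0)   # effect of number of kanban cards
effect_size  = main_effect(runs, 1)   # effect of container size
```

In a stochastic simulation each design point would be replicated and effects tested for significance before tuning the operational parameters.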
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via the Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test-based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results reflect uncertainty in the final model and report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally lightweight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
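The core overfitting idea ("priors are chosen to encourage extra groups to have weights approaching zero") can be illustrated without the full sampler: under a small symmetric Dirichlet prior on the mixture weights, components that receive no observations get posterior weights near zero. The sketch below draws one such posterior sample; the counts and prior value are illustrative, and this is not the Zmix sampler itself.

```python
import random

# Sketch: posterior Dirichlet weights in an overfitted mixture. Two of the
# four components are "extra" (zero allocated observations); with a small
# prior alpha, their sampled weights collapse towards zero.

random.seed(1)

def dirichlet_sample(alphas):
    """Draw from Dirichlet(alphas) via normalised Gamma draws."""
    draws = [random.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

counts = [55, 45, 0, 0]   # allocations: two real groups, two empty extras
prior = 0.01              # small symmetric Dirichlet prior
posterior_alphas = [c + prior for c in counts]
weights = dirichlet_sample(posterior_alphas)
extra_weight = weights[2] + weights[3]   # near zero for empty components
```

Counting components whose sampled weight exceeds a small threshold then recovers the effective number of groups, which is the estimate the overfitting approach targets.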
Abstract:
Recovering the motion of a non-rigid body from a set of monocular images permits the analysis of dynamic scenes in uncontrolled environments. However, the extension of factorisation algorithms for rigid structure from motion to the low-rank non-rigid case has proved challenging. This stems from the comparatively hard problem of finding a linear “corrective transform” which recovers the projection and structure matrices from an ambiguous factorisation. We show that this greater difficulty is due to the need to find multiple solutions to a non-trivial problem, casting a number of previous approaches as alleviating this issue either by a) introducing constraints on the basis, making the problems non-identical, or b) incorporating heuristics to encourage a diverse set of solutions, making the problems inter-dependent. While it has previously been recognised that finding a single solution to this problem is sufficient to estimate cameras, we show that it is possible to bootstrap this partial solution to find the complete transform in closed form. However, we acknowledge that our method minimises an algebraic error and is thus inherently sensitive to deviation from the low-rank model. We compare our closed-form solution for non-rigid structure with known cameras to the closed-form solution of Dai et al. [1], which we find to produce only coplanar reconstructions. We therefore recommend that 3D reconstruction error always be measured relative to a trivial reconstruction such as a planar one.
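The "ambiguous factorisation" the abstract refers to can be shown in a few lines: a rank-r measurement matrix W factors as W = A·B via the SVD, but only up to an invertible r×r corrective transform G, since (A·G)(G⁻¹·B) reproduces W equally well. The sketch below demonstrates the ambiguity on random stand-in factors; it illustrates the problem, not the paper's closed-form resolution of it.

```python
import numpy as np

# Sketch: the corrective-transform ambiguity in low-rank factorisation.
# Any invertible G maps one valid factorisation of W to another.

rng = np.random.default_rng(0)
r = 3
M_true = rng.standard_normal((8, r))    # stand-in "camera" factor
S_true = rng.standard_normal((r, 10))   # stand-in "structure" factor
W = M_true @ S_true                     # rank-r measurement matrix

U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]                    # one valid left factor
B = Vt[:r, :]                           # one valid right factor

G = rng.standard_normal((r, r))         # almost surely invertible
W_again = (A @ G) @ (np.linalg.inv(G) @ B)   # reproduces W exactly
```

The hard part, which the paper addresses, is choosing the particular G that turns A and B into metrically meaningful projection and structure matrices.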
Abstract:
This paper presents a validation study on the application of a novel interslice interpolation technique for musculoskeletal structure segmentation of articulated joints and muscles on human magnetic resonance imaging data. The interpolation technique is based on morphological shape-based interpolation combined with intensity-based voxel classification. Shape-based interpolation in the absence of the original intensity image has been investigated intensively. However, in some applications of medical image analysis, the intensity image of the slice to be interpolated is available. For example, when manual segmentation is conducted on selected slices, the segmentation of the unselected slices can be obtained by interpolation. We propose a two-step interpolation method to utilize both the shape information in the manual segmentation and local intensity information in the image. The method was tested on segmentations of knee, hip and shoulder joint bones and hamstring muscles. The results were compared with two existing interpolation methods. Based on the calculated Dice similarity coefficient and normalized error rate, the proposed method outperformed the other two methods.
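The Dice similarity coefficient used for the comparison is DSC = 2|A ∩ B| / (|A| + |B|), where A and B are the voxel sets of the two segmentations. The sketch below computes it on toy one-dimensional binary masks; real use would apply the same formula to 3D label volumes.

```python
# Sketch: Dice similarity coefficient between two binary segmentation
# masks, DSC = 2|A ∩ B| / (|A| + |B|). Masks here are toy examples.

def dice(mask_a, mask_b):
    """Dice coefficient between two equal-length binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2 * inter / size if size else 1.0  # two empty masks match

reference    = [0, 1, 1, 1, 0, 0]
interpolated = [0, 1, 1, 0, 1, 0]
score = dice(reference, interpolated)   # 2*2 / (3+3) = 2/3
```

A DSC of 1 indicates perfect overlap and 0 indicates none, which is why it is a standard headline metric for segmentation validation.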
Connecting the space between design and research: Explorations in participatory research supervision
Abstract:
In this article we offer a single case study, using an action research method for gathering and analysing data, that offers insights valuable to both design and research supervision practice. We do not attempt to generalise from this single case, but offer it as an instance that can improve our understanding of research supervision practice. We question the conventional ‘dyadic’ models of research supervision and outline a more collaborative model, based on the signature pedagogy of architecture: the design studio. A novel approach to the supervision of creatively oriented post-graduate students is proposed, including new approaches to design methods and participatory supervision that draw on established design studio practices. This model collapses the distance between design and research activities. Our case study, involving Research Masters student supervision in the discipline of Architecture, shows how ‘connected learning’ emerges from this approach. This type of learning builds strong elements of creativity and fun, which promote and enhance student engagement. The results of our action research suggest that students learn to research more easily in such an environment, and that supervisory practices are enhanced when we apply the techniques and characteristics of design studio pedagogy to the more conventional research pedagogies imported from the humanities. We believe that other creative disciplines can apply similar tactics to enrich both the creative practice of research and the supervision of HDR students.
Abstract:
We present a novel framework and algorithms for the analysis of Web service interfaces to improve the efficiency of application integration in wide-spanning business networks. Our approach addresses the notorious issue of large and overloaded operational signatures, which are becoming increasingly prevalent on the Internet as services are opened up for third-party aggregation. Extending existing techniques that refactor service interfaces based on derived artefacts of applications, namely business entities, we propose heuristics for deriving relations between business entities and, in turn, permissible orders in which operations are invoked. As a result, service operations are refactored around business-entity CRUD operations, from which behavioural protocols are generated, supporting fine-grained and flexible service discovery, composition and interaction. A prototypical implementation and analysis of Web services, including those of commercial logistics systems (FedEx), are used to validate the algorithms and open up further insights into service interface synthesis.
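In the spirit of the interface analysis described above, the sketch below shows a deliberately naive heuristic: grouping operation names by business entity and CRUD verb from their naming conventions. The verb map and operation names are invented; the paper's heuristics work on richer derived artefacts than name prefixes.

```python
# Sketch: a naive name-based heuristic for grouping service operations by
# business entity and CRUD verb. Real interface analysis needs more than
# string matching; this only illustrates the target structure.

CRUD_VERBS = {"create": "C", "add": "C", "get": "R", "find": "R",
              "update": "U", "set": "U", "delete": "D", "cancel": "D"}

def refactor_by_entity(operations):
    """Map each entity name to the set of CRUD letters its operations cover."""
    entities = {}
    for op in operations:
        for verb, letter in CRUD_VERBS.items():
            if op.lower().startswith(verb):
                entity = op[len(verb):]            # remainder = entity name
                entities.setdefault(entity, set()).add(letter)
                break
    return entities

ops = ["createShipment", "getShipment", "cancelShipment", "updateAddress"]
grouped = refactor_by_entity(ops)
# {"Shipment": {"C", "R", "D"}, "Address": {"U"}}
```

Once operations are grouped per entity, permissible invocation orders (e.g. create before read before delete) can be derived as a behavioural protocol.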
Abstract:
The number of genetic factors associated with common human traits and disease is increasing rapidly, and the general public is utilizing affordable, direct-to-consumer genetic tests. The results of these tests are often in the public domain. A combination of factors has increased the potential for the indirect estimation of an individual's risk for a particular trait. Here we explain the basic principals underlying risk estimation which allowed us to test the ability to make an indirect risk estimation from genetic data by imputing Dr. James Watson's redacted apolipoprotein E gene (APOE) information. The principles underlying risk prediction from genetic data have been well known and applied for many decades, however, the recent increase in genomic knowledge, and advances in mathematical and statistical techniques and computational power, make it relatively easy to make an accurate but indirect estimation of risk. There is a current hazard for indirect risk estimation that is relevant not only to the subject but also to individuals related to the subject; this risk will likely increase as more detailed genomic data and better computational tools become available.
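The imputation principle at work here can be sketched simply: a redacted genotype can be estimated from a nearby marker in linkage disequilibrium with it, via conditional probability over haplotype frequencies. The frequencies below are invented for illustration and are not real APOE data.

```python
# Sketch: imputing a hidden risk allele from an observed tag allele using
# hypothetical (tag_allele, risk_allele) haplotype frequencies.

haplotype_freq = {("T", "e4"): 0.10, ("T", "e3"): 0.05,
                  ("C", "e4"): 0.02, ("C", "e3"): 0.83}

def prob_risk_given_tag(tag, risk, freqs):
    """P(risk allele | observed tag allele) from haplotype frequencies."""
    tag_total = sum(f for (t, r), f in freqs.items() if t == tag)
    joint = freqs.get((tag, risk), 0.0)
    return joint / tag_total

p = prob_risk_given_tag("T", "e4", haplotype_freq)   # 0.10 / 0.15
```

With dense genotype data around a redacted locus, the same conditional-probability logic, applied over many flanking markers, is what makes an "accurate but indirect" estimate possible.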
Abstract:
After more than twenty years of basic and applied research, the use of nanotechnology in the design and manufacture of nanoscale materials is rapidly increasing, particularly in commercial applications spanning electronics, renewable energy and biomedical devices. Novel polymers are attracting significant attention because they promise a low-cost, high-performance alternative to existing materials. Furthermore, these polymers have the potential to overcome limitations imposed by currently available materials, enabling the development of new technologies and applications that are currently beyond our reach. This work focuses on the development of a range of new low-cost, environmentally friendly polymer materials for applications in organic (flexible) electronics, optics, and biomaterials. The choice of monomer reflects the environmentally conscious focus of this project. Terpinen-4-ol is a major constituent of Australian-grown Melaleuca alternifolia (tea tree) oil, attributed with the oil's antimicrobial and anti-inflammatory properties. Plasma polymerisation was chosen as the deposition technique because it requires minimal use of harmful chemicals and produces no hazardous by-products. Polymer thin films were fabricated under varied process conditions to attain materials with distinct physico-chemical, optoelectrical, biological and degradation characteristics. The resultant materials, named polyterpenol, were extensively characterised using a number of well-accepted and novel techniques, and their fundamental properties were defined. Polyterpenol films were demonstrated to be hydrocarbon rich, with a variable content of oxygen moieties, primarily in the form of hydroxyl and carboxyl functionalities. The level of preservation of the original monomer functionality was shown to be strongly dependent on the deposition energy, with higher applied power increasing molecular fragmentation and substrate temperature.
The polyterpenol water contact angle increased from 62.7° for the 10 W samples to 76.3° for the films deposited at 100 W. The polymers were determined to resist solubilisation by water, due to the extensive intermolecular and intramolecular hydrogen bonds present, and by other solvents commonly employed in electronics and biomedical processing. Independent of deposition power, the surface topography of the polymers was shown to be smooth (Rq < 0.5 nm), uniform and defect free. The hardness of polyterpenol coatings increased from 0.33 GPa at 10 W to 0.51 GPa at 100 W (at 500 μN load). Coatings deposited at higher input RF powers showed less mechanical deformation during nanoscratch testing, with no considerable damage, cracking or delamination observed. Independent of the substrate, the quality of film adhesion improved with RF power, suggesting these coatings are likely to be more stable and less susceptible to wear. Independent of fabrication conditions, polyterpenol thin films were optically transparent, with a refractive index approximating that of glass. The refractive index increased slightly with deposition power, from 1.54 (10 W) to 1.56 (100 W) at 500 nm. The optical band gap declined with increasing power, from 2.95 eV to 2.64 eV, placing the material within the range for semiconductors. Introduction of an iodine impurity reduced the band gap of polyterpenol from 2.8 eV to 1.64 eV, by extending the density of states further into the visible region of the electromagnetic spectrum. Doping decreased the transparency and increased the refractive index from 1.54 to 1.70 (at 500 nm). At optical frequencies, the real part of the permittivity (k) was determined to be between 2.34 and 2.65, indicating a potential low-k material. These permittivity values were confirmed at microwave frequencies, where permittivity increased with input RF energy, from 2.32 to 2.53 (at 10 GHz) and from 2.65 to 2.83 (at 20 GHz).
At low frequencies, the dielectric constant was determined from current-voltage characteristics of Al-polyterpenol-Al devices. At frequencies below 100 kHz, the dielectric constant varied with RF power, from 3.86 to 4.42 at 1 kHz. For all samples, the resistivity was on the order of 10⁸–10⁹ Ω·m (at 6 V), confirming the insulating nature of the polyterpenol material. In situ iodine doping was demonstrated to increase the conductivity of polyterpenol from 5.05 × 10⁻⁸ S/cm to 1.20 × 10⁻⁶ S/cm (at 20 V). Exposed to ambient conditions over an extended period of time, polyterpenol thin films were demonstrated to be optically, physically and chemically stable. The bulk of ageing occurred within the first 150 h after deposition and was attributed to oxidation and volumetric relaxation. Thermal ageing studies indicated that thermal stability increased for films manufactured at higher RF powers, with the degradation onset temperature associated with weight loss shifting from 150 °C to 205 °C for 10 W and 100 W polyterpenol, respectively. Annealing the films to 405 °C resulted in full dissociation of the polymer, with minimal residue. Given the outcomes of the fundamental characterisation, a number of potential applications for polyterpenol have been identified. The flexibility, tunable permittivity and loss tangent properties of polyterpenol suggest the material can be used as an insulating layer in plastic electronics. Implementation of polyterpenol as a surface modification of the gate insulator in a pentacene-based field effect transistor resulted in significant improvements, shifting the threshold voltage from +20 V to −3 V, enhancing the effective mobility from 0.012 to 0.021 cm²/Vs, and improving the switching property of the device from 10⁷ to 10⁴. Polyterpenol was demonstrated to have a hole-transporting, electron-blocking property, with potential applications in many organic devices, such as organic light emitting diodes.
Encapsulation of biomedical devices is also proposed, given that under favourable conditions the original chemical and biological functionality of the terpinen-4-ol molecule can be preserved. Films deposited at low RF power were shown to successfully prevent the adhesion and retention of several important human pathogens, including P. aeruginosa, S. aureus, and S. epidermidis, whereas films deposited at higher RF power promoted bacterial cell adhesion and biofilm formation. Preliminary investigations into the in vitro biocompatibility of polyterpenol demonstrated the coating to be non-toxic to several types of eukaryotic cells, including Balb/c mouse macrophages and human monocyte (HTP-1 non-adherent) cells. Applied to magnesium substrates, the polyterpenol encapsulating layer significantly slowed the in vitro biodegradation of the metal, thus increasing the viability and growth of HTP-1 cells. More recently, applied to varied nanostructured titanium surfaces, polyterpenol thin films successfully reduced the attachment, growth, and viability of P. aeruginosa and S. aureus.
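The conductivity figures quoted above come from calculations of the following form: for a film between two electrodes, σ = I·d / (V·A), where d is film thickness and A the electrode area. The sketch below applies that standard relation; the numbers are hypothetical, not measurements from the thesis.

```python
# Sketch: DC conductivity of a thin film from a single I-V point,
# sigma = I * d / (V * A). All values below are hypothetical.

def conductivity_s_per_cm(current_a, voltage_v, thickness_cm, area_cm2):
    """DC conductivity (S/cm) from current, voltage and film geometry."""
    return (current_a * thickness_cm) / (voltage_v * area_cm2)

# e.g. 1e-9 A at 20 V through a 100 nm (1e-5 cm) film over 0.01 cm^2
sigma = conductivity_s_per_cm(1e-9, 20.0, 1e-5, 0.01)   # 5e-14 S/cm
```

Resistivity is simply the reciprocal, which is how the 10⁸–10⁹ Ω·m insulating range and the doped-film conductivities above would be derived from measured I–V characteristics.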
Abstract:
- Background: Exercise referral schemes (ERS) aim to identify inactive adults in the primary-care setting. The GP or health-care professional refers the patient to a third-party service, which takes responsibility for prescribing and monitoring an exercise programme tailored to the needs of the individual.
- Objective: To assess the clinical effectiveness and cost-effectiveness of ERS for people with a diagnosed medical condition known to benefit from physical activity (PA). The scope of this report was broadened to consider individuals without a diagnosed condition who are sedentary.
- Data sources: MEDLINE, EMBASE, PsycINFO, The Cochrane Library, ISI Web of Science, SPORTDiscus and ongoing trial registries were searched (from 1990 to October 2009) and included study references were checked.
- Methods: Systematic reviews of the effectiveness of ERS, predictors of ERS uptake and adherence, and the cost-effectiveness of ERS; and the development of a decision-analytic economic model to assess the cost-effectiveness of ERS.
- Results: Seven randomised controlled trials (UK, n = 5; non-UK, n = 2) met the effectiveness inclusion criteria: five compared ERS with usual care, two compared ERS with an alternative PA intervention, and one compared ERS with ERS plus a self-determination theory (SDT) intervention. In intention-to-treat analysis, compared with usual care, there was weak evidence of an increase in the number of ERS participants who achieved a self-reported 90-150 minutes of at least moderate-intensity PA per week at 6-12 months' follow-up [pooled relative risk (RR) 1.11, 95% confidence interval 0.99 to 1.25]. There was no consistent evidence of a difference between ERS and usual care in the duration of moderate/vigorous-intensity and total PA or in other outcomes, for example physical fitness, serum lipids and health-related quality of life (HRQoL). There was no between-group difference in outcomes between ERS and alternative PA interventions or ERS plus an SDT intervention. None of the included trials separately reported outcomes in individuals with medical diagnoses. Fourteen observational studies and five randomised controlled trials provided a numerical assessment of ERS uptake and adherence (UK, n = 16; non-UK, n = 3). Women and older people were more likely to take up ERS, but women, when compared with men, were less likely to adhere. The four previous economic evaluations identified suggest ERS to be a cost-effective intervention. Indicative incremental cost per quality-adjusted life-year (QALY) estimates for ERS under various scenarios were based on a de novo model-based economic evaluation. Compared with usual care, the mean incremental cost for ERS was £169 and the mean incremental QALY was 0.008, with the base-case incremental cost-effectiveness ratio at £20,876 per QALY in sedentary people without a medical condition, and a cost per QALY of £14,618 in sedentary obese individuals, £12,834 in sedentary hypertensive patients, and £8414 for sedentary individuals with depression. Estimates of cost-effectiveness were highly sensitive to plausible variations in the RR for change in PA and the cost of ERS.
- Limitations: We found very limited evidence of the effectiveness of ERS. The estimates of the cost-effectiveness of ERS are based on a simple analytical framework. The economic evaluation reports small differences in costs and effects, and the findings highlight the wide range of uncertainty associated with the estimates of effectiveness and the impact of effectiveness on HRQoL. No data were identified as part of the effectiveness review to allow for adjustment of the effect of ERS in different populations.
- Conclusions: There remains considerable uncertainty as to the effectiveness of ERS for increasing activity, fitness or health indicators, and as to whether they are an efficient use of resources in sedentary people without a medical diagnosis. We failed to identify any trial-based evidence of the effectiveness of ERS in those with a medical diagnosis. Future work should include randomised controlled trials assessing the clinical effectiveness and cost-effectiveness of ERS in disease groups that may benefit from PA.
- Funding: The National Institute for Health Research Health Technology Assessment programme.
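The headline economic figure is an incremental cost-effectiveness ratio (ICER), defined as the difference in mean costs divided by the difference in mean QALYs between intervention and comparator. The sketch below applies that definition to the rounded figures quoted above (£169, 0.008 QALYs); the result differs slightly from the reported base case of £20,876 per QALY because the report's model inputs carry more decimal places.

```python
# Sketch: incremental cost-effectiveness ratio (ICER),
# ICER = (cost_new - cost_old) / (qaly_new - qaly_old),
# here computed directly from incremental values.

def icer(delta_cost, delta_qaly):
    """Incremental cost per QALY gained."""
    return delta_cost / delta_qaly

base_case = icer(169.0, 0.008)   # ~21,125 GBP/QALY from the rounded inputs
```

An ICER is then compared against a willingness-to-pay threshold (commonly around £20,000-£30,000 per QALY in UK appraisals) to judge cost-effectiveness.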
Abstract:
- Background: The effectiveness of exercise referral schemes (ERS) is influenced by uptake of and adherence to the scheme. The identification of factors influencing low uptake and adherence could lead to the refinement of schemes to optimise investment.
- Objectives: To quantify the levels of ERS uptake and adherence and to identify factors predictive of uptake and adherence.
- Methods: A systematic review and meta-analysis was undertaken. MEDLINE, EMBASE, PsycINFO, The Cochrane Library, ISI Web of Science, SPORTDiscus and ongoing trial registries were searched (to October 2009) and included study references were checked. Included studies were required to report at least one of the following: (1) a numerical measure of ERS uptake or adherence and (2) an estimate of the statistical association between participant demographic or psychosocial factors (eg, level of motivation, self-efficacy) or programme factors and uptake of or adherence to ERS.
- Results: Twenty studies met the inclusion criteria: six randomised controlled trials (RCTs) and 14 observational studies. The pooled level of uptake in ERS was 66% (95% CI 57% to 75%) across the observational studies and 81% (95% CI 68% to 94%) across the RCTs. The pooled level of ERS adherence was 49% (95% CI 40% to 59%) across the observational studies and 43% (95% CI 32% to 54%) across the RCTs. Few studies considered anything other than gender and age. Women were more likely to begin an ERS but were less likely to adhere to it than men. Older people were more likely to begin and adhere to an ERS.
- Limitations: Substantial heterogeneity was evident across the ERS studies. Without standardised definitions, the heterogeneity may reflect differences in methods of defining uptake and adherence across studies.
- Conclusions: To enhance our understanding of the variation in uptake and adherence across ERS, and how these variations might affect physical activity outcomes, future trials need to use quantitative and qualitative methods.
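The pooled percentages quoted above come from meta-analytic pooling of study-level proportions. As a minimal sketch, the snippet below performs fixed-effect inverse-variance pooling of (events, n) pairs with a normal-approximation confidence interval; the study data are hypothetical, and a real analysis of such heterogeneous studies would typically use a random-effects model instead.

```python
# Sketch: fixed-effect inverse-variance pooling of proportions, with a
# 95% CI half-width. Study counts are invented for illustration.

def pooled_proportion(studies):
    """Pool (events, n) pairs; returns (pooled estimate, 95% CI half-width)."""
    weights, props = [], []
    for events, n in studies:
        p = events / n
        var = p * (1 - p) / n          # binomial variance of the proportion
        weights.append(1.0 / var)
        props.append(p)
    total_w = sum(weights)
    pooled = sum(w * p for w, p in zip(weights, props)) / total_w
    half_width = 1.96 * (1.0 / total_w) ** 0.5
    return pooled, half_width

# hypothetical uptake data from three schemes: (took up, referred)
est, hw = pooled_proportion([(66, 100), (140, 200), (90, 150)])
```

With the substantial heterogeneity the authors report, a random-effects weight (incorporating a between-study variance term) would widen the interval accordingly.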
Abstract:
Sonographic diagnosis of appendicitis in children is an important clinical tool, often obviating the need for potentially harmful ionising radiation from computed tomography (CT) scans and for unnecessary appendicectomies. Established criteria do not commonly account for the sonographic secondary signs of acute appendicitis as an adjunct or corollary to an identifiably inflamed appendix. If one of, or combinations of, these secondary signs are a reliable positive and/or negative indicator of the condition, diagnostic accuracy may be improved. This will be of particular importance in cases where the appendix cannot be easily identified, possibly providing referring clinicians with a less equivocal diagnosis. Acute appendicitis (AA) is the most common emergency presentation requiring surgical intervention among both adults and children. During 2010-11 in Australia, 25,000 appendicectomies were performed on adults and children, more than double the number of the next most common surgical procedure [1]. Ultrasound has been commonly used to diagnose AA since the 1980s; however, the best imaging modality, or combination of modalities, to accurately and cost-effectively diagnose the condition is still debated. A study by Puylaert advocated ultrasound in all presentations [2], whereas others suggested it only as a first-line modality [3–5]. Conversely, York et al. state that it is not appropriate as it delays treatment [6]. CT has been shown to diagnose AA more accurately than ultrasound; however, its inherent radiation risks warrant cautious use in children [7]. Improved accuracy in the diagnosis of suspected AA using ultrasound would enable surgeons to make a decision without the need to expose children to the potentially harmful effects of CT. Secondary signs of appendicitis are well established [8], although research into their predictive values has only recently been undertaken [9,10], indicating their potential diagnostic benefit in the absence of an identifiable appendix.
The purpose of this review is to examine the history of appendiceal sonography, established sonographic criteria, paediatric specific techniques and the predictive value of secondary signs.
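The "predictive value" of a secondary sign is quantified with the standard 2×2 screening-test metrics: sensitivity, specificity, and positive/negative predictive values. The sketch below computes them from invented counts, not data from the studies cited above.

```python
# Sketch: sensitivity, specificity and predictive values from a 2x2 table
# of true/false positives and negatives. Counts are hypothetical.

def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 contingency table."""
    return {
        "sensitivity": tp / (tp + fn),   # P(sign present | disease)
        "specificity": tn / (tn + fp),   # P(sign absent | no disease)
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# e.g. a secondary sign present in 45 of 50 appendicitis cases and
# absent in 90 of 100 non-appendicitis cases
m = diagnostic_metrics(tp=45, fp=10, fn=5, tn=90)
```

Note that, unlike sensitivity and specificity, the predictive values depend on disease prevalence in the presenting population, which matters when translating study results to an emergency-department setting.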