925 results for Label-free techniques


Relevance: 20.00%

Abstract:

This investigation has shown that transforming free caustic in red mud (RM) into Bayer hydrotalcite during the seawater neutralization (SWN) process enables a more controlled release mechanism for the neutralization of acid sulfate soils. The formation of hydrotalcite was confirmed by X-ray diffraction (XRD) and differential thermogravimetric analysis (DTG), while the dissolution of hydrotalcite and sodalite was observed through XRD, DTG, pH plots, and ICP-OES. Coupling these techniques enabled three neutralization mechanisms to be identified: (1) free alkali, (2) hydrotalcite dissolution, and (3) sodalite dissolution. The mechanisms were distinguished on the basis of ICP-OES and kinetic information. When the mass of RM or SWN-RM is greater than 0.08 g per 50 mL, the pH of the solution increases to a value suitable for plant life while aluminum leaching is kept to a minimum. To obtain a neutralization pH greater than 6 within 10 min for a known iron(III) sulfate (Fe2(SO4)3) concentration, the required ratio is 0.04 g of bauxite residue per 50 mL of 0.1 g/L Fe2(SO4)3 solution.
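The closing ratio lends itself to a quick calculation. The sketch below scales the reported 0.04 g : 50 mL : 0.1 g/L reference point linearly to other volumes and Fe2(SO4)3 concentrations; linear scaling beyond the tested conditions is an assumption of this illustration, not a finding of the study.

```python
def residue_mass_needed(volume_ml, fe2so43_g_per_l, ref_mass_g=0.04,
                        ref_volume_ml=50.0, ref_conc_g_per_l=0.1):
    """Scale the reported reference ratio (0.04 g : 50 mL : 0.1 g/L
    Fe2(SO4)3) linearly to other volumes and acid concentrations.
    Linearity outside the tested range is assumed, not demonstrated."""
    scale = (volume_ml / ref_volume_ml) * (fe2so43_g_per_l / ref_conc_g_per_l)
    return ref_mass_g * scale

# The reference condition reproduces the reported mass
print(residue_mass_needed(50.0, 0.1))   # 0.04
# Doubling both volume and acid concentration quadruples the mass
print(residue_mass_needed(100.0, 0.2))  # 0.16
```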

Relevance: 20.00%

Abstract:

Based on theoretical prediction, a g-C3N4@carbon metal-free oxygen reduction reaction (ORR) electrocatalyst was designed and synthesized by uniform incorporation of g-C3N4 into a mesoporous carbon to enhance the electron transfer efficiency of g-C3N4. The resulting g-C3N4@carbon composite exhibited competitive catalytic activity (11.3 mA cm–2 kinetic-limiting current density at −0.6 V) and superior methanol tolerance compared to a commercial Pt/C catalyst. Furthermore, it demonstrated significantly higher catalytic efficiency (nearly 100% of four-electron ORR process selectivity) than a Pt/C catalyst. The proposed synthesis route is facile and low-cost, providing a feasible method for the development of highly efficient electrocatalysts.
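The four-electron selectivity quoted above is conventionally estimated from rotating ring-disk electrode (RRDE) currents via n = 4·I_disk/(I_disk + I_ring/N). A Python sketch; the current values and collection efficiency below are illustrative, not taken from the paper.

```python
def orr_electron_number(i_disk, i_ring, collection_efficiency):
    """Apparent electron-transfer number from RRDE currents:
    n = 4 * I_disk / (I_disk + I_ring / N).
    n close to 4 indicates the selective four-electron ORR pathway."""
    return 4.0 * i_disk / (i_disk + i_ring / collection_efficiency)

# A small ring current relative to the disk current implies n ~ 4
print(round(orr_electron_number(1.0, 0.01, 0.37), 2))  # 3.89
```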

Relevance: 20.00%

Abstract:

Background: Overweight and obesity have become a serious public health problem in many parts of the world. Studies suggest that making small changes in daily activity levels, such as "breaking up" sedentary time (i.e., standing), may help mitigate the health risks of sedentary behavior. The aim of the present study was to examine time spent in standing (determined by count threshold), lying, and sitting postures (determined by inclinometer function) via the ActiGraph GT3X among sedentary adults of differing weight status based on body mass index (BMI) categories. Methods: Participants included 22 sedentary adults (14 men, 8 women; mean age 26.5 ± 4.1 years). All subjects completed the self-report International Physical Activity Questionnaire to determine time spent sitting over the previous 7 days, and were included if they spent seven or more hours sitting per day. Postures were determined with the ActiGraph GT3X inclinometer function, with participants instructed to wear the accelerometer for 7 consecutive days (24 h a day). BMI was categorized as 18.5 to <25 kg/m2 (normal), 25 to <30 kg/m2 (overweight), and ≥30 kg/m2 (obese). Results: After adjustment for moderate-to-vigorous intensity physical activity and wear-time, participants in the normal weight (n = 10) and overweight (n = 6) groups spent significantly more time standing (6.7 h and 7.3 h, respectively) and less time sitting (7.1 h and 6.9 h) than those in the obese (n = 6) group (5.5 h standing, 8.0 h sitting; p < 0.001). There were no significant differences in standing and sitting time between the normal weight and overweight groups (p = 0.051 and p = 0.670, respectively), and differences among groups in lying time were not significant (p = 0.55). Conclusion: This study described postural allocations (standing, lying, and sitting) among normal weight, overweight, and obese sedentary adults. The results provide additional evidence for promoting increased standing time in obesity prevention strategies.
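The BMI cut-offs used for grouping can be expressed directly in code. A minimal Python sketch using the study's categories (the underweight label is a standard addition, not a group analysed in the study):

```python
def bmi_category(weight_kg, height_m):
    """Classify BMI with the study's cut-offs: 18.5 to <25 normal,
    25 to <30 overweight, >=30 obese (kg/m^2). The underweight label
    is a standard addition, not a group analysed in the study."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "normal"
    if bmi < 30:
        return "overweight"
    return "obese"

print(bmi_category(70, 1.75))  # "normal"  (BMI ~22.9)
print(bmi_category(95, 1.75))  # "obese"   (BMI ~31.0)
```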

Relevance: 20.00%

Abstract:

With the explosive growth of resources available through the Internet, information mismatching and overload have become a severe concern to users. Web users are commonly overwhelmed by a huge volume of information and are faced with the challenge of finding the most relevant and reliable information in a timely manner. Personalised information gathering and recommender systems represent state-of-the-art tools for efficient selection of the most relevant and reliable information resources, and interest in such systems has increased dramatically over the last few years. However, web personalisation has not yet been well exploited: difficulties arise in selecting resources through recommender systems, from both technological and social perspectives. Aiming to promote high-quality research to overcome these challenges, this paper provides a comprehensive survey of recent work and achievements in the areas of personalised web information gathering and recommender systems. The survey covers concept-based techniques exploited in personalised information gathering and recommender systems.
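As a concrete illustration of the concept-based matching such systems build on, the Python sketch below ranks documents against a concept-weighted user profile by cosine similarity; the concept names and weights are invented for the example.

```python
import math

def cosine_sim(profile, doc):
    """Cosine similarity between a concept-weighted user profile and a
    document, both given as {concept: weight} dicts - a minimal sketch
    of concept-based matching, not any specific surveyed system."""
    shared = set(profile) & set(doc)
    dot = sum(profile[c] * doc[c] for c in shared)
    n_profile = math.sqrt(sum(w * w for w in profile.values()))
    n_doc = math.sqrt(sum(w * w for w in doc.values()))
    return dot / (n_profile * n_doc) if n_profile and n_doc else 0.0

profile = {"sport": 0.8, "finance": 0.2}
doc_a = {"sport": 0.9, "travel": 0.1}
doc_b = {"finance": 0.9, "politics": 0.3}

# Rank documents by similarity to the profile
ranked = sorted([("a", doc_a), ("b", doc_b)],
                key=lambda kv: cosine_sim(profile, kv[1]), reverse=True)
print([name for name, _ in ranked])  # ['a', 'b']
```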

Relevance: 20.00%

Abstract:

This paper investigates advanced channel compensation techniques for improving i-vector speaker verification performance in the presence of high intersession variability, using the NIST 2008 and 2010 SRE corpora. The performance of four channel compensation techniques is investigated: (a) weighted maximum margin criterion (WMMC), (b) source-normalized WMMC (SN-WMMC), (c) weighted linear discriminant analysis (WLDA), and (d) source-normalized WLDA (SN-WLDA). We show that, by extracting the discriminatory information between pairs of speakers as well as capturing the source variation information in the development i-vector space, the SN-WLDA-based cosine similarity scoring (CSS) i-vector system provides over 20% improvement in EER for NIST 2008 interview and microphone verification and over 10% improvement in EER for NIST 2008 telephone verification, compared to the SN-LDA-based CSS i-vector system. Further, score-level fusion techniques are analysed to combine the best channel compensation approaches, providing over 8% improvement in DCF over the best single approach (SN-WLDA) for the NIST 2008 interview/telephone enrolment-verification condition. Finally, we demonstrate that the improvements found in the context of CSS also generalize to state-of-the-art GPLDA, with up to 14% relative improvement in EER for NIST SRE 2010 interview and microphone verification and over 7% relative improvement in EER for NIST SRE 2010 telephone verification.
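Cosine similarity scoring between a target and a test i-vector reduces to a normalised dot product. A minimal Python sketch (the two-dimensional vectors are illustrative only; real i-vectors have a few hundred dimensions and are scored after channel compensation such as SN-WLDA):

```python
import math

def css_score(w_target, w_test):
    """Cosine similarity score between a target and a test i-vector,
    applied after channel compensation (e.g. SN-WLDA). A trial is
    accepted when the score exceeds a tuned decision threshold."""
    dot = sum(a * b for a, b in zip(w_target, w_test))
    n_target = math.sqrt(sum(a * a for a in w_target))
    n_test = math.sqrt(sum(b * b for b in w_test))
    return dot / (n_target * n_test)

print(css_score([1.0, 0.0], [1.0, 0.0]))  # 1.0 (same direction)
print(css_score([1.0, 0.0], [0.0, 1.0]))  # 0.0 (orthogonal)
```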

Relevance: 20.00%

Abstract:

A number of mathematical models investigating aspects of the complicated process of wound healing have been reported in the literature in recent years. However, effective numerical methods, and supporting error analysis, for the fractional equations that describe the process of wound healing are still limited. In this paper, we consider the numerical simulation of a fractional mathematical model of epidermal wound healing (FMM-EWH), which is based on coupled advection-diffusion equations for cell and chemical concentration in a polar coordinate system. The space fractional derivatives are defined in the left and right Riemann-Liouville sense, with fractional orders in the advection and diffusion terms belonging to the intervals (0, 1) and (1, 2], respectively. Several numerical techniques are used. First, the coupled advection-diffusion equations are decoupled into a single space-fractional advection-diffusion equation in a polar coordinate system. Second, we propose a new implicit difference method for simulating this equation, using the equivalence of the Riemann-Liouville and Grünwald-Letnikov fractional derivative definitions. Third, its stability and convergence are discussed. Finally, numerical results are given to support the theoretical analysis.
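The Grünwald-Letnikov definition underpinning such implicit difference methods discretises a Riemann-Liouville derivative through binomial-type weights, which can be generated with a simple recurrence. A Python sketch of this generic textbook construction (not the paper's full scheme):

```python
def gl_weights(alpha, n):
    """First n+1 Grünwald-Letnikov weights g_k = (-1)^k * C(alpha, k),
    via the recurrence g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1) / k).
    A weighted sum of these coefficients over a uniform grid
    approximates the Riemann-Liouville derivative of order alpha."""
    g = [1.0]
    for k in range(1, n + 1):
        g.append(g[-1] * (1.0 - (alpha + 1.0) / k))
    return g

# alpha = 0.5, an order in (0, 1) as for the advection term
print([round(w, 4) for w in gl_weights(0.5, 3)])  # [1.0, -0.5, -0.125, -0.0625]
```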

Relevance: 20.00%

Abstract:

This chapter provides researchers with a guide to some of the types of dating techniques that can be used in geomorphological investigations and to issues that need to be addressed when using geochronological data, specifically issues relating to accuracy and precision. The chapter also introduces the 'types' of dating methods that are commonly used in geomorphological studies, including sidereal, isotopic, radiogenic, and chemical dating methods.

Relevance: 20.00%

Abstract:

In developed countries the relationship between socioeconomic position (SEP) and health is unequivocal. Those who are socioeconomically disadvantaged are known to experience higher morbidity and mortality from a range of chronic diet-related conditions compared to those of higher SEP. Socioeconomic inequalities in diet are well established. Compared to their more advantaged counterparts, those of low SEP are consistently found to consume diets less consistent with dietary guidelines (i.e. higher in fat, salt and sugar and lower in fibre, fruit and vegetables). Although the reasons for dietary inequalities remain unclear, understanding how such differences arise is important for the development of strategies to reduce health inequalities. Both environmental (e.g. proximity of supermarkets, price, and availability of foods) and psychosocial (e.g. taste preference, nutrition knowledge) influences are proposed to account for inequalities in food choices. Although in the United States (US), United Kingdom (UK), and parts of Australia, environmental factors are associated with socioeconomic differences in food choices, these factors do not completely account for the observed inequalities. Internationally, this context has prompted calls for further exploration of the role of psychological and social factors in relation to inequalities in food choices. It is this task that forms the primary goal of this PhD research. In the small body of research examining the contribution of psychosocial factors to inequalities in food choices, studies have focussed on food cost concerns, nutrition knowledge or health concerns. These factors are generally found to be influential. However, since a range of psychosocial factors are known determinants of food choices in the general population, it is likely that a range of factors also contribute to inequalities in food choices. 
Identification of additional psychosocial factors of relevance to inequalities in food choices would provide new opportunities for health promotion, including the adaptation of existing strategies. The methodological features of previous research have also hindered the advancement of knowledge in this area, and a lack of qualitative studies has resulted in a dearth of descriptive information on this topic. This PhD investigation extends previous research by assessing a range of psychosocial factors in relation to inequalities in food choices using both quantitative and qualitative techniques. Secondary data analyses were undertaken using data obtained from two Brisbane-based studies, the Brisbane Food Study (N=1003, conducted in 2000) and the Sixty Families Study (N=60, conducted in 1998). Both studies involved main household food purchasers completing an interviewer-administered survey within their own home. Data pertaining to food purchasing and to psychosocial, socioeconomic, and demographic characteristics were collected in each study. The mutual goals of both the qualitative and quantitative phases of this investigation were to assess socioeconomic differences in food purchasing and to identify psychosocial factors relevant to any observed differences. The quantitative methods then additionally considered whether the associations examined differed according to the socioeconomic indicator used (i.e. income or education). The qualitative analyses made a unique contribution to this project by generating detailed descriptions of socioeconomic differences in psychosocial factors. Those with lower levels of income and education were found to make food purchasing choices less consistent with dietary guidelines compared to those of high SEP.
The psychosocial factors identified as relevant to food-purchasing inequalities were: taste preferences, health concerns, health beliefs, nutrition knowledge, nutrition concerns, weight concerns, nutrition label use, and several other values and beliefs unique to particular socioeconomic groups. Factors more tenuously or inconsistently related to socioeconomic differences in food purchasing were cost concerns, and perceived adequacy of the family diet. Evidence was displayed in both the quantitative and qualitative analyses to suggest that psychosocial factors contribute to inequalities in food purchasing in a collective manner. The quantitative analyses revealed that considerable overlap in the socioeconomic variation in food purchasing was accounted for by key psychosocial factors of importance, including taste preference, nutrition concerns, nutrition knowledge, and health concerns. Consistent with these findings, the qualitative transcripts demonstrated the interplay between such influential psychosocial factors in determining food-purchasing choices. The qualitative analyses found socioeconomic differences in the prioritisation of psychosocial factors in relation to food choices. This is suggestive of complex cultural factors that distinguish advantaged and disadvantaged groups and result in socioeconomically distinct schemas related to health and food choices. Compared to those of high SEP, those of lower SEP were less likely to indicate that health concerns, nutrition concerns, or food labels influenced food choices, and exhibited lower levels of nutrition knowledge. In the absence of health or nutrition-related concerns, taste preferences tended to dominate the food purchasing choices of those of low SEP. Overall, while cost concerns did not appear to be a main determinant of socioeconomic differences in food purchasing, this factor had a dominant influence on the food choices of some of the most disadvantaged respondents included in this research. 
The findings of this study have several implications for health promotion. The integrated operation of psychosocial factors on food purchasing inequalities indicates that multiple psychosocial factors may be appropriate to target in health promotion. It also seems possible that the inter-relatedness of psychosocial factors would allow health promotion targeting a single psychosocial factor to have a flow-on effect in terms of altering other influential psychosocial factors. This research also suggests that current mass marketing approaches to health promotion may not be effective across all socioeconomic groups due to differences in the priorities and main factors of influence in food purchasing decisions across groups. In addition to the practical recommendations for health promotion, this investigation, through the critique of previous research and through the substantive study findings, has highlighted important methodological considerations for future research. Of particular note are the recommendations pertaining to the selection of socioeconomic indicators, measurement of relevant constructs, consideration of confounders, and development of an analytical approach. Addressing inequalities in health has been noted as a main objective by many health authorities and governments internationally. It is envisaged that the substantive and methodological findings of this thesis will make a useful contribution towards this important goal.

Relevance: 20.00%

Abstract:

Availability has become a primary goal of information security and is as significant as other goals, in particular confidentiality and integrity. Maintaining the availability of essential services on the public Internet is an increasingly difficult task in the presence of sophisticated attackers. Attackers may abuse the limited computational resources of a service provider, and thus managing computational costs is a key strategy for achieving the goal of availability. In this thesis we focus on cryptographic approaches for managing computational costs, in particular computational effort. We focus on two cryptographic techniques: computational puzzles in cryptographic protocols and secure outsourcing of cryptographic computations. This thesis contributes to the area of cryptographic protocols in the following ways. First, we propose the most efficient puzzle scheme based on modular exponentiations which, unlike previous schemes of the same type, involves only a few modular multiplications for solution verification; our scheme is provably secure. We then introduce a new efficient gradual authentication protocol by integrating a puzzle into a specific signature scheme. Our software implementation results for the new authentication protocol show that our approach is more efficient and effective than the traditional RSA signature-based one and improves the DoS resilience of the Secure Sockets Layer (SSL) protocol, the most widely used security protocol on the Internet. Our next contributions relate to capturing a specific property that enables secure outsourcing of cryptographic tasks, namely partial decryption. We formally define the property of (non-trivial) public verifiability for general encryption schemes, key encapsulation mechanisms (KEMs), and hybrid encryption schemes, encompassing public-key, identity-based, and tag-based encryption flavours. We show that some generic transformations and concrete constructions enjoy this property, and then present a new public-key encryption (PKE) scheme that has this property together with a proof of security under standard assumptions. Finally, we combine puzzles with PKE schemes to enable delayed decryption in applications such as e-auctions and e-voting. For this we first introduce the notion of effort-release PKE (ER-PKE), encompassing the well-known timed-release encryption and encapsulated key escrow techniques. We then present a security model for ER-PKE and a generic construction of ER-PKE complying with our security notion.
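The core requirement of a client puzzle - expensive to solve, cheap to verify - can be illustrated with a simple hash-based puzzle. This is a generic hashcash-style sketch for illustration only; the thesis's own scheme is based on modular exponentiations and is not reproduced here.

```python
import hashlib
from itertools import count

def solve_puzzle(challenge: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce so that SHA-256(challenge || nonce) starts
    with `difficulty_bits` zero bits: ~2^difficulty_bits hashes of work."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_puzzle(challenge: bytes, difficulty_bits: int, nonce: int) -> bool:
    """The server verifies with a single hash - cheap verification is the
    property the thesis's modular-exponentiation scheme also targets."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve_puzzle(b"session-id", 12)
print(verify_puzzle(b"session-id", 12, nonce))  # True
```

Raising the difficulty by one bit roughly doubles the solver's expected work while the verifier's cost stays constant at one hash.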

Relevance: 20.00%

Abstract:

The feral pig, Sus scrofa, is a widespread and abundant invasive species in Australia. Feral pigs pose a significant threat to the environment, the agricultural industry, and human health, and in far north Queensland they endanger World Heritage values of the Wet Tropics. Historical records document the first introduction of domestic pigs into Australia via European settlers in 1788 and subsequent introductions from Asia from 1827 onwards. Since then, domestic pigs have been accidentally and deliberately released into the wild, and significant feral pig populations have become established, resulting in the declaration of this species as a class 2 pest in Queensland. The overall objective of this study was to assess the population genetic structure of feral pigs in far north Queensland, in particular to enable delineation of demographically independent management units. The identification of ecologically meaningful management units using molecular techniques can assist in targeting feral pig control to bring about effective long-term management. Molecular genetic analysis was undertaken on 434 feral pigs from 35 localities between Tully and Innisfail. Seven polymorphic and unlinked microsatellite loci were screened, and fixation indices (FST and analogues) and Bayesian clustering methods were used to identify population structure and management units in the study area. Sequencing of the hyper-variable mitochondrial control region (D-loop) of 35 feral pigs was also undertaken to identify pig ancestry. Three management units were identified in the study at a scale of 25 to 35 km. Even with the strong pattern of genetic structure identified in the study area, some evidence of long-distance dispersal and/or translocation was found, as a small number of individuals exhibited ancestry from a management unit other than the one in which they were sampled.
Overall, gene flow in the study area was found to be influenced by environmental features such as topography and land use, but no distinct or obvious natural or anthropogenic geographic barriers were identified. Furthermore, strong evidence was found for non-random mating between pigs of European and Asian breeds, indicating that feral pig ancestry influences population genetic structure. Phylogenetic analysis revealed two distinct mitochondrial DNA clades, representing Asian and European domestic pig breeds. A significant finding was that pigs of Asian origin living in Innisfail and south Tully were not mating randomly with European-breed pigs populating the nearby Mission Beach area. Feral pig control should be implemented in each of the management units identified in this study and coordinated across properties within each management unit to prevent re-colonisation from adjacent localities. The adjacent rainforest and National Park estates, as well as the rainforest-crop boundary, should be included in a simultaneous control operation for greater success.
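Wright's F_ST, the fixation index family used in the analysis, compares expected heterozygosity within and across subpopulations. A deliberately simplified Python sketch for a single biallelic locus with equal subpopulation sizes (the study itself used multi-allelic microsatellites and F_ST analogues):

```python
def wrights_fst(subpop_freqs):
    """Wright's F_ST for one biallelic locus from per-subpopulation
    allele frequencies (equal subpopulation sizes assumed):
    F_ST = (H_T - H_S) / H_T, with expected heterozygosity H = 2p(1-p)."""
    n = len(subpop_freqs)
    p_bar = sum(subpop_freqs) / n
    h_total = 2 * p_bar * (1 - p_bar)                       # pooled
    h_sub = sum(2 * p * (1 - p) for p in subpop_freqs) / n  # mean within
    return (h_total - h_sub) / h_total

# Identical frequencies -> no differentiation
print(wrights_fst([0.5, 0.5]))  # 0.0
# Fixed for alternate alleles -> complete differentiation
print(wrights_fst([0.0, 1.0]))  # 1.0
```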

Relevance: 20.00%

Abstract:

Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to a MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. 
Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation, and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are independent of spatial resolution and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems, ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated. A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries, such as IMRT, in treatment sites where patient inhomogeneities are expected to be significant.
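Of the comparison metrics named above, the gamma evaluation combines a dose-difference and a distance-to-agreement criterion. The Python sketch below is a brute-force 1-D global gamma on co-sampled profiles with invented dose values; production implementations interpolate and work in 3-D.

```python
import math

def gamma_index(ref_dose, eval_dose, positions, dose_tol, dist_tol):
    """Brute-force global 1-D gamma: for each reference point, take the
    minimum over evaluated points of
    sqrt((dx/dist_tol)^2 + (dD/dose_tol)^2); gamma <= 1 passes."""
    gammas = []
    for x_r, d_r in zip(positions, ref_dose):
        best = min(
            math.sqrt(((x_e - x_r) / dist_tol) ** 2 +
                      ((d_e - d_r) / dose_tol) ** 2)
            for x_e, d_e in zip(positions, eval_dose))
        gammas.append(best)
    return gammas

ref = [0.0, 50.0, 100.0, 50.0, 0.0]
ev  = [0.0, 52.0, 100.0, 50.0, 0.0]       # a 2-unit dose error at one point
g = gamma_index(ref, ev, [0, 1, 2, 3, 4], dose_tol=3.0, dist_tol=1.0)
print(all(v <= 1.0 for v in g))            # True under a 3-unit/1-unit criterion
```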
Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.

Relevance: 20.00%

Abstract:

In this paper we analyse the effects of highway traffic flow parameters, such as vehicle arrival rate and density, on the performance of Amplify-and-Forward (AF) cooperative vehicular networks along a multi-lane highway under the free-flow state. We derive analytical expressions for connectivity performance and verify them with Monte-Carlo simulations. When AF cooperative relaying is employed together with Maximum Ratio Combining (MRC) at the receivers, the average route error rate shows a 10-20 fold improvement compared to direct communication. A 4-8 fold increase in the maximum number of traversable hops can also be observed at different vehicle densities when AF cooperative communication is used to strengthen communication routes. However, the theoretical upper bound on the maximum number of hops promises even higher performance gains.
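The scale of the reported gains can be illustrated with a toy link model. The Python sketch below treats a route as a chain of independent links; the per-hop error probabilities are invented for illustration, and the 10-20 fold figure in the abstract comes from the paper's analytical model, not from this simplification.

```python
def route_error_rate(per_hop_error, hops):
    """Probability that a route of `hops` independent links fails when
    each link fails with probability `per_hop_error` - a toy model, not
    the paper's analytical connectivity expressions."""
    return 1.0 - (1.0 - per_hop_error) ** hops

direct = route_error_rate(0.10, 4)   # direct transmission (illustrative)
coop = route_error_rate(0.005, 4)    # AF relaying + MRC (illustrative)
print(round(direct / coop, 1))       # 17.3: the same order as the reported gains
```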

Relevance: 20.00%

Abstract:

Invasion waves of cells play an important role in development, disease and repair. Standard discrete models of such processes typically involve simulating cell motility, cell proliferation and cell-to-cell crowding effects in a lattice-based framework. The continuum-limit description is often given by a reaction–diffusion equation that is related to the Fisher–Kolmogorov equation. One of the limitations of a standard lattice-based approach is that real cells move and proliferate in continuous space and are not restricted to a predefined lattice structure. We present a lattice-free model of cell motility and proliferation, with cell-to-cell crowding effects, and we use the model to replicate invasion wave-type behaviour. The continuum-limit description of the discrete model is a reaction–diffusion equation with a proliferation term that is different from lattice-based models. Comparing lattice-based and lattice-free simulations indicates that both models lead to invasion fronts that are similar at the leading edge, where the cell density is low. Conversely, the two models make different predictions in the high density region of the domain, well behind the leading edge. We analyse the continuum-limit description of the lattice-based and lattice-free models to show that both give rise to invasion wave-type solutions that move with the same speed but have very different shapes. We explore the significance of these differences by calibrating the parameters in the standard Fisher–Kolmogorov equation using data from the lattice-free model. We conclude that estimating parameters using this kind of standard procedure can produce misleading results.
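The shared wave speed referred to above is the minimum travelling-wave speed of the Fisher–Kolmogorov equation, c = 2·sqrt(D·λ) for diffusivity D and proliferation rate λ. A small Python helper (the parameter values are illustrative):

```python
import math

def fisher_wave_speed(diffusivity, proliferation_rate):
    """Minimum travelling-wave speed of the Fisher-Kolmogorov equation,
    c = 2 * sqrt(D * lambda). Both the lattice-based and lattice-free
    continuum limits give waves moving at the same speed."""
    return 2.0 * math.sqrt(diffusivity * proliferation_rate)

print(fisher_wave_speed(1.0, 1.0))   # 2.0
print(fisher_wave_speed(0.25, 1.0))  # 1.0
```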

Relevance: 20.00%

Abstract:

Sol-gel synthesis in varied gravity is a relatively new topic in the literature, and further investigation is required to explore its full potential as a method to synthesise novel materials. Although trialled for systems such as silica, the specific application of varied-gravity synthesis to other sol-gel systems such as titanium has not previously been undertaken. Current literature methods for the synthesis of sol-gel materials in reduced gravity could not be applied to titanium sol-gel processing, so a new strategy had to be developed in this study. To successfully conduct experiments in varied gravity, a refined titanium sol-gel chemical precursor had to be developed that allowed the single-solution precursor to remain unreactive at temperatures up to 50 °C and to begin reacting only when exposed to a pressure decrease under vacuum. Given the new nature of this precursor, a thorough characterisation of the reaction precursors was undertaken using techniques such as nuclear magnetic resonance (NMR), infrared, and UV-Vis spectroscopy in order to achieve sufficient understanding of precursor chemistry and kinetic stability. This understanding was then used to propose gelation reaction mechanisms under varied-gravity conditions. Two unique reactor systems were designed and built for the specific purpose of studying the effects of varied gravity (high, normal, reduced) during the synthesis of titanium sol-gels. The first system was a centrifuge capable of providing high-gravity environments of up to 70 g for extended periods whilst applying a 100 mbar vacuum and a temperature of 40-50 °C to the reaction chambers. The second system, to be used in the QUT Microgravity Drop Tower Facility, was also required to provide the same thermal and vacuum conditions used in the centrifuge, but had to operate autonomously during free fall.
Through the use of post-synthesis characterisation techniques such as Raman spectroscopy, X-ray diffraction (XRD), and N2 adsorption, it was found that increased gravity levels during synthesis had the greatest effect on the final products. Samples produced in reduced and normal gravity formed amorphous gels containing very small particles with moderate surface areas, whereas crystalline anatase (TiO2) was found to form in samples synthesised above 5 g, with significant increases in crystallinity, particle size, and surface area observed when samples were produced at gravity levels up to 70 g. It is proposed that, for samples produced in higher gravity, an increased concentration gradient of water forms at the bottom of the reacting film due to forced convection. The particles formed in higher gravity diffuse downward towards this excess of water, which favours the condensation reaction of the remaining sol-gel precursors with the particles, promoting increased particle growth. With downward convection removed in reduced gravity, particle growth through condensation reactions is physically hindered and hydrolysis reactions are favoured instead. Another significant finding from this work was that anatase could be produced at relatively low temperatures of 40-50 °C, instead of by the conventional method of calcination above 450 °C, solely through sol-gel synthesis at higher gravity levels. It is hoped that the outcomes of this research will lead to an increased understanding of the effects of gravity on the chemical synthesis of titanium sol-gels, potentially leading to the development of improved products suitable for diverse applications such as semiconductor or catalyst materials, as well as significantly reducing production and energy costs by manufacturing these materials at significantly lower temperatures.
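The 70 g environment quoted for the centrifuge maps to rotor speed through the standard relative-centrifugal-force relation RCF = ω²r/g₀. A Python sketch (the 0.3 m rotor radius is an assumed figure for the example, not taken from the thesis):

```python
import math

def rcf(rpm, radius_m, g0=9.81):
    """Relative centrifugal force (in multiples of g) at a given rotor
    speed and radius: RCF = omega^2 * r / g0."""
    omega = 2.0 * math.pi * rpm / 60.0
    return omega ** 2 * radius_m / g0

def rpm_for_rcf(target_g, radius_m, g0=9.81):
    """Rotor speed (rpm) needed to reach a target RCF at radius r."""
    return 60.0 / (2.0 * math.pi) * math.sqrt(target_g * g0 / radius_m)

# ~70 g at an assumed 0.3 m rotor radius
rpm = rpm_for_rcf(70.0, 0.3)
print(round(rcf(rpm, 0.3)))  # 70
```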