934 results for Geometric Sum
Abstract:
Several components of the metabolic syndrome, particularly diabetes and cardiovascular disease, are known to be oxidative stress-related conditions, and there is research to suggest that antioxidant nutrients may play a protective role in these conditions. Carotenoids are compounds derived primarily from plants, several of which have been shown to be potent antioxidants. The aim of this study was to examine the associations between metabolic syndrome status and major serum carotenoids in adult Australians. Data on the presence of the metabolic syndrome, based on International Diabetes Federation 2005 criteria, were collected from 1523 adults aged 25 years and over in six randomly selected urban centers in Queensland, Australia, using a cross-sectional study design. Weight, height, BMI, waist circumference, blood pressure, fasting and 2-hour blood glucose and lipids were determined, as well as five serum carotenoids. Mean serum alpha-carotene, beta-carotene and the sum of the five carotenoid concentrations were significantly lower (p<0.05) in persons with the metabolic syndrome (after adjusting for age, sex, education, BMI status, alcohol intake, smoking, physical activity status and vitamin/mineral use) than in persons without the syndrome. Alpha-, beta- and total carotenoids also decreased significantly (p<0.05) with an increasing number of components of the metabolic syndrome, after adjusting for these confounders. These differences were significant among former smokers and non-smokers, but not in current smokers. Low concentrations of serum alpha-carotene, beta-carotene and the sum of the five carotenoids appear to be associated with metabolic syndrome status. Additional research, particularly longitudinal studies, may help to determine whether these associations are causally related to the metabolic syndrome, or are a result of the pathologies of the syndrome.
Abstract:
This thesis is a study of naturally occurring radioactive material (NORM) activity concentrations, gamma dose rates and radon (222Rn) exhalation from the waste streams of large-scale onshore petroleum operations. The activities covered included sludge recovery from separation tanks, sludge farming, NORM storage, scaling in oil tubulars, scaling in gas production and sedimentation in produced-water evaporation ponds. Field work was conducted in the arid desert terrain of an operational oil exploration and production region in the Sultanate of Oman. The main radionuclides found were 226Ra and 210Pb (238U series), 228Ra and 228Th (232Th series), and 227Ac (235U series), along with 40K. All activity concentrations were higher than the ambient soil level and varied over several orders of magnitude. Gamma dose rates at a height of 1 m above ground ranged from 0.06 to 0.43 µSv h⁻¹ for the farm-treated sludge, with an average close to the ambient soil mean of 0.086 ± 0.014 µSv h⁻¹, whereas the untreated sludge gamma dose rates ranged from 0.07 to 1.78 µSv h⁻¹, with a mean of 0.456 ± 0.303 µSv h⁻¹. The geometric mean of the ambient soil 222Rn exhalation rate for the area surrounding the sludge was mBq m⁻² s⁻¹. Radon exhalation rates in oil waste products were all higher than the ambient soil value and varied over three orders of magnitude. This study produced some unique findings, including: (i) detection of radiotoxic 227Ac in the oil scales and sludge, (ii) the need for a new empirical relation between petroleum sludge activity concentrations and gamma dose rates, and (iii) assessment of 222Rn exhalation from oil sludge. Additionally, the study investigated a method to determine oil scale and sludge age using the inherent behaviour of radionuclides, namely the 228Ra:226Ra and 228Th:228Ra activity ratios.
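The age-dating idea in the final sentence can be illustrated with a short calculation. Assuming a closed system in which 226Ra (half-life 1600 y) is effectively constant while 228Ra (half-life 5.75 y) decays, the age follows from the decline of the 228Ra:226Ra activity ratio. The function name and the known-initial-ratio assumption below are illustrative, not taken from the thesis:

```python
import math

RA228_HALF_LIFE_Y = 5.75  # years; 226Ra (1600 y) is treated as constant

def age_from_ra_ratio(ratio_now, ratio_initial):
    """Age of a scale/sludge deposit from the decay of its 228Ra:226Ra
    activity ratio, assuming a closed system and a known initial ratio."""
    lam = math.log(2) / RA228_HALF_LIFE_Y  # decay constant of 228Ra
    return math.log(ratio_initial / ratio_now) / lam
```

By construction, a measured ratio of half the initial value gives back one 228Ra half-life as the age.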
Abstract:
Established Monte Carlo user codes BEAMnrc and DOSXYZnrc permit the accurate and straightforward simulation of radiotherapy experiments and treatments delivered from multiple beam angles. However, when an electronic portal imaging detector (EPID) is included in these simulations, treatment delivery from non-zero beam angles becomes problematic. This study introduces CTCombine, a purpose-built code for rotating selected CT data volumes, converting CT numbers to mass densities, combining the results with model EPIDs and writing output in a form which can easily be read and used by the dose calculation code DOSXYZnrc. The geometric and dosimetric accuracy of CTCombine’s output has been assessed by simulating simple and complex treatments applied to a rotated planar phantom and a rotated humanoid phantom and comparing the resulting virtual EPID images with the images acquired using experimental measurements and independent simulations of equivalent phantoms. It is expected that CTCombine will be useful for Monte Carlo studies of EPID dosimetry as well as other EPID imaging applications.
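As a rough illustration of two core operations CTCombine performs (rotating CT data volumes and converting CT numbers to mass densities), the sketch below uses a crude two-segment HU-to-density ramp and cardinal-angle rotations only; the breakpoints and slopes are illustrative assumptions, not CTCombine's actual calibration:

```python
import numpy as np

def hu_to_density(hu):
    """Convert CT numbers (HU) to mass density in g/cm^3 using a crude
    two-segment linear ramp (illustrative values only)."""
    hu = np.asarray(hu, dtype=float)
    return np.where(hu < 0.0,
                    1.0 + hu / 1000.0,   # -1000 HU (air) maps to ~0 g/cm^3
                    1.0 + hu * 0.0006)   # rough soft-tissue-to-bone slope

def rotate_quarter_turns(ct_volume, n_turns):
    """Rotate a (z, y, x) CT volume in the transverse plane by n quarter
    turns; cardinal beam angles only, arbitrary angles need interpolation."""
    return np.rot90(ct_volume, k=n_turns, axes=(1, 2))
```

The density volume produced this way is the kind of input a dose calculation code such as DOSXYZnrc expects.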
Abstract:
Grid music systems provide discrete geometric methods for simplified music-making, providing spatialised input to construct patterned music on a 2D matrix layout. While they are conceptually simple, grid systems may be layered to enable complex and satisfying musical results. Grid music systems have been applied across a range of scales, from small portable devices up to larger installations. In this paper we discuss the use of grid music systems in general and present an overview of the HarmonyGrid system we have developed as a new interactive performance system. We discuss a range of issues related to the design and use of larger-scale grid-based interactive performance systems such as the HarmonyGrid.
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were built using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytic focus from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice.
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obfuscated the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive, with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) being sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility: a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic for these individuals is ‘both-and’ rather than ‘either-or’: a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for the low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school, and to its complex relationship with students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at the same time, be digital kids and analogue students.
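The CART modelling used in the quantitative phase can be sketched as follows. The data here are synthetic, the feature names are assumed for illustration, and only the modelling pattern, not the thesis's results, is reproduced:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 481  # matches the reported sample size
# Hypothetical 1-7 Likert-style survey scores; names are assumptions
features = ["peer_support", "perceived_ease_of_use", "cognitive_playfulness"]
X = rng.integers(1, 8, size=(n, 3)).astype(float)
# Synthetic outcome: usage driven mostly by peer support, plus noise
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 1.0, n) > 6.0).astype(int)

# Fit a shallow CART and inspect which predictors carry the splits
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
importances = dict(zip(features, tree.feature_importances_))
```

With real questionnaire data, the incremental models (i)-(iii) would simply refit the tree on successively larger predictor sets and compare importances.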
Abstract:
Pooled serum samples collected from 8132 residents in 2002/03 and 2004/05 were analyzed to assess human polybrominated diphenyl ether (PBDE) concentrations in specified strata of the Australian population. The strata were defined by age (0–4 years, 5–15 years, <16 years, 16–30 years, 31–45 years, 46–60 years, and >60 years); region; and gender. For both time periods, infants and older children had substantially higher PBDE concentrations than adults. For samples collected in 2004/05, the mean ± standard deviation ΣPBDE (sum of the homologue groups for the mono-, di-, tri-, tetra-, penta-, hexa-, hepta-, octa-, nona-, and deca-BDEs) concentrations for 0–4 and 5–15 years were 73 ± 7 and 29 ± 7 ng g⁻¹ lipid, respectively, while for all adults >16 years the mean concentration was lower, at 18 ± 5 ng g⁻¹ lipid. A similar trend was observed for the samples collected in 2002/03, with the mean ΣPBDE concentration for children <16 years being 28 ± 8 ng g⁻¹ lipid and for adults >16 years, 15 ± 5 ng g⁻¹ lipid. No regional or gender-specific differences were observed. Measured data were compared with a model that we developed to incorporate the primary known exposure pathways (food, air, dust, breast milk) and clearance (half-life) data. The model was used to predict PBDE concentration trends and indicated that the elevated concentrations in infants were primarily due to maternal transfer and breast-milk consumption, with inhalation and ingestion of dust making a comparatively lower contribution.
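The kind of intake/clearance modelling described can be sketched as a one-compartment, first-order process. This is a generic sketch of that model class, not the authors' model, and all parameter values below are arbitrary:

```python
import math

def body_burden(intake_rate, half_life_days, t_days, c0=0.0):
    """One-compartment, first-order clearance model: burden rises toward
    a steady state set by the intake rate and the elimination half-life."""
    k = math.log(2) / half_life_days   # elimination rate constant
    c_ss = intake_rate / k             # steady-state burden
    return c_ss + (c0 - c_ss) * math.exp(-k * t_days)

# Illustrative only: a higher intake rate (e.g. breast milk plus dust
# ingestion in infants) raises the steady-state burden proportionally.
infant = body_burden(2.0, 693.0, 365.0)
adult = body_burden(1.0, 693.0, 365.0)
```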
Abstract:
Background: Polybrominated diphenyl ethers (PBDEs) are used as flame retardants in many products and have been detected in human samples worldwide. Limited data show that concentrations are elevated in young children. Objectives: We investigated the association between PBDEs and age with an emphasis on young children from Australia in 2006–2007. Methods: We collected human blood serum samples (n = 2,420), which we stratified by age and sex and pooled for analysis of PBDEs. Results: The sum of BDE-47, -99, -100, and -153 concentrations (Σ4PBDE) increased from 0–0.5 years (mean ± SD, 14 ± 3.4 ng/g lipid) to peak at 2.6–3 years (51 ± 36 ng/g lipid; p < 0.001) and then decreased until 31–45 years (9.9 ± 1.6 ng/g lipid). We observed no further significant decrease among ages 31–45, 45–60 (p = 0.964), or > 60 years (p = 0.894). The mean Σ4PBDE concentration in cord blood (24 ± 14 ng/g lipid) did not differ significantly from that in adult serum at ages 15–30 (p = 0.198) or 31–45 years (p = 0.140). We found no temporal trend when we compared the present results with Australian PBDE data from 2002–2005. PBDE concentrations were higher in males than in females; however, this difference reached statistical significance only for BDE-153 (p = 0.05). Conclusions: The observed peak concentration at 2.6–3 years of age is later than the period when breast-feeding is typically ceased. This suggests that in addition to the exposure via human milk, young children have higher exposure to these chemicals and/or a lower capacity to eliminate them. Key words: Australia, children, cord blood, human blood serum, PBDEs, polybrominated diphenyl ethers. Environ Health Perspect 117:1461–1465 (2009). doi:10.1289/ehp.0900596
Abstract:
Integrity of Real Time Kinematic (RTK) positioning solutions relates to the level of confidence that can be placed in the information provided by the RTK system. It includes the ability of the RTK system to provide timely and valid warnings to users when the system must not be used for the intended operation. For instance, in a controlled traffic farming (CTF) system, which confines machinery traffic so as to separate wheel beds from root beds, RTK positioning error causes overlap and increases soil compaction. The RTK system’s integrity capability can inform users when the actual positional errors of the RTK solutions have exceeded the Horizontal Protection Level (HPL) within a certain Time-To-Alert (TTA) at a given Integrity Risk (IR). The latter is defined as the probability that the system claims normal operational status while actually being in an abnormal status, e.g., the ambiguities being incorrectly fixed and the positional errors having exceeded the HPL. The paper studies the required positioning performance (RPP) of GPS positioning for precision agriculture (PA) applications such as a CTF system, based on a literature review and a survey conducted among a number of farming companies. The HPL and IR are derived from these RPP parameters. An RTK-specific rover autonomous integrity monitoring (RAIM) algorithm is developed to determine system integrity from real-time outputs such as the residual sum of squares (RSS) and HDOP values. A two-station baseline data set is analyzed to demonstrate the concept of RTK integrity and to assess the RTK solution continuity, missed-detection probability and false-alarm probability.
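The residual-based detection step of such a RAIM algorithm can be sketched as a chi-square test on the RSS. This is a generic textbook test, not the specific algorithm developed in the paper, and the default false-alarm probability is an assumed value:

```python
from scipy.stats import chi2

def raim_test(rss, dof, p_false_alarm=1e-3):
    """Flag the epoch when the residual sum of squares (RSS) exceeds a
    chi-square threshold set by the allowed false-alarm probability."""
    threshold = chi2.ppf(1.0 - p_false_alarm, dof)
    return rss > threshold, threshold
```

A missed detection then corresponds to an epoch whose RSS stays below the threshold while the true position error already exceeds the HPL.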
Abstract:
Noise and vibration in complex ship structures are becoming a prominent issue for the shipbuilding industry and ship operators, due to the constant demand for faster ships of lighter weight and the industry's stringent noise and vibration regulations. In order to retain the full benefit of building faster ships without compromising too much on ride comfort and safety, noise and vibration control needs to be implemented. Owing to the complexity of ship structures, the coupling of different wave types and the multiple wave propagation paths, active control of global hull modes is difficult to implement and very expensive. Traditional passive control, such as adding damping materials, is only effective in the high frequency range. However, the most severe damage to ship structures is caused by large structural deformation of hull structures and high dynamic stress concentration at low frequencies. The greatest discomfort and fatigue of passengers and crew onboard ships is also due to low frequency noise and vibration. Innovative approaches are therefore required to attenuate noise and vibration at low frequencies. This book was developed from several specialized research topics on vibration and vibration control of ship structures, mostly from the author's own PhD work at the University of Western Australia. The book aims to provide a better understanding of the vibration characteristics of ribbed plate structures and plate/plate coupled structures, and of the mechanisms governing wave propagation and attenuation in periodic and irregular ribbed structures as well as in complex ship structures. The book is designed to be a reference for ship builders, vibro-acoustic engineers and researchers. The author also hopes that the book will stimulate more exciting future work in this area of research, and humbly hopes it will be of some use to those who purchase it. This book is divided into eight chapters.
Each chapter focuses on providing solutions to a particular vibration problem of ship structures. A brief summary of each chapter is given in the general introduction. The chapters are interdependent, forming an integrated volume on the subject of vibration and vibration control of ship structures and the like. I am indebted to many people in completing this work. In particular, I would like to thank Professor J. Pan, Dr N.H. Farag, Dr K. Sum and many others from the University of Western Australia for their advice and help during my time at the University and beyond. I would also like to thank my wife, Miaoling Wang, and my children, Anita, Sophia and Angela Lin, for their sacrifice and continuing support in making this work possible. Financial support for this work from the Australian Research Council, the Australian Defense Science and Technology Organization and Strategic Marine Pty Ltd of Western Australia is gratefully acknowledged.
Abstract:
Studies have examined the associations between cancers and circulating 25-hydroxyvitamin D [25(OH)D], but little is known about the impact of different laboratory practices on measured 25(OH)D concentrations. We examined the potential impact of delayed blood centrifuging, choice of collection tube, and type of assay on 25(OH)D concentrations. Blood samples from 20 healthy volunteers underwent alternative laboratory procedures: four centrifuging times (2, 24, 72, and 96 h after blood draw); three types of collection tubes (a red-top serum tube and two plasma anticoagulant tubes containing heparin or EDTA); and two types of assays (DiaSorin radioimmunoassay [RIA] and chemiluminescence immunoassay [CLIA/LIAISON®]). Log-transformed 25(OH)D concentrations were analyzed using generalized estimating equation (GEE) linear regression models. We found no difference in 25(OH)D concentrations by centrifuging time or type of assay. There was some indication of a difference in 25(OH)D concentrations by tube type in CLIA/LIAISON®-assayed samples, with concentrations in heparinized plasma (geometric mean, 16.1 ng ml⁻¹) higher than those in serum (geometric mean, 15.3 ng ml⁻¹) (p = 0.01), but the difference was significant only after a substantial centrifuging delay (96 h). Our study suggests that immediate processing of blood samples after collection need not be required, nor a particular choice of tube type or assay.
Abstract:
Competent navigation in an environment is a major requirement for an autonomous mobile robot to accomplish its mission. Nowadays, many successful systems for navigating a mobile robot use an internal map which represents the environment in a detailed geometric manner. However, building, maintaining and using such environment maps for navigation is difficult because of perceptual aliasing and measurement noise. Moreover, geometric maps require the processing of huge amounts of data, which is computationally expensive. This thesis addresses the problem of vision-based topological mapping and localisation for mobile robot navigation. Topological maps are concise and graphical representations of environments that are scalable and amenable to symbolic manipulation. Thus, they are well-suited for basic robot navigation applications, and also provide a representational basis for the procedural and semantic information needed for higher-level robotic tasks. In order to make vision-based topological navigation suitable for inexpensive, mass-market mobile robots, we propose to characterise key places of the environment by their visual appearance through colour histograms. The approach of representing places by visual appearance is based on the fact that colour histograms change slowly as the field of vision sweeps the scene when a robot moves through an environment. Hence, a place represents a region of the environment rather than a single position. We demonstrate in experiments using an indoor data set that a topological map in which places are characterised by visual appearance, augmented with metric clues, provides sufficient information to perform continuous metric localisation which is robust to the kidnapped robot problem. Many topological mapping methods build a topological map by clustering visual observations into places.
However, due to perceptual aliasing, observations from different places may be mapped to the same place representative in the topological map. A main contribution of this thesis is a novel approach for dealing with the perceptual aliasing problem in topological mapping. We propose to incorporate neighbourhood relations to disambiguate places which are otherwise indistinguishable. We present a constraint-based stochastic local search method which integrates this place-disambiguation approach in order to induce a topological map. Experiments show that the proposed method is capable of mapping environments with a high degree of perceptual aliasing, and that a small map is found quickly. Moreover, the method of using neighbourhood information for place disambiguation is integrated into a framework for topological off-line simultaneous localisation and mapping which does not require an initial categorisation of visual observations. Experiments on an indoor data set demonstrate the suitability of our method for reliably localising the robot while building a topological map.
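The appearance-based place characterisation can be sketched with coarse colour histograms compared by histogram intersection; the bin count and the similarity measure here are illustrative choices, not necessarily those used in the thesis:

```python
import numpy as np

def colour_histogram(image, bins=8):
    """Coarse per-channel colour histogram of an (H, W, 3) image,
    L1-normalised so images of different sizes are comparable."""
    hist = np.concatenate([
        np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
        for c in range(3)
    ]).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]: near 1 for consecutive frames of the same
    place, dropping gradually as the field of vision sweeps the scene."""
    return float(np.minimum(h1, h2).sum())
```

Clustering frames whose pairwise intersection stays above a threshold is one simple way to group observations into place candidates.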
Abstract:
High density development has been seen as a contribution to sustainable development. However, a number of engineering issues play a crucial role in the sustainable construction of high-rise buildings. Non-linear deformation of concrete has an adverse impact on high-rise buildings with complex geometries, due to differential axial shortening. These adverse effects are caused by time-dependent behaviour resulting in volume change, known as ‘shrinkage’, ‘creep’ and ‘elastic’ deformation. These three phenomena govern the behaviour and performance of all concrete elements, during and after construction. Reinforcement content, variable concrete modulus, volume-to-surface-area ratio of the elements, environmental conditions, and construction quality and sequence all influence the performance of concrete elements, and differential axial shortening will occur in all structural systems. Its detrimental effects escalate with increasing height and with non-vertical load paths resulting from geometric complexity. The magnitude of these effects has a significant impact on building envelopes, building services, secondary systems, and lifetime serviceability and performance. Analytical and test procedures available to quantify the magnitude of these effects are limited to very few parameters and are not adequately rigorous to capture the complexity of true time-dependent material response. With this in mind, a research project has been undertaken to develop an accurate numerical procedure to quantify the differential axial shortening of structural elements. The procedure has been successfully applied to quantify the differential axial shortening of a high-rise building, and the important capabilities of the procedure are discussed. A new practical concept, based on the variation of the vibration characteristics of a structure during and after construction, is presented for quantifying axial shortening and assessing structural performance.
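The three deformation components named above combine, in the simplest textbook treatment, into a single long-term axial strain. The sketch below is a hand calculation under constant stress with assumed values, not the rigorous time-dependent numerical procedure the project develops:

```python
def axial_strain(stress_mpa, e_modulus_mpa, creep_coeff, shrinkage_strain):
    """Long-term axial strain as the textbook sum of elastic, creep and
    shrinkage components under constant stress."""
    elastic = stress_mpa / e_modulus_mpa
    creep = creep_coeff * elastic        # creep scales the elastic strain
    return elastic + creep + shrinkage_strain

# Two columns under different stress shorten by different amounts,
# which is the source of differential axial shortening (values assumed).
diff = (axial_strain(12.0, 30000.0, 2.0, 400e-6)
        - axial_strain(8.0, 30000.0, 2.0, 400e-6))
```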
Abstract:
We investigate whether the two zero-cost portfolios, SMB and HML, have the ability to predict economic growth for the markets investigated in this paper. Our findings show that the coefficients are positive in only a limited number of cases, and significance is achieved in an even more limited number. Our results are in stark contrast to Liew and Vassalou (2000), who find coefficients that are generally positive and of a similar magnitude. We go a step further and also employ the methodology of Lakonishok, Shleifer and Vishny (1994), and once again fail to support the risk-based hypothesis of Liew and Vassalou (2000). In sum, we argue that the search for a robust economic explanation of the firm size and book-to-market equity effects needs sustained effort, as these two zero-cost portfolios do not represent economically relevant risk.
Abstract:
Willingness-to-pay models have shown the theoretical relationships between the contingent valuation (CV), cost of illness and avertive behaviour approaches. In this paper, field survey data are used to compare these three approaches and to demonstrate that contingent valuation bids exceed the sum of the cost of illness and avertive behaviour estimates. The estimates provide a validity check for CV bids and further support the claim that contingent valuation studies are theoretically consistent.