Abstract:
Several components of the metabolic syndrome, particularly diabetes and cardiovascular disease, are known to be oxidative stress-related conditions, and there is research to suggest that antioxidant nutrients may play a protective role in these conditions. Carotenoids are compounds derived primarily from plants, and several have been shown to be potent antioxidants. The aim of this study was to examine the associations between metabolic syndrome status and major serum carotenoids in adult Australians. Data on the presence of the metabolic syndrome, based on International Diabetes Federation 2005 criteria, were collected from 1523 adults aged 25 years and over in six randomly selected urban centers in Queensland, Australia, using a cross-sectional study design. Weight, height, BMI, waist circumference, blood pressure, fasting and 2-hour blood glucose and lipids were determined, as were five serum carotenoids. Mean serum alpha-carotene, beta-carotene and the sum of the five carotenoid concentrations were significantly lower (p<0.05) in persons with the metabolic syndrome (after adjusting for age, sex, education, BMI status, alcohol intake, smoking, physical activity status and vitamin/mineral use) than in persons without the syndrome. Alpha-, beta- and total carotenoids also decreased significantly (p<0.05) with increasing number of components of the metabolic syndrome, after adjusting for these confounders. These differences were significant among former smokers and non-smokers, but not in current smokers. Low concentrations of serum alpha-carotene, beta-carotene and the sum of the five carotenoids appear to be associated with metabolic syndrome status. Additional research, particularly longitudinal studies, may help to determine whether these associations are causally related to the metabolic syndrome, or are a result of the pathologies of the syndrome.
Abstract:
The inquiry documented in this thesis is located at the nexus of technological innovation and traditional schooling. As we enter the second decade of a new century, few would argue against the increasingly urgent need to integrate digital literacies with traditional academic knowledge. Yet, despite substantial investments from governments and businesses, the adoption and diffusion of contemporary digital tools in formal schooling remain sluggish. To date, research on technology adoption in schools tends to take a deficit perspective of schools and teachers, with the lack of resources and teacher ‘technophobia’ most commonly cited as barriers to digital uptake. Corresponding interventions that focus on increasing funding and upskilling teachers, however, have made little difference to adoption trends in the last decade. Empirical evidence that explicates the cultural and pedagogical complexities of innovation diffusion within long-established conventions of mainstream schooling, particularly from the standpoint of students, is wanting. To address this knowledge gap, this thesis inquires into how students evaluate and account for the constraints and affordances of contemporary digital tools when they engage with them as part of their conventional schooling. It documents the attempted integration of a student-led Web 2.0 learning initiative, known as the Student Media Centre (SMC), into the schooling practices of a long-established, high-performing independent senior boys’ school in urban Australia. The study employed an ‘explanatory’ two-phase research design (Creswell, 2003) that combined complementary quantitative and qualitative methods to achieve both breadth of measurement and richness of characterisation. In the initial quantitative phase, a self-reported questionnaire was administered to the senior school student population to determine adoption trends and predictors of SMC usage (N=481). 
Measurement constructs included individual learning dispositions (learning and performance goals, cognitive playfulness and personal innovativeness), as well as social and technological variables (peer support, perceived usefulness and ease of use). Incremental predictive models of SMC usage were built using Classification and Regression Tree (CART) modelling: (i) individual-level predictors, (ii) individual and social predictors, and (iii) individual, social and technological predictors. Peer support emerged as the best predictor of SMC usage. Other salient predictors included perceived ease of use and usefulness, cognitive playfulness and learning goals. On the whole, an overwhelming proportion of students reported low usage levels, low perceived usefulness and a lack of peer support for engaging with the digital learning initiative. The small minority of frequent users reported high levels of peer support and robust learning goal orientations, rather than being predominantly driven by performance goals. These findings indicate that tensions around social validation, digital learning and academic performance pressures influence students’ engagement with the Web 2.0 learning initiative. The qualitative phase that followed provided insights into these tensions by shifting the analytic focus from individual attitudes and behaviours to the shared social and cultural reasoning practices that explain students’ engagement with the innovation. Six in-depth focus groups, comprising 60 students with different levels of SMC usage, were conducted, audio-recorded and transcribed. Textual data were analysed using Membership Categorisation Analysis. Students’ accounts converged around a key proposition: the Web 2.0 learning initiative was useful-in-principle but useless-in-practice.
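The split-selection criterion behind the CART modelling described above can be sketched briefly: rank each candidate predictor by the Gini impurity reduction of its best single binary split, which is how CART chooses split variables. This is a minimal illustration on synthetic data; the feature names and the simulated usage rule echo the reported finding but are assumptions, not the study's data.

```python
# Minimal sketch of CART-style predictor ranking: score each feature by the
# Gini impurity reduction of its best threshold split. Synthetic data only;
# feature names are illustrative stand-ins for the thesis's constructs.
import numpy as np

def gini(y):
    p = y.mean()
    return 2 * p * (1 - p)

def best_split_gain(x, y):
    """Largest Gini impurity reduction over candidate threshold splits on x."""
    base, best = gini(y), 0.0
    for t in np.quantile(x, np.linspace(0.1, 0.9, 17)):
        left, right = y[x <= t], y[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        w = len(left) / len(y)
        best = max(best, base - (w * gini(left) + (1 - w) * gini(right)))
    return best

rng = np.random.default_rng(0)
n = 481  # matches the reported sample size
features = {
    "learning_goals": rng.normal(size=n),
    "cognitive_playfulness": rng.normal(size=n),
    "peer_support": rng.normal(size=n),
    "ease_of_use": rng.normal(size=n),
}
# Simulated usage driven mainly by peer support, echoing the reported finding.
usage = (features["peer_support"] + 0.3 * features["ease_of_use"]
         + rng.normal(scale=0.5, size=n)) > 1.0

gains = {name: best_split_gain(x, usage) for name, x in features.items()}
print(max(gains, key=gains.get))  # peer support should dominate here
```

Growing a full tree simply repeats this split search recursively on each partition; the incremental models (i)–(iii) correspond to widening the feature set offered to the procedure.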
While students endorsed the usefulness of the SMC for enhancing multimodal engagement, extending peer-to-peer networks and acquiring real-world skills, they also called attention to a number of constraints that obfuscated the realisation of these design affordances in practice. These constraints were cast in terms of three binary formulations of social and cultural imperatives at play within the school: (i) ‘cool/uncool’, (ii) ‘dominant staff/compliant student’, and (iii) ‘digital learning/academic performance’. The first formulation foregrounds the social stigma of the SMC among peers and its resultant lack of positive network benefits. The second relates to students’ perception of the school culture as authoritarian and punitive with adverse effects on the very student agency required to drive the innovation. The third points to academic performance pressures in a crowded curriculum with tight timelines. Taken together, findings from both phases of the study provide the following key insights. First, students endorsed the learning affordances of contemporary digital tools such as the SMC for enhancing their current schooling practices. For the majority of students, however, these learning affordances were overshadowed by the performative demands of schooling, both social and academic. The student participants saw engagement with the SMC in-school as distinct from, even oppositional to, the conventional social and academic performance indicators of schooling, namely (i) being ‘cool’ (or at least ‘not uncool’), (ii) sufficiently ‘compliant’, and (iii) achieving good academic grades. Their reasoned response, therefore, was simply to resist engagement with the digital learning innovation. Second, a small minority of students seemed dispositionally inclined to negotiate the learning affordances and performance constraints of digital learning and traditional schooling more effectively than others.
These students were able to engage more frequently and meaningfully with the SMC in school. Their ability to adapt and traverse seemingly incommensurate social and institutional identities and norms is theorised as cultural agility – a dispositional construct that comprises personal innovativeness, cognitive playfulness and learning goals orientation. The logic then is ‘both and’ rather than ‘either or’ for these individuals with a capacity to accommodate both learning and performance in school, whether in terms of digital engagement and academic excellence, or successful brokerage across multiple social identities and institutional affiliations within the school. In sum, this study takes us beyond the familiar terrain of deficit discourses that tend to blame institutional conservatism, lack of resourcing and teacher resistance for low uptake of digital technologies in schools. It does so by providing an empirical base for the development of a ‘third way’ of theorising technological and pedagogical innovation in schools, one which is more informed by students as critical stakeholders and thus more relevant to the lived culture within the school, and its complex relationship to students’ lives outside of school. It is in this relationship that we find an explanation for how these individuals can, at the one time, be digital kids and analogue students.
Abstract:
Pooled serum samples collected from 8132 residents in 2002/03 and 2004/05 were analyzed to assess human polybrominated diphenyl ether (PBDE) concentrations from specified strata of the Australian population. The strata were defined by age (0−4 years, 5−15 years, < 16 years, 16−30 years, 31−45 years, 46−60 years, and >60 years); region; and gender. For both time periods, infants and older children had substantially higher PBDE concentrations than adults. For samples collected in 2004/05, the mean ± standard deviation ΣPBDE (sum of the homologue groups for the mono-, di-, tri-, tetra-, penta-, hexa-, hepta-, octa-, nona-, and deca-BDEs) concentrations for 0−4 and 5−15 years were 73 ± 7 and 29 ± 7 ng g⁻¹ lipid, respectively, while for all adults >16 years, the mean concentration was lower at 18 ± 5 ng g⁻¹ lipid. A similar trend was observed for the samples collected in 2002/03, with the mean ΣPBDE concentration for children <16 years being 28 ± 8 ng g⁻¹ lipid and for the adults >16 years, 15 ± 5 ng g⁻¹ lipid. No regional or gender specific differences were observed. Measured data were compared with a model that we developed to incorporate the primary known exposure pathways (food, air, dust, breast milk) and clearance (half-life) data. The model was used to predict PBDE concentration trends and indicated that the elevated concentrations in infants were primarily due to maternal transfer and breast milk consumption, with inhalation and ingestion of dust making a comparatively lower contribution.
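The pharmacokinetic reasoning behind an exposure model of this kind can be sketched as a one-compartment model: daily intake from the exposure pathways adds to a lipid-normalised body burden, while first-order clearance removes it at a rate set by the elimination half-life. All parameter values below (half-life, intakes, lipid masses) are illustrative assumptions, not the paper's calibrated inputs.

```python
# Hedged sketch of a one-compartment body-burden model: intake from the
# exposure pathways minus first-order clearance. Parameter values are
# illustrative only, chosen to show why infants end up with higher burdens.
import math

HALF_LIFE_DAYS = 3.0 * 365          # assumed PBDE elimination half-life (~3 years)
k = math.log(2) / HALF_LIFE_DAYS    # first-order clearance rate (per day)

def simulate(intake_ng_per_day, lipid_mass_g, days, c0=0.0):
    """Euler-step a lipid-normalised body burden (ng per g lipid)."""
    c = c0
    for _ in range(days):
        c += intake_ng_per_day / lipid_mass_g - k * c
    return c

# Illustrative scenarios: a breast-fed infant with a small lipid pool versus
# an adult (diet and dust only) with a much larger lipid pool.
infant = simulate(intake_ng_per_day=100.0, lipid_mass_g=1500.0, days=365)
adult = simulate(intake_ng_per_day=60.0, lipid_mass_g=12000.0, days=5 * 365)
print(round(infant, 1), round(adult, 1))  # infant burden ends up markedly higher
```

The infant's higher lipid-normalised intake (breast milk plus a small lipid pool) dominates the result, which is the qualitative pattern the measured pools show.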
Abstract:
Background: Polybrominated diphenyl ethers (PBDEs) are used as flame retardants in many products and have been detected in human samples worldwide. Limited data show that concentrations are elevated in young children. Objectives: We investigated the association between PBDEs and age with an emphasis on young children from Australia in 2006–2007. Methods: We collected human blood serum samples (n = 2,420), which we stratified by age and sex and pooled for analysis of PBDEs. Results: The sum of BDE-47, -99, -100, and -153 concentrations (Σ4PBDE) increased from 0–0.5 years (mean ± SD, 14 ± 3.4 ng/g lipid) to peak at 2.6–3 years (51 ± 36 ng/g lipid; p < 0.001) and then decreased until 31–45 years (9.9 ± 1.6 ng/g lipid). We observed no further significant decrease among ages 31–45, 45–60 (p = 0.964), or > 60 years (p = 0.894). The mean Σ4PBDE concentration in cord blood (24 ± 14 ng/g lipid) did not differ significantly from that in adult serum at ages 15–30 (p = 0.198) or 31–45 years (p = 0.140). We found no temporal trend when we compared the present results with Australian PBDE data from 2002–2005. PBDE concentrations were higher in males than in females; however, this difference reached statistical significance only for BDE-153 (p = 0.05). Conclusions: The observed peak concentration at 2.6–3 years of age is later than the period when breast-feeding is typically ceased. This suggests that in addition to the exposure via human milk, young children have higher exposure to these chemicals and/or a lower capacity to eliminate them. Key words: Australia, children, cord blood, human blood serum, PBDEs, polybrominated diphenyl ethers. Environ Health Perspect 117:1461–1465 (2009). doi:10.1289/ehp.0900596
Abstract:
Integrity of Real Time Kinematic (RTK) positioning solutions relates to the level of confidence that can be placed in the information provided by the RTK system. It includes the ability of the RTK system to provide timely and valid warnings to users when the system must not be used for the intended operation. For instance, in a controlled traffic farming (CTF) system, which controls traffic so as to separate wheel beds from root beds, RTK positioning error causes overlap and increases the amount of soil compaction. The RTK system’s integrity capability can inform users when the actual positional errors of the RTK solutions have exceeded Horizontal Protection Levels (HPL) within a certain Time-To-Alert (TTA) at a given Integrity Risk (IR). The latter is defined as the probability that the system claims normal operational status while actually being in an abnormal status, e.g., the ambiguities being incorrectly fixed and positional errors having exceeded the HPL. The paper studies the required positioning performance (RPP) of GPS positioning systems for precision agriculture (PA) applications such as a CTF system, based on a literature review and a survey conducted among a number of farming companies. The HPL and IR are derived from these RPP parameters. An RTK-specific rover autonomous integrity monitoring (RAIM) algorithm is developed to determine system integrity from real-time outputs such as the residual sum of squares (RSS) and HDOP values. A two-station baseline data set is analyzed to demonstrate the concept of RTK integrity and to assess RTK solution continuity, missed detection probability and false alarm probability.
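The core residual-based test in a RAIM algorithm can be sketched as a chi-square test on the residual sum of squares: under fault-free conditions the normalised RSS of a least-squares solution is chi-square distributed with degrees of freedom equal to the measurement redundancy, so exceeding a chosen percentile threshold raises an integrity alert. The residuals, noise level and threshold below are illustrative assumptions, not values from the paper.

```python
# Hedged sketch of a residual-based RAIM test: flag an integrity alert when
# the normalised residual sum of squares (RSS) exceeds a chi-square threshold
# chosen for the target false-alarm probability. All values are illustrative.

DOF = 4                 # redundancy: satellites tracked minus solved-for states
CHI2_THRESHOLD = 18.47  # ~99.9th percentile of the chi-square with 4 dof
SIGMA = 0.02            # assumed residual std dev (metres) for a fixed solution

def integrity_ok(residuals_m):
    """True when the normalised RSS stays below the chi-square alarm threshold."""
    rss = sum((r / SIGMA) ** 2 for r in residuals_m)
    return rss <= CHI2_THRESHOLD

nominal = [0.01, -0.015, 0.02, 0.005]   # fault-free residuals (metres)
faulted = [r + 0.15 for r in nominal]   # 15 cm bias, e.g. a wrong ambiguity fix
print(integrity_ok(nominal), integrity_ok(faulted))
```

Raising the threshold lowers the false-alarm probability but increases the missed-detection probability, which is exactly the continuity/integrity trade-off the paper's two-station experiment assesses.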
Abstract:
Noise and vibration in complex ship structures are becoming a prominent issue for the shipbuilding industry and ship companies, due to the constant demand for building faster ships of lighter weight and the stringent noise and vibration regulations of the industry. In order to retain the full benefit of building faster ships without compromising too much on ride comfort and safety, noise and vibration control needs to be implemented. Due to the complexity of ship structures, the coupling of different wave types and multiple wave propagation paths, active control of global hull modes is difficult to implement and very expensive. Traditional passive control, such as adding damping materials, is only effective in the high frequency range. However, the most severe damage to ship structures is caused by large structural deformation of hull structures and high dynamic stress concentration at low frequencies. The greatest discomfort and fatigue of passengers and crew on board ships is also due to low frequency noise and vibration. Innovative approaches are therefore required to attenuate noise and vibration at low frequencies. This book was developed from several specialized research topics on vibration and vibration control of ship structures, mostly from the author's own PhD work at the University of Western Australia. The book aims to provide a better understanding of the vibration characteristics of ribbed plate structures and plate/plate coupled structures, and of the mechanisms governing wave propagation and attenuation in periodic and irregular ribbed structures as well as in complex ship structures. The book is designed to be a reference for ship builders, vibro-acoustic engineers and researchers. The author also hopes that the book can stimulate more exciting future work in this area of research, and it is the author's humble desire that the book be of some use to those who purchase it. This book is divided into eight chapters.
Each chapter focuses on providing solutions to a particular vibration problem of ship structures. A brief summary of each chapter is given in the general introduction. The chapters are interdependent, forming an integrated volume on the subject of vibration and vibration control of ship structures and similar structures. I am indebted to many people in completing this work. In particular, I would like to thank Professor J. Pan, Dr N.H. Farag, Dr K. Sum and many others from the University of Western Australia for their useful advice and help during my time at the University and beyond. I would also like to thank my wife, Miaoling Wang, and my children, Anita, Sophia and Angela Lin, for their sacrifice and continuing support in making this work possible. Financial support from the Australian Research Council, the Australian Defence Science and Technology Organisation and Strategic Marine Pty Ltd of Western Australia is gratefully acknowledged.
Abstract:
We investigate whether the two zero-cost portfolios, SMB and HML, have the ability to predict economic growth for the markets investigated in this paper. Our findings show that there are only a limited number of cases where the coefficients are positive, and significance is achieved in an even smaller number of cases. Our results are in stark contrast to Liew and Vassalou (2000), who find coefficients to be generally positive and of a similar magnitude. We go a step further and also employ the methodology of Lakonishok, Shleifer and Vishny (1994), and once again fail to support the risk-based hypothesis of Liew and Vassalou (2000). In sum, we argue that the search for a robust economic explanation of the firm size and book-to-market equity effects needs sustained effort, as these two zero-cost portfolios do not represent economically relevant risk.
Abstract:
Willingness-to-pay models have shown the theoretical relationships between the contingent valuation (CV), cost of illness and avertive behaviour approaches. In this paper, field survey data are used to compare these three approaches and to demonstrate that contingent valuation bids exceed the sum of the cost of illness and avertive behaviour estimates. The estimates provide a validity check for CV bids and further support the claim that contingent valuation studies are theoretically consistent.
Abstract:
Today’s evolving networks are experiencing a large number of different attacks, ranging from system break-ins and infection by automated attack tools such as worms, viruses and trojan horses, to denial of service (DoS). One important aspect of such attacks is that they are often indiscriminate and target Internet addresses without regard to whether they are bona fide allocated or not. Due to the absence of any advertised host services, the traffic observed on unused IP addresses is by definition unsolicited and likely to be either opportunistic or malicious. The analysis of large repositories of such traffic can be used to extract useful information about both ongoing and new attack patterns and to unearth unusual attack behaviors. However, such an analysis is difficult due to the size and nature of the traffic collected on unused address spaces. In this dissertation, we present a network traffic analysis technique which uses traffic collected from unused address spaces and relies on the statistical properties of the collected traffic to accurately and quickly detect new and ongoing network anomalies. Detection of network anomalies is based on the concept that an anomalous activity usually transforms the network parameters in such a way that their statistical properties no longer remain constant, resulting in abrupt changes. In this dissertation, we use sequential analysis techniques to identify changes in the behavior of network traffic targeting unused address spaces, to unveil both ongoing and new attack patterns. Specifically, we have developed a dynamic sliding-window-based non-parametric cumulative sum (CUSUM) change detection technique for the identification of changes in network traffic. Furthermore, we have introduced dynamic thresholds to detect changes in network traffic behavior and also to detect when a particular change has ended.
Experimental results are presented that demonstrate the operational effectiveness and efficiency of the proposed approach, using both synthetically generated datasets and real network traces collected from a dedicated block of unused IP addresses.
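The sliding-window non-parametric CUSUM idea can be sketched in a few lines: each new observation's positive deviation from a baseline mean, estimated over a trailing window, is accumulated, and an alarm is raised when the cumulative sum crosses a threshold. The window size, fixed threshold and synthetic traffic series below are illustrative assumptions (the dissertation's dynamic thresholds are simplified here to a constant one).

```python
# Hedged sketch of a one-sided, sliding-window non-parametric CUSUM detector.
# Window, threshold and the synthetic traffic series are illustrative only.
def cusum_alarms(series, window=20, threshold=15.0):
    """Return indices where the one-sided CUSUM statistic crosses threshold."""
    alarms, s = [], 0.0
    for i, x in enumerate(series):
        if i < window:
            continue                                   # need a full baseline window
        baseline = sum(series[i - window:i]) / window  # sliding-window mean
        s = max(0.0, s + (x - baseline))               # accumulate positive deviations
        if s > threshold:
            alarms.append(i)
            s = 0.0                                    # reset after an alarm
    return alarms

# Synthetic traffic rate: stable around 5 units, then an abrupt jump to 12
# at index 20, mimicking the onset of an attack on unused address space.
traffic = [5, 6, 4, 5, 5, 6, 5, 4, 5, 6, 5, 5, 4, 6, 5, 5, 6, 4, 5, 5] + [12] * 10
alarms = cusum_alarms(traffic)
print(alarms)  # first alarm lands shortly after the change at index 20
```

Because the baseline itself slides forward, the statistic eventually re-absorbs a sustained shift, which is one way a detector can also signal when a change has ended.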
Abstract:
High-rate flooding attacks (aka Distributed Denial of Service or DDoS attacks) continue to constitute a pernicious threat within the Internet domain. In this work we demonstrate how using packet source IP addresses, coupled with a change-point analysis of the rate of arrival of new IP addresses, may be sufficient to detect the onset of a high-rate flooding attack. Importantly, minimizing the number of features to be examined directly addresses the issue of scalability of the detection process to higher network speeds. Using a proof-of-concept implementation, we have shown how pre-onset IP addresses can be efficiently represented using a bit vector and used to modify a “white list” filter in a firewall as part of the mitigation strategy.
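The bit-vector representation mentioned above can be sketched as follows: each address in a monitored block maps to a single bit, so inserts and membership tests are O(1), a /24 costs 32 bytes, and even a table for the full IPv4 space would be 2^32 bits (512 MB). The network block and addresses below are illustrative (192.0.2.0/24 is a documentation range), not the paper's implementation.

```python
# Hedged sketch of a bit-vector "white list" of pre-onset IP addresses:
# one bit per address in a monitored block. Block and addresses illustrative.
import ipaddress

class IPBitVector:
    def __init__(self, network="192.0.2.0/24"):  # documentation range, 32 bytes
        self.net = ipaddress.ip_network(network)
        self.bits = bytearray(self.net.num_addresses // 8 or 1)

    def _index(self, ip):
        offset = int(ipaddress.ip_address(ip)) - int(self.net.network_address)
        return offset // 8, offset % 8

    def add(self, ip):
        byte, bit = self._index(ip)
        self.bits[byte] |= 1 << bit

    def __contains__(self, ip):
        byte, bit = self._index(ip)
        return bool(self.bits[byte] & (1 << bit))

wl = IPBitVector()
wl.add("192.0.2.7")  # address seen before attack onset -> admit during mitigation
print("192.0.2.7" in wl, "192.0.2.99" in wl)
```

During mitigation, a firewall rule would forward packets whose source bit is set and drop or rate-limit the rest, keeping the per-packet decision to a single bit test.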
Abstract:
In public venues, crowd size is a key indicator of crowd safety and stability. In this paper we propose a crowd counting algorithm that uses tracking and local features to count the number of people in each group as represented by a foreground blob segment, so that the total crowd estimate is the sum of the group sizes. Tracking is employed to improve the robustness of the estimate, by analysing the history of each group, including splitting and merging events. A simplified ground truth annotation strategy results in an approach with minimal setup requirements that is highly accurate.
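The counting scheme described above can be sketched simply: each foreground blob gets a per-frame group-size estimate, tracking smooths each group's estimate over its history, and the crowd total is the sum of the group sizes. The per-frame size numbers below are synthetic stand-ins for the local-feature estimator, and split/merge handling is omitted from this sketch.

```python
# Hedged sketch of tracked group counting: smooth each group's per-frame size
# estimates over its history, then sum the group sizes for the crowd total.
# Synthetic size estimates; the real local-feature estimator is not shown.
import statistics
from collections import defaultdict

class GroupTracker:
    def __init__(self):
        self.history = defaultdict(list)  # blob id -> per-frame size estimates

    def update(self, frame_blobs):
        """frame_blobs: {blob_id: estimated group size in this frame}"""
        for blob_id, size in frame_blobs.items():
            self.history[blob_id].append(size)

    def crowd_estimate(self):
        # Median over each group's history damps per-frame noise; the total
        # crowd estimate is the sum of the smoothed group sizes.
        return sum(statistics.median(sizes) for sizes in self.history.values())

tracker = GroupTracker()
tracker.update({1: 3, 2: 5})     # two groups detected in frame 1
tracker.update({1: 4, 2: 5})     # frame 2: group 1's estimate fluctuates
tracker.update({1: 3, 2: 6})
print(tracker.crowd_estimate())  # 3 + 5 = 8 with these synthetic estimates
```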
Abstract:
The collective purpose of these two studies was to determine a link between the VO2 slow component and the muscle activation patterns that occur during cycling. Six male subjects performed an incremental cycle ergometer exercise test to determine a sub-TvENT (i.e. 80% of TvENT) and a supra-TvENT (TvENT + 0.75 × (VO2 max − TvENT)) work load. These two constant work loads were subsequently performed on either three or four occasions for 8 min each, with VO2 captured on a breath-by-breath basis for every test, and EMG of eight major leg muscles collected on one occasion. EMG was collected for the first 10 s of every 30 s period, except for the very first 10 s period. The VO2 data were interpolated, time aligned, averaged and smoothed for both intensities. Three models were then fitted to the VO2 data to determine the kinetics responses. One of these models was mono-exponential, while the other two were bi-exponential. A second time delay parameter was the only difference between the two bi-exponential models. An F-test was used to determine significance between the bi-exponential models using the residual sum of squares term for each model. EMG was integrated to obtain one value for each 10 s period, per muscle. The EMG data were analysed by a two-way repeated measures ANOVA. A correlation was also used to determine significance between VO2 and IEMG. The VO2 data during the sub-TvENT intensity were best described by a mono-exponential response. In contrast, during supra-TvENT exercise the two bi-exponential models best described the VO2 data. The resultant F-test revealed no significant difference between the two models and therefore demonstrated that the slow component was not delayed relative to the onset of the primary component. Furthermore, only two parameters were deemed to be significantly different based upon the two models. This is in contrast to other findings.
The EMG data, for most muscles, appeared to follow the same pattern as VO2 during both intensities of exercise. On most occasions, the correlation coefficient demonstrated significance. Although some muscles demonstrated the same relative increase in IEMG based upon increases in intensity and duration, it cannot be assumed that these muscles increase their contribution to VO2 in a similar fashion. Larger muscles with a higher percentage of type II muscle fibres would have a larger increase in VO2 over the same increase in intensity.
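The F-test used to compare nested kinetics models can be computed directly from the residual sums of squares: the reduction in RSS per extra parameter is compared against the full model's residual variance. The RSS values, parameter counts and sample size below are illustrative assumptions, not the study's fitted values.

```python
# Hedged sketch of the nested-model F-test on residual sums of squares (RSS).
# Numbers are illustrative, not the study's fitted VO2 kinetics values.
def f_statistic(rss_simple, p_simple, rss_full, p_full, n):
    """F-test for nested least-squares models fitted to n data points."""
    num = (rss_simple - rss_full) / (p_full - p_simple)  # gain per extra parameter
    den = rss_full / (n - p_full)                        # full-model residual variance
    return num / den

# e.g. mono-exponential (3 parameters) vs bi-exponential (6 parameters)
F = f_statistic(rss_simple=4.8, p_simple=3, rss_full=3.9, p_full=6, n=120)
print(round(F, 2))
```

The resulting F is then compared against the critical value of the F distribution with (p_full − p_simple, n − p_full) degrees of freedom; a non-significant F, as reported above for the two bi-exponential models, means the extra parameter is not statistically justified.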