945 results for seedling vigor and speed of germination index
Abstract:
The aim of this study was to apply the principles of content, criterion, and construct validation to a new questionnaire specifically designed to measure foot-health status. One hundred eleven subjects completed two different questionnaires designed to measure foot health (the new Foot Health Status Questionnaire and the previously validated Foot Function Index) and underwent a clinical examination in order to provide data for a second-order confirmatory factor analysis. Presented herein is a psychometrically evaluated questionnaire that contains 13 items covering foot pain, foot function, footwear, and general foot health. The tool demonstrates a high degree of content, criterion, and construct validity and test-retest reliability.
Abstract:
Compared with viewing videos on PCs or TVs, mobile users have different experiences when viewing videos on a mobile phone due to different device features, such as screen size, and distinct usage contexts. To understand how the mobile user’s viewing experience is affected, we conducted a field user study with 42 participants in two typical usage contexts using a custom-designed iPhone application. With the user’s acceptance of mobile video quality as the index, the study addresses four aspects that influence user experience: context, content type, encoding parameters and user profiles. Accompanying the quantitative method (acceptance assessment), we used a qualitative interview method to obtain a deeper understanding of users’ assessment criteria and to support the quantitative results from a user’s perspective. Based on the results of the data analysis, we advocate two user-driven strategies to adaptively provide an acceptable quality and to predict a good user experience, respectively. There are two main contributions from this paper. Firstly, the field user study allows more influencing factors to be considered in research on the user experience of mobile video, and these influences are further demonstrated by users’ opinions. Secondly, the proposed strategies, user-driven acceptance threshold adaptation and user experience prediction, will be valuable in mobile video delivery for optimizing user experience.
Abstract:
In this study we propose a virtual index for measuring the relative innovativeness of countries. Using a multistage virtual benchmarking process, the best and rational benchmark is extracted for inefficient ISs. Furthermore, Tobit and Ordinary Least Squares (OLS) regression models are used to investigate the likelihood of changes in inefficiencies by investigating country-specific factors. The empirical results relating to the virtual benchmarking process suggest that the OLS regression model would better explain changes in the performance of innovation-inefficient countries.
Abstract:
The objective of this thesis is to investigate the corporate governance attributes of smaller listed Australian firms. This study is motivated by evidence that these firms are associated with more regulatory concerns, the introduction of ASX Corporate Governance Recommendations in 2004, and a paucity of research to guide regulators and stakeholders of smaller firms. While there is an extensive body of literature examining the effectiveness of corporate governance, the literature principally focuses on larger companies, resulting in a deficiency in the understanding of the nature and effectiveness of corporate governance in smaller firms. Based on a review of agency theory literature, a theoretical model is developed that posits that agency costs are mitigated by internal governance mechanisms and transparency. The model includes external governance factors, but in many smaller firms these factors are potentially absent, increasing the reliance on the internal governance mechanisms of the firm. Based on the model, the observed greater regulatory intervention in smaller companies may be due to sub-optimal internal governance practices. Accordingly, this study addresses four broad research questions (RQs). First, what is the extent and nature of the ASX Recommendations that have been adopted by smaller firms (RQ1)? Second, what firm characteristics explain differences in the recommendations adopted by smaller listed firms (RQ2), and third, what firm characteristics explain changes in the governance of smaller firms over time (RQ3)? Fourth, how effective are the corporate governance attributes of smaller firms (RQ4)? Six hypotheses are developed to address the RQs. The first two hypotheses explore the extent and nature of corporate governance, while the remaining hypotheses evaluate its effectiveness. A time-series, cross-sectional approach is used to evaluate the effectiveness of governance. Three models, based on individual governance attributes, an index of six items derived from the literature, and an index based on the full list of ASX Recommendations, are developed and tested using a sample of 298 smaller firms with annual observations over a five-year period (2002-2006) before and after the introduction of the ASX Recommendations in 2004. With respect to RQ1, the results reveal that the overall adoption of the recommendations increased from 66 per cent in 2004 to 74 per cent in 2006. Interestingly, the adoption rate for recommendations regarding the structure of the board and formation of committees is significantly lower than the rates for other categories of recommendations. With respect to RQ2, the results reveal that variations in rates of adoption are explained by key firm differences including firm size, profitability, board size, audit quality, and ownership dispersion, while the results for RQ3 were inconclusive. With respect to RQ4, the results provide support for the association between better governance and superior accounting-based performance. In particular, the results highlight the importance of the independence of both the board and audit committee chairs, and of greater accounting-based expertise on the audit committee. In contrast, while there is little evidence that a majority independent board is associated with superior outcomes, there is evidence linking board independence with adverse audit opinion outcomes.
These results suggest that board and chair independence are substitutes; in the presence of an independent chair, a majority independent board may be an unnecessary and costly investment for smaller firms. The findings make several important contributions. First, the findings contribute to the literature by providing evidence on the extent, nature and effectiveness of governance in smaller firms. The findings also contribute to the policy debate regarding future development of Australia’s corporate governance code. The findings regarding board and chair independence, and audit committee characteristics, suggest that policy-makers could consider providing additional guidance for smaller companies. In general, the findings offer support for the “if not, why not?” approach of the ASX, rather than a prescriptive rules-based approach.
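As a rough illustration of how a compliance-style governance index of the kind described above can be constructed, the sketch below scores a firm-year as the proportion of recommendations marked as adopted; the recommendation labels and disclosure flags are hypothetical examples, not the thesis’s actual coding scheme.

    # Illustrative construction of a simple corporate-governance index:
    # the index is the proportion of recommendations a firm reports adopting.
    # Recommendation labels and data below are hypothetical examples only.

    def governance_index(adopted: dict[str, bool]) -> float:
        """Return the fraction of listed recommendations marked as adopted."""
        if not adopted:
            return 0.0
        return sum(adopted.values()) / len(adopted)

    firm_year_disclosures = {
        "independent_chair": True,
        "majority_independent_board": False,
        "audit_committee_exists": True,
        "audit_committee_independent_chair": True,
        "remuneration_committee_exists": False,
        "code_of_conduct_disclosed": True,
    }

    print(f"Governance index: {governance_index(firm_year_disclosures):.2f}")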
Abstract:
The aim of the research program was to evaluate the heat strain, hydration status, and heat illness symptoms experienced by surface mine workers. An initial investigation involved 91 surface miners completing a heat stress questionnaire assessing the work environment, hydration practices, and heat illness symptom experience. The key findings included 1) more than 80 % of workers experienced at least one symptom of heat illness over a 12 month period; and 2) the risk of moderate symptoms of heat illness increased with the severity of dehydration. These findings highlight a health and safety concern for surface miners, as experiencing symptoms of heat illness is an indication that the physiological systems of the body may be struggling to meet the demands of thermoregulation. To build on these findings, a field investigation to monitor the heat strain and hydration status of surface miners was proposed. Two preliminary studies were conducted to ensure accurate and reliable data collection techniques. Firstly, a study was undertaken to determine a calibration procedure to ensure the accuracy of core body temperature measurement via an ingestible sensor. A water bath was heated to several temperatures between 23 and 51 °C, allowing for comparison of the temperature recorded by the sensors and a traceable thermometer. A positive systematic bias was observed and indicated a need for calibration. It was concluded that a linear regression should be developed for each sensor prior to ingestion, allowing for a correction to be applied to the raw data. Secondly, hydration status was to be assessed through urine specific gravity measurement. It was foreseeable that practical limitations on mine sites would delay the time between urine collection and analysis. A study was undertaken to assess the reliability of urine analysis over time. Measurement of urine specific gravity was found to be reliable up to 24 hours post urine collection and was suitable to be used in the field study. Twenty-nine surface miners (14 drillers [winter] and 15 blast crew [summer]) were monitored during a normal work shift. Core body temperature was recorded continuously. Average mean core body temperature was 37.5 and 37.4 °C for blast crew and drillers, with average maximum body temperatures of 38.0 and 37.9 °C respectively. The highest body temperature recorded was 38.4 °C. Urine samples were collected at each void for specific gravity measurement. The average mean urine specific gravity was 1.024 and 1.021 for blast crew and drillers respectively. The Heat Illness Symptoms Index was used to evaluate the experience of heat illness symptoms on shift. Over 70 % of drillers and over 80 % of blast crew reported at least one symptom. It was concluded that 1) heat strain remained within the recommended limits for acclimatised workers; and 2) the majority of workers were dehydrated before commencing their shift, and tended to remain dehydrated for the duration. Dehydration was identified as the primary issue for surface miners working in the heat. Therefore, continued study focused on investigating a novel approach to monitoring hydration status. The final aim of this research program was to investigate the influence dehydration has on intraocular pressure (IOP), and subsequently, whether IOP could provide a novel indicator of hydration status. Seven males completed 90 minutes of walking in both a cool and hot climate with fluid restriction.
Hydration variables and intraocular pressure were measured at baseline and at 30 minute intervals. Participants became dehydrated during the trial in the heat but maintained hydration status in the cool. Intraocular pressure progressively declined in the trial in the heat but remained relatively stable when hydration was maintained. A significant relationship was observed between intraocular pressure and both body mass loss and plasma osmolality. This evidence suggests that intraocular pressure is influenced by changes in hydration status. Further research is required to determine if intraocular pressure could be utilised as an indirect indicator of hydration status.
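As an illustration of the per-sensor calibration procedure described above, the sketch below fits a linear regression between sensor readings and a traceable reference obtained from water-bath comparison points, then applies the resulting correction; the readings are made-up values, not data from the study.

    # Per-sensor linear calibration sketch: fit reference = a * sensor + b from
    # water-bath comparison points, then apply the correction to field data.
    # The example readings are illustrative only.
    import numpy as np

    sensor_readings = np.array([23.1, 30.4, 37.6, 44.2, 51.3])   # ingestible sensor (deg C)
    reference_temps = np.array([23.0, 30.2, 37.3, 43.8, 50.8])   # traceable thermometer (deg C)

    # Least-squares fit of a first-order (linear) correction for this sensor.
    slope, intercept = np.polyfit(sensor_readings, reference_temps, 1)

    def correct(raw_temp: float) -> float:
        """Apply the sensor-specific linear correction to a raw core-temperature value."""
        return slope * raw_temp + intercept

    print(f"Corrected 37.9 C reading: {correct(37.9):.2f} C")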
Abstract:
This ALTC Teaching Fellowship aimed to establish Guiding Principles for Library and Information Science Education 2.0. The aim was achieved by (i) identifying the current and anticipated skills and knowledge required by successful library and information science (LIS) professionals in the age of web 2.0 (and beyond), and (ii) establishing the current state of LIS education in Australia in supporting the development of librarian 2.0 and, in doing so, identifying models of best practice.
The fellowship has contributed to curriculum renewal in the LIS profession. It has helped to ensure that LIS education in Australia continues to meet the changing skills and knowledge requirements of the profession it supports. It has also provided a vehicle through which LIS professionals and LIS educators may find opportunities for greater collaboration and more open communication. This will help bridge the gap between LIS theory and practice and will foster more authentic engagement between LIS education and other parts of the LIS industry in the education of the next generation of professionals. Through this fellowship the LIS discipline has become a role model for other disciplines that will face similar issues in the coming years.
Eighty-one members of the Australian LIS profession participated in a series of focus groups exploring the current and anticipated skills and knowledge needed by the LIS professional in the web 2.0 world and beyond. Whilst each focus group tended to draw on specific themes of interest to that particular group of people, there was a great deal of common ground. Eight key themes emerged: technology, learning and education, research or evidence-based practice, communication, collaboration and team work, user focus, business savvy and personal traits.
It was acknowledged that the need for successful LIS professionals to possess transferable skills and interpersonal attributes was not new. It was noted however that the speed with which things are changing in the web 2.0 world was having a significant impact and that this faster pace is placing a new and unexpected emphasis on the transferable skills and knowledge. It was also acknowledged that all librarians need to possess these skills, knowledge and attributes and not just the one or two role models who lead the way.
The most interesting finding, however, was that web 2.0, library 2.0 and librarian 2.0 represented a ‘watershed’ for the LIS profession. Almost all the focus groups spoke about how they are seeing and experiencing a culture change in the profession. Librarian 2.0 requires a ‘different mindset or attitude’. The Levels of Perspective model by Daniel Kim provides one lens by which to view this finding. The focus group findings suggest that we are witnessing a re-awakening of the Australian LIS profession as it begins to move towards the higher levels of Kim’s model (i.e. mental models, vision).
Thirty-six LIS educators participated in telephone interviews aimed at exploring the current state of LIS education in supporting the development of librarian 2.0. The skills and knowledge of LIS professionals in a web 2.0 world that were identified and discussed by the LIS educators mirrored those highlighted in the focus group discussions with LIS professionals. Similarly, it was noted that librarian 2.0 needed a focus less on skills and knowledge and more on attitude. However, whilst LIS professionals felt that there was a paradigm shift within the profession, LIS educators did not speak with one voice on this matter, with quite a number of the educators suggesting that this might be ‘overstating it a bit’. This study provides evidence for “disparate viewpoints” (Hallam, 2007) between LIS educators and LIS professionals that can have significant implications for the future, not just of LIS professional education specifically but of the profession generally.
Inviting the LIS academics to discuss how their teaching and learning activities support the development of librarian 2.0 was a core part of the interviews conducted. The strategies used and the challenges faced by LIS educators in developing their teaching and learning approaches to support the formation of librarian 2.0 are identified and discussed. A core part of the fellowship was the identification of best practice examples on how LIS educators were developing librarian 2.0. Twelve best practice examples were identified. Each educator was recorded discussing his or her approach to teaching and learning. Videos of these interviews are available via the Fellowship blog at
Abstract:
The calibration process in micro-simulation is an extremely complicated exercise. The difficulties are more prevalent if the process encompasses fitting aggregate and disaggregate parameters, e.g. travel time and headway. The current practice in calibration is more at the aggregate level, for example travel time comparison. Such practices are popular for assessing network performance. Though these applications are significant, there is another stream of micro-simulated calibration, at the disaggregate level. This study will focus on such a micro-calibration exercise, which is key to better comprehending motorway traffic risk levels and the management of variable speed limit (VSL) and ramp metering (RM) techniques. A selected section of the Pacific Motorway in Brisbane will be used as a case study. The discussion will primarily incorporate the critical issues encountered during the parameter adjustment exercise (e.g. vehicular and driving behaviour parameters) with reference to key traffic performance indicators such as speed, lane distribution and headway at specific motorway points. The endeavour is to highlight the utility and implications of such disaggregate-level simulation for improved traffic prediction studies. The aspects of calibrating for points, in comparison to calibrating for the whole network, will also be briefly addressed to examine critical issues such as the suitability of local calibration at a global scale. The paper will be of interest to transport professionals in Australia/New Zealand, where micro-simulation, in particular at point level, is still comparatively a less explored territory in motorway management.
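As a small illustration of the kind of point-level comparison implied above, the sketch below computes mean speed, mean headway and lane distribution at a single detector from simulated and observed vehicle records; the data structures and numbers are assumed for illustration only.

    # Point-level calibration check sketch: compare simulated vs observed speed,
    # headway and lane distribution at a single motorway detector.
    # Data structures and values are illustrative assumptions.
    import numpy as np

    def point_indicators(speeds_kmh, headways_s, lanes, n_lanes=3):
        share = np.bincount(lanes, minlength=n_lanes) / len(lanes)
        return {
            "mean_speed_kmh": float(np.mean(speeds_kmh)),
            "mean_headway_s": float(np.mean(headways_s)),
            "lane_share": share.round(2).tolist(),
        }

    observed = point_indicators([92, 88, 95, 90], [2.1, 1.8, 2.4, 2.0], [0, 1, 1, 2])
    simulated = point_indicators([97, 93, 99, 96], [1.9, 1.6, 2.2, 1.8], [0, 0, 1, 2])

    for key in observed:
        print(key, "obs:", observed[key], "sim:", simulated[key])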
Abstract:
Vehicle emitted particles are of significant concern based on their potential to influence local air quality and human health. Transport microenvironments usually contain higher vehicle emission concentrations compared to other environments, and people spend a substantial amount of time in these microenvironments when commuting. Currently there is limited scientific knowledge on particle concentration, passenger exposure and the distribution of vehicle emissions in transport microenvironments, partially due to the fact that the instrumentation required to conduct such measurements is not available in many research centres. Information on passenger waiting time and location in such microenvironments has also not been investigated, which makes it difficult to evaluate a passenger’s spatial-temporal exposure to vehicle emissions. Furthermore, current emission models are incapable of rapidly predicting emission distribution, given the complexity of variations in emission rates that result from changes in driving conditions, as well as the time spent in driving condition within the transport microenvironment. In order to address these scientific gaps in knowledge, this work conducted, for the first time, a comprehensive statistical analysis of experimental data, along with multi-parameter assessment, exposure evaluation and comparison, and emission model development and application, in relation to traffic interrupted transport microenvironments. The work aimed to quantify and characterise particle emissions and human exposure in the transport microenvironments, with bus stations and a pedestrian crossing identified as suitable research locations representing a typical transport microenvironment. Firstly, two bus stations in Brisbane, Australia, with different designs, were selected to conduct measurements of particle number size distributions, particle number and PM2.5 concentrations during two different seasons. Simultaneous traffic and meteorological parameters were also monitored, aiming to quantify particle characteristics and investigate the impact of bus flow rate, station design and meteorological conditions on particle characteristics at stations. The results showed higher concentrations of PN20-30 at the station situated in an open area (open station), which is likely to be attributed to the lower average daily temperature compared to the station with a canyon structure (canyon station). During precipitation events, it was found that particle number concentration in the size range 25-250 nm decreased greatly, and that the average daily reduction in PM2.5 concentration on rainy days compared to fine days was 44.2 % and 22.6 % at the open and canyon station, respectively. The effect of ambient wind speeds on particle number concentrations was also examined, and no relationship was found between particle number concentration and wind speed for the entire measurement period. In addition, 33 pairs of average half-hourly PN7-3000 concentrations were calculated and identified at the two stations, during the same time of a day, and with the same ambient wind speeds and precipitation conditions. The results of a paired t-test showed that the average half-hourly PN7-3000 concentrations at the two stations were not significantly different at the 5% confidence level (t = 0.06, p = 0.96), which indicates that the different station designs were not a crucial factor for influencing PN7-3000 concentrations. 
A further assessment of passenger exposure to bus emissions on a platform was evaluated at another bus station in Brisbane, Australia. The sampling was conducted over seven weekdays to investigate spatial-temporal variations in size-fractionated particle number and PM2.5 concentrations, as well as human exposure on the platform. For the whole day, the average PN13-800 concentration was 1.3 x 10^4 and 1.0 x 10^4 particles/cm^3 at the centre and end of the platform, respectively, of which PN50-100 accounted for the largest proportion of the total count. Furthermore, the contribution of exposure at the bus station to the overall daily exposure was assessed using two assumed scenarios of a school student and an office worker. It was found that, although the daily time fraction (the percentage of time spent at a location in a whole day) at the station was only 0.8 %, the daily exposure fractions (the percentage of exposure at a location accounting for the daily exposure) at the station were 2.7% and 2.8 % for exposure to PN13-800 and 2.7% and 3.5% for exposure to PM2.5 for the school student and the office worker, respectively. A new parameter, “exposure intensity” (the ratio of the daily exposure fraction to the daily time fraction), was also defined and calculated at the station, with values of 3.3 and 3.4 for exposure to PN13-800, and 3.3 and 4.2 for exposure to PM2.5, for the school student and the office worker, respectively. In order to quantify the enhanced emissions at critical locations and define the emission distribution in further dispersion models for traffic interrupted transport microenvironments, a composite line source emission (CLSE) model was developed to specifically quantify exposure levels and describe the spatial variability of vehicle emissions in traffic interrupted microenvironments. This model took into account the complexity of vehicle movements in the queue, as well as different emission rates relevant to various driving conditions (cruise, decelerate, idle and accelerate), and it utilised multi-representative segments to capture the accurate emission distribution for real vehicle flow. The model not only helped to quantify the enhanced emissions at critical locations, but also helped to define the emission source distribution of the disrupted steady flow for further dispersion modelling. The model was then applied to estimate particle number emissions at a bidirectional bus station used by diesel and compressed natural gas fuelled buses. It was found that the acceleration distance was of critical importance when estimating particle number emission, since the highest emissions occurred in sections where most of the buses were accelerating, and no significant increases were observed at locations where they idled. It was also shown that emissions at the front end of the platform were 43 times greater than at the rear of the platform. The CLSE model was also applied at a signalled pedestrian crossing, in order to assess increased particle number emissions from motor vehicles when forced to stop and accelerate from rest. The CLSE model was used to calculate the total emissions produced by a specific number and mix of light petrol cars and diesel passenger buses, including 1 car travelling in 1 direction (1 car / 1 direction), 14 cars / 1 direction, 1 bus / 1 direction, 28 cars / 2 directions, 24 cars and 2 buses / 2 directions, and 20 cars and 4 buses / 2 directions.
It was found that the total emissions produced during stopping on a red signal were significantly higher than when the traffic moved at a steady speed. Overall, total emissions due to the interruption of the traffic increased by a factor of 13, 11, 45, 11, 41, and 43 for the above 6 cases, respectively. In summary, this PhD thesis presents the results of a comprehensive study on particle number and mass concentration, together with particle size distribution, in a bus station transport microenvironment, influenced by bus flow rates, meteorological conditions and station design. Passenger spatial-temporal exposure to bus emitted particles was also assessed according to waiting time and location along the platform, as well as the contribution of exposure at the bus station to overall daily exposure. Due to the complexity of the interrupted traffic flow within the transport microenvironments, a unique CLSE model was also developed, which is capable of quantifying emission levels at critical locations within the transport microenvironment, for the purpose of evaluating passenger exposure and conducting simulations of vehicle emission dispersion. The application of the CLSE model at a pedestrian crossing also proved its applicability and simplicity for use in a real-world transport microenvironment.
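The CLSE model itself is not reproduced here; the sketch below only illustrates the general idea it is built on, summing segment emissions from mode-specific emission factors and the time vehicles spend in each driving mode (accelerate, decelerate, idle, cruise). All rates, counts and times are assumed numbers, not values from the study.

    # Illustrative segment-based emission total in the spirit of a composite
    # line source approach: each platform/road segment is assigned a dominant
    # driving mode, a particle number emission factor for that mode, and the
    # time vehicles spend there. All numbers are hypothetical.

    MODE_EMISSION_FACTOR = {      # particles per second per vehicle (assumed)
        "accelerate": 5.0e12,
        "decelerate": 1.5e12,
        "idle":       0.8e12,
        "cruise":     1.0e12,
    }

    segments = [
        {"name": "approach",  "mode": "decelerate", "vehicles": 20, "seconds": 6},
        {"name": "stop line", "mode": "idle",       "vehicles": 20, "seconds": 30},
        {"name": "departure", "mode": "accelerate", "vehicles": 20, "seconds": 8},
    ]

    def segment_emissions(seg):
        return MODE_EMISSION_FACTOR[seg["mode"]] * seg["vehicles"] * seg["seconds"]

    total = sum(segment_emissions(s) for s in segments)
    for s in segments:
        print(f'{s["name"]:>10}: {segment_emissions(s):.2e} particles')
    print(f"     total: {total:.2e} particles")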
Abstract:
Atopic dermatitis (AD) is a chronic inflammatory skin condition, characterized by intense pruritus, with a complex aetiology comprising multiple genetic and environmental factors. It is a common chronic health problem among children, and along with other allergic conditions, is increasing in prevalence within Australia and in many countries worldwide. Successful management of childhood AD poses a significant and ongoing challenge to parents of affected children. Episodic and unpredictable, AD can have profound effects on children’s physical and psychosocial wellbeing and quality of life, and that of their caregivers and families. Where concurrent child behavioural problems and parenting difficulties exist, parents may have particular difficulty achieving adequate and consistent performance of the routine management tasks that promote the child’s health and wellbeing. Despite frequent reports of behaviour problems in children with AD, past research has neglected the importance of child behaviour to parenting confidence and competence with treatment. Parents of children with AD are also at risk of experiencing depression, anxiety, parenting stress, and parenting difficulties. Although these factors have been associated with difficulty in managing other childhood chronic health conditions, the nature of these relationships in the context of child AD management has not been reported. This study therefore examined relationships between child, parent, and family variables, and parents’ management of child AD and difficult child behaviour, using social cognitive and self-efficacy theory as a guiding framework. The study was conducted in three phases. It employed a quantitative, cross-sectional study design, accessing a community sample of 120 parents of children with AD, and a sample of 64 child-parent dyads recruited from a metropolitan paediatric tertiary referral centre. In Phase One, instruments designed to measure parents’ self-reported performance of AD management tasks (Parents’ Eczema Management Scale – PEMS) and parents’ outcome expectations of task performance (Parents’ Outcome Expectations of Eczema Management Scale – POEEMS) were adapted from the Parental Self-Efficacy with Eczema Care Index (PASECI). In Phase Two, these instruments were used to examine relationships between child, parent, and family variables, and parents’ self-efficacy, outcome expectations, and self-reported performance of AD management tasks. Relationships between child, parent, and family variables, parents’ self-efficacy for managing problem behaviours, and reported parenting practices, were also examined. This latter focus was explored further in Phase Three, in which relationships between observed child and parent behaviour, and parent-reported self-efficacy for managing both child AD and problem behaviours, were investigated. Phase One demonstrated the reliability of both PEMS and POEEMS, and confirmed that PASECI was reliable and valid with modification as detailed. Factor analyses revealed two-factor structures for PEMS and PASECI alike, with both scales containing factors related to performing routine management tasks, and managing the child’s symptoms and behaviour. Factor analysis was also applied to POEEMS, resulting in a three-factor structure. Factors relating to independent management of AD by the parent, involving healthcare professionals in management, and involving the child in management of AD were found.
Parents’ self-efficacy and outcome expectations had a significant influence on self-reported task performance. In Phase Two, relationships emerged between parents’ self-efficacy and self-reported performance of AD management tasks, and AD severity, child behaviour difficulties, parent depression and stress, conflict over parenting issues, and parents’ relationship satisfaction. Using multiple linear regressions, significant proportions of variation in parents’ self-efficacy and self-reported performance of AD management tasks were explained by child behaviour difficulties and parents’ formal education, and self-efficacy emerged as a likely mediator for the relationships between both child behaviour and parents’ education, and performance of AD management tasks. Relationships were also found between parents’ self-efficacy for managing difficult child behaviour and use of dysfunctional parenting strategies, and child behaviour difficulties, parents’ depression and stress, conflict over parenting issues, and relationship satisfaction. While significant proportions of variation in self-efficacy for managing child behaviour were explained by both child behaviour and family income, family income was the only variable to explain a significant proportion of variation in parent-reported use of dysfunctional parenting strategies. Greater use of dysfunctional parenting strategies (both lax and authoritarian parenting) was associated with more severe AD. Parents reporting lower self-efficacy for managing AD also reported lower self-efficacy for managing difficult child behaviour; likewise, less successful self-reported performance of AD management tasks was associated with greater use of dysfunctional parenting strategies. When child and parent behaviour was directly observed in Phase Three, more aversive child behaviour was associated with lower self-efficacy, less positive outcome expectations, and poorer self-reported performance of AD management tasks by parents. Importantly, there were strong positive relationships between these variables (self-efficacy, outcome expectations, and self-reported task performance) and parents’ observed competence when providing treatment to their child. Less competent performance was also associated with greater parent-reported child behaviour difficulties, parent depression and stress, parenting conflict, and relationship dissatisfaction. Overall, this study revealed the importance of child behaviour to parents’ confidence and practices in the contexts of child AD and child behaviour management. Parents of children with concurrent AD and behavioural problems are at particular risk of having low self-efficacy for managing their child’s AD and difficult behaviour. Children with more severe AD are also at higher risk of behaviour problems, and thus represent a high-risk group of children whose parents may struggle to manage the disease successfully. As one of the first studies to examine the role and correlates of parents’ self-efficacy in child AD management, this study identified a number of potentially modifiable factors that can be targeted to enhance parents’ self-efficacy, and improve parent management of child AD. In particular, interventions should focus on child behaviour and parenting issues to support parents caring for children with AD and improve child health outcomes. 
In future, findings from this research will assist healthcare teams to identify parents most in need of support and intervention, and inform the development and testing of targeted multidisciplinary strategies to support parents caring for children with AD.
Abstract:
Failing injectors are one of the most common faults in diesel engines. The severity of these faults could have serious effects on diesel engine operations such as engine misfire, knocking, insufficient power output or even cause a complete engine breakdown. It is thus essential to prevent such faults from occurring by monitoring the condition of these injectors. In this paper, the authors present the results of an experimental investigation on identifying the signal characteristics of a simulated incipient injector fault in a diesel engine using both in-cylinder pressure and acoustic emission (AE) techniques. A time waveform event driven synchronous averaging technique was used to minimize or eliminate the effect of engine speed variation and amplitude fluctuation. It was found that AE is an effective method to detect the simulated injector fault in both time (crank angle) and frequency (order) domains. It was also shown that the time domain in-cylinder pressure signal is a poor indicator for condition monitoring and diagnosis of the simulated injector fault due to the small effect of the simulated fault on the engine combustion process. Nevertheless, good correlations between the simulated injector fault and the lower order components of the enveloped in-cylinder pressure spectrum were found at various engine loading conditions.
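As an illustration of event-driven synchronous averaging of the kind described above, the sketch below resamples each cycle, delimited by a reference event such as a TDC pulse, onto a common normalised crank-angle grid before averaging, which suppresses the effect of speed variation; the signal and event times are synthetic placeholders, not engine data.

    # Event-driven synchronous averaging sketch: resample each cycle between
    # consecutive reference events onto a fixed crank-angle grid, then average.
    # The signal and event times below are synthetic placeholders.
    import numpy as np

    def synchronous_average(signal, sample_times, event_times, n_points=360):
        """Average the signal over cycles delimited by event_times on a common grid."""
        crank_grid = np.linspace(0.0, 1.0, n_points, endpoint=False)  # normalised cycle position
        cycles = []
        for start, end in zip(event_times[:-1], event_times[1:]):
            mask = (sample_times >= start) & (sample_times < end)
            # Normalised position of each retained sample within its cycle.
            pos = (sample_times[mask] - start) / (end - start)
            cycles.append(np.interp(crank_grid, pos, signal[mask]))
        return crank_grid, np.mean(cycles, axis=0)

    t = np.linspace(0, 1.0, 20000)                     # 1 s of synthetic data
    events = np.array([0.0, 0.24, 0.49, 0.75, 1.0])    # slightly uneven cycle lengths
    x = np.sin(2 * np.pi * 4 * t) + 0.1 * np.random.randn(t.size)
    grid, averaged = synchronous_average(x, t, events)
    print(averaged.shape)                              # (360,)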
Abstract:
Complex networks have been studied extensively due to their relevance to many real-world systems such as the world-wide web, the internet, biological and social systems. During the past two decades, studies of such networks in different fields have produced many significant results concerning their structures, topological properties, and dynamics. Three well-known properties of complex networks are scale-free degree distribution, small-world effect and self-similarity. The search for additional meaningful properties and the relationships among these properties is an active area of current research. This thesis investigates a newer aspect of complex networks, namely their multifractality, which is an extension of the concept of self-similarity. The first part of the thesis aims to confirm that the study of properties of complex networks can be expanded to a wider field including more complex weighted networks. Those real networks that have been shown to possess the self-similarity property in the existing literature are all unweighted networks. We use the protein-protein interaction (PPI) networks as a key example to show that their weighted networks inherit the self-similarity from the original unweighted networks. Firstly, we confirm that the random sequential box-covering algorithm is an effective tool to compute the fractal dimension of complex networks. This is demonstrated on the Homo sapiens and E. coli PPI networks as well as their skeletons. Our results verify that the fractal dimension of the skeleton is smaller than that of the original network because the shortest distance between nodes is larger in the skeleton; hence, for a fixed box size, more boxes will be needed to cover the skeleton. Then we adopt the iterative scoring method to generate weighted PPI networks of five species, namely Homo sapiens, E. coli, yeast, C. elegans and Arabidopsis thaliana. By using the random sequential box-covering algorithm, we calculate the fractal dimensions for both the original unweighted PPI networks and the generated weighted networks. The results show that self-similarity is still present in the generated weighted PPI networks. This implication will be useful for our treatment of the networks in the third part of the thesis. The second part of the thesis aims to explore the multifractal behaviour of different complex networks. Fractals such as the Cantor set, the Koch curve and the Sierpinski gasket are homogeneous since these fractals consist of a geometrical figure which repeats on an ever-reduced scale. Fractal analysis is a useful method for their study. However, real-world fractals are not homogeneous; there is rarely an identical motif repeated on all scales. Their singularity may vary on different subsets, implying that these objects are multifractal. Multifractal analysis is a useful way to systematically characterize the spatial heterogeneity of both theoretical and experimental fractal patterns. However, the tools for multifractal analysis of objects in Euclidean space are not suitable for complex networks. In this thesis, we propose a new box-covering algorithm for multifractal analysis of complex networks. This algorithm is demonstrated in the computation of the generalized fractal dimensions of some theoretical networks, namely scale-free networks, small-world networks and random networks, and a kind of real network, namely the PPI networks of different species.
Our main finding is the existence of multifractality in scale-free networks and PPI networks, while the multifractal behaviour is not confirmed for small-world networks and random networks. As another application, we generate gene interaction networks for patients and healthy people using the correlation coefficients between microarrays of different genes. Our results confirm the existence of multifractality in gene interaction networks. This multifractal analysis then provides a potentially useful tool for gene clustering and identification. The third part of the thesis aims to investigate the topological properties of networks constructed from time series. Characterizing complicated dynamics from time series is a fundamental problem of continuing interest in a wide variety of fields. Recent works indicate that complex network theory can be a powerful tool to analyse time series. Many existing methods for transforming time series into complex networks share a common feature: they define the connectivity of a complex network by the mutual proximity of different parts (e.g., individual states, state vectors, or cycles) of a single trajectory. In this thesis, we propose a new method to construct networks from time series: we define nodes by vectors of a certain length in the time series, and weight the edges between any two nodes by the Euclidean distance between the corresponding two vectors. We apply this method to build networks for fractional Brownian motions, whose long-range dependence is characterised by their Hurst exponent. We verify the validity of this method by showing that time series with stronger correlation, hence larger Hurst exponent, tend to have smaller fractal dimension, hence smoother sample paths. We then construct networks via the technique of the horizontal visibility graph (HVG), which has been widely used recently. We confirm a known linear relationship between the Hurst exponent of fractional Brownian motion and the fractal dimension of the corresponding HVG network. In the first application, we apply our newly developed box-covering algorithm to calculate the generalized fractal dimensions of the HVG networks of fractional Brownian motions as well as those for binomial cascades and five bacterial genomes. The results confirm the monoscaling of fractional Brownian motion and the multifractality of the rest. As an additional application, we discuss the resilience of networks constructed from time series via two different approaches: the visibility graph (VG) and the horizontal visibility graph. Our finding is that the degree distribution of VG networks of fractional Brownian motions is scale-free (i.e., having a power law), meaning that one needs to destroy a large percentage of nodes before the network collapses into isolated parts; while for HVG networks of fractional Brownian motions, the degree distribution has exponential tails, implying that HVG networks would not survive the same kind of attack.
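As an illustration of the horizontal visibility graph construction referred to above, two time points i < j are linked when every intermediate value lies strictly below both endpoints. The sketch below is a plain O(n^2) implementation applied to a random walk, not the code used in the thesis.

    # Horizontal visibility graph (HVG) sketch: nodes are time indices, and
    # i, j are linked iff x[k] < min(x[i], x[j]) for every k strictly between them.
    # Plain O(n^2) construction for clarity; not optimised.
    import numpy as np

    def horizontal_visibility_graph(x):
        n = len(x)
        edges = []
        for i in range(n - 1):
            for j in range(i + 1, n):
                if all(x[k] < min(x[i], x[j]) for k in range(i + 1, j)):
                    edges.append((i, j))
        return edges

    # Degree distribution of the HVG of a simple random walk.
    series = np.cumsum(np.random.randn(200))
    edges = horizontal_visibility_graph(series)
    degree = np.zeros(len(series), dtype=int)
    for i, j in edges:
        degree[i] += 1
        degree[j] += 1
    print("mean degree:", degree.mean())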
Abstract:
The compressed gas industry and government agencies worldwide utilize "adiabatic compression" testing for qualifying high-pressure valves, regulators, and other related flow control equipment for gaseous oxygen service. This test methodology is known by various terms, the most common being adiabatic compression testing, gaseous fluid impact testing, pneumatic impact testing, and BAM testing. The test methodology will be described in greater detail throughout this document, but in summary it consists of pressurizing a test article (valve, regulator, etc.) with gaseous oxygen within 15 to 20 milliseconds (ms). Because the driven gas and the driving gas are rapidly compressed to the final test pressure at the inlet of the test article, they are rapidly heated by the sudden increase in pressure to temperatures (thermal energies) sufficient to sometimes result in ignition of the nonmetallic materials (seals and seats) used within the test article. In general, the more rapid the compression process, the more "adiabatic" the pressure surge is presumed to be and the more like an isentropic process the pressure surge has been argued to simulate. Generally speaking, adiabatic compression is widely considered the most efficient ignition mechanism for directly kindling a nonmetallic material in gaseous oxygen and has been implicated in many fire investigations. Because of the ease of ignition of many nonmetallic materials by this heating mechanism, many industry standards prescribe this testing. However, the results between various laboratories conducting the testing have not always been consistent. Research into the test method indicated that the thermal profile achieved (i.e., the temperature/time history of the gas) during adiabatic compression testing as required by the prevailing industry standards has not been fully modeled or empirically verified, although attempts have been made. This research evaluated the following questions: 1) Can the rapid compression process required by the industry standards be thermodynamically and fluid dynamically modeled so that predictions of the thermal profiles can be made? 2) Can the thermal profiles produced by the rapid compression process be measured in order to validate the thermodynamic and fluid dynamic models and to estimate the severity of the test? 3) Can controlling parameters be recommended so that new guidelines may be established for the industry standards to resolve inconsistencies between various test laboratories conducting tests according to the present standards?
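To make the heating mechanism concrete, the ideal-gas isentropic relation T2 = T1 (P2/P1)^((gamma-1)/gamma) gives a textbook upper-bound estimate of the gas temperature after a rapid pressure surge. The sketch below evaluates it for oxygen (gamma of roughly 1.4) and an assumed pressure ratio; it is an idealisation for orientation only, not one of the models developed in this research.

    # Ideal isentropic compression temperature estimate for oxygen.
    # T2 = T1 * (P2 / P1) ** ((gamma - 1) / gamma); values below are assumptions.
    GAMMA_O2 = 1.4           # ratio of specific heats for diatomic oxygen (approx.)

    def isentropic_temperature(t1_kelvin: float, p1_bar: float, p2_bar: float,
                               gamma: float = GAMMA_O2) -> float:
        """Final gas temperature for an ideal, reversible adiabatic pressure rise."""
        return t1_kelvin * (p2_bar / p1_bar) ** ((gamma - 1.0) / gamma)

    t2 = isentropic_temperature(t1_kelvin=293.0, p1_bar=1.0, p2_bar=250.0)
    print(f"Idealised final temperature: {t2:.0f} K ({t2 - 273.15:.0f} deg C)")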
Abstract:
This paper reports on a mathematics project conducted with six Torres Strait Islander schools and communities by the research team at the YuMi Deadly Centre at QUT. The data collected are from a small focus group of six teachers and two teacher aides. We investigated how measurement is taught and learned by students, their teachers and teacher aides in the community schools. A key focus of the project was that the teaching and learning of measurement be contextualised to the students’ culture, community and home languages. A significant finding from the project was that the teachers had differing levels of knowledge and understanding about how to contextualise measurement to support student learning. For example, an Indigenous teacher identified that mathematics and the environment are relational, that is, they are not discrete and in isolation from one another; rather, they mesh together, thus affording the articulation and interchange among and between mathematics and Torres Strait Islander culture.
Abstract:
The demand for high-speed data services for portable devices has become a driving force for the development of advanced broadband access technologies. Despite recent advances in broadband wireless technologies, there remain a number of critical issues to be resolved. One of the major concerns is the implementation of compact antennas that can operate over a wide frequency band. The spiral antenna has been used extensively for broadband applications due to its planar structure, wide bandwidth characteristics and circular polarisation. However, the practical implementation of spiral antennas is challenged by their high input characteristic impedance, relatively low gain and the need for balanced feeding structures. Further development of wideband balanced feeding structures for spiral antennas with impedance matching capabilities remains a need. This thesis proposes three wideband feeding systems for spiral antennas which are compatible with wideband array antenna geometries. First, a novel tapered geometry is proposed for a symmetric coplanar waveguide (CPW) to coplanar strip line (CPS) wideband balun. This balun can achieve the unbalanced-to-balanced transformation while matching the high input impedance of the antenna to a reference impedance of 50 Ω. The discontinuity between the CPW and CPS is accommodated by using a radial stub and bond wires. The bandwidth of the balun is improved by appropriately tapering the CPW line instead of using a stepped impedance transformer. Next, the tapered design is applied to an asymmetric CPW to propose a novel asymmetric CPW to CPS wideband balun. The use of asymmetric CPW does away with the discontinuities between the CPW and CPS without having to use a radial stub or bond wires. Finally, a tapered microstrip line to parallel striplines balun is proposed. The balun consists of two sections. One section is the parallel striplines which are connected to the antenna, with the impedance of the balanced line equal to the antenna input impedance. The other section consists of a microstrip line where the width of the ground plane is gradually reduced to eventually resemble a parallel stripline. The taper accomplishes the mode and impedance transformation. This balun has significantly improved bandwidth characteristics. Characteristics of the proposed feeding structures are measured in a back-to-back configuration and compared to simulated results. The simulated and measured results show the tapered microstrip to parallel striplines balun to have more than three octaves of bandwidth. The tapered microstrip line to parallel striplines balun is integrated with a single Archimedean spiral antenna and with an array of spiral antennas. The performance of the integrated structures is simulated with the aid of electromagnetic simulation software, and the results are compared to measurements. The back-to-back microstrip to parallel strip balun has a return loss of better than 10 dB over a wide bandwidth from 1.75 to 15 GHz. The performance of the microstrip to parallel strip balun was validated with the spiral antennas. The results show the balun to be an effective means of feeding balanced spiral antennas, with a low profile and wide bandwidth (2.5 to 15 GHz).
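As a small illustration of how the quoted figures relate to measured data, return loss in dB is RL = -20 log10 |S11|, and the usable band can be read off as the frequency range over which RL stays above 10 dB. The sketch below applies this to a synthetic S11 sweep, not to the measured balun data.

    # Return-loss bandwidth sketch: convert |S11| to return loss in dB and find
    # the band where it stays above a 10 dB threshold.
    # The S11 sweep below is synthetic, not measured balun data.
    import numpy as np

    freq_ghz = np.linspace(1.0, 16.0, 301)
    s11_mag = 0.45 - 0.38 * np.exp(-((freq_ghz - 8.0) / 6.0) ** 2)  # synthetic |S11|

    return_loss_db = -20.0 * np.log10(s11_mag)
    in_band = return_loss_db > 10.0          # equivalent to |S11| < -10 dB
    band = freq_ghz[in_band]
    if band.size:
        print(f"Return loss > 10 dB from {band.min():.2f} to {band.max():.2f} GHz")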