186 results for Simple interest
Abstract:
This paper introduces the application of a sensor network to navigate a flying robot. We have developed distributed algorithms and efficient geographic routing techniques to incrementally guide one or more robots to points of interest based on sensor gradient fields, or along paths defined in terms of Cartesian coordinates. The robot itself is an integral part of the localization process which establishes the positions of sensors which are not known a priori. We use this system in a large-scale outdoor experiment with Mote sensors to guide an autonomous helicopter along a path encoded in the network. A simple handheld device, using this same environmental infrastructure, is used to guide humans.
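The gradient-guidance idea can be sketched in a few lines. The following is a minimal illustration under assumed names (`next_waypoint`, a fixed radio range), not the paper's actual distributed protocol: the robot repeatedly moves toward the in-range sensor reporting the smallest hop count to the goal.

```python
import numpy as np

def next_waypoint(robot_pos, sensor_pos, hops, radio_range=10.0):
    """Pick the in-range sensor with the smallest hop count to the goal.

    robot_pos:  (2,) current robot position
    sensor_pos: (n, 2) sensor positions (established during localization)
    hops:       (n,) each sensor's hop count to the goal in the gradient field
    """
    dist = np.linalg.norm(sensor_pos - robot_pos, axis=1)
    in_range = np.flatnonzero(dist <= radio_range)
    # Follow the gradient: head for the reachable sensor closest (in hops) to the goal
    best = in_range[np.argmin(hops[in_range])]
    return sensor_pos[best]
```

Iterating this choice walks the robot down the hop-count gradient to the point of interest; the same field could serve a handheld device.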
Abstract:
International assessments of student science achievement, and growing evidence of students' waning interest in school science, have ensured that the development of scientific literacy remains an important educational priority. Furthermore, researchers have called for teaching and learning strategies to engage students in the learning of science, particularly in the middle years of schooling. This study extends previous national and international research that has established a link between writing and learning science. Specifically, it investigates the learning experiences of eight intact Year 9 science classes as they engage in the writing of short stories that merge scientific and narrative genres (i.e., hybridised scientific narratives) about the socioscientific issue of biosecurity. This study employed a triangulation mixed methods research design, generating both quantitative and qualitative data, in order to investigate three research questions that examined the extent to which the students' participation in the study enhanced their scientific literacy; the extent to which the students demonstrated conceptual understanding of related scientific concepts through their written artefacts and in interviews about the artefacts; and the extent to which the students' participation in the project influenced their attitudes toward science and science learning. Three aspects of scientific literacy were investigated in this study: conceptual science understandings (a derived sense of scientific literacy), the students' transformation of scientific information in written stories about biosecurity (simple and expanded fundamental senses of scientific literacy), and attitudes toward science and science learning. 
The stories written by students in a selected case study class (N=26) were analysed quantitatively using a series of specifically-designed matrices that produce numerical scores that reflect students' developing fundamental and derived senses of scientific literacy. All students (N=152) also completed a Likert-style instrument (i.e., BioQuiz), pretest and posttest, that examined their interest in learning science, science self-efficacy, their perceived personal and general value of science, their familiarity with biosecurity issues, and their attitudes toward biosecurity. Socioscientific issues (SSI) education served as a theoretical framework for this study. It sought to investigate an alternative discourse with which students can engage in the context of SSI education, and the role of positive attitudes in engaging students in the negotiation of socioscientific issues. Results of the study have revealed that writing BioStories enhanced selected aspects of the participants' attitudes toward science and science learning, and their awareness and conceptual understanding of issues relating to biosecurity. Furthermore, the students' written artefacts alone did not provide an accurate representation of the level of their conceptual science understandings. An examination of these artefacts in combination with interviews about the students' written work provided a more comprehensive assessment of their developing scientific literacy. These findings support extensive calls for the utilisation of diversified writing-to-learn strategies in the science classroom, and therefore make a significant contribution to the writing-to-learn science literature, particularly in relation to the use of hybridised scientific genres. 
At the same time, this study presents the argument that the writing of hybridised scientific narratives such as BioStories can be used to complement the types of written discourse with which students engage in the negotiation of socioscientific issues, namely, argumentation, as the development of positive attitudes toward science and science learning can encourage students' participation in the discourse of science. The implications of this study for curricular design and implementation, and for further research, are also discussed.
Abstract:
World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, which are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure system stability and security of large power systems, the potentially dangerous oscillating modes generated from disturbances (such as equipment failure) must be quickly identified. The power utility must then apply appropriate damping control strategies. In power system monitoring there exist two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable, operation. The second is the rapid detection of any substantial changes to this normal, stable operation (because of equipment breakdown for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13]. 
One of the key limitations of all existing parameter estimation methods is the fact that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping. One simply cannot afford to wait long enough to collect the large amounts of data required for existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stable operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than it is for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy, then, imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration). A threshold is then set based on the statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode. 
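The EBD idea — model the disturbance energy of the system in normal operation, then alarm when a window's energy exceeds a statistically derived threshold — can be sketched as follows. This is an illustrative simplification, not the thesis's actual detector; the function name, window lengths, and the mean-plus-k-sigma threshold are assumptions.

```python
import numpy as np

def energy_change_detector(signal, fs, window_s=60.0, k=3.0, baseline_s=600.0):
    """Flag windows whose energy exceeds a threshold fitted to baseline data.

    signal:     measured power-system signal in normal operation
    fs:         sampling rate (Hz)
    window_s:   detection window length (seconds)
    baseline_s: initial stretch used to fit the statistical energy model
    """
    win = int(window_s * fs)
    n = len(signal) // win
    # Energy of each non-overlapping window
    energies = np.array([np.sum(signal[i*win:(i+1)*win]**2) for i in range(n)])
    # Fit a simple statistical model (mean, spread) to the baseline windows
    n_base = max(1, int(baseline_s / window_s))
    mu, sigma = energies[:n_base].mean(), energies[:n_base].std()
    threshold = mu + k * sigma
    return energies > threshold, threshold
```

A sudden drop in damping inflates the disturbance energy, so later windows cross the threshold within roughly one window length, consistent with the ~1 minute detection time quoted above.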
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for all modes within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather, it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard. 
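The KID's whiteness test can be sketched from the Chi-Squared property mentioned above: for white innovations, each bin of the normalized periodogram is approximately exponentially distributed (a scaled Chi-Squared with 2 degrees of freedom), so a bin exceeding a Bonferroni-corrected quantile flags both the alarm and the offending modal frequency. The function name and the specific correction are assumptions; the thesis's exact test statistic may differ.

```python
import numpy as np

def innovation_whiteness_alarm(innovation, alpha=0.01):
    """Alarm on spectral peaks in the Kalman innovation sequence.

    Returns the indices of periodogram bins (DC excluded) whose normalized
    power exceeds -log(alpha / n_bins), plus the threshold itself.  For a
    white innovation no bin should exceed it (family-wise level ~alpha);
    a peak localizes the modal frequency associated with the change.
    """
    x = innovation - innovation.mean()
    n = len(x)
    # Normalized periodogram: ~Exp(1) per bin when the innovation is white
    spec = np.abs(np.fft.rfft(x))**2 / (n * x.var())
    spec = spec[1:]                      # drop the DC bin
    threshold = -np.log(alpha / len(spec))
    return np.flatnonzero(spec > threshold), threshold
```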
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can be then cross-referenced with other detection methods to provide improved detection benchmarks.
Abstract:
Bioelectrical impedance analysis (BIA) is a method of body composition analysis first investigated in 1962 which has recently received much attention from a number of research groups. The reasons for this recent interest are its advantages (viz. inexpensive, non-invasive and portable) and also the increasing interest in the diagnostic value of body composition analysis. The concept utilised by BIA to predict body water volumes is the proportional relationship for a simple cylindrical conductor (volume ∝ length²/resistance), which allows the volume to be predicted from the measured resistance and length. Most of the research to date has measured the body's resistance to the passage of a 50 kHz AC current to predict total body water (TBW). Several research groups have investigated the application of AC currents at lower frequencies (e.g. 5 kHz) to predict extracellular water (ECW). However, all research to date using BIA to predict body water volumes has used the impedance measured at a discrete frequency or frequencies. This thesis investigates the variation of impedance and phase of biological systems over a range of frequencies and describes the development of a swept frequency bioimpedance meter which measures impedance and phase at 496 frequencies ranging from 4 kHz to 1 MHz. The impedance of any biological system varies with the frequency of the applied current. The graph of reactance vs resistance yields a circular arc, with the resistance decreasing with increasing frequency and the reactance increasing from zero to a maximum then decreasing to zero. Computer programs were written to analyse the measured impedance spectrum and determine the impedance, Zc, at the characteristic frequency (the frequency at which the reactance is a maximum). The fitted locus of the measured data was extrapolated to determine the resistance, Ro, at zero frequency; a value that cannot be measured directly using surface electrodes. 
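The arc-fitting and extrapolation step can be sketched with an algebraic circle fit to the measured (resistance, reactance) locus. This is an illustrative sketch, not the thesis's actual programs: the Kasa least-squares fit and the function name are assumptions, and a single ideal dispersion is assumed for simplicity.

```python
import numpy as np

def cole_fit(resistance, reactance):
    """Circle fit to an impedance locus (reactance vs resistance plot).

    Returns (R0, Rinf, Zc): the extrapolated zero- and infinite-frequency
    resistances (intersections of the fitted circle with the real axis)
    and the impedance magnitude at the characteristic frequency, i.e. the
    top of the arc where the reactance is maximal.
    """
    x = np.asarray(resistance, float)
    y = np.asarray(reactance, float)
    # Kasa fit: solve x^2 + y^2 = 2a*x + 2b*y + c for centre (a, b), radius r
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    r = np.sqrt(c + a**2 + b**2)
    half_chord = np.sqrt(r**2 - b**2)      # where the circle crosses X = 0
    R0, Rinf = a + half_chord, a - half_chord
    Zc = abs(complex(a, b + r))            # magnitude at the arc's apex
    return R0, Rinf, Zc
```

R0 falls below the lowest measurable frequency, which is why it must be extrapolated rather than measured with surface electrodes.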
The theoretical basis for selecting these impedance values (Zc and Ro) to predict TBW and ECW is presented. Studies were conducted on a group of normal healthy animals (n=42) in which TBW and ECW were determined by the gold standard of isotope dilution. The prediction quotients L²/Zc and L²/Ro (L=length) yielded standard errors of 4.2% and 3.2% respectively, and were found to be significantly better than previously reported, empirically determined prediction quotients derived from measurements at a single frequency. The prediction equations established in this group of normal healthy animals were applied to a group of animals with abnormally low fluid levels (n=20), and also to a group with an abnormal balance of extracellular to intracellular fluids (n=20). In both cases the equations using L²/Zc and L²/Ro accurately and precisely predicted TBW and ECW. This demonstrated that the technique developed using multiple frequency bioelectrical impedance analysis (MFBIA) can accurately predict both TBW and ECW in both normal and abnormal animals (with standard errors of the estimate of 6% and 3% for TBW and ECW respectively). Isotope dilution techniques were used to determine TBW and ECW in a group of 60 healthy human subjects (male and female, aged between 18 and 45). Whole body impedance measurements were recorded on each subject using the MFBIA technique and the correlations between body water volumes (TBW and ECW) and height²/impedance (for all measured frequencies) were compared. The prediction quotients H²/Zc and H²/Ro (H=height) again yielded the highest correlation with TBW and ECW respectively, with corresponding standard errors of 5.2% and 10%. The values of the correlation coefficients obtained in this study were very similar to those recently reported by others. 
It was also observed that in healthy human subjects the impedance measured at virtually any frequency yielded correlations not significantly different from those obtained from the MFBIA quotients. This phenomenon has been reported by other research groups and emphasises the need to validate the technique by investigating its application in one or more groups with abnormalities in fluid levels. The clinical application of MFBIA was trialled and its capability of detecting lymphoedema (an excess of extracellular fluid) was investigated. The MFBIA technique was demonstrated to be significantly more sensitive (P<.05) in detecting lymphoedema than the current technique of circumferential measurements. MFBIA was also shown to provide valuable information describing the changes in the quantity of the patient's muscle mass during the course of treatment. The determination of body composition (viz. TBW and ECW) by MFBIA has been shown to be a significant improvement on previous bioelectrical impedance techniques. The merit of the MFBIA technique is evidenced in its accurate, precise and valid application in animal groups with a wide variation in body fluid volumes and balances. The multiple frequency bioelectrical impedance analysis technique developed in this study provides accurate and precise estimates of body composition (viz. TBW and ECW) regardless of the individual's state of health.
Abstract:
Nitrous oxide (N2O) is a potent agricultural greenhouse gas (GHG). More than 50% of the global anthropogenic N2O flux is attributable to emissions from soil, primarily due to large fertilizer nitrogen (N) applications to corn and other non-leguminous crops. Quantification of the trade-offs between N2O emissions, fertilizer N rate, and crop yield is an essential requirement for informing management strategies aiming to reduce the agricultural sector GHG burden, without compromising productivity and producer livelihood. There is currently great interest in developing and implementing agricultural GHG reduction offset projects for inclusion within carbon offset markets. Nitrous oxide, with a global warming potential (GWP) of 298, is a major target for these endeavours due to the high payback associated with its emission prevention. In this paper we use robust quantitative relationships between fertilizer N rate and N2O emissions, along with a recently developed approach for determining economically profitable N rates for optimized crop yield, to propose a simple, transparent, and robust N2O emission reduction protocol (NERP) for generating agricultural GHG emission reduction credits. This NERP has the advantage of providing an economic and environmental incentive for producers and other stakeholders, necessary requirements in the implementation of agricultural offset projects.
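The credit arithmetic behind such a protocol can be illustrated with a back-of-the-envelope sketch. This is not the paper's NERP: it substitutes the IPCC Tier-1 default direct emission factor (1% of applied N emitted as N2O-N) for the paper's fertilizer-response relationships, and the function name and rates are illustrative only.

```python
GWP_N2O = 298          # 100-year global warming potential of N2O (per the abstract)
EF_DIRECT = 0.01       # IPCC Tier-1 default: 1% of applied N emitted as N2O-N
N2O_PER_N = 44.0 / 28.0  # convert N2O-N mass to N2O mass (molecular/atomic ratio)

def credit_t_co2e_per_ha(n_rate_baseline, n_rate_optimized):
    """CO2-equivalent credit (t/ha) from lowering the fertilizer N rate (kg N/ha)."""
    delta_n = n_rate_baseline - n_rate_optimized   # avoided fertilizer N
    n2o_kg = delta_n * EF_DIRECT * N2O_PER_N       # avoided N2O emission, kg
    return n2o_kg * GWP_N2O / 1000.0               # kg CO2e -> t CO2e
```

For example, trimming a 180 kg N/ha application to an economically optimal 150 kg N/ha would, under these assumptions, earn roughly 0.14 t CO2e/ha; the GWP of 298 is what makes even small N2O reductions valuable.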
Abstract:
Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation – or dispersion – is theorized to capture unaccounted for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions, the use of an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in mean structure while the remainder of them included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criteria (DIC) statistics. 
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences of expected crash counts are likely to be different from the factors that might help to explain unaccounted for variation in crashes across sites.
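The dispersion concept at issue here is the negative binomial (NB2) variance function, Var(y) = μ + αμ², where α captures the extra-Poisson variation across sites. A minimal sketch of how α can be estimated from counts and fitted means — a simple method-of-moments illustration, not the Bayesian MCMC approach the study actually used:

```python
import numpy as np

def estimate_alpha(y, mu):
    """Method-of-moments estimate of the NB2 dispersion parameter alpha.

    Given observed crash counts y and fitted expected counts mu, solves
    Var(y) = mu + alpha * mu^2 in a least-squares/moment sense:
    alpha = E[(y - mu)^2 - mu] / E[mu^2].
    """
    y = np.asarray(y, float)
    mu = np.asarray(mu, float)
    return float(np.mean((y - mu)**2 - mu) / np.mean(mu**2))
```

A fixed α corresponds to the "fixed dispersion parameter" assumption questioned above; letting α depend on covariates is what the varying-dispersion specifications explore.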
Abstract:
The dominant economic paradigm currently guiding industry policy making in Australia and much of the rest of the world is the neoclassical approach. Although neoclassical theories acknowledge that growth is driven by innovation, such innovation is exogenous to their standard models and hence often not explored. Instead the focus is on the allocation of scarce resources, where innovation is perceived as an external shock to the system. Indeed, analysis of innovation is largely undertaken by other disciplines, such as evolutionary economics and institutional economics. As more has become known about innovation processes, linear models, based on research and development or market demand, have been replaced by more complex interactive models which emphasise the existence of feedback loops between the actors and activities involved in the commercialisation of ideas (Manley 2003). Currently dominant among these approaches is the national or sectoral innovation system model (Breschi and Malerba 2000; Nelson 1993), which is based on the notion of increasingly open innovation systems (Chesbrough, Vanhaverbeke, and West 2008). This chapter reports on the ‘BRITE Survey’ funded by the Cooperative Research Centre for Construction Innovation which investigated the open sectoral innovation system operating in the Australian construction industry. The BRITE Survey was undertaken in 2004 and it is the largest construction innovation survey ever conducted in Australia. The results reported here give an indication of how construction innovation processes operate, as an example that should be of interest to international audiences interested in construction economics. The questionnaire was based on a broad range of indicators recommended in the OECD’s Community Innovation Survey guidelines (OECD/Eurostat 2005). 
Although the ABS has recently begun to undertake regular innovation surveys that include the construction industry (2006), they employ a very narrow definition of the industry and only collect very basic data compared to that provided by the BRITE Survey, which is presented in this chapter. The term ‘innovation’ is defined here as a new or significantly improved technology or organisational practice, based broadly on OECD definitions (OECD/Eurostat 2005). Innovation may be technological or organisational in nature and it may be new to the world, or just new to the industry or the business concerned. The definition thus includes the simple adoption of existing technological and organisational advancements. The survey collected information about respondents’ perceptions of innovation determinants in the industry, comprising various aspects of business strategy and business environment. It builds on a pilot innovation survey undertaken by PricewaterhouseCoopers (PWC) for the Australian Construction Industry Forum on behalf of the Australian Commonwealth Department of Industry Tourism and Resources, in 2001 (PWC 2002). The survey responds to an identified need within the Australian construction industry to have accurate and timely innovation data upon which to base effective management strategies and public policies (Focus Group 2004).
Abstract:
The Simple Laws of Proportion was shortlisted in the 2010 John Marsden Writing Prize for Young Australian Writers. It was subsequently published online by Express Media in December, 2010.
Abstract:
The number of Australian children requiring foster care due to abuse and neglect is increasing at a faster rate than suitable carers can be recruited. Increased numbers of foster children are currently presenting with higher care needs. Evidence suggests carers with a higher education could contribute to placement stability and ultimately provide more positive outcomes for this group of children. This paper explores the level of interest among tertiary educated persons in a model of fostering for children with higher needs. Using a descriptive survey methodology, a convenience sample of 644 university undergraduate and postgraduate students within faculties of health sciences, and education, arts and social sciences was employed. Psychology students in the 17-26 year old age group showed the greatest interest in a professional foster care model, and this was statistically significant (p=0.002, 95% CI 0.000-0.010) when compared to other health professionals and other age groups. Education students held the highest interest in general fostering, although this was not statistically significant. When these survey results were extrapolated to the total number of health professionals in Australia, there could be 8,385 potential recruits for a professional foster care model. Focused campaigns are required to source professionals as recruits to fostering, with the benefit of servicing the placement needs of higher care needs children and contributing to general foster care resources.
Abstract:
An attempt was made to produce sensitive and specific polyclonal antisera against the viruses causing rice tungro disease, and to assess their potential for use in simple diagnostic tests. Using a multiple, sequential injection procedure, seven batches of polyclonal antisera against rice tungro bacilliform virus (RTBV) and rice tungro spherical virus (RTSV) were produced. These were characterized for their sensitivity and specificity using the ring-interface precipitin test and double antibody sandwich (DAS) ELISA. Thirty-one weeks after the first immunization, antiserum batch B6b for RTBV showed the highest ring-interface titre (DEP = 1:1920). For RTSV, batches S3, S4b and S5b all had similar titres (DEP = 1:640). In DAS-ELISA, however, significant differences among purified antisera (IgG) batches were observed only at an IgG dilution of 10⁻³. At that dilution, IgGB4b showed the greatest sensitivity for RTBV, while IgGS3 showed the greatest sensitivity for RTSV. When all IgG batches were tested against 11 tungro field isolates (dual RTBV-RTSV infections) at a sample dilution of 1:10, IgGB4b and IgGB6b for RTBV and IgGS3 and IgGS6b for RTSV performed equally well. However, after cross adsorption with healthy plant extracts in a specially prepared healthy plant-Sepharose affinity column, only IgGB6b could be used specifically to detect RTBV in a simple tissue-print assay.
Abstract:
A special transmit polarization signalling scheme is presented to alleviate the power reduction as a result of polarization mismatch from random antenna orientations. This is particularly useful for hand held mobile terminals typically equipped with only a single linearly polarized antenna, since the average signal power is desensitized against receiver orientations. Numerical simulations also show adequate robustness against incorrect channel estimations.
Abstract:
Detection of Regions of Interest (ROI) in a video leads to more efficient utilization of bandwidth, because any ROIs in a given frame can be encoded at higher quality than the rest of that frame, with little or no degradation of quality as perceived by viewers. Consequently, it is not necessary to uniformly encode the whole video in high quality. One approach to determining ROIs is to use saliency detectors to locate salient regions. This paper proposes a methodology for obtaining ground truth saliency maps to measure the effectiveness of ROI detection by considering the role of user experience during the labelling process of such maps. User perceptions can be captured and incorporated into the definition of salience in a particular video, taking advantage of human visual recall within a given context. Experiments with two state-of-the-art saliency detectors demonstrate the effectiveness of this approach to validating visual saliency in video. The relevant datasets associated with the experiments are also provided.
Abstract:
A remarkable growth in the quantity and popularity of online social networks has been observed in recent years. A good number of online social networks now exist with over 100 million registered users. Many of these popular social networks offer automated recommendations to their users. These automated recommendations are normally generated using collaborative filtering systems based on the past ratings or opinions of similar users. Alternatively, trust among the users in the network can also be used to find the neighbors considered when making recommendations. To obtain the optimum result, there must be a positive correlation between trust and interest similarity. Though a positive relation between trust and interest similarity is assumed and adopted by many researchers, no survey of real-life users' opinions supporting this hypothesis has been reported. In this paper, we review the state-of-the-art research on trust in online social networks and present the results of a survey on the relationship between trust and interest similarity. Our results support the assumed hypothesis of a positive relationship between the trust and interest similarity of the users.
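The central quantity here, the correlation between trust and interest similarity, is typically measured with a Pearson coefficient over per-pair scores. A minimal sketch of that computation (the function name and the toy score vectors are illustrative; the survey's actual analysis is not specified in the abstract):

```python
import numpy as np

def pearson(a, b):
    """Sample Pearson correlation between two score vectors, in [-1, 1].

    E.g. a[i] = trust user u places in user i, b[i] = interest similarity
    between u and i; a strongly positive value supports the hypothesis.
    """
    a = np.asarray(a, float) - np.mean(a)
    b = np.asarray(b, float) - np.mean(b)
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))
```

A trust-aware recommender would then be justified in using trusted neighbours as a proxy for similar-interest neighbours whenever this coefficient is reliably positive.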