Abstract:
Stream ciphers are encryption algorithms used for ensuring the privacy of digital telecommunications. They have been widely used for encrypting military communications, satellite communications, pay TV, and voice traffic on both fixed-line and wireless networks. The current multi-year European project eSTREAM, which aims to select stream ciphers suitable for widespread adoption, reflects the importance of this area of research. Stream ciphers consist of a keystream generator and an output function. Keystream generators produce a sequence that appears to be random, which is combined with the plaintext message using the output function. Most commonly, the output function is binary addition modulo two. Cryptanalysis of these ciphers focuses largely on analysis of the keystream generators and of relationships between the generator and the keystream it produces. Linear feedback shift registers (LFSRs) are widely used components in building keystream generators, as the sequences they produce are well understood. Many types of attack have been proposed for breaking various LFSR-based stream ciphers. A recent attack type is known as an algebraic attack. Algebraic attacks transform the problem of recovering the key into the problem of solving a multivariate system of equations, whose solution eventually recovers the internal state bits or the key bits. This type of attack has been shown to be effective on a number of regularly clocked LFSR-based stream ciphers. In this thesis, algebraic attacks are extended to a number of well known stream ciphers where at least one LFSR in the system is irregularly clocked. Applying algebraic attacks to these ciphers has previously been discussed in the open literature only for LILI-128. In this thesis, algebraic attacks are first applied to keystream generators using stop-and-go clocking.
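The building blocks described above can be sketched in a few lines of Python. This is a minimal illustration, not any specific cipher: a 4-stage Fibonacci LFSR (feedback polynomial x^4 + x^3 + 1, chosen here because it gives the maximal period of 2^4 - 1 = 15) whose output is combined with the plaintext by XOR, the binary addition modulo two mentioned above.

```python
def lfsr_keystream(state, taps, n):
    """Clock a Fibonacci LFSR n times; state is a list of bits, taps are
    the stage indices XORed together to form the feedback bit."""
    state = list(state)
    out = []
    for _ in range(n):
        out.append(state[-1])        # keystream bit = last register stage
        fb = 0
        for t in taps:
            fb ^= state[t]           # linear feedback over GF(2)
        state = [fb] + state[:-1]    # shift, inserting the feedback bit
    return out

def xor_combine(plaintext_bits, keystream):
    """Output function: binary addition modulo two (XOR). Applying it
    twice with the same keystream decrypts."""
    return [p ^ k for p, k in zip(plaintext_bits, keystream)]
```

With taps [2, 3] and any nonzero 4-bit starting state, the keystream repeats with period 15, and encryption and decryption are the same XOR operation.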
Four ciphers belonging to this group are investigated: the Beth-Piper stop-and-go generator, the alternating step generator, the Gollmann cascade generator and the eSTREAM candidate Pomaranch. It is shown that algebraic attacks are very effective on the first three of these ciphers. Although no effective algebraic attack was found for Pomaranch, the algebraic analysis led to some interesting findings, including weaknesses that may be exploited in future attacks. Algebraic attacks are then applied to keystream generators using (p, q) clocking. Two well known examples of such ciphers, the step1/step2 generator and the self-decimated generator, are investigated. Algebraic attacks are shown to be very powerful in recovering the internal state of these generators. A more complex clocking mechanism than either stop-and-go or (p, q) clocking is mutual clock control, in which the LFSRs control the clocking of each other. Four well known stream ciphers belonging to this group are investigated with respect to algebraic attacks: the bilateral stop-and-go generator, the A5/1 stream cipher, the Alpha 1 stream cipher, and the more recent eSTREAM proposal, the MICKEY stream cipher. Some theoretical results regarding the complexity of algebraic attacks on these ciphers are presented. The algebraic analysis of these ciphers showed that, generally, it is hard to generate the system of equations required for an algebraic attack on them. As the algebraic attack could not be applied directly to these ciphers, a different approach was used, namely guessing some bits of the internal state in order to reduce the degree of the equations. Finally, an algebraic attack on Alpha 1 that requires only 128 bits of keystream to recover the 128 internal state bits is presented. An essential process associated with stream cipher proposals is key initialization.
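The stop-and-go principle behind the first group of ciphers can be sketched as follows. This is a simplified illustration of the clocking idea only (the register sizes and taps are arbitrary choices, and real designs such as Beth-Piper combine the registers further): a regularly clocked control LFSR decides at each step whether a second, data-producing LFSR advances.

```python
def lfsr_step(state, taps):
    """One clock of a Fibonacci LFSR: XOR the tapped stages, shift."""
    fb = 0
    for t in taps:
        fb ^= state[t]
    return [fb] + state[:-1]

def stop_and_go(ctrl, ctrl_taps, data, data_taps, n):
    """Generate n bits: the control LFSR always clocks, and its output
    bit gates the clocking of the data LFSR (irregular clocking)."""
    ctrl, data = list(ctrl), list(data)
    out = []
    for _ in range(n):
        ctrl = lfsr_step(ctrl, ctrl_taps)
        if ctrl[-1] == 1:            # "go": the data register advances
            data = lfsr_step(data, data_taps)
        out.append(data[-1])         # keystream taken from data register
    return out
```

The irregular clocking is what breaks the simple linear relationship between state and keystream: which state the data register is in at step t now depends on the control register's whole history.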
Many recently proposed stream ciphers use an algorithm to initialize the large internal state with a smaller key and possibly publicly known initialization vectors. The effect of key initialization on the performance of algebraic attacks is also investigated in this thesis; the relationship between the two has not been investigated before in the open literature. The investigation is conducted on Trivium and Grain-128, two eSTREAM ciphers. It is shown that the key initialization process has an effect on the success of algebraic attacks, unlike other conventional attacks. In particular, the key initialization process allows an attacker to first generate a small number of equations of low degree and then perform an algebraic attack using multiple keystreams. The effect of the number of iterations performed during key initialization is investigated. It is shown that both the number of iterations and the maximum number of initialization vectors to be used with one key should be carefully chosen. Some experimental results on Trivium and Grain-128 are then presented. Finally, the security with respect to algebraic attacks of the well known LILI family of stream ciphers, including the unbroken LILI-II, is investigated. These are irregularly clock-controlled nonlinear filtered generators. While the structure is defined for the LILI family, a particular parameter choice defines a specific instance. Two well known such instances are LILI-128 and LILI-II. The security of these and other instances is investigated to identify which instances are vulnerable to algebraic attacks. The feasibility of recovering the key bits using algebraic attacks is then investigated for both LILI-128 and LILI-II. Algebraic attacks which recover the internal state with less effort than exhaustive key search are possible for LILI-128 but not for LILI-II.
Given the internal state at some point in time, the feasibility of recovering the key bits is also investigated, showing that the parameters used in the key initialization process, if poorly chosen, can lead to a key recovery using algebraic attacks.
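The core idea of an algebraic attack, writing keystream bits as equations in the unknown state bits and then solving, can be illustrated in the simplest possible setting: a bare LFSR, where every equation is linear over GF(2) and Gaussian elimination recovers the initial state. This toy sketch (register size and taps are arbitrary) shows only the linear case; the ciphers analysed in the thesis yield nonlinear, higher-degree systems, which is precisely what makes them harder to attack.

```python
def lfsr_output_equations(n, taps, nbits):
    """Run the LFSR symbolically: stage i starts as variable x_i, encoded
    as bitmask 1 << i; each output is the XOR (mask) of some x_i's."""
    state = [1 << i for i in range(n)]
    eqs = []
    for _ in range(nbits):
        eqs.append(state[-1])            # one linear equation per bit
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return eqs

def solve_gf2(eqs, bits, n):
    """Gaussian elimination over GF(2): rows are [mask, rhs]; free
    variables, if any remain, are set to 0."""
    rows = [[m, b] for m, b in zip(eqs, bits)]
    rank, pivots = 0, []
    for col in range(n):
        piv = next((i for i in range(rank, len(rows))
                    if (rows[i][0] >> col) & 1), None)
        if piv is None:
            continue                     # no pivot: free variable
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i][0] >> col) & 1:
                rows[i][0] ^= rows[rank][0]
                rows[i][1] ^= rows[rank][1]
        pivots.append(col)
        rank += 1
    x = [0] * n
    for r, col in enumerate(pivots):
        x[col] = rows[r][1]
    return x
```

Feeding the solver the observed keystream bits recovers the initial state. A nonlinear filter function would instead produce monomials of state bits in each equation, turning this into the multivariate problem the thesis addresses.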
Abstract:
Since 2001 the School of Information Technology and Electrical Engineering (ITEE) at the University of Queensland has been involved in RoboCupJunior activities aimed at providing children with the robot building and programming knowledge they need to succeed in RoboCupJunior competitions. These activities include robotics workshops, the organization of the State-wide RoboCupJunior competition, and consultation on all matters robotic with schools and government organizations. The activities initiated by ITEE have succeeded in providing children with the scaffolding necessary to become competent, independent robot builders and programmers. Results from state, national and international competitions suggest that many of the children who participate in the activities supported by ITEE are subsequently able to purpose-build robots to compete effectively in RoboCupJunior competitions. As a result of the scaffolding received within workshops, children are able to think deeply and creatively about their designs, and to critique their designs in order to make the best possible creation in an effort to win.
Abstract:
Structural health monitoring has been accepted as a justified effort for long-span bridges, which are critical to a region's economic vitality. As the most heavily instrumented bridge project in the world, the Wind And Structural Health Monitoring System (WASHMS) has been developed and installed on the cable-supported bridges in Hong Kong (Wong and Ni 2009a). This chapter aims to share some of the experience gained through the operation and study of WASHMS. It is concluded that structural health monitoring should be composed of two main components: Structural Performance Monitoring (SPM) and Structural Safety Evaluation (SSE). As an example of how WASHMS can be used for structural performance monitoring, the layout of the sensory system installed on the Tsing Ma Bridge is briefly described. To demonstrate the two broad approaches to structural safety evaluation, structural health assessment and damage detection, three examples of the application of SHM information are presented. These three examples can be considered pioneering works for the research and development of the structural diagnosis and prognosis tools required by structural health monitoring for monitoring and evaluation applications.
Abstract:
One major gap in transportation system safety management is the ability to assess the safety ramifications of design changes for both new road projects and modifications to existing roads. To fulfill this need, FHWA and its many partners are developing a safety forecasting tool, the Interactive Highway Safety Design Model (IHSDM). The tool will be used by roadway design engineers, safety analysts, and planners throughout the United States. As such, the statistical models embedded in IHSDM will need to be able to forecast safety impacts under a wide range of roadway configurations and environmental conditions for a wide range of driver populations and will need to be able to capture elements of driving risk across states. One of the IHSDM algorithms developed by FHWA and its contractors is for forecasting accidents on rural road segments and rural intersections. The methodological approach is to use predictive models for specific base conditions, with traffic volume information as the sole explanatory variable for crashes, and then to apply regional or state calibration factors and accident modification factors (AMFs) to estimate the impact on accidents of geometric characteristics that differ from the base model conditions. In the majority of past approaches, AMFs are derived from parameter estimates associated with the explanatory variables. A recent study for FHWA used a multistate database to examine in detail the use of the algorithm with the base model-AMF approach and explored alternative base model forms as well as the use of full models that included nontraffic-related variables and other approaches to estimate AMFs. That research effort is reported. The results support the IHSDM methodology.
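The base-model-plus-AMF structure described above can be sketched as follows. The functional form, variable names, and coefficient values here are illustrative assumptions for exposition, not the actual IHSDM or FHWA calibration: a traffic-volume-only base model is scaled by a regional calibration factor and by the product of the accident modification factors.

```python
import math

def predicted_crashes(aadt, length_mi, calibration, amfs, b0=-5.0, b1=0.9):
    """Illustrative segment model: base crashes/yr predicted from traffic
    volume (AADT) and segment length, then adjusted by a calibration
    factor and a set of AMFs for non-base geometric conditions."""
    base = math.exp(b0) * aadt ** b1 * length_mi   # base-condition model
    adj = calibration
    for amf in amfs:                               # e.g. lane-width AMF
        adj *= amf
    return base * adj
```

Each AMF multiplies the prediction up or down according to how a geometric feature differs from the base conditions, which is why AMFs derived from parameter estimates, as discussed above, transfer directly into this multiplicative structure.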
Abstract:
At Flinders University and the Queensland University of Technology, biofuels research interests cover a broad range of activities. Both institutions are seeking to overcome the twin evils of "peak oil" (Hubbert 1949 & 1956) and "global warming" (IPCC 2007, Stern 2006, Alison 2010) through the development of Generation 1, 2 and 3 (Gen-1, 2 & 3) biofuels (Clarke 2008, Clarke 2010). This includes the development of parallel Chemical Biorefinery, value-added, co-product chemical technologies, which can underpin the commercial viability of the biofuel industry. Whilst there is a focused effort to develop Gen-2 & 3 biofuels, thus avoiding the socially unacceptable use of food-based Gen-1 biofuels, it must also be recognized that, as yet, no country in the world has produced sustainable Gen-2 & 3 biofuel on a commercial basis. For example, in 2008 the United States used 38 billion litres (3.5% of total fuel use) of Gen-1 biofuel; in 2009/2010 this will be 47.5 billion litres (4.5% of fuel use), and by 2018 this has been estimated to rise to 96 billion litres (9% of total US fuel use). Brazil in 2008 produced 24.5 billion litres of ethanol, representing 37.3% of the world's ethanol use for fuel, and Europe in 2008 produced 11.7 billion litres of biofuel (primarily as biodiesel). Compare this to Australia's miserly biofuel production in 2008/2009 of 180 million litres of ethanol and 75 million litres of biodiesel, which is 0.4% of our fuel consumption (Clarke, Graiver and Habibie 2010). To assist in the development of better biofuel technologies in the developing regions of Asia, the Australian Government recently awarded the Materials & BioEnergy Group at Flinders University, in partnership with the Queensland University of Technology, an Australian Leadership Award (ALA) Biofuel Fellowship program to train scientists from Indonesia and India in all facets of advanced biofuel technology.
Abstract:
This paper demonstrates a model of self-regulation based on a qualitative research project with adult learners undertaking an undergraduate degree. The narratives about the participants' life transitions, co-constructed with the researcher, yielded data about their generalised self-efficacy and resulted in a unique self-efficacy narrative for each participant. A model of self-regulation is proposed with potential applications for coaching, counselling and psychotherapy. A narrative method was employed to construct narratives about an individual's self-efficacy in relation to their experience of learning and life transitions. The method involved a cyclical and iterative process using qualitative interviews to collect life history data from participants. In addition, research participants completed reflective homework tasks, and this data was included in the participants' narratives. A highly collaborative method entailed narratives being co-constructed by researcher and research participants as the participants were guided in reflecting on their experience of learning and life transitions; the reflection focused on the behaviour, cognitions and emotions that constitute a sense of self-efficacy. The analytic process used was narrative analysis, in which life is viewed as constructed and experienced through the telling and retelling of stories, and hence the analysis is the creation of a coherent and resonant story. The method of constructing self-efficacy narratives was applied to a sample of mature-aged students starting an undergraduate degree. The research outcomes confirmed a three-factor model of self-efficacy, comprising three interrelated stages: initiating action, applying effort, and persistence in overcoming difficulties.
Evaluation of the research process by participants suggested that they had gained an enhanced understanding of self-efficacy from their participation in the research process, and would be able to apply this understanding to their studies and other endeavours in the future. A model of self-regulation is proposed as a means for coaches, counsellors and psychotherapists working from a narrative constructivist perspective to assist clients facing life transitions by helping them generate self-efficacious cognitions, emotions and behaviour.
Abstract:
As a strategy to identify child sexual abuse, most Australian States and Territories have enacted legislation requiring teachers to report suspected cases. Some Australian State and non-State educational authorities have also created policy-based obligations to report suspected child sexual abuse. Significantly, these policy-based duties can be wider than the legislative duties, which in some jurisdictions are limited or non-existent, and they are therefore a crucial element of the effort to identify sexual abuse. Yet no research has explored the existence and nature of these policy-based duties. The first purpose of this paper is to report the results of a three-State study into policy-based reporting duties in State and non-State schools in Australia. In an extraordinary coincidence, while the study was being conducted, a case of failure to comply with reporting policy occurred with tragic consequences. This led to a rare example in Australia (and one of only a few worldwide) of a professional being prosecuted for failure to comply with a legislative reporting duty. It also led to disciplinary proceedings against school staff. The second purpose of this paper is to describe this case and connect it with findings from our policy analysis.
Abstract:
Predicting safety on roadways is standard practice for road safety professionals and has a corresponding extensive literature. The majority of safety prediction models are estimated using roadway segment and intersection (microscale) data, while more recently efforts have been undertaken to predict safety at the planning level (macroscale). Safety prediction models typically include roadway, operations, and exposure variables—factors known to affect safety in fundamental ways. Environmental variables, in particular variables attempting to capture the effect of rain on road safety, are difficult to obtain and have rarely been considered. In the few cases where weather variables have been included, historical averages rather than the actual weather conditions during which crashes were observed have been used. Without the inclusion of weather-related variables, researchers have had difficulty explaining regional differences in the safety performance of various entities (e.g. intersections, road segments, highways, etc.). As part of the NCHRP 8-44 research effort, researchers developed PLANSAFE, or planning-level safety prediction models. These models make use of socio-economic, demographic, and roadway variables for predicting planning-level safety. Accounting for regional differences, similar to the experience with microscale safety models, has been problematic during the development of planning-level safety prediction models. More specifically, without weather-related variables there is an insufficient set of variables for explaining safety differences across regions and states. Furthermore, omitted variable bias resulting from excluding these important variables may adversely impact the coefficients of included variables, thus contributing to difficulty in model interpretation and accuracy.
This paper summarizes the results of an effort to include weather-related variables, particularly various measures of rainfall, into models of accident frequency and of the frequency of fatal and/or injury crashes. The purpose of the study was to determine whether these variables do in fact improve the overall goodness of fit of the models, whether these variables may explain some or all of the observed regional differences, and to identify the estimated effects of rainfall on safety. The models are based on Traffic Analysis Zone level datasets from Michigan, and Pima and Maricopa Counties in Arizona. Numerous rain-related variables were found to be statistically significant, selected rain-related variables improved the overall goodness of fit, and inclusion of these variables reduced the portion of the model explained by the constant in the base models without weather variables. Rain tends to diminish safety, as expected, in fairly complex ways, depending on rain frequency and intensity.
Abstract:
This paper outlines a method of constructing narratives about an individual’s self-efficacy. Self-efficacy is defined as “people’s judgments of their capabilities to organise and execute courses of action required to attain designated types of performances” (Bandura, 1986, p. 391), and as such represents a useful construct for thinking about personal agency. Social cognitive theory provides the theoretical framework for understanding the sources of self-efficacy, that is, the elements that contribute to a sense of self-efficacy. The narrative approach adopted offers an alternative to traditional, positivist psychology, characterised by a preoccupation with measuring psychological constructs (like self-efficacy) by means of questionnaires and scales. It is argued that these instruments yield scores which are somewhat removed from the lived experience of the person—respondent or subject—associated with the score. The method involves a cyclical and iterative process using qualitative interviews to collect data from participants – four mature aged university students. The method builds on a three-interview procedure designed for life history research (Dolbeare & Schuman, cited in Seidman, 1998). This is achieved by introducing reflective homework tasks, as well as written data generated by research participants, as they are guided in reflecting on those experiences (including behaviours, cognitions and emotions) that constitute a sense of self-efficacy, in narrative and by narrative. The method illustrates how narrative analysis is used “to produce stories as the outcome of the research” (Polkinghorne, 1995, p.15), with detail and depth contributing to an appreciation of the ‘lived experience’ of the participants. The method is highly collaborative, with narratives co-constructed by researcher and research participants. 
The research outcomes suggest an enhanced understanding of self-efficacy contributes to motivation, application of effort and persistence in overcoming difficulties. The paper concludes with an evaluation of the research process by the students who participated in the author’s doctoral study.
Abstract:
A novel application of the popular web instruction architecture Blackboard Academic Suite® is described. The method was applied to a large number of students to assess quantitatively the accuracy of each student’s laboratory skills. The method provided immediate feedback to students on their personal skill level, replaced labour-intensive scrutiny of laboratory skills by teaching staff and identified immediately those students requiring further individual assistance in mastering the skill under evaluation. The method can be used for both formative and summative assessment. When used formatively, the assessment can be repeated by the student without penalty until the skill is mastered. When used for summative assessment, the method can save the teacher much time and effort in assessing laboratory skills of vital importance to students in the real world.
Abstract:
The way in which metabolic fuels are utilised can alter the expression of behaviour in the interests of regulating energy balance and fuel availability. This is consistent with the notion that the regulation of appetite is a psychobiological process, in which physiological mediators act as drivers of behaviour. The glycogenostatic theory suggests that glycogen availability is central in eliciting negative feedback signals to restore energy homeostasis. Due to its limited storage capacity, carbohydrate availability is tightly regulated and its restoration is a high metabolic priority following depletion. It has been proposed that such depletion may act as a biological cue to stimulate compensatory energy intake in an effort to restore availability. Due to the increased energy demand, aerobic exercise may act as a biological cue to trigger compensatory eating as a result of perturbations to muscle and liver glycogen stores. However, studies manipulating glycogen availability over short-term periods (1-3 days) using exercise, diet or both have often produced equivocal findings. There is limited but growing evidence to suggest that carbohydrate balance is involved in the short-term regulation of food intake, with a negative carbohydrate balance having been shown to predict greater ad libitum feeding. Furthermore, a negative carbohydrate balance has been shown to be predictive of weight gain. However, further research is needed to support these findings, as the current research in this area is limited. In addition, the specific neural or hormonal signal through which carbohydrate availability could regulate energy intake is at present unknown. Identification of this signal or pathway is imperative if a causal relationship is to be established. Without this, the possibility remains that the associations found between carbohydrate balance and food intake are incidental.
Abstract:
Many industrial processes and systems can be modelled mathematically by a set of Partial Differential Equations (PDEs). Finding a solution to such a PDE model is essential for system design, simulation, and process control purposes. However, major difficulties appear when solving PDEs with singularities. Traditional numerical methods, such as finite difference, finite element, and polynomial-based orthogonal collocation, not only have limitations in fully capturing the process dynamics but also demand enormous computation power, due to the large number of elements or mesh points required to accommodate sharp variations. To tackle this challenging problem, wavelet-based approaches and high resolution methods have recently been developed, with successful applications to a fixed-bed adsorption column model. Our investigation has shown that recent advances in wavelet-based approaches and high resolution methods have the potential to be adopted for solving more complicated dynamic system models. This chapter will highlight the successful applications of these new methods in solving complex models of simulated-moving-bed (SMB) chromatographic processes. A SMB process is a distributed parameter system and can be mathematically described by a set of partial/ordinary differential equations and algebraic equations. These equations are highly coupled, experience wave propagations with steep fronts, and require significant numerical effort to solve. To demonstrate the numerical computing power of the wavelet-based approaches and high resolution methods, a single-column chromatographic process modelled by a Transport-Dispersive-Equilibrium linear model is investigated first. Numerical solutions from the upwind-1 finite difference, wavelet-collocation, and high resolution methods are evaluated by quantitative comparisons with the analytical solution for a range of Peclet numbers.
After that, the advantages of the wavelet-based approaches and high resolution methods are further demonstrated through applications to a dynamic SMB model for an enantiomer separation process. This research has revealed that for a PDE system with a low Peclet number, all existing numerical methods work well, but the upwind finite difference method consumes the most time for the same degree of accuracy of the numerical solution. The high resolution method provides an accurate numerical solution for a PDE system with a medium Peclet number. The wavelet collocation method is capable of capturing steep changes in the solution, and thus can be used for solving PDE models with high singularity. For the complex SMB system models under consideration, both the wavelet-based approaches and high resolution methods are good candidates in terms of computational demand and prediction accuracy on the steep front. The high resolution methods have shown better stability in achieving steady state in the specific case studied in this chapter.
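As a concrete reference point for the comparison above, the upwind-1 finite difference scheme for the pure advection part of such a transport model, u_t + v u_x = 0, can be written in a few lines (the grid spacing, time step, and velocity below are illustrative). First-order upwinding is stable for Courant number c = v*dt/dx <= 1 but introduces numerical diffusion that smears steep fronts, which is exactly the weakness that motivates the high resolution and wavelet methods.

```python
def upwind_advect(u, v, dx, dt, steps):
    """March a 1-D concentration profile forward with the upwind-1
    scheme (assumes v > 0, so information travels left to right)."""
    c = v * dt / dx                       # Courant (CFL) number, <= 1
    u = list(u)
    for _ in range(steps):
        new = [u[0]]                      # hold the inlet value fixed
        for i in range(1, len(u)):
            # backward difference in space, forward (explicit) in time
            new.append(u[i] - c * (u[i] - u[i - 1]))
        u = new
    return u
```

With c exactly 1 the scheme propagates a front without error (each node simply copies its upwind neighbour); for any c < 1 the front spreads over more cells with every step, illustrating the smearing described above.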
Abstract:
In Australia, rural research and development corporations and companies have expended over $AUS500 million on agricultural research and development. A substantial proportion of this is invested in R&D in the beef industry. The Australian beef industry exports almost $AUS5 billion of product annually and invests heavily in new product development to improve beef quality and production efficiency. Review points are critical for effective new product development, yet many research and development bodies, particularly publicly funded ones, appear to ignore the importance of assessing products prior to their release. Significant sums of money are invested in developing technological innovations that have low levels and rates of adoption. The adoption rates could be improved if the developers were more focused on technology uptake and less focused on proving that their technologies can be applied in practice. Several approaches have been put forward in an effort to improve rates of adoption into operational settings. This paper presents a study of key technological innovations in the Australian beef industry to assess the use of multiple criteria in evaluating the potential uptake of new technologies. Findings indicate that using multiple criteria to evaluate innovations before commercializing a technology enables researchers to better understand the issues that may inhibit adoption.
Abstract:
While critical success factors (CSFs) of enterprise system (ES) implementation are mature concepts and have received considerable attention for over a decade, researchers have very often focused on only a specific aspect of the implementation process or a specific CSF. As a result, there is (1) little documented research that encompasses all significant CSF considerations and (2) little empirical research into the important factors of successful ES implementation. This paper is part of a larger research effort that aims to contribute to understanding the phenomenon of ES CSFs, and reports on preliminary findings from a case study conducted at the Queensland University of Technology (QUT) in Australia. The paper reports on an empirically derived CSFs framework, built using a directed content analysis of 79 studies from top IS outlets, employing the characteristics of analytic theory, and drawing on six different projects implemented at QUT.