60 results for restricted grazing


Relevance: 10.00%

Abstract:

This study explores young people's creative practice through using Information and Communications Technologies (ICTs) in one particular learning area: Drama. The study focuses on school-based contexts and the impact of ICT-based interventions within two drama education case studies. The first pilot study involved the use of online spaces to complement a co-curricular performance project. The second focus case was a curriculum-based project in which online spaces and digital technologies were used to create a cyberdrama. Each case documents the activity systems, participant experiences and meaning making in specific institutional and technological contexts. The nature of creative practice and learning is analysed using frameworks drawn from Vygotsky's socio-historical theory (including his work on creativity) and from activity theory. Case study analysis revealed the nature of the contradictions encountered, which required an analysis of institutional constraints and the dynamics of power. Cyberdrama offers young people opportunities to explore drama through new modes, and the use of ICTs can be seen as contributing different tools, spaces and communities for creative activity. To be able to engage in creative practice using ICTs requires a focus on a range of cultural tools and social practices beyond the purely technological. Cybernetic creative practice requires flexibility in the negotiation of tool use and subjects, and a system that responds to feedback and can adapt. Classroom-based dramatic practice may allow for the negotiation of power and tool use in the development of collaborative works of the imagination. However, creative practice using ICTs in schools is typically restricted by authoritative power structures and access issues. The research identified participant engagement and meaning making emerging from different factors, with some students showing preferences for embodied creative practice in Drama that did not involve ICTs. The findings of the study suggest ICT-based interventions need to focus not only on different applications for the technology but also on embodied experience, the negotiation of power, identity and human interactions.

Relevance: 10.00%

Abstract:

Managing livestock movement in extensive systems has environmental and production benefits. Currently, permanent wire fencing is used to control cattle; this is both expensive and inflexible. Cattle are known to respond to auditory and visual cues, and we investigated whether these can be used to manipulate their behaviour. Twenty-five Belmont Red steers with a mean live weight of 270 kg were each randomly assigned to one of five treatments. Treatments consisted of a combination of cues (audio, tactile and visual stimuli) and consequence (electrical stimulation). The treatments were electrical stimulation alone, audio plus electrical stimulation, vibration plus electrical stimulation, light plus electrical stimulation and an electrified fence (6 kV) plus electrical stimulation. Cue stimuli were administered for 3 s, followed immediately by electrical stimulation (consequence) of 1 kV for 1 s. The experiment tested the operational efficacy of an on-animal control or virtual fencing system. A collar-halter device was designed to carry the electronics, batteries and equipment providing the stimuli (audio, vibration, light and electrical) of a prototype virtual fencing device. Cattle were allowed to travel along a 40 m alley to a group of peers and feed while their rate of travel and response to the stimuli were recorded. The prototype virtual fencing system was successful in modifying the behaviour of the cattle. The rate of travel of cattle along the alley demonstrated the large variability in behavioural response associated with tactile, visual and audible cues. The experiment demonstrated virtual fencing has potential for controlling cattle in extensive grazing systems. However, larger numbers of cattle need to be tested to derive a better understanding of the behavioural variance. Further controlled experimental work is also necessary to quantify the interaction between cues, consequences and cattle learning.
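
As an illustrative sketch only, the cue-then-consequence timing described above (a 3 s cue followed immediately by a 1 kV, 1 s electrical stimulation) could be sequenced as follows. The abstract does not specify the device interfaces; `play_cue` and `apply_shock` are hypothetical placeholders for the collar-halter electronics.

```python
import time

# Hypothetical device hooks -- stand-ins for the collar-halter electronics;
# the real hardware interfaces are not described in the abstract.
def play_cue(kind: str) -> None:
    print(f"cue on: {kind}")

def apply_shock(kilovolts: float) -> None:
    print(f"electrical stimulation: {kilovolts} kV")

def cue_then_consequence(cue: str, cue_secs: float = 3.0,
                         shock_kv: float = 1.0, shock_secs: float = 1.0) -> None:
    """Administer a cue for 3 s, followed immediately by a 1 kV, 1 s consequence."""
    play_cue(cue)
    time.sleep(cue_secs)    # cue period (audio, vibration or light)
    apply_shock(shock_kv)
    time.sleep(shock_secs)  # consequence period

# One treatment from the experiment: audio cue plus electrical stimulation.
cue_then_consequence("audio")
```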

Relevance: 10.00%

Abstract:

This paper presents research being conducted by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) with the aim of investigating the use of wireless sensor networks for automated livestock monitoring and control. It is difficult to achieve practical and reliable cattle monitoring with current conventional technologies due to challenges such as the large grazing areas of cattle, long data-sampling periods, and constantly varying physical environments. Wireless sensor networks bring a new level of possibilities into this area, with the potential for greatly increased spatial and temporal resolution of measurement data. CSIRO has created a wireless sensor platform for animal behaviour monitoring with which we are able to observe and collect information about animals without significantly interfering with them. Based on such monitoring information, we can successfully identify each animal's behaviour and activities.
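
The abstract does not say how behaviours are identified from the sensor data. As one generic illustration (not CSIRO's method; the features and thresholds below are made up), a collar node's accelerometer stream could be labelled with simple movement-energy thresholds:

```python
import math

# Hypothetical 3-axis accelerometer samples (in g) from a collar node; the
# abstract does not publish the platform's actual features or thresholds.
samples = [(0.1, 0.0, 1.0), (0.9, 0.4, 1.2), (0.05, 0.02, 1.0)]

def classify(ax: float, ay: float, az: float) -> str:
    """Toy behaviour labeller: deviation of total acceleration from 1 g."""
    energy = abs(math.sqrt(ax**2 + ay**2 + az**2) - 1.0)
    if energy < 0.05:
        return "resting"
    if energy < 0.3:
        return "grazing"
    return "walking"

print([classify(*s) for s in samples])
```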

Relevance: 10.00%

Abstract:

Virtual fencing has the potential to control grazing livestock. Understanding and refining the cues that can alter behaviour is an integral part of autonomous animal control. A series of tests was completed to explore the relationship between temperament and control. Prior to exposure to virtual fencing control, the animals were scored for temperament using flight speed and a sociability index using contact logging devices. The responses of 30 Belmont Red steers were observed for behavioural changes when the animals were presented with cues prior to receiving an electrical stimulation. A control and four treatments designed to interrupt the animal's movement down an alley were tested. The treatments consisted of sound plus electrical stimulation, vibration plus electrical stimulation, a visual cue plus electrical stimulation and electrical stimulation by itself. The treatments were randomly applied to each animal over five consecutive trials. A control treatment in which no cues were applied was used to establish a basal behavioural pattern. A trial was considered completed after each animal had been retained behind the cue barrier for at least 60 s. All cues and electrical stimulation were manually applied from a laptop mounted on a portable 3.5 m tower immediately outside the alley. The electrical stimulation consisted of 1.0 kV. Electrical stimulation, sound and vibration events, along with Global Positioning System (GPS) positions tracing the animal's path within the alley, were logged every second.
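
The trial-completion criterion (retained behind the cue barrier for at least 60 s) can be checked directly against the 1 Hz GPS path. A minimal sketch, assuming positions are expressed as distance along the alley in metres rather than raw GPS coordinates:

```python
def retained_for(positions_m, barrier_m: float, hold_secs: int = 60) -> bool:
    """positions_m: distance along the alley (m), one sample per second.
    Returns True once the animal has stayed behind the barrier for
    hold_secs consecutive samples (the trial-completion criterion)."""
    run = 0
    for x in positions_m:
        run = run + 1 if x < barrier_m else 0
        if run >= hold_secs:
            return True
    return False

# Example: a 1 Hz track in a 40 m alley with the cue barrier at 20 m.
track = [5.0 + 0.1 * t for t in range(200)]  # slow approach up the alley
print(retained_for(track, barrier_m=20.0))
```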

Relevance: 10.00%

Abstract:

In recent months the extremes of Australia's weather have affected and killed a considerable number of people and caused millions of dollars in losses. Unlike a manned aircraft or a helicopter, which have restricted air time, a UAS or a group of UAS could provide 24-hour coverage of the disaster area and be instrumented with infrared cameras to locate distressed people and relay information to emergency services. The solar-powered UAV is capable of carrying a 0.25 kg payload consuming 0.5 W, flying continuously at low altitude for 24 h, collecting data and creating a spatial distribution. This system, named Green Falcon, is fully autonomous in navigation and power generation. Equipped with solar cells covering its wing, it retrieves energy from the sun in order to supply power to the propulsion system and the control electronics, and charges the battery with the surplus of energy. During the night, the only energy available comes from the battery, which discharges slowly until the next morning when a new cycle starts. The prototype airplane was exhibited at the Melbourne Museum from November 2009 to February 2010.
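
The day/night cycle described above comes down to a night-time energy budget. A back-of-envelope sketch under assumed numbers: only the 0.5 W payload figure comes from the abstract; the platform draw, darkness period and battery capacity below are illustrative assumptions.

```python
# Rough overnight energy budget for a solar-powered UAV of this kind.
payload_w = 0.5      # stated payload consumption (from the abstract)
platform_w = 20.0    # ASSUMED propulsion + avionics draw at low altitude
night_hours = 12.0   # ASSUMED darkness period
battery_wh = 300.0   # ASSUMED usable battery capacity

needed_wh = (payload_w + platform_w) * night_hours
margin = battery_wh - needed_wh
print(f"energy needed overnight: {needed_wh:.0f} Wh, margin: {margin:.0f} Wh")
```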

Relevance: 10.00%

Abstract:

Silhouettes are common features used by many applications in computer vision. For many of these algorithms to perform optimally, accurately segmenting the objects of interest from the background to extract the silhouettes is essential. Motion segmentation is a popular technique to segment moving objects from the background; however, such algorithms can be prone to poor segmentation, particularly in noisy or low-contrast conditions. In this paper, the work of [3], which combines motion detection with graph cuts, is extended into two novel implementations that aim to allow greater uncertainty in the output of the motion segmentation, providing a less restricted input to the graph cut algorithm. The proposed algorithms are evaluated on a portion of the ETISEO dataset using hand-segmented ground truth data, and an improvement in performance over the motion segmentation alone and the baseline system of [3] is shown.
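
A sketch of the core idea, not of [3] or of the proposed extensions: instead of handing the graph cut a hard foreground/background mask, the motion detector emits per-pixel probabilities that become soft unary costs, preserving uncertainty. The Gaussian noise model and background image below are assumptions; the pairwise terms and the max-flow solve are omitted.

```python
import numpy as np

def motion_unaries(frame: np.ndarray, background: np.ndarray,
                   noise_sigma: float = 10.0):
    """Turn frame-vs-background differences into soft per-pixel unary costs,
    rather than a hard segmentation, so the graph cut retains uncertainty."""
    diff = np.abs(frame.astype(float) - background.astype(float))
    p_fg = 1.0 - np.exp(-0.5 * (diff / noise_sigma) ** 2)  # prob. of motion
    eps = 1e-6
    cost_fg = -np.log(p_fg + eps)        # low cost where motion is likely
    cost_bg = -np.log(1.0 - p_fg + eps)
    return cost_fg, cost_bg              # fed to the graph-cut solver

frame = np.random.randint(0, 255, (4, 4))
bg = np.full((4, 4), 120)
fg_cost, bg_cost = motion_unaries(frame, bg)
print(fg_cost.shape, bg_cost.shape)
```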

Relevance: 10.00%

Abstract:

Despite the general evolution and broadening of the scope of the concept of infrastructure in many other sectors, the energy sector has maintained the same narrow boundaries for over 80 years. Energy infrastructure is still generally restricted in meaning to the transmission and distribution networks of electricity and, to some extent, gas. This is especially true in the urban development context. This early 20th-century system is struggling to meet community expectations that the industry itself created and fostered for many decades. The relentless growth in demand and changing political, economic and environmental challenges require a shift from the traditional ‘predict and provide’ approach to infrastructure, which is no longer economically or environmentally viable. Market deregulation and a raft of demand- and supply-side management strategies have failed to curb society’s addiction to the commodity of electricity. None of these responses has addressed the fundamental problem. This chapter presents an argument for the need for a new paradigm. Going beyond peripheral energy efficiency measures and the substitution of fossil fuels with renewables, it outlines a new approach to the provision of energy services in the context of 21st-century urban environments.

Relevance: 10.00%

Abstract:

In a consumerist society obsessed with body image and thinness, obesity levels have reached an all-time high. This multi-faceted book, written by a range of experts, explores the social, cultural, clinical and psychological factors that lie behind the ‘Obesity Epidemic’. It is required reading for the many healthcare professionals dealing with the effects of obesity and for anyone who wants to know more about the causes of weight gain and the best ways of dealing with it. Fat Matters covers a range of issues from sociology through medicine to technology. This is not a book for the highly specialised expert. Rather, it is a book that shows the diversity of approaches to the phenomenon of obesity, tailored to the reader who wants to be up-to-date and well-informed on a subject that is possibly as frequently discussed and as misunderstood as the weather.

Relevance: 10.00%

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain. The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were found to be closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.s-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m). Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with its reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion. The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long-distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections.
Group-level speed was highly predicted using a modified gradient factor (r² = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain. The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, gauged by a low root mean square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that for some runners the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills. In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was found to be best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
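
The first study's Δ GPS position/time speed estimate can be illustrated by differencing successive fixes with the haversine distance. A minimal sketch, assuming 1 Hz fixes given as (latitude, longitude) pairs; the example coordinates are hypothetical.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeds_from_fixes(fixes, dt: float = 1.0):
    """Speed (m/s) from change in GPS position over time, one value per interval."""
    return [haversine_m(*a, *b) / dt for a, b in zip(fixes, fixes[1:])]

# Three hypothetical 1 Hz fixes while running roughly north.
fixes = [(-27.47700, 153.0280), (-27.47697, 153.0280), (-27.47694, 153.0280)]
print([round(v, 2) for v in speeds_from_fixes(fixes)])
```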

Relevance: 10.00%

Abstract:

Throughout history, developments in medicine have aimed to improve patient quality of life and reduce the trauma associated with surgical treatment. Surgical access to internal organs and bodily structures has traditionally been via large incisions. Endoscopic surgery presents a technique for surgical access via small (10 mm) incisions by utilising a scope and camera for visualisation of the operative site. Endoscopy presents enormous benefits for patients in terms of lower post-operative discomfort, and reduced recovery and hospitalisation time. Since the first gall bladder extraction operation was performed in France in 1987, endoscopic surgery has been embraced by the international medical community. With the adoption of the new technique, new problems never previously encountered in open surgery were revealed. One such problem is that the removal of large tissue specimens and organs is restricted by the small incision size. Instruments have been developed to address this problem; however, none of the devices provide a totally satisfactory solution. They have a number of critical weaknesses:

- The size of the access incision has to be enlarged, thereby compromising the entire endoscopic approach to surgery.
- The physical quality of the specimen extracted is very poor and is not suitable for conducting the necessary post-operative pathological examinations.
- The safety of both the patient and the physician is jeopardised.

The problem of tissue and organ extraction at endoscopy is investigated and addressed. In addition to background information covering endoscopic surgery, this thesis describes the entire approach to the design problem, and the steps taken before arriving at the final solution. This thesis contributes to the body of knowledge associated with the development of endoscopic surgical instruments. A new product capable of extracting large tissue specimens and organs in endoscopy is the final outcome of the research.

Relevance: 10.00%

Abstract:

Reliability analysis has several important engineering applications. Designers and operators of equipment are often interested in the probability of the equipment operating successfully to a given age - this probability is known as the equipment's reliability at that age. Reliability information is also important to those charged with maintaining an item of equipment, as it enables them to model and evaluate alternative maintenance policies for the equipment. In each case, information on failures and survivals of a typical sample of items is used to estimate the required probabilities as a function of the item's age, this process being one of many applications of the statistical techniques known as distribution fitting. In most engineering applications, the estimation procedure must deal with samples containing survivors (suspensions or censorings); this thesis focuses on several graphical estimation methods that are widely used for analysing such samples. Although these methods have been current for many years, they share a common shortcoming: none of them is continuously sensitive to changes in the ages of the suspensions, and we show that the resulting reliability estimates are therefore more pessimistic than necessary. We use a simple example to show that the existing graphical methods take no account of any service recorded by suspensions beyond their respective previous failures, and that this behaviour is inconsistent with one's intuitive expectations. In the course of this thesis, we demonstrate that the existing methods are only justified under restricted conditions. We present several improved methods and demonstrate that each of them overcomes the problem described above, while reducing to one of the existing methods where this is justified. Each of the improved methods thus provides a realistic set of reliability estimates for general (unrestricted) censored samples. Several related variations on these improved methods are also presented and justified.
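
For context on how a censored sample enters a reliability estimate, here is a minimal product-limit (Kaplan-Meier) estimator. This is one standard nonparametric approach, not the thesis's graphical methods or its improvements; the ages and censoring flags below are invented.

```python
def kaplan_meier(ages, is_failure):
    """Product-limit reliability estimate for a sample containing failures
    and suspensions (censorings). Returns (age, R(age)) at each failure age.
    Assumes distinct event ages for simplicity."""
    events = sorted(zip(ages, is_failure))
    at_risk = len(events)
    r, curve = 1.0, []
    for age, failed in events:
        if failed:
            r *= (at_risk - 1) / at_risk  # survivors shrink the estimate
            curve.append((age, r))
        at_risk -= 1                      # suspensions leave the risk set
    return curve

# Ages in hours; True = failure, False = suspension (removed while running).
ages = [150, 340, 560, 800, 1100]
failed = [True, False, True, True, False]
for age, rel in kaplan_meier(ages, failed):
    print(f"R({age} h) = {rel:.3f}")
```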

Relevance: 10.00%

Abstract:

During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis from the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years in an attempt to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections has been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.

Relevance: 10.00%

Abstract:

Artificial neural network (ANN) learning methods provide a robust and non-linear approach to approximating the target function for many classification, regression and clustering problems. ANNs have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge. The arguments are the poor comprehensibility of the learned ANN, and the inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explanation of the decision process in ANNs in the form of symbolic rules (predicate rules with variables); and (2) provision of explanatory capability by mapping the general conceptual knowledge that is learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology, GYAN, is developed and evaluated for the task of extracting knowledge from the trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules, and subsequently allows user interaction by interfacing with a knowledge-based reasoner. The performance of GYAN is demonstrated using a number of real world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived describing the overall behaviour of the ANN with high accuracy and fidelity, and (2) a concise explanation is given (in terms of rules, facts and predicates activated in a reasoning episode) as to why a particular instance is being classified into a certain category.
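
GYAN itself is a multi-stage method producing restricted first-order rules and is not reproduced here. As a rough, generic illustration of the underlying idea (approximating a trained network with readable rules and measuring their fidelity), a pedagogical surrogate tree can be fitted to the network's own predictions:

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# 1. Train the "opaque" network.
net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)

# 2. Fit a shallow surrogate tree to the network's predictions, then read
# its branches as propositional rules describing the network's behaviour.
surrogate = DecisionTreeClassifier(max_depth=3,
                                   random_state=0).fit(X, net.predict(X))
print(export_text(surrogate, feature_names=load_iris().feature_names))

# Fidelity: how often the extracted rules agree with the network they explain.
print("fidelity:", (surrogate.predict(X) == net.predict(X)).mean())
```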

Relevance: 10.00%

Abstract:

Literally, the word compliance suggests conformity in fulfilling official requirements. The thesis presents the results of the analysis and design of a class of protocols called compliant cryptologic protocols (CCP). The thesis presents a notion of compliance in cryptosystems that is conducive as a cryptologic goal. CCP are employed in security systems used by at least two mutually mistrusting sets of entities. The individuals in the sets of entities only trust the design of the security system and any trusted third party the security system may include. Such a security system can be thought of as a broker between the mistrusting sets of entities. In order to provide confidence in operation for the mistrusting sets of entities, CCP must provide compliance verification mechanisms. These mechanisms are employed either by all the entities or by a set of authorised entities in the system to verify the compliance of the behaviour of various participating entities with the rules of the system. It is often stated that confidentiality, integrity and authentication are the primary interests of cryptology. It is evident from the literature that authentication mechanisms employ confidentiality and integrity services to achieve their goal. Therefore, the fundamental services that any cryptographic algorithm may provide are confidentiality and integrity only. Since controlling the behaviour of the entities is not a feasible cryptologic goal, the verification of the confidentiality of any data is a futile cryptologic exercise. For example, there exists no cryptologic mechanism that would prevent an entity from willingly or unwillingly exposing its private key corresponding to a certified public key. The confidentiality of the data can only be assumed. Therefore, any verification in cryptologic protocols must take the form of integrity verification mechanisms. Thus, compliance verification must take the form of integrity verification in cryptologic protocols. A definition of compliance that is conducive as a cryptologic goal is presented as a guarantee on the confidentiality and integrity services. The definitions are employed to provide a classification mechanism for various message formats in a cryptologic protocol. The classification assists in the characterisation of protocols, which assists in providing a focus for the goals of the research. The resulting concrete goal of the research is the study of those protocols that employ message formats to provide restricted confidentiality and universal integrity services to selected data. The thesis proposes an informal technique to understand, analyse and synthesise the integrity goals of a protocol system. The thesis contains a study of key recovery, electronic cash, peer-review, electronic auction, and electronic voting protocols. All these protocols contain message formats that provide restricted confidentiality and universal integrity services to selected data. The study of key recovery systems aims to achieve robust key recovery relying only on the certification procedure and without the need for tamper-resistant system modules. The result of this study is a new technique for the design of key recovery systems called hybrid key escrow. The thesis identifies a class of compliant cryptologic protocols called secure selection protocols (SSP). The uniqueness of this class of protocols is the similarity in the goals of the member protocols, namely peer-review, electronic auction and electronic voting.
The problem statement describing the goals of these protocols contains a tuple, (I, D), where I usually refers to an identity of a participant and D usually refers to the data selected by the participant. SSP are interested in providing a confidentiality service to the tuple, hiding the relationship between I and D, and an integrity service to the tuple after its formation, preventing modification of the tuple. The thesis provides a schema to solve the instances of SSP by employing electronic cash technology. The thesis makes a distinction between electronic cash technology and electronic payment technology. It treats electronic cash technology as a certification mechanism that allows the participants to obtain a certificate on their public key, without revealing the certificate or the public key to the certifier. The thesis abstracts the certificate and the public key as a data structure called an anonymous token. It proposes design schemes for the peer-review, e-auction and e-voting protocols by employing the schema with the anonymous token abstraction. The thesis concludes by providing a variety of problem statements for future research that would further enrich the literature.
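
The anonymous token abstraction (a certificate obtained without the certifier seeing it) resembles a blind signature. As a toy illustration only, with insecure textbook RSA parameters and no claim to match the thesis's actual construction:

```python
import math, random

# Toy RSA key for the certifier (insecure textbook sizes -- illustration only).
p, q = 61, 53
n, e = p * q, 17
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

m = 42  # stand-in for a hash of the participant's public key (hypothetical)

# Participant blinds m with a random factor r, so the certifier never sees m.
while True:
    r = random.randrange(2, n)
    if math.gcd(r, n) == 1:
        break
blinded = (m * pow(r, e, n)) % n

# Certifier signs the blinded value without learning m.
blind_sig = pow(blinded, d, n)

# Participant unblinds; the result is a valid signature on m itself.
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == m          # anyone can verify the anonymous token
print("token:", (m, sig))
```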