972 results for field line


Relevance:

20.00%

Publisher:

Abstract:

Maintenance is a time-consuming and expensive task for any golf course or driving range manager. For a golf course the primary tasks are grass mowing and maintenance (fertilizer and herbicide spreading), while for a driving range mowing, maintenance and ball collection are required. All these tasks require an operator to drive a vehicle along paths which are generally predefined. This paper presents some preliminary in-field testing results for an automated tractor vehicle performing golf ball collection on an actual driving range, and mowing on difficult unstructured terrain.

Relevance:

20.00%

Publisher:

Abstract:

A point interpolation method with locally smoothed strain field (PIM-LS2) is developed for mechanics problems using a triangular background mesh. In the PIM-LS2, the strain within each sub-cell of a nodal domain is assumed to be the average strain over the adjacent sub-cells of the neighboring element sharing the same field node. We prove theoretically that the energy norm of the smoothed strain field in PIM-LS2 is equivalent to that of the compatible strain field, and then prove that the solution of the PIM-LS2 converges to the exact solution of the original strong form. Furthermore, the softening effect of PIM-LS2 on the system, and the effect of the number of sub-cells participating in the smoothing operation on the convergence of PIM-LS2, are investigated. Intensive numerical studies verify the convergence, softening effects and bound properties of the PIM-LS2, and show that very “tight” lower and upper bound solutions can be obtained using PIM-LS2.
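The core smoothing step described above, averaging the compatible strain over the sub-cells adjacent to a shared field node, can be pictured with a minimal sketch. The area weighting, array layout and function name below are illustrative assumptions only; the paper's exact smoothing domains and weights may differ.

```python
import numpy as np

def smoothed_strain(subcell_strains, subcell_areas, participating_ids):
    """Area-weighted average of compatible strains over the sub-cells that
    participate in the smoothing operation for one field node.

    subcell_strains   : (n, 3) array of [eps_xx, eps_yy, gamma_xy] per sub-cell
    subcell_areas     : (n,) array of sub-cell areas
    participating_ids : indices of the adjacent sub-cells sharing the field node
    """
    eps = np.asarray(subcell_strains)[participating_ids]
    w = np.asarray(subcell_areas)[participating_ids]
    return (w[:, None] * eps).sum(axis=0) / w.sum()
```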

Relevance:

20.00%

Publisher:

Abstract:

Water environments are greatly valued in urban areas as ecological and aesthetic assets. However, it is the water environment that is most adversely affected by urbanisation. Urban land use, coupled with anthropogenic activities, alters the stream flow regime and degrades water quality, with urban stormwater being a significant source of pollutants. Unfortunately, urban water pollution is difficult to evaluate in terms of conventional monetary measures. True costs extend beyond the immediate human or physical boundaries of the urban area and affect the functioning of surrounding ecosystems. Current approaches for handling stormwater pollution and water quality issues in urban landscapes are limited, as they are primarily focused on ‘end-of-pipe’ solutions. These approaches are commonly based on insufficient design knowledge, faulty value judgements or inadequate consideration of full life cycle costs. It is in this context that the adoption of a triple bottom line approach is advocated to safeguard urban water quality. The problem of degradation of urban water environments can only be remedied through innovative planning, water sensitive engineering design and the foresight to implement sustainable practices. Sustainable urban landscapes must be designed to match the triple bottom line needs of the community, starting with ecosystem services such as the water cycle, then addressing social and immediate ecosystem health needs, and finally the economic performance of the catchment. This calls for a cultural change towards urban water resources rather than the current piecemeal, single-issue approach. This paper discusses the challenges in safeguarding urban water environments and the limitations of current approaches. It then explores the opportunities offered by integrating innovative planning practices with water engineering concepts into a single cohesive framework to protect valuable urban ecosystem assets. Finally, a series of recommendations is proposed for protecting urban water resources within the context of a triple bottom line approach.

Relevance:

20.00%

Publisher:

Abstract:

The most costly operations encountered in pairing computations are those that take place in the full extension field Fp^k. At high levels of security, the complexity of operations in Fp^k dominates the complexity of the operations that occur in the lower-degree subfields. Consequently, full extension field operations have the greatest effect on the runtime of Miller’s algorithm. Many recent optimizations in the literature have focussed on improving the overall operation count by presenting new explicit formulas that reduce the number of subfield operations encountered throughout an iteration of Miller’s algorithm. Unfortunately, almost all of these improvements tend to suffer for larger embedding degrees, where the expensive extension field operations far outweigh the operations in the smaller subfields. In this paper, we propose a new way of carrying out Miller’s algorithm that involves new explicit formulas which reduce the number of full extension field operations occurring in an iteration of the Miller loop, resulting in significant speed-ups of between 5 and 30 percent in most practical situations.
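As a rough orientation for where the costly Fp^k operations arise, the double-and-add structure of Miller's loop can be sketched as below. All curve and field arithmetic is left to caller-supplied callbacks, and the names are placeholders; this is the generic loop the paper starts from, not its optimized variant.

```python
def miller_loop(P, Q, r_bits, dbl_step, add_step, fpk_mul, fpk_sqr):
    """Generic skeleton of Miller's algorithm (not the optimized variant).

    r_bits   : bits of the pairing order, most significant bit first
    dbl_step : (T, Q) -> (2T, line value l_{T,T}(Q) in Fp^k)
    add_step : (T, P, Q) -> (T+P, line value l_{T,P}(Q) in Fp^k)
    fpk_mul, fpk_sqr : multiplication and squaring in the full extension field
    """
    f, T = 1, P
    for bit in r_bits[1:]:
        T, line = dbl_step(T, Q)
        f = fpk_mul(fpk_sqr(f), line)   # full extension field squaring + multiplication
        if bit:
            T, line = add_step(T, P, Q)
            f = fpk_mul(f, line)        # another full extension field multiplication
    return f
```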

Relevance:

20.00%

Publisher:

Abstract:

Miller’s algorithm for computing pairings involves performing multiplications between elements that belong to different finite fields. Namely, elements in the full extension field Fp^k are multiplied by elements contained in proper subfields Fp^(k/d), and by elements in the base field Fp. We show that significant speedups in pairing computations can be achieved by delaying these “mismatched” multiplications for an optimal number of iterations. Importantly, we show that our technique can be easily integrated into traditional pairing algorithms; implementers can exploit the computational savings herein by applying only minor changes to existing pairing code.
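One way the bookkeeping behind such delaying can be pictured is sketched below: factors that lie in a proper subfield are accumulated with cheap subfield arithmetic and folded into the Fp^k accumulator only once at the end. This is a toy illustration under the simplifying assumption that each delayed factor lies entirely in one subfield; the paper's actual technique and the choice of the optimal delay length are more involved.

```python
def delayed_subfield_product(f0, subfield_parts, fpk_sqr, fpk_mul_sub,
                             sub_mul, sub_sqr):
    """Accumulate per-iteration subfield factors separately, then apply one
    'mismatched' (full field x subfield) multiplication at the end.
    Algebraically equivalent to doing f = f^2 * a_i in every iteration."""
    f, g = f0, 1                      # f in Fp^k, g kept in the subfield
    for a in subfield_parts:
        f = fpk_sqr(f)                # squaring of the Fp^k accumulator
        g = sub_mul(sub_sqr(g), a)    # cheap subfield squaring and multiplication
    return fpk_mul_sub(f, g)          # single delayed mismatched multiplication
```

A tiny numeric check of the identity, with both "fields" collapsed to integers modulo a small prime purely to test the bookkeeping:

```python
p, parts, f0 = 10007, [3, 7, 11, 13], 5
direct = f0
for a in parts:
    direct = (direct * direct * a) % p   # multiply-as-you-go reference
delayed = delayed_subfield_product(
    f0, parts,
    fpk_sqr=lambda x: x * x % p,
    fpk_mul_sub=lambda x, y: x * y % p,
    sub_mul=lambda x, y: x * y % p,
    sub_sqr=lambda x: x * x % p)
assert direct == delayed
```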

Relevance:

20.00%

Publisher:

Abstract:

Power transformers are among the most important and costly items of equipment in power generation, transmission and distribution systems. The current average age of transformers in Australia is around 25 years, and there is a strong economic incentive to use them for 50 years or more. As transformers operate, they degrade under varying loading and environmental stress conditions. In today's competitive energy market, with the penetration of distributed energy sources, transformers are stressed more while receiving only the minimum required maintenance. Modern asset management programs aim to extend the service life of power transformers through prognostic techniques based on condition indicators. In the case of oil-filled transformers, condition monitoring methods based on dissolved gas analysis, polarization studies, partial discharge studies, frequency response analysis (to check mechanical integrity), infrared heat monitoring and other vibration monitoring techniques are in use. In the current research program, studies have been initiated to identify the degradation of insulating materials by the electrical relaxation technique known as dielectrometry. Aging under fluctuating thermal and electrical loading produces degradation products such as moisture and other oxidized by-products. By applying repetitive low-frequency, high-voltage sine wave perturbations in the range of 100 to 200 V peak across the available terminals of a power transformer, the conductive and polarization parameters of insulation aging are identified. A novel in-house digital instrument was developed to record the low-leakage response of repetitive polarization currents in a three-terminal configuration. The technique was tested on three known transformers rated at 5 kVA or more. The effects of the polarization voltage level, the polarizing wave shape and various terminal configurations provide characteristic aging relaxation information. Through different analyses, parameters sensitive to aging are identified and presented in this thesis.
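As a purely illustrative sketch of how conductive and polarization contributions might be separated from a current recorded under a sinusoidal perturbation, the synchronous-detection decomposition below projects the measured current onto the in-phase and quadrature components of the applied voltage. This is a generic textbook decomposition with assumed names and units, not a description of the thesis's in-house instrument or analysis.

```python
import numpy as np

def conduction_polarization_split(t, i_meas, v_peak, freq):
    """Split a measured current into an in-phase (conduction-like) and a
    quadrature (polarization/capacitive-like) component relative to an
    applied voltage v(t) = v_peak * sin(2*pi*freq*t).

    Assumes the record spans an integer number of excitation periods.
    t      : sample times (s)
    i_meas : measured current samples (A)
    v_peak : amplitude of the applied sine perturbation (V), e.g. 100-200 V
    freq   : excitation frequency (Hz)
    """
    w = 2 * np.pi * freq
    i_inphase = 2 * np.mean(i_meas * np.sin(w * t))   # peak current in phase with v
    i_quad = 2 * np.mean(i_meas * np.cos(w * t))      # peak current leading v by 90 deg
    g_equiv = i_inphase / v_peak          # equivalent conductance (S)
    c_equiv = i_quad / (w * v_peak)       # equivalent capacitance (F)
    return g_equiv, c_equiv
```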

Relevance:

20.00%

Publisher:

Abstract:

In this paper, both Distributed Generators (DGs) and capacitors are allocated and sized optimally to reduce line losses and improve reliability. The objective function is composed of the investment cost of DGs and capacitors together with loss and reliability costs, all of which are converted into genuine dollar terms. Bus voltages and line currents are treated as constraints which must be satisfied during the optimization procedure. A hybrid Particle Swarm Optimization (PSO) heuristic is used as the optimization method. The IEEE 69-bus test system is modified and employed to evaluate the proposed algorithm. The results illustrate that the lowest-cost plan is found by optimizing both DGs and capacitors in the distribution network.
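A hedged sketch of the kind of fitness function a PSO particle (candidate DG and capacitor sizes) might be scored with is given below: investment, loss and reliability costs in dollar terms, with the bus-voltage and line-current limits enforced as penalties. The cost coefficients, limits and the power-flow interface are placeholder assumptions, not the paper's actual formulation.

```python
import numpy as np

def fitness(particle, power_flow, dg_cost_per_kw=500.0, cap_cost_per_kvar=5.0,
            loss_cost_per_kw=1000.0, ens_cost_per_kwh=2.0,
            v_min=0.95, v_max=1.05, i_max=1.0, penalty=1e6):
    """Illustrative objective for joint DG/capacitor sizing (all dollar figures
    and limits are arbitrary placeholders).

    particle   : 1-D array, first half DG sizes (kW), second half capacitor
                 sizes (kvar) at the candidate buses
    power_flow : caller-supplied routine returning (losses_kw, ens_kwh,
                 bus_voltages_pu, line_currents_pu) for the candidate plan
    """
    particle = np.asarray(particle, dtype=float)
    n_dg = particle.size // 2
    dg, cap = particle[:n_dg], particle[n_dg:]
    losses_kw, ens_kwh, v_pu, i_pu = power_flow(dg, cap)

    cost = (dg_cost_per_kw * dg.sum() + cap_cost_per_kvar * cap.sum()
            + loss_cost_per_kw * losses_kw + ens_cost_per_kwh * ens_kwh)

    # Quadratic penalties for violated bus-voltage and line-current limits.
    cost += penalty * np.sum(np.clip(v_min - v_pu, 0.0, None) ** 2)
    cost += penalty * np.sum(np.clip(v_pu - v_max, 0.0, None) ** 2)
    cost += penalty * np.sum(np.clip(i_pu - i_max, 0.0, None) ** 2)
    return cost
```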

Relevance:

20.00%

Publisher:

Abstract:

Undertaking empirical research on crime and violence can be a tricky enterprise fraught with ethical, methodological, intellectual and legal implications. This chapter takes readers on a reflective journey through the qualitative methodologies I used to research sex work in Kings Cross, miscarriages of justice, female delinquency, sexual violence, and violence in rural and regional settings over a period of nearly 30 years. Reflecting on these experiences, the chapter explores and analyses the reality of doing qualitative field research, the role of the researcher, the politics of subjectivity, the exercise of power, and the ‘muddiness’ of the research process, aspects which are often overlooked in sanitised accounts of research (Byrne-Armstrong, Higgs and Horsfall, 2001; Davies, 2000).

Relevance:

20.00%

Publisher:

Abstract:

This thesis aimed to investigate the way in which distance runners modulate their speed, in an effort to understand the key processes and determinants of speed selection when encountering hills in natural outdoor environments. One factor which has limited the expansion of knowledge in this area has been a reliance on the motorized treadmill, which constrains runners to constant speeds and gradients and only linear paths. Conversely, limits in the portability or storage capacity of available technology have restricted field research to brief durations and level courses. Therefore, another aim of this thesis was to evaluate the capacity of lightweight, portable technology to measure running speed in outdoor undulating terrain.

The first study of this thesis assessed the validity of a non-differential GPS to measure speed, displacement and position during human locomotion. Three healthy participants walked and ran over straight and curved courses for 59 and 34 trials respectively. A non-differential GPS receiver provided speed data by Doppler shift and by change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100 m section, while static positions were collected for 1 hour and compared with the known geodetic point. GPS speed values on the straight course were closely correlated with actual speeds (Doppler shift: r = 0.9994, p < 0.001; Δ GPS position/time: r = 0.9984, p < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within ± 0.1 m.sec-1). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, p < 0.001; Δ GPS distance/time: r = 0.9973, p < 0.001). Distance measured by GPS was 100.46 ± 0.49 m, while 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 ± 0.34 m, range 0.69-2.10 m). Non-differential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data, with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost and ease of use, a non-differential receiver offers a valid alternative to differential GPS in the study of overground locomotion.

The second study of this dissertation examined speed regulation during overground running on a hilly course. Following an initial laboratory session to calculate physiological thresholds (VO2 max and ventilatory thresholds), eight experienced long distance runners completed a self-paced time trial over three laps of an outdoor course involving uphill, downhill and level sections. A portable gas analyser, GPS receiver and activity monitor were used to collect physiological, speed and stride frequency data. Participants ran 23% slower on uphills and 13.8% faster on downhills compared with level sections. Speeds on level sections were significantly different for 78.4 ± 7.0 seconds following an uphill and 23.6 ± 2.2 seconds following a downhill. Speed changes were primarily regulated by stride length, which was 20.5% shorter uphill and 16.2% longer downhill, while stride frequency was relatively stable. Oxygen consumption averaged 100.4% of runners' individual ventilatory thresholds on uphills, 78.9% on downhills and 89.3% on level sections.
Group-level speed was well predicted using a modified gradient factor (r2 = 0.89). Individuals adopted distinct pacing strategies, both across laps and as a function of gradient. Speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption (VO2) limited runners' speeds only on uphill sections, and was maintained in line with individual ventilatory thresholds. Running speed showed larger individual variation on downhill sections, while speed on the level was systematically influenced by the preceding gradient. Runners who varied their pace more as a function of gradient showed a more consistent level of oxygen consumption. These results suggest that optimising time on the level sections after hills offers the greatest potential to minimise overall time when running over undulating terrain.

The third study of this thesis investigated the effect of implementing an individualised pacing strategy on running performance over an undulating course. Six trained distance runners completed three trials involving four laps (9968 m) of an outdoor course involving uphill, downhill and level sections. The initial trial was self-paced in the absence of any temporal feedback. For the second and third field trials, runners were paced for the first three laps (7476 m) according to two different regimes (Intervention or Control) by matching desired goal times for subsections within each gradient. The fourth lap (2492 m) was completed without pacing. Goals for the Intervention trial were based on findings from study two, using a modified gradient factor and elapsed distance to predict the time for each section. To maintain the same overall time across all paced conditions, times were proportionately adjusted according to split times from the self-paced trial. The alternative pacing strategy (Control) used the original split times from this initial trial. Five of the six runners increased their range of uphill to downhill speeds on the Intervention trial by more than 30%, but this was unsuccessful in achieving a more consistent level of oxygen consumption, with only one runner showing a change of more than 10%. Group-level adherence to the Intervention strategy was lowest on downhill sections. Three runners successfully adhered to the Intervention pacing strategy, as gauged by a low root mean square error across subsections and gradients. Of these three, the two who had the largest change in uphill-downhill speeds ran their fastest overall time. This suggests that, for some runners, the strategy of varying speeds systematically to account for gradients and transitions may benefit race performances on courses involving hills.

In summary, a non-differential receiver was found to offer highly accurate measures of speed, distance and position across the range of human locomotion speeds. Self-selected speed was best predicted using a weighted factor to account for prior and current gradients. Oxygen consumption limited runners' speeds only on uphills, speed on the level was systematically influenced by preceding gradients, and there was much larger individual variation on downhill sections. Individuals were found to adopt distinct but unrelated pacing strategies as a function of durations and gradients, while runners who varied pace more as a function of gradient showed a more consistent level of oxygen consumption.
Finally, the implementation of an individualised pacing strategy to account for gradients and transitions greatly increased runners' range of uphill-downhill speeds and was able to improve performance in some runners. The efficiency of various gradient-speed trade-offs and the factors limiting faster downhill speeds will, however, require further investigation to improve the effectiveness of the suggested strategy.
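The position-based speed estimate used in the first study ("change in GPS position over time") can be illustrated with a minimal sketch. The haversine distance and a fixed interval between fixes are assumptions here for illustration; the Doppler-shift speed used for comparison is reported directly by the receiver rather than computed this way.

```python
import math

def gps_position_speed(lat1, lon1, lat2, lon2, dt, r_earth=6371000.0):
    """Speed (m/s) from two successive GPS fixes dt seconds apart, using the
    haversine great-circle distance. Coordinates are in decimal degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r_earth * math.asin(math.sqrt(a)) / dt
```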

Relevance:

20.00%

Publisher:

Abstract:

This dissertation develops the model of a prototype system for the digital lodgement of spatial data sets with statutory bodies responsible for the registration and approval of land-related actions under the Torrens Title system. Spatial data pertain to the location of geographical entities together with their spatial dimensions, and are classified as point, line, area or surface. This dissertation deals with a sub-set of spatial data: land boundary data that result from the activities performed by surveying and mapping organisations for the development of land parcels. The prototype system has been developed, utilising an event-driven paradigm for the user interface, to exploit the potential of digital spatial data being generated from the utilisation of electronic techniques. The system provides for the creation of a digital model of the cadastral network and dependent data sets for an area of interest from hard copy records. This initial model is calibrated on registered control and updated by field survey to produce an amended model. The field-calibrated model is then electronically validated to ensure it complies with standards of format and content. The prototype system was designed specifically to create a database of land boundary data for subsequent retrieval by land professionals for surveying, mapping and related activities. Data extracted from this database are utilised for subsequent field survey operations without the need to create an initial digital model of an area of interest. Statistical reporting of the differences that result when subsequent initial and calibrated models are compared replaces the traditional checking of spatial data performed by a land registry office. Digital lodgement of survey data is fundamental to the creation of the database of accurate land boundary data. The creation of this database is also fundamental to the efficient integration of accurate spatial data about land being generated by modern technology, such as global positioning systems and remote sensing and imaging, with land boundary information and other information held in Government databases. The prototype system developed provides for the delivery of accurate, digital land boundary data for the land registration process to ensure the continued maintenance of the integrity of the cadastre. Such data should also meet the more general and encompassing requirements of, and prove to be of tangible, longer-term benefit to, the developing electronic land information industry.
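As a loose illustration of the statistical comparison of an initial and a field-calibrated model mentioned above, a coordinate-difference report over matched boundary points might look like the sketch below. The point structure and the reported statistics are assumptions for illustration, since the prototype's actual report format is not detailed here.

```python
import numpy as np

def boundary_difference_report(initial_pts, calibrated_pts):
    """Summary statistics of the shifts between corresponding boundary points
    in two cadastral models. Inputs are (n, 2) arrays of easting/northing
    coordinates in metres for the same n boundary corners."""
    shifts = np.linalg.norm(np.asarray(calibrated_pts, dtype=float)
                            - np.asarray(initial_pts, dtype=float), axis=1)
    return {
        "n_points": int(shifts.size),
        "mean_shift_m": float(shifts.mean()),
        "rms_shift_m": float(np.sqrt((shifts ** 2).mean())),
        "max_shift_m": float(shifts.max()),
    }
```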