Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require data to be accurate, timely and lossless. However, because of random transmission delays and packet losses, the performance of a control system may be severely degraded, and the control system may even be rendered unstable. The main challenge of NCS design is to maintain and improve stable control performance of an NCS. To achieve this, both communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy the communication requirements of NCSs, such as real-time communication and high-precision clock consistency. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft real-time control applications are modelled using a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. By using a Markov chain model, we can accurately model the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
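The predictive compensation idea can be illustrated with a toy sketch. This is our illustration of the general networked predictive control pattern, not the thesis's actual algorithm: the controller transmits a short horizon of predicted control inputs with each packet, and the actuator indexes into the most recent horizon it holds by the packet's age, which covers both variable delay and lost packets. The scalar plant model, the gain K, and the horizon length are assumptions for the sketch.

```python
def make_control_horizon(x, K, A, B, steps=5):
    """Predict future states with a scalar linear model x' = A*x + B*u and
    a state-feedback law u = -K*x; return the predicted control sequence."""
    horizon = []
    for _ in range(steps):
        u = -K * x
        horizon.append(u)
        x = A * x + B * u  # one-step model prediction
    return horizon

class Actuator:
    """Applies the horizon entry matching the packet's age (in sampling
    periods); keeps using the last received horizon on packet loss."""
    def __init__(self):
        self.horizon, self.sent_at = [0.0], 0

    def receive(self, horizon, sent_at):
        self.horizon, self.sent_at = horizon, sent_at

    def actuate(self, now):
        age = now - self.sent_at               # delay in sampling periods
        idx = min(age, len(self.horizon) - 1)  # clamp if delay > horizon
        return self.horizon[idx]
```

If the packet carrying the horizon is delayed by one sampling period, the actuator applies the one-step-ahead prediction rather than a stale value, which is the essence of the compensation.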
Abstract:
Information overload has become a serious issue for web users. Personalisation can provide effective solutions to overcome this problem. Recommender systems are one popular personalisation tool that helps users deal with this issue. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affect the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides new possible solutions for profiling users. Folksonomy, or tag information, is a typical kind of Web 2.0 information. Folksonomy implies users' topic interests and opinion information. It has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise, such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solve the tag quality problem and profile users accurately. Harvesting the wisdom of crowds and experts, three new user profiling approaches are proposed: a folksonomy-based user profiling approach, a taxonomy-based user profiling approach, and a hybrid user profiling approach based on folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user- and item-based collaborative filtering approaches, combined with content filtering methods, are proposed to make recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness evaluation experiments were conducted on two real-world datasets collected from the Amazon.com and CiteULike websites.
The experimental results demonstrate that the proposed user profiling and recommendation approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability evaluation experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to the effective use of the wisdom of crowds and experts to help users overcome information overload by providing more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to better use of the taxonomy information given by experts and the folksonomy information contributed by users in Web 2.0.
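As a rough sketch of how tag-based profiles can drive user-based collaborative filtering, the snippet below represents each user profile as a sparse tag-to-weight dictionary, scores candidate items by similarity-weighted votes from other users, and returns the top-ranked unseen items. The data layout and function names are our illustrative assumptions, not the thesis's exact formulation.

```python
import math

def cosine(p, q):
    """Cosine similarity between two sparse tag-weight profiles."""
    dot = sum(w * q.get(t, 0.0) for t, w in p.items())
    norm = math.sqrt(sum(w * w for w in p.values())) * \
           math.sqrt(sum(w * w for w in q.values()))
    return dot / norm if norm else 0.0

def recommend(target, profiles, liked_items, top_n=2):
    """Rank items liked by other users, weighted by profile similarity,
    excluding items the target user has already seen."""
    scores = {}
    for user, profile in profiles.items():
        if user == target:
            continue
        sim = cosine(profiles[target], profile)
        for item in liked_items.get(user, []):
            if item not in liked_items.get(target, []):
                scores[item] = scores.get(item, 0.0) + sim
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```

A hybrid approach of the kind described above would additionally blend content (taxonomy) similarity into the score; this sketch shows only the collaborative half.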
Abstract:
Traffic oscillations are typical features of congested traffic flow that are characterized by recurring decelerations followed by accelerations (stop-and-go driving). The negative environmental impacts of these oscillations are widely accepted, but their impact on traffic safety has been debated. This paper describes the impact of freeway traffic oscillations on traffic safety. The study employs a matched case-control design using high-resolution traffic and crash data from a freeway segment. Traffic conditions prior to each crash were taken as cases, while traffic conditions during the same periods on days without crashes were taken as controls; these were also matched by the presence of congestion, geometry and weather. A total of 82 cases and about 80,000 candidate controls were extracted from more than three years of data, from 2004 to 2007. Conditional logistic regression models were developed based on the case-control samples. To verify consistency in the results, 20 different sets of controls were randomly extracted from the candidate pool for varying control-to-case ratios. The results reveal that the standard deviation of speed (and thus oscillation) is a significant variable, with an average odds ratio of about 1.08. This implies that the likelihood of a (rear-end) crash increases by about 8% with each additional unit increase in the standard deviation of speed. The average traffic states prior to crashes were less significant than the speed variations in congestion.
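The reported odds ratio compounds multiplicatively across units of speed variation; the following small calculation (ours, for illustration) makes the implied effect explicit for larger increases in the standard deviation of speed.

```python
def odds_multiplier(odds_ratio, delta_sd):
    """Multiplicative change in crash odds for a delta_sd-unit
    increase in the standard deviation of speed."""
    return odds_ratio ** delta_sd

# A 1-unit increase raises the odds by ~8%;
# a 5-unit increase raises them by ~47% (1.08**5 ~= 1.469).
```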
Abstract:
Item folksonomy, or tag information, is a typical and prevalent kind of Web 2.0 information. Item folksonomy contains rich user opinion information on item classifications and descriptions. It can be used as another important information source for opinion mining. On the other hand, each item is associated with taxonomy information that reflects the viewpoints of experts. In this paper, we propose to mine users' opinions on items based on item taxonomy developed by experts and folksonomy contributed by users. In addition, we explore how to make personalised item recommendations based on users' opinions. Experiments conducted on real-world datasets collected from Amazon.com and CiteULike demonstrate the effectiveness of the proposed approaches.
Abstract:
Traffic oscillations are typical features of congested traffic flow that are characterized by recurring decelerations followed by accelerations. However, knowledge of this complex topic remains limited. In this research: 1) the impact of traffic oscillations on freeway crash occurrence is measured using a matched case-control design; the results consistently reveal that oscillations have a more significant impact on freeway safety than the average traffic states. 2) The wavelet transform is adopted to locate the origins of oscillations and to measure their characteristics along their propagation paths using vehicle trajectory data. 3) The impact of lane-changing manoeuvres on the immediate follower is measured and modelled. The knowledge and the new models generated by this study could provide a better understanding of the fundamentals of congested traffic; enable improvements to existing traffic control strategies and freeway crash countermeasures; and motivate the development of new operational strategies aimed at reducing the negative effects of oscillatory driving.
Abstract:
We investigate known security flaws in the context of security ceremonies to gain an understanding of the ceremony analysis process. The term security ceremony describes a system of protocols and humans that interact for a specific purpose. Security ceremonies and ceremony analysis form an area of research in its infancy, and we explore the basic principles involved to better understand the issues. We analyse three ceremonies, HTTPS, EMV and Opera Mini, and use the information gained from the experience to establish a list of typical flaws in ceremonies. Finally, we use that list to analyse a protocol proven secure for human use. This leads to a realisation of the strengths and weaknesses of ceremony analysis.
Abstract:
Newberyite, Mg(PO3OH)·3H2O, is a mineral found in caves such as Moorba Cave (Jurien Bay, Western Australia), the Skipton lava tubes (SW of Ballarat, Victoria, Australia) and Petrogale Cave (Madura, Eucla, Western Australia). Because these minerals contain oxyanions, hydroxyl units and water, they lend themselves to spectroscopic analysis. Raman spectroscopy can investigate the complex paragenetic relationships existing between a number of 'cave' minerals. The intense sharp band at 982 cm-1 is assigned to the PO43- ν1 symmetric stretching mode. Low-intensity Raman bands at 1152, 1263 and 1277 cm-1 are assigned to the PO43- ν3 antisymmetric stretching vibrations. Raman bands at 497 and 552 cm-1 are attributed to the PO43- ν4 bending modes. An intense Raman band for newberyite at 398 cm-1, with a shoulder at 413 cm-1, is assigned to the PO43- ν2 bending modes. The positions of the OH stretching vibrations yield hydrogen bond distances of 2.728 Å (3267 cm-1), 2.781 Å (3374 cm-1), 2.868 Å (3479 cm-1) and 2.918 Å (3515 cm-1). Such hydrogen bond distances are typical of secondary minerals. Estimates of the hydrogen-bond distances made from the positions of the OH stretching vibrations show a wide range of both strong and weak bonds.
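The quoted band positions and hydrogen-bond distances are consistent with the empirical Libowitzky (1999) correlation nu = 3592 - 304e9 * exp(-d / 0.1321) between OH stretching wavenumber (cm-1) and O-H...O distance (Angstrom); the sketch below inverts that correlation to recover a distance from a band position. This is our illustration of the standard correlation, not a method stated in the abstract.

```python
import math

def oh_distance_from_wavenumber(nu_cm1):
    """O-H...O distance (Angstrom) from an OH stretching wavenumber
    (cm-1), inverting nu = 3592 - 304e9 * exp(-d / 0.1321)."""
    return -0.1321 * math.log((3592.0 - nu_cm1) / 304.0e9)

# e.g. the 3267 cm-1 band corresponds to a hydrogen bond of ~2.73 Angstrom.
```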
Abstract:
Early detection surveillance programs aim to find invasions of exotic plant pests and diseases before they are too widespread to eradicate. However, the value of these programs can be difficult to justify when no positive detections are made. To demonstrate the value of pest absence information provided by these programs, we use a hierarchical Bayesian framework to model estimates of incursion extent with and without surveillance. A model for the latent invasion process provides the baseline against which surveillance data are assessed. Ecological knowledge and pest management criteria are introduced into the model using informative priors for invasion parameters. Observation models assimilate information from spatio-temporal presence/absence data to accommodate imperfect detection and generate posterior estimates of pest extent. When applied to an early detection program operating in Queensland, Australia, the framework demonstrates that this typical surveillance regime provides a modest reduction in the estimated probability that a surveyed district is infested. More importantly, the model suggests that early detection surveillance programs can provide a dramatic reduction in the putative area of incursion and therefore offer a substantial benefit to incursion management. By mapping spatial estimates of the point probability of infestation, the model identifies where future surveillance resources can be most effectively deployed.
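The effect of negative surveillance on the estimated probability of infestation can be illustrated with a toy Bayes update, assuming independent surveys with a fixed per-survey detection sensitivity. The prior and sensitivity values below are illustrative assumptions, not estimates from the Queensland program.

```python
def posterior_infested(prior, sensitivity, n_negative_surveys):
    """P(infested | n negative surveys), assuming surveys are independent
    and each detects an infestation with the given sensitivity."""
    miss = (1.0 - sensitivity) ** n_negative_surveys  # P(all surveys miss)
    return prior * miss / (prior * miss + (1.0 - prior))

# e.g. with a prior of 0.10 and per-survey sensitivity of 0.3, five
# negative surveys shrink the estimate from 10% to about 1.8%.
```

Note how weak per-survey sensitivity still accumulates into a substantial reduction, which mirrors the abstract's point that absence data carry real value.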
Abstract:
Circuit-breakers (CBs) are subject to electrical stresses with restrikes during capacitor bank operation. Stresses are caused by the overvoltages across CBs, the interrupting currents and the rate of rise of recovery voltage (RRRV). Such electrical stresses also depend on the type of system grounding and the type of dielectric strength curve. The aim of this study is to demonstrate a restrike waveform predictive model for an SF6 CB that considers both grounded and non-grounded systems, compares the computation accuracy obtained with the cold withstand and hot recovery dielectric strength curves, and includes point-on-wave (POW) recommendations for assessing the remaining life of the CB. The stresses on an SF6 CB in a typical 400 kV system were simulated and the results are presented. The simulated restrike waveforms, with features identified using the wavelet transform, can be used to develop a restrike diagnostic algorithm for locating a substation with breaker restrikes. This study found that, in the restrike simulation results, the hot withstand dielectric strength curve has a lower magnitude than the cold withstand dielectric strength curve. Computation accuracy improved with the hot withstand dielectric strength curve, and POW-controlled switching can extend the life of an SF6 CB.
Abstract:
Compared with viewing videos on PCs or TVs, mobile users have a different experience when viewing videos on a mobile phone, due to device features such as screen size and distinct usage contexts. To understand how a mobile user's viewing experience is affected, we conducted a field user study with 42 participants in two typical usage contexts using a custom-designed iPhone application. With the user's acceptance of mobile video quality as the index, the study addresses four aspects that influence user experience: context, content type, encoding parameters and user profiles. Accompanying the quantitative method (acceptance assessment), we used a qualitative interview method to obtain a deeper understanding of users' assessment criteria and to support the quantitative results from a user's perspective. Based on the results of the data analysis, we advocate two user-driven strategies to adaptively provide an acceptable quality and to predict a good user experience, respectively. This paper makes two main contributions. Firstly, the field user study allows more influencing factors to be considered in research on the user experience of mobile video, and these influences are further demonstrated by users' opinions. Secondly, the proposed strategies, user-driven acceptance threshold adaptation and user experience prediction, will be valuable in mobile video delivery for optimising user experience.
Abstract:
While recent research has provided valuable information as to the composition of laser printer particles and their formation mechanisms, and explained why some printers are high emitters whilst others are low emitters, fundamental questions relating to the potential exposure of office workers remain unanswered. In particular: (i) what impact does the operation of laser printers have on the background particle number concentration (PNC) of an office environment over the duration of a typical working day; (ii) what is the airborne particle exposure of office workers in the vicinity of laser printers; (iii) what influence does the office ventilation have upon the transport and concentration of particles; (iv) is there a need to control the generation and/or transport of particles arising from the operation of laser printers within an office environment; (v) what instrumentation and methodology are relevant for characterising such particles within an office location? We present experimental evidence on temporal and spatial PNC during the operation of 107 laser printers within open-plan offices of five buildings. We show for the first time that the eight-hour time-weighted average printer particle exposure is significantly less than the eight-hour time-weighted local background particle exposure, but that peak printer particle exposure can be more than two orders of magnitude higher than local background particle exposure. The particle size range is predominantly ultrafine (<100 nm diameter). In addition, we have established that office workers are constantly exposed to non-printer-derived particle concentrations, with up to an order of magnitude difference in such exposure amongst offices, and propose that this exposure be controlled along with exposure to printer-derived particles.
We also propose, for the first time, that peak particle reference values be calculated for each office area, analogous to the criteria used in Australia and elsewhere for evaluating exposure excursions above occupational hazardous chemical exposure standards. A universal peak particle reference value of 2.0 × 10^4 particles cm-3 is proposed.
Abstract:
The emergence of Twenty20 cricket at the elite level has been marketed on the excitement of the big hitter, where it seems that winning is a result of the muscular batter hitting boundaries at will. This version of the game has captured the imagination of many young players who all want to score runs with "big hits". However, in junior cricket, boundary hitting is often more difficult due to the size limitations of children and games played on outfields where the ball does not travel quickly. As a result, winning is often achieved via a less spectacular route: by scoring more singles than your opponents. However, most standard coaching texts only describe how to play boundary-scoring shots (e.g. the drives, pulls, cuts and sweeps) and defensive shots to protect the wicket. Learning to bat appears to have been reduced to extremes of force production, i.e. maximal force production to hit boundaries or minimal force production to stop the ball from hitting the wicket. Initially, this is not a problem because the typical innings of a young player (<12 years) is based on the concept of "block" or "bash": they "block" the good balls and "bash" the short balls. This approach works because there are many opportunities to hit boundaries off the numerous inaccurate deliveries of novice bowlers. Most runs are scored behind the wicket by using the pace of the bowler's delivery to re-direct the ball, because the intrinsic dynamics (i.e. lack of strength) of most children mean that they can only create sufficient power by playing shots where the whole body can contribute to force production. This method works well until the novice player comes up against more accurate bowling and finds they have no way of scoring runs. Once batters begin to face "good" bowlers, they have to learn to score runs via singles. In cricket coaching manuals (e.g. ECB, n.d.), running between the wickets is treated as a task separate from batting, and the "basics" of running, such as how to "back up", carry the bat, call, turn and slide the bat into the crease, are "drilled" into players. This task decomposition strategy focussing on techniques is a common approach to skill acquisition in many highly traditional sports, typified in cricket by activities where players hit balls off tees and receive "throw-downs" from coaches. However, the relative usefulness of these approaches for the acquisition of sporting skills is increasingly being questioned (Pinder, Renshaw & Davids, 2009). We will discuss why this is the case in the next section.
Abstract:
Recently, a constraints-led approach has been promoted as a framework for understanding how children and adults acquire movement skills for sport and exercise (see Davids, Button & Bennett, 2008; Araújo et al., 2004). The aim of a constraints-led approach is to identify the nature of interacting constraints that influence skill acquisition in learners. In this chapter the main theoretical ideas behind a constraints-led approach are outlined to assist practical applications by sports practitioners and physical educators in a non-linear pedagogy (see Chow et al., 2006, 2007). To achieve this goal, this chapter examines implications for some of the typical challenges facing sport pedagogists and physical educators in the design of learning programmes.
Abstract:
Columns are one of the key load-bearing elements that are highly susceptible to vehicle impacts. The resulting severe damage to columns may lead to failures of the supporting structure that are catastrophic in nature. However, the columns in existing structures are seldom designed for impact, owing to inadequacies in design guidelines. The impact behaviour of columns designed for gravity loads and actions other than impact is therefore of interest. A comprehensive investigation is conducted on reinforced concrete columns, with a particular focus on the vulnerability of exposed columns and on mitigation techniques under low- to medium-velocity car and truck impacts. The investigation is based on non-linear explicit computer simulations of impacted columns followed by a comprehensive validation process. The impact is simulated using force pulses generated from full-scale vehicle impact tests. A material model capable of simulating triaxial loading conditions is used in the analyses. Circular columns adequate in capacity for five- to twenty-storey buildings, designed according to Australian standards, are considered in the investigation. The crucial parameters associated with routine column designs and the different load combinations applied at the serviceability stage on typical columns are considered in detail. Axially loaded columns are examined at the initial stage, and the investigation is extended to analyse the impact behaviour under single-axis bending and biaxial bending. The impact capacity reduction under varying axial loads is also investigated. The effects of the various load combinations are quantified, and the residual capacity of the impacted columns based on the status of the damage, together with mitigation techniques, is also presented.
In addition, the contribution of each individual parameter to the failure load is scrutinised, and analytical equations are developed to identify the critical impulses in terms of the geometrical and material properties of the impacted column. In particular, an innovative technique was developed and introduced to improve the accuracy of the equations where other techniques fail due to the shape of the error distribution. Above all, the equations can be used to quantify the critical impulse for three consecutive points (load combinations) located on the interaction diagram for one particular column. Consequently, linear interpolation can be used to quantify the critical impulse for loading points located in between on the interaction diagram. Given a known force and impulse pair for an average impact duration, this method can be extended to assess the vulnerability of columns for a general vehicle population, based on an analytical method that can quantify the critical peak forces under different impact durations. Therefore the contribution of this research is not limited to producing simplified yet rational design guidelines and equations; it also provides a comprehensive solution for quantifying impact capacity while delivering new insight to the scientific community on dealing with impacts.
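The interpolation step described above amounts to ordinary linear interpolation between load points on the interaction diagram for which critical impulses are known. The sketch below uses hypothetical axial-load and impulse values purely for illustration; they are not results from the thesis.

```python
def interpolate_critical_impulse(axial_load, known_points):
    """known_points: list of (axial_load, critical_impulse) pairs for the
    same column, sorted by axial load; returns the linearly interpolated
    critical impulse at the requested axial load."""
    for (p0, i0), (p1, i1) in zip(known_points, known_points[1:]):
        if p0 <= axial_load <= p1:
            frac = (axial_load - p0) / (p1 - p0)
            return i0 + frac * (i1 - i0)
    raise ValueError("axial load outside the tabulated range")

# Hypothetical tabulated points (kN, kN·s) for one column:
points = [(0.0, 10.0), (500.0, 16.0), (1000.0, 20.0)]
```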
Abstract:
In this paper, I focus on the growing "nonsense industry" which is most apparent in the writing typical of business, government departments, and the financial press. This writing, like technical writing, is characterised by heavy reliance on grammatical metaphor. It endows shibboleths - for instance, "globalisation"; "efficiencies"; "competition"; "modernisation"; "consumer sentiment"; "reform"; and so on - with anthropomorphic qualities. These anthropomorphic artefacts of technocratised language are then presented as having immutable powers over people. Thus they become banal public excuses for negligent practices in both business and government.