Abstract:
At a quite fundamental level, the very way in which Public Service Broadcasting (PSB) may envisage its future, usually captured in the semantic shift from PSB to Public Service Media (PSM), is at stake when considering the recent history of public value discourse and the public value test. The core Reithian PSB idea assumed that public value would be created through the application of core principles of universality of availability and appeal, provision for minorities, education of the public, distance from vested interests, quality programming standards, program maker independence, and fostering of national culture and the public sphere. On the other hand, the philosophical import of the public value test is that potentially any excursion into the provision of new media services needs to be justified ex ante. In this era of New Public Management, greater transparency and accountability, and the proposition that resources for public value deliverables be contestable and not sequestered in public sector institutions, what might be the new Archimedean point around which a contemporised normativity for PSM can be built? This paper will argue for the innovation imperative as an organising principle for contemporary PSM. This may appear counterintuitive, as it is precisely PSB's predilection for innovating in new media services (in online, mobile, and social media) that has produced the constraining apparatus of the ex ante/public value/Drei-Stufen-Test in Europe, based on principles of competitive neutrality and transparency in the application of public funds for defined and limited public benefit. However, I argue that a commitment to innovation can frame the new products and services that PSM can, and should, be delivering into a post-scarcity, superabundant all-media marketplace as complementary to, rather than as competitively 'crowding out', commercial provision. The evidence presented in this paper for this argument is derived mostly from analysis of PSM in the Australian media ecology. While no PSB outside Europe is subject to a formal public value test, the crowding out arguments are certainly run in Australia, particularly by powerful commercial interests for whom free news is a threat to monetising quality news journalism. Take right-wing opinion leader, herself a former ABC Board member, Judith Sloan: '… the recent expansive nature of the ABC – all those television stations, radio stations and online offerings – is actually squeezing activity that would otherwise be undertaken by the private sector. From partly correcting market failure, the ABC is now causing it. We are now dealing with a case of unfair competition and wasted taxpayer funds' (The Drum, 1 August, http://www.abc.net.au/unleashed/2818220.html). But I argue that the crowding out argument is difficult to sustain in Australia because of the PSBs' non-dominant position and the fact that much of the innovation generated by the two PSBs, the ABC and the SBS, has not been imitated by, or competed for by, the commercial broadcasters. The paper will bring cases forward, such as SBS's Go Back to Where You Came From (2011) as an example of product innovation, and a case study of process and organisational innovation which has also resulted in specific product and service innovation: the ABC's Innovation Unit. In summary, at least some of the old Reithian dicta, along with spectrum scarcity and market failure arguments, have faded or are fading. Contemporary PSM need to justify their role in the system, and to society, in terms of innovation.
Abstract:
This paper presents an adaptive metering algorithm for enhancing the electronic screening (e-screening) operation at truck weight stations. The algorithm uses a feedback control mechanism to regulate the number of trucks entering the weight station. In its basic operation, the algorithm allows more trucks to be inspected when the weight station is underutilized by lowering the weight threshold. Conversely, when the station is overutilized, the algorithm restricts the number of trucks to be inspected in order to prevent queue spillover. The proposed control concept is demonstrated and evaluated in a simulation environment. The simulation results demonstrate the considerable benefits of the proposed algorithm in improving overweight enforcement with minimal negative impacts on non-overweight trucks. The test results also reveal that the effectiveness of the algorithm improves with higher truck participation rates in the e-screening program.
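To make the feedback idea concrete, the sketch below adjusts a weigh-in-motion screening threshold in proportion to the gap between measured station occupancy and a target. The update rule, gains, bounds, and names are illustrative assumptions, not the algorithm specified in the paper.

```python
# Hypothetical feedback-metering loop for e-screening (illustrative only).

class AdaptiveMeter:
    def __init__(self, threshold_kg=40000.0, gain_kg=5000.0,
                 target_occupancy=0.8, min_kg=30000.0, max_kg=50000.0):
        self.threshold = threshold_kg   # current weigh-in-motion screening threshold
        self.gain = gain_kg             # kg of threshold change per unit occupancy error
        self.target = target_occupancy  # desired utilization of the inspection station
        self.min_kg, self.max_kg = min_kg, max_kg

    def update(self, occupancy):
        """Adjust the threshold from measured station occupancy (0..1).

        Underutilized (occupancy < target): the threshold drops, pulling in
        more trucks for inspection. Overutilized: the threshold rises,
        preventing queue spillover.
        """
        self.threshold += self.gain * (occupancy - self.target)
        self.threshold = min(max(self.threshold, self.min_kg), self.max_kg)
        return self.threshold

    def pull_in(self, wim_weight_kg):
        """Decide whether a truck's weigh-in-motion reading warrants inspection."""
        return wim_weight_kg >= self.threshold
```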
Abstract:
This work experimentally examines the performance benefits of a regional CORS network for the GPS orbit and clock solutions supporting real-time Precise Point Positioning (PPP). The regionally enhanced GPS precise orbit solutions are derived from a global, evenly distributed CORS network augmented with a densely distributed network in Australia and New Zealand. A series of computational schemes for different network configurations are adopted in the GAMIT-GLOBK and PANDA data processing. The precise GPS orbit results show that the regionally enhanced solutions achieve overall orbit improvements with respect to the solutions derived from the global network only. Additionally, the orbital differences over GPS satellite arcs that are visible from any of the five Australia-wide CORS stations show a higher percentage of overall improvement compared to the satellite arcs that are not visible from these stations. The regional GPS clock and Uncalibrated Phase Delay (UPD) products are derived using the PANDA real-time processing module from Australian CORS networks of 35 and 79 stations, respectively. Analysis of PANDA kinematic PPP and kinematic PPP-AR solutions shows certain overall improvements in positioning performance from a denser network configuration after solution convergence. However, the clock and UPD enhancement on kinematic PPP solutions is marginal. It is suggested that other factors, such as ionospheric effects and incorrectly fixed ambiguities, may be more dominant, and these deserve further research attention.
Abstract:
Modernized GPS and GLONASS, together with the new GNSS systems BeiDou and Galileo, offer code and phase ranging signals on three or more carriers. Traditionally, dual-frequency code and/or phase GPS measurements are linearly combined to eliminate ionospheric delay effects in various positioning and analysis tasks. This typical treatment has limitations in processing signals at three or more frequencies from more than one system, and can hardly be adapted to cope with the boom in receivers tracking a broad variety of signals. In this contribution, a generalized positioning model that is independent of the navigation system and unrelated to the number of carriers is proposed, suitable for both single- and multi-site data processing. For the synchronization of different signals, uncalibrated signal delays (USD) are defined in a more general way to compensate for the signal-specific offsets in code and phase observations, respectively. In addition, ionospheric delays are included in the parameterization with elaborate consideration. Based on an analysis of the algebraic structures, this generalized positioning model is further refined with a set of proper constraints to regularize the datum deficiency of the observation equation system. With this new model, uncalibrated signal delays (USD) and ionospheric delays are derived for both GPS and BeiDou with a large data set. Numerical results demonstrate that, with a limited number of stations, the uncalibrated code delays (UCD) are determined to a precision of about 0.1 ns for GPS and 0.4 ns for BeiDou signals, while the uncalibrated phase delays (UPD) for L1 and L2 are generated with 37 stations evenly distributed in China for GPS with a consistency of about 0.3 cycles. Further experiments concerning the performance of this novel model in point positioning with mixed frequencies from mixed constellations are analyzed, in which the USD parameters are fixed to our generated values. The results are evaluated in terms of both positioning accuracy and convergence time.
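For reference, the undifferenced multi-frequency code and phase observations that such a model generalizes are commonly written as below; the notation is a standard textbook form assumed here for illustration, not the paper's own parameterization.

```latex
% Code (P) and phase (L) observations for receiver r, satellite s, frequency j,
% with uncalibrated code delays d and phase delays b (standard notation, assumed):
\begin{align}
  P_{r,j}^{s} &= \rho_{r}^{s} + c\,(\mathrm{d}t_{r} - \mathrm{d}t^{s}) + T_{r}^{s}
                 + \mu_{j} I_{r}^{s} + d_{r,j} - d_{j}^{s} + e_{r,j}^{s},\\
  L_{r,j}^{s} &= \rho_{r}^{s} + c\,(\mathrm{d}t_{r} - \mathrm{d}t^{s}) + T_{r}^{s}
                 - \mu_{j} I_{r}^{s} + \lambda_{j}\bigl(N_{r,j}^{s} + b_{r,j} - b_{j}^{s}\bigr)
                 + \varepsilon_{r,j}^{s},
  \qquad \mu_{j} = f_{1}^{2}/f_{j}^{2}.
\end{align}
```

Because clocks, USDs and ionospheric delays are not all separately observable in this system, a set of minimal constraints (for example, a zero-mean condition across satellites) is required to remove the datum deficiency, which is the role of the constraints described above.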
Abstract:
Tilting-pad hydrodynamic thrust bearings are used in hydroelectric power stations around the world, reliably supporting turbines weighing hundreds of tonnes over decades of service. Newer designs incorporate hydrostatic recesses machined into the sector-shaped pads to enhance oil film thickness at low rotational speeds. External pressurisation practically eliminates wear and enhances service life and reliability. It follows that older generating plants, lacking such assistance, stand to benefit from being retrofitted with hydrostatic lubrication systems. The design process is not trivial, however. The need to increase the recess size to permit spontaneous lifting of the turbine under hydrostatic pressure conflicts with the need to preserve the performance of the original plane-pad design. A haphazardly designed recess can induce a significant rise in bearing temperature, concomitant with reduced mechanical efficiency and a risk of thermal damage. In this work, a numerical study of a sector-shaped pad is undertaken to demonstrate how recess size and shape can affect the performance of a typical bearing.
Abstract:
Nick Herd begins his institutional history of Australian commercial television in the early 1890s, when an amateur inventor named Henry Sutton designed the 'telephane' with the intent of watching the Melbourne Cup in his home town of Ballarat. The 'race that stops a nation' was not broadcast live on television until 1960, but Sutton's initiative indicates how closely sport and television were aligned in Australia even before the medium existed. The first licensed commercial stations to begin regular broadcasting went on air in Sydney and Melbourne shortly before the 1956 Melbourne Olympic Games, although Herd claims that this was 'almost accidental' rather than planned (49). Only Melbourne viewers were able to see some events live, many via television sets in Ampol service stations following the company's last-minute sponsorship of coverage on Melbourne station GTV-9...
Abstract:
Nitrous oxide emissions from soil are known to be spatially and temporally volatile. Reliable estimation of emissions over a given time and space depends on measuring with sufficient intensity, but deciding on the number of measuring stations and the frequency of observation can be vexing. The question also arises of whether low-frequency manual observations can provide results comparable to high-frequency automated sampling. Data collected from a replicated field experiment were studied intensively with the intention of giving some statistically robust guidance on these issues. In the experiment, nitrous oxide soil-to-air flux was monitored within 10 m by 2.5 m plots, by automated closed chambers at a 3 h average sampling interval and by manual static chambers at a three-day average sampling interval, over sixty days. Trends in flux over time observed by the static chambers were mostly within the automated chambers' bounds of experimental error. Cumulative nitrous oxide emissions as measured by each system were also within error bounds. Under the temporal response pattern in this experiment, no significant loss of information was observed after culling the data to simulate results under various low-frequency scenarios. Within the confines of this experiment, observations from the manual chambers were not spatially correlated above distances of 1 m. Statistical power was therefore found to improve with increased replicates per treatment or chambers per replicate. Careful after-action review of experimental data can deliver savings for future work.
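The culling comparison described above can be illustrated with a short resampling sketch: integrate a high-frequency flux series, thin it to a lower sampling frequency, and compare the cumulative emissions. The data and intervals below are synthetic assumptions, not the experiment's measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
t_hours = np.arange(0, 60 * 24, 3.0)                # 60 days sampled every 3 h
flux = np.abs(rng.gamma(2.0, 10.0, t_hours.size))   # synthetic N2O flux series

def cumulative_emission(t, f):
    """Cumulative emission by trapezoidal integration of flux over time."""
    return np.trapz(f, t)

full = cumulative_emission(t_hours, flux)
# Cull to one observation every 3 days (every 24th point of the 3 h series).
idx = np.arange(0, t_hours.size, 24)
culled = cumulative_emission(t_hours[idx], flux[idx])
print(f"relative difference after culling: {abs(full - culled) / full:.1%}")
```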
Abstract:
Rail operators recognize a need to increase ridership in order to improve the economic viability of rail service, and to magnify the role that rail travel plays in making cities liveable. This study extends previous research that used cluster analysis with a small sample of rail passengers to identify five salient perspectives of rail access (Zuniga et al., 2013). In this project stage, we used correlation techniques to determine how those perspectives would resonate with two larger study populations: a relatively homogeneous sample of university students in Brisbane, Australia, and a diverse sample of rail passengers in Melbourne, Australia. Findings from Zuniga et al. (2013) described a complex typology of current passengers based on respondents' subjective attitudes and perceptions, rather than the socio-demographic or travel behaviour characteristics commonly used for segmentation analysis. The typology included five qualitative perspectives of rail travel. Based on the transport accessibility literature, we expected to find that perspectives from that study emphasizing physical access to rail stations would be shared by current and potential rail passengers who live further from rail stations. Other perspectives might be shared among respondents who live nearby, since the relevance of distance would be diminished. The population living nearby would thus represent an important target group for increasing ridership, since making rail travel accessible to them does not require expansion of costly infrastructure such as new lines or stations. By measuring the prevalence of each perspective in a larger respondent pool, results from this study provide insight into the typical socio-demographic and travel behaviour characteristics that correspond to each perspective of intra-urban rail travel. In several instances, our quantitative findings reinforced Zuniga et al.'s (2013) qualitative descriptions of passenger types, further validating the original research. This work may directly inform rail operators' approach to increasing ridership through marketing and improvements to service quality and station experience. Operators in other parts of Australia and internationally may also choose to replicate the study locally, to fine-tune their understanding of diverse customer bases. Developing regional and international collaboration would provide additional opportunities to evaluate and benchmark service and station amenities as they address the various access dimensions.
Abstract:
The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings from research studies, due to the unsatisfactory selection of first flush indicators and of how first flush is defined. The research study discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study. These indicators were selected to explain, in a systematic manner, the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded from stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia. In addition, historical data were used to support the data analysis. Three water quality parameters were analysed, namely total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using the multi-criteria decision-making methods PROMETHEE and GAIA. Based on the data obtained, the pollutant load distribution curve (LV) was determined for the individual rainfall events and pollutant types. Accordingly, two sets of first flush indicators were derived from the curve: the cumulative load wash-off for every 10% of runoff volume interval from the beginning of the event (interval first flush indicators, or LV) and the actual pollutant load wash-off during a 10% increment in runoff volume (section first flush indicators, or P). First flush behaviour showed significant variation with pollutant types. TSS and TP showed consistent first flush behaviour. However, the dissolved fraction of TN showed significant differences to TSS and TP first flush, while particulate TN showed similarities. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the 'short term pollutants' or 'weakly adhered solids' load. However, wash-off after 10% of the runoff volume showed dependency on the rainfall intensity. This is attributed to the wash-off of strongly adhered solids being exposed when the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth in the latter part of the event, as the strongly adhered solids are loosened by the impact of rainfall in the earlier part of the event. Events with high-intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour. This suggests that rainfall pattern plays a critical role in the occurrence of first flush. The rainfall intensity (with respect to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush. Events can demonstrate a high-magnitude first flush when the rainfall intensity occurring between 10% and 20% of the runoff volume is comparatively high, while low rainfall intensities during this period produce a low-magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume. This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of runoff volume. First flush behaviour for TN is different compared to TSS and TP. Apart from rainfall characteristics, the composition and availability of TN on the catchment also play an important role in first flush. The analysis confirmed that events with low rainfall intensity can produce a high-magnitude first flush for the dissolved fraction of TN, while high rainfall intensity produces a low dissolved TN first flush. This is attributed to the source-limiting behaviour of dissolved TN wash-off, where there is high wash-off during the initial part of a rainfall event irrespective of the intensity. However, for particulate TN, the influence of rainfall intensity on first flush characteristics is similar to TSS and TP. The data analysis also confirmed that first flush can occur as a high-magnitude first flush, a low-magnitude first flush, or the non-existence of first flush. Investigation of the influence of catchment characteristics on first flush found that the key factors influencing the phenomenon are the location of the pollutant source, the spatial distribution of pervious and impervious surfaces in the catchment, the drainage network layout, and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated based on a single or a limited set of parameters, as a number of catchment characteristics should be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, short travel times to the outlet, and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a comparatively dominant impact on the wash-off process compared to catchment characteristics. In addition, pollutant characteristics should also be taken into account in designing stormwater treatment systems, due to their different wash-off behaviour. The analysis outcomes confirmed that there is a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during the initial and the end part of a rainfall event. This is related to the composition of TP available for wash-off.
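The two indicator sets lend themselves to direct computation from an event's cumulative runoff volume and pollutant load series. The sketch below shows one way to derive both, under assumed inputs; the variable names and the synthetic event are illustrative only.

```python
import numpy as np

def first_flush_indicators(cum_volume, cum_load):
    """Interval (LV) and section (P) first flush indicators.

    LV[i] = normalised cumulative load washed off by the (i+1)-th 10% of
    runoff volume; P[i] = load washed off within that 10% slice alone.
    """
    v = cum_volume / cum_volume[-1]            # normalised cumulative volume
    m = cum_load / cum_load[-1]                # normalised cumulative load
    fractions = np.arange(0.1, 1.01, 0.1)
    LV = np.interp(fractions, v, m)            # cumulative load at each 10%
    P = np.diff(np.concatenate(([0.0], LV)))   # incremental load per slice
    return LV, P

# Synthetic event: a concave load-volume curve indicates first flush.
vol = np.linspace(0.0, 100.0, 200)
load = 100.0 * (vol / 100.0) ** 0.7
LV, P = first_flush_indicators(vol, load)
```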
Abstract:
Novel computer vision techniques have been developed for the automatic monitoring of crowded environments such as airports, railway stations and shopping malls. Using video feeds from multiple cameras, the techniques enable crowd counting, crowd flow monitoring, queue monitoring and abnormal event detection. The outcome of the research is useful for surveillance applications and for obtaining operational metrics to improve business efficiency.
Abstract:
Stations on Bus Rapid Transit (BRT) lines ordinarily control line capacity because they act as bottlenecks. At stations with passing lanes, congestion may occur when buses maneuvering into and out of the platform stopping lane interfere with bus flow, or when a queue of buses forms upstream of the station, blocking inflow. We contend that, as bus inflow to the station area approaches capacity, queuing will become excessive in a manner similar to the operation of a minor movement at an unsignalized intersection. This analogy is used to treat BRT station operation and to analyze the relationship between station queuing and capacity. In the first of three stages, we conducted microscopic simulation modeling to study and analyze the operating characteristics of the station under near-steady-state conditions, through the output variables of capacity, degree of saturation and queuing. A mathematical model was then developed to estimate the relationship between average queue and degree of saturation, calibrated for a specified range of controlled scenarios of the mean and coefficient of variation of dwell time. Finally, the model was validated against the simulation results.
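As a rough illustration of the kind of queue-saturation relationship involved, the sketch below uses the Pollaczek-Khinchine mean queue for an M/G/1 server, which grows nonlinearly as the degree of saturation approaches one and depends on the coefficient of variation of service (dwell) time. This generic form is a stand-in assumption, not the calibrated model developed in the paper.

```python
def average_queue(x, cv_dwell=1.0):
    """Mean queue length vs degree of saturation x (0 <= x < 1).

    Pollaczek-Khinchine mean queue for an M/G/1 server, where cv_dwell is
    the coefficient of variation of dwell (service) time.
    """
    if not 0.0 <= x < 1.0:
        raise ValueError("steady-state form requires 0 <= x < 1")
    return x * x * (1.0 + cv_dwell ** 2) / (2.0 * (1.0 - x))

for x in (0.5, 0.8, 0.9, 0.95):
    print(f"degree of saturation {x:.2f}: average queue ~ {average_queue(x):.1f} buses")
```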
Abstract:
Currently, GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format; very little computation is carried out by the reference station itself. The existing network-based processing modes, regardless of whether they are executed in real-time or post-processed modes, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include the precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters and ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for the estimated parameters may also optionally be provided. In such a mode, nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models, and the distinction is how the user receiver software deals with corrections from the reference station solutions and with ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. With station-based solutions from three reference stations within distances of 22–103 km, the user receiver positioning results, with various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to the standard float PPP solutions without station augmentation and ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
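One way to picture the station-based solution is as a compact message of receiver-specific parameters that a data server relays to nearby PPP/RTK users. The container below is purely a hypothetical sketch of such a message; the field names, units and types are assumptions, and no standard format is implied by the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class StationSolution:
    """Hypothetical station-based solution message (illustrative only)."""
    station_id: str
    epoch: float                                    # e.g. GPS seconds of week
    receiver_clock_s: float                         # precise receiver clock
    zenith_trop_delay_m: float                      # zenith tropospheric delay
    code_biases_ns: Dict[str, float]                # differential code biases per signal
    ionosphere_m: Dict[str, float]                  # slant ionospheric delay per satellite
    ambiguities_cyc: Dict[str, float]               # ambiguity parameters per satellite
    az_el_deg: Dict[str, Tuple[float, float]]       # line-of-sight azimuth/elevation
    covariance: Optional[List[List[float]]] = None  # optional parameter covariance
```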
Abstract:
Many large-scale GNSS CORS networks have been deployed around the world to support various commercial and scientific applications. To make use of these networks for real-time kinematic positioning services, one of the major challenges is ambiguity resolution (AR) over long inter-station baselines in the presence of considerable atmospheric biases. Usually, the widelane ambiguities are fixed first, followed by the determination of the narrowlane ambiguity integers based on the ionosphere-free model, in which the widelane integers are introduced as known quantities. This paper seeks to improve AR performance over long baselines through efficient procedures for improved float solutions and ambiguity fixing. The contribution is threefold: (1) instead of using the ionosphere-free measurements, absolute and/or relative ionospheric constraints are introduced in an ionosphere-constrained model to enhance the model strength, resulting in better float solutions; (2) realistic widelane ambiguity precision is estimated by capturing the multipath effects due to the observation complexity, improving the reliability of widelane AR; (3) for narrowlane AR, partial AR is applied to a subset of ambiguities selected according to successively increasing elevation. For fixing the scalar ambiguity, a rounding method with a controllable error probability is proposed. The established ionosphere-constrained model can be efficiently solved with a sequential Kalman filter. It can either be reduced to special models simply by adjusting the variances of the ionospheric constraints, or extended with more parameters and constraints. The presented methodology is tested over seven baselines of around 100 km from the USA CORS network. The results show that the new widelane AR scheme can achieve a 99.4% successful fixing rate with a 0.6% failure rate, while the new rounding method for narrowlane AR achieves a fixing rate of 89% with a failure rate of 0.8%. In summary, AR reliability can be efficiently improved with a rigorously controllable probability of incorrectly fixed ambiguities.
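The idea of rounding a scalar ambiguity only when the failure probability is controllable can be sketched as follows. Under an unbiased normal error with standard deviation sigma, the success probability of rounding to the nearest integer is 2*Phi(1/(2*sigma)) - 1, so one may fix only when the implied failure probability stays below a tolerance. This is a generic illustration of the principle under a simplifying assumption, not the paper's exact formulation.

```python
from math import erf, sqrt

def phi(z: float) -> float:
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def try_fix(float_amb: float, sigma: float, max_fail_prob: float = 0.008):
    """Round a scalar float ambiguity only if the failure risk is acceptable."""
    p_success = 2.0 * phi(1.0 / (2.0 * sigma)) - 1.0
    if 1.0 - p_success <= max_fail_prob:
        return round(float_amb), True      # safe to fix to the nearest integer
    return float_amb, False                # keep the float value; fixing too risky

print(try_fix(12.08, 0.10))  # small sigma -> (12, True)
print(try_fix(12.45, 0.35))  # large sigma -> (12.45, False)
```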
Abstract:
This paper discusses the idea, and demonstrates an early prototype, of a novel method of interacting with security surveillance footage using natural user interfaces in place of traditional mouse and keyboard interaction. Current surveillance monitoring stations and systems provide the user with a vast array of video feeds from multiple locations on a video wall, relying on the user's ability to distinguish the locations of the live feeds from experience or from list-based key-value pairs of locations and camera IDs. During an incident, this method of interaction may cause the user to spend increased amounts of time obtaining situational and location awareness, which is counter-productive. The system proposed in this paper demonstrates how a multi-touch screen and natural interaction can enable surveillance monitoring station users to quickly identify the location of a security camera and efficiently respond to an incident.
Abstract:
Loop detectors are the oldest and most widely used traffic data source. On urban arterials, they are mainly installed for signal control. Recently, state-of-the-art Bluetooth MAC Scanners (BMS) have significantly captured the interest of stakeholders for area-wide traffic monitoring. Loop detectors provide flow, a fundamental traffic parameter, whereas BMS provide individual vehicle travel times between BMS stations. Hence, these two data sources complement each other and, if integrated, should increase the accuracy and reliability of traffic state estimation. This paper proposes a model that integrates loop and BMS data for seamless travel time and density estimation on an urban signalised network. The proposed model is validated using both real and simulated data, and the results indicate that the accuracy of the proposed model is over 90%.
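To see why the two sources complement each other, note that loop flow and BMS travel time together determine density through the fundamental relation k = q / v, with speed v obtained from the BMS travel time over a known link length. The sketch below shows this basic fusion step; it is an assumed illustration of the principle, not the estimation model proposed in the paper.

```python
def estimate_density(flow_vph: float, travel_time_s: float,
                     link_length_km: float) -> float:
    """Link density (veh/km) from loop flow and BMS travel time.

    Speed from BMS: v = L / t; density from the fundamental relation k = q / v.
    """
    speed_kmh = link_length_km / (travel_time_s / 3600.0)
    return flow_vph / speed_kmh

# Example: 900 veh/h from loops, 120 s BMS travel time on a 1.2 km link.
print(f"density ~ {estimate_density(900.0, 120.0, 1.2):.1f} veh/km")  # -> 25.0
```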