965 results for LOCATION NETWORK WWLLN
Abstract:
Severe power quality problems can arise when a large number of single-phase distributed energy resources (DERs) are connected to a low-voltage power distribution system. Due to the random location and size of DERs, it may happen that a particular phase generates more power than its load demand. In such an event, the excess power will be fed back to the distribution substation and will eventually find its way to the transmission network, causing undesirable voltage and current unbalance. As a solution to this problem, the article proposes the use of a distribution static compensator (DSTATCOM), which regulates voltage at the point of common coupling (PCC), thereby ensuring balanced current flow from and to the distribution substation. Additionally, this device can also support the distribution network in the absence of the utility connection, making the distribution system work as a microgrid. The proposals are validated through extensive digital computer simulation studies using PSCAD™.
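The unbalance described here is conventionally quantified through symmetrical components. A minimal sketch of that calculation, using illustrative per-unit phasors rather than values from the article:

```python
import numpy as np

# Fortescue (symmetrical-component) decomposition of three-phase phasors.
# The phase voltages below are invented to mimic unbalance from uneven
# single-phase DER generation; they are not data from the article.
a = np.exp(2j * np.pi / 3)  # 120-degree rotation operator

def sequence_components(va, vb, vc):
    """Return zero-, positive- and negative-sequence phasors."""
    A = np.array([[1, 1, 1],
                  [1, a, a**2],
                  [1, a**2, a]]) / 3
    return A @ np.array([va, vb, vc])

va, vb, vc = 1.00, 0.95 * a**2, 1.05 * a  # assumed per-unit phase voltages
v0, v1, v2 = sequence_components(va, vb, vc)
vuf = abs(v2) / abs(v1) * 100  # voltage unbalance factor in percent
print(f"VUF = {vuf:.2f}%")     # ~2.9% for these assumed phasors
```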
Abstract:
This thesis analysed the theoretical and ontological issues of previous scholarship concerning information technology and indigenous people. As an alternative, the thesis used the framework of actor-network theory, especially through historiographical and ethnographic techniques. The thesis revealed an assemblage of indigenous/digital enactments striving for relevance and avoiding obsolescence. It also recognised heterogeneities, including user ambivalences, oscillations, noise, non-coherences and disruptions, as part of the milieu of the daily digital lives of indigenous people. By taking heterogeneities into account, the thesis ensured that the data “speaks for itself” and that social inquiry is not overtaken by ideology and ontology.
Abstract:
Two recent decisions of the Supreme Court of New South Wales in the context of obstetric management have highlighted, firstly, the importance of keeping legible, accurate and detailed medical records and, secondly, the challenges faced by those seeking to establish causation, particularly where epidemiological evidence is relied upon...
Abstract:
Recently there has been significant interest from researchers and practitioners in the use of Bluetooth as a complementary source of transport data. However, the literature offers only a limited understanding of the Bluetooth MAC Scanner (BMS) based data acquisition process and the properties of the data being collected. This paper first provides insight into the BMS data acquisition process. Thereafter, it presents findings from analysis of real BMS data from both motorway and arterial networks in Brisbane, Australia. The knowledge gained will help researchers and practitioners understand the BMS data being collected, which is vital to the development of management and control algorithms using the data.
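One widespread use of such data (the paper itself focuses on the acquisition process) is matching MAC detections across scanner sites to estimate travel times. A hypothetical sketch with invented detection records:

```python
# Hypothetical BMS detection logs at two sites: MAC address -> detection
# time in seconds. Values are invented for illustration.
upstream = {"aa:bb:01": 100.0, "aa:bb:02": 105.0, "aa:bb:03": 111.0}
downstream = {"aa:bb:01": 160.0, "aa:bb:03": 182.0, "aa:bb:04": 190.0}

# A device seen at both sites yields one travel-time sample.
travel_times = {
    mac: downstream[mac] - t_up
    for mac, t_up in upstream.items()
    if mac in downstream and downstream[mac] > t_up
}
print(travel_times)  # {'aa:bb:01': 60.0, 'aa:bb:03': 71.0}
```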
Abstract:
The Chemistry Discipline Network has recently completed two distinct mapping exercises. The first is a snapshot of chemistry taught at 12 institutions around Australia in 2011. There were many similarities, but also important differences in the content taught and assessed at different institutions. There were also significant differences in delivery, particularly laboratory contact hours, as well as in the forms and weightings of assessment. The second exercise mapped the chemistry degrees at three institutions to the Threshold Learning Outcomes (TLOs) for chemistry. Importantly, some of the TLOs were addressed by multiple units at all institutions, while others were not met, or were met at an introductory level only. The exercise also exposed some challenges in using the TLOs as currently written.
Abstract:
A Neutral cluster and Air Ion Spectrometer (NAIS) was used to monitor the concentration of airborne ions on 258 full days between November 2011 and December 2012 in Brisbane, Australia. The air was sampled from outside a window on the sixth floor of a building close to the city centre, approximately 100 m away from a busy freeway. The NAIS detects all ions and charged particles smaller than 42 nm. It was operated in a 4 min measurement cycle, with ion data recorded at 10 s intervals over 2 min during each cycle. The data were analysed to derive the diurnal variation of small, large and total ion concentrations in the environment. We adapt the definition of Horrak et al. (2000) and classify small ions as molecular clusters smaller than 1.6 nm and large ions as charged particles larger than this size...
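A minimal sketch of the size-threshold classification and diurnal aggregation described above, using synthetic observations (the real NAIS reports concentrations per size bin; this simplified per-record form is an assumption for illustration):

```python
import numpy as np
import pandas as pd

# Synthetic ion records: small ions are clusters < 1.6 nm, large ions are
# charged particles from 1.6 nm up to the NAIS limit of 42 nm.
rng = np.random.default_rng(0)
n = 1000
obs = pd.DataFrame({
    "time": pd.date_range("2012-01-01", periods=n, freq="10s"),
    "diameter_nm": rng.uniform(0.5, 42.0, n),    # assumed ion diameters
    "concentration": rng.uniform(0, 50, n),      # assumed counts, cm^-3
})
obs["class"] = np.where(obs["diameter_nm"] < 1.6, "small", "large")

# Diurnal variation: mean concentration per hour of day for each class.
diurnal = (obs.groupby([obs["time"].dt.hour, "class"])["concentration"]
              .mean().unstack())
print(diurnal.head())
```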
Abstract:
This article asks questions about the futures of power in the network era. Two critical emerging issues are at work, with uncertain outcomes. The first is the emergence of the collaborative economy, while the second is the emergence of surveillance capabilities from civic, state and commercial sources. While both of these emerging issues are expected by many to play an important role in the future development of our societies, it is still unclear whose values and whose purposes will be furthered. This article argues that the futures of these emerging issues depend on contests for power. As such, four scenarios for the futures of power in the network era are developed using the double-variable scenario approach.
Abstract:
Network reconfiguration after a complete blackout of a power system is an essential step in power system restoration. A new node importance evaluation method is presented based on the concept of regret, and maximisation of the average importance of a path is employed as the objective in finding the optimal restoration path. A two-stage method is then presented to optimise the network reconfiguration strategy. Specifically, the restoration sequence of generating units is first optimised so as to maximise the restored generation capacity; the optimal restoration path is then selected to restore the generating nodes concerned, and the issues of selecting a serial or parallel restoration mode and of handling the failure to reconnect a transmission line are considered next. Both restoration path selection and skeleton-network determination are implemented together in the proposed method, which overcomes the shortcoming of separate decision-making in existing methods. Finally, the New England 10-unit 39-bus power system and the Guangzhou power system in South China are employed to demonstrate the basic features of the proposed method.
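A toy sketch of the path-selection objective named above: choosing the restoration path that maximises the average importance of the nodes it traverses. The graph, importance scores and endpoints are invented; the paper's regret-based importance evaluation is not reproduced here.

```python
import networkx as nx

# Small example network; node importance scores are assumed, not the
# paper's regret-based values.
G = nx.Graph()
G.add_edges_from([(1, 2), (2, 3), (1, 4), (4, 3), (3, 5)])
importance = {1: 0.9, 2: 0.4, 3: 0.8, 4: 0.7, 5: 0.6}

def avg_importance(path):
    return sum(importance[n] for n in path) / len(path)

source, target = 1, 5  # e.g. restored node to a generating node
best = max(nx.all_simple_paths(G, source, target), key=avg_importance)
print(best, round(avg_importance(best), 3))
```

Brute-force enumeration of simple paths only works for toy networks; a real implementation would need the optimisation machinery the paper describes.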
Abstract:
Automated crowd counting has become an active field of computer vision research in recent years. Existing approaches are scene-specific, as they are designed to operate in the single camera viewpoint that was used to train the system. Real-world camera networks often span multiple viewpoints within a facility, including many regions of overlap. This paper proposes a novel scene-invariant crowd counting algorithm that is designed to operate across multiple cameras. The approach uses camera calibration to normalise features between viewpoints and to compensate for regions of overlap. This compensation is performed by constructing an 'overlap map' which provides a measure of how much an object at one location is visible within other viewpoints. An investigation into the suitability of various feature types and regression models for scene-invariant crowd counting is also conducted. The features investigated include object size, shape, edges and keypoints. The regression models evaluated include neural networks, K-nearest neighbours, linear regression and Gaussian process regression. Our experiments demonstrate that accurate crowd counting was achieved across seven benchmark datasets, with optimal performance observed when all features were used and when Gaussian process regression was used. The combination of scene invariance and multi-camera crowd counting is evaluated by training the system on footage obtained from the QUT camera network and testing it on three cameras from the PETS 2009 database. Highly accurate crowd counting was observed, with a mean relative error of less than 10%. Our approach enables a pre-trained system to be deployed in a new environment without any additional training, bringing the field one step closer toward a 'plug and play' system.
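A minimal sketch of the regression step: mapping per-frame features to a crowd count with Gaussian process regression. The features and their relation to the count are synthetic stand-ins; the paper's calibration-normalised features and overlap-map weighting are not shown.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Synthetic training data: 4 assumed normalised features per frame
# (e.g. foreground size, shape, edge and keypoint counts) -> crowd count.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, size=(200, 4))
y = 50 * X[:, 0] + 10 * X[:, 2] + rng.normal(0, 1, 200)

gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X, y)
pred, std = gpr.predict(rng.uniform(0, 1, size=(5, 4)), return_std=True)
print(np.round(pred, 1), np.round(std, 2))  # counts with uncertainty
```

One reason GP regression suits this task is visible in the output: it returns a predictive standard deviation alongside each count, not just a point estimate.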
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs sees great potential for growth, with applications spanning a vast practical domain such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of optimal camera configuration determination. Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path, using a probability maximisation process with locally generated descriptions. The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types etc. of the cameras must be chosen in a way that maximises the utility of the network (e.g. maximum coverage) while meeting user requirements.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced, and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
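The ground-plane calibration idea above can be illustrated with a standard homography estimation, here via OpenCV rather than the dissertation's own method. The image/world point correspondences (robot detections and broadcast positions) are invented:

```python
import numpy as np
import cv2

# Assumed correspondences: image-plane detections of the robot (pixels)
# and the global ground-plane coordinates it broadcast (metres).
image_pts = np.array([[100, 200], [400, 210], [390, 420], [120, 430]],
                     dtype=np.float32)
world_pts = np.array([[0.0, 0.0], [3.0, 0.0], [3.0, 2.0], [0.0, 2.0]],
                     dtype=np.float32)

# Estimate the image-plane -> ground-plane homography.
H, _ = cv2.findHomography(image_pts, world_pts, cv2.RANSAC)

def image_to_ground(u, v):
    """Map an image-plane point to global ground-plane coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]  # dehomogenise

print(image_to_ground(250, 300))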
Abstract:
Current Bayesian network software packages provide a good graphical interface for users who design and develop Bayesian networks for various applications. However, the intended end-users of these networks may not necessarily find such an interface appealing, and at times it can be overwhelming, particularly when the number of nodes in the network is large. To circumvent this problem, this paper presents an intuitive dashboard which provides an additional layer of abstraction, enabling end-users to easily perform inferences over Bayesian networks. Unlike most software packages, which display the nodes and arcs of the network, the developed tool organises the nodes based on cause-and-effect relationships, making user interaction more intuitive and friendly. In addition to performing various types of inference, users can conveniently use the tool to verify the behaviour of the developed Bayesian network. The tool has been developed using the Qt and SMILE libraries in C++.
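A sketch of the kind of inference such a dashboard exposes, written with pgmpy rather than the SMILE library the tool is actually built on. The two-node cause-effect network and its probabilities are invented:

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Assumed minimal cause -> effect network.
model = BayesianNetwork([("Cause", "Effect")])
model.add_cpds(
    TabularCPD("Cause", 2, [[0.7], [0.3]]),
    TabularCPD("Effect", 2, [[0.9, 0.2], [0.1, 0.8]],
               evidence=["Cause"], evidence_card=[2]),
)

# Diagnostic inference: probability of the cause given an observed effect.
infer = VariableElimination(model)
print(infer.query(["Cause"], evidence={"Effect": 1}))
```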
Abstract:
The Macroscopic Fundamental Diagram (MFD) has been shown to exist in large urban road and freeway networks, both through theoretical methods and through real data from cities. However, hysteresis and scatter have also been found in both motorway networks and urban roads. This paper investigates how incident variables affect the scatter and shape of the MFD, using both simulated data and real data collected from the Pacific Motorway (M3) in Brisbane, Australia. Three key components of an incident are investigated based on the simulated data: incident location, incident duration and traffic demand. Results based on the simulated data indicate that MFD shape is a property not only of the network itself but also of the incident characteristics. MFDs for three types of real incidents (crash, hazard and breakdown) are explored separately. The results based on the empirical data are consistent with the simulated results. The hysteresis phenomenon occurs both upstream and downstream of the incident location, but with opposite hysteresis loops. The gradient of the MFD upstream of the incident site is greater than that downstream when traffic demand is off-peak.
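For context, a minimal sketch of how an MFD point is formed from detector data: length-weighted network average flow against average density for each time slice. The detector values and flow-density relation below are synthetic, not the paper's M3 data:

```python
import numpy as np

# Synthetic link-level detector data: n_slices time slices x n_links links.
n_links, n_slices = 30, 100
rng = np.random.default_rng(2)
density = rng.uniform(5, 60, size=(n_slices, n_links))  # veh/km/lane
flow = 1800 * density / 60 * np.exp(-density / 40)      # assumed q(k) relation

lengths = rng.uniform(0.3, 1.5, n_links)  # link lengths, km (assumed)
w = lengths / lengths.sum()

avg_density = density @ w  # length-weighted network density per slice
avg_flow = flow @ w        # length-weighted network flow per slice

# Plotting avg_flow against avg_density gives the MFD scatter; an incident
# typically appears as a hysteresis loop in this plane.
print(avg_density[:3], avg_flow[:3])
```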
Abstract:
The current state of knowledge in relation to first flush does not provide a clear understanding of the role of rainfall and catchment characteristics in influencing this phenomenon. This is attributed to inconsistent findings from research studies, due to the unsatisfactory selection of first flush indicators and of how first flush is defined. The research study discussed in this thesis provides the outcomes of a comprehensive analysis of the influence of rainfall and catchment characteristics on first flush behaviour in residential catchments. Two sets of first flush indicators are introduced in this study. These indicators were selected such that they are representative in explaining, in a systematic manner, the characteristics associated with first flush. Stormwater samples and rainfall-runoff data were collected and recorded from stormwater monitoring stations established at three urban catchments at Coomera Waters, Gold Coast, Australia. In addition, historical data were used to support the data analysis. Three water quality parameters were analysed, namely total suspended solids (TSS), total phosphorus (TP) and total nitrogen (TN). The data analyses were primarily undertaken using the multi-criteria decision making methods PROMETHEE and GAIA. Based on the data obtained, the pollutant load distribution curve (LV) was determined for individual rainfall events and pollutant types. Accordingly, two sets of first flush indicators were derived from the curve, namely, the cumulative load wash-off for every 10% of runoff volume interval from the beginning of the event (interval first flush indicators, or LV) and the actual pollutant load wash-off during a 10% increment in runoff volume (section first flush indicators, or P). First flush behaviour showed significant variation with pollutant type. TSS and TP showed consistent first flush behaviour. However, the dissolved fraction of TN showed significant differences to TSS and TP first flush, while particulate TN showed similarities. Wash-off of TSS, TP and particulate TN during the first 10% of the runoff volume showed no influence from the corresponding rainfall intensity. This was attributed to the wash-off of weakly adhered solids on the catchment surface, referred to as the "short term pollutants" or "weakly adhered solids" load. However, wash-off after 10% of the runoff volume showed dependency on the rainfall intensity. This is attributed to the wash-off of strongly adhered solids, which become exposed as the weakly adhered solids diminish. The wash-off process was also found to depend on rainfall depth in the latter part of the event, as the strongly adhered solids are loosened by the impact of rainfall in the earlier part of the event. Events with high intensity rainfall bursts after 70% of the runoff volume did not demonstrate first flush behaviour. This suggests that rainfall pattern plays a critical role in the occurrence of first flush. The rainfall intensity (relative to the rest of the event) that produces 10% to 20% of the runoff volume plays an important role in defining the magnitude of the first flush. Events can demonstrate high magnitude first flush when the rainfall intensity occurring between 10% and 20% of the runoff volume is comparatively high, while low rainfall intensities during this period produce low magnitude first flush. For events with first flush, the phenomenon is clearly visible up to 40% of the runoff volume. This contradicts the common definition that first flush only exists if, for example, 80% of the pollutant mass is transported in the first 30% of runoff volume. First flush behaviour for TN is different compared to TSS and TP. Apart from rainfall characteristics, the composition and availability of TN on the catchment also play an important role in first flush. The analysis confirmed that events with low rainfall intensity can produce high magnitude first flush for the dissolved fraction of TN, while high rainfall intensity produces low dissolved TN first flush. This is attributed to the source-limiting behaviour of dissolved TN wash-off, where there is high wash-off during the initial part of a rainfall event irrespective of the intensity. However, for particulate TN, the influence of rainfall intensity on first flush characteristics is similar to TSS and TP. The data analysis also confirmed that first flush can occur as high magnitude first flush, low magnitude first flush or non-existence of first flush. Investigation of the influence of catchment characteristics on first flush found that the key factors influencing the phenomenon are the location of the pollutant source, the spatial distribution of the pervious and impervious surfaces in the catchment, the drainage network layout and the slope of the catchment. This confirms that the first flush phenomenon cannot be evaluated based on a single or a limited set of parameters, as a number of catchment characteristics should be taken into account. Catchments where the pollutant source is located close to the outlet, with a high fraction of road surfaces, short travel time to the outlet and steep slopes, can produce a high wash-off load during the first 50% of the runoff volume. Rainfall characteristics have a comparatively dominant impact on the wash-off process compared to catchment characteristics. In addition, pollutant characteristics should also be taken into account in designing stormwater treatment systems, due to the different wash-off behaviours. Analysis outcomes confirmed that there is a high TSS load during the first 20% of the runoff volume, followed by TN, which can extend up to 30% of the runoff volume. In contrast, a high TP load can exist during both the initial and the end part of a rainfall event. This is related to the composition of TP available for wash-off.
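A small sketch of the two indicator families defined above, computed from a synthetic event: the cumulative load vs. cumulative runoff (LV) curve read off at every 10% runoff-volume interval, and the per-interval (section) loads derived from it. The runoff and load series are invented:

```python
import numpy as np

# Assumed within-event time series: runoff volume and pollutant load
# per time step (arbitrary units), front-loaded to mimic first flush.
runoff = np.array([5, 10, 15, 12, 8, 6, 4, 3, 2, 1], dtype=float)
load = np.array([40, 60, 50, 25, 12, 8, 5, 3, 2, 1], dtype=float)

cum_v = np.cumsum(runoff) / runoff.sum()  # cumulative runoff fraction
cum_l = np.cumsum(load) / load.sum()      # cumulative load fraction

grid = np.arange(0.1, 1.01, 0.1)
LV = np.interp(grid, cum_v, cum_l)        # interval indicators (cumulative)
P = np.diff(np.concatenate([[0.0], LV]))  # section indicators (per 10%)
print(np.round(LV, 2))  # LV > grid in the early intervals indicates first flush
print(np.round(P, 2))
```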
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges of the airport complex system, where there are multiple stakeholders, differing operational objectives, and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need to understand causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means of capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow, and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points of the inbound process, are found to have an $R^2$ goodness of fit of 0.9994 and 0.9982 respectively over a 10 hour test period. The utility of the framework is demonstrated on a number of usage scenarios, including real-time monitoring and 'what-if' analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
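A minimal sketch of the queuing building block such a framework couples to the BN: Poisson arrivals (exponential inter-arrival times) and exponential service, i.e. an M/M/1 queue. The rates are illustrative, not airport data:

```python
import random

random.seed(0)
lam, mu, n = 1.5, 2.0, 100_000  # assumed arrival/service rates (pax/min)

t_arrive, t_free, waits = 0.0, 0.0, []
for _ in range(n):
    t_arrive += random.expovariate(lam)      # next Poisson arrival
    start = max(t_arrive, t_free)            # wait if the server is busy
    t_free = start + random.expovariate(mu)  # exponential service completes
    waits.append(start - t_arrive)

print(sum(waits) / n)           # simulated mean queueing delay
print(lam / (mu * (mu - lam)))  # analytic M/M/1 mean wait Wq = 1.5 min
```

The memoryless property of these two distributions is what makes the queue dynamics tractable enough to embed inside a BN node.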
Abstract:
Pavlovian auditory fear conditioning involves the integration of information about an acoustic conditioned stimulus (CS) and an aversive unconditioned stimulus in the lateral nucleus of the amygdala (LA). The auditory CS reaches the LA subcortically, via a direct connection from the auditory thalamus, and also from the auditory association cortex itself. How neural modulators, especially those activated during stress, such as norepinephrine (NE), regulate synaptic transmission and plasticity in this network is poorly understood. Here we show that NE inhibits synaptic transmission in both the subcortical and cortical input pathways, but that sensory processing is biased toward the subcortical pathway. In addition, binding of NE to β-adrenergic receptors further dissociates sensory processing in the LA. These findings suggest a network mechanism that shifts the sensory balance toward the faster but more primitive subcortical input.