Abstract:
Ethernet is a key component of the standards used for digital process buses in transmission substations, namely IEC 61850 and IEEE Std 1588-2008 (PTPv2). These standards use multicast Ethernet frames that can be processed by more than one device. This presents some significant engineering challenges when implementing a sampled value process bus due to the large amount of network traffic. A system of network traffic segregation using a combination of Virtual LAN (VLAN) and multicast address filtering using managed Ethernet switches is presented. This includes VLAN prioritisation of traffic classes such as the IEC 61850 protocols GOOSE, MMS and sampled values (SV), and other protocols like PTPv2. Multicast address filtering is used to limit SV/GOOSE traffic to defined subsets of subscribers. A method to map substation plant reference designations to multicast address ranges is proposed that enables engineers to determine the type of traffic and the location of the source by inspecting the destination address. This method and the proposed filtering strategy simplify future changes to the prioritisation of network traffic, and are applicable to both process bus and station bus applications.
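The designation-to-address mapping can be illustrated with a short sketch. The IEC 61850 recommended base ranges for GOOSE (01-0C-CD-01) and SV (01-0C-CD-04) destination MAC addresses are standard; the bay-number encoding in the low octets below is a hypothetical example, not the paper's actual scheme.

```python
# Sketch: encode the traffic class in the fixed multicast prefix and a
# plant/bay number in the last two octets, so both can be read off the
# destination address. The prefixes are the IEC 61850 recommended ranges;
# the bay-number layout is an assumption for illustration.

TRAFFIC_BASE = {
    "GOOSE": (0x01, 0x0C, 0xCD, 0x01),
    "SV":    (0x01, 0x0C, 0xCD, 0x04),
}

def multicast_mac(traffic: str, bay_number: int) -> str:
    """Build a multicast destination MAC for a given traffic class and bay."""
    if not 0 <= bay_number <= 0xFFFF:
        raise ValueError("bay number must fit in two octets")
    octets = TRAFFIC_BASE[traffic] + (bay_number >> 8, bay_number & 0xFF)
    return "-".join(f"{o:02X}" for o in octets)

print(multicast_mac("SV", 0x0142))  # -> 01-0C-CD-04-01-42
```

A switch can then filter on the first four octets to separate SV from GOOSE, and on the last two to restrict traffic to a subscriber subset.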
Abstract:
This paper presents channel measurements and weather data collection experiments conducted in a rural environment for an innovative Multi-User-Single-Antenna (MUSA) MIMO-OFDM technology proposed for rural areas. MUSA MIMO-OFDM uplink channels are established by placing six user terminals (UT) around one access point (AP). Generated terrain profiles and relative received power plots are presented based on the experimental data. According to the relative received signal, MUSA MIMO-OFDM uplink channels experience temporal fading. Moreover, the correlation between the relative received power and weather variables is presented. Results show that all weather variables exhibit a negative average correlation with received power. Wind speed records the highest average negative correlation coefficient of -0.35. Local maxima of negative correlation, with magnitudes ranging from 0.49 to 0.78, between the weather variables and relative received signals were registered between 5-6 a.m. The highest measured correlation (-0.78) at this time of day was exhibited by wind speed. These results show the extent of the time variation effects experienced by MUSA MIMO-OFDM channels deployed in rural environments.
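The reported weather-to-signal correlations are ordinary Pearson coefficients, which can be reproduced in a few lines of NumPy. The wind and power figures below are invented for illustration; the paper's measurement data is not given in this abstract.

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical hourly samples: higher wind speed coinciding with lower
# relative received power yields a negative coefficient, matching the
# sign of the reported -0.35 average for wind speed.
wind = [2.0, 3.5, 5.0, 7.5, 9.0]          # m/s
power = [-60.0, -61.0, -63.0, -66.0, -68.0]  # relative received power, dB
print(round(pearson(wind, power), 2))
```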
Abstract:
This paper establishes practical stability results for an important range of approximate discrete-time filtering problems involving mismatch between the true system and the approximating filter model. Using a local consistency assumption, the practical stability established is in the sense of an asymptotic bound on the amount of bias introduced by the model approximation. Significantly, these practical stability results do not require the approximating model to be of the same model type as the true system. Our analysis applies to a wide range of estimation problems and justifies the common practice of approximating intractable infinite-dimensional nonlinear filters by simpler, computationally tractable filters.
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
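The stated bound term can be evaluated directly. The helper below (names are ours, not the paper's) just computes A³√((log n)/m) and shows it shrinking with the number of training patterns m, independently of the number of weights:

```python
import math

def bound_term(A: float, n: int, m: int) -> float:
    """The A^3 * sqrt(log(n) / m) term of the misclassification bound,
    with the log A and log m factors ignored, as in the abstract."""
    return A ** 3 * math.sqrt(math.log(n) / m)

# The term decreases as 1/sqrt(m) for fixed weight bound A and input
# dimension n (illustrative values).
for m in (100, 1_000, 10_000):
    print(m, bound_term(A=2.0, n=50, m=m))
```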
Abstract:
The aim of this work is to develop a Demand-Side-Response (DSR) model that assists electricity end-users to engage in mitigating peak demands on the electricity network in eastern and southern Australia. The proposed model comprises a technical set-up of a programmable internet relay, a router and solid-state switches, together with suitable software to control electricity demand at the user's premises. The software, delivered on an appropriate multimedia tool (CD-ROM), curtails or shifts electric loads to the most appropriate time of day according to the implemented economic model, which is designed to maximize financial benefits to electricity consumers. Additionally, the model targets spreading the national electrical load evenly throughout the year in order to achieve the best economic performance for electricity generation, transmission and distribution. The model is applicable in the region managed by the Australian Energy Market Operator (AEMO), covering the eastern and southern Australian states and Tasmania.
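As a toy illustration of the curtail/shift idea (not the paper's software), the sketch below moves one deferrable load to the cheapest hour of a day's price profile and reports the resulting saving; the price and load figures are invented:

```python
def shift_load(prices, requested_hour, load_kwh):
    """Move a deferrable load from its requested hour to the cheapest
    hour of the day; return (new_hour, saving_in_dollars)."""
    best_hour = min(range(len(prices)), key=prices.__getitem__)
    saving = (prices[requested_hour] - prices[best_hour]) * load_kwh
    return best_hour, saving

# Hypothetical half-day price profile, $/kWh: a morning trough and an
# early-evening peak. A 3 kWh deferrable load requested at the peak
# (hour 6) is shifted to the cheapest hour.
prices = [0.10, 0.08, 0.07, 0.09, 0.15, 0.30, 0.45, 0.40,
          0.25, 0.20, 0.18, 0.22]
hour, saving = shift_load(prices, requested_hour=6, load_kwh=3.0)
print(hour, round(saving, 2))  # shifted to hour 2, saving $1.14
```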
Abstract:
This paper considers an aircraft collision avoidance design problem that also incorporates design of the aircraft's return-to-course flight. This control design problem is formulated as a non-linear optimal-stopping control problem; a formulation that does not require prior knowledge of the time taken to perform the avoidance and return-to-course manoeuvre. A dynamic programming solution to the avoidance and return-to-course problem is presented, before a Markov chain numerical approximation technique is described. Simulation results are presented that illustrate the proposed collision avoidance and return-to-course flight approach.
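The dynamic-programming solution of an optimal-stopping problem can be sketched as value iteration over a finite-state Markov chain approximation. The two-state chain and costs below are illustrative only, not the paper's aircraft model:

```python
import numpy as np

def optimal_stopping_values(P, running_cost, stop_cost, n_iter=500):
    """Value iteration for a discrete optimal-stopping problem:
        V(s) = min( stop_cost[s], running_cost[s] + sum_j P[s, j] * V(j) ).
    Stopping plays the role of 'manoeuvre complete'; continuing accrues a
    running cost, so no fixed horizon is needed a priori."""
    stop_cost = np.asarray(stop_cost, dtype=float)
    running_cost = np.asarray(running_cost, dtype=float)
    V = stop_cost.copy()
    for _ in range(n_iter):
        V = np.minimum(stop_cost, running_cost + P @ V)
    return V

# Toy chain: state 1 always transitions to state 0, where stopping is free.
P = np.array([[1.0, 0.0],
              [1.0, 0.0]])
V = optimal_stopping_values(P, running_cost=[1.0, 1.0], stop_cost=[0.0, 10.0])
print(V)  # continuing one step from state 1 (cost 1) beats stopping (cost 10)
```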
Abstract:
In semisupervised learning (SSL), a predictive model is learned from a collection of labeled data and a typically much larger collection of unlabeled data. This paper presents a framework called multi-view point cloud regularization (MVPCR), which unifies and generalizes several semisupervised kernel methods that are based on data-dependent regularization in reproducing kernel Hilbert spaces (RKHSs). Special cases of MVPCR include coregularized least squares (CoRLS), manifold regularization (MR), and graph-based SSL. An accompanying theorem shows how to reduce any MVPCR problem to standard supervised learning with a new multi-view kernel.
Abstract:
The paper "The Importance of Convexity in Learning with Squared Loss" gave a lower bound on the sample complexity of learning with quadratic loss using a nonconvex function class. The proof contains an error. We show that the lower bound is true under a stronger condition that holds for many cases of interest.
Abstract:
We present a technique for estimating the 6DOF pose of a PTZ camera by tracking a single moving target in the image with known 3D position. This is useful in situations where it is not practical to measure the camera pose directly. Our application domain is estimating the pose of a PTZ camera so that it can be used for automated GPS-based tracking and filming of UAV flight trials. We present results which show the technique is able to localize a PTZ camera after a short vision-tracked flight, and that the estimated pose is sufficiently accurate for the PTZ camera to then actively track a UAV based on GPS position data.
Abstract:
A significant reduction in carbon emissions is a global mission, and the construction industry has an indispensable role to play as a major carbon dioxide (CO2) generator. Over the years, various building environmental assessment (BEA) models and concepts have been developed to promote environmentally responsible design and construction. However, limited attention has been paid to assessing and benchmarking the carbon emitted throughout the lifecycle of building facilities. This situation could undermine the construction industry's potential to reduce its dependence on raw materials, recognise the negative impacts of producing new materials, and intensify recycling and reuse processes. In this paper, current BEA approaches adopted by the construction industry are first introduced. The focus of these models and concepts is then examined. Following a brief review of lifecycle analysis, the boundary within which a lifecycle carbon emission analysis should be set for a construction project is identified. The paper concludes by highlighting the potential barriers to applying lifecycle carbon emission analysis in the construction industry. It is proposed that lifecycle carbon emission analysis can be integrated with existing BEA models to provide a more comprehensive and accurate evaluation of the cradle-to-grave environmental performance of a construction facility. In doing so, this can assist owners and clients in identifying the optimum solution to maximise emission reduction opportunities.
Abstract:
The School of Electrical and Electronic Systems Engineering at Queensland University of Technology, Brisbane, Australia (QUT), offers three bachelor degree courses in electrical and computer engineering. In all its courses there is a strong emphasis on signal processing. A newly established Signal Processing Research Centre (SPRC) has played an important role in the development of the signal processing units in these courses. This paper describes the unique design of the undergraduate program in signal processing at QUT, the laboratories developed to support it, and the criteria that influenced the design.
Abstract:
This paper discusses the principal domains of auto- and cross-trispectra. It is shown that the cumulant and moment based trispectra are identical except on certain planes in trifrequency space. If these planes are avoided, their principal domains can be derived by considering the regions of symmetry of the fourth order spectral moment. The fourth order averaged periodogram will then serve as an estimate for both cumulant and moment trispectra. Statistics of estimates of normalised trispectra or tricoherence are also discussed.
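The fourth-order averaged periodogram mentioned above has a direct, if naive, implementation: average X(f1)X(f2)X(f3)X*(f1+f2+f3) over data segments. The sketch below evaluates a single trifrequency point, omits normalisation constants, and makes no attempt to avoid the planes where cumulant and moment trispectra differ:

```python
import numpy as np

def trispectrum_estimate(x, f1, f2, f3, seg_len):
    """Fourth-order averaged periodogram at one trifrequency point:
    the segment average of X(f1) * X(f2) * X(f3) * conj(X(f1 + f2 + f3)).
    Normalisation constants are omitted in this sketch."""
    n_seg = len(x) // seg_len
    acc = 0.0 + 0.0j
    for s in range(n_seg):
        X = np.fft.fft(x[s * seg_len:(s + 1) * seg_len])
        acc += X[f1] * X[f2] * X[f3] * np.conj(X[(f1 + f2 + f3) % seg_len])
    return acc / n_seg
```

For real data the estimate need only be computed over the principal domain of the trifrequency space, since the remaining values follow from the symmetry relations.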
Abstract:
A new algorithm for extracting features from images for object recognition is described. The algorithm uses higher order spectra to provide desirable invariance properties, to provide noise immunity, and to incorporate nonlinearity into the feature extraction procedure thereby allowing the use of simple classifiers. An image can be reduced to a set of 1D functions via the Radon transform, or alternatively, the Fourier transform of each 1D projection can be obtained from a radial slice of the 2D Fourier transform of the image according to the Fourier slice theorem. A triple product of Fourier coefficients, referred to as the deterministic bispectrum, is computed for each 1D function and is integrated along radial lines in bifrequency space. Phases of the integrated bispectra are shown to be translation- and scale-invariant. Rotation invariance is achieved by a regrouping of these invariants at a constant radius followed by a second stage of invariant extraction. Rotation invariance is thus converted to translation invariance in the second step. Results using synthetic and actual images show that isolated, compact clusters are formed in feature space. These clusters are linearly separable, indicating that the nonlinearity required in the mapping from the input space to the classification space is incorporated well into the feature extraction stage. The use of higher order spectra results in good noise immunity, as verified with synthetic and real images. Classification of images using the higher order spectra-based algorithm compares favorably to classification using the method of moment invariants.
Abstract:
An approach to pattern recognition using invariant parameters based on higher-order spectra is presented. In particular, bispectral invariants are used to classify one-dimensional shapes. The bispectrum, which is translation invariant, is integrated along straight lines passing through the origin in bifrequency space. The phase of the integrated bispectrum is shown to be scale- and amplification-invariant. A minimal set of these invariants is selected as the feature vector for pattern classification. Pattern recognition using higher-order spectral invariants is fast, suited for parallel implementation, and works for signals corrupted by Gaussian noise. The classification technique is shown to distinguish two similar but different bolts given their one-dimensional profiles.
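The integrated-bispectrum invariant described above is easy to reproduce for a sampled 1D signal: form the deterministic bispectrum X(f1)X(f2)X*(f1+f2), sum it along a radial line f2 = a·f1 in bifrequency space, and take the phase. The discretisation below (rounding f2 to the nearest bin) is our simplification, not the paper's exact procedure:

```python
import numpy as np

def integrated_bispectrum_phase(x, a_frac):
    """Phase of the deterministic bispectrum X(f1) X(f2) conj(X(f1 + f2))
    summed along the radial line f2 = a_frac * f1, with 0 < a_frac <= 1."""
    X = np.fft.fft(x)
    N = len(x)
    total = 0.0 + 0.0j
    # f1 runs over positive frequencies such that f1, f2 and f1 + f2 all
    # stay below the Nyquist index
    for f1 in range(1, N // 2):
        f2 = int(round(a_frac * f1))
        if f2 < 1 or f1 + f2 >= N // 2:
            continue
        total += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.angle(total)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
# The triple product cancels the linear phase introduced by a circular
# shift, so the invariant is unchanged under translation.
print(integrated_bispectrum_phase(x, 0.5))
print(integrated_bispectrum_phase(np.roll(x, 5), 0.5))
```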
Abstract:
A general procedure to determine the principal domain (i.e., nonredundant region of computation) of any higher-order spectrum is presented, using the bispectrum as an example. The procedure is then applied to derive the principal domain of the trispectrum of a real-valued, stationary time series. These results are easily extended to compute the principal domains of other higher-order spectra.
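For the bispectrum of a real, stationary series, the commonly cited principal domain is the triangle 0 ≤ f2 ≤ f1 with 2f1 + f2 bounded by the sampling rate. The enumeration below is a sketch under that convention for a length-N discrete grid; it is not a reproduction of the paper's general procedure:

```python
def bispectrum_principal_domain(N):
    """Grid points (f1, f2) in the nonredundant region of the bispectrum
    of a real length-N series: 0 <= f2 <= f1 <= N // 2 and 2*f1 + f2 <= N.
    All other bifrequencies follow from the symmetry relations."""
    return [(f1, f2)
            for f1 in range(N // 2 + 1)
            for f2 in range(f1 + 1)
            if 2 * f1 + f2 <= N]

print(bispectrum_principal_domain(8))
```

Restricting computation to this region avoids recomputing values that the symmetries of the bispectrum determine elsewhere in bifrequency space.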