956 results for Cubic autocatalitic


Relevance: 20.00%

Publisher:

Abstract:

Denote the set of 21 non-isomorphic cubic graphs of order 10 by ℒ. We first determine precisely which L ∈ ℒ occur as the leave of a partial Steiner triple system, thus settling the existence problem for partial Steiner triple systems of order 10 with cubic leaves. Then we settle the embedding problem for partial Steiner triple systems with leaves L ∈ ℒ. This second result is obtained as a corollary of a more general result which gives, for each integer v ≥ 10 and each L ∈ ℒ, necessary and sufficient conditions for the existence of a partial Steiner triple system of order v with leave consisting of the complement of L and v − 10 isolated vertices. (C) 2004 Elsevier B.V. All rights reserved.
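For concreteness, the leave of a partial Steiner triple system is the graph on the point set whose edges are exactly the pairs contained in no triple of the system. A minimal sketch of this definition (the system shown is a small illustrative example of order 7, not one of the order-10 systems studied here):

```python
from itertools import combinations

def leave_edges(points, triples):
    """Edges of the leave: pairs of points covered by no triple."""
    covered = {pair for t in triples for pair in combinations(sorted(t), 2)}
    return [p for p in combinations(sorted(points), 2) if p not in covered]

# Hypothetical partial system on 7 points (three triples through point 0);
# its leave is K6 on points 1..6 minus the matching {1,2}, {3,4}, {5,6},
# with point 0 isolated.
print(leave_edges(range(7), [(0, 1, 2), (0, 3, 4), (0, 5, 6)]))
```

A system of order 10 whose leave is one of the 21 cubic graphs would be checked the same way: every vertex of the returned edge list would have degree exactly 3.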

Relevance: 20.00%

Publisher:

Abstract:

Plastic yield criteria for porous ductile materials are explored numerically using the finite-element technique. The cases of spherical voids arranged in simple cubic, body-centred cubic and face-centred cubic arrays are investigated with void volume fractions ranging from 2% through to the percolation limit (over 90%). Arbitrary triaxial macroscopic stress states and two definitions of yield are explored. The numerical data demonstrate that the yield criteria depend linearly on the determinant of the macroscopic stress tensor for simple cubic and body-centred cubic arrays, in contrast to the famous Gurson-Tvergaard-Needleman (GTN) formula, while there is no such dependence for face-centred cubic arrays within the accuracy of the finite-element discretisation. The data are well fitted by a simple extension of the GTN formula which is valid for all void volume fractions, with yield-function convexity constraining the form of the extension in terms of parameters in the original formula. Simple cubic structures are more resistant to shear, while body-centred and face-centred structures are more resistant to hydrostatic pressure. The two yield surfaces corresponding to the two definitions of yield are not related by a simple scaling.
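The baseline that the fitted extension builds on is the standard GTN criterion; a sketch of that baseline only (with the usual Tvergaard parameters q1, q2 and q3 = q1², and the common default values; the paper's determinant-dependent extension terms are not reproduced here):

```python
import math

def gtn_yield(sigma_eq, sigma_m, sigma_y, f, q1=1.5, q2=1.0):
    """Gurson-Tvergaard-Needleman yield function; the yield surface is
    the locus where this expression equals zero.
    sigma_eq : macroscopic von Mises equivalent stress
    sigma_m  : macroscopic mean (hydrostatic) stress
    sigma_y  : yield stress of the matrix material
    f        : void volume fraction
    q1, q2   : Tvergaard fitting parameters (with q3 = q1**2)."""
    return ((sigma_eq / sigma_y) ** 2
            + 2.0 * q1 * f * math.cosh(1.5 * q2 * sigma_m / sigma_y)
            - 1.0 - (q1 * f) ** 2)

# With no voids (f = 0) the criterion reduces to von Mises yield,
# independent of the hydrostatic stress:
print(gtn_yield(sigma_eq=1.0, sigma_m=5.0, sigma_y=1.0, f=0.0))  # → 0.0
```

Note that the hydrostatic term enters only through cosh of the mean stress; the linear dependence on the determinant of the stress tensor reported above is precisely what this baseline form cannot capture.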

Relevance: 10.00%

Publisher:

Abstract:

Transit agencies across the world are increasingly shifting their fare collection mechanisms towards fully automated systems like the smart card. One of the objectives in implementing such a system is to reduce the boarding time per passenger and hence reduce the overall dwell time for the buses at the bus stops/bus rapid transit (BRT) stations. TransLink, the transit authority responsible for public transport management in South East Queensland, has introduced ‘GoCard’ technology using the Cubic platform for fare collection on its public transport system. In addition, three inner-city BRT stations on the South East Busway spine operate as pre-paid platforms during the evening peak. This paper evaluates the effects of these multiple policy measures on the operation of the study busway station. The comparison between pre- and post-policy scenarios suggests that the boarding time per passenger has decreased, while the alighting time per passenger has increased slightly. However, a substantial reduction in operating efficiency was observed at the station.

Relevance: 10.00%

Publisher:

Abstract:

Aim – To develop and assess the predictive capabilities of a statistical model that relates routinely collected Trauma Injury Severity Score (TRISS) variables to length of hospital stay (LOS) in survivors of traumatic injury. Method – Retrospective cohort study of adults who sustained a serious traumatic injury, and who survived until discharge from Auckland City, Middlemore, Waikato, or North Shore Hospitals between 2002 and 2006. Cube-root transformed LOS was analysed using two-level mixed-effects regression models. Results – 1498 eligible patients were identified, 1446 (97%) injured by a blunt mechanism and 52 (3%) by a penetrating mechanism. For blunt mechanism trauma, 1096 (76%) were male, the average age was 37 years (range: 15-94 years), and LOS and TRISS score information was available for 1362 patients. Spearman’s correlation and the median absolute prediction error between LOS and the original TRISS model were ρ=0.31 and 10.8 days, respectively, and between LOS and the final multivariable two-level mixed-effects regression model were ρ=0.38 and 6.0 days, respectively. Insufficient data were available for the analysis of penetrating mechanism models. Conclusions – Neither the original TRISS model nor the refined model has sufficient ability to accurately or reliably predict LOS. Additional predictor variables for LOS and other indicators of morbidity need to be considered.
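A sketch of the reported error metric (illustrative numbers only, not study data): the model predicts on the cube-root scale, so predictions are back-transformed by cubing before the median absolute prediction error in days is computed.

```python
import statistics

def median_abs_prediction_error(observed_days, predicted_cuberoot):
    """Median absolute prediction error in days: model predictions are
    made on the cube-root scale and back-transformed by cubing before
    comparison with the observed length of stay."""
    return statistics.median(
        abs(obs - pred ** 3)
        for obs, pred in zip(observed_days, predicted_cuberoot))

# Illustrative values only, not study data.
observed = [2.0, 8.0, 27.0]
predicted = [1.0, 2.0, 3.0]          # cube-root scale: 1, 8, 27 days
print(median_abs_prediction_error(observed, predicted))  # → 0.0
```

The median (rather than the mean) keeps the metric robust to the heavy right tail typical of hospital length-of-stay data, which is also what motivates the cube-root transform.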

Relevance: 10.00%

Publisher:

Abstract:

World economies increasingly demand reliable and economical power supply and distribution. To achieve this aim, the majority of power systems are becoming interconnected, with several power utilities supplying one large network. One problem that occurs in a large interconnected power system is the regular occurrence of system disturbances, which can result in the creation of intra-area oscillating modes. These modes can be regarded as the transient responses of the power system to excitation, and are generally characterised as decaying sinusoids. For a power system operating ideally, these transient responses would have a “ring-down” time of 10-15 seconds. Sometimes equipment failures disturb the ideal operation of power systems, and oscillating modes with ring-down times greater than 15 seconds arise. The larger settling times associated with such “poorly damped” modes cause substantial power flows between generation nodes, resulting in significant physical stresses on the power distribution system. If these modes are not just poorly damped but “negatively damped”, catastrophic failures of the system can occur. To ensure the stability and security of large power systems, the potentially dangerous oscillating modes generated by disturbances (such as equipment failure) must be quickly identified, and the power utility must then apply appropriate damping control strategies. In power system monitoring there are two facets of critical interest. The first is the estimation of modal parameters for a power system in normal, stable operation. The second is the rapid detection of any substantial change to this normal, stable operation (because of equipment breakdown, for example). Most work to date has concentrated on the first of these two facets, i.e. on modal parameter estimation. Numerous modal parameter estimation techniques have been proposed and implemented, but all have limitations [1-13].
One of the key limitations of all existing parameter estimation methods is that they require very long data records to provide accurate parameter estimates. This is a particularly significant problem after a sudden detrimental change in damping: one simply cannot afford to wait long enough to collect the large amounts of data required by existing parameter estimators. Motivated by this gap in the current body of knowledge and practice, the research reported in this thesis focuses heavily on rapid detection of changes (i.e. on the second facet mentioned above). This thesis reports on a number of new algorithms which can rapidly flag whether or not there has been a detrimental change to a stably operating system. It will be seen that the new algorithms enable sudden modal changes to be detected within quite short time frames (typically about 1 minute), using data from power systems in normal operation. The new methods reported in this thesis are summarised below. The Energy Based Detector (EBD): The rationale for this method is that the modal disturbance energy is greater for lightly damped modes than for heavily damped modes (because the latter decay more rapidly). Sudden changes in modal energy therefore imply sudden changes in modal damping. Because the method relies on data from power systems in normal operation, the modal disturbances are random. Accordingly, the disturbance energy is modelled as a random process (with the parameters of the model being determined from the power system under consideration), and a threshold is then set based on this statistical model. The energy method is very simple to implement and is computationally efficient. It is, however, only able to determine whether or not a sudden modal deterioration has occurred; it cannot identify which mode has deteriorated. For this reason the method is particularly well suited to smaller interconnected power systems that involve only a single mode.
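A minimal sketch of the EBD idea (the non-overlapping windowing and the mean-plus-three-standard-deviations rule are illustrative assumptions; the thesis derives its threshold from a statistical model of the disturbance energy):

```python
import statistics

def windowed_energies(signal, window):
    """Energy of each non-overlapping window of the signal."""
    return [sum(x * x for x in signal[i:i + window])
            for i in range(0, len(signal) - window + 1, window)]

def baseline_threshold(baseline, window):
    """Threshold from a record of normal operation: mean plus three
    standard deviations of the windowed energy (an assumed rule)."""
    energies = windowed_energies(baseline, window)
    return statistics.mean(energies) + 3.0 * statistics.stdev(energies)

def energy_alarm(signal, window, threshold):
    """Flag each window whose energy exceeds the threshold; lightly
    damped modes decay slowly, so their disturbance energy is higher."""
    return [e > threshold for e in windowed_energies(signal, window)]
```

An alarm raised here says only that some mode has deteriorated, not which one, matching the single-mode limitation noted above.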
Optimal Individual Mode Detector (OIMD): As discussed in the previous paragraph, the energy detector can only determine whether or not a change has occurred; it cannot flag which mode is responsible for the deterioration. The OIMD seeks to address this shortcoming. It uses optimal detection theory to test for sudden changes in individual modes. In practice, one can have an OIMD operating for every mode within a system, so that changes in any of the modes can be detected. Like the energy detector, the OIMD is based on a statistical model and a subsequently derived threshold test. The Kalman Innovation Detector (KID): This detector is an alternative to the OIMD. Unlike the OIMD, however, it does not explicitly monitor individual modes. Rather it relies on a key property of a Kalman filter, namely that the Kalman innovation (the difference between the estimated and observed outputs) is white as long as the Kalman filter model is valid. A Kalman filter model is set up to represent a particular power system. If some event in the power system (such as equipment failure) causes a sudden change to the power system, the Kalman model will no longer be valid and the innovation will no longer be white. Furthermore, if there is a detrimental system change, the innovation spectrum will display strong peaks at frequency locations associated with the changes. Hence the innovation spectrum can be monitored both to set off an “alarm” when a change occurs and to identify which modal frequency has given rise to the change. The threshold for alarming is based on the simple Chi-Squared PDF for a normalised white noise spectrum [14, 15]. While the method can identify the mode which has deteriorated, it does not necessarily indicate whether there has been a frequency or damping change. The PPM discussed next can monitor frequency changes and so can provide some discrimination in this regard.
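The spectral-monitoring step of the KID can be sketched as follows (the Kalman filtering itself is omitted, and the threshold is passed in as a plain number standing in for the chi-squared-derived threshold described above):

```python
import cmath
import math

def innovation_spectrum_alarm(innovation, threshold):
    """Flag periodogram bins of the Kalman innovation that exceed a
    threshold. While the Kalman model is valid the innovation is white,
    so its normalised spectrum has no strong peaks; a peak above the
    threshold points at the modal frequency that has changed."""
    n = len(innovation)
    var = sum(x * x for x in innovation) / n
    alarms = []
    for k in range(n // 2):
        # DFT coefficient at bin k, computed directly for clarity.
        coeff = sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(innovation))
        alarms.append(abs(coeff) ** 2 / (n * var) > threshold)
    return alarms
```

A non-white innovation containing a residual sinusoid lights up exactly the bin at that sinusoid's frequency, which is what lets the KID name the offending modal frequency.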
The Polynomial Phase Method (PPM): In [16] the cubic phase (CP) function was introduced as a tool for revealing frequency-related spectral changes. This thesis extends the cubic phase function to a generalised class of polynomial phase functions which can reveal frequency-related spectral changes in power systems. A statistical analysis of the technique is performed. When applied to power system analysis, the PPM can provide knowledge of sudden shifts in frequency through both the new frequency estimate and the polynomial phase coefficient information. This knowledge can then be cross-referenced with other detection methods to provide improved detection benchmarks.
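The cubic phase function underlying the PPM admits a compact discrete form; a sketch of that symmetric-product definition (the centre index and window length here are illustrative choices, not values from the thesis):

```python
import cmath

def cubic_phase(z, n, omega, half_len):
    """Discrete cubic phase function at time index n:
    CP(n, omega) = sum_m z[n+m] * z[n-m] * exp(-1j * omega * m**2).
    For a signal with quadratic phase a0 + a1*t + a2*t**2, the product
    z[n+m] * z[n-m] has phase linear in m**2, so |CP| peaks at
    omega = 2*a2, revealing the frequency rate at index n."""
    return sum(z[n + m] * z[n - m] * cmath.exp(-1j * omega * m * m)
               for m in range(half_len))
```

Sweeping omega and locating the peak of |CP| gives the frequency-rate estimate; sudden shifts in the peak location are the frequency-related spectral changes the PPM is designed to reveal.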

Relevance: 10.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for use by the wavelet packet transform in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics; the decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
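The coefficient model mentioned above, the generalized Gaussian distribution, has a standard closed-form density; a sketch (alpha and beta are the usual scale and shape parameter names):

```python
import math

def generalized_gaussian_pdf(x, alpha, beta):
    """Generalised Gaussian density with scale alpha and shape beta:
    f(x) = beta / (2 * alpha * Gamma(1/beta)) * exp(-(|x|/alpha)**beta).
    beta = 2 recovers the Gaussian and beta = 1 the Laplacian; shape
    values below 1 give the sharply peaked, heavy-tailed densities
    typical of wavelet detail subbands."""
    coeff = beta / (2.0 * alpha * math.gamma(1.0 / beta))
    return coeff * math.exp(-(abs(x) / alpha) ** beta)
```

Fitting the model amounts to estimating alpha and beta from each subband's coefficients, which is where the least-squares procedure on a nonlinear function of the shape parameter comes in.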
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.

Relevance: 10.00%

Publisher:

Abstract:

Cubic indium hydroxide nanomaterials were obtained by a low-temperature soft-chemical method without any surfactants. The transition of nano-cubic indium hydroxide to cubic indium oxide during dehydroxylation has been studied by infrared emission spectroscopy. The spectra are related to the structure of the materials and the changes in that structure upon thermal treatment. The infrared absorption spectrum of In(OH)3 is characterised by an intense OH deformation band at 1150 cm-1 and two O-H stretching bands at 3107 and 3221 cm-1. In the infrared emission spectra, the hydroxyl-stretching and hydroxyl-bending bands diminish dramatically upon heating, and no intensity remains beyond 200 °C. However, new low-intensity bands are found in the OH deformation region at 915 cm-1 and in the OH stretching region at 3437 cm-1. These bands are attributed to the vibrations of newly formed InOH bonds arising from the release and transfer of protons during calcination of the nanomaterial. The use of infrared emission spectroscopy enables the low-temperature phase transition brought about through dehydration of In(OH)3 nanocubes to be studied.