309 results for Order-parameter
Abstract:
LiFePO4 is a commercially available battery material with good theoretical discharge capacity, excellent cycle life and increased safety compared with competing Li-ion chemistries. It has been the focus of considerable experimental and theoretical scrutiny in the past decade, resulting in LiFePO4 cathodes that perform well at high discharge rates. This scrutiny has raised several questions about the behaviour of LiFePO4 during charge and discharge. In contrast to many other battery chemistries that intercalate homogeneously, LiFePO4 can phase-separate into lithium-rich and lithium-poor phases, with intercalation proceeding by the advance of an interface between these two phases. The main objective of this thesis is to construct mathematical models of LiFePO4 cathodes that can be validated against experimental discharge curves, in an attempt to understand some of the multi-scale dynamics of LiFePO4 cathodes that are difficult to determine experimentally. The first part of this thesis constructs a three-scale mathematical model of LiFePO4 cathodes that uses a simple Stefan problem (as used previously in the literature) to describe the assumed phase change. LiFePO4 crystals have been observed agglomerating in cathodes to form porous collections of crystals, and this morphology motivates the use of three size scales in the model. The multi-scale model validates well against experimental data, and the validated model is then used to examine the effect of manufacturing parameters (including the agglomerate radius) on battery performance. The remainder of the thesis investigates phase-field models as a replacement for the aforementioned Stefan problem. Phase-field models have recently been applied to LiFePO4 and are a far more accurate representation of experimentally observed crystal-scale behaviour. They are based on the Cahn-Hilliard-reaction (CHR) initial-boundary value problem (IBVP), a fourth-order PDE with electrochemical (flux) boundary conditions that is very stiff and possesses multiple time and space scales. Numerical solutions to the CHR IBVP can be difficult to compute, and hence a least-squares-based finite volume method (FVM) is developed for discretising both the full CHR IBVP and the more traditional Cahn-Hilliard IBVP. Phase-field models are subject to two main physicality constraints, and the numerical scheme presented performs well under both. This least-squares-based FVM is then used to simulate the discharge of individual LiFePO4 crystals in two dimensions. The discharge is subject to isotropic Li+ diffusion, based on experimental evidence suggesting that the normally orthotropic transport of Li+ in LiFePO4 may become more isotropic in the presence of lattice defects. Numerical investigation shows that two-dimensional Li+ transport results in crystals that phase-separate even at very high discharge rates. This differs markedly from results in the literature, where phase separation in LiFePO4 crystals is suppressed during discharge with orthotropic Li+ transport. Finally, the three-scale cathodic model used at the beginning of the thesis is modified to simulate modern, high-rate LiFePO4 cathodes. High-rate cathodes typically do not contain (large) agglomerates, and therefore a two-scale model is developed. The Stefan problem used previously is also replaced with the phase-field models examined in earlier chapters.
The results from this model are then compared with experimental data and fit poorly, though a significant parameter regime could not be investigated numerically. Many-particle effects, however, are evident in the simulated discharges, matching the conclusions of recent literature. These effects subject crystals to local currents very different from the discharge rate applied to the cathode, which affects the phase-separating behaviour of the crystals and raises questions about the validity of using cathodic-scale experimental measurements to infer crystal-scale behaviour.
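For context, the Cahn-Hilliard-reaction (CHR) model referred to above has, generically, the following structure; the particular free energy f, mobility M and reaction rate R used in the thesis are not specified in the abstract, so this is a sketch of the model class rather than the thesis's exact formulation:

```latex
\frac{\partial c}{\partial t} = \nabla \cdot \big( M(c)\, \nabla \mu \big),
\qquad
\mu = \frac{\partial f}{\partial c} - \kappa\, \nabla^{2} c,
\qquad
-M(c)\, \nabla \mu \cdot \hat{n} \,\Big|_{\text{surface}} = R(c, \mu)
```

Substituting the chemical potential into the transport equation yields a fourth-order PDE in the concentration c, with the electrochemical reaction rate R (often of Butler-Volmer type) entering through the flux boundary condition; the small gradient-energy coefficient kappa sets a thin interfacial width, which is the source of the stiffness and the multiple space scales mentioned above.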
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a one-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance compared with existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
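As a concrete illustration of the pipeline described above (feature extraction, then linear randomization, then threshold quantization and binary encoding), a minimal Python sketch follows; the key-dependent random projection and the median-based threshold training are illustrative assumptions, not the dissertation's actual algorithms:

```python
import numpy as np

def randomize(features, key=42, n_out=64):
    # Linear randomization: a key-dependent random projection (the stage the
    # dissertation argues is detrimental to security when purely linear).
    rng = np.random.default_rng(key)              # same key -> same projection
    proj = rng.standard_normal((n_out, features.size))
    return proj @ features

def train_thresholds(training_features, key=42):
    # Learn per-bit quantization thresholds from a training set (here simply
    # the median of each projected coefficient); the abstract notes that how
    # these thresholds are learnt affects both accuracy and security.
    projected = np.array([randomize(f, key) for f in training_features])
    return np.median(projected, axis=0)

def robust_hash(features, thresholds, key=42):
    # Quantize each randomized coefficient against its threshold -> binary hash.
    return (randomize(features, key) > thresholds).astype(np.uint8)

# Unlike a cryptographic hash, a slightly perturbed input should yield
# a hash that agrees with the original in most bits.
x = np.random.default_rng(0).standard_normal(256)
thr = train_thresholds([np.random.default_rng(i).standard_normal(256) for i in range(100)])
h1, h2 = robust_hash(x, thr), robust_hash(x + 0.01, thr)
print((h1 == h2).mean())   # fraction of matching bits, typically close to 1
```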
Abstract:
The degree of diversity or similarity detected in comets depends primarily on the lifetimes of the individual cometary nuclei at the time of analysis. It is inherent in our understanding of cometary orbital dynamics and the seminal model of comet origins that cometary evolution is the natural order of events in our Solar System. Thus, predictions of cometary behaviour in terms of bulk physical, mineralogical or chemical parameters should contain an appreciation of temporal variation(s). Previously, Rietmeijer and Mackinnon [1987] developed mineralogical bases for the chemical evolution of cometary nuclei, primarily with regard to the predominantly silicate fraction of comet nuclei. We suggested that alteration of solids in cometary nuclei should be expected and that indications of likely reactants and products can be derived from judicious comparison with terrestrial diagenetic environments, which include hydrocryogenic and low-temperature aqueous alterations. In a further development of this concept, Rietmeijer [1988] provides indirect evidence for the formation of sulfides and oxides in comet nuclei. Furthermore, Rietmeijer [1988] noted that timescales for hydrocryogenic and low-temperature reactions involving liquid water are probably adequate for relatively mature comets, e.g. comet P/Halley. In this paper, we address the evolution of comet nuclei physical parameters such as solid particle grain size, porosity and density. In natural environments, chemical evolution (e.g. mineral reactions) is often accompanied by changes in physical properties. These concurrent changes are well documented in the terrestrial geological literature, especially in studies of sediment diagenesis, and we suggest that similar basic principles apply within the upper few meters of active comet nuclei. The database for prediction of comet nuclei physical parameters is, in principle, the same as that used for the proposition of chemical evolution. We use detailed mineralogical studies of chondritic interplanetary dust particles (IDPs) as a guide to the likely constitution of mature comets traversing the inner Solar System. While there is, as yet, no direct proof that a specific sub-group or type of chondritic IDP is derived from a specific comet, it is clear that these particles are extraterrestrial in origin and that a certain portion of the interplanetary flux received by the Earth is cometary in origin. Two chondritic porous (CP) IDPs, sample numbers W701OA2 and W7029CI, from the Johnson Space Center Cosmic Dust Collection have been selected for this study of putative cometary physical parameters. This particular type of particle is considered a likely candidate for a cometary origin on the basis of mineralogy, bulk composition and morphology. While many IDPs have been subjected to intensive study over the past decade, we can develop a physical parameter model on only these two CP IDPs because few others have been studied in sufficient detail.
Abstract:
The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables, so pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely, which can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. Here the likelihood is estimated independently on the multiple CPUs, with the ultimate estimate of the likelihood being the average of the estimates obtained from the individual CPUs. The estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this approach over the standard one is demonstrated on simulated data from a stochastic volatility model.
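A minimal Python sketch of the averaging idea follows; the importance-sampling likelihood estimator here is only a placeholder (the paper's stochastic volatility estimator is not reproduced), and only the averaging across CPUs is the point:

```python
import numpy as np
from multiprocessing import Pool

def estimate_likelihood(args):
    # Placeholder unbiased estimator: importance sampling over the latent
    # variables with n_particles draws (a stand-in for the model-specific
    # estimator actually used in the paper).
    theta, data, n_particles, seed = args
    rng = np.random.default_rng(seed)
    latents = rng.normal(theta, 1.0, size=(n_particles, data.size))
    weights = np.exp(-0.5 * np.sum((data - latents) ** 2, axis=1))
    return weights.mean()

def averaged_likelihood(theta, data, n_particles=1000, n_cpus=4):
    # Run the estimator independently on each CPU and average the results:
    # the average is still unbiased, but its variance shrinks by 1/n_cpus.
    seeds = np.random.SeedSequence().spawn(n_cpus)
    args = [(theta, data, n_particles, s) for s in seeds]
    with Pool(n_cpus) as pool:
        estimates = pool.map(estimate_likelihood, args)
    return np.mean(estimates)
```

Inside a pseudo-marginal Metropolis-Hastings sampler, `averaged_likelihood` would simply replace the single-CPU likelihood estimate in the acceptance ratio.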
Abstract:
Despite its potential multiple contributions to sustainable policy objectives, urban transit is generally not widely used by the public in terms of market share compared with that of automobiles, particularly in affluent societies with low-density urban forms such as Australia. Transit service providers need to attract more people to transit by improving transit quality of service. The key to cost-effective transit service improvement lies in accurate evaluation of policy proposals that takes into account their impacts on transit users. If transit providers knew what is more or less important to their customers, they could focus their efforts on optimising customer-oriented service. Policy interventions could also be specified to influence transit users' travel decisions, with targets of customer satisfaction and broader community welfare. This significance motivates the research into the relationship between urban transit quality of service and its users' perceptions and behaviour. This research focused on two dimensions of transit users' travel behaviour: route choice and access arrival time choice. The study area chosen was a busy urban transit corridor linking the Brisbane central business district (CBD) and the St. Lucia campus of The University of Queensland (UQ). This multi-system corridor provides a 'natural experiment' for transit users between the CBD and UQ, as they can choose between busway route 109 (with grade-separated exclusive right-of-way), ordinary on-street bus route 412, and the linear fast ferry CityCat on the Brisbane River. The population of interest was set as the attendees at UQ who travelled from the CBD or from a suburb via the CBD. Two waves of internet-based self-completion questionnaire surveys were conducted to collect data on sampled passengers' perceptions of transit service quality and their use of public transit in the study area. The first wave collected behaviour and attitude data on respondents' daily transit usage and their direct ratings of the importance of factors of route-level transit quality of service. A series of statistical analyses was conducted to examine the relationships between transit users' travel and personal characteristics and their transit usage characteristics. A factor-cluster segmentation procedure was applied to respondents' importance ratings of service quality variables regarding transit route preference, to explore users' various perspectives on transit quality of service. Based on the perceptions of service quality collected in the second wave survey, a series of quality criteria of the transit routes under study was quantitatively measured, in particular travel time reliability in terms of schedule adherence. It was shown that mixed traffic conditions and peak-period effects can affect transit service reliability. Multinomial logit models of transit users' route choice were estimated using route-level service quality perceptions collected in the second wave survey. The relative importance of service quality factors was derived from the choice models' significant parameter estimates, such as access and egress times, seat availability, and the busway system. The parameter estimates were interpreted, in particular the equivalent in-vehicle time of access and egress times, and of busway in-vehicle time. Market segmentation by trip origin was applied to investigate the difference in magnitude between the parameter estimates of access and egress times. The significant costs of transfers in transit trips were highlighted.
These importance ratios were applied back to the quality perceptions collected as revealed-preference (RP) data to compare satisfaction levels across the service attributes and to generate an action relevance matrix prioritising attributes for quality improvement. An empirical study of the relationship between average passenger waiting time and transit service characteristics was performed using the perceived service quality. Passenger arrivals for services with long headways (over 15 minutes) were found to be clearly coordinated with the scheduled departure times of transit vehicles so as to reduce waiting time. This motivated further investigation and modelling innovation in passengers' access arrival time choice and its relationships with transit service characteristics and average passenger waiting time. Specifically, original contributions were made in the formulation of expected waiting time; in the analysis of the risk-averse attitude toward missing the desired service run in passengers' access arrival time choice; and in extensions of the utility function specification for modelling the passenger access arrival distribution, using expected utility forms and non-linear probability weighting to explicitly accommodate the risk of missing an intended service and passengers' risk aversion. Discussion of this research's contributions to knowledge, its limitations, and recommendations for future research is provided in the concluding section of this thesis.
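For reference, the multinomial logit model and the 'equivalent in-vehicle time' ratios mentioned above take the generic form below; the coefficient names are illustrative, since the thesis's exact utility specification is not given in the abstract:

```latex
P_i = \frac{\exp(V_i)}{\sum_{j \in C} \exp(V_j)},
\qquad
V_i = \beta_{\mathrm{ivt}}\, t_i^{\mathrm{ivt}}
    + \beta_{\mathrm{acc}}\, t_i^{\mathrm{acc}}
    + \beta_{\mathrm{egr}}\, t_i^{\mathrm{egr}} + \cdots
```

The equivalent in-vehicle time of one minute of access time is then the coefficient ratio beta_acc / beta_ivt. Likewise, the benchmark against which coordinated arrivals reduce waiting is the random-arrival expectation E[W] = h/2 for a deterministic headway h, which is why timetable coordination pays off for the long headways noted above.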
Abstract:
We estimated heritabilities and correlations for body and carcass weight traits in a cultured stock of the giant freshwater prawn (GFP, Macrobrachium rosenbergii) selected for harvest body weight in Vietnam. The data set consisted of 18,387 body and 1,730 carcass records, as well as full pedigree information collected over four generations. Variance and covariance components were estimated by restricted maximum likelihood, fitting a multi-trait animal model. Across generations, estimates of heritability for body and carcass weight traits were moderate, ranging from 0.14 to 0.19 and 0.17 to 0.21, respectively. Body trait heritabilities estimated for females were significantly higher than for males, whereas carcass weight trait heritabilities estimated for females and males were not significantly different (P > 0.05). Maternal effects for body traits accounted for 4 to 5% of the total variance and were greater in females than in males. Genetic correlations among body traits were generally high in the mixed sexes. Genetic correlations between body and carcass weight traits were also high. Although some issues remain regarding the best statistical model to be fitted to GFP data, our results suggest that selection for high harvest body weight, based on breeding values estimated by fitting an animal model to the data, can significantly improve mean body and carcass weight in GFP.
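The multi-trait animal model and heritability referred to above have the following standard generic form; the fixed effects actually fitted to the GFP data are not listed in the abstract:

```latex
\mathbf{y} = \mathbf{X}\mathbf{b} + \mathbf{Z}\mathbf{a} + \mathbf{W}\mathbf{m} + \mathbf{e},
\qquad
h^{2} = \frac{\sigma_{a}^{2}}{\sigma_{a}^{2} + \sigma_{m}^{2} + \sigma_{e}^{2}}
```

Here b collects fixed effects, a ~ N(0, A sigma_a^2) the additive genetic effects with A the pedigree-derived numerator relationship matrix, m the maternal effects (the 4-5% variance component reported above) and e the residuals; REML estimates the variance components from which the quoted heritabilities and genetic correlations are formed.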
Abstract:
Non-periodic structural variation has been found in the high-Tc cuprates YBa2Cu3O7-x and Hg0.67Pb0.33Ba2Ca2Cu3O8+δ by image analysis of high-resolution transmission electron microscope (HRTEM) images. We use two methods for analysis of the HRTEM images. The first is a means of measuring the bending of lattice fringes at twin planes. The second is a low-pass filter technique which enhances the information carried by diffuse-scattered electrons and reveals what appears to be an interference effect between domains of differing lattice parameter in the top and bottom of the thin foil. We believe that these methods of image analysis could be usefully applied to the many thousands of HRTEM images that have been collected by other workers in the high-temperature superconductor field. This work provides direct structural evidence for phase separation in high-Tc cuprates, and supports recent stripes models that have been proposed to explain various angle-resolved photoelectron spectroscopy and nuclear magnetic resonance data. We believe that the structural variation is a response to the opening of an electronic solubility gap, in which holes are not uniformly distributed in the material but are confined to metallic stripes. Optimum doping may occur as a consequence of the diffuse boundaries between stripes which arise from spinodal decomposition. Theoretical treatments of the high-Tc cuprates as homogeneous materials may need to be modified to take account of this type of structural variation.
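A generic Fourier-space low-pass filter of the kind described can be sketched in Python as follows; the paper's exact passband and window are not specified in the abstract, so the elliptical cutoff here is an assumption. The idea is to suppress the lattice-fringe spatial frequencies so that long-wavelength contrast variations, such as those between domains of differing lattice parameter, dominate the filtered image:

```python
import numpy as np

def low_pass_filter(image, cutoff_fraction=0.1):
    # Transform to Fourier space and centre the zero-frequency component.
    f = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    y, x = np.ogrid[-ny // 2:ny - ny // 2, -nx // 2:nx - nx // 2]
    # Keep only spatial frequencies inside an elliptical low-frequency region,
    # removing the strong Bragg (lattice-fringe) reflections further out.
    mask = (x ** 2 / (nx * cutoff_fraction) ** 2 +
            y ** 2 / (ny * cutoff_fraction) ** 2) <= 1.0
    # Back-transform: the result shows slowly varying contrast modulations.
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))
```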
Abstract:
Purpose/aim: Myopia incidence is increasing around the world. Myopisation is considered to be caused by a variety of factors, and one consideration is whether higher-order aberrations (HOA) influence myopisation. More knowledge of the optics of anisometropic eyes might give further insight into the development of refractive error. Materials and methods: To analyse the possible influence of HOA on refractive error development, we compared HOA between anisometropes and isometropes. We analysed HOA up to the 4th order for both eyes of 20 anisometropes (mean age: 43 ± 17 years) and 20 isometropes (mean age: 33 ± 17 years). HOA were measured with the Shack-Hartmann-based i.Profiler (Carl Zeiss, Germany) and were recalculated for a 4 mm pupil. Mean spherical equivalent (MSE) was based on the subjective refraction. Anisometropia was defined as an interocular difference in MSE of ≥ 1 D. The mean absolute interocular differences in spherical equivalent were 0.28 ± 0.21 D in the isometropic group and 2.81 ± 2.04 D in the anisometropic group. Interocular differences in HOA were compared with the interocular difference in MSE using correlations. Results: For isometropes, oblique trefoil, vertical coma, horizontal coma and spherical aberration showed significant correlations between the two eyes. In anisometropes, all analysed higher-order aberrations correlated significantly between the two eyes except oblique secondary astigmatism and secondary astigmatism. When analysing anisometropes and isometropes separately, no significant correlations were found between interocular differences of higher-order aberrations and MSE. For isometropes and anisometropes combined, tetrafoil correlated significantly with MSE in left eyes. Conclusions: The present study could not show that interocular differences of higher-order aberrations increase with increasing interocular difference in MSE.
Abstract:
In this paper, we propose a novel online hidden Markov model (HMM) parameter estimator based on Kerridge inaccuracy rate (KIR) concepts. Under mild identifiability conditions, we prove that our online KIR-based estimator is strongly consistent. In simulation studies, we illustrate the convergence behaviour of our proposed online KIR-based estimator and provide a counter-example illustrating the local convergence properties of the well-known recursive maximum likelihood estimator (arguably the best existing solution).
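For reference, the Kerridge inaccuracy between a true distribution p and a model q has the standard form below; the 'rate' used in the paper is presumably its time-averaged analogue for the HMM observation process, which, like the likelihood rate, is minimised at the true parameter under identifiability conditions:

```latex
I(p, q) = -\sum_{x} p(x)\,\log q(x) \;=\; H(p) + D_{\mathrm{KL}}\!\left(p \,\|\, q\right)
```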
Abstract:
The article examines the evidence of endemic financial crime in the global financial crisis (GFC), the legal impunity surrounding these crimes, and the popular revolt against these abuses of the financial, political and legal systems. This is set against a consideration of the development, since the 1970s, of a conservative politics championing deregulation, unfettered markets, welfare cuts and harsh law-and-order policies. On the one hand, this led to massively increased inequality and concentrations of wealth and political power in the hands of the super-rich, effectively placing them above the law, as the GFC revealed. On the other, a greatly enlarged and more punitive criminal justice system was directed at poor and minority communities. The rise of penal populism helps explain these developments, but it is argued that such accounts adopt a limited and reductionist view of populism, failing to see the prospects for a progressive populist politics that would redirect political attention to issues of inequality and corporate and white-collar criminality.
Abstract:
In our rejoinder to Don Weatherburn's paper, "Law and Order Blues", we do not take issue with his advocacy of the need to take crime seriously and to foster a more rational approach to the problems it poses. Where differences do emerge is (1) in his claim that he is willing to do so whilst we (in our different ways) are not; and (2) on the question of what this involves. Of particular concern is the way in which his argument proceeds by a combination of simple misrepresentation of the positions it seeks to disparage and silence concerning issues of real substance where intellectual debate and exchange would be welcome and useful. Our paper challenges, in turn, the misrepresentation of Indermaur's analysis of trends in violent crime, the misrepresentation of Hogg and Brown's Rethinking Law and Order, the misrepresentation of the findings of some of the research into the effectiveness of punitive policies, and the silence on sexual assault in "Law and Order Blues". We suggest that his silence on sexual assault reflects a more widespread unwillingness to acknowledge the methodological problems that arise in the measurement of crime, because such problems severely limit the extent to which confident assertions can be made about prevalence and trends.
Abstract:
In this paper we present a method for autonomously tuning the threshold between learning and recognizing a place in the world, based on both how the rodent brain is thought to process and calibrate multisensory data and the pivoting movement behaviour that rodents perform in doing so. The approach makes no assumptions about the number and type of sensors, the robot platform, or the environment, relying only on the ability of a robot to perform two revolutions on the spot. In addition, it self-assesses the quality of the tuning process in order to identify situations in which tuning may have failed. We demonstrate the autonomous movement-driven threshold tuning on a Pioneer 3DX robot in eight locations spread over an office environment and a building car park, and then evaluate the mapping capability of the system on journeys through these environments. The system is able to pick a place recognition threshold that enables successful environment mapping in six of the eight locations while also autonomously flagging the tuning failure in the remaining two locations. We discuss how the method, in combination with parallel work on autonomous weighting of individual sensors, moves the parameter dependent RatSLAM system significantly closer to sensor, platform and environment agnostic operation.
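One plausible reading of the tuning step, sketched below in Python under assumptions not stated in the abstract (a percentile-based separation of the two match-score populations gathered during the on-the-spot revolutions), is that the threshold is placed between scores for views of different headings and scores for re-observed views of the same heading:

```python
import numpy as np

def tune_threshold(cross_heading_scores, same_heading_scores, margin=0.1):
    # cross_heading_scores: match scores between first-revolution views taken
    #   at different headings (same place, different viewpoint: should be low).
    # same_heading_scores: match scores between each second-revolution view and
    #   the first-revolution view at the same heading (should be high, since
    #   the place has not changed between revolutions).
    low = np.percentile(cross_heading_scores, 95)
    high = np.percentile(same_heading_scores, 5)
    if high - low < margin:
        return None                # distributions overlap: flag tuning failure
    return (low + high) / 2.0      # place-recognition threshold between them
```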
Abstract:
Diagnostics of rotating machinery has developed significantly in recent decades, and industrial applications are spreading across different sectors. Most applications are characterized by varying shaft velocities, and in many cases transients are the most critical events to monitor. Under these variable-speed conditions, fault symptoms are clearer in the angular/order domains than in the usual time/frequency domains. In the past, this issue was often addressed by sampling data synchronously with the shaft by means of phase-locked circuits governing the acquisition; however, thanks to the spread of cheap and powerful microprocessors, this procedure is now rarer: sampling is usually performed at constant time intervals, and the conversion to the order domain is made by means of digital signal processing techniques. Over the last decades, different algorithms have been proposed for extracting an order spectrum from a signal sampled asynchronously with respect to the shaft rotational velocity; many of them (the so-called computed order tracking family) use interpolation techniques to resample the signal at constant angular increments, followed by a standard discrete Fourier transform to pass from the angular domain to the order domain. A less exploited family of techniques shifts directly from the time domain to the order spectrum by means of modified Fourier transforms. This paper proposes a new transform, named the velocity synchronous discrete Fourier transform, which takes advantage of the instantaneous velocity to improve the quality of its result, reaching performance that can challenge computed order tracking.
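For context, the computed order tracking baseline against which the proposed transform is benchmarked can be sketched in Python as follows; the linear interpolation and variable names are illustrative, and the velocity synchronous DFT itself (which weights the transform by the instantaneous velocity) is not reproduced here:

```python
import numpy as np

def computed_order_tracking(signal, shaft_angle, n_orders=100):
    # Computed order tracking: interpolate the asynchronously sampled signal
    # onto a grid of constant angular increments, then take a DFT so that the
    # frequency axis becomes an order axis (cycles per shaft revolution).
    revs = shaft_angle / (2 * np.pi)       # shaft position in revolutions
    samples_per_rev = 2 * n_orders         # Nyquist limit in the order domain
    uniform_revs = np.arange(revs[0], revs[-1], 1.0 / samples_per_rev)
    resampled = np.interp(uniform_revs, revs, signal)  # angular resampling
    spectrum = np.abs(np.fft.rfft(resampled)) / resampled.size
    orders = np.fft.rfftfreq(resampled.size, d=1.0 / samples_per_rev)
    return orders, spectrum
```

Here `shaft_angle` is the (monotonically increasing) instantaneous shaft angle at each time sample, typically obtained from a tachometer or encoder.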
Abstract:
Cyclostationary models for the diagnostic signals measured on faulty rotating machinery have proved successful in many laboratory tests and industrial applications. The squared envelope spectrum has been identified as the most efficient indicator for the assessment of second-order cyclostationary symptoms of damage, which are typical, for instance, of rolling element bearing faults. In an attempt to foster the spread of rotating machinery diagnostics, the current trend in the field is towards higher levels of automation of condition monitoring systems. For this purpose, statistical tests for the presence of cyclostationarity have been proposed in recent years. The statistical thresholds proposed in the past for the identification of cyclostationary components were obtained under the hypothesis that the signal is white noise when the component is healthy. This assumption, together with the non-white nature of real signals, implies the need to pre-whiten the signal or to filter it in optimal narrow bands, increasing the complexity of the algorithm and the risk of losing diagnostic information or biasing the result. In this paper, the authors introduce an original analytical derivation of the statistical tests for cyclostationarity in the squared envelope spectrum, dropping the hypothesis of white noise from the outset. The effect of first-order and second-order cyclostationary components on the distribution of the squared envelope spectrum is quantified, and the effectiveness of the newly proposed threshold is verified, providing a sound theoretical basis and a practical starting point for efficient automated diagnostics of machine components such as rolling element bearings. The analytical results are verified by means of numerical simulations and experimental vibration data from rolling element bearings.
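A minimal Python sketch of the squared envelope spectrum computation follows, using the Hilbert transform to obtain the analytic signal; this is the standard estimator on which the paper's tests operate, not the paper's new statistical thresholds:

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x, fs):
    # Squared envelope spectrum: magnitude spectrum of |analytic signal|^2.
    # Second-order cyclostationary components (e.g. the repetitive impacts of
    # a rolling element bearing fault) appear as discrete lines at their
    # cyclic frequencies.
    env_sq = np.abs(hilbert(x)) ** 2
    env_sq -= env_sq.mean()                      # remove the DC component
    ses = np.abs(np.fft.rfft(env_sq)) / x.size
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)  # cyclic frequency axis (Hz)
    return freqs, ses
```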