353 results for "pseudo-random number generator"


Relevance: 20.00%

Abstract:

Herein, the mechanical properties of graphene, including Young's modulus, fracture stress and fracture strain, have been investigated by molecular dynamics simulations. The simulation results show that the mechanical properties of graphene are sensitive to temperature changes but insensitive to the number of layers in multilayer graphene. Increasing temperature has a significant adverse effect on the mechanical properties of graphene, whereas the adverse effect of an increasing layer number is marginal. In addition, isotope substitutions in graphene play a negligible role in modifying its mechanical properties.
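
As an illustrative aside, the headline quantities above can be extracted from a stress-strain curve in a few lines. The sketch below assumes a two-column strain/stress file exported from an MD tensile test; the file name, column layout and 2% linear-fit window are hypothetical, not from the paper.

```python
import numpy as np

# Hypothetical stress-strain output from an MD tensile test of graphene:
# column 0 = strain (dimensionless), column 1 = stress (GPa).
data = np.loadtxt("graphene_tensile.dat")  # file name is an assumption
strain, stress = data[:, 0], data[:, 1]

# Young's modulus: slope of the initial linear region (here strain < 2%).
linear = strain < 0.02
E = np.polyfit(strain[linear], stress[linear], 1)[0]

# Fracture point: maximum stress and the strain at which it occurs.
i_frac = np.argmax(stress)
print(f"Young's modulus ~ {E:.1f} GPa")
print(f"Fracture stress ~ {stress[i_frac]:.1f} GPa at strain {strain[i_frac]:.3f}")
```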

Relevance: 20.00%

Abstract:

In recent years considerable attention has been paid to the numerical solution of stochastic ordinary differential equations (SODEs), as SODEs are often more appropriate than their deterministic counterparts in many modelling situations. However, unlike the deterministic case, numerical methods for SODEs are considerably less sophisticated, due to the difficulty in representing the (possibly large number of) random variable approximations to the stochastic integrals. Although Burrage and Burrage [High strong order explicit Runge-Kutta methods for stochastic ordinary differential equations, Applied Numerical Mathematics 22 (1996) 81-101] were able to construct strong local order 1.5 stochastic Runge-Kutta methods for certain cases, it is known that all extant stochastic Runge-Kutta methods suffer an order reduction down to strong order 0.5 if there is non-commutativity between the functions associated with the multiple Wiener processes. This order reduction, down to that of the Euler-Maruyama method, imposes severe difficulties in obtaining meaningful solutions in a reasonable time frame, and this paper attempts to circumvent these difficulties by some new techniques. An additional difficulty in solving SODEs arises even in the linear case, since it is not possible to write the solution analytically in terms of matrix exponentials unless there is a commutativity property between the functions associated with the multiple Wiener processes. Thus, in the present paper, the work of Magnus [On the exponential solution of differential equations for a linear operator, Communications on Pure and Applied Mathematics 7 (1954) 649-673] (originally applied to deterministic non-commutative linear problems) is first applied to non-commutative linear SODEs, and methods of strong order 1.5 for arbitrary, linear, non-commutative SODE systems are constructed, hence giving an accurate approximation to the general linear problem. Secondly, for general nonlinear non-commutative systems with an arbitrary number (d) of Wiener processes, it is shown that strong local order 1 Runge-Kutta methods with d + 1 stages can be constructed by evaluating a set of Lie brackets as well as the standard function evaluations. A method is then constructed which can be efficiently implemented in a parallel environment for this arbitrary number of Wiener processes. Finally, some numerical results are presented which illustrate the efficacy of these approaches. (C) 1999 Elsevier Science B.V. All rights reserved.
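
For context, the strong order 0.5 Euler-Maruyama baseline that the paper improves upon can be sketched in a few lines for a linear SODE with multiple Wiener processes. The matrices below are illustrative stand-ins chosen only to be non-commutative; nothing here reproduces the paper's order 1.5 methods.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_maruyama(A, Bs, y0, T, N):
    """Strong order 0.5 baseline for dY = A Y dt + sum_j B_j Y dW_j."""
    dt = T / N
    y = np.array(y0, dtype=float)
    for _ in range(N):
        dW = rng.normal(0.0, np.sqrt(dt), size=len(Bs))
        y = y + A @ y * dt + sum(dWj * (Bj @ y) for dWj, Bj in zip(dW, Bs))
    return y

# Illustrative 2x2 non-commutative linear system with d = 2 Wiener processes
# (B1 @ B2 != B2 @ B1, the case where the order reduction bites).
A = np.array([[-1.0, 0.5], [0.0, -1.0]])
B1 = np.array([[0.2, 0.0], [0.0, 0.1]])
B2 = np.array([[0.0, 0.3], [0.0, 0.0]])
print(euler_maruyama(A, [B1, B2], [1.0, 1.0], T=1.0, N=1000))
```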

Relevance: 20.00%

Abstract:

Advances in solid-state switches and power electronics techniques have led to the development of compact, efficient and more reliable pulsed power systems. This paper proposes an efficient scheme that utilizes modular switch-capacitor units to obtain high voltage levels with a fast rise time (dv/dt) using low voltage solid-state switches. The proposed pulsed power supply has flexibility in terms of controlling energy and generating a broad range of voltage levels. The energy flow can be controlled, as the stored energy can be adjusted by a current source utilized at the first stage of the system, and the desired voltage level can be obtained by connecting an adequate number of switch-capacitor units. Moreover, the proposed topology is load independent and can therefore easily supply a wide range of applications, especially low-impedance ones. The effectiveness of the proposed approach is verified by simulations.
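
A back-of-the-envelope sketch of the series-stacking idea may help: each unit's capacitor is charged by the current source, and the closed switches place the units in series across the load, summing their voltages. All component values below are illustrative, not taken from the paper.

```python
# Minimal sketch of the switch-capacitor stacking principle.
n_units = 10          # number of switch-capacitor units (illustrative)
C = 1e-6              # capacitance per unit (F)
i_charge = 0.5        # charging current from the first-stage source (A)
t_charge = 100e-6     # charging interval (s)

v_unit = i_charge * t_charge / C     # V = I*t/C per capacitor
v_pulse = n_units * v_unit           # series connection sums the voltages
print(f"per-unit voltage: {v_unit:.0f} V, pulse amplitude: {v_pulse:.0f} V")
```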

Relevance: 20.00%

Abstract:

A newly developed computational approach is proposed in this paper for the analysis of multiple crack problems, based on the eigen crack opening displacement (COD) boundary integral equations. The eigen COD refers to a crack in an infinite domain under fictitious traction acting on the crack surface. With the concept of eigen COD, problems with a great number of cracks can be solved using the conventional displacement discontinuity boundary integral equations in an iterative fashion, with a small system matrix, determining all the unknown CODs step by step. To deal with the interactions among cracks in multiple crack problems, all cracks are divided into two groups, the adjacent group and the far-field group, according to their distance from the current crack under consideration. The adjacent group contains cracks at relatively small distances, with strong effects on the current crack, while the far-field group is composed of cracks at relatively large distances. Correspondingly, the eigen COD of the current crack is computed in two parts: the first part is computed from the fictitious tractions of the adjacent cracks via the local Eshelby matrix derived from the traction boundary integral equations in discretized form, while the second part is computed from those of the far-field cracks, so that high computational efficiency is achieved. The numerical results of the proposed approach are compared not only with those of the dual boundary integral equations (D-BIE) and the BIE with numerical Green's functions (NGF), but also with analytical solutions in the literature. The effectiveness and efficiency of the proposed approach are verified. Numerical examples are provided for the stress intensity factors of cracks, up to several thousand in number, in both finite and infinite plates.
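
The overall iteration can be summarised in skeletal form. The sketch below is a structural outline only, under the assumption that the small local Eshelby systems are precomputed; the callables local_solve and far_field are hypothetical placeholders for the paper's discretized boundary integral operators.

```python
import numpy as np

def solve_cods(tractions, local_solve, far_field, tol=1e-8, max_iter=200):
    """Gauss-Seidel-style sweep over cracks, as described above.

    tractions[i] : applied traction vector on crack i
    local_solve  : local_solve(i, rhs) -> COD of crack i, i.e. the small
                   system built from the local Eshelby matrix of crack i
                   and its adjacent group (assumed precomputed)
    far_field    : far_field(i, cods) -> traction induced on crack i by
                   the current CODs of its far-field group
    """
    cods = [np.zeros_like(t) for t in tractions]
    for _ in range(max_iter):
        change = 0.0
        for i in range(len(tractions)):
            # strong near-field interactions enter through the small
            # local system; weak far-field ones are lagged from the
            # previous sweep, keeping every solve cheap
            new = local_solve(i, tractions[i] - far_field(i, cods))
            change = max(change, float(np.max(np.abs(new - cods[i]))))
            cods[i] = new
        if change < tol:
            break
    return cods
```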

Relevance: 20.00%

Abstract:

High magnification and large depth of field with a temporal resolution of less than 100 microseconds are possible using the present invention, which combines: a linear electron beam produced by a tungsten filament from an SX-40A Scanning Electron Microscope (SEM); a magnetic deflection coil with lower inductance, obtained by reducing the number of turns of the saddle-coil wires while increasing the diameter of the wires; a fast scintillator, photomultiplier tube, photomultiplier tube base and signal amplifiers; and a high-speed data acquisition system which allows a scan rate of 381 frames per second and a 256×128 pixel density in the SEM image at a data acquisition rate of 25 MHz. The data acquisition and scan position are fully coordinated: a digitizer and a digital waveform generator, which generates the sweep signals to the scan coils, run off the same clock to acquire the signal in real time.
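
A quick arithmetic check shows the quoted figures are mutually consistent:

```python
# At 381 frames/s and 256 x 128 pixels per frame, the pixel rate is about
# half the 25 MHz acquisition clock, i.e. roughly two samples per pixel.
frames_per_s = 381
pixels_per_frame = 256 * 128
clock_hz = 25e6

pixel_rate = frames_per_s * pixels_per_frame   # ~12.5 Mpixel/s
print(pixel_rate)                               # 12484608
print(clock_hz / pixel_rate)                    # ~2.0 samples per pixel
```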

Relevance: 20.00%

Abstract:

Automatic pain monitoring has the potential to greatly improve patient diagnosis and outcomes by providing a continuous objective measure. One of the most promising approaches is to detect pain via automatically detected facial expressions. However, current approaches have failed due to their inability to: 1) integrate the rigid and non-rigid head motion into a single feature representation, and 2) incorporate the salient temporal patterns into the classification stage. In this paper, we tackle the first problem by developing a "histogram of facial action units" representation using Active Appearance Model (AAM) face features, and then utilize a Hidden Conditional Random Field (HCRF) to overcome the second issue. We show that both of these methods improve the performance of sequence-level pain detection compared to current state-of-the-art methods on the UNBC-McMaster Shoulder Pain Archive.
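
To make the first contribution concrete, a sequence-level "histogram of facial action units" style descriptor can be sketched as follows. The binning scheme and dimensions are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def histogram_of_action_units(au_frames, n_bins=4):
    """Minimal sketch of a histogram-of-AUs sequence descriptor.

    au_frames : (n_frames, n_aus) array of per-frame AU intensities,
                e.g. derived from AAM face features, scaled to [0, 1].
    Returns a fixed-length vector: for each AU, a histogram of its
    intensity over the sequence, so sequences of any length map to
    the same feature size for the classification stage.
    """
    n_frames, n_aus = au_frames.shape
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    hists = [np.histogram(au_frames[:, j], bins=edges)[0] / n_frames
             for j in range(n_aus)]
    return np.concatenate(hists)

# Example: a 120-frame sequence with 10 AU channels.
features = histogram_of_action_units(np.random.rand(120, 10))
print(features.shape)  # (40,)
```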

Relevance: 20.00%

Abstract:

Historically, a significant gap between male and female wages has existed in the Australian labour market. Indeed, this wage differential was institutionalised in the 1912 arbitration decision, which determined that the basic female wage would be set at between 54 and 66 per cent of the male wage. More recently, however, the 1969 and 1972 Equal Pay Cases determined that male/female wage relativities should be based on the premise of equal pay for work of equal value. It is important to note that the mere observation that average wages differ between males and females is not, in itself, evidence of sex discrimination: economists restrict the definition of wage discrimination to cases where two distinct groups receive different average remuneration for reasons unrelated to differences in productivity characteristics. This paper extends previous studies of wage discrimination in Australia (Chapman and Mulvey, 1986; Haig, 1982) by correcting the estimated male/female wage differential for the existence of non-random sampling. Previous Australian estimates of male/female human capital based wage specifications, together with estimates of the corresponding wage differential, all suffer from a failure to address this issue. If the sample of females observed to be working is not a random sample, then estimates of the male/female wage differential will be both biased and inconsistent.
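
The standard remedy for this selection problem is the Heckman two-step estimator, sketched below under illustrative variable names: a probit participation equation yields the inverse Mills ratio, which is then added to the wage regression to absorb the selection term.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def heckman_two_step(y, X, Z, selected):
    """Minimal sketch of the Heckman (1979) two-step correction for
    non-random sample selection. Variable names are illustrative.

    y        : log wages (used only where selected is True)
    X        : wage-equation regressors (all individuals)
    Z        : selection-equation regressors (all individuals)
    selected : boolean array, True if the wage is observed
    """
    # Step 1: probit for the probability of being observed working.
    probit = sm.Probit(selected.astype(float), sm.add_constant(Z)).fit(disp=0)
    xb = probit.fittedvalues              # linear index Z'gamma
    mills = norm.pdf(xb) / norm.cdf(xb)   # inverse Mills ratio

    # Step 2: OLS on the selected sample, with the Mills ratio added as a
    # regressor; a significant coefficient on it signals selection bias.
    Xs = sm.add_constant(np.column_stack([X[selected], mills[selected]]))
    return sm.OLS(y[selected], Xs).fit()
```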

Relevance: 20.00%

Abstract:

It has not yet been established whether the spatial variation of particle number concentration (PNC) within a microscale environment can affect exposure estimation results. In general, the degree of spatial variation within microscale environments remains unclear, since previous studies have only focused on spatial variation within macroscale environments. The aims of this study were to determine the spatial variation of PNC within microscale school environments, in order to assess the importance of the number of monitoring sites for exposure estimation; to identify which parameters have the largest influence on spatial variation; and to quantify the relationship between those parameters and spatial variation. Air quality measurements were conducted for two consecutive weeks at each of 25 schools across Brisbane, Australia. PNC was measured at three sites within the grounds of each school, along with meteorological and several other air quality parameters, and traffic density was recorded for the busiest road adjacent to each school. Spatial variation at each school was quantified using the coefficient of variation (CV). The portion of the CV associated with instrument uncertainty was found to be 0.3; the CV was therefore corrected so that only non-instrument uncertainty was analysed. The median corrected CV (CVc) ranged from 0 to 0.35 across the schools, with 12 schools found to exhibit spatial variation. The study determined the number of monitoring sites required at the schools with spatial variability and tested the deviation in exposure estimation arising from using only a single site: nine schools required two measurement sites and three schools required three sites, and overall, the deviation in exposure estimation from using only one monitoring site was as much as one order of magnitude. The study also tested the association of spatial variation with wind speed/direction and traffic density, using partial correlation coefficients to identify sources of variation and non-parametric function estimation to quantify the level of variability. Traffic density and road-to-school wind direction were found to have a positive effect on CVc, and therefore also on spatial variation. Wind speed was found to reduce spatial variation once it exceeded a threshold of 1.5 m/s, with no effect below this threshold. The effect of traffic density on spatial variation increased until a density of 70 vehicles per five minutes was reached, at which point it plateaued and did not increase further with increasing traffic density.
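
The CV correction step can be sketched as follows. The exact correction used in the study is not stated here, so subtracting the instrument component in quadrature is an assumption on our part:

```python
import numpy as np

CV_INSTRUMENT = 0.3  # portion of CV attributed to instrument uncertainty

def corrected_cv(pnc_sites):
    """Median corrected CV for one school (a minimal sketch).

    pnc_sites : 2D array, rows = simultaneous PNC readings,
                columns = monitoring sites within the school grounds.
    """
    cv = np.std(pnc_sites, axis=1, ddof=1) / np.mean(pnc_sites, axis=1)
    # Assumed convention: remove the instrument component in quadrature.
    cvc2 = np.clip(cv**2 - CV_INSTRUMENT**2, 0.0, None)
    return np.median(np.sqrt(cvc2))

# Example: 100 time steps of synthetic readings at 3 sites.
readings = np.random.lognormal(mean=9.0, sigma=0.2, size=(100, 3))
print(corrected_cv(readings))
```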

Relevance: 20.00%

Abstract:

Objective: Effective management of multi-resistant organisms is an important issue for hospitals both in Australia and overseas. This study investigates the utility of Bayesian network (BN) analysis for examining relationships between risk factors and colonisation with Vancomycin-Resistant Enterococcus (VRE).
Design: Bayesian network analysis was performed using infection control data collected over a period of 36 months (2008-2010).
Setting: Princess Alexandra Hospital (PAH), Brisbane.
Outcome of interest: Number of new VRE isolates.
Methods: A BN is a probabilistic graphical model that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). BNs enable multiple interacting agents to be studied simultaneously. The initial BN model was constructed based on the infectious disease physician's expert knowledge and the current literature. Continuous variables were dichotomised using the third-quartile values of the 2008 data. The BN was used to examine the probabilistic relationships between VRE isolates and risk factors, and to establish which factors were associated with an increased probability of a high number of VRE isolates.
Software: Netica (version 4.16).
Results: Preliminary analysis revealed that VRE transmission and VRE prevalence were the most influential factors in predicting a high number of VRE isolates. Interestingly, several factors (hand hygiene and cleaning) known from the literature to be associated with VRE prevalence did not appear to be as influential as expected in this BN model.
Conclusions: This preliminary work has shown that Bayesian network analysis is a useful tool for examining clinical infection prevention issues, where there is often a web of factors that influence outcomes. The BN model can be restructured easily, enabling various combinations of agents to be studied.
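
For readers without Netica, the same kind of model can be sketched with the open-source pgmpy library; the structure and probabilities below are illustrative only, not those elicited in the study.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two binary risk factors feeding the outcome node (illustrative DAG).
model = BayesianNetwork([("Transmission", "HighVRE"),
                         ("Prevalence", "HighVRE")])
model.add_cpds(
    TabularCPD("Transmission", 2, [[0.7], [0.3]]),
    TabularCPD("Prevalence", 2, [[0.6], [0.4]]),
    TabularCPD("HighVRE", 2,
               # P(HighVRE | Transmission, Prevalence), illustrative numbers
               [[0.95, 0.6, 0.7, 0.2],   # HighVRE = no
                [0.05, 0.4, 0.3, 0.8]],  # HighVRE = yes
               evidence=["Transmission", "Prevalence"],
               evidence_card=[2, 2]))

infer = VariableElimination(model)
print(infer.query(["HighVRE"], evidence={"Transmission": 1}))
```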

Relevance: 20.00%

Abstract:

This mathematics education research provides significant insights for the teaching of decimals to children. It is well known that decimals are one of the most difficult topics to learn and teach. Annette's research is unique in that it focuses not only on the cognitive, but also on the affective and conative aspects of learning and teaching decimals. The study is innovative as it includes the students as co-constructors and co-researchers. The findings open new ways of thinking for educators about how students cognitively process decimal knowledge, as well as how students might develop a sense of self as a learner, teacher and researcher in mathematics.

Relevance: 20.00%

Abstract:

Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, but non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed with deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant, and as a result the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied.

In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction, followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security aspects required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security.

This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images.

This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, an essential requirement for non-invertibility, and is designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
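
The quantization-and-encoding stage under study can be sketched generically as follows; equiprobable (quantile) thresholds and Gray coding are common choices used here for illustration, not the dissertation's specific schemes.

```python
import numpy as np

def train_quantizer(features, bits_per_dim=2):
    """Learn per-dimension quantization thresholds from training data,
    the stage whose accuracy/security role the dissertation analyses.
    Equiprobable thresholds are one common choice (an assumption here)."""
    levels = 2 ** bits_per_dim
    qs = np.linspace(0, 1, levels + 1)[1:-1]
    return np.quantile(features, qs, axis=0)  # shape (levels-1, n_dims)

def binarize(feature, thresholds):
    """Quantize one real-valued feature vector and Gray-code the cell
    indices so neighbouring cells differ in a single bit, preserving
    robustness to small perturbations of the input."""
    cells = (feature[None, :] > thresholds).sum(axis=0)
    gray = cells ^ (cells >> 1)
    bits_per_dim = int(np.ceil(np.log2(thresholds.shape[0] + 1)))
    return np.array([(g >> b) & 1 for g in gray
                     for b in range(bits_per_dim - 1, -1, -1)])

train = np.random.randn(1000, 8)          # stand-in for extracted features
th = train_quantizer(train)
print(binarize(np.random.randn(8), th))   # binary hash bits
```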

Relevance: 20.00%

Abstract:

The feral pig, Sus scrofa, is a widespread and abundant invasive species in Australia. Feral pigs pose a significant threat to the environment, the agricultural industry and human health, and in far north Queensland they endanger the World Heritage values of the Wet Tropics. Historical records document the first introduction of domestic pigs into Australia via European settlers in 1788 and subsequent introductions from Asia from 1827 onwards. Since this time, domestic pigs have been accidentally and deliberately released into the wild and significant feral pig populations have become established, resulting in the declaration of this species as a class 2 pest in Queensland. The overall objective of this study was to assess the population genetic structure of feral pigs in far north Queensland, in particular to enable the delineation of demographically independent management units. The identification of ecologically meaningful management units using molecular techniques can assist in targeting feral pig control to bring about effective long-term management. Molecular genetic analysis was undertaken on 434 feral pigs from 35 localities between Tully and Innisfail. Seven polymorphic, unlinked microsatellite loci were screened, and fixation indices (FST and analogues) and Bayesian clustering methods were used to identify population structure and management units in the study area. The hyper-variable mitochondrial control region (D-loop) of 35 feral pigs was also sequenced to identify pig ancestry. Three management units were identified in the study at a scale of 25 to 35 km. Even with the strong pattern of genetic structure identified in the study area, some evidence of long-distance dispersal and/or translocation was found, as a small number of individuals exhibited ancestry from a management unit other than the one in which they were sampled. Overall, gene flow in the study area was found to be influenced by environmental features such as topography and land use, but no distinct or obvious natural or anthropogenic geographic barriers were identified. Furthermore, strong evidence was found for non-random mating between pigs of European and Asian breeds, indicating that feral pig ancestry influences population genetic structure. Phylogenetic analysis revealed two distinct mitochondrial DNA clades, representing Asian and European domestic pig breeds. A significant finding was that pigs of Asian origin living in Innisfail and south Tully were not mating randomly with the European-breed pigs populating the nearby Mission Beach area. Feral pig control should be implemented in each of the management units identified in this study, coordinated across properties within each management unit to prevent re-colonisation from adjacent localities. The adjacent rainforest and National Park estates, as well as the rainforest-crop boundary, should be included in a simultaneous control operation for greater success.
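
As a small illustration of the fixation-index idea, FST can be computed from allele frequencies as (HT − HS)/HT. The sketch below uses biallelic-style frequencies and hypothetical numbers; the study itself used multi-allelic microsatellites and FST analogues.

```python
import numpy as np

def pairwise_fst(p1, p2):
    """Wright's FST between two populations from allele frequencies at
    biallelic-style loci, averaged over loci: FST = (HT - HS) / HT."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    p_bar = (p1 + p2) / 2
    hs = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-pop
    ht = 2 * p_bar * (1 - p_bar)                      # total heterozygosity
    with np.errstate(invalid="ignore"):
        fst = (ht - hs) / ht
    return np.nanmean(fst)

# Allele frequencies at 7 loci in two hypothetical management units.
unit_a = [0.10, 0.45, 0.80, 0.30, 0.55, 0.20, 0.65]
unit_b = [0.35, 0.50, 0.40, 0.60, 0.50, 0.45, 0.30]
print(pairwise_fst(unit_a, unit_b))
```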

Relevance: 20.00%

Abstract:

The use of piezoelectric transducers for energy conversion is rapidly expanding across several applications. Industrial applications of high power ultrasound transducers include surface cleaning, water treatment, plastic welding and food sterilization; high power ultrasound transducers also play an important role in biomedical applications, both diagnostic and therapeutic. An ultrasound transducer is usually applied to convert electrical energy to mechanical energy and vice versa. In some high power ultrasound systems, ultrasound transducers are applied as a transmitter, as a receiver, or both: as a transmitter, a transducer converts electrical energy to mechanical energy, while as a receiver it converts mechanical energy to electrical energy, acting as a sensor for the control system. Once a piezoelectric transducer is excited by an electrical signal, the piezoelectric material starts to vibrate and generates ultrasound waves. The portion of the ultrasound waves that passes through the medium is sensed by the receiver and converted to electrical energy. To drive an ultrasound transducer, the excitation signal should be properly designed, otherwise a low-quality signal can degrade the performance of the transducer (energy conversion) and increase the power consumption of the system. For instance, some portion of the generated power may be delivered at unwanted frequencies, which is not acceptable for some applications, especially biomedical ones. To achieve better transducer performance, the characteristics of the high power ultrasound transducer should be taken into consideration along with the quality of the excitation signal. In this regard, several simulation and experimental tests are carried out in this research to model high power ultrasound transducers and systems. During these experiments, high power ultrasound transducers are excited by several excitation signals with different amplitudes and frequencies, using a network analyser, a signal generator, a high power amplifier and a multilevel converter. To analyse the behaviour of the ultrasound system, the voltage ratio of the system is measured in different tests: the voltage across the transmitter is measured as the input voltage and divided by the output voltage, which is measured across the receiver. The results on the transducer characteristics and the ultrasound system behaviour are discussed in chapters 4 and 5 of this thesis.

Each piezoelectric transducer has several resonance frequencies at which its impedance magnitude is lower than at non-resonance frequencies. At just one of these frequencies the impedance magnitude is a minimum; this is known as the main resonance frequency of the transducer. To attain higher efficiency and deliver more power to the ultrasound system, the transducer is usually excited at the main resonance frequency, so it is important to identify this frequency and the other resonance frequencies. To this end, a frequency detection method is proposed in this research, which is discussed in chapter 2. An extended electrical model of an ultrasound transducer with multiple resonance frequencies consists of several RLC legs in parallel with a capacitor. Each RLC leg represents one of the resonance frequencies of the transducer: at that resonance frequency the inductor reactance and the capacitor reactance cancel each other out, and the resistor of the leg represents the power conversion of the system at that frequency. This concept is shown in the simulation and test results presented in chapter 4.

To excite a high power ultrasound transducer, a high power signal is required. Multilevel converters are usually applied to generate a high power signal, but the drawback of this signal is its low quality in comparison with a sinusoidal signal. In some applications, such as ultrasound, it is extremely important to generate a high quality signal. Several control and modulation techniques have been introduced to control the output voltage of multilevel converters; one of these is the harmonic elimination technique, in which the switching angles are chosen in such a way as to reduce the harmonic content of the output. Increasing the number of switching angles results in more harmonic reduction, but more switching angles require more output voltage levels, which increase the number of components and the cost of the converter. To improve the quality of the output voltage signal without additional components, a new harmonic elimination technique is proposed in this research: more variables (DC voltage levels as well as switching angles) are chosen so as to eliminate more low-order harmonics than conventional harmonic elimination techniques. In the conventional harmonic elimination method, the DC voltage levels are the same and only the switching angles are calculated to eliminate harmonics, so the number of eliminated harmonics is limited by the number of switching angles. In the proposed modulation technique, the switching angles and the DC voltage levels are calculated off-line to eliminate more harmonics. The DC voltage levels are therefore not equal and must be regulated; to achieve this, a DC/DC converter is applied to adjust the DC link voltages across several capacitors. The effect of the new harmonic elimination technique on the output quality of several single-phase multilevel converters is explained in chapters 3 and 6 of this thesis.

According to the electrical model of a high power ultrasound transducer, the device can be modelled as parallel combinations of RLC legs with a main capacitor. The impedance diagram of the transducer in the frequency domain shows that it has capacitive characteristics at almost all frequencies. Therefore, using a voltage source converter to drive a high power ultrasound transducer can create a significant leakage current through the transducer, due to the significant voltage stress (dv/dt) across it. To remedy this problem, LC filters are applied in some applications; however, for applications such as ultrasound, an LC filter can degrade the performance of the transducer by changing its characteristics and displacing its resonance frequency. In such a case a current source converter is a suitable choice. In this regard, a current source converter is implemented and applied to excite the high power ultrasound transducer, with hysteresis control and unipolar modulation used to control the output current and voltage respectively. The results of this test are explained in chapter 7.
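
The extended electrical model described above is straightforward to explore numerically: each series RLC leg in parallel with the main capacitance contributes one dip in the impedance magnitude, and the deepest dip marks the main resonance. Component values below are illustrative only, not from the thesis.

```python
import numpy as np

C0 = 4e-9                       # main (shunt) capacitance, illustrative
legs = [                        # (R, L, C) per resonance leg
    (50.0, 20e-3, 1.3e-9),      # resonates near 31 kHz
    (120.0, 5e-3, 1.1e-9),      # resonates near 68 kHz
]

def impedance(f):
    """Impedance of several series-RLC legs in parallel with C0."""
    w = 2j * np.pi * f
    y = w * C0                                      # shunt capacitor
    for R, L, C in legs:
        y = y + 1.0 / (R + w * L + 1.0 / (w * C))   # add each RLC leg
    return 1.0 / y

f = np.linspace(10e3, 200e3, 20000)
zmag = np.abs(impedance(f))
# The global minimum of |Z| identifies the main resonance frequency.
print(f"main resonance ~ {f[np.argmin(zmag)] / 1e3:.1f} kHz")
```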

Relevance: 20.00%

Abstract:

The emergence of pseudo-marginal algorithms has led to improved computational efficiency for dealing with complex Bayesian models with latent variables. Here an unbiased estimator of the likelihood replaces the true likelihood, in order to produce a Bayesian algorithm that remains on the marginal space of the model parameter (with latent variables integrated out), with a target distribution that is still the correct posterior distribution. Very efficient proposal distributions can be developed on the marginal space relative to the joint space of model parameter and latent variables, so pseudo-marginal algorithms tend to have substantially better mixing properties. However, for pseudo-marginal approaches to perform well, the likelihood has to be estimated rather precisely, which can be difficult to achieve in complex applications. In this paper we propose to take advantage of the multiple central processing units (CPUs) that are readily available on most standard desktop computers. The likelihood is estimated independently on each of the multiple CPUs, with the ultimate estimate of the likelihood being the average of the estimates obtained from the multiple CPUs. The estimate remains unbiased, but its variability is reduced. We compare and contrast two different technologies that allow the implementation of this idea, both of which require a negligible amount of extra programming effort. The superior performance of this idea over the standard approach is demonstrated on simulated data from a stochastic volatility model.
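
The multi-CPU averaging idea can be sketched with Python's standard multiprocessing module; the likelihood estimator below is a placeholder stub standing in for, e.g., an importance sampling estimate over the latent variables.

```python
import numpy as np
from multiprocessing import Pool

def likelihood_estimate(seed):
    """Unbiased estimate of the likelihood at fixed model parameters.
    This stub merely stands in for an expensive simulation-based
    estimator; each worker uses an independent random seed."""
    rng = np.random.default_rng(seed)
    return np.exp(rng.normal(-100.0, 1.0))  # placeholder estimator

def averaged_estimate(n_cpus=4, base_seed=0):
    """Average independent unbiased estimates computed on separate CPUs.
    The average remains unbiased, with variance reduced by ~1/n_cpus."""
    with Pool(n_cpus) as pool:
        seeds = range(base_seed, base_seed + n_cpus)
        estimates = pool.map(likelihood_estimate, seeds)
    return float(np.mean(estimates))

if __name__ == "__main__":
    print(averaged_estimate())
```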