293 results for Event mixing technique

in Queensland University of Technology - ePrints Archive


Relevance:

80.00%

Abstract:

An adequate amount of graphene oxide (GO) was first prepared by oxidation of graphite, and GO/epoxy nanocomposites were subsequently prepared by a typical solution mixing technique. X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), Raman spectroscopy and Fourier transform infrared (FTIR) spectroscopy confirmed the successful preparation of GO. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images of the graphite oxide showed that it consists of a large number of graphene oxide platelets with a curled morphology, comprising thin, wrinkled, sheet-like structures. An AFM image of the exfoliated GO indicated that the average thickness of the GO sheets is ~1.0 nm, consistent with a GO monolayer. The mechanical properties of the as-prepared GO/epoxy nanocomposites were investigated. Significant improvements in both Young's modulus and tensile strength were observed at very low GO loadings. The Young's modulus of the nanocomposite containing 0.5 wt% GO was 1.72 GPa, about 35% higher than that of the pure epoxy resin (1.28 GPa). The effective reinforcement of the GO-based epoxy nanocomposites can be attributed to the good dispersion of, and strong interfacial interactions between, the GO sheets and the epoxy resin matrix.
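
As a quick arithmetic check of the quoted stiffness gain, the relative increase can be recomputed directly from the two reported modulus values (a minimal Python sketch; the two moduli are taken from the abstract, everything else is illustrative):

```python
# Recompute the reported Young's modulus gain for the 0.5 wt% GO nanocomposite.
E_epoxy = 1.28  # GPa, neat epoxy (reported)
E_nano = 1.72   # GPa, 0.5 wt% GO/epoxy nanocomposite (reported)

gain_pct = (E_nano - E_epoxy) / E_epoxy * 100
print(f"Modulus increase: {gain_pct:.1f}%")  # ~34%, consistent with the ~35% quoted
```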

Relevance:

80.00%

Abstract:

A bulk amount of graphite oxide was prepared by oxidation of graphite using the modified Hummers method, and its ultrasonication in organic solvents yielded graphene oxide (GO). X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), Raman spectroscopy and Fourier transform infrared (FTIR) spectroscopy confirmed the successful preparation of GO. The XPS survey spectrum of GO revealed the presence of 66.6 at% C and 30.4 at% O. Scanning electron microscopy (SEM) and transmission electron microscopy (TEM) images of the graphene oxide showed that it consists of a large number of graphene oxide platelets with a curled morphology, comprising thin, wrinkled, sheet-like structures. An AFM image of the exfoliated GO indicated that the average thickness of the GO sheets is ~1.0 nm, consistent with a GO monolayer. GO/epoxy nanocomposites were prepared by a typical solution mixing technique, and the influence of GO on the mechanical and thermal properties of the nanocomposites was investigated. Regarding the mechanical behaviour of the GO/epoxy nanocomposites, 0.5 wt% GO achieved the maximum increase in elastic modulus (~35%) and tensile strength (~7%). TEM analysis provided clear images of the microstructure, showing homogeneous dispersion of GO in the polymer matrix. The improved strength of the GO/epoxy nanocomposites can be attributed to the inherent strength of GO, the good dispersion, and the strong interfacial interactions between the GO sheets and the polymer matrix. However, incorporation of GO had a significant negative effect on the composite glass transition temperature (Tg), which may arise from interference by GO with the curing reaction of the epoxy.

Relevance:

80.00%

Abstract:

The effect of graphene oxide (GO) on the mechanical properties and the curing reaction of a Diglycidyl Ether of Bisphenol A/F and Triethylenetetramine epoxy system was investigated. GO was prepared by oxidation of graphite flakes and characterized by spectroscopic and microscopic techniques. Epoxy nanocomposites were fabricated with different GO loadings by a solution mixing technique. It was found that incorporating a small amount of GO into the epoxy matrix significantly enhanced the mechanical properties of the epoxy. In particular, mode I fracture toughness increased by nearly 50% with the addition of 0.1 wt% GO. The toughening mechanism was elucidated by fractographic analysis of the tested samples: the epoxy/GO nanocomposites exhibited more irregular, coarse, multi-plane fracture surfaces, implying that the two-dimensional GO sheets effectively disturbed and deflected crack propagation. At 0.5 wt% GO, the elastic modulus was ~35% greater than that of neat epoxy. Differential scanning calorimetry (DSC) results showed that GO addition moderately affected the glass transition temperature (Tg) of the epoxy, with a maximum decrease of ~7 °C for the nanocomposite with 0.5 wt% GO. The DSC results further revealed that GO significantly hindered the cure reaction of the epoxy system.

Relevance:

30.00%

Abstract:

Most unsignalised intersection capacity calculation procedures are based on gap acceptance models, so the accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods: the Average Central Gap (ACG) method, the Strength Weighted Central Gap (SWCG) method, and the Mode Central Gap (MCG) method, against MLE for their fidelity in rendering true sample mean critical gaps. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulation mean critical gap was varied between 3 s and 8 s, while the offered gap rate was varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close-to-perfect fit to the simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and offers superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows, but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor-stream and major-stream flow conditions and for a variety of minor-stream movement types, to compare critical gap estimates from MLE against MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
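
Since MLE is treated as the benchmark estimator here, a minimal sketch of how such an estimator is commonly set up may be useful. The sketch assumes log-normally distributed critical gaps and exponentially distributed offered gaps; the flow rate, sample size, and all variable names are illustrative assumptions, not values or code from the study:

```python
# Sketch: Maximum Likelihood Estimation of the sample mean critical gap from each
# driver's largest rejected gap and accepted gap (log-normal critical gaps assumed).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

rng = np.random.default_rng(1)

# --- synthetic stand-in for the Monte Carlo event-based simulation output ---
n_drivers, flow = 300, 0.2          # drivers; offered gap rate in veh/s (assumed)
true_mean, true_sd = 5.0, 1.0       # seconds (assumed)
s2 = np.log(1 + (true_sd / true_mean) ** 2)
mu, sigma = np.log(true_mean) - s2 / 2, np.sqrt(s2)
t_crit = rng.lognormal(mu, sigma, n_drivers)   # each driver's critical gap

max_rejected, accepted = [], []
for tc in t_crit:                   # offer exponential gaps until one is accepted
    largest_rej = 0.0
    while True:
        gap = rng.exponential(1.0 / flow)
        if gap >= tc:
            accepted.append(gap)
            max_rejected.append(largest_rej)
            break
        largest_rej = max(largest_rej, gap)
max_rejected, accepted = np.array(max_rejected), np.array(accepted)

# --- MLE: each driver's critical gap lies between the largest rejected gap and
# the accepted gap, so the likelihood is the product of [F(a_i) - F(r_i)] ---
def neg_log_likelihood(params):
    m, s = params
    if s <= 0:
        return np.inf
    F = lognorm(s, scale=np.exp(m)).cdf
    prob = np.clip(F(accepted) - F(max_rejected), 1e-12, None)
    return -np.log(prob).sum()

fit = minimize(neg_log_likelihood, x0=[np.log(4.0), 0.3], method="Nelder-Mead")
m_hat, s_hat = fit.x
print(f"estimated mean critical gap: {np.exp(m_hat + s_hat**2 / 2):.2f} s")
```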

Relevance:

30.00%

Abstract:

This research introduces the proposition that Electronic Dance Music's beat-mixing function could be applied to create immediacy in other musical genres. The inclusion of rhythmic sections at the beginning and end of each musical work created a 'DJ-friendly' environment. The term used in this thesis for the application of beat-mixing to Rock music is 'ClubRock'. A collaboration between a number of DJs and Rock music professionals applied beat-mixing to blend Rock tracks into a continuous ClubRock set. The DJ technique of beat-mixing Rock music transformed static renditions into a fluid creative work. The hybridisation of the two genres, EDM and Rock, resulted in a contribution to Rock music compositional approaches and the production of a unique Rock album (Manarays, Get Lucky).

Relevance:

30.00%

Abstract:

User evaluations using paper prototypes commonly lack social context. The group simulation technique described in this paper offers a solution to this problem. The study introduces an early-phase participatory design technique targeted at small groups. The proposed technique is used to evaluate an interface that enables group work in creating photo collections. Three groups of four users (12 in total) took part in a simulation session in which they tested a low-fidelity design concept that included their own personal photo content from an event their group had attended together. The users' own content was used to evoke natural experiences. Our results indicate that the technique helped users engage naturally with the prototype during the session. The technique is suggested to be suitable for evaluating other early-phase concepts and for guiding design solutions, especially for concepts that include users' personal content and enable content sharing.

Relevance:

30.00%

Abstract:

Standard Monte Carlo (sMC) simulation models have been widely used in AEC industry research to address system uncertainties. Although the benefits of probabilistic simulation analyses over deterministic methods are well documented, the sMC technique is quite sensitive to the probability distributions of the input variables. This becomes highly pronounced when the region of interest within the joint probability distribution (a function of the input variables) is small; in such cases, the standard Monte Carlo approach is often impractical from a computational standpoint. In this paper, a comparative analysis of standard Monte Carlo simulation and Markov Chain Monte Carlo with subset simulation (MCMC/ss) is presented. The MCMC/ss technique is a more complex simulation method (relative to sMC) in which a structured sampling algorithm is employed in place of completely randomized sampling; consequently, gains in computational efficiency can be made. The two simulation methods are compared via theoretical case studies.
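
To illustrate why sMC becomes impractical when the region of interest is small (this is only the motivation for MCMC/ss, not the subset simulation algorithm itself), a minimal sketch with an assumed limit-state function shows how the estimator's coefficient of variation, and hence the required sample size, grows as the target probability shrinks:

```python
# Sketch: standard Monte Carlo estimation of a small "region of interest"
# probability, and the sample size it implies. The limit-state function and all
# numbers are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def limit_state(x):
    # Region of interest ("failure") where g(x) < 0; here a simple threshold.
    return 5.0 - x.sum(axis=1)

n = 100_000
x = rng.standard_normal((n, 2))           # two standard-normal input variables
p_hat = np.mean(limit_state(x) < 0)       # crude sMC probability estimate

cov = np.sqrt((1 - p_hat) / (n * p_hat))            # CoV of the sMC estimator
n_for_10pct_cov = (1 - p_hat) / (p_hat * 0.10**2)   # samples needed for 10% CoV
print(f"p_hat ~ {p_hat:.1e}, CoV ~ {cov:.2f}, n for 10% CoV ~ {n_for_10pct_cov:.1e}")
# For rarer events (p ~ 1e-6) the same formula calls for ~1e8 samples, which is
# where a structured sampler such as MCMC with subset simulation pays off.
```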

Relevance:

30.00%

Abstract:

This paper addresses the problem of identifying and explaining behavioral differences between two business process event logs. The paper presents a method that, given two event logs, returns a set of statements in natural language capturing behavior that is present or frequent in one log, while absent or infrequent in the other. This log delta analysis method allows users to diagnose differences between normal and deviant executions of a process or between two versions or variants of a process. The method relies on a novel approach to losslessly encode an event log as an event structure, combined with a frequency-enhanced technique for differencing pairs of event structures. A validation of the proposed method shows that it accurately diagnoses typical change patterns and can explain differences between normal and deviant cases in a real-life log, more compactly and precisely than previously proposed methods.
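
The event-structure encoding and frequency-enhanced differencing are specific to this method, but the kind of output it produces can be illustrated with a deliberately simplified stand-in: comparing the relative frequency of directly-follows pairs between two logs and turning large gaps into natural-language statements. This is only a toy sketch under that simplification, not the technique described in the paper:

```python
# Toy log delta analysis: flag directly-follows behaviour that is frequent in one
# event log but infrequent or absent in the other. Logs are lists of traces
# (activity sequences); this is a simplified stand-in, not an event structure.
from collections import Counter

def directly_follows_freq(log):
    pairs = Counter((a, b) for trace in log for a, b in zip(trace, trace[1:]))
    total = sum(pairs.values()) or 1
    return {pair: count / total for pair, count in pairs.items()}

def log_delta(log_a, log_b, min_gap=0.05):
    fa, fb = directly_follows_freq(log_a), directly_follows_freq(log_b)
    for pair in sorted(set(fa) | set(fb)):
        da, db = fa.get(pair, 0.0), fb.get(pair, 0.0)
        if abs(da - db) >= min_gap:
            yield (f"'{pair[0]}' is directly followed by '{pair[1]}' in "
                   f"{da:.0%} of steps in log A but {db:.0%} in log B")

normal  = [["register", "check", "approve", "notify"]] * 9
deviant = [["register", "check", "reject", "notify"],
           ["register", "approve", "notify"]]
for statement in log_delta(normal, deviant):
    print(statement)
```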

Relevance:

30.00%

Abstract:

In an estuary, mixing and dispersion result from a combination of large-scale advection and small-scale turbulence, which are complex to estimate. Predictions of scalar transport and mixing are often inferred and rarely accurate, owing to inadequate understanding of the contributions of these different scales to estuarine recirculation. A multi-device field study was conducted in a small sub-tropical estuary under neap tide conditions with near-zero freshwater discharge for about 48 hours. During the study, acoustic Doppler velocimeters (ADVs) were sampled at high frequency (50 Hz), while an acoustic Doppler current profiler (ADCP) and GPS-tracked drifters were used to obtain lower-frequency spatial distributions of the flow parameters within the estuary. The velocity measurements were complemented by continuous measurements of water depth, conductivity, temperature and other physicochemical parameters. Thorough quality control was carried out by applying relevant error-removal filters to each data set to intercept spurious data.

A triple decomposition (TD) technique was introduced to separate the contributions of tides, resonance and 'true' turbulence in the flow field. The time series of mean flow measurements for both the ADCP and the drifters were consistent with the mean ADV data when sampled within a similar spatial domain. The tidal-scale fluctuations of velocity and water level were used to examine the response of the estuary to the tidal inertial current; the channel exhibited a mixed-type wave with a typical phase lag between 0.035π and 0.116π. A striking feature of the ADV velocity data was the slow fluctuations, which exhibited large amplitudes of up to 50% of the tidal amplitude, particularly in slack waters. Such slow fluctuations were simultaneously observed in a number of physicochemical properties of the channel. The ensuing turbulence field showed some degree of anisotropy: for all ADV units, the horizontal turbulence ratio ranged between 0.4 and 0.9 and decreased towards the bed, while the vertical turbulence ratio was on average unity at z = 0.32 m and approximately 0.5 for the upper ADV (z = 0.55 m). The statistical analysis suggested that the ebb-phase turbulence field was dominated by eddies evolving from ejection-type processes, while the flood phase contained mixed eddies with a significant proportion related to sweep-type processes. Over 65% of the skewness values fell within the range expected of a finite Gaussian distribution, and the bulk of the excess kurtosis values (over 70%) fell between -0.5 and +2.

The TD technique described herein allowed the characterisation of a broader temporal range of fluctuations in high-frequency data sampled over a few tidal cycles. The study characterises the ranges of fluctuation required for accurate modelling of shallow-water dispersion and mixing in a sub-tropical estuary.
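
One way to make the triple decomposition concrete is to split a velocity record into a tidal-scale component, a slow-fluctuation component, and a residual turbulent component with successive low-pass (running-mean) filters. The sketch below does this on a synthetic signal; the averaging windows and the signal itself are assumptions for illustration, not the values or method details used in the thesis:

```python
# Sketch: triple decomposition u = tidal + slow fluctuation + "true" turbulence
# via successive running-mean low-pass filters (window lengths are assumptions).
import numpy as np
from scipy.ndimage import uniform_filter1d

fs = 50.0                                   # Hz, ADV sampling rate (as in the study)
t = np.arange(0, 3 * 3600, 1 / fs)          # a 3-hour synthetic record

rng = np.random.default_rng(0)
u = (0.30 * np.sin(2 * np.pi * t / (12.42 * 3600))   # semi-diurnal tidal signal
     + 0.05 * np.sin(2 * np.pi * t / 600)            # slow (resonance-like) oscillation
     + 0.02 * rng.standard_normal(t.size))           # random "turbulence"

def running_mean(x, window_s, fs):
    # Centred moving average acting as a crude low-pass filter.
    return uniform_filter1d(x, size=int(window_s * fs), mode="nearest")

tidal = running_mean(u, 3600.0, fs)         # variations slower than ~1 h -> tidal scale
slow  = running_mean(u - tidal, 60.0, fs)   # ~1 min to ~1 h -> slow fluctuations
turb  = u - tidal - slow                    # residual -> "true" turbulence

print(f"std dev (m/s): tidal={tidal.std():.3f}, slow={slow.std():.3f}, turb={turb.std():.3f}")
```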