174 results for Beat-mixing
Abstract:
The use of mobile devices and social media technologies is becoming all-pervasive in society: it is both transformative and constant. High levels of mobile device ownership and increased access to social media technologies enable the potential for ‘anytime, anywhere’ cooperation and collaboration in education. While recent reports into emerging technologies in higher education predict an increase in the use of mobile devices and social media technologies (Horizon Report, 2013), there is a lack of theory-based research indicating how these technologies can be most effectively harnessed to support and enhance student learning, and what their impacts are on both students and educators. In response to the need to understand how these technologies can be better embraced within higher education, this study investigated how first year education students used mobile devices and social media technologies. More specifically, the study identified how students spent most of their time when connected online with mobile devices and social media technologies, and whether this online connected time engaged them in their learning or distracted them from it.
Abstract:
We propose a framework for adaptive security from hard random lattices in the standard model. Our approach borrows from the recent Agrawal-Boneh-Boyen families of lattices, which can admit reliable and punctured trapdoors, respectively used in reality and in simulation. We extend this idea to make the simulation trapdoors cancel not for a specific forgery but on a non-negligible subset of the possible challenges. Conceptually, we build a compactly representable, large family of input-dependent “mixture” lattices, set up with trapdoors that “vanish” for a secret subset which we hope the forger will target. Technically, we tweak the lattice structure to achieve “naturally nice” distributions for arbitrary choices of subset size. The framework is very general. Here we obtain fully secure signatures, and also IBE, that are compact, simple, and elegant.
Abstract:
In an estuary, mixing and dispersion result from a combination of large-scale advection and small-scale turbulence, which are complex to estimate. Predictions of scalar transport and mixing are often inferred and rarely accurate, owing to an inadequate understanding of the contributions of these different scales to estuarine recirculation. A multi-device field study was conducted in a small sub-tropical estuary under neap tide conditions with near-zero fresh water discharge for about 48 hours. During the study, acoustic Doppler velocimeters (ADV) sampled at high frequency (50 Hz), while an acoustic Doppler current profiler (ADCP) and GPS-tracked drifters provided lower frequency spatial distributions of the flow parameters within the estuary. The velocity measurements were complemented with continuous measurements of water depth, conductivity, temperature and other physicochemical parameters. Thorough quality control was carried out by applying relevant error-removal filters to each data set to reject spurious data. A triple decomposition (TD) technique was introduced to assess the contributions of tides, resonance and ‘true’ turbulence in the flow field. The time series of mean flow measurements for both the ADCP and the drifters were consistent with the mean ADV data when sampled within a similar spatial domain. The tidal-scale fluctuations of velocity and water level were used to examine the response of the estuary to tidal inertial currents. The channel exhibited a mixed-type wave with a typical phase lag between 0.035π and 0.116π. A striking feature of the ADV velocity data was the slow fluctuations, which exhibited large amplitudes of up to 50% of the tidal amplitude, particularly in slack waters. Such slow fluctuations were simultaneously observed in a number of physicochemical properties of the channel. The ensuing turbulence field showed some degree of anisotropy. For all ADV units, the horizontal turbulence ratio ranged between 0.4 and 0.9 and decreased towards the bed, while the vertical turbulence ratio was on average unity at z = 0.32 m and approximately 0.5 for the upper ADV (z = 0.55 m). The statistical analysis suggested that the ebb-phase turbulence field was dominated by eddies that evolved from ejection-type processes, while the flood phase contained mixed eddies with a significant proportion related to sweep-type processes. Over 65% of the skewness values fell within the range expected of a finite Gaussian distribution, and the bulk of the excess kurtosis values (over 70%) fell between -0.5 and +2. The TD technique described herein allowed the characterisation of a broader temporal scale of fluctuations from high frequency data sampled over a few tidal cycles. The study characterises the ranges of fluctuation required for accurate modelling of shallow water dispersion and mixing in a sub-tropical estuary.
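A minimal sketch of one way to implement such a triple decomposition, using two moving-average filters to split a record into tidal, slow-fluctuation and turbulent parts; the function, window lengths and synthetic data below are illustrative assumptions, not the thesis's actual algorithm:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def triple_decompose(u, fs, tidal_window_s, slow_window_s):
    """Split a velocity record u (sampled at fs Hz) into tidal,
    slow-fluctuation and 'true' turbulence components."""
    # Tidal component: long moving average of the raw signal.
    tidal = uniform_filter1d(u, size=int(tidal_window_s * fs))
    # Slow fluctuations: intermediate-scale average of the remainder.
    slow = uniform_filter1d(u - tidal, size=int(slow_window_s * fs))
    # Turbulence: whatever is left.
    return tidal, slow, u - tidal - slow

# Example: one hour of synthetic 50 Hz ADV-like data.
fs = 50.0
t = np.arange(0, 3600, 1 / fs)
u = 0.3 * np.sin(2 * np.pi * t / (12.4 * 3600)) + 0.02 * np.random.randn(t.size)
tidal, slow, turb = triple_decompose(u, fs, tidal_window_s=1200, slow_window_s=60)
```

The choice of cut-off windows is the critical, site-specific step and would need to be matched to the scales identified in the measured data.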
Abstract:
Welding research is now concentrating on the development of new processes to achieve cost savings, higher productivity and better quality in the manufacturing industry. Discrete alternate supply of shielding gas is a new technique that alternately supplies different kinds of shielding gases to the weld zone. Compared with conventional welding using a premixed supply of shielding gas, the newly developed method can not only increase welding quality but also reduce energy consumption by 20% and the fume emission rate. As a result, under the same welding conditions, both the conventional argon + 67% helium mixture and the alternate supply of pure argon and pure helium showed increased welding speed compared with welding using pure argon alone. The alternate method also matched the welding speed of the argon + 67% helium mixture without greatly deteriorating weld penetration. Compared with the conventional pure argon and argon + 67% helium methods, the alternate method with argon and helium produced the lowest degree of welding distortion.
Abstract:
The validation of Computed Tomography (CT) based 3D models is an integral part of studies involving 3D models of bones. This is of particular importance when such models are used for Finite Element studies. The validation of 3D models typically involves the generation of a reference model representing the bone's outer surface. Several different devices have been utilised for digitising a bone's outer surface, such as mechanical 3D digitising arms, mechanical 3D contact scanners, electro-magnetic tracking devices and 3D laser scanners. However, none of these devices is capable of digitising a bone's internal surfaces, such as the medullary canal of a long bone. Therefore, this study investigated the use of a 3D contact scanner, in conjunction with a microCT scanner, for generating a reference standard for validating the internal and external surfaces of a CT based 3D model of an ovine femur. One fresh ovine limb was scanned using a clinical CT scanner (Philips, Brilliance 64) with a pixel size of 0.4 mm² and slice spacing of 0.5 mm. The limb was then dissected to obtain the soft tissue free bone, while care was taken to protect the bone's surface. A desktop mechanical 3D contact scanner (Roland DG Corporation, MDX 20, Japan) was used to digitise the surface of the denuded bone at a resolution of 0.3 × 0.3 × 0.025 mm. The digitised surfaces were reconstructed into a 3D model using reverse engineering techniques in Rapidform (Inus Technology, Korea). After digitisation, the distal and proximal parts of the bone were removed so that the shaft could be scanned with a microCT (µCT40, Scanco Medical, Switzerland) scanner. The shaft, with the bone marrow removed, was immersed in water and scanned with a voxel size of 0.03 mm³. The bone contours were extracted from the image data using the Canny edge filter in Matlab (The MathWorks). The extracted bone contours were reconstructed into 3D models using Amira 5.1 (Visage Imaging, Germany). The 3D models of the bone's outer surface reconstructed from CT and microCT data were compared against the 3D model generated using the contact scanner. The 3D model of the inner canal reconstructed from the microCT data was compared against the 3D models reconstructed from the clinical CT scanner data. The disparity between the surface geometries of two models was calculated in Rapidform and recorded as an average distance with standard deviation. The comparison of the 3D model of the whole bone generated from the clinical CT data with the reference model produced a mean error of 0.19±0.16 mm, with the shaft more accurate (0.08±0.06 mm) than the proximal (0.26±0.18 mm) and distal (0.22±0.16 mm) parts. The comparison between the outer 3D model generated from the microCT data and the contact scanner model produced a mean error of 0.10±0.03 mm, indicating that the microCT generated models are sufficiently accurate for validation of 3D models generated from other methods. The comparison of the inner models generated from microCT data with those from clinical CT data produced an error of 0.09±0.07 mm. Utilising a mechanical contact scanner in conjunction with a microCT scanner enabled validation of both the outer surface of a CT based 3D model of an ovine femur and the surface of the model's medullary canal.
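As a hedged illustration of how such a surface comparison can be computed (the study used Rapidform; the point-cloud nearest-neighbour approach below is a simplified stand-in, and all names and data are placeholders):

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_deviation(model_pts, reference_pts):
    """Mean and SD of nearest-neighbour distances from the vertices of a
    test model to a reference surface, both given as (N, 3) point clouds."""
    tree = cKDTree(reference_pts)    # spatial index over the reference surface
    dist, _ = tree.query(model_pts)  # closest reference point per model vertex
    return dist.mean(), dist.std()

# Placeholder point clouds standing in for exported model vertices (mm).
model_pts = np.random.rand(1000, 3) * 50
reference_pts = np.random.rand(2000, 3) * 50
mean_err, sd_err = surface_deviation(model_pts, reference_pts)
print(f"deviation: {mean_err:.2f} ± {sd_err:.2f} mm")
```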
Abstract:
Dispersion characteristics of respiratory droplets in indoor environments are of special interest for controlling the transmission of airborne diseases. This study adopts an Eulerian method to investigate the spatial concentration distribution and temporal evolution of exhaled and sneezed/coughed droplets in the range of 1.0–10.0 μm in an office room with three air distribution methods, i.e. mixing ventilation (MV), displacement ventilation (DV), and under-floor air distribution (UFAD). The diffusion, gravitational settling, and deposition mechanisms of particulate matter are well accounted for in the one-way coupled Eulerian approach. The simulation results show that exhaled droplets with diameters up to 10.0 μm from the normal respiration process are uniformly distributed in MV, while they are trapped at breathing height by thermal stratification in DV and UFAD, resulting in a high droplet concentration and a high exposure risk to other occupants. Sneezed/coughed droplets are diluted much more slowly in DV/UFAD than in MV. Low air speed in the breathing zone in DV/UFAD can lead to prolonged residence of droplets in the breathing zone.
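The abstract does not state the governing equations; a generic drift-flux form of a one-way coupled Eulerian droplet transport equation, consistent with the mechanisms listed (advection, gravitational settling, diffusion, deposition), would be:

```latex
\frac{\partial C_k}{\partial t}
  + \nabla \cdot \left[ (\mathbf{u} + \mathbf{v}_{s,k})\, C_k \right]
  = \nabla \cdot \left[ \left( D_k + \varepsilon_p \right) \nabla C_k \right]
```

where C_k is the concentration of droplet size class k, u the air velocity, v_s,k the size-dependent settling velocity, D_k the Brownian diffusivity and ε_p the turbulent particle diffusivity, with deposition typically imposed as a boundary flux.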
Abstract:
How and why visualisations support learning was the subject of this qualitative instrumental collective case study. Five computer programming languages (PHP, Visual Basic, Alice, GameMaker, and RoboLab) supporting differing degrees of visualisation were used as cases to explore the effectiveness of software visualisation in developing fundamental computer programming concepts (sequence, iteration, selection, and modularity). Cognitive theories of visual and auditory processing, cognitive load, and mental models provided a framework in which the cognitive development of thirty-one 15-17 year old students, drawn from a Queensland metropolitan private secondary girls’ school as active participants in the research, was tracked and measured. Seventeen findings in three sections increase our understanding of the effects of visualisation on the learning process. The study extended the use of mental model theory to track the learning process, and demonstrated the application of student research-based metacognitive analysis of individual and peer cognitive development, both as a means to support research and as an approach to teaching. The findings also offer an explanation for failures in previous software visualisation studies; in particular, the study demonstrated that for the cases examined, where complex concepts are being developed, the mixing of auditory (or text) and visual elements can result in excessive cognitive load and impede learning. This finding provides a framework for selecting the most appropriate instructional programming language based on the cognitive complexity of the concepts under study.
Abstract:
Here we search for evidence of the existence of a sub-chondritic 142Nd/144Nd reservoir that balances the Nd isotope chemistry of the Earth relative to chondrites. If present, it may reside in the source region of deeply sourced mantle plume material. We suggest that lavas from Hawai’i with coupled elevations in 186Os/188Os and 187Os/188Os, from Iceland that represent mixing of upper mantle and lower mantle components, and from Gough with sub-chondritic 143Nd/144Nd and high 207Pb/206Pb are favorable samples that could reflect mantle sources that have interacted with an Early-Enriched Reservoir (EER) with sub-chondritic 142Nd/144Nd. High-precision Nd isotope analyses of basalts from Hawai’i, Iceland and Gough demonstrate no discernible 142Nd/144Nd deviation from terrestrial standards. These data are consistent with previous high-precision Nd isotope analyses of recent mantle-derived samples and demonstrate that no mantle-derived material to date provides evidence for the existence of an EER in the mantle. We then evaluate mass balance in the Earth with respect to both 142Nd/144Nd and 143Nd/144Nd. The Nd isotope systematics of EERs are modeled for different sizes and timings of formation relative to ε143Nd estimates of the reservoirs in the μ142Nd = 0 Earth, where μ142Nd = ((142Nd/144Nd)measured/(142Nd/144Nd)standard − 1) × 10^6 and the μ142Nd = 0 Earth is the proportion of the silicate Earth with 142Nd/144Nd indistinguishable from the terrestrial standard. The models indicate that it is not possible to balance the Earth with respect to both 142Nd/144Nd and 143Nd/144Nd unless the μ142Nd = 0 Earth has an ε143Nd within error of the present-day Depleted Mid-ocean ridge basalt Mantle source (DMM). The 4567 Myr age 142Nd–143Nd isochron for the Earth intersects μ142Nd = 0 at ε143Nd of +8 ± 2, providing a minimum ε143Nd for the μ142Nd = 0 Earth. The high ε143Nd of the μ142Nd = 0 Earth is confirmed by the Nd isotope systematics of Archean mantle-derived rocks, which consistently have positive ε143Nd. If the EER formed early after solar system formation (0–70 Ma), continental crust and DMM can be complementary reservoirs with respect to Nd isotopes, with no requirement for significant additional reservoirs. If the EER formed after 70 Ma, then the μ142Nd = 0 Earth must have a bulk ε143Nd more radiogenic than DMM, and additional high ε143Nd material is required to balance the Nd isotope systematics of the Earth.
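In the notation above, and as a schematic of the mass balance being tested (the fraction x and the coupled 143Nd systematics are modeled in full in the paper; this compact form is an illustration only):

```latex
\mu^{142}\mathrm{Nd} =
\left[\frac{\left(^{142}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{measured}}}
           {\left(^{142}\mathrm{Nd}/^{144}\mathrm{Nd}\right)_{\mathrm{standard}}} - 1\right]
\times 10^{6},
\qquad
x\,\mu_{\mathrm{EER}} + (1 - x)\,\mu_{\mu = 0\ \mathrm{Earth}}
  = \mu_{\mathrm{bulk\ Earth}}
```

where x is the mass fraction of Nd sequestered in the hidden reservoir.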
Abstract:
The effects of particulate matter on the environment and public health have been widely studied in recent years. A number of studies in the medical field have tried to identify the specific effects of particulate exposure on human health, but agreement amongst these studies on the relative importance of particle size and origin with respect to health effects is still lacking. Nevertheless, air quality standards, like epidemiological attention, are moving towards a greater focus on smaller particles. Current air quality standards only regulate the mass of particulate matter less than 10 μm in aerodynamic diameter (PM10) and less than 2.5 μm (PM2.5). The most reliable method for measuring Total Suspended Particles (TSP), PM10, PM2.5 and PM1 is the gravimetric method, since it directly measures PM concentration, guaranteeing effective traceability to international standards. This technique, however, cannot capture short-term intra-day variations of the atmospheric parameters that influence ambient particle concentration and size distribution (emission strengths of particle sources, temperature, relative humidity, wind direction and speed, and mixing height), or human activity patterns that may also vary over time periods considerably shorter than 24 hours. A continuous method for measuring the number size distribution and total number concentration in the range 0.014–20 μm is the tandem system constituted by a Scanning Mobility Particle Sizer (SMPS) and an Aerodynamic Particle Sizer (APS). In this paper, an uncertainty budget model for the measurement of airborne particle number, surface area and mass size distributions is proposed and applied to several typical aerosol size distributions. The estimation of such an uncertainty budget presents several difficulties due to i) the complexity of the measurement chain, and ii) the fact that the SMPS and APS can properly guarantee traceability to the International System of Measurements only in terms of number concentration. In fact, the surface area and mass concentration must be estimated on the basis of separately determined average density and particle morphology. Keywords: SMPS-APS tandem system, gravimetric reference method, uncertainty budget, ultrafine particles.
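As the last sentence notes, surface area and mass follow from the measured number distribution only under assumptions about density and morphology. A minimal sketch of that conversion for spherical particles with an assumed effective density (the function and the example numbers are illustrative):

```python
import numpy as np

def surface_and_mass(d_um, number_conc, rho_g_cm3=1.5):
    """Surface-area and mass concentrations from a number size distribution,
    assuming spheres of uniform density rho_g_cm3 (an assumption, as the
    abstract stresses).

    d_um        : bin midpoint diameters in micrometres
    number_conc : particles per cm^3 in each bin
    """
    d_cm = d_um * 1e-4
    surface = np.pi * d_cm**2 * number_conc                   # cm^2/cm^3 per bin
    mass = (np.pi / 6.0) * rho_g_cm3 * d_cm**3 * number_conc  # g/cm^3 per bin
    return surface.sum(), mass.sum()

d = np.array([0.05, 0.1, 0.5, 1.0, 2.5])   # um, spanning SMPS and APS ranges
n = np.array([5e4, 2e4, 1e3, 2e2, 1e1])    # particles / cm^3
s_tot, m_tot = surface_and_mass(d, n)
print(f"surface: {s_tot:.3e} cm2/cm3, mass: {m_tot * 1e12:.1f} ug/m3")
```

Because mass scales with the cube of diameter, the assumed density and the largest-diameter bins tend to dominate the mass uncertainty budget.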
Abstract:
An experimental investigation has been made of a round, non-buoyant plume of nitric oxide, NO, in a turbulent grid flow of ozone, O3, using the Turbulent Smog Chamber at the University of Sydney. The measurements have been made at a resolution not previously reported in the literature. The reaction is conducted at non-equilibrium, so there is significant interaction between turbulent mixing and chemical reaction. The plume has been characterized by a set of constant initial reactant concentration measurements consisting of radial profiles at various axial locations. Whole plume behaviour can thus be characterized, and parameters are selected for a second set of fixed physical location measurements in which the effects of varying the initial reactant concentrations are investigated. Careful experimental design and specially developed chemiluminescent analysers, which measure fluctuating concentrations of reactive scalars, ensure that spatial and temporal resolutions are adequate to measure the quantities of interest. Conserved scalar theory is used to define a conserved scalar from the measured reactive scalars and to define frozen, equilibrium and reaction-dominated cases for the reactive scalars. Reactive scalar means and the mean reaction rate are bounded by the frozen and equilibrium limits, but this is not always the case for the reactant variances and covariances. The plume reactant statistics are closer to the equilibrium limit than those for the ambient reactant. The covariance term in the mean reaction rate is found to be negative and significant for all measurements made. The Toor closure was found to overestimate the mean reaction rate by 15 to 65%. Gradient model turbulent diffusivities had significant scatter and were not observed to be affected by reaction. The ratio of the turbulent diffusivity for the conserved scalar mean to that for the r.m.s. was found to be approximately 1. Estimates of the ratio of the dissipation timescales of around 2 were found downstream. Estimates of the correlation coefficient between the conserved scalar and its dissipation (parallel to the mean flow) were found to be between 0.25 and the significant value of 0.5. Scalar dissipations for non-reactive and reactive scalars were found to be significantly different. Conditional statistics are found to be a useful way of investigating the reactive behaviour of the plume, effectively decoupling the interaction of chemical reaction and turbulent mixing. It is found that conditional reactive scalar means lack significant transverse dependence, as has previously been found theoretically by Klimenko (1995). It is also found that the conditional variance around the conditional reactive scalar means is relatively small, simplifying the closure for the conditional reaction rate. These properties are important for the Conditional Moment Closure (CMC) model for turbulent reacting flows recently proposed by Klimenko (1990) and Bilger (1993). Preliminary CMC model calculations are carried out for this flow using a simple model for the conditional scalar dissipation. Model predictions and measured conditional reactive scalar means compare favorably. The reaction-dominated limit is found to indicate the maximum reactedness of a reactive scalar and is a limiting case of the CMC model. Conventional (unconditional) reactive scalar means obtained from the preliminary CMC predictions using the conserved scalar p.d.f. compare favorably with those found from experiment, except where the measuring position is relatively far upstream of the stoichiometric distance. Recommendations include applying a full CMC model to the flow and investigating both the less significant terms in the conditional mean species equation and the small variation of the conditional mean with radius. Forms for the p.d.f.s, in addition to those found from experiments, could be useful for extending the CMC model to reactive flows in the atmosphere.
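For a second-order reaction such as NO + O3, the covariance term referred to above arises from Reynolds decomposition of the instantaneous rate w = k c_NO c_O3 and averaging:

```latex
\overline{w} = k\,\overline{c_{\mathrm{NO}}\,c_{\mathrm{O_3}}}
             = k\left(\overline{c}_{\mathrm{NO}}\,\overline{c}_{\mathrm{O_3}}
             + \overline{c'_{\mathrm{NO}}\,c'_{\mathrm{O_3}}}\right)
```

A negative covariance, as measured here, means the true mean rate is below k times the product of the mean concentrations, which is why a closure that misestimates this term can overestimate the mean reaction rate.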
Abstract:
An iterative method for the fit optimisation of a pre-contoured fracture fixation plate for a given bone data set is presented. Both plate shape optimisation and plate fit quantification are conducted in a virtual environment utilising computer graphical methods and 3D bone and plate models. Two optimised shapes of the undersurface of an existing distal medial tibia plate were generated based on a dataset of 45 3D bone models reconstructed from computed tomography image data of Japanese tibiae. The existing plate shape achieved an anatomical fit on 13% of tibiae from the dataset. Modified plate 1 achieved an anatomical fit for 42% and modified plate 2 a fit for 67% of the bones. If either modified plate 1 or plate 2 is used, then the anatomical fit can be increased to 82% for the same dataset. Issues pertaining to any further improvement in plate fit/shape are discussed.
Abstract:
Virtual 3D models of long bones are increasingly being used for implant design and research applications. The current gold standard for the acquisition of such data is Computed Tomography (CT) scanning. Due to radiation exposure, CT is generally limited to the imaging of clinical cases and cadaver specimens. Magnetic Resonance Imaging (MRI) does not involve ionising radiation and therefore can be used to image selected healthy human volunteers for research purposes. The feasibility of MRI as an alternative to CT for the acquisition of morphological bone data of the lower extremity has been demonstrated in recent studies [1, 2]. Current limitations of MRI include long scanning times and difficulties with image segmentation in certain anatomical regions due to poor contrast between bone and surrounding muscle tissues. Higher field strength scanners promise faster imaging times or better image quality. In this study, image quality at 1.5T is quantitatively compared to images acquired at 3T. --------- The femora of five human volunteers were scanned using 1.5T and 3T MRI scanners from the same manufacturer (Siemens) with similar imaging protocols. A 3D FLASH sequence was used with TE = 4.66 ms, flip angle = 15° and voxel size = 0.5 × 0.5 × 1 mm. PA matrix and body matrix coils were used to cover the lower limb and pelvis, respectively. The signal to noise ratio (SNR) [3] and contrast to noise ratio (CNR) [3] of axial images from the proximal, shaft and distal regions were used to assess the quality of images from the 1.5T and 3T scanners. The SNR was calculated for the muscle and bone marrow in the axial images. The CNR was calculated for the muscle to cortex and cortex to bone marrow interfaces, respectively. --------- Preliminary results (one volunteer) show that the SNR of muscle for the shaft and distal regions was higher in 3T images (11.65 and 17.60) than in 1.5T images (8.12 and 8.11). For the proximal region, the SNR of muscle was higher in 1.5T images (7.52) than in 3T images (6.78). The SNR of bone marrow was slightly higher in 1.5T images for both the proximal and shaft regions, while it was lower in the distal region compared to 3T images. The CNR between muscle and bone for all three regions was higher in 3T images (4.14, 6.55 and 12.99) than in 1.5T images (2.49, 3.25 and 9.89). The CNR between bone marrow and bone was slightly higher in 1.5T images (4.87, 12.89 and 10.07) compared to 3T images (3.74, 10.83 and 10.15). These results show that the 3T images generated higher contrast between bone and muscle tissue than the 1.5T images. It is expected that this improvement in image contrast will significantly reduce the time required for the mainly manual segmentation of the MR images. Future work will focus on optimising the 3T imaging protocol to reduce chemical shift and susceptibility artifacts.
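The SNR and CNR definitions are taken from [3]; one common ROI-based form, assumed here for illustration (it may differ in detail from the cited definition), divides the mean tissue signal, or the absolute difference of two tissue means, by the standard deviation of a background (air) region:

```python
import numpy as np

def snr(tissue_roi, noise_roi):
    """Signal-to-noise ratio: mean tissue intensity over background noise SD."""
    return tissue_roi.mean() / noise_roi.std()

def cnr(tissue_a, tissue_b, noise_roi):
    """Contrast-to-noise ratio between two tissues (e.g. muscle vs cortex)."""
    return abs(tissue_a.mean() - tissue_b.mean()) / noise_roi.std()

# Placeholder ROIs cropped from an axial slice (2D intensity arrays).
muscle = np.random.normal(120, 10, (20, 20))
cortex = np.random.normal(30, 8, (20, 20))
air = np.random.normal(0, 5, (20, 20))
print(f"SNR(muscle) = {snr(muscle, air):.1f}, "
      f"CNR(muscle/cortex) = {cnr(muscle, cortex, air):.1f}")
```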
Abstract:
A pragmatic method for assessing the accuracy and precision of a given processing pipeline for converting computed tomography (CT) image data of bones into representative three dimensional (3D) models of bone shapes is proposed. The method is based on coprocessing a control object of known geometry, which enables the assessment of the quality of the resulting 3D models. At three stages of the conversion process, distance measurements were obtained and statistically evaluated. For this study, 31 CT datasets were processed. The final 3D model of the control object showed an average deviation from reference values of −1.07±0.52 mm standard deviation (SD) for edge distances and −0.647±0.43 mm SD for parallel side distances of the control object. Coprocessing a reference object enables the assessment of the accuracy and precision of a given processing pipeline for creating CT-based 3D bone models and is suitable for detecting most systematic or human errors when processing a CT scan. Typical errors are about the same size as the scan resolution.
Abstract:
In this paper we present a model for defining and enforcing a fine-grained information flow policy. We describe how the policy can be enforced on a typical computer and present experiments using the proposed model. A key feature of the model is that it allows the expression of rules which detail precisely which information elements are allowed to mix together. For example, the model allows the expression of a policy which forbids a doctor from mixing the personal medical details of different patients. The enforcement mechanism tracks and records information flows within the system, so that dynamic changes to the policy can be made with respect to information elements that may have propagated to different locations in the system.
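A hedged sketch of the mixing-rule idea (class and function names are invented for illustration and are not the paper's model): each information element carries a set of labels that propagates when values are combined, and a policy table lists label pairs that must never meet.

```python
class FlowPolicyError(Exception):
    """Raised when a combination would violate the information flow policy."""

class TaggedValue:
    """A value together with the labels of everything it has mixed with."""
    def __init__(self, value, labels):
        self.value = value
        self.labels = frozenset(labels)

def combine(a, b, forbidden_pairs):
    """Mix two tagged values, refusing combinations the policy forbids."""
    for x in a.labels:
        for y in b.labels:
            if {x, y} in forbidden_pairs:
                raise FlowPolicyError(f"policy forbids mixing {x} with {y}")
    # Labels propagate so that later, indirect flows are still checked.
    return TaggedValue((a.value, b.value), a.labels | b.labels)

# Policy: records of different patients must never be combined.
forbidden = [{"patient:alice", "patient:bob"}]
alice = TaggedValue("alice-history", {"patient:alice"})
bob = TaggedValue("bob-history", {"patient:bob"})

combine(alice, TaggedValue("lab-result", {"patient:alice"}), forbidden)  # allowed
try:
    combine(alice, bob, forbidden)
except FlowPolicyError as err:
    print(err)  # e.g. "policy forbids mixing patient:alice with patient:bob"
```

Because labels accumulate, a value that has ever touched one patient's record is barred from later mixing with another patient's, mirroring the dynamic tracking described above.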