Abstract:
Music and dance are art forms that involve a full mind-body experience, integrating the cognitive, affective and kinesthetic domains. To engage in creating music and dance is to use information to express oneself and communicate. In this chapter I explore the information experience of two distinct groups: those who compose music for an audience, and those who dance socially with a partner. For the composer, information sources can be a stimulus for creation. Sounds, feelings, moods, images, ideas and life experiences can trigger a creative idea. These ideas are shaped by existing musical styles and structures, and by the composer’s personal aesthetic. The intention of the composer is to communicate their expressive ideas to an audience. For the social dancer, information sources are those used to communicate with a partner. There is no intention to perform for an audience. A social dancer aims to express the music and style of the dance while creating a strong connection with their partner. Information sources include the music, the partner’s body, the emotions generated by the dance, the position of other couples on the floor and the feeling of the floor. Use of information in the arts is an under-researched experience. Most information studies are based on the assumption that information is documentary and codified. Subjective and affective information is rarely recognised and legitimised. Information-as-it-is-experienced through creative practice such as music and dance is holistic in acknowledging mind, body and spirit as well as traditional documentary forms of information. This chapter draws on empirical research to illustrate experiencing information as creating and expressing.
Abstract:
This paper presents the direct strength method (DSM) equations for cold-formed steel beams subject to shear. Light-gauge cold-formed steel sections have been developed as more economical alternatives to heavier hot-rolled sections in the commercial and residential markets. Cold-formed lipped channel beams (LCB), LiteSteel beams (LSB) and hollow flange beams (HFB) are commonly used as flexural members such as floor joists and bearers. However, their shear capacities are determined based on conservative design rules. For the shear design of cold-formed web panels, their elastic shear buckling strength must be determined accurately, including the potential post-buckling strength. Currently, the elastic shear buckling coefficients of web panels are determined by conservatively assuming that the web panels are simply supported at the junction between the flange and web elements, and the post-buckling strength is ignored. Hence experimental and numerical studies were conducted to investigate the shear behaviour and strength of LSBs, LCBs and HFBs. New DSM-based design equations were proposed to determine the ultimate shear capacities of cold-formed steel beams. An improved equation for the higher elastic shear buckling coefficient of cold-formed steel beams was proposed based on finite element analysis results and included in the DSM design equations. A new post-buckling coefficient was also introduced in the DSM equation to include the available post-buckling strength of cold-formed steel beams.
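The improved equations proposed in the paper are not reproduced in the abstract; as context, the generic DSM shear design curve without tension field action (in the form used in AISI S100) can be sketched as follows. The function name, default kv = 5.34 (simply supported edges, long panel) and material constants are illustrative assumptions, not values from the study:

```python
import math

def dsm_shear_capacity(d, t, fy, kv=5.34, E=200e3, nu=0.3):
    """Generic DSM shear curve (AISI S100 form, no tension field action).

    d  : depth of the flat web panel (mm)
    t  : web thickness (mm)
    fy : yield stress (MPa)
    kv : elastic shear buckling coefficient; 5.34 assumes simply
         supported web edges (the paper proposes higher values that
         account for flange restraint)
    Returns the nominal shear capacity Vn in N.
    """
    # Elastic shear buckling stress of the web panel
    tau_cr = kv * math.pi**2 * E / (12 * (1 - nu**2) * (d / t)**2)
    Vy = 0.6 * fy * d * t        # shear yield capacity
    Vcr = tau_cr * d * t         # elastic shear buckling capacity
    lam = math.sqrt(Vy / Vcr)    # shear slenderness
    if lam <= 0.815:             # stocky web: yields in shear
        return Vy
    elif lam <= 1.227:           # intermediate: inelastic buckling
        return 0.815 * math.sqrt(Vcr * Vy)
    else:                        # slender web: elastic buckling
        return Vcr
```

A stocky web returns the shear yield capacity, while a slender web is governed by elastic buckling; the paper's proposed post-buckling coefficient would lift the slender branch above Vcr.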
Abstract:
We consider the problem of maximizing secure connectivity in wireless ad hoc networks, and analyze the complexity of the post-deployment key establishment process constrained by physical layer properties such as connectivity, energy consumption and interference. Two approaches, based on graph augmentation problems with nonlinear edge costs, are formulated. The first is based on establishing a secret key using only the links that are already secured by shared keys. This problem is NP-hard and does not admit a polynomial-time approximation scheme (PTAS), since the minimum cutsets to be augmented do not admit constant costs. The second extends the first problem by increasing the power level between a pair of nodes that share a secret key, to enable them to connect physically. This problem can be formulated as the optimal key establishment problem with interference constraints and two objectives: (i) maximizing the concurrent key establishment flow, and (ii) minimizing the cost. We prove that both problems are NP-hard and MAX-SNP-hard via a reduction from the MAX3SAT problem.
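As a minimal illustration of the first formulation's premise (two nodes already joined by a path of key-secured links can establish a new pairwise key hop by hop, so augmentation is only needed between components), the secured-link connectivity structure can be sketched with union-find; all names here are hypothetical, not from the paper:

```python
def secure_components(n, secured_links):
    """Count connected components of the graph of key-secured links.

    n             : number of nodes, labelled 0..n-1
    secured_links : iterable of (u, v) pairs that already share keys
    Nodes in the same component can establish new keys over secured
    links; key augmentation is only required between components.
    """
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in secured_links:
        parent[find(u)] = find(v)          # union the two components

    return len({find(x) for x in range(n)})
```

With links (0,1), (1,2) and (3,4) among five nodes, two components remain, so at least one augmenting edge is needed for full secure connectivity.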
Abstract:
Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results than the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results, however, often require orders of magnitude more calculation time to attain high precision, reducing their utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques which enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high-performance computing environments and simpler alternative, yet equivalent, representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n with n cloud-based computers performing the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalents in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry.
Further, a new technique for navigating tessellated or meshed geometries is described, allowing for up to 3 orders of magnitude performance improvement with the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition of both mechanical parts in a geometry and patient geometry. Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and impose a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan, and representing them in a mesh-based form similar to those used in computer aided design, the above-mentioned optimisation techniques can be used to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows for direct manipulation of the geometry, enabling, for example, motion augmentation for time-dependent dose calculation. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like those made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement. This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purposes of Monte Carlo simulation performance improvement. Additionally, these alternative geometry definitions allow manipulations to be performed on otherwise static and rigid geometry.
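Navigating a tetrahedral mesh ultimately rests on deciding whether a particle's position lies inside a given tetrahedron. As a hedged sketch of that core test (the standard signed-volume method, not GEANT4's actual navigator code):

```python
def signed_volume(a, b, c, d):
    """Signed volume (x6) of tetrahedron (a, b, c, d) via the
    scalar triple product (b-a) . ((c-a) x (d-a))."""
    ab = [b[i] - a[i] for i in range(3)]
    ac = [c[i] - a[i] for i in range(3)]
    ad = [d[i] - a[i] for i in range(3)]
    return (ab[0] * (ac[1] * ad[2] - ac[2] * ad[1])
            - ab[1] * (ac[0] * ad[2] - ac[2] * ad[0])
            + ab[2] * (ac[0] * ad[1] - ac[1] * ad[0]))

def point_in_tetrahedron(p, a, b, c, d):
    """p lies inside (or on) the tetrahedron iff substituting p for
    each vertex in turn never flips the orientation, i.e. all four
    sub-volumes share a sign."""
    vols = [signed_volume(p, b, c, d),
            signed_volume(a, p, c, d),
            signed_volume(a, b, p, d),
            signed_volume(a, b, c, p)]
    return all(v >= 0 for v in vols) or all(v <= 0 for v in vols)
```

Because each sub-volume is proportional to one barycentric coordinate of p, a mesh navigator can also use the sign pattern to decide which face the track exits through, which is what makes tetrahedral navigation cheap compared with intersecting a large triangular surface mesh.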
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often empirically determined. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Missed detections of incorrect integers lead to hazardous results and should be strictly controlled; in ambiguity resolution the missed-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. A criteria table for the ratio test is computed based on extensive data simulations, and real-time users can determine the ratio test criterion by looking it up in this table. This method has been applied in medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis test theory, the fixed failure rate approach is introduced, and the relationship between the ratio test threshold and the failure rate is examined.
Finally, the factors that influence the fixed failure rate ratio test threshold are discussed based on extensive data simulations. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method, provided a proper stochastic model is used.
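The ratio test described above can be sketched in a few lines. This is a minimal illustration of the standard test, with a hypothetical function name; under the fixed failure rate approach the threshold would be looked up from the precomputed criteria table for the given model strength and tolerated failure rate, rather than set to a fixed empirical value:

```python
def ratio_test(q_best, q_second, threshold):
    """Accept the fixed integer ambiguity solution only if the
    second-best integer candidate fits noticeably worse than the best.

    q_best, q_second : squared-norm residuals (quadratic forms) of the
                       best and second-best integer candidates
    threshold        : critical value c; traditionally an empirical
                       constant (e.g. 2 or 3), but determined from a
                       criteria table under the fixed failure rate
                       approach
    Returns True if the fixed solution is accepted, False if the
    float solution should be retained.
    """
    return q_second / q_best >= threshold
```

A larger ratio means the best candidate is more clearly separated from its nearest competitor, so acceptance carries a lower risk of an incorrect fix.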
Abstract:
Cold-formed steel members are increasingly used as primary structural elements in the building industries around the world due to the availability of thin, high strength steels and advanced cold-forming technologies. Cold-formed lipped channel beams (LCB) are commonly used as flexural members such as floor joists and bearers. However, their shear capacities are determined based on conservative design rules. Current practice in flooring systems is to include openings in the web element of floor joists or bearers so that building services can be located within them. The shear behaviour of LCBs with web openings is more complicated, and their shear strengths are considerably reduced by the presence of web openings. However, limited research has been undertaken on the shear behaviour and strength of LCBs with web openings. Hence a detailed experimental study involving 40 shear tests was undertaken to investigate the shear behaviour and strength of LCBs with web openings. Simply supported test specimens of LCBs with aspect ratios of 1.0 and 1.5 were loaded at midspan until failure. This paper presents the details of this experimental study and the results of their shear capacities and behavioural characteristics. Experimental results showed that the current design rules in cold-formed steel structures design codes are very conservative for the shear design of LCBs with web openings. Improved design equations have been proposed for the shear strength of LCBs with web openings based on the experimental results from this study.
Abstract:
Cold-formed steel Lipped Channel Beams (LCB) with web openings are commonly used as floor joists and bearers in building structures. The shear behaviour of these beams is more complicated and their shear capacities are considerably reduced by the presence of web openings. However, limited research has been undertaken on the shear behaviour and strength of LCBs with web openings. Hence a detailed numerical study was undertaken to investigate the shear behaviour and strength of LCBs with web openings. Finite element models of simply supported LCBs under a mid-span load with aspect ratios of 1.0 and 1.5 were developed and validated by comparing their results with test results. They were then used in a detailed parametric study to investigate the effects of various influential parameters. Experimental and numerical results showed that the current design rules in cold-formed steel structures design codes are very conservative. Improved design equations were therefore proposed for the shear strength of LCBs with web openings based on both experimental and numerical results. This paper presents the details of finite element modelling of LCBs with web openings, validation of finite element models, and the development of improved shear design rules. The proposed shear design rules in this paper can be considered for inclusion in the future versions of cold-formed steel design codes.
Abstract:
Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, the primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc.) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions when applied to phantoms of various thicknesses, in order to derive scatter-to-primary ratios (SPRs) directly from simulation results. This allows us to make a more accurate calibration of the EPID, and to assess the appropriateness of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system, simulated using the EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer, and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size dependence of SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1].
These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternative evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6 MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. In this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position. Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy-sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities.
However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to the simulated phantom density make large differences to the resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above provides a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. The simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
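The field-size extrapolation idea attributed above to Swindell and Evans can be sketched as follows: fit transmission against field area, extrapolate to zero area (where no phantom scatter reaches the detector) to estimate the primary-only transmission, then form SPR = T/T0 - 1. This is an illustration of the idea under a linear-fit assumption, with synthetic numbers, not the authors' code or data:

```python
import numpy as np

def spr_from_transmissions(field_areas, transmissions):
    """Estimate scatter-to-primary ratio (SPR) per field size.

    Fit transmission vs. field area with a straight line and take the
    zero-area intercept as the primary-only transmission T0, since a
    vanishing field contributes no phantom scatter at the detector.
    Then SPR(A) = T(A)/T0 - 1.
    """
    slope, t0 = np.polyfit(field_areas, transmissions, 1)
    return np.asarray(transmissions) / t0 - 1.0

# Hypothetical data: transmission rises with field area as more
# phantom scatter reaches the detector (exactly linear here).
areas = np.array([25.0, 100.0, 225.0, 400.0])   # field areas, cm^2
trans = 0.50 + 2.0e-4 * areas                   # synthetic transmissions
spr = spr_from_transmissions(areas, trans)      # grows linearly with area
```

With exactly linear synthetic data the recovered SPR is proportional to field area, which is the theoretical relationship the paper tests against simulation.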
Abstract:
Public libraries and coworking spaces seek means to facilitate peer collaboration, peer inspiration and cross-pollination of skills and creativity. However, social learning, inspiration and collaboration between coworkers do not come naturally. In (semi-)public spaces in particular, the behavioural norm among unacquainted coworkers is to work in individual silos without taking advantage of social learning or collaboration opportunities. This paper presents results from a pilot study of ‘Gelatine’ – a system that facilitates shared encounters between coworkers by allowing them to digitally ‘check in’ at a work space. Gelatine displays the skills, areas of interest, and needs of currently present coworkers on a public screen. The results indicate that the system amplifies users’ sense of place and awareness of other coworkers, and serves as an interface for social learning through exploratory, opportunistic and serendipitous inspiration, as well as by helping users identify like-minded peers for follow-up face-to-face encounters. We discuss how Gelatine is perceived by users with different pre-entry motivations, and discuss users’ challenges as well as non-use of the system.
Abstract:
Endotoxins can significantly affect the air quality in school environments. However, there is currently no reliable method for the measurement of endotoxins and there is a lack of reference values for endotoxin concentrations to aid in the interpretation of measurement results in school settings. We benchmarked the “baseline” range of endotoxin concentration in indoor air, together with endotoxin load in floor dust, and evaluated the correlation between endotoxin levels in indoor air and settled dust, as well as the effects of temperature and humidity on these levels in subtropical school settings. Bayesian hierarchical modeling indicated that the concentration in indoor air and the load in floor dust were generally (<95th percentile) < 13 EU/m3 and < 24,570 EU/m2, respectively. Exceeding these levels would indicate abnormal sources of endotoxins in the school environment, and the need for further investigation. Metaregression indicated no relationship between endotoxin concentration and load, which points to the necessity for measuring endotoxin levels in both the air and settled dust. Temperature increases were associated with lower concentrations in indoor air and higher loads in floor dust. Higher levels of humidity may be associated with lower airborne endotoxin concentrations.
Abstract:
A Neutral Cluster and Air Ion Spectrometer (NAIS) was used to monitor the concentration of airborne ions on 258 full days between Nov 2011 and Dec 2012 in Brisbane, Australia. The air was sampled from outside a window on the sixth floor of a building close to the city centre, approximately 100 m away from a busy freeway. The NAIS detects all ions and charged particles smaller than 42 nm. It was operated in a 4 min measurement cycle, with ion data recorded at 10 s intervals over 2 min during each cycle. The data were analysed to derive the diurnal variation of small, large and total ion concentrations in the environment. We adapt the definition of Horrak et al. (2000) and classify small ions as molecular clusters smaller than 1.6 nm and large ions as charged particles larger than this size...
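Deriving the diurnal variation mentioned above amounts to binning the timestamped 10 s concentration records by hour of day and averaging each bin. A minimal sketch with a hypothetical record structure (the actual NAIS data format is not described in the abstract):

```python
from collections import defaultdict

def diurnal_profile(records):
    """Average ion concentrations by hour of day.

    records : iterable of (hour_of_day, concentration) pairs, e.g.
              10 s NAIS samples tagged with their local hour (0-23)
    Returns {hour: mean concentration} for the hours present.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for hour, conc in records:
        sums[hour] += conc
        counts[hour] += 1
    return {h: sums[h] / counts[h] for h in sums}
```

Running the same binning separately on samples below and above the 1.6 nm threshold would give the small-ion and large-ion diurnal profiles.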
Abstract:
Policy makers increasingly recognise that an educated workforce with a high proportion of Science, Technology, Engineering and Mathematics (STEM) graduates is a prerequisite for a knowledge-based, innovative economy. Over the past ten years, the proportion of first university degrees awarded in Australia in STEM fields has been below the global average and has continued to decrease, from 22.2% in 2002 to 18.8% in 2010 [1]. These trends are mirrored by declines of between 20% and 30% in the proportions of high school students enrolled in science or maths. These trends are not unique to Australia, but their impact is of concern throughout the policy-making community. To redress these demographic trends, QUT embarked upon a long-term investment strategy to integrate education and research into the physical and virtual infrastructure of the campus, recognising that the expectations of students change as rapidly as technology and learning practices. To implement this strategy, physical infrastructure refurbishment/re-building is accompanied by upgraded technologies, not only for learning but also for research. QUT’s vision for its city-based campuses is to create vibrant and attractive places to learn and research, strongly linked to the wider surrounding community. Over a five-year period, physical infrastructure at the Gardens Point campus was substantially reconfigured in two key stages: (a) a >$50m refurbishment of heritage-listed buildings to encompass public, retail and social spaces, learning and teaching “test beds” and research laboratories; and (b) demolition of five buildings to be replaced by a $230m, >40,000m2 Science and Engineering Centre designed to accommodate retail, recreation, services, education and research in an integrated, coordinated precinct.
This landmark project is characterised by (i) self-evident, collaborative spaces for learning, research and social engagement; (ii) sustainable building practices and sustainable ongoing operation; and (iii) dynamic and mobile re-configuration of spaces or staffing to meet demand. Innovative spaces allow for transformative, cohort-driven learning and the collaborative use of space for joint class projects. Research laboratories are aggregated, centralised and “on display” to the public, students and staff. A major visualisation space – the largest multi-touch, multi-user facility constructed to date – is a centrepiece feature that focuses on demonstrating scientific and engineering principles or science-oriented scenes at large scale (e.g. the Great Barrier Reef). Content on this visualisation facility is integrated with the regional school curricula and supports an in-house schools program for student and teacher engagement. Researchers are accommodated in combined open-plan and office floor-space (80% open plan) to encourage interdisciplinary engagement and cross-fertilisation of skills, ideas and projects. This combination of spaces re-invigorates the on-campus experience, extends educational engagement across all ages and rapidly enhances research collaboration.
Abstract:
This paper presents an account of an autonomous mobile robot deployment in a densely crowded public event with thousands of people from different age groups attending. The robot operated for eight hours on an open floor surrounded by tables, chairs and massive touchscreen displays. Due to the large number of people who were in close vicinity of the robot, different safety measures were implemented including the use of no-go zones which prevent the robot from blocking emergency exits or moving too close to the display screens. The paper presents the lessons learnt and experiences obtained from this experiment, and provides a discussion about the state of mobile service robots in such crowded environments.
Abstract:
The appropriateness of applying drink driving legislation to motorcycle riding has been questioned, as there may be fundamental differences in the effects of alcohol on these two activities. For example, while the distributions of blood alcohol content (BAC) levels among fatally injured male drivers and riders are similar, a greater proportion of motorcycle fatalities involve levels in the lower (0 to .10% BAC) range. Several psychomotor and higher-order cognitive skills underpinning riding performance appear to be significantly influenced by low levels of alcohol. For example, at low levels (.02 to .046% BAC), riders show significant increases in reaction time to hazardous stimuli, inattention to the riding task, performance errors such as leaving the roadway, and a reduced ability to complete a timed course. It has been suggested that alcohol may redirect riders’ focus from higher-order cognitive skills to more physical skills such as maintaining balance. As part of a research program to investigate the potential benefits of introducing a zero, or reduced, BAC for all riders in Queensland regardless of their licence status, the effects of low doses of alcohol on balance ability were investigated in a laboratory setting. The static balance of ten experienced riders was measured while they performed either no secondary task, a visual search task, or a cognitive (arithmetic) task following the administration of alcohol (0, 0.02 and 0.05% BAC). Subjective ratings of intoxication and balance impairment increased in a dose-dependent manner; however, objective measures of static balance were negatively affected only at the .05% BAC dose. Performance on a concurrent secondary visual search task, but not a purely cognitive (arithmetic) task, improved postural stability across all BAC levels.
Finally, the .05% BAC dose was associated with impaired performance on the cognitive (arithmetic) task, but not the visual search task, when participants were balancing, but neither task was impaired by alcohol when participants were standing on the floor. Implications for road safety and future ‘drink riding’ policy considerations are discussed.
Abstract:
Cold-formed steel Lipped Channel Beams (LCB) with web openings are commonly used as floor joists and bearers in building structures. The shear behaviour of these beams is more complicated and their shear capacities are considerably reduced by the presence of web openings. Hence detailed numerical and experimental studies of simply supported LCBs under a mid-span load with aspect ratios of 1.0 and 1.5 were undertaken to investigate the shear behaviour and strength of LCBs with web openings. Experimental and numerical results showed that the current design rules in cold-formed steel structures design codes are very conservative. Improved design equations were therefore proposed for the shear strength of LCBs with web openings based on both experimental and numerical results. This research showed a significant reduction in the shear capacities of LCBs when large web openings are included for the purpose of locating building services. A cost-effective method of eliminating such detrimental effects of large circular web openings was therefore also investigated using experimental and numerical studies. For this purpose, LCBs were reinforced using plate, stud, transverse and sleeve stiffeners with varying sizes and thicknesses that were welded and screw-fastened to the web of LCBs. These studies showed that plate stiffeners were the most suitable. Suitable screw-fastened plate stiffener arrangements with optimum thicknesses were then proposed for LCBs with web openings to restore their original shear capacities. This paper presents the details of finite element analyses and experiments of LCBs with web openings in shear, and the development of improved shear design rules. It then describes the experimental and numerical studies to determine the optimum plate stiffener arrangements and the results. The proposed shear design rules in this paper can be considered for inclusion in the future versions of cold-formed steel design codes.