235 results for analogy calculation


Relevance: 10.00%

Abstract:

The physical and chemical properties of a biofuel are influenced by structural features of its fatty acids, such as chain length, degree of unsaturation and branching of the chain. A simple and reliable calculation method for estimating fuel properties is therefore needed to avoid experimental testing, which is difficult, costly and time consuming; in commercial biodiesel production such testing is typically done for every batch of fuel produced. In this study, nine different algae species were selected that were likely to be suitable for subtropical climates. The fatty acid methyl esters (FAMEs) of all algae species were analysed and fuel properties such as cetane number (CN), cold filter plugging point (CFPP), kinematic viscosity (KV), density and higher heating value (HHV) were determined. The relation of each fatty acid to each fuel property was analysed using multivariate and multi-criteria decision method (MCDM) software. These analyses showed that some fatty acids have a major influence on the fuel properties whereas others have minimal influence. Based on the fuel properties and lipid contents, a rank order was produced with PROMETHEE-GAIA, which helped to select the best algae species for biodiesel production in subtropical climates. Three species had fatty acid profiles that gave the best fuel properties, although only one of these (Nannochloropsis oculata) is considered the best choice because of its higher lipid content.
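To illustrate the kind of composition-based estimate such a calculation method involves, the sketch below evaluates a fuel property as a weighted sum over a FAME profile. The coefficients and the example profile are hypothetical placeholders, not values fitted or reported in this study.

```python
# Hypothetical sketch: estimate a fuel property (e.g. cetane number) as a
# weighted sum over the FAME composition of an algal oil. The coefficients
# below are illustrative placeholders only, not fitted values from the study.

COEFFS = {
    "C16:0": 74.0,   # methyl palmitate (hypothetical coefficient)
    "C18:0": 87.0,   # methyl stearate
    "C18:1": 56.0,   # methyl oleate
    "C18:3": 23.0,   # methyl linolenate
}

def estimate_property(profile: dict[str, float]) -> float:
    """profile maps FAME name -> mass fraction; fractions are renormalised."""
    total = sum(profile.values())
    return sum(COEFFS[fame] * frac / total for fame, frac in profile.items())

if __name__ == "__main__":
    example_profile = {"C16:0": 0.30, "C18:1": 0.45, "C18:3": 0.25}  # hypothetical
    print(f"Estimated cetane number: {estimate_property(example_profile):.1f}")
```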

Relevance: 10.00%

Abstract:

Using Monte Carlo simulation for radiotherapy dose calculation can provide more accurate results when compared to the analytical methods usually found in modern treatment planning systems, especially in regions with a high degree of inhomogeneity. These more accurate results acquired using Monte Carlo simulation, however, often require orders of magnitude more calculation time to attain high precision, thereby reducing its utility within the clinical environment. This work aims to improve the utility of Monte Carlo simulation within the clinical environment by developing techniques which enable faster Monte Carlo simulation of radiotherapy geometries. This is achieved principally through the use of new high-performance computing environments and simpler, yet equivalent, representations of complex geometries. Firstly, the use of cloud computing technology and its application to radiotherapy dose calculation is demonstrated. As with other supercomputer-like environments, the time to complete a simulation decreases as 1/n when n cloud-based computers perform the calculation in parallel. Unlike traditional supercomputer infrastructure, however, there is no initial outlay of cost, only modest ongoing usage fees; the simulations described in the following are performed using this cloud computing technology. The definition of geometry within the chosen Monte Carlo simulation environment - Geometry & Tracking 4 (GEANT4) in this case - is also addressed in this work. At the simulation implementation level, a new computer-aided design interface is presented for use with GEANT4, enabling direct coupling between manufactured parts and their equivalents in the simulation environment, which is of particular importance when defining linear accelerator treatment head geometry. Further, a new technique for navigating tessellated or meshed geometries is described, allowing for up to three orders of magnitude performance improvement with the use of tetrahedral meshes in place of complex triangular surface meshes. The technique has application in the definition both of mechanical parts in a geometry and of patient geometry. Static patient CT datasets like those found in typical radiotherapy treatment plans are often very large and present a significant performance penalty on a Monte Carlo simulation. By extracting the regions of interest in a radiotherapy treatment plan, and representing them in a mesh-based form similar to those used in computer-aided design, the above-mentioned optimisation techniques can be used to reduce the time required to navigate the patient geometry in the simulation environment. Results presented in this work show that these equivalent yet much simplified patient geometry representations enable significant performance improvements over simulations that consider raw CT datasets alone. Furthermore, this mesh-based representation allows for direct manipulation of the geometry, enabling motion augmentation for time-dependent dose calculation, for example. Finally, an experimental dosimetry technique is described which allows the validation of time-dependent Monte Carlo simulations, like the ones made possible by the aforementioned patient geometry definition. A bespoke organic plastic scintillator dose rate meter is embedded in a gel dosimeter, thereby enabling simultaneous 3D dose distribution and dose rate measurement.
This work demonstrates the effectiveness of applying alternative and equivalent geometry definitions to complex geometries for the purposes of Monte Carlo simulation performance improvement. Additionally, these alternative geometry definitions allow for manipulations to be performed on otherwise static and rigid geometry.
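As a minimal sketch of the 1/n scaling noted above, the following splits a fixed number of independent particle histories across n workers, with local processes standing in for cloud nodes; it is not the GEANT4/cloud tooling used in the work.

```python
# Sketch of the 1/n parallel-scaling argument: a Monte Carlo run of N
# independent histories is split evenly across n workers, so wall-clock time
# is roughly (N / n) * t_history plus a small coordination overhead.
# Local processes stand in for the cloud nodes used in the actual work.
import random
from multiprocessing import Pool

def run_histories(n_histories: int) -> float:
    """Toy 'dose' tally: sum of random energy deposits for n_histories."""
    rng = random.Random()
    return sum(rng.random() for _ in range(n_histories))

def parallel_run(total_histories: int, n_workers: int) -> float:
    per_worker = total_histories // n_workers
    with Pool(n_workers) as pool:
        partial_tallies = pool.map(run_histories, [per_worker] * n_workers)
    return sum(partial_tallies)  # tallies from independent histories simply add

if __name__ == "__main__":
    print(parallel_run(total_histories=1_000_000, n_workers=4))
```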

Relevance: 10.00%

Abstract:

The byssus threads of the common mussel, Mytilus edulis L., have been tested mechanically and the results from the tests related to the ecology of the animal. The threads are mechanically similar to other crystalline polymers such as polyethylene, having a modulus of about 10⁸ N m⁻² and a long relaxation time. The resilience of 60% is similar to that of tendon; the ultimate strain, at 0.44, is about five times that of tendon. The thread is laid down with a prestrain of 10% and so guys the mussel in position. Calculation shows that a mussel with 50 byssus threads would be able to resist all but severe winter storms.
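A back-of-the-envelope version of the closing calculation can be set up from the quoted modulus and ultimate strain, together with assumed values for the thread cross-section and for the drag on a mussel in a storm. All assumed quantities below are illustrative, not data from the paper, and the ultimate stress estimate crudely treats the thread as linear up to failure.

```latex
% Illustrative order-of-magnitude check only: thread cross-section, drag
% coefficient, frontal area and flow speeds are assumed values, and the
% ultimate stress crudely assumes near-linear behaviour up to failure.
\[
  F_{\text{thread}} \approx \sigma_{\text{ult}} A
  \approx (E\,\varepsilon_{\text{ult}})\,A
  \approx (10^{8} \times 0.44)\,(1\times10^{-8}\,\mathrm{m^{2}})
  \approx 0.4\ \mathrm{N},
  \qquad
  F_{50\ \mathrm{threads}} \approx 50 \times 0.4 \approx 20\ \mathrm{N}.
\]
\[
  F_{\text{drag}} = \tfrac{1}{2}\rho C_{d} A_{m} u^{2}
  \approx \tfrac{1}{2}(1025\,\mathrm{kg\,m^{-3}})(1)(10^{-3}\,\mathrm{m^{2}})\,u^{2}
  \;\Rightarrow\;
  F_{\text{drag}}(2\ \mathrm{m\,s^{-1}}) \approx 2\ \mathrm{N},\quad
  F_{\text{drag}}(6\ \mathrm{m\,s^{-1}}) \approx 18\ \mathrm{N}.
\]
```

Under these assumptions only storm-level flow speeds approach the combined byssal strength, which is consistent with the paper's conclusion.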

Relevance: 10.00%

Abstract:

Recent experiments [F. E. Pinkerton, M. S. Meyer, G. P. Meisner, M. P. Balogh, and J. J. Vajo, J. Phys. Chem. C 111, 12881 (2007) and J. J. Vajo and G. L. Olson, Scripta Mater. 56, 829 (2007)] demonstrated that the recycling of hydrogen in the coupled LiBH4/MgH2 system is fully reversible. The rehydrogenation of MgB2 is an important step toward this reversibility. Using ab initio density functional theory calculations, we found that the activation barriers for the dissociation of H2 are 0.49 and 0.58 eV for the B- and Mg-terminated MgB2(0001) surfaces, respectively. This implies that the dissociation kinetics of H2 on a MgB2(0001) surface should be greatly improved compared to that in pure Mg materials. Additionally, the diffusion of a dissociated H atom on the Mg-terminated MgB2(0001) surface is almost barrierless. Our results shed light on the experimentally observed reversibility and improved kinetics of the coupled LiBH4/MgH2 system.
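One way to read the two barriers is through their Boltzmann factors: assuming equal attempt frequencies, the ratio of dissociation rates on the two terminations is exp(ΔE/k_BT). The short sketch below evaluates that ratio at a few temperatures; it is an illustration, not part of the reported calculations.

```python
# Relative Arrhenius rate for the two computed H2 dissociation barriers,
# assuming equal attempt frequencies (an illustrative assumption).
import math

K_B_EV = 8.617333e-5  # Boltzmann constant, eV/K

def rate_ratio(barrier_low_ev: float, barrier_high_ev: float, temperature_k: float) -> float:
    """k(low barrier) / k(high barrier) for equal prefactors."""
    return math.exp((barrier_high_ev - barrier_low_ev) / (K_B_EV * temperature_k))

for temp in (300.0, 500.0, 700.0):
    print(f"T = {temp:.0f} K: B-terminated faster by x{rate_ratio(0.49, 0.58, temp):.1f}")
```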

Relevance: 10.00%

Abstract:

In this work, ab initio spin-polarised Density Functional Theory (DFT) calculations are performed to study the interaction of a Ti atom with a NaAlH4(001) surface. We confirm that an interstitially located Ti atom in the NaAlH4 subsurface is the most energetically favoured configuration as recently reported (Chem. Comm. (17) 2006, 1822). On the NaAlH4(001) surface, the Ti atom is most stable when adsorbed between two sodium atoms with an AlH4 unit beneath. A Ti atom on top of an Al atom is also found to be an important structure at low temperatures. The diffusion of Ti from the Al-top site to the Na-bridging site has a low activation barrier of 0.20 eV and may be activated at the experimental temperatures (∼323 K). The diffusion of a Ti atom into the energetically favoured subsurface interstitial site occurs via the Na-bridging surface site and is essentially barrierless.
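To relate the 0.20 eV diffusion barrier to the quoted experimental temperature, a harmonic transition-state-theory estimate k = ν₀ exp(−E_a/k_BT) can be used. The attempt frequency below is a typical assumed value (~10¹³ s⁻¹), not a number reported in the abstract.

```python
# Harmonic transition-state-theory hop rate for the Ti surface-diffusion step,
# k = nu0 * exp(-Ea / (kB * T)). The attempt frequency nu0 is an assumed
# typical value (~1e13 s^-1), not a quantity taken from the study.
import math

K_B_EV = 8.617333e-5  # eV/K
NU0 = 1.0e13          # s^-1, assumed attempt frequency

def hop_rate(barrier_ev: float, temperature_k: float) -> float:
    return NU0 * math.exp(-barrier_ev / (K_B_EV * temperature_k))

# ~1e10 hops per second at 323 K for a 0.20 eV barrier: effectively unhindered.
print(f"k(0.20 eV, 323 K) ~ {hop_rate(0.20, 323.0):.2e} s^-1")
```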

Relevance: 10.00%

Abstract:

Density functional theory (DFT) is a powerful approach to electronic structure calculations in extended systems, but suffers currently from inadequate incorporation of long-range dispersion, or Van der Waals (VdW) interactions. VdW-corrected DFT is tested for interactions involving molecular hydrogen, graphite, single-walled carbon nanotubes (SWCNTs), and SWCNT bundles. The energy correction, based on an empirical London dispersion term with a damping function at short range, allows a reasonable physisorption energy and equilibrium distance to be obtained for H2 on a model graphite surface. The VdW-corrected DFT calculation for an (8, 8) nanotube bundle reproduces accurately the experimental lattice constant. For H2 inside or outside an (8, 8) SWCNT, we find the binding energies are respectively higher and lower than that on a graphite surface, correctly predicting the well known curvature effect. We conclude that the VdW correction is a very effective method for implementing DFT calculations, allowing a reliable description of both short-range chemical bonding and long-range dispersive interactions. The method will find powerful applications in areas of SWCNT research where empirical potential functions either have not been developed, or do not capture the necessary range of both dispersion and bonding interactions.
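The correction described here has the generic damped-London form used in empirical VdW-corrected DFT schemes, E_disp = −Σ C₆/r⁶ · f_damp(r). A minimal sketch follows; the C₆ coefficient, van der Waals radius and damping steepness are placeholder values, not the parameter set used in this work.

```python
# Generic damped London dispersion correction of the kind used in empirical
# VdW-corrected DFT: E_disp = -sum_{pairs} C6/r^6 * f_damp(r), with a
# Fermi-type damping function switching the correction off at short range.
# All parameters below are placeholders, not the study's parameter set.
import math

def f_damp(r: float, r_vdw: float, d: float = 20.0) -> float:
    """Fermi-type damping: ~0 well inside r_vdw, ~1 at long range."""
    return 1.0 / (1.0 + math.exp(-d * (r / r_vdw - 1.0)))

def e_dispersion(pairs: list[tuple[float, float, float]]) -> float:
    """pairs: (r_ij in Angstrom, C6_ij in eV*Angstrom^6, r_vdw_ij in Angstrom)."""
    return -sum(c6 / r**6 * f_damp(r, r_vdw) for r, c6, r_vdw in pairs)

# Example: a single atom pair at 3.0 Angstrom separation (placeholder values).
print(f"E_disp ~ {e_dispersion([(3.0, 10.0, 3.1)]):.4f} eV")
```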

Relevance: 10.00%

Abstract:

Introduction: The accurate identification of tissue electron densities is of great importance for Monte Carlo (MC) dose calculations. When converting patient CT data into a voxelised format suitable for MC simulations, however, it is common to simplify the assignment of electron densities so that the complex tissues existing in the human body are categorized into a few basic types. This study examines the effects that the assignment of tissue types and the calculation of densities can have on the results of MC simulations, for the particular case of a Siemens Sensation 4 CT scanner located in a radiotherapy centre where QA measurements are routinely made using 11 tissue types (plus air). Methods: DOSXYZnrc phantoms are generated from CT data, using the CTCREATE user code, with the relationship between Hounsfield units (HU) and density determined via linear interpolation between a series of specified points on the ‘CT-density ramp’ (see Figure 1(a)). Tissue types are assigned according to HU ranges. Each voxel in the DOSXYZnrc phantom therefore has an electron density (electrons/cm³) defined by the product of the mass density (from the HU conversion) and the intrinsic electron density (electrons/gram) (from the material assignment), in that voxel. In this study, we consider the problems of density conversion and material identification separately: the CT-density ramp is simplified by decreasing the number of points which define it from 12 down to 8, 3 and 2; and the material-type assignment is varied by defining the materials which comprise our test phantom (a Supertech head) as two tissues and bone, two plastics and bone, water only and (as an extreme case) lead only. The effect of these parameters on radiological thickness maps derived from simulated portal images is investigated. Results & Discussion: Increasing the degree of simplification of the CT-density ramp results in an increasing effect on the resulting radiological thickness calculated for the Supertech head phantom. For instance, defining the CT-density ramp using 8 points, instead of 12, results in a maximum radiological thickness change of 0.2 cm, whereas defining the CT-density ramp using only 2 points results in a maximum radiological thickness change of 11.2 cm. Changing the definition of the materials comprising the phantom between water and plastic and tissue results in millimetre-scale changes to the resulting radiological thickness. When the entire phantom is defined as lead, this alteration changes the calculated radiological thickness by a maximum of 9.7 cm. Evidently, the simplification of the CT-density ramp has a greater effect on the resulting radiological thickness map than does the alteration of the assignment of tissue types. Conclusions: It is possible to alter the definitions of the tissue types comprising the phantom (or patient) without substantially altering the results of simulated portal images. However, these images are very sensitive to the accurate identification of the HU-density relationship. When converting data from a patient’s CT into an MC simulation phantom, therefore, all possible care should be taken to accurately reproduce the conversion between HU and mass density, for the specific CT scanner used. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital (RBWH), Brisbane, Australia.
The authors are grateful to the staff of the RBWH, especially Darren Cassidy, for assistance in obtaining the phantom CT data used in this study. The authors also wish to thank Cathy Hargrave, of QUT, for assistance in formatting the CT data, using the Pinnacle TPS. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
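The two voxel-conversion steps described in the Methods (mass density from HU by linear interpolation along the CT-density ramp, and material type from HU ranges) can be sketched as follows; the ramp points and HU thresholds are illustrative placeholders, not the scanner-specific calibration used in the study.

```python
# Sketch of the two voxel-conversion steps described above: (1) mass density
# from HU via linear interpolation along a 'CT-density ramp', and (2) material
# type from HU ranges. Ramp points and thresholds are illustrative only.
import bisect
import numpy as np

# (HU, mass density g/cm^3) ramp points -- placeholder values
RAMP_HU      = np.array([-1000.0, -100.0,  0.0,  100.0, 1000.0, 3000.0])
RAMP_DENSITY = np.array([  0.001,   0.93,  1.0,   1.07,   1.6,    2.8])

# upper HU bound of each material bin -- placeholder values
MATERIAL_BOUNDS = [-950.0, -100.0, 100.0, 3000.0]
MATERIAL_NAMES  = ["AIR", "LUNG", "TISSUE", "BONE"]

def voxel_properties(hu: float) -> tuple[str, float]:
    density = float(np.interp(hu, RAMP_HU, RAMP_DENSITY))
    material = MATERIAL_NAMES[min(bisect.bisect_left(MATERIAL_BOUNDS, hu),
                                  len(MATERIAL_NAMES) - 1)]
    return material, density

print(voxel_properties(-800.0))  # e.g. ('LUNG', ~0.21)
```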

Relevance: 10.00%

Abstract:

Introduction: The use of amorphous-silicon electronic portal imaging devices (a-Si EPIDs) for dosimetry is complicated by the effects of scattered radiation. In photon radiotherapy, primary signal at the detector can be accompanied by photons scattered from linear accelerator components, detector materials, intervening air, treatment room surfaces (floor, walls, etc) and from the patient/phantom being irradiated. Consequently, EPID measurements which presume to take scatter into account are highly sensitive to the identification of these contributions. One example of this susceptibility is the process of calibrating an EPID for use as a gauge of (radiological) thickness, where specific allowance must be made for the effect of phantom-scatter on the intensity of radiation measured through different thicknesses of phantom. This is usually done via a theoretical calculation which assumes that phantom scatter is linearly related to thickness and field-size. We have, however, undertaken a more detailed study of the scattering effects of fields of different dimensions when applied to phantoms of various thicknesses in order to derive scattered-primary ratios (SPRs) directly from simulation results. This allows us to make a more-accurate calibration of the EPID, and to qualify the appositeness of the theoretical SPR calculations. Methods: This study uses a full MC model of the entire linac-phantom-detector system simulated using EGSnrc/BEAMnrc codes. The Elekta linac and EPID are modelled according to specifications from the manufacturer and the intervening phantoms are modelled as rectilinear blocks of water or plastic, with their densities set to a range of physically realistic and unrealistic values. Transmissions through these various phantoms are calculated using the dose detected in the model EPID and used in an evaluation of the field-size-dependence of SPR, in different media, applying a method suggested for experimental systems by Swindell and Evans [1]. These results are compared firstly with SPRs calculated using the theoretical, linear relationship between SPR and irradiated volume, and secondly with SPRs evaluated from our own experimental data. An alternate evaluation of the SPR in each simulated system is also made by modifying the BEAMnrc user code READPHSP, to identify and count those particles in a given plane of the system that have undergone a scattering event. In addition to these simulations, which are designed to closely replicate the experimental setup, we also used MC models to examine the effects of varying the setup in experimentally challenging ways (changing the size of the air gap between the phantom and the EPID, changing the longitudinal position of the EPID itself). Experimental measurements used in this study were made using an Elekta Precise linear accelerator, operating at 6MV, with an Elekta iView GT a-Si EPID. Results and Discussion: 1. Comparison with theory: With the Elekta iView EPID fixed at 160 cm from the photon source, the phantoms, when positioned isocentrically, are located 41 to 55 cm from the surface of the panel. At this geometry, a close but imperfect agreement (differing by up to 5%) can be identified between the results of the simulations and the theoretical calculations. However, this agreement can be totally disrupted by shifting the phantom out of the isocentric position. 
Evidently, the allowance made for source-phantom-detector geometry by the theoretical expression for SPR is inadequate to describe the effect that phantom proximity can have on measurements made using an (infamously low-energy sensitive) a-Si EPID. 2. Comparison with experiment: For various square field sizes and across the range of phantom thicknesses, there is good agreement between simulation data and experimental measurements of the transmissions and the derived values of the primary intensities. However, the values of SPR obtained through these simulations and measurements seem to be much more sensitive to slight differences between the simulated and real systems, leading to difficulties in producing a simulated system which adequately replicates the experimental data. (For instance, small changes to simulated phantom density make large differences to resulting SPR.) 3. Comparison with direct calculation: By developing a method for directly counting the number of scattered particles reaching the detector after passing through the various isocentric phantom thicknesses, we show that the experimental method discussed above is providing a good measure of the actual degree of scattering produced by the phantom. This calculation also permits the analysis of the scattering sources/sinks within the linac and EPID, as well as the phantom and intervening air. Conclusions: This work challenges the assumption that scatter to and within an EPID can be accounted for using a simple, linear model. Simulations discussed here are intended to contribute to a fuller understanding of the contribution of scattered radiation to the EPID images that are used in dosimetry calculations. Acknowledgements: This work is funded by the NHMRC, through a project grant, and supported by the Queensland University of Technology (QUT) and the Royal Brisbane and Women's Hospital, Brisbane, Australia. The authors are also grateful to Elekta for the provision of manufacturing specifications which permitted the detailed simulation of their linear accelerators and amorphous-silicon electronic portal imaging devices. Computational resources and services used in this work were provided by the HPC and Research Support Group, QUT, Brisbane, Australia.
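A compact way to see how an SPR can be derived from transmission data is the zero-field-size extrapolation idea referenced above: the primary-only transmission is estimated by extrapolating transmission versus field area to zero area, and SPR(A) = T(A)/T_primary − 1. The sketch below uses made-up transmission values and is not the BEAMnrc/READPHSP analysis described in the text.

```python
# Sketch of deriving a scatter-to-primary ratio (SPR) from transmissions
# measured (or simulated) at several field sizes: fit transmission vs field
# area, extrapolate to zero area to estimate the primary-only transmission,
# then SPR(A) = T(A) / T_primary - 1. Transmission values below are made up.
import numpy as np

field_side_cm = np.array([5.0, 10.0, 15.0, 20.0])       # square field sides
transmission  = np.array([0.405, 0.420, 0.445, 0.480])  # illustrative values
field_area    = field_side_cm ** 2

# Linear fit of transmission against field area; intercept ~ primary-only T.
slope, t_primary = np.polyfit(field_area, transmission, 1)

spr = transmission / t_primary - 1.0
for side, value in zip(field_side_cm, spr):
    print(f"{side:4.0f} cm field: SPR ~ {value:.3f}")
```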

Relevance: 10.00%

Abstract:

Introduction: Recent advances in the planning and delivery of radiotherapy treatments have resulted in improvements in the accuracy and precision with which therapeutic radiation can be administered. As the complexity of the treatments increases, it becomes more difficult to predict the dose distribution in the patient accurately. Monte Carlo (MC) methods have the potential to improve the accuracy of the dose calculations and are increasingly being recognised as the ‘gold standard’ for predicting dose deposition in the patient [1]. This project has three main aims: 1. To develop tools that enable the transfer of treatment plan information from the treatment planning system (TPS) to an MC dose calculation engine. 2. To develop tools for comparing the 3D dose distributions calculated by the TPS and the MC dose engine. 3. To investigate the radiobiological significance of any errors between the TPS patient dose distribution and the MC dose distribution in terms of Tumour Control Probability (TCP) and Normal Tissue Complication Probabilities (NTCP). The work presented here addresses the first two aims. Methods: (1a) Plan Importing: A database of commissioned accelerator models (Elekta Precise and Varian 2100CD) has been developed for treatment simulations in the MC system (EGSnrc/BEAMnrc). Beam descriptions can be exported from the TPS using the widespread DICOM framework, and the resultant files are parsed with the assistance of a software library (PixelMed Java DICOM Toolkit). The information in these files (such as the monitor units, the jaw positions and gantry orientation) is used to construct a plan-specific accelerator model which allows an accurate simulation of the patient treatment field. (1b) Dose Simulation: The calculation of a dose distribution requires patient CT images which are prepared for the MC simulation using a tool (CTCREATE) packaged with the system. Beam simulation results are converted to absolute dose per MU using calibration factors recorded during the commissioning process and treatment simulation. These distributions are combined according to the MU meter settings stored in the exported plan to produce an accurate description of the prescribed dose to the patient. (2) Dose Comparison: TPS dose calculations can be obtained using either a DICOM export or by direct retrieval of binary dose files from the file system. Dose difference, gamma evaluation and normalised dose difference algorithms [2] were employed for the comparison of the TPS dose distribution and the MC dose distribution. These implementations are spatial resolution independent and able to interpolate for comparisons. Results and Discussion: The tools successfully produced Monte Carlo input files for a variety of plans exported from the Eclipse (Varian Medical Systems) and Pinnacle (Philips Medical Systems) planning systems: ranging in complexity from a single uniform square field to a five-field step-and-shoot IMRT treatment. The simulation of collimated beams has been verified geometrically, and validation of dose distributions in a simple body phantom (QUASAR) will follow. The developed dose comparison algorithms have also been tested with controlled dose distribution changes. Conclusion: The capability of the developed code to independently process treatment plans has been demonstrated.
A number of limitations exist: only static fields are currently supported (dynamic wedges and dynamic IMRT will require further development), and the process has not been tested for planning systems other than Eclipse and Pinnacle. The tools will be used to independently assess the accuracy of the current treatment planning system dose calculation algorithms for complex treatment deliveries such as IMRT in treatment sites where patient inhomogeneities are expected to be significant. Acknowledgements: Computational resources and services used in this work were provided by the HPC and Research Support Group, Queensland University of Technology, Brisbane, Australia. Pinnacle dose parsing made possible with the help of Paul Reich, North Coast Cancer Institute, North Coast, New South Wales.
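Of the comparison metrics listed under (2), the gamma evaluation is the least obvious to implement. The sketch below is a brute-force one-dimensional version of the standard gamma-index formulation (dose-difference and distance-to-agreement criteria combined); it only illustrates the idea and is not the resolution-independent implementation described above.

```python
# Brute-force 1D gamma evaluation: for each evaluation point, search reference
# points for the minimum of sqrt((dose diff / dose tol)^2 + (distance / DTA)^2).
# A point "passes" when gamma <= 1. Simplified illustration only; the tools
# described above are 3D and interpolation/resolution independent.
import numpy as np

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dose_tol, dta_mm):
    gammas = np.empty(len(eval_pos))
    for i, (x, d) in enumerate(zip(eval_pos, eval_dose)):
        dose_term = (ref_dose - d) / dose_tol
        dist_term = (ref_pos - x) / dta_mm
        gammas[i] = np.sqrt(dose_term**2 + dist_term**2).min()
    return gammas

x = np.linspace(0.0, 50.0, 101)                       # positions, mm
reference = 2.0 * np.exp(-((x - 25.0) / 12.0) ** 2)   # toy TPS profile (Gy)
evaluated = 2.0 * np.exp(-((x - 25.5) / 12.0) ** 2)   # toy MC profile, 0.5 mm shift

g = gamma_1d(x, reference, x, evaluated, dose_tol=0.06, dta_mm=3.0)  # 3%/3 mm of 2 Gy
print(f"gamma pass rate: {100.0 * np.mean(g <= 1.0):.1f}%")
```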

Relevance: 10.00%

Abstract:

This paper takes its root in a trivial observation: management approaches are unable to provide relevant guidelines to cope with the uncertainty and trust issues of our modern world. Thus, managers look to reduce uncertainty through information-supported decision-making, sustained by ex-ante rationalization. They strive to achieve the best possible solution, stability, predictability, and control of the “future”. Hence, they turn to a plethora of “prescriptive panaceas” and “management fads” that promise simple solutions through best practices. However, these solutions are ineffective. They address only one part of a system (e.g. an organization) instead of the whole. They miss the interactions and interdependencies with other parts, leading to “suboptimization”. Further, classical cause-and-effect investigations and research are not very helpful in this regard. Where do we go from here? In this conversation, we want to challenge the assumptions supporting the traditional management approaches and shed some light on the problem of management discourse fads, using the concept of maturity and maturity models in the context of temporary organizations as a support for reflection. The global economy is characterized by the use and development of standards, and compliance with standards as a practice is said to enable better decision-making by managers under uncertainty, control of complexity, and higher performance. Amongst the plethora of standards, organizational maturity and maturity models hold a specific place, due to the general belief that organizational performance depends on the continuous improvement of (business) processes, grounded in a kind of evolutionary metaphor. Our intention is neither to offer a new “evidence-based management fad” for practitioners, nor to point scholars to a research gap. Rather, we want to open an assumption-challenging conversation with regard to mainstream approaches (neo-classical economics and organization theory), turning “our eyes away from the blinding light of eternal certitude towards the refracted world of turbid finitude” (Long, 2002, p. 44), which generates what Bernstein has named “Cartesian Anxiety” (Bernstein, 1983, p. 18), and to revisit the conceptualization of maturity and maturity models. We rely on conventions theory and a systemic-discursive perspective. These two lenses have both information & communication and self-producing systems as common threads. Furthermore, the narrative approach is well suited to exploring complex ways of thinking about organizational phenomena as complex systems. This approach is relevant to our object of curiosity, i.e. the concept of maturity and maturity models, as maturity models (as standards) are discourses and systems of regulations. The main contribution of this conversation is that we suggest moving from a neo-classical “theory of the game”, aiming at making the complex world simpler in playing the game, to a “theory of the rules of the game”, which aims at influencing and challenging the rules of the game constitutive of maturity models – conventions, governing systems – making individual calculation compatible with social context, and making possible the coordination of relationships and cooperation between agents with divergent or potentially divergent interests and values. A second contribution is the reconceptualization of maturity as structural coupling between conventions, rather than as an independent variable leading to organizational performance.

Relevance: 10.00%

Abstract:

OBJECTIVES: To describe the recruitment strategy and association between facility and staff characteristics and success of resident recruitment for the Promoting Independence in Residential Care (PIRC) trial. DESIGN: Cross-sectional study of staff and facility characteristics and recruitment rates within facilities with calculation of cluster effects of multiple measures. SETTING AND PARTICIPANTS: Staff of low-level dependency residential care facilities and residents able to engage in a physical activity program in 2 cities in New Zealand. MEASURES: A global impression of staff willingness to facilitate research was gauged by research nurses; facility characteristics were measured by staff interview. Relevant outcomes were measured by resident interview and included the following: (1) Function: Late Life FDI scale, timed-up-and-go, FICSIT balance scale and the Elderly Mobility Scale; (2) Quality of Life: EuroQol quality of life scale, Life Satisfaction Index; and (3) falls were assessed by audit of the medical record. Correlations between recruitment rates, facility characteristics and the global impression of staff willingness to participate were investigated. Design effects were calculated on outcomes. RESULTS: Forty-one (85%) facilities and 682 (83%) residents participated; the median age was 85 years (range 65-101) and 74% were women. Participants had complex health problems. Recruitment rates were associated (but did not increase linearly) with the perceived willingness of staff, and were not associated with facility size. Design effects from the cluster recruitment differed according to outcome. CONCLUSIONS: The recruitment strategy was successful in recruiting a large sample of people with complex comorbidities and high levels of functional disability despite perceptions of staff reluctance. Staff willingness was related to recruitment success.
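For context on the design effects mentioned above: in a cluster-recruited sample, the usual approximation is the standard formula below (not a result quoted from the paper), where m is the average number of residents recruited per facility and ρ is the intracluster correlation of the outcome.

```latex
% Standard cluster-sampling design effect (illustrative; not a result from the
% paper): m = average cluster (facility) size, rho = intracluster correlation
% coefficient of the outcome, n = total sample size.
\[
  \mathrm{DEFF} = 1 + (m - 1)\,\rho ,
  \qquad
  n_{\mathrm{effective}} = \frac{n}{\mathrm{DEFF}} .
\]
```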

Relevance: 10.00%

Abstract:

Research background For almost 80 years the Chuck Taylor (or Chuck T's) All Star basketball shoe has been an iconic item of fashion apparel. The Chuck T's were first designed in 1921 by Converse, an American shoe company, and over the decades they became a popular item not purely for sports and athletic purposes but rather evolved into the shoe of choice for many subcultural groups as a fashion item. In some circles the Chuck Taylor is still seen as the "coolest" sneaker of all time - one which will never go out of fashion regardless of changing trends. With over 600 million pairs sold all over the world since its release, the Converse shoe is representative not only of a fashion culture, but also of a consumption culture, that evolved as the driving force behind the massive growth of the Western economic system during the 20th Century. Artisan Gallery (Brisbane), in conjunction with the exhibition Reboot: Function, Fashion and the Sneaker, a history of the sneaker, selected 20 designers to customise and re-design the classic Converse Chuck Taylor All Stars shoe and in doing so highlighted the diversity of forms possible for creative outcomes. As Artisan Gallery Curator Kirsten Fitzpatrick states, “We were expecting people to draw and paint on them. Instead, we had shoes... mounted as trophies”, referring to the presentation of "Converse Consumption". The exhibition ran from 21 June – 16 August 2012. Research question The Chuck T’s is one of many overwhelmingly commercially successful designs of the last century. Nowadays we are faced with the significant problems of overconsumption and the stress this places on the natural ecosystem, and on people as a result. As an active member of the industrial design fraternity – a discipline that sits at the core of this problem – how can I use this opportunity to comment on the significant issue of consumption? An effective way to do this was to associate the consumption of goods with the consumption of sugar. There are significant similarities between our ceaseless desire to consume products and our fervent need to consume indulgent sweet foods. Artisan Statement Delicious, scrumptious, delectable... your pupils dilate, your blood pressure spikes, your liver goes into overdrive. Immediately, your brain cuts off the adenosine receptors, preventing drowsiness. Your body increases dopamine production, in turn stimulating the pleasure receptors in your brain. Your body absorbs all the sweetness and turns it into fat – while all the nutrients that you actually require are starting to be destroyed, about to be expelled. And this is only after one bite! After some time though, your body comes crashing back to earth. You become irritable and begin to feel sluggish. Your eyelids seem heavy while your breathing pattern changes. Your body has consumed all the energy and destroyed all available nutrients. You literally begin to shut down. These are the physiological effects of sugar consumption. A perfect analogy for our modern-day, consumer-driven world. Enjoy your dessert! Research contribution “Converse Consumption” contributes to the conversation regarding over-consumption by compelling people to reflect on their consumption behaviour through the reconceptualising of the deconstructed Chuck T’s in an attractive edible form. By doing so the viewer has to deal with the desire to consume the indulgent-looking dessert alongside the contradictory fact that it is composed of a pair of shoes.
The fact that the shoes are Chuck T’s makes the effect even more powerful due to their iconic status. These clashing motivations are what make “Converse Consumption” a bizarre yet memorable experience. Significance The exhibition was viewed by more than 1000 people and generated exceptional media coverage and public exposure/impact. As Artisan Gallery Curator Kirsten Fitzpatrick states, “20 of Brisbane's best designers were given the opportunity to customise their own Converse Sneakers, with The Converse Blank Canvas Project.” Being selected for this project demonstrates the calibre and prominence of the design.

Relevance: 10.00%

Abstract:

Early warning based on real-time prediction of rain-induced instability of natural residual slopes helps to minimise human casualties from such slope failures. Slope instability prediction is complicated, as it is influenced by many factors, including soil properties, soil behaviour, slope geometry, and the location and size of deep cracks in the slope. These deep cracks can facilitate rainwater infiltration into the deep soil layers and reduce the unsaturated shear strength of residual soil. Subsequently, a slip surface can form, triggering a landslide even in partially saturated soil slopes. Although past research has shown the effects of surface cracks on soil stability, research examining the influence of deep cracks on soil stability is very limited. This study aimed to develop methodologies for predicting the real-time rain-induced instability of natural residual soil slopes with deep cracks. The results can be used to warn against potential rain-induced slope failures. The literature review conducted on rain-induced slope instability of unsaturated residual soil associated with soil cracks reveals that only limited studies have been done in the following areas related to this topic:
- Methods for detecting deep cracks in residual soil slopes.
- Practical application of unsaturated soil theory in slope stability analysis.
- Mechanistic methods for real-time prediction of rain-induced residual soil slope instability in critical slopes with deep cracks.
Two natural residual soil slopes at Jombok Village, Ngantang City, Indonesia, which are located near a residential area, were investigated to obtain the parameters required for the stability analysis of the slope. A survey first identified all related field geometrical information including slope, roads, rivers, buildings, and boundaries of the slope. Second, the electrical resistivity tomography (ERT) method was used on the slope to identify the location and geometrical characteristics of deep cracks. The two ERT array models employed in this research are dipole-dipole and azimuthal. Next, bore-hole tests were conducted at different locations in the slope to identify soil layers and to collect undisturbed soil samples for laboratory measurement of the soil parameters required for the stability analysis. At the same bore-hole locations, Standard Penetration Tests (SPT) were undertaken. Undisturbed soil samples taken from the bore-holes were tested in a laboratory to determine the variation of the following soil properties with depth:
- Classification and physical properties such as grain size distribution, Atterberg limits, water content, dry density and specific gravity.
- Saturated and unsaturated shear strength properties using direct shear apparatus.
- Soil water characteristic curves (SWCC) using the filter paper method.
- Saturated hydraulic conductivity.
The following three methods were used to detect and simulate the location and orientation of cracks in the investigated slope:
(1) The electrical resistivity distribution of sub-soil obtained from ERT.
(2) The profile of classification and physical properties of the soil, based on laboratory testing of soil samples collected from bore-holes and visual observations of the cracks on the slope surface.
(3) The results of stress distribution obtained from 2D dynamic analysis of the slope using QUAKE/W software, together with the laboratory-measured soil parameters and earthquake records of the area.
It was assumed that the deep crack in the slope under investigation was generated by earthquakes. A good agreement was obtained when comparing the location and the orientation of the cracks detected by Method-1 and Method-2. However, the simulated cracks in Method-3 were not in good agreement with the output of Method-1 and Method-2. This may have been due to the material properties used and the assumptions made for the analysis. From Method-1 and Method-2, it can be concluded that the ERT method can be used to detect the location and orientation of a crack in a soil slope, when the ERT is conducted in very dry or very wet soil conditions. In this study, the cracks detected by the ERT were used for stability analysis of the slope. The stability of the slope was determined using the factor of safety (FOS) of a critical slip surface obtained by SLOPE/W using the limit equilibrium method. Pore-water pressure values for the stability analysis were obtained by coupling the transient seepage analysis of the slope using the finite-element-based software SEEP/W. A parametric study conducted on the stability of an investigated slope revealed that the existence of deep cracks and their location in the soil slope are critical for its stability. The following two steps are proposed to predict the rain-induced instability of a residual soil slope with cracks. (a) Step-1: The transient stability analysis of the slope is conducted from the date of the investigation (initial conditions are based on the investigation) to the preferred date (current date), using measured rainfall data. Then, the stability analyses are continued for the next 12 months using predicted annual rainfall based on the previous five years' rainfall data for the area. (b) Step-2: The stability of the slope is calculated in real time using real-time measured rainfall. In this calculation, rainfall is predicted for the next hour or 24 hours and the stability of the slope is calculated one hour or 24 hours in advance using real-time rainfall data. If the Step-1 analysis shows critical stability for the forthcoming year, it is recommended that Step-2 be used for more accurate warning against the future failure of the slope. In this research, the results of the application of Step-1 to an investigated slope (Slope-1) showed that its stability was not approaching a critical value for the year 2012 (until 31st December 2012) and, therefore, the application of Step-2 was not necessary for 2012. A case study (Slope-2) was used to verify the applicability of the complete proposed predictive method. A landslide event at Slope-2 occurred on 31st October 2010. The transient seepage and stability analyses of the slope, using data obtained from bore-hole, SPT, ERT and laboratory tests, were conducted for 12th June 2010 following Step-1 and found that the slope was in a critical condition on that date. It was then shown that the application of Step-2 could have predicted this failure and given sufficient warning time.
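As a simple illustration of how rainfall-driven pore-water pressure feeds into a factor of safety, the sketch below evaluates the classical infinite-slope FOS as the pore pressure on the slip plane rises. It is a didactic stand-in for the SEEP/W and SLOPE/W workflow described above, and the soil parameters are assumed values.

```python
# Infinite-slope factor of safety as a didactic stand-in for the coupled
# SEEP/W (pore pressure) + SLOPE/W (limit equilibrium) workflow described
# above. All soil parameters below are assumed illustrative values.
import math

def infinite_slope_fos(c_eff, phi_eff_deg, gamma, depth, beta_deg, pore_pressure):
    """c' [kPa], phi' [deg], unit weight gamma [kN/m^3], slip depth [m],
    slope angle beta [deg], pore pressure u on the slip plane [kPa]."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_eff_deg)
    normal_stress = gamma * depth * math.cos(beta) ** 2           # kPa
    shear_stress = gamma * depth * math.sin(beta) * math.cos(beta)  # kPa
    resisting = c_eff + (normal_stress - pore_pressure) * math.tan(phi)
    return resisting / shear_stress

# Rising pore pressure (e.g. after prolonged rainfall) drives the FOS down.
for u in (0.0, 10.0, 20.0, 30.0):
    fos = infinite_slope_fos(c_eff=15.0, phi_eff_deg=30.0, gamma=18.0,
                             depth=3.0, beta_deg=35.0, pore_pressure=u)
    print(f"u = {u:4.1f} kPa -> FOS = {fos:.2f}")
```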

Relevance: 10.00%

Abstract:

Shaft fracture at an early stage of operation is a common problem for a certain type of wind turbine. To determine the cause of shaft failure, a series of experimental tests were conducted to evaluate the chemical composition and mechanical properties. A detailed analysis involving the macroscopic features and microstructure of the shaft material was also performed to gain an in-depth knowledge of the cause of fracture. The experimental tests and analysis results show that there are no significant differences in the material properties of the main shaft when compared with the standard EN 10083-3:2006. The results show that stress concentration on the shaft surface close to the critical section, due to rubbing of the annular ring, coupled with the high stress concentration caused by the change of inner diameter of the main shaft, are the main reasons for the fracture of the main shaft. Inhomogeneity of the main shaft microstructure also accelerates the fracture process. In addition, a theoretical calculation of the equivalent stress at the end of the shaft was performed, which demonstrates that cracks can easily occur under the action of impact loads. The contribution of this paper is to provide a reference for fracture analysis of similar main shafts of wind turbines.
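The equivalent stress referred to above is commonly taken as the von Mises stress for combined bending and torsion on a circular shaft section. A sketch of that textbook calculation follows; the bending moment, torque and diameter are chosen purely for illustration and are not values from the investigated shaft.

```python
# Von Mises equivalent stress for combined bending and torsion on a solid
# circular shaft section (textbook formulas). Moment, torque and diameter are
# illustrative values only, not data from the investigated wind-turbine shaft.
# The local stress at a diameter change would be this nominal value multiplied
# by a stress concentration factor.
import math

def equivalent_stress(bending_moment_nm: float, torque_nm: float, diameter_m: float) -> float:
    sigma_b = 32.0 * bending_moment_nm / (math.pi * diameter_m ** 3)  # bending stress, Pa
    tau = 16.0 * torque_nm / (math.pi * diameter_m ** 3)              # torsional shear, Pa
    return math.sqrt(sigma_b ** 2 + 3.0 * tau ** 2)                   # von Mises, Pa

# Example: 120 kN*m bending, 80 kN*m torque on a 0.30 m diameter section.
sigma_eq = equivalent_stress(120e3, 80e3, 0.30)
print(f"Equivalent stress ~ {sigma_eq / 1e6:.0f} MPa")
```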

Relevance: 10.00%

Abstract:

A degree-of-freedom condensation technique is first proposed to improve the computational efficiency of the meshfree method with Galerkin weak form. In the present method, scattered nodes without connectivity are divided into several subsets by cells of arbitrary shape. The local discrete equations are established over each cell using moving kriging interpolation, in which the nodes located in the cell are used for the approximation. The condensation technique is then introduced into the local discrete equations by transferring the equations of the inner nodes to the equations of the cell-boundary nodes. In this scheme, the calculation for each cell is carried out by the meshfree method with Galerkin weak form, and a local search is implemented in the interpolation. Numerical examples show that the present method has high computational efficiency and convergence, and good accuracy is also obtained.
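In linear-algebra terms, transferring the inner-node equations onto the boundary-node equations is a static condensation of the cell system. A minimal sketch under that reading is given below; it shows only the algebra, not the moving kriging interpolation or the Galerkin assembly used in the paper.

```python
# Static condensation of a cell's linear system K u = f: partition the degrees
# of freedom into boundary (b) and inner (i) sets and eliminate the inner ones,
#   K* = K_bb - K_bi K_ii^{-1} K_ib,   f* = f_b - K_bi K_ii^{-1} f_i.
import numpy as np

def condense(K: np.ndarray, f: np.ndarray, boundary: list[int], inner: list[int]):
    K_bb = K[np.ix_(boundary, boundary)]
    K_bi = K[np.ix_(boundary, inner)]
    K_ib = K[np.ix_(inner, boundary)]
    K_ii = K[np.ix_(inner, inner)]
    K_ii_inv_K_ib = np.linalg.solve(K_ii, K_ib)
    K_ii_inv_f_i = np.linalg.solve(K_ii, f[inner])
    K_star = K_bb - K_bi @ K_ii_inv_K_ib   # condensed stiffness on boundary dofs
    f_star = f[boundary] - K_bi @ K_ii_inv_f_i  # condensed load vector
    return K_star, f_star

# Toy 4-dof cell: dofs 0 and 3 on the cell boundary, dofs 1 and 2 inside.
K = np.array([[ 4.0, -1.0,  0.0, -1.0],
              [-1.0,  4.0, -1.0,  0.0],
              [ 0.0, -1.0,  4.0, -1.0],
              [-1.0,  0.0, -1.0,  4.0]])
f = np.array([1.0, 0.0, 0.0, 1.0])
K_star, f_star = condense(K, f, boundary=[0, 3], inner=[1, 2])
print(K_star, f_star, sep="\n")
```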