946 results for Ephemeral Computation
Abstract:
Fluid Infrastructure: Landscape Architecture Exhibition: This exhibition showcases the work of 4th Year undergraduate landscape architecture students in response to the 2011 Queensland floods through five installations: Systima, Fluid Flux Flex, Fluid Connectivity, The Floods Verge, and Fluid Evolution. The focus of these installations is the post-flood condition of Brisbane’s riverside public infrastructure, within a scenario of flood as a normalised event. It recognises that within this scenario, parts of the city cannot be described as definitively ‘land’ or ‘water,’ but are best described as ‘fluid terrains’ (Mathur, A. and Da Cunha, D. 2006). The landscape design propositions within the five installations include public transport diversification (RiverRats) schemes, greenspace elevations, ephemeral gardens and evolving landscapes, and creative interpretation and warning devices and systems. These propositions do not resist fluid conditions, but work with them to propose a more resilient urban river landscape than Brisbane currently has. This QUT exhibition was developed as part of the 2011 Flood of Ideas Project (http://www.floodofideas.org.au) in partnership with Healthy Waterways (Water by Design), State Library of Queensland (The Edge), Brisbane City Council, Australian Institute of Architects, University of Queensland, Green Cross Australia, and the Stormwater Industry Association.
Abstract:
Bargara Pasturage Reserve: Future Visions This exhibition showcases the work of Postgraduate Landscape Architecture and final year Undergraduate Civil and Environmental Engineering students in response to issues of sustainability in a coastal wetland known as the Bargara Pasturage Reserve; an exemplar of the many issues facing sensitive coastal places in Queensland today. The 312ha Pasturage Reserve at Bargara is the only biofilter between the pressures of Bargara’s urban and tourism expansion, surrounding sugarcane farming, and the Great Sandy Marine Park, including the largest concentration of nesting marine turtles on the eastern Australian mainland. This ephemeral wetland, while struggling to fulfil its coastal biofiltration function, is also in high demand for passive recreation, and the project partners’ priorities were to meet both of these challenges. The students were required to plan and design for the best balance possible amongst, but not limited to: wetland and coastal ecological health, enhancement of cultural heritage and values, sustainable urban development, and local economic health. To understand these challenges, QUT staff and students met with partners, visited and analysed the Pasturage Reserve, spent time in and around Bargara talking to locals and inviting dialogue with Indigenous representatives and the South Sea Islander community. We then went home to Brisbane to undertake theoretical and technical research, and then worked to produce 11 Strategic Plans, 2 Environmental Management Plans and 33 Detailed Designs. One group of students analysed the Bargara coastal landscape as an historical and ongoing series of conversations between ecological systems, cultural heritage, community and stakeholders. Another group identified the landscape as neither ‘urban,’ ‘rural,’ nor ‘natural,’ instead identifying it metaphorically as a series of layered thematic ‘fields’ such as water, conservation, reconciliation, and educational fields. 
These landscape analyses became the organising mechanisms for strategic planning. An outstanding Strategic Plan was produced by Zhang, Lemberg and Jensen, entitled Metanoia, which means to ‘make a change as the result of reflection on values’. Three implementation phases of “flow”, “flux”, and “flex” span twenty-five years, and present a vision of a coastal and marine research and conservation hub, with a focus on coastal wetland function, turtle habitat and coral reef conservation. An Environmental Management Plan by Brand and Stickland focuses on protecting and improving wetland biodiversity and habitat quality, and increasing hydrological and water quality function; vital in a coastal area of such high conservation value. After the planning phase, students individually developed detailed design proposals responsive to their plans. From Metanoia, Zhang concentrated on wetland access and interpretation, proposing four focal places to form the nucleus of a wider pattern of connectivity, and to encourage community engagement with coastal environmental management and education. Jensen tackled the thorny issue of coastal urban development, proposing a sensitive staged eco-village model which maintains both ecological and recreational connectivity between the wetland and the marine environment. This project offered QUT’s partners many innovative options to inform their future planning. BSC, BMRG and Oceanwatch Australia are currently engaged in the investigation of on-ground opportunities drawing on these options.
Abstract:
The generation of a correlation matrix from a large set of long gene sequences is a common requirement in many bioinformatics problems such as phylogenetic analysis. The generation is not only computationally intensive but also requires significant memory resources as, typically, few gene sequences can be simultaneously stored in primary memory. The standard practice in such computation is to use frequent input/output (I/O) operations. Therefore, minimizing the number of these operations will yield much faster run-times. This paper develops an approach for the faster and scalable computing of large-size correlation matrices through the full use of available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on different computing platforms with different amounts of memory and can be applied to different problems with different correlation matrix sizes. The significant performance improvement of the approach over the existing approaches is demonstrated through benchmark examples.
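The blockwise idea behind the abstract above can be sketched in-memory (this is an illustrative reconstruction, not the authors' algorithm): standardise each block of sequences once, then form the correlation matrix block-by-block, so an out-of-core variant would read each block from disk a bounded number of times rather than issuing per-pair I/O.

```python
import numpy as np

def blockwise_correlation(X, block_rows=256):
    """Correlation matrix of the rows of X, computed block-by-block.

    Rows are standardised once (zero mean, unit norm), so the
    correlation matrix is simply Z @ Z.T, assembled in blocks.
    Assumes no constant rows (their norm would be zero).
    """
    n, _ = X.shape
    Z = X - X.mean(axis=1, keepdims=True)
    Z /= np.linalg.norm(Z, axis=1, keepdims=True)
    C = np.empty((n, n))
    for i in range(0, n, block_rows):
        zi = Z[i:i + block_rows]
        for j in range(i, n, block_rows):  # upper-triangle blocks only
            zj = Z[j:j + block_rows]
            cij = zi @ zj.T
            C[i:i + block_rows, j:j + block_rows] = cij
            C[j:j + block_rows, i:i + block_rows] = cij.T  # symmetry
    return C
```

In an out-of-core setting, `Z[i:i + block_rows]` would become a disk read, and the symmetry of the matrix halves the number of block pairs that must be brought into memory together.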
Abstract:
Fractional partial differential equations have been applied to many problems in physics, finance, and engineering. Numerical methods and error estimates for these equations are currently a very active area of research. In this paper we consider a fractional diffusion-wave equation with damping. We derive the analytical solution for the equation using the method of separation of variables. An implicit difference approximation is constructed. Stability and convergence are proved by the energy method. Finally, two numerical examples are presented to show the effectiveness of this approximation.
Abstract:
The space and time fractional Bloch–Torrey equation (ST-FBTE) has been used to study anomalous diffusion in the human brain. Numerical methods for solving ST-FBTE in three-dimensions are computationally demanding. In this paper, we propose a computationally effective fractional alternating direction method (FADM) to overcome this problem. We consider ST-FBTE on a finite domain where the time and space derivatives are replaced by the Caputo–Djrbashian and the sequential Riesz fractional derivatives, respectively. The stability and convergence properties of the FADM are discussed. Finally, some numerical results for ST-FBTE are given to confirm our theoretical findings.
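The fractional operators in the FADM are beyond a short sketch, but the alternating-direction idea it builds on can be illustrated with the classical Peaceman–Rachford ADI scheme for the ordinary 2D heat equation: each half step is implicit in only one spatial direction, so only one-dimensional systems need to be solved (a minimal dense-matrix sketch; grid sizes and names are illustrative).

```python
import numpy as np

def adi_heat_step(u, dt, dx, alpha=1.0):
    """One Peaceman-Rachford ADI step for u_t = alpha*(u_xx + u_yy)
    on a square grid with zero Dirichlet boundaries (u holds interior
    values only; axis 0 is x, axis 1 is y)."""
    n = u.shape[0]
    r = alpha * dt / (2.0 * dx**2)
    # 1D second-difference matrix on interior points (Dirichlet BCs)
    L = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1))
    A = np.eye(n) - r * L   # implicit half-step operator
    B = np.eye(n) + r * L   # explicit half-step operator (symmetric)
    # Half step 1: implicit in x (axis 0), explicit in y (axis 1)
    u = np.linalg.solve(A, u @ B)
    # Half step 2: implicit in y (axis 1), explicit in x (axis 0)
    u = np.linalg.solve(A, (B @ u).T).T
    return u
```

In practice the tridiagonal systems would be solved with a Thomas-type solver rather than a dense `solve`; the point here is only the direction-splitting structure that the FADM extends to fractional derivatives.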
Abstract:
In this paper we present a new simulation methodology in order to obtain exact or approximate Bayesian inference for models for low-valued count time series data that have computationally demanding likelihood functions. The algorithm fits within the framework of particle Markov chain Monte Carlo (PMCMC) methods. The particle filter requires only model simulations and, in this regard, our approach has connections with approximate Bayesian computation (ABC). However, an advantage of using the PMCMC approach in this setting is that simulated data can be matched with data observed one-at-a-time, rather than attempting to match on the full dataset simultaneously or on a low-dimensional non-sufficient summary statistic, which is common practice in ABC. For low-valued count time series data we find that it is often computationally feasible to match simulated data with observed data exactly. Our particle filter maintains $N$ particles by repeating the simulation until $N+1$ exact matches are obtained. Our algorithm creates an unbiased estimate of the likelihood, resulting in exact posterior inferences when included in an MCMC algorithm. In cases where exact matching is computationally prohibitive, a tolerance is introduced as per ABC. A novel aspect of our approach is that we introduce auxiliary variables into our particle filter so that partially observed and/or non-Markovian models can be accommodated. We demonstrate that Bayesian model choice problems can be easily handled in this framework.
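The exact-matching likelihood estimate described above can be sketched for a single count observation. The estimator rests on a standard negative-binomial argument: if $T$ total simulations are needed to collect $N+1$ exact matches, then $N/(T-1)$ is an unbiased estimate of the match probability (as in the alive particle filter). Function and variable names here are illustrative, not the authors' code.

```python
import numpy as np

def match_likelihood(simulate, y_obs, N, rng):
    """Unbiased estimate of p = P(Y = y_obs): repeat the model
    simulation until N + 1 exact matches are obtained; if that takes
    T simulations in total, return N / (T - 1)."""
    matches, trials = 0, 0
    while matches < N + 1:
        trials += 1
        if simulate(rng) == y_obs:
            matches += 1
    return N / (trials - 1)

# Toy model: Poisson(4) counts, observed value y = 4.
rng = np.random.default_rng(1)
estimate = match_likelihood(lambda r: r.poisson(4.0), 4, 500, rng)
```

In the full particle filter this estimate is formed at every time step conditional on the simulated history, and the product of the per-step estimates gives the unbiased likelihood estimate used inside PMCMC.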
Abstract:
Creative productivity emerges from human interactions (Hartley, 2009, p. 214). In an era when life is lived in rather than with media (Deuze, this issue), this productivity is widely distributed among ephemeral social networks mediated through the internet. Understanding the underlying dynamics of these networks of human interaction is an exciting and challenging task that requires us to come up with new ways of thinking and theorizing. For example, inducting theory from case studies that are designed to show the exceptional dynamics present within single settings can be augmented today by large-scale data generation and collections that provide new analytic opportunities to research the diversity and complexity of human interaction. Large-scale data generation and collection is occurring across a wide range of individuals and organisations. This offers a massive field of analysis which internet companies and research labs in particular are keen on exploring. Lazer et al. (2009: 721) argue that such analytic potential is transformational for many if not most research fields, but that the use of such valuable data must neither remain confined to private companies and government agencies, nor to a privileged set of academic researchers whose studies can neither be replicated nor critiqued. In fact, the analytic capacity to have data of such unprecedented scope and scale available not only requires us to analyse what is and could be done with it and by whom (1), but also what it is doing to us, our cultures and societies (2). Part (1) of such analysis is interested in dependencies and their implications. Part (2) of the enquiry embeds part (1) in a larger context that analyses the long-term, complex dynamics of networked human interaction. From the latter perspective we can treat specific phenomena and the methods used to analyse them as moments of evolution.
Abstract:
We present a rigorous validation of the analytical Amadei solution for the stress concentration around an arbitrarily orientated borehole in general anisotropic elastic media. First, we revisit the theoretical framework of the Amadei solution and present analytical insights that show that the solution does indeed contain all special cases of symmetry, contrary to previous understanding, provided that the reduced strain coefficients b11 and b55 are not equal. It is shown from theoretical considerations and published experimental data that b11 and b55 are not equal for realistic rocks. Second, we develop a 3D finite element elastic model within a hybrid analytical–numerical workflow that circumvents the need to rebuild and remesh the model for every borehole and material orientation. Third, we show that the borehole stresses computed from the numerical model and the analytical solution match almost perfectly for different borehole orientations (vertical, deviated and horizontal) and for several cases involving isotropic, transverse isotropic and orthorhombic symmetries. It is concluded that the analytical Amadei solution is valid with no restriction on the borehole orientation or the symmetry of the elastic anisotropy.
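For context, in the fully isotropic limit the borehole stress concentration reduces to the classical Kirsch solution. A minimal sketch of the Kirsch hoop stress at the wall of a vertical borehole (illustrative of the isotropic special case only, not the Amadei workflow; compression taken positive):

```python
import numpy as np

def kirsch_hoop_stress(sigma_H, sigma_h, p_w, theta):
    """Hoop stress at the wall of a vertical borehole in an isotropic
    elastic medium (classical Kirsch solution). theta is measured from
    the direction of the maximum horizontal stress sigma_H, and p_w is
    the wellbore (mud) pressure."""
    return (sigma_H + sigma_h
            - 2.0 * (sigma_H - sigma_h) * np.cos(2.0 * theta)
            - p_w)
```

The familiar sanity checks follow directly: the hoop stress peaks at `3*sigma_H - sigma_h - p_w` perpendicular to the maximum horizontal stress (theta = 90°) and has its minimum `3*sigma_h - sigma_H - p_w` parallel to it, which is the kind of closed-form benchmark the numerical model in the paper is validated against in more general anisotropic settings.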
Abstract:
Anisotropic damage distribution and evolution have a profound effect on borehole stress concentrations. Damage evolution is an irreversible process that is not adequately described within classical equilibrium thermodynamics. Therefore, we propose a constitutive model, based on non-equilibrium thermodynamics, that accounts for anisotropic damage distribution, anisotropic damage threshold and anisotropic damage evolution. We implemented this constitutive model numerically, using the finite element method, to calculate stress–strain curves and borehole stresses. The resulting stress–strain curves are distinctively different from linear elastic-brittle and linear elastic-ideal plastic constitutive models and realistically model experimental responses of brittle rocks. We show that the onset of damage evolution leads to an inhomogeneous redistribution of material properties and stresses along the borehole wall. The classical linear elastic-brittle approach to borehole stability analysis systematically overestimates the stress concentrations on the borehole wall, because dissipative strain-softening is underestimated. The proposed damage mechanics approach explicitly models dissipative behaviour and leads to non-conservative mud window estimations. Furthermore, anisotropic rocks with preferential planes of failure, like shales, can be addressed with our model.
Abstract:
We report on an accurate numerical scheme for the evolution of an inviscid bubble in radial Hele-Shaw flow, where the nonlinear boundary effects of surface tension and kinetic undercooling are included on the bubble-fluid interface. As well as demonstrating the onset of the Saffman-Taylor instability for growing bubbles, the numerical method is used to show the effect of the boundary conditions on the separation (pinch-off) of a contracting bubble into multiple bubbles, and the existence of multiple possible asymptotic bubble shapes in the extinction limit. The numerical scheme also allows for the accurate computation of bubbles which pinch off very close to the theoretical extinction time, raising the possibility of computing solutions for the evolution of bubbles with non-generic extinction behaviour.
Abstract:
Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development process of a controlled drug delivery device may be facilitated enormously by the mathematical modelling of drug release mechanisms, directly decreasing the number of necessary experiments. Such mathematical modelling is difficult because several mechanisms are involved during the drug release process. The main drug release mechanisms of a controlled release device are based on the device’s physicochemical properties, and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve the solvent penetration into the polymeric device, the swelling of the polymer, the polymer erosion and the drug diffusion out of the device, but all share two common key features. The first is that the solvent penetration into the polymer causes the transition of the polymer from a glassy state into a rubbery state. The interface between the two states of the polymer is modelled as a moving boundary and the speed of this interface is governed by a kinetic law. The second feature is that drug diffusion only happens in the rubbery region of the polymer, with a nonlinear diffusion coefficient which is dependent on the concentration of solvent. These models are analysed by using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve these models numerically. This numerical scheme is conservative, accurate and easily implemented for moving boundary problems, and is thoroughly explained in Section 3.2.
From the small time asymptotic analysis in Sections 5.3.1, 6.3.1 and 7.2.1, these models exhibit the non-Fickian behaviour referred to as Case II diffusion, and an initial constant rate of drug release, which is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour identified in the literature. The knowledge obtained from investigating these models can help to develop more complex multi-layered drug delivery devices in order to achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustainable and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing the solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem has unrealistic singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; this problem is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the unrealistic singularities of the classical one-phase Stefan problem at the complete melting time are regularised, and the small time asymptotic analysis in Section 3.3 shows that the small time behaviour of the one-phase Stefan problem with kinetic undercooling differs from that of the classical problem. In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (no kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up.
Therefore we investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. The investigation of including kinetic undercooling and surface tension in the melting problems reveals more insight into the regularisation of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of melting a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
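The classical one-phase Stefan problem referred to above has the well-known Neumann similarity solution, in which the melt front advances as $s(t) = 2\lambda\sqrt{\alpha t}$ and $\lambda$ solves a transcendental equation in the Stefan number. A minimal sketch of finding $\lambda$ by bisection (variable names illustrative):

```python
from math import erf, exp, sqrt, pi

def stefan_lambda(stefan_number, lo=1e-9, hi=5.0, tol=1e-12):
    """Solve the Neumann transcendental equation
        lam * exp(lam**2) * erf(lam) = Ste / sqrt(pi)
    for the classical one-phase Stefan (melting) problem by bisection.
    The melt front then advances as s(t) = 2 * lam * sqrt(alpha * t).
    The left-hand side is increasing in lam, so bisection applies."""
    target = stefan_number / sqrt(pi)
    f = lambda lam: lam * exp(lam * lam) * erf(lam) - target
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

For small Stefan number the expansion gives $\lambda \approx \sqrt{\mathrm{Ste}/2}$, a convenient check on the root-finder; the square-root front behaviour is exactly the kind of small-time structure that the kinetic undercooling regularisation modifies.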
Abstract:
Recent fire research into the behaviour of light gauge steel frame (LSF) wall systems has developed fire design rules based on Australian and European cold-formed steel design standards, AS/NZS 4600 and Eurocode 3 Part 1.3. However, these design rules are complex since the LSF wall studs are subjected to non-uniform elevated temperature distributions when the walls are exposed to fire from one side. Therefore this paper proposes an alternative design method for routine predictions of the fire resistance rating of LSF walls. In this method, suitable equations are recommended first to predict the idealised stud time-temperature profiles of eight different LSF wall configurations subject to standard fire conditions based on full scale fire test results. A new set of equations was then proposed to find the critical hot flange (failure) temperature for a given load ratio for the same LSF wall configurations with varying steel grades and thicknesses. These equations were developed based on detailed finite element analyses that predicted the axial compression capacities and failure times of LSF wall studs subject to non-uniform temperature distributions with varying steel grades and thicknesses. This paper proposes a simple design method in which the two sets of equations developed for time-temperature profiles and critical hot flange temperatures are used to find the failure times of LSF walls. The proposed method was verified by comparing its predictions with the results from full scale fire tests and finite element analyses. This paper presents the details of this study including the finite element models of LSF wall studs, the results from relevant fire tests and finite element analyses, and the proposed equations.
Abstract:
Computer Experiments, consisting of a number of runs of a computer model with different inputs, are now commonplace in scientific research. Using a simple fire model for illustration, some guidelines are given for the size of a computer experiment. A graph is provided relating the error of prediction to the sample size, which should be of use when designing computer experiments. Methods for augmenting computer experiments with extra runs are also described and illustrated. The simplest method involves adding one point at a time, choosing the point with the maximum prediction variance. Another method that appears to work well is to choose points from a candidate set with maximum determinant of the variance–covariance matrix of predictions.
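The "maximum prediction variance" augmentation rule can be sketched with a small Gaussian-process emulator, the usual surrogate in computer experiments (squared-exponential kernel; all settings and names here are illustrative, not those of the paper):

```python
import numpy as np

def gp_posterior_var(X_train, X_cand, length=0.3, noise=1e-8):
    """Posterior predictive variance of a zero-mean GP with a
    squared-exponential kernel at candidate points. The responses do
    not affect the variance, so only the inputs are needed."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * length**2))
    K = k(X_train, X_train) + noise * np.eye(len(X_train))
    Kc = k(X_train, X_cand)
    # var(x) = k(x, x) - k(x, X) K^{-1} k(X, x), with k(x, x) = 1 here
    return 1.0 - np.einsum("ij,ij->j", Kc, np.linalg.solve(K, Kc))

def augment_design(X_train, X_cand, n_extra):
    """Add n_extra runs one at a time, each time choosing the
    candidate point with maximum prediction variance."""
    X = X_train.copy()
    for _ in range(n_extra):
        i = int(np.argmax(gp_posterior_var(X, X_cand)))
        X = np.vstack([X, X_cand[i]])
    return X
```

For a one-dimensional design with runs at 0 and 1, the rule picks the midpoint first, which matches the intuition that prediction uncertainty is largest far from existing runs; the determinant-based alternative mentioned in the abstract would instead score candidate subsets jointly.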
Abstract:
Deterministic computer simulations of physical experiments are now common techniques in science and engineering. Often, physical experiments are too time-consuming, expensive or impossible to conduct, so complex computer models or codes are run in their place; the resulting computer experiments are used to investigate many scientific phenomena of this nature. A computer experiment consists of a number of runs of the computer code with different input choices. The Design and Analysis of Computer Experiments is a rapidly growing area of statistical experimental design. This thesis investigates some practical issues in the design and analysis of computer experiments and attempts to answer some of the questions faced by experimenters using computer experiments. In particular, the question of the number of computer experiments and how they should be augmented is studied, and attention is given to the case when the response is a function over time.
Abstract:
A considerable amount of research has proposed optimization-based approaches employing various vibration parameters for structural damage diagnosis. The damage detection by these methods is in fact a result of updating the analytical structural model in line with the current physical model. The feasibility of these approaches has been proven, but most of the verification has been done on simple structures, such as beams or plates. When applied to a complex structure, such as a steel truss bridge, a traditional optimization process demands massive computational resources and converges slowly. This study presents a multi-layer genetic algorithm (ML-GA) to overcome the problem. Unlike the tedious convergence process in a conventional damage optimization process, in each layer the proposed algorithm divides the GA’s population into groups with a smaller number of damage candidates; then, the converged population in each group serves as an initial population of the next layer, where the groups merge into larger groups. Because the groups in a damage detection process featuring ML-GA can be computed in parallel, both optimization performance and computational efficiency can be enhanced. In order to assess the proposed algorithm, the modal strain energy correlation (MSEC) has been considered as the objective function. Several damage scenarios of a complex steel truss bridge’s finite element model have been employed to evaluate the effectiveness and performance of ML-GA against a conventional GA. In both single- and multiple-damage scenarios, the analytical and experimental study shows that the MSEC index has achieved excellent damage indication and efficiency using the proposed ML-GA, whereas the conventional GA only converges at a local solution.
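The layering idea can be sketched with a toy real-coded GA: each group of damage candidates is optimised separately first (the other candidates held at zero, i.e. undamaged), and the merged group solutions then seed a full-dimension search. This is an illustrative reconstruction under stated simplifications, not the authors' ML-GA implementation.

```python
import numpy as np

def mini_ga(obj, dim, pop=None, n_pop=60, n_gen=80, rng=None):
    """Minimal real-coded GA (tournament selection, blend crossover,
    Gaussian mutation, elitism) minimising obj over [0, 1]^dim."""
    rng = rng if rng is not None else np.random.default_rng(0)
    P = pop if pop is not None else rng.random((n_pop, dim))
    for _ in range(n_gen):
        f = np.array([obj(p) for p in P])
        i, j = rng.integers(0, len(P), (2, len(P)))   # tournaments
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])
        w = rng.random((len(P), dim))                 # blend crossover
        C = w * parents + (1 - w) * parents[rng.permutation(len(P))]
        C += rng.normal(0.0, 0.05, C.shape)           # mutation
        C = np.clip(C, 0.0, 1.0)
        C[0] = P[np.argmin(f)]                        # keep the elite
        P = C
    f = np.array([obj(p) for p in P])
    return P[np.argmin(f)]

def layered_ga(obj, dim, groups, rng=None):
    """Sketch of the multi-layer idea: optimise each group of damage
    candidates separately, then merge the group solutions to seed a
    full-dimension GA."""
    rng = rng if rng is not None else np.random.default_rng(1)
    seed = np.zeros(dim)
    for g in groups:
        def sub(v, g=g):
            x = seed.copy()
            x[g] = v
            return obj(x)
        seed[g] = mini_ga(sub, len(g), rng=rng)
    pop = np.clip(seed + rng.normal(0.0, 0.05, (60, dim)), 0.0, 1.0)
    return mini_ga(obj, dim, pop=pop, rng=rng)
```

Because each group subproblem is independent, the first layer is trivially parallelisable, which is the efficiency argument made for ML-GA; in the real method the objective would be the MSEC index evaluated through the finite element model rather than a toy function.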