946 results for Ephemeral Computation
Abstract:
This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. Firstly, the proposed method takes a sparse grid of sample pixels from the image to reduce whole-image scan time. A fusion of foreground segmentation and skin colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector is applied only to selected regions to verify the presence of a face (the Viola-Jones detector is used in this paper). The proposed system is evaluated using 640 × 480 pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, where the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
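The region-constrained idea can be sketched in a few lines of OpenCV. The following is a minimal illustration, not the paper's implementation: it fuses a foreground mask with a skin-colour mask, then runs the Viola-Jones cascade only inside the resulting candidate regions (the sparse-grid sampling step is omitted, and the skin-colour bounds and size threshold are assumed values).

```python
# Illustrative sketch: restrict Viola-Jones to fused foreground/skin regions.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
bg_sub = cv2.createBackgroundSubtractorMOG2()   # foreground segmentation

def detect_faces(frame):
    fg_mask = bg_sub.apply(frame)               # needs a video stream to adapt
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))  # assumed bounds
    candidates = cv2.bitwise_and(fg_mask, skin_mask)               # cue fusion
    contours, _ = cv2.findContours(candidates, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    faces = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w < 20 or h < 20:                    # discard tiny regions
            continue
        roi = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
        for fx, fy, fw, fh in cascade.detectMultiScale(roi, 1.1, 3):
            faces.append((x + fx, y + fy, fw, fh))  # back to frame coordinates
    return faces
```

The speed-up in such a scheme comes from the final step scanning small regions of interest rather than the full 640 × 480 frame.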
Abstract:
The feasibility of real-time calculation of parameters for an internal combustion engine via reconfigurable hardware implementation is investigated as an alternative to software computation. A detailed in-hardware field programmable gate array (FPGA)-based design is developed and evaluated using input crank angle and in-cylinder pressure data from fully instrumented diesel engines in the QUT Biofuel Engine Research Facility (BERF). Results indicate the feasibility of employing a hardware-based implementation for real-time processing at speeds comparable to the data sampling rate currently used in the facility, with an acceptably low level of discrepancy between hardware- and software-based calculations of key engine parameters.
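The abstract does not list the parameters computed, but a standard quantity derived from crank-angle and in-cylinder pressure data is the net indicated mean effective pressure (IMEP). The following software reference sketch (the kind of calculation an FPGA design would replicate) uses assumed engine geometry, not the BERF configuration.

```python
# Software reference for one assumed "key engine parameter": net IMEP.
import numpy as np

BORE, STROKE, CONROD, CR = 0.084, 0.090, 0.145, 17.0   # illustrative geometry

def cylinder_volume(theta_deg):
    """Slider-crank cylinder volume (m^3) at crank angle theta (deg after TDC)."""
    r = STROKE / 2.0
    area = np.pi * BORE**2 / 4.0
    vc = area * STROKE / (CR - 1.0)                     # clearance volume
    th = np.radians(theta_deg)
    x = r * (1 - np.cos(th)) + CONROD - np.sqrt(CONROD**2 - (r * np.sin(th))**2)
    return vc + area * x

def imep(theta_deg, pressure_pa):
    """Net IMEP (Pa): closed-cycle integral of p dV divided by displaced volume."""
    v = cylinder_volume(theta_deg)
    work = np.sum(0.5 * (pressure_pa[1:] + pressure_pa[:-1]) * np.diff(v))
    vd = np.pi * BORE**2 / 4.0 * STROKE
    return work / vd

theta = np.linspace(-360, 360, 1441)                    # one four-stroke cycle
p = 101325 * (cylinder_volume(180.0) / cylinder_volume(theta)) ** 1.3  # motored trace
print(f"net IMEP: {imep(theta, p) / 1e5:.3f} bar")      # ~0 for this lossless trace
```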
Abstract:
The Gallery of Modern Art (GoMA) in Brisbane, Australia’s third largest city, recently staged ‘21st Century: Art of the First Decade’. The gallery spaces were replete with a commissioned slide by Carsten Höller, an installation of Rivane Neuenschwander’s I Wish Your Wish (2003), a table of white Legos, a room of purple balloons and other participatory or interactive artworks designed to engage multiple publics and encourage audience participation in a variety of ways. Many of the featured projects used day-to-day experiences and offered new conceptions of art practice and what it can elicit in its public – raising awareness about local issues, helping audiences imagine different ways of negotiating their environs or experiencing a museum in a new way. At times, the bottom floor galleries resembled a theme park – adults and children playing with Legos and using Höller’s slide. This article examines the benefits and limitations of such artistic interventions by relating the GoMA exhibition to Brisbane City Council’s campaign of ‘Together Brisbane’ (featuring images of Neuenschwander’s ribbons), a response to the devastation brought to the city and its surrounds in January 2011. During the Brisbane floods, GoMA’s basement was damaged and the museum closed; upon reopening, visitor numbers soared. In this context, GoMA’s use of engaged art practice – always verging on the ephemeral and ‘fun’ – has been used to project a wider notion of a collective urban public. What questions does this raise, not only regarding the cultural politics around the social and participatory ‘turn’ in art practice, but also regarding its use to address a much wider urban public in a moment of crisis?
Abstract:
Advances in algorithms for approximate sampling from a multivariable target function have led to solutions to challenging statistical inference problems that would otherwise not be considered by the applied scientist. Such sampling algorithms are particularly relevant to Bayesian statistics, since the target function is the posterior distribution of the unobservables given the observables. In this thesis we develop, adapt and apply Bayesian algorithms, whilst addressing substantive applied problems in biology and medicine as well as other applications. For an increasing number of high-impact research problems, the primary models of interest are often sufficiently complex that the likelihood function is computationally intractable. Rather than discard these models in favour of inferior alternatives, a class of Bayesian "likelihood-free" techniques (often termed approximate Bayesian computation (ABC)) has emerged in the last few years, which avoids direct likelihood computation by repeatedly sampling data from the model and comparing observed and simulated summary statistics. In Part I of this thesis we utilise sequential Monte Carlo (SMC) methodology to develop new algorithms for ABC that are more efficient in terms of the number of model simulations required and are almost black-box, since very little algorithmic tuning is required. In addition, we address the issue of deriving appropriate summary statistics to use within ABC via a goodness-of-fit statistic and indirect inference. Another important problem in statistics is the design of experiments. That is, how one should select the values of the controllable variables in order to achieve some design goal. The presence of parameter and/or model uncertainty is a computational obstacle when designing experiments, and can lead to inefficient designs if not accounted for correctly. The Bayesian framework accommodates such uncertainties in a coherent way. If the amount of uncertainty is substantial, it can be of interest to perform adaptive designs in order to accrue information to make better decisions about future design points. This is of particular interest if the data can be collected sequentially. In a sense, the current posterior distribution becomes the new prior distribution for the next design decision. Part II of this thesis creates new algorithms for Bayesian sequential design to accommodate parameter and model uncertainty using SMC. The algorithms are substantially faster than previous approaches, allowing the simulation properties of various design utilities to be investigated in a more timely manner. Furthermore, the approach offers convenient estimation of Bayesian utilities and other quantities that are particularly relevant in the presence of model uncertainty. Finally, Part III of this thesis tackles a substantive medical problem. A neurological disorder known as motor neuron disease (MND) progressively causes motor neurons to lose the ability to innervate the muscle fibres, causing the muscles to eventually waste away. When this occurs the motor unit effectively ‘dies’. There is no cure for MND, and fatality often results from a lack of muscle strength to breathe. The prognosis for many forms of MND (particularly amyotrophic lateral sclerosis (ALS)) is particularly poor, with patients usually surviving only a small number of years after the initial onset of disease. Measuring the progress of diseases of the motor units, such as ALS, is a challenge for clinical neurologists.
Motor unit number estimation (MUNE) is an attempt to directly assess underlying motor unit loss, rather than relying on indirect techniques such as muscle strength assessment, which is generally unable to detect progression due to the body’s natural attempts at compensation. Part III of this thesis builds upon a previous Bayesian technique based on a sophisticated statistical model that takes into account physiological information about motor unit activation and various sources of uncertainty. More specifically, we develop a more reliable MUNE method by applying marginalisation over latent variables in order to improve the performance of a previously developed reversible jump Markov chain Monte Carlo sampler. We make other subtle changes to the model and algorithm to improve the robustness of the approach.
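The likelihood-free idea in Part I is easiest to see in its simplest (rejection) form, sketched below; the thesis develops far more efficient SMC-based variants. The model, prior and summary statistic here are toy assumptions.

```python
# Toy rejection ABC: simulate from the model, compare summary statistics,
# keep parameter draws whose simulations land within a tolerance.
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(loc=3.0, scale=1.0, size=100)   # stand-in "observed" data
s_obs = observed.mean()                               # chosen summary statistic

def abc_rejection(n_draws=20000, tol=0.1):
    accepted = []
    for _ in range(n_draws):
        theta = rng.uniform(-10.0, 10.0)              # draw from a flat prior
        s_sim = rng.normal(loc=theta, scale=1.0, size=100).mean()
        if abs(s_sim - s_obs) < tol:                  # compare summaries
            accepted.append(theta)
    return np.array(accepted)                         # approximate posterior draws

post = abc_rejection()
print(f"{len(post)} accepted; posterior mean ~ {post.mean():.2f}")
```

Rejection ABC wastes most of its model simulations on implausible parameter values, which is precisely the inefficiency that motivates the SMC-based algorithms of Part I.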
Abstract:
The challenge of persistent appearance-based navigation and mapping is to develop an autonomous robotic vision system that can simultaneously localize, map and navigate over the lifetime of the robot. However, the computation time and memory requirements of current appearance-based methods typically scale not only with the size of the environment but also with the operation time of the platform; also, repeated revisits to locations will develop multiple competing representations which reduce recall performance. In this paper we present a solution to the persistent localization, mapping and global path planning problem in the context of a delivery robot in an office environment over a one-week period. Using a graphical appearance-based SLAM algorithm, CAT-Graph, we demonstrate constant time and memory loop closure detection with minimal degradation during repeated revisits to locations, along with topological path planning that improves over time without using a global metric representation. We compare the localization performance of CAT-Graph to openFABMAP, an appearance-only SLAM algorithm, and the path planning performance to occupancy-grid based metric SLAM. We discuss the limitations of the algorithm with regard to environment change over time and illustrate how the topological graph representation can be coupled with local movement behaviors for persistent autonomous robot navigation.
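CAT-Graph itself is not reproduced here, but the appearance-based matching problem it addresses can be illustrated with a deliberately naive stand-in (not CAT-Graph or openFABMAP): each place is a visual-word histogram, and a revisit is declared when similarity exceeds a threshold.

```python
# Naive appearance-based loop-closure detection with bag-of-words histograms.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

class PlaceMemory:
    """One visual-word histogram per learned place."""
    def __init__(self, threshold=0.8):
        self.places = []
        self.threshold = threshold

    def observe(self, histogram):
        """Return the index of a matched place, or register a new place."""
        for i, h in enumerate(self.places):
            if cosine(h, histogram) > self.threshold:
                return i                     # loop closure with place i
        self.places.append(histogram)
        return len(self.places) - 1

mem = PlaceMemory()
a = np.array([5.0, 0.0, 2.0, 1.0])           # histogram of image features
print(mem.observe(a), mem.observe(a * 1.1))  # 0, then 0 again: a revisit
```

Note that this naive memory scans every stored place per query, so its cost grows with operation time and environment size – precisely the scaling behaviour the paper's constant-time approach avoids.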
Abstract:
Adolescent idiopathic scoliosis (AIS) is a three-dimensional spinal deformity involving side-to-side curvature of the spine in the coronal plane and axial rotation of the vertebrae in the transverse plane. For patients with a severe or rapidly progressing deformity, corrective instrumented fusion surgery is performed. The wide choice of implants and large variability between patients make it difficult for surgeons to choose optimal treatment strategies. This paper describes the patient-specific finite element modelling techniques employed and the results of preliminary analyses predicting the surgical outcomes for a series of AIS patients. It highlights the importance of not only patient-specific anatomy and material parameters, but also patient-specific data on the clinical and physiological loading conditions experienced by patients undergoing corrective scoliosis surgery.
Abstract:
This work identifies the limitations of n-way data analysis techniques in multidimensional stream data, such as Internet chat room communications data, and establishes a link between data collection and the performance of these techniques. Its contributions are twofold. First, it extends data analysis to multiple dimensions by constructing n-way data arrays known as high order tensors. Chat room tensors are generated by a simulator which collects and models actual communication data. The accuracy of the model is determined by the Kolmogorov-Smirnov goodness-of-fit test, which compares the simulation data with the observed (real) data. Second, a detailed computational comparison is performed to test several data analysis techniques, including the singular value decomposition (SVD) [1] and the multi-way techniques Tucker1, Tucker3 [2] and Parafac [3].
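Both steps have off-the-shelf counterparts. The sketch below, on synthetic data, runs a Kolmogorov-Smirnov comparison of simulated against observed samples and then a Parafac decomposition of a small users × keywords × time tensor; scipy and the tensorly library are used as stand-ins for the paper's tooling.

```python
# KS goodness-of-fit check plus a 3-way Parafac decomposition (synthetic data).
import numpy as np
from scipy.stats import ks_2samp
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(1)
observed_gaps = rng.exponential(2.0, 500)    # real inter-message times (stand-in)
simulated_gaps = rng.exponential(2.1, 500)   # simulator output (stand-in)

stat, p = ks_2samp(observed_gaps, simulated_gaps)
print(f"KS statistic {stat:.3f}, p-value {p:.3f}")   # large p => plausible fit

# users x keywords x time tensor (synthetic counts), rank-5 Parafac model
tensor = tl.tensor(rng.poisson(1.0, (30, 40, 24)).astype(float))
weights, factors = parafac(tensor, rank=5)
print([f.shape for f in factors])            # [(30, 5), (40, 5), (24, 5)]
```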
Abstract:
This work investigates the accuracy and efficiency tradeoffs between centralized and collective (distributed) algorithms for (i) sampling, and (ii) n-way data analysis techniques in multidimensional stream data, such as Internet chatroom communications. Its contributions are threefold. First, we use the Kolmogorov-Smirnov goodness-of-fit test to show that statistical differences between real data obtained by collective sampling in the time dimension from multiple servers and data obtained from a single server are insignificant. Second, we show using the real data that collective analysis of 3-way data arrays (users x keywords x time), known as high order tensors, is more efficient than centralized algorithms with respect to both space and computational cost. Furthermore, we show that this gain is obtained without loss of accuracy. Third, we examine the sensitivity of collective construction and analysis of high order data tensors to the choice of server selection and sampling window size. We construct 4-way tensors (users x keywords x time x servers) and analyze them to show the impact of server and window size selections on the results.
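The 3-way and 4-way arrays above are simple count accumulations over discretized events. A minimal construction sketch, with an assumed event layout, is:

```python
# Building chat tensors from (user, keyword, window[, server]) index tuples.
import numpy as np

def build_tensor(events, n_users, n_keywords, n_windows, n_servers=None):
    """Accumulate event counts into a 3-way or 4-way tensor."""
    if n_servers is None:
        t = np.zeros((n_users, n_keywords, n_windows))
        for u, k, w in events:
            t[u, k, w] += 1
    else:
        t = np.zeros((n_users, n_keywords, n_windows, n_servers))
        for u, k, w, s in events:
            t[u, k, w, s] += 1
    return t

# e.g. user 0 typed keyword 3 in time window 1 on server 2
print(build_tensor([(0, 3, 1, 2)], 5, 10, 4, 3).sum())
```

The sampling window size studied in the paper corresponds to how coarsely the time axis is discretized before these counts are accumulated.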
Abstract:
Background: Predicting protein subnuclear localization is a challenging problem. Some previous works based on non-sequence information including Gene Ontology annotations and kernel fusion have respective limitations. The aim of this work is twofold: one is to propose a novel individual feature extraction method; another is to develop an ensemble method to improve prediction performance using comprehensive information represented in the form of a high dimensional feature vector obtained by 11 feature extraction methods. Methodology/Principal Findings: A novel two-stage multiclass support vector machine is proposed to predict protein subnuclear localizations. It only considers those feature extraction methods based on amino acid classifications and physicochemical properties. In order to speed up our system, an automatic search method for the kernel parameter is used. The prediction performance of our method is evaluated on four datasets: Lei dataset, multi-localization dataset, SNL9 dataset and a new independent dataset. The overall accuracy of prediction for 6 localizations on the Lei dataset is 75.2% and that for 9 localizations on the SNL9 dataset is 72.1% in the leave-one-out cross validation; 71.7% for the multi-localization dataset and 69.8% for the new independent dataset, respectively. Comparisons with existing methods show that our method performs better for both single-localization and multi-localization proteins and achieves more balanced sensitivities and specificities on large-size and small-size subcellular localizations. The overall accuracy improvements are 4.0% and 4.7% for single-localization proteins and 6.5% for multi-localization proteins. The reliability and stability of our classification model are further confirmed by permutation analysis. Conclusions: It can be concluded that our method is effective and valuable for predicting protein subnuclear localizations. A web server has been designed to implement the proposed method. It is freely available at http://bioinformatics.awowshop.com/snlpred_page.php.
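The paper's exact two-stage architecture and 11 feature sets are not described in the abstract, so the following is only a generic sketch of a multiclass SVM with an automatic kernel-parameter search, with random vectors standing in for extracted protein features.

```python
# Generic multiclass RBF-SVM with automatic kernel-parameter search (sketch).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))         # stand-in for concatenated feature vectors
y = np.repeat(np.arange(6), 20)        # 6 subnuclear localizations, 20 each

search = GridSearchCV(
    SVC(kernel="rbf"),
    {"C": [1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1]},  # automatic parameter search
    cv=5,  # sklearn's LeaveOneOut() would mirror the paper's validation scheme
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```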
Abstract:
Traffic congestion has a significant impact on the economy and environment. Encouraging the use of multimodal transport (public transport, bicycle, park’n’ride, etc.) has been identified by traffic operators as a good strategy to tackle congestion issues and their detrimental environmental impacts. A multi-modal and multi-objective trip planner provides users with various multi-modal options optimised on the objectives they prefer (cheapest, fastest, safest, etc.) and has the potential to reduce congestion on both a temporal and spatial scale. The computation of multi-modal and multi-objective trips is a complicated mathematical problem, as it must integrate and utilize a diverse range of large data sets, including both road network information and public transport schedules, as well as optimising for a number of competing objectives, where fully optimising for one objective, such as travel time, can adversely affect other objectives, such as cost. The relationship between these objectives can also be quite subjective, as their priorities will vary from user to user. This paper will first outline the various data requirements and formats that are needed for the multi-modal multi-objective trip planner to operate, including static information about the physical infrastructure within Brisbane as well as real-time and historical data to predict traffic flow on the road network and the status of public transport. It will then present information on the graph data structures representing the road and public transport networks within Brisbane that are used in the trip planner to calculate optimal routes. This will allow for an investigation into the various shortest path algorithms that have been researched over the last few decades, and provide a foundation for the construction of the Multi-modal Multi-objective Trip Planner through the development of innovative new algorithms that can operate on the large, diverse data sets and competing objectives.
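What makes the multi-objective case harder than plain shortest path is that no single "best" label exists per node; a route search must keep every Pareto-optimal (time, cost) label. A toy label-setting sketch on an invented two-mode graph:

```python
# Pareto-optimal (time, cost) route search; graph and weights are invented.
import heapq

def pareto_routes(graph, source, target):
    """graph[u] = [(v, time, cost), ...]; returns Pareto-optimal (time, cost)."""
    frontier = [(0, 0, source)]              # (time, cost, node)
    labels = {}                              # node -> non-dominated (t, c) labels
    results = []
    while frontier:
        t, c, u = heapq.heappop(frontier)
        if any(t2 <= t and c2 <= c for t2, c2 in labels.get(u, [])):
            continue                         # dominated: a better label exists
        labels.setdefault(u, []).append((t, c))
        if u == target:
            results.append((t, c))
            continue
        for v, dt, dc in graph.get(u, []):
            heapq.heappush(frontier, (t + dt, c + dc, v))
    return results

# walk to a bus stop and ride, versus driving directly
graph = {"home": [("stop", 5, 0), ("work", 20, 8)],
         "stop": [("work", 25, 3)]}
print(pareto_routes(graph, "home", "work"))  # [(20, 8), (30, 3)]
```

In the example, the direct drive (20 minutes, cost 8) and the walk-plus-bus option (30 minutes, cost 3) are both returned, since neither dominates the other.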
Abstract:
For the timber industry, the ability to simulate the drying of wood is invaluable for manufacturing high quality wood products. Mathematically, however, modelling the drying of a wet porous material, such as wood, is a difficult task due to its heterogeneous and anisotropic nature, and the complex geometry of the underlying pore structure. The well-developed macroscopic modelling approach involves writing down classical conservation equations at a length scale where physical quantities (e.g., porosity) can be interpreted as averaged values over a small volume (typically containing hundreds or thousands of pores). This averaging procedure produces balance equations that resemble those of a continuum, with the exception that effective coefficients appear in their definitions. Exponential integrators are numerical schemes for initial value problems involving a system of ordinary differential equations. These methods differ from popular Newton-Krylov implicit methods (i.e., those based on the backward differentiation formulae (BDF)) in that they do not require the solution of a system of nonlinear equations at each time step, but rather require computation of matrix-vector products involving the exponential of the Jacobian matrix. Although originally appearing in the 1960s, exponential integrators have recently experienced a resurgence in interest due to a greater undertaking of research in Krylov subspace methods for matrix function approximation. One of the simplest examples of an exponential integrator is the exponential Euler method (EEM), which requires, at each time step, approximation of φ(A)b, where φ(z) = (e^z − 1)/z, A ∈ ℝ^(n×n) and b ∈ ℝ^n. For drying in porous media, the most comprehensive macroscopic formulation is TransPore [Perre and Turner, Chem. Eng. J., 86: 117-131, 2002], which features three coupled, nonlinear partial differential equations. The focus of the first part of this thesis is the use of the exponential Euler method (EEM) for performing the time integration of the macroscopic set of equations featured in TransPore. In particular, a new variable-stepsize algorithm for EEM is presented within a Krylov subspace framework, which allows control of the error during the integration process. The performance of the new algorithm highlights the great potential of exponential integrators not only for drying applications but across all disciplines of transport phenomena. For example, when applied to well-known benchmark problems involving single-phase liquid flow in heterogeneous soils, the proposed algorithm requires half the number of function evaluations required for an equivalent (sophisticated) Newton-Krylov BDF implementation. Furthermore, for all drying configurations tested, the new algorithm always produces, in less computational time, a solution of higher accuracy than the existing backward Euler module featured in TransPore. Some new results relating to Krylov subspace approximation of φ(A)b are also developed in this thesis. Most notably, an alternative derivation of the approximation error estimate of Hochbruck, Lubich and Selhofer [SIAM J. Sci. Comput., 19(5): 1552-1574, 1998] is provided, which reveals why it performs well in the error control procedure. Two of the main drawbacks of the macroscopic approach outlined above are that the effective coefficients must be supplied to the model, and that it fails for some drying configurations where typical dual-scale mechanisms occur.
In the second part of this thesis, a new dual-scale approach for simulating wood drying is proposed that couples the porous medium (macroscale) with the underlying pore structure (microscale). The proposed model is applied to the convective drying of softwood at low temperatures and is valid in the so-called hygroscopic range, where hygroscopically held liquid water is present in the solid phase and water exists only as vapour in the pores. Coupling between scales is achieved by imposing the macroscopic gradient on the microscopic field using suitably defined periodic boundary conditions, which allows the macroscopic flux to be defined as an average of the microscopic flux over the unit cell. This formulation provides a first step for moving from the macroscopic formulation featured in TransPore to a comprehensive dual-scale formulation capable of addressing any drying configuration. Simulation results reported for a sample of spruce highlight the potential and flexibility of the new dual-scale approach. In particular, for a given unit cell configuration it is not necessary to supply the effective coefficients prior to each simulation.
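A dense-linear-algebra sketch of the exponential Euler method follows: each step applies φ(hJ) to f(u). Forming φ(A) directly with expm is only viable for small systems; the thesis's contribution is precisely the Krylov approximation of φ(A)b (with error control) needed when A is large and sparse. The test problem here is an invented stiff linear system, for which EEM is exact.

```python
# Exponential Euler method (EEM) with direct evaluation of phi(A).
import numpy as np
from scipy.linalg import expm, solve

def phi(A):
    """phi(A) = A^{-1} (e^A - I); direct evaluation, small nonsingular A only."""
    return solve(A, expm(A) - np.eye(A.shape[0]))

def exponential_euler(f, jac, u0, h, steps):
    """EEM: u_{n+1} = u_n + h * phi(h J(u_n)) f(u_n)."""
    u = np.array(u0, dtype=float)
    for _ in range(steps):
        u = u + h * phi(h * jac(u)) @ f(u)
    return u

# Invented stiff linear test problem u' = A u, for which EEM is exact.
A = np.array([[-100.0, 1.0], [0.0, -2.0]])
u = exponential_euler(lambda u: A @ u, lambda u: A, [1.0, 1.0], h=0.1, steps=10)
print(u)                                  # matches the exact solution below
print(expm(A) @ np.array([1.0, 1.0]))     # exact solution at t = 1
```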
Abstract:
This paper presents the details of numerical studies on the shear behaviour and strength of lipped channel beams (LCBs) with stiffened web openings. Over the last couple of decades, cold-formed steel beams have been used extensively in residential, industrial and commercial buildings as primary load bearing structural components. Their shear strengths are considerably reduced when web openings are included for the purpose of locating building services. Our research has shown that the shear strengths of LCBs were reduced by up to 70% due to the inclusion of web openings. Hence there is a need to improve the shear strengths of LCBs with web openings. A cost-effective way to mitigate the detrimental effects of a large web opening is to attach appropriate stiffeners around the web openings in order to restore the original shear strength and stiffness of LCBs. Hence numerical studies were undertaken to investigate the shear strengths of LCBs with stiffened web openings. In this research, finite element models of LCBs with stiffened web openings in shear were developed to simulate the shear behaviour and strength of LCBs. Various stiffening methods using plate and LCB stud stiffeners attached to LCBs using screw-fastening were attempted. The developed models were then validated by comparing their results with experimental results and used in parametric studies. Both finite element analysis and experimental results showed that the stiffening arrangements recommended by past research for cold-formed steel channel beams are not adequate to restore the shear strengths of LCBs with web openings. Therefore new stiffener arrangements were proposed for LCBs with web openings based on experimental and finite element analysis results. This paper presents the details of the finite element models and analyses used in this research and the results, including the recommended stiffener arrangements.
Abstract:
Fire safety of light gauge steel frame (LSF) stud walls is important in the design of buildings. LSF walls are increasingly used in the building industry, and are usually made of cold-formed, thin-walled steel studs that are fire-protected by two layers of plasterboard on both sides. Many experimental and numerical studies have been undertaken to investigate the fire performance of load bearing LSF walls under standard fire conditions. However, the standard time-temperature curve does not represent the fire load present in typical residential and commercial buildings, which include a considerable amount of thermoplastic materials, and real building fires are unlikely to follow a standard time-temperature curve. Only limited research has been undertaken to investigate the fire performance of load bearing LSF walls under realistic design fire conditions. Therefore in this research, finite element thermal models of the traditional LSF wall panels without cavity insulation and the new LSF composite wall panels were developed to simulate their fire performance under recently developed realistic design fire curves. Suitable thermal properties were proposed for plasterboards and insulations based on laboratory tests and a literature review. The developed models were then validated by comparing their thermal performance results with available results from realistic design fire tests, and were later used in parametric studies. This paper presents the details of the developed finite element thermal models of load bearing LSF wall panels under realistic design fire time-temperature curves and the results. It shows that finite element thermal models can be used to predict the fire performance of load bearing LSF walls with varying configurations of insulations and plasterboards under realistic design fires. Failure times of load bearing LSF walls were also predicted based on the results from finite element thermal analyses.
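The essential loop of such a thermal model (a fire curve driving heat conduction through a wall cross-section until a failure criterion is met) can be caricatured in one dimension. The finite-difference stand-in below is not the paper's FE model: the diffusivity, the use of an ISO-834-type curve, and the boundary treatment are all simplifying assumptions.

```python
# 1D explicit finite-difference caricature of a fire-driven wall model.
import numpy as np

ALPHA = 3e-7              # effective thermal diffusivity, m^2/s (assumed)
L, N = 0.10, 51           # wall thickness (m) and number of grid points
dx = L / (N - 1)
dt = 0.4 * dx**2 / ALPHA  # inside the 0.5*dx^2/alpha explicit stability limit

def fire_temp(t_s):
    """ISO-834-type standard curve (degC); stand-in for a realistic fire curve."""
    return 20.0 + 345.0 * np.log10(8.0 * t_s / 60.0 + 1.0)

T = np.full(N, 20.0)      # initial wall temperature profile
t = 0.0
while T[-1] < 160.0:      # insulation criterion: 140 K rise on unexposed face
    T[0] = fire_temp(t)               # fire-side surface follows the fire curve
    T[1:-1] += ALPHA * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
    T[-1] = T[-2]                     # crude adiabatic unexposed face
    t += dt
print(f"predicted insulation failure after {t / 60:.0f} minutes")
```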
Abstract:
This paper presents direct strength method (DSM) equations for cold-formed steel beams subject to shear. Light gauge cold-formed steel sections have been developed as more economical alternatives to heavier hot-rolled sections in the commercial and residential markets. Cold-formed lipped channel beams (LCB), LiteSteel beams (LSB) and hollow flange beams (HFB) are commonly used as flexural members such as floor joists and bearers. However, their shear capacities are determined based on conservative design rules. For the shear design of cold-formed web panels, their elastic shear buckling strength must be determined accurately, including the potential post-buckling strength. Currently the elastic shear buckling coefficients of web panels are determined by conservatively assuming that the web panels are simply supported at the junction between the flange and web elements, and the post-buckling strength is ignored. Hence experimental and numerical studies were conducted to investigate the shear behaviour and strength of LSBs, LCBs and HFBs. New direct strength method (DSM) based design equations were proposed to determine the ultimate shear capacities of cold-formed steel beams. An improved equation for the higher elastic shear buckling coefficient of cold-formed steel beams was proposed based on finite element analysis results and included in the DSM design equations. A new post-buckling coefficient was also introduced in the DSM equation to include the available post-buckling strength of cold-formed steel beams.
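For context, the baseline DSM shear check (AISI-type, without tension field action) that such proposals modify takes the form sketched below; the paper's improved buckling coefficient and new post-buckling coefficient are not reproduced here.

```python
# Baseline DSM shear capacity of a web panel (AISI-type form). Units: N, mm, MPa.
import math

def shear_capacity(fy, E, h, t, kv=5.34, nu=0.3):
    """Nominal shear capacity Vn (N) of a flat web panel.

    kv = 5.34 assumes simply supported web edges, the conservative
    assumption the paper's improved buckling coefficient replaces.
    """
    aw = h * t                                   # web area
    vy = 0.6 * fy * aw                           # shear yield capacity
    tau_cr = kv * math.pi**2 * E / (12 * (1 - nu**2) * (h / t) ** 2)
    vcr = tau_cr * aw                            # elastic shear buckling capacity
    lam = math.sqrt(vy / vcr)                    # shear slenderness
    if lam <= 0.815:
        return vy
    if lam <= 1.227:
        return 0.815 * math.sqrt(vcr * vy)
    return vcr                                   # no post-buckling strength here

print(shear_capacity(fy=450, E=200000, h=150, t=1.9))  # illustrative slender web
```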
Abstract:
The Lockyer Valley is situated 80 km west of Brisbane and is bounded on the south and west by the Great Dividing Range. The valley is a major western sub-catchment of the larger Brisbane River drainage system and is drained by the Lockyer Creek. The Lockyer catchment forms approximately 20% of the total Brisbane River catchment and has an area of around 2900 km². The Lockyer Creek is an ephemeral drainage system, and the stream and its associated alluvium are the main source of irrigation water supply in the Lockyer Valley. The catchment comprises a number of well-defined, elongate tributaries in the south, and others in the north, which are more meandering in nature.