887 results for Sparse potentials
Abstract:
The numerical modelling of electromagnetic waves has long been a focus of research. Specific applications of electromagnetic wave scattering arise in microwave heating and radar communication systems. The equations that govern the fundamental behaviour of electromagnetic wave propagation in waveguides and cavities are Maxwell's equations. A number of methods have been employed in the literature to solve these equations. Of these, the classical Finite-Difference Time-Domain scheme, which uses a staggered time and space discretisation, is the best known and most widely used. However, this method is complicated to implement on an irregular computational domain using an unstructured mesh. In this work, a coupled method is introduced for the solution of Maxwell's equations. It is proposed that the free-space component of the solution be computed in the time domain, whilst the load is resolved using the frequency-dependent electric field Helmholtz equation. This methodology results in a time-frequency domain hybrid scheme. For the Helmholtz equation, boundary conditions are generated from the time-dependent free-space solutions. The boundary information is mapped into the frequency domain using the Discrete Fourier Transform. The solution for the electric field components is obtained by solving a sparse complex system of linear equations. The hybrid method has been tested for both waveguide and cavity configurations. Numerical tests performed on waveguides and cavities with inhomogeneous lossy materials highlight the accuracy and computational efficiency of the newly proposed hybrid computational electromagnetic strategy.
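As a hedged illustration of the frequency-domain half of such a scheme, the Python sketch below (hypothetical names, stubbed system matrix; not the paper's implementation) maps sampled time-domain boundary fields to a phasor boundary value via the Discrete Fourier Transform and solves a sparse complex linear system for the electric field phasors. The actual Helmholtz assembly is problem-specific and only stubbed here.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def boundary_phasor(e_boundary_t, dt, f0):
    """DFT of time-domain boundary samples, evaluated at the source frequency f0."""
    freqs = np.fft.fftfreq(len(e_boundary_t), d=dt)
    spectrum = np.fft.fft(e_boundary_t)
    return spectrum[np.argmin(np.abs(freqs - f0))]  # bin nearest f0

# Assemble the discretised Helmholtz operator A (sparse, complex) and the
# right-hand side b from the DFT'd boundary values -- stubbed placeholders here.
n = 1000
A = sp.identity(n, dtype=complex, format="csr")  # placeholder operator
b = np.zeros(n, dtype=complex)                   # placeholder right-hand side
E = spla.spsolve(A, b)                           # electric field phasors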
Abstract:
Camera calibration information is required in order for multiple-camera networks to deliver more than the sum of many single-camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage. This is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view-covariant local feature extractors. The second involves finding methods to extract scene information from the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features. A scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency. Existing affine adaptation methods make use of the second moment matrix to estimate the local affine shape of local image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but is less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is presented in this thesis. It allows the extraction of large numbers of additional accurate correspondences, given only a few initial putative correspondences. It consists of the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views. The new method succeeds in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
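A hedged sketch of the Hessian-based shape idea described above (hypothetical function, not the thesis code): the 2x2 Hessian of the Gaussian-smoothed image at an interest point is computed from Gaussian derivative filters, and its eigenstructure gives an ellipse approximating the local blob shape.

import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_shape(image, y, x, sigma=2.0):
    """Estimate local affine shape at pixel (y, x) from the image Hessian."""
    Lxx = gaussian_filter(image, sigma, order=(0, 2))[y, x]  # d2/dx2
    Lyy = gaussian_filter(image, sigma, order=(2, 0))[y, x]  # d2/dy2
    Lxy = gaussian_filter(image, sigma, order=(1, 1))[y, x]  # d2/dxdy
    H = np.array([[Lxx, Lxy], [Lxy, Lyy]])
    evals, evecs = np.linalg.eigh(H)
    # Ellipse axes scale as 1 / sqrt(|eigenvalue|), up to an overall factor.
    return H, evals, evecs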
Abstract:
This study investigated a novel drug delivery system (DDS), consisting of polycaprolactone (PCL) or polycaprolactone-20% tricalcium phosphate (PCL-TCP) biodegradable scaffolds, fibrin Tisseel sealant and recombinant human bone morphogenetic protein-2 (rhBMP-2), for bone regeneration. PCL-fibrin and PCL-TCP-fibrin composites displayed loading efficiencies of 70% and 43%, respectively. Fluorescence and scanning electron microscopy revealed sparse clumps of rhBMP-2 particles, non-uniformly distributed on the rod surfaces of the PCL-fibrin composites. In contrast, individual rhBMP-2 particles were evident and uniformly distributed on the rod surfaces of the PCL-TCP-fibrin composites. PCL-fibrin composites loaded with 10 and 20 μg/ml rhBMP-2 demonstrated a triphasic release profile, as quantified by an enzyme-linked immunosorbent assay (ELISA), consisting of burst releases at 2 h and at days 7 and 16. A biphasic release profile was observed for PCL-TCP-fibrin composites loaded with 10 μg/ml rhBMP-2, consisting of burst releases at 2 h and day 14. PCL-TCP-fibrin composites loaded with 20 μg/ml rhBMP-2 showed a triphasic release profile, consisting of burst releases at 2 h and at days 10 and 21. We conclude that the addition of TCP delayed rhBMP-2 release. Sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE) and an alkaline phosphatase assay verified the stability and bioactivity of the eluted rhBMP-2 at all time points.
Abstract:
The impact of what has been broadly labelled the knowledge economy has been such that, even in the absence of precise measurement, it is the undoubted dynamo of today's global market and an essential part of any global city. The socio-economic importance of knowledge production in a knowledge economy is clear, and it is an emerging social phenomenon and research agenda in geographical studies. Knowledge production, and where, how and by whom it is produced, is an urban phenomenon that is poorly understood in an era of strong urbanisation. This paper focuses on knowledge community precincts as the catalytic magnet infrastructures impacting on knowledge production in cities. The paper discusses the increasing importance of knowledge-based urban development within the paradigm of the knowledge economy, and the role of knowledge community precincts as instruments to seed the foundation of knowledge production in cities. The paper explores the knowledge-based urban development potential, and particularly the knowledge community precinct development potential, of Sydney, Melbourne and Brisbane, and benchmarks this against that of Boston, Massachusetts.
Abstract:
Microblogging is an emergent adolescent and adult literacy practice that has been popularized through platforms such as Twitter, Plurk and Jaiku in the rise of Web 2.0 – “the social web”. Yet the potential of microblogging for literacy learning in educational contexts is currently underexplored in the research and literature. This article draws on new research with 150 adolescent and adult participants in school and university contexts, made possible through cross-disciplinary collaboration between specialist English and Information and Communication Technologies (ICT) educators. Strategies are provided for teachers to establish their own microblogging networks, with suggested activities to enhance the literacy learning of adolescents in educational contexts.
Abstract:
Objective: The aim of this literature review is to identify the role of probiotics in the management of enteral tube feeding (ETF) diarrhoea in critically ill patients.
Background: Diarrhoea is a common gastrointestinal problem in ETF patients. The reported incidence of diarrhoea in tube-fed patients varies from 2% to 68%. Despite extensive investigation, the pathogenesis of ETF diarrhoea remains unclear, and evidence to support probiotics in the management of ETF diarrhoea in critically ill patients remains sparse.
Method: Literature on ETF diarrhoea and probiotics in critically ill adult patients published from 1980 to 2010 was reviewed. The Cochrane Library, PubMed, Science Direct, Medline and the Cumulative Index of Nursing and Allied Health Literature (CINAHL) electronic databases were searched using specific inclusion/exclusion criteria. Key search terms were: enteral nutrition, diarrhoea, critical illness, probiotics, probiotic species and randomised controlled trial (RCT).
Results: Four RCT papers were identified: two reporting full studies, one reporting a pilot RCT and one conference abstract reporting a pilot RCT. A trend towards a reduction in diarrhoea incidence was observed in the probiotic groups. However, mortality associated with probiotic use in some severely and critically ill patients must caution the clinician against its use.
Conclusion: Evidence to support probiotic use in the management of ETF diarrhoea in critically ill patients remains unclear. This paper argues that probiotics should not be administered to critically ill patients until further research has examined the causal relationship between probiotics and mortality, irrespective of the patient's disease state or the projected prophylactic benefit of probiotic administration.
Abstract:
What is a record producer? There is a degree of mystery and uncertainty about just what goes on behind the studio door. Some producers are seen as Svengali-like figures manipulating artists into mass consumer product. Producers are sometimes seen as mere technicians whose job is simply to set up a few microphones and press the record button. Close examination of the recording process will show how far this is from a complete picture. Artists are special—they come with an inspiration, and a talent, but also with a variety of complications, and in many ways a recording studio can seem the least likely place for creative expression and for an affective performance to happen. The task of the record producer is to engage with these artists and their songs and turn these potentials into form through the technology of the recording studio. The purpose of the exercise is to disseminate this fixed form to an imagined audience—generally in the hope that this audience will prove to be real. Finding an audience is the role of the record company. A record producer must also engage with the commercial expectations of the interests that underwrite a recording. This dissertation considers three fields of interest in the recording process: the performer and the song; the technology of the recording context; and the commercial ambitions of the record company—and positions the record producer as a nexus at the interface of all three. The author reports his structured recollection of five recordings, with three different artists, that all achieved substantial commercial success. The processes are considered from the author’s perspective as the record producer, and from inception of the project to completion of the recorded work. What were the processes of engagement? Do the actions reported conform to the template of nexus? This dissertation proposes that in all recordings the function of producer/nexus is present and necessary—it exists in the interaction of the artistry and the technology. The art of record production is to engage with these artists and the songs they bring and turn these potentials into form.
Abstract:
The theory of nonlinear dynamic systems provides new methods for handling complex systems. Chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing measured signals. In recent years, researchers have been applying concepts from this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory. In modern industrialized countries, several hundred thousand people die every year from sudden cardiac death. The electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials; it contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool for observing the heart's ability to respond to the normal regulatory impulses that affect its rhythm. A computer-based intelligent system for the analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and four classes of arrhythmia. This thesis presents general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an Analysis of Variance (ANOVA) test. The results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02 in the ANOVA test. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects consisting of five different kinds of cardiac disease conditions. The classifier achieved a sensitivity of 90% and a specificity of 89%. This system is ready to run on larger data sets. In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by the spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of patients is disturbed. Automatic early detection of seizure onsets would help patients and observers take appropriate precautions. Various methods have been proposed to predict the onset of seizures based on EEG recordings. The use of nonlinear features motivated by higher order spectra (HOS) has been reported to be a promising approach for differentiating between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a Support Vector Machine (SVM) classifier. Results show that the classifiers achieved 93.11% and 92.67% classification accuracy, respectively, with selected HOS-based features. About 2 hours of EEG recordings from 10 patients were used in this study.
This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns, which are useful for visual interpretation by those without a deep understanding of spectral analysis, such as medical practitioners. The thesis includes original contributions in extracting features from HRV and EEG signals using HOS and entropy measures, in analyzing the statistical properties of such features on real data, and in automated classification using these features with GMM and SVM classifiers.
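A hedged sketch of an HOS-plus-classifier pipeline of the kind described above (illustrative features only, not the thesis's exact feature set): a crude direct bispectrum estimate of one signal segment, a few summary features, and a scikit-learn SVM.

import numpy as np
from sklearn.svm import SVC

def bispectrum(x):
    """Naive direct bispectrum estimate of one windowed segment."""
    X = np.fft.fft(x * np.hanning(len(x)))
    n = len(X) // 2
    B = np.zeros((n, n), dtype=complex)
    for f1 in range(n):
        for f2 in range(n - f1):  # frequency pairs with f1 + f2 in range
            B[f1, f2] = X[f1] * X[f2] * np.conj(X[f1 + f2])
    return B

def hos_features(x):
    """A few illustrative summary features of the bispectrum magnitude."""
    B = np.abs(bispectrum(x))
    return [B.mean(), (B ** 2).sum(), -(B * np.log(B + 1e-12)).sum()]

# Given per-segment features and class labels:
#   clf = SVC(kernel="rbf").fit(train_features, train_labels)
#   predictions = clf.predict(test_features)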
Abstract:
Most research on numerical development in children is behavioural, focusing on accuracy and response time in different problem formats. However, Temple and Posner (1998) used ERPs and the numerical distance task with 5-year-olds to show that the development of numerical representations is difficult to disentangle from the development of the executive components of response organization and execution. Here we use the numerical Stroop paradigm (NSP) and ERPs to study possible executive interference in numerical processing tasks in 6–8-year-old children. In the NSP, the numerical magnitude of the digits is task-relevant and the physical size of the digits is task-irrelevant. We show that younger children are highly susceptible to interference from irrelevant physical information such as digit size, but that access to the numerical representation is almost as fast in young children as in adults. We argue that the developmental trajectories for executive function and numerical processing may act together to determine numerical development in young children.
Abstract:
Urban development in the first decade of the 21st century has faced many challenges, ranging from rapid to shrinking urbanisation, from the emerging knowledge economy to the global division of labour, and from globalisation to climate change. Along with these challenges, new concepts, such as essentialism, environmentalism and dematerialism, have emerged and begun to influence the way urban development plans are prepared and visions for the development of cities are made. Beyond this, scholars, practitioners and decision-makers have also started to discuss the need for a new urban planning and development approach in order to achieve development that is sustainable and knowledge-based. A limited number of successful examples of alternative planning and development approaches have showcased the potential of moving towards a new plan-making mindset in the era of the knowledge economy. This paper presents a new urban planning and development approach that is gaining ground in application in many parts of the globe, namely knowledge-based urban development. After providing the theoretical foundation and conceptual framework of knowledge-based urban development, the paper discusses whether knowledge-based development of cities is a myth or a reality.
Abstract:
This dissertation is based on theoretical study and experiments which extend geometric control theory to practical applications within the field of ocean engineering. We present a method for path planning and control design for underwater vehicles using the architecture of differential geometry. In addition to the theoretical design of the trajectory and control strategy, we demonstrate the effectiveness of the method via implementation on a test-bed autonomous underwater vehicle. Bridging the gap between theory and application is the ultimate goal of control theory. Major developments have occurred recently in the field of geometric control which narrow this gap and promote research linking theory and application. In particular, Riemannian and affine differential geometry have proven to be a very effective approach to the modeling of mechanical systems such as underwater vehicles. In this framework, the application of a kinematic reduction allows us to calculate control strategies for fully and under-actuated vehicles via kinematically decoupled motion planning. However, this method has not yet been extended to account for external forces such as dissipative viscous drag and buoyancy-induced potentials acting on a submerged vehicle. To fully bridge the gap between theory and application, this dissertation addresses the extension of this geometric control design method to include such forces. We incorporate the hydrodynamic drag experienced by the vehicle by modifying the Levi-Civita affine connection, and we demonstrate a method for the compensation of potential forces experienced during a prescribed motion. We present the design method for multiple different missions and include experimental results which validate both the extension of the theory and the ability to implement control strategies designed through the use of geometric techniques. Through the extension presented in this dissertation, the underwater vehicle application successfully demonstrates the applicability of geometric methods to design implementable motion planning solutions for complex mechanical systems having equal or fewer input forces than available degrees of freedom. Thus, we provide another tool with which to further increase the autonomy of underwater vehicles.
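In the affine-connection framework described above, the forced system takes, schematically (assumed notation, not the dissertation's exact equations), the form

\nabla_{\dot\gamma(t)}\,\dot\gamma(t)
  \;=\; \sum_{a=1}^{m} u^{a}(t)\, Y_{a}(\gamma(t))
  \;-\; \operatorname{grad} V(\gamma(t))
  \;+\; D\bigl(\gamma(t), \dot\gamma(t)\bigr),

where \nabla is the Levi-Civita connection (modified here to absorb hydrodynamic drag), the Y_a are the input vector fields, V collects potential forces such as buoyancy, and D models dissipative viscous drag.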
Abstract:
Online dating networks, a type of social network, are gaining popularity. With many people joining and being available in the network, users are overwhelmed with choices when selecting their ideal partners. This problem can be addressed with recommendation methods. However, traditional recommendation methods are ineffective and inefficient for online dating networks, where the dataset is sparse and/or large and two-way matching is required. We propose a methodology that uses clustering and SimRank to recommend matching candidates to users in an online dating network. Data from a live online dating network is used in the evaluation. The proposed method achieves double the baseline success rate of the network.
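A hedged sketch of the SimRank component (the standard naive iteration, not the paper's optimised implementation): the similarity of two users is the decayed average similarity of their in-neighbours.

import numpy as np

def simrank(in_neighbors, n, C=0.8, iters=10):
    """in_neighbors[v] lists the nodes with an edge into node v."""
    S = np.identity(n)
    for _ in range(iters):
        S_new = np.identity(n)
        for a in range(n):
            for b in range(n):
                if a == b or not in_neighbors[a] or not in_neighbors[b]:
                    continue
                total = sum(S[i, j] for i in in_neighbors[a]
                                    for j in in_neighbors[b])
                S_new[a, b] = C * total / (len(in_neighbors[a]) * len(in_neighbors[b]))
        S = S_new
    return S

# Example: simrank({0: [2], 1: [2], 2: [0, 1]}, n=3) gives nodes 0 and 1 a
# nonzero similarity because they share in-neighbour 2.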
Abstract:
Prognostics and asset life prediction is an area of significant research potential in engineering asset health management. We previously developed the Explicit Hazard Model (EHM) to effectively and explicitly predict asset life using three types of information: population characteristics, condition indicators, and operating environment indicators. We have formerly studied the application of both the semi-parametric EHM and the non-parametric EHM to survival probability estimation in the reliability field. The survival time in these models depends not only upon the age of the monitored asset, but also upon the condition and operating environment information obtained. This paper is a further study of the semi-parametric and non-parametric EHMs applied to the hazard and residual life prediction of a set of resistance elements. The resistance elements were used as corrosion sensors for measuring the atmospheric corrosion rate in a laboratory experiment. In this paper, the hazard of the resistance element estimated using the semi-parametric EHM and the non-parametric EHM is compared to the traditional Weibull model and the Aalen Linear Regression Model (ALRM), respectively. Because the semi-parametric EHM assumes a Weibull distribution for the baseline hazard, the hazard estimated using this model is compared to the traditional Weibull model. The hazard estimated using the non-parametric EHM is compared to ALRM, which is a well-known non-parametric covariate-based hazard model. Finally, the residual life of the resistance element predicted using both EHMs is compared to the actual life data.
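For orientation only (a generic covariate-based form, not the EHM's exact specification, which is given in the authors' earlier work), a hazard with a Weibull baseline and covariate vector z(t) can be written as

h\bigl(t \mid \mathbf{z}(t)\bigr)
  \;=\; \frac{\beta}{\eta}\left(\frac{t}{\eta}\right)^{\beta-1}
        \exp\!\bigl(\boldsymbol{\gamma}^{\top}\mathbf{z}(t)\bigr),

with shape \beta and scale \eta; this is the sense in which a Weibull baseline invites comparison with the traditional Weibull model, while a non-parametric baseline invites comparison with additive models such as ALRM.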
Abstract:
We assess the performance of an exponential integrator for advancing stiff, semidiscrete formulations of the unsaturated Richards equation in time. The scheme is of second order and explicit in nature, but requires the action of the matrix function φ(A), where φ(z) = [exp(z) - 1]/z, on a suitably defined vector v at each time step. When the matrix A is large and sparse, φ(A)v can be approximated by Krylov subspace methods that require only matrix-vector products with A. We prove that despite the use of this approximation the scheme remains second order. Furthermore, we provide a practical variable-stepsize implementation of the integrator by deriving an estimate of the local error that requires only a single additional function evaluation. Numerical experiments performed on two-dimensional test problems demonstrate that this implementation outperforms second-order, variable-stepsize implementations of the backward differentiation formulae.
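A hedged sketch of the Krylov approximation named above (textbook Arnoldi construction, hypothetical function names; not the paper's implementation): φ(A)v is projected onto the Krylov subspace K_m(A, v), and φ of the small Hessenberg matrix is evaluated through the standard augmented-matrix identity.

import numpy as np
from scipy.linalg import expm

def phi_Av(A_mv, v, m=20):
    """Approximate phi(A) v, phi(z) = (exp(z) - 1)/z, using only
    matrix-vector products A_mv(x) = A @ x."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):                      # Arnoldi process
        w = A_mv(V[:, j])
        for i in range(j + 1):
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # expm([[Hm, e1], [0, 0]]) carries phi(Hm) e1 in its last column.
    aug = np.zeros((m + 1, m + 1))
    aug[:m, :m] = Hm
    aug[0, m] = 1.0
    phi_e1 = expm(aug)[:m, m]
    return beta * (V[:, :m] @ phi_e1)

# Usage, e.g. for a sparse matrix A: phi_Av(lambda x: A @ x, v)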