258 results for Matrix-Splitting Scheme
Abstract:
With the overwhelming increase in the amount of text on the web, it is almost impossible for people to keep abreast of up-to-date information. Text mining is a process by which interesting information is derived from text through the discovery of patterns and trends. Text mining algorithms are used to guarantee the quality of extracted knowledge. However, the patterns extracted using text or data mining algorithms are often noisy and inconsistent. Thus, different challenges arise, such as how to understand these patterns, whether the model that has been used is suitable, and whether all the patterns that have been extracted are relevant. Furthermore, the research raises the question of how to assign a correct weight to the extracted knowledge. To address these issues, this paper presents a text post-processing method, which uses a pattern co-occurrence matrix to find the relations between extracted patterns in order to reduce noisy patterns. The main objective of this paper is not only to reduce the number of closed sequential patterns, but also to improve the performance of pattern mining. The experimental results on the Reuters Corpus Volume 1 data collection and TREC filtering topics show that the proposed method is promising.
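As a point of illustration, the following is a minimal sketch of the co-occurrence idea described above; it is a toy construction under our own assumptions, not the paper's algorithm, and the pruning threshold `min_support` and the example patterns are invented for the example.

```python
# Toy sketch of pattern co-occurrence filtering (not the paper's algorithm):
# count how often extracted patterns co-occur across documents and drop
# patterns whose overall co-occurrence is weak, treating them as noise.
from itertools import combinations
import numpy as np

def cooccurrence_matrix(pattern_sets):
    patterns = sorted(set().union(*pattern_sets))
    index = {p: i for i, p in enumerate(patterns)}
    m = np.zeros((len(patterns), len(patterns)), dtype=int)
    for doc_patterns in pattern_sets:
        for p, q in combinations(sorted(doc_patterns), 2):
            m[index[p], index[q]] += 1
            m[index[q], index[p]] += 1
    return patterns, m

def prune_noisy(patterns, m, min_support=2):
    keep = m.sum(axis=1) >= min_support        # weakly co-occurring patterns = noise
    return [p for p, kept in zip(patterns, keep) if kept]

# hypothetical patterns extracted from three documents
docs = [{"oil price", "market"}, {"oil price", "opec", "market"}, {"random term"}]
patterns, m = cooccurrence_matrix(docs)
print(prune_noisy(patterns, m))                # drops "random term"
```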
Abstract:
An optical system which performs the multiplication of binary numbers is described and proof-of-principle experiments are performed. The simultaneous generation of all partial products, optical regrouping of bit products, and optical carry look-ahead addition are novel features of the proposed scheme which takes advantage of the parallel operations capability of optical computers. The proposed processor uses liquid crystal light valves (LCLVs). By space-sharing the LCLVs one such system could function as an array of multipliers. Together with the optical carry look-ahead adders described, this would constitute an optical matrix-vector multiplier.
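To make the arithmetic concrete, here is a purely digital sketch of the multiplication scheme that the optics parallelize: all bit products are generated at once and regrouped by weight, after which the carries are resolved. The optical processor performs the carry resolution with carry look-ahead adders operating in parallel; the sequential loop below is for illustration only and is not the optical implementation.

```python
# Digital illustration of partial-product multiplication (the optical system
# performs these steps in parallel; this loop-based version is illustrative only).
def multiply_binary(a_bits, b_bits):
    """a_bits, b_bits: lists of 0/1, least significant bit first."""
    columns = [0] * (len(a_bits) + len(b_bits))
    for i, a in enumerate(a_bits):             # generate all bit products
        for j, b in enumerate(b_bits):
            columns[i + j] += a & b            # regroup bit products by weight
    product, carry = [], 0
    for c in columns:                          # resolve carries column by column
        s = c + carry
        product.append(s & 1)
        carry = s >> 1
    return product

# 6 (110) x 5 (101) = 30 (11110); bits are given least significant bit first
print(multiply_binary([0, 1, 1], [1, 0, 1]))   # -> [0, 1, 1, 1, 1, 0]
```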
Abstract:
More than 750 individual particles have now been selected from collection flags housed in the JSC Cosmic Dust Curatorial Facility, and most have been documented in the Cosmic Dust Catalogs [1]. As increasing numbers of particles are placed in Cosmic Dust Collections, and a greater diversity of particles is introduced to the stratosphere through natural and man-made processes (e.g. decaying orbits of space debris [2]), there is an even greater need for a classification scheme that encompasses all stratospheric particles rather than only extraterrestrial particles. The fundamental requirements for a suitable classification scheme have been outlined in earlier communications [3,4]. A quantitative survey of particles on collection flag W7017 indicates that there is some bias in the number of samples selected within a given category for the Cosmic Dust Catalog [5]. However, the sample diversity within this selection is still appropriate for the development of a reliable classification scheme. In this paper, we extend the earlier works on stratospheric particle classification to include particles collected during the period May 1981 to November 1983.
Abstract:
Triangle-shaped nanohole, nanodot, and lattice antidot structures in hexagonal boron nitride (h-BN) monolayer sheets are characterized with density functional theory calculations utilizing the local spin density approximation. We find that such structures may exhibit very large magnetic moments and associated spin splitting. N-terminated nanodots and antidots show strong spin anisotropy around the Fermi level, that is, half-metallicity. While B-terminated nanodots are shown to lack magnetism due to edge reconstruction, B-terminated nanoholes can retain magnetic character due to the enhanced structural stability of the surrounding two-dimensional matrix. In spite of significant lattice contraction due to the presence of multiple holes, antidot superlattices are predicted to be stable, exhibiting amplified magnetism as well as greatly enhanced half-metallicity. Collectively, the results indicate new opportunities for designing h-BN-based nanoscale devices with potential applications in the areas of spintronics, light emission, and photocatalysis.
Abstract:
In March 2008, the Australian Government announced its intention to introduce a national Emissions Trading Scheme (ETS), now expected to start in 2015. This impending development provides an ideal setting to investigate the impact an ETS in Australia will have on the market valuation of Australian Securities Exchange (ASX) firms. This is the first empirical study of the pricing effects of the ETS in Australia. Primarily, we hypothesize that firm value will be negatively related to a firm's carbon intensity profile. That is, there will be a greater impact on firm value for high carbon emitters in the period prior (2007) to the introduction of the ETS, whether for reasons relating to the existence of unbooked liabilities associated with future compliance and/or abatement costs, or for reasons relating to reduced future earnings. Using a sample of 58 Australian listed firms (constrained by the current availability of emissions data), which comprises larger, more profitable and less risky listed Australian firms, we first undertake an event study focusing on five distinct information events argued to affect the probability of the proposed ETS being enacted. Here, we find direct evidence that the capital market is indeed pricing the proposed ETS. Second, using a modified version of the Ohlson (1995) valuation model, we undertake a valuation analysis designed not only to complement the event study results, but more importantly to provide insights into the capital market's assessment of the magnitude of the economic impact of the proposed ETS as reflected in market capitalization. Here, our results show that the market assigns the most carbon-intensive sample firms a market value decrement, relative to other sample firms, of between 7% and 10% of market capitalization. Further, based on the carbon emission profiles of the sample firms, we infer a ‘future carbon permit price’ of between AUD$17 and AUD$26 per tonne of carbon dioxide emitted. This estimate is more precise than industry reports, which set a carbon price of between AUD$15 and AUD$74 per tonne.
Abstract:
The generation of a correlation matrix from a large set of long gene sequences is a common requirement in many bioinformatics problems, such as phylogenetic analysis. The generation is not only computationally intensive but also requires significant memory resources, as typically only a few gene sequences can be stored in primary memory at once. The standard practice in such computation is to use frequent input/output (I/O) operations, so minimizing the number of these operations yields much faster run-times. This paper develops an approach for the faster and scalable computation of large correlation matrices through full use of the available memory and a reduced number of I/O operations. The approach is scalable in the sense that the same algorithms can be executed on computing platforms with different amounts of memory and can be applied to problems with different correlation matrix sizes. The significant performance improvement of the approach over existing approaches is demonstrated through benchmark examples.
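The following is a minimal sketch, under our own assumptions, of the kind of blocked, memory-aware computation the abstract describes: the sequences are assumed to have already been encoded as numeric rows of an on-disk array, only two blocks are held in memory at a time, and each block is read once per pass rather than once per pairwise comparison. The file layout, block size and use of Pearson correlation are our choices, not necessarily the paper's.

```python
# Blocked correlation-matrix computation (a sketch, not the paper's algorithm):
# rows of an on-disk numeric matrix are processed block by block, so memory use
# is bounded and each block is read O(n/block) times instead of O(n) times.
import numpy as np

def blocked_correlation(path, n_rows, n_cols, block=512):
    data = np.memmap(path, dtype=np.float32, mode="r", shape=(n_rows, n_cols))
    corr = np.empty((n_rows, n_rows), dtype=np.float32)

    def load(start):                                 # one I/O read per block
        b = np.asarray(data[start:start + block], dtype=np.float64)
        b -= b.mean(axis=1, keepdims=True)
        b /= np.linalg.norm(b, axis=1, keepdims=True)
        return b

    for i in range(0, n_rows, block):
        bi = load(i)
        for j in range(i, n_rows, block):
            bj = bi if j == i else load(j)
            c = bi @ bj.T                            # Pearson correlations of rows
            corr[i:i + bi.shape[0], j:j + bj.shape[0]] = c
            corr[j:j + bj.shape[0], i:i + bi.shape[0]] = c.T
    return corr
```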
Abstract:
We report on an accurate numerical scheme for the evolution of an inviscid bubble in radial Hele-Shaw flow, where the nonlinear boundary effects of surface tension and kinetic undercooling are included on the bubble-fluid interface. As well as demonstrating the onset of the Saffman-Taylor instability for growing bubbles, the numerical method is used to show the effect of the boundary conditions on the separation (pinch-off) of a contracting bubble into multiple bubbles, and the existence of multiple possible asymptotic bubble shapes in the extinction limit. The numerical scheme also allows for the accurate computation of bubbles which pinch off very close to the theoretical extinction time, raising the possibility of computing solutions for the evolution of bubbles with non-generic extinction behaviour.
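For concreteness, a schematic nondimensional statement of the moving boundary problem just described, with our own choice of symbols and signs (σ for the surface tension parameter, c for the kinetic undercooling parameter, κ for the interface curvature and v_n for the normal velocity of the interface); the paper's exact scalings may differ:

```latex
\begin{align*}
  \nabla^{2} p &= 0 && \text{in the viscous fluid surrounding the bubble,}\\
  p &= \sigma\kappa + c\,v_{n} && \text{on the bubble--fluid interface,}\\
  v_{n} &= -\frac{\partial p}{\partial n} && \text{on the bubble--fluid interface.}
\end{align*}
```

Setting σ = c = 0 recovers the unregularised problem in which the interface pressure is simply constant.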
Abstract:
Controlled drug delivery is a key topic in modern pharmacotherapy, where controlled drug delivery devices are required to prolong the period of release, maintain a constant release rate, or release the drug with a predetermined release profile. In the pharmaceutical industry, the development process of a controlled drug delivery device may be facilitated enormously by the mathematical modelling of drug release mechanisms, directly decreasing the number of necessary experiments. Such mathematical modelling is difficult because several mechanisms are involved during the drug release process. The main drug release mechanisms of a controlled release device are based on the device’s physicochemical properties, and include diffusion, swelling and erosion. In this thesis, four controlled drug delivery models are investigated. These four models selectively involve the solvent penetration into the polymeric device, the swelling of the polymer, the polymer erosion and the drug diffusion out of the device, but all share two common key features. The first is that the solvent penetration into the polymer causes the transition of the polymer from a glassy state into a rubbery state. The interface between the two states of the polymer is modelled as a moving boundary, and the speed of this interface is governed by a kinetic law. The second feature is that drug diffusion only happens in the rubbery region of the polymer, with a nonlinear diffusion coefficient which depends on the concentration of solvent. These models are analysed using both formal asymptotics and numerical computation, where front-fixing methods and the method of lines with finite difference approximations are used to solve the models numerically. This numerical scheme is conservative, accurate and easily applied to moving boundary problems, and is thoroughly explained in Section 3.2. From the small-time asymptotic analysis in Sections 5.3.1, 6.3.1 and 7.2.1, these models exhibit the non-Fickian behaviour referred to as Case II diffusion, and an initial constant rate of drug release, which is appealing to the pharmaceutical industry because it indicates zero-order release. The numerical results of the models qualitatively confirm the experimental behaviour identified in the literature. The knowledge obtained from investigating these models can help to develop more complex multi-layered drug delivery devices in order to achieve sophisticated drug release profiles. A multi-layer matrix tablet, which consists of a number of polymer layers designed to provide sustained and constant drug release or bimodal drug release, is also discussed in this research. The moving boundary problem describing the solvent penetration into the polymer also arises in melting and freezing problems, which have been modelled as the classical one-phase Stefan problem. The classical one-phase Stefan problem has unrealistic singularities at the complete melting time. Hence we investigate the effect of including kinetic undercooling in the melting problem; the resulting problem is called the one-phase Stefan problem with kinetic undercooling. Interestingly, we discover that the unrealistic singularities present in the classical one-phase Stefan problem at the complete melting time are regularised, and the small-time asymptotic analysis in Section 3.3 shows that the small-time behaviour of the one-phase Stefan problem with kinetic undercooling differs from that of the classical problem.
In the case of melting very small particles, it is known that surface tension effects are important. The effect of including surface tension in the melting problem for nanoparticles (without kinetic undercooling) has been investigated in the past; however, the one-phase Stefan problem with surface tension exhibits finite-time blow-up. Therefore we investigate the effect of including both surface tension and kinetic undercooling in the melting problem for nanoparticles, and find that the solution continues to exist until complete melting. The investigation of including kinetic undercooling and surface tension in the melting problems reveals more insight into the regularisation of unphysical singularities in the classical one-phase Stefan problem. This investigation gives a better understanding of melting a particle, and contributes to the current body of knowledge related to melting and freezing due to heat conduction.
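For reference, a schematic nondimensional form of the radially symmetric one-phase melting problem with both regularisations discussed above; the notation and scalings are ours (u is the liquid temperature, s(t) the particle radius, β the Stefan number, σ the surface tension parameter and ε the kinetic undercooling parameter), and the thesis's precise formulation may differ:

```latex
\begin{align*}
  \frac{\partial u}{\partial t} &=
    \frac{1}{r^{2}}\frac{\partial}{\partial r}\!\left(r^{2}\frac{\partial u}{\partial r}\right),
    && s(t) < r < 1,\\
  u &= 1, && r = 1,\\
  u &= -\frac{\sigma}{s} - \epsilon\,\frac{\mathrm{d}s}{\mathrm{d}t}, && r = s(t),\\
  \beta\,\frac{\mathrm{d}s}{\mathrm{d}t} &= -\frac{\partial u}{\partial r}, && r = s(t),
\end{align*}
```

with s(0) = s₀ < 1 and complete melting corresponding to s → 0. Setting σ = ε = 0 recovers the classical one-phase Stefan problem, whose interface speed becomes unbounded at the complete melting time.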
Abstract:
The objective of this PhD research program is to investigate numerical methods for simulating variably-saturated flow and sea water intrusion in coastal aquifers in a high-performance computing environment. The work is divided into three overlapping tasks: to develop an accurate and stable finite volume discretisation and numerical solution strategy for the variably-saturated flow and salt transport equations; to implement the chosen approach in a high performance computing environment that may have multiple GPUs or CPU cores; and to verify and test the implementation. The geological description of aquifers is often complex, with porous materials possessing highly variable properties that are best described using unstructured meshes. The finite volume method is a popular method for the solution of the conservation laws that describe sea water intrusion, and is well-suited to unstructured meshes. In this work we apply a control volume-finite element (CV-FE) method to an extension of a recently proposed formulation (Kees and Miller, 2002) for variably saturated groundwater flow. The CV-FE method evaluates fluxes at points where material properties and gradients in pressure and concentration are consistently defined, making it both suitable for heterogeneous media and mass conservative. Using the method of lines, the CV-FE discretisation gives a set of differential algebraic equations (DAEs) amenable to solution using higher-order implicit solvers. Heterogeneous computer systems, which use a combination of computational hardware such as CPUs and GPUs, are attractive for scientific computing due to the potential advantages offered by GPUs for accelerating data-parallel operations. We present a C++ library that implements data-parallel methods on both CPUs and GPUs. The finite volume discretisation is expressed in terms of these data-parallel operations, which gives an efficient implementation of the nonlinear residual function. This makes the implicit solution of the DAE system possible on the GPU, because the inexact Newton-Krylov method used by the implicit time stepping scheme can approximate the action of a matrix on a vector using residual evaluations. We also propose preconditioning strategies that are amenable to GPU implementation, so that all computationally intensive aspects of the implicit time stepping scheme are implemented on the GPU. Results are presented that demonstrate the efficiency and accuracy of the proposed numerical methods and formulation. The formulation offers excellent conservation of mass, and higher-order temporal integration increases both the numerical efficiency and the accuracy of the solutions. Flux limiting produces accurate, oscillation-free solutions on coarse meshes, whereas much finer meshes are required to obtain solutions of equivalent accuracy using upstream weighting. The computational efficiency of the software is investigated using CPUs and GPUs on a high-performance workstation. The GPU version offers considerable speedup over the CPU version, with one GPU giving a speedup factor of 3 over the eight-core CPU implementation.
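To illustrate the matrix-free idea mentioned above, here is a minimal Python sketch (a toy example of the general technique, not the thesis's C++ library): the action of the Jacobian on a vector is approximated with one additional residual evaluation, so a Krylov solver such as GMRES can run without the assembled matrix.

```python
# Jacobian-free Newton-Krylov sketch: J.v is approximated by a finite difference
# of the residual, so the linear solver only ever sees residual evaluations.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jacobian_free_newton(residual, u0, tol=1e-8, max_newton=20):
    u = u0.astype(float).copy()
    for _ in range(max_newton):
        r = residual(u)
        if np.linalg.norm(r) < tol:
            break
        def jv(v, u=u, r=r):
            h = 1e-7 * (1.0 + np.linalg.norm(u)) / max(np.linalg.norm(v), 1e-30)
            return (residual(u + h * v) - r) / h         # finite-difference J.v
        J = LinearOperator((u.size, u.size), matvec=jv)
        du, _ = gmres(J, -r, maxiter=50)                 # inexact linear solve
        u = u + du
    return u

# toy usage on a small nonlinear system with solution (1, 2)
f = lambda u: np.array([u[0]**2 + u[1] - 3.0, u[0] + u[1]**2 - 5.0])
print(jacobian_free_newton(f, np.array([1.0, 1.0])))
```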
Abstract:
Welcome to the Quality assessment matrix. This matrix is designed for highly qualified discipline experts to evaluate their course, major or unit in a systematic manner. The primary purpose of the Quality assessment matrix is to provide a tool with which a group of academic staff at universities can collaboratively review the assessment within a course, major or unit annually. The annual review will result in you being ready for an external curriculum review at any point in time. This tool is designed for use in a workshop format with one, two or more academic staff, and will lead to an action plan for implementation.
Abstract:
Robust hashing is an emerging field that can be used to hash certain data types in applications unsuitable for traditional cryptographic hashing methods. Traditional hashing functions have been used extensively for data/message integrity, data/message authentication, efficient file identification and password verification. These applications are possible because the hashing process is compressive, allowing for efficient comparisons in the hash domain, and non-invertible, meaning hashes can be used without revealing the original data. These techniques were developed for deterministic (non-changing) inputs such as files and passwords. For such data types a 1-bit or one-character change can be significant; as a result, the hashing process is sensitive to any change in the input. Unfortunately, there are certain applications where input data are not perfectly deterministic and minor changes cannot be avoided. Digital images and biometric features are two types of data where such changes exist but do not alter the meaning or appearance of the input. For such data types cryptographic hash functions cannot be usefully applied. In light of this, robust hashing has been developed as an alternative to cryptographic hashing and is designed to be robust to minor changes in the input. Although similar in name, robust hashing is fundamentally different from cryptographic hashing. Current robust hashing techniques are not based on cryptographic methods, but instead on pattern recognition techniques. Modern robust hashing algorithms consist of feature extraction followed by a randomization stage that introduces non-invertibility and compression, followed by quantization and binary encoding to produce a binary hash output. In order to preserve the robustness of the extracted features, most randomization methods are linear, and this is detrimental to the security properties required of hash functions. Furthermore, the quantization and encoding stages used to binarize real-valued features require the learning of appropriate quantization thresholds. How these thresholds are learnt has an important effect on hashing accuracy, and the mere presence of such thresholds is a source of information leakage that can reduce hashing security. This dissertation outlines a systematic investigation of the quantization and encoding stages of robust hash functions. While existing literature has focused on the importance of the quantization scheme, this research is the first to emphasise the importance of quantizer training for both hashing accuracy and hashing security. The quantizer training process is presented in a statistical framework which allows a theoretical analysis of the effects of quantizer training on hashing performance. This is experimentally verified using a number of baseline robust image hashing algorithms over a large database of real-world images. This dissertation also proposes a new randomization method for robust image hashing based on Higher Order Spectra (HOS) and Radon projections. The method is non-linear, which is an essential requirement for non-invertibility. The method is also designed to produce features more suited to quantization and encoding. The system can operate without the need for quantizer training, is more easily encoded and displays improved hashing performance when compared to existing robust image hashing algorithms. The dissertation also shows how the HOS method can be adapted to work with biometric features obtained from 2D and 3D face images.
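To make the pipeline concrete, here is a generic toy sketch of the randomize → quantize → encode stages described above; the key-seeded random projection, quantile-based threshold training and 2-bit encoding are our own illustrative choices, not the dissertation's algorithms.

```python
# Toy randomize -> quantize -> encode pipeline for robust hashing (illustrative
# assumptions only): a fixed key-seeded linear projection, per-dimension
# thresholds learnt from training data, and 2 bits per quantized symbol.
import numpy as np

rng = np.random.default_rng(seed=1234)             # stands in for a secret key
PROJECTION = rng.standard_normal((32, 1024))       # fixed key-dependent projection

def randomize(features):
    return PROJECTION @ features                   # linear randomization stage

def train_thresholds(training_vectors, levels=4):
    qs = np.linspace(0, 100, levels + 1)[1:-1]     # quantile-based thresholds
    return np.percentile(training_vectors, qs, axis=0)

def binary_hash(vector, thresholds):
    symbols = np.sum(vector[None, :] > thresholds, axis=0)   # quantize to 4 levels
    bits = (symbols[:, None] >> np.arange(2)) & 1            # 2-bit binary encoding
    return bits.ravel().astype(np.uint8)

# usage: learn thresholds from randomized training features, then hash a new vector
train = np.stack([randomize(v) for v in rng.standard_normal((200, 1024))])
thresholds = train_thresholds(train)
print(binary_hash(randomize(rng.standard_normal(1024)), thresholds))
```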
Abstract:
This article analyzes a series of stories and artworks that were produced in a collective biography workshop. It explores Judith Butler’s concept of the heterosexual matrix combined with a Deleuzian theoretical framework. The article begins with an overview of Butler’s concept of the heterosexual matrix and her theorizations on how it might be disrupted. It then suggests how a Deleuzian framework offers other tools for analyzing these ruptures at the micro level of girls’ everyday interactions.
Abstract:
This paper outlines an innovative and feasible flight control scheme for a rotary-wing unmanned aerial system (RUAS) with guaranteed safety and reliable flight quality in a gusty environment. The proposed control methodology aims to increase the gust-attenuation capability of a RUAS to ensure improved flight performance when strong gusts occur. Based on the design of an effective estimator, an altitude controller is first constructed to synchronously compensate for fluctuations of the main rotor thrust which might lead to crashes in a gusty environment. Afterwards, a nonlinear state feedback controller is proposed to stabilize the horizontal positions of the RUAS with a gust-attenuation property. The performance of the proposed control framework is evaluated using parameters of a Vario XLC helicopter, and high-fidelity simulations show that the proposed controllers can effectively reduce the side-effects of gusts and demonstrate performance improvement when compared with proportional-integral-derivative (PID) controllers.
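As a purely illustrative aid (a toy construction under our own assumptions, not the paper's controller or helicopter model), the sketch below shows the kind of estimator-based compensation the abstract describes: a first-order disturbance observer reconstructs a gust-induced thrust fluctuation from the vertical acceleration, and the altitude loop subtracts the estimate.

```python
# Toy vertical-axis simulation of estimator-based gust compensation (illustrative
# assumptions only: point-mass dynamics, sinusoidal gust, hand-picked gains).
import numpy as np

dt, m, g = 0.01, 8.0, 9.81            # step [s], hypothetical mass [kg], gravity
kp, kd, l_obs = 4.0, 3.0, 20.0        # altitude PD gains and observer gain
z, vz, d_hat, z_ref = 0.0, 0.0, 0.0, 10.0

for k in range(3000):                                  # 30 s of simulated flight
    gust = 5.0 * np.sin(0.8 * k * dt)                  # thrust fluctuation [N]
    u = m * (g + kp * (z_ref - z) - kd * vz) - d_hat   # feedback minus gust estimate
    az = (u + gust) / m - g                            # vertical acceleration
    vz += az * dt
    z += vz * dt
    d_hat += l_obs * (m * (az + g) - u - d_hat) * dt   # first-order disturbance observer

print(f"altitude after 30 s: {z:.2f} m (reference {z_ref:.1f} m)")
```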
Abstract:
Matrix metalloproteinases (MMPs) are proteolytic enzymes important to wound healing. In non-healing wounds, it has been suggested that MMP levels become dysfunctional; hence it is of great interest to develop sensors to detect MMP biomarkers. This study presents the development of a label-free optical MMP biosensor based on a functionalised porous silicon (pSi) thin film. The biosensor is fabricated by immobilising a peptidomimetic MMP inhibitor in the porous layer using hydrosilylation followed by amide coupling. The binding of MMP to the immobilised inhibitor translates into a change of effective optical thickness (EOT) over time. We investigate the effect of surface functionalisation on the stability of the pSi surface and evaluate the sensing performance. We successfully demonstrate MMP detection in buffer solution and human wound fluid at physiologically relevant concentrations. This biosensor may find application as a point-of-care device that is prognostic of the healing trajectory of chronic wounds.
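The effective optical thickness readout can be illustrated as follows; note that this is an assumption about the measurement (Fourier analysis of the thin-film reflectance spectrum, where interference fringes produce a peak at the optical path 2nL), since the abstract only states that binding changes the EOT.

```python
# Illustrative extraction of effective optical thickness (EOT = 2nL) from a
# thin-film reflectance spectrum (our assumption about the readout, not a method
# stated in the abstract): fringes in reflectance vs. wavenumber transform to a
# peak whose position gives the optical path, tracked over time during binding.
import numpy as np

def effective_optical_thickness(wavelength_nm, reflectance):
    k = 1.0 / np.asarray(wavelength_nm, dtype=float)         # wavenumber (1/nm)
    order = np.argsort(k)
    k_uniform = np.linspace(k.min(), k.max(), 4096)
    r = np.interp(k_uniform, k[order], np.asarray(reflectance)[order])
    spectrum = np.abs(np.fft.rfft(r - r.mean()))
    path = np.fft.rfftfreq(k_uniform.size, d=k_uniform[1] - k_uniform[0])
    return path[1 + np.argmax(spectrum[1:])]                 # peak position = 2nL [nm]

# synthetic fringe pattern for a film with EOT = 2nL = 9000 nm
wl = np.linspace(500.0, 900.0, 2000)
refl = 0.5 + 0.4 * np.cos(2 * np.pi * 9000.0 / wl)
print(effective_optical_thickness(wl, refl))                 # close to 9000
```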
Abstract:
This project addresses the viability of lightweight, low-power-consumption, flexible, large-format LED screens. The investigation encompasses all aspects of the electrical and mechanical design, individually and as a system, and achieves a successful full-scale prototype. The prototype implements novel techniques to achieve large-displacement colour aliasing, a purely passive thermal management solution, a rapid deployment system, individual seven-bit LED current control with two-way display communication, auto-configuration and complete signal redundancy, all of which are in direct response to industry needs.