420 results for SEPARATION APPLICATIONS
Abstract:
The present rate of technological advance continues to place significant demands on data storage devices. The sheer amount of digital data being generated each year along with consumer expectations, fuels these demands. At present, most digital data is stored magnetically, in the form of hard disk drives or on magnetic tape. The increase in areal density (AD) of magnetic hard disk drives over the past 50 years has been of the order of 100 million times, and current devices are storing data at ADs of the order of hundreds of gigabits per square inch. However, it has been known for some time that the progress in this form of data storage is approaching fundamental limits. The main limitation relates to the lower size limit that an individual bit can have for stable storage. Various techniques for overcoming these fundamental limits are currently the focus of considerable research effort. Most attempt to improve current data storage methods, or modify these slightly for higher density storage. Alternatively, three dimensional optical data storage is a promising field for the information storage needs of the future, offering very high density, high speed memory. There are two ways in which data may be recorded in a three dimensional optical medium; either bit-by-bit (similar in principle to an optical disc medium such as CD or DVD) or by using pages of bit data. Bit-by-bit techniques for three dimensional storage offer high density but are inherently slow due to the serial nature of data access. Page-based techniques, where a two-dimensional page of data bits is written in one write operation, can offer significantly higher data rates, due to their parallel nature. Holographic Data Storage (HDS) is one such page-oriented optical memory technique. This field of research has been active for several decades, but with few commercial products presently available. 
Another page-oriented optical memory technique involves recording pages of data as phase masks in a photorefractive medium. A photorefractive material is one in which the refractive index can be modified by light of the appropriate wavelength and intensity, and this property can be used to store information in these materials. In phase mask storage, two dimensional pages of data are recorded into a photorefractive crystal, as refractive index changes in the medium. A low-intensity readout beam propagating through the medium will have its intensity profile modified by these refractive index changes and a CCD camera can be used to monitor the readout beam, and thus read the stored data. The main aim of this research was to investigate data storage using phase masks in the photorefractive crystal, lithium niobate (LiNbO3). Firstly, the experimental methods for storing the two dimensional pages of data (a set of vertical stripes of varying lengths) in the medium are presented. The laser beam used for writing, whose intensity profile is modified by an amplitude mask which contains a pattern of the information to be stored, illuminates the lithium niobate crystal and the photorefractive effect causes the patterns to be stored as refractive index changes in the medium. These patterns are read out non-destructively using a low intensity probe beam and a CCD camera. A common complication of information storage in photorefractive crystals is the issue of destructive readout. This is a problem particularly for holographic data storage, where the readout beam should be at the same wavelength as the beam used for writing. Since the charge carriers in the medium are still sensitive to the read light field, the readout beam erases the stored information. A method to avoid this is by using thermal fixing. Here the photorefractive medium is heated to temperatures above 150 °C; this process forms an ionic grating in the medium.
This ionic grating is insensitive to the readout beam and therefore the information is not erased during readout. A non-contact method for determining temperature change in a lithium niobate crystal is presented in this thesis. The temperature-dependent birefringent properties of the medium cause intensity oscillations to be observed for a beam propagating through the medium during a change in temperature. It is shown that each oscillation corresponds to a particular temperature change, and by counting the number of oscillations observed, the temperature change of the medium can be deduced. The presented technique for measuring temperature change could easily be applied to a situation where thermal fixing of data in a photorefractive medium is required. Furthermore, by using an expanded beam and monitoring the intensity oscillations over a wide region, it is shown that the temperature in various locations of the crystal can be monitored simultaneously. This technique could be used to deduce temperature gradients in the medium. It is shown that the three dimensional nature of the recording medium causes interesting degradation effects to occur when the patterns are written for a longer-than-optimal time. This degradation results in the splitting of the vertical stripes in the data pattern, and for long writing exposure times this process can result in the complete deterioration of the information in the medium. It is shown that simply by using incoherent illumination, the original pattern can be recovered from the degraded state. The reason for the recovery is that the refractive index changes causing the degradation are of a smaller magnitude, since they are induced by the write field components scattered from the written structures. During incoherent erasure, the lower magnitude refractive index changes are neutralised first, allowing the original pattern to be recovered.
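The oscillation-counting relation described above can be sketched numerically. In the sketch below the wavelength, crystal length and thermo-optic coefficient are illustrative assumptions, not values from the thesis: one full intensity oscillation corresponds to a 2π change in the birefringent phase retardation φ = 2π·Δn(T)·L/λ, giving a fixed temperature increment per oscillation.

```python
# Illustrative sketch only: constants below are assumed, not from the thesis.
WAVELENGTH = 632.8e-9      # m, HeNe probe beam (assumed)
CRYSTAL_LENGTH = 10e-3     # m, optical path through the crystal (assumed)
DBIREF_DT = 4.0e-5         # 1/K, |d(delta n)/dT| of the medium (assumed)

def temp_change_per_oscillation(lam=WAVELENGTH, length=CRYSTAL_LENGTH,
                                dn_dT=DBIREF_DT):
    """Temperature change producing one full intensity oscillation,
    i.e. a change of lam in the optical path difference dn*L."""
    return lam / (length * dn_dT)

def temperature_change(n_oscillations, **kw):
    """Deduce the total temperature change from a count of oscillations."""
    return n_oscillations * temp_change_per_oscillation(**kw)
```

With these assumed constants each oscillation corresponds to roughly 1.6 K, so counting ten oscillations implies a temperature change of about 16 K.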
The degradation process is shown to be reversed during the recovery process, and a simple relationship is found relating the time at which particular features appear during degradation and recovery. A further outcome of this work is that a minimum stripe width of 30 µm is required for accurate storage and recovery of the information in the medium; any size smaller than this results in incomplete recovery. The degradation and recovery process could find application in image scrambling or cryptography for optical information storage. A two dimensional numerical model based on the finite-difference beam propagation method (FD-BPM) is presented and used to gain insight into the pattern storage process. The model shows that the degradation of the patterns is due to the complicated path taken by the write beam as it propagates through the crystal, and in particular the scattering of this beam from the induced refractive index structures in the medium. The model indicates that the highest quality pattern storage would be achieved with a thin 0.5 mm medium; however, this type of medium would also remove the degradation property of the patterns and the subsequent recovery process. To overcome the simplistic treatment of the refractive index change in the FD-BPM model, a fully three dimensional photorefractive model developed by Devaux is presented. This model provides significant insight into the pattern storage, particularly for the degradation and recovery process, and confirms the theory that the recovery of the degraded patterns is possible because the refractive index changes responsible for the degradation are of a smaller magnitude. Finally, detailed analysis of the pattern formation and degradation dynamics for periodic patterns of various periodicities is presented. It is shown that stripe widths in the write beam of greater than 150 µm result in the formation of different types of refractive index changes, compared with the stripes of smaller widths.
As a result, it is shown that the pattern storage method discussed in this thesis has an upper feature size limit of 150 µm, for accurate and reliable pattern storage.
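To give a flavour of how an FD-BPM model of this kind advances a field through the medium, here is a minimal one-dimensional Crank-Nicolson beam-propagation step in pure Python. All numbers (wavelength, background index, grid) are illustrative assumptions, and the refractive-index perturbation dn(x) is supplied by the caller; a real photorefractive model would update dn self-consistently from the optical intensity.

```python
import math

def thomas_solve(sub, diag, sup, rhs):
    """Solve a (complex) tridiagonal system by the Thomas algorithm."""
    n = len(rhs)
    cp = [0j] * n
    dp = [0j] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / m
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / m
    x = [0j] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def bpm_step(field, dn, dx, dz, wavelength=532e-9, n0=2.2):
    """One Crank-Nicolson step of the 1-D paraxial equation
       dE/dz = (i/(2k)) d2E/dx2 + i*k0*dn(x)*E
    with zero-field boundaries; the scheme is unitary, so optical
    power is conserved when dn is real."""
    n = len(field)
    k0 = 2 * math.pi / wavelength
    alpha = 1.0 / (2.0 * k0 * n0 * dx * dx)   # 1/(2k dx^2)
    half = 0.5j * dz
    sub = [-half * alpha] * n
    sup = [-half * alpha] * n
    diag = [1.0 - half * (-2.0 * alpha + k0 * dn[i]) for i in range(n)]
    # right-hand side: (I + i dz/2 H) applied to the current field
    rhs = []
    for i in range(n):
        v = (1.0 + half * (-2.0 * alpha + k0 * dn[i])) * field[i]
        if i > 0:
            v += half * alpha * field[i - 1]
        if i < n - 1:
            v += half * alpha * field[i + 1]
        rhs.append(v)
    return thomas_solve(sub, diag, sup, rhs)
```

Stepping a Gaussian input through repeated calls with dn = 0 shows free diffraction (the on-axis amplitude falls while total power is conserved); a striped dn pattern would instead model the scattering from written structures discussed above.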
Abstract:
Acoustic emission (AE) is the phenomenon where high frequency stress waves are generated by the rapid release of energy within a material by sources such as crack initiation or growth. The AE technique involves recording these stress waves by means of sensors placed on the surface and subsequent analysis of the recorded signals to gather information such as the nature and location of the source. It is one of several diagnostic techniques currently used for structural health monitoring (SHM) of civil infrastructure such as bridges. Some of its advantages include the ability to provide continuous in-situ monitoring and high sensitivity to crack activity. But several challenges still exist. Due to the high sampling rate required for data capture, a large amount of data is generated during AE testing. This is further complicated by the presence of a number of spurious sources that can produce AE signals which can then mask the desired signals. Hence, an effective data analysis strategy is needed to achieve source discrimination. This also becomes important for long term monitoring applications in order to avoid massive data overload. Analysis of the frequency content of recorded AE signals together with the use of pattern recognition algorithms are some of the advanced and promising data analysis approaches for source discrimination. This paper explores the use of various signal processing tools for analysis of experimental data, with an overall aim of finding an improved method for source identification and discrimination, with particular focus on monitoring of steel bridges.
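As a toy illustration of the frequency-content analysis mentioned above, the Python sketch below extracts the dominant frequency of a short recorded burst with a naive DFT. The sampling rate and the synthetic burst in the usage example are assumptions for illustration only; real AE processing would add windowing, an FFT and richer spectral features.

```python
import math

def dft_magnitudes(signal):
    """Naive discrete Fourier transform magnitudes (adequate for
    short analysis windows; O(n^2), so keep n small)."""
    n = len(signal)
    mags = []
    for k in range(n // 2):
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im))
    return mags

def dominant_frequency(signal, fs):
    """Frequency (Hz) of the strongest non-DC spectral bin."""
    mags = dft_magnitudes(signal)
    k = max(range(1, len(mags)), key=mags.__getitem__)  # skip the DC bin
    return k * fs / len(signal)
```

For example, a decaying 150 kHz burst sampled at 1 MHz (an assumed, AE-like scenario) yields a dominant frequency within one or two bins of 150 kHz; features like this can then feed a pattern recognition stage for source discrimination.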
Abstract:
Sustainable Urban and Regional Infrastructure Development: Technologies, Applications and Management bridges the gap in the current literature by addressing the overall problems present in society's major infrastructures, and the technologies that may be applied to overcome these problems. It focuses on ways in which energy intensive but 'invisible' (to the general public) facilities can become green or greener. The studies presented are lessons to be learnt from our neighbours and from our own backyard, and provide an excellent general overview of the issues facing us all.
Abstract:
The concept of ‘sustainability’ has been pushed to the forefront of policy-making and politics as the world wakes up to the impacts of climate change and the effects of the modern urban lifestyle. Climate change has emerged as one of the biggest challenges faced by our planet today, threatening both built and natural systems with long term consequences which may be irreversible. While there is a vast literature in the market on sustainable cities and urban development, there is currently none that brings together the vital issues of urban and regional development, and the planning, management and implementation of sustainable infrastructure. Large scale infrastructure plays an important part in modern society by not only promoting economic growth, but also by acting as a key indicator for it. More importantly, it supplies municipal/local amenity and services: water, electricity, social and communication facilities, waste removal, transport of people and goods, as well as numerous other services. For the most part, infrastructure has been built by teams led by engineers who are more concerned about functionality than the concept of sustainability. However, it has been widely stated that current practices and lifestyles cannot continue if we are to leave a healthy living planet to not only the next generation, but also to the generations beyond. Therefore, in order to be sustainable, there are drastic measures that need to be taken. Current single-purpose, open-looped infrastructure designs are not sustainable; they are too resource intensive, consume too much energy and support the consumption of natural resources at a rate that will exhaust their supply. Because of this, it is vital that modern society, policy-makers, developers, engineers and planners become pioneers in introducing and incorporating sustainable features into urban and regional infrastructure.
Abstract:
The launch of the Apple iPad in January 2010 has seen considerable interest from the newspaper and publishing industry in developing content and business models for the tablet PC device that can address the limits of both the print and online news and information media products. It is early days in the iPad’s evolution, and we wait to see what competitor devices will emerge in the near future. It is apparent, however, that it has become a significant “niche” product, with considerable potential for mass market expansion over the next few years, possibly at the expense of netbook sales. The scope for the iPad and tablet PCs to become a “fourth screen” for users, alongside the TV, PC and mobile phone, is in the early stages of evolution. The study used five criteria to assess iPad apps:
• Content: timeliness; archive; personalisation; content depth; advertisements; the use of multimedia; and the extent to which the content was in sync with the provider brand.
• Useability: degree of static content; ability to control multimedia; file size; page clutter; resolution; signposts; and customisation.
• Interactivity: hyperlinks; ability to contribute content or provide feedback to news items; depth of multimedia; search function; ability to use plug-ins and linking; ability to highlight, rate and/or save items; functions that may facilitate a community of users.
• Transaction capabilities: ecommerce functionality; purchase and download process; user privacy and transaction security.
• Openness: degree of linking to outside sources; reader contribution processes; anonymity measures; and application code ownership.
Abstract:
Future air traffic management concepts often involve the proposal of automated separation management algorithms that replace human air traffic controllers. This paper proposes a new type of automated separation management algorithm (based on the satisficing approach) that utilizes inter-aircraft communication and a track file manager (or bank of Kalman filters), and that is capable of resolving conflicts during periods of communication failure. The proposed separation management algorithm is tested in a range of flight scenarios involving periods of communication failure, in both simulation and flight test (flight tests were conducted as part of the Smart Skies project). The intention of the conducted flight tests was to investigate the benefits of using inter-aircraft communication to provide an extra layer of safety protection in support of air traffic management during periods of failure of the communication network. These benefits were confirmed.
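The track file manager above relies on a bank of Kalman filters that can keep predicting aircraft states when position reports stop arriving. The following is a minimal, hypothetical one-dimensional constant-velocity Kalman filter sketch (not the paper's algorithm, and all tuning values are assumed): `update()` fuses a received position report, while calling `predict()` alone coasts the track during a communication dropout.

```python
class TrackFilter:
    """Minimal 1-D constant-velocity Kalman filter (illustrative only).
    State: [position, velocity]. During communication failure, call
    predict() on its own to coast the track file forward."""

    def __init__(self, pos=0.0, vel=0.0, q=0.1, r=25.0):
        self.x = [pos, vel]
        self.P = [[100.0, 0.0], [0.0, 10.0]]  # initial uncertainty (assumed)
        self.q = q      # process-noise intensity (assumed)
        self.r = r      # measurement-noise variance (assumed)

    def predict(self, dt):
        """Propagate state and covariance one step (F = [[1, dt], [0, 1]])."""
        x, P, q = self.x, self.P, self.q
        self.x = [x[0] + dt * x[1], x[1]]
        p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q * dt ** 3 / 3
        p01 = P[0][1] + dt * P[1][1] + q * dt ** 2 / 2
        p10 = P[1][0] + dt * P[1][1] + q * dt ** 2 / 2
        p11 = P[1][1] + q * dt
        self.P = [[p00, p01], [p10, p11]]

    def update(self, z):
        """Fuse a position report z (H = [1, 0])."""
        P = self.P
        s = P[0][0] + self.r
        k0, k1 = P[0][0] / s, P[1][0] / s
        resid = z - self.x[0]
        self.x = [self.x[0] + k0 * resid, self.x[1] + k1 * resid]
        self.P = [[(1 - k0) * P[0][0], (1 - k0) * P[0][1]],
                  [P[1][0] - k1 * P[0][0], P[1][1] - k1 * P[0][1]]]
```

Feeding the filter regular position reports lets it learn the velocity; if reports then cease, repeated `predict()` calls extrapolate the track, which is the behaviour that lets a separation manager keep reasoning about conflicts through a communication outage.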
Abstract:
The ability to reproducibly load bioactive molecules into polymeric microspheres is a challenge. Traditional microsphere fabrication methods typically provide inhomogeneous release profiles and suffer from lack of batch-to-batch reproducibility, hindering their potential for up-scaling and their translation to the clinic. This deficit in homogeneity is in part attributed to broad size distributions and variability in the morphology of particles. It is thus desirable to control the morphology and size of non-loaded particles in the first instance, in preparation for obtaining desired release profiles of loaded particles at a later stage. This is achieved by identifying the key parameters involved in particle production and understanding how adapting these parameters affects the final characteristics of particles. In this study, electrospraying was presented as a promising technique for generating reproducible particles made of polycaprolactone, a biodegradable, FDA-approved polymer. Narrow size distributions were obtained by control of the electrospraying flow rate and polymer concentration, with average particle sizes ranging from 10 to 20 µm. Particles were shown to be spherical with a homogeneous embossed texture, determined by the polymer entanglement regime taking place during electrospraying. No toxic residue from this process was detected in preliminary cell work using DNA quantification assays, validating this method as suitable for subsequent loading of bioactive components.
Abstract:
Throughout this workshop session we have looked at various configurations of Sage as well as using the Sage UI to run Sage applications (e.g. the image viewer). More advanced usage of Sage has been demonstrated using a Sage-compatible version of Paraview, highlighting the potential of parallel rendering. The aim of this tutorial session is to give a practical introduction to developing visual content for a tiled display using the Sage libraries. After completing this tutorial you should have the basic tools required to develop your own custom Sage applications. This tutorial is designed for software developers; intermediate programming knowledge is assumed, along with some introductory OpenGL. You will be required to write small portions of C/C++ code to complete this worksheet. However, if you do not feel comfortable writing code (or have never written in C or C++), we will be on hand throughout this session, so feel free to ask for some help. We have a number of machines in this lab running a VNC client to a virtual machine running Fedora 12. You should all be able to log in with the username “escience” and password “escience10”. Some of the commands in this worksheet require you to run them as the root user, so note the password as you may need to use it a few times. If you need to access the Internet, use the username “qpsf01”, password “escience10”.
Abstract:
In the past 20 years, mesoporous materials have attracted great attention due to their significant features of large surface area, ordered mesoporous structure, tunable pore size and volume, and well-defined surface properties. They have many potential applications, such as catalysis, adsorption/separation, biomedicine, etc. [1]. Recently, studies of the applications of mesoporous materials have expanded into the field of biomaterials science. A new class of bioactive glass, referred to as mesoporous bioactive glass (MBG), was first developed in 2004. This material has a highly ordered mesopore channel structure with a pore size ranging from 5–20 nm [1]. Compared to non-mesoporous bioactive glass (BG), MBG possesses a more optimal surface area, pore volume and improved in vitro apatite mineralization in simulated body fluids [1,2]. Vallet-Regí et al. have systematically investigated the in vitro apatite formation of different types of mesoporous materials, and they demonstrated that an apatite-like layer can be formed on the surfaces of Mobil Composition of Matter (MCM)-48, hexagonal mesoporous silica (SBA-15), phosphorous-doped MCM-41, bioglass-containing MCM-41 and ordered mesoporous MBG, allowing their use in biomedical engineering for tissue regeneration [2-4]. Chang et al. have found that MBG particles can be used for a bioactive drug-delivery system [5,6]. Our study has shown that MBG powders, when incorporated into a poly(lactide-co-glycolide) (PLGA) film, significantly enhance the apatite-mineralization ability and cell response of PLGA films compared to BG [7]. These studies suggest that MBG is a very promising bioactive material with respect to bone regeneration.
It is known that for bone defect repair, tissue engineering represents an attractive approach, creating three-dimensional (3D) porous scaffolds which have more advantages than powders or granules, as 3D scaffolds provide an interconnected macroporous network to allow cell migration, nutrient delivery, bone ingrowth, and eventually vascularization [8]. For this reason, we seek to apply MBG to bone tissue engineering by developing MBG scaffolds. However, one of the main disadvantages of MBG scaffolds is their low mechanical strength and high brittleness; the other issue is their very quick degradation, which leads to an unstable surface for bone cell growth, limiting their applications. Silk fibroin, as a new family of native biomaterials, has been widely studied for bone and cartilage repair applications in the form of pure silk or its composite scaffolds [9-14]. Compared to traditional synthetic polymer materials, such as PLGA and poly(3-hydroxybutyrate-co-3-hydroxyvalerate) (PHBV), the chief advantage of silk fibroin is its water-soluble nature, which eliminates the need in scaffold preparation for organic solvents, which tend to be highly cytotoxic [15]. Other advantages of silk scaffolds are their excellent mechanical properties, controllable biodegradability and cytocompatibility [15-17]. However, for the purposes of bone tissue engineering, the osteoconductivity of pure silk scaffolds is suboptimal. It is expected that combining MBG with silk to produce MBG/silk composite scaffolds would greatly improve their physiochemical and osteogenic properties for bone tissue engineering applications. Therefore, in this chapter, we will introduce the research development of MBG/silk scaffolds for bone tissue engineering.
Abstract:
Osteoarthritis (OA) is a chronic, non-inflammatory type of arthritis, which usually affects the movable and weight bearing joints of the body. It is the most common joint disease in human beings and is common in elderly people. To date, there are no safe and effective disease-modifying OA drugs (DMOADs) to treat the millions of patients suffering from this serious and debilitating disease. However, recent studies provide strong evidence for the use of mesenchymal stem cell (MSC) therapy in treating cartilage-related disorders. Due to their natural differentiation properties, MSCs can serve as vehicles for the delivery of effective, targeted treatment to damaged cartilage in OA disease. In vitro, MSCs can readily be tailored with transgenes with anti-catabolic or pro-anabolic effects to create cartilage-friendly therapeutic vehicles. On the other hand, tissue engineering constructs with scaffolds and biomaterials hold promise for biological cartilage therapy. Many of these strategies have been validated in a wide range of in vitro and in vivo studies assessing treatment feasibility or efficacy. In this review, we provide an outline of the rationale and status of stem-cell-based treatments for OA cartilage, and we discuss prospects for clinical implementation and the factors crucial for maintaining the drive towards this goal.
Abstract:
Research in structural dynamics has received considerable attention due to problems associated with emerging slender structures, increased vulnerability of structures to random loads and aging infrastructure. This paper briefly describes some such research carried out on i) dynamics of composite floor structure, ii) dynamics of cable supported footbridge, iii) seismic mitigation of frame-shear wall structure using passive dampers and iv) development of a damage assessment model for use in structural health modelling.
Abstract:
As computer applications become more available—both technically and economically—construction project managers are increasingly able to access advanced computer tools capable of transforming the role that project managers have typically performed. Competence at using these tools requires a dual commitment in training—from the individual and the firm. Improving the computer skills of project managers can provide construction firms with a competitive advantage to differentiate from others in an increasingly competitive international market. Yet, few published studies have quantified the existing level of competence of construction project managers. Identification of project managers’ existing computer application skills is a necessary first step to developing more directed training to better capture the benefits of computer applications. This paper discusses the yet-to-be-released results of a series of surveys undertaken in Malaysia, Singapore, Indonesia, Australia and the United States through QUT’s School of Construction Management and Property and the M.E. Rinker, Sr. School of Building Construction at the University of Florida. This international survey reviews the use and reported competence in using a series of commercially-available computer applications by construction project managers. The five different country locations of the survey allow cross-national comparisons to be made between project managers undertaking continuing professional development programs. The results highlight a shortfall in the ability of construction project managers to capture potential benefits provided by advanced computer applications and provide directions for targeted industry training programs. This international survey also provides a unique insight into the cross-national usage of advanced computer applications and forms an important step in this ongoing joint review of technology and the construction project manager.
Abstract:
A trend in the design and implementation of modern industrial automation systems is to integrate computing, communication and control into a unified framework at different levels of machine/factory operations and information processing. These distributed control systems are referred to as networked control systems (NCSs). They are composed of sensors, actuators, and controllers interconnected over communication networks. As most communication networks are not designed for NCS applications, the communication requirements of NCSs may not be satisfied. For example, traditional control systems require the data to be accurate, timely and lossless. However, because of random transmission delays and packet losses, the control performance of a control system may deteriorate badly, and the control system may be rendered unstable. The main challenge of NCS design is to both maintain and improve the stable control performance of an NCS. To achieve this, communication and control methodologies have to be designed. In recent decades, Ethernet and 802.11 networks have been introduced into control networks and have even replaced traditional fieldbus products in some real-time control applications, because of their high bandwidth and good interoperability. As Ethernet and 802.11 networks are not designed for distributed control applications, two aspects of NCS research need to be addressed to make these communication networks suitable for control systems in industrial environments. From the perspective of networking, communication protocols need to be designed to satisfy communication requirements for NCSs such as real-time communication and high-precision clock consistency requirements. From the perspective of control, methods to compensate for network-induced delays and packet losses are important for NCS design.
To make Ethernet-based and 802.11 networks suitable for distributed control applications, this thesis develops a high-precision relative clock synchronisation protocol and an analytical model for analysing the real-time performance of 802.11 networks, and designs a new predictive compensation method. Firstly, a hybrid NCS simulation environment based on the NS-2 simulator is designed and implemented. Secondly, a high-precision relative clock synchronisation protocol is designed and implemented. Thirdly, transmission delays in 802.11 networks for soft-real-time control applications are modelled by use of a Markov chain model in which real-time Quality-of-Service parameters are analysed under a periodic traffic pattern. By using a Markov chain model, we can accurately model the tradeoff between real-time performance and throughput performance. Furthermore, a cross-layer optimisation scheme, featuring application-layer flow rate adaptation, is designed to achieve the tradeoff between certain real-time and throughput performance characteristics in a typical NCS scenario with a wireless local area network. Fourthly, as a co-design approach for both the network and the controller, a new predictive compensation method for variable delay and packet loss in NCSs is designed, in which simultaneous end-to-end delays and packet losses during packet transmissions from sensors to actuators are tackled. The effectiveness of the proposed predictive compensation approach is demonstrated using our hybrid NCS simulation environment.
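The loss-modelling and predictive-compensation themes above can be illustrated with a toy Python sketch (not the thesis's method): a two-state Gilbert-Elliott Markov chain stands in for bursty 802.11 packet loss, and a one-step linear extrapolator stands in for the predictive compensator at the actuator. The transition probabilities are illustrative assumptions.

```python
import random

def gilbert_loss(n, p_gb=0.1, p_bg=0.5, seed=1):
    """Two-state (Gilbert-Elliott) Markov model of bursty packet loss.
    Returns a list of booleans: True = packet delivered (Good state).
    Stationary loss probability is p_gb / (p_gb + p_bg)."""
    random.seed(seed)
    good = True
    delivered = []
    for _ in range(n):
        delivered.append(good)
        good = (random.random() >= p_gb) if good else (random.random() < p_bg)
    return delivered

def compensate(samples, delivered):
    """Predictive hold (simplified sketch): when a packet is lost, linearly
    extrapolate the control value from the last two delivered samples."""
    received = []
    out = []
    for u, ok in zip(samples, delivered):
        if ok:
            received.append(u)
            out.append(u)
        elif len(received) >= 2:
            out.append(2 * received[-1] - received[-2])  # one-step extrapolation
        elif received:
            out.append(received[-1])                     # zero-order hold fallback
        else:
            out.append(0.0)
    return out
```

On a ramp-shaped control signal, the extrapolator reconstructs an isolated lost sample exactly, whereas a plain zero-order hold would lag by one step; this is the intuition behind compensating network-induced losses with prediction.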
Abstract:
Human hair fibres are ubiquitous in nature and are found frequently at crime scenes, often as a result of exchange between the perpetrator, victim and/or the surroundings according to Locard's Principle. Therefore, hair fibre evidence can provide important information for crime investigation. For human hair evidence, the current forensic methods of analysis rely on comparisons of either hair morphology by microscopic examination or nuclear and mitochondrial DNA analyses. Unfortunately, in some instances the utilisation of microscopy and DNA analyses is difficult and often not feasible. This dissertation is arguably the first comprehensive investigation aimed at comparing, classifying and identifying single human scalp hair fibres with the aid of FTIR-ATR spectroscopy in a forensic context. Spectra were collected from the hair of 66 subjects of Asian, Caucasian and African (i.e. African-type) origin. The fibres ranged from untreated to variously mildly and heavily cosmetically treated hairs. The collected spectra reflected the physical and chemical nature of a hair from the near-surface, particularly the cuticle layer. In total, 550 spectra were acquired and processed to construct a relatively large database. To assist with the interpretation of the complex spectra from various types of human hair, derivative spectroscopy and chemometric methods such as Principal Component Analysis (PCA) and Fuzzy Clustering (FC), together with the Multi-Criteria Decision Making (MCDM) methods Preference Ranking Organisation Method for Enrichment Evaluation (PROMETHEE) and Geometrical Analysis for Interactive Aid (GAIA), were utilised. FTIR-ATR spectroscopy had two important advantages over previous methods: (i) sample throughput and spectral collection were significantly improved (no physical flattening or microscope manipulations), and (ii) given the recent advances in FTIR-ATR instrument portability, there is real potential to transfer this work's findings seamlessly to in-field applications.
The "raw" spectra, spectral subtractions and second derivative spectra were compared to demonstrate the subtle differences in human hair. SEM images were used as corroborative evidence to demonstrate the surface topography of hair. This indicated that the condition of the cuticle surface could be of three types: untreated, mildly treated and chemically treated hair. Extensive studies of the potential spectral band regions responsible for matching and discrimination of various types of hair samples suggested that the 1690-1500 cm-1 IR spectral region was to be preferred in comparison with the commonly used 1750-800 cm-1 region. The principal reason was the presence of the highly variable spectral profiles of cystine oxidation products (1200-1000 cm-1), which contributed significantly to spectral scatter and hence, poor hair sample matching. In the preferred 1690-1500 cm-1 region, conformational changes in the keratin protein, attributed to α-helical to β-sheet transitions in the Amide I and Amide II vibrations, played a significant role in matching and discrimination of the spectra and hence, the hair fibre samples. For gender comparison, the Amide II band is significant for differentiation. The results illustrated that the male hair spectra exhibit a more intense β-sheet vibration in the Amide II band at approximately 1511 cm-1, whilst the female hair spectra displayed a more intense α-helical vibration at 1520-1515 cm-1. In terms of chemical composition, female hair spectra exhibit greater intensity of the amino acid tryptophan (1554 cm-1) and of aspartic and glutamic acid (1577 cm-1). It was also observed that for the separation of samples based on racial differences, untreated Caucasian hair was discriminated from Asian hair as a result of having higher levels of the amino acid cystine and of cysteic acid. However, when mildly or chemically treated, Asian and Caucasian hair fibres are similar, whereas African-type hair fibres are different.
In terms of the investigation's novel contribution to the field of forensic science, it has allowed for the development of a novel, multifaceted, methodical protocol where previously none had existed. The protocol is a systematic method to rapidly investigate unknown or questioned single human hair FTIR-ATR spectra of different gender and racial origin, including fibres with different cosmetic treatments. Unknown or questioned spectra are first separated on the basis of chemical treatment (i.e. untreated, mildly treated or chemically treated), then gender, and finally racial origin (i.e. Asian, Caucasian and African-type). The methodology has the potential to complement the current forensic analysis methods for fibre evidence (i.e. microscopy and DNA), providing information at the morphological, genetic and structural levels.
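To give a concrete feel for the derivative-plus-chemometrics pipeline described above, the Python sketch below applies a central-difference second derivative and extracts the first principal component by power iteration. The tiny synthetic "spectra" in the usage example are assumptions for illustration, not hair data.

```python
import math

def second_derivative(spectrum):
    """Central-difference second derivative, of the kind commonly used to
    resolve overlapping IR bands before chemometric analysis."""
    return [spectrum[i - 1] - 2 * spectrum[i] + spectrum[i + 1]
            for i in range(1, len(spectrum) - 1)]

def first_principal_component(rows, iters=200):
    """First PC of the mean-centred rows, via power iteration on X^T X.
    Returns (loading vector, score of each row on that component)."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    X = [[r[j] - means[j] for j in range(d)] for r in rows]
    v = [math.sin(j + 1) for j in range(d)]   # generic non-degenerate start
    for _ in range(iters):
        scores = [sum(x[j] * v[j] for j in range(d)) for x in X]
        w = [sum(scores[i] * X[i][j] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / norm for c in w]
    scores = [sum(x[j] * v[j] for j in range(d)) for x in X]
    return v, scores
```

In a classification setting like the protocol above, spectra from different treatment, gender or racial groups would project to separated score clusters on the leading components, which clustering or MCDM ranking methods can then exploit.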