233 results for LOW-COST ADSORBENTS
Abstract:
We show that it is possible to detect specifically adsorbed bacteriophage directly by breaking the interactions between proteins displayed on the phage coat and ligands immobilized on the surface of a quartz crystal microbalance (QCM). This is achieved through increasing the amplitude of oscillation of the QCM surface and sensitively detecting the acoustic emission produced when the bacteriophage detaches from the surface. There is no interference from nonspecifically adsorbed phage. The detection is quantitative over at least 5 orders of magnitude and is sensitive enough to detect as few as 20 phage. The method has potential as a sensitive and low-cost method for virus detection.
Abstract:
There is an increasing need for biodegradable, environmentally friendly plastics to replace the petroleum-based non-degradable plastics that litter and pollute the environment. Starch-based plastic film composites are becoming a popular alternative because of their low cost, biodegradability, the abundance of starch, and the ease with which starch-based films can be chemically modified. This paper reports the results of using sugar cane bagasse nanofibres to improve the physicochemical properties of starch-based polymers. The addition of bagasse nanofibre (2.5, 5, 10 or 20 wt%) to modified potato starch ('Soluble starch') reduced moisture uptake by up to 17% at 58% relative humidity (RH). The films' tensile strength and Young's modulus increased by up to 100% and 200% with 10 wt% and 20 wt% nanofibre, respectively, at 58% RH. The tensile strain was reduced by up to 70% at 20 wt% fibre loading. These results indicate that the addition of sugar cane bagasse nanofibres significantly improved the properties of starch-based plastic films.
Abstract:
In vivo osteochondral defect models predominantly consist of small animals, such as rabbits. Although they have the advantage of low cost and manageability, their joints are smaller and heal more easily than those of larger animals or humans. We hypothesized that osteochondral cores from large animals can be implanted subcutaneously in rats to create an ectopic osteochondral defect model for routine and high-throughput screening of multiphasic scaffold designs and/or tissue-engineered constructs (TECs). Bovine osteochondral plugs with a 4 mm diameter osteochondral defect were fitted with novel multiphasic osteochondral grafts composed of chondrocyte-seeded alginate gels and osteoblast-seeded polycaprolactone scaffolds, prior to being implanted subcutaneously in rats with bone morphogenetic protein-7. After 12 weeks of in vivo implantation, histological and micro-computed tomography analyses demonstrated that TECs are susceptible to mineralization. Additionally, there was limited bone formation in the scaffold. These results suggest that the current model requires optimization to facilitate robust bone regeneration and vascular infiltration into the defect site. Taken together, this study provides a proof-of-concept for a high-throughput osteochondral defect model. With further optimization, the presented hybrid in vivo model may address the growing need for a cost-effective way to screen osteochondral repair strategies before moving to large animal preclinical trials.
Abstract:
With measurement of physical activity becoming more common in clinical practice, it is imperative that healthcare professionals become more knowledgeable about the different methods available to objectively measure physical activity behaviour. Objective measures do not rely on information provided by the patient; instead they measure and record the biomechanical or physiological consequences of performing physical activity, often in real time. As such, objective measures are not subject to the reporting bias or recall problems associated with self-report methods. The purpose of this article is to provide an overview of the different methods used to objectively measure physical activity in clinical practice. The review is limited to heart rate monitoring, accelerometers and pedometers, since their small size, low participant burden and relatively low cost make these objective measures appropriate for clinical practice settings. For each measure, strengths and weaknesses are discussed and, whenever possible, literature-based examples of implementation are provided.
Abstract:
Accurate and detailed measurement of an individual's physical activity is a key requirement for helping researchers understand the relationship between physical activity and health. Accelerometers have become the method of choice for measuring physical activity due to their small size, low cost, convenience and ability to provide objective information about physical activity. However, interpreting accelerometer data once it has been collected can be challenging. In this work, we applied machine learning algorithms to the task of physical activity recognition from triaxial accelerometer data. We employed a simple but effective approach: dividing the accelerometer data into short non-overlapping windows, converting each window into a feature vector, and treating each feature vector as an i.i.d. training instance for a supervised learning algorithm. We then improved on this approach with a multi-scale ensemble method that did not need to commit to a single window size, exploiting the fact that physical activities produce time series with repetitive patterns whose discriminative features occur at different temporal scales.
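The window-to-feature-vector pipeline the abstract describes can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: the feature set (mean, standard deviation, RMS), the nearest-centroid classifier, and the synthetic "sitting"/"walking" signals are all assumptions chosen to keep the example self-contained.

```python
import math
import random
import statistics

def window_features(signal, window_size):
    """Split a 1-D acceleration signal into non-overlapping windows and
    turn each window into a small feature vector (mean, std, RMS)."""
    feats = []
    for start in range(0, len(signal) - window_size + 1, window_size):
        w = signal[start:start + window_size]
        feats.append((statistics.fmean(w),
                      statistics.pstdev(w),
                      math.sqrt(statistics.fmean(x * x for x in w))))
    return feats

def train_centroids(labeled):
    """labeled: list of (feature_vector, activity). Returns activity -> centroid."""
    groups = {}
    for vec, activity in labeled:
        groups.setdefault(activity, []).append(vec)
    return {a: tuple(statistics.fmean(col) for col in zip(*vecs))
            for a, vecs in groups.items()}

def classify(vec, centroids):
    """Assign a window to the activity with the nearest centroid."""
    return min(centroids, key=lambda a: math.dist(vec, centroids[a]))

# Synthetic training data: a quiet 'sitting' signal vs an oscillatory 'walking' one.
random.seed(0)
sitting = [1.0 + random.gauss(0, 0.02) for _ in range(400)]
walking = [1.0 + math.sin(i / 3) + random.gauss(0, 0.1) for i in range(400)]
train = ([(f, "sitting") for f in window_features(sitting, 50)] +
         [(f, "walking") for f in window_features(walking, 50)])
centroids = train_centroids(train)
```

A multi-scale ensemble along the lines of the abstract would repeat this for several window sizes and combine the per-window predictions, for example by majority vote.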
Abstract:
Electrification of vehicular systems has gained increased momentum in recent years, with particular attention to constant power loads (CPLs). Since a CPL potentially threatens system stability, stability analysis of hybrid electric vehicles with CPLs becomes necessary. A new power buffer configuration with a battery is introduced to mitigate the instability caused by CPLs. Model predictive control (MPC) is applied to regulate the power buffer and decouple the source and load dynamics. Moreover, MPC provides an optimal tradeoff between modification of the load impedance, variation of the dc-link voltage and battery current ripple. This is particularly important during transients or at the onset of system faults, since the battery response is not very fast. The optimal tradeoff becomes even more significant when considering a low-cost power buffer without a battery. This paper analyzes system models for both voltage swell and voltage dip faults. Furthermore, a dual-mode MPC algorithm is implemented in real time, offering improved stability. A comprehensive set of experimental results is included to verify the efficacy of the proposed power buffer.
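The receding-horizon tradeoff MPC makes between voltage tracking and current ripple can be illustrated with a deliberately simplified sketch. This is not the paper's dual-mode MPC: the scalar dc-link model, the candidate current grid, and the weights w_v (voltage tracking) and w_di (ripple penalty, a stand-in for limiting battery stress) are all illustrative assumptions.

```python
import itertools

def mpc_step(v, i_prev, i_load, v_ref, dt=1e-3, C=0.01,
             horizon=3, w_v=1.0, w_di=0.05):
    """One receding-horizon step: enumerate short control sequences,
    simulate the toy dc-link model v' = v + dt/C * (i - i_load),
    and apply only the first current of the cheapest sequence.
    w_v and w_di trade voltage tracking against current ripple."""
    candidates = [i_load + d for d in (-2, -1, -0.5, 0, 0.5, 1, 2)]
    best_cost, best_i = float("inf"), i_prev
    for seq in itertools.product(candidates, repeat=horizon):
        vv, prev, cost = v, i_prev, 0.0
        for i in seq:
            vv = vv + dt / C * (i - i_load)
            cost += w_v * (vv - v_ref) ** 2 + w_di * (i - prev) ** 2
            prev = i
        if cost < best_cost:
            best_cost, best_i = cost, seq[0]
    return best_i

# Simulate a load step (a CPL drawing more current) and let the
# controller pull the dc-link voltage back to its reference.
v, i = 48.0, 1.0
for k in range(200):
    i_load = 1.0 if k < 50 else 3.0          # load step at k = 50
    i = mpc_step(v, i, i_load, v_ref=48.0)
    v = v + 1e-3 / 0.01 * (i - i_load)
```

Raising w_di smooths the buffer current at the cost of slower voltage recovery, which is the kind of tradeoff the abstract refers to.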
Abstract:
Technological maturity and the exponential growth of digital applications are contributing to lifestyle changes worldwide. Consequently, learning and teaching demand more effective sociotechnical interactions involving emerging technologies, as opposed to traditional, conventional face-to-face approaches. In this context, usability engineering is making significant contributions to improving computer- and distance-based learning for both learners and instructors, contributions that have often been ignored when designing online learning and teaching applications. Usability testing is a central part of the human-centered learning approach for developing sustainable STEM education from the socio-technological perspective. Our experiences with usability engineering, and with the impact of teaching low-cost rapid usability testing methods on knowledge translation from undergraduate and graduate courses to real-world practice (i.e. getting the methods out there in real use), are diverse and multi-modal. Our sample space has been hundreds of trained students who have learned how to do effective usability engineering in real-world situations at higher levels of realism (i.e. fidelity) and at a much lower cost than using traditional fixed usability labs. Furthermore, this low-cost rapid approach to usability engineering has been adopted by many of our graduates, now managers, CIOs and the like, who use the methods routinely in their organizations in real-world applications and scenarios. This knowledge has been used to improve the design and implementation of a wide range of applications, including applications designed for teaching and learning.
Abstract:
Classical ballet requires dancers to exercise significant muscle control and strength, both while stationary and when moving. Following the Royal Academy of Dance (RAD) syllabus, 8 male and 27 female dancers (aged 20.2 ± 1.9 yr) in a full-time university undergraduate dance training program were asked to stand in first position for 10 seconds and then perform 10 repeats of a demi-plié exercise to a counted rhythm. Accelerometer records from the wrist, sacrum, knee and ankle were compared with numerical scores from a professional dance instructor. The sacrum-mounted sensor detected lateral tilts of the torso in performances with lower scores (Spearman's rank correlation coefficient r = -0.64, p < 0.005). The RMS acceleration amplitude of the wrist-mounted sensor was positively correlated with the movement scores (Spearman's rank correlation coefficient r = 0.63, p < 0.005). The application of sacrum- and wrist-mounted sensors for biofeedback during dance training is a realistic, low-cost option.
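The Spearman rank correlation used in this abstract is simple to compute by hand: rank both variables, then take the Pearson correlation of the ranks. A minimal pure-Python sketch (the RMS amplitudes and instructor scores below are made-up illustrative numbers, not the study's data):

```python
def spearman_rho(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Tied values receive the average of the ranks they span."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1          # average 1-based rank of the tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Made-up illustration: wrist RMS amplitudes vs instructor scores.
rms_amplitude = [0.52, 0.61, 0.58, 0.70, 0.66]
scores = [5, 7, 6, 9, 8]
rho = spearman_rho(rms_amplitude, scores)   # ranks agree exactly, so rho ≈ 1.0
```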
Abstract:
The use of Wireless Sensor Networks (WSNs) for vibration-based Structural Health Monitoring (SHM) has become a promising approach thanks to advantages such as low cost and fast, flexible deployment. However, inherent technical issues such as data asynchronicity and data loss have prevented these systems from being used extensively. Recently, several SHM-oriented WSNs have been proposed and are believed to overcome a large number of technical uncertainties. Nevertheless, there is limited research verifying the applicability of these WSNs to demanding SHM applications such as modal analysis and damage identification. Based on a brief review, this paper first shows that Data Synchronization Error (DSE) is the most inherent factor amongst the uncertainties of SHM-oriented WSNs. The effects of this factor are then investigated on the outcomes and performance of the most robust Output-only Modal Analysis (OMA) techniques when merging data from multiple sensor setups. The two OMA families selected for this investigation are Frequency Domain Decomposition (FDD) and data-driven Stochastic Subspace Identification (SSI-data), both of which have been widely applied in the past decade. Accelerations collected by a wired sensory system on a large-scale laboratory bridge model are used as benchmark data after a certain level of noise is added to account for the higher presence of this factor in SHM-oriented WSNs. From this source, a large number of simulations were run to generate multiple DSE-corrupted datasets for statistical analysis. The results show the robustness of FDD and the precautions needed for the SSI-data family when dealing with DSE, even at a relaxed level. Finally, the combination of preferred OMA techniques, and the use of channel projection for the time-domain OMA technique to cope with DSE, are recommended.
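Why DSE matters for OMA can be shown with a small numerical sketch: a synchronization delay Δt between two channels shifts their cross-spectrum phase by 2π·f·Δt at each frequency f, which corrupts the inter-channel phase information that mode-shape estimation relies on. The signal, sampling rate and 10 ms delay below are illustrative assumptions, not the paper's data.

```python
import cmath
import math

def dft_bin(signal, k):
    """Single-bin DFT: complex amplitude of `signal` at frequency bin k."""
    n = len(signal)
    return sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
               for t in range(n))

# Two sensor channels observing the same 5 Hz mode; fs = 100 Hz, 2 s record.
fs, f_mode, n = 100, 5, 200
dse = 0.01   # 10 ms data synchronization error on channel 2
ch1 = [math.sin(2 * math.pi * f_mode * t / fs) for t in range(n)]
ch2 = [math.sin(2 * math.pi * f_mode * (t / fs - dse)) for t in range(n)]

k = f_mode * n // fs                      # DFT bin of the mode (5 Hz -> bin 10)
phase_err = cmath.phase(dft_bin(ch2, k) / dft_bin(ch1, k))
# The measured inter-channel phase error equals -2*pi*f_mode*dse,
# even though both channels recorded exactly the same physical motion.
```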
Abstract:
Sensing the mental, physical and emotional demand of a driving task is of primary importance in road safety research and for effectively designing in-vehicle information systems (IVIS). In particular, the need for cars capable of sensing and reacting to the emotional state of the driver has been repeatedly advocated in the literature. Algorithms and sensors that identify patterns of human behavior, such as gestures, speech, eye gaze and facial expression, are becoming available on low-cost hardware. This paper presents a new system that uses surrogate measures, facial expression (emotion) and head pose and movements (intention), to infer task difficulty in a driving situation. 11 drivers were recruited and observed in a simulated driving task that involved several pre-programmed events aimed at eliciting emotive reactions, such as being stuck behind slower vehicles, negotiating intersections and roundabouts, and potentially dangerous situations. The resulting system, combining facial expression and head pose classification, is capable of recognizing dangerous events (such as crashes and near misses) and stressful situations (e.g. intersections and give-way situations) that occur during the simulated drive.
Abstract:
This thesis was a step forward in extracting valuable features of human movement behaviour, in terms of space utilisation, from Media Access Control (MAC) data. The research offered an approach with lower cost and lower computational complexity than existing human movement tracking methods. It was successfully applied on QUT's Gardens Point campus and can be scaled to larger environments and populations. The information this approach extracts from human movement can add significant value to the study of movement behaviour, enhancing future urban and interior design, improving crowd safety and informing evacuation plans.
Abstract:
Engineering design processes are necessary to attain the requisite standards of integrity for high-assurance safety-related systems. Additionally, human factors design initiatives can provide critical insights that parameterise their development. Unfortunately, the popular perception of human factors as a "forced marriage" between engineering and psychology often provokes views in which the 'human factor' is perceived as a threat to systems design. Some popular performance-based standards for developing safety-related systems advocate identifying and managing human factors throughout the system lifecycle. However, they tend to fall short in their guidance on the application of human factors methods and tools, let alone on how the outputs generated can be integrated into the various stages of the design process. This case study describes a project that converged engineering with human factors to develop a safety argument for new low-cost railway level crossing technology for system-wide implementation in Australia. The paper brings together the perspectives of a software engineer and a cognitive psychologist and their involvement over two years of collaborative work to develop the safety argument. Safety and reliability requirements were informed by applying human factors analytical tools that supported the evaluation and quantification of human reliability where users interfaced with the technology. The project team was confronted with significant challenges in cross-disciplinary engagement, particularly the complexities of dealing with incongruences in disciplinary language. They were also encouraged to think 'outside the box' as to how users of a system interpreted system states and behaviour. Importantly, some of these states, while considered safe within the boundary of the constituent systems that implemented safety-related functions, could actually lead users to engage in deviant behaviour.
Psychology explained how user compliance could be eroded to levels that effectively undermined the risk reduction afforded by the systems. By linking the engineering and psychology disciplines, overall safety performance was improved through technical requirements and design decisions that minimised the system states and behaviours leading to user deviancy. As a commentary on the utility of transdisciplinary collaboration for technical specification, the processes used to bridge the two disciplines are conceptualised in a graphical model.
Abstract:
Intelligent Transport System (ITS) technology is seen as a cost-effective way to increase the conspicuity of approaching trains and the effectiveness of train warnings at level crossings by providing an in-vehicle warning of an approaching train. The technology is often seen as a potential low-cost alternative to upgrading passive level crossings with traditional active warning systems (flashing lights and boom barriers). ITS platforms provide sensor, localization and dedicated short-range communication (DSRC) technologies to support cooperative applications such as collision avoidance for road vehicles. In recent years, in-vehicle warning systems based on ITS technology have been trialed at numerous locations around Australia, at level crossing sites with both active and passive controls. While significant research has been conducted on the benefits of the technology in nominal operating modes, little research has focused on the effects of failure modes, the human factors implications of unreliable warnings, and the technology adoption process from the railway industry's perspective. Many ITS technology suppliers originate from the road industry and often have limited awareness of the safety assurance requirements, operational requirements and legal obligations of railway operators. This paper aims to raise awareness of these issues and start a discussion on how such technology could be adopted. It describes several ITS implementation scenarios and discusses failure modes, human factors considerations and the impact these scenarios are likely to have in terms of safety, railway safety assurance requirements and the practicability of meeting these requirements. The paper identifies the key obstacles impeding the adoption of ITS systems for the different implementation scenarios and outlines a possible path towards the adoption of ITS technology.
Abstract:
Precise clock synchronization is essential in emerging time-critical distributed control systems operating over computer networks, where the requirements are mostly focused on relative clock synchronization and high synchronization precision. Existing clock synchronization techniques such as the Network Time Protocol (NTP) and the IEEE 1588 standard can be difficult to apply to such systems because of the highly precise hardware clocks they require, the network congestion caused by a high frequency of synchronization message transmissions, and their high overheads. In response, we present a Time Stamp Counter based precise Relative Clock Synchronization Protocol (TSC-RCSP) for distributed control applications operating over local-area networks (LANs). In our protocol, a software clock based on the TSC register, which counts CPU cycles, is adopted in the time clients and server. TSC-based clocks offer clients a precise, stable and low-cost clock synchronization solution. Experimental results show that clock precision on the order of 10 microseconds can be achieved in small-scale LAN systems. Such precision is much higher than that of a processor's Time-Of-Day clock, and is easily sufficient for most distributed real-time control applications over LANs.
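The abstract does not give the details of TSC-RCSP, but the general idea any relative synchronization protocol builds on can be sketched: a client timestamps a request and its reply, the server timestamps in between, and the client estimates the relative offset assuming a roughly symmetric path (Cristian/NTP-style). The simulation below abstracts the TSC to arbitrary time units; the offset, delay range and sample count are illustrative assumptions.

```python
import random

random.seed(1)
TRUE_OFFSET = 37.0   # server clock leads client clock by 37 units (unknown to client)

def exchange():
    """One request/reply exchange; returns (t0, ts, t1).
    t0, t1 are client-clock timestamps, ts is a server-clock timestamp."""
    t0 = random.uniform(0, 1000)            # client send time
    d1 = random.uniform(5, 15)              # one-way delays (not exactly symmetric)
    d2 = random.uniform(5, 15)
    ts = t0 + d1 + TRUE_OFFSET              # server timestamp, in server time
    t1 = t0 + d1 + d2                       # client receive time
    return t0, ts, t1

def estimate_offset(samples=50):
    """Average offset over several exchanges: offset ≈ ts - (t0 + t1)/2.
    Exact when the two one-way delays are equal; averaging suppresses
    the error caused by delay asymmetry."""
    ests = [ts - (t0 + t1) / 2
            for t0, ts, t1 in (exchange() for _ in range(samples))]
    return sum(ests) / len(ests)

est = estimate_offset()   # lands close to TRUE_OFFSET for this delay model
```

A TSC-based software clock would supply the t0/t1 readings here by scaling the cycle counter to time units; the estimation arithmetic is unchanged.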