560 results for Biological applications
Abstract:
Visual sea-floor mapping is a rapidly growing application for Autonomous Underwater Vehicles (AUVs). AUVs are well-suited to the task as they remove humans from a potentially dangerous environment, can reach depths human divers cannot, and are capable of long-term operation in adverse conditions. Sea-floor maps generated by AUVs have a number of applications in scientific monitoring, from classifying coral in sites of high biological value to surveying sea sponges to evaluate marine environment health.
Abstract:
In this thesis, the author proposed and developed gas sensors made of nanostructured WO3 thin films deposited by a thermal evaporation technique, which gives control over film thickness, grain size and purity. Device fabrication, nanostructured material synthesis, characterization and gas sensing performance have been undertaken. Three different types of nanostructured thin films were synthesized: pure WO3 thin films, iron-doped WO3 thin films produced by co-evaporation, and Fe-implanted WO3 thin films. All films have a thickness of 300 nm. The physical, chemical and electronic properties of these films were optimized by annealing at 300 °C and 400 °C for 2 hours in air. Various analytical techniques were employed to characterize the films. Atomic force microscopy and transmission electron microscopy (TEM) revealed a very small grain size, of the order of 5-10 nm, in as-deposited WO3 films, and annealing at 300 °C or 400 °C did not result in any significant change in grain size. X-ray diffraction (XRD) analysis revealed a highly amorphous structure in as-deposited films. Annealing at 300 °C for 2 hours in air did not improve the crystallinity of these films. However, annealing at 400 °C for 2 hours in air significantly improved the crystallinity of the pure and iron-doped WO3 thin films, whereas it only slightly improved the crystallinity of the iron-implanted WO3 thin film, owing to implantation damage. Rutherford backscattering spectrometry (RBS) revealed an iron content of 0.5 at.% and 5.5 at.% in the iron-doped and iron-implanted WO3 thin films, respectively. The RBS results were confirmed by energy dispersive X-ray spectroscopy (EDX) during TEM analysis of the films. X-ray photoelectron spectroscopy (XPS) revealed a significant lowering of the W 4f7/2 binding energy in all films annealed at 400 °C compared with the as-deposited and 300 °C annealed films.
This lowering of the W 4f7/2 binding energy is due to an increase in the number of oxygen vacancies in the films and is considered highly beneficial for gas sensing. Raman analysis revealed that the 400 °C annealed films, except the iron-implanted film, are highly crystalline with a significant number of O-W-O bonds, consistent with the XRD results. Additionally, XRD, XPS and Raman analyses showed no evidence of secondary peaks corresponding to compounds of iron arising from the doping or implantation. This indicates that iron was incorporated into the host WO3 matrix rather than present as a separate dispersed compound or as a catalyst on the surface. WO3 thin film based gas sensors are known to operate efficiently in the temperature range 200-500 °C. In the present study, by optimizing the physical, chemical and electronic properties through heat treatment and doping, an optimum response to H2, ethanol and CO has been achieved at a low operating temperature of 150 °C. The pure WO3 thin film annealed at 400 °C showed the highest sensitivity towards H2 at 150 °C due to its very small grain size and porosity, coupled with a high number of oxygen vacancies, whereas the Fe-doped WO3 film annealed at 400 °C showed the highest sensitivity to ethanol at 150 °C due to its crystallinity, increased number of oxygen vacancies and higher degree of crystal distortion attributed to the Fe addition. Pure WO3 films are known to be insensitive to CO, but the iron-doped WO3 thin films annealed at 300 °C and 400 °C showed an optimum response to CO at an operating temperature of 150 °C. This result is attributed to lattice distortions produced in the WO3 host matrix by iron incorporated as a substitutional impurity. However, the iron-implanted WO3 thin films did not show any promising response towards the tested gases, as the film structure was damaged by implantation and annealing at 300 °C or 400 °C was not sufficient to induce crystallinity in these films.
This study has demonstrated enhanced sensing properties of WO3 thin film sensors towards CO at a lower operating temperature, achieved by optimizing the physical, chemical and electronic properties of the WO3 film through Fe doping and annealing. The study can be extended to systematically investigate the effects of different Fe concentrations (0.5 at.% to 10 at.%) on the sensing performance of WO3 thin film gas sensors towards CO.
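The abstract reports sensitivity without defining the response metric; a common convention for n-type metal-oxide sensors such as WO3 exposed to reducing gases (H2, ethanol, CO) is the resistance ratio S = R_air / R_gas. A minimal sketch under that assumption (the readings are illustrative, not the thesis's data):

```python
def sensor_response(r_air: float, r_gas: float) -> float:
    """Response S = R_air / R_gas for an n-type oxide in a reducing gas.

    The film's resistance drops when the gas is present, so S > 1
    indicates detection; a larger S means higher sensitivity.
    """
    if r_air <= 0 or r_gas <= 0:
        raise ValueError("resistances must be positive")
    return r_air / r_gas

# Hypothetical resistance readings (ohms) for a film held at 150 °C:
print(sensor_response(2.0e6, 4.0e5))  # -> 5.0
```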
Abstract:
Structural health monitoring (SHM) refers to the procedures used to assess the condition of structures so that their performance can be monitored and any damage detected early. Early detection of damage, followed by appropriate retrofitting, helps prevent failure of the structure, saves money spent on maintenance or replacement, and ensures the structure operates safely and efficiently throughout its intended life. Though visual inspection and other techniques, such as vibration-based methods, are available for SHM of structures such as bridges, the acoustic emission (AE) technique is an attractive option and is increasing in use. AE waves are high-frequency stress waves generated by the rapid release of energy from localised sources within a material, such as crack initiation and growth. The AE technique involves recording these waves by means of sensors attached to the surface and then analysing the signals to extract information about the nature of the source. High sensitivity to crack growth, the ability to locate the source, its passive nature (no need to supply energy from outside, as energy from the damage source itself is utilised) and the possibility of real-time monitoring (detecting a crack as it occurs or grows) are some of the attractive features of the AE technique. In spite of these advantages, challenges remain in using the AE technique for monitoring applications, especially in the analysis of recorded AE data, as large volumes of data are usually generated during monitoring. The need for effective data analysis can be linked with three main aims of monitoring: (a) accurately locating the source of damage; (b) identifying and discriminating signals from different sources of acoustic emission; and (c) quantifying the level of damage of an AE source for severity assessment. In the AE technique, the location of the emission source is usually calculated using the times of arrival and velocities of the AE signals recorded by a number of sensors.
However, complications arise because AE waves can travel through a structure in a number of different modes that have different velocities and frequencies. Hence, to accurately locate a source it is necessary to identify the modes recorded by the sensors. This study proposed and tested the use of time-frequency analysis tools, such as the short-time Fourier transform, to identify the modes, and the use of the velocities of these modes to achieve very accurate results. Further, this study explored the possibility of reducing the number of sensors needed for data capture by using the velocities of modes captured by a single sensor for source localization. A major problem in practical use of the AE technique is the presence of AE sources other than crack-related ones, such as rubbing and impacts between different components of a structure. These spurious AE signals often mask the signals from crack activity; hence, discriminating between signals to identify their sources is very important. This work developed a model that uses different signal processing tools, such as cross-correlation, magnitude squared coherence and energy distribution in different frequency bands, as well as modal analysis (comparing amplitudes of identified modes), to accurately differentiate signals from different simulated AE sources. Quantification tools to assess the severity of damage sources are highly desirable in practical applications. Though different damage quantification methods have been proposed for the AE technique, not all have achieved universal acceptance or been shown suitable for all situations. The b-value analysis, which involves the study of the distribution of amplitudes of AE signals, and its modified form (known as improved b-value analysis) were investigated for their suitability for damage quantification in ductile materials such as steel.
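The arrival-time location principle described above can be illustrated in its simplest form: one-dimensional linear location between two sensors, once a wave mode (and hence its velocity) has been identified. The function below is an illustrative sketch, not the thesis's implementation:

```python
def linear_source_location(delta_t: float, sensor_spacing: float,
                           mode_velocity: float) -> float:
    """Locate an AE source on a line between two sensors.

    delta_t        : arrival time at sensor 1 minus arrival time at sensor 2 (s)
    sensor_spacing : distance d between the two sensors (m)
    mode_velocity  : velocity v of the identified wave mode (m/s)
    Returns the source distance x from sensor 1 (m).

    Derivation: t1 = x / v and t2 = (d - x) / v, so
    delta_t = t1 - t2 = (2x - d) / v  =>  x = (d + v * delta_t) / 2.
    """
    return (sensor_spacing + mode_velocity * delta_t) / 2.0

# Sensors 2 m apart, mode velocity 5000 m/s, sensor 2 triggered 0.1 ms earlier:
print(linear_source_location(1.0e-4, 2.0, 5000.0))  # -> 1.25 (m from sensor 1)
```

Mode identification matters here because using the wrong mode velocity shifts the computed location in direct proportion to the velocity error.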
These were found to give encouraging results for the analysis of laboratory data, thereby extending the possibility of their use for real-life structures. By addressing these primary issues, it is believed that this thesis has helped improve the effectiveness of the AE technique for structural health monitoring of civil infrastructure such as bridges.
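The b-value analysis mentioned above fits the cumulative amplitude distribution, commonly written log10 N(>=A) = a - b*(A/20) with A in dB; a decreasing b-value signals a growing proportion of high-amplitude hits. The sketch below estimates b with an ordinary least-squares fit over fixed amplitude bins (a simplification of the improved b-value procedure):

```python
import math

def b_value(amplitudes_db, bin_width=5.0):
    """Estimate the AE b-value from hit amplitudes (dB).

    Fits log10 N(>=A) = a - b * (A / 20) by least squares, where N(>=A)
    is the cumulative count of hits with amplitude at least A.
    Requires amplitudes spanning at least two bins.
    """
    threshold = min(amplitudes_db)
    top = max(amplitudes_db)
    xs, ys = [], []
    while threshold <= top:
        n = sum(1 for a in amplitudes_db if a >= threshold)
        if n > 0:
            xs.append(threshold / 20.0)       # A / 20
            ys.append(math.log10(n))          # log10 of cumulative count
        threshold += bin_width
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return -slope  # b is the negated slope of the fitted line
```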
Abstract:
Digital information that is place- and time-specific is increasingly becoming available on all aspects of the urban landscape. People (cf. the Social Web), places (cf. the Geo Web), and physical objects (cf. ubiquitous computing, the Internet of Things) are increasingly infused with sensors and actuators, and tagged with a wealth of digital information. Urban informatics research explores these emerging digital layers of the city at the intersection of people, place and technology. However, little is known about the challenges and new opportunities that these digital layers may offer to road users driving through today's mega-cities. We argue that this aspect is worth exploring, in particular with regard to Auto-UI's overarching goal of making cars both safer and more enjoyable. This paper presents the findings of a pilot study in which 14 urban informatics research experts participated in a guided ideation (idea creation) workshop within a simulated environment. They were immersed in different driving scenarios to imagine novel urban informatics applications specific to the driving context.
Abstract:
Many substation applications require accurate time-stamping. The performance of systems such as the Network Time Protocol (NTP), IRIG-B and one pulse per second (1-PPS) has been sufficient to date. However, new applications, including the IEC 61850-9-2 process bus and phasor measurement, require accuracy of one microsecond or better. Furthermore, process bus applications are taking time synchronisation out into high-voltage switchyards, where cable lengths may have an impact on timing accuracy. IEEE Std 1588, the Precision Time Protocol (PTP), is the method preferred by the smart grid standardisation roadmaps (from both the IEC and the US National Institute of Standards and Technology) for achieving this higher level of performance, and it integrates well into Ethernet-based substation automation systems. Significant benefits of PTP include automatic path length compensation, support for redundant time sources and the cabling efficiency of a shared network. This paper benchmarks the performance of established IRIG-B and 1-PPS synchronisation methods over a range of path lengths representative of a transmission substation. The performance of PTP using the same distribution system is then evaluated and compared to the existing methods to determine whether the performance justifies the additional complexity. Experimental results show that a PTP timing system maintains the synchronising performance of 1-PPS and IRIG-B timing systems when using the same fibre optic cables, and further meets the needs of process buses in large substations.
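The automatic path length compensation credited to PTP above comes from the delay request-response exchange defined in IEEE Std 1588: four timestamps yield both the slave's clock offset and the mean path delay, assuming a symmetric path. A minimal sketch:

```python
def ptp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
    """Offset and mean path delay from one PTP delay request-response exchange.

    t1: master sends Sync         (master clock)
    t2: slave receives Sync       (slave clock)
    t3: slave sends Delay_Req     (slave clock)
    t4: master receives Delay_Req (master clock)
    Assumes master-to-slave and slave-to-master delays are equal.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0  # slave ahead of master if positive
    delay = ((t2 - t1) + (t4 - t3)) / 2.0   # one-way path delay
    return offset, delay

# Slave running 500 ns ahead over a 100 ns path (timestamps in ns):
print(ptp_offset_and_delay(0.0, 600.0, 1000.0, 600.0))  # -> (500.0, 100.0)
```

Path asymmetry breaks the assumption in the last line of the docstring, which is why cable lengths in switchyards matter for timing accuracy.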
Abstract:
This article sets out the results of an empirical research study into the uses to which the Australian patent system is being put in the early 21st century. The focus of the study is business method patents, which are of interest because they are a controversial class of patents, thought to differ significantly from the mechanical, chemical and industrial inventions that have traditionally been the mainstay of the patent system. The purpose of the study is to understand what sort of business method patent applications have been lodged in Australia in the first decade of this century and how the patent office is responding to those applications.
Abstract:
The interaction between host and donor cells is believed to play an important role in osteogenesis. However, it is still unclear how donor osteogenic cells behave and interact with host cells in vivo. The purpose of this study was to track the interactions between transplanted osteogenic cells and host cells during osteogenesis. An in vitro migration assay was carried out to investigate the ability of osteogenically differentiated human mesenchymal stem cells (O-hMSCs) to recruit MSCs. At the in vivo level, O-hMSCs were implanted subcutaneously or into skull defects in severe combined immunodeficient (SCID) mice. New bone formation was observed by micro-CT and histological procedures. In situ hybridization (ISH) against human Alu sequences was performed to distinguish donor osteogenic cells from host cells. The in vitro migration assay revealed an increased migration potential of MSCs when co-cultured with O-hMSCs. In agreement with the in vitro results, ISH against human Alu sequences showed that host mouse MSCs migrated in large numbers into the transplantation site in response to O-hMSCs. Interestingly, host cells recruited by O-hMSCs were the major cell populations in newly formed bone tissues, indicating that O-hMSCs can trigger and initiate osteogenesis when transplanted in orthotopic sites. The observations from this study demonstrate that in vitro induced O-hMSCs were able to attract host MSCs in vivo and were involved in osteogenesis together with host cells, which may be of importance for bone tissue-engineering applications.
Abstract:
The representation of business process models has been a continuing research topic for many years. However, many process model representations have not developed beyond minimally interactive 2D icon-based representations of directed graphs and networks, with little or no annotation for information overlays. With the rise of desktop computers and commodity mobile devices capable of supporting rich interactive 3D environments, we believe that much of the research performed in human-computer interaction, virtual reality, games and interactive entertainment has great potential in areas of BPM: to engage, provide insight, and promote collaboration amongst analysts and stakeholders alike. This initial visualization workshop seeks to initiate the development of a high-quality international forum to present and discuss research in this field. Via this workshop, we intend to create a community to unify and nurture the development of process visualization topics as a continuing research area.
Abstract:
Background: Outside the mass-spectrometer, proteomics research does not take place in a vacuum. It is affected by policies on funding and research infrastructure. Proteomics research both impacts and is impacted by potential clinical applications. It provides new techniques & clinically relevant findings, but the possibilities for such innovations (and thus the perception of the potential for the field by funders) are also impacted by regulatory practices and the readiness of the health sector to incorporate proteomics-related tools & findings. Key to this process is how knowledge is translated. Methods: We present preliminary results from a multi-year social science project, funded by the Canadian Institutes of Health Research, on the processes and motivations for knowledge translation in the health sciences. The proteomics case within this wider study uses qualitative methods to examine the interplay between proteomics science and regulatory and policy makers regarding clinical applications of proteomics. Results: Adopting an interactive format to encourage conference attendees’ feedback, our poster focuses on deficits in effective knowledge translation strategies from the laboratory to policy, clinical, & regulatory arenas. An analysis of the interviews conducted to date suggests five significant choke points: the changing priorities of funding agencies; the complexity of proteomics research; the organisation of proteomics research; the relationship of proteomics to genomics and other omics sciences; and conflict over the appropriate role of standardisation. Conclusion: We suggest that engagement with aspects of knowledge translation, such as those mentioned above, is crucially important for the eventual clinical application of proteomics science on any meaningful scale.
Abstract:
New substation automation applications, such as sampled value process buses and synchrophasors, require sampling accuracy of 1 µs or better. The Precision Time Protocol (PTP), IEEE Std 1588, achieves this level of performance and integrates well into Ethernet based substation networks. This paper takes a systematic approach to the performance evaluation of commercially available PTP devices (grandmaster, slave, transparent and boundary clocks) from a variety of manufacturers. The "error budget" is set by the performance requirements of each application. The "expenditure" of this error budget by each component is valuable information for a system designer. The component information is used to design a synchronization system that meets the overall functional requirements. The quantitative performance data presented shows that this testing is effective and informative. Results from testing PTP performance in the presence of sampled value process bus traffic demonstrate the benefit of a "bottom up" component testing approach combined with "top down" system verification tests. A test method that uses a precision Ethernet capture card, rather than dedicated PTP test sets, to determine the Correction Field Error of transparent clocks is presented. This test is particularly relevant for highly loaded Ethernet networks with stringent timing requirements. The methods presented can be used for development purposes by manufacturers, or by system integrators for acceptance testing. A sampled value process bus was used as the test application for the systematic approach described in this paper. The test approach was applied, components were selected, and the system performance verified to meet the application's requirements. Systematic testing, as presented in this paper, is applicable to a range of industries that use, rather than develop, PTP for time transfer.
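The "error budget" bookkeeping described above can be sketched as a worst-case sum of per-component timing errors checked against the application's limit. The component names and figures below are illustrative, not the paper's measured values:

```python
def check_error_budget(component_errors_ns: dict, budget_ns: float = 1000.0):
    """Sum each component's worst-case timing error (its 'expenditure') and
    report whether the chain stays within the application budget
    (default 1 us, the accuracy needed by sampled value process buses)."""
    total = sum(component_errors_ns.values())
    return total, total <= budget_ns

# Hypothetical grandmaster -> transparent clock -> slave chain (errors in ns):
chain = {"grandmaster": 200.0, "transparent_clock": 150.0, "slave_clock": 300.0}
print(check_error_budget(chain))  # -> (650.0, True): 350 ns of headroom
```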
Abstract:
The most common software analysis tools available for measuring fluorescence images handle two-dimensional (2D) data, rely on manual settings for inclusion and exclusion of data points, and use computer-aided pattern recognition to support the interpretation and findings of the analysis. It has become increasingly important to be able to measure fluorescence images constructed from three-dimensional (3D) datasets in order to capture the complexity of cellular dynamics and understand the basis of cellular plasticity within biological systems. Sophisticated microscopy instruments have permitted the visualization of 3D fluorescence images through the acquisition of multispectral fluorescence images and powerful analytical software that reconstructs the images from confocal stacks, providing a 3D representation of the collected 2D images. Advanced design-based stereology methods have progressed beyond the approximations and assumptions of the original model-based stereology(1), even in complex tissue sections(2). Despite these scientific advances in microscopy, a need remains for an automated analytic method that fully exploits the intrinsic 3D data to allow for the analysis and quantification of complex changes in cell morphology, protein localization and receptor trafficking. Current tools available to quantify fluorescence images include MetaMorph (Molecular Devices, Sunnyvale, CA) and ImageJ (NIH), which provide manual analysis. Imaris (Andor Technology, Belfast, Northern Ireland) software provides the MeasurementPro feature, which allows the manual creation of measurement points that can be placed in a volume image or drawn on a series of 2D slices to create a 3D object. This method is useful for single-click point measurements to measure a line distance between two objects or to create a polygon that encloses a region of interest, but it is difficult to apply to complex cellular network structures.
Filament Tracer (Andor) allows automatic detection of 3D neuronal filament-like structures; however, this module has been developed to measure defined structures such as neurons, which are comprised of dendrites, axons and spines (a tree-like structure). The module has been ingeniously utilized to make morphological measurements of non-neuronal cells(3); however, the output data describe an extended cellular network using software that depends on a defined cell shape rather than an amorphous-shaped cellular model. To overcome the issue of analyzing amorphous-shaped cells and make the software more suitable for biological applications, Imaris developed Imaris Cell. This scientific project with the Eidgenössische Technische Hochschule was developed to calculate the relationship between cells and organelles. While the software enables the detection of biological constraints, by forcing one nucleus per cell and using cell membranes to segment cells, it cannot be used to analyze fluorescence data that are not continuous, because ideally it builds the cell surface without void spaces. To our knowledge, no user-modifiable automated approach has yet been developed that provides morphometric information from 3D fluorescence images and achieves cellular spatial information for an undefined shape (Figure 1). We have developed an analytical platform using the Imaris core software module and Imaris XT interfaced to MATLAB (MathWorks, Inc.). These tools allow the 3D measurement of cells without a pre-defined shape and with inconsistent fluorescence network components. Furthermore, this method will allow researchers who have extensive expertise in biological systems, but are unfamiliar with computer applications, to perform quantification of morphological changes in cell dynamics.
Abstract:
Smart antenna receiver and transmitter systems consist of multi-port arrays with an individual receiver channel (including ADC) and an individual transmitter channel (including DAC) at each of the M antenna ports. By means of digital beamforming, an unlimited number of simultaneous complex-valued vector radiation patterns with M-1 degrees of freedom can be formed. Applications of smart antennas in communication systems include space-division multiple access. If both stations of a communication link are equipped with smart antennas (multiple-input multiple-output, MIMO), multiple independent channels can be formed in a "multi-path-rich" environment. In this article, it is shown that under certain circumstances the correlation between signals from adjacent ports of a dense array (M + ΔM elements) can be kept as low as the correlation between signals from adjacent ports of a conventional array (M elements at half-wavelength spacing). This attractive feature is attained by means of a novel approach that employs an RF decoupling network at the array ports to form new ports which are decoupled and associated with mutually orthogonal (de-correlated) radiation patterns.
Abstract:
Purpose: Arbitrary numbers of corneal confocal microscopy images have been used for analysis of corneal subbasal nerve parameters under the implicit assumption that they are a representative sample of the central corneal nerve plexus. The purpose of this study is to present a technique for quantifying the number of random central corneal images required to achieve an acceptable level of accuracy in the measurement of corneal nerve fiber length and branch density. Methods: Every possible combination of 2 to 16 images (where the mean of all 16 was deemed the true mean) of the central corneal subbasal nerve plexus, not overlapping by more than 20%, was assessed for nerve fiber length and branch density in 20 subjects with type 2 diabetes and varying degrees of functional nerve deficit. Mean ratios were calculated to allow comparisons between and within subjects. Results: In assessing nerve branch density, eight randomly chosen images not overlapping by more than 20% produced an average that was within 30% of the true mean 95% of the time. A similar sampling strategy of five images gave an average within 13% of the true mean 80% of the time for corneal nerve fiber length. Conclusions: The “sample combination analysis” presented here can be used to determine the sample size required for a desired level of accuracy in quantification of corneal subbasal nerve parameters. This technique may have applications in other biological sampling studies.
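The "sample combination analysis" can be sketched as an exhaustive enumeration of image subsets compared against the all-image ("true") mean. This toy version ignores the 20% overlap constraint and uses made-up per-image values:

```python
from itertools import combinations
from statistics import mean

def fraction_within_tolerance(image_values, sample_size, tolerance):
    """Fraction of all `sample_size`-image combinations whose mean lies
    within `tolerance` (e.g. 0.30 for 30%) of the true (all-image) mean."""
    true_mean = mean(image_values)
    combos = list(combinations(image_values, sample_size))
    hits = sum(1 for c in combos
               if abs(mean(c) - true_mean) <= tolerance * true_mean)
    return hits / len(combos)

# Hypothetical per-image nerve fiber length values for four images:
print(fraction_within_tolerance([10, 12, 14, 20], 2, 0.1))
```

Sweeping `sample_size` upward until the returned fraction exceeds the desired confidence (e.g. 0.95) reproduces the sample-size determination the study describes.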
Abstract:
Most social network users hold more than one social network account and use them in different ways depending on the digital context: for example, friendly chat on Facebook, professional discussion on LinkedIn, and health information exchange on PatientsLikeMe. Thus, many web users need to manage many disparate profiles across many distributed online sources. Maintaining these profiles is cumbersome, time-consuming and inefficient, and leads to lost opportunities. In this paper we propose a framework for multiple profile management across online social networks and showcase a demonstrator utilising an open source platform. The result of the research enables a user to create and manage an integrated profile and share/synchronise their profiles with their social networks. A number of use cases were created to capture the functional requirements and describe the interactions between users and the online services. An innovative application of this project is in public health informatics. We utilize the prototype to examine how the framework can benefit patients and physicians. The framework can greatly enhance health information management for patients and, more importantly, offer physicians a more comprehensive personal health overview of patients.
Abstract:
As a result of rapid urbanisation, population growth, changes in lifestyle, pollution and the impacts of climate change, water provision has become a critical challenge for planners and policy-makers. In the wake of increasingly difficult water provision and drought, the notion that freshwater is a finite and vulnerable resource is increasingly being recognised. Many city administrations around the world are struggling to provide water security for their residents to maintain lifestyle and economic growth. This chapter reviews the global challenge of providing freshwater to sustain lifestyles and economic growth, and the contributing challenges of climate change, urbanisation, population growth and problems in rainfall distribution. The chapter then evaluates major alternatives to current water sources, such as conservation, recycling and reclamation, and desalination. Integrated water resource management is briefly examined to explore its role in complementing water provision. A comparative study of alternative resources is undertaken to evaluate their strengths, weaknesses, opportunities and constraints, and the results are discussed.