Abstract:
Access to dietetic care is important in chronic disease management, and innovative technologies assist in this purpose. Photographic dietary records (PhDR) using mobile phones or cameras are valid and convenient for patients. Innovations in providing dietary interventions via telephone and computer can also inform dietetic practice. Three studies are presented. A mobile phone method was validated by comparing energy intake (EI) to a weighed food record and a measure of energy expenditure (EE) obtained using the doubly labelled water technique in 10 adults with T2 diabetes. The level of agreement between mean (±SD) energy intake from the mobile phone method (8.2±1.7 MJ) and the weighed record (8.5±1.6 MJ) was high (p=0.392); however, EI/EE for both methods indicated similar levels of under-reporting (0.69 and 0.72). All subjects preferred the mobile phone over the weighed record. Nineteen individuals with Parkinson's disease kept 3-day PhDRs on three occasions using point-and-shoot digital cameras over a 12-week period. The camera was rated as easy to use by 89%, keeping a PhDR was considered acceptable by 94%, and none would rather use a “pen and paper” method. Eighty-three percent felt confident to use the camera again to record intake. An interactive, automated telephone system designed to coach people with T2 diabetes to adopt and maintain diabetes self-care behaviours, including nutrition, showed trends for improvements in total fat, saturated fat and vegetable intake of the intervention group compared to control participants over 6 months. Innovative technologies are acceptable to patients with chronic conditions and can be incorporated into dietetic care.
Abstract:
Suspension bridges meet the steadily growing demand for lighter and longer bridges in today’s infrastructure systems. These bridges are designed to have long life spans, but with age, their main cables and hangers could suffer from corrosion and fatigue. There is a need for a simple and reliable procedure to detect and locate such damage, so that appropriate retrofitting can be carried out to prevent bridge failure. Damage in a structure causes changes in its properties (mass, damping and stiffness), which in turn cause changes in its vibration characteristics (natural frequencies, modal damping and mode shapes). Methods based on modal flexibility, which depends on both the natural frequencies and mode shapes, have the potential for damage detection. They have been applied successfully to beam and plate elements, trusses and simple structures in reinforced concrete and steel. However, very limited applications to damage detection in suspension bridges have been reported to date. This paper examines the potential of modal flexibility methods for damage detection and localization in a suspension bridge under different damage scenarios in the main cables and hangers using numerical simulation techniques. A validated finite element model (FEM) of a suspension bridge is used to acquire mass-normalized mode shape vectors and natural frequencies at the intact and damaged states. Damage scenarios are simulated in the validated FE model by varying the stiffness of the damaged structural members. The capability of a damage index based on modal flexibility to detect and locate damage is evaluated. Results confirm that modal flexibility-based methods have the ability to successfully identify damage in suspension bridge main cables and hangers.
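Such methods rest on the fact that the flexibility matrix can be approximated from the first few mass-normalised modes, F ≈ Σ_i φ_i φ_iᵀ / ω_i², and that damage is indicated where flexibility changes most between the intact and damaged states. The sketch below is a minimal illustration of that computation; the array layout and the simple column-wise index are assumptions, not the paper's exact damage index.

```python
import numpy as np

def modal_flexibility(mode_shapes, frequencies_hz):
    """Approximate the flexibility matrix from the first m mass-normalised
    mode shapes (columns of `mode_shapes`, n_dof x m) and natural
    frequencies: F ~= sum_i phi_i phi_i^T / omega_i^2."""
    omegas = 2.0 * np.pi * np.asarray(frequencies_hz)      # rad/s
    return (mode_shapes / omegas**2) @ mode_shapes.T       # n_dof x n_dof

def flexibility_damage_index(phi_intact, f_intact, phi_damaged, f_damaged):
    """Column-wise maximum absolute change in modal flexibility; the DOF
    with the largest value is flagged as the likely damage location."""
    delta = (modal_flexibility(phi_damaged, f_damaged)
             - modal_flexibility(phi_intact, f_intact))
    return np.max(np.abs(delta), axis=0)                    # one value per DOF
```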
Abstract:
Composite steel-concrete structures experience non-linear effects which arise from both instability-related geometric non-linearity and from material non-linearity in all of their component members. Because of this, conventional design procedures cannot capture the true behaviour of a composite frame throughout its full loading range, and so a procedure to account for these non-linearities is much needed. This paper therefore presents a numerical procedure capable of addressing geometric and material non-linearities at the strength limit state based on the refined plastic hinge method. The different material non-linearities of different composite structural components, such as T-beams, concrete-filled tubular (CFT) and steel-encased reinforced concrete (SRC) sections, can be treated using a routine numerical procedure for their section properties in this plastic hinge approach. Simple and conservative initial and full yield surfaces for general composite sections are proposed in this paper. The refined plastic hinge approach models springs at the ends of the element which are activated when the surface defining the interaction of bending and axial force at first yield is reached; a transition from the first yield interaction surface to the fully plastic interaction surface is postulated based on a proposed refined spring stiffness, which formulates the load-displacement relation for material non-linearity under the interaction of bending and axial actions. This produces a benign method for a beam-column composite element under general loading cases. Another main feature of this paper is that, for members containing a point of contraflexure, the location of this point is determined with a simple application of the method herein and a node is then located at this position to reproduce the real flexural behaviour and associated material non-linearity of the member. Recourse is made to an updated Lagrangian formulation to consider geometric non-linear behaviour and to develop a non-linear solution strategy. The formulation with the refined plastic hinge approach is efficacious and robust, and so a full frame analysis incorporating geometric and material non-linearity is tractable. By way of contrast, the plastic zone approach possesses the drawback of strain-based procedures which rely on determining plastic zones within a cross-section and which require lengthwise integration. Following development of the theory, its application is illustrated with a number of varied examples.
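The central idea (an end spring that is effectively rigid inside the first-yield surface, carries no stiffness once the fully plastic surface is reached, and degrades in between) can be pictured with the sketch below. The linear P-M interaction and the particular degradation rule are simplified stand-ins for the yield surfaces and refined spring stiffness proposed in the paper, and all names are hypothetical.

```python
def hinge_spring_stiffness(P, M, P_full, M_full, alpha_e, EI, L, k_rigid=1e12):
    """Illustrative end-spring stiffness for a refined plastic hinge.
    alpha is a force-state parameter on a simplified linear P-M interaction
    surface: alpha_e marks first yield, 1.0 marks full plasticity."""
    alpha = abs(P) / P_full + abs(M) / M_full
    if alpha <= alpha_e:
        return k_rigid          # elastic range: hinge behaves as rigid
    if alpha >= 1.0:
        return 0.0              # fully plastic: hinge offers no further stiffness
    # gradual degradation between first yield and full plasticity,
    # scaled by a reference flexural stiffness 6EI/L
    return 6.0 * EI / L * (1.0 - alpha) / (alpha - alpha_e)
```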
Abstract:
Collaborative contracting has emerged over the past 15 years as an innovative project delivery framework that is particularly suited to infrastructure projects. Australia leads the world in the development of project and program alliance approaches to collaborative delivery. These approaches are considered to promise superior project results. However, very little is known about the learning routines that are most widely used in support of collaborative projects in general and alliance projects in particular. The literature on absorptive capacity and dynamic capabilities indicates that such learning enhances project performance. The learning routines employed at corporate level during the operation of collaborative infrastructure projects in Australia were examined through a large survey conducted in 2013. This paper presents a descriptive summary of the preliminary findings. The survey captured the experiences of 320 practitioners of collaborative construction projects, including public and private sector clients, contractors, consultants and suppliers (three per cent of projects were located in New Zealand, but for brevity’s sake the sample is referred to as Australian). The majority of projects identified used alliances (78.6%), whilst 9% used Early Contractor Involvement (ECI) contracts and 2.7% used Early Tender Involvement contracts, which are ‘slimmer’ types of collaborative contract. The remaining 9.7% of respondents used traditional contracts that employed some collaborative elements. The majority of projects were delivered for public sector clients (86.3%) and/or clients experienced with asset procurement (89.6%). All of the projects delivered infrastructure assets: one third in the road sector, one third in the water sector, one fifth in the rail sector, and the rest spread across energy, building and mining. Learning routines were explored within three interconnected phases: knowledge exploration, transformation and exploitation. The results show that explorative and exploitative learning routines were applied to a similar extent, whereas transformative routines were applied to a relatively low extent. It was also found that the most highly applied routine was ‘regularly applying new knowledge to collaborative projects’, and the least popular routine was ‘staff incentives to encourage information sharing about collaborative projects’. Future research planned by the authors will examine the impact of these routines on project performance.
Abstract:
Guaranteeing the quality of extracted features that describe relevant knowledge to users or topics is a challenge because of the large number of extracted features. Most popular existing term-based feature selection methods suffer from extracting noisy features that are irrelevant to user needs. One popular remedy is to extract phrases or n-grams to describe the relevant knowledge; however, extracted n-grams and phrases usually contain a lot of noise. This paper proposes a method for reducing the noise in n-grams. The method first extracts more specific features (terms) to remove noisy features. It then uses an extended random set to accurately weight n-grams based on their distribution in the documents and the distribution of their constituent terms across n-grams. The proposed approach not only reduces the number of extracted n-grams but also improves performance. The experimental results on the Reuters Corpus Volume 1 (RCV1) data collection and TREC topics show that the proposed method significantly outperforms state-of-the-art methods underpinned by Okapi BM25, tf*idf and Rocchio.
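The weighting idea can be pictured as combining two distributions: how an n-gram is spread over documents, and how its constituent terms are spread over all extracted n-grams. The sketch below is one plausible, simplified reading of that idea (all names are hypothetical); it is not the paper's extended-random-set formulation.

```python
from collections import Counter

def weight_ngrams(docs_ngrams):
    """docs_ngrams: one list of extracted n-grams per document, where an
    n-gram is a tuple of terms. The weight combines the n-gram's spread
    over documents with its terms' spread over all extracted n-grams
    (a simplified illustration only)."""
    ngram_df = Counter()    # in how many documents each n-gram occurs
    term_freq = Counter()   # how often each term occurs inside extracted n-grams
    for doc in docs_ngrams:
        ngram_df.update(set(doc))
        for g in doc:
            term_freq.update(g)
    total_terms = sum(term_freq.values())
    weights = {}
    for g, df in ngram_df.items():
        doc_support = df / len(docs_ngrams)
        term_support = sum(term_freq[t] for t in g) / (len(g) * total_terms)
        weights[g] = doc_support * term_support
    return weights

# Example: two tiny "documents" of extracted bigrams.
docs = [[("stock", "market"), ("market", "crash")],
        [("stock", "market"), ("interest", "rate")]]
print(weight_ngrams(docs))
```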
Abstract:
We have developed a method to test the cytotoxicity of wound dressings, ointments, creams and gels used in our Burn Centre, by placing them on a permeable Nunc polycarbonate cell culture insert incubated with a monolayer of cells (HaCaTs and primary human keratinocytes). METHODS: We used two different methods to determine the relative toxicity to cells. (1) Photo visualisation: the dressings or compounds were positioned on the insert's membrane, which was placed onto the tissue culture plate containing the cell monolayer. After 24 h the surviving adherent cells were stained with Toluidine Blue and photos of the plates were taken. The acellular area, where non-adherent dead cells had been washed off with buffer, was measured as a percentage of the total area of the plate. (2) Cell count of surviving cells: after 24 h incubation with the test material, the remaining cells were detached with trypsin, spun down and counted in a haemocytometer with Trypan Blue, which differentiates between live and dead cells. RESULTS: Seventeen products were tested. The least cytotoxic products were Melolite, white soft paraffin and Chlorsig 1% ointment. Some cytotoxicity was shown with Jelonet, Mepitel®, PolyMem®, DuoDerm® and Xeroform. The most cytotoxic products included those which contained silver or chlorhexidine, and Paraffin Cream, a moisturiser which contains the preservative chlorocresol. CONCLUSION: This in vitro cell culture insert method allows testing of agents without direct cell contact. It is easy and quick to perform, and should help the clinician to determine the relative cytotoxicity of various dressings and the optimal dressing for each individual wound.
Abstract:
Olfactory ensheathing cells (OECs) play an important role in the continuous regeneration of the primary olfactory nervous system throughout life and in the regeneration of olfactory neurons after injury. While it is known that several individual OEC subpopulations with distinct properties exist in different anatomical locations, it remains unclear how these different subpopulations respond to a major injury. We have examined the proliferation of OECs from one distinct location, the peripheral accessory olfactory nervous system, following large-scale injury (bulbectomy) in mice. We used crosses of two transgenic reporter mouse lines, S100β-DsRed and OMP-ZsGreen, to visualise OECs and main/accessory olfactory neurons, respectively. We surgically removed one olfactory bulb, including the accessory olfactory bulb, to induce degeneration, and found that accessory OECs in the nerve bundles that terminate in the accessory olfactory bulb responded by increased proliferation, with a peak occurring 2 days after the injury. To label proliferating cells, we used the thymidine analogue ethynyl deoxyuridine (EdU), delivered intranasally instead of by intraperitoneal injection. We compared and quantified the number of proliferating cells in different regions at one and four days after EdU labelling by the two delivery methods and found that the intranasal delivery method was as effective as intraperitoneal injection. We demonstrated that accessory OECs actively respond to widespread degeneration of accessory olfactory axons by proliferating. These results have important implications for selecting the source of OECs for neural regeneration therapies and show that intranasal delivery of EdU is an efficient and reliable method for assessing proliferation of olfactory glia.
Abstract:
Purpose of review: Artificial corneas are being developed to meet a shortage of donor corneas as well as to address cases where allografting is contraindicated. A range of artificial corneas has been developed. Here we review several newer designs, especially those inspired by naturally occurring biomaterials found within the human body and elsewhere. Recent findings: Recent trends in the development of artificial corneas indicate a move towards the use of materials derived from native sources, including decellularized corneal tissue and tissue substitutes synthesized by corneal cells in vitro when grown either on their own or in conjunction with novel protein-based scaffolds. Biologically inspired materials are also being considered for implantation on their own, with a view to promoting regeneration of endogenous corneal tissue. Summary: More recent attempts at making artificial corneas have taken a more nature-based or nature-inspired approach. Several are likely to become available clinically in the near future.
Abstract:
This study extends the ‘zero-scan’ method for CT imaging of polymer gel dosimeters to include multi-slice acquisitions. Multi-slice CT images consisting of 24 slices of 1.2 mm thickness were acquired of an irradiated polymer gel dosimeter and processed with the zero-scan technique. The results demonstrate that zero-scan based gel readout can be successfully applied to generate a three-dimensional image of the irradiated gel field. Compared to the raw CT images, the processed images and cross-gel profiles demonstrated reduced noise and clear visibility of the penumbral region. Moreover, these improved results further highlight the suitability of this method for volumetric reconstruction with reduced CT data acquisition per slice. This work shows that 3D volumes of irradiated polymer gel dosimeters can be acquired and processed with X-ray CT.
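In the zero-scan approach, as generally described in the literature, each pixel's value over a series of repeated CT acquisitions of the same slice is fitted with a straight line and extrapolated back to a hypothetical zeroth scan, which suppresses stochastic noise. The following is a minimal per-slice sketch of that fit; the array layout and names are assumptions, not the authors' implementation.

```python
import numpy as np

def zero_scan_image(scans):
    """scans: array of shape (n_scans, rows, cols) holding repeated CT
    acquisitions of one slice (scan 1, scan 2, ...). Each pixel's value is
    fitted linearly against scan number and extrapolated to scan 0,
    giving a lower-noise estimate than any single acquisition."""
    n, rows, cols = scans.shape
    scan_number = np.arange(1, n + 1, dtype=float)
    pixels = scans.reshape(n, -1)          # one column per pixel
    # np.polyfit fits every pixel column at once; the second row of the
    # result holds the intercepts, i.e. the extrapolated values at scan 0
    _, intercept = np.polyfit(scan_number, pixels, deg=1)
    return intercept.reshape(rows, cols)
```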
Abstract:
This work considers the problem of building high-fidelity 3D representations of the environment from sensor data acquired by mobile robots. Multi-sensor data fusion allows for more complete and accurate representations, and for more reliable perception, especially when different sensing modalities are used. In this paper, we propose a thorough experimental analysis of the performance of 3D surface reconstruction from laser and mm-wave radar data using Gaussian Process Implicit Surfaces (GPIS), in a realistic field robotics scenario. We first analyse the performance of GPIS using raw laser data alone and raw radar data alone, respectively, with different choices of covariance matrices and different resolutions of the input data. We then evaluate and compare the performance of two different GPIS fusion approaches. The first, state-of-the-art approach directly fuses raw data from laser and radar. The alternative approach proposed in this paper first computes an initial estimate of the surface from each single source of data, and then fuses these two estimates. We show that this method outperforms the state of the art, especially in situations where the sensors react differently to the targets they perceive.
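The second approach described (fusing per-source surface estimates rather than raw data) can be illustrated with a simple per-point inverse-variance, product-of-Gaussians combination of the two GP outputs, as sketched below. This combination rule is a generic stand-in and not necessarily the fusion used in the paper; the names are hypothetical.

```python
import numpy as np

def fuse_surface_estimates(mean_laser, var_laser, mean_radar, var_radar):
    """Fuse two independent per-point surface estimates (one built from
    laser, one from radar), each given as a mean and a variance over the
    same query grid, using inverse-variance weighting."""
    w_l = 1.0 / np.asarray(var_laser)
    w_r = 1.0 / np.asarray(var_radar)
    fused_mean = (w_l * mean_laser + w_r * mean_radar) / (w_l + w_r)
    fused_var = 1.0 / (w_l + w_r)          # more confident than either input
    return fused_mean, fused_var
```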
Abstract:
Field robots often rely on laser range finders (LRFs) to detect obstacles and navigate autonomously. Despite recent progress in sensing technology and perception algorithms, adverse environmental conditions, such as the presence of smoke, remain a challenging issue for these robots. In this paper, we investigate the possibility of improving laser-based perception applications by anticipating situations when laser data are affected by smoke, using supervised learning and state-of-the-art visual image quality analysis. We propose to train a k-nearest-neighbour (kNN) classifier to recognise situations where a laser scan is likely to be affected by smoke, based on visual data quality features. This method is evaluated experimentally using a mobile robot equipped with LRFs and a visual camera. The strengths and limitations of the technique are identified and discussed, and we show that the method is beneficial if conservative decisions are the most appropriate.
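A minimal sketch of this kind of classifier is shown below, using scikit-learn's KNeighborsClassifier on per-frame image-quality feature vectors. The feature values, labels and thresholds are made-up placeholders, not the paper's features or data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: one row of visual image-quality features per camera frame (values are
# placeholders); y: 1 if the matching laser scan was affected by smoke.
X_train = np.array([[0.82, 0.10, 35.1],
                    [0.31, 0.55, 12.4],
                    [0.78, 0.12, 33.0],
                    [0.25, 0.60, 10.9]])
y_train = np.array([0, 1, 0, 1])

clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
clf.fit(X_train, y_train)

# At run time, a conservative policy would distrust the laser scan whenever
# the classifier predicts "smoke" for the accompanying image.
x_new = np.array([[0.29, 0.58, 11.7]])
laser_scan_suspect = bool(clf.predict(x_new)[0])
```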
Abstract:
Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
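The selective combination step can be reduced to a small gating function: each modality's frame is scored by a quality measure, and only frames that pass are handed to the Visual-SLAM front end. The sketch below shows that gate in its simplest form; `quality_fn` and the threshold are placeholders for whatever per-frame evaluation the paper uses.

```python
def select_frames(visual_frame, infrared_frame, quality_fn, threshold=0.5):
    """Keep only the modalities whose data quality is judged acceptable
    before handing frames to the localization pipeline. `quality_fn` is a
    stand-in for the per-frame quality evaluation (an assumption here)."""
    selected = []
    for name, frame in (("visual", visual_frame), ("infrared", infrared_frame)):
        if quality_fn(frame) >= threshold:
            selected.append((name, frame))
    return selected   # may contain both modalities, one, or neither
```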
Abstract:
This paper proposes an approach to achieve resilient navigation for indoor mobile robots. Resilient navigation seeks to mitigate the impact of control, localisation, or map errors on the safety of the platform while preserving the robot’s ability to achieve its goal. We show that resilience to unpredictable errors can be achieved by combining the benefits of independent and complementary algorithmic approaches to navigation, or modalities, each tuned to a particular type of environment or situation. In this paper, the modalities comprise a path planning method and a reactive motion strategy. While the robot navigates, a Hidden Markov Model continually estimates the most appropriate modality based on two types of information: context (information known a priori) and monitoring (evaluating unpredictable aspects of the current situation). The robot then uses the recommended modality, switching from one to another dynamically. Experimental validation with a SegwayRMP-based platform in an office environment shows that our approach enables failure mitigation while maintaining the safety of the platform. The robot is shown to reach its goal in the presence of: 1) unpredicted control errors, 2) unexpected map errors and 3) a large injected localisation fault.
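The modality estimate can be pictured as a standard HMM filtering recursion over the set of modalities, updated each cycle with a likelihood derived from the context and monitoring information. The sketch below is a generic forward update under that reading; the transition matrix and observation likelihoods are illustrative assumptions, not values from the paper.

```python
import numpy as np

# States: candidate navigation modalities (the two used in the paper).
MODES = ["path_planning", "reactive"]

# Transition model: modalities tend to persist between cycles (assumed values).
TRANSITION = np.array([[0.9, 0.1],
                       [0.1, 0.9]])

def forward_update(belief, obs_likelihood):
    """One step of the HMM forward (filtering) recursion: predict with the
    transition model, weight by how well each modality explains the current
    context/monitoring observation, then normalise."""
    predicted = TRANSITION.T @ belief
    posterior = predicted * obs_likelihood
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])
# Example observation: monitors report the reactive strategy is struggling,
# so the planning modality explains the situation better.
belief = forward_update(belief, obs_likelihood=np.array([0.8, 0.2]))
recommended_mode = MODES[int(np.argmax(belief))]
```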
Abstract:
This work aims to contribute to the reliability and integrity of perceptual systems of unmanned ground vehicles (UGV). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system, by applying a quality metric to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated specifically with sensor data that is evaluated prior to the use of the data in a standard SIFT feature extraction and matching technique. The method is then evaluated using various experimental data sets that were collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments, a motionless vehicle observes a ‘reference’ scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
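The overall flow (score the image first, then run feature extraction and matching only when the score clears a threshold) can be sketched as below with OpenCV's SIFT. The variance-of-Laplacian score and the threshold are placeholder choices; the paper's actual quality metric is different.

```python
import cv2

def image_quality(gray):
    """A simple sharpness/contrast proxy (variance of the Laplacian); the
    paper's quality metric differs, this is only a placeholder."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def guarded_sift_features(gray, min_quality=50.0):
    """Evaluate image quality first and skip SIFT extraction when the data
    are judged too degraded (e.g. heavy dust or smoke) to be trusted."""
    if image_quality(gray) < min_quality:
        return None                      # signal the caller to distrust this frame
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    return keypoints, descriptors
```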
Abstract:
Autonomous navigation and locomotion of a mobile robot in natural environments remain a rather open issue. Several functionalities are required to complete the usual perception/decision/action cycle. They can be divided into two main categories: navigation (perception and decision about the movement) and locomotion (movement execution). In order to face the large range of possible situations in natural environments, it is essential to make use of various kinds of complementary functionalities, defining various navigation and locomotion modes. Indeed, a number of navigation and locomotion approaches have been proposed in the literature in recent years, but none can claim to achieve autonomous navigation and locomotion in every situation. Thus, it seems relevant to endow an outdoor mobile robot with several complementary navigation and locomotion modes. Accordingly, the robot must also have the means to select the most appropriate mode to apply. This thesis proposes the development of such a navigation/locomotion mode selection system, based on two types of data: an observation of the context, to determine in what kind of situation the robot has to achieve its movement, and an evaluation of the behaviour of the current mode, made by monitors which trigger transitions towards other modes when the behaviour of the current mode is considered unsatisfactory. Hence, this document introduces a probabilistic framework for estimating the mode to apply, the navigation and locomotion modes used, a qualitative terrain representation method (based on a difficulty evaluated from the placement of the robot's structure on a digital elevation map), and monitors that check the behaviour of the modes in use (evaluation of rolling locomotion efficiency, monitoring of the robot's attitude and configuration, etc.). Some experimental results obtained with these elements integrated on board two different outdoor robots are presented and discussed.
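One of the building blocks mentioned above, a qualitative difficulty map derived from a digital elevation map, can be approximated very simply from local slope and roughness, as in the sketch below. The thesis evaluates difficulty from the placement of the robot's structure on the DEM, so this grid-only version and its weights are simplified assumptions.

```python
import numpy as np

def terrain_difficulty(dem, cell_size, slope_weight=0.7, rough_weight=0.3):
    """Score each cell of a digital elevation map with a qualitative
    difficulty value in [0, 1], here from local slope and roughness only
    (a simplified stand-in for the thesis's placement-based evaluation)."""
    gy, gx = np.gradient(dem, cell_size)
    slope = np.hypot(gx, gy)                     # rise over run per cell
    # roughness: deviation of each cell from its local 3x3 mean elevation
    padded = np.pad(dem, 1, mode="edge")
    local_mean = sum(padded[i:i + dem.shape[0], j:j + dem.shape[1]]
                     for i in range(3) for j in range(3)) / 9.0
    roughness = np.abs(dem - local_mean)
    norm = lambda a: a / (a.max() + 1e-9)
    return slope_weight * norm(slope) + rough_weight * norm(roughness)
```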