966 results for large underground autonomous vehicles
Abstract:
Purpose: Data from two randomized phase III trials were analyzed to evaluate prognostic factors and treatment selection in the first-line management of advanced non-small cell lung cancer patients with performance status (PS) 2. Patients and Methods: Patients randomized to combination chemotherapy (carboplatin and paclitaxel) in one trial and single-agent therapy (gemcitabine or vinorelbine) in the second were included in these analyses. Both studies had identical eligibility criteria and were conducted simultaneously. Comparison of efficacy and safety was performed between the two cohorts. A regression analysis identified prognostic factors and subgroups of patients that may benefit from combination or single-agent therapy. Results: Two hundred one patients were treated with combination and 190 with single-agent therapy. Objective response rates were 37 and 15%, respectively. Median time to progression was 4.6 months in the combination arm and 3.5 months in the single-agent arm (p < 0.001). Median survival times were 8.0 and 6.6 months, and 1-year survival rates were 31 and 26%, respectively. Albumin <3.5 g, extrathoracic metastases, lactate dehydrogenase ≥200 IU, and 2 comorbid conditions predicted outcome. Patients with 0-2 risk factors had similar outcomes independent of treatment, whereas patients with 3-4 factors had a nonsignificant improvement in median survival with combination chemotherapy. Conclusion: Our results show that PS 2 non-small cell lung cancer patients are a heterogeneous group who have significantly different outcomes. Patients treated with first-line combination chemotherapy had a higher response rate and a longer time to progression, whereas overall survival did not appear significantly different. A prognostic model may be helpful in selecting PS 2 patients for either treatment strategy. © 2009 by the International Association for the Study of Lung Cancer.
Abstract:
In most of the advanced economies, students are losing interest in careers, especially in engineering and related industries. Hence, western economies are confronting a critical skilled labour shortage in areas of technology, science and engineering. Decisions about career pathways are made as early as the primary years of schooling, and hence cooperation between industry and schools to attract students to the professions is crucial. The aim of this paper is to document how the organisational and institutional elements of one industry-school partnership initiative, The Gateway Schools Program, contribute to productive knowledge sharing and networking. In particular, this paper focuses on an initiative of an Australian State government in response to a perceived crisis around the skills shortage in an economy transitioning from a localised to a global knowledge production economy. The Gateway Schools initiative signals the first sustained attempt in Australia to incorporate schools into production networks through strategic partnerships linking them to partner organisations at the industry level. We provide case examples of how four schools operationalise the partnerships with the minerals and energy industries and how these partnerships, as knowledge assets, impact the delivery of curriculum and capacity building among teachers. Our ultimate goal is to define those characteristics of successful partnerships that contribute to enhanced interest and engagement by students in those careers that are currently experiencing critical shortages.
Abstract:
Cooperation and caring are best taught within a group, as group settings promote connectedness, collaborative effort, and relationship building.
Abstract:
Several I- and A-type granite and syenite plutons, and spatially associated, giant Fe–Ti–V deposit-bearing mafic–ultramafic layered intrusions, occur in the Pan–Xi (Panzhihua–Xichang) area within the inner zone of the Emeishan large igneous province (ELIP). These complexes are interpreted to be related to the Emeishan mantle plume. We present LA-ICP-MS and SIMS zircon U–Pb ages and Hf–Nd isotopic compositions for the gabbros, syenites and granites from these complexes. The dating shows that the age of the felsic intrusive magmatism (256.2 ± 3.0 to 259.8 ± 1.6 Ma) is indistinguishable from that of the mafic intrusive magmatism (255.4 ± 3.1 to 259.5 ± 2.7 Ma) and represents the final phase of a continuous magmatic episode that lasted no more than 10 Myr. The upper gabbros in the mafic–ultramafic intrusions are generally more isotopically enriched (lower εNd and εHf) than the middle and lower gabbros, suggesting that the upper gabbros have experienced a higher level of crustal contamination than the lower gabbros. The significantly positive εHf(t) values of the A-type granites and syenites (+4.9 to +10.8) are higher than those of the upper gabbros of the associated mafic intrusion, which shows that they cannot be derived by fractional crystallization of these bodies. They are however identical to those of the mafic enclaves (+7.0 to +11.4) and the middle and lower gabbros, implying that they are cogenetic. We suggest that they were generated by fractionation of large-volume, plume-related basaltic magmas that ponded deep in the crust. The deep-seated magma chamber erupted in two stages: the first near a density minimum in the basaltic fractionation trend and the second during the final stage of fractionation, when the magma was a low-density, Fe-poor, Si-rich felsic magma. The basaltic magmas emplaced in the shallow-level magma chambers differentiated to form mafic–ultramafic layered intrusions, accompanied by a small amount of crustal assimilation through roof melting.
Evolved A-type granites (syenites and syenodiorites) were produced dominantly by crystallization in the deep crustal magma chamber. In contrast, the I-type granites have negative εNd(t) (−6.3 to −7.5) and εHf(t) (−1.3 to −6.7) values, with two-stage Nd model ages (TDM2(Nd)) of 1.63–1.67 Ga and Hf model ages (TDM2(Hf)) of 1.56–1.58 Ga, suggesting that they were mainly derived from partial melting of Mesoproterozoic crust. In combination with previous studies, this study also shows that plume activity not only gave rise to reworking of ancient crust, but also to significant growth of juvenile crust in the center of the ELIP.
Abstract:
Statistical methodology was applied to a survey of time-course incidence of four viruses (alfalfa mosaic virus, clover yellow vein virus, subterranean clover mottle virus and subterranean clover red leaf virus) in improved pastures in southern regions of Australia.
Abstract:
In this paper we present a method for autonomously tuning the threshold between learning and recognizing a place in the world, based on both how the rodent brain is thought to process and calibrate multisensory data and the pivoting movement behaviour that rodents perform in doing so. The approach makes no assumptions about the number and type of sensors, the robot platform, or the environment, relying only on the ability of a robot to perform two revolutions on the spot. In addition, it self-assesses the quality of the tuning process in order to identify situations in which tuning may have failed. We demonstrate the autonomous movement-driven threshold tuning on a Pioneer 3DX robot in eight locations spread over an office environment and a building car park, and then evaluate the mapping capability of the system on journeys through these environments. The system is able to pick a place recognition threshold that enables successful environment mapping in six of the eight locations while also autonomously flagging the tuning failure in the remaining two locations. We discuss how the method, in combination with parallel work on autonomous weighting of individual sensors, moves the parameter-dependent RatSLAM system significantly closer to sensor, platform and environment agnostic operation.
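The tuning idea above can be sketched as follows. This is a hypothetical illustration, not the paper's RatSLAM implementation: we assume the robot learns view templates on its first on-the-spot revolution, records match scores against them on the second, and places the threshold just below the weakest genuine re-match, with a self-assessment flag when the scores suggest tuning failed.

```python
# Hypothetical sketch of movement-driven place-recognition threshold tuning.
# Assumptions (not from the paper): match scores lie in [0, 1], high = good
# re-match, and a fixed margin below the weakest genuine re-match is used.

def tune_threshold(second_rev_scores, margin=0.1):
    """Pick a recognition threshold from second-revolution match scores.

    second_rev_scores: scores of current views against templates learned
    on the first revolution. Returns (threshold, ok); ok=False flags a
    tuning failure, mirroring the paper's self-assessment step.
    """
    weakest = min(second_rev_scores)
    threshold = weakest - margin
    # Self-assessment: if even the best re-match is weak, or the threshold
    # collapses to zero, the tuning likely failed in this location.
    ok = max(second_rev_scores) > 0.5 and threshold > 0.0
    return threshold, ok
```

For example, second-revolution scores of [0.82, 0.75, 0.9] would yield a threshold of 0.65 with the tuning accepted.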
Abstract:
This paper presents a system which enhances the capabilities of a light general aviation aircraft to land autonomously in case of an unscheduled event such as engine failure. The proposed system will not only increase the level of autonomy in the general aviation aircraft industry but also increase the level of dependability. Safe autonomous landing in case of an engine failure with a certain level of reliability is the primary focus of our work, as both safety and reliability are attributes of dependability. The system is designed for a light general aviation aircraft but can be extended to dependable unmanned aircraft systems. The underlying system components are computationally efficient and provide continuous situation assessment in case of an emergency landing. The proposed system is undergoing an evaluation phase using an experimental platform (Cessna 172R) in real-world scenarios.
Abstract:
Due to the demand for better and deeper analysis in sports, organizations (both professional teams and broadcasters) are looking to use spatiotemporal data in the form of player tracking information to obtain an advantage over their competitors. However, due to the large volume of data, its unstructured nature, and the lack of associated team activity labels (e.g. strategic/tactical), effective and efficient strategies to deal with such data have yet to be deployed. A bottleneck restricting such solutions is the lack of a suitable representation (i.e. ordering of players) which is immune to the extremely large number of possible permutations of player orderings, in addition to the high dimensionality of the temporal signal (e.g. a game of soccer lasts for 90 minutes). We leverage a recent method which utilizes a "role representation", together with a feature reduction strategy that uses a spatiotemporal bilinear basis model, to form a compact spatiotemporal representation. Using this representation, we find the most likely formation patterns of a team associated with match events across nearly 14 hours of continuous player and ball tracking data in soccer. Additionally, we show that we can accurately segment a match into distinct game phases and detect highlights (i.e. shots, corners, free kicks) completely automatically using a decision-tree formulation.
Abstract:
Plug-in electric vehicles will soon be connected to residential distribution networks in high quantities and will add to already overburdened residential feeders. However, as battery technology improves, plug-in electric vehicles will also be able to support networks as small distributed generation units by transferring the energy stored in their battery into the grid. Even though the increase in the plug-in electric vehicle connection is gradual, their connection points and charging/discharging levels are random. Therefore, such single-phase bidirectional power flows can have an adverse effect on the voltage unbalance of a three-phase distribution network. In this article, a voltage unbalance sensitivity analysis based on charging/discharging levels and the connection point of plug-in electric vehicles in a residential low-voltage distribution network is presented. Due to the many uncertainties in plug-in electric vehicle ratings and connection points and the network load, a Monte Carlo-based stochastic analysis is developed to predict voltage unbalance in the network in the presence of plug-in electric vehicles. A failure index is introduced to demonstrate the probability of non-standard voltage unbalance in the network due to plug-in electric vehicles.
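The Monte Carlo idea described above can be sketched in a few lines. Everything below is a stand-in for illustration: the network model is a toy function rather than the paper's three-phase power flow, and the load values, EV count, and 2% voltage-unbalance limit are assumed, not taken from the paper.

```python
import random

VUF_LIMIT = 2.0  # percent; an assumed standard voltage-unbalance limit

def vuf_placeholder(phase_loads_kw):
    """Stand-in for a three-phase power-flow computation of the voltage
    unbalance factor (%). Here it simply grows with the spread between
    the heaviest- and lightest-loaded phase; a real study would solve
    the network power flow instead."""
    return 0.5 * (max(phase_loads_kw) - min(phase_loads_kw))

def failure_index(n_trials=10000, base_load=5.0, ev_load=3.3, n_evs=10, seed=1):
    """Fraction of random PEV placements whose VUF exceeds the limit."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_trials):
        phases = [base_load] * 3
        for _ in range(n_evs):
            # Each PEV connects to a random phase at its charging level.
            phases[rng.randrange(3)] += ev_load
        if vuf_placeholder(phases) > VUF_LIMIT:
            failures += 1
    return failures / n_trials
```

The returned fraction plays the role of the paper's failure index: the probability that random PEV connection points and charging levels push the network into non-standard unbalance.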
Abstract:
Background In China, as in many developing countries, rapid increases in car ownership and new drivers have been coupled with a large trauma burden. The World Health Organization has identified key risk factors including speeding, drink-driving, helmet and restraint non-use, overloaded vehicles, and fatigued driving in many rapidly motorising countries, including China. Levels of awareness of these risk factors among road users are not well understood. Although research identifies speeding as the major factor contributing to road crashes in China, there appears to be widespread acceptance of it among the broader community. Purpose To assess self-reported speeding and awareness of crash risk factors among Chinese drivers in Beijing. Methods Car drivers (n=299) were recruited from car washing locations and car parks to complete an anonymous questionnaire. Perceptions of the relative risk of drink-driving, fatigued driving and speeding, and attitudes towards speeding and self-reported driving speeds were assessed. Results Overall, driving speeds of >10 km/h above posted limits on two road types (60 and 80 km/h zones) were reported by more than one third of drivers. High-range speeding (i.e., >30 km/h in a 60 km/h zone and >40 km/h in an 80 km/h zone) was reported by approximately 5% of the sample. Attitudinal measures indicated that approximately three quarters of drivers reported attitudes that were not supportive of speeding. Drink-driving was identified as the most risky behaviour; 18% reported the perception that drink-driving had the same level of danger as speeding and 82% reported it as more dangerous. For fatigued driving, 1% reported the perception that it was not as dangerous as speeding; 27.4% reported it as the same level and 71.6% perceived it as more dangerous. Conclusion Driving speeds well above posted speed limits were commonly reported by drivers. Speeding was rated as the least dangerous on-road behaviour, compared to drink-driving and fatigued driving. One third of drivers reported regularly engaging in speeds at least 10 km/h above posted limits, despite speeding being the major reported contributor to crashes. Greater awareness of the risks associated with speeding is needed to help reduce the road trauma burden in China and promote greater speed limit compliance.
Abstract:
As the all-atom molecular dynamics method is limited by its enormous computational cost, various coarse-grained strategies have been developed to extend the accessible length scales of soft matter in the modeling of mechanical behaviors. However, the classical thermostat algorithm in highly coarse-grained molecular dynamics would underestimate the thermodynamic behaviors of soft matter (e.g. microfilaments in cells), which can weaken the ability of the material to overcome local energy traps in granular modeling. Based on all-atom molecular dynamics modeling of microfilament fragments (G-actin clusters), a new stochastic thermostat algorithm is developed to retain the representation of the thermodynamic properties of microfilaments at an extra coarse-grained level. The accuracy of this stochastic thermostat algorithm is validated against all-atom MD simulation. The new algorithm provides an efficient way to investigate the thermomechanical properties of large-scale soft matter.
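For context, a minimal Langevin-thermostat velocity update is sketched below, only to illustrate the general family of stochastic thermostats that the paper's algorithm belongs to; the paper's actual method differs in how it restores thermodynamic fluctuations at the extra coarse-grained level, and all parameter values here are arbitrary.

```python
import math
import random

def langevin_step(v, mass, force, dt, gamma, kT, rng):
    """One 1D per-particle velocity update with friction and random kicks.

    Uses the exact Ornstein-Uhlenbeck solution for constant force:
    friction decays the velocity, a drift term pushes it toward the
    steady state F/(gamma*m), and a Gaussian kick restores thermal
    fluctuations with variance consistent with temperature kT.
    """
    c1 = math.exp(-gamma * dt)                   # friction decay factor
    c2 = math.sqrt((1.0 - c1 * c1) * kT / mass)  # fluctuation amplitude
    return c1 * v + (1.0 - c1) * force / (gamma * mass) + c2 * rng.gauss(0.0, 1.0)
```

A coarse-grained simulation would apply this update to every bead each timestep; the stochastic term is what lets the model escape local energy traps that a purely deterministic integrator would get stuck in.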
Abstract:
This paper presents an accurate and robust geometric and material nonlinear formulation to predict structural behaviour of unprotected steel members at elevated temperatures. A fire analysis including large displacement effects for frame structures is presented. This finite element formulation of beam-column elements is based on the plastic hinge approach to model the elasto-plastic strain-hardening material behaviour. The Newton-Raphson method allowing for the thermal-time dependent effect was employed for the solution of the non-linear governing equations for large deflection in thermal history. A combined incremental and total formulation for determining member resistance is employed in this nonlinear solution procedure for the efficient modeling of nonlinear effects. Degradation of material strength with increasing temperature is simulated by a set of temperature-stress-strain curves according to both ECCS and BS5950 Part 8, which implicitly allows for creep deformation. The effects of uniform or non-uniform temperature distribution over the section of the structural steel member are also considered. Several numerical and experimental verifications are presented.
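The Newton-Raphson iteration at the heart of such nonlinear solution procedures can be sketched for a single degree of freedom. This is a generic illustration, not the paper's beam-column formulation: the cubic "spring" law stands in for the member resistance, which in the paper would also degrade with temperature over the thermal history.

```python
# Minimal Newton-Raphson sketch for a one-DOF nonlinear equilibrium
# R(u) = P. The resistance and tangent functions below are illustrative;
# in a fire analysis both would depend on the current temperature field.

def newton_raphson(resistance, tangent, load, u0=0.0, tol=1e-10, max_iter=50):
    """Solve resistance(u) = load for the displacement u."""
    u = u0
    for _ in range(max_iter):
        residual = load - resistance(u)        # out-of-balance force
        if abs(residual) < tol:
            return u
        u += residual / tangent(u)             # update with tangent stiffness
    raise RuntimeError("Newton-Raphson failed to converge")
```

For a hardening spring R(u) = 2u + u^3 under a load P = 3, the iteration converges to u = 1, since R(1) = 3. In the incremental-iterative scheme the paper describes, this loop would be repeated for each load or temperature increment.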
Abstract:
Background Nontuberculous mycobacteria (NTM) are normal inhabitants of a variety of environmental reservoirs, including natural and municipal water. The aim of this study was to document the variety of species of NTM in potable water in Brisbane, QLD, with a specific interest in the main pathogens responsible for disease in this region, and to explore factors associated with the isolation of NTM. One-litre water samples were collected from 189 routine collection sites in summer and 195 sites in winter. Samples were split, with half decontaminated with CPC 0.005%, then concentrated by filtration and cultured on 7H11 plates and in MGIT tubes (winter only). Results Mycobacteria were grown from 40.21% of sites in summer (76/189) and 82.05% of sites in winter (160/195). The winter samples yielded the greatest number and variety of mycobacteria, as there was a high degree of subculture overgrowth and contamination in summer. Of those samples that did yield mycobacteria in summer, the variety of species differed from those isolated in winter. The inclusion of liquid media increased the yield for some species of NTM. Species that have been documented to cause disease in humans residing in Brisbane and that were also found in water include M. gordonae, M. kansasii, M. abscessus, M. chelonae, M. fortuitum complex, M. intracellulare, M. avium complex, M. flavescens, M. interjectum, M. lentiflavum, M. mucogenicum, M. simiae, M. szulgai and M. terrae. M. kansasii was frequently isolated, but M. avium and M. intracellulare (the main pathogens responsible for disease in QLD) were isolated infrequently. Distance of the sampling site from the treatment plant in summer was associated with isolation of NTM. Pathogenic NTM (defined as those known to cause disease in QLD) were more likely to be identified from sites with narrower-diameter pipes, predominantly distribution sample points, and from sites with asbestos cement or modified PVC pipes.
Conclusions NTM responsible for human disease can be found in large urban water distribution systems in Australia. Based on our findings, additional point chlorination, maintenance of more constant pressure gradients in the system, and the utilisation of particular pipe materials should be considered.
Abstract:
Agent-based modelling (ABM), like other modelling techniques, is used to answer specific questions about real-world systems that could otherwise be expensive or impractical to address. Its recent gain in popularity can be attributed to some degree to its capacity to use information at a fine level of detail of the system, both geographically and temporally, and to generate information at a higher level, where emerging patterns can be observed. The technique is data-intensive, as explicit data at a fine level of detail is used, and computer-intensive, as many interactions between agents, which can learn and have a goal, are required. With the growing availability of data and the increase in computer power, these concerns are however fading. Nonetheless, being able to update or extend the model as more information becomes available can become problematic, because of the tight coupling of the agents and their dependence on the data, especially when modelling very large systems. One large system to which ABM is currently applied is electricity distribution, where thousands of agents representing the network and the consumers' behaviours interact with one another. A framework that aims at answering a range of questions regarding the potential evolution of the grid has been developed and is presented here. It uses agent-based modelling to represent the engineering infrastructure of the distribution network and has been built with flexibility and extensibility in mind. What distinguishes the method presented here from the usual ABMs is that this ABM has been developed in a compositional manner. This encompasses not only the software tool, whose core is named MODAM (MODular Agent-based Model), but also the model itself. Using such an approach enables the model to be extended as more information becomes available, or modified as the electricity system evolves, leading to an adaptable model.
Two well-known modularity principles in the software engineering domain are information hiding and separation of concerns. These principles were used to develop the agent-based model on top of OSGi and Eclipse plugins, which have good support for modularity. Information regarding the model entities was separated into a) assets, which describe the entities' physical characteristics, and b) agents, which describe their behaviour according to their goal and previous learning experiences. This approach diverges from the traditional approach, where both aspects are often conflated. It has many advantages in terms of reusability of one or the other aspect for different purposes, as well as composability when building simulations. For example, the way an asset is used on a network can vary greatly while its physical characteristics stay the same – this is the case for two identical battery systems whose usage will vary depending on the purpose of their installation. While any battery can be described by its physical properties (e.g. capacity, lifetime, and depth of discharge), its behaviour will vary depending on who is using it and what their aim is. The model is populated using data describing both aspects (physical characteristics and behaviour) and can be updated as required, depending on what simulation is to be run. For example, data can be used to describe the environment to which the agents respond – e.g. weather for solar panels – or to describe the assets and their relation to one another – e.g. the network assets. Finally, when running a simulation, MODAM calls on its module manager, which coordinates the different plugins, automates the creation of the assets and agents using factories, and schedules their execution, which can be done sequentially or in parallel for faster execution. Building agent-based models in this way has proven fast when adding new complex behaviours, as well as new types of assets.
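The battery example above can be made concrete with a short sketch of the asset/agent separation. All class names, methods, and the dispatch rules are hypothetical illustrations of the design principle, not MODAM's actual API: one physical asset description is reused by two different behaviours.

```python
# Hypothetical sketch of the asset/agent split: the asset holds only
# physical characteristics; agents attach behaviour to the same asset.

class BatteryAsset:
    """Physical description only: capacity and depth-of-discharge limit."""
    def __init__(self, capacity_kwh, max_depth_of_discharge=0.8):
        self.capacity_kwh = capacity_kwh
        self.max_depth_of_discharge = max_depth_of_discharge

    def usable_energy(self):
        return self.capacity_kwh * self.max_depth_of_discharge

class PeakShavingAgent:
    """Behaviour: discharge into the grid during the evening peak."""
    def __init__(self, asset):
        self.asset = asset

    def dispatch(self, hour):
        return -self.asset.usable_energy() if 17 <= hour <= 20 else 0.0

class SolarShiftAgent:
    """Different behaviour for an identical asset: absorb midday solar."""
    def __init__(self, asset):
        self.asset = asset

    def dispatch(self, hour):
        return self.asset.usable_energy() if 11 <= hour <= 14 else 0.0
```

Because the physical description and the behaviour are separate objects, swapping an agent (or updating asset data) does not require touching the other half, which is the reusability and composability benefit described above.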
Simulations have been run to understand the potential impact of changes on the network in terms of assets (e.g. installation of decentralised generators) or behaviours (e.g. response to different management aims). While this platform has been developed within the context of a project focussing on the electricity domain, the core of the software, MODAM, can be extended to other domains such as transport which is part of future work with the addition of electric vehicles.
Abstract:
Long-term autonomy in robotics requires perception systems that are resilient to unusual but realistic conditions that will eventually occur during extended missions. For example, unmanned ground vehicles (UGVs) need to be capable of operating safely in adverse and low-visibility conditions, such as at night or in the presence of smoke. The key to a resilient UGV perception system lies in the use of multiple sensor modalities, e.g., operating at different frequencies of the electromagnetic spectrum, to compensate for the limitations of a single sensor type. In this paper, visual and infrared imaging are combined in a Visual-SLAM algorithm to achieve localization. We propose to evaluate the quality of data provided by each sensor modality prior to data combination. This evaluation is used to discard low-quality data, i.e., data most likely to induce large localization errors. In this way, perceptual failures are anticipated and mitigated. An extensive experimental evaluation is conducted on data sets collected with a UGV in a range of environments and adverse conditions, including the presence of smoke (obstructing the visual camera), fire, extreme heat (saturating the infrared camera), low-light conditions (dusk), and at night with sudden variations of artificial light. A total of 240 trajectory estimates are obtained using five different variations of data sources and data combination strategies in the localization method. In particular, the proposed approach for selective data combination is compared to methods using a single sensor type or combining both modalities without preselection. We show that the proposed framework allows for camera-based localization resilient to a large range of low-visibility conditions.
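The selective data combination idea can be sketched as a simple per-frame gate. The quality metrics and the threshold below are placeholders, not the paper's actual criteria: each modality's frame is scored before fusion, low-quality frames are discarded, and frames where both modalities fail are flagged rather than silently fused.

```python
# Illustrative sketch of pre-fusion data selection for two modalities.
# Assumption (not from the paper): quality scores are normalized to [0, 1]
# and a single fixed threshold gates both modalities.

def select_frames(visual_quality, infrared_quality, threshold=0.4):
    """Decide, per frame, which modalities feed the localization front end.

    Returns (use_visual, use_infrared, dropped): boolean lists where
    `dropped` marks frames with no usable modality (e.g. smoke obstructing
    the visual camera while heat saturates the infrared one).
    """
    use_visual = [q >= threshold for q in visual_quality]
    use_infrared = [q >= threshold for q in infrared_quality]
    dropped = [not (v or i) for v, i in zip(use_visual, use_infrared)]
    return use_visual, use_infrared, dropped
```

Gating data before fusion is what lets the system anticipate perceptual failures: a low-quality frame is excluded before it can induce a large localization error, instead of being detected after the estimate has already drifted.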