943 results for set-point
Abstract:
Purpose The use of intravascular devices is associated with a number of potential complications. Despite a number of evidence-based clinical guidelines in this area, there continue to be discrepancies in nursing practice. This study aims to examine nursing practice in a cancer care setting to identify current practice and areas for improvement with respect to the best available evidence. Methods A point prevalence survey was undertaken in a tertiary cancer care centre in Queensland, Australia. On a randomly selected day, four nurses assessed intravascular device related nursing practices and collected data using a standardized survey tool. Results All 58 inpatients (100%) were assessed. Forty-eight (83%) had a device in situ, comprising 14 Peripheral Intravenous Catheters (29.2%), 14 Peripherally Inserted Central Catheters (29.2%), 14 Hickman catheters (29.2%) and six Port-a-Caths (12.4%). Suboptimal outcomes such as incidences of local site complications, incorrect/inadequate documentation, lack of flushing orders, and unclean/non-intact dressings were observed. Conclusions This study has highlighted a number of intravascular device related nursing practice discrepancies compared with current hospital policy. Education and other implementation strategies can be applied to improve nursing practice. Following these education strategies, it will be valuable to repeat the survey on a regular basis to provide feedback to nursing staff and to refine strategies for improving practice. More research is required to provide evidence for clinical practice with regard to intravascular device related consumables, flushing technique and protocols.
Abstract:
Timely and comprehensive scene segmentation is often a critical step for many high-level mobile robotic tasks. This paper examines a projected-area-based neighbourhood lookup approach, motivated by faster unsupervised segmentation of dense 3D point clouds. The proposed algorithm exploits the projection geometry of a depth camera to find nearest neighbours in a time that is independent of the input data size. Points near depth discontinuities are also detected to reinforce object boundaries in the clustering process. The search method presented is evaluated using both indoor and outdoor dense depth images and demonstrates significant improvements in speed and precision compared to the commonly used Fast Library for Approximate Nearest Neighbors (FLANN) [Muja and Lowe, 2009].
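A minimal sketch of the projection-geometry idea, not the authors' implementation: a query point is projected back onto the depth image with assumed pinhole intrinsics (fx, fy, cx, cy), and neighbours are read from the surrounding pixel window, so lookup cost does not grow with the size of the cloud. Points across a depth gap are skipped, echoing the discontinuity handling described above.

```python
import numpy as np

def pixel_of(point, fx, fy, cx, cy):
    """Project a 3D point (camera frame) to pixel coordinates with a pinhole model."""
    x, y, z = point
    return int(round(fx * x / z + cx)), int(round(fy * y / z + cy))

def neighbours(point, depth, fx, fy, cx, cy, win=1, max_depth_gap=0.05):
    """Return 3D neighbours of `point` by scanning a (2*win+1)^2 pixel window.

    Pixels whose depth differs from the query by more than `max_depth_gap`
    are treated as lying across a depth discontinuity and are skipped.
    """
    h, w = depth.shape
    u, v = pixel_of(point, fx, fy, cx, cy)
    out = []
    for dv in range(-win, win + 1):
        for du in range(-win, win + 1):
            uu, vv = u + du, v + dv
            if 0 <= uu < w and 0 <= vv < h:
                z = depth[vv, uu]
                if z > 0 and abs(z - point[2]) <= max_depth_gap:
                    out.append(((uu - cx) * z / fx, (vv - cy) * z / fy, z))
    return out

# Example on a synthetic 4x4 depth image (hypothetical intrinsics).
depth_img = np.full((4, 4), 1.0)
print(neighbours((0.0, 0.0, 1.0), depth_img, fx=500, fy=500, cx=2, cy=2))
```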
Abstract:
The railway crew scheduling problem is the process of allocating train services to crew duties based on the published train timetable while satisfying operational and contractual requirements. The problem is restricted by many constraints and belongs to the class of NP-hard problems. In this paper, we develop a mathematical model for railway crew scheduling with the aim of minimising the number of crew duties by reducing idle transition times. Duties are generated by arranging scheduled trips over a set of duties and sequentially ordering the trips within each duty. The optimisation model includes the time period of relief opportunities within which a train crew can be relieved at any relief point. Existing models and algorithms usually only consider relieving a crew at the beginning of the interval of relief opportunities, which may be impractical. This model involves a large number of decision variables and constraints, and therefore a hybrid constructive heuristic with a simulated annealing search algorithm is applied to yield an optimal or near-optimal schedule. The performance of the proposed algorithms is evaluated through computational experiments on randomly generated test instances. The results show that the proposed approaches obtain near-optimal solutions in a reasonable computational time for large-sized problems.
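For readers unfamiliar with the search component, the generic simulated annealing loop below illustrates the kind of improvement step involved; the schedule representation, neighbourhood move and cost function are hypothetical placeholders, not the paper's model.

```python
import math
import random

def simulated_annealing(initial_schedule, cost, neighbour,
                        t0=1000.0, cooling=0.995, t_min=1e-3):
    """Generic simulated annealing loop for a crew schedule.

    `cost` scores a schedule (e.g. number of duties plus total idle transition
    time); `neighbour` returns a modified copy (e.g. moving one trip to
    another duty).  Worse moves are accepted with a temperature-dependent
    probability so the search can escape local optima.
    """
    current = best = initial_schedule
    t = t0
    while t > t_min:
        candidate = neighbour(current)
        delta = cost(candidate) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling
    return best
```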
Abstract:
Point-to-point speed cameras are a relatively new and innovative technological approach to speed enforcement that is increasingly being used in a number of highly motorised countries. Previous research has provided evidence of the positive impact of this approach on vehicle speeds and crash rates, as well as on additional traffic-related outcomes such as vehicle emissions and traffic flow. This paper reports on the conclusions and recommendations of a large-scale project involving extensive consultation with international and domestic (Australian) stakeholders to explore the technological, operational, and legislative characteristics associated with the technology. More specifically, this paper provides a number of recommendations for better practice regarding the implementation of point-to-point speed enforcement in the Australian and New Zealand context. The broader implications of the research, as well as directions for future research, are also discussed.
Abstract:
Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management, and it has become increasingly important, attracting wide attention from researchers in different fields. This paper first introduces many feature selection methods, implementation algorithms and applications of text classification. However, the knowledge extracted by current data-mining techniques for text classification contains much noise, which introduces uncertainty into both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve the performance of text classification. Further improving knowledge extraction and the effective utilization of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed that uses Rough Set decision techniques to more precisely classify textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies, to demonstrate the Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision, to set up an innovative evaluation metric named CEI which is very effective for the performance assessment of similar research, and to propose a promising research direction for addressing the challenging problems in text classification, text mining and other related fields.
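For readers unfamiliar with the Rough Set machinery referred to above, the sketch below computes the lower and upper approximations of a decision class from indiscernibility over document feature vectors; the data model is illustrative only, and it does not implement the CEI metric or the paper's framework.

```python
from collections import defaultdict

def approximations(objects, features, target):
    """Lower/upper approximation of `target` (a set of object ids).

    `features[o]` is a hashable condition-attribute vector for object o;
    objects with identical vectors are indiscernible and form one block.
    """
    blocks = defaultdict(set)
    for o in objects:
        blocks[features[o]].add(o)

    lower, upper = set(), set()
    for block in blocks.values():
        if block <= target:          # block lies entirely inside the class
            lower |= block
        if block & target:           # block overlaps the class
            upper |= block
    return lower, upper

# Documents 1-4; docs 1 and 2 share a feature vector but disagree on the label,
# so they fall into the boundary region (upper minus lower).
feats = {1: ("sport", "short"), 2: ("sport", "short"),
         3: ("finance", "long"), 4: ("sport", "long")}
lower, upper = approximations(feats.keys(), feats, target={1, 3})
print(lower, upper)   # {3} {1, 2, 3}
```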
Abstract:
Currently, GNSS computing modes fall into two classes: network-based data processing and user receiver-based processing. A GNSS reference receiver station essentially contributes raw measurement data, either in the RINEX file format or as real-time data streams in the RTCM format. Very little computation is carried out by the reference station. The existing network-based processing modes, regardless of whether they are executed in real-time or post-processed modes, are centralised or sequential. This paper describes a distributed GNSS computing framework that incorporates three GNSS modes: reference station-based, user receiver-based and network-based data processing. Raw data streams from each GNSS reference receiver station are processed in a distributed manner, i.e., either at the station itself or at a hosting data server/processor, to generate station-based solutions, or reference receiver-specific parameters. These may include the precise receiver clock, zenith tropospheric delay, differential code biases, ambiguity parameters and ionospheric delays, as well as line-of-sight information such as azimuth and elevation angles. Covariance information for the estimated parameters may also be optionally provided. In such a mode, nearby precise point positioning (PPP) or real-time kinematic (RTK) users can directly use the corrections from all or some of the stations for real-time precise positioning via a data server. At the user receiver, PPP and RTK techniques are unified under the same observation models, and the distinction lies in how the user receiver software deals with corrections from the reference station solutions and with ambiguity estimation in the observation equations. Numerical tests demonstrate good convergence behaviour for differential code bias and ambiguity estimates derived individually with single reference stations. With station-based solutions from three reference stations within distances of 22–103 km, the user receiver positioning results, with various schemes, show an accuracy improvement of the proposed station-augmented PPP and ambiguity-fixed PPP solutions with respect to the standard float PPP solutions without station augmentation and ambiguity resolution. Overall, the proposed reference station-based GNSS computing mode can support PPP and RTK positioning services as a simpler alternative to the existing network-based RTK or regionally augmented PPP systems.
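For context, a generic textbook form of the undifferenced code and carrier-phase observation equations that both PPP and RTK processing build on is sketched below; the notation is a standard convention, not the paper's exact model.

```latex
\begin{align}
P_r^s    &= \rho_r^s + c\,(dt_r - dt^s) + T_r^s + I_r^s + b_r - b^s + \varepsilon_P ,\\
\Phi_r^s &= \rho_r^s + c\,(dt_r - dt^s) + T_r^s - I_r^s + \lambda\, N_r^s + \varepsilon_\Phi ,
\end{align}
```

where $\rho_r^s$ is the geometric range, $dt_r$ and $dt^s$ are the receiver and satellite clock errors, $T_r^s$ the tropospheric delay, $I_r^s$ the ionospheric delay, $b_r$ and $b^s$ the code biases, and $N_r^s$ the carrier-phase ambiguity. The station-based solutions described above supply estimates of several of these terms (clocks, zenith troposphere, code biases, ambiguities) as corrections to the user receiver.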
Abstract:
Facial expression recognition (FER) systems must ultimately work on real data in uncontrolled environments, although most research studies have been conducted on lab-based data with posed or evoked facial expressions obtained in pre-set laboratory environments. It is very difficult to obtain data in real-world situations because privacy laws prevent unauthorized capture and use of video from events such as funerals, birthday parties, marriages, etc. It is a challenge to acquire such data on a scale large enough for benchmarking algorithms. Although video obtained from TV or movies or postings on the World Wide Web may also contain ‘acted’ emotions and facial expressions, it may be more ‘realistic’ than the lab-based data currently used by most researchers. Or is it? One way of testing this is to compare feature distributions and FER performance. This paper describes a database that has been collected from television broadcasts and the World Wide Web, containing a range of environmental and facial variations expected in real conditions, and uses it to answer this question. A fully automatic system that uses a fusion-based approach for FER on such data is introduced for performance evaluation. Performance improvements arising from the fusion of point-based texture and geometry features, and the robustness to image scale variations, are experimentally evaluated on this image and video dataset. Differences in FER performance between lab-based and realistic data, between different feature sets, and between different train-test data splits are investigated.
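A minimal sketch of feature-level fusion of the kind evaluated here; the feature dimensions, the random placeholder data and the SVM classifier below are illustrative assumptions, not the system described in the abstract.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fuse(texture_feats, geometry_feats):
    """Feature-level fusion: concatenate per-face texture and geometry vectors."""
    return np.hstack([texture_feats, geometry_feats])

# Hypothetical training data: 200 faces, 59-d texture + 40-d geometry, 7 expressions.
rng = np.random.default_rng(0)
X = fuse(rng.random((200, 59)), rng.random((200, 40)))
y = rng.integers(0, 7, size=200)

# Scale the fused vector and train a kernel SVM on the expression labels.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.score(X, y))
```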
Abstract:
MapReduce is a computation model for processing large data sets in parallel on large clusters of machines, in a reliable, fault-tolerant manner. A MapReduce computation is broken down into a number of map tasks and reduce tasks, which are performed by so-called mappers and reducers, respectively. The placement of the mappers and reducers on the machines directly affects the performance and cost of the MapReduce computation in cloud computing. From the computational point of view, the mappers/reducers placement problem is a generalisation of the classical bin packing problem, which is NP-complete. Thus, in this paper we propose a new heuristic algorithm for the mappers/reducers placement problem in cloud computing and evaluate it by comparing it with several other heuristics on solution quality and computation time by solving a set of test problems with various characteristics. The computational results show that our heuristic algorithm is much more efficient than the other heuristics and can obtain a better solution in a reasonable time. Furthermore, we verify the effectiveness of our heuristic algorithm by comparing the mapper/reducer placement for a benchmark problem generated by our heuristic algorithm with a conventional mapper/reducer placement which puts a fixed number of mappers/reducers on each machine. The comparison results show that the computation using our mapper/reducer placement is much cheaper than the computation using the conventional placement while still satisfying the computation deadline.
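The abstract does not give the heuristic itself; as a point of reference for the bin-packing view, a first-fit-decreasing style placement (a standard baseline, not the authors' algorithm) might look like the sketch below, with task demands and machine capacity as hypothetical inputs.

```python
def first_fit_decreasing(task_demands, machine_capacity):
    """Place map/reduce tasks (by resource demand) onto machines of equal capacity.

    Returns a list of machines, each a list of (task_id, demand) assignments.
    """
    machines = []   # each entry: [remaining_capacity, [(task, demand), ...]]
    for task, demand in sorted(task_demands.items(), key=lambda kv: -kv[1]):
        for m in machines:
            if m[0] >= demand:        # first machine with enough room
                m[0] -= demand
                m[1].append((task, demand))
                break
        else:                          # no machine fits: open a new one
            machines.append([machine_capacity - demand, [(task, demand)]])
    return [assigned for _, assigned in machines]

placement = first_fit_decreasing({"map1": 4, "map2": 3, "reduce1": 5, "reduce2": 2}, 8)
print(len(placement), placement)   # 2 machines for this toy instance
```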
Abstract:
Prescribing errors remain a significant cause of patient harm. Safe prescribing is not just about writing a prescription, but involves many cognitive and decision-making steps. A set of national prescribing competencies for all prescribers (including non-medical) is needed to guide education and training curricula, assessment and credentialing of individual practitioners. We have identified 12 core competencies for safe prescribing which embody the four stages of the prescribing process – information gathering, clinical decision making, communication, and monitoring and review. These core competencies, along with their learning objectives and assessment methods, provide a useful starting point for teaching safe and effective prescribing.
Abstract:
Rigid lenses, which were originally made from glass (between 1888 and 1940) and later from polymethyl methacrylate or silicone acrylate materials, are uncomfortable to wear and are now seldom fitted to new patients. Contact lenses became a popular mode of ophthalmic refractive error correction following the discovery of the first hydrogel material – hydroxyethyl methacrylate – by Czech chemist Otto Wichterle in 1960. To satisfy the requirements for ocular biocompatibility, contact lenses must be transparent and optically stable (for clear vision), have a low elastic modulus (for good comfort), have a hydrophilic surface (for good wettability), and be permeable to certain metabolites, especially oxygen, to allow for normal corneal metabolism and respiration during lens wear. A major breakthrough in respect of the last of these requirements was the development of silicone hydrogel soft lenses in 1999 and techniques for making the surface hydrophilic. The vast majority of contact lenses distributed worldwide are mass-produced using cast molding, although spin casting is also used. These advanced mass-production techniques have facilitated the frequent disposal of contact lenses, leading to improvements in ocular health and fewer complications. More than one-third of all soft contact lenses sold today are designed to be discarded daily (i.e., ‘daily disposable’ lenses).
Abstract:
Like many cautionary tales, The Hunger Games takes as its major premise an observation about contemporary society, measuring its ballistic arc in order to present graphically its logical conclusions. The Hunger Games gazes back to the panem et circenses of Ancient Rome, staring equally cynically forward, following the trajectory of reality television to its unbearably barbaric end point – a sadistic voyeurism for an effete elite of consumers. At each end of the historical spectrum (and in the present), the prevailing social form is Arendt’s animal laborans. Consumer or consumed, Panem’s population is (with the exception of the inner circle) either deprived of the possibility of, or distracted from, political action. Within the confines of the Games themselves, Law is abandoned or de‐realised: Law – an elided Other in the pseudo‐Hobbesian nightmare that is the Arena. The Games are played out, as were gladiatorial combats and other diversions of the Roman Empire, against a background resonant of Juvenal’s concern for his contemporaries’ attachment to short-term gratification at the expense of the civic virtues of justice and caring which are (or would be) constitutive of a contemporary form of Arendt’s homo politicus. While the Games are, on their face, ‘reality’, they are (like the realities presented in contemporary reality television) a simulated reality, de‐realised in a Foucauldian set design constructed as a distraction for Capitol and, for the residents of the Districts, a constant reminder of their subservience to Capitol. Yet contemporary Western culture, for which manipulative reality TV is but a symptom of an underlying malaise, is inscribed at least as an incipient Panem: its public/political space is diminished by the effective slavery of the poor, the pre‐occupation with and distractions of materiality and modern media, and the increasing concentration of power/wealth in a smaller proportion of the population.
Abstract:
Flows of cultural heritage in textual practices are vital to sustaining Indigenous communities. Indigenous heritage, whether passed on by oral tradition or ubiquitous social media, can be seen as a “conversation between the past and the future” (Fairclough, 2012, xv). Indigenous heritage involves appropriating memories within a cultural flow to pass on a spiritual legacy. This presentation reports ethnographic research on social media practices in a small independent Aboriginal school in Southeast Queensland, Australia, that is presided over by the Yugambeh elders and an Aboriginal principal. The purpose was to rupture existing notions of white literacies in schools, and to deterritorialize the uses of digital media by dominant cultures in the public sphere. Examples of learning experiences included the following: i. Integrating Indigenous language and knowledge into media text production; ii. Using conversations with Indigenous elders and material artifacts as an entry point for storytelling; iii. Dadirri – spiritual listening in the yarning circle to develop storytelling (Ungunmerr-Baumann, 2002); and iv. Writing and publicly sharing oral histories through digital scrapbooking shared via social media. The program aligned with the Australian National Curriculum English (ACARA, 2012), which mandates the teaching of multimodal text creation. Data sources included a class set of digital scrapbooks collaboratively created in a multi-age primary classroom. The digital scrapbooks combined digitally encoded words, images of material artifacts, and digital music files. A key feature of the writing and digital design task was to retell and digitally display and archive a cultural narrative of significance to the Indigenous Australian community and its memories and material traces of the past for the future. Data analysis of the students’ digital stories involved the application of key themes of negotiated, material, and digitally mediated forms of heritage practice. It drew on Australian Indigenous research by Keddie et al. (2013) to guard against the homogenizing of culture that can arise from a focus on a static view of culture. The interpretation of findings located Indigenous appropriation of social media within broader racialized politics that enables Indigenous literacy to be understood as dynamic, negotiated, and transgenerational flows of practice. The findings demonstrate that Indigenous children’s use of media production reflects “shifting and negotiated identities” in response to changing media environments that can function to sustain Indigenous cultural heritages (Appadurai, 1996, xv). It demonstrated how the children’s experiences of culture are layered over time, as successive generations inherit, interweave, and hear others’ cultural stories or maps. It also demonstrated how the children’s production of narratives through multimedia can provide a platform for the flow and reconstruction of performative collective memories and “lived traces of a common past” (Giaccardi, 2012). It disrupts notions of cultural reductionism and racial incommensurability that fix and homogenize Indigenous practices within and against a dominant White norm. Recommendations are provided for an approach to appropriating social media in schools that explicitly attends to the dynamic nature of Indigenous practices, negotiated through intercultural constructions and flows, and opening space for a critical anti-racist approach to multimodal text production.
Abstract:
Although there are many approaches for developing secure programs, they are not necessarily helpful for evaluating the security of a pre-existing program. Software metrics promise an easy way of comparing the relative security of two programs or assessing the security impact of modifications to an existing one. Most studies in this area focus on high-level source code, but this approach fails to take compiler-specific code generation into account. In this work we describe a set of object-oriented Java bytecode security metrics which are capable of assessing the security of a compiled program from the point of view of potential information flow. These metrics can be used to compare the security of programs or assess the effect of program modifications on security using a tool which we have developed to automatically measure the security of a given Java bytecode program in terms of the accessibility of distinguished ‘classified’ attributes.
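The metrics described operate on real Java bytecode; purely as an illustration of the accessibility idea, the toy sketch below computes the proportion of ‘classified’ attributes that are reachable from outside their declaring class, over a simplified hand-built model rather than parsed class files. All names and the decision rule are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Attribute:
    name: str
    access: str       # "private", "protected", "package", or "public"
    classified: bool  # marked as security-sensitive

def classified_attribute_accessibility(attributes):
    """Toy accessibility metric: fraction of classified attributes that are
    not private, i.e. potentially exposed to information flow from outside
    the declaring class.  Lower values suggest less potential leakage."""
    classified = [a for a in attributes if a.classified]
    if not classified:
        return 0.0
    exposed = [a for a in classified if a.access != "private"]
    return len(exposed) / len(classified)

attrs = [Attribute("key", "private", True),
         Attribute("balance", "public", True),
         Attribute("label", "public", False)]
print(classified_attribute_accessibility(attrs))   # 0.5
```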
Abstract:
In Thomas Mann’s tetralogy of the 1930s and 1940s, Joseph and His Brothers, the narrator declares history is not only “that which has happened and that which goes on happening in time,” but it is also “the stratified record upon which we set our feet, the ground beneath us.” By opening up history to its spatial, geographical, and geological dimensions, Mann both predicts and encapsulates the twentieth century’s “spatial turn,” a critical shift that divested geography of its largely passive role as history’s “stage” and brought to the fore intersections between the humanities and the earth sciences. In this paper, I draw out the relationships between history, narrative, geography, and geology revealed by this spatial turn and the questions these pose for thinking about the disciplinary relationship between geography and the humanities. As Mann’s statement exemplifies, the spatial turn itself has often been captured most strikingly in fiction, and I would argue nowhere more so than in Graham Swift’s Waterland (1983) and Anne Michaels’s Fugitive Pieces (1996), both of which present space, place, and landscape as having a palpable influence on history and memory. The geographical/geological line that runs through both Waterland and Fugitive Pieces continues through Tim Robinson’s non-fictional, two-volume “topographical” history Stones of Aran. Robinson’s Stones of Aran—which is not history, not geography, and not literature, and yet is all three—constructs an imaginative geography that renders inseparable geography, geology, history, memory, and the act of writing.
Abstract:
Numeric set watermarking is a way to provide ownership proof for numerical data. Numerical data can be considered to be primitives for multimedia types such as images and videos, since they are organized forms of numeric information. The capability to watermark numerical data therefore directly implies the capability to watermark multimedia objects and discourage information theft on social networking sites and the Internet in general. Unfortunately, there has been very limited research in the field of numeric set watermarking due to underlying limitations on the number of items in the set and the number of least significant bits (LSBs) in each item available for watermarking. In 2009, Gupta et al. proposed a numeric set watermarking model that embeds watermark bits in the items of the set based on a hash value of the items’ most significant bits (MSBs). If an item is chosen for watermarking, a watermark bit is embedded in its LSBs, and the replaced bit is inserted in the fractional value to provide reversibility. The authors show their scheme to be resilient against the traditional subset addition, deletion, and modification attacks as well as secondary watermarking attacks. In this paper, we present a bucket attack on this watermarking model. The attack consists of creating buckets of items with the same MSBs and determining whether the items of each bucket carry watermark bits. Experimental results show that the bucket attack is very strong and destroys the entire watermark with a success rate close to 100%. We examine the inherent weaknesses in the watermarking model of Gupta et al. that leave it vulnerable to the bucket attack and propose potential safeguards that can provide resilience against this attack.
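A schematic of the bucket-grouping step at the heart of such an attack; the MSB split, bit widths and the uniform LSB perturbation below are simplified assumptions for illustration, not Gupta et al.'s exact parameters or results.

```python
from collections import defaultdict

def bucket_attack(items, msb_bits=8, total_bits=32):
    """Group integer items by their most significant bits.

    Items sharing MSBs would have been selected (or not) together by an
    MSB-hash-based embedder, so an attacker can perturb the LSBs of every
    bucket uniformly to destroy any embedded bits without knowing the key.
    """
    shift = total_bits - msb_bits
    buckets = defaultdict(list)
    for value in items:
        buckets[value >> shift].append(value)
    # Flip the lowest bit of every item in every bucket.
    return {msb: [v ^ 1 for v in vals] for msb, vals in buckets.items()}

# Two items share the top byte (0x12) and land in the same bucket.
attacked = bucket_attack([0x12345678, 0x1234ABCD, 0x87654321])
print(attacked)
```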