964 results for video surveillance
Abstract:
In this chapter we consider biosecurity surveillance as part of a complex system comprising many different biological, environmental and human factors and their interactions. Modelling and analysis of surveillance strategies should take these complexities into account, and should also facilitate the use and integration of the many different types of information that can provide insight into the system as a whole. After a brief discussion of a range of options, we focus on Bayesian networks for representing such complex systems. We summarize the features of Bayesian networks and describe them in the context of surveillance.
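As an illustration of the kind of reasoning such a network supports (not an example taken from the chapter), the following minimal Python sketch updates the probability of pest presence from a single surveillance result; the prior, sensitivity and false-positive values are hypothetical.

```python
# Minimal sketch: a two-node "presence -> detection" fragment of a
# surveillance Bayesian network, with hypothetical probabilities.

p_present = 0.02          # prior probability the pest is present at a site
sensitivity = 0.80        # P(detection | present), e.g. trap/inspection sensitivity
false_positive = 0.01     # P(detection | absent)

def posterior_presence(detected: bool) -> float:
    """P(present | detection outcome) by Bayes' rule."""
    p_det_given_present = sensitivity if detected else 1.0 - sensitivity
    p_det_given_absent = false_positive if detected else 1.0 - false_positive
    num = p_det_given_present * p_present
    den = num + p_det_given_absent * (1.0 - p_present)
    return num / den

print(posterior_presence(True))    # belief after a positive surveillance result
print(posterior_presence(False))   # belief after a negative result
```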
Abstract:
Background and aims. Since 1999, hospitals in the Finnish Hospital Infection Program (SIRO) have reported data on surgical site infections (SSI) following major hip and knee surgery. The purpose of this study was to obtain detailed information to support prevention efforts by analyzing SIRO data on SSIs, to evaluate possible factors affecting the surveillance results, and to assess the disease burden of postoperative prosthetic joint infections in Finland. Methods. Procedures under surveillance included total hip (THA) and total knee arthroplasties (TKA), and the open reduction and internal fixation (ORIF) of femur fractures. Hospitals prospectively collected data using common definitions and a written protocol, and also performed postdischarge surveillance. In the validation study, a blinded retrospective chart review was performed and infection control nurses were interviewed. Patient charts of deep incisional and organ/space SSIs were reviewed, and data from three sources (SIRO, the Finnish Arthroplasty Register, and the Finnish Patient Insurance Centre) were linked for capture-recapture analyses. Results. During 1999-2002, the overall SSI rate was 3.3% after 11,812 orthopedic procedures (median length of stay, eight days). Of all SSIs, 56% were detected after discharge. The majority of deep incisional and organ/space SSIs (65/108, 60%) were detected on readmission. The positive and negative predictive values, sensitivity, and specificity of SIRO surveillance were 94% (95% CI, 89-99%), 99% (99-100%), 75% (56-93%), and 100% (97-100%), respectively. Of the 9,831 total joint replacements performed during 2001-2004, 7.2% (THA 5.2% and TKA 9.9%) of the implants were inserted in a simultaneous bilateral operation. Patients who underwent bilateral operations were younger, healthier, and more often male than those who underwent unilateral procedures. The rates of deep SSIs and mortality did not differ between bilateral and unilateral THAs or TKAs. Four deep SSIs were reported following bilateral operations (antimicrobial prophylaxis administered 48-218 minutes before incision). In the three registers, altogether 129 prosthetic joint infections were identified after 13,482 THAs and TKAs during 1999-2004. After correction with the positive predictive value of SIRO (91%), a log-linear model provided an estimated overall prosthetic joint infection rate of 1.6% after THA and 1.3% after TKA. The sensitivity of the SIRO surveillance ranged from 36% to 57%. According to this estimation, nearly 200 prosthetic joint infections could occur in Finland each year (the average from 1999 to 2004) after THA and TKA. Conclusions. Postdischarge surveillance had a major impact on SSI rates after major hip and knee surgery. A minority of deep incisional and organ/space SSIs would be missed, however, if postdischarge surveillance by questionnaire were not performed. According to the validation study, most SSIs reported to SIRO were true infections. Some SSIs were missed, revealing some weakness in case finding. Variation in diagnostic practices may also affect SSI rates. No differences were found in deep SSI rates or mortality between bilateral and unilateral THA and TKA. However, the patient populations of these two groups differed. Bilateral operations require specific attention to antimicrobial prophylaxis as well as to data management in the surveillance database. The true disease burden of prosthetic joint infections may be heavier than the rates from national nosocomial surveillance systems usually suggest.
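For readers unfamiliar with the validity measures and capture-recapture estimation used above, the Python sketch below shows how they are computed; the counts are hypothetical, and the two-source Chapman estimator shown is a simplification of the three-source log-linear model used in the study.

```python
# Illustrative sketch only: validity measures and a simple two-source
# capture-recapture estimate. The counts are hypothetical, not SIRO data.

tp, fp, fn, tn = 45, 3, 15, 900   # hypothetical validation counts

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
print(f"sens={sensitivity:.2f} spec={specificity:.2f} ppv={ppv:.2f} npv={npv:.2f}")

# Chapman estimator of the total number of infections from two registers
# (the study itself linked three sources and fitted a log-linear model).
n1, n2, m = 70, 60, 40            # cases in register 1, register 2, and in both
n_total = (n1 + 1) * (n2 + 1) / (m + 1) - 1
print(f"estimated total infections: {n_total:.0f}")
```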
Abstract:
Technology is increasingly infiltrating all aspects of our lives, and the rapid uptake of devices that live near, on or in our bodies is facilitating radical new ways of working, relating and socialising. This distribution of technology into the very fabric of our everyday life creates new possibilities, but also raises questions regarding our future relationship with data and the quantified self. By embedding technology into the fabric of our clothes and accessories, it becomes 'wearable'. Such 'wearables' enable the acquisition of and the connection to vast amounts of data about people and environments in order to provide life-augmenting levels of interactivity. Wearable sensors, for example, offer the potential for significant benefits in the future management of our wellbeing. Fitness trackers such as 'Fitbit' and 'Garmin' provide wearers with the ability to monitor their personal fitness indicators, while other wearables provide healthcare professionals with information that improves diagnosis. While the rapid uptake of wearables may offer unique and innovative opportunities, there are also concerns surrounding the high levels of data sharing that come as a consequence of these technologies. As more 'smart' devices connect to the Internet, and as technology becomes increasingly available (e.g. via Wi-Fi, Bluetooth), more products, artefacts and things are becoming interconnected. This digital connection of devices is called the 'Internet of Things' (IoT). The IoT is spreading rapidly, with many traditionally non-online devices becoming increasingly connected: products such as mobile phones, fridges, pedometers, coffee machines, video cameras, cars and clothing. The IoT is growing at a rapid rate, with estimates indicating that by 2020 there will be over 25 billion connected things globally. As the number of devices connected to the Internet increases, so too does the amount of data collected and the type of information that is stored and potentially shared. The ability to collect massive amounts of data - known as 'big data' - can be used to better understand and predict behaviours across all areas of research, from societal and economic to environmental and biological. With this kind of information at our disposal, we have a more powerful lens with which to perceive the world, and the resulting insights can be used to design more appropriate products, services and systems. It can, however, also be used as a method of surveillance, suppression and coercion by governments or large organisations. This is becoming particularly apparent in advertising that targets audiences based on the individual preferences revealed by the data collected from social media and online devices such as GPS systems or pedometers. This type of technology also provides fertile ground for public debates around future fashion, identity and broader social issues such as culture, politics and the environment. The potential implications of these types of technological interactions via wearables, through and with the IoT, have never been more real or more accessible. But, as highlighted, this interconnectedness also brings with it complex technical, ethical and moral challenges. Data security and the protection of privacy and personal information will become ever more present in current and future ethical and moral debates of the 21st century. This type of technology is also a stepping-stone to a future that includes implantable technology, biotechnologies, interspecies communication and augmented humans (cyborgs).
Technologies that live symbiotically and perpetually in our bodies, the built environment and the natural environment are no longer the stuff of science fiction; they are in fact a reality. So, where next?... The works exhibited in Wear Next_ provide a snapshot into the broad spectrum of wearables in design and in development internationally. This exhibition has been curated to serve as a platform for a broader debate around future technology, our mediated future-selves and the evolution of human interactions. As you explore the exhibition, may we ask that you pause and think to yourself, what might we... Wear Next_?

WEARNEXT ONLINE LISTINGS AND MEDIA COVERAGE:
http://indulgemagazine.net/wear-next/
http://www.weekendnotes.com/wear-next-exhibition-gallery-artisan/
http://concreteplayground.com/brisbane/event/wear-next_/
http://www.nationalcraftinitiative.com.au/news_and_events/event/48/wear-next
http://bneart.com/whats-on/wear-next_/
http://creativelysould.tumblr.com/post/124899079611/creative-weekend-art-edition
http://www.abc.net.au/radionational/programs/breakfast/smartly-dressed-the-future-of-wearable-technology/6744374
http://couriermail.newspaperdirect.com/epaper/viewer.aspx

RADIO COVERAGE:
http://www.abc.net.au/radionational/programs/breakfast/wear-next-exhibition-whats-next-for-wearable-technology/6745986

TELEVISION COVERAGE:
http://www.abc.net.au/radionational/programs/breakfast/wear-next-exhibition-whats-next-for-wearable-technology/6745986
https://au.news.yahoo.com/video/watch/29439742/how-you-could-soon-be-wearing-smart-clothes/#page1
Abstract:
Scalable video coding (SVC) is an emerging standard built on the success of the advanced video coding standard (H.264/AVC) by the Joint Video Team (JVT). Motion compensated temporal filtering (MCTF) and closed loop hierarchical B pictures (CHBP) are two important coding methods proposed during the initial stages of standardization. Either of the two coding methods, MCTF or CHBP, may perform better depending on the noise content and characteristics of the sequence. This work identifies further characteristics of sequences for which the performance of MCTF is superior to that of CHBP, and presents a method to adaptively select either MCTF or CHBP at the GOP level. This method, referred to as "Adaptive Decomposition", is shown to provide better R-D performance than using MCTF or CHBP alone. Further, the method is extended to non-scalable coders.
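A hedged sketch of the per-GOP selection idea (the paper's actual decision features and thresholds are not reproduced here): choose MCTF or CHBP for each GOP from a simple noise/temporal-activity measure computed on the frames.

```python
import numpy as np

def gop_noise_measure(frames: np.ndarray) -> float:
    """Mean absolute frame difference as a crude noise/activity proxy."""
    diffs = np.abs(np.diff(frames.astype(np.int32), axis=0))
    return float(diffs.mean())

def choose_decomposition(frames: np.ndarray, threshold: float = 4.0) -> str:
    """Return the coding method to use for this GOP (illustrative rule only)."""
    return "MCTF" if gop_noise_measure(frames) > threshold else "CHBP"

# Example: a GOP of 8 random 64x64 grey frames (stand-in for real video).
gop = np.random.randint(0, 256, size=(8, 64, 64), dtype=np.uint8)
print(choose_decomposition(gop))
```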
Abstract:
This paper explores the obstacles associated with designing video game levels for the purpose of objectively measuring flow. We sought to create three video game levels capable of inducing a flow state, an overload state (low-flow), and a boredom state (low-flow). A pilot study, in which participants self-reported levels of flow after playing all three game levels, was undertaken. Unexpected results point to the challenges of operationalising flow in video game research, obstacles in experimental design for invoking flow and low-flow, concerns about flow as a construct for measuring video game enjoyment, the applicability of self-report flow scales, and the experience of flow in video game play despite substantial challenge-skill differences.
Abstract:
Feature track matrix factorization based methods have been attractive solutions to the Structure-from-motion (SfM) problem. The group motion of the feature points is analyzed to obtain the 3D information. It is well known that the factorization formulations give rise to rank deficient systems of equations. Even when enough constraints exist, the extracted models are sparse due to the unavailability of pixel level tracks. Pixel level tracking of 3D surfaces is a difficult problem, particularly when the surface has very little texture, as in a human face. Only sparsely located feature points can be tracked, and tracking errors are inevitable along rotating low-texture surfaces. However, the 3D models of an object class lie in a subspace of the set of all possible 3D models. We propose a novel solution to the Structure-from-motion problem which utilizes high-resolution 3D data obtained from a range scanner to compute a basis for this desired subspace. Adding subspace constraints during factorization also facilitates the removal of tracking noise, which causes distortions outside the subspace. We demonstrate the effectiveness of our formulation by extracting dense 3D structure of a human face and comparing it with a well known Structure-from-motion algorithm due to Brand.
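The following Python sketch conveys the general flavour of subspace-constrained factorization on toy data; it is not Brand's algorithm nor the paper's exact formulation, and the basis used is random rather than learned from range scans.

```python
import numpy as np

def factorize_tracks(W: np.ndarray, rank: int = 4):
    """Low-rank factorization of a 2F x P track matrix via SVD (Tomasi-Kanade style)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    M = U[:, :rank] * np.sqrt(s[:rank])              # motion factor
    S = np.sqrt(s[:rank])[:, None] * Vt[:rank]       # shape factor
    return M, S

def project_to_subspace(S: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Project each row of the shape matrix onto the span of the basis columns of B."""
    P_B = B @ np.linalg.pinv(B)                      # projector onto span(B)
    return S @ P_B

# Toy data: 10 frames (20 rows) of 50 tracked points, and a hypothetical
# 6-mode basis over the 50 points (in the paper this comes from range scans).
W = np.random.randn(20, 50)
B = np.random.randn(50, 6)
M, S = factorize_tracks(W)
S_constrained = project_to_subspace(S, B)
print(S_constrained.shape)
```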
Abstract:
Objective: To identify key stakeholder preferences and priorities when considering a national healthcare-associated infection (HAI) surveillance programme through the use of a discrete choice experiment (DCE). Setting: Australia does not have a national HAI surveillance programme. An online web-based DCE was developed and made available to participants in Australia. Participants: A sample of 184 purposively selected healthcare workers based on their senior leadership role in infection prevention in Australia. Primary and secondary outcomes: A DCE requiring respondents to select 1 HAI surveillance programme over another based on 5 different characteristics (or attributes) in repeated hypothetical scenarios. Data were analysed using a mixed logit model to evaluate preferences and identify the relative importance of each attribute. Results: A total of 122 participants completed the survey (response rate 66%) over a 5-week period. Excluding 22 who mismatched a duplicate choice scenario, analysis was conducted on 100 responses. The key findings included: 72% of stakeholders exhibited a preference for a surveillance programme with continuous mandatory core components (mean coefficient 0.640 (p<0.01)), 65% for a standard surveillance protocol where patient-level data are collected on infected and non-infected patients (mean coefficient 0.641 (p<0.01)), and 92% for hospital-level data that are publicly reported on a website and not associated with financial penalties (mean coefficient 1.663 (p<0.01)). Conclusions: The use of the DCE has provided a unique insight into key stakeholder priorities when considering a national HAI surveillance programme. The application of a DCE offers a meaningful method to explore and quantify preferences in this setting.
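As a rough illustration of the model behind such preference estimates, the sketch below evaluates conditional-logit choice probabilities for two hypothetical surveillance programmes; the study itself fitted a mixed logit, the attribute coding here is invented, and only the coefficient magnitudes echo those reported above.

```python
import numpy as np

def choice_probabilities(X: np.ndarray, beta: np.ndarray) -> np.ndarray:
    """X: (alternatives x attributes) for one choice set; returns P(choose alternative j)."""
    v = X @ beta                      # systematic utility of each alternative
    e = np.exp(v - v.max())           # numerically stabilised softmax
    return e / e.sum()

# Two hypothetical programmes described by three attributes
# (mandatory core components, patient-level protocol, public reporting).
X = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 0.0]])
beta = np.array([0.64, 0.64, 1.66])   # illustrative coefficients, same order of magnitude as reported
print(choice_probabilities(X, beta))
```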
Abstract:
A large external memory bandwidth requirement leads to increased system power dissipation and cost in video coding applications. The majority of the external memory traffic in a video encoder is due to reference data accesses. We describe a lossy reference frame compression technique that can be used in video coding with minimal impact on quality while significantly reducing power and bandwidth requirements. The low cost, transformless compression technique uses the lossy reference for motion estimation to reduce memory traffic, and the lossless reference for motion compensation (MC) to avoid drift. Thus, it is compatible with all existing video standards. We calculate the quantization error bound and show that by storing the quantization error separately, the bandwidth overhead due to MC can be reduced significantly. The technique meets key requirements specific to the video encoding application. Reductions of 24-39% in peak bandwidth and 23-31% in total average power consumption are observed for IBBP sequences.
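A minimal sketch of the core idea, assuming simple scalar quantization rather than the paper's actual compression scheme: keep a coarse reference for motion estimation and store the bounded quantization error separately so the exact reference can be rebuilt for motion compensation.

```python
import numpy as np

def compress_reference(ref: np.ndarray, step: int = 8):
    lossy = (ref // step) * step          # coarse reference used by motion estimation
    error = ref - lossy                   # bounded residual, 0 <= error < step
    return lossy, error

def reconstruct_reference(lossy: np.ndarray, error: np.ndarray) -> np.ndarray:
    return lossy + error                  # exact reference used by MC (no drift)

ref = np.random.randint(0, 256, size=(16, 16), dtype=np.int32)
lossy, err = compress_reference(ref)
assert np.array_equal(reconstruct_reference(lossy, err), ref)
print(err.max())                          # quantization error is bounded by step - 1
```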
Abstract:
In this paper, we show that it is possible to reduce the complexity of Intra MB coding in H.264/AVC based on a novel chance constrained classifier. Using pairs of simple mean-variance values, our technique is able to reduce the complexity of the Intra MB coding process with a negligible loss in PSNR. We present an alternative approach to addressing the classification problem, which is equivalent to machine learning. Implementation results show that the proposed method reduces encoding time to about 20% of the reference implementation with an average loss of 0.05 dB in PSNR.
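The sketch below conveys only the general idea of mean/variance based early termination; it is a plain threshold rule, not the chance constrained classifier proposed in the paper, and the threshold value is arbitrary.

```python
import numpy as np

def needs_full_mode_search(mb: np.ndarray, var_threshold: float = 100.0) -> bool:
    """mb: 16x16 luma macroblock; high variance suggests detailed texture."""
    return float(mb.var()) > var_threshold

mb_flat = np.full((16, 16), 128, dtype=np.uint8)
mb_textured = np.random.randint(0, 256, size=(16, 16), dtype=np.uint8)
print(needs_full_mode_search(mb_flat))      # False: skip most Intra modes
print(needs_full_mode_search(mb_textured))  # True: run the full mode search
```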
Abstract:
A built-in-self-test (BIST) subsystem embedded in a 65-nm mobile broadcast video receiver is described. The subsystem is designed to perform analog and RF measurements at multiple internal nodes of the receiver. It uses a distributed network of CMOS sensors and a low bandwidth, 12-bit A/D converter to perform the measurements with a serial bus interface enabling a digital transfer of measured data to automatic test equipment (ATE). A perturbation/correlation based BIST method is described, which makes pass/fail determination on parts, resulting in significant test time and cost reduction.
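On the ATE side, the pass/fail step might look like the following sketch; the monitored node names and limits are hypothetical, not taken from the described receiver.

```python
# Illustrative sketch only: turning digitised sensor readings from a BIST
# network into a pass/fail decision. Node names and limits are hypothetical.

SPEC_LIMITS = {            # (low, high) limits per monitored node, in volts
    "lna_bias": (0.55, 0.75),
    "vco_amplitude": (0.30, 0.60),
    "adc_reference": (1.15, 1.25),
}

def part_passes(measurements: dict) -> bool:
    """measurements: node name -> value read from the 12-bit ADC over the serial bus."""
    return all(lo <= measurements[node] <= hi
               for node, (lo, hi) in SPEC_LIMITS.items())

print(part_passes({"lna_bias": 0.62, "vco_amplitude": 0.41, "adc_reference": 1.19}))
```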
Abstract:
With the advent of the Internet, video over IP is gaining popularity. In such an environment, scalability and fault tolerance will be key issues. Existing video on demand (VoD) service systems are usually neither scalable nor tolerant to server faults, and are hence ill-suited to multi-user, failure-prone networks such as the Internet. Current research on VoD often focuses on increasing the throughput and reliability of a single server, but rarely addresses the smooth provision of service during server and network failures. Reliable Server Pooling (RSerPool), being capable of providing high availability by using multiple redundant servers as a single source point, can be a solution to overcome the above failures. During a server failure, the continuity of service is maintained by another server. In order to achieve transparent failover, efficient state sharing is an important requirement. In this paper, we present an elegant, simple, efficient and scalable approach, developed to facilitate the transfer of state by the client itself using an extended cookie mechanism, which ensures that there is no noticeable disruption or change in video quality.
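A simplified sketch of the client-side state transfer idea (not the RSerPool protocol or the paper's exact cookie format): the client carries its playback state as an opaque cookie and replays it to whichever pool server it fails over to.

```python
import json
from urllib.parse import quote

class VoDClient:
    def __init__(self, video_id: str):
        # Hypothetical session state kept entirely on the client side.
        self.cookie = {"video": video_id, "position": 0, "bitrate": "720p"}

    def update_state(self, position: int) -> None:
        self.cookie["position"] = position

    def failover(self, new_server: str) -> str:
        """Build the resume request sent to another server in the pool."""
        return f"http://{new_server}/resume?state={quote(json.dumps(self.cookie))}"

client = VoDClient("lecture42")
client.update_state(position=30510)          # e.g. frame or byte offset played so far
print(client.failover("backup.pool.example"))
```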
Abstract:
Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel constraints. Typically, a quality metric such as peak signal-to-noise ratio (PSNR) or weighted signal-to-noise ratio (WSNR) is chosen out of convenience. However, this metric is not always truly representative of perceptual video quality. Attempts to use perceptual metrics in rate control have been limited by the accuracy of the video quality metrics chosen. Recently, new and improved metrics of subjective quality, such as the Video Quality Experts Group's (VQEG) NTIA General Video Quality Model (VQM), have been proven to have strong correlation with subjective quality. Here, we apply the key principles of the NTIA VQM model to rate control in order to maximize perceptual video quality. Our experiments demonstrate that applying NTIA VQM motivated metrics to standard TMN8 rate control in an H.263 encoder results in perceivable quality improvements over a baseline TMN8/MSE based implementation.
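The control idea can be sketched as follows; the perceptual scoring function is a stand-in, not the NTIA VQM model or the paper's TMN8 modification, and the adjustment rule is illustrative only.

```python
def perceptual_distortion(frame_features: dict) -> float:
    """Hypothetical proxy combining blur and blockiness scores, each in [0, 1]."""
    return 0.6 * frame_features["blur"] + 0.4 * frame_features["blockiness"]

def next_qp(current_qp: int, frame_features: dict, buffer_fullness: float) -> int:
    """Lower QP when perceived quality drops, raise it when the buffer fills."""
    qp = current_qp
    if perceptual_distortion(frame_features) > 0.5:
        qp -= 1                       # spend more bits where perceived quality suffers
    if buffer_fullness > 0.8:
        qp += 2                       # protect the channel-rate constraint
    return max(1, min(qp, 31))        # clamp to the H.263 QP range (1-31)

print(next_qp(28, {"blur": 0.7, "blockiness": 0.4}, buffer_fullness=0.5))
```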
Abstract:
Non-Identical Duplicate video detection is a challenging research problem. Non-Identical Duplicate videos are a pair of videos that are not exactly identical but are almost similar. In this paper, we evaluate two methods - Keyframe-based and Tomography-based - to determine Non-Identical Duplicate videos. Both methods make use of the existing scale-invariant feature transform (SIFT) method to find matches, between the key frames in the first method, and between cross-sections through the temporal axis of the videos in the second method. We provide extensive experimental results and an analysis of the accuracy and efficiency of the two methods on a data set of Non-Identical Duplicate video-pairs.
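A rough sketch of the keyframe-based variant only (real pipelines also involve keyframe selection and temporal alignment, which are omitted here): match SIFT descriptors between two keyframes with OpenCV and treat a high fraction of good matches as near-duplicate evidence.

```python
import cv2

def keyframe_similarity(img_a, img_b, ratio: float = 0.75) -> float:
    """Fraction of SIFT features in the larger set that pass Lowe's ratio test."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0
    matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [m[0] for m in matches
            if len(m) == 2 and m[0].distance < ratio * m[1].distance]
    return len(good) / max(len(des_a), len(des_b))

# Usage (paths are placeholders): grayscale keyframes from the two videos.
# a = cv2.imread("keyframe_a.png", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("keyframe_b.png", cv2.IMREAD_GRAYSCALE)
# print(keyframe_similarity(a, b))
```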