964 results for Video Surveillance
Abstract:
We introduce a view-point invariant representation of moving object trajectories that can be used in video database applications. It is assumed that trajectories lie on a surface that can be locally approximated with a plane. Raw trajectory data is first locally approximated with a cubic spline via least squares fitting. For each sampled point of the obtained curve, a projective invariant feature is computed using a small number of points in its neighborhood. The resulting sequence of invariant features computed along the entire trajectory forms the view invariant descriptor of the trajectory itself. Time parametrization has been exploited to compute cross ratios without ambiguity due to point ordering. Similarity between descriptors of different trajectories is measured with a distance that takes into account the statistical properties of the cross ratio, and its symmetry with respect to the point at infinity. In experiments, an overall correct classification rate of about 95% has been obtained on a dataset of 58 trajectories of players in soccer video, and an overall correct classification rate of about 80% has been obtained on matching partial segments of trajectories collected from two overlapping views of outdoor scenes with moving people and cars.
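The projective-invariant feature described above can be sketched as follows. This is a minimal illustration using the classical invariant of five coplanar points, built from 3×3 determinants of homogeneous coordinates; the sample points and the transform below are illustrative assumptions, not data from the paper:

```python
import numpy as np

def det3(p, q, r):
    # Determinant of the 3x3 matrix whose rows are three homogeneous points.
    return np.linalg.det(np.stack([p, q, r]))

def five_point_invariant(pts):
    """Projective invariant of five coplanar points in general position.

    Under any plane projective transformation H, each determinant picks up
    factors det(H) and per-point scales, which cancel in this ratio.
    """
    p1, p2, p3, p4, p5 = pts
    num = det3(p1, p2, p4) * det3(p1, p3, p5)
    den = det3(p1, p2, p5) * det3(p1, p3, p4)
    return num / den

if __name__ == "__main__":
    # Five sample points (homogeneous coordinates) and an arbitrary homography.
    pts = [np.array([x, y, 1.0]) for x, y in [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]]
    H = np.array([[1.2, 0.1, 0.3], [0.05, 0.9, -0.2], [0.01, 0.02, 1.0]])
    tpts = [H @ p for p in pts]
    print(five_point_invariant(pts), five_point_invariant(tpts))  # equal values
```

Sliding such a window of neighbouring points along the spline-fitted trajectory yields the sequence of view-invariant features the paper describes.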
Abstract:
In this project we design and implement a centralized hashing table in the snBench sensor network environment. We discuss the feasibility of this approach and compare and contrast it with the distributed hashing architecture, with particular discussion regarding the conditions under which a centralized architecture makes sense. There are numerous computational tasks that require persistence of data in a sensor network environment. To help motivate the need for data storage in snBench we demonstrate a practical application of the technology whereby a video camera can monitor a room to detect the presence of a person and send an alert to the appropriate authorities.
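A centralized hash table of the kind described reduces to a single shared key-value store that all sensor nodes read and write. The sketch below is a minimal stand-in; the class and key names are hypothetical and do not reflect the actual snBench API:

```python
import threading

class CentralHashTable:
    """Minimal centralized key-value store sketch.

    All nodes talk to this single instance, in contrast to a distributed
    hash table where keys are partitioned across nodes. Names are
    illustrative, not the snBench interface.
    """
    def __init__(self):
        self._table = {}
        self._lock = threading.Lock()  # serialize concurrent node access

    def put(self, key, value):
        with self._lock:
            self._table[key] = value

    def get(self, key, default=None):
        with self._lock:
            return self._table.get(key, default)

if __name__ == "__main__":
    store = CentralHashTable()
    # e.g. the camera-monitoring application persisting a detection event
    store.put("room-3/person_detected", True)
    print(store.get("room-3/person_detected"))
```

The trade-off discussed in the project follows directly: a single store is simple and consistent but becomes a bottleneck and single point of failure as the sensor network grows.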
Abstract:
A notable feature of the surveillance case law of the European Court of Human Rights has been the tendency of the Court to focus on the “in accordance with the law” aspect of the Article 8 ECHR inquiry. This focus has been the subject of some criticism, but the impact of this approach on the manner in which domestic surveillance legislation has been formulated in the Party States has received little scholarly attention. This thesis addresses that gap in the literature through its consideration of the Interception of Postal Packets and Telecommunications Messages (Regulation) Act, 1993 and the Criminal Justice (Surveillance) Act, 2009. While both Acts provide several of the safeguards endorsed by the European Court of Human Rights, this thesis finds that they suffer from a number of crucial weaknesses that undermine the protection of privacy. This thesis demonstrates how the focus of the European Court of Human Rights on the “in accordance with the law” test has resulted in some positive legislative change. Notwithstanding this fact, it is maintained that the legality approach has gained prominence at the expense of a full consideration of the “necessary in a democratic society” inquiry. This has resulted in superficial legislative responses at the domestic level, including from the Irish government. Notably, through the examination of a number of more recent cases, this project discerns a significant alteration in the interpretive approach adopted by the European Court of Human Rights regarding the application of the necessity test. The implications of this development are considered and the outlook for Irish surveillance legislation is assessed.
Abstract:
This research provides an interpretive cross-class analysis of the leisure experience of children, aged between six and ten years, living in Cork city. This study focuses on the cultural dispositions underpinning parental decisions in relation to children’s leisure activities, with a particular emphasis on their child-surveillance practices. In this research, child-surveillance is defined as the adult monitoring of children by technological means, physical supervision, community supervision, or adult supervised activities (Nelson, 2010; Lareau, 2003; Fotel and Thomsen, 2004). This research adds significantly to understandings of Irish childhood by providing the first in-depth qualitative analysis of the surveillance of children’s leisure-time. Since the 1990s, international research on children has highlighted the increasingly structured nature of children’s leisure-time (Lareau, 2011; Valentine & McKendrick, 1997). Furthermore, research on child-surveillance has found an increase in the intensive supervision of children during their unstructured leisure-time (Nelson, 2010; Furedi, 2008; Fotel and Thomsen, 2004). This research bridges the gap between these two key bodies of literature, providing a more integrated overview of children’s experience of leisure in Ireland. Using Bourdieu’s (1992) model of habitus, field and capital, the dispositions that shape parents’ decisions about their children’s leisure time are interrogated. The holistic view of childhood adopted in this research echoes the ‘Whole Child Approach’ by analysing the child’s experience within a wider set of social relationships including family, school, and community. Underpinned by James and Prout’s (1990) paradigm on childhood, this study considers Irish children’s agency in negotiating with parents’ decisions regarding leisure-time. 
The data collated in this study enhances our understanding of the micro-interactions between parents and children, and of the ability of the child to shape their own experience. Moreover, this is the first Irish sociological research to identify and discuss class distinctions in children’s agentic potential during leisure-time.
Abstract:
Background: Many European countries, including Ireland, lack high quality, on-going, population based estimates of maternal behaviours and experiences during pregnancy. PRAMS is a CDC surveillance program which was established in the United States in 1987 to generate high quality, population based data to reduce infant mortality rates and improve maternal and infant health. PRAMS is the only on-going population based surveillance system of maternal behaviours and experiences that occur before, during and after pregnancy worldwide. Methods: The objective of this study was to adapt, test and evaluate a modified CDC PRAMS methodology in Ireland. The birth certificate file, which is the standard approach to sampling for PRAMS in the United States, was not available for the PRAMS Ireland study. Consequently, delivery record books for the period between 3 and 5 months before the study start date at a large urban obstetric hospital [8,900 births per year] were used to randomly sample 124 women. Name, address, maternal age, infant sex, gestational age at delivery, delivery method, APGAR score and birth weight were manually extracted from records. Stillbirths and early neonatal deaths were excluded using APGAR scores and hospital records. Women were sent a letter of invitation to participate, including an option to opt out, followed by a modified PRAMS survey, a reminder letter and a final survey. Results: The response rate for the pilot was 67%. Two per cent of women refused the survey, 7% opted out of the study and 24% did not respond. Survey items were at least 88% complete for all 82 respondents. Prevalence estimates of socially undesirable behaviours such as alcohol consumption during pregnancy were high [>50%] and comparable with international estimates. Conclusion: PRAMS is a feasible and valid method of collecting information on maternal experiences and behaviours during pregnancy in Ireland.
With further work, PRAMS may offer a solution to data deficits in maternal health behaviour indicators in Ireland. This study is important to researchers in Europe and elsewhere who may be interested in new ways of tailoring an established CDC methodology to their unique settings to resolve data deficits in maternal health.
Abstract:
Recent years have witnessed rapid growth in the demand for streaming video over the Internet and mobile networks, exposing challenges in coping with heterogeneous devices and varying network throughput. Adaptive schemes, such as scalable video coding, are an attractive solution but fare badly in the presence of packet losses. Techniques that use description-based streaming models, such as multiple description coding (MDC), are more suitable for lossy networks, and can mitigate the effects of packet loss by increasing the error resilience of the encoded stream, but with an increased transmission byte cost. In this paper, we present our adaptive scalable streaming technique, adaptive layer distribution (ALD). ALD is a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data are spread amongst all packets, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the resiliency of the scalable video. Subjective testing results illustrate that our techniques and models were able to provide consistently high-quality viewing, with lower transmission cost, relative to MDC, irrespective of clip type. This highlights the benefits of selective packetisation in addition to intuitive encoding and transmission.
Abstract:
Bandwidth constriction and datagram loss are prominent issues that affect the perceived quality of streaming video over lossy networks, such as wireless. The use of layered video coding seems attractive as a means to alleviate these issues, but its adoption has been held back in large part by the inherent priority assigned to the critical lower layers and the consequences for quality that result from their loss. The proposed use of forward error correction (FEC) as a solution only further burdens bandwidth availability and can negate the perceived benefits of increased stream quality. In this paper, we propose Adaptive Layer Distribution (ALD) as a novel scalable media delivery technique that optimises the tradeoff between streaming bandwidth and error resiliency. ALD is based on the principle of layer distribution, in which the critical stream data is spread amongst all datagrams, thus lessening the impact on quality due to network losses. Additionally, ALD provides a parameterised mechanism for dynamic adaptation of the scalable video, while providing increased resilience to the highest quality layers. Our experimental results show that ALD improves the perceived quality and also reduces the bandwidth demand by up to 36% in comparison to the well-known Multiple Description Coding (MDC) technique.
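The layer-distribution principle behind ALD can be illustrated in a few lines: instead of sending each layer in its own packets (so that one lost packet can wipe out the whole base layer), each layer's bytes are interleaved across all packets. The packet format below is an illustrative assumption, not the packetisation defined in these papers:

```python
def distribute_layers(layers, n_packets):
    """Spread each layer's bytes round-robin across all packets.

    Losing any single packet then removes only ~1/n_packets of every
    layer, including the critical base layer, instead of destroying one
    whole layer. Illustrative sketch of the layer-distribution idea.
    """
    packets = [bytearray() for _ in range(n_packets)]
    for layer in layers:
        for i, byte in enumerate(layer):
            packets[i % n_packets].append(byte)
    return [bytes(p) for p in packets]

if __name__ == "__main__":
    # A toy base layer and one enhancement layer split over four datagrams.
    packets = distribute_layers([b"BASEBASE", b"ENH1"], 4)
    print(packets)
```

A real implementation would also record per-packet headers describing each slice's layer and offset so the receiver can reassemble the stream; that bookkeeping is omitted here.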
Abstract:
BACKGROUND: Invasive fungal infections (IFIs) are a major cause of morbidity and mortality among organ transplant recipients. Multicenter prospective surveillance data to determine disease burden and secular trends are lacking. METHODS: The Transplant-Associated Infection Surveillance Network (TRANSNET) is a consortium of 23 US transplant centers, including 15 that contributed to the organ transplant recipient dataset. We prospectively identified IFIs among organ transplant recipients from March, 2001 through March, 2006 at these sites. To explore trends, we calculated the 12-month cumulative incidence among 9 sequential cohorts. RESULTS: During the surveillance period, 1208 IFIs were identified among 1063 organ transplant recipients. The most common IFIs were invasive candidiasis (53%), invasive aspergillosis (19%), cryptococcosis (8%), non-Aspergillus molds (8%), endemic fungi (5%), and zygomycosis (2%). Median time to onset of candidiasis, aspergillosis, and cryptococcosis was 103, 184, and 575 days, respectively. Among a cohort of 16,808 patients who underwent transplantation between March 2001 and September 2005 and were followed through March 2006, a total of 729 IFIs were reported among 633 persons. One-year cumulative incidences of the first IFI were 11.6%, 8.6%, 4.7%, 4.0%, 3.4%, and 1.3% for small bowel, lung, liver, heart, pancreas, and kidney transplant recipients, respectively. One-year incidence was highest for invasive candidiasis (1.95%) and aspergillosis (0.65%). Trend analysis showed a slight increase in cumulative incidence from 2002 to 2005. CONCLUSIONS: We detected a slight increase in IFIs during the surveillance period. These data provide important insights into the timing and incidence of IFIs among organ transplant recipients, which can help to focus effective prevention and treatment strategies.
Abstract:
BACKGROUND: The incidence and epidemiology of invasive fungal infections (IFIs), a leading cause of death among hematopoietic stem cell transplant (HSCT) recipients, are derived mainly from single-institution retrospective studies. METHODS: The Transplant Associated Infections Surveillance Network, a network of 23 US transplant centers, prospectively enrolled HSCT recipients with proven and probable IFIs occurring between March 2001 and March 2006. We collected denominator data on all HSCTs performed at each site and clinical, diagnostic, and outcome information for each IFI case. To estimate trends in IFI, we calculated the 12-month cumulative incidence among 9 sequential subcohorts. RESULTS: We identified 983 IFIs among 875 HSCT recipients. The median age of the patients was 49 years; 60% were male. Invasive aspergillosis (43%), invasive candidiasis (28%), and zygomycosis (8%) were the most common IFIs. Fifty-nine percent and 61% of IFIs were recognized within 60 days of neutropenia and graft-versus-host disease, respectively. Median onset of candidiasis and aspergillosis after HSCT was 61 days and 99 days, respectively. Within a cohort of 16,200 HSCT recipients who received their first transplants between March 2001 and September 2005 and were followed up through March 2006, we identified 718 IFIs in 639 persons. Twelve-month cumulative incidences, based on the first IFI, were 7.7 cases per 100 transplants for matched unrelated allogeneic, 8.1 cases per 100 transplants for mismatched-related allogeneic, 5.8 cases per 100 transplants for matched-related allogeneic, and 1.2 cases per 100 transplants for autologous HSCT. CONCLUSIONS: In this national prospective surveillance study of IFIs in HSCT recipients, the cumulative incidence was highest for aspergillosis, followed by candidiasis. Understanding the epidemiologic trends and burden of IFIs may lead to improved management strategies and study design.
Abstract:
We explore the possibilities of obtaining compression in video through modified sampling strategies using multichannel imaging systems. The redundancies in video streams are exploited through compressive sampling schemes to achieve low power and low complexity video sensors. The sampling strategies as well as the associated reconstruction algorithms are discussed. These compressive sampling schemes could be implemented in the focal plane readout hardware resulting in drastic reduction in data bandwidth and computational complexity.
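The core of such compressive sampling is that a signal with few nonzero components can be recovered from far fewer random measurements than samples. As an illustration of the principle (not the sensor hardware or reconstruction algorithm of this paper), the sketch below recovers a sparse vector from compressed measurements with orthogonal matching pursuit:

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = Phi @ x.

    Greedily selects the measurement-matrix column most correlated with
    the residual, then re-fits all selected columns by least squares.
    """
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        support.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, m = 64, 32                      # 64-sample signal, 32 measurements
    Phi = rng.standard_normal((m, n))  # random measurement matrix
    Phi /= np.linalg.norm(Phi, axis=0)
    x = np.zeros(n)
    x[7], x[42] = 2.0, -1.5            # 2-sparse scene
    y = Phi @ x                        # compressive measurement
    print(np.flatnonzero(omp(Phi, y, 2)))
```

In a video sensor, the analogous measurement step would be carried out in the focal-plane readout, which is what yields the bandwidth and complexity savings the abstract describes.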
Variation in use of surveillance colonoscopy among colorectal cancer survivors in the United States.
Abstract:
BACKGROUND: Clinical practice guidelines recommend colonoscopies at regular intervals for colorectal cancer (CRC) survivors. Using data from a large, multi-regional, population-based cohort, we describe the rate of surveillance colonoscopy and its association with geographic, sociodemographic, clinical, and health services characteristics. METHODS: We studied CRC survivors enrolled in the Cancer Care Outcomes Research and Surveillance (CanCORS) study. Eligible survivors were diagnosed between 2003 and 2005, had curative surgery for CRC, and were alive without recurrences 14 months after surgery with curative intent. Data came from patient interviews and medical record abstraction. We used a multivariate logit model to identify predictors of colonoscopy use. RESULTS: Despite guidelines recommending surveillance, only 49% of the 1423 eligible survivors received a colonoscopy within 14 months after surgery. We observed large regional differences (38% to 57%). Survivors who received screening colonoscopy were more likely to have colon cancer than rectal cancer (OR = 1.41, 95% CI: 1.05-1.90), to have visited a primary care physician (OR = 1.44, 95% CI: 1.14-1.82), and to have received adjuvant chemotherapy (OR = 1.75, 95% CI: 1.27-2.41). Compared to survivors with no comorbidities, survivors with moderate or severe comorbidities were less likely to receive surveillance colonoscopy (OR = 0.69, 95% CI: 0.49-0.98 and OR = 0.44, 95% CI: 0.29-0.66, respectively). CONCLUSIONS: Despite guidelines, more than half of CRC survivors did not receive surveillance colonoscopy within 14 months of surgery, with substantial variation by site of care. The association of primary care visits and adjuvant chemotherapy use suggests that access to care following surgery affects cancer surveillance.
Abstract:
This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm to adapt the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is manifested by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems. © 2013 IEEE.
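The adaptation loop described can be sketched as a simple decision rule: estimate temporal complexity directly from consecutive compressed measurement vectors, and lengthen the camera's integration window (i.e. raise the compression ratio) when the scene is static. The thresholds and window sizes below are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def choose_integration_window(y_prev, y_curr, t_low=0.05, t_high=0.2,
                              windows=(16, 8, 4)):
    """Pick the temporal CS window from compressed measurements alone.

    Small relative change between consecutive measurement vectors
    suggests a static scene, so more frames can be integrated per
    capture (higher compression). Thresholds are illustrative.
    """
    change = np.linalg.norm(y_curr - y_prev) / (np.linalg.norm(y_prev) + 1e-12)
    if change < t_low:
        return windows[0]   # static scene: compress aggressively
    if change < t_high:
        return windows[1]   # moderate motion
    return windows[2]       # fast motion: short window preserves quality

if __name__ == "__main__":
    static = np.ones(100)
    print(choose_integration_window(static, static * 1.01))
```

The key property mirrored from the abstract is that the decision uses only the compressed data, so no reconstruction is needed inside the control loop.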
Abstract:
We present a novel system to be used in the rehabilitation of patients with forearm injuries. The system uses surface electromyography (sEMG) recordings from a wireless sleeve to control video games designed to provide engaging biofeedback to the user. An integrated hardware/software system uses a neural net to classify the signals from a user’s muscles as they perform one of a number of common forearm physical therapy exercises. These classifications are used as input for a suite of video games that have been custom-designed to hold the patient’s attention and decrease the risk of noncompliance with the physical therapy regimen necessary to regain full function in the injured limb. The data is transmitted wirelessly from the on-sleeve board to a laptop computer using a custom-designed signal-processing algorithm that filters and compresses the data prior to transmission. We believe that this system has the potential to significantly improve the patient experience and efficacy of physical therapy using biofeedback that leverages the compelling nature of video games.
Abstract:
Fractal image compression is a relatively recent image compression method. Its extension to a sequence of motion images is important in video compression applications. Two basic fractal compression methods, the cube-based and the frame-based, are commonly used in the industry, and each has advantages and disadvantages. This paper proposes a hybrid algorithm that combines the advantages of the two methods in order to produce a good compression algorithm for the video industry. Experimental results show that the hybrid algorithm improves the compression ratio and the quality of decompressed images.
Abstract:
Fractal video compression is a relatively new video compression method. Its attraction is due to its high compression ratio and simple decompression algorithm, but its computational complexity is high, and as a result parallel algorithms on high-performance machines become one way out. In this study we partition the matching search, which occupies the majority of the work in a fractal video compression process, into small tasks and implement them in two distributed computing environments, one using DCOM and the other using .NET Remoting technology, based on a local area network consisting of loosely coupled PCs. Experimental results show that the parallel algorithm is able to achieve a high speedup in these distributed environments.
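The matching search partitions naturally because each range block's search is independent. The sketch below uses a local thread pool as a stand-in for the networked DCOM/.NET Remoting workers, and plain SSD matching in place of the full affine (contrast/brightness) fit, so it illustrates only the task decomposition:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def best_match(range_block, domain_blocks):
    """Exhaustive search for the domain block closest (by SSD) to one
    range block -- the kernel of fractal compression's matching step."""
    errs = [float(np.sum((range_block - d) ** 2)) for d in domain_blocks]
    return int(np.argmin(errs))

def parallel_matching(range_blocks, domain_blocks, workers=4):
    """Partition the matching search into one task per range block.

    A thread pool stands in here for the distributed workers; the paper
    dispatches the same independent tasks to PCs over DCOM/.NET Remoting.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda r: best_match(r, domain_blocks),
                             range_blocks))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    domains = [rng.standard_normal((4, 4)) for _ in range(8)]
    ranges = [domains[3] + 0.01, domains[5] - 0.01]
    print(parallel_matching(ranges, domains))
```

Because the per-block searches share only the read-only domain pool, speedup scales with the number of workers until communication costs dominate, which is the behaviour the experiments report.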