317 results for cumulative sum
Abstract:
The collective purpose of these two studies was to determine a link between the VO2 slow component and the muscle activation patterns that occur during cycling. Six male subjects performed an incremental cycle ergometer exercise test to determine a sub-TVENT (i.e. 80% of TVENT) and a supra-TVENT (TVENT + 0.75*(VO2 max - TVENT)) work load. These two constant work loads were subsequently performed on either three or four occasions for 8 min each, with VO2 captured on a breath-by-breath basis for every test, and EMG of eight major leg muscles collected on one occasion. EMG was collected for the first 10 s of every 30 s period, except for the very first 10 s period. The VO2 data were interpolated, time aligned, averaged and smoothed for both intensities. Three models were then fitted to the VO2 data to determine the kinetic responses. One of these models was mono-exponential, while the other two were bi-exponential; a second time delay parameter was the only difference between the two bi-exponential models. An F-test was used to determine significance between the bi-exponential models using the residual sum of squares term for each model. EMG was integrated to obtain one value for each 10 s period, per muscle. The EMG data were analysed by a two-way repeated measures ANOVA. A correlation was also used to determine significance between VO2 and IEMG. The VO2 data during the sub-TVENT intensity were best described by a mono-exponential response. In contrast, during supra-TVENT exercise the two bi-exponential models best described the VO2 data. The resultant F-test revealed no significant difference between the two models and therefore demonstrated that the slow component was not delayed relative to the onset of the primary component. Furthermore, only two parameters were deemed to be significantly different between the two models. This is in contrast to other findings. The EMG data, for most muscles, appeared to follow the same pattern as VO2 during both intensities of exercise. On most occasions, the correlation coefficient demonstrated significance. Although some muscles demonstrated the same relative increase in IEMG with increases in intensity and duration, it cannot be assumed that these muscles increase their contribution to VO2 in a similar fashion. Larger muscles with a higher percentage of type II muscle fibres would have a larger increase in VO2 over the same increase in intensity.
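For readers unfamiliar with this kind of kinetics analysis, the sketch below shows one way such a model comparison could be set up: mono- and bi-exponential responses fitted by least squares, with nested models compared via an F-test on their residual sums of squares. This is a minimal illustration with assumed parameter names and starting values, not the authors' code.

```python
# A minimal sketch (not the authors' code) of the kinetics analysis:
# mono- and bi-exponential responses fitted to breath-by-breath VO2 data,
# with nested models compared by an F-test on residual sums of squares.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def mono_exp(t, base, amp, tau, delay):
    """Single-component response with one time delay."""
    t = np.asarray(t, dtype=float)
    rise = amp * (1.0 - np.exp(-(t - delay) / tau))
    return base + np.where(t > delay, rise, 0.0)

def bi_exp(t, base, a1, tau1, d1, a2, tau2, d2):
    """Primary component plus a slow component with its own delay."""
    t = np.asarray(t, dtype=float)
    primary = np.where(t > d1, a1 * (1.0 - np.exp(-(t - d1) / tau1)), 0.0)
    slow = np.where(t > d2, a2 * (1.0 - np.exp(-(t - d2) / tau2)), 0.0)
    return base + primary + slow

def f_test(rss_reduced, rss_full, n_points, p_reduced, p_full):
    """F-test for nested models from their residual sums of squares."""
    F = ((rss_reduced - rss_full) / (p_full - p_reduced)) / (
        rss_full / (n_points - p_full))
    p_value = f_dist.sf(F, p_full - p_reduced, n_points - p_full)
    return F, p_value

# With interpolated, time-aligned arrays t and vo2, the fits would look like:
# popt_m, _ = curve_fit(mono_exp, t, vo2, p0=[300, 2000, 30, 10])
# popt_b, _ = curve_fit(bi_exp, t, vo2, p0=[300, 1800, 25, 10, 300, 120, 90])
```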
Abstract:
Young drivers are at higher risk of crashes than other drivers when carrying passengers. Graduated Driver Licensing has demonstrated effectiveness in reducing fatalities; however, there is considerable potential for additional strategies to complement the approach. A survey of 276 young adults (aged 17-25 years, 64% female) was conducted to examine the potential and importance of strategies delivered via the Internet and of potential strategies for passengers. Strategies delivered via the Internet represent an opportunity for widespread dissemination and greater reach to young people at times convenient to them. The current study found some significant differences between males and females with regard to the ways the Internet is used to obtain road safety information and the components valued in trusted road safety sites. There were also significant differences between males and females in the kinds of strategies used as passengers to promote driver safety and the contexts in which they occurred, with females tending to take more proactive strategies than males. In sum, young people see value in Internet delivery of passenger safety information (80% agreed/strongly agreed) and more than 90% thought it was important to intervene while a passenger of a risky driver. Thus, tailoring Internet road safety strategies to young people may differ for males and females; however, there is considerable potential for a passenger focus in strategies aimed at reducing young driver crashes.
Analytical modeling and sensitivity analysis for travel time estimation on signalized urban networks
Abstract:
This paper presents a model for estimating average travel time and its variability on signalized urban networks using cumulative plots. The plots are generated based on the availability of data: a) case-D, for detector data only; b) case-DS, for detector data and signal timings; and c) case-DSS, for detector data, signal timings and saturation flow rate. The performance of the model for different degrees of saturation and different detector detection intervals is consistent for case-DSS and case-DS, whereas for case-D the performance is inconsistent. The sensitivity analysis of the model for case-D indicates that it is sensitive to the detection interval and to the signal timings within the interval. When the detection interval is an integral multiple of the signal cycle, both accuracy and reliability are low, whereas for a detection interval of around 1.5 times the signal cycle, both accuracy and reliability are high.
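As an illustration of the underlying idea, the sketch below estimates average travel time from upstream and downstream cumulative vehicle counts: the horizontal distance between the two plots at a given vehicle rank is that vehicle's travel time. This is a minimal sketch assuming FIFO behaviour and monotonically increasing counts, not the paper's model.

```python
# Minimal sketch (not the paper's model): average link travel time from
# upstream/downstream cumulative vehicle counts. FIFO is assumed.
import numpy as np

def average_travel_time(t, cum_up, cum_down):
    """Average travel time (s) of vehicles that exited the link.

    t        : sample times (s)
    cum_up   : cumulative vehicle count at the upstream detector
    cum_down : cumulative vehicle count at the downstream detector
    """
    ranks = np.arange(1, int(cum_down[-1]) + 1)  # vehicles that exited
    t_in = np.interp(ranks, cum_up, t)           # time rank i passed upstream
    t_out = np.interp(ranks, cum_down, t)        # time rank i passed downstream
    return float(np.mean(t_out - t_in))
```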
Abstract:
This paper presents a methodology for estimating average travel time on signalized urban networks by integrating cumulative plots and probe data. The integration aims to reduce the relative deviations in the cumulative plots caused by mid-link sources and sinks. For undersaturated traffic conditions, the concept of a virtual probe is introduced, so that accurate travel time can be obtained even when a real probe is unavailable. For oversaturated traffic conditions, only one probe per travel time estimation interval (360 s), or 3% of vehicles traversing the link serving as probes, has the potential to provide accurate travel time.
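A hedged sketch of the integration idea follows: a probe with known entry and exit times pins the two cumulative plots together, removing the vertical drift that mid-link sources and sinks introduce between them. The function and variable names are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch of probe-based anchoring of cumulative plots (assumed
# names and conventions, not the paper's algorithm).
import numpy as np

def anchor_with_probe(t, cum_up, cum_down, t_probe_in, t_probe_out):
    """Shift the downstream plot so the probe holds the same cumulative
    rank at entry and exit (FIFO assumed)."""
    rank_in = np.interp(t_probe_in, t, cum_up)      # probe's upstream rank
    rank_out = np.interp(t_probe_out, t, cum_down)  # probe's downstream rank
    offset = rank_in - rank_out                     # drift from sources/sinks
    return cum_down + offset                        # corrected downstream plot
```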
Abstract:
Camera calibration information is required in order for multiple-camera networks to deliver more than the sum of many single-camera systems. Methods exist for manually calibrating cameras with high accuracy. Manually calibrating networks with many cameras is, however, time consuming, expensive and impractical for networks that undergo frequent change. For this reason, automatic calibration techniques have been vigorously researched in recent years. Fully automatic calibration methods depend on the ability to automatically find point correspondences between overlapping views. In typical camera networks, cameras are placed far apart to maximise coverage; this is referred to as a wide baseline scenario. Finding sufficient correspondences for camera calibration in wide baseline scenarios presents a significant challenge. This thesis focuses on developing more effective and efficient techniques for finding correspondences in uncalibrated, wide baseline, multiple-camera scenarios. The project consists of two major areas of work. The first is the development of more effective and efficient view-covariant local feature extractors. The second involves finding methods to extract scene information from the information contained in a limited set of matched affine features. Several novel affine adaptation techniques for salient features have been developed. A method is presented for efficiently computing the discrete scale space primal sketch of local image features, and a scale selection method was implemented that makes use of the primal sketch. The primal sketch-based scale selection method has several advantages over existing methods: it allows greater freedom in how the scale space is sampled, enables more accurate scale selection, is more effective at combining different functions for spatial position and scale selection, and leads to greater computational efficiency. Existing affine adaptation methods use the second moment matrix to estimate the local affine shape of image features. In this thesis, it is shown that the Hessian matrix can be used in a similar way to estimate local feature shape. The Hessian matrix is effective for estimating the shape of blob-like structures, but less effective for corner structures. It is simpler to compute than the second moment matrix, leading to a significant reduction in computational cost. A wide baseline dense correspondence extraction system, called WiDense, is also presented. Given only a few initial putative correspondences, it extracts large numbers of additional accurate correspondences using the following algorithms: an affine region alignment algorithm that ensures accurate alignment between matched features; a method for extracting more matches in the vicinity of a matched pair of affine features, using the alignment information contained in the match; and an algorithm for extracting large numbers of highly accurate point correspondences from an aligned pair of feature regions. Experiments show that the correspondences generated by the WiDense system improve the success rate of computing the epipolar geometry of very widely separated views, succeeding in many cases where the features produced by the best wide baseline matching algorithms are insufficient for computing the scene geometry.
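As an illustration of the Hessian-based shape estimation idea (a sketch under assumed conventions, not the thesis implementation): the second derivatives of a Gaussian-smoothed image form a 2x2 Hessian whose eigendecomposition describes an elliptical blob, with eigenvectors giving the axes and the eigenvalue ratio the elongation.

```python
# Illustrative sketch (assumed conventions, not the thesis code): the
# Hessian of a Gaussian-smoothed image, evaluated at a feature point,
# estimates the elliptical shape of a blob-like structure.
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_shape(image, y, x, sigma=2.0):
    """Return the Hessian's eigenvalues and eigenvectors at (y, x).

    For a blob, the eigenvectors give the ellipse axes and the
    eigenvalue ratio its elongation.
    """
    image = np.asarray(image, dtype=float)
    # Second derivatives of the smoothed image (axis 0 = y, axis 1 = x).
    Lyy = gaussian_filter(image, sigma, order=(2, 0))[y, x]
    Lxx = gaussian_filter(image, sigma, order=(0, 2))[y, x]
    Lxy = gaussian_filter(image, sigma, order=(1, 1))[y, x]
    H = np.array([[Lxx, Lxy],
                  [Lxy, Lyy]])
    return np.linalg.eigh(H)  # eigenvalues (ascending), eigenvectors
```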
Abstract:
No-tillage (NT) management has been promoted as a practice capable of offsetting greenhouse gas (GHG) emissions because of its ability to sequester carbon in soils. However, true mitigation is only possible if the overall impact of NT adoption reduces the net global warming potential (GWP) determined by the fluxes of the three major biogenic GHGs (i.e. CO2, N2O, and CH4). We compiled all available data comparing soil-derived GHG emissions between conventional tilled (CT) and NT systems for humid and dry temperate climates. Newly converted NT systems increase GWP relative to CT practices in both humid and dry climate regimes, and longer-term adoption (>10 years) only significantly reduces GWP in humid climates. Mean cumulative GWP over a 20-year period is also reduced under continuous NT in dry areas, but with a high degree of uncertainty. Emissions of N2O drive much of the trend in net GWP, suggesting that improved nitrogen management is essential to realize the full benefit of carbon storage in the soil for purposes of global warming mitigation. Our results indicate a strong time dependency in the GHG mitigation potential of NT agriculture, demonstrating that GHG mitigation by adoption of NT is much more variable and complex than previously considered, and policy plans to reduce global warming through this land management practice need further scrutiny to ensure success.
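To make the GWP accounting concrete, here is a minimal worked example assuming IPCC AR4 100-year conversion factors (the paper's exact accounting and time horizon may differ): net GWP is the CO2-equivalent sum of the soil carbon flux and the N2O and CH4 fluxes.

```python
# A minimal worked example with assumed IPCC AR4 100-year conversion
# factors (the paper's exact accounting may differ).
GWP_N2O = 298  # kg CO2-eq per kg N2O over 100 years (assumed factor)
GWP_CH4 = 25   # kg CO2-eq per kg CH4 over 100 years (assumed factor)

def net_gwp(soc_co2_flux, n2o_flux, ch4_flux):
    """Net GWP in kg CO2-eq/ha/yr.

    soc_co2_flux : CO2 flux from soil carbon change (negative = sequestration)
    n2o_flux     : N2O emission (kg N2O/ha/yr)
    ch4_flux     : CH4 flux (kg CH4/ha/yr; negative = soil uptake)
    """
    return soc_co2_flux + GWP_N2O * n2o_flux + GWP_CH4 * ch4_flux

# Hypothetical numbers: NT sequesters carbon, but higher N2O nearly
# cancels the gain, echoing the abstract's point about nitrogen management.
print(net_gwp(soc_co2_flux=-1200.0, n2o_flux=4.0, ch4_flux=-1.0))  # -33.0
```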
Abstract:
The design of driven pile foundations involves an iterative process requiring an initial estimate of the refusal level to determine the depth of boreholes for subsequent analyses. Current procedures for determining borehole depths incorporate parameters typically unknown at the investigation stage. Thus, a quantifiable procedure more applicable at this preliminary stage would provide greater confidence in estimating the founding level of driven piles. This paper examines the effectiveness of the Standard Penetration Test (SPT) in directly estimating driven pile refusal levels. A number of significant correlations were obtained between SPT information and pile penetration records, demonstrating the potential application of the SPT. Results indicated that pile penetration was generally best described as a function of both the pile toe and cumulative shaft SPT values. The influence of the toe SPT increased when piles penetrated rock. A refusal criterion was established from the results to guide both the estimation of borehole depths and likely pile lengths during the design stage.
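As a sketch of how such a correlation might be fitted (illustrative only; the functional form, variable names and numbers below are assumptions, not the paper's analysis), pile penetration can be regressed on the toe SPT N-value and the cumulative shaft SPT sum:

```python
# Illustrative regression of pile penetration on toe SPT N-value and
# cumulative shaft SPT (assumed form and synthetic numbers).
import numpy as np

def fit_penetration_model(toe_N, cum_shaft_N, penetration):
    """Least-squares fit: penetration ~ b0 + b1*toe_N + b2*cum_shaft_N."""
    X = np.column_stack([np.ones_like(toe_N), toe_N, cum_shaft_N])
    coeffs, *_ = np.linalg.lstsq(X, penetration, rcond=None)
    return coeffs  # b0, b1, b2

# Synthetic driving records for illustration only.
toe_N = np.array([20.0, 35.0, 50.0, 60.0, 80.0])             # toe SPT N-values
cum_shaft_N = np.array([150.0, 300.0, 520.0, 700.0, 950.0])  # cumulative shaft SPT
penetration = np.array([12.0, 9.5, 7.0, 5.5, 3.0])           # penetration (m)
print(fit_penetration_model(toe_N, cum_shaft_N, penetration))
```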
Abstract:
Objective: The aim of this literature review is to identify the role of probiotics in the management of enteral tube feeding (ETF) diarrhoea in critically ill patients. Background: Diarrhoea is a common gastrointestinal problem seen in ETF patients; the incidence of diarrhoea in tube-fed patients varies from 2% to 68% across all patients. Despite extensive investigation, the pathogenesis of ETF diarrhoea remains unclear, and evidence to support probiotics to manage ETF diarrhoea in critically ill patients remains sparse. Method: Literature on ETF diarrhoea and probiotics in critically ill adult patients was reviewed from 1980 to 2010. The Cochrane Library, PubMed, ScienceDirect, Medline and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) electronic databases were searched using specific inclusion/exclusion criteria. Key search terms used were: enteral nutrition, diarrhoea, critical illness, probiotics, probiotic species and randomised clinical control trial (RCT). Results: Four RCT papers were identified: two reporting full studies, one reporting a pilot RCT and one conference abstract reporting a pilot RCT. A trend towards a reduction in diarrhoea incidence was observed in the probiotic groups. However, mortality associated with probiotic use in some severely and critically ill patients must caution the clinician against its use. Conclusion: Evidence to support probiotic use in the management of ETF diarrhoea in critically ill patients remains unclear. This paper argues that probiotics should not be administered to critically ill patients until further research has been conducted to examine the causal relationship between probiotics and mortality, irrespective of the patient's disease state or the projected prophylactic benefit of probiotic administration.
Abstract:
The theory of nonlinear dynamic systems provides new methods for handling complex systems. Chaos theory offers new concepts, algorithms and methods for processing, enhancing and analyzing measured signals, and in recent years researchers have been applying concepts from this theory to bio-signal analysis. In this work, the complex dynamics of bio-signals such as the electrocardiogram (ECG) and the electroencephalogram (EEG) are analyzed using the tools of nonlinear systems theory. In the modern industrialized countries, several hundred thousand people die every year from sudden cardiac death. The electrocardiogram (ECG) is an important bio-signal representing the sum total of millions of cardiac cell depolarization potentials; it contains important insight into the state of health and the nature of the disease afflicting the heart. Heart rate variability (HRV) refers to the regulation of the sinoatrial node, the natural pacemaker of the heart, by the sympathetic and parasympathetic branches of the autonomic nervous system. Heart rate variability analysis is an important tool for observing the heart's ability to respond to the normal regulatory impulses that affect its rhythm. A computer-based intelligent system for analysis of cardiac states is very useful in diagnostics and disease management. Like many bio-signals, HRV signals are non-linear in nature. Higher order spectral analysis (HOS) is known to be a good tool for the analysis of non-linear systems and provides good noise immunity. In this work, we studied the HOS of the HRV signals of normal heartbeat and four classes of arrhythmia. This thesis presents some general characteristics for each of these classes of HRV signals in the bispectrum and bicoherence plots. Several features were extracted from the HOS and subjected to an Analysis of Variance (ANOVA) test; the results are very promising for cardiac arrhythmia classification, with a number of features yielding a p-value < 0.02 in the ANOVA test. An automated intelligent system for the identification of cardiac health is very useful in healthcare technology. In this work, seven features were extracted from the heart rate signals using HOS and fed to a support vector machine (SVM) for classification. The performance evaluation protocol in this thesis uses 330 subjects consisting of five different kinds of cardiac disease conditions. The classifier achieved a sensitivity of 90% and a specificity of 89%, and the system is ready to run on larger data sets. In EEG analysis, the search for hidden information for the identification of seizures has a long history. Epilepsy is a pathological condition characterized by the spontaneous and unforeseeable occurrence of seizures, during which the perception or behavior of patients is disturbed. Automatic early detection of seizure onsets would help patients and observers to take appropriate precautions. Various methods have been proposed to predict the onset of seizures based on EEG recordings, and the use of nonlinear features motivated by higher order spectra (HOS) has been reported to be a promising approach for differentiating between normal, background (pre-ictal) and epileptic EEG signals. In this work, these features are used to train both a Gaussian mixture model (GMM) classifier and a support vector machine (SVM) classifier. Results show that the classifiers were able to achieve 93.11% and 92.67% classification accuracy, respectively, with selected HOS-based features. About 2 hours of EEG recordings from 10 patients were used in this study.
This thesis introduces unique bispectrum and bicoherence plots for various cardiac conditions and for normal, background and epileptic EEG signals. These plots reveal distinct patterns that are useful for visual interpretation by those without a deep understanding of spectral analysis, such as medical practitioners. The thesis includes original contributions in extracting features from HRV and EEG signals using HOS and entropy, in analyzing the statistical properties of such features on real data, and in automated classification using these features with GMM and SVM classifiers.
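As a rough illustration of the pipeline described above (not the thesis code), the sketch below estimates a bispectrum by averaging the triple product X(f1)X(f2)X*(f1+f2) over signal segments, derives two simple HOS-style features, and trains an SVM. The specific feature choices and the placeholder data are assumptions.

```python
# Rough illustration (not the thesis pipeline): direct bispectrum
# estimate, two simple HOS-style features, and an SVM classifier.
import numpy as np
from sklearn.svm import SVC

def bispectrum(x, nfft=128):
    """Average |X(f1) X(f2) X*(f1+f2)| over non-overlapping segments."""
    segments = [x[i:i + nfft] for i in range(0, len(x) - nfft + 1, nfft)]
    B = np.zeros((nfft // 2, nfft // 2), dtype=complex)
    for s in segments:
        X = np.fft.fft(s - np.mean(s), nfft)
        for f1 in range(nfft // 2):
            for f2 in range(nfft // 2 - f1):
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / max(len(segments), 1)

def hos_features(x):
    B = bispectrum(x)
    return [np.mean(B), -np.sum(B * np.log(B + 1e-12))]  # magnitude, entropy-like

# Placeholder HRV segments and labels (e.g. normal vs one arrhythmia class).
rng = np.random.default_rng(0)
X = np.array([hos_features(rng.standard_normal(512)) for _ in range(40)])
y = np.repeat([0, 1], 20)
clf = SVC(kernel="rbf").fit(X, y)
```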
Abstract:
Purpose - The cumulative impacts of the knowledge economy, together with the emerging dominance of knowledge-intensive sectors, have led to an unprecedented period of socio-economic and spatial restructuring. As a result, the paradigm of knowledge-based urban development (KBUD) has emerged as a development strategy to guide knowledge-based economic transformation (Knight, 1995; Yigitcanlar, 2007). Notwithstanding widespread government commitment and financial investment, in many cases providing the enabling circumstances for KBUD has proven a complicated task as institutional barriers remain. Researchers and practitioners advocate that the way organisations work and their institutional relationships, policies and programs will have a significant impact on a region's capacity to achieve KBUD (Savitch, 1998; Savitch and Kantor, 2002; Keast and Mandell, 2009). In this context, building organisational capacity is critical to achieving institutional change and bringing together all of the key actors and sources for the successful development, adoption and implementation of knowledge-based development of a city (Yigitcanlar, 2009). Design/methodology/approach - There is a growing need to determine the complex inter-institutional arrangements and intra-organisational interactions required to advance urban development within the knowledge economy. In order to design organisational capacity-building strategies, the associated attributes of good capacity must first be identified. The paper, with its appraisal of knowledge-based urban development, scrutinises organisational capacity and institutional change in Brisbane. As part of the discussion of the case study findings, the paper describes the institutional relationships, policies, programs and funding streams which are supporting KBUD in the region. Originality/value - In consideration that there has been limited investigation into the institutional lineaments required to provide the enabling circumstances for KBUD, the broad aim of this paper is to discover some good organisational capacity attributes, achieved through a case study of Brisbane. Practical implications - It is anticipated that the findings of the case study will contribute to moving the discussion on the complex inter-institutional arrangements and intra-organisational interactions required for KBUD beyond a position of rhetoric.
Abstract:
This article reports on a research program that has developed new methodologies for mapping the Australian blogosphere and tracking how information is disseminated across it. The authors improve on conventional web crawling methodologies in a number of significant ways. First, the authors track blogging activity as it occurs, by scraping new blog posts when such posts are announced through Really Simple Syndication (RSS) feeds. Second, the authors use custom-made tools that distinguish between different types of content and thus allow analysis of only the salient discursive content provided by bloggers. Finally, the authors examine these better quality data using both link network mapping and textual analysis tools, to produce both cumulative longer-term maps of interlinkages and themes and specific shorter-term snapshots of current activity that indicate current clusters of heavy interlinkage and highlight their key themes. In this article, the authors discuss findings from a yearlong observation of the Australian political blogosphere, suggesting that Australian political bloggers consistently address current affairs but interpret them differently from mainstream news outlets. The article also discusses the next stage of the project, which extends this approach to an examination of other social networks used by Australians, including Twitter, YouTube, and Flickr. This adaptation of the methodology moves away from narrow models of political communication and toward an investigation of everyday and popular communication, providing a more inclusive and detailed picture of the Australian networked public sphere.
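A minimal sketch of such a pipeline follows (illustrative, not the project's actual toolchain): poll an RSS feed for newly announced posts, pull outbound hyperlinks from each post body, and grow a directed link network. The feed URL is hypothetical.

```python
# Minimal sketch of an RSS-driven blogosphere mapper (illustrative, not
# the project's actual toolchain).
import re
import feedparser      # pip install feedparser
import networkx as nx  # pip install networkx

HREF = re.compile(r'href="(https?://[^"]+)"')

def scrape_feed(graph, feed_url, seen):
    feed = feedparser.parse(feed_url)
    for entry in feed.entries:
        if entry.link in seen:
            continue  # only scrape posts announced since the last poll
        seen.add(entry.link)
        body = entry.get("summary", "")  # post content carried in the feed
        for target in HREF.findall(body):
            graph.add_edge(entry.link, target)  # post -> linked site

G = nx.DiGraph()
seen = set()
scrape_feed(G, "http://blog.example.org/rss", seen)  # hypothetical feed URL
```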
Abstract:
This paper discusses the content, origin and development of Tendering Theory as a theory of price determination. It demonstrates how tendering theory determines market prices and how it differs from game and decision theories, and shows that in the tendering process, with non-cooperative, simultaneous, single sealed bids with individual private valuations, extensive public information, a large number of bidders and a long sequence of tendering occasions, a competitive equilibrium develops. The development of a competitive equilibrium means that the concept of the tender as the sum of a valuation and a strategy, which is at the core of tendering theory, cannot be supported, and that there are serious empirical, theoretical and methodological inconsistencies in the theory.
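As a toy illustration of the selection effect behind that equilibrium argument (my construction, not the paper's model), the simulation below draws private cost valuations and strategic markups for sealed-bid tenders; as the number of bidders grows, the mean markup of the winning tender shrinks toward zero.

```python
# Toy simulation (not the paper's model): each bid is a private cost
# valuation plus a strategic markup; competition erodes the winner's markup.
import numpy as np

rng = np.random.default_rng(1)

def mean_winning_markup(n_bidders, n_tenders=10_000):
    costs = rng.uniform(90, 110, size=(n_tenders, n_bidders))  # valuations
    markups = rng.uniform(0, 20, size=(n_tenders, n_bidders))  # strategies
    bids = costs + markups
    winners = bids.argmin(axis=1)  # lowest tender wins
    return markups[np.arange(n_tenders), winners].mean()

for n in (2, 5, 20, 100):
    print(n, round(mean_winning_markup(n), 2))  # markup shrinks as n grows
```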
Abstract:
Background: Nurse-led telephone follow-up offers a relatively inexpensive method of delivering education and support for assisting recovery in the early discharge period; however, its efficacy is yet to be determined. Aim: To perform a critical integrative review of the research literature addressing the effectiveness of nurse-led telephone interventions for people with coronary heart disease (CHD). Methods: A literature search of five health care databases (ScienceDirect, Cumulative Index to Nursing and Allied Health Literature, PubMed, ProQuest and Medline) was conducted to identify journal articles published between 1980 and 2009. People with cardiac disease were considered for inclusion in this review. The search yielded 128 papers, of which 24 met the inclusion criteria. Results: A total of 8330 participants from 24 studies were included in the final review. Seven studies demonstrated statistically significant differences in all outcomes measured, used two-group experimental research designs and valid and reliable instruments. Some positive effects of nurse-led telephone interventions for people with cardiac disease were detected in eight studies, and no differences were detected in nine studies. Discussion: Studies with some positive effects generally had stronger research designs, large samples, valid and reliable instruments and extensive nurse-led educative interventions. Conclusion: The results suggest that people with cardiac disease showed some benefits from nurse-led/delivered telephone interventions. More rigorous research into this area is needed.
Abstract:
Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Applications of stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics, industrial automation and stereomicroscopy. A key issue in stereo vision is that of image matching, or identifying corresponding points in a stereo pair. The difference in the positions of corresponding points in image coordinates is termed the parallax or disparity. When the orientation of the two cameras is known, corresponding points may be projected back to find the location of the original object point in world coordinates. Matching techniques are typically categorised according to the nature of the matching primitives they use and the matching strategy they employ. This report provides a detailed taxonomy of image matching techniques, including area based, transform based, feature based, phase based, hybrid, relaxation based, dynamic programming and object space methods. A number of area based matching metrics, as well as the rank and census transforms, were implemented in order to investigate their suitability for a real-time stereo sensor for mining automation applications. The requirements of this sensor were speed, robustness, and the ability to produce a dense depth map. The Sum of Absolute Differences matching metric was the least computationally expensive; however, this metric was the most sensitive to radiometric distortion. Metrics such as the Zero Mean Sum of Absolute Differences and Normalised Cross Correlation were the most robust to this type of distortion but introduced additional computational complexity. The rank and census transforms were found to be robust to radiometric distortion while also having low computational complexity, making them prime candidates for the matching algorithm of a real-time stereo sensor for mining applications. A number of issues came to light during this investigation which may merit further work. These include devising a means to evaluate and compare the disparity results of different matching algorithms, and finding a method of assigning a level of confidence to a match. Another issue of interest is the possibility of statistically combining the results of different matching algorithms in order to improve robustness.
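The matching metrics compared above are compact enough to sketch directly. The following hedged implementations of SAD, ZSAD, NCC and the census transform (with Hamming distance for comparing census signatures) operate on two equal-sized grayscale patches; they illustrate the metrics' standard definitions rather than reproduce the report's code.

```python
# Hedged implementations of the window-based metrics compared above.
# Each function takes two equal-sized grayscale patches as float arrays.
import numpy as np

def sad(a, b):
    """Sum of Absolute Differences: cheapest, but sensitive to
    radiometric distortion (gain/bias differences between cameras)."""
    return np.sum(np.abs(a - b))

def zsad(a, b):
    """Zero Mean SAD: subtracting the window means adds bias robustness
    at extra computational cost."""
    return np.sum(np.abs((a - a.mean()) - (b - b.mean())))

def ncc(a, b):
    """Normalised Cross Correlation: robust to gain and bias, but the
    most expensive of the three area-based metrics here."""
    a0, b0 = a - a.mean(), b - b.mean()
    return np.sum(a0 * b0) / np.sqrt(np.sum(a0**2) * np.sum(b0**2))

def census(patch):
    """Census transform: a bit string of centre-vs-neighbour comparisons;
    transformed windows are compared with the Hamming distance."""
    centre = patch[patch.shape[0] // 2, patch.shape[1] // 2]
    return np.packbits(patch.flatten() < centre)

def hamming(c1, c2):
    """Hamming distance between two census signatures."""
    return int(np.unpackbits(np.bitwise_xor(c1, c2)).sum())
```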