940 results for Median Filtering


Relevance:

10.00%

Publisher:

Abstract:

The figure Beets took exception to displays sex- and age-specific median values of aggregated published expected values for pedometer-determined physical activity.

Relevance:

10.00%

Publisher:

Abstract:

Objective: To assemble expected values for free-living steps/day in special populations living with chronic illnesses and disabilities. Method: Studies identified since 2000 were categorized into similar illnesses and disabilities, capturing the original reference, sample descriptions, descriptions of the instruments used (i.e., pedometers, piezoelectric pedometers, accelerometers), number of days worn, and mean and standard deviation of steps/day. Results: Sixty unique studies were identified, representing: 1) heart and vascular diseases, 2) chronic obstructive lung disease, 3) diabetes and dialysis, 4) breast cancer, 5) neuromuscular diseases, 6) arthritis, joint replacement, and fibromyalgia, 7) disability (including mental retardation/intellectual difficulties), and 8) other special populations. A median steps/day value was calculated for each category. Waist-mounted and ankle-mounted instruments were considered separately due to fundamental differences in assessment properties. For waist-mounted instruments, the lowest median values for steps/day were found in disabled older adults (1214 steps/day), followed by people living with COPD (2237 steps/day). The highest values were seen in individuals with Type 1 diabetes (8008 steps/day), mental retardation/intellectual disability (7787 steps/day), and HIV (7545 steps/day). Conclusion: This review will be useful to researchers and practitioners who work with individuals living with chronic illness and disability and require such information for surveillance, screening, intervention, and program evaluation purposes. Keywords: Exercise; Walking; Ambulatory monitoring
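
The category medians reported above are medians of the study-level values within each group. A minimal sketch of that aggregation step, using placeholder categories and numbers rather than the review's actual data:

```python
from statistics import median
from collections import defaultdict

# Hypothetical study-level mean steps/day values grouped by population category;
# the numbers are placeholders, not the review's data.
studies = [
    ("COPD", 2550), ("COPD", 1980), ("COPD", 2237),
    ("Type 1 diabetes", 8200), ("Type 1 diabetes", 7900),
    ("Disabled older adults", 1100), ("Disabled older adults", 1340),
]

by_category = defaultdict(list)
for category, mean_steps in studies:
    by_category[category].append(mean_steps)

# Median of the study-level means within each category, one summary value per category.
for category, values in sorted(by_category.items()):
    print(f"{category}: median {median(values):.0f} steps/day from {len(values)} studies")
```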

Relevance:

10.00%

Publisher:

Abstract:

Background: Ambulance ramping within the Emergency Department (ED) is a common problem both internationally and in Australia. Previous research has focused on various issues associated with ambulance ramping such as access block, ED overcrowding and ambulance bypass. However, limited research has been conducted on ambulance ramping and its effects on patient outcomes. Methods: A case-control design was used to describe, compare and predict patient outcomes of 619 ramped (cases) vs. 1238 non-ramped (control) patients arriving at one ED via ambulance from 1 June 2007 to 31 August 2007. Cases and controls were matched (on a 1:2 basis) on age, gender and presenting problem. Outcome measures included ED length of stay and in-hospital mortality. Results: The median ramp time for all 1857 patients was 11 (IQR 6-21) min. Compared to non-ramped patients, ramped patients had a significantly longer wait time to be triaged (10 min vs. 4 min). Ramped patients also comprised a significantly higher proportion of those access blocked (43% vs. 34%). No significant difference in the proportion of in-hospital deaths was identified (2% vs. 3%). Multivariate analysis revealed that the likelihood of having an ED length of stay greater than eight hours was 34% higher among patients who were ramped (OR 1.34, 95% CI 1.06-1.70, p = 0.014). In relation to in-hospital mortality, age was the only significant independent predictor of mortality (p < 0.0001). Conclusion: Ambulance ramping is one factor that contributes to prolonged ED length of stay and adds additional strain on ED service provision. The potential for adverse patient outcomes that may occur as a result of ramping warrants close attention by health care service providers.

Relevance:

10.00%

Publisher:

Abstract:

Freeways are divided roadways designed to facilitate the uninterrupted movement of motor vehicles. However, many freeways now experience demand flows in excess of capacity, leading to recurrent congestion. The Highway Capacity Manual (TRB, 1994) uses empirical macroscopic relationships between speed, flow and density to quantify freeway operations and performance. Capacity may be predicted as the maximum uncongested flow achievable. Although they are effective tools for design and analysis, macroscopic models lack an understanding of the nature of the processes taking place in the system. Szwed and Smith (1972, 1974) and Makigami and Matsuo (1990) have shown that microscopic modelling is also applicable to freeway operations. Such models facilitate an understanding of the processes whilst providing for the assessment of performance through measures of capacity and delay. However, these models are limited to only a few circumstances. The aim of this study was to produce more comprehensive and practical microscopic models. These models were required to accurately portray the mechanisms of freeway operations at the specific locations under consideration. The models needed to be able to be calibrated using data acquired at these locations, and their output needed to be able to be validated with data acquired at the same sites, so that the outputs would be truly descriptive of the performance of the facility. A theoretical basis needed to underlie the form of these models, rather than the empiricism on which the currently used macroscopic models are based. Finally, the models needed to be adaptable to variable operating conditions, so that they may be applied, where possible, to other similar systems and facilities. It was not possible to produce a stand-alone model applicable to all facilities and locations in this single study; however, the scene has been set for the application of the models to a much broader range of operating conditions. Opportunities for further development of the models were identified, and procedures provided for the calibration and validation of the models over a wide range of conditions. The models developed do, however, have limitations in their applicability. Only uncongested operations were studied and represented. Driver behaviour in Brisbane was applied to the models; different mechanisms are likely in other locations due to variability in road rules and driving cultures. Not all manoeuvres evident were modelled; some unusual manoeuvres were considered unwarranted to model. However, the models developed contain the principal processes of freeway operations: merging and lane changing. Gap acceptance theory was applied to these critical operations to assess freeway performance. Gap acceptance theory was found to be applicable to merging; however, the major stream, the kerb lane traffic, exercises only a limited priority over the minor stream, the on-ramp traffic. Theory was established to account for this activity. Kerb lane drivers were also found to change to the median lane where possible, to assist coincident mergers. The net limited priority model accounts for this by predicting a reduced major stream flow rate, which excludes lane changers. Cowan's M3 model was calibrated for both streams; on-ramp and total upstream flow are required as input. Relationships between the proportion of headways greater than 1 s and flow differed between on-ramps fed by signalised intersections and those fed by unsignalised intersections.
Constant-departure on-ramp metering was also modelled. Minimum follow-on times of 1 to 1.2 s were calibrated. Critical gaps were shown to lie between the minimum follow-on time and the sum of the minimum follow-on time and the 1 s minimum headway. Limited priority capacity and other boundary relationships were established by Troutbeck (1995). The minimum average minor stream delay and the corresponding proportion of drivers delayed were quantified theoretically in this study. A simulation model was constructed to predict intermediate minor and major stream delays across all minor and major stream flows. Pseudo-empirical relationships were established to predict average delays. Major stream average delays are limited to 0.5 s, insignificant compared with minor stream delays, which reach infinity at capacity. Minor stream delays were shown to be smaller when unsignalised, rather than signalised, intersections are located upstream of on-ramps, and smaller still when ramp metering is installed. Smaller delays correspond to improved merge area performance. A more tangible performance measure, the distribution of distances required to merge, was established by including design speeds. This distribution can be measured to validate the model. Merging probabilities can be predicted for given taper lengths, a most useful performance measure. This model was also shown to be applicable to lane changing. Tolerable limits to merging probabilities require calibration; from these, practical capacities can be estimated. Further calibration of the traffic inputs, critical gap and minimum follow-on time is required for both merging and lane changing. A general relationship to predict the proportion of drivers delayed requires development. These models can then be used to complement existing macroscopic models to assess performance, and to provide further insight into the nature of operations.
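
Cowan's M3 headway model and gap acceptance with a critical gap and follow-on time, as used above, can be combined into a simple merge-capacity estimate. The sketch below is illustrative only: the parameter values, the absolute-priority acceptance rule and the function names are assumptions, not the calibrated limited-priority models developed in the thesis.

```python
import random
import math

def cowan_m3_headways(flow_veh_per_s, alpha, delta, n, rng):
    """Generate n major-stream headways from Cowan's M3 model: a proportion
    (1 - alpha) of vehicles travel bunched at the minimum headway delta,
    while free vehicles have shifted-exponential headways."""
    lam = alpha * flow_veh_per_s / (1.0 - delta * flow_veh_per_s)
    headways = []
    for _ in range(n):
        if rng.random() < alpha:
            headways.append(delta + rng.expovariate(lam))  # free vehicle
        else:
            headways.append(delta)                          # bunched vehicle
    return headways

def merges_per_gap(gap, critical_gap, follow_on):
    """Number of on-ramp vehicles that can enter one major-stream gap under a
    simple absolute-priority gap-acceptance rule."""
    if gap < critical_gap:
        return 0
    return 1 + math.floor((gap - critical_gap) / follow_on)

# Illustrative parameters only (assumptions, not calibrated results).
rng = random.Random(1)
kerb_lane_flow = 1400 / 3600.0   # veh/s
headways = cowan_m3_headways(kerb_lane_flow, alpha=0.6, delta=1.0, n=100_000, rng=rng)

capacity_veh_per_s = sum(merges_per_gap(h, critical_gap=2.0, follow_on=1.1)
                         for h in headways) / sum(headways)
print(f"Estimated merge capacity ~ {capacity_veh_per_s * 3600:.0f} veh/h")
```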

Relevance:

10.00%

Publisher:

Abstract:

A remarkable growth in the quantity and popularity of online social networks has been observed in recent years. A good number of online social networks exist that have over 100 million registered users. Many of these popular social networks offer automated recommendations to their users. These automated recommendations are normally generated using collaborative filtering systems based on the past ratings or opinions of similar users. Alternatively, trust among the users in the network can also be used to find the neighbors when making recommendations. To obtain the optimum result, there must be a positive correlation between trust and interest similarity. Although a positive relationship between trust and interest similarity is assumed and adopted by many researchers, no survey of real-life people's opinions supporting this hypothesis has been found. In this paper, we review the state-of-the-art research on trust in online social networks and present the results of a survey on the relationship between trust and interest similarity. Our results support the assumed hypothesis of a positive relationship between users' trust and interest similarity.

Relevance:

10.00%

Publisher:

Abstract:

Recommender systems are one of the recent inventions for dealing with ever-growing information overload. Collaborative filtering seems to be the most popular technique in recommender systems. With sufficient background information on item ratings, its performance is promising. However, research shows that it performs very poorly in cold-start situations, where previous rating data are sparse. As an alternative, trust can be used for neighbor formation to generate automated recommendations. User-assigned explicit trust ratings, such as how much users trust each other, are used for this purpose. However, reliable explicit trust data are not always available. In this paper we propose a new method of developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify interest similarity, we use users' personalized tagging information. This trust network can be used to find the neighbors for making automated recommendations. Our experimental results show that the proposed trust-based method outperforms the traditional collaborative filtering approach, which uses user rating data. Its performance improves even further when we utilize trust propagation techniques to broaden the range of the neighborhood.
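
As a rough illustration of the idea described (building a trust network from users' tag-based interest similarity and using it for neighbour-based recommendation), the sketch below uses cosine similarity over hypothetical user-tag profiles and a similarity threshold; the data, threshold and weighting scheme are assumptions, not the paper's method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse tag-count vectors (dicts)."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical user -> {tag: count} profiles built from personalized tagging data.
tag_profiles = {
    "alice": {"python": 5, "recsys": 3, "ml": 2},
    "bob":   {"python": 4, "ml": 3},
    "carol": {"cooking": 6, "travel": 2},
}
# Hypothetical user -> {item: rating} data.
ratings = {
    "bob":   {"item1": 5, "item2": 2},
    "carol": {"item1": 1, "item3": 4},
}

def trust_neighbours(user, threshold=0.3):
    """Treat sufficiently tag-similar users as trusted neighbours."""
    sims = {v: cosine(tag_profiles[user], tag_profiles[v])
            for v in tag_profiles if v != user}
    return {v: s for v, s in sims.items() if s >= threshold}

def predict(user, item):
    """Similarity-weighted average of trusted neighbours' ratings for an item."""
    neighbours = trust_neighbours(user)
    num = sum(sim * ratings[v][item] for v, sim in neighbours.items()
              if item in ratings.get(v, {}))
    den = sum(sim for v, sim in neighbours.items() if item in ratings.get(v, {}))
    return num / den if den else None

print(predict("alice", "item1"))  # prediction from tag-similar (trusted) neighbours
```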

Relevance:

10.00%

Publisher:

Abstract:

Trust can be used for neighbor formation to generate automated recommendations. User-assigned explicit rating data can be used for this purpose. However, explicit rating data are not always available. In this paper we present a new method of generating a trust network based on users' interest similarity. To identify interest similarity, we use users' personalized tag information. This trust network can be used to find the neighbors for making automated recommendations. Our experimental results show that the precision of the proposed method exceeds that of the traditional collaborative filtering approach.
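
The precision comparison mentioned above could be computed along these lines; the recommendation lists, held-out relevant sets and cutoff are hypothetical.

```python
def precision_at_n(recommended, relevant, n=10):
    """Fraction of the top-n recommended items that appear in the user's
    held-out relevant set."""
    top_n = recommended[:n]
    hits = sum(1 for item in top_n if item in relevant)
    return hits / len(top_n) if top_n else 0.0

# Hypothetical per-user outputs from the two recommenders and held-out test items.
trust_based_recs = {"u1": ["a", "b", "c", "d"], "u2": ["x", "y", "z"]}
cf_recs          = {"u1": ["c", "e", "f", "g"], "u2": ["y", "q", "r"]}
held_out         = {"u1": {"a", "c"},           "u2": {"x", "y"}}

for name, recs in [("tag-based trust", trust_based_recs), ("rating-based CF", cf_recs)]:
    scores = [precision_at_n(recs[u], held_out[u], n=3) for u in held_out]
    print(f"{name}: mean precision@3 = {sum(scores) / len(scores):.2f}")
```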

Relevance:

10.00%

Publisher:

Abstract:

In recent years, there has been dramatic growth in the number and popularity of online social networks. Many networks are available with more than 100 million registered users, such as Facebook, MySpace, QZone and Windows Live Spaces. People may connect, discover and share by using these online social networks. The exponential growth of online communities in the area of social networks has drawn researchers' attention to the importance of managing trust in online environments. Users of online social networks may share their experiences and opinions within the networks about an item, which may be a product or a service. A user faces the problem of evaluating trust in a service or service provider before making a choice. Recommendations may be received through a chain of friends in the network, so the problem for the user is to be able to evaluate various types of trust opinions and recommendations. These opinions or recommendations greatly influence whether other users of the community choose to use or enjoy the item. Collaborative filtering is the most popular method in recommender systems. The task in collaborative filtering is to predict the utility of items to a particular user based on a database of user ratings from a sample or population of other users. Because people have different tastes, they rate items differently according to their subjective taste. If two people rate a set of items similarly, they share similar tastes. In a recommender system, this information is used to recommend items that one participant likes to other persons in the same cluster. However, the collaborative filtering system performs poorly when there are insufficient prior common ratings between users, commonly known as the cold-start problem. To overcome the cold-start problem, and with the dramatic growth of online social networks, trust-based approaches to recommendation have emerged. These approaches assume a trust network among users and make recommendations based on the ratings of the users that are directly or indirectly trusted by the target user.
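
A minimal sketch of the trust-based recommendation idea described at the end of the passage: ratings from directly or indirectly trusted users are combined, with trust attenuated along the chain. The propagation rule (multiplying edge trusts along the first chain found) and the data are assumptions for illustration, not a specific published algorithm.

```python
from collections import deque

# Hypothetical directed trust edges: truster -> {trustee: trust in [0, 1]}.
trust = {
    "alice": {"bob": 0.9, "carol": 0.4},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.5},
}
# Hypothetical user ratings.
ratings = {"bob": {"item1": 4}, "dave": {"item1": 2, "item2": 5}}

def propagated_trust(source, max_depth=3):
    """Breadth-first propagation: trust along a chain is the product of edge
    trusts; the first (shortest) chain found to each user is kept."""
    reached = {}
    queue = deque([(source, 1.0, 0)])
    while queue:
        user, value, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for neighbour, edge in trust.get(user, {}).items():
            if neighbour not in reached and neighbour != source:
                reached[neighbour] = value * edge
                queue.append((neighbour, reached[neighbour], depth + 1))
    return reached

def predict(user, item):
    """Trust-weighted average of ratings from directly or indirectly trusted users."""
    weights = propagated_trust(user)
    num = sum(w * ratings[v][item] for v, w in weights.items() if item in ratings.get(v, {}))
    den = sum(w for v, w in weights.items() if item in ratings.get(v, {}))
    return num / den if den else None

print(predict("alice", "item1"))  # blends bob's and dave's ratings by propagated trust
```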

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: Physical symptoms are common in pregnancy and are predominantly associated with normal physiological changes. These symptoms have a social and economic cost, leading to absenteeism from work and additional medical interventions. There is currently no simple method for identifying common pregnancy-related problems in the antenatal period. A validated tool for use by pregnancy care providers would be useful. AIM: The aim of the project was to develop and validate a Pregnancy Symptoms Inventory for use by healthcare professionals (HCPs). METHODS: A list of symptoms was generated via expert consultation with midwives and obstetrician-gynaecologists. Focus groups were conducted with pregnant women in their first, second or third trimester. The inventory was then tested for face validity and piloted for readability and comprehension. For test-retest reliability, it was administered to the same women 2 to 3 days apart. Finally, outpatient midwives trialled the inventory for 1 month and rated its usefulness on a 10 cm visual analogue scale (VAS). The number of referrals to other health care professionals was recorded during this month. RESULTS: Expert consultation and focus group discussions led to the generation of a 41-item inventory. Following face validity and readability testing, several items were modified. Individual item test-retest reliability was between 0.51 and 1, with the majority (34 items) scoring 0.70 or higher. During the testing phase, 211 surveys were collected in the 1-month trial. Tiredness (45.5%), poor sleep (27.5%), back pain (19.5%) and nausea (12.6%) were experienced often. Among the women surveyed, 16.2% reported sometimes or often being incontinent. Referrals to the incontinence nurse increased more than 8-fold during the study period. The median rating by midwives of the 'usefulness' of the inventory was 8.4 (range 0.9 to 10). CONCLUSIONS: The Pregnancy Symptoms Inventory (PSI) was well accepted by women in the 1-month trial and may be a useful tool for pregnancy care providers, aiding clinicians in the early detection and subsequent treatment of symptoms. It shows promise for use in the research community for assessing the impact of lifestyle interventions in pregnancy.

Relevance:

10.00%

Publisher:

Abstract:

Aim: To identify relationships between preventive activities, psychosocial factors and leg ulcer recurrence in patients with chronic venous leg ulcers. Background: Chronic venous leg ulcers are slow to heal and frequently recur, resulting in years of suffering and intensive use of health care resources. Methods: A prospective longitudinal study was undertaken with a sample of 80 patients with a venous leg ulcer, recruited when their ulcer healed. Data were collected from 2006-2009 from medical records on demographics, medical history and ulcer history, and from self-report questionnaires on physical activity, nutrition, preventive activities and psychosocial measures. Follow-up data were collected via questionnaires every three months for 12 months after healing. Median time to recurrence was calculated using the Kaplan-Meier method. A Cox proportional-hazards regression model was used to adjust for potential confounders and determine the effects of preventive strategies and psychosocial factors on recurrence. Results: There were 35 recurrences in the sample of 80 participants. Median time to recurrence was 27 weeks. After adjustment for potential confounders, a Cox proportional-hazards regression model found that at least an hour/day of leg elevation, six or more days/week in Class 2 (20-25 mmHg) or Class 3 (30-40 mmHg) compression hosiery, higher social support scale scores and higher General Self-Efficacy scores remained significantly associated (p < 0.05) with a lower risk of recurrence, while male gender and a history of DVT remained significant risk factors for recurrence. Conclusion: Results indicate that leg elevation, compression hosiery, high levels of self-efficacy and strong social support will help prevent recurrence.
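
The survival analysis described (Kaplan-Meier median time to recurrence plus a Cox proportional-hazards adjustment) might look roughly like this with the lifelines library; the data, column names and penalizer below are illustrative assumptions, not the study's dataset or model.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

# Illustrative follow-up data: weeks to recurrence (or censoring), event
# indicator, and candidate predictors. All values are hypothetical.
df = pd.DataFrame({
    "weeks":         [27, 52, 14, 40, 52,  9, 33, 52, 20, 48],
    "recurred":      [ 1,  0,  1,  1,  0,  1,  1,  0,  1,  0],
    "leg_elevation": [ 1,  1,  0,  1,  1,  0,  0,  1,  0,  0],  # >= 1 hour/day
    "compression":   [ 1,  1,  0,  0,  1,  0,  1,  1,  0,  0],  # >= 6 days/week hosiery
    "male":          [ 0,  1,  1,  0,  0,  1,  0,  1,  1,  0],
    "history_dvt":   [ 0,  0,  1,  0,  0,  1,  1,  0,  1,  1],
})

# Median time to recurrence (Kaplan-Meier).
km = KaplanMeierFitter().fit(df["weeks"], event_observed=df["recurred"])
print("Median time to recurrence:", km.median_survival_time_, "weeks")

# Cox proportional-hazards model adjusting for the predictors above;
# a small penalizer keeps the fit stable on this tiny illustrative sample.
cph = CoxPHFitter(penalizer=0.1).fit(df, duration_col="weeks", event_col="recurred")
cph.print_summary()  # hazard ratios for each covariate
```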

Relevance:

10.00%

Publisher:

Abstract:

Information overload has become a serious issue for web users. Personalisation can provide effective solutions to overcome this problem. Recommender systems are one popular personalisation tool that helps users deal with this issue. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affect the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides new possible ways to profile users. Folksonomy, or tag information, is a typical kind of Web 2.0 information. Folksonomy implies users' topic interests and opinion information, and it has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solve the tag quality problem and profile users accurately. Harvesting the wisdom of crowds and experts, three new user profiling approaches are proposed: a folksonomy-based user profiling approach, a taxonomy-based user profiling approach, and a hybrid user profiling approach based on folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user- and item-based collaborative filtering approaches, combined with content filtering methods, are proposed to make recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness evaluation experiments were conducted on two real-world datasets collected from the Amazon.com and CiteULike websites. The experimental results demonstrate that the proposed user profiling and recommendation approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability evaluation experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to the effective use of the wisdom of crowds and experts to help users solve information overload issues by providing more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to better use of taxonomy information given by experts and folksonomy information contributed by users in Web 2.0.
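
One simple way folksonomy data could be turned into a weighted user profile is a TF-IDF-style weighting of each user's tags against the whole folksonomy. The sketch below is a generic illustration under that assumption, with hypothetical data, not any of the thesis's specific profiling approaches.

```python
import math
from collections import Counter

# Hypothetical folksonomy: user -> list of (item, tag) assignments.
folksonomy = {
    "u1": [("book1", "python"), ("book1", "programming"), ("book2", "ml")],
    "u2": [("book3", "python"), ("book4", "cooking")],
    "u3": [("book5", "cooking"), ("book6", "travel"), ("book5", "recipes")],
}

def tag_profile(user):
    """Weight each of the user's tags by tag frequency times inverse user
    frequency, so tags used by many users contribute less to the profile."""
    counts = Counter(tag for _, tag in folksonomy[user])
    n_users = len(folksonomy)
    weights = {}
    for tag, tf in counts.items():
        users_with_tag = sum(1 for assignments in folksonomy.values()
                             if any(t == tag for _, t in assignments))
        weights[tag] = tf * math.log(n_users / users_with_tag)
    return weights

print(tag_profile("u1"))  # weighted tag vector describing u1's topic interests
```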

Relevance:

10.00%

Publisher:

Abstract:

Social tags are an important information source in Web 2.0. They can be used to describe users' topic preferences as well as the content of items in order to make personalized recommendations. However, since tags are arbitrary words given by users, they contain a lot of noise such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to improve the accuracy of item recommendations. To eliminate the noise of tags, in this paper we propose to use the multiple relationships among users, items and tags to find the semantic meaning of each tag for each user individually. With the proposed approach, the relevant tags of each item and the tag preferences of each user are determined. In addition, user- and item-based collaborative filtering combined with a content filtering approach is explored. The effectiveness of the proposed approaches is demonstrated in experiments conducted on real-world datasets collected from the Amazon.com and CiteULike websites.
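
On the content-filtering side, once the relevant tags of each item and the tag preferences of each user are known, an item can be scored by the weighted overlap of the two. The sketch below illustrates only that generic scoring step with hypothetical vectors, not the paper's actual formulation; in a hybrid setting such a content score would then be combined with a collaborative filtering prediction.

```python
def content_score(user_tag_prefs, item_tag_relevance):
    """Score an item for a user as the weighted overlap between the user's
    tag preferences and the item's relevant tags."""
    shared = set(user_tag_prefs) & set(item_tag_relevance)
    return sum(user_tag_prefs[t] * item_tag_relevance[t] for t in shared)

# Hypothetical per-user tag preferences and per-item relevant tags with weights.
user_prefs = {"python": 0.8, "ml": 0.5, "statistics": 0.2}
items = {
    "bookA": {"python": 0.9, "web": 0.4},
    "bookB": {"ml": 0.7, "statistics": 0.6},
    "bookC": {"cooking": 1.0},
}

ranked = sorted(items, key=lambda i: content_score(user_prefs, items[i]), reverse=True)
print(ranked)  # items ordered by tag-based content score
```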

Relevance:

10.00%

Publisher:

Abstract:

Social tags in Web 2.0 are becoming another important information source for describing the content of items as well as for profiling users' topic preferences. However, as arbitrary words given by users, tags contain a lot of noise such as tag synonyms, semantic ambiguity and a large number of personal tags used by only one user, which brings challenges to effectively using tags to make item recommendations. To solve these problems, this paper proposes to use a set of related tags, along with their weights, to represent the semantic meaning of each tag for each user individually. Hybrid recommendation generation approaches based on the weighted tags are proposed. We have conducted experiments using a real-world dataset obtained from Amazon.com. The experimental results show that the proposed approaches outperform other state-of-the-art approaches.
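
One common way to obtain a weighted set of related tags is tag co-occurrence on items; the sketch below uses that as a stand-in measure with hypothetical data, not the paper's actual weighting scheme.

```python
from collections import defaultdict

# Hypothetical item -> set of tags assigned to it (by any user).
item_tags = {
    "item1": {"python", "programming", "tutorial"},
    "item2": {"python", "ml"},
    "item3": {"ml", "statistics"},
    "item4": {"python", "programming"},
}

def related_tags(tag):
    """Represent a tag by the tags that co-occur with it on items, weighted by
    the fraction of the tag's items on which each related tag also appears."""
    items_with_tag = [tags for tags in item_tags.values() if tag in tags]
    co_counts = defaultdict(int)
    for tags in items_with_tag:
        for other in tags - {tag}:
            co_counts[other] += 1
    return {other: count / len(items_with_tag) for other, count in co_counts.items()}

print(related_tags("python"))  # e.g. {'programming': 0.67, 'tutorial': 0.33, 'ml': 0.33}
```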

Relevance:

10.00%

Publisher:

Abstract:

Many data mining techniques have been proposed for mining useful patterns in text documents. However, how to effectively use and update discovered patterns is still an open research issue, especially in the domain of text mining. Since most existing text mining methods adopt term-based approaches, they all suffer from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern (or phrase) based approaches should perform better than term-based ones, but many experiments have not supported this hypothesis. This paper presents an innovative technique, effective pattern discovery, which includes the processes of pattern deploying and pattern evolving, to improve the effectiveness of using and updating discovered patterns for finding relevant and interesting information. Substantial experiments on the RCV1 data collection and TREC topics demonstrate that the proposed solution achieves encouraging performance.
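
As a rough sketch of what a pattern-based approach can look like, the code below mines frequent term sets from each positive document's paragraphs and "deploys" them by spreading a pattern's support over its terms. This is a simplification under assumed data and rules, not the paper's pattern deploying and evolving algorithms.

```python
from itertools import combinations
from collections import Counter, defaultdict

# Hypothetical positive training documents, each split into "paragraphs"
# treated as transactions of terms.
documents = [
    [{"pattern", "mining", "text"}, {"pattern", "discovery"}],
    [{"text", "mining", "relevance"}, {"pattern", "mining"}],
]

def frequent_termsets(paragraphs, min_support, max_size=2):
    """Count term sets of up to max_size terms across a document's paragraphs
    and keep those meeting the minimum support."""
    counts = Counter()
    for para in paragraphs:
        for size in range(1, max_size + 1):
            for termset in combinations(sorted(para), size):
                counts[termset] += 1
    return {ts: c for ts, c in counts.items() if c >= min_support}

def deploy(documents, min_support=2):
    """Deploy each document's frequent patterns into a single term-weight vector
    by spreading a pattern's support evenly over its constituent terms."""
    weights = defaultdict(float)
    for paragraphs in documents:
        for termset, support in frequent_termsets(paragraphs, min_support).items():
            for term in termset:
                weights[term] += support / len(termset)
    return dict(weights)

print(deploy(documents))  # terms backed by frequent patterns receive higher weights
```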

Relevance:

10.00%

Publisher:

Abstract:

Relevance Feedback (RF) has been proven very effective for improving retrieval accuracy. Adaptive information filtering (AIF) technology has benefited from the improvements achieved in all the tasks involved over the last decades. A difficult problem in AIF has been how to update the system with new feedback efficiently and effectively. In current feedback methods, the updating processes focus on updating system parameters. In this paper, we develop a new approach, Adaptive Relevance Features Discovery (ARFD). It automatically updates the system's knowledge based on a sliding window over positive and negative feedback to solve a nonmonotonic problem efficiently. Some of the new training documents are selected using the knowledge that the system has already obtained; specific features are then extracted from the selected training documents. Different methods are used to merge and revise the weights of features in a vector space. The new model is designed for Relevance Features Discovery (RFD), a pattern-mining-based approach which uses negative relevance feedback to improve the quality of the features extracted from positive feedback. Learning algorithms are also proposed to implement this approach on Reuters Corpus Volume 1 and TREC topics. Experiments show that the proposed approach works efficiently and achieves encouraging performance.
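
The sliding-window idea can be illustrated with a small filter that keeps only the most recent positive and negative feedback documents, rebuilds term weights from them, and penalises terms that appear in negative feedback. The window size, penalty and class design below are assumptions for illustration, not ARFD's actual update rules.

```python
from collections import Counter, deque

class SlidingWindowFilter:
    """Keep the most recent feedback documents and rebuild term weights from them:
    positive documents add to a term's weight, negative documents subtract."""

    def __init__(self, window_size=50, negative_penalty=0.5):
        self.window = deque(maxlen=window_size)   # (set_of_terms, is_relevant)
        self.negative_penalty = negative_penalty
        self.weights = Counter()

    def feedback(self, terms, is_relevant):
        self.window.append((set(terms), is_relevant))
        self._rebuild()

    def _rebuild(self):
        """Recompute weights from the documents currently inside the window,
        so stale feedback automatically drops out."""
        self.weights = Counter()
        for terms, is_relevant in self.window:
            for term in terms:
                self.weights[term] += 1.0 if is_relevant else -self.negative_penalty

    def score(self, terms):
        """Rank an incoming document by the sum of its terms' current weights."""
        return sum(self.weights.get(t, 0.0) for t in set(terms))

f = SlidingWindowFilter(window_size=3)
f.feedback(["relevance", "feedback", "filtering"], is_relevant=True)
f.feedback(["spam", "filtering"], is_relevant=False)
print(f.score(["relevance", "filtering", "news"]))  # positive terms outweigh the penalty
```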