209 results for Individually rational utility set


Relevance: 20.00%

Abstract:

Background—Palpation is an important clinical test for jumper's knee. Objectives—To (a) test the reproducibility of palpation tenderness, (b) evaluate the sensitivity and specificity of palpation in subjects with clinical symptoms of jumper's knee, and (c) determine whether tenderness to palpation may serve as a useful screening test for patellar tendinopathy. The yardstick for diagnosis of patellar tendinopathy was ultrasonographic abnormality. Methods—In 326 tendons of symptomatic and asymptomatic junior athletes, palpation was performed by a single examiner before ultrasonographic examination by a certified ultrasound radiologist. In 58 tendons, palpation was performed twice to test reliability. Tenderness to palpation was scored on a scale from 0 to 3, where 0 represented no pain and 1, 2, and 3 represented mild, moderate, and severe tenderness respectively. Results—Patellar tendon palpation was a reliable examination for a single examiner (Pearson r = 0.82). In symptomatic tendons, the positive predictive value of palpation was 68%. As a screening examination in asymptomatic subjects, the positive predictive value of tendon palpation was 36–38%. Moderate and severe palpation tenderness were better predictors of ultrasonographic tendon pathology than absent or mild tenderness (p<0.001). Tendons that were both tender and symptomatic were more likely to show ultrasonographic abnormality than tendons that were tender alone (p<0.01). Conclusions—In this age group, palpation is a reliable test, but it is not cost effective for detecting patellar tendinopathy in a preparticipation examination. In symptomatic tendons, palpation is a moderately sensitive but not specific test. Mild tenderness in the patellar tendons of asymptomatic jumping athletes should be considered normal.
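As a quick reminder of how these diagnostic metrics relate, here is a minimal Python sketch computing sensitivity, specificity, and positive predictive value from a 2x2 table; the counts are hypothetical (chosen only so that PPV lands near the 68% reported above), not the study's data:

    # Diagnostic test arithmetic on a 2x2 table. Counts are hypothetical,
    # chosen so that PPV comes out near 68%.
    tp, fp = 34, 16   # tender & US-abnormal, tender & US-normal
    fn, tn = 14, 36   # not tender & US-abnormal, not tender & US-normal

    sensitivity = tp / (tp + fn)   # P(tender | abnormal)
    specificity = tn / (tn + fp)   # P(not tender | normal)
    ppv = tp / (tp + fp)           # P(abnormal | tender)

    print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} PPV={ppv:.2f}")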

Relevance: 20.00%

Abstract:

The question as to whether poser race affects the happy categorization advantage, the faster categorization of happy than of negative emotional expressions, has been answered inconsistently. Hugenberg (2005) found the happy categorization advantage only for own race faces whereas faster categorization of angry expressions was evident for other race faces. Kubota and Ito (2007) found a happy categorization advantage for both own race and other race faces. These results have vastly different implications for understanding the influence of race cues on the processing of emotional expressions. The current study replicates the results of both prior studies and indicates that face type (computer-generated vs. photographic), presentation duration, and especially stimulus set size influence the happy categorization advantage as well as the moderating effect of poser race.

Relevance: 20.00%

Abstract:

A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with practical applications in domains such as security surveillance and health care, it also faces severe constraints: in addition to the limitations of conventional WSNs, image processing on DWSCs demands more computational power, bandwidth, and energy, which presents significant challenges for large-scale deployments. This dissertation develops a number of algorithms that are highly scalable, portable, and efficient in both energy and performance, within the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of determining optimal camera configurations.

Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be devised to accommodate the various constraints introduced by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras whose freedom of movement is restricted to either pan or tilt. Camera internal parameters, including focal length, principal point, lens distortion parameter, and the angle and axis of rotation, can be recovered from a minimum of two images taken by the camera, provided that the axis of rotation between the two images passes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image.

For object localisation, a novel approach has been developed for calibrating a network of non-overlapping DWSCs in terms of their ground-plane homographies, which can then be used to localise objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. Each camera uses this position, along with the image-plane location of the robot, to compute a mapping from its image plane to the global coordinate frame. This mapping is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours determined by a predictive forwarding strategy. The received descriptions are then matched against locally generated descriptions at the next camera on the object's path using a probability-maximisation process.

The second problem, camera placement, emerges naturally when these pervasive devices are put to real use. The locations, orientations, lens types, and other properties of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met. To this end, a statistical formulation of the problem of determining optimal camera configurations has been introduced, and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to solve it effectively.
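The ground-plane calibration step maps each camera's image plane to the global coordinate frame via a homography estimated from (image point, robot position) correspondences. Below is a minimal numpy sketch of the standard direct linear transform (DLT) estimator under that setup; it assumes at least four correspondences per camera and is a generic textbook method, not the dissertation's exact implementation:

    import numpy as np

    def estimate_homography(img_pts, world_pts):
        """Direct linear transform (DLT): find H with world ~ H @ image
        in homogeneous coordinates, from N >= 4 point correspondences."""
        A = []
        for (x, y), (X, Y) in zip(img_pts, world_pts):
            A.append([-x, -y, -1, 0, 0, 0, X * x, X * y, X])
            A.append([0, 0, 0, -x, -y, -1, Y * x, Y * y, Y])
        # The solution is the right singular vector of A with the
        # smallest singular value.
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 3)

    def image_to_world(H, pt):
        p = H @ np.array([pt[0], pt[1], 1.0])
        return p[:2] / p[2]   # dehomogenise

    # Hypothetical correspondences: the robot seen at four image locations,
    # with its broadcast global positions (in metres).
    img = np.array([[100, 120], [400, 115], [410, 380], [95, 390]], float)
    world = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], float)
    H = estimate_homography(img, world)
    print(image_to_world(H, (250, 250)))   # roughly the square's centre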

Relevance: 20.00%

Abstract:

As of June 2009, 361 genome-wide association studies (GWAS) had been referenced by the HuGE database. GWAS require DNA from many thousands of individuals and rely on suitable DNA collections. We recently performed a multiple sclerosis (MS) GWAS in which a substantial proportion of the cases (24%) had DNA derived from saliva. Genotyping was done on the Illumina platform using the Infinium Hap370CNV DUO microarray. Additionally, we genotyped 10 individuals in duplicate using both saliva- and blood-derived DNA. The performance of blood- versus saliva-derived DNA was compared using the genotyping call rate, which reflects both the quantity and quality of genotyping per sample, and the “GCScore”, an Illumina genotyping quality score that measures DNA quality. We also compared genotype calls and GCScores for the 10 sample pairs. Call rates were assessed for each sample individually. For the GWAS samples, we compared data according to source of DNA and center of origin. We observed high concordance in genotyping quality and quantity between the paired samples, and minimal loss of DNA quality and quantity in the saliva samples in the large GWAS sample; the blood samples showed greater variation between centers of origin. This large data set highlights the usefulness of saliva DNA for genotyping, especially in high-density single-nucleotide polymorphism microarray studies such as GWAS.
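For readers unfamiliar with these QC metrics, here is a minimal Python sketch (on made-up genotype matrices, with -1 standing in for a failed call) of the per-sample call rate and the paired-sample concordance used to compare blood- and saliva-derived DNA:

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up genotype matrices: rows = samples, cols = SNPs, values in
    # {0, 1, 2} for called genotypes and -1 for a failed ("no") call.
    blood = rng.integers(0, 3, size=(10, 1000))
    saliva = blood.copy()
    saliva[rng.random(saliva.shape) < 0.01] = -1   # sprinkle in missed calls

    def call_rate(g):
        """Fraction of successfully called genotypes, per sample."""
        return (g != -1).mean(axis=1)

    def concordance(a, b):
        """Agreement between paired samples over SNPs called in both."""
        called = (a != -1) & (b != -1)
        return (a[called] == b[called]).mean()

    print("saliva call rates:", call_rate(saliva).round(3))
    print("paired concordance:", concordance(blood, saliva))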

Relevance: 20.00%

Abstract:

Utility functions in Bayesian experimental design are usually based on the posterior distribution. When the posterior is found by simulation, it must be sampled from for each future data set drawn from the prior predictive distribution, and many thousands of posterior distributions are often required. A popular technique in the Bayesian experimental design literature for rapidly obtaining samples from the posterior is importance sampling, using the prior as the importance distribution. However, importance sampling tends to break down when the number of experimental observations is moderately large and/or the model parameter is high dimensional. In this paper we explore the use of Laplace approximations in the design setting to overcome this drawback. Furthermore, we consider using the Laplace approximation to form the importance distribution, yielding a more efficient importance distribution than the prior. The methodology is motivated by a pharmacokinetic study which investigates the effect of extracorporeal membrane oxygenation on the pharmacokinetics of antibiotics in sheep. The design problem is to find 10 near-optimal plasma sampling times that produce precise estimates of the pharmacokinetic model parameters/measures of interest. We consider several different utility functions of interest in these studies, all of which involve the posterior distribution of parameter functions.
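A minimal Python sketch of the central idea, on a toy one-parameter model of our own choosing rather than the paper's pharmacokinetic model: locate the posterior mode, form the Gaussian (Laplace) approximation there, and use it as the importance distribution in place of the prior:

    import numpy as np
    from scipy import optimize, stats

    def laplace_importance(log_post, theta0, n=2000):
        """Use a Laplace (Gaussian) approximation of log_post, centred at
        the posterior mode with covariance from the inverse Hessian, as
        the importance distribution."""
        res = optimize.minimize(lambda t: -log_post(t), theta0, method="BFGS")
        q = stats.multivariate_normal(mean=res.x, cov=res.hess_inv)
        draws = q.rvs(size=n).reshape(n, -1)
        logw = np.array([log_post(t) for t in draws]) - q.logpdf(draws)
        w = np.exp(logw - logw.max())
        return draws, w / w.sum()    # self-normalised importance weights

    # Toy posterior: Gaussian likelihood, vague Gaussian prior on the mean.
    y = np.array([1.2, 0.8, 1.5])
    log_post = lambda t: (stats.norm.logpdf(y, t[0], 1.0).sum()
                          + stats.norm.logpdf(t[0], 0.0, 10.0))
    draws, w = laplace_importance(log_post, theta0=np.zeros(1))
    print("posterior mean of theta ~", float(np.sum(w * draws[:, 0])))

In a design loop this is repeated for every simulated future data set, which is where the Laplace proposal's efficiency over the prior pays off.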

Relevance: 20.00%

Abstract:

Purpose Videokeratoscopy images can be used for the non-invasive assessment of the tear film. In this work, the applicability of an image processing technique, texture analysis, to the assessment of the tear film in Placido disc images has been investigated. Methods In the presence of tear film thinning/break-up, the pattern reflected from the videokeratoscope is disturbed in the region of tear film disruption; the Placido pattern therefore carries information about the stability of the underlying tear film, and by characterising the pattern regularity, the tear film quality can be inferred. In this paper, a textural-features approach is used to process the Placido images. This method provides a set of texture features from which an estimate of the tear film quality can be obtained. The method is tested for the detection of dry eye in a retrospective dataset from 34 subjects (22 normal and 12 dry eye), with measurements taken under suppressed blinking conditions. Results To assess the capability of each texture feature to discriminate dry eye from normal subjects, the receiver operating characteristic (ROC) curve was calculated and the area under the curve (AUC), specificity, and sensitivity were extracted. Across the features examined, the AUC ranged from 0.77 to 0.82, while the sensitivity was typically above 0.9 and the specificity around 0.6. Overall, the estimated ROCs indicate that the proposed technique provides good discrimination performance. Conclusions Texture analysis of videokeratoscopy images is applicable to the study of tear film anomalies in dry eye subjects, and the proposed technique demonstrates potential clinical relevance and utility.
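The abstract does not spell out the feature set here; as one plausible instantiation, the sketch below computes grey-level co-occurrence matrix (GLCM) texture features with scikit-image and evaluates discrimination with scikit-learn's ROC AUC. The images and labels are random stand-ins:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops
    from sklearn.metrics import roc_auc_score

    def texture_features(img, levels=32):
        """GLCM contrast/homogeneity/energy of an 8-bit image region."""
        img = (img / 256 * levels).astype(np.uint8)   # quantise grey levels
        glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                            levels=levels, symmetric=True, normed=True)
        return [graycoprops(glcm, p).mean()
                for p in ("contrast", "homogeneity", "energy")]

    # Stand-in data: 34 random "images"; label 1 = dry eye, 0 = normal.
    rng = np.random.default_rng(1)
    imgs = rng.integers(0, 256, size=(34, 64, 64))
    labels = np.array([1] * 12 + [0] * 22)
    scores = np.array([texture_features(im)[0] for im in imgs])  # contrast
    print("AUC:", roc_auc_score(labels, scores))

With real Placido images, a disturbed (broken-up) pattern would yield less regular co-occurrence statistics than an intact one, which is what the classifier exploits.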

Relevance: 20.00%

Abstract:

Issue addressed: Although increases in cycling in Brisbane are encouraging, bicycle mode share to work in the state of Queensland remains low. The aim of this qualitative study was to draw upon the lived experiences of Queensland cyclists to understand the main motivators for utility cycling (cycling as a means to get to and from places) and compare motivators between utility cyclists (those who cycle for utility as well as for recreation) and non-utility cyclists (those who cycle only for recreation). Methods: For an online survey, members of a bicycle group (831 utility cyclists and 931 non-utility cyclists, aged 18-90 years) were asked to describe, unprompted, what would motivate them to engage in utility cycling (more often). Responses were coded into themes within four levels of an ecological model. Results: Within an ecological model, built environment influences on motivation were grouped according to whether they related to appeal (safety), convenience (accessibility) or attractiveness (more amenities) and included adequate infrastructure for short trips, bikeway connectivity, end-of-trip facilities at public locations and easy and safe bicycle access to destinations outside of cities. A key social-cultural influence related to improved interactions among different road users. Conclusions: The built and social-cultural environments need to be more supportive of utility cycling before even current utility and non-utility cyclists will be motivated to engage (more often) in utility cycling. So what?: Additional government strategies and more and better infrastructure that support utility cycling beyond commuter cycling may encourage a utility cycling culture.

Relevance: 20.00%

Abstract:

The purpose of the Rural Health Education, Training and Research Network is to support the education and training of rural health practitioners, and research in rural health, through the optimum use of appropriate information and communication technologies to link and inform all individuals and organisations involved in the teaching, planning, and delivery of health care in rural and remote Queensland. The health care of people in rural areas can be enhanced by providing rural and remote health professionals in Queensland with the same access to educational and training opportunities as their metropolitan colleagues. This consultative, coordinated approach should be cost-effective, both by increasing awareness and utilisation of existing and developing networks, and through more efficient and rational use of the basic and sophisticated technologies which support them. Technological hardware, expertise, and infrastructure are already in place in Queensland to support a Rural Health Education, Training and Research Network, but they are not being used to their potential, more often because of a lack of awareness of their existence and utility than because of their perceived costs. Development of the network has commenced with seeding funds provided by Queensland Health. Future expansion will ensure access by health professionals to existing networks within Queensland. This paper explores the issues and implications of such a network for rural health professionals in Queensland, and potentially throughout Australia, with a specific focus on the implications for rural and isolated health professionals.

Relevance: 20.00%

Abstract:

Chronic physical inactivity is a major risk factor for a number of important lifestyle diseases, while inappropriate exposure to high physical demands is a risk factor for musculoskeletal injury and fatigue. Proteomic and metabolomic investigations of the physical activity continuum - extreme sedentariness to extremes in physical performance - offer increasing insight into the biological impacts of physical activity. Moreover, biomarkers, revealed in such studies, may have utility in the monitoring of metabolic and musculoskeletal health or recovery following injury. As a diagnostic matrix, urine is non-invasive to collect and it contains many biomolecules, which reflect both positive and negative adaptations to physical activity exposure. This review examines the utility and landscape of biomarkers of physical activity with particular reference to those found in urine.

Relevance: 20.00%

Abstract:

Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose the locations where measurements are taken so as to maximise the information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design which maximises the average utility. We use models for correlations of observations on the stream network that are based on stream-network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of values at unobserved locations or estimation of parameters. We propose an algorithmic approach to design in which the mean utility of a design is estimated using Monte Carlo techniques and an exchange algorithm searches for optimal sampling designs. In particular, we focus on the problems of finding an optimal design from a set of fixed designs and of finding an optimal subset of a given set of sampling locations. As there are many different variables to measure at each location, such as chemical, physical, and biological measurements, designs are derived from models based on different types of response variables: continuous, counts, and proportions. We apply the methodology to a synthetic example and to the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space-filling to clustered designs and mixtures of these; given the utility function, however, designs are relatively robust to the type of response variable.
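A minimal Python sketch of the search loop described above: a Monte Carlo estimate of a design's expected utility (here a noisy placeholder score, not a stream-network model) combined with a greedy exchange algorithm that swaps candidate sites in and out of the design:

    import numpy as np

    rng = np.random.default_rng(2)
    candidates = np.arange(50)   # indices of potential sampling sites

    def utility(design, n_sims=200):
        """Placeholder Monte Carlo utility: a noisy space-filling score.
        In practice, simulate data under the stream-network model at the
        design sites and score prediction/parameter-estimation quality."""
        d = np.sort(np.asarray(design, dtype=float))
        spread = np.min(np.diff(d)) if len(d) > 1 else 0.0
        return spread + rng.normal(scale=0.01, size=n_sims).mean()

    def exchange_search(k=5, n_passes=3):
        """Greedy exchange: try swapping each site for each unused
        candidate, keeping any swap that improves estimated utility."""
        design = list(rng.choice(candidates, size=k, replace=False))
        best = utility(design)
        for _ in range(n_passes):
            for i in range(k):
                for c in candidates:
                    if c in design:
                        continue
                    trial = design.copy()
                    trial[i] = c
                    u = utility(trial)
                    if u > best:
                        design, best = trial, u
        return design, best

    print(exchange_search())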

Relevance: 20.00%

Abstract:

Textual document sets have become an important and rapidly growing information source on the web, and text classification is one of the crucial technologies for information organisation and management. Text classification has become increasingly important and has attracted wide attention from researchers in different fields. This paper first introduces common feature selection methods, implementation algorithms, and applications of text classification. However, the knowledge extracted by current data-mining techniques for text classification contains much noise, which introduces uncertainty into both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve the performance of text classification. Further improving the knowledge-extraction process and effectively utilising the extracted knowledge remain critical and challenging steps. A Rough Set decision-making approach is proposed, using Rough Set decision techniques to classify more precisely those textual documents that are difficult to separate with classic text classification methods. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric, named CEI, which is effective for performance assessment of similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining, and related fields.
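For readers new to Rough Set theory, here is a minimal Python sketch of the construction the proposed approach builds on: the lower and upper approximations of a target class under an indiscernibility relation, where documents with identical feature signatures are indiscernible. The data are toy values:

    from collections import defaultdict

    # Toy data: document id -> (feature signature, class label).
    docs = {1: (("sport", "win"), "sports"), 2: (("sport", "win"), "sports"),
            3: (("sport", "vote"), "politics"), 4: (("sport", "vote"), "sports"),
            5: (("tax", "vote"), "politics")}

    # Indiscernibility classes: documents with identical signatures.
    blocks = defaultdict(set)
    for d, (sig, _) in docs.items():
        blocks[sig].add(d)

    target = {d for d, (_, label) in docs.items() if label == "sports"}

    lower = set().union(*(b for b in blocks.values() if b <= target))
    upper = set().union(*(b for b in blocks.values() if b & target))
    print("certainly sports:", lower)   # {1, 2}
    print("possibly sports:", upper)    # {1, 2, 3, 4}
    # upper - lower is the boundary region: the documents that classic
    # classifiers cannot separate, which the rough-set decision step targets.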

Relevance: 20.00%

Abstract:

This paper presents a novel framework for the unsupervised alignment of an ensemble of temporal sequences. The approach draws inspiration from the axiom that an ensemble of temporal signals stemming from the same source/class should have lower rank when "aligned" than when "misaligned". Our approach shares similarities with recent state-of-the-art methods for unsupervised image ensemble alignment (e.g. RASL), which break the problem into a set of image alignment problems with well-known solutions (i.e. the Lucas-Kanade algorithm). Similarly, we propose a strategy for decomposing the problem of temporal ensemble alignment into a set of independent sequence-alignment problems, which we claim can be solved reliably through Dynamic Time Warping (DTW). We demonstrate the utility of our method on the Cohn-Kanade+ dataset, aligning expression onset across multiple sequences, which allows us to automate the rapid discovery of event annotations.
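A minimal numpy sketch of the Dynamic Time Warping step that the proposed decomposition relies on, for two 1-D sequences; this is the standard dynamic-programming recursion, not the paper's full ensemble algorithm:

    import numpy as np

    def dtw(a, b):
        """Dynamic Time Warping cost between 1-D sequences a and b."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                # Best of match, insertion, deletion.
                D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        return D[n, m]

    # Two sequences with the same shape but different timing align cheaply.
    t = np.linspace(0, 2 * np.pi, 50)
    print(dtw(np.sin(t), np.sin(t - 0.5)))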

Relevance: 20.00%

Abstract:

In this paper we present a unified sequential Monte Carlo (SMC) framework for performing sequential experimental design for discriminating between a set of models. The model discrimination utility that we advocate is fully Bayesian and based on the mutual information, and SMC provides a convenient way to estimate it. Our experience suggests that the approach works well on sets of either discrete or continuous models and outperforms other model discrimination approaches.
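A minimal Python sketch of the mutual-information utility for a candidate design, estimated by plain Monte Carlo on two toy nested regression models of our own choosing; the paper's SMC machinery for reusing particles across design stages is not reproduced here:

    import numpy as np

    rng = np.random.default_rng(3)

    def simulate(d, model):
        """Draw data at design points d from one of two toy models:
        model 0 = constant mean, model 1 = linear trend, unit noise."""
        th = rng.normal(0.0, 1.0, size=2)
        mu = th[0] + (th[1] * d if model == 1 else 0.0)
        return mu + rng.normal(size=d.shape)

    def evidence(y, d, model, n=300):
        """Prior Monte Carlo estimate of p(y | model, d); the Gaussian
        normalising constants cancel in the posterior model ratio."""
        ths = rng.normal(0.0, 1.0, size=(n, 2))
        mus = ths[:, :1] + (ths[:, 1:] * d if model == 1 else 0.0)
        return np.mean(np.exp(-0.5 * np.sum((y - mus) ** 2, axis=1)))

    def mi_utility(d, n_out=200):
        """Monte Carlo estimate of the mutual information between the
        model indicator and future data at design d (uniform prior)."""
        total = 0.0
        for _ in range(n_out):
            m = rng.integers(2)
            y = simulate(d, m)
            ev = np.array([evidence(y, d, k) for k in (0, 1)])
            total += np.log(ev[m] / ev.sum() + 1e-12) - np.log(0.5)
        return total / n_out

    # Spreading observations over time separates the two models better.
    print(mi_utility(np.array([0.0, 1.0, 2.0])),
          mi_utility(np.array([1.0, 1.0, 1.0])))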

Relevance: 20.00%

Abstract:

In Thomas Mann’s tetralogy of the 1930s and 1940s, Joseph and His Brothers, the narrator declares history is not only “that which has happened and that which goes on happening in time,” but it is also “the stratified record upon which we set our feet, the ground beneath us.” By opening up history to its spatial, geographical, and geological dimensions Mann both predicts and encapsulates the twentieth century’s “spatial turn,” a critical shift that divested geography of its largely passive role as history’s “stage” and brought to the fore intersections between the humanities and the earth sciences. In this paper, I draw out the relationships between history, narrative, geography, and geology revealed by this spatial turn and the questions these pose for thinking about the disciplinary relationship between geography and the humanities. As Mann’s statement exemplifies, the spatial turn itself has often been captured most strikingly in fiction, and I would argue nowhere more so than in Graham Swift’s Waterland (1983) and Anne Michaels’s Fugitive Pieces (1996), both of which present space, place, and landscape as having a palpable influence on history and memory. The geographical/geological line that runs through both Waterland and Fugitive Pieces continues through Tim Robinson’s non-fictional, two-volume “topographical” history Stones of Aran. Robinson’s Stones of Aran—which is not history, not geography, and not literature, and yet is all three—constructs an imaginative geography that renders inseparable geography, geology, history, memory, and the act of writing.

Relevance: 20.00%

Abstract:

Numeric set watermarking is a way to provide ownership proof for numerical data. Numerical data can be considered primitives for multimedia types such as images and videos, since these are organised forms of numeric information; the capability to watermark numerical data therefore directly implies the capability to watermark multimedia objects and to discourage information theft on social networking sites and the Internet in general. Unfortunately, very limited research has been done in the field of numeric set watermarking, owing to underlying limitations on the number of items in the set and the number of LSBs in each item available for watermarking. In 2009, Gupta et al. proposed a numeric set watermarking model that embeds watermark bits in the items of the set based on a hash of the items’ most significant bits (MSBs). If an item is chosen for watermarking, a watermark bit is embedded in its least significant bits, and the replaced bit is inserted into the fractional value to provide reversibility. The authors show their scheme to be resilient against the traditional subset addition, deletion, and modification attacks, as well as secondary watermarking attacks. In this paper, we present a bucket attack on this watermarking model. The attack consists of creating buckets of items with the same MSBs and determining whether the items in each bucket carry watermark bits. Experimental results show that the bucket attack is very strong and destroys the entire watermark with a success rate close to 100%. We examine the inherent weaknesses in the watermarking model of Gupta et al. that leave it vulnerable to the bucket attack, and we propose potential safeguards that can provide resilience against this attack.
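A minimal Python sketch of the attack's core step, with illustrative parameters of our own choosing: because the embedding decision is a hash of the MSBs alone, every item in an MSB bucket shares the same embed/skip decision, so re-randomising LSBs bucket by bucket overwrites any embedded bits without knowledge of the secret key:

    import random
    from collections import defaultdict

    WIDTH, MSB_BITS = 32, 8   # illustrative parameters, not from the paper
    random.seed(0)

    def bucket_attack(items):
        """Sketch of the bucket attack on MSB-hash watermark selection.
        Selection depends only on an item's MSBs, so all items in a bucket
        share the same embed/skip decision; re-randomising each bucket's
        LSBs therefore overwrites every embedded bit while leaving the
        MSBs (and hence the data's coarse values) untouched."""
        buckets = defaultdict(list)
        for i, v in enumerate(items):
            buckets[v >> (WIDTH - MSB_BITS)].append(i)
        out = list(items)
        for indices in buckets.values():
            for i in indices:
                out[i] = (out[i] & ~1) | random.getrandbits(1)  # new LSB
        return out

    items = [random.getrandbits(WIDTH) for _ in range(100)]
    changed = sum(a != b for a, b in zip(items, bucket_attack(items)))
    print(changed, "items had their LSB altered")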