Abstract:
Locating hands in sign language video is challenging due to a number of factors. Hand appearance varies widely across signers due to anthropometric variations and varying levels of signer proficiency. Video can be captured under varying illumination, camera resolutions, and levels of scene clutter, e.g., high-res video captured in a studio vs. low-res video gathered by a web cam in a user’s home. Moreover, the signers’ clothing varies, e.g., skin-toned clothing vs. contrasting clothing, short-sleeved vs. long-sleeved shirts, etc. In this work, the hand detection problem is addressed in an appearance matching framework. The Histogram of Oriented Gradient (HOG) based matching score function is reformulated to allow non-rigid alignment between pairs of images to account for hand shape variation. The resulting alignment score is used within a Support Vector Machine hand/not-hand classifier for hand detection. The new matching score function yields improved performance (in ROC area and hand detection rate) over the Vocabulary Guided Pyramid Match Kernel (VGPMK) and the traditional, rigid HOG distance on American Sign Language video gestured by expert signers. The proposed match score function is computationally less expensive (for training and testing), has fewer parameters and is less sensitive to parameter settings than VGPMK. The proposed detector works well on test sequences from an inexpert signer in a non-studio setting with cluttered background.
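To make the non-rigid matching idea concrete, here is a minimal sketch in the spirit of the abstract: per-cell orientation histograms (HOG-like features) are compared, but each cell in one image is allowed to match its best counterpart within a small spatial window in the other image, rather than only the rigidly aligned cell. This is an illustration under assumed details, not the authors' exact score function; the function names and parameters (cell size, window radius) are invented for the example.

```python
# A minimal sketch of a HOG-style matching score with non-rigid cell
# alignment. Illustrative only; not the paper's exact formulation.
# Both images are assumed grayscale and the same size.
import numpy as np

def cell_histograms(img, cell=8, bins=9):
    """Unsigned gradient-orientation histograms on a grid of cells."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # unsigned: [0, pi)
    h, w = img.shape
    rows, cols = h // cell, w // cell
    bin_idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    hists = np.zeros((rows, cols, bins))
    for r in range(rows):
        for c in range(cols):
            sl = (slice(r * cell, (r + 1) * cell),
                  slice(c * cell, (c + 1) * cell))
            hists[r, c] = np.bincount(bin_idx[sl].ravel(),
                                      weights=mag[sl].ravel(),
                                      minlength=bins)
    norms = np.linalg.norm(hists, axis=2, keepdims=True)
    return hists / np.maximum(norms, 1e-8)

def nonrigid_match_score(a, b, window=1):
    """Mean over cells of the best cosine similarity within +/-window cells."""
    ha, hb = cell_histograms(a), cell_histograms(b)
    rows, cols, _ = ha.shape
    score = 0.0
    for r in range(rows):
        for c in range(cols):
            best = 0.0
            for dr in range(-window, window + 1):       # small spatial slack
                for dc in range(-window, window + 1):   # = non-rigid alignment
                    r2, c2 = r + dr, c + dc
                    if 0 <= r2 < rows and 0 <= c2 < cols:
                        best = max(best, float(ha[r, c] @ hb[r2, c2]))
            score += best
    return score / (rows * cols)
```

In the pipeline the abstract describes, such an alignment score would then be fed to a hand/not-hand SVM classifier.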
The psychology of immersion and development of a quantitative measure of immersive response in games
Abstract:
This study sets out to investigate the psychology of immersion and the immersive response of individuals in relation to video and computer games. Initially, an exhaustive review of literature is presented, including research into games, player demographics, personality and identity. Play in traditional psychology is also reviewed, as well as previous research into immersion and attempts to define and measure this construct. An online qualitative study was carried out (N=38), and data was analysed using content analysis. A definition of immersion emerged, as well as a classification of two separate types of immersion, namely, vicarious immersion and visceral immersion. A survey study (N=217) verified the discrete nature of these categories and rejected the null hypothesis that there was no difference between individuals' interpretations of vicarious and visceral immersion. The primary aim of this research was to create a quantitative instrument which measures the immersive response as experienced by the player in a single game session. The IMX Questionnaire was developed using data from the initial qualitative study and quantitative survey. Exploratory Factor Analysis was carried out on data from 300 participants for the IMX Version 1, and Confirmatory Factor Analysis was conducted on data from 380 participants on the IMX Version 2. IMX Version 3 was developed from the results of these analyses. This questionnaire was found to have high internal consistency reliability and validity.
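As a concrete illustration of the reliability result reported for the IMX Questionnaire, the sketch below computes Cronbach's alpha, the standard internal-consistency statistic. The simulated item data and sample size are purely illustrative and are not the study's data.

```python
# A small sketch of an internal-consistency check (Cronbach's alpha).
# Synthetic data: items share one latent "immersion" factor plus noise.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of questionnaire scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))                     # shared factor
responses = latent + 0.5 * rng.normal(size=(300, 10))  # 10 correlated items
print(round(cronbach_alpha(responses), 2))             # high alpha expected
```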
Abstract:
In many real-world situations, we make decisions in the presence of multiple, often conflicting and non-commensurate objectives. The process of optimizing systematically and simultaneously over a set of objective functions is known as multi-objective optimization. In multi-objective optimization, we have a (possibly exponentially large) set of decisions, and each decision has a set of alternatives. Each alternative depends on the state of the world and is evaluated with respect to a number of criteria. In this thesis, we consider decision-making problems in two scenarios. In the first scenario, the current state of the world, under which the decisions are to be made, is known in advance. In the second scenario, the current state of the world is unknown at the time of making decisions. For decision making under certainty, we consider the framework of multi-objective constraint optimization and focus on extending the algorithms that solve these models to the case where there are additional trade-offs. We focus especially on branch-and-bound algorithms that use a mini-buckets algorithm for generating the upper bound at each node of the search tree (in the context of maximizing values of objectives). Since the size of the guiding upper-bound sets can become very large during the search, we introduce efficient methods for reducing these sets while still maintaining the upper-bound property. We define a formalism for imprecise trade-offs, which allows the decision maker, during the elicitation stage, to specify a preference for one multi-objective utility vector over another, and we use such preferences to infer other preferences. The induced preference relation is then used to eliminate dominated utility vectors during the computation. For testing dominance between multi-objective utility vectors, we present three different approaches. The first is based on linear programming; the second uses a distance-based algorithm (which relies on a measure of the distance between a point and a convex cone); the third uses matrix multiplication, which results in much faster dominance checks with respect to the preference relation induced by the trade-offs. Furthermore, we show that our trade-offs approach, which is based on a preference inference technique, can also be given an alternative semantics based on the well-known Multi-Attribute Utility Theory. Our comprehensive experimental results on common multi-objective constraint optimization benchmarks demonstrate that the proposed enhancements allow the algorithms to scale up to much larger problems than before. For decision making under uncertainty, we describe multi-objective influence diagrams, based on a set of p objectives, where utility values are vectors in R^p and are typically only partially ordered. These can be solved by a variable elimination algorithm, leading to a set of maximal values of expected utility. If the Pareto ordering is used, this set can often be prohibitively large. We consider approximate representations of the Pareto set based on ϵ-coverings, allowing much larger problems to be solved. In addition, we define a method for incorporating user trade-offs, which also greatly improves efficiency.
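As a small illustration of the dominance testing discussed above, the sketch below filters a set of utility vectors down to its Pareto-maximal elements (maximizing objectives). It shows only the baseline Pareto ordering, not the thesis's trade-off-induced relation or its matrix-multiplication speedup; the data are invented.

```python
# A minimal Pareto-dominance filter for multi-objective utility vectors.
import numpy as np

def dominates(u, v):
    """u Pareto-dominates v if u >= v componentwise and u != v."""
    return np.all(u >= v) and np.any(u > v)

def pareto_maximal(vectors):
    """Keep vectors not dominated by any other vector in the set."""
    vecs = [np.asarray(v, dtype=float) for v in vectors]
    return [u for u in vecs
            if not any(dominates(v, u) for v in vecs if v is not u)]

points = [(3, 1), (2, 2), (1, 3), (2, 1), (1, 1)]
print(pareto_maximal(points))   # (3,1), (2,2), (1,3) remain
```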
Abstract:
Measurement of antigen-specific T cell responses is an adjunctive parameter to evaluate protection induced by a previous Bordetella pertussis infection or vaccination. The assessment of T cell responses is technically complex and usually performed on fresh peripheral blood mononuclear cells (PBMC). The objective of this study was to identify simplified methods to assess pertussis-specific T cell responses and to verify whether these assays could be performed using frozen/thawed (frozen) PBMC. Three read-outs to measure proliferation were compared: the fluorescent dye 5,6-carboxyfluorescein diacetate succinimidyl ester (CFSE) dilution test, the number of blast cells defined by physical parameters, and the incorporation of ³H-thymidine. The results of pertussis-specific assays performed on fresh PBMC were compared to the results on frozen PBMC from the same donor. High concordance was obtained when the results of the CFSE and blast read-outs were compared, an encouraging result since blast analysis allows the identification of proliferating cells and requires neither a radioactive tracer nor staining. The results obtained using fresh and frozen PBMC from the same donor in the different T cell assays, including IFNγ and TNFα cytokine production, did not show significant differences, suggesting that a careful cryopreservation process would not significantly influence T cell response evaluation. By adopting blast analysis and frozen PBMC, testing T cell responses is simplified and might be applied in population studies, providing new instruments to better define the correlates of protection that remain elusive in pertussis.
Abstract:
OBJECTIVE: This study compared self-reported subjective life expectancy (i.e., probability of living to age 75) for normal-weight, overweight, and obese weight groups to examine whether individuals are internalizing information about the health risks due to excessive weight. RESEARCH METHODS AND PROCEDURES: Using data from the Health and Retirement Study, a total of 9035 individuals 51 to 61 years old were analyzed by BMI category. The primary outcome measure was individuals' reports about their own expectations of survival to age 75. Absolute and relative risks of survival were compared with published estimates of survival to age 75. RESULTS: Consistently, higher levels of BMI were associated with lower self-estimated survival probabilities. Differences relative to normal weight ranged from 4.9% (p < 0.01) for male nonsmokers to 8.8% (p < 0.001) for female nonsmokers. However, these differences were substantially less than those obtained from published survival curve estimates, suggesting that obese individuals tended to underestimate mortality risks. DISCUSSION: Individuals appeared to underestimate the mortality risks of excessive weight; thus, knowledge campaigns about the risks of obesity should remain a top priority.
Abstract:
OBJECTIVE: The Veterans Health Administration has developed My HealtheVet (MHV), a Web-based portal that links veterans to their care in the Veterans Affairs (VA) system. The objective of this study was to measure diabetic veterans' access to and use of the Internet, and their interest in using MHV to help manage their diabetes. MATERIALS AND METHODS: Cross-sectional mailed survey of 201 patients with type 2 diabetes and hemoglobin A1c > 8.0% receiving primary care at any of five primary care clinic sites affiliated with a VA tertiary care facility. Main measures included Internet usage, access, and attitudes; computer skills; interest in using the Internet; awareness of and attitudes toward MHV; demographics; and socioeconomic status. RESULTS: A majority of respondents reported having access to the Internet at home. Nearly half of all respondents had searched online for information about diabetes, including some who did not have home Internet access. More than a third obtained "some" or "a lot" of their health-related information online. Forty-one percent reported being "very interested" in using MHV to help track their home blood glucose readings, a third of whom did not have home Internet access. Factors associated with being "very interested" were as follows: having access to the Internet at home (p < 0.001), "a lot/some" trust in the Internet as a source of health information (p = 0.002), younger age (p = 0.03), and some college education (p = 0.04). Neither race (p = 0.44) nor income (p = 0.25) was significantly associated with interest in MHV. CONCLUSIONS: This study found that a diverse sample of older VA patients with suboptimally controlled diabetes had a level of familiarity with and access to the Internet comparable to an age-matched national sample. In addition, there was a high degree of interest in using the Internet to help manage their diabetes.
Abstract:
Duke Medicine utilized interprofessional case conferences (ICCs) from 2008 to 2012 with the objective of modeling and facilitating the development of teamwork skills among diverse health profession students, including physical therapy, physician assistant, medical doctor, and nursing students. The purpose of this publication was to describe the operational process used to develop and implement the ICCs and to measure their success in order to shape future work. The ICCs were offered to develop skills and attitudes essential for participation in healthcare teams. Students were facilitated by faculty of different professions to conduct a comprehensive historical assessment of a standardized patient (SP), determine pertinent physical and lab assessments to undertake, and develop and share a comprehensive management plan. Cases included patient problems that were authentic and relevant to each professional student in attendance. The main barriers to implementation are outlined, and the focus on the process of working together is highlighted. Evaluation showed high satisfaction rates among participants, and the outcomes from these experiences are presented. The limitations of these results are discussed and recommendations for future assessment are emphasized. The ICCs demonstrated that students will come together voluntarily to learn in teams, even at a research-focused institution, and will report benefit from the collaborative exercise.
Abstract:
Context: Stress fractures are among the most common injuries in sports, accounting for approximately 10% of all overuse injuries. Treatment of fifth metatarsal stress fractures involves both surgical and nonsurgical interventions. Fifth metatarsal stress fractures are difficult to treat because of the risks of delayed union, nonunion, and recurrent injuries. Most of these injuries occur during agility tasks, such as those performed in soccer, basketball, and lacrosse. Objective: To examine the effect of a rigid carbon graphite footplate on plantar loading during 2 agility tasks. Design: Crossover study. Setting: Laboratory. Patients or Other Participants: A total of 19 recreational male athletes with no history of lower extremity injury in the past 6 months and no previous metatarsal stress fractures were tested. Main Outcome Measure(s): Seven 45° side-cut and crossover-cut tasks were completed in a shoe with or without a full-length rigid carbon plate. Testing order between the shoe conditions and the 2 cutting tasks was randomized. Plantar-loading data were recorded using instrumented insoles. Peak pressure, maximum force, force-time integral, and contact area beneath the total foot, the medial and lateral midfoot, and the medial, middle, and lateral forefoot were analyzed. A series of paired t tests was used to examine differences between the footwear conditions (carbon graphite footplate, shod) for both cutting tasks independently (α = .05). Results: During the side-cut task, the footplate increased total foot and lateral midfoot peak pressures while decreasing contact area and lateral midfoot force-time integral. During the crossover-cut task, the footplate increased total foot and lateral midfoot peak pressure and lateral forefoot force-time integral while decreasing total and lateral forefoot contact area. Conclusions: Although a rigid carbon graphite footplate altered some aspects of the plantar-pressure profile during cutting in uninjured participants, it was ineffective in reducing plantar loading beneath the fifth metatarsal.
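For concreteness, here is a sketch of the paired comparison described in the analysis: each athlete is measured under both footwear conditions, so the conditions are compared with a paired t test. The numbers below are synthetic and do not reproduce the study's data.

```python
# A brief sketch of a paired t test across two footwear conditions.
# Synthetic data for 19 athletes, for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
shod = rng.normal(350, 40, size=19)         # e.g., peak pressure, shoe only
footplate = shod + rng.normal(15, 10, 19)   # same athletes with carbon plate

t, p = stats.ttest_rel(footplate, shod)     # paired: same subjects twice
print(f"t = {t:.2f}, p = {p:.4f}")          # compare against alpha = .05
```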
Abstract:
We consider a deterministic system with two conserved quantities and infinitely many invariant measures. However, the system possesses a unique invariant measure when enough stochastic forcing and balancing dissipation are added. We then show that, as the forcing and dissipation are removed, a unique limit of the deterministic system is selected. The exact structure of the limiting measure depends on the specifics of the stochastic forcing.
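A schematic of this construction, in assumed notation (the abstract does not give the model): deterministic dynamics B, dissipation scaled by a parameter ε, and noise scaled so that forcing and dissipation balance, with the regularized invariant measures converging to a selected limit as ε → 0.

```latex
% Illustrative form only; the paper's specific equations are not given here.
\[
  dX_t^{\epsilon} \;=\; B\!\left(X_t^{\epsilon}\right)dt
  \;-\; \epsilon\, A X_t^{\epsilon}\, dt
  \;+\; \sqrt{2\epsilon}\,\sigma\, dW_t ,
  \qquad
  \mu^{\epsilon} \;\rightharpoonup\; \mu \quad (\epsilon \to 0),
\]
% where $\mu^{\epsilon}$ is the unique invariant measure of the forced,
% dissipated system and $\mu$ is the selected invariant measure of the
% deterministic dynamics.
```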
Abstract:
We introduce a new scale that measures how central an event is to a person's identity and life story. For the most stressful or traumatic event in a person's life, the full 20-item Centrality of Event Scale (CES) and the short 7-item scale are reliable (alphas of .94 and .88, respectively) in a sample of 707 undergraduates. The scale correlates .38 with PTSD symptom severity and .23 with depression. The present findings are discussed in relation to previous work on individual differences related to PTSD symptoms. Possible connections between the CES and measures of maladaptive attributions and rumination are considered, along with suggestions for future research.
Abstract:
The percentage of subjects recalling each unit in a list or prose passage is considered as a dependent measure. When the same units are recalled in different tasks, processing is assumed to be the same; when different units are recalled, processing is assumed to be different. Two collections of memory tasks are presented, one for lists and one for prose. The relations found in these two collections are supported by an extensive reanalysis of the existing prose memory literature. The same set of words was learned by 13 different groups of subjects under 13 different conditions. Included were intentional free-recall tasks, incidental free recall following lexical decision, and incidental free recall following ratings of orthographic distinctiveness and emotionality. Although the nine free-recall tasks varied widely with regard to the amount of recall, the relative probability of recall for the words was very similar among the tasks. Imagery encoding and recognition produced relative probabilities of recall that were different from each other and from the free-recall tasks. Similar results were obtained with a prose passage. A story was learned by 13 different groups of subjects under 13 different conditions. Eight free-recall tasks, which varied with respect to incidental or intentional learning, retention interval, and the age of the subjects, produced similar relative probabilities of recall, whereas recognition and prompted recall produced relative probabilities of recall that were different from each other and from the free-recall tasks. A review of the prose literature was undertaken to test the generality of these results. Analysis of variance is the most common statistical procedure in this literature. If the relative probability of recall of units varied across conditions, a units-by-condition interaction would be expected. For the 12 studies that manipulated retention interval, an average of 21% of the variance was accounted for by the main effect of retention interval, 17% by the main effect of units, and only 2% by the retention interval by units interaction. Similarly, for the 12 studies that varied the age of the subjects, 6% of the variance was accounted for by the main effect of age, 32% by the main effect of units, and only 1% by the interaction of age by units. (ABSTRACT TRUNCATED AT 400 WORDS)
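To make the variance accounting concrete, the sketch below partitions the total sum of squares of a small units-by-condition recall table into the two main effects and their interaction, reporting each as a percentage of variance, in the style of the review above. The table is invented for illustration.

```python
# Partitioning variance in a units x condition table (one value per cell):
# SS_total = SS_units + SS_condition + SS_interaction. Synthetic data.
import numpy as np

recall = np.array([[.90, .70, .50],    # rows: text units
                   [.60, .45, .30],    # cols: retention intervals
                   [.40, .25, .15]])   # entries: proportion recalled

grand = recall.mean()
ss_total = ((recall - grand) ** 2).sum()
ss_units = recall.shape[1] * ((recall.mean(axis=1) - grand) ** 2).sum()
ss_cond  = recall.shape[0] * ((recall.mean(axis=0) - grand) ** 2).sum()
ss_inter = ss_total - ss_units - ss_cond    # what remains is interaction

for name, ss in [("units", ss_units), ("condition", ss_cond),
                 ("units x condition", ss_inter)]:
    print(f"{name}: {100 * ss / ss_total:.0f}% of variance")
```

A small interaction term, as in the review, means the relative recallability of units is stable across conditions.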
Abstract:
Scheduling a set of jobs over a collection of machines to optimize a certain quality-of-service measure is one of the most important research topics in both computer science theory and practice. In this thesis, we design algorithms that optimize flow-time (or delay) of jobs for scheduling problems that arise in a wide range of applications. We consider the classical model of unrelated machine scheduling and resolve several long-standing open problems; we introduce new models that capture the novel algorithmic challenges in scheduling jobs in data centers or large clusters; we study the effect of selfish behavior in distributed and decentralized environments; we design algorithms that strive to balance the energy consumption and performance.
The technically interesting aspect of our work is the surprising connections we establish between approximation and online algorithms, economics, game theory, and queuing theory. It is the interplay of ideas from these different areas that lies at the heart of most of the algorithms presented in this thesis.
The main contributions of the thesis can be placed in one of the following categories.
1. Classical Unrelated Machine Scheduling: We give the first polylogarithmic approximation algorithms for minimizing the average flow-time and minimizing the maximum flow-time in the offline setting. In the online and non-clairvoyant setting, we design the first non-clairvoyant algorithm for minimizing the weighted flow-time in the resource augmentation model. Our work introduces an iterated-rounding technique for offline flow-time optimization, and gives the first framework for analyzing non-clairvoyant algorithms for unrelated machines. (The flow-time objective itself is illustrated in the sketch after this list.)
2. Polytope Scheduling Problem: To capture the multidimensional nature of the scheduling problems that arise in practice, we introduce the Polytope Scheduling Problem (PSP). The PSP generalizes almost all classical scheduling models, and also captures hitherto unstudied scheduling problems such as routing multi-commodity flows, routing multicast (video-on-demand) trees, and multi-dimensional resource allocation. We design several competitive algorithms for the PSP and its variants for the objectives of minimizing the flow-time and completion time. Our work establishes many interesting connections between scheduling and market equilibrium concepts, fairness and non-clairvoyant scheduling, and the queuing-theoretic notion of stability and resource augmentation analysis.
3. Energy Efficient Scheduling: We give the first non-clairvoyant algorithm for minimizing the total flow-time + energy in the online and resource augmentation model for the most general setting of unrelated machines.
4. Selfish Scheduling: We study the effect of selfish behavior in scheduling and routing problems. We define a fairness index for scheduling policies called bounded stretch, and show that for the objective of minimizing the average (weighted) completion time, policies with small stretch lead to equilibrium outcomes with a small price of anarchy. Our work gives the first linear/convex programming duality-based framework to bound the price of anarchy for general equilibrium concepts such as coarse correlated equilibrium.
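As noted in item 1, here is a minimal sketch of the flow-time objective that runs through these contributions: the flow time of a job is its completion time minus its release time. The sketch simulates single-machine SRPT (shortest remaining processing time), a classic clairvoyant baseline, in unit time steps; the job data are illustrative, and this is not one of the thesis's algorithms.

```python
# Total flow-time on one machine under SRPT, simulated in unit time steps.
def total_flow_time_srpt(jobs):
    """jobs: list of (release_time, processing_time) with integer times."""
    remaining = {i: p for i, (r, p) in enumerate(jobs)}
    completion = {}
    t = 0
    while remaining:
        ready = [i for i in remaining if jobs[i][0] <= t]
        if not ready:                 # machine idles until a release
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])   # SRPT rule
        remaining[i] -= 1             # run job i for one time unit
        t += 1
        if remaining[i] == 0:
            completion[i] = t
            del remaining[i]
    # flow time = completion time - release time, summed over jobs
    return sum(completion[i] - jobs[i][0] for i in completion)

print(total_flow_time_srpt([(0, 3), (1, 1), (2, 2)]))   # -> 9
```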
Abstract:
BACKGROUND: RA and CVD both have inflammation as part of the underlying biology. Our objective was to explore the relationships of GlycA, a measure of glycosylated acute phase proteins, with inflammation and cardiometabolic risk in RA, and explore whether these relationships were similar to those for persons without RA. METHODS: Plasma GlycA was determined for 50 individuals with mild-moderate RA disease activity and 39 controls matched for age, gender, and body mass index (BMI). Regression analyses were performed to assess relationships between GlycA and important markers of traditional inflammation and cardio-metabolic health: inflammatory cytokines, disease activity, measures of adiposity and insulin resistance. RESULTS: On average, RA activity was low (DAS-28 = 3.0 ± 1.4). Traditional inflammatory markers, ESR, hsCRP, IL-1β, IL-6, IL-18 and TNF-α, were greater in RA versus controls (P < 0.05 for all). GlycA concentrations were significantly elevated in RA versus controls (P = 0.036). In RA, greater GlycA associated with disease activity (DAS-28; R_DAS-28 = 0.5) and inflammation (R_ESR = 0.7, R_hsCRP = 0.7, R_IL-6 = 0.3; P < 0.05 for all); in BMI-matched controls, these inflammatory associations were absent or weaker (hsCRP), but GlycA was related to IL-18 (R_hsCRP = 0.3, R_IL-18 = 0.4; P < 0.05). In RA, greater GlycA associated with more total abdominal adiposity and less muscle density (R_abdominal-adiposity = 0.3, R_muscle-density = -0.3; P < 0.05 for both). In BMI-matched controls, GlycA associated with more cardio-metabolic markers: BMI, waist circumference, adiposity measures and insulin resistance (R = 0.3-0.6; P < 0.05 for all). CONCLUSIONS: GlycA provides an integrated measure of inflammation with contributions from traditional inflammatory markers and cardio-metabolic sources, dominated by inflammatory markers in persons with RA and cardio-metabolic factors in those without.
Abstract:
Statistical learning can be used to extract the words from continuous speech. Gómez, Bion, and Mehler (Language and Cognitive Processes, 26, 212–223, 2011) proposed an online measure of statistical learning: They superimposed auditory clicks on a continuous artificial speech stream made up of a random succession of trisyllabic nonwords. Participants were instructed to detect these clicks, which could be located either within or between words. The results showed that, over the length of exposure, reaction times (RTs) increased more for within-word than for between-word clicks. This result has been accounted for by means of statistical learning of the between-word boundaries. However, even though statistical learning occurs without an intention to learn, it nevertheless requires attentional resources. Therefore, this process could be affected by a concurrent task such as click detection. In the present study, we evaluated the extent to which the click detection task indeed reflects successful statistical learning. Our results suggest that the emergence of RT differences between within- and between-word click detection is neither systematic nor related to the successful segmentation of the artificial language. Therefore, instead of being an online measure of learning, the click detection task seems to interfere with the extraction of statistical regularities.
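For context, here is a short sketch of the statistical-learning computation that underlies this paradigm: transitional probabilities between adjacent syllables, which are high within nonwords and dip at word boundaries. The mini-language below is illustrative and is not the stimulus set of Gómez, Bion, and Mehler.

```python
# Transitional probabilities TP(a -> b) = count(a, b) / count(a) over a
# stream of concatenated trisyllabic nonwords. Illustrative mini-language.
import random
from collections import Counter

words = ["bidaku", "golatu", "padoti"]   # trisyllabic nonwords
random.seed(0)
stream, prev = [], None
while len(stream) < 900:
    w = random.choice([x for x in words if x != prev])  # no immediate repeats
    prev = w
    stream += [w[i:i + 2] for i in range(0, 6, 2)]      # split into syllables

pairs = Counter(zip(stream, stream[1:]))
firsts = Counter(stream[:-1])
tp = {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

print(tp[("bi", "da")])   # within-word transition: 1.0
print(tp[("ku", "go")])   # between-word transition: ~0.5
```

The drop in TP at word offsets is the statistical cue that segmentation accounts, and the click-detection RT difference, are meant to track.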