969 results for Real-world


Relevance: 60.00%

Abstract:

We conduct the first empirical investigation of common-pool resource users' dynamic and strategic behavior at the micro level using real-world data. Fishermen's strategies in a fully dynamic game account for latent resource dynamics and other players' actions, revealing the profit structure of the fishery. We compare the fishermen's actual and socially optimal exploitation paths under a time-specific vessel allocation policy and find a sizable dynamic externality. Individual fishermen respond to other users by exerting effort above the optimal level early in the season. Congestion is costly instantaneously but is beneficial in the long run because it partially offsets dynamic inefficiencies.

Relevance: 60.00%

Abstract:

An event memory is a mental construction of a scene recalled as a single occurrence. It therefore requires the hippocampus and ventral visual stream needed for all scene construction. The construction need not come with a sense of reliving or be made by a participant in the event, and it can be a summary of occurrences from more than one encoding. The mental construction, or physical rendering, of any scene must be done from a specific location and time; this introduces a "self" located in space and time, which is a necessary, but need not be a sufficient, condition for a sense of reliving. We base our theory on scene construction rather than reliving because this allows the integration of many literatures and because there is more accumulated knowledge about scene construction's phenomenology, behavior, and neural basis. Event memory differs from episodic memory in that it does not conflate the independent dimensions of whether or not a memory is relived, is about the self, is recalled voluntarily, or is based on a single encoding with whether it is recalled as a single occurrence of a scene. Thus, we argue that event memory provides a clearer contrast to semantic memory, which also can be about the self, be recalled voluntarily, and be from a unique encoding; allows for a more comprehensive dimensional account of the structure of explicit memory; and better accounts for laboratory and real-world behavioral and neural results, including those from neuropsychology and neuroimaging, than does episodic memory.

Relevance: 60.00%

Abstract:

If and only if each single cue uniquely defines its target, an independence model based on fragment theory can predict the strength of a combined dual cue from the strengths of its single-cue components. If the single cues do not each uniquely define their target, no single monotonic function can predict the strength of the dual cue from its components; rather, what matters is the number of possible targets. The probability of generating a target word was .19 for rhyme cues, .14 for category cues, and .97 for rhyme-plus-category dual cues. Moreover, some pairs of cues had probabilities of producing their targets of .03 when used individually and 1.00 when used together, whereas other pairs had moderate probabilities both individually and together. The results, which are interpreted in terms of multiple constraints limiting the number of responses, show why rhymes, which play a minimal role in laboratory studies of memory, are common in real-world mnemonics.
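A quick arithmetic check makes the abstract's point concrete. Assuming the independence benchmark combines cues by the standard probabilistic (noisy-OR) rule, an assumption on our part since the abstract does not state the model's formula, the prediction from the single-cue strengths falls far short of the observed dual-cue strength:

```python
# Sketch: comparing an independence-model prediction against the observed
# dual-cue recall probability. The noisy-OR combination rule below is an
# assumption; the abstract does not spell out the model's exact formula.

def noisy_or(p_rhyme: float, p_category: float) -> float:
    """Predicted dual-cue strength if the two cues act independently."""
    return 1.0 - (1.0 - p_rhyme) * (1.0 - p_category)

predicted = noisy_or(0.19, 0.14)  # reported single-cue probabilities
observed = 0.97                   # reported dual-cue probability

print(f"independence prediction: {predicted:.2f}")  # ~0.30
print(f"observed dual cue:       {observed:.2f}")   # 0.97
```

The gap (roughly .30 predicted versus .97 observed) is what the abstract attributes to multiple constraints jointly narrowing the set of possible targets, rather than to any monotonic combination of cue strengths.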

Relevance: 60.00%

Abstract:

BACKGROUND: Several trials have demonstrated the efficacy of nurse telephone case management for diabetes (DM) and hypertension (HTN) in academic or vertically integrated systems. Little is known about the real-world potency of these interventions. OBJECTIVE: To assess the effectiveness of nurse behavioral management of DM and HTN in community practices among patients with both diseases. DESIGN: The study was designed as a patient-level randomized controlled trial. PARTICIPANTS: Participants included adult patients with both type 2 DM and HTN who were receiving care at one of nine community fee-for-service practices. Subjects were required to have inadequately controlled DM (hemoglobin A1c [A1c] ≥ 7.5%) but could have well-controlled HTN. INTERVENTIONS: All patients received a call from a nurse experienced in DM and HTN management once every two months over a period of two years, for a total of 12 calls. Intervention patients received tailored DM- and HTN-focused behavioral content; control patients received non-tailored, non-interactive information regarding health issues unrelated to DM and HTN (e.g., skin cancer prevention). MAIN OUTCOMES AND MEASURES: Systolic blood pressure (SBP) and A1c were co-primary outcomes, measured at 6, 12, and 24 months; 24 months was the primary time point. RESULTS: Three hundred seventy-seven subjects were enrolled; 193 were randomized to intervention and 184 to control. Subjects were 55% female and 50% white; the mean baseline A1c was 9.1% (SD = 1%) and the mean SBP was 142 mmHg (SD = 20). Eighty-two percent of scheduled interviews were conducted; 69% of intervention patients and 70% of control patients reached the 24-month time point. Expressing model-estimated differences as (intervention minus control), at 24 months intervention patients had similar A1c [diff = 0.1%, 95% CI (-0.3, 0.5), p = 0.51] and SBP [diff = -0.9 mmHg, 95% CI (-5.4, 3.5), p = 0.68] values compared to control patients. Likewise, DBP (diff = 0.4 mmHg, p = 0.76), weight (diff = 0.3 kg, p = 0.80), and physical activity levels (diff = 153 MET-min/week, p = 0.41) were similar between control and intervention patients. Results were also similar at the 6- and 12-month time points. CONCLUSIONS: In nine community fee-for-service practices, telephonic nurse case management did not lead to improvement in A1c or SBP. Gains seen in telephonic behavioral self-management interventions in optimal settings may not translate to the wider range of primary care settings.

Relevance: 60.00%

Abstract:

In preventing invasive fungal disease (IFD) in patients with acute myelogenous leukemia (AML) or myelodysplastic syndrome (MDS), clinical trials have demonstrated the efficacy of posaconazole over fluconazole and itraconazole. However, the effectiveness of posaconazole has not been investigated in the United States in a real-world setting outside the environment of a controlled clinical trial. We performed a single-center, retrospective cohort study of 130 evaluable patients ≥18 years of age admitted to Duke University Hospital between 2004 and 2010 who received either posaconazole or fluconazole as prophylaxis during first induction or first reinduction chemotherapy for AML or MDS. The primary endpoint was possible, probable, or definite breakthrough IFD. Baseline characteristics were well balanced between groups, except that posaconazole recipients received reinduction chemotherapy and cytarabine more frequently. IFD occurred in 17/65 (27.0%) in the fluconazole group and in 6/65 (9.2%) in the posaconazole group (P = 0.012). Definite/probable IFDs occurred in 7 (10.8%) and 0 (0%) patients, respectively (P = 0.0013). In multivariate analysis, fluconazole prophylaxis and duration of neutropenia were predictors of IFD. Mortality was similar between groups. This study demonstrates the superior effectiveness of posaconazole over fluconazole as prophylaxis against IFD in AML and MDS patients. This superiority did not translate into a reduction in 100-day all-cause mortality.

Relevance: 60.00%

Abstract:

Within industrial automation systems, three-dimensional (3-D) vision provides very useful feedback information for the autonomous operation of various manufacturing equipment (e.g., industrial robots, material handling devices, assembly systems, and machine tools). The hardware performance of contemporary 3-D scanning devices is suitable for online utilization. However, the bottleneck is the lack of real-time algorithms for recognizing geometric primitives (e.g., planes and natural quadrics) in a scanned point cloud. One of the most important and most frequent geometric primitives in various engineering tasks is the plane. In this paper, we propose a new fast one-pass algorithm for recognition (segmentation and fitting) of planar segments from a point cloud. To segment planar regions effectively, we exploit the orthogonality of certain wavelets to polynomial functions, as well as their sensitivity to abrupt changes. After segmenting the planar regions, we estimate the parameters of the corresponding planes using standard fitting procedures. For point-cloud structuring, a z-buffer algorithm with mesh triangles represented in barycentric coordinates is employed. The proposed recognition method is tested and experimentally validated in several real-world case studies.
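The abstract leaves the "standard fitting procedures" unspecified; a common choice for that step, offered here only as an assumption, is total-least-squares plane fitting via SVD, applied to the points of one segmented region:

```python
# Sketch: total-least-squares plane fitting for one segmented planar region.
# This shows only the fitting step; the paper's wavelet-based segmentation
# and z-buffer structuring are not reproduced here.
import numpy as np

def fit_plane(points: np.ndarray):
    """Fit a plane n.x = d to an (N, 3) point array, minimizing the sum of
    squared orthogonal distances. Returns the unit normal n and offset d."""
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, normal @ centroid

# Hypothetical usage on a noisy planar patch:
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.25 * xy[:, 1] + 0.01 * rng.standard_normal(200)
normal, d = fit_plane(np.column_stack([xy, z]))
print("normal:", normal, "offset:", d)
```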

Relevance: 60.00%

Abstract:

Our media is saturated with claims of "facts" made from data. Database research has in the past focused on how to answer queries, but has not devoted much attention to discerning more subtle qualities of the resulting claims, e.g., is a claim "cherry-picking"? This paper proposes a Query Response Surface (QRS) based framework that models claims based on structured data as parameterized queries. A key insight is that we can learn a lot about a claim by perturbing its parameters and seeing how its conclusion changes. This framework lets us formulate and tackle practical fact-checking tasks, such as reverse-engineering vague claims and countering questionable claims, as computational problems. Within the QRS-based framework, we take one step further and propose a problem, along with efficient algorithms, for finding high-quality claims of a given form from data, i.e., raising good questions in the first place. This is achieved by using a limited number of high-valued claims to represent high-valued regions of the QRS. Besides the general-purpose high-quality claim-finding problem, lead-finding can be tailored towards specific claim-quality measures, also defined within the QRS framework. An example of uniqueness-based lead-finding is presented for "one-of-the-few" claims, yielding interpretable high-quality claims and an adjustable mechanism for ranking objects, e.g., NBA players, based on what claims can be made for them. Finally, we study the use of visualization as a powerful way of conveying the results of a large number of claims. An efficient two-stage sampling algorithm is proposed for generating the input of a 2D scatter plot with heatmap, evaluating a limited amount of data while preserving the two essential visual features, namely outliers and clusters. For all the problems, we present real-world examples and experiments that demonstrate the power of our model, the efficiency of our algorithms, and the usefulness of their results.
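The perturbation insight lends itself to a small illustration. In the sketch below, a window-based claim ("the average over [start, start+width] exceeds a threshold") is re-evaluated at nearby parameter settings; the claim form, data, and robustness measure are all illustrative assumptions, not the paper's actual definitions:

```python
# Sketch of the perturbation idea behind a Query Response Surface: vary a
# claim's parameters slightly and see how often its conclusion survives.
from statistics import mean

def claim_holds(series, start, width, threshold):
    """Parameterized claim: mean of series[start:start+width] > threshold."""
    window = series[start:start + width]
    return bool(window) and mean(window) > threshold

def robustness(series, start, width, threshold, wiggle=2):
    """Fraction of nearby (start, width) settings where the claim still
    holds; values near 0 suggest cherry-picked parameters."""
    results = []
    for ds in range(-wiggle, wiggle + 1):
        for dw in range(-wiggle, wiggle + 1):
            s, w = start + ds, width + dw
            if s >= 0 and w >= 1 and s + w <= len(series):
                results.append(claim_holds(series, s, w, threshold))
    return sum(results) / len(results)

# Hypothetical data: the claim holds only on a carefully chosen window.
data = [1, 1, 9, 9, 1, 1, 1, 1]
print(robustness(data, start=2, width=2, threshold=5))  # 0.25 -> suspicious
```

A fact-checker would read the low score as evidence that the claim's conclusion depends delicately on its chosen parameters, which is exactly the kind of subtle quality the QRS framework is designed to surface.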

Relevance: 60.00%

Abstract:

BACKGROUND: The Affordable Care Act encourages healthcare systems to integrate behavioral and medical healthcare, as well as to employ electronic health records (EHRs) for health information exchange and quality improvement. Pragmatic research paradigms that employ EHRs in research are needed to produce clinical evidence in real-world medical settings for informing learning healthcare systems. Adults with comorbid diabetes and substance use disorders (SUDs) tend to use costly inpatient treatments; however, there is a lack of empirical data on implementing behavioral healthcare to reduce health risk in adults with high-risk diabetes. Given the complexity of high-risk patients' medical problems and the cost of conducting randomized trials, a feasibility project is warranted to guide practical study designs. METHODS: We describe the study design, which explores the feasibility of implementing substance use Screening, Brief Intervention, and Referral to Treatment (SBIRT) among adults with high-risk type 2 diabetes mellitus (T2DM) within a home-based primary care setting. Our study includes the development of an integrated EHR datamart to identify eligible patients and collect diabetes healthcare data, and the use of a geographic health information system to understand the social context in patients' communities. Analysis will examine recruitment, proportion of patients receiving brief intervention and/or referrals, substance use, SUD treatment use, diabetes outcomes, and retention. DISCUSSION: By capitalizing on an existing T2DM project that uses home-based primary care, our study results will provide timely clinical information to inform the designs and implementation of future SBIRT studies among adults with multiple medical conditions.

Relevance: 60.00%

Abstract:

BACKGROUND: Patients, clinicians, researchers, and payers are seeking to understand the value of using genomic information (as reflected by genotyping, sequencing, family history, or other data) to inform clinical decision-making. However, challenges exist to widespread clinical implementation of genomic medicine, a prerequisite for developing evidence of its real-world utility. METHODS: To address these challenges, the National Institutes of Health-funded IGNITE (Implementing GeNomics In pracTicE; www.ignite-genomics.org ) Network, comprising six projects and a coordinating center, was established in 2013 to support the development, investigation, and dissemination of genomic medicine practice models that seamlessly integrate genomic data into the electronic health record and that deploy tools for point-of-care decision making. IGNITE site projects are aligned in their purpose of testing these models, but individual projects vary in scope and design, including exploring genetic markers for disease risk prediction and prevention, developing tools for using family history data, incorporating pharmacogenomic data into clinical care, refining disease diagnosis using sequence-based mutation discovery, and creating novel educational approaches. RESULTS: This paper describes the IGNITE Network and member projects, including network structure, collaborative initiatives, clinical decision support strategies, methods for returning genomic test results, and educational initiatives for patients and providers. Clinical and outcomes data from individual sites and network-wide projects are anticipated to begin to be published over the next few years. CONCLUSIONS: The IGNITE Network is an innovative series of projects and pilot demonstrations that aim to enhance the translation of validated, actionable genomic information into clinical settings, and to develop and use outcome measures for genome-based clinical interventions within a pragmatic framework, providing early data and proofs of concept on the utility of these interventions. Through these efforts and collaboration with other stakeholders, IGNITE is poised to have a significant impact on accelerating the adoption of genomic information in medical practice.

Relevance: 60.00%

Abstract:

Although the definition of single-program benchmarks is relatively straightforward (a benchmark is a program plus a specific input), the definition of multi-program benchmarks is more complex. Each program may have a different runtime, and the programs may interact differently depending on how they align with each other. While prior work has focused on sampling multi-program benchmarks, little attention has been paid to defining the benchmarks in their entirety. In this work, we propose a four-tuple that formally defines multi-program benchmarks in a well-defined way. We then examine how four different classes of benchmarks, created by varying the elements of this tuple, align with real-world use cases. We evaluate the impact of these variations on real hardware and see drastic variations in results between different benchmarks constructed from the same programs. Notable differences include significant speedups versus slowdowns (e.g., +57% vs. -5%, or +26% vs. -18%), and large differences in magnitude even when the results are in the same direction (e.g., 67% versus 11%).

Relevance: 60.00%

Abstract:

*Designated as an exemplary master's project for 2015-16*

This paper examines how contemporary literature contributes to the discussion of punitory justice. It uses close analysis of three contemporary novels, Margaret Atwood’s The Heart Goes Last, Hillary Jordan’s When She Woke, and Joyce Carol Oates’s Carthage, to deconstruct different conceptions of punitory justice. This analysis is framed and supported by relevant social science research on the concept of punitivity within criminal justice. Each section examines punitory justice at three levels: macro, where media messages and the predominant social conversation reside; meso, which involves penal policy and judicial process; and micro, which encompasses personal attitudes towards criminal justice. The first two chapters evaluate works by Atwood and Jordan, examining how their dystopian schemas of justice shed light on top-down and bottom-up processes of punitory justice in the real world. The third chapter uses a more realistic novel, Oates’s Carthage, to examine the ontological nature of punitory justice. It explores a variety of factors that give rise to and legitimize punitory justice, both at the personal level and within a broader cultural consensus. This chapter also discusses how both victim and perpetrator can come to stand in as metaphors to both represent and distract from broader social issues. As a whole, the analysis of these three novels illuminates how current and common conceptualizations of justice have little to do with the actual act of transgression itself. Instead, justice emerges as a set of specific, conditioned responses to perceived threats, mediated by complex social, cultural, and emotive forces.

Relevance: 60.00%

Abstract:

The manual effort required to convert sequential computational mechanics programs into a useful, scalable parallel form is considerable. Tools that can assist in the conversion process are clearly required. Computer Aided Parallelisation Tools (CAPTools) have been developed to generate efficient parallel code for real-world structured-grid application codes, such as those used in Computational Fluid Dynamics. Automatable single-program multiple-data (SPMD) overlapping domain decomposition (DD) techniques established for structured-grid codes have been adapted by the authors to manually parallelise unstructured mesh applications. Inspector loops have been used to provide generic techniques for the run-time support necessary to extend the capabilities of CAPTools to the automatic implementation of SPMD DD techniques in the parallelisation of unstructured mesh codes.
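The abstract names inspector loops without showing one; the sketch below is a generic inspector/executor pattern for the run-time support it alludes to, building a halo-exchange schedule for a partitioned unstructured mesh. All data structures and names here are illustrative assumptions, not CAPTools output (and CAPTools itself targets FORTRAN codes):

```python
# Sketch of the inspector/executor pattern used for run-time support in
# SPMD domain decomposition of unstructured meshes. The edge list and
# owner map are illustrative assumptions.

def inspect(edges, owner, my_rank):
    """Inspector: scan the indirection pattern once to find which remote
    nodes this partition reads, grouped by the partition that owns them."""
    halo = {}
    for a, b in edges:
        if owner[a] != my_rank and owner[b] != my_rank:
            continue  # this partition does not compute this edge
        for node in (a, b):
            if owner[node] != my_rank:
                halo.setdefault(owner[node], set()).add(node)
    return {rank: sorted(nodes) for rank, nodes in halo.items()}

def execute(edges, owner, my_rank, values):
    """Executor: the original loop body, safe to repeat every iteration
    once the halo values identified by the inspector are in place."""
    return sum(values[a] + values[b]
               for a, b in edges
               if owner[a] == my_rank or owner[b] == my_rank)

# Hypothetical two-partition mesh: nodes 0-2 on rank 0, nodes 3-4 on rank 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
owner = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1}
print(inspect(edges, owner, my_rank=0))   # {1: [3]}: fetch node 3 from rank 1
values = {n: 1.0 for n in owner}          # toy values; halo already filled in
print(execute(edges, owner, 0, values))   # 6.0 over edges (0,1),(1,2),(2,3)
```

The point of the split is that the inspector's cost is paid once, while the executor (and the communication schedule it relies on) runs every iteration of the solver.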

Relevance: 60.00%

Abstract:

The most common parallelisation strategy for many Computational Mechanics (CM) applications (typified by Computational Fluid Dynamics (CFD) applications) that use structured meshes involves a 1D partition based upon slabs of cells. However, many CFD codes employ pipeline operations in their solution procedure. For parallelised versions of such codes to scale well, they must employ two- (or more) dimensional partitions. This paper describes an algorithmic approach to multi-dimensional mesh partitioning in code parallelisation, its implementation in a toolkit for almost automatically transforming scalar codes to parallel form, and its testing on a range of ‘real-world’ FORTRAN codes. The concept of multi-dimensional partitioning is straightforward, but it is non-trivial to represent as a sufficiently generic algorithm that can be embedded in a code transformation tool. The results of the tests on these real-world codes demonstrate clear improvements in parallel performance and scalability over a 1D partition. This is matched by a huge reduction in the time required to develop the parallel versions relative to hand coding: from weeks or months down to hours or days.
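The arithmetic behind the 2D advantage is easy to sketch. Under a simple cost model in which each internal partition boundary contributes its length in cells to the communication volume (our assumption, not the toolkit's actual model), a square processor grid roughly halves the halo of a 1D slab partition:

```python
# Sketch: choosing a px x py processor grid for an nx x ny structured mesh
# to minimize total halo perimeter. The cost model is a simple assumption.
from math import inf

def best_grid(nx, ny, nprocs):
    """Return the (px, py) factorization of nprocs with the smallest
    total internal-boundary length, plus that cost."""
    best, best_cost = None, inf
    for px in range(1, nprocs + 1):
        if nprocs % px:
            continue
        py = nprocs // px
        # (px - 1) internal cuts of length ny, (py - 1) cuts of length nx.
        cost = (px - 1) * ny + (py - 1) * nx
        if cost < best_cost:
            best, best_cost = (px, py), cost
    return best, best_cost

# Hypothetical 1024 x 1024 mesh on 16 processors:
print(best_grid(1024, 1024, 16))          # ((4, 4), 6144): 2D partition
print((16 - 1) * 1024)                    # 15360: cost of 1D slabs
```

The non-trivial part, as the abstract notes, is not this arithmetic but expressing the partitioning generically enough to embed in a code transformation tool.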

Relevance: 60.00%

Abstract:

This paper presents a framework for Historical Case-Based Reasoning (HCBR) which allows the expression of both relative and absolute temporal knowledge, representing case histories in the real world. The formalism is founded on a general temporal theory that accommodates both points and intervals as primitive time elements. A case history is formally defined as a collection of (time-independent) elemental cases, together with its corresponding temporal reference. Case-history matching is two-fold, i.e., two similarity values need to be computed: the non-temporal similarity degree and the temporal similarity degree. On the one hand, based on elemental-case matching, the non-temporal similarity degree between case histories is defined by means of computing the unions and intersections of the involved elemental cases. On the other hand, by means of the graphical representation of temporal references, the temporal similarity degree in case-history matching is transformed into a conventional graph similarity measurement.
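One natural reading of the union/intersection definition, offered here purely as an assumption since the abstract does not give the paper's exact formula, is a Jaccard-style ratio over the elemental cases of the two histories:

```python
# Sketch: a union/intersection (Jaccard-style) non-temporal similarity
# between two case histories, each treated as a set of elemental cases.
# The real definition may weight partial elemental-case matches instead.

def non_temporal_similarity(history_a: set, history_b: set) -> float:
    """|A intersect B| / |A union B| over time-independent elemental cases."""
    if not history_a and not history_b:
        return 1.0
    return len(history_a & history_b) / len(history_a | history_b)

# Hypothetical case histories as sets of elemental-case identifiers:
h1 = {"fever", "rash", "headache"}
h2 = {"fever", "rash", "cough"}
print(non_temporal_similarity(h1, h2))  # 0.5
```

The temporal degree is computed separately, by mapping each history's temporal reference to a graph and measuring graph similarity, so the two values can be reported or combined independently.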

Relevance: 60.00%

Abstract:

This poster describes a "real-world" example of the teaching of Human-Computer Interaction at the final level of a Computer Science degree. It highlights many of the problems of the ever-expanding HCI domain and the consequent issues of what to teach and why. The poster describes the conception and development of a new HCI course, its historical background, the justification for decisions made, lessons learnt from its implementation, and questions arising from its implementation that are yet to be addressed. For example, should HCI be taught as a course in its own right or as a component of another course? At what level is the teaching of HCI appropriate, and how is teaching influenced by industry? It considers suitable learning pedagogies as well as the demands and the contribution of industry. The experiences presented will no doubt be familiar to many HCI educators. Whilst the poster raises more questions than it answers, the resolution of some of them will hopefully be achieved at the workshop.