4 results for average complexity

at Duke University


Relevance: 20.00%

Abstract:

Recent genomic analyses suggest that combinatorial regulation by broadly expressed transcription factors is more important than expression domains defined by highly specific factors.

Relevance: 20.00%

Abstract:

BACKGROUND: Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as the paper CRFs typically leveraged for quality measurement are not used in EDC processes.

METHODS AND PRINCIPAL FINDINGS: The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated a methodology for holistically assessing data quality in EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to the absence of medical record abstraction in the trials we examined, and to an outpatient setting characterized by less acute patient conditions.

CONCLUSIONS: Historically, medical record abstraction has been the most significant source of error, by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks.
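As a rough illustration of the metric above, a minimal Python sketch follows; it is not the trial network's actual tooling, and the audit counts are hypothetical, chosen only so the output matches the reported 14.3 errors per 10,000 fields.

```python
# Minimal sketch (not the study's tooling): compute a source-to-database
# error rate expressed per 10,000 fields from raw audit counts.

def error_rate_per_10k(errors: int, fields_audited: int) -> float:
    """Return errors per 10,000 fields audited."""
    if fields_audited <= 0:
        raise ValueError("fields_audited must be positive")
    return errors / fields_audited * 10_000

# Hypothetical audit: 143 discrepancies found across 100,000 audited fields
rate = error_rate_per_10k(errors=143, fields_audited=100_000)
print(f"{rate:.1f} errors per 10,000 fields")  # -> 14.3 errors per 10,000 fields
```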

Relevance: 20.00%

Abstract:

BACKGROUND: Consent forms have lengthened over time and become harder for participants to understand. We sought to demonstrate the feasibility of creating a simplified consent form for biobanking that comprises the minimum information necessary to meet ethical and regulatory requirements. We then gathered preliminary data concerning its content from hypothetical biobank participants.

METHODOLOGY/PRINCIPAL FINDINGS: We followed basic principles of plain-language writing and incorporated into a 2-page form (not including the signature page) those elements of information required by federal regulations and recommended by best-practice guidelines for biobanking. We then recruited diabetes patients from community-based practices and randomized half (n = 56) to read the 2-page form, first on paper and then a second time on a tablet computer. Participants were encouraged to use "More information" buttons on the electronic version whenever they had questions or desired further information. These buttons led to a series of "Frequently Asked Questions" (FAQs) that contained additional detailed information. Participants were asked to identify specific sentences in the FAQs they thought would be important if they were considering taking part in a biorepository. On average, participants identified 7 FAQ sentences as important (mean 6.6, SD 14.7, range: 0-71). No single sentence was highlighted by a majority of participants; further, 34 participants (60.7%) did not highlight any FAQ sentences.

CONCLUSIONS: Our preliminary findings suggest that our 2-page form contains the information that most prospective participants identify as important. Combining simplified forms with supplemental material for those participants who desire more information could help minimize consent form length and complexity, allowing the most substantively material information to be better highlighted and enabling potential participants to read the form and ask questions more effectively.
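The summary statistics reported above (mean, SD, range, and the share of participants who highlighted nothing) can be reproduced from per-participant highlight counts. The short Python sketch below uses made-up counts purely for illustration; it is not the study's data or analysis code.

```python
# Illustrative only: the counts below are hypothetical, not study data.
import statistics

highlight_counts = [0, 0, 0, 3, 7, 0, 12, 0, 1, 0, 71, 0]  # sentences highlighted per participant

mean = statistics.mean(highlight_counts)
sd = statistics.stdev(highlight_counts)            # sample standard deviation
lo, hi = min(highlight_counts), max(highlight_counts)
zero_share = highlight_counts.count(0) / len(highlight_counts)

print(f"mean={mean:.1f}, SD={sd:.1f}, range={lo}-{hi}, no highlights={zero_share:.1%}")
```

A large SD relative to the mean, as in the study, reflects a skewed distribution: most participants highlight few or no sentences while a few highlight many.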

Relevance: 20.00%

Abstract:

Both stimulus and response conflict can disrupt behavior by slowing response times and decreasing accuracy. Although several neural activations have been associated with conflict processing, it is unclear how specific any of these are to the type of stimulus conflict or the amount of response conflict. Here, we recorded electrical brain activity while manipulating the type of stimulus conflict in the task (spatial [Flanker] versus semantic [Stroop]) and the amount of response conflict (two versus four response choices). Behaviorally, responses were slower to incongruent versus congruent stimuli across all task and response types, along with overall slowing for higher response-mapping complexity. The earliest incongruency-related neural effect was a short-duration, frontally distributed negativity at ~200 ms that was present only in the Flanker spatial-conflict task. At longer latencies, the classic fronto-central incongruency-related negativity 'N(inc)' was observed for all conditions, but was larger and ~100 ms longer in duration with more response options. Further, the onset of the motor-related lateralized readiness potential (LRP) was earlier for the two- versus four-response sets, indicating that smaller response sets enabled faster motor-response preparation. The late positive complex (LPC) was present in all conditions except the two-response Stroop task, suggesting that this late conflict-related activity is not specifically tied to task type or response-mapping complexity. Importantly, across tasks and conditions, the LRP onset occurred at or before the conflict-related N(inc), indicating that motor preparation is a rapid, automatic process that interacts with conflict-detection processes after it has begun. Together, these data highlight how different conflict-related processes operate in parallel and depend on both the cognitive demands of the task and the number of response options.