941 results for cost-informed process execution
Abstract:
Retinal detachment repair is a common ophthalmologic procedure, and its outcome is typically measured by a single factor: improvement in visual acuity. Health-related functional outcome testing, which quantifies a patient's self-reported perception of impairment, can be integrated with objective clinical findings. Based on the patient's self-assessed lifestyle impairment, the physician and patient together can make an informed decision on the treatment most likely to benefit the patient. A functional outcome test (the Houston Vision Assessment Test-Retina; HVAT-Retina) was developed and validated in patients with multiple retinal detachments in the same eye. The HVAT-Retina divides an estimated total impairment into subcomponents: the contribution of visual disability (potentially correctable by retinal detachment surgery) and that of nonvisual physical disabilities (co-morbidities not affected by retinal detachment surgery). Seventy-six patients participated in this prospective multicenter study; seven were excluded from the analysis because they were not certain of their answers. Cronbach's alpha coefficient was 0.91 for the pre-surgery HVAT-Retina and 0.94 post-surgery. Item-to-total correlations ranged from 0.50 to 0.88. The visual impairment score improved by 9 points from pre-surgery (p = 0.0003), and the physical impairment score also improved from pre-surgery (p = 0.0002). In conclusion, the results of this study demonstrate that the instrument is reliable and valid in patients presenting with recurrent retinal detachments. The HVAT-Retina is simple and does not burden the patient or the health professional in terms of time or cost; it may be self-administered, not requiring an interviewer. Because the HVAT-Retina was designed to demonstrate outcomes perceivable by the patient, it has the potential to guide shared decision making between patient and physician.
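The reliability figures quoted above (0.91 and 0.94) are Cronbach's alpha coefficients, computed from the covariation of the questionnaire items. A minimal sketch of the standard calculation, run on made-up item scores rather than HVAT-Retina data:

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix."""
    k = item_scores.shape[1]                          # number of items
    item_vars = item_scores.var(axis=0, ddof=1)       # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: 5 respondents answering 4 items on a 1-5 scale.
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [4, 4, 4, 5],
])
print(round(cronbach_alpha(scores), 2))
```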
Abstract:
With its turbulent and volatile legal evolution, the right to an abortion in the United States remains a highly contested issue and has developed into one of the most divisive topics within modern legal discourse. By deconstructing the political underpinnings and legal rationale of the right to an abortion through a systematic case law analysis, I will demonstrate that this right has been incrementally destabilized. This instability embedded in abortion jurisprudence has been primarily produced by a combination of textual ambiguity in the case law and judicial ambivalence regarding this complex area of law. In addition, I argue that the use of the largely discredited substantive due process doctrine to ground this contentious right has also contributed to the lack of legal stability. I assert that when these elements culminate in the realm of reproductive privacy, the right to terminate a pregnancy becomes increasingly unstable and contested.
Abstract:
Institutional Review Boards (IRBs) are the primary gatekeepers for the protection of ethical standards of federally regulated research on human subjects in this country. This paper focuses on the general, broad measures that may be instituted or enhanced to exemplify a "model IRB". This is done by examining the current regulatory standards of federally regulated IRBs (not private or commercial boards) and how many of those standards have been found either inadequate or not generally understood or followed. The analysis includes suggestions on how to bring about changes that would make the IRB process more efficient and less subject to litigation, and that would create standardized educational protocols for members. The paper also considers how to provide better oversight for multi-center research, increased centralization of IRBs, utilization of Data Safety Monitoring Boards when necessary, payment for research protocol review, voluntary accreditation, and the institution of evaluation/quality assurance programs. This is a policy study utilizing secondary analysis of publicly available data. The research for this paper therefore draws on scholarly medical/legal journals; web information from the Department of Health and Human Services, the Food and Drug Administration, the Office of the Inspector General, and accreditation programs; law review articles; and the current regulations applicable to the relevant portions of the paper. Two issues are consistently cited in the literature as major concerns. The first is the need for basic, standardized educational requirements across all IRBs and their members; the second is much stricter and more informed management of continuing research. There is no federally regulated formal education system currently in place for IRB members, except for certain NIH-based trials. IRBs are also not keeping up with research once a study has begun; although they are required by regulation to do so, it does not appear to be a high priority. This is the area most in danger of increased litigation. Other issues, such as voluntary accreditation and outcomes evaluation, are slowly gaining steam as the processes become more available and more sought after, much as JCAHO accreditation has for hospitals. Adopting the principles discussed in this paper should promote better use of a local IRB's time, money, and expertise in protecting the vulnerable population in its care. Without further improvements to the system, there is concern that private and commercial IRBs will attempt to create a monopoly on much of the clinical research in the future, as they are not as heavily regulated and can therefore offer companies quicker and more convenient reviews. IRBs need to consider the advantages of charging for their unique and important services as a cost of doing business. More importantly, there must be a minimum standard of education for all IRB members in the ethical standards of human research, and a greater emphasis placed on the follow-up of ongoing research, as this is the most critical time for study participants and may soon become the largest area for litigation. Additionally, there should be a centralized IRB for multi-site trials, or a study website with important information affecting the trial in real time. Standards and metrics also need to be developed to assess the performance of IRBs for quality assurance and outcome evaluation. The boards should not be content to run the business of human subjects' research without determining how well that function is actually being carried out. It is important that federally regulated IRBs provide excellence in human research and promote those values most important to the public at large.
Abstract:
Colorectal cancer (CRC) has become a public health concern due to the underutilization of the various screening methods. There is a need to understand a patient's decision-making process with regard to their health and obtaining the appropriate screening. Previous research has defined patient autonomy in two dimensions: the patient's involvement in the decision-making process and their desire to be informed (Ende, Kazis, Ash, & Moskowitz, 1989). Past research shows that patients have a high desire to be informed, but a low desire to be involved in the medical decision process. Deber, Kraetschmer, and Irvine (1996) developed a measure consisting of two subscales that assess patients' desired involvement: in problem solving (PS) and in decision making (DM). Little research has examined the desire for involvement and decision making among Latino populations. The present study sought to investigate the psychometric properties of the Deber et al. (1996) measure. In general, Latino patients in the present sample had a low desire for autonomy in health decisions and little desire to be involved in the decision-making process for their health-related issues.
Abstract:
Background. Childhood immunization programs have dramatically reduced the morbidity and mortality associated with vaccine-preventable diseases. Proper documentation of the immunizations that have been administered is essential to prevent duplicate immunization of children. To help improve documentation, immunization information systems (IISs) have been developed. IISs are comprehensive repositories of immunization information for children residing within a geographic region. The two models for participation in an IIS are voluntary inclusion ("opt-in") and voluntary exclusion ("opt-out"). In an opt-in system, consent must be obtained for each participant; conversely, in an opt-out IIS, all children are included unless procedures to exclude the child are completed. Consent requirements for participation vary by state; the Texas IIS, ImmTrac, is an opt-in system. Objectives. The specific objectives are to: (1) evaluate the variance in the time and costs associated with collecting ImmTrac consent at public and private birthing hospitals in the Greater Houston area; (2) estimate the total costs associated with collecting ImmTrac consent at selected public and private birthing hospitals in the Greater Houston area; and (3) describe the alternative opt-out process for collecting ImmTrac consent at birth and discuss the associated cost savings relative to an opt-in system. Methods. Existing time-motion studies (n=281) conducted between October 2006 and August 2007 at 8 birthing hospitals in the Greater Houston area were used to assess the time and costs associated with obtaining ImmTrac consent at birth. All data analyzed are deidentified and contain no personal information. Variations in time and costs at each location were assessed, and total costs per child and costs per year were estimated. The cost of an alternative opt-out system was also calculated. Results. The median time required by birth registrars to complete consent procedures varied from 72 to 285 seconds per child. The annual costs associated with obtaining consent for 388,285 newborns in ImmTrac's opt-in consent process were estimated at $702,000. The corresponding costs of the proposed opt-out system were estimated to total $194,000 per year. Conclusions. Substantial variation in the time and costs associated with completion of ImmTrac consent procedures was observed. Changing to an opt-out system for participation could represent significant cost savings.
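The annual cost estimates above come down to simple arithmetic: staff time per child multiplied by a loaded wage rate and the number of births covered. A minimal sketch of that estimate, where the hourly rate and the per-site times fed in are assumed placeholders rather than the study's actual cost parameters:

```python
# Back-of-the-envelope estimate of annual consent-collection cost.
# The wage rate below is an illustrative assumption, not the study's figure.

def annual_consent_cost(seconds_per_child: float,
                        hourly_wage: float,
                        births_per_year: int) -> float:
    """Cost of staff time spent collecting consent for one year of births."""
    hours_per_child = seconds_per_child / 3600.0
    return hours_per_child * hourly_wage * births_per_year

births = 388_285          # annual newborns covered (from the abstract)
wage = 30.0               # assumed loaded hourly rate for a birth registrar

for label, secs in [("fastest site", 72), ("slowest site", 285)]:
    cost = annual_consent_cost(secs, wage, births)
    print(f"{label}: ${cost:,.0f} per year")
```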
Abstract:
The objectives of this dissertation were to evaluate the health outcomes, quality improvement measures, and long-term cost-effectiveness and impact on diabetes-related microvascular and macrovascular complications of a community health worker-led, culturally tailored diabetes education and management intervention provided to uninsured Mexican Americans in an urban faith-based clinic. A prospective, randomized controlled repeated-measures design was employed to compare the intervention effects between: (1) an intervention group (n=90) that participated in the Community Diabetes Education (CoDE) program along with usual medical care; and (2) a wait-listed comparison group (n=90) that received only usual medical care. Changes in hemoglobin A1c (HbA1c) and secondary outcomes (lipid status, blood pressure, and body mass index) were assessed using linear mixed models and an intention-to-treat approach. The CoDE group experienced a greater reduction in HbA1c (-1.6%, p<.001) than the control group (-.9%, p<.001) over the 12-month study period. After adjusting for the group-by-time interaction, antidiabetic medication use at baseline, changes made to the antidiabetic regimen over the study period, duration of diabetes, and baseline HbA1c, a statistically significant intervention effect on HbA1c (-.7%, p=.02) was observed for CoDE participants. Process and outcome quality measures were evaluated using multiple mixed-effects logistic regression models. Assessment of quality indicators revealed that the CoDE intervention group was significantly more likely to have received a dilated retinal examination than the control group, and 53% achieved an HbA1c below 7% compared with 38% of control group subjects. Long-term cost-effectiveness and impact on diabetes-related health outcomes were estimated through simulation modeling using the rigorously validated Archimedes Model. Over a 20-year time horizon, CoDE participants were forecast to have less proliferative diabetic retinopathy, fewer foot ulcers, and fewer foot amputations than control group subjects who received usual medical care. An incremental cost-effectiveness ratio of $355 per quality-adjusted life-year gained was estimated for CoDE intervention participants over the same time period. The results from the three areas of program evaluation (impact on short-term health outcomes, quantification of improvement in the quality of diabetes care, and projection of long-term cost-effectiveness and impact on diabetes-related health outcomes) provide evidence that a community health worker can be a valuable resource for reducing diabetes disparities among uninsured Mexican Americans. This evidence supports the formal integration of community health workers as members of the diabetes care team.
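The $355-per-QALY figure is an incremental cost-effectiveness ratio (ICER): the extra cost of the intervention divided by the extra quality-adjusted life-years it produces relative to usual care. A minimal sketch of the calculation, with made-up cost and QALY inputs rather than the Archimedes Model outputs:

```python
def icer(cost_intervention: float, cost_control: float,
         qaly_intervention: float, qaly_control: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars per QALY gained."""
    delta_cost = cost_intervention - cost_control
    delta_qaly = qaly_intervention - qaly_control
    if delta_qaly == 0:
        raise ValueError("No QALY difference; ICER is undefined.")
    return delta_cost / delta_qaly

# Illustrative 20-year per-patient figures (placeholders, not study data).
print(round(icer(cost_intervention=21_300, cost_control=21_100,
                 qaly_intervention=11.93, qaly_control=11.37), 0))
# -> roughly 357 dollars per QALY gained with these made-up inputs
```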
Abstract:
Due to the ever-increasing demand for petroleum and the large number of spills, accidents, and disasters, there is an urgent need for an effective, low-cost, and harmless method of cleaning up the affected areas. Microorganisms exist in nature (mainly bacteria and fungi) that feed on hydrocarbons and transform them into other, harmless chemical substances. These bacteria produce enzymes that degrade oil very effectively. This natural process can be accelerated by adding more bacteria or by providing nutrients and oxygen to facilitate their growth, approaches known as bioaugmentation and biostimulation. Through this project we found that these processes can be affected by many different factors, which complicates biodegradation in practice and opens a gap between laboratory experiments and real cases. Much therefore remains to be done, and considerable study lies ahead, before this technique can be applied at large scale.
Abstract:
Modern FPGAs with run-time reconfiguration allow the implementation of complex systems offering both the flexibility of software-based solutions and the performance of hardware. This combination of characteristics, together with the development of new specific methodologies, makes it feasible to reach new points of the system design space, and embedded systems built on these platforms are acquiring more and more importance. However, the practical exploitation of this technique in fields that have traditionally relied on resource-restricted embedded systems is mainly limited by strict power consumption requirements, cost, and the high dependence of dynamic and partial reconfiguration (DPR) techniques on the specific features of the underlying device technology. In this work, we tackle these problems by designing a reconfigurable platform based on the low-cost, low-power Spartan-6 FPGA family. The full process of developing the platform from scratch is detailed in the paper. In addition, the implementation of the reconfiguration mechanism, including two profiles, is reported. The first profile is a low-area, low-speed reconfiguration engine based mainly on software functions running on the embedded processor, while the other is a hardware version of the same engine implemented in the FPGA logic. This reconfiguration hardware block was originally designed for the Virtex-5 family, and its porting process is also described in this work, addressing the interoperability problem among different families.
Abstract:
Verifying compliance with a design specification in manufacturing requires metrological instruments to check whether the magnitude associated with the specification falls within the tolerance range. Such instrumentation, and its use during the measurement process, carries a measurement uncertainty whose value must be related to the tolerance being tested. Most papers dealing jointly with tolerances and measurement uncertainties focus on establishing an uncertainty-tolerance relationship without paying much attention to the impact from the standpoint of process cost. This paper analyzes the cost of measurement uncertainty, treating uncertainty as a productive factor in the process outcome. Starting from a cost-tolerance model associated with the process, the effect of measurement uncertainty is quantified in terms of cost and its impact on the process is analyzed.
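One common way to couple the two quantities is to start from a cost-tolerance curve and treat measurement uncertainty as a guard band that shrinks the usable acceptance zone, so larger uncertainty translates into a tighter effective tolerance and hence a higher cost. A minimal sketch of that idea, where the exponential cost-tolerance form and all coefficients are illustrative assumptions rather than the model used in the paper:

```python
import math

def manufacturing_cost(tolerance_mm: float, a: float = 1.0,
                       b: float = 8.0, c: float = 40.0) -> float:
    """Assumed exponential cost-tolerance curve: tighter tolerance costs more."""
    return a + b * math.exp(-c * tolerance_mm)

def effective_tolerance(tolerance_mm: float, uncertainty_mm: float,
                        k: float = 2.0) -> float:
    """Guard-banded tolerance: the acceptance zone shrinks by k*U on each side."""
    return max(tolerance_mm - 2 * k * uncertainty_mm, 1e-6)

T = 0.10                              # design tolerance, mm
for U in (0.000, 0.005, 0.010):       # candidate measurement uncertainties, mm
    cost = manufacturing_cost(effective_tolerance(T, U))
    print(f"U = {U:.3f} mm -> effective cost {cost:.2f}")
```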
Abstract:
Abstract machines provide a certain separation between platform-dependent and platform-independent concerns in compilation. Many of the differences between architectures are encapsulated in the specific abstract machine implementation, and the bytecode is left largely architecture independent. Taking advantage of this fact, we present a framework for estimating upper and lower bounds on the execution times of logic programs running on a bytecode-based abstract machine. Our approach includes a one-time, program-independent profiling stage which calculates constants or functions bounding the execution time of each abstract machine instruction. Then, a compile-time cost estimation phase, using the instruction timing information, infers expressions giving platform-dependent upper and lower bounds on actual execution time as functions of input data sizes for each program. Working at the abstract machine level makes it possible to take into account low-level issues in new architectures and platforms by just re-executing the calibration stage instead of having to tailor the analysis for each architecture and platform. Applications of such predicted execution times include debugging/verification of time properties, certification of time properties in mobile code, granularity control in parallel/distributed computing, and resource-oriented specialization.
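The composition step is easy to picture: once profiling has bounded the time of each abstract machine instruction on a given platform, a program's time bounds follow from multiplying those per-instruction bounds by statically inferred instruction counts. A minimal sketch of that step, where the instruction names, timings, and counts are invented for illustration:

```python
# Hypothetical per-instruction timing bounds (nanoseconds) obtained from a
# one-time profiling/calibration run on a concrete platform.
instr_time_ns = {
    "get_constant": (2.0, 3.5),    # (lower bound, upper bound)
    "put_value":    (1.5, 2.0),
    "call":         (8.0, 12.0),
    "proceed":      (3.0, 4.0),
}

# Hypothetical static-analysis result: how many times each instruction is
# executed as a function of the input size n (say, a list of length n).
def instr_counts(n: int) -> dict[str, int]:
    return {"get_constant": n + 1, "put_value": n, "call": n, "proceed": n + 1}

def time_bounds_ns(n: int) -> tuple[float, float]:
    """Compose per-instruction timing bounds with instruction counts."""
    counts = instr_counts(n)
    lower = sum(counts[i] * instr_time_ns[i][0] for i in counts)
    upper = sum(counts[i] * instr_time_ns[i][1] for i in counts)
    return lower, upper

print(time_bounds_ns(1000))   # estimated (lower, upper) execution time in ns
```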
Abstract:
Effective static analyses have been proposed which infer bounds on the number of resolutions. These have the advantage of being independent of the platform on which the programs are executed, and they have been shown to be useful in a number of applications, such as granularity control in parallel execution. On the other hand, in distributed computation scenarios where platforms with different capabilities come into play, it is necessary to express costs in metrics that include the characteristics of the platform. In particular, it is especially interesting to be able to infer upper and lower bounds on actual execution times. With this objective in mind, we propose an approach which combines compile-time analysis for cost bounds with a one-time profiling of a given platform in order to determine the values of certain parameters for that platform. These parameters calibrate a cost model which, from then on, is able to statically compute time-bound functions for procedures and to predict with a significant degree of accuracy the execution times of such procedures on that concrete platform. The approach has been implemented and integrated in the CiaoPP system.
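One way to picture the calibration step: run a set of benchmark goals on the target platform, record both the analysis-level cost (e.g., number of resolutions) and the measured time, and fit the parameters of a model relating the two; the fitted model then translates statically inferred resolution bounds into time bounds. A minimal sketch under an assumed linear cost model, with made-up profiling data rather than the actual CiaoPP calibration procedure:

```python
import numpy as np

# Hypothetical profiling data for one platform: statically counted resolutions
# vs. measured execution time (microseconds) for a few calibration benchmarks.
resolutions = np.array([100, 500, 1_000, 5_000, 10_000])
measured_us = np.array([14.0, 61.0, 118.0, 575.0, 1_140.0])

# Fit a simple linear cost model: time ~= alpha + beta * resolutions.
beta, alpha = np.polyfit(resolutions, measured_us, deg=1)

def predict_time_us(resolution_bound: float) -> float:
    """Translate an analysis-level resolution bound into a time bound."""
    return alpha + beta * resolution_bound

# Example: a procedure whose upper bound is 2*n + 3 resolutions, for n = 400.
print(round(predict_time_us(2 * 400 + 3), 1))
```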
Abstract:
Statically predicting the running time of programs has many applications, ranging from task scheduling in parallel execution to proving the ability of a program to meet strict time constraints. A starting point for attacking this problem is to infer the computational complexity of such programs (or fragments thereof). This is one of the reasons why the development of static analysis techniques for inferring cost-related properties of programs (usually upper and/or lower bounds on actual costs) has received considerable attention.
Abstract:
It is generally recognized that information about the runtime cost of computations can be useful for a variety of applications, including program transformation, granularity control during parallel execution, and query optimization in deductive databases. Most of the work to date on compile-time cost estimation of logic programs has focused on the estimation of upper bounds on costs. However, in many applications, such as parallel implementations on distributed-memory machines, one would prefer to work with lower bounds instead. The problem with estimating lower bounds is that, in general, it is necessary to account for the possibility of failure of head unification, leading to a trivial lower bound of 0. In this paper, we show how, given type and mode information about procedures in a logic program, it is possible to (semi-automatically) derive nontrivial lower bounds on their computational costs. We also discuss the cost analysis for the special and frequent case of divide-and-conquer programs and show how, as a pragmatic short-term solution, it may be possible to obtain useful results simply by identifying and treating divide-and-conquer programs specially.
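For the divide-and-conquer case, a nontrivial lower bound can be obtained by composing lower bounds for the divide, recursive, and combine phases in a recurrence instead of collapsing to zero whenever failure is possible. A minimal sketch that evaluates such a recurrence numerically, where the phase costs and the recurrence itself are invented for illustration rather than taken from the analysis in the paper:

```python
from functools import lru_cache

# Hypothetical lower-bound costs (in resolution steps) for each phase of a
# divide-and-conquer predicate operating on an input of size n.
def divide_lb(n: int) -> int:   return n        # e.g., splitting a list
def combine_lb(n: int) -> int:  return n        # e.g., merging partial results
BASE_LB = 1                                      # cost of the base case

@lru_cache(maxsize=None)
def cost_lb(n: int) -> int:
    """Lower bound on cost: Cost(n) >= divide(n) + 2*Cost(n//2) + combine(n)."""
    if n <= 1:
        return BASE_LB
    return divide_lb(n) + 2 * cost_lb(n // 2) + combine_lb(n)

for n in (8, 64, 1024):
    print(n, cost_lb(n))   # grows roughly like n*log2(n), as expected
```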
Abstract:
We propose a computational methodology, "B-LOG", which offers the potential for an effective implementation of Logic Programming on a parallel computer. We also propose a weighting scheme to guide the search process through the graph, and we apply the concepts of parallel "branch and bound" algorithms in order to perform a "best-first" search using an information-theoretic bound. The concept of a "session" is used to speed up the search process in a succession of similar queries. Within a session, we strongly modify the bounds in a local database, while bounds kept in a global database are weakly modified to provide a better initial condition for other sessions. We also propose an implementation scheme based on a database machine using "semantic paging", and the "B-LOG processor" based on a scoreboard-driven controller.
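The control strategy described, best-first exploration guided by bounds in the branch-and-bound style, can be pictured as a loop around a priority queue keyed on each node's bound. A minimal sketch of that loop, where the node representation, bound function, and goal test are invented placeholders rather than the B-LOG processor's actual design:

```python
import heapq

def best_first_search(start, expand, bound, is_goal):
    """Branch-and-bound best-first search over a search graph.

    expand(node)  -> iterable of successor nodes
    bound(node)   -> optimistic cost estimate used to order exploration
    is_goal(node) -> True when a solution node is reached
    """
    frontier = [(bound(start), 0, start)]
    best_cost, best_node = float("inf"), None
    counter = 1                               # tie-breaker for the heap
    while frontier:
        b, _, node = heapq.heappop(frontier)
        if b >= best_cost:                    # prune: bound cannot improve best
            continue
        if is_goal(node):
            best_cost, best_node = b, node
            continue
        for child in expand(node):
            heapq.heappush(frontier, (bound(child), counter, child))
            counter += 1
    return best_node, best_cost

# Toy usage: find the cheapest path of length 3 with branching factor 2.
node0 = (0.0, ())                             # (accumulated cost, choices)
succ = lambda n: [(n[0] + c, n[1] + (c,)) for c in (1.0, 2.0)]
print(best_first_search(node0, succ,
                        bound=lambda n: n[0],
                        is_goal=lambda n: len(n[1]) == 3))
```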
Abstract:
The advantages of tabled evaluation regarding program termination and reduction of complexity are well known, as are the significant implementation, portability, and maintenance efforts that some proposals (especially those based on suspension) require. This implementation effort is reduced by program transformation-based continuation call techniques, at some efficiency cost. However, the traditional formulation of this proposal by Ramesh and Cheng limits the interleaving of tabled and non-tabled predicates and thus cannot be used as-is for arbitrary programs. In this paper we present a complete translation for the continuation call technique which, using the runtime support needed for the traditional proposal, solves these problems and makes it possible to execute arbitrary tabled programs. We present performance results which show that CCall offers a useful tradeoff that can be competitive with state-of-the-art implementations.
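At its core, tabled evaluation memoizes calls to tabled predicates: the first occurrence of a call computes and records its answers in a table, and later variant calls consume the stored answers instead of recomputing them (and possibly looping). A minimal sketch of that call/answer-table idea, illustrating tabling in general rather than the CCall transformation itself:

```python
# Edges of a small directed graph containing a cycle; plain depth-first
# resolution of reach/2 would loop here, while a call/answer table lets
# evaluation terminate with the complete set of answers.
edges = {"a": ["b"], "b": ["c", "a"], "c": []}

def tabled_reach(start: str) -> set[str]:
    """Answers for reach(start, X), computed with an answer table (fixpoint)."""
    answers: set[str] = set()           # answer table for this call
    pending = [start]                   # calls whose edges still need exploring
    seen = {start}                      # call table: repeated variants suspend
    while pending:
        node = pending.pop()
        for nxt in edges.get(node, []):
            answers.add(nxt)            # record a (possibly new) answer
            if nxt not in seen:
                seen.add(nxt)
                pending.append(nxt)     # schedule the subsidiary call once
    return answers

print(sorted(tabled_reach("a")))        # -> ['a', 'b', 'c']
```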