Abstract:
Call control features (e.g., call-divert, voice-mail) are primitive options to which users can subscribe off-line to personalise their service. The configuration of a feature subscription involves choosing and sequencing features from a catalogue and is subject to constraints that prevent undesirable feature interactions at run-time. When the subscription requested by a user is inconsistent, one problem is to find an optimal relaxation, which is a generalisation of the feedback vertex set problem on directed graphs, and thus it is an NP-hard task. We present several constraint programming formulations of the problem. We also present formulations using partial weighted maximum Boolean satisfiability and mixed integer linear programming. We study all these formulations by experimentally comparing them on a variety of randomly generated instances of the feature subscription problem.
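As an illustration of the combinatorial problem at the heart of this abstract, the sketch below gives one possible mixed integer linear programming model of the weighted feedback vertex set relaxation, written with the PuLP library. It is not one of the paper's formulations; the toy features, precedence arcs and weights are invented for illustration.

```python
# A minimal sketch (assumed model, not the paper's): drop a minimum-weight set
# of features so that the remaining precedence graph is acyclic, i.e. solve a
# weighted feedback vertex set instance as a MILP with PuLP.
import pulp

def optimal_relaxation(vertices, arcs, weight):
    n = len(vertices)
    prob = pulp.LpProblem("optimal_relaxation", pulp.LpMinimize)
    # x[v] = 1 if feature v is dropped from the subscription.
    x = {v: pulp.LpVariable(f"drop_{v}", cat="Binary") for v in vertices}
    # p[v] = position of v in a topological order over the kept features.
    p = {v: pulp.LpVariable(f"pos_{v}", lowBound=0, upBound=n - 1) for v in vertices}

    # Objective: minimise the total weight of the dropped features.
    prob += pulp.lpSum(weight[v] * x[v] for v in vertices)

    # For each precedence arc (u, v): if both endpoints are kept, u must come
    # strictly before v; this forces the kept subgraph to be acyclic.
    for u, v in arcs:
        prob += p[u] + 1 <= p[v] + n * (x[u] + x[v])

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [v for v in vertices if pulp.value(x[v]) < 0.5]

# Toy instance: three features whose precedence constraints form a cycle,
# so at least one of them has to be dropped.
print(optimal_relaxation(["a", "b", "c"],
                         [("a", "b"), ("b", "c"), ("c", "a")],
                         {"a": 1, "b": 1, "c": 1}))
```

The position variables simply encode a topological order, so the constraints are satisfiable exactly when the kept features induce an acyclic precedence graph.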
Abstract:
This paper presents the findings from an innovative project funded by the International Association of Schools of Social Work (IASSW) and undertaken by an international team of academics investigating the development of a global curriculum for social work in the context of political conflict. Alongside the emerging research and literature on the subject, our small-scale survey findings indicate support for the need for social work educators to address political conflict more systematically within social work curricula at both undergraduate and post-qualifying levels of social work education. The paper illuminates the opportunities for creative pedagogy whilst also examining the threats and challenges permeating the realisation of such initiatives. In this way, the implementation of a proposed curriculum for political conflict is given meaning within the context of IASSW’s Global Standards for social work education. Given the exploratory nature of this project, the authors conclude that further research is warranted with regard to potential curriculum development and suggest using a comparative case study approach with more in-depth qualitative methods as a way to address this.
Abstract:
The conjunction fallacy has been cited as a classic example of the automatic contextualisation of problems. In two experiments we compared the performance of autistic and typically developing adolescents on a set of conjunction fallacy tasks. Participants with autism were less susceptible to the conjunction fallacy. Experiment 2 also demonstrated that the difference between the groups did not result from increased sensitivity to the conjunction rule, or from impaired processing of social materials amongst the autistic participants. Although adolescents with autism showed less bias in their reasoning they were not more logical than the control group in a normative sense. The findings are discussed in the light of accounts which emphasise differences in contextual processing between typical and autistic populations.
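For reference, the rule at issue is the elementary conjunction rule of probability theory (standard background, not a result of the paper): a conjunction can never be more probable than either of its conjuncts,

$$P(A \cap B) \le \min\{P(A),\, P(B)\},$$

so rating "A and B" as more likely than "A" alone is the normative error known as the conjunction fallacy.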
Abstract:
An appreciation of the quantity of streamflow derived from the main hydrological pathways involved in transporting diffuse contaminants is critical when addressing a wide range of water resource management issues. In order to assess hydrological pathway contributions to streams, it is necessary to provide feasible upper and lower bounds for flows in each pathway. An important first step in this process is to provide reliable estimates of the slower responding groundwater pathways and subsequently the quicker overland and interflow pathways. This paper investigates the effectiveness of a multi-faceted approach that applies different hydrograph separation techniques, supplemented by lumped hydrological modelling, to calculate the Baseflow Index (BFI), with the aim of developing an integrated approach to hydrograph separation. A semi-distributed, lumped and deterministic rainfall-runoff model known as NAM has been applied to ten catchments (ranging from 5 to 699 km²). While this modelling approach is useful as a validation method, NAM itself is also an important tool for investigation. The separation techniques produce a large variation in BFI: a difference of 0.741 was predicted for one catchment when the less reliable fixed-interval, sliding-interval and local-minimum turning-point methods were included, and this variation is reduced to 0.167 when these methods are omitted. The Boughton and Eckhardt algorithms, while quite subjective in their use, provide quick and easily implemented approaches for obtaining physically realistic hydrograph separations. Although the different separation techniques give varying BFI values for each of the catchments, a recharge coefficient approach developed in Ireland, applied in conjunction with the master recession curve tabulation method, predicts estimates in agreement with those obtained using the NAM model, and these estimates are also consistent with the study catchments’ geology. These two separation methods, in conjunction with the NAM model, were therefore selected to form an integrated approach to assessing BFI in catchments.
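As a concrete illustration of one family of separation techniques mentioned above, the sketch below implements the Eckhardt recursive digital filter for baseflow separation and the resulting BFI. The recession constant, BFImax and the toy hydrograph are illustrative assumptions, not values from the study.

```python
# A minimal sketch of the Eckhardt recursive digital filter for baseflow
# separation. Parameter values (a, bfi_max) and the toy hydrograph are
# illustrative only.
def eckhardt_baseflow(q, a=0.98, bfi_max=0.80):
    """q: total streamflow series; returns the filtered baseflow series."""
    b = [q[0]]  # initialise baseflow at the first observed flow
    for t in range(1, len(q)):
        bt = ((1 - bfi_max) * a * b[-1] + (1 - a) * bfi_max * q[t]) / (1 - a * bfi_max)
        b.append(min(bt, q[t]))  # baseflow cannot exceed total streamflow
    return b

def baseflow_index(q, b):
    """BFI = total baseflow volume divided by total streamflow volume."""
    return sum(b) / sum(q)

# Toy daily hydrograph (m3/s): a storm peak over a slowly receding baseflow.
q = [5.0, 5.0, 20.0, 35.0, 18.0, 10.0, 7.0, 6.0, 5.5, 5.2]
b = eckhardt_baseflow(q)
print(round(baseflow_index(q, b), 3))
```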
Abstract:
Exposure assessment is a critical part of epidemiological studies into the effect of mycotoxins on human health. Whilst exposure assessment can be made by estimating the quantity of ingested toxins from food analysis and questionnaire data, the use of biological markers (biomarkers) of exposure can provide a more accurate measure of individual exposure, as it reflects the internal dose. Biomarkers of exposure can include the excreted toxin or its metabolites, as well as the products of interaction between the toxin and macromolecules such as protein and DNA. Samples in which biomarkers may be analysed include urine, blood, other body fluids and tissues, with urine and blood being the most accessible for human studies. Here we describe the development of biomarkers of exposure for the assessment of three important mycotoxins: aflatoxin, fumonisin and deoxynivalenol. A number of different biomarkers and methods have been developed that can be applied to human population studies, and these approaches are reviewed in the context of their application to molecular epidemiology research.
Abstract:
The purpose of this paper is to provide a framework for developing an effective evaluation practice within health care settings. Three features are reviewed: capacity building, the application of evaluation to program activities and the utilization of evaluation recommendations. First, the organizational elements required to establish effective evaluation practice are reviewed, emphasizing that an organization's capacity for evaluation develops over time and in stages. Second, a comprehensive evaluation framework is presented which demonstrates how evaluation practice can be applied to all aspects of a program's life cycle, thus promoting the scope of evidence-based decision making within an organization. Finally, factors which influence the adoption of evaluation recommendations by decision makers are reviewed, accompanied by strategies to promote the utilization of evaluation recommendations in organizational decision making.
Abstract:
Cancer is a complex and heterogeneous disease which is one of the leading causes of death in Western civilisations. Thus, oncology is viewed as a primary focus for personalized medicine. It is recognised that cancer treatment needs to be better tailored in order to improve patient outcome. Patient tumor samples will be required to characterize cancer at a molecular level and identify where there may be disease subgroups that should be treated differently. The use of formalin-fixed paraffin-embedded tissue is important for enabling such studies. In this report, we focus on the challenges that have been faced to date along with the technological developments that have now made this possible. We also highlight the impact this may have on drug and diagnostic development.
Abstract:
BACKGROUND:
Glaucoma is a leading cause of blindness. Early detection is advocated, but there is insufficient evidence from randomized controlled trials (RCTs) to inform health policy on population screening. In particular, there is no agreed screening intervention. For a screening programme, agreement is required on the screening tests to be used, either individually or in combination, the person to deliver the test and the location where testing should take place. This study aimed to use ophthalmologists (who were experienced glaucoma subspecialists), optometrists, ophthalmic nurses and patients to develop a reduced set of potential screening tests and testing arrangements that could then be explored in depth in a further study of their feasibility for evaluation in a glaucoma screening RCT.
METHODS:
A two-round Delphi survey involving 38 participants was conducted. Materials were developed from a prior evidence synthesis. For round one, after some initial priming questions in four domains, specialists were asked to nominate three screening interventions, each intervention being a combination of the four domains: target population (age and higher-risk groups), site, screening test and test operator (provider). More than 250 screening interventions were identified. For round two, responses were condensed into 72 interventions and each was rated by participants on a 0-10 scale in terms of feasibility.
RESULTS:
Using a cut-off of a median feasibility rating of ≥ 5.5 as evidence of agreement on intervention feasibility, six interventions were identified from round two. These involved initiating screening at age 50 with a combination of two or three screening tests (varying combinations of tonometry, measures of visual function and optic nerve damage), organized in a community setting with an ophthalmic-trained technical assistant delivering the tests. An alternative intervention was a 'glaucoma risk score' ascertained by questionnaire. The advisory panel recommended that further exploration of the feasibility of screening higher-risk populations and detailed specification of the screening tests was required.
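The round-two aggregation step lends itself to a very small script; the sketch below assumes a simple ratings layout (invented here for illustration, not the study's analysis code) and applies the median cut-off described above.

```python
# A minimal sketch of the round-two filter: keep interventions whose median
# 0-10 feasibility rating is at least 5.5. The intervention names and ratings
# below are hypothetical.
from statistics import median

def feasible_interventions(ratings, cutoff=5.5):
    """ratings: dict mapping intervention description -> list of 0-10 scores."""
    return [name for name, scores in ratings.items() if median(scores) >= cutoff]

ratings = {
    "community screening from age 50, technician-delivered tests": [7, 6, 8, 5, 6],
    "questionnaire-based glaucoma risk score":                     [6, 7, 5, 6, 8],
    "hospital-based screening of the whole adult population":      [3, 4, 2, 5, 3],
}
print(feasible_interventions(ratings))
```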
CONCLUSIONS:
With systematic use of expert opinions, a shortlist of potential screening interventions was identified. Views of users, service providers and cost-effectiveness modeling are now required to identify a feasible intervention to evaluate in a future glaucoma screening trial.
Abstract:
BACKGROUND: Measures that reflect patients' assessment of their health are of increasing importance as outcome measures in randomised controlled trials. The methodological approach used in the pre-validation development of new instruments (item generation, item reduction and question formatting) should be robust and transparent. The totality of the content of existing patient-reported outcome (PRO) instruments for a specific condition provides a valuable resource (pool of items) that can be utilised to develop new instruments. Such 'top down' approaches are common, but the explicit pre-validation methods are often poorly reported. This paper presents a systematic and generalisable 5-step pre-validation PRO instrument methodology.
METHODS: The method is illustrated using the example of the Aberdeen Glaucoma Questionnaire (AGQ). The five steps are: 1) Generation of a pool of items; 2) Item de-duplication (three phases); 3) Item reduction (two phases); 4) Assessment of the remaining items' content coverage against a pre-existing theoretical framework appropriate to the objectives of the instrument and the target population (e.g. ICF); and 5) Qualitative exploration of the target population's views of the new instrument and the items it contains.
RESULTS: The AGQ 'item pool' contained 725 items. Three de-duplication phases resulted in reduction of 91, 225 and 48 items respectively. The item reduction phases discarded 70 items and 208 items respectively. The draft AGQ contained 83 items with good content coverage. The qualitative exploration ('think aloud' study) resulted in removal of a further 15 items and refinement to the wording of others. The resultant draft AGQ contained 68 items.
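For clarity, the reported item counts reconcile as follows:

$$725 - (91 + 225 + 48) = 361, \qquad 361 - (70 + 208) = 83, \qquad 83 - 15 = 68.$$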
CONCLUSIONS: This study presents a novel methodology for developing a PRO instrument, based on three sources: literature reporting what is important to patients; a theoretically coherent framework; and patients' experience of completing the instrument. By systematically accounting for all items dropped after the item generation phase, our method ensures that the AGQ is developed in a transparent, replicable manner and is fit for validation. We recommend this method to enhance the likelihood that new PRO instruments will be appropriate to the research context in which they are used, acceptable to research participants and likely to generate valid data.
Abstract:
Context: Family carers of palliative care patients report high levels of psychological distress throughout the caregiving phase and during bereavement. Palliative care providers are required to provide psychosocial support to family carers; however, it is currently unclear which carers are most likely to develop prolonged grief (PG).
Objectives: To ascertain whether family carers reporting high levels of PG symptoms and those who develop PG disorder (PGD) by six and 13 months postdeath can be predicted from predeath information.
Methods: A longitudinal study of 301 carers of patients receiving palliative care was conducted across three palliative care services. Data were collected on entry to palliative care (T1) on a variety of sociodemographic variables, carer-related factors, and psychological distress measures. The measures of psychological distress were then readministered at six (T2; n = 167) and 13 months postdeath (T3; n = 143).
Results: The PG symptoms at T1 were a strong predictor of both PG symptoms and PGD at T2 and T3. Greater bereavement dependency, a spousal relationship to the patient, greater impact of caring on schedule, poor family functioning, and low levels of optimism also were risk factors for PG symptoms.
Conclusion: Screening family carers on entry to palliative care seems to be the most effective way of identifying who has a higher risk of developing PG. We recommend screening carers six months after the death of their relative to identify most carers with PG.