209 results for Individually rational utility set
Abstract:
Objective: To estimate the relative inpatient costs of hospital-acquired conditions. Methods: Patient-level costs were estimated using computerized costing systems that log individual utilization of inpatient services and apply sophisticated cost estimates from the hospital's general ledger. Occurrence of hospital-acquired conditions was identified using an Australian ‘condition-onset’ flag for diagnoses not present on admission. These were grouped to yield a comprehensive set of 144 categories of hospital-acquired conditions to summarize data coded with ICD-10. Standard linear regression techniques were used to identify the independent contribution of hospital-acquired conditions to costs, taking into account the case-mix of a sample of acute inpatients (n = 1,699,997) treated in Australian public hospitals in Victoria (2005/06) and Queensland (2006/07). Results: The most costly types of complications were post-procedure endocrine/metabolic disorders, adding AU$21,827 to the cost of an episode, followed by MRSA (AU$19,881) and enterocolitis due to Clostridium difficile (AU$19,743). Aggregate costs to the system, however, were highest for septicaemia (AU$41.4 million), complications of cardiac and vascular implants other than septicaemia (AU$28.7 million), acute lower respiratory infections, including influenza and pneumonia (AU$27.8 million), and UTI (AU$24.7 million). Hospital-acquired complications are estimated to add 17.3% to treatment costs in this sample. Conclusions: Patient safety efforts frequently focus on dramatic but rare complications with very serious patient harm. Previous studies of the costs of adverse events have provided information on ‘indicators’ of safety problems rather than the full range of hospital-acquired conditions. Adding a cost dimension to priority-setting could result in changes to the focus of patient safety programmes and research. Financial information should be combined with information on patient outcomes to allow for cost-utility evaluation of future interventions.
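A minimal sketch of the kind of cost model this abstract describes: ordinary least squares with condition-onset indicator variables alongside case-mix controls. All data here are synthetic and the column names (drg, hac_mrsa, hac_cdiff) are hypothetical; the simulation simply plants the paper's reported MRSA and C. difficile cost increments as ground truth to show how such a regression recovers them.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
episodes = pd.DataFrame({
    "drg": rng.integers(1, 21, n),            # crude case-mix grouping
    "age": rng.integers(18, 95, n),
    "hac_mrsa": rng.binomial(1, 0.01, n),     # condition-onset flags
    "hac_cdiff": rng.binomial(1, 0.01, n),
})
# plant the paper's reported increments (AU$19,881 and AU$19,743) as truth
episodes["cost"] = (
    2000.0
    + 150.0 * episodes["drg"]
    + 20.0 * episodes["age"]
    + 19881.0 * episodes["hac_mrsa"]
    + 19743.0 * episodes["hac_cdiff"]
    + rng.normal(0.0, 3000.0, n)
)

# each flag's coefficient estimates the independent cost contribution of
# that hospital-acquired condition, holding case-mix constant
model = smf.ols("cost ~ C(drg) + age + hac_mrsa + hac_cdiff", data=episodes).fit()
print(model.params[["hac_mrsa", "hac_cdiff"]].round(0))
```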
Abstract:
Long-term systematic population monitoring data sets are rare but are essential in identifying changes in species abundance. In contrast, community groups and natural history organizations have collected many species lists. These represent a large, untapped source of information on changes in abundance but are generally considered of little value. The major problem with using species lists to detect population changes is that the amount of effort used to obtain the list is often uncontrolled and usually unknown. It has been suggested that the number of species on the list, the "list length," can be used as a measure of effort. This paper significantly extends the utility of Franklin's approach using Bayesian logistic regression. We demonstrate the value of List Length Analysis to model changes in species prevalence (i.e., the proportion of lists on which the species occurs) using bird lists collected by a local bird club over 40 years around Brisbane, southeast Queensland, Australia. We estimate the magnitude and certainty of change for 269 bird species and calculate the probabilities that there have been declines and increases of given magnitudes. List Length Analysis confirmed suspected species declines and increases. This method is an important complement to systematically designed intensive monitoring schemes and provides a means of utilizing data that may otherwise be deemed useless. The results of List Length Analysis can be used for targeting species of conservation concern for listing purposes or for more intensive monitoring. While Bayesian methods are not essential for List Length Analysis, they can offer more flexibility in interrogating the data and are able to provide a range of parameters that are easy to interpret and can facilitate conservation listing and prioritization. © 2010 by the Ecological Society of America.
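The List Length Analysis described above can be sketched as a Bayesian logistic regression in which the probability that a species appears on a list depends on the list's length (a proxy for effort) and on year (the trend of interest). The sketch below uses synthetic data and PyMC; the exact model form (log list length as the effort covariate, the priors) is an assumption for illustration, not the authors' specification.

```python
import numpy as np
import pymc as pm
from scipy.special import expit

rng = np.random.default_rng(0)
n_lists = 600
list_len = rng.integers(5, 120, n_lists)           # species per outing list
year = rng.integers(1965, 2006, n_lists).astype(float)
year_c = (year - year.mean()) / 10.0               # centred, in decades

# synthetic species: detectability rises with list length, declines by year
detected = rng.binomial(1, expit(-3.0 + 0.9 * np.log(list_len) - 0.3 * year_c))

with pm.Model():
    alpha = pm.Normal("alpha", 0.0, 10.0)
    b_len = pm.Normal("b_len", 0.0, 10.0)          # effort (list length) effect
    b_year = pm.Normal("b_year", 0.0, 10.0)        # trend in reporting rate
    p = pm.math.invlogit(alpha + b_len * np.log(list_len) + b_year * year_c)
    pm.Bernoulli("obs", p=p, observed=detected)
    idata = pm.sample(1000, tune=1000, progressbar=False)

# posterior probability that the species has declined over the period
print(float((idata.posterior["b_year"] < 0).mean()))
```

The posterior for the year coefficient directly yields the kind of quantity the paper reports: the probability that a given species has declined, and by how much.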
Abstract:
Objectives The objective of this study was to develop process quality indicators (PQIs) to support the improvement of care services for older people with cognitive impairment in emergency departments (ED). Methods A structured research approach was taken for the development of PQIs for the care of older people with cognitive impairment in EDs, including combining available evidence with expert opinion (phase 1), a field study (phase 2), and formal voting (phase 3). A systematic review of the literature identified ED processes targeting the specific care needs of older people with cognitive impairment. Existing relevant PQIs were also included. By integrating the scientific evidence and clinical expertise, new PQIs were drafted and, along with the existing PQIs, extensively discussed by an advisory panel. These indicators were field tested in eight hospitals using a cohort of older persons aged 70 years and older. After analysis of the field study data (indicator prevalence, variability across sites), in a second meeting, the advisory panel further defined the PQIs. The advisory panel formally voted for selection of those PQIs that were most appropriate for care evaluation. Results In addition to seven previously published PQIs relevant to the care of older persons, 15 new indicators were created. These 22 PQIs were then field tested. PQIs designed specifically for the older ED population with cognitive impairment were only scored for patients with identified cognitive impairment. Following formal voting, a total of 11 PQIs were included in the set. These PQIs targeted cognitive screening, delirium screening, delirium risk assessment, evaluation of acute change in mental status, delirium etiology, proxy notification, collateral history, involvement of a nominated support person, pain assessment, postdischarge follow-up, and ED length of stay. Conclusions This article presents a set of PQIs for the evaluation of the care for older people with cognitive impairment in EDs. The variation in indicator triggering across different ED sites suggests that there are opportunities for quality improvement in care for this vulnerable group. Applied PQIs will identify an emergency service's implementation of care strategies for cognitively impaired older ED patients. Awareness of the PQI triggers at an ED level enables implementation of targeted interventions to improve any suboptimal processes of care. Further validation and assessment of the utility of the indicators in a wider population are now indicated.
Abstract:
This study examines the impact of incentives on commuters' travel behavior based on a questionnaire survey concerning the Beijing Subway System. Overall, we find that offering incentives to commuters, particularly fast food restaurant-related services and reduced ticket fares, has a positive influence on avoiding the morning rush hour. Furthermore, by using an interaction analysis, we discover that a flexible work schedule has an impact on commuters' behavior and the efficiency of the subway system. Finally, we recommend two possible policies to maximize the utility of the subway system and to reduce congestion at the peak of morning service: (1) a set of incentives that includes free wireless internet service with a coupon for breakfast and a discount on ticket fares before the morning peak; and (2) the introduction of a flexible work schedule.
Abstract:
Purpose To compare small nerve fiber damage in the central cornea and whorl area in participants with diabetic peripheral neuropathy (DPN) and to examine the accuracy of evaluating these 2 anatomical sites for the diagnosis of DPN. Methods A cohort of 187 participants (107 with type 1 diabetes and 80 controls) was enrolled. The neuropathy disability score (NDS) was used for the identification of DPN. The corneal nerve fiber length at the central cornea (CNFLcenter) and whorl (CNFLwhorl) was quantified using corneal confocal microscopy and a fully automated morphometric technique and compared according to the DPN status. Receiver operating characteristic analyses were used to compare the accuracy of the 2 corneal locations for the diagnosis of DPN. Results CNFLcenter and CNFLwhorl were able to differentiate all 3 groups (diabetic participants with and without DPN and controls) (P < 0.001). There was a weak but significant linear relationship for CNFLcenter and CNFLwhorl versus NDS (P < 0.001); however, the corneal location × NDS interaction was not statistically significant (P = 0.17). The area under the receiver operating characteristic curve was similar for CNFLcenter and CNFLwhorl (0.76 and 0.77, respectively, P = 0.98). The sensitivity and specificity of the cutoff points were 0.9 and 0.5 for CNFLcenter and 0.8 and 0.6 for CNFLwhorl. Conclusions Small nerve fiber pathology is comparable at the central and whorl anatomical sites of the cornea. Quantification of CNFL from the corneal center is as accurate as CNFL quantification of the whorl area for the diagnosis of DPN.
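A minimal sketch of the ROC comparison this abstract reports, using hypothetical CNFL values at the two corneal sites against a binary DPN label; the simulated effect sizes are illustrative only, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
n = 187                                                  # cohort size in the study
dpn = rng.binomial(1, 0.3, n)                            # DPN status by the NDS
cnfl_center = 14.0 - 3.0 * dpn + rng.normal(0, 2.5, n)   # illustrative values
cnfl_whorl = 15.0 - 3.0 * dpn + rng.normal(0, 2.5, n)

# CNFL falls with neuropathy, so negate it so higher scores indicate disease
for name, cnfl in [("center", cnfl_center), ("whorl", cnfl_whorl)]:
    auc = roc_auc_score(dpn, -cnfl)
    fpr, tpr, cuts = roc_curve(dpn, -cnfl)
    best = np.argmax(tpr - fpr)                          # Youden-optimal cutoff
    print(f"{name}: AUC={auc:.2f} "
          f"sensitivity={tpr[best]:.2f} specificity={1 - fpr[best]:.2f}")
```

The Youden-optimal cutoff used here is one common way to pick the operating point whose sensitivity and specificity are then reported, as in the abstract's cutoff-point figures.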
Abstract:
This research seeks to demonstrate the ways in which urban design factors, individually and in various well-considered arrangements, stimulate and encourage social activities in Brisbane’s public squares through the mapping and analysis of user behaviour. No design factors contribute to public space in isolation, so the combinations of different design factors, contextual and social impacts as well as local climate are considered to be highly influential to the way in which Brisbane’s public engages with public space. It is this local distinctiveness that this research seeks to ascertain. The research firstly pinpoints and consolidates the design factors identified and recommended in existing literature and then maps the identified factors as they are observed at case study sites in Brisbane. This is then set against observational mappings of the site’s corresponding user activities and engagement. These mappings identify a number of patterns of behaviour; pertinently, that “activated” areas of social gathering actively draw people in, and that the busier a space is, the more both the frequency and duration of people lingering in the space increase. The study finds that simply providing respite from the urban environment (and/or weather conditions) does not adequately encourage social interaction and that people-friendly design factors can instigate social activities which, if coexisting in a public space, can themselves draw in further users of the space. One of the primary conclusions drawn from these observations is that members of the public in Brisbane are both actively and passively social and often seek out locations where “people-watching” and being around other members of the public (both categorised as passive social activities) are facilitated and encouraged. Spaces that provide respite from the urban environment but that do not sufficiently accommodate social connections and activities are less favourable and are often left abandoned despite their comparable tranquillity and available space.
Abstract:
Little research has been published on how destination image changes over time. Given increasing investments in destination branding, research is needed to enhance understanding of how to monitor destination brand performance, of which destination image is the core construct, over time. This article reports the results of four studies tracking brand performance of a competitive set of five destinations between 2003 and 2012. Results indicate minimal changes in perceptions held of the five destinations of interest over the 10 years, supporting the assertion of Gartner (1986) and Gartner and Hunt (1987) that destination image change will only occur slowly over time. While undertaken in Australia, the research approach provides DMOs in other parts of the world with a practical tool for evaluating brand performance over time, both as a measure of the effectiveness of past marketing communications and as an indicator of future performance.
Abstract:
The Commission has been asked to identify appropriate options for reducing entry and exit barriers, including advice on the potential impacts of the personal/corporate insolvency regimes on business exits...
Abstract:
The Commission has released a Draft Report on Business Set-Up, Transfer and Closure for public consultation and input. It is pleasing to note that three chapters of the Draft Report address aspects of personal and corporate insolvency. Nevertheless, we continue to make the submission to national policy inquiries and discussions that a comprehensive review should be undertaken of the regulation of insolvency and restructuring in Australia. The last comprehensive review of the insolvency system was by the Australian Law Reform Commission (the Harmer Report) and was handed down in 1988. While aspects of our insolvency laws have been reviewed since that time, none of those reviews has provided the clear and comprehensive analysis that can come from a more considered review. Such a review ought to be conducted by the Australian Law Reform Commission or a similar independent panel set up for the task. We also suggest that there is a lack of data available to assist with addressing questions raised by the Draft Report. There is a need to invest in finding out, in a rigorous and informed way, how the current law operates. Until there is a willingness to make a public investment in such research, with less reliance upon the anecdotal (often from well-meaning but ultimately inadequately informed participants and others), the government cannot be sure that the insolvency regime we have provides the most effective regime to underpin Australia’s commercial and financial dealings, nor that any change is justified. We also make the submission that there are benefits in a serious investigation into a merged regulatory architecture of personal and corporate insolvency and a combined personal and corporate insolvency regulator.
Abstract:
Currently we are facing an overwhelming growth in the number of reliable information sources on the Internet. The quantity of information available to everyone via the Internet is growing dramatically each year [15]. At the same time, the temporal and cognitive resources of human users are not changing, causing a phenomenon of information overload. The World Wide Web is one of the main sources of information for decision makers (reference to my research). However, our studies show that, at least in Poland, decision makers see some important problems when turning to the Internet as a source of decision information. One of the most commonly raised obstacles is the distribution of relevant information among many sources, and therefore the need to visit different Web sources in order to collect all the important content and analyze it. A few research groups have recently turned to the problem of information extraction from the Web [13]. Most effort so far has been directed toward collecting data from dispersed databases accessible via web pages (referred to as data extraction or information extraction from the Web) and toward understanding natural language texts by means of fact, entity, and association recognition (referred to as information extraction). Data extraction efforts show some interesting results; however, proper integration of web databases is still beyond us. The information extraction field has recently been very successful in retrieving information from natural language texts, but it still lacks the ability to understand more complex information that requires common-sense knowledge, discourse analysis, and disambiguation techniques.
Abstract:
This is a methodological paper describing when and how manifest items dropped from a latent construct measurement model (e.g., factor analysis) can be retained for additional analysis. Protocols are presented for assessing items for retention in the measurement model, for evaluating dropped items as potential variables separate from the latent construct, and for post hoc analyses that can be conducted using all retained (manifest or latent) variables. The protocols are then applied to data relating to the impact of the NAPLAN test. The variables examined are teachers’ achievement goal orientations and teachers’ perceptions of the impact of the test on curriculum and pedagogy. It is suggested that five attributes be considered before retaining dropped manifest items for additional analyses. (1) Items can be retained when employed in service of an established or hypothesized theoretical model. (2) Items should only be retained if sufficient variance is present in the data set. (3) Items can be retained when they provide a rational segregation of the data set into subsamples (e.g., a consensus measure). (4) The value of retaining items can be assessed using latent class analysis or latent mean analysis. (5) Items should be retained only when post hoc analyses with these items produce significant and substantive results. These suggested exploratory strategies are presented so that other researchers using survey instruments might explore their data in similar and more innovative ways. Finally, suggestions for future use are provided.
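The first two retention checks, identifying which items drop out of the measurement model and whether a dropped item retains enough variance to be analytically useful, can be sketched as below. The synthetic data, the loading cutoff of 0.4, and the use of scikit-learn's FactorAnalysis are illustrative assumptions, not the paper's procedure.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 1))                   # one latent construct
weights = rng.uniform(0.6, 0.9, (1, 6))
items = latent @ weights + rng.normal(0.0, 0.5, (300, 6))
items[:, 5] = rng.normal(0.0, 1.0, 300)              # item unrelated to construct

fa = FactorAnalysis(n_components=1).fit(items)
loadings = fa.components_.ravel()                    # loadings (up to scaling)
dropped = [i for i, l in enumerate(loadings) if abs(l) < 0.4]

for i in dropped:
    # attribute (2): a dropped item is only worth keeping for further
    # analysis if it still shows sufficient variance in the data set
    print(f"item {i}: loading={loadings[i]:+.2f}, variance={items[:, i].var():.2f}")
```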
Abstract:
A phylogenetic hypothesis for the lepidopteran superfamily Noctuoidea was inferred based on the complete mitochondrial (mt) genomes of 12 species (six newly sequenced). The monophyly of each noctuoid family in the latest classification was well supported. Novel and robust relationships were recovered at the family level, in contrast to previous analyses using nuclear genes. Erebidae was recovered as sister to (Nolidae+(Euteliidae+Noctuidae)), while Notodontidae was sister to all these taxa (the putatively basalmost lineage Oenosandridae was not included). In order to improve phylogenetic resolution using mt genomes, various analytical approaches were tested: Bayesian inference (BI) vs. maximum likelihood (ML), excluding vs. including RNA genes (rRNA or tRNA), and Gblocks treatment. The evolutionary signal within mt genomes had low sensitivity to analytical changes. Inference methods had the most significant influence. Inclusion of tRNAs increased the congruence of topologies, while inclusion of rRNAs resulted in a range of phylogenetic relationships that varied depending on other analytical factors. The two Gblocks parameter settings had opposite effects on nodal support between the two inference methods. The relaxed parameter (GBRA) resulted in higher support values in BI analyses, while the strict parameter (GBDH) resulted in higher support values in ML analyses.
Abstract:
The microbially mediated production of nitrous oxide (N2O) and its reduction to dinitrogen (N2) via denitrification represents a loss of nitrogen (N) from fertilised agro-ecosystems to the atmosphere. Although denitrification has received great interest from biogeochemists in recent decades, the magnitude of N2 losses and the related N2:N2O ratios from soils are still largely unknown due to methodological constraints. We present a novel 15N tracer approach, based on a previously developed tracer method for studying denitrification in pure bacterial cultures, which was modified for use in soil incubations in a completely automated laboratory setup. The method replaces the background air in the incubation vessels with a helium-oxygen gas mixture with a 50-fold reduced N2 background (2% v/v). This allows for a direct and sensitive quantification of the N2 and N2O emissions from the soil with isotope-ratio mass spectrometry after 15N labelling of denitrification N substrates, while minimising the sensitivity to the intrusion of atmospheric N2. The incubation setup was used to determine the influence of different soil moisture levels on N2 and N2O emissions from a sub-tropical pasture soil in Queensland, Australia. The soil was labelled with an equivalent of 50 μg N per gram of dry soil by broadcast application of KNO3 solution (4 at.% 15N) and incubated for 3 days at 80% and 100% water-filled pore space (WFPS), respectively. The headspace of the incubation vessel was sampled automatically over 12 hours each day, and 3 samples of headspace gas (0, 6, and 12 hrs after incubation start) were analysed for N2 and N2O with an isotope-ratio mass spectrometer (DELTA V Plus, Thermo Fisher Scientific, Bremen, Germany). In addition, the soil was analysed for 15N in NO3- and NH4+ using the 15N diffusion method, which enabled us to obtain a complete N balance. The method proved to be highly sensitive, detecting N2O emissions ranging from 20 to 627 μg N kg-1 soil hr-1 and N2 emissions ranging from 4.2 to 43 μg N kg-1 soil hr-1 for the different treatments. The main end-product of denitrification was N2O for both water contents, with N2 accounting for 9% and 13% of the total denitrification losses at 80% and 100% WFPS, respectively. Between 95% and 100% of the added 15N fertiliser could be recovered. Gross nitrification over the 3 days amounted to 8.6 and 4.7 μg N g-1 soil, and denitrification to 4.1 and 11.8 μg N g-1 soil, at 80% and 100% WFPS, respectively. The results confirm that the tested method allows for a direct and highly sensitive detection of N2 and N2O fluxes from soils and hence offers a sensitive tool for studying denitrification and N turnover in terrestrial agro-ecosystems.
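One generic step in such an incubation experiment, estimating an emission rate from the rise in headspace gas over the 0, 6, and 12 hr samples, can be sketched as a simple linear fit. All values and vessel constants below are hypothetical, and the paper's isotope-ratio calculations are not reproduced.

```python
import numpy as np

hours = np.array([0.0, 6.0, 12.0])        # headspace sampling times (hr)
n2o_ug_n = np.array([0.0, 1.9, 4.1])      # hypothetical N2O-N in headspace (μg)
soil_kg = 0.05                            # hypothetical dry soil mass per vessel

slope, _ = np.polyfit(hours, n2o_ug_n, 1)  # μg N accumulating per hour
flux = slope / soil_kg                     # μg N kg-1 soil hr-1
print(f"N2O flux ≈ {flux:.0f} μg N kg-1 soil hr-1")
```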
Abstract:
The total entropy utility function is considered for the dual purpose of Bayesian design for model discrimination and parameter estimation. A sequential design setting is proposed in which it is shown how to efficiently estimate the total entropy utility for a wide variety of data types. Utility estimation relies on forming particle approximations to a number of intractable integrals, which is afforded by the use of the sequential Monte Carlo algorithm for Bayesian inference. A number of motivating examples are considered to demonstrate the performance of total entropy in comparison to utilities for model discrimination and parameter estimation alone. The results suggest that the total entropy utility selects designs which are efficient under both experimental goals with little compromise in achieving either goal. As such, the total entropy utility is advocated as a general utility for Bayesian design in the presence of model uncertainty.
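Writing the total entropy utility as the mutual information between the joint (model, parameters) pair and the data makes a particle-style Monte Carlo estimate concrete. The toy problem below (two nested logistic models and a one-point design) and the plain Monte Carlo averaging are illustrative assumptions; the paper's sequential Monte Carlo machinery is not reproduced.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)
N = 4000                                   # prior draws ("particles")

def total_entropy_utility(d):
    """Monte Carlo estimate of I((model, theta); y) at a one-point design d."""
    m = rng.integers(0, 2, N)              # model indicator, prior 1/2 each
    a = rng.normal(0.0, 1.0, N)            # intercept, shared by both models
    b = rng.normal(0.0, 1.0, N) * m        # slope exists only under model 1
    p = expit(a + b * d)                   # P(y = 1 | m, theta, d)
    y = rng.binomial(1, p)
    like = np.where(y == 1, p, 1.0 - p)    # p(y_i | m_i, theta_i, d)
    # prior predictive p(y_i | d): average the success probability over
    # every prior draw, which for Bernoulli data is just p.mean()
    evidence = np.where(y == 1, p.mean(), 1.0 - p.mean())
    return float(np.mean(np.log(like) - np.log(evidence)))

# evaluate a grid of candidate designs and keep the most informative one
designs = np.linspace(-3.0, 3.0, 13)
best = max(designs, key=total_entropy_utility)
print(f"best one-point design: x = {best:+.1f}")
```

By the chain rule of mutual information, this quantity splits into a model-discrimination term and a within-model parameter term, which is the sense in which a design maximising it serves both experimental goals at once.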