901 results for probabilistic roadmap
Abstract:
There is increased interest worldwide in the use of Unmanned Aerial Vehicles (UAVs) for wildlife and feral animal monitoring. This paper describes a novel system in which a predictive dynamic application places the UAV ahead of a user; a low-cost thermal camera and a small onboard computer identify heat signatures of a target animal from a predetermined altitude and transmit the target's GPS coordinates. A map is generated, and various data sets and graphs are displayed in a GUI designed for easy use. The paper describes the hardware and software architecture and the probabilistic model of the downward-facing camera for the detection of an animal. Behavioral dynamics of target movement inform the design of a Kalman filter and Markov model based prediction algorithm used to place the UAV ahead of the user. Geometrical concepts and the Haversine formula are applied to the maximum likelihood case in order to predict a future state of the user, thus delivering a new waypoint for autonomous navigation. Results show that the system is capable of autonomously locating animals from a predetermined height and generating a map showing the locations of the animals ahead of the user.
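The waypoint computation is not detailed in the abstract; as a minimal sketch of the Haversine-based geometry it mentions, the following snippet (function names are hypothetical) computes the great-circle distance between two GPS fixes and projects a new waypoint a given distance ahead along a bearing.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance (m) between two GPS fixes, via the Haversine formula."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def project_waypoint(lat, lon, bearing_deg, distance_m):
    """Destination reached from (lat, lon) after travelling distance_m on a fixed bearing."""
    delta = distance_m / EARTH_RADIUS_M
    theta = math.radians(bearing_deg)
    phi1, lam1 = math.radians(lat), math.radians(lon)
    phi2 = math.asin(math.sin(phi1) * math.cos(delta) +
                     math.cos(phi1) * math.sin(delta) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(delta) * math.cos(phi1),
                             math.cos(delta) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)
```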
Abstract:
Poor farmers have often been blamed for the environmental problems of developing countries. It has been claimed that the struggle for survival forces them to use land and other natural resources short-sightedly. Few studies on the subject have supported this claim, however; the degree of a household's poverty and the environmental impact it causes have not been successfully linked. To clarify the poverty-environment debate, Thomas Reardon and Steven Vosti developed the concept of investment poverty. It identifies the perhaps large group of farming households that are not poor by conventional poverty measures, but whose welfare is not sufficiently above the poverty lines to allow the household to invest in more sustainable land use. Reardon and Vosti also emphasized the effect of assets on household welfare and believed that assets influence production and investment decisions. This study seeks to answer two questions: How can investment poverty be understood and measured? And what is the welfare-enhancing effect of farming households' assets? For this study, 402 farming households were interviewed in Central America, in the Herrera province of the Republic of Panama. The welfare of these households was measured by their consumption, and local poverty lines were calculated from local food prices. In Herrera a person needs, on average, 494 dollars a year to obtain adequate nutrition, or 876 dollars a year to cover other essential expenses in addition to food. 15.4% of the households studied fell below the food poverty line, i.e. the extreme poverty line, and 33.6% were moderately poor, achieving adequate nutrition but unable to afford their other basic needs. Thus 51% of the households studied were above both poverty lines. There are significant differences between these poverty groups, not only in household wealth, income and investment strategies, but also in household structure, living environment and access to services. Measuring investment poverty proved challenging. In Herrera, farmers do not invest purely in environmental protection, nor could the sustainability of land use otherwise be linked to the level of household welfare. Investment poverty was therefore sought as the level of welfare below which productive land improvement investments are no longer directly related to welfare. Such investments include, for example, planted (live) fences, fertilization and the cultivation of improved pasture types. It was found that when household welfare falls below 1,000 dollars per person per year, such productive land improvement investments become very rare. The investment poverty line is thus about twice the cost of adequate nutrition, and 42.3% of the households studied exceeded it. Typically, in these households both spouses are employed, highly educated and active in their community, the farm is more productive, more demanding crops are grown, and more assets have been accumulated than in households living below the investment poverty line. This study questioned the common assumption that assets invariably benefit a farming household. Accordingly, the effect of assets on household welfare was studied by examining the pathways through which the land, livestock, education and working-age family members owned by a household could increase its welfare. These welfare mechanisms were also thought to depend on many intervening factors.
For example, education could increase welfare if it led to better-paid jobs or to founding a business, but these mechanisms may be affected by, say, the distance to towns or whether the household owns a vehicle. Among the poorest households, education was in fact the only form of assets studied that promoted household welfare, whereas land, livestock or labour did not help in rising out of poverty. Among wealthier households, by contrast, higher welfare was produced not only by education but also by land and labour, although this depended on many intervening variables, such as production inputs. There is thus no automatic mechanism by which assets improve household welfare. Although the wealthy generally have more livestock than poorer households, no mechanism was found in these data through which the amount of livestock would produce higher welfare for farming households. Strategies for accumulating and using assets also change as welfare grows, and they are affected by many external factors. The relationship between environment and poverty thus remains unclear. Overcoming poverty requires, in the long run, that farming households rise above the investment poverty line. Only then could they afford to start accumulating assets and investing in more sustainable land use. At present, however, for a large share of Herreran households that line is far out of reach. How can consumption of over a thousand dollars per family member be reached if the standard of living does not even allow adequate nutrition? And even then, even if welfare improved, improvements for the environment cannot necessarily be expected if cattle herds grow and erosion-prone pastures spread.
Abstract:
Background Exercise referral schemes (ERS) aim to identify inactive adults in the primary care setting. The primary care professional refers the patient to a third-party service, with this service taking responsibility for prescribing and monitoring an exercise programme tailored to the needs of the patient. This paper examines the cost-effectiveness of ERS in promoting physical activity compared with usual care in the primary care setting. Methods A decision analytic model was developed to estimate the cost-effectiveness of ERS from a UK NHS perspective. The costs and outcomes of ERS were modelled over the patient's lifetime. Data were derived from a systematic review of the literature on the clinical effectiveness and cost-effectiveness of ERS and on parameter inputs for the modelling framework. Outcomes were expressed as incremental cost per quality-adjusted life-year (QALY). Deterministic and probabilistic sensitivity analyses investigated the impact of varying ERS cost and effectiveness assumptions. Sub-group analyses explored the cost-effectiveness of ERS in sedentary people with an underlying condition. Results Compared with usual care, the mean incremental lifetime cost per patient for ERS was £169 and the mean incremental QALY gain was 0.008, generating a base-case incremental cost-effectiveness ratio (ICER) for ERS of £20,876 per QALY in sedentary individuals without a diagnosed medical condition. There was a 51% probability that ERS was cost-effective at £20,000 per QALY and an 88% probability that it was cost-effective at £30,000 per QALY. In sub-group analyses, the cost per QALY for ERS in sedentary obese individuals was £14,618, and in sedentary hypertensive individuals and sedentary individuals with depression the estimated cost per QALY was £12,834 and £8,414, respectively. Incremental lifetime costs and benefits associated with ERS were small, reflecting the preventative public health context of the intervention, which makes the estimates of cost-effectiveness sensitive to variations in the relative risk of becoming physically active and in the cost of ERS. Conclusions ERS is associated with a modest increase in lifetime costs and benefits. The cost-effectiveness of ERS is highly sensitive to small changes in the effectiveness and cost of ERS and is subject to significant uncertainty, mainly due to limitations in the clinical effectiveness evidence base.
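For reference, the ICER quoted in the Results is the ratio of the incremental lifetime cost to the incremental QALY gain of ERS over usual care; with the rounded figures above it evaluates to roughly £21,000 per QALY, the base case of £20,876 presumably reflecting unrounded model outputs:

\[
\mathrm{ICER} \;=\; \frac{\Delta C}{\Delta E} \;=\; \frac{C_{\mathrm{ERS}} - C_{\mathrm{usual\ care}}}{\mathrm{QALY}_{\mathrm{ERS}} - \mathrm{QALY}_{\mathrm{usual\ care}}} \;\approx\; \frac{\pounds 169}{0.008}.
\]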
Abstract:
Uncertainties associated with the structural model and measured vibration data may lead to unreliable damage detection. In this paper, we show that geometric and measurement uncertainty cause considerable problems in damage assessment, which can be alleviated by using a fuzzy logic-based approach for damage detection. Curvature damage factors (CDFs) of a tapered cantilever beam are used as damage indicators. Monte Carlo simulation (MCS) is used to study the changes in the damage indicator due to uncertainty in the geometric properties of the beam. Variations in these CDF measures due to randomness in the structural parameters, further contaminated with measurement noise, are used for developing and testing a fuzzy logic system (FLS). Results show that the method correctly identifies both single and multiple damage in the structure. For example, the FLS detects damage with an average accuracy of about 95 percent in a beam with geometric uncertainty of 1 percent COV and measurement noise of 10 percent in the single-damage scenario. For the multiple-damage case, the FLS identifies damage in the beam with an average accuracy of about 94 percent in the presence of the above-mentioned uncertainties. The paper brings together the disparate areas of probabilistic analysis and fuzzy logic to address uncertainty in structural damage detection.
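The abstract does not spell out how the curvature damage factor or the Monte Carlo study is formulated; the sketch below is a minimal illustration under assumed definitions: the CDF at each location is taken as the mean absolute change in mode-shape curvature across modes, and the Monte Carlo loop perturbs the damaged mode shapes with a crude geometric scaling (1% COV) and multiplicative measurement noise (10%). All names and exact definitions here are assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def curvature(mode_shape, dx):
    """Second spatial derivative of a mode shape via central differences."""
    return np.gradient(np.gradient(mode_shape, dx), dx)

def curvature_damage_factor(intact_modes, damaged_modes, dx):
    """CDF at each location: mean absolute change in mode-shape curvature over modes.
    (Assumed definition; the paper's exact indicator may differ.)"""
    diffs = [np.abs(curvature(u0, dx) - curvature(ud, dx))
             for u0, ud in zip(intact_modes, damaged_modes)]
    return np.mean(diffs, axis=0)

def monte_carlo_cdf(intact_modes, damaged_modes, dx,
                    n_samples=1000, geom_cov=0.01, noise_level=0.10):
    """Spread of the CDF when damaged mode shapes are perturbed by a crude
    geometric scaling (COV geom_cov) and multiplicative measurement noise."""
    samples = []
    for _ in range(n_samples):
        scale = 1.0 + geom_cov * rng.standard_normal()
        noisy_damaged = [ud * scale * (1.0 + noise_level * rng.standard_normal(ud.shape))
                         for ud in damaged_modes]
        samples.append(curvature_damage_factor(intact_modes, noisy_damaged, dx))
    samples = np.array(samples)
    return samples.mean(axis=0), samples.std(axis=0)
```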
Abstract:
The behaviour of laterally loaded piles is considerably influenced by the uncertainties in soil properties. Hence probabilistic models for assessment of allowable lateral load are necessary. Cone penetration test (CPT) data are often used to determine soil strength parameters, whereby the allowable lateral load of the pile is computed. In the present study, the maximum lateral displacement and moment of the pile are obtained based on the coefficient of subgrade reaction approach, considering the nonlinear soil behaviour in undrained clay. The coefficient of subgrade reaction is related to the undrained shear strength of soil, which can be obtained from CPT data. The soil medium is modelled as a one-dimensional random field along the depth, and it is described by the standard deviation and scale of fluctuation of the undrained shear strength of soil. Inherent soil variability, measurement uncertainty and transformation uncertainty are taken into consideration. The statistics of maximum lateral deflection and moment are obtained using the first-order, second-moment technique. Hasofer-Lind reliability indices for component and system failure criteria, based on the allowable lateral displacement and moment capacity of the pile section, are evaluated. The geotechnical database from the Konaseema site in India is used as a case example. It is shown that the reliability-based design approach for pile foundations, considering the spatial variability of soil, permits a rational choice of allowable lateral loads.
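For context, the Hasofer-Lind reliability index evaluated above is, in its standard definition, the shortest distance from the origin to the limit-state surface in the space of standardized (uncorrelated, standard normal) variables u:

\[
\beta_{\mathrm{HL}} \;=\; \min_{\mathbf{u}\,:\,g(\mathbf{u}) = 0} \sqrt{\mathbf{u}^{\mathsf{T}}\mathbf{u}},
\]

where g is the limit-state function, here formed from the allowable lateral displacement or the moment capacity of the pile section minus the corresponding computed response.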
Abstract:
Those seeking to bring change to the cultivars sold in the banana markets of the world have encountered major difficulties over the years. Change has been sought because of production difficulties caused by banana diseases such as Fusarium wilt, or from a desire to invigorate a stagnant market and obtain a competitive advantage by introducing diversity of product. Currently the world banana scene is dominated by cultivars from the Cavendish subgroup, whose production exceeds 40% of total world production of banana and plantain combined, and in most western countries Cavendish is synonymous with banana. But Cavendish production usually necessitates very regular applications of pesticides, particularly fungicides for Mycosphaerella leaf spot control. Genetic resistance to these and other diseases would therefore be very beneficial in minimizing costs of production, as well as reducing health risks to banana workers and the general population and minimizing impacts on the environment. In recent years, the overall market sales of some crops, such as tomatoes, have increased by providing diversity of cultivars to consumers. Can the same be done for banana? Perhaps a better understanding of how we have arrived at our current situation, and of the forces that have shaped our preference for Cavendish, will allow us to plan more strategic crop improvement research with enhanced chances of adoption by the banana industries of the world. A scoping study was recently undertaken in Australia to determine the current market opportunity for alternative cultivars and provide a roadmap for the industry to successfully develop this market. A multidisciplinary team reviewed the literature, surveyed the supply chain, analyzed gross margins and conducted consumer and sensory evaluations of 'new' cultivars. This has provided insight into why Cavendish dominates the market, which is the focus of this paper, and we believe it will provide a solid foundation for future progress.
Abstract:
Post-rainy sorghum (Sorghum bicolor (L.) Moench) production underpins the livelihood of millions in the semiarid tropics, where the crop is affected by drought. Drought scenarios have been classified and quantified using crop simulation. In this report, variation in traits that hypothetically contribute to drought adaptation (plant growth dynamics, canopy and root water-conducting capacity, drought stress responses) was virtually introgressed into the most common post-rainy sorghum genotype, and the influence of these traits on plant growth, development, and grain and stover yield was simulated across different scenarios. Limited transpiration rates under high vapour pressure deficit had the highest positive effect on production, especially when combined with enhanced water extraction capacity at the root level. Variability in leaf development (smaller canopy size, later plant vigour or increased leaf appearance rate) also increased grain yield under severe drought, although it caused a stover yield trade-off under milder stress. Although the leaf development response to soil drying varied, this trait had only a modest benefit on crop production across all stress scenarios. Closer dissection of the model outputs showed that under water limitation, grain yield was largely determined by the amount of water available after anthesis, and this relationship became closer with stress severity. All traits investigated increased water availability after anthesis, delayed leaf senescence, and led to a 'stay-green' phenotype. In conclusion, we showed that breeding success remained highly probabilistic; maximum resilience and economic benefits depended on drought frequency. Maximum potential could be explored by specific combinations of traits.
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, more flexible model formulation, combined with fully probabilistic inference, can be achieved using a hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example. The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied to a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible in this case as well.
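The statement that maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode under flat priors is the usual Laplace-type expansion; stated schematically (a standard result, not a formula from the thesis):

\[
p(\theta \mid y) \;\approx\; \mathrm{N}\!\Big(\hat{\theta},\; \big[-\nabla^{2}\log p(\theta \mid y)\big|_{\theta=\hat{\theta}}\big]^{-1}\Big),
\]

where, with a flat prior, the posterior mode \(\hat{\theta}\) coincides with the maximum likelihood estimate and the curvature term reduces to the observed information.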
Abstract:
Advancements in analysis techniques have led to a rapid accumulation of biological data in databases. Such data are often in the form of sequences of observations, examples including DNA sequences and amino acid sequences of proteins. The scale and quality of the data hold promise of answering various biologically relevant questions in more detail than has been possible before. For example, one may wish to identify areas in an amino acid sequence which are important for the function of the corresponding protein, or investigate how characteristics at the level of the DNA sequence affect the adaptation of a bacterial species to its environment. Many of the interesting questions are intimately associated with understanding the evolutionary relationships among the items under consideration. The aim of this work is to develop novel statistical models and computational techniques to meet the challenge of deriving meaning from the increasing amounts of data. Our main concern is modeling the evolutionary relationships based on the observed molecular data. We operate within a Bayesian statistical framework, which allows a probabilistic quantification of the uncertainties related to a particular solution. As the basis of our modeling approach we utilize a partition model, which describes the structure of the data by appropriately dividing the data items into clusters of related items. Generalizations and modifications of the partition model are developed and applied to various problems. Large-scale data sets also pose a computational challenge. The models used to describe the data must be realistic enough to capture the essential features of the current modeling task but, at the same time, simple enough to make it possible to carry out the inference in practice. The partition model fulfills these two requirements. Problem-specific features can be taken into account by modifying the prior probability distributions of the model parameters. The computational efficiency stems from the ability to integrate out the parameters of the partition model analytically, which enables the use of efficient stochastic search algorithms.
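The analytical integration mentioned at the end rests on the marginal likelihood of a partition factorizing over its clusters; schematically (notation assumed here, not taken from the thesis):

\[
p(\text{data} \mid S) \;=\; \prod_{c \in S} \int p(\text{data}_c \mid \theta_c)\, p(\theta_c)\, d\theta_c,
\]

so that with conjugate priors each cluster integral has a closed form, and a stochastic search only needs to score candidate partitions S rather than sample the parameters \(\theta_c\).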
Abstract:
Genetics, the science of heredity and variation in living organisms, has a central role in medicine, in breeding crops and livestock, and in studying fundamental topics of the biological sciences such as evolution and cell functioning. Currently the field of genetics is undergoing rapid development because of recent advances in technologies by which molecular data can be obtained from living organisms. In order to extract the most information from such data, the analyses need to be carried out using statistical models that are tailored to take account of the particular genetic processes. In this thesis we formulate and analyze Bayesian models for genetic marker data of contemporary individuals. The major focus is on modeling the unobserved recent ancestry of the sampled individuals (say, for tens of generations or so), which is carried out using explicit probabilistic reconstructions of the pedigree structures accompanied by the gene flows at the marker loci. For such a recent history, the recombination process is the major genetic force that shapes the genomes of the individuals, and it is included in the model by assuming that the recombination fractions between adjacent markers are known. The posterior distribution of the unobserved history of the individuals is studied conditionally on the observed marker data using a Markov chain Monte Carlo (MCMC) algorithm. The example analyses consider estimation of the population structure, the relatedness structure (both at the level of whole genomes and at each marker separately), and haplotype configurations. For situations where the pedigree structure is partially known, an algorithm to create an initial state for the MCMC algorithm is given. Furthermore, the thesis includes an extension of the model of recent genetic history to situations where a quantitative phenotype has also been measured from the contemporary individuals. In that case the goal is to identify positions on the genome that affect the observed phenotypic values. This task is carried out within the Bayesian framework, where the number and the relative effects of the quantitative trait loci are treated as random variables whose posterior distribution is studied conditionally on the observed genetic and phenotypic data. In addition, the thesis contains an extension of a widely used haplotyping method, the PHASE algorithm, to settings where genetic material from several individuals has been pooled together and the allele frequencies of each pool are determined in a single genotyping.
Abstract:
Tools known as maximal functions are frequently used in harmonic analysis when studying the local behaviour of functions. Typically they measure the suprema of local averages of non-negative functions. It is essential that the size (more precisely, the L^p-norm) of the maximal function is comparable to the size of the original function. When dealing with families of operators between Banach spaces we are often forced to replace the uniform bound with the larger R-bound. Hence such a replacement is also needed in the maximal function for functions taking values in spaces of operators. More specifically, the supremum of the norms of the local averages (i.e. their uniform bound in the operator norm) has to be replaced by their R-bound. This procedure gives us the Rademacher maximal function, which was introduced by Hytönen, McIntosh and Portal in order to prove a certain vector-valued Carleson's embedding theorem. They noticed that the sizes of an operator-valued function and its Rademacher maximal function are comparable for many common range spaces, but not for all. Certain requirements on the type and cotype of the spaces involved are necessary for this comparability, henceforth referred to as the "RMF-property". It was shown that other objects and parameters appearing in the definition, such as the domain of the functions and the exponent p of the norm, make no difference to this. After a short introduction to randomized norms and geometry in Banach spaces we study the Rademacher maximal function on Euclidean spaces. The requirements on the type and cotype are considered, providing examples of spaces without RMF. L^p-spaces are shown to have RMF not only for p greater than or equal to 2 (when it is trivial) but also for 1 < p < 2. A dyadic version of Carleson's embedding theorem is proven for scalar- and operator-valued functions. As the analysis with dyadic cubes can be generalized to filtrations on sigma-finite measure spaces, we consider the Rademacher maximal function in this case as well. It turns out that the RMF-property is independent of the filtration and the underlying measure space and that it is enough to consider very simple ones known as Haar filtrations. Scalar- and operator-valued analogues of Carleson's embedding theorem are also provided. With the RMF-property proven independent of the underlying measure space, we can use probabilistic notions and formulate it for martingales. Following a similar result for UMD-spaces, a weak type inequality is shown to be (necessary and) sufficient for the RMF-property. The RMF-property is also studied using concave functions, giving yet another proof of its independence from various parameters.
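For orientation, the scalar prototype of the "suprema of local averages" described above is the classical Hardy-Littlewood-type maximal function,

\[
Mf(x) \;=\; \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\, dy,
\]

and the Rademacher maximal function is obtained, roughly speaking, by replacing the supremum of the operator norms of such averages with their R-bound when f takes values in a space of operators.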
Abstract:
This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. The problems studied here belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem where the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. This problem is important as the direct measurement of haplotypes is difficult, whereas the genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem, referred to as HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population. Thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes. Therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains to evaluate the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability. BACH is the most accurate method presented in this thesis and has performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem where one is interested in whether the local similarities in two sequences are due to the sequences being related or have arisen just by chance. The similarity of sequences is measured by their best local alignment score, and from that a p-value is computed. This p-value is the probability of picking two sequences from the null model that have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
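The p-value referred to above has the usual exceedance form: if S denotes the best local alignment score of two sequences drawn from the null model and s the observed score, then

\[
p \;=\; \Pr\big(S \ge s \mid \text{null model}\big),
\]

and the framework sketched in the thesis yields a tight upper bound on this probability.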
Abstract:
Background The objective is to estimate the incremental cost-effectiveness of the Australian National Hand Hygiene Initiative implemented between 2009 and 2012, using healthcare-associated Staphylococcus aureus bacteraemia as the outcome. Baseline comparators are the eight existing state and territory hand hygiene programmes. The setting is the Australian public healthcare system, and 1,294,656 admissions from the 50 largest Australian hospitals are included. Methods The design is a cost-effectiveness modelling study using a before-and-after quasi-experimental design. The primary outcome is cost per life year saved from reduced cases of healthcare-associated Staphylococcus aureus bacteraemia, with cost estimated as the annual ongoing maintenance costs less the costs saved from fewer infections. Data were harvested from existing sources or were collected prospectively, and the time horizon for the model was 12 months, 2011–2012. Findings No usable pre-implementation Staphylococcus aureus bacteraemia data were made available from the 11 study hospitals in Victoria or the single hospital in the Northern Territory, leaving 38 hospitals among six states and territories available for cost-effectiveness analyses. Total annual costs increased by $2,851,475 for a return of 96 years of life, giving an incremental cost-effectiveness ratio (ICER) of $29,700 per life year gained. Probabilistic sensitivity analysis revealed a 100% chance that the initiative was cost-effective in the Australian Capital Territory and Queensland, with ICERs of $1,030 and $8,988 respectively. There was an 81% chance it was cost-effective in New South Wales with an ICER of $33,353, a 26% chance for South Australia with an ICER of $64,729, and a 1% chance for Tasmania and Western Australia. The 12 hospitals in Victoria and the Northern Territory incur annual ongoing maintenance costs of $1.51M; no information was available to describe cost savings or health benefits. Conclusions The Australian National Hand Hygiene Initiative was cost-effective against an Australian threshold of $42,000 per life year gained. The return on investment varied among the states and territories of Australia.
Abstract:
An iterative algorithm based on probabilistic estimation is described for obtaining the minimum-norm solution of a very large, consistent, linear system of equations Ax = g, where A is an (m × n) matrix with non-negative elements, and x and g are, respectively, (n × 1) and (m × 1) vectors with positive components.
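The abstract does not give the update rule of the probabilistic-estimation algorithm. Purely as an illustration of an iterative scheme that recovers the minimum-norm solution of a consistent system Ax = g, the sketch below uses the classical Kaczmarz (row-action) iteration: started from the zero vector, its iterates stay in the row space of A and converge to the minimum-norm solution. It is not the paper's method.

```python
import numpy as np

def kaczmarz_min_norm(A, g, sweeps=100):
    """Row-action (Kaczmarz) iteration for a consistent system A x = g.
    Starting from x = 0 keeps iterates in the row space of A, so the limit
    is the minimum-norm solution. Illustrative only; not the
    probabilistic-estimation algorithm of the abstract."""
    A = np.asarray(A, dtype=float)
    g = np.asarray(g, dtype=float)
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += (g[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```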
Abstract:
- Objective To compare health service cost and length of stay between a traditional and an accelerated diagnostic approach to assess acute coronary syndromes (ACS) among patients who presented to the emergency department (ED) of a large tertiary hospital in Australia.
- Design, setting and participants This historically controlled study analysed data collected from two independent patient cohorts presenting to the ED with potential ACS. The first cohort of 938 patients was recruited in 2008–2010, and these patients were assessed using the traditional diagnostic approach detailed in the national guideline. The second cohort of 921 patients was recruited in 2011–2013 and was assessed with the accelerated diagnostic approach named the Brisbane protocol. The Brisbane protocol applied early serial troponin testing at 0 and 2 h after presentation to the ED, compared with 0 and 6 h testing in the traditional assessment process. The Brisbane protocol also defined a low-risk group of patients in whom no objective testing was performed. A decision tree model was used to compare the expected cost and length of stay in hospital between the two approaches. Probabilistic sensitivity analysis was used to account for model uncertainty.
- Results Compared with the traditional diagnostic approach, the Brisbane protocol was associated with a reduced expected cost of $1229 (95% CI −$1266 to $5122) and a reduced expected length of stay of 26 h (95% CI −14 to 136 h). The Brisbane protocol allowed physicians to discharge a higher proportion of low-risk and intermediate-risk patients from the ED within 4 h (72% vs 51%). Results from sensitivity analysis suggested the Brisbane protocol had a high chance of being cost-saving and time-saving.
- Conclusions This study provides some evidence of cost savings from a decision to adopt the Brisbane protocol. Benefits would arise for the hospital and for patients and their families.
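A decision tree model of this kind compares the two strategies by probability-weighting the cost and length of stay of each branch. The sketch below is a minimal illustration with entirely hypothetical branch probabilities, costs and lengths of stay; the study's actual model inputs are not given in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Branch:
    probability: float  # chance a patient follows this pathway
    cost: float         # cost of the pathway (AUD)
    los_hours: float    # length of stay on the pathway (hours)

def expected_outcomes(branches):
    """Probability-weighted mean cost and length of stay for one strategy."""
    assert abs(sum(b.probability for b in branches) - 1.0) < 1e-9
    cost = sum(b.probability * b.cost for b in branches)
    los = sum(b.probability * b.los_hours for b in branches)
    return cost, los

# Hypothetical inputs, for illustration only (not the study's data):
traditional = [Branch(0.51, 900.0, 6.0), Branch(0.49, 4200.0, 48.0)]
accelerated = [Branch(0.72, 700.0, 4.0), Branch(0.28, 4200.0, 40.0)]

for name, strategy in [("traditional", traditional), ("accelerated", accelerated)]:
    cost, los = expected_outcomes(strategy)
    print(f"{name}: expected cost ${cost:.0f}, expected LOS {los:.1f} h")
```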