952 results for Ruin Probability
Abstract:
Circular shortest paths represent a powerful methodology for image segmentation. The circularity condition ensures that the contour found by the algorithm is closed, a natural requirement for regular objects. Several implementations have been proposed in the past that either promise closure with high probability or ensure closure strictly, but with a mild computational efficiency handicap. Circularity can be viewed as a priori information that helps recover the correct object contour. Our "observation" is that circularity is only one among many possible constraints that can be imposed on shortest paths to guide them to a desirable solution. In this contribution, we illustrate this opportunity under a volume constraint but the concept is generally applicable. We also describe several adornments to the circular shortest path algorithm that proved useful in applications. © 2011 IEEE.
Abstract:
We propose expected attainable discrimination (EAD) as a measure to select discrete valued features for reliable discrimination between two classes of data. EAD is an average of the area under the ROC curves obtained when a simple histogram probability density model is trained and tested on many random partitions of a data set. EAD can be incorporated into various stepwise search methods to determine promising subsets of features, particularly when misclassification costs are difficult or impossible to specify. Experimental application to the problem of risk prediction in pregnancy is described.
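The EAD computation described above can be sketched directly: train a histogram probability model on one random half of the data, score the held-out half, compute the AUC, and average over many partitions. The sketch below is illustrative only; the partition count, the add-one smoothing, and the synthetic data are assumptions, not details from the paper.

```python
import random

def auc_from_scores(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    n_pairs = len(scores_pos) * len(scores_neg)
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / n_pairs

def expected_attainable_discrimination(xs, ys, n_partitions=50, train_frac=0.5, seed=0):
    """EAD sketch: mean test-set AUC of a histogram (frequency-count) model
    over many random train/test partitions of a discrete-valued feature."""
    rng = random.Random(seed)
    data = list(zip(xs, ys))
    aucs = []
    for _ in range(n_partitions):
        rng.shuffle(data)
        cut = int(train_frac * len(data))
        train, test = data[:cut], data[cut:]
        # Histogram probability model: P(class 1 | x) from training counts,
        # with add-one smoothing so unseen feature values score 0.5.
        counts = {}
        for x, y in train:
            c = counts.setdefault(x, [1, 1])  # [class0 count + 1, class1 count + 1]
            c[y] += 1
        def score(x):
            c0, c1 = counts.get(x, (1, 1))
            return c1 / (c0 + c1)
        pos = [score(x) for x, y in test if y == 1]
        neg = [score(x) for x, y in test if y == 0]
        if pos and neg:
            aucs.append(auc_from_scores(pos, neg))
    return sum(aucs) / len(aucs)

# Synthetic discrete feature: its value correlates with the class label.
rng = random.Random(1)
xs, ys = [], []
for _ in range(400):
    y = rng.random() < 0.5
    x = rng.choice([0, 1, 2]) if not y else rng.choice([1, 2, 3])
    xs.append(x)
    ys.append(int(y))
ead = expected_attainable_discrimination(xs, ys)
print(round(ead, 3))
```

Because the feature distributions overlap only partially, the averaged AUC lands well above the chance level of 0.5 but below perfect discrimination.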
Abstract:
Objectives: Directly measuring disease incidence in a population is difficult and not feasible to do routinely. We describe the development and application of a new method of estimating, at a population level, the number of incident genital chlamydia infections, and the corresponding incidence rates, by age and sex, using routine surveillance data. Methods: A Bayesian statistical approach was developed to calibrate the parameters of a decision-pathway tree against national data on numbers of notifications and tests conducted (2001-2013). Independent beta probability density functions were adopted as priors on the time-independent parameters; the shape parameters of these beta distributions were chosen to match prior estimates sourced from peer-reviewed literature or expert opinion. To best facilitate the calibration, multivariate Gaussian priors on (the logistic transforms of) the time-dependent parameters were adopted, using the Matérn covariance function to favour smooth changes over consecutive years and across adjacent age cohorts. The model outcomes were validated by comparing them with independent empirical epidemiological measures, i.e. prevalence and incidence as reported by other studies. Results: Model-based estimates suggest that the total number of people acquiring chlamydia per year in Australia has increased by ~120% over 12 years. Nationally, an estimated 356,000 people acquired chlamydia in 2013, which is 4.3 times the number of reported diagnoses. This corresponded to an annual chlamydia incidence estimate of 1.54% in 2013, up from 0.81% in 2001 (a ~90% increase). Conclusions: We developed a statistical method which uses routine surveillance (notifications and testing) data to produce estimates of the extent of, and trends in, chlamydia incidence.
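The Matérn covariance mentioned for the time-dependent priors can be sketched in a few lines. The smoothness order (nu = 3/2) and the hyperparameter values below are illustrative assumptions, not values from the paper; the point is how correlation decays with the gap in years, so consecutive years are tightly coupled and distant years nearly independent.

```python
import math

def matern32(d, variance=1.0, length_scale=2.0):
    """Matern covariance with smoothness nu = 3/2:
    k(d) = sigma^2 * (1 + sqrt(3)*d/l) * exp(-sqrt(3)*d/l)."""
    a = math.sqrt(3.0) * d / length_scale
    return variance * (1.0 + a) * math.exp(-a)

# Covariance matrix over the study years: nearby years are strongly
# correlated, so the prior favours smooth year-to-year changes.
years = list(range(2001, 2014))
K = [[matern32(abs(yi - yj)) for yj in years] for yi in years]
print(round(K[0][0], 3), round(K[0][1], 3), round(K[0][12], 3))
```

The same kernel applied to age-cohort distances couples adjacent cohorts in the same way.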
Abstract:
Ambiguity validation, an important procedure in integer ambiguity resolution, tests the correctness of the fixed integer ambiguity of phase measurements before it is used for positioning computation. Most existing investigations of ambiguity validation focus on the test statistic; how to determine the threshold more reasonably is less well understood, although it is one of the most important topics in ambiguity validation. Currently, there are two threshold determination methods in the ambiguity validation procedure: the empirical approach and the fixed failure rate (FF) approach. The empirical approach is simple but lacks a theoretical basis; the fixed failure rate approach has a rigorous probability-theory basis but employs a more complicated procedure. This paper focuses on how to determine the threshold easily and reasonably. Both the FF-ratio test and the FF-difference test are investigated in this research, and extensive simulation results show that the FF-difference test can achieve comparable or even better performance than the well-known FF-ratio test. Another benefit of adopting the FF-difference test is that its threshold can be expressed as a function of the integer least-squares (ILS) success rate with a specified failure rate tolerance. Thus, a new threshold determination method, named the threshold function, is proposed for the FF-difference test. The threshold function method preserves the fixed failure rate characteristic and is also easy to apply. The performance of the threshold function is validated with simulated data. The validation results show that with the threshold function method, the impact of the modelling error on the failure rate is less than 0.08%. Overall, the threshold function for the FF-difference test is a very promising threshold determination method, and it makes the FF approach applicable to real-time GNSS positioning applications.
Abstract:
The estimation of the critical gap has been an issue since the 1970s, when gap acceptance was introduced to evaluate the capacity of unsignalized intersections. The critical gap is the shortest gap that a driver is assumed to accept. A driver's critical gap cannot be measured directly, and a number of techniques have been developed to estimate the mean critical gap of a sample of drivers. This paper reviews the ability of the Maximum Likelihood technique and the Probability Equilibrium Method (PEM) to predict the mean and standard deviation of the critical gap, using a simulation of 100 drivers repeated 100 times for each flow condition. The Maximum Likelihood method gave consistent and unbiased estimates of the mean critical gap, whereas the Probability Equilibrium Method had a significant bias that was dependent on the flow in the priority stream. Both methods were reasonably consistent, although the Maximum Likelihood method was slightly better. If drivers are inconsistent, then again the Maximum Likelihood method is superior. A criticism levelled at the Maximum Likelihood method is that a distribution of the critical gap has to be assumed; it was shown that this does not significantly affect its ability to predict the mean and standard deviation of the critical gaps. Finally, the Maximum Likelihood method can produce reasonable estimates with observations from as few as 25 to 30 drivers. A spreadsheet procedure for using the Maximum Likelihood method is provided in this paper. The PEM can be improved if the maximum rejected gap is used.
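The Maximum Likelihood technique reviewed above treats each driver's unobservable critical gap as interval-censored: it lies somewhere between that driver's largest rejected gap and their accepted gap. The sketch below is illustrative only; the log-normal distributional assumption, the coarse grid search, and the synthetic drivers are choices of this sketch, not the paper's exact procedure.

```python
import math
import random

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def log_likelihood(mu, sigma, drivers):
    """Each driver contributes P(r_i < critical gap <= a_i), where r_i is the
    largest rejected gap and a_i the accepted gap, assuming log-normal
    critical gaps with parameters (mu, sigma) on the log scale."""
    ll = 0.0
    for r, a in drivers:
        p = norm_cdf((math.log(a) - mu) / sigma) - norm_cdf((math.log(r) - mu) / sigma)
        ll += math.log(max(p, 1e-300))
    return ll

def fit_critical_gap(drivers):
    """Coarse grid search over (mu, sigma) of the log-normal model."""
    best = None
    for mu in [1.0 + 0.01 * i for i in range(100)]:        # log-seconds
        for sigma in [0.05 + 0.01 * j for j in range(50)]:
            ll = log_likelihood(mu, sigma, drivers)
            if best is None or ll > best[0]:
                best = (ll, mu, sigma)
    _, mu, sigma = best
    mean = math.exp(mu + 0.5 * sigma * sigma)  # mean of the log-normal
    return mean, mu, sigma

# Synthetic drivers: true critical gap c ~ log-normal(1.4, 0.2) seconds;
# we only observe the bracketing pair (largest rejected gap, accepted gap).
rng = random.Random(2)
drivers = []
for _ in range(100):
    c = math.exp(rng.gauss(1.4, 0.2))
    drivers.append((c * rng.uniform(0.6, 0.99), c * rng.uniform(1.01, 1.4)))
mean_cg, mu_hat, sigma_hat = fit_critical_gap(drivers)
print(round(mean_cg, 2))
```

With the true mean critical gap near exp(1.42) ≈ 4.1 s, the interval-censored MLE recovers a close estimate even though no critical gap is observed directly.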
Abstract:
A number of Intelligent Transportation Systems (ITS) devices were used with an advanced driving simulator to assess their influence on driving behavior. Three types of ITS interventions, namely in-vehicle video (ITS1), in-vehicle audio (ITS2), and an on-road flashing marker (ITS3), were tested. The results from the driving simulator were then used as inputs to a model developed in a traffic micro-simulation (Vissim 5.4) in order to assess the safety interventions. Using the driving simulator, 58 participants were required to drive through a number of active and passive crossings, with and without an ITS device and in the presence or absence of an approaching train. The change in driver behavior, in terms of speed and compliance rate, was greater at passive crossings than at active crossings. The difference in the speed of drivers approaching ITS devices was very small, which indicates that ITS helps drivers encounter the crossings in a safer way. Since the traffic simulation could not natively replicate a dynamic speed change or a probability of stopping that varies with the ITS safety device, some modifications to the simulation were made. The results showed that exposure to ITS devices at active crossings did not significantly influence drivers' behavior according to the traffic performance indicators used, such as delay time, number of stops, speed, and stopped delay. On the other hand, the traffic simulation results for passive crossings, where low traffic volumes and low train headway normally occur, showed that ITS devices improved overall traffic performance.
Abstract:
In this paper, the security of two recent RFID mutual authentication protocols is investigated. The first protocol is a scheme proposed by Huang et al. [7] and the second one by Huang, Lin and Li [6]. We show that these two protocols have several weaknesses. In Huang et al.'s scheme, an adversary can determine the 32-bit secret password with a probability of 2^-2, and in the Huang-Lin-Li scheme, a passive adversary can recognize a target tag with a success probability of 1 - 2^-4, and an active adversary can determine all 32 bits of the Access password with a success probability of 2^-4. The computational complexity of these attacks is negligible.
Abstract:
Objective: The incidence and cost of complications occurring in older and younger inpatients were compared. Design: Secondary analysis of hospital-recorded diagnoses and costs for multiday-stay inpatients in 68 public hospitals in two Australian states. Main outcome measures: A complication is defined as a hospital-acquired diagnosis that required additional treatment. The Australian Classification of Hospital-Acquired Diagnoses system is used to identify these complications. Results: Inpatients aged >70 years have a 10.9% complication rate, which is not substantially different from the 10.89% complication rate found in patients aged <70 years. Examination of the probability by single year of age, however, showed that the peak incidence associated with the neonatal period and childbirth is balanced by rates of up to 20% in patients aged >80 years. Examining the adult patient population (40-70 years), we found that while some common complications are not age specific (electrolyte disorders and cardiac arrhythmias), others (urinary tract and lower respiratory tract infections) are more common in the older adult inpatient. Conclusion: For inpatients aged >70 years, the risk of complications increases. The incidence of hospital-acquired diagnoses in older adults differs significantly from the incidence rates found in younger cohorts. Urinary tract infection and alteration to mental state are more common in older adult inpatients. Surprisingly, these complications do not result in additional costs when compared with the costs for the same complications in younger adults. Greater awareness of these differing patterns will allow patient safety efforts for older patients to focus on the complications with the highest incidence and cost.
Abstract:
Since the beginning of the 1980s, the Iranian health care system has undergone several reforms designed to increase the accessibility of health services. Notwithstanding these reforms, out-of-pocket payments, which create a barrier to accessing health services, contribute almost half of total health care financing in Iran. This study aimed to provide a greater understanding of the inequality and determinants of out-of-pocket expenditure (OOPE) and the related catastrophic expenditure (CE) for hospital services in Iran, using data from a nationwide survey, the 2003 Utilisation of Health Services Survey (UHSS). The concentration index and the Heckman selection model were used to assess inequality and the factors associated with these expenditures. Inequality analysis suggests that CE is concentrated among households at lower socioeconomic levels. The results of the Heckman selection model indicate that factors such as length of stay, admission to a hospital owned by the private sector or the Ministry of Health and Medical Education, and living in remote areas are positively associated with higher OOPE. Results of the ordered-probit selection model demonstrate that length of stay, a lower household wealth index, and admission to a private hospital are major factors contributing to an increased probability of CE. We also find that households living in East Azarbaijan, Kordestan, and Sistan and Balochestan face a higher level of CE. Based on our findings, the current employer-sponsored health insurance system does not offer equal protection against hospital expenditure in Iran. It seems that a single universal health insurance scheme that covers health services for all Iranians, regardless of their employment status, could better protect households from catastrophic health spending.
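The concentration index used in the inequality analysis can be computed with the standard "convenient covariance" formula, CI = 2 cov(y, r) / mean(y), where r is the household's fractional rank in the socioeconomic distribution. The quintile figures below are hypothetical, purely to illustrate the sign convention; a negative index means the quantity is concentrated among the poor, as the abstract reports for catastrophic expenditure.

```python
def concentration_index(values, ranks=None):
    """Concentration index via CI = 2*cov(y, r)/mean(y), where r is the
    fractional socioeconomic rank (0 = poorest, 1 = richest).
    Negative CI: the quantity is concentrated among the poor."""
    n = len(values)
    if ranks is None:
        # Assume `values` is already ordered from poorest to richest group.
        ranks = [(i + 0.5) / n for i in range(n)]
    mean_y = sum(values) / n
    mean_r = sum(ranks) / n
    cov = sum((y - mean_y) * (r - mean_r) for y, r in zip(values, ranks)) / n
    return 2.0 * cov / mean_y

# Hypothetical catastrophic-expenditure rates by wealth quintile (poorest first):
ce_rates = [0.18, 0.14, 0.10, 0.07, 0.05]
ci = concentration_index(ce_rates)
print(round(ci, 3))  # negative: concentrated among poorer households
```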
Abstract:
This study examines hospital care system performance in Iran. We first briefly review the hospital care delivery system in Iran, and then investigate it from financial, utilization, and quality perspectives. In particular, we examine the extent to which the health care system in Iran protects people from the financial consequences of health care expenses and whether inpatient care is distributed according to need. We also empirically analyze the quality of hospital care in Iran using patient satisfaction information collected in a national health service survey. The Iranian health care system is characterized by unequal access to hospital care, a mismatch between the distribution of services and inpatients' needs, and a high probability of financial catastrophe due to out-of-pocket payments for inpatient services. Our analysis indicates that the quality of hospital care among Iranian provinces favors patients residing in provinces with a high number of hospital beds per capita, such as Esfahan and Yazd. Patients living in provinces with low accessibility to hospital care (e.g. Gilan, Kermanshah, Hamadan, Chahar Mahall and Bakhtiari, Khuzestan, and Sistan and Baluchestan) receive lower-quality services. These findings suggest that policymakers in Iran should work on several fronts, including utilization, financing, and service quality, to improve hospital care.
Abstract:
The requirement of isolated relays is one of the prime obstacles to utilizing sequential slotted cooperative protocols in Vehicular Ad-hoc Networks (VANETs). Significant research advancement has taken place to improve the diversity-multiplexing trade-off (DMT) of cooperative protocols in conventional mobile networks, without much attention to vehicular ad-hoc networks. We have extended the concept of sequential slotted amplify-and-forward (SAF) protocols to the context of urban vehicular ad-hoc networks. Multiple Input Multiple Output (MIMO) reception is used at relaying vehicular nodes to isolate the relays effectively. The proposed approach adds pragmatic value to sequential slotted cooperative protocols while achieving attractive performance gains in urban VANETs. We have analysed the DMT bounds and the outage probabilities of the proposed scheme. The results suggest that the proposed scheme can achieve an optimal DMT similar to the DMT upper bound of the sequential SAF. Furthermore, the proposed scheme outperforms the SAF protocol by 2.5 dB at a target outage probability of 10^-4.
Abstract:
Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. 
This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
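A toy version of the comparison above can be sketched as a discrete-time stochastic Lotka-Volterra simulation, contrasting no control with an upper-trigger harvest that culls predators back to a threshold whenever they exceed it. All parameter values, the Euler-style time-stepping, and the extinction cutoff are illustrative assumptions of this sketch, not the paper's calibrated model.

```python
import random

def simulate(control, years=50, seed=0, trials=200):
    """Discrete-time stochastic Lotka-Volterra sketch. `control(pred)` returns
    the number of predators removed each year. Returns the prey extinction
    rate and the mean minimum prey population across trials."""
    rng = random.Random(seed)
    extinct = 0
    min_sizes = []
    for _ in range(trials):
        prey, pred = 200.0, 20.0
        min_prey = prey
        for _ in range(years):
            growth = rng.gauss(0.3, 0.1)            # stochastic prey growth rate
            prey += prey * growth - 0.02 * prey * pred   # predation loss
            pred += 0.002 * prey * pred - 0.2 * pred     # predator gain/decay
            pred = max(pred - control(pred), 0.0)        # management removal
            prey = max(prey, 0.0)
            min_prey = min(min_prey, prey)
            if prey < 1.0:                               # quasi-extinction cutoff
                extinct += 1
                break
        min_sizes.append(min_prey)
    return extinct / trials, sum(min_sizes) / trials

no_control = lambda pred: 0.0
upper_trigger = lambda pred, T=15.0: max(pred - T, 0.0)  # cull back to threshold T

ext0, min0 = simulate(no_control)
ext1, min1 = simulate(upper_trigger)
print(round(ext0, 2), round(min0, 1), round(ext1, 2), round(min1, 1))
```

In this toy model the trigger strategy removes the most predators exactly when predator density peaks, which is when predation pressure on the prey is greatest, echoing the mechanism the abstract credits for the strategy's cost-effectiveness.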
Abstract:
Aim: To quantify the consequences of major threats to biodiversity, such as climate and land-use change, it is important to use explicit measures of species persistence, such as extinction risk. The extinction risk of metapopulations can be approximated through simple models, providing a regional snapshot of the extinction probability of a species. We evaluated the extinction risk of three species under different climate change scenarios in three different regions of the Mexican cloud forest, a highly fragmented habitat that is particularly vulnerable to climate change. Location: Cloud forests in Mexico. Methods: Using Maxent, we estimated the potential distribution of cloud forest for three different time horizons (2030, 2050 and 2080) and their overlap with protected areas. Then, we calculated the extinction risk of three contrasting vertebrate species for two scenarios: (1) climate change only (all suitable areas of cloud forest through time) and (2) climate and land-use change (only suitable areas within a currently protected area), using an explicit patch-occupancy approximation model and calculating the joint probability of all populations becoming extinct when the number of remaining patches was less than five. Results: Our results show that the extent of environmentally suitable areas for cloud forest in Mexico will sharply decline in the next 70 years. We discovered that if all habitat outside protected areas is transformed, then only species with small area requirements are likely to persist. With habitat loss through climate change only, high dispersal rates are sufficient for persistence, but this requires protection of all remaining cloud forest areas. Main conclusions: Even if high dispersal rates mitigate the extinction risk of species due to climate change, the synergistic impacts of changing climate and land use further threaten the persistence of species with higher area requirements. 
Our approach for assessing the impacts of threats on biodiversity is particularly useful when there is little time or data for detailed population viability analyses. © 2013 John Wiley & Sons Ltd.
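The extinction-risk rule described in the Methods above can be sketched as follows. The independence of patch extinctions and the illustrative probabilities are assumptions of this sketch; the threshold of five patches comes from the abstract.

```python
def joint_extinction_probability(patch_extinction_probs):
    """Under an independence assumption, the probability that every remaining
    population goes extinct is the product of per-patch extinction probabilities."""
    p = 1.0
    for q in patch_extinction_probs:
        p *= q
    return p

def species_extinction_risk(remaining_patches, per_patch_prob, threshold=5):
    """Rule sketched from the abstract: when fewer than `threshold` suitable
    patches remain, take the joint probability that all of them are lost;
    with enough patches the metapopulation is treated as persisting."""
    if remaining_patches >= threshold:
        return 0.0
    return joint_extinction_probability([per_patch_prob] * remaining_patches)

print(species_extinction_risk(3, 0.6))  # = 0.6**3
print(species_extinction_risk(8, 0.6))  # enough patches -> treated as safe
```

The rule captures why habitat loss is so punishing in this framework: each patch removed below the threshold multiplies the species-level risk upward.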
Abstract:
Money is often a limiting factor in conservation, and attempting to conserve endangered species can be costly. Consequently, a framework for optimizing fiscally constrained conservation decisions for a single species is needed. In this paper we find the optimal budget allocation among isolated subpopulations of a threatened species to minimize local extinction probability. We solve the problem using stochastic dynamic programming, derive a useful and simple alternative guideline for allocating funds, and test its performance using forward simulation. The model considers subpopulations that persist in habitat patches of differing quality, which in our model is reflected in different relationships between money invested and extinction risk. We discover that, in most cases, subpopulations that are less efficient to manage should receive more money than those that are more efficient to manage, due to higher investment needed to reduce extinction risk. Our simple investment guideline performs almost as well as the exact optimal strategy. We illustrate our approach with a case study of the management of the Sumatran tiger, Panthera tigris sumatrae, in Kerinci Seblat National Park (KSNP), Indonesia. We find that different budgets should be allocated to the separate tiger subpopulations in KSNP. The subpopulation that is not at risk of extinction does not require any management investment. Based on the combination of risks of extinction and habitat quality, the optimal allocation for these particular tiger subpopulations is an unusual case: subpopulations that occur in higher-quality habitat (more efficient to manage) should receive more funds than the remaining subpopulation that is in lower-quality habitat. Because the yearly budget allocated to the KSNP for tiger conservation is small, to guarantee the persistence of all the subpopulations that are currently under threat we need to prioritize those that are easier to save. 
When allocating resources among subpopulations of a threatened species, the combined effects of differences in habitat quality, cost of action, and current subpopulation probability of extinction need to be integrated. We provide a useful guideline for allocating resources among isolated subpopulations of any threatened species. © 2010 by the Ecological Society of America.
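The structure of the allocation problem above can be sketched as a small dynamic program over discrete budget units, with an illustrative exponential curve linking investment to extinction risk (the paper's actual model is a stochastic dynamic program with calibrated, subpopulation-specific parameters). Notably, the toy solution reproduces the paper's general finding: the less efficient subpopulation receives the larger share.

```python
import math

def extinction_prob(base_risk, efficiency, funds):
    """Illustrative risk curve: extinction probability decays exponentially
    with invested funds; `efficiency` reflects habitat quality."""
    return base_risk * math.exp(-efficiency * funds)

def optimal_allocation(subpops, budget):
    """Dynamic program over discrete budget units, minimising the expected
    number of subpopulations lost. subpops: list of (base_risk, efficiency)."""
    # best[b]: (expected losses, allocation) using the subpops seen so far and b units.
    best = {b: (0.0, []) for b in range(budget + 1)}
    for base, eff in subpops:
        new = {}
        for b in range(budget + 1):
            options = []
            for spend in range(b + 1):
                prev_cost, prev_alloc = best[b - spend]
                options.append((prev_cost + extinction_prob(base, eff, spend),
                                prev_alloc + [spend]))
            new[b] = min(options, key=lambda t: t[0])
        best = new
    return best[budget]

# Two hypothetical subpopulations facing the same base risk: one cheap to
# protect (high efficiency), one expensive (low efficiency).
subpops = [(0.8, 0.50), (0.8, 0.15)]
risk, alloc = optimal_allocation(subpops, budget=10)
print(alloc, round(risk, 3))
```

Because the cheap subpopulation's risk is driven down quickly, the marginal value of further spending there collapses, and the optimum diverts most of the budget to the expensive one.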
Abstract:
Long-term systematic population monitoring data sets are rare but are essential in identifying changes in species abundance. In contrast, community groups and natural history organizations have collected many species lists. These represent a large, untapped source of information on changes in abundance but are generally considered of little value. The major problem with using species lists to detect population changes is that the amount of effort used to obtain the list is often uncontrolled and usually unknown. It has been suggested that the number of species on the list, the "list length," can be used as a measure of effort. This paper significantly extends the utility of Franklin's approach using Bayesian logistic regression. We demonstrate the value of List Length Analysis to model changes in species prevalence (i.e., the proportion of lists on which the species occurs) using bird lists collected by a local bird club over 40 years around Brisbane, southeast Queensland, Australia. We estimate the magnitude and certainty of change for 269 bird species and calculate the probabilities that there have been declines and increases of given magnitudes. List Length Analysis confirmed suspected species declines and increases. This method is an important complement to systematically designed intensive monitoring schemes and provides a means of utilizing data that may otherwise be deemed useless. The results of List Length Analysis can be used to target species of conservation concern for listing purposes or for more intensive monitoring. While Bayesian methods are not essential for List Length Analysis, they offer more flexibility in interrogating the data and can provide a range of parameters that are easy to interpret and can facilitate conservation listing and prioritization. © 2010 by the Ecological Society of America.
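The core of List Length Analysis is a logistic regression of a species' presence on each list against time, with list length as the effort covariate. The sketch below fits that model by plain maximum likelihood (batch gradient ascent) on synthetic lists; the paper's fit is Bayesian, and the covariate scaling, coefficients, and data here are assumptions of this sketch.

```python
import math
import random

def fit_logistic(rows, lr=0.15, epochs=1000):
    """MLE sketch of the List Length Analysis model:
    logit P(species on list) = b0 + b1*year_scaled + b2*log(list_length).
    Plain batch gradient ascent on the log-likelihood."""
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        g = [0.0, 0.0, 0.0]
        for year_s, log_len, present in rows:
            x = (1.0, year_s, log_len)
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            for i in range(3):
                g[i] += (present - p) * x[i]     # gradient of log-likelihood
        for i in range(3):
            w[i] += lr * g[i] / len(rows)
    return w

# Synthetic lists over the survey period: the species' reporting rate declines
# with time and rises with list length (longer lists = more observer effort).
rng = random.Random(3)
rows = []
for _ in range(800):
    year_s = rng.uniform(-1.0, 1.0)              # scaled survey year
    length = rng.randint(5, 80)                  # species recorded on the list
    log_len = math.log(length / 30.0)
    z = 0.2 - 1.0 * year_s + 0.8 * log_len       # true decline + effort effect
    present = int(rng.random() < 1.0 / (1.0 + math.exp(-z)))
    rows.append((year_s, log_len, present))
b0, b_year, b_len = fit_logistic(rows)
print(round(b_year, 2), round(b_len, 2))
```

A negative year coefficient, after adjusting for list length, is the signal of a decline in prevalence; the effort covariate absorbs the variation that would otherwise masquerade as abundance change.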