959 results for Average Case Complexity
Abstract:
We present an overlapping generations model that explains price dispersion among Catalonian healthcare insurance firms. The model shows that firms with different premium policies can coexist. Furthermore, if interest rates are low, firms that apply an equal premium to all insureds can charge higher average prices than insurers that set premiums according to the risk of the insured. Keywords: economic theory, health insurance, health economics.
Abstract:
Every year, debris flows cause huge damage in mountainous areas. Because of population pressure in hazardous zones, the socio-economic impact is much higher than in the past. The development of indicative susceptibility hazard maps is therefore of primary importance, particularly in developing countries. However, the complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. A debris flow model, Flow-R, has been developed for regional susceptibility assessments, applying a GIS-based approach to a digital elevation model (DEM). The automatic identification of source areas and the estimation of debris flow spreading, based on GIS tools, provide a substantial basis for a preliminary susceptibility assessment at a regional scale. One of the main advantages of this model is its flexibility: everything is open to the user, from the choice of data to the selection of the algorithms and their parameters. The Flow-R model was tested in three different contexts, two in Switzerland and one in Pakistan, for indicative susceptibility hazard mapping. The quality of the DEM proved to be the most important parameter both for obtaining reliable propagation results and for identifying potential debris flow sources.
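As a rough illustration of the kind of GIS-based spreading such a model performs, the following sketch propagates susceptibility from a source cell of a small DEM grid to its downslope neighbours in proportion to slope. The DEM values and the propagation rule are hypothetical placeholders; Flow-R's actual algorithms (multiple flow direction, persistence weighting, energy-limited runout) are not reproduced here.

```python
# Minimal sketch of DEM-based debris-flow spreading. Illustrative only:
# the rule below (share susceptibility among downslope neighbours in
# proportion to the elevation drop) is a stand-in for Flow-R's actual
# multiple flow-direction and energy-limited runout algorithms.
import numpy as np

def spread(dem, source, n_steps=50):
    """Propagate susceptibility from a source cell to downslope cells."""
    susc = np.zeros_like(dem, dtype=float)
    susc[source] = 1.0
    rows, cols = dem.shape
    for _ in range(n_steps):
        new = susc.copy()
        for r in range(rows):
            for c in range(cols):
                if susc[r, c] == 0.0:
                    continue
                # Collect downslope neighbours and their elevation drops.
                nbrs, drops = [], []
                for dr in (-1, 0, 1):
                    for dc in (-1, 0, 1):
                        rr, cc = r + dr, c + dc
                        if (dr, dc) != (0, 0) and 0 <= rr < rows and 0 <= cc < cols:
                            drop = dem[r, c] - dem[rr, cc]
                            if drop > 0:
                                nbrs.append((rr, cc))
                                drops.append(drop)
                for (rr, cc), d in zip(nbrs, drops):
                    # Share susceptibility in proportion to slope.
                    new[rr, cc] = max(new[rr, cc], susc[r, c] * d / sum(drops))
        susc = new
    return susc

# Hypothetical 5x5 DEM (elevations in metres), source in the top-left corner.
dem = np.array([[50, 48, 46, 44, 42],
                [48, 46, 44, 42, 40],
                [46, 44, 42, 40, 38],
                [44, 42, 40, 38, 36],
                [42, 40, 38, 36, 34]], dtype=float)
print(spread(dem, source=(0, 0)).round(2))
```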
Abstract:
The understanding of sedimentary evolution is intimately related to knowledge of the exact ages of the sediments. When working on carbonate sediments, age dating is commonly based on paleontological observations and established biozonations, which may prove relatively imprecise. Dating by means of strontium isotope ratios in marine bioclasts is probably the best method for precisely dating carbonate successions, provided that the sample reflects original marine geochemical characteristics. This requires a careful study of each sample, including its petrography, SEM and cathodoluminescence observations, stable carbon and oxygen isotope geochemistry and, finally, the strontium isotope measurement itself. On the Nicoya Peninsula (northwestern Costa Rica), sediments from the Piedras Blancas Formation, Nambi Formation and Quebrada Pavas Formation were dated by means of strontium isotope ratios measured in Upper Cretaceous Inoceramus shell fragments. Results show average 87Sr/86Sr values of 0.707654 (middle late Campanian) for the Piedras Blancas Formation, 0.707322 (Turonian-Coniacian) for the Nambi Formation and 0.707721 (late Campanian-Maastrichtian) for the Quebrada Pavas Formation. Abundant detrital components in the studied formations pose a difficulty for strontium isotope dating: the fossil-bearing sediments can contaminate the target fossil with strontium mobilized from basalts during diagenesis, significantly altering the measured strontium isotope ratios and hence the obtained ages. The new and more precise age assignments refine the chronostratigraphic chart of the sedimentary and tectonic evolution of the Nicoya Peninsula, providing better insight into the evolution of this region. Meteor Cruise M81 dredged shallow-water carbonates from the Hess Rise and Hess Escarpment during March 2010. Several of these shallow-water carbonates contain abundant larger foraminifera indicating an Eocene-Oligocene age. In this study, strontium isotope values ranging from 0.707847 to 0.708238 can be interpreted as a Rupelian to Chattian age for these sediments. These platform sediments rest on seamounts now located at depths reaching 1600 m. Observation of the sedimentologic characteristics of these sediments has helped to resolve apparent discrepancies between fossil and strontium isotope ages. Hence, it is possible to show that subsidence was active during early Miocene times. On La Désirade (Guadeloupe, France), the Neogene to Quaternary carbonate cover has been dated by microfossils and some U/Th ages, but disagreements persisted among the paleontological ages of the formations. Strontium isotope ratios ranging from 0.709047 to 0.709076 show the Limestone Table of La Désirade to range from an Early Pliocene to a Late Pliocene/early Pleistocene age. A very late Miocene age (87Sr/86Sr = 0.709013) can be assigned to the Detrital Offshore Limestone. The flat volcanic basement must have been eroded by wave action during a long period of stable relative sea level. Sediments of the Table Limestone on La Désirade show both lowstand and highstand facies that encroach on the igneous basement, implying deposition during a major phase of subsidence creating accommodation space. Subsidence was followed by tectonic uplift, documented by fringing reefs and beach rocks that young from the top of the Table Limestone (180 m) towards the present coastline.
Strontium isotope ratios from two different fringing reefs (0.707172 and 0.709145) and from a beach rock (0.709163) allow tentative dating (~125 ky, ~400 ky, ~945 ky) and indicate an uplift rate of about 5 cm/ky for La Désirade Island over this period. The documented subsidence and uplift history calls for a new model of the tectonic evolution of the area.
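To make the dating step concrete, the sketch below interpolates a numerical age from a measured 87Sr/86Sr ratio on a seawater-curve lookup table. The calibration pairs are hypothetical placeholders, not the published LOWESS calibration (e.g. McArthur et al.) that studies like this rely on, and the real seawater curve is only piecewise monotonic.

```python
# Sketch: convert a measured 87Sr/86Sr ratio to an age by interpolating
# on a seawater-curve lookup table. The (age, ratio) pairs below are
# hypothetical placeholders, NOT the published LOWESS calibration.
import numpy as np

ages_ma = np.array([0.0, 5.0, 28.0, 34.0, 75.0, 90.0])        # age, Ma
ratios  = np.array([0.70917, 0.70900, 0.70810, 0.70780,
                    0.70770, 0.70730])                         # seawater 87Sr/86Sr

def sr_age(measured_ratio):
    """Interpolate an age from a ratio, assuming the curve is monotonic
    over the interval of interest (only piecewise true in reality)."""
    # np.interp requires increasing x, hence the reversed arrays.
    return float(np.interp(measured_ratio, ratios[::-1], ages_ma[::-1]))

for r in (0.707654, 0.707322, 0.707721):   # ratios reported above
    print(f"87Sr/86Sr = {r:.6f} -> ~{sr_age(r):.1f} Ma (illustrative only)")
```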
Abstract:
In this paper, we study the average inter-crossing number between two random walks and two random polygons in three-dimensional space. The random walks and polygons considered are the so-called equilateral random walks and polygons, in which each segment is of unit length. We show that the mean average inter-crossing number (ICN) between two equilateral random walks of the same length n is approximately linear in n, and we determine the prefactor of the linear term, a = (3 ln 2)/8 ≈ 0.2599. In the case of two random polygons of length n, the mean average ICN is also linear in n, but the prefactor differs from that of the random walks. These approximations apply when the starting points of the random walks and polygons are a distance p apart and p is small compared to n. We propose a fitting model that captures the theoretical asymptotic behaviour of the mean average ICN for large values of p. Our simulation results show that the model in fact works very well for the entire range of p. We also study the mean ICN between two equilateral random walks or polygons of different lengths. An interesting result is that even if one random walk (polygon) has a fixed length, the mean average ICN between the two still approaches infinity as the length of the other random walk (polygon) approaches infinity. The data provided by our simulations match our theoretical predictions very well.
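The walks in question are straightforward to simulate: each step is a unit vector drawn uniformly from the sphere. The sketch below generates such a walk and evaluates the predicted linear growth of the mean average ICN; computing the ICN itself (averaging crossing counts of the two projected curves over all projection directions) is beyond this snippet.

```python
# Sketch: generate an equilateral random walk (unit steps, uniformly
# random directions) and evaluate the predicted linear growth of the
# mean average inter-crossing number, ICN ~ a*n with a = 3 ln 2 / 8.
import numpy as np

def equilateral_walk(n, rng=None):
    """Return the (n+1) x 3 array of vertices of an n-step walk."""
    if rng is None:
        rng = np.random.default_rng()
    z = rng.uniform(-1.0, 1.0, n)              # uniform on the sphere:
    phi = rng.uniform(0.0, 2.0 * np.pi, n)     # uniform z and azimuth
    s = np.sqrt(1.0 - z**2)
    steps = np.column_stack((s * np.cos(phi), s * np.sin(phi), z))
    return np.vstack((np.zeros(3), np.cumsum(steps, axis=0)))

a = 3 * np.log(2) / 8                          # prefactor from the abstract
print(f"a = {a:.4f}")                          # ~0.2599
for n in (50, 100, 200):
    print(f"n = {n:4d}: predicted mean average ICN ~ {a * n:.1f}")

walk = equilateral_walk(100)
# Sanity check: every step has unit length.
print(np.allclose(np.linalg.norm(np.diff(walk, axis=0), axis=1), 1.0))
```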
Abstract:
The public primary school system in the State of Geneva, Switzerland, is characterized by centrally evaluated pupil performance measured with standardized tests. As a result, consistent data are collected across the system. The 2010-2011 dataset is used to develop a two-stage data envelopment analysis (DEA) of school efficiency. In the first stage, DEA is employed to calculate an individual efficiency score for each school. Average efficiency is 93%: on average, each school could reduce its inputs by 7% whilst maintaining the same quality of pupil performance. The cause of inefficiency lies in perfectible management. In the second stage, efficiency is regressed on school characteristics and environmental variables, i.e. external factors outside the control of headteachers. The model is tested for multicollinearity, heteroskedasticity and endogeneity. Four variables are identified as statistically significant. School efficiency is negatively influenced by (1) the provision of special education, (2) the proportion of disadvantaged pupils enrolled at the school and (3) operations being held on multiple sites, but positively influenced by school size (captured by the number of pupils). The proportion of allophone pupils, location in an urban area and the provision of reception classes for immigrant pupils are not significant. Although the significant variables influencing school efficiency are outside the control of headteachers, it is still possible to either boost the positive impact or curb the negative impact.
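For readers unfamiliar with the first stage, the sketch below solves the input-oriented DEA linear program for a handful of hypothetical schools. It assumes constant returns to scale for simplicity; the abstract does not state the exact DEA specification used, and the school data are invented.

```python
# Sketch of first-stage, input-oriented DEA (CCR / constant returns to
# scale, chosen for simplicity; the study's exact specification is not
# stated in the abstract). School data below are hypothetical.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency score of unit j0.
    X: (n_units, n_inputs) inputs, Y: (n_units, n_outputs) outputs."""
    n = X.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs:  sum_j lambda_j x_ij <= theta * x_i,j0
    A_in = np.c_[-X[j0][:, None], X.T]
    # Outputs: sum_j lambda_j y_rj >= y_r,j0
    A_out = np.c_[np.zeros((Y.shape[1], 1)), -Y.T]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[j0]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun   # efficiency score in (0, 1]

# Hypothetical schools: inputs = (teachers, budget in M CHF),
# output = mean standardized test score.
X = np.array([[10, 1.2], [12, 1.5], [8, 1.0], [15, 2.0]])
Y = np.array([[75.0], [78.0], [70.0], [80.0]])
for j in range(len(X)):
    print(f"school {j}: efficiency = {dea_efficiency(X, Y, j):.3f}")
```

An efficiency of, say, 0.93 means the school could proportionally reduce all inputs to 93% of their level while keeping outputs constant, which is exactly how the 7% potential input reduction above should be read.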
Abstract:
Purpose: To describe viral retinitis following intravitreal and periocular corticosteroid administration. Methods: Retrospective case series and comprehensive literature review. Results: We analyzed 5 unreported and 25 previously published cases of viral retinitis following local corticosteroid administration. Causes of retinitis included CMV in 23 cases (76.7%), HSV in 5 (16.7%), and VZV and an unspecified virus in 1 each (3.3%). Two of 22 tested patients (9.1%) were HIV positive. Twenty-one of 30 cases (70.0%) followed one or more intravitreal injections of triamcinolone acetonide (TA), 4 (13.3%) followed one or more posterior sub-Tenon injections of TA, 3 (10.0%) followed placement of a 0.59-mg fluocinolone acetonide implant (Retisert), and 1 (3.3%) each followed an anterior subconjunctival injection of TA (together with IVTA), an anterior chamber injection, and an anterior sub-Tenon injection. Mean time from the most recent corticosteroid administration to development of retinitis was 4.2 months (median 3.8; range 0.25-13.0). Twelve patients (40.0%) had type 2 diabetes mellitus. Treatments used included systemic antiviral agents (26/30, 86.7%), intravitreal antiviral injections (20/30, 66.7%), and ganciclovir intravitreal implants (4/30, 13.3%). Conclusions: Viral retinitis may develop or reactivate following intraocular or periocular corticosteroid administration. Average time to development of retinitis was 4 months, and CMV was the most frequently observed agent. Diabetes was a frequent comorbidity, and several patients with uveitis who developed retinitis were also receiving systemic immunosuppressive therapy.
Abstract:
Introduction: Patients who repeatedly attend the Emergency Department (ED) often have a distinct and complex vulnerability profile that includes poor somatic, psychological, and social indicators. This profile has an impact on the patients' well-being as well as on hospital costs. The objective of the study was to characterize hyper users (HU) and explore the connection with ED care and hospital costs. Methods: The study sample comprised all adult patients with 12 or more attendances at the ED of Lausanne University Hospital in 2009. Data were collected by retrospectively searching internal databases to identify the patients concerned and then analysing their profiles. Information gathered included demographic, somatic, psychological, at-risk-behaviour, and social indicators, as well as health system consumption including costs. Results: In 2009, 23 patients (0.1%) attended 12 times or more (425 attendances, 0.8%). The average age was about 43 years; 60.9% were female and 47.8% single. Of these, 95.7% had basic insurance, 87.0% had a general practitioner, and 30.4% were under legal guardianship. The majority attended in the evening or at night (67.1%), and almost one quarter of these attendances resulted in inpatient treatment (24.0%). Most HU had also attended the ED in previous years (95.7% in 2008). The most prevalent diagnoses concerned 'mental disorders' (87.0%). Some 30.4% of patients had attempted suicide (all of them female). Other frequent diagnoses concerned 'trauma' (65.2%) and the 'digestive' and 'nervous' systems (each 56.5%). At-risk behaviour such as severe alcohol consumption (34.8%) or excessive use of medicines (26.1%) was very frequent, and some patients used illicit drugs (21.7%). There was only a weak association between the number of ED attendances and the resulting costs. However, a reduction of one outpatient visit per patient would have decreased ED outpatient costs by 8.5%. Conclusions: HU often have a particularly vulnerable profile. Mental problems are prevalent among them, as are at-risk behaviour and severe somatic conditions. The complexity of these patients' profiles demands specific care that cannot be guaranteed within everyday ED routine. An interdisciplinary case management team might be a promising approach to reducing the number of attendances and the associated costs, although the profiles of HU suggest that they are unlikely to give up ED attendance completely.
Abstract:
The growing multilingual trend in movie production poses a challenge for dubbing translators, who are increasingly confronted with more than one source language. The main purpose of this master's thesis is to provide a case study on how these third languages (see CORRIUS and ZABALBEASCOA 2011) are rendered. Another aim is to put a particular focus on their textual and narrative functions and to detect possible shifts that might occur in translation. By applying a theoretical model for translation analysis (CORRIUS and ZABALBEASCOA 2011), this study describes how third languages are rendered in the German, Spanish, and Italian dubbed versions of the 2009 Tarantino movie Inglourious Basterds. A broad range of solution types is thereby revealed, and prevalent restrictions of the translation process are identified. The target texts are placed in the context of some sociohistorical aspects of dubbing in order to detect prevalent norms of the respective cultures and to discuss the acceptability of the translations (TOURY 1995). The study demonstrates the translatability potential of even highly complex multilingual audiovisual texts. Moreover, proposals for further studies in multilingual audiovisual translation are outlined, emphasising the potential for future investigations in this field.
Abstract:
A 41-year-old male presented with severe frostbite that was monitored clinically and with a new laser Doppler imaging (LDI) camera recording arbitrary microcirculatory perfusion units (1-256 arbitrary perfusion units, APUs). LDI monitoring detected perfusion differences in the hand and foot that were not seen visually. On days 4-5 after injury, LDI showed that while the fingers did not experience any significant perfusion change (average of 31±25 APUs on day 5), the patient's left big toe did (from 17±29 APUs on day 4 to 103±55 APUs on day 5). These changes in regional perfusion were not detectable by visual examination. On day 53 post-injury, all fingers with reduced perfusion on LDI were amputated, while the toe could be salvaged. This case demonstrates that insufficient microcirculatory perfusion can be identified with LDI in ways that visual examination alone does not permit, allowing prognosis of clinical outcomes. Such information may also be used to develop improved treatment approaches.
Abstract:
OBJECTIVE: To identify the direct cost of reprocessing the double and single cotton-woven drapes of the surgical LAP package. METHOD: A quantitative, exploratory and descriptive case study performed at a teaching hospital. The direct cost of reprocessing cotton-woven surgical drapes was calculated by multiplying the time spent by the professionals involved in reprocessing by the unit direct cost of labor, and adding the cost of materials. The Brazilian currency (R$) originally used for the calculations was converted to US currency at the rate of US$0.42/R$. RESULTS: The average total cost per surgical LAP package was US$9.72, dominated by the cost of materials (US$8.70, or 89.65%). Notably, the average materials cost was itself dominated by the cost of the cotton-woven drapes (US$7.99, or 91.90%). CONCLUSION: The knowledge gained will inform discussions about replacing reusable cotton-woven surgical drapes with disposable ones, supporting arguments about the advantages and disadvantages of this possibility in terms of human, material, structural, environmental and financial resources.
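The cost formula described above is simple enough to state directly. In the sketch below, the exchange rate comes from the abstract, while the labor times and rates are hypothetical figures chosen only to land near the reported averages.

```python
# Sketch of the direct-cost calculation: (time spent by each professional
# x unit labor cost) + materials, converted at US$0.42/R$. Labor times
# and rates are hypothetical; only the exchange rate is from the abstract.
BRL_TO_USD = 0.42

def reprocessing_cost_usd(labor, materials_brl):
    """labor: list of (minutes, R$ per minute) pairs, one per professional."""
    labor_brl = sum(minutes * rate for minutes, rate in labor)
    return (labor_brl + materials_brl) * BRL_TO_USD

# Hypothetical: two professionals, materials of R$20.70 (~US$8.70).
cost = reprocessing_cost_usd([(3, 0.50), (2, 0.45)], materials_brl=20.70)
print(f"direct cost per LAP package: US${cost:.2f}")  # ~US$9.70
```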
Abstract:
Objectives: The relevance of the SYNTAX score for the particular case of patients with acute ST-segment elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PPCI) has previously been studied only in post hoc analyses of large prospective randomized clinical trials; a "real-life" population approach had never been explored. The aim of this study was to evaluate the value of the SYNTAX score for predicting myocardial infarction size, estimated by the creatine kinase (CK) peak value, in patients treated with PPCI for STEMI. Methods: The primary endpoint of the study was myocardial infarction size as measured by the CK peak value. The SYNTAX score was calculated retrospectively in 253 consecutive patients with STEMI undergoing PPCI in a large tertiary referral center in Switzerland between January 2009 and June 2010. Linear regression analysis was performed to compare myocardial infarction size with the SYNTAX score. The endpoint was then stratified according to SYNTAX score tertiles: low (<22, n=178), intermediate (22-32, n=60), and high (>=33, n=15). Results: There were no significant differences in clinical characteristics between the three groups. When stratified according to SYNTAX score tertiles, average CK peak values of 1985 (low), 3336 (intermediate) and 3684 (high) were obtained (p<0.0001). Bartlett's test for equal variances between the three groups was 9.999 (p=0.0067). A moderate Pearson product-moment correlation coefficient (r=0.4074) with high statistical significance (p<0.0001) was found. The coefficient of determination (R^2=0.1660) showed that approximately 17% of the variation in CK peak value (myocardial infarction size) could be explained by the SYNTAX score, i.e. by coronary disease complexity. Conclusion: In an all-comers population, the SYNTAX score is an additional tool for predicting myocardial infarction size in patients treated with PPCI. Stratifying patients into risk groups according to the SYNTAX score makes it possible to identify a high-risk population that may warrant particular patient care.
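The analysis pipeline is a plain univariate linear regression. The sketch below reproduces that step on simulated data calibrated to give a moderate correlation of the kind reported; the values are not the study's patient data.

```python
# Sketch of the regression step: CK peak regressed on the SYNTAX score.
# Data are simulated to yield a moderate correlation (roughly r ~ 0.4,
# R^2 ~ 0.2, as reported); they are NOT the study's patient data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
syntax = rng.uniform(5, 40, 253)                  # 253 patients, as in the study
ck_peak = 80 * syntax + rng.normal(0, 1500, 253)  # noisy linear relation

res = stats.linregress(syntax, ck_peak)
print(f"r = {res.rvalue:.4f}, R^2 = {res.rvalue**2:.4f}, p = {res.pvalue:.2e}")
print(f"CK_peak ~ {res.intercept:.0f} + {res.slope:.1f} * SYNTAX")
```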
Abstract:
We investigate on-line prediction of individual sequences. Given a class of predictors, the goal is to predict as well as the best predictor in the class, where loss is measured by the self-information (logarithmic) loss function. The excess loss (regret) is closely related to the redundancy of the associated lossless universal code. Using Shtarkov's theorem and tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret. The bound depends on certain metric properties of the class of predictors. We apply the bound to both parametric and nonparametric classes of predictors. Finally, we point out a suboptimal behavior of the popular Bayesian weighted average algorithm.
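By Shtarkov's theorem, the minimax regret equals the logarithm of the Shtarkov (normalized maximum likelihood) sum, i.e. the log of the sum over sequences x^n of sup_theta p_theta(x^n). The sketch below computes it exactly for the Bernoulli class, where the regret of a one-parameter class grows like (1/2) log n; this is a standard textbook instance, not the paper's general bound.

```python
# Sketch: exact minimax regret for the Bernoulli class via Shtarkov's
# theorem. The regret is log sum_{x^n} sup_theta p_theta(x^n); for
# Bernoulli this collapses to a sum over k, the number of ones.
import math

def bernoulli_minimax_regret(n):
    total = 0.0
    for k in range(n + 1):
        # sup_theta theta^k (1-theta)^(n-k) is attained at theta = k/n.
        p = (k / n) ** k * (1 - k / n) ** (n - k) if 0 < k < n else 1.0
        total += math.comb(n, k) * p
    return math.log(total)

for n in (10, 100, 1000):
    # For a one-parameter class the regret grows like (1/2) ln n + const.
    print(f"n = {n:5d}: regret = {bernoulli_minimax_regret(n):.3f}, "
          f"(1/2) ln n = {0.5 * math.log(n):.3f}")
```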
Abstract:
We study the complexity of rationalizing choice behavior. We do so by analyzing two polar cases, and a number of intermediate ones. In our most structured case, in which choice behavior is defined on universal choice domains and satisfies the "weak axiom of revealed preference", finding the complete preorder rationalizing choice behavior is a simple matter. In the polar case, where no restriction whatsoever is imposed, either on choice behavior or on the choice domain, finding the complete preorders that rationalize behavior turns out to be intractable. We show that the task of finding the rationalizing complete preorders is equivalent to a graph problem. This allows existing algorithms from the graph theory literature to be brought to bear on the rationalization of choice.
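In the structured case, the graph construction is direct: each observed choice reveals the chosen element to be at least as good as everything else in the menu, and a topological order of the resulting digraph yields a rationalizing complete preorder when the graph is acyclic. The sketch below illustrates this standard revealed-preference construction on hypothetical choice data; it is not necessarily the paper's exact reduction.

```python
# Sketch: the choice c(S) = x reveals x >= y for all y in S, giving a
# digraph; if it is acyclic, a topological order yields a complete
# preorder rationalizing the choices. Standard revealed-preference
# construction, not necessarily the paper's exact reduction.
from graphlib import TopologicalSorter, CycleError

# Hypothetical choice data: menu -> chosen element.
choices = {
    frozenset("abc"): "a",
    frozenset("bc"): "b",
    frozenset("cd"): "c",
}

# Map each node to the elements revealed inferior to it (its "predecessors",
# so that worse elements come earlier in the topological order).
revealed = {}
for menu, chosen in choices.items():
    revealed.setdefault(chosen, set()).update(menu - {chosen})

try:
    order = list(TopologicalSorter(revealed).static_order())
    print("rationalizing order (worst to best):", order)
except CycleError:
    print("choice behavior contains a revealed-preference cycle: "
          "no rationalizing complete preorder exists")
```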
Abstract:
In recent years there has been explosive growth in the development of adaptive, data-driven methods. One efficient and data-driven approach is based on statistical learning theory (SLT) (Vapnik 1998). The theory is built on the Structural Risk Minimisation (SRM) principle and has a solid statistical background. When applying SRM, we try not only to reduce the training error, i.e. to fit the available data with a model, but also to reduce the complexity of the model and thereby the generalisation error. Many nonlinear learning procedures recently developed in neural networks and statistics can be understood and interpreted in terms of the structural risk minimisation inductive principle. A recent methodology based on SRM is Support Vector Machines (SVM). At present, SLT is still under intensive development and SVM are finding new areas of application (www.kernel-machines.org). SVM develop robust, nonlinear data models with excellent generalisation abilities, which is very important for both monitoring and forecasting. SVM are extremely good when the input space is high-dimensional and the training data set is not big enough to develop a corresponding nonlinear model. Moreover, SVM use only the support vectors to derive decision boundaries. This opens a way to sampling optimization, estimation of noise in data, quantification of data redundancy, etc. A presentation of SVM for spatially distributed data is given in (Kanevski and Maignan 2004).
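A minimal end-to-end example of this workflow, using scikit-learn on toy two-dimensional "spatial" data, is sketched below; note how only the support vectors end up defining the decision boundary. The data and hyperparameters are illustrative.

```python
# Sketch of the SVM workflow on toy 2-D data (e.g. two classes of
# monitoring locations). Data and hyperparameters are illustrative.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)),    # class 0
               rng.normal(3.0, 1.0, (50, 2))])   # class 1
y = np.r_[np.zeros(50), np.ones(50)]

# RBF-kernel SVM; C and gamma trade off data fit against model
# complexity, in the spirit of structural risk minimisation.
clf = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X, y)

# Only the support vectors determine the decision boundary.
print("support vectors used:", clf.n_support_.sum(), "of", len(X))
print("prediction at (1.5, 1.5):", clf.predict([[1.5, 1.5]])[0])
```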
Abstract:
The screening of students by schools constitutes a regulatory mechanism of school inclusion and exclusion that normally overrides parental expectations of school choice. Based on the "Parents survey 2006" data (n=188,073) generated by the Chilean Ministry of Education, this paper describes parents' reasons for choosing their children's school and the schools' criteria for screening students. It concludes that Catholic schools are the most selective institutions and that their screening usually overrides parental choice. One of the reasons for selecting students appears to be the direct relationship between this practice and higher average scores on the test of the Chilean Educational Quality Measurement System (SIMCE).