Abstract:
A regularization method based on the non-extensive maximum entropy principle is devised. Special emphasis is given to the case q = 1/2. We show that, when the residual principle is considered as a constraint, the q = 1/2 generalized distribution of Tsallis yields a regularized solution for ill-conditioned problems. The regularized distribution devised in this way is endowed with a component which corresponds to the well-known regularized solution of Tikhonov (1977).
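As a minimal illustration (standard notation, not the paper's own symbols), these are the two objects the abstract connects: the Tsallis q-entropy that is maximized, and the Tikhonov functional whose minimizer the regularized distribution is shown to contain.

```latex
% Tsallis non-extensive entropy of a distribution p (q -> 1 recovers Shannon):
S_q(p) = \frac{1 - \sum_i p_i^{\,q}}{q - 1}

% Tikhonov (1977) regularized solution of an ill-conditioned system Ax = b,
% with regularization parameter \lambda > 0:
x_\lambda = \arg\min_x \left( \|Ax - b\|^2 + \lambda \|x\|^2 \right)
```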
Abstract:
While equal political representation of all citizens is a fundamental democratic goal, it is hampered empirically in a multitude of ways. This study examines how the societal level of economic inequality affects the representation of relatively poor citizens by parties and governments. Using CSES survey data on citizens' policy preferences and expert placements of political parties, we find empirical evidence that in economically more unequal societies the party system represents the preferences of relatively poor citizens worse than in more equal societies. This moderating effect of economic inequality is also found for policy congruence between citizens and governments, albeit in slightly less clear-cut form.
Abstract:
BACKGROUND: This study compared the frequency of alcohol consumption and binge drinking between young adult childhood cancer survivors and the general population in Switzerland, and assessed their socio-demographic and clinical determinants. PROCEDURE: Childhood cancer survivors diagnosed between 1976 and 2003 at age <16 years, who had survived >5 years and were currently aged 20-40 years, received a postal questionnaire. Reported frequencies of alcohol use and binge drinking were compared with the Swiss Health Survey, a representative general population survey. Determinants of frequent alcohol consumption and binge drinking were assessed by multivariable logistic regression. RESULTS: Of 1,697 eligible survivors, 1,447 could be contacted and 1,049 (73%) responded. Survivors were more likely than controls to report frequent alcohol consumption (OR = 1.7; 95% CI = 1.3-2.1) and binge drinking (OR = 2.9; 95% CI = 2.3-3.8). Peak frequency of binge drinking in males occurred at age 24-26 years in survivors, compared with age 18-20 years in the general population. Socio-demographic factors (male gender, high educational attainment, French and Italian speaking, and migration background from Northern European countries) were most strongly associated with alcohol consumption patterns among both survivors and controls. CONCLUSIONS: The high frequency of alcohol consumption found in this study is a matter of concern. Our data suggest that survivors should be better informed about the health effects of alcohol consumption during routine follow-up, and that such counseling should be included in clinical guidelines. Future research should study the motives for alcohol consumption among survivors to allow the development of targeted health interventions for this vulnerable group.
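A minimal sketch of the kind of multivariable logistic regression the abstract describes. The file name and column names are hypothetical; the study's actual variables and data are not reproduced here.

```python
# Sketch: multivariable logistic regression for determinants of binge
# drinking. "survivors.csv" and all column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survivors.csv")  # hypothetical data file

# Binary outcome (1 = binge drinking) on socio-demographic predictors.
model = smf.logit(
    "binge_drinking ~ male + high_education + language_region"
    " + migration_background",
    data=df,
).fit()

# Odds ratios with 95% confidence intervals, the form reported above.
or_table = np.exp(model.conf_int())
or_table["OR"] = np.exp(model.params)
print(or_table)
```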
Abstract:
The number of patients treated by haemodialysis (HD) is continuously increasing. Complications associated with vascular accesses are the leading cause of hospitalisation in these patients. Since 2001, nephrologists, surgeons, angiologists and radiologists at the CHUV have been working to develop a multidisciplinary model that includes planning and monitoring of HD accesses. In this setting, echo-Doppler is an important investigative tool. Every patient is discussed, and decisions are taken, at a weekly multidisciplinary meeting. A network has been created with nephrologists of peripheral centres and other specialists. This model makes it possible to centralise investigative information and coordinate patient care while maintaining, and even developing, some investigative and treatment activities in peripheral centres.
Abstract:
AIMS: Managing patients with alcohol dependence includes assessment for heavy drinking, typically by asking patients. Some recommend biomarkers to detect heavy drinking, but evidence of their accuracy is limited. METHODS: Among people with dependence, we assessed the performance of disialo-carbohydrate-deficient transferrin (%dCDT, ≥1.7%), gamma-glutamyltransferase (GGT, ≥66 U/l), either %dCDT or GGT positive, and breath alcohol (>0) for identifying 3 self-reported heavy drinking levels: any heavy drinking (≥4 drinks/day or >7 drinks/week for women, ≥5 drinks/day or >14 drinks/week for men), recurrent heavy drinking (≥5 drinks/day on ≥5 days) and persistent heavy drinking (≥5 drinks/day on ≥7 consecutive days). Subjects (n = 402) with dependence and current heavy drinking were referred to primary care and assessed 6 months later with biomarkers and a validated self-reported calendar-method assessment of past-30-day alcohol use. RESULTS: The self-reported prevalence of any, recurrent and persistent heavy drinking was 54, 34 and 17%, respectively. Sensitivity of %dCDT for detecting any, recurrent and persistent self-reported heavy drinking was 41, 53 and 66%, respectively; specificity was 96, 90 and 84%. %dCDT had higher sensitivity than GGT and the breath test at each alcohol use level, but was not sensitive enough to detect heavy drinking (missing 34-59% of cases). Counting either a positive %dCDT or a positive GGT improved sensitivity, but not to satisfactory levels, and specificity decreased. Neither the breath test nor GGT was sufficiently sensitive (both tests missed 70-80% of cases). CONCLUSIONS: Although biomarkers may provide some useful information, their sensitivity is low, and their incremental value over self-report in clinical settings is questionable.
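A minimal sketch of how sensitivity and specificity figures of this kind are computed against a self-report reference standard. The data below are synthetic, purely for illustration; only the %dCDT ≥ 1.7% cut-off is taken from the abstract.

```python
# Sketch: sensitivity/specificity of a binary test against a reference.
import numpy as np

def sensitivity_specificity(test_positive, reference_positive):
    """Return (sensitivity, specificity) of a test vs. a reference standard."""
    tp = np.sum(test_positive & reference_positive)
    fn = np.sum(~test_positive & reference_positive)
    tn = np.sum(~test_positive & ~reference_positive)
    fp = np.sum(test_positive & ~reference_positive)
    return tp / (tp + fn), tn / (tn + fp)

# Synthetic data for illustration only (n = 402, 54% heavy drinking).
rng = np.random.default_rng(0)
dcdt = rng.normal(1.5, 0.5, 402)      # hypothetical %dCDT values
heavy = rng.random(402) < 0.54        # hypothetical self-report reference

sens, spec = sensitivity_specificity(dcdt >= 1.7, heavy)
print(f"%dCDT >= 1.7%: sensitivity {sens:.2f}, specificity {spec:.2f}")
```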
Abstract:
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line, with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates the requirement of normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis of mammalian brain evolution, which posits links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than previously suspected, because nested analyses of variance conducted on residual variation (rather than on raw values) reveal considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been the calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is in fact highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
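For reference, the bivariate allometric model the abstract refers to, in its conventional form (standard notation, not the paper's own symbols):

```latex
% Bivariate allometric model: a trait y scales with body mass x as
y = a \, x^{b}
% On logarithmic axes the relationship is linear, so a best-fit line
% estimates the scaling exponent b (slope), and allometric grade shifts
% appear as parallel lines differing in the intercept \log a:
\log y = \log a + b \log x
```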
Abstract:
In this paper, we consider a discrete-time risk process allowing for delay in claim settlement, which introduces a certain type of dependence in the process. From martingale theory, an expression for the ultimate ruin probability is obtained, and Lundberg-type inequalities are derived. The impact of delay in claim settlement is then investigated. To this end, a convex order comparison of the aggregate claim amounts is performed with the corresponding non-delayed risk model, and numerical simulations are carried out with Belgian market data.
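For orientation, this is the classical form that "Lundberg-type inequalities" generalize (standard notation; the paper's own bounds for the delayed-settlement model are not reproduced here):

```latex
% Classical Lundberg inequality: the ultimate ruin probability \psi(u)
% for initial surplus u decays exponentially, where the adjustment
% coefficient R > 0 is the positive root of the Lundberg equation:
\psi(u) \le e^{-R u}, \qquad u \ge 0 .
```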
Abstract:
The department of Gaya, the setting of this study, is located in the south-west of the Republic of Niger. It has considerable water resources, comprising surface water (about one hundred permanent ponds and 106 km of the Niger River) and seven superposed aquifers, including sub-surface water tables (outcropping in places) and artesian aquifers. The study of water use in Gaya was carried out along several axes: the estimation and spatial distribution of water resources, the legal and institutional framework governing their development, the various sectors of water use, and the constraints affecting that use. GIS-based mapping for data processing and analysis, combined with some ten years of fieldwork experience in the region, made it possible to produce richly illustrated syntheses that clarify the issues surrounding water use in this part of Niger. Contrary to the traditional view of the Sahel, where lack of water is one of the major constraints on development, particular local conditions here contradict that cliché and shift the debate to another level: the control of water at the local scale through an appropriate policy that takes into account not only the local specifics of the resource but also the different types of use. Local water use and water resources are governed by established rules. Niger's water policy, defined by the master plan for the development and management of water resources (Schéma directeur de mise en valeur et de gestion des ressources en eau) and implemented through a substantial legal and institutional apparatus, had the merit of laying out a framework, but after ten years it has shown its practical limits. In Gaya, neither the State nor the development partners (external donors) took into account the local characteristics of the resource or the particular socio-economic context of the region. This led to infrastructure ill-suited to local hydrogeological realities and to inappropriate choices in certain development schemes. Despite the abundance of the resource, access to it, in both quantity and quality, remains difficult for a large proportion of rural actors. The various obstacles encountered in developing water resources stem from this incoherence of national water policy, but also from the difficulty of applying it in the field, where legal pluralism persists in the form of two coexisting regulatory systems: customary rights and modern legislation. The elements highlighted by this study of the Gaya area could serve as a basis for better management of water resources within the wider framework of a land-use planning policy that takes into account all the physical and socio-economic factors of the region.
Abstract:
The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, meaning that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, we recall a well-known solution to fair exchange relying on a trusted third party. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result, and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamper-proof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed to illustrate and convey the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
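A minimal sketch (illustrative names, not the paper's protocol) of the trusted-third-party solution the abstract recalls: the trusted party releases items only once every participant has deposited, so either each process obtains the item it expected or no input is revealed.

```python
# Sketch: fair exchange through a trusted third party. The TTP withholds
# everything until all participants have deposited, giving all-or-nothing
# fairness. Class and method names are illustrative assumptions.
from typing import Dict, Optional

class TrustedThirdParty:
    def __init__(self, participants: set):
        self.participants = participants
        self.deposits: Dict[str, str] = {}

    def deposit(self, sender: str, item: str) -> None:
        if sender in self.participants:
            self.deposits[sender] = item

    def collect(self, receiver: str, expected_from: str) -> Optional[str]:
        # Safety: release nothing until *all* participants have deposited.
        if set(self.deposits) != self.participants:
            return None
        return self.deposits[expected_from]

ttp = TrustedThirdParty({"alice", "bob"})
ttp.deposit("alice", "item-A")
print(ttp.collect("bob", "alice"))   # None: bob has not deposited yet
ttp.deposit("bob", "item-B")
print(ttp.collect("bob", "alice"))   # item-A: the exchange completes fairly
```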
Abstract:
Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) in the solution, where the generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique for pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles that carry the solution along the characteristics defining the convective transport. The resolution of steep fronts in the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in regions of steep fronts in the solution. Moreover, the particle transport method can be successfully applied to the numerical simulation of real-life problems in, for example, chemical engineering.
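A minimal sketch of one semi-Lagrangian step for 1D linear convection, illustrating the idea the abstract describes: the solution is carried along characteristics and projected back onto a fixed Eulerian grid. Plain linear interpolation stands in for the projection here; the paper's fast monotone projection and spatial adaptivity procedure are not reproduced.

```python
# Sketch: semi-Lagrangian step for u_t + a u_x = 0 on a fixed grid.
import numpy as np

def semi_lagrangian_step(u, x, a, dt):
    """Trace characteristics back by a*dt and interpolate onto the grid."""
    departure = x - a * dt             # foot of each characteristic
    return np.interp(departure, x, u)  # project back onto the Eulerian grid

x = np.linspace(0.0, 1.0, 201)
u = np.exp(-200.0 * (x - 0.3) ** 2)    # steep initial front at x = 0.3
for _ in range(100):
    u = semi_lagrangian_step(u, x, a=0.5, dt=0.005)
# After t = 0.5 the front has been transported to roughly x = 0.55.
print(x[np.argmax(u)])
```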
Abstract:
Alkyl ketene dimers (AKD) are effective and highly hydrophobic sizing agents for the internal sizing of alkaline papers, but in some cases they may form deposits on paper machines and copiers. In addition, alkenyl succinic anhydride (ASA) based sizing agents are highly reactive, producing on-machine sizing, but under uncontrolled wet-end conditions the hydrolysis of ASA may cause problems. This thesis aims at developing an improved ketene dimer based sizing agent with a lower deposit-formation tendency on paper machines and copiers than traditional AKD. The aim is also to improve the ink-jet printability of AKD-sized paper. The sizing characteristics of ketene dimers have been compared to those of ASA. A lower tendency to ketene dimer deposit formation was shown in paper machine trials and in printability tests when branched fatty acids were used in the manufacture of a ketene dimer based sizing agent. Fitting the melting and solidification temperature of a ketene dimer size to the process temperature of a paper machine or a copier contributes to machine cleanliness. Paper sized with the branched ketene dimer was found to be less hydrophobic than paper sized with traditional AKD. However, ink-jet print quality could be improved by the use of a branched ketene dimer, which helps balance paper hydrophobicity for both black and color printing. The use of a high amount of protective colloid in the emulsification was found to be beneficial to the sizing performance of liquid-type sizing agents. Similar findings were obtained for both the branched ketene dimer and ASA.
Abstract:
BACKGROUND: In this study, we aimed to assess Inflammatory Bowel Disease (IBD) patients' needs and current nursing practice, in order to investigate to what extent the consensus statements of the European Crohn's and Colitis Organization on nursing roles in caring for patients with IBD concur with local practice. METHODS: We used a mixed-method convergent design, combining quantitative data prospectively collected in the Swiss IBD cohort study with qualitative data from structured interviews with IBD healthcare experts. Symptoms, quality of life, and anxiety and depression scores were retrieved from physician charts and patient self-reported questionnaires. Descriptive analyses were performed on the quantitative and qualitative data. RESULTS: 230 patients from a single center were included; 60% were male, and the median age was 40 years (range 18-85). The prevalence of abdominal pain was 42%. Self-reported data were obtained from 75 of the 230 patients. General health was perceived as significantly lower than in the general population (p < 0.001). The prevalence of tiredness was 73%; sleep problems, 78%; work-related issues, 20%; sexual constraints, 35%; diarrhea, 67%; fear of not finding a bathroom, 42%; depression, 11%; and anxiety symptoms, 23%. According to the expert interviews, the consensus statements are mostly relevant, but many of their recommendations are not yet realized in clinical practice. CONCLUSION: The prevalences identified may help clinicians detect patients at risk and improve patient management.
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a popularity that never diminishes. Over the last few years, however, these problems have gone from an entertainment to an interesting research area, interesting in fact in two respects. On the one hand, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Also, thanks to their highly structured nature, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications and of understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. To study the empirical hardness of GSP, we define a series of instance generators that differ in the level of balance they guarantee among the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), which GSP generalizes. Finally, we provide a study of the correlation between backbone variables (variables that take the same value in all solutions of an instance) and the hardness of GSP.
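A minimal sketch of the CSP view of a GSP instance under the standard modelling assumption (one variable per cell with domain 1..m*n, all-different constraints on rows, columns and m-by-n regions). Only the constraint structure is built here, not a solver; names are illustrative.

```python
# Sketch: all-different variable groups of a Generalized Sudoku Problem.
from itertools import product

def gsp_constraints(m, n):
    """Return the all-different cell groups of an (m*n) x (m*n) GSP grid
    whose block regions have m rows and n columns."""
    size = m * n
    rows = [[(r, c) for c in range(size)] for r in range(size)]
    cols = [[(r, c) for r in range(size)] for c in range(size)]
    blocks = [
        [(br * m + r, bc * n + c) for r, c in product(range(m), range(n))]
        for br, bc in product(range(n), range(m))
    ]
    return rows + cols + blocks

# Classic 9x9 Sudoku is the m = n = 3 case: 27 all-different groups.
print(len(gsp_constraints(3, 3)))
```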
Abstract:
A method for dealing with monotonicity constraints in optimal control problems is used to generalize some results in monopoly theory, and the generalization is extended to a large family of principal-agent programs. Our main conclusion is that many results on diverse economic topics, obtained under assumptions of continuity and piecewise differentiability of the endogenous variables of the problem, remain valid when such assumptions are replaced by two minimal requirements.
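For orientation, a canonical principal-agent program of the kind the abstract refers to, in standard textbook notation (not the paper's formulation), where incentive compatibility reduces to a monotonicity constraint on the allocation:

```latex
% Monopoly screening: choose the allocation q(\theta) to maximize
% expected virtual surplus subject to monotonicity of q.
\max_{q(\cdot) \ge 0} \int_{\underline{\theta}}^{\overline{\theta}}
  \Bigl[ \theta \, q(\theta) - c\bigl(q(\theta)\bigr)
         - \frac{1 - F(\theta)}{f(\theta)} \, q(\theta) \Bigr]
  f(\theta) \, d\theta
\quad \text{subject to } q'(\theta) \ge 0 .
```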