979 results for BEHAVIORAL-PROBLEMS
Abstract:
While equal political representation of all citizens is a fundamental democratic goal, it is hampered empirically in a multitude of ways. This study examines how the societal level of economic inequality affects the representation of relatively poor citizens by parties and governments. Using CSES survey data on citizens' policy preferences and expert placements of political parties, empirical evidence is found that in economically more unequal societies the party system represents the preferences of relatively poor citizens less well than in more equal societies. This moderating effect of economic inequality is also found for policy congruence between citizens and governments, albeit in a slightly less clear-cut form.
Abstract:
Background Depression is one of the most severe and serious health problems because of its morbidity, its disabling effects, and its societal and economic burden. Despite the variety of existing pharmacological and psychological treatments, most cases achieve only partial remission and show relapse and recurrence. Cognitive models have contributed significantly to the understanding of unipolar depression and its psychological treatment. However, success is only partial, and many authors affirm the need to improve both those models and the treatment programs derived from them. One issue that requires further elaboration is the difficulty these patients experience in responding to treatment and in maintaining therapeutic gains over time without relapse or recurrence. Our research group has been working on the notion of cognitive conflict, viewed as personal dilemmas according to personal construct theory. We use a novel method for identifying those conflicts based on the repertory grid technique (RGT). Preliminary results with depressive patients show that about 90% of them have one or more of these conflicts. This fact might explain the blockage and difficult progress of these patients, especially the more severe and/or chronic cases. These results justify the need for specific interventions focused on the resolution of these internal conflicts. This study aims to empirically test the hypothesis that an intervention focused on the dilemma(s) specifically detected for each patient will enhance the efficacy of cognitive behavioral therapy (CBT) for depression. Design A therapy manual for a dilemma-focused intervention will be tested in a randomized clinical trial comparing the outcomes of two treatment conditions: combined group CBT (eight 2-hour weekly sessions) plus individual dilemma-focused therapy (eight 1-hour weekly sessions), and CBT alone (eight 2-hour group weekly sessions plus eight 1-hour individual weekly sessions). Method Participants are patients aged over 18 years who meet diagnostic criteria for major depressive disorder or dysthymic disorder, score 19 or above on the Beck Depression Inventory, second edition (BDI-II), and present at least one cognitive conflict (implicative dilemma or dilemmatic construct) as assessed using the RGT. The BDI-II is the primary outcome measure, collected at baseline, at the end of therapy, and at 3- and 12-month follow-up; other secondary measures are also used. Discussion We expect that adding a dilemma-focused intervention to CBT will increase the efficacy of one of the most firmly established therapies for depression, thus making a significant contribution to the psychological treatment of depression. Trial registration ISRCTN92443999; ClinicalTrials.gov identifier: NCT01542957.
Abstract:
The number of patients treated by haemodialysis (HD) is continuously increasing. Complications associated with vascular accesses are the leading cause of hospitalisation in these patients. Since 2001, nephrologists, surgeons, angiologists and radiologists at the CHUV have been working to develop a multidisciplinary model that includes the planning and monitoring of HD accesses. In this setting, echo-Doppler is an important investigative tool. Every patient is discussed, and decisions are taken, during a weekly multidisciplinary meeting. A network has been created with the nephrologists of peripheral centres and other specialists. This model makes it possible to centralize investigational information and coordinate patient care while maintaining, and even developing, some investigational and treatment activities in peripheral centres.
Abstract:
Biological scaling analyses employing the widely used bivariate allometric model are beset by at least four interacting problems: (1) choice of an appropriate best-fit line with due attention to the influence of outliers; (2) objective recognition of divergent subsets in the data (allometric grades); (3) potential restrictions on statistical independence resulting from phylogenetic inertia; and (4) the need for extreme caution in inferring causation from correlation. A new non-parametric line-fitting technique has been developed that eliminates requirements for normality of distribution, greatly reduces the influence of outliers and permits objective recognition of grade shifts in substantial datasets. This technique is applied in scaling analyses of mammalian gestation periods and of neonatal body mass in primates. These analyses feed into a re-examination, conducted with partial correlation analysis, of the maternal energy hypothesis relating to mammalian brain evolution, which suggests links between body size and brain size in neonates and adults, gestation period and basal metabolic rate. Much has been made of the potential problem of phylogenetic inertia as a confounding factor in scaling analyses. However, this problem may be less severe than previously suspected, because nested analyses of variance conducted on residual variation (rather than on raw values) reveal considerable variance at low taxonomic levels. In fact, limited divergence in body size between closely related species is one of the prime examples of phylogenetic inertia. One common approach to eliminating perceived problems of phylogenetic inertia in allometric analyses has been the calculation of 'independent contrast values'. It is demonstrated that the reasoning behind this approach is flawed in several ways. Calculation of contrast values for closely related species of similar body size is, in fact, highly questionable, particularly when there are major deviations from the best-fit line for the scaling relationship under scrutiny.
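The abstract does not spell out the new line-fitting technique, so the sketch below uses a classic non-parametric, outlier-resistant fit, the Theil-Sen estimator, purely as an assumed stand-in for the kind of distribution-free method described; it is not the author's technique.

    # Theil-Sen estimator: the slope is the median of all pairwise slopes,
    # so a few outlying species cannot drag the fitted line around.
    import itertools
    import statistics

    def theil_sen(x, y):
        """Robust fit of y ~ a + b*x via the median of pairwise slopes."""
        slopes = [(y[j] - y[i]) / (x[j] - x[i])
                  for i, j in itertools.combinations(range(len(x)), 2)
                  if x[j] != x[i]]
        b = statistics.median(slopes)
        a = statistics.median([yi - b * xi for xi, yi in zip(x, y)])
        return a, b

Applied to log-transformed body-mass and gestation-period data, a fit of this kind needs no normality assumption and limits the leverage of outliers, which is the robustness goal the abstract describes.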
Abstract:
In this paper, we consider a discrete-time risk process allowing for delay in claim settlement, which introduces a certain type of dependence in the process. From martingale theory, an expression for the ultimate ruin probability is obtained, and Lundberg-type inequalities are derived. The impact of delay in claim settlement is then investigated. To this end, a convex order comparison of the aggregate claim amounts is performed with the corresponding non-delayed risk model, and numerical simulations are carried out with Belgian market data.
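As a rough illustration of the quantity at stake, the sketch below estimates by Monte Carlo the probability of ruin within a finite horizon for a basic discrete-time risk process; the Poisson claim counts, exponential claim sizes and all parameter values are illustrative assumptions, and the settlement delay that is central to the paper is deliberately left out.

    # Monte Carlo estimate of P(ruin before `horizon`) for the surplus
    # process U_t = u + premium*t - S_t with compound Poisson claims.
    # A hedged toy model, not the paper's delayed-settlement process.
    import numpy as np

    def estimate_ruin_probability(u, premium, lam, mean_claim,
                                  horizon, n_paths=20_000, seed=0):
        rng = np.random.default_rng(seed)
        ruined = 0
        for _ in range(n_paths):
            surplus = u
            for _ in range(horizon):
                n_claims = rng.poisson(lam)               # claims this period
                surplus += premium - rng.exponential(mean_claim, n_claims).sum()
                if surplus < 0:                           # ruin: stop this path
                    ruined += 1
                    break
        return ruined / n_paths

    # 20% safety loading, initial capital of 10 mean claims, 50 periods.
    print(estimate_ruin_probability(u=10.0, premium=1.2, lam=1.0,
                                    mean_claim=1.0, horizon=50))

A delayed-settlement variant would postpone the subtraction of each claim by its settlement lag, which gives a feel for the dependence the paper studies.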
Abstract:
The solvability of the problem of fair exchange in a synchronous system subject to Byzantine failures is investigated in this work. The fair exchange problem arises when a group of processes are required to exchange digital items in a fair manner, meaning that either each process obtains the item it was expecting or no process obtains any information on the inputs of others. After introducing a novel specification of fair exchange that clearly separates safety and liveness, we give an overview of the difficulty of solving such a problem in the context of a fully-connected topology. On the one hand, we show that no solution to fair exchange exists in the absence of an identified process that every process can trust a priori; on the other, a well-known solution to fair exchange relying on a trusted third party is recalled. These two results lead us to complete our system model with a flexible representation of the notion of trust. We then show that fair exchange is solvable if and only if a connectivity condition, named the reachable majority condition, is satisfied. The necessity of the condition is proven by an impossibility result and its sufficiency by presenting a general solution to fair exchange relying on a set of trusted processes. The focus is then turned towards a specific network topology in order to provide a fully decentralized, yet realistic, solution to fair exchange. The general solution mentioned above is optimized by reducing the computational load assumed by trusted processes as far as possible. Accordingly, our fair exchange protocol relies on trusted tamperproof modules that have limited communication abilities and are only required in key steps of the algorithm. This modular solution is then implemented in the context of a pedagogical application developed to illustrate and convey the complexity of fair exchange. This application, which also includes the implementation of a wide range of Byzantine behaviors, allows executions of the algorithm to be set up and monitored through a graphical display. Surprisingly, some of our results on fair exchange seem to contradict those found in the literature on secure multiparty computation, a problem from the field of modern cryptography, although the two problems have much in common. Both problems are closely related to the notion of trusted third party, but their approaches and descriptions differ greatly. By introducing a common specification framework, a comparison is proposed in order to clarify their differences and the possible origins of the confusion between them. This leads us to introduce the problem of generalized fair computation, a generalization of fair exchange. Finally, a solution to this new problem is given by generalizing our modular solution to fair exchange.
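To make the well-known trusted-third-party solution recalled above concrete, here is a minimal sketch in which every process deposits its item with the TTP and items are released only once all deposits are in; the names and the all-or-nothing delivery rule are illustrative assumptions, not the thesis's optimized protocol based on tamperproof modules.

    # Fair exchange through a trusted third party (TTP): deliveries happen
    # only when the deposit set is complete, so a Byzantine process that
    # withholds its item learns nothing about the inputs of others.
    def ttp_fair_exchange(deposits, expected_parties):
        """deposits: dict party -> item.  Returns per-party deliveries, or None on abort."""
        if set(deposits) != set(expected_parties):
            return None      # incomplete deposits: abort, nobody receives anything
        return {p: {q: item for q, item in deposits.items() if q != p}
                for p in expected_parties}

    # Two-party exchange: each side obtains the other's item, or neither does.
    plan = ttp_fair_exchange({"A": "document", "B": "receipt"}, ["A", "B"])

The thesis's contribution can be read as distributing this central role across a set of trusted processes, or tamperproof modules, satisfying the reachable majority condition.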
Abstract:
Convective transport, both pure and combined with diffusion and reaction, can be observed in a wide range of physical and industrial applications, such as heat and mass transfer, crystal growth or biomechanics. The numerical approximation of this class of problems can present substantial difficulties due to regions of high gradients (steep fronts) in the solution, where the generation of spurious oscillations or smearing should be precluded. This work is devoted to the development of an efficient numerical technique to deal with pure linear convection and convection-dominated problems in the framework of convection-diffusion-reaction systems. The particle transport method developed in this study is based on meshless numerical particles that carry the solution along the characteristics defining the convective transport. The resolution of steep fronts of the solution is controlled by a special spatial adaptivity procedure. The semi-Lagrangian particle transport method uses a fixed Eulerian grid to represent the solution. In the case of convection-diffusion-reaction problems, the method is combined with diffusion and reaction solvers within an operator splitting approach. To transfer the solution from the particle set onto the grid, a fast monotone projection technique is designed. Our numerical results confirm that the method has spatial accuracy of second order and can be faster than typical grid-based methods of the same order; for pure linear convection problems the method demonstrates optimal linear complexity. The method works on structured and unstructured meshes, demonstrating a high-resolution property in regions of steep fronts of the solution. Moreover, the particle transport method can be successfully used for the numerical simulation of real-life problems in, for example, chemical engineering.
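For orientation, the sketch below shows the textbook semi-Lagrangian idea underlying such methods: each grid node traces its characteristic backwards one time step and interpolates the old solution at the departure point. This is a bare grid-based step under assumed constant velocity and periodic boundaries, not the thesis's adaptive, meshless particle method with monotone projection.

    # One semi-Lagrangian step for u_t + a u_x = 0 on a periodic 1-D grid:
    # follow the characteristic x - a*dt backwards and interpolate linearly.
    import numpy as np

    def semi_lagrangian_step(u, a, dx, dt):
        n = len(u)
        x = np.arange(n) * dx
        x_dep = (x - a * dt) % (n * dx)           # departure points
        i = np.floor(x_dep / dx).astype(int)      # left grid neighbour
        w = x_dep / dx - i                        # interpolation weight
        return (1 - w) * u[i % n] + w * u[(i + 1) % n]

    # Transport a steep front once around the unit interval.
    n = 200; dx = 1.0 / n; a = 1.0; dt = 0.8 * dx / a
    u = np.where(np.abs(np.arange(n) * dx - 0.5) < 0.1, 1.0, 0.0)
    for _ in range(round(1.0 / (a * dt))):
        u = semi_lagrangian_step(u, a, dx, dt)

Linear interpolation keeps this step monotone but smears the front, which is exactly the weakness that particle carriers and spatial adaptivity are designed to overcome.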
Abstract:
There is a broad consensus among economists that technological change has been a major contributor to productivity growth and, hence, to the growth of material welfare in western industrialized countries, at least over the last century. Paradoxically, this issue has not been the focal point of theoretical economics. At the same time, we have witnessed the rise of the importance of technological issues at the strategic management level of business firms. Interestingly, research has not adequately responded to this challenge either. The tension between the overwhelming empirical evidence of the importance of technology and its relative omission in research offers a challenging target for a methodological endeavor. This study deals with the question of how different theories cope with technology and explain technological change. The focus is at the firm level and the analysis concentrates on metatheoretical issues, except for the last two chapters, which examine the problems of strategic management of technology. Here the aim is to build a new evolutionary-based theoretical framework to analyze innovation processes at the firm level. The study consists of ten chapters. Chapter 1 poses the research problem and contrasts the two basic approaches, neoclassical and evolutionary, to be analyzed. Chapter 2 introduces the methodological framework, which is based on the methodology of isolation. Methodological and ontological commitments of the rival approaches are revealed and basic questions concerning their ways of theorizing are elaborated. Chapters 3-6 deal with the so-called substantive isolative criteria. The aim is to examine how different approaches cope with such critical issues as the inherent uncertainty and complexity of innovative activities (cognitive isolations, chapter 3), the boundedness of rationality of innovating agents (behavioral isolations, chapter 4), the multidimensional nature of technology (chapter 5), and governance costs related to technology (chapter 6). Chapters 7 and 8 put all these things together and look at the explanatory structures used by the neoclassical and evolutionary approaches in the light of substantive isolations. The last two chapters of the study utilize the methodological framework and tools to appraise different economics-based candidates in the context of strategic management of technology. The aim is to analyze how different approaches answer the fundamental question: how can firms gain competitive advantages through innovations, and how can the rents appropriated from successful innovations be sustained? The last chapter introduces a new evolutionary-based technology management framework. The largely neglected issues of entrepreneurship are also examined.
Abstract:
Alkyl ketene dimers (AKD) are effective and highly hydrophobic sizing agents for the internal sizing of alkaline papers, but in some cases they may form deposits on paper machines and copiers. In addition, alkenyl succinic anhydride (ASA)-based sizing agents are highly reactive, producing on-machine sizing, but under uncontrolled wet end conditions the hydrolysis of ASA may cause problems. This thesis aims at developing an improved ketene dimer based sizing agent that would have a lower deposit formation tendency on paper machines and copiers than the traditional type of AKD. A further aim is to improve the ink jet printability of AKD-sized paper. The sizing characteristics of ketene dimers have been compared to those of ASA. A lower tendency toward ketene dimer deposit formation was shown in paper machine trials and in printability tests when branched fatty acids were used in the manufacture of a ketene dimer based sizing agent. Fitting the melting and solidification temperature of a ketene dimer size to the process temperature of a paper machine or a copier contributes to machine cleanliness. A lower hydrophobicity was found for paper sized with the branched ketene dimer compared with paper sized with traditional AKD. However, ink jet print quality could be improved by the use of the branched ketene dimer, which helps in balancing the paper hydrophobicity for both black and color printing. The use of a high amount of protective colloid in the emulsification was found to be useful for the sizing performance of the liquid type of sizing agents. Similar findings were obtained for both the branched ketene dimer and ASA.
Abstract:
BACKGROUND: In this study, we aimed to assess the needs of Inflammatory Bowel Disease (IBD) patients and current nursing practice, in order to investigate to what extent the consensus statements of the European Crohn's and Colitis Organisation on nursing roles in caring for patients with IBD concur with local practice. METHODS: We used a mixed-method convergent design to combine quantitative data prospectively collected in the Swiss IBD cohort study and qualitative data from structured interviews with IBD healthcare experts. Symptoms, quality of life, and anxiety and depression scores were retrieved from physician charts and patient self-reported questionnaires. Descriptive analyses were performed on the quantitative and qualitative data. RESULTS: 230 patients from a single center were included; 60% were male, and the median age was 40 years (range 18-85). The prevalence of abdominal pain was 42%. Self-reported data were obtained from 75 of the 230 patients. General health was perceived as significantly lower than in the general population (p < 0.001). The prevalence of tiredness was 73%; sleep problems, 78%; issues related to work, 20%; sexual constraints, 35%; diarrhea, 67%; fear of not finding a bathroom, 42%; depression, 11%; and anxiety symptoms, 23%. According to the expert interviews, the consensus statements are considered largely relevant, but many recommendations are not yet realized in clinical practice. CONCLUSION: The prevalence rates identified may help clinicians detect patients at risk and improve patient management. © 2015 S. Karger AG, Basel.
Abstract:
Sudoku problems are among the best-known and most enjoyed pastimes, with a never-diminishing popularity, but over the last few years these problems have gone from a mere entertainment to a research area that is interesting in two respects. On the one side, Sudoku problems, being a variant of Gerechte Designs and Latin Squares, are being actively used for experimental design, as in [8, 44, 39, 9]. On the other hand, Sudoku problems, as simple as they seem, are really hard structured combinatorial search problems, and thanks to their characteristics and behavior they can be used as benchmark problems for refining and testing solving algorithms and approaches. Moreover, thanks to their high inner structure, their study can contribute more than the study of random problems to our goal of solving real-world problems and applications, and to understanding the problem characteristics that make them hard to solve. In this work we use two techniques for modeling and solving Sudoku problems, namely the Constraint Satisfaction Problem (CSP) and Satisfiability Problem (SAT) approaches. To this effect we define the Generalized Sudoku Problem (GSP), where regions can be of rectangular shape, problems can be of any order, and solution existence is not guaranteed. With respect to worst-case complexity, we prove that GSP with block regions of m rows and n columns with m = n is NP-complete. For studying the empirical hardness of GSP, we define a series of instance generators that differ in the level of balance they guarantee between the constraints of the problem, by finely controlling how the holes are distributed among the cells of the GSP. Experimentally, we show that the more balanced the constraints, the higher the complexity of solving the GSP instances, and that GSP is harder than the Quasigroup Completion Problem (QCP), a problem generalized by GSP. Finally, we provide a study of the correlation between backbone variables (variables with the same value in all the solutions of an instance) and the hardness of GSP.
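As a plain illustration of the CSP view, the sketch below solves a GSP instance with m-row by n-column block regions by naive backtracking; the thesis's experiments rely on dedicated CSP and SAT solvers, so this toy search is only an assumed stand-in for them.

    # Naive backtracking for a Generalized Sudoku of order N = m*n:
    # each value must be unique in its row, its column and its m x n block.
    def solve_gsp(grid, m, n):
        """Fill the 0-cells of an N x N grid in place; True if a solution exists."""
        N = m * n
        for r in range(N):
            for c in range(N):
                if grid[r][c] == 0:
                    for v in range(1, N + 1):
                        if (all(grid[r][j] != v for j in range(N)) and
                                all(grid[i][c] != v for i in range(N)) and
                                all(grid[i][j] != v
                                    for i in range(r - r % m, r - r % m + m)
                                    for j in range(c - c % n, c - c % n + n))):
                            grid[r][c] = v
                            if solve_gsp(grid, m, n):
                                return True
                            grid[r][c] = 0
                    return False     # no consistent value: backtrack
        return True                  # no empty cell left: solved

    # Order-4 instance with 2 x 2 blocks; holes are the 0 cells.
    puzzle = [[1, 0, 0, 0],
              [0, 0, 3, 0],
              [0, 4, 0, 0],
              [0, 0, 0, 2]]
    solved = solve_gsp(puzzle, 2, 2)

Since solution existence is not guaranteed for GSP, the False return value matters: it is exactly the unsatisfiable case that distinguishes GSP from puzzle-page Sudoku.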
Abstract:
A method for dealing with monotonicity constraints in optimal control problems is used to generalize some results in the context of monopoly theory, and the generalization is extended to a large family of principal-agent programs. Our main conclusion is that many results on diverse economic topics, obtained under assumptions of continuity and piecewise differentiability of the endogenous variables of the problem, remain valid when those assumptions are replaced by two minimal requirements.
Abstract:
BACKGROUND: Twelve-step mutual-help groups (TMGs) are among the most widely available forms of support for homeless individuals with alcohol problems. Qualitative research, however, has suggested that this population often has negative perceptions of these groups, which has been shown to be associated with low TMG attendance. It is important to understand this population's perceptions of TMGs and their association with alcohol outcomes in order to provide more appropriate and better tailored programming for this multiply affected population. The aims of this cross-sectional study were to (a) qualitatively examine perceptions of TMGs in this population and (b) quantitatively evaluate their association with motivation, treatment attendance and alcohol outcomes. METHODS: Participants (N=62) were chronically homeless individuals with alcohol problems who received single-site Housing First within a larger evaluation study. Perceptions of TMGs were captured using an open-ended item. Quantitative outcome variables were created from assessments of motivation, treatment attendance and alcohol outcomes. RESULTS: Findings indicated that perceptions of TMGs were primarily negative, followed by positive and neutral perceptions, respectively. There were significant positive associations between perceptions of TMGs and both motivation and treatment attendance, whereas no association was found for alcohol outcomes. CONCLUSIONS: Although some individuals view TMGs positively, alternative forms of help are needed to engage the majority of chronically homeless individuals with alcohol problems.
Abstract:
A gelled aspect in papaya fruit pulp is typically confused with premature ripening. This study characterizes this physiological disorder in the pulp of papaya fruit by measuring electrolyte leakage, Pi content, lipid peroxidation, pulp firmness and mineral contents (Ca, Mg and K, in pulp and seed tissues), and by histological analysis of the pulp tissue. The results showed that the gelled aspect of papaya fruit pulp is not associated with premature ripening of the tissue. The data indicate a reduction in the water intake of the vacuole as the principal cause of the loss of cellular turgor, while the waterlogged aspect of the tissue may be due to water accumulation in the apoplast.
Abstract:
Chromosomal inversion clines paralleling the long-standing ones in native Palearctic populations of Drosophila subobscura evolved swiftly after this species invaded the Americas in the late 1970s and early 1980s. However, the new clines did not consistently continue to converge on the Old World baseline. Our recent survey of Chilean populations of D. subobscura shows that inversion clines have faded or even changed sign with latitude. Here, we investigate the hypothesis that this fading of inversion clines might be due to the Bogert effect, namely, that the flies' thermoregulatory behavior has eventually compensated for environmental variation in temperature, thus buffering selection on thermally related traits. We show that latitudinal divergence in thermal preference (T-p) has evolved in Chile for females, with higher-latitude flies having a lower mean T-p. Plastic responses in T-p also lessen latitudinal thermal variation, because flies developed at colder temperatures prefer warmer microclimates. Our results are consistent with the idea that active behavioral thermoregulation might buffer environmental variation and reduce the potential effect of thermal selection on other traits such as chromosomal arrangements.