215 results for Complexity analyses
Abstract:
OBJECTIVES: To document biopsychosocial profiles of patients with rheumatoid arthritis (RA) by means of the INTERMED and to correlate the results with conventional methods of disease assessment and health care utilization. METHODS: Patients with RA (n = 75) were evaluated with the INTERMED, an instrument for assessing case complexity and care needs. Based on their INTERMED scores, patients were compared with regard to severity of illness, functional status, and health care utilization. RESULTS: In cluster analysis, a 2-cluster solution emerged, with about half of the patients characterized as complex. Complex patients scoring especially high in the psychosocial domain of the INTERMED were disabled significantly more often and took more psychotropic drugs. Although the 2 patient groups did not differ in severity of illness and functional status, complex patients rated their illness as more severe on subjective measures and on most items of the Medical Outcomes Study Short Form 36. Complex patients showed increased health care utilization despite a similar biologic profile. CONCLUSIONS: The INTERMED identified complex patients with increased health care utilization, provided meaningful and comprehensive patient information, and proved to be easy to implement and advantageous compared with conventional methods of disease assessment. Intervention studies will have to demonstrate whether management strategies based on INTERMED profiles can improve treatment response and outcome of complex patients.
Abstract:
Modern sonic logging tools designed for shallow environmental and engineering applications allow for P-wave phase velocity measurements over a wide frequency band. Methodological considerations indicate that, for saturated unconsolidated sediments in the silt-to-sand range and source frequencies ranging from approximately 1 to 30 kHz, the observable poro-elastic P-wave velocity dispersion is sufficiently pronounced to allow for reliable first-order estimations of the underlying permeability structure. These predictions have been tested on and verified for a surficial alluvial aquifer. Our results indicate that, even without any further calibration, the permeability estimates thus obtained, as well as their variability within the pertinent lithological units, are remarkably close to those expected from the corresponding granulometric characteristics.
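The abstract does not spell out the estimation procedure, but a first-order link between dispersion and permeability follows from Biot's characteristic frequency, near which poro-elastic dispersion is most pronounced. A minimal sketch, assuming the observed dispersion inflection tracks that frequency and using hypothetical values for a saturated sand:

```python
import math

def permeability_from_biot_frequency(porosity, viscosity_pa_s,
                                     fluid_density_kg_m3, critical_freq_hz):
    """Invert Biot's characteristic frequency for permeability (m^2):
    f_c = porosity * viscosity / (2 * pi * fluid_density * permeability)."""
    return (porosity * viscosity_pa_s) / (
        2.0 * math.pi * fluid_density_kg_m3 * critical_freq_hz
    )

# Hypothetical inputs: porosity 0.35, water at room temperature, and a
# dispersion inflection observed near 10 kHz within the 1-30 kHz band.
kappa = permeability_from_biot_frequency(0.35, 1.0e-3, 1000.0, 10e3)
print(f"Estimated permeability: {kappa:.2e} m^2 (~{kappa / 9.87e-13:.1f} darcy)")
```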
Abstract:
Postmortem angiography methods that use water-soluble or lipid-soluble liquid contrast compounds may modify the composition of fluid-based biological samples and thus influence toxicological findings. In this study, we investigated whether toxicological investigations performed in urine collected prior to and after angiography using Angiofil® mixed with paraffin oil yield different qualitative or quantitative results. In addition, we studied whether diluting samples with 1% and 3% contrast medium solution may modify molecule concentrations. A postmortem angiography group consisting of 50 cases and a postmortem group without angiography consisting of 50 cases were formed. In the first group, toxicological investigations were performed in urine samples collected prior to and after angiography as well as in undiluted and diluted samples. In the second group, analyses were performed in undiluted and diluted urine, bile, gastric content, cerebrospinal and pericardial fluids collected during autopsy. The preliminary results indicate that differences may be observed between urine samples collected prior to and after angiography in the number of identified molecules, depending on the specific case. Analyses performed in diluted samples failed to reveal differences that might potentially alter the interpretation of toxicological results in all analyzed specimens for nearly all molecules, except for tetrahydrocannabinol and its metabolites. Although these findings suggest that toxicological analyses might be performed effectively in biological samples collected after angiography, at least in specific cases and for a large number of molecules, it remains advisable to collect biological fluids for toxicology prior to contrast medium injection.
Abstract:
Eusociality is taxonomically rare, yet associated with great ecological success. Surprisingly, studies of environmental conditions favouring eusociality are often contradictory. Harsh conditions associated with increasing altitude and latitude seem to favour increased sociality in bumblebees and ants, but the reverse pattern is found in halictid bees and polistine wasps. Here, we compare the life histories and distributions of populations of 176 species of Hymenoptera from the Swiss Alps. We show that differences in altitudinal distributions and development times among social forms can explain these contrasting patterns: highly social taxa develop more quickly than intermediate social taxa, and are thus able to complete the reproductive cycle in shorter seasons at higher elevations. This dual impact of altitude and development time on sociality illustrates that ecological constraints can elicit dynamic shifts in behaviour, and helps explain the complex distribution of sociality across ecological gradients.
Abstract:
BACKGROUND: According to recent guidelines, patients with coronary artery disease (CAD) should undergo revascularization if significant myocardial ischemia is present. Both cardiovascular magnetic resonance (CMR) and fractional flow reserve (FFR) allow for reliable ischemia assessment, and in combination with the anatomical information provided by invasive coronary angiography (CXA), such a work-up sets the basis for the decision whether to revascularize. The cost-effectiveness of these two strategies is compared. METHODS: Strategy 1) CMR to assess ischemia followed by CXA in ischemia-positive patients (CMR + CXA); Strategy 2) CXA followed by FFR in angiographically positive stenoses (CXA + FFR). The costs, evaluated from the third-party payer perspective in Switzerland, Germany, the United Kingdom (UK), and the United States (US), included public prices of the different outpatient procedures and costs induced by procedural complications and by diagnostic errors. The effectiveness criterion was the correct identification of hemodynamically significant coronary lesion(s) (= significant CAD) complemented by full anatomical information. Test performances were derived from the published literature. Cost-effectiveness ratios for both strategies were compared for hypothetical cohorts with different pretest likelihoods of significant CAD. RESULTS: CMR + CXA and CXA + FFR were equally cost-effective at a pretest likelihood of CAD of 62% in Switzerland, 65% in Germany, 83% in the UK, and 82% in the US, with costs of CHF 5'794, € 1'517, £ 2'680, and $ 2'179 per patient correctly diagnosed. Below these thresholds, CMR + CXA showed lower costs per patient correctly diagnosed than CXA + FFR. CONCLUSIONS: The CMR + CXA strategy is more cost-effective than CXA + FFR below a CAD prevalence of 62%, 65%, 83%, and 82% for the Swiss, German, UK, and US health care systems, respectively. These findings may help to optimize resource utilization in the diagnosis of CAD.
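The paper's actual inputs (published test performance and country-specific tariffs) are not reproduced here, but the mechanics of the comparison can be sketched: each strategy has an expected per-patient cost and a probability of correctly classifying a patient, both functions of the pretest likelihood p, and the break-even threshold is where the two cost-effectiveness ratios cross. A minimal sketch with invented test characteristics and unit costs:

```python
import numpy as np

# All sensitivities, specificities, and per-exam costs below are invented.
COST_CMR, COST_CXA, COST_FFR = 900.0, 1200.0, 700.0

def cer_cmr_cxa(p):
    """Cost per correctly diagnosed patient: CMR first, CXA if CMR positive."""
    p_cmr_pos = 0.85 * p + 0.15 * (1 - p)          # assumed sens 0.85, spec 0.85
    expected_cost = COST_CMR + p_cmr_pos * COST_CXA
    return expected_cost / (0.85 * p + 0.85 * (1 - p))

def cer_cxa_ffr(p):
    """Cost per correctly diagnosed patient: CXA first, FFR if angio positive."""
    p_angio_pos = 0.95 * p + 0.40 * (1 - p)        # angio also flags non-significant stenoses
    expected_cost = COST_CXA + p_angio_pos * COST_FFR
    return expected_cost / (0.93 * p + 0.95 * (1 - p))  # assumed composite sens/spec

p = np.linspace(0.05, 0.95, 181)
crossover = p[np.argmin(np.abs(cer_cmr_cxa(p) - cer_cxa_ffr(p)))]
print(f"With these inputs, the strategies break even near p = {crossover:.2f}")
```

With these invented numbers the cheaper first test wins at low pretest likelihood and the more definitive work-up wins at high likelihood, which is the qualitative pattern the study reports.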
Abstract:
Introduction: In my thesis I argue that economic policy is all about economics and politics. Consequently, analysing and understanding economic policy ideally involves at least two parts. The economics part is centred on the expected impact of a specific policy on the real economy, in terms of both efficiency and equity; its insights indicate the direction in which the fine-tuning of economic policies should go. However, fine-tuning of economic policies will most likely be subject to political constraints. That is why, in the politics part, a much better understanding can be gained by taking into account how the incentives of politicians and special interest groups, as well as the role played by different institutional features, affect the formation of economic policies. The first part and chapter of my thesis concentrates on the efficiency-related impact of economic policies: how does corporate income taxation in general, and corporate income tax progressivity in particular, affect the creation of new firms? Reduced progressivity and flat-rate taxes are in vogue: by 2009, 22 countries were operating flat-rate income tax systems, as were 7 US states and 14 Swiss cantons (for corporate income only). Tax reform proposals in the spirit of the "flat tax" model typically aim to reduce three parameters: the average tax burden, the progressivity of the tax schedule, and the complexity of the tax code. In joint work, Marius Brülhart and I explore the implications of changes in these three parameters for entrepreneurial activity, measured by counts of firm births in a panel of Swiss municipalities. Our results show that lower average tax rates and reduced complexity of the tax code promote firm births. Controlling for these effects, reduced progressivity inhibits firm births. Our reading of these results is that tax progressivity has an insurance effect that facilitates entrepreneurial risk taking. The positive effects of lower tax levels and reduced complexity are estimated to be significantly stronger than the negative effect of reduced progressivity. To the extent that firm births reflect desirable entrepreneurial dynamism, it is not the flattening of tax schedules that is key to successful tax reforms, but the lowering of average tax burdens and the simplification of tax codes. Flatness per se is of secondary importance and even appears to be detrimental to firm births. The second part of my thesis, which corresponds to the second and third chapters, concentrates on how economic policies are formed. By the nature of the analysis, these two chapters draw on a broader literature than the first chapter. Both economists and political scientists have done extensive research on how economic policies are formed, and researchers in both disciplines have recognised the importance of special interest groups trying to influence policy-making through various channels. In general, economists base their analysis on a formal and microeconomically founded approach while abstracting from institutional details. In contrast, political scientists' frameworks are generally richer in institutional features but lack the theoretical rigour of economists' approaches. I start from the economist's point of view, but I try to borrow as much as possible from the findings of political science to gain a better understanding of how economic policies are formed in reality.
In the second chapter, I take a theoretical approach and focus on the institutional policy framework to explore how interactions between different political institutions affect the outcome of trade policy in the presence of special interest groups' lobbying. Standard political economy theory treats the government as a single institutional actor that sets tariffs by trading off social welfare against contributions from special interest groups seeking industry-specific protection from imports. However, these models lack important (institutional) features of reality. That is why, in my model, I split the government into a legislative and an executive branch, both of which can be lobbied by special interest groups. Furthermore, the legislature has the option to delegate its trade policy authority to the executive, and I allow the executive to compensate the legislature in exchange for delegation. Despite ample anecdotal evidence, bargaining over the delegation of trade policy authority has not yet been formally modelled in the literature. I show that delegation has an impact on policy formation in that it leads to lower equilibrium tariffs compared with a standard model without delegation. I also show that delegation will only take place if the lobby is not strong enough to prevent it. Furthermore, the option to delegate increases the bargaining power of the legislature at the expense of the lobbies. These findings can therefore shed light on why the U.S. Congress often delegates trade policy authority to the executive. In the final chapter of my thesis, my coauthor, Antonio Fidalgo, and I take a narrower approach and focus on policy-making at the level of the individual politician, exploring how connections to private firms and networks within parliament affect individual politicians' decision-making. Theories in the spirit of the model of the second chapter show how campaign contributions from lobbies to politicians can influence economic policies, and an abundant empirical literature analyses ties between firms and politicians based on campaign contributions. However, the evidence on the impact of campaign contributions is mixed at best. In our paper, we analyse an alternative channel of influence in the shape of personal connections between politicians and firms through board membership. We identify a direct effect of board membership on individual politicians' voting behaviour and an indirect leverage effect when politicians with board connections influence non-connected peers. We assess the importance of these two effects using a vote in the Swiss parliament on a government bailout of the national airline, Swissair, in 2001, which serves as a natural experiment. We find that both the direct effect of connections to firms and the indirect leverage effect had a strong and positive impact on the probability that a politician supported the government bailout.
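As an illustration of the kind of specification the first chapter describes (firm-birth counts regressed on the average tax rate, progressivity, and tax-code complexity across a municipality-year panel), here is a minimal sketch on simulated data; the variable names, effect sizes, and the Poisson fixed-effects estimator are assumptions for illustration, not the paper's actual design:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulated stand-in for a municipality-year panel; all numbers are invented.
n_munis, n_years = 50, 10
panel = pd.DataFrame({
    "municipality": np.repeat(np.arange(n_munis), n_years),
    "year": np.tile(np.arange(n_years), n_munis),
    "avg_tax_rate": rng.uniform(0.05, 0.25, n_munis * n_years),
    "progressivity": rng.uniform(0.0, 0.10, n_munis * n_years),
    "complexity": rng.uniform(0.0, 1.0, n_munis * n_years),
})
rate = np.exp(1.5 - 4.0 * panel["avg_tax_rate"]
              + 2.0 * panel["progressivity"] - 0.5 * panel["complexity"])
panel["firm_births"] = rng.poisson(rate)

# Poisson count regression with municipality and year fixed effects.
model = smf.poisson(
    "firm_births ~ avg_tax_rate + progressivity + complexity"
    " + C(municipality) + C(year)",
    data=panel,
).fit(disp=False)
print(model.params[["avg_tax_rate", "progressivity", "complexity"]])
```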
Abstract:
Metabolic problems lead to numerous failures during clinical trials, and much effort is now devoted to developing in silico models that predict metabolic stability and metabolites. Such models are well established for cytochromes P450 and some transferases, whereas little has been done to predict the hydrolytic activity of human hydrolases. The present study was undertaken to develop a computational approach able to predict the hydrolysis of novel esters by human carboxylesterase hCES1. The study involves both docking analyses of known substrates to develop predictive models, and molecular dynamics (MD) simulations to reveal the in situ behavior of substrates and products, with particular attention paid to the influence of their ionization state. The results emphasize some crucial properties of the hCES1 catalytic cavity, confirming that, as a trend with several exceptions, hCES1 prefers substrates with relatively smaller and somewhat polar alkyl/aryl groups and larger hydrophobic acyl moieties. The docking results underline the usefulness of the hydrophobic interaction score proposed here, which allows a robust prediction of hCES1 catalysis, while the MD simulations show the different behavior of substrates and products in the enzyme cavity, suggesting in particular that basic substrates interact with the enzyme in their unprotonated form.
Abstract:
Myotonic dystrophy type 1 (DM-1) is caused by abnormal expansion of a (CTG) repeat located in the DM protein kinase gene. Respiratory problems have long been recognized as a major feature of this disorder. Because respiratory failure can be associated with dysfunction of the phrenic nerves and diaphragm muscle, we examined the diaphragm and respiratory neural network in transgenic mice carrying the human genomic DM-1 region with expanded repeats of more than 300 CTG, a valid model of the human disease. Morphologic and morphometric analyses revealed distal denervation of diaphragm neuromuscular junctions in DM-1 transgenic mice, indicated by a decrease in the size and shape complexity of end-plates and a reduction in the concentration of acetylcholine receptors on the postsynaptic membrane. More importantly, there was a significant reduction in the number of unmyelinated, but not of myelinated, fibers in DM-1 phrenic nerves; no morphologic alterations of the nerves or loss of neuronal cells were detected in medullary respiratory centers or cervical phrenic motor neurons. Because neuromuscular junctions are involved in action potential transmission and the afferent phrenic unmyelinated fibers control inspiratory activity, our results suggest that the respiratory impairment associated with DM-1 may be partially due to pathologic alterations in neuromuscular junctions and phrenic nerves.
Abstract:
Human perception of bitterness displays pronounced interindividual variation. This phenotypic variation is mirrored by equally pronounced genetic variation in the family of bitter taste receptor genes. To better understand the effects of common genetic variation on human bitter taste perception, we conducted a genome-wide association study on a discovery panel of 504 subjects and a validation panel of 104 subjects from the general population of São Paulo in Brazil. Correction for general taste sensitivity allowed us to identify a SNP in the cluster of bitter taste receptors on chr12 (10.88-11.24 Mb, build 36.1) significantly associated (best SNP: rs2708377, P = 5.31 × 10⁻¹³, r² = 8.9%, β = -0.12, s.e. = 0.016) with the perceived bitterness of caffeine. This association overlaps with, but is statistically distinct from, the previously identified SNP rs10772420 influencing the perception of quinine bitterness, which falls in the same bitter taste cluster. We replicated this association with quinine perception (P = 4.97 × 10⁻³⁷, r² = 23.2%, β = 0.25, s.e. = 0.020) and additionally found the effect of this genetic locus to be concentration specific, with a strong impact on the perception of low, but not of high, concentrations of quinine. Our study thus furthers our understanding of the complex genetic architecture of bitter taste perception.
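The single-SNP test behind the reported statistics (β, s.e., P, r²) is, in essence, a regression of the bitterness rating on allele dosage with general taste sensitivity as a covariate. A toy sketch on simulated data, with all effect sizes invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Simulated stand-in for one SNP association test; n matches the discovery
# panel size, everything else is invented for illustration.
n = 504
dosage = rng.binomial(2, 0.3, n)             # genotype coded as 0/1/2 minor alleles
taste_sensitivity = rng.normal(0.0, 1.0, n)  # general sensitivity covariate
rating = -0.12 * dosage + 0.5 * taste_sensitivity + rng.normal(0.0, 0.4, n)

X = sm.add_constant(np.column_stack([dosage, taste_sensitivity]))
fit = sm.OLS(rating, X).fit()
beta, se, pval = fit.params[1], fit.bse[1], fit.pvalues[1]
print(f"beta = {beta:.3f}, s.e. = {se:.3f}, P = {pval:.2e}")
# A genome-wide scan repeats this test per SNP and judges P against a
# genome-wide significance threshold (conventionally 5e-8).
```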
Abstract:
Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building 'underfit' models, having insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building 'overfit' models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting against overfitting and, consequently, how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM-building approaches best advances our knowledge of current and future species ranges.
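The underfitting/overfitting trade-off described above can be made concrete by fitting SDMs of increasing flexibility to the same occurrence data and comparing their cross-validated fit. A minimal sketch with a simulated unimodal occurrence-environment relationship; the data and the polynomial-logistic model family are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(2)

# Simulated presence/absence along one environmental gradient with a
# hump-shaped (unimodal) true response, peaking at mid-gradient.
env = rng.uniform(-3.0, 3.0, (400, 1))
p_true = 1.0 / (1.0 + np.exp(-(1.5 - env[:, 0] ** 2)))
occurrence = rng.binomial(1, p_true)

# SDMs of increasing complexity: polynomial degree controls the flexibility
# of the inferred occurrence-environment relationship.
for degree in (1, 2, 8):
    sdm = make_pipeline(PolynomialFeatures(degree),
                        LogisticRegression(max_iter=1000))
    score = cross_val_score(sdm, env, occurrence,
                            scoring="neg_log_loss", cv=5).mean()
    print(f"degree {degree}: mean CV log-loss = {-score:.3f}")
# Degree 1 underfits the unimodal response; very high degrees start fitting
# noise; cross-validated loss makes the trade-off visible.
```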