Abstract:
In this final project, the high-availability options for the PostgreSQL database management system were explored and evaluated. The primary objective of the project was to find a reliable replication system and implement it in a production environment. The secondary objective was to explore different load-balancing methods and compare their performance. The potential replication methods were thoroughly examined, and the most promising was implemented in a database system gathering weather information in Lithuania. The different load-balancing methods were performance-tested under different load scenarios, and the results were analysed. As a result of this project, a functioning PostgreSQL database replication system was built at the Lithuanian Hydrometeorological Service's headquarters, and definite guidelines for future load-balancing needs were produced. This study includes the actual implementation of a replication system in a demanding production environment, but only guidelines for building a load-balancing system in the same production environment.
Abstract:
OBJECTIVE: The aim of this study was to determine whether V˙O(2) kinetics and, specifically, the time constants of transitions from rest to heavy (τ(p)H) and severe (τ(p)S) exercise intensities are related to middle-distance swimming performance. DESIGN: Fourteen highly trained male swimmers (mean ± SD: 20.5 ± 3.0 yr; 75.4 ± 12.4 kg; 1.80 ± 0.07 m) performed a discontinuous incremental test, as well as square-wave transitions for heavy and severe swimming intensities, to determine V˙O(2) kinetics parameters using two exponential functions. METHODS: All the tests involved front-crawl swimming with breath-by-breath analysis using the Aquatrainer swimming snorkel. Endurance performance was recorded as the time taken to complete a 400 m freestyle swim within an official competition (T400), one month from the date of the other tests. RESULTS: T400 (mean ± SD: 251.4 ± 12.4 s) was significantly correlated with τ(p)H (15.8 ± 4.8 s; r=0.62; p=0.02) and τ(p)S (15.8 ± 4.7 s; r=0.61; p=0.02). The best single predictor of 400 m freestyle time, out of the variables that were assessed, was the velocity at V˙O(2max) (vV˙O(2max)), which accounted for 80% of the variation in performance between swimmers. However, τ(p)H and V˙O(2max) were also found to influence the prediction of T400 when they were included in a regression model that involved respiratory parameters only. CONCLUSIONS: Faster kinetics during the primary phase of the V˙O(2) response are associated with better performance during middle-distance swimming. However, vV˙O(2max) appears to be a better predictor of T400.
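The exponential modelling of V˙O(2) transitions can be illustrated for the primary phase alone. The sketch below uses synthetic breath-by-breath data and made-up parameter values (baseline, amplitude, delay, tau); it shows how a primary-phase time constant such as τ(p) is typically recovered by nonlinear least squares, not the authors' actual analysis pipeline:

```python
import numpy as np
from scipy.optimize import curve_fit

def primary_phase(t, baseline, amplitude, delay, tau):
    """Mono-exponential primary-phase VO2 response after a time delay."""
    t = np.asarray(t, dtype=float)
    rise = amplitude * (1.0 - np.exp(-(t - delay) / tau))
    return baseline + np.where(t >= delay, rise, 0.0)

# Synthetic data with a known time constant (tau = 16 s, like the group mean)
rng = np.random.default_rng(0)
t = np.arange(0.0, 180.0, 3.0)                  # one sample every 3 s
truth = primary_phase(t, baseline=0.5, amplitude=2.5, delay=12.0, tau=16.0)
vo2 = truth + rng.normal(0.0, 0.05, t.size)     # measurement noise

# Fit the four parameters; popt[3] is the estimated time constant tau_p
popt, _ = curve_fit(primary_phase, t, vo2, p0=[0.4, 2.0, 10.0, 20.0])
tau_p = popt[3]
print(f"estimated tau_p = {tau_p:.1f} s")
```

With low noise and a reasonable starting guess, the fitted `tau_p` lands close to the simulated 16 s value.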
Abstract:
Skin exposure to chemicals may lead, through percutaneous permeation, to significant absorption into the systemic circulation. Skin is the primary route of entry during some occupational activities, especially in agriculture. To reduce skin exposure, the use of personal protective equipment (PPE) is recommended. PPE efficiency is characterized by the time until products permeate through the material (lag time, Tlag). Both skin and PPE permeation are assessed using similar in vitro methods: the diffusion cell system. Flow-through diffusion cells were used in this study to assess the permeation of two herbicides, bentazon and isoproturon, as well as four related commercial formulations (Basagran(®), Basamais(®), Arelon(®) and Matara(®)). Permeation was measured through fresh excised human skin, protective clothing suits (Microchem(®) 3000, AgriSafe Pro(®), Proshield(®) and Microgard(®) 2000 Plus Green), and a combination of skin and suits. Both herbicides, whether tested alone or as active ingredients in formulations, permeated readily through human skin and the tested suits (Tlag < 2 h). High permeation coefficients were obtained regardless of formulation or tested membrane, except for Microchem(®) 3000. Short Tlag values were observed even when skin was covered with suits, except for Microchem(®) 3000. Kp values tended to decrease when suits covered the skin (except when Arelon(®) was applied to skin covered with AgriSafe Pro(®) and Microgard(®) 2000), suggesting that Tlag alone is insufficient for characterizing suits. To better estimate human skin permeation, in vitro experiments should not only use human skin but also consider the intended use of the suit, i.e., the active ingredient concentrations and type of formulation, which significantly affect skin permeation.
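The quantities Tlag and Kp named above are conventionally read off the linear steady-state portion of a cumulative-permeation curve from a diffusion-cell run: the slope gives the steady-state flux, its extrapolated time-axis intercept gives the lag time, and dividing the flux by the donor concentration gives the permeation coefficient. A minimal sketch with invented sample data and an assumed donor concentration (not values from this study):

```python
import numpy as np

# cumulative amount permeated per unit area (ug/cm^2), sampled hourly
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
q = np.array([0.0, 0.1, 0.8, 2.0, 3.2, 4.4, 5.6])

# fit the linear steady-state portion (here: the last four samples)
slope, intercept = np.polyfit(t[3:], q[3:], 1)
flux = slope                  # steady-state flux Jss, ug/cm^2/h
tlag = -intercept / slope     # lag time Tlag: time-axis intercept, h
c_donor = 100.0               # assumed donor concentration, ug/cm^3
kp = flux / c_donor           # permeation coefficient Kp, cm/h
print(f"Tlag = {tlag:.2f} h, Kp = {kp:.4f} cm/h")
```

For these toy numbers the steady-state flux is 1.2 ug/cm^2/h and Tlag is about 1.3 h, i.e. well under the 2 h threshold quoted in the abstract.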
Abstract:
OBJECTIVE: A single course of antenatal corticosteroids (ACS) is associated with a reduction in respiratory distress syndrome and neonatal death. The Multiple Courses of Antenatal Corticosteroids Study (MACS), a study involving 1858 women, was a multicentre randomized placebo-controlled trial of multiple courses of ACS, given every 14 days until 33+6 weeks or birth, whichever came first. The primary outcome of the study, a composite of neonatal mortality and morbidity, was similar for the multiple-ACS and placebo groups (12.9% vs. 12.5%), but infants exposed to multiple courses of ACS weighed less, were shorter, and had smaller head circumferences. Thus, for women who remain at increased risk of preterm birth, multiple courses of ACS (every 14 days) are not recommended. Chronic use of corticosteroids is associated with numerous side effects, including weight gain and depression. The aim of this postpartum assessment was to ascertain whether multiple courses of ACS were associated with maternal side effects. METHODS: Three months postpartum, women who participated in MACS were asked to complete a structured questionnaire that asked about maternal side effects of corticosteroid use during MACS and included the Edinburgh Postnatal Depression Scale. Women were also asked to evaluate their study participation. RESULTS: Of the 1858 women randomized, 1712 (92.1%) completed the postpartum questionnaire. There were no significant differences in the risk of maternal side effects between the two groups. A large number of women met the criteria for postpartum depression (14.1% in the ACS vs. 16.0% in the placebo group). Most women (94.1%) responded that they would participate in the trial again. CONCLUSION: In pregnancy, corticosteroids are given to women for fetal lung maturation and for the treatment of various maternal diseases.
In this international multicentre randomized controlled trial, multiple courses of ACS (every 14 days) were not associated with maternal side effects, and the majority of women responded that they would participate in such a study again.
Abstract:
The primary objective of the Fifth Assessment is to evaluate the level of progress in the deployment of high-speed Internet technologies in the State of Iowa.
Abstract:
With the advent of highly active antiretroviral therapy (HAART), HIV infection has become a chronic disease. Various end-stage organ failures have now become common co-morbidities and are primary causes of mortality in HIV-infected patients. Solid-organ transplantation has therefore been proposed for these patients, as HIV infection is no longer considered an absolute contraindication. The initial results of organ transplantation in HIV-infected patients are encouraging, with no differences in patient and graft survival compared with non-HIV-infected patients. The use of immunosuppressive drug therapy in HIV-infected patients has so far not shown major detrimental effects, and some drugs in combination with HAART have even demonstrated possible beneficial effects in specific HIV settings. Nevertheless, organ transplantation in HIV-infected patients remains a complex intervention, and more studies will be required to clarify open questions such as the long-term effects of drug interactions between antiretroviral and immunosuppressive drugs, the outcome of recurrent HCV infection in HIV-infected patients, the incidence of graft rejection, and long-term graft and patient survival. In this article, we first review the immunological pathogenesis of HIV infection and the rationale for using immunosuppression combined with HAART. We then discuss the most recent results of solid-organ transplantation in HIV-infected patients.
Abstract:
General Introduction This thesis can be divided into two main parts: the first, corresponding to the first three chapters, studies Rules of Origin (RoOs) in Preferential Trade Agreements (PTAs); the second part, the fourth chapter, is concerned with Anti-Dumping (AD) measures. Despite wide-ranging preferential access granted to developing countries by industrial ones under North-South Trade Agreements, whether reciprocal, like the Europe Agreements (EAs) or NAFTA, or not, such as the GSP, AGOA, or EBA, it has been claimed that the benefits from improved market access keep falling short of the full potential benefits. RoOs are largely regarded as a primary cause of the under-utilization of the improved market access offered by PTAs. RoOs are the rules that determine the eligibility of goods for preferential treatment. Their economic justification is to prevent trade deflection, i.e. to prevent non-preferred exporters from using the tariff preferences. However, they are complex, cost-raising and cumbersome, and can be manipulated by organised special interest groups. As a result, RoOs can restrain trade beyond what is needed to prevent trade deflection and hence restrict market access to a statistically significant and quantitatively large extent. Part I In order to further our understanding of the effects of RoOs in PTAs, the first chapter, written with Prof. Olivier Cadot, Celine Carrère and Prof. Jaime de Melo, describes and evaluates the RoOs governing EU and US PTAs. It draws on utilization-rate data for Mexican exports to the US in 2001 and on similar data for ACP exports to the EU in 2002. The paper makes two contributions. First, we construct an R-index of restrictiveness of RoOs along the lines first proposed by Estevadeordal (2000) for NAFTA, modifying it and extending it for the EU's single list (SL). This synthetic R-index is then used to compare RoOs under NAFTA and PANEURO. The two main findings of the chapter are as follows.
First, it shows, in the case of PANEURO, that the R-index is useful for summarizing how countries are differently affected by the same set of RoOs because of their different export baskets to the EU. Second, it is shown that the R-index is a relatively reliable statistic in the sense that, subject to caveats, after controlling for the extent of tariff preference at the tariff-line level, it accounts for differences in utilization rates at the tariff-line level. Finally, together with utilization rates, the index can be used to estimate the total compliance costs of RoOs. The second chapter proposes a reform of preferential RoOs with the aim of making them more transparent and less discriminatory. Such a reform would make preferential blocs more "cross-compatible" and would therefore facilitate cumulation. It would also contribute to moving regionalism toward more openness and hence to making it more compatible with the multilateral trading system. It focuses on NAFTA, one of the most restrictive FTAs (see Estevadeordal and Suominen 2006), and proposes a way forward that is close in spirit to what the EU Commission is considering for the PANEURO system. In a nutshell, the idea is to replace the current array of RoOs with a single instrument: Maximum Foreign Content (MFC). An MFC is a conceptually clear and transparent instrument, like a tariff. Therefore, changing all instruments into an MFC would bring improved transparency, much like the "tariffication" of NTBs. The methodology for this exercise is as follows: In step 1, I estimate the relationship between utilization rates, tariff preferences and RoOs. In step 2, I retrieve the estimates and invert the relationship to get a simulated MFC that gives, line by line, the same utilization rate as the old array of RoOs.
In step 3, I calculate the trade-weighted average of the simulated MFC across all lines to get an overall equivalent of the current system and explore the possibility of setting this unique instrument at a uniform rate across lines. This would have two advantages. First, like a uniform tariff, a uniform MFC would make it difficult for lobbies to manipulate the instrument at the margin. This argument is standard in the political-economy literature and has been used time and again in support of reductions in the variance of tariffs (together with standard welfare considerations). Second, uniformity across lines is the only way to eliminate the indirect source of discrimination alluded to earlier. Only if two countries face uniform RoOs and tariff preferences will they face uniform incentives irrespective of their initial export structure. The result of this exercise is striking: the average simulated MFC is 25% of the good's value, a very low (i.e. restrictive) level, confirming Estevadeordal and Suominen's critical assessment of NAFTA's RoOs. Adopting a uniform MFC would imply a relaxation from the benchmark level for sectors like chemicals or textiles & apparel, and a stiffening for wood products, paper and base metals. Overall, however, the changes are not drastic, suggesting perhaps only moderate resistance to change from special interests. The third chapter of the thesis considers whether the Europe Agreements of the EU, with the current sets of RoOs, could be the potential model for future EU-centered PTAs. First, I have studied and coded, at the six-digit level of the Harmonised System (HS), both the old RoOs (used before 1997) and the "single list" RoOs (used since 1997). Second, using a Constant Elasticity of Transformation function in which CEEC exporters smoothly mix sales between the EU and the rest of the world by comparing producer prices on each market, I have estimated the trade effects of the EU RoOs.
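The two-step inversion described above (estimate the utilization relationship, then solve line by line for the MFC that reproduces the observed utilization rate, then trade-weight the result) can be sketched with a toy model. The logistic functional form and the coefficients a, b, c below are illustrative assumptions, not estimates from the thesis:

```python
import numpy as np
from scipy.optimize import brentq

def utilization(pref, mfc, a=-1.0, b=0.4, c=3.0):
    """Toy logistic utilization rate: rises with the tariff-preference
    margin (pref, percentage points) and with a laxer MFC (0..1)."""
    return 1.0 / (1.0 + np.exp(-(a + b * pref + c * mfc)))

def equivalent_mfc(pref, u_obs):
    """Invert the fitted relationship: the MFC in [0, 1] that reproduces
    the observed utilization rate for a given tariff line."""
    return brentq(lambda m: utilization(pref, m) - u_obs, 0.0, 1.0)

# three hypothetical tariff lines: preference margin, observed utilization,
# and trade weight (weights sum to 1)
lines = [(5.0, 0.90, 0.5), (3.0, 0.80, 0.3), (8.0, 0.95, 0.2)]

# step 2 per line, then step 3: trade-weighted average simulated MFC
mfcs = [equivalent_mfc(p, u) for p, u, _ in lines]
avg_mfc = sum(w * m for (_, _, w), m in zip(lines, mfcs))
print(f"trade-weighted average MFC = {avg_mfc:.3f}")
```

The root-finder recovers, for each line, the unique MFC consistent with the observed utilization, mirroring the line-by-line inversion before averaging.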
The estimates suggest that much of the market access conferred by the EAs (outside sensitive sectors) was undone by the cost-raising effects of RoOs. The chapter also contains an analysis of the evolution of the CEECs' trade with the EU from post-communism to accession. Part II The last chapter of the thesis is concerned with anti-dumping, another trade-policy instrument having the effect of reducing market access. In 1995, the Uruguay Round introduced into the Anti-Dumping Agreement (ADA) a mandatory "sunset review" clause (Article 11.3 ADA) under which anti-dumping measures should be reviewed no later than five years from their imposition and terminated unless there is a serious risk of resumption of injurious dumping. The last chapter, written with Prof. Olivier Cadot and Prof. Jaime de Melo, uses a new database on Anti-Dumping (AD) measures worldwide to assess whether the sunset-review agreement had any effect. The question we address is whether the WTO Agreement succeeded in imposing the discipline of a five-year cycle on AD measures and, ultimately, in curbing their length. Two methods are used: count-data analysis and survival analysis. First, using Poisson and Negative Binomial regressions, the count of AD measure revocations is regressed on (inter alia) the count of initiations lagged five years. The analysis yields a coefficient on measure initiations lagged five years that is larger and more precisely estimated after the agreement than before, suggesting some effect. However, the coefficient estimate is nowhere near the value that would give a one-for-one relationship between initiations and revocations after five years. We also find that (i) if the agreement affected EU AD practices, the effect went the wrong way, the five-year cycle being quantitatively weaker after the agreement than before; (ii) the agreement had no visible effect on the United States except for a one-time peak in 2000, suggesting a mopping-up of old cases.
Second, the survival analysis of AD measures around the world suggests a shortening of their expected lifetime after the agreement, and this shortening effect (a downward shift in the survival function post-agreement) was larger and more significant for measures targeted at WTO members than for those targeted at non-members (for which WTO disciplines do not bind), suggesting that compliance was de jure. A difference-in-differences Cox regression confirms this diagnosis: controlling for the countries imposing the measures, for the investigated countries and for the products' sector, we find a larger increase in the hazard rate of AD measures covered by the Agreement than for other measures.
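The survival analysis of measure lifetimes can be sketched with a hand-rolled Kaplan-Meier estimator; the durations and censoring flags below are invented for illustration and are not data from the AD database:

```python
import numpy as np

def kaplan_meier(durations, events):
    """Kaplan-Meier survival curve. events[i] = 1 if the AD measure was
    revoked (event observed), 0 if still in force at last observation
    (right-censored). Returns a list of (time, survival) points."""
    durations = np.asarray(durations, dtype=float)
    events = np.asarray(events, dtype=int)
    curve, surv = [], 1.0
    for t in np.unique(durations[events == 1]):
        at_risk = np.sum(durations >= t)                # measures still alive at t
        revoked = np.sum((durations == t) & (events == 1))
        surv *= 1.0 - revoked / at_risk                 # product-limit update
        curve.append((float(t), float(surv)))
    return curve

# toy lifetimes (years) of AD measures; event flag 0 marks censoring
curve = kaplan_meier([2, 3, 3, 5, 6], [1, 1, 0, 1, 0])
for t, s in curve:
    print(f"S({t:.0f}) = {s:.2f}")
```

Comparing two such curves (pre- vs. post-agreement measures) is the descriptive counterpart of the downward shift in the survival function that the Cox regression formalizes.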
Abstract:
Abstract The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study and much of the following work was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. 
Among other things, they usually display broad degree distributions and show small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice, nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks can explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measures. Furthermore, I extract and describe its community structure, taking into account the intensity of a collaboration. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, suggesting also an effective view of it as opposed to a historical one.
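The clustering coefficient mentioned above can be computed per vertex directly from an adjacency structure; a minimal sketch (the example graph is illustrative):

```python
def clustering(adj, v):
    """Local clustering coefficient of vertex v: the fraction of pairs of
    v's neighbours that are themselves connected. adj maps each node to
    the set of its neighbours (undirected graph)."""
    nbrs = adj[v]
    k = len(nbrs)
    if k < 2:
        return 0.0  # undefined for degree < 2; conventionally 0
    links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
    return 2.0 * links / (k * (k - 1))

# a triangle (1, 2, 3) with a pendant node 4 attached to 3
graph = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3}}
print(clustering(graph, 1))   # both neighbours of 1 are connected
print(clustering(graph, 3))   # only one of three neighbour pairs is connected
```

Averaging this quantity over all vertices gives the network-level clustering that distinguishes small-world graphs from comparably sized random graphs.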
Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for the maintenance of this same cooperation. I point out the fact that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and their underlying community structure. Finally, I show that cooperation level and stability depend not only on the game played, but also on the evolutionary dynamic rules used and the individual payoff calculations.
Abstract:
Secretory IgA (SIgA) plays an important role in the protection and homeostatic regulation of the intestinal, respiratory, and urogenital mucosal epithelia separating the outside environment from the inside of the body. This primary function of SIgA is referred to as immune exclusion, a process that limits the access of numerous microorganisms and mucosal antigens to these thin and vulnerable mucosal barriers. SIgA has been shown to be involved in preventing opportunistic pathogens from entering and disseminating in the systemic compartment, as well as in tightly controlling the necessary symbiotic relationship existing between commensals and the host. Clearance by peristalsis thus appears as one of the numerous mechanisms whereby SIgA fulfills its function at mucosal surfaces. Sampling of antigen-SIgA complexes by microfold (M) cells, the intimate contact occurring with Peyer's patch dendritic cells (DC), down-regulation of inflammatory processes, and modulation of epithelial and DC responsiveness are some of the recently identified processes to which the contribution of SIgA has been underscored. This review aims at presenting, with emphasis at the biochemical level, how the molecular complexity of SIgA can serve these multiple and non-redundant modes of action.
Abstract:
The manner by which genotype and environment affect complex phenotypes is one of the fundamental questions in biology. In this study, we quantified the transcriptome, a subset of the metabolome, and, using targeted proteomics, a subset of the liver proteome from 40 strains of the BXD mouse genetic reference population on two diverse diets. We discovered dozens of transcript, protein, and metabolite QTLs, several of which linked to metabolic phenotypes. Most prominently, Dhtkd1 was identified as a primary regulator of 2-aminoadipate, explaining variance in fasted glucose and diabetes status in both mice and humans. These integrated molecular profiles also allowed further characterization of complex pathways, particularly the mitochondrial unfolded protein response (UPR(mt)). UPR(mt) shows strikingly variant responses at the transcript and protein level that are remarkably conserved among C. elegans, mice, and humans. Overall, these examples demonstrate the value of an integrated multilayered omics approach to characterizing complex metabolic phenotypes.
Abstract:
BACKGROUND: Lapatinib is an effective anti-HER2 therapy in advanced breast cancer and docetaxel is one of the most active agents in breast cancer. Combining these agents in pre-treated patients with metastatic disease had previously proved challenging, so the primary objective of this study was to determine the maximum tolerated dose (MTD) in treatment-naive patients, by identifying acute dose-limiting toxicities (DLT) during cycle 1 in the first part of a phase 1-2 neoadjuvant European Organisation for Research and Treatment of Cancer (EORTC) trial. PATIENTS AND METHODS: Patients with large operable or locally advanced HER2-positive breast cancer were treated with continuous lapatinib and docetaxel every 21 days for 4 cycles. Dose levels (DLs) were: 1000/75, 1250/75, 1000/85, 1250/85, 1000/100 and 1250/100 (mg/day)/(mg/m(2)). RESULTS: Twenty-one patients were included. Two DLTs occurred at dose level 5 (1000/100): one grade 4 neutropenia ⩾7 days and one febrile neutropenia. A further 3 patients were therefore treated at the same dose with prophylactic granulocyte colony-stimulating factor (G-CSF), and 3 patients at dose level 6. No further DLTs were observed. CONCLUSIONS: Our recommended dose for phase II is lapatinib 1000 mg/day and docetaxel 100 mg/m(2) with G-CSF in HER2-positive non-metastatic breast cancer. The dose of lapatinib should have been 1250 mg/day, but we were mindful of the high rate of treatment discontinuation in GeparQuinto with lapatinib 1250 mg/day combined with docetaxel. No grade 3-4 diarrhoea was observed. Pharmacodynamic analysis suggests that concomitant medications altering P-glycoprotein activity (in addition to lapatinib) can modify toxicity, including non-haematological toxicities. This needs verification in larger trials, where it may contribute to understanding the sources of variability in clinical toxicity and treatment discontinuation.
Abstract:
Human decision-making has consistently demonstrated deviation from "pure" rationality. Emotions are a primary driver of human actions, and the current study investigates how perceived emotions and personality traits may affect decision-making during the Ultimatum Game (UG). We manipulated emotions by showing images with emotional connotations while participants decided how to split money with a second player. Event-related potentials (ERPs) from scalp electrodes were recorded during the whole decision-making process. We observed significant differences in the activity of central and frontal areas when participants offered money compared with when they accepted or rejected an offer. We found that participants were more likely to offer a higher amount of money when making their decision in association with negative emotions. Furthermore, participants were more likely to accept offers when making their decision in association with positive emotions. Honest, conscientious, and introverted participants were more likely to accept offers. Our results suggest that factors other than a rational strategy may predict economic decision-making in the UG.
Abstract:
BACKGROUND: Strict definition of invasive aspergillosis (IA) cases is required to allow precise conclusions about the efficacy of antifungal therapy. The Global Comparative Aspergillus Study (GCAS) compared voriconazole to amphotericin B (AmB) deoxycholate for the primary therapy of IA. Because predefined definitions used for this trial were substantially different from the consensus definitions proposed by the European Organization for Research and Treatment of Cancer/Mycoses Study Group in 2008, we recategorized the 379 episodes of the GCAS according to the later definitions. METHODS: The objectives were to assess the impact of the current definitions on the classification of the episodes and to provide comparative efficacy for probable/proven and possible IA in patients treated with either voriconazole or AmB. In addition to original data, we integrated the results of baseline galactomannan serum levels obtained from 249 (65.7%) frozen samples. The original response assessment was accepted unchanged. RESULTS: Recategorization allowed 59 proven, 178 probable, and 106 possible IA cases to be identified. A higher favorable 12-week response rate was obtained with voriconazole (54.7%) than with AmB (29.9%) (P < .0001). Survival was higher for voriconazole for mycologically documented (probable/proven) IA (70.2%) than with AmB (54.9%) (P = .010). Higher response rates were obtained in possible IA treated with voriconazole vs AmB with the same magnitude of difference (26.2%; 95% confidence interval [CI], 7.2%-45.3%) as in mycologically documented episodes (24.3%; 95% CI, 11.9%-36.7%), suggesting that possible cases are true IA. CONCLUSIONS: Recategorization resulted in a better identification of the episodes and confirmed the higher efficacy of voriconazole over AmB deoxycholate in mycologically documented IA.
Abstract:
The primary purpose of this brief is to provide various statistical and institutional details on the development and current status of the public agricultural research system in Cape Verde. This information has been collected and presented in a systematic way in order to inform and thereby improve research policy formulation with regard to the Cape Verdean NARS. Most importantly, these data are assembled and reported in a way that makes them directly comparable with the data presented in the other country briefs in this series. And because institutions take time to develop and there are often considerable lags in the agricultural research process, it is necessary for many analytical and policy purposes to have access to longer-run series of data. NARSs vary markedly in their institutional structure, and these institutional aspects can have a substantial and direct effect on their research performance. To provide a basis for analysis and cross-country, over-time comparisons, the various research agencies in a country have been grouped into five general categories: government, semi-public, private, academic, and supranational. A description of these categories is provided in table 1.