971 results for Isomerization Equilibrium
Abstract:
This work is divided into two distinct parts. The first part is a study of the metal-organic framework UiO-66Zr, where the aim was to determine the force field that best describes the adsorption equilibrium properties of two different gases, methane and carbon dioxide. The second part focuses on the topology of the single-walled carbon nanotube for ethane adsorption; here the aim was to simplify the solid-fluid force field model as much as possible, in order to increase the computational efficiency of the Monte Carlo simulations. The choice of both adsorbents is based on their potential use in adsorption processes such as carbon dioxide capture and storage, natural gas storage, separation of the components of biogas, and olefin/paraffin separations. The adsorption studies on the two porous materials were performed by molecular simulation using the grand canonical Monte Carlo (μ,V,T) method, over the temperature range 298-343 K and the pressure range 0.06-70 bar. The calibration curves of pressure and density as a function of chemical potential and temperature for the three adsorbates under study were obtained by Monte Carlo simulation in the canonical ensemble (N,V,T); polynomial fitting and interpolation of the resulting data made it possible to determine the pressure and gas density at any chemical potential. The adsorption equilibria of methane and carbon dioxide in UiO-66Zr were simulated and compared with the experimental data obtained by Jasmina H. Cavka et al. The results show that the best force field for both gases is a chargeless united-atom force field based on the TraPPE model. Using this validated force field, it was possible to estimate the isosteric heats of adsorption and the Henry constants.
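The calibration procedure described above (a polynomial fit plus interpolation of canonical-ensemble data) can be sketched as follows; the data points, units, and polynomial degree are illustrative assumptions, not values from the thesis:

```python
import numpy as np

# Hypothetical (N,V,T) calibration data at one temperature: chemical
# potential (arbitrary reduced units) versus simulated pressure (bar).
mu = np.array([-3000.0, -2800.0, -2600.0, -2400.0, -2200.0])
pressure = np.array([0.5, 1.8, 5.2, 13.9, 34.0])

# Fit log(P) as a low-order polynomial in mu; log-space is usually
# smoother, so interpolating between simulated state points is safer.
coeffs = np.polyfit(mu, np.log(pressure), deg=2)

def pressure_at(mu_query):
    """Interpolated pressure (bar) at an arbitrary chemical potential."""
    return np.exp(np.polyval(coeffs, mu_query))
```

The same construction applied to density data gives the gas density at any chemical potential; a second fit over temperature would extend the curves to the whole 298-343 K range.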
In the grand canonical Monte Carlo simulations of carbon nanotubes, we conclude that the fastest runs are obtained with a force field that approximates the nanotube as a smooth cylinder; this approximation yields execution times 1.6 times faster than those of typical atomistic runs.
Abstract:
Spite is defined as an act that causes loss of payoff to an opponent at a cost to the actor. As one of the four fundamental behaviours in sociobiology, it has received far less attention than its counterparts, selfishness and cooperation. It has, however, been established as a viable strategy in small populations when used against negatively related individuals. Because of this, spite can either i) disappear or ii) remain at equilibrium with cooperative strategies due to the willingness of spiteful individuals to pay a cost in order to punish. This thesis sets out to understand whether the propensity for spiteful behaviour is inherent or whether it develops with age.
For that effect, two game-theoretical experiments were performed with schoolboys and schoolgirls aged 6 to 22. The first, a 2 x 2 game, was tested in two variants: 1) a prize was awarded to both players, proportional to their accumulated points; 2) a prize was given to the player with the most points. Each player faced the following dilemma: i) to maximise pay-off, risking a lower pay-off than the opponent's; or ii) not to maximise pay-off, in order to keep the opponent's pay-off from rising above their own. The second game was a dictator experiment with two choices: (A) a selfish/altruistic choice, affording more payoff to the dictator than B but even more to the recipient than to the dictator, and (B) a spiteful choice, affording less payoff to the dictator than A but an even lower payoff to the recipient. The dilemma here was that if subjects behaved selfishly, they obtained more payoff for themselves while at the same time increasing their opponent's payoff. If they were spiteful, they would rather have more payoff than their colleague, at the cost of less for themselves. Experiments were run in schools in two different areas of Portugal (mainland and Azores) to understand whether spiteful preferences varied with age. Results of the first experiment suggested that (1) students understood the first variant as a coordination game and engaged in maximising behaviour by copying their opponents' plays; (2) repeating students engaged in spiteful behaviour more often than in maximising behaviour, particularly 14-year-olds; (3) most students engaged in reciprocal behaviour from ages 12 to 16, after which they began developing a higher tolerance for their opponents' choices. Results of the second experiment suggested that (1) selfish strategies were prevalent until the age of 6, (2) altruistic tendencies emerged thereafter, and (3) spiteful strategies began being chosen more often by 8-year-olds.
These results add to the relatively scarce body of literature on spite and suggest that this type of behaviour is closely tied with other-regarding preferences, parochialism and the children’s stages of development.
Abstract:
The main objective of this thesis was the development of a gold nanoparticle-based methodology for the detection of DNA adducts as biomarkers, in order to overcome drawbacks of currently employed techniques. To achieve this objective, the experimental work was divided into three components: sample preparation, method of detection, and development of a model for exposure to acrylamide. Different techniques were employed and combined for the de-complexation and purification of DNA samples (including ultrasonic energy, nuclease digestion and chromatography), resulting in a complete protocol for sample treatment prior to detection. The detection of alkylated nucleotides using gold nanoparticles was performed by two distinct methodologies: mass spectrometry and colorimetric detection. In mass spectrometry, gold nanoparticles were employed for laser desorption/ionisation instead of the organic matrix. Identification of nucleotides was possible by fingerprint; however, no specific mass signals were detected when using gold nanoparticles to analyse biological samples. An alternative method using the colorimetric properties of gold nanoparticles was employed for detection. This method, inspired by the non-cross-linking assay, allowed the identification of glycidamide-guanine adducts and DNA adducts generated in vitro. For the development of a model of exposure, two different aquatic organisms were studied: the goldfish and the mussel. Organisms were exposed to waterborne acrylamide, after which mortality was recorded and effect concentrations were estimated. In goldfish, both genotoxicity and metabolic alterations were assessed and revealed dose-effect relationships for acrylamide. Histopathological alterations were verified primarily in pancreatic cells, but also in hepatocytes. Mussels showed higher effect concentrations than goldfish.
Biomarkers of oxidative stress, biotransformation and neurotoxicity were analysed after prolonged exposure, showing mild oxidative stress in mussel cells, and induction of enzymes involved in detoxification of oxygen radicals. A qualitative histopathological screening revealed gonadotoxicity in female mussels, which may present some risk to population equilibrium.
Abstract:
INTRODUCTION: The present study investigated the association of mannose-binding lectin (MBL) gene polymorphism and serum levels with HIV-1 infection. METHODS: Blood samples (5 mL) were collected from 97 HIV-1-infected individuals resident in Belém, State of Pará, Brazil, who attended the Special Outpatient Unit for Infections and Parasitic Diseases (URE-DIPE). CD4+ T-lymphocyte counts and plasma viral load were quantified. A 349-bp fragment of exon 1 of the MBL gene was amplified via PCR, using genomic DNA extracted from controls and HIV-1-infected individuals, following established protocols. MBL plasma levels of the patients were quantified using an enzyme immunoassay kit. RESULTS: Two alleles were observed: MBL*O, with a frequency of 26.3% in HIV-1-infected individuals; and the wild-type allele MBL*A (73.7%). Similar frequencies were observed in the control group (p > 0.05). Genotype frequencies were distributed according to the Hardy-Weinberg equilibrium in both groups. Mean MBL plasma levels varied by genotype, with statistically significant differences between the AA and AO (p < 0.0001) and AA and OO (p < 0.001) genotypes, but not between AO and OO (p = 0.17). Additionally, CD4+ T-lymphocyte counts and plasma viral load did not differ significantly by genotype (p > 0.05). CONCLUSIONS: The results of this study do not support the hypothesis that MBL gene polymorphism or low plasma MBL concentrations have a direct influence on HIV-1 infection, although a broader study involving a larger number of patients is needed.
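The Hardy-Weinberg check reported in the results can be illustrated with a simple chi-square statistic computed from genotype counts; the function and the counts used below are generic illustrations, not the study's data:

```python
def hardy_weinberg_chi2(n_AA, n_AO, n_OO):
    """Chi-square statistic for departure from Hardy-Weinberg equilibrium
    at a biallelic locus (e.g. MBL*A versus MBL*O genotype counts)."""
    n = n_AA + n_AO + n_OO
    p = (2 * n_AA + n_AO) / (2 * n)  # frequency of allele A
    q = 1.0 - p                      # frequency of allele O
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_AA, n_AO, n_OO)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

A statistic below the 5% critical value of 3.84 (one degree of freedom) is consistent with equilibrium, matching the p > 0.05 reported above.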
Abstract:
ABSTRACT - In a time of budgetary constraint, the hospitals of the Portuguese National Health Service (SNS) find themselves obliged to improve the efficiency with which available resources are used, in order to contribute to their financial equilibrium. It falls to each provider to analyse its position, assess its opportunities, and adopt strategies that translate into an effective improvement in efficiency in the short, medium or long term. The analysis and control of the waste associated with the provision of healthcare is, broadly, one of those opportunities. This work explores opportunities for reducing waste of medicines, from a purely operational perspective, at the level of the functions performed by the Pharmaceutical Services (SF). In the hospital under study, the different production lines of the SF were followed, namely the tasks involved in the process of Individual Daily Unit-Dose Distribution, in the distribution of medicines to the Emergency Department (SU), and in the preparation of cytotoxic and immunomodulatory drugs for the Oncology Day Hospital. During 2013, the SF returned to suppliers 0.07% and wrote off 0.05% of the expenditure on medicines. The analysis of the recorded medication errors reflects the type of distribution adopted for most of the hospital's inpatient services. The improvements identified at this level involve reinforcing the human resources performing medicine-dispensing tasks, but also implementing a culture of recording errors and accidents, based on the information system, so that the associated waste can be quantified and action can be taken to optimise the circuit. The relationship between the distribution method adopted for the SU and the use of medicines in this department was investigated only for medicines with individual administration records. A utilisation efficiency index of 67.7% was determined between what was dispensed and what was administered.
A cost of €32,229.60 for 2013 is associated with the discrepancies found. It was also found that, in the consumption of cytotoxic and immunomodulatory drugs during April 2013, there was an average waste index of 14.7% between what was prescribed and what was consumed, which translated into a monthly waste cost of €13,070.90. Based on the monthly waste, it was estimated that the annual waste associated with the manipulation of cytotoxic and immunomodulatory drugs should correspond to 5.5% of the department's annual expenditure on these medicines. Notwithstanding the limitations encountered during the work, and although part of the waste identified is unavoidable, it was demonstrated that waste of medicines can represent a non-negligible but controllable share of the expenditure of the hospital under study. Once this waste is known, its containment can have an impact on reducing expenditure in the short to medium term, without the need to ration the use of medicines and without changing the standards of quality of care required by the regulator and by patients. Finally, recommendations for reducing waste of medicines are presented, appropriate to each of the dimensions analysed.
Abstract:
We use a novel pricing model to imply time series of diffusive volatility and jump intensity from S&P 500 index options. These two measures capture the ex ante risk assessed by investors. Using a simple general equilibrium model, we translate the implied measures of ex ante risk into an ex ante risk premium. The average premium that compensates the investor for the ex ante risks is 70% higher than the premium for realized volatility. The equity premium implied from option prices is shown to significantly predict subsequent stock market returns.
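The paper's pricing model includes jumps and is not reproduced here; as a minimal illustration of what "implying" a measure from option prices means, the sketch below backs out Black-Scholes implied volatility from a call price by bisection (all inputs are hypothetical):

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0):
    """Implied volatility by bisection; assumes an arbitrage-free price.
    Works because the call price is monotone increasing in sigma."""
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Repeating this inversion day by day yields the implied time series referred to above; the paper's two-factor model additionally separates the diffusive and jump components.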
Abstract:
Clinical research is essential for the development of new drugs, diagnostic tests and devices. Clinical monitoring is implemented to improve the quality of research and to attain high ethical and scientific standards. This review discusses the role of clinical monitors, taking into account the variety of scenarios in which medical research is developed, and highlights the challenges faced by research teams in ensuring that patients' rights are respected and that the social role of scientific research is preserved. Specific emphasis is given to the ethical dilemmas arising from the multiple roles which clinical monitors play in the research framework, mainly those involving the delicate equilibrium between loyalty to the sponsor and loyalty to the research subjects. The essential role of clinical monitoring in research conducted in poor healthcare settings is highlighted as an approach to achieving the strengthening of local infrastructure needed to reach an adequate level of good clinical practice.
Abstract:
Simulated moving bed (SMB) chromatography is attracting more and more attention, since it is a powerful technique for complex separation tasks. Nowadays, more than 60% of preparative SMB units are installed in the pharmaceutical and food industries [SDI, Preparative and Process Liquid Chromatography: The Future of Process Separations, International Strategic Directions, Los Angeles, USA, 2002. http://www.strategicdirections.com]. Chromatography is the method of choice in these fields, because pharmaceuticals and fine chemicals often have physico-chemical properties that differ little from those of the by-products, and they may be thermally unstable. In these cases, standard separation techniques such as distillation and extraction are not applicable. The noteworthiness of preparative chromatography, particularly the SMB process, as a separation and purification process in the above-mentioned industries has been increasing, due to its flexibility, energy efficiency and higher product purity. Consequently, a new SMB paradigm is called for by the large number of potential small-scale applications of SMB technology, which exploit the flexibility and versatility of the technology. In this new SMB paradigm, a number of possibilities for improving SMB performance through variation of parameters during a switching interval are pushing the trend toward units with a smaller number of columns, because less stationary phase is used and the setup is more economical. This is especially important for the pharmaceutical industry, where SMBs are seen as multipurpose units that can be applied to different separations in all stages of the drug-development cycle. In order to reduce the experimental effort, and accordingly the cost associated with the development of separation processes, simulation models are intensively used.
One important aspect in this context is the determination of the adsorption isotherms in SMB chromatography, where separations are usually carried out under strongly nonlinear conditions in order to achieve higher productivities. The accurate determination of the competitive adsorption equilibrium of the enantiomeric species is thus of fundamental importance for computer-assisted optimization or process scale-up. Two major SMB operating problems are apparent at production scale: the assessment of product quality and the maintenance of long-term stable and controlled operation. Constraints regarding product purity, dictated by pharmaceutical and food regulatory organizations, have drastically increased the demand for product quality control. The strict regulations imposed are increasing the need to develop optically pure drugs. (...)
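The competitive adsorption equilibrium mentioned above is commonly modeled with a competitive Langmuir isotherm; the abstract does not specify the isotherm form, so the following is only a generic sketch with illustrative parameters:

```python
def competitive_langmuir(c, Q, b):
    """Loadings q_i of species competing for the same adsorption sites:
        q_i = Q_i * b_i * c_i / (1 + sum_j b_j * c_j),
    where c are fluid-phase concentrations, Q saturation capacities and
    b adsorption equilibrium constants."""
    denom = 1.0 + sum(bj * cj for bj, cj in zip(b, c))
    return [Qi * bi * ci / denom for Qi, bi, ci in zip(Q, b, c)]
```

For two enantiomers the shared denominator couples the loadings, which is what makes the equilibrium nonlinear at the high concentrations typical of preparative SMB operation.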
Abstract:
This paper studies the existing price linkage between generic and branded pharmaceuticals, in which the generic price must be a fraction of the branded price. Using a vertical differentiation model, we look at the market equilibrium, the effects on the brand producer's incentives to develop new products, and the possibility of predation by the brand producer against the generic firm. We find that the price linkage increases prices compared to no indexation, and that it may increase the incentives for the brand producer to expand its set of products. When prices are freely set, the branded firm may also want to introduce a new, higher-quality product, but will prefer to remove the original one from the market. Predation may occur in both schemes, but the price linkage may give the branded firm fewer incentives to predate while compensating losses with a new drug.
Abstract:
Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time there was a convergence between the normative and positive views of human behavior, in that the ideal and the predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. However, in the last decades, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This shift was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity of accounting for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function).
If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly, in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, this cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then, we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout.
Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decide alike, even if no communication is established between them. This kind of belief simplifies the decision of the agent, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all the possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller party may also win, provided its relative size is not too small; a larger share of self-delusional voters in the minority party lowers this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006), and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
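The Chapter 2 finding on asymmetric equilibria under a thinking cost can be sketched numerically; as an illustration only, assume linear inverse demand P = a - q1 - q2, constant marginal cost c, and a fixed cost k of thinking (all parameter values below are hypothetical):

```python
def profit(qi, qj, a=10.0, c=1.0):
    """Cournot profit with linear inverse demand P = a - (qi + qj)."""
    return (a - c - qi - qj) * qi

def best_response(qj, a=10.0, c=1.0):
    """Profit-maximising quantity against an opponent producing qj."""
    return max(0.0, (a - c - qj) / 2.0)

def asymmetric_equilibrium(default_q, think_cost, a=10.0, c=1.0):
    """True if 'firm 1 plays the default quantity without thinking and
    firm 2 pays the thinking cost to best-respond' is an equilibrium."""
    br = best_response(default_q, a, c)
    # Firm 2 must prefer paying to think over playing the default as well.
    thinker_ok = profit(br, default_q, a, c) - think_cost >= profit(default_q, default_q, a, c)
    # Firm 1 must prefer the free default to paying for a best response.
    defaulter_ok = profit(default_q, br, a, c) >= profit(best_response(br, a, c), br, a, c) - think_cost
    return thinker_ok and defaulter_ok
```

With these illustrative numbers the asymmetric profile survives only for intermediate thinking costs, mirroring the result described above: too low a cost and the defaulter prefers to think, too high a cost and the thinker prefers the default.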
Abstract:
INTRODUCTION: Hepatic fibrosis progression in patients with chronic hepatitis C virus infections has been associated with viral and host factors, including genetic polymorphisms. Human platelet antigen polymorphisms are associated with the rapid development of fibrosis in HCV-monoinfected patients. This study aimed to determine whether such an association exists in human immunodeficiency virus-1/hepatitis C virus-coinfected patients. METHODS: Genomic deoxyribonucleic acid from 36 human immunodeficiency virus-1/hepatitis C virus-coinfected patients was genotyped to determine the presence of human platelet antigen-1, -3, or -5 polymorphisms. Fibrosis progression was evaluated using the Metavir scoring system, and the patients were assigned to two groups: G1, comprising patients with F1 (portal fibrosis without septa) or F2 (few septa) (n = 23), and G2, comprising patients with F3 (numerous septa) or F4 (cirrhosis) (n = 13). Fisher's exact test was utilized to determine possible associations between the human platelet antigen polymorphisms and fibrosis progression. RESULTS: There were no deviations from the Hardy-Weinberg equilibrium in the human platelet antigen systems evaluated. Statistically significant differences were not observed between G1 and G2 with respect to the distributions of the allelic and genotypic frequencies of the human platelet antigen systems. CONCLUSION: The greater stimulation of hepatic stellate cells by the human immunodeficiency virus and, consequently, the increased expression of transforming growth factor beta can offset the effect of human platelet antigen polymorphism on the progression of fibrosis in patients coinfected with the human immunodeficiency virus-1 and the hepatitis C virus.