41 results for Liquid–liquid equilibrium


Relevance: 10.00%

Abstract:

This work is divided into two distinct parts. The first part consists of the study of the metal-organic framework UiO-66Zr, where the aim was to determine the force field that best describes the adsorption equilibrium properties of two different gases, methane and carbon dioxide. The second part focuses on the topology of the single-wall carbon nanotube for ethane adsorption; here the aim was to simplify the solid-fluid force field model as much as possible in order to increase the computational efficiency of the Monte Carlo simulations. Both adsorbents were chosen for their potential use in adsorption processes, such as carbon dioxide capture and storage, natural gas storage, separation of biogas components, and olefin/paraffin separations. The adsorption studies on the two porous materials were performed by molecular simulation using the grand canonical Monte Carlo (μ,V,T) method, over the temperature range 298-343 K and the pressure range 0.06-70 bar. Calibration curves of pressure and density as a function of chemical potential and temperature for the three adsorbates under study were obtained by Monte Carlo simulation in the canonical ensemble (N,V,T); polynomial fitting and interpolation of the resulting data made it possible to determine the pressure and gas density at any chemical potential. The adsorption equilibria of methane and carbon dioxide in UiO-66Zr were simulated and compared with the experimental data obtained by Jasmina H. Cavka et al. The results show that the best force field for both gases is a chargeless united-atom force field based on the TraPPE model. Using this validated force field it was possible to estimate the isosteric heats of adsorption and the Henry constants.
In the Grand-Canonical Monte Carlo simulations of carbon nanotubes, we conclude that the fastest type of run is obtained with a force field that approximates the nanotube as a smooth cylinder; this approximation gives execution times that are 1.6 times faster than the typical atomistic runs.
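The calibration procedure described above (canonical-ensemble runs giving pressure as a function of chemical potential, then a polynomial fit for interpolation) can be sketched as follows; the data, units and polynomial degree below are hypothetical stand-ins, not values from the thesis:

```python
import numpy as np

def fit_calibration(mu, pressure, degree=3):
    """Fit pressure(mu) from canonical-ensemble (N,V,T) calibration points
    and return a callable polynomial for interpolation."""
    return np.poly1d(np.polyfit(mu, pressure, degree))

# Hypothetical calibration data: chemical potential (kJ/mol) vs pressure (bar).
mu = np.linspace(-40.0, -20.0, 11)
pressure = 0.05 * (mu + 40.0) ** 2 + 0.5 * (mu + 40.0)  # smooth synthetic trend

p_of_mu = fit_calibration(mu, pressure)
print(round(float(p_of_mu(-30.0)), 3))  # ≈ 10.0 for this synthetic curve
```

The same construction applies to the density calibration curve; one fitted polynomial per temperature is enough to map any chemical potential to a bulk pressure and density.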

Relevance: 10.00%

Abstract:

Spite is defined as an act that causes loss of payoff to an opponent at a cost to the actor. As one of the four fundamental behaviours in sociobiology, it has received far less attention than its counterparts selfishness and cooperation. It has, however, been established as a viable strategy in small populations when used against negatively related individuals. Because of this, spite can either i) disappear or ii) remain at equilibrium with cooperative strategies due to the willingness of spiteful individuals to pay a cost in order to punish. This thesis sets out to understand whether the propensity for spiteful behaviour is inherent or whether it develops with age.
For that effect, two game-theoretical experiments were performed with schoolboys and schoolgirls aged 6 to 22. The first, a 2x2 game, was tested in two variants: 1) a prize was awarded to both players, proportional to accumulated points; 2) a prize was given to the player with the most points. Each player faced the following dilemma: i) maximise pay-off, risking ending up with fewer points than the opponent; or ii) forgo maximising pay-off in order to keep the opponent's pay-off from exceeding their own. The second game was a dictator experiment with two choices: (A) a selfish/altruistic choice affording more payoff to the donor than B, but even more to the recipient than to the donor, and (B) a spiteful choice affording less payoff to the donor than A, but an even lower payoff to the recipient. The dilemma here was that if subjects behaved selfishly, they obtained more payoff for themselves while at the same time increasing their opponent's payoff; if they were spiteful, they would rather have more payoff than their colleague, at the cost of less for themselves. Experiments were run in schools in two different areas of Portugal (mainland and Azores) to understand whether spiteful preferences varied with age. Results of the first experiment suggested that (1) students understood the first variant as a coordination game and engaged in maximising behaviour by copying their opponents' plays; (2) repeating students engaged in spiteful behaviour more often than in maximising behaviour, with special emphasis on 14-year-olds; (3) most students engaged in reciprocal behaviour from ages 12 to 16, after which they began developing a higher tolerance for their opponents' choices. Results of the second experiment suggested that (1) selfish strategies were prevalent up to the age of 6, (2) altruistic tendencies emerged by the age of 8, and (3) spiteful strategies began to emerge from age 8 onward.
These results add to the relatively scarce body of literature on spite and suggest that this type of behaviour is closely tied with other-regarding preferences, parochialism and the children’s stages of development.
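The dictator-game dilemma can be made concrete with a small sketch; the payoff numbers below are hypothetical, chosen only to respect the design described above (A pays the dictator more than B but pays the recipient even more; B keeps the recipient below the dictator):

```python
# Hypothetical payoffs consistent with the design: (dictator, recipient).
# A pays the dictator more than B but pays the recipient even more;
# B pays the dictator less than A but keeps the recipient below them.
PAYOFFS = {"A": (5, 7), "B": (4, 2)}

def classify(choice):
    """Label a dictator's choice in the spirit of the dilemma above."""
    own, other = PAYOFFS[choice]
    alt = "B" if choice == "A" else "A"
    alt_own, alt_other = PAYOFFS[alt]
    if own > alt_own:                       # took the higher absolute payoff
        return "selfish/altruistic"
    if own - other > alt_own - alt_other:   # took the higher relative payoff
        return "spiteful"
    return "other"

print(classify("A"))  # → selfish/altruistic
print(classify("B"))  # → spiteful
```

The point of the design is visible in the code: no single choice maximises both absolute and relative payoff, so the observed choice reveals which motive dominates.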

Relevance: 10.00%

Abstract:

The main objective of this thesis was the development of a gold nanoparticle-based methodology for the detection of DNA adducts as biomarkers, in order to overcome drawbacks of currently employed techniques. To achieve this objective, the experimental work was divided into three components: sample preparation, method of detection, and development of a model of exposure to acrylamide. Different techniques were employed and combined for the de-complexation and purification of DNA samples (including ultrasonic energy, nuclease digestion and chromatography), resulting in a complete protocol for sample treatment prior to detection. The detection of alkylated nucleotides using gold nanoparticles was performed by two distinct methodologies: mass spectrometry and colorimetric detection. In mass spectrometry, gold nanoparticles were employed for laser desorption/ionisation instead of the organic matrix. Identification of nucleotides was possible by fingerprint; however, no specific mass signals were observed when using gold nanoparticles to analyse biological samples. An alternative method exploiting the colorimetric properties of gold nanoparticles was then employed for detection. This method, inspired by the non-cross-linking assay, allowed the identification of glycidamide-guanine adducts and DNA adducts generated in vitro. For the development of a model of exposure, two different aquatic organisms were studied: the goldfish and the mussel. Organisms were exposed to waterborne acrylamide, after which mortality was recorded and effect concentrations were estimated. In goldfish, both genotoxicity and metabolic alterations were assessed and revealed dose-effect relationships for acrylamide. Histopathological alterations were verified primarily in pancreatic cells, but also in hepatocytes. Mussels showed higher effect concentrations than goldfish.
Biomarkers of oxidative stress, biotransformation and neurotoxicity were analysed after prolonged exposure, showing mild oxidative stress in mussel cells, and induction of enzymes involved in detoxification of oxygen radicals. A qualitative histopathological screening revealed gonadotoxicity in female mussels, which may present some risk to population equilibrium.
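As an illustration of how an effect concentration can be estimated from recorded mortality, here is a minimal linear-interpolation sketch; the thesis does not state its estimation method, and the dose-response numbers below are invented:

```python
def lc50_interpolated(concs, mortality):
    """Concentration at 50% mortality, linearly interpolated between the
    two doses that bracket it. concs ascending; mortality fractions in [0, 1]."""
    for i in range(len(concs) - 1):
        m1, m2 = mortality[i], mortality[i + 1]
        if m1 <= 0.5 <= m2:
            c1, c2 = concs[i], concs[i + 1]
            return c1 + (0.5 - m1) * (c2 - c1) / (m2 - m1)
    raise ValueError("50% mortality is not bracketed by the data")

# Invented dose-response data: concentration (mg/L) vs mortality fraction.
print(lc50_interpolated([10.0, 20.0, 40.0, 80.0], [0.05, 0.30, 0.70, 1.0]))
```

In practice a probit or log-logistic fit would normally be preferred; interpolation is shown only because it makes the idea of an effect concentration transparent.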

Relevance: 10.00%

Abstract:

ABSTRACT - In a period of budgetary constraint, the hospitals of the SNS (the Portuguese National Health Service) are obliged to improve the efficiency with which they use the available resources, so as to contribute to their financial equilibrium. It falls to each provider to analyse its position, assess its opportunities and adopt strategies that, in the short, medium or long term, translate into an effective improvement in efficiency. The analysis and control of the waste associated with healthcare delivery is, broadly, one such opportunity. This work explores opportunities for reducing drug waste, from a purely operational perspective, within the functions performed by the Pharmaceutical Services (SF). In the hospital under study, the different production lines of the SF were followed, namely the tasks involved in the Individual Daily Unit-Dose Distribution process, in the distribution of drugs to the Emergency Department (SU), and in the preparation of cytotoxic and immunomodulatory drugs for the Oncology Day Hospital. During 2013, the SF returned to suppliers 0.07% and wrote off 0.05% of the drug expenditure. The analysis of the recorded medication errors reflects the type of distribution adopted for most of the hospital's inpatient services. Improvements at this level include reinforcing the human resources performing drug-dispensing tasks, but also implementing a culture of error and incident reporting, supported by the information system, so that the associated waste can be quantified and action taken to optimise the circuit. The relationship between the distribution method adopted for the SU and drug use in that department was investigated only for drugs whose administration is individually recorded. A utilisation efficiency index of 67.7% was determined between the amount dispensed and the amount administered.
The discrepancies found are associated with a cost of €32,229.6 for 2013. It was also found that, for the consumption of cytotoxic and immunomodulatory drugs, there was during April 2013 an average waste index of 14.7% between the amount prescribed and the amount consumed, which translated into a monthly waste cost of €13,070.9. Based on the monthly waste, it was estimated that the annual waste associated with the handling of cytotoxics and immunomodulators should correspond to 5.5% of the department's annual expenditure on these drugs. Notwithstanding the limitations encountered during the work, and although part of the waste found is unavoidable, it was shown that drug waste can represent a non-negligible but controllable share of the expenditure of the hospital under study. Once it is known, containing it can reduce expenditure in the short to medium term, without rationing drug use and without changing the standards of quality of care demanded by the regulator and by patients. Finally, recommendations are presented for reducing drug waste, tailored to each of the dimensions analysed.
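The waste index quoted above is the relative gap between the value of drugs prescribed and the value consumed. A minimal sketch, using the reported April 2013 monthly waste cost and a deliberately naive 12-month extrapolation (the study's own annual estimate, 5.5% of annual expenditure, is more refined than this):

```python
def waste_index(prescribed_value, consumed_value):
    """Fraction of the prescribed drug value that was wasted."""
    return (prescribed_value - consumed_value) / prescribed_value

# Monthly waste cost reported for April 2013 (cytotoxics/immunomodulators):
monthly_waste_eur = 13_070.9
# Naive 12-month extrapolation, for illustration only; the study itself
# refines this to 5.5% of the service's annual expenditure on these drugs.
annual_naive = 12 * monthly_waste_eur
print(round(annual_naive, 1))  # ≈ 156850.8
```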

Relevance: 10.00%

Abstract:

We use a novel pricing model to imply time series of diffusive volatility and jump intensity from S&P 500 index options. These two measures capture the ex ante risk assessed by investors. Using a simple general equilibrium model, we translate the implied measures of ex ante risk into an ex ante risk premium. The average premium that compensates the investor for the ex ante risks is 70% higher than the premium for realized volatility. The equity premium implied from option prices is shown to significantly predict subsequent stock market returns.
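The paper implies volatility from option prices with a jump-diffusion pricing model; as a purely illustrative stand-in, the sketch below inverts the plain Black-Scholes call formula for implied volatility by bisection, with hypothetical inputs:

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-10):
    """Bisection: the call price is strictly increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip with hypothetical inputs: recover the volatility we priced with.
p = bs_call(100.0, 100.0, 0.5, 0.01, 0.2)
print(round(implied_vol(p, 100.0, 100.0, 0.5, 0.01), 6))  # → 0.2
```

The paper's model adds a jump-intensity parameter, so it fits two implied quantities per date rather than one; this one-parameter inversion only shows the mechanism.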

Relevance: 10.00%

Abstract:

Simulated moving bed (SMB) chromatography is attracting more and more attention, since it is a powerful technique for complex separation tasks. Nowadays, more than 60% of preparative SMB units are installed in the pharmaceutical and food industries [SDI, Preparative and Process Liquid Chromatography: The Future of Process Separations, International Strategic Directions, Los Angeles, USA, 2002. http://www.strategicdirections.com]. Chromatography is the method of choice in these fields, because pharmaceuticals and fine chemicals often have physico-chemical properties that differ little from those of the by-products, and they may be thermally unstable. In these cases, standard separation techniques such as distillation and extraction are not applicable. The importance of preparative chromatography, particularly the SMB process, as a separation and purification process in the above-mentioned industries has been increasing, due to its flexibility, energy efficiency and higher product purity. Consequently, the large number of potential small-scale applications of SMB technology calls for a new SMB paradigm, one that exploits the flexibility and versatility of the technology. In this new paradigm, a number of possibilities for improving SMB performance through variation of parameters during a switching interval are pushing the trend toward units with a smaller number of columns, because less stationary phase is used and the setup is more economical. This is especially important for the pharmaceutical industry, where SMBs are seen as multipurpose units that can be applied to different separations in all stages of the drug-development cycle. In order to reduce the experimental effort, and accordingly the cost associated with the development of separation processes, simulation models are used intensively.
One important aspect in this context is the determination of the adsorption isotherms in SMB chromatography, where separations are usually carried out under strongly nonlinear conditions in order to achieve higher productivities. The accurate determination of the competitive adsorption equilibrium of the enantiomeric species is thus of fundamental importance to allow computer-assisted optimization or process scale-up. Two major SMB operating problems are apparent at production scale: the assessment of product quality and the maintenance of long-term stable and controlled operation. Constraints regarding product purity, dictated by pharmaceutical and food regulatory organizations, have drastically increased the demand for product quality control. The strict regulations imposed are increasing the need to develop optically pure drugs. (...)
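Competitive adsorption equilibria of the kind discussed above are often described by a competitive Langmuir isotherm; the sketch below assumes that form and invented parameters, since the abstract does not specify the isotherm model:

```python
def competitive_langmuir(c, qmax, K):
    """Loadings q_i = qmax_i*K_i*c_i / (1 + sum_j K_j*c_j) for each species.
    The shared denominator is what makes the species compete for sites."""
    denom = 1.0 + sum(Kj * cj for Kj, cj in zip(K, c))
    return [qm * Kj * cj / denom for qm, Kj, cj in zip(qmax, K, c)]

# Invented parameters for two enantiomers at equal feed concentration:
q = competitive_langmuir(c=[1.0, 1.0], qmax=[10.0, 10.0], K=[0.3, 0.1])
print([round(x, 4) for x in q])  # → [2.1429, 0.7143]
```

The nonlinearity mentioned in the abstract is visible here: each loading depends on both concentrations, so single-component isotherms measured in isolation are not enough for SMB design.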

Relevance: 10.00%

Abstract:

This paper studies the existing price linkage between generic and branded pharmaceuticals, under which the generic price must be a fraction of the branded price. Using a vertical differentiation model, we look at the market equilibrium, the effects on the brand producer's incentives to develop new products, and the possibility of predation by the brand producer over the generic firm. We find that the price linkage increases prices compared to no indexation, and that it may increase the brand producer's incentives to expand its set of products. When prices are freely set, the branded firm may also want to introduce a new product of higher quality, but will prefer to remove the original one from the market. Predation may occur in both schemes, but the price linkage may give the branded firm fewer incentives to predate while compensating losses with a new drug.

Relevance: 10.00%

Abstract:

Economics is a social science which, therefore, focuses on people and on the decisions they make, be it in an individual context or in group situations. It studies human choices in the face of needs to be fulfilled and a limited amount of resources with which to fulfill them. For a long time, there was a convergence between the normative and positive views of human behavior, in that the ideal and predicted decisions of agents in economic models were entangled in one single concept. That is, it was assumed that the best that could be done in each situation was exactly the choice that would prevail. Or, at least, that the facts that economics needed to explain could be understood in the light of models in which individual agents act as if they were able to make ideal decisions. In the last decades, however, the complexity of the environment in which economic decisions are made and the limits on the ability of agents to deal with it have been recognized and incorporated into models of decision making, in what came to be known as the bounded rationality paradigm. This was triggered by the incapacity of the unbounded rationality paradigm to explain observed phenomena and behavior. This thesis contributes to the literature in three different ways. Chapter 1 is a survey on bounded rationality, which gathers and organizes the contributions to the field since Simon (1955) first recognized the necessity of accounting for the limits on human rationality. The focus of the survey is on theoretical work rather than on the experimental literature, which presents evidence of actual behavior that differs from what classic rationality predicts. The general framework is as follows. Given a set of exogenous variables, the economic agent needs to choose an element from the choice set that is available to him, in order to optimize the expected value of an objective function (assuming his preferences are representable by such a function).
If this problem is too complex for the agent to deal with, one or more of its elements is simplified. Each bounded rationality theory is categorized according to the most relevant element it simplifies. Chapter 2 proposes a novel theory of bounded rationality. Much in the same fashion as Conlisk (1980) and Gabaix (2014), we assume that thinking is costly, in the sense that agents have to pay a cost for performing mental operations. In our model, if they choose not to think, that cost is avoided, but they are left with a single alternative, labeled the default choice. We exemplify the idea with a very simple model of consumer choice and identify the concept of isofin curves, i.e., sets of default choices which generate the same utility net of thinking cost. Then we apply the idea to a linear symmetric Cournot duopoly, in which the default choice can be interpreted as the most natural quantity to be produced in the market. We find that, as the thinking cost increases, the number of firms thinking in equilibrium decreases. More interestingly, for intermediate levels of thinking cost, there exists an equilibrium in which one of the firms chooses the default quantity and the other best responds to it, generating asymmetric choices in a symmetric model. Our model is able to explain well-known regularities identified in the Cournot experimental literature, such as the adoption of different strategies by players (Huck et al., 1999), the intertemporal rigidity of choices (Bosch-Domènech & Vriend, 2003) and the dispersion of quantities in the context of difficult decision making (Bosch-Domènech & Vriend, 2003). Chapter 3 applies a model of bounded rationality in a game-theoretic setting to the well-known turnout paradox: in large elections, pivotal probabilities vanish very quickly and no one should vote, in sharp contrast with the observed high levels of turnout.
Inspired by the concept of rhizomatic thinking, introduced by Bravo-Furtado & Côrte-Real (2009a), we assume that each person is self-delusional in the sense that, when making a decision, she believes that a fraction of the people who support the same party decide alike, even if no communication is established between them. This kind of belief simplifies the agent's decision, as it reduces the number of players he believes to be playing against; it is thus a bounded rationality approach. Studying a two-party first-past-the-post election with a continuum of self-delusional agents, we show that the turnout rate is positive in all possible equilibria, and that it can be as high as 100%. The game displays multiple equilibria, at least one of which entails a victory of the bigger party. The smaller party may also win, provided its relative size is not too small; more self-delusional voters in the minority party decrease this threshold size. Our model is able to explain some empirical facts, such as the possibility that a close election leads to low turnout (Geys, 2006), a lower margin of victory when turnout is higher (Geys, 2006) and high turnout rates favoring the minority (Bernhagen & Marsh, 1997).
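The costly-thinking Cournot idea of Chapter 2 can be sketched under assumed linear demand P = a - b(q1 + q2) and constant marginal cost c; the parameter values and the default quantity below are illustrative, not taken from the thesis:

```python
# Sketch of the costly-thinking Cournot idea, under assumed linear demand
# P = a - b*(q1 + q2) and constant marginal cost c. All numbers illustrative.

def best_response(qj, a=10.0, b=1.0, c=1.0):
    """Profit-maximising quantity against a rival producing qj."""
    return max(0.0, (a - c - b * qj) / (2 * b))

def profit(qi, qj, a=10.0, b=1.0, c=1.0):
    return (a - b * (qi + qj) - c) * qi

def thinks(q_default, thinking_cost):
    """Is best-responding (net of the thinking cost) better than simply
    producing the default quantity, given the rival plays the default?"""
    br = best_response(q_default)
    return profit(br, q_default) - thinking_cost > profit(q_default, q_default)

# With a=10, b=1, c=1 the symmetric Nash quantity is (a-c)/(3b) = 3.
print(best_response(3.0))   # → 3.0
print(thinks(2.0, 0.1))     # cheap thinking: the firm deviates (True)
print(thinks(2.0, 10.0))    # expensive thinking: sticks to default (False)
```

For intermediate thinking costs, one firm can find the default optimal while the other finds deviating worthwhile, which is exactly the asymmetric equilibrium in a symmetric model that the chapter highlights.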

Relevance: 10.00%

Abstract:

This work studies fuel retail firms' strategic behavior in a two-dimensional product differentiation framework. Following the mandatory provision of "low-cost" fuel, we consider that capacity constraints force firms to eliminate one of the previously offered qualities. Firms play a two-stage game, first choosing fuel qualities from three possibilities (low-cost, medium-quality and high-quality fuel) and then prices, with exogenous opposite locations. At the highest level of consumer heterogeneity, a subgame-perfect Nash equilibrium exists in which both firms choose minimum quality differentiation. Consumers are worse off if no differentiation occurs in the medium and high qualities. The effect of the mandatory "low-cost" fuel law on prices is ambiguous.
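The pricing side of such vertical differentiation models turns on the taste threshold at which a consumer switches to the higher quality; the sketch below shows this standard one-dimensional machinery with uniform tastes, a simplification of the paper's two-dimensional setting, with all numbers hypothetical:

```python
def share_high(p_low, p_high, s_low, s_high, theta_max):
    """Share of consumers (tastes uniform on [0, theta_max]) who buy the
    high quality: those with theta*s_high - p_high >= theta*s_low - p_low."""
    theta_hat = (p_high - p_low) / (s_high - s_low)   # indifferent consumer
    return min(1.0, max(0.0, (theta_max - theta_hat) / theta_max))

# Hypothetical prices/qualities: half the market trades up to high quality.
print(share_high(1.0, 2.0, s_low=1.0, s_high=2.0, theta_max=2.0))  # → 0.5
```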

Relevance: 10.00%

Abstract:

Ionic liquids (ILs) are organic salts that are liquid at or near room temperature. Since ILs are entirely composed of ions, the formation of ion pairs is expected to be an essential feature of solvation in ILs. In recent years, protein-ionic liquid (P-IL) interactions have been the subject of intensive study, mainly because of their capability to promote folding/unfolding of proteins. However, ion pairs and their lifetimes are usually dismissed in the P-IL literature, even though the action of ILs results from a subtle equilibrium among anion-cation, ion-solvent and ion-protein interactions. The work developed in this thesis innovates in this field, in that the design of ILs for protein stabilisation was bio-inspired by the high concentration of organic charged metabolites found in the cell milieu. Although this is often overlooked, their combined concentrations have been estimated at ~300 mM among macromolecules at concentrations exceeding 300 g/L (macromolecular crowding), and transient ion pairs can naturally occur with a potentially specific biological role. Hence the main objective of this work is to develop new bio-ILs with a detectable ion pair and to understand its effects on protein structure and stability under crowding conditions, using advanced NMR and calorimetric techniques. The choline-glutamate ([Ch][Glu]) IL was synthesized and characterized. The ion pair was detected in aqueous solution, mainly by the selective NOE NMR technique. The same technique made it possible to detect a similar promotion of the ion pair under synthetic and natural crowding environments. Using NMR spectroscopy (protein diffusion, HSQC experiments, and hydrogen-deuterium exchange) and differential scanning calorimetry (DSC), the model protein GB1 (produced and purified in isotopically enriched media) was studied in the presence of [Ch][Glu] under macromolecular crowding conditions (PEG, BSA, lysozyme).
Under dilute conditions, [Ch][Glu] induces preferential hydration through weak, non-specific interactions, which leads to significant stabilisation. Under a crowding environment, on the other hand, the [Ch][Glu] ion pair is promoted, destabilising the protein through favourable weak hydrophobic interactions that disrupt the protein's hydration layer. However, this capability can mitigate the effect of protein crowders. Overall, this work explored the existence of the ion pair and its consequences for proteins under conditions similar to the cell milieu. In this light, the charged metabolites found in cells can be understood as key to protein stabilisation.

Relevance: 10.00%

Abstract:

The objective of the work presented in this thesis was the development of an innovative approach to the separation of enantiomers of secondary alcohols, combining the use of an ionic liquid (IL), both as solvent for the enzymatic kinetic resolution and as acylating agent, with the use of carbon dioxide (CO2) as extraction solvent. Menthol was selected for testing this reaction/separation approach due to the increasing demand for this substance, which is widely used in the pharmaceutical, cosmetics and food industries. With a view to using an ionic ester as acylating agent, whose conversion leads to the release of ethanol, and given the need to remove this alcohol so as to drive the reaction equilibrium forward, a phase equilibrium study was conducted for the ethanol/(±)-menthol/CO2 system, at pressures between 8 and 10 MPa and temperatures between 40 and 50 °C. It was found that CO2 is more selective towards ethanol, especially at the lowest pressure and highest temperature tested, leading to separation factors in the range 1.6-7.6. The pressure-temperature-composition data obtained were correlated with the Peng-Robinson equation of state and the Mathias-Klotz-Prausnitz mixing rule. The model fit the experimental results well, with an average absolute deviation (AAD) of 3.7%. The resolution of racemic menthol was studied using two lipases, namely lipase from Candida rugosa (CRL) and immobilized lipase B from Candida antarctica (CALB), and two ionic acylating esters. No reaction was detected in either case. (R,S)-1-phenylethanol was used next, and it was found that with CRL a low, non-selective conversion of the alcohol took place, whereas CALB led to an enantiomeric excess (ee) of the substrate of 95% at 30% conversion.
Other acylating agents were then tested for the resolution of (±)-menthol, namely vinyl esters and acid anhydrides, using several lipases and varying other parameters that affect conversion and enantioselectivity, such as substrate concentration, solvent and temperature. One such acylating agent was propionic anhydride. A phase equilibrium study of the propionic anhydride/CO2 system was therefore performed, at temperatures between 35 and 50 °C. This study revealed that, at 35 °C and pressures from 7 MPa, the system is monophasic at all compositions. The enzymatic catalysis studies carried out with propionic anhydride revealed that the extent of non-catalyzed reaction was high, with a negative effect on enantioselectivity. These studies also showed that the impact of the non-catalyzed reaction relative to the reaction catalyzed by CRL could be reduced considerably by lowering the temperature to 4 °C. Vinyl decanoate was shown to give the best results under conditions amenable to a process using supercritical CO2 as the post-reaction separation agent. The use of vinyl decanoate in a number of IL solvents, namely [bmim][PF6], [bmim][BF4], [hmim][PF6], [omim][PF6] and [bmim][Tf2N], led to enantiomeric excess of product (eep) values of over 96% at about 50% conversion, using CRL. In n-hexane and supercritical CO2, the reaction progressed more slowly. (...)
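The enantiomeric excess and conversion figures quoted above are linked by standard kinetic-resolution relations (the conversion formula is the classic Chen et al. relation); a minimal sketch:

```python
from math import log

def ee(major, minor):
    """Enantiomeric excess from the amounts of the two enantiomers."""
    return (major - minor) / (major + minor)

def conversion(ee_substrate, ee_product):
    """Conversion c from the two excesses: c = ee_s / (ee_s + ee_p)."""
    return ee_substrate / (ee_substrate + ee_product)

def selectivity_E(c, ee_s):
    """Enantiomeric ratio E from conversion and substrate ee."""
    return log((1 - c) * (1 - ee_s)) / log((1 - c) * (1 + ee_s))

print(round(ee(97.5, 2.5), 2))           # → 0.95, as in the CALB run above
print(round(conversion(0.95, 0.96), 3))  # ≈ 0.497, i.e. close to 50%
```

Note how an ee of substrate near 95% together with an eep above 96% pins conversion near 50%, consistent with the vinyl decanoate results reported above.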