Abstract:
Sex-dependent selection can help maintain sexual dimorphism. When the magnitude of selection exerted on a heritable sex trait differs between the sexes, it may prevent each sex from reaching its phenotypic optimum. As a consequence, the benefit of expressing a given value of a sex trait may differ between males and females, favouring sex-specific adaptations associated with different values of the trait. The level of metabolites regulated by genes that are under sex-dependent selection may therefore covary with the degree of ornamentation differently in the two sexes. We investigated this prediction in the barn owl, a species in which females display on average larger black plumage spots than males, a heritable ornament. This melanin-based colour trait is strongly selected in females and weakly counter-selected in males, indicating sex-dependent selection. In nestling barn owls, we found that daily variation in baseline corticosterone level, a key hormone that mediates life-history trade-offs, covaries with the spot diameter displayed by their biological parents. When their mother displayed larger spots, nestlings had lower corticosterone levels in the morning and higher levels in the evening, whereas the opposite pattern was found with the size of paternal spots. Our study suggests a link between the daily regulation of glucocorticoids and sex-dependent selection exerted on sexually dimorphic melanin-based ornaments.
Abstract:
In previous work we showed that sinusoidal whole-body rotations producing continuous vestibular stimulation affected the timing of motor responses as assessed with a paced finger tapping (PFT) task (Binetti et al., 2010, Neuropsychologia, 48(6), 1842-1852). Here, in two new psychophysical experiments, one purely perceptual and one with both sensory and motor components, we explored the relationship between body motion/vestibular stimulation and the perceived timing of acoustic events. In experiment 1, participants were required to discriminate sequences of acoustic tones endowed with different degrees of acceleration or deceleration. We found that a tone sequence presented during acceleratory whole-body rotations required a progressive increase in rate in order to be considered temporally regular, consistent with the idea of an increase in "clock" frequency and an overestimation of time. In experiment 2, participants produced self-paced taps, each of which generated acoustic feedback. Tapping frequency in this task was affected by periodic motion through anticipatory and congruent (in-phase) fluctuations, irrespective of the self-generated sensory feedback. Synchronizing taps to an external rhythm, on the other hand, determined a completely opposite (delayed/counter-phase) modulation. Overall, this study shows that body displacements "remap" our metric of time, affecting not only motor output but also sensory input.
Abstract:
Back-focal-plane interferometry is used to measure displacements of optically trapped samples with very high spatial and temporal resolution. However, the technique is closely related to a method that measures the rate of change in light momentum. It has long been known that displacements of the interference pattern at the back focal plane may be used to track the optical force directly, provided that a considerable fraction of the light is effectively monitored. Nonetheless, the practical application of this idea has been limited to counter-propagating, low-aperture beams where the accurate momentum measurements are possible. Here, we experimentally show that the connection can be extended to single-beam optical traps. In particular, we show that, in a gradient trap, the calibration product κ·β (where κ is the trap stiffness and 1/β is the position sensitivity) corresponds to the factor that converts detector signals into momentum changes; this factor is uniquely determined by three construction features of the detection instrument and does not depend, therefore, on the specific conditions of the experiment. Then, we find that force measurements obtained from back-focal-plane displacements are in practice not restricted to a linear relationship with position and hence they can be extended outside that regime. Finally, and more importantly, we show that these properties are still recognizable even when the system is not fully optimized for light collection. These results should enable a more general use of back-focal-plane interferometry whenever the ultimate goal is the measurement of the forces exerted by an optical trap.
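The calibration identity at the heart of this abstract can be sketched numerically. In the sketch below (Python, with invented values for the trap stiffness κ and for β; none of the numbers come from the paper), converting the detector signal to a displacement via the position sensitivity and then to a force via the trap stiffness is equivalent to applying the single signal-to-force factor κ·β:

```python
# Illustrative values only (not from the paper):
kappa = 0.05    # trap stiffness kappa, pN/nm
beta = 500.0    # nm/V; 1/beta is the position sensitivity in V/nm
signal = 0.02   # back-focal-plane detector signal, V

x = beta * signal      # inferred displacement, nm
force = kappa * x      # linear-regime trap force, pN

# The product kappa*beta is the single factor converting detector
# signals into momentum changes (i.e. forces):
assert abs(force - (kappa * beta) * signal) < 1e-12
```

The point made in the abstract is that this product is fixed by the construction of the detection instrument, so once it is known the detector reads force directly, even outside the linear force-versus-position regime.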
Abstract:
Elevated circulating concentrations of modified LDL-cholesterol particles (e.g. oxidised LDL) and low levels of HDL not only increase the risk for diabetic patients of developing cardiovascular disease but may also contribute to the development and progression of diabetes through direct adverse effects on β-cells. Chronic exposure of β-cells to 2 mM human oxidised LDL-cholesterol (oxLDL) increases the rate of apoptosis and reduces insulin biosynthesis and the secretory capacity of the cells in response to nutrients. In line with its protective role, HDL efficiently antagonised the harmful effects of oxLDL, suggesting that low levels of HDL would be insufficient to protect β-cells against oxLDL attack in patients. Activation of endoplasmic reticulum (ER) stress has been proposed to contribute to β-cell dysfunction elicited by environmental stressors. In this study we investigated whether activation of ER stress is required for oxLDL to mediate its detrimental effects on β-cells, and we tested the potential antagonist properties of HDL. Mouse MIN6 insulin-secreting cells were cultured with 2 mM of an LDL-cholesterol preparation (native or in vitro oxidised) in the presence or absence of 1 mM HDL-cholesterol or the ER stress inhibitor 4-phenylbutyrate (4-PBA). Prolonged exposure of MIN6 cells to 2 mM oxLDL-cholesterol for 48 hours led to an increase in the expression of ER stress markers such as ATF4, CHOP and p58 and stimulated the splicing of XBP-1, whereas induction of these markers was not observed in cells cultured with native LDL. Treatment of the cells with the chemical chaperone 4-PBA efficiently blocked the activation of the ER stress markers induced by oxLDL. The latter mediates β-cell dysfunction and apoptosis by diminishing the expression of islet brain 1 (IB1) and Bcl2; the levels of these two proteins were preserved in cells co-treated with oxLDL and 4-PBA.
Consistent with this result, we found that blockade of ER stress activation alleviated the loss of insulin synthesis and abolished the apoptosis evoked by oxLDL. However, incubation of the cells with 4-PBA did not prevent the impairment of insulin secretion elicited by oxLDL, indicating that ER stress is not responsible for the oxLDL-mediated defect in insulin secretion. Co-incubation of the cells with HDL mimicked the effects of 4-PBA on the expression of IB1 and Bcl2 and thereby counteracted the effects of oxLDL on insulin synthesis and cell survival. We found that HDL efficiently inhibited the activation of ER stress mediated by oxLDL. These data highlight the contribution of ER stress to the defects in insulin synthesis and cell survival induced by oxLDL and emphasise the potent role of HDL in countering oxLDL-mediated ER stress activation.
Abstract:
The Iowa Department of Public Health recommends a 14-day treatment process. You may use over-the-counter products. They are safe and not costly. Mark your calendar to help you keep track of treatment.
Abstract:
The determination of gross alpha, gross beta and 226Ra activity in natural waters is useful in a wide range of environmental studies. Furthermore, the gross alpha and gross beta parameters are included in international legislation on the quality of drinking water (Council Directive 98/83/EC). In this work, a low-background liquid scintillation counter (Wallac Quantulus 1220) was used to determine gross alpha, gross beta and 226Ra activity simultaneously in natural water samples. Sample preparation involved evaporation to remove 222Rn and its short-lived decay daughters; the evaporation process concentrated the sample ten-fold. Afterwards, an 8 mL sample aliquot was mixed with 12 mL of Ultima Gold AB scintillation cocktail in low-diffusion vials. A theoretical mathematical model based on secular equilibrium between 226Ra and its short-lived decay daughters is presented. The proposed model makes it possible to determine 226Ra activity from two measurements, which also allow the simultaneous determination of gross alpha and gross beta activity. To validate the proposed model, spiked samples with different activity levels for each parameter were analysed. Additionally, to evaluate the model's applicability to natural water, eight natural water samples from different parts of Spain were analysed. These samples were also characterised by alpha spectrometry for the naturally occurring isotopes of uranium (234U, 235U and 238U), radium (224Ra and 226Ra), 210Po and 232Th. The results for gross alpha and 226Ra activity were compared with the alpha spectrometry characterisation, and acceptable concordance was obtained.
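The two-measurement idea lends itself to a compact numerical sketch. Assuming, as a simplification of the model described in the abstract, that after evaporation removes the 222Rn the net alpha count rate grows back as the short-lived daughters return to secular equilibrium with 226Ra, two counts at different ingrowth times suffice to separate the residual gross alpha activity from the 226Ra contribution. The function name, the ingrowth form and all numbers below are illustrative, not the authors' actual equations:

```python
import math

LAMBDA_RN222 = math.log(2) / 3.8235  # 222Rn decay constant, 1/day

def ra226_from_two_counts(r1, t1, r2, t2):
    """Invert the simple ingrowth model
        r(t) = a_gross + a_ra * (1 - exp(-lambda * t))
    for the residual gross alpha rate a_gross and the 226Ra-supported
    rate a_ra, given two net alpha count rates r1, r2 measured at
    times t1, t2 (days after sample preparation)."""
    g1 = 1.0 - math.exp(-LAMBDA_RN222 * t1)
    g2 = 1.0 - math.exp(-LAMBDA_RN222 * t2)
    a_ra = (r2 - r1) / (g2 - g1)
    a_gross = r1 - a_ra * g1
    return a_gross, a_ra
```

Counting once shortly after preparation and once after several days of ingrowth makes g2 − g1 large, which keeps the uncertainty on the inferred 226Ra activity small.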
Abstract:
The estimation of muscle forces in musculoskeletal shoulder models is still controversial. Two different methods are widely used to resolve the indeterminacy of the system: electromyography (EMG)-based methods and stress-based methods. The goal of this work was to evaluate the influence of these two methods on the prediction of muscle forces, glenohumeral load and joint stability after total shoulder arthroplasty. An EMG-based and a stress-based method were implemented in the same musculoskeletal shoulder model. The model replicated the glenohumeral joint after total shoulder arthroplasty. It contained the scapula, the humerus, the joint prosthesis, the rotator cuff muscles (supraspinatus, subscapularis and infraspinatus) and the middle, anterior and posterior parts of the deltoid muscle. A movement of abduction was simulated in the plane of the scapula. The EMG-based method replicated the muscular activity of experimentally measured EMG. The stress-based method minimised a cost function based on muscle stresses. We compared muscle forces, joint reaction force, articular contact pressure and translation of the humeral head. The stress-based method predicted a lower force in the rotator cuff muscles, partly counter-balanced by a higher force in the middle part of the deltoid. As a consequence, the stress-based method predicted a lower joint load (reduced by 16%) and a larger superior-inferior translation of the humeral head (increased by 1.2 mm). The EMG-based method has the advantage of replicating the observed co-contraction of the stabilising rotator cuff muscles, but it is limited to available EMG measurements. The stress-based method thus has the advantage of flexibility, but may overestimate glenohumeral subluxation.
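The stress-based approach can be made concrete with a toy static optimisation. For a quadratic cost on muscle stresses (force divided by physiological cross-sectional area, PCSA) and a single joint-moment equality constraint, the minimiser has a closed form via Lagrange multipliers. The muscles, moment arms and PCSA values below are invented for illustration and are not the parameters of the model in the abstract:

```python
def stress_cost_forces(moment_arms, pcsa, target_moment):
    """Minimise sum_i (F_i / A_i)**2 subject to sum_i r_i * F_i = M.
    Setting the gradient of the Lagrangian to zero gives
        F_i = M * r_i * A_i**2 / sum_j (r_j**2 * A_j**2),
    which is non-negative whenever all moment arms r_i are positive,
    so such a cost never predicts antagonist co-contraction."""
    denom = sum(r * r * a * a for r, a in zip(moment_arms, pcsa))
    return [target_moment * r * a * a / denom
            for r, a in zip(moment_arms, pcsa)]

# Toy abduction example: one deltoid-like muscle and two cuff-like
# muscles (all numbers invented).
forces = stress_cost_forces(moment_arms=[0.03, 0.01, 0.01],  # m
                            pcsa=[20.0, 10.0, 8.0],          # cm^2
                            target_moment=15.0)              # N*m
```

Because the quadratic cost rewards loading muscles with large PCSA and long moment arms, the deltoid-like muscle takes most of the force in this toy case, which mirrors the abstract's observation that the stress-based method shifts load from the rotator cuff to the deltoid and predicts none of the co-contraction that EMG-driven methods reproduce.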
Abstract:
Internet governance is a recent issue in global politics. However, it has gradually become a major political and economic issue; recently it became even more important and now appears regularly in the news. Against this background, this research outlines the history of Internet governance from its emergence as a political issue in the 1980s to the end of the World Summit on the Information Society (WSIS) in 2005.
Rather than focusing on one or the other institution involved in Internet governance, this research analyses the emergence and historical evolution of a space of struggle affecting a growing number of different actors. This evolution is described through the analysis of the dialectical relation between elites and non-elites and through the struggle around the definition of Internet governance. The thesis explores the question of how the relations among the elites of Internet governance and between these elites and non-elites explain the emergence, the evolution, and the structuration of a relatively autonomous field of world politics centred around Internet governance. Against dominant realist and liberal perspectives, this research draws upon a cross-fertilisation of heterodox international political economy and international political sociology. This approach focuses on concepts such as field, elites and hegemony. The concept of field, as developed by Bourdieu, is increasingly used in International Relations to build a differentiated analysis of globalisation and to describe the emergence of transnational spaces of struggle and domination. Elite sociology allows for a pragmatic actor-centred analysis of the issue of power in the globalisation process. This research particularly draws on Wright Mills's concept of the power elite in order to explore the unification of different elites around shared projects. Finally, this thesis uses the Neo-Gramscian concept of hegemony in order to study both the consensual dimension of domination and the prospect of change contained in any international order. Through the analysis of the documents produced within the analysed period, and through the creation of databases of networks of actors, this research focuses on the debates that followed the commercialisation of the Internet throughout the 1990s and during the WSIS.
The first time period led to the creation of the Internet Corporation for Assigned Names and Numbers (ICANN) in 1998. This creation resulted from the consensus-building between the dominant discourses of the time. It also resulted from the coalition of interests among an emerging power elite. However, this institutionalisation of Internet governance around the ICANN excluded a number of actors and discourses that resisted this mode of governance. The WSIS became the institutional framework within which the governance system was questioned by some excluded states, scholars, NGOs and intergovernmental organisations. The confrontation between the power elite and counter-elites during the WSIS triggered a reconfiguration of the power elite as well as a re-definition of the boundaries of the field. A new hegemonic project emerged around discursive elements such as the idea of multistakeholderism and institutional elements such as the Internet Governance Forum. The relative success of the hegemonic project allowed for a certain stability within the field and an acceptance by most non-elites of the new order. It is only recently that this order began to be questioned by the emerging powers of Internet governance. This research provides three main contributions to the scientific debate. On the theoretical level, it contributes to the emergence of a dialogue between International Political Economy and International Political Sociology perspectives in order to analyse both the structural trends of the globalisation process and the located practices of actors in a given issue-area. It notably stresses the contribution of concepts such as field and power elite and their compatibility with a Neo-Gramscian framework to analyse hegemony. On the methodological level, this perspective relies on the use of mixed methods, combining qualitative content analysis with social network analysis of actors and statements. 
Finally, on the empirical level, this research provides an original perspective on Internet governance. It stresses the historical dimension of current Internet governance arrangements. It also criticises the notion of multistakeholderism and focuses instead on the power dynamics and the relation between Internet governance and globalisation.
Abstract:
OBJECTIVES: Our analysis assessed the impact of information on patients' preferences in prescription versus over-the-counter (OTC) delivery systems. METHODS: A contingent valuation (CV) study was implemented, randomly assigning 534 lay people into the receipt of limited or extended information concerning new influenza drugs. In each information arm, people answered two questions: the first asked about willingness to pay (WTP) for the new prescription drug; the second asked about WTP for the same drug sold OTC. RESULTS: We show that WTP is higher for the OTC scenario and that the level of information plays a significant role in the evaluation of the OTC scenario, with more information being associated with an increase in the WTP. In contrast, the level of information provided has no impact on WTP for prescription medicine. Thus, for the kind of drug considered here (i.e. safe, not requiring medical supervision), a switch to OTC status can be expected to be all the more beneficial, as the patient is provided with more information concerning the capability of the drug. CONCLUSIONS: Our results shed light on one of the most challenging issues that health policy makers are currently faced with, namely the threat of a bird flu pandemic. Drug delivery is a critical component of pandemic influenza preparedness. Furthermore, the congruence of our results with the agency and demand theories provides an important test of the validity of using WTP based on CV methods.
Abstract:
The most effective way to bleach mechanically defibrated wood pulp is to do so oxidatively with a peroxide chemical under strongly alkaline conditions. Traditionally, the alkalinity has been provided with sodium hydroxide and sodium silicate. However, this dissolves a significant amount of lignin from the pulp, which reduces the yield and increases the organic carbon content and chemical oxygen demand of the bleaching effluents. Ever-higher brightness targets and tightened water use have created a need to search for better bleaching alkalis. Based on the literature, magnesium hydroxide, magnesium oxide, calcium hydroxide and calcium oxide were selected for experimental study. Their performance as alkalis in oxidative hydrogen peroxide bleaching was investigated in bleaching trials with and without sodium silicate addition. Of these, Mg(OH)2 proved to work best, and it was subsequently used in laboratory bleaching trials at high consistency. According to the results of the medium- and high-consistency bleaching trials, when Mg(OH)2 is used as the alkali instead of sodium hydroxide and silicate, the final brightness of the pulp remains about one ISO percentage point lower. In that case, the bleaching filtrate contained only a rather small amount of organic carbon dissolved from the pulp, about 45% of that produced by the use of sodium compounds in the reference trial. The result was verified by carrying out high-consistency bleaching trials under groundwood mill conditions, with the mill's pulps and circulation waters. The mill trials likewise showed that the final brightness of the bleached pulp remains about one ISO percentage point lower, while the amount of organic carbon (TOC) in the bleaching filtrate remains less than half of that in the reference trial carried out with sodium chemicals.
Abstract:
The aim of this thesis was to study the expansion of the distribution channel network for over-the-counter (OTC) medicines in Finland. The main objective was to identify different future operating models and to examine whether deregulation of distribution is a threat or an opportunity for the pharmaceutical industry. In addition, the thesis aimed to determine what special requirements are associated with the distribution of OTC medicines and which factors support, and which hinder, the deregulation of OTC medicine competition in Finland. The thesis is divided into a theoretical and an empirical part. The theoretical part focuses on the expansion of the distribution channel network in the pharmaceutical industry. The empirical study was carried out as a qualitative case study, with semi-structured interviews as the research method. The results show that deregulating the distribution of OTC medicines is a complex question, and opinions on its necessity and usefulness vary greatly among the different members of the pharmaceutical distribution channel. In the future, retail distribution of OTC medicines could be arranged either through pharmacies only, or through pharmacies and grocery stores. OTC medicines could also be made available in hotels and restaurants, at service stations and in kiosks, or bought from vending machines and over the internet. These new forms of trade would become possible if the distribution of OTC medicines is deregulated in the future.
Abstract:
In this study, a model of the unsteady dynamic behaviour of a once-through counter-flow boiler that uses an organic working fluid is presented. The boiler is a compact waste-heat boiler without a furnace, comprising a preheater, a vaporiser and a superheater. The relative lengths of the boiler parts vary with the operating conditions, since they are all parts of a single tube. The present research is part of a study on the unsteady dynamics of an organic Rankine cycle power plant and will become part of a dynamic process model. The boiler model is presented using a selected example case that uses toluene as the process fluid and flue gas from natural gas combustion as the heat source. The dynamic behaviour of the boiler means a transition from the steady initial state towards another steady state corresponding to the changed process conditions. The solution method chosen was to find, using the finite difference method, the process-fluid pressure at which the mass of the process fluid in the boiler equals the mass calculated from the mass flows into and out of the boiler during a time step. A special method for fast calculation of the thermal properties was used, because most of the calculation time is spent on calculating the fluid properties. The boiler was divided into elements, and the values of the thermodynamic properties and mass flows were calculated in the nodes that connect the elements. Dynamic behaviour was limited to the process fluid and the tube wall; the heat source was regarded as steady. The elements that connect the preheater to the vaporiser and the vaporiser to the superheater were treated in a special way that accommodates a flexible change from one part to the other. The model consists of the calculation of the steady-state initial distribution of the variables in the nodes and the calculation of these nodal values in a dynamic state.
The initial state of the boiler was obtained from a steady process model that is not part of the boiler model. The known boundary values that may vary during the dynamic calculation were the inlet temperatures and mass flow rates of both the heat source and the process fluid. A brief examination of oscillation around a steady state, the so-called Ledinegg instability, was carried out. This examination showed that the pressure drop in the boiler is a third-degree polynomial of the mass flow rate, and the stability criterion is a second-degree polynomial of the enthalpy change in the preheater. The numerical examination showed that oscillations did not exist in the example case. The dynamic boiler model was analysed for linear and step changes of the entering fluid temperatures and flow rates. The problem in verifying the correctness of the achieved results was that there was no possibility to compare them with measurements. The only way was therefore to determine whether the obtained results were intuitively reasonable and changed logically when the boundary conditions were changed. Numerical stability was checked in a test run in which there was no change in input values; the differences compared with the initial values were so small that the effects of numerical oscillations were negligible. The heat-source-side tests showed that the model gives results that are logical in the directions of the changes, and that the order of magnitude of the timescale of the changes is as expected. The tests on the process fluid side showed that the model gives reasonable results both for temperature changes that cause small alterations in the process state and for mass flow rate changes causing very large alterations. The test runs showed that the dynamic model has no problems calculating cases in which the temperature of the entering heat source suddenly drops below that of the tube wall or the process fluid.
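The cubic pressure-drop characteristic mentioned above is exactly what makes a static (Ledinegg) instability possible: wherever the slope of the internal characteristic against mass flow rate turns negative, several steady flows can satisfy the same external pressure drop. A small sketch with invented cubic coefficients (not the boiler model's actual polynomial) checks the sign of that slope:

```python
# Illustrative internal characteristic dp(m) = a3*m**3 + a2*m**2 + a1*m
# (coefficients invented for the sketch, not taken from the boiler model)
a3, a2, a1 = 2.0, -9.0, 12.0

def pressure_drop(m):
    """Third-degree polynomial of the mass flow rate, as in the abstract."""
    return a3 * m**3 + a2 * m**2 + a1 * m

def is_statically_stable(m):
    """Ledinegg criterion for a stiff (flat) external characteristic:
    an operating point is statically stable where d(dp)/dm > 0."""
    return 3 * a3 * m**2 + 2 * a2 * m + a1 > 0

# Here the slope factors as 6*(m - 1)*(m - 2): it is negative between
# m = 1 and m = 2, so operating points in that band are unstable.
```

The abstract's numerical check that "oscillations did not exist in the example case" amounts to verifying that the boiler's operating point never enters such a negative-slope band.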
Abstract:
Following the failure of the internal counter-revolution carried out by the royalists practically from the start of the constitutional system inaugurated by Riego's revolution, the most absolutist elements organised, through the Congress of Verona, the invasion of Spanish territory by French troops (the Cent Mil Fills de Sant Lluís, the Hundred Thousand Sons of Saint Louis). The reaction of Lleida's local institutions, above all the Paeria (the city council), was immediate. They quickly and fervently rejected the foreign imposition and organised resistance inside the city. This resistance ran up against the economic hardship of the municipal treasury, which forced them to wage a defensive war. It was successful: they resisted the invasion until the last day of October 1823 and became, together with cities such as Barcelona and Tarragona, bastions of liberalism.
Abstract:
Colonization is likely to be more successful for species with an ability to self-fertilize and thus to establish new populations as single individuals. As a result, self-compatibility should be common among colonizing species. This idea, labelled 'Baker's law', has been influential in discussions of sexual-system and mating-system evolution. However, its generality has been questioned, because models of the evolution of dispersal and the mating system predict an association between high dispersal rates and outcrossing rather than selfing, and because of many apparent counter-examples to the law. The contrasting predictions made by models invoking Baker's law versus those for the evolution of the mating system and dispersal urge a reassessment of how we should view both these traits. Here, I review the literature on the evolution of mating and dispersal in colonizing species, with a focus on conceptual issues. I argue for the importance of distinguishing between the selfing or outcrossing rate and a simple ability to self-fertilize, as well as for the need for a more nuanced consideration of dispersal. Colonizing species will be characterized by different phases in their life pattern: dispersal to new habitat, implying an ecological sieve on dispersal traits; establishment and a phase of growth following colonization, implying a sieve on reproductive traits; and a phase of demographic stasis at high density, during which new trait associations can evolve through local adaptation. This dynamic means that the sorting of mating-system and dispersal traits should change over time, making simple predictions difficult.