Abstract:
Diverse parameters, including chaotropicity, can limit the function of cellular systems and thereby determine the extent of Earth's biosphere. Whereas parameters such as temperature, hydrophobicity, pressure, pH, Hofmeister effects, and water activity can be quantified via standard scales of measurement, the chao-/kosmotropic activities of environmentally ubiquitous substances have no widely accepted, universal scale. We developed an assay to determine and quantify chao-/kosmotropicity for 97 chemically diverse substances that can be universally applied to all solutes. This scale is numerically continuous for the solutes assayed (from +361 kJ kg⁻¹ mol⁻¹ for chaotropes to -659 kJ kg⁻¹ mol⁻¹ for kosmotropes), but there are key points that delineate (i) chaotropic from kosmotropic substances (i.e. chaotropes ≥ +4; kosmotropes ≤ -4 kJ kg⁻¹ mol⁻¹); and (ii) chaotropic solutes that are readily water-soluble (log P < 1.9) from hydrophobic substances that exert their chaotropic activity, by proxy, from within the hydrophobic domains of macromolecular systems (log P > 1.9). Examples of chao-/kosmotropicity values are, for chaotropes: phenol +143, CaCl2 +92.2, MgCl2 +54.0, butanol +37.4, guanidine hydrochloride +31.9, urea +16.6, glycerol [>6.5 M] +6.34, ethanol +5.93, fructose +4.56; for kosmotropes: proline -5.76, mannitol -6.69, sucrose -6.92, dimethylsulphoxide (DMSO) -9.72, trehalose -10.6, NaCl -11.0, glycine -14.2, ammonium sulfate -66.9, polyethylene glycol- (PEG-)1000 -126; and for relatively neutral solutes: methanol +3.12, ethylene glycol +1.66, glucose +1.19, glycerol [<5 M] +1.06, maltose -1.43 (all kJ kg⁻¹ mol⁻¹). The data obtained correlate with solute interactions with, and structure-function changes in, enzymes and membranes. We discuss the implications for diverse fields including microbial ecology, biotechnology and astrobiology.
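The key points of the scale lend themselves to a simple classifier. A minimal sketch, assuming only the thresholds quoted in the abstract (chaotropes ≥ +4, kosmotropes ≤ -4 kJ kg⁻¹ mol⁻¹, log P cut-off at 1.9); the function name is ours, not from the paper:

```python
def classify_solute(activity_kj, log_p=None):
    """Classify a solute from its chao-/kosmotropic activity
    (kJ kg^-1 mol^-1) and, optionally, its log P."""
    if activity_kj >= 4:
        # Chaotropes split into readily water-soluble ones and
        # hydrophobic ones that act "by proxy" from within the
        # hydrophobic domains of macromolecular systems.
        if log_p is None:
            return "chaotrope"
        return ("chaotrope (water-soluble)" if log_p < 1.9
                else "chaotrope (hydrophobic, acts by proxy)")
    if activity_kj <= -4:
        return "kosmotrope"
    return "relatively neutral"

# Activity values quoted in the abstract:
print(classify_solute(+16.6))   # urea
print(classify_solute(-11.0))   # NaCl
print(classify_solute(+1.19))   # glucose
```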
Abstract:
The self-consistent interaction between energetic particles and self-generated hydromagnetic waves in a cosmic ray pressure dominated plasma is considered. Using a three-dimensional hybrid magnetohydrodynamics (MHD)-kinetic code, which utilizes a spherical harmonic expansion of the Vlasov-Fokker-Planck equation, high-resolution simulations of the magnetic field growth including feedback on the cosmic rays are carried out. It is found that for shocks with high cosmic ray acceleration efficiency, the magnetic fields become highly disorganized, resulting in near isotropic diffusion, independent of the initial orientation of the ambient magnetic field. The possibility of sub-Bohm diffusion is demonstrated for parallel shocks, while the diffusion coefficient approaches the Bohm limit from below for oblique shocks. This universal behaviour suggests that Bohm diffusion in the root-mean-squared field inferred from observation may provide a realistic estimate for the maximum energy acceleration time-scale in young supernova remnants. Although disordered, the magnetic field is not self-similar suggesting a non-uniform energy-dependent behaviour of the energetic particle transport in the precursor. Possible indirect radiative signatures of cosmic ray driven magnetic field amplification are discussed.
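The Bohm limit invoked above is a standard yardstick: for an ultra-relativistic particle of energy E and charge Ze in a field B, the gyroradius is r_g = E/(ZecB) and the Bohm diffusion coefficient is D_B = r_g c/3. A short illustrative calculation (textbook formulas; the numbers are ours, not taken from the simulations):

```python
C = 2.998e8           # speed of light, m/s
E_CHARGE = 1.602e-19  # elementary charge, C

def bohm_diffusion(energy_ev, b_rms_tesla, z=1):
    """Bohm diffusion coefficient (m^2/s) for an ultra-relativistic
    particle: D_B = r_g * c / 3 with gyroradius r_g = E / (Z e c B)."""
    energy_j = energy_ev * E_CHARGE
    r_g = energy_j / (z * E_CHARGE * C * b_rms_tesla)  # gyroradius, m
    return r_g * C / 3.0

# Illustrative case: a 100 TeV proton in a 10 microgauss (1e-9 T)
# root-mean-squared field.
d = bohm_diffusion(1e14, 1e-9)
```

Note the scalings D_B ∝ E and D_B ∝ 1/B_rms: amplifying the rms field shortens the acceleration time-scale, which is why Bohm diffusion in the amplified field matters for the maximum energy in young supernova remnants.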
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make the above bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that D and n were not known).
We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting). We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
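As a toy illustration only (not an algorithm from this paper), implicit election can be simulated by flooding random ranks: each node forwards only improvements on the best rank it has seen, so at termination exactly the node holding the maximum rank can recognise itself as leader. Counting forwarded messages gives a feel for the Ω(m) message behaviour, since every initial announcement crosses every incident edge:

```python
import random

def elect_leader(adj):
    """Toy implicit leader election on an undirected graph.
    adj: dict mapping each node to a list of its neighbours."""
    rank = {v: random.random() for v in adj}   # random ranks, w.h.p. distinct
    best = dict(rank)                          # best rank each node has seen
    frontier = [(v, rank[v]) for v in adj]     # pending announcements
    messages = 0
    while frontier:
        v, r = frontier.pop()
        for u in adj[v]:
            messages += 1
            if r > best[u]:                    # forward only improvements
                best[u] = r
                frontier.append((u, r))
    leader = max(rank, key=rank.get)
    # Implicit election: only `leader` finds its own rank unbeaten.
    assert best[leader] == rank[leader]
    return leader, messages

# A 6-node ring (m = 6 edges, each counted in both directions).
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
leader, msgs = elect_leader(ring)
```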
Abstract:
Purpose: To evaluate the clinical and histological side effects of a prototype stereotactic radiotherapy system delivering microcollimated external beam radiation through pars plana in porcine eyes.
Methods: Five Yucatan mini-swine (10 eyes) were randomized to five treatment groups. Eight eyes were dosed with X-ray radiation on Day 1, and two eyes served as untreated controls. Treated eyes received doses up to 60 Gy to the retina and up to 130 Gy to the sclera using single or overlapping beams. The treatment beams were highly collimated such that the diameter was approximately 2.5 mm on the sclera and 3 mm on the retinal surface. Fundus photography, fluorescein angiography (FA), and spectral domain optical coherence tomography (SD-OCT) were obtained on days 7, 30, 60, and 110. Images were examined by a masked grader and evaluated for abnormalities. Animals were sacrificed on day 111 and gross and histopathological analysis was conducted.
Results: Histological and gross changes to eye structures including conjunctiva and lens were minimal at all doses. Fundus, FA, and SD-OCT of the targeted region failed to disclose any abnormality in the control or 21 Gy treated animals. In the 42 and 60 Gy animals, hypopigmented spots were noted after treatment on clinical exam, and corresponding hyperfluorescent staining was seen in late frames. No evidence of choroidal hypoperfusion was seen. The histological specimens from the 60 Gy animals showed photoreceptor loss and displacement of cone nuclei.
Conclusion: Transscleral stereotactic radiation dosing in porcine eyes can be accomplished with no significant adverse events at doses less than 42 Gy.
Abstract:
International exhibitions were greatly responsible for the modernization of western society. The motive for these events was the possibility of enhancing a country's international status abroad. The genesis of world exhibitions came from the conviction that humanity as a whole would be improved by the continual flow of new practical applications, by the development of modern communication techniques and by the social need for a medium that could acquaint the general public with changes in technology, economy and society.
Since the first national industrial exhibitions in Paris during the eighteenth century, and especially from the first Great Exhibition in London's Hyde Park in 1851, these international events spread steadily all over Europe and the United States, reaching Latin America at the beginning of the twentieth century. The work of professionals such as Daniel Burnham, Werner Hegemann and Elbert Peets made the relation between exhibitions and urban transformation a much closer one, setting a precedent for subsequent exhibitions.
In Buenos Aires, the celebration in 1910 of the centennial of independence from Spain had many meanings and repercussions. A series of factors allowed for a moment of change in the city. Official optimism, economic progress, inequality and social conflict made this a suitable time for transformation. With the organization of the Exposición Internacional the government had, among others, one specific aim: to create a network of visual tools that would establish a feeling of belonging and provide an identity for the mixture of cultures that populated the city of Buenos Aires at the time. Another important objective of the government was to put Buenos Aires on the level of European cities.
Foreign professionals had a great influence on the conceptual and material shaping of the exhibition and on the subsequent changes to the urban condition. The exhibition played an important role in ways of thinking about the city and in the ideas of leisure it introduced. As a didactic tool, the exhibition set a precedent for conceiving leisure spaces in the future. Urban and landscape planners such as Joseph Bouvard and Charles Thays were instrumental in much of the design of the exhibition, but it was not only the architects and designers who shaped the identity of the fair. Visitors such as Jules Huret and Georges Clemenceau were responsible for giving the city an international image it did not previously have.
This paper explores, on the one hand, the significance of the 1910 exhibition for the shaping of the city and its image and, on the other, the role of foreign professionals and the reach of their influence.
Abstract:
A revised water model intended for use in condensed phase simulations in the framework of the self consistent polarizable ion tight binding theory is constructed. The model is applied to the water monomer, dimer, hexamers, ice, and liquid, where it demonstrates good agreement with theoretical results obtained by more accurate methods, such as DFT and CCSD(T), and with experiment. In particular, the temperature dependence of the self diffusion coefficient in liquid water predicted by the model closely reproduces the experimental curve in the temperature interval between 230 K and 350 K. In addition, and in contrast to standard DFT, the model properly orders the relative densities of liquid water and ice. A notable, but inevitable, shortcoming of the model is its underestimation of the static dielectric constant by a factor of two. We demonstrate that the description of inter- and intramolecular forces embodied in the tight binding approximation in quantum mechanics leads to a number of valuable insights which can be missing from ab initio quantum chemistry and classical force fields. These include a discussion of the origin of the enhanced molecular electric dipole moment in the condensed phases, and a detailed explanation for the increase of the coordination number in liquid water as a function of temperature and compared with ice, leading to insights into the anomalous expansion on freezing. The theory holds out the prospect of an understanding of the currently unexplained density maximum of water near the freezing point.
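The self-diffusion coefficient referred to above is typically extracted from a simulation trajectory via the Einstein relation, D = MSD(t)/(6t) in three dimensions. A minimal sketch of that analysis, applied here to a synthetic random walk rather than to the authors' trajectories:

```python
import random

def diffusion_from_msd(traj, dt, lag):
    """Estimate D from the mean squared displacement at a fixed lag,
    averaged over all time origins: D = MSD / (6 * lag * dt)."""
    n = len(traj) - lag
    msd = 0.0
    for i in range(n):
        msd += sum((a - b) ** 2 for a, b in zip(traj[i + lag], traj[i]))
    msd /= n
    return msd / (6.0 * lag * dt)

# Synthetic 3D random walk with unit-variance Gaussian steps; for this
# walk the true diffusion coefficient is 0.5 in reduced units (dt = 1).
random.seed(0)
pos = [0.0, 0.0, 0.0]
traj = [tuple(pos)]
for _ in range(5000):
    pos = [p + random.gauss(0.0, 1.0) for p in pos]
    traj.append(tuple(pos))
d_est = diffusion_from_msd(traj, dt=1.0, lag=10)
```

In practice one fits the slope of MSD versus lag over a linear regime rather than using a single lag; the single-lag form above is the simplest version of the same estimator.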
Abstract:
We demonstrate a model for stoichiometric and reduced titanium dioxide intended for use in molecular dynamics and other atomistic simulations and based on the polarizable ion tight binding theory. This extends the model introduced in two previous papers from molecular and liquid applications into the solid state, thus completing the task of providing a comprehensive and unified scheme for studying chemical reactions, particularly aimed at problems in catalysis and electrochemistry. As before, experimental results are given priority over theoretical ones in selecting targets for model fitting, for which we used crystal parameters and band gaps of the titania bulk polymorphs rutile and anatase. The model is applied to six low index titania surfaces, with and without oxygen vacancies and adsorbed water molecules, the latter in both dissociated and non-dissociated states. Finally, we present the results of a molecular dynamics simulation of an anatase cluster with a number of adsorbed water molecules and discuss the role of edge and corner atoms of the cluster. (C) 2014 AIP Publishing LLC.
Abstract:
As is now well established, a first order expansion of the Hohenberg-Kohn total energy density functional about a trial input density, namely the Harris-Foulkes functional, can be used to rationalize a non self consistent tight binding model. If the expansion is taken to second order, then the energy and electron density matrix need to be calculated self consistently, and from this functional one can derive a charge self consistent tight binding theory. In this paper we have used this to describe a polarizable ion tight binding model which has the benefit of treating charge transfer in point multipoles. This admits a ready description of ionic polarizability and crystal field splitting. In constructing such a model it is necessary to find a number of parameters that mimic their more exact counterparts in the density functional theory. We describe in detail how this is done using a combination of intuition, exact analytical fitting, and a genetic optimization algorithm. Having obtained model parameters, we show that this constitutes a transferable scheme that can be applied rather universally to small and medium sized organic molecules. We show that the model gives a good account of static structural and dynamic vibrational properties of a library of molecules, and finally we demonstrate its capability with a real time simulation of an enolization reaction in aqueous solution. In two subsequent papers, we show that the model is a great deal more general, in that it will describe solvents and solid substrates, and that we have therefore created a self consistent quantum mechanical scheme that may be applied to simulations in heterogeneous catalysis.
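The genetic optimization step mentioned above can be sketched generically. This is our illustration, not the authors' code: candidate parameter vectors are scored against target properties, the best half survives each generation, and children are produced by uniform crossover plus Gaussian mutation. A toy quadratic misfit stands in for the real fitting targets (crystal parameters, band gaps, and so on):

```python
import random

def fit_parameters(score, n_params, pop=30, generations=60, sigma=0.3):
    """Minimize score(params) by truncation selection, uniform
    crossover, and Gaussian mutation. Returns the best vector found."""
    population = [[random.uniform(-1, 1) for _ in range(n_params)]
                  for _ in range(pop)]
    for _ in range(generations):
        population.sort(key=score)
        parents = population[: pop // 2]        # best half survives intact
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            child = [random.choice(g) for g in zip(a, b)]       # crossover
            child = [g + random.gauss(0.0, sigma) for g in child]  # mutation
            children.append(child)
        population = parents + children
    return min(population, key=score)

# Toy target: recover the parameter vector (0.5, -0.2).
random.seed(1)
best = fit_parameters(lambda p: (p[0] - 0.5) ** 2 + (p[1] + 0.2) ** 2, 2)
```

Keeping the unmutated parents each generation (elitism) guarantees the best score never worsens, which matters when each evaluation, as in the real fitting problem, is expensive.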
Abstract:
Electing a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This article focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. In particular, the seemingly obvious lower bounds of Ω(m) messages, where m is the number of edges in the network, and Ω(D) time, where D is the network diameter, are nontrivial to show for randomized (Monte Carlo) algorithms. (Recent results, showing that even Ω(n), where n is the number of nodes in the network, is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious). To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the restricted case of comparison algorithms, where it was also required that nodes may not wake up spontaneously and that D and n were not known. We establish these fundamental lower bounds in this article for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (namely, algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of nodes' identities. To show that these bounds are tight, we present an O(m) messages algorithm. An O(D) time leader election algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.
An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. The answer is known to be negative in the deterministic setting. We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
Abstract:
With interest in microneedles as a novel transdermal drug delivery system increasing rapidly since the late 1990s (Margetts and Sawyer Contin Educ Anaesthesia Crit Care Pain. 7(5):171-76, 2007), a diverse range of microneedle systems has been fabricated with varying designs and dimensions. However, there are still very few commercially available microneedle products. One major issue in microneedle manufacture on an industrial scale is the lack of specific quality standards for this novel dosage form in the context of Good Manufacturing Practice (GMP). A range of mechanical characterisation tests and microneedle insertion analysis techniques is used by researchers working on microneedle systems to assess the safety and performance profiles of their various designs. The lack of standardised tests and equipment for demonstrating microneedle mechanical properties and insertion capability makes it difficult to directly compare the in-use performance of candidate systems. This review highlights the mechanical tests and insertion analysis techniques used by various groups to characterise microneedles. This in turn exposes the urgent need for consistency across the range of microneedle systems in order to promote innovation and the successful commercialisation of microneedle products.
Abstract:
The introduction of the Universal Periodic Review (UPR) mechanism as an innovative component of the new Human Rights Council in 2006 has received little academic scrutiny. This is partly because it holds as its objective an improvement in human rights situations on the ground, a goal that is difficult to test among the many possible causal factors attributable to law reform and policy change, and partly because the mechanism has only completed one full cycle of review. This article seeks to remedy this absence of analysis by examining the experience of the United Kingdom during its first review. In doing so, the article first considers the conception of the UPR before proceeding to examine the procedure and the recommendations made to the UK by its peers. Finally, the article considers the five-year review of the UPR, which occurred as a subset of the Human Rights Council Review in 2011, and the resulting changes to the process modalities.