962 results for Moment of symmetry
Abstract:
At this moment of extended economic, social and environmental crisis, in which new interventions on the consolidated city are being set out, it is essential to draw on the experience acquired in the urban rehabilitation processes carried out in Spain over the last thirty years. Despite the complexity of these processes and the diversity of the situations and actions involved, this paper analyses common patterns across twenty urban rehabilitation experiences. Different stages of the processes were studied, from management to the regenerated areas, in order to ease the design of new intervention initiatives.
Abstract:
Over the past few years, the common practice within air traffic management has been for commercial aircraft to fly along a set of predefined routes to reach their destination. Currently, aircraft operators are requesting more flexibility to fly according to their preferences, in order to achieve their business objectives. For this reason, much research effort is being invested in developing techniques to evaluate optimal aircraft trajectories and traffic synchronisation. A further issue is the inefficient use of airspace that results from relying on barometric altitude, above all in the landing and take-off phases and in Continuous Descent Approach (CDA) trajectories, where it is currently necessary to introduce the appropriate reference setting (QNH or QFE). The interest of this research arises from the need to solve this problem and to permit better airspace management; its main goals are to evaluate the impact, weaknesses and strengths of using geometric altitude instead of barometric altitude.

Moreover, this dissertation proposes the design of a simplified trajectory simulator able to predict aircraft trajectories. The model is based on a three-degrees-of-freedom aircraft point-mass model that can adapt aircraft performance data from the Base of Aircraft Data (BADA) and meteorological information. A feature of this trajectory simulator is that it supports the improvement of strategic and pre-tactical trajectory planning in the future Air Traffic Management (ATM) system. To this end, the error of the tool (the aircraft trajectory simulator) is measured by comparing its performance variables with actual flown trajectories obtained from Flight Data Recorder information. The trajectory simulator is validated by analysing the performance of different types of aircraft over different routes. A fuel consumption estimation error was identified, and a correction is proposed for each aircraft model type.

In the future ATM system, the trajectory becomes the fundamental element of a new set of operating procedures collectively referred to as Trajectory-Based Operations (TBO). Governmental institutions, academia and industry have therefore shown renewed interest in the application of trajectory optimisation techniques in commercial aviation. The trajectory optimisation problem can be solved using optimal control methods. In this research we present and discuss the existing methods for solving optimal control problems, focusing on direct collocation, which has received recent attention from the scientific community. In particular, two families of collocation methods are analysed: Hermite-Legendre-Gauss-Lobatto collocation and pseudospectral collocation. They are first compared on a benchmark case study, the minimum-fuel trajectory problem with fixed arrival time. For the sake of scalability to more realistic problems, the different methods are also tested on a real Airbus A319 Cairo-Madrid flight. Results show that pseudospectral collocation, which proved to be numerically more accurate and computationally much faster, is suitable for the type of problems arising in trajectory optimisation with application to ATM. Fast and accurate optimal trajectories can contribute to meeting the new challenges of the future ATM system.

As atmospheric uncertainties are one of the most important issues in trajectory planning, the final objective of this dissertation is to establish an order of magnitude of how much fuel consumption differs under different atmospheric conditions. It is important to note that in the strategic planning phase the optimal trajectories are determined from meteorological forecasts that differ from the conditions at the moment of the flight. The optimal trajectories showed savings of at least 500 kg in most of the atmospheric conditions considered (different pressure and temperature at Mean Sea Level, and different temperature lapse rates) with respect to the conventional procedure simulated under the same atmospheric conditions. These results show that the implementation of optimal profiles is beneficial under the current Air Traffic Management (ATM) system.
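A sketch of the kind of dynamics such a point-mass simulator integrates is given below. These are the standard three-degrees-of-freedom longitudinal point-mass equations often used together with BADA performance data; the exact formulation in the dissertation is not reproduced here, so the symbols (true airspeed v, altitude h, along-track distance s, mass m, thrust T, drag D, flight path angle gamma, along-track wind w, fuel flow f) are assumptions for illustration only:

$$\dot{v}=\frac{T-D}{m}-g\sin\gamma, \qquad \dot{h}=v\sin\gamma, \qquad \dot{s}=v\cos\gamma+w, \qquad \dot{m}=-f(v,h,T).$$

In a direct-collocation approach, these differential equations are discretised at the collocation points and imposed as algebraic constraints of a nonlinear programming problem; this is the common setting for both the Hermite-Legendre-Gauss-Lobatto and the pseudospectral schemes compared above.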
Abstract:
The aim of this paper was to learn the coaches' point of view on the concept of the critical moment in basketball. A qualitative methodology was used, with semi-structured interviews as the data collection instrument. Twelve ACB league coaches were interviewed during the 2011/12 season. Results show that uncertainty is associated with the concept of the critical moment. The last minutes of close games, as well as overtime, are identified as critical moments of the game. There is no single statistical variable considered by all coaches to determine the critical moment, but they agreed that respecting certain game rules is essential to face the critical moment.
Abstract:
This study evaluates the effect of Lecirelin (Dalmarelin®, Fatro, Italy) diluted in different excipients (benzyl alcohol, benzoic acid and parabens) and added to a seminal dose on LH concentrations, progesterone concentrations and ovarian status in rabbits. The in vitro effect on spermatozoa was also tested. A total of 100 multiparous female rabbits were divided into 5 groups which, at the moment of AI, received a 0.2 mL (5 μg/dose) intramuscular (im) inoculation of Lecirelin (control), or the same Lecirelin dose administered intravaginally (iv) with the seminal dose alone (Lecirelin group) or with benzyl alcohol (Lecirelin BA group), benzoic acid (Lecirelin BAc group) or parabens (Lecirelin PA group) as an excipient. After 7 days, 10 rabbits per group were euthanized to analyze their ovarian status. In the control group, a high LH peak was detected 30 min post AI, while in the iv groups a slight increase in LH occurred after 120 min. The ovulation and fertility rates were similar in the control and Lecirelin groups, while the lowest fertility rate was detected in the Lecirelin BA group. In a second experiment, semen samples collected from male rabbits were diluted in TALP (control) or mixed with the 5 μg Lecirelin solutions used in the first experiment. The highest percentage of capacitated sperm (68.3%) was recorded in the Lecirelin PA group. The lowest percentages were observed in the Lecirelin BA and BAc groups. In conclusion, the iv administration of Lecirelin represents an alternative method for simplifying rabbit insemination procedures.
Analytical bearing capacity of strip footing in weightless materials with power-law failure criteria
Abstract:
Sokolovskii’s method of characteristics is extended to provide analytical solutions for the ultimate load, at the moment of plastic failure and under plane-strain conditions, of shallow strip foundations on weightless rigid-plastic media with a noncohesive power-law failure envelope. The formulation is made parametrically in terms of the instantaneous friction angle, and the key idea for obtaining the bearing capacity is that information can be transmitted from the free surface (where the external loads are known) to the contact plane of the foundation. The methodology can consider foundations adjacent to a slope, external surcharges at the free surface, and inclined loads (both on the slope and on the foundation). Sensitivity analyses illustrate the influence on bearing capacity of changes in the different geometrical parameters involved. An application example is presented, design plots are provided, and model predictions are compared with the results of bearing capacity tests under low gravity.
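For illustration, a noncohesive power-law failure envelope of the type discussed above can be written, in one common parametrisation (the constants A and n below are assumptions for illustration, not values taken from the paper), as

$$\tau = A\,\sigma^{\,n}, \qquad 0 < n \le 1,$$

with the instantaneous friction angle given by the local slope of the envelope, $\tan\phi_i = d\tau/d\sigma = n A\,\sigma^{\,n-1}$.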
Abstract:
A Network of Evolutionary Processors, or NEP, is a computational model inspired by the evolutionary model of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine capable of solving NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real-world problems (which require high scalability) at the cost of high spatial complexity. In the NEP model, cells are represented by words that encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, each of which represents a cell. These fixed moments of evolution are called configurations. As in the biological model, words (cells) mutate and divide on the basis of simple bio-operations, but only fit words (much as in the process of natural selection) are preserved for the next configuration. As a computational tool, a NEP defines a parallel and distributed architecture for symbolic processing, in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency and universality have been widely studied and proven. We can therefore consider the theoretical NEP model to have reached maturity.

The main motivation of this Final Degree Project is to propose a practical approach that makes it possible to move from the theoretical NEP model to a real implementation that can run on high-performance computing platforms, in order to solve the complex problems demanded by today's society. Until now, the tools developed to simulate the NEP model, although correct and with satisfactory results, have usually been tied to their execution environment, whether through specific hardware or problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or any of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months and currently in its second iteration, the prototype phase having been left behind. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix consists of two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took into account the current state of the model, that is, the definitions of the most common processors and filters that make up the NEP family of models. Additionally, this component offers flexibility of execution: the capabilities of the simulator can be extended without modifying Nepfix by means of a scripting language. Within the development of this component, a standard representation of the NEP model based on the JSON format has also been defined, and a way of representing and encoding words, necessary for communication between servers, is proposed. Moreover, an important feature of this component is that it can be considered an isolated application, so the distribution and execution strategies are completely independent.

The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. It is worth highlighting not only for the expected practical results, but also for the research process that must be undertaken with this new perspective on the execution of natural computing systems. The main characteristic of applications running in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocol to communicate with the other instances. As added value, Nepfix addresses two different implementation perspectives for the distribution and execution model (developed in two different iterations), which have a very significant impact on the capabilities and restrictions of the simulator. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are treated as elements that react to the need to process a word. This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with regard to load balancing between processors), but it produces undesired effects such as indeterminacy in the order of the results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration, on the other hand, corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message queue server is needed. Nevertheless, in this perspective the benefits outweigh the drawbacks for sufficiently large problems, since distribution is immediate (there are no restrictions), although the scaling process is not trivial.

In short, the concept of Nepfix as a computational framework can be considered satisfactory: the technology is viable, and the first results confirm that the characteristics originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to the problems identified, such as error recovery and the dynamic division of a NEP into different subdomains. Other problems, beyond the scope of this project, remain open to future development, such as the standardization of the representation of words and optimizations in the execution of the synchronous model. Finally, some preliminary results of this Final Degree Project were recently presented as a scientific paper at the International Work-Conference on Artificial Neural Networks (IWANN) 2015 and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work, more than a Final Degree Project, is only the beginning of work that may have a wider impact in the scientific community.

Abstract: A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. A NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells represented by a word in a given set of words, called the genotype space of the species, are accepted as surviving (correct) ones. This feature is analogous to the natural process of evolution. Formally, a NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since the date when the NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity. The main motivation for this End of Grade project (EOG project in short) is to propose a practical approximation that allows us to close the gap between the theoretical NEP model and a practical implementation on high-performance computational platforms, in order to solve some of the high-complexity problems society requires today. Up until now, the tools developed to simulate NEPs, while correct and successful, are usually tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants either in a local way, similar to a traditional application, or in a distributed cloud environment. Nepfix as an application was developed during a 7-month cycle and is undergoing its second iteration now that the prototype period has ended. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made up of two components or modules. The first module corresponds to the NEP execution and therefore the simulation. During the development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulation, a definition language for NEPs has been defined based on JSON, as well as a mechanism to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications that include it as a dependency are possible; the distribution of NEPs is one example. The second module corresponds to executing Nepfix in the cloud. The development carried a heavy R&D component, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and exploration of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and are encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider Nepfix as a computational framework successful: cloud technology is ready for the challenge, and the first results reassure us that the properties the Nepfix project pursued were met. Many investigation branches are left open for future research. In this EOG, implementation guidelines are proposed for some of them, such as error recovery or dynamic NEP splitting. On the other hand, other interesting problems that were not within the scope of this project were identified during development, such as word representation standardization or NEP model optimizations. As confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published at the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015. Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
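To make the evolve-then-filter cycle described above concrete, here is a minimal, illustrative Python sketch of a single NEP configuration step (mutation of words followed by filter-based selection). It is not Nepfix code: the class name, the point-substitution rule and the filter predicate are hypothetical simplifications of the model.

class Processor:
    """One node of the network: applies a simple bio-operation (point
    substitution) to its words, then lets through only the words accepted
    by its output filter -- the 'fit' words that survive into the next
    configuration."""

    def __init__(self, words, rules, output_filter):
        self.words = set(words)             # current configuration of this node
        self.rules = rules                  # (old_symbol, new_symbol) substitutions
        self.output_filter = output_filter  # predicate deciding which words may leave

    def evolve(self):
        """Evolutionary step: apply every substitution rule at every position."""
        new_words = set()
        for word in self.words:
            for old, new in self.rules:
                for i, symbol in enumerate(word):
                    if symbol == old:
                        new_words.add(word[:i] + new + word[i + 1:])
        self.words |= new_words

    def communicate(self):
        """Communication step: emit only the words accepted by the output filter."""
        sent = {w for w in self.words if self.output_filter(w)}
        self.words -= sent
        return sent


# Tiny two-node network: node a rewrites 'a' -> 'b'; node b collects words
# made only of 'b' (a stand-in for the genotype space of the species).
a = Processor({"aab", "aba"}, rules=[("a", "b")],
              output_filter=lambda w: set(w) == {"b"})
b = Processor(set(), rules=[], output_filter=lambda w: False)

for _ in range(3):                  # a few synchronous configurations
    a.evolve()
    b.words |= a.communicate()      # words travel along the single edge a -> b

print(sorted(b.words))              # ['bbb'] once every 'a' has mutated

In the synchronous perspective described above, each pass of this loop would correspond to one start-compute-synchronize cycle across distributed instances, with the synchronization barrier provided by the message queue.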
Abstract:
The doctoral thesis focuses on the possibility of understanding that the practice of architecture can find instrumental support in communicative practices, going beyond any classical simplification of the use of media as a merely superficial, post-produced or simply promotional application. Starting from this premise, cases from the last quarter of the 20th century are presented, and it is observed that threats such as the risk of trivialization, the possible saturation of the public image, or the foreseeable incorrect association with other individuals in group or thematic presentations may have contributed to a notable growth in the control acquired by architects over their media opportunities. That is, as if architecture had begun to overcome and optimize something inevitable: the fact that exhibition formulas and publications, or rather the acts of exhibiting and publishing (oneself), are tools available for activating some kind of intellectual management of the communication and information circulating about architecture itself. This practice of "self-edition" is analyzed in a specific period of the trajectory of OMA (Office for Metropolitan Architecture), a studio considered a pioneer in the efficient, opportunistic and personalized use of media. The second part of the thesis therefore analyzes its well-known monograph S,M,L,XL (1995), a volume produced with great involvement of its protagonists during the editing, and whose production process had barely been investigated. This publication marked a turning point in its genre, altering all previous formats and restrictions, and it has become an emblematic volume for the discipline that no later replica has been able to surpass. Here it is also presented as the trigger for the construction of a "big event" that concludes in the transformation of OMA's identity over 10 years, paradoxically between the birth of the Groszstadt Foundation and the start of AMO's activity, two key parallel entities attached to OMA. This approach stems from how the research reveals that S,M,L,XL is one more piece, central but not independent, within a sum of actions and individuals, as well as other publications, exhibitions, events and also essays and projects, in particular Bigness, Generic City, Euralille and the competitions of 1989. Significant aspects include the openness to multiple authorship, headed by Rem Koolhaas and the graphic designer Bruce Mau, accompanied in the acknowledgements by the editor Jennifer Sigler and nearly one hundred names whose contributions are not necessarily based on the construction of fragments of the book. The suppression of certain limits also makes it possible to go beyond the tasks initially considered relevant in the editing of a publication. A general aim of the thesis is also to reflect on previously questioned relationships, such as that established between architecture and the markets or the economy. Taking as a starting point the idea of "design intelligence" suggested by Michael Speaks (2001), his arguments are distilled into the essential point: finding the singularity, or particular intelligence, of each architecture or design studio. The thesis also explores whether the construction of this kind of masterful formulas also involved interesting and productive combinations of issues such as efficiency and creativity, or organization and ideas.

This dynamic of bidirectional relationships, in a present marked by an excess of information, underpins the proposal of a more evident equivalence between the "socialization" of the architect's work, by sharing it publicly and opening new conversations, and the inverse relationship, based on working on that "socialization" itself. As if awareness of the use of media could indeed be instrumental and contribute to the development of the practice of architecture, from an ideally committed and intellectual perspective.

ABSTRACT The dissertation argues for the possibility of understanding that the practice of architecture can find instrumental support in the practices of communication, overcoming any classical simplification of the use of media, generally reduced to superficial treatments or promotional efforts. Cases from the last decades of the 20th century are presented. Some of the threats detected, such as the risk of triviality, the saturation of the public image or the foreseeable wrong association among individuals when they are introduced as part of thematic groups, might have encouraged a noticeable increase in the command taken by architects whenever there is a chance to intervene in a media environment. In other words, it can be argued that architecture has started to overcome and optimize the inevitable: the fact that exhibition formulas and publications, or simply the practice of (self-)exhibition or (self-)publication, are tools at our disposal for the activation of some kind of intellectual management of communication and of the information circulating about architecture itself. This practice of "self-edition" is analyzed in a specific timeframe of OMA's trajectory, an office considered a ground-breaking actor in the efficient and opportunistic use of media. The second part of the thesis then dissects their monograph S,M,L,XL (1995), a volume in whose edition and design its main characters were deeply involved, a process barely analyzed up to now. This publication marked a turning point in its own genre, disrupting old formats and traditional restrictions. It became such an emblematic volume for the discipline that none of the later attempts at a replica has been able to improve on this precedent. Here, the book is also presented as the element that triggers the construction of a "big event" that concludes in the transformation of OMA's identity over 10 years, paradoxically between the birth of the Groszstadt Foundation and the early steps of AMO, two parallel entities connected to OMA. This position emerges from how the research unveils that S,M,L,XL is one more piece, a key one but not an unrelated element, within a sum of actions and individuals, as well as other publications, exhibitions, articles and projects, in particular Bigness, Generic City, Euralille and the competitions of 1989. Among the remarkable innovations of the monograph there is an outstanding openness to a regime of multiple authorship, headed by Rem Koolhaas and the graphic designer Bruce Mau, who share the acknowledgements page with the editor, Jennifer Sigler, and almost 100 people, not necessarily responsible for specific fragments of the book. In this respect, the dissolution of certain limits made it possible to go beyond the tasks usually expected in the edition of a publication. A general goal of the thesis is also to open a debate on typically questioned relations, particularly between architecture and markets or the economy. Using the idea of "design intelligence", outlined by Michael Speaks in 2001, the thesis pulls out its essence: basically, the interest in detecting the singularity, or particular intelligence, of every office of architecture and design. It then explores whether, in the construction of this kind of ingenious formulas, one could find interesting and useful combinations among issues like efficiency and creativity, or organization and ideas. This dynamic of bidirectional relations, urgently relevant at this present moment of excess of information, is the basis for the proposal of a more evident equivalence between the "socialization" of the work in architecture, any time it is shared in public, and the opposite concept, the work on the very act of "socialization" itself. As if a new awareness of the capacities of the use of media could turn it into an instrumental force, capable of contributing to the development of the practice of architecture, from an ideally committed and intellectual perspective.
Structural analysis of the binding modes of minor groove ligands comprised of disubstituted benzenes
Abstract:
Two-dimensional homonuclear NMR was used to characterize synthetic DNA minor groove-binding ligands in complexes with oligonucleotides containing three different A-T binding sites. The three ligands studied have a C2 axis of symmetry and have the same general structural motif of a central para-substituted benzene ring flanked by two meta-substituted rings, giving the molecules a crescent shape. As with other ligands of this shape, specificity seems to arise from a tight fit in the narrow minor groove of the preferred A-T-rich sequences. We found that these ligands slide between binding subsites, behavior attributed to the fact that all of the amide protons in the ligand backbone cannot hydrogen bond to the minor groove simultaneously.
Abstract:
The NagC and Mlc proteins are homologous transcriptional regulators that control the expression of several phosphotransferase system (PTS) genes in Escherichia coli. NagC represses nagE, encoding the N-acetylglucosamine-specific transporter, while Mlc represses three PTS operons, ptsG, manXYZ and ptsHIcrr, involved in the uptake of glucose. NagC and Mlc can bind to each other's operators, at least in vitro. A binding site selection procedure was used to try to distinguish NagC and Mlc sites. The major difference was that all selected NagC binding sites had a G or a C at positions +11/–11 from the centre of symmetry. This is also the case for most native NagC sites, but not for the nagE operator, which thus looks like a potential Mlc target. The nagE operator does exhibit a higher affinity for Mlc than for NagC, but no regulation of nagE by physiological concentrations of Mlc was detected in vivo. Regulation of wild-type nagE by NagC is achieved because of the chelation effect due to a second high-affinity NagC operator covering the nagB promoter. Replacing the A/T at +11/–11 with C/G allows repression by NagC in the absence of the nagB operator.
Abstract:
The use (and misuse) of symmetry arguments in constructing molecular models and in the interpretation of experimental observations bearing on molecular structure (spectroscopy, diffraction, etc.) is discussed. Examples include the development of point groups and space groups for describing the external and internal symmetry of crystals, the derivation of molecular symmetry by counting isomers (the benzene structure), molecular chirality, the connection between macroscopic and molecular chirality, pseudorotation, the symmetry group of nonrigid molecules, and the use of orbital symmetry arguments in discussing aspects of chemical reactivity.
Abstract:
We have used a PCR-based technology to study the V beta 5 and V beta 17 repertoire of T-cell populations in HLA-DR2 multiple sclerosis (MS) patients. We have found that the five MS DR2 patients studied present, at the moment of diagnosis and prior to any treatment, a marked expansion of a CD4+ T-cell population bearing V beta 5-J beta 1.4 beta chains. The sequences of the complementarity-determining region 3 of the expanded T cells are highly homologous. One shares structural features with that of the T cells infiltrating the central nervous system and of myelin basic protein-reactive T cells found in HLA-DR2 MS patients. An homologous sequence was not detectable in MS patients expressing DR alleles other than DR2. However, it is detectable but not expanded in healthy DR2 individuals. The possible mechanisms leading to its in vivo proliferation at the onset of MS are discussed.
Abstract:
Examination of the structural basis for antiviral activity, oral pharmacokinetics, and hepatic metabolism among a series of symmetry-based inhibitors of the human immunodeficiency virus (HIV) protease led to the discovery of ABT-538, a promising experimental drug for the therapeutic intervention in acquired immunodeficiency syndrome (AIDS). ABT-538 exhibited potent in vitro activity against laboratory and clinical strains of HIV-1 [50% effective concentration (EC50) = 0.022-0.13 microM] and HIV-2 (EC50 = 0.16 microM). Following a single 10-mg/kg oral dose, plasma concentrations in rat, dog, and monkey exceeded the in vitro antiviral EC50 for > 12 h. In human trials, a single 400-mg dose of ABT-538 displayed a prolonged absorption profile and achieved a peak plasma concentration in excess of 5 micrograms/ml. These findings demonstrate that high oral bioavailability can be achieved in humans with peptidomimetic inhibitors of HIV protease.
Abstract:
Open water swimming has seen an increase in the number of competitions and participants worldwide. Following this trend, studies have been carried out to identify the physical characteristics and physiological responses of athletes in this type of event. However, studies at the level of behavioural analysis are scarce, especially under real conditions of distance and environment (sea). The aim of this study was to investigate the performance characteristics and the temporal organisation of the strokes of open water swimmers; more specifically, to learn which resources open water athletes use to achieve their goal of covering a course in the sea in the shortest possible time. The sample consisted of 23 athletes, with a mean age of 26.4 (±3.2) years. The task was to swim a 1500-metre course laid out as a circuit in the open sea. A GPS unit (Garmin Fênix 3) and a stopwatch (FINIS Accusplit Eagle AX602) were used to capture the performance-related variables. The images used to obtain the data describing the temporal organisation of the strokes were recorded at three points of the course: start (I), 20 to 40 metres; middle (M), 800 to 820 metres; and end (F), 1450 to 1470 metres. A camera (Nikon Coolpix S5300) attached to a boat was used, and the Kinovea 8.20 software allowed frame-by-frame analysis of the strokes. The dependent variables were related to performance (time, speed and total distance covered, as well as stroke frequency at each of the three points of the course); to the variant aspects of the strokes (total duration of the cycle, of the strokes, and of the aerial and aquatic phases); and to the invariant aspects of the strokes (relative timing of the aerial and aquatic phases and its variability). Repeated-measures analysis of variance was used to compare the three moments of the task (I, M and F) for all variables, Pearson correlation was used to analyse the magnitude of the relationships between the performance variables, and Student's t-test for paired measures was used to compare possible differences between the right and left arms at each moment; statistical significance was set at α ≤ 0.05. Regarding performance, the results indicated that the swimmers used a different stroke frequency (Fb) at each of the three moments, higher at I than at M and F, and lower at M than at F; these changes were accompanied by adjustments in the variant aspects, such as the total duration of the cycle, of the strokes and of the aerial and aquatic phases. Moreover, at all three moments the swimmers showed temporal symmetry between the strokes of the two arms, although differences were evident between the stroke phases when the arms were compared. Regarding the invariant aspects, a change in the pattern from I to M and F was detected, with the athletes using the same temporal structure at M and F. As for the variability of the variant and invariant aspects of the strokes and stroke phases, a decrease in magnitude was observed over the course of the task, with the left arm showing greater variability than the right at all three moments. Thus, in view of these results, it was concluded that the resources used by skilled swimmers to swim in a relatively unstable environment, under real conditions of distance and environment (sea), comprise changes in performance (Fb) associated with adjustments in the variant aspects, together with changes in the invariant aspects of the strokes, as a function of the moment of the task.
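As an illustration of the statistical comparison described above (repeated-measures ANOVA across the three course points, paired t-tests between arms, and Pearson correlations between performance variables), a minimal Python sketch follows. The data file and column names are hypothetical, and the original analysis may well have been carried out with different software.

import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format table: one row per swimmer per course point (I/M/F),
# with stroke frequency (fb), mean speed from the GPS, and left/right stroke times.
df = pd.read_csv("strokes.csv")  # columns: swimmer, point, fb, speed, t_left, t_right

# Repeated-measures ANOVA: does stroke frequency differ across the three points?
print(AnovaRM(df, depvar="fb", subject="swimmer", within=["point"]).fit())

# Paired t-test between left and right arms at each course point (alpha = 0.05).
for point, group in df.groupby("point"):
    t, p = stats.ttest_rel(group["t_left"], group["t_right"])
    print(f"{point}: t = {t:.2f}, p = {p:.3f}")

# Pearson correlation between performance variables (here, speed vs. stroke frequency).
r, p = stats.pearsonr(df["speed"], df["fb"])
print(f"speed vs fb: r = {r:.2f}, p = {p:.3f}")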
Abstract:
Summary of the oral presentation at the 6th EOS Topical Meeting on Visual and Physiological Optics (EMVPO 2012), Dublin, 20-22 August 2012.
Abstract:
Oral presentation given at the 6th EOS Topical Meeting on Visual and Physiological Optics (EMVPO 2012), Dublin, 20-22 August 2012.