Abstract:
BACKGROUND IL28B genotype predicts response to treatment against hepatitis C virus (HCV) with pegylated interferon/ribavirin (PR) and influences the outcome of therapy that includes telaprevir (TVR). This study aimed to determine the influence of the favorable IL28B genotype on early viral kinetics during therapy with TVR/PR in HIV/HCV-coinfected patients. METHODS All HIV/HCV genotype 1-coinfected subjects who received TVR/PR for at least 4 weeks were included from populations prospectively followed in 22 centers throughout Germany, Switzerland and Spain. RESULTS Of the 129 subjects included, 38 (29.5%) presented with IL28B genotype CC and 94 (72.9%) were treatment-experienced. Ninety-six (73.8%) patients showed undetectable plasma HCV-RNA at treatment week (W) 4: 30 (78.9%) of the IL28B-CC carriers and 65 (71.4%) of the non-CC carriers (p=0.377). Among treatment-naïve patients, the proportions of undetectable HCV-RNA among IL28B-CC versus non-CC carriers were 8/9 (88.9%) versus 3/9 (33.3%, p=0.016) at W2 and 14/17 (82.4%) versus 11/18 (61.1%, p=0.164) at W4. The decrease of HCV-RNA at W2 and W4 was similar regardless of IL28B genotype. CONCLUSIONS IL28B genotype does not predict W4 response to TVR/PR in HIV/HCV-coinfected patients, regardless of their treatment history. However, there is evidence of an impact on response during the first weeks of therapy in treatment-naïve patients.
Abstract:
In Latin America the abortion rate is 32 per 1,000 women, and 95% of these abortions pose risks to the woman's life and health. In Western Europe, where voluntary termination of pregnancy is legal, the rate is only 12 cases per 1,000 women. Prohibition only encourages clandestine, unsafe abortion. Idea and realization: Pieter Van Eecke. Concept and production: Objeto Directo. Graphic design: Florencia Lastreto. Photography: Natalie Mikhaloff and Médicos del Mundo. Spanish voice-over: Jorge Varela. Camera: Pieter Van Eecke. Animations: Florencia Lastreto. Music (Creative Commons licence: Attribution - Non Commercial - Share Alike): Bonifrate - Estudio Rural Em R Major; Fabrizio Paternili - Profondo Blu; Robin Grey - Ninety Days Instrumental; Robin Grey - Every Walking Hour Instrumental. Médicos del Mundo France. General Coordinator, Uruguay: Carine Thibaut. Communication: Mauricio de los Santos. Acknowledgements: MYSU, Lilián Abracinskas, Morgane Aveline, Camila Giugliani, Jean Guerini, Sandrine Simon, Alain Forgeot, Aurore Voet.
Abstract:
Complexity has always been one of the most important issues in distributed computing. From the first clusters to grid and now cloud computing, dealing correctly and efficiently with system complexity is the key to taking the technology a step further. In this sense, global behavior modeling is an innovative methodology aimed at understanding grid behavior. The main objective of this methodology is to synthesize the grid's vast, heterogeneous nature into a simple but powerful behavior model, represented in the form of a single, abstract entity with a global state. Global behavior modeling has proved very useful in effectively managing grid complexity but, in many cases, deeper knowledge is needed. It produces a descriptive model that could be greatly improved if extended not only to explain behavior but also to predict it. In this paper we present a prediction methodology whose objective is to define the techniques needed to create global behavior prediction models for grid systems. This global behavior prediction can benefit grid management, especially in areas such as fault tolerance or job scheduling. The paper presents experimental results obtained in real scenarios in order to validate this approach.
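As a rough illustration of the idea, the Python sketch below builds a single global state vector from per-resource metrics and fits a simple first-order autoregressive predictor to forecast the next global state. All names and the choice of predictor are illustrative assumptions; the paper's actual modeling and prediction techniques are not reproduced here.

```python
import numpy as np

def global_state(resource_metrics):
    """Collapse per-resource metrics (e.g. CPU load, queue length) into a
    single global state vector, here simply the mean over all resources."""
    return np.mean(np.asarray(resource_metrics), axis=0)

def fit_ar1(states):
    """Fit a first-order autoregressive model s[t+1] ~ s[t] @ A by least squares."""
    X, Y = np.asarray(states[:-1]), np.asarray(states[1:])
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return A

def predict_next(A, current_state):
    """Forecast the next global state from the current one."""
    return np.asarray(current_state) @ A

# Example: global states observed at three successive monitoring intervals.
history = [global_state(m) for m in [
    [[0.2, 3], [0.4, 5]], [[0.3, 4], [0.5, 6]], [[0.5, 6], [0.6, 7]],
]]
A = fit_ar1(history)
print(predict_next(A, history[-1]))  # predicted global state at the next interval
```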
Abstract:
The manipulation and handling of an ever-increasing volume of data by current data-intensive applications require novel techniques for efficient data management. Despite recent advances in every aspect of data management (storage, access, querying, analysis, mining), future applications are expected to scale to even higher degrees, not only in terms of the volumes of data handled but also in terms of users and resources, often making use of multiple pre-existing, autonomous, distributed or heterogeneous resources.
Abstract:
With the arrival of Felipe V to the Spanish throne, a host of Italian and French architects and artists were called to Madrid by the new Bourbon dynasty to change the artistic taste of a country of foreign culture and customs. Of the works these architects left in Spain, we have focused on religious architecture, where this influence is most evident. We have analyzed two Madrid churches: San Ignacio and the Basílica Pontificia de San Miguel (formerly the church of Los Santos Justo y Pastor).
Abstract:
In this introductory chapter we put in context and briefly outline the work that we present in detail in the rest of the dissertation. We consider this work divided into two main parts. The first part is the Firenze Framework, a knowledge-level description framework rich enough to express the semantics required for describing both semantic Web services and semantic Grid services. We start by defining what the Semantic Grid is, its relation to the Semantic Web, and the possibility of their convergence, since both initiatives have become mainly service-oriented. We also introduce the main motivations for the creation of this framework: one is to provide a valid description framework that works at the knowledge level; the other is to provide a description framework that takes into account the characteristics of Grid services in order to describe them properly. The other part of the dissertation is devoted to Vega, an event-driven architecture that, by means of the proposed knowledge-level description framework, is able to achieve large-scale provisioning of knowledge-intensive services. In this introductory chapter we also portray the anatomy of a generic event-driven architecture and briefly enumerate its main characteristics, which are the reasons that make it our choice.
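To make the notion of a generic event-driven architecture concrete, here is a minimal Python sketch of its core pieces: an event bus to which producers publish events and on which consumers register handlers, with asynchronous dispatch. The class and topic names are invented for illustration and do not come from Vega.

```python
import time
from collections import defaultdict
from queue import Queue
from threading import Thread

class EventBus:
    """Minimal event bus: producers publish events to topics, consumers
    register handlers, and a background dispatcher delivers events asynchronously."""
    def __init__(self):
        self.handlers = defaultdict(list)   # topic -> list of handler callables
        self.events = Queue()               # pending (topic, payload) events
        Thread(target=self._dispatch, daemon=True).start()

    def subscribe(self, topic, handler):
        self.handlers[topic].append(handler)

    def publish(self, topic, payload):
        self.events.put((topic, payload))

    def _dispatch(self):
        while True:
            topic, payload = self.events.get()
            for handler in self.handlers[topic]:
                handler(payload)

# Example: a provisioning component reacts to service-request events.
bus = EventBus()
bus.subscribe("service.requested", lambda req: print("provisioning", req))
bus.publish("service.requested", {"service": "annotation", "user": "alice"})
time.sleep(0.1)  # give the dispatcher thread a moment to deliver the event
```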
Abstract:
The popularity of the MapReduce programming model has increased interest in the research community in its improvement. Among the possible directions, fault tolerance, and concretely failure detection, appears to be a crucial issue that has not yet reached a satisfactory level. Motivated by this, I decided to devote my main research during this period to a prototype system architecture for the MapReduce framework with a new failure detection service, comprising both an analytical (theoretical) part and an implementation part. I am confident that this work can lead the way to further contributions in failure detection for NoSQL application frameworks and cloud storage systems in general.
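As a toy illustration of what a failure detection service has to do, the following Python sketch implements a heartbeat-based detector with an adaptively smoothed timeout. The class name, smoothing constant and safety factor are invented assumptions; this is not the detector proposed in the work.

```python
import time

class HeartbeatFailureDetector:
    """Simple heartbeat-based failure detector: each worker is expected to send
    periodic heartbeats; a worker is suspected if its last heartbeat is older
    than a timeout derived from the observed inter-arrival times."""
    def __init__(self, safety_factor=3.0):
        self.last_seen = {}    # worker id -> timestamp of last heartbeat
        self.intervals = {}    # worker id -> smoothed heartbeat inter-arrival time
        self.safety_factor = safety_factor

    def heartbeat(self, worker, now=None):
        now = now if now is not None else time.time()
        if worker in self.last_seen:
            sample = now - self.last_seen[worker]
            prev = self.intervals.get(worker, sample)
            self.intervals[worker] = 0.8 * prev + 0.2 * sample  # exponential smoothing
        self.last_seen[worker] = now

    def suspected(self, worker, now=None):
        now = now if now is not None else time.time()
        if worker not in self.last_seen:
            return False  # never heard from: nothing to suspect yet
        timeout = self.safety_factor * self.intervals.get(worker, 1.0)
        return now - self.last_seen[worker] > timeout
```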
Abstract:
Data grid services have been used to deal with the increasing needs of applications in terms of data volume and throughput. The large scale, heterogeneity and dynamism of grid environments often make management and tuning of these data services very complex. Furthermore, current high-performance I/O approaches are characterized by their high complexity and specific features that usually require specialized administrator skills. Autonomic computing can help manage this complexity. The present paper describes an autonomic subsystem intended to provide self-management features aimed at efficiently reducing the I/O problem in a grid environment, thereby enhancing the quality of service (QoS) of data access and storage services in the grid. Our proposal takes into account that data produced in an I/O system is not usually immediately required. Therefore, performance improvements are related not only to current but also to any future I/O access, as the actual data access usually occurs later on. Nevertheless, the exact time of the next I/O operations is unknown. Thus, our approach proposes a long-term prediction designed to forecast the future workload of grid components. This enables the autonomic subsystem to determine the optimal data placement to improve both current and future I/O operations.
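To make the long-term prediction idea concrete, here is a small Python sketch under strong simplifying assumptions: each storage node's future I/O load is forecast by naive linear-trend extrapolation, and new data is placed on the node with the lowest predicted load. The function names and the forecasting method are illustrative, not the subsystem's actual technique.

```python
def forecast_load(history, horizon=10):
    """Naive long-term forecast: extrapolate the linear trend of the observed
    I/O load samples `horizon` steps into the future."""
    n = len(history)
    if n < 2:
        return history[-1] if history else 0.0
    slope = (history[-1] - history[0]) / (n - 1)
    return history[-1] + slope * horizon

def choose_placement(load_history_by_node, horizon=10):
    """Pick the storage node with the lowest predicted future I/O load."""
    return min(load_history_by_node,
               key=lambda node: forecast_load(load_history_by_node[node], horizon))

# Example: per-node I/O load samples (e.g. MB/s) gathered by the monitoring subsystem.
samples = {"node-a": [20, 25, 30, 40], "node-b": [50, 45, 42, 40]}
print(choose_placement(samples))  # node-b: its load is trending downwards
```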
Abstract:
Over the last decade, Grid computing paved the way for a new level of large-scale distributed systems. This infrastructure made it possible to securely and reliably take advantage of widely separated computational resources that are part of several different organizations. Resources can be incorporated into the Grid, building a theoretical virtual supercomputer. In time, cloud computing emerged as a new type of large-scale distributed system, inheriting and expanding the expertise and knowledge obtained so far. Some of the main characteristics of Grids naturally evolved into clouds, others were modified and adapted, and others were simply discarded or postponed. Regardless of these technical specifics, Grids and clouds together can be considered one of the most important advances in large-scale distributed computing of the past ten years; however, this step in distributed computing has come along with a completely new level of complexity. Grid and cloud management mechanisms play a key role, and correct analysis and understanding of the system behavior are needed. Large-scale distributed systems must be able to self-manage, incorporating autonomic features capable of controlling and optimizing all resources and services. Traditional distributed computing management mechanisms analyze each resource separately and adjust specific parameters of each one of them. When trying to adapt the same procedures to Grid and cloud computing, the vast complexity of these systems can make this task extremely complicated. But large-scale distributed system complexity may be simply a matter of perspective. It could be possible to understand the Grid or cloud behavior as a single entity, instead of as a set of resources. This abstraction could provide a different understanding of the system, describing large-scale behavior and global events that probably would not be detected by analyzing each resource separately. In this work we define a theoretical framework that combines both ideas, multiple resources and a single entity, to develop large-scale distributed systems management techniques aimed at system performance optimization, increased dependability and Quality of Service (QoS). The resulting synergy could be the key to addressing the most important difficulties of Grid and cloud management.
Abstract:
In just a few years cloud computing has become a very popular paradigm and a business success story, with storage being one of its key features. To achieve high data availability, cloud storage services rely on replication. In this context, one major challenge is data consistency. In contrast to traditional approaches that are mostly based on strong consistency, many cloud storage services opt for weaker consistency models in order to achieve better availability and performance. This comes at the cost of a high probability of stale data being read, as the replicas involved in the reads may not always have the most recent write. In this paper, we propose a novel approach, named Harmony, which adaptively tunes the consistency level at run-time according to the application requirements. The key idea behind Harmony is an intelligent estimation model of stale reads, which allows it to elastically scale up or down the number of replicas involved in read operations to maintain a low (possibly zero) tolerable fraction of stale reads. As a result, Harmony can meet the desired consistency of the applications while achieving good performance. We have implemented Harmony and performed extensive evaluations with the Cassandra cloud storage system on the Grid'5000 testbed and on Amazon EC2. The results show that Harmony can achieve good performance without exceeding the tolerated number of stale reads. For instance, in contrast to the static eventual consistency used in Cassandra, Harmony reduces the stale data being read by almost 80% while adding only minimal latency. Meanwhile, it improves the throughput of the system by 45% while maintaining the desired consistency requirements of the applications when compared to the strong consistency model in Cassandra.
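A minimal Python sketch of the adaptive-consistency idea follows: it estimates the probability of a stale read from the write rate and the replication propagation delay (assuming Poisson writes and independent replicas, deliberate simplifications) and picks the smallest number of read replicas that keeps that estimate within the application's tolerance. The formula and function names are assumptions for illustration, not Harmony's actual estimation model.

```python
import math

def stale_read_probability(write_rate, propagation_delay, read_replicas):
    """Estimate the probability that a read returns stale data: a replica is
    stale if a write arrived within the replication propagation window
    (Poisson writes), and a read is stale only if every contacted replica is
    stale (independence assumed -- a deliberate simplification)."""
    p_replica_stale = 1.0 - math.exp(-write_rate * propagation_delay)
    return p_replica_stale ** read_replicas

def choose_consistency_level(write_rate, propagation_delay, total_replicas,
                             tolerated_stale_fraction):
    """Return the smallest number of replicas to involve in reads so the
    estimated stale-read fraction stays within the application's tolerance."""
    for k in range(1, total_replicas + 1):
        if stale_read_probability(write_rate, propagation_delay, k) <= tolerated_stale_fraction:
            return k
    return total_replicas  # fall back to reading from all replicas

# Example: 20 writes/s, 5 ms propagation delay, 3 replicas, 1% tolerated staleness.
print(choose_consistency_level(20, 0.005, 3, 0.01))  # -> 2 replicas per read
```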
Abstract:
Alvar Aalto's 1951 trip to Spain, where he lectured first in Barcelona and then in Madrid, is well known. That visit had a great influence on our country at the time and, later, with delayed effect, during the sixties. Although we will naturally mention them, we do not intend to dwell on questions already addressed by other authors, such as his refusal to look at architecture that would distract him from his usual building work (the Sagrada Familia, El Escorial, the Plaza Mayor in Madrid or the Museo del Prado building), or his taste for the popular architecture he had occasion to see and draw on that trip. Rather, we try to find the reasons that led him to praise what appears to be the only building that interested him: the Facultad de Ciencias Físicas y Químicas of the Ciudad Universitaria de Madrid, the work of Miguel de los Santos.
Abstract:
D. Gaspar de Guzmán, III Conde-Duque de Olivares (Rome, 6/1/1587 - Toro, 22/7/1645), acquired the lordship of Loeches in 1633 and from then until his death built a monastic-palatial complex, a task continued by his wife Inés de Zúñiga until her own death in 1647. The author of its designs was the architect Alonso de Carbonel (1583-1660), a native of Albacete and responsible for other contemporary works in Madrid, such as the Palacio del Buen Retiro and the convent and church of Los Santos Justo y Pastor, also known as de las Maravillas. Loeches was Olivares's refuge and the place of his first exile after he lost the favour of King Felipe IV on 23 January 1643. Gregorio Marañón, in his book on Olivares, briefly mentions and analyses the ruined palace, having visited and photographed the site before the Civil War. In 2002 Professor Juan Luis Blanco Mozo defended a doctoral thesis on Alonso Carbonel in which he described the Loeches complex only superficially, since its cloistered character, its difficulty of access and its size limited his visit to the exterior. There are numerous studies and publications on the monastery-palaces of the Spanish Habsburgs, but few (except for the case of Lerma, studied by Luis Cervera Vera) have dealt in depth with the architectural emulation of this idea by the high nobility, high officials, secretaries or ministers. Their residences were frequently replicas of royal ones; they were proof and demonstration of their economic power as well as of their "political strength", their reputation and their "fame". They were part of a ruling class, inspirers and co-authors of the policy and ideological programme of their sovereigns, responsible for an administrative apparatus and a "clientelist estate", considered by many an invariant of the Spanish Habsburg monarchy.
Abstract:
A copy exists bound together with: Sermon en la Solemne fiesta que en honra de la canonización de S. Josef de Calasanz, fundador de las Escuelas Pias celebró la ... Parroquia de los Santos Juanes (NP08/35).
Abstract:
Sign.: A-C2, D1
Abstract:
The inherent complexity of modern cloud infrastructures has created the need for innovative monitoring approaches, as state-of-the-art solutions used for other large-scale environments do not address specific cloud features. Although cloud monitoring is nowadays an active research field, a comprehensive study covering all its aspects has not been presented yet. This paper provides a deep insight into cloud monitoring. It proposes a unified cloud monitoring taxonomy, on which it bases the definition of a layered cloud monitoring architecture. To illustrate this architecture, we have implemented GMonE, a general-purpose cloud monitoring tool that covers all aspects of cloud monitoring by specifically addressing the needs of modern cloud infrastructures. Furthermore, we have evaluated the performance, scalability and overhead of GMonE with the Yahoo Cloud Serving Benchmark (YCSB), using the OpenNebula cloud middleware on the Grid'5000 experimental testbed. The results of this evaluation demonstrate the benefits of our approach, surpassing the monitoring performance and capabilities of the alternatives present in state-of-the-art systems such as Amazon EC2 and OpenNebula.
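As a generic illustration of the kind of agent such a monitoring tool relies on (not GMonE's actual design, which the paper describes), the Python sketch below shows a plug-in-based monitoring agent: probes are callables returning metric values, sampled periodically and pushed to a publish function that would normally forward them to a central aggregator. All names are invented.

```python
import time
from threading import Thread

class MonitoringAgent:
    """Generic plug-in-based monitoring agent: probes are callables returning
    metric values; the agent samples them periodically and hands the samples
    to a publish function (e.g. a network call to a central aggregator)."""
    def __init__(self, publish, interval=5.0):
        self.probes = {}            # metric name -> probe callable
        self.publish = publish      # callable receiving one sample dict
        self.interval = interval    # sampling period in seconds

    def register_probe(self, name, probe):
        self.probes[name] = probe

    def run_once(self):
        sample = {name: probe() for name, probe in self.probes.items()}
        sample["timestamp"] = time.time()
        self.publish(sample)

    def run_forever(self):
        Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        while True:
            self.run_once()
            time.sleep(self.interval)

# Example: monitor a VM-level metric and print the samples locally.
agent = MonitoringAgent(publish=print, interval=1.0)
agent.register_probe("cpu_load", lambda: 0.42)  # placeholder probe
agent.run_once()
```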