864 results for theatre industry and academy
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement or even fully substitute public offerings. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from maturing, and the myriad of different options has induced a fear of lock-in in customers. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on studying idealized scenarios dissimilar from real-world situations, and the latter developing solutions without considering how they fit with common standards, or even without disseminating their results. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that focuses on the applications instead of just the hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views that serve to separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions that determines the operations that can be performed on the environment and is used to realize the cloud environment's scalability. I also describe a management engine that, from the environment's state and using the aforementioned set of actions, is tasked with resource placement. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) by an integer programming solver, and during the other (online) with a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former collects the necessary information from the environment, while the latter is modular in design and capable of interfacing with several technologies and offering several access interfaces.
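The management engine described above splits placement across two phases: an offline consolidation phase handled by an integer programming solver, and an online phase handled by a purpose-built heuristic. Purely as an illustration of the kind of online placement heuristic this implies (the Host and VMRequest classes and the first-fit-decreasing rule below are my own assumptions, not the thesis's actual algorithm), a minimal sketch could look like this:

```python
# Hypothetical sketch of an online VM-placement heuristic (first-fit decreasing).
# The data model and the greedy rule are illustrative assumptions, not the
# heuristic proposed in the thesis.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Host:
    name: str
    cpu_free: int                       # free CPU cores
    ram_free: int                       # free RAM in GiB
    vms: list = field(default_factory=list)

@dataclass
class VMRequest:
    name: str
    cpu: int
    ram: int

def place_online(requests: list[VMRequest], hosts: list[Host]) -> dict[str, Optional[str]]:
    """Greedy first-fit-decreasing placement: largest requests first, each
    assigned to the first host with enough spare capacity."""
    placement: dict[str, Optional[str]] = {}
    for req in sorted(requests, key=lambda r: (r.cpu, r.ram), reverse=True):
        target = next((h for h in hosts
                       if h.cpu_free >= req.cpu and h.ram_free >= req.ram), None)
        if target is None:
            placement[req.name] = None  # no capacity now; defer to the consolidation phase
            continue
        target.cpu_free -= req.cpu
        target.ram_free -= req.ram
        target.vms.append(req.name)
        placement[req.name] = target.name
    return placement

if __name__ == "__main__":
    hosts = [Host("h1", cpu_free=8, ram_free=32), Host("h2", cpu_free=4, ram_free=16)]
    reqs = [VMRequest("web", cpu=2, ram=4), VMRequest("db", cpu=4, ram=16), VMRequest("batch", cpu=6, ram=8)]
    print(place_online(reqs, hosts))
```

The consolidation phase would then periodically re-solve the same assignment as an integer program (for example, minimizing the number of active hosts subject to capacity constraints) and migrate virtual machines toward that optimum.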
A methodology to analyze, design and implement very fast and robust controls of Buck-type converters
Abstract:
Modern digital electronics present a challenge to designers of power systems. The increasingly high performance of microprocessors, FPGAs (Field Programmable Gate Arrays) and ASICs (Application-Specific Integrated Circuits) requires power supplies to comply with very demanding static and dynamic requirements.
Specifically, these power supplies are low-voltage, high-current DC-DC converters that need to be designed to exhibit low output voltage ripple and low output voltage deviation under high slew-rate load transients. Additionally, depending on the application, other requirements need to be met, such as providing the load with "Dynamic Voltage Scaling" (DVS), where the converter needs to change the output voltage as fast as possible without overshoot, or "Adaptive Voltage Positioning" (AVP), where the output voltage is slightly reduced as the output power increases. Of course, from the point of view of the industry, the figures of merit of these converters are cost, efficiency and size/weight. Ideally, the industry needs a converter that is cheaper, more efficient, smaller, and that can still meet the dynamic requirements of the application. In this context, several approaches to improve the figures of merit of these power supplies have been followed in industry and academia, such as improving the topology of the converter, improving the semiconductor technology and improving the control. Indeed, the control is a fundamental part of these applications, as a very fast control makes it easier for a given topology to comply with the strict dynamic requirements and, consequently, gives the designer a larger margin of freedom to improve the cost, efficiency and/or size of the power supply. This thesis investigates how to design and implement very fast controls for the Buck converter. It proves that sensing the output voltage is all that is needed to achieve an almost time-optimal response, and it proposes a unified design guideline for controls that only sense the output voltage. Then, in order to ensure robustness in very fast controls, a very accurate modeling and stability analysis of DC-DC converters is proposed that takes into account sensing networks and critical parasitic elements. Also, using this modeling approach, an optimization algorithm is proposed that takes into account component tolerances and distorted measurements. With this algorithm, state-of-the-art very fast analog controls are compared, and their capability to achieve a fast dynamic response is ranked depending on the output capacitor used. Additionally, a technique to improve the dynamic response of controllers is also proposed. All the proposals are corroborated by extensive simulations and experimental prototypes. Overall, this thesis serves as a methodology for engineers to design and implement fast and robust controls for Buck-type converters.
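For context, the plant these controls act on is the Buck power stage; a minimal averaged model (my own recap of standard textbook material, not taken from the thesis, and omitting the parasitic elements and sensing networks that the thesis explicitly models) is

\[
L\,\frac{di_L}{dt} = d\,V_{in} - v_o, \qquad C\,\frac{dv_o}{dt} = i_L - \frac{v_o}{R},
\]

where d is the duty cycle, V_in the input voltage, v_o the output voltage, i_L the inductor current, and L, C and R the filter inductance, output capacitance and load resistance. The corresponding control-to-output transfer function,

\[
G_{vd}(s) = \frac{\hat{v}_o(s)}{\hat{d}(s)} = \frac{V_{in}}{L C\, s^{2} + (L/R)\, s + 1},
\]

shows the lightly damped LC resonance that makes fast, output-voltage-only controls challenging to design and stabilize.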
Abstract:
Grazed pastures are the backbone of the Brazilian livestock industry, and grasses of the genus Brachiaria (syn. Urochloa) are some of the most used tropical forages in the country. Although the dependence on the forage resource is high, grazing management is often empirical and based on broad and non-specific guidelines. Mulato II brachiariagrass (Convert HD 364, Dow AgroSciences, São Paulo, Brazil) (B. brizantha × B. ruziziensis × B. decumbens), a new Brachiaria hybrid, was released as an option for a broad range of environmental conditions. There is no scientific information on specific management practices for Mulato II under continuous stocking in Brazil. The objectives of this research were to describe and explain variations in carbon assimilation, herbage accumulation (HA), plant-part accumulation, nutritive value, and grazing efficiency (GE) of Mulato II brachiariagrass as affected by canopy height and growth rate, the latter imposed by N fertilization rate, under continuous stocking. An experiment was carried out in Piracicaba, SP, Brazil, during two summer grazing seasons. The experimental design was a randomized complete block with a 3 × 2 factorial arrangement, corresponding to three steady-state canopy heights (10, 25 and 40 cm) maintained by mimicked continuous stocking and two growth rates (imposed as 50 and 250 kg N ha⁻¹ yr⁻¹), with three replications. There were no height × N rate interactions for most of the responses studied. The HA of Mulato II increased linearly (8640 to 13400 kg DM ha⁻¹ yr⁻¹), the in vitro digestible organic matter (IVDOM) decreased linearly (652 to 586 g kg⁻¹), and the GE decreased (65 to 44%) as canopy height increased. Thus, although GE and IVDOM were greatest at the 10-cm height, HA was 36% less for the 10- than for the 40-cm height. Leaf carbon assimilation was greater for the shortest canopy (10 cm), but canopy assimilation was less than in taller canopies, likely a result of lower leaf area index (LAI). The reductions in HA, plant-part accumulation, and LAI were not associated with other signs of stand deterioration. Leaf was the main plant part accumulated, at a rate that increased from 70 to 100 kg DM ha⁻¹ d⁻¹ as canopy height increased from 10 to 40 cm. Mulato II was less productive (7940 vs. 13380 kg ha⁻¹ yr⁻¹) and had lower IVDOM (581 vs. 652 g kg⁻¹) at the lower N rate. The increase in N rate affected plant growth, increasing carbon assimilation, LAI, rates of plant-part accumulation (leaf, stem, and dead material), and HA. The results indicate that the increase in the rate of dead material accumulation due to more N applied is a result of an overall increase in the accumulation rates of all plant parts. Taller canopies (25 or 40 cm) are advantageous for herbage accumulation of Mulato II, but nutritive value and GE were greater at 25 cm, suggesting that maintaining a ∼25-cm canopy height is optimal for continuously stocked Mulato II.
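The design described above (a randomized complete block with a 3 × 2 factorial of canopy height by N rate and three blocks) lends itself to a factorial analysis of variance. A minimal sketch, assuming synthetic placeholder data and invented column names rather than the study's actual measurements or analysis code:

```python
# Minimal sketch of a 3 x 2 factorial analysis in a randomized complete block
# design; the data below are synthetic placeholders, not the experiment's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = [
    {"height": h, "n_rate": n, "block": b,
     "ha": 9000 + 100 * h + 10 * n + rng.normal(0, 300)}  # fake herbage accumulation
    for h in (10, 25, 40)       # canopy heights, cm
    for n in (50, 250)          # N rates, kg/ha/yr
    for b in (1, 2, 3)          # blocks
]
df = pd.DataFrame(rows)

# Fixed effects for height, N rate, their interaction, and block
model = smf.ols("ha ~ C(height) * C(n_rate) + C(block)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```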
Abstract:
"The pr1.mary purpose of this thea1a waa to learn about and t .o produce a aeries of pa1nt1n~.a that aymbolized. tbe oil induatry 1n Lea. Count,-, New ltexic.o. The secondary purpose waa to learn more about tho oil industry, which is the big business in Le.a Countj, :New Mexico"
Abstract:
Poor hygienic practices and illness of restaurant employees are major contributors to the contamination of food and the occurrence of food-borne illness in the United States, costing the food industry and society billions of dollars each year. Risk factors associated with this problem include lack of proper handwashing; food handlers reporting to work sick; poor personal hygiene; and bare hand contact with ready-to-eat foods. However, traditional efforts to control these causes of food-borne illness by public health authorities have had limited impact, and have revealed the need for comprehensive and innovative programs that provide active managerial control over employee health and hygiene in restaurant establishments. Further, the introduction and eventual adoption by the food industry of such programs can be facilitated through the use of behavior-change theory. This Capstone Project develops a model program to assist restaurant owners and operators in exerting active control over health and hygiene in their establishments and provides theory-based recommendations for the introduction of the program to the food industry.
Abstract:
Here we analyze the first tourist nuclei that arose on the Spanish Mediterranean coast between World War II and the oil crisis (1945-75). Special attention is paid to the characteristics of these new settlements: the relation of their urban fabric with nature, original or artificial, and the lack of industry. We distinguish three types: cluster nuclei (La Manga and El Saler), three-dimensional urbanism (Playa de San Juan and Urbanova), and extreme typologies (Campoamor and Benidorm). With them, cities for vacations emerged, mainly for second-home use (vacation home/holiday home). The panorama after the current crisis is a linear chain of small urban settlements along the coast. Finally, we can see how these "secondary cities" without industry and specialized in leisure have developed to the present day until becoming new cities of services, doubling the existing ones; now they are "the other cities".
Abstract:
We develop a dynamic general-equilibrium framework in which growth is driven by skill-biased technology diffusion. The model incorporates leisure–labor decisions and human capital accumulation through education. We are able to reproduce the trends in income inequality and in the supplies of labor and skills observed in the United States between 1969 and 1996. The paper also explains why more individuals invest in human capital when the investment premium is falling, and why the skill premium rises even as the supply of skills increases.
Abstract:
The thousands of books and articles on Charles de Gaulle's policy toward European integration, whether written by historians, social scientists, or commentators, universally accord primary explanatory importance to the General's distinctive geopolitical ideology. In explaining his motivations, only secondary significance, if any at all, is attached to commercial considerations. This paper seeks to reverse this historiographical consensus by examining the four major decisions toward European integration during de Gaulle's presidency: the decisions to remain in the Common Market in 1958, to propose the Fouchet Plan in the early 1960s, to veto British accession to the EC, and to provoke the "empty chair" crisis in 1965-1966, resulting in the "Luxembourg Compromise." In each case, the overwhelming bulk of the primary evidence (speeches, memoirs, and government documents) suggests that de Gaulle's primary motivation was economic, not geopolitical or ideological. Like his predecessors and successors, de Gaulle sought to promote French industry and agriculture by establishing protected markets for their export products. This empirical finding has three broader implications: (1) For those interested in the European Union, it suggests that regional integration has been driven primarily by economic, not geopolitical, considerations, even in the "least likely" case. (2) For those interested in the role of ideas in foreign policy, it suggests that strong interest groups in a democracy limit the impact of a leader's geopolitical ideology, even where the executive has very broad institutional autonomy. De Gaulle was a democratic statesman first and an ideological visionary second. (3) For those who employ qualitative case-study methods, it suggests that even a broad, representative sample of secondary sources does not create a firm basis for causal inference. For political scientists, as for historians, there is in many cases no reliable alternative to primary-source research.
Abstract:
Recent Russian actions have unequivocally underlined that Russia does not play by the rules. This is a wake-up call that should alert not only the countries of the former Soviet Union, but also the EU as a whole. For the EU, this has one clear implication: it cannot continue to depend on an unreliable energy supplier, which is prone to use energy as a political tool. Luckily for the EU, summer is approaching and Europeans will need less Russian gas for heating. However, potential gas supply disruptions remind Europe of its energy vulnerabilities, and of the winters of 2006 and 2009, when Russia's decision to stop the flow of gas to Ukraine led to supply crises in a number of EU Member States. As the EU's heads of state and government gather in the European Council on 20 and 21 March, the developments in Ukraine and the possible illegal Russian annexation of Crimea will undoubtedly dominate the discussions. Securing energy supply will figure on the agenda, but energy should also be seen as a means to pressure Russia. It is important that the Member States use the occasion to commit to working together on energy security. If this is addressed in a holistic way, it can also support European industry and climate policy – the other issues on the Council agenda that run the risk of being forgotten.
Abstract:
Civil aviation in Europe is one major area where landmark changes have taken place since the late 1980s – the liberalization and deregulation of the sector by member states in three “packages” in the 1980s has transformed an economic sector historically characterized by heavy protectionism, collusion and strong state intervention. Today, the European Union’s (EU) aviation sector contributes 2.4% of European GDP and supports 5.1 million jobs. The Association of Southeast Asian Nations (ASEAN) has also eagerly taken steps to integrate its aviation markets as part of the ASEAN Economic Community (AEC) in 2015. This background brief chronicles the changes made in the aviation sector in Europe through regional integration and examines how these changes have affected policymaking in member states, the airline industry and consumers. The brief also examines ASEAN’s own efforts to integrate its aviation sector and, taking into account the EU’s strong interest in cooperating with ASEAN on transport and civil aviation policy, considers whether the changes in the EU are applicable in the ASEAN context.
Abstract:
Purpose – Bread is one of the most consumed foods in the world, and its main function is to provide nutrients and energy for the body. Thus, the purpose of this paper was to raise awareness about bread consumption habits and consumer preferences in the region of Viseu (centre of Portugal), assessing the extent to which preferences and consumption habits differ based on individual variables. Design/methodology/approach – The study was conducted by means of a questionnaire administered through direct interviews. The questionnaire included sections gathering information about demographics, consumption habits and preferences related to bread. The sample consisted of 500 consenting respondents. Findings – The results showed significant differences between genders regarding the type of bread eaten: women consumed less wheat bread (52 per cent against 62 per cent; p = 0.029) and less unsalted bread (0.3 per cent against 3 per cent; p = 0.023), but more whole bread (25 per cent against 11 per cent; p < 0.001) and more bread with cereal grains (23 per cent against 11 per cent; p < 0.001), revealing a trend toward a nutritionally more adequate choice. Accordingly, women placed more value on the composition of the bread when purchasing it (p < 0.001). A trend toward lower consumption of wheat bread was also observed among more educated groups (47 per cent of university graduates against 60 per cent of non-graduates; p = 0.004). Originality/value – This work is innovative because it is the first time the bread preferences and consumption habits of a sample of the Portuguese population have been assessed together. The results may be important both for understanding the nutritional role of bread in the Portuguese diet and for helping the industry and manufacturers better respond to consumers' buying preferences.
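The gender comparisons above are differences in proportions from a questionnaire sample. A minimal sketch of how such a difference can be tested (the counts are invented and the paper does not state which test it used; a chi-square test of independence is simply one common choice):

```python
# Illustrative only: hypothetical counts, not the paper's data.
from scipy.stats import chi2_contingency

# Rows: women, men; columns: eat wheat bread, do not eat wheat bread.
table = [[130, 120],   # roughly 52% of 250 women
         [155, 95]]    # roughly 62% of 250 men
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}, dof = {dof}")
```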
Abstract:
This research is a study of three satellite applications for ocean and coastal zone management (OCZM). Radar sensors used in bathymetric surveying are useful to the oil pipeline industry and to coastal navigation. Thermal and radar imagery have been used to indirectly detect the distribution of tuna fishery resources and, more recently, of other fisheries as well. The Global Positioning System (GPS) and data communications now permit fleet tracking, although the focus of this thesis is on the fishing fleet. The development of any fleet monitoring system can follow the same principle.
Abstract:
We have analyzed inorganic and organic carbon and determined the isotopic composition of both sedimentary organic carbon and inorganic carbon in carbonates contained in sediments recovered from Holes 434, 434A, 434B, 435, and 435A on the landward slope of the Japan Trench and from Hole 436 on its oceanic slope. Both inorganic and organic carbon were assayed at the P. P. Shirshov Institute of Oceanology, in the same sample, using the Knopp technique and measuring evolved CO2 gravimetrically. Each sample was analyzed twice in parallel. Measurements had an accuracy of ±0.05 per cent at a probability level of 0.95. Carbon isotopic analysis was carried out on an MI-1305 mass spectrometer at the I. M. Gubkin Institute of Petrochemical and Gas Industry, and the results are presented as δ13C values relative to the PDB standard. The procedure for preparing samples for organic carbon isotopic analysis involved (1) drying damp sediments at 60°C; (2) treating samples, while heating, with 10 N HCl to remove carbonate carbon; and (3) evaporating surplus HCl at 60°C. The organic matter was converted to CO2 by oxidation in an oxygen atmosphere. To prepare samples for inorganic carbon isotopic analysis, we decomposed the carbonates with orthophosphoric acid and purified the evolved gas. The δ13C measurements, including a full cycle of sample preparation, had an accuracy of ±0.5 per cent at a probability level of 0.95.
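For reference, the δ13C notation used above expresses the 13C/12C ratio of a sample relative to the PDB standard in parts per thousand; the standard definition (the general convention, not a formula specific to this report) is

\[
\delta^{13}\mathrm{C} = \left( \frac{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{sample}}}{\left(^{13}\mathrm{C}/^{12}\mathrm{C}\right)_{\mathrm{PDB}}} - 1 \right) \times 1000\ \text{‰}
\]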
Abstract:
Prepared by G. J. Pagliano and others.