925 results for resource allocation
Abstract:
Technocracy often holds out the promise of rational, disinterested decision-making. Yet states look to technocracy not just for expert inputs and calculated outcomes but to embed the exercise of power in many agendas, policies and programs. Thus, technocracy operates as an appendage of politically constructed structures and configurations of power, and highly placed technocrats cannot be 'mere' backroom experts who supply disinterested rational-technical solutions in economic planning, resource allocation and social distribution, which are inherently political. This paper traces the trajectories of technocracy in conditions of rapid social transformation, severe economic restructuring, or political crises - when the technocratic was unavoidably political.
Abstract:
The paper explores the effects of birth order and sibling sex composition on human capital investment in children in India, using the Indian Human Development Survey (IHDS). The endogeneity of fertility is addressed using instruments and by controlling for household fixed effects, and the family size effect is distinguished from the sibling sex composition effect. Previous literature has often failed to take this endogeneity into account and reports a negative birth order effect for girls in India. Once the endogeneity of fertility is addressed, there is no evidence of a negative birth order effect or a sibling sex composition effect for girls. The results show that boys are worse off in households with a higher proportion of boys, specifically when they have older brothers.
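A stylized fixed-effects specification of the kind the abstract alludes to (purely illustrative notation, not the paper's actual model) could be written as
\[
y_{ih} = \alpha + \sum_{k \ge 2} \beta_k \,\mathbf{1}[\mathrm{order}_{ih} = k] + \gamma\,\mathrm{PropBoys}_{ih} + \delta\,\mathrm{FamSize}_{h} + \mu_h + \varepsilon_{ih}
\]
where \(y_{ih}\) is a human capital investment outcome for child \(i\) in household \(h\), \(\mu_h\) is a household fixed effect absorbing household-level confounders, and \(\mathrm{FamSize}_{h}\) is instrumented to address the endogeneity of fertility, so that the birth-order coefficients \(\beta_k\) and the sibling sex composition coefficient \(\gamma\) are identified separately from the family size effect.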
Abstract:
Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication technologies can transfer data at high speed. Distributed systems emerged as systems whose parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302). Its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model with the HRTJ profile. It has been designed and implemented with the main requirements in mind, namely predictability and reliability in timing behavior and resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and timing behavior, and it provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing us to schedule the communications and to adjust memory usage at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block) and the network usage (real consumption compared against the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.
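The separation of resource allocation from execution described above can be pictured with a minimal Java sketch; the interfaces, class names and parameters below are illustrative assumptions, not the actual API of the thesis middleware or of DRTSJ:

// Hypothetical non-functional parameters attached to a remote reference.
final class RealTimeParams {
    final long deadlineMillis;  // end-to-end deadline for the invocation
    final int priority;         // priority of the server-side handler thread
    RealTimeParams(long deadlineMillis, int priority) {
        this.deadlineMillis = deadlineMillis;
        this.priority = priority;
    }
}

// Hypothetical remote reference: phase 1 reserves resources, phase 2 executes.
interface RealTimeRemoteRef<T> {
    // Phase 1: negotiate and pre-allocate buffers, connections and handler threads.
    void reserve(RealTimeParams params) throws Exception;
    // Phase 2: perform the remote invocation using only pre-allocated resources.
    T invoke(Object... args) throws Exception;
}

public class TwoPhaseInvocationSketch {
    public static void main(String[] args) throws Exception {
        RealTimeRemoteRef<Double> ref = lookup("rmi://fms-server/position");
        // Resource allocation happens once, off the time-critical path.
        ref.reserve(new RealTimeParams(20, 10));
        // Time-critical invocations reuse the reserved resources, keeping latency predictable.
        Double altitude = ref.invoke("currentAltitude");
        System.out.println("altitude = " + altitude);
    }

    // Stub lookup so the sketch is self-contained; a real middleware resolves remote references.
    static RealTimeRemoteRef<Double> lookup(String uri) {
        return new RealTimeRemoteRef<Double>() {
            public void reserve(RealTimeParams p) { /* pre-allocation would happen here */ }
            public Double invoke(Object... a) { return 10000.0; }
        };
    }
}

The point of the two phases is that anything that may allocate memory, open connections or block unpredictably happens in reserve(), so the worst-case response time of invoke() can be analyzed offline.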
Abstract:
A generic bio-inspired adaptive architecture for image compression, suitable for implementation in embedded systems, is presented. The architecture allows the system to be tuned during its calibration phase. An evolutionary algorithm is responsible for making the system evolve towards the required performance. A prototype has been implemented in a Xilinx Virtex-5 FPGA featuring an adaptive wavelet transform core directed at improving image compression for specific types of images. An Evolution Strategy has been chosen as the search algorithm, and its typical genetic operators have been adapted to allow for a hardware-friendly implementation. HW/SW partitioning issues are also considered after a high-level description of the algorithm is profiled, which validates the proposed resource allocation in the device fabric. To check the robustness of the system and its adaptation capabilities, different types of images have been selected as validation patterns. A direct application of such a system is its deployment in an environment unknown at design time, letting the calibration phase adjust the system parameters so that it performs efficient image compression. This prototype implementation may also serve as an accelerator for the automatic design of evolved transform coefficients, which are later synthesized and implemented in a non-adaptive system on the final implementation device, whether it is a HW- or SW-based computing device. The architecture has been built in a modular way so that it can be easily extended to adapt other types of image processing cores. Details on this pluggable-component point of view are also given in the paper.
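As a rough illustration of the search loop described above (not the paper's hardware implementation), a minimal (1+lambda)-style Evolution Strategy in plain Java might look like this; the fitness function is a placeholder standing in for "compression quality of the evolved transform coefficients":

import java.util.Arrays;
import java.util.Random;

public class EvolutionStrategySketch {
    static final Random RNG = new Random(42);

    // Placeholder fitness: lower is better. A real system would compress training images
    // with the candidate coefficients and measure distortion and bitrate.
    static double fitness(double[] coeffs) {
        double f = 0.0;
        for (double c : coeffs) f += (c - 0.5) * (c - 0.5);
        return f;
    }

    // Gaussian mutation with a fixed step size, which keeps the operator hardware-friendly.
    static double[] mutate(double[] parent, double sigma) {
        double[] child = parent.clone();
        for (int i = 0; i < child.length; i++) child[i] += sigma * RNG.nextGaussian();
        return child;
    }

    public static void main(String[] args) {
        int lambda = 8, generations = 100;
        double sigma = 0.1;
        double[] parent = new double[8];      // candidate transform coefficients
        double parentFit = fitness(parent);

        for (int g = 0; g < generations; g++) {
            for (int i = 0; i < lambda; i++) {
                double[] child = mutate(parent, sigma);
                double childFit = fitness(child);
                if (childFit <= parentFit) {  // elitist replacement of the single parent
                    parent = child;
                    parentFit = childFit;
                }
            }
        }
        System.out.println("best fitness = " + parentFit + " coeffs = " + Arrays.toString(parent));
    }
}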
Abstract:
Conventional programming techniques are not well suited to solving many highly combinatorial industrial problems, such as scheduling, decision making, resource allocation or planning. Constraint Programming (CP), an emerging software technology, offers an original approach that allows efficient and flexible solving of complex problems through the combined implementation of various constraint solvers and expert heuristics. Its applications are increasingly fielded in various industries.
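To make the contrast with conventional programming concrete, the toy Java solver below assigns tasks to machines by backtracking over capacity and separation constraints. It only illustrates the constraint-satisfaction style in miniature; real CP engines add constraint propagation and search heuristics on top of this scheme, and all data here are invented:

public class TinyAllocationSolver {
    static final int TASKS = 4, MACHINES = 3;
    // capacity[m] = units available on machine m; demand[t] = units task t needs.
    static final int[] CAPACITY = {5, 4, 3};
    static final int[] DEMAND = {3, 2, 2, 1};

    static int[] assignment = new int[TASKS]; // assignment[t] = chosen machine, or -1
    static int[] used = new int[MACHINES];

    static boolean solve(int t) {
        if (t == TASKS) return true;                        // all tasks placed
        for (int m = 0; m < MACHINES; m++) {
            boolean capacityOk = used[m] + DEMAND[t] <= CAPACITY[m];
            boolean separationOk = (t == 0) || assignment[t - 1] != m; // example side constraint
            if (capacityOk && separationOk) {
                assignment[t] = m; used[m] += DEMAND[t];    // tentative decision
                if (solve(t + 1)) return true;
                assignment[t] = -1; used[m] -= DEMAND[t];   // backtrack
            }
        }
        return false;
    }

    public static void main(String[] args) {
        java.util.Arrays.fill(assignment, -1);
        System.out.println(solve(0) ? java.util.Arrays.toString(assignment) : "no feasible allocation");
    }
}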
Abstract:
Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day and 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational needs of next-generation applications, together with the increasing demand for higher resources in traditional applications, has facilitated the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid increase of the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all the electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38GW. A further rise of 17% to 43GW was estimated in 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD Thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable curve. This work develops energy models and uses the knowledge about the energy demand of the workload to be executed and the computational and cooling resources available at the data center to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but the global energy consumption of the application. The main contributors to the energy consumption in a data center are the computing power drawn by the IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation.
Because of the cubic relation of fan power to fan speed, solutions based on over-provisioning cold air into the server usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies also have an important impact on the leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed. As room temperature rises, the efficiency of data room cooling units improves. However, as room temperature increases, CPU temperature rises and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns due to both the workload allocation and the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed up by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at this scope can have a dramatic impact on the energy consumption of lower abstraction levels, i.e. the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy in the overall system. The third main contribution is the energy optimization of the overall application by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD Thesis makes contributions on leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
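The leakage-cooling tradeoff described above can be illustrated numerically. In the sketch below every constant is invented for illustration (it is not the thesis model), but it reproduces the qualitative behavior: fan power grows cubically with speed, leakage grows roughly exponentially with temperature, and total power is minimized at an intermediate operating point:

public class LeakageCoolingTradeoff {
    static double fanPower(double rpm)         { return 2.0e-11 * rpm * rpm * rpm; }        // W, cubic in speed
    static double cpuTemperature(double rpm)   { return 90.0 - 0.004 * rpm; }               // crude linear thermal proxy, degrees C
    static double leakagePower(double tempC)   { return 5.0 * Math.exp(0.02 * tempC); }     // W, exponential in temperature
    static final double DYNAMIC_POWER = 80.0;                                               // W, workload-dependent, held fixed here

    public static void main(String[] args) {
        double bestSpeed = 0, bestTotal = Double.MAX_VALUE;
        // Sweep fan speed and pick the setting that minimizes total server power.
        for (double rpm = 2000; rpm <= 9000; rpm += 100) {
            double total = DYNAMIC_POWER + fanPower(rpm) + leakagePower(cpuTemperature(rpm));
            if (total < bestTotal) { bestTotal = total; bestSpeed = rpm; }
        }
        System.out.printf("minimum total power %.1f W at %.0f rpm%n", bestTotal, bestSpeed);
    }
}

With these made-up coefficients the minimum falls at a mid-range fan speed: spinning the fans faster wastes cubic fan power, while spinning them slower lets temperature, and hence leakage, climb.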
Abstract:
This paper proposes an economic instrument designed to assess the competitive nature of the sugar industry in Romania. The first part of the paper presents the theoretical background underlying the Herfindahl-Hirschman Index (HHI) and its calculation methodology. It then reports the results of a first application of this index to a total of 10 plants in the sugar industry, and the robustness of these results is discussed. We believe the HHI is a proactive tool that may prove useful to the competition authority in its continuous monitoring of various industries in the economy and in the institution's internal decision-making on resource allocation (Peacock and Prisecaru, 2013). The starting point of our research is free competition in the European market, with competitors much stronger than the Romanian plants and producing at prices lower than the domestic ones. In our study we examine whether production is becoming concentrated around the strongest factories in Romania, a concentration accompanied by the collapse of those that could not withstand the market. We track market concentration, for competition-policy purposes, using the HHI index, in order to evaluate the impact on existing trade, the number and size of competitors, the protection of existing sales structures, the avoidance of disruptions in the competitive environment, etc.
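For reference, the HHI is simply the sum of the squared market shares of all firms in the market; with shares expressed in percentage points it ranges up to 10,000. A minimal sketch with invented shares for ten hypothetical plants (not the paper's data):

public class HhiSketch {
    public static void main(String[] args) {
        double[] sharesPercent = {22, 18, 15, 12, 10, 8, 6, 4, 3, 2}; // must sum to ~100
        double hhi = 0.0;
        for (double s : sharesPercent) hhi += s * s; // HHI = sum of squared shares
        // Thresholds around 1,500 and 2,500 points are commonly used by competition
        // authorities to flag moderate and high concentration, respectively.
        System.out.println("HHI = " + hhi);
    }
}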
Abstract:
Acknowledgments This study was funded by the Research Council of Norway (POLARPROG grant 216051; SFF-III grant 223257/ F50) and Svalbard Environmental Protection Fund (SMF grant 13/74). We thank Mathilde Le Moullec for helping with the fieldwork and the Norwegian Meteorological Institute for access to weather data.
Abstract:
A thorough understanding of the dynamics of port competition is particularly important given the current context of the sector, which is steering the concession of new ports and terminals in Brazil in light of the New Ports Law, Lei Nº 12.815 of 2013. Assessing the real impacts of increased port capacity in each region will be essential so that, on the one hand, the public authorities can direct the effective allocation of resources without harming the operation of existing complexes; and so that private players, in turn, can understand the impacts of possible new ventures on their own operations and outline commercial strategies compatible with the new competitive scenario. Based on an extensive literature review and on the application of techniques to specific cases, this work details the competitive dynamics between container terminals and critically evaluates six methods used to identify the existence of competition: correlation of market shares, comparison of occupancy rates, overlap of shipping calls, comparison of inland logistics costs, representativeness of the contestable hinterland, and existence of market power over the hinterland. Of the six methods analyzed, two yield conclusive answers to the question, although their application demands a large volume of information; one is assertive under normal conditions of the geographic distribution of cargo; two provide necessary but not sufficient conditions for identifying competition; and one should be applied with reservations, since it can lead to mistaken conclusions.
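As an illustration of the first of the six methods listed above (market-share correlation), the sketch below computes the Pearson correlation between two terminals' quarterly market shares. The series are invented; a strongly negative correlation would be read as one indication that the terminals compete for the same cargo:

public class MarketShareCorrelation {
    // Standard Pearson correlation coefficient between two equal-length series.
    static double pearson(double[] x, double[] y) {
        int n = x.length;
        double mx = 0, my = 0;
        for (int i = 0; i < n; i++) { mx += x[i]; my += y[i]; }
        mx /= n; my /= n;
        double sxy = 0, sxx = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            sxy += (x[i] - mx) * (y[i] - my);
            sxx += (x[i] - mx) * (x[i] - mx);
            syy += (y[i] - my) * (y[i] - my);
        }
        return sxy / Math.sqrt(sxx * syy);
    }

    public static void main(String[] args) {
        double[] terminalA = {0.42, 0.40, 0.37, 0.35, 0.33, 0.31}; // share of regional container volume
        double[] terminalB = {0.28, 0.30, 0.33, 0.34, 0.36, 0.38};
        System.out.printf("market-share correlation = %.2f%n", pearson(terminalA, terminalB));
    }
}

The sketch only shows the mechanics of one indicator; which of the six methods is conclusive, merely necessary, or unreliable is the subject of the thesis itself.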
Abstract:
Judicial control of the public policies needed to give effect to constitutional social rights to state provision is a highly controversial topic in Brazilian adjudication. Some defend unrestricted intervention in the task of compelling the public administration, at any cost, to implement the public policies within its competence. Our work, however, argues that intervention by the Judiciary in the control of public policies is possible, according to the constitutional command contained in Article 3 of the Constitution, but with limits, in order to avoid violating the principle of the separation of powers. To set out our view on the subject, we start from the definition and nature of fundamental social rights and their place in the Brazilian constitutional order. We analyze the main functions of fundamental rights, focusing on the provision function, or the right to provision in the strict sense. In this regard we discuss the questions surrounding the realization of fundamental social rights, from their legal and social efficacy to the aspects concerning their concrete implementation. We emphasize that the realization of "derived" fundamental social rights requires prior legislative shaping, whereas "original" rights could be implemented immediately, without neglecting the need for further legislative activity allocating public resources through the budget law; it is stressed that the principle of the "existential minimum" should be taken as one of the criteria for this allocation. We analyze the administration's recurrent defense for not giving effect to social rights to provision, the so-called "reserve of the possible" argument, noting that, although relevant, this argument is not absolute. In any case, judicial action in the control of public policies cannot escape attention to the principles of reasonableness and proportionality.
Abstract:
The European market for asset-backed securities (ABS) has all but closed for business since the start of the economic and financial crisis. ABS (see Box 1) were in fact the first financial assets hit at the onset of the crisis in 2008. The subprime mortgage meltdown caused a deterioration in the quality of collateral in the ABS market in the United States, which in turn dried up overall liquidity because ABS AAA notes were popular collateral for inter-bank lending. The lack of demand for these products, together with the Great Recession in 2009, had a considerable negative impact on the European ABS market. The post-crisis regulatory environment has further undermined the market. The practice of slicing and dicing of loans into ABS packages was blamed for starting and spreading the crisis through the global financial system. Regulation in the post-crisis context has thus been relatively unfavourable to these types of instruments, with heightened capital requirements now necessary for the issuance of new ABS products. And yet policymakers have recently underlined the need to revitalise the ABS market as a tool to improve credit market conditions in the euro area and to enhance transmission of monetary policy. In particular, the European Central Bank and the Bank of England have jointly emphasised that: “a market for prudently designed ABS has the potential to improve the efficiency of resource allocation in the economy and to allow for better risk sharing... by transforming relatively illiquid assets into more liquid securities. These can then be sold to investors thereby allowing originators to obtain funding and, potentially, transfer part of the underlying risk, while investors in such securities can diversify their portfolios... . This can lead to lower costs of capital, higher economic growth and a broader distribution of risk” (ECB and Bank of England, 2014a). In addition, consideration has started to be given to the extent to which ABS products could become the target of explicit monetary policy operations, a line of action proposed by Claeys et al (2014). The ECB has officially announced the start of preparatory work related to possible outright purchases of selected ABS. In this paper we discuss how a revamped market for corporate loans securitised via ABS products, and how the use of ABS as a monetary policy instrument, can indeed play a role in revitalising Europe’s credit market. However, before using this instrument a number of issues should be addressed. First, the European ABS market has significantly contracted since the crisis. Hence it needs to be revamped through appropriate regulation if securitisation is to play a role in improving the efficiency of resource allocation in the economy. Second, even assuming that this market can expand again, the European ABS market is heterogeneous: lending criteria are different in different countries and banking institutions, and the rating methodologies to assess the quality of the borrowers have to take these differences into account. One further element of differentiation is default law, which is specific to national jurisdictions in the euro area. Therefore, the pool of loans will not only be different in terms of the macro risks related to each country of origination (which is a ‘positive’ idiosyncratic risk, because it enables a portfolio manager to differentiate), but also in terms of the normative side, in case of default.
The latter introduces uncertainties and inefficiencies in the ABS market that could create arbitrage opportunities. It is also unclear to what extent a direct purchase of these securities by the ECB might have an impact on the credit market. This will depend on, for example, the type of securities targeted in terms of the underlying assets that would be considered as eligible for inclusion (such as loans to small and medium-sized companies, car loans, leases, residential and commercial mortgages). The timing of a possible move by the ECB is also an issue; immediate action would take place in the context of relatively limited market volumes, while if the ECB waits, it might have access to a larger market, provided steps are taken in the next few months to revamp the market. We start by discussing the first of these issues – the size of the EU ABS market. We estimate how much this market could be worth if some specific measures are implemented. We then discuss the different options available to the ECB should they decide to intervene in the EU ABS market. We include a preliminary list of regulatory steps that could be taken to homogenise asset-backed securities in the euro area. We conclude with our recommended course of action.
Abstract:
Experimental ocean acidification leads to a shift in resource allocation and to an increased [HCO3-] within the perivisceral coelomic fluid (PCF) in the Baltic green sea urchin Strongylocentrotus droebachiensis. We investigated putative mechanisms of this pH compensation reaction by evaluating epithelial barrier function and the magnitude of skeleton (stereom) dissolution. In addition, we measured ossicle growth and skeletal stability. Ussing chamber measurements revealed that the intestine formed a barrier for HCO3- and was selective for cation diffusion. In contrast, the peritoneal epithelium was leaky and only formed a barrier for macromolecules. The ossicles of 6 week high CO2-acclimatised sea urchins revealed minor carbonate dissolution, reduced growth but unchanged stability. On the other hand, spines dissolved more severely and were more fragile following acclimatisation to high CO2. Our results indicate that epithelia lining the PCF space contribute to its acid-base regulation. The intestine prevents HCO3- diffusion and thus buffer leakage. In contrast, the leaky peritoneal epithelium allows buffer generation via carbonate dissolution from the surrounding skeletal ossicles. Long-term extracellular acid-base balance must be mediated by active processes, as sea urchins can maintain relatively high extracellular [HCO3-]. The intestinal epithelia are good candidate tissues for this active net import of HCO3- into the PCF. Spines appear to be more vulnerable to ocean acidification which might significantly impact resistance to predation pressure and thus influence fitness of this keystone species.
Abstract:
In total, ca. 7000 zooplanktonic species have been described for the World Ocean. This figure represents less than 4% of the total number of known marine organisms. Of the 7000 zooplanktonic species world-wide, some 60% are present in the South Atlantic; about one third of the latter have been recorded in its Subantarctic waters, and ca. 20% south of the Polar Front. When compared with those of benthic animals, these figures indicate that proportions of the overall inventories that are present in the cold waters are almost two times higher among the zooplankton. In agreement with this pattern, the proportions of Antarctic endemics in the benthos are very significantly higher than those in the plankton. For the water-column dwelling animals, the Polar Front boundary is more important than the Tropical-Subtropical limit, but almost equivalent to the Subtropical-Transitional limit, and weaker in biogeographic terms than the Transitional-Subantarctic boundary. Some of the implications of these dissimilarities, both for ecological theory and for resource allocation strategies, are discussed.
Abstract:
Federal Highway Administration, Washington, D.C.