948 results for Multicast Application Level
Abstract:
Chronic patellar tendinopathy is a common pathology in the sporting population. To date, there is no agreed-upon protocol as the treatment of choice. Eccentric exercises have been used with satisfactory outcomes (3). The purpose of this trial was to compare the effects of two eccentric exercise protocols.
Abstract:
Climate Change, Water Scarcity in Agriculture and the Country-Level Economic Impacts: A Multimarket Analysis. Agriculture could be one of the most vulnerable economic sectors to the impacts of climate change in the coming decades. Considering the critical role that water plays in agricultural production, any shock to water availability will have great implications for agricultural production, land allocation, and agricultural prices. In this paper, an agricultural multimarket model is developed to analyze climate change impacts in developing countries, accounting for the uncertainty associated with those impacts. The model has a structure flexible enough to represent local conditions, resource availability, and market conditions. The results suggest different economic consequences of climate change depending on the specific activity, with many distributional effects across regions.
Abstract:
A non-local gradient-based damage formulation within a geometrically non-linear setting is presented. The hyperelastic constitutive response at the local material point level is governed by a strain energy function that is additively composed of an isotropic neo-Hookean matrix and an anisotropic fibre-reinforced contribution based on the model proposed by T. Gasser, R. Ogden, and G. Holzapfel.
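For reference, a sketch of the standard additive split used in the Gasser-Ogden-Holzapfel (GOH) model; the notation follows the 2006 GOH paper and is an assumption here, not necessarily the authors' exact formulation:

\[
\Psi = \Psi_{\mathrm{iso}} + \Psi_{\mathrm{aniso}}, \qquad
\Psi_{\mathrm{iso}} = \frac{\mu}{2}\left(\bar{I}_1 - 3\right),
\]
\[
\Psi_{\mathrm{aniso}} = \frac{k_1}{2k_2} \sum_{i=4,6} \left[\exp\!\left(k_2 \bar{E}_i^{\,2}\right) - 1\right],
\qquad
\bar{E}_i = \kappa\left(\bar{I}_1 - 3\right) + (1 - 3\kappa)\left(\bar{I}_i - 1\right),
\]

where \(\mu\) is the shear modulus of the neo-Hookean matrix, \(k_1, k_2\) are fibre stiffness parameters, \(\kappa \in [0, 1/3]\) is the fibre dispersion parameter, and \(\bar{I}_1, \bar{I}_4, \bar{I}_6\) are isochoric invariants of the right Cauchy-Green tensor.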
Abstract:
High intrinsic carrier concentration (n-type).
• Efforts to reduce this effect:
  • Homoepitaxy (1)
  • Non-polar orientations
• Similar samples exhibit residual doping as low as ~10^14 cm^-3 (2).
The path to p-type doping:
• Many dopants have been proposed; N is a promising candidate.
• Simple N_O is a deep level; complex levels have shallower energies.
• N-related levels have been observed near the VB by many groups, with energies between 130 meV and 160 meV from the VBM.
Abstract:
Adjusting N fertilizer application to crop requirements is a key issue in improving fertilizer efficiency, reducing unnecessary input costs to farmers and the environmental impact of N. Among the multiple soil and crop tests developed, optical sensors that detect crop N nutritional status may have a large potential to adjust N fertilizer recommendations (Samborski et al. 2009). Optical readings are rapid to take and non-destructive, and they can be efficiently processed and combined to obtain indices or indicators of crop status. However, other physiological stress conditions may interfere with the readings, and identifying the best indicators of crop nutritional status is not always an easy task. Comparison of different instruments and technologies might help to identify the strengths and weaknesses of applying optical sensors to N fertilizer recommendation. The aim of this study was to evaluate the potential of various ground-level optical sensors and narrow-band indices obtained from airborne hyperspectral images as tools for maize N fertilizer recommendations. Specific objectives were i) to determine which indices could detect differences in maize plants treated with different N fertilizer rates, and ii) to evaluate their ability to distinguish N-responsive from non-responsive sites.
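As an illustration of how such narrow-band indices are computed from reflectance data, a minimal sketch follows with two widely used examples (NDVI and a red-edge chlorophyll index); the wavelengths, function names and reflectance values are illustrative assumptions, not the specific sensors or indices evaluated in this study:

```python
# Minimal sketch: narrow-band vegetation indices from per-band reflectances.
# Band choices (red ~670 nm, red edge ~705/750 nm, NIR ~800 nm) are assumptions.

def ndvi(r_nir: float, r_red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - red) / (NIR + red)."""
    return (r_nir - r_red) / (r_nir + r_red)

def red_edge_ci(r_750: float, r_705: float) -> float:
    """Red-edge chlorophyll index: reflectance ratio minus one."""
    return r_750 / r_705 - 1.0

# Example with made-up reflectances for a well-fertilized maize canopy:
print(ndvi(r_nir=0.45, r_red=0.05))         # ~0.80, dense green canopy
print(red_edge_ci(r_750=0.42, r_705=0.18))  # higher value -> more chlorophyll
```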
Abstract:
Urban mobility in Europe is always a responsibility of the municipalities which propose measures to reduce CO2 emissions in terms of mobility aimed at reducing individual private transport (car). The European Commission's Action Plan on Urban Mobility calls for an increase in the take-up of Sustainable Urban Mobility Plans in Europe. SUMPs aim to create a sustainable urban transport system. Europe has got some long term initiatives and has been using some evaluation procedures, many of them through European projects. Nevertheless, the weak point with the SUMPs in Spain, has been the lack of concern about the evaluation and the effectiveness of the measures implemented in a SUMP. For this reason, it is difficult to know exactly whether or not the SUMPs have positively influenced in the modal split of the cities, and its contribution to reduce CO2 levels. The case of the City of Burgos is a very illustrative example as it developed a CiViTAS project during the years 2005-2009, with a total investment of 6M?. The results have been considered as ?very successful? even at European level. The modal split has changed considerably for better, The cost-effectiveness ratio of the SUMP in the city can be measured with the CO2 ton saved, specifically 36 ? per CO2 ton saved, which is fully satisfactory and in line with calculations from other European researchers. Additionally, the authors propose a single formula to measure the effectiveness of the activities developed under the umbrella of a SUMP.
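As a back-of-the-envelope check of the quoted ratio, assuming the full €6M investment is the cost basis (an assumption made here for illustration, not a figure stated by the authors):

\[
\text{cost-effectiveness} = \frac{\text{total investment}}{\text{CO}_2 \text{ saved}}
\;\Rightarrow\;
\text{CO}_2 \text{ saved} \approx \frac{6{,}000{,}000\ \text{€}}{36\ \text{€}/\mathrm{t}} \approx 1.7 \times 10^{5}\ \mathrm{t}.
\]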
Abstract:
As embedded systems evolve, problems inherent to the technology become important limitations. In less than ten years, chips will exceed the maximum allowed power consumption, affecting performance: even though the resources available per chip keep increasing, operating frequency has stalled. Besides, as the level of integration increases, it becomes difficult to keep defect density under control, so new fault-tolerant techniques are required. In this demo work, a new dynamically adaptable virtual architecture (ARTICo3), allowing dynamic and context-aware use of resources, is implemented on a high-performance wireless sensor node (HiReCookie) to run an image processing application.
Abstract:
The cloud computing paradigm has risen in popularity within industry and academia. Public cloud infrastructures are enabling new business models and helping to reduce costs. However, the desire to host a company's data and services on premises, and the need to abide by data protection laws, make private cloud infrastructures desirable, either to complement public offerings or to fully substitute them. Unfortunately, a lack of standardization has prevented private infrastructure management solutions from being developed to a sufficient level, and the myriad of different options has induced in customers a fear of technology lock-in. One of the causes of this problem is the misalignment between academic research and industry offerings, with the former focusing on idealized scenarios dissimilar from real-world situations, and the latter developing solutions without taking care of how they fit with common standards, or even without disseminating their results. With the aim of solving this problem, I propose a modular management system for private cloud infrastructures that is focused on applications instead of just hardware resources. This management system follows the autonomic computing paradigm and is designed around a simple information model developed to be compatible with common standards. This model splits the environment into two views that separate the concerns of the stakeholders while at the same time enabling traceability between the physical environment and the virtual machines deployed onto it. In it, cloud applications are classified into three broad types (Services, Big Data Jobs and Instance Reservations), so that the management system can take advantage of each type's features. The information model is paired with a set of atomic, reversible and independent management actions, which determine the operations that can be performed over the environment and are used to realize the cloud environment's scalability. I also describe a management engine tasked with resource placement, driven by the environment's state and the aforementioned set of actions. It is divided into two tiers: the Application Managers layer, concerned only with applications; and the Infrastructure Manager layer, responsible for the actual physical resources. This management engine follows a lifecycle with two phases, to better model the behavior of a real infrastructure. The placement problem is tackled during one phase (consolidation) with an integer programming solver, and during the other (online) with a custom heuristic. Tests have demonstrated that this combined approach is superior to other strategies. Finally, the management system is paired with monitoring and actuator architectures: the former collects the necessary information from the environment; the latter is modular in design, capable of interfacing with several technologies, and offers several access modes.
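To make the two-phase placement idea more concrete, here is a minimal sketch of the kind of greedy heuristic an online phase could use (first-fit decreasing over free capacity); the class names, resource dimensions and logic are illustrative assumptions, not the thesis's actual heuristic:

```python
# Sketch of an online VM-placement heuristic (first-fit decreasing).
# Illustrative only: not the thesis's algorithm; all names are assumptions.
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    free_cpu: int
    free_ram: int                      # GiB
    vms: list = field(default_factory=list)

def place(vms, hosts):
    """Assign each VM to the first host with enough free capacity,
    trying the largest VMs first to reduce fragmentation."""
    placement = {}
    for vm_name, cpu, ram in sorted(vms, key=lambda v: (v[1], v[2]), reverse=True):
        for h in hosts:
            if h.free_cpu >= cpu and h.free_ram >= ram:
                h.free_cpu -= cpu
                h.free_ram -= ram
                h.vms.append(vm_name)
                placement[vm_name] = h.name
                break
        else:
            placement[vm_name] = None  # would trigger scale-out or consolidation
    return placement

hosts = [Host("node1", free_cpu=16, free_ram=64), Host("node2", free_cpu=8, free_ram=32)]
vms = [("web1", 4, 8), ("db1", 8, 32), ("batch1", 6, 16)]
print(place(vms, hosts))   # {'db1': 'node1', 'batch1': 'node1', 'web1': 'node2'}
```

The consolidation phase would instead solve the same assignment globally, e.g. as an integer program minimizing the number of active hosts, which is what makes the combined approach stronger than either strategy alone.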
Abstract:
Many distributed applications require a reliable multicast service, including distributed databases, distributed operating systems, distributed interactive simulation systems, and applications for the distribution of software, publications or news. Although the application domain of such distributed systems was originally confined to a single subnetwork (for example, a Local Area Network), it later became necessary to extend their applicability to internetworks. The traditional approach to the reliable multicast problem in internetworks has been based mainly on the following two points: (1) providing many service guarantees in one and the same protocol (for example, reliability, atomicity and ordering), some of them in different degrees, without taking into account that many multicast applications that require reliability do not need other guarantees; and (2) extending solutions adopted in the unicast environment to the multicast environment without considering their distinctive characteristics. Hence, attempts to solve the multicast reliability problem have relied on end-to-end protocols (transport protocols) and on error recovery schemes that are centralized (retransmissions are made from a single point, normally the source) and global (the requested packets are retransmitted to the whole group). In general, these approaches have resulted in protocols that are inefficient in execution time, have scalability problems, do not make optimum use of network resources, and are not suitable for delay-sensitive applications. In this thesis, the reliable multicast problem is investigated in internetworks operating in datagram mode, and a novel way of approaching the problem is presented: it is better to solve the multicast reliability problem at the network level and to separate reliability from other service guarantees, which can be supplied by a higher-level protocol or by the application itself. Following this new approach, a reliable multicast protocol operating at the network level (called RMNP) has been designed. The most representative characteristics of the RMNP are as follows: (1) it follows a sender-oriented approach, which allows a very high degree of reliability; (2) it uses an error recovery scheme that is distributed (retransmissions are made from certain intermediate routers that are always closer to the members than the source itself) and of restricted scope (the reach of the retransmissions is confined to a given number of members), a scheme that makes it possible to optimize the mean distribution delay and to reduce the overhead introduced by retransmissions; (3) certain routers incorporate aggregation and filtering functions for control packets, which avoid implosion problems and reduce the traffic flowing towards the source. In order to evaluate the behavior of the protocol, simulation tests have been performed; the main conclusions are that the RMNP scales correctly with group size, makes optimum use of network resources, and is suitable for delay-sensitive applications.
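To make the implosion-avoidance idea concrete, a toy sketch of control-packet aggregation at an intermediate router follows: duplicate repair requests for the same packet from downstream branches are collapsed into a single request upstream, and the retransmission is forwarded only to the branches that asked. This is an illustrative reconstruction of the mechanism described above, not RMNP's actual packet formats or logic:

```python
# Toy sketch of NACK aggregation/filtering at an intermediate router,
# in the spirit of RMNP's implosion avoidance (all names are illustrative).

class AggregatingRouter:
    def __init__(self):
        self.pending = {}   # seq -> set of downstream branches awaiting repair

    def on_nack(self, seq: int, branch: str) -> bool:
        """Record a downstream NACK; return True only the first time a given
        sequence number is requested, so only one NACK travels upstream."""
        first = seq not in self.pending
        self.pending.setdefault(seq, set()).add(branch)
        return first

    def on_repair(self, seq: int):
        """A retransmission arrived: forward it only to branches that asked,
        keeping the scope of the repair restricted."""
        return self.pending.pop(seq, set())

r = AggregatingRouter()
print(r.on_nack(42, "branch-A"))   # True  -> one NACK forwarded upstream
print(r.on_nack(42, "branch-B"))   # False -> suppressed (aggregated)
print(r.on_repair(42))             # {'branch-A', 'branch-B'} -> restricted scope
```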
Abstract:
The aim of this study was to establish the relationships between faecal fat concentration and gaseous emissions from pig slurry. Five diets were designed to meet essential nutrient requirements: a control and four experimental feeds including two levels (35 or 70 g/kg) of calcium soap of fatty acid distillate (CSP) and 0 or 200 g/kg of orange pulp (OP), combined in a 2 × 2 factorial structure. Thirty growing pigs (six per treatment) were used to measure dry matter (DM) and N balance, coefficients of total tract apparent digestibility (CTTAD) of nutrients, faecal and urine composition, and potential emissions of ammonia (NH3) and methane (CH4). Increasing the dietary CSP level decreased DM, ether extract (EE) and crude protein (CP) CTTAD (by 4.0, 11.1 and 3.5%, respectively, P < 0.05), but did not influence those of the fibrous constituents. It also led to a decrease (from 475 to 412 g/kg DM, P < 0.001) in the faecal concentration of neutral detergent fibre (aNDFom) and to an increase (from 138 to 204 g/kg, P < 0.001) of EE in faecal DM, which was related to greater CH4 emissions, both per gram of organic matter (P = 0.021) and on a daily basis (P < 0.001). The level of CSP did not affect N content in faeces or urine, but increased daily DM (P < 0.001) and N (P = 0.031) faecal excretion, with no effect on urine N excretion. This resulted in a lower (P = 0.036) potential NH3 emission per kg of slurry. Addition of OP decreased the CTTAD of EE (by 7.9%, P = 0.044), but increased (P < 0.05) that of all the fibrous fractions. As a consequence, faecal EE content increased (from 165 to 177 g/kg DM, P = 0.012) and aNDFom decreased greatly (from 483 to 404 g/kg DM, P < 0.001), which overall resulted in a lack of effect of OP on potential CH4 emission. Inclusion of OP in the diet also led to a significant decrease in CP CTTAD (by 6.85%, P < 0.001) and to an increase in faecal CP concentration (from 174 to 226 g/kg DM, P < 0.001), with no significant influence on urine N content. These effects resulted in higher faecal N losses, especially those of undigested dietary origin, without significant effects on potential NH3 emission. No significant interactions between CSP and OP supplementation were observed for the gaseous emissions measured.
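For clarity, the digestibility coefficient used throughout is conventionally computed as follows (a reconstruction of the standard definition, not a formula quoted from the paper):

\[
\mathrm{CTTAD} = \frac{\text{nutrient intake} - \text{faecal nutrient output}}{\text{nutrient intake}}
\]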
Abstract:
Will renewable energy sources one day supply all of the world's energy needs? Some argue yes, while others say no. However, in some regions of the world, electricity production from renewable energy sources is already at a promising stage of development at which its generation costs compete with those of conventional electricity sources, i.e., grid parity. This achievement has been underpinned by increases in technology efficiency, reductions in production costs and, above all, years of policy interventions providing financial support. The diffusion of solar photovoltaic (PV) systems in Germany is an important frontrunner case in point. Germany is not only the top country worldwide in terms of installed PV capacity but also one of the pioneer countries where grid parity has recently been achieved. However, there might be a cloud on the horizon. The diffusion rate has started to decline in many regions. In addition, local solar firms, which are known to be important drivers of diffusion, have started to face difficulties in running their businesses. These developments raise some important questions: Is this a temporary decline in diffusion? Will adopters continue to install PV systems? What about the business models of the local solar firms? Based on the case of PV systems in Germany, through a multi-level analysis and two complementary literature reviews, this PhD dissertation extends the debate by providing a wealth of empirical detail in an area where contextual knowledge is limited. The first analysis takes the adopter perspective, exploring the "micro" level and the social process underlying the adoption of PV systems. The second is a firm-level perspective, exploring the business models of firms and their driving role in the diffusion of PV systems. The third is a regional perspective, exploring the "meso" level, i.e., the social process underlying the adoption of PV systems and its modeling techniques. The results include implications for both scholars and policymakers, not only about renewable energy innovations at grid parity, but also, in an inductive manner, about policy-driven environmental innovations that achieve cost competitiveness.
Abstract:
We have analyzed the influence of the actual height of Bolund island above water level on different full-scale statistics of the velocity field over the peninsula. Our analysis is focused on the database of 10-minute statistics provided by Risø-DTU for the Bolund Blind Experiment. We have considered 10-minute periods with near-neutral atmospheric conditions, mean wind speed values in the interval [5, 20] m/s, and westerly wind directions. As expected, statistics such as speed-up, normalized increase of turbulent kinetic energy and probability of recirculating flow show a large dependence on the emerged height of the island for the locations close to the escarpment. For the published ensemble mean values of speed-up and normalized increase of turbulent kinetic energy at these locations, we propose that some amount of the uncertainty could be explained as a deterministic dependence of the flow field statistics on the actual height of Bolund island above sea level.
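The two statistics discussed are commonly defined relative to an undisturbed upstream reference, along the lines of the conventions used in the Bolund literature (a sketch of the standard definitions; the exact normalization used in the paper may differ):

\[
\Delta S = \frac{U - U_{\mathrm{ref}}}{U_{\mathrm{ref}}}, \qquad
\Delta k = \frac{k - k_{\mathrm{ref}}}{U_{\mathrm{ref}}^{2}},
\]

where \(U\) is the local mean wind speed and \(k\) the turbulent kinetic energy.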
Abstract:
Over the last decade, mobile telephony has evolved at an extraordinary speed, giving us access to functionality characteristic of PCs but with the advantage of full mobility. With the advent of Long Term Evolution (LTE), commonly known as 4G, a system has been developed that notably improves performance, providing high speed and efficiency to the already massively used smartphones. Thanks to this exponential increase in available bandwidth, users today are no longer content with just browsing web pages; they show a growing interest in exploiting multimedia resources to the fullest, giving rise to services such as video streaming. This work arises from the LTExtreme project, which focused on the analysis and optimization of multicast/unicast multimedia streaming services over LTE, and extends that analysis to live video multicasting. The project is based on implementing the architecture proposed by 3GPP for this service, regarded as an efficient solution that combines the FLUTE (File Delivery over Unidirectional Transport) multicast transport protocol with DASH (Dynamic Adaptive Streaming over HTTP). The architecture has been implemented by creating and configuring a laboratory testbed with the virtualization tool Virtual Networks over linuX (VNX): a simplified LTE mobile network together with a content server and several mobile clients, making it possible to simulate a live video broadcast, analyze the results obtained, and assess the perceived quality of service. Specifically, we analyze the problems associated with the use cases considered, covering both the broadcast of a single video and that of an infinite-duration stream, resembling the broadcast of the TV programming of a given channel. Finally, we put forward ideas arising from the results of these studies that may have a future and be applicable to the real world.
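As a rough illustration of the one-way, FLUTE-style delivery loop described above, the sketch below pushes DASH segments as datagrams to a multicast group. The addresses, header layout and per-segment packetization are illustrative assumptions, not the 3GPP encapsulation or the lab set-up used in this work:

```python
# Rough sketch: pushing DASH segments over UDP multicast, FLUTE-style
# (unidirectional, no feedback channel). Addresses and sizes are assumptions.
import socket, time

MCAST_GROUP, MCAST_PORT, MTU = "239.1.1.1", 5000, 1400

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 8)

def send_segment(seg_id: int, payload: bytes):
    """Split one DASH segment into MTU-sized datagrams with a tiny header
    (segment id, chunk index) so receivers can reassemble."""
    chunks = [payload[i:i + MTU] for i in range(0, len(payload), MTU)]
    for idx, chunk in enumerate(chunks):
        header = seg_id.to_bytes(4, "big") + idx.to_bytes(4, "big")
        sock.sendto(header + chunk, (MCAST_GROUP, MCAST_PORT))

# Live loop: emit one (dummy) 2-second segment per period, like a TV channel.
for seg_id in range(3):
    send_segment(seg_id, b"\x00" * 100_000)  # stand-in for an encoded segment
    time.sleep(2.0)
```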
Abstract:
We present a set of new volume scaling relationships specific to Svalbard glaciers, derived from a sample of 60 volume–area pairs. Glacier volumes are computed from ground-penetrating radar (GPR)-retrieved ice thickness measurements, which have been compiled from different sources for this study. The most precise scaling models, in terms of lowest cross-validation errors, are obtained using a multivariate approach where, in addition to glacier area, glacier length and elevation range are also used as predictors. Using this multivariate scaling approach, together with the Randolph Glacier Inventory V3.2 for Svalbard and Jan Mayen, we obtain a regional volume estimate of 6700 ± 835 km3, or 17 ± 2 mm of sea-level equivalent (SLE). This result lies in the mid- to low range of recently published estimates, which show values as varied as 13 and 24 mm SLE. We assess the sensitivity of the scaling exponents to glacier characteristics such as size, aspect ratio and average slope, and find that the volume of steep-slope and cirque-type glaciers is not very sensitive to changes in glacier area.
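A minimal sketch of how such a multivariate scaling law can be calibrated: fit log V as a linear function of log A, log L and log R by least squares, giving V = c·A^a·L^b·R^d. The variable names and synthetic data below are assumptions for illustration, not the paper's sample of 60 glaciers:

```python
# Sketch: multivariate volume scaling V = c * A**a * L**b * R**d,
# calibrated by ordinary least squares in log-log space.
import numpy as np

# Synthetic stand-ins: area (km^2), length (km), elevation range (km), volume (km^3)
A = np.array([1.0, 5.0, 20.0, 80.0, 300.0])
L = np.array([1.2, 3.0, 7.0, 15.0, 30.0])
R = np.array([0.4, 0.6, 0.9, 1.2, 1.5])
V = 0.03 * A**1.3 * L**0.4 * R**0.2          # fake "observed" volumes

X = np.column_stack([np.ones_like(A), np.log(A), np.log(L), np.log(R)])
coef, *_ = np.linalg.lstsq(X, np.log(V), rcond=None)
log_c, a, b, d = coef
print(f"c={np.exp(log_c):.3f}, a={a:.2f}, b={b:.2f}, d={d:.2f}")
# A regional estimate then sums c * A^a * L^b * R^d over a glacier inventory.
```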