21 results for management of design and scope definition

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

The intense activity in the construction sector during the last decade has generated huge volumes of construction and demolition (C&D) waste. On average, Europe has generated around 890 million tonnes of C&D waste per year. Although the sector has now entered a phase of decline, owing to the change in the economic cycle, we must not forget the problems caused by such waste, or rather by its management, which is still far from achieving the target set by the Waste Framework Directive: 70% of C&D waste (excluding soil and stones not containing dangerous substances) should be recycled in EU countries by 2020. In fact, only 50% of the C&D waste generated in the EU is recycled, and 40% of that corresponds to the recycling of soil and stones not containing dangerous substances. Aware of this situation, European countries are implementing national policies as well as measures to prevent avoidable waste and to promote recycling and recovery. This article gives an overview of the amount of C&D waste generated in European countries, the share of this waste that is being recycled, and the measures that European countries have applied to address the situation.
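As a back-of-envelope check of the figures quoted above (assuming "40% of it" refers to 40% of the recycled tonnage), the implied annual quantities are:

```python
# Back-of-envelope check of the recycling figures quoted above.
# Assumption: "40% of it" means 40% of the recycled fraction.

generated = 890e6               # tonnes of C&D waste generated per year in Europe
recycled_share = 0.50           # share of generated C&D waste that is recycled
soil_share_of_recycled = 0.40   # share of recycled tonnage that is soil and stones

recycled = generated * recycled_share
soil_and_stones = recycled * soil_share_of_recycled
other_recycled = recycled - soil_and_stones

print(f"Recycled per year:          {recycled / 1e6:.0f} Mt")
print(f"  of which soil and stones: {soil_and_stones / 1e6:.0f} Mt")
print(f"  other recycled C&D waste: {other_recycled / 1e6:.0f} Mt")
print(f"Gap to the 70% target:      {(0.70 - recycled_share) * 100:.0f} percentage points")
```

On these assumptions, only about 267 Mt per year of recycled material is C&D waste other than soil and stones, which makes the distance to the 70% target larger than the headline 50% figure suggests.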

Relevance:

100.00%

Publisher:

Abstract:

The construction industry, one of the most important in the development of a country, generates unavoidable impacts on the environment. Social demand for greater respect for the environment is strong and widespread, so the construction industry needs to reduce the impact it produces. Proper waste management is not enough; a further step in environmental management is needed, introducing new measures for prevention at source, such as good practices that promote recycling. Following the amendment of the legal framework applicable to construction and demolition waste (C&D waste), important developments have been incorporated into European and international law, aiming to promote a culture of reuse and recycling. This change of mindset, progressively taking place in society, allows C&D waste to be considered no longer as unusable refuse but as reusable material. The main objective of the work presented in this paper is to enhance C&D waste management systems through the development of preventive measures during the construction process. These measures concern all the agents intervening in the construction process, since only the personal involvement of all of them can ensure efficient management of the C&D waste generated. Finally, a model based on preventive measures achieves organizational cohesion between the different stages of the construction process and promotes the conservation of raw materials through reuse and waste minimization, all in order to achieve a C&D waste management system whose primary goal is zero waste generation.

Relevance:

100.00%

Publisher:

Abstract:

This poster raises the issue of a research work oriented to the storage, retrieval, representation and analysis of dynamic GI, taking into account the semantic, the temporal and the spatiotemporal components. We intend to define a set of methods, rules and restrictions for the adequate integration of these components into the primary elements of the GI: theme, location, time [1]. We intend to establish and incorporate three new structures (layers) into the core of data storage by using mark-up languages: a semantic-temporal structure, a geosemantic structure, and an incremental spatiotemporal structure. The ultimate objective is the modelling and representation of the dynamic nature of geographic features, establishing mechanisms to store geometries enriched with a temporal structure (regardless of space) and a set of semantic descriptors detailing and clarifying the nature of the represented features and their temporality. Thus, data would be provided with the capability of pinpointing and expressing their own basic and temporal characteristics, enabling them to interact with each other according to their context and the time and meaning relationships that may eventually be established.
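The layered mark-up storage described above might look, in spirit, like the following sketch. All element and attribute names here are invented for illustration; the poster does not specify a concrete schema:

```python
# Illustrative sketch only: the element and attribute names are invented,
# as the abstract does not define a concrete schema. It shows the idea of
# a geometry stored independently of time, enriched with a
# semantic-temporal layer of descriptors.
import xml.etree.ElementTree as ET

feature = ET.Element("feature", id="river-42", theme="hydrography")

# Geometry stored regardless of time, as the abstract suggests.
geom = ET.SubElement(feature, "geometry", type="polyline")
geom.text = "40.41,-3.70 40.45,-3.68"

# Semantic-temporal layer: descriptors qualifying the feature's nature
# and its validity in time.
sem = ET.SubElement(feature, "semanticTemporal")
ET.SubElement(sem, "descriptor", name="state", validFrom="1990-01-01").text = "navigable"
ET.SubElement(sem, "descriptor", name="state", validFrom="2005-06-01").text = "dry"

print(ET.tostring(feature, encoding="unicode"))
```

Under such a scheme, the same geometry can carry several time-stamped semantic descriptors, so the feature's history can be queried without duplicating its spatial representation.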


Relevance:

100.00%

Publisher:

Abstract:

Crop simulation models allow analyzing various tillage-rotation combinations and exploring management scenarios. The DSSAT model was tested under rainfed conditions in a 16-year field experiment in semiarid central Spain, evaluating the effect of tillage system and winter cereal-based rotations on crop yield and soil quality. The CERES and CROPGRO models were used to simulate crop growth and yield, while the DSSAT CENTURY model was used in the soil organic carbon (SOC) and soil nitrogen (SN) simulations. Both field observations and CERES-Barley simulations showed that barley grain yield was lower for continuous cereal (BB) than for vetch (VB) and fallow (FB) rotations under both tillage systems. The model predicted higher nitrogen availability in conventional tillage (CT) than in no tillage (NT), leading to a higher yield under CT. SOC and SN in the top soil layer were higher in NT than in CT and decreased with depth in both simulated and observed values. The best combinations for the dryland conditions studied were CT-VB and CT-FB, but CT presented lower SN and SOC content than NT. The beneficial effect of NT on SOC and SN under semiarid Mediterranean conditions can thus be identified both by field observations and by crop model simulations.

The simulation of the water balance in cropping systems is a useful tool to study how water can be used efficiently. Comparing the DSSAT soil water balance, a simpler "tipping bucket" approach, with the more mechanistic WAVE model, which integrates the Richards equation, is a powerful way to assess model performance. The soil parameters were calibrated using the Simulated Annealing (SA) global optimization method. A continuous weighing lysimeter in a bare fallow provided the observed values of drainage and evapotranspiration (ET), while soil water content (SW) was supplied by capacitance sensors. Both models performed well after optimizing the soil parameters with SA, simulating the soil water balance components for the calibration period. For the validation period, the optimized models predicted soil water content and soil evaporation well over time. However, drainage was predicted better by WAVE than by DSSAT, which presented larger errors in the cumulative values; this could be due to the mechanistic nature of WAVE as opposed to the more functional nature of DSSAT. The good results from WAVE indicate that, after calibration, it could be used as a benchmark for other models during periods when no drainage field measurements are available.

The performance of DSSAT-CENTURY when simulating SOC and N depends strongly on the initialization process. Initializing the SOC pools from measurements of apparent soil N mineralization (Napmin) was proposed as an alternative method (Met.2) and compared with the initialization method of Basso et al. (2011) (Met.1), applying both to a 4-year field experiment in an irrigated area of central Spain. Nmin and Napmin were overestimated by Met.1, since the stable pool obtained (SOC3) in the upper soil layers was lower than with Met.2. Simulated N leaching was similar for both methods, with good results in the fallow and barley treatments. Met.1 underestimated topsoil SOC when compared with a 12-year observed series. Crop growth and yield were properly simulated by both methods, but N in shoots and grain was overestimated by Met.1. Results varied significantly with the initial SOC pools, highlighting the importance of the initialization procedure. Met.2 offers an alternative for initializing the CENTURY model, enhancing the simulation of soil N processes.

The continuous emergence of new varieties of modern maize hybrids limits the application of crop simulation models, since these new hybrids must be calibrated in the field before they are suitable for model use. Developing relationships based on cycle duration would simplify the calibration requirements, facilitating the rapid incorporation of new cultivars into DSSAT. Six maize hybrids (FAO 300 through FAO 700) were grown in a 2-year field experiment in a semiarid irrigated area of central Spain. Genetic coefficients were obtained sequentially, starting with the phenological development parameters (P1, P2, P5 and PHINT), followed by the crop growth parameters (G2 and G3), until the simulated outputs were in good agreement with the field phenological observations. After calibration, simulated parameters matched observed parameters well, with low RMSE in most cases. The calibrated P1 and P5 increased with the duration of the cycle: P1 was a linear function of the thermal time (TT) from emergence to silking, and P5 was linearly related to the TT from silking to maturity. There were no significant differences in PHINT between hybrids from FAO 500 to FAO 700, as they had a similar leaf number. Since the phenological coefficients were directly related to cycle duration, it would be possible to develop ranges and correlations allowing such coefficients to be estimated from the cycle classification.
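The "tipping bucket" water balance contrasted with WAVE above can be sketched in a few lines. This is a minimal one-layer illustration of the idea; the parameter values are arbitrary, not the calibrated ones from the experiment:

```python
# Minimal one-layer "tipping bucket" soil water balance, in the spirit of
# the DSSAT approach described above. Parameter values are illustrative.

FIELD_CAPACITY = 120.0   # mm of water the layer holds before it drains
WILTING_POINT = 40.0     # mm below which evaporation stops

def step(sw, rain, et_demand):
    """Advance soil water content sw (mm) by one day."""
    sw += rain
    drainage = max(0.0, sw - FIELD_CAPACITY)            # bucket "tips" above capacity
    sw -= drainage
    et = min(et_demand, max(0.0, sw - WILTING_POINT))   # ET limited by available water
    sw -= et
    return sw, drainage, et

sw = 100.0
for rain, et_demand in [(0, 4), (35, 3), (0, 5)]:       # three illustrative days
    sw, drainage, et = step(sw, rain, et_demand)
    print(f"SW={sw:6.1f} mm  drainage={drainage:4.1f} mm  ET={et:3.1f} mm")
```

The contrast with WAVE is visible even in this toy version: water above field capacity drains instantly rather than moving through the profile according to the Richards equation, which is one reason cumulative drainage is the variable where the two models diverge most.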

Relevance:

100.00%

Publisher:

Abstract:

Future air traffic management (ATM) will require a paradigm shift from today's mainly tactical ATM to trajectory-based operations (TBOs). An increase in the level of automation will also relieve humans (air traffic control officers, flight crew, etc.) from many of the tasks they perform today. Humans will still be central in this future ATM, as decision-makers and managers. These two improvements, TBOs and increased automation, are expected to provide the increase in ATM performance needed to cope with the expected growth in air transport demand. Under TBOs, trajectories are negotiated between the airspace user (an airline, pilot, or operator) and the air navigation service provider (ANSP) through a collaborative decision making (CDM) process. This negotiation requires a suitable method for sharing aircraft trajectories. Sharing a whole trajectory would require a large amount of bandwidth, and the shared trajectory might become invalid if the weather forecast changed. Instead, a description of the trajectory, decoupled from the weather conditions, could be shared, so that the actual trajectory could be computed from it. This trajectory description should be easy to process by a computer program, as some of the CDM processes will be automated, but also easy to understand for the human operator who will be supervising the process and making decisions. This thesis presents a series of formal languages that can be used for this purpose.

These languages provide the means to describe aircraft trajectories during all phases of flight, from push-back to arrival at the gate. They can also describe trajectories of both manned and unmanned aircraft, including fixed-wing and some rotary-wing aircraft (quadrotors). Some of these languages are tightly interrelated and organized in a language hierarchy. One of the key languages in this hierarchy, the aircraft intent description language (AIDL), had already been developed prior to this thesis. AIDL was derived from the equations of motion of fixed-wing aircraft and can provide an unambiguous description of fixed-wing aircraft trajectories. A variant of this language, the quadrotor AIDL (QR-AIDL), is developed in this thesis to describe quadrotor trajectories with the same level of detail. The intent composite description language (ICDL) is then built on top of these two languages, providing more flexibility to describe some parts of the trajectory while leaving others unspecified. The ICDL is used to provide generic descriptions of common aircraft manoeuvres, which can be particularized and combined to form complex descriptions of a flight. Another language, the flight intent description language (FIDL), is built on top of the ICDL. The FIDL specifies high-level requirements on trajectories, including constraints and objectives, but can use features of the ICDL to provide arbitrary levels of detail in different parts of the flight. The ICDL and FIDL have been developed in collaboration with Boeing Research & Technology Europe (BR&TE).

In addition, the mission intent description language (MIDL) has been developed to describe missions involving multiple aircraft. This language is based on the FIDL and keeps all its expressive power, while providing new semantics for describing mission tasks, mission objectives, and constraints involving several aircraft. In ATM, the movement of aircraft on the airport surface also has to be monitored and managed. Another formal language, the surface movement description language (SMDL), has been designed for this purpose. The SMDL does not belong to the language hierarchy described above; it is based on the clearances used in airport surface operations, and it also provides means to express uncertainty and mutability in different parts of the trajectory.

Finally, this thesis explores the application of these languages to trajectory prediction and mission planning. Both applications rely on the concept of a trajectory language processing engine (TLPE): an ATM function whose main input and output are expressed in any of the languages of the hierarchy described in this thesis. A modular trajectory predictor is defined as a combination of multiple TLPEs, each performing a small subtask. Special attention is given to the TLPE that builds the horizontal, vertical, and configuration profiles of the trajectory; in particular, a novel method for the generation of the vertical profile is presented. The process of planning a mission can also be seen as a TLPE whose main input is expressed in the MIDL and whose output consists of a number of trajectory descriptions, one for each aircraft available in the mission, expressed in the FIDL. A mixed integer linear programming (MILP) formulation is provided for the problem of assigning mission tasks to the available aircraft. In addition, since finding optimal paths between locations is a key problem in mission planning, a novel path-finding algorithm is presented; it can compute near-shortest paths avoiding all obstacles in an urban environment in very short times. The formal languages described in this thesis can serve as a standard specification for sharing trajectory information among the different actors in ATM. In combination, they can describe trajectories with the level of detail necessary for each application, and they can be used to increase automation by exploiting this information with decision support tools (DSTs). Their application to some basic functions of DSTs, such as trajectory prediction, is analyzed.
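The obstacle-avoiding path-finding problem mentioned above can be illustrated with a generic breadth-first search on a grid. This is only a simplified stand-in for the thesis's algorithm, whose details are not given here:

```python
# Generic grid-based breadth-first search: a much simplified stand-in for
# the path-finding algorithm of the thesis, illustrating the
# obstacle-avoidance problem it addresses. Cells marked '#' are obstacles.
from collections import deque

def shortest_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}            # doubles as the visited set
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:            # reconstruct the path by backtracking
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                     # goal unreachable

grid = ["....",
        ".##.",
        "...."]
print(shortest_path(grid, (0, 0), (2, 3)))
```

A real urban environment would use a continuous or visibility-graph representation rather than a uniform grid, which is part of what makes the algorithm in the thesis faster than this naive sketch.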

Relevance:

100.00%

Publisher:

Abstract:

La tesis doctoral se centra en la posibilidad de entender que la práctica de arquitectura puede encontrar en las prácticas comunicativas un apoyo instrumental, que sobrepasa cualquier simplificación clásica del uso de los medios como una mera aplicación superficial, post-producida o sencillamente promocional. A partir de esta premisa se exponen casos del último cuarto del siglo XX y se detecta que amenazas como el riesgo de la banalización, la posible saturación de la imagen pública o la previsible asociación incorrecta con otros individuos en presentaciones grupales o por temáticas, han podido influir en un crecimiento notable de la adquisición de control, por parte de los arquitectos, en sus oportunidades mediáticas. Esto es, como si la arquitectura hubiera empezado a superar y optimizar algo inevitable, que las fórmulas expositivas y las publicaciones, o más bien del exponer(se) y publicar(se), son herramientas disponibles para activar algún tipo de gestión intelectual de la comunicación e información circulante sobre si misma. Esta práctica de “autoedición” se analiza en un periodo concreto de la trayectoria de OMA -Office for Metropolitan Architecture-, estudio considerado pionero en el uso eficiente, oportunista y personalizado de los medios. Así, la segunda parte de la tesis se ocupa del análisis de su conocida monografía S,M,L,XL (1995), un volumen que contó con gran participación por parte de sus protagonistas durante la edición, y de cuyo proceso de producción apenas se había investigado. Esta publicación señaló un punto de inflexión en su género alterando todo formato y restricciones anteriores, y se ha convertido en un volumen emblemático para la disciplina que ninguna réplica posterior ha podido superar. 
Aquí se presenta a su vez como el desencadenante de la construcción de un “gran evento” que concluye en la transformación de la identidad de OMA en 10 años, paradójicamente entre el nacimiento de la Fundación Groszstadt y el arranque de la actividad de AMO, dos entidades paralelas clave anexas a OMA. Este planteamiento deviene de cómo la investigación desvela que S,M,L,XL es una pieza más, central pero no independiente, dentro de una suma de acciones e individuos, así como otras publicaciones, exposiciones, eventos y también artículos ensayados y proyectos, en particular Bigness, Generic City, Euralille y los concursos de 1989. Son significativos aspectos como la apertura a una autoría múltiple, encabezada por Rem Koolhaas y el diseñador gráfico Bruce Mau, acompañados en los agradecimientos de la editora Jennifer Sigler y cerca de una centena de nombres, cuyas aportaciones no necesariamente se basan en la construcción de fragmentos del libro. La supresión de ciertos límites permite superar también las tareas inicialmente relevantes en la edición de una publicación. Un objetivo general de la tesis es también la reflexión sobre relaciones anteriormente cuestionadas, como la establecida entre la arquitectura y los mercados o la economía. Tomando como punto de partida la idea de “design intelligence” sugerida por Michael Speaks (2001), se extrae de sus argumentos que lo esencial es el hallazgo de la singularidad o inteligencia propia de cada estudio de arquitectura o diseño. Asimismo se explora si en la construcción de ese tipo de fórmulas magistrales se alojaban también combinaciones de interés y productivas entre asuntos como la eficiencia y la creatividad, o la organización y las ideas. 
En esta dinámica de relaciones bidireccionales, y en ese presente de exceso de información, se fundamenta la propuesta de una equivalencia más evidenciada entre la “socialización” del trabajo del arquitecto, al compartirlo públicamente e introducir nuevas conversaciones, y la relación inversa a partir del trabajo sobre la “socialización” misma. Como si la consciencia sobre el uso de los medios pudiera ser efectivamente instrumental, y contribuir al desarrollo de la práctica de arquitectura, desde una perspectiva idealmente comprometida e intelectual.
ABSTRACT
The dissertation argues the possibility to understand that the practice of architecture can find an instrumental support in the practices of communication, overcoming any classical simplification of the use of media, generally reduced to superficial treatments or promotional efforts. Thus some cases of the last quarter of the 20th century are presented. Some threats detected, such as the risk of triviality, the saturation of the public image or the foreseeable wrong association among individuals when they are introduced as part of thematic groups, might have encouraged a noticeable increase of the command taken by architects when there is a chance to intervene in a media environment. In other words, it can be argued that architecture has started to overcome and optimize the inevitable, the fact that exhibition formulas and publications, or simply the practice of (self)exhibition or (self)publication, are tools at our disposal for the activation of some kind of intellectual management of the communication and circulating information about architecture itself. This practice of “self-edition” is analyzed in a specific timeframe of OMA’s trajectory, an office that is considered a ground-breaking actor in the efficient and opportunistic use of media.
Then the second part of the thesis dissects their monograph S,M,L,XL (1995), a volume in which its main characters were deeply involved in terms of edition and design, a process barely analyzed up to now. This publication marked a turning point in its own genre, disrupting old formats and traditional restrictions. It became such an emblematic volume for the discipline that no later attempt at replication has been able to improve on this precedent. Here, the book is also presented as the element that triggers the construction of a “big event” that concludes in the transformation of OMA’s identity over 10 years, paradoxically between the birth of the Groszstadt Foundation and the early steps of AMO, two parallel entities connected to OMA. This position emerges from the way the research unveils that S,M,L,XL is one more piece, a central but not an independent one, within a sum of actions and individuals, as well as other publications, exhibitions, articles and projects, in particular Bigness, Generic City, Euralille and the competitions of 1989. Among the remarkable innovations of the monograph, there is an outstanding openness to a regime of multiple authorship, headed by Rem Koolhaas and the graphic designer Bruce Mau, who share the acknowledgements page with the editor, Jennifer Sigler, and almost 100 people, not necessarily responsible for specific fragments of the book. In this respect, the dissolution of certain limits made it possible to go beyond the tasks initially expected in the edition of a publication. A general goal of the thesis is also to open a debate on typically questioned relations, particularly between architecture and markets or the economy. Using the idea of “design intelligence”, outlined by Michael Speaks in 2001, the thesis pulls out its essence: basically, the interest in detecting the singularity, or particular intelligence, of every office of architecture and design.
Then it explores whether, in the construction of this kind of ingenious formulas, one could find interesting and useful combinations of issues like efficiency and creativity, or organization and ideas. This dynamic of bidirectional relations, especially urgent at the present moment of information excess, grounds the proposal for a more evident equivalence between the “socialization” of the work in architecture, anytime it is shared in public, and the opposite relation, the work on the very act of “socialization” itself. As if a new awareness of the capacities of the use of media could turn it into an instrumental force, capable of contributing to the development of the practice of architecture, from an ideally committed and intellectual perspective.

Relevância:

100.00%

Publicador:

Resumo:

This paper shows the results of a research project aimed at formulating a general model to support the implementation and management of an urban road pricing scheme. After preliminary work to define the state of the art in the field of sustainable urban mobility strategies, the problem was set up theoretically in terms of transport economics, introducing the concept of external costs, duly translated into the principle of pricing for the use of public infrastructures. The research is based on the definition of a set of direct and indirect indicators to qualify urban areas by land use, mobility, environmental and economic conditions. These indicators were calculated for a selected set of typical urban areas in Europe on the basis of the results of a survey carried out by means of a specific questionnaire. Once the most typical and interesting applications of the road pricing concept had been identified, in cities such as London (Congestion Charging), Milan (Ecopass), Stockholm (Congestion Tax) and Rome (ZTL), a large benchmarking exercise and the cross-analysis of direct and indirect indicators made it possible to define a simple general model, guidelines and key requirements for implementing a pricing-based traffic restriction scheme in a generic urban area. The model was finally applied to the design of a road pricing scheme for a particular area in Madrid, and to the quantification of the expected results of its implementation from a land use, mobility, environmental and economic perspective.
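The indicator-based qualification step described above can be illustrated with a minimal sketch. The indicator names, weights and threshold below are hypothetical placeholders for illustration, not the values actually derived in the paper's benchmarking exercise:

```python
# Illustrative sketch of the indicator-based qualification of an urban
# area. Indicator names, weights and the 0.7 threshold are hypothetical
# placeholders, not the values derived in the paper.

def composite_score(indicators, weights):
    """Weighted average of normalized indicators, each in [0, 1]."""
    total_w = sum(weights[k] for k in indicators)
    return sum(indicators[k] * weights[k] for k in indicators) / total_w

WEIGHTS = {"density": 0.3, "congestion": 0.3, "pollution": 0.2, "transit_supply": 0.2}

# Example area profile (normalized direct/indirect indicators).
area = {"density": 0.8, "congestion": 0.9, "pollution": 0.7, "transit_supply": 0.6}
score = composite_score(area, WEIGHTS)
# A threshold would then flag the area as a candidate for a pricing scheme.
suitable = score >= 0.7
```

In the actual model, the indicator set, its normalization and the decision threshold would come from the survey and cross-analysis described above.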

Relevância:

100.00%

Publicador:

Resumo:

The progressive ageing of the population has turned mild cognitive impairment (MCI) into a prevalent disease among the elderly. Consequently, spatial disorientation has become a significant problem for older people and their caregivers. Ambient-assisted living applications are offering location-based services to empower the elderly to go outside and to encourage greater independence. Therefore, this paper describes the design and technical evaluation of a location-awareness service enabler aimed at supporting and managing probable wandering situations of a person with MCI. Through the presence capabilities of the IP multimedia subsystem (IMS) architecture, the service will alert the patient's contacts if a hazardous situation is detected depending on his or her location. Furthermore, information about the older person's security areas has been included in the user profile managed by IMS. In doing so, the introduced service enabler contributes to the “context-awareness” paradigm, allowing the adaptation and personalization of services depending on the user's context and specific conditions or preferences.
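The core wandering check behind such a service can be sketched as a geofence test. This is an illustration only; the real enabler works through IMS presence signalling, and the circular security-area model and function names here are assumptions of the sketch:

```python
# Illustrative geofence test for the wandering check: a security area is
# modeled as a circle around a reference point, and an alert is raised
# when the reported position falls outside every area. The real enabler
# works through IMS presence signalling; names here are assumptions.
import math

def inside_area(lat, lon, area):
    """True if (lat, lon) lies within the area's radius (local flat-earth approximation, metres)."""
    dy = (lat - area["lat"]) * 111_320.0
    dx = (lon - area["lon"]) * 111_320.0 * math.cos(math.radians(area["lat"]))
    return math.hypot(dx, dy) <= area["radius_m"]

def check_position(lat, lon, areas):
    """Return 'ok' if any security area contains the fix, else 'alert'."""
    return "ok" if any(inside_area(lat, lon, a) for a in areas) else "alert"

home = {"lat": 40.4168, "lon": -3.7038, "radius_m": 500}
status_near = check_position(40.4170, -3.7040, [home])   # a few metres from home
status_far = check_position(40.4500, -3.7038, [home])    # roughly 3.7 km away
```

An "alert" result would then be forwarded to the patient's contacts through the presence mechanism described in the paper.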

Relevância:

100.00%

Publicador:

Resumo:

This Doctoral Thesis, entitled Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths, aims to deepen the knowledge of a particular antenna measurement system: the compact range operating in the millimeter-wavelength frequency bands. The thesis has been developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements and currently runs four facilities operating in different configurations: a Gregorian compact antenna test range, a spherical near-field system, a planar near-field system and a semianechoic arch system. The research work carried out for this thesis extends the knowledge of the first of these configurations to higher frequencies, beyond the microwave region where the Radiation Group already offers customer-level performance. To reach this high-level purpose, a set of scientific tasks was carried out sequentially; these are succinctly described in the following paragraphs. In first place, the state of the art was reviewed. The study of the scientific literature dealt with the analysis of measurement practices in compact antenna test ranges together with the particularities of millimeter-wavelength technologies. The joint study of both fields of knowledge converged, where these measurement facilities are concerned, on a series of technological challenges that become serious bottlenecks at different stages: analysis, design and assessment. In second place, after this overview, focus was set on electromagnetic analysis algorithms. These formulations make it possible to approach electromagnetic features of interest, such as the phase of the field distribution or the stray-signal behavior of particular structures, when they interact with sources of electromagnetic waves.
Properly operated, a CATR facility features collimating optics that are large in terms of wavelengths. Accordingly, the electromagnetic analysis tasks introduce a large number of mathematical unknowns that grows with frequency, following polynomial laws of different order depending on the algorithm used. In particular, the optics configuration of interest here is the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, and this flexibility becomes the core of the algorithm's ability to support the subsequent design tasks. This thesis' contribution to this field consists of a formulation that is powerful both in dealing with varied analysis geometries and in computational terms. Two algorithms were developed; although based on the same hybridization principle, they reach physical models of different order at different computational cost. Their CATR design capabilities were inter-compared, reaching both qualitative and quantitative conclusions on their scope. In third place, interest was shifted from the analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced at the analysis stage appears as well at the in-chamber field probing stage. The naturally lower dynamic range available from semiconductor millimeter-wave sources additionally requires larger integration times at each probing point. These peculiarities sharply increase the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of the millimeter-wavelength range.
The value of range assessment, on the contrary, moves towards the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques that achieve substantial data reduction ratios. Collaterally, they increase the robustness of the results against noise, which amounts to a virtual increase of the setup's available dynamic range. In fourth place, the environmental sensitivity of millimeter wavelengths was approached. The drift of electromagnetic experiments caused by the dependence of the results on the surrounding environment is well known. At millimeter wavelengths, this sensitivity relegates many practices that are industrial at microwave frequencies to the experimental stage. In particular, the evolution of the atmosphere, even within acceptable conditioning bounds, results in drift phenomena that completely mask the experimental results. The contribution of this thesis in this respect consists of electrically modeling the indoor atmosphere of a CATR as a function of the environmental variables that affect the range's performance. A simple model was developed, able to express high-level phenomena, such as feed-probe phase drift, as a function of low-level magnitudes that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed and chamber conditioning is automatically extended towards higher frequencies. In summary, the purpose of this thesis is to go further into the knowledge of compact antenna test ranges at millimeter wavelengths. This knowledge is dispensed through the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operative facility, stages for each of which bottleneck phenomena currently exist and seriously compromise antenna measurement practice at millimeter wavelengths.
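The kind of low-order environmental model described above, expressing feed-probe phase drift as a function of easily sampled temperature and relative humidity, can be sketched with a simple least-squares fit. The linear form and the synthetic calibration data are illustrative assumptions; the thesis' actual model and calibration procedure may differ:

```python
# Minimal sketch of a linear environmental model: feed-probe phase drift
# expressed as phase ≈ a·T + b·RH + c and fitted by least squares.
# Coefficients and calibration data are synthetic, for illustration only.
import numpy as np

def fit_drift_model(temp_c, rh_pct, phase_deg):
    """Least-squares fit of phase ≈ a·T + b·RH + c; returns (a, b, c)."""
    A = np.column_stack([temp_c, rh_pct, np.ones_like(temp_c)])
    coeffs, *_ = np.linalg.lstsq(A, phase_deg, rcond=None)
    return coeffs

# Synthetic calibration data following phase = 0.5·T + 0.2·RH - 10.
T = np.array([20.0, 21.0, 22.0, 23.0, 24.0])       # temperature, °C
RH = np.array([40.0, 42.0, 45.0, 43.0, 41.0])      # relative humidity, %
phase = 0.5 * T + 0.2 * RH - 10.0                  # phase drift, degrees

a, b, c = fit_drift_model(T, RH, phase)
# Compensation: subtract the modeled drift from each probed sample.
compensated = phase - (a * T + b * RH + c)
```

Once calibrated, such a model lets the modeled drift be subtracted from the probed field samples, which is the environmental compensation mentioned in the text.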

Relevância:

100.00%

Publicador:

Resumo:

In this paper, we describe a complete development platform that features different innovative acceleration strategies, not included in any other current platform, that simplify and speed up the definition of the different elements required to design a spoken dialog service. The proposed accelerations are mainly based on using the information from the backend database schema and contents, as well as cumulative information produced throughout the different steps in the design. Thanks to these accelerations, the interaction between the designer and the platform is improved, and in most cases the design is reduced to simple confirmations of the “proposals” that the platform dynamically provides at each step. In addition, the platform provides several other accelerations, such as configurable templates that can be used to define the different tasks in the service or the dialogs to obtain or show information to the user, automatic proposals for the best way to request slot contents from the user (i.e. using mixed-initiative forms or directed forms), an assistant that offers the set of most probable actions required to complete the definition of the different tasks in the application, and another assistant for solving specific modality details such as confirmations of user answers or how to present the lists of retrieved results to the user after querying the backend database. Additionally, the platform also allows the creation of speech grammars and prompts, database access functions, and the possibility of using mixed-initiative and over-answering dialogs. In the paper we also describe each assistant in the platform in detail, emphasizing the different kinds of methodologies followed to facilitate the design process in each one. Finally, we describe the results obtained in both a subjective and an objective evaluation with different designers that confirm the viability, usefulness, and functionality of the proposed accelerations. Thanks to the accelerations, the design time is reduced by more than 56% and the number of keystrokes by 84%.
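One of the accelerations mentioned above, proposing how to request slot contents from the user, can be caricatured with a small sketch. The heuristic (a directed form for a single pending slot, a mixed-initiative form for a few compatible slots) and the function name are assumptions of this illustration, not the platform's actual decision logic:

```python
# Hypothetical sketch of one accelerator: proposing how to request slot
# contents from the user. The decision rule below is assumed for
# illustration, not the platform's actual logic.

def propose_form(pending_slots, max_mixed=3):
    """Return a form proposal the designer can simply confirm or edit."""
    if len(pending_slots) == 1:
        return {"type": "directed", "slots": pending_slots}
    if len(pending_slots) <= max_mixed:
        return {"type": "mixed-initiative", "slots": pending_slots}
    # Too many slots for one turn: split into a sequence of directed forms.
    return {"type": "directed-sequence", "slots": pending_slots}

single = propose_form(["destination"])
several = propose_form(["origin", "destination", "date"])
many = propose_form(["origin", "destination", "date", "time", "class"])
```

In the platform described above, the designer would only need to confirm or override such a proposal rather than define the form from scratch.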

Relevância:

100.00%

Publicador:

Resumo:

El objetivo general de esta Tesis Doctoral fue estudiar la influencia de diversos factores nutricionales y de manejo sobre la productividad y la calidad del huevo en gallinas ponedoras comerciales rubias. Los factores estudiados fueron: 1) Cereal principal y tipo de grasa en la dieta; 2) Nivel de proteína bruta y grasa en la dieta; 3) Nivel energético de la dieta; 4) Peso vivo al inicio del período de puesta. En el experimento 1, la influencia del cereal principal en la dieta y el tipo de grasa suplementada en la dieta sobre los parámetros productivos y la calidad del huevo fue estudiada en 756 gallinas rubias de la estirpe Lohmann desde la sem 22 hasta la 54 de vida. El experimento se realizó mediante un diseño completamente al azar con 9 tratamientos ordenados factorialmente, con 3 cereales base (maíz, trigo blando y cebada) y 3 tipos de grasa que variaban en su contenido en ácido linoleico (aceite de soja, oleína vegetal mezcla y manteca). Todas las dietas satisfacían las recomendaciones nutricionales para gallinas ponedoras rubias según el NRC (1994) y FEDNA (2008). La unidad experimental fue la jaula para todas las variables. Cada tratamiento fue replicado 4 veces, y la unidad experimental estuvo formada por 21 gallinas alojadas en grupos de 7. Las dietas fueron formuladas con un contenido nutritivo similar, excepto para el ácido linoleico, que varió en función del tipo de cereal y grasa utilizado. Así, dependiendo de la combinación de estos elementos, el contenido de este ácido graso varió desde un 0.8% (dieta trigo-manteca) a un 3.4% (dieta maíz-aceite de soja). Este rango de ácido linoleico permitió estimar el nivel mínimo de este nutriente en el pienso que permite maximizar el peso del huevo. Los parámetros productivos y la calidad del huevo se controlaron cada 28 días y el peso de las aves se midió individualmente al inicio y al final del experimento con el objetivo de estudiar la variación en el peso vivo de los animales.
No se observaron interacciones entre el tipo de cereal y grasa en la dieta para ninguna de las variables productivas estudiadas. Los tratamientos experimentales no afectaron a las principales variables productivas (porcentaje de puesta, peso del huevo y masa de huevo). Sin embargo, la ganancia de peso fue mayor en las gallinas alimentadas con maíz o trigo que en las gallinas alimentadas con cebada (243 vs. 238 vs. 202 g, respectivamente; P< 0.05). En el mismo sentido, las gallinas alimentadas con manteca obtuvieron una mayor ganancia de peso que las gallinas alimentadas con aceite de soja u oleína vegetal (251 vs. 221 vs. 210 g, respectivamente; P< 0.05). En cuanto a la calidad del huevo, ninguna de las variables estudiadas se vio afectada por el tratamiento experimental, salvo la pigmentación de la yema. Así, las gallinas alimentadas con maíz como cereal principal obtuvieron una mayor puntuación en la escala de color que las gallinas alimentadas con trigo y con cebada (9.0 vs. 8.3 vs. 8.3, respectivamente; P< 0.001). La pigmentación de la yema también se vio afectada por el tipo de grasa en la dieta: las gallinas alimentadas con manteca obtuvieron una mayor puntuación en la escala de color que las gallinas alimentadas con aceite de soja u oleína vegetal (8.9 vs. 8.5 vs. 8.2, respectivamente; P< 0.001). La influencia del contenido en ácido linoleico sobre el peso y la masa de huevo fue mayor a medida que el contenido de dicho ácido graso se redujo en la dieta. Así, la influencia de la dieta en los ratios peso de huevo/g linoleico ingerido y masa de huevo/g linoleico ingerido fue significativamente mayor a medida que el contenido en dicho ácido graso disminuyó en la dieta (P< 0.001). Los resultados del ensayo indican que las gallinas ponedoras rubias no necesitan más de un 1.0% de ácido linoleico en la dieta para maximizar la producción y el tamaño del huevo.
Además, se pudo concluir que los 3 cereales y las 3 grasas utilizadas pueden sustituirse en la dieta sin ningún perjuicio productivo o referente a la calidad del huevo siempre que los requerimientos de los animales sean cubiertos. En el experimento 2, la influencia del nivel de proteína bruta y el contenido de grasa de la dieta sobre los parámetros productivos y la calidad del huevo fue estudiada en 672 gallinas ponedoras rubias de la estirpe Lohmann entre las sem 22 y 50 de vida. El experimento fue conducido mediante un diseño completamente al azar con 8 tratamientos ordenados factorialmente con 4 dietas y 2 pesos vivos distintos al inicio de puesta (1592 vs. 1860 g). Tres de esas dietas diferían en el contenido de proteína bruta (16.5%, 17.5% y 18.5%) y tenían un contenido en grasa añadida de 1.8%. La cuarta dieta tenía el nivel proteico más elevado (18.5%) pero fue suplementada con 3.6% de grasa añadida en vez de 1.8%. Cada tratamiento fue replicado 4 veces y la unidad experimental consistió en 21 gallinas alojadas en grupos de 7 animales en 3 jaulas contiguas. Todas las dietas fueron isocalóricas (2750 kcal EMAn/kg) y cubrieron las recomendaciones en aminoácidos para gallinas ponedoras rubias (Arg, Ile, Lys, Met, Thr, Trp, TSAA y Val) según el NRC (1994) y FEDNA (2008). Los efectos de los tratamientos sobre las variables productivas y la calidad de huevo fueron estudiados cada 28 días. La dieta no afectó a ninguna de las variables productivas estudiadas a lo largo del período productivo. Sin embargo, el peso inicial originó que las gallinas pesadas consumieran más (120.6 vs. 113.9 g; P< 0.001), obtuvieran un porcentaje de puesta mayor (92.5 vs. 89.8%; P< 0.01) y un peso del huevo mayor (64.9 vs. 62.4 g; P< 0.001) que las gallinas ligeras. El peso inicial de las gallinas no afectó al IC por kg de huevo ni a la mortalidad; sin embargo, la ganancia de peso fue mayor (289 vs. 233 g; P< 0.01) y el IC por docena de huevos fue mejor (1.52 vs.
1.57; P< 0.01) en las gallinas ligeras que en las gallinas pesadas. En cuanto a la calidad del huevo, la dieta no influyó sobre ninguna de las variables estudiadas. Los resultados del ensayo muestran que las gallinas ponedoras rubias, independientemente de su peso vivo al inicio de la puesta, no necesitan una cantidad de proteína bruta superior a 16.5% para maximizar la producción, asegurando que las dietas cubren los requerimientos en AA indispensables. Asimismo, se pudo concluir que las gallinas con un peso más elevado al inicio de puesta producen más masa de huevo que las gallinas con un peso más bajo debido a que las primeras producen más cantidad de huevos y más pesados. Sin embargo, ambos grupos de peso obtuvieron el mismo IC por kg de huevo y las gallinas más livianas obtuvieron un mejor IC por docena de huevos que las pesadas. En el experimento 3, la influencia de la concentración energética sobre los parámetros productivos y la calidad del huevo fue estudiada en 520 gallinas ponedoras rubias de la estirpe Hy-Line en el período 24-59 sem de vida. Se utilizaron 8 tratamientos ordenados factorialmente con 4 dietas que variaron en el contenido energético (2650, 2750, 2850 y 2950 kcal EMAn/kg) y 2 pesos vivos distintos al inicio del período de puesta (1733 vs. 1606 g). Cada tratamiento fue replicado 5 veces y la unidad experimental consistió en una jaula con 13 aves. Todas las dietas se diseñaron para que tuvieran una concentración nutritiva similar por unidad energética. Las variables productivas y de calidad de huevo se estudiaron mediante controles cada 28 días desde el inicio del experimento. No se observaron interacciones entre el nivel energético y el peso inicial del ave para ninguna de las variables estudiadas. Un incremento en la concentración energética de la dieta incrementó la producción de huevos (88.8 % vs. 91.2 % vs. 92.7 % vs. 90.5 %), la masa de huevo (56.1 g/d vs. 58.1 g/d vs. 58.8 g/d vs. 58.1 g/d) y la eficiencia energética (5.42 vs.
5.39 vs. 5.38 vs. 5.58 kcal EMA/g huevo) de forma lineal y cuadrática (P< 0.05) y afectó significativamente a la ganancia de peso (255 g vs. 300 g vs. 325 g vs. 359 g; P< 0.05). Sin embargo, un incremento en la concentración energética provocó un descenso lineal en el consumo de los animales (115 g vs. 114 g vs. 111 g vs. 110 g; P< 0.001) y un descenso lineal y cuadrático en el IC por kg de huevo (2.05 vs. 1.96 vs. 1.89 vs. 1.89; P< 0.01). En cuanto a la calidad del huevo, un incremento en el contenido energético de la dieta provocó una reducción lineal de la calidad del albumen en forma de reducción de Unidades Haugh (88.4 vs. 87.8 vs. 86.3 vs. 84.7; P< 0.001); asimismo, el incremento de energía redujo de forma lineal la proporción relativa de cáscara en el huevo (9.7 vs. 9.6 vs. 9.6 vs. 9.5; P< 0.001). Sin embargo, el incremento energético propició un incremento lineal en la pigmentación de la yema del huevo (7.4 vs. 7.4 vs. 7.6 vs. 7.9; P< 0.001). El peso vivo al inicio de la prueba afectó a las variables productivas y a la calidad del huevo. Así, los huevos procedentes de gallinas pesadas al inicio de puesta tuvieron una mayor proporción de yema (25.7 % vs. 25.3 %; P< 0.001) y menor de albumen (64.7 vs. 65.0; P< 0.01) y cáscara (9.5 vs. 9.6; P< 0.05) respecto de los huevos procedentes de gallinas ligeras. Consecuentemente, el ratio yema:albumen fue mayor (0.40 vs. 0.39; P< 0.001) para las gallinas pesadas. Según los resultados del experimento se pudo concluir que las actuales gallinas ponedoras rubias responden con incrementos en la producción y en la masa del huevo a incrementos en la concentración energética hasta un límite que se sitúa en 2850 kcal EMAn/kg. Asimismo, los resultados obtenidos entre los 2 grupos de peso al inicio de puesta demostraron que las gallinas pesadas al inicio de puesta tienen un mayor consumo y producen huevos más pesados, con el consecuente aumento de la masa del huevo respecto de gallinas más ligeras.
Sin embargo, el IC por kg de huevo fue el mismo en ambos grupos de gallinas y el IC por docena de huevos fue mejor en las gallinas ligeras. Asimismo, la eficiencia energética fue mejor en las gallinas ligeras.
Abstract
The general aim of this PhD Thesis was to study the influence of different nutritional and management factors on the productivity and egg quality of commercial brown laying hens. The factors studied were: 1) the main cereal and type of fat of the diet; 2) the crude protein and fat content of the diet; 3) the energy concentration of the diet; 4) the initial body weight of the hens at the onset of the laying period. In experiment 1, the influence of the main cereal and type of supplemental fat in the diet on productive performance and egg quality was studied in 756 Lohmann brown-egg laying hens from 22 to 54 wk of age. The experiment was conducted as a completely randomized design with 9 treatments arranged factorially with 3 cereals (dent corn, soft wheat, and barley) and 3 types of fat (soy oil, acidulated vegetable soapstocks, and lard). Each treatment was replicated 4 times (21 hens per replicate). All diets were formulated according to NRC (1994) and FEDNA (2008) to have similar nutrient content except for linoleic acid, which ranged from 0.8 (wheat-lard diet) to 3.4% (corn-soy oil diet) depending on the combination of cereal and fat source used. This approach allowed estimating the minimum level of linoleic acid in the diet that maximizes egg weight. Productive performance and egg quality traits were recorded every 28 d and BW of the hens was measured individually at the beginning and at the end of the experiment. No significant interactions between main factors were detected for any of the variables studied. Egg production, egg weight, and egg mass were not affected by dietary treatment. Body weight gain was higher (243 vs. 238 vs.
202 g; P< 0.05) for hens fed corn or wheat than for hens fed barley, and also higher for hens fed lard than for hens fed soy oil or acidulated vegetable soapstocks (251 vs. 221 vs. 210 g; P< 0.05). Egg quality was not influenced by dietary treatment except for yolk color, which was greater for hens fed corn than for hens fed wheat or barley (9.0 vs. 8.3 vs. 8.3; P< 0.001) and for hens fed lard than for hens fed soy oil or acidulated vegetable soapstocks (8.9 vs. 8.5 vs. 8.2, respectively; P< 0.001). The influence of linoleic acid on egg weight and egg mass was greater as the content of this fatty acid in the diet decreased. Thus, the influence of the diet on the ratios egg weight/g linoleic acid intake and egg mass/g linoleic acid intake was higher as the amount of this fatty acid decreased in the diet (P< 0.001). It is concluded that brown egg laying hens do not need more than 1.0% of linoleic acid in the diet (1.16 g/hen/d) to maximize egg production and egg size. The 3 cereals and the 3 fat sources tested can replace each other in the diet provided that the linoleic acid requirements to maximize egg size are met. In experiment 2, the influence of CP and fat content of the diet on performance and egg quality traits was studied in 672 Lohmann brown egg-laying hens from 22 to 50 wk of age. The experiment was conducted as a completely randomized design with 8 treatments arranged factorially with 4 diets and 2 initial BW of the hens (1,592 vs. 1,860 g). Three of these diets differed in the CP content (16.5, 17.5, and 18.5%) and included 1.8% added fat. The fourth diet also had 18.5% CP but was supplemented with 3.6% fat instead of 1.8%. Each treatment was replicated 4 times and the experimental unit consisted of 21 hens allocated in groups of 7 in 3 adjacent cages. All diets were isocaloric (2,750 kcal AME/kg) and met the recommendations of brown egg-laying hens for digestible Arg, Ile, Lys, Met, Thr, Trp, TSAA, and Val.
Productive performance and egg quality were recorded by replicate every 28 d. For the entire experimental period, diet did not affect any of the productive performance traits studied, but the heavier hens had higher ADFI (120.6 vs. 113.9 g; P< 0.001), egg production (92.5 vs. 89.8%; P< 0.01), and egg weight (64.9 vs. 62.4 g; P< 0.001) than the lighter hens. Initial BW did not affect feed conversion per kilogram of eggs or hen mortality, but BW gain was higher (289 vs. 233 g; P< 0.01) and FCR per dozen eggs was better (1.52 vs. 1.57; P< 0.01) for the lighter than for the heavier hens. None of the egg quality variables studied was affected by dietary treatment or initial BW of the hens. It is concluded that brown egg-laying hens, irrespective of their initial BW, do not need more than 16.5% CP to maximize egg production provided that the diet meets the requirements for key indispensable amino acids. Heavier hens produce more and larger eggs than lighter hens, but feed efficiency per kilogram of eggs is not affected. In experiment 3, the influence of AMEn concentration of the diet on productive performance and egg quality traits was studied in 520 Hy-Line brown egg-laying hens differing in initial BW from 24 to 59 wk of age. There were 8 treatments arranged factorially with 4 diets varying in energy content (2,650, 2,750, 2,850, and 2,950 kcal AMEn/kg) and 2 initial BW of the hens (1,733 vs. 1,606 g). Each treatment was replicated 5 times (13 hens per replicate) and all diets had similar nutrient content per unit of energy. No interactions between energy content of the diet and initial BW of the hens were detected for any trait. An increase in energy concentration of the diet increased (linear, P< 0.05; quadratic, P< 0.05) egg production (88.8 % vs. 91.2 % vs. 92.7 % vs. 90.5 %), egg mass (56.1 g/d vs. 58.1 g/d vs. 58.8 g/d vs. 58.1 g/d), energy efficiency (5.42 vs. 5.39 vs. 5.38 vs. 5.58 kcal AMEn/g of egg), and BW gain (255 g vs. 300 g vs. 325 g vs.
359 g; P< 0.05) but decreased ADFI (115 g vs. 114 g vs. 111 g vs. 110 g; linear, P< 0.001) and FCR per kg of eggs (2.05 vs. 1.96 vs. 1.89 vs. 1.89; linear, P< 0.01; quadratic, P< 0.01). An increase in energy content of the diet reduced Haugh units (88.4 vs. 87.8 vs. 86.3 vs. 84.7; P< 0.01) and the proportion of shell in the egg (9.7 vs. 9.6 vs. 9.6 vs. 9.5; P< 0.001). Feed intake (114.6 vs. 111.1 g/hen per day), AMEn intake (321 vs. 311 kcal/hen per day), egg weight (64.2 vs. 63.0 g), and egg mass (58.5 vs. 57.0 g) were higher for the heavier than for the lighter hens (P< 0.01) but FCR per kg of eggs and energy efficiency were not affected. Eggs from the heavier hens had a higher proportion of yolk (25.7 % vs. 25.3 %; P< 0.001) and lower proportions of albumen (64.7 vs. 65.0; P< 0.01) and shell (9.5 vs. 9.6; P< 0.05) than eggs from the lighter hens. Consequently, the yolk to albumen ratio was higher (0.40 vs. 0.39; P< 0.001) for the heavier hens. It is concluded that brown egg-laying hens respond with increases in egg production and egg mass to increases in AMEn concentration of the diet up to 2,850 kcal/kg. Heavy hens had higher feed intake and produced heavier eggs and more egg mass than light hens. However, energy efficiency was better for the lighter hens.
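The linear and quadratic responses to dietary energy reported above can be illustrated by refitting a quadratic to the four published treatment means for egg production. This is a reanalysis sketch of the reported means only, not the thesis' statistical model:

```python
# Reanalysis sketch: quadratic fit of the four published treatment means
# for egg production versus dietary energy. Illustrates the reported
# linear and quadratic responses; not the thesis' statistical model.
import numpy as np

energy = np.array([2650.0, 2750.0, 2850.0, 2950.0])    # kcal AMEn/kg
egg_production = np.array([88.8, 91.2, 92.7, 90.5])    # % lay (reported means)

x = energy - 2800.0                        # centre to improve conditioning
a2, a1, a0 = np.polyfit(x, egg_production, 2)
optimum = 2800.0 - a1 / (2.0 * a2)         # vertex of the concave parabola
```

The concave fit places the production optimum close to the 2,850 kcal AMEn/kg level identified in the text as the limit of the response.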

Relevance:

100.00%

Publisher:

Abstract:

The term "Logic Programming" refers to a variety of computer languages and execution models based on the traditional concept of symbolic logic. The expressive power of these languages promises to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has improved greatly since the advent of the first interpreters. However, still higher inference speeds are required to meet the demands of applications such as those contemplated for next-generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming, in turn, appears to be a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source-language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special-purpose parallel architectures. Few assumptions are made at the source-language level, so the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and the management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management.
A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space-efficient sequential systems a reality. Therefore, the model presented herein is capable of retaining sequential execution speed similar to that of high-performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
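The "variable binding conflicts in AND-Parallelism" mentioned above arise when two goals executed in parallel may bind the same variable. A common safeguard is an independence test: two goals can run in AND-parallel without conflict if, at call time, they share no unbound variables. The sketch below illustrates that test in Python with an invented term representation; it is not the dissertation's actual abstract-machine encoding.

```python
# Independence test behind (independent) AND-parallelism: two goals may run
# in parallel without binding conflicts if they share no unbound variables.
# Term representation (tuples; uppercase strings are variables) is invented
# for illustration only.

def variables(term):
    """Collect variable names appearing in a term."""
    if isinstance(term, str):
        return {term} if term[:1].isupper() else set()
    return set().union(*(variables(t) for t in term)) if term else set()

def independent(goal_a, goal_b, bindings):
    """True if the goals' sets of still-unbound variables do not overlap."""
    unbound_a = {v for v in variables(goal_a) if v not in bindings}
    unbound_b = {v for v in variables(goal_b) if v not in bindings}
    return not (unbound_a & unbound_b)

# p(X, Y) and q(Y, Z) share Y, so running them in parallel risks a conflict...
print(independent(("p", "X", "Y"), ("q", "Y", "Z"), bindings={}))          # False
# ...unless Y is already bound when the goals are scheduled.
print(independent(("p", "X", "Y"), ("q", "Y", "Z"), bindings={"Y": "a"}))  # True
```

Goals that fail this test are executed sequentially (or with run-time conflict handling), which is one source of the overhead the dissertation's techniques aim to reduce.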

Relevance:

100.00%

Publisher:

Abstract:

The conceptual design phase is only partially supported by product lifecycle management/computer-aided design (PLM/CAD) systems, causing discontinuity of the design information flow: customer needs → functional requirements → key characteristics → design parameters (DPs) → geometric DPs. To address this issue, a knowledge-based approach is proposed to integrate quality function deployment, failure mode and effects analysis, and axiomatic design into a commercial PLM/CAD system. A case study, the main subject of this article, was carried out to validate the proposed process, to evaluate through a pilot development how the commercial PLM/CAD modules and application programming interface could support the information flow, and, based on the pilot results, to propose a full development framework.
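Axiomatic design, one of the three methods integrated above, maps functional requirements (FRs) to design parameters (DPs) through a design matrix and classifies the design by that matrix's shape. The sketch below shows the standard classification rule with a made-up boolean matrix; it is not taken from the article's case study.

```python
# Axiomatic design's independence check on an FR-by-DP design matrix:
# diagonal -> uncoupled, triangular -> decoupled, otherwise -> coupled.
# The example matrices are invented for illustration.

def classify(design_matrix):
    """Classify a square boolean matrix (True = FR i depends on DP j)."""
    n = len(design_matrix)
    off_diagonal = [(i, j) for i in range(n) for j in range(n)
                    if i != j and design_matrix[i][j]]
    if not off_diagonal:
        return "uncoupled"
    if all(i > j for i, j in off_diagonal):   # lower triangular
        return "decoupled"
    if all(i < j for i, j in off_diagonal):   # upper triangular
        return "decoupled"
    return "coupled"

print(classify([[True, False], [False, True]]))  # uncoupled
print(classify([[True, False], [True, True]]))   # decoupled (solve DP1, then DP2)
print(classify([[True, True], [True, True]]))    # coupled
```

In a decoupled design the DPs can still be fixed in a definite order, which is why carrying this matrix through the PLM/CAD information flow helps preserve design intent from requirements down to geometry.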

Relevance:

100.00%

Publisher:

Abstract:

Embedded context management in resource-constrained devices (e.g. mobile phones, autonomous sensors, or smart objects) imposes special requirements in terms of lightness for data modelling and reasoning. In this paper, we explore the state of the art in data representation and reasoning tools for embedded mobile reasoning and propose a light inference system (LIS) aimed at simplifying embedded inference processes, offering a set of functionalities to avoid redundancy in context management operations. The system is part of a service-oriented mobile software framework, conceived to facilitate the creation of context-aware applications: it decouples sensor data acquisition and context processing from the application logic. LIS, composed of several modules, encapsulates existing lightweight tools for ontology data management and rule-based reasoning, and it is ready to run on Java-enabled handheld devices. Data management and reasoning processes are designed to handle a general ontology that enables communication among framework components. Both the applications running on top of the framework and the framework components themselves can configure the rule and query sets in order to retrieve the information they need from LIS. In order to test LIS features in a real application scenario, an "Activity Monitor" has been designed and implemented: a personal health-persuasive application that provides feedback on the user's lifestyle, combining data from physical and virtual sensors. In this use case, LIS is used to evaluate the user's activity level in a timely manner, to decide on the convenience of triggering notifications, and to determine the best interface or channel to deliver these context-aware alerts.
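The rule-based reasoning LIS encapsulates can be sketched as a minimal forward-chaining loop: sensor facts are matched against rules until no new context facts can be derived, and a derived fact decides whether a notification fires. All fact and rule names below are invented for illustration; they are not LIS's actual ontology or rule syntax, and the abstract does not specify its inference engine's internals.

```python
# Minimal forward-chaining sketch of rule-based context reasoning:
# derive context facts from sensor facts, then decide on a notification.
# Fact/rule names are hypothetical, not LIS's real vocabulary.

facts = {("steps_last_hour", 40), ("screen", "on")}

# Each rule: (condition over the fact set, fact to add when it fires).
rules = [
    (lambda f: any(k == "steps_last_hour" and v < 100 for k, v in f),
     ("activity_level", "low")),
    (lambda f: ("activity_level", "low") in f and ("screen", "on") in f,
     ("notify", "move_reminder_on_screen")),
]

# Apply rules repeatedly until a fixed point: no rule adds a new fact.
changed = True
while changed:
    changed = False
    for condition, conclusion in rules:
        if condition(facts) and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(("notify", "move_reminder_on_screen") in facts)  # True
```

Keeping the loop this simple (flat fact set, no unification) is in the spirit of the paper's lightness requirement for resource-constrained devices, where a full reasoner would be too heavy.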