946 results for Time domain simulation tools


Relevance: 100.00%

Abstract:

The utilisation of biofuels in gas turbines is a promising alternative to fossil fuels for power generation. It would lead to a significant reduction of CO2 emissions using an existing combustion technology, although significant changes seem to be needed and further technological development is necessary. The goal of this work is to perform energy and exergy analyses of the behaviour of gas turbines fired with biogas, ethanol and synthesis gas (bio-syngas), compared with natural gas. The global energy transformation process (i.e. from biomass to electricity) has also been studied. Furthermore, the potential reduction of CO2 emissions attained by the use of biofuels has been determined, considering the restrictions regarding biomass availability. Two different simulation tools have been used to accomplish the aims of this work. The results suggest the high interest and technical viability of using Biomass Integrated Gasification Combined Cycle (BIGCC) systems for large-scale power generation.

Relevance: 100.00%

Abstract:

This paper reports the studies carried out to develop and calibrate the optimal models for the objectives of this work. In particular, a quarter-bogie model for the vehicle, rail-wheel contact enforced with the Lagrange multiplier method, and a 2D spatial discretization were selected as the optimal choices. Furthermore, a 3D model of the coupled vehicle-track system has also been developed to contrast the results obtained with the 2D model. The calculations were carried out in the time domain, and envelopes of relevant results were obtained for several track profiles and speed ranges. Distributed elevation irregularities were generated based on power spectral density (PSD) distributions. The results obtained include the wheel-rail contact forces and the forces transmitted to the bogie by the primary suspension. The latter loads are relevant for the purpose of evaluating the performance of the infrastructure.
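As a hedged illustration of the irregularity-generation step, the following Python sketch synthesizes an elevation profile from a one-sided PSD by superposing random-phase cosines; the PSD shape and all constants are illustrative, since the abstract does not specify them.

# Spectral-representation sketch for generating distributed elevation
# irregularities from a one-sided PSD (illustrative PSD and constants;
# the paper does not give its exact PSD parameters here).
import numpy as np

def irregularity_profile(x, psd, n_waves=500, w_min=0.01, w_max=10.0, seed=0):
    """Superpose cosines with random phases; amplitudes follow the PSD."""
    rng = np.random.default_rng(seed)
    w = np.linspace(w_min, w_max, n_waves)        # spatial frequencies [rad/m]
    dw = w[1] - w[0]
    amp = np.sqrt(2.0 * psd(w) * dw)              # per-component amplitude
    phase = rng.uniform(0.0, 2.0 * np.pi, n_waves)
    return (amp * np.cos(np.outer(x, w) + phase)).sum(axis=1)

# Example: a smooth low-pass PSD (purely illustrative shape).
psd = lambda w: 1e-6 / (w**2 + 0.1**2)
x = np.linspace(0.0, 500.0, 5001)                 # track coordinate [m]
z = irregularity_profile(x, psd)                  # vertical irregularity [m]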

Relevance: 100.00%

Abstract:

production, during the summer of 2010. This farm belongs to the Spanish research network for sugar beet development (AIMCRA), which, regarding irrigation, focuses on maximizing water savings and reducing costs. According to AIMCRA's perspective on promoting irrigation best practices, it is essential to understand the soil response to irrigation, i.e. the maximum irrigation length for each soil infiltration capacity. The use of humidity sensors provides the foundations to address the soil's behaviour during irrigation events and, therefore, to establish the boundaries regarding irrigation length and irrigation interval. In order to understand to what extent the farmer's performance at the Tordesillas farm could have been improved, this study aims to determine suitable irrigation lengths and intervals for the given soil properties and evapotranspiration rates. To this end, several humidity sensors were installed: (1) a Frequency Domain Reflectometry (FDR) EnviroScan probe taking readings at 10, 20, 40 and 60 cm depth, and (2) several Time Domain Reflectometry (TDR) Echo 2 and Cr200 probes buried in a 50 cm × 30 cm × 50 cm pit and placed along its walls at 10, 20, 30 and 40 cm depth. Moreover, in order to define the soil properties, a textural analysis of the Tordesillas farm soil was conducted, and data from the Tordesillas meteorological station were used.
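As a rough illustration of how an irrigation-interval bound follows from soil properties and evapotranspiration, the following sketch applies a FAO-56-style root-zone water balance; every number in it is illustrative, not a measurement from the Tordesillas farm.

# Back-of-the-envelope sketch of a maximum irrigation interval from soil
# properties and crop evapotranspiration (FAO-56-style water balance;
# all numbers below are illustrative, not measurements from the farm).
def max_irrigation_interval_days(fc, wp, depletion_frac, root_depth_m, etc_mm_day):
    """Days until the readily available water in the root zone is used up."""
    taw_mm = (fc - wp) * root_depth_m * 1000.0   # total available water [mm]
    raw_mm = depletion_frac * taw_mm             # readily available water [mm]
    return raw_mm / etc_mm_day

# e.g. loam: fc=0.28, wp=0.13 (vol. fractions), p=0.5, 0.4 m roots, ETc=6 mm/day
print(max_irrigation_interval_days(0.28, 0.13, 0.5, 0.4, 6.0))  # ≈ 5 days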

Relevance: 100.00%

Abstract:

Current nanometer technologies are subject to several adverse effects that seriously impact the yield and performance of integrated circuits, such as within-die parameter uncertainties, varying workload conditions, aging, temperature, etc. Monitoring, calibration and dynamic adaptation have emerged as promising solutions to these issues, and many kinds of monitors have been presented recently. In this scenario, where systems with hundreds of monitors of different types have been proposed, the need for light-weight monitoring networks has become essential. In this work we present a light-weight network architecture based on sharing the digitization resources of nodes that require a time-to-digital conversion. Our proposal employs a single-wire interface, shared among all the nodes in the network, and quantizes the time domain to perform the access multiplexing and transmit the information. It achieves a 16% improvement in area and power consumption compared to traditional approaches.
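The following behavioural sketch illustrates the idea of quantizing the time domain for access multiplexing on a shared wire: each node fires after a delay proportional to its digitized reading, so readings arrive on the wire implicitly sorted. The function and parameters are hypothetical, not the actual circuit.

# Behavioural sketch of a single-wire, time-domain access scheme
# (hypothetical parameters; the actual circuit quantizes time slots).
import numpy as np

def time_domain_readout(codes, lsb_ns=10.0, t_offset_ns=50.0):
    """Each node fires a pulse after a delay proportional to its reading;
    the controller recovers the (already sorted) values from pulse times.
    Nodes with equal codes would share a slot in this simple model."""
    codes = np.asarray(codes, dtype=float)
    fire_times = t_offset_ns + codes * lsb_ns     # time-to-digital mapping
    order = np.argsort(fire_times)                # arrival order on the wire
    decoded = (fire_times[order] - t_offset_ns) / lsb_ns
    return order, decoded                         # node ids and readings

node_codes = [37, 12, 55, 21, 40]                 # e.g. digitized temperatures
ids, readings = time_domain_readout(node_codes)
print(ids, readings)   # nodes answer from minimum to maximum reading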

Relevance: 100.00%

Abstract:

Compilation techniques such as those portrayed by the Warren Abstract Machine (WAM) have greatly improved the speed of execution of logic programs. The research presented herein is geared towards providing additional performance to logic programs through the use of parallelism, while preserving the conventional semantics of logic languages. Two areas to which special attention is given are the preservation of sequential performance and storage efficiency, and the use of low-overhead mechanisms for controlling parallel execution. Accordingly, the techniques used for supporting parallelism are efficient extensions of those which have brought high inferencing speeds to sequential implementations. At a lower level, special attention is also given to design and simulation detail and to the architectural implications of the execution model behavior. This paper offers an overview of the basic concepts and techniques used in the parallel design, the simulation tools used, and some of the results obtained to date.

Relevance: 100.00%

Abstract:

This work explores the automatic recognition of physical activity intensity patterns from multi-axial accelerometry and heart rate signals. Data collection was carried out in free-living conditions and in three controlled gymnasium circuits, for a total of 179.80 h of data divided into sedentary situations (65.5%), light-to-moderate activity (17.6%) and vigorous exercise (16.9%). The proposed machine learning algorithms comprise the following steps: time-domain feature definition, standardization and PCA projection, unsupervised clustering (by k-means and GMM) and an HMM to account for long-term temporal trends. Performance was evaluated by 30 runs of a 10-fold cross-validation. Both the k-means and the GMM-based approaches yielded high overall accuracy (86.97% and 85.03%, respectively) and, given the imbalance of the dataset, meritorious F-measures (up to 77.88%) for non-sedentary cases. Classification errors tended to be concentrated around transients, which constrains their practical impact. Hence, we consider our proposal suitable for 24 h-based monitoring of physical activity in ambulatory scenarios and a first step towards intensity-specific energy expenditure estimators.
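A minimal sketch of this kind of pipeline is shown below, with synthetic features standing in for the accelerometry and heart-rate windows, and a simple majority smoother as a stand-in for the HMM stage; all sizes and names are illustrative.

# Sketch of the clustering pipeline (synthetic features stand in for the
# accelerometry/heart-rate windows; the HMM stage is replaced here by a
# simple majority smoother to keep the example self-contained).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = rng.standard_normal((3000, 12))        # time-domain features per window

Z = PCA(n_components=5).fit_transform(StandardScaler().fit_transform(X))

km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
gmm_labels = GaussianMixture(n_components=3, random_state=0).fit_predict(Z)

def smooth(labels, half_window=5):
    """Majority vote over a sliding window (crude temporal regularizer)."""
    out = labels.copy()
    for i in range(len(labels)):
        seg = labels[max(0, i - half_window): i + half_window + 1]
        out[i] = np.bincount(seg).argmax()
    return out

km_smooth = smooth(km_labels)              # temporally regularized labels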

Relevance: 100.00%

Abstract:

This article presents a time domain approach to the flutter analysis of a missile-type wing/body configuration with concentrated structural non-linearities. The missile wing is considered fully movable and its rotation angle contains the structural freeplay-type non-linearity. Although a general formulation for flexible configurations is developed, only two rigid degrees of freedom are taken into account for the results: pitching of the whole wing/body configuration and wing rotation angle around its hinge. An unsteady aerodynamic model based on the slender-body approach is used to calculate aerodynamic generalized forces. Limit-cycle oscillations and chaotic motion below the flutter speed are observed in this study.
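To illustrate what a freeplay-type non-linearity looks like in a time-domain integration, the following single-degree-of-freedom sketch uses a dead-zone restoring moment; it is not the paper's coupled wing/body model, and all constants are illustrative.

# Minimal time-domain sketch of a freeplay-type restoring moment
# (single DOF for brevity; the paper couples pitch and wing rotation).
import numpy as np
from scipy.integrate import solve_ivp

DELTA = np.deg2rad(0.5)   # half-width of the freeplay gap (illustrative)
K = 50.0                  # stiffness outside the gap
C = 0.05                  # small structural damping

def freeplay_moment(theta):
    """Zero restoring moment inside the gap, linear outside it."""
    if abs(theta) <= DELTA:
        return 0.0
    return K * (theta - np.sign(theta) * DELTA)

def rhs(t, y):
    theta, omega = y
    return [omega, -C * omega - freeplay_moment(theta)]

sol = solve_ivp(rhs, (0.0, 20.0), [np.deg2rad(2.0), 0.0], max_step=1e-2)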

Relevance: 100.00%

Abstract:

Faced with the question of whether the university must conform to its environmental reality or should foster new realities, this research aims to demonstrate the feasibility of achieving objectifiable approaches towards sustainable development through cross-border university cooperation in insular and coastal urban environments of the Caribbean and Central American subregion. The study unfolds in four stages. In the projective stage the problem is delimited and its significance is contextualized within the theoretical plane of the sub-areas involved. A social problem was recognized that, beyond an intentional negative human action, reveals a failure to exploit a known strategic potential, whose symptoms include a generalized environmental crisis, a poor capacity of universities to respond to the demands of sustainable development, and incipient inter-institutional cooperation strategies aggravated by the fragmentation of material and communicational efforts. Approaches were contrasted and a stance was adopted before a determining fact: the university as an articulator of sustainable development, embodied in a researchable object: university cooperation as a strategy not yet focused on approaches to sustainable development. In the research approach, management was considered a non-positivist working hypothesis, leading to the application of adaptations of recognized methodologies, contextualized within a particular view of the matter under investigation. The methodological stage describes the concrete design and the procedures for addressing the problem in all its phases. In the technical stage, instruments and techniques were applied to obtain diagnostic data and to begin the design of a model of university cooperation for sustainable development. The diagnosis was based on qualitative and quantitative strategies that allowed the analysis of results from surveys, expert interviews, and structural, situational and integrated prospective analysis. The model was built on previous cooperation experiences, adopting management models of relevant scientific scope as application references. This is an applied, non-experimental socio-environmental investigation, based on the descriptive analysis of qualitative and quantitative data and conducted as a field design that ultimately became a feasible project. Information was collected through field observation and the application of instruments and focus-group-inspired dynamics at local and national scales, with subjects belonging to the university education system and to governmental and social bodies of the selected area. The study leads to the presentation of the MOP-GECUDES model, described in terms of its dimensions, variables and strategies, with 166 indicators classified into 49 categories and expressed as goals. It comprises 24 procedures supported by 47 specific instruments consisting of practical applications, methodological sheets or instruction manuals for the operationalization of the model. The design is complemented by a system of procedures arising from the experience itself, which gives the model the particular attribute of having been designed under its own principles.

Relevance: 100.00%

Abstract:

Temperature is a first-class design concern in modern integrated circuits. The important increase in power densities associated with recent technology evolutions has led to the apparition of thermal gradients and hot spots during run-time operation. Temperature impacts several circuit parameters such as speed, cooling budgets, reliability, power consumption, etc. In order to fight these negative effects, dynamic thermal management (DTM) techniques adapt the behavior of the chip relying on the information of a monitoring system that provides run-time thermal information of the die surface. The field of on-chip temperature monitoring has drawn the attention of the scientific community in recent years and is the object of study of this thesis. This thesis approaches the matter of on-chip temperature monitoring from different perspectives and levels, providing solutions to some of the most important issues. The physical and circuit levels are covered with the design and characterization of two novel temperature sensors specially tailored for DTM purposes. The first sensor is based upon a mechanism that obtains a pulse whose width varies with the dependence of the leakage currents on temperature. In a nutshell, a circuit node is charged and subsequently left floating so that it discharges through the subthreshold currents of a transistor; the time the node takes to discharge is the width of the pulse. Since the width of the pulse displays an exponential dependence on temperature, the conversion into a digital word is realized by means of a logarithmic counter that performs both the time-to-digital conversion and the linearization of the output. The structure resulting from this combination of elements is implemented in a 0.35 µm technology and is characterized by a very reduced area, 10,250 nm², and power consumption, 1.05-65.5 nW at 5 samples/s; these figures outperformed all previous works by the time it was first published and, by the time of the publication of this thesis, they still outperform all previous implementations in the same technology node. Concerning accuracy, the sensor exhibits good linearity even without calibration: it displays a 3σ error of 1.97 °C, appropriate for DTM applications. As explained, the sensor is completely compatible with standard CMOS processes; this fact, along with its tiny area and power overhead, makes it especially suitable for integration in a DTM monitoring system with a collection of on-chip monitors distributed across the chip. The exacerbated process fluctuations carried along with recent technology nodes jeopardize the linearity characteristics of the first sensor. In order to overcome these problems, a new temperature-inferring technique is proposed. In this case, we also rely on the thermal dependencies of the leakage currents that are used to discharge a floating node, but now the result comes from the ratio of two different measures, in one of which a characteristic of the discharging transistor (the gate voltage) is altered. This ratio proves to be very robust against process variations and displays a more than sufficient linearity with temperature: a 1.17 °C 3σ error considering process variations and performing two-point calibration. The implementation of the sensing part based on this new technique implies several design issues, such as the generation of a process-variation-independent voltage reference, which are analyzed in depth in the thesis. In order to perform the time-to-digital conversion, we employ the same digitization structure as the former sensor. A completely new standard cell library targeting low area and power overhead is built from scratch to implement the digitization part. Putting all the pieces together, we achieve a complete sensor system characterized by an ultra-low energy per conversion of 48-640 pJ and an area of 0.0016 mm²; this figure outperforms all previous works. To prove this statement, we perform a thorough comparison with over 40 works from the scientific literature. Moving up to the system level, the third contribution is centered on the modeling of a monitoring system consisting of a set of thermal sensors distributed across the chip. All previous works from the literature target maximizing the accuracy of the system with the minimum number of monitors. In contrast, we introduce new metrics of quality apart from just the number of sensors: we consider the power consumption, the sampling frequency, the possibility of choosing among different types of monitors, and the interconnection costs. The model is introduced in a simulated annealing algorithm that receives the thermal information of a system, its physical properties, area, power and interconnection constraints, and a collection of monitor types; the algorithm yields the selected type of monitor, the number of monitors, their position and the optimum sampling rate. We test the algorithm with the Alpha 21364 processor under several constraint configurations to prove its validity. When compared to other previous works in the literature, the modeling presented here is the most complete. Finally, the last contribution targets the networking level: given an allocated set of temperature monitors, we focus on solving the problem of connecting them in a way that is efficient from the area and power perspectives. Our first proposal in this area is the introduction of a new interconnection hierarchy level, the threshing level, between the monitors and the traditional peripheral buses, which applies data selectivity to reduce the amount of information sent to the central controller. The idea behind this new level is that in this kind of network most data are useless, because from the controller viewpoint only a small amount of data (normally the extreme values) is of interest. To cover the new interconnection level, we propose a single-wire monitoring network based on a time-domain signaling scheme that significantly reduces both the switching activity over the wire and the power consumption of the network. This scheme codes the information in the time domain and directly yields a list of values ordered from maximum to minimum. If the scheme is applied to monitors that employ time-to-digital conversion (TDC), digitization resource sharing is achieved, producing an important saving in area and power consumption. Two prototypes of complete monitoring systems are presented that significantly outperform previous works in terms of area and, especially, power consumption.

Relevance: 100.00%

Abstract:

Natural regeneration is a key ecological process that makes plant persistence possible and, consequently, constitutes an essential element of sustainable forest management. In this respect, natural regeneration in even-aged stands of Pinus pinea L. located in the Spanish Northern Plateau has not always been successfully achieved despite over a century of pine-nut-based management. As a result, natural regeneration has recently become a major concern for forest managers at a time when investment in silviculture is being rationalized. The present dissertation aims to provide answers to forest managers on this topic through the development of an integral multistage regeneration model for P. pinea stands in the region. From this model, recommendations for natural-regeneration-based silviculture can be derived under present and future climate scenarios. Also, the model structure makes it possible to detect the likely bottlenecks affecting the process. The integral model consists of five submodels corresponding to each of the subprocesses linking the stages involved in natural regeneration (seed production, seed dispersal, seed germination, seed predation and seedling survival). The outputs of the submodels represent the transitional probabilities between these stages as a function of climatic and stand variables, which in turn are representative of the ecological factors driving regeneration. At the subprocess level, the findings of this dissertation should be interpreted as follows. The scheduling of the shelterwood system currently conducted over low-density stands leads to situations of dispersal limitation from the initial stages of the regeneration period. Concerning predation, predator activity appears to be limited only by the occurrence of severe summer droughts and masting events, the summer being a favourable period for seed survival. Outside this time interval, predators were found to deplete seed crops almost totally. Given that P. pinea dissemination occurs in summer (i.e. the period safe from predation), the likelihood of a seed not being destroyed is conditional on germination occurring prior to the intensification of predator activity. However, the optimal conditions for germination seldom take place, restricting emergence to a few days during the fall. Thus, the window to reach the seedling stage is narrow. In addition, the seedling survival submodel predicts extremely high seedling mortality rates, and therefore only some individuals from large cohorts will be able to persist. These facts, along with the strong climate-mediated masting habit exhibited by P. pinea, reveal that the overall probability of establishment is low. Given this background, current management (low final stand densities resulting from intense thinning and strict felling schedules) limits the occurrence of enough favourable events to achieve natural regeneration within the current rotation time. Stochastic simulation and optimisation computed through the integral model confirm this circumstance, suggesting that more flexible and progressive regeneration fellings should be conducted. From an ecological standpoint, these results point to a reproductive strategy leading to uneven-aged stand structures, in full accordance with the medium shade-tolerance of the species. As a final remark, stochastic simulations performed under a climate-change scenario show that regeneration of the species will not be strongly hampered in the future.
This resilient behaviour highlights the fundamental ecological role played by P. pinea in demanding areas where other tree species fail to persist.
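The multistage structure can be illustrated with a minimal sketch in which the overall establishment probability is the product of the subprocess transition probabilities; the values below are placeholders, not the fitted submodel outputs.

# Sketch of the multistage structure: the probability that a seed becomes
# an established seedling is the product of the subprocess transitions
# (all probabilities below are placeholders, not fitted model values).
def establishment_probability(p_dispersal, p_escape_predation,
                              p_germination, p_survival):
    """Chain the stage-to-stage transition probabilities."""
    return p_dispersal * p_escape_predation * p_germination * p_survival

# e.g. a favourable masting year with a wet fall vs. an average year
print(establishment_probability(0.6, 0.4, 0.3, 0.05))   # 0.0036
print(establishment_probability(0.3, 0.1, 0.1, 0.02))   # 6e-05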

Relevance: 100.00%

Abstract:

The vertical dynamic actions transmitted by railway vehicles to the ballasted track infrastructure are evaluated taking into account models with different degrees of detail. In particular, we have studied this matter from a two-dimensional (2D) finite element model up to a fully coupled three-dimensional (3D) multi-body finite element model. The vehicle and track are coupled via a non-linear Hertz contact mechanism. The method of Lagrange multipliers is used to enforce the contact constraint between wheel and rail. Distributed elevation irregularities are generated based on power spectral density (PSD) distributions, which are taken into account for the interaction. The numerical simulations are performed in the time domain, using a direct integration method to solve the transient problem due to the contact non-linearities. The results obtained include contact forces, forces transmitted to the infrastructure (sleeper) by the railpads, and envelopes of relevant results for several track irregularities and speed ranges. The main contribution of this work is to identify and discuss coincidences and differences between discrete 2D models and continuum 3D models, as well as assessing the validity of evaluating the dynamic loading on the track with simplified 2D models.
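A minimal sketch of the non-linear Hertz contact law commonly used to couple vehicle and track in vertical interaction models follows; the contact constant is illustrative, as the abstract does not report one.

# Non-linear Hertz wheel-rail contact in the form commonly used for
# vertical interaction models (the contact constant below is illustrative).
import numpy as np

C_H = 1.0e11   # Hertzian constant [N/m^(3/2)], depends on profiles/materials

def hertz_force(penetration_m):
    """F = C_H * delta^(3/2) when in contact, zero at separation."""
    d = np.maximum(penetration_m, 0.0)   # no tensile contact force
    return C_H * d ** 1.5

print(hertz_force(1e-4))   # ~1e5 N (100 kN) for 0.1 mm penetration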

Relevance: 100.00%

Abstract:

In this correspondence, the conditions to use any kind of discrete cosine transform (DCT) for multicarrier data transmission are derived. The symmetric convolution-multiplication property of each DCT implies that when symmetric convolution is performed in the time domain, an element-by-element multiplication is performed in the corresponding discrete trigonometric domain. Therefore, appending symmetric redundancy (as prefix and suffix) into each data symbol to be transmitted, and also enforcing symmetry for the equivalent channel impulse response, the linear convolution performed in the transmission channel becomes a symmetric convolution in those samples of interest. Furthermore, the channel equalization can be carried out by means of a bank of scalars in the corresponding discrete cosine transform domain. The expressions for obtaining the value of each scalar corresponding to these one-tap per subcarrier equalizers are presented. This study is completed with several computer simulations in mobile broadband wireless communication scenarios, considering the presence of carrier frequency offset (CFO). The obtained results indicate that the proposed systems outperform the standardized ones based on the DFT.
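The convolution-multiplication property can be checked numerically. The sketch below builds a half-sample symmetric extension of a data block, convolves it circularly with a zero-phase symmetric channel, and verifies that the DCT-II coefficients are multiplied element-wise by the real channel response, so a one-tap-per-subcarrier equalizer recovers the data; the block length and channel taps are illustrative, not taken from the paper.

# Numerical check of the symmetric convolution-multiplication property
# behind DCT-based multicarrier systems (a sketch with assumed parameters).
import numpy as np
from scipy.fft import dct

N = 16                          # subcarriers / block length (assumed)
rng = np.random.default_rng(0)
x = rng.standard_normal(N)      # one real-valued data symbol

# Half-sample symmetric extension of the block (symmetric redundancy).
x_ext = np.concatenate([x, x[::-1]])          # length 2N, HSHS-symmetric

# Zero-phase (symmetric) equivalent channel, taps at lags 0 and ±1.
h0, h1 = 1.0, 0.4
h_ext = np.zeros(2 * N)
h_ext[0], h_ext[1], h_ext[-1] = h0, h1, h1    # even around n = 0

# Symmetric convolution realized as circular convolution of the extensions.
y_ext = np.real(np.fft.ifft(np.fft.fft(x_ext) * np.fft.fft(h_ext)))
y = y_ext[:N]                                  # samples of interest

# Element-wise multiplication in the DCT-II domain by the real response H_k;
# equivalently, one-tap equalization divides by H_k.
k = np.arange(N)
H = h0 + 2 * h1 * np.cos(np.pi * k / N)
assert np.allclose(dct(y, norm='ortho'), H * dct(x, norm='ortho'))
x_hat = dct(y, norm='ortho') / H               # one-tap per subcarrier equalizer
assert np.allclose(x_hat, dct(x, norm='ortho'))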

Relevance: 100.00%

Abstract:

Mealiness is a textural attribute related to an internal fruit disorder that involves quality loss. It is characterised by the combination of abnormal softness of the fruit and absence of free juiciness in the mouth when eaten by the consumer. Recent research concluded with the development of a precise instrumental procedure to measure a scale of mealiness based on the combination of several rheological properties and empirical magnitudes. In this line, time-domain laser reflectance spectroscopy (TDRS) is a medical technology, new to agrofood research, which is capable of obtaining physical and chemical information independently and simultaneously, and this can be of interest to characterise mealiness. Using VIS & NIR lasers as light sources, TDRS was applied in this work to Golden Delicious and Cox apples (n=90), comprising several batches of untreated samples and samples storage-treated (20°C & 95% RH) to promote the development of mealiness. The collected database was clustered into different groups according to their instrumental test values (Barreiro et al, 1998). The optical coefficients were used as explanatory variables when building discriminant analysis functions for mealiness, achieving a classification score above 80% of correctly identified mealy versus fresh apples.
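A hedged sketch of the discriminant step follows, with synthetic absorption/scattering coefficients standing in for the measured TDRS optical coefficients; the class means and spreads are invented for illustration only.

# Sketch of discriminant analysis on TDRS-style optical coefficients
# (synthetic [mu_a, mu_s'] values; class statistics are invented, not
# taken from the measured apples).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 90                                   # sample size used in the paper
fresh = rng.normal([0.04, 9.0], [0.01, 1.0], size=(n // 2, 2))
mealy = rng.normal([0.06, 7.5], [0.01, 1.0], size=(n - n // 2, 2))
X = np.vstack([fresh, mealy])            # optical coefficients per apple
y = np.array([0] * (n // 2) + [1] * (n - n // 2))

clf = LinearDiscriminantAnalysis()
print(cross_val_score(clf, X, y, cv=5).mean())   # fraction correctly classified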

Relevance: 100.00%

Abstract:

"System identification deals with the problem of building mathematical models of dynamical systems based on observed data from the system" [1]. In the context of civil engineering, the system refers to a large-scale structure such as a building, a bridge or an offshore structure, and identification mostly involves the determination of modal parameters (natural frequencies, damping ratios and mode shapes). This paper presents some modal identification results obtained using a state-of-the-art time-domain system identification method (data-driven stochastic subspace algorithms [2]) applied to output-only data measured on a steel arch bridge. First, a three-dimensional finite element model was developed for the numerical analysis of the structure using ANSYS. Modal analysis was carried out and modal parameters were extracted in the frequency range of interest, 0-10 Hz. The results obtained from the finite element modal analysis were used to determine the location of the sensors. After that, ambient vibration tests were conducted during April 23-24, 2009. The response of the structure was measured using eight accelerometers. Two stations of three sensors were formed (triaxial stations); these sensors were held stationary for reference during the test. The two remaining sensors were placed at different measurement points along the bridge deck, at which only vertical and transversal measurements were conducted (biaxial stations). Point and interval estimates have been obtained for the state-space model using these ambient vibration measurements. In the case of parametric models (like state-space models), the dynamic behaviour of a system is described using mathematical models, and mathematical relationships can be established between modal parameters and the estimated parameters (thus, it is common to use experimental modal analysis as a synonym for system identification). Stable modal parameters are found using a stabilization diagram. Furthermore, this paper proposes a method for assessing the precision of the estimates of the parameters of state-space models (confidence intervals). This approach employs the nonparametric bootstrap procedure [3] and is applied to the subspace parameter estimation algorithm. Using the bootstrap results, a plot similar to a stabilization diagram is developed; these graphics differentiate system modes from spurious noise modes for a given model order. Additionally, using the modal assurance criterion, the experimental modes obtained have been compared with those evaluated from the finite element analysis. Quite good agreement between numerical and experimental results is observed.
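A minimal sketch of the nonparametric bootstrap idea for interval estimation follows; a scalar statistic stands in for the far more involved subspace modal-parameter estimator.

# Minimal nonparametric bootstrap sketch for interval estimation of an
# identified parameter (a scalar statistic stands in for the subspace
# modal-parameter estimator).
import numpy as np

rng = np.random.default_rng(0)
data = rng.standard_normal(512) + 2.7       # surrogate "measured" records

def estimator(x):
    return x.mean()                          # placeholder for subspace estimation

boot = np.array([estimator(rng.choice(data, size=data.size, replace=True))
                 for _ in range(1000)])      # resample with replacement
lo, hi = np.percentile(boot, [2.5, 97.5])    # 95% percentile confidence interval
print(estimator(data), (lo, hi))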

Relevance: 100.00%

Abstract:

The analysis of deformation in soils is of paramount importance in geotechnical engineering. For a long time the complex behaviour of natural deposits defied the ingenuity of engineers. The time has come when, with the aid of computers, numerical methods allow the solution of virtually every problem, provided the material law can be specified with a certain accuracy. Boundary element (B.E.) techniques have recently exploded in a splendid flowering of methods and applications that compare advantageously with other well-established procedures such as the finite element method (F.E.). Their application to soil mechanics problems (Brebbia 1981) has started and will grow in the future. This paper tries to present a simple formulation for a classical problem. In fact, there is already a large body of applications of B.E. to diffusion problems (Rizzo et al, Shaw, Chang et al, Combescure et al, Wrobel et al, Roures et al, Onishi et al), and very recently the first specific application to consolidation problems has been published by Onishi et al. Here we develop an alternative formulation to the one presented in that reference. Fundamentally, the idea is to introduce a finite difference discretization in the time domain in order to use the fundamental solution of a Helmholtz-type equation governing the neutral (pore) pressure distribution, as made explicit below. Although this procedure seems to have been unappreciated in the previous technical literature, it is nevertheless effective and straightforward to implement. Indeed, for the particular problem under study it is perfectly suited, because a step-by-step interaction between the elastic and flow problems is needed. It also allows the introduction of non-linear elastic properties and time-dependent conditions very easily, as will be shown, and compares well with the performance of other approaches.
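Assuming the pore pressure p obeys the standard consolidation diffusion equation (an assumption for illustration; the paper only names the Helmholtz-type result), a backward-Euler step of size \Delta t makes the structure explicit:

\frac{\partial p}{\partial t} = c\,\nabla^{2} p
\quad\longrightarrow\quad
\frac{p^{\,n+1}-p^{\,n}}{\Delta t} = c\,\nabla^{2} p^{\,n+1}
\quad\Longrightarrow\quad
\nabla^{2} p^{\,n+1} - \frac{1}{c\,\Delta t}\,p^{\,n+1} = -\frac{1}{c\,\Delta t}\,p^{\,n},

i.e. a modified Helmholtz equation for p^{n+1} whose fundamental solution is known, so each time step can be solved with a boundary-only discretization, iterating step by step with the elastic problem.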