930 results for normalized heating parameter
Abstract:
Flash floods are of major relevance in natural disaster management in the Mediterranean region. In many cases, the damaging effects of flash floods can be mitigated by adequate management of flood control reservoirs, which requires the development of suitable models for their optimal operation. A probabilistic methodology for calibrating the parameters of a reservoir flood control model (RFCM) that takes into account the stochastic variability of flood events is presented. This study addresses the crucial problem of operating reservoirs during flood events, considering downstream river damages and dam failure risk as conflicting operation criteria. These two criteria are aggregated into a single objective of total expected damages from both the maximum released flows and stored volumes (overall risk index). For each selected parameter set, the RFCM is run under a wide range of hydrologic loads, determined through Monte Carlo simulation. The optimal parameter set is obtained through the overall risk index (balanced solution) and then compared with other solutions of the Pareto front. The proposed methodology is implemented at three different reservoirs in the southeast of Spain. The results show that the balanced solution offers a good compromise between the two main objectives of reservoir flood control management.
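The overall risk index described above can be illustrated with a minimal Monte Carlo sketch. The flood generator, the operating rule and both damage functions below are hypothetical placeholders, not the paper's RFCM; the point is only the aggregation of the two conflicting criteria into one expected value.

```python
import random

# Hedged sketch of an overall-risk-index evaluation for ONE candidate
# parameter set. Every function below is an illustrative stand-in.
rng = random.Random(42)

def simulate_flood(rng):
    """Placeholder hydrologic load: peak inflow [m3/s] with a heavy tail."""
    return 200.0 * rng.expovariate(1.0)

def operate_reservoir(peak_inflow):
    """Placeholder operating rule: attenuated outflow and a stored-volume proxy."""
    released = 0.7 * peak_inflow          # peak released flow [m3/s]
    stored = 0.3 * peak_inflow * 3600.0   # crude stored volume proxy [m3]
    return released, stored

def overall_risk_index(n_events):
    """Expected total damages over Monte Carlo flood events."""
    total = 0.0
    for _ in range(n_events):
        q_in = simulate_flood(rng)
        q_out, vol = operate_reservoir(q_in)
        damage_downstream = max(0.0, q_out - 150.0) ** 2   # hypothetical curve
        damage_dam = max(0.0, vol - 500_000.0) * 1e-3      # hypothetical curve
        total += damage_downstream + damage_dam
    return total / n_events
```

In the actual methodology, this expectation would be evaluated for each candidate parameter set and the minimizer selected as the balanced solution.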
Abstract:
Correct modeling of the equivalent circuits of solar cells and panels is today an essential tool for power optimization. However, the parameter extraction for those circuits is still quite a difficult task that normally requires both experimental data and calculation procedures generally not available to the normal user. This paper presents a new analytical method that easily calculates the equivalent circuit parameters from the data that manufacturers usually provide. The analytical approximation is based on a new methodology, since the methods developed until now to obtain these equivalent circuit parameters from manufacturer's data have always been numerical or heuristic. Results from the present method are as accurate as those obtained with other, more complex (numerical) existing methods, while the calculation process and resources required are simpler.
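As a sketch of the single-diode equivalent circuit these methods target, the snippet below evaluates the implicit equation I = I_ph − I_0·(exp((V + I·R_s)/(n·V_t)) − 1) − (V + I·R_s)/R_sh by fixed-point iteration. All parameter values are illustrative assumptions, not the analytical extraction described in the paper.

```python
import math

# Illustrative single-diode equivalent-circuit evaluation. The constants
# below are hypothetical example values, not from any real datasheet.
I_PH = 8.21      # photogenerated current [A]
I_0 = 1.2e-9     # diode saturation current [A]
N = 1.3          # diode ideality factor
R_S = 0.005      # series resistance [ohm]
R_SH = 200.0     # shunt resistance [ohm]
V_T = 0.0257     # thermal voltage kT/q at ~25 degC [V]

def cell_current(v, iters=50):
    """Solve I = I_ph - I_0*(exp((v + I*R_s)/(n*V_t)) - 1) - (v + I*R_s)/R_sh
    for I by fixed-point iteration (adequate at cell-level voltages)."""
    i = I_PH  # start from the short-circuit estimate
    for _ in range(iters):
        i = I_PH - I_0 * (math.exp((v + i * R_S) / (N * V_T)) - 1.0) \
              - (v + i * R_S) / R_SH
    return i
```

Analytical extraction methods of the kind the paper proposes work in the opposite direction: they recover I_ph, I_0, n, R_s and R_sh from the datasheet's short-circuit, open-circuit and maximum-power points.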
Abstract:
Optical hyperthermia systems based on the laser irradiation of gold nanorods seem to be a promising tool in the development of therapies against cancer. After a proof of concept in which the authors demonstrated the efficiency of this kind of system, a modeling process based on an equivalent thermal-electric circuit was carried out to determine the thermal parameters of the system, together with an energy balance obtained from the time-dependent heating and cooling temperature curves of the irradiated samples, in order to obtain the photothermal transduction efficiency. Knowing this parameter makes it possible to increase the effectiveness of the treatments, since the response of the device can be predicted for each working configuration. As an example, the thermal behavior of two different kinds of nanoparticles is compared. The results show that, under identical conditions, PEGylated gold nanorods heat more efficiently than bare nanorods and therefore result in a more effective therapy.
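The energy-balance step can be sketched as follows: during cooling the temperature decays as T(t) = T_amb + ΔT_max·exp(−t/τ), and a log-linear fit of the decay recovers the time constant τ, which fixes the heat-dissipation term used in the efficiency estimate. The data below are synthetic and the constants illustrative; this is a generic energy-balance sketch, not the authors' thermal-electric circuit model.

```python
import math

# Estimate the cooling time constant tau from temperature-decay samples
# (synthetic here). All constants are illustrative assumptions.
T_AMB = 25.0       # ambient temperature [degC]
DT_MAX = 12.0      # steady-state temperature rise [degC]
TAU_TRUE = 180.0   # time constant used to synthesize the data [s]

times = [10.0 * k for k in range(1, 31)]
temps = [T_AMB + DT_MAX * math.exp(-t / TAU_TRUE) for t in times]

# Log-linear least squares: ln(T - T_amb) = ln(DT_max) - t/tau
xs = times
ys = [math.log(T - T_AMB) for T in temps]
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
tau_fit = -1.0 / slope

# With tau known, the lumped heat-loss coefficient follows as h*A = m*c/tau.
M_C = 4.18 * 1.0   # heat capacity of a 1 g water-like sample [J/K] (assumed)
hA = M_C / tau_fit
```

With h·A known, the photothermal transduction efficiency follows from balancing the absorbed laser power against the dissipation at the steady-state temperature rise.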
Abstract:
A 2D computer simulation method for random packings is applied to sets of particles generated by a self-similar uniparametric model for particle size distributions (PSDs) in granular media. The parameter p, which controls the model, is the proportion of the mass of particles corresponding to the left half of the normalized size interval [0,1]. First, the influence of the parameter p on the total porosity is analyzed and interpreted. It is shown that this parameter, and the fractal exponent of the associated power scaling, are efficient packing parameters, although the latter does not act in the way predicted in a formerly published work addressing analogous research on artificial granular materials. The total porosity reaches its minimum value for p = 0.6. Limited information on the pore size distribution is obtained from the packing simulations and by means of morphological analysis methods. Results show that the range of pore sizes increases for decreasing values of p, and that the shape of the volume pore size distribution also changes. Further research, including simulations with a greater number of particles and higher image resolution, is required to obtain finer results on the hierarchical structure of the pore space.
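A minimal sketch of sampling particle sizes from such a self-similar PSD, assuming a cascade reading of p (each size subinterval assigns a fraction p of its mass to its left half, recursively):

```python
import random

# Self-similar, one-parameter mass cascade on the normalized size
# interval [0, 1]. Purely illustrative; not the paper's simulation code.
def sample_size(p, rng, levels=10):
    """Sample one particle size: follow `levels` bisections, going left
    with probability p (the left half carries fraction p of the mass)."""
    lo, hi = 0.0, 1.0
    for _ in range(levels):
        mid = (lo + hi) / 2.0
        if rng.random() < p:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

rng = random.Random(1)
sizes = [sample_size(0.8, rng) for _ in range(2000)]
mean_size = sum(sizes) / len(sizes)
```

For p > 0.5 the sampled mass concentrates in the small sizes; sweeping p and feeding the resulting sets of particles to the packing simulation would reproduce the porosity analysis described above.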
Abstract:
The emission of different harmful gases during the storage of solid fuels is a common phenomenon. The gases emitted during the heating of these fuels are the same as those emitted during combustion, mainly CO and CO2 [1]. Nowadays, measurement of these emissions is mandatory, which is why gas detectors are installed in many industrial facilities to monitor them. It would also be useful, however, if emissions could be predicted and the temperatures at which the emission process begins could be determined.
Abstract:
In the smart building control industry, creating a platform to integrate different communication protocols and ease the interaction between users and devices is becoming increasingly important. BATMP is a platform designed to achieve this goal. In this paper, the authors describe a novel mechanism for information exchange, which introduces a new concept, Parameter, and uses it as the common object among all the BATMP components: Gateway Manager, Technology Manager, Application Manager, Model Manager and Data Warehouse. A Parameter is an object that represents a physical magnitude and contains information about its presentation, available actions, access type, etc. Each component of BATMP has a copy of the parameters. In the Technology Manager, three drivers for different communication protocols, KNX, CoAP and Modbus, are implemented to convert devices into parameters. In the Gateway Manager, users can control the parameters directly or by defining a scenario. In the Application Manager, applications can subscribe to parameters and decide their values by negotiation. Finally, a Negotiator is implemented in the Model Manager to notify other components about the changes taking place in any component. By applying this mechanism, BATMP ensures simultaneous and concurrent communication among users, applications and devices.
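A hypothetical sketch of the Parameter concept is given below. The field names and the callback-based subscription are illustrative assumptions; the abstract does not specify this interface.

```python
# Hypothetical Parameter object: one shared representation of a physical
# magnitude, of which every BATMP component holds a copy. Illustrative only.
class Parameter:
    def __init__(self, name, magnitude, unit, access="read/write"):
        self.name = name            # identifier, e.g. "room_temp"
        self.magnitude = magnitude  # physical magnitude, e.g. "temperature"
        self.unit = unit            # presentation unit, e.g. "C"
        self.access = access        # access type
        self.value = None
        self._subscribers = []      # callbacks of subscribed applications

    def subscribe(self, callback):
        """Let an application (or the Negotiator) watch this parameter."""
        self._subscribers.append(callback)

    def update(self, new_value):
        """Set a new value and notify every subscriber of the change."""
        self.value = new_value
        for callback in self._subscribers:
            callback(self)

# Minimal demonstration: one subscriber sees the updated value.
room_temp = Parameter("room_temp", "temperature", "C")
seen = []
room_temp.subscribe(lambda prm: seen.append(prm.value))
room_temp.update(21.5)
```

A driver in the Technology Manager would map a KNX, CoAP or Modbus device onto such an object, while applications negotiate by calling `update` through the Negotiator.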
Abstract:
Container terminals are complex systems in which a large number of factors and stakeholders interact to provide high-quality services under rigid planning schedules and economic objectives. The so-called next-generation terminals are conceived to serve the new mega-vessels, which demand productivity rates of up to 300 moves/hour. These terminals need to satisfy high standards because competition among terminals is fierce. Ensuring reliability in berth scheduling is key to attracting clients, as well as to reducing to a minimum the time that vessels stay in port. For these reasons, operations planning is becoming more complex, and the tolerances for errors are smaller. In this context, operational disturbances must be reduced to a minimum. The main sources of operational disruptions, and thus of uncertainty, are identified and characterized in this study. External drivers interact with the infrastructure and/or the activities, resulting in failure or stoppage modes. The latter may result not only in operational delays but also in collateral and reputational damage or loss of time (especially management time), all of which implies an impact for the terminal. In the near future, the monitoring of operational variables has great potential to bring a qualitative improvement to the operations management and planning models of terminals, which use increasing levels of automation. The combination of expert criteria with instruments that provide short- and long-run data is fundamental for the development of tools to guide decision-making, since these will be adapted to the real climatic and operational conditions that exist on site.
For the short term, a methodology is proposed to obtain forecasts of operational parameters in container terminals. To this end, a case study is presented in which forecasts of vessel performance are obtained. This research has been based entirely on data gathered from a semi-automated container terminal in Spain. In addition, it is analyzed how to manage, evaluate and mitigate disruptions in the long term by means of risk assessment, an interesting approach to evaluating the effect of uncertain but likely events on the long-term throughput of the terminal. Finally, a definition of operational risk evaluation in port facilities is proposed, along with a discussion of the terms that best represent the nature of the activities involved, and guidelines to manage the results obtained are provided.
Abstract:
Wave energy conversion differs essentially from other renewable energies in that the dependence between the device design and the energy resource is stronger. Dimensioning is therefore considered a key stage when a design project for Wave Energy Converters (WECs) is undertaken. Location, WEC concept, Power Take-Off (PTO) type, control strategy and hydrodynamic resonance considerations are some of the critical aspects to take into account to achieve a good performance. This paper proposes an automatic dimensioning methodology to be carried out in the initial stages of a design project, and the following elements are described: an optimization design algorithm, its objective functions and restrictions, a PTO model, and a procedure to evaluate the WEC energy production. A parametric analysis is then included, considering different combinations of the key parameters previously introduced. A variety of study cases are analysed from the point of view of energy production for different design parameters, and all of them are compared with a reference case. Finally, a discussion is presented based on the results obtained, and some recommendations for facing the WEC design stage are given.
Abstract:
The reason that the indefinite exponential increase in the number of one's ancestors does not take place is found in the law of sibling interference, which can be expressed by the following simple equation: (N_n / ASZ) × 2 = N_{n+1}, where N_n is the number of ancestors in the nth generation, ASZ is the average sibling size of these ancestors, and N_{n+1} is the number of ancestors in the next older generation (n + 1). Accordingly, the exponential increase in the number of one's ancestors is an initial anomaly that occurs while ASZ remains at 1. Once ASZ begins to exceed 1, the rate of increase in the number of ancestors is progressively curtailed, falling further and further behind the exponential increase rate. Eventually, ASZ reaches 2, and at that point, the number of ancestors stops increasing for two generations. These two generations, named AN SA and AN SA + 1, are the most critical in the ancestry, for one's ancestors at that point come to represent all the progeny-produced adults of the entire ancestral population. Thereafter, the fate of one's ancestors becomes the fate of the entire population. If the population to which one belongs is a successful, slowly expanding one, the number of ancestors would slowly decline as one moves toward the remote past. This is because ASZ would exceed 2. Only when ASZ is less than 2 would the number of ancestors increase beyond the AN SA and AN SA + 1 generations.
Since the above is an indication of a failing population on the way to extinction, such a population must have had an earlier AN SA involving a far greater number of individuals. Simulations indicated that for a member of a continuously successful population, the AN SA ancestors might have numbered as many as 5.2 million, the AN SA generation being the 28th generation in the past. However, because of the law of increasingly irrelevant remote ancestors, only a very small fraction of the AN SA ancestors would have left genetic traces in the genome of each descendant of today.
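The dynamics described above can be reproduced with the recursion N_{n+1} = (N_n / ASZ) × 2 under an assumed ASZ schedule. The schedule below, with ASZ rising smoothly from 1 toward 2.5, is purely illustrative; it is chosen only to show the growth, the stall near ASZ = 2, and the subsequent decline.

```python
# Sibling-interference recursion from the abstract: N_{n+1} = (N_n / ASZ) * 2.
# The ASZ schedule is a hypothetical illustration, not data from the paper.
def asz(generation):
    """Assumed average sibling size: 1 for recent generations, rising to 2.5."""
    return 1.0 + 1.5 * min(generation / 30.0, 1.0)

counts = [1.0]  # N_0: the single present-day individual
for g in range(60):
    counts.append(counts[-1] / asz(g) * 2.0)

peak_generation = counts.index(max(counts))
```

With this schedule the count doubles while ASZ stays near 1, stalls once ASZ reaches 2 (around generation 20 here, the analogue of AN SA), and then declines toward the remote past, as the abstract describes.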
Abstract:
In TJ-II stellarator plasmas, in the electron cyclotron heating regime, an increase in the ion temperature is observed, synchronized with that of the electron temperature, during the transition to the core electron-root confinement (CERC) regime. This rise in ion temperature should be attributed to the joint action of the electron–ion energy transfer (which changes slightly during the CERC formation) and an enhancement of the ion confinement. This improvement must be related to the increase in the positive electric field in the core region. In this paper, we confirm this hypothesis by estimating the ion collisional transport in TJ-II under the physical conditions established before and after the transition to CERC. We calculate a large number of ion orbits in the guiding-centre approximation considering the collisions with a background plasma composed of electrons and ions. The ion temperature profile and the thermal flux are calculated in a self-consistent way, so that the change in the ion heat transport can be assessed.
Abstract:
The objective of the present work was to determine the Warner-Bratzler shear force of the Longissimus lumborum muscle of intact male zebu cattle (Bos indicus) during the aging period, within the normal (pH between 5.5 and 5.8) and abnormal (pH between 5.81 and 6.19) ranges of final pH (pHf, 48 hours post mortem) and at different internal cooking temperatures. Along with the shear-force evaluation, the degradation of desmin and troponin T, sarcomere length, total and soluble collagen content, the maximum protein denaturation temperatures, and the overall aggregation morphology of the muscle fibers during cooking were also evaluated. Degradation of desmin and troponin T was greater at normal pHf, with degradation products appearing from day 7 onward in that pHf range. There were no differences in sarcomere length values, thus ruling out the contribution of this parameter to the maximum denaturation temperature (Tmax) of the proteins, determined using differential scanning calorimetry (DSC). Similarly, no differences were found in total and soluble collagen contents, and total collagen values were low, suggesting that their contribution to the second thermal transition and to shear-force values was minimal. Tmax1 and Tmax2, corresponding to the denaturation of light and heavy meromyosin, respectively, were lower at normal pHf, but the effect was greater for Tmax2. The Tmax3 of actin and titin increased up to 14 days post mortem in the normal pHf range and then decreased significantly after 21 days, suggesting possible degradation of these proteins during this period. No differences in Tmax values were found at abnormal pHf on any day post mortem, which suggests the contribution of a possible protective mechanism that stabilizes the myofibrils during heating.
There was greater aggregation of the muscle fibers at normal pHf at internal cooking temperatures of 65 and 80 °C, probably due to greater thermal denaturation of the myofibrils. Shear-force values increased with internal cooking temperature, owing to the increased thermal denaturation of the muscle myofibrils. Regardless of internal cooking temperature, shear-force values were high on almost all days post mortem for both pHf ranges, which suggests the need to use physical or chemical methods to increase the tenderness of the Longissimus lumborum muscle of zebu cattle.
Abstract:
The viability of carbon nanofiber (CNF) composites in cement matrices as a self-heating material is reported in this paper. This functional application would allow the use of CNF cement composites as a heating element in buildings, or for deicing pavements of civil engineering transport infrastructures, such as highways or airport runways. Cement pastes with different CNF dosages (from 0 to 5% by cement mass) were prepared. Afterwards, tests were run at different fixed voltages (50, 100 and 150 V), and the temperature of the specimens was registered. The possibility of using a casting method like shotcrete, instead of simply pouring the fresh mix into the mold (with no loss of system efficiency expected), was also studied. Temperatures of up to 138 °C were registered during tests of the shotcrete 5% CNF cement paste (showing initial heating rates of 10 °C/min). However, a minimum voltage was required in order to achieve proper system functioning.
Abstract:
This research studies the self-heating produced by the application of an electric current to conductive cement pastes containing carbonaceous materials. The main parameters studied were the type and percentage of carbonaceous materials, the effect of moisture, electrical resistance, power consumption, the maximum temperature reached and its evolution, and ice-melting kinetics. A mathematical model is also proposed, which predicts that the degree of heating can be adjusted through the applied voltage. Finally, the results have been applied to show that the cementitious materials studied are feasible for controlling ice layers on transportation infrastructures.
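The voltage dependence predicted by such a model can be illustrated with a minimal lumped energy balance, where the Joule power V²/R is lost through an overall coefficient h·A. All constants below are illustrative assumptions, not fitted values from this study.

```python
# Minimal lumped Joule-heating sketch: the applied voltage sets the power
# P = V^2 / R, which drives the paste toward T_amb + P / (h*A).
# Illustrative constants only.
R = 40.0     # electrical resistance of the specimen [ohm]
H_A = 0.8    # overall heat-loss coefficient h*A [W/K]
T_AMB = 5.0  # ambient temperature [degC]

def steady_temperature(voltage):
    """Steady-state specimen temperature for a given applied voltage."""
    power = voltage ** 2 / R       # Joule power [W]
    return T_AMB + power / H_A     # lumped steady-state energy balance
```

Doubling the voltage quadruples the power and hence the steady temperature rise above ambient, which is why the degree of heating is adjustable through the applied voltage.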
Abstract:
The susceptibility of clay-bearing rocks to weathering (erosion and/or differential degradation) is known to influence the stability of heterogeneous slopes. However, not all of these rocks show the same behaviour, as there are considerable differences in the speed and type of weathering observed. As such, it is very important to establish relationships between the behaviour quantified in a laboratory environment and that observed in the field. The slake durability test is the laboratory test most commonly used to evaluate the relationship between slaking behaviour and rock durability. However, it has a number of disadvantages: it does not account for changes in the shape and size of fragments retained in the 2 mm sieve, nor does its most commonly used index (Id2) accurately reflect the weathering behaviour observed in the field. The main aim of this paper is to propose a simple methodology, for use by practitioners, for characterizing the weathering behaviour of carbonate lithologies that outcrop in heterogeneous rock masses (such as Flysch slopes). To this end, the Potential Degradation Index (PDI) is proposed, calculated from the fragment size distribution curves of the material retained in the drum after each cycle of the slake durability test, with the number of slaking cycles increased to five. Through laboratory testing of 117 samples of carbonate rocks extracted from strata in selected slopes, six different rock types were established based on their slaking behaviour, corresponding to the different weathering behaviours observed in the field.