928 results for interfacial parameter


Relevance: 20.00%

Abstract:

Modeling and prediction of the overall elastic–plastic response and local damage mechanisms in heterogeneous materials, in particular particle-reinforced composites, is a very complex problem. Microstructural complexities, such as the inhomogeneous spatial distribution of particles, irregular particle morphology, and anisotropy in particle orientation after secondary processing such as extrusion, significantly affect deformation behavior. We have studied the effect of particle/matrix interface debonding in SiC particle reinforced Al alloy matrix composites with (a) an actual microstructure consisting of angular SiC particles and (b) idealized ellipsoidal SiC particles. Tensile deformation in SiC particle reinforced Al matrix composites was modeled using actual microstructures reconstructed by a serial sectioning approach. Interfacial debonding was modeled using user-defined cohesive zone elements. Modeling with the actual microstructure (versus idealized ellipsoids) has a significant influence on (a) the localized stresses and strains in particle and matrix, and (b) the far-field strain at which localized debonding takes place. The angular particles exhibited a higher degree of load transfer and are more sensitive to interfacial debonding. Larger decreases in stress are observed in the angular particles because their flat surfaces normal to the loading axis bear load. Furthermore, simplification of particle morphology may lead to erroneous results.
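
The abstract does not give the cohesive law used, but a bilinear traction–separation law is a common choice for user-defined cohesive zone elements; the sketch below illustrates that form with hypothetical stiffness, strength and separation values, not the ones used in the study.

```python
import numpy as np

def bilinear_traction(delta, delta_0=1e-4, delta_f=1e-3, t_max=300.0):
    """Bilinear cohesive law: traction (MPa) vs. opening displacement (mm).
    Rises linearly to t_max at delta_0, then softens to zero at delta_f;
    beyond delta_f the interface is fully debonded."""
    delta = np.asarray(delta, dtype=float)
    rising = t_max * delta / delta_0
    softening = t_max * (delta_f - delta) / (delta_f - delta_0)
    traction = np.where(delta <= delta_0, rising, softening)
    return np.clip(traction, 0.0, None)

# The area under the curve, 0.5 * t_max * delta_f, is the interfacial
# fracture energy, which controls when debonding initiates and completes.
openings = np.linspace(0.0, 1.2e-3, 7)
print(bilinear_traction(openings))
```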

Relevance: 20.00%

Abstract:

This article describes the results of an investigation into the analysis methods used in the design of scour protection for offshore wind farms in transitional waters founded on medium- and large-diameter monopile-type deep foundations.

Relevance: 20.00%

Abstract:

Providing QoS in ad hoc networks encompasses a very wide field of application from the perspective of every layer of the network architecture. Put another way, a network can be said to provide QoS when it is capable of guaranteeing reliable end-to-end communication between any pair of network nodes, by means of efficient management and administration of resources that allows a suitable differentiation of services according to the characteristics and demands of each application. The principal objective of this article is to analyze the quality-of-service parameters that reactive routing protocols such as AODV and DSR deliver in mobile ad hoc networks, with the analysis supported by the ns-2 simulator. We analyze the behavior of parameters such as effective throughput, packet loss and latency under these routing protocols, and we show which protocol presents better Quality of Service (QoS) characteristics in MANETs.
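
As an illustration of how such metrics are obtained, the sketch below computes packet delivery ratio, packet loss and mean end-to-end latency from per-packet send/receive timestamps of the kind parsed out of an ns-2 trace file; the record format and values are hypothetical, not the article's data.

```python
def qos_metrics(sent, received):
    """sent: {packet_id: t_send}, received: {packet_id: t_recv}.
    Returns delivery ratio, loss ratio, and mean end-to-end latency (s)."""
    delivered = [pid for pid in sent if pid in received]
    delivery_ratio = len(delivered) / len(sent)
    latencies = [received[pid] - sent[pid] for pid in delivered]
    mean_latency = sum(latencies) / len(latencies) if latencies else float("nan")
    return delivery_ratio, 1.0 - delivery_ratio, mean_latency

# Hypothetical timestamps, as if parsed from an ns-2 trace:
sent = {1: 0.10, 2: 0.20, 3: 0.30, 4: 0.40}
received = {1: 0.15, 2: 0.27, 4: 0.52}
print(qos_metrics(sent, received))  # (0.75, 0.25, 0.08)
```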

Relevance: 20.00%

Abstract:

Stochastic model updating must be considered to quantify the uncertainties inherent in real-world engineering structures. In this way the statistical properties of structural parameters, rather than deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly with regard to theoretical complexity and low computational efficiency. This study proposes a simple and cost-efficient method that decomposes a stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of programming simplification, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted to generate samples from the assumed or measured probability distributions of responses. Each sample corresponds to an individual deterministic inverse process predicting deterministic parameter values. The parameter means and variances can then be statistically estimated from the parameter predictions obtained by running all the samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of parameter variability. The proposed method is demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method offers similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
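
A minimal sketch of the decomposition described above, assuming a one-parameter model: a quadratic response surface is fitted to a few runs of a stand-in "FE model", Monte Carlo samples are drawn from an assumed response distribution, each sample is inverted deterministically, and the parameter statistics are collected. All names and values are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)

# 1) Surrogate: quadratic response surface fitted to a few FE runs
#    (here the "FE model" is a stand-in analytic function).
def fe_model(k):            # hypothetical FE response, e.g. a natural frequency
    return 10.0 + 2.0 * np.sqrt(k)

k_design = np.linspace(0.5, 4.0, 8)
surrogate = np.poly1d(np.polyfit(k_design, fe_model(k_design), 2))

# 2) Monte Carlo: sample responses from their (assumed) distribution.
y_samples = rng.normal(loc=13.0, scale=0.1, size=2000)

# 3) One deterministic inverse problem per sample:
#    find k such that surrogate(k) = y.
k_pred = [brentq(lambda k, y=y: surrogate(k) - y, 0.5, 4.0) for y in y_samples]

# 4) Statistics of the updated parameter.
print(f"mean = {np.mean(k_pred):.3f}, std = {np.std(k_pred):.3f}")
```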

Relevance: 20.00%

Abstract:

Flash floods are of major relevance in natural disaster management in the Mediterranean region. In many cases, the damaging effects of flash floods can be mitigated by adequate management of flood control reservoirs, which requires the development of suitable models for optimal reservoir operation. A probabilistic methodology is presented for calibrating the parameters of a reservoir flood control model (RFCM) that takes into account the stochastic variability of flood events. This study addresses the crucial problem of operating reservoirs during flood events, considering downstream river damage and dam failure risk as conflicting operation criteria. These two criteria are aggregated into a single objective: the total expected damage from both the maximum released flows and the stored volumes (overall risk index). For each selected parameter set, the RFCM is run under a wide range of hydrologic loads (determined through Monte Carlo simulation). The optimal parameter set is obtained through the overall risk index (balanced solution) and then compared with other solutions on the Pareto front. The proposed methodology is implemented at three different reservoirs in the southeast of Spain. The results show that the balanced solution offers a good compromise between the two main objectives of reservoir flood control management.
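
A toy sketch of the calibration loop, under stated assumptions: a stand-in RFCM with a single operating parameter, hypothetical damage and dam-risk curves, and Gumbel-distributed flood peaks as the Monte Carlo hydrologic loads. The overall risk index is the expected sum of the two damage terms, and the balanced solution is the parameter value that minimizes it.

```python
import numpy as np

rng = np.random.default_rng(1)

def rfcm(outflow_threshold, peak_inflow):
    """Toy stand-in for the reservoir flood control model: given an
    operating parameter and a flood peak, return (max released flow,
    max stored volume)."""
    released = np.minimum(peak_inflow, outflow_threshold)
    stored = np.maximum(peak_inflow - outflow_threshold, 0.0)
    return released, stored

def downstream_damage(q):   # hypothetical downstream damage curve
    return np.maximum(q - 300.0, 0.0) ** 1.5

def dam_risk(v):            # hypothetical dam failure risk curve
    return np.maximum(v - 150.0, 0.0) ** 2

peaks = rng.gumbel(loc=250.0, scale=80.0, size=5000)  # Monte Carlo floods

best = min(
    (np.mean(downstream_damage(q) + dam_risk(v)), thr)
    for thr in np.linspace(100.0, 600.0, 26)
    for q, v in [rfcm(thr, peaks)]
)
print(f"overall risk index = {best[0]:.1f} at threshold = {best[1]:.0f} m3/s")
```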

Relevance: 20.00%

Abstract:

Correct modeling of the equivalent circuits of solar cells and panels is today an essential tool for power optimization. However, the parameter extraction for those circuits is still a quite difficult task that normally requires both experimental data and calculation procedures, generally not available to the normal user. This paper presents a new analytical method that easily calculates the equivalent circuit parameters from the data that manufacturers usually provide. The analytical approximation is based on a new methodology, since the methods developed until now to obtain these equivalent circuit parameters from manufacturer's data have been numerical or heuristic. Results from the present method are as accurate as those from existing methods that are more complex (numerical) in terms of calculation process and resources.
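
The equivalent circuit referred to here is commonly the single-diode model. The sketch below does not reproduce the paper's analytical extraction; it only evaluates the implicit single-diode I–V equation numerically for assumed parameter values, to make the circuit being parameterized concrete.

```python
import numpy as np
from scipy.optimize import brentq

# Single-diode model: I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# Parameter values below are illustrative, not extracted from any datasheet.
Iph, I0, Rs, Rsh, n = 5.0, 1e-7, 0.2, 300.0, 1.3
Vt = 0.025852 * 36  # thermal voltage at 300 K times 36 series cells

def current(V):
    """Solve the implicit I-V equation for the module current at voltage V."""
    f = lambda I: Iph - I0 * np.expm1((V + I * Rs) / (n * Vt)) \
                  - (V + I * Rs) / Rsh - I
    return brentq(f, -1.0, Iph + 1.0)

for V in (0.0, 10.0, 18.0, 21.0):
    print(f"V = {V:5.1f} V  ->  I = {current(V):.3f} A")
```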

Relevance: 20.00%

Abstract:

In the smart building control industry, creating a platform that integrates different communication protocols and eases the interaction between users and devices is becoming increasingly important. BATMP is a platform designed to achieve this goal. In this paper, the authors describe a novel mechanism for information exchange, which introduces a new concept, the Parameter, and uses it as the common object among all the BATMP components: Gateway Manager, Technology Manager, Application Manager, Model Manager and Data Warehouse. A Parameter is an object that represents a physical magnitude and contains information about its presentation, available actions, access type, etc. Each component of BATMP holds a copy of the parameters. In the Technology Manager, three drivers for different communication protocols, KNX, CoAP and Modbus, are implemented to expose devices as parameters. In the Gateway Manager, users can control the parameters directly or by defining a scenario. In the Application Manager, applications can subscribe to parameters and decide their values by negotiation. Finally, a Negotiator is implemented in the Model Manager to notify the other components about changes taking place in any component. By applying this mechanism, BATMP ensures simultaneous and concurrent communication among users, applications and devices.
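
A hypothetical sketch of the Parameter concept with subscription-based change notification; the field names and the simplified notify path standing in for the Negotiator are assumptions, not BATMP's actual API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List

@dataclass
class Parameter:
    """Sketch of the Parameter concept: a physical magnitude plus
    presentation and access metadata, with change notification."""
    name: str
    unit: str
    access: str = "read-write"          # access type
    value: Any = None
    _subscribers: List[Callable[["Parameter"], None]] = field(default_factory=list)

    def subscribe(self, callback):
        """Applications register to be notified when the value changes."""
        self._subscribers.append(callback)

    def set_value(self, new_value):
        """A protocol driver (e.g. KNX, CoAP, Modbus) pushes a new reading;
        all subscribers are then notified of the change."""
        self.value = new_value
        for notify in self._subscribers:
            notify(self)

room_temp = Parameter(name="room_temperature", unit="degC")
room_temp.subscribe(lambda p: print(f"{p.name} -> {p.value} {p.unit}"))
room_temp.set_value(21.5)
```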

Relevance: 20.00%

Abstract:

Container terminals are complex systems in which a large number of economic actors and stakeholders interact to provide high-quality services under rigid planning schedules and economic objectives. The so-called next-generation terminals are conceived to serve the new mega-vessels, which demand productivity rates of up to 300 moves/hour. These terminals need to satisfy high standards because competition among terminals is fierce. Ensuring reliability in berth scheduling is key to attracting clients, as is reducing to a minimum the time that vessels stay in port. Operations planning is therefore becoming more complex, and the tolerances for error are smaller. In this context, operational disturbances must be reduced to a minimum. The main sources of operational disruption, and thus of uncertainty, are identified and characterized in this study. External drivers interact with the infrastructure and/or the activities, triggering failure or stoppage modes. These may result not only in operational delays but also in collateral and reputational damage or loss of time (especially management time), all of which has an impact on the terminal. In the near future, the monitoring of operational variables has great potential to bring a qualitative improvement to the operations management and planning models of terminals, whose level of automation keeps increasing. The combination of expert criteria with instruments that provide short- and long-run data is fundamental for the development of decision-support tools, since these will then be adapted to the real climatic and operational conditions that exist on site. For the short term, a methodology is proposed for obtaining forecasts of operational parameters in container terminals. A case study is then presented in which the proposed model is applied to obtain forecasts of vessel productivity. This work has been based entirely on data provided by a semi-automated container terminal in Spain. For the long term, it is analyzed how to manage, evaluate and mitigate the effect of operational disruptions by means of risk assessment, an interesting approach for evaluating the effect that uncertain but likely events can have on the long-term throughput of the terminal. In addition, a definition of operational risk is proposed, together with a discussion of the terms that best represent the nature of the activities involved; finally, guidelines for managing the results obtained are provided.
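
As a hedged illustration of the long-term risk assessment idea, the sketch below Monte Carlo-simulates annual throughput loss from a hypothetical catalogue of disruption events; the occurrence rates, mean downtimes and productivity figure are invented, not the terminal's data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical disruption catalogue: (events per year, mean downtime hours).
disruptions = {
    "high_wind":      (12.0, 3.0),
    "crane_failure":  (4.0, 8.0),
    "berth_conflict": (6.0, 2.0),
}
MOVES_PER_HOUR = 100.0   # assumed terminal productivity
YEARS = 10_000           # simulated years

loss = np.zeros(YEARS)
for rate, hours in disruptions.values():
    # Poisson event counts per year times mean downtime per event.
    loss += rng.poisson(rate, YEARS) * hours * MOVES_PER_HOUR

print(f"expected annual loss: {loss.mean():,.0f} moves "
      f"(95th percentile: {np.percentile(loss, 95):,.0f})")
```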

Relevance: 20.00%

Abstract:

Wave energy conversion differs essentially from other renewable energies in that the dependence between the device design and the energy resource is stronger. Dimensioning is therefore considered a key stage in any Wave Energy Converter (WEC) design project. Location, WEC concept, Power Take-Off (PTO) type, control strategy and hydrodynamic resonance considerations are some of the critical aspects to take into account to achieve a good performance. The paper proposes an automatic dimensioning methodology to be applied at the initial stages of a design project, and the following elements are described to carry out the study: an optimization design algorithm, its objective functions and restrictions, a PTO model, and a procedure to evaluate the WEC energy production. A parametric analysis is then included, considering different combinations of the key parameters previously introduced. A variety of study cases are analysed from the point of view of energy production for different design parameters, and all of them are compared with a reference case. Finally, a discussion is presented based on the results obtained, and some recommendations for facing the WEC design stage are given.
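
A common way to evaluate WEC energy production, and a plausible reading of the evaluation procedure mentioned above, is to combine a device power matrix with the site's wave scatter diagram; the sketch below does this with invented numbers.

```python
import numpy as np

# Hypothetical WEC power matrix: rows = significant wave height Hs (m),
# columns = energy period Te (s); entries in kW.
power_matrix = np.array([
    [ 10,  25,  20],   # Hs = 1 m
    [ 40,  90,  70],   # Hs = 2 m
    [ 80, 180, 150],   # Hs = 3 m
])
# Scatter diagram: fraction of the year each (Hs, Te) sea state occurs
# (entries sum to 1.0; values are illustrative).
occurrence = np.array([
    [0.20, 0.15, 0.05],
    [0.15, 0.20, 0.10],
    [0.05, 0.07, 0.03],
])
HOURS_PER_YEAR = 8766.0

aep_kwh = np.sum(power_matrix * occurrence) * HOURS_PER_YEAR
print(f"Annual energy production ~ {aep_kwh/1000:.0f} MWh")
```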

Relevance: 20.00%

Abstract:

The influence of the carbon nanotube (CNT) content on the fiber/matrix interfacial shear strength (IFSS) in glass fiber/epoxy composites was measured by means of push-in and push-out tests. Both experimental methodologies provided equivalent IFSS values for each material. It was found that the dispersion of CNTs increased the IFSS by 19% on average with respect to the composite without CNTs. This improvement was reached with 0.3 wt.% of CNTs; increasing the CNT content up to 0.8 wt.% did not further improve the interface strength.

Relevance: 20.00%

Abstract:

We present a study of the adsorption of two peptides at the octane–water interface. The first peptide, Lac21, exists in mixed monomer–tetramer equilibrium in bulk solution with an appreciable monomer concentration. The second peptide, Lac28, exists as a tetramer in solution, with minimal exposed hydrophobic surface. A kinetic limitation to interfacial adsorption exists for Lac28 at moderate to high surface coverage that is not observed for Lac21. We estimate the potential energy barrier for Lac28 adsorption to be 42 kJ/mol and show that this is comparable to the expected free energy barrier for tetramer dissociation. This finding suggests that, at moderate to high surface coverage, adsorption is kinetically limited by the availability of interfacially active monomeric “domains” in the subinterfacial region. We also show how the commonly used empirical equation for protein adsorption dynamics can be used to estimate the potential energy barrier for adsorption. Such an approach is shown to be consistent with a formal description of diffusion–adsorption, provided a large potential energy barrier exists. This work demonstrates that the dynamics of interfacial adsorption depend on protein thermodynamic stability, and hence structure, in a quantifiable way.
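
A minimal sketch of the barrier estimate, assuming the observed adsorption rate constant is attenuated relative to the diffusion-limited one by a Boltzmann factor, k_obs = k_diff * exp(-Ea/RT); the rate constants below are hypothetical values chosen to reproduce a barrier of about 42 kJ/mol.

```python
import numpy as np

R = 8.314    # gas constant, J/(mol K)
T = 298.15   # temperature, K

def barrier_from_rates(k_obs, k_diff, temperature=T):
    """Estimate the potential energy barrier (J/mol), assuming
    k_obs = k_diff * exp(-Ea / (R * T))."""
    return -R * temperature * np.log(k_obs / k_diff)

# Hypothetical rate constants (illustrative only):
k_diff = 1.0e-4   # diffusion-limited adsorption rate constant, m/s
k_obs = 4.4e-12   # observed rate constant at high surface coverage, m/s
Ea = barrier_from_rates(k_obs, k_diff)
print(f"Estimated barrier: {Ea/1000:.1f} kJ/mol")  # ~42 kJ/mol
```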

Relevance: 20.00%

Abstract:

The reason that the indefinite exponential increase in the number of one's ancestors does not take place is found in the law of sibling interference, which can be expressed by the following simple equation: (N_n / ASZ) × 2 = N_{n+1}, where N_n is the number of ancestors in the nth generation, ASZ is the average sibling size of these ancestors, and N_{n+1} is the number of ancestors in the next older generation (n + 1). Accordingly, the exponential increase in the number of one's ancestors is an initial anomaly that occurs while ASZ remains at 1. Once ASZ begins to exceed 1, the rate of increase in the number of ancestors is progressively curtailed, falling further and further behind the exponential rate. Eventually, ASZ reaches 2, and at that point the number of ancestors stops increasing for two generations. These two generations, named AN SA and AN SA + 1, are the most critical in the ancestry, for one's ancestors at that point come to represent all the progeny-produced adults of the entire ancestral population. Thereafter, the fate of one's ancestors becomes the fate of the entire population. If the population to which one belongs is a successful, slowly expanding one, the number of ancestors slowly declines as one moves toward the remote past, because ASZ then exceeds 2. Only when ASZ is less than 2 would the number of ancestors increase beyond the AN SA and AN SA + 1 generations. Since that is an indication of a failing population on the way to extinction, such a population must have had a previous AN SA involving a far greater number of individuals. Simulations indicated that for a member of a continuously successful population, the AN SA ancestors might have numbered as many as 5.2 million, the AN SA generation being the 28th generation in the past. However, because of the law of increasingly irrelevant remote ancestors, only a very small fraction of the AN SA ancestors would have left genetic traces in the genome of each present-day descendant.
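
The recursion is easy to simulate. The sketch below iterates the equation over a hypothetical ASZ schedule: exponential growth while ASZ = 1, a plateau at ASZ = 2, and a slow decline once ASZ exceeds 2, as described above.

```python
def ancestor_counts(asz_schedule, n0=2.0):
    """Iterate N_{n+1} = (N_n / ASZ_n) * 2, starting from one's parents."""
    counts = [n0]
    for asz in asz_schedule:
        counts.append(counts[-1] / asz * 2.0)
    return counts

# Hypothetical ASZ schedule: 1 for recent generations (exponential growth),
# then rising through 2 (plateau) and beyond (decline) for a successful,
# slowly expanding population.
schedule = [1.0] * 10 + [1.5] * 5 + [2.0] * 2 + [2.1] * 10
for generation, n in enumerate(ancestor_counts(schedule)):
    if generation % 5 == 0:
        print(f"generation {generation:2d}: {n:,.0f} ancestors")
```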

Relevance: 20.00%

Abstract:

Interfacial activation-based molecular (bio)-imprinting (IAMI) has been developed to rationally improve the performance of lipolytic enzymes in nonaqueous environments. The strategy jointly exploits (i) the known dramatic enhancement of protein conformational rigidity in a water-restricted milieu and (ii) the reported conformational changes associated with the activation of these enzymes at lipid–water interfaces, which basically involve an increased substrate accessibility to the active site and/or the induction of a more competent catalytic machinery. Six model enzymes have been assayed in several model reactions in nonaqueous media. The results, rationalized in light of the present biochemical and structural knowledge, show that the IAMI approach represents a straightforward, versatile method to generate manageable, activated (kinetically trapped) forms of lipolytic enzymes, providing under optimal conditions nonaqueous rate enhancements of up to two orders of magnitude. It is also shown that the imprintability of lipolytic enzymes depends not only on the nature of the enzyme but also on the "quality" of the interface used as the template.

Relevance: 20.00%

Abstract:

This master's research aimed to study the behavior of particulate copper in pin-on-disc tribological tests. Copper currently makes up as much as 15 wt.% of automotive brake pads, and this use is responsible for up to 70% of the particulate copper present in the air. Given the carcinogenic character of copper, its replacement is necessary. Pin-on-disc tribological tests were carried out with the addition of different interfacial media. Steel/steel tribological pairs were used in dry pin-on-disc tests with the addition of interfacial media of nanoparticulate iron oxide, graphite, and metallic copper in different particle sizes (400 µm, 20 µm and 50 nm). After the tests, samples of the pin and disc surfaces for each copper addition, as well as for the condition without an interfacial medium, were characterized by scanning electron microscopy in order to understand the behavior of the copper particles and their contribution to the friction coefficient. The copper additions produced the highest friction coefficients; among them, the friction coefficients were highest throughout the tests for the 50 nm addition, followed by 20 µm and 400 µm. SEM analysis of the tribological surfaces showed that the tested surfaces were heterogeneous with respect to the presence of oxidized debris and compacted layers. Copper was observed only on the surfaces tested with the 50 nm and 20 µm copper additions. A compact, continuous oxide film was observed only on the tribological surfaces tested without an interfacial medium and with the 400 µm copper addition.

Relevance: 20.00%

Abstract:

Considering that crude oil extracted from deep-water wells can have a water content above 50%, and that before being sent to the refinery it must contain less than 1% water, techniques for reducing the amount of water are necessary. During oil extraction, water-in-oil emulsions are formed that are very stable owing to an interfacial film containing asphaltenes and/or resins around the water droplets. This work presents the use of ultrasonic standing waves to break these emulsions. When water droplets with dimensions on the order of 10 µm, much smaller than the wavelength, are subjected to a standing acoustic field in oil, the acoustic radiation force pushes the droplets toward the pressure nodes of the wave. A coalescence cell with a center frequency around 1 MHz, consisting of four layers (a piezoelectric layer, a solid coupling layer, the liquid layer and a reflector), was modeled using the transfer matrix method, which allows the electrical impedance to be calculated as a function of frequency. To minimize the effect of the temperature gradient between the inlet and the outlet of the cell cavity during operation, two piezoelectric transducers positioned transversely to the flow were used, excited and controlled independently. A digital controller was implemented to adjust the frequency and power of each transducer. The controller takes as input the magnitude and phase of the electrical current in the transducer, and outputs the voltage amplitude and the frequency. For the cells developed, the control algorithm tracks a given resonance peak inside the cell cavity in the frequency range from 1.09 to 1.15 MHz. Acoustic separation of water-in-oil emulsions was carried out in a laboratory petroleum-processing plant at CENPES/PETROBRAS. The amount of demulsifier, the initial water content of the emulsion and the influence of the system flow rate were tested at a power of 80 W. The final water content of the emulsion showed that applying ultrasound increased water coalescence in the emulsion under all tested conditions when compared with a test without ultrasound. The residence time inside the separation cell was identified as an important factor in the coalescence of water-in-oil emulsions. The use of a chemical demulsifier is necessary to achieve separation; however, large amounts would require additional processing steps before the final delivery of the oil to the refinery. Initial water contents of 30% and 50% indicate that the use of standing waves for emulsion coalescence is not limited by this parameter. According to the laboratory results, this technique could be indicated as an alternative for integration into a primary processing system together with an electrostatic separator.
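
The radiation force that drives droplets to the pressure nodes can be illustrated with Gor'kov's expression for a sphere much smaller than the wavelength, F = 4π·φ·k·a³·E_ac·sin(2kz); the fluid properties and acoustic energy density below are typical textbook values, not the study's measurements.

```python
import numpy as np

# Primary acoustic radiation force on a droplet much smaller than the
# wavelength in a 1-D standing wave (Gor'kov theory):
#   F = 4*pi * phi * k * a^3 * E_ac * sin(2*k*z)
# A positive contrast factor phi means the droplet moves to pressure nodes.
rho_oil, c_oil = 900.0, 1450.0        # oil density (kg/m3), sound speed (m/s)
rho_w, c_w = 998.0, 1480.0            # water droplet properties

kappa_oil = 1.0 / (rho_oil * c_oil**2)   # compressibilities
kappa_w = 1.0 / (rho_w * c_w**2)
rho_t, kappa_t = rho_w / rho_oil, kappa_w / kappa_oil
phi = ((5 * rho_t - 2) / (2 * rho_t + 1) - kappa_t) / 3.0

f = 1.1e6                  # drive frequency, Hz (within 1.09-1.15 MHz)
k = 2 * np.pi * f / c_oil  # wavenumber in oil
a = 5e-6                   # droplet radius, m (10 um diameter)
E_ac = 10.0                # acoustic energy density, J/m3 (assumed)

F_max = 4 * np.pi * phi * k * a**3 * E_ac
print(f"contrast factor = {phi:.3f} (positive -> moves to pressure nodes)")
print(f"peak radiation force = {F_max:.2e} N")
```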