841 results for physically-based model
Abstract:
Despite numerous studies of nitrogen cycling in forest ecosystems, many uncertainties remain, especially regarding longer-term nitrogen accumulation. To help fill this gap, the dynamic process-based model TRACE, which can simulate 15N tracer redistribution in forest ecosystems, was used to study N-cycling processes in a mountain spruce forest at the northern edge of the Alps in Switzerland (Alptal, SZ). Most modeling analyses of N cycling and C-N interactions have very limited ability to determine whether process interactions are captured correctly. Because the interactions in such a system are complex, it is possible to get whole-system C and N cycling right in a model without knowing whether the way the model combines fine-scale interactions to derive whole-system cycling is correct. By simulating 15N tracer redistribution across ecosystem compartments, TRACE provides a very powerful tool for validating the fine-scale processes captured by the model. We first adapted the model to the new site (Alptal, Switzerland; a long-term low-dose N-amendment experiment) by including a new algorithm for preferential water flow and by parameterizing differences in drivers such as climate, N deposition and initial site conditions. After calibrating key rates such as NPP and SOM turnover, we simulated patterns of 15N redistribution and compared them against 15N field observations from a large-scale labeling experiment. The comparison of the 15N field data with the modeled redistribution of the tracer in the soil horizons and vegetation compartments shows that the majority of fine-scale processes are captured satisfactorily. In particular, the model reproduces the fact that the largest part of the N deposition is immobilized in the soil. The discrepancies in 15N recovery in the LF and M soil horizons can be explained by the application method of the tracer and by the retention of the applied tracer in the well-developed moss layer, which is not considered in the model. Discrepancies in the dynamics of foliage and litterfall 15N recovery were also observed and are related to the longevity of the needles in our mountain forest. As a next step, we will use the final Alptal version of the model to calculate the effects of climate change (temperature, CO2) and N deposition on ecosystem C sequestration in this regionally representative Norway spruce (Picea abies) stand.
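TRACE's equations are not given in this abstract; as a rough illustration of how a tracer model of this kind bookkeeps a 15N pulse across coupled ecosystem pools, the following sketch uses a minimal first-order box model. The pool names and rate constants are hypothetical placeholders, not TRACE parameters.

```python
import numpy as np

# Minimal first-order box model of a 15N pulse moving between coupled
# ecosystem pools. Pool names and rate constants (per year) are
# hypothetical placeholders, not TRACE parameters.
pools = ["foliage", "litter", "soil"]
K = np.array([
    [-0.5,  0.0,  0.1],  # foliage: loses litterfall, gains root uptake
    [ 0.5, -0.8,  0.0],  # litter: gains litterfall, loses decomposition
    [ 0.0,  0.8, -0.1],  # soil: gains decomposition, loses plant uptake
])

n15 = np.array([0.0, 1.0, 0.0])   # tracer applied to the litter layer
dt = 0.01                         # time step [yr]
for _ in range(int(30 / dt)):     # 30 simulated years
    n15 = n15 + dt * (K @ n15)    # forward-Euler update, mass-conserving K

for name, frac in zip(pools, n15):
    print(f"{name:8s}: {frac:.3f} of applied 15N")
```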
Abstract:
Ethanol-gasoline fuel blends are increasingly used in spark ignition (SI) engines owing to the continued growth of renewable fuels under renewable portfolio standards (RPS). This creates the need for a simple and accurate combustion model for ethanol-gasoline blends that is applicable to one-dimensional engine simulation. A parametric combustion model has been developed, integrated into an engine simulation tool, and validated using SI engine experimental data. The parametric combustion model was built inside a user compound in GT-Power. In this model, selected burn durations were computed using correlations expressed as functions of physically based non-dimensional groups, developed from an experimental engine database spanning a wide range of ethanol-gasoline blends, engine geometries, and operating conditions. A correlation for the coefficient of variation (COV) of gross indicated mean effective pressure (IMEP) was also added to the parametric combustion model; it enables modeling of cycle-to-cycle combustion variation as a function of engine geometry and operating conditions. The computed burn durations were then used to fit single and double Wiebe functions. The single-Wiebe parametric combustion compound used the least-squares method to compute the single-Wiebe parameters, while the double-Wiebe compound used an analytical solution to compute the double-Wiebe parameters. These compounds were then integrated into the engine model in GT-Power through the multi-Wiebe combustion template, in which the values of the Wiebe parameters (single-Wiebe or double-Wiebe) were sensed via RLT-dependence. The parametric combustion models were validated by overlaying the simulated pressure traces from GT-Power onto experimentally measured pressure traces. A thermodynamic engine model was also developed to study the effect of fuel blends, engine geometries and operating conditions on both the burn durations and the COV of gross IMEP simulation results.
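As context for the single-Wiebe compound described above, the sketch below fits the standard single-Wiebe mass-fraction-burned form, x_b(θ) = 1 − exp[−a((θ − θ0)/Δθ)^(m+1)], to a burn profile by least squares. The data and starting values are synthetic placeholders, not the engine database used in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def wiebe(theta, theta0, dtheta, a, m):
    """Single-Wiebe mass fraction burned: 1 - exp(-a*((th-th0)/dth)^(m+1)).
    theta0: start of combustion [deg], dtheta: burn duration [deg]."""
    x = np.clip((theta - theta0) / dtheta, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1.0))

# Synthetic "measured" burn profile standing in for heat-release data.
theta = np.linspace(-20.0, 60.0, 200)           # crank angle [deg aTDC]
mfb = wiebe(theta, -5.0, 45.0, 6.908, 2.0)      # a = 6.908 -> 99.9% burned
mfb += np.random.normal(0.0, 0.01, theta.size)  # measurement noise

# Least-squares fit of the four Wiebe parameters.
popt, _ = curve_fit(wiebe, theta, mfb, p0=(-10.0, 40.0, 5.0, 2.0),
                    bounds=((-30.0, 10.0, 1.0, 0.5), (10.0, 90.0, 12.0, 4.0)))
print("theta0 = %.1f deg, dtheta = %.1f deg, a = %.2f, m = %.2f" % tuple(popt))
```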
Abstract:
The work described in this thesis had two objectives. The first was to develop a physically based computational model that could predict the electronic conductivity, Seebeck coefficient, and thermal conductivity of Pb1-xSnxTe alloys over the 400 K to 700 K temperature range as a function of Sn content and doping level. The second was to determine how the secondary phase inclusions observed in Pb1-xSnxTe alloys made by consolidating mechanically alloyed elemental powders affect the ability of the material to harvest waste heat and generate electricity in the 400 K to 700 K temperature range. The motivation for this work was that, although the promise of this alloy as an unusually efficient thermoelectric power generation material in the 400 K to 700 K range had been demonstrated in the literature, methods to reproducibly control and subsequently optimize the material's thermoelectric figure of merit remained elusive. Mechanical alloying, though not typically used to fabricate these alloys, is a potential method for cost-effectively engineering these properties. Given that mechanically alloyed material deviates from crystalline perfection, for example through secondary phase inclusions, the question arises whether these defects are detrimental to thermoelectric function or, alternatively, whether they enhance it. The hypothesis formed at the outset of this work was that the small secondary phase SnO2 inclusions observed in the mechanically alloyed Pb1-xSnxTe would increase the thermoelectric figure of merit of the material over the temperature range of interest. It was proposed that the increase would arise because the inclusions would not reduce the electrical conductivity to as great an extent as the thermal conductivity. If true, the experimentally measured electronic conductivity in mechanically alloyed Pb1-xSnxTe alloys with these inclusions would not be less than that expected in alloys without them, while the portion of the thermal conductivity not due to charge carriers (the lattice thermal conductivity) would be less than that expected for alloys without inclusions. Furthermore, it would be possible to approximate the observed changes in the electrical and thermal transport properties using existing physical models for the scattering of electrons and phonons by small inclusions. The approach taken to investigate this hypothesis was, first, to experimentally characterize the mobile carrier concentration at room temperature, along with the extent and type of secondary phase inclusions present, in a series of three mechanically alloyed Pb1-xSnxTe alloys with different Sn content. Second, the physically based computational model was developed and used to determine what the electronic conductivity, Seebeck coefficient, total thermal conductivity, and the portion of the thermal conductivity not due to mobile charge carriers would be in these particular Pb1-xSnxTe alloys in the absence of secondary phase inclusions. Third, the electronic conductivity, Seebeck coefficient and total thermal conductivity were experimentally measured for these three alloys, with inclusions present, at elevated temperatures. The model predictions for electrical conductivity and Seebeck coefficient were directly compared to the experimental elevated-temperature electrical transport measurements.
The computational model was then used to extract the lattice thermal conductivity from the experimentally measured total thermal conductivity. This lattice thermal conductivity was then compared to what would be expected from the alloys in the absence of secondary phase inclusions. Secondary phase inclusions were determined by X-ray diffraction analysis to be present in all three alloys to varying extents. The inclusions were found not to significantly degrade electrical conductivity at temperatures above ~400 K in these alloys, though they do dramatically impact electronic mobility at room temperature. It is shown that, at temperatures above ~400 K, electrons are scattered predominantly by optical and acoustic phonons rather than by an alloy scattering mechanism or by the inclusions. The experimental electrical conductivity and Seebeck coefficient data at elevated temperatures were found to be within ~10% of what would be expected for material without inclusions. The inclusions were not found to reduce the lattice thermal conductivity at elevated temperatures. The experimentally measured thermal conductivity data were found to be consistent with the lattice thermal conductivity that would arise from two scattering processes: phonon-phonon scattering (Umklapp scattering) and the scattering of phonons by the disorder induced by the formation of a PbTe-SnTe solid solution (alloy scattering). In contrast to the electrical transport case, the alloy scattering mechanism in thermal transport is shown to be a significant contributor to the total thermal resistance. An estimate of the extent to which the mean free time between phonon scattering events would be reduced by the presence of the inclusions is consistent with the above analysis of the experimental data. The first important result of this work was the development of an experimentally validated, physically based computational model that can be used to predict the electronic conductivity, Seebeck coefficient, and thermal conductivity of Pb1-xSnxTe alloys over the 400 K to 700 K temperature range as a function of Sn content and doping level. This model will be critical in future work, first as a tool to determine the highest thermoelectric figure of merit one can expect from this alloy system at a given temperature and, second, as a tool to determine the optimum Sn content and doping level to achieve this figure of merit. The second important result of this work is the determination that the secondary phase inclusions observed in the Pb1-xSnxTe made by mechanical alloying do not keep the material from having the same electrical and thermal transport that would be expected from "perfect" single-crystal material at elevated temperatures. The analytical approach described in this work will be critical in future investigations to predict how changing the size, type, and volume fraction of secondary phase inclusions can be used to tailor thermal and electrical transport in this materials system.
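As a minimal worked illustration of the quantities involved, the sketch below evaluates the dimensionless figure of merit zT = S²σT/κ and estimates the lattice thermal conductivity by subtracting a Wiedemann-Franz electronic part. Note that the thesis extracts the lattice conductivity with its own transport model; the Lorenz-number shortcut and all numerical values here are assumptions for illustration only.

```python
# Figure of merit and lattice thermal conductivity, in SI units.
# The thesis extracts kappa_lattice with its own transport model; this
# sketch substitutes the simpler Wiedemann-Franz estimate kappa_e = L*sigma*T.
L0 = 2.44e-8  # degenerate-limit Lorenz number [W Ohm / K^2]

def zt(S, sigma, kappa_total, T):
    """Dimensionless figure of merit zT = S^2 * sigma * T / kappa."""
    return S**2 * sigma * T / kappa_total

def kappa_lattice(sigma, kappa_total, T, lorenz=L0):
    """Lattice part: total conductivity minus the electronic part."""
    return kappa_total - lorenz * sigma * T

# Illustrative values (not measurements from the thesis):
S, sigma, kappa, T = 200e-6, 5e4, 1.8, 600.0  # V/K, S/m, W/(m K), K
print("zT        =", round(zt(S, sigma, kappa, T), 2))
print("kappa_lat =", round(kappa_lattice(sigma, kappa, T), 2), "W/(m K)")
```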
Abstract:
Plant diversity has been shown to influence the water cycle of forest ecosystems through differences in water consumption and the associated effects on groundwater recharge. However, the effects of biodiversity on soil water fluxes remain poorly understood for native tree species plantations in the tropics. We therefore estimated soil water fluxes and assessed the effects of tree species and diversity on these fluxes in an experimental native tree species plantation in Sardinilla (Panama). The study was conducted during the 2008 wet season on plots of monocultures and mixtures of three or six tree species. Rainfall and soil water content were measured, and evapotranspiration was estimated with the Penman-Monteith equation. Soil water fluxes were estimated using a simple soil water budget model considering water input, output, and changes in soil water and groundwater storage, and were additionally simulated using the physically based one-dimensional water flow model Hydrus-1D. In general, the Hydrus simulation did not reflect the observed pressure heads: modeled pressure heads were higher than measured ones. The results of the water balance equation (WBE), on the other hand, reproduced observed water use patterns well. In monocultures, the downward fluxes through the 200 cm depth plane were highest below Hura crepitans (6.13 mm day−1) and lowest below Luehea seemannii (5.18 mm day−1). The average seepage rate in monocultures (±SE) was 5.66 ± 0.18 mm day−1 and therefore, according to overyielding analyses, significantly higher than below six-species mixtures (5.49 ± 0.04 mm day−1). The three-species mixtures had an average seepage rate of 5.63 ± 0.12 mm day−1, which did not differ significantly from the average of the corresponding species in monocultures. Seepage rates were driven by transpiration, which varied with the biomass among the plots (r = 0.61, p = 0.017). Thus, a mixture of trees with different growth rates resulted in moderate seepage rates compared to monocultures of either fast-growing or slow-growing tree species. Our results demonstrate that species-specific biomass production and tree diversity are important controls on seepage rates in the Sardinilla plantation during the wet season.
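A minimal sketch of the water-balance bookkeeping behind the WBE approach, with seepage computed as the residual of rainfall, evapotranspiration and storage change; the daily values are invented placeholders, not Sardinilla data.

```python
# Daily soil water budget: downward flux through the 200 cm plane as the
# residual of rainfall, evapotranspiration and storage change.
# All values are invented placeholders, not Sardinilla measurements.
def seepage(rain_mm, et_mm, d_storage_mm):
    """Deep drainage D = P - ET - dS, all in mm per day."""
    return rain_mm - et_mm - d_storage_mm

rain = [12.0, 0.0, 25.4, 3.2]   # daily rainfall [mm]
et   = [3.5, 4.1, 3.0, 3.8]     # Penman-Monteith ET estimate [mm]
ds   = [2.3, -4.1, 16.0, -5.9]  # soil + groundwater storage change [mm]

daily = [seepage(p, e, s) for p, e, s in zip(rain, et, ds)]
print("mean seepage: %.2f mm/day" % (sum(daily) / len(daily)))
```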
Abstract:
The induction of late long-term potentiation (L-LTP) involves complex interactions among second-messenger cascades. To gain insight into these interactions, a mathematical model was developed for L-LTP induction in the CA1 region of the hippocampus. The differential-equation-based model represents the actions of protein kinase A (PKA), MAP kinase (MAPK), and CaM kinase II (CaMKII) in the vicinity of the synapse, and the activation of transcription by CaM kinase IV (CaMKIV) and MAPK. L-LTP is represented by increases in a synaptic weight. Simulations suggest that steep, supralinear stimulus-response relationships between stimuli (e.g., elevations in [Ca(2+)]) and kinase activation are essential for translating brief stimuli into long-lasting gene activation and synaptic weight increases. Convergence of multiple kinase activities to induce L-LTP helps generate a threshold whereby the amount of L-LTP varies steeply with the number of brief (tetanic) electrical stimuli. The model simulates tetanic, theta-burst, pairing-induced, and chemical L-LTP, as well as L-LTP due to synaptic tagging. The model also simulates inhibition of L-LTP by inhibition of MAPK, CaMKII, PKA, or CaMKIV. The model predicts the results of experiments designed to delineate the mechanisms underlying L-LTP induction and expression. For example, the cAMP antagonist Rp-cAMPS, which inhibits L-LTP induction, is predicted to inhibit ERK activation. The model also appears useful for clarifying similarities and differences between hippocampal L-LTP and long-term synaptic strengthening in other systems.
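To make the "steep, supralinear stimulus-response" idea concrete, here is a toy sketch in which kinase activation follows a high-order Hill function of Ca²⁺ and drives a synaptic weight; the equations and parameters are illustrative stand-ins, not those of the published model.

```python
# Toy model of steep, supralinear kinase activation driving a synaptic
# weight. Equations and parameters are illustrative, not the published ones.
def hill(ca, K=0.6, n=4):
    """High-order Hill function: small Ca2+ changes near K give large
    changes in activation, the 'steep stimulus-response' property."""
    return ca**n / (K**n + ca**n)

dt, w, kin = 0.01, 1.0, 0.0
for step in range(int(60.0 / dt)):          # 60 s of simulated time
    t = step * dt
    ca = 1.0 if (t % 20.0) < 1.0 else 0.1   # brief tetani every 20 s
    kin += dt * (hill(ca) - 0.5 * kin)      # kinase activation and decay
    w += dt * 0.05 * kin                    # weight driven by kinase activity
print("final synaptic weight: %.3f" % w)
```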
Modelling the effects of land use and climate changes on hydrology in the Ursern Valley, Switzerland
Abstract:
While many studies have examined the impact of climate change on the hydrology of mountainous catchments, the interactions between climate change and land use have largely unknown impacts on hydrology in alpine regions. They need special attention in order to devise possible strategies for the general development of these regions. Thus, the main aim was to examine the impact of land use change (i.e., bushland expansion) and climate change (i.e., temperature increase) on hydrology through model simulations. For this purpose, the physically based WaSiM-ETH model was applied to the catchment of the Ursern Valley in the central Alps (191 km²) over the period 1983−2005. Modelling results showed that the reduction of the mean monthly discharge during the summer period is due primarily to the temporal shift of snowmelt discharge and secondarily to the reduction in glacier surface area together with the temporal shift of glacier discharge, rather than to increased evapotranspiration caused by the expansion of green alder at the expense of grassland. The significant decrease in summer discharge during July, August and September indicates a change in the regime from glacio-nival to nivo-glacial. These changes are confirmed by the modelling results, which attest to a temporal shift in snowmelt and glacier discharge towards earlier in the year: March, April and May for snowmelt, and May and June for glacier discharge. The yearly total discharge is expected to be reduced by 0.6% in the near future due to land use changes alone, and by about 5% if climate change is also taken into account. Copyright © 2013 John Wiley & Sons, Ltd.
Abstract:
Using a new Admittance-based model for electrical noise able to handle Fluctuations and Dissipations of electrical energy, we explain the phase noise of oscillators that use feedback around L-C resonators. We show that Fluctuations produce the Line Broadening of their output spectrum around its mean frequency f0 and that the Pedestal of phase noise far from f0 comes from Dissipations modified by the feedback electronics. The charge noise power 4FkT/R C²/s that disturbs the otherwise periodic fluctuation of charge these oscillators aim to sustain in their L-C-R resonator is what creates their phase noise, proportional to Leeson's noise figure F and to the charge noise power 4kT/R C²/s of their capacitance C, which today's modelling would consider as the current noise density in A²/Hz of their resistance R. Linked with this (A²/Hz ↔ C²/s) equivalence, R becomes a random series in time of discrete chances to Dissipate energy in Thermal Equilibrium (TE), giving a similar series of discrete Conversions of electrical energy into heat when the resonator is out of TE due to the Signal power it handles. Therefore, phase noise reflects the way oscillators sense thermal exchanges of energy with their environment.
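For reference, the standard Leeson expression for single-sideband phase noise is reproduced below; its noise-figure term F is what the abstract reinterprets via the charge noise power 4FkT/R in C²/s.

```latex
% Standard Leeson expression for single-sideband phase noise, shown for
% context; the F term is what the abstract reinterprets via the charge
% noise power 4FkT/R in C^2/s. P_s: signal power, Q_L: loaded Q,
% f_0: carrier frequency, f_m: offset frequency, f_c: flicker corner.
\[
  \mathcal{L}(f_m) = 10\log_{10}\!\left[
    \frac{2FkT}{P_s}
    \left(1 + \Bigl(\frac{f_0}{2Q_L f_m}\Bigr)^{2}\right)
    \left(1 + \frac{f_c}{f_m}\right)
  \right]
\]
```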
Abstract:
This paper presents a physically cogent model for electrical noise in resistors that has been obtained from thermodynamic reasoning. This new model, derived from the works of Johnson and Nyquist, also agrees with the quantum model for noisy systems given by Callen and Welton in 1951, thus unifying these two physical viewpoints. It is a Complex or 2-D noise model based on an Admittance that considers both Fluctuation and Dissipation of electrical energy, improving on the Real or 1-D model in use, which considers only Dissipation. Through the two orthogonal currents linked with a common voltage noise by an Admittance function, the new model is presented in the frequency domain. Its use in the time domain reveals the pitfall behind a paradox of Statistical Mechanics, namely systems considered energy-conserving and deterministic on the microscale yet dissipative and unpredictable on the macroscale, and also shows how to use the Fluctuation-Dissipation Theorem properly.
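The two classical results the new model is said to unify can be stated compactly; these are the textbook forms, with the Callen-Welton density reducing to the Johnson-Nyquist one when hf ≪ kT.

```latex
% The two densities the model is said to unify: the classical
% Johnson-Nyquist result and Callen-Welton's quantum generalization,
% which recovers 4kTR when hf << kT.
\[
  S_V(f) = 4kTR
  \qquad \text{(Johnson--Nyquist)}
\]
\[
  S_V(f) = 2Rhf\,\coth\!\left(\frac{hf}{2kT}\right)
  \qquad \text{(Callen--Welton)}
\]
```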
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major ones, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that are perhaps not so well known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools still have some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and integration of several linguistic tools into an appropriate software architecture could most likely overcome the limitation stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
In addition, most high-level annotation tools rely on other, lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
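The error-propagation point can be seen in a toy pipeline in which a sense tagger consumes POS tags: a wrong POS tag upstream yields a wrong or missing sense downstream. Both taggers below are hypothetical lookup stubs, not real annotation tools.

```python
# Toy two-stage pipeline showing error propagation: the sense tagger
# trusts the POS tagger, so an upstream mistake yields a wrong or missing
# sense downstream. Both "taggers" are hypothetical lookup stubs.
POS_LEXICON = {"the": "DET", "river": "NOUN", "bank": "NOUN"}
SENSES = {("bank", "NOUN"): "bank%land_beside_water",
          ("flows", "VERB"): "flow%move_steadily"}

def pos_tag(tokens):
    # Faulty low-level tool: unknown words default to NOUN,
    # so the verb "flows" is mis-tagged.
    return [(tok, POS_LEXICON.get(tok, "NOUN")) for tok in tokens]

def sense_tag(tagged):
    # Higher-level tool: a wrong POS key means no sense is found (None).
    return [(tok, pos, SENSES.get((tok, pos))) for tok, pos in tagged]

print(sense_tag(pos_tag("the river bank flows".split())))
# ('flows', 'NOUN', None): the POS error propagated to the semantic level.
```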
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. Then again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
The existing seismic isolation systems are based on well-known and accepted physical principles, but they still have some functional drawbacks. As an attempt at improvement, the Roll-N-Cage (RNC) isolator has recently been proposed. It is designed to achieve a balance between controlling isolator displacement demands and structural accelerations. It provides, in a single unit, all the necessary functions of vertical rigid support, horizontal flexibility with enhanced stability, resistance to low service loads and minor vibration, and hysteretic energy dissipation. It is characterized by two unique features: a self-braking (buffer) mechanism and a self-recentering mechanism. This paper presents an advanced representation of the main and unique features of the RNC isolator using an available finite element code, SAP2000. The validity of the obtained SAP2000 model is checked against experimental, numerical and analytical results. The paper then investigates the merits and demerits of activating the built-in buffer mechanism with respect to both structural pounding mitigation and isolation efficiency. It addresses the problem of passive alleviation of possible inner pounding within the RNC isolator, which may arise from the activation of its self-braking mechanism under severe excitations such as near-fault earthquakes. The results show that the obtained finite-element model can closely match and accurately predict the overall behavior of the RNC isolator with small errors. Moreover, the inherent buffer mechanism of the RNC isolator could mitigate or even eliminate direct structure-to-structure pounding under severe excitation, given limited separation gaps between adjacent structures. In addition, increasing the inherent hysteretic damping of the RNC isolator can efficiently limit its peak displacement together with the severity of any inner pounding that develops and, therefore, alleviate or even eliminate the possible negative effects of the buffer mechanism on the overall RNC-isolated structural responses.
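As a conceptual sketch of the buffer idea only (not the RNC constitutive law modeled in SAP2000), the following computes a bilinear isolator restoring force plus a stiff spring that engages once travel exceeds a design gap; all stiffness and gap values are hypothetical.

```python
import numpy as np

# Conceptual sketch of the buffer idea only, not the RNC constitutive law
# modeled in SAP2000: a bilinear isolator backbone plus a stiff spring
# that engages once travel exceeds the design gap, braking displacement.
def isolator_force(u, k_post=1.0, q_d=0.05, u_gap=0.25, k_buffer=50.0):
    """u: isolator displacement [m]; returns normalized restoring force.
    q_d approximates the hysteretic (characteristic) strength."""
    f = k_post * u + q_d * np.sign(u)        # bilinear backbone
    over = max(abs(u) - u_gap, 0.0)          # travel beyond the gap
    return f + k_buffer * over * np.sign(u)  # buffer engages past the gap

for u in (0.10, 0.25, 0.30):
    print(f"u = {u:.2f} m -> F = {isolator_force(u):.2f}")
```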
Abstract:
Nonwoven fabrics are a class of textile material made up of a disordered fiber network linked by thermal, chemical or mechanical bonds. They present lower stiffness and strength (as well as lower processing cost) than their woven counterparts, but much higher deformability and energy absorption capability, and are used in many different engineering applications (including thermal insulation, geotextiles, fireproof layers, filtration and water absorption, ballistic impact, etc.). In particular, needle-punched nonwoven fabrics manufactured with high-strength fibers present excellent performance for ballistic protection, providing the same protection at one third of the areal weight of dry woven fabrics. Nevertheless, very little is known about their deformation and fracture micromechanisms at the microscopic level and how these contribute to the macroscopic mechanical properties. This lack of knowledge hinders the optimization of their mechanical performance and also limits the development of physically based models of the mechanical behavior that can be used in the design of structural components with these materials. In this thesis, a thorough study was carried out to ascertain the micromechanisms of deformation and the mechanical properties of a needle-punched nonwoven fabric made up of ultra-high molecular weight polyethylene fibers.
The deformation and energy dissipation processes were characterized in detail by a combination of experimental techniques (macroscopic mechanical tests at quasi-static and high strain rates, ballistic impact, single-fiber and multi-fiber pull-out tests, optical microscopy, X-ray computed tomography and wide-angle X-ray diffraction) that provided information on the dominant mechanisms at different length scales. The macroscopic mechanical tests showed that the nonwoven fabric presented an outstanding strength and energy absorption capacity. It was found that the fibers were initially curved and that load was transferred within the fabric through the random and isotropic network of knots created by needle-punching, leading to the formation of an active fiber network. Uncurling and stretching of the active fibers was followed by fiber sliding and pull-out from the entanglement points. Most of the strength and energy dissipation was provided by the extraction of the active fibers from the knots, and final fracture occurred by the total disentanglement of the fiber network in a given section at which the macroscopic deformation localized. However, although the initial fiber orientation distribution was isotropic, the mechanical properties (in terms of stiffness, strength and energy absorption) were highly anisotropic. Pull-out tests of multiple fibers at different orientations showed that the structure of the knots connected more fibers in the transverse direction than in the machine direction. The better fiber interconnection along the transverse direction led to a denser active fiber skeleton, enhancing the mechanical response. In terms of affinity, fabrics deformed along the transverse direction essentially displayed affine deformation (i.e., the macroscopic strain was directly transferred to the fibers by the surrounding fabric), while fabrics deformed along the machine direction underwent non-affine deformation, and most of the macroscopic strain was not transferred to the fibers. Based on these experimental observations, a constitutive model for the mechanical behavior of the mechanically-entangled nonwoven fiber network was developed. The model accounts for the effects of non-affine deformation, the anisotropic connectivity induced by the entanglement points, fiber uncurling and re-orientation, and fiber disentanglement and pull-out from the knots. The model provides the constitutive response for a mesodomain of the fabric corresponding to the volume associated with a finite element and is divided into two blocks. The first is the network model, which establishes the relationship between the macroscopic deformation gradient and the microscopic response obtained by integrating the response of the fibers in the mesodomain. The second is the fiber model, which takes into account the deformation features of each set of fibers in the mesodomain, including non-affinity, uncurling, pull-out and disentanglement. As far as possible, a clear physical meaning is given to the model parameters, so that they can be identified by means of independent tests. The numerical simulations based on the model were in very good agreement with the experimental results of the in-plane and ballistic mechanical response of the fabrics, in terms of both the macroscopic mechanical response and the micromechanisms of deformation.
In addition, they provided additional information about the influence of the microstructural features (fiber orientation, anisotropic fiber connectivity, affinity) on the mechanical performance of mechanically-entangled nonwoven fabrics.
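A minimal sketch of the affine/non-affine distinction used in the network model: the stretch seen by a fiber of direction n under a macroscopic deformation gradient F is |Fn| in the affine case, and a non-affinity factor can pass only part of it to the fiber. The factor and the numbers below are hypothetical, not the thesis' calibrated parameters.

```python
import numpy as np

# Affine vs non-affine fiber stretch under a macroscopic deformation
# gradient F: the affine stretch of a fiber with direction n is |F n|;
# a non-affinity factor alpha < 1 transmits only part of it. alpha and F
# are hypothetical, not the thesis' calibrated parameters.
def fiber_stretch(F, n, alpha=1.0):
    lam_affine = np.linalg.norm(F @ n)       # affine fiber stretch
    return 1.0 + alpha * (lam_affine - 1.0)  # partially transmitted stretch

F = np.array([[1.3, 0.0],
              [0.0, 0.9]])                   # in-plane stretch state
for th in np.linspace(0.0, np.pi / 2, 4):    # a few fiber orientations
    n = np.array([np.cos(th), np.sin(th)])
    print(f"theta = {np.degrees(th):4.1f} deg: "
          f"affine = {fiber_stretch(F, n):.3f}, "
          f"non-affine (alpha=0.3) = {fiber_stretch(F, n, 0.3):.3f}")
```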
Abstract:
Paper presented at the CPE-POWERENG 2016 Conference, 29 June to 1 July 2016, Bydgoszcz, Poland
Abstract:
This dataset contains continuous time series of land surface temperature (LST) at a spatial resolution of 300 m around the 12 experimental sites of the PAGE21 project (grant agreement number 282700, funded by the EC Seventh Framework Programme theme FP7-ENV-2011). The dataset was produced from hourly LST time series at 25 km scale, retrieved from SSM/I data (André et al., 2015, doi:10.1016/j.rse.2015.01.028) and downscaled to 300 m using a dynamic model and a particle smoothing approach. The methodology is based on two main assumptions: first, that LST spatial variability is mostly explained by land cover and soil hydric state; second, that LST is unique for a given land cover class within the low-resolution pixel. Given these hypotheses, LST can be estimated using a land cover map and a physically based land surface model constrained with observations through data assimilation. This methodology, described in Mechri et al. (2014, doi:10.1002/2013JD020354), was applied to the ORCHIDEE land surface model (Krinner et al., 2005, doi:10.1029/2003GB002199) to estimate prior values for each land cover class provided by the ESA CCI-Land Cover product (Bontemps et al., 2013) at 300 m resolution. The assimilation process (a particle smoother) consists in simulating ensembles of LST time series for each land cover class and for a large number of parameter sets. For each parameter set, the resulting temperatures are aggregated according to the grid fraction of each land cover class and compared to the coarse observations. Minimizing the distance between the aggregated model solutions and the observations allows us to select the simulated LST and the corresponding parameter sets that fit the observations most closely. The retained parameter sets are then duplicated and randomly perturbed before simulating the next time window. At the end, the most likely LST of each land cover class is estimated and used to reconstruct LST maps at 300 m resolution using ESA CCI-Land Cover. The resulting temperature maps, on which ice pixels were masked, are provided at a daily time step over the nine-year analysis period (2000-2009).
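A schematic sketch of one particle-smoother step as described above: simulate an ensemble of per-class LST series, aggregate them with land-cover fractions, keep the particles closest to the coarse observation, then duplicate and perturb for the next window. Dimensions and statistics are illustrative, not those of the actual product.

```python
import numpy as np

rng = np.random.default_rng(0)

# One particle-smoother step, schematically: simulate per-class LST
# ensembles, aggregate with land-cover fractions, keep the particles
# closest to the coarse observation, then duplicate and perturb.
# All dimensions and statistics are illustrative.
n_particles, n_classes, n_times = 200, 3, 24
frac = np.array([0.5, 0.3, 0.2])              # land-cover fractions

lst = rng.normal(280.0, 5.0, (n_particles, n_classes, n_times))
obs = rng.normal(281.0, 1.0, n_times)         # 25 km LST observation

agg = np.einsum("pct,c->pt", lst, frac)       # aggregate to coarse scale
rms = np.sqrt(((agg - obs) ** 2).mean(axis=1))

keep = np.argsort(rms)[: n_particles // 10]   # retain best 10% of particles
ensemble = lst[keep][rng.integers(0, keep.size, n_particles)]
ensemble += rng.normal(0.0, 0.5, ensemble.shape)  # perturb for next window

print("per-class LST estimate [K]:", lst[keep].mean(axis=(0, 2)))
```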
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06