29 results for physically-based model
at Universidad Politécnica de Madrid
Abstract:
There is growing concern over the challenges for innovation in the Freight Pipeline industry. Since the early works of Chesbrough a decade ago, we have learned a lot about the content, context and process of open innovation. However, much more research is needed in the Freight Pipeline industry, where the reality is that few corporations have institutionalized open innovation practices in ways that have enabled substantial growth or industry leadership. Based on this, we pursue the following question: How does a firm's integration into knowledge networks depend on its ability to manage knowledge? A competence-based model for freight pipeline organizations is analysed; this model should be understood by any organization that wants to succeed in motivating the professionals who carry out innovations and play a main role in collaborative knowledge-creation processes. This paper aims to explain how open innovation can achieve its potential in most Freight Pipeline industries.
Abstract:
Detecting user affect automatically during real-time conversation is the main challenge towards our greater aim of infusing social intelligence into a natural-language mixed-initiative High-Fidelity (Hi-Fi) audio control spoken dialog agent. In recent years, studies on affect detection from voice have moved on to using realistic, non-acted data, which is subtler. However, subtler emotions are more challenging to perceive, as is demonstrated in tasks such as labelling and machine prediction. This paper attempts to address part of this challenge by considering the role of user satisfaction ratings and of conversational/dialog features in discriminating contentment and frustration, two types of emotions that are known to be prevalent within spoken human-computer interaction. However, given the laboratory constraints, users might be positively biased when rating the system, indirectly making the reliability of the satisfaction data questionable. Machine learning experiments were conducted on two datasets, users and annotators, which were then compared in order to assess the reliability of these datasets. Our results indicated that standard classifiers were significantly more successful in discriminating the abovementioned emotions and their intensities (reflected by user satisfaction ratings) from annotator data than from user data. These results corroborated that, first, satisfaction data could be used directly as an alternative target variable to model affect, and that they could be predicted exclusively by dialog features; second, this was only true when trying to predict the abovementioned emotions using annotator data, suggesting that user bias does exist in a laboratory-led evaluation.
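To make the comparison described in this abstract concrete, the following is a minimal, hypothetical sketch of the experimental idea: the same dialog features are used to train a standard classifier twice, once with annotator-derived labels and once with (noisier) labels derived from user satisfaction ratings. The feature names, the synthetic data and the use of scikit-learn are assumptions for illustration only; they are not the paper's corpus, feature set or classifiers.

# Hypothetical sketch: comparing how well a standard classifier separates
# contentment from frustration when labels come from annotators vs. from
# user satisfaction ratings. Data and features are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# Toy dialog features: e.g. number of turns, mean turn length, re-prompt count.
X = rng.normal(size=(n, 3))
y_annotator = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)) > 0
y_user = (X[:, 0] + rng.normal(scale=2.0, size=n)) > 0  # noisier "self-report" labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
for name, y in [("annotator labels", y_annotator), ("user ratings", y_user)]:
    acc = cross_val_score(clf, X, y.astype(int), cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")

Under such a setup, higher cross-validated accuracy with annotator labels than with user-rating labels would mirror the paper's observation that user bias degrades the reliability of satisfaction data as a target variable.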
Abstract:
The agent-based model presented here comprises an algorithm that computes the degree of hydration, the water consumption and the layer thickness of C-S-H gel as functions of time for different temperatures and different w/c ratios. The results are in agreement with reported experimental studies, demonstrating the applicability of the model. As the available experimental results regarding elevated curing temperatures are scarce, the model could be recalibrated in the future. By combining the agent-based computational model with TGA analysis, a semiempirical method is obtained that can be used to better understand microstructure development in ordinary cement pastes and to predict the influence of temperature on the hydration process.
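For readers who want a feel for the kind of output such a model produces, the following is an illustrative sketch only: a simple empirical hydration-degree curve with Arrhenius temperature scaling. It is not the agent-based model of the abstract, and the parameters (A, Ea, n) are invented placeholders.

# Illustrative sketch: Avrami-type degree of hydration alpha(t) with an
# Arrhenius rate constant. NOT the agent-based model of the abstract;
# all parameter values are hypothetical.
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def hydration_degree(t_hours, T_kelvin, A=2e5, Ea=4.0e4, n=0.9):
    """Degree of hydration alpha(t) in [0, 1) for curing temperature T."""
    k = A * np.exp(-Ea / (R * T_kelvin))      # rate constant, 1/h
    return 1.0 - np.exp(-(k * t_hours) ** n)

t = np.linspace(0, 72, 7)     # hours of curing
for T in (293.15, 313.15):    # 20 C and 40 C curing temperature
    print(f"T = {T - 273.15:.0f} C:", np.round(hydration_degree(t, T), 2))

The higher curing temperature accelerates the rate constant and therefore the computed degree of hydration, which is the qualitative trend the abstract's model quantifies.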
Abstract:
This paper describes the impact of electric mobility on the transmission grid in the Flanders region (Belgium), using micro-simulation activity-based models. These models are used to provide temporal and spatial estimates of the energy and power demanded by electric vehicles (EVs) in different mobility zones. The increase in load demand due to electric mobility is added to the background load demand in these mobility areas and the effects on the transmission substations are analyzed. From this information, the total storage capacity per zone is evaluated and some strategies for the EV aggregator are proposed, allowing the aggregator to fulfill bids on the electricity markets.
Abstract:
Although most of the research on Cognitive Radio is focused on communication bands above the HF upper limit (30 MHz), Cognitive Radio principles can also be applied to HF communications to use the extremely scarce spectrum more efficiently. In this work we consider legacy users as primary users, since these users transmit without resorting to any smart procedure, and our stations using the HFDVL (HF Data+Voice Link) architecture as secondary users. Our goal is to enable an efficient use of the HF band by detecting the presence of uncoordinated primary users and avoiding collisions with them while transmitting in different HF channels using our broad-band HF transceiver. A model of the primary user activity dynamics in the HF band is developed in this work to make short-term predictions of the sojourn time of a primary user in the band and avoid collisions. It is based on Hidden Markov Models (HMM), which are a powerful tool for modelling stochastic random processes, and is trained with real measurements of the 14 MHz band. The proposed HMM-based model achieves an average 10.3% prediction error rate with one minute of channel knowledge, and this error is reduced when the knowledge is extended: with the previous 8 min of knowledge, an average 5.8% prediction error rate is achieved. These results suggest that the resulting activity model for the HF band could actually be used to predict primary user activity and be included in a future HF cognitive-radio-based station.
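As a minimal sketch of how an HMM can turn a short window of channel observations into a next-step occupancy prediction (the transition, emission and prior probabilities below are invented placeholders, not the parameters trained on the 14 MHz measurements), a two-state forward filter suffices:

# Minimal sketch: two-state HMM for channel occupancy (0 = idle, 1 = busy).
# A normalized forward pass gives the filtered state distribution, from which
# the next-step occupancy is predicted. All probabilities are hypothetical.
import numpy as np

A = np.array([[0.9, 0.1],     # state transition probabilities
              [0.3, 0.7]])
B = np.array([[0.95, 0.05],   # P(observation | state); rows = state, cols = obs
              [0.10, 0.90]])
pi = np.array([0.8, 0.2])     # initial state distribution

def forward_filter(obs):
    """Return P(state_t | obs_1..t) at the final time step."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        alpha /= alpha.sum()
    return alpha

window = [0] * 50 + [1] * 10           # one minute of toy busy/idle decisions
alpha = forward_filter(window)
p_next = alpha @ A                     # predicted state distribution at t+1
p_busy = p_next @ B[:, 1]              # probability of observing "busy" next
print(f"P(channel busy at next step) = {p_busy:.2f}")

Extending the observation window (e.g. from one to eight minutes) sharpens the filtered state estimate, which is consistent with the reduction in prediction error reported in the abstract.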
Abstract:
Acquired brain injury (ABI) [1-2] refers to any brain damage occurring after birth. It usually causes certain damage to portions of the brain. ABI may result in a significant impairment of an individual's physical, cognitive and/or psychosocial functioning. The main causes are traumatic brain injury (TBI), cerebrovascular accident (CVA) and brain tumors. The main consequence of ABI is a dramatic change in the individual's daily life. This change involves a disruption of the family, a loss of future income capacity and an increase in lifetime cost. One of the main challenges in neurorehabilitation is to obtain a dysfunctional profile of each patient in order to personalize the treatment. This paper proposes a system to generate a patient's dysfunctional profile by integrating theoretical, structural and neuropsychological information on a 3D brain imaging-based model. The main goal of this dysfunctional profile is to help therapists design the most suitable treatment for each patient. At the same time, the results obtained are a source of clinical evidence to improve the accuracy and quality of our rehabilitation system. Figure 1 shows the diagram of the system, which is composed of four main modules: image-based extraction of parameters, theoretical modeling, classification, and a co-registration and visualization module.
Abstract:
Accurate design flood estimates associated with high return periods are necessary to design and manage hydraulic structures such as dams. In practice, the estimation of such quantiles is usually done via univariate flood frequency analyses, mostly based on the study of peak flows.
Nevertheless, the nature of floods is multivariate, so it is essential to consider representative flood characteristics, such as flood peak, hydrograph volume and hydrograph duration, to carry out an appropriate analysis; especially when the inflow peak is transformed into a different outflow peak during the routing process in a reservoir or floodplain. Multivariate flood frequency analyses have traditionally been performed by using standard bivariate distributions to model correlated variables, yet they entail some shortcomings, such as the need to use the same kind of marginal distribution for all variables and the assumption of a linear dependence relation between them. Recently, the use of copulas has spread in hydrology because of their benefits in the multivariate context, as they overcome the drawbacks of the traditional approach. A copula is a function that represents the dependence structure of the studied variables and allows their multivariate frequency distribution to be obtained from their marginal distributions, regardless of the kind of marginal distributions considered. The estimation of multivariate return periods, and therefore multivariate quantiles, is also facilitated by the way in which copulas are formulated. The present doctoral thesis seeks to provide methodologies that improve the traditional techniques used by practitioners, in order to estimate more appropriate flood quantiles for dam design, dam management and flood risk assessment, through bivariate flood frequency analyses based on the copula approach. The flood variables considered for that goal are peak flow and hydrograph volume. In order to accomplish a complete study, the present research addresses: (i) a bivariate local flood frequency analysis focused on examining and comparing theoretical return periods, based on the natural probability of occurrence of a flood, with the return period associated with the risk of dam overtopping, to estimate quantiles at a given gauged site; (ii) the extension of the local to the regional approach, supplying a complete procedure for performing a bivariate regional flood frequency analysis to either estimate quantiles at ungauged sites or improve at-site estimates at gauged sites; (iii) the use of copulas to investigate bivariate flood trends due to increasing urbanisation levels in a catchment; and (iv) the extension of observed flood series by combining the benefits of a copula-based model and a hydro-meteorological model.
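As a concrete illustration of how copulas facilitate joint return-period estimation (a standard result from the bivariate flood-frequency literature, recalled here for context rather than quoted from this thesis), for a flood with peak flow q and volume v, marginal distributions F_Q and F_V, copula C and mean inter-arrival time \mu between flood events, the OR and AND return periods read

T_{OR} = \frac{\mu}{1 - C\left(F_Q(q), F_V(v)\right)}, \qquad
T_{AND} = \frac{\mu}{1 - F_Q(q) - F_V(v) + C\left(F_Q(q), F_V(v)\right)},

so that once the marginals and the copula are fitted, bivariate quantile curves of a prescribed return period can be traced directly, regardless of the marginal families chosen.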
Abstract:
Using a new Admittance-based model for electrical noise able to handle Fluctuations and Dissipations of electrical energy, we explain the phase noise of oscillators that use feedback around L-C resonators. We show that Fluctuations produce the Line Broadening of their output spectrum around its mean frequency f0 and that the Pedestal of phase noise far from f0 comes from Dissipations modified by the feedback electronics. The charge noise power 4FkT/R (in C²/s) that disturbs the otherwise periodic fluctuation of charge these oscillators aim to sustain in their L-C-R resonator is what creates their phase noise, proportional to Leeson's noise figure F and to the charge noise power 4kT/R (in C²/s) of their capacitance C, which today's modelling would consider as the current noise density (in A²/Hz) of their resistance R. Linked with this A²/Hz ↔ C²/s equivalence, R becomes a random series in time of discrete chances to Dissipate energy in Thermal Equilibrium (TE), giving a similar series of discrete Conversions of electrical energy into heat when the resonator is out of TE due to the Signal power it handles. Therefore, phase noise reflects the way oscillators sense thermal exchanges of energy with their environment.
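The unit equivalence invoked in this abstract follows from dimensional analysis alone (a short supporting note, not text from the paper): since 1 A = 1 C/s and 1 Hz = 1/s,

A²/Hz = (C/s)² / (1/s) = C²/s,

so the familiar Johnson-Nyquist current noise density 4kT/R, usually quoted in A²/Hz for the resistance R, can equally be read as a rate of mean-square charge fluctuation in C²/s acting on the capacitance C; with the feedback electronics included, it is scaled by Leeson's noise figure F to 4FkT/R.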
Abstract:
This paper shows a physically cogent model for electrical noise in resistors that has been obtained from thermodynamical reasoning. This new model, derived from the works of Johnson and Nyquist, also agrees with the Quantum model for noisy systems handled by Callen and Welton in 1951, thus unifying these two physical viewpoints. It is a Complex or 2-D noise model based on an Admittance that considers both Fluctuation and Dissipation of electrical energy, improving on the Real or 1-D model in use, which only considers Dissipation. Through the two orthogonal currents linked with a common voltage noise by an Admittance function, the new model is presented in the frequency domain. Its use in the time domain reveals the pitfall behind a paradox of Statistical Mechanics about systems considered as energy-conserving and deterministic on the microscale yet dissipative and unpredictable on the macroscale, and also shows how to use the Fluctuation-Dissipation Theorem properly.
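For reference (standard results recalled here for context, not equations reproduced from the paper), the classical Nyquist spectral density of the open-circuit voltage noise of a resistance R and its quantum generalization due to Callen and Welton are

S_V(f) = 4kTR    (classical limit, hf << kT),
S_V(f) = 2Rhf·coth(hf/2kT) = 4Rhf·[1/2 + 1/(e^{hf/kT} − 1)]    (Callen-Welton),

the latter reducing to the former at low frequencies. The admittance-based model described above keeps, alongside this dissipative (conductance-related) contribution, an orthogonal fluctuation-related current linked to the same voltage noise.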
Abstract:
OntoTag - A Linguistic and Ontological Annotation Model Suitable for the Semantic Web
1. INTRODUCTION. LINGUISTIC TOOLS AND ANNOTATIONS: THEIR LIGHTS AND SHADOWS
Computational Linguistics is already a consolidated research area. It builds upon the results of two other major areas, namely Linguistics and Computer Science and Engineering, and it aims at developing computational models of human language (or natural language, as it is termed in this area). Possibly, its most well-known applications are the different tools developed so far for processing human language, such as machine translation systems and speech recognizers or dictation programs.
These tools for processing human language are commonly referred to as linguistic tools. Apart from the examples mentioned above, there are also other types of linguistic tools that perhaps are not so well-known, but on which most of the other applications of Computational Linguistics are built. These other types of linguistic tools comprise POS taggers, natural language parsers and semantic taggers, amongst others. All of them can be termed linguistic annotation tools.
Linguistic annotation tools are important assets. In fact, POS and semantic taggers (and, to a lesser extent, also natural language parsers) have become critical resources for the computer applications that process natural language. Hence, any computer application that has to analyse a text automatically and ‘intelligently’ will include at least a module for POS tagging. The more an application needs to ‘understand’ the meaning of the text it processes, the more linguistic tools and/or modules it will incorporate and integrate.
However, linguistic annotation tools have still some limitations, which can be summarised as follows:
1. Normally, they perform annotations only at a certain linguistic level (that is, Morphology, Syntax, Semantics, etc.).
2. They usually introduce a certain rate of errors and ambiguities when tagging. This error rate ranges from 10 percent up to 50 percent of the units annotated for unrestricted, general texts.
3. Their annotations are most frequently formulated in terms of an annotation schema designed and implemented ad hoc.
A priori, it seems that the interoperation and the integration of several linguistic tools into an appropriate software architecture could most likely solve the limitations stated in (1). Besides, integrating several linguistic annotation tools and making them interoperate could also minimise the limitation stated in (2). Nevertheless, in the latter case, all these tools should produce annotations for a common level, which would have to be combined in order to correct their corresponding errors and inaccuracies. Yet, the limitation stated in (3) prevents both types of integration and interoperation from being easily achieved.
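As a minimal illustration of the kind of combination alluded to in the previous paragraph (a hypothetical sketch, not part of the OntoTag model itself), the outputs of several POS taggers for the same token sequence could be merged by simple majority voting, once their annotations have been mapped to a common tagset; the tagger names and tags below are invented.

# Hypothetical sketch: majority voting over the outputs of several POS taggers
# that have already been mapped to a common tagset. Real tools would first
# require schema alignment, which is precisely limitation (3) above.
from collections import Counter

def combine_by_vote(tag_sequences):
    """tag_sequences: list of per-tagger tag lists, all of equal length."""
    combined = []
    for position_tags in zip(*tag_sequences):
        tag, _count = Counter(position_tags).most_common(1)[0]
        combined.append(tag)
    return combined

tagger_a = ["DET", "NOUN", "VERB", "ADP", "NOUN"]
tagger_b = ["DET", "NOUN", "NOUN", "ADP", "NOUN"]
tagger_c = ["DET", "NOUN", "VERB", "ADP", "ADJ"]
print(combine_by_vote([tagger_a, tagger_b, tagger_c]))
# -> ['DET', 'NOUN', 'VERB', 'ADP', 'NOUN']

Such a combination can only correct individual tagger errors when the tools annotate the same linguistic level with a shared vocabulary, which is why limitation (3) blocks both kinds of integration described above.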
In addition, most high-level annotation tools rely on other lower-level annotation tools and their outputs to generate their own. For example, sense-tagging tools (operating at the semantic level) often use POS taggers (operating at a lower level, i.e., the morphosyntactic one) to identify the grammatical category of the word or lexical unit they are annotating. Accordingly, if a faulty or inaccurate low-level annotation tool is to be used by another, higher-level one in its process, the errors and inaccuracies of the former should be minimised in advance. Otherwise, these errors and inaccuracies would be transferred to (and even magnified in) the annotations of the high-level annotation tool.
Therefore, it would be quite useful to find a way to
(i) correct or, at least, reduce the errors and the inaccuracies of lower-level linguistic tools;
(ii) unify the annotation schemas of different linguistic annotation tools or, more generally speaking, make these tools (as well as their annotations) interoperate.
Clearly, solving (i) and (ii) should ease the automatic annotation of web pages by means of linguistic tools, and their transformation into Semantic Web pages (Berners-Lee, Hendler and Lassila, 2001). Yet, as stated above, (ii) is a type of interoperability problem. There again, ontologies (Gruber, 1993; Borst, 1997) have been successfully applied thus far to solve several interoperability problems. Hence, ontologies should also help solve the aforementioned problems and limitations of linguistic annotation tools.
Thus, to summarise, the main aim of the present work was to somehow combine these separate approaches, mechanisms and tools for annotation from Linguistics and Ontological Engineering (and the Semantic Web) into a sort of hybrid (linguistic and ontological) annotation model, suitable for both areas. This hybrid (semantic) annotation model should (a) benefit from the advances, models, techniques, mechanisms and tools of these two areas; (b) minimise (and even solve, when possible) some of the problems found in each of them; and (c) be suitable for the Semantic Web. The concrete goals that helped attain this aim are presented in the following section.
2. GOALS OF THE PRESENT WORK
As mentioned above, the main goal of this work was to specify a hybrid (that is, linguistically-motivated and ontology-based) model of annotation suitable for the Semantic Web (i.e. it had to produce a semantic annotation of web page contents). This entailed that the tags included in the annotations of the model had to (1) represent linguistic concepts (or linguistic categories, as they are termed in ISO/DCR (2008)), in order for this model to be linguistically-motivated; (2) be ontological terms (i.e., use an ontological vocabulary), in order for the model to be ontology-based; and (3) be structured (linked) as a collection of ontology-based
Abstract:
The existing seismic isolation systems are based on well-known and accepted physical principles, but they still have some functional drawbacks. As an attempt at improvement, the Roll-N-Cage (RNC) isolator has been recently proposed. It is designed to achieve a balance in controlling isolator displacement demands and structural accelerations. It provides in a single unit all the necessary functions of vertical rigid support, horizontal flexibility with enhanced stability, resistance to low service loads and minor vibration, and hysteretic energy dissipation characteristics. It is characterized by two unique features: a self-braking (buffer) and a self-recentering mechanism. This paper presents an advanced representation of the main and unique features of the RNC isolator using an available finite element code called SAP2000. The validity of the obtained SAP2000 model is then checked using experimental, numerical and analytical results. Then, the paper investigates the merits and demerits of activating the built-in buffer mechanism on both structural pounding mitigation and isolation efficiency. The paper addresses the problem of passive alleviation of possible inner pounding within the RNC isolator, which may arise due to the activation of its self-braking mechanism under severe excitations such as near-fault earthquakes. The results show that the obtained finite element code-based model can closely match and accurately predict the overall behavior of the RNC isolator with effectively small errors. Moreover, the inherent buffer mechanism of the RNC isolator could mitigate or even eliminate direct structure-to-structure pounding under severe excitation, considering limited separation gaps between adjacent structures. In addition, the increase of inherent hysteretic damping of the RNC isolator can efficiently limit its peak displacement together with the severity of the possibly developed inner pounding and, therefore, alleviate or even eliminate the possibly arising negative effects of the buffer mechanism on the overall RNC-isolated structural responses.
Abstract:
Nonwoven fabrics are a class of textile materials made up of a disordered fiber network linked by thermal, chemical or mechanical bonds. They present lower stiffness and strength (as well as a lower processing cost) than their woven counterparts, but much higher deformability and energy absorption capability, and are used in many different engineering applications (including thermal insulation, geotextiles, fireproof layers, filtration and water absorption, ballistic impact, etc.). In particular, needle-punched nonwoven fabrics manufactured with high-strength fibers present an excellent performance for ballistic protection, providing the same protection as dry woven fabrics with one third of the areal weight. Nevertheless, very little is known about their deformation and fracture micromechanisms at the microscopic level and how these contribute to the macroscopic mechanical properties. This lack of knowledge hinders the optimization of their mechanical performance and also limits the development of physically-based models of the mechanical behavior that can be used in the design of structural components with these materials. In this thesis, a thorough study was carried out to ascertain the micromechanisms of deformation and the mechanical properties of a needle-punched nonwoven fabric made up of ultra-high-molecular-weight polyethylene fibers.
The deformation and energy dissipation processes were characterized in detail by a combination of experimental techniques (macroscopic mechanical tests at quasi-static and high strain rates, ballistic impact, single-fiber and multi-fiber pull-out tests, optical microscopy, X-ray computed tomography and wide-angle X-ray diffraction) that provided information on the dominant mechanisms at different length scales. The macroscopic mechanical tests showed that the nonwoven fabric presented an outstanding strength and energy absorption capacity. It was found that fibers were initially curved and the load was transferred within the fabric through the random and isotropic network of knots created by needle-punching, leading to the formation of an active fiber network. Uncurling and stretching of the active fibers was followed by fiber sliding and pull-out from the entanglement points. Most of the strength and energy dissipation was provided by the extraction of the active fibers from the knots, and final fracture occurred by the total disentanglement of the fiber network in a given section at which the macroscopic deformation was localized. However, although the initial fiber orientation distribution was isotropic, the mechanical properties (in terms of stiffness, strength and energy absorption) were highly anisotropic. Pull-out tests of multiple fibers at different orientations showed that the structure of the knots connected more fibers in the transverse direction as compared with the machine direction. The better fiber interconnection along the transverse direction led to a denser active fiber skeleton, enhancing the mechanical response. In terms of affinity, fabrics deformed along the transverse direction essentially displayed affine deformation (i.e., the macroscopic strain was directly transferred to the fibers by the surrounding fabric), while fabrics deformed along the machine direction underwent non-affine deformation, and most of the macroscopic strain was not transferred to the fibers. Based on these experimental observations, a constitutive model for the mechanical behavior of the mechanically-entangled nonwoven fiber network was developed. The model accounted for the effects of non-affine deformation, anisotropic connectivity induced by the entanglement points, fiber uncurling and re-orientation, as well as fiber disentanglement and pull-out from the knots. The model provided the constitutive response for a mesodomain of the fabric corresponding to the volume associated with a finite element and is divided into two blocks. The first one was the network model, which established the relationship between the macroscopic deformation gradient and the microscopic response obtained by integrating the response of the fibers in the mesodomain. The second one was the fiber model, which took into account the deformation features of each set of fibers in the mesodomain, including non-affinity, uncurling, pull-out and disentanglement. As far as possible, a clear physical meaning was given to the model parameters, so they can be identified by means of independent tests. The numerical simulations based on the model were in very good agreement with the experimental results of the in-plane and ballistic mechanical response of the fabrics, in terms of both the macroscopic mechanical response and the micromechanisms of deformation.
In addition, they provided additional information about the influence of the microstructural features (fiber orientation, anisotropic fiber connectivity, affinity) on the mechanical performance of mechanically-entangled nonwoven fabrics.
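For context, the affine assumption discussed above can be written compactly (a standard kinematic relation recalled here, not an equation reproduced from the thesis): under affine deformation, a fiber initially aligned with the unit vector m₀ experiences the stretch

λ_f = ‖F m₀‖,

where F is the macroscopic deformation gradient. Non-affine behavior, as observed along the machine direction, means the actual fiber stretch is systematically smaller than this prediction because most of the macroscopic strain is not transferred to the fibers.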
Abstract:
Triple-Play (3P) and Quadruple-Play (4P) services are being widely offered by telecommunication service providers. Such services must be able to offer equal or higher quality levels than those obtained with traditional systems, especially for the most demanding services such as broadcast IPTV. This paper presents a matrix-based model, defined in terms of service components, user perceptions, agent capabilities, performance indicators and evaluation functions, which allows the overall quality of a set of convergent services, as perceived by the users, to be estimated from a set of performance and/or Quality of Service (QoS) parameters of the convergent IP transport network.
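To give a rough idea of what a matrix-based mapping from network indicators to perceived quality can look like, the following is a hypothetical sketch: the indicator names, the weight matrix and the aggregation function are invented for illustration and do not correspond to the components, perceptions or evaluation functions defined in the paper.

# Hypothetical sketch: normalized performance indicators are mapped to
# per-perception scores through a weight matrix, then aggregated into an
# overall quality score. All numbers and labels are placeholders.
import numpy as np

indicators = np.array([0.9, 0.7, 0.8])   # e.g. scores derived from loss, delay, jitter
# Rows = user perceptions (e.g. picture quality, zapping time); cols = indicators.
W = np.array([[0.6, 0.1, 0.3],
              [0.2, 0.7, 0.1]])
perception_scores = W @ indicators          # one score per perception
overall_quality = perception_scores.mean()  # simple evaluation function
print(np.round(perception_scores, 2), round(float(overall_quality), 2))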
Linear global instability of non-orthogonal incompressible swept attachment-line boundary layer flow
Abstract:
Instability of the orthogonal swept attachment-line boundary layer has received attention by local [1, 2] and global [3-5] analysis methods over several decades, owing to the significance of this model to transition to turbulence on the surface of swept wings. However, substantially less attention has been paid to the problem of laminar flow instability in the non-orthogonal swept attachment-line boundary layer; only a local analysis framework has been employed to date [6]. The present contribution addresses this issue from a linear global (BiGlobal) instability analysis point of view in the incompressible regime. Direct numerical simulations have also been performed in order to verify the analysis results and unravel the limits of validity of the Dorrepaal basic flow model [7] analyzed. Cross-validated results document the effect of the angle of attack (AoA) on the critical conditions identified by Hall et al. [1] and show linear destabilization of the flow with decreasing AoA, up to a limit at which the assumptions of the Dorrepaal model become questionable. Finally, a simple extension of the extended Görtler-Hämmerlin ODE-based polynomial model proposed by Theofilis et al. [4] is presented for the non-orthogonal flow. In this model, the symmetries of the three-dimensional disturbances are broken by the non-orthogonal flow conditions. Temporal and spatial one-dimensional linear eigenvalue codes were developed, obtaining results consistent with the BiGlobal stability analysis and DNS. Beyond the computational advantages presented by the ODE-based model, it allows us to understand the functional dependence of the three-dimensional disturbances in the non-orthogonal case as well as their connections with the disturbances of the orthogonal stability problem.
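By way of illustration of what a one-dimensional temporal eigenvalue code involves (a generic sketch for a model advection-diffusion operator with Dirichlet boundary conditions and arbitrary parameter values, not the extended Görtler-Hämmerlin equations of the paper), the discretized operator is assembled and its spectrum obtained with a dense eigenvalue solver:

# Generic sketch: temporal eigenvalues of a 1-D model operator
#   L u = -i U du/dx + (1/Re) d2u/dx2,  u(0) = u(1) = 0,
# discretized with second-order finite differences. With modes u ~ exp(-i omega t),
# L u = -i omega u, so Im(omega) > 0 indicates growth. This is only a stand-in
# for the ODE-based stability codes described in the abstract.
import numpy as np

N, Re, U = 200, 500.0, 1.0
h = 1.0 / (N + 1)                          # grid spacing; boundary points excluded

D2 = (np.diag(np.full(N, -2.0)) +
      np.diag(np.ones(N - 1), 1) +
      np.diag(np.ones(N - 1), -1)) / h**2
D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)

L = -1j * U * D1 + D2 / Re                 # model linear operator
omega = 1j * np.linalg.eigvals(L)          # L u = -i omega u  =>  omega = i * eig(L)
print("least stable mode: Im(omega) =", np.max(omega.imag))

The BiGlobal analyses mentioned above lead to the same kind of (much larger, two-dimensional) generalized eigenvalue problems, which is where the computational advantage of the ODE-based polynomial model comes from.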