Abstract:
This paper deals with the numerical analysis of saturated porous media, taking into account damage phenomena in the solid skeleton. The porous medium is treated within a poro-elastic framework, under fully saturated conditions, based on Biot's theory. A scalar damage model is assumed for this analysis. An implicit boundary element method (BEM) formulation, based on time-independent fundamental solutions, is developed and implemented to couple the fluid-flow and two-dimensional elastostatic problems. The integration over boundary elements is evaluated using a numerical Gauss procedure. A semi-analytical scheme for triangular domain cells is followed to carry out the relevant domain integrals. The non-linear problem is solved by a Newton-Raphson procedure. Numerical examples are presented in order to validate the implemented formulation and to illustrate its efficacy. (C) 2011 Elsevier Ltd. All rights reserved.
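The Newton-Raphson procedure mentioned above can be sketched generically. The following is an illustrative implementation for a small nonlinear system, not the paper's coupled BEM solver; the residual and Jacobian shown are stand-in examples:

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Solve residual(x) = 0 by Newton-Raphson iteration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        # Solve J dx = -r and update the iterate
        dx = np.linalg.solve(jacobian(x), -r)
        x = x + dx
    return x

# Example system: intersection of a circle (x^2 + y^2 = 2) and the line x = y
f = lambda x: np.array([x[0]**2 + x[1]**2 - 2.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
root = newton_raphson(f, J, [2.0, 0.5])  # converges to (1, 1)
```

In the paper's setting the residual would collect the discretized coupled fluid-flow/elastostatic equations, and the Jacobian would include the damage model's tangent terms.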
Abstract:
This article studied the applicability of poly(acrylamide) and methylcellulose (PAAm-MC) hydrogels as potential delivery vehicles for the controlled-extended release of ammonium sulfate ((NH4)2SO4) and potassium phosphate (KH2PO4) fertilizers. PAAm-MC hydrogels with different acrylamide (AAm) and MC concentrations were prepared by a free-radical polymerization method. The adsorption and desorption kinetics of the fertilizers were determined using conductivity measurements based on a previously constructed analytical (calibration) curve. The addition of MC to the PAAm chains increased the quantities of (NH4)2SO4 and KH2PO4 loaded and extended the time over which, and the quantities in which, the fertilizers were released. Consistently, both the loading and release processes were strongly influenced by the hydrophilic properties of the hydrogels (AAm/MC mass proportion). The best sorption (124.0 mg KH2PO4/g hydrogel and 58.0 mg (NH4)2SO4/g hydrogel) and desorption (54.9 mg KH2PO4/g hydrogel and 49.5 mg (NH4)2SO4/g hydrogel) properties were observed for the 6.0% AAm-1.0% MC hydrogels (AAm/MC mass proportion equal to 6), indicating that these hydrogels are potentially viable for use in controlled-extended fertilizer-release systems. (C) 2011 Wiley Periodicals, Inc. J Appl Polym Sci 123: 2291-2298, 2012
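The conductivity-based quantification described above rests on inverting a calibration curve. The sketch below uses entirely hypothetical calibration data and a linear fit; the actual curve, units, and fit form in the study may differ:

```python
import numpy as np

# Hypothetical calibration data: solution conductivity (uS/cm) measured
# at known fertilizer concentrations (mg/L).
conc = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
cond = np.array([2.0, 80.0, 158.0, 310.0, 618.0])

# Build the analytical (calibration) curve: cond = a * conc + b
a, b = np.polyfit(conc, cond, 1)

def conc_from_conductivity(kappa):
    """Invert the calibration line to recover the released concentration."""
    return (kappa - b) / a

# A desorption-kinetics reading is converted to released fertilizer:
released = conc_from_conductivity(235.0)
```

Repeating the inversion on a time series of conductivity readings yields the release-kinetics curve.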
Abstract:
In this article, we present a new control chart for monitoring the covariance matrix of a bivariate process. In this method, the n observations of the two variables are treated as if they came from a single variable (a sample of 2n observations), and a sample variance is calculated. This statistic is used to build the new control chart, termed the VMIX chart. The performance of the new chart was compared with that of its main competitors: the generalized sample variance chart, the likelihood ratio test, Nagao's test, the probability integral transformation (v(t)), and the recently proposed VMAX chart. Among these, only the VMAX chart was competitive with the VMIX chart. For shifts in both variances, the VMIX chart outperformed VMAX; however, VMAX performed better for large shifts (greater than 10%) in a single variance.
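The pooling idea behind the charting statistic can be sketched as follows. This is an illustrative reading of the description above (in practice the two variables would first be standardized by their in-control parameters, and control limits would be set from the statistic's distribution):

```python
import numpy as np

def vmix_statistic(x, y):
    """Pool n paired observations of (X, Y) into one sample of 2n values
    and return its sample variance, used as the charting statistic.
    Illustrative sketch: assumes X and Y are already standardized."""
    pooled = np.concatenate([np.asarray(x, float), np.asarray(y, float)])
    return pooled.var(ddof=1)

# One rational subgroup of n = 5 paired observations (simulated in-control data)
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=5)
y = rng.normal(0.0, 1.0, size=5)
stat = vmix_statistic(x, y)  # plotted against an upper control limit
```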
Abstract:
A new method for the analysis of scattering data from lamellar bilayer systems is presented. The method employs a form-free description of the cross-section structure of the bilayer, and the fit is performed directly to the scattering data, also introducing a structure factor when required. The cross-section structure (the electron density profile, in the case of X-ray scattering) is described by a set of Gaussian functions, and the technique is termed Gaussian deconvolution. The coefficients of the Gaussians are optimized using a constrained least-squares routine that induces smoothness of the electron density profile. The optimization is coupled with the point-of-inflection method for determining the optimal weight of the smoothness constraint. With the new approach it is possible to optimize simultaneously the form factor, the structure factor and several other parameters of the model. The applicability of the method is demonstrated in a study of a multilamellar system composed of lecithin bilayers, where the form factor and structure factor are obtained simultaneously; the results provide new insight into this very well-known system.
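The core of such a smoothness-penalized least-squares fit can be illustrated in real space. With Gaussian centres and widths fixed, the problem is linear in the coefficients and a second-difference penalty enforces smoothness. All numbers below are invented for illustration; the actual method fits in reciprocal space against the scattering data:

```python
import numpy as np

# Describe a profile as a sum of Gaussians with fixed centres and width;
# only the coefficients c are fitted (a linear problem).
z = np.linspace(-3.0, 3.0, 61)           # position across the bilayer
centres = np.linspace(-2.5, 2.5, 9)
width = 0.6
B = np.exp(-0.5 * ((z[:, None] - centres[None, :]) / width) ** 2)

# Synthetic target profile (stand-in for the profile implied by the data)
target = np.exp(-0.5 * (z / 1.2) ** 2)

# Penalized least squares: minimize ||B c - target||^2 + w ||D c||^2,
# where D is the second-difference operator on the coefficients.
w = 0.1
D = np.diff(np.eye(len(centres)), n=2, axis=0)
A = B.T @ B + w * (D.T @ D)
c = np.linalg.solve(A, B.T @ target)
fit = B @ c
```

Scanning the penalty weight w and locating the point of inflection of the fit quality versus w is the spirit of the weight-selection step described above.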
Abstract:
Slope failure occurs in many areas throughout the world and becomes an important problem when it interferes with human activity, with disasters causing loss of life and property damage. In this research we investigate slope failure through centrifuge modeling, in which a reduced-scale model, N times smaller than the full-scale prototype, is used while the acceleration is increased N times (relative to gravity) so as to preserve the stress and strain behavior. The aims of this research, "Centrifuge modeling of sandy slopes", are, in brief: 1) to test the reliability of centrifuge modeling as a tool for investigating the failure of a sandy slope; 2) to understand how the failure mechanism is affected by changing the slope angle, and to obtain information useful for design. To achieve this aim, the work is arranged as follows. Chapter one: centrifuge modeling of slope failure. In this chapter we provide a general view of the context in which we are working: we explain what a slope failure is, how it happens, and which tools are available to investigate this phenomenon. We then introduce the technology used to study the topic, the geotechnical centrifuge. Chapter two: testing apparatus. In the first section of this chapter we describe the procedures and facilities used to perform a centrifuge test. We then describe the characteristics of the soil (Nevada sand), such as the dry unit weight, water content and relative density, and its strength parameters (c, φ), which were measured in the laboratory through triaxial tests. Chapter three: centrifuge tests. In this part of the document all the results of the centrifuge tests are presented; by results we mean the acceleration at failure for each model tested and its failure surface. In our case study we tested models with the same soil and geometric characteristics but different slope angles: 60°, 75° and 90°. Chapter four: slope stability analysis. We introduce the features and concept of the software ReSSA (2.0), which allows us to calculate the theoretical failure surfaces of the prototypes. We then show comparisons between the experimental failure surfaces of the prototypes, traced in the laboratory, and those calculated by the software. Chapter five: conclusion. The conclusion presents the results obtained in relation to the two main aims mentioned above.
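The centrifuge similitude described above (lengths scaled by 1/N, accelerations by N, stresses preserved) can be written as a small model-to-prototype conversion. This is a generic sketch of the standard scaling laws, not the thesis's data processing:

```python
# Standard centrifuge scaling: a model N times smaller, spun to N g,
# reproduces prototype stresses at homologous points.
def prototype_from_model(length_m, accel_g, n):
    """Convert model-scale quantities (metres, multiples of g) to
    prototype scale for a test run at n g."""
    return {
        "length": length_m * n,       # model length x N -> prototype length
        "acceleration": accel_g / n,  # N g in flight corresponds to 1 g
        "stress_ratio": 1.0,          # stresses match at homologous points
    }

# Example: a 0.2 m tall model slope failing at 50 g represents a 10 m
# prototype slope failing under ordinary gravity.
p = prototype_from_model(length_m=0.2, accel_g=50.0, n=50)
```

The acceleration at failure recorded in each test therefore fixes the prototype height at which a slope of that angle would fail.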
Abstract:
Isobaric vapor–liquid equilibria at p = 101.32 kPa (iso-p VLE) and the mixing properties, hE and vE, are determined for a set of twelve binary solutions: HCOOCuH2u+1 (1) + CnH2n+2 (2) with u = 1–4 and n = 7–9. The iso-p VLE data present deviations from ideal behavior, which augment as u diminishes and n increases. The systems with [u = 2, 3; n = 7] and [u = 4; n = 7, 8] present a minimum-boiling azeotrope. The non-ideality is also reflected in high endothermic values, hE > 0, and expansive effects, vE > 0, for all the binaries, which increase regularly with n.
Abstract:
For many years, RF and analog integrated circuits were developed mainly in bipolar and compound semiconductor technologies because of their better performance. In recent years, advances in CMOS technology have allowed analog and RF circuits to be built in CMOS as well, but the use of CMOS instead of bipolar technology in RF applications has brought more issues in terms of noise. Noise cannot be completely eliminated; it ultimately limits the accuracy of measurements and sets a lower limit on how small a signal can be detected and processed in an electronic circuit. One kind of noise that affects MOS transistors much more than bipolar ones is low-frequency noise. In MOSFETs, low-frequency noise is mainly of two kinds: flicker (1/f) noise and random telegraph signal (RTS) noise. The objective of this thesis is to characterize and model low-frequency noise by studying RTS and flicker noise under both constant and switched bias conditions. The effect of different biasing schemes on both RTS and flicker noise has been investigated in the time and frequency domains.
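A random telegraph signal, as studied above, is a two-level process driven by a single trap capturing and emitting a carrier. The sketch below generates a discrete-time RTS with a constant per-step switching probability; parameters are illustrative. A superposition of many such processes with widely distributed switching rates is the classical route to an approximately 1/f spectrum:

```python
import numpy as np

def random_telegraph_signal(n_samples, p_switch, rng):
    """Two-level RTS: at each time step the trap captures or emits a
    carrier with probability p_switch, toggling the level between 0 and 1."""
    switches = rng.random(n_samples) < p_switch
    # The cumulative number of switches, mod 2, gives the current level.
    return np.cumsum(switches) % 2

rng = np.random.default_rng(1)
rts = random_telegraph_signal(10_000, 0.01, rng)  # mean dwell ~ 100 steps
```

In the frequency domain a single RTS of this kind has a Lorentzian spectrum with corner frequency set by the switching rate.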
Abstract:
The object of the present study is the process of gas transport in nano-sized materials, i.e. systems having structural elements on the order of nanometers. The aim of this work is to advance the understanding of the gas transport mechanism in such materials, for which traditional models are often not suitable, by providing a correct interpretation of the relationship between diffusive phenomena and structural features. This result would allow the development of new materials with permeation properties tailored to the specific application, especially in packaging systems. The methods used to achieve this goal were a detailed experimental characterization and different simulation methods. The experimental campaign concerned the determination of oxygen permeability and diffusivity in different sets of organic-inorganic hybrid coatings prepared via the sol-gel technique. The polymeric samples coated with these hybrid layers showed a remarkable enhancement of barrier properties, which was explained by the strong interconnection at the nano-scale between the organic moiety and the silica domains. An analogous characterization was performed on microfibrillated cellulose films, which present a remarkable barrier effect toward oxygen when dry, while in the presence of water the performance drops significantly. The very low value of water diffusivity at low activities is another interesting characteristic related to their structural properties. Two different simulation approaches were then considered: the diffusion of oxygen through polymer-layered silicates was modeled on a continuum scale with CFD software, while the properties of n-alkanethiolate self-assembled monolayers on gold were analyzed from a molecular point of view by means of a molecular dynamics algorithm.
Modeling transport properties in layered nanocomposites, resulting from the ordered dispersion of impermeable flakes in a 2-D matrix, allowed the enhancement of the barrier effect to be calculated as a function of the platelets' structural parameters, leading to a new expression. On this basis, randomly distributed systems were simulated and the results analyzed to evaluate the different contributions to the overall effect. The study of more realistic three-dimensional geometries revealed a perfect correspondence with the 2-D approximation. A completely different approach was applied to simulate the effect of temperature on oxygen transport through self-assembled monolayers: the structural information obtained from equilibrium MD simulations showed that raising the temperature makes the monolayer less ordered and consequently less crystalline. This disorder decreases the free-energy barrier and lowers the overall resistance to oxygen diffusion, making the monolayer more permeable to small molecules.
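The barrier enhancement from impermeable flakes is classically captured by tortuosity models such as Nielsen's, which is the usual baseline against which new expressions of the kind derived above are compared. This sketch implements the classical Nielsen formula, not the thesis's new expression:

```python
def nielsen_relative_permeability(phi, alpha):
    """Classical Nielsen model for aligned impermeable platelets:
        P/P0 = (1 - phi) / (1 + alpha * phi / 2)
    where phi is the platelet volume fraction and alpha the aspect
    ratio (width/thickness). Valid for dilute, well-aligned flakes."""
    return (1.0 - phi) / (1.0 + alpha * phi / 2.0)

# Example: 5 vol% of flakes with aspect ratio 100 cuts permeability
# to roughly 27% of the neat polymer's value.
rel = nielsen_relative_permeability(phi=0.05, alpha=100.0)
```

The strong dependence on the aspect ratio alpha is what makes high-aspect-ratio clays and platelets attractive for packaging barriers.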
Abstract:
In this work we study the relation between crustal heterogeneities and complexities in fault processes. The first kind of heterogeneity considered involves the concept of an asperity. The presence of an asperity in the hypocentral region of the M = 6.5 earthquake of June 17, 2000 in the South Iceland Seismic Zone was invoked to explain the change of seismicity pattern before and after the mainshock: in particular, the spatial distribution of foreshock epicentres trends NW while the strike of the main fault is N7°E, and aftershocks trend accordingly; foreshock depths were typically greater than average aftershock depths. A model is devised which simulates the presence of an asperity in terms of a spherical inclusion within a softer elastic medium, in a transform domain with a deviatoric stress field imposed at remote distances (compressive NE-SW, tensile NW-SE). An isotropic compressive stress component is induced outside the asperity in the direction of the compressive stress axis, and a tensile component in the direction of the tensile axis; as a consequence, fluid flow is inhibited in the compressive quadrants while it is favoured in the tensile quadrants. Within the asperity the isotropic stress vanishes but the deviatoric stress increases substantially, without any significant change in the principal stress directions. Hydrofracture processes in the tensile quadrants and viscoelastic relaxation at depth may contribute to lower the effective rigidity of the medium surrounding the asperity. According to the present model, foreshocks may be interpreted as induced, close to the brittle-ductile transition, by high-pressure fluids migrating upwards within the tensile quadrants; this process increases the deviatoric stress within the asperity, which eventually fails, becoming the hypocenter of the mainshock, on the optimally oriented fault plane.
In the second part of our work we study the complexities induced in fault processes by the layered structure of the crust. In the first model proposed we study the case in which fault bending takes place in a shallow layer. The problem can be addressed in terms of a deep vertical planar crack interacting with a shallower inclined planar crack. An asymptotic study of the singular behaviour of the dislocation density at the interface reveals that the density distribution has an algebraic singularity at the interface of degree ω between -1 and 0, depending on the dip angle of the upper crack section and on the rigidity contrast between the two media. From the welded boundary condition at the interface between medium 1 and medium 2, a stress drop discontinuity condition is obtained which can be fulfilled if the stress drop in the upper medium is lower than required for a planar through-going surface: as a corollary, a vertically dipping strike-slip fault at depth may cross the interface with a sedimentary layer, provided that the shallower section is suitably inclined (fault "refraction"); this result has important implications for our understanding of the complexity of the fault system in the SISZ; in particular, we may understand the observed offset of secondary surface fractures with respect to the strike direction of the seismic fault. The results of this model also suggest that further fractures can develop in the opposite quadrant, and so a second model, describing fault branching in the upper layer, is proposed. Like the previous model, this model can be applied only when the stress drop in the shallow layer is lower than the value prescribed for a vertical planar crack surface. Alternative solutions must be considered if the stress drop in the upper layer is higher than in the other layer, which may be the case when anelastic processes relax deviatoric stress in layer 2.
In such a case one through-going crack cannot fulfil the welded boundary conditions, and unwelding of the interface may take place. We have solved this problem within the theory of fracture mechanics, employing the boundary element method. The fault terminates against the interface in a T-shaped configuration, whose segments interact with each other: the lateral extent of the unwelded surface can be computed in terms of the main fault parameters, and the resulting stress field in the shallower layer can be modelled. A wide stripe of high and nearly uniform shear stress develops above the unwelded surface, whose width is controlled by the lateral extent of unwelding. Secondary shear fractures may then open within this stripe, according to the Coulomb failure criterion, and the depth of fractures opening in mixed mode may be computed and compared with the well-studied fault complexities observed in the field. In the absence of the T-shaped decollement structure, the stress concentration above the seismic fault would be much higher and narrower, and difficult to reconcile with observations.
Abstract:
The research activity carried out during the PhD course focused on the development of mathematical models of some cognitive processes and on their validation against data in the literature, with a double aim: i) to achieve a better interpretation and explanation of the great amount of data obtained on these processes with different methodologies (electrophysiological recordings in animals; neuropsychological, psychophysical and neuroimaging studies in humans); ii) to exploit model predictions and results to guide future research and experiments. In particular, the research activity focused on two projects: 1) the first concerns the development of networks of neural oscillators, in order to investigate the mechanisms of synchronization of neural oscillatory activity during cognitive processes such as object recognition, memory, language and attention; 2) the second concerns the mathematical modelling of multisensory integration processes (e.g. visual-acoustic), which occur in several cortical and subcortical regions (in particular in a subcortical structure named the Superior Colliculus (SC)) and which are fundamental for orienting motor and attentive responses to stimuli from the external world. This activity was carried out in collaboration with the Center for Studies and Researches in Cognitive Neuroscience of the University of Bologna (in Cesena) and the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA). PART 1. The representation of objects in a number of cognitive functions, like perception and recognition, involves distributed processes in different cortical areas. One of the main neurophysiological questions concerns how the correlation between these disparate areas is realized, in order to group together the characteristics of the same object (binding problem) and to keep segregated the properties belonging to different objects simultaneously present (segmentation problem).
Different theories have been proposed to address these questions (Barlow, 1972). One of the most influential is the so-called "assembly coding" theory, postulated by Singer (2003), according to which: 1) an object is well described by a few fundamental properties, processed in different and distributed cortical areas; 2) recognition of the object is realized through the simultaneous activation of the cortical areas representing its different features; 3) groups of properties belonging to different objects are kept separated in the time domain. In Chapter 1.1 and Chapter 1.2 we present two neural network models for object recognition based on the "assembly coding" hypothesis. These models are networks of Wilson-Cowan oscillators which exploit: i) two high-level "Gestalt rules" (the similarity and previous-knowledge rules) to realize the functional link between elements of different cortical areas representing properties of the same object (binding problem); ii) the synchronization of neural oscillatory activity in the γ-band (30-100 Hz) to segregate in time the representations of different objects simultaneously present (segmentation problem). These models are able to recognize and reconstruct multiple simultaneous external objects, even in difficult cases (some wrong or missing features, shared features, superimposed noise). In Chapter 1.3 the previous models are extended to realize a semantic memory in which sensory-motor representations of objects are linked with words. To this aim, the previously developed network, devoted to the representation of objects as collections of sensory-motor features, is reciprocally linked with a second network devoted to the representation of words (lexical network). Synapses linking the two networks are trained via a time-dependent Hebbian rule during a training period in which individual objects are presented together with the corresponding words.
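The building block of the object-recognition networks above is the Wilson-Cowan oscillator, a pair of coupled excitatory/inhibitory populations. The sketch below integrates a single unit with the Euler method; all parameter values are illustrative, not those used in the thesis:

```python
import numpy as np

def simulate_wilson_cowan(t_end=100.0, dt=0.01):
    """Euler integration of one Wilson-Cowan excitatory-inhibitory unit.
    Returns the excitatory activity trace; parameters are illustrative."""
    def sigm(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Coupling weights and external drives (assumed values)
    wee, wei, wie, wii = 16.0, 12.0, 15.0, 3.0
    pe, pi = 1.25, 0.0
    tau_e, tau_i = 1.0, 1.0

    e, i = 0.1, 0.1
    trace = []
    for _ in range(int(round(t_end / dt))):
        de = (-e + sigm(wee * e - wei * i + pe)) / tau_e
        di = (-i + sigm(wie * e - wii * i + pi)) / tau_i
        e += dt * de
        i += dt * di
        trace.append(e)
    return np.array(trace)

e_trace = simulate_wilson_cowan()
```

In the full models, many such units are coupled through binding synapses, and objects are read out from which units oscillate in phase within the γ-band.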
Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from linguistic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In perspective, extended versions of this model may be used to test conceptual theories and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits). PART 2. The ability of the brain to integrate information from different sensory channels is fundamental to the perception of the external world (Stein et al., 1993). It is well documented that a number of extraprimary areas have neurons capable of such a task; one of the best known of these is the superior colliculus (SC). This midbrain structure receives auditory, visual and somatosensory inputs from different subcortical and cortical areas and is involved in the control of orientation to external events (Wallace et al., 1993). SC neurons respond to each of these sensory inputs separately but are also capable of integrating them (Stein et al., 1993), so that the response to combined multisensory stimuli is greater than the responses to the individual component stimuli (enhancement). This enhancement is proportionately greater when the modality-specific paired stimuli are weaker (the principle of inverse effectiveness). Several studies have shown that the capability of SC neurons to engage in multisensory integration requires inputs from the cortex: primarily the anterior ectosylvian sulcus (AES), but also the rostral lateral suprasylvian sulcus (rLS).
If these cortical inputs are deactivated, the response of SC neurons to cross-modal stimulation is no different from that evoked by the most effective of the individual component stimuli (Jiang et al., 2001). This phenomenon can be better understood through mathematical models: the use of mathematical models and neural networks can place the mass of data that has been accumulated about this phenomenon and its underlying circuitry into a coherent theoretical structure. In Chapter 2.1 a simple neural network model of this structure is presented; the model is able to reproduce a large number of SC behaviours, such as multisensory enhancement, multisensory and unisensory depression, and inverse effectiveness. In Chapter 2.2 this model is improved by incorporating more neurophysiological knowledge about the neural circuitry underlying SC multisensory integration, in order to suggest possible physiological mechanisms through which it is effected. This endeavour was realized in collaboration with Professor B. E. Stein and Doctor B. Rowland during the six-month period spent at the Department of Neurobiology and Anatomy of the Wake Forest University School of Medicine (NC, USA), within the Marco Polo Project. The model includes four distinct unisensory areas devoted to a topological representation of external stimuli. Two of them represent subregions of the AES (i.e., FAES, an auditory area, and AEV, a visual area) and send descending inputs to the ipsilateral SC; the other two represent subcortical areas (one auditory and one visual) projecting ascending inputs to the same SC. Different competitive mechanisms, realized by means of populations of interneurons, are used in the model to reproduce the different behaviour of SC neurons under conditions of cortical activation and deactivation.
The model, with a single set of parameters, is able to mimic the behaviour of SC multisensory neurons in response to very different stimulus conditions (multisensory enhancement, inverse effectiveness, within- and cross-modal suppression of spatially disparate stimuli), with the cortex functional or deactivated, and with a particular type of membrane receptor (NMDA receptors) active or inhibited. All these results agree with the data reported in Jiang et al. (2001) and in Binns and Salt (1996). The model suggests that non-linearities in neural responses and in synaptic (excitatory and inhibitory) connections can explain the fundamental aspects of multisensory integration, and it provides a biologically plausible hypothesis about the underlying circuitry.
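The enhancement and inverse-effectiveness behaviours described above are conventionally quantified with the multisensory enhancement index. The sketch below computes it for two invented response magnitudes, illustrating how weaker unisensory responses yield proportionately larger enhancement:

```python
def multisensory_enhancement(cm_response, best_unisensory):
    """Standard multisensory enhancement index, in percent:
        ME = (CM - max unisensory) / max unisensory * 100
    where CM is the response to the combined cross-modal stimulus."""
    return (cm_response - best_unisensory) / best_unisensory * 100.0

# Inverse effectiveness (illustrative numbers): a weakly driven neuron
# shows far larger proportional enhancement than a strongly driven one.
weak = multisensory_enhancement(cm_response=9.0, best_unisensory=3.0)
strong = multisensory_enhancement(cm_response=30.0, best_unisensory=25.0)
```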
Abstract:
The objective of this dissertation is to develop and test a predictive model for the passive kinematics of human joints based on the energy minimization principle. To pursue this goal, the tibio-talar joint is chosen as a reference joint, for the reduced number of bones involved and for its simplicity compared with other synovial joints such as the knee or the wrist. Starting from knowledge of the articular surface shapes, the spatial trajectory of passive motion is obtained as the envelope of the joint configurations that maximize surface congruence. An increase in joint congruence corresponds to an improved capability of distributing an applied load, allowing the joint to attain greater strength with less material. Thus, joint congruence maximization is a simple geometric way to capture the idea of joint energy minimization. The results obtained are validated against in vitro measured trajectories. Preliminary comparisons provide strong support for the predictions of the theoretical model.