967 results for Modeling methods
Abstract:
In this work, a study on the role of the long-range term of excess Gibbs energy models in the modeling of aqueous systems containing polymers and salts is presented. Four different approaches to accounting for the presence of the polymer in the long-range term were considered, and simulations were conducted for aqueous solutions of three different salts. The analysis of water activity curves showed that, in all cases, a liquid-phase separation may be introduced by the sole presence of the polymer in the long-range term, regardless of how it is taken into account. The results lead to the conclusion that there is no single exact solution to this problem and that any of the approaches may introduce inconsistencies.
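As context for the kind of calculation involved, here is a minimal, hypothetical sketch (not the authors' model) of how a Debye-Hückel long-range contribution to ln γ± changes under two of the possible treatments of the polymer: ignored entirely, or lumped into the solvent through its density, dielectric constant, and the mass basis of the ionic strength. All property values are illustrative.

```python
# Hypothetical sketch: Debye-Hueckel limiting-law contribution to ln(gamma+-)
# for a 1:1 salt, with the polymer either ignored or counted as solvent.
import numpy as np
from scipy import constants as c

def A_phi(rho_solvent, eps_r, T=298.15):
    """Debye-Hueckel constant A_phi in (kg/mol)^0.5; rho_solvent in kg/m^3."""
    inner = c.e**2 / (4.0 * np.pi * c.epsilon_0 * eps_r * c.k * T)
    return np.sqrt(2.0 * np.pi * c.N_A * rho_solvent) * inner**1.5 / 3.0

def ln_gamma_dh(I, rho, eps_r):
    """Limiting law ln(gamma+-) = -3*A_phi*|z+ z-|*sqrt(I), with |z+ z-| = 1."""
    return -3.0 * A_phi(rho, eps_r) * np.sqrt(I)

m_salt, w_water, w_polymer = 0.5, 0.85, 0.15   # molality (mol/kg water), mass fractions

# (a) polymer ignored: pure-water solvent properties, ionic strength per kg of water
print(ln_gamma_dh(I=m_salt, rho=997.0, eps_r=78.4))

# (b) polymer treated as part of the solvent: illustrative mixed-solvent
#     properties and ionic strength per kg of (water + polymer)
print(ln_gamma_dh(I=m_salt * w_water / (w_water + w_polymer),
                  rho=1030.0, eps_r=70.0))
```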
Abstract:
In this work, the oxidation of the model pollutant phenol has been studied by means of the O3, O3-UV, and O3-H2O2 processes. Experiments were carried out in a fed-batch system to investigate the effects of the initial dissolved organic carbon (DOC) concentration, the initial ozone concentration in the gas phase, the presence or absence of UVC radiation, and the initial hydrogen peroxide concentration. The experimental results were used to model the degradation processes by neural networks in order to simulate DOC-time profiles and evaluate the relative importance of the process variables.
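A hedged sketch of the kind of neural-network model described: a small feed-forward network mapping process conditions and reaction time to DOC. The feature set, network size and training data below are illustrative assumptions, not the authors' experimental design.

```python
# Illustrative feed-forward network for DOC(t) prediction; data are synthetic.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# features: [DOC_0 (mgC/L), O3_in (g/m3), UV on/off, H2O2_0 (mM), time (min)]
X = rng.uniform([50, 10, 0, 0, 0], [500, 80, 1, 10, 120], size=(400, 5))
X[:, 2] = np.round(X[:, 2])                       # UVC lamp on/off
# synthetic pseudo-first-order decay, only to make the sketch runnable
k = 0.005 + 0.01 * X[:, 2] + 0.0004 * X[:, 1] + 0.002 * X[:, 3]
y = X[:, 0] * np.exp(-k * X[:, 4])                # DOC at time t

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
print(model.predict([[200.0, 40.0, 1.0, 5.0, 60.0]]))   # predicted DOC after 60 min
```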
Abstract:
Cooling towers are widely used as the cooling medium in many industrial and utility plants, and their thermal performance is of vital importance. Despite the wide interest in cooling tower design and rating, and their importance in energy conservation, there are few investigations concerning the integrated analysis of cooling systems. This work presents an approach for the systemic performance analysis of a cooling water system that combines experimental design with mathematical modeling. An experimental investigation was carried out to characterize the mass transfer in the packing of the cooling tower as a function of the liquid and gas flow rates, and the results were within the range of the measurement accuracy. An integrated model was then developed that relies on the mass and heat transfer of the cooling tower, as well as on the hydraulic and thermal interactions with a heat exchanger network. The integrated model of the cooling water system was simulated, and the temperature results agree with experimental data from the real operation of the pilot plant. A case study illustrates the interactions in the system and the need for a systemic analysis of the cooling water system. The proposed mathematical and experimental analysis should be useful for the performance analysis of real-world cooling water systems. (C) 2009 Elsevier Ltd. All rights reserved.
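For orientation, a minimal sketch (not the authors' integrated model) of the classical Merkel-type evaluation of the packing: the Merkel number KaV/L obtained by four-point Chebyshev integration for given water inlet/outlet temperatures, inlet-air wet-bulb temperature and L/G ratio. The psychrometric correlation and the operating point are illustrative.

```python
# Merkel number KaV/L by four-point Chebyshev integration (illustrative values).
import numpy as np

def h_sat(T):
    """Enthalpy of saturated air (kJ/kg dry air) at temperature T (degC)."""
    p_ws = 0.6112 * np.exp(17.62 * T / (243.12 + T))     # Magnus equation, kPa
    W = 0.622 * p_ws / (101.325 - p_ws)                  # humidity ratio at saturation
    return 1.006 * T + W * (2501.0 + 1.86 * T)

def merkel_number(T_in, T_out, T_wb, L_over_G, cpw=4.186):
    h_a1 = h_sat(T_wb)                     # inlet-air enthalpy ~ f(wet bulb) only
    crange, acc = T_in - T_out, 0.0        # cooling range
    for f in (0.1, 0.4, 0.6, 0.9):         # Chebyshev points over the water temperature range
        Tw = T_out + f * crange
        h_a = h_a1 + L_over_G * cpw * (Tw - T_out)   # air enthalpy from the energy balance
        acc += 1.0 / (h_sat(Tw) - h_a)               # enthalpy driving force (h_sw - h_a)
    return cpw * crange * acc / 4.0

print(merkel_number(T_in=38.0, T_out=30.0, T_wb=25.0, L_over_G=1.2))
```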
Abstract:
We address here aspects of the implementation of a memory evolutive system (MES), based on the model proposed by A. Ehresmann and J. Vanbremeersch (2007), by means of a simulated network of spiking neurons with time-dependent plasticity. We point out the advantages and challenges of applying category theory to the representation of cognition through the MES architecture. We then discuss the minimum requirements that an artificial neural network (ANN) should fulfill in order to be capable of expressing the categories, and the mappings between them, that underlie the MES. We conclude that a pulsed ANN based on Izhikevich's formal neuron with STDP (spike-timing-dependent plasticity) has sufficient dynamical properties to achieve these requirements, provided it can cope with the topological requirements. Finally, we present some perspectives for future research concerning the proposed ANN topology.
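As a concrete reminder of the building blocks mentioned above, a small sketch of Izhikevich's formal neuron (regular-spiking parameters) and a pair-based STDP weight update. The network topology and the MES-specific categorical structure are not modeled here.

```python
# Single Izhikevich neuron (Euler integration) and a pair-based STDP rule.
import numpy as np

def izhikevich(I, T=1000, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Return spike times (ms) of one Izhikevich neuron driven by current I(t)."""
    v, u, spikes = -65.0, -65.0 * b, []
    for t in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I[t])
        u += dt * a * (b * v - u)
        if v >= 30.0:                      # spike, then reset
            spikes.append(t * dt)
            v, u = c, u + d
    return np.array(spikes)

def stdp_dw(dt_pre_post, A_plus=0.01, A_minus=0.012, tau=20.0):
    """Pair-based STDP: potentiate when pre precedes post (dt > 0), else depress."""
    return np.where(dt_pre_post > 0,
                    A_plus * np.exp(-dt_pre_post / tau),
                    -A_minus * np.exp(dt_pre_post / tau))

spikes = izhikevich(I=10.0 * np.ones(1000))
print(len(spikes), stdp_dw(np.array([5.0, -5.0])))
```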
Abstract:
The canonical representation of speech constitutes a perfect reconstruction (PR) analysis-synthesis system. Its parameters are the autoregressive (AR) model coefficients, the pitch period, and the voiced and unvoiced components of the excitation represented as transform coefficients. Each set of parameters may be operated on independently. A time-frequency unvoiced excitation (TFUNEX) model is proposed that has high time resolution and selective frequency resolution. Improved time-frequency fit is obtained by using, for antialiasing cancellation, the clustering of pitch-synchronous transform tracks defined in the modulation transform domain. The TFUNEX model delivers high-quality speech while compressing the unvoiced excitation representation about 13 times relative to its raw transform coefficient representation for wideband speech.
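A sketch of the canonical AR analysis-synthesis skeleton the model builds on: estimate the AR (LPC) coefficients, inverse-filter to obtain the excitation, and reconstruct the frame exactly by filtering the excitation through 1/A(z). The TFUNEX modeling of the unvoiced excitation itself is not reproduced here; the test signal is synthetic.

```python
# LPC analysis-synthesis round trip (perfect reconstruction of one frame).
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc(frame, order=16):
    """AR polynomial [1, a1, ..., ap] via the autocorrelation method."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = solve_toeplitz(r[:order], r[1:order + 1])
    return np.concatenate(([1.0], -a))

fs = 16000
t = np.arange(0, 0.032, 1.0 / fs)
frame = np.sin(2 * np.pi * 120 * t) \
        + 0.1 * np.random.default_rng(1).standard_normal(t.size)

a = lpc(frame * np.hamming(frame.size))        # analysis on a windowed copy
excitation = lfilter(a, [1.0], frame)          # prediction residual e[n]
resynth = lfilter([1.0], a, excitation)        # excitation filtered through 1/A(z)
print(np.max(np.abs(resynth - frame)))         # ~ numerical round-off
```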
Abstract:
One-way master-slave (OWMS) chain networks are widely used in clock distribution systems due to their reliability and low cost. As the network nodes are phase-locked loops (PLLs), the double-frequency jitter (DFJ) caused by their phase detectors impairs the performance of the clock recovery process found in communication systems and instrumentation applications. A nonlinear model for OWMS chain networks with (P + 1)-order PLLs as slave nodes is presented that takes the DFJ into account. Since higher-order filters are more effective in filtering the DFJ, the synchronous-state stability conditions for an OWMS chain network with third-order nodes are derived, relating the loop gain and the filter coefficients. Design examples using these conditions are discussed.
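As a reminder of where the double-frequency term comes from, the textbook multiplier-type phase detector (not necessarily the exact detector model adopted here) produces, besides the useful phase-error term, a component at twice the carrier frequency:

```latex
v_d(t) = K_m \sin(\omega t + \theta_i)\cos(\omega t + \theta_o)
       = \frac{K_m}{2}\Big[\sin(\theta_i - \theta_o) + \sin\big(2\omega t + \theta_i + \theta_o\big)\Big].
```

Whatever part of the 2ω component the loop filter does not attenuate modulates the VCO and propagates down the chain as DFJ, which is why higher-order (here, third-order) loop filters are of interest.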
Abstract:
Functional magnetic resonance imaging (fMRI) has become an important tool in neuroscience due to its noninvasiveness and high spatial resolution compared to other methods such as PET or EEG. The characterization of neural connectivity has been the aim of several cognitive studies, as the interactions among cortical areas lie at the heart of many brain dysfunctions and mental disorders. Several methods, such as correlation analysis, structural equation modeling, and dynamic causal models, have been proposed to quantify connectivity strength. An important concept related to connectivity modeling is Granger causality, which is one of the most popular definitions for the measure of directional dependence between time series. In this article, we propose the application of partial directed coherence (PDC) to the connectivity analysis of multisubject fMRI data using a multivariate bootstrap. PDC is a frequency-domain counterpart of Granger causality and has become a very prominent tool in EEG studies. The resulting frequency decomposition of connectivity is useful in separating interactions originating in neural modules from those originating in scanner noise, breathing, and heartbeat. A real fMRI dataset of six subjects executing a language-processing protocol was used for the connectivity analysis. Hum Brain Mapp 30:452-461, 2009. (C) 2007 Wiley-Liss, Inc.
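A sketch of partial directed coherence computed from a multivariate autoregressive (MVAR) fit, illustrating the quantity described above; the data are simulated and the multisubject bootstrap is not reproduced.

```python
# PDC from an MVAR model fitted with statsmodels (simulated 3-channel data).
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n, k = 2000, 3
x = np.zeros((n, k))
for t in range(1, n):                    # ground truth: 0 -> 1 -> 2
    x[t, 0] = 0.5 * x[t - 1, 0] + rng.standard_normal()
    x[t, 1] = 0.4 * x[t - 1, 1] + 0.4 * x[t - 1, 0] + rng.standard_normal()
    x[t, 2] = 0.4 * x[t - 1, 2] + 0.4 * x[t - 1, 1] + rng.standard_normal()

A = VAR(x).fit(maxlags=3).coefs          # shape (p, k, k): A_1 ... A_p

def pdc(A, f):
    """PDC matrix pi[i, j](f): strength of the connection j -> i at frequency f."""
    Abar = np.eye(A.shape[1], dtype=complex)
    for r in range(A.shape[0]):
        Abar -= A[r] * np.exp(-2j * np.pi * f * (r + 1))
    return np.abs(Abar) / np.sqrt((np.abs(Abar) ** 2).sum(axis=0))

print(np.round(pdc(A, f=0.1), 2))        # f in cycles/sample
```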
Abstract:
Cementitious stabilization of aggregates and soils is an effective technique for increasing the stiffness of base and subbase layers. Furthermore, cementitious bases can improve the fatigue behavior of asphalt surface layers and reduce subgrade rutting over the short and long term. However, stabilization can lead to additional distresses, such as shrinkage and fatigue, in the stabilized layers. Extensive research has tested and characterized these materials experimentally; however, very little of this research attempts to correlate the mechanical properties of the stabilized layers with their performance. The Mechanistic-Empirical Pavement Design Guide (MEPDG) provides a promising theoretical framework for modeling pavements containing cementitiously stabilized materials (CSMs). However, significant improvements are needed to bring the modeling of semirigid pavements in the MEPDG to the same level as that of flexible and rigid pavements. Furthermore, the MEPDG does not model CSMs in a manner similar to hot-mix asphalt or portland cement concrete materials. As a result, performance gains from stabilized layers are difficult to assess with the MEPDG. The current characterization of CSMs was evaluated, and issues with CSM modeling and characterization in the MEPDG were discussed. Addressing these issues will help designers quantify the benefits of stabilization for pavement service life.
Abstract:
Honeycomb structures have been used in different engineering fields. In civil engineering, honeycomb fiber-reinforced polymer (FRP) structures have been used as bridge decks to rehabilitate highway bridges in the United States. In this work, a simplified finite-element modeling technique for honeycomb FRP bridge decks is presented. The motivation is that the complex geometry of honeycomb FRP decks, combined with computational limits, may prevent modeling these decks in detail. The results from static and modal analyses indicate that the proposed technique provides a viable tool for modeling the complex geometry of honeycomb FRP bridge decks. The modeling of other bridge components (e.g., steel girders, steel guardrails, deck-to-girder connections, and pier supports) is also presented in this work.
Abstract:
Double-frequency jitter is one of the main problems in clock distribution networks. In previous works, some analytical and numerical aspects of this phenomenon were studied and results were obtained for one-way master-slave (OWMS) architectures. Here, an experimental apparatus is implemented that allows the power of the double-frequency signal to be measured and the theoretical conjectures to be confirmed. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
Survival models involving frailties are commonly applied in studies where correlated event-time data arise due to natural or artificial clustering. In this paper we present an application of such models in the animal breeding field. Specifically, a mixed survival model with a multivariate correlated frailty term is proposed for the analysis of data from over 3611 Brazilian Nellore cattle. The primary aim is to evaluate parental genetic effects on the trait defined as the length, in days, of the period that the progeny need to achieve a commercially specified standard weight gain. This trait is not measured directly but can be estimated from growth data. The results point to the importance of genetic effects and suggest that these models constitute a valuable data analysis tool for beef cattle breeding.
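A generic frailty specification of this kind (the authors' exact parameterization is not given in the abstract) writes the hazard of progeny j of family i as a baseline hazard scaled by a family-specific random effect:

```latex
h_{ij}(t \mid z_i) = z_i\, h_0(t)\, \exp\!\big(\mathbf{x}_{ij}^{\top}\boldsymbol{\beta}\big),
\qquad s_i = \log z_i,
```

where the vector of log-frailties carries the correlated genetic effect, for instance s ~ N(0, σ²A) with A an additive relationship matrix.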
Abstract:
Interval-censored survival data, in which the event of interest is not observed exactly but is only known to occur within some time interval, occur very frequently. In some situations, event times might be censored into different, possibly overlapping intervals of variable widths; however, in other situations, information is available for all units at the same observed visit time. In the latter cases, interval-censored data are termed grouped survival data. Here we present alternative approaches for analyzing interval-censored data. We illustrate these techniques using a survival data set involving mango tree lifetimes. This study is an example of grouped survival data.
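For reference, every approach for interval-censored data rests on the same likelihood form: if the event time T_i is only known to lie in (L_i, R_i] and S denotes the survival function, then

```latex
L(\boldsymbol{\theta}) = \prod_{i=1}^{n} \big[\, S(L_i;\boldsymbol{\theta}) - S(R_i;\boldsymbol{\theta}) \,\big],
```

with a right-censored unit contributing S(L_i); in the grouped case the interval endpoints are the common visit times shared by all units.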
Abstract:
A four-parameter generalization of the Weibull distribution capable of modeling a bathtub-shaped hazard rate function is defined and studied. The beauty and importance of this distribution lie in its ability to model monotone as well as non-monotone failure rates, which are quite common in lifetime and reliability problems. The new distribution has a number of well-known lifetime distributions as special sub-models, such as the Weibull, extreme value, exponentiated Weibull, generalized Rayleigh and modified Weibull distributions, among others. We derive two infinite sum representations for its moments. The density of the order statistics is obtained. The method of maximum likelihood is used for estimating the model parameters, and the observed information matrix is obtained. Two applications are presented to illustrate the proposed distribution. (C) 2008 Elsevier B.V. All rights reserved.
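The abstract does not reproduce the functional form, so the sketch below uses, as an assumption, the generalized modified Weibull family F(x) = [1 - exp(-αx^γ e^{λx})]^β, which nests the sub-models listed above, to show a bathtub-shaped hazard numerically.

```python
# Hazard of an assumed four-parameter modified-Weibull-type family (bathtub check).
import numpy as np

def cdf(x, alpha, gamma, lam, beta):
    return (1.0 - np.exp(-alpha * x**gamma * np.exp(lam * x)))**beta

def pdf(x, alpha, gamma, lam, beta):
    u = alpha * x**gamma * np.exp(lam * x)
    du = alpha * x**(gamma - 1.0) * (gamma + lam * x) * np.exp(lam * x)
    return beta * (1.0 - np.exp(-u))**(beta - 1.0) * np.exp(-u) * du

def hazard(x, *theta):
    return pdf(x, *theta) / (1.0 - cdf(x, *theta))

x = np.linspace(0.05, 3.0, 200)
h = hazard(x, 0.5, 0.5, 1.0, 0.8)          # parameters chosen to give a bathtub shape
print(h[0], h.min(), h[-1])                # high early, a minimum, then increasing
```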
Abstract:
In this study, regression models are evaluated for grouped survival data when the effect of the censoring time is considered in the model and the regression structure is modeled through four link functions. The methodology for grouped survival data is based on life tables, and the times are grouped into k intervals so that ties are eliminated. The data modeling is thus performed with discrete lifetime regression models. The model parameters are estimated by the maximum likelihood and jackknife methods. To detect influential observations in the proposed models, diagnostic measures based on case deletion, termed global influence, and measures based on small perturbations in the data or in the model, referred to as local influence, are used. In addition to these measures, the total local influence is also employed. Various simulation studies are performed to compare the performance of the four link functions of the regression models for grouped survival data under different parameter settings, sample sizes and numbers of intervals. Finally, a data set is analyzed using the proposed regression models. (C) 2010 Elsevier B.V. All rights reserved.
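In this setting the interval-specific (discrete) hazard p_{ik} = P(T_i ∈ (t_{k-1}, t_k] | T_i > t_{k-1}, x_i) is tied to the covariates through a link function g. Four links commonly used for grouped survival data (whether these coincide exactly with the four studied is an assumption) are:

```latex
g(p_{ik}) = \gamma_k + \mathbf{x}_i^{\top}\boldsymbol{\beta}, \qquad
g(p) \in \Big\{\, \log\tfrac{p}{1-p},\;\; \Phi^{-1}(p),\;\; \log\{-\log(1-p)\},\;\; -\log\{-\log(p)\} \,\Big\},
```

where the complementary log-log choice reproduces a grouped proportional hazards model.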
Abstract:
Grass reference evapotranspiration (ETo) is an important agrometeorological parameter for climatological and hydrological studies, as well as for irrigation planning and management. There are several methods to estimate ETo, but their performance in different environments is diverse, since all of them have some empirical background. The FAO Penman-Monteith (FAO PM) method has been considered a universal standard for estimating ETo for more than a decade. This method considers many parameters related to the evapotranspiration process: net radiation (Rn), air temperature (T), vapor pressure deficit (Δe), and wind speed (U); and it has presented very good results when compared to data from lysimeters populated with short grass or alfalfa. In some conditions, the use of the FAO PM method is restricted by the lack of input variables. In these cases, when data are missing, the option is to calculate ETo by the FAO PM method using estimated input variables, as recommended by FAO Irrigation and Drainage Paper 56. Based on that, the objective of this study was to evaluate the performance of the FAO PM method for estimating ETo when Rn, Δe, and U data are missing, in Southern Ontario, Canada. Other alternative methods were also tested for the region: Priestley-Taylor, Hargreaves, and Thornthwaite. Data from 12 locations across Southern Ontario, Canada, were used to compare ETo estimated by the FAO PM method with a complete data set and with missing data. The alternative ETo equations were also tested and calibrated for each location. When relative humidity (RH) and U data were missing, the FAO PM method was still a very good option for estimating ETo for Southern Ontario, with RMSE smaller than 0.53 mm day⁻¹. For these cases, U data were replaced by the normal values for the region and Δe was estimated from temperature data. The Priestley-Taylor method was also a good option for estimating ETo when U and Δe data were missing, mainly when calibrated locally (RMSE = 0.40 mm day⁻¹). When Rn was missing, the FAO PM method was not good enough for estimating ETo, with RMSE increasing to 0.79 mm day⁻¹. When only T data were available, the adjusted Hargreaves and modified Thornthwaite methods were better options for estimating ETo than the FAO PM method, since the RMSEs from these methods, respectively 0.79 and 0.83 mm day⁻¹, were significantly smaller than that obtained by FAO PM (RMSE = 1.12 mm day⁻¹). (C) 2009 Elsevier B.V. All rights reserved.
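For reference, a minimal sketch of the standard FAO-56 Penman-Monteith daily ETo computation and the Priestley-Taylor alternative, using textbook/FAO-56 default coefficients rather than the locally calibrated values obtained in the study.

```python
# FAO-56 Penman-Monteith and Priestley-Taylor daily reference ET (mm/day).
import numpy as np

def delta_svp(T):
    """Slope of the saturation vapour pressure curve (kPa/degC) at T (degC)."""
    es = 0.6108 * np.exp(17.27 * T / (T + 237.3))
    return 4098.0 * es / (T + 237.3) ** 2

def eto_fao_pm(T, Rn, u2, vpd, G=0.0, z=200.0):
    """T in degC, Rn and G in MJ m-2 d-1, u2 in m/s, vpd in kPa, z = elevation (m)."""
    P = 101.3 * ((293.0 - 0.0065 * z) / 293.0) ** 5.26    # atmospheric pressure, kPa
    gamma = 0.000665 * P                                   # psychrometric constant
    d = delta_svp(T)
    num = 0.408 * d * (Rn - G) + gamma * (900.0 / (T + 273.0)) * u2 * vpd
    return num / (d + gamma * (1.0 + 0.34 * u2))

def eto_priestley_taylor(T, Rn, G=0.0, alpha=1.26, gamma=0.066):
    d = delta_svp(T)
    return alpha * d / (d + gamma) * 0.408 * (Rn - G)      # 0.408 converts MJ m-2 d-1 to mm

print(eto_fao_pm(T=20.0, Rn=15.0, u2=2.0, vpd=1.0))
print(eto_priestley_taylor(T=20.0, Rn=15.0))
```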