940 results for second-order models
Abstract:
This article presents a new identification method for non-minimum-phase systems based on the step response. The proposed approach provides an approximate second-order model while avoiding complex experimental designs. The method is a closed-form identification algorithm based on characteristic points of the step response of second-order non-minimum-phase systems. It is validated using different linear models whose inverse response ranges from 3.5% to 38% of the steady-state response. Simulations show that satisfactory results can be obtained with the proposed identification procedure, with the identified parameters exhibiting mean relative errors lower than those obtained with Balaguer's method.
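As a reference for the structure being identified, a second-order non-minimum-phase model with inverse response is commonly written in the following generic transfer-function form (a standard parameterization, not necessarily the exact one used in the article):
\[
G(s) = \frac{K\,(1 - T_z s)}{(\tau_1 s + 1)(\tau_2 s + 1)}, \qquad T_z,\ \tau_1,\ \tau_2 > 0,
\]
where the right-half-plane zero at \(s = 1/T_z\) produces the initial inverse response to a step input, and the identification task is to recover \(K\), \(T_z\), \(\tau_1\) and \(\tau_2\) from characteristic points of the measured step response.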
Abstract:
The textile industry generates large volumes of effluents, which are colored and polluting due to the dyes in their composition. Among the decontamination methods used for treatment is biosorption, which removes toxic substances using biosorbents obtained from agricultural residues and industrial by-products. The main objective of this work was to study the removal of the dye Preto Reafix Super 2R from aqueous solutions by biosorption onto brewer's spent grain (malt bagasse), focusing on the kinetics and on the equilibrium between the biosorbent and the dye. In a first stage, the influence of the operating parameters was studied: the mean particle diameter of the biosorbent, the solution pH, and the stirring speed. The optimal biosorption conditions were pH 2, a stirring speed of 150 rpm, and unsieved biomass. Subsequently, the pseudo-first-order, pseudo-second-order, and intraparticle diffusion kinetic models were fitted to the adsorption kinetics data, and the influence of temperature on the contact time needed to reach equilibrium was also evaluated. The pseudo-second-order model gave the best fit, with a correlation coefficient (R2) of approximately 1. From equilibrium tests carried out at different dye concentrations, the Langmuir, Freundlich, and Temkin isotherms were fitted to the experimental results; the Langmuir model yielded the most significant parameters, with a maximum removal capacity (qmax) of 40.16 mg.g-1. Analysis of the thermodynamic parameters showed that the adsorption process occurs spontaneously and is endothermic, and that randomness at the solid/solution interface increases during the process as a result of the interactions that take place.
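For reference, the best-fitting kinetic and isotherm models named above are usually written in the following standard forms, shown here only to make the fitted quantities explicit:
\[
\frac{t}{q_t} = \frac{1}{k_2 q_e^2} + \frac{t}{q_e} \quad \text{(pseudo-second-order, linearized)},
\qquad
q_e = \frac{q_{max} K_L C_e}{1 + K_L C_e} \quad \text{(Langmuir)},
\]
where \(q_t\) and \(q_e\) are the amounts adsorbed at time \(t\) and at equilibrium (mg g\(^{-1}\)), \(k_2\) is the pseudo-second-order rate constant, \(C_e\) is the equilibrium dye concentration, \(K_L\) is the Langmuir constant, and \(q_{max}\) is the maximum capacity reported above as 40.16 mg g\(^{-1}\).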
Abstract:
Heavy metals are used in many industrial processes, and their discharge can harm the environment, becoming a serious problem. Many methods for wastewater treatment have been reported in the literature, but many of them have high cost and low efficiency. The adsorption process has proven effective for the removal of metal ions. This paper evaluates the adsorption capacity of vermiculite as an adsorbent for the removal of heavy metals from synthetic solutions. The mineral vermiculite was characterized by different techniques: specific surface area analysis by the BET method, X-ray diffraction, X-ray fluorescence, infrared spectroscopy, laser particle size analysis, and specific gravity. The physical characteristics of the material were appropriate for the adsorption study. The adsorption experiments were carried out by the finite bath method in synthetic solutions of copper, nickel, cadmium, lead, and zinc. The results showed that vermiculite has a high adsorption potential, removing about 100% of the ions; for solutions containing about 85 ppm of metal, the removal capacities were 8.09 mg/g for cadmium, 8.39 mg/g for copper, 8.40 mg/g for lead, 8.26 mg/g for zinc, and 8.38 mg/g for nickel. The experimental data fit the Langmuir and Freundlich models, and the kinetic data showed a good correlation with the pseudo-second-order model. A competition study among the metals was also conducted using vermiculite as the adsorbent; the results showed that the presence of several metals in solution does not influence their removal at low concentrations, with approximately 100% of all metals present in the solutions being removed.
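A minimal sketch of how Langmuir and Freundlich isotherms can be fitted to batch equilibrium data of this kind, using hypothetical concentration/uptake arrays rather than the study's actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g) -- illustrative only
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 85.0])
qe = np.array([3.1, 5.2, 6.8, 7.6, 8.1, 8.4])

def langmuir(Ce, qmax, KL):
    # Langmuir isotherm: monolayer adsorption on energetically uniform sites
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    # Freundlich isotherm: empirical power law for heterogeneous surfaces
    return KF * Ce ** (1.0 / n)

(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[8.0, 0.5])
(KF, n), _ = curve_fit(freundlich, Ce, qe, p0=[3.0, 3.0])
print(f"Langmuir:   qmax = {qmax:.2f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}")
```

A nonlinear least-squares fit of this kind avoids the bias introduced by linearizing the isotherms before regression.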
Abstract:
Mathematical models of gene regulation are a powerful tool for understanding the complex features of genetic control. While various modeling efforts have been successful at explaining gene expression dynamics, much less is known about how evolution shapes the structure of these networks. An important feature of gene regulatory networks is their stability in response to environmental perturbations. Regulatory systems are thought to have evolved to exist near the transition between stability and instability, in order to have the required stability against environmental fluctuations while also being able to achieve a wide variety of functions (corresponding to different dynamical patterns). We study a simplified model of gene network evolution in which links are added via different selection rules. These growth models are inspired by recent work on 'explosive' percolation which shows that when network links are added through competitive rather than random processes, the connectivity phase transition can be significantly delayed, and when it is reached, it appears to be first order (discontinuous, e.g., going from no failure at all to large expected failure) instead of second order (continuous, e.g., going from no failure at all to very small expected failure). We find that by modifying the traditional framework for networks grown via competitive link addition to capture how gene networks evolve to avoid damage propagation, we also see significant delays in the transition that depend on the selection rules, but the transitions always appear continuous rather than 'explosive'.
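A minimal sketch of the competitive ("product rule") link addition that the explosive-percolation literature contrasts with random addition; the selection rules studied in the paper are variations on this idea, and the toy graph below is purely illustrative:

```python
import random

def find(parent, x):
    # Union-find root lookup with path halving
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def grow(n_nodes, n_links, competitive=True, seed=0):
    random.seed(seed)
    parent = list(range(n_nodes))
    size = [1] * n_nodes
    for _ in range(n_links):
        # Draw two candidate edges at random
        e1 = (random.randrange(n_nodes), random.randrange(n_nodes))
        e2 = (random.randrange(n_nodes), random.randrange(n_nodes))
        if competitive:
            # Product rule: keep the edge whose merged component-size product is smaller
            p1 = size[find(parent, e1[0])] * size[find(parent, e1[1])]
            p2 = size[find(parent, e2[0])] * size[find(parent, e2[1])]
            a, b = e1 if p1 <= p2 else e2
        else:
            a, b = e1  # random (Erdos-Renyi-like) addition
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra
            size[ra] += size[rb]
    return max(size[find(parent, i)] for i in range(n_nodes))

print("largest component (competitive):", grow(1000, 500, True))
print("largest component (random):     ", grow(1000, 500, False))
```

Suppressing the growth of large components in this way is what delays the connectivity transition relative to purely random link addition.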
Abstract:
Hardboard processing wastewater was evaluated as a feedstock for the production of fuel-grade ethanol in a biorefinery co-located with the hardboard facility. A thorough characterization of the wastewater was conducted, and its composition changes during the biorefinery process were tracked. The wastewater had a low solids content (1.4%), and hemicellulose was the main component of the solids, accounting for up to 70%. Acid pretreatment alone hydrolyzed the majority of the hemicellulose as well as the oligomers, and over 50% of the monomeric sugars generated were xylose. The percentage of lignin remaining in the liquid increased after acid pretreatment. The characterization results showed that hardboard processing wastewater is a feasible feedstock for the production of ethanol. The optimum conditions to hydrolyze hemicellulose into fermentable sugars were evaluated with a two-stage experiment comprising acid pretreatment and enzymatic hydrolysis. The experimental data were fitted to second-order regression models and Response Surface Methodology (RSM) was employed. The results showed that for this type of feedstock enzymatic hydrolysis is not strictly necessary. To reach a comparatively high total sugar concentration (over 45 g/L) and low furfural concentration (less than 0.5 g/L), the optimum conditions were an acid concentration between 1.41% and 1.81% and a reaction time of 48 to 76 minutes. The two products of the biorefinery were compared with their traditional counterparts, petroleum gasoline and conventional potassium acetate, from a sustainability perspective, using greenhouse gas (GHG) emissions as the indicator. Three allocation methods were employed in this assessment: system expansion, mass allocation, and market value allocation. The life cycle GHG emissions of ethanol were -27.1, 20.8, and 16 g CO2 eq/MJ under the three allocation methods, respectively, whereas that of petroleum gasoline is 90 g CO2 eq/MJ. The life cycle GHG emissions of potassium acetate under the mass and market value allocation methods were 555.7 and 716.0 g CO2 eq/kg, whereas that of conventional potassium acetate is 1020 g CO2/kg.
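The second-order regression model typically fitted in a two-factor RSM study, with acid concentration \(x_1\) and reaction time \(x_2\) as factors, has the generic form (shown to illustrate the model class, not the fitted coefficients):
\[
y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{11} x_1^2 + \beta_{22} x_2^2 + \beta_{12} x_1 x_2 + \varepsilon,
\]
where \(y\) is a response such as total sugar or furfural concentration and the \(\beta\) coefficients are estimated by least squares; the reported optimum (acid 1.41 to 1.81%, 48 to 76 min) corresponds to the region where the fitted surfaces simultaneously satisfy the sugar and furfural constraints.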
Abstract:
Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built from user requirements using Petri nets and formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified against these partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort to develop a verified software repository. Our method to mine Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to balance the tradeoff between precision and coverage. Building on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
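As a reminder of the formalism involved, a Petri net with a token-based firing rule can be sketched in a few lines of code; the toy net below is purely illustrative and is unrelated to the Mondex model or to McPatom's implementation:

```python
# Minimal Petri net: places hold tokens, transitions consume from input
# places and produce to output places when enabled.
marking = {"ready": 1, "running": 0, "done": 0}

transitions = {
    "start":  ({"ready": 1},   {"running": 1}),   # (consumed, produced)
    "finish": ({"running": 1}, {"done": 1}),
}

def enabled(name):
    consumed, _ = transitions[name]
    return all(marking[p] >= n for p, n in consumed.items())

def fire(name):
    if not enabled(name):
        raise ValueError(f"transition {name!r} is not enabled")
    consumed, produced = transitions[name]
    for p, n in consumed.items():
        marking[p] -= n
    for p, n in produced.items():
        marking[p] += n

fire("start")
fire("finish")
print(marking)  # {'ready': 0, 'running': 0, 'done': 1}
```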
Abstract:
The kinetics of metal uptake by gel and dry calcium alginate beads was analysed using solutions of copper or lead ions. Gel beads sorbed metal ions faster than dry ones, and larger diffusivities of metal ions were calculated for gel beads: approximately 10−4 cm2/min vs. 10−6 cm2/min for dry beads. Accordingly, scanning electron microscopy and nitrogen adsorption data revealed a low porosity of dry alginate particles. However, dry beads showed higher sorption capacities and a mechanical stability more suitable for large-scale use. Two sorption models were fitted to the kinetic results: the Lagergren pseudo-first-order and the Ho and McKay pseudo-second-order equations. The former was found to be the more adequate to model metal uptake by dry alginate beads, and kinetic constants on the order of 10−3 to 10−2 min−1 were obtained for lead solutions with concentrations up to 100 g/m3. The pseudo-first-order model was also found to be valid to describe biosorbent operation with a real wastewater, indicating that it can be used to design processes of metal sorption with alginate-based materials.
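For reference, the Lagergren pseudo-first-order equation used here is commonly written as (standard form; the fitted constants are those reported above):
\[
\frac{dq_t}{dt} = k_1\,(q_e - q_t)
\quad\Longrightarrow\quad
q_t = q_e\left(1 - e^{-k_1 t}\right),
\]
where \(q_t\) and \(q_e\) are the metal uptakes at time \(t\) and at equilibrium and \(k_1\) is the rate constant, reported here on the order of \(10^{-3}\) to \(10^{-2}\,\mathrm{min}^{-1}\) for lead solutions.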
Abstract:
This paper estimates the small open economy model with financial frictions of Bejarano and Charry (2014) for the Colombian economy using Bayesian estimation techniques. Additionally, I compute the welfare gains of incorporating an optimal response to credit spreads into an augmented Taylor rule. The main result is that a reaction to credit spreads does not imply significant welfare gains unless the volatility of economic disturbances increases, as in the disruption implied by a financial crisis; otherwise, its impact on the macroeconomic variables is null.
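A typical way to augment a Taylor rule with a credit-spread response, given here only as an illustrative specification (the paper's exact rule and coefficient values may differ), is:
\[
i_t = \rho\, i_{t-1} + (1-\rho)\left(\phi_\pi \pi_t + \phi_y y_t + \phi_s s_t\right) + \varepsilon_t,
\]
where \(i_t\) is the policy rate, \(\pi_t\) inflation, \(y_t\) the output gap, \(s_t\) the credit spread, and \(\phi_s\) the additional response whose welfare contribution is being evaluated.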
Abstract:
This article analyzes the capital structure decisions of the company Merck Sharp & Dome S.A.S from the perspective of behavioral finance, comparing the methods currently used by the selected company with traditional finance theory in order to evaluate theoretical and actual performance. Incorporating behavioral elements into the study allows a deeper examination of corporate decisions in a context closer to current research in behavioral finance, so the analysis in this article focuses on identifying and understanding the overconfidence and status quo biases and, above all, their implications for financing decisions. According to traditional theory, the capital structuring process is driven by costs, but this case study showed that in practice the cost-decision relationship takes second place, after the risk-decision relationship, in the capital structuring process.
Abstract:
Structured abstract. Purpose: To deepen, in the grocery retail context, the roles of consumer perceived value and consumer satisfaction as antecedent dimensions of customer loyalty intentions. Design/Methodology/Approach: Employing a short version (12 items) of the original 19-item PERVAL scale of Sweeney & Soutar (2001), a structural equation modeling approach was applied to investigate the statistical properties of the indirect influence on loyalty of a reflective second-order customer perceived value model. The performance of three alternative estimation methods was compared through bootstrapping techniques. Findings: The results provided i) support for the use of the short form of the PERVAL scale in measuring consumer perceived value; ii) evidence that the influence of the four highly correlated latent predictors on satisfaction is well summarized by a higher-order reflective specification of consumer perceived value; iii) evidence that the emotional and functional dimensions were determinant for the relationship with the retailer; iv) parameter bias for the three estimation methods that was significant only for small bootstrap sample sizes. Research limitations/implications: Future research is needed to explore the use of the short form of the PERVAL scale in more homogeneous groups of consumers. Originality/value: First, to indirectly explain customer loyalty mediated by customer satisfaction, a recent short form of the PERVAL scale and a second-order reflective conceptualization of value were adopted. Second, three alternative estimation methods were used and compared through bootstrapping and simulation procedures.
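In equation form, a reflective second-order specification of perceived value of the kind estimated here can be sketched as follows (generic notation, not the paper's exact parameterization):
\[
x_{jk} = \lambda_{jk}\,\eta_k + \varepsilon_{jk},
\qquad
\eta_k = \gamma_k\,\xi_{\mathrm{value}} + \zeta_k,
\qquad
\mathrm{Satisfaction} = \beta\,\xi_{\mathrm{value}} + \zeta_s,
\]
where the \(x_{jk}\) are the 12 PERVAL items loading on the four first-order dimensions \(\eta_k\), which in turn reflect a single second-order value factor \(\xi_{\mathrm{value}}\) that influences loyalty indirectly through satisfaction.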
Authentic Leadership Questionnaire: invariance between samples of Brazilian and Portuguese employees
Abstract:
The Authentic Leadership Questionnaire (ALQ) is used to assess authentic leadership (AL). Although ALQ is often used in empirical research, cross-cultural studies with this measure are scarce. Aiming to contribute to filling this gap, this study assesses the invariance of the ALQ measure between samples of Brazilian (N = 1019) and Portuguese (N = 842) employees. A multi-group confirmatory factor analysis was performed, and the results showed the invariance of the first- and second-order factor models between the Brazilian and Portuguese samples. The results are discussed considering their cultural setting, with the study’s limitations and future research directions being pointed out.
Abstract:
In this thesis, we perform a next-to-leading-order calculation of the impact of primordial magnetic fields (PMFs) on the evolution of scalar cosmological perturbations and the cosmic microwave background (CMB) anisotropy. Magnetic fields are present everywhere in the Universe at all scales probed so far, but their origin is still under debate. The current standard picture is that they originate from the amplification of initial seed fields, which could have been generated as PMFs in the early Universe. The most robust way to test their presence and constrain their features is to study how they impact key cosmological observables, in particular the CMB anisotropies. The standard way to model a PMF is to consider its contribution (quadratic in the magnetic field) on the same footing as first-order perturbations, under the assumptions of ideal magnetohydrodynamics and compensated initial conditions. In view of the ever-increasing precision of CMB anisotropy measurements and of possibly unaccounted-for non-linear effects, in this thesis we study effects which go beyond the standard assumptions. We study the impact of PMFs on cosmological perturbations and CMB anisotropies with adiabatic initial conditions, the effect of Alfvén waves on the speed of sound of perturbations, and the possible non-linear behavior of the baryon overdensity for PMFs with a blue spectral index, by modifying and improving the publicly available Einstein-Boltzmann code SONG, which was written to take into account all second-order contributions in cosmological perturbation theory. One of the objectives of this thesis is to set the basis for verifying, through an independent and fully numerical analysis, the possibility that PMFs affect recombination and the Hubble constant.
Abstract:
The present paper describes the synthesis of a molecularly imprinted polymer, poly(methacrylic acid)/silica, and evaluates its feasibility in terms of adsorption capacity and selectivity for cholesterol extraction. Two imprinted hybrid materials were synthesized at different methacrylic acid (MAA)/tetraethoxysilane (TEOS) molar ratios (6:1 and 1:5) and characterized by FT-IR, TGA, SEM and textural data. Cholesterol adsorption on the hybrid materials took place preferentially in apolar solvent media, especially in chloroform. From the kinetic data, equilibrium was reached quickly, within 12 and 20 min for the polymers synthesized at MAA/TEOS molar ratios of 6:1 and 1:5, respectively. The pseudo-second-order model provided the best fit for cholesterol adsorption on the polymers, confirming the chemical nature of the adsorption process, while the dual-site Langmuir-Freundlich equation presented the best fit to the experimental equilibrium data, suggesting the existence of two kinds of adsorption sites on both polymers. The maximum adsorption capacities obtained for the polymers synthesized at MAA/TEOS molar ratios of 6:1 and 1:5 were 214.8 and 166.4 mg g(-1), respectively. The isotherm data also indicated higher adsorption capacity for both imprinted polymers relative to the corresponding non-imprinted polymers. Nevertheless, taking into account the retention parameters and the selectivity for cholesterol in the presence of structurally analogous compounds (5-α-cholestane and 7-dehydrocholesterol), the polymer synthesized at the MAA/TEOS molar ratio of 6:1 was much more selective for cholesterol than the one prepared at the 1:5 ratio, suggesting that selective binding sites ascribed to the carboxyl group of MAA play a central role in the imprinting effect created in the MIP.
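The dual-site Langmuir-Freundlich equation mentioned above is commonly written as (standard form, with parameters fitted separately for each polymer):
\[
q_e = \frac{q_{m1}\,(K_1 C_e)^{n_1}}{1 + (K_1 C_e)^{n_1}} + \frac{q_{m2}\,(K_2 C_e)^{n_2}}{1 + (K_2 C_e)^{n_2}},
\]
where the two terms describe the two classes of binding sites (high-affinity imprinted sites and non-specific sites), \(q_{mi}\) are their capacities, \(K_i\) their affinity constants, and \(n_i\) their heterogeneity indices.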
Abstract:
The validation of an analytical procedure must be certified through the determination of parameters known as figures of merit. For first-order data, accuracy, precision, robustness, and bias are determined much as in univariate calibration methods. Linearity, sensitivity, signal-to-noise ratio, adjustment, selectivity, and confidence intervals need different approaches, specific to multivariate data. Selectivity and signal-to-noise ratio are more critical, and they can only be estimated by calculating the net analyte signal. In second-order calibration, different approaches are again necessary owing to the data structure.
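For first-order data, the net analyte signal on which these figures of merit rest is usually computed following Lorber's definition (given here as a reference form rather than the article's exact notation):
\[
\mathbf{x}_k^{*} = \left(\mathbf{I} - \mathbf{X}_{-k}\,\mathbf{X}_{-k}^{+}\right)\mathbf{x}_k,
\]
where \(\mathbf{x}_k\) is the measured signal of the sample, \(\mathbf{X}_{-k}\) collects the contributions of all species other than analyte \(k\), \(\mathbf{X}_{-k}^{+}\) is its pseudo-inverse, and \(\mathbf{x}_k^{*}\) is the part of the signal orthogonal to the interferents, from which selectivity and signal-to-noise ratio are then derived.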
Abstract:
Synchronization plays an important role in telecommunication systems, integrated circuits, and automation systems. Formerly, the master-slave synchronization strategy was used in the great majority of cases due to its reliability and simplicity. Recently, with the development of wireless networks and the increase in the operating frequency of integrated circuits, decentralized clock distribution strategies have been gaining importance. Consequently, fully connected clock distribution systems with nodes composed of phase-locked loops (PLLs) appear as a convenient engineering solution. In this work, the stability of the synchronous state of these networks is studied in two relevant situations: when the node filters are first-order lag-lead low-pass filters, and when they are second-order low-pass filters. For first-order filters, the synchronous state of the network is shown to be stable for any number of nodes. For second-order filters, there is an upper limit on the number of nodes, depending on the PLL parameters. Copyright (C) 2009 Atila Madureira Bueno et al.
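As background to these stability results, the linearized single-loop PLL model on which such network analyses build can be sketched as follows (generic textbook form; the fully connected network equations in the paper couple many such loops):
\[
\Theta_o(s) = \frac{K\,F(s)}{s + K\,F(s)}\,\Theta_i(s),
\qquad
F(s) = \frac{1 + \tau_2 s}{1 + \tau_1 s}\ \text{(first-order lag-lead)}
\quad\text{or}\quad
F(s) = \frac{1}{(1 + \tau s)^2}\ \text{(second-order low-pass)},
\]
where \(K\) is the loop gain and \(F(s)\) the node filter; raising the filter order raises the order of the closed-loop dynamics of each node, which is consistent with a node-number bound appearing only in the second-order case.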