940 results for second-order models
Abstract:
Purpose – The purpose of this paper is to develop a subjective multidimensional measure of early career success during the university-to-work transition. Design/methodology/approach – The construct of university-to-work success (UWS) was defined in terms of intrinsic and extrinsic career outcomes, and a three-stage study was conducted to create a new scale. Findings – A preliminary set of items was developed and tested by judges. Results showed the items had good content validity. Factor analyses indicated a four-factor structure and a second-order model with subscales to assess: career insertion and satisfaction, confidence in career future, income and financial independence, and adaptation to work. Third, the authors sought to confirm the hypothesized model by examining the comparative fit of the scale and two alternative models. Results showed that the fits of both the first- and second-order models were acceptable. Research limitations/implications – The proposed model has sound psychometric qualities, although the validated version of the scale was not able to incorporate all constructs envisaged by the initial theoretical model. Results indicated some directions for further refinement. Practical implications – The scale could be used as a tool for self-assessment or as an outcome measure to assess the efficacy of university-to-work programs in applied settings. Originality/value – This study provides a useful single measure to assess early career success during the university-to-work transition, and might facilitate the testing of causal models that could help identify factors relevant for a successful transition.
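In measurement-model notation, the hypothesized structure can be sketched as follows (a minimal illustration with assumed symbols, not the authors' exact specification): the items x load on four first-order factors eta (career insertion and satisfaction, confidence in career future, income and financial independence, adaptation to work), which in turn load on a single second-order UWS factor xi,

\[ x = \Lambda_{1}\,\eta + \varepsilon, \qquad \eta = \Lambda_{2}\,\xi + \zeta , \]

whereas the competing first-order model drops \xi and lets the four factors correlate freely, so the two specifications can be compared on fit.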
Abstract:
The present study deals with phenol adsorption on chitin and chitosan and with the removal of contaminants from wastewater of a petroleum refinery. The adsorption kinetic data were best fitted by first- and second-order models for chitosan and chitin, respectively. The adsorption isotherm results showed that the Langmuir model described the data more appropriately than the Freundlich model for both adsorbents. The adsorption capacity was 1.96 and 1.26 mg/g for chitin and chitosan, respectively. Maximum removal of phenol was about 70-80% (flow rate: 1.5 mL/min, bed height: 18.5 cm, and 30 mg/L of phenol). Wastewater treatment with chitin in a fixed-bed system showed reductions of about 52 and 92% for COD and for oil and greases, respectively, while chitosan gave reductions of 65 and 67%. The results show improvement of the effluent quality after treatment with chitin and chitosan.
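A minimal sketch (with hypothetical uptake data and assumed units) of how kinetic models of this kind are typically fitted, written in Python with scipy:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical phenol uptake q(t) on a sorbent; times in min, q in mg/g (assumed values).
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
q = np.array([0.40, 0.70, 1.00, 1.20, 1.24, 1.25, 1.26])

def first_order(t, qe, k1):
    # pseudo-first-order form: q = qe * (1 - exp(-k1 * t))
    return qe * (1.0 - np.exp(-k1 * t))

def second_order(t, qe, k2):
    # pseudo-second-order form: q = qe^2 * k2 * t / (1 + qe * k2 * t)
    return qe**2 * k2 * t / (1.0 + qe * k2 * t)

for name, model in [("first-order", first_order), ("second-order", second_order)]:
    p, _ = curve_fit(model, t, q, p0=[1.3, 0.05])
    r2 = 1.0 - np.sum((q - model(t, *p))**2) / np.sum((q - q.mean())**2)
    print(f"{name}: qe = {p[0]:.2f} mg/g, k = {p[1]:.3f}, R^2 = {r2:.3f}")

The model with the higher R^2 (or lower residual error) would be reported as the better kinetic description for a given adsorbent.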
Abstract:
In this paper, multiple regression analysis is used to model the top of descent (TOD) location of user-preferred descent trajectories computed by the flight management system (FMS) on over 1000 commercial flights into Melbourne, Australia. In addition to recording TOD, the cruise altitude, final altitude, cruise Mach, descent speed, wind, and engine type were also identified for use as the independent variables in the regression analysis. Both first-order and second-order models are considered, where cross-validation, hypothesis testing, and additional analysis are used to compare models. This identifies the models that should give the smallest errors if used to predict TOD location for new data in the future. A model that is linear in TOD altitude, final altitude, descent speed, and wind gives an estimated standard deviation of 3.9 nmi for TOD location given the trajectory parameters, which means about 80% of predictions would have error less than 5 nmi in absolute value. This accuracy is better than demonstrated by other ground automation predictions using kinetic models. Furthermore, this approach would enable online learning of the model. Additional data or further knowledge of algorithms is necessary to conclude definitively that no second-order terms are appropriate. Possible applications of the linear model are described, including enabling arriving aircraft to fly optimized descents computed by the FMS even in congested airspace.
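A minimal sketch (synthetic data, assumed units and coefficients, hypothetical variable ordering) of how such a first- versus second-order regression comparison with cross-validation can be set up, in Python with scikit-learn:

import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(30000, 39000, n),   # cruise altitude, ft (assumed range)
    rng.uniform(4000, 10000, n),    # final altitude, ft (assumed range)
    rng.uniform(250, 300, n),       # descent speed, kt (assumed range)
    rng.normal(0, 30, n),           # along-track wind, kt (assumed)
])
# Synthetic stand-in for TOD distance (nmi); the real relationship is what the study estimates.
y = 0.003 * X[:, 0] - 0.002 * X[:, 1] + 0.05 * X[:, 2] + 0.1 * X[:, 3] + rng.normal(0, 4, n)

for degree in (1, 2):   # first-order model vs second-order model (squares and interactions)
    model = make_pipeline(PolynomialFeatures(degree, include_bias=False), LinearRegression())
    rmse = -cross_val_score(model, X, y, cv=10, scoring="neg_root_mean_squared_error").mean()
    print(f"degree {degree}: cross-validated RMSE = {rmse:.2f} nmi")

If the second-order model does not reduce the cross-validated error, the simpler linear model is preferred, which is in the spirit of the paper's preference for the linear specification.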
Abstract:
Heavy-metal pollution has received special attention because of its high toxicity, non-biodegradability, and tendency to accumulate in the food chain. At the same time, heavy metals are also valuable resources, so their removal combined with their recovery becomes even more important. This is the case for copper mining tailings, which offer the possibility of recovering the metal while containing it safely away from the environment. Such tailings typically occupy huge flooded areas and hold dilute copper(II) solutions that are nevertheless often above safe limits. Several conventional treatment processes are available to remove copper from such solutions, but in certain applications they can be inefficient or too costly. In this context, biosorption is an attractive alternative: certain microorganisms, such as fungi, bacteria, and algae, passively bind copper present as ions or other species in solution. The present work evaluated the copper(II) biosorption potential of biomass of the fungus Rhizopus microsporus, collected and isolated from the tailings area of the Sossego Mine in northern Brazil. Biosorption isotherms were determined experimentally in batch runs at 25 °C, 150 rpm agitation, biomass concentration of 2.0 to 2.5 g/L, and a minimum contact time of 4 hours. pH proved to be an important factor in the biosorption equilibrium: the maximum biosorption capacity of 33.12 mg of copper per g of biomass was found at pH 6. Successively lower values were obtained as the solution was acidified, and pH 1 was considered suitable for the desorption step, corresponding to a biosorption capacity of 1.95 mg/g. The Langmuir and Freundlich adsorption models fitted the isotherms adequately under both controlled and uncontrolled pH. Ion exchange was found to be one of the mechanisms involved in copper biosorption by Rhizopus microsporus. Both the pseudo-first-order and the pseudo-second-order models fitted the biosorption kinetic data, with equilibrium reached in approximately 4 hours. The biomass retained its biosorption capacity when operated repeatedly over three sorption-desorption cycles. Viable and dead biomass showed no statistically significant difference in biosorption capacity.
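A minimal sketch (hypothetical equilibrium data, assumed units) of fitting the Langmuir and Freundlich isotherms mentioned above, in Python with scipy:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical Cu(II) equilibrium data: Ce in mg/L, qe in mg of copper per g of biomass.
Ce = np.array([5, 10, 25, 50, 100, 200], dtype=float)
qe = np.array([8.0, 13.0, 21.0, 27.0, 31.0, 33.0])

def langmuir(Ce, qmax, KL):
    # qe = qmax * KL * Ce / (1 + KL * Ce)
    return qmax * KL * Ce / (1.0 + KL * Ce)

def freundlich(Ce, KF, n):
    # qe = KF * Ce**(1/n)
    return KF * Ce**(1.0 / n)

pL, _ = curve_fit(langmuir, Ce, qe, p0=[35.0, 0.05])
pF, _ = curve_fit(freundlich, Ce, qe, p0=[5.0, 2.0])
print(f"Langmuir:   qmax = {pL[0]:.1f} mg/g, KL = {pL[1]:.3f} L/mg")
print(f"Freundlich: KF = {pF[0]:.2f}, 1/n = {1.0 / pF[1]:.2f}")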
Abstract:
Titanium dioxide (TiO2) nanoparticles with different sizes and crystalloid structures, produced by the thermal method and doped with silver iodide (AgI), nitrogen (N), sulphur (S) and carbon (C), were applied as adsorbents. The adsorption of Methyl Violet (MV), Methylene Blue (MB), Methyl Orange (MO) and Orange II on the surface of these particles was studied. The photocatalytic activity of some particles for the destruction of MV and Orange II was evaluated under sunlight and visible light. The equilibrium adsorption data were fitted to the Langmuir, Freundlich, Langmuir-Freundlich and Temkin isotherms. The equilibrium data show that TiO2 particles with larger sizes and doped with AgI, N, S and C have the highest adsorption capacity for the dyes. The kinetic data followed the pseudo-first-order and pseudo-second-order models, while the desorption data fitted the zero-order, first-order and second-order models. The highest adsorption rate constant was observed for the TiO2 with the highest anatase phase percentage. Factors such as anatase crystalloid structure, particle size and doping with AgI affect the photocatalytic activity significantly. Increasing the rutile phase percentage also decreases the tendency to desorption for N-TiO2 and S-TiO2. Adsorption was not found to be important in the photocatalytic decomposition of MV in an investigation with differently sized AgI-TiO2 nanoparticles. Nevertheless, C-TiO2 was found to have higher adsorption activity toward Orange II, as the adsorption role of carbon approached synchronicity with the oxidation role.
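For reference, the integrated rate laws commonly used for such zero-, first-, and second-order desorption fits can be written as (a standard formulation; the paper's exact parameterization may differ):

\[ q_t = q_0 - k_0 t \ \ (\text{zero order}), \qquad
   q_t = q_0\, e^{-k_1 t} \ \ (\text{first order}), \qquad
   \frac{1}{q_t} = \frac{1}{q_0} + k_2 t \ \ (\text{second order}), \]

where q_t is the amount remaining on the adsorbent at time t and q_0 its initial value.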
Abstract:
The truncation errors associated with finite difference solutions of the advection-dispersion equation with first-order reaction are formulated from a Taylor analysis. The error expressions are based on a general form of the corresponding difference equation, and a temporally and spatially weighted parametric approach is used for differentiating among the various finite difference schemes. The numerical truncation errors are defined using Peclet and Courant numbers and a new Sink/Source dimensionless number. It is shown that all of the finite difference schemes suffer from truncation errors. In particular, it is shown that the Crank-Nicolson approximation scheme does not have second-order accuracy for this case. The effects of these truncation errors on the solution of an advection-dispersion equation with a first-order reaction term are demonstrated by comparison with an analytical solution. The results show that these errors are not negligible and that correcting the finite difference scheme for them results in a more accurate solution. (C) 1999 Elsevier Science B.V. All rights reserved.
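In the notation usually adopted for this problem, the governing equation and the dimensionless groups read as follows (the Sink/Source number shown is an assumed representative form; the paper introduces its own definition):

\[ \frac{\partial C}{\partial t} = D \frac{\partial^{2} C}{\partial x^{2}} - v \frac{\partial C}{\partial x} - \lambda C, \qquad
   \mathrm{Pe} = \frac{v\,\Delta x}{D}, \qquad
   \mathrm{Cr} = \frac{v\,\Delta t}{\Delta x}, \qquad
   \mathrm{Sk} = \lambda\,\Delta t , \]

where C is the concentration, D the dispersion coefficient, v the advection velocity, and \lambda the first-order reaction rate.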
Abstract:
The distribution of clock signals throughout the nodes of a network is essential for several applications in control and communication, with the phase-locked loop (PLL) being the component responsible for the electronic synchronization process. In systems with master-slave (MS) strategies, the PLLs are the slave nodes responsible for providing reliable clocks in all nodes of the network. As PLLs have nonlinear phase detection, double-frequency terms appear and filtering becomes necessary. Imperfections in the filtering process cause oscillations around the synchronous state, worsening the performance of the clock distribution process. The behavior of one-way master-slave (OWMS) clock distribution networks is studied, and the performances of first- and second-order filter processes are compared with respect to lock-in ranges and responses to perturbations of the synchronous state. (c) 2007 Elsevier GmbH. All rights reserved.
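As a rough illustration of the slave-node dynamics involved, here is a minimal sketch in Python of a baseband PLL phase model with an assumed first-order lag loop filter (all parameter values hypothetical):

import numpy as np

K = 2.0 * np.pi * 50.0     # loop gain, rad/s (assumed)
tau = 1e-3                 # loop-filter time constant, s (assumed)
dw = 2.0 * np.pi * 10.0    # master-slave frequency offset, rad/s (assumed)
dt, T = 1e-5, 0.2
steps = int(T / dt)

phi = 0.5                  # initial phase error, rad
v = 0.0                    # loop-filter state
for _ in range(steps):
    pd = np.sin(phi)                 # sinusoidal (nonlinear) phase-detector characteristic
    v += dt * (pd - v) / tau         # first-order lag filter
    phi += dt * (dw - K * v)         # phase error: input frequency ramp minus VCO output

# If |dw| is within the lock-in range, phi settles near arcsin(dw / K).
print("final phase error (rad):", phi)

With a second-order loop filter the closed loop becomes third order, and the trade-off between lock-in range and oscillations around the synchronous state is exactly what the paper compares.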
Abstract:
The last three decades have seen quite dramatic changes in the way we model time-dependent data. Linear processes have taken center stage in modeling time series, and as far as the second-order properties are concerned, the theory and the methodology are very adequate. However, there is more and more evidence that linear models are not sufficiently flexible and rich enough for modeling purposes, and that failure to account for non-linearities can be very misleading and have undesired consequences.
Abstract:
The evolution of a quantitative phenotype is often envisioned as a trait substitution sequence where mutant alleles repeatedly replace resident ones. In infinite populations, the invasion fitness of a mutant in this two-allele representation of the evolutionary process is used to characterize features about long-term phenotypic evolution, such as singular points, convergence stability (established from first-order effects of selection), branching points, and evolutionary stability (established from second-order effects of selection). Here, we try to characterize long-term phenotypic evolution in finite populations from this two-allele representation of the evolutionary process. We construct a stochastic model describing evolutionary dynamics at non-rare mutant allele frequency. We then derive stability conditions based on stationary average mutant frequencies in the presence of vanishing mutation rates. We find that the second-order stability condition obtained from second-order effects of selection is identical to convergence stability. Thus, in two-allele systems in finite populations, convergence stability is enough to characterize long-term evolution under the trait substitution sequence assumption. We perform individual-based simulations to confirm our analytic results.
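In the standard infinite-population notation, writing w(y, x) for the invasion fitness of a mutant trait y in a resident population of trait x, these conditions can be summarized as follows (a textbook formulation, not the paper's finite-population expressions):

\[ \left.\frac{\partial w}{\partial y}\right|_{y=x=x^{*}} = 0 \ \ (\text{singular point}), \qquad
   \left.\frac{\partial^{2} w}{\partial y^{2}}\right|_{y=x=x^{*}} < 0 \ \ (\text{evolutionary stability}), \]
\[ \left.\frac{d}{dx}\!\left(\left.\frac{\partial w}{\partial y}\right|_{y=x}\right)\right|_{x=x^{*}}
   = \left.\left(\frac{\partial^{2} w}{\partial y^{2}} + \frac{\partial^{2} w}{\partial y\,\partial x}\right)\right|_{y=x=x^{*}} < 0 \ \ (\text{convergence stability}). \]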
Abstract:
This thesis aims to investigate the extremal properties of certain risk models of interest in various applications from insurance, finance and statistics. It develops along two principal lines. In the first part, we focus on two univariate risk models, namely a deflated risk model and a reinsurance risk model, and investigate their tail expansions under certain tail conditions on the common risks. The main results are illustrated by typical examples and numerical simulations, and the findings are then applied in insurance, for instance to approximations of Value-at-Risk and conditional tail expectations. The second part of the thesis is devoted to three bivariate models. The first model is concerned with bivariate censoring of extreme events. For this model, we first propose a class of estimators for both the tail dependence coefficient and the tail probability. These estimators are flexible thanks to a tuning parameter, and their asymptotic distributions are obtained under certain second-order bivariate slowly varying conditions on the model. We then give some examples and present a small Monte Carlo simulation study, followed by an application to a real insurance data set.
The objective of the second bivariate risk model is the investigation of the tail dependence coefficient of bivariate skew slash distributions. Such skew slash distributions are widely useful in statistical applications; they are generated mainly by normal mean-variance mixtures and scaled skew-normal mixtures, which distinguish the tail dependence structure, as shown by our principal results. The third bivariate risk model is concerned with the approximation of the component-wise maxima of skew elliptical triangular arrays. The theoretical results are based on certain tail assumptions on the underlying random radius.
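The (upper) tail dependence coefficient that is central to the first two bivariate models, together with one standard rank-based empirical estimator (the thesis's class of estimators generalizes such estimators with a tuning parameter), can be sketched as:

\[ \lambda_{U} = \lim_{u \to 1^{-}} \mathbb{P}\big(F_{1}(X_{1}) > u \,\big|\, F_{2}(X_{2}) > u\big), \qquad
   \widehat{\lambda}_{U} = \frac{1}{k} \sum_{i=1}^{n} \mathbf{1}\big\{R_{i}^{(1)} > n-k,\; R_{i}^{(2)} > n-k\big\}, \]

where R_i^{(1)} and R_i^{(2)} are the ranks of the i-th observation in each margin and k = k(n) is an intermediate sequence with k → ∞ and k/n → 0.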
Abstract:
This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models and the details of the procedure, the advantages and the disadvantages of the approach, with particular reference to the issue of testing "false" economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and to obtain ideas on how to improve the model design, using a standard problem in the international real business cycle literature: whether a model with complete financial markets and no restrictions on capital mobility is able to reproduce the second order properties of aggregate saving and aggregate investment in an open economy.
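As a toy illustration of the target statistics, here is a minimal sketch in Python (with synthetic stand-in series, not actual data or model output) of the second-order properties of saving and investment that such a calibration exercise compares:

import numpy as np

rng = np.random.default_rng(1)
T = 200
# Stand-ins for detrended aggregate saving and investment in the data...
data_s = rng.normal(0.0, 1.0, T)
data_i = 0.7 * data_s + rng.normal(0.0, 0.7, T)
# ...and in the simulated model economy.
model_s = rng.normal(0.0, 1.1, T)
model_i = 0.2 * model_s + rng.normal(0.0, 1.0, T)

def second_moments(s, i):
    # Standard deviations and the saving-investment correlation.
    return s.std(ddof=1), i.std(ddof=1), np.corrcoef(s, i)[0, 1]

print("data  (sd_s, sd_i, corr):", second_moments(data_s, data_i))
print("model (sd_s, sd_i, corr):", second_moments(model_s, model_i))

The simulation-based testing approaches discussed in the paper then ask, formally, whether discrepancies such as a too-low model correlation fall within sampling and simulation uncertainty.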