918 results for least squares method


Relevance: 100.00%

Abstract:

Tubular polymerization reactors can exhibit a highly distorted velocity profile. Starting from this observation, a stochastic model based on the axial dispersion model was proposed for the mathematical representation of the fluid dynamics of a tubular reactor for polystyrene production. The differential equation was obtained by inserting randomness into the dispersion parameter, which adds a stochastic term to the model capable of simulating the oscillations observed experimentally. The stochastic differential equation was discretized and solved satisfactorily by the Euler-Maruyama method. An estimator function was developed to obtain the parameter of the stochastic term, and the parameter of the deterministic term was calculated by the least squares method. A convergence analysis was conducted to determine the number of discretization elements, and the model was validated by comparing trajectories and computational confidence intervals with experimental data. The results were satisfactory, which helps in understanding the complex fluid-dynamic behaviour of the reactor studied.
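To make the discretisation step concrete, below is a minimal sketch of the Euler-Maruyama scheme for a generic scalar SDE. The drift/diffusion form dX = a·X dt + b·X dW, the parameter names and all numerical values are illustrative assumptions, not the reactor model of the abstract.

```python
import numpy as np

def euler_maruyama(x0, a, b, t_end, n_steps, seed=0):
    """Simulate dX = a*X dt + b*X dW with the Euler-Maruyama scheme.

    Illustrative drift/diffusion only; the abstract's model instead embeds
    the randomness in the axial dispersion parameter of the reactor model.
    """
    rng = np.random.default_rng(seed)
    dt = t_end / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment ~ N(0, dt)
        x[k + 1] = x[k] + a * x[k] * dt + b * x[k] * dw
    return x

# One trajectory; repeating with different seeds gives the computational
# confidence bands that the abstract compares against experimental data.
path = euler_maruyama(x0=1.0, a=-0.5, b=0.2, t_end=10.0, n_steps=1000)
```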

Relevance: 100.00%

Abstract:

Maximum entropy spectral analyses, together with a non-linear least squares fitting test to find the curve best suited to the modified time series of Td (diatom temperature) values, were performed for the Quaternary portion of DSDP Sites 579 and 580 in the western North Pacific. The sampling interval averages 13.7 kyr in the Brunhes Chron (0-780 ka) and 16.5 kyr in the later portion of the Matuyama Chron (780-1800 ka) at Site 580, but increases to 17.3 kyr and 23.2 kyr, respectively, at Site 579. Among the dominant cycles during the Brunhes Chron, those of 411.5 kyr and 126.0 kyr at Site 579 and of 467.0 kyr and 136.7 kyr at Site 580 correspond to the 413 kyr and 95 to 124 kyr periods of the orbital eccentricity. Minor cycles of 41.2 kyr at Site 579 and 41.7 kyr at Site 580 are close to the 41 kyr obliquity (tilt) period. During the Matuyama Chron at Site 580, cycles of 49.7 kyr and 43.6 kyr are dominant. The surface-water temperature estimated from diatoms at the western North Pacific DSDP Sites 579 and 580 thus correlates with the Earth's fundamental orbital parameters during Quaternary time.
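As a hedged illustration of the non-linear least squares fitting step, the sketch below fits a single harmonic to a synthetic Td-like series with `scipy.optimize.curve_fit`; the series, the 41 kyr target period and all parameter values are invented for the example and are not the DSDP data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic Td-like series with a 41 kyr (obliquity-like) cycle plus noise,
# sampled at roughly the Brunhes-interval spacing quoted in the abstract.
t = np.arange(0.0, 780.0, 13.7)                  # age in kyr
rng = np.random.default_rng(1)
y = 0.8 * np.sin(2 * np.pi * t / 41.0 + 0.3) + 0.1 * rng.normal(size=t.size)

def harmonic(t, amp, period, phase, offset):
    return amp * np.sin(2 * np.pi * t / period + phase) + offset

# Non-linear least squares fit; p0 seeds the period near the expected cycle.
popt, pcov = curve_fit(harmonic, t, y, p0=[1.0, 40.0, 0.0, 0.0])
print("fitted period (kyr):", popt[1])
```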

Relevance: 100.00%

Abstract:

Includes bibliographical references (p. 58-59)

Relevance: 100.00%

Abstract:

Work engagement is one of the goals of people managers. This study examines whether a person's fit with the work environment is related to their engagement. Person-environment fit comprises three factors: person-job fit, which concerns the match between a person's skills and the work they perform; person-organization fit, which relates the person's values to the organization's values; and needs-supply fit, which concerns the individual's perception of having their needs met by their job and by the organization they work for. Organizational behaviour constructs such as job satisfaction, organizational commitment and turnover intentions are commonly used as outcome variables in fit studies, but no studies were found on the relationship between person-environment fit and work engagement. This quantitative study was based on the Perceptions of Fit instrument proposed by Cable and DeRue in 2002 and on the UWES (Utrecht Work Engagement Scale) of Schaufeli and colleagues, from 2006. The survey had 114 respondents with at least six months in their current activity and at least five years in the labour market. Structural Equation Modeling analyses using the PLS (Partial Least Squares) method supported the hypothesis that the greater the fit between a person and their work, the greater their engagement. In addition to the central hypothesis that person-job fit influences work engagement, the influence of the individual fit dimensions on engagement was tested, and the results showed that the needs-supply dimension has the greatest influence on engagement. This study opens the discussion on the relationship between person-environment fit and work engagement, and suggests replicating the method with different populations so that the results can be used to improve the effectiveness of people management.
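As a rough illustration of the PLS principle behind the analysis (projecting predictors and outcomes onto latent components), the sketch below runs a partial least squares regression with scikit-learn on invented item scores; it is not a full PLS structural equation model, and the variable layout (fit items as X, UWES items as Y) is an assumption for the example.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(42)
# Hypothetical item scores: X = fit-perception items, Y = UWES engagement items.
X = rng.normal(size=(114, 9))                      # e.g. 3 items x 3 fit dimensions
Y = rng.normal(size=(114, 3)) + 0.5 * X[:, :3]     # engagement loosely driven by fit

# Partial least squares with two latent components.
pls = PLSRegression(n_components=2)
pls.fit(X, Y)
print("R^2 of Y predicted from the latent components:", pls.score(X, Y))
```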

Relevance: 100.00%

Abstract:

Correlation and regression are two of the statistical procedures most widely used by optometrists. However, these tests are often misused or interpreted incorrectly, leading to erroneous conclusions from clinical experiments. This review examines the major statistical tests concerned with correlation and regression that are most likely to arise in clinical investigations in optometry. First, the use, interpretation and limitations of Pearson's product moment correlation coefficient are described. Second, the least squares method of fitting a linear regression to data, and tests of how well a regression line fits the data, are described. Third, the problems of using linear regression methods in observational studies are discussed, including the effect of errors in measuring the independent variable and the prediction of a new value of Y for a given X. Finally, methods for testing whether a non-linear relationship provides a better fit to the data, and for comparing two or more regression lines, are considered.
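A minimal sketch of the two basic procedures discussed, Pearson's correlation coefficient and a least squares regression line, using SciPy; the paired clinical measurements are invented for the example.

```python
import numpy as np
from scipy import stats

# Hypothetical paired measurements (e.g. the same quantity from two techniques).
x = np.array([1.2, 2.4, 3.1, 4.0, 5.3, 6.1, 7.2, 8.0])
y = np.array([1.0, 2.6, 2.9, 4.3, 5.0, 6.4, 7.0, 8.3])

r, p_r = stats.pearsonr(x, y)          # strength of the linear association
fit = stats.linregress(x, y)           # ordinary least squares line y = a + b*x
print(f"r = {r:.3f} (p = {p_r:.4f})")
print(f"slope = {fit.slope:.3f}, intercept = {fit.intercept:.3f}, "
      f"R^2 = {fit.rvalue**2:.3f}")
```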

Relevance: 100.00%

Abstract:

Purpose: To determine whether curve-fitting analysis of the ranked segment distributions of topographic optic nerve head (ONH) parameters, derived using the Heidelberg Retina Tomograph (HRT), provides a more effective statistical descriptor to differentiate the normal from the glaucomatous ONH. Methods: The sample comprised 22 normal control subjects (mean age 66.9 years; S.D. 7.8) and 22 glaucoma patients (mean age 72.1 years; S.D. 6.9) confirmed by reproducible visual field defects on the Humphrey Field Analyser. Three 10° images of the ONH were obtained using the HRT. The mean topography image was determined, and the HRT software was used to calculate the rim volume, rim area to disc area ratio, normalised rim area to disc area ratio and retinal nerve fibre cross-sectional area for each patient at 10° sectoral intervals. The values were ranked in descending order, and each ranked-segment curve of ordered values was fitted using the least squares method. Results: There was no difference in disc area between the groups. The group mean cup-disc area ratio was significantly lower in the normal group (0.204 ± 0.16) than in the glaucoma group (0.533 ± 0.083) (p < 0.001). The visual field indices, mean deviation and corrected pattern S.D., were significantly greater (p < 0.001) in the glaucoma group (-9.09 dB ± 3.3 and 7.91 dB ± 3.4, respectively) than in the normal group (-0.15 dB ± 0.9 and 0.95 dB ± 0.8, respectively). Univariate linear regression provided the best overall fit to the ranked segment data. The equation parameters of the regression line, manually applied to the normalised rim area to disc area ratio and the rim area to disc area ratio data, correctly classified 100% of normal subjects and glaucoma patients. In this study sample, regression analysis of ranked segment parameters was more effective than conventional ranked segment analysis, in which glaucoma patients were misclassified in approximately 50% of cases. Further investigation in larger samples will enable the calculation of confidence intervals for normality; these reference standards will then need to be tested on an independent sample to fully validate the technique. Conclusions: A curve-fitting approach to ranked segment curves retains information relating to the topographic nature of neural loss. Such methodology appears to overcome some of the deficiencies of conventional ranked segment analysis and, subject to validation in larger scale studies, may be of clinical utility for detecting and monitoring glaucomatous damage. © 2007 The College of Optometrists.
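A hedged sketch of the ranked-segment curve-fitting idea: sectoral values are ranked in descending order and a straight line is fitted by least squares. The sector count and the values are invented, and the classification step of the study is not reproduced.

```python
import numpy as np

# Hypothetical HRT sectoral values: rim area to disc area ratio in 36 sectors
# (10 degrees each) around the optic nerve head.
rng = np.random.default_rng(0)
sector_values = rng.uniform(0.3, 0.8, size=36)

ranked = np.sort(sector_values)[::-1]          # rank in descending order
rank = np.arange(1, ranked.size + 1)

# Least squares straight line through the ranked-segment curve; the study uses
# the fitted slope/intercept as descriptors of the normal vs glaucomatous ONH.
slope, intercept = np.polyfit(rank, ranked, deg=1)
print(f"slope = {slope:.4f}, intercept = {intercept:.4f}")
```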

Relevance: 100.00%

Abstract:

The kinetic parameters of the pyrolysis of miscanthus and its acid hydrolysis residue (AHR) were determined using thermogravimetric analysis (TGA). The AHR was produced at the University of Limerick by treating miscanthus with 5 wt.% sulphuric acid at 175 °C as representative of a lignocellulosic acid hydrolysis product. For the TGA experiments, 3 to 6 g of sample, milled and sieved to a particle size below 250 μm, were placed in the TGA ceramic crucible. The experiments were carried out under non-isothermal conditions, heating the samples from 50 to 900 °C at heating rates of 2.5, 5, 10, 17 and 25 °C/min. The activation energy (EA) of the decomposition process was determined from the TGA data by differential analysis (Friedman) and three isoconversional methods of integral analysis (Kissinger–Akahira–Sunose, Ozawa–Flynn–Wall, Vyazovkin). The activation energy ranged from 129 to 156 kJ/mol for miscanthus and from 200 to 376 kJ/mol for AHR, increasing with increasing conversion. The reaction model was selected using the non-linear least squares method and the pre-exponential factor was calculated from the Arrhenius approximation. The results showed that the best-fitting reaction model was the third order reaction for both feedstocks. The pre-exponential factor was in the range of 5.6 × 10^10 to 3.9 × 10^13 min^-1 for miscanthus and 2.1 × 10^16 to 7.7 × 10^25 min^-1 for AHR.
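As an illustration of one of the integral isoconversional methods named above, the sketch below applies the Kissinger-Akahira-Sunose relation at a single conversion level: ln(β/T²) is regressed against 1/T across heating rates and the slope gives -EA/R. The temperatures at that conversion are invented placeholders, chosen only to land near the reported EA range for miscanthus.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Hypothetical temperatures (K) at which a fixed conversion (alpha = 0.5)
# is reached for each heating rate beta (K/min), read from TGA curves.
beta = np.array([2.5, 5.0, 10.0, 17.0, 25.0])
T_alpha = np.array([595.0, 608.0, 622.0, 633.0, 641.0])

# Kissinger-Akahira-Sunose: ln(beta / T^2) = const - EA / (R * T)
y = np.log(beta / T_alpha**2)
x = 1.0 / T_alpha
slope, _ = np.polyfit(x, y, deg=1)
EA = -slope * R
print(f"EA at alpha = 0.5: {EA / 1000:.0f} kJ/mol")
```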

Relevance: 100.00%

Abstract:

The research presented in this thesis was developed as part of DIBANET, an EC-funded project aiming to develop an energetically self-sustainable process for the production of diesel-miscible biofuels (i.e. ethyl levulinate) via acid hydrolysis of selected biomass feedstocks. Three thermal conversion technologies, pyrolysis, gasification and combustion, were evaluated in the present work with the aim of recovering the energy stored in the acid hydrolysis solid residue (AHR). Consisting mainly of lignin and humins, the AHR can contain up to 80% of the energy in the original feedstock. Pyrolysis of AHR proved unsatisfactory, so attention focussed on gasification and combustion with the aim of producing heat and/or power to supply the energy demanded by the ethyl levulinate production process. A thermal processing rig, consisting of a Laminar Entrained Flow Reactor (LEFR) equipped with solid and liquid collection and online gas analysis systems, was designed and built to explore pyrolysis, gasification and air-blown combustion of AHR. The maximum liquid yield for pyrolysis of AHR was 30 wt.% with a volatile conversion of 80%. The gas yield for AHR gasification was 78 wt.%, with 8 wt.% tar yield and conversion of volatiles close to 100%. In combustion, 90 wt.% of the AHR was transformed into gas, with volatile conversions above 90%. Gasification with 5 vol% O2 in 95 vol% N2 resulted in a nitrogen-diluted, low heating value gas (2 MJ/m3). Steam and oxygen-blown gasification of AHR were additionally investigated in a batch gasifier at KTH in Sweden. Steam promoted the formation of hydrogen (25 vol%) and methane (14 vol%), improving the gas heating value to 10 MJ/m3, below the value typical of steam gasification because of equipment limitations. Arrhenius kinetic parameters were calculated using data collected with the LEFR to provide reaction rate information for process design and optimisation. The activation energy (EA) and pre-exponential factor (k0 in s^-1) for pyrolysis (EA = 80 kJ/mol, ln k0 = 14), gasification (EA = 69 kJ/mol, ln k0 = 13) and combustion (EA = 42 kJ/mol, ln k0 = 8) were calculated after linearly fitting the data using the random pore model. Kinetic parameters for pyrolysis and combustion were also determined by dynamic thermogravimetric analysis (TGA), including studies of the original biomass feedstocks for comparison. Results obtained by differential and integral isoconversional methods for activation energy determination were compared. The activation energy calculated by the Vyazovkin method was 103-204 kJ/mol for pyrolysis of untreated feedstocks and 185-387 kJ/mol for AHRs. The combustion activation energy was 138-163 kJ/mol for biomass and 119-158 kJ/mol for AHRs. The non-linear least squares method was used to determine the reaction model and pre-exponential factor. Pyrolysis and combustion of biomass were best modelled by a combination of third order reaction and three-dimensional diffusion models, while AHR decomposed following the third order reaction model for pyrolysis and the three-dimensional diffusion model for combustion.
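A minimal sketch of the final Arrhenius step, i.e. a linear least squares fit of ln k against 1/T to recover EA and ln k0; the rate constants and temperatures are invented placeholders, not the LEFR data, and the extraction of k from a random pore model fit is assumed to have happened beforehand.

```python
import numpy as np

R = 8.314  # J/(mol K)

# Hypothetical apparent rate constants k (1/s) extracted at each reactor
# temperature (K), e.g. from a model fit of the conversion profiles.
T = np.array([1073.0, 1173.0, 1273.0, 1373.0])
k = np.array([0.8, 1.7, 3.2, 5.5])

# Arrhenius: ln k = ln k0 - EA / (R * T); a linear least squares fit
# of ln k against 1/T yields both parameters at once.
slope, intercept = np.polyfit(1.0 / T, np.log(k), deg=1)
EA, ln_k0 = -slope * R, intercept
print(f"EA = {EA / 1000:.0f} kJ/mol, ln k0 = {ln_k0:.1f}")
```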

Relevance: 100.00%

Abstract:

2000 Mathematics Subject Classification: 60J80, 62M05.

Relevance: 100.00%

Abstract:

An important variant of a key problem in multi-attribute decision making is considered. We study the extension of the pairwise comparison matrix to the case when only partial information is available: for some pairs no comparison is given. It is natural to define the inconsistency of a partially filled matrix as the inconsistency of its best, completely filled completion. We study the uniqueness problem of the best completion for two weighting methods, the Eigenvector Method and the Logarithmic Least Squares Method. In both settings we obtain the same simple graph-theoretic characterization of uniqueness: the optimal completion is unique if and only if the graph associated with the partially defined matrix is connected. Some numerical experiments are discussed at the end of the paper.
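A hedged sketch of the Logarithmic Least Squares Method for an incomplete pairwise comparison matrix, including the connectivity check that characterizes uniqueness; the helper name and the example comparisons are illustrative assumptions, not the paper's numerical experiments.

```python
import numpy as np

def llsm_incomplete(n, comparisons):
    """Logarithmic least squares weights for an incomplete pairwise comparison matrix.

    `comparisons` maps a pair (i, j) to a_ij > 0, meaning item i is judged a_ij
    times as good as item j; only the listed pairs are known.  Minimising
    sum (ln a_ij - (x_i - x_j))^2 over x = ln w leads to the Laplacian system
    L x = b of the comparison graph.
    """
    L = np.zeros((n, n))
    b = np.zeros(n)
    for (i, j), a in comparisons.items():
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
        b[i] += np.log(a)
        b[j] -= np.log(a)
    # The optimum is unique (up to scaling of w) iff the comparison graph is
    # connected, i.e. rank(L) = n - 1 -- the characterization of the paper.
    unique = np.linalg.matrix_rank(L) == n - 1
    x, *_ = np.linalg.lstsq(L, b, rcond=None)
    w = np.exp(x - x.max())
    return w / w.sum(), unique

# Three items, comparisons only for (0,1) and (1,2): the graph is a path,
# hence connected and the completion/weight vector is unique.
weights, unique = llsm_incomplete(3, {(0, 1): 2.0, (1, 2): 3.0})
print(weights, unique)
```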

Relevance: 100.00%

Abstract:

The paper uses paired comparison-based scoring procedures to determine the result of Swiss system chess team tournaments. We present the questions that arise in the non-round-robin case, the features of individual and team competitions, and the failures of the official lexicographical orders. The championships are modelled axiomatically as a ranking problem, and our definitions are linked to the properties of the score, generalised row sum and least squares methods. The proposed procedure is illustrated with a detailed analysis of two chess team European championships. Final rankings are compared through distance functions and visualized by multidimensional scaling. The causes of deviations from the official ranking are revealed through the decomposition of the least squares method. Rankings are evaluated on three criteria, prediction accuracy, retrodictive performance (sample fit) and robustness, and the paper argues for the use of the least squares method with an appropriate generalised results matrix favouring match points.

Relevance: 100.00%

Abstract:

The paper uses paired comparison-based scoring procedures for ranking the participants of a Swiss system chess team tournament. We present the main challenges of ranking in Swiss system tournaments, the features of individual and team competitions, and the failures of the official lexicographical orders. The tournament is represented as a ranking problem, and our model is discussed with respect to the properties of the score, generalized row sum and least squares methods. The proposed procedure is illustrated with a detailed analysis of the two recent chess team European championships. Final rankings are compared by their distances and visualized with multidimensional scaling (MDS). Differences from the official ranking are revealed by the decomposition of the least squares method. Rankings are evaluated by prediction accuracy, retrodictive performance, and stability. The paper argues for the use of the least squares method with a results matrix favoring match points.
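A minimal sketch of the least squares rating idea for incomplete (Swiss-like) schedules: ratings solve a Laplacian system built from who played whom and the net results, so teams that never met are still compared through common opponents. The helper name, the match list and the use of net match points are assumptions for illustration, not the paper's generalized results matrix.

```python
import numpy as np

def least_squares_rating(n, matches):
    """Least squares ratings from paired comparisons.

    `matches` is a list of (i, j, r) where r is the net result of i against j
    (e.g. match points won minus lost).  The ratings q solve the Laplacian
    system L q = s, with L built from the schedule and s the net score vector.
    """
    L = np.zeros((n, n))
    s = np.zeros(n)
    for i, j, r in matches:
        L[i, i] += 1
        L[j, j] += 1
        L[i, j] -= 1
        L[j, i] -= 1
        s[i] += r
        s[j] -= r
    q, *_ = np.linalg.lstsq(L, s, rcond=None)   # defined up to an additive constant
    return q - q.mean()

# Tiny Swiss-style example: 4 teams, 2 rounds, not everyone meets everyone.
ratings = least_squares_rating(4, [(0, 1, 2), (2, 3, 1), (0, 2, 1), (1, 3, 0)])
print(np.argsort(-ratings))   # ranking, best team first
```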

Relevance: 100.00%

Abstract:

A special class of preferences, given by a directed acyclic graph, is considered. They are represented by incomplete pairwise comparison matrices, as only partial information is available: for some pairs no comparison is given in the graph. A weighting method satisfies the property of linear order preservation if it always results in a ranking such that an alternative directly preferred to another does not receive a lower rank. We study whether two procedures, the Eigenvector Method and the Logarithmic Least Squares Method, meet this axiom. Both weighting methods break linear order preservation; moreover, the ranking according to the Eigenvector Method depends on the incomplete pairwise comparison representation chosen.
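A minimal sketch of the Eigenvector Method on a complete pairwise comparison matrix, where the weight vector is the principal (Perron) eigenvector of the matrix; the example matrix is invented, and the incomplete-case variant studied in the paper first requires a completion of the missing entries.

```python
import numpy as np

# Eigenvector Method on a complete, reciprocal pairwise comparison matrix A:
# the priority vector is the eigenvector belonging to the largest eigenvalue.
A = np.array([
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 3.0],
    [0.25, 1/3, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
principal = np.argmax(np.real(eigvals))
w = np.abs(np.real(eigvecs[:, principal]))
w = w / w.sum()
print(w)   # the induced ranking follows the sorted weights
```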

Relevance: 100.00%

Abstract:

The standard highway assignment model in the Florida Standard Urban Transportation Modeling Structure (FSUTMS) is based on the equilibrium traffic assignment method. This method involves running several iterations of all-or-nothing capacity-restraint assignment with an adjustment of travel time to reflect delays encountered in the associated iteration. The iterative link time adjustment process is accomplished through the Bureau of Public Roads (BPR) volume-delay equation. Since FSUTMS' traffic assignment procedure outputs daily volumes, and the input capacities are given in hourly volumes, it is necessary to convert the hourly capacities to their daily equivalents when computing the volume-to-capacity ratios used in the BPR function. The conversion is accomplished by dividing the hourly capacity by a factor called the peak-to-daily ratio, referred to as CONFAC in FSUTMS. The ratio is computed as the highest hourly volume of a day divided by the corresponding total daily volume.

While several studies have indicated that CONFAC is a decreasing function of the level of congestion, a constant value is used for each facility type in the current version of FSUTMS. This ignores the different congestion level associated with each roadway and is believed to be one of the culprits of traffic assignment errors. Traffic count data from across the state of Florida were used to calibrate CONFACs as a function of a congestion measure using the weighted least squares method. The calibrated functions were then implemented in FSUTMS through a procedure that takes advantage of the iterative nature of FSUTMS' equilibrium assignment method.

The assignment results based on constant and variable CONFACs were then compared against the ground counts for three selected networks. It was found that the accuracy of the two assignments was not significantly different, and that the hypothesized improvement in assignment results from the variable CONFAC model was not empirically evident. It was recognized that many other factors beyond the scope and control of this study could contribute to this finding. It was recommended that further studies focus on the use of the variable CONFAC model with recalibrated parameters for the BPR function and/or with other forms of volume-delay functions.
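A hedged sketch of the two computational pieces described: the BPR volume-delay function with an hourly-to-daily capacity conversion through CONFAC, and a weighted least squares fit of CONFAC against a congestion measure. The standard BPR defaults (alpha = 0.15, beta = 4), the count values and the linear functional form are assumptions for illustration, not the calibrated FSUTMS functions.

```python
import numpy as np

def bpr_travel_time(t0, volume_daily, capacity_hourly, confac, alpha=0.15, beta=4.0):
    """BPR volume-delay function with a daily-equivalent capacity.

    The hourly capacity is converted to a daily equivalent by dividing by
    CONFAC (peak-hour share of daily volume), as described in the abstract.
    """
    capacity_daily = capacity_hourly / confac
    return t0 * (1.0 + alpha * (volume_daily / capacity_daily) ** beta)

# Hypothetical counts: CONFAC observed to fall as congestion (v/c) rises.
vc = np.array([0.4, 0.6, 0.8, 1.0, 1.2])
confac_obs = np.array([0.105, 0.100, 0.094, 0.090, 0.087])
weights = np.array([12, 30, 45, 28, 9])      # e.g. number of count stations per bin

# Weighted least squares fit of a linear CONFAC(v/c) relationship;
# np.polyfit's w multiplies the residuals, so pass sqrt of the weights.
slope, intercept = np.polyfit(vc, confac_obs, deg=1, w=np.sqrt(weights))
print(f"CONFAC(v/c) = {intercept:.4f} + {slope:.4f} * (v/c)")
```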