901 results for: least-squares fit to flow-through data
Abstract:
This research work describes three studies on the use of chemometric methods for the classification and characterization of edible vegetable oils and their quality parameters by Fourier-transform mid-infrared absorption spectrometry and near-infrared spectrometry, and for monitoring the quality and oxidative stability of yogurt using molecular fluorescence spectrometry. The first and second studies address the classification and characterization of quality parameters of edible vegetable oils using Fourier-transform mid-infrared (FT-MIR) and near-infrared (NIR) spectrometry. The Kennard-Stone algorithm was used to select the validation set after principal component analysis (PCA). Discrimination among canola, sunflower, corn, and soybean oils was investigated using SVM-DA, SIMCA, and PLS-DA. Prediction of the quality parameters refractive index and relative density of the oils was investigated using the multivariate calibration methods partial least squares (PLS), iPLS, and SVM on the FT-MIR and NIR data. Several types of preprocessing were applied (first derivative, multiplicative scatter correction (MSC), mean centering, orthogonal signal correction (OSC), and standard normal variate (SNV)), using the root mean square error of cross-validation (RMSECV) and of prediction (RMSEP) as evaluation parameters. The methodology developed for determining refractive index and relative density and for classifying the vegetable oils is fast and straightforward. The third study addresses the evaluation of the oxidative stability and quality of yogurt stored at 4 °C, either exposed to direct light or kept in the dark, using parallel factor analysis (PARAFAC) of the luminescence exhibited by three fluorophores present in yogurt, at least one of which is strongly related to the storage conditions. The fluorescent signal was identified from the emission and excitation spectra of the pure fluorescent substances, which were suggested to be vitamin A, tryptophan, and riboflavin. Regression models based on the PARAFAC scores for riboflavin were developed using the scores obtained on the first day as the dependent variable and the scores obtained during storage as the independent variable. The decay of the analytical curve over the course of the experiment was evident. Therefore, riboflavin content can be considered a good indicator of yogurt stability. Thus, it can be concluded that fluorescence spectroscopy combined with chemometric methods is a fast method for monitoring the oxidative stability and quality of yogurt.
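As a rough illustration of the calibration workflow this abstract describes, the sketch below fits a PLS model to spectra and evaluates it with RMSECV and RMSEP using scikit-learn; the random "spectra", the number of latent variables, and the use of a random split in place of the Kennard-Stone selection are all assumptions made for illustration, not the thesis's actual data or settings.

```python
# Sketch: PLS calibration of an oil quality parameter from FT-MIR/NIR spectra,
# evaluated with RMSECV and RMSEP, as outlined in the abstract above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import KFold, cross_val_predict, train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))                       # placeholder spectra (samples x wavelengths)
y = rng.normal(loc=1.47, scale=0.002, size=120)       # placeholder refractive indices

# A random split stands in for Kennard-Stone selection of the validation set.
X_cal, X_val, y_cal, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)                   # latent variables: assumed value

# RMSECV from cross-validation on the calibration set
y_cv = cross_val_predict(pls, X_cal, y_cal,
                         cv=KFold(n_splits=10, shuffle=True, random_state=0))
rmsecv = float(np.sqrt(np.mean((y_cal - y_cv.ravel()) ** 2)))

# RMSEP on the independent validation set
pls.fit(X_cal, y_cal)
rmsep = float(np.sqrt(np.mean((y_val - pls.predict(X_val).ravel()) ** 2)))
print(f"RMSECV = {rmsecv:.5f}, RMSEP = {rmsep:.5f}")
```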
Abstract:
I simulated somatic growth and accompanying otolith growth using an individual-based bioenergetics model in order to examine the performance of several back-calculation methods. Four shapes of otolith radius-total length relations (OR-TL) were simulated. Ten different back-calculation equations, two different regression models of radius length, and two schemes of annulus selection were examined for a total of 20 different methods to estimate size at age from simulated data sets of length and annulus measurements. The accuracy of each of the twenty methods was evaluated by comparing the back-calculated length-at-age and the true length-at-age. The best back-calculation technique was directly related to how well the OR-TL model fitted. When the OR-TL was sigmoid shaped and all annuli were used, employing a least squares linear regression coupled with a log-transformed Lee back-calculation equation (y-intercept corrected) resulted in the least error; when only the last annulus was used, employing a direct proportionality back-calculation equation resulted in the least error. When the OR-TL was linear, employing a functional regression coupled with the Lee back-calculation equation resulted in the least error when all annuli were used, and also when only the last annulus was used. If the OR-TL was exponentially shaped, direct substitution into the fitted quadratic equation resulted in the least error when all annuli were used, and when only the last annulus was used. Finally, an asymptotically shaped OR-TL was best modeled by the individually corrected Weibull cumulative distribution function when all annuli were used, and when only the last annulus was used.
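For readers unfamiliar with the equations being compared, the following is a minimal sketch of the intercept-corrected (Fraser-Lee) back-calculation, one of the standard equations referred to above; the otolith and length data are invented for illustration.

```python
# Sketch: intercept-corrected (Fraser-Lee) back-calculation of length at age,
#   L_a = c + (L_c - c) * (S_a / S_c),
# where c is the intercept of the total length vs. otolith radius regression,
# L_c is length at capture, S_c the otolith radius at capture, and S_a the
# radius at annulus a. Data below are made up.
import numpy as np

def fraser_lee(L_c, S_c, S_a, c):
    """Back-calculated length at the annulus with radius S_a."""
    return c + (L_c - c) * (np.asarray(S_a) / S_c)

# Fit the OR-TL relation by ordinary least squares to obtain the intercept c.
radius = np.array([0.8, 1.1, 1.5, 1.9, 2.4, 2.9])    # otolith radii (mm)
length = np.array([95, 130, 176, 220, 268, 310])      # total lengths (mm)
slope, c = np.polyfit(radius, length, 1)

print(fraser_lee(L_c=310, S_c=2.9, S_a=[0.8, 1.5, 2.4], c=c))
```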
Abstract:
... S = Σ_{i=1}^{I} (y_i − ŷ_i)² (.10), in which I is the number of data points included in the fit. ... the least squares method can be used to estimate parameters of any model, ...
Abstract:
Bacteriorhodopsin (BR) films oriented by an electrophoretic method are deposited on a transparent conductive ITO glass. A counterelectrode of copper and gelose gel is used to compose a sandwich-type photodetector with the structure of ITO/BR film/gelose gel/Cu. A single 30-ps laser pulse and a mode-locked pulse train are respectively used to excite the BR photodetector. The ultrafast falling edge and the bipolar response signal are measured by the digital oscilloscope under seven different time ranges. Marquardt nonlinear least squares fitting is used to fit all the experimental data and a good fitting equation is found to describe the kinetic process of the photoelectric signal. Data fitting resolves six exponential components that can be assigned to a seven-step BR photocycle model: BR-->K-->KL-->L-->M-->N-->O-->BR. Comparative tests of the BR photodetector against a 100-ps Si PIN photodiode demonstrate that this type of BR photodetector has at least a 100-ps response time and can also serve as a fast photoelectric switch. (C) 2003 Society of Photo-Optical Instrumentation Engineers.
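As a rough illustration of the Marquardt-type fitting mentioned above, the sketch below fits a multi-exponential decay to synthetic data with scipy's Levenberg-Marquardt routine; two components and made-up time constants stand in for the paper's six-component photocycle model.

```python
# Sketch: nonlinear least-squares fit of a multi-exponential decay using the
# Levenberg-Marquardt algorithm (scipy's "lm" method). Two components and
# synthetic data are used here purely for illustration.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

t = np.linspace(0.0, 5.0, 200)                          # time (ns)
rng = np.random.default_rng(1)
signal = biexp(t, 1.0, 0.3, 0.4, 1.5) + rng.normal(scale=0.01, size=t.size)

popt, pcov = curve_fit(biexp, t, signal, p0=[1.0, 0.5, 0.5, 2.0], method="lm")
print("amplitudes and time constants (ns):", popt)
```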
Abstract:
A static enclosure method was applied to determine the exchange of dimethyl sulfide (DMS) and carbonyl sulfide (OCS) between the surface of Sphagnum peatlands and the atmosphere. Measurements were performed concurrently with dynamic (flow through) enclosure measurements with sulfur-free air used as sweep gas. This latter technique has been used to acquire the majority of available data on the exchange of S gases between the atmosphere and the continental surfaces and has been criticized because it is thought to overestimate the true flux of gases by disrupting natural S gas gradients. DMS emission rates determined by both methods were not statistically different between 4 and >400 nmol m−2 h−1, indicating that previous data on emissions of at least DMS are probably valid. However, the increase in DMS in static enclosures was not linear, indicating the potential for a negative feedback of enclosure DMS concentrations on efflux. The dynamic enclosure method measured positive OCS flux rates (emission) at all sites, while data using static enclosures indicated that OCS was consumed from the atmosphere at these same sites at rates of 3.7 to 55 nmol m−2 h−1. Measurements using both enclosure techniques at a site devoid of vegetation showed that peat was a source of both DMS and OCS. However, the rate of OCS efflux from decomposing peat was more than counterbalanced by OCS consumption by vegetation, including Sphagnum mosses, and net OCS uptake occurred at all sites. We propose that all wetlands are net sinks for OCS.
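As a rough illustration only, the sketch below computes a static-chamber flux as the least-squares slope of headspace concentration against time, scaled by chamber volume and footprint; the numbers and chamber geometry are invented, and this is not necessarily the exact regression the authors used.

```python
# Sketch: estimating a trace-gas flux from a static enclosure as the least-squares
# slope of headspace concentration vs. time, scaled by chamber volume and area.
# All values are illustrative.
import numpy as np

time_h = np.array([0.0, 0.25, 0.5, 0.75, 1.0])           # hours since closure
conc_nmol_m3 = np.array([2.0, 4.1, 5.9, 7.6, 9.0])        # DMS in the headspace

slope, intercept = np.polyfit(time_h, conc_nmol_m3, 1)    # nmol m^-3 h^-1

volume_m3 = 0.030    # chamber volume (assumed)
area_m2 = 0.10       # enclosed surface area (assumed)
flux = slope * volume_m3 / area_m2                         # nmol m^-2 h^-1
print(f"flux = {flux:.1f} nmol m^-2 h^-1")
```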
Abstract:
An organic-inorganic hybrid solid, (Cu(2,2'-bpy)2)2Mo8O26, has been hydrothermally synthesized and structurally characterized by single-crystal X-ray diffraction. Dark green crystals crystallize in the orthorhombic system, space group Pna21, a = 24.164(5), b = 18.281(4), c = 11.877(2) Angstrom, alpha = 90 degrees, beta = 90 degrees, gamma = 90 degrees, V = 5247(2) Angstrom^3, Z = 4, lambda(Mo K-alpha) = 0.71073 Angstrom (R(F) = 0.0331 for 5353 reflections). Data were collected on a Siemens P4 four-circle diffractometer at 293 K in the range 1.69 degrees < theta < 25.04 degrees using the omega-scan technique. The structure was solved by the direct method and refined by full-matrix least squares on F^2 using SHELXL-93. The structure of this compound consists of discrete (Cu(2,2'-bpy)2)2Mo8O26 clusters, constructed from beta-octamolybdate subunits ((Mo8O26)4-) covalently bonded to two (Cu(2,2'-bpy)2)2+ coordination complexes via bridging oxo groups that connect two adjacent molybdenum sites. (C) 2001 Academic Press.
Abstract:
To address problems in current methods for constructing fuzzy membership functions, a method for constructing fuzzy membership functions is proposed. The membership function is obtained by fitting discrete data with the least squares method. To reduce the fitting error, three measures are adopted to reach the intended goal. With the constructed membership function, the degree of membership in the corresponding fuzzy linguistic variable can be obtained directly for any input physical quantity, which effectively avoids the subjectivity and inconsistency of expert-assigned membership degrees. The method is simple, achieves high solution accuracy, and has broad applicability and strong practical value. Simulation results confirm the effectiveness of the method.
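As a rough illustration of this idea, the sketch below fits a parametric membership function to discrete (input, membership) pairs by least squares; the Gaussian form, the data, and the linguistic term are assumptions, since the abstract does not specify the functional form or the three error-reduction measures.

```python
# Sketch: least-squares fit of a Gaussian-shaped fuzzy membership function
# mu(x) = exp(-(x - c)^2 / (2 * sigma^2)) to discrete membership data, so that
# any input value can then be mapped directly to a membership degree.
import numpy as np
from scipy.optimize import curve_fit

def gaussian_mf(x, c, sigma):
    return np.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

# Discrete (input, membership) pairs, e.g. for the linguistic term "medium".
x_data = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
mu_data = np.array([0.05, 0.3, 0.8, 1.0, 0.75, 0.35, 0.1])

(c, sigma), _ = curve_fit(gaussian_mf, x_data, mu_data, p0=[3.0, 1.0])
print(f"c = {c:.2f}, sigma = {sigma:.2f}, mu(2.5) = {gaussian_mf(2.5, c, sigma):.2f}")
```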
Abstract:
The Tibetan Plateau is the most spectacular and youngest case of continental collision on Earth; investigating its crust and mantle and revealing its structural and deformational character are therefore essential for understanding its deformation mechanism and deep processes. A large number of surface wave records were collected from events that occurred between 1980 and 2002, recorded by 13 broadband digital stations in Eurasia and India. Up to 1,525 source-station Rayleigh waveforms and 1,464 Love wave trains were analysed to obtain group velocity dispersion curves, together with a detailed, quantitative assessment of the validity of classical ray theory and of errors arising from source parameters and measurements. With the model region covered by a mesh of 2°×2° grid cells, we used the damped least-squares approach and the SVD to carry out the tomographic inversion; SV- and SH-wave velocity images of the crust and upper mantle beneath the Tibetan Plateau and its surroundings were obtained, and the radial anisotropy was then computed from the Love-Rayleigh discrepancy. The main results are as follows. a) The Moho beneath the Tibetan Plateau has an undulating shape, lying between 65 and 74 km depth, and a clear correlation between plateau elevation and Moho topography suggests that at least a large part of the highly elevated plateau is isostatically compensated. b) The lithospheric root reaches a depth of about 140 km (Qiangtang Block) and exceptionally about 180 km (Lhasa Block), and exhibits laterally varying fast velocities between 4.6 and 4.7 km/s, reaching about 4.8 km/s under the northern Lhasa Block and the Qiangtang Block; this may be correlated with the presence of a shield-like upper mantle beneath the Tibetan Plateau and can therefore be regarded as one of the geophysical tests confirming the underthrusting of India, whose leading edge may have passed the Bangong-Nujiang Suture and even the Jinsha Suture. c) The asthenosphere is depicted by a low-velocity channel at depths between 140 and 220 km, with a negative velocity gradient and velocities as low as 4.2 km/s. d) Areas in which radial (transverse) anisotropy exceeds about 4% to 6% on average are found in the crust and upper mantle underlying most of the Plateau, reaching up to 8% in some places. The strength, spatial configuration, and sign of the radial anisotropy seem to indicate a regime of horizontal compressive forces within the convergent orogen, together with laterally varying lithospheric rheology and differential movement with respect to the compressive driving forces. e) Slow-velocity anomalies of 12% or more in southern Tibet and at the eastern edge of the Plateau support the idea of a mechanically weak middle-to-lower crust and the existence of crustal flow in Tibet.
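As a generic illustration of the damped least-squares/SVD inversion mentioned above, the sketch below solves a linear tomographic system G m = d with a damped SVD filter; the kernel, data, and damping value are random placeholders, not the study's actual ray paths or dispersion measurements.

```python
# Sketch: damped least-squares solution of G m = d via the SVD,
#   m = V diag(s / (s^2 + eps^2)) U^T d,
# the generic form of a damped-SVD tomographic inversion. G and d are random.
import numpy as np

rng = np.random.default_rng(2)
G = rng.normal(size=(300, 100))       # data kernel (paths x grid cells)
m_true = rng.normal(size=100)
d = G @ m_true + rng.normal(scale=0.1, size=300)

U, s, Vt = np.linalg.svd(G, full_matrices=False)
eps = 1.0                              # damping parameter (assumed)
filt = s / (s ** 2 + eps ** 2)         # damped inverse singular values
m_est = Vt.T @ (filt * (U.T @ d))

print("relative model misfit:",
      float(np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)))
```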
Abstract:
Formation resistivity is one of the most important parameters in reservoir evaluation. In order to acquire the true value of the virgin formation, various types of resistivity logging tools have been developed. However, as proved reserves increase, the pay zones of interest are becoming thinner and thinner, especially in terrestrial deposit oilfields, so that electrical logging tools, limited by the contradictory requirements of resolution and investigation depth, cannot provide the true formation resistivity. Therefore, resistivity inversion techniques have become popular for determining true formation resistivity from the improved logging data provided by new tools. In geophysical inverse problems, non-unique solutions are inevitable because of noisy data and insufficient measurement information. This dissertation addresses the problem from three aspects: data acquisition, data processing/inversion, and application of the results, including uncertainty evaluation of the non-unique solution. Other problems of traditional inversion methods, such as slow convergence and the dependence of the results on the initial values, are also treated. Firstly, the uncertainties in the data to be processed are considered. The combination of the micro-spherically focused log (MSFL) and the dual laterolog (DLL) is the standard program for determining formation resistivity. During inversion, the corrected MSFL readings are taken as the resistivity of the invaded zone. However, the errors can be as large as 30 percent due to mudcake influence, even when rugose-borehole effects on the MSFL readings can be ignored. Furthermore, there is still debate about whether the two logs can be used quantitatively to determine formation resistivities, because of their different measurement principles. Thus, a new type of laterolog tool is designed theoretically. The new tool provides three curves with different investigation depths and nearly the same resolution, about 0.4 m. Secondly, because the popular iterative inversion method based on least-squares estimation cannot solve for more than two parameters simultaneously and the new laterolog tool has not yet been applied in practice, this work focuses on the two-parameter inversion (invasion radius and virgin-formation resistivity) of traditional dual laterolog data. An unequally weighted damping-factor revision method is developed to replace the parameter-revision technique used in the traditional inversion method. In the new method, a parameter is revised depending not only on the damping factor itself but also on the difference between the measured data and the fitted data in the different layers. At least two fewer iterations are required than with the older method, so the computational cost of the inversion is reduced. The damped least-squares inversion method is a realization of Tikhonov's trade-off between the smoothness of the solution and the stability of the inversion process. It is implemented by linearizing the non-linear inverse problem, which inevitably makes the solution depend on the initial parameter values. Consequently, with the development of non-linear processing methods, the efficiency of such methods has been strongly debated. An artificial neural network method is therefore proposed in this dissertation.
The database of the tool's response to formation parameters is built by modelling the laterolog tool and is then used to train the neural networks. A unit model is put forward to simplify the data space, and an additional physical constraint is applied to optimize the network after cross-validation. Results show that the neural network inversion method can replace the traditional inversion method for a single formation and can also be used to determine the initial values for the traditional method. No matter what method is developed, non-uniqueness and uncertainty of the solution are inevitable. Thus, it is wise to evaluate the non-uniqueness and uncertainty of the solution when applying the inversion results. Bayes' theorem provides a way to handle such problems; the approach is discussed with an illustrative single-formation example and achieves plausible results. Finally, the traditional least-squares inversion method is used to process raw logging data, and the calculated oil saturation is about 20 percent higher than that obtained without processing, when compared against core analysis.
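As a generic illustration of the damped least-squares iteration underlying such two-parameter inversions, the sketch below applies a Levenberg-Marquardt-style update to a toy forward model; the "tool response" is a made-up stand-in, not a laterolog response function, and the unequal-weighting refinement described above is not included.

```python
# Sketch: iterative damped least-squares update m <- m + (J^T J + lam*I)^{-1} J^T r
# for a two-parameter inversion (e.g. invasion radius and virgin-zone resistivity).
# The forward model below is a toy stand-in, NOT a real laterolog response.
import numpy as np

def forward(m):
    """Toy 'tool response': two synthetic apparent-resistivity readings."""
    r_inv, rt = m
    return np.array([rt * (1.0 - np.exp(-r_inv)), 0.5 * rt + 2.0 * r_inv])

def jacobian(m, h=1e-6):
    """Finite-difference Jacobian of the forward model."""
    f0 = forward(m)
    J = np.zeros((f0.size, m.size))
    for j in range(m.size):
        mp = m.copy()
        mp[j] += h
        J[:, j] = (forward(mp) - f0) / h
    return J

d_obs = forward(np.array([0.8, 20.0]))     # synthetic "measured" data
m = np.array([0.3, 5.0])                   # initial guess
lam = 0.1                                  # damping factor
for _ in range(20):
    r = d_obs - forward(m)
    J = jacobian(m)
    dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
    m = m + dm

print("recovered parameters:", m)
```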
Abstract:
A shearing quotient (SQ) is a way of quantitatively representing the Phase I shearing edges on a molar tooth. Ordinary or phylogenetic least squares regression is fit to data on log molar length (independent variable) and log sum of measured shearing crests (dependent variable). The derived linear equation is used to generate an 'expected' shearing crest length from molar length of included individuals or taxa. Following conversion of all variables to real space, the expected value is subtracted from the observed value for each individual or taxon. The result is then divided by the expected value and multiplied by 100. SQs have long been the metric of choice for assessing dietary adaptations in fossil primates. Not all studies using SQ have used the same tooth position or crests, nor have all computed regression equations using the same approach. Here we focus on re-analyzing the data of one recent study to investigate the magnitude of effects of variation in 1) shearing crest inclusion, and 2) details of the regression setup. We assess the significance of these effects by the degree to which they improve or degrade the association between computed SQs and diet categories. Though altering regression parameters for SQ calculation has a visible effect on plots, numerous iterations of statistical analyses vary surprisingly little in the success of the resulting variables for assigning taxa to dietary preference. This is promising for the comparability of patterns (if not casewise values) in SQ between studies. We suggest that differences in apparent dietary fidelity of recent studies are attributable principally to tooth position examined.
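Because the SQ recipe above is entirely procedural, a minimal sketch may help; it follows the described steps (OLS on log-transformed data, back-transformation to real space, then SQ = 100 * (observed - expected) / expected) on invented measurements.

```python
# Sketch: computing shearing quotients (SQ) as described above: OLS fit of
# log(shear crest sum) on log(molar length), expected values back-transformed
# to real space, then SQ = 100 * (observed - expected) / expected.
# The measurements below are invented for illustration.
import numpy as np

molar_length = np.array([4.2, 5.1, 6.0, 7.3, 8.5])    # mm
shear_sum = np.array([6.0, 7.6, 8.4, 11.0, 12.1])      # mm, summed Phase I crests

slope, intercept = np.polyfit(np.log(molar_length), np.log(shear_sum), 1)
expected = np.exp(intercept + slope * np.log(molar_length))   # back to real space
sq = 100.0 * (shear_sum - expected) / expected
print(np.round(sq, 2))
```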
Abstract:
The amount of atmospheric hydrogen chloride (HCl) within fire enclosures produced from the combustion of chloride-based materials tends to decay as the fire effluent is transported through the enclosure due to mixing with fresh air and absorption by solids. This paper describes an HCl decay model, typically used in zone models, which has been modified and applied to a computational fluid dynamics (CFD)-based fire field model. While the modified model still makes use of some empirical formulations to represent the deposition mechanisms, these have been reduced from the original three to two through the use of the CFD framework. Furthermore, the effect of HCl flow to the wall surfaces on the time to reach equilibrium between HCl in the boundary layer and on wall surfaces is addressed by the modified model. Simulation results using the modified HCl decay model are compared with data from three experiments. The model is found to reproduce the experimental trends, and the predicted HCl levels are in good agreement with measured values.
Abstract:
Variable geometry turbines provide an extra degree of flexibility in air management in turbocharged engines. The pivoting stator vanes used to achieve the variable turbine geometry necessitate the inclusion of stator vane endwall clearances. The consequent leakage flow through the endwall clearances impacts the flow in the stator vane passages, and an understanding of the impact of the leakage flow on stator loss is required. A numerical model of a typical variable geometry turbine was developed using the commercial CFX-10 computational fluid dynamics software, and validated using laser Doppler velocimetry and static pressure measurements from a variable geometry turbine with stator vane endwall clearance. Two different stator vane positions were investigated, each at three different operating conditions representing different vane loadings. The vane endwall leakage was found to have a significant impact on the stator loss and on the uniformity of flow entering the turbine rotor. The leakage flow changed considerably at different vane positions, and flow incidence at vane inlet was found to have a significant impact.
Abstract:
Objectives: To identify demographic and socioeconomic determinants of need for acute hospital treatment at small area level. To establish whether there is a relation between poverty and use of inpatient services. To devise a risk adjustment formula for distributing public funds for hospital services using, as far as possible, variables that can be updated between censuses. Design: Cross sectional analysis. Spatial interactive modelling was used to quantify the proximity of the population to health service facilities. Two stage weighted least squares regression was used to model use against supply of hospital and community services and a wide range of potential needs drivers including health, socioeconomic census variables, uptake of income support and family credit, and religious denomination. Setting: Northern Ireland. Main outcome measure: Intensity of use of inpatient services. Results: After endogeneity of supply and use was taken into account, a statistical model was produced that predicted use based on five variables: income support, family credit, elderly people living alone, all ages standardised mortality ratio, and low birth weight. The main effect of the formula produced is to move resources from urban to rural areas. Conclusions: This work has produced a population risk adjustment formula for acute hospital treatment in which four of the five variables can be updated annually rather than relying on census derived data. Inclusion of the social security data makes a substantial difference to the model and to the results produced by the formula.
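As a rough illustration of the two-stage estimation strategy described above, the sketch below implements plain (unweighted) two-stage least squares in numpy; all variables and instruments are random placeholders, and the weighting and spatial interactive modelling used in the study are omitted.

```python
# Sketch: two-stage least squares in plain numpy. Stage 1 regresses the
# endogenous supply variable on instruments plus exogenous need drivers;
# stage 2 regresses hospital use on fitted supply and the need drivers.
# All data below are random placeholders, not the study's variables.
import numpy as np

rng = np.random.default_rng(3)
n = 200
need = rng.normal(size=(n, 3))     # e.g. income support, SMR, low birth weight (assumed)
z = rng.normal(size=(n, 2))        # instruments for supply, e.g. proximity measures (assumed)
supply = z @ np.array([0.8, -0.5]) + need @ np.array([0.2, 0.1, 0.0]) + rng.normal(size=n)
use = 1.5 * supply + need @ np.array([0.7, 0.4, 0.3]) + rng.normal(size=n)

def add_const(X):
    return np.column_stack([np.ones(len(X)), X])

# Stage 1: fitted (instrumented) supply
X1 = add_const(np.column_stack([z, need]))
supply_hat = X1 @ np.linalg.lstsq(X1, supply, rcond=None)[0]

# Stage 2: use regressed on fitted supply and need drivers
X2 = add_const(np.column_stack([supply_hat, need]))
beta = np.linalg.lstsq(X2, use, rcond=None)[0]
print("2SLS coefficients (const, supply, need drivers):", np.round(beta, 2))
```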
Abstract:
This paper presents a new method for transmission loss allocation. The method is based on tracing the complex power flow through the network and determining the share of each load in the flow and losses through each line. Transmission losses are taken into consideration during power flow tracing. Unbundling of line losses is carried out using an equation which has a physical basis and considers the coupling between active and reactive power flows as well as the cross effects of active and reactive power on active and reactive losses. A tracing algorithm is presented that can be considered direct to a good extent, as there is no need for an exhaustive search to determine the flow paths; these are determined in a systematic way during the course of tracing. Results of the application of the proposed method are also presented.
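As a small illustration of the physical basis referred to above, the sketch below evaluates series line losses as R(P^2 + Q^2)/V^2 and splits the active loss into the parts driven by active and by reactive flow; the per-unit values are invented, and this is not the paper's full tracing algorithm.

```python
# Sketch: active/reactive decomposition of series losses on a single line,
#   P_loss = R * (P^2 + Q^2) / V^2,   Q_loss = X * (P^2 + Q^2) / V^2,
# the kind of physically based unbundling equation referred to above (cross
# effects of P and Q on both active and reactive losses). Values are per-unit
# and purely illustrative.
R, X = 0.02, 0.08        # line resistance / reactance (pu)
P, Q = 0.9, 0.3          # active / reactive flow entering the line (pu)
V = 1.0                  # voltage magnitude at the sending end (pu)

s2 = P ** 2 + Q ** 2
p_loss = R * s2 / V ** 2
q_loss = X * s2 / V ** 2
p_loss_from_P = R * P ** 2 / V ** 2   # active loss driven by active flow
p_loss_from_Q = R * Q ** 2 / V ** 2   # active loss driven by reactive flow
print(p_loss, p_loss_from_P, p_loss_from_Q, q_loss)
```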
Abstract:
This paper deals with Takagi-Sugeno (TS) fuzzy model identification of nonlinear systems using fuzzy clustering. In particular, an extended fuzzy Gustafson-Kessel (EGK) clustering algorithm, using robust competitive agglomeration (RCA), is developed for automatically constructing a TS fuzzy model from system input-output data. The EGK algorithm can automatically determine the 'optimal' number of clusters from the training data set. It is shown that the EGK approach is relatively insensitive to initialization and is less susceptible to local minima, a benefit derived from its agglomerate property. This issue is often overlooked in the current literature on nonlinear identification using conventional fuzzy clustering. Furthermore, the robust statistical concepts underlying the EGK algorithm help to alleviate the difficulty of cluster identification in the construction of a TS fuzzy model from noisy training data. A new hybrid identification strategy is then formulated, which combines the EGK algorithm with a locally weighted, least-squares method for the estimation of local sub-model parameters. The efficacy of this new approach is demonstrated through function approximation examples and also by application to the identification of an automatic voltage regulation (AVR) loop for a simulated 3 kVA laboratory micro-machine system.
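As a rough illustration of the locally weighted least-squares step used to estimate the TS sub-model (consequent) parameters, the sketch below fits local affine models with Gaussian membership weights standing in for the EGK/RCA memberships; the centers, width, and data are assumptions, not the paper's identification procedure.

```python
# Sketch: weighted least-squares estimation of local affine (TS consequent)
# models y ~ a_k * x + b_k, each sample weighted by its membership in cluster k.
# Fixed Gaussian kernels stand in for the EGK/RCA memberships described above.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-2, 2, 200)
y = np.sin(1.5 * x) + rng.normal(scale=0.05, size=x.size)   # nonlinear system to model

centers, width = np.array([-1.2, 0.0, 1.2]), 0.6
local_params = []
for c in centers:
    w = np.exp(-((x - c) ** 2) / (2 * width ** 2))           # membership weights
    Phi = np.column_stack([x, np.ones_like(x)])               # regressors [x, 1]
    W = np.diag(w)
    theta = np.linalg.solve(Phi.T @ W @ Phi, Phi.T @ W @ y)   # weighted least squares
    local_params.append(theta)

print(np.round(np.array(local_params), 3))                    # (a_k, b_k) per cluster
```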