901 results for: least-squares fit to flow-through data
Abstract:
Vertical permeability and sediment consolidation measurements were made on seven whole-round drill cores from Sites 1253 (three samples), 1254 (one sample), and 1255 (three samples) drilled during Ocean Drilling Program Leg 205 in the Middle America Trench off Costa Rica's Pacific coast. Consolidation behavior, including the slopes of the elastic rebound and virgin compression (Cc) curves, was measured by constant-rate-of-strain tests. Permeabilities were determined from flow-through experiments during stepped-load tests and from coefficient of consolidation (Cv) values obtained continuously during loading. Consolidation curves and the Casagrande method were used to determine the maximum preconsolidation stress. Elastic slopes of the consolidation curves ranged from 0.097 to 0.158 in pelagic sediments and from 0.0075 to 0.018 in hemipelagic sediments. Cc values ranged from 1.225 to 1.427 for pelagic carbonates and from 0.504 to 0.826 for hemipelagic clay-rich sediments. In samples consolidated to an axial stress of ~20 MPa, permeabilities determined by flow-through experiments ranged from a low of 7.66 × 10⁻²⁰ m² in hemipelagic sediments to a maximum of 1.03 × 10⁻¹⁶ m² in pelagic sediments. Permeabilities calculated from Cv values in the hemipelagic sediments ranged from 4.81 × 10⁻¹⁶ to 7.66 × 10⁻²⁰ m² for porosities of 49.9% to 26.1%.
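A minimal sketch, not the Leg 205 workflow: intrinsic permeability back-calculated from coefficient of consolidation values via the standard relation k = Cv · mv · γw (hydraulic conductivity), then converted to intrinsic permeability with k_i = k·μ/(ρ·g). All numerical inputs are illustrative placeholders, not measured values.

```python
# Sketch only: permeability from coefficient of consolidation (assumed inputs).
MU_W = 1.0e-3        # water viscosity, Pa*s (assumed)
RHO_W = 1000.0       # water density, kg/m^3
G = 9.81             # gravity, m/s^2
GAMMA_W = RHO_W * G  # unit weight of water, N/m^3

def intrinsic_permeability(cv, mv):
    """cv: coefficient of consolidation [m^2/s], mv: volume compressibility [1/Pa]."""
    hydraulic_conductivity = cv * mv * GAMMA_W          # m/s
    return hydraulic_conductivity * MU_W / (RHO_W * G)  # intrinsic permeability, m^2

if __name__ == "__main__":
    # illustrative values only, not Leg 205 data
    print(f"{intrinsic_permeability(cv=1e-8, mv=1e-7):.2e} m^2")
```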
Abstract:
Fission product yields are fundamental parameters for several nuclear engineering calculations and in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past, and evaluations were released, although still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. Then, we focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values. Results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. The results show that correlations between fission yields strongly affect the statistics of decay heat. Introduction: Nowadays, any engineering calculation performed in the nuclear field should be accompanied by an uncertainty analysis. In such an analysis, different sources of uncertainties are taken into account. Works such as those performed under the UAM project (Ivanov et al., 2013) treat nuclear data as a source of uncertainty, in particular cross-section data, for which uncertainties in the form of covariance matrices are already provided in the major nuclear data libraries. Meanwhile, fission yield uncertainties were often neglected or treated only superficially, because their effects were considered second order compared to cross-sections (Garcia-Herranz et al., 2010). However, the Working Party on International Nuclear Data Evaluation Co-operation (WPEC)
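A minimal sketch, not the ACAB/ALEPH-2 workflow: a fission yield covariance matrix is propagated through a toy linear decay-heat model by Monte Carlo sampling, to show how correlations change the output uncertainty. The yields, decay energies, and correlation matrix below are made-up placeholders, not JEFF-3.1.2 data.

```python
# Sketch only: Monte Carlo propagation of correlated vs. uncorrelated yields.
import numpy as np

rng = np.random.default_rng(0)

y_mean = np.array([0.06, 0.03, 0.01])          # toy independent fission yields
energy = np.array([0.5, 1.2, 2.0])             # toy decay energies (arbitrary units)
sigma = 0.1 * y_mean                           # 10% relative uncertainties
corr = np.array([[ 1.0, -0.5,  0.2],
                 [-0.5,  1.0, -0.3],
                 [ 0.2, -0.3,  1.0]])          # assumed correlation matrix
cov = np.outer(sigma, sigma) * corr

samples_corr = rng.multivariate_normal(y_mean, cov, size=20000)
samples_ind = rng.normal(y_mean, sigma, size=(20000, 3))

heat_corr = samples_corr @ energy              # toy linear decay-heat model
heat_ind = samples_ind @ energy
print("decay-heat std with correlations   :", heat_corr.std())
print("decay-heat std without correlations:", heat_ind.std())
```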
Abstract:
This paper proposes a novel approach to the ordinal regression problem using Gaussian processes. The proposed approach, probabilistic least squares ordinal regression (PLSOR), obtains the probability distribution over ordinal labels using a particular likelihood function. It performs model selection (hyperparameter optimization) using the leave-one-out cross-validation (LOO-CV) technique. PLSOR has the conceptual simplicity and ease of implementation of the least squares approach. Unlike existing Gaussian process ordinal regression (GPOR) approaches, PLSOR does not use any approximation techniques for inference. We compare the proposed approach with state-of-the-art GPOR approaches on synthetic and benchmark data sets. Experimental results show the competitiveness of the proposed approach.
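A simplified sketch in the spirit of the paper, not PLSOR itself: kernel least-squares (Gaussian-process-style) regression on ordinal labels, with the length scale chosen by closed-form leave-one-out residuals. The data and hyperparameter grid are invented for illustration.

```python
# Sketch only: LOO-CV hyperparameter selection via residual_i = (K^-1 y)_i / (K^-1)_ii.
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(0, 4, size=(60, 1))
y = np.clip(np.floor(X[:, 0]) + 1, 1, 4)       # toy ordinal labels 1..4

def rbf(X1, X2, ell):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def loo_error(ell, noise=0.1):
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    K_inv = np.linalg.inv(K)
    residuals = (K_inv @ y) / np.diag(K_inv)   # closed-form LOO residuals
    return np.mean(residuals ** 2)

best_ell = min([0.1, 0.3, 1.0, 3.0], key=loo_error)
print("length scale selected by LOO-CV:", best_ell)
```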
Abstract:
Inverse simulations of musculoskeletal models compute internal forces, such as muscle and joint reaction forces, which are hard to measure, using the more easily measured motion and external forces as input data. Because of the difficulties of measuring muscle forces and joint reactions, such simulations are hard to validate. One way of reducing errors in the simulations is to ensure that the mathematical problem is well posed. This paper presents a study of regularity aspects for an inverse simulation method, often called forward dynamics or dynamical optimization, that takes into account both measurement errors and muscle dynamics. The simulation method is explained in detail. Regularity is examined for a test problem around the optimum using the approximated quadratic problem. The results show improved rank when a regularization term that handles the mechanical over-determinacy is included in the objective. Using the 3-element Hill muscle model, the chosen regularization term is the norm of the activation. To make the problem full rank, only the excitation bounds should be included in the constraints. However, this results in small negative values of the activation, which indicates that muscles are pushing rather than pulling. Despite this unrealistic behavior, the error may be small enough to be accepted for specific applications. These results are a starting point for achieving better results from inverse musculoskeletal simulations from a numerical point of view.
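A minimal sketch under assumed numbers and a much simplified static formulation, not the paper's forward-dynamics method: muscle redundancy resolved as a regularized linear least-squares problem whose objective includes the norm of the activations, here with explicit bounds on the activations.

```python
# Sketch only: min ||R a - tau||^2 + lam * ||a||^2 with 0 <= a <= 1 (illustrative numbers).
import numpy as np
from scipy.optimize import lsq_linear

R = np.array([[0.04 * 1000, 0.03 * 800, 0.05 * 600]])  # moment arm * max force per muscle
tau = np.array([25.0])                                  # required joint torque, N*m
lam = 1e-2                                              # regularization weight

# Stack the activation-norm regularization under the mechanical equations.
A = np.vstack([R, np.sqrt(lam) * np.eye(3)])
b = np.concatenate([tau, np.zeros(3)])

res = lsq_linear(A, b, bounds=(0.0, 1.0))
print("activations:", np.round(res.x, 3))
```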
Abstract:
The associated solution model for binary systems has been modified to include volume effects and the excess entropy arising from preferential interactions between the associate and the free atoms or between the free atoms. Equations for the thermodynamic mixing functions have been derived. An optimization procedure using a modified conjugate gradient method has been used to evaluate the enthalpy and entropy interaction energies, the free energy of dissociation of the complex, its temperature dependence, and the size of the associate. An expression for the concentration-concentration structure factor, Scc(0), has been deduced from the modified associated solution model. The analysis has been applied to the thermodynamic mixing functions of liquid Ga-Te alloys at 1120 K, believed to contain Ga2Te3 associates. It is observed that the modified associated solution model, incorporating volume effects and terms for the temperature dependence of the interaction energies, describes the thermodynamic properties of the Ga-Te system satisfactorily.
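A generic sketch, not the modified associated-solution equations derived in the paper: interaction parameters of a simple sub-regular mixing model are fitted to synthetic enthalpy-of-mixing data with a conjugate-gradient optimizer, to illustrate the kind of parameter optimization described above.

```python
# Sketch only: CG fit of H_mix(x) = x*(1-x)*(a + b*x) to synthetic data.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.05, 0.95, 19)
h_obs = x * (1 - x) * (-40.0 + 15.0 * x) + np.random.default_rng(2).normal(0, 0.2, x.size)

def objective(p):
    a, b = p
    return np.sum((x * (1 - x) * (a + b * x) - h_obs) ** 2)

fit = minimize(objective, x0=[0.0, 0.0], method="CG")
print("fitted interaction parameters:", np.round(fit.x, 2))
```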
Abstract:
The proton radioactivity half-lives of spherical proton emitters are calculated with the cluster model, with the contribution of the centrifugal potential barrier considered separately. The results are compared with experimental data and other theoretical data, and good agreement is found for most nuclei. In addition, two formulae are proposed for the proton decay half-life of spherical proton emitters through a least-squares fit to the available experimental data; they reproduce the experimental half-lives successfully.
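A schematic sketch only: a least-squares fit of a generic Geiger-Nuttall-type ansatz for log10(T1/2), not the two formulae actually proposed in the paper, and the (Z, Q, l, half-life) rows below are invented for illustration.

```python
# Sketch only: fit log10(T_1/2) = a*Z/sqrt(Q) + b*l*(l+1) + c to made-up data.
import numpy as np

# made-up (Z, Q [MeV], l, log10 T_1/2 [s]) rows, illustration only
data = np.array([
    [53, 0.81, 2, -0.9],
    [55, 0.97, 2, -4.0],
    [69, 1.14, 5, -0.7],
    [71, 1.23, 5, -1.9],
    [81, 1.28, 0, -2.2],
])
Z, Q, l, logT = data.T

A = np.column_stack([Z / np.sqrt(Q), l * (l + 1), np.ones_like(Z)])
coeffs, *_ = np.linalg.lstsq(A, logT, rcond=None)
print("fitted coefficients a, b, c:", np.round(coeffs, 3))
```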
Abstract:
Six parameters uniquely describe the orbit of a body about the Sun. Given these parameters, it is possible to make predictions of the body's position by solving its equation of motion. The parameters cannot be directly measured, so they must be inferred indirectly by an inversion method which uses measurements of other quantities in combination with the equation of motion. Inverse techniques are valuable tools in many applications where only noisy, incomplete, and indirect observations are available for estimating parameter values. The methodology of the approach is introduced and the Kepler problem is used as a real-world example. (C) 2003 American Association of Physics Teachers.
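A minimal sketch of the inversion idea on a reduced problem: two orbital elements (semi-major axis and eccentricity) are recovered from noisy synthetic radius observations by nonlinear least squares, rather than the full six-parameter estimation from real measurements.

```python
# Sketch only: fit the conic r(theta) = a*(1 - e^2) / (1 + e*cos(theta)) to noisy data.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)
a_true, e_true = 1.5, 0.3
theta = np.linspace(0, 2 * np.pi, 40)
r_obs = a_true * (1 - e_true**2) / (1 + e_true * np.cos(theta))
r_obs += rng.normal(0, 0.01, theta.size)        # measurement noise

def residuals(p):
    a, e = p
    return a * (1 - e**2) / (1 + e * np.cos(theta)) - r_obs

fit = least_squares(residuals, x0=[1.0, 0.1])
print("estimated a, e:", np.round(fit.x, 3))
```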
Abstract:
This paper addresses the numerical solution of the classical Darcy problem of plane flow through isotropic media. Regarding the numerical procedure, the governing Laplace equation is a classical one in mathematical physics, and several procedures have been devised to solve it. To show the capability of the method, the paper presents some examples.
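A standard finite-difference relaxation sketch, not necessarily the numerical procedure used in the paper: Laplace's equation is solved for the hydraulic head on a square domain with fixed heads on the left and right boundaries and no-flow top and bottom boundaries, from which Darcy velocities follow as v = -K grad(h).

```python
# Sketch only: Jacobi relaxation of Laplace's equation for hydraulic head.
import numpy as np

n = 41
h = np.zeros((n, n))
h[:, 0] = 1.0                       # upstream head (left boundary)
h[:, -1] = 0.0                      # downstream head (right boundary)

for _ in range(5000):               # Jacobi iterations
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                            h[1:-1, :-2] + h[1:-1, 2:])
    h[0, :], h[-1, :] = h[1, :], h[-2, :]   # no-flow top and bottom boundaries
    h[:, 0], h[:, -1] = 1.0, 0.0            # re-impose fixed heads

K = 1e-5                                    # hydraulic conductivity, m/s (illustrative)
vx = -K * np.gradient(h, axis=1)            # Darcy velocity in x (per unit grid spacing)
print("head at domain centre:", round(h[n // 2, n // 2], 3))
```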
Abstract:
With most clinical trials, missing data presents a statistical problem in evaluating a treatment's efficacy. There are many methods commonly used to address missing data; however, these methods leave room for bias to enter the study. This thesis was a secondary analysis of data taken from TIME, a phase 2 randomized clinical trial conducted to evaluate the safety and effect of the administration timing of bone marrow mononuclear cells (BMMNC) for subjects with acute myocardial infarction (AMI). We evaluated the effect of missing data by comparing the variance inflation factor (VIF) of the effect of therapy between all subjects and only subjects with complete data. Using the general linear model, an unbiased solution for the VIF of the treatment effect was derived with the weighted least squares method to incorporate missing data. Two groups were identified from the TIME data: 1) all subjects and 2) subjects with complete data (baseline and follow-up measurements). After the general solution was found for the VIF, it was migrated to Excel 2010 to evaluate data from TIME. The resulting numerical values from the two groups were compared to assess the effect of missing data. The VIF values from the TIME study were considerably lower in the group with missing data. By design, we varied the correlation factor in order to evaluate the VIFs of both groups. As the correlation factor increased, the VIF values increased at a faster rate in the group with only complete data. Furthermore, while varying the correlation factor, the number of subjects with missing data was also varied to see how missing data affect the VIF. When the number of subjects with only baseline data was increased, we saw a significant rate increase in VIF values in the group with only complete data, while the group with missing data saw a steady and consistent increase in the VIF. The same was seen when we varied the group with follow-up-only data. This essentially showed that the VIFs increase steadily when missing data are not ignored; when missing data are ignored, as in our comparison group, the VIF values increase sharply as correlation increases.
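A minimal sketch, assumed rather than taken from the thesis: the variance of the treatment-effect estimate from the generalized/weighted least-squares information matrix is compared between a design that keeps subjects with only baseline data and one restricted to complete cases, with the variance ratio playing the role of the VIF comparison. Sample sizes and correlations are illustrative.

```python
# Sketch only: effect of baseline-only subjects on Var(beta_treatment) under GLS.
import numpy as np

def var_treatment_effect(n_complete, n_baseline_only, rho):
    """Var(beta_treatment) from the GLS information matrix sum(X_i' V_i^-1 X_i)."""
    # per-subject design: one row per visit, columns = [intercept, follow-up effect]
    X_full = np.array([[1.0, 0.0], [1.0, 1.0]])      # baseline + follow-up visits
    V_full = np.array([[1.0, rho], [rho, 1.0]])      # within-subject correlation
    X_base = np.array([[1.0, 0.0]])                  # baseline visit only
    V_base = np.array([[1.0]])
    info = n_complete * X_full.T @ np.linalg.inv(V_full) @ X_full
    info += n_baseline_only * X_base.T @ np.linalg.inv(V_base) @ X_base
    return np.linalg.inv(info)[1, 1]

for rho in (0.2, 0.5, 0.8):
    with_missing = var_treatment_effect(60, 20, rho)
    complete_only = var_treatment_effect(60, 0, rho)
    print(f"rho={rho}: variance ratio (complete-only / all subjects) = "
          f"{complete_only / with_missing:.3f}")
```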
Abstract:
The Gauss–Newton algorithm is an iterative method regularly used for solving nonlinear least squares problems. It is particularly well suited to the treatment of very large scale variational data assimilation problems that arise in atmosphere and ocean forecasting. The procedure consists of a sequence of linear least squares approximations to the nonlinear problem, each of which is solved by an “inner” direct or iterative process. In comparison with Newton’s method and its variants, the algorithm is attractive because it does not require the evaluation of second-order derivatives in the Hessian of the objective function. In practice the exact Gauss–Newton method is too expensive to apply operationally in meteorological forecasting, and various approximations are made in order to reduce computational costs and to solve the problems in real time. Here we investigate the effects on the convergence of the Gauss–Newton method of two types of approximation used commonly in data assimilation. First, we examine “truncated” Gauss–Newton methods where the inner linear least squares problem is not solved exactly, and second, we examine “perturbed” Gauss–Newton methods where the true linearized inner problem is approximated by a simplified, or perturbed, linear least squares problem. We give conditions ensuring that the truncated and perturbed Gauss–Newton methods converge and also derive rates of convergence for the iterations. The results are illustrated by a simple numerical example. A practical application to the problem of data assimilation in a typical meteorological system is presented.
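A toy illustration, not a variational assimilation system: a truncated Gauss-Newton iteration in which the inner linear least-squares step is solved only approximately by a few conjugate-gradient iterations on the normal equations. The model, data, and iteration counts are invented for the example.

```python
# Sketch only: truncated Gauss-Newton with an inexact inner CG solve.
import numpy as np
from scipy.sparse.linalg import cg

t = np.linspace(0.0, 2.0, 40)
y = 2.0 * np.exp(-1.5 * t) + 0.5            # synthetic observations

def residual(x):
    return x[0] * np.exp(x[1] * t) + x[2] - y

def jacobian(x):
    e = np.exp(x[1] * t)
    return np.column_stack([e, x[0] * t * e, np.ones_like(t)])

x = np.array([1.5, -1.0, 0.0])              # initial guess
for _ in range(15):
    r = residual(x)
    J = jacobian(x)
    # truncated inner solve: only two CG iterations on J^T J dx = -J^T r
    dx, _ = cg(J.T @ J, -J.T @ r, maxiter=2)
    x = x + dx
print("estimated parameters:", np.round(x, 3))
```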
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
This article presents dimensionless equations for the temperature dependence of the saturated liquid viscosity of R32, R123, R124, R125, R134a, R141b, and R152a, valid over a temperature range of engineering interest. The correlation has the form Φ_D^n = A + B·T_D, where Φ_D is the dimensionless fluidity (1/η_D) and T_D is a dimensionless temperature. n, A, and B are evaluated for each of the above refrigerants based on a least-squares fit to experimental data. This equation is found to provide an improved fit over those existing in the literature up to T_D = 0.8.
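A minimal sketch with synthetic data, not the measured refrigerant viscosities: the coefficients n, A, and B of Φ_D^n = A + B·T_D are fitted by least squares, here with a linear fit for (A, B) at each trial exponent n and a grid search over n.

```python
# Sketch only: fit n, A, B of Phi_D**n = A + B*T_D to synthetic viscosity data.
import numpy as np

rng = np.random.default_rng(4)
T_D = np.linspace(0.4, 0.8, 25)
eta_true = (0.2 + 3.0 * T_D) ** (-1.0 / 0.5)        # synthetic "data" (n=0.5, A=0.2, B=3.0)
eta_obs = eta_true * (1 + rng.normal(0, 0.005, T_D.size))
Phi = 1.0 / eta_obs                                  # dimensionless fluidity

def fit_for_n(n):
    """Linear least-squares fit of Phi**n = A + B*T_D for a fixed exponent n."""
    X = np.column_stack([np.ones_like(T_D), T_D])
    (A, B), *_ = np.linalg.lstsq(X, Phi ** n, rcond=None)
    eta_pred = (A + B * T_D) ** (-1.0 / n)
    return (A, B), np.sum((eta_pred - eta_obs) ** 2)

best_n = min(np.linspace(0.2, 2.0, 19), key=lambda n: fit_for_n(n)[1])
(A, B), _ = fit_for_n(best_n)
print(f"n = {best_n:.2f}, A = {A:.3f}, B = {B:.3f}")
```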
Abstract:
We have investigated the use of hierarchical clustering of flow cytometry data to classify samples of conventional central chondrosarcoma, a malignant cartilage-forming tumor of uncertain cellular origin, according to similarities with the surface marker profiles of several known cell types. Human primary chondrosarcoma cells, articular chondrocytes, mesenchymal stem cells, fibroblasts, and a panel of tumor cell lines of chondrocytic or epithelial origin were clustered based on the expression profile of eleven surface markers. For clustering, eight hierarchical clustering algorithms, three distance metrics, and several approaches to data preprocessing, including multivariate outlier detection, logarithmic transformation, and z-score normalization, were systematically evaluated. By selecting clustering approaches shown to give reproducible results for cluster recovery of known cell types, primary conventional central chondrosarcoma cells could be grouped into two main clusters with distinctive marker expression signatures: one group clustering together with mesenchymal stem cells (CD49b-high/CD10-low/CD221-high) and a second group clustering close to fibroblasts (CD49b-low/CD10-high/CD221-low). Hierarchical clustering also revealed substantial differences between primary conventional central chondrosarcoma cells and established chondrosarcoma cell lines, with the latter not only segregating apart from primary tumor cells and normal tissue cells but also clustering together with cell lines of epithelial lineage. Our study provides a foundation for the use of hierarchical clustering applied to flow cytometry data as a powerful tool to classify samples according to marker expression patterns, which could help uncover new cancer subtypes.
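A minimal sketch with random numbers standing in for marker intensities: z-score normalization followed by agglomerative hierarchical clustering of samples on their surface-marker profiles, the kind of pipeline evaluated in the study; the linkage method, metric, and cluster count are chosen for illustration only.

```python
# Sketch only: z-score normalization + hierarchical clustering of marker profiles.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(5)
# 30 samples x 11 markers: two synthetic groups with different marker profiles
group_a = rng.normal(loc=2.0, scale=0.5, size=(15, 11))
group_b = rng.normal(loc=0.5, scale=0.5, size=(15, 11))
markers = np.vstack([group_a, group_b])

X = zscore(markers, axis=0)                      # per-marker z-score normalization
Z = linkage(X, method="average", metric="euclidean")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram into 2 clusters
print("cluster sizes:", np.bincount(labels)[1:])
```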