991 results for empirical correlation
Abstract:
2000 Mathematics Subject Classification: 62H15, 62H12.
Abstract:
Empirical correlations are widely used as a predictive tool in geotechnical engineering. However, equations derived for soils very different from those to be characterized are frequently applied, and so they are not representative of their mechanical properties. This fact, together with civil engineering's growing interest in knowing the shear wave velocity (Vs) of the ground, has led to the calculation of different empirical equations to predict the Vs value of the soils of Madrid. In this study this has been achieved by calculating empirical correlations between the Vs values obtained through the ReMi (Refraction Microtremor) technique and the Standard Penetration Test (500 NSPT values). The empirical correlations proposed are applicable to the whole metropolitan area of Madrid, and have an excellent predictive capability owing to the incorporation of measurement depth into the equations, which has an important influence on the resistance properties of soils.
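Correlations of this kind typically take a power-law form in blow count and depth. A minimal sketch, assuming a hypothetical Vs = a·N^b·z^c relation with placeholder coefficients (not the fitted Madrid values, which the abstract does not give):

```python
def vs_from_nspt(n_spt: float, depth_m: float,
                 a: float = 80.0, b: float = 0.33, c: float = 0.15) -> float:
    """Predict shear wave velocity (m/s) from an SPT blow count and the
    measurement depth via a power law Vs = a * N^b * z^c.
    The coefficients a, b, c are illustrative placeholders, not the
    values fitted for the Madrid soils in the study."""
    return a * n_spt ** b * depth_m ** c

# The depth term means the same blow count measured deeper predicts a
# higher Vs, the influence the abstract highlights.
shallow = vs_from_nspt(20, 2.0)
deep = vs_from_nspt(20, 10.0)
```

The exponents b and c would be obtained by regressing log Vs on log N and log z over the 500 paired measurements.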
Abstract:
For the optimal design of plate heat exchangers (PHEs), an accurate thermal-hydraulic model that takes into account the effect of the flow arrangement on the heat load and pressure drop is necessary. In the present study, the effect of the flow arrangement on the pressure drop of a PHE is investigated. Thirty-two different arrangements were experimentally tested using a laboratory-scale PHE with flat plates. The experimental data were used for (a) determination of an empirical correlation for the effect of the number of passes and the number of flow channels per pass on the pressure drop; (b) validation of a friction factor model through parameter estimation; and (c) comparison with the simulation results obtained with a CFD (computational fluid dynamics) model of the PHE. All three approaches resulted in good agreement between experimental and predicted values of pressure drop. Moreover, the CFD model is used for evaluating the flow maldistribution in a PHE with two channels per pass.
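The qualitative effect of the pass/channel arrangement can be sketched with a simple series-parallel hydraulic model: the flow splits among the parallel channels of a pass, and the frictional losses of the passes add in series. The geometry and friction factor below are illustrative assumptions, not the tested PHE or the correlation fitted in the study:

```python
def phe_pressure_drop(flow_m3s, n_passes, n_channels_per_pass,
                      plate_length=0.5, channel_area=2e-4,
                      hydraulic_diam=5e-3, rho=998.0,
                      friction_factor=0.05):
    """Rough pressure-drop estimate (Pa) for one side of a PHE.
    The flow splits among the parallel channels of each pass, and the
    frictional loss accumulates over the passes in series.  All geometry
    and friction values are hypothetical placeholders."""
    u = flow_m3s / (n_channels_per_pass * channel_area)  # channel velocity, m/s
    dp_channel = friction_factor * (plate_length / hydraulic_diam) * rho * u ** 2 / 2
    return n_passes * dp_channel
```

More channels per pass lower the channel velocity, so the pressure drop falls; more passes in series raise it, which is the arrangement effect the fitted correlation captures.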
Abstract:
A hydraulic jump is characterized by strong energy dissipation and mixing, large-scale turbulence, air entrainment, waves and spray. Despite recent pertinent studies, the interaction between air-bubble diffusion and momentum transfer is not completely understood. The objective of this paper is to present experimental results from new measurements performed in a rectangular horizontal flume with partially developed inflow conditions. The vertical distributions of void fraction and air-bubble count rate were recorded for inflow Froude numbers Fr1 in the range from 5.2 to 14.3. A rapid detrainment process was observed near the jump toe, whereas the structure of the air diffusion layer was clearly observed over longer distances. These new data were compared with previous data generally collected at lower Froude numbers. The comparison demonstrated that, at a fixed distance from the jump toe, the maximum void fraction Cmax increases with increasing Fr1. The vertical locations of the maximum void fraction and bubble count rate were consistent with previous studies. Finally, an empirical correlation between the upper boundary of the air diffusion layer and the distance from the impingement point was provided.
Abstract:
Citric acid was produced from crude glycerol from the biodiesel industry in batch cultures of Yarrowia lipolytica W29, carried out in a lab-scale stirred-tank bioreactor in order to assess the effect of the oxygen mass transfer rate on this bioprocess. An empirical correlation was proposed to describe the oxygen volumetric mass transfer coefficient (kLa) as a function of the operating conditions (stirring speed and specific air flow rate) and cellular density. kLa increased as a power function of the specific power input and superficial gas velocity, and slightly decreased with cellular density. Increasing the initial kLa from 7 h-1 to 55 h-1 led to a 7.8-fold increase in the final citric acid concentration. Experiments were also performed at controlled dissolved oxygen (DO), and the citric acid concentration increased with DO up to 60% of saturation. Since setting an optimal kLa is operationally simpler than controlling DO, it can be concluded that kLa is an adequate parameter for the optimization of citric acid production from crude glycerol by Y. lipolytica and should be considered in bioprocess scale-up. Our empirical correlation, which accounts for the operating conditions and cellular density, will be a valid tool for this purpose.
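The trends the abstract describes, kLa rising as a power function of specific power input and superficial gas velocity and falling slightly with cell density, might be sketched as follows; every coefficient is a hypothetical placeholder, not a fitted value from the study:

```python
def kla(power_per_volume, superficial_gas_velocity, biomass,
        k=0.02, alpha=0.6, beta=0.4, gamma=0.05):
    """Volumetric oxygen mass-transfer coefficient (1/h) modelled as a
    power function of specific power input (W/m3) and superficial gas
    velocity (m/s), attenuated slightly by cell density (g/L).
    k, alpha, beta, gamma are illustrative placeholders only."""
    return (k * power_per_volume ** alpha
              * superficial_gas_velocity ** beta
              / (1.0 + gamma * biomass))
```

In scale-up, one would solve this kind of relation for the stirring speed and aeration rate that reproduce the target kLa at the larger scale.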
Abstract:
The study of fluid flow in pipes is one of the main topics of interest for engineers in industry. In this thesis, an effort is made to study the boundary layers formed near the pipe wall and how they act as a resistance to heat transfer. A few decades ago, scientists derived analytical and empirical results by hand, as limited means were available to solve complex fluid flow phenomena. With advances in computing, it has become practical to understand and analyze the actual fluid flow in any type of geometry. Several methodologies have been used in the past to analyze the boundary layer equations and to derive expressions for heat transfer. An integral-relation approach is used for the analytical solution of the boundary layer equations and is compared with FLUENT simulations for the laminar case. A law-of-the-wall approach is used to derive an empirical correlation between dimensionless numbers, which is then compared with the FLUENT results for the turbulent case. In this way, different approaches, namely analytical, empirical and numerical, are compared for the same set of fluid flow equations.
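Law-of-the-wall reasoning for turbulent pipe flow typically ends in a dimensionless correlation of the Dittus-Boelter type; the thesis's own fitted correlation is not quoted in the abstract, so the classic textbook form serves as an illustration of what "a correlation between dimensionless numbers" looks like:

```python
def nusselt_dittus_boelter(re: float, pr: float, heating: bool = True) -> float:
    """Classic Dittus-Boelter correlation for fully developed turbulent
    pipe flow: Nu = 0.023 * Re^0.8 * Pr^n, with n = 0.4 for heating and
    n = 0.3 for cooling.  Valid roughly for Re > 1e4 and 0.6 < Pr < 160."""
    n = 0.4 if heating else 0.3
    return 0.023 * re ** 0.8 * pr ** n

# Nusselt number for water-like conditions (Re = 5e4, Pr = 4)
nu = nusselt_dittus_boelter(5e4, 4.0)
```

The heat transfer coefficient then follows from h = Nu · k / D, with k the fluid conductivity and D the pipe diameter.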
Abstract:
This work is part of a research program on the technical feasibility of artificially improving soil layers. Its aim is to contribute to making improved soils viable as support for shallow foundations. The study is based on experimental results of load tests on circular plates of 0.30 m and 0.60 m diameter resting on cement-improved soil layers (5% cement content) of 0.15 m, 0.30 m and 0.60 m thickness. The plate diameters (D) and the thicknesses of the cement-improved soil layers (H) were chosen to give three distinct values of the H/D ratio: 0.5, 1 and 2. The results, expressed dimensionlessly as relations between normalized stress and relative settlement, demonstrate the influence of the improved-layer thickness on the behaviour of shallow foundations under vertical loading. A semi-empirical correlation is developed to allow the prediction of settlement magnitudes and failure stresses of footings from plate load test results. The applicability of analytical models for shallow foundations resting on non-homogeneous soil profiles with cohesive-frictional characteristics was also evaluated. In this regard, a quantitative and qualitative comparison among the various methods for predicting bearing capacity and settlement is presented, together with a validation of the proposals through comparisons between calculated results and field measurements.
The main findings of the research are: [a] improved performance of foundations supported on treated soils; [b] difficulty in predicting failure loads and settlement levels of foundations on stratified soils using analytical methods, reflecting the complexity of this soil-structure interaction problem; and [c] the development of a semi-empirical methodology for estimating the behaviour of shallow foundations from plate load test results.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The accurate determination of the thermophysical properties of milk is very important for the design, simulation, optimization, and control of food processes such as evaporation, heat exchange, and spray drying. Generally, polynomial methods are used to predict these properties, based on empirical correlation to experimental data. Artificial neural networks are better suited to processing noisy data and extensive knowledge indexing. This article proposes the application of neural networks for the prediction of the specific heat, thermal conductivity, and density of milk with temperature ranging from 2.0 to 71.0 °C, water content from 72.0 to 92.0% (w/w), and fat content from 1.350 to 7.822% (w/w). The artificial neural networks showed a better prediction capability for the specific heat, thermal conductivity, and density of milk than polynomial modeling, and they represent a reasonable alternative to empirical modeling of the thermophysical properties of foods.
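The polynomial baseline the neural network is compared against can be fitted by ordinary least squares. The calibration points below are illustrative only, not the measured milk data from the study, and the model is the simplest linear-in-parameters case:

```python
import numpy as np

# Hypothetical calibration points: temperature (°C), water content (% w/w)
# -> specific heat (kJ/kg.K).  Illustrative values, not the study's data.
T = np.array([5.0, 20.0, 40.0, 60.0, 70.0])
W = np.array([75.0, 80.0, 85.0, 90.0, 92.0])
cp = np.array([3.620, 3.720, 3.840, 3.960, 4.016])

# Linear-in-parameters empirical model: cp = a + b*T + c*W
X = np.column_stack([np.ones_like(T), T, W])
coef, *_ = np.linalg.lstsq(X, cp, rcond=None)

def predict_cp(t, w):
    """Predict specific heat from temperature and water content."""
    return coef[0] + coef[1] * t + coef[2] * w
```

A neural network replaces the fixed polynomial basis with learned nonlinear features, which is why it can track the property surfaces more closely than the polynomial fit.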
Abstract:
A central goal in unsaturated soil mechanics research is to create a smooth transition between traditional soil mechanics approaches and an approach that is applicable to unsaturated soils. As many studies have demonstrated, the undrained shear strength and the liquidity index of reconstituted or remoulded saturated soils are consistently correlated. Over the liquidity index range from 1 (at the liquid limit, wL) to 0 (at the plastic limit, wP), the shear strength ranges from approximately 2 kPa to 200 kPa. Similarly, for compacted soil, the shear strength at the plastic limit ranges from 150 kPa to 250 kPa. When compacted at their optimum water content, most soils have a suction that ranges from 20 kPa to 500 kPa; in the field, however, compacted materials are subjected to drying and wetting, which affect their initial suction and, as a consequence, their shear strength. Unconfined shear tests were performed on five compacted tropical soils and kaolin. Specimens were tested in the as-compacted condition, and also after undergoing drying or wetting. The test results and data from the prior literature were examined, taking into account the roles of void ratio, suction, and relative water content. An interpretation of the phenomena involved in the development of the undrained shear strength of unsaturated soils in the contexts of soil water retention and Atterberg limits is presented, providing a practical view of the behaviour of compacted soil based on the concept of unsaturated soil. Finally, an empirical correlation is presented that relates the unsaturated state of compacted soils to the unconfined shear strength.
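Taking only the end points quoted above for saturated soils (about 200 kPa at the plastic limit, LI = 0, and about 2 kPa at the liquid limit, LI = 1), a log-linear interpolation of the strength-liquidity index correlation looks like this:

```python
import math

def undrained_strength_kpa(liquidity_index: float,
                           su_at_pl: float = 200.0,
                           su_at_ll: float = 2.0) -> float:
    """Log-linear interpolation of undrained shear strength between the
    end points the abstract quotes: about 200 kPa at the plastic limit
    (LI = 0) and about 2 kPa at the liquid limit (LI = 1)."""
    log_su = (math.log10(su_at_pl)
              + liquidity_index * (math.log10(su_at_ll) - math.log10(su_at_pl)))
    return 10.0 ** log_su
```

The two-orders-of-magnitude span over one unit of LI is why the correlation is conventionally drawn on a logarithmic strength axis; the paper's contribution is extending this kind of relation to the unsaturated, compacted state.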
Abstract:
Background: In longitudinal studies where subjects experience recurrent incidents over a period of time, such as respiratory infections, fever or diarrhea, statistical methods are required to take into account the within-subject correlation. Methods: For repeated-events data with censored failure, the independent increment (AG), marginal (WLW) and conditional (PWP) models are three multiple-failure models that generalize Cox's proportional hazards model. In this paper, we assess the efficiency, accuracy and robustness of all three models under simulated scenarios with varying degrees of within-subject correlation, censoring levels, maximum number of possible recurrences and sample size. We also study the methods' performance on a real dataset from a cohort study of bronchial obstruction. Results: We find substantial differences between the methods, and no single method is optimal. AG and PWP seem preferable to WLW for low correlation levels, but the situation reverses for high correlations. Conclusions: All methods are stable under censoring, worsen with increasing recurrence levels, and share a bias problem which, among other consequences, makes asymptotic normal confidence intervals not fully reliable, although they are well developed theoretically.
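The three models differ mainly in how each subject's recurrences are laid out as risk intervals. A minimal sketch, with hypothetical event times, of the AG layout (total-time counting process with a common baseline) and the PWP gap-time layout (time since the previous event, stratified by event order):

```python
def ag_records(event_times, censor_time):
    """Andersen-Gill layout: (start, stop, status] intervals on the
    total-time scale, one row per at-risk interval."""
    rows, start = [], 0.0
    for t in event_times:
        rows.append((start, t, 1))
        start = t
    if start < censor_time:
        rows.append((start, censor_time, 0))  # final censored interval
    return rows

def pwp_gap_records(event_times, censor_time):
    """PWP gap-time layout: (stratum, gap, status), where the stratum is
    the event order and the gap is time since the previous event."""
    rows, prev = [], 0.0
    for k, t in enumerate(event_times, start=1):
        rows.append((k, t - prev, 1))
        prev = t
    if prev < censor_time:
        rows.append((len(event_times) + 1, censor_time - prev, 0))
    return rows
```

WLW instead treats each event order as a separate failure time measured from study entry, so every subject contributes a row to every event stratum; the differing risk sets are what drive the performance differences the paper reports.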
Abstract:
In this paper, we extend the debate concerning Credit Default Swap valuation to include time-varying correlations and covariances. Traditional multivariate techniques treat the correlations between covariates as constant over time; however, this view is not supported by the data. Secondly, since financial data do not follow a normal distribution because of their heavy tails, modeling the data using a Generalized Linear Model (GLM) incorporating copulas emerges as a more robust technique than traditional approaches. This paper also includes an empirical analysis of the regime-switching dynamics of credit risk in the presence of liquidity, following the general practice of assuming that credit and market risk follow a Markov process. The study was based on Credit Default Swap data obtained from Bloomberg spanning the period 1 January 2004 to 8 August 2006. The empirical examination of the regime-switching tendencies provided quantitative support for the anecdotal view that liquidity decreases as credit quality deteriorates. The analysis also examined the joint probability distribution of the credit risk determinants across credit quality through the use of a copula function, which disaggregates the behavior embedded in the marginal gamma distributions so as to isolate the level of dependence captured in the copula function. The results suggest that the time-varying joint correlation matrix performed far better than the constant correlation matrix, the centerpiece of linear regression models.
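A common way to let a correlation vary over time, in the spirit of the paper's critique of constant-correlation models, is an exponentially weighted (RiskMetrics-style) estimator; the decay factor below is the conventional daily-data choice, not a value from the study:

```python
def ewma_correlation(x, y, lam=0.94):
    """Exponentially weighted correlation path between two return series.
    lam is the decay factor (0.94 is the classic RiskMetrics choice for
    daily data).  Returns the correlation estimate after each observation."""
    vx = vy = cxy = 1e-12          # tiny seeds avoid division by zero
    path = []
    for a, b in zip(x, y):
        vx = lam * vx + (1 - lam) * a * a      # EWMA variance of x
        vy = lam * vy + (1 - lam) * b * b      # EWMA variance of y
        cxy = lam * cxy + (1 - lam) * a * b    # EWMA covariance
        path.append(cxy / (vx * vy) ** 0.5)
    return path
```

Recent observations dominate the estimate, so the correlation matrix adapts as the joint behaviour of the covariates shifts, which is the property the constant-correlation benchmark lacks.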
Abstract:
Background: Genome-wide association studies (GWAS) are becoming the approach of choice to identify genetic determinants of complex phenotypes and common diseases. The astonishing amount of generated data and the use of distinct genotyping platforms with variable genomic coverage are still analytical challenges. Imputation algorithms combine information from directly genotyped markers with the haplotypic structure of the population of interest to infer a poorly genotyped or missing marker, and are considered a near-zero-cost approach for comparing and combining data generated in different studies. Several reports have stated that imputed markers have an overall acceptable accuracy, but no published report has performed a pairwise comparison of imputed and empirical association statistics for a complete set of GWAS markers. Results: In this report we identified a total of 73 imputed markers that yielded a nominally statistically significant association at P < 10^-5 for type 2 Diabetes Mellitus and compared them with results obtained from empirical allelic frequencies. Interestingly, despite their overall high correlation, association statistics based on imputed frequencies were discordant for 35 of the 73 (47%) associated markers, considerably inflating the type I error rate of imputed markers. We comprehensively tested several quality thresholds, the haplotypic structure underlying imputed markers, and the use of flanking markers as predictors of inaccurate association statistics derived from imputed markers. Conclusions: Our results suggest that association statistics from imputed markers in specific MAF (Minor Allele Frequency) ranges, located in weak linkage disequilibrium blocks, or strongly deviating from local patterns of association are prone to inflated false positive association signals.
The present study highlights the potential of imputation procedures and proposes simple criteria for selecting the best imputed markers for follow-up genotyping studies.
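The discordance figure reported above (35 of 73 imputed hits not confirmed by the empirical genotypes) corresponds to a simple rate over the imputed-significant markers:

```python
def discordance_rate(imputed_significant, empirical_significant):
    """Fraction of markers called significant from imputed frequencies
    whose empirically genotyped test disagrees, matching the 35/73
    (about 47%) figure reported in the abstract."""
    flagged = [e for i, e in zip(imputed_significant, empirical_significant) if i]
    if not flagged:
        return 0.0
    return sum(1 for e in flagged if not e) / len(flagged)
```

The same pairwise comparison, stratified by MAF range or local LD strength, is how the predictors of inaccurate imputed statistics would be evaluated.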
Abstract:
We introduce a conceptual model for the in-plane physics of an earthquake fault. The model employs cellular automaton techniques to simulate tectonic loading, earthquake rupture, and strain redistribution. The impact of a hypothetical crustal elastodynamic Green's function is approximated by a long-range strain redistribution law with an r^(-p) dependence. We investigate the influence of the effective elastodynamic interaction range upon the dynamical behaviour of the model by conducting experiments with different values of the exponent p. The results indicate that this model has two distinct, stable modes of behaviour. The first mode produces a characteristic earthquake distribution, with moderate to large events preceded by an interval of time in which the rate of energy release accelerates. A correlation-function analysis reveals that accelerating sequences are associated with a systematic, global evolution of strain energy correlations within the system. The second stable mode produces Gutenberg-Richter statistics, with near-linear energy release and no significant global correlation evolution. A model with effectively short-range interactions preferentially displays Gutenberg-Richter behaviour. However, models with long-range interactions appear to switch between the characteristic and GR modes. As the range of elastodynamic interactions is increased, characteristic behaviour begins to dominate GR behaviour. These models demonstrate that evolution of strain energy correlations may occur within systems with a fixed elastodynamic interaction range. Supposing that similar mode-switching dynamical behaviour occurs within earthquake faults, intermediate-term forecasting of large earthquakes may be feasible for some earthquakes but not for others, in alignment with certain empirical seismological observations. Further numerical investigation of dynamical models of this type may lead to advances in earthquake forecasting research and theoretical seismology.
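The long-range redistribution law can be sketched in one dimension: when a cell fails, its stress drop is shared among the other cells with weight proportional to r^(-p), where r is the cell separation. This is a minimal illustration of the interaction kernel only, not the full loading/rupture automaton of the paper:

```python
def redistribute_strain(strain, failed_idx, drop, p):
    """Redistribute the stress drop of a failed cell over all other cells
    of a 1-D lattice with weight proportional to r^(-p).  Larger p gives
    an effectively short-range interaction; smaller p, a long-range one."""
    n = len(strain)
    weights = [0.0 if i == failed_idx else abs(i - failed_idx) ** (-p)
               for i in range(n)]
    total = sum(weights)
    return [s + drop * w / total for s, w in zip(strain, weights)]
```

Because the weights are normalized, the redistributed drop is conserved; varying p is exactly the experiment the abstract describes for probing the transition between Gutenberg-Richter and characteristic behaviour.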