950 results for Geo-statistical model


Relevance:

80.00%

Publisher:

Abstract:

We show that the quasifission paths predicted by the one-body dissipation dynamics, in the slowest phase of a binary reaction, follow a quasistatic path, which represents a sequence of states of thermal equilibrium at a fixed value of the deformation coordinate. This establishes the use of the statistical particle-evaporation model in the case of dynamical time-evolving systems. Pre- and post-scission multiplicities of neutrons and total multiplicities of protons and α particles in fission reactions of ⁶³Cu+⁹²Mo, ⁶⁰Ni+¹⁰⁰Mo, ⁶³Cu+¹⁰⁰Mo at 10 MeV/u and ²⁰Ne+¹⁴⁴,¹⁴⁸,¹⁵⁴Sm at 20 MeV/u are reproduced reasonably well with statistical model calculations performed along dynamic trajectories whose slow stage (from the most compact configuration up to the point where the neck starts to develop) lasts some 35×10⁻²¹ s.

Relevance:

80.00%

Publisher:

Abstract:

This paper will discuss the major developments in the area of fingerprint identification that followed the publication of the National Research Council (NRC, of the US National Academies of Sciences) report in 2009 entitled Strengthening Forensic Science in the United States: A Path Forward. The report portrayed an image of a field of expertise used for decades without the necessary scientific research-based underpinning. The advances since the report and the needs in selected areas of fingerprinting will be detailed. These include the measurement of the accuracy, reliability, repeatability and reproducibility of the conclusions offered by fingerprint experts. The paper will also pay attention to the development of statistical models allowing the assessment of fingerprint comparisons. As a corollary of these developments, the next challenge is to reconcile a traditional practice dominated by deterministic conclusions with the probabilistic logic of any statistical model. There is a call for greater candour, and fingerprint experts will need to communicate differently on the strengths and limitations of their findings. Their testimony will have to go beyond the blunt assertion of the uniqueness of fingerprints or opinions delivered ipse dixit.

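The statistical models mentioned above are typically framed as likelihood ratios. As a generic illustration in standard forensic-statistics notation (not a formula taken from this paper), the weight of the fingerprint comparison evidence E is

```latex
\mathrm{LR} = \frac{\Pr(E \mid H_p)}{\Pr(E \mid H_d)},
```

where H_p is the proposition that the mark and the reference print come from the same source and H_d the proposition that they come from different sources; values above 1 support H_p, values below 1 support H_d.
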
Relevance:

80.00%

Publisher:

Abstract:

We present a statistical model of winegrowing specialization in the province of Barcelona around 1860 that combines the Boserupian pressure of population growth, the pull of demand induced by Smithian-type growth (measured by travel time in hours to the nearest port), and the suitability of the available soils for sowing grain or planting vines (measured by water stress, slope and frost risk). The overall adjusted R² levels obtained, ranging between 0.608 and 0.826, can be considered fairly good. We believe that inequality in land ownership also played a very important role, but we have had to omit it for the time being for lack of statistical data. The treatment of the possible endogeneity problem arising from the use of socio-demographic variables also needs further work.
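
For reference, the adjusted R² reported above is the usual penalized version of R² for a regression with k explanatory variables fitted on n observations:

```latex
\bar{R}^{2} = 1 - \left(1 - R^{2}\right)\frac{n-1}{n-k-1}.
```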

Relevance:

80.00%

Publisher:

Abstract:

The comparative QSAR is a tool for validating any statistical model that seems reasonable in describing an interaction between a bioactive new chemical entity (BIONCE) and the biological system. In order to deepen the understanding of the relationships and the meaning of the parameters within the model, some kind of lateral validation is necessary. This validation can be accomplished by chemical procedures using physicochemical organic reactions and by means of biological systems. In this paper we review some of these comparisons and also present a lateral validation using the same set of antimicrobial hydrazides acting against Saccharomyces cerevisiae yeast cells and Escherichia coli bacterial cells. QSARs are presented to shed light on this important way of showing that the QSAR model is not the endpoint, but the beginning.
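
The QSARs discussed here are, in general form, Hansch-type linear free-energy relationships; a generic equation (illustrative only, not this paper's fitted model) relating the minimum effective concentration C to hydrophobic and electronic descriptors is

```latex
\log\!\left(\frac{1}{C}\right) = a\,\log P + b\,\sigma + c,
```

where log P is the octanol-water partition coefficient, σ an electronic (Hammett-type) constant, and a, b, c fitted coefficients. Lateral validation then compares such fitted coefficients across chemically or biologically related systems.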

Relevance:

80.00%

Publisher:

Abstract:

Automobile bodily injury (BI) claims remain unsettled for a long time after the accident. The estimation of an accurate reserve for Reported But Not Settled (RBNS) claims is therefore vital for insurers. In accordance with the recommendation included in the Solvency II project (CEIOPS, 2007), a statistical model is implemented here for RBNS reserve estimation. Lognormality is observed in empirical compensation cost data for different levels of BI severity. The individual claim provision is estimated by allocating the expected mean compensation for the predicted severity of the victim's injury, for which the upper bound is also computed. The BI severity is predicted by means of a heteroscedastic multiple choice model, because empirical evidence shows that the variability in the latent severity of injured individuals travelling by car is not constant. It is shown that this methodology can improve the accuracy of RBNS reserve estimation at all stages, compared to the subjective assessment that has traditionally been made by practitioners.
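
A minimal sketch of the reserving idea, with hypothetical severity classes and lognormal parameters (not the paper's fitted values): each open claim is assigned the expected compensation of its predicted severity class, and the RBNS reserve is the sum over open claims.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical lognormal parameters (mu, sigma) per predicted BI severity class.
severity_params = {
    "minor":   (7.5, 0.9),
    "serious": (9.0, 1.1),
    "severe":  (10.5, 1.3),
}

def expected_compensation(mu, sigma):
    """Mean of a lognormal(mu, sigma) compensation cost."""
    return np.exp(mu + 0.5 * sigma**2)

def provision_upper_bound(mu, sigma, q=0.95):
    """q-quantile of the lognormal: a simple upper bound for the individual provision."""
    return np.exp(mu + sigma * norm.ppf(q))

# Predicted severities for the reported-but-not-settled claims (from the choice model).
open_claims = ["minor", "minor", "serious", "severe"]

rbns_reserve = sum(expected_compensation(*severity_params[s]) for s in open_claims)
print(f"RBNS reserve estimate: {rbns_reserve:,.0f}")
```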

Relevance:

80.00%

Publisher:

Abstract:

It is a well-known phenomenon that the constant-amplitude fatigue limit of a large component is lower than the fatigue limit of a small specimen made of the same material. In notched components the opposite occurs: the fatigue limit defined as the maximum stress at the notch is higher than that achieved with smooth specimens. These two effects have been taken into account in most design handbooks with the help of empirical formulas or design curves. The basic idea of this study is that the size effect can mainly be explained by the statistical size effect. A component subjected to an alternating load can be assumed to contain a sample of initiated cracks at the end of the crack initiation phase. The size of the sample depends on the size of the specimen in question. The main objective of this study is to develop a statistical model for the estimation of this kind of size effect. It was shown that the size of the sample of initiated cracks should be based on the stressed surface area of the specimen. In the case of a varying stress distribution, an effective stress area must be calculated; it is based on the decreasing probability of equally sized initiated cracks at lower stress levels. If the distribution function of the parent population of cracks is known, the distribution of the maximum crack size in a sample can be derived. This makes it possible to calculate an estimate of the largest expected crack for any sample size. The estimate of the fatigue limit can then be calculated with the help of linear elastic fracture mechanics. In notched components another source of size effect has to be taken into account. If we consider two specimens of similar shape but different size, the stress gradient in the smaller specimen is steeper. If there is an initiated crack in both of them, the stress intensity factor at the crack in the larger specimen is higher. The second goal of this thesis is to create a calculation method for this factor, which is called the geometric size effect. The proposed method for the calculation of the geometric size effect is also based on linear elastic fracture mechanics. An accurate value of the stress intensity factor in a non-linear stress field can be calculated using weight functions. The calculated stress intensity factor values at the initiated crack can be compared to the corresponding stress intensity factor due to constant stress. The notch size effect is calculated as the ratio of these stress intensity factors. The presented methods were tested against experimental results taken from three German doctoral works. Two candidates for the parent population of initiated cracks were found: the Weibull distribution and the lognormal distribution. Both of them can be used successfully for the prediction of the statistical size effect for smooth specimens. In the case of notched components, the geometric size effect due to the stress gradient has to be combined with the statistical size effect. The proposed method gives good results as long as the notch in question is blunt enough. For very sharp notches, with a stress concentration factor of about 5 or higher, the method does not give adequate results. It was shown that the plastic portion of the strain becomes quite high at the root of such notches, and the use of linear elastic fracture mechanics therefore becomes questionable.
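
A minimal sketch of the statistical size effect argument, with all numbers hypothetical: if initiated crack depths follow a parent distribution F(a), the largest crack in a sample of n cracks has distribution F(a)^n, the sample size n is scaled with the stressed surface area, and the fatigue limit follows from keeping the stress intensity range of that largest crack below the threshold ΔK_th.

```python
import numpy as np

# Hypothetical Weibull parent distribution of initiated crack depths (mm).
shape, scale = 1.5, 0.05   # illustrative parameters only

def median_max_crack(n_cracks, q=0.5):
    """Median depth (mm) of the largest crack among n_cracks: solve F(a)**n = q."""
    return scale * (-np.log(1.0 - q ** (1.0 / n_cracks))) ** (1.0 / shape)

def fatigue_limit(area_mm2, area_ref_mm2=100.0, n_ref=1000,
                  dK_th=6.0, Y=0.73):
    """Fatigue limit (stress range, MPa) from LEFM: dK_th = Y * dS * sqrt(pi * a)."""
    n = n_ref * area_mm2 / area_ref_mm2        # sample size scales with stressed area
    a = median_max_crack(n) * 1e-3             # mm -> m
    return dK_th / (Y * np.sqrt(np.pi * a))

print(fatigue_limit(100.0))     # small specimen
print(fatigue_limit(10000.0))   # large component: more cracks, lower fatigue limit
```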

Relevance:

80.00%

Publisher:

Abstract:

Esterification reactions of glycerol with lauric acid in a solvent-free system were carried out using lipases from several sources. All lipases were immobilized on polysiloxane-polyvinyl alcohol particles by covalent binding, with high recovered activity. Among the tested enzymes, the Candida antarctica lipase gave the highest molar conversion (76%), yielding similar proportions of monolaurin and dilaurin and a low amount of trilaurin. To further improve the process, Response Surface Methodology (RSM) was used, and the optimum temperature and glycerol to lauric acid molar ratio were found to be 45 °C and 5:1, respectively. Under these conditions, a monolaurin concentration of 31.35% was attained, in close agreement with the statistical model prediction.
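
A minimal sketch of the RSM step, with entirely made-up design points and responses (temperature in °C, glycerol:lauric acid molar ratio, monolaurin %): fit a second-order polynomial to the design data and locate its optimum inside the experimental region.

```python
import numpy as np

# Hypothetical design points: (temperature, molar ratio) -> monolaurin concentration (%).
X = np.array([[35, 3], [35, 7], [55, 3], [55, 7], [45, 5],
              [45, 5], [30, 5], [60, 5], [45, 2], [45, 8]], dtype=float)
y = np.array([22.0, 25.5, 24.0, 23.5, 31.0, 30.5, 20.0, 21.5, 19.0, 26.0])

T, R = X[:, 0], X[:, 1]
# Second-order response surface: b0 + b1*T + b2*R + b3*T^2 + b4*R^2 + b5*T*R
A = np.column_stack([np.ones_like(T), T, R, T**2, R**2, T * R])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(t, r):
    return coef @ np.array([1.0, t, r, t**2, r**2, t * r])

# Grid search for the optimum inside the experimental region.
tt, rr = np.meshgrid(np.linspace(30, 60, 121), np.linspace(2, 8, 121))
yy = np.vectorize(predict)(tt, rr)
i = np.unravel_index(np.argmax(yy), yy.shape)
print(f"Predicted optimum: {tt[i]:.1f} °C, ratio {rr[i]:.1f}:1, {yy[i]:.1f} % monolaurin")
```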

Relevance:

80.00%

Publisher:

Abstract:

The increasing demand of consumer markets for the welfare of birds in poultry houses has motivated much scientific research on monitoring and classifying welfare according to the production environment. Given the complexity of the interaction between the birds and the aviary environment, the correct interpretation of their behavior becomes an important way to estimate the welfare of these birds. This study obtained multiple logistic regression models capable of estimating the welfare of broiler breeders in relation to the environment of the aviaries and the behaviors expressed by the birds. In the experiment, several behaviors expressed by breeders housed in a climatic chamber were observed under controlled temperatures and three different ammonia concentrations in the air, monitored daily. From the analysis of the data, two logistic regression models were obtained: the first uses a measured value of the ammonia concentration, and the second uses a binary value classifying the ammonia concentration, assigned by a person through olfactory perception. The analysis showed that both models classified the broiler breeders' welfare successfully.
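
A minimal sketch of the two-model idea, with hypothetical observations and illustrative features (ammonia in ppm and two behavior frequencies); the actual variables and coefficients of the study are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical observations: [ammonia_ppm, %time_resting, %time_dust_bathing].
X = np.array([[5, 40, 12], [8, 45, 10], [25, 70, 2], [30, 75, 1],
              [6, 38, 14], [28, 68, 3], [10, 50, 8], [35, 80, 0]], dtype=float)
welfare_ok = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = acceptable welfare

# Model 1: measured ammonia concentration as a continuous predictor.
model_measured = LogisticRegression().fit(X, welfare_ok)

# Model 2: ammonia replaced by a binary olfactory classification (strong smell or not).
X_bin = X.copy()
X_bin[:, 0] = (X[:, 0] > 20).astype(float)
model_binary = LogisticRegression().fit(X_bin, welfare_ok)

print(model_measured.predict_proba([[20, 60, 5]])[0, 1])   # P(acceptable welfare)
print(model_binary.predict_proba([[1, 60, 5]])[0, 1])
```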

Relevance:

80.00%

Publisher:

Abstract:

Prostate-specific antigen (PSA) is a marker that is commonly used in estimating prostate cancer risk. Prostate cancer is usually a slowly progressing disease, which might not cause any symptoms at all. Nevertheless, some cancers are aggressive and need to be treated before they become life-threatening. However, the blood PSA concentration may also rise in benign prostate diseases, and using a single total PSA (tPSA) measurement to guide the decision on further examinations leads to many unnecessary biopsies, over-detection, and overtreatment of indolent cancers that would not require treatment. Therefore, there is a need for markers that would better separate cancer from benign disorders and would also predict cancer aggressiveness. The aim of this study was to evaluate whether intact and nicked forms of free PSA (fPSA-I and fPSA-N) or human kallikrein-related peptidase 2 (hK2) could serve as new tools in estimating prostate cancer risk. First, the immunoassays for fPSA-I and for free and total hK2 were optimized so that they would be less prone to interference caused by factors present in some blood samples. The optimized assays were shown to work well and were used to measure the marker concentrations in the clinical sample panels. The marker levels were measured in preoperative blood samples of prostate cancer patients scheduled for radical prostatectomy, and the association of the markers with cancer stage and grade was studied. It was found that, among all tested markers and their combinations, especially the ratio of fPSA-N to tPSA and the ratio of free PSA (fPSA) to tPSA were associated with both cancer stage and grade. They might be useful in predicting cancer aggressiveness, but further follow-up studies are necessary to fully evaluate the significance of the markers in this clinical setting. The markers tPSA, fPSA, fPSA-I and hK2 were combined in a statistical model which had previously been shown to reduce unnecessary biopsies when applied to large screening cohorts of men with elevated tPSA. The discriminative accuracy of this model was compared to models based on established clinical predictors, with biopsy outcome as the reference. The kallikrein model and the calculated fPSA-N concentrations (fPSA minus fPSA-I) correlated with prostate volume, and the model predicted prostate cancer at biopsy as well as the clinical models did. Hence, the measurement of kallikreins in a blood sample could replace the volume measurement, which is time-consuming, requires instrumentation and skilled personnel, and is an uncomfortable procedure. Overall, the model could simplify the estimation of prostate cancer risk. Finally, as fPSA-N seems to be an interesting new marker, a direct immunoassay for measuring fPSA-N concentrations was developed. Its analytical performance was acceptable, but the rather complicated assay protocol needs to be improved before it can be used for measuring large sample panels.
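
Schematically, a kallikrein panel model of this kind combines the markers in a logistic regression for the biopsy outcome; the form below is illustrative only, and the published model's exact terms and coefficients are not reproduced here.

```latex
\Pr(\text{cancer at biopsy}) = \frac{1}{1 + e^{-\eta}},
\qquad
\eta = \beta_0 + \beta_1\,\text{tPSA} + \beta_2\,\text{fPSA} + \beta_3\,\text{fPSA-I} + \beta_4\,\text{hK2} + \beta_5\,\text{age},
```

with fPSA-N obtainable as fPSA − fPSA-I, as noted above.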

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: To analyze the prevalence of and factors associated with fragility fractures in Brazilian women aged 50 years and older. METHODS: This cross-sectional population survey, conducted between May 10 and October 31, 2011, included 622 women aged >50 years living in a city in southeastern Brazil. A questionnaire was administered to each woman by a trained interviewer. The associations between the occurrence of a fragility fracture after age 50 years and sociodemographic data, health-related habits and problems, self-perception of health and evaluation of functional capacity were determined by the χ² test and Poisson regression using the backward selection criteria. RESULTS: The mean age of the 622 women was 64.1 years. The prevalence of fragility fractures was 10.8%, with 1.8% reporting hip fracture. In the final statistical model, a longer time since menopause (PR 1.03; 95% CI 1.01-1.05; p<0.01) and osteoporosis (PR 1.97; 95% CI 1.27-3.08; p<0.01) were associated with a higher prevalence of fractures. CONCLUSIONS: These findings may provide a better understanding of the risk factors associated with fragility fractures in Brazilian women and emphasize the importance of performing bone densitometry.
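
For reference, the prevalence ratios (PR) quoted above come from exponentiating the coefficients of the Poisson regression; with covariates x_j,

```latex
\log \mathbb{E}[Y \mid x] = \beta_0 + \sum_j \beta_j x_j,
\qquad
\mathrm{PR}_j = e^{\beta_j},
```

so, for example, PR 1.97 for osteoporosis corresponds to a 97% higher prevalence of fragility fractures, other covariates held fixed.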

Relevance:

80.00%

Publisher:

Abstract:

Today's networked systems are becoming increasingly complex and diverse. Current simulation and runtime verification techniques do not support developing such systems efficiently; moreover, the reliability of the simulated/verified systems is not thoroughly ensured. To address these challenges, the use of formal techniques to reason about network system development is growing, while at the same time the mathematical background necessary for using formal techniques is a barrier that keeps network designers from employing them efficiently. Thus, these techniques are not widely used for developing networked systems. The objective of this thesis is to propose formal approaches for the development of reliable networked systems, taking efficiency into account. With respect to reliability, we propose the architectural development of correct-by-construction networked system models. With respect to efficiency, we propose reusable network architectures as well as reusable network development. At the core of our development methodology, we employ abstraction and refinement techniques for the development and analysis of networked systems. We evaluate our proposal by applying the proposed architectures to a pervasive class of dynamic networks, i.e., wireless sensor network architectures, as well as to a pervasive class of static networks, i.e., network-on-chip architectures. The ultimate goal of our research is to put forward the idea of building libraries of pre-proved rules for the efficient modelling, development, and analysis of networked systems. We take into account both qualitative and quantitative analysis of networks via varied formal tool support, using a theorem prover (the Rodin platform) and a statistical model checker (SMC-Uppaal).

Relevance:

80.00%

Publisher:

Abstract:

Identifying low-dimensional structures and the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered as the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential-equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels, which allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate; examples include the identification of faults from seismic data and the identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into the Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
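
For context, a d-dimensional ridge point of a density p is commonly defined through the gradient ∇p and the Hessian of p (a standard definition in the ridge-estimation literature; the thesis may use an equivalent formulation). With the Hessian eigenvalues ordered λ₁ ≥ … ≥ λₙ and V(x) collecting the eigenvectors of the n − d smallest eigenvalues, x lies on a ridge when

```latex
V(x)^{\top}\,\nabla p(x) = 0
\qquad\text{and}\qquad
\lambda_{d+1}(x) < 0,
```

i.e. the gradient has no component in the directions of strongest negative curvature; ordinary maxima are recovered as the case d = 0.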

Relevance:

80.00%

Publisher:

Abstract:

The determination of the sterilization value for low-acid foods in retorts includes a critical evaluation of the factory's facilities and utilities, validation of the heat processing equipment (by heat distribution assays), and finally heat penetration assays with the product. The intensity of the heat process applied to the food can be expressed by the Fo value (sterilization value, in minutes, at a reference temperature of 121.1 °C and a thermal index, z, of 10 °C, for Clostridium botulinum spores). For safety reasons, the lowest Fo value obtained in the heat penetration assays is frequently adopted as indicative of the minimum process intensity applied. This lowest Fo value should always be higher than the minimum Fo recommended for the food in question. However, the use of the Fo value for the coldest point can fail to statistically explain all the practical occurrences in food heat treatment processes. Thus, as a result of intense experimental work, we aimed to develop a new approach to determining the lowest Fo value, which we have named the critical Fo. The critical Fo is based on a statistical model for the interpretation of the results of heat penetration assays in packages, and it depends not only on the Fo values found at the coldest point of the package and the coldest point of the equipment, but also on the size of the batch of packages processed in the retort, the total processing time in the retort, and the time between CIP cycles of the retort. In the present study, we sought to explore the results of physical measurements used in the validation of food heat processes. Three calculation examples were prepared to illustrate the methodology developed and to introduce the concept of the critical Fo for the processing of canned food.
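
For reference, the Fo value described above is the time integral of the lethal rate at the cold spot, referenced to 121.1 °C with z = 10 °C; a minimal sketch of that computation from a hypothetical time-temperature record (function name and readings are illustrative only):

```python
import numpy as np

def f0_value(times_min, temps_c, t_ref=121.1, z=10.0):
    """Sterilization value Fo (min): integral of the lethal rate 10**((T - Tref)/z) over time."""
    lethal_rate = 10.0 ** ((np.asarray(temps_c, dtype=float) - t_ref) / z)
    return np.trapz(lethal_rate, np.asarray(times_min, dtype=float))

# Hypothetical cold-spot temperature history of one retort run (readings every 5 min).
times = np.arange(0, 65, 5)   # 0, 5, ..., 60 minutes
temps = [25, 60, 90, 110, 118, 120, 121, 121, 121, 115, 90, 60, 40]
print(f"Fo = {f0_value(times, temps):.1f} min")
```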

Relevance:

80.00%

Publisher:

Abstract:

Latent variable models in finance originate both from asset pricing theory and from time series analysis. These two strands of literature appeal to two different concepts of latent structure, both of which are useful for reducing the dimension of a statistical model specified for a multivariate time series of asset prices. In CAPM or APT beta pricing models, the dimension reduction is cross-sectional in nature, while in time-series state-space models, dimension is reduced longitudinally by assuming conditional independence between consecutive returns given a small number of state variables. In this paper, we use the concept of the Stochastic Discount Factor (SDF), or pricing kernel, as a unifying principle to integrate these two concepts of latent variables. Beta pricing relations amount to characterizing the factors as a basis of a vector space for the SDF. The coefficients of the SDF with respect to the factors are specified as deterministic functions of some state variables which summarize their dynamics. In beta pricing models, it is often said that only factorial risk is compensated, since the remaining idiosyncratic risk is diversifiable. Implicitly, this argument can be interpreted as a conditional cross-sectional factor structure, that is, conditional independence between contemporaneous returns of a large number of assets given a small number of factors, as in standard factor analysis. We provide this unifying analysis in the context of conditional equilibrium beta pricing as well as asset pricing with stochastic volatility, stochastic interest rates and other state variables. We address the general issue of econometric specifications of dynamic asset pricing models, which cover the modern literature on conditionally heteroskedastic factor models as well as equilibrium-based asset pricing models with an intertemporal specification of preferences and market fundamentals. We interpret various instantaneous causality relationships between state variables and market fundamentals as leverage effects and discuss their central role with respect to the validity of standard CAPM-like stock pricing and preference-free option pricing.
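
In the usual notation (not necessarily the paper's exact specification), the unifying SDF restriction and the factor representation described above read, for any gross return R_{t+1}, information set I_t, factors F_{k,t+1} and state variables s_t:

```latex
\mathbb{E}\!\left[\, m_{t+1}\, R_{t+1} \mid I_t \right] = 1,
\qquad
m_{t+1} = \sum_{k=1}^{K} \phi_k(s_t)\, F_{k,t+1},
```

so the factors form a basis for the SDF and the coefficients φ_k are deterministic functions of the state variables, as stated in the abstract.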